OLDaily

Sorry about the blank email yesterday. Today's email contains yesterday's items as well as today's.
Welcome to Online Learning Daily, your best source for news and commentary about learning technology, new media, and related topics.
100% human-authored

Could a Large Language Model be Conscious?
David Chalmers, PhilPapers, 2023/02/02


David Chalmers asks, what would count as evidence that large language models are or could be conscious? That doesn't mean they're sentient or aware of their own existence, just that there is some sense in which we can say what it's like to be an AI (just as Nagel asks, "what is it like to be a bat?"). There isn't an operational definition of consciousness; that is, there are no benchmarks for measuring it in a machine. We're not going to believe a machine is conscious just because it says it is. At the same time, it's not obvious that it lacks anything it needs to be conscious. Do we say it has to have something like a 'world-model' over and above mere statistical feature recognition? Maybe, but future AI systems are likely to have that capacity. Ultimately, says Chalmers, the problem is two-fold: we don't understand consciousness, and we don't understand what's going on inside an AI.

Web: [Direct Link] [This Post]


The Nature of Believing
David Hunter, PhilPapers, 2023/02/02


I am somewhat in agreement with this paper (which feels a lot longer than it is) in which David Hunter argues for a conception of mind that "has a rational agent at its heart, one whose acts, thoughts, and feelings can depend on how things are or could be, including how she is and could be" but in which "neither knowing nor believing essentially involves representing those facts and possibilities." Rather, "they involve being so positioned that those facts and possibilities can explain what one does, thinks, and feels." Somewhat, because I'm not sure how possibilities, including especially counterfactuals, can be 'facts' about the external world. I'm also hesitant to ascribe 'being good', however it's defined, as a motivation for beliefs and actions. But I can understand how you could describe physical systems that can act rationally without internal representational states, which is what Hunter is up to here. Image: Frontiers.

Web: [Direct Link] [This Post]


Why We're Not 'Screwed' By AI
Maha Bali, Reflecting Allowed, 2023/02/02


Maha Bali argues that we need not worry about AI because there will still be plenty of work for people to do. For example, "teachers could focus on what they do best, what they cannot be replaced in." The argument rests pretty much entirely on the assertion that AI will not be able to do this or that: things like "caring for students and knowing them as people and supporting them emotionally and being role models for good humans and good citizens." I have two comments. First, a lot of people are really bad at these things. And second, there's no real reason to believe that AI won't actually be better than (most) humans at them. No, the proper response to AI isn't to put faith in our place in future industry. It's to begin ensuring now that the wealth created by all this automation is distributed evenly, so we don't have to worry about being thrown out of employment.

Web: [Direct Link] [This Post]


Educator considerations for ChatGPT
OpenAI, 2023/02/02


This is a page provided by OpenAI, the makers of ChatGPT, that educators should perhaps read before using it or, for that matter, before complaining that it doesn't do this or that. "ChatGPT has no external capabilities and cannot look things up in external sources. This means that it cannot access the internet, search engines, databases, or any other sources of information outside of the current chat. It cannot verify facts, provide references, or perform calculations or translations."

Web: [Direct Link] [This Post]


Put Down the Shiny Object: The Overwhelming State of Higher Education Technology
Lindsey Downs, WICHE Cooperative for Educational Technologies, 2023/02/02


I'm sympathetic to the main line of argument in this article, but there's a glaring disconnect. The suggestion is that students aren't as good at tech as people suppose, so institutions and (especially) professors should think twice about introducing some 'shiny new tech' as part of the course. And I get that. But the disconnect is this: when students are asked, they report struggling with the core technologies already in use, things like the LMS, or file management and editing with MS Word or Excel. And I wonder whether the problem isn't that students are struggling with shiny new tech so much as that they're having difficulties with crappy old tech. The online world students use has evolved past these tools. Maybe higher ed should too.

Web: [Direct Link] [This Post]


We publish six to eight short posts every weekday, linking to the best, most interesting, and most important pieces of content in the field. Read more about what we cover. We also list papers and articles by Stephen Downes and his presentations from around the world.

There are many ways to read OLDaily; pick whichever works best for you.

This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, click here.

Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.

Copyright 2023 Stephen Downes. Contact: stephen@downes.ca

This work is licensed under a Creative Commons License.