Could a Large Language Model be Conscious?

Stephen Downes


David Chalmers asks: what would count as evidence that large language models are, or could be, conscious? Consciousness here doesn't mean sentience or awareness of one's own existence, just that there is some sense in which we can say what it is like to be an AI (echoing Nagel's question, "What is it like to be a bat?"). There is no operational definition of consciousness; that is, there are no benchmarks for measuring it in a machine. We're not going to believe a model is conscious just because it says it is. At the same time, it's not obvious that it lacks anything it needs to be conscious. Do we say it has to have something like a 'world model' over and above mere statistical feature recognition? Maybe, but future AI systems are likely to have that capacity. Ultimately, says Chalmers, the problem is two-fold: we don't understand consciousness, and we don't understand what's going on inside an AI.



Stephen Downes, Casselman, Canada
stephen@downes.ca

Copyright 2024
Last Updated: Apr 25, 2024 03:07 a.m.

Creative Commons License.
