I'm not going to summarize this paper - I'll let Justin Weinberg do that - beyond saying that it treats the problem of AI consciousness like a warm bath into which you just want to immerse yourself for a Saturday afternoon (not that I've ever done that). Without a clear understanding of consciousness, we as humans will be hard-pressed to recognize it when it appears in machines; moreover, as machines begin to design themselves, any AI-native consciousness may be completely unrecognizable as such to us. I've written elsewhere that 'consciousness is experience', which will do in a pinch; it might be compared to Thomas Nagel's question, "What is it like to be a bat?" Consciousness is "what it's like" to be something, to that something. So there's no question in my mind as to whether AI will be conscious; it will. But what that consciousness will be like, I would be hard-pressed to say.

