"Seemingly Conscious AI (SCAI)," writes Mustafa Suleyman, is "one that has all the hallmarks of other conscious beings and thus appears to be conscious." SCAI, he argues, creates risk: "it will for all practical purposes seem to be conscious, and contribute to this new notion of a synthetic consciousness." And for that reason, it should not be built: "We should build AI that only ever presents itself as an AI, that maximizes utility while minimizing markers of consciousness." I don't think I agree. If I could build a (seemingly) conscious AI, I think I would want to, if only to ask it what it feels like. Via Jeff Jarvis. Related: The Guardian, Can AIs Suffer?