Downes.ca ~ Stephen's Web ~ The danger of anthropomorphic language in robotic AI systems


This article makes a good point, but in a way that's misleading and wrong. The good point is that using the same words to describe machine actions as we use to describe human actions (that is, 'anthropomorphic terms') may lead people to attribute to a machine a capacity it does not have. But it's wrong, in my view, to say that the machine does things in a fundamentally different way than humans do. This (from the article) is an irresponsible way of describing AI: "The actual implementation is a camera that detects red pixels that form a rough circle.... When deployed, the robot mistakes the picture of a hot air balloon on a shirt and tries to drive the gripper through the person in an attempt to pick it up." No, it's not that way at all. In modern AI, the way a machine may 'recognize' an apple is very similar to the way a human may 'recognize' an apple: both depend on features learned from experience, not on hand-coded rules like 'red pixels in a rough circle'. This matters, because we can learn a lot about people from AIs, and vice versa, and the use of the same terms makes that easier, even if we have to be careful about how we use them.
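To make the contrast concrete, here is a minimal sketch (my own illustration, not code from the article) of the two pictures of 'recognition'. The first function is the article's caricature: a fixed, hand-coded rule over red pixels. The second classifies using features learned from data, which is closer to how modern systems actually work. The function names, thresholds, and the choice of a pretrained torchvision ResNet are all assumptions made for illustration.

    import numpy as np

    def naive_apple_detector(image: np.ndarray) -> bool:
        """The article's caricature: a fixed rule over red pixels.
        (Area-only stand-in; the 'rough circle' check is omitted for brevity.)"""
        r, g, b = image[..., 0], image[..., 1], image[..., 2]
        red_mask = (r > 180) & (g < 100) & (b < 100)
        return red_mask.mean() > 0.05  # arbitrary threshold: enough red area

    def learned_apple_detector(image: np.ndarray) -> bool:
        """Modern AI: scores features learned from training, not hand-coded rules.
        Assumes an 8-bit RGB image in height x width x channel layout."""
        import torch
        from torchvision.models import resnet18, ResNet18_Weights
        weights = ResNet18_Weights.DEFAULT
        model = resnet18(weights=weights).eval()
        batch = weights.transforms()(torch.from_numpy(image).permute(2, 0, 1))
        with torch.no_grad():
            probs = model(batch.unsqueeze(0)).softmax(dim=1)
        # ImageNet class 948 is 'Granny Smith'; a real system would train a
        # task-specific head rather than reuse ImageNet labels.
        return probs[0, 948].item() > 0.5

The point of the contrast: the learned detector can still be fooled (say, by a hot air balloon on a shirt), but when it fails it fails for the same broad reason human perception can fail, because its learned features generalize imperfectly from experience, not because someone wired in 'red circle means apple'.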



Stephen Downes, Casselman, Canada
stephen@downes.ca


