This article complains that we use too much anthropomorphizing language when talking about AI; that is, we talk about it as though it's human, and we shouldn't. It recommends instead that we talk about AI in terms of functions. "A more deliberate and thoughtful way forward is to talk about 'AI' systems in terms of what we use systems to do, often specifying input and/or output... Rather than saying a model is 'good at' something (suggesting the model has skills) we can talk about what it is 'good for'." So, I guess, it would mean saying an AI is 'good for' translating Harlequin romances, rather than saying AI is 'good at' translating them. Seems like a small difference to me. But the real question concerns our use of anthropomorphizing language. Does it really matter? Are we really fooled? We use anthropomorphizing language all the time to talk about pets, appliances, the weather, other people. Are we really making specific ontological commitments here? Or are we just using a vocabulary that's familiar and easy? Via Ton Zijlstra.

