Can GPT-3 be honest when it speaks nonsense?

Stephen Downes


To 'be honest' is to have the possibility of telling a falsehood, but to choose not to do so. Honesty is something that can be attributed to devices; we would ask, for example, whether a scale gave an honest weight. But normally we think it has to do with intention: the agent has to know that what they're saying isn't true. So if an AI simply can't distinguish between truth and falsehood (as appears to be the case when we look at some of the nonsense it produces), then there's no way to say it can be honest. Right? Well, not so fast. As this article points out, while an AI is always ready with a response to a question, it can also calculate a degree of uncertainty about that answer. "Having language models express their uncertainty is a crucial aspect of honesty: there will always be things that models don't know for sure, and so uncertainty is necessary for conveying the model's knowledge faithfully."
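
As a rough illustration of what 'calculating uncertainty' can mean in practice, the sketch below scores candidate answers by the average log-probability a language model assigns to them, one common proxy for model confidence, and not necessarily the method the article itself describes. It uses GPT-2 via the Hugging Face transformers library as a freely available stand-in (GPT-3's API exposes similar token log-probabilities); the prompt and candidate answers are purely illustrative.

# One way to quantify a language model's uncertainty: score each
# candidate answer by the probability the model assigns to it as a
# continuation of the prompt. GPT-2 stands in for GPT-3 here.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def answer_logprob(prompt: str, answer: str) -> float:
    """Average log-probability of `answer` as a continuation of
    `prompt` (higher = more confident). Assumes the tokenization of
    prompt + answer splits cleanly at the prompt boundary, which
    holds for answers that begin with a space under GPT-2's BPE."""
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    full_ids = tokenizer(prompt + answer, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # The logits at position i predict the token at position i + 1,
    # so shift by one to line predictions up with answer tokens.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    positions = range(prompt_len - 1, full_ids.shape[1] - 1)
    tokens = full_ids[0, prompt_len:]
    scores = [log_probs[pos, tok].item() for pos, tok in zip(positions, tokens)]
    return sum(scores) / len(scores)

prompt = "The capital of France is"
for answer in [" Paris", " Lyon"]:
    print(answer.strip(), answer_logprob(prompt, answer))

A well-calibrated model should assign markedly higher probability to ' Paris' than to ' Lyon'; expressing that gap, rather than simply emitting the top answer, is the kind of honesty the article is after.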



