Stephen Downes

Knowledge, Learning, Community

As has been described at length elsewhere, modern large language models (LLMs) do not reason from values and principles; rather, they are statistical engines that simply predict what words or phrases should come next. So, the argument goes, they don't 'know' or 'understand' in the way that we do. Alan Levine, with characteristic sharpness, cuts straight to what the difference means: "Here is a question: Would you prefer to do the hard work to love and be loved or to just get it easily to have something just looks like love?" Of course, we would all prefer real love. But here are the counter-questions. What if we can't tell the difference? Or even more, what if the exact same processes create what we call 'real love' and what 'just looks like love'? We like to think there's something special about the way we reason, find truth, care, seek value, and love. But what... if there isn't?

[Direct link]

[Image: Cliff on Jeopardy]

Stephen Downes Stephen Downes, Casselman, Canada
stephen@downes.ca

Creative Commons License.

Copyright 2023
Last Updated: Jan 24, 2023 3:37 p.m.