Chad GPT on Research-y Volcanology

Stephen Downes

As has been described at length elsewhere, modern large language models (LLMs) do not reason from values and principles; rather, they are statistical engines that simply predict what words or phrases should come next. So, the argument goes, they don't 'know' or 'understand' in the way that we do. Alan Levine, with characteristic sharpness, cuts straight to what the difference means: "Here is a question: Would you prefer to do the hard work to love and be loved or to just get it easily to have something just looks like love?" Of course, we would all prefer real love. But here are the counter-questions. What if we can't tell the difference? Or, even more, what if the exact same processes create both what we call 'real love' and what 'just looks like love'? We like to think there's something special about the way we reason, find truth, care, seek value, and love. But what... if there isn't?



Stephen Downes, Casselman, Canada
stephen@downes.ca

Copyright 2024

Creative Commons License.
