Stephen Downes

Knowledge, Learning, Community

This paper argues that "hallucinations in pretrained language models will arise through natural statistical pressures... due to the way most evaluations are graded -- language models are optimized to be good test-takers, and guessing when uncertain improves test performance." Or as I argued today on Mastodon, "Young Stephen and young AI reached the same conclusion when it comes to answering questions... always write something in response to test questions. Leave it blank, get a zero. Write something (and even better, take a guess on multiple choice) and you'll get at least partial marks." Probably should have posted this link earlier, but better late than never.



Stephen Downes, Casselman, Canada
stephen@downes.ca

Copyright 2025
Last Updated: Sept 22, 2025 11:45 a.m.

Creative Commons License.