Stephen Downes

Knowledge, Learning, Community

I don't disagree with the main point here, though I do have an issue with defining 'trust' in any useful way. But I digress. Here's what Nik Bear Brown is arguing: what matters in AI-in-education deployment isn't what the AI is capable of doing; it's whether we can trust it. What we need, he writes, "is calibrated trust — a state where a user's confidence in a system accurately matches the system's actual reliability." We obviously don't want students to trust it too much, but they can also trust it too little; then people "exhibit what researchers call 'algorithmic aversion.' They disengage." And there are other problems around trust: the 'honeypot effect', where you learn to depend on a system, which then changes; the 'adversarial trap', where a system you trusted turns out to be (say) spying on you; and the 'bias problem', where a system you trust is subtly leading you astray. These are all, says Brown, pedagogical issues. Getting them wrong has consequences for learning.



Stephen Downes, Casselman, Canada
stephen@downes.ca

Copyright 2026
Last Updated: Mar 17, 2026 3:48 p.m.

Creative Commons License.