Stephen Downes

Knowledge, Learning, Community

I did enjoy Michael Feldstein's reflections on the education sector's general failure to understand AI (and I encourage him to post more open access blog posts). "AIs have weird failure modes that we don't understand yet," he writes. "That's likely because the industry has not been rigorously studying them yet. We need to recognize the reality of where we are so we can minimize risk of disasters... AI labs are heavily populated by two kinds of experts: Mathematicians and engineers. Neither discipline is trained on falsifiable theory as the standard for a good explanation. Mathematicians trust proofs. Engineers trust optimizations." Feldstein paints a picture here that (to elide the details) equates 'science' with 'theory' and 'explanation'. My question back is: what if it's education (the academic research discipline) that has the definition of science wrong? What if we don't get neat theories and predictions? What if 'understanding' doesn't mean 'tell a causal story'? Related and important: prediction and causation in machine learning and neuroscience. Also: AI that explains its discoveries. And: what metaphor should drive AI research? The field is wide open here.

