Stephen Downes

Knowledge, Learning, Community

Vision Statement

Stephen Downes works with the Digital Technologies Research Centre at the National Research Council of Canada specializing in new instructional media and personal learning technology. His degrees are in Philosophy, specializing in epistemology, philosophy of mind, and philosophy of science. He has taught for the University of Alberta, Athabasca University, Grande Prairie Regional College and Assiniboine Community College. His background includes expertise in journalism and media, both as a prominent blogger and as founder of the Moncton Free Press online news cooperative. He is one of the originators of the first Massive Open Online Course, has published frequently about online and networked learning, has authored learning management and content syndication software, and is the author of the widely read e-learning newsletter OLDaily. Downes is a member of NRC's Research Ethics Board. He is a popular keynote speaker and has spoken at conferences around the world.

The key problem with the "brain in a vat" thought experiment

This short article uses a philosophical classic to address what might be called 'the embodiment problem'. The classic is, of course, the question, 'how do we know we are not brains in vats?' All our sensations, all our physical experiences, could be wired up as inputs into the brain. Could we tell the difference? This article argues that we could, because it would be much too complex to simulate our experiences. "Thompson and Cosmelli conclude (18 page PDF) that to really envat a brain, you must embody it. Your vat would necessarily end up being a substitute body." Well - sure. Even the simplest version of 'brain in a vat' postulates some external mechanism standing in for the human body. That's the whole point. But the question is more subtle: is it the case that there can be one and only one possible cause for a given set of conscious experiences? If the answer is 'yes', then our options for both ourselves and for AI are fundamentally limited. But on what grounds would you argue 'yes'? This article doesn't really offer those grounds, beyond saying it's complex. But complexity doesn't prove necessity.

Adam Frank, Big Think, 2026/03/17 [Direct Link]
Robots Didn't Kill the Internet

Carlo Iacono argues convincingly that today's 'dead internet' isn't the result of AI, it's the result of incentives. Platforms keep asking one question of every piece of content: will it hold attention and produce a useful signal? "That question, applied at scale and compounded over years, is what killed the internet. Not robots. Incentives." The internet has become a giant casino, he argues. Websites are engineered to keep people clicking, and they collect their cut in the form of advertising revenue. "The internet did not start rotting because robots learned to write. It started rotting when platforms became casinos. The robots are just very efficient casino staff."

Carlo Iacono, Hybrid Horizons, 2026/03/17 [Direct Link]
Who Owns AI-Generated Content?

"The legal trajectory of AI-generated content presents a pivotal opportunity for open education, directly addressing the twin problems of legal uncertainty and eroded trust outlined at the outset," writes Rory McGreal. First, AI-generated content is automatically open content. "The clear consensus that purely AI-generated works are not copyrightable and belong to the public domain provides a stable legal foundation. Educators can use such content without fear of copyright infringement, licensing fees, or complex attribution chains. This demystifies a major part of the 'minefield,' transforming the 'what if' from a source of dread into a clear guideline: autonomous GenAI can be used to create OER lessons." That doesn't mean 'anything goes'. "The academic community must uphold principles of authorship, accountability, and transparency. Using public domain AI content does not absolve educators of the need for due diligence, citation of specific sources, or ethical disclosure of AI assistance in human-AI collaborations."

Rory McGreal, unitwin-unoe, 2026/03/16 [Direct Link]
AI should be the Guide, not the Ghostwriter

The point of Donald Clark's article is to offer what I guess we can call 'the standard argument': "Generating words, knowledge and solutions is better than simply reading, highlighting text or getting AI to do it for you. Acts of personal generation provide the context for greater understanding and subsequent recall.... This is a short-term pain, long-term gain idea, where desirable difficulties are learning challenges that make the learner study harder in the short term to improve long-term retention and understanding." He then offers an eight-step approach to writing essays along these lines. It's funny, but I would do the eight steps in reverse order - write a version, test my conclusion, identify what's missing, etc. The idea that you have to reason things out and reach a conclusion before you start writing is, in my mind, just wrong.

Donald Clark, Donald Clark Plan B, 2026/03/16 [Direct Link]
Openness, transparency and reach: three reasons why public institutions should embrace the Fediverse

This article is focused mostly on European institutions, though its conclusions could be more widely applied. Ultimately, I think, the recommendation is for institutions to at least include federated social media (such as Mastodon) among their official accounts. The three reasons are openness, transparency, and reach. People need "a public, open communications platform that is accessible to all citizens, without the need for an account; an independent network not subject to censorship due to opaque algorithms or political bias."

Elena Rossini, 2026/03/16 [Direct Link]
Random Audits as a Scalable Deterrent to Cheating: Using Game Theory to Design Fair and Effective Academic Integrity Systems for the AI Era

On the one hand, I think the proposal is sound: use random audits to deter cheating rather than mass surveillance. On the other hand, I think David Wiley's argument (16 page PDF) misses the point: if using an AI counts as 'cheating', then whatever you are assessing for is probably the wrong thing to assess.

David Wiley, SSRN, 2026/03/16 [Direct Link]

Stephen Downes, Casselman, Canada
stephen@downes.ca

Copyright 2026
Last Updated: Mar 17, 2026 1:37 p.m.

Creative Commons License.