Stephen Downes

Knowledge, Learning, Community

Vision Statement

Stephen Downes works with the Digital Technologies Research Centre at the National Research Council of Canada, specializing in new instructional media and personal learning technology. His degrees are in Philosophy, specializing in epistemology, philosophy of mind, and philosophy of science. He has taught for the University of Alberta, Athabasca University, Grande Prairie Regional College and Assiniboine Community College. His background includes expertise in journalism and media, both as a prominent blogger and as founder of the Moncton Free Press online news cooperative. He is one of the originators of the first Massive Open Online Course, has published frequently about online and networked learning, has authored learning management and content syndication software, and is the author of the widely read e-learning newsletter OLDaily. Downes is a member of NRC's Research Ethics Board. He is a popular keynote speaker and has spoken at conferences around the world.

Stephen Downes, stephen@downes.ca, Casselman Canada

The Taxonomy of Strangers

"We keep asking," says Carlo Iacono, "does the machine think like us?" But, he says, the question is a trap. "It assumes that intelligence has a natural shape, and that shape happens to be ours. It assumes that anything which diverges from the human pattern is therefore not thinking at all, merely simulating, merely pattern matching, merely autocomplete with better marketing." There is something that is thinking that isn't any of this. "Whatever the machine does, it cannot be what we do, because we are special and it is not." But honestly, "This is not science. This is theology wearing a lab coat." And I agree. It's similar to what Ethan Mollick says here: there are different shapes of thinking. And as Iacono says, "The universe is under no obligation to make intelligence bipedal, social, emotional, or narratively satisfying. It only has to work. And work, it turns out, can take shapes we never imagined."

Carlo Iacono, Hybrid Horizons, 2025/12/24 [Direct Link]
If AI Can't Stop a Student From Cheating, How Can It Ever Be Safe?

My first thought was that this is kind of a dumb question, but there's better logic behind it than there may seem: "If AI companies are honest and say that they cannot build guardrails into their models that stop students from taking quizzes, completing assignments, or writing essays, then why would we believe they are capable of making AI safe or responsible?" The implication (since AI companies all say they can make their products safe) is that they are not being honest when they say they can't stop students from using AI. That's why the second part of the article focuses on how instructors can clamp down on students. But as I've said before: it's disappointing to see academics resort to authoritarianism in the face of the challenge from AI.

Marc Watkins, Rhetorica, 2025/12/23 [Direct Link]
Backing up Spotify

This is as remarkable as it is illegal. Anna's Archive reports that "We backed up Spotify (metadata and music files). It's distributed in bulk torrents (~300TB). It's the world's first 'preservation archive' for music which is fully open (meaning it can easily be mirrored by anyone with enough disk space), with 86 million music files, representing around 99.6% of listens." It represents part of their effort to catalogue and preserve all music, and though Spotify doesn't quite have that, "it's a good start". It's also notable that over the last few years it has become impossible to say how much music is being produced because "the amount of procedurally and AI generated content makes it hard to find what is actually valuable." So this might be the best, last and only comprehensive collection of human-authored music. See also: Billboard.

Anna's Archive, 2025/12/23 [Direct Link]
Why AI-driven education must replace an outdated learning model

Educators should know that this argument is being repeated in tech and political circles worldwide: "The failures of STEM education are particularly glaring in its cost inefficiencies. Running STEM programs requires expensive lab facilities, high-cost materials, and specialized faculty, which ultimately drive up tuition and student debt. Meanwhile, the rise of AI and automation in the workplace has rendered many traditional STEM skills obsolete before students even graduate... it is clear that STEM education, as it currently exists, is unsustainable. Its rigid, costly, and outdated model does not serve the needs of a world driven by AI, automation, and rapid technological change." Of course, this has often been said in the past (because it's a valid criticism). The question is, will we see a real alternative rise in the future? (Answer: probably.)

Dr. John Johnston, eCampus News, 2025/12/23 [Direct Link]
Moral Codes: Designing Alternatives to AI (Review)

This is a review of Alan F. Blackwell's open access book Moral Codes: Designing Alternatives to AI (239 page PDF). I haven't read the book yet but it seems right up my alley. Christopher Newfield summarizes, "we must organize widespread social means to learn everyday programming that is rooted in 'MORAL CODES.' The first word is an acronym for More Open Representation for Accessible Learning... More Open Representations allow information to be exchanged, Access to Learning allows it to be acquired, and Control Over Digital Expression [the second word] allows it to be expressed." There is a good point here: the ethos of computer programming in its infancy was agency. We weren't passive subjects of a machine; we could control it. That remains true (to an extent) in the age of AI. But, as Newfield writes, "It's hard to imagine the spread of programming skills in a country like the United States, where fewer than half of adults read even one book a year. But by now it's pretty much do or die. So better do it."

Christopher Newfield, Critical AI, 2025/12/22 [Direct Link]
OpenAI’s New AI Foundations Course Promises 'Job-Ready' Skills and Credential

The AI Foundations course is being offered in pilot projects through "employers and public-sector partners," which means it's not generally available and hence not worth writing about. The ChatGPT Foundations for Teachers course is open access through Coursera, however, and along with some 14,000 other enrollees, I took a look at it today (its launch date). My quick assessment is that the course is pretty basic, but will offer a functional working knowledge of ChatGPT, which I suppose is pretty useful if you've had no exposure at all. I know, there's a ton of other (free and open) learning opportunities out there; I just happened to land on this one. See also OpenAI, Launching our first OpenAI Certifications courses.

Liz Ticong, TechRepublic, 2025/12/22 [Direct Link]

Stephen Downes, Casselman, Canada
stephen@downes.ca

Copyright 2025
Last Updated: Dec 23, 2025 08:37 a.m.

Creative Commons License.