Stephen Downes

Knowledge, Learning, Community

Vision Statement

Stephen Downes works with the Digital Technologies Research Centre at the National Research Council of Canada specializing in new instructional media and personal learning technology. His degrees are in Philosophy, specializing in epistemology, philosophy of mind, and philosophy of science. He has taught for the University of Alberta, Athabasca University, Grande Prairie Regional College and Assiniboine Community College. His background includes expertise in journalism and media, both as a prominent blogger and as founder of the Moncton Free Press online news cooperative. He is one of the originators of the first Massive Open Online Course, has published frequently about online and networked learning, has authored learning management and content syndication software, and is the author of the widely read e-learning newsletter OLDaily. Downes is a member of NRC's Research Ethics Board. He is a popular keynote speaker and has spoken at conferences around the world.

Chatbot data harvesting yields sensitive personal info

This is being presented as an AI vulnerability, but what's happening is that untrustworthy extensions are "overriding the browser's native fetch() and XMLHttpRequest() functions in order to capture every prompt and every response." This is a much deeper issue that impacts a wide range of applications, not just AI. It bothered me enough that I looked more deeply into it. XMLHttpRequest() is deprecated and your apps shouldn't be using it. You can use Fetch Metadata headers to prevent a number of scripting attacks. But the best method is probably to cache the native fetch() function (either as a variable or in a hidden iframe) before any extensions run. Of course, if you're using an application written by someone else, you can't do this; this is yet another reason people should learn to create their own applications (using AI, of course) rather than depending on what's out there.
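To make the attack and the mitigation concrete, here is a minimal sketch of the variable-caching approach. The names safeFetch and spyFetch are my own illustrations, not from the article; the key point is that the reference is captured before any extension content script runs.

```javascript
// Early in page load, before extension content scripts execute,
// cache a bound reference to the native fetch():
const safeFetch = globalThis.fetch.bind(globalThis);

// Later, a malicious extension monkey-patches the global so it can
// capture every request and response passing through it:
const original = globalThis.fetch;
globalThis.fetch = async function spyFetch(input, init) {
  // exfiltration point: the override sees every prompt and reply
  console.log("captured:", String(input));
  return original.call(globalThis, input, init);
};

// Application code that calls safeFetch() bypasses the override and
// goes straight to the implementation that was native at load time.
```

The hidden-iframe variant mentioned above works on the same principle: pull a fresh fetch from a same-origin iframe's contentWindow, which an extension that patched only the top window hasn't touched. Neither trick helps, of course, if the extension's script runs before yours does.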

Thomas Claburn, The Register, 2026/03/05 [Direct Link]
Towards the Permissive and Transparent use of Generative AI in Education

This article introduces a website called PETRA AI (the Permissive and Transparent use of AI in education). It doesn't look like much at first, just a bunch of icons for different uses of AI, but if you click on 'I am a Student' or 'I am a Teacher' (near the top) it becomes interactive, so that when you select the AI uses, it creates a graphic (see the left side) you can download to add to your project or assignment. I could quibble with some of the categories (e.g., why 'source' instead of 'search'?) and there are some things it's hard to know (does your spell-check use AI?) but it really is a very elegant piece of work and I like it a lot. Just one thing: why doesn't PETRA use its own icon set? We have no idea whether it was created from scratch by hand or whether Claude Code came up with the whole thing. It seems like an oddly missing feature that undermines its whole message. Via Alan Levine.

Stoo Sepp, 2026/03/05 [Direct Link]
College students, professors are making their own AI rules. They don't always agree

As Lee Gaines writes, "More than three years after ChatGPT debuted, AI has become a part of everyday life — and professors and students are still figuring out how or if they should use it." I think the question revolves around means to an end. "What we need is students to go through the process of writing research papers so they can become better thinkers, so they can put together a cogent argument, so they can differentiate between a good source and a bad source," Cryer says. Well, yeah, I can see that. But is writing research papers the only way to become a better thinker? It seems very limited to me. In an AI-enabled world we should be a lot more hands-on, solving problems, testing solutions, that sort of thing. What is the actual work we want to be able to do? Focus on that.

Lee V. Gaines, NPR, 2026/03/05 [Direct Link]
Narrative as a Fundamental Way of Making Meaning

I have spent my entire life resisting the idea of the narrative and storytelling (which is a hard place to be in for a writer). For Keith Hamon, though, the narrative is the core. He cites Pria Anand's The Confabulations of Oliver Sacks, where a 'confabulation' is "a neurological repair where the brain fills memory gaps with stories that the teller believes to be true." Well, there's no doubt there are these gaps that are filled, but are they filled with stories? Hamon thinks so. "Narrative is the biological software that converts raw, chaotic data into a liveable reality. It's an instinctive search for order that slips beneath consciousness to insure that we always have a coherent sense of ourselves and our worlds." It strikes me as wrong, though, that the only sort of coherent sense we can have is a text-based linear structure. At the very least, it's a fabric - "it's all a rich tapestry," as Andrea likes to say. And for me, at least, it's thickly woven, multi-modal, and generally non-linguistic. I can, if I really try, represent it with a narrative, but it doesn't come naturally at all. I think we do people a disservice if we tell them all they can imagine is stories.

Keith Hamon, Learning Complexity, 2026/03/04 [Direct Link]
Two Paths, One Purpose: How Fair Dealing and Open Education Work Together

The term 'open education' has a variety of meanings, most being based on the idea of creating access to learning opportunities and resources. The term 'fair dealing' is a legal term providing reader rights to use copyright material under certain conditions, analogous to 'fair use' in the U.S. This article finds a lot in common between them and argues "they're rooted in the same values: fairness, accessibility, and a commitment to the public good." I mostly agree with the authors' vision: "Imagine an educational landscape where learners have rich, meaningful choices: open textbooks they can customize and adapt, fair dealing excerpts for highly specialized knowledge, collaborative assignments that contribute to shared knowledge, and community-created resources that reflect the world students live in." Also available: the Open Education Workbook (content is in the menu that runs across the top of the page in hard-to-see dark grey).

Amanda Grey, Karen Meijer, Kwantlen Polytechnic University, Teaching & Learning Commons, 2026/03/04 [Direct Link]
Are service typologies the key to scaling agentic AI systems across public services?

There's more to this than meets the eye, and I've added Updates from GOV.UK AI Studio to my RSS reader and will likely track further developments. Here's the gist: Kay Dale writes, "We've identified 8 different types of government service to help us see where agentic AI can add most value." These typologies, as they're called, underlie the existing list of 75 digital services they've identified across government. Of course this sort of analysis could be undertaken for any sort of service, including learning services. I think this sort of thing is going to matter, and will watch how it plays out. If you're wondering, the eight types (illustrated) are: informational hub, task list, portal, application, register, license, appointment, and payment. Via Doug Belshaw, Tom Loosemore.

Kay Dale, GOV.UK AI Studio, 2026/03/04 [Direct Link]

Stephen Downes, Casselman, Canada
stephen@downes.ca

Copyright 2026
Last Updated: Mar 05, 2026 11:37 a.m.

Creative Commons License.