Stephen Downes

Knowledge, Learning, Community


Vision Statement

Stephen Downes works with the Digital Technologies Research Centre at the National Research Council of Canada, specializing in new instructional media and personal learning technology. His degrees are in Philosophy, specializing in epistemology, philosophy of mind, and philosophy of science. He has taught for the University of Alberta, Athabasca University, Grande Prairie Regional College and Assiniboine Community College. His background includes expertise in journalism and media, both as a prominent blogger and as founder of the Moncton Free Press online news cooperative. He is one of the originators of the first Massive Open Online Course, has published frequently about online and networked learning, has authored learning management and content syndication software, and is the author of the widely read e-learning newsletter OLDaily. Downes is a member of NRC's Research Ethics Board. He is a popular keynote speaker and has spoken at conferences around the world.

Stephen Downes, stephen@downes.ca, Casselman, Canada

Constructing AI in education

When Ben Williamson talks about 'constructing AI' here, he is not talking about actually building AI, but rather about how we 'socially construct' it, in the sense outlined by Eynon and Young: by describing the ways people talk about it and think about it. These constructions begin with framing, that is, the outlining of key terms and values informing the concept. The bulk of the article describes a number of different frames from various groups involved in some way in the realm of AI and education, including (for example) government agencies, education leaders, educational advocates, AIEd evangelizers, and abolitionists. As I've said before, educational theorists love a taxonomy, and as far as taxonomies go this is a pretty good one, though it seems to me to over-generalize what we mean by 'AI'. Williamson positions himself as 're-framing' AI in education as a 'public problem' or 'matter of concern', rather than an 'entirely positive' phenomenon, but (in my view) that just sets up a false dilemma: some AI is a matter of concern, some AI is entirely positive, and most types of AI fall somewhere in between.

Ben Williamson, Code Acts in Education, 2026/01/19 [Direct Link]
Scapegoating and Careerism in Universities: Signals of System Failure?

The focus is on Australian public universities, but I think this discussion could be generalized. As the title suggests, Colin Beer argues that scapegoating ("placing responsibility for systemic problems on individuals or small groups") and careerism ("advancement (titles, metrics, visibility) becomes more important than educational or scholarly contributions") have become mechanisms for protecting the status quo. "These dynamics normalise the gap between rhetoric (excellence, student‑centred, public benefit, etc) and practice (cost‑cutting, restructuring, market share, brand management, university rankings, etc)." I don't know whether the mechanisms he suggests address these issues; universities need to do more than reform governance and accountability, I think.

Colin Beer, Col's Weblog, 2026/01/19 [Direct Link]
What we miss about teaching critical and creative thinking

Do we really present creative thinking and critical thinking as distinct capabilities? I'm not sure, but this post makes a good argument for integrating them. "Creative thinking should be understood as including the capacity to generate possibilities that are disciplined by reasons. Critical thinking should be understood as the capacity to apply standards in ways that shape inquiry, not merely audit its outcomes." 

Peter Ellerton, The Education Contrarian, 2026/01/19 [Direct Link]
View of A Framework for Ethical Online Course Development with Universal Design for Learning

This article (PDF) attempts to blend the ethics of Universal Design for Learning (UDL), "based on three principles: multiple means of engagement, representation, and action and expression," with academic integrity, based on "six values of academic integrity: courage, fairness, honesty, respect, responsibility, and trust," and Indigenous academic integrity, based on "relationality, reciprocity, and respect." The authors argue, "our layered approach demonstrates how these parallel ways of understanding integrity can strengthen online course design by providing multiple entry points for students to connect with ethical academic practices." It's hard to avoid this being an alphabet soup of principles; the authors propose a three-layered approach, the first two layers of which incorporate UDL design principles, while the last adds academic integrity to the mix.

Lorelei Anselmo, Sarah Elaine Eaton, Canadian Journal of Learning and Technology, 2026/01/19 [Direct Link]
Don't fall into the anti-AI hype

The simple argument here is that "you can't control (AI) by refusing what is happening right now. Skipping AI is not going to help you or your career." I know not everybody writes software, but the experience of software developers is an important indicator generally. It's this: "It is simply impossible not to see the reality of what is happening. Writing code is no longer needed for the most part. It is now a lot more interesting to understand what to do, and how to do it (and, about this second part, LLMs are great partners, too)." And the 'AI business model' people are concerned about is irrelevant. "It does not matter if AI companies will not be able to get their money back and the stock market will crash... It does not matter if this or the other CEO of some unicorn is telling you something that is off-putting, or absurd. Programming changed forever, anyway." I also agree that "this technology is far too important to be in the hands of a few companies."

antirez, 2026/01/19 [Direct Link]
Judge orders Anna’s Archive to delete scraped data; no one thinks it will comply

The story here is that a judge has ordered Anna's Archive to delete OCLC's WorldCat data, which it scraped from the website in 2023. WorldCat is "the world's largest library metadata collection." In a blog post, Anna's Archive explained that it needed the data in order to develop a comprehensive list of all the books in the world, so it can preserve them. Other sources were inadequate: "We were very surprised by how little overlap there was between ISBNdb and Open Library, both of which liberally include data from various sources, such as web scrapes and library records." OCLC noted the cost it bore as the scraping occurred: "Beginning in the fall of 2022, OCLC began experiencing cyberattacks on WorldCat.org and OCLC's servers that significantly affected the speed and operations of WorldCat.org." Having fought off scrapers on my own site, I can understand OCLC's frustration. But I have to ask, why is this data locked down in the first place? I know, I know, 'business models'. But it seems to me that if anything should be publicly available, it's library metadata.

Jon Brodkin, Ars Technica, 2026/01/19 [Direct Link]


Copyright 2026
Last Updated: Jan 19, 2026 3:37 p.m.

Creative Commons License.