Stephen Downes

Knowledge, Learning, Community

Vision Statement

Stephen Downes works with the Digital Technologies Research Centre at the National Research Council of Canada specializing in new instructional media and personal learning technology. His degrees are in Philosophy, specializing in epistemology, philosophy of mind, and philosophy of science. He has taught for the University of Alberta, Athabasca University, Grande Prairie Regional College and Assiniboine Community College. His background includes expertise in journalism and media, both as a prominent blogger and as founder of the Moncton Free Press online news cooperative. He is one of the originators of the first Massive Open Online Course, has published frequently about online and networked learning, has authored learning management and content syndication software, and is the author of the widely read e-learning newsletter OLDaily. Downes is a member of NRC's Research Ethics Board. He is a popular keynote speaker and has spoken at conferences around the world.

Stephen Downes, stephen@downes.ca, Casselman, Canada

Google is killing software support for early Nest Thermostats

I have a persistent dream where my phone (a Google Pixel) keeps falling apart. My actual phone is as solid as ever. But maybe my dream is telling me what I know about Google, which is that you can't trust it to support its products. Case in point: the company is turning its back on Nest thermostats. Google bought the company 11 years ago for $3.2 billion. Nest brought an early form of AI to thermostats: they would learn your heating preferences. Only the most recent version supports Matter, the Internet of Things (IoT) specification. Google "is also pulling Nest thermostats out of Europe entirely, citing 'unique' heating challenges." Would I buy a Nest in the future? No - it might fall apart on me. Related: Google has also just killed the driving mode feature in Google Assistant. I'm glad I don't depend on Google Assistant.

Chris Welch, The Verge, 2025/04/28 [Direct Link]
Anthropic is launching a new program to study AI 'model welfare'

I think it's prudent to "explore things like how to determine whether the 'welfare' of an AI model deserves moral consideration." Put it under the heading of risk management. I know, there are sceptics. Mike Cook, for example, says "a model can't 'oppose' a change in its 'values' because models don't have values. To suggest otherwise is us projecting onto the system." But how do we determine whether a human has values? How do we determine whether anything has consciousness?

Kyle Wiggers, TechCrunch, 2025/04/28 [Direct Link]
Why Now Is the Moment to Back Up the Web

The reason my website exists is that I learned early on that, contrary to what people say, the web is not forever. Databases can become corrupt, content can be moderated out of existence, discussion boards can be closed or acquired, companies can go out of business, governments can try to change history. I haven't tried to archive other people's content beyond my own summaries, partially for legal reasons but mostly for practical reasons. In this post, Ian O'Byrne argues we should set aside these reasons and start archiving now. Track every draft, he writes, capture the 'behind the scenes', archive on publication, and support rich metadata. "By embracing a culture of redundancy, openness, and community engagement, we can ensure that the web remains a reliable, enduring home for research and teaching. Let's start today, because the history we save now will be tomorrow's foundation."
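
Of O'Byrne's suggestions, 'archive on publication' is the easiest to automate. Here's a minimal sketch (mine, not the article's) of how a publishing script might ask the Internet Archive to snapshot a page via its public Save Page Now endpoint. The endpoint format and the Content-Location header are assumptions about that service's current behaviour, so the sketch falls back on the final response URL:

```python
import requests

def archive_on_publication(url: str) -> str:
    """Ask the Internet Archive to snapshot a just-published page.

    Uses the public Save Page Now endpoint, web.archive.org/save/<url>.
    Returns the snapshot URL if the service reports one.
    """
    resp = requests.get(f"https://web.archive.org/save/{url}", timeout=120)
    resp.raise_for_status()
    # Assumption: the snapshot path comes back in the Content-Location
    # header; if it doesn't, the final response URL is the snapshot.
    snapshot_path = resp.headers.get("Content-Location")
    return f"https://web.archive.org{snapshot_path}" if snapshot_path else resp.url

# Example: call this from your publish hook with the new post's URL.
# print(archive_on_publication("https://www.downes.ca/"))
```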

Ian O'Byrne, 2025/04/25 [Direct Link]
Evaluating Generative AI Systems is a Social Science Measurement Challenge

The argument in this short paper (6 page PDF) is that "measurement tasks involved in evaluating GenAI systems are highly reminiscent of measurement tasks found throughout the social sciences" and thus "the ML community would benefit from learning from and drawing on the social sciences when developing approaches and instruments for measuring concepts related to the capabilities, impacts, opportunities, and risks of GenAI systems." That doesn't mean "naïvely transferring measurement instruments designed for humans," but rather, adopting a framework based on four levels, "the background concept, the systematized concept, the measurement instrument(s), and the instance-level measurements themselves," as described in the paper.
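
To make the four levels concrete, here's a small, purely illustrative Python sketch (my own, not the paper's; all names and values are hypothetical) of how an evaluation task might carry each level explicitly, from the fuzzy background concept down to instance-level scores:

```python
from dataclasses import dataclass, field

@dataclass
class GenAIMeasurementTask:
    """One measurement task, organized by the paper's four levels."""
    background_concept: str    # the broad, contested idea being measured
    systematized_concept: str  # an explicit working definition of that idea
    instruments: list[str] = field(default_factory=list)  # rubrics, classifiers, prompts
    instance_measurements: list[float] = field(default_factory=list)  # one score per output

# Hypothetical example: measuring "toxicity" of a GenAI system's outputs.
task = GenAIMeasurementTask(
    background_concept="toxicity",
    systematized_concept="language a typical reader would find demeaning or insulting",
    instruments=["annotator rubric v2", "fine-tuned toxicity classifier"],
)
task.instance_measurements.append(0.08)  # score for a single generated response
```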

Hanna Wallach, et al., arXiv, 2025/04/25 [Direct Link]
Reciprocity in the Age of AI

The gist of this article is that "We believe reciprocity must be embedded in the AI ecosystem in order to uphold the social contract behind sharing." Specifically, "If you benefit from the commons, and (critically) if you are in a position to give back to the commons, you should." I know Creative Commons is adopting something like this as an organizational stance, but I don't agree with it. When I share, I'm not trying to tie you to some sort of social contract or create some sort of obligation on your part. That's not sharing, that's exchange, and the ethics of the two are very different. It turns our space into a marketplace, not a commons. Even if people use it to train AI, the commons is still there as long as we keep it a commons. It's when we convert our commons to a marketplace that it can become inaccessible, which is exactly what I don't want when I share. Via OEGlobal.

Anna Tumadóttir, Creative Commons, 2025/04/25 [Direct Link]
From Systems to Actor-Networks: A Paradigm Shift in the Social Sciences

I've only read the introduction so far, but this book (359 page PDF), recommended to me by one of the authors after my post yesterday, has definitely caught my interest (and this, may I add as an aside, is precisely why I share my thoughts and finds with people). The introduction sets up the contrast between systems and networks nicely, which is the main point of interest for me. It argues for the replacement of a systems paradigm with a network paradigm on the grounds that the network paradigm can explain itself at the level of meaning in a way the systems paradigm cannot. Now I have my own story of what happens here (based on the concepts of emergence and recognition) that I believe is distinct from Actor-Network Theory, which is what the book focuses on, so I will be interested to see how the book approaches this.

Andréa Belliger, David J. Krieger, Ethics International Press, Research Gate, 2025/04/25 [Direct Link]

Stephen Downes, Casselman, Canada
stephen@downes.ca

Copyright 2025
Last Updated: Apr 28, 2025 08:37 a.m.

Creative Commons License.