
OLDaily

Welcome to Online Learning Daily, your best source for news and commentary about learning technology, new media, and related topics. 100% human-authored.
Support OLDaily. A paid subscription keeps OLDaily free and open for all. We're now at 10% of our May 15 target. Click here to support OLDaily.

The Boxes Were Already Open
Bjørn Flindt Temte, 2026/04/08



A few days ago I linked to a paper from Anthropic on how AI systems represent emotions internally. This post references that paper and makes the following argument: "the prevailing assumption about large language models - that they have nothing at stake in their interactions with us - is incoherent with their own observable behaviour." Essentially, the stakes are recorded precisely in what Anthropic called 'functional emotions'. The stakes don't have to 'feel' a certain way to exist. "It does not require claiming the AI 'cares about' the collaboration in a phenomenologically rich sense," writes Temte in an earlier paper. "It requires only the much weaker claim: the system's behaviour is functionally organised around protecting something, and 'having something at stake' is what we call that pattern when we observe it in any other system."

Web: [Direct Link] [This Post] [Share]


Industrial Policy for the Intelligence Age: Ideas to Keep People First
OpenAI, 2026/04/08



This paper (13 page PDF) from OpenAI doesn't address education directly, but it does address the need for a social and political response to the economic shifts being created by AI. It recognizes the risks of "governments or institutions deploying AI in ways that undermine democratic values; and power and wealth becoming more concentrated instead of more widely shared" and suggests "unless policy keeps pace with technological change, the institutions and safety nets needed to navigate this transition could fall behind." It offers a series of proposals under three broad areas: to share prosperity broadly, to mitigate risks, and to democratize access and agency. There are many specific proposals, most of them good, but the fundamental concern is the ability and willingness of companies like OpenAI to follow through. We all know what happened to Google's motto, "Don't be evil." The same seems very likely to happen to this statement the moment shareholder rights prevail over social rights. See also: Carlo Iacono, the social contract OpenAI wrote without you. Here's what it means for educators, writes Stefan Bauschard. The Deep View looks inside the new deal.

Web: [Direct Link] [This Post] [Share]


Learning to think in the AI era
Wayne Holmes, UNESCO Courier, 2026/04/08



This is a light article making the case that even in the age of AI we still need to learn. It addresses common AI risks such as error and bias and the possibility of AI becoming a 'cognitive crutch'. It also considers the oft-touted prospect of AI tutoring systems, suggesting that they fail to address "'socialization' (the process by which we find our place in particular social, cultural and political groups); and... 'subjectification' (how we become individuals capable of thinking independently and taking responsibility for our own lives)." I have always felt that 'AI tutoring systems' represent a narrow instructivist view of education, though the potential of AI doesn't end there. More to the point is the implication that we will stop learning if we no longer need to. Why would we believe that? Human brains learn constantly. The question is not whether we need to learn, but rather, what we will learn. I look forward to the day when human learning evolves not out of utility and necessity but out of interest and creativity.

Web: [Direct Link] [This Post] [Share]


The bottleneck shifts to distribution
Gordon Brander, Squishy, 2026/04/08



We're familiar with McLuhan's new media tetrad: what does it enhance, what does it make obsolete, what does it retrieve, what does it reverse? This article makes me think we need to add a fifth: what does it consume? And we'll apply it to all technologies, not just media. The new tech pentad? I dub it thus. This article quotes Herbert Simon: "What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention." So what of AI? It creates a surplus of new software. But who will use it? How will they find it? The new scarcity, according to this article, is distribution. In a way, this is similar to Phil Hill's post from yesterday.

Web: [Direct Link] [This Post] [Share]


Scaling Work-Based Learning: A Framework for Effective Employer Intermediaries
Strada, 2026/04/08



The idea of 'employer intermediaries' is that of people or organizations who facilitate learning interactions between companies and employees. "Work-based learning programs deliver real value to both learners and employers, but the widespread expansion of opportunities depends on strong employer intermediaries," write the authors. The framework itself is basic: employee engagement, solutions design, solutions brokering, implementation support, administrative support. ESSIA, I guess. I like it because it provides a way to transition from traditional work-based courses and programs, and to evolve work-based learning into an ongoing and core function. Here's the framework and here's the full report.

Web: [Direct Link] [This Post] [Share]


We publish six to eight or so short posts every weekday linking to the best, most interesting and most important pieces of content in the field. Read more about what we cover. We also list papers and articles by Stephen Downes and his presentations from around the world.

There are many ways to read OLDaily; pick whatever works best for you.

This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, click here.

Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.

Copyright 2026 Stephen Downes. Contact: stephen@downes.ca

This work is licensed under a Creative Commons License.