
OLDaily

Welcome to Online Learning Daily, your best source for news and commentary about learning technology, new media, and related topics. 100% human-authored
Support OLDaily. A paid subscription keeps OLDaily free and open for all. We're now at 15% of our May 15 target. Click here to support OLDaily.

Agents are actors
Gordon Brander, Squishy Computer, 2026/04/20



There's all kinds of goodness lurking just under the surface here. The short, simple version is that Gordon Brander is describing a multi-agent software environment that could succeed in a way that object-oriented programming (OOP) did not, by enabling compositionality. Where's the goodness? It comes when you read this in the context of a bunch of other things, like the post on signs I just wrote, for example, or the relation between learning objects and OOP, or Marvin Minsky's Society of Mind, or this comment on what scales: "Consensus doesn't scale. Context graphs do. Imperialist ontologies don't scale. Protocols do. Interoperability doesn't scale. Boundaries do." The idea of 'small pieces loosely joined' is a core idea of the internet (and of cognition generally), and what makes it work - but what kind of pieces, and how are they joined? This turns out to matter. A lot.

Web: [Direct Link] [This Post][Share]


The Strange Heterogeneity of Hiking Signs Part II
Wouter Groeneveld, 2026/04/20



I was cycling in Holland when I encountered the numbered 'knooppunten' (nodes) that let you plan your own route. OK, no problem: consult a map, remember a sequence of numbers, follow the arrows. Not mentioned: the numbers aren't unique; following the path to node '60' might take you in the right direction, or take you way out of your way. Why does this matter? It underlines what should be a basic point about language and systems of signs generally: the meaning is not inherent in the sign. Words and signs acquire meaning only in the broader context of use, and this context is not always obvious, and can vary a lot between two people. There's no one fixed 'meaning' of a word. This is the fundamental flaw (in my view) of ontology and semantically based systems of representation.

Web: [Direct Link] [This Post][Share]


Hassabis - most important AI person on the planet
Donald Clark, Donald Clark Plan B, 2026/04/20



Donald Clark references what is coming to be called the new Copernican revolution, "a Copernican revolution of the mind, where we must recognise, just as we recognised that the earth is no longer the centre of the Universe, that we also are no longer the centre or standard for intelligence." The context is a discussion of the book The Infinity Machine, about Demis Hassabis, which I haven't read, but which I probably would if there were an open access version around. More on the book from The Guardian; while Clark calls Hassabis a "polymath", the Guardian comments, "sadly, Mallaby mistakes Hassabis's intelligence in one field - computing - for general brilliance across all domains, treating his half-formed pub takes on the nature of reality and aspirations to build a Large Hadron Collider as if they were revelatory dispatches."

Web: [Direct Link] [This Post][Share]


AI, A Mirror that Amplifies
Tim Moon, Silicon and Soul, 2026/04/20



Short article with a good point. The standard argument against which Tim Moon is responding goes something like this: "AI use replaces humans. It removes the struggle that makes writing writing, the effort that makes thinking thinking." But, as Moon writes, the effort is a proxy, and a bad one at that. "Plenty of effortful writing is effortfully empty, and plenty of effortful writers never learned to think, only to perform the appearance of having thought." Good writing reveals the writer: "a hunch, a half-remembered line from Augustine, a suspicion that the standard reading is too neat."

Web: [Direct Link] [This Post][Share]


The Liberators
Barry Overeem, Christiaan Verwijs, GitHub, 2026/04/20



This is a nice resource, "a repository full of powerful exercises and exercise materials to humanize the workplace." According to the (GitHub) website, "The purpose of The Liberators (Barry Overeem and Christiaan Verwijs) is to humanize the workplace and unleash organizational superpowers. Since 2019, and with the help of sponsors, we have produced tons of materials to help you humanize work in your organization. This includes exercise materials, do-it-yourself workshops, posters, and kits. We make this material available here, free of charge." More discussion.

Web: [Direct Link] [This Post][Share]


Hampshire College’s demise is yet another blow to creative, outside-the-box options in higher education
Austin Sarat, The Conversation, 2026/04/20



This article bemoans the pending closure of Hampshire College and its "student-driven, unorthodox approach to education (that) has roots in the early 1900s and a belief that students should be active, engaged learners." Austin Sarat seeks throughout the article to comprehend why this, and other nontraditional liberal arts institutions in Vermont, would be closing. Perhaps "because Hampshire remained steadfastly unconventional, its failure may encourage schools to double down on offerings they know will attract a job-anxious generation of students." It never seems to occur to Sarat that it's perhaps because attending Hampshire "in the 2025-26 school year costs more than US$72,000." I mean, if you're going to pay that, you may as well go to Yale. You might not become 'intelligent', but it won't matter.

Web: [Direct Link] [This Post][Share]


Language models transmit behavioural traits through hidden signals in data
Alex Cloud, et al., Nature, 2026/04/20



This article (26 page PDF) proves "a theoretical result showing that subliminal learning arises in neural networks under broad conditions." Specifically, "as artificial intelligence systems are increasingly trained on the outputs of one another, they may inherit properties not visible in the data." So, for example, a 'teaching' LLM may favour owls, and this may result in a 'learning' LLM favouring owls, even though there's no explicit representation of owls in the data. That said, as David Johnston comments, "Why is this mysterious? Models learn latent representations. Why would you expect them to not transmit information when you only remove the final layer of data?" I agree. Indeed, the strength of neural networks is that they detect patterns that are not readily apparent to humans. We should not be surprised to find them in the output.

Web: [Direct Link] [This Post][Share]


To understand decision-making, we need to truly challenge lab animals
Chand Chandrasekaran, The Transmitter: Neuroscience News and Perspectives, 2026/04/20



This is a basic article that makes a low-level point, as stated in the title. It's a useful frame, however, for thinking about the concept of 'decision-making' in general. This article is fairly equivocal about what constitutes 'decision-making', including such things as: discriminating colour, navigating mazes, solving problems, manipulating objects, and more. Normally we think of 'decision-making' as 'making a choice', but the reading here seems also to include creative and imaginative tasks. I don't think the phrase 'decision-making' is really adequate here, even though it does provide a handy intuitive basis to divide people into classes: those who decide, and those who do.

Web: [Direct Link] [This Post][Share]


Inside Higher Ed’s Model Is Changing. Our Journalism Is Not.
Sara Custer, Inside Higher Ed, 2026/04/20



I stopped linking to Inside Higher Ed (IHE) with any frequency when it started requiring people to register to read articles. My policy is that OLDaily links directly to content - no paywalls, no subscription barriers. So nothing really changes for me now that IHE content "will be available only to paying subscribers." But it's not a surprise, either. Good luck to them - the market right now for paying subscribers is pretty thin.

Web: [Direct Link] [This Post][Share]


We publish six to eight or so short posts every weekday linking to the best, most interesting and most important pieces of content in the field. Read more about what we cover. We also list papers and articles by Stephen Downes and his presentations from around the world.

There are many ways to read OLDaily; pick whatever works best for you.

This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, click here.

Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.

Copyright 2026 Stephen Downes Contact: stephen@downes.ca

This work is licensed under a Creative Commons License.