Some key elements of deeper learning
Scott McLeod,
Dangerously Irrelevant | @mcleod,
2026/05/06
There's a bit of a theme in today's OLDaily revolving around the image in this post, "Let kids do work that matters." This image occurs in the context of a discussion of what counts as 'deeper learning', which is here presented as an objective for schools. It's a hard ask; "If you ask people to do high level work in classrooms in the current culture, they will do low-level work and call it high-level work." We converge on a definition that amounts to "three virtues: mastery, identity, and creativity." It's not a bad definition of 'deeper' but is it a good definition of 'what matters'? So much of our foundational landscape is changing these days, shaped in part by AI but also by a softening of some core myths in society - of the role of jobs, of how we manage power, of what constitutes 'meaningful'. When Canadian Prime Minister Mark Carney said, "We are in the midst of a rupture, not a transition," he was referring to the breakdown of the international rules-based order, but when he cites Václav Havel's The Power of the Powerless it becomes clear that he's talking about how each of us sees ourselves in relation to others.
Web: [Direct Link] [This Post][Share]
Bananaland University
Josh Brake,
The Absent-Minded Professor,
2026/05/06
"What might the Savannah Bananas teach us about a potential future for higher education?" That's a good question, but not as discussed in this post (trust me, I'm a baseball fan). Josh Brake presents 'bananaball' as the invention of one Jesse Cole. "Cole is ruthless about finding things that aren't working and trying out new things to replace them." Ah - the Founder as Prophet, Founder as Priest myth strikes again. But no. The actual history is much better. The name 'Savannah Bananas' came about because of a fan vote. Players took up the spirit of the fans' silliness and started adding new rules in practice. They played an exhibition game. There was never any 'resistance' per se, no need to 'overcome the sceptics'. So, the lessons? "How would we change the shape of the college and university if we shifted our goal from job to vocation, from career preparation to character development, from creating students with economic utility to forming students who understand their place in the world more deeply?" Meh. I mean, it's not bad, but we're still telling students what to do. The real lesson? Let the fans decide.
Why product discovery matters more than ever in the age of AI
Jared Molton,
Udacity,
2026/05/06
When I was a kid I built a little cabin on an old wagon in our yard. Eventually my father said it was time to take it down and give the neighbours a break. I took it down, then decided to rebuild it even better. The new wood cabin was a huge improvement, but it lasted exactly one day before being taken down. It didn't matter that I had built it better and faster; it was just the wrong thing at the wrong time. Today, now that I don't have a 'job', I've been working hard on my personal learning environment (PLE) application, CList. But is it the right thing for the right time any more? I wrestle with that question, which is why this article appealed to me, even though you can stop reading after maybe the first third (again, it's an AI article that goes on and on and on and on....). The point is good: "A team can release three AI-powered features in a single sprint. If none of them improves conversion, retention, or satisfaction, the speed was wasted. The features were built efficiently. They just were not worth building." (p.s. don't get me wrong - working with code like this is the most fun I've had in a long time and while it would be nice if it was widely adopted, it's not really necessary). (p.p.s. I really need a better name than CList - I'm open to ideas).
Mature AI Use vs. Immature AI Use
Mike Kentz,
How We Frame Machines,
2026/05/06
This paper makes a useful distinction which I'll share here, so you don't have to wade through the AI-generated reams of text. It divides AI use policies into two domains: ethics, and maturity. You can take it from there; the actual paper employs a naive (though commonly held) perspective on ethical frameworks as "meant to govern behavior across a community," while on the other side growth, effort and learning from feedback are taken as indicators of maturity. The useful bit in this paper is that our immediate reaction should not be to just create an ethics policy that governs allowable use. We need to look beyond what we shouldn't do, to what it's worthwhile to do. This requires a lot more thought. (p.s. a link to 'Glow and Grow', for the record).
Literacy-slop
Doug Belshaw,
Open Thinkering,
2026/05/06
Read the Emily Segal post first, then this post. Belshaw argues here that "If we swap 'Digital literacy' for 'Taste' then it's a socially-negotiated relation between people, tools, practices, contexts, and communities." From which we can argue, "Literacy-slop is the credential without the community of practice; it's the qualification without the learning; the skills certificate for getting an AI agent to click through a self-paced module on digital skills. It looks like literacy, satisfying the classifier. But it's just curation without a social body." Or put another way: "There exists a whole complex of knowledge, dispositions, and social relationships that makes someone capable in various digital contexts." The labels are just the socially accepted markers of success or of capability in that context.
My view: there's a lot right here, but I don't agree with it all. Words don't have meaning on their own, sure. They only have meaning in a context. But context can be anything; it doesn't need to be a community or a society. It doesn't have to be negotiated. There is no process of 'making meaning'. Context is (literally) the network of entities a thing is embedded in; meaning is the emergent pattern in that network that is recognized by a viewer when prompted by the thing. There is no one meaning, no 'real' meaning, obviously, because there are many viewers, many ways of seeing the same things. Any negotiation that happens isn't about the actual meaning; it's about establishing and holding power in that community, a hierarchy of symbolism, just like taste.
Tasteslop
Emily Segal,
NEMESIS,
2026/05/06
Read this article first, then Doug Belshaw's take. This article is on the phenomenon of 'taste' (as in, "she has good taste"). The point here is "Taste is not really a property of various objects. It is a socially validated relation between objects, people, histories, scenes, and timing." In other words, you can't really have good taste unless there's an audience that sees you and affirms your good taste. OK? Next: "Tasteslop emerges when the visible signs (or 'markers') of taste are extracted from those relations and redeployed generically." It's like slapping a Gucci label on your t-shirt. The point here is that AI can recognize (via pattern matching) what counts as a sign of good taste (like a Gucci label) but not when the sign has been misapplied either via "lost meaning or what would need to replace them for things to feel legitimately fresh." AI intensifies this because it will just slap a taste marker into any old context, breaking down the whole culture and 'taste hierarchy' the taste marker belongs to.
‘Close to zero impact’: US study casts doubt on effect of phone ban in schools
Richard Adams,
The Guardian,
2026/05/06
Surely if anything will cause us to stop looking at 'test scores' as a measure of impact, this will, right? "The report concluded that among schools instituting a ban: For academic achievement, average effects on test scores are consistently close to zero." After all, "Researchers say findings are not reason to shy away from restrictions as MPs consider ban in England's schools."
The "AI Job Apocalypse" Is a Complete Fantasy
David George,
a16z,
2026/05/06
When a tech or finance company puts the phrase 'understanding of humans' in the subhead, that's a red flag. A folk theory of 'human nature' is not a good basis for informed commentary. Neither is saying 'of course Keynes was wrong.' That doesn't invalidate the entire message here, though. Historically, when a new technology has been introduced, that has increased, not decreased, employment and wealth. It's not a simple case of "we found new and different productive endeavors to fill our time." Would that this were the case. Historically (and this is not the a16z message) though wealth increased, scarcity persisted because that wealth was not shared, and it was never possible to survive on a 15 hour work week. To me, the assertion that AI won't eliminate jobs is not an assertion about AI (which very much could reduce our need for labour) or even an assertion about people (if it were, a16z would be saying a guaranteed income would increase social wealth) but rather an assertion that the exploitation will continue (just as it has through previous rounds of technology development).
This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.
Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.
Copyright 2026 Stephen Downes Contact: stephen@downes.ca
This work is licensed under a Creative Commons License.