Taking action against AI harms
Anil Dash,
2026/02/24
A bit of a theme has emerged in today's newsletter, and it has to do with the ablative effects of AI (Anil Dash does not write about this, but I'm getting to it). In statistics, there's this idea of 'regression toward the mean', which in writing becomes 'regression toward the bland', or as Claudio Nastruzzi terms it, semantic ablation. Well, that's not me. But... why? It wasn't simply protection from AI, because when I was growing up, AI wasn't a thing. But what was a thing was television. Except - I hardly watched it at all, either as a child or a youth or even through most of my years of university. I would put on some headphones and read or write or code. I was a very serious and studious young man, and very socially inept. Still am. But I also have insights into the world (that I think are) worth having that weren't ablated by relentless commercial media. But now I read Anil Dash (as I finally talk about the article) describing how to protect children from the harms of AI. And - fair enough. Protect them from exploitation and manipulation and regression to the bland. Keep them off X/Twitter (and Meta, and TikTok). Stop schools from using LLMs (not just ChatGPT). And - let's see if you can do this for them and for yourself - turn off the television.
Web: [Direct Link] [This Post][Share]
Cannes Declaration on the Sovereignty of the Mind
Dataethics,
Dataetisk Tænkehandletank,
2026/02/24
This article describes The Cannes Declaration on the Sovereignty of the Mind, which was signed by a coalition of experts at the World AI Cannes Festival. 3 page PDF. So OK. It reads, "We ask for the conditions under which innovation can remain compatible with democracy and fundamental freedoms, including a firm boundary against systems designed or used to manipulate thought at scale or to evade human thought and reflection." Sure, their focus is on the potential uses of AI, and I get that. But it strikes me that commercial media and advertising (also synonymous with Cannes) have been responsible for large-scale manipulations of thought and beliefs. I have often said "advertising is the original fake news." I should also say "advertising is the original hallucination." But these experts (pictured) don't see the world the way I do.
Web: [Direct Link] [This Post][Share]
Attention is All You Need to Bankrupt a University
Hollis Robbins,
Anecdotal Value,
2026/02/24
"A transformer," writes Hollis Robbins, "performs a four-step operation: it takes an input, selects which features of the input to attend to, weights those features based on patterns learned from training data, and generates the most probable output." Aggregate, remix, repurpose, feed forward. It doesn't need to be 'most probable'; it's usually 'most relevant' or 'most salient'. But I digress. "Since 2000," continues Robbins, "American universities built an enormous infrastructure around a mode of instruction that performs the same kind of operation: converting particulars into categories and generating outputs from learned patterns." That's not exactly the same, but again I digress. The differences aren't really significant. Then finally: "The university's scaling operation succeeded. The millions of graduates carried the four-step operation into every American institution... (but) The same formal property that allowed one compliance strategy to work across every discipline allows one machine to perform the operation across every institutional context." But here's the problem: "The university faces a market problem. Does anyone need to pay tuition to learn an operation that a machine performs competently?... I can't see any other future but collapse."
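For readers who want to see what Robbins's four-step description actually names, it corresponds roughly to scaled dot-product attention. Here's a minimal sketch in plain Python (the variable names and toy numbers are mine, not from the article; real transformers do this with learned matrices over thousands of dimensions):

```python
import math

def softmax(xs):
    # normalize scores into weights that sum to 1
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(query, keys, values):
    """One query attending over keys/values: score relevance (step 2),
    convert scores to weights (step 3), mix the values into an output (step 4)."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# toy input (step 1): the query resembles the first key,
# so the output leans toward the first value vector
query = [1.0, 0.0]
keys = [[1.0, 0.0], [0.0, 1.0]]
values = [[10.0, 0.0], [0.0, 10.0]]
out = attention(query, keys, values)
print(out)  # first component larger than the second
```

Whether a lecture-and-rubric degree "performs the same kind of operation" as this weighted averaging is, of course, exactly the analogy in dispute.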
Web: [Direct Link] [This Post][Share]
Semantic ablation: Why AI writing is boring and dangerous
Claudio Nastruzzi,
The Register,
2026/02/24
This is definitely part of the reason why I still prefer to write my own text (recent experiment notwithstanding): "When an author uses AI for 'polishing' a draft, they are not seeing improvement; they are witnessing semantic ablation. The AI identifies high-entropy clusters - the precise points where unique insights and 'blood' reside - and systematically replaces them with the most probable, generic token sequences." The main thing I bring to my writing is my unique insights and point of view. I just see the world differently from most people. Having AI smooth that rough edge removes the value from my writing. (TIL ablation is also a type of medical procedure; I know the term from engineering applications or as a type of armour).
Web: [Direct Link] [This Post][Share]
Preserving The Web Is Not The Problem. Losing It Is.
Mark Graham,
TechDirt,
2026/02/24
This is just a short opinion post from the director of the Wayback Machine at the Internet Archive, but the story it references is a significant development: "some major news organizations - including The Guardian, The New York Times, and Reddit - are limiting or blocking access to their content in the Internet Archive's Wayback Machine." They're doing it, of course, because they can't collect money from AI companies if those companies think they can just get the content from Wayback. Mark Graham argues that "the Wayback Machine is built for human readers. We use rate limiting, filtering, and monitoring to prevent abusive access." But of course this might not always be the case, so the content companies are protecting their turf. In the long term, though, protecting their turf may cause more harm than good: "significant chunks of our journalistic record and historical cultural context simply... disappear."
Web: [Direct Link] [This Post][Share]
How do students regulate their learning with a genAI chatbot?
Lyn Lim, Maria Bannert,
Learning Letters,
2026/02/24
Here's the set-up: "Thirty university students were tasked to read texts and write an essay within 45 minutes." Here's the pay-off: "Chatbot users achieved higher essay scores than non-users. Chatbot interaction frequencies correlated positively with high cognitive activities." How is this possible? This paper (11 page PDF) explores the question, examining the usual trade-off between cognitive offloading and pedagogically sound design. "The findings highlight the need to support students' learning regulation skills to mitigate their outsourcing of critical processes while using genAI tools." In other words, there is a difference between an AI application that will write an essay for you, and an AI application that will teach you the content so you can write an essay. I think we knew that, though: it's why we discourage parents from completing their children's homework or projects. The real issue is, under what circumstances can the student be motivated to turn down the parent's (or AI's) completion of their work even if it is freely offered?
Web: [Direct Link] [This Post][Share]
A.I. Isn't People
Rusty Foster,
Today in Tabs,
2026/02/24
This article begins with the question, "How many Reddit posts does it take to learn to read?" The answer, "all of them," is intended to show the difference between human learning and AI learning: that '200 lines of Python code does not understand anything'. It's a bizarre supposition, to be sure. But in response I invite the reader to consider the same questions asked about humans. Do the chemicals and interactions in a human neuron 'understand' anything? If given only Reddit posts, would it not take a lot of posts to learn how to read? That's the problem with these human-AI comparisons: we assume these almost-magical human abilities that in reality stem from (a) a wider range of experience from all our senses, and (b) a lot of interconnected neurons. The proposal that human understanding is fundamentally different does not follow from arguments like this. Yet people keep making them.
Web: [Direct Link] [This Post][Share]
Inclusivity, Ethics, and Accessibility for Learners with Disabilities
Munir Moosa Sadruddin, Sehrish Sachwani,
MERLOT,
2026/02/24
What I like about this open access book (131 page PDF) is that it offers a variety of voices from a multi-national perspective addressing issues related to ethics and accessibility from different points of view. There are a couple of technology-specific articles, including a contribution from Silvester Krčméry on transforming inclusion in the age of AI, and an article from Pankhuri Bajpai on subjective well-being in digital education. I also especially appreciated Sehrish Sachwani's article on nervous system regulation as a foundational condition for learning. "A classroom that feels lively, engaging, neutral, or stimulating to one learner may feel overwhelming, exhausting, or threatening to another." I can feel this. The book is listed on Merlot and available on Google Docs, though I found it easiest to read a downloaded version.
Web: [Direct Link] [This Post][Share]
There are many ways to read OLDaily; pick whatever works best for you.
This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.
Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.
Copyright 2026 Stephen Downes Contact: stephen@downes.ca
This work is licensed under a Creative Commons License.