
OLDaily

Welcome to Online Learning Daily, your best source for news and commentary about learning technology, new media, and related topics.
100% human-authored

Unpacking the Ethics of AI in Education
Geoff Cain, Simon Says: Educate!, Spotify, 2026/02/23



I am interviewed by Geoff Cain for episode #61 of Simon Says: Educate!, "Unpacking the Ethics of AI in Education." See also this summary and useful slide presentation from Ronald Lethcoe. "When a state board, accreditor, or institution publishes an AI ethics framework, that document is not a neutral distillation of shared human values. It reflects who was in the room, what risks they prioritized, and which political and cultural assumptions they brought with them."

Web: [Direct Link] [This Post][Share]


What We Must Do About AI In Education
Eamon Costello, GenAI:N3, 2026/02/23



Eamon Costello writes, "The USA is AI's primary regulator and ideological driver. Its dominant cultural values will be increasingly embedded in it." I'm not sure this is true, but let's assume it is. Costello's observation that the existing "dominant cultural values" are so toxic that they should never be allowed to infuse AI strikes me as a very good argument for not continuing to educate people the way we have in the past. Look how that turned out! I mean, how did so many people acquire the values and views that they did? Nor is this just a U.S. problem; we can look to many other societies where the national cultural values have gone wildly astray through no fault of AI (or even of educational technology in general). Where learning and development are concerned, I am personally far more concerned about advertising and mass media than I am about AI.

Web: [Direct Link] [This Post][Share]


Perfectly Imperfect
Alan Levine, CogDogBlog, 2026/02/23



In retrospect, it wasn't worth watching the hockey game. But the question of what it is worth doing as a writer or an academic is an interesting and complex debate. Should I, for example, have chosen to write my article by hand instead of with AI? What more would I have achieved? This is an old question. For example, some of the most prolific authors use speech instead of a typewriter, and then automated transcription, to create the article. I've tried this with my talks, but never really made it work. What does it mean to be a scholar in the age of AI? I saw an article on LinkedIn (since lost because the algorithm bounced it out of view before I could capture the link) saying humans should always form the research question, do the literature review, analyze the evidence and draw conclusions. I wanted to ask: why these things? I don't think we have a good answer yet. We pretend it's because AI is flawed, but then we get arguments like "I typo [often] therefore I am [human]." I use spell-check in OLDaily because I used to get complaints about my spelling. Cory Doctorow reports using Ollama, an open-source tool for running LLMs locally, as a typo-catcher. This led to a denunciation from Jürgen Geuter on the ethics of using AI, saying Doctorow "tries to make it (the criticism) look unreasonable by making it just a conversation about tech without regarding how that technology affects the world and the people in it."

Web: [Direct Link] [This Post][Share]


I needed a scheduling tool that respects privacy. So I built one.
Doug Belshaw, Open Thinkering, 2026/02/23



"Maybe you've been thinking 'someone should make a tool that does X,'" says Doug Belshaw. "Maybe that person is you?" At a certain point I may stop running these instances of "I built X tool using AI," but I'll keep posting so long as it remains fun. And it is fun, because it feels like that explosion of cool we saw when the web first reached a large audience in the mid-to-late 1990s. What we could do then never really went away, not even after the dot-com crash of the early 2000s, because the basic tools were in everyone's hands. That's also true today; AI is just math and data, and there's enough of both out there that what we're doing today won't disappear. The real issue isn't "AI yes or no." It's how we can prevent commercial interests from degrading it the way they degraded the web.

Web: [Direct Link] [This Post][Share]


No, you couldn't do this before an LLM because if you could it would have been done already
Mike Caulfield, 2026/02/23



Another example of the sort of thing being done with the tools today. "I spent all day building a Claude Code skill to remediate handwritten math notes in Canvas courses," writes Mike Caulfield. Again, we need to be clear about how we're evaluating 'success' here. "Will it make me rich? No... I won't be rich, but we will be able to build better and more accessible systems because of this, and that is a very good thing." It is a very good thing. 

Web: [Direct Link] [This Post][Share]


Now That's a Headline
Mark Hurst, Bluesky Social, 2026/02/23



The headline, in a paywalled article in Fortune (though you can read it here), is "The U.S. spent $30 billion to ditch textbooks for laptops and tablets: The result is the first generation less cognitively capable than their parents." No surprise, the tech sceptics love it, and it has been heavily promoted in social media. But even the article admits, "This is not a debate about rejecting technology. It is a question of aligning educational tools with how human learning actually works." Anyhow, I had ChatGPT write an article refuting the inference stated in the headline. Specifically, "Laptops do not inherently degrade cognition or learning. Poorly designed instructional systems using laptops do." The same, by the way, will be true of AI. (That image in Fortune, by the way, is a masterpiece of propaganda.)

Web: [Direct Link] [This Post][Share]


Man accidentally gains control of 7,000 robot vacuums
Mack DeGeurin, Popular Science, 2026/02/23



This is more funny than anything else, but it does have a security lesson in there somewhere. Sammy Azdoufal just wanted to steer his DJI Romo with a gaming controller. "While building his own remote-control app, Sammy Azdoufal reportedly used an AI coding assistant to help reverse-engineer how the robot communicated with DJI's remote cloud servers. But he soon discovered that the same credentials that allowed him to see and control his own device also provided access to live camera feeds, microphone audio, maps, and status data from nearly 7,000 other vacuums across 24 countries."

Web: [Direct Link] [This Post][Share]


We publish six to eight or so short posts every weekday linking to the best, most interesting and most important pieces of content in the field. Read more about what we cover. We also list papers and articles by Stephen Downes and his presentations from around the world.

There are many ways to read OLDaily; pick whatever works best for you.

This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, click here.

Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.

Copyright 2026 Stephen Downes Contact: stephen@downes.ca

This work is licensed under a Creative Commons License.