On ethical AI principles
Stephen Downes,
Journal of Open, Distance, and Digital Education,
2025/12/24
This is my latest publication, an adaptation of a post I authored last summer. It makes the argument that we cannot base AI ethics (or any other ethics) on a set of commonly held principles, for several reasons: first, because the principles themselves are too vague to apply; second, because there is no unanimity of support for these principles; and third, because even organizations that support these principles are quite willing to dispense with them in particular circumstances. The paper could be better written (and was improved a lot thanks to the heroic efforts of the editor) and I really do hope the reviewers come out with their rejoinder, which would also be a valuable addition to the discussion. I should also add that my employer wants nothing to do with this paper and does not want to be associated with it in any way.
Web: [Direct Link] [This Post][Share]
What happens when intelligence outgrows its creators
Yuval Noah Harari,
Big Think,
2025/12/24
I have a couple of thoughts about this article. First, I'm reminded of my local coffee shop, where I'm a regular and they get my order ready as soon as they see me walk in the door, because I always order the same thing (and then, how I tailor my behaviour to meet that expectation). Only all this with AI. Second, the problem isn't that an AI may become super-intelligent, but that it could become super-powerful, just the way we allow some people to be. "We could be in a situation when the richest person in the United States is not a human being. The richest person in the United States is an incorporated AI... the richest person in the US is giving billions of dollars to candidates in exchange for these candidates broadening the rights of AIs." If we prevent people from becoming super-powerful, we can prevent AIs from being super-powerful. But, who dares take on a Musk? Or even an Irving?
Web: [Direct Link] [This Post][Share]
I turned a hotel key card into a one-tap shortcut for ChatGPT - and now I use it every day
Amanda Caswell,
Tom's Guide,
2025/12/24
This is hilarious. "Once you scan a tag - like a hotel key, sticker, or even a wristband - you can assign it a custom action. Most people use this to play music or turn off lights. I used it to launch ChatGPT and instantly start a new conversation." I could imagine taping the card to my office door and on tapping it immediately putting all my devices into work mode. Teachers could tape a card to their classroom door and have students associate a classroom application with the code (assuming they're still allowed to have computers in the classroom). Via Miguel Guhlin.
Web: [Direct Link] [This Post][Share]
My 2026 Open Social Web Predictions
2025/12/24
These are generally safe predictions and feel right to me. My favourites: "Nostr ↔ ATProto ↔ ActivityPub three-way bridging becomes functional via BridgyFed or another service by end of 2026. The 'protocol wars' narrative collapses into 'just pick your client.'" Also, "At least one fully independent ATProto stack - PDS, Relay, and AppView operating without dependency on Bluesky PBC infrastructure - will achieve viability in 2026, meaning it has paying customers or sustainable funding. This will be the year ATProto proves (or fails to prove) it can exist beyond Bluesky-the-company."
Web: [Direct Link] [This Post][Share]
The Clickbait Audit
Gemini,
2025/12/24
Nathalie Tasler writes, "I made a clickbait auditor: How it works: Paste any suspicious ad link or text into it. It checks for predatory tactics (like hidden mechanisms or shame-based marketing) and gives you a Safety Score (1-10)." I love this, especially after testing it on my own site. Here's (part of) what it said about it: "The Score: 1/10 - Academic/Direct. The Principle Identified: This falls into Zone 1: The 'What You See Is What You Get' Zone. It is a textbook example of non-predatory, non-commercial academic information sharing... If you sign up for the newsletter, you will simply receive a daily summary of news in the field of educational technology. It is a 'Safe Zone' for neurodivergent users who are tired of being sold 'hacks' or 'secrets.'" That last sentence made me feel especially happy.
Web: [Direct Link] [This Post][Share]
How AI coding agents work - and what to remember if you use them
Benj Edwards,
Ars Technica,
2025/12/24
AI systems keep track of what they're asked to do - each prompt is like an amendment to the previous prompts. But this capacity - called 'context' - is limited and subject to 'context rot'. AI coding agents address this problem in a variety of creative ways, including outsourcing tasks to other services and periodically 'forgetting' irrelevant information. It's not completely reliable, and of course the AI system needs to be trained to do this. It strikes me as similar to the human problem of cognitive overload.
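To make the idea concrete, here is a minimal sketch (my own illustration, not code from the Ars Technica article or any actual agent) of one way an agent can cope with a finite context window: keep a running message list and, when an estimated token budget is exceeded, 'forget' the oldest exchanges while always preserving the original instructions. The token estimate and message format are assumptions for illustration.

```python
def estimate_tokens(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token for English text.
    return max(1, len(text) // 4)

def trim_context(messages: list[dict], budget: int) -> list[dict]:
    """Drop the oldest non-system messages until the context fits the budget."""
    kept = list(messages)
    while sum(estimate_tokens(m["content"]) for m in kept) > budget:
        # Find the first message that isn't the system prompt and drop it.
        for i, m in enumerate(kept):
            if m["role"] != "system":
                del kept[i]
                break
        else:
            break  # only the system prompt remains; nothing left to forget
    return kept

history = [
    {"role": "system", "content": "You are a coding agent."},
    {"role": "user", "content": "Refactor module A. " * 50},
    {"role": "assistant", "content": "Done; here is the diff. " * 50},
    {"role": "user", "content": "Now add tests for module A."},
]

trimmed = trim_context(history, budget=100)
# The long early exchanges are forgotten; the instructions and the
# most recent request survive.
```

Real agents are more sophisticated - summarizing dropped material or delegating it to sub-agents rather than simply deleting it - but the budget-and-forget loop is the basic shape of the trick.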
Web: [Direct Link] [This Post][Share]
My Online Education World 1980-2020: A chronology presenting the history of online education
Morten Flate Paulsen,
YouTube,
2025/12/24
Do read the full playlist description before commenting on or viewing the playlist. "These AI-generated videos about the history of online education are based on about 400 chronological anecdotes from the four open-access volumes of My Online Education World: 1980–2020. All books are available at nooa.no/my-online-education-world."
Web: [Direct Link] [This Post][Share]
The Taxonomy of Strangers
Carlo Iacono,
Hybrid Horizons,
2025/12/24
"We keep asking," says Carlo Iacono, "does the machine think like us?" But, he says, the question is a trap. "It assumes that intelligence has a natural shape, and that shape happens to be ours. It assumes that anything which diverges from the human pattern is therefore not thinking at all, merely simulating, merely pattern matching, merely autocomplete with better marketing." The assumption is that real thinking must be something other than any of this. "Whatever the machine does, it cannot be what we do, because we are special and it is not." But honestly, "This is not science. This is theology wearing a lab coat." And I agree. It's similar to what Ethan Mollick says here: there are different shapes of thinking. And as Iacono says, "The universe is under no obligation to make intelligence bipedal, social, emotional, or narratively satisfying. It only has to work. And work, it turns out, can take shapes we never imagined."
Web: [Direct Link] [This Post][Share]
There are many ways to read OLDaily; pick whatever works best for you.
This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.
Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.
Copyright 2025 Stephen Downes Contact: stephen@downes.ca
This work is licensed under a Creative Commons License.