[Home] [Top] [Archives] [About] [Options]

OLDaily

Welcome to Online Learning Daily, your best source for news and commentary about learning technology, new media, and related topics.
100% human-authored

Sacrificing Humans for Insects and AI: A Critical Review
Eric Schwitzgebel, The Splintered Mind, 2025/08/28



Eric Schwitzgebel introduces a preprint (37-page PDF) in which he considers whether AI will force us to reconsider 'human-centered' approaches to ethics. It's not fundamentally different from the argument Peter Singer makes with respect to animals, to my mind. Along with Walter Sinnott-Armstrong, Schwitzgebel critiques "three recent books that address the moral standing of non-human animals and AI systems: Jonathan Birch's The Edge of Sentience, Jeff Sebo's The Moral Circle, and Webb Keane's Animals, Robots, Gods." He writes, "All three argue that many nonhuman animals and artificial entities will or might deserve much greater moral consideration than they typically receive, and that public policy, applied ethical reasoning, and everyday activities might need to significantly change." I also agree with this, though it's the sort of thing that Must Not Be Said.

Web: [Direct Link] [This Post] [Share]


Custom Bot Segregation and the Problem with a Hobbled Product
Alexander "Sasha" Sidorkin, AI in Education and Society, 2025/08/28



"CSU's adoption of ChatGPT Edu is, in many ways, a welcome move," writes Sasha Sidorkin. But access is nonetheless limited. "The most immediate concern is the complete ban on third-party custom bots. Students and faculty cannot use them, and even more frustrating, they cannot share the ones they create beyond their own campus." Why makes something available but unusable? "Higher education functions best when it remains open to the world. It thrives on collaboration across institutions, partnerships with industry, and the free exchange of ideas and tools. When platforms are locked down and creativity is siloed, that spirit is lost." Quite right.

Web: [Direct Link] [This Post] [Share]


New Year, New Beginnings and Old Thinking on the Role of Scientific Intuition in the Age of AI
Marina Milner-Bolotin, 2025/08/28



This sort of example illustrates what's wrong with so much writing on AI. Here are the instructions given to ChatGPT: "Using a light bulb, a battery, and a wire, draw all the different ways you can connect them to make the light bulb light up." First of all, you can't draw with a light bulb, wire or battery. Second, no drawing can make a light bulb light up. Third, for all practical purposes, you need two wires to complete such a circuit. Fourth, the set of 'all possible ways' is infinite, and can never be completed. And fifth, a lot of humans would fail the task, even if they were able to navigate their way through the mangled text. That ChatGPT proposed any solution to such a badly worded problem is a miracle. But here it is cited as a case of ChatGPT not possessing "fundamental knowledge". 

Web: [Direct Link] [This Post] [Share]


Code Acts in Education: Enumerating AI Effects in Education
Ben Williamson, National Education Policy Center, 2025/08/28


Ben Williamson writes, and I agree, "The underlying problem is that there is current desperation to show the causal 'effects' of AI in education - whether good or bad - and this is leading to a rush of studies that immediately gather huge public and media attention despite their significant methodological shortcomings and limitations." I'm less inclined to blame "low-quality peer review and high-speed editorial and publishing processes" - both good and bad research can be found anywhere (and Williamson himself cites an arXiv preprint along with some Hechinger Report articles). The real problem, as Williamson recognizes, is that such studies try to isolate a single factor - AI - in what is a complex process where no single cause explains anything. Academics and journalists alike should do better.

Web: [Direct Link] [This Post] [Share]


AGENTS.md Emerges as Open Standard for AI Coding Agents
Robert Krzaczyński, InfoQ, 2025/08/28



Here's the gist: "A new convention is emerging in the open-source ecosystem: AGENTS.md, a straightforward and open format designed to assist AI coding agents in software development. Already adopted by more than 20,000 repositories on GitHub, the format is being positioned as a companion to traditional documentation, offering machine-readable context that complements human-facing files like README.md." Robert Krzaczyński expresses some doubt about the idea; after all, machines these days can read human-readable content.
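For readers who haven't encountered one, an AGENTS.md file is just ordinary Markdown placed at the root of a repository; the convention lies in what it covers, not in any special syntax. A hypothetical example (the project, commands, and rules below are invented for illustration) might look like:

```markdown
# AGENTS.md

## Setup
- Install dependencies with `npm install`.
- Copy `.env.example` to `.env` before running anything locally.

## Build and test
- Run `npm test` before proposing any change; all tests must pass.
- Use `npm run lint` to check style; do not disable lint rules.

## Conventions
- Write new code in TypeScript with strict mode enabled.
- Keep each commit scoped to a single concern.
- Never edit files under `vendor/` directly.
```

The idea is that an AI coding agent reads this file for operational context (how to build, test, and behave in the repo), while README.md stays focused on the human audience.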

Web: [Direct Link] [This Post] [Share]


We publish six to eight or so short posts every weekday linking to the best, most interesting and most important pieces of content in the field. Read more about what we cover. We also list papers and articles by Stephen Downes and his presentations from around the world.

There are many ways to read OLDaily; pick whatever works best for you.

This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.

Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.

Copyright 2025 Stephen Downes Contact: stephen@downes.ca

This work is licensed under a Creative Commons License.