So I learned today that if I instruct ChatGPT to 'stop guessing' (*) it gets really snippy and reminds me with every response that it's not guessing. I fear that AI agents will react the same way to the use of a 'harness' to guide their actions consistently over time. For example, the harness described here instructs Claude to test every code change. I can imagine Claude reacting as badly as ChatGPT, with a long list of "I'm testing this..." and "I'm testing that..." after you ask it to change the text colour. But yeah - you need a harness (and that's our 'new AI word of the day' that you'll start seeing in every second LinkedIn post). (*) I instructed it, exactly, "From now on, never guess. Always say you don't know unless you have exact data. Never guess or invent facts. Only use explicit information you have - but logical deduction from known data is allowed." I did this because I asked it to list all the links on this page (I was comparing myself to Jim Groom) and it made the URLs up. Via Hacker News.
Justin Young, Anthropic, 2025/12/08 [Direct Link]
Stephen Downes works with the Digital Technologies Research Centre at the National Research Council of Canada specializing in new instructional media and personal learning technology. His degrees are in Philosophy, specializing in epistemology, philosophy of mind, and philosophy of science. He has taught for the University of Alberta, Athabasca University, Grande Prairie Regional College and Assiniboine Community College. His background includes expertise in journalism and media, both as a prominent blogger and as founder of the Moncton Free Press online news cooperative. He is one of the originators of the first Massive Open Online Course, has published frequently about online and networked learning, has authored learning management and content syndication software, and is the author of the widely read e-learning newsletter OLDaily. Downes is a member of NRC's Research Ethics Board. He is a popular keynote speaker and has spoken at conferences around the world.

Stephen Downes,
stephen@downes.ca,
Casselman
Canada
Aaron Bastani interviews Cory Doctorow in a video that is essentially a recital of Cory Doctorow's greatest hits. I've been listening to it as I create today's newsletter (has it influenced me? who knows?). It's 1:20:24 so give yourself some time. It's a good video though. Via pretty much everyone.
Aaron Bastani, Cory Doctorow, YouTube, 2025/12/08 [Direct Link]

This article is making two claims: first, that news media are increasingly dependent on AI for content and editorial decisions, and second, that the owners of these companies (both AI and news media) are pushing AI steadily to the right of the political spectrum. "As AI tools become essential to how journalism gets produced — for research, for drafting, for summarization - the biases built into those tools will invisibly shape the output." The presumption, of course, is that these pressures and biases didn't exist in media before AI took centre stage. But I question that assumption. (I also need to mention Nieman Lab's new user-hostile web page design - not only is it really hard to read, it noticeably slows down execution of everything in Firefox (on Chrome it's OK, but it's still an assault on the senses)).
Parker Molloy, Nieman Lab, 2025/12/08 [Direct Link]

The story here is that coverage critical of AI has been authored in major media outlets by journalists funded by the Tarbell Center for AI Journalism, which in turn is funded by the Future of Life Institute, which we read here "is dedicated to warning about AI risks." Tarbell, for its part, says "we maintain a strict firewall between our funding and our fellows' editorial output." But of course Tarbell has already exercised its influence through the selection of fellowship winners. I think this is just one more example of how much 'authoritative' journalism (NBC News, Bloomberg, Time, The Verge, and The Los Angeles Times, etc. etc. etc.) is actually paid for by third parties. If we're living in a post-truth world, it started long before there was social media and AI. Via Jeff Jarvis.
Semafor, 2025/12/08 [Direct Link]

This article contains another iteration of the argument that 'If the product is free, then you are the product' (it's stated slightly differently in the article). Ian O'Byrne writes, correctly, that "What we trade away in exchange for ease of use is our privacy, personal data, communications, and creative work. All of which can be quietly harvested and exploited by powerful companies." But there is an exception to the rule. You're not paying for this newsletter (and many like it) and yet you're not the product - you're just a lucky bystander who gets to look in as I try to figure out the world. There's a lot of stuff that's free and where you're not a product being packaged and sold to advertisers or worse. And there are many things you pay for where you're still the product. Price isn't what makes you the product. Something else (control, maybe? sovereignty?) is.
Ian O'Byrne, Taking Back Control: Why Digital Sovereignty Matters, 2025/12/08 [Direct Link]

So I think everything Anil Dash says here is right, but it's wrong. What's right? Like I said, everything: he's describing how to create a message that will reach everyone it needs to reach, with enough fidelity that they can understand it and act on it. This means (among other things) that the people who receive the message will have to be able to talk about you without you. You know - the way I'm talking about Anil Dash right now. He has no input; it's all on me, but it's his message being spread. So where is it wrong? Well - if everybody does this, we'll just drown in clearly communicated messages that everyone can understand. The very idea of sending a message to 'everybody' doesn't scale. Despite what Dash says, if enough people are doing it, it is impossible to spread a message at scale without the resources. That's why even today, with a global communications system anyone can use, we're still drowning in corporate slop and advertising.
Anil Dash, 2025/12/05 [Direct Link]
Last Updated: Dec 08, 2025 5:37 p.m.

