This is quite a good article and more than does the job of setting the tone for today's OLDaily. What we're offered here is an excellent statement of the idea that human consciousness is fundamentally distinct from artificial intelligence. There's a lot going on in this article, but this captures the flavour of the argumentation: "Unlike computers, even computers running neural network algorithms, brains are the kinds of things for which it is difficult, and likely impossible, to separate what they do from what they are." The article hits on a number of subthemes: the idea of autopoiesis, from the Greek for 'self-production'; the way brains and computers differ in how they relate to time; John Searle's biological naturalism; the simulation hypothesis; "and even the basal feeling of being alive". All in all, "these arguments make the case that consciousness is very unlikely to simply come along for the ride as AI gets smarter, and that achieving it may well be impossible for AI systems in general, at least for the silicon-based digital computers we are familiar with." Yeah - but as Anil Seth admits, "all theories of consciousness are fraught with uncertainty."
Anil Seth, NOEMA, 2026/01/14 [Direct Link]
OK, how do I express this? Here's the conclusion of this long O'Reilly article on humans, art and creativity: "The fundamental risk of AI 'artists' is that they will become so commonplace that it will feel pointless to pursue art, and that much of the art we consume will lose its fundamentally human qualities." Now, we humans have always made art, long before anyone thought of paying for it - long before there was even money. Why? What makes Taylor Swift better than an AI-generated singer-songwriter? My take is that it's not the content of the art but its provenance. I've written before about the human experience behind her work. Similarly, what's the difference between my videos and photosets from Iceland and the somewhat better versions a machine might create? It's that I was there and I'm reporting on the lived experience. There's nothing in the media itself that distinguishes AI-generated from human-generated work; the difference lies only in why it was made and why we're interested. If you want to get at why any of this matters, you have to look past the economics of it, and ask why it was ever made at all.
Anjali Ramakrishnan, O'Reilly, 2026/01/14 [Direct Link]

There are some interesting bits in this article (22 page PDF) even if, in my view, the research basis doesn't allow us to generalize meaningfully. The first is the proposition that news reporting by humans is fundamentally different from that produced by machines. "Journalists engage in selective representation, deciding which events in the world are noteworthy or relevant to their audience, thus shaping public discourse. They accordingly choose words based on what they deem best captures what they wish to report or analyze... While human text represents ideas and can typically provide reasoning behind the choice of words and constructions, algorithmically generated texts merely render outputs without such explanations." Second, and as a result, "the instrumental, efficiency-oriented purposes served by LLMs exist in tension with the values expressed by the individuals interviewed in this study, particularly around accuracy, transparency, editorial autonomy, and accountability." My scepticism exists along two fronts: first, whether the reporter's art is based as much on reason as averred in the article, and second, whether machines are not in fact capable of exercising the same mechanisms themselves.
Alexander Wasdahl, Ramesh Srinivasan, First Monday, 2026/01/14 [Direct Link]

Peter Adamson's monumental 'History of Philosophy Without Any Gaps' podcast series has made it to the mid-1600s and Pascal's Wager. Here it is: "Let us weigh the gain and the loss in wagering that God is. Let us estimate these two chances. If you gain, you gain all; if you lose, you lose nothing." By contrast, if you wager that God doesn't exist, you risk losing all, while gaining only a finite amount if you win. Arguably all of choice, game and decision theory follows from this single challenge (to say nothing of a whole school of theological argument). For me, the significance is that it marks the transition to thinking of life in terms of 'value', that is, something that can be counted, weighed and measured. Pascal's wager falls in the middle of the Cartesian revolution I've written about elsewhere, where we transition from sensing to calculating. We are at the end of this stage (Jeff Jarvis describes this in the Gutenberg Parenthesis, while John Ralston Saul offers his take on the same phenomenon in Voltaire's Bastards). Can we imagine a future where we are no longer weighed, measured and found wanting?
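The decision-theoretic skeleton is easy to make explicit. Here is a minimal sketch of the wager as an expected-utility calculation; the symbols p and f and the infinite-utility framing are the standard textbook reconstruction, not Pascal's own notation:

\[
\begin{aligned}
E[\text{wager for God}] &= p \cdot \infty + (1-p)\cdot(-f) = \infty \\
E[\text{wager against God}] &= p \cdot (-\infty) + (1-p)\cdot f = -\infty
\end{aligned}
\]

Here p > 0 is any nonzero credence that God exists and f is any finite worldly stake. So long as p is positive, wagering for God dominates no matter how small p is, and it is this dominance-under-uncertainty structure that later decision theory formalizes.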
Peter Adamson, History of Philosophy Without Any Gaps, 2026/01/14 [Direct Link]

This seems to be a day for focusing on human skills in an AI world, and yet I find the descriptions of them to be so lacking. This article is a case in point. John Spencer begins by criticizing efficiency as a value, which is fine, but we need to look at what the alternatives are, and why we prefer them. Here are the sorts of human skills Spencer references: confusion, productive struggle, slower learning, divergent thinking, one's own voice, empathy, contextual understanding, wisdom, and extended focus. Sure, these are all human traits. Some of them could probably be accomplished by an AI, while others we probably wouldn't bother with (for example, it's probably hokum that slower learning produces 'lasting knowledge'). I don't think humans are unique, or especially excel, in any part of the cognitive domain. Rather, what we bring to the table is embodied human experience. But we don't see any of the 'how to adapt to AI' literature talking about 'how to have experiences'.
John Spencer, Spencer Education, 2026/01/14 [Direct Link]

I think there are some good points to be made in this longish post ruminating on how to decide what needs to be made and what needs to be done in the world. The main advice is in the title, where 'redemptive' is defined as "I sacrifice, we win" and contrasted with 'exploitive' ("I win, you lose") and 'ethical' ("I win, you win"). This is more than just 'catering to the desires of your users'; it is instead "seeking to understand their deepest needs and to seek their good, even if that means that we cannot maximize our returns or profit margin." This is hard because 'their good' is often not seen as also 'my good'. The same post also references Kurt Vonnegut Jr.'s novel Player Piano, which is probably my favourite of all the Vonnegut novels.
Josh Brake, The Absent-Minded Professor, 2026/01/14 [Direct Link]

