The Mythology Of Conscious AI
Anil Seth,
NOEMA,
2026/01/14
This is quite a good article and more than does the job of setting the tone for today's OLDaily. What we're offered here is an excellent statement of the idea that human consciousness is fundamentally distinct from artificial intelligence. There's a lot going on in this article, but this captures the flavour of the argumentation: "Unlike computers, even computers running neural network algorithms, brains are the kinds of things for which it is difficult, and likely impossible, to separate what they do from what they are." The article hits on a number of subthemes: the idea of autopoiesis, from the Greek for 'self-production'; the way brains and computers differ in how they relate to time; John Searle's biological naturalism; the simulation hypothesis; "and even the basal feeling of being alive". All in all, "these arguments make the case that consciousness is very unlikely to simply come along for the ride as AI gets smarter, and that achieving it may well be impossible for AI systems in general, at least for the silicon-based digital computers we are familiar with." Yeah - but as Anil Seth admits, "all theories of consciousness are fraught with uncertainty."
Web: [Direct Link] [This Post][Share]
The Problem with AI "Artists"
Anjali Ramakrishnan,
O'Reilly,
2026/01/14
OK, how do I express this? Here's the conclusion of this long O'Reilly article on humans, art and creativity: "The fundamental risk of AI 'artists' is that they will become so commonplace that it will feel pointless to pursue art, and that much of the art we consume will lose its fundamentally human qualities." Now, we humans have always made art, long before anyone thought of paying for it - long before there was even money. Why? What makes Taylor Swift better than an AI-generated singer-songwriter? My take is that it's not the content of the art; it's the provenance. I've written before about the human experience behind her work. Similarly, what's the difference between my videos and somewhat better photosets from Iceland, and something a machine might create? It's that I was there and I'm reporting on the lived experience. There's nothing in the media itself that distinguishes AI-generated from human-generated work; the difference lies only in why it was made and why we're interested. If you want to get at why any of this matters, you have to look past the economics of it, and ask why it was ever made at all.
Web: [Direct Link] [This Post][Share]
Algorithms and authors: How generative AI is transforming news production
Alexander Wasdahl, Ramesh Srinivasan,
First Monday,
2026/01/14
There are some interesting bits in this article (22-page PDF) even if, in my view, the research basis doesn't allow us to generalize meaningfully. The first is the proposition that news reporting by humans is fundamentally different from that produced by machines. "Journalists engage in selective representation, deciding which events in the world are noteworthy or relevant to their audience, thus shaping public discourse. They accordingly choose words based on what they deem best captures what they wish to report or analyze... While human text represents ideas and can typically provide reasoning behind the choice of words and constructions, algorithmically generated texts merely render outputs without such explanations." Second, and as a result, "the instrumental, efficiency-oriented purposes served by LLMs exist in tension with the values expressed by the individuals interviewed in this study, particularly around accuracy, transparency, editorial autonomy, and accountability." My scepticism runs along two fronts: first, whether the reporter's art is based as much on reason as the article avers, and second, whether machines are not in fact capable of exercising the same mechanisms themselves.
Web: [Direct Link] [This Post][Share]
HoP 484 You Bet Your Life: Pascal's Wager
Peter Adamson,
History of Philosophy Without Any Gaps,
2026/01/14
Peter Adamson's monumental 'History of Philosophy Without Any Gaps' podcast series has made it to the mid-1600s and Pascal's Wager. Here it is: "Let us weigh the gain and the loss in wagering that God is. Let us estimate these two chances. If you gain, you gain all; if you lose, you lose nothing." By contrast, if you wager that God doesn't exist, you risk losing all, while gaining only a finite amount if you win. Arguably all of choice, game and decision theory follows from this single challenge (not to mention a whole school of theological argument). For me, the significance is that it marks the transition to thinking of life in terms of 'value', that is, something that can be counted, weighed and measured. Pascal's wager falls in the middle of the Cartesian revolution I've written about elsewhere, where we transition from sensing to calculating. We are at the end of this stage (Jeff Jarvis describes this in The Gutenberg Parenthesis, while John Ralston Saul offers his take on the same phenomenon in Voltaire's Bastards). Can we imagine a future where we are no longer weighed, measured and found wanting?
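To make the asymmetry concrete, here is a minimal sketch of the expected-value reasoning behind the wager. It is not from the podcast; the probability and the finite stake are arbitrary illustrative assumptions, since Pascal's point is precisely that any nonzero chance makes the infinite payoff dominate.

```python
# Illustrative expected-value comparison for Pascal's Wager.
# The probability p and the finite stake are arbitrary assumptions;
# the argument is that any nonzero p makes the infinite payoff dominate.

def expected_value(p, payoff_if_god_exists, payoff_if_not):
    """Probability-weighted average of the two outcomes of a wager."""
    return p * payoff_if_god_exists + (1 - p) * payoff_if_not

p = 0.01            # even a tiny credence that God exists
finite_stake = 1.0  # the finite, worldly cost of the wager

# Wager for: "if you gain, you gain all; if you lose, you lose nothing" (but the stake).
wager_for = expected_value(p, float("inf"), -finite_stake)

# Wager against: risk losing all, gain only a finite amount if you win.
wager_against = expected_value(p, float("-inf"), finite_stake)

print(wager_for, wager_against)  # inf -inf: wagering that God is dominates
```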
Web: [Direct Link] [This Post][Share]
The Human Advantage: Nine Skills We Can’t Afford to Lose in an AI-Powered World
John Spencer,
Spencer Education,
2026/01/14
This seems to be a day for focusing on human skills in an AI world, and yet I find the descriptions of them so lacking. This article is a case in point. John Spencer begins by criticizing efficiency as a value, which is fine, but we need to look at what the alternatives are, and why we prefer them. Here are the sorts of human skills Spencer references: confusion, productive struggle, slower learning, divergent thinking, one's own voice, empathy, contextual understanding, wisdom, and extended focus. Sure, these are all human traits. Some of them could probably be accomplished by an AI, while with others we probably wouldn't bother (for example, it's probably hokum that slower learning produces 'lasting knowledge'). I don't think humans are unique, or especially excel, in any part of the cognitive domain. Rather, what we bring to the table is embodied human experience. But we don't see any of the 'how to adapt to AI' literature talking about 'how to have experiences'.
Web: [Direct Link] [This Post][Share]
Be Redemptive
Josh Brake,
The Absent-Minded Professor,
2026/01/14
I think there are some good points to be made in this longish post ruminating on how to decide what needs to be made and what needs to be done in the world. The main advice is in the title, where 'redemptive' is defined as "I sacrifice, we win" and contrasted with 'exploitive' ("I win, you lose") and 'ethical' ("I win, you win"). This is more than just 'catering to the desires of your users'; it means "seeking to understand their deepest needs and to seek their good, even if that means that we cannot maximize our returns or profit margin." This is hard because 'their good' is often not seen as also 'my good'. The same post also references Kurt Vonnegut Jr.'s novel Player Piano, which is probably my favourite of all the Vonnegut novels.
Web: [Direct Link] [This Post][Share]
Don’t let AI change what it means to teach
Allison Littlejohn,
National Institute of Education (NIE),
2026/01/14
"If we want to know where AI belongs in schools, we have to be honest about what teaching is," writes Allison Littlejohn in this Singapore publication. "Teaching isn't a bundle of tasks. It's a demanding set of cognitive, emotional and social practices that machines can assist with but not replicate." The article looks at a number of things she argues only teachers can do: interpreting "subtle cues such as shifts in attention, hesitation, confusion or sudden insight"; sequencing "concepts and ideas, anticipate misconceptions, frame productive questions and construct sequences"; and shaping "the emotional climate in which learning happens." There's also a plug for Navigo Game, developed to teach children learning English as a foreign language. This tool "demonstrates that teachers, students and their parents are important stakeholders who must be co-creators if the technology is to address their needs." Well, it actually does no such thing, and as important as the three sets of things she describes, there isn't a good reason to believe that non-teachers, or even non-humans, can't or won't be able to perform these functions. Image: Wikipedia.
Web: [Direct Link] [This Post][Share]
Why Learner Wallets Will Fail (And How to Make Sure They Don’t)
Mason Pashia, Beth Ardner,
Getting Smart,
2026/01/14
This is quite an interesting post about learner wallets (which I will take to be what we're now calling 'personal learning environments' (PLEs), also known as 'learner employment records' (LERs)). It says, in essence, that developers are focusing on what industry needs by treating them as summaries of accomplishments (i.e., 'summarizing identity') rather than what individual learners need, which is a way to build accomplishments (i.e., 'building identity'). Successful apps don't sell themselves as "preparation for your professional future." Instead, "they work because they tap into something deeper: the fundamental human need to understand, craft, and articulate who we are, all while being within full control of the user. This means radical control over privacy settings, data sovereignty and more." The article builds on this idea with a scenario describing "Leo: The Storm Chaser". I could quibble about the details of this, but not with the core idea. There's some good discussion of key principles and a list of sample applications. This is probably two articles combined into one, as two separate authors are listed; I'll credit both.
Web: [Direct Link] [This Post][Share]
There are many ways to read OLDaily; pick whatever works best for you.
This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.
Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.
Copyright 2026 Stephen Downes Contact: stephen@downes.ca
This work is licensed under a Creative Commons License.