AI is Destroying the University and Learning Itself
Ronald Purser,
Current Affairs,
2025/12/04
I've written articles like this - the ones where I take a pile of notes (like, say, these OLDaily posts) from the last year or so, organize them into themes, and then build a narrative around them. I peaked in the form around 12 years ago in London, at Greenwich and the LSE. This article is like that, documenting the many, many ways AI is going to kill the university or, in the words of Tyler Cowen, ensure it "will persist as a dating service, a way of leaving the house, and a chance to party and go see some football games." The sheer volume of notes is evidence of a community roiling. But this is a community based on rigid hierarchy and protocol, one that exploits a large percentage of its workforce, denies access to the majority of society, and fails a third of those who enter. I'm not going to say that AI is the answer to all things, but it's directly impacting things that have needed attention for as long as I have been active in the sector.
What does it mean to understand language?
Colton Casto, Anna Ivanova, Evelina Fedorenko, Nancy Kanwisher,
arXiv,
2025/12/04
I have mixed feelings about this paper (17 page PDF). On the one hand, the turning point in my efforts to learn French came when I began to apply frames to what I was trying to say (so I could decide things like tense and gender once and then not worry about them). On the other hand, I think of fMRI as a modern form of phrenology. What I can derive from this is the idea that language learning isn't just about language. As the authors put it, "a deep understanding of language... requires the exportation of information from the brain's core language system to other cognitive and neural systems that can build models." It's not, they say, that the whole brain is responsible for language processing; it's just that language processing depends on (shall we say) other systems that have multiple uses.
The Q, K, V Matrices
Arpit Bhayani,
2025/12/04
This is a useful reconstruction of the transformer architecture, introduced in the 2017 paper describing 'attention', that kicked off what would become the AI revolution starting in 2022. As Arpit Bhayani writes, "at the core of the attention mechanism in LLMs are three matrices: Query, Key, and Value. These matrices are how transformers actually pay attention to different parts of the input." The attention weights tell us which words in a sentence matter the most, and the three matrices are what let the model more accurately predict what should come next. This is why AI isn't going away; look how simple and straightforward this is.
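To make that concrete, here's a minimal sketch of scaled dot-product attention, the computation the Query, Key and Value matrices feed into. This is my own illustration (in Python with NumPy), not code from Bhayani's article, and the dimensions are arbitrary:

```python
import numpy as np

def attention(X, W_q, W_k, W_v):
    # Project each token embedding into the three roles:
    Q = X @ W_q   # Query: what each token is looking for
    K = X @ W_k   # Key: what each token can be matched against
    V = X @ W_v   # Value: the content each token contributes
    # Score every token against every other token, scaled by key size
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # Softmax each row so the scores become attention weights
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)
    # Each output token is a weighted mix of all the value vectors
    return w @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))   # a 'sentence' of 4 tokens, 8-dim embeddings
W_q, W_k, W_v = [rng.normal(size=(8, 8)) for _ in range(3)]
print(attention(X, W_q, W_k, W_v).shape)   # -> (4, 8)
```

A real transformer runs this in parallel across many heads and adds masking, but the core mechanism really is just these few lines.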
Beyond Infographics: How to Use Nano Banana to *Actually* Support Learning
Philippa Hardman,
Dr Phil's Newsletter,
2025/12/04
This is quite a good discussion on how to use Nano Banana to support genuine learning activities. As Philippa Hardman points out, the software is trained using text, not images, and thus avoids many of the issues of other image-generating software, while having issues of its own (including, as I showed a few days ago, making stuff up instead of relying on the source). But these examples - things like creating metaphors or generating fill-in-the-blank images - are generally resilient to that.
James Marriott calls my critique "frustratingly naive."
Carlo Iacono,
Substack,
2025/12/04
I think this is well stated: "The cognitive impacts of smartphone adoption are documented... So where's the real disagreement? His diagnosis: screen culture inherently biases toward poorer quality thought. As he puts it, 'the general bias of a screen culture is towards poorer quality thought and information.' The medium itself degrades cognition. My diagnosis: we've built extractive attention economies that exploit cognitive vulnerabilities for profit, and we're blaming the victims of this extraction for their own exploitation. The problem isn't screens; it's what we've designed screens to do." Image: Kidtown Melbourne.
The résumé is dying, and AI is holding the smoking gun
Benj Edwards,
Ars Technica,
2025/12/04
This is the sort of thing digital badges were supposed to manage, but really, there was never any hope. "Due to AI, the traditional hiring process has become overwhelmed with automated noise... The flood of ChatGPT-crafted résumés and bot-submitted applications has created an arms race between job seekers and employers, with both sides deploying increasingly sophisticated AI tools in a bot-versus-bot standoff that is quickly spiraling out of control." Where does this leave my own prediction, specifically that AI will allow employers to find potential staff through their online work? I'm not sure - I mean, it could still work, especially if we don't care whether or not they used AI to produce that work.
I Went All-In on AI. The MIT Study Is Right.
Josh Anderson,
The Leadership Lighthouse,
2025/12/04
So let's be clear about what was defined as 'failure' here: "I got the product launched. It worked (but) I needed to make a small change and realized I wasn't confident I could do it. My own product, built under my direction, and I'd lost confidence in my ability to modify it... Not immediate failure - that's the trap. Initial metrics look great. You ship faster. You feel productive. Then three months later, you realize nobody actually understands what you've built." But is this failure, really? Perhaps it is, by the traditional standards of software design. But why not just ask the AI to make the change? As usual, what counts as 'failure' depends on what you're trying to do. Create an application that works? AI is often successful. Become an expert software developer? Then having someone (or something) else do the work for you is a non-starter. But we knew that.
Looking for Root Causes is a False Path: A Conversation with David Blank-Edelman
David Blank-Edelman, Michael Stiefel,
InfoQ,
2025/12/04
I watch Mayday with interest not simply because I like airplanes so much but because the investigation of why aircraft crash (and why so few of them do) teaches me a lot about how we know what we know. This article isn't about airplane crashes; it's about site reliability. But many of the conclusions are the same: that there's rarely a single 'root cause' for any event, that 'human error' is rarely the cause of any crash, and that complex systems are, well, complex, which means there are elements of them that elude understanding entirely. As Michael Stiefel says in this interview, it's like "Rilke's famous expression, 'Living the question', because an architecture is never done. You never should think of it as done."
Copyright 2025 Stephen Downes Contact: stephen@downes.ca
This work is licensed under a Creative Commons License.