Learn Your Way: Reimagining textbooks with generative AI
Gal Elidan, Yael Haramaty,
Google Research,
2025/09/22
I've been talking about using AI to generate open learning resources for what feels like years and we're getting ever closer to that prospect. Here we have an article from Google Research describing the process as it applies to textbooks. But I think they're making it harder than it has to be. First, why generate a whole textbook? On any given day, all you need is a specific unit or module. Second, why create something that needs to be 'personalized' or depends on 'multiple representations of content'? Just create each item from scratch for the learner based on their actual current needs - learning that is personal rather than merely personalized. I would also hope the AI-generated text wouldn't be as awful as we see in the illustrated example. That said, the idea is sound.
Web: [Direct Link] [This Post][Share]
Web Standards and the Fall of the House of Iamus
Infrequently Noted,
2025/09/22
This is an interesting, detailed and (slightly) opinionated piece about web standards. It's part four of the series Effective Standards Work. In practice, writes Alex Russell, "(Standards-body) Working Groups do not invent the future, nor do they hand down revealed truths by divining entrails like prophets of the House of Iamus. In practice, they are diligent, thoughtful historians of recent design expeditions." That's why you'll see me often refer to 'specifications' or 'protocols' or perhaps 'proposals' (if they're being considered by a standards body). There's a lot that happens before something becomes a standard - a lot of discussion, debates, prototypes and playing out. Finally, it gets to the standards stage, which is, as Russell says, "all about patents. Standards Development Organisations are, practically speaking, IPR clearing houses." Readers interested in this topic will probably have already consulted the W3C's Web Standards documentation.
Web: [Direct Link] [This Post][Share]
The Death of Search: How Shopping Will Work In The Age of AI
Alex Rampell, Justine Moore,
The a16z Newsletter,
2025/09/22
Can we imagine AI being 'The Death of Advertising'? I think that if we substitute the word 'learning' for 'shopping' a lot of this article carries over, but a lot of it doesn't, and the trick is to distinguish between the two. The clue, I think, lies in the subhead: "The web is unhealthy, and AI agents are about to rewrite how we shop." Now whether you think "the web is unhealthy" depends a lot on your point of view. I still love the web - but I don't spend much time on the commercial side of it. I'd rather read blog posts than online magazines, chat with friends on Mastodon than doomscroll through X/Twitter, and learn from individual videos and how-to posts than from subscription-based courses and programs. There's a lot AI can do to make this experience better, but streamlined shopping isn't one of them. There's some stepping through that's necessary - there's an interesting section on Costco part way through this article - showing how the relationship, rather than the commercial transaction, might be the really valuable thing.
Web: [Direct Link] [This Post][Share]
Beyond Newtonian causation in neuroscience
Luiz Pessoa,
The Transmitter: Neuroscience News and Perspectives,
2025/09/22
One of the things students learn in first-year philosophy is David Hume's argument that the notion of cause-and-effect is a 'useful fiction' we create through 'custom and habit', and not a real thing in the world. And indeed, the determinism offered by traditional models of cause-and-effect poses a challenge for our conception of things like free will. But what if causation isn't purely mechanical in the way the traditional model suggests? New models of cognition and neuroscience may be pushing us in that direction, according to this article. For example, there's the concept of 'criterial causation', which "emphasizes the broader conditions under which neural activity becomes effective in producing behavior." Or there's 'semantic causation', based on "the meanings (neural signals) represent to the organism based on past experiences and adaptive significance."
Web: [Direct Link] [This Post][Share]
Turning the Corner
Alex Usher,
HESA,
2025/09/22
Alex Usher returns to one of his favourite topics: increasing tuition fees. He offers two basic arguments: first, fees are lower than in the recent past as a percentage of median family income (aged 45-54), and second, if we total aid from all sources, "we spend about $3 billion more in student aid than we take in from tuition fees." He ends with an appeal for "the courage to put the requirements of institutions that actually build economies and societies ahead of the cheap, short-term sugar highs of chasing things like 'affordability'." Now there are good arguments for funding these institutions, but given that, if you're a government, why would you ask those least able to pay to cover the cost of supporting them? And if these institutions are so important for the economy, why would you make it harder to benefit from them? This is especially the case where we expect institutions to support lifelong learning, not just the 18-23 rich kid set. If you're a young adult without parental support (as I was), then forget about education. If you have low-income parents, then forget about education. How does this help the country? The higher tuition fees are, the less institutions are doing for the country as a whole, and the less likely it is that governments will want to fund them.
Web: [Direct Link] [This Post][Share]
What Will Academia.edu Do with Its New Rights to Your Name, Likeness, and Voice?
Justin Weinberg,
Daily Nous,
2025/09/22
Justin Weinberg reports that "Users of the Academia.edu service are cancelling their subscriptions in response to perceived overreach by the firm in its recent update to its terms of service." The new terms grant Academia "a worldwide, irrevocable, non-exclusive, transferable license, permission, and consent for Academia.edu to use your Member Content and your personal information (including, but not limited to, your name, voice, signature, photograph, likeness, city, institutional affiliations, citations, mentions, publications, and areas of interest) in any manner." It also grants additional rights over uploaded content. Academia has been demanding users create accounts to read uploaded papers, so this is the natural progression of this data squeeze.
Web: [Direct Link] [This Post][Share]
Why Language Models Hallucinate
Adam Tauman Kalai, Ofir Nachum, Santosh S. Vempala, Edwin Zhang,
arXiv,
2025/09/22
This paper argues that "hallucinations in pretrained language models will arise through natural statistical pressures... due to the way most evaluations are graded -- language models are optimized to be good test-takers, and guessing when uncertain improves test performance." Or as I argued today on Mastodon, "Young Stephen and young AI reached the same conclusion when it comes to answering questions... always write something in response to test questions. Leave it blank, get a zero. Write something (and even better, take a guess on multiple choice) and you'll get at least partial marks." Probably should have posted this link earlier, but better late than never.
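To make that incentive concrete, here's a minimal sketch - my own illustration, not from the paper - of the expected mark on a single four-option multiple-choice question under the two strategies, assuming binary right/wrong grading with no penalty for wrong answers.

```python
# Illustrative only (assumed setup, not from the paper): expected mark on one
# 4-option multiple-choice question, with no penalty for wrong answers.
n_options = 4

expected_if_blank = 0.0              # leave it blank, get a zero
expected_if_guess = 1.0 / n_options  # uniform guess: 25% chance of full marks

print(f"Expected mark if blank:    {expected_if_blank:.2f}")
print(f"Expected mark if guessing: {expected_if_guess:.2f}")
```

Under that kind of grading, guessing always beats abstaining - which, the authors argue, is the same pressure that rewards models for answering confidently instead of saying "I don't know."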
Web: [Direct Link] [This Post][Share]
This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.
Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.
Copyright 2025 Stephen Downes Contact: stephen@downes.ca
This work is licensed under a Creative Commons License.