Three Years from GPT-3 to Gemini 3
Ethan Mollick,
One Useful Thing,
2025/11/18
Ethan Mollick reviews Google's new Gemini 3 AI model and reflects on how far the technology has come in the last three years. Contrary to the many claims that AI has stopped improving or reached a cognitive plateau, it appears to still be getting better. "Three years ago, we were impressed that a machine could write a poem about otters. Less than 1,000 days later, I am debating statistical methodology with an agent that built its own research environment. The era of the chatbot is turning into the era of the digital coworker." I'm still writing articles and OLDaily posts without AI assistance, and that won't change, but it's getting hard for me to imagine doing anything technical without it.
Web: [Direct Link] [This Post]
Cloudflare coughs, half the internet catches a cold
Richard Speed,
The Register,
2025/11/18
Cloudflare suffered an outage today, taking half the internet down with it. This is probably not news to most readers, but it offers an opportunity to once again point to the folly of running the entire internet through a single centralized gateway, no matter how well-meaning the company may be.
Web: [Direct Link] [This Post]
Beyond Authorship Vibes: Preserving Judgment and Trust in the Age of AI
Eli Alshanetsky,
Daily Nous,
2025/11/18
Eli Alshanetsky advocates "a bottom-up approach to governing AI: rather than trying to stop progress at the level of capability development, we secure how AI is allowed to enter and operate within human practices." In other words, 'AI in the loop', sometimes also known as 'Human Autonomy Teaming'. I'm in agreement with this, but would like to focus on a completely separate point, based on this: "An essay can be rough around the edges and still count as real progress if the student grappled with an idea. When the work no longer reveals the student's thinking, the teacher's feedback has nowhere to land." I see the merit of this point, but have to ask: what counts as 'grappling with an idea' and why is it valuable? There's a leap here from 'show your work' to 'where you went wrong' to 'effective intervention'. The argument presumes the effectiveness and efficiency of the student essay as a diagnostic tool, but that presumption may well be wrong, especially given the affordances writing with AI may generate.
Web: [Direct Link] [This Post]
Building resilient social media
Mathew Lowry,
Medium,
2025/11/18
Readers see this argument a lot on OLDaily, and here it is again: resilient media is decentralized, a complex network of sites connected by protocol, not owned or managed by any individual or institution. "The Web is a commons, guaranteed by its protocol. I can build and manage a website and publish it to the world... I don't need permission to publish, you don't need permission to read, and there's nothing billionaires or governments can do about it because they can't get between us." That's not completely true, of course - they can interfere with open networks, but it's a lot harder when media is decentralized. The original version of this article is here, on the massive wiki. See also: The Blacksky path towards resilient social media (wiki version).
Web: [Direct Link] [This Post]
Profit, Education, and Student Grants | HESA
Alex Usher,
HESA,
2025/11/18
Alex Usher argues against banning Canada Student Financial Assistance Program (CSFAP) grants to students attending for-profit institutions. Here's the argument, in a (tiny) nutshell: "it implies that ownership is the cause of bad results. And, frankly, I am not sure this is true." And poof! The entire left of the political spectrum has disappeared in a single sentence. Here's the real problem, as I see it: private institutions are fundamentally extractive. Decisions are made on the basis of profit or some similar self-serving motivation. They provide as little as possible for the revenue they receive. Public institutions, by contrast, are fundamentally constructive. They exist to promote the social good or (if we believe Mark Carney) serve underlying social values. This is not a guarantee of better outcomes, but intentions matter.
Web: [Direct Link] [This Post]
Inside Yale's Quiet Reckoning with AI
Alex Moore,
The New Journal,
2025/11/18
I'm going to make an unusual admission here: I never cheated in school or university. Sure, it meant taking the occasional C, but I can't imagine making a decision to use ChatGPT any more than I would have collaborated with friends or relied on inside information. So what do I make of Justin Weinberger's lament that "When the takeaway is finally, a gifted and morally sensitive student at a top university realizes that cheating is bad for her, what hope should we have for everyone else?" Maybe we should look at what we're teaching. "Part of Yale culture, he said, is to 'take advantage of every possible thing.'" Yeah. Sure, "Demanding extracurricular activities crowd out learning." I spent my university years in the student newspaper office. I wrote my essays at the last minute, usually as a first draft the day (or night) before it was due. I learned a ton at university, but what I didn't learn was to "take advantage". Thus (to my mind) the only significant difference between me and a Yale graduate.
Web: [Direct Link] [This Post]
We're Building the Wrong Intelligence
Ian O'Byrne,
Digitally Literate,
2025/11/18
I think this article makes some good points (Ian O'Byrne has gotten a lot better recently, and I'm not complaining). The core argument here is that existing AI systems are optimized for engagement, not learning. They're not based on getting the facts right, understanding basic principles like causation, or prioritizing student safety. They just want to draw you into a conversation and keep you talking, driving up that token count (which, eventually, you will have to pay for). I think it's a good point and that we should think about what we want to use in education before we build frameworks and governance for it. Image: nifty graph of related articles that's a bit hard to read but fun to play with.
Web: [Direct Link] [This Post]
Dialogue at Scale: AI, Soft Skills, and the Future of Assessment
Mihnea Moldoveanu, Tanya Gamby, David Kil, Rachel Koblic, Paul LeBlanc, George Siemens,
EDUCAUSE Review,
2025/11/18
This is a decent article that takes a high-level look at the use of AI to create dialogue-based learning resources. For example, instead of offering a text that students can read, dialogue-based systems offer an interactive version of the text that students can converse with. The article lists a number of such tools already in existence; alongside OpenAI, Anthropic, and Google's dialogic tools, it references: MuDoC, a multi-document reader; VISAR, a scenario-building interactive writing assistant; and StatZ, which models statistical concepts (not to be confused with the betting tool of the same name). Accordingly, the article suggests "rethinking the canon of learning activities," replacing reading, writing, modeling and coding with "interactive" versions of the same thing. I'm left wondering whether this 'interactive' mode is what we want in many, if not most, learning scenarios. It may work well for processes that are already iterative - writing a long article, say, or coding a complex application. But it feels like an interactive function would slow me down in a lot of cases where I don't really need to have a conversation.
Web: [Direct Link] [This Post][Share]
This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, click here.
Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.
Copyright 2025 Stephen Downes. Contact: stephen@downes.ca
This work is licensed under a Creative Commons License.