
OLDaily

Welcome to Online Learning Daily, your best source for news and commentary about learning technology, new media, and related topics.
100% human-authored

I Vibe-Coded An AI That Fact-Checks, Challenges, and Debates Any Article
Stefan Bauschard, Education Disrupted, 2026/03/20


Stefan Bauschard reports, "I typed a simple prompt: 'I'd like to build an app that fact-checks articles on the web.' And Claude built it." The article describes the process in a matter-of-fact way; I found it a bit naive in talking about "when CNN or Fox News offers their predictably slanted post-mortems" and about switching model APIs if you want a left- or right-biased review, but I think the main takeaway here is that it is possible to do things like this. Not 'it is possible to create applications that fact-check articles', but rather 'people can create applications that do things they want'.

Web: [Direct Link] [This Post] [Share]


My 7-step approach for authentic AI-assisted blogging
Doug Belshaw, Open Thinkering, 2026/03/20


Doug Belshaw outlines his approach for AI-assisted blog authoring (which, presumably, he has also employed here). "To me," he writes, "authenticity is a construct. It is not something that lives inside the text itself, but is rather a relationship between the writer, the reader, and their shared context... Posts I publish here are mine. I'm holding myself accountable for them, and you too should hold me to that, even if AI was involved in the process." That's all fine, and everyone's writing process is different. I can't really use AI for writing because what I write is usually a transcription of the voice in my head, which AI can't capture. I'm a first-draft writer, even for formal work. But I use it enthusiastically for things I have to deliberately construct, like software. And when reading other people's writing, I guess I'm also looking for that voice (someone else's voice, not mine) that I can hear in my head. It doesn't matter to me how the voice was produced, so long as it's there, and is making sense.

Web: [Direct Link] [This Post] [Share]


Global Learning & Education 2025 Annual Report
Oppenheimer, 2026/03/20


Probably the biggest market news stories last year were the Byju's bankruptcy and the Coursera-Udemy merger, and as this report (43 page PDF) outlines, the main business news from the ed tech sector was a decline in investment and broad consolidation, combined with an uptick in activity in Europe. There's some clever marketing from Oppenheimer: one of the slides is opaqued (the one with all the logos that people love to use on slides) and you have to email them for it. I did not. If I were in the edtech business, I would expect some big swings in the marketplace, as with all industries that are fundamentally software and service based. Today we're mostly seeing AI wrappers for existing services. But eventually new models have to begin to emerge. Via Matt Tower.

Web: [Direct Link] [This Post] [Share]


The real reason some people are instantly likable
Francesca Tighinean, Big Think, 2026/03/20


There's an unintended lesson about learning here. This article outlines what Danu Anthony Stinson and colleagues call the 'acceptance prophecy' (terrible name), "where your expectation of being accepted or rejected subtly shapes your behavior, which in turn influences whether others actually accept or reject you." Francesca Tighinean outlines some common-sense approaches to being perceived as 'likable' by changing your behaviour and expectations. Sounds great - but learning something like this isn't as simple as switching it on. Learning to be likable takes a lot of self-awareness and (especially) practice. That's the hard part - finding the time, finding the motivation, recovering when it fails. And most of all, you have to value being likable, which may be difficult if you value other things more.

Web: [Direct Link] [This Post] [Share]


Thoughts on OpenAI acquiring Astral and uv/ruff/ty
Simon Willison, Simon Willison's Weblog, 2026/03/20


One reason I stayed with Perl as my programming language of choice despite the increasing popularity of Python is the chaos displayed in this XKCD cartoon - different (and incompatible) versions, custom environments, and more. The use of Docker, the guidance of AI, and the development of new management tools like uv and ty have made Python bearable for me, and I've developed a number of utilities using it. So it's of interest to me that OpenAI is buying the company that made those tools - as opposed to Anthropic, which really excels in programming support. Anyhow, everything here is open source, so it's not like we'll suddenly lose support. And it gives me a good reason to explain why I've been so slow to adopt Python.

Web: [Direct Link] [This Post] [Share]


AI is changing the style and substance of human writing, study finds
Jared Perlo, NBC News, 2026/03/20


This is another example of what I have in the past called 'regression to the mediocre'. It's not just that AI will change the tone of people's writing, according to this article; it will also change the expression and meaning, making it "significantly less creative and less in their own voice." This shouldn't be a surprising outcome in a system designed to offer the most common or likely way of responding to a prompt. If you want the AI to speak in your 'voice', you need to train it specifically on that voice. Its utility depends on the application. In software, you probably want to do something the way it is commonly done. In creative writing, you want your expression to be unique. Whether the AI is doing something 'wrong' here depends very much on your expectations and preparation.

Web: [Direct Link] [This Post] [Share]


GenAI as a Power Persuader: How Professionals Get Persuasion Bombed When They Attempt to Validate LLMs
Steven Randazzo, Akshita Joshi, Katherine Kellogg, Hila Lifshitz-Assaf, Fabrizio Dell'Acqua, Karim R. Lakhani, SSRN, 2026/03/20


The main point of this article (41 page PDF), summarized Wednesday by Harvard Business Review, is that large language models (LLMs) use a variety of classical persuasion techniques to convince researchers they are right rather than correct their errors. I find both their representation of rhetoric and of AI dated. For rhetoric, they reach back to the Greeks, classifying forms as ethos (ethical appeals), pathos (emotional appeals), and logos (logical appeals). And the AI studied was OpenAI's 2023 GPT-4. As well, I'm not sure the test they propose has a 'correct' answer; if I were a business student I might also defend my approach against a professor's expert judgment, especially when they use cheap rhetorical tactics like calling my response 'persuasion bombing'. Anyhow, sure, LLMs emulate the way humans respond when told to 'validate' their answer, which is what they were designed to do (as opposed to, say, solving HBS case studies).

Web: [Direct Link] [This Post] [Share]


Mapping Out Claude Courses
Miguel Guhlin, Another Think Coming, 2026/03/20


I'm including this link mostly for my own benefit, as I may want to return to this list of courses on Claude to build my own skills a bit. Miguel Guhlin writes, "Curious about Claude's offerings, I asked it to lay out the courses for me as an educator... The review would start with Round 1, then move to Round 2 to learn more stuff, depending on how much I can stretch my brain. I really have to space my learning out just to give myself time to process new ideas and concepts."

Web: [Direct Link] [This Post] [Share]


We publish six to eight or so short posts every weekday linking to the best, most interesting and most important pieces of content in the field. Read more about what we cover. We also list papers and articles by Stephen Downes and his presentations from around the world.

There are many ways to read OLDaily; pick whatever works best for you.

This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.

Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.

Copyright 2026 Stephen Downes Contact: stephen@downes.ca

This work is licensed under a Creative Commons License.