
OLDaily

Welcome to Online Learning Daily, your best source for news and commentary about learning technology, new media, and related topics.
100% human-authored

Retired and on to the Next Thing!
Geoff Cain, Brainstorm in Progress, 2026/03/31



Geoff Cain retires, says some nice things about me (aw, shucks) and describes his new life. "I currently have a couple of unfinished novels that I have been working on that I am going to finish... I have also been involved in art in various media, but in my retirement, I am focusing on acrylics and watercolors. It is not the end of my teaching, of course, because part of being involved in art is participating and sharing in community. I will be moving this blog to archive it somewhere and then pick up the New Adventures of Geoff somewhere else." You beat me by seven days, Geoff.



Critical thinking, expertise and intelligence
Peter Ellerton, The Education Contrarian, 2026/03/31



I just want to mention this article because I think these concepts are about to enter a period of considerable redefinition. What made me think this way was author Peter Ellerton's assertion that "intelligence contains almost no content knowledge," while I distinctly recall intelligence tests I have taken asking for very specific content knowledge (including, in one memorable example, the definition of 'ookpik'). Meanwhile, Ellerton reports, "critical thinking contains a substantial body of content knowledge," which I'm pretty sure isn't true at all. Finally, he writes that "expertise develops primarily through deliberate practice," which is partially true, though expertise is also (partially) describable as one's position in a community. For my own part, I think in the future we'll see 'intelligence' defined as 'natural capacity for pattern recognition', critical thinking as mastery of a particular type (or set of types) of pattern recognition, and expertise as the consonance of one's own pattern recognition with that of others in a specific domain. Or something like that.



AI could undermine meaningful learning unless feedback stays rooted in connection, study recommends
Phys.org, 2026/03/31



This appears to be a press release from the University of Surrey, edited for inclusion in Phys.org, describing research into what counts as meaningful feedback in the age of AI. It's interesting in passing to note that this sort of institutional press release is often the difference between a paper getting noticed and its languishing in obscurity; my soon-to-be former employer should take note. Anyhow, the work is based on the team's 2025 international manifesto on feedback in the age of AI (note that the link in the article is wrong; I point to the correct resource here). I personally find the manifesto to be pretty light, and would recommend a more substantial inquiry into this subject. Also, the story doesn't mention 'connection' specifically, though it does describe an approach that "treats feedback not as a set of comments, but as an ongoing process of dialogue, reflection and growth." Here's the link to the full paper, though it might be behind a paywall (I can't tell from my office).



Claude Code Snapshot for Research
GitHub, 2026/03/31



Generative AI is based on models built by neural networks, but there's a lot of surrounding code that makes them useful. If you're curious about it, there's an accidentally exposed snapshot of Anthropic's Claude Code currently available on GitHub. "This repository mirrors a publicly exposed Claude Code source snapshot that became accessible on March 31, 2026 through a source map exposure in the npm distribution." It's substantial: "Scale: ~1,900 files, 512,000+ lines of code." What I found useful was the list of tools and commands.
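For readers wondering what a "source map exposure" is: a JavaScript source map is a JSON file shipped alongside minified code, and when its optional "sourcesContent" field is populated it carries the original source files verbatim. A rough sketch of why recovery is then trivial (the file names and contents below are invented for illustration, not taken from the actual snapshot):

```python
import json

# A minimal, hypothetical source map of the kind shipped alongside a
# minified JavaScript bundle. The optional "sourcesContent" array, when
# present, embeds each original source file verbatim.
raw_map = json.dumps({
    "version": 3,
    "file": "bundle.min.js",
    "sources": ["src/cli.ts", "src/tools/bash.ts"],
    "sourcesContent": [
        "export function main() { /* ... */ }",
        "export function runBash(cmd: string) { /* ... */ }",
    ],
    "mappings": "AAAA",
})

def recover_sources(map_json: str) -> dict:
    """Pair each original filename with its embedded source text."""
    data = json.loads(map_json)
    sources = data.get("sources", [])
    contents = data.get("sourcesContent") or []
    return dict(zip(sources, contents))

recovered = recover_sources(raw_map)
for path, code in recovered.items():
    print(path, "->", len(code), "chars")
```

If "sourcesContent" is absent the map only holds position mappings, which is why publishers are normally advised to strip or withhold map files from production packages.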



Wikipedia bans AI-generated content in its online encyclopedia
Oliver Milman, The Guardian, 2026/03/31



As a number of sources reported over the weekend, Wikipedia has officially banned the use of AI to author articles. The ban includes two exceptions: for translations, and to make minor copy edits (for example, one would think, spelling). A project for AI Cleanup has been launched. As the page notes, some AI content includes fake sources, some includes genuine but irrelevant sources, and some references genuinely useful sources that should be retained for use by human authors. The Wikipedia statement says the ban is necessary to protect its core standards; these include "verifiability, no original research, neutrality, and compliance with copyright rules." I think we're beginning to see an emerging divide: while AI is useful for ephemeral on-demand generated content for learning, entertainment and information, core source-of-truth content needs to be authored, and anchored, by humans.



We publish six to eight or so short posts every weekday linking to the best, most interesting and most important pieces of content in the field. Read more about what we cover. We also list papers and articles by Stephen Downes and his presentations from around the world.

There are many ways to read OLDaily; pick whatever works best for you:

This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.

Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.

Copyright 2026 Stephen Downes Contact: stephen@downes.ca

This work is licensed under a Creative Commons License.