
OLDaily

Welcome to Online Learning Daily, your best source for news and commentary about learning technology, new media, and related topics.
100% human-authored

It’s Already Here!
Jim Groom, bavatuesdays, 2026/01/20



I have nothing like the deep fascination Jim Groom has for film and video culture, but I do appreciate, as Groom does, the work Mike Caulfield is doing to understand what AI is doing when it creates ("a great recent example," says Groom, "is his 'book chapter' on My Cousin Vinny (and) he even built a film-aware fact-checking tool"). That's the genre Groom is actually working in here, as he has ChatGPT take a look at an essay he wrote in 1999, analyze it for originality ("Verdict! Bloggable!"), and produce a post that would be relevant in 2026. The post stands up, and I'm sure film-Groom's audience would be interested. For my part, tech-Groom's post about the post is also a pretty good read, one in which I am interested. And to return to his main subtheme, the question of assimilation: that is a theme that has been very well worked over in film, literature, and reality. Sure, the discussion is topical ("the ways in which our writing and thinking is being mapped, grafted, and commodified in a tool like ChatGPT is some next-level horror," writes Groom), but compared to what might be, it's also hyperbolic.



Leveraging media for impact: A Guide for Early Career Researchers
EduResearch Matters, EduResearch Matters, 2026/01/20



I have mixed feelings about this article. It recommends 'early career researchers' reach out to media to gain the traction needed to attract the 'lightning strike' in the form of a big grant or scholarship. My feelings are mixed because while I definitely encourage engagement with multiple media, I don't see career advancement as the primary purpose of it. At the risk of overgeneralizing, I think there are two types of researcher: careerists, and scholars. Careerists are worried about publicity for their own advancement. Scholars are interested in exploring and disseminating ideas. In my world the careerists are disposable, while the scholars are the people worth following and listening to. It is a problem if grants and scholarships are awarded on the basis of publicity rather than scholarship, and that's not something people should be encouraging.



Using video: from passive viewing to purposeful engagement
David Hopkins, Education & Leadership, 2026/01/20



The core tenets of using video in learning haven't changed since 2013, says David Hopkins. "Then, as now, video works best when it is framed by purpose, it draws attention to key ideas, it is embedded in activity, (and) it leads somewhere (discussion, reflection, application)." What has changed is that purpose is now critical. "Students need to know why they are watching, what to listen for, (and) what they will be expected to do with it afterwards." He adds that AI isn't ready to design learning with and around such videos. "If a task can be completed by passively watching or automatically summarising, it probably wasn't a learning task in the first place." To me this misses the point a bit. The utility of video is that it shows what can't be easily summarized. It then (if well designed) invites and enables physical replication of the task or process being demonstrated - simple reflection or discussion doesn't really count as an 'activity' in this scenario. And yeah - AI can't yet produce such videos. But I'm watching for them.



CommonsDB Feasibility Study part 2: from Design to Deployment
Doug McCarthy, CommonsDB, 2026/01/20



If you publish something online and want to declare that it is, say, open access, what do you do? You could just slap a Creative Commons logo on it, but that offers no guarantee - there are many cases where people have applied the license to content they don't own. This project addresses that problem. "CommonsDB uses a combination of cryptographic signatures and structured metadata to ensure the integrity and authenticity of content declarations." This post reports on the second phase of the feasibility study (49 page PDF) (the first was reported on last July). This phase "implemented the trust model in production, deployed three public APIs with a Developer Portal, enabled Data Suppliers to make live declarations, and launched the CommonsDB Explorer to expose registry content." It would be interesting to work with the APIs to both declare and use content. Related: the Decentralized Identity Foundation (DIF) Creator Assertions working group's user experience guidance document.
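CommonsDB's actual declaration format and API calls aren't detailed in the post, but the general pattern the quote describes - bind structured metadata to a content fingerprint, then sign the result so tampering is detectable - can be sketched. This is a minimal illustration with hypothetical function names, not the CommonsDB implementation; a real registry would use public-key signatures, for which the symmetric HMAC below is only a self-contained stand-in.

```python
import hashlib
import hmac
import json

def make_declaration(content: bytes, metadata: dict, signing_key: bytes) -> dict:
    # Fingerprint the content so the declaration is bound to this exact work.
    content_hash = hashlib.sha256(content).hexdigest()
    # Canonicalise the structured metadata (sorted keys, no whitespace)
    # so the signature is reproducible byte-for-byte.
    payload = json.dumps({"content_sha256": content_hash, **metadata},
                         sort_keys=True, separators=(",", ":"))
    # Stand-in signature; a production registry would sign with a private key.
    signature = hmac.new(signing_key, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def verify_declaration(decl: dict, signing_key: bytes) -> bool:
    # Recompute the signature over the stored payload and compare in
    # constant time; any edit to the metadata or hash breaks verification.
    expected = hmac.new(signing_key, decl["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, decl["signature"])
```

The point of the sketch is the design choice: because the license claim is hashed together with the content fingerprint and then signed, nobody can move a "CC BY" declaration onto content they don't control without the verification step failing.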



What I Learned After Trying Out Every Exoskeleton at CES
Beth Skwarecki, LifeHacker, 2026/01/20



I'm not sure exactly how exoskeletons will become a part of learning technology (though I can imagine them playing a role), but my real focus in this post is to consider point of view when evaluating technology. Here Beth Skwarecki evaluates six systems that were on display at CES 2026, considers five of them to be effective, but then asks, "Who is going to spend $1,000 to $5,000 for a little assistance in walking or hiking? Serious hikers and runners would probably rather train harder to handle tough terrain and spend the money on gear or coaching." Now, I injured my knee last summer and needed a knee brace to walk. I was able to afford a cheap one, but proper knee braces can cost close to $1,000. Being able to walk wasn't a matter of training harder; it was a matter of exercising, but with support and restraint - exactly what an exoskeleton would excel at. The standard for evaluating tech shouldn't be "I can't imagine using this"; it needs to consider multiple perspectives.



No - Duh: Countless Teachers Like Me Have Long Known the AI Risks Brookings Just "Discovered"
David Cutler, Medium, 2026/01/20



David Cutler responds to the recently published AI's future for students is in our hands and its companion report, Prosper, prepare, protect, from Brookings. He is "furious" because "teachers have been sounding the alarm since the moment generative AI tools landed in students' hands. Yet, it often takes a major institution to 'confirm' what classroom teachers already knew." Yes. This is how research works. A large number of anecdotal cases is not a sufficient basis on which to form policy and opinions. It's not 'knowledge' until it's tested and in some way confirmed. The real question, to me, is whether Brookings is the right institution to do this work. I would never question, as Cutler does, the need for the work to be done.



Generative AI: product safety standards
Gov.UK, 2026/01/20



The Department for Education has released new 'safety standards' for AI for students in England. After listing some educational use cases the document then addresses a series of topics (for example: filtering, monitoring and reporting, design and testing, etc.). Each topic is briefly defined, addressed with a set of standards, and placed in the context of relevant legislation. The standards seem designed to cover all possible risks, even if they're theoretical (for example: "Edtech developers and suppliers of products should make every effort to mitigate the potential for cognitive deskilling, or long-term developmental harm to learners"). Ian Grove-Stephensen comments, "The real aim isn't to protect students, but to protect the system... no way are we going to let a mere technological revolution change the way we've been doing things since 1870."



GenAI in Higher Education
Sam Illingworth, Rachel Forsyth, Bloomsbury Publishing, 2026/01/20



I had a general feeling of unease reading this open access book (153 page PDF), partially because of the content and partially because of the presentation. The book positions itself very much as a traditional textbook, with learning objectives at the beginning of each chapter and summary exercises at the end. The content is a bit dated (examples use ChatGPT 4o) and the advice strikes me as unsound - even in an AI world it's still recommending multiple-choice assessments and essay-writing (in 'controlled' conditions), for example. And structured as it is, it feels like it repeats the same points a lot - we are told what we're going to learn, then given an outline of that, then the items in sequence (which don't always match the outline), then a summary, then review exercises. Even more, it felt odd learning about AI in education in a linear-text presentation. It's hard for me to imagine any learning material being presented in such a text-forward manner. It's like the script for a class with no visual aids and no tools. In 2026 this feels really inadequate.



We publish six to eight or so short posts every weekday linking to the best, most interesting and most important pieces of content in the field. Read more about what we cover. We also list papers and articles by Stephen Downes and his presentations from around the world.

There are many ways to read OLDaily; pick whatever works best for you.

This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.

Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.

Copyright 2026 Stephen Downes Contact: stephen@downes.ca

This work is licensed under a Creative Commons License.