Will the LMS Finally Deliver?
Alfred Essa,
2026/01/06
Alfred Essa comments on a two-part article (part one, part two) on the history of the learning management system (LMS) from former Blackboard CEO Matthew Pittinsky last fall. "Today's LMS is essentially the same system we had three decades ago," summarizes Essa. "This is a stunning admission." Despite a billion dollars of investment, the LMS did nothing to advance learning in all that time. "Describing this history simply as 'investment' also obscures what was actually being optimized. Equity financing is designed to reward scale, market dominance, and successful exits - not necessarily pedagogical transformation." As we all know, Blackboard spent all this money trying to acquire its way into market dominance, to become the "operating system" of education. The future? The LMS with AI "as an operating system that should orchestrate all learning." But if it does this, argues Essa, it cannot become something that advances teaching and learning.
Web: [Direct Link] [This Post][Share]
I was wrong. Universities don't fear AI. They fear self-reflection
Ian Richardson,
Times Higher Education (THE),
2026/01/06
"The greatest threat to higher education is not AI. It is institutional inertia supported by reflexive criticism that mistakes resistance for virtue. AI did not create this problem, but it is exposing dysfunctionalities and contradictions that have accumulated over decades." So says Ian Richardson in this article responding to critics of his earlier article (archive) where he makes the same claim. "If universities, especially those in the second and third tiers, fail to respond to the strategic challenge it poses, they risk being displaced." Currently open access on THE, but archive just in case. I think that recent experience tells us that rather than being displaced, universities risk being acquired and/or repurposed to serve various corporate or political ends.
Web: [Direct Link] [This Post][Share]
How the hell are you supposed to have a career in tech in 2026?
Anil Dash,
2026/01/06
Anil Dash is speaking to software developers, but he may as well be speaking to people in edtech. "It is grim right now," he writes. "About as bad as I've seen." It starts at the top. "Every major tech company has watched their leadership abandon principles that were once thought sacrosanct... (or) dire resource constraints or being forced to make ugly ethical compromises for pragmatic reasons." He recommends people learn about systems and about power - and in particular, "your first orders of business in this new year should be to consolidate power through building alliances with peers, and by understanding which fundamental systems of your organization you can define or influence, and thus be in control of." In addition, consider working "in other realms and industries that are often far less advanced in their deployment of technologies," especially where "the lack of tech expertise or fluency is often exploited by both the technology vendors and bad actors who swoop in to capitalize on their vulnerability."
Web: [Direct Link] [This Post][Share]
"Any research must be accessible to others. There's no point to research that can't be used"
John Hynes,
University of Manchester,
2026/01/06
This article features Ellen Poliakoff "reflecting on the outcomes and impacts of her Open Research Fellowship project so far." As Poliakoff says, "we have been involving people with lived experience of Parkinson's and autistic people in shaping and advising on our research for more than 10 years." This particular study involves helping 'public contributors' (her term) learn about public research. "The participants in our survey, who had a range of lived experience, were passionate about the benefits of co-production." Via Octopus monthly updates.
Web: [Direct Link] [This Post][Share]
The Crisis: Students Need to Learn Different Stuff and I don't think Most Educators understand that
Stefan Bauschard,
Education Disrupted,
2026/01/06
I think the basic premise is right: "there are two boxes — How to Use AI in the current curriculum and how to change the curriculum so school is still relevant"... and the important box is the second box. But Stefan Bauschard offers two statements on the second that seem to me to be just wrong. The first is this: "future success in work or entrepreneurship will be determined by how well you manage agent teams." Why would we need to manage agent teams? Let the AI do that. All we need to do is tell the AI what we want. Second: "the most important question facing society is who gets to decide what AI does." Why does everyone refer to 'AI' in the singular? Just as there are many people - billions, even - there will be many AIs. The real question for the future is: how many of those billions of people get to benefit from an AI? If it's fewer than 'billions of people', we have hard-wired an unsustainable inequality into society, with all the harm that follows from that.
Web: [Direct Link] [This Post][Share]
AI Village
AI Village,
2026/01/06
The plot is simple: "Watch a village of AIs interact with each other and the world." Here you see four AIs - Claude Opus 4.5, Gemini 3 Pro, GPT-5.2, and DeepSeek-3.2 - interact with each other as they consider questions and solve intractable problems, like today's problem, "elect a village leader." The funny(?) part is when the village interacts with the wider community, as when it sent Rob Pike (that Rob Pike) an email than king him for co-creating Go (the programming language, not the game). Simon Willison describes the mayhem: "On the surface the AI Village experiment is an interesting test of the frontier models. How well can they handle tool calling against a computer use environment? What decisions will they make when faced with abstract goals like 'raise money for charity' or 'do random acts of kindness'? My problem is when this experiment starts wasting the time of people in the real world who had nothing to do with the experiment." Not going to disagree, but maybe we should apply the same logic to purveyors of advertising and spam and worse by artificial (corporate) persons.
Web: [Direct Link] [This Post][Share]
Taking an Internet Walk
Spencer Chang, Kristoffer Tjalve,
Syllabus,
2026/01/06
Perhaps my mistake is that I still see the internet this way (as opposed to the way I am supposed to see it, as ad-supported commercial media, I guess): "The internet is so much more than the loud and narrow portion we encounter daily. If we attend closely to the environment, we'll start to see the life forces of everyday people, their dreams, frustrations, prayers, anxieties, and joys given willingly and freely. They deserve to be given the space and honor of being discovered. They are waiting for you to discover them." This article looks at some of the ways we explored this side of the internet in the past, and offers modern-day approaches to find roughly equivalent experiences.
Web: [Direct Link] [This Post][Share]
This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, click here.
Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.
Copyright 2026 Stephen Downes. Contact: stephen@downes.ca
This work is licensed under a Creative Commons License.