
OLDaily

Welcome to Online Learning Daily, your best source for news and commentary about learning technology, new media, and related topics.
100% human-authored

The Questions We Ask About AI in Education
Stephen Downes, 2026/02/06



This is an early version of an article I wrote that is intended to appear elsewhere; the two versions are quite different, and I like this one, so I thought I would post it here. The main intent is to move the discussion toward better questions about AI in education than we have been asking to this point. "To begin at the beginning: what is it we are even trying to do in education? Why exactly do we want to impart students with new knowledge and new skills?" Much of the discussion of AI assumes there is consensus on this point, but there generally isn't, and this colours a lot of our perspectives. But maybe it shouldn't: "What we are not asking, though, is whether we will need to do any of these things in the future at all."



Why higher ed can't ignore Reddit
Liz Gross, Campus Sonar, 2026/02/06



Sure, Reddit is part of my own media diet, and has been for a number of years now. But Reddit is its own place, and it's important, first, not to generalize about Reddit (each of its discussion topics, or 'subs', is a distinct entity, with its own (often fickle) moderators and sense of community), and second, to take anything you read on Reddit with a large dose of scepticism (there's a lot of cheerleading, brigading, and influencing going on). Depending on Reddit is like depending on a really unrepresentative and undersized survey - it might tell you something exists or is a possibility, but that's the extent of its predictive or diagnostic powers. Remember, on Reddit you're talking to individuals, some of whom might even be real, not to communities.



Same old tired narrative: "Classes were built for the 1900s"
Apostolos Koutropoulos, Multilitteratus Incognitus, 2026/02/06



Finding arguments to criticize on LinkedIn is like shooting fish in a barrel (except maybe that the fish on LinkedIn want to be shot; any exposure is good exposure). Still, I can be a little bit sympathetic with the criticism as presented here, because it is (a) one we've been hearing for the last 20 years, and (b) one that points to a real problem, but a problem that is outside the means of edtech or instructional design to correct. As Apostolos Koutropoulos says at one point, "You know what hasn't changed? The operating environment we work in. Organizations want click-and-submit kind of eLearning - for better or for worse. This is mostly for compliance." I mean, for the most part taking 'courses' doesn't really make sense any more, especially in a work context. But organizations aren't clamouring for a better way to deploy learning (unless it's to train AI models... but I digress). That's a wider problem, and it isn't solved simply by pointing to the 'right' way to do it.



CC at the AI Impact Summit: Core Interventions for the Public Interest - Creative Commons
Annemarie Eayrs, Creative Commons, 2026/02/06



Most of us will have no voice at the AI Impact Summit in Delhi, and we need to be careful how we are represented by those who would speak for us. For many in the open learning community, Creative Commons takes on that role. This concerns me, because we have our differences. A case in point is the proposed system of "preferences to communicate how data holders wish their data to be used in AI," which "is at its core a data governance mechanism." I know the words sound great, but the plan "to equip creators and data-holding communities with legible, scalable forms of agency" represents a shift from promoting openness to promoting greater means of control. And when they write that "data governance is about making decisions, about choice," I don't agree. There's a vast difference between picking from predefined options and forging one's own path. At this and other summits Creative Commons should be clear that its underlying interest isn't in representing openness, but in advocating for ownership.



"Artificial Ignorance" and Data Sycophancy
Cathy N. Davidson, 2026/02/06



'AI sycophancy' is "the tendency of AI models to prioritize user agreement and approval over truthfulness, accuracy, or independent reasoning." The argument here is that "From 'mirroring' to offering 'confirmation bias,' sycophancy is unhealthy. It can lead to a range of bad consequences and again contribute to Artificial Ignorance: if a major factor in learning is seeing where one is wrong or has made a mistake and then working to address that error and make a correction, what happens if one is never wrong?" I can see the concern, but it seems wrong to generalize from a few simple prompts to everything AI is or will be. Why wouldn't we ask AI to respond differently when we're learning than when we're just trying to get things done?



We publish six to eight or so short posts every weekday linking to the best, most interesting and most important pieces of content in the field. Read more about what we cover. We also list papers and articles by Stephen Downes and his presentations from around the world.



Copyright 2026 Stephen Downes Contact: stephen@downes.ca

This work is licensed under a Creative Commons License.