[Home] [Top] [Archives] [About] [Options]

OLWeekly

Inside Higher Ed’s Model Is Changing. Our Journalism Is Not.
Sara Custer, Inside Higher Ed, 2026/04/20


Icon

I stopped linking to Inside Higher Ed (IHE) with any frequency when it started requiring people to register to read articles. My policy is that OLDaily links directly to content - no paywalls, no subscription barriers. So nothing really changes for me now that IHE content "will be available only to paying subscribers." But it's not a surprise, either. Good luck to them - the market right now for paying subscribers is pretty thin.

Web: [Direct Link] [This Post][Share]


Your Favourite Commenter Might Not Be Writing Their Own Comments
Sam Illingworth, Slow AI, 2026/04/17


Icon

I don't really get many comments, and I've always wondered about that. Part of it definitely involves commenting on other people's work so they start reading yours and commenting back. That creates an opportunity for people to use AI to support social network optimization (SNO): "Investigation of 4,929 Substack comments reveals real people using AI agents to comment on their behalf. Data on the 1:1 engagement signal, live Turing tests, canary traps, and what automated engagement means for online writing communities." It's not just on Substack. "On LinkedIn, ghost commenting is an industry. The practice scales. Commenting builds algorithmic visibility without providing a traceable email." So comments, I guess, are a bit like money: great wealth is prima facie evidence of cheating. (P.S. Fair warning: I'm pretty sure this article is in large part authored by AI, though of course I can't prove it. I did learn the definition of 'canary trap' as a result, so there's that.)

Web: [Direct Link] [This Post][Share]


Blank.Page
René Galindo, Mohamed Boudra, Blank.Page, 2026/04/17


Icon

Another item just for myself. Blank.page is a simple text editor. What makes it somewhat distinct is a microphone-based voice input that actually appears to work fairly well. Here's the source on GitHub (I had to search for it; it's three months old and might not be fully up to date). There's also a newsletter (with no content yet).

Web: [Direct Link] [This Post][Share]


Free online vector editor & procedural design tool
Graphite, 2026/04/17


Icon

This is mostly a reminder for myself. Graphite is an open source vector graphics creation and editing tool. You can run it locally in a browser with no login or registration; it exports SVG, PNG, and JPG files. "Starting life as a vector editor, Graphite is evolving into a general-purpose, all-in-one graphics toolbox that is built more like a game engine than a conventional creative app. The editor's tools wrap its node graph core, exposing user-friendly workflows for vector, raster, animation, and beyond." Here's the source on GitHub. There's also a newsletter.

Web: [Direct Link] [This Post][Share]


Can We Teach Critical Thinking?
Althea Need Kaminske, The Learning Scientists, 2026/04/17


Icon

It's always nice to see a reference to my old friend Tim van Gelder (I spent three months on a fellowship with him in Australia in 2001). Here the reference is to Teaching Critical Thinking: Some Lessons From Cognitive Science. In this undated article (a repost from some time in the past) author Althea Need Kaminske summarizes the paper but also slants it to a degree, I think. For example, I don't think that when van Gelder says 'critical thinking requires practice' he is saying "instruction on critical thinking needs to be done explicitly and deliberately." I also don't think he is saying critical thinking skills in one domain cannot be transferred to another domain, only that "students also must practice the art of transferring the skills from one situation to another." Critical thinking can be taught, and the skills by their very nature are general and widely applicable, but (says van Gelder) instructors have to do more than teach theory and hope students acquire the skills. Learning critical thinking requires practice.

Web: [Direct Link] [This Post][Share]


The "Cognitive Offloading" Paradox
Philippa Hardman, Dr Phil's Newsletter, 2026/04/17


Icon

According to Philippa Hardman, over the last year "the field was starting to move beyond 'AI is bad for learning' toward a harder question: when is it bad, and when might it actually help?" That's the purpose of this study (30 page PDF), she reports. "Cognitive offloading emerges as enabler rather than inhibitor of transformation, with threshold effects indicating that substantial delegation liberates mental resources for higher-order reflection." So, sure, cognitive offloading happens. But the mental space that's freed up allows people to focus on higher-order problems. This leads Hardman to suggest six principles describing how we should and shouldn't encourage learners to use AI, including a recommendation to frame AI as a partner, not a tool.

Web: [Direct Link] [This Post][Share]


Scaffolding Human-AI Collaboration: A Field Experiment on Behavioral Protocols and Cognitive Reframing
Alex Farach, Alexia Cambon, Lev Tankelevitch, Connie Hsueh, Rebecca Janssen, arXiv.org, 2026/04/17


Icon

This paper from Microsoft proposes that people get better results from AI if they think of it as a conversational partner rather than a simple tool like a search engine. It compares two educational interventions designed to encourage this: behavioural scaffolding, which "refers to explicit protocols that structure how humans interact with AI systems", and cognitive scaffolding, which "refers to interventions that reshape users' mental models of AI." The results? "Behavioral scaffolding was associated with lower output quality, consistent with coordination costs exceeding collaboration benefits." Meanwhile, "Cognitive scaffolding may shift mental models but the evidence for genuine training-induced change is not strong enough to confirm it." Overall, "The implication is not that collaboration with AI is harmful, but that mandating a specific synchronous protocol under the infrastructure conditions of this study was associated with worse outcomes than allowing flexible use." Despite the ambiguity of the results, readers can learn a lot from this paper. See also the interactive data explorer from the paper. Via AI Mindset, which interprets the results far more positively than I did.

Web: [Direct Link] [This Post][Share]


Biases in Scientific Inquiry
Jamie Shaw, Manuela Fernández Pinto, Torsten Wilholt, PhilSci-Archive, 2026/04/16


Icon

We read a lot about bias in AI or even bias in education. "According to common usage, 'bias' is always a deviation that systematically tends toward a certain direction." Fair enough. But what is it exactly that bias is a deviation from? The best we can manage is something like "a pattern considered to be correct and desirable in some way." But that answer demands a deeper look at something commonly treated as a self-evident problem. That's what this paper (22 page MS-Word) does. It offers a five-category taxonomy (mechanism, effect, content, stage, feature) described as "a taxonomy of bias individuation practices," which is then extended into a much larger categorization of types of bias. These are then mapped onto a five-stage 'research pipeline' to identify where they occur. I think it's a start, but a taxonomy is no more than a way of describing a landscape, and should not be mistaken for an understanding of that landscape.

Web: [Direct Link] [This Post][Share]


Youth don't have a voice problem; they have a strategy problem
Max Genin, World Education Blog, 2026/04/16


Icon

Youth are not being heard, according to this article. "Of the youth and student organizations surveyed, 57% submitted feedback on education policy. Only 35% saw that feedback reflected in final decisions. Fewer than one in six were ever asked to monitor implementation." This article argues that it's their own fault. "The problem is that youth, by and large, are trying to change institutions using tools those institutions have no structural reason to respond to. The strategy is the gap." What they should be doing, argues Max Genin, is to learn international law and cite it as a demand for compliance when they meet with institutions. "What needs to change first is not government willingness," he writes. "It is the technical fluency of youth organizations themselves, their ability to walk into a room knowing the treaty, the budget cycle, and how decisions actually get made." Ridiculous.

Web: [Direct Link] [This Post][Share]


Stealing Your Webinar as a Service: Enpoopification Poster Child
Alan Levine, CogDogBlog, 2026/04/16


Icon

Alan Levine shares his experience with a "sleazy outfit" called WebinarTV that surreptitiously records what it calls public webinars and puts them up on its website. As Levine explains, "Just because you set up registration, it does not protect your webinar. You actually have to do extra work of approving registrations or verifying attendees, adding special links/passcodes to 'protect' your events from being taken by WebinarTV. You have to create barriers of access for event participation." How are they doing this? "Most people will assume that they are somehow registering bots to attend events and record, like notetaking ones. This is not what's happening, IMHO," says Levine. This would mean they're accessing the recordings somehow directly from Zoom. But Zoom says WebinarTV "accesses meetings using links that have been shared publicly, then records the sessions using browser extension or 'other tools.'" See also this report from CyberAlberta.

Web: [Direct Link] [This Post][Share]


Reinventing the Wheel, Again
Glenda Morgan, On Student Success, 2026/04/16


Icon

In a LinkedIn article, Dan Meyer proclaimed the 'death' of Khanmigo. Not death in the sense of going away, but in the sense that nobody is using it. "For a lot of students, it was a non-event," Khan told Matt Barnum last week in Chalkbeat, referring to Khanmigo's release three years ago. "They just didn't use it much." This article examines that proposition. Why does the software go unused? "The recurring blind spot (is) EdTech's promises of frictionless scale. EdTech repeatedly promises low-cost, scalable transformation - but often repackages old models without solving engagement or economics." The argument here is that in order to scale a technology company needs "sustained marketing investment, institutional credibility, student support infrastructure, and retention strategies that go well beyond content delivery." All true - but as Glenda Morgan notes, the real issue is that these companies repackage the old model over and over again. See also this discussion in Learning Engineering.

Web: [Direct Link] [This Post][Share]


Yale needs major reforms to rebuild public trust, faculty committee says
Asher Boiskin, Aria Lynn-Skov, Leo Nyberg, Yale Daily News, 2026/04/16


Icon

This article points to a report (58 page PDF) addressing a variety of issues facing "Wealthy selective private universities such as Yale" such as "cost, admissions, political homogeneity, self censorship, (and) grade inflation." According to the report, "universities exist to preserve, create, and share knowledge." It recommends a return to this foundational principle and suggests it forms the basis for all the recommendations in the report, but that relationship is hard to see. Indeed, from where I sit, there's nothing in this foundational principle that recommends a path of being wealthy, selective or private, but that is essentially what Yale seeks to preserve. Don't get me wrong: the discussion is good, up to the point of the recommendations (which go off the rails and on their own tangent starting at recommendation 10 ('recenter the classroom')). Jeff Jarvis is unsparing in his criticism, saying it "prostrates itself before the cancel-culture trope," which it does, but the greater fault is that it never questions the fundamental elitism on which Yale is founded, and which lies at the heart of the mistrust it faces today.

Web: [Direct Link] [This Post][Share]


What do universities owe the public?
Ted Hewitt, University Affairs, 2026/04/15


Icon

The title of this post may as well be switched around: what does the public owe universities? That's the tenor of the article, even as author Ted Hewitt in the same breath says "the fundamental role of the university, within its broader societal and community context, is, in my view, seriously at risk, along with our own liberal democracy." He recommends three things: protecting academic freedom, having university leaders maintain an open dialogue with the public, and resourcing. Now sure, the example south of the border is nothing to be emulated. But the response of the university system can't be "support us or the bunny gets it." What the system needs to do, in my opinion, is what it utterly failed to do in the U.S. - enable full participation and success in higher education regardless of socio-economic status. Universities need to directly benefit the entire community. That, to me, looks very different from what we have today. It's certainly not being offered by Hewitt in this article.

Web: [Direct Link] [This Post][Share]


Trust is the Silver Bullet
Josh Brake, The Absent-Minded Professor, 2026/04/15


Icon

I said the other day that another word for 'cognitive offloading' might be 'trust'. We don't manually check everything the AI does because we trust it to do (more or less) the right thing. So the title of this post caught my eye; Josh Brake summarizes and to a degree endorses Stephen M. R. Covey's book The Speed of Trust. "Covey decomposes trust into two main elements: character and competence... character, he argues, is composed of integrity and intent... competence can similarly be decomposed into two pieces, capability and results." These all together represent "a strong foundation of character" that ought to be instilled in students, suggests Brake. But is it a good account of trust? I don't think so, because trust is a much broader concept. We can trust things that are not human and do not possess virtue or intent: trust the math, trust the process, trust the ice, trust in the future. Trust isn't a property of the thing being trusted; it is a willingness on our side to grant certain expectations to it regarding the outcome.

Web: [Direct Link] [This Post][Share]


An Explanation of AI that Could Be Wrong (Which is Good)
Michael Feldstein, eLiterate, 2026/04/15


Icon

This is an introduction to a paper, the full text of which is found here. Michael Feldstein has also provided some AI structures that both help explain what the paper says and test the predictions offered in the paper. I didn't use the AI components, but I did read the intro post and the paper as a whole, which was well worth the effort. It defies summarization in a short post such as this, but here goes: transformer-based AIs (such as ChatGPT) learn complex and apparently rule-based systems (such as language or chess) by preserving distinctions that have predictive import in a given context, and discarding the rest. Feldstein calls this the conservation of predictive meaning (CPM) theory. My assessment is that he is not wrong. I say it that way because I would word things a bit differently and draw slightly different conclusions. What he calls 'distinctions' I would call 'patterns'. What he calls "a general mechanism to reduce predictive surprise" I would call 'salience'. I would not say language learners are "like effective cryptographers", nor would I say they "decode what has been communicated." Overall, though, I think he is on the right track.

Web: [Direct Link] [This Post][Share]


One size fits none: let communities build for themselves
Ben Werdmuller, 2026/04/15


Icon

I think this is exactly right, and applies to learning technology as well: "In a world where custom code can be created far more easily than it could in the past, communities can more easily build bespoke spaces for themselves. There's no need to adopt a one-size-fits-all platform - even an open source one - when you can ask for the exact features you want." Rather, "What would be needed then are agreed-upon rules about how community platforms behave." This is where it gets tricky, because protocol-writers have historically been over-ambitious in their scope. My view is that syntax belongs to protocols, while semantics belongs to communities. That's (in my view) what Werdmuller describes as "the human stuff that rises to the top when code becomes more of a solved problem."

Web: [Direct Link] [This Post][Share]


Literate communities have always looked different to their critics
Doug Belshaw, Open Thinkering, 2026/04/15


Icon

This is a well-argued and dare I say literate response to critics of screen culture. "When you've closed 800 libraries and gutted the infrastructure through which people build reading communities," writes Doug Belshaw, "blaming screens is a conclusion in search of a cause." Woven through the argument is Belshaw's account of what it means to be literate. "To be literate is to be part of a literate community. This involves sharing references, arguing about ideas, and having the knowledge to participate in discourse." Different communities have different kinds of literacies. It's also a good response, to my mind, to the argument based on 'cognitive offloading' and AI. "These young people weren't less capable than previous cohorts; they were differently capable." Ultimately, "If we want to defend democracy, we should be defending the conditions that make critical engagement with it possible."

Web: [Direct Link] [This Post][Share]


Whatever Happens to Music Will Happen to AI (2026)
Bruce Sterling, Medium, 2026/04/14


Icon

Bruce Sterling was cyberpunk back when cyberpunk was a thing, and his voice, though it has mellowed with age, still resonates, now with the silky smooth notes of old oak and maple, seeing things like Jazz and AI for what they are and also what they aren't, reminding us not only that we live and create, but that we fade away, and that what carries on is really only those unique notes we make in the ether. I hope to be one of those.

Web: [Direct Link] [This Post][Share]


UNESCO’s Higher Education Roadmap: What it Gets Right and What it Asks of Us
Maia Chankseliani, NORRAG -, 2026/04/14


Icon

As is so often the case, I have mixed feelings about this discussion. Maia Chankseliani summarizes: "The roadmap links equity with pluralism in a way that goes well beyond the access agenda... genuine inclusion requires going beyond the removal of barriers to entry, to engaging seriously with plural forms of knowing, ways of understanding the world developed across different communities, traditions, and geographies." Sure, we need to do more than just open the doors to traditional universities. And there are different ways of knowing. But we do people a disservice if we open access and they find it's not based in anything like genuine knowledge at all. Telling stories is not to my mind equivalent to a proper scientific enquiry. Opening access also means opening access to the type of learning people desire. And as Chankseliani says, "Precisely how institutions can maintain meaningful quality standards while simultaneously recognising plural epistemologies is a genuine unresolved tension."

Web: [Direct Link] [This Post][Share]


A Chapter Closes, the Impact Remains
Campus Manitoba, 2026/04/14


Icon

It's disappointing to read this. "For 35 years, Campus Manitoba has supported Manitoba's post-secondary landscape through collaboration, care, and shared commitment. We are now sharing an important and difficult update. Due to changes in provincial funding, Campus Manitoba will be closing its doors on June 26, 2026." The change appears to be pretty sudden, as they just recently advertised for a new position, and also just recently made the switch to Bluesky. They operated the OpenEd Manitoba Repository, which "is no longer being updated at this time." There's no similar notification on eCourses Manitoba, but I would expect changes there too.

Web: [Direct Link] [This Post][Share]


The impact of AI on software engineers in 2026: key trends
Gergely Orosz, The Pragmatic Engineer, 2026/04/14


Icon

The two big impacts on software developers early in 2026 have been increasing AI costs and usage limits. Claude Code has been hit especially hard, with widespread complaints about session limits, but the trend is evident across the board. I think what we'll see is more of an emphasis on on-premises AI (to avoid the usage limits) using more open-source and open-weight models and software (to address costs).

Web: [Direct Link] [This Post][Share]


TED, Khan Academy and ETS announce new institute to reimagine higher education for the AI age
TED Blog, 2026/04/14


Icon

Obviously there's a lot of room for scepticism in this announcement, but I would be remiss if I didn't mention it here, because it will probably form the basis for a lot of the discussion - pro and con - of AI-informed learning in the future. "TED, Khan Academy and ETS announced a joint plan to launch the Khan TED Institute, a new higher education collaboration designed for an AI‑driven era. The Khan TED Institute aims to prepare learners for the next generation of jobs while cultivating the uniquely human skills required to thrive in work, life and society amid rapid technological change."

Web: [Direct Link] [This Post][Share]


I Thought I Knew When Students Were Engaged in Math Class. I Was Wrong (Opinion)
Michael Norton, Education Week, 2026/04/14


Icon

There's this thing where some educators argue students must be thinking of the precise educational point at all times, and that anything else is a distraction. That's what's happening here. But it is, in my view, the wrong approach, because it completely does away with association, and focuses on rote memorization. But which outcome is better: a memorized fact about the underground railroad, or a person associating biscuits with the underground railroad every time they make biscuits, for the rest of their life? The same, though less obvious, point can be applied to mathematics. "Were they thinking about math concepts? No, they were thinking about rectangles, lines, and shading." But those are math concepts - just not formalized notational math. Abstract concepts not associated with ground truth are lost, not only forgotten, but never useful in the first place.

Web: [Direct Link] [This Post][Share]


Password Manager Angst
Tim Bray, ongoing by Tim Bray, 2026/04/13


Icon

As Tim Bray says, if you're not using a password manager, you should probably start using one. I personally use 1Password, because it works quite well with my browser. But the whole business of managing credentials is about to get more complicated and more important as we begin to use services (and especially AI services) to access our various accounts. This post compares 1Password to BitWarden. My experience was similar. 

Web: [Direct Link] [This Post][Share]


All in the Verbs
Tim Klapdor, Heart Soul Machine, 2026/04/13


Icon

I like what Tim Klapdor is doing with Bloom's Taxonomy, both in the last post and in this post. "What I'm really trying to achieve is a more three-dimensional version of Bloom's. Learning has height, breadth, and width – but the affective and operative components have often been lost. These are often dismissed as 'soft skills' or treated as trade-specific concerns, but they matter deeply in higher education. Higher education that includes vocational training and degrees in which cognitive, affective, and operative development are genuinely intertwined." Quite so. There's a lot to be discovered analyzing these concepts in detail - though at any point it may be tempting to crystallize what has emerged into a far too hasty abstraction.

Web: [Direct Link] [This Post][Share]


An illustrated guide to resisting "AI is inevitable" in education
Benjamin Riley, Cognitive Resonance, 2026/04/13


Icon

What I like about this article is that it takes the reader step by step through the argument against the use of AI in education, with clear examples and references to real work. It begins by pushing back against the inevitability of AI and by pointing to what has come to be called 'cognitive surrender'. Then a series of examples shows how it fails in education. Good stuff; I'm sure this will be popular. But let's reframe. What if we replaced 'cognitive surrender' with the word 'trust', and thought of using AI as no different than depending on a library of books or statements made by people who have written them? And what if, instead of thinking of AI as some sort of tutoring system, we thought of it as similar to writing, which exists everywhere in society, but which serves to guide though not replace human experience?

Web: [Direct Link] [This Post][Share]


Through the Prism: Illuminating Educational Impact with Brookfield’s Four Lenses
Danielle Hinton, #LTHEchat, 2026/04/13


Icon

This is an outline of Stephen Brookfield's 'four lenses' theory of reflective practice. The four lenses, for the record, are: "student feedback, emotions, behaviour patterns, peer insights, theory." The idea is that learning from diverse forms of evidence creates "stronger, more credible reflections." The diversity part makes a lot of sense to me. Calling these forms of evidence 'lenses' does not. The use is at best metaphorical, though I fear people take it literally, like this: "Each lens refracts our teaching differently, offering a unique hue in the spectrum. Where these colours converge or contrast, we begin to see the fuller pattern of our work. Using diverse forms of evidence is like adjusting how the prism catches the light - subtle shifts that reveal deeper detail, sharpening our understanding of what our teaching truly looks like in practice." 

Web: [Direct Link] [This Post][Share]


What Happens When Students Stop Believing Their Work Matters
Marc Watkins, Rhetorica, 2026/04/13


Icon

I think we have to ask this not only of students but of people generally. The framing for education, though, is to pose this as opposed to more oft-cited concerns about cheating. "There's a much deeper wound here that we're only now beginning to see. When a machine can now mimic the work of a human being, many of us, especially students, must be asking what the point is anymore." Of course, this is being written from the perspective, I guess, of someone whose work has always mattered. But a lot of people look at the world of work generally, with or without AI support, and say "what's the point?" For a lot of people, if they didn't have to earn a paycheque, they wouldn't be doing this work at all. 

Web: [Direct Link] [This Post][Share]


Supporting AI Literacies for Young Adults Aged 14-19
Rhianne Jones, Doug Belshaw, Laura Hilliger, We Are Open Co-op, 2026/04/13


Icon

This report (39 page PDF) for the BBC's Responsible Innovation Centre benefits from Doug Belshaw's background in digital literacies as well as a survey of 40 definitions of AI literacy (another growth industry in our field). "The report outlines that public service media (PSM) have a key role to play in addressing a gap between technical and functional skills, and critical skills through creative learning interventions that blend these elements." I'm a bit sceptical that the literacies all line up neatly as words that begin with 'c' (aka Angela Gunder's "Dimensions of AI Literacies") but do agree that "the competencies involved in AI Literacies are not something that can be 'delivered' but only developed." I think we could talk about 'core values' and literacy; while I support EDI and human rights, I'm not sure they belong in a definition of 'AI literacies'.

Web: [Direct Link] [This Post][Share]


This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.

Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.

Copyright 2026 Stephen Downes Contact: stephen@downes.ca

This work is licensed under a Creative Commons License.