[Home] [Top] [Archives] [About] [Options]

OLWeekly

Feature Article
Those Takeaways
Stephen Downes, Half an Hour, 2026/03/27


I'd like to offer a rejoinder to Junhong Xiao and David CL Lim's paper Is AI the solution to the problems that make higher education "ill" in the first place? Towards a technology-agnostic, future-proof approach. Near the end of the paper is a section titled "Key takeaways for policymakers and institutional leaders". This post addresses those takeaways specifically.

[Share]

[Link] [Local copy]


The Old Internet is Still Here
Tyler Gaw, 2026/03/27


Icon

I've seen this referenced in a few places. Tyler Gaw argues, "Those things we're missing aren't gone. They're still right here. They never went anywhere, they just got layered over by time... I don't consider myself an outlier here. I would wager (without any data) that most people who had a personal site and/or blog 20 years ago, still have one today. And there's a high likelihood they've maintained it throughout those years... But Good Internet is still here. We're still making stuff we care about and sharing that stuff on our websites. We're making it for ourselves first, but we're also making it for you." Yup. That would be me. :)

 

Web: [Direct Link] [This Post][Share]


The Five Biggest Pitfalls of Collaborative Grouping (And How to Avoid Them)
John Spencer, Spencer Education, 2026/03/27


Icon

I used to hate groupwork, but that was usually because of one of the five problems with collaborative work described by John Spencer in this reasonably detailed article. The problems are: one student does all the work (usually me, heh); one student takes creative control (also me); groupthink (except for me); conflict (usually with me); and project management (not needed, because of me). OK, I jest a bit, but Spencer identifies some good approaches to address these issues (without once mentioning the jigsaw method, though properly speaking that's a cooperative work approach).

Web: [Direct Link] [This Post][Share]


Values-led Generative AI in Design Education: A Toolkit for Confident, Critical Practice
#ALTC Blog, 2026/03/27


Icon

I always ask "whose values" when I see stuff titled like this. But anyhow: "The case studies and scenarios in the toolkit are intended as a starting point rather than a prescription. Every teaching context is different, and the activities can be adapted for a wide range of disciplines and levels. By grounding AI integration in design values and pedagogic reflection, we hope the toolkit empowers educators to build confidence, spark debate, and support students in navigating an evolving creative landscape."

Web: [Direct Link] [This Post][Share]


AEGIS-OA launches to advance sustainable Diamond Open Access publishing in Europe
2026/03/27


Icon

"The initiative brings together a consortium of 24 partners from 16 European countries, including 14 beneficiary partners and 10 associated partners... to reinforce community-led publishing models and improve coordination across disciplines, institutions, and national contexts." Obviously this is a welcome development, especially right after reading about the Canadian initiative. Maybe we're finally breaking the stranglehold of commercial publishers over academic discourse. In case you're curious, the organization is called Activate European Guidance and Incentives for Sustainable Open Access publishing (AEGIS-OA).

Web: [Direct Link] [This Post][Share]


The birth of the bio-edu-data-sciences
Ben Williamson, Code Acts in Education, 2026/03/27


Icon

This is a long post that is very much worth reading. "In summer 2025, a startup technology company from Silicon Valley announced the launch of a genetic IQ test for embryos," reports Ben Williamson. "Now we could just respond to this by saying it's modern eugenics and snake oil, as other critics have." But we shouldn't. For one thing, despite the questionable ethics, what we've seen is that people would do this if they could. So it could become a thing. And as Williamson argues, the necessary infrastructure is already being set up, and it doesn't really matter whether it's a 'real' science or not. "Educational genomics needs to be understood as an inventive science," he writes. It doesn't just unveil what's there, it fabricates or invents "new genomic facts about learning."

Web: [Direct Link] [This Post][Share]


What The AI Consciousness Question Conceals
Barton Friedland, NOEMA, 2026/03/26


Icon

We've seen the argument a few times now that computation and embodiment are fundamentally different, and that AI is one, and humans are the other, and that therefore AI cannot be conscious. I linked to The Abstraction Fallacy making this point a few days ago, and Barton Friedland links to Anil Seth's The Mythology of Conscious AI making much the same case back in January. I've covered both here. My response is to collapse the distinction; computation is embodiment (that's why, for me, a 'connection' exists only when one entity can change the state of another). Here, Friedland takes a different approach, combining the two layers via the mechanism of 'augmentation'. "In the human-AI arrangement, value lies not inside the machine, not inside the skull, but in the configuration between them." It's an interesting idea. Writes Friedland: "If cognition is distributed, enacted and extended, then the relevant unit of analysis is not the individual brain (biological or artificial), but rather the configuration in which intelligence operates."

Web: [Direct Link] [This Post][Share]


Coalition Publica launches new website for advancing open access
Coalition Publica Communications, Public Knowledge Project, 2026/03/26


Icon

By 'diamond open access' we mean open access publishing that has neither subscription fees for readers nor publication fees for authors. Usually the publications are supported by an academic institution, foundation, or government office. This article announces "A new website for the Canadian diamond open access community" for Coalition Publica (CP), the partnership between Érudit and the Public Knowledge Project (PKP). There's no icon but you can find their RSS feed here.

Web: [Direct Link] [This Post][Share]


Top LLM PyPI package compromised to steal user details
Sead Fadilpašić, TechRadar, 2026/03/26


Icon

As reported here and widely elsewhere, "A hugely popular Python package called LiteLLM was compromised and used to deploy an infostealer malware to hundreds of thousands of devices." The malware grabbed API keys, .env credentials, personal information, and much more. The danger is magnified because the package is frequently used by Claude Code, so people might not be aware their projects contain it. This points to the related question of how we store keys and credentials generally if we're working in a distributed environment that may involve AI agents and remote applications. To address this, Bitwarden has developed and offered as open source a software development kit (SDK) for "credential access with designated human oversight and robust end-to-end encryption, helping ensure passwords are never exposed or used without explicit authorization." Here's my own work (in collaboration with Claude) in this area - it's not quite as strong as what Bitwarden is proposing, but it's pretty strong.
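The baseline principle - keep credentials out of source code and fail loudly when they're absent - can be sketched very simply. This is a minimal illustration of my own, not Bitwarden's SDK; the key name and placeholder value are invented for the example:

```python
import os

def get_api_key(name: str) -> str:
    # Read a credential from the environment at runtime, so it never
    # appears in source code or version control - and is never sitting
    # in a project file for a compromised dependency to harvest.
    key = os.environ.get(name)
    if key is None:
        raise RuntimeError(f"credential {name} is not set")
    return key

# Illustration only: a placeholder value, never a real key.
os.environ["DEMO_API_KEY"] = "sk-placeholder"
print(get_api_key("DEMO_API_KEY"))
```

Environment variables are only the floor, of course - a proper secrets manager adds encryption at rest and per-use authorization on top of this pattern.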

Web: [Direct Link] [This Post][Share]


What's Up With That?
Marshall Kirkpatrick, 2026/03/26


Icon

I haven't tried this, because browser extensions are blocked in the office, but I will when I'm home, and it's definitely worth a look. The idea is that the extension looks at the article you're reading, compares it to what else it can find on the same topic, and runs it through an analysis telling you what it adds that's new, what sort of wider analysis might be recommended, adjacent topics worth pursuing, etc. Here's an intro video. It's all relevant to me, of course, because I do the same thing to a limited extent here in my newsletter. As with everyone else in the world, I'm asking myself what people get from me that they can't get from a robot. I'm guessing it's my kindly demeanour and wry sense of humour. Via Intelligent Machines.

Web: [Direct Link] [This Post][Share]


Gen Xfest and the Better Suit Industrial Complex
Jim Groom, bavatuesdays, 2026/03/26


Icon

"Nothing against the suit, Hicks... a just-folks image that may've worked fine once... but the more we expect to be face-to-face with the well-to-do, you get it? ... The work changes, sure. But more importantly, who the work is for changes... What I was seeing on the floor wasn't just innovation (though AI can still blow your mind pretty quickly). It was an industry repositioning. The scrappy, independent hosting outfits are being written out of the geopolitical narrative." This - from my perspective - is the history of learning technology. We've always been pushed toward the suits - sell to the institutions, sell to government, sell to corporations - when our real clients all along should have been those least able to pay: the learners themselves.

Web: [Direct Link] [This Post][Share]


The Grant Application Is Dead. What Comes Next?
Tom Watson, Tomcw.xyz, 2026/03/26


Icon

"Come down the rabbit hole with me," invites Tom Watson, as he describes "how federated protocols, local agents, and organisational self-sovereignty could replace the broken funding model." There's a lot to like about this approach, and ideally, our future looks something like this. But wait, there's more. Why limit the model to grant applications for organizations? The existing system of earning degrees and submitting job applications is just as broken. With the right infrastructure support (which I would expect to become a future role for government) this becomes a model that replaces job applications generally. More from Tom Watson on open recommendations. More, from me.

Web: [Direct Link] [This Post][Share]


A Model of Disunified Human Experience
Eric Schwitzgebel, The Splintered Mind, 2026/03/25


Icon

As background, you might want to first read Conscious Processing and the Global Neuronal Workspace Hypothesis (it's OK, I hadn't seen it either). This is a great article with an even better diagram articulating how (conscious) experience in a 'neuronal workspace' may be connected to and informed by (unconscious) more specialized 'workspaces'. It makes me think of 'communities of communities'. "Baars's global workspace involves processors related to the past (memory), present (sensory input, attention), and future (value systems, motor plans, verbal report). Thus, the global workspace achieves experiential integration that is, in terms drawn from the philosophy of mind, both synchronic (at a particular point) and diachronic (over time)." Now that we're caught up, Eric Schwitzgebel throws a spanner into the works - what if there is no global unified workspace? "On this model, disunity is the normal human condition. Our experiences are fragmented, except when we pull them together through attention. We just don't realize that fact."

Web: [Direct Link] [This Post][Share]


In a 'Test', Google Is Automatically Rewriting News Headlines in Its Search Results
Nick Heer, Pixel Envy, 2026/03/26


Icon

Altering a title sounds really bad. And there's a reason why I keep the title of the article the same when I write a commentary on it - I want to be sure I'm not distorting the intent of the author by altering their title. So this Verge report seems concerning: "Google is beginning to replace news headlines in its search results with ones that are AI-generated." But a search engine's responsibility is a bit different from mine. As reported here, "For content that could impact someone's health, finances, or legal situations, Google seems far more concerned with making sure titles are accurate and helpful rather than keyword-optimized." That's actually a useful service, especially in a world where titles are so often used to mislead.

Web: [Direct Link] [This Post][Share]


The Fallacy Fallacy
Maarten Boudry, Persuasion, 2026/03/27


Icon

According to Maarten Boudry, the problem with teaching students how to spot fallacies is that they start seeing them everywhere. "They hurled labels and considered the job done. Worse, most of the "fallacies" they identified did not survive closer scrutiny." And the gist of the article as a whole is that "human reasoning is far more sophisticated and subtle than the theory of 'fallacies' suggests." As someone who has taught and written about fallacies, I am inclined to agree with both parts of this. But I never abandoned the teaching of fallacies, though I did adapt my method. Identifying fallacies is a three-step process, I said. First, you can learn to recognize the 'signs' that a fallacy is present. But signs are often misleading; you need to reconstruct the reasoning to confirm that there is, indeed, a fallacy present. Finally, you need to show not simply that the fallacy is present, but to use your understanding of the fallacy to show that the reasoning is incorrect. If you name the fallacy in your response, I would say, you're doing it wrong.

Web: [Direct Link] [This Post][Share]


Why We Should Be Reading Paul Churchland Right Now
Matthew J. Brown, the hanged man, 2026/03/26


Icon

I am at least partially influenced by the fact that I did read Paul Churchland when I was younger, and came to much this sort of belief: "It is very common to see confident assertions that LLMs mimic language use but do not really understand or use it the way that we do, that LLMs do not really reason or think, that they cannot know or understand things. On examination, these claims are often grounded in a folk-psychological understanding about how we think, know, or use language, or, at best, in ideas from philosophy or cognitive psychology that are profoundly disengaged from any understanding of the underlying mechanisms of the brain." 

Web: [Direct Link] [This Post][Share]


Who will get us there?
David Truss, Daily-Ink, 2026/03/25


Icon

This article starts by quoting in full a post of mine that has gotten some traction on LinkedIn describing "the impact of AI on higher education." It was preparation for an event I'll participate in later this year. The thrust of David Truss's comment isn't to agree or push back, but rather, to ask, who will get us to this vision? Who is this 'we' of which I speak? "'We' won't get there following the guidance of financially lucrative edu-tech business," he writes. "'We' won't get there like we did with Web2.0 tools in the late 2000's and early 2010's, on the backs of tech savvy educators leading the charge. 'We' won't get there because of some governmental vision pushing a new AI enhanced curriculum." Fair point. If the model of 'educators' is 'teachers working in schools following institutional guidelines' then they are unlikely to move us from point A to point B. No, I was thinking (and this should surprise no one) of 'educators' as 'people like me' - working as educators but not typically in education. I have long said that change will come from outside the system. I don't doubt today that this remains true. 

Web: [Direct Link] [This Post][Share]


The Role of Higher Education Journal in Shaping Global Knowledge Networks
Aydın Bulut, Higher Education, 2026/03/25


Icon

I have long been fascinated by the observation that the fields I cover here are composed of what might be called 'communities of communities', that is, clusters of writers and practitioners that tend to coalesce into smaller cooperative networks while still being connected to the wider community. This article (29-page PDF) in Higher Education both embraces and resists that idea when it comes to a history of its own contents. It wants to be a systematic review, but the data don't coalesce into a single overarching theme. We see an ebb and flow of ideas and concepts, along with the citation networks of practitioners that swirl around them. "The early 2000s saw... the onset of institutional and methodological transformation... 2006-2015... indicates a shift towards macro-level analyses, emphasising structural, political, and social dimensions of higher education... 2021-2025... suggests a renewed orientation towards measurement, pedagogical modelling, and teacher-centred research." (p.s. the diagrams could have used much tighter editing; the headings of table 2 are incorrect, the hierarchical structure of Figure 3 is masked by lines flowing for no reason behind blue circles, the prominent (and hyphenated) 'higher-education' in the word cloud is suspicious, and the flow from concept to concept in Figure 7 appears to be arbitrary).

Web: [Direct Link] [This Post][Share]


The First Minutes: Designing Care-Based, Culturally Relevant Class Openings
Norline Wild, Faculty Focus, 2026/03/25


Icon

This article promises to help instructors "learn how care-based, culturally relevant class openings build belonging, strengthen faculty-student relationships, and increase student engagement from the first minutes." It struck me as I read it how the centre of focus and attention is on the instructor throughout. My approach is different, more direct, and (if I may say) less performative. Near the beginning of most of my talks or presentations, I say something like "this presentation is about you, not me." What that means, I say briefly, is that participants can change what's happening at any time - ask questions, make comments, challenge arguments, switch to a different topic. I tell them what I have planned, and ask if that's OK. Most audiences just go with the flow, which makes sense, because they've come to take advantage of my expertise, but sometimes they want to do something different, and I'm always game for that, because what we're doing is something mutual, together, and not 'me doing something to them'.

Web: [Direct Link] [This Post][Share]


Carving Linoleum Continues
Tom Woodward, Bionic Teaching, 2026/03/25


Icon

This article fits into a category of articles I might call 'humans still doing things machines could probably do better'. Recently I saw an article ask, "why do we still have children run in gym class when an Uber could take them the same distance in a minute?" I reflected on my own experiences being taught (badly) golf, curling and dancing by a gym teacher. Anyhow, in this article Tom Woodward practices carving linoleum to create prints. I like the "mediocre" birds, the misshapen elephants' ears, the revisited can of sardines. I like the mistakes, the scrawl, and the originality. It reminds us that evolution happens in the errors; design didn't produce the human brain, mistakes did. And as Woodward says, "Learn all the stuff. Do as many things as you can. Avoid repetitive stress syndrome in your brain, body, and soul."

Web: [Direct Link] [This Post][Share]


New, free language learning tool in Google Translate
Donald Clark, Donald Clark Plan B, 2026/03/25


Icon

You'll find this new tool in the mobile version of Translate, not the desktop version (I checked). For me the hardest part of language learning is understanding what speakers are speaking (my reading comprehension in a number of languages is way better). This tool is great for listening practice. I found it similar in many ways to Duolingo, but there's more user control over the exercises and activities (for example, 'make it harder' buttons). What it really needs, though, are voice controls. Then I could practice my Spanish and French while riding my bike. The tool, currently free, is in beta.

Web: [Direct Link] [This Post][Share]


Toward a Critical Agentic Systems Design Practice
Eryk Salvaggio, Cybernetic Forests, 2026/03/25


Icon

You could probably skip the first half of this talk, which makes the point that "As these (AI stochastic) parrots stack and interact with one another, we come to the pandemonium, this crisis of the stochastic flock: unmanaged, independently motivated systems competing or depending on one another, constrained by this mixture of probability and reference with cascading and uncontrollable results." I mean, it's a good image, but that's about it. The really practical value comes in the second part where Eryk Salvaggio describes "seven warnings for critical agentic design". They're short, but they take some thought. What does it mean to say "agents require air traffic control," for example? Or that "agents haunt and are haunted?" It's not your usual list of concerns; it's a layer deeper.

Web: [Direct Link] [This Post][Share]


Why high-talent teams still underperform
Chief Learning Officer, 2026/03/24


Icon

According to this article, "Teams can face challenges simply because everyone approaches work differently. Variations in work styles can manifest in planning and organization, decision-making speed and follow-through, communication style and preferred levels of collaboration." Wait a minute. If there are no learning styles (or so we're told over and over) how can there be working styles? The article is credited to 'Aperian' (as opposed to an author) so presumably an answer is forthcoming?

Web: [Direct Link] [This Post][Share]


Literally Nobody Understands AI. That's bad.
Michael Feldstein, e-Literate, 2026/03/24


Icon

I did enjoy Michael Feldstein's reflections on the education sector's general failure to understand AI (and I encourage him to post more open access blog posts). "AIs have weird failure modes that we don't understand yet," he writes. "That's likely because the industry has not been rigorously studying them yet. We need to recognize the reality of where we are so we can minimize risk of disasters... AI labs are heavily populated by two kinds of experts: Mathematicians and engineers. Neither discipline is trained on falsifiable theory as the standard for a good explanation. Mathematicians trust proofs. Engineers trust optimizations." Feldstein paints a picture here that (to elide the details) approximates 'science' with 'theory' and 'explanation'. My question back is: what if it's education (the academic research discipline) that has the definition of science wrong? What if we don't get neat theories and predictions? What if 'understanding' doesn't mean 'tell a causal story'? Related and important: prediction and causation in machine learning and neuroscience. Also: AI that explains its discoveries. And: what metaphor should drive AI research? The field is wide open here.

Web: [Direct Link] [This Post][Share]


Statistical Significance Isn’t the Same as Practical Significance
Rachel Banawa, NN/Group, 2026/03/24


Icon

"Statistical significance helps establish whether a result is reliable," writes Rachel Banawa, "while practical significance helps determine whether it is worth acting on." Banawa is writing in the context of user interface design, but there are of course implications of this distinction in wider learning and education research. An intervention, for example, might result in a 0.2 percent increase in test scores, but the effort to implement it might be too costly, or ethically questionable, or involving a score not really worth improving in the first place. That's why 'what works' research (related) needs to be taken with at least some scepticism. It's not good to be going faster if you're going faster in the wrong direction.
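The distinction is easy to see numerically: with a large enough sample, even a trivial difference becomes statistically significant. Here's a quick sketch using hypothetical numbers of my own (a two-sample z-test, not anything from Banawa's article):

```python
import math

# Hypothetical: a 0.2-point gain on a test with standard deviation 15,
# measured across two groups of one million students each.
diff, sd, n = 0.2, 15.0, 1_000_000

se = sd * math.sqrt(2 / n)    # standard error of the difference in means
z = diff / se                 # z statistic
# two-sided p-value from the standard normal CDF (via the error function)
p = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))
d = diff / sd                 # effect size (Cohen's d)

print(f"z = {z:.1f}, significant: {p < 0.05}, Cohen's d = {d:.3f}")
```

The result is wildly 'significant' (z around 9.4, p far below 0.05) yet the effect size is about 0.013 standard deviations - a difference nobody would reorganize a curriculum over.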

Web: [Direct Link] [This Post][Share]


I built a tool that shows live journalist requests so you can skip cold outreach
Reddit, 2026/03/24


Icon

More from the world of vibe-coded tools. I'm not recommending this one (because you actually have to pay to use it) but I'm mentioning it here because it speaks volumes about how the news gets its news in the first place. In a nutshell: people who have something to sell (consulting, products, books) employ agents to contact journalists. Journalists, meanwhile, put out calls across their networks (which are followed by these agents) for sources and spokespeople. The thing is, the less journalists spend on researching good sources, the worse the sources they actually find tend to be, which is how funding cuts erode quality journalism. I don't see this tool changing that equation any time soon, though I'm always on the lookout for something that might work on the journalists' end.

Web: [Direct Link] [This Post][Share]


Blocking the Internet Archive Won't Stop AI, But It Will Erase the Web's Historical Record
Joe Mullin, Electronic Frontier Foundation, 2026/03/24


Icon

"Imagine a newspaper publisher announcing it will no longer allow libraries to keep copies of its paper," writes Joe Mullin. "That's effectively what's begun happening online in the last few months." He elaborates, "turning off that preservation in an effort to control AI access could essentially torch decades of historical documentation over a fight that libraries like the Archive didn't start, and didn't ask for. If publishers shut the Archive out, they aren't just limiting bots. They're erasing the historical record." For the record, I agree. Via Slashdot.

Web: [Direct Link] [This Post][Share]


The Abstraction Fallacy: Why AI Can Simulate But Not Instantiate Consciousness
Alexander Lerchner, Google DeepMind, PhilPapers, 2026/03/25


Icon

This is a lovely paper (16-page PDF) that needs someone to really take the time to do a proper refutation. The main argument is based on the idea that for the symbols manipulated by an AI to mean anything, they must be based on actual physically experienced causal events. And "if an artificial system were ever conscious, it would be because of its specific physical constitution, never its syntactic architecture." One might ask, if this is true, why are some syntactic architectures (e.g., human neural networks) conscious while others (e.g., rocks) are not? Indeed, to push the questioning further, on what basis do we argue that there even is a syntactic architecture over and above the physical instantiation? One may as well argue (as I would) that the physical constitution doesn't 'have' consciousness, it is consciousness, and consciousness in physical systems arises if they are organized in certain ways (which are described by network topologies).

Web: [Direct Link] [This Post][Share]


Webclaw MCP server, 10 tools for web extraction, runs locally
Valerio (Massi), Reddit, 2026/03/24


Icon

I am not saying you should go out and use this application (goodness, no, please don't). What I am doing is using it to introduce the new trend of 'Claw' applications, a term used to designate automated AI agents like the original ClawBot (renamed MoltBot, renamed Open Claw). Here's another one: MetaClaw. "Inspired by how brains learn. Meta-learn and evolve your claw from every conversation in the wild. No GPU required." According to Ali Zulfiqar it "Intercepts every OpenClaw conversation and scores it, builds a skill bank from real usage not synthetic data, and auto-generates new skills every time the agent fails." Meanwhile Markus J. Buehler introduces ScienceClaw, "an open-source crowdsourcing AI swarm for decentralized scientific discovery, inspired by MIT's Infinite Corridor (Github)." Meanwhile, over at Cisco we have DefenseClaw, "the agentic governance layer that sits on top of OpenShell and includes Cisco's open sourced scanners into something a developer can deploy in under five minutes" (OpenShell is a sandboxed environment for OpenClaw, providing kernel isolation, deny-by-default network access, etc.).

Web: [Direct Link] [This Post][Share]


Is AI the solution to the problems that make higher education "ill" in the first place?
Junhong Xiao, David CL Lim, Journal of Applied Learning & Teaching, 2026/03/24


Icon

When a paper begins with a question like "Is AI always good for education?" I always worry. Nothing this complex is 'always' anything. And it makes me wonder why they've asked the question this way. Anyhow, the authors address this question from two perspectives: the potential for AI to improve curricula, and the potential for AI to improve access. The latter includes the possibility of personalization, automation and cost reduction. The authors argue in their very long conclusion, "put simply, the jury remains out on whether AI can meaningfully enhance higher education quality." Meanwhile, "In terms of widening access, AI is far from inexpensive." In a series of "takeaways for policymakers and institutional leaders" they offer a range of criticisms of AI, though I think more research and deeper consideration would have led to more plausible reflections.

Web: [Direct Link] [This Post][Share]


Introducing Keytrace
Orta Therox, Keytrace Blog, 2026/03/24


Icon

Is this the one? "Keytrace lets you cryptographically link your atproto/Bluesky account to external accounts across the web." It's based on the decentralized identifier (DID) specification, and you can use Keytrace to publish yours (if you have a Bluesky account, you have a DID, but there are other ways of getting one as well). I'm not posting a DID for myself just yet (Ben Werdmuller has done his) because I'm working on a plan to generate my own DID in my own software.

Web: [Direct Link] [This Post][Share]


Your Perspective or Mine?
Arthur Krystal, The American Scholar, 2026/03/23


Icon

I enjoyed this light treatment of subjectivism but as a refutation of the idea it fails utterly. The author traces its origin to the horrors of World War I, where people could quite justifiably reject the idea of 'absolute truth' (one sees a similar response to the horrors of the Thirty Years War). But Arthur Krystal's association of subjectivism in philosophers with their emotional responses fails to justify the distinction between a trauma-free world of facts and a trauma-filled world of emotions. You can't just say "the findings or conclusions of scholars and scientists existed independently of those who formulated them and those who interpreted them" and leave it hanging. Even if there is a 'reality' independent of our perception of it, there is an infinity of ways to understand that reality, and none of them has any a priori claim to being 'true'.

Web: [Direct Link] [This Post][Share]


Teachers Move Beyond AI Basics to More Sophisticated Instructional Uses
Sarah D. Sparks, Education Week, 2026/03/23


Icon

I think this article is mostly marketing for the National Academy for AI Instruction, "a five-year, $23 million partnership between the American Federation of Teachers and three of the largest AI developers - Anthropic, Microsoft, and OpenAI," but I also think it's an initiative worth noting. It's interesting to look at the article's use of language, for example, "We're in this race for teachers to get this knowledge" of more meaningful use of AI, said Randi Weingarten, the AFT president. "This will become the most disruptive technology in our time." And also, "A lot of teachers are doing this work at home, just wracking their brains." And also, "these tools, if gotten into the wrong hands, can be very dangerous for our students, for our profession, and for our jobs." And also, "there's still a lot of fear in the absence of federal guardrails on privacy, on safety, on disinformation, on academic freedom." This article is heavy on the emotional language.

Web: [Direct Link] [This Post][Share]


Histomat of F/OSS: We should reclaim LLMs, not reject them
Hong Minhee, on Things, 2026/03/23


Icon

So what about LLMs training on free and open content? "I don't believe the answer is rejection," writes Hong Minhee. "I believe it's reclamation." The core question here, says Minhee, is that of who owns the models. "Who benefits from the commons that trained them? If millions of F/OSS developers contributed their code to the public domain, should the resulting models be proprietary?" OK, I take the point. But here's the thing. Why do we assume we're 'one and done' with models trained on open content? The content isn't going anywhere. Anyone can train models on this content (unless, of course, we lock it down, which would simply preserve the status of the existing proprietary models). We act today as though there could only ever be one Anthropic Code or ChatGPT. But that's absurd. There are already many, many properly open source models. We will be drowning in them! Open content ultimately means genuinely open AI - if we allow it to. Via Ben Werdmuller.

Web: [Direct Link] [This Post][Share]


Experimenting with 'human.json'
Paul Walk, Paul Walk's Website, 2026/03/23


Icon

I have no idea whether this idea will take off, but I'm onboard with it. The idea is that people (real human people) who create their own websites can add a file called 'human.json' where they attest that their website is created by a human and vouch for other websites they know are created by humans. It should remind you of the old 'web of trust' and I now dub this the 'web of humans'. Using it is a three-step process: create a human.json file on your website (here's mine); put a link rel="human-json" into the header of your web page pointing to the human.json file; and then install the browser extensions for Firefox and Chrome to identify sites maintained by humans. Here are the full instructions. See also Alan Levine.
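To make the attestation idea concrete, here is a hypothetical sketch of what such a file might contain. The field names here are my own illustration, not Paul Walk's actual schema, so consult his instructions for the real format:

```json
{
  "name": "Example Person",
  "url": "https://example.com/",
  "statement": "This site is written and maintained by a human.",
  "vouches": [
    "https://friend.example/"
  ]
}
```

The page header then points at the file with a `link rel="human-json"` element, which is what the browser extension looks for when it identifies human-maintained sites.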

Web: [Direct Link] [This Post][Share]


A Rising New Era of Personal Tools
Ton Zijlstra, Interdependent Thoughts, 2026/03/23


Icon

This trend is going to have a far greater impact on learning technology than most people realize, I think. Here's the trend: "Vibecoding, and especially the Claude Code style of vibe coding, is bringing people to create their own tools, who weren't able to do so before... Tools built by people realising they are pretty predictable to themselves, and that such highly localised and specifically contextualised predictability now lends itself to automation by the intended user themself." When people can easily build their own tools, what becomes of educational technology, which is in large part based on the authoring and sale of such tools? See also a16z: Good news: AI Will Eat Application Software.

Web: [Direct Link] [This Post][Share]


matduggan.com
Matt Duggan, matduggan.com, 2026/03/23


Icon

This is a bit of an ironic article for a website with the subhead "It's JSON all the way down" but it's still worth a read. Here's the gist: websites and structured data were originally defined using 'markup' languages like HTML and XML (these are the ones with all the angle brackets). But these specifications became more and more complex, with the peak of absurd complexity reached in Microsoft's OOXML. So some developers came up with an alternative tool, called markdown, that could translate some very basic, easy-to-remember formatted text into HTML. "You can learn it in ten minutes, write it in any text editor on any device, read the source file without rendering it, diff it in version control, and convert it to virtually any output format."
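For anyone who hasn't seen it, here's my own quick illustration of the kind of plain-text shorthand being described - readable as-is, with no angle brackets in sight:

```markdown
# A heading

Some *emphasis*, a [link](https://example.com/), and:

- a list item
- another list item
```

A converter turns these into the corresponding HTML elements (`h1`, `em`, `a`, `ul`/`li`), which is exactly the trade Duggan describes: the author writes the easy form, the machine produces the verbose one.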

Web: [Direct Link] [This Post][Share]


This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.

Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.

Copyright 2026 Stephen Downes Contact: stephen@downes.ca

This work is licensed under a Creative Commons License.