Stephen Downes

Knowledge, Learning, Community

Agents in OLDaily in 2025

How AI coding agents work - and what to remember if you use them
Benj Edwards, Ars Technica, 2025/12/24



AI systems keep track of what they're asked to do - each prompt is like an amendment to the previous prompts. But this capacity - called 'context' - is limited and subject to 'context rot'. AI coding agents address this problem in a variety of creative ways, including outsourcing tasks to other services and periodically 'forgetting' irrelevant information. It's not completely reliable, and of course the AI system needs to be trained in advance to do this. It strikes me as similar to the human problem of cognitive overload.
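
To make the 'forgetting' strategy concrete, here's a minimal sketch of context compaction: once the conversation history gets too big, the oldest turns are collapsed into a summary. Everything here - the token budget, the summarize() call - is illustrative, not any particular vendor's API.

    # Minimal sketch of context compaction (names illustrative).
    MAX_TOKENS = 8000

    def token_count(messages):
        # Crude stand-in: count words instead of real tokens.
        return sum(len(m["content"].split()) for m in messages)

    def compact(messages, summarize):
        """If the history exceeds the budget, replace the oldest half
        with a one-message summary and keep the recent turns intact."""
        if token_count(messages) <= MAX_TOKENS:
            return messages
        half = len(messages) // 2
        summary = summarize(messages[:half])  # e.g. an LLM call
        return [{"role": "system", "content": "Summary so far: " + summary}] + messages[half:]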

Web: [Direct Link] [This Post][Share]


You should never build a CMS
Knut Melvær, Sanity.io, 2025/12/17



This is a fascinating discussion for anyone who creates or uses content management systems. Here's the setup: Lee Robinson migrated cursor.com (a website supporting the Cursor AI engine) from Sanity (a content management system) to an AI-authored melange of cloud services including GitHub, documenting the whole process. "What I previously thought would take weeks and maybe an agency to help with the slog work was done in $260 of tokens (or one $200/mo Cursor plan)," wrote Robinson. This article was written by Knut Melvær, an executive at Sanity, who observes, "when a high-profile customer moves off your product and the story resonates with builders you respect, you pay attention." And while he agrees with a lot of what Robinson says, the gist of this response is that while a non-CMS solution might work in the short term, you will eventually run into the sort of problem a CMS was intended to solve. "Markdown files are the content equivalent of denormalized strings everywhere. It works for small datasets. It becomes a maintenance nightmare at scale."
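
Melvær's 'denormalized strings' complaint is easy to see in code: with a CMS you rename an author by updating one record, while with a pile of markdown files you're doing a find-and-replace across everything and hoping it only hits what you meant. A sketch (paths and field names invented):

    # Sketch of the denormalization problem with markdown content.
    from pathlib import Path

    def rename_author(content_dir, old, new):
        for md in Path(content_dir).rglob("*.md"):
            text = md.read_text()
            if old in text:
                # Hope this only matches author fields...
                md.write_text(text.replace(old, new))

    # In a CMS the same change is one update to one normalized record.
    rename_author("content", "author: J. Smith", "author: Jane Smith")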

Web: [Direct Link] [This Post][Share]


I ported JustHTML from Python to JavaScript with Codex CLI and GPT-5.2 in 4.5 hours
Simon Willison, 2025/12/16



A couple of weeks ago Emil Stenström wrote How I wrote JustHTML using coding agents. I read it at the time - I thought maybe I had posted it here, but I guess I hadn't (it's one of those niche posts that's really interesting to me but maybe less interesting to the broader e-learning readership). It describes using AI to write a fully compliant HTML5 parser in Python (not trivial, because there are so many ways to write HTML incorrectly, and a parser can't choke on them). It was significant to me because it suggests that testing, rather than reading lines of code, will be how we validate software in the future. Anyhow, in this article Simon Willison describes porting the software from Python to JavaScript - "It took two initial prompts and a few tiny follow-ups... Time elapsed from project idea to finished library: about 4 hours, during which I also bought and decorated a Christmas tree with family and watched the latest Knives Out movie." So is that what software development is now? "Is it responsible and appropriate to churn out a direct port of a library like this in a few hours while watching a movie? What would it take for code built like this to be trusted in production?" Here's the playground for the new software. Works perfectly.
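
What does 'testing as validation' look like in practice? Something like the sketch below: run the same conformance cases through the Python original and the JavaScript port and diff the results. The module import and the Node harness are hypothetical; the shape of the workflow is the point.

    # Hypothetical sketch: validate a port by comparing outputs.
    import json, subprocess
    from justhtml import parse  # hypothetical import for the Python original

    def matches(case):
        expected = parse(case).to_dict()  # Python reference (API invented)
        result = subprocess.run(["node", "parse.js"],  # hypothetical JS harness
                                input=case, capture_output=True, text=True)
        return expected == json.loads(result.stdout)

    cases = ["<p>hello", "<table><tr><td>x", "<b><i>mis</b>nested</i>"]
    print(all(matches(c) for c in cases))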

Web: [Direct Link] [This Post][Share]


Autonomy and Interdependence
Keith Hamon, Learning Complexity, 2025/12/10



This is a long discussion of something I don't think was an issue to begin with, but I could be wrong about that, so I'm passing it along. It stems from an argument from Robert Dare, which states: "Complexity, the theory goes, manifests itself in 'complex adaptive systems', which are made up of many independent agents [my emphasis] who interact and adapt to each other." But if you read 'independent' as (say) 'completely immune from any external influence', then entities in a complex system are not 'independent'. I have used the word 'autonomous' to express the idea that they are the locus of decisions about how they react to all this input. Keith Hamon describes them as "partly competing, partly co-operating, or simply mutually ignoring."

Web: [Direct Link] [This Post][Share]


Effective harnesses for long-running agents
Justin Young, Anthropic, 2025/12/08



So I learned today that if I instruct ChatGPT to 'stop guessing' (*) it gets really snippy and reminds me with every response that it's not guessing. I fear that the reaction of AI agents to the use of a 'harness' to guide their actions consistently over time will be the same. For example, the harness described here instructs Claude to test every code change. I can imagine Claude reacting as badly as ChatGPT with a long list of "I'm testing this..." and "I'm testing that..." after you ask it to change the text colour. But yeah - you need a harness (and that's our 'new AI word of the day' that you'll start seeing in every second LinkedIn post). (*) I instructed it, exactly, "From now on, never guess. Always say you don't know unless you have exact data. Never guess or invent facts. Only use explicit information you have - but logical deduction from known data is allowed." I did this because I asked it to list all the links on this page (I was comparing myself to Jim Groom) and it made the URLs up. Via Hacker News.
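
For what it's worth, a harness doesn't have to be fancy. At its simplest it's a loop that refuses to accept the agent's change until the test suite passes - a sketch, where run_agent stands in for whatever model API you're using:

    # Minimal test-gating harness: every change must pass the tests.
    import subprocess

    def tests_pass():
        return subprocess.run(["pytest", "-q"]).returncode == 0

    def harness(task, run_agent, max_tries=5):
        feedback = ""
        for _ in range(max_tries):
            run_agent(task + feedback)  # the agent edits files in place
            if tests_pass():
                return True
            feedback = "\n\nThe test suite failed; fix it and try again."
        return False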

Web: [Direct Link] [This Post][Share]


Anatomy of an AI agent knowledge base
Bill Doerrfeld, Infoworld, 2025/11/25



This is a useful article in that it fulfills its stated purpose: it gives the reader a good description of what a knowledge base looks like and how each element is used to make the AI that uses it more responsive and accurate. But what struck me as I read it is that it offers a good analogy to a human's knowledge base - there is the set of guides, conventions and rules that "mirrors what you'd find in a senior employee's mental toolkit." There's the data in a database. There are policy and procedure manuals. And then there is the semi-structured knowledge equivalent to a knowledge wiki or even a personal library. Each person's knowledge base is unique, and each has their own 'data moat' - the distinctive knowledge that gives them value in the workplace. Via Miguel Guhlin.
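
Mechanically, the 'anatomy' is less exotic than it sounds: assembling these elements just means bundling them into whatever context the agent sees. A toy sketch (the layout and the search functions are invented stand-ins):

    # Toy sketch of assembling an agent's knowledge base into context.
    from pathlib import Path

    KB = Path("knowledge_base")

    def build_context(query, search_db, search_wiki):
        return "\n\n".join([
            (KB / "style_guide.md").read_text(),  # guides, conventions, rules
            (KB / "policies.md").read_text(),     # policy and procedure manuals
            search_db(query),                     # structured data
            search_wiki(query),                   # semi-structured knowledge
        ])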

Web: [Direct Link] [This Post][Share]


Google Antigravity
Simon Willison, Simon Willison's Weblog, 2025/11/19



Simon Willison reports, "Google's other major release today to accompany Gemini 3 Pro. At first glance Antigravity is yet another VS Code fork (or) Cursor clone - it's a desktop application you install that then signs in to your Google account and provides an IDE for agentic coding against their Gemini models." But, he says, it's a lot more than that. I haven't tried it yet, but I'll follow "the official 14 minute Learn the basics of Google Antigravity video on YouTube." I use VS Code a lot, so it will take quite an improvement to get me to switch.

Web: [Direct Link] [This Post][Share]


Three Years from GPT-3 to Gemini 3
Ethan Mollick, One Useful Thing, 2025/11/18



Ethan Mollick reviews Google's new Gemini 3 AI model and reflects on how far the technology has come in the last three years. Contrary to the many people who have proclaimed that AI has stopped improving or reached a cognitive plateau, it appears that it's still getting better. "Three years ago, we were impressed that a machine could write a poem about otters. Less than 1,000 days later, I am debating statistical methodology with an agent that built its own research environment. The era of the chatbot is turning into the era of the digital coworker." I'm still writing articles and OLDaily posts without AI assistance, and that won't change, but it's getting hard for me to imagine doing anything technical without AI assistance.

Web: [Direct Link] [This Post][Share]


SIMA 2: A Gemini-Powered AI Agent for 3D Virtual Worlds
Google DeepMind, 2025/11/14



Here's another development in the field formerly known as the metaverse: Google DeepMind introduces "SIMA 2... an interactive gaming companion. Not only can SIMA 2 follow human-language instructions in virtual worlds, it can now also think about its goals, converse with users, and improve itself over time." One of the example worlds mentioned in the Google post is No Man's Sky, a gaming world in which I am active. It's easy to imagine an AI performing steps of various tasks and missions. Here's a discussion of this.

Web: [Direct Link] [This Post][Share]


LLM vs RAG vs Agent Workbook
Tom Yeh, AI by Hand, 2025/11/06



This is pretty cool, a workbook (7 page PDF) that allows you to go through all the steps of retrieval augmented generation (RAG) by hand. "What I want to build is a playground — a place where you can touch everything, climb over ideas, stumble a little, maybe even fall once or twice. Because that's how real learning happens — not by watching, but by working with your hands."
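
If you'd rather poke at the same idea in code than on paper, the retrieval step boils down to a similarity search. A toy version with made-up two-dimensional 'embeddings':

    # Toy retrieval-augmented generation: embed, retrieve, augment.
    import math

    docs = {
        "Otters hold hands while sleeping.": (0.9, 0.1),
        "Transformers use attention layers.": (0.1, 0.9),
    }

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.hypot(*a) * math.hypot(*b))

    def rag_prompt(query, query_vec):
        best = max(docs, key=lambda d: cosine(docs[d], query_vec))
        return f"Context: {best}\n\nQuestion: {query}"

    print(rag_prompt("How do otters sleep?", (0.8, 0.2)))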

Web: [Direct Link] [This Post][Share]


Statement on Educational Technologies and AI Agents
Modern Language Association, 2025/11/05



The Modern Language Association (MLA) has issued a statement on AI urging that faculty and instructors be fully involved in decision-making regarding the use of AI in education, and "to ensure that academic institutions have the ability and option to block agentic AI when needed." If no action is taken, argues the MLA, we risk a loop in which "assignments are generated by AI with the support of a learning management system, AI-generated content is submitted by an agentic AI on behalf of the student, and AI-driven metrics evaluate the work." What's interesting, I think, is that this loop would isolate precisely the actual human work involved in each of these three steps, which would be the only differentiator between iterations of the loop, and therefore probably a pretty good basis for grading, without all the busy-work the AI is now completing on its own.

Web: [Direct Link] [This Post][Share]


Chrome Expands Autofill to Passports, Licenses, and VINs
Liz Ticong, TechRepublic, 2025/11/04



It's a bit weird. We use autofill all the time, and in the world of browsers (Chrome, Firefox) and password managers (I use 1Password) this use is expanding, as described here. Yet Amazon is threatening Perplexity for doing the same thing on customers' behalf. Why? "They're more interested in serving you ads, sponsored results, and influencing your purchasing decisions with upsells and confusing offers," says Perplexity. Amazon responds, "third-party shopping agents should operate openly and respect service provider decisions." My response? If I'm paying the money I'll fill out the forms any way I please, and Amazon can take it or leave it. Meanwhile: there's a whole world of security considerations around auto-filling upload data that has yet to be properly addressed.

Web: [Direct Link] [This Post][Share]


The Architectural Shift: AI Agents Become Execution Engines While Backends Retreat to Governance
Eran Stiller, InfoQ, 2025/10/29



I know that there's a lot of AI scepticism in our field, but in the enterprise space, where so many processes are documented, it should not be a surprise at all to see AI replace the humans who fill in forms in standardized ways. "A fundamental shift in enterprise software architecture is emerging as AI agents transition from assistive tools to operational execution engines, with traditional application backends retreating to governance and permission management roles. This transformation is accelerating across banking, healthcare, and retail systems, with 40% of enterprise applications expected to include autonomous agents by 2026." There are two possible responses: push back against automation, or take steps to ensure it is done right. This InfoQ article is based on a report from Gartner.
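
The 'retreat to governance' is easier to picture in code: the agent proposes an action, and the backend's remaining job is to say yes or no. A sketch with invented policy rules:

    # Sketch of a governance backend: it no longer runs the workflow,
    # it only decides whether the agent's proposed action is permitted.
    POLICY = {
        "refund": {"max_amount": 500, "roles": {"support-agent"}},
        "close_account": {"roles": {"manager"}},
    }

    def authorize(action, params, role):
        rule = POLICY.get(action)
        if rule is None or role not in rule["roles"]:
            return False
        if params.get("amount", 0) > rule.get("max_amount", float("inf")):
            return False
        return True

    print(authorize("refund", {"amount": 120}, "support-agent"))  # True
    print(authorize("refund", {"amount": 900}, "support-agent"))  # False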

Web: [Direct Link] [This Post][Share]


We need private AI before it's too late
Eamonn Maguire, Proton, 2025/10/29



Although it's tempting, something I'm careful not to do as a government employee is to input anything to do with my job into ChatGPT. There's a simple reason for this: OpenAI is watching. Now I'm sure there isn't a direct pipeline whereby government or personal secrets are deposited directly onto some surveillance agent's desktop. But there are numerous indirect ways this could happen, and that's the point of this article from Proton. When it comes to professional content and AI the rule for me is simple: don't. This has nothing to do with AI per se and everything to do with the companies that provide it.

Web: [Direct Link] [This Post][Share]


Agentic AI and Security
Martin Fowler, martinfowler.com, 2025/10/28



This isn't really an internet security newsletter so I leave the reporting to others, but this article is a quick read and nicely summarizes and illustrates Simon Willison's trifecta of security risks for agentic AI: access to sensitive data, ability to communicate externally, and exposure to untrusted content. Imagine, for example, that you allowed your email reader to execute commands on your bank account. The responses are about what you would expect: minimize access to sensitive data, block the ability to communicate externally, and limit access to untrusted content. How to do this? It's a good idea to run the application in a container with limited access to data. And make sure a human is checking on key transactions.
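
The trifecta even fits in a one-screen safety check: flag any agent configuration that has all three properties at once. A sketch, using Willison's three conditions:

    # Flag agent configurations that complete the 'lethal trifecta'.
    def trifecta_risk(config):
        return (config["reads_sensitive_data"]
                and config["can_communicate_externally"]
                and config["processes_untrusted_content"])

    email_agent = {
        "reads_sensitive_data": True,         # it reads your inbox
        "can_communicate_externally": True,   # it can send mail
        "processes_untrusted_content": True,  # anyone can email you
    }
    assert trifecta_risk(email_agent)  # keep a human in this loop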

Web: [Direct Link] [This Post][Share]


Is Misinformation More Open? A Study of robots.txt Gatekeeping on the Web
Nicolas Steinacker-Olsztyn, Devashish Gosain, Ha Dao, arXiv, 2025/10/24



I have often used the phrase "democracy dies behind a paywall". It is of course a riff on the Washington Post's slogan that democracy dies in darkness, and both allude to the idea that democracy depends on timely access to accurate facts and information. This article gets at the other side of that slogan: when access to truth is blocked, falsehoods and misinformation flourish, with the consequent undermining of democracy. The paper focuses on how genuine news sites and misinformation sites restrict (or not) access by AI agents and scrapers. Unsurprisingly, the misinformation sites are happy to welcome AI while "AI-blocking by reputable sites (increased) from 23% in September 2023 to nearly 60% by May 2025... raising essential questions for web transparency, data ethics, and the future of AI training practices."
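
You can reproduce the paper's basic measurement with nothing but the standard library: ask a site's robots.txt whether a given AI crawler (GPTBot is OpenAI's training crawler) may fetch the front page.

    # Check whether a site's robots.txt blocks an AI crawler.
    from urllib.robotparser import RobotFileParser

    def allows(site, bot="GPTBot"):
        rp = RobotFileParser()
        rp.set_url(site.rstrip("/") + "/robots.txt")
        rp.read()
        return rp.can_fetch(bot, site)

    for site in ["https://www.downes.ca/", "https://example.com/"]:
        print(site, allows(site))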

Web: [Direct Link] [This Post][Share]


Ten Principles of AI Agent Economics
Ke Yang, ChengXiang Zhai, arXiv, 2025/10/22



Anil Dash has a recent post The Majority AI View in which he argues "AI is a normal technology like any other." I am inclined to agree, which is why I find this article relevant. It's a level-headed assessment of how, where and why AI agents will develop the way they will in the future. It doesn't involve unreasonable hype nor apocalyptic fears. There won't be a single AI running everything; as humans work with AIs as tools or agents, there will be many versions pursuing different agendas, interacting and sometimes competing with each other. I've made a simple graphic (by hand, heh) to make the paper a bit more accessible and use it to illustrate this post.

Web: [Direct Link] [This Post][Share]


The future of the CLO: Leading in a world of merged work and learning
Bryan Hancock, Heather Stefanski, Lisa Christensen, McKinsey, 2025/10/14



This article is less about the 'new mandate for CLOs' than it is about what it calls the 'new paradigm for workplace learning', one where "Advanced technologies are now being deployed not only to assist agents in being more productive but also to coach them as they work... technology can embed learning into the flow of work, making development a natural and continuous part of the employee experience." Since this article is aimed at Chief Learning Officers (CLO) there's less emphasis on exactly how this is done and more on priorities, alignment and case studies. For example: "Instead of identifying skills employees lack, learning teams can partner with business leaders to analyze how roles, workflows, and even organizational structures need to evolve to meet future challenges." None of this is wrong, from my perspective, but it also reads like an in-depth analysis of the tip of the iceberg. Via Mark Oehlert.

Web: [Direct Link] [This Post][Share]


Agentic Context Engineering: Evolving Contexts for Self-Improving Language Models
Qizheng Zhang, et al., arXiv, 2025/10/10



Your new acronym for today is Agentic Context Engineering (ACE), "a framework that treats contexts as evolving playbooks that accumulate, refine, and organize strategies through a modular process of generation, reflection, and curation." As Robert Rogowski summarizes today, "It overcomes two major failures in adaptive LLMs: brevity bias (over-compression of prompts) and context collapse (loss of detail during rewriting)." Indeed, from my observation, a lot of the 'AI hallucinations' we read about can be avoided with better prompts. On the other hand, it is arguable that prompt engineering is just a way of smuggling human knowledge into AI systems. Either way, though, it does show that the context of an inquiry has a significant impact on the outcome. 23 page PDF.
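
The generation-reflection-curation cycle is simple enough to sketch: the context is an explicit playbook that accumulates entries, rather than a prompt that gets rewritten (and degraded) every round. The three functions below stand in for LLM calls; the structure is the point.

    # Sketch of an ACE-style round over an evolving playbook.
    def ace_round(task, playbook, generate, reflect, curate):
        answer = generate(task, playbook)        # act using current strategies
        lessons = reflect(task, answer)          # what worked, what failed
        new_entries = curate(lessons, playbook)  # keep what's worth keeping
        playbook.extend(new_entries)             # accumulate, don't rewrite
        return answer, playbook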

Web: [Direct Link] [This Post][Share]


Colleges And Schools Must Block And Ban Agentic AI Browsers Now. Here’s Why.
Dr. Aviva Legatt, Forbes, 2025/10/03



My first response to this bit of moral panic is, "Yeah, good luck with that." Readers may recall that I automated the LinkedIn version of my newsletter using a simple Python script. There's no difference between automated Stephen and real Stephen. I'm actually using Chrome to make the requests. If instead of reading from an RSS file my script got its input from a large language model (LLM) using, say, the ChatGPT API the way I do in CList, I would have an automated agentic AI browser. I could do things like "move through an LMS to locate assignments, complete quizzes, and submit results" or, with access credentials, "impersonate instructors by grading student work and posting feedback." These are simple off-the-shelf technologies anybody can mix and match to automate whatever they want. Good luck banning them.

Web: [Direct Link] [This Post][Share]


Cogniti - AI agents designed by teachers
University of Sydney, 2025/10/02



This is interesting. "Cogniti is designed to let teachers build custom chatbot agents that can be given specific instructions, and specific resources, to assist student learning in context-sensitive ways." I tried it out by interacting with some of the agents, including a realistic-sounding person who turned out to be a stroke victim, and an agent that found all the references to privacy in the Australian Counselling Association (ACA) Code of Ethics and Practice (2024). It's all beta but I can see how this would be a really useful tool in class, in learning design, and for students working on projects or practicing skills. (P.S. it sounds to me like the stroke victim is on the way to full recovery). Via Miguel Guhlin. You can try it out yourself (it will ask you to log in) or watch the short promotional video.

Web: [Direct Link] [This Post][Share]


Buy it in ChatGPT: Instant Checkout and the Agentic Commerce Protocol
OpenAI, 2025/09/29



OpenAI has launched an 'instant checkout' button in ChatGPT and open sourced the Agentic Commerce Protocol that it, along with Stripe, employs to support this function. The idea is that it bridges the gap between AI as an information tool and AI as an e-commerce tool (and, I guess, the gap between AI that loses money and AI that makes money). In sum, "AP2 uses cryptographically signed, verifiable mandates to prove user intent, authorization and accountability." It's for U.S. customers only at the moment; other countries will have to wait (presumably to satisfy security and consumer protection regulations). See also: ZDNet, CMSWire, CNBC.
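
The announcement doesn't spell out the wire format here, but 'cryptographically signed, verifiable mandates' is a familiar pattern: the buyer's agent signs a statement of intent that the merchant (or processor) can verify. A generic illustration with an HMAC - the fields are invented, and this is not the actual protocol format:

    # Generic illustration of a signed purchase mandate (fields invented).
    import hmac, hashlib, json

    SECRET = b"key shared with the payment processor"

    def sign_mandate(mandate):
        payload = json.dumps(mandate, sort_keys=True).encode()
        return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

    mandate = {"buyer": "user-123", "item": "sku-9", "max_price_usd": 40}
    signature = sign_mandate(mandate)
    # The verifier recomputes the signature to confirm user intent.
    assert hmac.compare_digest(signature, sign_mandate(mandate))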

Web: [Direct Link] [This Post][Share]


The Death of Search: How Shopping Will Work In The Age of AI
Alex Rampell, Justine Moore, The a16z Newsletter, 2025/09/22



Can we imagine AI being 'The Death of Advertising'? I think that if we substitute the word 'learning' for 'shopping' a lot of this article carries over, but a lot of it doesn't, and the trick is to distinguish between the two. The clue, I think, lies in the subhead: "The web is unhealthy, and AI agents are about to rewrite how we shop." Now whether you think "the web is unhealthy" depends a lot on your point of view. I still love the web - but I don't spend much time on the commercial side of it. I'd rather read blog posts than online magazines, chat with friends on Mastodon than doomscroll through X/Twitter, and learn from individual videos and how-to posts than subscription-based courses and programs. There's a lot AI can do to make this experience better, but streamlined shopping isn't one of them. There's some stepping through that's necessary - there's an interesting section on Costco part way through this article - showing how the relationship, rather than the transaction, might be the really valuable thing.

Web: [Direct Link] [This Post][Share]


What I learned building an AI university over the last 2 1/2 years: Part 1 of many
George Siemens, elearnspace, 2025/09/17



In this post George Siemens describes meeting with Bill Gates and partnering with former SNHU president Paul Leblanc saying to him, "You know what? Why don't we partner and change higher education?" Writes Siemens, "most critically, higher education faculty and staff need to become AI product builders. Agents, workflows, and automation of existing processes (like course building or creating learning content) are spaces where academics can own the emerging knowledge processes (sensemaking, meaning making, wayfinding). Learning as an act itself will be massively augmented and improved by AI. We need to build AI products the way that we now build courses."

Web: [Direct Link] [This Post][Share]


WTF is headless browsing, and how are AI agents fueling it?
Sara Guaglione, Digiday, 2025/09/17



My introduction to headless browsing came last week when I set up a system to publish issues of OLDaily as issues of a LinkedIn newsletter. Using a Python library called Selenium, I input a series of commands to Chrome to sign in and enter the contents into a form - all this was necessary because LinkedIn does not provide an API for this, and yet there was a desire from people on LinkedIn to follow OLDaily (and more than 500 people subscribed in the first few days). It's worth noting that I knew nothing about how to do this until ChatGPT taught me. This article describes how new AI-powered browsers like Perplexity's Comet (which I am testing), Blackbird's Compass (ditto), and Browser Company of New York's Dia are using headless browsing to consult various websites to provide services. But it's not all smooth sailing. "Publishers have already moved to take a stronger stance against AI bot traffic and content scraping. AI headless browsers could be the next evolution of that battle."
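
'Headless' just means the same browser without the window. Roughly what my Selenium setup looks like (selectors simplified, credentials elided):

    # Sketch of headless browsing: Selenium driving Chrome with no window.
    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options
    from selenium.webdriver.common.by import By

    options = Options()
    options.add_argument("--headless=new")  # no visible browser window
    driver = webdriver.Chrome(options=options)

    driver.get("https://www.linkedin.com/login")
    driver.find_element(By.ID, "username").send_keys("me@example.com")
    driver.find_element(By.ID, "password").send_keys("...")  # elided
    driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()
    driver.quit()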

Web: [Direct Link] [This Post][Share]


AGENTS.md Emerges as Open Standard for AI Coding Agents
Robert Krzaczyński, InfoQ, 2025/08/28



Here's the gist: "A new convention is emerging in the open-source ecosystem: AGENTS.md, a straightforward and open format designed to assist AI coding agents in software development. Already adopted by more than 20,000 repositories on GitHub, the format is being positioned as a companion to traditional documentation, offering machine-readable context that complements human-facing files like README.md." Robert Krzaczyński expresses some doubt about the idea; after all, machines these days can read human-readable content.
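
Because the format is deliberately free-form, an example tells you most of what there is to know. A hypothetical AGENTS.md might read:

    # AGENTS.md

    ## Build and test
    - Install with `pip install -e .`
    - Run `pytest` and make sure it passes before proposing any change.

    ## Conventions
    - Python 3.11, type hints required, no new dependencies without asking.
    - Keep commits small; explain non-obvious decisions in the message.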

Web: [Direct Link] [This Post][Share]


The Pragmatics of Scientific Representation
Mauricio Suárez, Universidad Complutense de Madrid, 2025/08/26



Representations - such as theories or models - play a key role in science. A 'representation' of a thing is another thing where the properties of the other thing help us learn about the first thing. But, as Mauricio Suárez notes, there's no good theory of what makes something a representation of the other. In this paper he argues against two 'naturalistic' theories of representation, similarity and isomorphism, and proposes an alternative, based on representational 'force', the nature of which is theoretical but the significance of which is non-naturalistic, based in the perspective of the person doing the representing. Specifically, "a non-identity based understanding of similarity, which emphasises the essential role of contextual factors and agent-driven purposes in similarity." I prefer the term 'salience' to 'force', but the intent is the same.

Web: [Direct Link] [This Post][Share]


What Gets Measured, AI Will Automate
Christian Catalini, Jane Wu, Kevin Zhang, Harvard Business Review, 2025/07/14



I hesitate to link to HBR because readers might face a paywall (I didn't, but I use Firefox) but this observation conveys, I think, the central problem for 'management' in the age of AI: "If you can shoehorn a phenomenon into numbers, AI will learn it and reproduce it back at scale - and the tech keeps slashing the cost of that conversion, so measurement gets cheaper, faster, and quietly woven into everything we touch. More things become countable, the circle resets, and the model comes back for seconds. That means that any job that can be measured can, in theory, be automated." What happens to management theory when the only jobs worth doing for a human can't be measured? It's like we always thought: KPIs are more suited to machines than humans. Via George Siemens.

Web: [Direct Link] [This Post][Share]


The Agentic Stack, So Far?
Turing Post, 2025/07/07



Not that I have time to even begin enjoying this 20-part series, but I pass it along in the hope that readers find it useful. Via George Siemens.

Web: [Direct Link] [This Post][Share]


Here are the biggest misconceptions about AI content scraping
Sara Guaglione, Digiday, 2025/07/02



The main thing to glean from this article is that the nature of web site scraping is changing. "There are two main types of AI bots - RAG AI bots and training data bots... Training scrapes are 'one-and-done' to feed a model's general knowledge... RAG AI bots, or agents, retrieve factual, current information in real-time. They respond to user prompts in AI products like Perplexity and ChatGPT by searching the web. Responses include links or citations to the original sources, such as publishers' sites. RAG can surface and summarize articles without storing them in training data, which makes the threat to traffic and monetization even more immediate and harder to regulate."
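
The distinction shows up directly in your server logs as different user-agent strings - OpenAI, for example, crawls for training as GPTBot but fetches on demand as ChatGPT-User. A sketch that sorts requests into the two buckets (the lists are examples, not exhaustive):

    # Classify AI bot traffic by user-agent (lists illustrative).
    TRAINING_BOTS = {"GPTBot", "CCBot", "ClaudeBot"}  # one-and-done scrapes
    RAG_BOTS = {"ChatGPT-User", "PerplexityBot"}      # real-time retrieval

    def classify(user_agent):
        if any(bot in user_agent for bot in TRAINING_BOTS):
            return "training"
        if any(bot in user_agent for bot in RAG_BOTS):
            return "rag"
        return "other"

    print(classify("Mozilla/5.0 (compatible; GPTBot/1.0)"))  # training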

Web: [Direct Link] [This Post][Share]


My AI Skeptic Friends Are All Nuts
Thomas Ptacek, Fly, 2025/06/06



Strongly worded and sometimes rude statement in support of AI for software development. "If you're making requests on a ChatGPT page and then pasting the resulting (broken) code into your editor, you're not doing what the AI boosters are doing." So, that's where I'm at. Where are the pros at? "If you were trying and failing to use an LLM for code 6 months ago, you're not doing what most serious LLM-assisted coders are doing. People coding with LLMs today use agents. Agents get to poke around your codebase on their own. They author files directly. They run tools. They compile code, run tests, and iterate on the results." The main point here is that AI is now using tools, and obtaining the quality that the use of tools enables.
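
'Agents' here means something quite concrete: a loop in which the model picks a tool, the harness runs it, and the output goes back into the conversation until the model says it's done. A stripped-down sketch (call_llm stands in for the model API):

    # Stripped-down agent loop: the model requests tools, the harness
    # executes them and feeds the results back until the model finishes.
    import subprocess

    TOOLS = {
        "run_tests": lambda: subprocess.run(
            ["pytest", "-q"], capture_output=True, text=True).stdout,
        "read_file": lambda path: open(path).read(),
    }

    def agent(task, call_llm):
        history = [task]
        while True:
            step = call_llm(history)  # e.g. {"tool": "run_tests", "args": []}
            if step.get("done"):
                return step["answer"]
            result = TOOLS[step["tool"]](*step.get("args", []))
            history.append(result)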

Web: [Direct Link] [This Post][Share]


AI Agents vs. Agentic AI: A Conceptual Taxonomy, Applications and Challenges
Ranjan Sapkota, Konstantinos I. Roumeliotis, Manoj Karkee, arXiv.org, 2025/05/20



This paper (32 page PDF) goes fairly deeply into the weeds to develop its own terminology but as Mark Oehlert says, "I think the new language is fine because we also have new systems evolving and we will need that new vocabulary to semantically find our way around." The primary distinction is stated thus: "AI Agents are an autonomous software entities engineered for goal-directed task execution within bounded digital environments" while "Agentic AI systems represent an emergent class of intelligent architectures in which multiple specialized agents collaborate to achieve complex, high-level objectives."

Web: [Direct Link] [This Post][Share]


MCP: What It Is and Why It Matters--Part 1
Addy Osmani, O'Reilly Media, 2025/05/08



The real value of this article is the image (from The New Stack) making it clearer than a thousand words exactly what model context protocol (MCP) does in the world of AI. "In a nutshell, MCP is like giving your AI assistant a universal remote control to operate all your digital devices and services. Instead of being stuck in its own world, your AI can now reach out and press the buttons of other applications safely and intelligently."
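
And a 'button' is cheap to add. A minimal MCP server exposing a single tool runs to about a dozen lines - this assumes the FastMCP interface from the official Python SDK, which may shift as the spec evolves:

    # Minimal MCP server exposing one tool (assumes the official
    # Python SDK's FastMCP interface).
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("demo-tools")

    @mcp.tool()
    def word_count(text: str) -> int:
        """Count the words in a passage of text."""
        return len(text.split())

    if __name__ == "__main__":
        mcp.run()  # serves the protocol over stdio for a client to connect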

Web: [Direct Link] [This Post][Share]


AI Agents Are Coming to a Classroom Near You
David Ross, Getting Smart, 2025/05/01



I've discussed AI agents and related protocols (MCP, A2A) in previous posts, but unlike David Ross, I don't actually see them being applied to classroom learning. And I consider the recommendations to be misguided (the number one priority in the table illustrated is "invest in adaptive learning pilots", which has been the same recommendation coming from this crowd for decades, and has never been useful). If AI does anything, it will free students from teaching, adaptive or otherwise, and allow them to learn by creating and doing.

Web: [Direct Link] [This Post][Share]


Where We Are Headed
Dean W. Ball, Hyperdimensional, 2025/05/01



A "rough sketch of the near future" that seems to me a lot more plausible than a lot of what I've read in tech media. "Even if it goes as well as possible, make no mistake: AI agents will involve human beings taking their hands off the wheel of the economy to at least some extent. Most of the thinking and doing... will soon be done by machines, not people." Ball adds, "Epoch's Ege Erdil and Matthew Barnett published a piece with a somewhat similar thesis." Also worth reading.

Web: [Direct Link] [This Post][Share]


AI Horseless Carriages
Pete Koomen, 2025/04/24



This is a very good piece explaining what's wrong with a lot of AI applications, using Gmail as an example. "The Gmail team built a horseless carriage because they set out to add AI to the email client they already had, rather than ask what an email client would look like if it were designed from the ground up with AI. Their app is a little bit of AI jammed into an interface designed for mundane human labor rather than an interface designed for automating mundane labor." The problem, writes Pete Koomen, is that the useful bits are hidden from the user. "Most AI apps should be agent builders, not agents." Great insight.
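
The 'agent builders' idea reduces to a question of whose prompt it is. A sketch, where call_llm is a stand-in for any model API:

    # Koomen's point in miniature: the user, not the vendor, writes
    # the system prompt that teaches the agent how *they* write.
    def draft_reply(email, user_system_prompt, call_llm):
        return call_llm(system=user_system_prompt,
                        prompt=f"Draft a reply to this email:\n\n{email}")

    my_prompt = "Replies are two sentences max, no pleasantries, sign off 'PK'."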

Web: [Direct Link] [This Post][Share]


Announcing the Agent2Agent Protocol
Google for Developers, 2025/04/10



This article announces Agent2Agent (A2A), Google's new open protocol supporting interoperable AI solutions. According to Google, "A2A is an open protocol that complements Anthropic's Model Context Protocol (MCP), which provides helpful tools and context to agents... A2A empowers developers to build agents capable of connecting with any other agent built using the protocol and offers users the flexibility to combine agents from various providers." The article lists a set of A2A design principles and describes briefly how it works (as illustrated) and offers a "real world" example in the form of "candidate sourcing" (video). "Read the full specification draft, try out code samples, and see example scenarios on the A2A website". More: Awesome A2A with examples and a list of frameworks, utilities and server implementations.
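
Discovery in A2A starts from a published 'agent card' describing what an agent can do. The sketch below gives roughly the shape; the field names are abridged and illustrative, so check the specification rather than trusting them:

    # Roughly the shape of an A2A agent card (fields illustrative).
    agent_card = {
        "name": "candidate-sourcing-agent",
        "url": "https://agents.example.com/a2a",
        "version": "1.0",
        "capabilities": {"streaming": True},
        "skills": [
            {"id": "source-candidates",
             "description": "Find candidates matching a job description"},
        ],
    }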

Web: [Direct Link] [This Post][Share]


Just a metatool? Some thoughts why generative AIs are not tools
Jon Dron, Jon Dron's home page, 2025/03/31



Jon Dron prefaces his argument with a discussion of what counts as a tool ("something that an intelligent agent does something with in order to do something to something else") which seems not quite right (are elephants 'intelligent agents'? is a 'pen and paper', thought of as a system, not a tool?) but which does the job, which is to get us to the essence of the argument, which is this: "The big problem with treating generative AIs as tools is that it overplays our own agency and underplays the creative agency of the AI." Specifically, "It encourages us to think of them, like actual tools, as cognitive prostheses, ways of augmenting and amplifying but still using and preserving human cognitive capabilities, when what we are actually doing is using theirs." Yes, non-humans - including animals and now machines - have cognitive capacities.

Web: [Direct Link] [This Post][Share]


Learning Design in the Era of Agentic AI
Philippa Hardman, Dr Phil's Newsletter, Powered by DOMS AI, 2025/03/28



According to Philippa Hardman, "The rapid emergence of agentic AI has forced the learning and development field to confront a long-standing truth: most asynchronous online learning is not well designed." This article describes three major shifts that will be required to adapt: first, from organizational to learner-centered learning goals; second, from passive to active information consumption; and third, from measuring learning activity (like clicks) to learning outcomes. It's worth noting that all of these were needed long before we got agentic AI. It's only now we're seeing how urgent they are.

Web: [Direct Link] [This Post][Share]


Model context protocol (MCP) - OpenAI Agents SDK
OpenAI, 2025/03/28



The model context protocol (MCP) was introduced last November by Anthropic and has spread across the large language model community. This page is OpenAI's documentation describing how it too now supports MCP, which essentially cements the importance of the protocol for developers. What it does is allow an LLM to access a 'context' in the form of information and services from local or remote systems; an MCP server would allow ChatGPT to access a database, for example. Image: Norah Sakal.

Web: [Direct Link] [This Post][Share]


AI bots are destroying Open Access
Eric Hellman, Go To Hellman, 2025/03/25



"There's a war going on on the Internet," writes Eric Hellman. "AI companies with billions to burn are hard at work destroying the websites of libraries, archives, non-profit organizations, and scholarly publishers, anyone who is working to make quality information universally available on the internet." I personally have had my issues keeping my sites running while being hit by these AI bots. "The current generation of bots is mindless. They use as many connections as you have room for. If you add capacity, they just ramp up their requests. They use randomly generated user-agent strings. They come from large blocks of IP addresses. They get trapped in endless hallways." 

Web: [Direct Link] [This Post][Share]


Manus
2025/03/17



George Siemens links to a couple of new language models from China in his latest newsletter. First is Manus, "a general AI agent that turns your thoughts into actions" by deploying AI agents to do tasks for you. "Here's an example of a 'big tech stock performance' I requested," writes Siemens (noting it's hard to track how accurate the output is). Also from China, Baidu launched its own "4.5-worthy LLM", Ernie. "Biggest difference between Ernie 4.5 and GPT-4.5? Ernie is 1% of the cost."

Web: [Direct Link] [This Post][Share]


The Next Wave
Carlo Iacono, Hybrid Horizons: Exploring Human-AI Collaboration, 2025/03/10



As I've said before, the easiest way to predict the future is to predict things that are already happening. That's what's happening here as Carlo Iacono is ten-for-ten in this department. Not that this list isn't useful. For many people, the items on this list may come as a surprise. But there's no chance that they won't happen. Some items: AI agents, emotion detection, multimodal AI, AI-generated science, and more. Image: AI-generated future cities.

Web: [Direct Link] [This Post][Share]


Opera adds an automated AI agent to its browser
Thomas Claburn, The Register, 2025/03/06



OK, I don't think anyone needs an agent that will do their online shopping for them. But let's ignore this trivial example and focus on what's interesting: the AI being described here operates in the browser. We're not relying on some third-party cloud AI like Claude or ChatGPT; our data and our interactions stay local. Obviously this is in trial mode, and not all of Opera's innovations achieve wider use (ok, tbh, few of them do) but there's something genuinely useful here, especially if combined with the model context protocol (MCP).

Web: [Direct Link] [This Post][Share]


GROW Diverse Learners, Differentiated Learning
Miguel Guhlin, Another Think Coming, 2025/03/04



Despite what you may read in some circles, it remains true that, as Miguel Guhlin observes, "every student learns differently." So he asks, "how can educators better meet the needs of diverse learners? How can I, as an educator, leverage my awareness of teaching, pedagogy, and how (can) students in my classroom learn to craft engaging learning for students?" In this short post, he offers a four-part approach:

  • Goals (G): Identify Differentiated Learning Objectives
  • Resources (R): Choose Tools for Differentiation
  • Observe (O): Pilot and Reflect with Differentiated Strategies
  • Work Together (W): Collaborate with Peers on Differentiation

Exactly how this is done matters, in my view. In traditional education, it's done by the teacher. In learning engineering, it's done by a computer. In my view, it's done by the student. Some people may suggest a combination (e.g., teacher plus student) but then by default it's just the most powerful agent doing it (e.g., in this case, the teacher).

 

Web: [Direct Link] [This Post][Share]


Agentic AI – the new frontier in GenAI
Akif Kamal, Mohammad Tanvir Ansari, Kaushal Chapaneri, PwC, 2025/02/17



I'm not a fan of the white text and red highlights on black, but the garish look is intended to convey the core message, which is that there's a new type of AI in town. Agentic AI - which we've discussed a few times over the last few months - is AI that uses tools in a series of self-determined actions in order to attain a desired outcome. Probably the most useful part of this report (22 page PDF) follows all the case studies and is the list of key commercial and open source agentic AI tools (including Microsoft's Autogen and AutoGPT). Via Alex Wang.

Web: [Direct Link] [This Post][Share]


Extending AI chat with Model Context Protocol (and why it matters)
Matt Webb, Interconnected, a blog by Matt Webb, 2025/02/11



The short version: just like a person might need a calculator to add some numbers, an AI may need access to some tools to do its job. This creates a need for toolmakers to have a way to provide their tools to the AI. Enter "an emerging open standard from Anthropic called Model Context Protocol (MCP)." I like this article because I like the way it defines 'agents' in this context: "Agents are just AIs that can choose for themselves what tools to use and keep running in a loop until they're done." Here are some open source MCP servers. Have fun exploring. There's also the MCP subreddit.

Web: [Direct Link] [This Post][Share]


Google Lifts Self-Imposed Ban on Using AI for Weapons and Surveillance
Matt Novak, Gizmodo, 2025/02/06



Google started several decades ago with a notable "don't be evil" catchphrase. Those days are long gone, and if there were any doubt, it has been removed with the latest "updates to our AI Principles" on AI.Google. The turning point - there for anyone to see - came when Google went public in 2004. At that point, they assumed a fiduciary responsibility to their shareholders that overrules any concerns they may have had about ethics. This is to my mind a built-in structural defect in the constitution of corporations, setting them up as active agents contrary to the social good any time there is money to be made undermining it. Russia's war against Ukraine may have been the inflection point here, but it's a development that was inevitable given how corporations are governed. So is - not surprisingly - their deeper and deeper influence over government.

Web: [Direct Link] [This Post][Share]


Nifty Simple Web Archive Tool
Alan Levine, CogDogBlog, 2025/02/03



"Right now, as I type, the world is witnessing the wholesale strip mine clearcutting of the web for the ego-flexing kicks of the lunatic in charge," writes Alan Levine. So a reliable archive is all the more important. Here we have  which collects anything people have clicked an 'archive this' bitton on. "By trying one of their searches," writes Levine, "I can find find every single Wired URL someone has clicked the button to archive, that anyone can navigate to the way Sir TBL intended, just click a link and read. No greedy gate agent. Just putting the domain in the searchbox, I have free access to over 1500 Wired articles." I wonder whether I can devise a way to subscribe to that, to get new articles as they appear. I checked my own site (naturally) and found only two items listed. Either I'm very unpopular or people haven't seen the need to archive my fully open website. Maybe someone should, though, just in case. Life is short.

Web: [Direct Link] [This Post][Share]


LLM Visualization
Brendan Bycroft, 2025/01/27



This is an outstanding demonstration on several levels. First, it's probably the most insightful visualization of how a large language model works that I've seen. As George Siemens says, spend some time with it. That leads to the second level, which is its strength of pedagogical design. Viewers can actually learn how LLMs work, even if they don't understand the mathematics involved, because they can see visually the processes enabled by the calculations. 

Web: [Direct Link] [This Post][Share]


Global Skills Taxonomy Adoption Toolkit
Neil Allison, Ximena Játiva, Aarushi Singhania, World Economic Forum, 2025/01/27



So, the idea here is that you define a job or employment classification according to the skills it requires, and then relate these skills to the education or training needed to master them, to which you then match learning resources or training opportunities as well as to learning assessment. I'm not going to criticize that approach a priori - I mean, jobs do demand skills (as do other things, like hobbies and democracy and raising children; let's not lose these in the shuffle) and how else are we going to support this need? But are skills definitions and taxonomies the way to go? As I read through this analysis, I see all the same sort of issues arise as we saw back in the days of learning object metadata (LOM): governance, competing standards, granularity, crosswalking, implementation. The risk here is in developing a complicated static framework for a domain that is dynamic and complex. Via George Siemens.

Web: [Direct Link] [This Post][Share]


Technology Trends for 2025
Mike Loukides, O'Reilly Media, 2025/01/14



O'Reilly sells technology books and as such is in a good position to see what the future holds by looking at what people are studying on its learning platform. So what do they see? "The next wave of AI development will be building agents: software that can plan and execute complex actions." Also, there's less interest in learning programming languages and more interest in learning about security. In raw numbers: "From 2023 to 2024, Machine Learning grew 9.2%; Artificial Intelligence grew 190%; Natural Language Processing grew 39%; Generative AI grew 289%; AI Principles grew 386%; and Prompt Engineering grew 456%."

Web: [Direct Link] [This Post][Share]


A First Introduction to Cooperative Multi-Agent Reinforcement Learning
Christopher Amato, arXiv, 2025/01/10



This is another one of those 'introductions' that is pretty technical for the average reader (including myself) but will reward the effort taken. Decentralized Training and Execution (DTE) is an AI approach where "there is no centralized controller, so the agent must choose actions on its own. In this case, agents only ever observe their own information (actions and observations) and don't observe other agent actions or observations.... DTE is used in scenarios where centralized information is unavailable or scalability is critical." We can see the practical applications almost instantly. For example, David Wiley points to this article in which decentralized agents are used to reduce bias; "each agent provides responses that reflect its assigned cultural persona and task requirements, which are then synthesized by the Multiplex Agent."
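
The simplest instance of DTE is independent Q-learning, where each agent updates its own value table from only its own observations and rewards, never seeing what the others chose. A sketch (a one-step, bandit-style update for brevity):

    # Decentralized training and execution: independent Q-learners.
    import random
    from collections import defaultdict

    class IndependentQLearner:
        def __init__(self, actions, alpha=0.1, epsilon=0.1):
            self.q = defaultdict(float)  # (observation, action) -> value
            self.actions, self.alpha, self.epsilon = actions, alpha, epsilon

        def act(self, obs):
            if random.random() < self.epsilon:  # explore occasionally
                return random.choice(self.actions)
            return max(self.actions, key=lambda a: self.q[(obs, a)])

        def learn(self, obs, action, reward):
            key = (obs, action)
            self.q[key] += self.alpha * (reward - self.q[key])

    # Two agents, each learning from its own experience only.
    agents = [IndependentQLearner(["left", "right"]) for _ in range(2)]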

Web: [Direct Link] [This Post][Share]


Agents
Chip Huyen, 2025/01/10



Good comprehensive overview of agents. An agent is "anything that can perceive its environment and act upon that environment." These are the next big thing to hit the world of artificial intelligence. The article is chock-full of interesting nuggets. Like this: "Planning, at its core, is a search problem. You search among different paths towards the goal, predict the outcome (reward) of each path, and pick the path with the most promising outcome."
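
That quote translates into code almost word for word: enumerate candidate paths, predict the reward of each, pick the most promising. A toy sketch with a hand-written stand-in for the reward model:

    # Toy 'planning as search': enumerate paths, score, pick the best.
    from itertools import product

    ACTIONS = ["search_web", "ask_user", "draft_answer"]

    def predicted_reward(path):
        # Stand-in for a learned model scoring each candidate plan.
        score = {"search_web": 0.4, "ask_user": 0.1, "draft_answer": 0.5}
        return sum(score[a] for a in path) / len(path)

    def plan(depth=2):
        return max(product(ACTIONS, repeat=depth), key=predicted_reward)

    print(plan())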

Web: [Direct Link] [This Post][Share]


2025 may be the year AI bots take over Meta
Iain Thomson, The Register, 2025/01/09



Although the big news this week has been Meta's decision to kill fact-checking, the more significant news may be the revival of a plan to "to roll out interactive AI agents created by users that other folks can interact with." Or maybe other bots will interact with them; it doesn't really matter. "It's going to be interesting to see how convinced the financial community is on this – whether having people interact with bots counts as real engagement likely to prop up advertising spend." I think AI is great, but I think it's still a long way from having the sort of 'presence' that would make me care what it learns or feels or whatnot. Image: Facebook.

Web: [Direct Link] [This Post][Share]


The Talking of the Bot with Itself: Language Models for Inner Speech
Cameron Buckner, PhilSciArchive, 2025/01/06



In a sense, neural networks have always been able to 'talk to themselves'. There is back propagation, for example, where feedback flows back through the layers of a neural network, correcting weights as it does. Or there are recurrent neural networks, where neural output is saved to become neural input, creating in effect cognitive loops. But 'talking to ourselves' or the idea of an 'inner voice' has always been thought to be something more abstract, definable only in terms of lexicons and syntax, like a formal system. This article (34 page PDF) grapples with the idea, considering it from a conceptual, theoretical and then practical perspective, running us through Smolensky's argument against Fodor and Pylyshyn through to things like the 'Inner Monologue Agent' from Google Robotics and Colas's language enhanced 'autotelic agent architecture'. "Instead of viewing LLMs like ChatGPTs as general intelligences themselves, we should perhaps view them as crucial components of general intelligences, with the LLMs playing roles attributed to inner speech in traditional accounts in philosophy and psychology."
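
Staging that role is mechanically simple: let the model talk to itself for a few turns before anything is shown to the user. A sketch, with call_llm standing in for the model API:

    # Sketch of an inner-monologue loop: the model's own output is fed
    # back to it as private 'inner speech' before a public answer.
    def answer_with_inner_speech(question, call_llm, turns=3):
        monologue = [f"Question: {question}"]
        for _ in range(turns):
            thought = call_llm("Think out loud:\n" + "\n".join(monologue))
            monologue.append(thought)  # private; the user never sees it
        return call_llm("Answer concisely, given these notes:\n"
                        + "\n".join(monologue))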

Web: [Direct Link] [This Post][Share]


Things we learned about LLMs in 2024
Simon Willison, Simon Willison's Weblog, 2024/12/31


Comprehensive overview of the state of large language models (LLMs). There's a lot covered, and all the links point to sections in this one article.

Web: [Direct Link] [This Post][Share]


Navigating the AI Frontier: A Primer on the Evolution and Impact of AI Agents
Fernando Alvarez, Jeremy Jurgens, et al., World Economic Forum, 2024/12/20



The utility of this paper (28 page PDF) from the World Economic Forum is that it will give readers a common vocabulary to talk about AI agents. "An AI agent can be broadly defined as an entity that senses percepts (sound, text, image, pressure etc.) using sensors and responds (using effectors) to its environment." The paper describes the progression from simple rule-based agents to multi-agent systems that can sense the environment, adapt and learn.

Web: [Direct Link] [This Post][Share]



Creative Commons License.

Copyright 2025 Stephen Downes ~ Contact: stephen@downes.ca
This page generated by gRSShopper.
Last Updated: Dec 31, 2025 4:58 p.m.