Transparency and Accountability in AI use in K-12 (Game Created Using AI)
Maha Bali,
Reflecting Allowed,
2026/05/11
Maha Bali plans to use this game in an upcoming talk. That's good; something interactive is always fun. My concern here is not that it is AI-generated. The AI did a nice job. My complaint is that it's too easy. It presents eight scenarios, each with three possible answers. It is very easy to pick the 'best' and 'worst' choice, given any degree of background knowledge. And this makes it seem like the actual choices regarding AI are easy. They're not, and the use of this game is a case in point. A critic said it was 'too easy'. Do you go back and ask the AI to make it harder? Do you ask for a justification of the difficulty level? Or do you just go ahead and use it as is?
Web: [Direct Link] [This Post][Share]
Chrome Is Quietly Downloading a 4GB AI Model Without Your Permission
Jibin Joseph,
PC Mag,
2026/05/11
I've hesitated to cover this story because I have Chrome on my machine and haven't been able to find the 4 gig file where it's supposed to be (or anywhere else, for that matter). Mind you, I use Chrome only for testing; for day-to-day I use Firefox. But I've seen the story from enough sources now, including some that would actually check the data, that I'm inclined to believe it's true. Having said that - I'm sure that this is only the tip of the iceberg. For example, I use Adobe's noise reduction feature in Lightroom and found the other day my C:\ drive filled by a huge 'cr_sdk' file. There's no documentation of this and no way to manage it. Is it AI? No idea. Could it be? Sure - and I'd never know. Are other services running local AI models? Even if not, they probably will in the future.
Web: [Direct Link] [This Post][Share]
Why the Canvas hack was inevitable
Tim Klapdor,
Heart Soul Machine,
2026/05/11
The Canvas hack, writes Tim Klapdor, was inevitable. "When the decree from management increasingly mandates that all core systems must be off-the-shelf products from established vendors (a policy that sounds like sensible risk mitigation) the result is that all your vendors share the same infrastructural single point of failure. When Canvas went down, it took every system routed through it with it." This argument has been made many times in these pages - a distributed and decentralized system is much more resilient. The push toward optimization and efficiency, if taken too far, greatly increases fragility.
Web: [Direct Link] [This Post][Share]
How public scholarship can unintentionally undermine journalism
Michael J. MacKenzie,
Policy Options,
2026/05/11
Public scholarship is a good thing, writes Michael J. MacKenzie. But too much of that good thing can undermine and "blur into substituting for paid (and trained) journalistic labour." Nobody would deny that journalism is in a fragile state. But it's not clear whether the best way forward is to continue to pay for it as we always have, just as it's not clear the best way for scholars is to focus their efforts on in-class instruction. The presumption underlying the post - everything is good the way it is - is questionable. I'd rather get scientific and other knowledge straight from the scholars, rather than through the filter of "media, advocacy organizations, policy shops, political actors."
Web: [Direct Link] [This Post][Share]
There Is No 'Hard Problem Of Consciousness'
Carlo Rovelli,
NOEMA,
2026/05/08
You can read my essay on consciousness or you can read this article - the point being made is exactly the same (and certainly not unique to either of us). "Experience is not over and above the processes that happen in the brain, as Chalmers assumed upfront. The dualism between a first-person description of experience and a third-person (or scientific) account of the same is a normal perspectival difference: the same brain phenomenon as experienced by that same brain itself, or by another. Experience for both - not evidence of two different kinds of reality."
Web: [Direct Link] [This Post][Share]
W Social uncovered: the reality behind the hype
Elena Rossini,
2026/05/08
The tagline says it well: this is "an article dispelling myths about W Social, the new European platform that aims to rival X: it is a fork of Bluesky that shares many similarities with Eurosky and requires government ID to sign up." Leaving aside internal European politics, it's worth noting that W Social was launched at Davos with a lot of fanfare (while Eurosky, the European Bluesky instance, set up a side event, and Mastodon, also based in Europe, went unmentioned). There's a lot more in the article, but as Elena Rossini says, "W Social is set to launch tomorrow May 9th on Europe Day. As it happened when it was first announced in January, it is likely to receive a lot of uncritical, superficial press coverage. Please exercise critical thinking and try to look at the reality behind its hype." (p.s. I'm stealing the 'written by a human' image from this page :) ).
Web: [Direct Link] [This Post][Share]
FediProfile — The Decentralized Link-in-Bio
Maho Pacheco,
2026/05/08
This is an interesting project (paradoxically written in Razor) that presents itself as an "open-source, federated alternative to Linktree," or more specifically, a "decentralized link-in-bio that connects to the Fediverse (where you can) share links, collect badges, and own your online presence (with) no corporation in the middle." Code on GitHub. Via Johanna Botari.
Web: [Direct Link] [This Post][Share]
The Boring Internet
https://indieweb.social/@tg,
Terry Godier,
2026/05/08
"The internet is not dying," writes Terry Godier, "A commercial veneer glued on top of it is dying." I'm not a fan of the muted text in dark mode and the scroll reveal presentation, but the message in my view is sound. Godier presents a three-layered internet: platforms, which are the commercial veneer; services like GitHub and Cloudflare, which can be large and influential but are not actually necessary; and finally, the actual set of protocol-based services, things like Icecast (shoutout to Soma.fm), RSS (I subscribed), HTML (you're soaking in it) and finger. Boring: "too useful to disappear, too uncool to hype, too federated to acquire, and too awkward to turn cleanly into a platform."
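Part of what makes the protocol layer "boring" is how little machinery it takes to use. As a minimal sketch (using only the Python standard library, with an invented feed document rather than any real one), here is all it takes to read RSS, one of the protocols Godier names:

```python
# RSS 2.0 is plain XML; the standard library parses it in a few lines.
# The feed below is a made-up example for illustration, not a real feed.
import xml.etree.ElementTree as ET

SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Newsletter</title>
    <item><title>First post</title><link>https://example.com/1</link></item>
    <item><title>Second post</title><link>https://example.com/2</link></item>
  </channel>
</rss>"""

def feed_items(xml_text):
    """Return (title, link) pairs for each item in an RSS 2.0 document."""
    root = ET.fromstring(xml_text)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

for title, link in feed_items(SAMPLE_FEED):
    print(title, "->", link)
```

No SDK, no API key, no platform account: that absence of a gatekeeper is what "too federated to acquire" cashes out to in practice.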
Web: [Direct Link] [This Post][Share]
Expanding OER with GenAI
Lance Eaton, Larry Davis,
EDUCAUSE,
2026/05/08
This article blends the ideas of generative AI and open educational resources and the results are about what is to be expected. The authors begin with licensing concerns, asserting that "educators must still assign an open license if they intend for others to use, share, or adapt that content freely," though they are aware that "a creator might not be able to assign a license because the work is not copyrightable." They offer a 'GenAI-OER Adoption Framework' based around the idea of adoption, adaptation and building. They should have included 'share' somewhere in there. The rest is boilerplate advice: start slow, share what you've learned, "align practice with the core values of openness, equity, agency, and care."
Web: [Direct Link] [This Post][Share]
Hackers steal students’ data during breach at education tech giant Instructure
Lorenzo Franceschi-Bicchierai,
TechCrunch,
2026/05/08
This week's big story is a confirmed data breach at Instructure affecting students' private information. As TechCrunch and others report, the hacking group ShinyHunters claimed responsibility. Hacks started Tuesday and continued through the week. The attackers are reportedly holding the company's data for ransom. Ian Linkletter comments, "Instructure has been bragging about their data hoard to investors for years, and now it's been stolen." In all, "the hackers claimed to have stolen data from almost 9,000 schools around the world, with the stolen files allegedly containing information on 231 million people." Here's more.
Web: [Direct Link] [This Post][Share]
Artificial intelligence as a site of global educational governance: the case of UNESCO
Eleni Christodoulou, Michalinos Zembylas,
Journal of Education Policy,
2026/05/08
This paper is a critical reflection on UNESCO's policy role in the governance of AI in education (AIED). It's a jigsaw puzzle of a paper, less a logical flow than a set of pieces that fit together to form a perspective. Two pieces are UNESCO's own view of its own role in AIED: on the one hand, as a champion of human rights and justice, and on the other hand, anticipating and sometimes facilitating disruptive technologies. Four pieces are different ways of viewing international organizations (IO): "global governance theory, Foucauldian governmentality, AI ethics critiques, and critical education policy." Three additional pieces are functions regarding AIED: "observatory, capacity development, and normative frameworks." Then there are pieces describing what UNESCO does: "production of policy guidance, teacher support tools and mapping exercises focused on the incorporation of AI into national curricula." The picture that emerges is a "structural decoupling between UNESCO's rhetorical commitments and its policy implementation in the governance of AIED." UNESCO should be supporting development and human rights, but may instead see AIED as an opportunity to enhance its governance function. Image: UNESCO IITE.
Web: [Direct Link] [This Post][Share]
Vibe-Coding a MultiLingual #Free #Whiteboard: #DrawSplat
Miguel Guhlin,
Another Think Coming,
2026/05/07
"As a technology director," writes Miguel Guhlin, "what solutions might have I vibe-coded if I had Claude AI or ChatGPT at my beck and call INSTEAD of paying money to expensive companies?" How about this? "Kid Pix + digital whiteboard + CMAP Tools, all mixed in together." Three hours of work and something usable was thrown together, thanks to ChatGPT. Run it through a couple more versions and an edit by Claude, and something reasonable emerges. Is it perfect? Is it commercializable? No - but it doesn't have to be. You can play with it here.
Web: [Direct Link] [This Post][Share]
Paradigm shifts, bricoleers [sic], and other animals
Jon Dron,
Jon Dron's home page,
2026/05/07
This article begins as it should with a discussion of Ben Werdmuller's Elgg social software. Eventually, it gets around to its main point: "The key lesson to be drawn from this is that, if the architecture is sufficiently and cleanly modular (as Elgg's is), then it may now be more effective to recreate components from scratch than to maintain the ones you have already written." As a community, we've thought a lot over the years about what this architecture looks like. Jon Dron discusses what he calls "a new paradigm for building plugin-based social applications." Quite right. This is important to me because that's exactly the philosophy I'm using to design CList - you can see the architecture emerging here. The major difference, though, is that I want individuals to be able to design their own environment, and not to be locked into whatever a community decides. An autotecture, if you will, instead of an ochlotecture.
Web: [Direct Link] [This Post][Share]
View of Artificial Intelligence and Communities of Inquiry: Reimagining Educational Experiences
Stefan Stenbom, D. Randy Garrison,
The International Review of Research in Open and Distributed Learning,
2026/05/07
This is useful work. "We examine the potential for AI to assume multiple roles within a community of inquiry - supporting instructional design, guiding learners as an independent resource, assisting instructors through analytics, participating in discussions, and sustaining dialogical partnerships with students." This maybe takes the argument a step too far: "we contend that AI can contribute to worthwhile educational experiences only when framed within a coherent conceptual perspective that emphasizes skeptical engagement, collaborative reflection, and the preservation of human purpose." (Always be wary of statements of the form 'A only if B' - such statements often reflect a conflation of necessary and sufficient conditions). The paper as a whole is a worthwhile overview of AI's use in supporting learning communities.
Web: [Direct Link] [This Post][Share]
View of The Answerthis.io AI App Looks at My Interaction Equivalency Theory
Terry Anderson,
2026/05/07
My experience with Answerthis.io was not nearly as productive as Terry Anderson's, which was reported in the current IRRODL, mostly because halfway through answering, it stopped and asked me for money. Still, Anderson reports that "The tool does a credible job of summarizing ways that other scholars have used the theory. It looks to be both accurate and thorough. In a couple of minutes, AI was able to scour the literature and find applications of the theory, made by others, that I had not heard of nor could have imagined today, much less than when I originally wrote the paper." However, he adds, "It shows that assigning tasks such as this as an assessment activity in a senior undergraduate or graduate course hardly seems worthwhile, given the time and effort taken by a teacher for assessment." And, of course, when you're preparing a literature review for a publication, it's important to have a human do it, not a machine. That's what graduate students are for.
Web: [Direct Link] [This Post][Share]
Am I an LLM?
Arturo Nereu,
2026/05/07
Obviously it's not a serious question. But it's worth reflecting on the similarities between humans and LLMs. "As I learn more about how they work, I sometimes pause and wonder if there's a chance that I (and this can be extended to other people, but I will speak from my POV) can be some sort of LLM." Yes, it's true, "the idea of humans thinking about the latest invention as the way of how brains work is not new." But this time, we actually designed the latest machine to mimic the physical structure of the brain. We can learn from that. Short article; worth reading.
Web: [Direct Link] [This Post][Share]
Some key elements of deeper learning
Scott Mcleod,
Dangerously Irrelevant | @mcleod,
2026/05/06
There's a bit of a theme in today's OLDaily revolving around the image in this post, "Let kids do work that matters." This image occurs in the context of a discussion of what counts as 'deeper learning', which is here presented as an objective for schools. It's a hard ask; "If you ask people to do high level work in classrooms in the current culture, they will do low-level work and call it high-level work." We converge on a definition that amounts to "three virtues: mastery, identity, and creativity." It's not a bad definition of 'deeper' but is it a good definition of 'what matters'? So much of our foundational landscape is changing these days, shaped in part by AI but also by a softening of some core myths in society - of the role of jobs, of how we manage power, of what constitutes 'meaningful'. When Canadian Prime Minister Mark Carney said, "We are in the midst of a rupture, not a transition," he was referring to the breakdown of the international rules-based order, but when he cites Václav Havel's The Power of the Powerless it becomes clear that he's talking about how each of us sees ourselves in relation to others.
Web: [Direct Link] [This Post][Share]
Bananaland University
Josh Brake,
The Absent-Minded Professor,
2026/05/06
"What might the Savannah Bananas teach us about a potential future for higher education?" That's a good question, but not as discussed in this post (trust me, I'm a baseball fan). Josh Brake presents 'bananaball' as the invention of one Jesse Cole. "Cole is ruthless about finding things that aren't working and trying out new things to replace them." Ah - the Founder as Prophet, Founder as Priest myth strikes again. But no. The actual history is much better. The name 'Savannah Bananas' came about because of a fan vote. Players took up the spirit of the fans' silliness and started adding new rules in practice. They played an exhibition game. There was never any 'resistance' per se, no need to 'overcome the sceptics'. So, the lessons? "How would we change the shape of the college and university if we shifted our goal from job to vocation, from career preparation to character development, from creating students with economic utility to forming students who understand their place in the world more deeply." Meh. I mean, it's not bad, but we're still telling students what to do. The real lesson? Let the fans decide.
Web: [Direct Link] [This Post][Share]
Why product discovery matters more than ever in the age of AI
Jared Molton,
Udacity,
2026/05/06
When I was a kid I built a little cabin on an old wagon in our yard. Eventually my father said it was time to take it down and give the neighbours a break. I took it down, then decided to rebuild it even better. The new wood cabin was a huge improvement, but it lasted exactly one day before being taken down. It didn't matter that I had built it better and faster; it was just the wrong thing at the wrong time. Today, now that I don't have a 'job', I've been working hard on my personal learning environment (PLE) application, CList. But is it the right thing for the right time any more? I wrestle with that question, which is why this article appealed to me, even though you can stop reading after maybe the first third (again, it's an AI article that goes on and on and on and on....). The point is good: "A team can release three AI-powered features in a single sprint. If none of them improves conversion, retention, or satisfaction, the speed was wasted. The features were built efficiently. They just were not worth building." (p.s. don't get me wrong - working with code like this is the most fun I've had in a long time and while it would be nice if it was widely adopted, it's not really necessary). (p.p.s I really need a better name than CList - I'm open to ideas).
Web: [Direct Link] [This Post][Share]
Mature AI Use vs. Immature AI Use
Mike Kentz,
How We Frame Machines,
2026/05/06
This paper makes a useful distinction which I'll share here, so you don't have to wade through the AI-generated reams of text. It divides AI use policies into two domains: ethics, and maturity. You can take it from there; the actual paper employs a naive (though commonly held) perspective on ethical frameworks as "meant to govern behavior across a community," while on the other side growth, effort and learning from feedback are taken as indicators of maturity. The useful bit in this paper is that our immediate reaction should not be to just create an ethics policy that governs allowable use. We need to look beyond what we shouldn't do, to what it's worthwhile to do. This requires a lot more thought. (p.s. a link to 'Glow and Grow', for the record).
Web: [Direct Link] [This Post][Share]
Literacy-slop
Doug Belshaw,
Open Thinkering,
2026/05/06
Read the Emily Segal post first, then this post. Belshaw argues here that "If we swap 'Digital literacy' for 'Taste' then it's a socially-negotiated relation between people, tools, practices, contexts, and communities." From which we can argue, "Literacy-slop is the credential without the community of practice; it's the qualification without the learning; the skills certificate for getting an AI agent to click through a self-paced module on digital skills. It looks like literacy, satisfying the classifier. But it's just curation without a social body." Or put another way: "There exists a whole complex of knowledge, dispositions, and social relationships that makes someone capable in various digital contexts." The labels are just the socially accepted markers of success or of capability in that context.
My view: there's a lot right here, but I don't agree with it all. Words don't have meaning on their own, sure. They only have meaning in a context. But context can be anything; it doesn't need to be a community or a society. It doesn't have to be negotiated. There is no process of 'making meaning'. Context is (literally) the network of entities a thing is embedded in; meaning is the emergent pattern in that network that is recognized by a viewer when prompted by the thing. There is no one meaning, no 'real' meaning, obviously, because there are many viewers, many ways of seeing the same things. Any negotiation that happens isn't about the actual meaning; it's about establishing and holding power in that community, a hierarchy of symbolism, just like taste.
Web: [Direct Link] [This Post][Share]
Tasteslop
Emily Segal,
NEMESIS,
2026/05/06
Read this article first, then Doug Belshaw's take. This article is on the phenomenon of 'taste' (as in, "she has good taste"). The point here is "Taste is not really a property of various objects. It is a socially validated relation between objects, people, histories, scenes, and timing." In other words, you can't really have good taste unless there's an audience that sees you and affirms your good taste. OK? Next: "Tasteslop emerges when the visible signs (or 'markers') of taste are extracted from those relations and redeployed generically." It's like slapping a Gucci label on your t-shirt. The point here is that AI can recognize (via pattern matching) what counts as a sign of good taste (like a Gucci label) but not when the sign has been misapplied either via "lost meaning or what would need to replace them for things to feel legitimately fresh." AI intensifies this because it will just slap a taste marker into any old context, breaking down the whole culture and 'taste hierarchy' the taste marker belongs to.
Web: [Direct Link] [This Post][Share]
‘Close to zero impact’: US study casts doubt on effect of phone ban in schools
Richard Adams,
The Guardian,
2026/05/06
Surely if anything will cause us to stop looking at 'test scores' as a measure of impact, this will, right? "The report concluded that among schools instituting a ban: For academic achievement, average effects on test scores are consistently close to zero." After all, "Researchers say findings are not reason to shy away from restrictions as MPs consider ban in England's schools."
Web: [Direct Link] [This Post][Share]
The "AI Job Apocalypse" Is a Complete Fantasy
David George,
a16z,
2026/05/06
When a tech or finance company puts the phrase 'understanding of humans' in the subhead, that's a red flag. A folk theory of 'human nature' is not a good basis for informed commentary. Neither is saying 'of course Keynes was wrong.' That doesn't invalidate the entire message here, though. Historically, when a new technology has been introduced, that has increased, not decreased, employment and wealth. It's not a simple case of "we found new and different productive endeavors to fill our time." Would that this were the case. Historically (and this is not the a16z message) though wealth increased, scarcity persisted because that wealth was not shared, and it was never possible to survive on a 15 hour work week. To me, the assertion that AI won't eliminate jobs is not an assertion about AI (which very much could reduce our need for labour) or even an assertion about people (if it were, a16z would be saying a guaranteed income would increase social wealth) but rather an assertion that the exploitation will continue (just as it has through previous rounds of technology development).
Web: [Direct Link] [This Post][Share]
No Need Rushing for AI in Education
Thomas Ultican,
tultican,
2026/05/05
Readers might find this surprising to hear, but I agree with the core sentiment of this article (though not with many of the details). In broad strokes, the argument is the same as has been made elsewhere: there's a concerted industry campaign to get schools to spend on AI technology, but it's not at all clear that it will be a good investment. "The AI pitch looks suspiciously like the same education technology song and dance bombarding schools for more than a century," writes Thomas Ultican. Quite right. But let's analyze this. From a business standpoint, schools (and education generally) represent a huge pool of money, especially when the market is approached at the district or the state/province level. Fortunes can be made. Companies will promise anything. We never see results because none of these products changes anything about that model of education - that's the last thing the companies want to change, because then the huge pools of money become harder to access. So most products result in money being spent on technology to do the same thing as before. I would advise: let's not do that this time. But it's not a popular message - the companies don't like it, obviously, and neither do the schools, because what they want is to keep doing the same thing they've always done.
Web: [Direct Link] [This Post][Share]
Hawkeye
Marshall Kirkpatrick,
What's Up With That,
2026/05/05
According to the website, "Turn market intelligence into communications that lead your field. Hawkeye monitors your ecosystem and helps you organize intelligence into newsletters, commentary, and outreach that strengthen your field-level radar and get people excited about the future." I don't use anything like this (I use my brain instead) but I do wonder how many influencers - especially those on Substack and LinkedIn with hundreds of thousands of followers - are doing exactly this. A partner app called 'What's Up With That' (previously covered here) looks at a web page and analyzes the content behind it. "Hawkeye monitors your ecosystem at the macro level. What's Up With That? operates at the point of reading - showing you what's new and important in anything you read, with 40+ AI power tools to go deeper. Together they close the loop between ecosystem awareness and individual sensemaking."
Web: [Direct Link] [This Post][Share]
Pedagogical partnerships with generative AI in higher education: how dual cognitive pathways paradoxically enable transformative learning
Shaofeng Wang, Hao Zhang,
International Journal of Educational Technology in Higher Education,
2026/05/05
The gist of this research paper is that cognitive offloading to AI "liberates mental resources for higher-order reflection, thereby enhancing transformative learning experiences." An analysis of the types of AI use reveals what the authors describe as a U-shaped pattern, where casual cognitive offloading diminishes critical evaluation, but more deliberate and strategic offloading actually enhances that function. The more people use AI in a structured, studious way, the more likely they are to be sceptical of what it produces and to assess output in terms of what they are trying to achieve. Via Tawnya Means, who summarizes the main points (don't bother, though, the article is mostly AI filler - I don't know why people who write using AI have not learned that more is not better).
Web: [Direct Link] [This Post][Share]
What Works for the Most Vibrant Experiences in a Hybrid Conference Format?
Alan Levine,
Google Groups,
2026/05/05
Over a two-week period in 2002 I wrote a daily commentary for Australia's online Net*Working conference. It was a deep and engaging experience for me, and a success as a conference newsletter. They asked me to do it again the following year, but this time the conference was hybrid, and I felt cut off from what was happening, and it was a failure. This has been my experience with hybrid events ever since. I don't think there's any way to make the online participants feel equal to the people who paid their way through flights and fees to that exclusive in-person experience. It's not a technology divide, to my mind, it's a class divide. The only way to make it work is what we did when the government foolishly foisted hybrid 'back to office' mandates on us: we kept doing it all online. See also this second conversation thread.
Web: [Direct Link] [This Post][Share]
Intelligence?
Nik Bear Brown,
GitHub,
2026/05/05
This is a pre-release version of Nik Bear Brown's Intelligence?, an engaging and fascinating exploration of the concept behind not only AI but life in general. It rewards a good reading. "It is the book's thesis: that intelligence is not a single thing being accumulated across the tree of life, but a family of distinct capacities, each with its own evolutionary history, each extendable by different tools, and none of them well described by any definition we currently have." Or put another way, "Every cognitive tool humans have built - from the first written word to the GPS to the bomb-sniffing dog - extended a specific capacity while requiring human judgment to direct it. The microscope extended pattern recognition. Writing extended memory. GPS extended spatial navigation. AI extends pattern recognition, prediction, and associative memory to superhuman scale." Definitely take the time to have a look, and even better, interact with the author on the concepts and themes. It came up in the course of a conversation on Google Groups.
Web: [Direct Link] [This Post][Share]
Educational AI Insights | MagicSchool Blog
MagicSchool,
2026/05/05
I know a lot of education technology specialists don't want to see this, but here it is anyway. According to its chatbot (I asked, "What does MagicSchool provide for teachers?"), "The platform uses AI to generate customized, standards-aligned resources tailored to your specific grade level, subject, and classroom needs. You can access tools through the search bar in the app to find exactly what you need." Also, "Just let me know what you're working on, and I can help you create or develop the resource directly! Whether it's a lesson plan, worksheet, rubric, or anything else - I'm here to support your work." I wouldn't expect learning outcomes to magically improve, though teachers might find their work gets a little easier. No RSS though.
Web: [Direct Link] [This Post][Share]
Open Source Wishlist
Open Source Wishlist,
2026/05/05
This is a nascent project that has potential if it gains traction. The idea of Open Source Wishlist is that it "connects open source maintainers with expert practitioners for sustainability: governance, funding strategy, security audits, succession planning, and more." I think that while there is potential it will require tending and moderation. Problems like scam developers and fly-by-night projects are real. Via Scott Leslie.
Web: [Direct Link] [This Post][Share]
DeepMind Vision Banana: A Unified Vision Architecture
Roboflow Blog,
2026/05/05
Vision Banana is "a unified model introduced by Google DeepMind that both generates RGB images and performs visual understanding tasks within a single architecture, controlled entirely through text prompts." Or in short: "image generators are generalist vision learners." It's interesting because it blends visual tasks and semantic tasks (e.g., find all the cats' ears in the photo) in a single architecture. Just your regular reminder that AI is far more than large language models. (p.s. my take on the 'banana' name: it originates from the meme on image sites (like Imgur) of using a 'banana for scale').
Web: [Direct Link] [This Post][Share]
The Adaptive Enterprise: AI, Learning, and the Work of Making Sense
Jane Bozarth,
ATD,
2026/05/04
This article blends a bunch of things together. Let me tease them out. The first is the main point, that "AI is an amplifier, not a driver." We've been seeing this a lot; the human who needs the work done is the driver. "Judgment still requires context; context still requires experience." This isn't strictly true (context can come from anywhere) but human experience is the part of context AI cannot provide on its own. "Design experiences that build relationships as well as skills." Right. But do we need this? "Facilitate the conversations that support shared interpretation." This is based on the idea that "the center of the framework is human meaning-making." In other words, "meaning still emerges through human conversation and reflection." But (in my view) the task here isn't 'making meaning'. Anyhow, these are all blended together in what eventually becomes a word salad.
Web: [Direct Link] [This Post][Share]
Claude Code's Limits Are Generous. The Problem Is Your Setup.
Paweł Huryn,
The Product Compass,
2026/05/04
This is the first part of a paid article but there's more than enough here to justify reading the free segment. I'm noting it here mostly for my own purposes. I found it after seeing an algorithm-recommended LinkedIn post that referenced some of the approaches but did not link to actual code; it turns out that the content was lifted word for word from this article without attribution. It's a list of methods people can use to optimize their Claude usage so their requests don't consume so much compute time and so many tokens. It looks like good stuff. I use the cheapest possible Claude plan so this will help a lot. Maybe.
Web: [Direct Link] [This Post][Share]
This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.
Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.
Copyright 2026 Stephen Downes Contact: stephen@downes.ca
This work is licensed under a Creative Commons License.