Why the Difference Between AI Literacy and AI Fluency Matters
David Ross,
Getting Smart,
2026/02/17
This is a useful article because it links to a number of recent AI literacy frameworks. It also makes the case that our mastery of AI will need to go beyond literacy. "I suggest we adopt a tried-and-true educational model that governs our thinking around curriculum frameworks: A scope and sequence. The final outcome of that scope and sequence should be AI fluency." This could be true - but I would caution that what counts as AI literacy, much less fluency, is very much a moving target. As a case in point: do we or don't we need prompt engineering? New, related, and not mentioned in the article: Acadia University's free Introduction to AI Literacy course (and CBC coverage). Also: the U.S. Department of Labor's new Framework for Artificial Intelligence Literacy (15 page PDF).
Our Emerging Planetary Nervous System
Rimma Boshernitsan,
NOEMA,
2026/02/17
This is a longish article with some good examples showing a future state (and likely applications) of what the author calls our planetary nervous system. What is meant by that is the interconnected network of sensors and indicators that respond to what's happening in the natural world, with inputs ranging from water flows to migration patterns to the spread of wildfires. "This is machine intelligence at its most vital," writes Rimma Boshernitsan, "not replacing judgment, but extending our senses." The objective is "to integrate so coherently with the biosphere that the whole can self-regulate rather than just react." And what we want, I would say, is for this integration to be available to everyone, the way the Global Biodiversity Information Facility (GBIF) "weaves millions of records from field notes, museum collections, citizen observations and satellite traces into a living archive" creating "a global network and open-access infrastructure funded by governments worldwide." If we don't require that this data be open access, someone will attempt to privatize it.
Agentic Email
Martin Fowler,
martinfowler.com,
2026/02/17
Martin Fowler reports, "I've heard a number of reports recently about people setting up LLM agents to work on their email and other communications. The LLM has access to the user's email account, reads all the emails, decides which emails to ignore, drafts some emails for the user to approve, and replies to some emails autonomously." As enthusiastic as I am about AI, I agree with him that it's far too early to trust agentic email with real access to my email, and not only because of the security risks. I mean, I can't even trust my anti-spam services to keep out all the spam and only the spam. I'm not ready to let it make statements on my behalf. And oh yeah, the security risk.
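For concreteness, here's a minimal sketch of the triage loop Fowler describes. Everything in it is hypothetical - the Email type, the classify() function, and the placeholder response are stand-ins I invented, not any real agent framework or mail API:

```typescript
// Hypothetical sketch only: the Email type, classify() and the inbox
// handling below are invented stand-ins, not a real agent framework.

type Email = { id: string; from: string; subject: string; body: string };
type Triage = {
  action: "ignore" | "draft_for_approval" | "reply_autonomously";
  draft?: string;
};

// Stand-in for a call to some LLM API that returns a structured decision.
async function classify(email: Email): Promise<Triage> {
  const prompt =
    "Decide how to handle this email: ignore it, draft a reply for the " +
    "user to approve, or reply autonomously. Return JSON.\n\n" +
    `From: ${email.from}\nSubject: ${email.subject}\n\n${email.body}`;
  // ... send `prompt` to an LLM here and parse its JSON response ...
  return { action: "draft_for_approval", draft: "(placeholder)" };
}

async function triageInbox(inbox: Email[]): Promise<void> {
  for (const email of inbox) {
    const { action, draft } = await classify(email);
    if (action === "ignore") {
      continue; // silently dropped - the failure mode anti-spam already has
    } else if (action === "draft_for_approval") {
      console.log(`Draft for ${email.id}, awaiting approval:\n${draft}`);
    } else {
      // sendReply(email, draft) - the step that speaks on your behalf
    }
  }
}
```

Note where the risk concentrates: the autonomous-reply branch is the one that makes statements on your behalf, and because the incoming message itself becomes part of the prompt, a hostile email can try to steer the decision (prompt injection). That's the security risk, in one sentence.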
Top Priorities for Global Heads of Learning and Talent
iVentiv,
2026/02/17
This report (26 page PDF) is based on a survey of 468 heads of learning and talent from 394 companies in Europe, the United States, the UK and the Middle East. The results are not surprising. The top priority continues to be leadership and executive development, as ever, and close on its heels is artificial intelligence. Also, "In last year's data, the phrase 'skills-based organisations' came up more than twice as often as in 2024.... That persistence reflects the 'skills based' approach becoming part of the mainstream as we head into 2026." This priority, along with the emphasis on 'learning culture', reflects the need to adapt to a rapidly changing skills landscape; for the same reason, it is difficult to link learning programs directly to return on investment (ROI). The conditions before and after learning are often completely different, and it's often a case of 'adapt or get left behind' for both individuals and companies. The report is behind a spamwall, and you can give them your contact information if you're feeling nice, but this direct link should work as well.
The double-edged sword: Open educational resources in the era of Generative Artificial Intelligence
Ahmed Tlili, Robert Farrow, Aras Bozkurt, Tel Amiel, David Wiley, Stephen Downes,
Journal of Applied Learning & Teaching,
2026/02/16
I contributed to this paper (9 page PDF) - not a ton, but definitely not nothing. Here's the argument that came out of our exchanges: "We analyze several emerging tensions: the ontological crisis of human authorship, which challenges traditional copyright frameworks; the risk of 'openwashing' where proprietary models appropriate the language of the open movement," and some ethical issues. "This paper argues that the binary definition of 'openness' is no longer sufficient. We conclude that ensuring equity in the AI era requires a transition from open content creation to the stewardship of 'white box' technologies and transparent digital public goods." Now there's a lot of uncharted territory in that final statement. This paper just begins to touch on it, and (in my view) concludes without really explaining what we might mean by all that.
From data to Viz - Find the graphic you need
Yan Holtz and Conor Healy,
2026/02/17
Tom Woodward links to three interesting graphing resources in one post. The first, a tool for selecting the sort of graphic you want to use, presents a set of chart types classified according to the number of variables you're working with. Their poster is probably the best value of the three. If you prefer a more open-ended selection, there's this complete guide to graphs and charts. This page also links to "on-demand courses [that] show you how to go beyond the basics of PowerPoint and Excel to create bespoke, custom charts" costing about $100 apiece. And how do you make the charts? You could use SciChart, a 'high-performance' Javascript chart and graph library. But the pricing is insane, starting at $116 per developer per month. I'm pretty sure ChatGPT will teach you about the types of charts (actually, I just made one for you while writing this post) and Claude Code will be able to write you a free version of SciChart.
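On that last point, here's roughly what the hand-rolled alternative looks like: a toy line chart that emits an SVG string. This is my own illustration, written for this post - nowhere near a SciChart replacement, and nothing in it comes from the linked resources:

```typescript
// Toy chart generator: scales (x, y) points into a viewport and emits an
// SVG string. An illustration of the hand-rolled approach, nothing more.

function lineChart(points: [number, number][], width = 400, height = 200): string {
  const pad = 20;
  const xs = points.map(p => p[0]);
  const ys = points.map(p => p[1]);
  const [xMin, xMax] = [Math.min(...xs), Math.max(...xs)];
  const [yMin, yMax] = [Math.min(...ys), Math.max(...ys)];
  // Map data coordinates to pixel coordinates (SVG's y-axis points down).
  const sx = (x: number) => pad + ((x - xMin) / (xMax - xMin || 1)) * (width - 2 * pad);
  const sy = (y: number) => height - pad - ((y - yMin) / (yMax - yMin || 1)) * (height - 2 * pad);
  const path = points.map(([x, y], i) => `${i ? "L" : "M"} ${sx(x)} ${sy(y)}`).join(" ");
  return `<svg xmlns="http://www.w3.org/2000/svg" width="${width}" height="${height}">` +
    `<path d="${path}" fill="none" stroke="steelblue" stroke-width="2"/></svg>`;
}

// Ten points of a quadratic; save the output as a .svg file to view it.
const data = Array.from({ length: 10 }, (_, i) => [i, i * i] as [number, number]);
console.log(lineChart(data));
```

Twenty-odd lines gets you a working chart; what the $116 a month presumably buys is performance, polish and support.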
GenAI as automobile for the mind, and exercise as the antidote: A metaphor for predicting GenAI's impact
Mark Guzdial,
Computing Ed Research - Guzdial's Take,
2026/02/17
I like this analogy. "Some of you may remember the Apple ads that emphasized the computer as a 'bicycle for the mind.' GenAI is not like a bicycle for the mind. Instead, it's more like an automobile." Or, says Mark Guzdial, "As Paul Kirschner recently wrote, GenAI is not cognitive offloading. It's outsourcing. We don't think about how to do the tasks that we ask GenAI to do. As the recent Anthropic study showed, you don't learn about the libraries that your code uses when GenAI is generating the code for you (press release, full ArXiv paper)." Maybe. But it depends on how you use AI - there is a 'bicycle method' (to coin a phrase) when using AI, which is what (I think) I do - making sure I understand what's happening each step of the way. As Guzdial says, "Generative AI is a marshmallow test. We will have to figure out that we need to exercise our minds, even if GenAI could do it easier, faster, and in some cases, better." See also: To Trust or to Think: Cognitive Forcing Functions Can Reduce Overreliance on AI in AI-assisted Decision-making.
mist: Share and edit Markdown together, quickly (new tool)
Matt Webb,
Interconnected,
2026/02/16
This is pretty cool: it's a collaborative markdown editor with a couple of interesting features: "all docs auto-delete 99 hours after creation. This is for quick sharing + collab"; and "Roundtripping: Download then import by drag and drop on the homepage: all suggestions and comments are preserved." Built over the weekend using Claude Code. And it reminds me of a remark I heard on TWIT: coding with AI is the best video game out there right now. "You know it's very addictive using Claude Code over the weekend. Drop in and write another para as a prompt, hang out with the family, drop in and write a bit more, go do the laundry... scratch that old-school Civ itch, 'just one more turn.' Coding as entertainment."
The Intrinsic Value of Diversity
Eric Schwitzgebel,
The Splintered Mind,
2026/02/16
I've made a similar argument in my own writings on ethics: "diversity in general is intrinsically valuable, and there's no good reason to treat moral diversity as an exception." People will have a different understanding than you or I of what's right and good, and overall (within reason) that's a good thing. Now the reasoning offered here is based on aesthetic premises: "a world where everyone liked, or loved, the same things would be a desperate, desolate world." Or as Eric Schwitzgebel summarizes, "An empty void has little or no value; a rich plurality of forms of existence has immense value, no further justification required." My own reasoning is more pragmatic: a world where we all valued the same things would be static and unchanging, and therefore, could never learn or adapt.
The Shortcut That Costs Us Everything
Alec Couros,
Signals from the Human Era,
2026/02/16
The title is provocative, but maybe a bit overstated. Here's the argument: why not have students analyze AI-generated writing (instead of writing their own essays)? Because "this approach becomes the dominant mode, displacing rather than supplementing the generative work students need to do themselves." You can only get so far studying what others have written; you have to write for yourself to really understand it. Couros decomposes the original suggestion, identifying the assumptions it rests on (for example: that students are able to analyze writing, that students don't need to generate their own). But even more importantly, there's the risk that students won't develop sufficient critical thinking skills. "Critical media literacy isn't just a nice academic skill. It's a survival capacity. And we're proposing to develop it by removing the very experiences that might allow students to understand, at a visceral level, what synthetic content lacks." But... is that the skill people really need? We need better standards than "two legs good, zero legs bad." I think what we really need (and have never really been taught well) is the means to distinguish between what can be trusted and what can't (no matter who or what created it).
Before You Buy AI for Your Campus, Read This
Marc Watkins,
Rhetorica,
2026/02/16
It's like we're asking the same questions over and over again. Maybe they can be reframed? Marc Watkins begins with the ethical perspective, then looks at whether institutions should buy AI tools from three different angles: would students even use the tools (or would they distrust them); would students use their own AI to bypass institutional guardrails; and why would institutions use a tool that would eliminate the very positions they are preparing students for? "Institutions like Gonzaga University," writes Watkins, "are making AI part of their core curriculum by putting it in conversation with their institutional values." Specifically, "Because a commitment to inquiry and discernment serves as the foundation of our core curriculum, our students will engage with AI in ways that are both practical and critical." That makes sense, but there's also the risk that this is just wishful thinking.
Beautify This Slide
Dean Shareski,
Ideas and Thoughts,
2026/02/16
Dean Shareski pushes back against "the so-called thought leaders out there who seem to have a clear handle on how to best consider AI for learning and schools." You see them a lot on LinkedIn and, of course, on their own web pages, offering "frameworks and approaches neatly packaged, intended to support leaders, educators and students in their professional and instructional use of AI." The reality isn't that straightforward. Take the simple question of using AI to help design slides for a presentation. PowerPoint will incessantly offer suggestions. Sometimes they're useful, but sometimes the personal touch is what's needed. There's no general rule. Me, I prefer to design by hand, but that's mostly because I enjoy designing. Though I like to think there's an intuitive aspect, where my design reinforces my message in a way that an AI-generated design would not. It's hard to say. Image: one of mine, one I'm pretty sure an AI would never use to illustrate this post.
Oh, good: Discord's age verification rollout has ties to Palantir co-founder and panopticon architect Peter Thiel
Lincoln Carpenter,
PC Gamer,
2026/02/16
Applications like Discord and TikTok aren't instances of educational technology per se, though they are often used in learning contexts. But this story has wider implications as they represent the leading edge of identity verification, and therefore, tracking and surveillance. While previously the sheer number of humans made it impractical to keep track of everybody, broadly used technology and artificial intelligence are making it possible for advertisers and governments to have a personal file on each individual, making it very easy to track patterns of behaviour, determine nationality and immigration status, or discriminate based on culture, demographics, religion or political affiliation. This is especially of concern for a student population that is trying to use the learning experience as a safe space to try on different identities. See also: How TikTok 2.0 Became a Weapon for ICE.
The Artificial Intelligence Disclosure Penalty: Humans Persistently Devalue AI-Generated Creative Writing
Manav Raj, Justin M. Berg, Rob Seamans,
Journal of Experimental Psychology: General,
2026/02/16
An "emerging body of research suggests that consumers stand to garner enjoyment and value from AI-generated creative goods (only if) they remain unaware that AI was involved in the creation process." The suggestion here (21 page PDF) is that, if people are aware something was created by AI, they value it less. "This AI disclosure penalty is remarkably persistent, holding across the time period of our study; across different evaluation metrics, contexts, and kinds of written content; and across interventions derived from prior research aimed at moderating the effect." What interests me is whether this effect will persist over time, or whether it is a product of a population to which AI is brand new, and not part of the background environment. Via Jonathan Boymal.
Copyright 2026 Stephen Downes. Contact: stephen@downes.ca
This work is licensed under a Creative Commons License.