How to Write a Good Spec for AI Agents
Addy Osmani,
2026/02/20
This is a detailed description of how to approach writing a specification for AI agents. Long story short: don't try to do it all in one go. Start with a high-level description of what you want done, then work with the AI to refine it into something comprehensive and robust. "Simply throwing a massive spec at an AI agent doesn't work - context window limits and the model's 'attention budget' get in the way. The key is to write smart specs: documents that guide the agent clearly, stay within practical context sizes, and evolve with the project."
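To make the process concrete, here's a minimal sketch of that refine-as-you-go loop. Nothing here is from Osmani's post: complete() is a hypothetical stand-in for whatever LLM client you use.

```python
from typing import Callable

def refine_spec(goal: str, complete: Callable[[str], str], rounds: int = 3) -> str:
    """Start from a one-paragraph goal and iteratively harden it into a spec."""
    # First pass: a deliberately high-level draft, not a massive document.
    spec = complete(f"Draft a short, high-level spec for: {goal}")
    for _ in range(rounds):
        # Ask the model to find the gaps...
        critique = complete(
            f"List the ambiguities, missing constraints, and edge cases in this spec:\n\n{spec}"
        )
        # ...then revise. Each round keeps the spec within a practical context size.
        spec = complete(
            f"Revise the spec to address these issues:\n\n{critique}\n\nCurrent spec:\n\n{spec}"
        )
    return spec
```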
Web: [Direct Link] [This Post][Share]
Using Music to Teach Democracy
Kristina Piskur,
Teach Magazine,
2026/02/20
This is interesting. "MELODY (Music Education for Learning Opportunities and Development of Youngsters) is an Erasmus+ project co-funded by the European Union with a mission that is both innovative and timely: to use the universal language of music as a powerful educational tool to enhance children's participation in democratic life." There's a project Handbook of Best Practices and a toolkit available on the project website. Related: How to solve the tenor shortage, via Chris Corrigan.
Web: [Direct Link] [This Post][Share]
Prototyping a Brightspace Course Coach Application
D'Arcy Norman,
2026/02/20
To be clear, what's interesting here isn't the software itself, which is "a rough proof-of-concept built for informational and exploratory purposes." No, it's that the software exists at all. A couple of nights ago I read on Mastodon, "I wonder if it's possible to build a standalone application that connects to Brightspace and analyzes all course materials and info using a local LLM and then provides a Coach to help me learn? (turns out, yes, and it works well)." This article from the next day describes the process, from the creation of a specifications document to the prompt to the application itself. While watching the Olympics. In one night.
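The pattern is simple enough to sketch. To be clear, this is not D'Arcy's code: fetch_course_topics and ask_local_llm are hypothetical stand-ins for a Brightspace (Valence) API client and a local model runner such as Ollama.

```python
from typing import Callable, List

def build_coach(
    fetch_course_topics: Callable[[], List[str]],  # hypothetical Brightspace client
    ask_local_llm: Callable[[str], str],           # hypothetical local-model call
) -> Callable[[str], str]:
    """Return a question-answering 'coach' grounded in the course materials."""
    chunks = fetch_course_topics()

    def coach(question: str) -> str:
        # Naive retrieval: rank content chunks by word overlap with the question.
        words = set(question.lower().split())
        ranked = sorted(chunks, key=lambda c: -len(words & set(c.lower().split())))
        context = "\n\n".join(ranked[:3])
        return ask_local_llm(
            f"Using only this course material:\n\n{context}\n\nHelp the student with: {question}"
        )

    return coach
```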
Web: [Direct Link] [This Post][Share]
Comprehensive AI Literacy: The Case for Centering Human Agency
Sri Yash Tadimalla, et al.,
arXiv,
2026/02/20
Report authored at UNC Charlotte. As has become popular recently, the paper differentiates AI literacy, AI fluency, and AI competency. And it stresses four 'pillars' of comprehensive AI literacy: "understanding the scope and technical dimensions of AI, knowing how to interact with (Generative) AI technologies, being able to apply principles of critical, ethical, and responsible AI usage, and analyzing the implications of AI on society" (ironically whatever algo they were using spelled 'usage' as 'U.S.A.ge'). The authors stress the fourth pillar, arguing for "a systemic shift toward comprehensive AI literacy that centers human agency - the empowered capacity for intentional, critical, and responsible choice." Agency requires some preconditions: "True literacy involves teaching about agency itself, framing technology not as an inevitability to be adopted, but as a choice to be made. This requires a deep commitment to critical thinking and a robust understanding of epistemology." There's an image but my image uploader is broken today.
Web: [Direct Link] [This Post][Share]
The History of Open Education in the Maricopa Community Colleges
Lisa C. Young, Deborah Baker, Matthew Bloom,
PressBooks,
2026/02/19
The further we look back in time, the more compressed it feels, and Maricopa Community Colleges' early innovations in open and online learning feel very compressed in the mid 2020s. It's hard to capture those days of the 1990s, and while this e-book is reasonably comprehensive, describing the large number of initiatives that came out of the system, it doesn't really have the feeling of being there (that's not really a criticism, just a sentiment). So if you're reading this - and you should - you should supplement it with a review of Alan Levine's 2003 article that links to coverage here and here and even in this here newsletter. Via Alan Levine, naturally.
Web: [Direct Link] [This Post][Share]
Open texture and the reconsideration of the structure of concepts
Veronica Cibotaru,
Linguistics and Philosophy,
2026/02/19
It always interests me to know when people talk of concept-formation in learning and intelligence just what sort of account of 'concept' they are using. Consider, for example, a 'cat'. Suppose we encounter one that is 25 feet tall. Is it still a cat? Is it a new type of cat? Or just an existing cat (a tiger, say) with extraordinary properties? Anyhow. This article (24 page PDF) takes these questions seriously, and in particular, examines Waismann's notion of open texture through the paradigm of the prototype theory of concepts, a theory that in turn evolved out of cognitive linguistics and can be contrasted with empirical and formal theories of concepts. Specifically, it involves the idea that concepts can be open-ended and vague, similar to Wittgenstein's 'family resemblances', such that (say) different entities can be more or less instances of a given concept (that is, being an instance of a concept is not an 'all or nothing' proposition). This paper is accessible and clearly written, and a good starting point for a serious inquiry into these ideas, if you're so inclined.
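As a toy illustration of graded membership (mine, not the paper's), here's a prototype-distance score that makes the 25-foot cat a member of the category, just a much less prototypical one:

```python
# Features of the 'prototypical' cat; the numbers are invented for illustration.
CAT_PROTOTYPE = {"height_m": 0.25, "legs": 4, "purrs": 1.0}

def cat_ness(instance: dict) -> float:
    """Return a graded 0..1 membership score based on distance from the prototype."""
    distance = sum(
        abs(instance[k] - v) / max(abs(v), 1.0) for k, v in CAT_PROTOTYPE.items()
    )
    return 1.0 / (1.0 + distance)

house_cat = {"height_m": 0.25, "legs": 4, "purrs": 1.0}
giant_cat = {"height_m": 7.6, "legs": 4, "purrs": 1.0}  # the 25-foot case

print(cat_ness(house_cat))  # 1.0: fully prototypical
print(cat_ness(giant_cat))  # ~0.12: still cat-like, but far from the prototype
```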
Web: [Direct Link] [This Post][Share]
AI fatigue is real and nobody talks about it
Siddhant Khare,
2026/02/19
This is an interesting set of reflections on what it means to use AI to develop software. There's the good and there's the bad. "AI is the most powerful tool I've ever used. It's also the most draining. Both things are true... If you're tired, it's not because you're doing it wrong. It's because this is genuinely hard. The tool is new, the patterns are still forming, and the industry is pretending that more output equals more value. It doesn't. Sustainable output does." And AI fundamentally changes the job from being a creator using deterministic tools to being a code reviewer using (untrustworthy) non-deterministic tools. Again, this all points to the idea that AI is not eliminating the need for skills, but changing the skills we need. Via Martin Fowler, who credits Tim Bray.
Web: [Direct Link] [This Post][Share]
When Nonprofit Leaders Should Think Like Creatives
Zac Hill, Ben Marshall,
SSIR,
2026/02/19
It's that old question: do fish know they're wet? This article considers the nature of the paradigms we swim within without even realizing what they are. For example, write authors Zac Hill and Ben Marshall, many non-profits facing challenges focus on "rigorous, well-established ways of working... hiring staff and volunteers to deliver products or services to beneficiaries... emblematic of a way of working we call the institutional paradigm." But it's not the only paradigm. There is the 'democratic paradigm' - where "people are elected or appointed rather than hired" - or the 'legal paradigm', or the 'social movement paradigm'. Here, the authors suggest organizations look at what they call 'the creative paradigm', which is "generally more flexible, output-oriented, and driven by expert discernment." There are opportunities, they write, such as a more flexible talent model, different time horizons, and different ways of evaluating work.
Web: [Direct Link] [This Post][Share]
"If Testing Companies Use AI to Grade, Why Can't We?"
Nick Potkalitsky,
Educating AI,
2026/02/19
I like this article because it carefully steps through the question it addresses. Readers of this newsletter will know that AI scoring has been available for a number of years now, and easily predates the recent generative AI boom. As Nick Potkalitsky notes, "Ohio uses discriminative AI. Its job is to classify and score existing text. You give it an essay, it returns a number: 1, 2, 3, or 4 points." It is trained on human-graded essays and, after training, does nothing but classify essays into separate categories. By contrast, "The AI teachers worry about, tools like ChatGPT, is generative AI. Its job is to create new text." It's completely different, shouldn't be used for grading, and probably wouldn't be very good at it (by contrast, discriminative AI is often fairer and more consistent than human graders).
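The distinction is easy to see in code. Here's a minimal sketch of the discriminative approach using scikit-learn; the essays and scores are placeholders, not Ohio's actual system:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data: essays paired with human-assigned grades.
essays = ["essay text one ...", "essay text two ...", "essay text three ..."]
scores = [1, 3, 4]

# The classifier learns to map essay features to one of the existing labels.
scorer = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
scorer.fit(essays, scores)

# It can only classify; it cannot generate text or invent a new score.
print(scorer.predict(["a new, unseen essay ..."]))
```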
Web: [Direct Link] [This Post][Share]
CC Licenses, Data Governance, and the African Context: Conversations and Perspectives
Annemarie Eayrs,
Creative Commons,
2026/02/19
Creative Commons has offered the next argument in what it describes here as a process of redefining 'open'. "CC licenses are often viewed as neutral tools, but in practice they can amplify existing power imbalances (as we know, infrastructure is not neutral!). For example, marginalized language and data communities may lack the leverage to negotiate how open resources are reused." This is not new and not unique to marginalized language and data communities - anyone not wealthy enough to hire lawyers has no effective rights in a law- and lawyer-based system. But this isn't the issue being flagged by Creative Commons. "We know that openness is much more than a set of legal tools; it is a set of values, a way of belonging, a wish for a better future." The specific value CC seems to be promoting, though, is transactionalism. "Communities are responding by asking for openness that also accounts for agency, consent, reciprocity, and governance." Who speaks for 'communities'? Creative Commons? Related: Google backs African push to reclaim AI language data. Also: Microsoft Research releases PazaBench and Paza automatic speech recognition models, advancing speech technology for low resource languages.
Web: [Direct Link] [This Post][Share]
Books and screens
Carlo Iacono,
Aeon,
2026/02/19
This article begins with an observation: "The same person who cannot get through a novel can watch a three-hour video essay on the decline of the Ottoman Empire. The same teenager who supposedly lacks attention span can maintain game focus for hours." The point is that what some people are calling a cognitive decline is actually a transition to multi-modality, and if sustained attention is a problem, it's more a problem of design and architecture, not modality. Then, as if to prove the point, this essay essentially repeats the same three or four points over and over through more than 3,000 words (yes, I counted). They're good points, sure, but they don't bear repeating that much.
Web: [Direct Link] [This Post][Share]
ExplanAItions 2025: The Evolution of AI in Research
Wiley,
2026/02/19
I came across this (53 page PDF) while doing some desk research, and though it's from last year (and nominally behind a spamwall), I thought I'd pass it along. The real value of this report is the 44 use cases for AI in research that it lists and organizes according to how likely and how useful they are (with a number falling into the 'humans preferred' category). There's also what they call the "Wiley AI Framework" describing what people should act on, watch, or envision. It's also interesting to look back on their perspective from about twelve months ago: "This year, GenAI enters the Trough of Disillusionment as organizations gain understanding of its potential and limits. AI leaders continue to face challenges when it comes to proving GenAI's value to the business." The survey does reflect that for 2025, but for 2026 I think we're already seeing significant advances.
Web: [Direct Link] [This Post][Share]
Diamond Open Access Needs Institutions, Not Heroes
Curt Rice,
The Scholarly Kitchen,
2026/02/18
This is a good argument. Diamond open access refers to academic texts that are published, distributed and preserved with no fees to either reader or author. Curt Rice points out that it used to exist in the pre-internet era: "Manuscripts circulated through departmental working papers series and informal scholarly networks." I remember those days. Rice writes, "What made this ecosystem possible was not heroism, but tractability: limited scale, manageable volume, and informal governance. And that is precisely what has changed." A lot of the commercial publishing infrastructure developed as a way to try to account for this scale, but it has failed as a scholarly activity. "Paying reviewers or editors reframes scholarly contribution as a transactional service rather than a professional responsibility embedded in institutional roles... More importantly, payment does not solve the problem of alignment. What many academics seek is... assurance that their professional contributions are recognized, supported, and valued within the institutions that depend on them." This makes sustainability for diamond open access publication an institutional responsibility, and underlines the need to build structures that support it.
Web: [Direct Link] [This Post][Share]
Agentify Your App with GitHub Copilot’s Agentic Coding SDK
Shittu Olumide,
Machine Learning Mastery,
2026/02/18
OK, forget about all the tech in this post (and there's a lot of tech in this post). Take a look at the diagram and reflect on what it says about thinking generally. In the past we've had things like critical thinking and computational thinking. These were useful concepts. We might now want to coin a new discipline 'agentic thinking' to follow this sort of model, summarized as "autonomous execution, multi-step problem solving, persistent context, tool use," and deploying skills such as 'task planning', 'tool orchestration', 'multi-turn conversation' and 'evaluation'. None of this is new, per se, but organizing our approach to creativity, problem-solving and decision-making this way is, I think.
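Here's a minimal sketch of that loop (my rendering of the diagram's ideas, not the SDK's API): plan and evaluate are hypothetical stand-ins for model calls, and the tool dictionary is the orchestration layer.

```python
from typing import Callable, Dict, List, Tuple

def run_agent(
    goal: str,
    tools: Dict[str, Callable[[str], str]],
    plan: Callable[[str, list], List[Tuple[str, str]]],  # hypothetical: (goal, results) -> [(tool, arg)]
    evaluate: Callable[[str, list], bool],               # hypothetical: is the goal met?
    max_turns: int = 5,
) -> list:
    results: list = []                                   # persistent context
    for _ in range(max_turns):                           # multi-turn loop
        for tool_name, arg in plan(goal, results):       # task planning
            results.append(tools[tool_name](arg))        # tool orchestration
        if evaluate(goal, results):                      # evaluation
            break
    return results
```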
Web: [Direct Link] [This Post][Share]
On Student Success
Glenda Morgan,
On Student Success,
2026/02/18
I'm mentioning this here so people know that this relatively new newsletter exists, and also to urge Glenda Morgan to add an RSS feed so I can follow it in my RSS reader. It's a sibling publication to Phil Hill's On Ed Tech, which mentioned it today. Hill's posts are often for subscribers only, which is why I cite it less often than I might, but I always read the stubs on his RSS feed, just to keep track.
Web: [Direct Link] [This Post][Share]
The Hidden Dangers of Meta's Partnership Offer to Schools
Faith Boninger,
Progressive.org,
2026/02/18
"In April 2025, Meta started recruiting U.S. middle and high schools to participate in Instagram's new School Partnership Program, inviting schools to partner with Instagram to help combat online bullying," reports Faith Boninger. This is an arrangement more likely to benefit Meta than the schools, she writes, by enlisting schools in their (limited) content moderation efforts. And "the idea that Meta will review content reported by partner-school accounts sooner than other reported violations also implies a veiled threat: that schools that do not partner with the company will find themselves waiting longer for review." Schools should disassociate themselves from Meta, she says. "Yes, many kids will still use Instagram. But at least their school won't be leading them there." Via Larry Cuban. Related: Meta plans to add face recognition to its smart glasses.
Web: [Direct Link] [This Post][Share]
The Civic Stakes of Organizational Disagreement
Peter Levine, Dayna L. Cunningham,
SSIR,
2026/02/18
I mostly agree with what's in this article, not simply because disagreements within organizations are essential for democratic processes, but because (as I've often said) diversity is necessary for any organization to learn and adapt. There's a nice way of putting it buried in the centre of this article: "If neutrality is impossible, pluralism is an essential ideal." It doesn't matter whether we're talking about a person, a company, a university or a society: we have to make decisions. We cannot remain neutral (whatever that even means). And in such a case, the best and only reasonable approach is to consider various possibilities and negotiate our way to a resolution. These are never final; we have to do it each time. And as the article notes, the trick is to do it without rending ourselves in the process. "Hannah Arendt argued that a good life requires consequential debate among equals who are meaningfully different; Jürgen Habermas insisted that collective legitimacy depends on free, inclusive, and reasoned deliberation." This isn't just the responsibility of leaders. Everybody has a role to play. That's why, in any company, university or society, everybody is important.
Web: [Direct Link] [This Post][Share]
A Guide to Which AI to Use in the Agentic Era
Ethan Mollick,
2026/02/18
The developments in AI technology continue apace. Today's new concept to learn is the 'harness'. Ethan Mollick writes, "A harness is a system that lets the AI use tools, take actions, and complete multi-step tasks on its own. Apps come with a harness." He continues, "Until recently, you didn't have to know this. The model was the product, the app was the website, and the harness was minimal. You typed, it responded, you typed again. Now the same model can behave very differently depending on what harness it's operating in." This makes a big difference in how an AI works for you, so you need to check. If you're using ChatGPT 5.2, for example, are you using 'auto', 'thinking' or 'instant'? It depends on what you're using it for.
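A toy illustration of the point, with model() standing in for any completion call: the same model, wrapped in two different harnesses, behaves very differently.

```python
from typing import Callable, Dict

def chat_harness(model: Callable[[str], str], prompt: str) -> str:
    # Minimal harness: one prompt in, one response out.
    return model(prompt)

def agent_harness(model: Callable[[str], str], prompt: str,
                  tools: Dict[str, Callable[[str], str]]) -> str:
    # Agentic harness: the same model may call a tool before answering.
    choice = model(
        f"Which of {list(tools)} would help with: {prompt}? Reply with a tool name or NONE."
    ).strip()
    if choice in tools:
        evidence = tools[choice](prompt)
        return model(f"{prompt}\n\nTool result: {evidence}")
    return model(prompt)
```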
Web: [Direct Link] [This Post][Share]
Current
Terry Godier,
2026/02/18
What I love about the arrival of AI-supported coding is that now people can create their own flavours of old standards. Take RSS readers, for example. Take Current, for example. "Each article has a velocity, a measure of how quickly it ages. Breaking news burns bright for three hours. A daily article stays relevant for eighteen. An essay lingers for three days. An evergreen tutorial might sit in your river for a week. As items age, they dim. Eventually they're gone, carried downstream. You don't mark them as read. You don't file them. They simply pass, the way water passes under a bridge." My needs are different; I wanted to surface unread voices, so I wrote a reader that displays those who post least frequently first. I also wanted it to run locally, but sync across instances. So that's what I wrote. Each person gets their own version, their own flavour, of what they want.
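The quoted decay rule is easy to sketch. Whether Current's actual curve is linear is my assumption, not something Godier describes:

```python
# Lifespans from the quoted description; the linear dimming is an assumption.
LIFESPAN_HOURS = {
    "breaking": 3,
    "daily": 18,
    "essay": 72,       # three days
    "evergreen": 168,  # a week
}

def brightness(kind: str, age_hours: float) -> float:
    """1.0 when new, dimming toward 0.0, at which point the item is gone."""
    return max(0.0, 1.0 - age_hours / LIFESPAN_HOURS[kind])

print(brightness("breaking", 2))  # ~0.33: still burning, but dimming fast
print(brightness("daily", 20))    # 0.0: carried downstream
```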
Web: [Direct Link] [This Post][Share]
Why the Difference Between AI Literacy and AI Fluency Matters
David Ross,
Getting Smart,
2026/02/17
This is a useful article because it links to a number of recent AI literacy frameworks. It also makes the case that our mastery of AI will need to go beyond literacy. "I suggest we adopt a tried-and-true educational model that governs our thinking around curriculum frameworks: A scope and sequence. The final outcome of that scope and sequence should be AI fluency." This could be true - but I would caution that what counts as AI literacy, much less fluency, is very much a moving target. As a case in point, for example: do we or don't we need prompt engineering? New, related, and not mentioned in the article: Acadia University's free Introduction to AI Literacy course (and CBC coverage). Also: the U.S. Department of Labor's new Framework for Artificial Intelligence Literacy (15 page PDF).
Web: [Direct Link] [This Post][Share]
Our Emerging Planetary Nervous System
Rimma Boshernitsan,
NOEMA,
2026/02/17
This is a longish article with some good examples showing a future state (and likely applications) of what the author calls our planetary nervous system. What is meant by that is the interconnected network of sensors and indicators that respond to what's happening in the natural world, with inputs ranging from waterflows to migration patterns to the spread of wildfires. "This is machine intelligence at its most vital," writes Rimma Boshernitsan, "not replacing judgment, but extending our senses." The objective is "to integrate so coherently with the biosphere that the whole can self-regulate rather than just react." And what we want, I would say, is for this integration to be available to everyone, the way the Global Biodiversity Information Facility (GBIF) "weaves millions of records from field notes, museum collections, citizen observations and satellite traces into a living archive" creating "a global network and open-access infrastructure funded by governments worldwide." If we don't require that this data be open access, someone will attempt to privatize it.
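GBIF makes the open-access point concrete: anyone can query the living archive. A minimal example against the public occurrence API documented at api.gbif.org (response fields may change over time):

```python
import requests

resp = requests.get(
    "https://api.gbif.org/v1/occurrence/search",
    params={"scientificName": "Ursus arctos", "limit": 3},
    timeout=30,
)
resp.raise_for_status()

# Each record weaves together field notes, museum records, and citizen observations.
for record in resp.json().get("results", []):
    print(record.get("scientificName"), record.get("country"))
```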
Web: [Direct Link] [This Post][Share]
Agentic Email
Martin Fowler,
martinfowler.com,
2026/02/17
Martin Fowler reports, "I've heard a number of reports recently about people setting up LLM agents to work on their email and other communications. The LLM has access to the user's email account, reads all the emails, decides which emails to ignore, drafts some emails for the user to approve, and replies to some emails autonomously." As enthusiastic as I am about AI, I agree with him that it's far too early to trust agentic email with real access to my email, and not only because of the security risks. I mean, I can't even trust my anti-spam services to keep out all the spam and only the spam. I'm not ready to let it make statements on my behalf. And oh yeah, the security risk.
Web: [Direct Link] [This Post][Share]
Top Priorities for Global Heads of Learning and Talent
iVentiv,
2026/02/17
This report (26 page PDF) is based on a survey of 468 heads of learning and talent from 394 companies in Europe, the United States, the UK and the Middle East. The results are not surprising. The top priority continues to be leadership and executive development, as ever, and close on its heels is artificial intelligence. Also, "In last year's data, the phrase 'skills-based organisations' came up more than twice as often as in 2024.... That persistence reflects the 'skills based' approach becoming part of the mainstream as we head into 2026." This priority, and also the emphasis on 'learning culture', reflects the need to adapt to a rapidly changing skills landscape; for this reason as well it is difficult to link learning programs directly to return on investment (ROI). The conditions before and after learning are often completely different, and it's often a case of 'adapt or get left behind' for individuals and companies. The report is behind a spamwall, and you can give them your contact information if you're feeling nice, but this direct link should work as well.
Web: [Direct Link] [This Post][Share]
The double-edged sword: Open educational resources in the era of Generative Artificial Intelligence
Ahmed Tlili, Robert Farrow, Aras Bozkurt, Tel Amiel, David Wiley, Stephen Downes,
Journal of Applied Learning & Teaching,
2026/02/16
I contributed to this paper (9 page PDF) - not a ton, but definitely not nothing. Here's the argument that came out of our exchanges: "We analyze several emerging tensions: the ontological crisis of human authorship, which challenges traditional copyright frameworks; the risk of 'openwashing' where proprietary models appropriate the language of the open movement," and some ethical issues. "This paper argues that the binary definition of 'openness' is no longer sufficient. We conclude that ensuring equity in the AI era requires a transition from open content creation to the stewardship of 'white box' technologies and transparent digital public goods." Now there's a lot of uncharted territory in that final statement. This paper just begins to touch on it, and (in my view) concludes without really explaining what we might mean by all that.
Web: [Direct Link] [This Post][Share]
From data to Viz - Find the graphic you need
Yan Holtz and Conor Healy,
2026/02/17
Tom Woodward links to three interesting graphing resources in one post. This first item, a tool for selecting the sort of graphic you want to use, classifies chart types according to the number of variables you're looking at. Their poster is probably the best value of the three. If you prefer a more open-ended selection, there's this complete guide to graphs and charts. This page also links to "on-demand courses" that "show you how to go beyond the basics of PowerPoint and Excel to create bespoke, custom charts" costing about $100 per. And how do you make the charts? You could use SciChart, a 'high-performance' Javascript chart and graph library. But the pricing is insane, starting at $116 per developer per month. I'm pretty sure ChatGPT will teach you about the types of charts (actually, I just made one for you while writing this post) and Claude Code will be able to write you a free version of SciChart.
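A toy version of the same decision logic, vastly simplified from the actual From Data to Viz tree:

```python
def suggest_chart(numeric: int, categorical: int) -> str:
    """Suggest a chart type from counts of numeric and categorical variables."""
    if numeric == 1 and categorical == 0:
        return "histogram or density plot"
    if numeric == 2 and categorical == 0:
        return "scatter plot"
    if numeric == 1 and categorical == 1:
        return "box plot or violin plot"
    if numeric == 0 and categorical == 1:
        return "bar chart"
    return "consider a heatmap, parallel coordinates, or small multiples"

print(suggest_chart(2, 0))  # scatter plot
```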
Web: [Direct Link] [This Post][Share]
GenAI as automobile for the mind, and exercise as the antidote: A metaphor for predicting GenAI's impact
Mark Guzdial,
Computing Ed Research - Guzdial's Take,
2026/02/17
I like this analogy. "Some of you may remember the Apple ads that emphasized the computer as a 'bicycle for the mind.' GenAI is not like a bicycle for the mind. Instead, it's more like an automobile." Or, says Mark Guzdial, "As Paul Kirschner recently wrote, GenAI is not cognitive offloading. It's outsourcing. We don't think about how to do the tasks that we ask GenAI to do. As the recent Anthropic study showed, you don't learn about the libraries that your code uses when GenAI is generating the code for you (press release, full ArXiv paper)." Maybe. But it depends on how you use AI - there is a 'bicycle method' (to coin a phrase) when using AI, which is what (I think) I do - making sure I understand what's happening each step of the way. As Guzdial says, "Generative AI is a marshmallow test. We will have to figure out that we need to exercise our minds, even if GenAI could do it easier, faster, and in some cases, better." See also: To Trust or to Think: Cognitive Forcing Functions Can Reduce Overreliance on AI in AI-assisted Decision-making.
Web: [Direct Link] [This Post][Share]
mist: Share and edit Markdown together, quickly (new tool)
Matt Webb,
Interconnected,
2026/02/16
This is pretty cool: it's a collaborative markdown editor with a couple of interesting features: "all docs auto-delete 99 hours after creation. This is for quick sharing + collab"; and "Roundtripping: Download then import by drag and drop on the homepage: all suggestions and comments are preserved." Built over the weekend using Claude Code. And it reminds me of a remark I heard on TWIT: coding with AI is the best video game out there right now. "You know it's very addictive using Claude Code over the weekend. Drop in and write another para as a prompt, hang out with the family, drop in and write a bit more, go do the laundry... scratch that old-school Civ itch, 'just one more turn.' Coding as entertainment."
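The 99-hour rule is simple enough to sketch; the created-at timestamp is an assumption, since mist's storage model isn't described:

```python
from datetime import datetime, timedelta, timezone

TTL = timedelta(hours=99)  # "all docs auto-delete 99 hours after creation"

def expired(created_at: datetime) -> bool:
    """True once a doc has outlived its 99-hour lifespan."""
    return datetime.now(timezone.utc) - created_at > TTL

# A purge pass would simply drop expired docs:
# docs = [d for d in docs if not expired(d.created_at)]
```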
Web: [Direct Link] [This Post][Share]
The Intrinsic Value of Diversity
Eric Schwitzgebel,
The Splintered Mind,
2026/02/16
I've made a similar argument in my own writings on ethics: "diversity in general is intrinsically valuable, and there's no good reason to treat moral diversity as an exception." People will have a different understanding of what's right and good than you or I do, and overall (within reason) that's a good thing. Now the reasoning offered here is based on aesthetic premises: "a world where everyone liked, or loved, the same things would be a desperate, desolate world." Or as Eric Schwitzgebel summarizes, "An empty void has little or no value; a rich plurality of forms of existence has immense value, no further justification required." My own reasoning is more pragmatic: a world where we all valued the same things would be static and unchanging, and therefore could never learn or adapt.
Web: [Direct Link] [This Post][Share]
The Shortcut That Costs Us Everything
Alec Couros,
Signals from the Human Era,
2026/02/16
The title is provocative, but maybe a bit overstated. Here's the argument: why not have students analyze AI-generated writing (instead of writing their own essays)? Because "this approach becomes the dominant mode, displacing rather than supplementing the generative work students need to do themselves." You can only get so far studying what others have written; you have to write for yourself to really understand it. Couros decomposes the original suggestion, identifying the assumptions it rests on (for example: students are able to analyze writing, students don't need to generate their own). But even more importantly, there's the risk that students won't develop sufficient critical thinking skills. "Critical media literacy isn't just a nice academic skill. It's a survival capacity. And we're proposing to develop it by removing the very experiences that might allow students to understand, at a visceral level, what synthetic content lacks." But... is that the skill people really need? We need better standards than "two legs good, zero legs bad." I think what we really need (and have never really been taught well) is the means to distinguish between what can be trusted and what can't (no matter who or what created it).
Web: [Direct Link] [This Post][Share]
Before You Buy AI for Your Campus, Read This
Marc Watkins,
Rhetorica,
2026/02/16
It's like we're asking the same questions over and over again. Maybe they can be reframed? Marc Watkins begins with the ethical perspective, but looks at whether institutions should buy AI tools from three different perspectives: would students even use the tools (or would they distrust them); would students use their own AI to bypass institutional guardrails; and why would institutions use a tool that would eliminate the positions they are preparing students for? "Institutions like Gonzaga University," writes Watkins, "are making AI part of their core curriculum by putting it in conversation with their institutional values." Specifically, "Because a commitment to inquiry and discernment serves as the foundation of our core curriculum, our students will engage with AI in ways that are both practical and critical." That makes sense, but there's also the risk that this is just wishful thinking.
Web: [Direct Link] [This Post][Share]
Beautify This Slide
Dean Shareski,
Ideas and Thoughts,
2026/02/16
Dean Shareski pushes back against "the so-called thought leaders out there who seem to have a clear handle on how to best consider AI for learning and schools." You see them a lot on LinkedIn and, of course, on their own web pages, offering "frameworks and approaches neatly packaged, intended to support leaders, educators and students in their professional and instructional use of AI." The reality isn't that straightforward. Take the simple question of using AI to help design slides for a presentation. PowerPoint will incessantly offer suggestions. Sometimes they're useful, but sometimes the personal touch is what's needed. There's no general rule. Me, I prefer to design by hand, but that's mostly because I enjoy designing. Though I like to think there's an intuitive aspect, where my design reinforces my message in a way that an AI-generated design would not. It's hard to say. Image: one of mine, that I'm pretty sure an AI would never use to illustrate this post.
Web: [Direct Link] [This Post][Share]
Oh, good: Discord's age verification rollout has ties to Palantir co-founder and panopticon architect Peter Thiel
Lincoln Carpenter,
PC Gamer,
2026/02/16
Applications like Discord and TikTok aren't instances of educational technology per se, though they are often used in learning contexts. But this story has wider implications as they represent the leading edge of identity verification, and therefore, tracking and surveillance. While previously the sheer number of humans made it impractical to keep track of everybody, broadly used technology and artificial intelligence are making it possible for advertisers and governments to have a personal file on each individual, making it very easy to track patterns of behaviour, determine nationality and immigration status, or discriminate based on culture, demographics, religion or political affiliation. This is especially of concern for a student population that is trying to use the learning experience as a safe space to try on different identities. See also: How TikTok 2.0 Became a Weapon for ICE.
Web: [Direct Link] [This Post][Share]
The Artificial Intelligence Disclosure Penalty: Humans Persistently Devalue AI-Generated Creative Writing
Manav Raj, Justin M. Berg, Rob Seamans,
Journal of Experimental Psychology: General,
2026/02/16
An "emerging body of research suggests that consumers stand to garner enjoyment and value from AI-generated creative goods (only if) they remain unaware that AI was involved in the creation process." The suggestion here (21 page PDF) is that, if people are aware something was created by AI, they value it less. "This AI disclosure penalty is remarkably persistent, holding across the time period of our study; across different evaluation metrics, contexts, and kinds of written content; and across interventions derived from prior research aimed at moderating the effect." What interests me is whether this effect will persist over time, or whether it is a product of a population to which AI is brand new, and not part of the background environment. Via Jonathan Boymal.
Web: [Direct Link] [This Post][Share]
This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.
Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.
Copyright 2026 Stephen Downes Contact: stephen@downes.ca
This work is licensed under a Creative Commons License.