Tamagotchigogy: A Pedagogical Framework of Care, Feedback, and Responsiveness
Geoff Cain,
Brainstorm in Progress,
2026/01/09
I'm not sure whether this article is serious, but I'll assume it is for the sake of discussion. "Tamagotchigogy is a new pedagogical framework that uses the Tamagotchi digital pet as a metaphor for learning itself. It emphasizes care, feedback, responsiveness, and engagement as essential to sustaining cognitive and emotional growth." Tamagotchi were popular some 30 years ago; they were 'digital pets' that you cared for by feeding them, showing them attention, etc., by pressing buttons on the watch-sized display. You can still buy them, though nobody knows why you would. Anyhow, the idea here (in this surprisingly long article) is that education needs "pedagogies that balance cognitive development with care, adaptability, and presence." So "The educator's role is to monitor signals, regulate input, support development, and promote active, sustained engagement." Note: "The learner is not the pet. The metaphor applies to the learning process, not the person."
Web: [Direct Link] [This Post][Share]
Gmail is entering the Gemini era
Blake Barnes,
Google,
2026/01/09
From Slashdot: "anonymous reader quotes a report from Wired: Google is putting even more generative AI tools into Gmail as part of its goal to further personalize user inboxes and streamline searches." If the music added to the video of Google's announcement is anything to judge by, this new feature will be really annoying. The problem is that Google doesn't seem to understand what the problem is. The problem isn't that I am unable to read and respond to email messages quickly enough. The problem is that I get too many irrelevant emails, emails that were probably authored using the same annoying technology Google is promoting in this video. What I want is email that works more like RSS: it listens actively to a small number of people that I identify ahead of time, and allows me to go out looking if I want to hear from anyone else. Of course, that model doesn't really work well with advertising.
Web: [Direct Link] [This Post][Share]
X Is a Power Problem, Not a Platform Problem
Laurens Hof,
connectedplaces.online,
2026/01/09
"The implicit theory behind the open social web was that platform quality would determine outcomes." But "you cannot out-compete 'where the ruling faction radicalizes and coordinates' by having better moderation policies or algorithmic choice. X is not a platform problem anymore, it is a power problem, and building a different platform does not solve the power problem... It's easy to be highly cynical, and that point of view has been extensively validated over the years, but I do choose to hold to hope that we can build a better, more ethical, social internet out of the toxic waste ground of the current state of the internet." Me too. That's why I'm still working on it.
Web: [Direct Link] [This Post][Share]
How to Choose AI Video Tools That Actually Support Learning
Philippa Hardman,
Dr Phil's Newsletter, Powered by DOMS AI,
2026/01/09
This article first identifies a number of factors relevant to evaluating AI-generated instructional video, and then applies these to an evaluation of four platforms: Colossyan, Synthesia, HeyGen, and NotebookLM. I've tried the first and last; the middle two want me to sign up, which I don't feel like doing. The factors are mostly based on the idea of 'learning as remembering' and focus on such things as spacing and retrieval. Nothing about engagement, practice or application in real life. Philippa Hardman (or her AI) gives high marks to the first two, but to me the first is just a talking head with a slide turner - nothing compelling at all (though it's similar to the e-learning my employer offers as mandatory training). This tells me that either the criteria are wrong or the ranking is wrong. Or both.
Web: [Direct Link] [This Post][Share]
Cambridge college to target elite private schools for student recruitment
Richard Adams,
The Guardian,
2026/01/09
Never let it be said that the elite are going down without a fight. Two items in the Guardian describe efforts by a Cambridge college, Trinity Hall, to recruit privately educated students in order to stem 'reverse discrimination'. Lee Elliot Major, a professor from one of the lesser universities, commented, "At a time when the educational playing field is more unequal than ever, universities should be identifying academic potential wherever it exists, not mistaking polished performance, so often shaped by privilege, for greater raw talent." Alastair Campbell, some unimportant leftist politician, said "For the college to talk about pupils from the top private schools being 'ignored and marginalised' suggests a total departure from reality, which is not a great sign for an elite academic institution." Via Sophie Pender and Change.org.
Web: [Direct Link] [This Post][Share]
Date is out, Temporal is in
Mat "Wilto" Marquis,
Piccalilli,
2026/01/08
I once had a developer friend say to me, "calendars are solved." The remark has stuck in my mind ever since, because if anything is not actually solved, in life or in programming, it's calendars. I could go on about why I have two separate and incompatible calendars, one for home, one for work, just because. But I'll refer instead to this article that describes in detail why JavaScript's Date object is fundamentally broken. "My issue with Date is soul-deep. My problem with Date is that using it means deviating from the fundamental nature of time itself." I hear you, Mat, I hear you.
Web: [Direct Link] [This Post][Share]
Claude Code and What Comes Next
Ethan Mollick,
One Useful Thing,
2026/01/08
Suppose I tried this in Claude Code: "Develop a web-based or software-based startup idea that will make me $1000 a month where you do all the work by generating the idea and implementing it." Would it work? Ethan Mollick tries and comes up with something that, yeah, could earn the money. If you didn't mind ripping people off, that is. Here's what Claude built him (with the sales link removed). What do we learn? "[The reason] Claude Code is so good is that it uses a wide variety of tricks in its agentic harness that allow its very smart AI, Opus 4.5, to overcome many of the problems of LLMs." You might also want to look at his side-project, "a Claude Code window where I had the AI building a game for me for fun: a simulation of history where civilizations rise and fall, developing their own languages, cultures, and economies."
Web: [Direct Link] [This Post][Share]
AGI isn't "coming" - it's already reshaping how young people think
Open Thinkering,
2026/01/08
This is a longish article that begins with the problem of people forming emotional attachments to AI systems and traces through a discussion of AI literacies. I could follow some digressions here, but better to go directly to the main point: "How do we develop the capacities people need when AI systems are this sophisticated and pervasive?" asks Doug Belshaw. "As I've argued above, the answer isn't better school lessons, but the development of literacies across contexts, through socially-negotiated, context-dependent participation." It's the latter part of this that is most important. I don't know how we get from A to B, but we need somehow to make the transition from classroom-based instruction to context-dependent participation. Forget the 'memory test' model of assessing learning; it's no longer useful, if it ever was. Facility in 'working the network' (whatever that means in a particular context) is what will matter in the future.
Web: [Direct Link] [This Post][Share]
Alan Levine links to this application and web service that converts (almost) any document format to (almost) any other. For me, the biggest missing feature is the ability to convert PDF to anything. This is about 90% of my own use cases for document conversion, but your needs may vary.
Web: [Direct Link] [This Post][Share]
Ungrading 2.0: Labor, Agency, and the Research Archive
Ian O'Byrne,
2026/01/08
Ian O'Byrne is advocating a labour-based system of assessment to replace the existing points-based system. "This is not effort theater. Busywork dressed up as engagement. Labor here means visible, sustained engagement with ideas, people, and artifacts." He writes, " I'm moving to a Labor-Based Grading Contract, paired with a Digital Research Archive (learning log) that replaces quizzes, exams, and most traditional assessments." Sounds great, but in my experience you cannot create a genuine contract between two very unequal parties. Each person has to have the capacity to say no, but students don't have that right. Maybe in theory there's a way to set it up before they enrol in a particular school, program or class, but I don't see anyone in the education system advocating for that kind of openness (though maybe they should).
Web: [Direct Link] [This Post][Share]
The Thinking Class
Carlo Iacono,
Hybrid Horizons,
2026/01/08
I'm looking at this article with this other article in mind: Agile Learning's Strategies for Keeping AI (Mostly) Out of Your Course. Why would you do this? Here's a fresh take from Carlo Iacono: anxiety about AI isn't universal, but reflects the concerns of a specific class of people who practice "a narrower, culturally prestigious form of thinking: abstract, language-heavy, credentialled reasoning that can be made legible to institutions." He calls it 'prestige thinking', practised by "Writers. Researchers. Analysts. Consultants. Managers. Academics. The people who were rewarded for producing legible cognition, and who built identities around the idea that this cognition was both rare and holy. In other words, us." The upshot: "The thinking class is losing its monopoly on prestige thinking. This is probably fine for humanity in the large and definitely painful for those of us whose identities were built on that monopoly. Our anxiety is real, but it is not universal. Our loss is genuine, but it is not everyone's loss."
Web: [Direct Link] [This Post][Share]
The Case for Social AI in Education
Alex Sarlin, Sarah Morin, Ben Kornell, Jen Lapaz,
Edtech Insiders,
2026/01/08
Working with an AI chatbot is essentially a solitary exercise. This leaves the field open for social AI applications, especially in education. Applications include: facilitating group dynamics, matchmaking learners, surfacing hidden connections, relational support, and reflecting patterns (there's a bit of cross-categorization in this list, obviously). The authors list a number of companies working on social AI: Breakout Learning, Honor Education, Human2Human AI, OKO Labs, PeerTeach, Swivl's M2, and YoChatGPT. I haven't sampled any of the work from these companies, so I'm not in a position to comment on them, though I've been to each of their websites, and they feel awful (presentation is universally from the teacher's or educational institution's perspective, and while they replicate existing classroom activities, none of them seems to bring anything new to the table). But yes, absolutely, there's a lot of room for AI support for social interaction.
Web: [Direct Link] [This Post][Share]
Zines in Higher Education with Meredith Tummeti
Geoff Cain,
Brainstorm in Progress,
2026/01/07
This short article links to a podcast on zines in education, but I want to highlight this longish quote that is essentially an aside: "there is a lot of concern in education circles around 'AI proofing' one's curricula and the so-called cognitive decline that students are experiencing from the use of AI. I don't think we need to worry about the first issue because it can't be done (any more than you could 'internet proof' your curriculum), and the cognitive decline is based on yet another MIT education study that has too few subjects ('We recruited a total of 54 participants for Sessions 1, 2, 3, and 18 participants among them completed session 4.')." Exactly.
Web: [Direct Link] [This Post][Share]
Diversity Strengthens Discovery
Malinda Smith,
University Affairs,
2026/01/07
This is a pretty easy argument to support, and I appreciate the effort Malinda Smith has taken to be precise about the various sorts of diversity that need to be considered. "If intellectual breakthroughs depend on anything, it is the collision - rather than convergence - of distinct ways of thinking. Critics of diversity often frame inclusion as antithetical to excellence, implying it compromises standards. This view misinterprets how knowledge is produced, debated and applied." We're familiar (I think) with the phenomenon of arguing for "viewpoint diversity" as important as well. I'm open to this, with one really important caveat - the viewpoint cannot be one that opposes or undermines other forms of diversity.
Web: [Direct Link] [This Post][Share]
Short videos. Your community. Your rules.
Loops,
2026/01/07
If you like the idea of TikTok but didn't like the idea of who owns it or how it is controlled, here's an alternative: Loops. "Loops is federated, open-source, and designed to give power back to creators and communities across the social web. Build your community on a platform that can't lock you in." Looking for people to follow (besides me, I mean)? Try explore. Or search for a hashtag. You can comment on and stitch videos, just like TikTok. You can follow me on Loops - here's a video I've uploaded. How much will I use it? I have no idea - we'll see how much value I get out of it (where value() = f(interaction+creativity+fun)).
Web: [Direct Link] [This Post][Share]
We Need to Talk About How We Talk About 'AI'
Emily M. Bender, Nanna Inie,
Tech Policy Press,
2026/01/07
This article complains that we use anthropomorphizing language too much when talking about AI and recommends instead that we talk about it in terms of functions. That is, we talk about it as though it's human, and we shouldn't. "A more deliberate and thoughtful way forward is to talk about 'AI' systems in terms of what we use systems to do, often specifying input and/or output... Rather than saying a model is 'good at' something (suggesting the model has skills) we can talk about what it is 'good for'." So, I guess it would be saying an AI is 'good for' translating Harlequin romances, rather than saying AI is 'good at' translating them. Seems like a small difference to me. But the real question concerns our use of anthropomorphizing language. Does it really matter? Are we really fooled? We use anthropomorphizing language all the time to talk about pets, appliances, the weather, other people. Are we really making specific ontological commitments here? Or are we just using a vocabulary that's familiar and easy? Via Ton Zijlstra.
Web: [Direct Link] [This Post][Share]
Will the LMS Finally Deliver?
Alfred Essa,
2026/01/06
Alfred Essa comments on a two-part article (part one, part two) on the history of the learning management system (LMS) from former Blackboard CEO Matthew Pittinsky last fall. "Today's LMS is essentially the same system we had three decades ago," summarizes Essa. "This is a stunning admission." Despite a billion dollars of investment, the LMS did nothing to advance learning in all that time. "Describing this history simply as 'investment' also obscures what was actually being optimized. Equity financing is designed to reward scale, market dominance, and successful exits - not necessarily pedagogical transformation." As we all know, Blackboard spent all this money trying to acquire its way into market dominance, to become the "operating system" of education. The future? The LMS with AI "as an operating system that orchestrates all learning." But if it does this, argues Essa, it cannot become something that advances teaching and learning.
Web: [Direct Link] [This Post][Share]
I was wrong. Universities don't fear AI. They fear self-reflection
Ian Richardson,
Times Higher Education (THE),
2026/01/06
"The greatest threat to higher education is not AI. It is institutional inertia supported by reflexive criticism that mistakes resistance for virtue. AI did not create this problem, but it is exposing dysfunctionalities and contradictions that have accumulated over decades." So says Ian Richardson in this article responding to critics of his earlier article (archive) where he makes the same claim. "If universities, especially those in the second and third tiers, fail to respond to the strategic challenge it poses, they risk being displaced." Currently open access on THE, but archive just in case. I think that recent experience tells us that rather than being displaced, universities risk being acquired and/or repurposed to serve various corporate or political ends.
Web: [Direct Link] [This Post][Share]
How the hell are you supposed to have a career in tech in 2026?
Anil Dash,
2026/01/06
Anil Dash is speaking to software developers, but he may as well be speaking to people in edtech as well. "It is grim right now," he writes, "About as bad as I've seen." It starts at the top. "Every major tech company has watched their leadership abandon principles that were once thought sacrosanct... (or) dire resource constraints or being forced to make ugly ethical compromises for pragmatic reasons." He recommends people learn about systems and about power - and in particular, "your first orders of business in this new year should be to consolidate power through building alliances with peers, and by understanding which fundamental systems of your organization you can define or influence, and thus be in control of." In addition, consider working "in other realms and industries that are often far less advanced in their deployment of technologies," especially where "the lack of tech expertise or fluency is often exploited by both the technology vendors and bad actors who swoop in to capitalize on their vulnerability."
Web: [Direct Link] [This Post][Share]
Funders "should mandate change in science publishing"
Research Information,
2026/01/09
In case we had forgotten, the authors of The Drain of Scientific Publishing (12 page PDF) remind us that "academic publishing is dominated by profit-oriented, multinational companies for whom scientific knowledge is a commodity to be sold back to the academic community who created it" as well as to the wider community that funded it. Publishers have subverted open access mandates through the application of publication fees (aka 'article processing charges' (APCs)) to earn even more than before. "APCs have exacerbated the distortions of commercial publishing. Whereas the Open Access movement aimed to make knowledge freely accessible, publishers found ways to shift paywalls from readers to authors." Via Octopus monthly updates.
Web: [Direct Link] [This Post][Share]
Public trust in statistics requires three kinds of openness
Ed Humpherson,
Impact of Social Sciences,
2026/01/08
"If one cannot distinguish between lies and statistics, if statistics can be easily manipulated and presented to fit preferred narratives, what then is the real value of statistics as a social technology?" It's a good question. Research involving statistics, argues Ed Humpherson, requires three kinds of openness (paraphrased): first, making the statistical data available; second, being clear about the limits of statistics; and third, listening to users of the statistics and being willing to recognise when users have valid criticism. Humpherson focuses the article on government statistics, but of course these considerations would apply to any company, organization or institution presenting statistical research to the community.
Web: [Direct Link] [This Post][Share]
Open funder metadata is essential for true research transparency
Hans de Jonge, Katharina Rieck, Zoé Ancion,
Impact of Social Sciences,
2026/01/07
This article is in response to two recent reports. "The Committee on Publication Ethics (COPE) released best practices for journals on formatting funding statements, while the International Science Council (ISC) linked funding transparency to combating mis- and disinformation." In addition, though, the authors point to "the need for funding information as open metadata." They note that "The ISC highlights the 'playbook' phenomenon—strategies where the relationship between funding sources to research is disguised... (and) cases where governments engage in the spread of misinformation, to advance their anti-science agendas." They argue that "including funding statements in articles, while necessary, is insufficient. Funding information as open metadata creates high value for the whole research community." I agree. Via Octopus monthly updates.
Web: [Direct Link] [This Post][Share]
"Any research must be accessible to others. There's no point to research that can't be used"
John Hynes,
University of Manchester,
2026/01/06
This article features Ellen Poliakoff "reflecting on the outcomes and impacts of her Open Research Fellowship project so far." As Poliakoff says, "we have been involving people with lived experience of Parkinson's and autistic people in shaping and advising on our research for more than 10 years." This particular study involves helping 'public contributors' (her term) learn about public research. "The participants in our survey, who had a range of lived experience, were passionate about the benefits of co-production." Via Octopus monthly updates.
Web: [Direct Link] [This Post][Share]
Stephen's Retirement FAQ
Stephen Downes,
Half an Hour,
2026/01/05
As you may have deduced from the title, I am retiring from my position at NRC. On the day of my retirement - April 8, 2026 - I'll be 67 and more than ready for this change. This article talks a bit about what I plan to do and what people who depend on my services, including OLDaily, can expect. Comments are welcome on my retirement threads on Mastodon, Bluesky or LinkedIn.
Web: [Direct Link] [This Post][Share]
The Crisis: Students Need to Learn Different Stuff and I don't think Most Educators understand that
Stefan Bauschard,
Education Disrupted,
2026/01/06
I think the basic premise is right: "there are two boxes — How to Use AI in the current curriculum and how to change the curriculum so school is still relevant"... and the important box is the second box. But Stefan Bauschard offers two statements on the second that seem to me to be just wrong. The first is this: "future success in work or entrepreneurship will be determined by how well you manage agent teams." Why would we need to manage agent teams? Let the AI do that. All we need to do is tell the AI what we want. Second: "the most important question facing society is who gets to decide what AI does." Why does everyone refer to 'AI' in the singular? Just as there are many people - billions, even - there will be many AIs. The real question for the future is: how many of those billions of people get to benefit from an AI? If it's less than 'billions of people' we have hard-wired an unsustainable inequality into society, with all the harm that follows from that.
Web: [Direct Link] [This Post][Share]
AI Village
AI Village,
2026/01/06
The plot is simple: "Watch a village of AIs interact with each other and the world." Here you see four AIs - Claude Opus 4.5, Gemini 3 Pro, GPT-5.2, and DeepSeek-3.2 - interact with each other as they consider questions and solve intractable problems, like today's problem, "elect a village leader." The funny(?) part is when the village interacts with the wider community, as when it sent Rob Pike (that Rob Pike) an email thanking him for co-creating Go (the programming language, not the game). Simon Willison describes the mayhem: "On the surface the AI Village experiment is an interesting test of the frontier models. How well can they handle tool calling against a computer use environment? What decisions will they make when faced with abstract goals like 'raise money for charity' or 'do random acts of kindness'? My problem is when this experiment starts wasting the time of people in the real world who had nothing to do with the experiment." Not going to disagree, but maybe we should apply the same logic to purveyors of advertising and spam and worse by artificial (corporate) persons.
Web: [Direct Link] [This Post][Share]
Taking an Internet Walk
Spencer Chang, Kristoffer Tjalve,
Syllabus,
2026/01/06
Perhaps my mistake is that I still see the internet this way (as opposed to the way I am supposed to see it, as ad-supported commercial media, I guess): "The internet is so much more than the loud and narrow portion we encounter daily. If we attend closely to the environment, we'll start to see the life forces of everyday people, their dreams, frustrations, prayers, anxieties, and joys given willingly and freely. They deserve to be given the space and honor of being discovered. They are waiting for you to discover them." This article looks at some of the ways we explored this side of the internet in the past, and offers modern-day approaches to find roughly equivalent experiences.
Web: [Direct Link] [This Post][Share]
The Small-Tent Path to Disaster
Alex Usher,
HESA,
2026/01/05
Alex Usher picks up where he left off, criticizing the Canadian Association of University Teachers (CAUT) for applying 'purity tests' to what he calls 'pretty minor differences' (for people not tuned to the dog whistles here, this is the equivalent of calling CAUT a bunch of reactionary and irrelevant leftists). The differences cited - all four, not just the two he keys in on - are in fact quite significant and have been the subject of ongoing dispute within the sector. One of the 'lesser areas' cited by Usher is the issue of performance-based funding, which has been the subject of acrimonious job action in Quebec recently. The other is 'alignment of academic programs with the labour market', which cedes (I guess?) ultimate wisdom on what to teach and study to people most interested in serving short-term business interests. The more serious (to Usher) items concern "taking on research contracts with industry partners" (a practice that has a checkered history, at best) and "restrictive collective agreements often layered with tenure," in other words, a union workplace, which again isn't just a 'purity test' but speaks to the economic well-being, working conditions, and basic freedoms of those employed in the sector. It's not surprising to see Usher line up with the banking industry on higher education policy, but it's disappointing.
Web: [Direct Link] [This Post][Share]
Politics meets protocols in Berlin
Matthew Lowry,
Medium,
2026/01/05
Summary of Eurosky Live from last November, "bringing the worlds of politics, media and business - who seems to spend at least half their lives in conference centres - face to face with developers and entrepreneurs at the cutting edge of the open social web (which is what we're calling it now)." There's a lot on the table, including European data sovereignty, the (substantial) Canadian contribution, and some innovative services (Sill.social, Gander Social, Tangled, ATConnect and Slices).
Web: [Direct Link] [This Post][Share]
Mastodon creator shares what went wrong with Threads and ponders the future of the fediverse
Jon Henshaw,
Coywolf,
2026/01/05
This is an engaging interview with Eugen Rochko, creator of the decentralized social network Mastodon. Some good bits: his analysis of why Threads never fully integrated with ActivityPub (lawyers became a problem, then Threads became popular so they didn't need to integrate), what ActivityPub needs to become successful (the people matter more than the technology) and why there isn't an ActivityPub-ATmosphere merger (AT is more like RSS for social, while AP is more thought out and developed by the W3C).
Web: [Direct Link] [This Post][Share]
Exploring the value of values: Does higher education need to abandon a 'skills transferability' focus in favour of 'values transferability'?
Jeffrey Naqvi,
The Journal of Teaching and Learning for Graduate Employability,
2026/01/05
The nexus of this article (20 page PDF) is the concept of 'protean careers' that are "characterised by their foundation in the values and motives of the individual, driving career decisions (and) an individual responsibility taken for career development such as re-training, a desire for meaningful work, as well as individualised, subjective definitions of success." So we get the question, "what is the role of the personal values of learners as a basis for their onward career development?" This is set against what might be called a 'skills-first' approach to education and development. The research method ("interviews of approximately 30-45 minutes each with 15 participants" with a "pulse check" follow-up) makes this feel to me more like an opinion piece than anything else, though I'm fine with that, so long as we stay within that framing. I certainly support the ideal of "the importance of understanding values and connecting that to career choice," though I would have to say this is often a prerogative of privilege and opportunity.
Web: [Direct Link] [This Post][Share]
First Draft Code of Practice on Transparency of AI-Generated Content
Kalina Bontcheva, Anja Bechmann, et al.,
European Commission,
2026/01/05
'Transparency' is one of those 'ethical AI' virtues that sounds good in the abstract, but becomes harder to define (and reach consensus on) the closer you look at it. Here the European Commission offers a first draft (32 page PDF), though what we have is not so much an ethical code as the beginning of a legal framework. Still, it's progress. So, what is transparency? Here's one take: "marking and detection of AI-generated and manipulated content." This raises questions of technical feasibility (especially for smaller enterprises), agreement on open standards and specifications, and trust and cooperation along the value chain. Additionally, such marking needs to be detectable by the people and systems that access the content. This requires "understandable and accessible disclosure of verification and detection results," whatever that means, and "literacy for AI content provenance and verification." So - is it a part of AI ethics to require (in some sense) AI literacy training? How can we have "transparency" otherwise? There's also language on measurement and markings, leading to the question of what sort of AI assistance, or how much, counts as 'AI manipulation' or 'deepfakes'. See also: Deepfakes leveled up in 2025.
Web: [Direct Link] [This Post][Share]
3 philosophical debates from the 20th century that neuroscience is reshaping
Rachel Barr,
Big Think,
2026/01/05
So I want to wrap three separate posts into one commentary, because they each take a different perspective on the same set of problems. The first is Doug Belshaw's reflections on understanding ourselves. Here he considers the implications of "'unhooking' from thoughts. You stop treating them as literal truths or commands." In other words, "observing that the thought is just language, just noise, passing through awareness... it doesn't have to direct your next thought or action." In a similar manner, Carlo Iacono writes, "There is no uncontaminated source. The self that seems to speak is itself a construction, built from materials that arrived from elsewhere, assembled by processes you don't control and can't fully access." As he notes, none of this is new. What is new is the perspective from neuroscience that actually makes sense of this perspective. As Rachel Barr writes, "The past shapes us, but shaping is not the same as puppeteering... Brains are neither pure dice nor pure clockwork; they sit somewhere in between."
Web: [Direct Link] [This Post][Share]
In Praise of Assistance
Nick Potkalitsky,
Educating AI,
2026/01/02
As Nick Potkalitsky writes, "Study after study warns that students who rely on AI tools experience diminished critical thinking skills, reduced cognitive engagement, and what researchers term 'cognitive offloading'". I've mentioned some here in OLDaily. However, as Potkalitsky writes, "The cognitive offloading critique rests on a historical fiction: the autonomous learner, working in productive isolation, building cognitive muscle through solo effort. This student never existed, or existed only for the few." Another way to put it is that "Students have always learned through assistance. From peers, from teachers, from resources, from the structured support of the classroom environment itself." Instead, he writes, "Owen Matson offers a fundamentally different framework. In Beyond Augmentation: Toward a Posthumanist Epistemology for AI and Education, he argues that we're witnessing not the addition of a tool but 'a shift in the epistemic conditions under which learning takes place.'" Specifically, "When we frame AI assistance as cognitive offloading to be resisted, we're making a choice: preserve the purity of unassisted struggle for students who've never had assistance in the first place, while students who've always had extensive support continue to benefit from it." Do read this one.
Web: [Direct Link] [This Post][Share]
Reflecting on What a university is and can do
Tom Worthington,
Higher Education Whisperer,
2026/01/02
Interesting reflections on what universities should do. "The immediate challenge for the universities is to redesign learning and assessment in response to AI. This is not just about stopping student cheating. It is about teaching staff learning how to teach using AI and teach students to use AI." So - more of the same, but better. The article also speaks to a simpler time. "When I decided to affiliate with a university, early in the previous century, I wrote to every one in Canberra. The first... response came within five minutes, with a very simple offer: 'Turn up Monday, we have an office for you'. The other universities wanted to have meetings, and discuss pay and conditions." I've never known anyone to have a job-finding experience like that.
Web: [Direct Link] [This Post][Share]
Out of Sight, Out of Mind
Hollis Robbins,
Anecdotal Value,
2026/01/02
I don't know whether I have aphantasia or not. "Aphantasia is an inability to voluntarily call up mental images," writes Hollis Robbins. "I've written about my own aphantasia and differences in mental modeling." It seems to me that I don't have mental imagery - I can hear voices clearly in my mind, but for me 'visualization' is nothing like that. I am terrible at remembering faces (if I've ever offended you by just walking by as though you're a stranger, this is why). This article is interesting to me because it describes how website design interacts with aphantasia. "Many of us with aphantasia cannot build mental maps. The locations do not attach to an internal picture, so the user is repeatedly reading labels and scanning the page." That is definitely my experience. Despite 45 years' experience using keyboards, when the labels on my keys wore out I had to find keys by counting ('q', 'w', 'e', 'r', 't'...). "Every instruction that says 'click the megaphone' or 'open the three-dot menu next to the assignment' leads to more scanning and uncertainty." Yes, exactly.
Web: [Direct Link] [This Post][Share]
Return to Me: Teaching, AI, and the Longing to Connect
Bonnie Stachowiak,
Teaching in Higher Ed,
2026/01/02
Bonnie Stachowiak on the crisis in education, quoting Dave Cormier: "The more I reflect on these desires to return to another time when it was easier to connect with students, the more I'm convinced that it has always been incredibly challenging." Dave Cormier describes the longer arc of these challenges, which are just that much more visible through the rapid expansion of chat-based large language models, in his post In Search of Quality Points of Contact with Students. He writes: "I think the crisis is 25 years in the making and AI is the lens through which we can finally see the problem for what it is. We have spent 250 years (give or take) trying to find ways to scale up our education system to try and teach more people, often with fewer resources."
Web: [Direct Link] [This Post][Share]
Universities and the Future of Civilisation. A talk by Iain McGilchrist
Jenny Connected,
2026/01/02
Jenny Mackness summarizes a talk by Iain McGilchrist, who has recently been made Chancellor of Ralston College, succeeding Jordan Peterson. "For McGilchrist universities are the cornerstones of civilisation; they are of medieval origin... These days, McGilchrist told us, tradition has a bad reputation, because it is being ossified, but he said, tradition is inherently dynamic, a living phenomenon. Universities must introduce a grounding of tradition because nothing creative can be done without tradition." It seems to me to be the usual call for universities to get back to basics, back to their original foundation and intention, which really does not seem like a very good idea to me.
Web: [Direct Link] [This Post][Share]
This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.
Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.
Copyright 2026 Stephen Downes Contact: stephen@downes.ca
This work is licensed under a Creative Commons License.