Ethics and Regulation of Human Brain Organoid Research: Recommendations from the Asia Pacific Neuroethics Working Group
Shu Ishida, et al.,
Asian Bioethics Review,
2026/05/01
I'm not sure where this will fit into the definition of learning technology exactly, but there has to be some overlap, and it's better to be thinking of these issues before the fact rather than while we're in the middle of them. "Human brain organoids (HBOs) are three-dimensional structures derived from human stem cells that model aspects of brain development." They're not conscious, sentient, or capable of experience the way we define it, but the ethical issues are still numerous, ranging from privacy (regarding stem cell donors, for example) to commercialization and application (such as the transplantation of human brain cells into animals). This paper is a good overview of the ethical issues that may arise, with due regard for public misperception, cultural variation, and future developments. Image: PubMedCentral.
Web: [Direct Link] [This Post][Share]
All we’re doing is reading today
Emily Zerrenner,
ACRLog,
2026/05/01
This effort goes about the way you would expect it to: "(the) entire class plan was to bring down carts of books related to the class topic, have the students pick something they were interested in, and then read for about an hour." So they read, they fidgeted, and in the end, everyone marvelled at how great an hour of reading was. And sure, I get it. But what struck me is that when I was in school I used to get into trouble for reading in the classroom. The books were apparently a distraction from the much more important (and oh so boring) stuff happening at the front of the room. For the rest of my life, I've always had something to read with me (usually digital these days) under the desk. It has always been one of the differences between me and the people who just did what they were told.
Web: [Direct Link] [This Post][Share]
AI, tractors, and the productivity paradox
Sachin,
Technically,
2026/05/01
Good article that makes the following case: "If AI is so impactful, why isn't it showing up in the productivity stats? The Solow paradox answer is that firms haven't reorganized yet. The computer took nearly a decade to show up in productivity numbers because the organizational work - flattening hierarchies, redrawing workflows, retraining workers, rebuilding integration machinery around the new technology - took nearly a decade to do." I would argue that this is also why we are not seeing 'learning gains' (whatever those are) as a result of AI intervention. The necessary reorganization and rethinking of methods and pedagogy hasn't happened yet.
Web: [Direct Link] [This Post][Share]
Beyond free courses and resources: 4 takeaways about the future (or the present) of open education
Jackie Bucio,
Medium, Creative Commons: We Like to Share,
2026/05/01
This post is a mixture of reflections from an ICDE conference last November, but the main message centers around an alternative vision for AI in education: "This vision moves beyond simply deploying AI, to focusing on its ethical and innovative application in the very design of two-way-learning experiences." I agree, and like the author, I find that it's my experience with MOOCs that makes this clear. "Learners are not passive recipients of technology but active agents who bend platforms to their will... These 'hacks' expose a critical gap between how educational technology is designed and how it is actually used. They indicate that effective learner-centric design requires observing and empowering user behavior, not just building more (AI) features just because we can."
Web: [Direct Link] [This Post][Share]
Why Can’t OER Be All in One Place?
Medium, Creative Commons: We Like to Share,
2026/05/01
The answer to the question posed in the title is pretty self-evident: funding and quality issues. It takes a lot of money to host a single centralized data repository, and it's something that needs constant vetting and curation for inaccurate content, out-of-date content, and these days, AI slop. Efforts well known from the past - MERLOT and OER Commons - have faltered and now struggle with obsolescence. "Therefore," writes James Thibeault, "smaller repositories, or decentralized models, that focus on certain specialties are not only more attainable, but they can also host far better OER to the public."
Web: [Direct Link] [This Post][Share]
Open Data Structures
Pat Morin,
2026/05/01
Maybe you don't need the information in this book. But if you do any serious work in development and programming, including analytics and graphs, then the contents should be second nature to you, and if they're not, you need this book. "Open Data Structures covers the implementation and analysis of data structures for sequences (lists), queues, priority queues, unordered dictionaries, ordered dictionaries, and graphs." What I like is its assurance that the "data structures in this book are all fast, practical, and have provably good running times. All data structures are rigorously analyzed and implemented in Java and C++." This makes it a good, reliable source not only for humans but also for generative LLMs used to encode these data structures.
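To give a flavour of the sort of thing the book covers, here is a minimal Python sketch (my own, not the book's; its implementations are in Java and C++) of an array-backed list that doubles its storage when full, the classic example of amortized O(1) appends:

```python
# An array-backed list (the book calls this kind of structure an ArrayStack)
# that doubles its backing array when full, giving amortized O(1) appends.
class ArrayStack:
    def __init__(self):
        self._a = [None]   # backing array
        self._n = 0        # number of elements actually stored

    def _resize(self):
        # Allocate a new array of roughly twice the size and copy elements over.
        b = [None] * max(1, 2 * self._n)
        for i in range(self._n):
            b[i] = self._a[i]
        self._a = b

    def append(self, x):
        if self._n == len(self._a):
            self._resize()  # rare enough that the copying cost amortizes away
        self._a[self._n] = x
        self._n += 1

    def get(self, i):
        if not 0 <= i < self._n:
            raise IndexError(i)
        return self._a[i]

s = ArrayStack()
for word in "open data structures".split():
    s.append(word)
print([s.get(i) for i in range(3)])  # ['open', 'data', 'structures']
```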
Web: [Direct Link] [This Post][Share]
There Will Be a Scientific Theory of Deep Learning
Jamie Simon, et al.,
arXiv.org,
2026/05/01
I will be the first to admit that it would take me weeks - maybe more - to comprehend this paper (41-page PDF) in detail, but it surely seems like an important statement, and I wonder whether it could be applied to learning in general. 'Deep learning' is the term used to describe multi-layered neural networks, and these form the basis (at a much smaller scale) for things like large language models and (arguably) human neural networks. The authors argue that "there will be a scientific theory of deep learning; that we can see pieces of this theory starting to emerge; and that this theory will take the form of a mechanics of the learning process." They suggest, "The measurability of deep learning makes observation and empiricism a particularly fruitful approach, since experimentation can be iterated on quickly, while revealing mathematically simple relations and structure in trained models." This is based on the manipulation of what they call "numerical knobs," termed "hyperparameters," which include "optimization hyperparameters such as the learning rate, batch size, momentum, and initialization variance, as well as architecture hyperparameters such as width and depth." This opens the possibility of universality in representations: "It has been shown that networks trained to solve different tasks learn similar representations across training datasets." A combination of top-down hypotheses and empirical observation may well yield the theory of deep learning the authors are looking for.
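For anyone unsure what these 'knobs' look like in practice, here is a toy Python sketch (mine, not the paper's) showing where learning rate, batch size and initialization variance enter an ordinary mini-batch training loop:

```python
# Toy illustration of hyperparameters as "numerical knobs". This is not from the
# paper; it just shows where the knobs sit in an ordinary mini-batch SGD loop.
import numpy as np

rng = np.random.default_rng(0)

# The knobs an empirical theory of deep learning would study.
learning_rate = 0.1
batch_size = 16
init_variance = 0.01

# Synthetic regression data: y = X @ w_true + noise.
X = rng.normal(size=(256, 8))
w_true = rng.normal(size=8)
y = X @ w_true + 0.1 * rng.normal(size=256)

# Initialize weights with the chosen variance, then run mini-batch SGD.
w = rng.normal(scale=np.sqrt(init_variance), size=8)
for step in range(200):
    idx = rng.choice(len(X), size=batch_size, replace=False)
    grad = 2 * X[idx].T @ (X[idx] @ w - y[idx]) / batch_size
    w -= learning_rate * grad

print(float(np.mean((X @ w - y) ** 2)))  # loss shrinks; change the knobs and it changes too
```

Turning those three numbers up or down changes how, and whether, the model converges, which is exactly the kind of measurable regularity the authors want a theory to explain.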
Web: [Direct Link] [This Post][Share]
Comparing Decentralized Identifiers (DID) Methods
Lymah,
DEV Community,
2026/05/01
This is something I'm referring to as I develop distributed identity for CList. "In Web5, a user's Decentralized Identifier (DID) links their identity to their data via Decentralized Web Nodes (DWNs). This removes users off from centralized data storage, granting them full control over their data. DIDs are unique identifiers that users can create and control without relying on third parties. DIDs use [cryptographic] techniques to demonstrate ownership." What's important is that "Different DID methods implement unique mechanisms for creating, updating, and resolving DIDs." There are trade-offs among the mechanisms. I'm starting with DID:web but building a migration path to DID:dht if the CList ecosystem ever matures.
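To make the resolution trade-off concrete, here's a rough Python sketch (my own reading of the did:web method, not anything from the article, and nothing CList-specific) of how a did:web identifier maps to the URL where its DID document is hosted:

```python
# Minimal sketch of did:web resolution: the method-specific identifier is the
# host (plus optional path segments separated by ':'), and the DID document is
# fetched over HTTPS from /.well-known/did.json or from the given path.
from urllib.parse import unquote

def did_web_to_url(did: str) -> str:
    prefix = "did:web:"
    if not did.startswith(prefix):
        raise ValueError("not a did:web identifier")
    parts = did[len(prefix):].split(":")
    host = unquote(parts[0])        # a port, if any, is percent-encoded as %3A
    path = "/".join(parts[1:])
    if path:
        return f"https://{host}/{path}/did.json"
    return f"https://{host}/.well-known/did.json"

print(did_web_to_url("did:web:example.com"))              # https://example.com/.well-known/did.json
print(did_web_to_url("did:web:example.com:users:alice"))  # https://example.com/users/alice/did.json
```

The appeal of did:web is exactly this simplicity: resolution is just an HTTPS fetch. The cost is that the identifier is only as durable and trustworthy as the domain behind it, which is why a migration path to something like did:dht matters.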
Web: [Direct Link] [This Post][Share]
There are many ways to read OLDaily; pick whatever works best for you.
This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.
Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.
Copyright 2026 Stephen Downes. Contact: stephen@downes.ca
This work is licensed under a Creative Commons License.