Signals and Field Notes #9
Mark Oehlert,
Relentlessly Curious and Deeply Engaged,
2026/03/30
The story I want to focus on is the first in this edition of Mark Oehlert's newsletter and ends at the first 'Upgrade to Paid' notification (I mean, you can keep reading, but I am not commenting on any of that). He references a post from Daniel Käfer on LinkedIn three days ago asserting "Microsoft just eliminated its entire senior HR leadership in a single announcement." I would have chosen a different verb. But what's more interesting isn't the fate of senior leadership, it's how the HR function itself has been reorganized. "The function is being rebuilt around four new pillars: skills intelligence, AI-enabled workforce planning, product-aligned people support, and culture." Now these might be seen as related to skills taxonomies (see the post on that today) but probably aren't, because "the pace of change is exceeding what our current operating model and decision rhythms were built for. We're no longer being asked to scale for stability; we need to scale for adaptability."
Web: [Direct Link] [This Post][Share]
Structural Blindness: Why Neither Humans Nor AI Reason as Well as We Think
Steve Hargadon,
2026/03/30
I'm always wary of arguments of the form "here's something humans can do that AIs cannot" because they usually over-represent human capacities and under-represent those of the AIs. Here Steve Hargadon cannot be accused of the former. In fact, I think he overstates the view that humans "are not naturally good reasoners", though he usefully describes institutions we've established to countervail that failing: the naming of logical fallacies, the legal system of trial and evidence, the scientific method, and government systems of checks and balances. Then, based on his experience with Grok, he argues AI does not have access to the same sort of resistance. Well - maybe not in Grok, which was explicitly designed to subvert reason and evidence. Responsible adults should not use it. But I see doubt, reason and evidence in AI all the time. Sometimes - like humans - AI needs to be prompted to exercise these capacities. But they are not wholly absent. Image: Dropbox 404.
Web: [Direct Link] [This Post][Share]
What Is a Skills Taxonomy? Practical Guide to Skill Clarity
Venudhar Bhatt,
Upside Learning,
2026/03/30
This is much longer than Upside Learning's usual content - read into that what you may. It still makes some pretty good points. Basically, the "$300 Billion Problem" being addressed here concerns how we name, describe and measure skills. This becomes all the more complex as skills (and therefore skills definitions) continue to change. I'm not sure I would even want to try managing this by means of a skills taxonomy, just because there's so much upfront effort required, and even a fine-grained taxonomy is a blunt instrument when employment gets any more complex than basic machine operations. Still, if that's your preferred route, this article recommends starting at Kirkpatrick Level 3 - identifying the actual changes in performance that learning and training are expected to provide.
Web: [Direct Link] [This Post][Share]
New Publication: Documenting & Disclosing AI
Lance Eaton,
AI + Education = Simplified,
2026/03/30
This post is a discussion by the authors of a new resource, posted on EDUCAUSE, describing a framework for transparent generative AI use in higher education. You can check it out for yourself; I personally found the four-level structure too limiting. The overall concern, says Carol Damm, is that AI has caused "an inherent lack of trust of anything that we see and read," and "transparency in how an author uses AI is a powerful mechanism for rebuilding that trust." That may be optimistic, but it's still useful to encourage "the possibility of interoperability and record-keeping among AI tools," if only because provenance in everything is what's going to matter a lot more in the future. But we really need to describe AI use more precisely than what is depicted here.
Web: [Direct Link] [This Post][Share]
Local Impact, Praxis, and Digital Overwhelm
Ann,
All Things Pedagogical,
2026/03/30
Ann (just Ann, no last name that I can find) discusses a question posted by Karen Costa on LinkedIn: "How do you rationalize sharing your work and words with the world amidst this chaos?" I like the response: "Maybe the answer to some of this is soup. Sure LinkedIn is not going to like supporting a post on soup in its algorithm, but I am telling you the ROI of soup is higher than any pyramid scheme... So if you can, go make soup and share it with someone, there is always someone who needs soup. Soup is the local impact we need right now." That's how I feel as well. I'll share to LinkedIn because some people are there who could maybe use what I have to offer, and as for the rest of the noise and pollution on that website, well, I also offer some quieter nooks where people can still feel caught up and in touch.
Web: [Direct Link] [This Post][Share]
More than a game: some thoughts on David Wiley’s “Random Audits as a Scalable Deterrent to Cheating”
Jon Dron's home page,
2026/03/30
Jon Dron says a bunch of nice things about David Wiley's proposal to employ random audits as a deterrent against cheating and then says "for all that is good about it, I think it's almost exactly the wrong idea, though I have an idea to save it." Wiley's proposal is "far from infallible, because few of us are rational game players." The ranks of the unreasonably wealthy are filled with those who rolled the dice despite the risk. Moreover, Wiley's proposal "is a very much stronger signal of the authority and control that the teacher/institution has over the student than the conventional process." Ultimately, "it doesn't deal with or consider the reasons that students cheat in the first place: it's just a response to the fact that some do." Instead, Dron recommends combining assessment from a range of different courses. "If done with commitment, it could largely decouple learning and assessment because instrumental revision would not be an option." Dron's approach is (as he admits) too structured for occasional or informal learning. But there is merit to the decoupling of learning and assessment - not that traditional universities would ever be keen on that.
Web: [Direct Link] [This Post][Share]
Endgame for the Open Web
Anil Dash,
2026/03/30
The argument here is that "the hectobillionaires have begun their final assault on the last, best parts of what's still open, and likely won't rest until they've either brought all of the independent and noncommercial parts of the Internet under their control, or destroyed them." It is supported with a series of examples documenting that assault, from the abuse of open APIs to hammering by ill-behaved AI bots to closed platforms for things like podcasts. We've seen evidence of all the things Anil Dash lists here, but I think there's a lack of specificity about both the attackers and those they attack. Many of the abusive AI crawlers, for example, come not from the billionaires but from much smaller players. And while it's harder for independent creators to make a living and "go without winning awards or the other trappings of big media," that's always been the case. I'm not saying that Dash doesn't have a point - he does - but that it's more nuanced than presented here.
Web: [Direct Link] [This Post][Share]
Who (Or What) Filled Out Your Course Evaluation?
Marc Watkins,
Rhetorica,
2026/03/30
This article has a lot of generic content about how AI is being used more and more, and how people (including especially instructors) will have to be responsible for AI-generated content created on their behalf. The more interesting remark, though, concerns the possibility of students using AI-enabled browsers to complete teacher evaluation forms. Not much is added to this, but it shows that we won't be able to just block AI-based submissions. They will be indistinguishable from human-typed content. So I agree with the assessment that "we're going to need to design procedures that mimic banking-style transactions that show chain-of-custody actions that services in the financial industry use to validate your credit or debit card."
Web: [Direct Link] [This Post][Share]
There are many ways to read OLDaily; pick whatever works best for you:
This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.
Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.
Copyright 2026 Stephen Downes Contact: stephen@downes.ca
This work is licensed under a Creative Commons License.