
OLDaily

Welcome to Online Learning Daily, your best source for news and commentary about learning technology, new media, and related topics.
100% human-authored

The Epistemology of Logic
Ben Martin, Cambridge University Press, 2025/10/29



This item popped up in my RSS feed today and, as happens surprisingly often, it ties together a bunch of threads from my recent experiences on the internet. Here's what the paper (80-page PDF) discusses: how does logic lead us to the conclusions it does? "The starting point of formal logic (is) that an argument is good in virtue of its form (in some sense)," writes Ben Martin. But does the form make the argument good, or is it just how we recognize that it's good? How do logicians actually draw their conclusions? This ties into the discussion of Colors and Numbers (also in this issue) on Mastodon. Does a form (or a pattern) exist in such a way as to be able to cause events, such as the formation of a belief? If different people see the world differently, do different things exist? It also speaks to my response on LinkedIn to Patrick Dempsey as he trots out the old canard that "thinking skills cannot readily be separated from one subject matter and applied to other subject matters." But that's exactly what patterns and forms do - they let us reason using common forms and patterns across multiple disciplines. That's what makes these patterns relevant, and other patterns (that we nonetheless often ask children to memorize) irrelevant. As Martin says, "the mechanisms by which logics are chosen are those we are accustomed to from the sciences: predictive success, explanatory power, and compatibility with other well-evidenced commitments."



The Architectural Shift: AI Agents Become Execution Engines While Backends Retreat to Governance
Eran Stiller, InfoQ, 2025/10/29



I know that there's a lot of AI scepticism in our field, but in the enterprise space, where so many processes are documented, it should not be a surprise at all to see AI replace the humans who fill in forms in standardized ways. "A fundamental shift in enterprise software architecture is emerging as AI agents transition from assistive tools to operational execution engines, with traditional application backends retreating to governance and permission management roles. This transformation is accelerating across banking, healthcare, and retail systems, with 40% of enterprise applications expected to include autonomous agents by 2026." There are two possible responses: push back against automation, or take steps to ensure it is done right. This InfoQ article is based on a report from Gartner.



Elon Musk's Grokipedia launches with AI-cloned pages from Wikipedia
Jay Peters, The Verge, 2025/10/29



After a bit of a rocky start, Elon Musk has launched Grokipedia, his truth-massaged alternative to Wikipedia. Some of its content was derived from Wikipedia and there are overlaps, but there are differences. I was curious, so I asked ChatGPT to compare the lists of references the two sites use for a specific topic ('Berlin') and found that Grokipedia cites a bunch of things Wikipedia does not; as ChatGPT put it, "Wikipedia normally excludes these for verifiability/reliability reasons": "Quora, Reddit, Facebook posts... commercial tour and blog sites: Original Berlin Tours, freewalkingtour.com, BerlinExperiences, Walled-in-Berlin, Berlin Avenue, personal/blog formats, aggregator/how-to/SEO pages: Numbeo, Joberty, ResearchGermany product pages, 'top sector forecast' style posts, etc., (and) think-tank and policy PDFs not tied to the article's narrative," among others. See also: The Register.



We need private AI before it's too late
Eamonn Maguire, Proton, 2025/10/29



Although it's tempting, something I'm careful not to do as a government employee is to input anything to do with my job into ChatGPT. There's a simple reason for this: OpenAI is watching. Now I'm sure there isn't a direct pipeline whereby government or personal secrets are deposited directly onto some surveillance agent's desktop. But there are numerous indirect ways this could happen, and that's the point of this article from Proton. When it comes to professional content and AI, the rule for me is simple: don't. This has nothing to do with AI per se and everything to do with the companies that provide it.



Colors and Numbers
Eryk Salvaggio, Cybernetic Forests, 2025/10/29



There's tons of good stuff in this post about neurodivergent ways of thinking; I'll highlight this: "Coming to grips with the ways we think, as opposed to the ways we are 'expected' to think, can help unravel universalist assumptions about there being any one way to think at all." Similarly, as quoted by Laura Hilliger: "When we assume language reflects thinking, we may also assume that all thinking reflects our thinking." I don't have anything as cool as synesthesia, but I do know that I think in a way that is - somehow - different from most. I'm fine with that.



We publish six to eight or so short posts every weekday linking to the best, most interesting and most important pieces of content in the field. Read more about what we cover. We also list papers and articles by Stephen Downes and his presentations from around the world.

There are many ways to read OLDaily; pick whatever works best for you.

This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, click here.

Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.

Copyright 2025 Stephen Downes Contact: stephen@downes.ca

This work is licensed under a Creative Commons License.