
OLDaily

Welcome to Online Learning Daily, your best source for news and commentary about learning technology, new media, and related topics.
100% human-authored

When Is a Theory Superficial?
Jeremy Pober, Eric Schwitzgebel, The Splintered Mind, 2025/05/02



According to Google, I'm the first to say this: classes aren't causes.

Let me explain what I mean. This article asks what makes a theory 'deep', as opposed to superficial. The authors suggest, "superficial theories have minimal explanatory content, whereas deep theories have excess explanatory content." An explanation typically involves a causal relationship, but the authors use this formulation to allow classes of things - functions, dispositions, categories - to stand in as causes. As in, for example, "poison causes death." But the property of 'being a poison' isn't what makes a thing cause death. That's just a classification of a substance, a way of describing it. We may as well say 'skull and crossbones cause death'. So I say "classes aren't causes." For me, a 'deep' theory describes a specific mechanism in which actual interactions between one thing and another cause a specific effect. Such explanations may be generalized, but the generalization forms no part of the explanation. In my view.

Web: [Direct Link] [This Post] [Share]


slop capitalism
Aidan Walker, How To Do Things With Memes, 2025/05/02



This is an interesting article, though I resist its central argument. It is, in a nutshell: "The problem with slop capitalism, in my view, is its attempt to... replace the 'jungle of functionalist rationality' which de Certeau saw in the cities of the 1980s with the desert of artificial rationality we see in the cities and social platforms of 2025." The use of the term 'slop' is suggestive, as is this image: we are to suppose that what we get from AI is inferior to what we had before. But I was alive in the 1980s. The 'functionalist rationality' of those times was a mess, a mix of Thatcherism, Reaganomics, total quality management, 22.5% interest rates, Bhopal, garbage, famine, corruption and war. Today is bad, but it's actually better than the 1980s were. Rather than a narrowing of the channels, what many people see in AI is an opening of the floodgates - something the functionaries may view with suspicion and fear, but which, for the rest of us, signifies hope and the possibility of change for the better.

Web: [Direct Link] [This Post] [Share]


Google search’s made-up AI explanations for sayings no one ever said, explained
Kyle Orland, Ars Technica, 2025/05/02



I first encountered the phrase on TWiT on Sunday. "Last week, the phrase 'You can't lick a badger twice' unexpectedly went viral on social media. The nonsense sentence - which was likely never uttered by a human before last week - had become the poster child for the newly discovered way Google search's AI Overviews makes up plausible-sounding explanations for made-up idioms." Ironically, the phrase 'You can't lick a badger twice' now has a meaning - but it's meta-metaphorical, meaning something like (to paraphrase) "garbage in, a workable interpretation of garbage out". Via Doug Belshaw.

Web: [Direct Link] [This Post] [Share]


What Would “Good” AI Look Like?
Anil Dash, 2025/05/02



This is an interesting question that deserves a more thorough treatment than it receives here. Some of the qualities are uncontroversial - they describe AI that is green, error-free, open source, based on consent, and so on. But what about things like governance? Anil Dash writes, "Alternative creation, ownership and governance models for AI tools that address the corporate chaos of today's big names are well past due." But such alternative models are what existed before, when AI was a research exercise, and as soon as there was money to be made (at least in theory) it went corporate. Why wouldn't the same stewards do the same thing again?

Web: [Direct Link] [This Post] [Share]


Connectomics 2.0: Simulating the brain
Laura Dattaro, The Transmitter: Neuroscience News and Perspectives, 2025/05/02



It's one thing to build a complete connectome of a fly brain. It's quite another to understand how that brain works. "Even if you could incorporate every detail about the imaged neurons and their interactions with one another, the connectome would still represent a single moment in time—devoid of information about how these connections change with experience. 'To me, that is what makes a brain a brain,' says Adriane Otopalik, a group leader at Janelia who previously worked in Marder's lab as a graduate student. 'It seems odd to me to design a model that totally ignores that level of biology.'" The connectome describes the fly's knowledge, but not how it learns or acts.

Web: [Direct Link] [This Post] [Share]


We publish six to eight short posts every weekday linking to the best, most interesting and most important pieces of content in the field. Read more about what we cover. We also list papers and articles by Stephen Downes and his presentations from around the world.

There are many ways to read OLDaily; pick whatever works best for you.

This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, click here.

Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.

Copyright 2025 Stephen Downes Contact: stephen@downes.ca

This work is licensed under a Creative Commons License.