
OLDaily

Welcome to Online Learning Daily, your best source for news and commentary about learning technology, new media, and related topics.
100% human-authored

Order on Fair Use
United States District Court Northern District of California, 2025/06/24



The ruling is essentially that "the use of the books at issue to train Claude and its precursors was exceedingly transformative and was a fair use under Section 107 of the Copyright Act." That doesn't excuse Anthropic from having used pirated copies of books. But the judgement is pretty clear that using books as input to large language models (LLMs) is not a violation of copyright. In this I am in agreement with the court. You can't make learning from what you read illegal, even if it's a computer that is doing the learning. Pretty much everyone in this Bluesky thread disagrees with me though.

Web: [Direct Link] [This Post][Share]


10 best open source ChatGPT alternative that runs 100% locally
Emmanuel Mumba, DEV Community, 2025/06/24



I'm making an exception to a longstanding rule of mine in order to point to a good resource and to make a point. This site may ask you to create an account; there's no cost, but I hate sites that force you to log in and mostly never link to them. The resource, though, is a list of open source alternatives to ChatGPT that run 100% locally. It underlines a point I've made all along about AI: it's not just the big corporations, and it's not going away. It's just programming, and you can run it on your own computer. It's not some conspiracy by techbros to take over society; it's just math. OK, you don't need to run any of these models if you don't want to. Just know that they exist. Via Clint Lalonde.
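
To make the point concrete, here's a minimal sketch of what "runs 100% locally" amounts to in practice. It isn't one of the ten tools from the article; it just uses the Hugging Face transformers library with a small stand-in model, and once the weights are downloaded everything runs on your own machine:

    # Minimal local text-generation sketch. Assumes the 'transformers' and
    # 'torch' Python packages are installed; 'distilgpt2' is a small stand-in
    # model, not one of the tools listed in the article.
    from transformers import pipeline

    generator = pipeline("text-generation", model="distilgpt2")
    output = generator("Open source AI means", max_new_tokens=40)
    print(output[0]["generated_text"])

The first run fetches the model weights; after that it works entirely offline, which is the point.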

Web: [Direct Link] [This Post][Share]


Inside The Anti-Social Century
Derek Thompson, Big Think, 2025/06/24



The main data point in this video transcript is that "the average American now spends 20% less time socializing in person than they did just 20 years ago, and a record amount of time spent alone by themselves." This is presumed to be bad, though I really think that this depends on your perspective. I mean, is watching TV at home really worse than going out to the bar and getting drunk every night? Is cycling in a group really better than cycling alone? And also, what counts as social? Is it social if I go to the football game by myself (and yes, I have done this)? And I know these don't count as leisure time, but I can do without things like meetings, standing in line for groceries or at the DMV, or interactions with medical and security professionals. I mean, it's all a complex tapestry not reducible to "spending less time socializing is bad". But what about anxiety? you may ask. Sure. In a world of rising costs, privatization of social spaces, militarization of police, and uncertainty about the war, sure, let's blame AI.

Web: [Direct Link] [This Post][Share]


Emergent Symbolic Mechanisms Support Abstract Reasoning in Large...
Yukang Yang, Declan Iain Campbell, Kaixuan Huang, Mengdi Wang, Jonathan D. Cohen, Taylor Whittington Webb, ICML 2025, 2025/06/24



A longstanding critique of neural network methods (aka 'artificial intelligence') is that they cannot reason abstractly (known historically in Chomsky's work as "Plato's Problem"). This critique persists to the present day, with the claim that even 'reasoning' large language models (LLMs) aren't reasoning abstractly. This paper (35 page PDF) shows that the mechanisms can be developed using neural networks alone; they can learn abstract reasoning by themselves. There's even a remark at the end of the paper suggesting how neural networks might deal with "'content effects', in which reasoning performance is not entirely abstract," as oft-noted in cognitive psychology. I won't pretend to have understood the details of the paper, much less the long exchange between the authors and reviewers in the open review process (that would take years, I suspect), but the paper is clear enough that its importance is recognizable. Via Scott Leslie, who mutters, "But yeah, sure, 'stochastic parrots'..."

Web: [Direct Link] [This Post][Share]


We publish six to eight or so short posts every weekday linking to the best, most interesting and most important pieces of content in the field. Read more about what we cover. We also list papers and articles by Stephen Downes and his presentations from around the world.

There are many ways to read OLDaily; pick whatever works best for you.

This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.

Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.

Copyright 2025 Stephen Downes Contact: stephen@downes.ca

This work is licensed under a Creative Commons License.