Popes vs Philosophers: Whose Ethics of Immigration?
Crooked Timber,
2025/04/28
These are heady days - the passing of a Pope, an election here in Canada, the ongoing debate about equity, tariffs, trade and climate change, war, famine, and the fate of humanity. My work in these pages of OLDaily is motivated by a desire to see a world "where each person is able to rise to his or her fullest potential without social or financial encumbrance." This, to me, is a world freely shared, not hoarded or barricaded. For all the talk about globalization and the free movement of goods and capital, we seem not yet to have breached, ethically at least, that one final barrier where people - you and me - have the same right of movement across borders as bricks and bucks. Here I am aligned with the Popes. "Why are Popes far more progressive than philosophers on the issue of migration?" asks Speranta Dumitru. I don't know. I take it as prima facie evidence that ethics has not matured as a discipline, and as a recommendation that we follow only with caution what we in today's wealthier and more protected societies intuitively declare as 'right'.
How AI learns intuitive physics from watching videos
Ben Dickson,
TechTalks,
2025/04/28
Most of the AI people have been talking about recently has been based on language, and these models are notoriously bad at common-sense concepts such as the basic laws of physics. This article discusses an AI model trained not on language but on video, to see whether it develops a common-sense understanding of what might be called intuitive physics - "our basic grasp of how the physical world works. We expect objects to behave predictably—they don't suddenly appear or disappear, move through solid barriers, or arbitrarily change their shape or color." It contrasts two main approaches - structured models "suggesting humans have innate 'core knowledge' systems" and pixel-based generative models - and proposes a 'middle ground' model, V-JEPA, which "consistently and accurately distinguished between physically plausible and implausible videos."
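As I read it, the test behind that result is a 'violation of expectation' measure: the model predicts upcoming video in an abstract representation space, and a large prediction error - 'surprise' - marks a clip as physically implausible. Below is a minimal toy sketch of that surprise score, using a dummy random-projection encoder and a trivial running-mean predictor; it is my own illustration of the idea, not V-JEPA's code or API.

```python
# Toy sketch of a surprise-based plausibility score (illustrative only, not V-JEPA).
# Idea: encode frames into representations, predict the next representation from
# the past ones, and treat a large prediction error ("surprise") as implausibility.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 16))  # fixed random projection standing in for a learned encoder

def encode(frame):
    """Map an 8x8 frame to a 16-d representation (stand-in for a video encoder)."""
    return np.tanh(frame.ravel() @ W)

def predict(history):
    """Guess the next representation from the past ones (stand-in for a learned predictor)."""
    return np.mean(history, axis=0)

def surprise(frames):
    """Average prediction error over a clip - higher suggests 'physically implausible'."""
    reps = [encode(f) for f in frames]
    errors = [np.linalg.norm(predict(reps[:t]) - reps[t]) for t in range(1, len(reps))]
    return float(np.mean(errors))

clip = [rng.standard_normal((8, 8)) for _ in range(10)]  # dummy stand-in for video frames
print("surprise score:", surprise(clip))
```

In a real system the random projection and running mean would be learned networks, but the comparison works the same way: the physically consistent clip should earn the lower surprise score.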
According to the tagline, "Cluely is an undetectable AI-powered assistant built for Virtual Meetings, Sales calls, and more." To get a sense of what the company promises, you can view this widely shared video. The company leans heavily into the 'cheating' aspect of the service, which is producing a not unexpected visceral reaction on the part of pundits, similar to what we saw for the (now discontinued) Google Glass. The company has published a widely disparaged manifesto comparing itself to the calculator and spellcheck. The founder has also made the most of being kicked out of Columbia University - not for cheating, but because he "recorded the hearing and posted a photo of [himself] with the Columbia University staff members." More coverage: TechCrunch, BitDegree, FutureTools.
Google is killing software support for early Nest Thermostats
Chris Welch,
The Verge,
2025/04/28
I have a persistent dream where my phone (a Google Pixel) keeps falling apart. My actual phone is as solid as ever. But maybe my dream is telling me what I know about Google, which is that you can't trust it to support its products. Case in point: the company is turning its back on Nest thermostats. Google bought the company 11 years ago for $3.2 billion. It brought an early form of AI to thermostats that would learn about your heating preferences. Only the most recent version supports Matter, the Internet of Things (IoT) specification. Google "is also pulling Nest thermostats out of Europe entirely, citing 'unique' heating challenges." Would I buy a Nest in the future? No - it might fall apart on me. More: Pixel Envy. Related: Google has also just killed the driving mode feature in Google Assistant. I'm glad I don't depend on Google Assistant.
Anthropic is launching a new program to study AI 'model welfare'
Kyle Wiggers,
TechCrunch,
2025/04/28
I think it's prudent to "explore things like how to determine whether the 'welfare' of an AI model deserves moral consideration." Put it under the heading of risk management. I know, there are sceptics. Mike Cook, for example, says "a model can't 'oppose' a change in its 'values' because models don't have values. To suggest otherwise is us projecting onto the system." But how do we determine whether a human has values? How do we determine whether anything has consciousness?
Copyright 2025 Stephen Downes. Contact: stephen@downes.ca
This work is licensed under a Creative Commons License.