'Pokemon Go' Players Unknowingly Trained Delivery Robots With 30 Billion Images
Slashdot,
2026/03/17
The main lesson here isn't that some company tricked its users into providing data for free. It seems pretty clear that Pokémon Go players understood that the information, and especially the photos, they submitted would be used to train AI. In a similar fashion, I am under no illusion that the photos and reviews I upload to Google Maps won't be used in the same way. Of course they will. No, the main takeaway is that we're moving from an era where all AIs were trained on text into an era where they are trained on geospatial data, photographs, and other non-text data. See also Popular Science.
Web: [Direct Link] [This Post][Share]
The Trust Tax: Why Every AI Deployment in Education Fails or Succeeds on a Single Variable
Nik Bear Brown,
2026/03/17
I don't disagree with the main point here, though I do have an issue with whether 'trust' can be defined in any useful way. But I digress. Here's what Nik Bear Brown is arguing: what matters in AI-in-education deployment isn't what the AI is capable of doing, it's whether we can trust it. "It is calibrated trust — a state where a user's confidence in a system accurately matches the system's actual reliability." We obviously don't want students to trust it too much, but they can also trust it too little. Then people "exhibit what researchers call 'algorithmic aversion.' They disengage." And there are other problems around trust: the 'honeypot effect', where you learn to depend on a system, which then changes; the 'adversarial trap', where a system you trusted turns out to be (say) spying on you; and the 'bias problem', where a system you trust is subtly leading you astray. These are all, says Brown, pedagogical issues. Getting them wrong has consequences for learning.
The key problem with the "brain in a vat" thought experiment
Adam Frank,
Big Think,
2026/03/17
This short article uses a philosophical classic to address what might be called 'the embodiment problem'. The classic is, of course, the question, 'How do we know we are not brains in vats?' All our sensations, all our physical experiences, could be wired up as inputs into the brain. Could we tell the difference? This article argues that we could, because it would be much too complex to simulate our experiences. "Thompson and Cosmelli conclude (18-page PDF) that to really envat a brain, you must embody it. Your vat would necessarily end up being a substitute body." Well - sure. Even the simplest version of 'brain in a vat' postulates some external mechanism standing in for the human body. That's the whole point. But the question is more subtle: is it the case that there can be one and only one possible cause for a given set of conscious experiences? If the answer is 'yes', then our options for both ourselves and for AI are fundamentally limited. But on what grounds would you argue 'yes'? This article doesn't really offer those grounds, beyond saying it's complex. But complexity doesn't prove necessity.
Robots Didn't Kill the Internet
Carlo Iacono,
Hybrid Horizons,
2026/03/17
Carlo Iacono argues convincingly that today's 'dead internet' isn't the result of AI, it's the result of incentives. Platforms are asking for things that hold attention and produce a useful signal. "That question, applied at scale and compounded over years, is what killed the internet. Not robots. Incentives." The internet has become a giant casino, he argues. Websites are engineered to keep people clicking, and they collect their cut in the form of advertising revenue. "The internet did not start rotting because robots learned to write. It started rotting when platforms became casinos. The robots are just very efficient casino staff."
There are many ways to read OLDaily; pick whatever works best for you:
This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, click here.
Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.
Copyright 2026 Stephen Downes. Contact: stephen@downes.ca
This work is licensed under a Creative Commons License.