Stephen Downes

Knowledge, Learning, Community

First Law of Robotics

Metafilter, May 25, 2018

The problem isn't with automated systems or artificial intelligence. The problem is with companies deploying such systems with the same due care and attention they pay to their customers' needs and interests on a day-to-day basis. Case in point: Uber. "There were no software glitches or sensor breakdowns that led to a fatal crash, merely poor object recognition, emergency planning, system design, testing methodology, and human operation." For example, "Uber chose to disable emergency braking system before fatal Arizona robot car crash, safety officials say." I think we can trust artificial intelligence in learning, but not artificial intelligence managed by Silicon Valley corporations in learning.



Stephen Downes, Casselman, Canada

Creative Commons License.

Copyright 2021
Last Updated: Mar 30, 2021 9:42 p.m.