First Law of Robotics

Metafilter, May 25, 2018
Commentary by Stephen Downes

The problem isn't with automated systems or artificial intelligence. The problem is with companies deploying such systems with the same due care and attention they pay to their customers' needs and interests on a day-to-day basis. Case in point: Uber. "There were no software glitches or sensor breakdowns that led to a fatal crash, merely poor object recognition, emergency planning, system design, testing methodology, and human operation." For example, "Uber chose to disable emergency braking system before fatal Arizona robot car crash, safety officials say." I think we can trust artificial intelligence in learning, but not artificial intelligence in learning that is managed by Silicon Valley corporations.

