European Union regulations on algorithmic decision making and a “right to explanation”

Adrian Colyer, The Morning Paper, Jan 31, 2017
Commentary by Stephen Downes

We've seen how an AI can become a racist xenophobe after a single day of training. We've also seen how propaganda can produce the same effect in an entire nation. So it stands to reason that algorithms can embody the prejudice and hate present in the data sets used to train them, and can even magnify that effect in their decision-making. It is therefore reasonable to require that the decisions made by these AIs be vetted in some way. This is the purpose of the European Union regulations concerning profiling, non-discrimination, and the right to an explanation in algorithmic decision-making.

