A full month after reports surfaced on Mashable, via a James Bridle column, a BBC News report, and a paywalled NY Times article, the story of dangerously inappropriate YouTube videos being marketed to kids has finally surfaced on CBC News. This is a sad reflection on the national broadcaster, which has all but abandoned coverage of science, technology and education. We are told "when it comes to protecting children from content, we can never rely solely on algorithms," but this is the same old 'algorithm as black box' treatment. Algorithms could perform the task perfectly well (in this case all they have to do is scan for guns and fangs!) but we have to be ready to hold companies accountable for what their algorithms produce (and what their 'kid friendly' sites contain).
This is another post about the limitations of AI, both in terms of effectiveness and in terms of explainability. In terms of effectiveness, AIs depend on the data they're given (which explains racist AIs) and on the uses to which they're put (which explains selective blindness in AIs). We are also told "We can't look inside the black box that makes the decisions." But we can know a lot about it - its data sources, its algorithms, its deployment. These are covered in Europe's new General Data Protection Regulation (GDPR). What about explainability? Because there are so many input variables, we cannot understand AI in terms of simple rules. But we can understand the range of possible outcomes, which allows us to create a portrait of how a given AI operates.
Copyright 2017 Stephen Downes. Contact: email@example.com. This work is licensed under a Creative Commons License.