Understanding and Debugging Deep Learning Models: Exploring AI Interpretability Methods

Stephen Downes

Knowledge, Learning, Community

A lot of recent discussion of responsible or ethical artificial intelligence centres around whether it is 'explainable'. For various reasons(*), I think this is the wrong word, and much prefer the concept of 'interpretability'. As Andrew Hoblitzell writes, "This includes understanding the relationships between the input, the model, and the output. Interpretability increases confidence in the model, reduces bias, and ensures that the model is compliant and ethical." The article outlines six "methods of providing interpretability" and follows up with two case studies applying these methods. (* To 'explain' is to answer a 'why' question; to interpret includes this but is broader, and refers to finding the 'meaning' of various aspects of the AI system.)
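Hoblitzell's six methods aren't reproduced here, but to make the idea concrete: one widely used interpretability technique is permutation feature importance, which measures how much a model's error grows when each input feature is shuffled. The sketch below uses synthetic data and a plain least-squares model; it illustrates the general technique, not anything specific from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic data: y depends strongly on feature 0, weakly on
# feature 1, and not at all on feature 2.
X = rng.normal(size=(n, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=n)

# Fit a linear model by ordinary least squares.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

def mse(X_, y_):
    return np.mean((X_ @ w - y_) ** 2)

base = mse(X, y)

# Permutation importance: shuffle one feature at a time and record
# how much the model's error increases.
importances = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importances.append(mse(Xp, y) - base)
```

Running this, `importances` ranks feature 0 far above feature 1, with feature 2 near zero: an interpretation of what the model actually relies on, without opening up its internals.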



Stephen Downes, Casselman, Canada
stephen@downes.ca

Copyright 2024
Last Updated: Apr 20, 2024 02:55 a.m.

Creative Commons License.
