4 human-caused biases we need to fix for machine learning

Stephen Downes

The response to three of the four biases reported in this post is diversity. And I would add that just as machines need diversity in order to avoid bias, so do humans; that's why diversity is valuable. Anyhow, here's the breakdown of those three (all quoted from the text):

  • Sample bias is a problem with training data... (we need a sample) both large enough and representative enough to mitigate sample bias.
  • Measurement bias happens when there's an issue with the device used to observe or measure... best avoided by having multiple measuring devices (and I would add, multiple types of measuring devices)
  • Algorithm bias is a mathematical property of an algorithm. The counterpart to bias in this context is variance (see the sketch after this list).
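
Since "algorithm bias" here refers to the bias-variance tradeoff, a minimal sketch may help. This is my own illustration, not from the article: it fits polynomials of increasing degree to repeatedly resampled noisy data, then estimates bias² (systematic error of the average fit) and variance (sensitivity of the fit to the particular training sample).

```python
# A minimal sketch (my own, not from the article) of the
# bias-variance tradeoff: fit polynomials of increasing degree to
# resampled noisy data, then estimate bias^2 (error of the average
# fit) and variance (spread of fits across training samples).
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0.0, np.pi, 20)
x_test = np.linspace(0.0, np.pi, 200)
true_test = np.sin(x_test)

for degree in (1, 4, 9):
    preds = []
    for _ in range(200):  # redraw the noisy training labels each time
        y_noisy = np.sin(x_train) + rng.normal(0.0, 0.3, x_train.shape)
        coeffs = np.polyfit(x_train, y_noisy, degree)
        preds.append(np.polyval(coeffs, x_test))
    preds = np.asarray(preds)
    mean_pred = preds.mean(axis=0)
    bias_sq = np.mean((mean_pred - true_test) ** 2)  # systematic error
    variance = preds.var(axis=0).mean()              # sample sensitivity
    print(f"degree {degree}: bias^2={bias_sq:.4f}, variance={variance:.4f}")
```

Low degrees show high bias and low variance (underfitting); high degrees show the reverse (overfitting), which is why bias and variance are counterparts.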

The fourth type of bias is prejudice bias, which "is a result of training data that is influenced by cultural or other stereotypes." To fix this, "the humans who label and annotate training data may have to be trained to avoid introducing their own societal prejudices or stereotypes into the training data."
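
Training the annotators is the article's fix, but label skew can also be checked in the data itself. Here is a minimal sketch, with made-up group names and records, of auditing positive-label rates across a sensitive attribute before the data reaches a model:

```python
# A minimal sketch (hypothetical data) of one way to surface
# prejudice bias: compare positive-label rates across a sensitive
# attribute in the annotated training data.
from collections import defaultdict

# Hypothetical annotated records: (sensitive_group, label)
records = [
    ("group_a", 1), ("group_a", 1), ("group_a", 1), ("group_a", 0),
    ("group_b", 0), ("group_b", 0), ("group_b", 1), ("group_b", 0),
]

totals = defaultdict(int)
positives = defaultdict(int)
for group, label in records:
    totals[group] += 1
    positives[group] += label

for group in sorted(totals):
    rate = positives[group] / totals[group]
    print(f"{group}: positive-label rate {rate:.2f} (n={totals[group]})")

# A large gap between groups does not prove prejudice, but it flags
# labels worth re-checking against the annotation guidelines.
```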


