An AI toolkit for libraries

Stephen Downes

Knowledge, Learning, Community

"There is a need for a set of skills for evaluating new tools and measuring existing ones," writes Michael Upshall. These "should enable anyone commissioning or managing AI utilities to understand what questions to ask, what parameters to measure and possible pitfalls to avoid when introducing a new utility." Maybe. But it's not clear to me that "the skills required are not technical." Consider an AI that recommends learning strategies to instructors. How is this to be evaluated by the non-technically inclined? Some examples are presented, such as a spell-checker and a metadata generation tool. These may feel like they could be used by non-technical people, but that's only because we've already internalized the skills of spelling and categorization. And could a non-technical person detect bias in a training set? Or that the optimal algorithm is in use? Related: Marta Samokishyn, the role of algorithmic literacy.



Stephen Downes, Casselman, Canada
stephen@downes.ca

Copyright 2024

Creative Commons License.
