Stephen's Web ~ Fairness without demographics in repeated loss minimization

Stephen Downes

Knowledge, Learning, Community

The key to representing minorities fairly is to recognize that there are minorities. But what if information about minorities isn't in the data? Many machine learning datasets have this problem, and models trained on them perform poorly for the groups they fail to represent. A speech-recognition system, for example, might perform poorly on minority accents. The solution is to assume minorities exist and account for them using distributionally robust optimization (DRO). This post summarizes a paper describing the process. Interestingly, it's a machine learning system grounded in John Rawls's philosophy of 'justice as fairness'.
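The intuition behind DRO can be sketched in a few lines: instead of minimizing the average loss, the objective focuses on the worst-off examples, so a hidden minority with high loss dominates the training signal even though its group label is unknown. The following is a minimal illustrative sketch (the threshold `eta` and the hinge-squared form are assumptions for illustration; the paper's exact chi-squared formulation differs in detail):

```python
import numpy as np

def dro_loss(losses, eta):
    """Worst-case-focused surrogate loss: only per-example losses
    above the threshold eta contribute, so high-loss examples
    (often an unlabeled minority group) dominate the objective."""
    return np.mean(np.maximum(losses - eta, 0.0) ** 2)

# Toy data: a hidden 10% minority suffers much higher loss.
majority = np.full(90, 0.1)   # low per-example loss
minority = np.full(10, 1.0)   # high per-example loss
losses = np.concatenate([majority, minority])

avg = losses.mean()                  # ordinary average loss: 0.19
robust = dro_loss(losses, eta=0.2)   # driven entirely by the minority
```

Here the ordinary average barely registers the minority, while the DRO surrogate ignores the already-well-fit majority entirely; minimizing it forces the model to improve on exactly the examples it serves worst.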


Stephen Downes, Casselman, Canada

Copyright 2024
Last Updated: Jun 12, 2024 12:27 p.m.

Creative Commons License.