Stephen Downes

Knowledge, Learning, Community

There's a lot of relevant discussion about motivations and misdirected theories of 'fairness' that is well worth reading. But the most telling point has nothing to do with Facebook in particular. It's this: "Misinformation and hate speech constantly evolve. New falsehoods spring up; new people and groups become targets. To catch things before they go viral, content-moderation models must be able to identify new unwanted content with high accuracy. But machine-learning models do not work that way." What we would need is a generalized fake news detection system. This is something even humans find difficult; for machines not trained with general intelligence, it is harder still.
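To make the failure mode concrete, here is a minimal sketch (assuming Python with scikit-learn, and using entirely invented example texts) of why a classifier trained on known falsehoods struggles with new ones: a novel claim on an unseen topic shares almost no vocabulary with the training data, so the model has nothing to flag.

```python
# Illustrative sketch only: a toy moderation classifier trained on a handful
# of "known" falsehoods and benign posts. All texts here are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Labels: 1 = known misinformation, 0 = benign
train_texts = [
    "miracle cure eliminates the virus overnight",       # known falsehood
    "vaccine microchips track your location",            # known falsehood
    "local clinic extends weekend vaccination hours",    # benign
    "health agency publishes updated travel guidance",   # benign
]
train_labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# A *new* falsehood on a topic absent from training shares no vocabulary with
# the known examples; its feature vector is essentially empty, so the model
# falls back on its prior and has no basis to flag it.
new_falsehood = ["city water secretly laced with mind-control agents"]
print(model.predict_proba(new_falsehood))
```

The point of the sketch is only that a model of this kind recognizes patterns it has already seen; it does not reason about whether a new claim is true, which is what catching novel misinformation would require.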


