Study shows AI-generated fake reports fool experts

Stephen Downes

In the past I have talked about the possibility of automatically generated OER. These learning resources would be created by transformers, such as Google's BERT and OpenAI's GPT, which (as the article says) "use natural language processing to understand text and produce translations, summaries and interpretations." That's great, but the same technology also creates the potential for OER to be used to spread misinformation. This article suggests that AI-generated misinformation may be good enough to fool experts, offering examples in which fake reports fooled security experts about potential intrusions and threats. Misleading OER ('MOER'?) could create havoc, especially if they pass undetected through peer review. Something to think about.
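To make concrete how low the barrier is, here is a minimal sketch (mine, not from the article) of text generation with a transformer, using the Hugging Face transformers library and the small GPT-2 model; the security-themed prompt is invented for illustration.

```python
# A minimal sketch of transformer text generation (illustrative only).
# Uses the Hugging Face `transformers` library and the small GPT-2 model;
# the prompt below is a made-up example, not taken from the study.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "A newly discovered vulnerability in the authentication service"
outputs = generator(prompt, max_new_tokens=50, num_return_sequences=1)

# The model continues the prompt with fluent, plausible-sounding prose --
# the same fluency that makes fabricated reports hard to spot.
print(outputs[0]["generated_text"])
```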



Stephen Downes, Casselman, Canada
stephen@downes.ca

Copyright 2024
Last Updated: Apr 25, 2024 3:15 p.m.

Creative Commons License.
