This article reports on an analysis posted online by Pangram Labs, which found that around 21% of next year's International Conference on Learning Representations (ICLR) "peer reviews were fully AI-generated, and more than half contained signs of AI use... Among other things, they flagged hallucinated citations and suspiciously long and vague feedback on their work." It doesn't help that many manuscripts submitted to the conference were themselves suspected cases of AI-generated text: "199 manuscripts (1%) were found to be fully AI-generated; 61% of submissions were mostly human-written; but 9% contained more than 50% AI-generated text." People may be quick to blame AI, which is fair, but I think we need to ask questions about the incentives at work and the ethics on display, especially as these same authors and reviewers are the ones teaching today's students. (This is one of those articles where enough content is posted online to attract reposters and search engines, but the last six paragraphs are behind the paywall.)

