This is from a few days ago; I left it open on my computer before my short vacation because I wanted to make sure it was noted. Donald Clark writes, "In a rather astonishing blind trial study (markers were unaware) by Scarfe (2024) (33 page PDF), they inserted GenAI written submissions into an existing examination system... (the results): 94% AI submissions undetected; AI submission grades on average half grade higher than students." Unlike Clark, I don't actually consider that shocking, not simply because AI is good, but also (mainly) because assessments today are basically language tests. We give students a bunch of language (direct instruction, readings, other content) and then ask them to perform a generative language task (multiple choice, short answer, essay). What do we think the result is going to be?