My first thought was that this is kind of a dumb question, but the logic behind it is better than it might seem: "If AI companies are honest and say that they cannot build guardrails into their models that stop students from taking quizzes, completing assignments, or writing essays, then why would we believe they are capable of making AI safe or responsible?" The implication (since AI companies all say they can make their products safe) is that they are not being honest when they say they can't stop students from using AI. That's why the second part of the article focuses on how instructors can clamp down on students. But as I've said before: it's disappointing to see academics resort to authoritarianism in the face of the challenge from AI.

