Bristol University is among the institutions to have issued new guidance on how to detect the use of ChatGPT. Photograph: Adrian Sherratt/Alamy

AI makes plagiarism harder to detect, argue academics – in paper written by chatbot


Lecturers say programs capable of writing competent student coursework threaten academic integrity

An academic paper entitled Chatting and Cheating: Ensuring Academic Integrity in the Era of ChatGPT was published this month in an education journal, describing how artificial intelligence (AI) tools “raise a number of challenges and concerns, particularly in relation to academic honesty and plagiarism”.

What readers – and indeed the peer reviewers who cleared it for publication – did not know was that the paper itself had been written by the controversial AI chatbot ChatGPT.

“We wanted to show that ChatGPT is writing at a very high level,” said Prof Debby Cotton, director of academic practice at Plymouth Marjon University, who pretended to be the paper’s lead author. “This is an arms race,” she said. “The technology is improving very fast and it’s going to be difficult for universities to outrun it.”

Q&A

AI explained: what is a large language model (LLM)?


What LLMs have done for text, “generative adversarial networks” have done for images, films, music and more. Strictly speaking, a GAN is two neural networks: one built to label, categorise and rate, and the other built to create from scratch. By pairing them together, you can create an AI that can generate content on command.

Say you want an AI that can make pictures. First, you do the hard work of creating the labelling AI, one that can see an image and tell you what is in it, by showing it millions of images that have already been labelled, until it learns to recognise and describe “a dog”, “a bird”, or “a photograph of an orange cut in half, showing that its inside is that of an apple”. Then, you take that program and use it to train a second AI to trick it. That second AI “wins” if it can create an image to which the first AI will give the desired label.

Once you’ve trained that second AI, you’ve got what you set out to build: an AI that you can give a label and get a picture that it thinks matches the label. Or a song. Or a video. Or a 3D model.
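For readers who want to see that two-network setup concretely, here is a minimal, illustrative sketch in Python using PyTorch. It trains on one-dimensional toy numbers rather than images, and every detail (the network sizes, the "real" data distribution, the learning rates) is an arbitrary choice made for demonstration, not something specified in this article.

    import torch
    import torch.nn as nn

    # Two networks, as described above: the discriminator labels samples as
    # real or generated, and the generator tries to fool it. Toy 1-D data
    # keeps the sketch short; an image GAN works the same way in spirit.
    latent_dim = 8
    generator = nn.Sequential(
        nn.Linear(latent_dim, 32), nn.ReLU(),
        nn.Linear(32, 1),                       # emits a fake "sample"
    )
    discriminator = nn.Sequential(
        nn.Linear(1, 32), nn.ReLU(),
        nn.Linear(32, 1), nn.Sigmoid(),         # P(sample is real)
    )

    loss_fn = nn.BCELoss()
    g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

    for step in range(2000):
        real = torch.randn(64, 1) * 0.5 + 3.0  # "real" data: N(3, 0.5)
        fake = generator(torch.randn(64, latent_dim))

        # Train the discriminator to tell real from generated samples.
        d_opt.zero_grad()
        d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
                  + loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
        d_loss.backward()
        d_opt.step()

        # Train the generator: it "wins" when its output is labelled real.
        g_opt.zero_grad()
        g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
        g_loss.backward()
        g_opt.step()

After training, calling generator(torch.randn(1, latent_dim)) should produce values near 3, samples the discriminator can no longer reliably distinguish from the real data.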




Cotton, along with two colleagues from Plymouth University who also claimed to be co-authors, tipped off editors of the journal Innovations in Education and Teaching International. But the four academics who peer-reviewed it assumed it was written by these three scholars.

For years, universities have been trying to banish the plague of essay mills selling pre-written essays and other academic work to any students trying to cheat the system. But now academics suspect even the essay mills are using ChatGPT, and institutions admit they are racing to catch up with – and catch out – anyone passing off the popular chatbot’s work as their own.

The Observer has spoken to a number of universities that say they are planning to expel students who are caught using the software.

The peer-reviewed academic paper that was written by a chatbot appeared this month in the journal Innovations in Education and Teaching International. Photograph: Debby RE Cotton

Thomas Lancaster, a computer scientist and expert on contract cheating at Imperial College London, said many universities were “panicking”.

“If all we have in front of us is a written document, it is incredibly tough to prove it has been written by a machine, because the standard of writing is often good,” he said. “The use of English and quality of grammar is often better than from a student.”

Lancaster warned that the latest version of the underlying AI model, GPT-4, which was released last week, was meant to be much better and capable of writing in a way that felt “more human”.

Nonetheless, he said academics could still look for clues that a student had used ChatGPT. Perhaps the biggest of these is that it does not properly understand academic referencing – a vital part of written university work – and often uses “suspect” references, or makes them up completely.

Cotton said that to ensure their academic paper hoodwinked the reviewers, the chatbot’s references had to be corrected and supplemented.

Lancaster thought that ChatGPT, which was created by the San Francisco-based tech company OpenAI, would “probably do a good job with earlier assignments” on a degree course, but warned it would let them down in the end. “As your course becomes more specialised, it will become much harder to outsource work to a machine,” he said. “I don’t think it could write your whole dissertation.”

Bristol University is one of a number of academic institutions to have issued new guidance for staff on how to detect that a student has used ChatGPT to cheat. This could lead to expulsion for repeat offenders.


Prof Kate Whittington, associate pro vice-chancellor at the university, said: “It’s not a case of one offence and you’re out. But we are very clear that we won’t accept cheating because we need to maintain standards.”

Prof Debby Cotton of Plymouth Marjon University highlighted the risks of AI chatbots helping students to cheat. Photograph: Karen Robinson/The Observer

She added: “If you cheat your way to a degree, you might get an initial job, but you won’t do well and your career won’t progress the way you want it to.”

Irene Glendinning, head of academic integrity at Coventry University, said: “We are redoubling our efforts to get the message out to students that if they use these tools to cheat, they can be withdrawn.”

Anyone caught would have to do training on appropriate use of AI. If they continued to cheat, the university would expel them. “My colleagues are already finding cases and dealing with them. We don’t know how many we are missing but we are picking up cases,” she said.

Glendinning urged academics to be alert to language that a student would not normally use. “If you can’t hear your student’s voice, that is a warning,” she said. Another is content with “lots of facts and little critique”.

She said that students who can’t spot the weaknesses in what the bot is producing may slip up. “In my subject of computer science, AI tools can generate code but it will often contain bugs,” she explained. “You can’t debug a computer program unless you understand the basics of programming.”
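Her point is easy to demonstrate. The snippet below is a hypothetical illustration (ours, not from the article) of the kind of plausible-looking bug generated code often contains: it passes a casual test but silently fails on a whole class of inputs, and spotting why requires exactly the programming basics Glendinning mentions.

    # Hypothetical example of a subtle bug in generated code. The loop
    # condition should be "lo <= hi"; as written, a target sitting at the
    # last remaining index is never checked.
    def binary_search(items, target):
        """Return the index of target in the sorted list items, or -1."""
        lo, hi = 0, len(items) - 1
        while lo < hi:                    # bug: misses the lo == hi case
            mid = (lo + hi) // 2
            if items[mid] == target:
                return mid
            elif items[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1

    print(binary_search([2, 4, 6], 4))    # 1: looks correct
    print(binary_search([5], 5))          # -1: silently wrong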

With fees at £9,250 a year, students were only cheating themselves, said Glendinning. “They’re wasting their money and their time if they aren’t using university to learn.”
