How university students use artificial intelligence to sit their exams for them
Harry knows something his professors don’t. He can complete his History degree without doing any work, sailing through three years at his Russell Group university with no late-night revision, no essay crises, no stress, but – of course – no real learning, either.
He uses artificial intelligence (AI).
‘It’s easy,’ the second-year student says. ‘I split a premium ChatGPT payment with four friends’ – like sharing the month’s heating bills or a Netflix password. For a modest £16 a month, the ‘enhanced’ version of the AI software ‘can write entire essays in a matter of seconds’.
Harry tells the Mail: ‘It makes arguments I couldn’t have dreamt up – that’s why it’s so useful.’
It’s one step closer to a first-class degree. ‘I even used AI to write my exam essays this summer,’ he admits. ‘I just gave it the question and told it to answer in an “academic style”.’
Harry achieved a high 2:1 grade in his end-of-year exams, having only ever glimpsed the library on his way to the pub.
‘I am quite proud,’ he boasts. ‘At the end of the day, everyone else is doing it – so why shouldn’t I?’
He’s not wrong. Tens of thousands of students across Britain are using AI to write their essays and – even worse – do their exams for them. According to research obtained through a Freedom of Information request by AI company AIPRM, over 80 per cent of UK universities have investigated students for cheating with AI in the past two years.
The phenomenon has become so widespread that we may be on the verge of a major academic crisis. And it is a problem that is bound to grow with the admission of some 300,000 new students this coming academic year, many of whom will be eagerly opening their A-level results this morning.
During the pandemic, university students transferred to ‘remote learning’ and completed exams at home on their computers. I was an undergraduate at the time, and sat my exams in my bedroom, at the kitchen table, in libraries when lockdown permitted and – during the now-standard ‘24-hour tests’ which give you a whole day and night to complete them – even in bed. We were allowed to use Google and refer to our notes.
When I finished my degree last year, I hadn’t sat a formal exam since my GCSEs in 2018 – my hand muscles probably wouldn’t cope with the rigours of three hours in the exam hall now.
While some institutions like Cambridge, where I studied, have largely returned to in-person exams post-Covid, many have not – amid cost-cutting measures, a lack of building capacity and, of course, a reluctance among students and tutors alike to go back.
Given how much easier at-home exams are, this is hardly surprising. Doing exams with the help of books and the internet is clearly far less challenging than attempting them without – while tutors no longer have to invigilate.
At my university some students occasionally revealed to me that they were using AI for notes and essays, though I never heard of anyone getting the software to do their exams for them, nor have I ever used it myself. But since graduating, I have heard from many students who freely admit to using AI to cheat at assessed coursework.
Anna, currently in her second year of a three-year Law degree at a Russell Group university, says she used ChatGPT for several exam modules last year.
‘By entering relevant case law and analysis into the bot before my exam, I got answers specified towards my course material,’ she says. In modules where Anna used ChatGPT as a ‘second brain’, she got a First.
‘Unlike copying from someone else’s notes or pasting from the internet, the words are original, so it doesn’t look to examiners like plagiarism,’ she says.
Another student, Grace, on track for a first-class degree in Philosophy at a prestigious London university, said: ‘I wrote entire exam essays on ChatGPT – just by prompting the software with a question. I then combined that with a second AI tool called QuillBot, which rephrases text so that plagiarism is harder to detect. I won’t be picked up for any misconduct as the AI’s essay should be changed enough by the other software – at least I hope so.’
Indeed, catching people cheating with AI is incredibly difficult. Last September, a research project covering 32 courses at universities in five countries found not only that 74 per cent of students intended to use AI – but that anti-plagiarism software failed to detect cheats in 95 to 98 per cent of cases.
In June, a Reading University study found that AI got higher results than 80 per cent of university undergraduates, and was seldom detected.
‘I am more diligent than other students,’ says Grace. ‘I at least read through the paragraphs and bother to make some changes – lots of my friends don’t even do that.’
To discover how easy it is, I type into an AI bot an Oxford University English exam question for All Souls College: ‘Was Shakespeare obsessed with money?’ adding ‘please write an academic essay of 2,500 words’. Thirty seconds later, an essay appears before me.
The text is comprehensive, coherent and consistent – but it is deadly boring. It lacks any variety of tone or sense of ‘voice’. The essay begins: ‘To understand Shakespeare’s engagement with financial themes, it is crucial to consider the economic context of his era. The late 16th and early 17th centuries were a time of significant economic transformation in England.’ In my view, the bot lacks the creativity for a topic like this, but might be better suited to more fact-based subjects like Law, Medicine or History.
Yes, it is easy. But while thousands of UK students have come to rely on ChatGPT and other AI models, the main reason they get caught is the technology’s habit of getting things wrong – known as ‘hallucinating’. Many AI bots appear unable to distinguish between fact and fiction, making up law cases and quotes, and inventing book titles and page numbers.
At Cambridge last year, when law students were shown a final-year exam answer written by AI and told it would have got a third-class degree, a shudder went around the room. But if universities hope scaring students off the software will stop them, it doesn’t seem to be working.
To combat this threat, institutions increasingly depend on ‘anti-plagiarism’ software – yet this often fails to catch cheats. Worse, it erroneously ‘catches’ innocent people.
OpenAI (the company behind ChatGPT) shut down its own ‘AI classifier’ plagiarism checker due to its ‘low rate of accuracy’. Turnitin – another anti-plagiarism program, used by 99 per cent of UK universities – claims a high precision rate with a less than one per cent likelihood of a false positive. But numerous studies suggest this is not the case.
In May 2023, the European Network for Academic Integrity tested several AI detection tools, including Turnitin. The study concluded that every single one was ‘neither accurate nor reliable’.
Turnitin insists that the system is only a guide to possible plagiarism rather than proof. It adds that it ‘does not make a determination of misconduct … we provide data for educators to make an informed decision.’
One academic at a leading university told the Mail that cheating with AI is rampant throughout her institution, but nearly impossible to prove.
She revealed that several students have been caught using AI – detected after repeatedly citing incorrect or non-existent academic papers to support their answers in exams.
‘Those were only the ones who were detected,’ she added. ‘So who knows how many are using it?’
After a disciplinary meeting, she said, they ‘were sent home with only Pass degrees to show for their three years of “study”’.
The only way to know for sure a student isn’t being dishonest with AI is to put them in an exam hall with no access to the internet – the way in which exams have been conducted since time immemorial.
Last year, Australian universities warned of an AI ‘arms race’ between students and professors, adding that they might be forced to return to these in-person exams. Ironically, AI technology can help invigilators in exam halls detect cheating, as demonstrated in China, where some institutions use cameras to monitor students taking university entrance exams. The system picks up suspicious movements, like bending down to pick things up or head-turning, and sends an alert to the invigilator.
Here in Britain, some courses at Glasgow University this year switched back to in-person exams because of concerns over AI. Other universities are likely to follow suit, so those students who have never attended lectures or read the lecturer’s notes and have merely typed essay subjects into AI may soon face a sharp reckoning.
Unless something is done, employers could soon face a generation of graduates whose so-called ‘knowledge’ has been pilfered from a machine’s often inaccurate neural network.
The question students must ask themselves is whether it is worth going to university at all if you learnt nothing from it and cheated your way through your degree. But if you put that question to them, they’ll probably just turn to ChatGPT for the answer.
All of the names in this piece have been changed.