
Revealed: Thousands of UK university students caught cheating using AI
https://www.theguardian.com/education/2025/jun/15/thousands-of-uk-university-students-caught-cheating-using-ai-artificial-intelligence-survey
by ThatchersDirtyTaint

45 comments
I don’t know why they didn’t see this coming.
Even when I was at uni, 5 years ago, people would try to cheat their way out of writing essays.
I’m applying for jobs and apparently one of the things I can do to make my applications stand out is to not use AI. It’s absolutely insane how lazy people have become.
PSA for university students, if you can do your coursework with ChatGPT/AI it probably means your future job can be done via ChatGPT/AI.
Oh and watch assessment move back to being 100% exam based wherever possible.
My girlfriend is a university lecturer and she caught other lecturers using it to mark student papers.
_caught_ being the key word.
Fewer than 5% of cases will be caught and punished.
Back in the 90s at uni people were posting their homework on newsgroups; my memory may be wrong here, but I feel like alt.science.math was a popular one. Also "ask dr math" was another place you could get strangers to do your homework.
Point is – cheating at uni isn’t a new thing. It’s just easier to get caught now
The article doesn’t say what ‘cheating’ entails.
Using it to write an entire essay? Probably cheating, but ChatGPT is terrible at writing from scratch.
Redrafting bits of text, getting feedback, brainstorming ideas – not cheating imo
Students at uni were cheating before the internet was widespread. I remember one particularly lazy student who would wait in the common room for a suitable person to come so that she could copy their assignment. She knew better than to ask me.
I’ve decided to go into industry rather than continue on in academia after my PhD partly because of this.
It's infuriating to hold seminars where most of the group cannot speak a word of English, but then they submit work for review in perfect English which is blatantly AI created. There is still integrity in academia, but not at undergrad and postgrad levels.
Well, duh. Did they think AI would just be dismissed by the uni students? It's a tool a lot of my friends love to use (though the validity of its answers varies). One of my lecturers in a coding module told us to use it, because that's one thing it's not too bad with.
Yeh. Enjoy your freedoms with things like ChatGPT while it lasts because in the future you’re probably going to need a special license to use this shit.
I’m not surprised, my 12 year old cousin has been doing his homework with chatgpt, why wouldn’t uni students also be doing that?
Practically every single student at uni uses AI to cheat on assignments.
It’s a tool that people just need to learn properly. I imagine this is how it was when calculators came in.
I did my undergrad and first postgrad without it. I am dyslexic and found that having someone read over my work and ask what I was trying to say and help me structure it helps me go from Cs to As. That and when I first properly learnt how to do a literature review and to find resources and properly put them all together.
What AI helps me with is having someone to help me convey what I want to say clearly. It hallucinates too much to ever write an essay. The essays would be shit and would never pass at undergraduate or post graduate level. It is also very good at creating the outline and structure for me, which used to take hours of reading and thinking to come up with. I still have to do the research and synthesis to put the meat on the bones though.
>A survey of academic integrity violations found almost 7,000 proven cases of cheating using AI tools in 2023-24, equivalent to 5.1 for every 1,000 students. That was up from 1.6 cases per 1,000 in 2022-23.
It'd be interesting to know what 'proven' means in this context. AI tools that assess whether AI has been used have been shown to be about as useful as flipping a coin.
I know someone who lost their PhD because of this. The lecturers couldn't prove it for ages, until one of them clicked on a picture in one of her reports and adjusted the crop: the area outside the crop revealed the image had come from ChatGPT.
A few thoughts on this:
1. Presumably they were “caught” because ChatGPT writes fairly obvious word salad that is often littered with errors.
2. I don’t see why you’d do this? You’re only cheating yourself. The reason higher education involves writing essays is because writing things out and explaining them generally means they stick in your head and you learn to think about them critically.
3. I hope we're not seeing too many false positives where students are being wrongly accused of using AI, because we've already seen this happen. Seen a few threads about it on the UK Legal Advice sub: it was fairly obvious the person wasn't using AI, but a lecturer used a dodgy AI detection website that was throwing up way too many false positives. Students often have an annoying habit of writing like AI when doing essays, and this long predates ChatGPT.
I have to write a 300 word thing for work, the temptation is unreal.
Spend an hour or so writing it, or spend 5 minutes writing a good prompt. I’d still have to go through the arseache of editing it either way. I almost certainly won’t do that because it’s not worth the risk but the temptation is absolutely there, so I’m not surprised that students who have to spend dozens of hours most weeks writing essays who have way more opportunities to give in to the temptation find themselves doing it.
I don’t really know what the solution is.
These people have no idea they’re asking for a lifetime of impostor syndrome
Something that people should fear more than students using AI and being caught is students who haven't used AI being accused of it by AI detection systems, which are often inaccurate. If you've used AI and don't get caught then great, good on you; if you use it and do get caught then karma is a bitch.
But if you’re a student who hasn’t used AI yet your work has been picked up as “AI made” by a detector then you could literally have your grades and work ruined over something that never even happened.
When you first learn maths, you don't get a calculator. To train a skill. Where and when is important. For me, I find it mad people use GPT to get an outline or a structure. I love doing that myself. And I hope people at uni will learn to organise their thoughts on their own before they start using AI for it.
I think AI is a great tool to assist, but not straight up replace.
It's an interesting one that I am heavily involved in professionally. I think we might see a move away from graded essays and back to interviews/professional discussions. For example, for many level 3 apprenticeships the apprentice will create an ungraded portfolio based on a mix of research and experience from working. The assessor will then generate questions that are discussed verbally but focus on skills and real examples.
I've already seen learners who have generated 'fake' examples and have been made to look very silly when they could not talk about these situations in detail. It doesn't fix everything, but it might be a better direction for some areas.
AI however is to be embraced and we are just at the start of working out how to integrate this. There were similar discussions in the mid-90s about the internet and we (largely) worked it out. This will be the same.
I am convinced I graduated at just about the worst time — November 2019. Feel free to suggest another time where it was particularly rough.
Too late to the point _everybody_ has a degree, making the achievement a bit redundant and just another universal checkbox in applying for jobs. Degree also attained with the highest annual fees and longest payment period.
Too early where I could just use ChatGPT to buff out a paragraph to hit a certain word count. I’ve also seen videos online of students simply highlighting a question + four multiple choice answers and the AI extension gives you the answer.
Graduated three months before a global pandemic. Lost my graduate job relevant to my degree; have since found work irrelevant to my degree, which makes the SFE payments sting with every payslip, knowing I'm not in work relating to what I studied. Trying to get back into it but it's niche, and from 2020 I was in competition with the following rung of graduates. It's just impossible in 2025 now; a complete runaway train.
If I knew my job today was just going to be generic sales, I would have 100% cheated my entire degree if it meant having an even better social life for those 4 years. The higher education system is compromised and unfair in this country.
Thing that worries me as well .. solid parents who are wealthier and have more time will encourage their kids to think and not just use AI. For the less fortunate there will be no control. Maybe. Could make some inequalities worse rather than better
Literally everyone is using it. You get chatGPT to write an essay, then put it in your own words (or even get chatGPT to do that too!).
I was doing an MSc in GIS and we had a programming with Python module last year, and we could search for each other's repos on GitHub. All but a couple were just ChatGPT prompts, leaving in all the comments.
The only way around this is to give a grade based on their contributions during seminars or after an interview-like examination.
Tbf, if I was still at uni I probably would have done the same.
Same old story, new technology. I financed my way thru Uni by writing other people’s essays back in the day.
International students who can’t even speak in class write the most eloquent essays you know.
This is going to end up with them bringing back paper based exams
Maybe essays are no longer the best way to assess learning? Face to face presentations or something would be much better. AI can write for students but it can’t speak for them.
People are setting themselves up for failure, if they get AI to do things without ever learning the skills/knowledge. Using AI, to assist you/save time is a good thing though.
Your work at university is your opportunity to make a contribution to your field and at this level that is what you should be aiming for with every submission. This is the subject you have chosen to pursue at great cost so it’s crazy to me to the extent at which this is being used. It’s a great tool, by all means use it, but not to write the whole thing! Have some faith and pride in yourself.
Well if universities don’t adapt and continue to require useless and outdated metrics, what did they think would happen?
It’ll be a difficult few years working this out, but the solution was always, from the beginning, the creation of assessments that can’t be completed (or completed well) by AI.
A holding position on cracking down on ai usage that I’ve seen adopted was to flag any suspected cheating and compel the author to attend an interview, in which they would be asked questions aimed at determining their understanding of the essay they’d submitted.
I thought that was a good idea, but cuts to temporary teaching staff in response to the university funding crisis have made it impossible to administer.
Crazy how this is upending education and the only choice might be to change the whole assessment system.
So change the way students are tested. It’s obvious.
Most people I know at university have cheated with AI… yes, the majority of people I know have used it at least once to write an essay or a paragraph for assessed work (I'm not talking about using it for research or planning).
They’ll generate a paragraph or a few and then change a few words or rewrite a few sentences in their own words. It also scares me when people use it for sources and they check a final time before submitting and find out the source doesn’t exist or that it’s been taken out of context
Lots of our originally online exams had to be changed to on paper and in person because there were too many people cheating. Massive shame
I go to a university gym for swimming, and this is a frequent topic between students in the changing rooms: they do it because ‘everyone else’ does, and their courses use a grade curve.
Besides that, there’s a general sense of ‘why bother’ doing something the right way when society equally rewards those that do it the wrong way. I.e., why bother trying to be good at something/knowledgeable about something when the effort is great and the rewards are few. This is the natural consequence of a pretence society, where we tend to believe and have faith in the biggest bullshitters.
Many students appear to see their future careers as decorated users of AI, and have complete faith in its capability.
The coin has been debased so much as to be worthless.
We have some AI detection tools for when I grade my students' uni work. I'm not sure how effective they are, but unis are trying to cover plagiarism. I think determined cheaters will always find a way, though.
If people are using AI to write from scratch, then that is a disgraceful shortcut, and you're the only one who fails to benefit.
AI can be used as a stepping stone quite effectively though: you can run a meta-analysis in minutes that would have taken hours before (obviously you need to check all the info, because a lot of the time it can be absolute bollocks).
If you use it to supplement what you work on, then happy days, but right now it’s progressing society into being absolute zombies.
It’s not AI we are utilising. It’s OI – Outsourcing Intelligence. Resulting in a lazier and less proficient generation unable to meet the skill requirements for employment. Ironically, the employers are also pushing hard for AI. What I’m trying to say is… I am incredibly fortunate in my current age and I worry for those that are younger.
I would be interested to know exactly what's considered a 'proven case of cheating using AI tools'. I say that because many of the so-called 'AI detectors' are actually not that reliable. Not only are there problems with actual cheating not being detected (as stated in the article), there are also issues the opposite way: the detectors flagging work as AI generated when it simply isn't.