As Birmingham Eastside reveals a sharp increase in AI-related academic misconduct at one Midlands university, Alex Ughulu explores the shift from traditional plagiarism to the use of artificial intelligence (AI) in higher education.
In just one year at the University of Wolverhampton, AI-related misconduct has risen from one in every hundred academic misconduct cases to almost a quarter of all cases in the last academic year.
The shift reflects a wider trend across the academic landscape, as artificial intelligence reshapes how students learn, work and interact.
“ChatGPT and Grammarly have become more prominent in the last year,” says Rounak Chitrakar, an English and Journalism student at Birmingham City University, “and cater towards students for efficiency and conciseness.”
AI is also being increasingly integrated into writing, teaching and research tools — including Google.
“I think it’s a very useful tool in research, but it’s got that inbuilt conflict between what is your own research vs what is being done by AI,” says Stewart Sandilands, Library Experience Librarian at Birmingham City University.
Despite its benefits, AI use is raising concerns about academic ethics and honesty, with institutions developing specific policies to ensure AI is used responsibly.
Evolution of academic dishonesty
The move to AI is not just a new form of cheating but a fundamental shift. Where traditional plagiarism involves finding and reusing existing material and presenting it as your own work, generative AI plagiarism involves getting a tool like ChatGPT to produce material on your behalf, much as another student might write an assignment for you.
At the same time, AI tools can remove obstacles to learning, such as the cost of resources or access to support. This can create confusion over what is deemed acceptable academic help and what isn’t allowed.
“I think it can be a good support,” says Rounak. “But complete reliance is bad as it can hamper interpersonal growth and further learning.”
At the University of Wolverhampton, the rise in AI plagiarism cases highlights a significant challenge for universities’ approach to academic misconduct, as institutions rethink how they define and enforce academic integrity.
Detection challenges
Detecting AI use is one challenge. Even the most advanced software achieves just over 80% accuracy, raising concerns both about violations going undetected and about false positives.
Detection tools can produce probability scores without clear explanations, making it difficult for educators to substantiate allegations, and for accused students to contest false ones.
Universities, meanwhile, are left without clear thresholds for what constitutes proof of AI use.
“I think it’s inevitable that there will be usage of detection tools,” says Stewart. “But I’m conscious that universities will be one step behind as well.”
Institutional responses
Institutions’ responses to AI vary. Universities like Oxford emphasise ethical AI research and governance frameworks, while others, such as Imperial College London, prioritise innovation and collaboration. Some universities fully integrate AI literacy, while others focus on technical training. Some enforce strict rules, and others allow the use of AI within boundaries.
“I think AI is inevitable in the current state of technology,” says Rounak. “Certain universities allow the use of Copilot for assistance in their assignments. I think integration is the best step forward because it is very difficult to monitor AI use during personal study time.”
Eleanor Worth, Library Experience Librarian at Birmingham City University, says: “When it comes to helping students to reference their work, there are courses where lecturers or tutors want students to use AI, and some people don’t.”
“I think there’s a place for it being subjective.”
The rise in AI-related academic misconduct and the decline in traditional plagiarism represent a turning point for higher education. Unable to rely on restriction or detection alone, academics are being challenged to reimagine assessment and authorship for a new digital academic landscape.
