
Turnitin: universities must move beyond AI detection to policy and learning integrity

14 April 2026

 

As schools and universities move from artificial intelligence bans to classroom-level policies, Turnitin argues the real challenge is no longer simply detecting AI use, but understanding how it fits into authentic learning. The company’s first Learning Integrity Insights Report points to a new phase for education.

In practical terms, the report argues that universities now need three things: clear AI-use policies, assignment-level flexibility and greater transparency into how student work is created. The challenge is shifting from simple detection towards policy design, assessment integrity and trust in authentic learning.

The report, based on both platform data and direct conversations with educators and institutions, paints a more nuanced picture than the early AI debate suggested. Student use of AI is now widespread, yet older integrity challenges have not disappeared. For education leaders, that means updating policy frameworks without losing sight of long-standing issues such as plagiarism.

Turnitin's perspective on this issue carries weight: 

“We launched this quarterly report to share the depth and breadth of insight and expertise we’ve gathered from over 25 years of working with educators, students, and institutions as they navigate responsible AI use,” says Megan Belt, Corporate & Technology Public Relations Senior Manager at Turnitin.

Turnitin today works with more than 16,000 educational institutions across 185 countries and territories, supporting schools, colleges, universities and researchers with originality checking, assessment workflows and writing feedback. Best known for its plagiarism detection and similarity-checking software, the company has built its reputation on helping educators verify authorship, protect academic standards and strengthen trust in assessment.

Over time, that remit has expanded. What began as plagiarism detection has evolved into a broader suite of integrity tools, including secure high-stakes assessments, writing feedback systems and newer products designed to show how students use artificial intelligence during the writing process. 

Why are universities moving beyond AI detection?

The clearest signal from Turnitin’s first quarterly report is that institutions are moving beyond an initial AI panic phase. According to the report, many institutions are no longer focused solely on whether AI was used, but on how it can be integrated responsibly at assignment or classroom level. More than 60 per cent of recent customer feedback prioritised transparency in AI use, while fewer than half of institutions report having a formal AI policy in place.

“The shift from fearing AI to thoughtful integration requires timely guidance, and we want to figure out the right answer alongside our customers,” says Belt.

Why traditional plagiarism still matters in the AI era

For all the discussion around generative AI, one of the report’s most notable findings is not about AI at all.

“What’s most surprising is that ‘traditional’ plagiarism is still going strong, even with all the talk about AI,” Belt points out.

Turnitin’s own data shows that between 6 and 7 per cent of student papers still record similarity scores above 80 per cent against existing sources, indicating that conventional plagiarism remains a persistent issue.

The significance is straightforward: universities cannot afford to replace one academic integrity framework with another. AI governance must sit alongside established student plagiarism policies rather than substitute for them. As Belt puts it: 

“Educators can’t forget about traditional plagiarism. They need to keep enforcing those rules, while also figuring out new strategies for handling AI.”

What does responsible AI use look like in education?

The report repeatedly returns to the idea that responsible AI use exists on a range rather than inside a single institutional rulebook.

“The ‘range’ reflects the reality that there is no one-size-fits-all approach to responsible AI use,” Belt says.

That reflects what Turnitin says it is hearing globally: some institutions prohibit AI entirely, others encourage it across classrooms, while many want rules that can be adjusted by assignment, subject or learning objective.

“Some of our customers do not want AI used anywhere, some want it in every classroom, and many desire customisation to adjust the approved use of AI to the specific class or assignment.”

For higher education, this is increasingly becoming an assessment-design issue rather than simply a misconduct issue.

Why AI policy gaps now create the biggest institutional risk

The sharper concern emerging from the report is governance.

“This lack of clear rules puts educators in a tough spot, trying to maintain academic integrity while also helping students learn,” Belt says.

Turnitin reports that fewer than half of institutions currently have a clear AI policy, with Belt citing a figure of 39 per cent of universities.

That gap creates uneven enforcement and inconsistent student expectations across departments.

“It also causes confusion for everyone when different departments interpret the often unwritten rules in different ways.”

The wider sector is already seeing the consequences

Turnitin’s warning about inconsistent policy comes as the wider education sector is already dealing with the consequences of unclear rules.

A 2025 Guardian investigation found nearly 7,000 proven cases of university students cheating with AI tools in a single academic year in the UK, equivalent to 5.1 cases per 1,000 students and sharply up year on year.

That wider market context reinforces the article’s core point: AI misuse in universities is no longer emerging behaviour at the margins, but a mainstream integrity issue that institutions must now design around.

How widespread is AI use in assessed student work?

A 2026 survey found that 94 per cent of students use AI in their assessed work, according to Belt.

While that figure comes from Turnitin’s cited research base, it aligns with broader sector signals showing that AI-assisted study behaviour has become normalised across higher education, making clear the need for guidelines on responsible AI use.

The strategic response is increasingly shifting towards transparency in process, writing visibility and stronger assessment design rather than relying on binary detection scores alone.

What Turnitin’s report means for the future of assessment

The strongest takeaway from Turnitin’s first Learning Integrity Insights Report is that the debate around AI in education has matured.

The sector is moving away from blunt yes-or-no questions and towards something more operational: where AI adds value, where it undermines learning, and how universities can set boundaries that reflect real classroom needs.

“Faculties need consistent guidance and professional development to feel confident using AI responsibly in their classrooms,” Belt concludes.


Further reading on MoveTheNeedle.news: 

AI alone will not deliver productivity gains without learning, Pearson research finds

RWS says it translated one trillion words in a year: why that matters for AI translation — and where the limits still are