Native Platform Ltd
© 2017-2025
All rights reserved
Industry: Higher Education
Expert: Joe Myers, Lecturer at UCL and former machine learning consultant for Quilgo
Topic: Online Assessments, AI Proctoring, and the Evolving Landscape of Academic Integrity
In an era when tools like ChatGPT can generate essays in seconds, and various cheating methods garner millions of views on TikTok, the very foundation of academic assessment is being called into question. What does it truly mean to test fairly? Can online exams ever be secured again?
To explore these topics, we spoke with Joe Myers, a mathematics lecturer at University College London’s School of Management. Joe has worked as a machine learning consultant with Quilgo, where he contributed to the development of proctoring tools for remote assessments. He operates at the intersection of education and machine learning, and his insights on academic integrity are both pressing and practical.
Quilgo: Joe, can you tell us a bit about yourself and your role at Quilgo?
Joe: I’m a lecturer at UCL’s School of Management, where I teach mathematics and data analytics. Over the past few years, I’ve witnessed the digital transformation of education - especially during the COVID-19 shift - and the new challenges that came with it.
At Quilgo, I focus on developing machine learning-based algorithms for online proctoring. We build tools that monitor student behaviour during tests - from camera activity to screen usage - to help institutions preserve academic integrity, especially in remote settings. With the rise of large language models (LLMs) like ChatGPT, which can generate incredibly human-like text, traditional assessments, such as essays, are now far more vulnerable to cheating. That’s why our work at Quilgo is more relevant than ever.
Quilgo: What are the most prominent challenges educators encounter with online testing today?
Joe: It ultimately comes down to a trade-off between flexibility and integrity. Online testing offers significant flexibility: it is scalable, accessible, and convenient. However, it also increases the potential for academic dishonesty. This trade-off is what prevents many major institutions from fully adopting digital assessments.
Quilgo: So, how do you see the future of assessments evolving?
Joe: The essay is dying. I believe we’re seeing a shift back toward time-constrained, high-intensity assessments. Essays are now too vulnerable to LLMs. The more scalable, fair, and data-driven approach is to test comprehension under real-time conditions - something tools like Quilgo enable.
Quilgo: What does a "great" digital testing experience look like to you?
Joe: For instructors, a great platform delivers actionable insight: precise data on where students struggle, particularly in formative tests. It should also automate grading, streamline test delivery, and minimise errors.
For students, a great experience comes down to clarity and smooth progression: uninterrupted testing, clear instructions and, ideally, constructive feedback.
Quilgo: In your view, what role does AI play in protecting test integrity?
Joe: A huge one. We’re already seeing AI vs. AI: students using AI to cheat and educators using AI to catch it.
Take plagiarism. With the ease of generating essays using LLMs, plagiarism is evolving rapidly. But AI can fight back - by detecting machine-generated content and scanning proctoring footage for unusual patterns. Imagine a system that flags suspicious behaviour in real time, allowing a human proctor to focus only on red flags. That’s the direction we’re going.
Quilgo: What do educators still misunderstand about AI in assessments?
Joe: The biggest risk is misuse: handing assessment over to AI entirely, in a bureaucratic, box-ticking way. Healthy use looks different - AI powers the assessment while the assessor focuses on identifying each student's key weaknesses and providing tailored feedback.
Human oversight is also crucial when determining whether cheating has actually occurred. With a human in the loop, AI can multiply productivity in the assessment process by a factor of 10 or even 100.
Quilgo: Do you use Quilgo in your teaching?
Joe: Yes, I do, particularly for midterm quizzes in my math modules. It has enabled me to intervene early with students who are struggling. Understanding who is underperforming and why has completely transformed my approach to tutoring and providing feedback.
Quilgo: Have you noticed any changes in student behaviour or performance since transitioning to online proctoring?
Joe: Absolutely. Digital proctoring provides insights that traditional testing cannot. You can observe engagement levels, behavioural trends, and even micro-patterns in how students interact with the test. This type of data not only helps shape future assessments but also enables more targeted feedback. It's like having a window into each student's experience, both on an individual level and across the entire cohort.
Joe believes that assessments are undergoing a significant transformation. While flexibility is now a permanent feature, maintaining integrity will require effort. Platforms like Quilgo, which integrate AI, behavioural insights, and human oversight, can give educators the tools they need to build a secure, scalable, and fair assessment model for the future.
If Joe's prediction comes true, we might eventually consider traditional essays to be outdated relics of a pre-AI era.
Read our feature: The Rise of AI Cheating Tools and How Quilgo Stays Ahead →
Ready to modernize your assessments?