Some describe the assessment issue in higher education as apocalyptic, destroying the very fabric of higher education. This is exaggerated. This piece offers an alternative: not a toxic cat-and-mouse game of learning and assessment, where there are many more mice than cats and the mice are winning, but a more human and demonstrable expression of what the learner has learnt. It is about better and more authentic assessment.
There is no silver bullet, as this is a multivariate problem involving student motivations, teaching, institutional practices and technology. Those who simply shout ‘bring back in-person exams’ are ignoring the causes and not offering a wholly adequate solution, although exams are part of it. The solution involves several steps.
Step 1: Stop the blame
The first step is to admit to the problem but also accept that it is not the students’ fault. There is a bias in teaching, research and assessment towards the teacher and researcher, when they are the means to an end, not the end in itself. The profession is all too ready to blame students and dismiss AI, when these problems were evident before AI hit the scene. It is hopelessly utopian to expect learners not to use AI.
Rather than accuse, the solution may be not to tempt them with shortcuts through poor teaching, an accusatory environment and poor assessment. Redesign the system to allow more time for teaching and eliminate the temptation that may lead to a toxic environment of accusations, false positives and expulsions; a life-changing disaster for any student.
It may also be a life-changing disaster for the faculty member or administrator who ends up making a false accusation. This has already happened with a Minister for Higher Education and the leaders of major educational institutions.
Most students do not want to cheat. However, when the pressure from peers (everyone is doing it, I’d be a fool not to) and parents is overwhelming, the teaching perhaps not as good as it should be and the assessments poorly designed, students take any shortcuts available. Step 1 is to recognise that cheating is normal and that in high-stakes exams people will take high-stakes risks, so don’t blame the students.
There is a fundamental flaw in most debate about cognitive surrender to AI. The argument that we should keep learning difficult is very different from the idea of useful, deliberate difficulty. The former, dull lectures, poor teaching and content that is obscure or inaccessible to learners, is far more common than the latter.
In truth students use AI because they find it useful in learning, to find things out, expand on concepts, unravel things they find difficult, test themselves, produce flashcards for revision and practice. To ban AI would be to throw the baby out with the bathwater and the bath.
Step 2: Accept that in-person exams are not the silver bullet
Cheating has always been rampant in education. It was there before AI, with a range of techniques and technologies. I wrote about this in my book ‘Learning Technologies’, which had a whole section on such technology: from Confucian-era silk cheat sheets to repurposed calculators, false arms and even surgical implants, cheating has been an intrinsic feature of educational assessment. As long as there are exams, people will try to take shortcuts.
Let’s take a good, hard look at in-person essay-based exams. For generations, smart students have looked at past papers, worked out the probability of topics appearing in their exam, pre-written essays, then memorised them for regurgitation in the exam. The assumption was that we were testing critical thinking. Yet no one who has ever written anything using critical thinking would claim that a piece of writing, written in pen from the top left to the bottom right of the page, without redrafting, reordering and rewriting, even approximated critical thinking. Critical thinking is an internal dialogue in which you think, reconsider, revise, reorder and rewrite as you proceed. This is as far away from regurgitating essays in exams as you can get.
Even in normal formative essay assessments, students would readily beg, steal and borrow essays from each other, get help from their graduate parents or pay essay mills. These mills were huge enterprises, with tens of thousands employed in Nairobi, China and elsewhere, where the well-educated poor provided essays and dissertations for the rich. It was generally ignored by the system (no real moral outrage, as with AI) even though everyone knew it was endemic, especially among students studying in their second language. Why? Those students had become a lucrative source of income, the real reason for sweeping the problem under the carpet. AI suddenly became one big essay mill, free or cheap, and everyone had access to its services. The revenues of the known cheat companies plummeted.
This is why the current emergency over assessment is really the surfacing of an old and existing problem. We can pretend it is all about AI, but AI has merely surfaced a deep, existing problem. It is not fundamentally an AI problem; it is a system and human problem. A lot of cheating is an artefact of existing teaching and assessment processes and design. It is the same problem that pushes parents to help students with their assignments, hire tutors and pay for exam prep.
Step 2 is to recognise that in-person exams may help, and are not to be scoffed at, but they are not the solution.
Step 3: Create a Hub
Policies are, at best, sticking plasters; at worst they exacerbate the problem. They are certainly not a solution to a large and evolving problem. They suck up a huge amount of collective effort, are often ignored, and then just sit there unrevised, unloved and out of date in relation to the advances and uses of AI.
A policy is one thing, strategy another. Rather than rant and rail around academic integrity, or blaming students, one must take a long, cool look to understand the problem, then come up with ‘workable’ solutions.
There is a lot of anger and ethical hubris centred around words like integrity, responsibility, ethics and trust. This form of abstraction comes easily to academics and administrators, but it does not tackle the problems head-on. This is a practical problem that needs practical, design solutions.
AI was the fastest-adopted technology in the history of our species and has continued to improve at an astonishing rate. It is continuously trained, updated and learning. The solution, therefore, needs to involve a process, not a single policy or event. That process needs to be owned and maintained by the institution or a cluster of institutions. At first, this needs to be a one-stop shop for advice, tools and services on assessment. It can be part of a wider hub for the use of AI in general by everyone in the institution: administrators, researchers, teachers and students. A technology that is globally universal, used by almost all students to learn, warrants this level of attention.
Step 4: Multimodal assessment
Multimodal assessment is the optimisation of assessment by moving beyond text. This is now possible, as AI has become truly multimodal. Models have integrated all media types and can ingest text, audio, images and video, as well as output these media from your text, offering the opportunity for assessment to free itself from the tyranny of text.
In many subjects, if you depend on writing alone as proof of learning, you do not have an AI problem, you have a learning design and assessment problem. When writing is treated as the sole proof of learning for everything, AI exposure isn’t the flaw; the flaw is in the assessment design.
In the real world, all jobs involve doing things: dealing with people, using tools, practical tasks. 80% of jobs in the world are deskless, and many of those that are not are being automated by AI. If assessment is not to focus solely on expression through text, other forms of teaching and assessment are necessary.
We are also on the other side of the Gutenberg Parenthesis, where more is available in multiple media formats, from which one can learn; teachers actually speak and listen, we now listen to audiobooks and podcasts, recorded lectures, videos and audio dialogue using AI. Oranges are not the only fruit.
One useful shift, in a world where listening and speaking to others is likely to be more useful than simply writing, is to record oral assessments and use AI to grade them. The main argument against oral exams is that they take too much time, but transcription and automatic grading can speed up the process. You also eliminate the stress of worrying about who used AI. Hartmann (2025) reoriented an upper-level humanities course around oral exams and tracked the time, showing that oral exams can verify student understanding directly and, importantly, that they may not take more instructor time than essays: “Instructor time investment proved comparable to traditional paper grading”, with oral exams taking 13 hours compared to 15 for grading papers. The main point of the paper was that oral assessment can be integrated back into existing courses. It may even push students to do the work, since they know they will be assessed directly.
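To make the transcribe-then-grade idea concrete, here is a minimal sketch of the grading half. It assumes a transcript has already been produced by a speech-to-text service upstream; the `rubric_score` function, the rubric concepts and the weights are all hypothetical illustrations, and any real deployment would need an expert-validated rubric and human moderation of borderline scores.

```python
# Illustrative sketch: score a transcribed oral answer against a simple
# weighted keyword rubric. Names, rubric items and weights are hypothetical.

def rubric_score(transcript: str, rubric: dict) -> float:
    """Return the weighted fraction of rubric concepts the answer mentions."""
    text = transcript.lower()
    total = sum(rubric.values())
    covered = sum(w for concept, w in rubric.items() if concept.lower() in text)
    return covered / total if total else 0.0

# Hypothetical rubric: core concept weighted more heavily than supporting ones.
rubric = {
    "photosynthesis": 2.0,
    "chlorophyll": 1.0,
    "carbon dioxide": 1.0,
}
answer = "Plants use photosynthesis, driven by chlorophyll, to fix carbon."
print(round(rubric_score(answer, rubric), 2))  # "carbon dioxide" not matched -> 0.75
```

Keyword matching is deliberately crude; the point is only that once the answer is text, scoring becomes a cheap, auditable computation that an examiner can spot-check rather than perform from scratch.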
Video based assessment of performance is also becoming possible as AI can recognise what the person is doing in an uploaded video. This swings effort away from detecting AI, towards designing assessments that assess student performance. All outputs from learners can be ingested and interpreted by AI; text, audio, images and video.
Digital portfolios are also an option for gathering evidence across the course. What is often missed is that they show the student thinking; the ability to recall the foundational knowledge that allows thought to flourish, and to build a case, can be demonstrated orally. They show skills not just in writing but in a fuller form of expression and doing. This is not to say portfolios are right for all subjects and skills, but they should be part of the teaching and assessment toolkit.
A multimodal approach also offers a solution to other problems: accessibility and dyslexia. Dyslexics love AI, as it has for years offered a text-to-speech option. Others find that rewriting and summarising through AI translates the content into something they can understand more readily than often abstruse academic language. In other words, AI is often used by learners simply to ‘access’ and understand content.
Step 5: Automate assessment
Automating assessment has become possible in many cases. This is not to say that all assessment should be automated, only ‘optimally’ automated. There can still be expert validation and quality checking. It has the additional advantage of freeing up busy teaching time for actual teaching.
The simple generation and marking of quizzes in formative assessment can clearly be automated. Students do this routinely. They instinctively know that the ‘testing effect’ works, so they build their own quizzes and flashcards with spaced practice, using AI. Many now use specialist tools like NotebookLM and ChatGPT’s education features to improve the productivity of their own learning. Learning is even being integrated into tools like Google Translate, so that you can practise a language through role play and immersion. There is a strong argument for automating much formative assessment via platforms that give data back to teachers about individual student performance.
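The spaced-practice scheduling behind such flashcard tools is simple enough to sketch. Below is a minimal Leitner-style scheduler; the box count and review intervals are illustrative assumptions, not taken from any specific product mentioned above.

```python
# Minimal sketch of Leitner-style spaced practice: correct answers promote a
# card to a box with a longer review interval, misses demote it to box 1.
# The three boxes and the 1/3/7-day intervals are illustrative assumptions.

INTERVALS = {1: 1, 2: 3, 3: 7}  # box number -> days until next review

def review(card: dict, correct: bool) -> dict:
    """Return an updated card after one review attempt."""
    box = min(card["box"] + 1, 3) if correct else 1
    return {**card, "box": box, "due_in_days": INTERVALS[box]}

card = {"front": "What is the testing effect?", "box": 1}
card = review(card, correct=True)  # promoted to box 2, due again in 3 days
print(card["box"], card["due_in_days"])  # 2 3
```

The pedagogical point is that the scheduling logic is trivial for a platform to run at scale; the hard, human part is writing good questions, which is exactly where faculty time should go.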
Summative assessments have to have strong input by faculty but the questions, rubrics and marking can often be automated by AI. Automating marking is the single most effective way to free up time for teaching, research and other activities. This also includes the marking and feedback of essays.
Far too much assessment has no forward-looking pedagogy. It is seen by students as an end-point, when it should be feeding forward. AI can do this. Have experts in the loop by all means, but look at ways in which the bulk of the work can be automated. This is a field that is advancing rapidly as AI capability progresses. There is even evidence that simply adding memory to AI services increases personalisation and improves feedback.
Step 6: Test Centres
The Opposite of Cheating by Tricia Bertram Gallant presents a different set of perspectives. She and fellow author David Rettinger flip the argument and start not from the institutional but from the student perspective.
She argues, based on flipping the debate towards students’ needs, that testing should be the responsibility of separate and shared ‘Test Centres’. These would provide assessment expertise and the ability to design and manage the delivery of assessments. This removes the pressure on faculty, who, on the whole, do not have the necessary expertise on assessment or its delivery. They would also look at automating as much as possible, while being careful about verification and standards. It is clear that as AI improves, and it is at a blistering pace, so the automation of assessment and marking will become easier, better and cheaper.
Assessment is a rapidly evolving problem that needs this rapid and adaptive response. This is not a final solution as such, but a new approach to a growing problem that focuses expertise, while relieving the system of the unbearable pressure of policing assessment. Teachers are not cops.
The question is whether this should be a single, clustered or national initiative. Huge savings would be possible if it were organised by the sector nationally, but this is unlikely, as there are no real legal or political mechanisms for such a strategic approach. Tertiary education institutions are not known for their sharing; they are more like small city states, at war with the others, and tend to do things for their own benefit.
These test centres could be physical but, more sensibly, virtual. This would allow testing at any time. It is odd that one can be tested only on one day of the year; imagine if this were true of driving tests. A Test Centre would allow testing on demand.
Conclusion
Oddly, this issue is forcing tertiary education to rethink and reassess its own assessment. This is long overdue. It is acting as a catalyst for rethinking the roles of teacher and learner in relation to technology, accepting that AI is here to stay. The alternative is to boil like the proverbial frog, failing to respond with anything other than a policy document while constantly accusing students and/or institutions of failing to properly assess learners. That is the road to ruin and regret, not the road to success.
Bibliography
Hartmann, C. (2025). Oral exams for a generative AI world: Managing concerns and logistics for undergraduate humanities instruction. College Teaching.
