Some describe the assessment issue in higher education as apocalyptic, destroying the very fabric of higher education. This is exaggerated. This piece offers an alternative to turning learning and assessment into a toxic cat-and-mouse game, where there are many more mice, and the mice are winning. It is about better and more authentic assessment.
There is no silver bullet, as this is a multivariate problem involving student motivations, teaching, institutional practices and technology. Those who simply shout ‘bring back in-person exams’ are ignoring the causes and not offering adequate solutions. The solution involves several steps.
Step 1: Stop the blame
The first step is to admit the serious nature and scale of the problem but also accept that it is not the students’ fault. There is a bias in teaching, research and assessment towards the teacher and researcher. They are the means to an end, not the end in itself. The profession is all too ready to blame students and dismiss AI, when these problems were evident long before AI hit the scene. It is hopelessly utopian to expect learners not to use AI.
Rather than accuse, the solution has to involve not tempting students with the shortcuts that result from an accusatory environment and poor assessment. Redesign the system to allow more time for teaching and eliminate the temptation that may lead to a toxic environment of accusations, false positives and expulsions; a life-changing disaster for any young person.
It may also be a life-changing disaster for the faculty member or administrator who ends up making a false accusation. This has already happened with a Minister for Higher Education and leaders of major educational institutions, accused of plagiarism themselves, and removed.
Most students do not want to cheat. However, when the pressure is overwhelming, from the perception of peers (everyone is doing it, I’d be a fool not to) and parents, when teaching is not as good as it could be and assessments are poorly designed, students will take available shortcuts. Step 1 is to recognise that cheating is normal and that in high-stakes exams people will take high-stakes risks, so don’t blame the students.
We also need to cool down on the idea that using AI is simply cognitive surrender, destroying the learner’s ability to learn. There is a fundamental flaw in most debate about cognitive surrender to AI. The argument that we should keep learning difficult is very different from the idea of useful, deliberate difficulty. As I said earlier, dull lectures, poor teaching, obscure content and poor accessibility are a big problem that requires more focus on teaching.
Students use AI because they find it useful in learning, to find things out, expand on concepts, unravel things they find difficult, test themselves, produce flashcards for revision and practice. To ban AI would be to throw the baby out with the bathwater, and the bath.
Step 2: No silver bullet
Cheating has always been rampant in education. It was there before AI, with a range of techniques and technologies. I wrote about this in my book ‘Learning Technologies’. There was a whole section on cheating technology from Confucian silk cheat sheets to repurposed calculators, false arms and even surgical implants. Cheating has been an intrinsic feature of educational assessment. As long as there are exams, people will try to take shortcuts.
Let’s take a cold, hard look at in-person essay-based exams. For generations, smart students have looked at past papers, worked out the probability of topics appearing in their exam, pre-written essays, then memorised them for regurgitation in the exam. The assumption was that we were testing critical thinking. Yet no one who has ever written anything using critical thinking would claim that a piece of writing, written in pen from the top left to the bottom right of the page, without redrafting, reordering and rewriting, even approximated critical thinking. Critical thinking is an internal dialogue in which you think, reconsider, revise, seriously reorder and rewrite as you proceed. This is as far away from regurgitating essays in exams as you can get.
Even in formative essay assessments, students would readily beg, steal and borrow essays from each other, get help from their graduate parents or pay essay mills. These mills were huge enterprises, with tens of thousands employed in Nairobi, China and elsewhere, where the well-educated poor provided essays and dissertations for the rich. It was generally ignored by the system (no real moral outrage, as with AI), even though everyone knew it was endemic, especially among students studying in their second language. Why? It became a lucrative source of income, the real reason for sweeping the problem under the carpet. AI suddenly became one big essay mill, free or cheap, and everyone had access to its services. The revenues of the known cheat companies plummeted.
This is why the current emergency over assessment is really the surfacing of an old and existing problem. We can pretend it is all about AI, but AI has merely surfaced a deep, existing problem. It is not fundamentally an AI problem; it is a system and human behavioural problem. A lot of cheating is an artefact of existing teaching and assessment processes and design. It is the same problem that pushes parents to help students with their assignments, hire tutors and pay for exam prep.
Step 2 is to recognise that in-person exams may help, and are not to be scoffed at, but they are not the whole solution, not the silver bullet.
Step 3: Create a Hub
Policies are, at best, sticking plasters; at worst, they exacerbate the problem. They are certainly not a solution to a large and evolving problem, as they suck up collective effort, are often ignored, then just sit there unrevised and unloved until out of date in relation to the advances and uses of AI.
A policy is one thing, strategy another. Rather than rant and rail about academic integrity, or blame students, one must understand the problem, then come up with ‘workable’ solutions.
There is a lot of hand-wringing and ethical hubris centred around words like integrity, responsible, ethical, trust and so on. This form of abstraction comes easy to academics and administrators but it does not tackle the problems head-on. This is a practical problem that needs workable, pragmatic design solutions.
AI was the fastest adopted technology in the history of our species and has continued to get better, as it learns. The solution, therefore, needs to involve a process, not a single policy or event. That process needs to be owned and maintained by the institution or cluster of institutions, even nationally. At first, this needs to be a one-stop-shop for advice, tools and services on assessment. This can be part of a wider hub for the use of AI in general, by all in the institution: administrators, researchers, teachers and students. A technology that is globally universal, used by almost all students to learn, warrants this level of attention. Create that hub and keep it up to date.
Step 4: Multimodal assessment
In many subjects, if you depend on just writing as proof of learning, you do not have an AI problem, you have a learning design and assessment problem. When writing is treated as the sole proof of learning for everything, AI exposure isn’t the flaw; the flaw is in the assessment design.
In the real world, all jobs involve doing things, dealing with people, using tools and practical tasks. 80% of jobs in the world are deskless, and it is desk-based, text-centred work that AI is automating. If we do not want to narrow education to skills that focus solely on expression through text, other forms of teaching and assessment are necessary.
We are now on the other side of the Gutenberg Parenthesis, where more is available in multiple media formats, from which one can teach and learn. Teachers actually speak and listen; we now listen to audiobooks and podcasts, recorded lectures, videos and audio dialogue using AI.
Multimodal assessment is the optimisation of assessment by moving beyond text, now made possible, as AI has become truly multimodal. Models have integrated all media types and can ingest text, audio, images and video, as well as output these media from your text. This offers you the opportunity to free assessment from the tyranny of pure text.
One useful shift, in a world where listening and speaking to others is likely to be more useful than simply writing, is to record oral assessments and use AI to grade them. The argument against oral exams is that they take too much time, but transcription and automatic grading can speed up the process. You also eliminate the stress and problems of worrying about who used AI. Hartmann (2025) reoriented an upper-level humanities course around oral exams and tracked the time, showing that oral exams can verify student understanding directly and, importantly, that they may not take more instructor time than essays: “Instructor time investment proved comparable to traditional paper grading”, with oral exams taking 13 hours, compared to 15 for grading papers. The main point made by the paper was that oral assessment can be integrated back into your existing courses. They may even push your students to do the work, knowing that they will be properly assessed.
Video-based assessment of performance is also becoming possible, as AI can recognise what a person is doing in an uploaded video. This swings effort away from detecting AI towards designing assessments that assess student performance. All outputs from learners can be ingested and interpreted by AI: text, audio, images and video.
Digital portfolios are also an option for gathering evidence across the course. What is often missed is that they show the student’s thinking, the ability to recall the foundational knowledge that allows thought to flourish and build a case. They show skills not just in writing but in a fuller form of expression and doing. This is not to say it is right for all subjects and skills, but portfolios and oral exams should be part of the teaching and assessment toolkit.
A multimodal approach also offers a solution to other problems: accessibility and dyslexia. Dyslexics love AI, as it has for years offered a text-to-speech option. Others find that rewriting and summarising through AI translates the content into something they can understand more readily than often abstruse academic language. In other words, AI is often used by learners simply to ‘access’ and understand content. So widen out your assessment options and think beyond just text.
Step 5: Automate assessment
Automating assessment has become possible in many cases. This is not to say that all assessment should be automated, only ‘optimally’ automated. There can still be expert validation and quality checking. This has the additional advantage of freeing up busy teaching time for actual teaching.
The simple generation and marking of quizzes in formative assessment can clearly be automated. Students do this routinely. They instinctively know the ‘testing effect’ works, so they build their own quizzes and flashcards with spaced practice, using AI. Many now use specialist tools like NotebookLM and ChatGPT’s education features to help them improve the productivity of their own learning. Learning is also being integrated into tools like Google Translate, so that you can practise a language through role play and immersion. There is a strong argument for automating much formative assessment via platforms that give data back to teachers about individual student performance.
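The spaced-practice idea behind these student-built flashcard tools is simple enough to sketch in a few lines of code. This is a minimal, hypothetical illustration of a Leitner-style scheduler, not a description of any particular product: a correct answer promotes a card to a higher box that is reviewed less often, a wrong answer sends it back to box one for frequent review.

```python
# Minimal Leitner-style spaced-practice scheduler (illustrative sketch only).
# Cards live in numbered boxes; box 1 is reviewed every session, box n every
# 2**(n-1) sessions, so well-known cards come up less and less often.

from dataclasses import dataclass


@dataclass
class Card:
    question: str
    answer: str
    box: int = 1  # all cards start in box 1 (reviewed every session)


class LeitnerDeck:
    MAX_BOX = 5  # cap promotion so cards are never retired entirely

    def __init__(self, cards):
        self.cards = list(cards)

    def due(self, session: int):
        """Return the cards whose box interval divides the session number."""
        return [c for c in self.cards if session % (2 ** (c.box - 1)) == 0]

    def review(self, card: Card, correct: bool):
        """Promote on a correct answer, demote to box 1 on a mistake."""
        card.box = min(card.box + 1, self.MAX_BOX) if correct else 1


deck = LeitnerDeck([Card("Capital of France?", "Paris"),
                    Card("7 x 8?", "56")])

# Session 2: both cards are still in box 1, so both are due.
due = deck.due(session=2)
deck.review(due[0], correct=True)   # promoted to box 2, due every 2nd session
deck.review(due[1], correct=False)  # stays in box 1, due every session
```

The same scheduling logic underpins most flashcard systems; what AI adds on top is generating the question-answer pairs and judging free-text answers, neither of which this sketch attempts.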
Summative assessments need strong input from faculty, but the questions, rubrics and marking can often be automated by AI. Automating marking is the single most effective way to free up time for teaching, research and other activities. This also includes the marking of, and feedback on, essays.
We can also automate much more detailed feedback. As Dylan Wiliam has been saying for years, far too much assessment has no forward-looking pedagogy. It is seen by students as an end-point, when it should be feeding forward. AI can do this. Have experts in the loop by all means, but look at ways to automate the bulk of the work. This is a field that is advancing rapidly, as AI capability progresses.
Step 6: Test Centres
The Opposite of Cheating by Tricia Bertram Gallant presents a different set of perspectives. She and fellow author David Rettinger flip the argument and start not from the institutional but from the student perspective.
She argues, based on flipping the debate towards students’ needs, that testing should be the responsibility of separate and shared ‘Test Centres’. These would provide assessment expertise and the ability to design and manage the delivery of assessments. This removes the pressure on faculty, who, on the whole, do not have the necessary expertise in assessment or its delivery. These centres would look at automating as much as possible, while being careful about verification and standards. It is clear that as AI improves, and it is improving at a blistering pace, the automation of assessment and marking will become easier, better and cheaper.
Assessment is a rapidly evolving problem that needs this rapid and adaptive response. This is not a final solution as such, but a new approach to a growing problem that focuses expertise, while relieving the system of the unbearable pressure of producing and policing assessment. Teachers are not cops.
The question is whether this should be a single, clustered or national initiative. Huge savings would be possible if it were organised by the sector nationally. This is unlikely, as there is no real legal or political mechanism for such a strategic approach. Tertiary education institutions are not known for their sharing, so even clusters are unlikely. That does not invalidate the strength of the idea.
These test centres could be physical but, more sensibly, virtual. This would allow testing at any time, on any subject. It is bizarre that one can only get tested on one day of the year, with resits often not available for months, even a year later. Imagine if this were true of driving. A Test Centre should allow testing on demand.
Conclusion
Oddly, the AI pressure is forcing tertiary education to rethink and reassess its own assessment. This is long overdue. It is acting as a catalyst for reshaping the role of the teacher and learner in relation to technology, accepting that AI is here to stay. The alternative is to boil like the proverbial frog, fail to respond with anything other than a policy document, and constantly accuse students and/or institutions of failing to properly assess learners. This is the road to ruin and regret, not the road to success.
Bibliography
Hartmann, C. (2025). Oral exams for a generative AI world: Managing concerns and logistics for undergraduate humanities instruction. College Teaching.
