Saturday, March 21, 2026

New, free language learning tool in Google Translate


A new teaching and learning tool has appeared in Google Translate. When such a tool is embedded in an existing platform with over 1 billion users, across dozens of languages, it has an advantage in terms of visibility and potential.

It is easy to use, simple but powerful. It is also free! We will see a lot more of this: useful tools added to existing services, as that is where the audience lies. Those who use translation services are far more likely to want to learn a second language. There is also a secondary audience: teachers and learners who have already decided to learn a language. It is this marbling of learning into workflows that is interesting.

Here are the steps for using ‘Practice’ in Google Translate:

1. Open Google Translate and tap ‘Practice’

On the main Translate screen, tap the Practice button at the bottom. It is marked ‘Beta’ in the image.

2. Start the personalised practice setup

You then see a screen titled Personalized conversation practice. Tap ‘Get started’.

3. Choose your language level

Select your current level in the target language (in the example shown: Basic, Intermediate or Advanced), then tap Next.

4. Create a practice scenario

On the ‘Create your own practice scenario’ screen, choose the mode:

  • Listening
  • Roleplay

Then enter or select a scenario, such as asking for a vegetarian option, passing immigration, or other everyday situations.

5. Review the word list

Before starting, Google Translate shows a ‘Word list’ with useful vocabulary and audio pronunciation for key terms you may need in the scenario.

6. Begin the roleplay or listening practice

 The conversation screen opens with the scenario. You can:

  • listen to the prompt,
  • use translation/help icons,
  • tap Hints or Tasks,
  • press the ‘microphone’ button to speak and practise your response.

In effect, the flow is: tap Practice → get started → choose level → choose listening or roleplay and scenario → review vocabulary → start speaking/listening practice.


Monday, March 16, 2026

AI should be the Guide, not the Ghostwriter

Illusion of competence

In their 2006 research, Karpicke and Roediger used rereading as the control, as that is what most learners do (see Karpicke, Butler and Roediger, 2009). In a survey of 117 students, they asked them to list their study strategies, then also to choose from a list of set strategies. The majority chose ‘rereading’, with relatively few using self-testing or free recall. They christened this the ‘illusion of competence’.

Bjork, Karpicke and Roediger believed that both teaching and learning can be improved and optimised by introducing techniques that force cognitive effort. Interestingly, they believe that teachers and learners are often deluded about learning, assuming it happens with little effort other than attending lectures, rereading and highlighting text. This is why they, and others, recommend, among other techniques, generative learning. Bjork builds his recommendations on the idea of desirable difficulty.

Desirable difficulty

Generating words, knowledge and solutions is better than simply reading, highlighting text or getting AI to do it for you. Acts of personal generation provide the context for greater understanding and subsequent recall. This is why we set essays and assignments for learners. They need to learn by genuine effort, and this is achieved using ‘desirable difficulty’.

This is a short-term pain, long-term gain idea, where desirable difficulties are learning challenges that make the learner study harder in the short term to improve long-term retention and understanding. That doesn’t in itself tell us how this should be achieved, so let’s bring in the fundamental reason for making learners create essays and assignments: generative learning.

Generative activities

Wittrock developed a generative theory of learning, as well as researching its effectiveness and applying it in practice. Learners, for Wittrock, are not passive receivers of knowledge; they are active reorganisers of knowledge, creating meaning from their own generative activities. His generative learning theory was built on the idea of learners integrating new knowledge and skills into what they already know through generative activities, where effective teaching facilitates learners in constructing meaning from various generative experiences.

His model encourages learners to generate meaning and understanding from instruction through effortful, generative activities. The model has four major processes: 1) Attention, 2) Motivation, 3) Knowledge and preconceptions, 4) Generation.

Attention is the directing of generative processes at relevant incoming material and stored knowledge. This is what most learners use AI for: to get that initial attention on a position or starting argument. AI also helps with motivation, getting started and a willingness to really invest the time and effort to make sense of material. In particular, it helps to build that initial platform of knowledge and preconceptions. The problems come with generation, the sense-making. Delegate this to AI and learning may suffer.

Scaffolding

Bruner’s four principles addressed the issue of assisting learners as they move forward in their learning process, with some concrete recommendations: 1) Readiness, 2) Structure, 3) Sequence and 4) Generation. His point is that just throwing an essay title at learners is far from adequate. The learner must have a readiness, in terms of a predisposition to learn, and so their experiences and context must be considered. If you set them up without adequate support, they will treat it as a transactional demand and respond by taking shortcuts with AI. It is surely worth providing, or pointing towards, some sort of structure and sequencing, so that it can be grasped by the learner. But it is generation that brings in extrapolation, manipulation, a filling-in of the gaps and expansion beyond the learner’s existing knowledge.

Bruner saw the solution to these problems as scaffolding. He gave us the word ‘scaffolding’ in educational theory, and the recognition that learners need to be either self-aware or helped to build on existing knowledge is certainly useful, albeit a little hazy. The problem with this constructivist generalisation is that it immediately begs more detailed questions about what we mean by ‘structure’, ‘sequence’ and ‘scaffolding’. Here, I think, AI can be used to good effect.

Students should be encouraged to use AI to find, support, sequence and critique their work, not generate it from scratch. This is a vital distinction – the generation by AI of support, as opposed to solutions.

AI should be used as a dialogue between the learner and an imagined mentor. We need to accept that it will be used, because it is a useful mentor, not a generative tool in itself. It is a mistake to see contemporary AI as simply generating text. It has reasoning and can support, critique, identify gaps in reasoning and generally act as an expert mentor providing an external perspective.

What can this well-researched work tell us about the use of AI by learners in learning? I’ll describe this as instructions to the learner.

Using AI as a mentor, not a ghostwriter

Many learners are simply overwhelmed by the blank piece of paper when set an essay or assignment. They can’t find a way in, a solid place to start. It seems like a mountain to climb. AI can help alleviate that fear, by helping them get started, providing the right equipment, mentoring them forward, to reach the top.

There is a big difference between AI writing an assignment for you, where you are outsourcing the effort, and using AI to help you move forward, keep up momentum and critique your own effort as you go. Do not get discouraged because you feel you don’t know what to do next. Use AI as your guide on that route, pointing out which way you should go.

The trick is to preserve the desirable difficulty, the real learning gain in writing or doing an essay or assignment, rather than you floundering or taking shortcuts. Rather than using AI to write an essay or assignment, use it as a mentor to help you climb, one stage at a time. Imagine it is your teacher, tutor or lecturer, sitting next to you, gently guiding you forward. Let AI hold the ladder, not climb for you.

1. Just get started

Do NOT see AI as writing on your behalf, but as a tool to make your thinking more rigorous. Rather than asking AI to produce your essay or assignment submission from scratch, write a rough argument, an outline, a set of notes, a statement of the topic, a suggested reading list or sources, anything, just to get started. Do not worry if this first version is messy; see it as a very rough starting point. Generate this on your own to start with, or ask AI to generate several one-paragraph starting points, then use your judgement to decide on your starting position.

2. Build your first draft

Now use AI diagnostically. Instead of asking AI to produce a full essay, ask the system to suggest the best sources for researching or building a case, solution or argument. It is here you can get AI to give you pieces of the jigsaw that allow you to create a first draft. Write a first draft, in your own words. Keep it structured and simple. At this stage you will get some idea of structure and sequence. A page will suffice.

3. Diagnose weaknesses

Now use AI to identify weaknesses in your reasoning: weak logical jumps, unsupported claims, vague terms, hidden assumptions and places where an expert would say you can’t make that jump or take that direction up the rock face. AI is not helping you avoid the effort; it is helping you expose where the next stage of the work still needs to be done, a direction of travel. Use AI to critique, not compose.

4. Rebuild your argument

Once those weaknesses have been identified, start climbing again, rebuild your argument. This is a crucial stage, because the improvement has to come from your own judgement. Tighten your logic, clarify any vague concepts or ideas, remove exaggeration and add supporting evidence or explanations for what was identified as missing. The value of this process lies in the climbing, looking for secure hand and footholds.

5. Check against sources

Now check your argument against the actual readings, papers or evidence you are using. AI can be helpful here by highlighting where an argument oversimplifies an author, contradicts the source material or misses an important distinction. Used properly, you push yourself up on a solid route. Instead of just citing sources you have skimmed, you are forced into a more genuine understanding of what those sources actually say and how far their own claims can be supported.

6. Identify what’s missing

Now that you have established your route upwards, you can move into a deeper level of reflection and quality of climb. Again, ask what is still missing from your arguments, what counterarguments have not been addressed, what assumptions remain undefended and what a critic from another perspective might say. This is often the stage that turns a reasonable piece into a genuinely thoughtful one, because it forces you to see your own arguments from other, outside perspectives and recognise their limits. This allows you to push on towards the summit.

7. Test conclusion

Before conquering the peak, test your conclusion. It may be a false peak, the real peak being just beyond what you see. A conclusion often falls short, claiming less or more than your work really justifies. Ask whether the conclusion really does follow from the evidence, whether it overstates the case or whether there is another, bolder conclusion that can be justified. You can then make it more precise and defensible.

8. Write final version yourself

Only then should you write the final version. At every point on the climb, you will have understood what you were reading and writing. At this point, you produce the finished, polished work in your own words, built on stronger foundations than you would have managed alone. AI has not written the essay; it has served as a critic, a preserver of method and difficulty, and a guide pushing you forward to your peak.

Conclusion

The purpose of setting an essay or assignment is to make you think and learn, so use the tools that allow you, not to flounder and struggle, but to move forward with confidence, overcoming difficulties as you proceed, keeping the useful difficulty and preserving the learning. This gives you work that is your own, defensible and constructive.


Saturday, March 14, 2026

AI assessment apocalypse – a 6-step solution

Some describe the assessment issue in higher education as apocalyptic, destroying the very fabric of higher education. This is exaggerated. This piece offers an alternative to turning learning and assessment into a toxic cat-and-mouse game, where there are many more mice, and the mice are winning. It is about better and more authentic assessment.

There is no silver bullet, as this is a multivariate problem involving student motivations, teaching, institutional practices and technology. Those who simply shout ‘bring back in-person exams’ are ignoring the causes and not offering adequate solutions. The solution involves several steps.

Step 1: Stop the blame

The first step is to admit the serious nature and scale of the problem, but also to accept that it is not the students’ fault. There is a bias in teaching, research and assessment towards the teacher and researcher. Teachers and researchers are the means to an end, not the end in itself. The profession is all too ready to blame students and dismiss AI, when these problems were evident before AI hit the scene. It is hopelessly utopian to expect learners not to use AI.

Rather than accuse, the solution has to involve not tempting students with the shortcuts that result from an accusatory environment and poor assessment. Redesign the system to allow more time for teaching and eliminate the temptation that may lead to a toxic environment of accusations, false positives and expulsions: a life-changing disaster for any young person.

It may also be a life-changing disaster for the faculty member or administrator who ends up making a false accusation. This has already happened with a Minister for Higher Education and leaders of major educational institutions, accused of plagiarism themselves, and removed.

Most students do not want to cheat. However, when the pressure is overwhelming, from the perception of peers (everyone is doing it, I’d be a fool not to) and parents, when teaching is not as good as it could be and assessments are poorly designed, students will take available shortcuts. Step 1 is to recognise that cheating is normal and that in high-stakes exams people will take high-stakes risks, so don’t blame the students.

We also need to cool down on this idea that using AI is simply cognitive surrender, destroying the learner’s ability to learn. There is a fundamental flaw in most debate about cognitive surrender to AI. The argument that we should be keeping learning difficult is very different from the idea of useful, deliberate difficulty. As I said earlier, dull lectures, poor teaching, obscure content and poor accessibility are big problems that require more focus on teaching.

Students use AI because they find it useful in learning, to find things out, expand on concepts, unravel things they find difficult, test themselves, produce flashcards for revision and practice. To ban AI would be to throw the baby out with the bathwater, and the bath.

Step 2: No silver bullet

Cheating has always been rampant in education. It was there before AI, with a range of techniques and technologies. I wrote about this in my book ‘Learning Technologies’. There was a whole section on cheating technology from Confucian silk cheat sheets to repurposed calculators, false arms and even surgical implants. Cheating has been an intrinsic feature of educational assessment. As long as there are exams, people will try to take shortcuts.

Let’s take a cold, hard look at in-person, essay-based exams. For generations, smart students have looked at past papers, worked out the probability of topics appearing in their exam, pre-written essays, then memorised them for regurgitation in the exam. The assumption was that we were testing critical thinking. Yet no one who has ever written anything using critical thinking would claim that a piece of writing, written in pen from the top left to the bottom right of a page, without redrafting, reordering and rewriting, even approximated critical thinking. Critical thinking is an internal dialogue in which you think, reconsider, revise, seriously reorder and rewrite as you proceed. This is as far from regurgitating essays in exams as you can get.

Even in formative essay assessments, students would readily beg, steal and borrow essays from each other, get help from their graduate parents or pay essay mills. These mills were huge enterprises, with tens of thousands employed in Nairobi, China and elsewhere, where the well-educated poor provided essays and dissertations for the rich. It was generally ignored by the system (no real moral outrage, as with AI), even though everyone knew it was endemic, especially among students studying in their second language. Why? These students were a lucrative source of income, the real reason for sliding the problem under the carpet. AI suddenly became one big essay mill, free or cheap, and everyone had access to its services. The revenues of the known cheat companies plummeted.

This is why the current emergency over assessment is really the surfacing of an old and existing problem. We can pretend it is all about AI, but AI has merely surfaced a deep, existing problem. It is not fundamentally an AI problem; it is a system and human behavioural problem. A lot of cheating is an artefact of existing teaching and assessment processes and design. It is the same problem that pushes parents to help students with their assignments, hire tutors and pay for exam prep.

Step 2 is to recognise that in-person exams may help, and are not to be scoffed at, but they are not the whole solution, not the silver bullet.

Step 3: Create a Hub

Policies are, at best, sticking plasters; at worst, they exacerbate the problem. They are certainly not a solution to a large and evolving problem, as they suck up collective effort, are often ignored, then just sit there, unrevised and unloved, until out of date in relation to the advances and uses of AI.

A policy is one thing, strategy another. Rather than rant and rail about academic integrity, or blame students, one must understand the problem, then come up with ‘workable’ solutions.

There is a lot of hand-wringing and ethical hubris centred around words like integrity, responsibility, ethics, trust and so on. This form of abstraction comes easily to academics and administrators, but it does not tackle the problems head-on. This is a practical problem that needs workable, pragmatic design solutions.

AI was the fastest-adopted technology in the history of our species and has continued to get better, as it learns. The solution, therefore, needs to involve a process, not a single policy or event. That process needs to be owned and maintained by the institution or cluster of institutions, even nationally. At first, this needs to be a one-stop shop for advice, tools and services on assessment. This can be part of a wider hub for the use of AI in general, by all in the institution: administrators, researchers, teachers and students. A technology that is globally universal, used by almost all students to learn, warrants this level of attention. Create that hub and keep it up to date.

Step 4: Multimodal assessment

In many subjects, if you depend on just writing as proof of learning, you do not have an AI problem, you have a learning design and assessment problem. When writing is treated as the sole proof of learning for everything, AI exposure isn’t the flaw; the flaw is in the assessment design.

In the real world, all jobs involve the doing of things: dealing with people, using tools, practical tasks. 80% of jobs in the world are deskless, and those that are desk-based are being automated by AI. If you do want to move beyond skills that just focus on expression through text, then other forms of teaching and assessment are necessary.

We are now on the other side of the Gutenberg Parenthesis, where more is available in multiple media formats from which one can teach and learn. Teachers actually speak and listen; we now listen to audiobooks and podcasts, recorded lectures, videos and audio dialogue using AI.

Multimodal assessment is the optimisation of assessment by moving beyond text, now made possible, as AI has become truly multimodal. Models have integrated all media types and can ingest text, audio, images and video, as well as output these media from your text. This offers you the opportunity to free assessment from the tyranny of pure text.

One useful shift, in a world where listening and speaking to others is likely to be more useful than simply writing, is to record oral assessments and use AI to grade them. The argument against oral exams is that they take too much time, but transcription and automatic grading can speed up the process. You also eliminate the stress and problems of worrying about who used AI. Hartmann (2025) reoriented an upper-level humanities course around oral exams and tracked the time, showing that oral exams can verify student understanding directly and, importantly, that they may not take more instructor time than essays: “Instructor time investment proved comparable to traditional paper grading”, with oral exams taking 13 hours, compared to 15 for grading papers. The main point made by the paper was that oral assessment can be integrated back into your existing courses. It may even force your students to do the work, knowing that they will be properly assessed.

Video-based assessment of performance is also becoming possible, as AI can recognise what a person is doing in an uploaded video. This swings effort away from detecting AI towards designing assessments that assess student performance. All outputs from learners can be ingested and interpreted by AI: text, audio, images and video.

Digital portfolios are also an option for gathering evidence across the course. What is often missed is that a portfolio shows the student thinking: the ability to recall the foundational knowledge that allows thought to flourish and build a case. It shows skills, not just in writing, but in a fuller form of expression and doing. This is not to say it is right for all subjects and skills, but portfolios and oral exams should be part of the teaching and assessment toolkit.

A multimodal approach also offers a solution to other problems: accessibility and dyslexia. Dyslexic learners love AI, as it has for years offered a text-to-speech option. Others find that rewriting and summarising through AI translates the content into something they can understand more readily than often abstruse academic language. In other words, AI is often used by learners simply to ‘access’ and understand content. So widen your assessment options and think beyond just text.

Step 5: Automate assessment

Automating assessment has become possible in many cases. This is not to say that all assessment should be automated, only that it should be ‘optimally’ automated. There can still be expert validation and quality checking. This has the additional advantage of freeing up busy teachers’ time for actual teaching.

The simple generation and marking of quizzes in formative assessment can clearly be automated. Students do this routinely. They instinctively know the ‘test effect’ works, so build their own quizzes and flashcards with spaced practice, using AI. Many now use specialist tools like NotebookLM and ChatGPT’s education features to help them improve the productivity of their own learning. Learning is also being integrated into tools like Google Translate, so that you can practise a language through role play and immersion. There is a strong argument for automating much formative assessment via platforms that give data back to teachers about individual student performance.
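The spaced-practice mechanics behind such self-built quiz and flashcard routines are simple enough to sketch. Below is a minimal, illustrative Leitner-style scheduler, not the algorithm of any particular product: a correct answer promotes a card to a higher box with a longer review interval, a wrong answer sends it back to box 1.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# Review interval (in days) per Leitner box: box 1 daily, box 5 monthly.
INTERVALS = {1: 1, 2: 3, 3: 7, 4: 14, 5: 30}

@dataclass
class Card:
    prompt: str
    answer: str
    box: int = 1
    due: date = field(default_factory=date.today)

def review(card: Card, correct: bool, today: date) -> None:
    """Promote the card on success, demote to box 1 on failure,
    then reschedule it according to its new box."""
    card.box = min(card.box + 1, 5) if correct else 1
    card.due = today + timedelta(days=INTERVALS[card.box])

def due_cards(deck: list[Card], today: date) -> list[Card]:
    """Cards scheduled for review on or before today."""
    return [c for c in deck if c.due <= today]

# Example: one card answered correctly twice, then incorrectly.
card = Card("la cuenta", "the bill")
today = date(2026, 3, 21)
review(card, correct=True, today=today)   # box 2, next review in 3 days
review(card, correct=True, today=today)   # box 3, next review in 7 days
review(card, correct=False, today=today)  # back to box 1, due tomorrow
```

The same loop generalises: an AI-generated quiz simply fills the deck, and the scheduler decides what each student sees each day, data that could be fed back to teachers.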

Summative assessments need strong input from faculty, but the questions, rubrics and marking can often be automated by AI. Automating marking is the single most effective way to free up time for teaching, research and other activities. This includes the marking of, and feedback on, essays.
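To make the shape of automated, rubric-driven marking concrete, here is a deliberately naive sketch. A real system would use a language model to judge each rubric criterion; the keyword check below, with an invented rubric, merely stands in for that judgement, to show how criterion-level results can be turned into feed-forward feedback for the student.

```python
# Illustrative sketch of rubric-based automated marking. A real system
# would call a language model to judge each criterion; here a naive
# keyword check stands in for that judgement, purely to show the shape.

RUBRIC = {
    "defines key terms": ["retrieval", "testing"],
    "cites evidence": ["karpicke", "roediger"],
    "draws a conclusion": ["therefore", "conclude"],
}

def mark(essay: str, rubric: dict[str, list[str]]) -> dict[str, bool]:
    """Return, per criterion, whether the essay appears to meet it."""
    text = essay.lower()
    return {criterion: any(k in text for k in keywords)
            for criterion, keywords in rubric.items()}

def feedback(results: dict[str, bool]) -> list[str]:
    """Feed-forward comments for unmet criteria, for the student to act on."""
    return [f"Not yet evident: {c}. Revise and resubmit."
            for c, met in results.items() if not met]

essay = ("Karpicke showed retrieval practice beats rereading; "
         "therefore test yourself.")
results = mark(essay, RUBRIC)
comments = feedback(results)  # empty here, since every criterion is met
```

The design point is the separation: criterion-level judgements first, then feedback generated from what is missing, so the output feeds forward rather than acting as an end-point grade.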

We can also automate much more detailed feedback. As Dylan Wiliam has been saying for years, far too much assessment has no forward-looking pedagogy. It is seen by students as an end-point, when it should be feeding forward. AI can do this. Have experts in the loop by all means, but look at ways to automate the bulk of the work. This is a field that is advancing rapidly, as AI capability progresses.

Step 6: Test Centres

The Opposite of Cheating by Tricia Bertram Gallant presents a different set of perspectives. She and fellow author David Rettinger flip the argument and start, not from the institutional, but from the student perspective.

She argues, based on flipping the debate towards students’ needs, that testing should be the responsibility of separate and shared ‘Test Centres’. These would provide assessment expertise and the ability to design and manage the delivery of assessments. This removes the pressure on faculty, who, on the whole, do not have the necessary expertise in assessment or its delivery. These centres would look at automating as much as possible, while being careful about verification and standards. It is clear that as AI improves, and it is improving at a blistering pace, the automation of assessment and marking will become easier, better and cheaper.

Assessment is a rapidly evolving problem that needs this rapid and adaptive response. This is not a final solution as such, but a new approach to a growing problem that focuses expertise, while relieving the system of the unbearable pressure of producing and policing assessment. Teachers are not cops.

The question is whether this should be a single, clustered or national initiative. Huge savings would be possible if it were organised by the sector nationally. This is unlikely, as there is no real legal or political mechanism for such a strategic approach. Tertiary education institutions are not known for their sharing, so even clusters are unlikely. That does not invalidate the strength of the idea.

These test centres could be physical but, more sensibly, virtual. This would allow testing at any time, on any subject. It is bizarre that one can only be tested on one day of the year, with resits often not available for months, even a year later. Imagine if this were true of driving tests. A Test Centre should allow testing on demand.

Conclusion

Oddly, AI pressure is forcing tertiary education to rethink and reassess its own assessment. This is long overdue. It is acting as a catalyst for reshaping the role of the teacher and learner in relation to technology, accepting that AI is here to stay. The alternative is to boil like the proverbial frog, failing to respond with anything other than a policy document, constantly accusing students and/or institutions of failing to properly assess learners. This is the road to ruin and regret, not the road to success.

Bibliography

Hartmann, C. (2025). Oral exams for a generative AI world: Managing concerns and logistics for undergraduate humanities instruction. College Teaching.