Monday, November 04, 2024

5 big surprises in Wharton report on GenAI

Enjoyed this report as it was so goddamn honest, contradicting everything the 'consultants' and 'ethics' folk were recommending for the last year and more!

1. Gen AI Strategy is led internally—NOT by consultants

This bucks the trend. The strategy work is not led by the usual suspects or suspect advisors, many of whom have no real experience of building anything in AI. The bandwagon got bogged down by the sheer weight of hucksters. This technology gives so much agency to individuals within organisations, from anyone producing text of any kind to coders, that the sisters and brothers are doing it for themselves. You wonder whether consultancy itself is under real threat from AI.

2. NO stringent policies on use in organisations
Interesting. It seems like a contradiction – a massive rise in use but little sign of policies. I suspect that people have seen through the platitudes so often found in these documents, statements of the blindingly obvious that over-egg and exaggerate the ethical dangers.

3. Most employees do NOT face heavy restrictions in accessing Gen AI at their companies
The scepticism, regulatory effort and fear-mongering, even the doomsters, of last year seem to have given way to a more level-headed recognition that this is a technology with real promise, so let's allow folk to use it, as we know they already do! My guess is that this AI-on-the-SLY stuff happened so quickly that organisations just couldn't respond and didn't know how. Like a lot of tech, it just happens.

4. Companies are adapting by expanding teams and adding Chief AI Officer (CAIO) roles
I wasn’t sure about this, as the first I heard of it was in this report! I suspect this is a US thing, or exaggerated in the sense of just having someone in the organisation who has emerged as the knowledgeable project manager. I can see it happening though.

5. LESS negativity and scepticism
More decision-makers feel ‘pleased’, ‘excited’, and ‘optimistic’, and less ‘amazed’, ‘curious’ and ‘sceptical’. Negative perceptions are softening, as decision-makers see more promise in Gen AI's ability to enhance jobs without replacing employees. This makes sense. A neat study showed that the scepticism tended to come from those who hadn’t used GenAI in anger. Now that adoption has surged, nearly doubling across functional areas in one year, the practical experimentation has shifted sentiment.


We seem to be going through the usual motions of a technological shift, where a period of fierce fear and resistance gives way to the acceptance that it is all right really. The nay-sayers needed to get it out of their systems before use surged and realism prevailed.

https://ai.wharton.upenn.edu/focus-areas/human-technology-interaction/2024-ai-adoption-report/



Wednesday, October 30, 2024

Should L&D be renamed the ‘PERFORMANCE Department’?


L&D is a wagon without a horse, with no pulling power or sense of direction. Could a change of focus provide the direction, momentum and horsepower that has been lacking?

Could we rename Learning & Development as simply ‘Performance’, like the Marketing, Legal and Finance departments? ‘Learning’ puts too much focus on courses. ‘Development’ is too vague. We must link what we do to measurable outcomes: proven performance, productivity and progress.


‘Performance’ puts the individual into the flow of the organisation. We need to foster a work environment where employees feel valued, motivated and invested in – through their performance, not through a diet of ‘courses’.


A wider perspective, seeing learning as performance, would link what we do to what needs to be done. The organisation would see results, and more sophisticated learning and work solutions related to strategy would emerge.


Learning and performance go hand in glove. Linking them allows us to rise above mere course delivery into more considered informal and incremental learning, as well as more performance support, linking inputs with outputs.


Focusing on performance can be challenging and needs a clear causal link between learning initiatives and productivity gains but it can be done.


Of course, we don’t just want ‘faster horses’, as Henry Ford supposedly said; we need new forms of locomotion. The technologies of the age are digital and AI. Employees can be more autonomous and self-driven through bottom-up agency.


AI has shown there is an immense thirst among employees for going ahead and doing things faster and better. We use this technology to both learn and get things done, seeing no difference between the two. Learning and doing things faster and better become fused.


When writing something with evidence and import, many use AI to do fast research, write concisely, even critique the proposition. They see AI as a means to an end, the end being solid output.


AI gives us agency to improve ourselves with guidance from the organisation. It fuses learning and productivity; it is easy to use, fast and motivating. It should be in our domain, our responsibility.


It would also force us to be more data savvy. Forget happy sheets and bums on seats (they measure the wrong end of the learner); let’s get serious with actual evaluative data. AI gives us a data scientist in our pockets – let’s use it.


Of course, AI is just one catalyst for action; there are many others, such as performance support, the wider world of career development, and developing strategic skills for future needs and challenges.


The name matters less than the direction of travel. Strategic alignment with organisational goals should be our new aim. It puts an end to those endless discussions about the future of L&D by asking ‘to what problem are we a solution?’ We all know it needs to be more performance focused, more relevant, more integrated with the organisation.

Tuesday, October 29, 2024

Apple Intelligence has finally dropped from the tree! A primer...

Big on privacy, it is also big on the personal experience. That’s what Apple is all about – the user experience. This won’t blow your mind, it will make life easier for your mind. What they’ve done is inject intelligence into your personal workflow with a souped-up Siri, lots of writing support (rewriting, proofreading and summarising), summarisation of almost everything, search across everything, and the creation of photo/video stories.

Writing

With Rewrite, you choose from different versions of what you write and can:

Adjust tone (professional, concise, or friendly)

Proofread (checks grammar, word choice, and sentence structure)

Suggest edits (with explanations)

Summarise (digestible paragraph, bullets, table, list)

Any learning task that requires writing will benefit from these features.

Siri

Siri becomes far better and more integrated with the whole user experience:

glowing light wraps around edge of screen

place it anywhere on desktop to access as you work. 

type to Siri at any time on iPhone, iPad, and Mac

switch fluidly between text and voice

get help

This is the assistant you want when learning. 

Photos App

search for just about anything 

also across videos

go right to segment of video

Memories

The Memories feature gives users the ability to:

create movies they want to see by typing a description

pick out photos and videos based on that description

craft a storyline with chapters based on themes identified in the photos

arrange them into a movie with its own narrative arc

This is neat as it allows learning summaries and experiences to be quickly created.

Email & notifications:

shows the most urgent emails,

see summaries without opening

summarise long threads

smart reply suggests quick responses


The Notes and Phone apps allow you to record and transcribe audio, then summarise the transcript.

Users can choose whether or not to enable the ChatGPT integration

Further integration with ChatGPT is also coming.

So how does it shape up?

Google and Samsung already have AI features, such as translating conversations in real time, automatically organising notes, and searching for something by drawing a circle around it. Apple has at last joined the race.


Monday, October 28, 2024

Holy Trinity of AI, Fusion and Quantum computing could create new future

In combination, the individual paradigms of AI, fusion and quantum could change the trajectory of history. The timescales are now more certain, as milestones are being met. This has released more capital for research, investment and actual builds. It now seems possible that in the 2030s-40s this holy trinity could save us, as well as give unimaginable leaps in productivity.

Multipliers

There are several multipliers with technology that are often missed when we see technology as the linear progress of singular technologies: writing, printing, steam, oil, electricity, radio, TV, the personal computer, the internet, the smartphone and now AI. They are often presented as a series on a graph. This linear thinking misses combinatorial technology.

Technology begets technology

The iPhone was truly combinatorial, as was the printing press. Gutenberg did not ‘invent’ printing. He brought together several different technologies that had crossed usable thresholds. Movable type, which came from China and Korea, relied on metal alloy development using lead, tin and antimony, along with new casting techniques and precision moulds. The press was adapted from wine/olive presses, with a screw mechanism for applying even pressure. Then there was paper technology, brought from China, using European paper mills that had developed manufacturing processes based on linen-based paper production, far more economical than parchment. And don’t forget the new oil-based ink, far more durable than water-based inks, with better adhesion to metal type, that lasted longer on paper. Finally, there was the need for socially agreed standardisation of type: letters, punctuation, capitalisation, agreed type sizes and consistent letter heights. It was the combination of all of these elements, some taking centuries to develop, others quick spurts, coming together in a systematic way to allow mass production with efficient workflows, that created printing in the 15th and 16th centuries. This combinatorial technology had a profound effect on our culture, politics and the trajectory of history, as well as unleashing our ability to create new technologies.

Will AI hit walls of energy & compute?

Technologies can also combine to make exponential progress – with much bigger and faster leaps. Some technologies, like microprocessors, have intrinsic exponential progress (Moore’s Law), getting better and faster. There are all sorts of laws that define exponential growth across networks, such as Metcalfe’s Law. But one thing that halts growth is the sudden collapse of a technology as it hits a major constraint.

With AI continuing its exponential ascent, many see it hitting the brick walls of energy and/or compute. Yet a push on one technology often results in the pull of another. We see Google, Amazon and others investing in fusion. Google and others have also invested strongly in quantum computing. This is the holy trinity of contemporary tech – AI, Fusion and Quantum.

Fusion

Emission-free energy is a constraint on AI. A stepping stone to fusion, practically if not technologically, is the SMR (small modular reactor). That solves the emissions problem. It does not necessarily solve the energy demand problem.

Fusion comes not from splitting the atom but from fusing atoms. Push hydrogen atoms together to form helium and huge amounts of energy are released. Target gain was achieved recently (more energy produced than was required to run the experiment), and the field now has the brains and money to make fusion happen.

Major tech companies have invested in fusion, which promises unlimited energy at lower cost. Microsoft has a major investment in Commonwealth Fusion Systems and has partnered with ITER for computing support. They are keen on providing computing support for start-ups and have several hundred million invested. Google invested in TAE Technologies, which uses AI for plasma control research and computational support for fusion modelling. Google also has a partnership with the Princeton Plasma Physics Laboratory. Bezos Expeditions invested in General Fusion, and AWS is providing computing resources to fusion projects. Amazon itself has made multiple rounds of funding into Commonwealth Fusion. Eric Lander, who led the Human Genome Project, holds a MacArthur ‘genius’ grant and headed the US Office of Science and Technology Policy under Biden; recent breakthroughs have led him to become CEO of Pacific Fusion.

Quantum leap

Conventional computing is a constraint on AI. Quantum computing promises to solve in minutes problems that would take conventional machines thousands of years. It changes the whole idea of what is computable. Once again, like DeepMind’s AlphaFold saving centuries of research with one piece of AI software, we have a technology (quantum computing) that could accelerate another technology’s (AI) capabilities. This promises enormous productivity gains in research and problem solving.

AI could be accelerated with quantum neural networks, better pattern recognition and much faster solving of optimisation problems. Climate could be better modelled with weather prediction, climate change simulations, atmospheric modelling and ocean current analysis. Our energy production and grid management could also be far better handled, with power flow optimisation, load balancing, grid stability analysis and better renewable energy integration. This would reduce the energy load of global AI use.

Leaps in productivity include cryptography and security, keeping personal and other data safe through quantum key distribution; drug discovery and materials science, through simulating molecular interactions and designing new materials with specific properties, for example in battery development; and supply chain and logistics, with route optimisation across delivery networks, warehouse optimisation and fleet management.

It can also help solve several critical challenges in both fusion and nuclear reactor design: in fusion, plasma physics simulation and materials science; small modular reactors (SMRs) may also benefit from modelling and materials science. AI solves problems, fusion fuels AI. It is a virtuous circle.

Quantum computers still have limited qubits and high error rates that require error correction, but they exist, and AI companies like Google have significant investments in the field. Incredibly difficult to build, they require super-cooling to temperatures close to absolute zero and are sensitive to environmental interference. This raises the problem of cost and access to quantum hardware. But progress has been made.




Conclusion

In combination, the individual paradigms of AI, fusion and quantum will change the trajectory of history. The timescales are becoming more certain as milestones are met. This releases more capital for research and investment. It now seems possible that in the 2030s and 2040s this holy trinity could save us, as well as give unimaginable leaps in productivity.

Note that AI is here. That's a given. Fusion is a known process (the Sun is one giant fusion reactor, burning about 4.5 million tons of mass into energy every second) and quantum computing (known physics) has been built. This is not blue-sky research on theory but a set of design and engineering problems. Those can be solved. Time will tell.



Sunday, October 27, 2024

Solid paper on personalised learning

A solid paper on the advantages of self-paced learning. The promise of AI is to deliver personalised learning, sensitive to the learner’s acquisition rate (AR).

The acquisition rate in learning refers to the speed or efficiency with which a learner acquires new knowledge, skills, or behaviors. It is a measure of how quickly a person can grasp new information or master new concepts. This rate can vary significantly depending on factors such as the complexity of the material, the learner’s prior knowledge, motivation, cognitive abilities, and the methods of instruction being used.

STUDY

tested different ways of teaching multiplication tables (6s, 7s, and 8s) to 3rd & 4th graders

split into three groups: 

Group 1: learned 2 facts at a time

Group 2: learned 8 facts at a time

Group 3: learned facts based on their personal "acquisition rate" (how much they could handle)

RESULTS

Key Findings (from the table and text):

Learning time varied a lot: 

2 facts group: 3 minutes

Personal pace group: 7 minutes

8 facts group: 14 minutes


What worked best:

Personal pace method (acquisition rate) worked best – students remembered 76% of facts when taught at their personal pace

Teaching 2 facts at a time was middle ground (47% remembered)

Teaching 8 facts at once was least effective (only 39% remembered)

One size does not fit all when it comes to learning multiplication facts. The average student could handle learning about 4 new facts at a time. Tailoring the number of facts to each student's learning capacity was most effective. Personalised, self-paced learning works.
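Out of interest, this pacing rule is easy to operationalise. Here is a minimal sketch in Python of a rule-based tutor that adapts the dose of new facts to each learner's measured recall; the starting set size of 4 reflects the average reported above, while the thresholds and bounds are purely illustrative assumptions:

```python
# Minimal sketch: adapting the number of new facts per session to a learner's
# acquisition rate. Starting size of 4 reflects the study's reported average;
# the 0.9/0.7 thresholds and the 1-8 bounds are illustrative assumptions.
def next_set_size(current_size: int, recall_accuracy: float) -> int:
    """Grow or shrink the dose of new material based on last session's recall."""
    if recall_accuracy >= 0.9:           # handled easily: add one more fact
        return min(current_size + 1, 8)
    if recall_accuracy < 0.7:            # overloaded: pull back
        return max(current_size - 1, 1)
    return current_size                  # in the sweet spot: hold steady

size = 4  # average acquisition rate reported in the study
for accuracy in [0.95, 0.95, 0.6, 0.8]:
    size = next_set_size(size, accuracy)
    print(size)  # prints 5, 6, 5, 5
```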

Bloom's lesser known research

Bloom is known for his pyramid (he never drew one and it is hopelessly simplistic) but he researched this in detail. His other, rarely cited research led him to believe, in Human Characteristics and School Learning (1976), that learners could master knowledge and skills given enough time. It is not that learners are good or bad, but fast and slow. The artificial constraint of time – timed periods of learning, timetables and fixed-point final exams – is a destructive filter on many. The solution is to loosen up on time, to democratise learning to suit the many not the few. Learning is a process, not a timed event. Learning, free from the tyranny of time, allows learners to proceed at their own pace.

Bloom proposed three things could make mastery learning fulfil its potential:

1. Entry diagnosis and adaptation (50%) - diagnose, predict and recommend. David Ausubel made the point that “The most important single factor influencing learning is what the learner already knows”, yet pre-testing is a relatively rare form of assessment.

2. Address maturation (25%) - personalise and adapt. AI can deliver personalised learning based on continuous assessment and adaptation.

3. Instructional methods (25%) - match to different types of learning experiences and time. Sensitivity to the type of learning is a far more complex issue than Bloom thought, with his falsely progressive pyramid and its tripartite distinction of Cognitive (knowledge-based), Psychomotor (action-based) and Affective (emotion-based) domains.

https://www.researchgate.net/figure/Retention-and-Efficiency-Data-by-Condition_tbl1_281801350


Saturday, October 26, 2024

Ai-Da: Challenging creativity's boundaries in a post-humanist, post-creation era?

Ai-Da is the world’s first robot artist. She creates art through a combination of algorithms, cameras and robotic movements. Created in 2019, she has become famous (or infamous), exhibiting at the Design Museum and the Venice Biennale. Ai-Da’s art challenges traditional ideas about creativity and the role of the artist, especially as we grapple with the integration of AI into creative fields. This is the sort of subversion I love in art but which the reactionary art world loathes.

Ai-Da raises crucial questions about the nature and boundaries of art. She creates, draws and paints through the use of AI, cameras and robotic precision. Is creativity a uniquely human trait? Is this intersection of art and technology evolving art? Should we be judging the art or the artist?

My own view is that we have become trapped in the late 18th-century Romantic view of authorship: the unique, divinely-inspired creative spark of the individual. Traditional art since then has valued this mysterious homunculus of creativity. Does Ai-Da’s work instead represent a collaboration between humans and artificial intelligence, a reflection of our current technological era rather than a purely autonomous creation? A stronger proposition is that a machine can be credited as the creative force, even when it lacks personal experiences or emotions to draw from.

Ai-Da is a subversive challenge to the Romantic, human-centric view of creativity. But as I’ve claimed, Ai-Da’s very existence suggests we might be entering a post-humanist era, where machines become active participants in cultural production. I have challenged the Romantic view of aesthetics here.

Postproduction

There is an interesting idea from the French writer Nicolas Bourriaud: that we’ve entered a new era, where art and cultural activity interprets, reproduces, re-exhibits or utilises works made by others, or draws from already available cultural products. He calls it ‘Postproduction’. I thank Rod J. Naquin for introducing me to this thinker and idea.

Postproduction: Culture as Screenplay: How Art Reprograms the World (2002) is Bourriaud’s essay examining the trend, emerging since the early 1990s, of a growing number of artists creating art based on pre-existing works. He suggests that this "art of postproduction" is a response to the overwhelming abundance of cultural material in the global information age.

The proliferation of artworks and the art world's inclusion of previously ignored or disdained forms characterise this chaotic cultural landscape. Through postproduction, artists navigate and make sense of this cultural excess by reworking existing materials into new creations.

Postcreation

I’d like to universalise this idea of Postproduction to all forms of human endeavour that can now draw upon a vast common pool of culture – all text, images, audio and video, all of human knowledge and achievement, basically the fruits of all past human production – to produce in a way that can be described as ‘Postcreation’.

This is inspired by the arrival of multimodal LLMs, where vast pools of media, representing the sum total of all history and all cultural output from our species, have been captured and used to train huge multimodal models that allow our species to create a new future. With new forms of AI, we are borrowing to create the new. It is a new beginning, a fresh start using technology that we have never seen before in the history of our species, something that seems strange but oddly familiar, thrilling but terrifying – AI.

AI, along with us, does not simply copy, sample or parrot things from the past – together we create new outputs. Neither does it remix, reassemble or reappropriate the past – together we recreate the future. This moves us beyond simple curation, collages and mashups into genuinely new forms of production and expression. We should also avoid seeing it as the reproduction of hybrids, reinterpretations or simple syntheses.

It should not be too readily reduced to one word, rather prefixed with ‘re-’: to reimagine, reenvision, reconceptualise, recontextualise, revise, rework, revamp, reinterpret, reframe, remodel, redefine and reinvent new cultural capital. We should not pin it down like a broken butterfly with a simple pin, one word, but let the idea flutter and fly free from the prison of language.

We have been doing this on a small scale for a long time, under the illusion, reinforced by late 18th and 19th century Romanticism, that creation is a uniquely human endeavour, when all along it has been a drawing upon the past, deeply rooted in what the brain has experienced and takes from its memories to create anything new. We are now, together, taking things from the entire memory of our cultural past to create the new in acts of Postcreation.

Communal future

This new world or new dawn is more communal, drawing from the well of a vast, shared, public collective. We can have a common purpose of mutual effort that leads to a more co-operative, collaborative and unified endeavour. There were historical dawns that hinted at this future – the Library at Alexandria, open to all and containing the known world's knowledge; Wikipedia, a huge, free communal knowledge base – but this is something much more profoundly communal.

The many peoples, cultures and languages of the world can be in this communal effort, not to fix some utopian idea of a common set of values or cultural output but creation beyond what just one group sees as good and evil. This was Nietzsche’s re-evaluative vision. Utopias are always fixed and narrow dystopias. This could be a more innovative and transformative era, a future of openness, a genuine recognition that the future is created by us, not determined wholly by the past. AI is not the machine, it is now ‘us’ speaking to ‘ourselves’, in fruitful dialogue.

Some questions

This is not a stated position, merely the start of a complex debate. So here are a few questions.


What defines creativity – can it exist without human emotion or intention? I think it can, especially if it draws on previous experiences in language or images, as humans do.

Does the value of art diminish if it is created by an AI rather than a human? No. Ai-Da's paintings are already selling, some slated for six-figure sums. The art establishment have made their judgement.

Is Ai-Da truly an autonomous creator, or merely an advanced tool shaped by its programming? It is both. It has been shaped by humans but is autonomous to a degree.

How does Ai-Da’s art alter our perception of the boundaries between human and machine creativity? Art moves on and this moves art on. Art has no boundaries. That is the whole point of art, to push boundaries.

Can we still view art as a human-centric endeavour, or is there a shift towards accepting non-human contributors as legitimate artists? This is a shift, albeit the first example of a shift.

How does Ai-Da’s art reflect broader societal trends towards technological integration and reliance on algorithms in decision-making? It moves us on. We have something concrete to discuss. That’s good.


Thursday, October 24, 2024

Taming the AI & Assessment Conflict: 21 Practical Resolutions

There is a battle underway, with assessors in education and institutions on one side and Generative AI on the other. It often descends into a cat-and-mouse game, but there are too many mice, and the mice are winning. Worse still is when it descends into a tug-of-war, tech showdown or brutal legal collision.


We clearly need to harmonise assessment with our evolving technology and navigate our way through AI and assessment to avoid friction, with some practical solutions to minimise disruption. Assessment is at a crossroads.

Legal confrontations between assessors and the assessed will do great damage to institutions. When Sandra Borch, Norway’s Minister of Research and Higher Education, cracked down on plagiarism, taking a student to the Supreme Court, another student uncovered serious plagiarism in Borch’s own Master’s thesis – she had to resign. A more distasteful case was Claudine Gay, President of Harvard, who had to resign after an attack led by the activist investor Bill Ackman. The whole thing turned toxic as people then uncovered plagiarism in his academic wife’s work. As many US kids have lawyers as parents, a slew of cases is hitting the courts, putting the reputations of schools and colleges at risk. This is not the way forward.

Problem

Assessment stands right in the middle of the earthquake zone as the two tectonic plates of AI and traditional education collide and rub up against each other. This always happens with new technology; printing, photocopying, calculators, internet, smartphones… now AI.



We are currently in the position where mass use of AI is common, because it is a fabulous tool, but it is being used on the SLY. There is widespread use of unsanctioned AI tools to save time. Learners and employees are using it in their hundreds of millions, yet educational institutions and organisations are holding out, ignoring the issue or doing little more than issuing a policy document. AI is therefore seeping into organisations like water, a rising tide that never ebbs.

The problem has just become far more challenging, as AI agents are here (automation). The Claude 3.5 Sonnet upgrade has just gone 'agentic', in a very practical way. It can use computers the way people do: it can look at your screen, find things, analyse them, complete a chain of tasks, move your cursor, click buttons and type text. To be clear, it understands and interacts with your computer just as you are doing now. This shift in ability means it can do open-ended tasks: sit tests, do assignments, even open-ended research.

In truth…

We need to be more honest about these problems and stop shouting down from moral high horses such as ‘academic integrity’. Human nature determines that if people find they are being played, or have a different imperative, they will take paths of least resistance. E.O. Wilson’s observation that we have “Palaeolithic emotions, Medieval institutions and godlike technology” explains why people take shortcuts, through fear of failure, financial consequences, even panic. It is pointless to continually say 'They shall not pass' if the system has below-par teaching and poorly constructed assessment. We cannot simply blame students for the systemic failures of the system. Education must surely be a domain where teachers and learners are in some sort of harmony.

It doesn't help that we delude ourselves about the past. Cheating has been and still is common. We’re kidding ourselves if we think parents don’t do stuff for their kids at school and in universities. Essay mills and individuals writing assessments, even Master’s theses, exist in their tens of thousands in Nairobi and other places, feeding European and US middle-class kids with essays. Silk cloths with full essays go back to Confucian times. Cheating technology can be bought on the internet, from button cameras to full comms into invisible earpieces. Cheating is everywhere. That's not to condone it, just to recognise that it is ALWAYS a problem, not just an AI problem.

In truth, we also have to be honest and accept that assessment is far too ‘text’ based. Much of it does not assess real skills or performance – even critical thinking. Writing an essay in an exam does not test critical thinking. No one writes critically starting top left and finishing bottom right – that’s why students memorise essays and regurgitate them in exams. Essay setting is easy; actual assessment is hard. We also have to be honest and accept that most educators designing and delivering assessment know little about it.

In the workplace, few take assessment seriously. At best it is multiple choice, or e-learning thinly peppered with MCQs. L&D doesn’t take assessment seriously because it is not driven by credentials, nor does it make much effort to evaluate training. With MCQs you can guess (1 in 4); distractors are often poor or simply distract; they are difficult to write, easy to design badly, often too factual or unreal; they require little cognitive effort and can be gamed (pick the longest answer, look for opposites etc.). An additional problem is that online authoring tools lock us into MCQs.

Assessments are odd. People settle for 70-80% (often an arbitrary threshold) because tests are seen as an end-point. They should have pedagogic import and give learners momentum, yet most assessment and marking says nothing meaningful about improvement. Even with high scorers, a high mark is seen as enough; full competence is rarely the aim. The aim is to pass the test, not master the subject.

Plagiarism checkers do not work, so DO NOT use detectors. Neither should you depend on your gut – that is just as bad, if not worse. There are too many false positives, and detectors consistently accuse non-native speakers of cheating. Students KNOW this tech better than you. They will always be one step ahead, and even if they are not, there will be places and tools they can use to get round you.

Neither does setting traps for the mice, like "Include in your work citations from <fictional name>", as once the trap is revealed, learners can use the tech to spot the fictional citation.

In a study by Scarfe et al., GenAI submissions were seeded into the exam system of five undergraduate modules across all years of a BSc Psychology degree at a UK university. 94% of the AI submissions went undetected, and they received grades half a grade boundary higher than real students.

In a recent survey of Harvard students, some had made different course choices because of AI; others reported a sense of purposelessness in their education. They were struck by the fact that they were often learning what will be done differently in a world of AI. It is not just assessment that needs to change but also what is taught to be assessed. Assessment is a means to an end, assessing what is known, learnt or taught. If what we need to know, learn or teach changes, so should assessment.

Solutions

Calculators generate numbers, GenAI generates text. We now live in a post-generative AI world where this is the norm. Most writing is also now digital, so why are so many exams still handwritten?

Most writing in the workplace is not a postmodern critique of Macbeth but fairly brief, bureaucratic and banal: getting things done by email, comms, plans, docs and reports. Management is currently laden with admin, and GenAI promises to free us from admin to focus on what matters - the goal. It is here to stay because there is a massive need in the real world to raise productivity in speed and quality, not to get bogged down in redrafting or pretending you are Proust. Why expect everyone to be a writer of brilliant prose when the goal is simply to get things done?

1. We have to move beyond text-only assessment into more performance-based assessment. Kids go to school at 5 and come out up to 20 years later having done little else other than read, write and comment on text. There is an illusion that one can assess skills through text – which is plainly ridiculous. Accept that people use GenAI to improve their writing beyond their own threshold. Encourage them to use AI to make their writing more concise through summarisation. Allow them to critique their own work through AI.

2. Build 'pedagogy' into creating assessments with AI. We have done this by taking the research, on say transfer and action, then building that into the AI assessment creation process. You get better, more relevant assessment items, along with relevant rubrics.

3. Also build good assessment design practices into creating assessments with AI. There are clear DOs and DON’Ts in the design of assessment items. Build these into the AI creation process. Go further and match assessments to quality assessment standards. Believe me, this can be done in AI.

4. Match assessments more closely to what was actually taught. This alignment can be done using AI, including identifying gaps, checking representative coverage and spotting weaknesses in emphasis. The documents and transcripts used in teaching, and/or the curriculum, can be used by AI to create better quality assessments.

5. Do more pre-assessment. David Ausubel said, “The most important single factor influencing learning is what the learner already knows.” I totally agree, yet it is rarely done. This gives assessment real pedagogic import – it propels or feeds forward into the learning process and helps teachers. These can be created quickly by AI.

6. Let's have more retrieval practice. This can be created quickly by teachers, even by learners themselves. We know that this works better than underlining and highlighting notes. Making learners put in the effort to recall ideas and solutions in their own minds helps get stuff into long-term memory.

7. Move away from MCQs towards short open-text answers, assessed by AI. Open text is intrinsically superior as it demands recall from memory rather than identification and discrimination (the correct answer in an MCQ is right there on the paper or screen). Open response more accurately reflects actual knowledge and skills.
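As a sketch of what AI-assessed open text could look like in practice: the snippet below uses the OpenAI Python client, though any LLM provider would do; the model name, prompt wording and 0-5 rubric are illustrative assumptions, not a recommendation.

```python
# Minimal sketch: grading short open-text answers with an LLM.
# Assumes the OpenAI Python client (pip install openai) and an OPENAI_API_KEY
# environment variable; model, rubric and prompt are illustrative.
from openai import OpenAI

client = OpenAI()

def grade_answer(question: str, model_answer: str, student_answer: str) -> str:
    """Score a short open response against a model answer and return feedback."""
    prompt = (
        "You are a strict but fair assessor.\n"
        f"Question: {question}\n"
        f"Model answer: {model_answer}\n"
        f"Student answer: {student_answer}\n"
        "Score the student answer 0-5 for accuracy against the model answer, "
        "then give one sentence of feedback. Format: 'Score: N. Feedback: ...'"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any capable model works
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep marking as consistent as possible
    )
    return response.choices[0].message.content

print(grade_answer(
    "Why does retrieval practice aid retention?",
    "Effortful recall strengthens long-term memory traces.",
    "Because recalling something makes the memory stronger.",
))
```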

8. Move away from authoring tools that lock you into fixed, templated MCQ assessment items. They also template things into rather annoying cartoon content with speech bubbles etc. Here are 25 ways I think bad design makes content and assessments suck.

9. Use more scenario and simulation assessment. These match real-world skills, have more relevance and allow more sophisticated assessment of decision making and other skills. AI can create such scenario and simulation assessments, and the content you need to populate them.
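A branching scenario need not be complicated to build or score. Here is a minimal sketch of one as plain data plus a scorer; the scenario text, node names and point values are invented for illustration:

```python
# Minimal sketch: a branching scenario assessment as plain data plus a scorer.
# All node names, option text and scores are illustrative.
SCENARIO = {
    "start": {
        "text": "An angry customer demands a refund outside policy. What do you do?",
        "options": {
            "a": ("Refuse outright and quote the policy.", "end_refuse", 0),
            "b": ("Listen, acknowledge, then offer alternatives.", "offer", 2),
        },
    },
    "offer": {
        "text": "The customer rejects a voucher but hints a partial refund would do.",
        "options": {
            "a": ("Escalate to a manager for approval.", "end_escalate", 2),
            "b": ("Promise a full refund to end the call.", "end_promise", 0),
        },
    },
}
TERMINAL = {"end_refuse", "end_escalate", "end_promise"}

def run(choices: list[str]) -> int:
    """Walk the scenario with a sequence of choices; return the total score."""
    node, score = "start", 0
    for choice in choices:
        _text, next_node, points = SCENARIO[node]["options"][choice]
        score += points
        if next_node in TERMINAL:
            break
        node = next_node
    return score

print(run(["b", "a"]))  # best path scores 4
```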

10. On formative assessment, 'test out' more. Testing out means allowing people to progress if they pass the test; they may be able to skip modules, even the entire course. This should be the norm in compliance training, or training full stop, where people get the same courses year after year.

11. Get AI to create mnemonics and question-based flashcards for learners to self-assess, practise and revise, and to create personalised spaced-practice assessment, so they can get learning embedded.
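For the spaced-practice part, the scheduling logic is well understood. Below is a minimal sketch of the classic SM-2 spaced-repetition algorithm, the one behind many flashcard apps; the scheduling constants follow the published SM-2 algorithm, while the Card class and field names are illustrative:

```python
# Minimal sketch: SM-2-style spaced repetition for question-based flashcards.
# Scheduling constants follow the published SM-2 algorithm; the Card class
# and field names are illustrative.
from dataclasses import dataclass

@dataclass
class Card:
    question: str
    answer: str
    interval: int = 1      # days until next review
    repetitions: int = 0   # consecutive successful recalls
    easiness: float = 2.5  # ease factor, floored at 1.3

def review(card: Card, quality: int) -> Card:
    """Update a card after review; quality is self-graded 0 (blackout) to 5 (perfect)."""
    if quality < 3:                       # failed recall: start over, see it tomorrow
        card.repetitions = 0
        card.interval = 1
    else:                                 # successful recall: grow the interval
        card.repetitions += 1
        if card.repetitions == 1:
            card.interval = 1
        elif card.repetitions == 2:
            card.interval = 6
        else:
            card.interval = round(card.interval * card.easiness)
    # nudge easiness up or down depending on how hard the recall felt
    card.easiness = max(1.3, card.easiness + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return card

card = Card("7 x 8?", "56")
for q in [5, 4, 5]:
    card = review(card, q)
    print(card.interval)  # 1, 6, 16
```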

12. On formative assessment, use more AI-designed and AI-delivered adaptive and personalised learning. Adaptive learning can be: 1) PRE-COURSE: student data or pre-tests define pathways (never use learning styles or preferences). 2) IN-COURSE: continuous adaptation, which needs specialised AI software (this is difficult but systems can do it). 3) POST-COURSE: with shared data across courses/programmes, also adaptive assessment and adaptive retention, such as personalised spaced practice and performance support.
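The PRE-COURSE case is the simplest to picture. Here is a minimal sketch that routes learners to pathways from a diagnostic pre-test score; the thresholds and module names are purely illustrative:

```python
# Minimal sketch: pre-course adaptive pathways driven by a diagnostic pre-test.
# Thresholds and module names are illustrative assumptions.
def assign_pathway(pretest_score: float) -> list[str]:
    """Route a learner to modules based on what they already know."""
    if pretest_score >= 0.8:
        return ["capstone_project"]                    # tests out of the basics
    if pretest_score >= 0.5:
        return ["advanced_topics", "capstone_project"]
    return ["foundations", "advanced_topics", "capstone_project"]

print(assign_pathway(0.85))  # ['capstone_project']
```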

13. AI-created avatars can be built into assessments. These can be customers in sales or customer care training; employees in leadership, management and specific skills such as recruitment interviewing; or patients in medical education, where you can interact with people in assessments to provide realism.

14. Automate marking. Most lecturers, teachers and trainers have heavy workloads, and many rightly complain about this, so let them focus on teaching, not marking. Automated marking will also give you insights into individual performance and gaps. The Henkel study (2024), in a series of experiments in different domains at grade levels 5-16 (Key Stage 2/3/4), showed that AI was as good at marking as humans.

15. Use audio to deliver assessment results, along with feedback and encouragement. This is more personal and motivating for the learner. It also forces the assessor to be more articulate and precise.

16. Use AI post-assignment or post-assessment techniques, such as generated audio questions that interrogate the learner’s understanding of their own work. Their audio answers can be transcribed by AI, even assessed by AI.
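As a sketch of the transcription step, assuming the OpenAI Python client and its Whisper endpoint; the file name is illustrative, and the resulting text could then be passed to a marking prompt like the one in point 7:

```python
# Minimal sketch: transcribing a learner's spoken answer for AI assessment.
# Assumes the OpenAI Python client and an OPENAI_API_KEY; the audio file
# name is illustrative.
from openai import OpenAI

client = OpenAI()

with open("learner_answer.mp3", "rb") as audio:
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio)

print(transcript.text)  # pass this on to a marking or questioning prompt
```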

17. Make assessments more accessible in language and content. Far too many assessments use overly academic language, or assessment items that are too abstract; these can be turned into better expressed, more relevant prose and problems. Use AI to translate overly academic language into readable, vocational prose.

18. AI has revolutionised accessibility through text-to-speech and speech-to-text. It has now provided text and speech chatbots and automatically created podcasts (the free NotebookLM). AI has also given us live captioning and real-time transcription. For dyslexics (5-15% of the population), text-to-speech, speech-to-text, spell checkers, grammar assistants, predictive text and voice dictation have been incredibly useful in reducing fear and embarrassment. AI can do wonders in making assessment more accessible.

19. Use AI for data analysis on your assessment data. It is as simple as loading up a spreadsheet and asking the questions you want answered.
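If you would rather see what that analysis looks like under the hood, here is a minimal sketch of classic item analysis in Python with pandas; the file name and column layout are assumptions:

```python
# Minimal sketch: basic item analysis on assessment results with pandas.
# Assumes a CSV with one row per learner and one column per question,
# scored 1 (correct) or 0 (incorrect); the file name is illustrative.
import pandas as pd

df = pd.read_csv("assessment_results.csv")

# item difficulty: proportion of learners answering each question correctly
difficulty = df.mean()

# item discrimination: simple item-total correlation (point-biserial)
totals = df.sum(axis=1)
discrimination = df.apply(lambda item: item.corr(totals))

report = pd.DataFrame({"difficulty": difficulty, "discrimination": discrimination})
print(report.sort_values("discrimination"))  # weak items float to the top
```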

20. Stop being so utopian. Most people at school and University will not become researchers and academics. Don’t assess them as if that is ‘their’ goal.

21. What skills are left for education to focus on once GenAI is in common use in education, the workplace and life? The pat answer is too often ‘soft skills’. I disagree. Move deeper into hard expertise and skills, with a broad perspective, that can be enhanced, magnified, even executed and automated by AI.

Conclusion

AI is moving steadily from prompting to automation. This is happening in writing, coding, spreadsheet analysis, image creation, even moving-image creation. It is happening with avatars and in real-time advanced dialogue as speech.

Just as calculators generated numbers, GenAI generates text. We need to recognise this and change what we expect of learners. Turning it into a cat-and-mouse game will not work; there are too many mice and they're winning.

PS
This is a sort of summary of my talk in Berlin at the ATP European Assessment Conference.