Sunday, April 23, 2023

11 words in AI that I wish were used with more care...

I’m normally sanguine about the use of language in technology and avoid the definition game. Dictionary definitions are derived from actual use, not the other way around - meaning is use. Like those who want to split hairs about pedagogy, didactics, andragogy, heutagogy… whatever. I prefer practical pedagogy to pedantry.

Another is the age-old dispute about whether ‘learning’ can be used as both a verb and a noun, some claiming the noun ‘learning’ is nonsense. Alexander Pope used it as a noun in 1709:

'A little learning is a dangerous thing'

The next two lines are instructive:

'Drink deep, or taste not the Pierian Spring

There shallow draughts intoxicate the brain' 

In the same work he also wrote 'To err is human; to forgive, divine' and 'Fools rush in where angels fear to tread.' Pretty impressive for a 23-year-old! It is a wise warning to those who want to bandy about loose language in AI.


In the discourse around AI, several terms make me uneasy, mainly because other forces are at work, such as confirmation bias and anthropomorphism. We tend to treat tech as if it were animate, like us. Reeves & Nass ran 35 brilliant studies on this, published in their wonderful book ‘The Media Equation’, well worth a read. But this also leads to all sorts of problems, as we tend to make abrupt contextual shifts, for example between different meanings of words like hallucination, bias and learning.


Another form of anthropomorphism is to bring utopian expectations to the table, such as the idea that software is, or should be, a perfect single source of truth. There can be no such thing if it includes different opinions and perspectives. Epistemology, the study of knowledge, quickly disabuses you of this idea, as do theories of ‘truth’. Take science, where findings are not ‘true’ in any absolute sense but provide the best theories for the given evidence at that time. In other words, avoid absolutes.


These category errors, the attribution of a concept from one context to something very different, are common in AI. First, the technology is often difficult to understand – how it is trained, the nature of knowledge propagation from Large Language Models (LLMs), what is known about how they work, the degree to which emergent qualities, such as reasoned behaviour, arise from such models and so on. But that should make us more, not less, wary of anthropomorphism.



Artificial

'Artificial' intelligence has a pejorative ring, like artificial grass or artificial sweeteners - crude, man-made and inferior, a poor imitation. It qualifies the whole field as something less worthy, a bit fake, hence the clichéd jokes - 'artificial stupidity' and so on. Of course, it works on the tension between artificial and real, cleaving the two worlds apart.


Intelligence

John McCarthy, who coined the phrase ‘Artificial Intelligence’, came to regret the term. Again, taking a word used largely in a human or animal context is an odd benchmark, given that humans have limited working memories, are forgetful, inattentive, full of biases, sleep eight hours a day, can’t network and die. We carry evolutionary baggage: genetics, instincts, emotions and limited cognitive abilities. In other words, human ‘intelligence’ is rather limited.


Generators

LLMs are certainly good at generating text and images, and will be at audio and video, but characterising them as 'generators' is, I think, a mistake. They are much more than this: reviewing, spotting errors, evaluating and judging. Their summarising and other functions are astounding.


Ethics

The one area around AI that produces the most ‘noise’ is a fundamental lack of understanding about what ‘ethics’ is as an activity. Ethics is not activism; that is a post-ethical activity that comes from those who have reached a fixed and certain ethical conclusion. It tends to be shrill and accusatory, and anyone who blunts their certainty becomes the enemy. This faux ethics replaces actual ethics: the deep consideration of what is right and wrong, upsides and downsides, known and unknown - the domain of real moral philosophy. One could do worse than start with Aristotle, who recommends moderation on such issues, or Hume, who understood the difficulty of deriving an ‘ought’ from an ‘is’, or even Mill, whose Utilitarian views can be a useful lens through which to identify whether there is a net benefit or deficit in a technology. It is not as if ethics is some kind of new kid on the block.



Hallucination

Large Language Models tend to ‘hallucinate’. This is an odd semantic shift. Humans hallucinate, usually when they’re seriously ill, mentally ill or on psychedelic drugs, so the word brings a lot of pejorative baggage with it. Large Language Models do NOT hallucinate in the sense of imagining something in consciousness; they are competent without comprehension. They optimise, calculate weightings and probabilities, perform all sorts of mathematical wonders and generate data – they are NOT conscious, therefore cannot hallucinate. The word exaggerates the problem, suggesting the system is dysfunctional. In a sense it is a feature, not a bug, one that can be altered and improved, and that is exactly what is happening. ChatGPT produces poems and stories – are those hallucinations because they are fiction?



Bias

Another word that seems loaded with bias is ‘bias’ itself! Meaning is use, and the meaning of bias has largely been about human bias, whether innate or social; Daniel Kahneman got a Nobel Prize for uncovering many of these. That makes the word difficult when applied to technology, and it needs to be used carefully. There is statistical bias – indeed, statistics is in large part the science of identifying and dealing with bias, aiming to provide objective and accurate information about a population or phenomenon from data. Bias can occur in software and statistical analysis, leading to incorrect conclusions or misleading results. There is also a precise mathematical definition: the difference between the expected value of an estimator and the true value of the parameter it estimates. Statisticians have developed a ton of methods to deal with bias – random sampling, stratified sampling, bias adjustment, blind and double-blind studies – to help ensure that results are as objective and accurate as possible. As AI uses mathematical and statistical techniques, it works hard to eliminate, or at least identify, bias. We need to be careful not to turn the ethical discussion of bias into a simple expression of bias. Above all, it needs objectivity and careful consideration.
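To make the statistical sense of the word concrete, here is a small illustrative sketch (the sample sizes and population are invented for illustration): the classic ‘divide by n’ variance estimator is biased low, and Bessel’s correction removes that bias - bias here is a measurable property of a procedure, not a moral failing.

```python
import random

random.seed(0)

def biased_var(xs):
    # Maximum-likelihood variance: divides by n, which systematically
    # underestimates the true variance on small samples.
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def unbiased_var(xs):
    # Bessel's correction: dividing by n - 1 removes the bias.
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Draw many small samples from a population with true variance 1,
# then average each estimator over all the samples.
samples = [[random.gauss(0, 1) for _ in range(5)] for _ in range(20_000)]
avg_biased = sum(biased_var(s) for s in samples) / len(samples)
avg_unbiased = sum(unbiased_var(s) for s in samples) / len(samples)
# avg_biased lands near 0.8 (biased low); avg_unbiased near the true 1.0
```

This is exactly why the statistical and ethical senses of ‘bias’ should not be run together: the former is defined, measurable and correctable.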



Reason

Another term that worries me is ‘reason’. We say that AI cannot reason. This is not true. Some forms of AI specialise in this skill, as they employ formal logic in the tool itself. Others, like ChatGPT, show emergent reasoning. We humans are actually rather bad at logical reasoning; we second-guess and bring all sorts of cognitive biases to the table almost every time we think. It is right to question the reasoning ability of AI, but this is a complex issue, and if you want it to behave like humans, reason cannot be your benchmark.


Learning

This word, in common parlance, described something unique to humans and many animals. When we talk about learning in AI, it has many different meanings, all involving very technical methods. We should not confuse the two, yet the basic idea that a mind or machine is improved by the learning process is right. What is not right is the straight comparison. Both may have neural networks, but the similarity is more metaphorical than real. AI has gained a lot from looking at the brain as a network, and I have traced this in detail from Hebb onwards through McCulloch & Pitts, Rosenblatt, Rumelhart and Hinton. Yet the differences are huge – in the types of learning, the inputs, the nature of memory and so on.
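The mechanical sense of machine ‘learning’ fits in a few lines. This is a minimal sketch of Rosenblatt’s perceptron rule learning the logical AND function (the learning rate and epoch count are illustrative choices); real systems are vastly more complex, but the principle of error-driven weight adjustment is the same.

```python
def train_perceptron(data, epochs=20, lr=0.1):
    # Rosenblatt's rule: nudge each weight in proportion to the error
    # between the target output and the current prediction.
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in data:
            y = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - y
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# The AND function as (inputs, target) pairs.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
# After training, predict reproduces AND: only predict(1, 1) returns 1
```

No comprehension is involved anywhere: ‘learning’ here is just iterative numerical adjustment, which is why the straight comparison with human learning misleads.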



Alignment

This word worries me, as many assume there is a single set of human values to which we can align AI systems. That is not true. The value and moral domain is rife with differences and disputes. The danger is in thinking that the US, EU or some body like UNESCO knows what this magical set of human values is – they don’t. This can quickly turn into the imposition of one set of values on others. Alignment may be a conceit - 'be like us', they implore. No - be better.

Stochastic Parrot

Made popular in a 2021 paper by Bender and colleagues, the phrase is 'parroted' by people who are often vague about what that paper claimed. That LLMs are probabilistic (stochastic) is correct, but that is only part of the story, one that ignores the many other ingredients in the cake, such as Reinforcement Learning from Human Feedback (RLHF), where humans train the model to optimise identified policies. These models are trained with both unsupervised and supervised ('reward') methods. It is simply incorrect to say that such a model merely spits out probabilistic words or tokens from unsupervised training. The word 'parrot' is also misleading, as it suggests direct copying or mimicry, which is exactly what LLMs do not do: they create freshly minted words and text.
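What 'stochastic' actually means here can be shown with a toy sketch (the tokens and probabilities are invented for illustration, not taken from any real model): the model produces a probability distribution over possible next tokens and samples from it, with a temperature parameter controlling how adventurous the choice is.

```python
import random

# Invented next-token distribution after a prompt like "The cat sat on the".
next_token_probs = {"mat": 0.6, "sofa": 0.25, "roof": 0.1, "moon": 0.05}

def sample_token(probs, temperature=1.0, rng=random):
    # Temperature reshapes the distribution: low T -> near-greedy,
    # high T -> flatter and more surprising choices.
    weights = {t: p ** (1.0 / temperature) for t, p in probs.items()}
    total = sum(weights.values())
    r = rng.random() * total
    acc = 0.0
    for token, w in weights.items():
        acc += w
        if r < acc:
            return token
    return token  # numerical edge case: return the last token

rng = random.Random(42)
tokens = [sample_token(next_token_probs, rng=rng) for _ in range(1000)]
# Stochastic, not parroting: "mat" dominates, but other tokens appear too
```

Even this caricature shows why 'parrot' misleads: nothing is copied verbatim, and in a real LLM the distribution itself is shaped by training, including RLHF, not just raw word statistics.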



These are just a few examples of the pitfalls of sloganeering and throwing words around as if we were certain of their meaning, when what we are actually doing is sleight of hand: using a word in one context and taking that meaning into another. We are in danger of parroting terms without reflection on their actual use and meaning. Conceptual clarification is a good thing in this domain.

Tuesday, April 04, 2023

Are we heading towards a ‘Universal Teacher’?

But it can't do what teachers do well….

ChatGPT "Hold my beer...."


Universal Soldier is a film franchise in which soldiers, some melded with AI, are turned into superhuman fighters. But what if we imagined something far more useful and beneficial? In a world with a shortage of teachers, especially in poorer countries where teachers are scarce, poorly paid and class sizes huge, a Universal Teacher would be a godsend. Not for very young children, but more autonomous learners, from secondary school onwards, would surely benefit from a teacher that could teach any subject, to anyone, anyplace, anytime – accessible, personalised, at any level, in any language.

The launch of OpenAI's GPT service is a first small step in this direction. One can create a personal teaching chatbot and have it available to oneself, to those with a link, or in a marketplace. This opens the door for a huge amount of innovation in teaching and learning.


Universal internet access, via ground and satellite services, is now almost with us, along with the ubiquity of devices, many with AI chips that allow on-device computing to do things that were unimaginable only a year ago, especially with voice. The ecosystem is providing the technical means for Universal Teachers to emerge and thrive.

What cannot be denied are the numbers. ChatGPT was the fastest adopted technology in the history of our species; at 100 million users, with more than a billion uses a month, it has seen astonishing levels of engagement. What is more astonishing is the nature of that use – mostly to learn something new, complete a task or solve a problem. It is basically a performance support tool – a learning tool.


Universal Teacher is launched

That GPT-4 launched with Khan Academy and Duolingo as partners spoke volumes about its potential for teaching and learning. At the same time, Bill Gates wrote a piece called The Age of AI, which saw education as the great net beneficiary of Generative AI. Salman Khan, who has been in this game for years, thinks the benefits are enormous and has already launched what could be described as a Universal Tutor in education – Khanmigo. What followed on Twitter was an explosion of innovative and often ingenious applications in teaching and learning around the world.

In just one year we have seen the release of GPT-4, a moment in our history when a new species of technology swept the world.

Does our unease with this question simply indicate the dying days of human exceptionalism? Copernicus de-centred our species and threw us out into orbit; Darwin de-anchored us further to show we were just a smart ape. One way we got smart was to teach and learn – I wrote a book on this, Learning Technology. That exchange has always used learning technologies: painting, writing, printing, the internet and now AI. What Generative AI provides is a new pedAIgogy (see more on this), where monologue is replaced by dialogue – what good teachers do in classrooms, tailored to the needs of the learner – something that is difficult, if not impossible, with previous technologies in one-to-many teaching environments.

A very real question is now on the table. To what degree can AI now replace teachers? That is seen by some as a disturbing question. It is, in the sense of possibly dehumanising learning. Nevertheless, it is a worthwhile thought experiment.

The old dyad model of teachers and learners is long gone, as technology has become a serious mediator, especially online. We now have software that learns and can be taught; now that such software can also teach, there are two new kids on the block.


We have to remember that teaching is always a means to an end, that end being the improvement of the learner. It is easy to see teaching as an end in itself, but it is not. As a profession it is difficult, hard work, sometimes distressing, often deeply satisfying. Yet the workload can be crushing, the psychological pressure oppressive, and it is clearly a demanding job. It is difficult to sustain that pace over weeks and months, never mind years. That contributes to constant teacher shortages, especially in difficult-to-teach subjects like maths, science and foreign languages. Could technology be a solution to the crushing workload and recruitment problems? I think so.


Right across the learning journey – learner engagement, learner support, content creation, content delivery, personalisation, accessible learning and assessment – AI, especially generative AI, can deliver increasingly sophisticated learning. This is effortful, personalised learning, delivered not as one-off learning events but as a process.


Generative AI is coming close to delivering personalised tutoring, as good as, say, the tutors that middle-class parents frequently hire for their kids. I have been convinced of this for some time, after seeing its effect in adaptive learning systems at University level in the US, where attainment rose and dropout fell. The problem with these older adaptive systems is that they were difficult to build and populate. New generative AI is an entirely different beast. It is smart, very smart, and as it has been trained on a vast repository of our stored culture, it is smart on all subjects. It is as if it had degrees in all subjects, with some teacher training on top.


Universal Teaching

In Khan Academy's Khanmigo, tuition across dozens of subjects is available to ignite your curiosity, at Elementary school, Middle school, High school and College levels. Paired with revision-schedule software, these systems would provide lasting and efficient learning support. You can even practise exams with open input and dialogue. You can ask for hints, and the standard of dialogue is superb. This is where it gets interesting: it is like a full discussion with a good teacher who knows every subject in remarkable detail, available 24/7/365, all the way up to College level. Even at College level I can see how 101 courses could be delivered this way, then developed to cope with ever more complex aspects of a degree course. These systems need not be as good as teachers to reduce workload and provide real learning across all subjects.



Along comes GPT-4 to deliver high-quality, personalised maths tuition. It gives you a maths problem, asks ‘What do you think the first step should be?’, accepts open input, and identifies where you went wrong, along with a suggestion for moving forward. It can even relate the maths problem to a subject you love – let’s say soccer. Maths is a difficult subject to teach and learn, one where catastrophic failure is common. Here is a system that pays attention to your every step, is engaging and endlessly patient. What’s not to like?


Language learning

Duolingo has tens of millions of active users and has used AI for some time, especially to determine practice patterns with its half-life algorithms. It has partnered with OpenAI, using GPT-4 to add two features that bring two powerful pedagogic techniques to its teaching:

1. Explain My Answer: if you get something wrong in your second language, Duolingo gives you an explanation and 'elaborates’ on what was wrong. It also delivers 'examples' to point you in the right direction.

2. Roleplay: you chat with a virtual native speaker who knows your level of competence and uses human-written scenarios to give you endless and much-needed practice and immersion. Once completed, each roleplay session gives you a report suggesting improvements.
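The half-life idea mentioned above can be sketched in a couple of lines. This is an illustrative model of forgetting, not Duolingo's actual implementation (the 7-day half-life is an invented example): recall probability decays exponentially, halving once per half-life since the item was last practised, and practice is scheduled before recall drops too low.

```python
def recall_probability(days_since_practice: float, half_life_days: float) -> float:
    # Recall probability halves every `half_life_days` since last practice.
    return 2.0 ** (-days_since_practice / half_life_days)

# Illustrative values: an item with a 7-day half-life, checked at delays.
p_same_day = recall_probability(0, 7)    # 1.0
p_one_week = recall_probability(7, 7)    # 0.5
p_two_weeks = recall_probability(14, 7)  # 0.25
```

In a real system the half-life itself is predicted per learner and per item from practice history, which is where the machine learning comes in.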

The ease of translation in LLMs also helps deliver a second language, as content can be readily translated.


Critical thinking

Even critical thinking can be taught, where you debate against ChatGPT. You present your arguments, it counters, and the debate continues – endlessly knowledgeable, positive, patient and helpful. This Socratic approach to learning plays to our natural propensity for talking to each other. It is as if we now have a universal Socrates. Debate, as a formal taxing of the mind – the chess of ideas, where a motion is debated to explore its strengths, weaknesses and limits – is not new. To play such mental chess with a machine is what is new.


Dialogue with the Greats

Conversations, real dialogue, with dozens of intellectual figures from the past are also available: scientists like Isaac Newton and Marie Curie, Presidents from Washington to Lincoln, authors like William Shakespeare and Mark Twain, historical figures such as Genghis Khan and Napoleon. As the works of almost all past intellectuals and figures of note have been used to train the model, it can speak as if it were that person. This is on a par with original sources and provides a less dry, text-based inroad into the subject. To hear theories and opinions expressed by the person who initiated them is surely a novel pedAIgogic approach.


Even fictional characters are available: Greek figures like Zeus and Achilles, Shakespearean characters from Macbeth to Othello, and characters from novels, from Mr Darcy and Jay Gatsby to Dr Frankenstein and Don Quixote. This is an interesting and novel vector into great literature: being able to chat with, interrogate and explore a character, maybe several characters in a work. This is surely a new type of what I’ve called pedAIgogy.


Learning difficulties

Another feature of the Universal Teacher would be sensitivity to accessibility issues. For sight and hearing impairment, text-to-speech and speech-to-text through AI, delivered via smartphones, have already been transformative. GPT-4 was launched with the astounding Be My Eyes app. Dyslexia, autism, ADHD and other learning difficulties have all shown some signs of being diagnosable by AI, and generative AI can deliver content sensitive to the needs of such learners.


We humans are deeply biased, the biases being largely part of the way the brain has evolved. Daniel Kahneman got a Nobel Prize for uncovering many of these biases and sees them as ‘uneducable’. AI, on the other hand, learns and need not have these biases. It need not ‘know’ the sex, race, cultural background, accent or socio-economic status of a learner. Fairness can be developed as we proceed. This I see as a distinct advantage of the Universal Teacher – its lack of bias.


Teacher workload

For teachers it can create lesson hooks, lesson plans, assessments and rubrics for assessments – almost any piece of planning and paperwork can be done using Generative AI. As a tool to reduce workload, it has huge potential. This must surely be one of its first deployments.



Imagine being able to do all of the above in almost any language. That is becoming possible. An underestimated feature of ChatGPT and LLMs is translation – a global feature for services and products with huge human benefits, and massive cost reductions are possible through AI here. Far fewer students would have to learn, and often struggle to learn, in a second language, although that option would remain open; even a blend of languages would be possible.



We don’t yet have a Universal Teacher (UT); what we have is a Universal Teaching Assistant (UTA). But the Universal Teacher is now on the horizon. It is difficult to tell how far off that horizon is, but in just a few months we have seen generative AI grow from a good but still error-prone service into something far more accurate, with reach across all subjects. It has moved technology from embodying simple pedagogic principles, such as exposition, knowledge assessment, scenario-based learning and spaced practice, towards new pedAIgogies: tutoring, teaching, lesson creation, multimodal content creation, learning support, dialogue, error correction, language immersion and debate.


Generative AI will only get better, and not only through parameter size, which has proved exponentially useful above a certain threshold. Size does matter, but so do all the other tools that go into making a Universal Teacher – combinations with web search, maths solvers and other tools. We are seeing ensembles of technologies create learning tools that address the problems of provenance, accuracy and efficiency. Generative AI may be the beating heart of the Universal Teacher, but it has many other weapons it can employ.

An experiment is already underway in the partnership between Khan Academy and OpenAI, with use in schools already in progress. They are taking it carefully, which is right. We will see more of this: AI tutors, like Khanmigo, on all subjects, for all ages.


Speculating further, future developments will surely see the embodiment of such teachers as avatars inside 3D virtual worlds, where learning by doing can also occur. We will learn within digital twins of the workplace, airport or hospital ward, with virtual customers or patients. That is the subject of my next book. The Universal Doctor is surely a spin-off – an entity that can investigate, diagnose and treat anyone, anywhere.


In one way the Universal Teacher is not a teacher at all; it is the learner learning from the cultural achievements of the past. The Universal Teacher is us being taught by our own hivemind, a supermind. This is not a mind like ours, but it does mediate what we already know so that we can all progress. We can all be in this together. We are being taught by our collective self, and that may be the fundamental beauty of this idea.


Sunday, April 02, 2023

Ethics, Experts and AI

The ethics of AI has been discussed for decades. Even before Generative AI hit the world, the level of debate – from the philosophical to the regulatory – was intense and of a high standard. It has become even more intense over the last few months.

It is an area that demands high-end thinking in complex philosophical, technical and regulatory domains. In short, it needs expertise. In the public domain, the spate of Lex Fridman podcasts has been excellent – the recent Altman and Yudkowsky programmes on ChatGPT, in particular, represent the two ends of the spectrum. I also recommend high-end AI experts such as Marcus, Brown, Hassabis, Karpathy, Lenat, Zaremba, Brockman and Tegmark. At the philosophical level, Chalmers, Floridi, Bostrom, Russell, Larson and no end of real experts have published good work in this field. They are not always easy to read, but are worth the effort if you want to enter the debate. As you can imagine, there is a wide spectrum from super-optimists to catastrophic pessimists and everything in between. However, AI experts tend to be very smart and very academic, and therefore often capable of justifying and taking very certain moral positions, even extremes, overdosing on their own certainty.

Debate in the culture

In popular culture, dozens of movies from Metropolis onwards have tackled the issues of mind and machine – more recently the excellent Her and Ex Machina – and many series, such as Black Mirror, have covered the moral dilemmas. Harari covers the issue in Homo Deus, and there's no end of books, some academic, some more readable and some pot-boilers. No one can say this topic has received no attention.


At one end there are the catastrophists like Yudkowsky, Leahy and Musk, clustered around the famous Open Letter. The people behind the letter, the Future of Life Institute, have an axe to grind and couldn't even get the letter right, as it had fake names, and some signatories have backtracked. There is also something odd about a self-selecting group claiming that we are all stupid and easily manipulated, whereas they are enlightened and will fix it, then release it for use on their say-so.


It is hard to align the catastrophists with the pessimists, like Noam Chomsky, who basically says, 'nothing to see here, move on'. There have been harbingers not of doom but of disinterest in LLMs, who thought they would produce little that was useful. We can, I think, safely say they were wrong. They may well be right that such models will reach limits of functionality, but in tandem with other tools they are here to stay.


Then there’s a long tail of pragmatists with varying levels of concern, from Larson to Hinton (his vision of an alternative Mortal Computer is worth listening to), as well as Sam Altman and Yann LeCun, who thinks the open letter is a regressive, knee-jerk reaction, akin to previous censorious attitudes to technology in history. Everyone of any worth or background in the field rightly sees problems; that is true of any technology, which is always a double-edged sword. Unlike the catastrophists, I have yet to read an unconditional optimist who sees absolutely no problems here. In education, for example, some big hitters, such as Bill Gates and Salman Khan, have put their shoulders into the task of realising the benefits this technology has for learning.

Regulatory bodies

Further good news is that the regulatory bodies publish both their thinking and their results, and these have been, on the whole, deep, reasoned and careful. Rushed legislation that is too rigid does not respond well to new advances and can cripple innovation, and I think they have been progressing well. I have written about this separately, but the US, UK, EU and China, all in their different ways, have something to offer, and international alignment seems to be emerging.

My pragmatist view

I am aligned with LeCun and Altman on this and sit in the middle. The technology changes quickly, and as it changes I take a dynamic view of these problems – a Bayesian view, if you will – revising my position in the light of new approaches, models, data and problems as they arise. This came after a lot of reading, listening and the completion of my books: ‘AI for Learning’, where I looked at a suite of ethical issues, then ‘Learning Technologies’, where I wrote in depth about the features of new technologies such as writing, printing, computers, the internet and AI, including their cultural impact and the tendency towards counter-reformations. One can see how all of these were met with opposition, even calls for them to be banned, certainly censored. I have lived through moral panics over calculators, computers, the internet, Wikipedia, social media, computer games and smartphones. There is always a ‘ban it’ movement.


I’d rather take my time, as scalable alignment is the real issue. It is not as if there is one single, universal set of social norms and values to align to. Everybody's in an echo chamber, say the people who think they're not. Alignment is therefore tricky and needs software fixes (guardrails), human training of models and moderation. It may even need adjustments for different cultural contexts.

This is a control issue, and we control the design and delivery of these systems: their high-level functions, representations and data. The only thing we don’t control is the optimisation. That’s why there is no chance, for now, that these systems will destroy our civilisation. As LeCun said, 'Some folks say "I'm scared of AGI." Are they scared of flying? No! Not because airplanes can't crash. But because engineers have made airliners very safe. Why would AI be any different? Why should AI engineers be more scared of AI than aircraft engineers were scared of flying?' The point is that we have successfully regulated, worldwide, cars, banking, pharma, encryption and the internet. We should look at this sector by sector. The one that does worry me is the military.


The big argument is that, without stopping it now, AGIs could learn to pursue goals which are undesirable (i.e. misaligned with our human goals). They may, for example, literally escape out onto the internet and autonomously cause chaos and real harm. I have seen no evidence that this is true, although the arguments are complex and I could be convinced. The emphasis in Generative AI is to switch from pure moderation, the game of swatting issues as they arise, to training through RLHF (Reinforcement Learning from Human Feedback). This makes sense.

Ghosts in the machine

I’m wary of people soaked in a sci-fi culture, or more recently a censorious culture, coming up with non-falsifiable dystopian arguments, suggestions and visions that have no basis in reality. It is easy to project ideas into the depths of a layered neural network, but much of that projection is of ghosts in the machine. I’m also wary of activists, not practitioners, with an anti-tech agenda who don’t really do ethics, in the sense of weighing up the benefits and downsides, but want to focus entirely on the downsides. And I’m wary of people bandying around general hypotheses on misinformation, alignment and bias without much in the way of empirical data or definitions.

In fact, the signs so far on GPT-4 are that it has the potential to do great good. Sure, you could, like Italy, ban it, and allow the EU, despite its enormous spend on research, to fall further behind as investors freeze investment across the EU. I’m with Bill Gates when he says the benefits could be global, with learning the No 1 application. In a world where there are often poorly trained teachers with classes of up to 50 children, and places where there is one doctor for every 10,000 people, the applications are obvious. This negativity over generative AI could do a lot of harm and have bad economic consequences. So let's not stop it for six months and ignore those opportunities.


Further reading

The Oxford Handbook of Ethics of AI by M. D. Dubber, F. Pasquale & S. Das

Ethics of artificial intelligence by Matthew S. Liao

AI Narratives: A history of imaginative thinking about intelligent machines by Stephen Cave, Kanta Dihal, & Sarah Dillon 

Human Compatible by Stuart Russell

Re-Engineering Humanity by Brett Frischmann and Evan Selinger

Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence by Patrick Lin, Keith Abney, and George A. Bekey

Artificial Intelligence Safety and Security edited by Roman V. Yampolskiy

Robot Rights by David J. Gunkel

Artificial Intelligence and Ethics: A Critical Edition by Heinz D. Kurz and Mark J. Hauser

Moral Machines: Teaching Robots Right from Wrong by Wendell Wallach and Colin Allen

Artificial Intelligence and the Future of Power: 5 Battlegrounds by Rajiv Malhotra

The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power by Shoshana Zuboff

AI Ethics by Mark Coeckelbergh

Saturday, April 01, 2023

Learning Designers will have to adapt or die. 10 ways to UPSKILL to AI….

Interactive Designers will have to adapt or die. AI and Generative AI (not the same thing) have started to play a major part in the online learning landscape, right across the learning journey. They are now being used for learner engagement, syllabus planning, core skills identification, learner support, content creation, assessment and so on. They will eat relentlessly into the traditional skills that have been in play for nearly 35 years. The old, core skillset was writing, media production, interactions and assessment, all delivered through an authoring language. It remained unchanged for nearly 30 years. In many ways it got worse, as the tools began to determine the content, so we got lots of cartoony content, with click-on speech bubbles, clumsy drag and drop, stock photos and MCQs. It was expensive and took months.

Every online learning company on the planet is now having strategy meetings to face up to the challenge.

This is not easy, as many of those involved in traditional online content creation will find it difficult to adapt. Others, however, will embrace the change. Companies will have to identify individuals with the skills and attitudes to deal with this new demand. This means understanding the new technology (not trivial), learning how to write for new dialogic tools, and dealing more with AI-aided design and curation, rather than doing this for themselves. It’s a radical shift.

In a Keynote over five years ago, I summarised this shift as follows...


This is the new version…


In another context, using a tool like ChatGPT meant not using traditional interactive designers, as the software largely does this job. It identifies the learning points, automatically creates the interactions, finds the curated links and assesses, formatively and summatively. It creates content in minutes, not months. This is the way online learning is going. I’m involved in several projects where Generative AI is appearing in real products. One was launched at BETT this week (Glean), others are on their way. This stuff is here, now.


The gear-shift in skills is interesting and, although still uncertain, here are some suggestions based on my concrete experience of making and observing this shift in a number of different companies.


1. Technical understanding

Designers, IDs, LXDs, Learning Engineers or whatever they’re called now or in the future, will need to know far more about what the software does, its functionality, strengths and weaknesses. In some large projects we have found that a knowledge of how the NLP (Natural Language Processing) works has been an invaluable skill, along with an ability to troubleshoot by diagnosing what the software can, or cannot, do. Those with some technical understanding fare better here, as they understand both the potential and the limitations.


This is not to say that you need to be able to code or have pure AI or data science skills. It does mean that you will have to know, in detail, how the software works. If it uses semantic techniques, make the effort to understand the approach, along with its weaknesses and strengths. With ChatGPT, for example, you really do need to keep up with the speed of releases: ChatGPT came out in late November 2022; GPT-4, an order of magnitude better, came out in March 2023. I see far too many people still using, and basing their opinions on, ChatGPT3. Keep up or become irrelevant.


In a series of category errors, the authors of the silly clickbait 'look at what I've just done' screenshots don't really understand the underlying technology. Most are from ChatGPT3, not 4 – like using a version of Wikipedia from 2003. The fine-tuning, RLHF and guardrailing are ignored, yet these are the things learning professionals need to know about the technology. This will take time. You always have to go through the clickbait phase to get to the serious comment and use cases. The good news is the amazing things people are doing with the tool, especially in learning.


Similarly with data analysis. With traditional online learning, the software largely delivers static pages with no real semantic understanding, adaptability or intelligence, hence the stickiness of SCORM. AI created content is very different and has a sort of ‘life of its own’, especially when it uses machine learning. At the very least, get to know what the major areas of AI are, how they work, and feel comfortable with the vocabulary.


2. Writing

Text remains the core medium in online learning. It remains the core medium in online activity generally. We have seen the pendulum swing towards video, graphics and audio but text will remain a strong medium, as we read faster than we listen, it is editable and searchable. That is why much social media and messaging is still text at heart. When I ran a large traditional online learning company we regarded writing as the key skill for IDs. We put people through literacy tests before they started, no matter what qualifications they had. It proved to be a good predictor, as writing is not just about turn of phrase and style, it is really about communications, purpose, order, logic and structure. I was never a fan of ‘storytelling’ or ‘creativity’ as identifiable skills.


However, the sort of writing one has to do in the new world of AI has more to do with being sensitive to what generative AI does, and with dialogue. Prompt writing is important, and this is where experienced IDs and graphic artists can excel. Prompting is best done by domain experts. A good graphic artist will know how to ask for the right image in terms of style, fonts, look and feel, with all the right parameters. Similarly with good IDs, who will know how to prompt for great questions, not just fact checking. It is a matter of taking your skills and applying them to these new tools and technologies.
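To make the idea concrete, here is a minimal sketch of what a structured, pedagogically informed prompt might look like when assembled in code. The field names, constraints and function are illustrative assumptions, not a prescribed schema or any particular product's API.

```python
# Hypothetical sketch: an ID encodes instructional intent as a structured
# prompt, rather than hand-writing each question. All names are illustrative.

def build_question_prompt(topic: str, audience: str, n_questions: int = 3) -> str:
    """Assemble a prompt that carries instructional-design intent."""
    constraints = [
        "Test understanding and application, not just fact recall.",
        "Use plain language suited to the stated audience.",
        "For each question, give a short model answer and one common misconception.",
    ]
    lines = [
        "You are an experienced instructional designer.",
        f"Write {n_questions} open-input questions on: {topic}.",
        f"Audience: {audience}.",
        "Constraints:",
    ]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

print(build_question_prompt("safe sharps handling", "student nurses"))
```

The point is that the designer's craft moves from writing the questions to writing the constraints under which good questions get generated.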


3. Interaction

Hopefully we will see a reduction in formulaic Multiple-Choice Question production. MCQs are difficult to write and often flawed. Then there is the gratuitously used ‘drag and drop’ and the hideously patronising ‘Let’s see what Philip, Alisha and Sue think of this…’, where you click on a face and get a speech bubble of text. I find that this is the area where most online learning really sucks. This, I think, will be an area of huge change, as the limited forms of MCQ start to be replaced by open input of words, numbers and short text answers. NLP allows us to interpret this text. There is also voice interaction to consider, which many will implement, so that the entire learning experience, all navigation and interaction, is voice-driven. This needs some extra skills in terms of managing expectations and dealing with the vagaries of speech recognition software. If you don’t know about Whisper, you should. Personalisation may also have to be considered. These tools are basically AI on tap. This software is far too complex to build on your own, yet it makes smart implementation at scale possible.
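To show the direction of travel from MCQ to open input, here is a minimal sketch of short-answer checking using fuzzy string matching from the Python standard library. Real NLP scoring would use semantic models; the function name and the 0.8 threshold are illustrative assumptions.

```python
# Illustrative sketch only: accept a typed answer if it is close enough
# to any model answer. Production systems would use semantic NLP instead.
from difflib import SequenceMatcher

def score_short_answer(learner_answer: str, model_answers: list[str],
                       threshold: float = 0.8) -> bool:
    """Return True if the learner's text nearly matches a model answer."""
    normalised = learner_answer.strip().lower()
    return any(
        SequenceMatcher(None, normalised, m.lower()).ratio() >= threshold
        for m in model_answers
    )

print(score_short_answer("photosynthesis", ["Photosynthesis"]))  # True
print(score_short_answer("gravity", ["Photosynthesis"]))         # False
```

Even this toy version tolerates capitalisation and minor typos, which is exactly what open input needs and what a fixed MCQ distractor list cannot do.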


4. Media production

As online learning became trapped in ‘media production’, most of the effort and budget went into the production of graphics (often illustrative and not meaningfully instructive), animation (often overworked) and video (not enough in itself). Media rich is not necessarily mind rich, and the research from Nass, Reeves, Mayer and many others shows that the excessive use of media can inhibit learning. Unfortunately, much of this research is ignored. We will see this change as the balance shifts towards effortful and more efficient learning. There will still be the need for good media production but it will lessen as AI becomes multimodal, creating text, images, audio and video, even 3D worlds.


Video is never enough in learning and needs to be supplemented by other forms of active learning. AI can do this, making video an even stronger medium. Curation strategies are also important. We often produce content that is already there but AI helps automatically link to content or provides tools for curating content. Lastly, a word on design thinking. The danger is in seeing every learning experience as a uniquely designed thing, to be subjected to an expensive design thinking process, when design can be embodied in good interface design. We are now in the world of rapid design by smart software that has done a lot of the A/B testing on a gargantuan scale. These methods look more and more out of date.


5. Assessment

So many online learning courses have a fairly arbitrary 70-80% pass threshold. The assessments are rarely the result of any serious thought about the actual level of competence needed, and if you don’t assess the other 20-30% it may, in healthcare, for example, kill someone. There are many ways in which assessment will be aided by AI: the push towards 100% competence, adaptive assessment, digital identification, open input, transfer, good generated rubrics and so on. This will be a feature of more adaptive, dialogue-driven AI content. Generative AI produces assessments at speed and with relevance to the competences. That was never the case in traditional online learning.
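The contrast between an arbitrary pass mark and the push towards 100% competence can be sketched as a simple mastery loop: each objective is retried until it is demonstrated, rather than waving a learner through at 80%. The objective names and the stubbed learner are invented for illustration.

```python
# Illustrative mastery-based assessment: every objective must be passed,
# so there is no untested 20-30%. All names here are hypothetical.

def run_mastery_assessment(objectives, ask):
    """Repeat each objective until answered correctly; return attempt counts."""
    attempts = {}
    for obj in objectives:
        count = 0
        while True:
            count += 1
            if ask(obj):  # True when the learner demonstrates competence
                break
        attempts[obj] = count
    return attempts

# Stubbed learner: fails 'hand hygiene' once, then succeeds.
scripted = {"hand hygiene": [False, True], "sharps disposal": [True]}
attempts = run_mastery_assessment(list(scripted), lambda obj: scripted[obj].pop(0))
print(attempts)  # {'hand hygiene': 2, 'sharps disposal': 1}
```

An adaptive system would also vary the difficulty and spacing of retries, but the principle is the same: competence, not a percentage.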


6. Data skills

SCORM is looking like an increasingly stupid limit on online learning. To be honest, it was from its inception – I was there. Completion is useful but rarely enough. It is important to supplement SCORM with far more detailed data on user behaviours. But even when data is plentiful, it needs to be turned into information, and visualised to make it useful – knowing how to visualise data is one useful skill. Information then has to be turned into knowledge and insights. This is where skills are often lacking. First you have to know the many different types of data in learning and how data sets are cleaned, then the techniques used to extract useful insights, often machine learning. You need to distinguish between data as the new oil and data as the new snake oil.


We take data, clean it, process it, then look for insights – clustering and other statistical techniques to find patterns and correlations. For example, do course completions correlate with an increase in sales in those retail outlets that complete the training? Training can then be seen as part of a business process where AI not only creates the learning but does the analysis, and that is all in a virtual and virtuous loop that informs and improves the business. It is not that you require deep data science skills, but you need to become aware of the possibilities of data production, the danger of GIGO (garbage in, garbage out) and the techniques used in this area. AI is now a feature of data-centric learning solutions. Acquire some basic knowledge of data science, nothing fancy, just get to know the lie of the land.
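The completions-and-sales question above is just a Pearson correlation. Here is a minimal sketch with invented figures, purely to show the shape of the calculation (nothing fancy, as promised):

```python
# Do outlets with higher course completion rates show bigger sales uplift?
# The numbers below are invented for illustration only.
from math import sqrt

completion_rate = [0.2, 0.5, 0.6, 0.8, 0.9]  # share of staff who finished
sales_uplift    = [1.0, 2.1, 2.4, 3.9, 4.2]  # % sales increase per outlet

n = len(completion_rate)
mx = sum(completion_rate) / n
my = sum(sales_uplift) / n
cov = sum((x - mx) * (y - my) for x, y in zip(completion_rate, sales_uplift))
sx = sqrt(sum((x - mx) ** 2 for x in completion_rate))
sy = sqrt(sum((y - my) ** 2 for y in sales_uplift))
r = cov / (sx * sy)
print(f"Pearson r = {r:.2f}")
```

Remember GIGO: a strong r on dirty or cherry-picked data is exactly the "new snake oil" to watch for, and correlation is not causation.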


7. User testing

In one major project, we produced so much content, so quickly, that the client had trouble keeping up with quality control at their end. We were making it faster than it could be tested! You will find that the QA process is very different, with quick access to the actual content, allowing for immediate testing. In fact, AI tends to produce fewer mistakes in my experience, as there is less human input, always a source of spelling, punctuation and other errors. I used to ask graphic artists to always cut and paste text, as it was a source of endless QA problems. The advantage of using AI generated content is that all sides can screen share to solve residual problems on the actual content seen by the learner. We completed one large project without a single face-to-face meeting. This quick production also opens up the possibility of quick testing with real learners.


8. Learning theory - pedAIgogy

In my experience, few interactive designers can name many researchers or identify key pieces of research on, let's say, the optimal number of options in an MCQ (answer at foot of this article), retrieval practice, length of video, effects of redundancy, spaced-practice theory, or even the rudiments of how memory works (episodic v semantic). This is elementary stuff but it is rarely taken seriously. With AI you can build pedagogy, or what I call pedAIgogy, into the prompting and therefore the learning experiences. We are doing precisely this on one product.
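Spaced practice, one of the pieces of research mentioned above, is simple enough to build directly into software. Here is a minimal Leitner-style sketch: correct answers promote an item to a less frequent review box, errors send it back to daily review. The box count and intervals are illustrative assumptions, not validated values.

```python
# Illustrative Leitner-style spaced practice. Intervals (in days) per box
# are assumptions for the sketch, not research-derived constants.
INTERVALS = {1: 1, 2: 3, 3: 7, 4: 14, 5: 30}

def update_box(box: int, correct: bool) -> int:
    """Promote on success (capped at box 5), demote to box 1 on failure."""
    return min(box + 1, 5) if correct else 1

# One item's history: right, right, wrong, right.
box = 1
for correct in [True, True, False, True]:
    box = update_box(box, correct)
print(box, INTERVALS[box])  # prints "2 3": review again in 3 days
```

This is the sense in which pedagogy can live in the software itself, rather than in a designer's good intentions.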


With the implementation of AI, the AI HAS to embody good pedagogic practice. Bill Gates recently published an excellent piece on Generative AI saying that learning will be its greatest benefit, but the piece was marred by him pushing ‘learning styles’. Greg Brockman, of OpenAI, did the same, retweeting a tool based on learning styles. We know better than this and can build good, well-researched learning practice into the software. Hopefully, this will drive online learning away from long-winded projects that take months to complete, towards production that takes minutes, not months, and learning experiences that focus on learning, not appearance.

see PedAIgogy


9. Agile production

Communication with AI developers and data scientists is a challenge. They know a lot about the software but often little about learning and the goals. On the other hand, designers know a lot about communications, learning and goals. Agile techniques, with a shared whiteboard, scrums and superfast production, are useful. I love these. There are formal agile techniques around identifying the user story, extracting features, then coming to agreed tasks. Comms are tougher in this world, so learn to be forgiving. There will inevitably be friction between the old and the new. Treat that as normal.


Then there’s communication with the client and SMEs. This can be particularly difficult, as some of the output is AI generated, and as AI is not remotely human (not conscious or cognitive) it can produce mistakes. The good news is that these are now rare. You learn to deal with this when you work in this field. To be honest, all of those learning folk telling me that AI shouldn't be used in learning, as it sometimes makes an error or two, will happily use content with learning styles, Myers-Briggs, Bloom's pyramid, Maslow and no end of bogus theory and content in courses.


This new approach is often not easy for clients to understand, as they will be used to design documents, scripts and traditional QA techniques. I once had AI automatically produce a link for the word ‘blow’, a technique nurses ask of young patients when they’re using sharps or needles. The AI linked to the Wikipedia page for ‘blow’ – which was cocaine – easily remedied but odd. You have to be careful, but that has always been the case. I can barely think of a single training project where the SME content was spot on.


The great news is that this all means we can reduce iterations with SMEs, even cut them out altogether, as the software often has more knowledge and can write it to any level or style. The cause of much of the high cost of online learning is expensive SMEs and endless iterations. If the AI is identifying learning points and curated content, using already approved documents, PPTs and videos, the need for SME input is lessened. This saves a ton of time and money.


10. Make the leap

AI is here. We are, at last, emerging from a 30 year paradigm of media production and multiple choice questions, in largely flat and unintelligent learning experiences, towards smart, intelligent online learning that behaves more like a good teacher, where you are taught as an individual, with a personalised experience, challenged, and, rather than endlessly choosing from lists, engaged in effortful learning, using dialogue, even voice. As a Learning Designer, Interactive Designer, Project Manager, Producer, whatever, this is the most exciting thing to have happened in the last 30 years of learning.
Most of the Interactive Designers I have known, worked with and hired over the last 30-plus years have been skilled people, sensitive to the needs of learners, but we must always be willing to 'learn', for that is our vocation. To stop learning is to do learning a disservice. So make the leap!

In addition, those in HR and L&D will have to get to grips with AI. It will change the very nature of the workforce, which is our business. This means it will change WHY we learn, WHAT we learn and HOW we learn. Almost all online experiences are now mediated by AI – Facebook, Twitter, Instagram, Amazon, Netflix... except in learning! But that has just dramatically changed. Generative AI heralds a new era, a renaissance in learning, where we can learn anything, at any time, anywhere, using sophisticated AI tutors. What is needed is a change in mindset, as well as tools and skills. It may be difficult to adapt to this new world, where many aspects of design will be automated. I suspect that it will lead to a swing away from souped-up graphic design back towards learning. That would be a good thing.