Tuesday, May 30, 2023

Africa is not a place and doesn’t need white saviours in learning

Africa is a vast, varied and vexing place, but there is no essential thing that is Africa. Dipo Faloyin, the Nigerian writer, sums it up in his very personal book ‘Africa Is Not a Country’. Forget the images you see on TV of safaris, wars, child soldiers and famine, as most of that is seen through the eyes of visitors who stay in comfortable hotels, working for organisations with a saviour agenda. There are countries north of the Sahara that are very different from those immediately below. Egypt, Nigeria and South Africa are radically different places at the top, middle and bottom of the continent, with substantial economies. There are many Africas; there is no ‘Africa’.


I’m just back from Senegal, this year’s host country for E-learning Africa, an event I have been to many times. It remains the defining learning technology event on the continent. The brainchild of Rebecca Stromeyer, it goes way beyond the normal conference format: lively, intense and friendly networking, along with a session for Education Ministers from all over Africa and a ‘challenging’ debate, in which I’ve participated several times.


You can’t shy away from the chaos when travelling in Senegal. One very large group turned up to a hotel that did not exist; they had been scammed. We arrived at our hotel at 2am to find that, despite it having been booked and paid for, our room had been sold on and we had to find another. Walking around Dakar at 3.30am looking for a taxi is no fun, especially when every taxi driver we used required Kissinger-level negotiations on price. On one occasion, having agreed a generous fare, the driver turned round and claimed that the fare was per person! My son returned to his room one night and found someone else in his hotel bed. The lack of organisational cohesion and training is tangible. On the whole the people are wonderful and chilled, but I don’t want to varnish over this stuff, as there is no way the country can develop successful tourism with this level of chaos and hassle.


I have travelled a lot and never seen so many police, even in and around the conference.

I have also never stepped through so many scanners. Every building has one, yet despite my setting off the red lights and bleeps they just wave you through; many are not looking at the scanner screen at all, bored and on their smartphones. Senegal is experiencing political unrest as I write, with street protests and a ministerial building burned after the opposition leader Ousmane Sonko was arrested on drummed-up sexual assault charges. We were stopped at police roadblocks twice, and on the way to the airport we saw people being made to lie down on the road in a massive traffic holdup.


There is lots of equipment here but little training, so nothing really works, and when it breaks it is catastrophic. Let me give you an example. The flight from Senegal was late, which meant we couldn’t make our connection. The TAP (Air Portugal) desk at the airport was staffed by someone who quite simply couldn’t be bothered to help. She sat there, arrogantly waving us away with her hand, then demanded that we pay for another flight! We confirmed in Lisbon that she could have arranged our second leg. The problem is deep-rooted nepotism: she has the job but can’t operate the computer, so just sits there taking the salary. I can’t tell you how many times we had our credit cards declined because the signal or card machine didn’t work.


You have to wrestle with these contradictions. The fundamental fact is that this is a rich continent full of poor people. It is easy to blame colonialism and see everyone as a victim, but there is obvious money around. I saw a blue Bentley outside the restaurant we were eating in, an old man inside buying Moët for the six women at his table, and huge mansions lining the coast. There is wealth here, with huge sums at the peak of the pyramid, a tiny middle class and a vast number who remain poor. The cash is being spent on vanity projects and a vast army and police force who keep the oligarchy in power.


OK, so that is the economic and political backdrop. As I was here to deliver some sessions on AI, how do you position the technology?


The white saviour complex sees ‘others’ as the only hope for these hapless Africans. Yet it is the white saviour complex that caricatures them as unable to help themselves, people to be saved from themselves. As I walked through the real market in Dakar, not the ones selling beads back to Western tourists (a neat and startling reversal), I saw a highly entrepreneurial society. A lorry load of old spare parts was being dumped out onto the road while people bid for them. Recycling is a necessity here. This is not a functionally helpless society; it is a society held back by its own leaders, who strip it bare. The tech kids I meet here are highly entrepreneurial and will move things forward. It is the anti-business, keep-them-on-an-NGO-ride attitude that will hold Africa back. NGOs are band-aids, not the answer. Since the 1980s, it is the NGOs who seem to be full of colonial rhetoric, the new institutional colonialism. The Geldof-inspired ‘Do They Know It’s Christmas?’ is perhaps the most condescending song title in history. Africa has more Christians than any other continent, who maybe knew it was Christmas; it also has 450 million Muslims – I wonder how they felt? The hideous, theatrical spectacle of celebrity activists grew out of that time, and the simplistic rhetoric of binaries continues: northern hemisphere–southern hemisphere, rich–poor, white–black, evil–good.


In any case, we’re not here as tourists; we’re here to get something done. Education and health remain problematic in all poor countries, especially countries like Senegal, where the politicians are clearly rapacious and corrupt. I was the only person in the hall who refused to stand for the Prime Minister after we had waited over an hour and a half for him to grace us with his banal presence. Being here as a white saviour was not my intention. I have been here before, in the African Union building in Addis Ababa, Ethiopia, debating that Africa needs vocational skills, not abstract university degrees – we won. I have always supported the spread of free tools and learning on the internet to reach the poor. This has happened with finance; it can happen with education. The infrastructure problem is being solved, via satellites and ever-cheaper smartphones. Sure, the digital divide exists, but constantly painting a picture of a glass half empty while it is being filled is another example of white saviour behaviour.


Michael, my Kenyan partner in crime in the debate, does something about the problems through his ‘4gotten Bottomillions’ (4BM), Kenya’s largest and most trusted WhatsApp platform, which connects the unqualified and poor to real work and jobs. He is critical of white saviour attacks on the US tech companies who pay for data work in Kenya. This, for him, is rich white virtue signalling that denies poor Africans a living. These companies pay above the minimum wage and the work is seen as lucrative. Michael wants to empower Africans, not disempower them through pity. His message to everyone is to suck it up and get on with things, not to wait on grants and NGO benefactors. Until that happens, Africa will remain in psychological chains to others.


My first event was a three-hour workshop on AI, as an astounding technology that really can deliver in education and health. It is already being used by hundreds of millions worldwide to increase productivity and deliver smart advice, support and learning free of charge. This is exactly what Africa needs. Let the young use it; don’t cut Africa out of this progress, as that would put an entire continent in the company of North Korea and Iran, the two countries that banned it (even Italy saw the error of its ways).


I did another session on online language learning, where the presenters were asking young Africans to learn, and get tested on, either English or French. French, they claimed, is the language of culture. Yes – but whose culture? The French. My position was that Generative AI already delivers in over 100 languages, shortly moving towards 1000, many of them African. We have the chance of a universal teacher on any subject, with strong tutoring and teaching skills, available to anyone, anywhere with an internet connection, for free, in any language.

The solution is to get this fantastic, free and powerful educational software everywhere, not just Africa. Stop using specious arguments about northern hemisphere bias and footnoted ethical concerns to stop action. Let Africans decide what’s good for them and allow Africa to use AI in its local context. Getting this stuff into the hands of the young and poor is the answer.


My final event was the Big Debate, on whether ‘AI will do more harm than good in Africa’. Michael and I lost. Why? The audience was largely either the donors or the receivers of white saviour aid. They do this for a living, and anything that threatens to bypass them and go straight to the working poor is seen as a threat. They need to be in control, seeing themselves as providing educational largesse. You get this a lot at learning technology conferences: people who fundamentally don’t like technology.

I’m well aware that the sermon has always been the favoured propaganda method of the saviour, and I see that I too am part of the problem at such events. But the lack of honesty about the saviour complex has become a real problem. My hope is that this powerful technology bypasses the oligarchies and saviours and gets into the hands of the people who matter.

The opposition played the usual cards. First, the argument from authority, which I always find disturbing: ‘I’m a Professor, therefore I’m right, you’re wrong’. My friend Mark, on the opposition side, gave an eloquent speech brutally attacking the culture of screens and smartphones. I was sitting behind the lectern and noticed that he read the entire speech, every last word, verbatim, from, guess what… a smartphone! It’s OK for rich white people to use the full stack of tech but not Africa. I rest my case.

Thursday, May 25, 2023

AI in Africa...

I’m in Africa because it’s important to be in a place that gives you perspective. All the usual stuff happened when we travelled: arriving at the hotel to find there was no room, wandering about Dakar finding a taxi to a new hotel, being stopped by the police! In any case, we’re in this amazing city and you have to go with the flow.

E-learning Africa, run by the irrepressible Rebecca Stromeyer and her team, has been doing this for many years, because they think it matters. It’s easy to go to the big conferences in Europe and the US, but what you get is a one-eyed view of the world, where we assume that everyone works in offices or at home on Zoom. The interest in the topic at our workshop was intense, and I’m also in a session on AI for language learning, as well as a huge formal debate on the role of AI in Africa.


At a conference in these places, you are faced with the reality of the rest of the world. I have been to Uganda, Rwanda, Namibia, Ethiopia and, this year, Senegal. It’s a big world out there, and the worst conferences I’ve attended have been in two of the more hideous places on our planet: Orlando and Vegas, the first seeming like America embalmed, the second some sort of theme park from Hell. Come to Africa and see the real, unvarnished world.


When this new Age of AI struck like lightning in November 2022, I had, for years, been extolling the possibilities of its massive impact on learning and health. That impact is not about the already wealthy, the graduate generation in the Northern Hemisphere; it’s about the rest of the world. From the holy pronouncements on ethics and AI coming out of Brussels, you’d think they’d have a little more humility, seeing as the EU is well under 10% of the world’s population. The US is only 4.25%. They both need to get out more.


When you have 1 teacher for classes of 50, or 1 doctor for 10,000 people, you have a different perspective on life. Technology matters more here, which is why Africa leapfrogged the rest of the world on mobile – it matters when your precarious employment depends on that next text message, or your fragile finances need to be handled efficiently and cheaply from the rural world in which you live.


I do believe that AI is finally the technology that enfranchises almost everyone. It has the ability to raise productivity enough to pay for education and healthcare globally. It can deliver the sort of Universal Teacher and Universal Doctor technology that is finally on the horizon: a teacher that knows more than any teacher, in any subject, has the pedagogic knowledge built into its methods, and is tirelessly friendly and supportive, 24/7, anywhere in the world.


But what really matters is delivery in all languages. We live with the legacy of colonial languages. In Africa they are English and French, in South America Spanish and Portuguese. Even in the rest of the world, English has become the lingua franca. Imagine a technology that can teach and deliver healthcare in anyone’s first language. Here’s the good news: that too is on the horizon. Not only was a wide range of languages available from the get-go, that number is expanding into the thousands. Far from ignoring minority languages, it may save them. Generative AI is a modern Babel.


Generative AI through ChatGPT was launched in 95 languages, allowing one to write, summarise, error-check and translate between them. That is astonishing, and the number has since grown to include many more. Translation has been one of the most used functions, with significant increases in productivity. It was also launched knowing 12 computer languages, allowing translation between them. One amazing feature of Generative AI that slipped under the radar was its astounding capability in speech-to-text and text-to-speech. Whisper from OpenAI was world-beating and free. What a start.


On the day I flew to Senegal, Yann LeCun announced a new multilingual language model that works in 4000 languages! There are only around 7000 languages in the world, so that’s more than a great start; it’s into majority territory. These are early days, but the promise is now here that AI will open up education, health and other areas of human endeavour to everyone, no matter where they live or what language they speak. Once we can all understand each other through real-time translation, perhaps we have a chance of better understanding and co-existing with each other.
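As a quick sanity check on the scale of that claim, the arithmetic can be sketched in a few lines of Python (the counts are the rough figures quoted above, not authoritative tallies):

```python
# Rough language-coverage arithmetic, using the approximate
# figures quoted in the text above.
world_languages = 7000    # rough count of living languages
at_launch = 95            # languages ChatGPT launched with
new_model = 4000          # languages claimed for the new model

launch_share = at_launch / world_languages
new_share = new_model / world_languages

print(f"Coverage at launch: {launch_share:.1%}")  # prints 1.4%
print(f"New model coverage: {new_share:.1%}")     # prints 57.1%
```

Even as a back-of-the-envelope figure, the jump from around 1% to over half of the world’s languages is what makes the claim striking.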


This really matters, as technology tends to be skewed towards English in particular, but also the usual suspects of largely northern hemisphere languages. If we are to see AI as a genuinely global and liberating technology, languages do matter. What these large language models have shown us is that language really does matter in learning. Wittgenstein and Vygotsky were right: it is fundamental to learning and intelligence, fundamental to being human. It is not that intelligence produces language but that language produces intelligence.


So let’s not allow the over-developed world to get on their moral high horses and hold back a technology that promises so much to so many. A vast army of amateur ethicists (anyone can be one by simply saying they are one) seems determined to hold us back, with so much verbiage on the subject that you could train a Large Language Model on that alone. Unfortunately, it would be an aloof, finger-wagging, moralising bore.


In truth, for those most in need, who often find themselves furthest away from education and healthcare, its power and reach are potentially immense.

Sunday, May 21, 2023

21st Century Skills – the last refuge of human exceptionalism!

The rhetoric around AI and automation for the last two decades was “don’t worry – AI and automation will only tackle the routine jobs”. The whole narrative was owned by the professional classes and centred on workers losing their jobs, not the graduate class working at home on Zoom. We have been quick to forget how completely wrong we were. The 21st century skills rhetoric has become the last refuge of human exceptionalism, the topic I covered in my last blog.


Stages of 21st C grief

Predictably, the professional class, especially learning professionals, goes through the five stages of grief when faced with new technology:


Denial – bewildered, they want to ridicule it through clickbait examples of how bad it is, then try to block and ban it

Anger – the sabre-rattling stage, where mostly third-rate academics write open letters warning people of dangers only ‘they’ perceive

Bargaining – form ethics committees, write frameworks and generally do their report-writing thing

Depression – oh dear, we do actually have to learn to work with this stuff

Acceptance – alright, I give in


In denial

We’re still in the ‘denial’ stage, taking the usual refuge in the rhetoric of vague 21st C skills. But the mistake is to:

1) think these skills are new

2) think we have a clear definition

3) think of them as skills in themselves, as they mostly need domain knowledge

4) assume that educators have these skills, even if they did exist in a clean form

5) assume that educators know how to teach these skills

6) assume that they are wholly unique to us humans


The whole thing goes back to the late 1990s, when we were faced with a new millennium, but a key document was ‘Partnership for 21st Century Skills’, published in 2002 by a coalition of education organisations in the US. This report put great emphasis on the ‘C’ words and saw them as essential to prepare students for the challenges and opportunities of the 21st century.


Then the bandwagon, report-writing brigade – organisations such as the Organisation for Economic Co-operation and Development (OECD) and the World Economic Forum (WEF) – began pumping out rhetoric highlighting the significance of 21st-century skills in their discussions of education and workforce readiness.


Nothing new

Mirjam Neelen and Paul A. Kirschner rightly pointed out, in their excellent blog ‘21st century skills don’t exist. So why do we need them?’ (2016), that there is nothing new about these skills, nothing 21st century about them. This was all laid out in the first major counter-attack, Rotherham and Willingham’s ‘“21st-century” skills. Not new, but a worthy challenge’ (2010). I have been making the same point for the last 20 years.


Problem of definition

As they pointed out, the first problem, as expected, is definition. These skills often appear as a list of five or so things on PowerPoint, most starting with the letter ‘C’: communication, collaboration, critical thinking, creativity… then it tails off into the vagueness of problem solving and innovation. There are literally dozens of lists and variations.


Skills are notoriously difficult to pin down, as they are not, as Bloom suggested, represented in some simplistic pyramid. They are complex sets of personality traits, motivations, learned knowledge and behaviour – it is the sum of lots of integrated parts that results in an ability, competence or performance. The fact that skills are complicated, mostly involving domain knowledge, makes these very vague 21st C skills difficult to extract as separate ‘skills’. It is doubtful whether they can be abstracted from domain knowledge at all. Knowledge is marbled like fat into the meat of skills.


This 21st C agenda can also be quite dangerous. It can harm learners, especially the disadvantaged, by giving them the illusion of progress. Knowledge and skills are not separable like oil and water; they are deeply entwined, so most critical thinking and problem solving requires a depth of knowledge. The 21st C skills agenda led to a hideous focus on ‘Leadership’ in L&D. We assumed that skills were intellectual in some way, so introduced elitist, hierarchical training at the expense of real skills and competences for all. We became exclusive, not inclusive. That did not go well: productivity did not increase, and although we have lots of rhetoric and training on Leadership, we seem to have precious little of it.


AI hits the fan

We’re 23% into the 21st century, and 21st C skills have become the last bastion of human exceptionalism (see my last blog). The 21st C skills mantra has now turned into an empty trope in the Age of AI, especially among educators and learning professionals who needed a serious-sounding phrase to rattle around in conferences and reports. It is usually found in some overlong, text-heavy PowerPoint presentation at a conference, accompanied by a cocktail of ‘C’ words: communication, collaboration, creativity, critical thinking. Can the real world really be that alliterative?



Communication

Young people communicate every few minutes – it’s an obsession. They text, message, chat, post, comment and WhatsApp, using Instagram, Facebook, TikTok and tools you may never have heard of, in various media, including images and video. Note the absence of email and Twitter, the only places you’re likely to hear of 21st C skills. This generation grew up in the 21st century. Never have so many communicated so often with so many. We readily forget that within a few generations we have moved from letters to the telegraph, then the telephone, and now free communications, in a bewildering array of types and formats, to almost anyone on the planet. Technology not only increases the possibilities in communication but also the skills, through use.


In fact, one of the features of Generative AI, such as ChatGPT, is showing us how poor our communication skills can be. It invariably improves almost anything we want to write and send.



Collaboration

There’s an abundance of collaboration online, where we share, tag, upload and download experiences, comments, photographs, video, media and now generative AI tools. We collaborate closely in teams, often international, when playing computer games and in the workplace. Never have we shared so much, so often, in so many different ways.


Then along comes someone who wants to teach this collaboration as a 21st C skill, usually in a classroom, where much of this technology is banned, ignoring how collaboration works in the real world. I’m hugely amused at this conceit, that we adults, especially in education, think we always have these skills. There is no area of human endeavour less collaborative than education. Teaching and lecturing are largely lone-wolf activities in classrooms. Schools, colleges and universities share little. Educational professionals are deeply suspicious of anything produced outside their classroom or institution. The culture of NIH (Not Invented Here) is endemic. Many educators, far from being consistently collaborative, are doggedly fixated on delivery by the individual.


With AI, especially generative AI, we have collaboration at a global, human level: a huge, sophisticated and elegant collective model, trained on the sum of human knowledge. Generative AI is collaboration writ large; the whole of our collaborative cultural capital is being put to good use and made available to the individual.


Critical thinking 

Critical thinking is the Little Bighorn of human exceptionalism, General Custer’s last stand. Academics, in particular, are fond of these two words, as they see themselves as having these skills in abundance. To be fair, soft skills and empathy are not their strong point, so they need something to fall back on.


One species of critical thinking is often well taught in good schools and universities, but only at the level of research, text and essays. As a practical skill it has all but disappeared in theory-dominated courses. It needs high-quality teaching, and the whole curriculum and system of assessment needs to adjust to this need. As Arum has shown, there is evidence that in our universities this is not happening. Arum and Roksa (2011), in a study that tracked a nationally representative sample of more than 2,000 students who entered 24 four-year colleges, showed that universities were failing badly on the three skills they studied: critical thinking, complex reasoning and communication. This research, along with similar evidence, is laid out in their book Academically Adrift. Even research is now under threat, as papers appear where the hypothesis, the data work, even the writing is being done by AI.


The idea that critical thinking is a uniquely human activity is being challenged by tools that do it better, whether in mathematics or other fields. It is now clear that the inference capabilities of AI will outclass humans and that we are no longer the sole producers of critical analysis. We saw how AI became highly competent at all human games. AI has already defined the 3D structure of the 200 million known proteins, a task that, according to Hassabis, the CEO of DeepMind, would have taken us a billion years using traditional human methods. Its data analysis abilities have become superb, and the growth of the inference side of AI is accelerating fast. We can expect current models to become good at this very quickly.
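The Hassabis figure quoted above is worth unpacking with simple arithmetic: a billion researcher-years spread across 200 million proteins works out at about five years per structure, roughly what a single protein structure has traditionally taken to solve. A minimal sketch, using only the numbers in the paragraph:

```python
# Back-of-the-envelope check on the AlphaFold claim quoted above:
# ~1 billion years of traditional work for ~200 million proteins.
total_years = 1_000_000_000
known_proteins = 200_000_000

years_per_protein = total_years / known_proteins
print(years_per_protein)  # prints 5.0, i.e. ~5 years of lab work per structure
```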



Creativity

The modern understanding of the artist as a unique individual with a distinct creative vision began to emerge during the Renaissance in Europe, strengthened when the Romantic conception of the artist emerged in the late 18th and early 19th centuries as a reaction to rationalism and industrialisation. Romanticism placed a strong emphasis on individualism, emotion, imagination and the awe-inspiring power of nature. Within this context, the creative artist was seen as a visionary figure, capable of tapping into the depths of human experience and expressing profound emotions and ideas. Technology, of course, has always been part of artistic production and expression, from the tools of painting to photography, synthesisers and other generative tools. The idea that art is some sort of divine spark has long gone. Suddenly, generative AI was winning photography competitions, producing music and being generally ‘creative’, another word that is difficult to define.


Can they be independently taught?

Of course, those who are most vociferous on the subject of 21st C skills are often those who ‘write’ most about them but are the least skilled in them. Education, whether through lectures in universities or one teacher to 30-plus kids in a classroom, is not the way to teach any of these skills, even if they did exist. Meanwhile, education has continued to scrap vocational learning, starve it of funding and push for even more abstract schooling and higher education. So-called 21st C skills were fine for others, but not them.


Isn’t all this talk of 21st C skills just a rather old, top-down, command-and-control idea – that we know what’s best for others? Isn’t it just the old master-pupil model dressed up in new clothes? Don’t the pupils know a tad more about digital skills than the masters?


It is an illusion that these skills were ever, or even can be, taught at school. Teachers have enough on their plate without being given this burden. I’ve seen no evidence that teachers have the disposition, or training, to teach these skills. In fact, in universities, I’d argue that smart, highly analytic, research-driven academics tend, in my experience, to have low skills in these areas. Formal educational environments are not the answer. Pushing rounded, sophisticated, informal skills into a square, subject-defined environment is not the answer. It is our schools and universities, not young people, that need to be dragged into the 21st century. The change will come through mass adoption and practice, not formal education.



There’s a brazen conceit here, that educators know, with certainty, that these are the chosen skills for the next 100 years. Are we simply fetishising the skills of the current management class? Was there a sudden break between these skills in the last century compared to this century? No. What’s changed is the need to understand the wider range of future possibilities and stop relying on human exceptionalism.



Neelen, M. & Kirschner, P.A. (2016). ‘21st century skills don’t exist. So why do we need them?’ https://3starlearningexperiences.wordpress.com/2016/11/01/21st-century-skills-dont-exist-so-why-do-we-need-them/

Rotherham, A.J. & Willingham, D.T. (2010). ‘“21st-century” skills. Not new, but a worthy challenge.’ American Educator, 17-20. https://www.aft.org/sites/default/files/periodicals/RotherhamWillingham.pdf

Arum, R. & Roksa, J. (2011). Academically Adrift: Limited Learning on College Campuses. University of Chicago Press.

Friday, May 19, 2023

Human Exceptionalism – we need to get over ourselves

Human exceptionalism is the belief that we are unique. We evolved to see ourselves in this way. This extends to kinship with family, other social groups and our wider species Homo sapiens. Yet, since we started to create technology, especially writing, we have discovered, time and time again, through reflection and inquiry, that we are not as ‘exceptional’ as we assumed.

Copernicus threw our little sphere out into the whirl of the solar system, de-anchoring us from our central place in the known universe. What we had evolved to see was the sun moving round the earth, so the alternative was counterintuitive and shocking. The sun does not rise, it is the earth that falls. Yet symbolic writing – mathematics and data – was definitive on the matter. ‘Flat earther’ is now a pejorative term. 


Darwin then showed that a creator was not necessary for design and that we were just another animal, the product of a process of genetic accidents and selection. There was no designer, only the ‘blind watchmaking’ of the evolutionary process. In fact, our cognitive, affective and psychomotor capabilities are limited by that evolutionary process – limited working memory, forgetting, cognitive biases, fallible and failing long-term memories, an inability to network efficiently, sleep and death.


Physically, technology trounced us. It was thought we would die if we travelled by train, as we couldn’t handle the speed. Within 66 years of the Wright Brothers’ first flight we had landed on the Moon. Physically, we are generally weaker, slower and less accurate than the machines we create, and that gap is widening. There is no place on earth we can’t get to, no physical task too big, as technology has extended our physical capabilities. Although, to be fair, the humanoid robot, even the self-driving car, is still some way off. Strength and precision were not our strong points. It was mind.


For a long time the focus was on robots and the mechanical side of AI, but it turns out that the real advances in AI were in the psychological domain. So we turned to cognitive qualities, like critical thinking, collaboration and creativity. Surely technology couldn’t encroach on these unique skills? Once again, technology surprised us all – it can.



Economies have seen significant slowdowns in productivity, which has puzzled many. AI is one of the few areas where increased productivity is clear and measurable. The internet, first with Google search, has massively reduced the effort needed to find information and services. It has created an entire economy based on efficient services, from Amazon to Uber.


Productivity has now been turbo-charged with generative AI, where tasks that took hours can be done in minutes, if not seconds. The tasks already identified and researched include research, brainstorming, report writing, copywriting, coding, image creation and translation. This is merely the tip of the potential iceberg.



The real gains come in replacing what expensively educated human beings currently do – professional work such as that of managers, consultants, accountants, lawyers, teachers, doctors and so on.


There is nothing uniquely human about management, teaching, medicine, accountancy or the law. The professions really did turn out to be a conspiracy against the laity, creating a class that traded on being uniquely analytic and smart. The rewards went to those who worked with their head, while those that worked with their heart (social care, nursery staff) or hand (workers with physical jobs) were left behind in terms of pay and advancement. A rebalancing will happen.



Productivity has been levelling off for some time, yet the research is already showing significant improvements in speed and quality with AI. Gains in this area have been building for a while, with Google search and tools such as word processors and spreadsheets. Almost every manager or administrator who has contact with generative AI sees an immediate increase in productivity.



A whole layer of consultants was created, paid handsomely to do organisational analysis and make recommendations. It turns out that they are highly vulnerable to replacement by AI that has wider knowledge, better data analysis and clearer ways to recommend courses of action to organisations. We literally have consultancy on demand, for free.



Compared to the huge amount of time it used to take researchers, Google Scholar reduced the time taken to find papers and citations by orders of magnitude. We have also seen advances such as AlphaFold, which predicted 3D structures for 200 million proteins – saving what has been estimated at up to a billion years of research time – thereby advancing science, particularly medicine, by disintermediating millions of hours of research and lab work. We will now see a further, more significant release of potential productivity by researchers through AI, as its data capabilities, including synthetic data, grow to outpace human research in key areas. The bloated and expensive university system can be made far cheaper and more efficient.



If a virtual teacher increases the performance of learners, why would we persevere with teachers alone? These are areas where demand is huge and universal, yet supply limited and expensive. Forget the digital divide – the educational divide is far worse. The idea of a Universal Teacher is firmly on the horizon: one with a degree in every subject and the pedagogic abilities of the best teacher, that pedagogy built into a personalised teaching process, available 24/7, to anyone, anywhere.



The average GP has close to a 5% error rate in diagnosis. If a virtual doctor gets below that, let’s say 2%, why would we persevere with general physicians? We were more than happy to replace physical labour with smart machines, so why not knowledge workers? Why should they get a pass when others did not?


This is why Bill Gates sees learning and healthcare as the two big beneficiaries of AI.

Being a teacher or physician is a means to an end. If the means produces better outcomes, we should embrace its potential.


Is there anything left?

We saw ourselves as uniquely created creatures with exceptional abilities. Skills and abilities were seen as exclusively human. Yet, as we moved from the fields to factories and hardly anyone worked in agriculture, those skills were largely replaced by physical and psychological technology. When manufacturing skills were replaced by machines, we moved from factories to offices. We then thought that we’d all be invincible knowledge workers: high-reward professionals who work in offices, or at home on Zoom. We are not. In fact, we are just as vulnerable to technological replacement as those previous groups.


What we did not foresee was that writing, then printing, then digital storage had captured much of our expertise, and that it could be mined by AI to generate expertise on demand, for anyone. To date that expertise has been expensive and lengthy to replicate in humans, so it remained scarce – too few doctors, teachers, lawyers, accountants – and the professions, with their exceptionally long and expensive training periods, were not available to all. That captured cultural capital can now be used to benefit us all.


We have to accept our limitations and deal with this new future as best we can, to suit our needs. This is especially true in ‘learning’, a social good that through digital disintermediation can be democratised and brought to far greater numbers at very low cost. We need to get over ourselves. We are not as exceptional as we thought, in ANY domain. 

Tuesday, May 16, 2023

Smart algorithms work for Google, Facebook & Amazon – can they work for learning?

Algorithms are everywhere
Christopher Steiner, in ‘Automate This: How Algorithms Came to Rule the World’, describes how algorithms have pretty much got everywhere these days. Use the web, you’re using algorithms. Engage with the financial world, you’re engaging with a world driven by algorithms. Look at any image on any screen, use your mobile, satnav or any other piece of technology, and you’re in the world of algorithms. From markets to medicine, the 21st century is being shaped by the power of algorithms to do things faster, cheaper and better than we humans, and their reach is getting bigger every day.
Learning is algorithmic
Our brains clearly take inputs but it is the processing, especially deep processing, that shapes understanding, memory and recall. The brain is not a vast warehouse that stores and recalls memories like video sequences and text, it is a complex, algorithmic organ. However, it would be a mistake to see the brain as a ‘simple’ set of algorithms for learning. Take ‘memory’ alone. We have many types of memory; sensory memory, working memory, long-term memory, flash-bulb memory, memory for faces, and remarkably, memory for remembering future events. Huge gains have been made in understanding how these different types of memory work in terms of our computational brain. Even if one rejects the computational model of the brain, alternative models do seem to require algorithmic explanations, namely inputs, processing and outputs. Learning is therefore always algorithmic.
You’d also be mistaken if you thought that the world of online learning has been immune from this algorithmic reach. From informal to formal online learning, from search to sims, efficient online learning has already been heavily powered by algorithms.
Every time you search for something on Google you’re using seriously powerful algorithms that crawl the web, match their findings to your search term and produce a set of probable results. Its algorithms do many other things you may not be aware of, such as identifying cheats who create artificial links and keywords to boost their ratings. Every time you use Facebook, algorithms determine your newsfeed: the more you engage with someone, the more likely you are to see their posts, and the more people in general engage with a post, the more often it is presented to others. Every time you use Amazon to see book rankings or get recommendations or offers, it uses what it knows about you and others to offer you choices. Wouldn’t it be astonishing if this did not happen more and more in online learning?
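To give a flavour of how the Amazon-style approach works – and this is a toy sketch, nothing like the scale or sophistication of the real systems, with all names and data invented – user-based collaborative filtering can be written in a few lines: score items a user hasn’t seen by the similarity-weighted ratings of other users.

```python
from collections import defaultdict
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two rating dicts {item: rating}."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    num = sum(a[i] * b[i] for i in common)
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den

def recommend(target, others, top_n=2):
    """Rank items the target user hasn't rated, weighted by how similar
    each other user is to the target."""
    scores = defaultdict(float)
    for other in others:
        sim = cosine(target, other)
        for item, rating in other.items():
            if item not in target:
                scores[item] += sim * rating
    return sorted(scores, key=scores.get, reverse=True)[:top_n]
```

Swap ‘books’ for ‘learning resources’ and ‘ratings’ for ‘completion or success signals’ and you have the skeleton of a learning recommender.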
Even in formal learning, there are algorithms galore. In flight simulators, it’s not just the image-rendering and motion modelling that’s driven by algorithms, it’s also the learning through scenarios and feedback. In games-based learning there are algorithms aplenty. I first implemented algorithms in a BT simulator in the 1980s. What’s new is that the constraints we had then have all but disappeared, with faster processing, more memory and the ability to gather large amounts of data for use by adaptive algorithms. They’re here and they’re here to stay.
Teaching is essentially algorithmic: the teacher (agent) gathers data about the environment (knowledge and students) and attempts to deliver recommendations or responses based on their experience. In some cases, such as search and citation-driven research, algorithms do this much faster and better than humans. Teachers, tutors and lecturers, most of whom teach many students, have limited abilities when it comes to gathering data about their students, so no matter how good they are at teaching, tailored, personalised feedback and learning is difficult. It is also difficult to identify what students find difficult to learn. What, for example, are the most difficult concepts or weaknesses in the course? Again, data gathered from individual students, and from large numbers of students, allows algorithms to do their magic.
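That question – which concepts do students find hardest? – is the easiest to answer with aggregate data, a signal no single teacher in a single classroom can easily see. A minimal illustration (function name and data invented): rank concepts by failure rate across all recorded attempts.

```python
from collections import defaultdict

def hardest_concepts(responses, top_n=3):
    """responses: list of (student, concept, correct) tuples.
    Returns concepts ranked by aggregate failure rate, highest first."""
    attempts = defaultdict(int)
    fails = defaultdict(int)
    for _, concept, correct in responses:
        attempts[concept] += 1
        if not correct:
            fails[concept] += 1
    rate = {c: fails[c] / attempts[c] for c in attempts}
    return sorted(rate, key=rate.get, reverse=True)[:top_n]
```

Fed with thousands of real student responses rather than a toy list, exactly this kind of count-and-rank logic is what lets a course flag its own weak spots.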
The bottom line, and there is a bottom line here, is that warm-body teachers are expensive to train, expensive to pay, expensive to retire, variable in quality and, most important of all, non-scalable. This is why education continues to be so expensive. There is no way that current supply can meet future demand at a reasonable cost. The solution to this problem cannot be throwing money at non-scalable solutions. Smart, online solutions that take some elements of good teaching, and replicate them in software, must be considered. As Sebastian Thrun says, “For me, scale has always been a fascination—how to make something small large. I think that’s often where problems lie in society—take a good idea and make it scale to many people.”
We have seen how smart algorithms in Google, Facebook and Amazon can significantly enhance users’ experiences, so how can they enhance learners’ experiences? It is often thought that data is the key factor in this process, but data is inert and only becomes useful as input to algorithms that interpret it to produce useful actions and recommendations. The quality and quantity of data is important, but it is the quality of the algorithms that really counts.
Smart software – formula-based algorithms grounded in the science of learning – can take inputs and make best guesses based on calculated probabilities. Multiple, linked algorithms can be even smarter. They may use knowledge of you as a learner, such as your past learning experiences, then dynamically track how you’re getting on through formative assessment, how long you’re taking on a task, when you get stuck, keystroke patterns and so on. Add data gathered from other groups of similar learners and the system gets very smart. So smart, it can serve up optimised learning experiences that fit your exact needs, rather than the next step in a linear course. The fact that it also learns as it goes makes it even smarter.
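The ‘best guesses based on calculated probabilities’ loop can be sketched with Bayesian Knowledge Tracing, a standard technique in adaptive learning systems: revise the probability that a learner has mastered a skill after each answer, then serve the least-mastered skill next. This is a minimal illustration, not any particular vendor’s implementation, and the parameter values are invented defaults.

```python
def bkt_update(p_know, correct, slip=0.1, guess=0.2, learn=0.15):
    """One Bayesian Knowledge Tracing step: given one observed answer,
    return the revised probability that the learner has mastered the skill.
    slip = P(wrong answer despite mastery), guess = P(right answer without it),
    learn = P(acquiring the skill during this practice opportunity)."""
    if correct:
        cond = p_know * (1 - slip) / (p_know * (1 - slip) + (1 - p_know) * guess)
    else:
        cond = p_know * slip / (p_know * slip + (1 - p_know) * (1 - guess))
    # Fold in the chance that the learner picked up the skill this step.
    return cond + (1 - cond) * learn

def next_item(mastery, threshold=0.95):
    """Adaptive sequencing: serve the weakest skill still below mastery,
    or None when everything is mastered."""
    open_skills = {s: p for s, p in mastery.items() if p < threshold}
    return min(open_skills, key=open_skills.get) if open_skills else None
```

A correct answer nudges the mastery estimate up, a wrong one down, and the sequencer keeps drilling the weakest skill – the probabilistic heart of a tailored, non-linear course.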
Iceberg principle
When we learn, most of the processes going on in our brain are invisible, namely the deep processing involved in memory and recall. This activity, which lies beneath consciousness, is where most of the consolidation takes place. Similarly, it is sometimes difficult to see adaptive, algorithmic learning in practice, as most of the heavy lifting is done by algorithms that lie behind the content. Like an iceberg, the hard work is done invisibly, below the level of visible content. Different students may proceed through different routes on their learning journey; some may finish faster, others get help more often and take longer. Overall, adaptive, algorithmic systems promise to take learners through tailored learning journeys. This is important, as the major reasons for learner demotivation and drop-out are difficulties with the course and linear, one-size-fits-all courses that leave a large percentage of students behind when they get stuck or lost.
One symptom of the recognition of the need for adaptive, algorithmic learning is a recent poll of college and university presidents by Inside Higher Ed and Gallup, who saw more potential in ‘adaptive learning’ (66%) to make a “positive impact on higher education” than they did in MOOCs (42%). (The 2013 Inside Higher Ed Survey of College and University Presidents, conducted by Gallup; Scott Jaschik and Doug Lederman, eds., March 2013.)

Why would this be so? Well, it’s not only college and university presidents; the Gates Foundation has also backed ‘adaptive learning’ as an investment. Read their two recent commissioned reports, as well as their funded activity in this area. One UK company singled out by these reports as a world-class adaptive, algorithmic learning company is CogBooks, who have already taken adaptive learning to the US market. The reason is that adaptive learning is seen as a way to tackle the problems of drop-out and extended, expensive courses. Personalised learning, tailored to every individual student, has come of age and may just be a solution to these problems.

Saturday, May 06, 2023

Learning technologies 2023 - a tale of two events

Spent two solid days at the Learning Technologies Conference in London, Europe’s largest organisational learning technology event. First up, a big thanks to Donald Taylor for inviting me to speak. He was everywhere, with his team, keeping the show on the road. We’re oft confused, and that happened twice at this event. At the KoganPage bookstall someone asked me to sign my book, but it was the other Donald’s! Mine had sold out; however, I gave her a spare copy. The other was a summarising video of the conference, where my talk on ‘AI changing work and learning’ was captioned as a talk by Donald Taylor. He has solid, bona fide Scottish roots in the Glasgow shipyards, so it’s an honour to be connected by confusion!

Learning Technologies was great but I found it a Janus-faced event. One face was the inward-looking exhibition, the other the outward-facing conference.



A vast, loud, noisy exhibition, with so many stand lights and so much hot air that it turned into a sweaty hell. I did the rounds. Same old, same old. It was like going back to Disneyland, all smiles and promises of fun times, but reflecting an embalmed, vastly overpriced and vanishing world. Is there any other industry that produces so much that is so disliked by so many? With a small number of exceptions, there was barely a mention of AI, except in that ‘we’re building it into our product’ sort of way – tinkering.


This giant ‘cheese’ factory was churning out courses, stored in LMSs, pouring out the same old scorn, sorry SCORM, data that ends up as donuts on dashboards. Text – graphic – MCQ – all of the above – repeat. The whole junkyard has become a parody of itself, disengaged from real people and the real world, whose reaction to their latest Leadership, Diversity or Resilience course is invariably an eye-roll. We evaluate nothing, which has resulted in the over-production of over-engineered, over-wrought, Disneyfied courses. It is a supply industry, not a demand industry, not listening to actual business needs but imposing therapeutic and moral nonsense. Next thing, they’ll be probing my unconscious – hold on…


There was much jaw-jaw on skills, but so often that manifested itself in Leadership nonsense, DEI or faddish topics. This year’s thing is ‘Resilience’, yet another hopeless construct from L&D: an excuse for third-rate courses, seeing employees as having yet another deficit or disease of which they must be ‘cured’. This therapeutic culture is relentlessly top-down and arrogant. Employees have this stuff force-fed to them, rather than using it autonomously, as they have been doing with Google, YouTube and social media for two decades.


This is a technology conference, but the technology so often felt like something out of the early 2000s. That’s because it is: a product of the Cambrian explosion of LMSs created in the early 2000s, with content that has changed little in the last two decades. It lacks the smartness of contemporary tech – the AI, the data-driven approach, the dialogue of social media.

I'm being a little unfair, as this is the technology that was available, became embedded at the enterprise level, integrated with other software and was difficult to update. On content, however, there is less excuse.


Meanwhile, literally over the same two days, one edtech company had half a billion wiped off its market cap, Pearson had a dead cat bounce, IBM announced that ChatGPT would replace many of its HR staff, and the world outside Disneyland moved on, bypassing this supply pipe and reacting to real demand.



Across the corridor, by contrast, in the conference, AI was the BIG topic. It wasn’t that it was coming; it was already here, with hundreds of millions using it for work. Like some super-popular performance support and productivity tool, it seems to have bypassed L&D and most of the vendors. The sisters and brothers are clearly doing it for themselves with AI.


There was a passing reference to it in David Kelly’s talk, although the talk seemed quite basic, aimed at people new to old ideas like ‘personalisation’ and ‘performance support’. I was genuinely puzzled by the statement about not publishing their DevLearn US sessions online because it would not be equitable. At a learning technologies conference that seemed like a cop-out.

The Red-something analyst, Dani, had her versions of the Fosway four-way grids, showing her pet companies; oddly absent were some of the European players. Her grasp of the AI phenomenon was thin as gruel. I’m not sure why we have US people who don’t really know the European market telling us about our own turf – Fosway are miles better. I tried to suggest some names she had missed, but she wasn’t interested and fobbed me off. Real analysts, who work deep inside the investment community, are far more knowledgeable than these ‘let’s send out some survey questions’ research houses.


I hugely enjoyed the ‘AI for Lifelong Learning’ talk, as Conrado Schlochauer was spot on in saying that adults don’t need all of these courses; they want to be self-directed, and ChatGPT is the way to go. It was easily the best talk on lifelong learning I’ve seen. Although it took a strange turn at the end, with the claim that AI was making us illiterate fools stuck in our echo-chambers. I find that argument unpalatable. The world is full of people in their own bubbles calling out others for being in bubbles.


Talking of bubbles, I find major conference sessions such as ‘Women in Learning’ particularly inward-looking. It is a technology conference, not a general L&D conference. I’m thinking of suggesting a ‘Poor People in Learning’ session as a counter to the trend of spending all of the budget on Leadership and DEI training that deliberately excludes working-class people. L&D seems to assume that everyone works at home or in an office; real practical skills have been underfunded or abandoned in the L&D world, and we wonder why the world is falling apart.


What I found really heartening was the recognition among almost everyone I met that AI was a Big Bang thing, not just for L&D but for work and the entire species. The debates were intense and informed. Why? Everyone had used it and had their minds blown by it. They immediately saw its potency and potential. I gave a session that was packed to the gunnels with people eager to hear what impact this is having on work and learning. That impact is already profound. I also presented to a large room full of students, smart as whips, asking all the right questions about AI.


But what really mattered were the myriad conversations I had in passing, in the pub and in restaurants. A ton of conversations with old friends and, even more valuable, lots of new friends made – too many to mention. I particularly loved the enthusiasm of the young, who really did seem a little tired of the old and genuinely wanted to ring in the new.