Tuesday, June 06, 2023

Language, AI and what we can learn from Wittgenstein

As Generative AI through Large Language Models (LLMs) started to roll out, I had several déjà vu moments. The philosophy of language has had lots to say on the nature of language, meaning, its relationship to the world and the limits of thought. Wittgenstein is perhaps the most famous of these philosophers, and there seemed to be real parallels between what was being implemented through AI and his views on language. I kept being drawn back to the Tractatus and Philosophical Investigations.

Many mistakenly see AI, as Wittgenstein saw language in his first book, the Tractatus, as a direct correspondence between propositions and the truths of the real world. The early pioneers of AI, with their focus on symbolic AI and logic, had limited success, as they too held this strictly representational view of language, which is flawed. Wittgenstein famously rejected his earlier idea in his later philosophy for a more complex view of language as a tool for communication, not direct representation, as laid out in his 'Philosophical Investigations'.

 

Family resemblance

He comes close to describing what LLMs do when he describes words as having 'family resemblances', related to each other by various degrees of resemblance in meaning or multifarious relationships. The mistake is to think that universals, like the word 'game', can be defined separately from these relationships. We have a tendency to see these universals, or abstract terms, as having some underlying definition, quality or essence that lies beneath all instances of games, but there is no Platonic game or even definition of 'game'. Dictionary definitions come from use, not the other way round. There is a nexus of game activities with different qualities. This is the problem with identifying key terms such as AI, intelligence, bias, learning, hallucinations and alignment. We treat these as essential things when there is, in fact, no single definition of any of them.

 

Wittgenstein's family resemblance is similar to the way language is represented in LLMs, as relationships between words or tokens. Family resemblance is actually too tight a metaphor, as it has lineage and a timeline, whereas LLMs are mostly timeless and far more complex and multidimensional. Where Wittgenstein is right is in seeing language as the sum total of probabilistic relationships, not defined by the relationships of words to the real world, but by their relationships with each other. However, it would be a mistake to take this too far, as language also embodies relationships with the real world. It is not as if LLMs have no context and are totally de-anchored from the real world, only that they store and generate outputs using complex relationships of meaning captured in the maths.
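
As a rough illustration of how 'relationships between words' can be made mathematical, here is a minimal sketch of word vectors compared by cosine similarity. The three-dimensional vectors are toy numbers invented for illustration; real LLM embeddings have hundreds or thousands of dimensions:

```python
# A minimal sketch of 'family resemblance' as vector similarity.
# The 3-dimensional vectors are toy stand-ins, not real embeddings.
import numpy as np

embeddings = {
    "game":  np.array([0.9, 0.1, 0.3]),
    "sport": np.array([0.8, 0.2, 0.4]),
    "law":   np.array([0.1, 0.9, 0.2]),
}

def cosine_similarity(a, b):
    """Similarity of two word vectors; 1.0 means the same direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embeddings["game"], embeddings["sport"]))  # high
print(cosine_similarity(embeddings["game"], embeddings["law"]))    # lower
```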

 

Another way of looking at this is to see meaning as use, and LLMs as optimised for use. They respond to prompts by people and continue to respond, in real dialogue. Language has indeed developed for dialogue, not the block representation of the world in written prose, so LLMs play to our natural use of language, which brings us to another of Wittgenstein's ideas – language games.

 

Language games

In his 'Philosophical Investigations' Wittgenstein uses the term 'language games' to describe the various ways in which language is used by humans. He saw us engaging in different types of communication, 'language games', where language is used in a relatively tight context. Each game represents a specific use of language defined by grammatical rules, and they are embedded in a broader cultural and social context.

 

Here are a few examples of language games that Wittgenstein mentions in his work:

Giving orders, and obeying them – direct, purposeful communication, such as to a soldier, at work or in school.

Describing an object's appearance, or its measurements – describing the physical properties or dimensions of things in the world, by architects, artists, engineers and others.

Constructing an object from a description (a drawing) – not just a description of an object, but one intended to help someone else recreate the original object, in areas like engineering and carpentry.

Reporting an event – communicating what has happened to someone who wasn't there, fundamental to storytelling, journalism and many forms of social interaction.

Speculating about an event – making hypotheses or guesses about what might happen in the future, based on current information.

Playing a game with rules – even a physical game involves a language game, in which the rules of the game are discussed and agreed upon.

Making a joke; telling it, laughing – the complex interaction of telling a joke and understanding the humour, which often requires a shared cultural and linguistic context.

These are all examples of the diverse ways in which language is used, each with its own specific rules and contexts. The meaning of words and sentences is closely tied to these activities and cannot be separated from them.

 

As LLMs contain so much language – it would take some 22,000 years to read GPT-3's training data – they contain many, many different species of 'language games' and can reproduce their styles, from academic lectures to chatty dialogue, even literary forms such as stories and poetry, in any genre or style. It is not a homogeneous lump of language but language in all of its forms and stylistic complexity. So when you ask ChatGPT to give you text in the voice of a teacher, engineer or journalist, it will do so, as it recognises the language patterns and games within that domain. In one sense, it remarkably uses the closer, family-resemblance-type relationships between words to reproduce language games. You do get a strong sense when using ChatGPT that it is engaging not just in the presentation of language but in the presentation of language in a style or context.
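
To make the point concrete, here is a minimal sketch of how a 'language game' can be invoked through a system prompt, using the OpenAI Python client. The model name, prompts and persona are illustrative assumptions, not recommendations:

```python
# A minimal sketch: invoking a 'language game' via a system prompt.
# Assumes the openai package is installed and OPENAI_API_KEY is set;
# the model name and prompts are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        # The system message sets the 'game' the model is asked to play.
        {"role": "system", "content": "You are a patient physics teacher."},
        {"role": "user", "content": "Explain why the sky is blue."},
    ],
)
print(response.choices[0].message.content)
```

Swap the system message for 'an engineer' or 'a journalist' and the same question comes back in a recognisably different register.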

 

Forms of life

The wider context for using a ‘language game’ for Wittgenstein is what he calls a ‘form of life’, not biological life, but a shared cultural or social background that makes mutual understanding and communication possible. This is the complex, shared background of customs, practices, beliefs, and languages that individuals in a community inhabit. It is the shared human activity that constitutes the background against which our language and our concepts acquire meaning.

 

To imagine a language is to imagine a form of life. In other words, the way we understand language and the meaning of words is deeply influenced by our form of life, our way of living. Language is a social practice shaped by these forms of life, and it only has meaning within these specific contexts.

 

This is the problem of assumed context in AI. LLMs capture these forms of life as embodied in language, but the capture is not complete, as time, action and a world view can be missed. We may be dealing with a limited set of de-anchored or impoverished language games. As AI develops it will include these contexts: a sense of time, action and world views.

 

Ethics

Another fascinating area of congruence is in ethics. Wittgenstein was always careful when discussing ethics, seeing it as beyond language, as something that cannot be discussed meaningfully using the facts of the world, which is the proper domain of language. He categorizes such statements as 'unsayable', not in the sense of being trivial but because they transcend what can be expressed in language. Ethical statements, in the context of a language game, can be seen as expressions of a way of life, or as part of a particular cultural or social practice. They are not descriptions of facts about the world; rather, they are part of a human activity. They make sense within the practices and forms of life where these language games are played. So ethical statements are deeply embedded within our specific cultural and social practices. They gain their meaning and importance not from corresponding to some objective moral fact, but from the role they play within our lives. In other words, he approaches ethics not as a matter of knowledge, but of practice and living.

 

One could speculate that he would have been highly suspicious of the definite ethical stances taken by the doomsayers and fearmongers, as such stances are likely to be overreach by language, leading us astray into thinking it carries deep certainty and meaning when it does not.

 

Conclusion

For Wittgenstein, language is the limit of thought. He explored this limit by being honest about the very nature of language as generated by the human mind in a cultural and social context. Wittgenstein was a philosopher, mathematician, logician, teacher and architect. His exploration of the nature and limits of thought and language has much to teach us, especially about being careful when thinking about these new forms of AI. His critique of essentialism led him to believe that words do not have common underlying definitions but rather get their meaning from use, in relation to other words, in language games. LLMs may constitute something we have never encountered before, a new language game or games, a new form of life. His insight that language works on resemblance between words seems prescient, as does his theory of language games and his views on ethics and language.

Sunday, June 04, 2023

AI does not have a ‘ghost in the machine’


AI seems to have acquired the concept of a soul or a ‘ghost in the machine’, a phrase coined by Gilbert Ryle, who critiqued the idea as hopelessly Cartesian in his wonderful book ‘The Concept of Mind’. I’m also uncomfortable with this Cartesian shift towards maths and software having a soul. We tend to see objects as having subjects behind them, so readily invent such ghosts in the machine, even seeing our minds as souls. The irreducible ‘I’ in Descartes seems to have been resurrected in A’I’, with the Artificial ‘I’. There is no ‘I’ in ‘AI’. 

Promethean myth

From the Promethean myth onwards, we have imagined technology as a monster; Frankenstein was the first modern Prometheus (literally the subtitle of Mary Shelley's book), a monster in literature manifested as a man in movies. The Terminator generation has its own Promethean icons. The tech kids, brought up on a diet of sci-fi, are fond of these thought experiments, which they think is philosophy. They speculate on tech that 'has a mind of its own' that will 'destroy us', with no evidence other than thinking of a possible consequence free from any mitigating design or control. Let's not turn fiction too readily into non-fiction.

 

First, there is no such thing as AI. It is not an entity; it is many things, and these many things talk to each other, work in tandem and form a nexus of technology. John McCarthy, who coined the phrase in 1956, came to regret it. It is a number of different data techniques, training using that data, along with several statistical and algorithmic methods for producing a word-by-word output. It is competent without comprehension.
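
As a rough sketch of what 'word-by-word output' means, here is the core of probabilistic sampling: turn model scores into probabilities, then draw one token. The vocabulary and logits are toy stand-ins, not a real model's output:

```python
# A minimal sketch of probabilistic, word-by-word generation.
# The vocabulary and logits are toy stand-ins, not a real model.
import numpy as np

vocab = ["the", "cat", "sat", "on", "mat"]
logits = np.array([2.0, 0.5, 1.0, 0.2, 1.5])  # hypothetical model scores

def sample_next_token(logits, temperature=0.8):
    """Softmax the scores, then draw one token at random."""
    probs = np.exp(logits / temperature)
    probs /= probs.sum()
    return np.random.choice(len(probs), p=probs)

print(vocab[sample_next_token(logits)])  # e.g. "the", sometimes "mat"
```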

 

Neither is there one unitary form of intelligence in human and/or animal form. It is a complex set of cognitive conditions. With AI, it is even more diffuse and complex: not a set of discrete brains, housed in bone skulls, all approximately the same size with roughly the same limitations, but an instantly communicating network. Even if we were to use human identity and intelligence as a benchmark, our brains have an incredibly limited working memory and a limited, fallible and declining long-term memory; we forget almost everything we try to learn, have dozens of identified intrinsic biases, are emotionally unstable, can become dysfunctional through illness, be delusional, even hallucinate, can't network, often get dementia and Alzheimer's, and die.

 

AI is many things

AI can learn separately but share instantly. Imagine that one person learns and then, instantaneously, everyone else knows. There is no person here, no single AI entity, only a growing network of competence. To be fair, that is where the danger lies, but let's not read intention, deception and agency into this network.

 

In particular, Generative AI is not a person, it is us. We are staring into a mirror of our own collective, cultural capital. We are speaking to a hive mind that has competence without comprehension. It can be any of us and all of us, as it knows most of what we know.

 

In our rush to see an essence in everything, we see AI as one thing. We talk about AI as if it were a being or entity that could be good or evil. A common conceptual leap is to do what Descartes did, with his thought experiment around an Evil Demon, and see the ghost in the machine as malevolent. A consequence of this ghost-in-the-machine thinking is attributing evil intent to the ghost, as a demon who will become autonomous, escape our control, conjure up existential threats to our species and eliminate us. Without that ghost, soul or demon, we strip AI of moral intention, either good or bad.

 

What exacerbates this mode of thought, that there is an intentional entity that lies to or deceives us, is poor prompting, provoking and poking the bear. If you ask it weird things it will come back with weird stuff. In that sense it merely reflects the stupidity or intent of us humans. As something that is competent without comprehension, it speaks back to us, so seems like another person, but it is not. We are actually speaking to ourselves as a species, our social and cultural heritage captured and seeming to speak back to us. It is a mass act of social generation, mediation and reflection. It seems human not because it is human, but because it reflects our social production and heritage. You are not talking to an individual with a soul, but looking into a wide and deep pool of socially structured knowledge.

 

Anthropomorphising

This is the root cause of much of the anthropomorphism, speculative causality and doomsaying around AI, which posits sci-fi levels of autonomy and agency and ignores control. Yann LeCun is their primary critic and constantly refers to this sci-fi speculation as way out of control. For him, it generally ignores the good design and engineering that can mitigate these wildly speculative risks. There is no form of existing or even anticipated AI that suggests an extinction event for humanity. This risk is negligible compared to risks such as climate change, and that is where we need to use AI, to help solve our more intractable problems.

 

Argument from authority

Giving ‘hero scientists’ too much credence is also to fall victim to the ‘argument from authority’. This is not the way to deal with such issues. The simple idea of being cautious and careful as we go is being swept aside by the relentless rhetoric and focus on the existential and extinction risk. The lack of debate on how to accelerate beneficial uses of AI is infuriating. This lack of balance now means that the benefits in education and healthcare are in danger of not being realised because of hyperbolic speculation that has become apocalyptic. 

 

AGI

AGI just seems like an unnecessary concept, the most common application of the ghost in the machine principle, the idea that there will be a single threshold, some oddly varied human bar, that will be transcended, rather than lots of separate capabilities, all of which can be designed to be controlled. We can create lots of AI technologies with different specialisations without seeing AI as crossing a single line. 


"How can you design seatbelts for a car if the car does not yet exist?" Yann LeCun. Indeed, the self-diving car is a good example as to why we should not be concerned about AGI. We are nowhere near having a self-driving cars on our roads, which confirms that we are nowhere near AGI. In truth abstract researchers are often far from the coalface of implementation and tend to exaggerate thought experiments and abstractions.

 

Copyright

The issue of creative copyright is a misunderstanding of the technology as some sort of single copying machine or single source which LLMs retrieve from or sample in some way. This is not how it works. There are both input and output issues here. On input, namely the data used for training, there is a big difference between 'learning from' and 'copying from'. The models learn from the training data, they do not copy that data. Japan has already confirmed in law that there will be no copyright issue here. On output, the models spit out freshly minted words, probabilistically, one by one. Again there is no copyright problem. We would be as well saying all human brains break copyright in all creative acts, as they take in text and imagery through reading, hearing and perceiving when learning, then output new words, speech or images. There is no ghost in the machine copying or sampling.

 

Fakery

AI is far more likely to protect us from fake and false information than to make us believe in it. Around 400 million to 1.5 billion fake accounts are taken down by AI every quarter (the figure oscillates), and hate speech is just 0.01-0.02% of all content, 82% of which is taken down automatically by AI before human moderation. This is a known problem with known solutions. We saw this in the clickbait drone story, which turned out to be fake news. A USAF official who was quoted saying the Air Force conducted a simulated test, in which an AI drone killed its human operator, now says he "misspoke" and that the Air Force NEVER ran this kind of test, in a computer simulation or otherwise.

 

Conclusion

In positing an ego, soul, agent or ghost in the machine into the technology we make a category mistake in taking a concept from one domain, the human domain, and applying it to maths and software. There is no ‘I’ in ‘AI’.

Saturday, June 03, 2023

Generative AI - my sense of wonder and hope


I feel an absolute sense of wonder at Generative AI. It just seems so surprising that something so simple does things so extraordinary. Letting a mathematical model 'learn' by feeding it data, then seeing it create text, images, audio and video, is almost sinfully magical. We have seen nothing like this in the history of our species. We are no longer searching and retrieving knowledge, we are having a dialogue with knowledge, with something that has its roots in the collaborative hive mind, that tirelessly and politely communicates with us in whatever voice we choose and creates each new word and image.

I've waited my whole life for this. All my adult life has been spent using technology to do things that humans do, like teach, heal, dispense advice, manage and so on. For years, going back literally 45 years to Dartmouth College, where I first came across AI (it was the home of the modern era of AI), then coming back and buying a home computer, my attempts at an intelligent tutor in the late 90s, then putting all my energies into companies and ideas heading in this direction since 2005: Learning Pool, then CogBooks, WildFire. I wrote a book in 2020 that said it was coming – soon. And it did – big time.

 

This is not a clickbait fad, it is a profound moment in time, a moment that changes the trajectory of our species. For the first time in my life I see on the horizon a Universal Teacher and Universal Doctor, and other useful advisors that will inform, educate, manage and heal us. In particular, it will dramatically reduce the cost of education and healthcare.

 

It is clear that once AI embodies great pedagogy, learning science and teaching practice, we will have the Universal Teacher that can teach any subject at any level, at any time, anywhere, in any language. Why? Because it can learn these things. What marks this technology apart from all others is its ability to learn, and that is why it will eventually transcend our current model. We will all learn faster because it will learn to teach us. Teaching is a means to an end – that end is learning. If that end can be achieved in a way that is much cheaper and doesn't condemn many to failure, or keep people trapped and excluded from the world in expensive institutions on long-winded courses for up to twenty years, then why not?

 

It is also clear that the Universal Doctor is on the horizon. As soon as current misdiagnosis rates are undercut by AI, this will happen. It will happen because lives will be saved. It will also learn to do investigative work on available data, blood tests and scans. On top of this, we have seen with AlphaFold how advances in research and medicines are already being accelerated by AI. Medical practice is a means to an end: the health of us all, as patients, when we get ill. If that can be achieved at hugely decreased costs, with better diagnosis, investigation and treatment, and less suffering, then bring it on.

 

These two benefits alone are why we must calm down on the eschatological doomsaying. It is easy, in the comfort of the developed world, to want these two things to slow down, but if you are in a classroom with no roof, in a class of 50 plus, with a teacher who has little training and a paltry salary, you may think differently. And if you live in a country where there is one doctor for up to 10,000 people, you may see this as a chance not just to improve but to save your life.

 

There is no end of people not knowing how to live their own lives, telling everyone else how to live theirs: the moralisers, ethicists, activists. They would rather impose their will on others than realise the benefits of AI. It was always thus, with printing, radio, television, rock 'n' roll, the internet, Wikipedia, smartphones, videogames, social media, bioengineering, GM crops, 5G and now AI. The usual suspects want to own knowledge and communications: academe, mainstream media, politicians, professional institutions. They have a lot to lose, but the rest of us have a lot to gain with cheaper, more productive systems. In truth it is no longer in their grasp. They'd ban it in the blink of an eye if it wasn't that damn useful.

Tuesday, May 30, 2023

Africa is not a place and doesn’t need white saviours in learning

Africa is a vast, varied and vexing place, but there is no essential thing that is Africa. Dipo Faloyin, the Nigerian writer, sums it up in his very personal book 'Africa Is Not a Country'. Forget the images you see on TV of safaris, wars, child soldiers and famine, as most of that is through the eyes of visitors who stay in comfortable hotels, working for organisations with a saviour agenda. There are countries north of the Sahara that are very different from those immediately below. Egypt, Nigeria and South Africa are radically different places at the top, middle and bottom of Africa, with substantial economies. There are many Africas; there is no 'Africa'.

 

I'm just back from Senegal, the host country this year for E-learning Africa, and have been to this event many times. It remains the defining learning technology event on that continent. The brainchild of Rebecca Stromeyer, it goes way beyond the normal conference format, with lively, intense and friendly networking, along with a session for Education Ministers from all over Africa and a 'challenging' debate, which I've participated in several times.

 

You can't shy away from the chaos when travelling in Senegal. One very large group turned up to a hotel that did not exist. They had been scammed. We turned up at 2am to our hotel and found that, despite it having been booked and paid for, they had sold on our room and we had to find another. Walking around Dakar at 3.30am looking for a taxi is no fun, especially when every taxi driver we used required Kissinger-level negotiations on price. On one occasion, having agreed a generous fare, one driver turned round and claimed that the fare was per person! My son returned to his room one night and found someone else in his hotel bed. The lack of organisational cohesion and training is tangible. On the whole the people are wonderful and chilled, but I don't want to varnish over this stuff, as there is no way the country can develop successful tourism with this level of chaos and hassle.

 

I have travelled a lot and never seen so many police, even in and around the conference. I have also never stepped through so many scanners. Every building has one, yet despite me setting off the red lights and bleeps they just wave you through; many are not looking at the scanner screen at all, bored and on their smartphones. Senegal is experiencing political unrest as we speak, with street protests and a Ministerial building burned after the opposition leader Ousmane Sonko was arrested on drummed-up sexual assault charges. We were stopped at police roadblocks twice, and on the way to the airport we saw people being made to lie down on the road in a massive traffic holdup.

 

There is lots of equipment here but little training, so nothing really works, and when it breaks it is catastrophic. Let me give you an example. The flight from Senegal was late, which meant we couldn't make our connection. The TAP (Air Portugal) office at the airport had someone who quite simply couldn't be bothered helping. She sat there arrogantly waving us away with her hand, then asked that we pay for another flight! We confirmed in Lisbon that she could have arranged our second leg. The problem is deep-rooted nepotism. She has this job but can't operate the computer, so just sits there taking the salary. I can't tell you how many times we had our credit cards declined because the signal or card machine didn't work.

 

You have to wrestle with these contradictions. The fundamental fact is that this is a rich continent full of poor people. It is easy to blame colonialism and see everyone as a victim, but there is obvious money around. I saw a blue Bentley outside the restaurant we were eating in, an old man buying Moët for six women at his table, and huge mansions lining the coast. There is wealth here, with huge sums at the peak of the pyramid, a tiny middle class and a vast number who remain poor. The cash is being spent on vanity projects and a vast army and police force who keep the oligarchy in power.

 

OK, so that is the economic and political backdrop. As I was here to deliver some sessions on AI, how do you position the technology?

 

The white saviour complex sees 'others' as the only hope for these hapless Africans. Yet it is the white saviour complex that caricatures them as unable to help themselves, people to be saved from themselves. As I walked through the real market in Dakar, not the ones selling beads back to Western tourists (a neat and startling reversal), I saw a highly entrepreneurial society. A lorry load of old spare parts was being dumped out onto the road while people bid for the spares. Recycling is a necessity here. This is not a functionally helpless society; it is a society held back by its own leaders, who strip it bare. The tech kids I meet here are highly entrepreneurial and will move things forward. It is the anti-business, keep-them-on-an-NGO-ride attitude that will stop Africa. NGOs are band-aids, not the answer. I feel as though, since the 1980s, it is the NGOs who have been full of colonial rhetoric, the new institutional colonialism. The Geldof-inspired 'Do They Know It's Christmas?' is perhaps the most condescending song title in history. Africa has more Christians than any other continent, who maybe knew it was Christmas; it also has 450 million Muslims – one wonders how they felt? The hideous, theatrical spectacle of celebrity activists grew out of that time, and the simplistic rhetoric of binaries continues: northern hemisphere-southern hemisphere, rich-poor, white-black, evil-good.

 

In any case, we're not here as tourists, we're here to get something done. Education and health remain problematic in all poor countries, especially countries like Senegal, where the politicians are clearly rapacious and corrupt. I was the only person in the hall who refused to stand for the Prime Minister, after we had waited over an hour and a half for him to grace us with his banal presence. Being here as a white saviour was not my intention. I have been here before, in the African Congress building in Addis, Ethiopia, debating that Africa needs vocational skills, not abstract University degrees – we won. I have always supported the spread of free tools and learning on the internet to reach the poor. This has happened with finance; it can happen with education. The infrastructure problem is being solved, via satellites and ever-cheaper smartphones. Sure, the digital divide exists, but constantly painting a picture of a glass half empty while it is being filled is another example of white saviour behaviour.

 

Michael, my partner in crime in the debate, from Kenya, does something about the problems through his '4gotten Bottomillions' (4BM), Kenya's largest and most trusted WhatsApp platform, which connects the unqualified and poor to real work and jobs. He is critical of white saviour attacks on US tech companies who pay for data work in Kenya. This, for him, is rich white virtue signalling that denies poor Africans a living. These companies pay above the minimum wage and the work is seen as lucrative. Michael wants to empower Africans, not disempower them through pity. His message to everyone is to suck it up and get on with things, not to wait on grants and NGO benefactors. Until that happens, Africa will remain in psychological chains to others.

 

My first event was a three-hour workshop on AI as an astounding technology that really can deliver in education and health. It is already being used by hundreds of millions worldwide to increase productivity and deliver smart advice, support and learning free of charge. This is exactly what Africa needs. Let the young use it; don't cut Africa out of the progress, as that would put an entire continent alongside North Korea and Iran, the two countries that banned it (even Italy saw the error of its ways).

 

I did another session on online language learning, where the presenters were asking young Africans to learn, and get tested on, either English or French. French is, they claimed, the language of culture. Yes – but whose culture? The French. My position was that Generative AI already delivers in over 100 languages, shortly moving towards 1000, many of them African. We have the chance of having a universal teacher in any subject, with strong tutoring and teaching skills, available to anyone, anywhere with an internet connection, for free, in any language.


The solution is to get this fantastic, free and powerful educational software everywhere, not just Africa. Stop using specious arguments about northern hemisphere bias, and footnoted ethical concerns, to stop action. Let Africans decide what's good for them and allow Africa to use AI in the local context. Getting this stuff into the hands of the young and poor is the answer.

 

My final event was the Big Debate, on whether 'AI will do more harm than good in Africa'. Michael and I lost. Why? The audience was largely either the donors or receivers of white saviour aid. They do this for a living, and anything that threatens to bypass them and go straight to the working poor is seen as a threat. They need to be in control, seeing themselves as providing educational largesse. You get this a lot at learning technology conferences: people who fundamentally don't like technology.

I'm well aware that the sermon has always been the favoured propaganda method of the saviour, and I see that I too am part of the problem at such events. But the lack of honesty about the saviour complex has become a real problem. My hope is that this powerful technology bypasses the oligarchies and saviours and gets into the hands of the people that matter.

The opposition played the usual cards. First, the argument from authority, which I always find disturbing: 'I'm a Professor, therefore I'm right, you're wrong'. My friend Mark, on the opposition side, gave an eloquent speech brutally attacking the culture of screens and smartphones. I was sitting behind the lectern and noticed that he had read the entire speech, every last word, verbatim, from, guess what… a smartphone! It's OK for rich white people to use a full stack of tech, but not Africa. I rest my case.

Thursday, May 25, 2023

AI in Africa...


I'm in Africa because it's important to be in a place that gives you perspective. All the usual stuff happened when we travelled: arriving at the hotel to find there was no room, wandering about Dakar finding a taxi to a new hotel, being stopped by the police! In any case, we're in this amazing city and you have to go with the flow.

E-learning Africa, run by the irrepressible Rebecca Stromeyer and her team, has been doing this for many years, because they think it matters. It's easy to go to the big conferences in Europe and the US, but what you get is a cyclopic view of the world, where we assume that everyone works in offices or at home on Zoom. The interest in the topic at our workshop was intense, and I'm in a session on AI for language learning, as well as a huge formal debate on the role of AI in Africa.

 

At this conference, in these places, you are faced with the reality of the rest of the world. I have been to Uganda, Rwanda, Namibia, Ethiopia and, this year, Senegal. It's a big world out there, and the worst conferences I've attended have been in two of the more hideous places on our planet, Orlando and Vegas: the first seems like America embalmed, the second some sort of theme park from Hell. Come to Africa and see the real, unvarnished world.

 

When this new Age of AI struck like lightning in November 2022, I had for years been extolling the possibilities of its massive impact on learning and health. That impact is not about the already wealthy, the graduate generation in the Northern Hemisphere; it's about the rest of the world. From the holy pronouncements on ethics and AI from Brussels, you'd think they'd have a little more humility, seeing as Europe is well under 10% of the world's population. The US is only 4.25%. They both need to get out more.

 

When you have 1 teacher and classes of 50, or 1 doctor for 10,000 people, you have a different perspective on life. Technology matters more here, which is why Africa leapfrogged the rest of the world on mobile – it mattered when your precarious employment depended on that next text message, or your fragile finances needed to be executed efficiently and cheaply from the rural world in which you live.

 

I do believe that AI is finally the technology that enfranchises almost everyone. It has the ability to raise productivity to pay for education and healthcare globally. It can deliver the sort of Universal Teacher and Universal Doctor technology that is finally on the horizon: a teacher that knows more than any human teacher, in any subject, with pedagogic knowledge built into its methods, tirelessly friendly and supportive, 24/7, anywhere in the world.

 

But what really matters is delivery in all languages. We live with the legacy of colonial languages. In Africa they are English and French, in South America Spanish and Portuguese. Even in the rest of the world, English has become the lingua franca. Imagine a technology that can teach and deliver healthcare in any first language. Here's the good news: that too is on the horizon. Not only is a wide range of languages available from the get-go, that number is expanding into the thousands. Far from ignoring minority languages, it may save them. Generative AI is a modern Babel.

 

Generative AI through ChatGPT was launched in 95 languages, allowing one to write, summarise, error-check and translate between these languages. That is astonishing, and the number has grown to include many more. This has been one of the most used functions, with significant increases in productivity. It was also launched knowing 12 computer languages, allowing translation between them. One amazing feature of Generative AI that slipped under the radar was its astounding capability in speech-to-text and text-to-speech. Whisper from OpenAI was world-beating and free. What a start.
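
For the curious, here is a minimal sketch of Whisper's open-source package in use (pip install openai-whisper); the audio file name is a placeholder:

```python
# A minimal sketch of speech-to-text with OpenAI's open-source Whisper.
# "interview.mp3" is a placeholder file name, not a real asset.
import whisper

model = whisper.load_model("base")          # small multilingual model
result = model.transcribe("interview.mp3")  # detects language, transcribes
print(result["language"])                   # e.g. "fr"
print(result["text"])                       # the transcription
```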

 

On the day I flew to Senegal, Yann LeCun announced a new multilingual language model that works in 4000 languages! There are only around 7000 languages in the world, so that's more than a great start, it's into majority territory. These are early days, but the promise is now here: that AI will open up education, health and other areas of human endeavour to everyone, no matter where they live or what language they speak. Once we can all understand each other in real-time translation, perhaps we have a chance of better understanding and co-existing with each other.

 

This really matters, as technology tends to be skewed towards English in particular, but also towards the usual suspects of largely northern hemisphere languages. If we are to see AI as a genuinely global and liberating technology, languages do matter. What these large language models have shown us is that language really does matter in learning. Wittgenstein and Vygotsky were right: it is fundamental to learning and intelligence, fundamental to being human. It is not that intelligence produces language but that language produces intelligence.

 

So let's not allow the over-developed world to get on its moral high horse and hold back a technology that promises so much to so many. A vast army of amateur ethicists (anyone can be one by simply saying they are one) seem determined to hold us back, with so much verbiage on the subject that you could train a Large Language Model on that alone. Unfortunately it would be an aloof, finger-wagging, moralising bore.

 

In truth, for those most in need, who often find themselves furthest away from education and healthcare, its power and reach are potentially immense.

Sunday, May 21, 2023

21st Century Skills – the last refuge of human exceptionalism!

As Nick Shackleton-Jones rightly points out, the rhetoric around AI and automation for the last two decades was "don't worry - AI & automation will only tackle the routine jobs". The whole narrative was owned by the professional classes and centred around workers losing their jobs, not the graduate class working at home on Zoom. Nick reminded us of "how quickly we forget and how utterly wrong we were". The 21st century skills rhetoric has become the last refuge of human exceptionalism, the topic I covered in my last blog.

 

Stages of 21st C grief

Predictably, the professional class, especially learning professionals, go through their five stages of grief when faced with new technology:

 

Denial – bewildered, they want to ridicule it through clickbait examples of how bad it is, then say block it and ban it

Anger – the sabre-rattling stage, where mostly 3rd-rate academics write open letters, warning people of dangers which only 'they' perceive

Bargaining – they form ethics committees, write frameworks and generally do their report-writing thing

Depression – oh dear, we do have to actually learn to work with this stuff

Acceptance – alright, I give in

 

In denial

We're still in the 'denial' stage, taking the usual refuge in the rhetoric of vague 21st C skills. But the mistake is to:


1) think these skills are new

2) think we have a clear definition

3) think of them as skills in themselves, as they mostly need domain knowledge

4) assume that educators have these skills, even if they did exist in a clean form

5) assume that they know how to teach these skills

6) assume that they are wholly unique to us humans

 

The whole thing goes back to the late 1990s, when we were faced with a new millennium, but a key document was "Partnership for 21st Century Skills", published in 2002 by a coalition of education organisations in the US. This report put great emphasis on the 'C' words and saw them as essential to prepare students for the challenges and opportunities of the 21st century.

 

Then the bandwagon, report-writing brigade – organisations such as the Organisation for Economic Co-operation and Development (OECD) and the World Economic Forum (WEF) – began pumping out rhetoric highlighting the significance of 21st-century skills in their discussions on education and workforce readiness.

 

Nothing new

Mirjam Neelen & Paul A. Kirschner rightfully pointed out, in their excellent blog ‘21st Century Skills don't exist. So why do we need them? (2016) that there is nothing new about these skills, nothing 21st century about them. This was all laid out in the first major counter attack in 2010 from Rotherham & Willingham’s “21st-Century skills. Not new, but a worthy challenge.” I have been making the same point for the last 20 years.

 

Problem of definition

As they pointed out, the first problem, as expected, is definition. They often appear as a list of five or so things on a PowerPoint slide, most starting with the letter 'C': communication, collaboration, critical thinking, creativity… then it tails off into the vagueness of problem solving and innovation. There are literally dozens of lists and variations.

 

Skills are notoriously difficult to pin down, as they are not, as Bloom suggested, represented in some simplistic pyramid. They are complex sets of personality traits, motivations, learned knowledge and behaviour – the sum of lots of integrated parts that results in an ability, competence or performance. The fact that a skill is complicated, mostly involving domain knowledge, makes these very vague 21st C skills difficult to extract as separate 'skills'. It is doubtful whether they can be abstracted from domain knowledge at all. Knowledge is marbled like fat into the meat of skills.

 

This 21st C agenda can also be quite dangerous. It can harm learners, especially the disadvantaged, by giving them the illusion of progress. Knowledge and skills are not separable like oil and water; they are deeply entwined, so most critical thinking and problem solving requires a depth of knowledge. The 21st C skills agenda also led to a hideous focus on 'Leadership' in L&D. We assumed that skills were intellectual in some way, so introduced this elitist, hierarchical training at the expense of real skills and competences for all. We became exclusive, not inclusive. That did not go well, as productivity did not increase, and although we have lots of rhetoric and training on Leadership, we seem to have precious little of it.

 

AI hits the fan

We're 23% into the 21st C and have reached the last bastion of human exceptionalism (see last blog): the 21st C skills mantra, which has now turned into an empty trope in the Age of AI, especially among educators and learning professionals who need a serious-sounding phrase to rattle around in conferences and reports. It is usually to be found in some overlong, text-heavy PowerPoint presentation at a conference, accompanied by a cocktail of 'C' words – communication, collaboration, creativity, critical thinking. Can the real world really be that alliterative?

 

Communication

Young people communicate every few minutes – it's an obsession. They text, message, chat, post, comment, WhatsApp, use Instagram, Facebook, TikTok and tools you may never have heard of, in various media, including images and video. Note the absence of email and Twitter, the only places you're likely to hear of 21st C skills. This generation grew up in the 21st C. Never have so many communicated so often with so many. We readily forget that within a few generations we have moved from letters to the telegraph, then the telephone, and now free communications, in a bewildering array of types and formats, to almost anyone on the planet. Technology not only increases the possibilities in communication but also the skills, through use.

 

In fact, one of the features of Generative AI, such as ChatGPT, is in showing us how poor our communication skills can be. It invariably improves almost anything we want to write and send.

 

Collaboration

There’s an abundance of collaboration online, where we share, tag, upload and download experiences, comments, photographs, video, media and now generative AI tools. We collaborate closely in teams, often international, when playing computer games and in the workplace. Never have we shared so much, so often, in so many different ways.

 

Then along comes someone who wants to teach this collaboration as a 21st C skill, usually in a classroom where much of this technology is banned, ignoring how collaboration works in the real world. I'm hugely amused at this conceit, that we adults, especially in education, think we always have these skills. There is no area of human endeavour that is less collaborative than education. Teaching and lecturing are largely lone-wolf activities in classrooms. Schools, colleges and universities share little. Educational professionals are deeply suspicious of anything produced outside of their classroom or their institution. The culture of NIH (Not Invented Here) is endemic. Many educators, far from being consistently collaborative, are doggedly fixated on delivery by the individual.

 

With AI, especially generative AI, we have collaboration at a global, human level: a huge, sophisticated and elegant collective model, trained on the sum of human knowledge. Generative AI is collaboration writ large; the whole of our collaborative cultural capital is being put to good use and made available to the individual.

 

Critical thinking 

Critical thinking is the Little Bighorn of human exceptionalism, General Custer's last stand. Academics, in particular, are fond of these two words, as they see themselves as having these skills in abundance. To be fair, soft skills and empathy are not their strong point, so they need something to fall back on.

 

One species of critical thinking is often well taught in good schools and universities, but only at the level of research, text and essays. As a practical skill it has all but disappeared in theory-dominated courses. It needs high-quality teaching, and the whole curriculum and system of assessment needs to adjust to this need. As Arum has shown, there is evidence that in our universities this is not happening. Arum (2011), in a study that tracked a nationally representative sample of more than 2,000 students who entered 24 four-year colleges, showed that universities were failing badly on the three skills they studied: critical thinking, complex reasoning and communications. This research, along with similar evidence, is laid out in their book 'Academically Adrift'. Even research is now under threat, as papers appear where the hypothesis, the data work, even the writing is being done by AI.

 

The idea that critical thinking is a uniquely human activity is being challenged by tools that do it better, whether in mathematics or other fields. It is now clear that the inference capabilities of AI will outclass humans and that we are no longer the sole producers of critical analysis. We saw how AI became highly competent at all human games. AI has already defined the 3D structure of the 200 million known proteins, a task that, according to Hassabis, the CEO of DeepMind, would have taken us a billion years using traditional human methods. Its data-analysis abilities have become superb, and the growth of the inference side of AI is accelerating fast. We can expect current models to become good at this very quickly.

 

Creativity

The modern understanding of the artist as a unique individual with a distinct creative vision began to emerge during the Renaissance in Europe, strengthened when the Romantic conception of the artist emerged in the late 18th and early 19th centuries as a reaction to rationalism and industrialization. Romanticism placed a strong emphasis on individualism, emotion, imagination, and the awe-inspiring power of nature. Within this context, the creative artist was seen as a visionary figure, capable of tapping into the depths of human experience and expressing profound emotions and ideas. Technology, of course, has always been part of artistic production and expression, from the tools of painting to photography, synthesizers and other generative tools. The idea that art is some sort of divine spark has long gone. Suddenly, generative AI was winning photography competitions, producing music and being generally ‘creative’, another word that is difficult to define.

 

Can they be independently taught?

Of course, those who are most vociferous on the subject of 21st C skills are often those who tend to 'write' most about them but are the least skilled in them. Education, whether through lectures in universities or one teacher to 30-plus kids in a classroom, is not the way to teach any of these skills, even if they did exist. Of course, education continued to scrap vocational learning, starve it of funding and push for even more abstract schooling and higher education. So-called 21st C skills were fine for others, but not them.

 

Isn’t all this talk of 21st C skills just a rather old, top-down, command and control idea – that we know what’s best for others? Isn’t it just the old master-pupil model dressed up in new clothes? Do the pupils know a tad more about digital skills than the masters?

 

It is an illusion that these skills were ever, or even can be, taught at school. Teachers have enough on their plate without being given this burden. I've seen no evidence that teachers have the disposition, or training, to teach these skills. In fact, in universities, I'd argue that smart, highly analytic, research-driven academics tend, in my experience, to have low skills in these areas. Formal educational environments are not the answer. Pushing rounded, sophisticated, informal skills into a square, subject-defined environment is not the answer. It is our schools and universities, not young people, that need to be dragged into the 21st century. The change will come through mass adoption and practice, not formal education.

 

Conclusion

There’s a brazen conceit here, that educators know, with certainty, that these are the chosen skills for the next 100 years. Are we simply fetishising the skills of the current management class? Was there a sudden break between these skills in the last century compared to this century? No. What’s changed is the need to understand the wider range of future possibilities and stop relying on human exceptionalism.

 

Bibliography

Mirjam Neelen & Paul A. Kirschner  ‘21ST CENTURY SKILLS DON’T EXIST. SO WHY DO WE NEED THEM?’ (2016). https://3starlearningexperiences.wordpress.com/2016/11/01/21st-century-skills-dont-exist-so-why-do-we-need-them/

Rotherham, A.J., & Willingham, D.T., (2010). “21st-Century” skills. Not new, but a worthy challenge. American Educator, 17-20. https://www.aft.org/sites/default/files/periodicals/RotherhamWillingham.pdf

Arum, R. and Roksa, J., (2011). Academically adrift: Limited learning on college campuses. University of Chicago Press. 

Friday, May 19, 2023

Human Exceptionalism – we need to get over ourselves


Human exceptionalism is the belief that we are unique. We evolved to see ourselves in this way. This extends to kinship with family, other social groups and our wider species Homo sapiens. Yet, since we started to create technology, especially writing, we have discovered, time and time again, through reflection and inquiry, that we are not as ‘exceptional’ as we assumed.
 

Copernicus threw our little sphere out into the whirl of the solar system, de-anchoring us from our central place in the known universe. What we had evolved to see was the sun moving round the earth, so the alternative was counterintuitive and shocking. Yet symbolic writing – mathematics and data – was definitive on the matter. ‘Flat earther’ is now a pejorative term. 

 

Darwin then showed that a creator was not necessary for design and that we were just another animal, the product of a process of genetic accidents and selection. There was no designer, only the 'blind watchmaking' of the evolutionary process. In fact, our cognitive, affective and psychomotor capabilities are limited by that evolutionary process – limited working memory, forgetting, cognitive biases, fallible and failing long-term memories, an inability to network efficiently, sleep and death.

 

Physically, technology trounced us. It was thought we would die if we travelled by train, as we couldn't handle the speed. Within 66 years of that first Wright Brothers' flight we had landed on the Moon. Physically we are generally weaker, slower and less accurate than the machines we create, and that gap is widening. There is no place on earth we can't get to, no physical task too big, as technology has extended our physical capabilities. Although, to be fair, the robot person, even the self-driving car, is still some way off. Strength and precision were not our strong points. It was mind.

 

For a long time the focus was on robots and the mechanical side of AI, but it turns out that the real advances were in the psychological domain. With this we turned to cognitive qualities, like critical thinking, collaboration and creativity. Surely technology couldn't encroach on these unique skills? Once again, technology surprised us all – it can.

 

Productivity

Economies have seen significant slow-downs in productivity. This has puzzled many. AI is one of the few areas where increased productivity is clear and measurable. The internet, first with search on Google, massively reduced the effort needed to find information and services. It has created an entire economy based on efficient services, from Amazon to Uber.

 

Productivity has now been turbo-charged with generative AI, where tasks that took hours can be done in minutes, if not seconds. The tasks that have been identified and already researched include research, brainstorming, report writing, copywriting, coding, image creation and translation. This is merely the tip of the potential iceberg.

 

Professions

The real gains come in replacing what expensively educated human beings currently do – the professional tasks of managers, consultants, accountants, lawyers, teachers, doctors and so on.

 

There is nothing uniquely human about management, teaching, medicine, accountancy or the law. The professions really did turn out to be a conspiracy against the laity, creating a class that traded on being uniquely analytic and smart. The rewards went to those who worked with their head, while those that worked with their heart (social care, nursery staff) or hand (workers with physical jobs) were left behind in terms of pay and advancement. A rebalancing will happen.

 

Managers

Productivity has been levelling off for some time, yet the research is already showing significant improvements in speed and quality with AI. Change in this one area has been under way for a while, with Google search and tools such as word processors and spreadsheets. Almost every manager or administrator who has contact with generative AI sees an immediate increase in productivity.

 

Consultants

A whole layer of consultants was created, paid handsomely to do organisational analysis and make recommendations. It turns out that they are highly vulnerable to replacement by AI that has wider knowledge, better data analysis and clearer ways to recommend courses of action to organisations. We literally have consultancy on demand, for free.

 

Researchers

Compared to the huge amount of time it used to take researchers, Google Scholar reduced the time taken to find papers and citations by orders of magnitude. We have also seen advances such as AlphaFold, which predicted the 3D structures of 200 million proteins, work that would otherwise have taken something like a billion years of research, thereby advancing science, particularly medicine, by disintermediating millions of hours of research and lab work. We will now see a further, more significant release of potential productivity by researchers through AI, as its data capabilities, including synthetic data, grow to outpace human research in key areas. The bloated and expensive university system can be made far cheaper and more efficient.

 

Teaching

If a virtual teacher increases the performance of learners, why would we persevere with teachers? These are areas where demand is huge and universal, yet supply limited and expensive. Forget the digital divide – the educational divide is far worse. The idea of a Universal Teacher is firmly on the horizon: one that has a degree in every subject and the pedagogic abilities of the best teacher, as that pedagogy is built into the personalised teaching process, available 24/7, to anyone, anywhere.

 

Physicians

The average GP has close to a 5% error rate on diagnosis. If a virtual doctor gets below that, let's say 2%, why would we persevere with general physicians? We are more than happy to replace physical labour with smart machines, so why not knowledge work? Why should knowledge workers get a pass when others did not?

 

This is why Bill Gates sees learning and healthcare as the two big beneficiaries of AI.

Being a teacher or physician is a means to an end. If another means delivers better outcomes, we should embrace its potential.

Is there anything left?

We saw ourselves as uniquely created creatures with exceptional abilities; skills were seen as exclusively human. Yet as we moved from the fields to the factories, until hardly anyone worked in agriculture, those skills were largely replaced by physical and psychological technology. When manufacturing skills were replaced by machines, we moved from factories to offices. We then thought we would all be invincible knowledge workers, highly rewarded professionals working in offices, or at home on Zoom. We are not. In fact, we are just as vulnerable to technological replacement as those previous groups.

To date, that expertise has been expensive and slow to replicate in humans, so it remained scarce: too few doctors, teachers, lawyers and accountants, each trained over exceptionally long and expensive periods, and therefore not available to all. What we did not foresee was that writing, then printing, then digital storage had captured much of our expertise. It can now be mined by AI to generate expertise on demand, for anyone, and that captured cultural capital can benefit us all.

We have to accept our limitations and deal with this new future as best we can, to suit our needs. This is especially true in ‘learning’, a social good that through digital disintermediation can be democratised and brought to far greater numbers at very low cost. We need to get over ourselves. We are not as exceptional as we thought, in ANY domain. 

Tuesday, May 16, 2023

Smart algorithms work for Google, Facebook & Amazon – can they work for learning?

Algorithms are everywhere
Christopher Steiner, in Automate This: How Algorithms Came to Rule the World, describes how algorithms have found their way into almost everything. Use the web and you're using algorithms. Engage with the financial world and you're engaging with a world driven by algorithms. Look at any image on any screen, use your mobile, satnav or any other piece of technology and you're in the world of algorithms. From markets to medicine, the 21st century is being shaped by the power of algorithms to do things faster, cheaper and better than we humans can, and their reach is growing every day.
Learning is algorithmic
Our brains clearly take inputs, but it is the processing, especially deep processing, that shapes understanding, memory and recall. The brain is not a vast warehouse that stores and recalls memories like video sequences and text; it is a complex, algorithmic organ. However, it would be a mistake to see the brain as a ‘simple’ set of algorithms for learning. Take ‘memory’ alone. We have many types of memory: sensory memory, working memory, long-term memory, flash-bulb memory, memory for faces and, remarkably, memory for remembering future events. Huge gains have been made in understanding how these different types of memory work in terms of our computational brain. Even if one rejects the computational model of the brain, alternative models do seem to require algorithmic explanations, namely inputs, processing and outputs. Learning is therefore always algorithmic.
You’d also be mistaken if you thought that the world of online learning has been immune from this algorithmic reach. From informal to formal online learning, from search to sims, efficient online learning has already been heavily powered by algorithms.
Every time you search for something on Google you're using seriously powerful algorithms to crawl the web, match what they find to your search term and produce a set of probable results. Its algorithms do many other things you may not be aware of, such as identifying cheats who create artificial links and keywords to boost their rankings. Every time you use Facebook, algorithms determine your newsfeed: the more you engage with someone, the more likely you are to see their posts, and the more people in general engage with a post, the more often it is shown to others. Every time you use Amazon to see book rankings, recommendations or offers, it uses what it knows about you and others to offer you choices. Wouldn't it be astonishing if this did not happen more and more in online learning?
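To make the Amazon example concrete, here is a minimal sketch in Python of the co-occurrence idea behind 'people who bought this also bought' recommendations. The data and item names are invented, and real recommenders are vastly more sophisticated, but the principle of mining past behaviour for probable choices is the same.

# Toy 'people who bought X also bought Y' recommender.
# It counts how often items co-occur across purchase histories,
# then recommends the items most often seen alongside a given item.
from collections import Counter, defaultdict
from itertools import combinations

histories = [  # hypothetical purchase histories
    ["tractatus", "investigations", "on_certainty"],
    ["tractatus", "investigations"],
    ["investigations", "blue_book"],
    ["tractatus", "blue_book", "investigations"],
]

co_counts = defaultdict(Counter)  # item -> Counter of co-purchased items
for basket in histories:
    for a, b in combinations(set(basket), 2):
        co_counts[a][b] += 1
        co_counts[b][a] += 1

def recommend(item, n=2):
    # The n items most frequently bought alongside 'item'.
    return [other for other, _ in co_counts[item].most_common(n)]

print(recommend("tractatus"))  # e.g. ['investigations', ...]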
Even in formal learning, there are algorithms galore. In flight simulators, it is not just the image rendering and motion modelling that are driven by algorithms, but also the learning through scenarios and feedback. In games-based learning there are algorithms aplenty. I first implemented algorithms in a BT simulator in the 1980s. What's new is that the constraints we had then have all but disappeared, with faster processing, more memory and the ability to gather large amounts of data for use by adaptive algorithms. They're here and they're here to stay.
Teaching is essentially algorithmic: the teacher (agent) gathers data about the environment (knowledge and students) and attempts to deliver recommendations or responses based on experience. In some cases, such as search and citation-driven research, algorithms do this much faster and better than humans. Teachers, tutors and lecturers, most of whom teach many students, have limited abilities when it comes to gathering data about their students, so no matter how good they are at teaching, tailored, personalised feedback is difficult. It is also difficult to identify what students find hard to learn. What, for example, are the most difficult concepts or weaknesses in a course? Data gathered from individual students, and from large numbers of students, allows algorithms to do their magic.
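As a rough illustration of the kind of question aggregate data can answer, here is a short Python sketch, with an invented response log, that ranks concepts by error rate across many students, something no single teacher could easily see from the front of a classroom.

# Toy sketch: rank concepts by how often students get them wrong.
# Each record is (student_id, concept, answered_correctly).
from collections import defaultdict

responses = [  # hypothetical response log
    ("s1", "fractions", True), ("s1", "ratios", False),
    ("s2", "fractions", False), ("s2", "ratios", False),
    ("s3", "fractions", True), ("s3", "algebra", True),
    ("s4", "ratios", False), ("s4", "algebra", True),
]

totals = defaultdict(int)
errors = defaultdict(int)
for _, concept, correct in responses:
    totals[concept] += 1
    if not correct:
        errors[concept] += 1

# Error rate per concept, hardest first.
for concept in sorted(totals, key=lambda c: errors[c] / totals[c], reverse=True):
    print(f"{concept}: {errors[concept] / totals[concept]:.0%} wrong")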
The bottom line, and there is a bottom line here, is that warm-body teachers are expensive to train, expensive to pay, expensive to retire, variable in quality and, most important of all, non-scalable. This is why education continues to be so expensive. There is no way that current supply can meet future demand at a reasonable cost, and the solution cannot be throwing money at non-scalable approaches. Smart, online solutions that take some elements of good teaching and replicate them in software must be considered. As Sebastian Thrun says, “For me, scale has always been a fascination—how to make something small large. I think that’s often where problems lie in society—take a good idea and make it scale to many people.”
We have seen how smart algorithms in Google, Facebook and Amazon can significantly enhance users’ experiences, so how can they enhance learners’ experiences? It is often thought that data is the key factor, but data is inert and only becomes useful as input to algorithms that interpret it to produce useful actions and recommendations. The quality and quantity of data matter, but it is the quality of the algorithms that really counts.
Smart software, formula-based algorithms grounded in the science of learning, can take inputs and make best guesses based on calculated probabilities. Multiple, linked algorithms can be smarter still. They may use knowledge of you as a learner, such as your past learning experiences, then dynamically track how you are getting on: formative assessment, how long you take on a task, when you get stuck, keystroke patterns and so on. Add data gathered from other groups of similar learners and the system gets very smart. So smart that it can serve up optimised learning experiences that fit your exact needs, rather than the next step in a linear course. The fact that it also learns as it goes makes it even smarter.
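One classic, formula-based example of this kind of best-guessing is Bayesian Knowledge Tracing (from Corbett and Anderson's intelligent tutoring research), which revises the probability that a learner has mastered a skill after every answer. Here is a minimal sketch with illustrative parameter values; real systems fit these parameters from data.

# Minimal Bayesian Knowledge Tracing: revise P(mastered) after each answer.
def bkt_update(p_mastered, correct, p_guess=0.2, p_slip=0.1, p_learn=0.15):
    if correct:  # Bayes' rule, given a correct answer (could be a lucky guess)
        numerator = p_mastered * (1 - p_slip)
        denominator = numerator + (1 - p_mastered) * p_guess
    else:        # Bayes' rule, given a wrong answer (could be a slip)
        numerator = p_mastered * p_slip
        denominator = numerator + (1 - p_mastered) * (1 - p_guess)
    posterior = numerator / denominator
    # Allow for the chance the learner picked the skill up on this step.
    return posterior + (1 - posterior) * p_learn

p = 0.3  # prior belief that the skill is mastered
for answer in [True, False, True, True]:  # observed responses
    p = bkt_update(p, answer)
    print(f"P(mastered) = {p:.2f}")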
Iceberg principle
When we learn, most of the processes going on in our brains are invisible, namely the deep processing involved in memory and recall. This activity, which lies beneath consciousness, is where most of the consolidation takes place. Similarly, it is sometimes difficult to see adaptive, algorithmic learning in practice, as most of the heavy lifting is done by algorithms that lie behind the content. Like an iceberg, the hard work is done invisibly, below the level of visible content. Different students may proceed through different routes on their learning journey; some finish faster, others get help more often and take longer. Overall, adaptive, algorithmic systems promise to take learners on tailored learning journeys. This matters, as the major reasons for learner demotivation and drop-out are difficulty with the course and linear, one-size-fits-all courses that leave a large percentage of students behind when they get stuck or lost.
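To show how that invisible machinery might route learners differently, here is a toy sequencing rule built on mastery estimates like those in the sketch above: a learner whose weakest concept falls below a threshold loops back for help, while another moves straight on. The threshold and data are, of course, invented.

# Toy adaptive router: choose the next activity from mastery estimates.
THRESHOLD = 0.7  # illustrative mastery cut-off

def next_activity(mastery):
    # mastery maps each concept to an estimated P(mastered).
    weakest = min(mastery, key=mastery.get)
    if mastery[weakest] < THRESHOLD:
        return f"remediate {weakest}"   # get help, take longer
    return "advance to the next topic"  # everything secure, finish faster

student_a = {"fractions": 0.9, "ratios": 0.4}
student_b = {"fractions": 0.8, "ratios": 0.85}
print(next_activity(student_a))  # remediate ratios
print(next_activity(student_b))  # advance to the next topic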
One symptom of the recognised need for adaptive, algorithmic learning is a poll by Inside Higher Ed and Gallup of college and university presidents, who saw more potential for a “positive impact on higher education” in ‘adaptive learning’ (66%) than in MOOCs (42%) (The 2013 Inside Higher Ed Survey of College and University Presidents, conducted by Gallup, Scott Jaschik and Doug Lederman, eds., March 2013).

Why would this be so? It is not only college and university presidents; the Gates Foundation has also backed ‘adaptive learning’ as an investment. Read its two recent commissioned reports, and look at its funded activity in this area. One UK company singled out by these reports as a world-class adaptive, algorithmic learning company is CogBooks, who have already taken adaptive learning to the US market. Adaptive learning is seen as a way to tackle the problems of drop-out and extended, expensive courses. Personalised learning, tailored to every individual student, has come of age and may just be a solution to these problems.