Saturday, September 19, 2020

Let's move on from 'Unintelligible Intelligences' - IQ, Multiple Intelligences, Emotional Intelligence, Artificial Intelligence...

 Eysenck (1916-1997) - IQ, assessment and personality...

Binet, the man responsible for inventing the IQ (intelligence quotient) test, never saw intelligence as 'fixed' for individuals. Sadly, his warning was ignored as education, keen as ever on selection, sought out a single measure for intelligence. The 20th century was dominated at first by the Intelligence Quotient, forever associated with Eysenck (1916-1997). This widened out towards the end of the century, first with Gardner's Multiple Intelligences, then Goleman's Emotional Intelligence. None have stood the test of scrutiny and time. With the rise of Artificial Intelligence as we moved into the 21st century, there has been renewed interest in the word 'intelligence'. The measurement of man became a growing obsession, with ever-widening definitions of intelligence; unfortunately, much of this work was damaging, badly researched and, at times, used for nefarious purposes. As IQ morphed into MI, then EQ and AI, the same mistakes were made time after time.

Hans Eysenck was the figure around whom much of the IQ debate revolved in the 20th century. What is less well known is his work on personality types and his opposition to psychoanalysis, and Freud in particular, explained in The Decline and Fall of the Freudian Empire.

A controversial figure, he put forward the proposition that intelligence had a hereditary component and was not wholly socially determined. Although this area is highly controversial and complex, the fact that genetic heritability plays some role has become the scientific orthodoxy. What is still controversial is the definition and variability of 'intelligence' and the role that intelligence and other tests have in education and training. The environment has been shown to play an increasing role, but the nature/nurture debate is a complex area, now a rather esoteric argument about the relevance of different statistical methods.

IQ theory has come under attack on several fronts. Stephen Jay Gould's 1981 book The Mismeasure of Man is only one of many that have criticised IQ research as narrow, prone to reification (treating abstract concepts as concrete realities) and to linear ranking, when cognition is, in fact, a complex phenomenon. IQ research has also been criticised for repeatedly confusing correlation with causation, not only in heritability, where it is difficult to untangle nature from nurture, but also when comparing scores in tests with future achievement. Class, culture and gender may also play a role, and the tests are not adjusted for these variables. Work by Howe, Ericsson and others explains extraordinary achievement as the result of early specialisation and a focused investment in over 10,000 hours of practice, not measurable IQ.

The focus on IQ, a search for a single unitary measure of the mind, is now seen by many as narrow and misleading. Most modern theories of mind have moved on to more sophisticated views of the mind as a set of different but interrelated cognitive abilities. More modular theories and theories of multiple intelligence have come to the fore. Sternberg's three-part theory (analytic, creative, practical) was followed by Gardner's eight intelligences in Frames of Mind.

Goleman's Emotional Intelligence (EQ), reflected in other more academic and well-researched work, also challenged the unitary theory of intelligence, with its emphasis on the ability to harness emotion in self-awareness, thinking, decision making and dealing with others. It is not that IQ is the antithesis of EQ; they are merely different. Even so, Gardner and Goleman have both come under criticism for lacking rigour. In general, however, educational systems in many countries have been criticised for failing to teach this wider set of skills that many now agree are useful in adult life.

Eysenck worked at the University of London with Cyril Burt, the man responsible for the introduction of the standardised 11+ examination in the UK, enshrined in the 1944 Butler Education Act, an examination that, incredibly, still exists in parts of the UK. Burt was subsequently discredited for publishing largely in a journal that he himself edited, and for falsifying not only the data upon which he based his work but also co-workers on the research.

This is just one of many standardised tests that have become common in education, but many believe that tests of this type serve little useful purpose and are unnecessary, even socially divisive. On the other hand, supporters of test regimes point towards the meritocratic and objective nature of tests. Some, however, argue that standardised tests have led to a culture of constant summative testing, which has become a destructive force in education, demotivating learners and acting as an end-point and filter rather than a useful mark of success. Narrow academic assessment has become almost an obsession in some countries, fuelled by international pressure from PISA.

Interestingly, when measuring IQ, the Flynn Effect, drawn from military records, shows that scores have been increasing at a rate of about 3 points per decade (enough to compound into a full standard deviation, 15 points, over 50 years), and there is further evidence that the rate is increasing. This was used by Steven Johnson in his book Everything Bad is Good for You to hypothesize that exposure to new media is responsible, a position with which Flynn himself agrees. This throws open a whole debate and line of research around the benefits of new media in education and learning. Highly complex and interactive technology may be making us smarter. If true, this has huge implications for the use of technology in education and society in general.

Unfortunately, Eysenck and many other psychologists throughout the middle of the 20th century may have focused too much on narrow IQ tests. This led to some dubious approaches to early assessment, such as the 11+, that have, to a degree, socially engineered the future educational opportunities and lives of young people. IQ theorists like Eysenck tended to focus on logical and mathematical skills, to the detriment of other abilities, leading some to conclude that education has been over-academic. This, they argue, has led to a serious skew in curricula, assessment and the funding of education, to the detriment of vocational and other skills.

 

Multiple Intelligences… uncanny resemblance to current curriculum subjects…

Howard Gardner's theory of multiple intelligences opposes the idea of intelligence as a single measurable attribute. His is a direct attack on the practice of psychometric tests and behaviourism, relying more on genetic, instinctual and evolutionary arguments to build a picture of the mind. He also disputes Piaget's notion of fixed developmental stages, claiming that a child can be at various stages of development across different intelligences.

For Gardner, intelligence is "the capacity to solve problems or to fashion products that are valued in one or more cultural setting" (Gardner & Hatch, 1989). To identify the nature of intelligence he sought evidence from reports of brain damage showing isolated abilities; the existence of idiot savants, prodigies and other exceptional individuals; an identifiable core operation or group of operations; specific developmental histories with definable 'end-state' performances; an evolutionary history (at least a plausible one); evidence from experimental psychology; psychometric findings; and the ability to express such intelligences in a symbolic way. In other words, he took a holistic, not a purely experimental or scientific, approach to evidence.

What popped out of studying these criteria was a list of eight 'intelligences'. To be fair, the list (originally seven, with the naturalist intelligence added later) and his thoughts on what constitutes an intelligence have developed over time, as the theory was scrutinized. It opened up the meaning of intelligence beyond the purely rational and logical abilities that were long held to be the essential measures of intelligence.

Loosely speaking, the first two have been typically valued, some would say over-valued, in education; the next three are often associated, but not exclusively, with the arts; the final three are what Gardner called 'personal intelligences':

1. Linguistic: To learn, use and be sensitive to language(s).

2. Logical-mathematical: Analysis, maths, science and investigative abilities.

3. Musical: Perform, compose and appreciate music, specifically pitch, tone and rhythm.

4. Bodily-kinaesthetic: Co-ordination and use of whole or parts of body.

5. Spatial: Recognise, use and solve spatial problems both large and confined.

6. Interpersonal: Ability to read others’ intentions, motivations, desires and feelings.

7. Intrapersonal: Self-knowledge and ability to understand and use one’s inner knowledge.

8. Naturalist: Ability to draw upon the immediate environment to make judgements.

These intelligences complement each other and work together as blends. Individuals bring multiple subsets of these intelligences to bear when solving problems.

Gardner also wrote a full set of recommendations on the use of multiple intelligence theory in schools in The Unschooled Mind, Intelligence Reframed and The Disciplined Mind, looking at how the theory can be applied in education. As John White observed, one problem with the theory is that it bears an uncanny resemblance to the current curriculum subjects, opening it up to the charge that it reflects what we want to teach rather than any cognitive certainty. It can look like a simple defence of the classic curriculum.

This has led to a broader, more holistic view of education, less rigid about abstract and academic learning. It demands knowledge of these intelligences among teachers, an aspirational approach to learning, more collaboration between teachers of different disciplines, better and more meaningful curriculum choices and a wider use of the arts.

Many have also criticized the chosen intelligences as being based on general observations, subject to personal and cultural bias, rather than on universal cognitive abilities grounded in empirical evidence. There is also the problem of identified 'intelligences' such as these not mapping onto the many different forms of cognitive function: sensory, memory and others. In many of these supposed intelligences, multiple and complex cognitive operations are at work.

Like many forms of measurement in education, from learning styles through MBTI to intelligences, the theory can be criticized for leading to the stereotyping and pigeon-holing of learners, pushing them down narrower roads than they would otherwise have been exposed to. It may be their perceived weaknesses that should be addressed, not necessarily their most obvious strengths. Like learning styles, it may do more harm than good.

Gardner himself was shocked and often frustrated by the way multiple intelligences was crudely applied in schools, among "a mish-mash of practices…Left Right brain contrasts….learning styles….NLP, all mixed up with dazzling promiscuity". Some schools in the US even redesigned whole curricula, classrooms and entire schools around the theory. His point was that teachers should be sensitive to these intelligences, not let them prescribe all practice. In his 2003 paper Multiple Intelligences after Twenty Years, for the American Educational Research Association, you could feel his frustration when he writes, "I have come to realize that once one releases an idea – 'meme' – into the world, one cannot completely control its behaviour – any more than one can control those products of our genes we call children." Like many of these theories, the problem was its simplification and seductiveness. It gave us permission to say anything goes. Rather than promoting a focus on a wider, but still rigorous and relevant, curriculum, it was used to confirm the view that there are almost innate 'talents' and that young people simply express those through interest. On the other hand, it also provided some defence against those who want to labour away at maths all day at the expense of many other subjects, or who get overly obsessed with STEM subjects.

Like many theories, it developed over time, and many teachers who quote and use the theory are unlikely to have fully understood its status and further development by Gardner himself. Few will have understood that it is not supported in the world of science, despite the perception among educators that it arose from that source. Gardner's first book, Frames of Mind: The Theory of Multiple Intelligences (1983), laid out the first version of the theory, followed 16 years later by a reformulation in Intelligence Reframed (1999), then again in Multiple Intelligences after Twenty Years (2003). Few have followed its development after 1983, the critiques, or Gardner's subsequent distancing of the theory from brain science.

Lynn Waterhouse laid out the lack of scientific evidence for the theory in Multiple Intelligences, the Mozart Effect, and Emotional Intelligence: A Critical Review, in Educational Psychologist. Many areas of learning, such as reason, emotion, action, music and language, are characterised by intersecting, distributed and complex patterns of activity in the brain. Islands of functional specificity are extremely rare. Gardner seems to suffer from conceptual invention and oversimplification; in short, brain science appears not to support the theory. Gardner responded to this absence of neurological evidence for his separate 'intelligence' components by redefining his intelligences as "composites of fine-grained neurological sub-processes but not those sub-processes themselves" (Gardner and Moran, 2006). Pickering and Howard-Jones found that teachers associate multiple intelligences with neuroscience, but as Howard-Jones states, "In terms of the science, however, it seems an unhelpful simplification as no clearly defined set of capabilities arises from either the biological or psychological research". However, Project SUMIT (Schools Using Multiple Intelligences Theory) does claim to have identified real progress across the board in schools that have been sensitive to Gardner's theories. The problem is that Gardner claims the science has yet to come, while teachers assume it is already there and that the theory arose from the science.

The appeal of Gardner's Multiple Intelligences is obvious. It can take on the mantle of science, even neuroscience, and claim to have reinforced the view, not that specific knowledge and skills matter, but that all knowledge and skills matter. It plays to the socially constructivist idea that anything goes, in a sea of constructions. Critics are right in holding his feet to the fire of experimental rigour and science, to show that these are indeed identifiable 'intelligences' and not just his, or the current educational system's, curricular preferences. The intelligences also seem to support the popular movement towards separate, so-called 21st-century skills, as a generic set of skills that can be taught beyond knowledge. In other words, it chimes with other popular, and possibly erroneous, ideas in learning. On the other hand, while the theory may be rather speculative, his identified intelligences represent real dispositions, abilities, talents and potential, which many schools could be said to downgrade or even ignore.

So far it has been one step forward, getting away from the idea of a single measure of intelligence as a core entity in the mind, towards a more general theory of multiple entities and measures. The problem is that this step was never solid enough to remain stable; it failed to be supported by sound evidence. But we have a glimpse here of the dangers of the word 'intelligence', its tendency to invite forms of essentialism. Like the allure of gold, it attracts 'miners of the mind' looking for a singular intelligence or a set of essential intelligences. It turns out that what is mined is Fool's Gold. It may look like gold but, on examination, it is rigid and non-malleable.

The 'intelligence' movement then took a surprising turn, as it swung into affective or emotional territory. IQ ignored this; Multiple Intelligences tried to widen out to include interpersonal skills, but the emotional side was still outside its scope. So along came another form of intelligence: 'emotional intelligence'.

 

Emotional Intelligence – is it even a 'thing'?

Michael Beldoch wrote papers and a book around emotional intelligence in the 1960s and is credited with coining the term. But it was Daniel Goleman's Emotional Intelligence (1995) that launched another education and training tsunami. Suddenly, a newly discovered set of skills, classed as an 'intelligence', could be used to deliver yet another batch of courses.

Emotional Intelligence (EQ) is seen by Goleman as a set of competences that allow you to identify, assess and control emotions, in yourself and in others.

He identified five types of Emotional Intelligence: 

Self-awareness: Know your own emotions and be aware of their impact on others

Self-regulation: Manage your own negative and disruptive emotions

Social skill: Manage emotions of other people

Empathy: Understand and take into account other people’s emotions

Motivation: Motivate yourself

For Goleman, these emotional competencies can be learned. They are not entirely innate, but learned capabilities that must be worked on and can be developed to achieve outstanding performance. 

We now have some good research on the subject which shows that the basic concept is flawed, that having EI is less of an advantage than you might think. Joseph et al. (2015) published a meta-analysis of 15 carefully selected studies, easily the best summary of the evidence so far. What they found was a weak correlation (0.29) with job performance. Note that 0.4 is often taken as a reasonable benchmark for evidence of a strong correlation. This means that EI has a predictive power on performance of only 8.4%. Put another way, if you are spending a lot of money and training effort on this, it is largely wasted. The clever thing about the Joseph paper was its careful focus on actual job performance, as opposed to academic tests and assessments.
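As a quick sanity check on that 8.4% figure, it is just the standard variance-explained calculation applied to the paper's reported correlation (nothing beyond the 0.29 is assumed here):

\[ r^{2} = 0.29^{2} \approx 0.084 \]

In other words, EI accounts for roughly 8.4% of the variance in job performance, leaving over 90% to be explained by other factors.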

What became obvious as they looked at the training and tools was that there was a bait and switch going on. EI was not a thing-in-itself but an amalgam of other things, especially personality measures. Indeed, when they unpacked six of the EI tests, they found that many of the measures were actually personality measures, such as conscientiousness, industriousness and self-control, lifted straight from established personality tests. So they did a clever thing and ran the analysis again, this time controlling for established personality measures. This is where things got really interesting: the correlation between EI and job performance dropped to a shocking -0.2.

Like many fads in HR, such as learning styles, an intuitive error lies at the heart of this one. It just seems intuitively true that people with emotional sensibility should be better performers, but a moment's thought will make you realize that many forms of performance rely on many other cognitive traits and competences. In our therapeutic age, it is all too easy to attribute positive qualities to the word 'emotional' without really examining what that means in practice. HR is a people profession, staffed by people who genuinely care, but when they bring their biases to bear on performance, as with many other fads, such as learning styles, Maslow, Myers-Briggs, NLP and mindfulness, emotion tends to trump reason. When examined in detail, EI, like these other fads, falls apart. Eysenck, the doyen of intelligence theorists, dismissed Goleman's definition of 'intelligence' and thought his claims were unsubstantiated.

EI tests

Goleman's claim that general EI is twice as useful as either technical knowledge or general personality traits has been dismissed as nonsense, as has his claim that it accounts for 67% of superior leadership performance. This undermines a lot of leadership training, as EI is often used as a major plank in its theoretical framework and courses. Føllesdal looked at the test results (MSCEIT) of 111 business leaders and compared these with their employees' views of those same leaders. Guess what: there was no correlation.

Tests often lie at the heart of these fads, as they can be sold, practitioners trained and the whole thing turned into pyramid selling. Practitioners, in this case, are sometimes called 'emotional experts', who administer and assess EI tests. However, the main test, the MSCEIT, is problematic. First, the company administering the tests (Multi-Health Systems) was found by Føllesdal to be peddling a pig with lipstick. To be precise, 19 of the 141 questions were being scored wrongly. They quietly dropped the scoring on these questions, while keeping them in the test. Reputations had to be maintained. More fundamentally, the test is weak, as there are no correct answers, so it is not anchored in any objective standard. As a consensus-scored test, it is foggy.
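For readers unfamiliar with consensus scoring, the mechanics are roughly as follows. This is a minimal sketch of the general idea in Python, my own illustration rather than the publisher's actual algorithm:

from collections import Counter

def consensus_score(answer, norm_group_answers):
    # Credit for an item is the fraction of the norm group who chose the
    # same option; note that nothing anchors this to a 'correct' answer.
    counts = Counter(norm_group_answers)
    return counts[answer] / len(norm_group_answers)

# If 60 of 100 people in the norm sample chose "b", answering "b" scores 0.6,
# regardless of whether "b" is right by any external standard.
norm = ["b"] * 60 + ["a"] * 25 + ["c"] * 15
print(consensus_score("b", norm))  # 0.6

The design problem is visible in the sketch itself: the score measures agreement with the crowd, not correctness, which is exactly why such a test is not anchored in any objective standard.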

Way forward?

Emotional Intelligence has all the hallmarks of other HR fads – the inevitable popular book, a paucity of research, exaggerated claims, misleading language, the test, and the ignoring of research that shows it is largely a waste of training time. This is not to say that 'emotion' has no role in competences or learning. Indeed, from Hume to Haidt, we have seen that reason is often the slave of the passions. Goleman's mistake was to over-rationalise emotion. In particular, his use of the word 'intelligence' was misleading.

 

Education became fixated with the search for, and definition of, a single measure of intelligence – IQ. The main protagonist was Eysenck, and it led to dubious policies, such as the 11+ in the UK, which is still used for selection into schools at age 11 and was promoted on the back of fraudulent research by Cyril Burt. Out of this obsession also came the language of the gifted and talented, still popular in education, despite the fact that the measures are flawed.

Many have criticised IQ research as narrow in definition. This is a key point. Cognitive science has succeeded in unpacking many of these complexities without reducing them to singular measures or short lists. The focus on IQ, a search for a single, unitary measure of the mind, or even a small set of such measures, is now seen by many as narrow and misleading. Gardner tried to widen its definition into Multiple Intelligences (1983) but this is weak science and lacks any real rigour.

Goleman wanted to add another, Emotional Intelligence, but this turned out to be little more than a marketing slogan. The search for 'intelligence' still suffers from a form of academic essentialism. Most modern theories of mind have moved on to more sophisticated views of the mind as a set of different but interrelated cognitive abilities.

Goleman's confusion of 'intelligence' or 'competences' with personality traits is telling. Eysenck also contributed (with his wife) to the area of personality traits, with the idea that personality can be defined in terms of psychoticism, extraversion and neuroticism. This provided the basis for the now widely respected OCEAN model proposed by Costa & McCrae:

Openness

Conscientiousness

Extraversion

Agreeableness

Neuroticism

Eysenck rejected the Costa & McCrae model but, in the end, it has become the more persuasive theory. This well-researched area of personality traits has largely been ignored in learning, in favour of the more faddish 'learning styles' theory. However, it has been argued that this type of differentiation is far more useful when dealing with different types of learners than the essentialism of Eysenck, Gardner and Goleman.

Why we need to drop the word ‘intelligence’

More recently, the rise of AI has produced a lot of debate on what constitutes 'intelligence'. I discuss this in my book 'AI for Learning'. Turing's seminal paper Computing Machinery and Intelligence (1950), with its rebuttals of nine objections, set the standard for the debate on whether machines can think and be intelligent. Yet Turing largely sidesteps the word 'intelligence' in the paper itself, replacing the question of whether machines can think with the imitation game. It was John McCarthy who coined the term 'artificial intelligence' for the famous Dartmouth Conference of 1956, seen as the starting point of the modern AI movement.

We would do well to abandon the word 'intelligence', as it carries with it so much bad theory and practice. Indeed, AI has, in my view, already transcended the term, as it has had success across a much wider set of competences (previously 'intelligences'), such as perception, translation, search, natural language processing, speech, sentiment analysis, memory, retrieval and many other domains. All of this was achieved without consciousness. It is all competence without comprehension.

Machine learning has led to successes in all sorts of domains beyond the traditional field of IQ and human 'intelligences'. In many ways it is showing us the way, going back to a wider set of competences that includes both 'knowing that' (cognitive) and 'knowing how' (robotics). This was seen by Turing as a real possibility, and it frees us from the fixed notion of intelligence that got so locked into human genetics and capabilities. We can therefore avoid the term 'intelligence(s)', and with it the anthropomorphism of transferring human ideas about intelligence onto non-comprehending, but competent, performance. 'Intelligence' embodies too many assumptions about conscious comprehension in a field where man is NOT the measure of all things.

Beyond brains

The brain is the organ that named itself and created all that we are discussing, but it is an odd thing. It takes over 20 years of education before it is even remotely useful to an employer or society. To attribute 'intelligence' to the organ is to forget that, compared to machines, it can't pay attention for long, forgets most of what you teach it, is sexist, racist, full of cognitive biases, sleeps 8 hours a day, can't network, can't upload, can't download and, here's the fatal objection, it dies. This should not be the gold standard for intelligence, as it is an idiosyncratic organ that evolved for circumstances other than those we find ourselves in.

Let's take this idea further. Koch (2014) claimed that ALL networks are, to some degree, 'intelligent'. As the boundary for consciousness and intelligence changed over time to include animals, indeed anything with a network of neurons, he argues that intelligence is a property that can be applied to any communicating network. As we have evidence that intelligence is related to networked activity, whether in brains or computers, could intelligence be a function of this networking, so that all networked entities are, to some degree, intelligent? Clark and Chalmers (1998), in The Extended Mind, laid out the philosophical basis for this approach. This opens up the field for definitions of 'intelligence' that are not benchmarked against human capabilities or speciesism. If we consider the idea of competences residing in other forms of chemistry and substrates, and see algorithms and their productive capabilities as independent of the base materials in which they arise, then we can cut the ties with the word 'intelligence' and focus on capabilities or competences.

Few would deny that AI has progressed faster than expected, with significant advances in machine learning, deep learning and reinforcement learning. In some cases the practical applications clearly transcend human capabilities and competences in all sorts of fields: calculation, image recognition, object detection and the many fruits of natural language processing, such as translation, text to speech and speech to text. We do not need to see 'intelligence' as the sun at the centre of this solar system. The Copernican move is to remove the term, replace it with competences, and look to problems that can be solved without comprehension. The means to ends are always means; it is the ends that matter.

What is wonderful here is the opening up of philosophical issues around the idea of 'intelligence(s)'. We are far from the existential risk to our species that many foresee, but there are many more near-term issues to be considered. Ditching old psychological relics is one. Artificial smartness is with us; it need not be called 'intelligence'.

Bibliography

Eysenck, H.J. (1967) The Biological Basis of Personality. Springfield, IL: Charles C. Thomas.

Eysenck, H.J. (1971) The IQ Argument: Race, Intelligence, and Education. New York: Library Press.

Eysenck, H.J. (1985). Decline and Fall of the Freudian Empire.

Eysenck, H.J. & Eysenck, S.B.G. (1969). Personality Structure and Measurement. London: Routledge.

Gould, S.J. (1981). The Mismeasure of Man. New York: Norton.

Beldoch, M. & Davitz, J.R. (1964). The Communication of Emotional Meaning. New York: McGraw-Hill.

Gardner, H. (1983). Frames of mind: The theory of multiple intelligences. New York: Basic Books.

Goleman, D. (1995). Emotional intelligence. New York: Bantam Books.

Howe, M. J. A. (1999). Genius explained. Cambridge, U.K: Cambridge University Press.

Johnson, S. (2005). Everything bad is good for you. London: Allen Lane.

McCrae, R. R., & Costa, P. T. (2003). Personality in adulthood: A five-factor theory perspective. New York: Guilford Press.

Bloom, B.S. (1956). Taxonomy of Educational Objectives, Handbook I: The Cognitive Domain. New York: David McKay.

Dennett, D. (1991). Consciousness Explained. Boston: Little, Brown.

Clark, A. & Chalmers, D. (1998). The Extended Mind. Analysis, 58(1), 7-19.

Dreyfus, H., & Dreyfus, S. (1997). Why Computers May Never Think Like People. Knowledge Management Tools, 31-50.

Ebbinghaus, H. (1908). Psychology: An elementary textbook. New York: Arno Press.

Frey, C.B. & Osborne, M.A. (2013). The Future of Employment: How Susceptible Are Jobs to Computerisation? Oxford Martin School.

Harari, Y.N. (2016). Homo Deus: A Brief History of Tomorrow. Harvill Secker, London.

Haugeland, J. (1997). Mind Design II: Philosophy, Psychology, Artificial Intelligence. Cambridge, MA: MIT Press.

Kahneman, D. (2011). Thinking, fast and slow. New York: Farrar, Straus and Giroux.

Koch, C. (2014). Is Consciousness Universal? Scientific American Mind.

Searle, J. (1980). Minds, Brains and Programs. Behavioral and Brain Sciences, 3, 417-424.

Susskind, R., & Susskind, D. (2015). The Future of the Professions. Oxford: Oxford University Press.

Turing, A.M. (1950). Computing Machinery and Intelligence. Mind, LIX(236), 433-460.

Friday, September 11, 2020

US Gov Report on Online Learning - a must read

Evaluation of Evidence-Based Practices in Online Learning: A Meta-Analysis and Review of Online Learning Studies

Fascinating report from the US Department of Education. First, top-quality advisors, people like Richard Clark and Dexter Fletcher, who know research methodologies. Second, scope, covering 1996 to 2008. Third, rigour: clearly identifying measurable effects, random assignment, the existence of controls, and ignoring teacher perceptions.

Interestingly, they lambasted educational research for its lack of rigour, but after filtering for the good studies, here are the results:

Blended best 

"Instruction combining online and face-to-face elements had a larger advantage relative to purely face-to-face instruction than did purely online instruction."

Online better than face-to-face 

“The meta-analysis found that, on average, students in online learning conditions performed better than those receiving traditional face-to-face instruction.”

Online and on-task 

“Studies in which learners in the online condition spent more time on task than students in the face-to-face condition found a greater benefit for online learning.”

Online is all good 

“Most of the variations in the way in which different studies implemented online learning did not affect student learning outcomes significantly.”

Blended no better than online 

“Blended and purely online learning conditions implemented within a single study generally result in similar student learning outcomes.”

Let learners learn 

“Online learning can be enhanced by giving learners control of their interactions with media and prompting learner reflection.”

Online good for everyone 

“The effectiveness of online learning approaches appears quite broad across different content and learner types.”

Groups not advised

“Providing guidance for learning for groups of students appears less successful than does using such mechanisms with individual learners.”

An interesting little observation, tucked away in the conclusions, is that "one should note that online learning is much more conducive to the expansion of learning time than is face-to-face". In other words, it is better to get learners to continue learning after the event. Published 2008 – 12 years later we are making all the same mistakes. Education is a slow learner.

 

Thursday, September 10, 2020

Learning Experience Designer... Who are they, what do they do?

Job titles 

Job titles in the world of online learning have been rather fluid over time. This is to be expected in a new field, where the technology moves at a fair clip. Technology is always ahead of sociology, so we find ourselves always in catch-up mode, or as some say, perpetual-beta. 

What do you call yourself?

Interactive Designer
Instructional Designer
eLearning Designer
Learning Designer
eLearning Instructional Designer
Online Learning Designer
Learning Engineer
Blended Learning Designer
Curriculum Designer
UX designer
UX & UI Designer
Learning Experience Designer 

To be fair, the roles vary in context, scope and responsibilities. In an organisation with just a couple of people delivering the whole online learning service, one person may have to handle everything: the design, development and delivery of entire projects. This can involve client and stakeholder liaison, project management, solution design, writing, graphics and development. At the other end of the spectrum is the person who sits, say, in a large online learning development company. When I ran such a company, LXDs sat within a large team and could focus on what the learner saw, heard and did, as they had a highly differentiated team of writers, graphic designers, animators, video producers, audio producers, developers and testers. Between these two extremes, from DEY (Do Everything Yourself) to narrow specialist, you have everything in between.

The titles have also changed as the vocabulary has changed over time. The term e-learning has given way to online learning. Some object even to the use of the words e-learning or online, referring just to Learning Design. UI and UX have also come across from the general world of web design. The word 'Engineer' has emerged from the learning engineering movement. It's all got kind of messy.

The technology has also changed. Over time, tools have been developed that are usually template-driven. This is a double-edged sword, as the tool frees the designer from having to build from scratch but also locks them into fixed structures. Some argue that this fossilisation has led to too much dependence on multimedia production and not enough on meaningful and effortful learner participation. There is a sense that everything is stuck in multiple choice, drag and drop and so on. More recently the LXP and LRS have emerged, giving rise to the obviously sympathetic term LXD. The job may have to change again, as more contemporary techniques, such as AI and the use of data, are not possible in these older environments.

These linguistic spats are always on the go but there is a fundamental force at work here. Meaning is use. It is pointless trying to change the language, as it evolves through actual use by actual people over time. This is why it is so varied, drifts and changes. So I tend to be relaxed about job titles; they will be what they will be. For the rest of this book, I'll use LXD, short for Learning Experience Designer and Learning Experience Design.
Whatever the job title, I tip my hat to anyone who does this work. It is a complex amalgam of art and science, head and heart. A curious mixture of organisational demands, learning demands, learning psychology, media mix, media production and technology. You must try to satisfy everyone, as everyone has a view on learning. They’ve been to school after all! Well, I’ve been on many aeroplanes but I wouldn’t pretend to have the skills to design or pilot the plane. 

Project management 

There is an illusion that LXD is purely a design activity, but it is a much more complex role than many imagine. All design happens in the context of an organisation and a project. Sure, the focus must always be on the user or learner, but you will also have other internal and external stakeholders, and constraints such as budgets, schedules, resources, technology and organisational culture.

There's always a lot more of this project management malarkey than you think. Any LXD project has to juggle people, costs, time, quality, resources and technology. With all of these balls in the air, one or two will fall during the project. The trick is to know that they will almost certainly fall, so expect it, stay calm and manage the situation. You may not be the project manager but you will, to some degree, be managing your portion of the project. I have always preferred the job title 'Producer' to 'Project Manager', as the role demands fiscal, creative and stakeholder management, similar to that of a producer in the film industry.

People

You may think your sole focus is on users, but there will be other people to think about: shareholders, boards of directors, executive management, suppliers, standards bodies, unions, subject matter experts, project managers, graphic artists, audio engineers, video teams, developers and testers. People run projects, not designs. So you need to know who runs the project externally and internally, and who signs off the various stages of the project. You need to know how to communicate with the relevant people in an appropriate way, knowing who to copy in. A lot of friction is caused by inappropriate communication. Communications with stakeholders have to be managed. You can't speak to a client in the language you'd use online with your friends on Instagram. You may be asked to formally present to a client, which needs careful preparation. You may even be asked to facilitate meetings with stakeholders. You will almost certainly have to troubleshoot and solve problems caused by the natural friction between stakeholders. This is perfectly normal. In this business, the learning business, everyone thinks they can do other people's jobs.

Iterations 

Iterations are normal in LXD. The aim is always to minimise them. Some are necessary, such as further input from subject matter experts and clients, and then there's useful input from users. Some, however, will cause friction. These tend to be small, avoidable errors, such as spelling, punctuation and grammar. For some reason, people reviewing learning experiences are particularly sensitive on this issue. They will happily make mistakes in print, but god forbid that you make a spelling mistake on screen. A particular source of such errors is graphics, where someone whose background is not in writing types in text. I used to demand that graphic artists never typed in text, that they only ever cut and paste. It may seem harsh, but it saved a lot of potential aggro. Similarly with glitches on graphics, audio and bugs. Try to eliminate as many obviously avoidable errors as possible. A good rule is: get it right first time. You should feel responsible for quality control and not see others, like the project manager, QA folk or client, as picking up the slack.

Costs 

Commercial awareness matters. There will be a budget that determines the envelope in which you design. The budget has allocated resources in terms of people, and just as you depend on people supplying you with the necessary information and resources to do your job, so they will depend on you. It is often useful to have a sense of the financial context of a project. The project manager and client will appreciate that you understand the pressures they are under on costs and margins. Coming back to the role of an LXD, cost constraints are usually expressed as time constraints. So you will have to manage your own time and outputs, and will need some project management skills around time, whether your own or others', especially around estimating the time taken for tasks and being firm when extra tasks are lobbed into the project with no extra time given. That's why contingency time is important.

In my next post on LXD I'll be looking at Emotion and Motivation as drivers behind Learning Experience Design...

Tuesday, September 01, 2020

AI for Learning. So what is the book about?


This is, to my knowledge, the first general book about how AI can be used for learning, and by that I mean the whole gamut of education and training. It is not a technical book on AI. It is designed for the many people who teach, lecture, instruct or train, those involved in the administration, delivery and policy around online learning, and even the merely curious. It is essentially a practical book about using AI for learning, with real examples of real teaching and learning in real organizations with real learners.


AI changes everything. It changes how we work, shop, travel, entertain ourselves, socialize, deal with finance and healthcare. When online, AI mediates almost everything – Google, Google Scholar, YouTube, Facebook, Twitter, Instagram, TikTok, Amazon, Netflix. It would be bizarre to imagine that AI will have no role to play in learning – it already has. 


Both informally and formally, AI is now embedded in many of the tools real learners use for online learning: we search for knowledge using AI (Google, Google Scholar), we search for practical know-how using AI (YouTube), we learn languages with Duolingo, and CPD is becoming common on social media, almost all of it mediated by AI. It is everywhere, just largely invisible. This book is partly about the role of AI in informal learning, but it is largely about its existing and potential role in formal learning: in schools, universities and the workplace. AI changes the world, so it changes why we learn, what we learn and how we learn.


It looks at how smart AI can be, and is, used for both teaching and learning. For teachers it can reduce workload and complement what they do, helping them teach more effectively. For learners it can accelerate learning right across the learning journey: engagement, support, feedback, content creation, curation, adaptation, personalization and assessment. AI provides smart solutions to make people smarter.


AI is an IDIOT SAVANT

So how did we get here? Well, AI didn't spring from nowhere. It has a 2,500-year pedigree. What matters is where we are today, somewhere quite remarkable. AI is 'the' technology of the age. The most valuable tech companies in the world have AI as their core, strategic technology. As it lies behind much of what we see online, it literally supports the global web, driving use through personalization. Surprisingly, AI does this as an IDIOT SAVANT: profoundly stupid compared to humans, nowhere near the capabilities of a real teacher, but profoundly smart on specific tasks. Curiously, it can provide wonderfully effective techniques, such as adaptive feedback, on a scale impossible for humans, but doesn't 'know' anything. It is 'competence without comprehension', but competence gets us a long way!


AI and teachers

In the book we first look at AI from the teacher or trainer's perspective, showing that it is not a replacement for, but a valuable aid to, teaching. Robot teachers are beside the point, a bit like having robot drivers in self-driving cars. The dialectic between AI and teaching suggests there will be a synthesis, and increased efficacy in teaching, when its benefits are realized. Similarly for learners. AI is not a threat; it is a powerful teaching and learning tool.


AI is the new UI

AI underlies most interfaces online, mediating what you actually see on the screen. More recently it has provided voice interfaces, both text to speech and speech to text. This is important in learning, as most teaching is, in practice, delivered by voice. Then there is the wonderful world of chatbots, the return of the Socratic method, with real success in engagement, support and learning. There are lots of real examples of how these new interfaces and, in particular, dialogue will expand online learning.


AI creates content

A surprising development has been the use of AI to create online content. Tools like WildFire have been creating online content in minutes, not months, with high-retention learning – using AI to semantically interpret answers and get away from traditional MCQs. AI can also enhance video, which suffers from being a transitory medium in terms of memory, like a shooting star leaving a trail of forgetting behind it, turning it into powerful, high-retention learning experiences. New adaptive learning platforms are proving to be powerful, personalizing learning at scale and delivering entire degrees. AI pushes organisations towards being serious learning organisations by producing and using data to improve performance, not only of the AI systems themselves but also of teachers and learners. Models such as GPT-3 are producing content that is indistinguishable, when tested, from human output. This shows that there is far more to AI than at first meets the AI!
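To give a flavour of what 'semantically interpret answers' means in practice, here is a minimal sketch of open-input marking using the open-source sentence-transformers library. To be clear, this is my own illustration of the general technique, not WildFire's actual implementation; the model name and the 0.7 acceptance threshold are arbitrary choices for the example.

from sentence_transformers import SentenceTransformer, util

# Small, general-purpose sentence-embedding model (an arbitrary choice).
model = SentenceTransformer("all-MiniLM-L6-v2")

reference = "The hippocampus consolidates short-term memories into long-term memory."
learner_answer = "Short-term memories are turned into long-term ones by the hippocampus."

# Embed both texts and compare meaning with cosine similarity (1.0 = identical).
ref_vec, ans_vec = model.encode([reference, learner_answer])
score = util.cos_sim(ref_vec, ans_vec).item()

# Accept semantically equivalent answers even when the wording differs.
print(f"similarity = {score:.2f}:", "accept" if score > 0.7 else "ask to try again")

The point of the technique is that a learner's answer is judged on meaning, not on matching an exact string, which is what frees such tools from the multiple-choice format.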


AI and learning analytics

Learning is not an event, it is a process. Data describes, analyses, predicts and can prescribe that process. The book covers data types, the need for cleaning data, and the practical issues around using data in learning and learning analytics, along with personalized and adaptive learning, showing how AI can educate and train everyone uniquely. Data-driven approaches can also deliver push techniques, such as nudge learning and spaced practice, embodying potent pedagogic practice. New ecosystems of learning, such as Learning eXperience Platforms and Learning Record Stores, move us towards more dynamic forms of teaching and learning. Sentiment analysis, using AI to interpret subjective emotions in learning, is also covered. AI, in this sense, is the rocket, with data as its fuel. We explore how you can move towards a more data-driven approach to learning in the book.
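As a concrete illustration of the 'spaced practice' scheduling mentioned above, here is a minimal sketch of an expanding-interval rule. It is a toy example of the general technique, not the algorithm of any particular platform:

import datetime

def next_review(successes, recalled, today):
    # Expanding-interval spaced practice: each successful recall roughly
    # doubles the gap before the next nudge; a failed recall resets it.
    if not recalled:
        return today + datetime.timedelta(days=1)
    return today + datetime.timedelta(days=2 ** successes)  # 1, 2, 4, 8... days

# Example: a third successful recall on 1 September schedules the next
# nudge 8 days later, on 9 September.
print(next_review(3, True, datetime.date(2020, 9, 1)))

A real system would tune the intervals per learner from data; the point is only that a few lines of scheduling logic are enough to turn content into a process of repeated, spaced nudges.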


AI in assessment

Then there's assessment, which is being made easier and enhanced by AI. From student identification to the delivery and design of assessments, AI promises to free assessment from the costs and constraints of the traditional exam hall. Plagiarism checking is also discussed, as is the semantic analysis of open input in assessment and essay marking.


What next for AI in learning?

Well, there will be a significant shift in the skills needed to use AI in learning, away from the traditional 'media production' mode, and these new skills are explained in detail. More seriously, you can't have a book on AI for learning without tackling 'ethics', and so bias, transparency, race, gender and dehumanisation are all examined. The good news is that AI is not as good as many ethicists think it is, and not as bad as you fear. On employment, we look at something few have looked at: the effect of AI on the employment of learning professionals.


AI: the Final Frontier

Finally, there's a cheeky look at the final frontier. What next? There is a look at how AI may accelerate learning through non-immersive and immersive, brain-based technology, as well as speculation on how this may all pan out in the future. It is literally mind-blowing.


Finally…

In these times of pandemic, we have all had to adapt to online learning; teachers, learners and parents. Necessity has become the mother of invention, and this book offers a look at the future, where AI technology will provide the sophistication we need to make online learning smart, responsive and up to the future challenge of a changing world. AI is here, its use is irreversible and its role in learning inevitable. I hope the book answers any questions you may have on AI in learning; more importantly, I hope it inspires you to think about how you may use it in your organization.


Blended baloney

After all of that fuss, what did 'Blended Learning' do for the world? It had the promise to shake the training world out of its classroom-obsessed straitjacket into a fully developed, new paradigm for training. This needed research, evidence-based models and an analytic approach to developing and designing blended learning.

So what happened?

Muddled by metaphor
First, it got muddled by metaphor. Blended learning failed when it got bogged down by banal metaphors. I've heard them all - blended cocktails, meals, even alloys. Within the ‘food metaphor’ mob we got courses, recipes, buffet learning, tapas learning, fast food versus gourmet. My own favourite is ‘kebab learning’ - a series of small bites, repeated in a spaced practice pattern for reinforcement into long-term learning memory, held together with a solid spine of consistent learning content and objectives. Only kidding of course, but that's the problem with metaphoric blended learning. Who's to say that your metaphor is any better than mine? I even had some fool at the Learning Technologies exhibition come up to me with a 'fruit blender' trying to explain the concept in terms of a fruit smoothie!

What happened to analysis?
Blended learning needs careful thought and analysis, consideration of the very many methods of learning delivery, sensitivity to context and culture, and a matching to resources and budget. It also needs to include scalability, updatability and several other variables. All this talk of meals and metaphors has been going on for several years. What it led to were primitive, indigestible (sic) 'classroom and e-learning' mixes. It never got beyond vague 'velcro' models, where bits and bobs were stuck together (now that's a metaphor).

Blended learning became blended TEACHING
Second, blended learning books turned out to be the very opposite of Blended Learning theory, namely Blended TEACHING. Attempts at defining, describing and prescribing blended learning were crude, involving the usual suspects (classroom plus e-learning). It merely regurgitated existing 'teaching' methods, usually around some even vaguer concept like 'learning styles'. Note how vague concepts reinforce each other in training. When it did get theoretical, it went wildly overboard, with the ridiculous ramblings of the Lego Brick brigade (Hodgins, Masie etc.), espousing the virtues of reusable learning objects.

Let me put forward my own food metaphor – blended baloney. What do you get when you blend things in a mixer without due care and attention to needs, taste and palate? What we got was baloney (dull, tasteless sausage meat).

Abandon lectures: increase attendance, attitudes and attainment

In a recent debate with Stephen Downes, I spent some time going through dozens of papers and meta-studies showing that the lecture is a largely disastrous pedagogic technique, devoid of formative assessment, diagnosis of student understanding, actual teaching or inspiration.

I wasn't surprised at the qualitative nature of Stephen's response, as I've heard it many times before: 1) that lectures are not about 'teaching' but 'showing practice', i.e. what it's like to be a physicist, or whatever; 2) that some lectures are good, e.g. Martin Luther King's speech; and 3) that lectures must be good as they've been around for so long.

I don't buy any of these arguments as: 1) that's not what lecturers or students think, expect or require; 2) the fact that a chosen few can do something well (like surgery or any other form of expertise) doesn't mean that it should be done by everyone; 3) slavery was around for millennia but that doesn't make it right – you can't derive an 'ought' from an 'is'. In any case, I'll beaver on, uncovering the evidence where I find it.

In this week's Science, Carl Wieman, a Nobel Prize-winning physicist and associate director of science at the White House Office of Science and Technology Policy, along with researchers Louis Deslauriers and Ellen Schelew, published a paper, 'Improved Learning in a Large-Enrollment Physics Class', that shows improvements in attainment, attendance and attitudes when lectures delivered by senior, experienced academics are abandoned in favour of approaches using postdocs, interactive techniques and formative assessment.

http://www.sciencemag.org/content/332/6031/862.abstract

As the authors say, even in good lectures, with good student reviews, student attainment can be poor. So they cut through the qualitative stuff and compared:

1. Control group (267 students) taught by a faculty member with years of experience teaching the physics course and good student evaluations.

2. Experimental group (271 students) taught by a postdoc with almost no teaching experience in introductory physics, using proven, research-based learning techniques.

The groups were taught a module in a physics course, in three one-hour sessions in one week. In short: attendance increased and measured attitudes were better (students enjoyed the experience (90%) and thought that the whole course would be better if taught this way (77%)). More importantly, students in the experimental group outperformed the control group, doing more than twice as well in assessment.

Academics will go to great lengths to defend traditional lectures, even resorting to abuse (see my 'Don't lecture me!' ALT talk, complete with abusive Tweets). However, there comes a point when the evidence (surely a fundamental tenet in HE) must win out. This paper points towards something that decades of research have confirmed: there must be a rethink on lectures. We may then have a chance to dramatically change teaching in Higher Education for the better, also making it cheaper. In other words, get good teachers to teach and let researchers research. The two competences may overlap but they are not congruent.

Huge study: Do universities really teach critical thinking? Apparently not.

Do universities really teach critical thinking? This huge longitudinal study, using the Collegiate Learning Assessment (CLA) with 2,322 students over four years, from 2005 to 2009, across a broad range of 24 U.S. colleges and universities, suggests not. Richard Arum of New York University found that students were woeful at critical thinking, complex reasoning and written communication: 45% made no significant improvement in these 'higher order' thinking skills in their first two years, and 36% showed no significant gains over the full four years.

Best subjects

Students with most gains studied:

Humanities

Social sciences

Natural sciences

Mathematics

Students with least gains studied:

Business

Education

Social work

Communications

Some surprises

Extra data threw up some other surprises:

Students who studied alone did better than those who studied in groups

Only a fifth of their time spent on academic pursuits

Over 50% of time spent socialising

Students avoided courses that involved a lot of reading and writing

Timely report

This is a timely report and there has already been much soul searching, as many start to question the real value for money that HE delivers in the US. "No one concerned with education can be pleased with the findings of this study," said Howard Gardner.

It questions the fundamental purpose of higher education, as it has been assumed that these skills were precisely what was being taught. What needs to happen is a re-evaluation of 'teaching' in Higher Education. Fiscal pressure, along with rising student costs and expectations, will make this happen. My own view is that the lazy 'lecture' is the dark secret at the heart of academic teaching. Since Benjamin Bloom first showed the pedagogic weaknesses of lectures in the 1950s, we've had decades of confirmatory research showing their deficiencies. It comes as quite a shock to lecturers when you subject their teaching method to the same levels of academic scrutiny as their own research. Bligh, Gibbs and Mazur all describe these double standards.

There is no evidence that the dominant 'lecture' approach to teaching promotes critical thinking. Even Bligh, who promotes lectures, makes it clear that they do not, and he couldn't find a single study that claimed they did; all 21 studies he reviewed showed that other methods were better. Gibbs confirms this view. Mazur's work in the teaching of physics is also clear on the subject: significant gains in understanding come from avoiding traditional lectures. This is an important debate because, if I, along with Bligh and others, am right, there's a serious pedagogic hole in higher education.

This topic is covered in more detail in my own talk ‘Don’t lecture me!’ and in a follow up webinar from ALT. Richard Arum’s new book, Academically Adrift: Limited Learning on College Campuses (University of Chicago Press) discusses the report in detail and is published later this month.

Life's a blend! 10 ways Covid has blended our lives...

Odd that our political parties, educators, academics and journalists seem devoid of fresh policy thinking during Covid... it's a unique opportunity to take some positives out of the pandemic... Could this be reframed with a general concept, already defined as 'blended' in the learning world?

Blended Work

It is crystal clear that many workplaces will remain empty and leases will be dropped, so convert offices to housing, reduce commuter pollution, ease the problem of London acting as a sink for talent and investment, revitalise the regions, save a ton of money for pay rises and hiring, and allow more flexibility in working… what’s not to like with Blended Working…

Blended Learning

Our universities could lower costs, spend less on expensive buildings, be more inclusive, scale teaching and learning, offer year-round entry points, more flexible learning and online assessment... what's not to like with Blended Learning…

Blended Entertainment

Our entertainment has been a blend of inside and outside the home for decades, with books, radio and TV, now streaming, alongside cinema, theatre and festivals. We barely reflect on the fact that technology has blended the way we entertain ourselves... what's not to like with Blended Entertainment…

Blended Eating 

Our eating habits have moved from being almost entirely at home, or in canteens at school and work, towards a blend: first home cooking plus eating out at restaurants, now a more complex mix of eating at home, home deliveries and restaurants... what's not to like with Blended Eating…

Blended Finance

Our financial management shifted from teller to ATM some time ago and has since shifted further to online services. Our contactless credit cards have worked like a dream and Covid has moved us closer to a cashless society… what's not to like with Blended Finance…

Blended Healthcare

Our healthcare habits have moved from being almost entirely at the doctor's surgery or hospital to a blend of telephone, online and Zoom sessions, with triage, free from the tyranny of place, so we only go somewhere if necessary... what's not to like with Blended Healthcare…

Blended Conferences

Our conferences went online and we learned that we could listen to experts and communicate with others without the massive spend on travel and accommodation. Sure, it wasn’t quite the same, but maybe the old model was a bit excessive… what's not to like with Blended Conferences…

Blended Travel

Our travel habits have changed: fewer flights, less commuting, less car use. With climate change this has surely been a step in the right direction. It is not that travel will stop, but it will be less frequent, and we have all learned to think twice before jumping on a plane, train or automobile… what's not to like with Blended Travel…

Blended Communication

Our communications took a pendulum swing online, as we started to use Zoom and social media with more vigour. Many are now comfortable with video conferencing and have increased their communication with friends and relatives who live at some distance… what's not to like with Blended Communication…

Blended Domesticity

Our home life has changed towards more cooking and baking (the country ran out of flour). Growing tomatoes became an obsession for some and DIY flourished. People started doing things for themselves. That balance, or blend, came to the fore… what's not to like with Blended Domesticity…

Life’s a blend...

Saturday, August 29, 2020

More important than man on the moon - the melding of mind and machine

Last night we witnessed a live-streamed event that may prove more significant than the moon landing. Elon Musk showed the remarkable progress of Neuralink. AI, robotics, physics, material science, medicine and biology collided in a Big Bang event, where we saw an affordable device that can be inserted into your brain to address serious spinal and brain problems. By problems they meant memory loss, hearing loss, blindness, paralysis, extreme pain, seizures, strokes and brain damage. They also included mental health issues such as depression, anxiety, insomnia and addiction. Ultimately, I have no doubt that this will lead to a huge decrease in human suffering. God doesn’t seem to have solved the problem of human suffering; we as a species, through science, are on the brink of doing it by and for ourselves.

Current tech

Current technology (the Utah array) has only 100 channels per array and its wires are rigid, inserted crudely with an air hammer. You have to wear a box on your head, with the risk of infection, and it requires a great deal of medical expertise. It does a valuable job but is low-bandwidth and destroys about a sugar cube of brain matter. Nevertheless, it has greatly improved the lives of over 150,000 people.

Neuralink tech

Musk showed three little piggies in pens: one without an implant, one that had had an implant, since removed without any ill effects, and one with an implant (they showed its signal live). Using a robot as surgeon, the Neuralink device can be inserted in an hour, without a general anaesthetic, and you can be out of hospital the same day. The coin-sized device sits in the skull, beneath the scalp. Its fibres are only 5 microns in diameter (a human hair is 100 microns) and it has ten times the channels of the Utah array, with a megabit bandwidth rate to and from your smartphone. All channels are read and write.

Smartphone talks and listens to brain

When writing to the brain, you don’t want to damage anything, and you need precise control over a range of electric fields in both time and space, as well as the delivery of a wide range of currents to different parts of the brain. The device uses Bluetooth to and from your smartphone. Indeed, it is the mass production of smartphone chips and sensors that has made this breakthrough possible.
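A rough back-of-envelope sketch shows why those bandwidth numbers matter. Using only the figures quoted above (channel count rounded, and with the 20 kHz, 10-bit raw recording treated as illustrative assumptions rather than published specs), the arithmetic suggests the implant cannot stream raw voltages and must process spikes on-chip:

# Bandwidth sketch in Python, assuming the figures quoted above.
channels = 10 * 100            # ten times the Utah array's 100 channels
link_bps = 1_000_000           # the quoted ~megabit link to the smartphone

budget = link_bps / channels   # ~1,000 bit/s available per channel
raw = 20_000 * 10              # assumed 20 kHz sampling at 10 bits/sample

print(f"Per-channel budget: {budget:.0f} bit/s")
print(f"Raw broadband signal: {raw} bit/s per channel")
print(f"Overshoot: {raw / budget:.0f}x")   # ~200x over budget

On those assumptions, raw voltages would swamp the radio link roughly 200-fold, so spike detection and compression have to happen on the implant itself before anything reaches the phone.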

Team Musk

What really made this possible was Elon Musk, a remarkable man, who brought together a team of AI experts, roboticists, material scientists, mechanical engineers, electrical engineers and neurologists. In the Q&A session afterwards, they were brilliant.

What next?

I discussed Neuralink in my book ‘AI for Learning’, speculating that at some distant time machine would meld with mind, and that this would open up possibilities for learning. I didn’t imagine that it would be kicked off just a few days after the book’s release… but here we have it. So what are the possibilities for learning?

Insights

At the very least this will give us insights into the way the brain works. We can ‘read’ the brain more precisely, but also experiment to prove or disprove hypotheses on memory and learning. This will take a lot more than just reading ‘spikes’ (electrical impulses passed from one neuron to many), but it is a huge leap in terms of an affordable window into the brain. If we unlock memory formation, we have the key to efficient learning.
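To make ‘reading spikes’ concrete, here is a minimal sketch of the kind of first-pass signal processing involved: a generic threshold-crossing detector of the sort long used on extracellular recordings. The parameter values are illustrative defaults, not anything Neuralink has published.

import numpy as np

def detect_spikes(voltage, fs, k=4.5, refractory_ms=1.0):
    # Robust noise estimate: median absolute deviation, scaled so it
    # approximates the standard deviation for Gaussian noise.
    sigma = np.median(np.abs(voltage)) / 0.6745
    threshold = -k * sigma  # extracellular spikes appear as sharp negative dips

    # Indices where the signal first crosses below the threshold
    crossings = np.flatnonzero(
        (voltage[1:] < threshold) & (voltage[:-1] >= threshold)
    ) + 1

    # Enforce a refractory period so one spike isn't counted twice
    min_gap = int(fs * refractory_ms / 1000)
    spikes, last = [], -min_gap
    for idx in crossings:
        if idx - last >= min_gap:
            spikes.append(idx)
            last = idx
    return np.asarray(spikes)

# Example: one second of synthetic noise at 20 kHz with three injected dips
fs = 20_000
rng = np.random.default_rng(0)
signal = rng.normal(0, 1, fs)
signal[[4000, 9000, 15000]] -= 12      # artificial spike-like events
print(detect_spikes(signal, fs))       # expect [ 4000  9000 15000]

Detecting a spike is the easy part; attributing it to a particular neuron and decoding what populations of spikes mean is where the hard science, and the AI, comes in.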

Interfaces

Our current interfaces (keyboards, touchscreens, gestures and voice) could also be bypassed, giving much faster ‘thought to and from machine’ communication by tapping into the phonological loop. This would be an altogether different form of interface, more akin to VR. Consciousness is a reconstructed representation of reality anyway, and these new interfaces would be much more experiential, forms of consciousness rather than just language.

Read memories

Memories are of many types: complex, distributed things in the brain. Musk talked eloquently about being able to read memories, which means they could be stored for later retrieval. Imagine having cherished memories stored to be experienced later, like your wedding photos, only as felt, conscious events, like episodic memories. There are conceptual problems with this, as memory is a reconstructive event, but at least these reconstructions could be recorded for later retrieval. At the wilder end of speculation, Musk imagined that you could ‘read’ your entire brain, with all of its memories, store it and implant it in another device.

Imagination

This is not just about memories. It is our faculty of imagination that drives us forward as a species, whether in mathematics, AI and science (Neuralink is an exemplar) or in art and creativity. Think of the possibilities in music and other art forms, the opportunities around the creative process, where we could have imagination prostheses.

Write memories

Reading memories is one thing; imagine being able to ‘write’ memories to the brain. That is, essentially, a form of learning. If we can do this, we can accelerate learning. This would be a massive leap for our species. Learning is a slow and laborious process. It takes 20 years or more before we become functioning members of society, and even then we forget much of what we were taught and learned. Our brains are seriously hindered by the limited bandwidth and processing power of our working memory. Overcoming that block, by writing directly to the brain, would allow much faster learning. Could we eliminate great tranches of boring schooling? Such reading and writing of memories would, of course, have to be encrypted for privacy. You wouldn’t want your brain hacked!

Consciousness

In my book I talk about the philosophical debate around extended consciousness and cognition. Some think the internet and personal devices like smartphones have already extended cognition. The Neuralink team are keenly aware that they may have opened a window on the mind that could ultimately solve the hard problem of consciousness, something that has puzzled us for thousands of years. If we can really identify correlates between what we experience in consciousness and what is happening in the brain, and can even simulate and create consciousness, we are well on the way to solving that problem.

End to suffering

But the real win here is the opportunity to limit suffering: pain, physical disabilities, autism, learning difficulties and many forms of mental illness. It may also be possible to read electrical and chemical signals for other diseases, leading to their prevention. This is only the beginning, like the first transistor or telephone call. It is a scalable solution, and as versions roll out with more channels, better interpretation using AI and coverage of more areas of the brain, there are endless possibilities. This event was, for me, more important than man landing on the moon because its focus is not on grand gestures and political showmanship but on reducing human suffering. That is a far more noble goal. It is about time we stopped obsessing over the ethics of AI, with its endless dystopian navel-gazing, and recognised that it has revolutionary possibilities in the reduction of suffering.

FDA approved

The good news is that they have FDA Breakthrough Device designation and will be doing human trials soon.