Tuesday, September 29, 2020

Transfer - why is it ignored? Here's how to fix it...

Transfer is something that is often completely ignored in experience design. But what is the point of having learning experiences if they don’t transfer to actual application and performance? Learning experiences may not only fail to transfer but can actually block transfer.

You must design with transfer in mind: blends or learning journeys must move learning forward towards action, towards doing, towards practice and performance. No matter how much training you deliver, it can be illusory in the sense of never converting cognitive change into actual performance, which in turn has an impact on the organisation.

Doing and practice are experiences. In fact, without doing or practice, learning is unlikely to be retained long-term. Your design must go beyond experiences that match whatever type of learning you need, cognitive, psychomotor and affective; practice and application experiences also matter. Your design should provide transfer pathways towards mastery, through actual doing and practice in the formal learning, as well as practice and extension activities beyond the initial learning experiences.

Note that observable behaviours can be used as a measure, but this is notoriously difficult, except in very formal apprenticeship-type learner journeys, and arguably behaviour must itself result in an impact on the organisation. It is better to go for data on KPIs, as they are commonly found in organisations. Note also that learning in the workflow increases transfer, as you are using what you learn immediately. The training is proximate to the task.

Knowledge can lie inert (Renkl et al. 1996) and fail to transfer. Early research focused on the idea that elements in the learning must be identical to those in the real world if transfer is to succeed (Singley & Anderson 1989). But it was Tulving who focused more on the retrieval of specific ‘cues’ in memory (Tulving & Thomson, 1973), recommending that such cues be designed into the learning experience, along with retrieval and spaced practice. We use this ‘cues’ technique in WildFire, where AI is used to create online learning, with cues, in minutes not months.

Near and Far transfer

A useful distinction is between ‘near’ and ‘far’ transfer. Near transfer is where the task is simple and routine, such as learning how to ‘cut and paste’ in a word processor, and the contexts are similar. Far transfer involves troubleshooting or problem solving using learned knowledge and skills, such as management skills or learning experience design, where the contexts in which skills are applied will be hugely varied. Far transfer is often pointed to as a key component of an increasing number of future jobs, as routine tasks are automated.

Near transfer is easy to design for using methods such as varied worked examples, retrieval, deliberate, directed and spaced practice. 

Far transfer is far trickier. You will want to present the training in as realistic a way as possible, so that the cues can be embedded in the training. So, when doing management training, use real imagery or video within a real office. This suggests that we avoid cartoon representations or imagery that does not match the actual environment in which the training is to be applied. Flight simulators provide a good example of congruence between the training and environment. 

Note that it is often necessary to design learning experiences that are not too open and sophisticated, as the novice would suffer from overload and confusion (Carroll, 1992). In software training, for example, you narrow down the options with guided, step-by-step instruction, so as not to overwhelm the learner. The constraints may be loosened as expertise is built. Similarly, in language learning, early learning will be of basic vocabulary and grammar, leading to guided and supported use and finally immersion.

Far transfer needs variation in context so that the principles can be applied in new situations as they arise. Variation in worked examples and applications by the learner will give them the flexibility to adapt what they learn to future problems, and so support far transfer.

The classroom is often a poor environment for transfer, whereas on-the-job training provides real cues and context. Transfer is therefore a strong argument for learning in the workflow, where you learn and do immediately, in the real world. What is needed is something that approximates the old apprenticeship model, now perhaps renamed Blended Learning. A true Blended Learning experience integrates theory and practice, providing a process for progress from novice to expert. That process may take weeks or months and need not be restricted to a simple one or two day course or online learning experience. Learning needs transfer and transfer takes time. Specific features of an optimal Blended Learning design may include applying what you learn in the real world, working through real case studies, studying models of expert performance, making changes and seeing how they affect the outcome, voicing or articulating what you do as you do it to others and, of course, learning from mistakes. In other words, learning experiences benefit from actual experiences.

Situated learning, where you learn in the job context, allows actual results to act as a measure of success, if practised in a safe environment. But pure situated practice can take a long time and is difficult to execute. It may, as Anderson et al. (1996) found, be an exaggeration to think that it is the optimal solution. Marshall (1995) found that a blend of theory and examples, combined with some job or workflow training, works best.

Spaced practice is one way to bridge the gap between training and delayed performance. Interestingly, transfer may be increased by delayed feedback. There is ample evidence to show that spaced practice will increase retention and transfer.
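Spaced practice can be sketched as a simple expanding-interval schedule: review the material at widening gaps after the initial learning. The interval values below are illustrative assumptions for the sketch, not a prescription from the research:

```python
from datetime import date, timedelta

def spaced_schedule(start, intervals_days=(1, 3, 7, 14, 30)):
    """Return review dates at expanding intervals after initial learning.

    The intervals are illustrative; a real schedule would be tuned to
    the material and the learner.
    """
    return [start + timedelta(days=d) for d in intervals_days]

# Five reviews over roughly a month after a session on 29 Sept 2020.
reviews = spaced_schedule(date(2020, 9, 29))
```

Each review forces retrieval just as the material is fading, which is what strengthens retention and, with varied contexts, transfer.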

Druckman and Bjork (1994) showed that delaying feedback allows learners to make mistakes and learn from those mistakes, so be careful about always giving immediate feedback. We do this in WildFire-created content, where you get the presented learning experience and do retrieval practice, but only at the end of the online module do you get Red, Amber and Green feedback and are asked to go back and correct your Red and Amber mistakes. Giving learners room to think, reflect and make mistakes will increase retention, subsequent retrieval and transfer.

Technology and transfer

Technology also gives us opportunities to practise and therefore transfer learning. Simulations have long provided powerful practice and transfer. Pilots really do learn how to take off, fly, land and cope with rare emergencies using simulators. So why are simulators not more commonly used in learning? Well, the pilot goes down with the plane. There is not much hyperbolic discounting when your imagination and the reality of your job take you to 35,000 feet in 300 tons of metal with 600 passengers.

One common error is to assume that full fidelity is always needed. This may not be possible on cost grounds, and it is vital that a distinction be made between ‘physical’ and ‘psychological’ fidelity. Most tasks require careful design around the psychological or cognitive processes in learning. In fact, low-fidelity simulations can be as effective as very expensive high-fidelity simulations, if the psychological fidelity is strong. Cox et al. (1965) showed that a cardboard box and photograph simulation could be as effective as a high-fidelity simulator. You often see simple cockpit and control set-ups in pilot training facilities. I know of one trainer who was an expert in buying old junk equipment from planes for training. They had a hangar full of the stuff. It did the job.

Simple and mini-simulations may be useful for limited tasks, and branched scenario training where decision making is needed. This requires the careful selection of scenarios, based on their most likely occurrence in real life. This breathes life into learning as well as increasing the chances of transfer. The variety of scenarios should match the variety of probable real-life scenarios as much as possible. Probabilistic presentation of scenarios is also possible. I’ve been involved in high-end scenario training around conflict in healthcare that involved a wide range of scenarios, from alcohol and drug users in A&E, to violent patients, colleagues and even those visiting patients. Customer service may require a careful selection of customer types. I designed an airport check-in scenario-based simulator that took a carefully calibrated selection of typical customer types; impatient business traveller, large family, nervous single traveller etc. and integrated the interpersonal skills with the software skills of using the check-in system, as well as the physical skills of handling luggage and labels.
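Probabilistic presentation of scenarios can be sketched as a weighted draw, where each scenario’s weight reflects how often it occurs in real life. The scenario names and weights below are hypothetical, loosely echoing the check-in example above:

```python
import random

# Hypothetical customer types, weighted by assumed real-life frequency.
scenarios = {
    "impatient business traveller": 0.4,
    "large family": 0.3,
    "nervous single traveller": 0.2,
    "passenger with excess luggage": 0.1,
}

def next_scenario(rng=random):
    """Pick the next training scenario in proportion to its weight."""
    names = list(scenarios)
    weights = [scenarios[n] for n in names]
    return rng.choices(names, weights=weights, k=1)[0]
```

Over a session, the learner then meets scenarios in roughly the mix they will face on the job, which is the point of matching scenario variety to real-life variety.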

An LXP (Learning Experience Platform) may nudge and challenge you to do things in the workflow. When learning experiences are delivered with an understanding of context and at the point of need, they have a far higher chance of transfer, as they are likely to be applied immediately and in a real-world context. One can push content out to learners, using predictive techniques, knowing that they are likely to need it, or react to them pulling content when they feel they need the support.

VR also offers opportunities for practice, and therefore transfer, in expensive, rare and dangerous environments: oil rigs, the insides of vehicles, emergency incidents, down to the micro-level or out in space. VR can give high-fidelity environments and now, with haptics providing the physical feel of handling objects, and cable-free headsets, the freedom to move and experience worlds you may at some time encounter.

Technology evolves fast, and as we have seen, simulations, LXPs and VR can certainly enable transfer.

Making it work

We must be careful with transfer. Weinbauer-Heidel, who wrote What Makes Training Really Work, a book on transfer, warns us against transfer strategies if the ‘capability’ is not there; so often training cannot deliver on practice and application. She also recommends that no course certificates are issued unless transfer has been shown. For her, transfer needs to be levered at the personal, training and organisational levels.

Personally, the learner has to want to follow through to action and transfer and be confident that they can perform. They must be made aware of the value of doing this in practice. This can be done by increasing relevance and proximity to the actual tasks, which is why learning in the workflow gives a powerful boost to transfer.

Training needs to be clear about what is expected in terms of application and doing, and not get stuck at pure theory. It means designing experiences that are relevant and practical to individuals, with practice included during and after the training.

Organisationally, learners should be expected to practise, with time available and support from line managers. There are many ways to do this: nudges, challenges, projects, mentoring, deliberate practice schedules, short apprenticeships. Opportunities for practice can be deliberately recommended and created, such as designing a website, using a spreadsheet or handling things in a lab, in a safe environment with no bad consequences for the organisation. This needs people to supervise, and space and time to practise. In short, training and managers must take the horse to water AND make it drink.

Yet what does designing a practice experience actually mean? At some point you must let go and hand the reins over to the learner. It is a matter of suggesting, structuring or simulating practice and application, not once but repeatedly. We must understand what deliberate practice means, and what spaced practice means.

The danger should be obvious: we focus on shallow media presentation to get fun or engagement, but what really matters is an understanding of how the mind of the learner actually learns. Learning is not like other experiences, such as entertainment. We have learned, through many decades of research, that it is a complex business that needs to be worked on to optimise the learning experience and result in actual transfer and application of learning.


Renkl, A., Mandl, H. and Gruber, H., 1996. Inert knowledge: Analyses and remedies. Educational Psychologist, 31(2), pp.115-121.

Singley, M.K. and Anderson, J.R., 1989. The transfer of cognitive skill (No. 9). Harvard University Press.

Tulving, E. and Thomson, D.M., 1973. Encoding specificity and retrieval processes in episodic memory. Psychological Review, 80(5), pp.352-373.

Carroll, J.M., 1992. Minimalist documentation. Handbook of human performance technology.

Anderson, J.R., Reder, L.M. and Simon, H.A., 1996. Situated learning and education. Educational Researcher, 25(4), pp.5-11.

Marshall, S.P., 1995. Schemas in problem solving. Cambridge University Press.

Druckman, D.E. and Bjork, R.A., 1994. Learning, remembering, believing: Enhancing human performance. National Academy Press.

Cox, J.A., Wood Jr, R.O. and Thorne, H.W., 1965. Functional and Appearance Fidelity of Training Devices for Fixed-Procedures Tasks (No. HUMRRO-TR-65-4). George Washington University, Human Resources Research Office, Alexandria, VA.

Weinbauer-Heidel, I., 2018. What Makes Training Really Work.

Saturday, September 19, 2020

Let's move on from 'Unintelligible Intelligences' - IQ, Multiple Intelligences, Emotional Intelligence, Artificial Intelligence...

 Eysenck (1916-1997) - IQ, assessment and personality...

Binet, the man responsible for inventing the IQ (intelligence quotient) test, never saw it as being ‘fixed’ for individuals. Sadly, his warning was ignored as education, keen as ever on selection, sought out a single measure of intelligence. The 20th century was dominated at first by the Intelligence Quotient, forever associated with Eysenck (1916-1997). This widened out towards the end of the century, first with Gardner’s Multiple Intelligences, then Goleman’s Emotional Intelligence. None have stood the test of scrutiny and time. With a renewed interest in Artificial Intelligence as we moved into the 21st century, there has been renewed interest in the word ‘intelligence’. The measurement of man has become a growing obsession, with ever widening definitions of intelligence; unfortunately, much of it was damaging, badly researched and, at times, used for nefarious purposes. As IQ morphed into MI, then EQ and AI, the same mistakes were made time after time.

Hans Eysenck was the figure around whom much of the IQ debate revolved in the 20th century. What is less well known is his work on personality types and his opposition to psychoanalysis, and Freud in particular, explained in The Decline and Fall of the Freudian Empire.

A controversial figure, he put forward the proposition that intelligence had a hereditary component and was not wholly socially determined. Although this area is highly controversial and complex, the fact that genetic heritability has some role has become the scientific orthodoxy. What is still controversial is the definition and variability of ‘intelligence’ and the role that intelligence and other tests have in education and training. The environment has been shown to play an increasing role, but the nature/nurture debate is a complex area, now a rather esoteric argument over the relevance of different statistical methods.

IQ theory has come under attack on several fronts. Stephen Jay Gould’s 1981 book The Mismeasure of Man is only one of many that have criticised IQ research as narrow, subject to reification (turning abstract concepts into concrete realities) and linear ranking, when cognition is, in fact, a complex phenomenon. IQ research has also been criticised for repeatedly confusing correlation with cause, not only in heritability, where it is difficult to untangle nature from nurture, but also when comparing scores in tests with future achievement. Class, culture and gender may also play a role, and the tests are not adjusted for these variables. Work by Howe, Ericsson and others explains extraordinary achievement as the result of early specialisation and a focused investment in over 10,000 hours of practice, not measurable IQ.

The focus on IQ, a search for a single unitary measure of the mind, is now seen by many as narrow and misleading. Most modern theories have moved on to more sophisticated views of the mind as having different but interrelated cognitive abilities. More modular theories and theories of multiple intelligence have come to the fore. Sternberg’s three-part theory (analytic, creative, practical) was followed by Gardner’s eight intelligences in Frames of Mind.

Goleman’s Emotional Intelligence (EQ), reflected in other more academic and well-researched work, also challenged the unitary theory of intelligence, with its emphasis on the ability to harness emotion in self-awareness, thinking, decision making and in dealing with others. It is not that IQ is the antithesis of EQ; they are merely different. However, even Gardner and Goleman have come under criticism for lacking rigour. In general, educational systems in many countries have been criticised for failing to teach this wider set of skills that many now agree are useful in adult life.

Eysenck worked with Cyril Burt at the University of London, the man responsible for the introduction of the standardised 11+ examination in the UK, enshrined in the 1944 Butler Education Act, an examination that, incredibly, still exists in parts of the UK. Burt was subsequently discredited for publishing largely in a journal that he himself edited, and for falsifying not only the data upon which he based his work but also co-workers on the research.

This is just one of many standardised tests that have become common in education, but many believe that tests of this type serve little useful purpose and are unnecessary, even socially divisive. On the other hand, supporters of test regimes point towards the meritocratic and objective nature of tests. Some, however, argue that standard tests have led to a culture of constant summative testing, which has become a destructive force in education, demotivating learners and acting as an end-point and filter rather than a useful mark of success. Narrow academic assessment has become almost an obsession in some countries, fuelled by international pressure from PISA.

Interestingly, when measuring IQ, the Flynn Effect, taken from military records, shows that scores have been increasing at a rate of about 3 points per decade, and there is further evidence that the rate is increasing. This was used by Steven Johnson in his book Everything Bad is Good for You to hypothesise that exposure to new media is responsible, a position with which Flynn himself agrees. This throws open a whole debate and line of research around the benefits of new media in education and learning. Highly complex and interactive technology may be making us smarter. If true, this has huge implications for the use of technology in education and society in general.

Unfortunately, Eysenck and many other psychologists throughout the middle of the 20th century may have focused too much on narrow IQ tests. This has led to some dubious approaches to early assessment, such as the 11+, that have, to a degree, socially engineered the future educational opportunities and lives of young people. IQ theorists like Eysenck tended to focus on logical and mathematical skills, to the detriment of other abilities, leading some to conclude that education has been over-academic. This, they argue, has led to a serious skew in curricula, assessment and the funding of education, to the detriment of vocational and other skills.


Multiple Intelligences… uncanny resemblance to current curriculum subjects…

Howard Gardner’s theory of multiple intelligences opposes the idea of intelligence being a single measurable attribute. His is a direct attack on the practice of psychometric tests and behaviourism, relying more on genetic, instinctual and evolutionary arguments to build a picture of the mind. He also disputes Piaget’s notion of fixed developmental stages, claiming that a child can be at various stages of development across different intelligences.

For Gardner, intelligence is “the capacity to solve problems or to fashion products that are valued in one or more cultural setting” (Gardner & Hatch, 1989). To identify the nature of intelligence he sought evidence from reports of brain damage showing isolated abilities; the existence of idiot savants, prodigies and other exceptional individuals; an identifiable core operation or group of operations; specific developmental histories with definable ‘end-state’ performances; an evolutionary history (at least plausible); evidence from experimental psychology; psychometric findings; and the ability to express such intelligences in a symbolic way. In other words, he took a holistic, not a purely experimental or scientific, approach to evidence.

What popped out of applying these criteria was a list of eight ‘intelligences’. To be fair, this original list has developed over time, as has his thinking on what constitutes intelligence, as the theory was scrutinised. It opened up the meaning of intelligence beyond purely rational and logical abilities, which were long held as the essential measures of intelligence.

Loosely speaking, the first two have been typically valued, some would say over-valued, in education; the next three are often associated, but not exclusively, with the arts; the final three are what Gardner called 'personal intelligences':

1. Linguistic: To learn, use and be sensitive to language(s).

2. Logical-mathematical: Analysis, maths, science and investigative abilities.

3. Musical: Perform, compose and appreciate music, specifically pitch, tone and rhythm.

4. Bodily-kinaesthetic: Co-ordination and use of whole or parts of body.

5. Spatial: Recognise, use and solve spatial problems both large and confined.

6. Interpersonal: Ability to read others’ intentions, motivations, desires and feelings.

7. Intrapersonal: Self-knowledge and ability to understand and use one’s inner knowledge.

8. Naturalist: Ability to draw upon the immediate environment to make judgements.

These intelligences complement each other and work together as blends. Individuals bring multiple subsets of these intelligences to solve problems.

Gardner also wrote a full set of recommendations on the use of multiple intelligence theory in schools in The Unschooled Mind, Intelligence Reframed and The Disciplined Mind, looking at how the theory can be applied in education. As John White observed, one problem with the theory is that it bears an uncanny resemblance to the current curriculum subjects, opening it up to the charge that it reflects what we want to teach, rather than having cognitive certainty. It can look like a simple defence of the classic curriculum.

This has led to a broader, more holistic view of education, less rigid about abstract and academic learning. It demands knowledge of these intelligences among teachers, an aspirational approach to learning, more collaboration between teachers of different disciplines, better and more meaningful curriculum choices and a wider use of the arts.

Many have also criticised the choices as being based on general observations, subject to personal and cultural bias, rather than on universal cognitive abilities based on empirical evidence. There is always the problem of identified ‘intelligences’ such as these not mapping onto the many different forms of cognitive function, sensory, memory and others. In many of these supposed intelligences, multiple and complex cognitive operations are at work.

Like many forms of measurement in education, from learning styles through MBTI to intelligences, the theory can be criticised because it leads to stereotyping and pigeon-holing learners, pushing them towards narrower roads than they would otherwise have been exposed to. It may be their perceived weaknesses that should be addressed, not necessarily their most obvious strengths. Like learning styles, it may do more harm than good.

Gardner himself was shocked and often frustrated by the way multiple intelligences was crudely applied in schools, among “a mish-mash of practices…Left Right brain contrasts….learning styles….NLP, all mixed up with dazzling promiscuity”. Some schools in the US even redesigned the whole curriculum, classrooms and entire schools around the theory. His point was that teachers should be sensitive to these intelligences, not let them prescribe all practice. In his 2003 paper Multiple Intelligences after Twenty Years, for the American Educational Research Association, you could feel his frustration when he writes, “I have come to realize that once one releases an idea – ‘meme’ – into the world, one cannot completely control its behaviour – any more than one can control those products of our genes we call children.” Like many of these theories, the problem was its simplification and seductiveness. It gave us permission to say anything goes. Rather than promoting a focus on a wider, but still rigorous and relevant, curriculum, it was used to confirm the view that there are almost innate ‘talents’ and that young people simply express those through interest. On the other hand, it also provided some defence against those who want to labour away at maths all day at the expense of many other subjects, or get overly obsessed with STEM subjects.

Like many theories, it developed over time, and many teachers who quote and use the theory are unlikely to have fully understood its status and further development by Gardner himself. Few will have understood that it is not supported in the world of science, despite the perception among educators that it arose from that source. Gardner’s first book, Frames of Mind: The Theory of Multiple Intelligences (1983), laid out the first version of the theory, followed 16 years later by a reformulation in Intelligence Reframed (1999), then again in Multiple Intelligences after Twenty Years (2003). Few have followed its development after 1983, the critiques, or Gardner’s subsequent distancing of the theory from brain science.

Lynn Waterhouse laid out the lack of scientific evidence for the theory in Multiple Intelligences, the Mozart Effect, and Emotional Intelligence: A Critical Review in Educational Psychologist. Many areas of learning, such as reason, emotion, action, music and language, are characterised by intersecting, distributed and complex patterns of activity in the brain. Islands of functional specificity are extremely rare. In short, Gardner seems to suffer from conceptual invention and oversimplification; brain science appears not to support the theory. Gardner responded to this absence of neurological evidence for his separate ‘intelligence’ components by redefining his intelligences as “composites of fine-grained neurological sub-processes but not those sub-processes themselves” (Gardner and Moran, 2006). Pickering and Howard-Jones found that teachers associate multiple intelligences with neuroscience, but as Howard-Jones states, “In terms of the science, however, it seems an unhelpful simplification as no clearly defined set of capabilities arises from either the biological or psychological research”. However, Project SUMIT (Schools Using Multiple Intelligences Theory) does claim to have identified real progress across the board in schools that have been sensitive to Gardner’s theories. The problem is that Gardner claims the science has yet to come, while teachers assume it is already there and that the theory arose from the science.

The appeal of Gardner’s Multiple Intelligences is obvious. It can take on the mantle of science, even neuroscience, and claim to have reinforced the view, not that specific knowledge and skills matter, but that all knowledge and skills matter. It plays to the socially constructivist idea that anything goes, in a sea of constructions. Critics are right in holding his feet to the fire of experimental rigour and science, to show that these are indeed identifiable ‘intelligences’ and not just his, or the current educational system’s, curricular preferences. The theory also seems to support the popular movement towards separate, so-called 21st century skills, as a generic set of skills that can be taught beyond knowledge. In other words, it chimes with other popular, and possibly erroneous, myths in learning. On the other hand, while the theory may be rather speculative, his identified intelligences represent real dispositions, abilities, talents and potential, which many schools could be said to downgrade or even ignore.

So far it has been one step forward: getting away from the idea of a single measure of intelligence, as a core entity in the mind, towards a more general theory of multiple entities and measures of intelligence. The problem is that this step wasn’t solid enough to remain stable; it failed to be supported by solid evidence. But we have a glimpse here of the dangers of the word ‘intelligence’, its tendency to invite forms of essentialism. Like the allure of gold, it attracts ‘miners of the mind’ looking for this singular intelligence or multiple set of essential intelligences. It turns out that what is mined is Fool’s Gold. It may look like gold but, on examination, it is rigid and non-malleable.

The ‘intelligence’ movement then took a surprising turn, as it swung into affective or emotional territory. IQ ignored this; Multiple Intelligences tried to widen out to include interpersonal skills, but the emotional side was still outside its scope. So along came another form of intelligence: ‘emotional intelligence’.


Emotional Intelligence – is it even a 'thing'?

Michael Beldoch wrote papers and a book around emotional intelligence in the 1960s and is credited with coming up with the term. But it was Daniel Goleman’s Emotional Intelligence (1995) that launched another education and training tsunami. Suddenly, a newly discovered set of skills, classed as an ‘intelligence’ could be used to deliver yet another batch of courses.

Emotional Intelligence (EQ) is seen by Goleman as a set of competences that allow you to identify, assess and control the emotions of yourself and others.

He identified five components of Emotional Intelligence:

Self-awareness: Know your own emotions and be aware of their impact on others

Self-regulation: Manage your own negative and disruptive emotions

Social skill: Manage emotions of other people

Empathy: Understand and take into account other people’s emotions

Motivation: Motivate yourself

For Goleman, these emotional competencies can be learned. They are not entirely innate, but learned capabilities that must be worked on and can be developed to achieve outstanding performance. 

We now have some good research on the subject, which shows that the basic concept is flawed and that having EI is less of an advantage than you might think. Joseph et al. (2015) published a meta-analysis of 15 carefully selected studies, easily the best summary of the evidence so far. What they found was a weak correlation (0.29) with job performance. Note that 0.4 is often taken as a reasonable benchmark for a strong correlation. This means that EI has a predictive power on performance of only 8.4%. Put another way, if you are spending a lot of money and training effort on this, it is largely wasted. The clever thing about the Joseph paper was its careful focus on actual job performance, as opposed to academic tests and assessments.
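The arithmetic behind the 8.4% figure is simply the squared correlation (the coefficient of determination), which gives the share of variance in job performance that EI predicts:

```python
r = 0.29                     # meta-analytic correlation with job performance
variance_explained = r ** 2  # coefficient of determination, r-squared
print(f"{variance_explained:.1%}")  # prints "8.4%"
```

For comparison, the 0.4 benchmark mentioned above would explain 16% of the variance, still leaving the great majority of performance unexplained.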

What became obvious as they looked at the training and tools, was that there was a bait and switch going on. EI was not a thing-in-itself but an amalgam of other things, especially personality measures. Indeed, when they unpacked six of the EI tests, they found that many of the measures were actually personality measures, such as conscientiousness, industriousness and self-control. These had been literally lifted from other personality tests. So, they did a clever thing and ran the analysis again, this time with controls for established personality measures. This is where things got really interesting. The correlation between EI and job performance dropped to a shocking -0.2.

Like many fads in HR, such as learning styles, an intuitive error lies at the heart of this one. It just seems intuitively true that people with emotional sensibility should be better performers, but a moment’s thought will make you realize that many forms of performance rely on many other cognitive traits and competences. In our therapeutic age, it is all too easy to attribute positive qualities to the word ‘emotional’ without really examining what that means in practice. HR is a people profession, full of people who genuinely care, but when they bring their biases to bear on performance, as with many other fads, such as learning styles, Maslow, Myers-Briggs, NLP and mindfulness, emotion tends to trump reason. When examined in detail, EI, like these other fads, falls apart. Eysenck, the doyen of intelligence theorists, dismissed Goleman’s definition of ‘intelligence’ and thought his claims were unsubstantiated.

EI tests

Goleman’s claims that general EI is twice as useful as either technical knowledge or general personality traits have been dismissed as nonsense, as has his claim that it accounts for 67% of superior leadership performance. This undermines a lot of leadership training, as EI is often a major plank in its theoretical frameworks and courses. Føllesdal looked at the test results (MSCEIT) of 111 business leaders and compared them with the views those leaders’ employees held of them. Guess what: there was no correlation.

Tests often lie at the heart of these fads, as they can be sold, practitioners trained and the whole thing turned into pyramid selling. Practitioners, in this case sometimes called ‘emotional experts’, administer and assess EI tests. However, the main test, the MSCEIT, is problematic. First, the company administering the tests (Multi-Health Systems) was found by Føllesdal to be peddling a pig with lipstick. To be precise, 19 of the 141 questions were being scored wrongly. They quietly dropped the scoring on these questions while keeping them in the test. Reputations had to be maintained. More fundamentally, the test is weak, as there are no correct answers, so it is not anchored in any objective standard. As a consensus-scored test, it is foggy.

Way forward?

Emotional Intelligence has all the hallmarks of other HR fads: the inevitable popular book, paucity of research, exaggerated claims, misleading language, the test, and the ignoring of research showing it is largely a waste of training time. This is not to say that ‘emotion’ has no role in competences or learning. Indeed, from Hume to Haidt, we have seen that reason is often the slave of the passions. Goleman’s mistake was to over-rationalise emotion. In particular, his use of the word ‘intelligence’ was misleading.


Education became fixated on the search for, and definition of, a single measure of intelligence – IQ. The main protagonist was Eysenck, and the obsession led to damaging policies such as the 11+ in the UK, which is still used for selection into schools at age 11 and was promoted on the back of fraudulent research by Cyril Burt. Out of it also came the language of the ‘gifted and talented’, still popular in education, despite the fact that the measures are flawed.

Many have criticised IQ research as narrow in definition. This is a key point. Cognitive science has succeeded in unpacking many of these complexities without reducing them to singular measures or short lists. The focus on IQ, the search for a single, unitary measure of the mind, or even a small set of such measures, is now seen by many as narrow and misleading. Gardner tried to widen the definition with Multiple Intelligences (1983), but this is weak science and lacks any real rigour.

Goleman wanted to add another, Emotional Intelligence, but this turned out to be little more than a marketing slogan. The search for ‘intelligence’ still suffers from a form of academic essentialism. Most modern theories have moved on to more sophisticated views of the mind, with different but interrelated cognitive abilities.

Goleman’s confusion of ‘intelligence’ or ‘competences’ with personality traits is telling. Eysenck also contributed (with his wife) to the area of personality traits, with the idea that personality can be defined in terms of psychoticism, extraversion and neuroticism. This provided a basis for the now widely respected OCEAN model proposed by Costa & McCrae:

Openness to experience

Conscientiousness

Extraversion

Agreeableness

Neuroticism
Eysenck rejected the Costa & McCrae model, but in the end it has become the more persuasive theory. This well-researched area of ‘personality traits’ has largely been ignored in learning, in favour of the more faddish ‘learning styles’ theory. However, it has been argued that this type of differentiation is far more useful when dealing with different types of learners than the essentialism of Eysenck, Gardner and Goleman.

Why we need to drop the word ‘intelligence’

More recently, the rise of AI has produced a lot of debate on what constitutes ‘intelligence’. I discuss this in my book ‘AI for Learning’. Turing’s seminal paper, Computing Machinery and Intelligence (1950), with its replies to nine objections, set the standard for debate on whether machines can think and be intelligent. Yet Turing never actually defines ‘intelligence’ in the paper. It was John McCarthy who coined the term ‘artificial intelligence’ for the famous Dartmouth Conference in 1956, which is seen as the starting point of the modern AI movement.

We would do well to abandon the word ‘intelligence’, as it carries with it so much bad theory and practice. Indeed, AI has, in my view, already transcended the term, as it has had success across a much wider set of competences (previously ‘intelligences’), such as perception, translation, search, natural language processing, speech, sentiment analysis, memory, retrieval and many other domains. All of this was achieved without consciousness. It is all competence without comprehension.

Machine learning has led to successes in all sorts of domains beyond the traditional field of IQ and human ‘intelligences’. In many ways it is showing us the way, going back to a wider set of competences that includes both ‘knowing that’ (cognitive) and ‘knowing how’ (robotics). Turing saw this as a real possibility, and it frees us from the fixed notion of intelligence that got so locked into human genetics and capabilities. We can therefore drop the term ‘intelligence(s)’ and avoid the anthropomorphism of transferring human ideas about intelligence on to non-comprehending, but competent, performance. ‘Intelligence’ embodies too many assumptions around conscious comprehension in a field where man is NOT the measure of all things.

Beyond brains

The brain is the organ that named itself and created all that we are discussing, but it is an odd thing. It takes over 20 years of education before it is even remotely useful to an employer or society. To attribute ‘intelligence’ to the organ is to forget that, compared to machines, it can’t pay attention for long, forgets most of what you teach it, is sexist, racist, full of cognitive biases, sleeps 8 hours a day, can’t network, can’t upload, can’t download and, here’s the fatal objection, it dies. This should not be the gold standard for intelligence, as it is an idiosyncratic organ that evolved for circumstances other than those we find ourselves in.

Let’s take this idea further. Koch (2014) claimed that ALL networks are, to some degree, ‘intelligent’. The boundary for consciousness and intelligence has shifted over time to include animals, indeed anything with a network of neurons, and he argues that intelligence is a property of any communicating network. As we have evidence that intelligence is related to networked activity, whether in brains or computers, could intelligence be a function of this networking, so that all networked entities are, to some degree, intelligent? Clark and Chalmers (1998), in The Extended Mind, laid out the philosophical basis for this approach. This opens up the field for definitions of ‘intelligence’ that are not benchmarked against human capabilities or speciesism. If we consider the idea of competences residing in other forms of chemistry and substrates, and see algorithms and their productive capabilities as independent of the base materials in which they arise, then we can cut the ties with the word ‘intelligence’ and focus on capabilities or competences.

Few would dispute that AI has progressed faster than expected, with significant advances in machine learning, deep learning and reinforcement learning. In some cases the practical applications clearly transcend human capabilities and competences in all sorts of fields: calculation, image recognition, object detection and the many fruits of natural language processing, such as translation, text to speech and speech to text. We do not need to see ‘intelligence’ as the sun at the centre of this solar system. The Copernican move is to remove the term, replace it with competences and look to problems that can be solved without comprehension. The means to ends are always means; it is the ends that matter.

What is wonderful here is the opening up of philosophical issues around the idea of ‘intelligence(s)’. We are far from the existential risk to our species that many foresee, but there are many more near-term issues to be considered. Ditching old psychological relics is one. Artificial smartness is with us; it need not be called ‘intelligence’.


Beldoch, M. & Davitz, J.R. (1964). The Communication of Emotional Meaning. New York: McGraw-Hill.

Bloom, B.S. (1956). Taxonomy of Educational Objectives: The Cognitive Domain. New York: David McKay.

Clark, A. & Chalmers, D. (1998). The Extended Mind. Analysis, 58(1), 7–19.

Dennett, D. (1991). Consciousness Explained. Boston: Little, Brown.

Dreyfus, H. & Dreyfus, S. (1997). Why Computers May Never Think Like People. Knowledge Management Tools, 31–50.

Ebbinghaus, H. (1908). Psychology: An Elementary Textbook. New York: Arno Press.

Eysenck, H.J. (1967). The Biological Basis of Personality. Springfield, IL: Charles C. Thomas.

Eysenck, H.J. (1971). The IQ Argument: Race, Intelligence, and Education. New York: Library Press.

Eysenck, H.J. (1985). Decline and Fall of the Freudian Empire.

Eysenck, H.J. & Eysenck, S.B.G. (1969). Personality Structure and Measurement. London: Routledge.

Frey, C.B. & Osborne, M.A. (2013). The Future of Employment. Oxford: Oxford Martin School.

Gardner, H. (1983). Frames of Mind: The Theory of Multiple Intelligences. New York: Basic Books.

Goleman, D. (1995). Emotional Intelligence. New York: Bantam Books.

Gould, S.J. (1981). The Mismeasure of Man. New York: Norton.

Harari, Y.N. (2016). Homo Deus: A Brief History of Tomorrow. London: Harvill Secker.

Haugeland, J. (1997). Mind Design II: Philosophy, Psychology, Artificial Intelligence. Cambridge, MA: MIT Press.

Howe, M.J.A. (1999). Genius Explained. Cambridge: Cambridge University Press.

Johnson, S. (2005). Everything Bad Is Good for You. London: Allen Lane.

Kahneman, D. (2011). Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.

Koch, C. (2014). Is Consciousness Universal? Scientific American Mind, 25(1).

McCrae, R.R. & Costa, P.T. (2003). Personality in Adulthood: A Five-Factor Theory Perspective. New York: Guilford Press.

Searle, J. (1980). Minds, Brains and Programs. The Behavioral and Brain Sciences, 3, 417–424.

Susskind, R. & Susskind, D. (2015). The Future of the Professions. Oxford: Oxford University Press.

Turing, A.M. (1950). Computing Machinery and Intelligence. Mind, LIX(236), 433–460.

Friday, September 11, 2020

US Gov Report on Online Learning - a must read

Evaluation of Evidence-Based Practices in Online Learning: A Meta-Analysis and Review of Online Learning Studies

Fascinating report from the US Department of Education. First up, top-quality advisors, people like Richard Clark and Dexter Fletcher, who know research methodologies. Secondly, scope, covering 1996 to 2008. Thirdly, rigour, clearly identifying measurable effects, requiring random assignment and controls, and ignoring teacher perceptions.

Interestingly, they lambasted educational research for its lack of rigour, but after filtering out the good stuff, here are the results:

Blended best 

"Instruction combining online and face-to-face elements had a larger advantage relative to purely face-to-face instruction than did purely online instruction."

Online better than face-to-face 

“The meta-analysis found that, on average, students in online learning conditions performed better than those receiving traditional face-to-face instruction.”

Online and on-task 

“Studies in which learners in the online condition spent more time on task than students in the face-to-face condition found a greater benefit for online learning.”

Online is all good 

“Most of the variations in the way in which different studies implemented online learning did not affect student learning outcomes significantly.”

Blended no better than online 

“Blended and purely online learning conditions implemented within a single study generally result in similar student learning outcomes.”

Let learners learn 

“Online learning can be enhanced by giving learners control of their interactions with media and prompting learner reflection.”

Online good for everyone 

“The effectiveness of online learning approaches appears quite broad across different content and learner types.”


Groups not advised

“Providing guidance for learning for groups of students appears less successful than does using such mechanisms with individual learners.”

An interesting little observation, tucked away in the conclusions, is that “one should note that online learning is much more conducive to the expansion of learning time than is face-to-face”. In other words, it is better to get learners to continue learning after the event. Published 2008 – 12 years later we made all the same mistakes. Education is a slow learner.


Thursday, September 10, 2020

Learning Experience Designer... Who are they, what do they do?

Job titles 

Job titles in the world of online learning have been rather fluid over time. This is to be expected in a new field, where the technology moves at a fair clip. Technology is always ahead of sociology, so we find ourselves always in catch-up mode, or as some say, perpetual-beta. 

What do you call yourself?

Interactive Designer
Instructional Designer
eLearning Designer
Learning Designer
eLearning Instructional Designer
Online Learning Designer
Learning Engineer
Blended Learning Designer
Curriculum Designer
UX designer
UX & UI Designer
Learning Experience Designer 

To be fair, the roles vary in context, scope and responsibilities. In an organisation with just a couple of people delivering the whole online learning service, one person may have to handle everything: designing, developing and delivering entire projects. This can involve client and stakeholder liaison, project management, solution design, writing, graphics and development. At the other end of the spectrum is the person who sits, say, in a large online learning development company. When I ran such a company, LXDs sat within a large team and could focus on what the learner saw, heard and did, as they had a highly differentiated team of writers, graphic designers, animators, video producers, audio producers, developers and testers. Between these two extremes of DEY (Do Everything Yourself) and DIY (Do It Yourself) lies everything else.

The titles have also changed as the vocabulary has changed over time. The term e-learning has given way to online learning. Some object even to the use of the words e-learning or online, referring just to Learning Design. UI and UX have come across from the general world of web design. The word ‘Engineer’ has emerged from the learning engineering movement. It’s all got kind of messy.

The technology has also changed. Over time, tools have been developed that are usually template-driven. This is a double-edged sword, as the tool frees the designer from having to build from scratch but also locks them into fixed structures. Some argue that this fossilisation has led to too much dependence on multimedia production and not enough on meaningful and effortful learner participation. There is a sense that everything is stuck in multiple choice, drag and drop and so on. More recently the LXP and LRS have emerged, giving rise to the obviously sympathetic term LXD. The job may have to change again, as more contemporary techniques, such as AI and data, are not possible in these environments.

These linguistic spats are always on the go, but there is a fundamental force at work here. Meaning is use. It is pointless trying to change the language, as it evolves through actual use by actual people over time. This is why it is so varied, drifts and changes. So I tend to be relaxed about job titles; they will be what they will be. For the rest of this book, I’ll use LXD, short for Learning Experience Designer and Learning Experience Design.
Whatever the job title, I tip my hat to anyone who does this work. It is a complex amalgam of art and science, head and heart. A curious mixture of organisational demands, learning demands, learning psychology, media mix, media production and technology. You must try to satisfy everyone, as everyone has a view on learning. They’ve been to school after all! Well, I’ve been on many aeroplanes but I wouldn’t pretend to have the skills to design or pilot the plane. 

Project management 

There is an illusion that LXD is purely a design activity, but it is a much more complex role than many imagine. All design happens in the context of an organisation and a project. Sure, the focus must always be on the user or learner, but you will also have other internal and external stakeholders, as well as constraints such as budgets, schedules, resources, technology and organisational culture.

There’s always a lot more of this project management malarkey than you think. Any LXD project has to juggle people, costs, time, quality, resources and technology. With all of these balls in the air, one or two will fall during the project. The trick is to know that they will almost certainly fall, so expect it, stay calm and manage the situation. You may not be the project manager but you will, to some degree, be managing your portion of the project. I have always preferred the job title ‘Producer’ to ‘Project Manager’, as the role demands fiscal, creative and stakeholder management, similar to that of a producer in the film industry.

People

You may think your sole focus is on users, but there will be other people to think about: shareholders, board of directors, executive management, suppliers, standards bodies, unions, subject matter experts, project managers, graphic artists, audio engineers, video teams, developers and testers. People run projects, not designs. So you need to know who runs the project externally and internally, and who signs off the various stages of the project. You need to know how to communicate with the relevant people in an appropriate way, knowing who to copy in. A lot of friction is caused by inappropriate communication. Communications with stakeholders have to be managed. You can’t speak to a client in the language you’d use online with your friends on Instagram. You may be asked to formally present to a client, which needs careful preparation. You may even be asked to facilitate meetings with stakeholders. You will almost certainly have to troubleshoot and solve problems caused by the natural friction between stakeholders. This is perfectly normal. In this business, the learning business, everyone thinks they can do other people’s jobs.


Iterations are normal in LXD. The aim is always to minimise them. Some are necessary, such as further input from subject matter experts and clients, and then there’s useful input from users. Some, however, will cause friction. These tend to be small, avoidable errors, such as spelling, punctuation and grammar. For some reason, people reviewing learning experiences are particularly sensitive on this issue. They will happily make mistakes in print, but god forbid that you make a spelling mistake on the screen. A particular source of such errors is graphics, where someone whose background is not in writing types in text. I used to demand that graphic artists never typed in text, that they only ever cut and paste. It may seem harsh, but it saved a lot of potential aggro. Similarly with glitches in graphics, audio and bugs. Try to eliminate as many obviously avoidable errors as possible. A good rule is: get it right first time. You should feel responsible for quality control and not see others, like the project manager, QA folk or client, as picking up the slack.


Commercial awareness matters. There will be a budget that determines the envelope in which you design. The budget allocates resources in terms of people, and just as you depend on people supplying you with the necessary information and resources to do your job, so others will depend on you. It is often useful to have a sense of the financial context of a project. The project manager and client will appreciate that you understand the pressures they are under on costs and margins. Coming back to the role of an LXD, cost restraints are usually expressed as time restraints. So you will have to manage your own time and outputs, and will need some project management skills around time, whether for yourself or others, especially around estimating the time taken for tasks and being firm when extra tasks are lobbed into the project with no extra time given. That’s why contingency time is important.

In my next post on LXD I'll be looking at Emotion and Motivation as drivers behind Learning Experience Design...

Tuesday, September 01, 2020

AI for Learning. So what is the book about?

This is, to my knowledge, the first general book about how AI can be used for learning, and by that I mean the whole gamut of education and training. It is not a technical book on AI. It is designed for the many people who teach, lecture, instruct or train, for those involved in the administration, delivery, even policy around online learning, and even the merely curious. It is essentially a practical book about using AI for learning, with real examples of real teaching and learning in real organizations with real learners.

AI changes everything. It changes how we work, shop, travel, entertain ourselves, socialize, deal with finance and healthcare. When online, AI mediates almost everything – Google, Google Scholar, YouTube, Facebook, Twitter, Instagram, TikTok, Amazon, Netflix. It would be bizarre to imagine that AI will have no role to play in learning – it already has. 

Both informally and formally, AI is now embedded in many of the tools real learners use for online learning – we search for knowledge using AI (Google, Google Scholar), we search for practical knowledge using AI (YouTube), Duolingo for languages, and CPD is becoming common on social media, almost all mediated by AI. It is everywhere, just largely invisible. This book is partly about the role of AI in informal learning but it is largely about its existing and potential role in formal learning – in schools, Universities and the workplace. AI changes the world, so it changes why we learn, what we learn and how we learn.

It looks at how smart AI can be, and is, used for both teaching and learning. For teachers it can reduce workload and complement what they do, helping them teach more effectively. For learners it can accelerate learning right across the learning journey: engagement, support, feedback, creation of content, curation, adaptation, personalization and assessment. AI provides smart solutions to make people smarter.


So how did we get here? Well, AI didn’t spring from nowhere. It has a 2500-year pedigree. What matters is where we are today – somewhere quite remarkable. AI is ‘the’ technology of the age. The most valuable tech companies in the world have AI as their core, strategic technology. As it lies behind much of what we see online, it literally supports the global web, driving use through personalization. Surprisingly, AI does this as an IDIOT SAVANT, profoundly stupid compared to humans, nowhere near the capabilities of a real teacher, but profoundly smart on specific tasks. Curiously, it can provide wonderfully effective techniques, such as adaptive feedback, on a scale impossible for humans, but doesn’t ‘know’ anything. It is ‘competence without comprehension’, but competence gets us a long way!

AI and teachers

In the book we first look at AI from the teacher or trainer’s perspective, showing that it is not a replacement for, but a valuable aid to, teaching. Robot teachers are beside the point, a bit like having robot drivers in self-driving cars. The dialectic between AI and teaching suggests a synthesis and increased efficacy in teaching when its benefits are realized. Similarly for learners. AI is not a threat; it is a powerful teaching and learning tool.

AI is the new UI

AI underlies most interfaces online, mediating what you actually see on the screen. More recently it has provided voice interfaces, both text to speech and speech to text. This is important in learning, as most teaching is, in practice, delivered by voice. Then there is the wonderful world of chatbots, the return of the Socratic method, with real success in engagement, support and learning. There are lots of real examples of how these new interfaces, and in particular dialogue, will expand online learning.

AI creates content

A surprising development has been the use of AI to create online content. Tools like WildFire have been creating online content in minutes, not months, with high-retention learning – using AI to semantically interpret answers and get away from traditional MCQs. AI can also enhance video, which suffers from being a transitory medium in terms of memory – like a shooting star leaving a trail of forgetting behind it – turning it into powerful, high-retention learning experiences. New adaptive learning platforms are proving powerful, personalizing learning at scale and delivering entire degrees. AI pushes organisations towards being serious learning organisations by producing and using data to improve performance, not only of the AI systems themselves but also of teachers and learners. Models such as GPT-3 are producing content that is indistinguishable, when tested, from human output. This shows that there is far more to AI than at first meets the AI!
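To make ‘semantically interpret answers’ concrete: WildFire’s actual method is not spelled out here, so the bag-of-words cosine similarity below is purely my own toy sketch of the general idea, accepting open input that is close to, rather than identical with, a model answer:

```python
import math
from collections import Counter

def cosine_sim(a: str, b: str) -> float:
    """Crude bag-of-words cosine similarity between two short answers."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = math.sqrt(sum(c * c for c in va.values())) * math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

# Accept an open answer if it is close enough to the model answer.
model = "memory retrieval strengthens long term retention"
answer = "retrieval strengthens long term memory retention"
print(cosine_sim(answer, model) > 0.8)  # prints True
```

Real systems use far richer semantic models than word overlap, but the accept-by-similarity pattern is the same.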

AI and learning analytics

Learning is not an event, it is a process. Data describes, analyses, predicts and can prescribe that process. The book covers data types, the need for cleaning data, the practical issues around data use in learning, and learning analytics, along with personalized and adaptive learning, showing how AI can educate and train everyone uniquely. Data-driven approaches can also deliver push techniques, such as nudge learning and spaced practice, embodying potent pedagogic practice. New ecosystems of learning, such as Learning eXperience Platforms and Learning Record Stores, move us towards more dynamic forms of teaching and learning. Sentiment analysis, using AI to interpret subjective emotions in learning, is also covered. AI, in this sense, is the rocket, with data as its fuel. We explore how you can move towards a more data-driven approach to learning in the book.
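The spaced-practice nudges mentioned above can be sketched as a trivial scheduler; the expanding intervals and function name here are my own illustrative assumptions, not any particular platform’s API:

```python
from datetime import date, timedelta

def spaced_schedule(start: date, intervals_days=(1, 3, 7, 14, 30)):
    """Return nudge dates at expanding intervals after an initial learning event."""
    return [start + timedelta(days=d) for d in intervals_days]

# A learner completes a module on 1 September 2020; push nudges follow.
for d in spaced_schedule(date(2020, 9, 1)):
    print(d.isoformat())
# 2020-09-02, 2020-09-04, 2020-09-08, 2020-09-15, 2020-10-01
```

In a genuinely data-driven system, the intervals would adapt to each learner’s retrieval performance rather than being fixed.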

AI in assessment

Then there’s assessment, which is being made easier and enhanced by AI. From student identification to the delivery of assessments and forms of assessment, AI promises to free assessment from the costs and restraints of the traditional exam hall. Plagiarism checking is also discussed, as is the semantic analysis of open input in assessment and essay marking.

What next for AI in learning?

Well, there will be a significant shift in the skills needed to use AI in learning, away from the traditional ‘media production’ mode, and these new skills are explained in detail. More seriously, you can’t have a book on AI for learning without tackling ‘ethics’, and so bias, transparency, race, gender and dehumanisation are all examined. The good news is that AI is not as good as many ethicists think it is and not as bad as you fear. On employment, we look at something few have considered: the effect of AI on the employment of learning professionals.

AI: the Final Frontier

Finally, there is a cheeky look at the final frontier. What next? We look at how AI may accelerate learning through non-immersive and immersive, brain-based technology, as well as speculating on how this may all pan out in the future. It is literally mind-blowing.


In these times of pandemic, we have all had to adapt to online learning: teachers, learners and parents. Necessity has become the mother of invention, and this book offers a look at the future, where AI technology will provide the sophistication we need to make online learning smart, responsive and up to the future challenge of a changing world. AI is here, its use is irreversible and its role in learning inevitable. I hope the book answers any questions you may have on AI in learning; more importantly, I hope it inspires you to think about how you may use it in your organization.

Blended baloney

After all that fuss, what did ‘Blended Learning’ do for the world? It had the promise to shake the training world out of its ‘classroom-obsessed’ straitjacket into a fully developed, new paradigm for training. This needed research, evidence-based models and an analytic approach to developing and designing blended learning.

So what happened?

Muddled by metaphor
First, it got muddled by metaphor. Blended learning failed when it got bogged down by banal metaphors. I've heard them all - blended cocktails, meals, even alloys. Within the ‘food metaphor’ mob we got courses, recipes, buffet learning, tapas learning, fast food versus gourmet. My own favourite is ‘kebab learning’ - a series of small bites, repeated in a spaced practice pattern for reinforcement into long-term learning memory, held together with a solid spine of consistent learning content and objectives. Only kidding of course, but that's the problem with metaphoric blended learning. Who's to say that your metaphor is any better than mine? I even had some fool at the Learning Technologies exhibition come up to me with a 'fruit blender' trying to explain the concept in terms of a fruit smoothie!

What happened to analysis?
Blended learning needs careful thought and analysis, consideration of the very many methods of learning delivery, sensitivity to context and culture, and matching to resources and budget. It also needs to include scalability, updatability and several other variables. All this talk of meals and metaphors went on for several years. What it led to were primitive, indigestible (sic) 'classroom and e-learning' mixes. It never got beyond vague 'velcro' models, where bits and bobs were stuck together (now that's a metaphor).

Blended learning became blended TEACHING
Second, blended learning books turned out to be the very opposite of blended learning theory, delivering blended TEACHING instead. Attempts at defining, describing and prescribing blended learning were crude, involving the usual suspects (classroom plus e-learning). It merely regurgitated existing 'teaching' methods, usually around some even vaguer concept like 'learning styles'. Note how vague concepts reinforce each other in training. When it did get theoretical, it went wildly overboard, with the ridiculous ramblings of the Lego Brick brigade (Hodgins, Masie etc), espousing the virtues of reusable learning objects.

Let me put forward my own food metaphor – blended baloney. What do you get when you blend things in a mixer without due care and attention to needs, taste and palate? What we got was baloney (dull, tasteless sausage meat).