Thursday, December 13, 2018

Learning Experience Systems – just more click-through online learning?

I have this image in my lounge. He's skating, a clergyman skating, as we so often do when we think we're learning - just skating over the surface. For all the talk of Learning Experience Systems and ‘engagement’, if all you serve up are flat media experiences, no matter how short or micro, with click-through multiple choice or, worse, drag and drop, you’ll have thin learning. Simply rebadging platforms with the word ‘Experience’ in the middle doesn’t cut it, unless we reflect on what those ‘experiences’ should be. All experience is learning, but some experiences (the effortful ones) are far more effective than others. Plopping the word ‘experience’ into the middle of the old LMS terms is simply to rebadge.
As Mayer showed, this does not mean making things media rich; media rich is not mind rich, and often inhibits learning through unnecessary cognitive load.
Neither does it simply mean delivering flat resources. The same goes for some types of explicit gamification, where Pavlovian rewards become ends in themselves and inhibit learning. Good gamification does, in fact, induce deep thought; collecting coins, leaderboards and other ephemera do not, as the gains are short-lived.
The way to make such systems work is to focus on effortful ‘learning’ experiences, not just media production. We know that what counts is effortful, desirable and deliberate practice.
Engagement
Engagement does not mean learning. I can be wholly engaged, as I often am, in all sorts of activities – walking, having a laugh in the pub, watching a movie, attending a basketball game – but I’m learning little. Engagement so often means that edutainment stuff - all ‘tainment’ and no ‘edu’. The self-perception of engagement is, in fact, often a poor predictor of learning. As Bjork repeatedly says, on the back of decades of research from Roediger, Karpicke, Huelser, Metcalfe and many others, “we have a flawed model of how we learn and remember”.
We tend to think that we learn just by reading, hearing and watching. When, in fact, it is other, effortful, more sophisticated practices that result in far more powerful learning. Engagement, fun, learner surveys and happy sheets have been shown to be poor measures of what we actually learn and very far from being optimal learning strategies.
Ask Traci Sitzmann, who has done the research; see Sitzmann et al. (2008). Her meta-analysis, covering 68,245 trainees across 354 research reports, attempted to answer two questions:
Do satisfied students learn more than dissatisfied students? After controlling for pre-training knowledge, reactions accounted for only 2% of the variance in factual knowledge, 5% of the variance in skill-based knowledge and 0% of the variance in training transfer. The answer is clearly no!
Are self-assessments of knowledge accurate? Self-assessment is only moderately related to learning. Self-assessments capture motivation and satisfaction, not actual knowledge levels.
Her conclusion, based on years of research – and I spoke to her, she is adamant – is that self-assessments should NOT be included in course evaluations and should NOT be used as a substitute for objective learning measures.
Open learning
It’s the effort to ‘call to mind’ that makes learning work. Even when you read, it’s the mind reflecting, making links, calling up related thoughts that makes the experience a learning experience. This is especially true in online learning. The open mind is what makes us learn, and therefore open response is what makes online learning really work.
You start with whatever learning resource, in whatever medium you have: text (pdf, paper, book…), text and graphics (PowerPoint…), audio (podcast) or video. By all means read your text, go through a PowerPoint, listen to the podcast or watch a video. It’s what comes next that matters.
With WildFire, in addition to the creation of online learning in minutes not months, we have developed open input by learners, interpreted semantically by AI. You literally get a question and a blank box into which you can type whatever you want. This is what happens in real life – not selecting items from multiple-choice lists. Note that you are not encouraged to just retype what you read, saw or heard. The point, hence the question, is to think, reflect, retrieve and recall what you think you know.
Here’s an example, a definition of learning…
What is learning?
Learning is a lasting change in a person’s knowledge or behaviour as a result of experiences of some kind.
Next screen….

You are asked to tell us what you think learning is. It’s not easy and people take several attempts. That’s the point. You are, cognitively, digging deep, retrieving what you know and having a go. As long as you get the main points – that it is a lasting change in behaviour or knowledge through experiences – you’re home and dry. As the AI does a semantic analysis, it accepts variations on words, synonyms and different word order. You can’t cut and paste, and when you are shown the definition again, whatever part you got right is highlighted.
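The mechanics of this kind of semantic matching can be sketched in miniature. The code below is not WildFire's actual algorithm – it is a toy illustration, with made-up concept lists, of scoring an open-text answer against key concepts and their synonyms while ignoring word order and filler:

```python
# A toy sketch of scoring an open-input answer semantically.
# The key concepts and synonym sets below are illustrative assumptions,
# not any product's real lexicon.

KEY_CONCEPTS = {
    "lasting": {"lasting", "permanent", "enduring", "long-term"},
    "change": {"change", "shift", "alteration"},
    "knowledge_or_behaviour": {"knowledge", "behaviour", "behavior"},
    "experience": {"experience", "experiences", "practice"},
}

def score_answer(answer):
    """Return (fraction of key concepts matched, list of concepts hit).
    Word order is irrelevant: we only check which concepts appear."""
    words = set(answer.lower().replace(",", " ").replace(".", " ").split())
    hits = [name for name, synonyms in KEY_CONCEPTS.items()
            if words & synonyms]  # set intersection: any synonym present
    return len(hits) / len(KEY_CONCEPTS), hits

score, hits = score_answer("An enduring shift in behaviour caused by experience")
print(score)  # 1.0 - all four concepts matched, each via a synonym
```

A real system would go much further (stemming, embeddings, negation handling), but even this skeleton shows why retyping the definition verbatim is unnecessary: synonyms and reordered phrasing score just as well.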
It’s a refreshing experience in online learning, as it is so easy to click through media and multiple-choice questions thinking you have learnt. Bjork called this the ‘illusion of learning’ and it’s remarkably common. Learners are easily fooled into thinking they have mastered something when they have not.
This fundamental principle in learning, developed in research by Bjork and many others, is why we’ve developed open learning in WildFire.
Conclusion
Engagement is not a bad thing, but it is neither a necessary nor a sufficient condition for learning. LXP theory lacks – well – theory and research. We know a lot about how people learn, and the excessive focus on surface experience may not help. All experience leads to some learning, but that is not the point, as some experiences are better than others, and what those experiences should be is rarely understood by learners. What matters is effortful learning, not ice skating across the surface, having fun but not actually learning much – that is click-through learning.
Bibliography
Alliger et al. (1997). A meta-analysis of the relations among training criteria. Personnel Psychology, 50, 341-357.
Sitzmann, T. and Johnson, S. K. (2012). When is ignorance bliss? The effects of inaccurate self-assessments of knowledge on learning and attrition. Organizational Behavior and Human Decision Processes, 117, 192-207.
Sitzmann, T., Ely, K., Brown, K. G. and Bauer, K. (2010). Self-assessment of knowledge: A cognitive learning or affective measure? Academy of Management Learning and Education, 9, 169-191.
Brown, K. G., Sitzmann, T. and Bauer, K. N. (2010). Self-assessment one more time: With gratitude and an eye toward the future. Academy of Management Learning and Education, 9, 348-352.
Sitzmann, T., Brown, K. G., Casper, W. J., Ely, K. and Zimmerman, R. (2008). A review and meta-analysis of the nomological network of trainee reactions. Journal of Applied Psychology, 93, 280-295.

Wednesday, December 12, 2018

Learning is not a circus and teachers are not clowns - the OEB debate

‘All learning should be fun’ was the motion at the OEB Big Debate. No one is against a bit of fun, but as an imperative for ALL learning it’s an odd, almost ridiculous, claim – and sure enough there were some odd arguments. Elliot Masie, the purveyor of mirth, started with his usual appeal to the audience: “Let me give you another word for fun – HA HA” (that’s two words Elliot, but let’s not quibble)… “turn to your neighbour and say that without one letter”. Some, like my neighbour, were genuinely puzzled. ‘HAH?’ he said. I think it’s ‘AHA’, says I. Geddit? Oh dear. Elliot wants learning to be like Broadway. I saw him a few weeks before showing some eightball dance routine as a method for police training.
To be fair Benjamin Doxtdator was more considered with his arguments about subversion in education and the fact that those who design learning were debating what was good not the learners – they were missing. But this was to miss the point. In deciding what treatments to give patients, one must appeal to research to show what works, not rely on the testimonies of patients.
Research matters
What was fun was to watch anecdote and frankly ‘funless’ arguments put to the sword by research. Patti Shank urged us to read Bjork, to consider the need for effort. Desirable difficulties matter and she killed the opposition with the slow drip of research. I suddenly noticed that the audience was not laughing but attentive, listening, making the effort to understand and reflect, not just react. That’s what most learning (other than kindergarten play) is and should be. Patti Shank talked sense - research matters. Engagement and fun are proxies, and the research shows that effort trumps fun every time. Learners may like ‘fun’, but research shows that learners are often delusional about learning strategies. What matters in the end is mastery – not just the feeling that you have mastered something, but actual mastery.
On Twitter and during the audience questions, there were those who simply misread the motion, forgetting the word ‘all’. Some mistook fun for other concepts, like attention, being engrossed, gripped or immersed in a task. I have read literally thousands of books in my life and rarely chortled while reading them. Athletes learn intensely in their sports and barely register a titter. Learning requires attention, focus and effort, not a good giggle. Only those who think that ‘Happy sheets’ are a true indicator of learning adhere to the nonsense that learning should be all fun. Others made non sequiturs, claiming that those who disagree that all learning should be fun, think that all learning should be dull and boring. Just because I don’t think that all clothes should be pink, doesn’t mean I believe they should all be black! It's not that motivation, some fun and the affective side of learning don't matter, just that it is pointless motivating people to embark on learning experiences if they don't actually learn. This is not a false dichotomy, between fun and learning, it is the recognition that there are optimal learning strategies.
It is this obsession that led to the excesses of gamification, with its battery of Pavlovian techniques, which mostly distract from the effort needed to learn and retain. It’s what led to online learning being click-through – largely the presentation of text, graphics and video, with little in the way of effortful learning apart from multiple-choice options. This is why open-input, effortful learning tools like WildFire result in much higher levels of retention. When designers focus relentlessly on fun they, more often than not, destroy learning. There is perhaps no greater sin than presenting adults with hundreds of screens of cartoons, speech bubbles and endless clicking, in the name of ‘fun’.
A touch of humour certainly helps raise attention but learning is not stand-up comedy. In fact, we famously forget most jokes, as they don’t fit into existing knowledge schemas. Fun can be the occasional cherry on the cake but never the whole cake.
Conclusion
'Fun', funnily enough, is a rather sad word – naive and paltry, it diminishes and demeans learning – and I came away from this debate with a heavy heart. There’s an emptiness at the heart of the learning game: a refusal to accept that we know a lot about learning, that research matters. The purveyors of fun, and those who think it’s all about ‘engagement’, are serving up the sort of nonsense that creates superficial, click-through, online learning. This is the dark, hollow world that lies behind the purveyors of mirth. Learning is not a circus and teachers are not clowns.

Tuesday, December 04, 2018

What one intensively-researched principle in learning is like tossing a grenade into common practice?

Research has given us one principle that is like tossing a grenade into common practice – interleaving. It’s counterintuitive and, if the research is right, basically contradicts almost everything we actually practice in learning.
The breakthrough research was Shea & Morgan (1979), who had students learn either in blocks or through randomised tasks. Randomised learning appeared to result in better long-term retention. The experiment was repeated by Simon & Bjork (2001), but this time they also asked the learners, at the end of the activities, how they thought they would perform on day 2. Most thought that blocked practice would be better for them. They were wrong. Current performance is almost always a poor indicator of later performance.
Interleaving in many contexts
Writing the same letter time after time is not as effective as mixing the letter practice up: HHHHHHIIIIIIIIIIJJJJJJJJJ is not as good as HIJHIJHIJHIJHIJHIJHIJHIJ. This is also true of conceptual and verbal skills. Rohrer & Taylor (2007) showed that maths problems are better interleaved. Although it feels as though blocked is better, interleaving was three times better! The result in this paper was so shocking that the editors of three major journals rejected the paper on first reading. The effect size was so great that it was hard to believe – so hard to believe that few teachers even do it.
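The blocked-versus-interleaved schedules above are easy to make concrete. A minimal sketch (function names are mine, purely illustrative):

```python
# Blocked vs interleaved practice schedules, mirroring the
# HHH...III...JJJ vs HIJHIJ... example above.

import random

def blocked(items, reps):
    """All repetitions of one item before moving on to the next."""
    return [i for item in items for i in [item] * reps]

def interleaved(items, reps, shuffle=False):
    """Cycle through the items so no item is practised in a long run.
    With shuffle=True the order is fully randomised, as in the
    Shea & Morgan (1979) randomised-task condition."""
    schedule = [item for _ in range(reps) for item in items]
    if shuffle:
        random.shuffle(schedule)
    return schedule

print("".join(blocked("HIJ", 4)))      # HHHHIIIIJJJJ
print("".join(interleaved("HIJ", 4)))  # HIJHIJHIJHIJ
```

Both schedules contain exactly the same practice items in the same quantities; only the ordering differs, which is precisely why the retention gap is so counterintuitive.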
Interleaved in unrelated topics
Rohrer, Dedrick & Stershic (2015) took this a stage further, taking unrelated topics in maths to compare blocked with interleaved practice. Interleaving produced better performance in both the short and long term (30 days). William Emeny, a teacher in England, showed that interleaving is actually done by many teachers, but only in the run-up to exams – and that, he showed, was where most of the actual learning was taking place.
Interleaving in inferences
What about learning from examples – learning general skills from exposure to examples, like reading X-rays or inferring a painter’s style from exposure to many paintings by specific painters? Kornell & Bjork (2008) did the painter test: 12 paintings by each of 6 artists, then showed learners 48 new paintings. The results showed that interleaving was twice as effective as blocked training. This has been replicated in the identification of butterflies, birds, objects, voices, statistics and other domains. Once again, learners were asked what sort of instruction they thought was best. They got it wrong. In young children, 3-year-olds, Vlach et al. (2008) showed that learning interleaved with play produced better performance.
So why does interleaving work?
Interleaving works because you are highlighting the ‘differences’ between things, and those relationships matter in your own mind. Blocked practice feels more fluent, while interleaving feels confusing, yet it is interleaving that sharpens comparisons. Another problem is that learners get years and years of blocking in school. They’re actually taught bad habits, and that prevents new, fresh habits from forming or even being tried.
Conclusion
This is a strange thing. Interleaving, as opposed to blocked learning, feels wrong, feels disjointed, almost chaotic, yet it is significantly more effective and efficient as a learning strategy. It flies in the face of your intuitions. Yet how often do we see interleaving in classrooms, homework or online learning? Hardly ever. More worryingly, we’re so obsessed with ‘student’ evaluations and perceptions that we can’t see the wood for the trees. We demand student engagement, not learning, and encourage the idea that learning is easy when it is not. When it comes to teaching, we’re slow learners.

Wednesday, November 28, 2018

Why almost everything we think about online learning may be wrong and what to do about it…

One thing that research in cognitive psychology has gifted to us over the last decade or so is clear evidence that learners are delusional when it comes to judgements about their own learning. The big name in the field is Bjork who, along with many other high-quality researchers, says that learning is “quite misunderstood (by learners)…. we have a flawed model of how we learn and remember”. There’s often a negative correlation between people’s judgements of their learning – what they think they have learnt and how they think they learn best – and what they’ve actually learnt and the way they can actually optimise their learning. In short, our own perceptions of learning are seriously wrong. This is why engagement, fun, learner surveys and happy sheets are such bad measures of what is actually learnt, and the enemy of optimal learning strategies.
Desirable difficulty
Most learning is illusory, as it is too easy. Learning requires desirable difficulty: tasks that are accomplishable but demand real effort if high retention is to take place. This is why so much online learning fails. Simply clicking on faces to see speech bubbles of text, dragging and dropping labels, choosing true or false, even answering multiple-choice questions, rarely constitutes desirable difficulty. This is click-through learning.
The solution is to provide effortful retrieval. This means moving beyond the traditional model of text/graphics punctuated by multiple-choice, towards cognitive effort, namely retrieval through open input. This effortful learning gives significant increases in long-term retention and recall. Online learning needs to adopt these techniques if it is to remain credible.
Retrieval 
Retrieving and recalling what you need to know, Bjork (1975) showed, results in much higher levels of retention. Rather than read, re-read and underline, look away and try to retrieve and recall what you need to know. Rather than click on True or False or an option in a short list (MCQ), look away, think, generate, recall and come up with the answer. The key point here is that research has shown that retrieval is a memory modifier, making your memory more recallable. Counter-intuitively, retrieval is much more powerful than being presented with the information – in other words, more powerful than the original ‘teaching’ event.
Take a learning experience that you have probably been through many, many times – the airline safety demonstration. Try to think through what you have to do in the right order: find the life jacket, put it over your head, then what? Not easy, is it? Ah yes, inflate it through the blow tube… then there’s the whistle. No. Many choose the ‘inflate’ option, but to inflate it inside the aircraft is a BIG no-no; in fact, you pull a toggle to inflate it. Airlines should set up a spot in the airport where you actually sit down and have to DO the whole thing. Next time you sit there, watch, then afterwards close your eyes and retrieve the process, step by step – that also works.
Roediger and Karpicke (2006) compared studying with retrieval testing (without feedback). One week later, the retrieval-tested group did much better. They also asked learners how much they were likely to remember in one week’s time for each method – oddly, the majority got it completely wrong.
Making errors is also a critical component of successful learning. According to Kornell, Hayes and Bjork (2009), generating the wrong thing, then getting it right, leads to stronger learning. The reason is that you are activating the brain’s semantic network. Retrieval testing does better than reading or watching, as it potentiates recall.
So are unsuccessful tests better than presentations? The work of Kornell (2009) shows that even unsuccessful testing is better. Retrieval testing gives you better internal feedback and works even when you get few or no correct answers. Testing, even before you have access to the material, also helps learning as a learning experience. Once again, almost bizarrely, Huelser and Metcalfe (2012) asked learners what worked best and they were largely wrong.
From Gates (1917), who compared reading and re-reading with retrieval, to Spitzer (1939), who halted forgetting over 2 months with retrieval in 3,000 learners, to Roediger (2011), who got a full grade increase with retrieval techniques, and McDaniel (2011), who increased attainment in science, the evidence is clear. For a summary, and detail on the research, an excellent talk by Bjork covers the ground well.
Online learning
In online learning the mechanics of this have also been researched. Duchastel and Nungester (1982) showed that although MCQs help you answer MCQs, they are poor for actual retention and recall. Kang (2007) showed that retrieval is superior to MCQs. At the really practical level, Jacoby (1978) showed that typing in retrieved learning was superior, as did McDaniel (1986), and Hirshman and Bjork (1988) showed that even typing in some missing letters sufficed. Richland (2005) did real-world experiments that also proved efficacy.
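The ‘missing letters’ finding is simple to sketch. The code below is my own toy illustration of the idea, not any product's implementation: blank out some letters from a target word so the learner has to generate, not just recognise.

```python
# Turn a target word into a partial-retrieval prompt by blanking letters,
# in the spirit of Hirshman and Bjork (1988): even generating a few
# missing letters is more effortful than simply re-reading the word.

import random

def blank_letters(word, keep=0.5, rng=None):
    """Replace roughly (1 - keep) of the letters with underscores,
    always keeping the first letter as a cue. A seeded generator is
    used so prompts are reproducible."""
    rng = rng or random.Random(0)
    out = [word[0]]  # first letter always shown
    for ch in word[1:]:
        out.append(ch if rng.random() < keep else "_")
    return "".join(out)

# Prints a partially blanked cue for the learner to complete.
print(blank_letters("retrieval"))
```

Tuning `keep` moves the exercise along the difficulty scale, from near-copying towards full open recall: the desirable-difficulty dial in one parameter.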
We have the tools in Natural Language Processing and AI to do this, so technology has at last caught up with pedagogy. Let's not plough the same furrow we've ploughed for the last 35 years. Time to move on.
Conclusion
I wrote, in a rather tongue-in-cheek manner (25 ways in which your e-learning sucks), about why I think most current e-learning is click-through and therefore low-retention eye candy. This research shows that our methods of online learning are sub-optimal. The problem we face is that immediate success often means long-term failure. More focus should be given to retrieval, NOT presentation or clicking on items and multiple-choice. We need to be presented with desirable difficulties, through partial or complete open input. This is exactly what we’ve spent the last two years building with WildFire.

Monday, November 26, 2018

Do we really need all of this ‘mentoring’ malarkey?

I’ve never had a mentor. I don’t want a mentor. I don’t much like mentoring. I know this is swimming against the tide of liberal orthodoxy, but I value liberal values more than I value fads, groupthink or orthodoxy. I don't mind people doing it, but there are many reasons why I’m suspicious of mentoring.
1. Fictional constructs
Mentor was a character in Homer’s The Odyssey and it is often assumed that his role was that of an experienced guide for Odysseus’ son and family. This is wrong. Mentor was simply an old acquaintance, ill-qualified to play a protective role to the family, and worse, turned out to be a patsy for a hidden force, the goddess Athena. A similar tale has unfolded in recent times, with mentoring being revived on the back of late 19th century psychoanalytic theory, where the original theory has been abandoned but the practice upon which it is based survives.
There is another, later work of fiction that resurrected the classical model as a source for the word ‘mentor’ in education: Fenelon’s Les Adventures de Telemaque (1699). This is a tale about limiting the excesses of a king, and it reinforced the presence of the word ‘mentor’ in French, then English. Yet Mentor, in this ponderous novel, is prone to didactic speeches about how a king should rule (aided by the aristocracy), hardly the egalitarian text one would expect to spark a revolution in education. Interestingly, it pops up again as one of two books given to Emile in Rousseau’s novel of the same name.
2. Psychoanalytic veneer
Mentoring came out of the psychoanalytic movement in education, through Freud and Rogers. Nothing survives of Freud’s theories on the mind, education, dreams, humour or anything else for that matter. But Rogers is different. His legacy is more pernicious, as his work has resulted in institutional practice that has hung around many decades after the core theories have been abandoned. We need to learn how to abandon practice when the theories are defunct.
3. Mentoring is a one person trap
As Homer actually showed, one person is not enough. To limit your path in work or life to one person is to ignore probability. Why choose one person (often that person is chosen for you) when there are lots of good people out there? It stands to reason that a range of advice on a range of diverse topics (surely work and life are diverse) needs a range of expertise. Spread your network, speak to a range and variety of people. Don’t get caught in one person’s spider’s web. Mentoring in this sense is a singular trap.
4.  People, social media, books etc. are better
You don’t need a single person, you need advice and expertise. That is also to be found in a range of resources. Sure, a range of people can do the job, and the best write books. Books are cheap, so buy some of the best and get reading. You can do it where and when you want, and they’re written by the world’s best, not just the person who happens to be chosen in your organisation or a local life coach. And if you yearn for that human face, try video – TED and YouTube – they’re free! I’d take a portion of the training budget and allow people to buy from a wide reading list, rather than institute expensive mentoring programmes. Then there's social media, a rich source of advice and guidance provided daily. This makes people more self-reliant, rather than infantilised. Twitter also has strong benefits in CPD.
5. Absence of proof
Little (1990) warned us, on mentoring, that, “relative to the amount of pragmatic activity, the volume of empirical enquiry is small [and]... that rhetoric and action have outpaced both conceptual development and empirical warrant.” This, I fear, is not unusual in the learning world. Where such research is conducted, the results are disappointing. Mentors are often seen as important learning resources in teacher education and in HE teaching development. Empirical research shows, however, that the potential is rarely realised; see Edwards and Protheroe (2003) and Boice (1992). The results often reveal low-level ‘training’ that simply instructs novices in the ‘correct’ way to teach (Handal and Lauvas, 1988; Hart-Landsberg et al., 1992). Indeed, much mentoring has been found to be rather shallow and ineffective (Edwards, 1998).
6. Fossilised practice
Practice gets amplified and proliferates through second-rate train the trainer and teacher training courses, pushing orthodoxies long after their sell-by, even retirement, date. Mentoring has sometimes become a lazy option and alternative for hard work, effort, real learning and reflection. By all means strive to acquire knowledge, skills and competences, but don’t imagine that any of this will come through mentoring in any efficient manner.
7. Over-formalised
Mentoring is what parents, grandparents and older members of the community used to do, and well. I’m all for the passing down of learning and wisdom, but when it gets formalized into specific people, with supposedly strong ‘mentoring’ skills, I have my doubts. By all means encourage people to share, especially those with experience but don’t kill the human side of this with an over-formalised process.
Conclusion: get a life, not a coach
I know that many of you will feel uncomfortable with these arguments, but work and life are not playthings. It is your life and career, so don’t for one minute imagine that the HR department has the solutions you need. Human resources is there to protect organisations from their employees – rarely either human or resourceful. Stay away from this stuff if you really want to remain an independent thinker.
Bibliography
English translation of Les Adventures de Telemaque: https://archive.org/stream/adventuresoftele00fene#page/41/mode/thumb
Boice (1992). Lessons learned about mentoring. https://onlinelibrary.wiley.com/doi/abs/10.1002/tl.37219925007
Edwards and Protheroe (2003). Learning to See in Classrooms: What are student teachers learning about teaching and learning while learning to teach in schools? British Educational Research Journal.
Handal and Lauvas (1988). Promoting Reflective Teaching.
Little, J.W. (1990). ‘The Mentor Phenomenon and the Social Organisation of Teaching’, in: Review of Research in Education. Washington D.C.: American Educational Research Association.
Warhurst, R. (2003). Learning to lecture. Paper presented at the British Educational Research Association Annual Conference, Heriot-Watt University, Edinburgh. http://www.leeds.ac.uk/educol/documents/00003330.htm

Sunday, November 18, 2018

Why is learning so hard? Hyperbolic discounting – what is it and what to do about it

Julie Dirksen knows a thing or two about learning. Well versed in the research, she is especially good at bringing behavioural psychology to the foreground. Understand learners and you understand why it is so difficult to get them to learn. So it was a pleasure to see her speak, and to speak with her afterwards.
Her starting point is the metaphor of the elephant and its rider: the rider is the conscious, verbal, thinking brain; the elephant the automatic, emotional, visceral brain. Academically this is Kahneman’s two systems, fast and slow, explained in his book Thinking, Fast and Slow. (An alternative is the very readable story in The Undoing Project by Michael Lewis.)

Hyperbolic discounting
One cognitive bias that hits learning hard is hyperbolic discounting, a well-researched feature of behavioural economics. Given two similar rewards, humans prefer the one that arrives sooner rather than later. We are therefore said to discount the value of the later reward, and this discount increases with the length of the delay.
If the consequences of our learning are distant, we are likely to take it less seriously. Smokers don’t stop smoking just because you tell them it’s dangerous – and there’s no greater danger than death! In practice, they see the consequence as being some time off, so warnings of dying several decades down the line have little effect. So it is with learning. Rewards feel distant, which is why students tend to leave study and cram just prior to exams, or write essays on the last night. They are not committed when it is likely that they won’t use their newly acquired knowledge and skills for some time, if at all. No one watches printer problem videos until they have a printer problem. So how do we get the learner to be a rider and not be stopped by the elephant?
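The effect has a simple standard form: the subjective value V of a reward A delayed by time D is V = A / (1 + kD), where k is an individual discount rate. A quick sketch shows how sharply value collapses with delay (the k value here is an illustrative assumption, not a measured one):

```python
# Hyperbolic discounting: subjective value V = A / (1 + k*D),
# where A is the reward, D the delay and k a personal discount rate.

def discounted_value(amount, delay_days, k=0.1):
    """Hyperbolically discounted subjective value of a delayed reward."""
    return amount / (1 + k * delay_days)

# The same 100-point reward feels very different at different delays:
for delay in (0, 7, 30, 365):
    print(delay, round(discounted_value(100, delay), 1))
```

A reward a year away is worth only a few percent of its face value to the learner, which is exactly why exam-driven cramming and last-night essays are the rational response to distant payoffs.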

Autonomous control
Give people control over their learning, as personal agency acts as an accelerant. If I feel that things are not imposed upon me, but that I have chosen to take action, then intrinsic motivation will, on the whole, work better than extrinsic motivation. Giving people the choice over what and when they learn is therefore useful.

Push to engage
Technology allows us to push motivating messages and opportunities to learners. We can nudge them into learning. Nudge theory has been used in everything from etched insects in urinals (to reduce splashes) to serious behavioural change. Stream is an LXP that raises learner engagement by nudging and pushing learners forward through timely reminders. We know that learners are lazy and leave things to the last minute, so why not nudge them into correcting that behaviour? Woebot is a counselling chatbot that simply pops up in the morning on Facebook Messenger. You can choose to ignore or reschedule it. It has that drip-feed effect and, as the content is good and useful, you get used to doing just a few minutes every morning.

Place in workflow
Just-in-time training, performance support and workflow learning are all terms for delivering learning when it is needed. This closes the gap between need and execution, thereby eliminating hyperbolic discounting, as there is no delay. Pushes and pulls can sit in Slack, Messenger, Microsoft Teams or whatever social or workflow system your organisation uses.

Use events as catalysts
A sense of immediacy can be created by events – a merger, reorganisation, new product, new leader. All of these can engender a sense of imminence. Or manufacture your own mini-event. Several companies have implemented ‘phishing’ training by sending fake phishing emails, seeing how people react and delivering the training on the back of that event.

Recommendations
Almost everything you do online – Google, Facebook, Twitter, Instagram, Amazon and Netflix – uses recommendation engines to personalise what the system thinks you need next. Yet this is rarely used in learning, except in adaptive systems, where AI acts like a teacher, keeping you personally on course.

Visual nudges
Online learning needs to pick up on contemporary UX design and use slight movement, colour changes, positioning and layout to push people into action. In WildFire we use AI to create extra links during the learning experience. These appear as you work through an idea or concept, and are highlighted if the system thinks you didn’t really get it first time. But there are lots of things you can do to nudge people forward in learning.

Calls to action
A neat combination of events as catalysts, nudge learning and calls to action, used widely in marketing, was a project by Standard Life. They used a merger with another large organisation as the catalyst, short 90-second videos as nudges and challenges (calls to action) to do something in their own teams. Use was tracked and produced great results. Calls to action are foundational in marketing, especially online marketing, where you are encouraged to contact, register, inquire or buy through a call or button. Have a look at Amazon, perhaps the most successful company in the world, built on the idea of the call to action through its one-click buying button.

Get social
Reframe learning into a more social experience, online or offline, so that learners have their peer group to compare with. If you see that others are doing things on time, you are more likely to follow than if presented with some distant consequence. Future promises of promotion, even money, have less effect than near experiences of being part of a group doing things together or being encouraged, even peer reviewed, as encouragement and feedback engender action.

Habit
Habitual learning is difficult to embed, but once adopted it is a powerful motivator. Good learners are in the habit of taking notes, always having a book in their bags, reading before going to sleep and so on. Choose your habit and force yourself to do it until it becomes natural, almost unthinking. In Kahneman’s language, you must make sure that what once needed deliberate System 2 effort takes on the automaticity of System 1. Or your elephant starts to get places on its own without the rider urging it along.

Conclusion
Learning is one thing, getting people to learn is another. Psychologically, we’re hard-wired to delay, procrastinate, not take learning seriously and see the rewards as far too far down the line to matter. We have to fight these traits and do what we can to encourage authentic and effortful learning. Make it seem as though it really does matter through all sorts of nudges: social, autonomy, push, place in workflow, events as catalysts, recommendations, visual nudges, calls to action and habit.

Bibliography
Lewis, M. (2017) The Undoing Project. Penguin
Kahneman, D. (2011) Thinking, Fast and Slow. Penguin
Brown, P., Roediger, H. and McDaniel, M. (2014) Make It Stick. Harvard University Press

Friday, November 02, 2018

World’s first hologram lecture (not really) but are they necessary?

Imperial College, London, claimed to have held the world’s first hologram lecture. What they haven’t mastered is the art of being honest or doing a modicum of research. There have been hologram lectures before, most notably by Stephen Hawking. But let’s put the usual hyperbole to one side and look at this critically.

For the Imperial event, 3D figures were beamed in live from the US. They are projected on to a glass screen, with a backdrop giving the illusion of depth of field and interact with other panellists and the audience. Nice idea but will it fly?
Before I start, I’m no fan of slabbing out academic lectures as a method of teaching and learning, and would much rather institutions either recorded lectures or made sure that the people who deliver them receive some training in teaching and presentation. The number of students who simply don’t turn up is evidence enough that they are a failure. Only 60% turn up, even at Harvard. The very words ‘lecture’ and ‘lecturer’ say everything about the appalling state of pedagogy in Higher Education.
One of the saddest learning stories I’ve ever heard was from the actress Tilda Swinton. She was the only student who turned up to a lecture at Oxford by Raymond Williams where he read out his lecture, from notes, from behind the lectern, and neither of them even acknowledged each other. How sad is that? Almost every University has even worse tales of lectures where not one student turned up.

Samuel Johnson saw the folly of it all:
‘Lectures were once useful; but now, when all can read, and books are so numerous, lectures are unnecessary. If your attention fails, and you miss a part of a lecture, it is lost; you cannot go back as you do upon a book... People have nowadays got a strange opinion that everything should be taught by lectures. Now, I cannot see that lectures can do as much good as reading the books from which the lectures are taken. I know nothing that can be best taught by lectures, except where experiments are to be shown. You may teach chymistry by lectures. You might teach making shoes by lectures!’

As David Hume, observed, it is the content, not the person who matters:
‘...as you know there is nothing to be learnt from a Professor, which is not to be met with in books, and there is nothing to be required in order to reap all possible advantages from them, but an order and choice in reading them...I see no reason why we should either go to a University, more than to any other place, or ever trouble ourselves about the learning or capacity of the professor.’

On the other hand, I have no problem with talks by experts (not adjuncts and/or PhD students) who command respect and give students a feel for what it is to be a physicist or psychologist. Nor with lectures as inspirational and motivational events, where world-class speakers and teachers do their thing – but few have that presence or possess those skills. The problem with a focus on just the technology here is that a hologram of a bad lecturer won’t solve the problem.
This idea is interesting in terms of getting World Class experts and teachers to talk and teach in institutions around the world. If, as the evidence suggests, it increases presence, that’s great, especially if you feel as though they are really there. But I’m not convinced that hologram lectures are much more than a gimmick. They’re technically difficult to organise, expensive and try too hard to mimic what is, essentially, a flawed pedagogic technique. It perpetuates the traditional lecture format, rather than moving things on. It’s taking something that’s not that good in the real world and mirroring it virtually.

Skype or Zoom
On the other hand, we have to probe this a little. Wouldn’t it be easier and simpler to use video, through Skype or Zoom? These have tools that supplement the experience. For example, Skype Translator can translate voice, in real time, in 10 languages. Its text translator works in 60 languages. For global transmission, this makes sense.

Webinar software
Lectures as webinars make even more sense, as the supplementary tools allow for as much interaction as you wish, and it is scalable. It is rather odd that educational institutions don’t make more use of this form of delivery. Questions, chat, polls, links and many other features are available in this type of software. It takes some skill to do this, but it is a skill worth learning, as it results in better teaching and, more importantly, better learning.

VR
VR gives the advantage of full immersion and full attention. One of the most compelling features of VR is that you really are in that environment and the brain finds it difficult to jump out or be distracted. We know that attention is a necessary condition for learning, so this could be the optimal solution. One problem is the difficulty of taking notes, although speech dictation would be possible.

Conclusion
All technologies have their affordances. We need to identify what we want, then use the best technology available. Holograms seem like overreach. If students aren’t even turning up, let’s reflect on that first. And we should be recording lectures, given the overwhelming evidence that recordings help students taught in their second language and those absent through illness. In addition, learners can stop and rewind if they miss something, want to find something out or want to take more detailed notes. But the biggest argument is that recordings can be used for learning through repeated use and revision.

Wednesday, October 31, 2018

Agile online learning – in minutes not months using AI

So you spend decades trying to convince people to learn online, then in a day or so, the whole world does it – not from choice but necessity. Necessity is clearly the mother of innovation. But educators and trainers hit a blockage. Time was short and the processes for online production long. That's because online learning is still in the hand-crafted stage. It has failed to adopt agile methods using the latest technology. It is largely still in the 'multimedia' world of 20 years ago.

Agile production
Agile learning means relentless attention to education, training and behaviour. By agile I mean the use of AI to create education and training quickly, delivering high-retention training to learners on any device. Every piece of content can have an AI-generated audio introduction (AI text to speech) that explains what you have to do and tells you to relax about getting things wrong, as it is OK to make mistakes while learning. Learners then read or watch a video and, rather than click on multiple-choice questions, have to bring to mind and type in what they think they know. This ‘effortful’ learning was inspired by recent research showing that recall, with open input, is superior for retention compared to simply recognising answers from a list (multiple choice).

Learning content can literally be created the same day, as the AI creates the content, identifying the relevant learning points and automatically creating the learning experiences. This superfast production process means that quality assurance can be done on the real content, without the need for design documents and scripts. There is no need for multiple iterations by SMEs, as the original document, PPT or video contains all that is needed. The look and feel – logo, images, colour palette for screen features – can be quickly agreed. Everything from brief to final delivery can be done online through screen-sharing on Zoom or Skype. Not a single face-to-face meeting is necessary. See case study here.

Agile data
Finally, the modules can be SCORM-wrapped for delivery on the LMS. As SCORM is rather limited, we can also embed extra data-gathering capability, which WildFire harvests for further analysis. This allows detailed analysis of who did what and when within the training, and the specific times taken by each individual. Beyond this, WildFire can take data and correlate it with other data, such as sales.
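As a sketch of what correlating training data with sales might look like, here is a plain Pearson correlation over invented per-learner figures. The field names and numbers are purely illustrative, not WildFire’s actual data model:

```python
from math import sqrt

# Invented example records: per-learner minutes spent in training
# and subsequent monthly sales (units) — purely illustrative numbers
minutes = [12, 25, 8, 30, 18, 22]
sales   = [ 4,  9, 3, 11,  6,  8]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(minutes, sales)
print(f"correlation between training time and sales: {r:.2f}")
```

Correlation is not causation, of course, but even this much arithmetic goes beyond what SCORM completion data can support.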

Agile preparation
One important lesson in being agile is the ‘Garbage IN: Garbage OUT’ rule. When you use an agile production process, you need agile preparation. An intensive look at the input material pays dividends. Eliminate all of that extraneous material and text, cut until it bleeds and cut again, catch those pesky spelling and punctuation errors, make sure things are consistent. In our TUI work the source material was edited down to the essential ‘need to know’ content. 

Agile project management
Another essential ingredient is an agile project manager. In the TUI project we spoke frequently but didn’t have a single face-to-face meeting (invaluable in these Covid times). It was all quick decisions, problems solved on the spot and processes changed to get things done. The project manager never saw problems, only issues to be solved – quickly.

This was made easier by the fact that AI was used to produce content in minutes not months. That means the quality control was on real content, not paper documents. And as we used approved documents, PowerPoints and videos, there was no real need for intensive SME input, which is the main brake on agile production.

Agile creation
This is where the real gains lie. AI is now being used to create content and add curated resources at the click of a button. WildFire will take any document, PowerPoint or video and turn it into high-retention online learning, in minutes not months. Want an audio podcast or audio introduction to the course? It takes seconds using AI to do text to speech. The time savings are enormous, as are the cost savings. AI is used not only to identify learning points; it also constructs the questions, assesses open input (words or full short answers) and locates external resources for further learning.

Conclusion
Agile is as much a state of mind as a process. Yet Learning and Development still works to an old model of months not minutes. We procure slowly, prepare slowly, produce slowly and deliver slowly. Despite relentless calls to align with the business and respond to business needs, we are too slow. The business needs things in days, yet we deliver in months. This is what needs to change.

Online learning has traditionally been rather slow in design and production. We can now use AI in WildFire to create content quickly, to produce agile learning and data that allow us to adapt to new circumstances. I may have used the word 'agile' too often here, but it captures, in a word, what is now necessary. The days of seeing online learning production as some sort of feature-film project, with matching budgets and timescales of months, should be re-examined. Sure, some high-end content may need this approach, but much can be automated and done at 10% of the cost, in minutes not months.

Monday, October 22, 2018

Agile online learning (not a single F2F meeting) in 'impossible timescales' saves £450k with 36% increase in sales


This award-winning, agile learning project was groundbreaking (I know – bit of a cliché). We used an AI tool, WildFire, to deliver a project for a large multinational (TUI). The project delivered 138 modules on the locations for its holidays, flights, airport codes and so on. Recognising that they could never produce content on this scale, as the estimated costs for external development were just under £500,000 and it had to be delivered in weeks, not the estimated 8 months, they opted for WildFire. This uses AI to create content in minutes not months, along with supplementary curated content, also selected by AI.

Agile production
As WildFire does most of the design and build, an agile approach to production and project management was intrinsic. No scripts were necessary, as the software built the content so quickly that quality assurance could be done on the actual modules.
As it was ‘agile’, let’s cut to the chase...
  • 95% rated the design and approach as good or very good
  • 62% confirmed they could identify a specific sale based on knowledge gained
Their knowledge of the countries, locations, attractions, currencies, airport codes and so on was reported, time and time again, by front-line staff as having helped them sell holidays and flights. Remember - this is a location-driven business. If you want to sell holidays and cruises, you have to know the destinations and attractions.

ROI
Total savings, compared to traditional online learning production, were calculated as: 
  • “£438,000 with extra savings of £15,000 in salary costs”
  • “Freed up 15% of manager time” to do other things
  • “With a bit of lateral thinking and a lot of tenacity – seemingly impossible timescales were met”
Further benefits
When the modules were released to the wider business, sales benefits started to emerge:
  • “36% increase in sales has already been recognised in the first few months the training has been available”
Within the company the “wider business now recognises the benefits of being bold with new learning technologies” and sees the project as an “outstanding example of achieving our strategy to invest in and develop our people”.

Agile mindset
This was a ground-breaking project, delivered without a single face-to-face meeting. It shows what can be achieved when a training department is innovative and brave. We must get past the model that says it takes months to produce content at prohibitive costs. Agile production needs agile tools, agile production methods and agility of thought and mindset. Organisations have traditionally moved faster than training delivery. That means we’re often out of phase with the business. An agile mindset and production attitude allows us to transcend that historical gap. Once we become responsive to the business, they will respect us more and we become more aligned with their natural speed.

Conclusion
At the Learning Technologies Awards, the judges said the following....
"To speed up production TUI took the brave step of selecting.... an Artificial Intelligence tool, WildFire. The result saw triple savings; six months knocked off the expected timescales, £15,000 in salaries and £438,000 of the development budget."

If you want to know more about WildFire, agile production and high-retention, online learning, contact us here...







Has L&D been hijacked by ‘identity’ politics?

I was asked an interesting question last week on a panel at a Learning and Development conference: where is L and D? Is it advancing or regressing?
In many ways it has advanced by embracing technology. Few individuals or organisations of any scale ignore online solutions to deliver learning and development. There has also been some movement towards taking research more seriously. I meet far fewer people who believe in learning styles or NLP. There seems to be much more awareness of learning theory and by that I mean cognitive science.
On the other hand, these advances have been matched by a serious reversal – the swallowing, lock, stock and both barrels, of ‘identity’ politics. Gone are the days when HR and L and D were at the forefront of personal development. Much ‘training’ is now targeted at protecting the organisation from its own employees. Employees are now subjected to a tsunami of compliance and identity training. Much of what is actually delivered via an LMS is protective training or, as almost everyone says, ‘tick-box’ training. We know it and learners know it. We have to be honest about the fact that they are often openly contemptuous of this form of ‘training’. Somewhere along the line our development agenda got hijacked.
Identity and training
Training that establishes difference also implies exclusion. Fixing favoured identity groups, seeing some as oppressed and others as oppressors, is a destructive force. And when HR aggressively promotes an ‘us’ and ‘them’ culture it does us all a disservice. When we become the managers and administrators of difference, serving up a never-ending diet of identity and diversity training courses, we swirl around in our own echo-chambers. We become the seekers out of ‘wrong-think’, policing ordinary people, as if they were stained by original sins. It results, not in rational consensus within an organisation, but in the false exaggeration of difference.
Leadership
One specific identity group that has been put on a pedestal is ‘Leaders’. We have thrown resources at Leaders and Leadership training, as elitist an approach to employee engagement as I’ve seen in my 35 years in this business. In a desperate attempt to appear legitimate we have turned ‘elitism’ into a sort of training cult. No one would seriously call themselves a ‘Leader’ without being ridiculed. This form of single-group identity is a serious distortion of real needs in organisations.
Values
Another facet of identity politics has been the unedifying sight of organisations forcing faux values down the throats of their employees. I have values and have no interest in HR telling me what my values should be. Strangely alliterative – all starting with ‘I’ or ‘C’ – they are abstract nouns that bear no relation to the real world or the workplace. Worse still are those values that fit some acronym, where at least some of the values have been made up to fit that word. They want to shape your identity by imposing their values on you. They are, of course, largely ignored or treated with contempt. Few can recall them, fewer still care.
Diversity
The rot set in long ago, when HR thought it should take the role of therapeutic diagnosis. Myers Briggs, a flawed and crude tool, has been used to determine people’s lives. As if this weren’t enough, a slew of interrogative techniques, from learning styles (a fiction) to NLP and mindfulness, were employed to caricature, categorise and, in some cases, condemn ordinary people. There has been no end to this slicing and dicing, based on dubious diagnostic techniques. The evidence is clear. Diversity training does not work.
Unconscious bias
It has now reached surreal levels of interrogative nonsense, with courses on ‘unconscious bias’. Not satisfied with superficial courses making us consciously compliant, HR suddenly started to probe our unconscious. How did that happen? What on earth gives anyone in HR or L&D the right to even think that my unconscious is open to their investigation, something to be probed by some half-baked questionnaire that has no scientific validity? This is completely out of control. The assumption is that certain out-of-favour groups - white, male, working class - are guilty by virtue of being born, so leaden with bias that they need to be re-educated. It’s a pathological view of human nature, where minds must be interrogated and forced to admit their guilt. This was never our goal. HR and L&D is full of good people but it has been hijacked by a form of policing that has abandoned personal development for personal identity.
Conclusion
All of this is fuelled by so-called ‘experts’ who design courses on ‘X’ and train others to deliver those courses as ‘practitioners’, who sell their courses for ‘$Y’, and so the Ponzi schemes begin. Rather than focus on the real needs of organisations – real knowledge, skills and competences – we have been sucked into a world of exaggerations, abstract concepts and fictions, where people have to be identified, shamed and re-educated, to be ‘correct’ and ‘compliant’. Time and time again I hear pleas for HR and L&D to be more business-focussed. What we’ve done, in reality, is turn our back on real business issues to focus on therapeutic concerns that invade people’s privacy. So let’s get off the identity bandwagon and get back to business.

Friday, October 19, 2018

"On average, humans have one testicle”... that's why most data analytics projects in education are misleading....

AI will have a huge impact in learning. It changes everything… why we learn, what we learn and how we learn. Of course, AI is not one thing. It is a huge array of mathematical and software techniques. Yet looking at the spend in education and training, people have been drawn to one very narrow area – data analytics. This, I think, is a mistake.

Much of this so-called use of AI is like going over the top of your head with your right hand to scratch your left ear. Complex algorithmic and machine learning approaches are likely to be more expensive and far less reliable and verifiable than simple measures, like using a spreadsheet or making what little data you have available, in a visualised, digestible form, to faculty or managers. Beyond this, traditional statistics is likely to prove more fruitful. Data analytics has taken on the allure of AI, yet much of it is actually plain, old statistics.

Data problems
Data is actually a huge problem here. They say that data is the new oil; it is more likely the new snake oil. It is stored in weird ways and places, and is often old, useless, messy, embarrassing, personal and even secret. It may need to be anonymised, have training sets identified and be subject to GDPR. To quote that old malapropism, ‘data is a minefield of information’. It may even be massively misleading, as in the testicle example. You must assume, in learning, that your data is quite simply messy.

Paucity of data
The data problem is even worse than mere messiness, as there is another problem – the paucity of data. Institutions are not gushing wells of data. Universities, for example, don’t even know how many students turn up for lectures. I can tell you that the actual data, when collected, paints a picture of catastrophic absence. Data on students is paltry. The main problem with the use of data in learning, is that we have so little of the stuff. 
SCORM, which has been around for 20-plus years, all but stopped the collection of data with its focus on completion. This makes most data analytics projects next to useless. The data can be handled in a spreadsheet. It is certainly not as large, clean and relevant as it needs to be to produce deep and genuine insights.
Other data sources are similarly flawed, as there's little in the way of fine-grained data about actual performance. It's small data sets, often messy, poorly structured and not understood.

Data dumps
Data is often not as clean as you think it is, with much of it in:
  • odd data structures
  • odd formats/encrypted
  • different databases
Just getting hold of the stuff is difficult.
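To make that concrete, here is a small sketch – with invented exports and field names – of the kind of normalisation work needed just to get records from two differently shaped dumps into one schema:

```python
import csv
import io
import json
from datetime import datetime

# Two hypothetical exports of the same kind of record, in different shapes
csv_dump = "user,completed\nana,13/12/2018\nben,02/11/2018\n"
json_dump = '[{"learner": "chloe", "finished_on": "2018-10-31"}]'

def normalise(record, source):
    """Map each source's fields onto one common schema with ISO dates."""
    if source == "csv":
        date = datetime.strptime(record["completed"], "%d/%m/%Y").date()
        return {"learner": record["user"], "completed": date.isoformat()}
    if source == "json":
        return {"learner": record["learner"], "completed": record["finished_on"]}
    raise ValueError(f"unknown source: {source}")

records = [normalise(r, "csv") for r in csv.DictReader(io.StringIO(csv_dump))]
records += [normalise(r, "json") for r in json.loads(json_dump)]
print(records)
```

Multiply this by every legacy system in the organisation and the scale of the plumbing problem becomes clear.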

Defunct data
Then there’s the problem of relevance and utility, as much of it is:
  • old
  • useless
  • messy
In fact, much of it could be deleted. We have so much of the stuff because we simply haven’t known what to do with it, don’t clean it and don’t know how to manage it.

Difficult data
There are also problems around data that can be:
  • embarrassing
  • secret
There may be very good reasons for not opening up historic data, such as emails and internal social communications. It may open up sizeable legal and HR risks for organisations. Think Wikileaks email dumps. Data is not like a barrel of oil, more like a can of worms.

Different types of data
Once cleaned, one can see that there are many different types of data. Unlike oil, it has not so much ‘fractions’ as different categories. In learning we can have ‘Personal’ data, provided by the person or actions performed by that person with their full knowledge. This may be gender, age, educational background, needs, stated goals and so on. Then there’s ‘Observed’ data from the actions of the user, their routes, clicks, pauses, choices and answers. You also have ‘Derived’ data, inferred from existing data to create new data, and higher-level ‘Analytic’ data from statistical and probability techniques related to that individual. Data may also be created on the fly.
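A toy illustration of how these layers stack up for a single learner (all field names and numbers invented): ‘Derived’ data is computed from ‘Observed’ data, and ‘Analytic’ data is a judgement computed from the derived layer:

```python
# 'Personal' data: supplied knowingly by the learner (invented example)
personal = {"name": "ana", "stated_goal": "pass GDPR assessment"}

# 'Observed' data: raw actions captured during the learning experience
observed = {"answers": [True, False, True, True],
            "seconds_per_item": [20, 95, 25, 30]}

# 'Derived' data: inferred from observed data to create new data points
derived = {
    "score": sum(observed["answers"]) / len(observed["answers"]),
    "slow_items": [i for i, s in enumerate(observed["seconds_per_item"]) if s > 60],
}

# 'Analytic' data: a statistical/probabilistic judgement about the individual
analytic = {"needs_revision": derived["score"] < 0.8 or bool(derived["slow_items"])}

print(derived, analytic)
```

Each layer is a step further from anything the learner knowingly handed over, which is exactly why consent and anonymisation get harder as you move down the stack.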

Just when you thought it was getting clearer, there is also ‘Anonymised’ data, a bit like oil of unknown origin. It is cleaned of any attributes that may relate it to specific individuals. This is rather difficult to achieve, as there are often techniques to reverse-engineer attribution back to individuals.

In AI there’s also ‘Training’ data used for training AI systems and ‘Production’ data which the system actually uses when it is launched in the real world. This is not trivial. Given the problems stated above, it is not easy to get a suitable data set, which is clean and reliable for training. Then, when you launch the service or product, the new data may be subject to all sorts of unforeseen problems not uncovered in the training process. This is a rock on which many AI projects founder.

Data preparation
Before entering these data analytics projects, ask yourself some serious questions about 'data'. Data size by itself is overrated, but size still matters: whether n is in the tens, hundreds, thousands or millions, the Law of Small Numbers still applies. Don’t jump until you are clear about how much relevant and useful data you have, where it is, how clean it is and in what databases it sits.
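The Law of Small Numbers point is easy to demonstrate. This sketch simulates a population with a true pass rate of 50% and compares what small and large samples report (a fixed seed keeps it reproducible):

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# A hypothetical population where the true pass rate is exactly 50%
def sample_pass_rate(n):
    """Observed pass rate in one random sample of n learners."""
    return sum(random.random() < 0.5 for _ in range(n)) / n

# Small samples swing wildly around the true 0.5; large ones settle down
small = [sample_pass_rate(10) for _ in range(1000)]
large = [sample_pass_rate(1000) for _ in range(100)]

print("small-sample rates range:", min(small), "-", max(small))
print("large-sample rates range:", min(large), "-", max(large))
```

A department comparing pass rates across courses with a handful of learners each is looking at exactly this noise, not insight.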

New types of data may be more fruitful than legacy data. In learning this could be dwell time on questions, open input data, wrong answers to questions and so on. More often than not, what you have as data are really proxies. 

Action not analytics
The problem with spending all of your money on diagnosis, especially when the diagnosis points to an obvious, limited set of possible causes that were probably already known, is that the money is usually better spent on treatment. Look at improving student support, teaching and learning, not dodgy diagnosis.
In practice, even when those amazing (or not so amazing) insights come through, what do institutions actually do? Do they record lectures because students with English as a foreign language find some lecturers difficult and the psychology of learning screams at us to let students have repeated access to resources? Do they tackle the issue of poor teaching by specific lecturers? Do they question the use of lectures? Do they radically reduce response times on feedback to students? Do they drop the essay as a lazy and monolithic form of assessment? Or do they waffle on about improving the ‘student experience’ where nothing much changes?

Conclusion
I work in AI in learning, have an AI learning company, invest in AI EdTech companies, am on the board of two other AI learning companies, speak on the subject all over the world and write constantly on the subject. You’d expect me to be a big fan of data analytics and recommendation engines - I’m not. Not yet. I’d never say never, but so much of this seems like playing around with the problem, rather than facing up to solving the problem. That's not to say you should ignore its uses - just don't get sucked into data analytics projects in learning that promise lots but deliver little. Far better to focus on the use of data in adaptive learning or small-scale teaching and learning projects where relatively small amounts of data can be put to good use.

AI is many things, and a far better use of AI in learning, in my opinion, is to improve teaching through engagement, support, personalised adaptive learning, better feedback, student support, active learning, content creation (WildFire) and assessment. All of these are available right now. They address the REAL problem – teaching and learning.