Saturday, January 19, 2019

Listen up! Voice is here - online learning needs to be unmuted

Curious conundrum
Online learning needs to be unmuted. Almost all online learning involves just clicking on things. Not even typing stuff in, just clicking. We click to navigate, click on menus, click (absurdly) on people to get fictional speech bubbles, click on multiple-choice options. Yet most other online activity involves messaging, typing what you think and being far more active. In real life, of course, we don’t click, we speak and listen. Most actual teaching and training uses voice.
Voice is our first and most natural form of communication. We’ve evolved to speak and listen, becoming grammatical geniuses by the age of three without, in any formal sense, being ‘taught’ to talk or to understand what others say. Reading and writing, by contrast, take many years to learn, and many struggle, some never achieving mastery in a lifetime.
Rise of voice
Strangely enough we may be going back to the pre-literate age with technology, back to this almost frictionless form of interface.
This started with services such as Siri and Cortana on our phones. As the AI technology behind these services improved, it was not Apple or Microsoft that took it to consumers but Amazon and Google, with Alexa and Google Home. I have an Alexa which switches my lights on and off, activates my robot vacuum cleaner, plays all of my music and controls my smart TV. I use it to set timers for calls and Skype meetings. We even use it to voice message across the three floors of our house, and to my son who lives elsewhere. I use it for weather, news and sports results. In Berlin recently, my son, who has Bluetooth headphones linked to Google Assistant, wanted a coffee and simply asked where the nearest coffee shop was; it spoke back, giving voiced directions as we walked. Voice is also in our cars, where we can speak commands or be spoken to by Google Maps.
This month we’ve also seen tools emerge that analyse your voice in terms of mood and tone, and evidence that you can diagnose dementia, Parkinson’s and other illnesses from frequency analysis of speech. As Mary Meeker’s analysis shows, voice is here to stay and has become the way we interact with the internet of things.
Voice for learning
1. Podcasts
Another sign that voice is an important medium in its own right is the podcast, whose popularity has surprised many; there is an excellent post on the subject by Steve Rayson. The book ‘Podcasting: New Aural Cultures and Digital Media’ by Llinares, Fox and Berry (2018) is an in-depth look at the strengths of voice-only media: the ability to listen when you want (timeshifting), and to listen while walking, running, exercising or driving, which means long pieces, often with multiple participants, can have more depth. In addition, podcasts make you feel as though you are there in the conversation, with a sense of intimacy, as this is ‘listening’ not just ‘hearing’, especially when wearing headphones. Podcasts should be used more in learning.
2. Podcasts and online learning
We’ve been using podcasts in WildFire. One real example features a senior clinician, who ran and authored a globally significant medical trial in asthma. We let the learner listen intently to the podcast (an interview), then take the transcript (automatically transcribed into text) to produce a more active and effortful learning experience, with free-text input. You get the best of both worlds: an intimate and reflective experience with the expert, as if you were there with him, then you reinforce, reflect, retrieve, retain and can recall what you need to learn. Note that the ‘need to know’ material is not every single word, but the useful points about the scale of the trial, its objectives and its findings.
3. Text to speech
We’ve also used AI text to speech to create introductions to online courses, making them more accessible and human. The underlying text file can be edited with ease if it needs to be changed.
4. Voice input
We’ve also developed voice-input online learning, where you don’t type in answers but ‘voice’ them. This is a very different cognitive and learning experience from just clicking on multiple-choice options. Your memory recalls what you think you know in your phonological loop, a sort of inner ear where sounds are recalled and rehearsed before being either spoken or written. This is the precursor to expression. Voicing your input just seems more like dialogue.
The entire learning experience is voiced: navigation and retrieval with open input. This, we believe, will be useful for certain types of learning, especially with audiences that have problems with typing, literacy or dyslexia. Voice is starting to creep into online learning. It will grow further.
5. VR
One of the problems in VR is the inability to type and click on anything. Put on a headset and typing, even when possible, is far too slow and clumsy. It is much more convenient, and natural, to speak within that immersive world. This opens up the possibility of more flexible learning within VR. Many knowledge components, decisions or communications within a simulation can be voiced as they would be in the real world. Voice will therefore enable more simulation training.
6. Voice as a skill
Text-based learning has squeezed out the skills of oratory, yet speaking fluently, explaining, presenting, giving feedback, interviewing, managing, critical thinking, problem solving, team working and much of what are called 21st-century skills were once taught more widely through voice. They are skills that are fundamentally expressed as speech, that most fundamental of media. People have to learn both to speak up and, when they speak, to speak wisely and to good effect. It is also important, of course, to listen. For these reasons, the return of voice to learning is a good thing. Speaking to a computer, I suspect, also results in more transfer, especially if, in the real world, you are expected to articulate things in meetings or in the workplace to your colleagues, face to face.
7. Feedback
Voiced feedback is used by some, obviously in coaching and mentoring, but also in feedback to students about assignments. The ease of recording, along with the higher impact on the learner in terms of perceived interest by the teacher, makes this a powerful feedback method.
8. Assessment
So much learning is text based when so much of the real world is voice based. Spoken assessment is, of course, normal in language training, but shouldn’t we be expected to voice our opinions, even voice critical pieces, for assessment? Oral examinations are relatively rare, but they may be desirable if newer, softer skills are in demand.
Online learning needs to pay attention to AI-driven voice. It is an underlying consumer technology, now ubiquitous on phones and increasingly in our homes. It’s natural, convenient, intimate and human. It has, when used wisely, the ability to lift online learning out of the text and click model in all sorts of imaginative ways. So listen up folks!

 Subscribe to RSS

Friday, January 11, 2019

This 'less is more' AI technique saves time, money and helps increase retention...

AI is many things and it is best employed in learning on specific narrow tasks. That’s what we’ve been doing, using AI to create content, semantically analyse free text input, create text to speech podcasts and curate content at WildFire.
One problem we have tackled is the simple fact that the INPUTS into learning content tend to be overlong, overwritten and too detailed. Training departments are often given huge PDFs, long slide decks packed with text or overlong video. To be fair, those in video production are normally professional enough to edit it down to a reasonable length, but huge documents and PowerPoints are, I’d say, the norm.
AI can be used to automatically shorten this text. This can be done in two ways (or in combination):
Extractive summarisation keeps the original text intact and simply removes what it judges to be less useful content. This uses techniques such as term frequency–inverse document frequency (TF-IDF), which gives you a measure of how important words are within a corpus (a dataset of text). It looks for sentences containing these important words and extracts them. There are many more sophisticated extractive algorithms, but you get the idea.
The advantage of this approach is that you retain the integrity of the original text, which may be useful if it has been through a regulatory, legal or subject matter review.
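To make the extractive idea concrete, here is a minimal sketch that scores sentences by the frequency of their words across the whole text, a crude stand-in for a real TF-IDF pipeline; the function name and scoring are invented for illustration:

```python
import re
from collections import Counter

def extractive_summary(text, n_sentences=2):
    """Keep the n highest-scoring sentences, in their original order.
    A sentence's score is the average corpus frequency of its words,
    so sentences full of common, central terms rise to the top."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z]+", text.lower()))

    def score(sentence):
        tokens = re.findall(r"[a-z]+", sentence.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    top = set(sorted(sentences, key=score, reverse=True)[:n_sentences])
    return [s for s in sentences if s in top]
```

A real pipeline would also down-weight words that are common across many documents (the ‘inverse document frequency’ part); this sketch uses within-document frequency only.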
Abstractive summarisation tends to use deep learning models and neural networks to understand the content, then generate summarised content as a précis. Free from the constraint of having to be loyal to the original structure, these algorithms write their own abstract, getting down to the real essence of the text.
This more powerful technique is more likely to provide a tighter, more suitable output for learning – shorter and a more optimal distillation of the meaning.
This is useful in increasing the productivity of any educational or training design team, as you dramatically shorten this necessary editing task. These techniques really do work well on large PDFs, not uncommon in compliance and SOWs. But they also work well with any text: articles, papers, books, even PowerPoint text and video transcripts. We already automatically grab transcripts from YouTube, and this extra step is useful in reducing spoken text down to its real substance; you often get asides and content that works well on screen but not as text. Video plus detailed, effortful text, where you pick up the detail and have to make the cognitive effort to show understanding and actual recall of the content, is a useful combination. Note that you can scale down in steps until you reach what you feel is an optimal précis. We’ve also found it useful because it surfaces overwriting, repetition, even errors.
Once agreed, the shorter text can be put into WildFire, where other forms of AI create the content, in minutes not months, again dramatically decreasing both time to delivery and costs. The AI creates the content and analyses free-text input, which is significantly better in terms of improving both retention and recall.
Time matters
This reduction in time is important, as training design has traditionally been a bit of a block in the process. A business sponsor comes to the training department and is told it will take weeks or months; their reaction is often simply to walk away. Faster turnaround also means you are seen in the business as delivering timely and, importantly, not over-engineered solutions to business problems.
Less is more
A point that is often overlooked is that this is wholly in line with the psychology of learning, which screams ‘less is more’ at us. A good motto that summarises what learning designers have to do is ‘Occam’s Razor’: use the minimum number of entities to reach the given goal. This is true of interfaces, but it is also true of content design, media design and the needs of learners.
Our limited working memory, along with the need for chunking and retrieval, makes it essential to be precise and as short as possible with learning content. Many courses are overlong, with content that is not essential and will soon be forgotten. What learners and businesses want is the crisp essence, what they need to know, not the padding.
This AI technique can be used alongside other techniques to massively increase speed of delivery, reduce cost and, just as importantly, improve efficacy. Your learners will be grateful, as will your business sponsors.


Thursday, January 10, 2019

10 things you need to know before you buy or build a chatbot

As ELIZA showed over 50 years ago, and as Nass and Reeves showed is generally true for technology, we are easily fooled into anthropomorphising and reading agency into chatbots and technology in general. In truth, chatbots don’t talk to you; they pretend to talk to you. They are tricksters. In a sense, all human–machine interaction is trickery: in the end, it is only software being mathematically executed, with some human scripts thrown in. Nevertheless, chatbots are surprisingly successful. Even the simple Alexa has been a massive hit, and she (well, it) only answers simple questions, with little or no dialogue.
Interestingly, this immediately raises an issue for chatbot deployment: setting ‘expectations’. Do you tell users that it is just a piece of software, or do you keep up the ‘magic’ myth? How honest will you be about its capability? Set the bar too high and you will get lots of disappointed users. Here are a few other practical things to think about when you enter the weird and wonderful world of bots…
1. Domain knowledge
First up, on expectations, and this is really important: remember that chatbots are not generalists. They are domain specific, good at specific tasks within defined domains. Google Duplex works only because it does domain-specific tasks: calling a restaurant or booking a hairdressing appointment. Some services, such as Dialogflow and LivePerson, offer domain-specific stores of messaging transcript data, with detailed tasks for each industry sector. Some even focus on core use cases, mostly designed around customer service. Most are a long way off being a genuine teacher, coach or mentor, as they lack the general ability to deal with a breadth of unexpected queries and answers. So dial your expectations down a notch or you’ll be setting yourself up for failure.
2. Voice
Your chatbot needs to have a voice. It’s too easy to just throw a jumble of responses into a database and hope for the best. In organisations, you may need to be on brand, talk like an expert and not a teenager, and use humour (or not). Define a persona and build a style guide. At the end of the day, lots of responses have to be written, and they need to sound as though they have a single voice. In learning especially, you have to be careful with tone; too many chatbots have a surfeit of phrases that sound as if they’re trying too hard to be cool or funny. In learning, one may want to be a little more serious. This depends, of course, on your intended audience and the subject matter. Whatever the project, think about the ‘voice’ in this wider sense.
3. Manifestation
Linked to voice is the visual and aural manifestation of your chatbot. Think carefully about its appearance. Some stay gender neutral, others are identified as male or female. Many, perhaps too many, appear as 1950s square robots. Others have faces, micro-expressions, even animation. Then there’s the name. Be careful with this, it matters. And do you want one name, or a separate name for each domain or course? Giving your bot a face seems a little odd to me; I prefer a bot identity that’s a little more hidden, almost unobtrusive, which leaves the persona to be built in the mind of the user.
4. Natural language processing
Understand what level of technology you want to use. This can mean lots of things, from simple keyword recognition to full speech recognition (as in Amazon Lex). Be very careful here, as this is rarely as good as vendors claim it to be. When a vendor says they are using deep learning or machine learning, that can mean many things, from very basic NLP techniques to more dynamic, sophisticated processing. Get used to the language of ‘intents’; this is related to the domain-specific issue above. Chatbots need to have defined tasks, namely ‘intents’ (the user’s intention), identified and named as actions on objects, such as ‘show weather’. These are qualified by ‘entities’. It is worth getting to grips with the vocabulary of NLP when buying or building chatbots.
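As an illustration of the ‘intents and entities’ vocabulary, here is a toy keyword-based matcher; the intent names and trigger words are invented, and real engines such as Dialogflow use trained models rather than keyword sets:

```python
# Each intent is a named action ('show_weather') triggered by keywords.
# 'Entities' qualify the intent; here they are crudely taken to be any
# capitalised word after the first (e.g. a place name).
INTENTS = {
    "show_weather": {"weather", "forecast", "rain"},
    "set_timer": {"timer", "alarm", "remind"},
}

def parse(utterance):
    tokens = set(utterance.lower().split())
    for intent, keywords in INTENTS.items():
        if tokens & keywords:
            entities = [w for w in utterance.split()[1:] if w.istitle()]
            return {"intent": intent, "entities": entities}
    return {"intent": "fallback", "entities": []}
```

For example, parse("What is the weather in Berlin") yields the intent show_weather with the entity Berlin, while an unmatched utterance falls through to a fallback intent.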
5. Building
Many chatbot services offer a no-code tool to build your flow; others require more complex skills. Flowcharting tools are common, and these often result in simply asking users to choose from a set of options and branching from there. To be fair, that keeps you (and the bot) on track, which may be the way to go in structured learning. Others will accept open input but steer you towards certain types of responses. One thing is for sure: you need new skill sets. Traditional interactive design skills will help, but not much. This is about dialogue, not monologue, and about understanding complex technology, not just pages of HTML.
6. Your data
How do you get your data into their system? This is not trivial. How do you get your content, which may exist as messages, PDFs, PowerPoints and other assets, into the format that is needed? This is far from automatic. Then, if the chatbot uses complex AI techniques, there’s the training process. You really do need to understand the data issues, what, where and how the data is to be managed, and, of course, GDPR.
7. Hand off to humans
What happens when a chatbot fails? Believe me, this is common. A number of failsafe tactics can be employed. You can do the common thing and ask the person to repeat themselves: “Sorry, I didn’t catch that.” “Could you elaborate on that?” The chatbot may even try to use a keyword to save the flow, distract, change the subject and come back to the flow a little later. So think about failsafes. If all else fails, many customer chatbots default out to a real human. That’s fine in customer service, and many services, like LivePerson, offer this functionality. It is not so fine if you’re designing an autonomous learning system.
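The escalation described above can be sketched as a simple state machine: reprompt a couple of times, then hand off to a human. The phrases and threshold below are illustrative, not taken from any particular product:

```python
REPROMPTS = [
    "Sorry, I didn't catch that. Could you rephrase?",
    "Could you elaborate on that?",
]

class FailsafeBot:
    """Counts consecutive failures to understand the user; once the
    reprompts are exhausted, it escalates to a human agent."""

    def __init__(self):
        self.failures = 0

    def respond(self, understood, answer=None):
        if understood:
            self.failures = 0          # a good turn resets the counter
            return answer
        self.failures += 1
        if self.failures <= len(REPROMPTS):
            return REPROMPTS[self.failures - 1]
        return "HANDOFF_TO_HUMAN"      # customer-service style fallback
```

An autonomous learning system would need a different final branch, since there may be no human to hand off to.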
8. Channels
On what channels can the chatbot appear? There are lots of options here, and you may want to look at what comms channels you use in your organisation: website chat, in-app chat, Facebook Messenger, Slack, Google Assistant, Skype, Microsoft Teams, SMS, Twitter or email. The chatbot needs a home, and you may want to think about whether it is a performance support chatbot on your comms system, or a more specific chatbot within a course.
9. Integration
Does the chatbot have an open API, and does it integrate with other platforms? Don’t imagine that this will work easily from your LMS; it won’t. Integration into other systems may also be necessary.
10. Administration
Your chatbot has to be delivered from somewhere, so what are the hosting options, and is there monitoring, routing and management? Reporting and user statistics matter with chatbots, as you really do want to see if they deliver what they promise: user stats, times, fallout stats. How are these handled and visualised? Does your chatbot vendor have 24/7 customer support? You may need it. Lastly, if you are using an external service, be careful about it changing without telling you (it happens), especially with the large tech vendors, like IBM and Microsoft.
We are only at the start of the use of chatbots in learning. The trick is to play around with all of the demos online before you start. Check out the large vendors such as:
Remember that these are primarily chatbots for customer service. For learning purposes, I’d start with a learning company first. If you want any further advice on this contact me here.


Sunday, January 06, 2019

AI breakthroughs in learning in 2018

AI is good at narrow, prescribed tasks; it is hopeless at general tasks. This, in my view, is why big data and learning analytics projects are less appropriate in learning than more precise, proven uses of AI. There’s a paucity of data in learning, and it is often messy, difficult to access and subject to overfitting and other problems when used to make predictions.
On the other hand, using specific techniques at specific points on the learning journey – engagement, support, delivery and assessment – one can leverage AI to best effect. So here are five ways this was done in 2018, in real projects, in real organisations, some winning major awards.
1. Chatbots
We’ve seen hundreds of early projects this year where chatbots have been used in ways that promise much for their future in learning engagement, learning support, performance support, assessment and well-being. Google demonstrated Google Duplex, which mastered conversational structure in a limited domain, but well enough to fool restaurants into thinking a human was calling. It has been rolled out for further trials on selected Pixel phones. This builds on several different areas of natural language processing: speech to text, text to speech, trained neural networks and conversational structures. We can expect a lot more in this area in 2019.
2. Creation
The world of design has got bogged down in media production, as if just watching, listening or reading were enough to learn. Even media production can be automated, to a degree, with AI. We have been producing text-to-speech podcasts, using automated transcript creation from video, and generating content, at last recognising that learning, as opposed to click-through consumption, needs fast AI-generated production of high-effort learning experiences. Award-winning, world-beating projects are now created with AI, with no interactive designers.
3. Cognitive effort
Online learning has been trapped in largely linear media ‘experiences’ with low-effort, multiple-choice questions. This year we’ve seen services that use open input, either as single concepts or free text, both created by and interpreted semantically by AI. The ability of AI to interpret text input by learners automates both assessment and feedback. This was realised in real projects in 2018. It will only get better in 2019.
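The semantic analysis behind such services is proprietary, but the general idea can be illustrated with a crude stand-in: comparing a learner’s free-text answer against a model answer using bag-of-words cosine similarity. This is a sketch of the concept only, not any production system:

```python
import math
import re
from collections import Counter

def cosine_similarity(answer, model_answer):
    """Cosine similarity between word-count vectors of two texts:
    0.0 means no shared vocabulary, 1.0 means identical word counts."""
    va = Counter(re.findall(r"[a-z]+", answer.lower()))
    vb = Counter(re.findall(r"[a-z]+", model_answer.lower()))
    dot = sum(va[w] * vb[w] for w in va)
    norm = math.sqrt(sum(c * c for c in va.values())) * \
           math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0
```

A real system would also handle synonyms, paraphrase and word order, all of which pure bag-of-words matching ignores.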
4. Personalisation
Good learning takes place with timely and relevant action. Almost everything we do online is mediated by AI that delivers timely and relevant options for people – searching on Google, connecting on Social Media, buying on Amazon, entertaining ourselves on Netflix. Adaptive, personalised learning finally showed convincing results on attainment across courses. We can expect a lot more of this in 2019. 
5. Curation
The ability to curate content, tapping into the vast cognisphere that is the web, is happening as part of course creation as well as in separate, searched curation. One can wire in external links to content to solve problems of competence, comprehension or curiosity.
Forget blockchain, badges and gamification. The underlying tectonic shift in learning technology will use AI. This is happening in healthcare, with significant ‘better than human’ applications appearing in 2018. This is happening in finance, with chatbots at the front end and AI playing an increasing role in back-end systems. This is happening in manufacturing, with the automation of factories. This is happening in retail, as selling, buying and delivery increasingly run through algorithms and AI. It is also happening in learning. This matters. If we are to adapt to the new normal of AI’s effects on employment, commerce and politics, we must make sure that education keeps up and that we equip ourselves and our children with better skills for this future.


Wednesday, January 02, 2019

Year of learning dangerously – my 15 highs and lows of 2018

So 2018 is behind us. I look back and think… what really happened, what changed? I did a ton of talks over the year, in many countries, to different types of audiences: teachers, trainers, academics, investors and CEOs. I wrote 65 blog posts and a huge number of tweets and Facebook posts. I also ran an AI business, WildFire, delivering online learning content, and we ended the year nicely by winning a major award.
So this is not a year end summary nor a forecast for 2019. It’s just a recap on some of the weirder things that happened to me in the world of ‘learning’…
1. Agile, AI-driven, free text learning
As good a term as I can come up with for what I spent most of my year doing and writing about, mostly on the back of AI: real projects, delivered to real clients, of AI-generated, award-winning content; superfast production times; and a new tool in WildFire that gets learners to use free text, with AI (semantic analysis) as part of the learning experience. Our initial work shows that this gives huge increases in retention. That is the thing I’m most proud of this year.
2. Video is not enough
Another breakthrough was a WildFire tool that takes any learning video and turns it into a deeper learning experience, by taking the transcript and applying AI, not only to create strong online learning but also to use the techniques developed above to massively increase retention. Video is rarely enough on its own. It's great for attitudinal learning, processes, procedures and things that require context and movement, but it is poor at detail and semantic knowledge, and has relatively poor retention. This led to working with a video learning company to do just that, as 2 + 2 = 5.
3. Research matters
I have never been more aware of the lack of awareness of research on learning and online learning than I was this year. At several conferences across the year I saw keynote speakers literally show and state falsehoods that a moment's searching on Google would have corrected. These were a mixture of futurists, purveyors of ‘c’ words like creativity and critical thinking, and the usual snake-oil merchants. What I did enjoy was giving a talk at the E-learning Network on this very topic, where I put forward the idea that interactive design skills will have to change in the face of new AI tech. Until we take on board the solid body of research around effortful learning, illusory learning (learners don’t actually know how they learn or how they should learn), interleaving, desirable difficulties, spaced practice, chunking and so on, we’ll be forever stuck in click-through online learning, where we simply skate across the surface. It led me to realise that almost everything we've done in online learning may now be dated and wrong.
4. Hyperbolic discounting and nudge learning
Learning is hard and suffers from its consequences lying too far in the future for learners to care. Hyperbolic discounting explains why learning is so inefficient, but it also kicks us into realising that we need to counter it with some neat techniques, such as nudge learning. I saw a great presentation on this in Scotland, where I spoke at the excellent Talent Gathering.
5. Blocked by Tom Peters
The year started all so innocently. I tweeted a link to an article I wrote many moons ago about leadership and got the usual blowback from those making money from, you guessed it, leadership workshops… one of whom praised In Search of Excellence. So I wrote another piece showing that this book and another, Good to Great, turned out to be false prophets, as much of what they said turned out to be wrong and many of the companies they heralded as exemplars went bust. More than this, I argued that the whole ‘Leadership’ industry in HR had led, eventually, to the madness of Our Great Leader, and my namesake, Donald Trump. In any case, Tom Peters of all people came back at me and, after a little rational tussle, blocked me. This was one of my favourite achievements of the year.
6. Chatting about chatbots
I did a lot of talks on chatbots this year, after being involved with Otto at Learning Pool (great to see them winning Company of the Year at the Learning Technologies Awards), building one of my own in WildFire and playing around with many others, like Woebot. Chatbots are coming of age and have many uses in learning. And bots like Google’s Duplex are glimpses into an interesting future based more on dialogue than didactic learning. My tack was that they are a natural and frictionless form of learning. We’re still coming to terms with their possibilities.
7. Why I fell out of love with Blockchain
I wrote about Blockchain, I got re-married on Blockchain, I gave talks on Blockchain, I read a lot about Blockchain… then I spoke at an event for business CEOs, where I saw a whole series of presentations by Blockchain companies and realised that it was largely vapourware, especially in education. Basically, I fell out of love with Blockchain. What no one was explaining were the downsides: Blockchain had become a bit of a ball and chain.
8. And badges…
It’s OK to change your mind on things, and in its wake I also had second thoughts on the whole ‘badges’ thing. This was a good idea that failed to stick, and the movement has run its course. I outlined the reasons for its failure here.
9. Unconscious bias my ass
The most disappointing episode of the year was the faddish rush towards this nonsense. What on earth gave HR the right to think that they could probe my unconscious with courses on ‘unconscious bias’? Of course, they can’t, and the tools they’re using are a disgrace. This is all part of the rush towards HR defending organisations AGAINST their own employees. Oh, and by the way, those ‘wellness’ programmes at work? They also turned out to be a waste of time and money.
10. Automated my home
It all started with Alexa. Over the months I’ve used it as a hub for timers (meals in the oven, Skype calls, deadlines), then for music (Amazon Music), then the lights, and finally the TV. In the kitchen we have a neat little robot that emerges on a regular basis to clean the ground floor of our house. It does its thing, then goes back to plug itself in and have a good sleep. We also have a 3D printer, which we’re using to make a drone… and that brings me to another techy topic: drones.
11. Drones
I love a bit of niche tech and got really interested in this topic. Big thanks to Rebecca, Rosa and Veronique, who allowed me to attend the brilliant E-learning Africa and see Zipline and another drone company in Rwanda (where I was bitch-slapped by a gorilla, but that, as they say, is another story). On my return I spoke about Drones for Good at the wonderful Battle of Ideas in London (listen here). My argument, outlined here, was that drones are not really about delivering pizzas and flying taxis, as that will be regulated out of existence in the developed world. However, they will fly in the developing world. Then along came the Gatwick incident…
12. Graduation
So I donned the professorial gown and soft, Luther-like hat, and was delighted to attend the graduation of hundreds of online students at the University of Derby, with my friends Julie Stone and Paul Bacsich. At the same time I helped bring Bryan Caplan across from the US to speak at Online Educa, where he explained why HE is in some trouble (mostly signalling and credential inflation) and why online is part of the answer.
13. Learning is not a circus and teachers are not clowns
The year ended with a rather odd debate at Online Educa in Berlin, around the motion that “All learning should be fun”. Now I’m as up for a laugh as the next person. And, to be fair, Elliott Masie’s defence of the proposition was laughable. Learning can be fun, but that’s not really the point. Learning needs effort. Just making things ‘fun’ has led to the sad sight of click-through online learning. It was the perfect example of experts who knew the research versus deluded sellers of mirth.
14. AI
I spent a lot of time on this in 2018 and plan to spend even more time in 2019. Why? Beneath all the superficial talk about Learning Experiences and whatever fads come through, beneath it all lies technology that is smart and has already changed the world forever. AI has changed, and will continue to change, the very nature of work. It will therefore change why we learn, what we learn and how we learn. I ended my year by winning a Learning Technologies Award with TUI (thanks Henri and Nic) and WildFire. We did something groundbreaking: produced useful learning experiences, in record time, using AI, for a company, and showed real impact.
15. Book deal
Oh, and I got a nice book deal on AI – so it’s head down in 2019.


Thursday, December 13, 2018

Learning Experience Systems – just more click-through online learning?

I have this image in my lounge: a clergyman skating, as we so often do when we think we're learning – just skating over the surface. For all the talk of Learning Experience Systems and ‘engagement’, if all you serve up are flat media experiences, no matter how short or micro, with click-through multiple choice or, worse, drag and drop, you’ll have thin learning. Simply plopping the word ‘Experience’ into the middle of the old LMS term is to rebadge, not rethink, unless we reflect on what those ‘experiences’ should be. All experience is learning, but some experiences are much more effective than others (the effortful ones).
As Mayer showed, this does not mean making things media rich; media rich is not mind rich, and often inhibits learning with unnecessary cognitive load.
Neither does it simply mean delivering flat resources. Similarly with some types of explicit gamification, where the Pavlovian rewards become ends in themselves and inhibit learning. Good gamification does, in fact, induce deep thought; collecting coins, leaderboards and other ephemera do not, as the gains are short-lived.
The way to make such systems work is to focus on effortful ‘learning’ experiences, not just media production. We know that what counts is effortful, desirable and deliberate practice.
Engagement does not mean learning. I can be wholly engaged, as I often am, in all sorts of activities – walking, having a laugh in the pub, watching a movie, attending a basketball game – but I’m learning little. Engagement so often means that edutainment stuff – all ’tainment and no edu. The self-perception of engagement is, in fact, often a poor predictor of learning. As Bjork repeatedly says, on the back of decades of research from Roediger, Karpicke, Huelser, Metcalfe and many others, “we have a flawed model of how we learn and remember”.
We tend to think that we learn just by reading, hearing and watching. When, in fact, it is other, effortful, more sophisticated practices that result in far more powerful learning. Engagement, fun, learner surveys and happy sheets have been shown to be poor measures of what we actually learn and very far from being optimal learning strategies.
Ask Traci Sitzmann, who has done the research, Sitzmann et al. (2008). Her meta-study, covering 68,245 trainees over 354 research reports, attempts to answer two questions:
Do satisfied students learn more than dissatisfied students? After controlling for pre-training knowledge, reactions accounted for only 2% of the variance in factual knowledge, 5% of the variance in skill-based knowledge and 0% of the variance in training transfer. The answer is clearly no.
Are self-assessments of knowledge accurate? Self-assessment is only moderately related to learning. Self-assessments capture motivation and satisfaction, not actual knowledge levels.
Her conclusion based on years of research, and I spoke to her and she is adamant, is that self-assessments should NOT be included in course evaluations and should NOT be used as a substitute for objective learning measures.
Open learning
It’s the effort to ‘call to mind’ that makes learning work. Even when you read, it’s the mind reflecting, making links and calling up related thoughts that makes the experience a learning experience. This is especially true online: the open mind is what makes us learn, and therefore open response is what makes us really learn in online learning.
You start with whatever learning resource, in whatever medium, you have: text (PDF, paper, book…), text and graphics (PowerPoint…), audio (podcast) or video. By all means read the text, go through the PowerPoint, listen to the podcast or watch the video. It’s what comes next that matters.
With WildFire, in addition to the creation of online learning in minutes not months, we have developed open input by learners, interpreted semantically by AI. You literally get a question and a blank box into which you can type whatever you want. This is what happens in real life – not selecting items from multiple-choice lists. Note that you are not encouraged to just retype what you read, saw or heard. The point, hence the question, is to think, reflect, retrieve and recall what you think you know.
Here’s an example, a definition of learning…
What is learning?
Learning is a lasting change in a person’s knowledge or behaviour as a result of experiences of some kind.
Next screen….

You are asked to tell us what you think learning is. It’s not easy and people take several attempts. That’s the point. You are, cognitively, digging deep, retrieving what you know and having a go. As long as you get the main points – that it is a lasting change in behaviour or knowledge through experiences – you’re home and dry. As the AI does a semantic analysis, it accepts variations on words, synonyms and different word order. You can’t cut and paste, and when you are shown the definition again, whatever part you got right is highlighted.
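To make the idea concrete, here is a toy sketch of how open-input answers might be marked for concept coverage, accepting synonyms and any word order. The synonym table, key concepts and pass threshold are illustrative assumptions of mine, not WildFire’s actual (proprietary) algorithm.

```python
# Toy open-input marker: check that an answer covers the key concepts,
# regardless of word order, and map common synonyms onto canonical terms.
import re

# Illustrative synonym table (assumption, not a real product's word list).
SYNONYMS = {
    "enduring": "lasting", "permanent": "lasting",
    "behavior": "behaviour", "conduct": "behaviour",
    "alteration": "change", "shift": "change",
}

# Concepts required for the "What is learning?" definition in the post.
KEY_CONCEPTS = {"lasting", "change", "knowledge", "behaviour", "experience"}

def normalise(text):
    """Lowercase, tokenise and map synonyms onto canonical terms."""
    words = re.findall(r"[a-z]+", text.lower())
    return {SYNONYMS.get(w, w) for w in words}

def mark(answer, required=KEY_CONCEPTS, threshold=0.8):
    """Return (passed, concepts_found); pass if enough concepts are covered."""
    found = normalise(answer) & required
    return len(found) / len(required) >= threshold, sorted(found)

passed, hits = mark(
    "A permanent shift in someone's behavior or knowledge from experience"
)
print(passed, hits)
```

A real semantic engine would go well beyond word matching (embeddings, stemming, phrase handling), but even this sketch accepts “permanent shift in behavior” as a match for “lasting change in behaviour”.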
It’s a refreshing experience in online learning, as it is so easy to click through media and multiple-choice questions thinking you have learnt. Bjork called this the ‘illusion of learning’ and it’s remarkably common. Learners are easily fooled into thinking they have mastered something when they have not.
This fundamental principle in learning, developed in research by Bjork and many others, is why we’ve developed open learning in WildFire.
Engagement is not a bad thing, but it is neither a necessary nor a sufficient condition for learning. LXP theory lacks – well, theory and research. We know a lot about how people learn, and the excessive focus on surface experience may not help. All experience leads to some learning, but that is not the point, as some experiences are better than others – and what those experiences should be is rarely understood by learners. What matters is effortful learning, not ice skating across the surface, having fun but not actually learning much. That is click-through learning.
Alliger et al. (1997). A meta-analysis of the relations among training criteria. Personnel Psychology, 50, 341-357.
Sitzmann, T. & Johnson, S. K. (2012). When is ignorance bliss? The effects of inaccurate self-assessments of knowledge on learning and attrition. Organizational Behavior and Human Decision Processes, 117, 192–207.
Sitzmann, T., Ely, K., Brown, K. G., & Bauer, K. (2010). Self-assessment of knowledge: A cognitive learning or affective measure? Academy of Management Learning and Education, 9, 169-191.
Brown, K. G., Sitzmann, T., & Bauer, K. N. (2010). Self-assessment one more time: With gratitude and an eye toward the future. Academy of Management Learning and Education, 9, 348-352
Sitzmann, T., Brown, K. G., Casper, W. J., Ely, K. & Zimmerman, R. (2008). A review and meta-analysis of the nomological network of trainee reactions. Journal of Applied Psychology, 93, 280-295.


Wednesday, December 12, 2018

Learning is not a circus and teachers are not clowns - the OEB debate

‘All learning should be fun’ was the motion at the OEB Big Debate. No one is against a bit of fun but, as an imperative for ALL learning, it’s an odd, almost ridiculous, claim – and sure enough there were some odd arguments. Elliot Masie, the purveyor of mirth, started with his usual appeal to the audience: “Let me give you another word for fun – HA HA (that’s two words, Elliot, but let’s not quibble)… turn to your neighbour and say that without one letter.” Some, like my neighbour, were genuinely puzzled. ‘HAH?’ he said. I think it’s ‘AHA’, says I. Geddit? Oh dear. Elliot wants learning to be like Broadway. I saw him a few weeks before, showing some eightball dance routine as a method for police training.
To be fair, Benjamin Doxtdator was more considered, with his arguments about subversion in education and the fact that those who design learning were debating what was good, not the learners – they were missing. But this was to miss the point. In deciding what treatments to give patients, one must appeal to research to show what works, not rely on the testimonies of patients.
Research matters
What was fun was to watch anecdote and, frankly, ‘funless’ arguments put to the sword by research. Patti Shank urged us to read Bjork, to consider the need for effort. Desirable difficulties matter, and she killed the opposition with the slow drip of research. I suddenly noticed that the audience was not laughing but attentive, listening, making the effort to understand and reflect, not just react. That’s what most learning (other than kindergarten play) is and should be. Patti Shank talked sense – research matters. Engagement and fun are proxies, and the research shows that effort trumps fun every time. Learners may like ‘fun’, but research shows that learners are often delusional about learning strategies. What matters in the end is mastery – not just the feeling that you have mastered something, but actual mastery.
On Twitter and during the audience questions, there were those who simply misread the motion, forgetting the word ‘all’. Some mistook fun for other concepts, like attention, being engrossed, gripped or immersed in a task. I have read literally thousands of books in my life and rarely chortled while reading them. Athletes learn intensely in their sports and barely register a titter. Learning requires attention, focus and effort, not a good giggle. Only those who think that ‘happy sheets’ are a true indicator of learning adhere to the nonsense that learning should be all fun. Others made non sequiturs, claiming that those who disagree that all learning should be fun think that all learning should be dull and boring. Just because I don’t think that all clothes should be pink doesn’t mean I believe they should all be black! It’s not that motivation, some fun and the affective side of learning don’t matter – just that it is pointless motivating people to embark on learning experiences if they don’t actually learn. This is not a false dichotomy between fun and learning; it is the recognition that there are optimal learning strategies.
It is this obsession that led to the excesses of gamification, with its battery of Pavlovian techniques, which mostly distract from the effort needed to learn and retain. It’s what has led to online learning being click-through – largely the presentation of text, graphics and video, with little in the way of effortful learning apart from multiple-choice options. This is why open-input, effortful learning tools like WildFire result in much higher levels of retention. When designers focus relentlessly on fun, they more often than not destroy learning. There is perhaps no greater sin than presenting adults with hundreds of screens of cartoons, speech bubbles and endless clicking, in the name of ‘fun’.
A touch of humour certainly helps raise attention but learning is not stand-up comedy. In fact, we famously forget most jokes, as they don’t fit into existing knowledge schemas. Fun can be the occasional cherry on the cake but never the whole cake.
‘Fun’, funnily enough, is a rather sad word – it’s naive and paltry; it diminishes and demeans learning, and I came away from this debate with a heavy heart. There’s an emptiness at the heart of the learning game: a refusal to accept that we know a lot about learning, that research matters. The purveyors of fun, and those who think it’s all about ‘engagement’, are serving up the sort of nonsense that creates superficial, click-through, online learning. This is the dark, hollow world that lies behind the purveyors of mirth. Learning is not a circus and teachers are not clowns.


Tuesday, December 04, 2018

What one, intensively researched, principle in learning is like tossing a grenade into common practice?

Research has given us one principle that is like tossing a grenade into common practice – interleaving. It’s counterintuitive and, if the research is right, basically contradicts almost everything we actually practice in learning.
The breakthrough research was Shea & Morgan (1979), who had students learn in blocks or through randomised tasks. Randomised learning appeared to result in better long-term retention. The experiment was repeated by Simon & Bjork (2001), but this time they asked the learners at the end of the activities how they thought they’d perform on day 2. Most thought that the blocked practice would serve them better. They were wrong. Current performance is almost always a poor indicator of later performance.
Interleaving in many contexts
Writing the same letter time after time is not as effective as mixing the letter practice up: HHHHHHIIIIIIIIIIJJJJJJJJJ is not as good as HIJHIJHIJHIJHIJHIJHIJHIJ. This is also true of conceptual and verbal skills. Rohrer & Taylor (2007) showed that maths problems are better interleaved. Although it feels as though blocked is better, interleaving was three times better! The result in this paper was so shocking that the editors of three major journals rejected it on first reading. The effect size was so great that it was hard to believe – so hard to believe that few teachers even do it.
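The difference between the two schedules is purely one of ordering, which a few lines of illustrative Python make plain (the topic names and repetition count here are arbitrary examples):

```python
# Generate blocked vs interleaved practice schedules from the same items.
from itertools import chain

def blocked(topics, reps):
    # HHH III JJJ - all repetitions of one topic before moving to the next
    return [t for t in topics for _ in range(reps)]

def interleaved(topics, reps):
    # HIJ HIJ HIJ - round-robin through the topics on every pass
    return list(chain.from_iterable([topics] * reps))

topics = ["H", "I", "J"]
print("".join(blocked(topics, 3)))      # HHHIIIJJJ
print("".join(interleaved(topics, 3)))  # HIJHIJHIJ
```

Same items, same total practice time; only the sequence changes – which is what makes the size of the effect in the research so surprising.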
Interleaved in unrelated topics
Rohrer, Dedrick & Stershic (2015) took this a stage further, taking unrelated topics in maths to compare blocked with interleaved practice. Interleaving produced better performance in both the short and the long term (30 days). William Emeny, a teacher in England, showed that interleaving is actually done by many teachers, but only in the run-up to exams – and that, he showed, was where most of the actual learning was taking place.
Interleaving in inferences
What about learning from examples – learning general skills from exposure to examples, like reading X-rays or inferring a painter’s style from exposure to many paintings by specific painters? Kornell & Bjork (2008) did the painter test: 12 paintings by each of 6 artists, then showed learners 48 new paintings. The results showed that interleaving was twice as effective as blocked training. This has been replicated in the identification of butterflies, birds, objects, voices, statistics and other domains. Once again, learners were asked what sort of instruction they thought was best. They got it wrong. In young children, three-year-olds, Vlach et al. (2008) showed that learning interleaved with play produced better performance.
So why does interleaving work?
Interleaving works because you are highlighting the ‘differences’ between things, and those relationships matter in your own mind. Blocking feels more fluent, whereas interleaving feels confusing, yet interleaving smooths out comparisons. Another problem is that learners get years and years of blocking in school. They’re actually taught bad habits, and that prevents new, fresh habits from forming or even being tried.
This is a strange thing. Interleaving, as opposed to blocked learning, feels wrong, feels disjointed, almost chaotic. Yet it is much more effective. It seems to fly in the face of your intuitions, yet it is significantly more efficient as a learning strategy. How often do we see interleaving in classrooms, homework or online learning? Hardly ever. More worryingly, we’re so obsessed with ‘student’ evaluations and perceptions that we can’t see the wood for the trees. We demand student engagement, not learning, and encourage the idea that learning is easy when it is not. When it comes to teaching, we’re slow learners.


Wednesday, November 28, 2018

Why almost everything we think about online learning may be wrong and what to do about it…

One thing that research in cognitive psychology has gifted us over the last decade or so is clear evidence that learners are delusional when it comes to judgements about their own learning. The big name in the field is Bjork, along with many other high-quality researchers, who says that learning is “quite misunderstood (by learners)… we have a flawed model of how we learn and remember”. There’s often a negative correlation between people’s judgements of their learning – what they think they have learnt and how they think they learn best – and what they’ve actually learnt and the way they can actually optimise their learning. In short, our own perceptions of learning are seriously delusional. This is why engagement, fun, learner surveys and happy sheets are such bad measures of what is actually learnt, and the enemy of optimal learning strategies.
Desirable difficulty
Most learning is illusory because it is too easy. Learning requires desirable difficulty – accomplishable but genuinely effortful tasks – for high retention to take place. This is why so much online learning fails. Clicking on faces to see speech bubbles of text, dragging and dropping labels, choosing true or false, even multiple-choice questions, rarely constitute desirable difficulty. This is click-through learning.
The solution is to provide effortful retrieval. This means moving beyond the traditional model of text/graphics punctuated by multiple-choice, towards cognitive effort, namely retrieval through open input. This effortful learning gives significant increases in long-term retention and recall. Online learning needs to adopt these techniques if it is to remain credible.
Retrieval – recalling what you need to know, Bjork (1975) – results in much higher levels of retention. Rather than read, re-read and underline, look away and try to retrieve and recall what you need to know. Rather than click on True or False or an option in a short list (MCQ), look away, think, generate, recall and come up with the answer. The key point is that research has shown that retrieval is a memory modifier that makes your memory more recallable. Counter-intuitively, retrieval is much more powerful than being presented with the information – more powerful, in other words, than the original ‘teaching’ event.
Take a learning experience that you have probably been through many, many times – the airline safety demonstration. Try to think through what you have to do in the right order – find life jacket, put over head, then what… Not easy, is it? Ah yes… inflate it through the blow tube… then there’s the whistle. No. Many choose the ‘inflate’ option, but to inflate it inside the aircraft is a BIG no-no; in fact, you pull a toggle to inflate. Airlines should set up a spot in the airport where you actually sit down and have to DO the whole thing. Next time you sit there, watch, then afterwards close your eyes and retrieve the process, step by step – that also works.
Roediger and Karpicke (2006) compared studying with retrieval testing (without feedback). One week later the retrieval-tested group did much better. They also asked the learners how much they were likely to remember in one week’s time for each method – oddly, the majority got it completely wrong.
Making errors is also a critical component of successful learning. According to Kornell, Hays and Bjork (2009), generating the wrong answer, then getting it right, leads to stronger learning, because you are activating the brain’s semantic network. Retrieval testing does better than reading or watching, as it potentiates recall. So are unsuccessful tests better than presentations? The work by Kornell (2009) shows that even unsuccessful testing is better. Retrieval testing gives you better internal feedback and works even when you get few or no correct answers. Testing even before you have access to the material, as a learning experience, also helps learning. Once again, almost bizarrely, Huelser and Metcalfe (2012) asked learners what worked best, and they were largely wrong.
From Gates (1917), who compared reading and re-reading with retrieval, to Spitzer (1939), who halted forgetting over two months with retrieval in 3,000 learners, to Roediger (2011), who got a full grade increase with retrieval techniques, and McDaniel (2011), who increased attainment in science, the evidence is clear. For a summary, and detail on the research, Bjork’s talk on the subject is excellent.
Online learning
In online learning the mechanics of this have also been researched. Duchastel & Nungester (1982) showed that although MCQs help you answer MCQs, they are poor for actual retention and recall. Kang (2007) showed that retrieval is superior to MCQs. At the really practical level, Jacoby (1978) showed that typing in retrieved learning was superior, as did McDaniel (1986) and Hirshman and Bjork (1988), who showed that even typing in some missing letters sufficed. Richland (2005) ran real-world experiments that also demonstrated efficacy.
We have the tools in Natural Language Processing and AI to do this, so technology has at last caught up with pedagogy. Let’s not plough the same furrow we’ve ploughed for the last 35 years. Time to move on.
I wrote, in a rather tongue-in-cheek manner (25 ways in which your e-learning sucks), about why I think most current e-learning is click-through and therefore low-retention eye candy. This research shows that our methods of online learning are sub-optimal. The problem we face is that immediate success often means long-term failure. More focus should be given to retrieval, NOT presentation, clicking on items and multiple-choice. We need to be presented with desirable difficulties, through partial or complete open input. This is exactly what we’ve spent the last two years building with WildFire.


Monday, November 26, 2018

Do we really need all of this ‘mentoring’ malarkey?

I’ve never had a mentor. I don’t want a mentor. I don’t much like mentoring. I know this is swimming against the tide of liberal orthodoxy, but I value liberal values more than I value fads, groupthink or orthodoxy. I don’t mind people doing it, but there are many reasons why I’m suspicious of mentoring.
1. Fictional constructs
Mentor was a character in Homer’s The Odyssey, and it is often assumed that his role was that of a guiding, experienced counsellor to Odysseus’s son and family. This is wrong. Mentor was simply an old acquaintance, ill-qualified to play a protective role to the family, and, worse, turned out to be a patsy for a hidden force, the goddess Athena. A similar tale has unfolded in recent times, with mentoring being revived on the back of late 19th-century psychoanalytic theory, where the original theory has been abandoned but the practice built upon it survives.
There is another, later work of fiction that resurrected the classical model as a source for the word ‘mentor’ in education: Fénelon’s Les Aventures de Télémaque (1699). This is a tale about limiting the excesses of a king, and it reinforced the presence of the word ‘mentor’ in French, then English. Yet Mentor, in this ponderous novel, is prone to didactic speeches about how a king should rule (aided by the aristocracy) – hardly the egalitarian text one would expect to spark a revolution in education. Interestingly, it pops up again as one of the two books given to Émile in Rousseau’s novel of the same name.
2. Psychoanalytic veneer
Mentoring came out of the psychoanalytic movement in education, through Freud and Rogers. Nothing survives of Freud’s theories on the mind, education, dreams, humour or anything else for that matter. But Rogers is different. His legacy is more pernicious, as his work has resulted in institutional practice that has hung around for decades after the core theories were abandoned. We need to learn how to abandon practice when the theories are defunct.
3. Mentoring is a one-person trap
As Homer actually showed, one person is not enough. To limit your path in work or life to one person is to be feeble when it comes to probability. Why choose one person (often that person is chosen for you) when there are lots of good people out there? It stands to reason that advice on a range of diverse topics (surely work and life are diverse) needs a range of expertise. Spread your network; speak to a variety of people. Don’t get caught in one person’s spider’s web. Mentoring in this sense is a singular trap.
4.  People, social media, books etc. are better
You don’t need a single person, you need advice and expertise. That is also to be found in a range of resources. Sure, a range of people can do the job, but the best write books. Books are cheap, so buy some of the best and get reading. You can do it where and when you want, and they’re written by the world’s best, not just the person who happens to be chosen in your organisation or a local life coach. And if you yearn for a human face, try video – TED and YouTube – they’re free! I’d take a portion of the training budget and allow people to buy from a wide reading list, rather than institute expensive mentoring programmes. Then there’s social media, a rich source of advice and guidance, provided daily. This makes people more self-reliant, rather than infantilised. Twitter also has strong benefits in CPD.
5. Absence of proof
Little (1990) warned us, on mentoring, that, “relative to the amount of pragmatic activity, the volume of empirical enquiry is small [and]... that rhetoric and action have outpaced both conceptual development and empirical warrant.” This, I fear, is not unusual in the learning world. Where such research is conducted, the results are disappointing. Mentors are often seen as important learning resources in teacher education and in HE teaching development. Empirical research shows, however, that the potential is rarely realised – see Edwards and Protheroe (2003) and Boice (1992). The results often reveal low-level ‘training’ that simply instructs novices in the ‘correct’ way to teach (Handal and Lauvas, 1988; Hart-Landsberg et al., 1992). Indeed, much mentoring has been found to be rather shallow and ineffective (Edwards, 1998).
6. Fossilised practice
Practice gets amplified and proliferates through second-rate train the trainer and teacher training courses, pushing orthodoxies long after their sell-by, even retirement, date. Mentoring has sometimes become a lazy option and alternative for hard work, effort, real learning and reflection. By all means strive to acquire knowledge, skills and competences, but don’t imagine that any of this will come through mentoring in any efficient manner.
7. Over-formalised
Mentoring is what parents, grandparents and older members of the community used to do, and do well. I’m all for the passing down of learning and wisdom, but when it gets formalised into specific people, with supposedly strong ‘mentoring’ skills, I have my doubts. By all means encourage people to share, especially those with experience, but don’t kill the human side of this with an over-formalised process.
Conclusion: get a life, not a coach
I know that many of you will feel uncomfortable with these arguments, but work and life are not playthings. It is your life and career, so don’t for one minute imagine that the HR department has the solutions you need. Human Resources is there to protect organisations from their employees – it is rarely either human or resourceful. Stay away from this stuff if you really want to remain an independent thinker.
English translation of Les Adventures de Telemaque
Boice (1992) Lessons learned about mentoring.
Edwards and Protheroe (2003) Learning to See in Classrooms: What are student teachers learning about teaching and learning while learning to teach in schools? British Educational Research Journal.
Handal and Lauvas (1988) Promoting Reflective Teaching.
Little, J.W. (1990) ‘The Mentor Phenomenon and the Social Organisation of Teaching’, in: Review of Research in Education. Washington D.C: American Educational Research Association.
Warhurst R (2003) Learning to lecture Paper presented at the British Educational Research Association Annual Conference, Heriot-Watt University, Edinburgh.


Sunday, November 18, 2018

Why is learning so hard? Hyperbolic discounting – what is it and what to do about it

Julie Dirksen knows a thing or two about learning. Well versed in the research, she is especially good at bringing behavioural psychology to the foreground. Understand learners and you understand why it is so difficult to get them to learn. So it was a pleasure seeing her speak, and speaking with her afterwards.
Her starting point is the metaphor of the elephant and its rider: the rider is the conscious, verbal, thinking brain; the elephant the automatic, emotional, visceral brain. Academically, this is Kahneman’s two systems, fast and slow, which she explains using the elephant-and-rider metaphor. It works, and is proof that you don’t have to wade through 400 pages of a quite dense book like Thinking, Fast and Slow to understand a useful theory. (An alternative is to read the eminently readable story of the research, The Undoing Project by Michael Lewis.)
Hyperbolic discounting
One cognitive bias that hits learning hard is hyperbolic discounting, a well-researched feature of behavioural economics. Given two similar rewards, humans prefer the one that arrives sooner rather than later. We are therefore said to discount the value of the later reward, and this discount increases with the length of the delay.
If the consequences of our learning are distant, we are likely to take it less seriously. Smokers don’t stop smoking just because you tell them it’s dangerous – and there’s no greater danger than death! In practice, smokers see the consequence as being some time off, so warnings about consequences don’t stop them. So it is with learning. Rewards feel distant, which is why students tend to leave study and cram just prior to exams, or write essays on the last night. They are not committed when it is likely that they won’t use their newly acquired knowledge and skills for some time, if at all. No one would watch a printer-problem video unless they had a printer problem.
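Hyperbolic discounting is commonly modelled as V = A / (1 + kD), where A is the reward, D the delay and k an individual discount rate. A quick sketch shows how rapidly a distant learning payoff shrinks – the k value here is an illustrative assumption, as real values vary from person to person:

```python
# Hyperbolic discounting: subjective value V = A / (1 + k * D).
# k = 0.1 per day is an illustrative assumption, not an empirical constant.
def discounted_value(reward, delay_days, k=0.1):
    return reward / (1 + k * delay_days)

# A payoff worth 100 today is subjectively worth far less months out.
for delay in (0, 7, 30, 180):
    print(delay, round(discounted_value(100, delay), 1))
```

With these numbers, a reward of 100 is felt as about 25 at a month’s delay and barely 5 at six months – which is roughly what the procrastinating student’s behaviour implies.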
So how do we get the learner to be a rider and not be stopped by the elephant?
Get social
Reframe learning into a more social experience, online or offline, so that learners have their peer group to compare with. If you see that others are doing things on time, you are more likely to follow than if presented with some distant consequence. Future promises of promotion, even money, have less effect than near experiences of being part of a group doing things together, or being encouraged, even peer reviewed, as encouragement and feedback engender action.
Autonomous control
Give people control over their learning as personal agency acts as an accelerant. If I feel that things are not imposed upon me, but that I have chosen to take action, then intrinsic motivation will, on the whole, work better than extrinsic motivation. Giving people the choice over what and when they learn is therefore useful.
Push to engage
Technology allows us to push motivating messages and opportunities to learners. We can nudge them into learning. Nudge theory has been used for everything from etched insects in urinals (to reduce splashes) to serious behavioural change. Differ is a learning chatbot that raises learner engagement by nudging and pushing students forward through timely reminders. We know that learners are lazy and leave things to the last minute, so why not nudge them into correcting that behaviour? Woebot is a counselling chatbot that simply pops up in the morning on Facebook Messenger. You can choose to ignore or reschedule it. It has that drip-feed effect and, as the content is good and useful, you get used to doing just a few minutes every morning.
Place in workflow
Just-in-time training, performance support and workflow learning are all terms for delivering learning when it is needed. This closes the gap between need and execution, thereby eliminating hyperbolic discounting, as there is no delay. Otto is a chatbot that sits in Slack, Messenger, Microsoft Teams or whatever social or workflow system your organisation uses. It provides a natural language interface to learning, when you need it.
Use events as catalysts
A sense of immediacy can be created by events – a merger, reorganisation, new product, new leader. All of these can engender a sense of imminence. Or manufacture your own mini-event. Several companies have implemented ‘phishing’ training by sending fake phishing emails, seeing how people react and delivering the training on the back of that event.
Recommendations
Almost everything you do online – Google, Facebook, Twitter, Instagram, Amazon and Netflix – uses recommendation engines to personalise what the system thinks you need next. Yet this is rarely used in learning, except in adaptive systems, where AI acts like a teacher, keeping you, personally, on course. These systems are not easy to build, but they do exist and are another example of AI in learning.
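The core idea behind a simple content-based recommender is easy to sketch: suggest the unseen module most similar to what the learner just completed. The module names, tag sets and similarity measure below are invented for illustration; production systems use far richer signals.

```python
# Minimal content-based recommender sketch: rank unseen modules by
# tag overlap (Jaccard similarity) with the module just completed.
# Module names and tag vectors are invented examples.

def jaccard(a, b):
    """Similarity of two tag sets: shared tags over all tags."""
    return len(a & b) / len(a | b)

MODULES = {
    "fire_safety":  {"safety", "compliance", "workplace"},
    "data_privacy": {"compliance", "data", "policy"},
    "first_aid":    {"safety", "health", "workplace"},
}

def recommend(completed, catalogue=MODULES):
    """Return the unseen module most similar to the completed one."""
    candidates = {m: tags for m, tags in catalogue.items() if m != completed}
    return max(candidates, key=lambda m: jaccard(catalogue[completed], candidates[m]))

print(recommend("fire_safety"))  # first_aid shares more tags than data_privacy
```

Real adaptive systems add collaborative signals, mastery estimates and sequencing constraints on top, but the personalisation principle is the same.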
Visual nudges
Online learning needs to pick up on contemporary UX design and use slight movement, colour changes, positioning and layout to push people into action. In WildFire we use AI to create extra links during the learning experience. These appear as you work through an idea or concept, and are highlighted if the system thinks you didn’t really get it first time. But there are lots of things you can do to nudge people forward in learning.
Calls to action
A neat combination of events as catalysts, nudge learning and calls to action, used widely in marketing, was a project by Standard Life. They used a merger with another large organisation as the catalyst, short 90-second videos as nudges, and challenges to do something in their own teams as calls to action. Use was tracked and produced great results. Calls to action are foundational in marketing, especially online marketing, where you are encouraged to contact, register, inquire or buy through a call or button. Have a look at Amazon, perhaps the most successful company in the world, built on the idea of calling to action.
Habits
Habitual learning is difficult to embed, but once adopted is a powerful motivator. Good learners are in the habit of taking notes, always having a book in their bags, reading before going to sleep and so on. Choose your habit and force yourself to do it until it becomes natural, almost unthinking. In Kahneman’s language, you must make sure that your effortful System 2 behaviours take on some of the automatic features of System 1. Or, to use the elephant-and-rider metaphor, your elephant starts to get places on its own without the rider urging it along.
Learning is one thing, getting people to learn is another. Psychologically, we’re hard-wired to delay, procrastinate, not take learning seriously and see the rewards as too far down the line to matter. We have to fight these traits and do what we can to encourage authentic and effortful learning. Make it seem as though it really does matter through all sorts of nudges: social, autonomy, push, place in workflow, events as catalysts, recommendations, visual nudges, calls to action and habits.
Lewis, M. (2017) The Undoing Project. Penguin
Kahneman, D. (2011) Thinking, Fast and Slow. Penguin
Roediger, H. McDaniel M. (2013) Make It Stick. Harvard University Press
