The idea that badges are a great gamification feature is misleading. Zsolt Olah, of Amazon, says, "It was an easy target for shallow gamification (look, here’s a lot of points to take useless courses to see yourself on the leaderboard and show off your badge) on the LMS. Folks, people don’t give blood because they get a sticker. It’s the other way." Pavlovian rewards have a limited effect on learning, which is why so much Pavlovian gamification – leaderboards, collecting badges and so on – runs out of steam. Real gamers are intrinsically motivated by the game, its reputation, their experiences of games, their peers’ views of games and so on. They do not buy and play games because of the scoring system or badges. Bad learning games or gamification techniques are often just a pale imitation of massively popular gaming.
Tuesday, February 20, 2018
I’d have loved the idea of learning badges to have worked – motivational dynamo, more fine-grained rewards and accreditation. The inconvenient truth is that the idea has failed. This is not for want of trying but a classic case of supply not matched by demand. To put it another way, we built it and they didn’t come to the party. Sure, you’ll find some localised examples of success but overall, as a significant movement, it has run its course – few are now interested.
1. Lack credibility
The main problem has been credibility. When explicit accreditation is not anchored in a major accreditation body, with its quality standards, it carries no real weight in the real world. You are up against recognised accreditation with branding, marketing, frameworks, objective assessment and longevity. Overbadging and weak badging have added to the problem. Badge projects are here today, gone tomorrow – mosquitos, not turtles.
2. Lack objectivity
A lack of objectivity, in terms of recognition in the real world, has plagued their progress. What happens when you take your badges outside your institution or course, and no one has heard of them or cares? Simply badging content is a mistake. This is about real people feeling that the reward is useful, not lapel badges. If your currency is not recognised in the currency exchange, you’re left with useless paper.
3. Motivationally suspect
They were always motivationally suspect. Extrinsic rewards should always be treated with suspicion. And there is something suspect about badges for online, but not offline, learning. You can’t slice and dice learning by mode of delivery. The ‘overjustification effect’ shows that intrinsic motivation decreases when external rewards are given merely for completing a particular task or doing minimal work. This is not to say that all extrinsic motivation is useless, only that superfluous extrinsic motivation damages learning. The failure to escape this trap is a major problem for most badge schemes.
4. Not really gamification
5. Badges don't travel
When your badges get stuck in a proprietary system, repository or e-portfolio, with little in the way of interoperability, they’re effectively imprisoned. Badges are often rendered useless by their failure to escape the bounds of their small ecosystems, technical and cultural. Mozilla have, since 2011, tried to provide a framework and structure. I applaud their efforts, but the early paper “Open Badges for Lifelong Learning” was hopelessly utopian; a more achievable vision was needed. The most successful badge system I’ve seen is in IBM – but it is in IBM, and that’s it. Badge systems tend to remain stuck and siloed inside the organisation that promotes them. Badges don’t travel well.
6. Awful branding
Another problem was branding. Making your badges look like silly clip-art stickers makes the whole thing look amateurish. For badges to work they needed serious marketing and design – Mozilla tried, but what we got was almost no marketing and sometimes comically bad design. In addition, badges always had that boy-scout, girl-guide feel – something suitable for earnest young people but not adults. Perhaps the word ‘badge’ itself was the mistake – it carries almost trivial connotations.
When people started to get badges for simply attending conferences, I got worried. The motivation for conference attendance is not always learning. It is often the extrinsic reward of travel and time off. How do you measure the usefulness of that attendance? We could ask: did you tweet out the sessions, blog and distribute your findings to your fellow employees, write a paper suggesting new implementations based on what you learnt? Badges for just turning up don’t wash with me. A real problem here is that badges often don’t match real learning and are rarely measured in terms of impact.
We need less, not more, credentialism. Badges were always a bit childish and tacky. Employers don’t ask for them, people don’t care about them, and they’ve become meaningless artefacts in systems that put the artefacts of learning above actual learning. Whether you see badges as motivational devices, credentials, actual assessments, or even evaluative tools, if they don’t catch on they’re dead in the water. In short, badges have failed.
Tuesday, February 13, 2018
Woebot is a counselling chatbot. I’m not big on mentors and counsellors, preferring the “get a life, not a coach” approach. What I liked most was the anonymity of the experience. I’m pretty sure most people don’t actually want to go to a parent, teacher, faculty member or a stranger with their problems, and would relish an anonymous service. The clinical paper on Woebot suggests that this is the case. So I gave it a go – for research purposes only, you understand…
It started with a series of friendly exchanges, where you have little choice in options, but that’s fine – it sets the tone. A couple of things I liked about the first exchanges:
It sorted out a technical issue seamlessly – rerouting me to messenger.com – that was nice. It also linked to the Stanford clinical trial on the bot, comparing it with a non-bot intervention – although the sample size is small, it’s impressive. It is also honest about the limitations of a bot – it doesn’t overpromise.
You do get sucked into thinking it has human agency, even though it’s just coding, pre-scripting and maths. What’s strange is that most of the exchanges are single button presses – not dialogue at all, but quite interesting, as they flip the counsellor/counselled roles around. You find yourself giving the counsellor-style prompts, such as ‘How?’, ‘Tell me more…’, ‘Oh’, ‘Sure’, ‘No doubt’, ‘Absolutely’.
Emojis are dropped in for variety and are useful (at last), as they really are asking for an emotional response – that’s interesting, and not easy to do F2F. The unlocked padlock emoji is nice, as is the little sapling for hope and progress – sounds hokey, but it’s not.
What’s nice is that the interface is so simple and natural. You focus on what’s being said and asked and in this context, as you’re asked to think and reflect on your own feelings and behaviour - that’s useful. Dialogue is natural, easy and seems so very human.
The up-front promise of absolute anonymity is also good and I can see why this would appeal to people (I’d imagine the majority) who want help but are too shy or embarrassed to come forward. To be honest, I don’t want some random person counselling me… I want the distance.
The first lesson from woebot was to avoid the language of extremes – “all good”, “all bad”, “always” and to adopt a more measured language. All good… ooops!
One small thought here, I’d have liked this as audio. I’m working with a tool that allows learners to input answers by voice – it’s neat.
The first session was 74 small exchanges, then it said “Bye. Speak again tomorrow.”
It prompted me at 10.53, when I was active on Facebook, and asked politely if I wanted to continue. This time we’re onto multiple-choice questions about ‘all or nothing thinking’ and ‘should’ statements. I quite like the upbeat tone and lively feedback – it seems appropriate in a session like this. I’m typing in more, rather than accepting canned responses – it feels more like dialogue. Just 5 mins – short but sweet. I could get used to this.
I had two days in London, so no time to do anything, but Woebot was patient.
“No worries, talk soon”. You have the option of continuing, rescheduling or waiting on the daily prompt. This, of course, is one of the great advantages of online counselling, indeed online anything, it’s 365/24/7. You do it when you feel like doing it, not when an expensive counsellor timetables you into their practice.
It starts by asking me about my mood (emoji input from me), then gives me options: ‘Work on stuff’, ‘Teach me’ or ‘Curated videos’. Not sure about these – I don’t want to ‘work on stuff’, or want a ‘teacher’ or ‘curator’ – the first really dissonant point. However, I fancied a video…
OK then… here are some of my favs:
1. Emotion Stress and Health (Crashcourse)
2. David Burns, MD TED
3. A video to help with sleep
4. Language is Important (featuring Me!)
5. Overcoming negative voices
6. Don't trust your feelings!
8. The world's most unsatisfying video
9. Funny cats!
10. The importance of flattery
This led me, weirdly, to Reggie Watts – I know him – hilarious and talented, but this is a tangent… maybe not – I felt like some fun…
Actually, Reggie will really mess with your mind… he’s way out there… so I’m not sure how suitable that was for someone who really is on the edge…
Now a quick reflection: a real, human therapist can’t easily do this – direct you to something really, really interesting – you’re sort of stuck in dialogue.
Woebot says – see ya tomorrow – odd session – but fun.
The whole thing is very upbeat and chatty… Then it came up with SMART objectives – getting a bit jargonish – not sure about this. It actually popped in a joke today – quite funny, actually. SMART objectives – really? Getting a sense of CBT being a bit flakey – a bag of bad management-technique marbles.
That was good - tracking my mood…
Oh no it’s on to ‘mindfulness’ – but in for a penny, in for a pound of bullshit…
“Mindfulness is the opposite of mindfulness,” it says, breaking its own earlier advice not to fall for the language of extremes… I tried disagreeing with Woebot here, but it was having none of it – clearly not listening; in short, not mindful.
Now a breathing exercise – 10 mindful breaths.
A long quiz – not sure about this – far too long.
Feedback – “Your greatest strength is your love of learning! You are just like Hermione Granger from ‘Harry Potter’”
That was hopeless – trite and I hate Harry Potter….
Got a bit technical with ‘should statements’ – not so sure that this area of CBT is entirely clear – seems a bit simplistically linguistic.
It asked me to talk about labels I use about myself – a reasonable question – and promised research tomorrow. I didn’t like the way it cut this short – it should allow me to go on if I want.
I think I prefer chatbots on-demand, like Replika, which you just tap on your phone to speak to. Replika is famous for teasing out the most intimate of thoughts from its 1.5 million users. It uses ‘cold-reading’ techniques from magicians, who claim to read minds.
Ellie is another, created for DARPA. It was designed to help doctors at military hospitals detect post-traumatic stress disorder, depression and other mental illnesses in veterans returning from war, but is not meant to provide actual therapy or replace a therapist. There is good evidence that people are more likely to open up to a bot than a person.
Today’s session is on adopting a Growth Mindset. It’s good to see something a little more solid, as it reduces my general skepticism about therapeutic techniques, which can seem a mixed bag of populist techniques almost thrown together…
Woebot wants to tell me a story to explain, and I say yes… A story about Woebot being told it was smart, believing it was smart, but not really being smart. This led to the wrong mindset – unable to cope with setbacks and failure. Fixed mindsets are bad, so open yourself up to always learning and developing – be more open and fluid in your thinking, be more accepting of setbacks and mistakes, and get out of polarised ‘smart v stupid’ labels. It then gave a link to a Carol Dweck video – these video links are good. Good session.
It has its limitations and oddities, but it’s good to chat to something that doesn’t judge you and has a few surprises up its sleeve. Woebot is a bit of fun; then again, I don’t feel I’m in need of help – many do. If I found it interesting, they are far more likely to get something out of the experience. You always have the chance of accepting, rescheduling or saying no to Woebot – which is useful. I’m often too busy or not in the mood for therapy, but the fact that it is ‘pushed’ out to you is a real plus. I rather like its daily prompts – a bit reassuring and a bit of fun. Try it – you just might learn something – even about yourself.
Thursday, February 01, 2018
Healthcare is a complex business: so many things to learn, so much new knowledge to constantly master. The sector is awash with documents, from compliance to clinical guidelines, all with oodles of detail and never enough time to train, retain and recall. As it is patients’ health, even lives, that matter, there’s little room for error. Yet so much training is still delivered via lectures and PowerPoint in rooms full of professionals who are badly needed on the front line. There must be a better way to deliver this regulatory and clinical knowledge.
Online learning is part of the solution, but traditional online learning takes months to produce, and even one 50-page clinical guideline is often prohibitively expensive. With this in mind, rather than use tools where most of the budget goes on graphics and not interaction, AI is producing tools that do this for you. One of those tools is WildFire, a service that creates high-retention online learning in minutes, not months, at a fraction of previous costs.
So far we’ve delivered a lot of content to a range of organisations, from pharmaceutical companies and a Royal College to the NHS. The content has originated from a wide range of source formats.
With a modest amount of preparation, you take the text files (or automatically created transcripts from podcasts and video) and cut and paste them into WildFire, which identifies what it thinks are the main learning points. Taking our lead from recent research in cognitive science, well summarised in Make It Stick, we focus not on multiple-choice questions (see weaknesses here) but on open input, even voice, if desired. Open input is superior to MCQs, as it results in better retention and recall.
Note that healthcare documents are often highly regulated, and the fact that we take the original document means we are not breaking that covenant. It also means almost no friction between designers and subject matter experts. The content has already been signed-off – we use that content in an unadulterated form.
The learner literally has to type in the correct answers, identified by our AI engine. But we do much more: we also get the AI to identify links out to supplementary content, automatically. This works well in healthcare, as the vocabulary, definitions and concepts can be daunting.
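As an illustration only (not WildFire’s actual implementation – the function names and sample guideline text here are hypothetical), the open-input mechanic is essentially a cloze exercise: key terms identified in the source text are blanked out, and the learner must retrieve and type them rather than recognise them in a list.

```python
import re

def make_cloze(text, key_terms):
    """Blank out each key term, producing (question, answer) pairs.
    In practice the key terms would come from the AI extraction step;
    here they are simply supplied by hand."""
    items = []
    for term in key_terms:
        question = re.sub(re.escape(term), "_" * len(term), text, flags=re.IGNORECASE)
        items.append((question, term))
    return items

def check_answer(typed, answer):
    """Open input: the learner must type the term, not pick it from options."""
    return typed.strip().lower() == answer.lower()

# Hypothetical one-line 'guideline' for demonstration
guideline = "Sepsis requires immediate antibiotics within one hour."
items = make_cloze(guideline, ["sepsis", "antibiotics"])
print(items[0][0])                           # "______ requires immediate antibiotics within one hour."
print(check_answer("Sepsis", items[0][1]))   # True
```

The point of the design is the effortful retrieval: recalling and typing the term is what drives the retention gains over multiple-choice recognition.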
We break the content down into small 10-15 minute learning experiences. This is necessary for focus as well as frequency of formative assessment. So a large compliance or clinical guideline document, such as a NICE Guideline, can be broken down meaningfully and accessed, as and when needed.
At the end of each pass through one of these short modules, your knowledge is assessed as Green (known), Amber (nearly known) or Red (not known). You must repeat the Ambers and Reds until you reach 100% competence. This matters in healthcare: getting 70% is fine, but the other 30% can kill.
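The repeat-until-mastery loop described above can be sketched as follows (a hedged sketch, not WildFire’s code – the Amber threshold is an illustrative assumption):

```python
def rag_status(score, amber_threshold=0.7):
    """Classify a module score (0.0-1.0) as Green (known), Amber (nearly known)
    or Red (not known). Only a perfect score counts as Green."""
    if score >= 1.0:
        return "Green"
    return "Amber" if score >= amber_threshold else "Red"  # threshold is illustrative

def modules_to_repeat(scores):
    """Ambers and Reds must be repeated until everything is Green (100%)."""
    return [module for module, score in scores.items() if rag_status(score) != "Green"]

scores = {"Module 1": 1.0, "Module 2": 0.8, "Module 3": 0.5}
print(modules_to_repeat(scores))  # ['Module 2', 'Module 3']
```

The design choice is the hard Green cut-off: unlike a conventional 70% pass mark, nothing short of full competence releases the learner from the loop.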
We don’t stop there. At the end of each module you can add curated content (again using AI) by searching for content directly related to the module at hand, from the selected learning points. This guided curation increases relevance. This is the stuff that you could know, as opposed to the stuff you should know.
Types of content
This is about moving from reading to retention. One clinical guideline may be intended for many audiences: clinicians, various healthcare professionals, carers, even patients. Updates can be delivered separately when they are published. In general, WildFire has been used for:
· Peer-reviewed medical papers
· Royal College clinical Guidelines
· NICE Guidelines
· Clinician in charge of trial podcasts
· Question & answer session with experts
· Clinician in charge of trial video
· Nurse training videos
· Patient videos
· Training PowerPoints
· Process documents
· Compliance documents
· Sales processes
· Lots more….
What matters most is not that this learning content is useful but how it is used. We have delivered online learning prior to workshops and seminars, so that expensive F2F training can benefit from everyone being brought up to speed on the basic knowledge and vocabulary. Just as important is the post-F2F experience of reinforcement and revision for exams, new jobs and so on. The content is far more successful when you know the context for delivery.
A full trial looking at speed of production, ease of use and learning efficacy has been done, and is available on request. So, if you have good assets that are not being used for learning, WildFire offers a way to turn them into effortful, high-retention online learning, in minutes not months. To find out more or ask for a demo, see here.