Tuesday, February 12, 2019

What is ‘adaptive’ learning?

Personalised ‘adaptive’ learning came top of this 2019 survey in L&D. Having spent a few years involved with an adaptive learning company, delivering real adaption to real learners at scale, I thought I’d try to explain what it is and offer a taxonomy of adaptive learning. The problem is that the word has been applied to many things, from simple pre-test assessment to full-blown algorithmic and machine learning adaption, and much in between.
In essence it means adapting the online experience to the individual’s needs as they learn, in the way a personal tutor would adapt. The aim is to provide what a good teacher provides: a learning experience tailored to your needs as an individual learner.
Benjamin Bloom, best known for his taxonomy of learning, wrote a now famous paper, The 2 Sigma Problem, which compared the straight lecture, the lecture with formative feedback (mastery learning) and one-to-one tuition. It is a landmark in adaptive learning. Taking the ‘straight lecture’ as the control, he found that students taught with formative feedback performed about one standard deviation above the control mean (the 84th percentile), and students given one-to-one tuition an astonishing two standard deviations above it (the 98th percentile). Google’s Peter Norvig famously said that if you read only one paper to support online learning, this is it. In other words, the increase in efficacy for tailored one-to-one tuition, because of the increase in on-task learning, is huge. This paper deserves to be read by anyone looking to improve the efficacy of learning, as it shows hugely significant improvements from simply altering the way teachers interact with learners. Online learning has to date mostly delivered fairly linear, non-adaptive experiences, whether through self-paced structured learning, scenario-based learning, simulations or informal learning. But we are now in the position of having technology, especially AI, that can deliver what Bloom called ‘one-to-one learning’.
Adaption can be many things, but at the heart of the process is a decision to present something to the learner based on what the system knows about the learner, their learning or the context.

Pre-course adaptive
You can adapt a learning journey at the macro level, recommending skills, courses, even careers based on your individual needs.
‘Pre-test’ the learner to create a prior profile before starting the course, then present relevant content. The adaptive software makes a decision based on data specific to that individual. You may start with personal data, such as educational background, competence in previous courses and so on. This is a highly deterministic approach with limited personalisation and learning benefits, but it may prevent many from taking unnecessary courses.
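A minimal sketch of how such a pre-test decision might work. The module names and the 0.8 mastery threshold are illustrative assumptions, not taken from any particular product:

```python
# Pre-test adaptation: modules the learner already masters are skipped.
# Module names and the 0.8 mastery threshold are illustrative.

MASTERY_THRESHOLD = 0.8

def build_course_plan(pretest_scores):
    """pretest_scores maps module name -> fraction correct on the pre-test;
    returns only the modules still worth studying."""
    return [module for module, score in pretest_scores.items()
            if score < MASTERY_THRESHOLD]

scores = {"basics": 0.9, "intermediate": 0.5, "advanced": 0.1}
print(build_course_plan(scores))  # -> ['intermediate', 'advanced']
```

The point of the sketch is the deterministic, one-shot nature of the decision: it happens once, before the course, and never revisits the profile.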
Allow learners to ‘test-out’ at points in the course to save them time on progression. This short-circuits unnecessary work but has limited benefits in terms of varied learning for individuals.
Ask or test the learner for their learning style or media preference. Unfortunately, research has shown that learning styles are a false construct, they do not exist, and catering to them makes no difference to learning outcomes. Personality type is another, although one must be careful with poorly validated outputs from the likes of Myers-Briggs; the OCEAN model is much better validated. One can also use learner opinions, although this too is fraught with danger. Learners are often quite mistaken, not only about what they have learnt but also about optimal strategies for learning. So, it is possible to use all sorts of personal data to determine how and what someone should be taught, but one has to be very, very careful.

Within-course adaptive
Micro-adaptive courses adjust frequently during a course, determining different routes based on the learner’s preferences, what the learner has done, or specially designed algorithms. A lot of adaptive software within courses uses re-sequencing. The idea is that most learning goes wrong when things are presented that are either too easy, too hard or not relevant for the learner at that moment. One can use the idea of ‘desirable difficulty’ here to deliver a learning experience that is challenging enough to keep the learner driving forward.
Decisions within a course may be determined by user choices or assessed preferences, although there is little evidence that this works.
Decisions may be based on a rule or set of rules: at its simplest a conditional if… then… decision, but more often a sequence of rules that determines the learner’s progress.
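A rule-based sequence of this kind can be sketched as an ordered list of condition/destination pairs, checked top to bottom, with the first match deciding the route. The rules and thresholds here are illustrative assumptions:

```python
# Rule-based sequencing: ordered (condition, destination) pairs;
# the first rule that matches the learner's state decides the route.

def next_step(learner, rules, default="continue"):
    for condition, destination in rules:
        if condition(learner):
            return destination
    return default

rules = [
    (lambda l: l["last_score"] < 0.5, "remedial_review"),
    (lambda l: l["last_score"] > 0.9, "stretch_content"),
]

print(next_step({"last_score": 0.3}, rules))  # -> remedial_review
print(next_step({"last_score": 0.7}, rules))  # -> continue
```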
It is worth introducing AI at this point, as it is having a profound effect on all areas of human endeavour. It is inevitable, in my view, that this will also happen in the learning game. Adaptive learning is how the large tech companies deliver your timeline on Facebook and Twitter, sell to you on Amazon and get you to watch things on Netflix. They use an array of techniques based on the data they gather, statistics, data mining and AI techniques, to improve the delivery of their service to you as an individual. Evidence that AI and adaptive techniques will work in learning, especially in adaption, is there on every device, in almost every service we use online. Education is just a bit of a slow learner.
Decisions may be based simply on what the system thinks your level of capability is at that moment, based on formative assessment and other factors. The regular testing of learners not only improves retention; it also gathers useful data about what the system knows about the learner. Failure is not a problem here. Indeed, evidence suggests that making mistakes may be critical to good learning strategies.
Decisions within a course can use an algorithm with more complex data needs. This provides a much more powerful method for dynamic decision making. At this more fine-grained level, every screen can be regarded as a fresh adaption at that specific point in the course.
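One way such an algorithm might work, sketched here with an Elo-style rating update (an assumption on my part; the post names no specific algorithm), is to keep a running ability estimate, nudge it after each answer, and pick the item whose difficulty best matches it, a crude form of ‘desirable difficulty’:

```python
import math

# Toy ability model in the Elo style: each answer nudges a running
# ability estimate, and the next item is chosen to match it.
# The k-factor and the difficulty scale are illustrative assumptions.

def expected_score(ability, difficulty):
    # Probability of a correct answer under a logistic model.
    return 1.0 / (1.0 + math.exp(difficulty - ability))

def update_ability(ability, difficulty, correct, k=0.4):
    # Move the estimate towards the observed outcome.
    outcome = 1.0 if correct else 0.0
    return ability + k * (outcome - expected_score(ability, difficulty))

def pick_next_item(ability, difficulties):
    # 'Desirable difficulty': the item closest to the current estimate.
    return min(difficulties, key=lambda d: abs(d - ability))

ability = 0.0
ability = update_ability(ability, 0.0, correct=True)  # estimate rises to 0.2
print(pick_next_item(ability, [-1.0, 0.0, 1.0]))      # -> 0.0
```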
Machine learning adaption
AI techniques can, of course, be used in systems that learn and improve as they go. Such systems are often trained on data at the start, then continue to learn from the data they gather in use. The more learners use the system, the better it becomes.
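As a toy illustration of a system that improves with use, here is an online logistic-regression-style update: the model starts from initial weights and nudges them after every new interaction. The features (e.g. prior score, time on task) and the learning rate are illustrative assumptions:

```python
import math

# Toy online learner: weights initialised (or fitted) up front, then
# updated after every interaction so predictions improve with use.

def predict(weights, features):
    # Probability the learner answers the next item correctly.
    z = sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

def online_update(weights, features, outcome, lr=0.1):
    # One stochastic-gradient step on log loss (outcome is 0 or 1).
    error = outcome - predict(weights, features)
    return [w + lr * error * x for w, x in zip(weights, features)]

weights = [0.0, 0.0]  # e.g. [prior score, time on task]
for features, outcome in [([1.0, 0.5], 1), ([0.2, 0.1], 0)]:
    weights = online_update(weights, features, outcome)
```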
Confidence adaption
Another measure, common in adaptive systems, is confidence. You may be asked a question, then also asked how confident you are in your answer.
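A sketch of how correctness and stated confidence might be combined to choose what happens next. The four categories and the actions attached to them are illustrative assumptions:

```python
# Combine correctness with stated confidence to choose the next step.

def next_action(correct, confident):
    if correct and confident:
        return "move_on"        # likely genuine mastery
    if correct and not confident:
        return "reinforce"      # possibly a lucky guess; fragile knowledge
    if not correct and confident:
        return "remediate_now"  # confidently wrong: the riskiest state
    return "reteach"            # the learner knows they don't know

print(next_action(correct=False, confident=True))  # -> remediate_now
```

The interesting case is the confidently wrong learner, who would sail through a system that looked at correctness alone.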
Learning theory 
Good learning theory can also be baked into the algorithms, such as retrieval practice, interleaving and spaced practice. Care can be taken over cognitive load, and even personalised performance support can be provided, adapting to an individual’s availability and schedule. Duolingo is sensitive to these needs and provides spaced practice, aware of the fact that you may not have done anything recently and forgotten things. Embodying good learning theory and practice may be what is needed to introduce often counterintuitive methods into teaching that are resisted by human teachers.
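Spaced practice of this kind can be sketched in the Leitner style: an item recalled correctly moves to a longer review interval, and a forgotten one drops back to the start. The interval lengths (in days) are illustrative assumptions, not Duolingo's actual schedule:

```python
# Leitner-style spaced practice: recalled items are promoted to longer
# intervals; forgotten items restart the schedule.

INTERVALS = [1, 3, 7, 16, 35]  # days between reviews, per box

def review(box, recalled):
    """Return (new_box, days_until_next_review) for an item in `box`."""
    if recalled:
        box = min(box + 1, len(INTERVALS) - 1)
    else:
        box = 0  # forgotten: start the schedule again
    return box, INTERVALS[box]

box = 0
box, days = review(box, recalled=True)   # promoted: next review in 3 days
box, days = review(box, recalled=False)  # forgotten: back to a 1-day gap
print(box, days)  # -> 0 1
```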

Across courses adaptive
Aggregated data
Aggregated data from a learner’s performance on one or more previous courses can be used, as can aggregated data from all students who have taken the course. One has to be careful here, as one cohort may have started at a different level of competence than another. There may also be differences in other skills, such as reading comprehension, background knowledge, English as a second language and so on.
Adaptive across curricula
Adaptive software can be applied within a course, across a set of courses but also across an entire curriculum. The idea is that personalisation becomes more targeted, the more you use the system and that competences identified earlier may help determine later sequencing.

Post-course adaptive
Adaptive assessment systems
There’s also adaptive assessment, where test items are presented based on your performance on previous questions. Such tests often start with a medium-difficulty item, then select harder or easier items as the learner progresses.
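The ‘harder or easier’ logic can be sketched as a simple staircase procedure, a far cruder cousin of the item-response-theory methods real adaptive tests use. The item bank (ordered easy to hard) is illustrative:

```python
# A simple "staircase" adaptive test: begin with a medium-difficulty
# item, step up after a correct answer, down after an incorrect one.

def run_test(item_bank, answers_correct):
    """item_bank is ordered easy -> hard; answers_correct is a sequence
    of booleans, one per presented item."""
    level = len(item_bank) // 2  # start in the middle of the bank
    presented = []
    for correct in answers_correct:
        presented.append(item_bank[level])
        if correct:
            level = min(level + 1, len(item_bank) - 1)
        else:
            level = max(level - 1, 0)
    return presented

bank = ["very easy", "easy", "medium", "hard", "very hard"]
print(run_test(bank, [True, True, False]))  # -> ['medium', 'hard', 'very hard']
```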
Memory retention systems
Some adaptive systems focus on memory retrieval, retention and recall. They present content, often in a spaced-practice pattern and repeat, remediate and retest to increase retention. These can be powerful systems for the consolidation of learning.
Performance support adaption
Moving beyond courses to performance support, delivering learning when you need it, is another form of adaptive delivery that can be sensitive to your individual needs as well as context. These have been delivered within the workflow, often embedded in social communications systems, sometimes as chatbots.

There are many forms of adaptive learning, in terms of the points of intervention, the basis of adaption, the technology and the purpose. If you want to experience one that is accessible and free, try Duolingo, with 200 million registered users, where structured topics are introduced alongside basic grammar.


Friday, February 08, 2019

Why being more digital makes us more human

The more digital your life, the more human you become. It sounds like a contradiction, but I think we’re reaching a point where technology is becoming less visible, if not invisible. This means that we can concentrate on ourselves and others, while taking advantage of technology, without its physical and intrusive presence. Unpopular view, I know, but it’s one I hold. I honestly believe that technology frees us from the tyranny of time, space and labour.
My daily life is made easier
From the moment I wake up to the moment I fall asleep, technology is an all-pervasive part of my life. I get woken by an Alexa alarm playing the radio and simply say ‘Stop’ when I’m ready. My robot cleaner does all our floor cleaning on a schedule, returning to base when finished. I don’t even have to be there when it happens. My heating is controlled by a system that learns my pattern of needs and adapts accordingly. I switch all of my lights off with a simple word command when I go to bed. But that’s just the start…
My work is made easier
My workspace is my laptop and I can, and do, work anywhere. I work all over the world and my workplace and markets are online. Even the physical dimension of my work is made easy by tech. I book parking, flights and hotels online. I sail through Gatwick with the electronic boarding pass on my mobile, fly in an aircraft that almost invariably flies and lands on autopilot (safer), and return by walking almost straight through the gates thanks to face recognition. The only problems I have are when some ill-trained goon steps in, as at LAX airport, where my wife was thrown out of the US for having a Syrian stamp in her passport (he didn’t know the rules). It has never been easier or cheaper to experience other countries and cultures.
The majority of my work meetings are on Zoom or Skype, for free. I set alarms on Alexa for most of these (4 minutes before they’re due). I use Alexa for VAT calculations. I use Slack with my developers who work remotely. I build my product online and deliver it online, to anywhere in the world with an internet connection. I invoice electronically and get paid electronically. I work from home, and see commuting, packed trains, standing in tubes, office blocks and cubicle offices as profoundly inhuman activities.
My social circle is hugely human
As a long-time social media user, I don’t buy the echo-chamber theory, that I live in a bubble. Believe me, my social media contacts are a feisty and varied bunch. Simple maths shows that, almost by definition, the larger the number of people we are in contact with, the more varied they will be.
I have a group of friends on Facebook, who are mostly people I know face-to-face. Some I only know face-to-face because I met them on Facebook. Others are people I lost touch with but whose friendship I resurrected decades later. Another ripple is the 9,700 followers, and people I follow, on Twitter – an invaluable source of professional and general information, mainly through the links it provides. Another ripple is YouTube, where my TED talk got 55,000 views, and others nearly 30,000. Then there’s the next, even wider ripple, my blog with 4.8 million pageviews from countries all over the world, the majority outside the UK. Social media has freed me from the tyranny of distance and connected me with an enormous number of people around the world.
Social media has allowed me to meet many more real people than I would have had it not existed, and put me back in touch with people who have enriched my life through those resurrected encounters and friendships. It has enabled invitations from all over the world to speak, over the years, to hundreds of thousands of people. I travel much more because of technology. Even the real world of travel, whether it’s finding a location when I’m walking or driving, has been revolutionised by access to GPS and online maps.
I spent my early childhood in small Scottish mining towns where the only social contact I had was a few school friends and a single pen-pal. No one will convince me that those were better days. My sons have friends all over the world, contact with their relatives, almost all of whom live in another country, and access to people, news and sources beyond my imagination.
My choices are greater
Almost any piece of music I want to play is available by asking Alexa; most books I can buy and have delivered within a day; most movies and TV programmes are available on demand. I find anything I want on TV, as I have a smart TV that responds to voice. I can get any major news source at the touch of a button; even paid-for sources are cheap and accessible, real people writing real stuff. Academic articles are a few keystrokes or a voice command away. These are not only choices of media; they carry over into real life. Attendance at live music events has gone through the roof. Online dating gives everyone a wider choice of options. Online is rarely just online – it leads to offline events and contact with other people.
My learning is easier
If I want to know something, I ask Google Assistant or Alexa. I haven’t been in a library for years. If I want to explore something in depth, I have the resources and MOOCs available for free. I count myself a lifelong learner, but I haven’t been on a ‘course’ for 35 years. We’re seeing people learn more independently, in the workflow. Jane Hart, who tracks what people actually use for learning, shows that the tools are not traditional training tools; they are YouTube and Twitter. Learner behaviour is driven more by the individual than by trainers. What’s needed is support for this. Smart technology can get to know you, give you suggestions, and oil the wheels of this access to timely and relevant learning.
My puzzlement is over those who see technology as something that destroys humanity. This is not to say that technology does not have a bad side; it usually does. I don’t drive, have never been behind the wheel of a car in my life, and know that 1.3 million people a year die in car crashes. Yet most people continue to drive. The surest solution to this problem is self-driving vehicles. To those on social media who spend so much time saying how evil it is: we all have our choices. If you don’t like social media, don’t use it, just as I don’t drive. On the whole, technology frees the self, frees us from the tyranny of time, space and numbers.
Technology is increasingly invisible. Interfaces are becoming more natural through touch and now voice. AI takes the heavy lifting away from all sorts of tasks. Technology is no longer about devices but about smart services. With voice and the IoT, we will find ourselves in a world where technology simply solves problems behind the scenes. Some of it happens when I’m not there, below the radar. I like that. Some is light touch, almost frictionless, like Alexa switching off my lights. Other technology saves me a ton of time, like online meetings and business processes. Above all, it’s the people side I like. The more technology I use, the more human my life becomes.


Tuesday, February 05, 2019

2019 predictions in L&D... some surprising disappearances...

Great survey from my friend and namesake Donald Taylor. We are sometimes confused (in both senses of the word), but when it comes to what’s hot in workplace L&D in 2019, he’s the go-to man. This is the 6th year of his survey, with nearly 2,000 professionals casting 5,332 votes from 92 countries.

Top three stars

  Personalisation/adaptive learning (1)
  Artificial Intelligence (2)
  Learning Analytics (3)
One could argue that all three of these top spots have been taken by AI. Sure, there are aspects of personalised learning and analytics that are not AI, but it’s there, underlying all three top spots. I have spent the last four years saying that artificial intelligence is the major shift in learning technologies, with a post in 2014 titled ‘My tech prediction for 2015 - two small letters… AI’. AI is changing the very nature of work, so it is ridiculous to imagine that it will not also change why, what and how we learn. Having started this journey in AI many years ago, four years ago I made an investment in an adaptive learning company, started my own AI company, WildFire, and began talking about this at conferences all over the world. To ignore this is to ignore reality, and arguably the most important technology shift we’ve seen since the invention of print.

Three newbies
  Microlearning (5)
  Learning Experience Platforms (6)
  Performance support (11)
I’ve grouped these together as they show an interesting shift in thinking towards more dynamic delivery of learning. I’d link them to the top three, as chatbots and other forms of smart AI delivery are helping them reach learners in the workflow. My fear is that we’ll get a fair bit of puff, as people replace the M with an X and deliver the same old stuff.

Three media

  Virtual/augmented reality (7)  
  Mobile delivery (8)
  Video (13)
Characterised by the fact that they’re hardware- and media-defined, they’re here to stay. VR/AR is gaining ground, as I thought it would, and we have, at last, a way to deliver learning by doing. Mobile is, of course, everywhere, and video is coming of age as we see it better integrated into learning.

Three business topics

  Consulting more deeply with the business (9)
  Showing value (10)
  Developing the L&D function (15)
Although all three dropped 5, 4 and 3 places respectively, they’re still in respectable positions, and it’s good to see the profession trying to keep business relevance and professionalism on the table. I’d like to see more attention to research and evidence, but we’re getting there.

Three bags full

  Collaborative learning (4)
  Neuroscience/cognitive science (12)
  Curation (14)
Collaborative learning is pretty solid, and it’s good to see that the science of learning is still in there. I still find it shocking that many practitioners have no idea what science says about learning and online learning. Lastly, curation, a bit of an oddball this one, but it’s here.

Three goners

Gamification seems to have shot its bolt and disappeared. I think we got fed up with the weak side of gamification, playing Pavlov with learners, so it seems to have run its course. MOOCs have drifted away, more education than L&D; the numbers taking vocational MOOCs are phenomenal, but this is not the world of L&D, it is the world of learners (oh the irony). Badges have also gone. That’s a shame, but I too changed my mind on these and they seem to have had their day.
Once again, a great insight into how people are thinking. Over the years this has been a pretty good guide to what’s rising, staying around and falling. Well done to Donald Taylor and his team.


Saturday, February 02, 2019

Does ‘Design thinking’ lead to bad learning design?

Fads come and go, and ‘Design Thinking’ seems to be one on the rise at the moment. It’s a process with lots of variants but, in the talks I’ve seen on the subject, and the results I’ve seen emerge from the process, I’m not wholly convinced. The problem is that we may well need less ‘design’ and more ‘thinking’. The combination is likely to dumb down the learning in favour of superficial design. Imagine applying this theory to medicine. You wouldn’t get far by simply asking patients what they need to cure their problems; you need a growing body of good research, tried and tested methods, and expertise. So let’s break the Design Thinking process down to see how it works in practice and examine the steps one by one.
Empathy for the learner is an obvious virtue, but what exactly does that mean? For years, in practice, this meant Learning Styles. For many it still does: being sensitive to learners’ differences, diversity and needs in terms of preferences. This, of course, has been a disastrous waste of time, as research has shown. Other faddish outcomes over-sensitive to supposed learner needs have been Myers-Briggs, NLP and no end of faddish ideas about what we ‘think’ learners need, rather than what research tells us they actually benefit from.
Research in cognitive psychology has given us clear evidence that learners are often mistaken when it comes to judgements about their own learning. Bjork, along with many other high-quality researchers, has shown that learning is “quite misunderstood (by learners)…. we have a flawed model of how we learn and remember”. There’s often a negative correlation between people’s judgements of their learning, what they think they have learnt and how they think they learn best, and what they’ve ‘actually’ learnt and the way they can ‘actually’ optimise their learning. In short, our own perceptions of learning are seriously delusional. This is why engagement, fun, learner surveys and happy sheets are such bad measures of what is actually learnt and the enemy of optimal learning strategies. In short, empathy and asking learners what they want can seriously damage design.
In truth, a good needs analysis, including a thorough understanding of your target audience, is not bettered by calling it ‘empathy’. That is simply replacing analysis with an abstract word to make it sound more in tune with the times.
Identifying learner needs and problems has led to a ton of wasteful energy spent on slicing them up into digital natives/immigrants and personas that often average out differentiation and personalisation. The solution is not to identify ideal learners as personas but provide sophisticated pedagogic approaches that are adaptive and provide personal feedback. Design thinking makes the mistake of thinking there is such a thing as ideal learners without realising that you need analysis of the target audience, not ‘averaged out’ personas.
Design Thinking seems to push people towards thinking that learning problems are ‘design’ problems. Many are not. You need to understand the nature of the cognitive problems and the researched solutions to those problems. By all means define the problems, but know what a learning problem actually is.
One area, however, where I think design thinking could be useful is in identifying the context, workflow and moments of need, so understanding the learner’s world and their business environment. That’s fine. On this I agree. But I rarely hear this from practising ‘Design Thinking’ practitioners, who tend to focus on the screen design itself, rather than the design of a blended learning experience, based on the types of learning to be delivered in real environments, in the workflow, with performance support. You need a deep understanding of the technology and its limitations.
There is also an argument for having a complete set of skills on the team, but this has nothing to do with design thinking. The delivery of online learning is a complex mix of learning, design, technical, business and fiscal challenges. What’s needed is balance in the team, not a process that values an abstract method with a focus on ‘design’ alone.
This is the key step, where design thinkers are supposed to provide challenge and creative solutions. It is the step where it can all go wrong. Creative solutions tend to be based on media delivery, not effortful learning, chunking, interleaving, open input, spaced practice and many other deeper pedagogic issues that need to be understood before you design anything. There’s often a dearth of knowledge about the decades of research in learning and cognitive science that should inform design. It is replaced by rather superficial ideas around media production and presentation, hence the edutainment we get, all ‘tainment’ and no ‘edu’. It focuses on presentation not effortful learning.
Few design thinkers I’ve heard show much knowledge of designing for cognitive load or avoiding redundancy, and they have scant knowledge of the piles of brilliant work done by Nass, Reeves, Mayer, Clark, Roediger, McDaniel and many other researchers who have worked for decades uncovering what good online learning design requires. This is also why co-design is so dangerous. It leads to easy learning, all front and no depth.
What I’ve seen is lots of ‘ideation’ around gamification (but the trivial, Pavlovian aspects of games - scoring, badges and leaderboards). Even worse is the over-designed, media rich, click-through learning, loosely punctuated by multiple-choice questions. Remember that media rich does not mean mind-rich. Even then, designers rarely know the basic research, for example, on the optimal number of options in MCQs or that open input is superior.
It is easy to prototype surface designs and get voiced feedback on what people like, but this is a tiny part of the story. It is pointless prototyping learning solutions in the hope that you’ll uncover real learning efficacy (as opposed to look and feel) without evaluating those different solutions. This means the tricky and inconvenient business of real research, with controls, reasonable sample sizes, randomly selected learners and clear measurement of retention in long-term memory, even transfer. Few with just ‘Design Thinking’ training have the skills, time and budget to do this. This is why we must rely on past research and build on this body of knowledge, just as clinicians do in medicine. We need to be aware of the work of Bjork, Roediger, Karpicke, Heustler and Metcalfe, who show that asking learners what they think is counterproductive, and build on the research that shows which techniques produce high retention.
A problem is that prototyping is often defined by the primitive tools used by learning designers, which can only produce presentation-like, souped-up PowerPoint and MCQs, whereas real learning requires much deeper structures. Few have made the effort to explore tools that allow open, free-text input, which really does increase retention and recall. Low-fidelity prototyping won’t hack it if you want open input and sophisticated adaptive, personalised learning through AI, and that’s where things are heading.
One area where Design Thinking can help is with the ‘user interface’, but this is only one part of the deliverable and often not that important. It is important to make it as frictionless as possible, but this comes as much through technical advances, touchscreen, voice, open input, as through design.
Testing is a complex business. I used to run a large online learning test lab – the largest in the UK. We tested for usability, accessibility, quality assurance and technical conformance and, believe me, to focus just on ‘design’ is a big mistake. You need to focus not on surface design but, more importantly, on all sorts of things, such as learning efficacy. Once again, learner testimony can help but it can also hinder. Learners often report illusory learning when they are presented with high quality media – this means absolutely nothing. Testing is pointless if you’re not testing the real goal – actual retained learning. Asking people for qualitative opinions does not do that.
In truth, testing is quite tricky. You have to be clear about what you are testing, cover everything and have good reporting. There are tried and tested methods that few have ever studied, so this is a really weak link. Just shoving something under the nose of a learner is not enough. We found early on that a short number of iterations with an expert is what really works for interface design, along with A/B testing, not some simple suck-it-and-see trial.
I've heard several presentations on this and done the reading, but my reaction is still the same: is that it? It seems like a short-circuited version of a poor project management course. I honestly think the danger of ‘Design Thinking’ is that it holds us back. We’ve had this for several years now, where design trumps deep thinking, knowledge of how we learn, knowledge of cognitive overload and knowledge of optimal learning strategies. It gives us the illusion of creativity, but at the expense of sound learning. Walk around any large online learning exhibition and observe the output: over-engineered design that lacks depth. Design Thinking lures us into thinking that we have solved learning problems when all we have done is polish presentation. The real innovations I’ve seen come from a deep understanding of the research and technology, and from innovative solutions based on that research, like nudge learning and WildFire. Delivery, I think, is better rooted in strong practices, such as ISO standards and practices guided by evidence, which have evolved over time, not simplistic processes that are often simplified further and sold as bromides. As one commentator, who tried Design Thinking, said: "we ended up doing nothing more than polishing turds!".


Saturday, January 19, 2019

Voice is here - online learning has been traditionally 'nil by mouth' but not now....

Curious conundrum - nil by mouth
Online learning needs to be unmuted. Almost all online learning involves just clicking. Not even typing stuff in, just clicking. We click to navigate, click on menus, click (absurdly) on people to get fictional speech bubbles, click on multiple-choice options. Yet most other online activity involves messaging, typing what you think on social media and being far more active. Also, in real life, we don’t click, we speak and listen. Most actual teaching and training uses voice.
Voice is our first and most natural form of communication. We’ve evolved to speak and listen; we are grammatical geniuses by age three and are not, in any formal sense, ‘taught’ to talk and hear. Whereas it takes many years to learn how to read and write, and many struggle, some never achieving mastery in a lifetime. Is ‘voice’ a solution?
Rise of voice
Strangely enough, we may be going back to the pre-literate age with technology, back to this almost frictionless form of interface. It started with services such as Siri and Cortana on our phones. As the AI technology behind these services improved, it was not Apple or Microsoft that took it to consumers, but Amazon and Google, with Alexa and Google Home. I have an Alexa which switches my lights on and off, activates my robot vacuum cleaner, plays all of my music and controls my smart TV. I use it to set timers for calls and Skype meetings. We even use it to voice message across the three floors of our house, and my son who lives elsewhere. I use it for weather, news and sports results. In Berlin recently with my son, who has Bluetooth headphones linked to Google Assistant, he wanted a coffee and simply asked where the nearest coffee shop was; it spoke back, giving voiced directions as we walked. Voice is also in our cars, as we can speak commands or be spoken to by Google Maps. Voice is creeping in everywhere.
This month we’ve also seen tools emerge that analyse your voice for mood and tone, and evidence that you can diagnose dementia, Parkinson’s and other illnesses from frequency-level analysis. As Mary Meeker’s analysis shows, voice is here to stay and has become the way we interact with the internet of things (IoT).
Voice for learning
1. Voice as a skill
Text-based learning has squeezed out the skills of oration, yet speaking fluently, explaining, presenting, giving feedback, interviewing, managing, critical thinking, problem solving, team working and much of what are called 21st-century skills are skills we used to teach more widely through voice. They are fundamentally expressed as speech, that most fundamental of media. People have to learn to speak up and, when they speak, to speak wisely and to good effect. It is also important, of course, to listen. For these reasons, the return of voice to learning is a good thing. Speaking to a computer, I suspect, also results in more transfer, especially if, in the real world, you are expected to articulate things in meetings or in the workplace to your colleagues, face to face.
2. Podcasts
Another sign that voice is an important medium in itself is the podcast, which has surprised people with its popularity. This is an excellent post on that subject by Steve Rayson. The book ‘Podcasting: New Aural Cultures and Digital Media’ by Llinares, Fox and Berry (2018) is an in-depth look at the strengths of voice-only media: the ability to listen when you want (timeshifting); use while walking, running, exercising and driving; and long pieces with more depth, often with multiple participants. In addition, they make you feel as though you are there in the conversation, with a sense of intimacy, as this is ‘listening’, not just ‘hearing’, especially when wearing headphones. Podcasts should be used more in learning.
3. Podcasts and online learning
We’ve been using podcasts in WildFire. One real example is a Senior Clinician who ran and authored a globally significant medical trial in asthma. We allow the learner to listen intently to the podcast (an interview), then grab the transcript (automatically transcribed into text) to produce a more active and effortful learning experience with free-text input. You get the best of both worlds: an intimate and reflective experience with the expert, as if you were there with him, then you reflect, retrieve, retain and can recall what you need to learn. Note that the ‘need to know’ material is not every single word, but the useful points about the scale of the trial, its objectives and its findings.
4. Text to speech
We’ve also used AI text to speech to create introductions to online courses, making them more accessible and human. The underlying text file can be edited at any time, with ease, if it needs to be changed. These audio introductions have been used in a Train the Trainer course and a course for a major hotel chain, where learners may need something more than pure text and images.
5. Voice input
We’ve also developed voice-input online learning, where you don’t type in answers but ‘voice’ them in. This is a very different cognitive and learning experience from just clicking on multiple-choice options. You recall what you think you know in your phonological loop, a sort of inner ear where sounds are recalled and rehearsed before being either spoken or written. This is the precursor to expression. Voicing your input just seems more like real, not artificial, dialogue. The entire learning experience is voiced: navigation and retrieval with open input. This, we believe, will be useful for certain types of learning, especially with audiences that have problems with typing, literacy or dyslexia. Voice is starting to creep into online learning. It will grow further.
6. VR
One of the problems in VR is the inability to type and click on anything. With a headset on, typing, when possible at all, is far too slow and clumsy. It is much more convenient, and natural, to speak within that immersive world. This opens up the possibility of more flexible learning within VR. Many knowledge components, decisions or communications within a simulation can be voiced, as they would be in the real world. Voice will therefore enable more simulation training.
7. Feedback
Voiced feedback is used by some, obviously in coaching and mentoring, but also in feedback to students about assignments. The ease of recording, along with the higher impact on the learner in terms of perceived interest by the teacher, makes this a powerful feedback method.
8. Assessment
So much learning is text based when so much of the real world is voice based. Spoken assessment is, of course, normal in language training, but shouldn’t we also be expected to voice our opinions, even voice critical pieces, for assessment? Oral examinations are relatively rare, but they may become desirable if newer soft skills are in demand.
9. Chatbots
Voice interfaces with chatbots have been launched on home devices such as Alexa, but we will see domain-specific chat emerge. Google Duplex was the first real showcase of a conversationally sensitive product that can actually make voice calls to a restaurant or hairdresser to make appointments. This is not easy, and it is on limited release. But it is a sign of things to come: more prolonged dialogue by voice.
10. Voice agents
Learning techniques such as mentoring, coaching and counselling will, in time, benefit from this voiced approach. CBT counselling bots have shown promising results in clinical trials, and their anonymity, even the fact that they are NOT human, has proven to be a rather counterintuitive advantage.
Online learning needs to pay attention to AI-driven voice. It is an underlying consumer technology, now ubiquitous on phones and increasingly in our homes. It is natural, convenient, intimate and human. It has, when used wisely, the ability to lift online learning out of the text and click model in all sorts of imaginative ways.

 Subscribe to RSS

Friday, January 11, 2019

This 'less is more' AI technique saves time, money and helps increase retention...

AI is many things and it is best employed in learning on specific narrow tasks. That’s what we’ve been doing, using AI to create content, semantically analyse free text input, create text to speech podcasts and curate content at WildFire.
One problem we have tackled is the simple fact that the INPUTS into learning content tend to be overlong, overwritten and too detailed. Training departments are often handed huge PDFs, long slide decks packed with text or overlong video. To be fair, those in video production are normally professional enough to edit it down to a reasonable length, but huge documents and PowerPoints are, I’d say, the norm.
AI can be used to automatically shorten this text. This can be done in two ways (or in combination):
Extractive summarisation keeps the text intact and simply removes what it judges to be less useful content. This uses techniques such as TF-IDF (term frequency–inverse document frequency), which gives you a measure of how important words are within a corpus (a dataset of text). It looks for sentences containing these words and extracts them. There are many more sophisticated extractive algorithms, but you get the idea.
The advantage of this approach is that you retain the integrity of the original text, which may be useful if it has been through a regulatory, legal or subject matter review.
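To make the extractive approach concrete, here is a minimal sketch in Python, using only the standard library. It treats each sentence as a ‘document’, scores sentences by their average TF-IDF weight, and keeps the top few in their original order. This is an illustration of the general technique, not WildFire’s actual algorithm.

```python
import math
import re
from collections import Counter

def extractive_summary(text, k=2):
    """Toy extractive summariser: score each sentence by the average
    TF-IDF weight of its words (each sentence treated as a 'document')
    and keep the top k sentences, in their original order."""
    sentences = [s.strip() for s in re.split(r'(?<=[.!?])\s+', text) if s.strip()]
    tokenised = [re.findall(r'[a-z]+', s.lower()) for s in sentences]
    n = len(sentences)
    # Document frequency: in how many sentences does each word appear?
    df = Counter(w for toks in tokenised for w in set(toks))
    def score(toks):
        tf = Counter(toks)
        return sum(tf[w] * math.log(n / df[w]) for w in tf) / (len(toks) or 1)
    ranked = sorted(range(n), key=lambda i: score(tokenised[i]), reverse=True)[:k]
    return ' '.join(sentences[i] for i in sorted(ranked))
```

Because repeated, commonplace sentences score low and sentences with rare, distinctive words score high, the sketch tends to keep the detail-bearing sentences, which is exactly the behaviour you want when boiling down an overlong PDF.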
Abstractive tends to use deep learning models and neural networks to understand the content then generate summarised content as a précis. Free from the constraint of having to be loyal to the original structure, these algorithms will write their own abstract getting down to the real essence of the text.
This more powerful technique is more likely to provide a tighter, more suitable output for learning: shorter, and a better distillation of the meaning.
This is useful in increasing the productivity of any educational or training design team, as you dramatically shorten this necessary editing task. On large PDFs, not uncommon in compliance and SOWs, these techniques really do work well. They also work well with any text, from articles, papers and books to PowerPoint text and video transcripts. We already automatically grab transcripts from YouTube, and this extra step is useful in reducing spoken text down to its real substance; you often get asides and content that works well on screen but not as text. The combination of video plus detailed, effortful text, where you pick up the detail and have to make the cognitive effort to show understanding and actual recall of the content, is powerful. Note that you can scale down in steps until you reach what you feel is an optimal précis. We’ve also found it useful because it surfaces overwriting, repetition, even errors.
Once agreed, the shorter text can be put into WildFire, where other forms of AI create the content, in minutes not months, again dramatically decreasing both time to delivery and costs. The AI creates the content and analyses free-text input, which is significantly better in terms of improving both retention and recall.
Time matters
This reduction in time is important, as training design has traditionally been a bit of a block in the process. A business sponsor comes to the training department and is told it will take weeks or months; their reaction is often to simply walk away. Faster delivery means you are seen in the business as providing timely and, importantly, not over-engineered solutions to business problems.
Less is more
A point that is often overlooked is that this is wholly in line with the psychology of learning, which screams ‘less is more’ at us. A good motto that itself summarises what learning designers have to do is Occam’s Razor: use the minimum number of entities to reach the given goal. This is true of interfaces, but it is also true of content design, media design and the needs of learners.
Our limited working memory, along with the need for chunking and retrieval, makes it essential to be precise and as short as possible with learning content. Many courses are overlong, full of content that is not essential and will soon be forgotten. What learners and businesses want is the crisp essence, what they need to know, not the padding.
This AI technique can be used alongside other techniques to massively increase speed of delivery, reduce cost and, just as importantly, improve efficacy. Your learners will be grateful, as will your business sponsors.


Thursday, January 10, 2019

10 things you need to know to before you buy or build a chatbot

As Eliza showed 55 years ago, and as Nass and Reeves showed is generally true for technology, we are easily fooled into anthropomorphising and reading agency into chatbots, and into technology in general. In truth, chatbots don’t talk to you, they pretend to talk to you. They are tricksters. In a sense all human-machine interaction is trickery. It is, in the end, only software being mathematically executed with some human scripts thrown in. Nevertheless, they are surprisingly successful. Even simple Alexa has been a massive hit, and she (well, it) only answers simple questions, with little or no dialogue.
Interestingly, this immediately raises an issue for chatbot deployment: setting ‘expectations’. Do you tell users that it is just a piece of software, or do you keep up the ‘magic’ myth? How honest will you be about its capability? Set the bar too high and you will get lots of disappointed users. Here are a few other practical things to think about when you enter the weird and wonderful botland…
1. Domain knowledge
First up, on expectations, and this is really important: remember that chatbots are not generalists. They are domain specific, good at specific tasks within defined domains. Google Duplex works only because it does domain-specific tasks: calling a restaurant or booking a hairdressing appointment. Some services, such as Dialogflow and LivePerson, offer domain-specific stores of messaging transcript data, with detailed tasks for each industry sector. Some even focus on core use cases, mostly designed around customer service. Most are a long way off being a genuine teacher, coach or mentor, as they lack the general ability to deal with a breadth of unexpected queries and answers. So dial your expectations down a notch or you’ll be setting yourself up for failure.
2. Voice
Your chatbot needs to have a voice. It’s too easy to just throw a jumble of responses into a database and hope for the best. In organisations, you may need to be on brand, talk like an expert and not a teenager, and use humour (or not). Define a persona and build a style guide. At the end of the day, lots of responses have to be written, and they need to sound as though they come from a single voice. In learning especially, you have to be careful with tone. Too many chatbots have a surfeit of phrases that sound as though they’re trying too hard to be cool or funny. In learning, one may want to be a little more serious. This depends, of course, on your intended audience and the subject matter. Whatever the project, think about the ‘voice’ in this wider sense.
3. Manifestation
Linked to voice is the visual and aural manifestation of your chatbot. Think carefully about its appearance. Some stay gender neutral; others are identified as male or female. Many, perhaps too many, appear like 1950s square robots. Others have faces, micro-expressions, even animation. Then there’s the name. Be careful with this, it matters. And do you want one name or a separate name for each domain or course? Giving your bot a face seems a little odd to me; I prefer a bot identity that’s a little more hidden, almost unobtrusive, that leaves the persona to be built in the mind of the user.
4. Natural language processing
Understand what level of technology you want to use. This can mean lots of things, from simple keyword recognition to full speech recognition (as in Amazon Lex). Be very careful here, as this is rarely as good as vendors claim. When a vendor says they are using deep learning or machine learning, that can mean many things, from very basic NLP techniques to more dynamic, sophisticated tasks. Get used to the language of ‘intents’; this is related to the domain-specific issue above. Chatbots need to have defined tasks, namely ‘intents’ (the user’s intention), identified and named as actions and objects, such as ‘show weather’. These are qualified by ‘entities’. It is worth getting to grips with the vocabulary of NLP when buying or building chatbots.
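To make ‘intents’ and ‘entities’ concrete, here is a deliberately naive sketch. Real NLP services use trained statistical models rather than keyword sets, and the intent names and patterns below are invented for illustration, not taken from any vendor.

```python
import re

# Toy intent classifier: each intent is a named action with trigger
# keywords; 'entities' are the slots we pull out of the utterance.
# Illustrative only: real services train models, they don't match keywords.
INTENTS = {
    'show_weather': {'keywords': {'weather', 'forecast', 'rain'},
                     'entity_pattern': r'in ([A-Z][a-z]+)'},
    'set_timer':    {'keywords': {'timer', 'remind', 'alarm'},
                     'entity_pattern': r'(\d+) minutes?'},
}

def parse(utterance):
    """Return the first matching intent and its extracted entity, if any."""
    words = set(re.findall(r'\w+', utterance.lower()))
    for name, spec in INTENTS.items():
        if words & spec['keywords']:
            m = re.search(spec['entity_pattern'], utterance)
            return {'intent': name, 'entity': m.group(1) if m else None}
    return {'intent': None, 'entity': None}
```

Even this toy shows why vocabulary matters when talking to vendors: the ‘intent’ is the named action the user wants, and the ‘entity’ is the qualifying detail (a city, a duration) the bot needs to act on it.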
5. Building
Many chatbot services offer a no-coding tool to build your flow, others require more complex skills. Flowcharting tools are common, and these often result in simply asking users to choose from a set of options and branching from them. To be fair, that keeps you (and the bot) on track, which may be the way to go in structured learning. Others will accept open input but steer you towards certain types of responses. One thing is for sure, you need new skill sets. Traditional interactive design skills will help, but not much. This is about dialogue not monologue, about understanding complex technology, not just pages of HTML.
6. Your data
How do you get your data into their system? This is not trivial. How do you get your content, which may exist as messages, PDFs, PowerPoints and other assets, into the format that is needed? This is far from automatic. Then, if it’s using complex AI techniques, there’s the training process. You really do need to understand the data issues, what, where and how the data is to be managed, and, of course, GDPR.
7. Hand off to humans
What happens when a chatbot fails? Believe me, this is common. A number of failsafe tactics can be employed. You can do the common thing and ask the person to repeat themselves: “Sorry, I didn’t catch that?” “Could you elaborate on that?” The chatbot may even try to use a keyword to save the flow, or distract, change the subject and come back to the flow a little later. So think about failsafes. If all else fails, as it does for many customer chatbots, they default out to a real human. That’s fine in customer service, and many services, like Liveperson and Boutique.ai, offer this functionality. This is not so fine if you’re designing an autonomous learning system.
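The failsafe ladder described above can be sketched as a simple function. The function name, messages and structure are hypothetical, not any vendor’s API: first try to match an intent, then reprompt, then attempt a keyword rescue, and only then hand off to a human.

```python
def respond(utterance, attempt, match_intent, rescue_keywords):
    """Illustrative failsafe ladder for a chatbot turn:
    1) try to match an intent;
    2) on first failure, ask the user to rephrase;
    3) on a repeat failure, try a keyword 'rescue' to save the flow;
    4) give up and hand off to a human (or, in a learning bot, a menu)."""
    intent = match_intent(utterance)
    if intent:
        return ('intent', intent)
    if attempt == 0:
        return ('reprompt', "Sorry, I didn't catch that. Could you rephrase?")
    hits = [k for k in rescue_keywords if k in utterance.lower()]
    if hits:
        return ('rescue', hits[0])
    return ('handoff', 'Connecting you to a human colleague...')
```

The design point is simply that failure handling is explicit and ordered; in an autonomous learning system you would replace the final human handoff with something the bot can do on its own, such as falling back to a structured menu.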
8. Channels
On what channels can the chatbot appear? There are lots of options here, and you may want to look at what comms channels you use in your organisation, like website chat, in-app chat, Facebook Messenger, Slack, Google Assistant, Skype, Microsoft Teams, SMS, Twitter or email. The chatbot needs a home, and you may want to think about whether it is a performance support chatbot on your comms system, or a more specific chatbot within a course.
9. Integration
Does the chatbot have an open API and integrate into other platforms? Don’t imagine that this will work easily from your LMS, it won’t. Integration into other systems may also be necessary.
10. Administration
Your chatbot has to be delivered from somewhere, so what are the hosting options, and is there monitoring, routing and management? Reporting and user statistics matter with chatbots, as you really do want to see if they deliver what they promise, with user stats, times and fallout stats. How are these handled and visualised? Does your chatbot vendor have 24/7 customer support? You may need it. Lastly, if you are using an external service, be careful about it changing without telling you (it happens), especially with the large tech vendors, like IBM and Microsoft.
We are only at the start of the use of chatbots in learning. The trick is to play around with all of the demos online before you start. Check out the large vendors such as:
Remember that these are primarily chatbots for customer service. For learning purposes, I’d start with a learning company first. If you want any further advice on this contact me here.


Sunday, January 06, 2019

AI breakthroughs in learning in 2018

AI is good at narrow, prescribed tasks; it is hopeless at general tasks. This, in my view, is why big data and learning analytics projects are less appropriate in learning than more precise, proven uses of AI. There’s a paucity of data in learning, and it is often messy, difficult to access and subject to overfitting and other problems when trying to make predictions.
On the other hand, using specific techniques at specific points on the learning journey – engagement, support, delivery and assessment, one can leverage AI to best effect. So here’s five ways this was done in 2018, in real projects, in real organisations, some winning major awards.
1. Chatbots
We’ve seen hundreds of early chatbot projects this year, across learning engagement, learning support, performance support, assessment and well-being. Google demonstrated Google Duplex, which mastered conversational structure in a limited domain, but enough to fool restaurants into thinking a human was calling. It has been rolled out for further trials on selected Pixel phones. This builds on several different areas of natural language processing: speech to text, text to speech, trained neural networks and conversational structures. We can expect a lot more in this area in 2019.
2. Creation
The world of design has got bogged down in media production, as if just watching, listening or reading were enough to learn. Even media production can be automated, to a degree, with AI. We have been producing text-to-speech podcasts, using automated transcript creation from video, and creating content, at last recognising that learning, as opposed to click-through consumption, needs fast AI-generated production of high-effort learning experiences. Award-winning, world-beating projects are now created with AI with no interactive designers.
3. Cognitive effort
Online learning has been trapped in largely linear media ‘experiences’ with low-effort, multiple-choice questions. This year we’ve seen services that use open input, either as single concepts or free text, both created by and interpreted semantically by AI. The ability of AI to interpret text input by learners automates both assessment and feedback. This was realised in real projects in 2018. It will only get better in 2019.
4. Personalisation
Good learning takes place with timely and relevant action. Almost everything we do online is mediated by AI that delivers timely and relevant options for people – searching on Google, connecting on Social Media, buying on Amazon, entertaining ourselves on Netflix. Adaptive, personalised learning finally showed convincing results on attainment across courses. We can expect a lot more of this in 2019. 
5. Curation
The ability to curate content, tapping into the vast cognisphere that is the web, is happening as part of course creation as well as separate, search-driven curation. One can wire in external links to content to solve problems of competence, comprehension or curiosity.
Forget blockchain, badges and gamification. The underlying tectonic shift in learning technology will use AI. This is happening in healthcare, with significant ‘better than human’ applications appearing in 2018. This is happening in finance, with chatbots at the front-end and AI playing an increasing role in back-end systems. This is happening in manufacturing, with the automation of factories. This is happening in retail, as the selling, buying and delivery is increasingly through algorithms and AI. It is also happening in learning. This matters. If we are to adapt to the new normal of AI processes on employment, commerce and politics, we must make sure that education keeps up and that we equip ourselves and our children with better skills for this future.


Wednesday, January 02, 2019

Year of learning dangerously – my 15 highs and lows of 2018

So 2018 is behind us. I look back and think: what really happened, what changed? I did a ton of talks over the year, in many countries, to different types of audiences: teachers, trainers, academics, investors and CEOs. I wrote 65 blog posts and a huge number of tweets and Facebook posts. I also ran an AI business, WildFire, delivering online learning content, and we ended the year nicely by winning a major award.
So this is not a year end summary nor a forecast for 2019. It’s just a recap on some of the weirder things that happened to me in the world of ‘learning’…
1. Agile, AI-driven, free text learning
As good a term as I can come up with for what I spent most of my year doing and writing about, mostly on the back of AI: real projects delivered to real clients with AI-generated, award-winning content, superfast production times and a new tool in WildFire that gets learners to use free text, where we use AI (semantic analysis) as part of the learning experience. Our initial work shows that this gives huge increases in retention. That is the thing I’m most proud of this year.
2. Video is not enough
Another breakthrough was a WildFire tool that takes any learning video and turns it into a deeper learning experience by taking the transcript and applying AI, not only to create strong online learning but also to use the techniques developed above to massively increase retention. Video is rarely enough on its own. It's great at attitudinal learning, processes, procedures and things that require context and movement. But it is poor at detail and semantic knowledge, and has relatively poor retention. This led to working with a video learning company to do just that, as 2 + 2 = 5.
3. Research matters
I have never been more aware of the lack of awareness of research on learning and online learning than I was this year. At several conferences across the year I saw keynote speakers literally show and state falsehoods that a moment’s searching on Google would have corrected. These were a mixture of futurists, purveyors of ‘c’ words like creativity and critical thinking, and the usual snakeoil merchants. What I did enjoy was giving a talk at the E-learning Network on this very topic, where I put forward the idea that interactive design skills will have to change in the face of new AI tech. Until we absorb the body of solid research around effortful learning, illusory learning (learners don’t actually know how they learn or how they should learn), interleaving, desirable difficulties, spaced practice, chunking and so on, we’ll be forever stuck in click-through online learning, where we simply skate across the surface. It led me to realise that almost everything we've done in online learning may now be dated and wrong.
4. Hyperbolic discounting and nudge learning
Learning is hard, and it suffers from its consequences lying too far in the future for learners to care. Hyperbolic discounting explains why learning is so inefficient, but it also kicks us into realising that we need to counter it with some neat techniques, such as nudge learning. I saw a great presentation on this in Scotland, where I spoke at the excellent Talent Gathering.
5. Blocked by Tom Peters
The year started all so innocently. I tweeted a link to an article I wrote many moons ago about leadership and got the usual blowback from those making money from, you guessed it, leadership workshops… one of whom praised In Search of Excellence. So I wrote another piece showing that this book, and another, Good to Great, turned out to be false prophets, as much of what they said turned out to be wrong and many of the companies they heralded as exemplars went bust. More than this, I thought that the whole ‘Leadership’ industry in HR had led, eventually, to the madness of Our Great Leader, and my namesake, Donald Trump. In any case, Tom Peters of all people came back at me and, after a little rational tussle, he blocked me. This was one of my favourite achievements of the year.
6. Chatting about chatbots
I did a lot of talks on chatbots this year, after being involved with Otto at Learning Pool (great to see them winning Company of the Year at the Learning Technologies Awards), building one of my own in WildFire and playing around with many others, like Woebot. They’re coming of age and have many uses in learning. And bots like Google’s Duplex are glimpses into an interesting future based more on dialogue than didactic learning. My tack was that they are a natural and frictionless form of learning. We’re still coming to terms with their possibilities.
7. Why I fell out of love with Blockchain
I wrote about blockchain, I got re-married on Blockchain, I gave talks on Blockchain, I read a lot about Blockchain… then I spoke at an event of business CEOs where I saw a whole series of presentations by Blockchain companies and realised that it was largely vapourware, especially in education. Basically, I fell out of love with Blockchain. What no one was explaining were the downsides, that Blockchain had become a bit of a ball and chain.
8. And badges…
It’s OK to change your mind on things and in its wake I also had second thoughts on the whole ‘badges’ thing. This was a good idea that failed to stick, and the movement had run its course. I outlined the reasons for its failure here.
9. Unconscious bias my ass
The most disappointing episode of the year was the faddish rush towards this nonsense. What on earth gave HR the right to think that they could probe my unconscious with courses on ‘unconscious bias’? Of course, they can’t, and the tools they’re using are a disgrace. This is all part of the rush towards HR defending organisations AGAINST their own employees. Oh, and by the way, those ‘wellness’ programmes at work? They also turned out to be a waste of time and money.
10. Automated my home
It all started with Alexa. Over the months I’ve used it as a hub for timers (meals in the oven, Skype calls, deadlines), then for music (Amazon Music), then the lights, and finally the TV. In the kitchen we have a neat little robot that emerges on a regular basis to clean the ground floor of our house. It does its thing and goes back to plug itself in and have a good sleep. We also have a 3D printer which we’re using to make a 3D drone… which brings me to another techy topic: drones.
11. Drones
I love a bit of niche tech and got really interested in this topic. Big thanks to Rebecca, Rosa and Veronique, who allowed me to attend the brilliant E-learning Africa and see Zipline and another drone company in Rwanda (where I was bitch-slapped by a gorilla but that, as they say, is another story). On my return I spoke about Drones for Good at the wonderful Battle of Ideas in London (listen here). My argument, outlined here, was that drones are not really about delivering pizzas and flying taxis, as that will be regulated out in the developed world. However, they will fly in the developing world. Then along came the Gatwick incident…
12. Graduation
So I donned the professorial gown and soft, Luther-like hat, and was delighted to attend the graduation of hundreds of online students at the University of Derby, with my friends Julie Stone and Paul Bacsich. At the same time I helped get Bryan Caplan across from the US to speak at Online Educa, where he explained why HE is in some trouble (mostly signalling and credential inflation) and why online is part of the answer.
13. Learning is not a circus and teachers are not clowns
The year ended with a rather odd debate at Online Educa in Berlin, around the motion that “All learning should be fun”. Now I’m as up for a laugh as the next person. And, to be fair, Elliott Masie’s defence of the proposition was laughable. Learning can be fun, but that’s not really the point. Learning needs effort. Just making things ‘fun’ has led to the sad sight of click-through online learning. It was the perfect example of experts who knew the research versus deluded sellers of mirth.
14. AI
I spent a lot of time on this in 2018 and plan to spend even more in 2019. Why? Beneath all the superficial talk about Learning Experiences and whatever fads come through, beneath it all lies technology that is smart and has already changed the world forever. AI has changed, and will change, the very nature of work. It will therefore change why we learn, what we learn and how we learn. I ended my year by winning a Learning Technologies award with TUI (thanks Henri and Nic) and WildFire. We did something groundbreaking: produced useful learning experiences, in record time, using AI, for a company, and it showed real impact.
15. Book deal
Oh and got a nice book deal on AI – so head down in 2019.


Thursday, December 13, 2018

Learning Experience Systems – just more click-through online learning?

I have this image in my lounge: a clergyman skating, as we so often do when we think we're learning, just skating over the surface. For all the talk of Learning Experience Systems and ‘engagement’, if all you serve up are flat media experiences, no matter how short or micro, with click-through multiple choice or, worse, drag and drop, you’ll have thin learning. Simply plopping the word ‘Experience’ into the middle of the old LMS term is to simply rebadge; it doesn’t cut it unless we reflect on what those ‘experiences’ should be. All experience is learning, but some experiences (the effortful ones) are much more effective than others.
 As Mayer showed, this does not mean making things media rich; media rich is not mind rich. This often inhibits learning with unnecessary cognitive load.
Neither does it simply mean delivering flat resources. The same goes for some types of explicit gamification, where the Pavlovian rewards become ends in themselves and inhibit learning. Good gamification does, in fact, induce deep thought; collecting coins, leaderboards and other ephemera do not, as the gains are short-lived.
The way to make such systems work is to focus on effortful ‘learning’ experiences, not just media production. We know that what counts is effortful, desirable and deliberate practice.
Engagement does not mean learning. I can be wholly engaged, as I often am, in all sorts of activities – walking, having a laugh in the pub, watching a movie, attending a basketball game – but learn little. Engagement so often means that edutainment stuff: all ’tainment and no edu. The self-perception of engagement is, in fact, often a poor predictor of learning. As Bjork repeatedly says, on the back of decades of research from Roediger, Karpicke, Heustler, Metcalfe and many others, “we have a flawed model of how we learn and remember”.
We tend to think that we learn just by reading, hearing and watching. When, in fact, it is other, effortful, more sophisticated practices that result in far more powerful learning. Engagement, fun, learner surveys and happy sheets have been shown to be poor measures of what we actually learn and very far from being optimal learning strategies.
Ask Traci Sitzmann, who has done the research (Sitzmann, 2008). Her meta-analysis, covering 68,245 trainees across 354 research reports, attempts to answer two questions:
Do satisfied students learn more than dissatisfied students? After controlling for pre-training knowledge, reactions accounted for only 2% of the variance in factual knowledge, 5% of the variance in skill-based knowledge and 0% of the variance in training transfer. The answer is clearly no!
Are self-assessments of knowledge accurate? Self-assessment is only moderately related to learning; it captures motivation and satisfaction, not actual knowledge levels.
Her conclusion, based on years of research – and I spoke to her; she is adamant – is that self-assessments should NOT be included in course evaluations and should NOT be used as a substitute for objective learning measures.
Open learning
It’s the effort to ‘call to mind’ that makes learning work. Even when you read, it’s the mind reflecting, making links and calling up related thoughts that turns the experience into a learning experience. This is especially true online: the open mind is what makes us learn, and so open response is what makes us really learn in online learning.
You start with whatever learning resource, in whatever medium you have: text (pdf, paper, book…), text and graphics (PowerPoint…), audio (podcast) or video. By all means read your text, go through a PowerPoint, listen to the podcast or watch a video. It’s what comes next that matters.
With WildFire, in addition to creating online learning in minutes not months, we have developed open input by learners, interpreted semantically by AI. You literally get a question and a blank box into which you can type whatever you want. This is what happens in real life – not selecting items from multiple-choice lists. Note that you are not encouraged to simply retype what you read, saw or heard. The point, hence the question, is to think, reflect, retrieve and recall what you think you know.
Here’s an example, a definition of learning…
What is learning?
Learning is a lasting change in a person’s knowledge or behaviour as a result of experiences of some kind.
Next screen….

You are asked to tell us what you think learning is. It’s not easy and people take several attempts. That’s the point. You are, cognitively, digging deep, retrieving what you know and having a go. As long as you get the main points – that it is a lasting change in behaviour or knowledge through experiences – you’re home and dry. As the AI does a semantic analysis, it accepts variations on words, synonyms and different word order. You can’t cut and paste, and when you are shown the definition again, whatever part you got right is highlighted.
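To make the idea concrete, here is a minimal sketch of order-independent, synonym-aware answer scoring. This is purely illustrative – the concept names and synonym sets are assumptions for this example, and WildFire’s actual semantic analysis is far more sophisticated than simple keyword overlap.

```python
# Key concepts the target definition must cover, each with accepted synonyms.
# (Illustrative lists only – not WildFire's real lexicon.)
KEY_CONCEPTS = {
    "lasting": {"lasting", "permanent", "enduring", "persistent"},
    "change": {"change", "alteration", "shift"},
    "knowledge_or_behaviour": {"knowledge", "behaviour", "behavior"},
    "experience": {"experience", "experiences", "practice"},
}

def score_answer(answer: str) -> tuple[float, list[str]]:
    """Return coverage (0..1) and the concepts the free-text answer hit.

    Tokenises the answer and checks, regardless of word order,
    whether each key concept (or one of its synonyms) is present.
    """
    tokens = {w.strip(".,;:!?").lower() for w in answer.split()}
    hits = [name for name, syns in KEY_CONCEPTS.items() if tokens & syns]
    return len(hits) / len(KEY_CONCEPTS), hits

coverage, hits = score_answer(
    "A permanent shift in someone's behavior caused by their experiences"
)
# Full coverage despite different wording and word order.
```

Note how the learner’s paraphrase scores fully even though it shares almost no exact words with the original definition – which is the whole point of accepting open responses rather than exact matches.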
It’s a refreshing experience in online learning, as it is so easy to click through media and multiple-choice questions thinking you have learnt. Bjork called this the ‘illusion of learning’ and it’s remarkably common. Learners are easily fooled into thinking they have mastered something when they have not.
This fundamental principle in learning, developed in research by Bjork and many others, is why we’ve developed open learning in WildFire.
Engagement is not a bad thing, but it is neither a necessary nor a sufficient condition for learning. LXP theory lacks – well, theory and research. We know a lot about how people learn; the excessive focus on surface experience may not help. All experience leads to some learning, but that is not the point, as some experiences are better than others, and what those experiences should be is rarely understood by learners. What matters is effortful learning, not ice skating across the surface, having fun but not actually learning much – that is click-through learning.
Alliger et al. (1997). A meta-analysis of the relations among training criteria. Personnel Psychology, 50, 341-357.
Sitzmann, T. and Johnson, S. K. (2012). When is ignorance bliss? The effects of inaccurate self-assessments of knowledge on learning and attrition. Organizational Behavior and Human Decision Processes, 117, 192–207.
Sitzmann, T., Ely, K., Brown, K. G., & Bauer, K. (2010). Self-assessment of knowledge: A cognitive learning or affective measure? Academy of Management Learning and Education, 9, 169-191.
Brown, K. G., Sitzmann, T., & Bauer, K. N. (2010). Self-assessment one more time: With gratitude and an eye toward the future. Academy of Management Learning and Education, 9, 348-352.
Sitzmann, T., Brown, K. G., Casper, W. J., Ely, K. and Zimmerman, R. (2008). A review and meta-analysis of the nomological network of trainee reactions. Journal of Applied Psychology, 93, 280-295.
