Saturday, January 19, 2019

Voice is here - online learning has been traditionally 'nil by mouth' but not now....

Curious conundrum - nil by mouth
Online learning needs to be unmuted. Almost all online learning involves just clicking. Not even typing stuff in, just clicking. We click to navigate, click on menus, click (absurdly) on people to get fictional speech bubbles, click on multiple-choice options. Yet most other online activity involves messaging, typing what you think on social media and being far more active. Also, in real life, we don’t click, we speak and listen. Most actual teaching and training uses voice.
Voice is our first and most natural form of communication. We’ve evolved to speak and listen: children are grammatical geniuses by the age of three, without ever being formally ‘taught’ to talk or to understand speech. Reading and writing, by contrast, take many years to learn, many struggle, and some never achieve mastery in a lifetime. Is ‘voice’ a solution?
Rise of voice
Strangely enough, we may be going back to the pre-literate age with technology, back to this almost frictionless form of interface. It started with services such as Siri and Cortana on our phones. As the AI technology behind these services improved, it was not Apple or Microsoft that took it to consumers, but Amazon and Google, with Alexa and Google Home. I have an Alexa which switches my lights on and off, activates my robot vacuum cleaner, plays all of my music and controls my smart TV. I use it to set timers for calls and Skype meetings. We even use it to voice message across the three floors of our house, and to my son who lives elsewhere. I use it for weather, news and sports results. In Berlin recently, my son, who has Bluetooth headphones linked to Google Assistant, wanted a coffee, simply asked where the nearest coffee shop was, and it spoke back, giving voiced directions as we walked. Voice is also in our cars, where we can speak commands to, or get spoken directions from, Google Maps. Voice is creeping in everywhere.
This month we’ve also seen tools emerge that analyse your voice in terms of mood and tone, and evidence that you can diagnose dementia, Parkinson’s and other illnesses from frequency-level analysis. As Mary Meeker’s analysis shows, voice is here to stay and has become the way we interact with the internet of things (IoT).
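To give a flavour of what frequency-level analysis involves, here is a minimal, illustrative Python sketch that estimates the fundamental pitch of a voice signal by autocorrelation. It is a toy: real diagnostic tools use far richer acoustic features (jitter, shimmer, formants) and trained models, and the synthetic tone here merely stands in for recorded speech.

```python
import math

def estimate_pitch(signal, sample_rate, fmin=50.0, fmax=500.0):
    """Estimate fundamental frequency (Hz) by picking the
    autocorrelation peak within the plausible vocal range."""
    mean = sum(signal) / len(signal)
    s = [x - mean for x in signal]
    lo = int(sample_rate / fmax)           # smallest plausible lag
    hi = int(sample_rate / fmin)           # largest plausible lag
    best_lag, best_corr = lo, float("-inf")
    for lag in range(lo, hi):
        corr = sum(a * b for a, b in zip(s, s[lag:]))
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return sample_rate / best_lag

# A synthetic 220 Hz tone stands in for a real voice recording
sr = 8000
tone = [math.sin(2 * math.pi * 220 * n / sr) for n in range(sr)]
print(estimate_pitch(tone, sr))  # ≈ 220 Hz
```

Clinical systems would track how such measures drift over time, rather than take a single reading.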
Voice for learning
1. Voice as a skill
Text-based learning has squeezed out the skills of oratory, yet speaking fluently, explaining, presenting, giving feedback, interviewing, managing, critical thinking, problem solving, team working and much of what are called 21st-century skills were once taught more widely through voice. They are skills fundamentally expressed as speech, that most fundamental of media. People have to learn both to speak up and, when they speak, to speak wisely and to good effect. It is also important, of course, to listen. For these reasons, the return of voice to learning is a good thing. Speaking to a computer, I suspect, also results in more transfer, especially if, in the real world, you are expected to articulate things in meetings or face to face with colleagues in the workplace.
2. Podcasts
Another sign that voice is an important medium in itself is the podcast, which has surprised people with its popularity. This is an excellent post on that subject by Steve Rayson. The book ‘Podcasting: New Aural Cultures and Digital Media’ by Llinares, Fox and Berry (2018) is an in-depth look at the strengths of voice-only media: the ability to listen when you want (timeshifting), use while walking, running, exercising and driving, and long pieces with more depth, often with multiple participants. In addition, they make you feel as though you are there in the conversation, with a sense of intimacy, as this is ‘listening’ not just ‘hearing’, especially when wearing headphones. Podcasts should be used more in learning.
3. Podcasts and online learning
We’ve been using podcasts in WildFire. One real example features a senior clinician, who ran and authored a globally significant medical trial in asthma. We let the learner listen intently to the podcast (an interview), then grab the transcript (automatically transcribed into text) to produce a more active and effortful learning experience, with free-text input. You get the best of both worlds: an intimate and reflective experience with the expert, as if you were there with him, then you reflect, retrieve, retain and can recall what you need to learn. Note that the ‘need to know’ stuff is not every single word, but the useful points about the scale of the trial, its objectives and findings.
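WildFire’s actual pipeline is not public, so the following is only a hypothetical sketch of the ‘effortful’ step described above: taking a transcript sentence and blanking out key terms so the learner must retrieve and type them. The sample sentence, figures and key terms are invented for illustration.

```python
import re

def make_retrieval_items(sentence, key_terms):
    """Blank out each key term so the learner must retrieve and type it."""
    items = []
    for term in key_terms:
        pattern = re.compile(re.escape(term), re.IGNORECASE)
        if pattern.search(sentence):
            items.append({"prompt": pattern.sub("_____", sentence),
                          "answer": term})
    return items

transcript = "The trial enrolled 4215 patients across 25 countries."
for item in make_retrieval_items(transcript, ["4215 patients", "25 countries"]):
    print(item["prompt"], "->", item["answer"])
```

A real system would choose the key terms automatically (and check the learner’s free-text answer semantically, not literally), but the retrieval principle is the same.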
4. Text to speech
We’ve also used AI, text to speech, to create introductions to online courses, making them more accessible and human. The underlying text file can be edited at any time, with ease, if it needs to be changed. These audio introductions have been used in a Train the Trainer course and a course for a major hotel chain, where learners may need something more than pure text and images.
5. Voice input
We’ve also developed voice-input online learning, where you don’t type in answers but ‘voice’ them in. This is a very different cognitive and learning experience from just clicking on multiple-choice options. Your memory recalls what you think you know in the phonological loop, a sort of inner ear where sounds are recalled and rehearsed before being either spoken or written. This is the precursor to expression. Voicing your input just seems more like real, not artificial, dialogue. The entire learning experience is voiced: navigation and retrieval with open input. This, we believe, will be useful for certain types of learning, especially with audiences that have problems with typing, literacy or dyslexia. Voice is starting to creep into online learning. It will grow further.
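As a rough illustration of what checking voiced input might involve, once a separate speech-to-text service has produced a transcript, the matcher has to be forgiving of fillers and punctuation. This is a simplified sketch, not WildFire’s semantic analysis; the filler list is an arbitrary illustrative choice.

```python
def matches_answer(spoken_text, accepted_answers):
    """Compare a speech-to-text transcript against accepted answers,
    ignoring case, punctuation and common filler words."""
    fillers = {"um", "uh", "the", "a", "an", "like"}

    def normalise(text):
        words = "".join(c.lower() if c.isalnum() or c.isspace() else " "
                        for c in text).split()
        return [w for w in words if w not in fillers]

    spoken = normalise(spoken_text)
    return any(normalise(ans) == spoken for ans in accepted_answers)

# Transcript as it might come back from a speech-to-text service
print(matches_answer("Um, the phonological loop!", ["phonological loop"]))  # True
```

A production system would go further and accept semantically equivalent phrasings, not just normalised exact matches.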
6. VR
One of the problems in VR is the inability to type and click on anything. Put on a headset and typing, where possible at all, is far too slow and clumsy. It is much more convenient, and natural, to speak within that immersive world. This opens up the possibility of more flexible learning within VR. Many knowledge components, decisions or communications within the simulation can be voiced as they would be in the real world. Voice will therefore enable more simulation training.
7. Feedback
Voiced feedback is used by some, obviously in coaching and mentoring, but also in feedback to students about assignments. The ease of recording, along with the higher impact on the learner in terms of perceived interest by the teacher, makes this a powerful feedback method.
8. Assessment
So much learning is text based when so much of the real world is voice based. Spoken assessment is, of course, normal in language training, but shouldn’t we be expected to voice our opinions, even voice critical pieces for assessment? Oral examinations are relatively rare, but they may become desirable as newer, softer skills come into demand.
9. Chatbots
Voice interfaces with chatbots have been launched on home devices such as Alexa, but we will see domain-specific chat emerge. Google Duplex was the first real showcasing of a conversation-sensitive product that can actually make voice calls to a restaurant or hairdresser to make appointments. This is not easy, and it is on limited release. But it is a sign of things to come: more prolonged dialogue by voice.
10. Voice agents
Learning techniques such as mentoring, coaching and counselling will, in time, benefit from this voiced approach. CBT counselling bots have shown promising results in clinical trials, and the anonymity, even the fact that the counsellor is NOT human, has proven to be a rather counterintuitive advantage.
Conclusion
Online learning needs to pay attention to AI-driven voice. It is an underlying consumer technology, now ubiquitous on phones and increasingly in our homes. It is natural, convenient, intimate and human. It has, when used wisely, the ability to lift online learning out of the text and click model in all sorts of imaginative ways.


Friday, January 11, 2019

This 'less is more' AI technique saves time, money and helps increase retention...

AI is many things and it is best employed in learning on specific narrow tasks. That’s what we’ve been doing, using AI to create content, semantically analyse free text input, create text to speech podcasts and curate content at WildFire.
One problem we have tackled is the simple fact that the INPUTS into learning content tend to be overlong, overwritten and too detailed. Training departments often get given huge PDFs, long slide decks packed with text or overlong video. To be fair, those in video production are normally professional enough to edit it down to a reasonable length, but huge documents and PowerPoints are, I’d say, the norm.
AI can be used to automatically shorten this text. This can be done in two ways (or in combination):
   Extractive
   Abstractive
Extractive
Extractive summarisation keeps the text intact and simply removes what it judges to be less useful content. It uses techniques such as term frequency–inverse document frequency (TF-IDF), which gives you a measure of how important words are within a corpus (a dataset of text). It looks for sentences containing these important words and extracts them. There are many more sophisticated extractive algorithms, but you get the idea.
The advantage of this approach is that you retain the integrity of the original text, which may be useful if it has been through a regulatory, legal or subject matter review.
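The extractive approach can be sketched in a few lines of Python. This toy scorer ranks sentences by the average TF-IDF weight of their words and keeps the top scorers in their original order; production systems add far more, but the shape is the same. The sample text is invented for illustration.

```python
import math
import re
from collections import Counter

def extractive_summary(text, n_sentences=2):
    """Score sentences by average TF-IDF weight of their words and
    keep the top scorers, preserving the original order."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    docs = [re.findall(r"[a-z']+", s.lower()) for s in sentences]
    n = len(docs)
    df = Counter(w for doc in docs for w in set(doc))   # document frequency

    def score(doc):
        tf = Counter(doc)                               # term frequency
        return sum(c * math.log(n / df[w]) for w, c in tf.items()) / (len(doc) or 1)

    ranked = sorted(range(n), key=lambda i: score(docs[i]), reverse=True)
    keep = sorted(ranked[:n_sentences])                 # back to original order
    return " ".join(sentences[i] for i in keep)

source = ("Compliance training matters to every organisation. "
          "The policy applies to all staff without exception. "
          "Fines for breaches of the policy can reach four percent of global turnover. "
          "Staff must report any breach of the policy within seventy two hours.")
print(extractive_summary(source, 2))
```

Dividing by sentence length stops the scorer from simply favouring the longest sentences.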
Abstractive
Abstractive summarisation tends to use deep learning models and neural networks to understand the content, then generate summarised content as a précis. Free from the constraint of having to be loyal to the original structure, these algorithms write their own abstract, getting down to the real essence of the text.
This more powerful technique is more likely to provide a tighter, more suitable output for learning – shorter and a more optimal distillation of the meaning.
Productivity
This is useful in increasing the productivity of any educational or training design team, as you dramatically shorten this necessary editing task. On large PDFs, not uncommon in compliance and SOWs, these techniques really do work well. But they also work well with any text from articles, papers, books, even PowerPoint text and video transcripts. We already automatically grab transcripts from YouTube, and this extra step is useful in reducing spoken text down to its real substance, as you often get asides and content that works well on screen but not as text. This combination of video plus detailed, effortful text, where you pick up the detail and have to make the cognitive effort to show understanding and actual recall of the content, is a useful one. Note that you can scale down in steps until you get to what you feel is an optimal précis. We’ve also found it useful because it surfaces overwriting, repetition, even errors.
Once agreed, the shorter text can be put into WildFire, where other forms of AI create the content, in minutes not months, again dramatically decreasing both time to delivery and costs. The AI creates the content and analyses free-text input, which is significantly better in terms of improving both retention and recall.
Time matters
This reduction in time is important, as training design has traditionally been a bit of a block in the process. A business sponsor comes to the training department and is told it will take weeks or months; their reaction is often simply to walk away. Faster turnaround also means you are seen in the business as delivering timely and, importantly, not over-engineered solutions to business problems.
Less is more
A point that is often overlooked is that this is wholly in line with the psychology of learning, which screams ‘less is more’ at us. A good motto that itself summarises what learning designers have to do is Occam’s Razor: use the minimum number of entities to reach the given goal. This is true of interfaces, but it is also true of content design, media design and the needs of learners.
Our limited working memories, along with the need for chunking and retrieval, make it essential to be precise and as short as possible with learning content. Many courses are overlong, with content that is not essential and will soon be forgotten. What learners and businesses want is the crisp essence, what they need to know, not the padding.
Conclusion
This AI technique can be used alongside other techniques to massively increase speed of delivery, reduce cost and, just as importantly, improve efficacy. Your learners will be grateful, as will your business sponsors.


Thursday, January 10, 2019

10 things you need to know before you buy or build a chatbot

As Eliza showed over 50 years ago, and as Nass and Reeves showed is generally true for technology, we are easily fooled into anthropomorphising chatbots and reading agency into technology in general. In truth, chatbots don’t talk to you, they pretend to talk to you. They are tricksters. In a sense, all human-machine interaction is trickery. It is, in the end, only software being mathematically executed with some human scripts thrown in. Nevertheless, chatbots are surprisingly successful. Even simple Alexa has been a massive hit, and she (well, it) only answers simple questions, with little or no dialogue.
Interestingly, this immediately raises an issue for chatbot deployment: setting ‘expectations’. Do you tell users that it is just a piece of software or do you keep up the ‘magic’ myth? How honest will you be about its capability? Set the bar too high and you’ll get lots of disappointed users. Here are a few other practical things to think about when you enter the weird and wonderful botland…
1. Domain knowledge
First up, on expectations, and this is really important: remember that chatbots are not generalists. They are domain specific, good at specific tasks within defined domains. Google Duplex works only because it does domain-specific tasks: calling a restaurant or booking a hairdressing appointment. Some services, such as Dialogflow and Liveperson, offer domain-specific stores of messaging transcript data, with detailed tasks for each industry sector. Some even focus on core use cases, which are mostly designed around customer service. Most are a long way off being a genuine teacher, coach or mentor, as they lack the general ability to deal with a breadth of unexpected queries and answers. So dial your expectations down a notch or you’ll be setting yourself up for failure.
2. Voice
Your chatbot needs to have a voice. It’s too easy to just throw a jumble of responses into a database and hope for the best. In organisations, you may need to be on brand, talk like an expert and not a teenager, use humour (or not). Define a persona and build a style guide. At the end of the day, lots of responses have to be written and they need to sound as though they have a single voice. In learning especially, you have to be careful in tone. Too many chatbots have a surfeit of phrases that sound as though they’re trying too hard to be cool or funny. In learning, one may want to be a little more serious. This depends, of course, on your intended audience and the subject matter. Whatever the project, think about the ‘voice’ in this wider sense.
3. Manifestation
Linked to voice is the visual and aural manifestation of your chatbot. Think carefully about the appearance of the chatbot. Some stay gender neutral, others are identified as male or female. Many, perhaps too many, appear like 1950s square robots. Others have faces, micro-expressions, even animation. Then there’s the name. Be careful with this – it matters. And do you want one name or a separate name for each domain or course? Giving your bot a face seems a little odd to me; I prefer a bot identity that’s a little more hidden, almost unobtrusive, leaving the persona to be built in the mind of the user.
4. Natural language processing
Understand what level of technology you want to use. This can mean lots of things, from simple keyword recognition to full speech recognition (as in Amazon Lex). Be very careful here, as this is rarely as good as vendors claim it to be. When a vendor says they are using deep learning or machine learning, that can mean many things, from very basic NLP techniques to more dynamic, sophisticated tasks. Get used to the language of ‘intents’ – this is related to the domain-specific issue above. Chatbots need to have defined tasks, namely ‘intents’ (the user’s intentions), identified as named actions and objects, such as ‘show weather’. These are qualified by ‘entities’. It is worth getting to grips with the vocabulary of NLP when buying or building chatbots.
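To make ‘intents’ concrete, here is a deliberately naive sketch in which hand-written keyword sets stand in for a trained NLP model. Real services such as Dialogflow or Lex learn these mappings from example utterances and also extract entities (dates, places, quantities); the intents and keywords below are invented for illustration.

```python
# Illustrative intents: keyword sets stand in for a trained NLP model
INTENTS = {
    "show_weather": {"weather", "forecast", "rain", "sunny"},
    "set_timer":    {"timer", "remind", "minutes", "alarm"},
    "play_music":   {"play", "music", "song", "album"},
}

def classify_intent(utterance):
    """Pick the intent whose keyword set best overlaps the utterance;
    return None when nothing matches at all."""
    words = set(utterance.lower().replace("?", " ").replace(",", " ").split())
    scores = {intent: len(words & kws) for intent, kws in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

print(classify_intent("Will it rain tomorrow?"))   # show_weather
print(classify_intent("Tell me a joke"))           # None
```

The `None` branch matters: it is exactly the point at which the failsafe tactics discussed below kick in.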
5. Building
Many chatbot services offer a no-coding tool to build your flow; others require more complex skills. Flowcharting tools are common, and these often result in simply asking users to choose from a set of options and branching from there. To be fair, that keeps you (and the bot) on track, which may be the way to go in structured learning. Others will accept open input but steer you towards certain types of responses. One thing is for sure: you need new skill sets. Traditional interactive design skills will help, but not much. This is about dialogue, not monologue; about understanding complex technology, not just pages of HTML.
6. Your data
How do you get your data into their system? This is not trivial. How do you get your content, which may exist as messages, PDFs, PowerPoints and other assets, into the format that is needed? This is far from automatic. Then, if the chatbot uses complex AI techniques, there’s the training process. You really do need to understand the data issues – what, where and how the data is to be managed – and, of course, GDPR.
7. Hand off to humans
What happens when a chatbot fails? Believe me, this is common. A number of failsafe tactics can be employed. You can do the common thing and ask the person to repeat themselves: “Sorry, I didn’t catch that”, “Could you elaborate on that?” The chatbot may even try to use a keyword to save the flow, distract, change the subject and come back to the flow a little later. So think about failsafes. If all else fails, as it does for many customer chatbots, they default out to a real human. That’s fine in customer service, and many services, like Liveperson and Boutique.ai, offer this functionality. It is not so fine if you’re designing an autonomous learning system.
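A failsafe policy can be sketched as a tiny state machine: reprompt a limited number of times while the NLP confidence is low, then hand off to a human. The 0.6 threshold and the retry limit here are arbitrary illustrative choices, not a standard.

```python
def handle_turn(user_text, nlu_confidence, retries, max_retries=2):
    """Decide what the bot should do when understanding is shaky:
    reprompt a couple of times, then hand off to a human agent."""
    if nlu_confidence >= 0.6:
        return "answer", retries          # confident enough to respond
    if retries < max_retries:
        return "reprompt", retries + 1    # "Sorry, I didn't catch that"
    return "handoff", retries             # escalate to a human

retries = 0
for confidence in (0.2, 0.3, 0.1):        # three shaky turns in a row
    action, retries = handle_turn("...", confidence, retries)
    print(action)                          # reprompt, reprompt, handoff
```

For an autonomous learning system the "handoff" branch would instead distract, change the subject, or fall back to a safe tutorial path.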
8. Channels
On what channels can the chatbot appear? There are lots of options here, and you may want to look at what comms channels you use in your organisation: website chat, in-app chat, Facebook Messenger, Slack, Google Assistant, Skype, Microsoft Teams, SMS, Twitter or email. The chatbot needs a home, and you may want to think about whether it is a performance support chatbot on your comms system, or a more specific chatbot within a course.
9. Integration
Does the chatbot have an open API, and does it integrate with other platforms? Don’t imagine that this will work easily from your LMS; it won’t. Integration into other systems may also be necessary.
10. Administration
Your chatbot has to be delivered from somewhere, so what are the hosting options, and is there monitoring, routing and management? Reporting and user statistics matter with chatbots, as you really do want to see if they deliver what they promise, with user stats, times and fallout stats. How are these handled and visualised? Does your chatbot vendor have 24/7 customer support? You may need it. Lastly, if you are using an external service, be careful about it changing without telling you (it happens), especially with the large tech vendors, like IBM and Microsoft.
Conclusion
We are only at the start of the use of chatbots in learning. The trick is to play around with all of the demos online, from the large vendors, before you start.
Remember that these are primarily chatbots for customer service. For learning purposes, I’d start with a learning company first. If you want any further advice on this contact me here.


Sunday, January 06, 2019

AI breakthroughs in learning in 2018

AI is good at narrow, prescribed tasks; it is hopeless at general tasks. This, in my view, is why big data and learning analytics projects are less appropriate in learning than more precise, proven uses of AI. There is a paucity of data in learning, and it is often messy, difficult to access and subject to overfitting and other problems when trying to make predictions.
On the other hand, using specific techniques at specific points on the learning journey – engagement, support, delivery and assessment, one can leverage AI to best effect. So here’s five ways this was done in 2018, in real projects, in real organisations, some winning major awards.
1. Chatbots
We’ve seen hundreds of early projects this year where chatbots have been used, promising much for their future. These include learning engagement, learning support, performance support, assessment and well-being. Google demonstrated Google Duplex, which mastered conversational structure in a limited domain, but well enough to fool restaurants into thinking a human was calling. It has been rolled out for further trials on selected Pixel phones. It builds on several different areas of natural language processing: speech to text, text to speech, trained neural networks and conversational structures. We can expect a lot more in this area in 2019.
2. Creation
The world of design has got bogged down in media production, as if just watching, listening or reading were enough to learn. Even media production can be automated, to a degree, with AI. We have been producing text-to-speech podcasts, using automated transcript creation from video and automated content creation, at last recognising that learning, as opposed to click-through consumption, needs fast, AI-generated production of high-effort learning experiences. Award-winning, world-beating projects are now created with AI, with no interactive designers.
3. Cognitive effort
Online learning has been trapped in largely linear media ‘experiences’ with low-effort, multiple-choice questions. This year we’ve seen services that use open input, either as single concepts or free text, both created by and interpreted semantically by AI. The ability of AI to interpret text input by learners automates both assessment and feedback. This was realised in real projects in 2018. It will only get better in 2019.
4. Personalisation
Good learning takes place with timely and relevant action. Almost everything we do online is mediated by AI that delivers timely and relevant options for people – searching on Google, connecting on Social Media, buying on Amazon, entertaining ourselves on Netflix. Adaptive, personalised learning finally showed convincing results on attainment across courses. We can expect a lot more of this in 2019. 
5. Curation
The ability to curate content, tapping into the vast cognisphere that is the web, is happening as part of course creation as well as through separate, searched curation. One can wire in external links to content to solve problems of competence, comprehension or curiosity.
Conclusion
Forget blockchain, badges and gamification. The underlying tectonic shift in learning technology will use AI. This is happening in healthcare, with significant ‘better than human’ applications appearing in 2018. This is happening in finance, with chatbots at the front-end and AI playing an increasing role in back-end systems. This is happening in manufacturing, with the automation of factories. This is happening in retail, as the selling, buying and delivery is increasingly through algorithms and AI. It is also happening in learning. This matters. If we are to adapt to the new normal of AI processes on employment, commerce and politics, we must make sure that education keeps up and that we equip ourselves and our children with better skills for this future.


Wednesday, January 02, 2019

Year of learning dangerously – my 15 highs and lows of 2018

So 2018 is behind us. I look back and think… what really happened, what changed? I did a ton of talks over the year, in many countries, to different types of audiences: teachers, trainers, academics, investors and CEOs. I wrote 65 blogs and a huge number of tweets and Facebook posts. I also ran an AI business, WildFire, delivering online learning content, and we ended the year nicely by winning a major award.
So this is not a year end summary nor a forecast for 2019. It’s just a recap on some of the weirder things that happened to me in the world of ‘learning’…
1. Agile, AI-driven, free text learning
As good a term as I can come up with for what I spent most of my year doing and writing about, mostly on the back of AI: real projects delivered to real clients, with AI-generated, award-winning content, superfast production times and a new tool in WildFire that gets learners to use free text, where we use AI (semantic analysis) as part of the learning experience. Our initial work shows that this gives huge increases in retention. That is the thing I’m most proud of this year.
2. Video is not enough
Another breakthrough was a WildFire tool that takes any learning video and turns it into a deeper learning experience, by taking the transcript and applying AI, not only to create strong online learning but also to use the techniques developed above to massively increase retention. Video is rarely enough on its own. It’s great at attitudinal learning, processes, procedures and things that require context and movement. But it is poor at detail and semantic knowledge, and has relatively poor retention. This led to working with a video learning company to do just that, as 2+2 = 5.
3. Research matters
I have never been more aware of the lack of awareness of research on learning and online learning than I was this year. At several conferences across the year I saw keynote speakers literally show and state falsehoods that a moment’s searching on Google would have corrected. These were a mixture of futurists, purveyors of ‘c’ words like creativity and critical thinking, and the usual snakeoil merchants. What I did enjoy was giving a talk at the E-learning Network on this very topic, where I put forward the idea that interactive design skills will have to change in the face of new AI tech. Until we take on board the solid body of research around effortful learning, illusory learning (learners don’t actually know how they learn or how they should learn), interleaving, desirable difficulties, spaced practice, chunking and so on, we’ll be forever stuck in click-through online learning, where we simply skate across the surface. It led me to realise that almost everything we’ve done in online learning may now be dated and wrong.
4. Hyperbolic discounting and nudge learning
Learning is hard, and it suffers from its consequences lying too far in the future for learners to care. Hyperbolic discounting explains why learning is so inefficient, but it also kicks us into realising that we need to counter it with some neat techniques, such as nudge learning. I saw a great presentation on this in Scotland, where I spoke at the excellent Talent Gathering.
5. Blocked by Tom Peters
The year started all so innocently. I tweeted a link to an article I wrote many moons ago about Leadership and got the usual blowback from those making money from, you guessed it, Leadership workshops, one of whom praised In Search of Excellence. So I wrote another piece showing that this book, and another, Good to Great, turned out to be false prophets: much of what they said turned out to be wrong, and many of the companies they heralded as exemplars went bust. More than this, I argued that the whole ‘Leadership’ industry in HR had led, eventually, to the madness of Our Great Leader, and my namesake, Donald Trump. In any case, Tom Peters of all people came back at me and, after a little rational tussle, he blocked me. This was one of my favourite achievements of the year.
6. Chatting about chatbots
I did a lot of talks on chatbots this year, after being involved with Otto at Learning Pool (great to see them winning Company of the Year at the Learning Technologies Awards), building one of my own in WildFire and playing around with many others, like Woebot. They’re coming of age and have many uses in learning. And bots like Google’s Duplex are glimpses into an interesting future based more on dialogue than didactic learning. My tack was that they are a natural and frictionless form of learning. We’re still coming to terms with their possibilities.
7. Why I fell out of love with Blockchain
I wrote about blockchain, I got re-married on Blockchain, I gave talks on Blockchain, I read a lot about Blockchain… then I spoke at an event of business CEOs where I saw a whole series of presentations by Blockchain companies and realised that it was largely vapourware, especially in education. Basically, I fell out of love with Blockchain. What no one was explaining were the downsides, that Blockchain had become a bit of a ball and chain.
8. And badges…
It’s OK to change your mind on things and in its wake I also had second thoughts on the whole ‘badges’ thing. This was a good idea that failed to stick, and the movement had run its course. I outlined the reasons for its failure here.
9. Unconscious bias my ass
The most disappointing episode of the year was the faddish rush towards this nonsense. What on earth gave HR the right to think that they could probe my unconscious with courses on ‘unconscious bias’? Of course, they can’t, and the tools they’re using are a disgrace. This is all part of the rush towards HR defending organisations AGAINST their own employees. Oh, and by the way, those ‘wellness’ programmes at work – they also turned out to be a waste of time and money.
10. Automated my home
It all started with Alexa. Over the months I’ve used it as a hub for timers (meals in the oven, Skype calls, deadlines), then for music (Amazon Music), then the lights, and finally the TV. In the kitchen we have a neat little robot that emerges on a regular basis to clean the ground floor of our house. It does its thing, then goes back to plug itself in and have a good sleep. We also have a 3D printer, which we’re using to make a 3D drone… which brings me to another techy topic – drones.
11. Drones
I love a bit of niche tech and got really interested in this topic (big thanks to Rebecca, Rosa and Veronique, who allowed me to attend the brilliant E-learning Africa and see Zipline and another drone company in Rwanda, where I was bitch-slapped by a gorilla but that, as they say, is another story). On my return I spoke about Drones for Good at the wonderful Battle of Ideas in London (listen here). My argument, outlined here, was that drones are not really about delivering pizzas and flying taxis, as that will be regulated out in the developed world. However, they will fly in the developing world. Then along came the Gatwick incident…
12. Graduation
So I donned the Professorial Gown, soft Luther-like hat and was delighted to attend the graduation of hundreds of online students at the University of Derby, with my friends Julie Stone and Paul Bacsich. At the same time I helped get Bryan Caplan across from the US to speak at Online Educa, where he explained why HE is in some trouble (mostly signalling and credential inflation) and that online was part of the answer. 
13. Learning is not a circus and teachers are not clowns
The year ended with a rather odd debate at Online Educa in Berlin, around the motion that “All learning should be fun”. Now I’m as up for a laugh as the next person. And to be fair, Elliot Masie’s defence of the proposition was laughable. Learning can be fun, but that’s not really the point. Learning needs effort. Just making things ‘fun’ has led to the sad sight of click-through online learning. It was the perfect example of experts who knew the research versus deluded sellers of mirth.
14. AI
I spent a lot of time on this in 2018 and plan to spend even more time in 2019. Why? Beneath all the superficial talk about Learning Experiences and whatever fads come through, beneath it all lies technology that is smart and has already changed the world forever. AI has changed, and will continue to change, the very nature of work. It will, therefore, change why we learn, what we learn and how we learn. I ended my year by winning a Learning Technologies Award with TUI (thanks Henri and Nic) and WildFire. We did something groundbreaking: produced useful learning experiences, in record time, using AI, for a company, with real, demonstrable impact.
15. Book deal
Oh and got a nice book deal on AI – so head down in 2019.
