Saturday, September 22, 2018

Learning Designers will have to adapt or die. Here are 10 ways they need to adapt to AI…

Interactive Designers will have to adapt or die. AI is starting to play a major part in the online learning landscape, right across the learning journey: it is now being used for learner engagement, learner support, content creation, assessment and so on. It will eat relentlessly into the traditional skills that have been in play for nearly 35 years. The old, core skillset was writing, media production, interactions and assessment.
In one company, in which I’m a Director, we see a shift towards AI services and products, and we’re having to identify individuals with the skills and attitudes to deal with this new demand. This means understanding the new technology (not trivial), learning how to write for chatbots and dealing more with AI-aided design and curation, rather than designers doing everything themselves. It’s a radical shift.
In a recent keynote I summarised this shift as follows...

In another context, using services like WildFire means not using traditional interactive designers, as the software largely does the job. It identifies the learning points, automatically creates the interactions, finds the curated links and assesses, formatively and summatively. It creates content in minutes, not months. This is the way online learning is going. This stuff is here, now.
The gear-shift in skills is interesting and, although still uncertain, here are some suggestions based on my concrete experience of making and observing this shift in three separate companies.

1. Technical understanding
Designers, or whatever they’re called now or in the future, will need to know far more about what the software does: its functionality, strengths and weaknesses. In some large projects we have found that a knowledge of how the NLP (Natural Language Processing) works has been an invaluable skill, along with an ability to troubleshoot by diagnosing what the software can and cannot do. Those with some technical understanding fare better here, as they can understand both the potential and the limitations.
This is not to say that you need to be able to code or have AI or data science skills. It does mean that you will have to know, in detail, how the software works. If it uses semantic techniques, make the effort to understand the approach, along with its strengths and weaknesses. With chatbots, it is all too easy to set expectations of performance too high; you need to know where these lines lie in terms of what you, as a chatbot designer and writer, have to do. Similarly with data analysis. With traditional online learning, the software largely delivers static pages with no real semantic understanding, adaptability or intelligence. AI-created content is very different and has a sort of ‘life of its own’, especially when it uses machine learning. At the very least, get to know what the major areas of AI are, how they work, and feel comfortable with the vocabulary.
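If that sounds abstract, here is a deliberately toy Python sketch of the simplest possible ‘learning point’ extraction – frequency counting of content words. Real systems, WildFire included, use far richer semantic techniques, so treat everything here (the function name, the stopword list, the example text) as illustrative assumptions, not anyone’s actual method:

```python
# Toy illustration of how NLP might surface candidate 'learning points'
# from source text: keep frequent content words, drop stopwords.
# Purely a sketch - real products use parsing, embeddings, entity linking.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "is", "are", "of", "in", "to", "and",
             "for", "on", "with", "that", "this", "it", "as", "by", "be"}

def candidate_learning_points(text: str, top_n: int = 5) -> list[str]:
    words = re.findall(r"[a-z]+", text.lower())
    content = [w for w in words if w not in STOPWORDS and len(w) > 3]
    return [word for word, _ in Counter(content).most_common(top_n)]

source = ("Insulin is a hormone produced in the pancreas. Insulin "
          "regulates blood glucose by helping cells absorb glucose.")
print(candidate_learning_points(source))  # e.g. ['insulin', 'glucose', ...]
```

Even a toy like this shows the kind of processing a designer should be able to reason about, and where it will fail (synonyms, negation, context).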

2. Writing
Text remains the core medium in online learning. It remains the core medium in online activity generally. We have seen the pendulum swing towards video, graphics and audio but text will remain a strong medium, as we read faster than we listen, it is editable and searchable. That's why much social media and messaging is still text at heart. When I ran a large traditional online learning company we regarded writing as the key skill for IDs. We put people through literacy tests before they started, no matter what qualifications they had. It proved to be a good predictor, as writing is not just about turn of phrase and style, it is really about communications, purpose, order, logic and structure. I was never a fan of ‘storytelling’ as an identifiable skill.

However, the sort of writing one has to do in the new world of AI has more to do with dialogue, and with being sensitive to what NLP (Natural Language Processing) can and cannot do. To write for chatbots one must really know the technology’s limits, and also write natural dialogue (actually a rare skill). That’s why the US tech giants hire screenwriters for these tasks. You may also find yourself writing for ‘voice’. For example, WildFire automatically produces podcast audio using text-to-speech, and that needs to be written in a certain way. Beyond this, coping with synonyms and the vagaries of natural language processing needs an understanding of all sorts of NLP software techniques.
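To make the text-to-speech point concrete, here is a minimal sketch using gTTS, one freely available Python library – I am not suggesting this is what any particular product uses. Note how the script is written for the ear: short sentences, direct address, no references to anything on screen:

```python
# Hedged sketch: turning learning text into podcast-style audio with a
# text-to-speech library (gTTS is one open option, shown for illustration).
# pip install gTTS
from gtts import gTTS

# Writing for voice: short sentences, no 'see the diagram below'.
script = ("Welcome back. Before we continue, try to recall the three "
          "stages of the process you studied yesterday.")

tts = gTTS(text=script, lang="en")   # requires an internet connection
tts.save("lesson_audio.mp3")         # playable, podcast-style output
```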

3. Interaction
Hopefully we will see a reduction in formulaic Multiple-Choice Question production. MCQs are difficult to write and often flawed. Then there’s the gratuitously used ‘drag and drop’ and the hideously patronising ‘Let’s see what Philip, Alisha and Sue think of this…’, where you click on a face and get a speech bubble of text. I find that this is the area where most online learning really sucks.
This, I think, will be an area of huge change, as the limited forms of MCQ start to be replaced by open input of words, numbers and short text answers. NLP allows us to interpret this text. We do all three in WildFire with little interactive design (only editing to select the ones we want). There is also voice interaction to consider, which we have been implementing, so that the entire learning experience, all navigation and interaction, is voice-driven. This needs some extra skills in terms of managing expectations and dealing with the vagaries of speech recognition software. Personalisation may also have to be considered. I’m an investor and Director in one of the world’s most sophisticated adaptive learning companies, CogBooks, and believe me, this software is complex; the sequencing has to be handled by software, not designers, which is what makes personalisation at scale possible. With chatbots, where we’ve been designing everything from invisible LMS bots to tutorbots, the whole form of interaction changes, and you need to see how they fit into the workflow through existing collaborative tools such as Slack or Microsoft Teams.
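As a flavour of what interpreting open input involves at its very simplest, here is a small Python sketch using fuzzy string matching to tolerate typos. This is a deliberately crude stand-in for real NLP, which also has to handle synonyms and paraphrase; the function and threshold are my own illustrative assumptions:

```python
# Minimal sketch of interpreting open input: accept a typed answer if it
# is 'close enough' to the expected term, tolerating typos and variants.
# Real NLP-based products go much further (synonyms, semantics).
from difflib import SequenceMatcher

def accept_answer(user_input: str, expected: str,
                  threshold: float = 0.8) -> bool:
    ratio = SequenceMatcher(None, user_input.strip().lower(),
                            expected.lower()).ratio()
    return ratio >= threshold

print(accept_answer("photosynthesys", "photosynthesis"))  # True (typo tolerated)
print(accept_answer("respiration", "photosynthesis"))     # False
```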

4. Media production
As online learning became trapped in ‘media production’, most of the effort and budget went into the production of graphics (often illustrative and not meaningfully instructive), animation (often overworked) and video (not enough in itself). Media rich is not necessarily mind rich, and the research from Nass, Reeves, Mayer and many others shows that the excessive use of media can inhibit learning. Unfortunately, much of this research is ignored. We will see this change as the balance shifts towards effortful and more efficient learning.

There will still be a need for good media production, but it will lessen, as AI can produce text from audio and create text and dialogue. Video is never enough in learning and needs to be supplemented by other forms of active learning. AI can do this, making video an even stronger medium. Curation strategies are also important. We often produce content that is already out there, but AI can automatically link to content or provide tools for curating it.

Lastly, a word on design thinking. The danger is in seeing every learning experience as a uniquely designed thing, to be subjected to an expensive design thinking process, when design can be embodied in good interface design, use A/B testing and avoid the trap of seeing learning as all about look and feel. Design matters but learning matters more.

5. Assessment
So many online learning courses have a fairly arbitrary 70-80% pass threshold. The assessments are rarely the result of any serious thought about the actual level of competence needed, and if you don’t assess the other 20-30%, that gap may, in healthcare for example, kill someone. There are many ways in which assessment will be aided by AI, in terms of the push towards 100% competence, adaptive assessment, digital identification and so on. This will be a feature of more adaptive, AI-driven content. We have created online learning that can be switched to assessment mode with one tick in the content management system.

6. Data skills
SCORM is looking like an increasingly stupid limit on online learning. To be honest, it was from its inception – I was there. Completion is useful but rarely enough, so it is important to supplement SCORM with far more detailed data on user behaviours. But even when data is plentiful, it needs to be turned into information, and visualised to make it useful. Knowing how to visualise data is one useful skill. Information then has to be turned into knowledge and insights. This is where skills are often lacking. First you have to know the many different types of data in learning and how data sets are cleaned, then the techniques used to extract useful insights, often machine learning. You need to distinguish between data as the new oil and data as the new snake oil.

We take data, clean it, process it, then look for insights – clustering and other statistical techniques to find significant patterns and correlations. For example, do course completions correlate with an increase in sales in those retail outlets that complete the training? Training can then be seen as part of a business process, where AI not only creates the learning but does the analysis, all in a virtual and virtuous loop that informs and improves the business. It is not that you require deep data science skills, but you do need to become aware of the possibilities of data production, the danger of GIGO (garbage-in/garbage-out) and the techniques used in this area.
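For the completion-versus-sales question, the analysis can start as simply as this pandas sketch. The numbers are invented for illustration; a real analysis would clean the data and test for significance before drawing any conclusion:

```python
# Hedged sketch of the completion-vs-sales question: given per-outlet
# data (illustrative numbers, not real), compute the correlation between
# course completion rates and sales uplift.
import pandas as pd

outlets = pd.DataFrame({
    "completion_rate": [0.95, 0.60, 0.80, 0.30, 0.75, 0.50],
    "sales_uplift_pct": [4.2, 1.1, 3.5, -0.5, 2.8, 1.9],
})

r = outlets["completion_rate"].corr(outlets["sales_uplift_pct"])
print(f"Pearson r between completions and sales uplift: {r:.2f}")
```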

7. User testing
In one major project, we produced so much content, so quickly, that the client had trouble keeping up on quality control at their end. We were making it faster than it could be tested! You will find that the QA process is very different, with quick access to the actual content allowing for immediate testing. In fact, AI tends to produce fewer mistakes in my experience, as there is less human input, always a source of spelling, punctuation and other errors. I used to ask graphic artists to always cut and paste text, as retyping was a source of endless QA problems. The advantage of using AI-generated content is that all sides can screen-share to solve residual problems on the actual content seen by the learner. We completed one large project without a single face-to-face meeting. This quick production also opens up the possibility of A/B testing with real learners; we have seen A/B testing used with gamification content – with surprising results.
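As a sketch of what A/B testing learners involves statistically, here is a two-proportion z-test in plain Python, with made-up pass rates for two versions of a learning experience:

```python
# Sketch of A/B testing two versions of a course: compare pass rates
# with a two-proportion z-test (the numbers below are invented).
from math import sqrt
from statistics import NormalDist

pass_a, n_a = 78, 100   # version A: 78 of 100 learners passed
pass_b, n_b = 90, 100   # version B: 90 of 100 learners passed

p_pool = (pass_a + pass_b) / (n_a + n_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (pass_b / n_b - pass_a / n_a) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))
print(f"z = {z:.2f}, p = {p_value:.3f}")  # p is about 0.02 here, below 0.05
```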

8. Learning theory
In my experience, few interactive designers can name many researchers or identify key pieces of research on, let’s say, the optimal number of options in an MCQ (answer at foot of this article), retrieval practice, length of video, effects of redundancy, spaced-practice theory, or even the rudiments of how memory works (episodic v semantic). This is elementary stuff but it is rarely taken seriously.

With the implementation of AI, the AI has to embody good pedagogic practice. This is interesting, as we can build good, well-researched, learning practice into the software. This is what we have been doing in WildFire, where effortful learning, open input, retrieval and spaced practice are baked into the software. Hopefully, this will drive online learning away from long-winded projects that take months to complete, towards production that takes minutes not months, and learning experiences that focus on learning not appearance.
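As an illustration of what ‘baked in’ spaced practice can mean in software, here is a minimal Python sketch that schedules reviews at expanding intervals. The 1/3/7/14/30-day ladder is a common heuristic, assumed here purely for illustration, not WildFire’s actual algorithm:

```python
# Illustrative sketch of 'baking in' spaced practice: schedule reviews
# at expanding intervals. Real products adapt this per learner; this
# toy version does not.
from datetime import date, timedelta

REVIEW_INTERVALS_DAYS = [1, 3, 7, 14, 30]  # common heuristic ladder

def review_schedule(first_study: date) -> list[date]:
    return [first_study + timedelta(days=d) for d in REVIEW_INTERVALS_DAYS]

for due in review_schedule(date(2018, 9, 22)):
    print("review due:", due)
```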

9. Agile production
Communication with AI developers and data scientists is a challenge. They know a lot about the software but often little about learning and the goals. On the other hand, designers know a lot about communication, learning and goals. Agile techniques, with a shared whiteboard, are useful. There are formal agile techniques around identifying the user story, extracting features, then coming to agreed tasks. Comms are tougher in this world, so learn to be forgiving.

Then there’s communication with the client and SMEs. This can be particularly difficult, as some of the output is AI-generated, and as AI is not remotely human (not conscious or cognitive) it can produce mistakes. You learn to deal with this when you work in this field – overfitting, false positives and so on. But this is often not easy for clients to understand, as they will be used to design documents, scripts and traditional QA techniques. I once had the AI automatically produce a link for the word ‘blow’, a technique nurses ask of young patients when they’re using sharps or needles. The AI linked to the Wikipedia page for ‘blow’ – which was cocaine – easily remedied, but odd.

We have also worked to reduce iterations with SMEs, the cause of much of the high cost of online learning. If the AI is identifying learning points and curated content, using already approved documents, PPTs and videos, the need for SME input is lessened. As tools like WildFire produce content very quickly, the clients and SME can test and approve the actual content, not from scripts but in the form of the learning experience itself. This saves a ton of time.

10. Make the leap
AI is here. We are, at last, emerging from a 30-year paradigm of media production and multiple-choice questions, in largely flat and unintelligent learning experiences, towards smart online learning that behaves more like a good teacher: you are taught as an individual, with a personalised experience, challenged and, rather than endlessly choosing from lists, engaged in effortful learning, using dialogue, even voice. As a Learning Designer, Interactive Designer, Project Manager, Producer, whatever, this is the most exciting thing to have happened in the last 30 years of learning.
Most of the Interactive Designers I have known, worked with and hired over the last 30-plus years have been skilled people, sensitive to the needs of learners, but we must always be willing to ‘learn’, for that is our vocation. To stop learning is to do learning a disservice. So make the leap!

Conclusion
In addition, those in HR and L and D will have to get to grips with AI. It will change the very nature of the workforce, which is our business. This means it will change WHY we learn, WHAT we learn and HOW we learn. Almost all online experiences are now mediated by AI – Facebook, Twitter, Instagram, Amazon, Netflix… except in learning! But that is about to change. What is needed is a change in mindset, as well as tools and skills. It may be difficult to adapt to this new world, where many aspects of design will be automated. I suspect that it will lead to a swing away from souped-up graphic design back towards learning. This will be a good thing.

Monday, September 17, 2018

Breakthrough that literally opens up online learning? Using AI for free text input

When teachers ask learners whether they know something, they rarely ask multiple-choice questions. Yet the MCQ remains the staple in online learning, even at the level of scenario-based learning. Open input remains rare, yet there is ample evidence that it is superior in terms of retention and recall. Imagine allowing learners to type, in their own words, what they think they know about something, with AI doing the job of interpreting that input.

Open input
We’ve developed different levels of more natural open input that take online learning forward. The first involves using AI to identify the main concepts and getting learners to enter the text/numbers, rather than choosing from a list. The cognitive advantage is that the learner focuses on recalling the idea into their own mind, an act that has been shown to increase retention and recall. There is even evidence that this type of retrieval has a stronger learning effect than the original act of being taught the concept. These concepts then act as ‘cues’ on which learners hang their learning, for recall. We know this works well.
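A toy sketch of this concept-cued open input is below. The ‘AI’ step of identifying concepts is faked here with a hand-picked list, but the learner-facing mechanic is the same: retrieve and type, don’t recognise and click:

```python
# Toy sketch of concept-cued open input: blank out key terms so the
# learner must retrieve and type them, rather than pick from a list.
# The concept identification step is hand-faked here for illustration.
import re

def make_cloze(sentence: str, concepts: list[str]) -> str:
    for concept in concepts:
        blank = "_" * len(concept)
        sentence = re.sub(re.escape(concept), blank, sentence,
                          flags=re.IGNORECASE)
    return sentence

text = "Insulin regulates blood glucose levels."
print(make_cloze(text, ["insulin", "glucose"]))
# -> "_______ regulates blood _______ levels."
```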

Free text input
But let’s take this a stage further and try more general open input. The learner reads a number of related concepts in text, with graphics, even watching video, and has to type in a longer piece of text, in their own words. This we have also done. This longer form of open input allows the learner to rephrase and generate their thoughts, and the AI software does analysis on this text.

Ideally, one takes the learner through three levels:

1. Read text/interpret graphics/watch video
2. AI generated open-input with cues
3. AI generated open-input of fuller freeform text in your own words

This gives us a learning gradient of increasing levels of difficulty and retrieval. You move from exposure and reflection, to guided effortful retrieval, to full, unaided retrieval. Our approach increases the efficacy of learning in terms of speed of learning, better retrieval and better recall, all generated and executed by AI.

The process of interpreting the generated text, written in your own words, copes with synonyms, words close in meaning and different sentence constructions, as it uses recent advances in AI. It also uses the more formal data from the structured learning. We have also got this working with voice-only input, another breakthrough in learning, as it is a more natural form of expression in practice.
The opportunities for chatbots are also immense.
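One plausible way to score freeform answers against reference content is sentence embeddings plus cosine similarity. The sketch below uses the open-source sentence-transformers library and a stock model purely as an illustration – the post does not disclose the actual technique used:

```python
# Hedged sketch: scoring a freeform learner answer against reference
# content with sentence embeddings (one plausible approach, not
# necessarily the one any given product uses).
# pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small stock model

reference = "Insulin, made in the pancreas, lowers blood glucose."
learner = "The pancreas produces insulin which reduces sugar in the blood."

emb = model.encode([reference, learner])
score = util.cos_sim(emb[0], emb[1]).item()
print(f"semantic similarity: {score:.2f}")  # high despite different wording
```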

If you work in corporate learning and want to know more, please contact us at WildFire and we can show you this in action.
Evidence for this approach
Much advice and most practice from educational institutions – re-reading, highlighting and underlining – is wasteful. In fact, these traditional techniques can be dangerous, as they give the illusion of mastery. Indeed, learners who use reading and re-reading show overconfidence in their mastery, compared to learners who take advantage of effortful learning.

Yet significant progress has been made in cognitive science research to identify more potent strategies for learning. The first strategy, mentioned as far back as Aristotle, and later by Francis Bacon and William James, is ‘effortful’ learning. It is what the learner does that matters.

Simply reading, listening or watching, even repeating these experiences, is not enough. The learning is in the doing. The learner must be pushed to make the effort to retrieve their learning to make it stick in long-term memory.

Active retrieval
‘Active retrieval’ is the most powerful learning strategy, even more powerful than the original learning experience. The first solid research on retrieval was by Gates (1917), who tested children aged 8-16 on short biographies. Some simply re-read the material several times; others were told to look up and silently recite what they had read. The latter, who actively retrieved knowledge, showed better recall. Spitzer (1939) had over 3,000 11-12 year olds read 600-word articles, then tested them at intervals over two months. The greater the gap between the test (retrieval) and the original exposure, the greater the forgetting; the tests themselves seemed to halt forgetting. Tulving (1967) took this further with lists of 36 words, with repeated testing and retrieval. The retrieval led to as much learning as the original act of studying. This shifted the focus away from testing as mere assessment towards testing as retrieval, an act of learning in itself.

Roediger et al. (2011) did a study on text material covering Egypt, Mesopotamia, India and China, in the context of real classes in a real school, a middle school in Columbia, Illinois. Retrieval tests, only a few minutes long, produced a full grade-level increase on the material that had been subject to retrieval. McDaniel (2011) did a further study on science subjects with 16 year olds, on genetics, evolution and anatomy. Students who used retrieval quizzes scored 92% (A-) compared to 79% for those who did not. More than this, the effect of retrieval lasted longer, when the students were tested eight months later. This is why we design learning as a retrieval experience, largely using open input, where you have to pull things from your memory and make a real effort to type in the missing words, given their context in a sentence.

Open input
Most online learning relies heavily on multiple-choice questions, which have become the staple of much e-learning content. These have been shown to be effective, as almost any type of test item is effective to a degree, but they have been shown to be less effective than open response, as they test recognition from a list, not whether something is actually known.

Duchastel and Nungester (1982) found that multiple-choice tests improve performance on recognition in subsequent multiple-choice tests, while open input improves performance on recall from memory. This is called the ‘test practice effect’. Kang et al. (2007) showed, with 48 undergraduates reading journal-quality academic material, that open input is superior to multiple-choice (recognition) tasks. Multiple-choice testing had an effect similar to that of re-reading, whereas open input resulted in more effective student learning. McDaniel et al. (2007) repeated this experiment in a real course, with 35 students enrolled in a web-based Brain and Behavior course at the University of New Mexico. The open-input quizzes produced more robust benefits than multiple-choice quizzes.

‘Desirable difficulties’ is a concept coined by Elizabeth and Robert Bjork to describe the desirability of creating learning experiences that trigger effort, deeper processing, encoding and retrieval, to enhance learning. The Bjorks have researched this phenomenon in detail, showing that effortful retrieval and recall is desirable in learning, as it is the effort taken in retrieval that reinforces and consolidates that learning. A multiple-choice question is a test of recognition from a list; it does not elicit full recall from memory. Studies comparing multiple-choice with open retrieval show that when more effort is demanded of students, they have better retention. As open response takes cognitive effort, the very act of recalling knowledge also reinforces that knowledge in memory. Active recall develops and strengthens memory, improving the process of recall in ways that passive exposure – reading, listening and watching – does not.

Design implications
Meaning matters, and so we rely primarily on reading and open response, where meaningful recall is stimulated. This act alone is a strong reinforcer – stronger, indeed, than the original exposure. Interestingly, even when the answer is not known, the act of trying to answer is a powerful form of learning.

So the choice of open-response questions, where the user types in the words, followed by more substantial open input, is a deliberate design strategy to take advantage of known AI and learning techniques to increase recall and retention. Note that no learner is subjected to the undesirable difficulty of getting stuck, as letters are revealed one by one, and the answer is given after three attempts. Hints are also possible in the system.
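Here is a minimal sketch of that letter-reveal mechanic, so you can see how the ‘undesirable difficulty’ of getting stuck is designed out. The function and flow are my own illustration of the behaviour described above, not the actual implementation:

```python
# Sketch of the letter-reveal mechanic: each failed attempt reveals one
# more leading letter; after three attempts the full answer is shown,
# so no learner gets permanently stuck.
def attempt_loop(answer: str, attempts: list[str]) -> None:
    for i, guess in enumerate(attempts[:3]):
        if guess.strip().lower() == answer.lower():
            print("Correct!")
            return
        hint = answer[: i + 1] + "_" * (len(answer) - i - 1)
        print(f"Not quite. Hint: {hint}")
    print(f"The answer was: {answer}")

attempt_loop("pancreas", ["liver", "kidney", "spleen"])
# Not quite. Hint: p_______
# Not quite. Hint: pa______
# Not quite. Hint: pan_____
# The answer was: pancreas
```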

Bibliography
Bjork, R. A. (1994). Memory and metamemory considerations in the training of human beings. In J. Metcalfe & A. Shimamura (Eds.), Metacognition: Knowing about knowing (pp. 185-205). Cambridge, MA: MIT Press.
Bower, G. H. (1972). Mental imagery in associative learning. In L. W. Gregg (Ed.), Cognition in learning and memory. New York: Wiley.
Butler, A. C., & Roediger, H. L. (2008). Feedback enhances the positive effects and reduces the negative effects of multiple-choice testing. Memory & Cognition, 36, 604-616.
Duchastel, P. C., & Nungester, R. J. (1982). Testing effects measured with alternate test forms. Journal of Educational Research, 75, 309-313.
Gardiner, J. M. (1988). Generation and priming effects in word fragment completion. Journal of Experimental Psychology: Learning, Memory and Cognition, 14, 495-501.
Gates, A. I. (1917). Recitation as a factor in memorizing. Archives of Psychology, No. 40, 1-104.
Hirshman, E. L., & Bjork, R. A. (1988). The generation effect: Support for a two-factor theory. Journal of Experimental Psychology: Learning, Memory, & Cognition, 14, 484-494.
Jacoby, L. L. (1978). On interpreting the effects of repetition: Solving a problem versus remembering a solution. Journal of Verbal Learning and Verbal Behavior, 17, 649-667.
Kang, S. H. K., McDermott, K. B., & Roediger, H. L., III. (2007). Test format and corrective feedback modulate the effect of testing on long-term retention. European Journal of Cognitive Psychology, 19, 528-558.
McDaniel, M. A., Einstein, G. O., Dunay, P. K., & Cobb, R. (1986). Encoding difficulty and memory: Toward a unifying theory. Journal of Memory and Language, 25, 645-656.
McDaniel, M. A., Agarwal, P. K., Huelser, B. J., McDermott, K. B., & Roediger, H. L. (2011). Test-enhanced learning in a middle school science classroom: The effects of quiz frequency and placement. Journal of Educational Psychology, 103, 399-414.
Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63, 81-97.
Richland, L. E., Bjork, R. A., Finley, J. R., & Linn, M. C. (2005). Linking cognitive science to education: Generation and interleaving effects. In B. G. Bara, L. Barsalou, & M. Bucciarelli (Eds.), Proceedings of the twenty-seventh annual conference of the Cognitive Science Society. Mahwah, NJ: Erlbaum.
Roediger, H. L., Agarwal, P. K., McDaniel, M. A., & McDermott, K. B. (2011). Test-enhanced learning in the classroom: Long-term improvements from quizzing. Journal of Experimental Psychology: Applied, 17, 382-395.
Spitzer, H. F. (1939). Studies in retention. Journal of Educational Psychology, 30, 641-656.
Tulving, E. (1967). The effects of presentation and recall of material in free-recall learning. Journal of Verbal Learning and Verbal Behavior, 6, 175-184.

Wednesday, September 12, 2018

Why almost all Multiple Choice Questions are badly designed...

Multiple-choice questions are everywhere, from simple retrieval tests to high-stakes exams. They normally (not always) contain FOUR options: one right answer and three distractors. But who decided this? Is it more convention than researched practice?


Research
Rodriguez (2005) reviewed 27 research papers on multiple-choice questions and found that the optimal number of options is THREE, not four. Vyas (2008) showed the same was true in medical education. But God is in the detail, and the findings surfaced some interesting phenomena.

Four or five options increase the effort the learner has to make, but this does not increase the reliability of the test.

Four options increase the effort needed to write questions and, as distractors are the most difficult part of a test item to identify and write, they were often very weak.

Reducing the number of options from four to three (surprisingly) increased the reliability of test scores.

Tests were shorter, leaving more time for teaching.

Cheating MCQs
In my 20 tips on how to increase your score in multiple-choice questions, I mention the fact that there’s often one distractor that’s obviously wrong – another reason for abandoning that often difficult-to-write fourth distractor.

Weaknesses in MCQs
Of course, multiple-choice is, in itself, weaker than open input, which is why we can go one step further and have open response, either single words or short answers. Natural Language Processing allows AI not only to create such questions automatically but also to provide accurate student scores. AI can create both of these question types (and MCQs, if desired) automatically, saving organisations time and money. This is surely the way forward in online learning. Beyond this is voiced input, again a step forward, and AI also allows this type of input in online learning. If you are interested in learning more about this, see WildFire.

Conclusion
So, you can safely reduce the number of MCQ options from five or four to THREE without reducing the reliability of the tests. Indeed, there is evidence that it improves reliability. Not only that, it saves organisations, teachers and learners time.

Bibliography
Rodriguez, M. C. (2005). “Three Options Are Optimal for Multiple-Choice Items: A Meta-Analysis of 80 Years of Research”. Educational Measurement: Issues and Practice, 24(2), 3-13.
Vyas, R. (2008). Medical education. The National Medical Journal of India, 21(3).

Friday, September 07, 2018

Are 'chatbots' a gamechanger in learning? 10 reasons and some warnings!

This was the debate motion at the LPI conference in London. I was FOR the motion (Henry Stewart was AGAINST), so let me explain why…
1. AI is a gamechanger
AI will change the very nature of work. It may even change what it is to be human. This is a technological revolution as big as the internet, and it will therefore change what we learn, why we learn and how we learn. The top seven companies by market cap all have AI as a core strategy: Apple, Alphabet, Microsoft, Amazon, Tencent, Facebook and Alibaba. AI is a strategic concern for every sector and every business, even learning. Nevertheless, we must be careful not to hype chatbots’ functionality. They are not capable of fully understanding every question you throw at them, nor do they have the general human capabilities of a teacher. They are, essentially, good within narrow parameters. We must manage expectations here.
2. Evidence from consumers
Several radical shifts in consumer online behaviour move us towards chatbots. First, the entry of voice-activated bots into the home, connected to the IoT (Internet of Things) – Amazon Alexa and Google Home. My Alexa is linked to my internet music service and the lights in my home, and I use it as a timer for Skype calls and events during the day. It is integrated into my workflow.
3. Rise of voice
The rise of ‘voice’ as a natural form of communicating with technology – Siri, Cortana and other similar services – continues. Over 10% of all search is now by voice. We have been using computer-generated voice from text files in WildFire for some time. It adds some humanity to what can often be seen as the sterility of online learning.
4. Chat has superseded social media
The switch from social media to messaging/chat apps took place in late 2014 and the gap is growing – chat is the home screen for most young people. Look over someone’s shoulder and you’re far more likely to see a ‘chat’ screen than a website. The lesson here is that chatbots allow us to play to the natural online behaviours of learners. Then again, chat with another human is a little different from chat with a chatbot, which is far more limited.
5. Social
We are social animals, and it was no accident that chatbots first emerged in social tools such as Facebook and Slack. They are a natural extension of existing social learning, allowing us to place them in the workflow. Chatbots, like Otto, are designed to live within these workflow tools, moving learning from the LMS to a more demand-driven model.
6. Pedagogy in chatbots
Most teaching is through dialogue. The Socratic method may have been undermined by blackboards and their successors, through to PowerPoint, but voice and dialogue are making a comeback. Speaking and listening through dialogue is our most natural interface. We’ve evolved these capabilities over two million years; it’s natural, and we’re grammatical geniuses by age three, without having to be taught to speak and listen. Within dialogue lie lots of pedagogically strong learning techniques: retrieval, elaboration, questions, answers, follow-ups, examples and so on. It just feels more natural. Once again, however, we must be careful about thinking of chatbots as people. They are not conscious and not capable of full-flow, open dialogue.
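To show the pedagogic shape rather than the NLP, here is a toy, rule-based ‘tutorbot’ turn in Python: ask first, then confirm and elaborate. A real chatbot would need proper intent recognition behind this; the question, answer and function here are all illustrative assumptions:

```python
# Toy dialogue sketch: a rule-based tutorbot turn that uses retrieval
# practice (ask first, then elaborate) rather than presenting facts.
# Real chatbots need NLP intent handling; this only shows the shape.
QUESTION = "Which organ produces insulin?"
ANSWER = "pancreas"
ELABORATION = "Right - beta cells in the pancreas release insulin."

def tutor_turn(user_reply: str) -> str:
    if ANSWER in user_reply.lower():
        return ELABORATION                                  # confirm, elaborate
    return f"Not quite. Think about digestion. {QUESTION}"  # prompt a retry

print(QUESTION)
print(tutor_turn("is it the pancreas?"))
```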
7. Evidence in learning
An exit poll taken by Donald Taylor at the Learning Technologies conference this year showed personalised learning at No 1 and AI at No 3. The interest is clearly strong, and there are lots of real projects being delivered to real clients by WildFire, Learning Pool and so on. However, be careful about vendors telling you their chatbot is true AI. Many are not. It is fiendishly difficult to do this well, so most are very structured, branching bots with limited functionality.
8. Chatbots across the learning journey
There are now real chatbot applications at points across the entire learning journey. I showed actual chatbot applications in learning in the following areas:
   Recruitment bots   
   Onboarding bots
   Learner engagement bots
   Learner support bots
   Invisible LMS bots
   Mentor bots
   Reflective bots
   Practice bots
   Assessment bots
   Wellbeing bots
The problem we have is that most bots are actually just FAQ bots. They are pixies for search. In the learning game they have much more potential. My own view is that we'll see a range of bot types emerge that will match the needs of learners and organisations. We must think more expansively around bots if they are to play a significant role in learning.
9. They’re learners
An important feature of modern chatbots, compared to ELIZA from the 1960s, is the fact that they now learn. This matters, as the more you train and use them, the better they get. We used to have just human teachers and learners; we now have technology that is both a teacher and a learner. This means one can take advantage of bot services from some of the large tech companies. But be careful – it’s not cheap, and they tend to swap out functionality with little sensitivity around your delivery.
10. It’s started
Technology is always ahead of the sociology, which is always ahead of learning and development. Yet we can already see, in these many projects, even with relatively primitive technology, an emerging trend: technology-delivered chatbot learning. In time, this will happen at scale. I’ve been involved in several projects now, across a range of chatbot types.
Objections
Nigel Paine chaired the debate with his usual panache and teased questions out of the audience and the real debate ensued. The questions were rather good.
Q Has AI passed the Turing test?
First, there are many versions of the Turing test but the evidence from the many chatbots on social media all the way to Google Duplex, shows that it has been passed but only in limited areas. Not for long, sustained and very detailed dialogue, but certainly within limited domains. Google Duplex showed that we’re getting there on sustained dialogue and the next generation of Amazon’s Alexa and Google Home will have memory, context and personalisation in their chatbot software. It will come in time.
Q AI can never match the human brain
This is true but not always the point. We didn’t learn to fly by copying the wings of a bird – we invented new technology, the airplane. We didn’t go faster by looking at the legs of a cheetah; we invented the wheel. The human brain is actually a rather fragile entity. It takes 20 years and more of training to make it even remotely useful in the workplace; it is inattentive, easily overloaded, has fallible memory, forgets most of what it tries to learn, has tons of biases (we’re all racist and sexist), sleeps 8 hours a day, can’t download, can’t network, and we die. But it is true that it is rather good at general things. This is why chatbots are best targeted at specific uses and domains, such as the species of chatbot I listed earlier.
Q Chatbots v people
Michelle Parry-Slater made a good point about chatbots not replacing people but working alongside people. This is important. Chatbots may replace some functions and roles but few suppose that all people will be eliminated by chatbots. We have to see them as being part of the learning landscape.
Q Chatbots need to capture pedagogy
Good question from Martin Couzins. Chatbots have to embody good pedagogy, and some already do. Whether it’s models of engagement, support, learning objectives, the invisible LMS, practice, assessment or wellbeing, the whole point is to use both the interface and back-end functionality (an important area for pedagogic capture) to deliver powerful learning based on evidence-based theory, such as retrieval, effortful learning, spaced practice and so on. This will improve rather than diminish or ignore pedagogy. In all of the examples I showed, pedagogy was first and foremost.
Q Will L and D skills have to change?
Indeed. I have been training Interactive Designers in chatbot and AI skills, as this is already in demand. The days of simply producing media assets and multiple-choice questions are coming to a close – thankfully.
Conclusion
Oh and we won the debate by some margin, with a significant number changing their minds from sceptics to believers along the way! That doesn't really matter, as it was a self-selecting audience - they came, I'd imagine, as they were curious and had some affinity with the idea that chatbots have a role. My view is these debates are good at conferences - by starting with a polarised position, the audience can move and shift around in the middle. The audience in this session were excellent, with great questions, as you've seen above. Note to conference organisers - we need more of this - it energises debate and audience participation.

Tuesday, September 04, 2018

First randomised-controlled trial of an employee “Wellness Programme” suggests they are a waste of money. Oh dear…

Jones et al (2018), in their study What Do Workplace Wellness Programmes Do?, took 12,000 employees and randomly assigned them to groups, but found no “significant causal effects of treatment on total medical expenditures, health behaviors, employee productivity, or self-reported health status in the first year”.
This is a huge business, around $8 billion in the US alone. Yet it is largely based on articles of faith, not research. This is a welcome study, as it gets rid of the self-selecting nature of the audiences so prevalent in other studies on well-being, which renders them largely useless as the basis of recommendations. 
Do they reduce sickness? No, they don’t. Do they result in staying in your job, getting promotion or a pay rise? No, they don’t. Did they reduce medication or hospital visits? No, they didn’t. This was true for almost every one of the 37 features mentioned. The bottom line is that there is no bottom line: no return on investment.
The interesting conclusion by the authors of the study is that wellness programmes, far from helping the intended audience (the obese, smokers etc.), simply select those who are already healthy, yet the burden of cost is borne by all.

Bullshit Jobs - Futurist, Thought Leader, Leader... if you call yourself this, you're most probably not

If you call yourself a 'Futurist', 'Thought Leader' or even 'Leader', you're most probably not one. I keep coming across people at conferences and on social media who have these titles, yet have shallow theoretical and practical competences.
Futurists
I've lost count of the presentations I've seen that are merely anecdotes and examples culled from the internet, with some general nonsense about how 65% of all primary school kids are being taught for jobs that don't yet exist, or some other quote from Einstein that, on inspection, he never said.
Let me give you some real examples. Two years ago, I went to speak at DevLearn in Las Vegas. Now, one wants a keynote speaker to provide new, insightful thinking, but a guy called David Pogue did a second-rate Jim Carrey act. His ‘look at these wacky things on the internet’ shtick was a predictable routine. Kids can play the recorder on their iPhone! No, they don’t. Only a 50 year old who bills himself as a ‘futurist’ thinks that kids take this stuff seriously.
At DevLearn, we also got a guy called Adam Savage. I had never heard of him, but he’s a famous TV presenter in the US who hosts a show called Mythbusters. He spent an hour trying to claim that art and science were really the same thing, as both were really (and here comes his big insight) – storytelling. The problem is that the hapless Adam knew nothing about science or art. It was trite, reductionist and banal. Then there was the speaker on workplace learning, at OEB last year, who used the totally made-up “65% of kids… jobs that don’t exist” line.
My own view is that these conferences do need outsiders who can talk knowledgeably about learning and not just about observing their kids or delivering a thinly disguised autobiography. I want some real relevance. I’ve begun to tire of ‘futurists’ – they all seem to be relics from the past. 
Bullshit Jobs - the book
This is where David Graeber comes in. He’s written a rather fine book, called Bullshit Jobs, which identifies five types of jobs that he regards as bullshit. Graeber’s right: many people do jobs that, if they disappeared tomorrow, would make no difference to the world and might even make things simpler, more efficient and better. As a follow-up to the Graeber book, YouGov did a poll and found that 37% thought their jobs did not contribute meaningfully to the world. I find that both astonishing and, in my experience, worryingly true. So what are those bullshit jobs?
Box tickers
Some of Graeber's jobs largely orbit around the concept of self-worth. Graeber identifies box tickers as one huge growth area in bullshit jobs. We know who these are in most organisations: those who deliver over-engineered and almost immediately forgotten compliance training, mostly about protecting the organisation from its own employees or satisfying some mythical insurance risk. It keeps them busy but also prevents others from getting on with their jobs. They forget almost all of it anyway.
It also includes all of those jobs created around abstract concepts such as diversity, equality or some other abstract threat. The job titles are a dead giveaway: Chief Imagination Officer, Life Coach… any title with future, life, innovation, leadership, creative, liaison, strategist, ideation, facilitator, diversity, equality and so on. All of this pimping of job titles, along with fancy new business cards, is a futile exercise in self- and organisational deception. It keeps non-productive people in non-productive jobs. Who hasn’t come across the pointless bureaucracy of organisations, from the process of signing in at reception, to getting wifi, and all sorts of other administrative baloney? But that is nothing compared to the mindless touting of mindfulness, NLP courses and other fanciful and faddish nonsense that HR peddles in organisations. Then there’s a layer of pretend measurement, with useless Ponzi-scheme tools such as Myers Briggs, unconscious bias courses, emotional intelligence, 21st-century skills and Kirkpatrick.
A second Graeber category is taskmasters, and he specifically targets middle-management jobs and leadership professionals. Who doesn’t find themselves, at some point in the week, doing something they know is pointless, instructed by someone whose job suggests pointless activity? The bullshit job boom has exploded in this area, with endless folk wanting to tell you that you’re a 'Leader’. You all need ‘Leadership training’, apparently, as everyone’s a Leader these days, rendering the meaning of the word completely useless. Stanford's Pfeffer nails this in his book Leadership BS.
All of this comes at a cost. We have systematically downgraded jobs where people do real things – plumbers, carpenters, carers, teachers, nurses and every other vocational occupation – paying them peanuts, while the rise of the robots (and I don’t mean technology, I mean purveyors of bullshit) has created all of those worthy middle-class jobs that pay people over the odds for being outraged on behalf of others. Leadership training has replaced good old-fashioned management training, abstractions replacing competences. Going to ‘Uni’ has become the only option for youngsters, often creating the expectation that they will go straight into bullshit jobs, managing others who do most of the useful work.
I disagree with Graeber’s hypothesis that capitalism, and its engine the Protestant work ethic, leads to keeping people busy just for the sake of being busy – business as busyness. I well remember the team leader in a summer job I had saying to me, ‘Listen, just look busy… just pretend to be busy’. I felt like saying ‘You’re the boss, you pretend I’m busy’. Most of these bullshit jobs arise out of fear: the fear of being seen not to be progressive, the fear of regulation and litigation, the fear of not doing what everyone else is doing, with a heavy dose of groupthink.
We keep churning out these hopeless jobs in the hope that they will make the workplace more human, but all they do is dehumanise the workplace. They turn it into a place of quiet resentment and ridicule.

Saturday, September 01, 2018

Hearables are hear to stay in learning - podcasts, learning, language learning, tutoring, spaced-practice and cheating in exams!

Hearables are wearable devices that use your sense of hearing – smart headphones, if you wish – and they are becoming popular with the rise of voice as a significant form of interaction. With voice rising dramatically as a means of search, Amazon Echo/Google Home moving into millions of homes, real-time translation, and earbuds for music and voice on mobiles, we are increasingly using hearing as the sense of choice for communications.
Apple’s removal of the audio jack and launch of their AirPods was a landmark in the shift towards wireless hearables, but other devices are also available or in development. Some rely on your smartphone; others, such as the Vinci, are independent, with local computing power and storage. You can even get them designed for your own ears through 3D printing.
Voice in learning
The advantages of audio in learning are in line with the simple fact that voice is primary in language: we are all auditory grammatical geniuses, able to listen and speak by the age of three, without instruction, whereas reading and writing take years of instruction and practice. It is a more natural form of communication, more human. It also leaves your imagination free to create or generate your own thoughts and interpretations. This pared-back input, arguably, allows deeper processing and learning, as it requires attention, focus and effortful learning. Most of our communication is through dialogue and hearing, not print, and most teaching takes place through hearing and dialogue.
So, there are several ways hearables could be used in learning:
1. Radio
Radio predates TV and modern media for learning. It remains a popular form of communication, as it is undemanding and leaves you hands-free (while making breakfast, driving the car and so on). It also has a long history in learning, in Australia and other regions where distances are huge and resources low. The straight delivery of radio via hearables is the baseline.
2. Podcasts
Podcasts have also become a popular medium, especially for learning. They appeal to the learner who wants to focus on hearing experts, often interviewing other experts, on specific topics. As they are downloadable, they provide audio on demand, when you have the time to listen, often in those periods when you can focus, listen and learn.
3. Online learning
We have been using voice to deliver online learning in WildFire, with zero typing, as all navigation and input is by voice. It is a fascinating experience and feels more like normal teaching and learning, compared to using a keyboard.
4. Language learning
Language learning is an obvious application, where listening and comprehension can be delivered to your personal needs, with appropriate levels of feedback, even voice and pronunciation recognition.
5. Translation
Translation in real time is already available through Google's Pixel Buds. The advantages, in terms of convenience but also language learning, are potentially huge. It must surely be worth exploring the benefits for novice language learners of hearing translation and playback in the real world, where immersion and interaction with native speakers matter.
6. Tutoring
Have you ever had to call someone for help on how to fix something or get technical help on your computer? As a form of quick tutoring, voice is useful as you are hands free to try things, while you have access to experts anywhere on the planet.
7. Health
Hearables that deliver notifications on heart rate, oxygen saturation, calories burned, distance, speed and duration are available, and as they can be used during exercise, they may prove popular. This health learning loop also has the potential to modify behaviour, from diabetes to obesity.
8. Lifelong learning
As one gets older, reading, typing and other forms of interaction become more difficult. Hearables provide an easier and more convenient form of interaction, especially when combined with ‘reminders’ for those with memory problems.
9. Spaced practice
Audio could be used as a spaced-practice tool, pushed to you at intervals personalised to your needs and your own forgetting curve – more at the start, then levelling out over time.
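A hedged sketch of what that could look like: derive review gaps from an Ebbinghaus-style forgetting curve R = e^(-t/S), reviewing whenever predicted retention hits 80% and assuming memory strength doubles with each successful review. That doubling is a common simplification, not a validated model:

```python
# Hedged sketch: review times derived from a forgetting curve
# R = exp(-t/S). Review when predicted retention drops to 80%;
# assume strength S doubles after each successful review
# (a simplification for illustration, not a validated model).
from math import log

def review_gaps(strength_days: float = 1.0, reviews: int = 5,
                threshold: float = 0.8) -> list[float]:
    gaps = []
    for _ in range(reviews):
        gaps.append(-strength_days * log(threshold))  # t where R = threshold
        strength_days *= 2                            # consolidation effect
    return gaps

print([round(g, 1) for g in review_gaps()])
# [0.2, 0.4, 0.9, 1.8, 3.6] - frequent reviews early, levelling out later
```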
10. Exam cheating
Lastly, although undesirable, there have already been many instances reported of exam cheating using hearables. Ebay is awash with cheating devices such as micro-earpieces, Bluetooth pens and so on. In some ways this shows how powerful such devices can be for the timely delivery of learning!
Conclusion
Hearables are becoming part of the consumer technology landscape and in terms of learning, will have an impact. Different devices have different affordances but there is no doubt that hearing is a sense of choice for many people. Hearables, therefore, are hear to stay.