Thursday, March 28, 2019

Chatbots are being abused – but they’re fighting back!

Folk ask chatbots the weirdest of things. That’s fine if your chatbot is, say, a dominatrix (yes, they do exist), but in customer care or learning chatbots it seems surprising – it’s not. Users know that chatbots are really pieces of software, so they test them with rude and awkward questions. Swearing, sexual suggestions, requests to do odd things and just plain rudeness are all common.
The Cleo chatbot has been asked out on a date over 2000 times and asked to send naked photographs on over 1000 occasions. To the latter it sends back a picture of a circuit board – a nice touch, and humour is often the best response. The financial chatbot Plum responds to swearing by saying "I might be a robot but I have digital feelings. Please don't swear." These are sensible responses: as Nass and Reeves found in their studies of how humans interact with technology, we expect our tech to be polite.
There are even worse disasters in ‘botland’. InspiroBot creates inspiring quotes on nice photographs but often comes up with ridiculous rot. Tay, released by Microsoft, quickly became a sex-crazed Nazi, and BabyQ recommended that young Chinese people should go to the US to realise their dreams. They were, of course, shut down within hours. This is one of the problems with open, machine learning bots: they have a life of their own. But awkward questions can be useful…
Play
People want to play with chatbots – that’s fine. You often find that these questions are asked when someone first uses a chatbot or buys an Alexa. It’s a sort of onboarding process, where the new user gets used to the idea of typing replies or speaking to a machine.
Test limits
The odd questions tend to come at the start, as people stress-test the bot, then drop off dramatically. This is telling and actually quite useful, as users get to see how the bot works. They’re sometimes window shopping, or simply seeing where the limits lie. You can see where the limits of the natural language interface’s semantic interpretation lie by asking variants of the same question. Note that you can quickly tell whether it uses something like Google’s Dialogflow, as opposed to a fixed, non-natural-language system.
Expectations 
It also helps calibrate and manage expectations. Using a bot is a bit like speaking to a very young child: you ask it a few questions, have a bit of back and forth, and get its level. Actually, with some, it’s like speaking to a dog, where all you can do is variants on ‘fetch’. Once users realise that the bot is not a general-purpose companion that will answer anything, nor a teacher with super-teaching qualities, but has a purpose – usually a specific domain, like finance, health or a specific subject – and that questions beyond this are pointless, you get that ‘fair enough’ response and they settle down to the actual business of the bot.
Engagement
These little touches of humour and politeness serve a further purpose, in that they actually engage the user. If you get a witty or clever reply, you have a little more respect for the bot, or at least the designer of the bot. With a little clever scripting, this can make or break user acceptance. Some people will, inevitably, ask your bot to tell a joke – be ready for that one. A knock-knock joke is good, as it involves a short dialogue; so is a lightbulb joke.
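The joke, date and swearing responses above are easy to script as simple fallback intents. Here is a minimal sketch in Python – the patterns and replies are illustrative assumptions, and a production bot would hand this job to an NLU service like Dialogflow rather than keyword-match:

```python
# Sketch of scripted 'easter egg' intents for a chatbot.
# Patterns and replies are illustrative only, not from any real bot.
import re

EASTER_EGGS = [
    (r"\bjoke\b", "Knock knock. (Who's there?) An AI who can't do punchlines."),
    (r"\b(date|go out)\b", "Flattered, but I'm in a committed relationship with my server."),
    (r"\b(damn|hell)\b", "I might be a robot but I have digital feelings. Please don't swear."),
]

def easter_egg_reply(user_text):
    """Return a scripted reply if the message matches an easter egg."""
    for pattern, reply in EASTER_EGGS:
        if re.search(pattern, user_text.lower()):
            return reply
    return None  # fall through to the bot's normal intent handling

print(easter_egg_reply("tell me a joke"))
```

The handler returns None when nothing matches, so the bot’s normal intent pipeline takes over – these scripted touches sit alongside, not instead of, the real domain logic.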
Tone
These responses can also be used to set the tone of the bot. Good bots know their audience and set the right tone. It’s pointless being too hip and smart-assed with an older audience who may find it just annoying. Come to think of it, this is also true of younger audiences, who are similarly intolerant of clichés. You can use these responses to be edgy, light-hearted, serious, academic… whatever.
Conclusion
You’ll find yourself dead-ending a lot with bots. They’re nowhere near as smart as you first think. That’s OK. They serve a function and are getting better. But it’s good to offer a little freedom – allow people to play, explore, find limits, set expectations and increase engagement.


Saturday, March 16, 2019

AI starts to crack critical thinking... astonishing experiment...

Just eighteen years after 2001 (older readers will know the significance of that date), the AI debater – a six-foot-high black stele with a woman’s voice – used arguments, objections, rebuttals, even jokes, to tussle with her opponent. She lost but, in a way, she also won, as this points towards an interesting breed of critical thinking software. This line of AI has significance in the learning world.
How does it work?
First, she creates an opening speech by searching through millions of opening gambits, removing extraneous text and looking for the highest-probability claims and arguments, based on solid evidence. She then arranges these arguments thematically to give a four-minute speech. In critical conversation, she listens to your response and replies, debating the point step by step. This is where it gets clever, as she has to cope with logical dilemmas and structured debate and argument, drawing on a huge corpus of knowledge, way beyond what any human could read and remember.
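The pipeline described above – rank candidate claims by evidence, then arrange the survivors thematically – can be caricatured in a few lines of Python. This is a toy sketch with made-up claims and hand-set scores, not the actual debating system:

```python
# Toy sketch of a select-then-arrange debating pipeline.
# Claims, themes and evidence scores below are invented for illustration.
from collections import defaultdict

def build_speech(claims, top_n=4):
    """claims: list of (text, theme, evidence_score) tuples.
    Returns (theme, [claim texts]) pairs for the strongest claims."""
    # Keep only the highest-probability claims, best evidence first
    best = sorted(claims, key=lambda c: c[2], reverse=True)[:top_n]
    # Group the surviving claims by theme
    themes = defaultdict(list)
    for text, theme, score in best:
        themes[theme].append(text)
    # Arrange thematically: one block of the speech per theme
    return list(themes.items())

claims = [
    ("Subsidies lower consumer prices.", "economics", 0.92),
    ("Subsidies distort markets.", "economics", 0.40),
    ("Access to pre-school boosts attainment.", "education", 0.88),
    ("Opponents exaggerate the cost.", "rebuttal", 0.75),
    ("Costs are recouped through taxation.", "economics", 0.81),
]
for theme, texts in build_speech(claims):
    print(theme, "->", texts)
```

In the real system the scores would come from models trained on a huge corpus; here they are hand-set solely to show the select-then-arrange structure.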
Debate
In learning, working through a topic through dialogue, debate and discussion is often useful. Putting your ideas to the test – in an assignment, a research task, or when writing an article for publication – would be a useful skill for my Alexa to be able to deliver. It raises the game, as it pushes AI-generated responses beyond knowledge into reasoned argument and checks on evidence from trusted sources. But a debate is not the great win here. There are other more interesting and scalable uses.
Critical thinking
Much of the talk about 21st century skills is rather clichéd, with little in the way of evidence-based debate. The research suggests that these skills, far from being separate ‘skills’, are largely domain specific. You don’t get far in being a creative, critical and problem-solving thinker in, say, data science, if you don’t know a lot about... well... data science. What’s interesting about this experiment is the degree to which general debating skill – let’s call it stating and defending or attacking a proposition – shows how one can untangle, say, critical thinking into its components, as it has to be captured and delivered as software.
There are some key lessons here, as the logic of debate is the logic we know from Aristotle onwards – syllogistic and complex, often beyond the capability of the human brain. On the other hand, the heuristics we humans use are a real challenge for AI. But AI is rising to this challenge with all sorts of techniques: many species of supervised and unsupervised machine learning, fuzzy logic to cope (largely) with the imprecision of language and human expression, and a battery of statistical and probability theory to determine certainty.
This, along with GPT-2 (I’ve written about this here), which creates content, and the techniques embedded in Google Duplex around complex conversational rules, is moving learning AI into new territory, with real dialogue based on the structured creation of content, voice and the flow of conversations and debate. Why is this important?
1. Teaching
When it reaches a certain standard, we can see how it starts to behave like a teacher: to engage with a learner in dialogue, interpret the strengths of arguments, debate with the student, even teach and assess critical thinking and problem solving. In a sense it may transform normal teaching, in being able to deliver personalised learning at this level, at scale. The skills of a good teacher or lecturer are to introduce a subject, engage learners, support learners and assess learners. Even if it does not perform the job of an experienced teacher, one can see how it could support teachers.
2. Communication skills
There is also the ability to raise one’s game by using it as a foil to improve one’s communication skills – as a learner, teacher, presenter, interviewer, coach, therapist or salesperson. Being able to persuade it that you are right, based on evidence, is something we could all benefit from. It strikes me that it could, in time, also identify and help correct various human biases, especially confirmation bias but many others. Daniel Kahneman, in Thinking, Fast and Slow, makes an excellent point at the very end of the book when he says that these biases are basically ‘uneducable’. In other words, they are there, and rather than trying to change them, which is near impossible, we must tame them.
3. Expert
With access to over 300 million articles, it has digested more than any human can read and remember in a lifetime. But this is just for reference. The degree to which it can use this as evidence for argument and advice is interesting. The experiment seems to support the idea that domain knowledge really does matter in critical thinking, something largely ignored in the superficial debate at conferences on 21st century skills. This may untangle this complex area by showing us how true expertise is developed and executed.
4. Practice
The advantage the machine has over humans is consistent access to, and use of, very large knowledge bases. One can foresee a system that is an expert in a multitude of subjects, able to deliver scalable and sophisticated practice in not only knowledge but higher-order skills across a range of subjects. The development of expertise takes time, application and practice. This offers the opportunity to accelerate expertise. Of course, it also suggests that expertise may be replaced by machines. Read that sentence again, as it has huge consequences.
5. Assessment
If successful, such software could be a sophisticated way to assess learners’ work, whether written work, essays or oral, as it puts their arguments to the test. This is the equivalent of a viva or oral exam. With more structured questions, one could see how more sophisticated and objective assessment, free from essay mills and cheating, could be delivered.
6. Decision making
One could also see a use in decision-making, where evidence-based arguments would be at least worth exploring, while humans still make the decisions. I’d love, as a manager, to make a decision based on what has been found to work, rather than guessing or relying on faddish decision making.
Conclusion
This will, eventually, be invaluable for a teaching assistant that never gets tired, inattentive, demotivated or crabby, and that delivers quality learning experiences, not just answers to questions. It may also help eliminate human bias in educational processes, making them more meritocratic. Above all, it holds the promise of high-level teaching that is scalable and cheap. At the very least, it may lift the often crass debate around 21st century skills beyond their clichéd presentation as lists in bad PowerPoint presentations at conferences.


Thursday, March 07, 2019

Why learning professionals – managers, project managers, interactive designers, learning experience designers, whatever – should not ignore research

Why do learning professionals in L and D – managers, project managers, interactive designers, learning experience designers and so on – ignore research? It doesn’t matter whether you are implementing opportunities for learning, such as nudges, social opportunities, workflow learning and performance support, or designing pieces of content or full courses: you will be faced with deciding whether one learning strategy, tactic or approach is better than another. This can’t just be about taking a horse to water – you must also make sure it drinks. Imagine a health system where all we do is design hospitals and opportunities for people to do healthy things, or get advice on how to cure themselves, from people who do not know what the clinical research shows.
Whatever the learning experience, you need to know about learning.
Lawyers know the law, engineers know physics, but learning professionals often know little about learning theory. The consequences of this are, I think, severe. We’re sometimes seen as faddish, adopting tactics that are unresearched and nothing more than à la mode. It leads to products that do not deliver learning or learning opportunities – social systems that lie fallow and unused, polished-looking rich media that actually hinders rather than helps learning. It makes the process of learning longer, more expensive and less efficacious. Worse still, much delivery may actually hinder, rather than help, learning, resulting in wasted effort or cognitive overload. It also makes us look unprofessional, not taken seriously by senior management (or learners).
We have seen the effect of flat-earth theories such as learning styles and whole-word teaching of literacy, and the devastating effect they can have, wasting time in corporate learning and producing kids with poor reading skills. In online learning, the rush to produce media-rich learning experiences often actually harms the learning process, producing non-effortful viewing, click-through online learning and cognitive overload. Leaderboards are launched but have to be abandoned. The refusal to accept evidence that most learning needs deliberate practice – whether through desirable difficulty, retrieval or spaced practice – is still a giant vacuum in the learning game.
So there are several reasons why research can usefully inform our professional lives.

1. Research debunks myths
One of the things research can achieve is to persuade us to discard theories and practices which are shown to be wrong-headed, like VAK learning styles or whole-word teaching. These were both very popular theories, still held by large percentages of learning professionals. Yet research has shown them not only to be suspect as theories, but also to have no efficacy. There’s a long list of current practice – Myers-Briggs, NLP, emotional intelligence, Gardner’s multiple intelligences, Maslow’s hierarchy of needs, Dale’s cone of learning and so on – that research has debunked. Yet these practices carry on long after the debunking, like those cartoon figures who run off cliffs and are seen still hanging there, looking down…

2. Research informs practice
Whether it’s general hypotheses, like ‘Does this massive spending on diversity training actually work?’, or, at the next level, ‘Does this nudge learning delivery strategy, based on the idea of hyperbolic discounting, actually work better than single-point delivery?’, research can help. There are specific learning strategies used by learners: ‘Does retrieval, spaced or desirable difficulty practice increase retention?’ Even at the very specific level of cognitive science, lots of small hypotheses can be tested – like interleaving. In online learning: ‘What is the optimum number of options in a multiple-choice question?’ ‘Is media rich mind rich?’ As some of this research is truly counterintuitive, it also prevents us from being flat-earthers, believing something – like the sun going round the earth – just because it feels right.

3. Research informs product
As technology increasingly helps deliver solutions, it is useful to design technology on the basis of research findings. If, for example, an AI adaptive system were designed on the basis of learning styles, as opposed to the diagnosis of identified cognitive errors, that would be a mistake. Indeed technology, especially smart technology, often embodies pedagogic approaches, baking in theory so that the practice can be enabled. I have built technology based wholly on several principles from cognitive science. I have also seen much technology that does not conform to good evidence-based theory.

4. Research helps us negotiate with stakeholders
Learning is something we all do. We’ve all gone through years of school, so it is something on which we all have opinions. This means that discussions with stakeholders and budget holders can be difficult. There is often an over-emphasis on how things ‘look’ and much superficial discussion about graphics, with little discussion about the actual desired outcome – the acquisition of knowledge and skills, and eventual performance. Research gives you the ability to navigate these questions from stakeholders by avoiding anecdote and relying on objective evidence.

5. Research helps us motivate learners
Research has shown that learners are strangely delusional about optimal learning strategies and about what they think they have learnt. This really does matter, as what they want is not always what they actually need. Analogously, you as teacher or learning designer are like a doctor advising a patient, who is unlikely to know exactly what they have to do to solve their problem. An evidence-based approach moves us beyond the simplicities of learning styles and too much focus on making things ‘look’ or ‘feel’ good. Explaining to a learner that an approach will get them to their goal quicker, pass that exam and perform better can benefit from making the research explicit to the learner.

6. Research helps you select tools
One of the biggest problems in the delivery of online learning is the way the tools shape what the learner sees, experiences and does. Far too many of these tools focus on look and feel at the expense of cognitive effort, so we get lots of beautiful sliding effects and lots of bits of media. It is, in effect, souped-up PowerPoint. Even worse are the childish games templates that produce mazes and other nonsense a million miles away from proper gaming. We have a chance to escape this with smarter software and tools that allow the learner to do what they need to do to learn – open input, writing, doing things. This requires Natural Language Processing and lots of other new tech.

7. Research helps us professionalise within organisations
In navigating organisational politics, structures and budgeting, and in making your internal service appeal to senior management, research can be used to validate your proposals and approaches. HR and L and D have long complained about not being taken seriously enough by the business. Finance has the advantage of a body of established practice, massively influenced by technology and data. This is becoming true of marketing, production, even management, where data on the efficacy of different channels is now the norm. So it should be with learning. Alignment and impact matter. Personalised ‘experiences’ really do matter in the midst of complex learning.

Conclusion
If all of the above doesn’t convince you, then I’d appeal to the simple idea of doing the right thing. It’s not that all research is definitive, as science is always on the move, open to future falsification. But, as with research in medicine, physics in materials science and engineering, chemistry in organic and inorganic production, and maths in AI, we work with the best that is available. We are duty-bound to do our best on the best available evidence, or we are not really a ‘profession’.


Wednesday, March 06, 2019

Summarising learning materials using AI - paucity of data, abundance of stuff

We’ve been using AI to create online learning for some time now. Our approach is to avoid the use of big data, analytics and prediction software, as there are almost no contexts in which there is nearly enough data to make this work to meet the expectations of the buyer. AI, we believe, is far better at precise goals, such as identifying key learning points, creating links to external content, creating podcasts using text-to-speech, and the semantic interpretation of free-text input by learners. We’ve done all of this, but one thing always plagues the use of AI in learning… although there’s a paucity of data, there’s an abundance of stuff!

Paucity of data, abundance of stuff
Walk into many large organisations and you’ll encounter a ton of documents and PowerPoints. They’re often over-written and far too long to be useful in an efficient learning process. That doesn’t put people off, and in many organisations we still have 50-120 or more PowerPoint slides delivered in a room with a projector, as training. It’s not much better in Higher Education, where the one-hour lecture is still the dominant teaching method. The trick is to have a filter that can automate the shortening of all of this stuff.

Summarisation
To summarise or précis documents (text) down in size, to focus on the ‘need to know’ content, there are three processes:
1. Human edit
No matter what AI techniques you use to précis text, it is wise to first edit out, by hand, the extraneous material that learners will not be expected to learn – for example, supplementary information, disclaimers, who wrote the document and so on. With large, well-structured documents, PDFs and PPTs, it is often easy to simply identify the introductions or summaries in each section. These form ready-made summaries of the essential content for learning. Regard this step as simple data cleansing or hand washing! Now you are ready for further steps with AI…
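For well-structured documents, even this editing step can be partly automated. A minimal sketch, assuming the document has already been converted to plain text with markdown-style headings; the drop-list of heading keywords is an illustrative assumption, and each organisation’s documents would need their own:

```python
# Sketch of automating the 'human edit' step as data cleansing:
# drop whole sections whose headings mark extraneous material.
import re

DROP = ("disclaimer", "about the author", "copyright")

def strip_extraneous(text):
    # Split the document into sections at markdown-style headings
    sections = re.split(r"\n(?=#+ )", text)
    # Keep only sections whose heading line contains no drop keyword
    kept = [s for s in sections
            if not any(k in s.splitlines()[0].lower() for k in DROP)]
    return "\n".join(kept)

doc = """# Handling customer data
Store records securely and delete them on request.
# Disclaimer
This document is provided as-is, without warranty."""
print(strip_extraneous(doc))
```

The same idea extends to keeping only the ready-made introductions and summaries mentioned above, by inverting the filter to a keep-list.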
2. Extractive AI
This technique produces a summary that keeps sentences intact and only ‘extracts’ the relevant material. We usually do a quick human edit first, then extract the relevant shortened text, which can be used in WildFire or on its own. This is especially useful where the content is already subject to regulated control (approved by an expert, lawyer or regulator) – for example, medical content in the pharmaceutical industry, or compliance.
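Extractive summarisation can be surprisingly simple. Here is a minimal sketch of the classic frequency-scoring approach (illustrative only, not WildFire’s actual pipeline): score each sentence by how often its content words appear in the whole document, then keep the top scorers in their original order:

```python
# Minimal extractive summariser: sentences stay intact, we simply
# extract the highest-scoring ones in their original order.
import re
from collections import Counter

# Tiny illustrative stop-word list; a real system would use a fuller one
STOP = {"the", "a", "an", "is", "are", "of", "to", "and", "in", "it", "that"}

def extract_summary(text, n_sentences=2):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOP]
    freq = Counter(words)

    # Score each sentence by the document-wide frequency of its content words
    def score(s):
        toks = [w for w in re.findall(r"[a-z']+", s.lower()) if w not in STOP]
        return sum(freq[w] for w in toks) / (len(toks) or 1)

    top = sorted(sentences, key=score, reverse=True)[:n_sentences]
    # Preserve the original document order of the extracted sentences
    return " ".join(s for s in sentences if s in top)

text = ("Compliance training covers data protection. "
        "Data protection rules protect customer data. "
        "The weather was nice yesterday.")
print(extract_summary(text))
```

Because the sentences are untouched, the output can go straight back to the expert, lawyer or regulator for sign-off – the property that makes extraction attractive for regulated content.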
3. Abstractive AI
This is a summary that is rewritten, using a training data set and machine learning to produce the summary. Note that this approach needs a large, domain-specific training set – by large we mean as large as possible; some training sets are literally gigabytes of data. That data also has to be cleaned.

Conclusion

The end result is automatically shortened documents from the original large documents, PowerPoints, even video transcripts. We can then input these into WildFire: rather than delivering intense training on huge pieces of content, you get the essentials. The summaries themselves can be useful in the context of the learning experience. So if you have a ton of documents and PowerPoints, we can shorten them quickly and produce online learning in minutes, not months, at a fraction of the cost of traditional online learning, with very high retention.


Tuesday, March 05, 2019

Learning experiences are often not learning at all


"Part of the problem with all this talk about 'learning experience' is it's questionable whether learning is actually experienced at all."
This brilliant quote, by Leonard Houx, skewers the recent hubris around ‘learning experiences’. Everything is an ‘experience’, and what is needed is some awareness of good and bad learning experiences. Unfortunately, all too often what we see are over-engineered, media-heavy, souped-up PowerPoint or primitively gamified ‘experiences’ that the research shows result not in significant learning, but in: 1) click-through (click on this cartoon head, click on this to see X, click on an MCQ option) that allows the learner to skate across the surface of the content; 2) cognitive overload (overuse of media); and 3) diversionary activity (mazes and infantile gamification). What is missing is relevant cognitive effort, the kind that makes one think rather than click. There is rarely open input, rarely any personalised learning and rarely enough practice.
Media rich is not mind rich
The purveyors of ‘experience’ think that we need richer experiences, but research shows that media rich is not mind rich. Mayer shows, in study after study, that redundant material is not just redundant but dangerous, in that it can hinder learning. Sweller and others warn us of the danger of cognitive overload. Bjork and others show us that learners are delusional about which learning strategies are best for them, and that just pandering to what users think they want is a mistake. Less is usually more, in that we need to focus on what the learner needs to ‘know’, not just ‘experience’.
Research is bedrock of design
There are those who think that Learning and Development does not have to pay attention to this research, or to learning research at all. It is still all too common to sit in a room where no one has read much learning theory, and where the sole criterion for judging good online learning is the ‘user experience’, without defining it as anything other than ‘what the user likes’. Lawyers know the law, engineers know physics, and it is not really acceptable to buy into the anti-intellectual idea that knowing how people learn is irrelevant to Learning and Development. It is, in fact, the bedrock of learning design.
Less is more
Increasingly, online learning is diverging from what most people actually do and experience online. Look at the web’s most popular services or experiences – Google, Facebook, Twitter, Instagram, YouTube, Snapchat, WhatsApp, Messenger, Amazon, Netflix. They are all either mediated by AI, to give you a personalised experience that doesn’t waste your time, or based on dialogue. Their interfaces are pared down and simple, and they make sure there’s not an ounce of fat to distract from what the user actually needs. Occam was right with his razor – design with the minimal number of entities to reach your goal.
Conclusion
An experience can be a learning experience, but not all experiences are learning experiences. Many are, inadvertently, designed to be the very opposite – experiences meant to impress or dazzle that end up as eye-candy, edutainment or enter-train-ment. Get this: media rich is not mind rich, clicking is not thinking, and less in learning is often more.
