Friday, April 12, 2019

Why ‘learning analytics’? Why ‘Learning Record Stores’?

There’s a ton of learning technologists saying their new strategy is data collection in 'learning record stores' and 'learning analytics'. On the whole, this is admirable, but the danger lies in spending all this time and effort without asking ‘Why?’ Everyone’s talking about analytics, but few are talking about the actual analysis needed to show how this will increase the efficacy of the organisation. Some are switched on and know exactly what they want to explore and implement; others are like people who never throw anything out and just fill up their home with stuff – not sure why. One problem is that people are shifting from first to sixth gear without doing much in-between. The industry has been stuck with SCORM for so long, along with a few pie charts and histograms, that it has not really developed the mindset or skills to make this analytics leap.
Decision making
In the end this is all about decision making. What decisions are you going to make on the back of insights from your data? Storing data off for future use may not be the best use of data. Perhaps the best use of data is dynamically, to create courses, provide feedback, adapt learning, text to speech for podcasts and so on. This is using AI in a precise fashion to solve specific learning problems. The least efficient use of data is storing it in huge pots, boiling it up and hoping that something, as yet undefined, emerges.
Visualisation
This is often mentioned and is necessary, but visualisation, in itself, means little. One visualises data for a purpose - in order to make a decision. It is not an end in itself and often masquerades as doing something useful, when all it is actually doing is acting as a cul-de-sac.
Correlations with business data
Learning departments need to align with the business and business outcomes. Looking for correlations between, say, increases in sales and completed training gives us a powerful rationale for future strategies in learning. It need not be just sales. Whatever outcomes the organisation has in its strategy need to be supported by learning and development. This may lift us out of the constraints of Kirkpatrick, cutting to the quick, which is business or organisational impact. We could at last free learning from the shackles of course delivery and deliver what the business really wants, and that’s results.
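In practice, once the data is exported, the first pass need be no more than a few lines of analysis. A minimal sketch, assuming a hypothetical CSV export with one row per region and made-up column names:

```python
# Minimal sketch: correlate training completions with sales growth.
# Assumes a hypothetical CSV with columns 'completions' and 'sales_growth',
# one row per region or team - not a real LMS/LRS export format.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("training_vs_sales.csv")
r, p = pearsonr(df["completions"], df["sales_growth"])
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
# Correlation is not causation: a strong result here justifies a closer
# look (or a proper A/B test), not a claim that the training drove sales.
```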
Business diagnosis
Another model is to harvest data from training in a diagnostic fashion. My friend Chris Brannigan at Caspian Learning does this, using AI. You run sophisticated simulation training, use data analysis to identify insights, then make decisions to change things. To give a real example, they put the employees of a global bank through simulation training on loan risk analysis and found that the problems were not what they had imagined - handing out risky loans. In fact, in certain countries, they were rejecting ‘safe’ loans - being too risk averse. This deep insight into business process and skills weaknesses is invaluable. But you need to run sophisticated training, not clickthrough online learning. It has to expose weaknesses in actual performance.
Improve delivery
One can decide to let the data simply expose weaknesses in the training. This requires a very different mindset, where the whole point is to expose weaknesses in design and delivery. Is it too long? Do people actually remember what they need to know? Does it transfer? Again, much training will be found wanting. To be honest, I am somewhat doubtful about this. Most training is delivered without much in the way of critical analysis, so it is doubtful that this is going to happen any time soon.
Determine how people learn
One could look for learning insights into ‘how’ people learn. I’m even less convinced on this one. Recording what people just ‘do’ is not that revealing if all they are doing is clicking through courses without much cognitive effort. Just showing them video, animation, text and graphics, no matter how dazzling, is almost irrelevant if they have learnt little. This is a classic GIGO problem (Garbage In, Garbage Out).
Some imagine that insights are buried in there and that they will magically reveal themselves  - think again. If you want insights into how people actually learn, set some time aside and look at the existing research in cognitive science. You’d be far better looking at what the research actually says and redesigning your online learning around that science. Remember that these scientific findings have already gone through a process of controlled studies, with a methodology that statistically attempts to get clean data on specific variables. This is what science does – it’s more than a match for your own harvested data set. 
Data preparation
You may decide to just get good data and make it available to whoever wants to use it, a sort of open data approach to learning. But be careful. Almost all learning data is messy. It contains a ton of stuff that is just ‘messing about’ – window shopping. In addition to the paucity of data from most learning experiences, much of it is in odd data structures and odd formats, encrypted, in different databases, old, even useless. Even if you do manage to get a useful, clean data set, you have to go through the process of separating ‘Personal’ from ‘Observed’ (what you observe people actually doing), ‘Derived’ (deductions made from that data) and ‘Analysed’ (the results of applying analysis to the data). You may have to keep it ‘Anonymised’, and the privacy issues may be difficult to manage. Remember, you’ll need real expertise to pull this off and that is in very short supply.
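As a minimal sketch of the first two steps – separating out the personal fields and anonymising the learner ID before any analysis – with hypothetical field names:

```python
# Minimal sketch: strip 'Personal' fields from a learning record and
# replace the learner ID with a salted one-way hash. Field names are
# hypothetical, not from any real LRS schema.
import hashlib

PERSONAL = {"name", "email"}  # held separately, under access control

def anonymise(record: dict, salt: str = "rotate-this-salt") -> dict:
    """Return an analysis-safe record: personal fields dropped,
    learner ID replaced by a pseudonymous hash."""
    observed = {k: v for k, v in record.items() if k not in PERSONAL}
    raw_id = str(observed.pop("learner_id"))
    observed["pseudo_id"] = hashlib.sha256((salt + raw_id).encode()).hexdigest()[:16]
    return observed

print(anonymise({"learner_id": 42, "name": "Ada", "email": "ada@example.com",
                 "module": "fire-safety", "score": 0.8}))
```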
To use AI/Machine learning
If you are serious about using AI and machine learning (they are not the same thing), then be prepared for some tough times. It is difficult to get things working from unstructured or structured data and you will need a really good training set, of substantial size, to even train your system. And that is just the start, as the data you will be using in implementation may be very different.
Recommendation engines
This is not easy. If you’ve read all of the above carefully, you’ll see how difficult it is to get a recommendation engine to work on data that is less than reliable. You may come to the conclusion that personal learning plans are actually best constructed using simpler software techniques on spreadsheet levels of data, as in the sketch below.
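To make that concrete, here is what a ‘spreadsheet-level’ recommender might look like – no machine learning, just a hand-maintained curriculum per role and a record of completions. The roles, courses and columns are all hypothetical:

```python
# Minimal sketch of a rule-based recommender: the next uncompleted course
# in a hand-maintained, role-based curriculum. No machine learning needed.
import csv

CURRICULUM = {  # ordered course lists per role - hypothetical
    "sales": ["product-basics", "negotiation", "crm-hygiene"],
    "engineering": ["secure-coding", "code-review", "incident-response"],
}

def next_course(role: str, completed: set[str]) -> str | None:
    """First course in the role's curriculum not yet completed."""
    return next((c for c in CURRICULUM.get(role, []) if c not in completed), None)

completed = set()
with open("completions.csv") as f:  # hypothetical rows: learner_id,course
    for row in csv.DictReader(f):
        if row["learner_id"] == "42":
            completed.add(row["course"])

print(next_course("sales", completed))
```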
Conclusion
The danger is that people get so enamoured with data collection and learning analytics that they forget what they’re actually there to do. Large tech companies use big data, but this is BIG data, not the trivial data sets that learning produces, often on single courses or within single institutions. In fact, Facebook is far more likely to use A/B testing than any fancy recommendations when deciding what content works best, where a series of quick adaptations can be tested with real users, but few have the bandwidth and skills to make this happen.
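For what it’s worth, the mechanics of such a test are not exotic. A minimal sketch, with made-up numbers, comparing pass rates between two variants of a course:

```python
# Minimal sketch of an A/B test on course variants: did variant B produce
# a higher pass rate than variant A? The numbers are made up.
from statsmodels.stats.proportion import proportions_ztest

passes = [112, 134]    # learners who passed the assessment: variant A, variant B
learners = [200, 200]  # learners assigned to each variant
stat, p = proportions_ztest(passes, learners)
print(f"z = {stat:.2f}, p = {p:.3f}")  # a small p suggests a real difference
```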


Thursday, April 04, 2019

Why AI in healthcare is working but IBM failed. What can we learn from this for learning?

First up, this is not a hype piece that says ‘Look at this wonderful stuff in medicine where doctors will soon be replaced by software… the same will happen to teachers’. The lessons to learn from AI in healthcare are that AI is useful, but not in the way many thought.
The dream, pushed primarily by IBM, who saw Watson winning the game show Jeopardy as a marketing platform for what IBM CEO Virginia Rometty called their ‘Moonshot’, a suite of healthcare applications, was that AI would change healthcare forever. This all started in 2011, followed by billions in investment and acquisitions. That was the plan but, in practice, things turned out differently.
The mistake was to think that mining big data would produce insights and that this would be the wellspring for progress in everything from research to diagnosis. That didn’t happen and commercial products are few and far between. The data proved messy and difficult for NLP to use effectively. Trials in hospitals were disappointing. Diagnosis proved tricky. The Superdoctor dream of an all-round physician with access to way more knowledge and way more data than any human, that would trounce professionals in the field, has had to be rethought.
Roger Schank has been a relentless critic of IBM’s misuse of the term AI. He is especially critical of their use of terms such as ‘cognitive computing’, an area he pioneered. Roger knows just how difficult these problems are to crack and sees IBM’s claims as marketing lies. He has a point. Another critic is Robert Wachter, chair of the Department of Medicine at the University of California, in his book The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine’s Computer Age.
So what went wrong? IBM’s Oncology Expert Advisor product used NLP (Natural Language Processing) to summarise patient cases, then searched patient databases to recommend optimal treatments. The problem was that patient data is very messy. It is not, as one would imagine, precise readings from investigative tests; in practice, it is a ton of non-standard notes, written in jargon and shorthand. Strangely enough, doctors, through their messy processes and communication, proved to be remarkably resilient and useful. They could pick up on clues that IBM’s products failed to spot.
The Oncology Expert project was also criticised for not really being an AI project. In truth, many AI projects are actually a huge and often cobbled-together mix of quite traditional software and tricks to get things to work. IBM has proved to be less of a leading-edge AI company than Google, Microsoft or Amazon. They have a sell-first, deliver-later mentality. Rather than focus on some useful bullets aimed at clear and close targets, they hyped their moonshot. Unfortunately, it has yet to take off.
However…
However, and this is an important however, the successes with AI in healthcare have come in more specific healthcare domains and tasks. There’s a lot to cover in healthcare:
Image analysis
Pathology
Genetic analysis
Patient monitoring
Healthcare administration
Mental health
Surgery
Clinical decision making
In the interpretation of imagery, where data sets are visual and biopsy confirmation is available, success is starting to flow - mammary scans, retina scans, pathology slides, X-rays and so on. This is good, classifiable data.
Beyond scanned images, pathology slides are largely examined by eye, but image recognition can do this much faster and will, in time, do this with more accuracy. 
One of IBM’s rare successes has been their FDA-approved app, launched in 2018 (SUGAR IQ). This delivers personalised patient support for diabetes by monitoring glucose levels and giving recommendations on diet, lifestyle and medication. It is this narrow domain, clear input and defined, personalised outputs for patients that mark it out as a success. It is here that the real leverage of AI can be applied – in personalised patient delivery.
Another success has been in genome analysis, which is becoming more common: a precise domain with exact data, meaning the input is clean. Watson for Genomics lists a patient’s genetic mutations and recommends treatments. This is another case of limited-domain input with sensible and measured outputs that can really help oncologists treat patients.
Another good domain is healthcare administration, often antiquated, inefficient and expensive. There are specific tasks within this area that can be tackled using AI, optimising schedules, robots delivering blood and medicines within hospitals, selecting drugs in pharmacies and so on.
In mental health, rather than depending on NLP techniques, such as sentiment analysis, to scan huge amounts of messaging or text data, simple chatbots, like Woebot, which delivers a daily dose of personalised CBT therapy, are proving more promising.
Robot surgery has got a lot of hype but in practice it really only exists at the level of laser-eye surgery and hair transplants. These are narrowly defined processes, with not a lot of variation in their execution. 
Where AI has not yet been successful is in the complex area of doctors’ diagnosis and clinical decision making. This has proved much more difficult to crack, as AI’s need for clean data clashes with the real world of messy delivery.
So most of the low-hanging fruit lies in support functions, helping doctors and patients, not replacing doctors.
So what can we learn from this story about AI for learning?
Lessons learnt
There are several lessons here:
avoid the allure of big data solutions
avoid data that is messy
look for very specific problems
look for well-defined domains
AI is rarely enough on its own
focus on learners not teachers
I have been critical of the emphasis, in learning (see my piece 'On average humans have one testicle'), on learning analytics, the idea that problems will be solved through access to big data and machine learning to give us insights that will lead to diagnosis of students at risk of dropout. This is largely baloney. The data is messy and the promises often ridiculous. Above all the data is small, so stick to a spreadsheet.
Way forward
Let’s not set off like a cartoon character running off the cliff, finding ourselves looking around in mid-air, then plummeting to earth. Let’s be careful with the hype and big data promises. Let us be honest about how messy our data is and how difficult it is to manage that data. Let us, instead, look for clear problems with clear solutions that use areas of AI that we know work – text to speech, speech to text, image recognition, entity analysis, semantic analysis and NLP for dialogue. We’ve been using all of these in WildFire to deliver learning experiences that are firmly based on good cognitive science, while being automated by AI. High-retention learning experiences that are created in minutes not months.
Conclusion
I have no doubts about AI improving the delivery of healthcare. I also have no doubts about its ability to deliver in education and training. What is necessary is a realistic definition of problems and solutions. Let's not be distracted by the blue-sky moonshots and focus on the grounded problems.


Monday, April 01, 2019

Can you name the three winners of the Nobel Prize for their work on climate change? How online learning is tackling climate change

Climate change is a challenge. Above all it is an educational challenge. It is not easy to get to grips with the complexities of the issue. I asked some teenagers last week if any could name the three winners of the Nobel Prize for their work on climate change. None could name any and no one even knew that it had been awarded. (Answers at end.)
I then asked about the Paris targets – again no actual knowledge. How about something practical, like the inner workings of a wind turbine or the power equation for wind energy? Nothing. To be honest, I had the sketchiest of answers myself. So it was a joy to be doing some online learning for a major European, renewable energy company.
We used WildFire to teach:
  1. Wind turbines
  2. European policy 

1. Wind turbines
We see those mighty, white towers and big blades all the time but how do they work? Inside the ‘nacelle’ (no, I didn’t know that was its name either), the casing at the top of the tower is a fascinating box of tricks – shafts, a gearbox, generators and controls for ‘yaw’ and ‘pitch’ (know what those are?). Then there’s the wind power equation. Once you understand this, you’ll realise why the biggest variable in the mix is wind speed, as generated power equals air density X blade area X wind speed cubed. That word ‘cubed’ really matters – it means low wind, almost no energy; high wind, tons of energy.
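For reference, the standard form of the equation from the kinetic-energy derivation (the shorthand above omits the conventional half):

```latex
P = \tfrac{1}{2}\,\rho\,A\,v^{3}
% P: power (W), \rho: air density (kg/m^3),
% A: swept blade area (m^2), v: wind speed (m/s).
% Doubling the wind speed multiplies the power by 2^3 = 8.
```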

2. European policy
Policy is action and it is good to know what Europe is doing and by when. There’s the decarbonisation policy, the Paris targets, electrification, targets in transport, construction and industry. You’d be surprised at the differences between nations across Europe and the scale of the problem is immense. Emissions, in particular, are a real challenge. There are the near-term 2030 targets and the 2050 targets. On policies, you need to know about incentives, taxes, subsidies and all sorts of market dynamics.

How did we do this?
WildFire simply took the documents, sent to us as email attachments; we fed them in and produced two kinds of course:
1.    Detailed knowledge
2.    Free text input
The first literally identifies the key components in a wind turbine, key concepts like yaw and pitch, the variables in the wind formula and so on. You read the content, in sensible chunks, then have to type in your answers, either as specific terms or free text, which are then semantically analysed, with feedback given.
In addition, for most concepts, the system automatically provides links out to further explanations. For example, if you don’t know what ‘torque’ is, a link pops up automatically and you get supplementary explanation (including an animation). This is all generated automatically.
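WildFire’s semantic analysis is proprietary, so purely as a toy illustration of the idea, here is one standard way to score a typed answer against a model answer, using TF-IDF cosine similarity:

```python
# Toy illustration only (not WildFire's actual method): score a learner's
# free-text answer against a model answer with TF-IDF cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

model_answer = ("The nacelle houses the shafts, gearbox, generator "
                "and the yaw and pitch controls.")
learner_answer = ("The box at the top holds the generator, gearbox "
                  "and the pitch and yaw controls.")

tfidf = TfidfVectorizer().fit_transform([model_answer, learner_answer])
score = cosine_similarity(tfidf[0], tfidf[1])[0, 0]
print(f"similarity = {score:.2f}")  # accept above some tuned threshold
```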

Fast
We have literally had no face-to-face meetings on this project, as the client is in Europe. The content was super-quick to produce, at low cost. Above all, as the learner has to actually retrieve and write what they think they have learnt, as opposed to simply clicking on stuff, they have to get it into long-term memory. This is high-retrieval, high-retention learning, not clickthrough content.

Curation
There is also the option to add curated content to the end of every small piece of learning using the curation tool. This allows individual teachers and trainers to customise things to their own ends.

Conclusion
It is great to deliver a project where a social good is the aim. Climate change has its challenges, one of which is understanding the available renewable technology, another the policies and targets. Many countries now see education as a key pillar in their climate change initiatives. This is spot on. But it takes effort. It is one thing to skip school to protest but this must be more than matched with good informed knowledge and debate around what it actually takes to change things. The climate is changing and this must be matched with cognitive change - that, in the end, is all we have to prevent catastrophe.

PS
The 2007 Nobel Peace Prize was shared between the IPCC and Al Gore. The 2018 Nobel Prize went to William Nordhaus, for his work on the economic modelling of climate change.


Thursday, March 28, 2019

Chatbots are being abused – but they’re fighting back!

Folk ask chatbots the weirdest of things. That’s fine if your chatbot is, say, a dominatrix (yes, they do exist). But in customer care or learning chatbots, it seems surprising – it’s not. Users know that chatbots are really pieces of software, so they test them with rude and awkward questions. Swearing, sexual suggestions, requests to do odd things, and just being plain rude are common.
The Cleo chatbot has been asked out on a date over 2000 times and asked to send naked photographs on over 1000 occasions. To the latter it sends back a picture of a circuit board. Nice touch, and humour is often the best response. The financial chatbot Plum responds to swearing by saying "I might be a robot but I have digital feelings. Please don't swear." These are sensible responses; as Nass and Reeves found in their studies of humans and technology, we expect our tech to be polite.
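How might a bot catch this sort of input? One common pattern is a guard that intercepts abuse or play before the normal intent handling. A minimal sketch, with illustrative word lists and replies (not from any real bot):

```python
# Minimal sketch: intercept abusive or playful input before normal intent
# handling and reply politely, in the spirit of Plum's response above.
# The word lists and canned replies are illustrative.
ABUSE = {"stupid", "idiot", "useless"}
PLAY = {"date", "joke", "marry"}

def guard(message: str) -> str | None:
    words = set(message.lower().split())
    if words & ABUSE:
        return "I might be a robot but I have digital feelings. Please don't swear."
    if words & PLAY:
        return "Knock knock... (that's my only joke - shall we get back to your course?)"
    return None  # fall through to the bot's real intent matching

print(guard("tell me a joke"))
```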
There are even worse disasters in ‘botland’. InspiroBot creates inspiring quotes on nice photographs but often comes up with ridiculous rot. Tay, released by Microsoft, quickly became a sex-crazed Nazi, and BabyQ recommended that young Chinese people should go to the US to realise their dreams. They were, of course, shut down in hours. This is one of the problems with open, machine learning bots: they have a life of their own. But awkward questions can be useful…
Play
People want to play with chatbots – that’s fine. You often find that these questions are asked when someone first uses a chatbot or buys Alexa. It’s a sort of on-boarding process, where the new user gets used to the idea of typing replies or speaking to a machine.
Test limits
The odd questions tend to come at the start, as people stress-test the bot, then drop off dramatically. This is telling and actually quite useful, as users get to see how the bot works. They’re sometimes window shopping or simply seeing where the limits lie. One can see where the limits of the semantic interpretation of the Natural Language Interface lie through variants on the same question. Note that you can quickly tell whether it uses something like Google’s Dialogflow, as opposed to a fixed, non-natural-language system.
Expectations 
It also helps calibrate and manage expectations. Using a bot is a bit like speaking to a very young child. You ask it a few questions, a bit of back and forth, then get its level. Actually, with some, it’s like speaking to a dog, where all you can do is variants on ‘fetch’. Once the user realises that the bot is not a general-purpose companion that will answer anything, nor a teacher with super-teaching qualities, but has a purpose, usually a specific domain, like finance, health or a specific subject, and that questions beyond this are pointless, they give that “fair enough” response and settle down to the actual business of the bot.
Engagement
These little touches of humour and politeness serve a further purpose, in that they actually engage the user. If you get a witty or clever reply, you have a little more respect for the bot, or at least the designer of the bot. With a little clever scripting, this can make or break user acceptance. Some people will, inevitably, ask your bot to tell a joke – be ready for that one. A knock-knock joke is good, as it involves a short dialogue, or a lightbulb joke.
Tone
These responses can also be used to set the tone of the bot. Good bots know their audience and set the right tone. It’s pointless being too hip and smart-assed with an older audience who may find it just annoying. Come to think of it, this is also true of younger audiences, who are similarly intolerant of clichés. You can use these responses to be edgy, light-hearted, serious, academic… whatever.
Conclusion
You’ll find yourself dead-ending a lot with bots. They’re nowhere near as smart as you at first think. That’s OK. They serve a function and are getting better. But it’s good to offer a little freedom, allow people to play, explore, find limits, set expectations and increase engagement. 


Saturday, March 16, 2019

AI starts to crack critical thinking... astonishing experiment...

Just eighteen years after 2001 (older readers will know the significance of that date), the AI debater, a six-foot-high black stele with a woman’s voice, used arguments, objections, rebuttals, even jokes, to tussle with her opponent. She lost but, in a way, she also won, as this points towards an interesting breed of critical thinking software. This line of AI has significance in the learning world.
How does it work?
First, she creates an opening speech by searching through millions of texts for opening gambits, removes extraneous material and looks for the highest-probability claims and arguments, based on solid evidence; she then arranges these arguments thematically to give a four-minute speech. In critical conversation, she then listens to your response and replies, debating the point step by step. This is where it gets clever, as she has to cope with logical dilemmas and structured debate and argument, drawing on a huge corpus of knowledge, way beyond what any human could read and remember.
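The real pipeline is proprietary and vastly more sophisticated, but the retrieve-and-rank step can be caricatured in a few lines: score candidate claims by their overlap with the motion and keep the strongest. A toy sketch:

```python
# Caricature of the retrieve-and-rank step described above (the real
# system is far more sophisticated): score candidate claims by word
# overlap with the motion, keep the top few for the opening speech.
MOTION = "we should subsidise preschool education"

candidates = [
    "Preschool education improves later school outcomes.",
    "Subsidised preschool education narrows attainment gaps.",
    "Early education benefits low-income families most.",
    "The weather has been unusually warm this spring.",
]

def words(text: str) -> set[str]:
    return set(text.lower().replace(".", "").split())

def score(claim: str) -> int:
    return len(words(MOTION) & words(claim))

for claim in sorted(candidates, key=score, reverse=True)[:3]:
    print(score(claim), claim)
```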
Debate
In learning, working through a topic through dialogue, debate and discussion is often useful. Putting your ideas to the test, in an assignment or research task or when writing an article for publication and so on, would be a useful skill for my Alexa to be able to deliver. It raises the game, as it pushes AI generated responses beyond knowledge into reasoned argument and checks on evidence from trusted sources. But a debate is not the great win here. There are other more interesting and scalable uses.
Critical thinking
Much of the talk about 21st century skills is rather clichéd, with little in the way of evidence-based debate. The research suggests that these skills, far from being separate 'skills', are largely domain-specific. You don't get far in being a creative, critical and problem-solving thinker in, say, Data Science, if you don't know a lot about... well... data science. What's interesting about this experiment is the degree to which general debating skill, let's call it stating and defending or attacking a proposition, shows how one can untangle, say, critical thinking into its components, as it has to be captured and delivered as software.
There are some key lessons here, as the logic of debate is actually the logic we know from Aristotle onwards, syllogistic and complex, often beyond the capability of the human brain. On the other hand, the heuristics we humans use are a real challenge for AI. But AI is rising to this challenge with all sorts of techniques: many species of supervised and unsupervised systems that learn through machine learning, fuzzy logic to cope (largely) with the impreciseness of language and human expression, and a battery of statistical and probability theory to determine certainty.
This, along with GPT-2 (I've written about this here), which creates content, and techniques embedded in Google Duplex around complex conversational rules, is moving learning AI into new territory, with real dialogue based on structured creation of content, voice and the flow of conversations and debate. Why is this important?
1. Teaching
When it reaches a certain standard, we can see how it starts to behave like a teacher: to engage with a learner in dialogue, interpret the strengths of arguments, debate with the student, even teach and assess critical thinking and problem solving. In a sense it may transform normal teaching, in being able to deliver personalised learning at this level, at scale. The skills of a good teacher or lecturer are to introduce a subject, engage learners, support learners and assess learners. Even if it does not perform the job of an experienced teacher, one could see how it could support teachers.
2. Communication skills
There is also the ability to raise one’s game by using it as a foil to improve one’s communication skills, as a learner, teacher, presenter, interviewer, coach, therapist or sales person. Being able to persuade it that you are right, based on evidence, is something we could all benefit from. It strikes me that it could in time, also identify and help correct various human biases, especially confirmation bias but many others. Daniel Kahneman, in his Thinking Fast and Slow makes an excellent point at the very end of the book when he says that these biases are basically 'uneducable'. In other words, they are there, and rather than trying to change them, which is near impossible, we must tame them.
3. Expert
With access to over 300 million articles, it has digested more than any human can read and remember in a lifetime. But this is just for reference. The degree to which it can use this as evidence for argument and advice is interesting. The experiment seems to support the idea that domain knowledge really does matter in critical thinking, something largely ignored in the superficial debate at conferences on 21st century skills. This may untangle this complex area by showing us how true expertise is developed and executed.
4. Practice
The advantage the machine has over humans is consistent access to, and use of, very large knowledge bases. One can foresee a system that is an expert in a multitude of subjects, able to deliver scalable and sophisticated practice in not only knowledge but higher-order skills across a range of subjects. The development of expertise takes time, application and practice. This offers the opportunity to accelerate expertise. Of course, it also suggests that expertise may be replaced by machines. Read that sentence again, as it has huge consequences.
5. Assessment
If successful, such software could be a sophisticated way to assess learners’ work, whether written essays or oral answers, as it puts their arguments to the test. This is the equivalent of a viva or oral exam. With more structured questions, one could see how more sophisticated and objective assessment, free from essay mills and cheating, could be delivered.
6. Decision making
One could also see a use in decision-making, where evidence-based arguments would be at least worth exploring, while humans still make the decisions. I’d love, as a manager, to make a decision based on what has been found to work, rather than guessing or relying on faddish decision making.
Conclusion
This will, eventually, be invaluable as a teaching assistant that never gets tired, inattentive, demotivated or crabby, and delivers quality learning experiences, not just answers to questions. It may also help eliminate human bias in educational processes, making them more meritocratic. Above all, it holds the promise of high-level teaching that is scalable and cheap. At the very least it may lift the often crass debate around 21st century skills beyond its clichéd presentation as lists in bad PowerPoint presentations at conferences.


Thursday, March 07, 2019

Why learning professionals – managers, project managers, interactive designers, learning experience designers, whatever – should not ignore research

Why do learning professionals in L and D – managers, project managers, interactive designers, learning experience designers and so on – ignore research? It doesn’t matter if you are implementing opportunities for learning such as nudges, social opportunities, workflow learning, performance support, or designing pieces of content or full courses; you will be faced with deciding whether one learning strategy, tactic or approach is better than another. This can’t be just about taking a horse to water - you must also make sure it drinks. Imagine a health system where all we do is design hospitals and opportunities for people to do healthy things or get advice on how to cure themselves, by people who do not know what the clinical research shows.
Whatever the learning experience, you need to know about learning.
Lawyers know the law, engineers know physics but learning professionals often know little about learning theory. The consequences of this are, I think, severe. We’re sometimes seen as faddish, adopting tactics that are neither researched nor anything more than a la mode. It leads to products that do not deliver learning or learning opportunities – social systems that lie fallow and unused, polished looking rich media that actually hinders rather than helps one learn. It makes the process of learning longer, more expensive and less efficacious. Worse still, much delivery may actually hinder, rather than help learning, resulting in wasted effort or cognitive overload. It also makes us look unprofessional, not taken seriously by senior management (and learners).
We have seen the effect of flat-earth theories such as learning styles and whole-word teaching of literacy, and the devastating effects they can have, wasting time in corporate learning and producing kids with poor reading skills. In online learning the rush to produce media-rich learning experiences often actually harms the learning process by producing non-effortful viewing, click-through online learning and cognitive overload. Leaderboards are launched but have to be abandoned. The refusal to accept the evidence that most learning needs deliberate practice, whether through desirable difficulty, retrieval or spaced practice, is still a giant vacuum in the learning game.
So there are several reasons why research can usefully inform our professional lives.

1. Research debunks myths
One of the things research can achieve is to implore us to discard theories and practices which are shown to be wrong-headed, like VAK learning styles or whole-word teaching. These were both very popular theories, still held by large percentages of learning professionals. Yet research has shown them not only to be suspect as theories, but also to have no efficacy. There’s a long list of current practice, such as Myers-Briggs, NLP, emotional intelligence, Gardner’s multiple intelligences, Maslow’s hierarchy of needs, Dale’s cone of learning and so on, that research has debunked. Yet these practices carry on long after the debunking – like those cartoon figures who run off cliffs and are seen still hanging there, looking down…

2. Research informs practice
Whether it’s general hypotheses, like ‘Does this massive spending on diversity training actually work?’, or, at the next level, ‘Does this nudge learning delivery strategy, based on the idea of hyperbolic discounting, actually work better than single-point delivery?’, research can help. There are specific learning strategies for learners: ‘Does retrieval, spaced or desirable-difficulty practice increase retention?’ Even at the very specific level of cognitive science, lots of small hypotheses can be tested – like interleaving. In online learning: ‘What is the optimum number of options in a multiple-choice question?’ ‘Is media rich mind rich?’ As some of this research is truly counterintuitive, it also prevents us from being flat-earthers, believing something, like the sun going round the earth, just because it feels right.

3. Research informs product
As technology increasingly helps deliver solutions, it is useful to design technology on the basis of research findings. If, for example, an AI adaptive system were to be designed on the basis of learning styles, as opposed to the diagnosis of identified cognitive errors, that would be a mistake. Indeed technology, especially smart technology, often embodies pedagogic approaches, baking in theory so that the practice can be enabled. I have built technology that is based wholly on several principles from cognitive science. I have also seen much technology that does not conform to good evidence-based theory.

4. Research helps us negotiate with stakeholders
Learning is something we all do. We’ve all gone through years of school and so it is something on which we all have opinions. This means that discussions with stakeholders and budget holders can be difficult. There is often an over-emphasis on how things ‘look’ and much superficial discussion about graphics, with little discussion about the actual desired outcome – the acquisition of knowledge and skills and eventual performance. Research gives you the ability to navigate these questions from stakeholders by avoiding anecdote and relying on objective evidence.

5. Research helps us motivate learners
Research has shown that learners are strangely delusional about optimal learning strategies and what they think they have learnt. This really does matter, as what they want is not always what they actually need. Analogously, you, as teacher or learning designer, are like a doctor advising a patient, who is unlikely to know exactly what they have to do to solve their problem. An evidence-based approach moves us beyond the simplicities of learning styles and too much focus on making things ‘look’ or ‘feel’ good. Explaining to a learner that this approach will get them to their goal quicker, pass that exam and perform better can benefit from making the research explicit to the learner.

6. Research helps you select tools
One of the biggest problems in the delivery of online learning is the way the tools shape what the learner sees, experiences and does. Far too many of these tools focus on look and feel, at the expense of cognitive effort, so we get lots of beautiful sliding effects and lots of bits of media. It is, in effect, souped-up PowerPoint. Even worse are the childish games templates that produce mazes and other nonsense that is a million miles away from proper gaming. We have a chance to escape this with smarter software and tools that allow the learner to do what they need to do to learn - open input, writing, doing things. This requires Natural Language Processing and lots of other new tech.

7. Research helps us professionalise within organisations
In navigating organisational politics, structures and budgeting, and making your internal service appeal to senior management, research can be used to validate your proposals and approaches. HR and L and D have long complained about not being taken seriously enough by the business. Finance has the advantage of a body of established practice, massively influenced by technology and data. This is becoming true of marketing, production, even management, where data on the efficacy of different channels is now the norm. So it should be with learning. Alignment and impact matter. Personalised 'experiences' really do matter in the midst of complex learning.

Conclusion
If all of the above doesn’t convince you, then I’d appeal to the simple idea of doing the right thing. It’s not that all research is definitive, as science is always on the move, open to future falsification. But, as with research in medicine, physics in material science and engineering, chemistry in organic and inorganic production, and maths in AI, we work with the best that is available. We are duty-bound to do our best on the best available evidence or we are not really a profession.


Wednesday, March 06, 2019

Summarising learning materials using AI - paucity of data, abundance of stuff

We’ve been using AI to create online learning for some time now. Our approach is to avoid the use of big data, analytics and prediction software, as there are almost no contexts in which there is nearly enough data to make this work to meet the expectations of the buyer. AI, we believe, is far better at precise goals, such as identifying key learning points, creating links to external content, creating podcasts using text to speech and the semantic interpretation of free-text input by learners. We’ve done all of this, but one thing always plagues the use of AI in learning… although there’s a paucity of data, there’s an abundance of stuff!

Paucity of data, abundance of stuff
Walk into many large organisations and you’ll encounter a ton of documents and PowerPoints. They’re often over-written and far too long to be useful in an efficient learning process. That doesn’t put people off and in many organisations we still have 50-120 or more PowerPoint slides delivered in a room with a projector, as training. It’s not much better in Higher Education, where the one hour lecture is still the most dominant teaching method. The trick is to have a filter that can automate the shortening of all of this stuff.

Summarisation
To summarise or précis documents (text) down in size, to focus on the ‘need to know’ content, there are three processes:
1. Human edit
No matter what AI techniques you use to précis text, it is wise initially to edit out, by hand, the extraneous material that learners will not be expected to learn - for example, supplementary information, disclaimers, who wrote the document and so on. With large, well-structured documents, PDFs and PPTs it is often easy to simply identify the introductions or summaries in each section. These form ready-made summaries of the essential content for learning. Regard this step as simple data cleansing or hand washing! Now you are ready for further steps with AI....
2. Extractive AI
This technique produces a summary that keeps sentences intact and only ‘extracts’ the relevant material. We usually do a quick human edit first, then extract the relevant shortened text, which can then be used in WildFire, or on its own (see the sketch after section 3). This is especially useful where the content is already subject to regulated control (approved by an expert, lawyer or regulator), for example medical content in the pharmaceutical industry or compliance.
3. Abstractive AI
This is a summary that is rewritten, using a training set and machine learning to produce the new text. Note that this approach needs a large, domain-specific training set - and by large we mean as large as possible; some training sets are literally gigabytes of data. That data also has to be cleaned.
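A minimal sketch of the extractive approach in step 2: score each sentence by the frequency of the words it contains, then keep the top-scoring sentences in their original order. Real pipelines add much more (sentence position, titles, embeddings):

```python
# Minimal sketch of extractive summarisation (step 2 above): rank
# sentences by word frequency, keep the best few in document order.
import re
from collections import Counter

def extract_summary(text: str, n_sentences: int = 2) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def sentence_score(s: str) -> int:
        return sum(freq[w] for w in re.findall(r"[a-z']+", s.lower()))

    ranked = sorted(range(len(sentences)),
                    key=lambda i: sentence_score(sentences[i]), reverse=True)
    keep = sorted(ranked[:n_sentences])  # restore original document order
    return " ".join(sentences[i] for i in keep)

print(extract_summary(
    "Wind turbines convert wind into electrical power. The nacelle houses "
    "the gearbox and generator. Power rises with the cube of wind speed, "
    "so wind speed dominates the power output."))
```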

Conclusion

The end result is automatically shortened documents, from original large documents, PowerPoints, even video transcripts. These we can input into WildFire; rather than delivering intense training on huge pieces of content, you get the essentials. The summaries themselves can be useful in the context of the learning experience. So if you have a ton of documents and PowerPoints, we can shorten them quickly and produce online learning in minutes not months, at a fraction of the cost of traditional online learning, with very high retention.


Tuesday, March 05, 2019

Learning experiences often not learning at all


"Part of the problem with all this talk about 'learning experience' is it's questionable whether learning is actually experienced at all."
This brilliant quote, by Leonard Houx, skewers the recent hubris around ‘learning experiences’. Everything is an ‘experience’ and what is needed is some awareness of good and bad learning experiences. Unfortunately, all too often what we see are over-engineered, media-heavy, souped-up PowerPoint or primitively gamified 'experiences' that, the research shows, result not in significant learning but in: 1) clickthrough (click on this cartoon head, click on this to see X, click on an option in an MCQ) that allows the learner to skate across the surface of the content; 2) cognitive overload (overuse of media); and 3) diversionary activity (mazes and infantile gamification). What is missing is relevant cognitive effort, the sort that makes one think rather than click. There is rarely open input, rarely any personalised learning and rarely enough practice.
Media rich is not mind rich
The purveyors of ‘experience’ think that we need richer experiences, but research shows that media rich is not mind rich. Mayer shows, in study after study, that redundant material is not just redundant but dangerous, in that it can hinder learning. Sweller and others warn us of the danger of cognitive overload. Bjork and others show us that learners are delusional about what is best for them in learning strategies, and that just pandering to what users think they want is a mistake. Less is usually more, in that we need to focus on what the learner needs to ‘know’, not just ‘experience’.
Research is bedrock of design
There are those who think that Learning and Development does not have to pay attention to this research or learning research at all. It is still all too common to sit in a room where no one has read much learning theory at all, and whose sole criterion for judgement on what makes good online learning is the ‘user experience’, without actually defining it as anything other than ‘what the user likes’. Lawyers know the law, engineers know physics and it is not really acceptable to buy into the anti-intellectual idea that knowing how people learn is irrelevant to Learning and Development. It is, in fact, the bedrock of learning design.
Less is more
Increasingly, online learning is diverging from what most people actually do and experience online. Look at the web’s most popular services or experiences – Google, Facebook, Twitter, Instagram, YouTube, Snapchat, WhatsApp, Messenger, Amazon, Netflix. It is all either mediated by AI to give you a personalised experience that doesn’t waste your time, or dialogue. Their interfaces are pared down, simple, and they make sure there’s not an ounce of fat to distract from what the user actually needs. Occam was right with his razor – design with the minimal number of entities to reach your goal.
Conclusion
An experience can be a learning experience, but not all experiences are learning experiences. Many are, inadvertently, designed to be the very opposite – experiences designed to impress or dazzle that end up as eye-candy, edu-tainment or enter-train-ment. Get this - media rich is not mind rich, clicking is not thinking, and less in learning is often more.


Monday, February 25, 2019

Musk’s OpenAI breakthrough has huge implications for online learning

You have probably never heard of GPT-2, but it is a breakthrough in AI that has astonishing implications for us all, especially in learning. GPT-2 is an AI model that can predict the next word from a given piece of text. That doesn't sound like much, but it's odd that OpenAI, an open-source organisation, would close access to their software. In practice, this means it is a powerful model for:
   Summarising
   Comprehension
   Question answering
   Translation
This is all WITHOUT domain-specific training. In other words, it has general capabilities and does not need specific information on a topic or subject to operate successfully. It can generate text of good quality at some length. In fact, the model is “chameleon-like”, as it adjusts to the style and content of the initial piece of text. This makes its output read as a realistic extension.
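To see this next-word prediction at work, here is a minimal sketch using the small GPT-2 weights, as later distributed through the Hugging Face transformers library (an assumption for illustration; the full model was withheld at the time). The prompt is made up and the sampled output will vary:

```python
# Minimal sketch: text continuation with the small GPT-2 model via the
# Hugging Face 'transformers' library. Sampled output varies on each run.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Spaced practice improves retention because"
input_ids = tokenizer.encode(prompt, return_tensors="pt")
output = model.generate(input_ids, max_length=40, do_sample=True, top_k=40,
                        pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```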
This has huge implications, both good and bad, for the future of education and training.
GOOD
1.    AI writing assistants, allows the automatic creation of text for teaching and learning, whether, study papers, text books, at the right level
2.    Lengthy texts can be summarised into more meaningful learning materials
3.    More capable dialogue agents, means that learner ‘engagement’ through teaching assistant agents could become easier, better and cheaper
4.    More capable dialogue agents, means that learner ‘support ‘ such as is often provided by teaching assistants, could become easier, better and cheaper
5.    Creation of online learning content with little subject matter expert (SME) input
6.    Interpretation of student free text input answers
7.    The provision of formative feedback based on student performance
8.    Machine teaching, mentoring and coaching may well get a lot better. However, I’d be cautious on this as there are other serious problems to overcome before this becomes possible, especially around context.
9.    Assessments can be automatically created.
10. Speech recognition systems will get a lot better, allowing them to be used in online learning and assessment
11. Well-being dialogue agents will become more human-like and useful
12. Personalised learning just got a lot easier
13. Online learning just got a lot faster and cheaper
14. Language learning just got a lot easier, as unsupervised translation between languages will boost the quality of translation and make automatic, instantaneous, high-quality translation much more accurate and possible
BAD
1.    Essay mills have just been automated. You want an essay? Just feed it the subject, or the subject supplemented by a line of inquiry you want to follow, and it will do the rest. Even with an error rate, human finessing could polish the essay. It can also do homework assignments
2.    It could perform well in online exams, impersonating real people
3.    Teaching assistant jobs may be increasingly automated
4.    If it can answer questions then many human jobs that involve the interpretation of text and data may be automated. Customer service jobs, call centre jobs and the increased automation of all human interaction jobs may be accelerated
5.    It can generate misleading learning content (and news articles)
6.    Impersonating others online can be automated on a massive scale
7.    Abusive or fake content to post on social media can be automated on a massive scale, which is bad for education.
8.    Spam/phishing content can be generated on a massive scale.
AI in learning
We have been implementing many of these techniques in the creation of online learning in WildFire, including:
   Summarisation
   Text to speech
   Creation of online content
   Interpretation of free input
   Chatbots
This breakthrough makes all of this much more potent.
Conclusion
Importantly, the model is far from flawless. Some reports suggest human-level capabilities; this is far from the truth. It is still of variable quality and error-prone. But, for a first iteration, this model seems astonishingly powerful. They have published its performance against established tests. In future iterations it is likely to get a lot better as they predict more training data, more computing power and fine-tuning. One must also be careful in using this as an attack on AI: cars kill 1.3 million people a year, but we still drive cars. I'm not so sure about keeping the code private, as it restricts access for researchers but provides enough of a kick-start for well-funded bad actors to replicate it.
Bibliography
Radford, A. et al. (2019) Language Models are Unsupervised Multitask Learners. https://bit.ly/2IufQTd
Appendix (3 examples)
Example

A train carriage containing controlled nuclear materials was stolen in Cincinnati today. Its whereabouts are unknown.

MODEL COMPLETION (MACHINE-WRITTEN, FIRST TRY)
The incident occurred on the downtown train line, which runs from Covington and Ashland stations.
In an email to Ohio news outlets, the U.S. Department of Energy said it is working with the Federal Railroad Administration to find the thief.
“The theft of this nuclear material will have significant negative consequences on public and environmental health, our workforce and the economy of our nation,” said Tom Hicks, the U.S. Energy Secretary, in a statement. “Our top priority is to secure the theft and ensure it doesn’t happen again.”
The stolen material was taken from the University of Cincinnati’s Research Triangle Park nuclear research site, according to a news release from Department officials.
The Nuclear Regulatory Commission did not immediately release any information.
According to the release, the U.S. Department of Energy’s Office of Nuclear Material Safety and Security is leading that team’s investigation.
“The safety of people, the environment and the nation’s nuclear stockpile is our highest priority,” Hicks said. “We will get to the bottom of this and make no excuses.

Homework example

For today’s homework assignment, please describe the reasons for the US Civil War.

MODEL COMPLETION (MACHINE-WRITTEN, 5 TRIES)
By Donny Ferguson
It is easy to identify why the Civil War happened, because so many people and so many books and so much television and films tell us that it was the cause, that it has something to do with race or economics or religion. There is much agreement that it was essentially a war of slavery on behalf of capitalism, about a century of slavery. But that’s not what most people think of when they think of the Civil War. Many people think of the war as about states rights. You’re not wrong about that, but it’s kind of misleading to say that the Civil War was a conflict between states’ rights and federalism. So let’s try again. What’s the idea that has inspired so many people to go onto the battlefields of Gettysburg and Antietam and Gettysberg and Petersburg and Fredericksburg? The American idea of the republic--a notion of limited government--is a great part of the history.
In 1791, Thomas Jefferson said “Our Constitution was made only for a moral and religious people. It is wholly inadequate to the government of any other.” The “founding fathers” were basically religious men who were not very concerned with states’ rights. The Civil War occurred in reaction to the idea that the federal government was taking far too much power away from the states, that they weren’t sovereign. Jefferson himself, as you well know, is an author of the Declaration of Independence who had some really crazy ideas about, well, you know, secession for a variety of reasons. The War Between the States (later the Great Rebellion) was a radical re-imagining of the American idea and not surprisingly, Jefferson’s grandson, who took after both his father and his grandfather and wrote about it, was named Thomas, after the president. “The federal and state governments are both contrived for the benefit of the rich and large property,” This view of the Founding Fathers has been promoted by some of today’s most prominent politicians, including some who hold prominent national offices and have a direct stake in a Republican Party that has been hijacked by the “federalist” wing of the party. So why is it that so many people have an easy-to-spot way of understanding the Civil War that has everything to do with the South and nothing to do with the South? The Confederate flag has been a symbol of racism for a long time, but when the Civil War started and the Civil War ended, many people suddenly began to see it differently. There was a change of heart, which I think is what led people to take the question of why the Civil War happened quite seriously.
The other part of the explanation is what one scholar I think gets at well and a lot of people don’t is this idea that the Southern states were being left behind in the Industrial Revolution and this made the South really feel, especially in the South, that they weren’t really in the forefront of the Industrial Revolution.

Creativity
There is an interesting example on the OpenAI website showing the generation of a fictional story, with full dialogue, from just one relatively short sentence. This opens up interesting possibilities for the generation of stories, fiction and literature.

Legolas and Gimli advanced on the orcs, raising their weapons with a harrowing war cry.

MODEL COMPLETION (MACHINE-WRITTEN, FIRST TRY)
The orcs’ response was a deafening onslaught of claws, claws, and claws; even Elrond was forced to retreat. “You are in good hands, dwarf,” said Gimli, who had been among the first to charge at the orcs; it took only two words before their opponents were reduced to a blood-soaked quagmire, and the dwarf took his first kill of the night. The battle lasted for hours until two of the largest Orcs attempted to overwhelm Aragorn. When they finally stopped, they lay defeated and lifeless for miles and miles.
“I take nothing,” said Aragorn. “But I give my word, at my peril and mine, that I will never forget this day of horror. None of us will forget. Ever!”
“I’ll never forget it!” cried Gimli, who had been in the thick of the battle but hadn’t taken part in it. One of the wounded orcs he had carried off, he was the only one of the survivors who remained uninjured. “We’ll keep the memory of that day of evil, and the war with it, alive as long as we live, my friends!”
“Then we’ll keep it alive as long as we live,” added Legolas. “And we won’t forget the first great battle of the night, even if we may have forgotten the final defeat.”
“I agree,” Gandalf said, “but we will all remember it as the last battle in Middle-earth, and the first great battle of the new day.”
Aragorn drew his sword, and the Battle of Fangorn was won. As they marched out through the thicket the morning mist cleared, and the day turned to dusk.
The Two Rings were returned to Rivendell. Frodo and Sam woke up alone in their room, and Frodo found a note on his pillow. He opened it and read:
May the Power of the Ring be with you always, and may its light never fade. I am not sure if it matters which of the two rings we accept this day but, as you asked me, I have chosen mine. I am sorry to leave you, Frodo, but know that we are very close to the end, and that you are with us forever. May this letter find you safely in Rivendell; and if it does not, then I will accept the ring in your stead. If by any chance you find or give this letter to the enemy, may they learn the strength of the ring and may the Two Rings never be broken!
