Saturday, August 11, 2018

AI now plays a major recommendation role in L&D - resistance is futile

In 2014, at EPFL (Ecole Polytechnique Federale de Lausanne) in Switzerland, I was speaking at a MOOC conference, explaining how MOOCs would eventually move towards satisfying real demand in vocational courses and not, as most attendees thought at the time, deliver standard HE courses. This proved to be right. The most popular MOOCs worldwide are not liberal arts courses but IT, business and healthcare courses. Udacity, Coursera, Udemy and EdX have all recognised this and shifted their business models. Futurelearn remains in the doldrums.
At that same conference I spoke to Andrew Ng and asked him about the application of AI (he’s a world-class AI expert) to online content. I had been pushing this since April 2013. His reply was interesting: “It’s too early and the problem is the difficulty in authoring the courses to be suitable for the application of AI”. He was right then, but he’s now done exactly that in Coursera.
AI Recommendation engines
Recommender systems are now mainstream in almost every area of online delivery: search (Google), social media (Facebook, Twitter, Instagram), e-commerce (Amazon) and entertainment (Netflix). AI is the new UI (User Interface), as almost all online experiences are mediated by recommendation engines.
Their success has been proven in almost every area of human endeavour, most recently in health, where they provide diagnoses comparable, and at times superior, to those of clinicians. Yet they are rare in education and training. This is changing as companies such as WildFire, Coursera, Udacity and others offer recommendation services that allow organisations and learners to identify and select courses, based on existing skillsets and experience. We can expect this to be an area of real growth over the next few years.
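To make the idea concrete, here is a minimal sketch of a skills-based course recommender of the kind described: rank courses the learner is qualified for by how many new skills they would gain. The course catalogue, skill names and scoring rule are invented for illustration, not any vendor’s real system.

```python
# Toy skills-based course recommender. Courses declare prerequisites and
# the skills they teach; we recommend accessible courses that add the
# most new skills. All data here is illustrative.
def recommend(learner_skills, courses, top_n=2):
    scored = []
    for name, info in courses.items():
        prereqs = set(info["prerequisites"])
        teaches = set(info["teaches"])
        if not prereqs <= learner_skills:
            continue  # learner lacks a prerequisite, skip this course
        new_skills = teaches - learner_skills
        if new_skills:
            scored.append((len(new_skills), name))
    scored.sort(reverse=True)  # most new skills first
    return [name for _, name in scored[:top_n]]

courses = {
    "Intro to Python": {"prerequisites": [], "teaches": ["python"]},
    "Data Analysis": {"prerequisites": ["python"], "teaches": ["pandas", "statistics"]},
    "Deep Learning": {"prerequisites": ["python", "statistics"], "teaches": ["neural nets"]},
}
print(recommend({"python"}, courses))  # → ['Data Analysis']
```

A real engine would of course infer the skill profiles from data rather than hand-code them, which is exactly where the AI comes in.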
Recommender engines in learning
Recommender systems are used at various levels in learning:
  • Macro-level insights from training to solve business problems 
  • Macro-level curricula selection for organisations
  • Macro-level course selection for individuals
  • Micro-level insights and decisions within courses for individuals through adaptive learning
  • Micro-level recommendations for external content within context while doing a course
  • Micro-level insights and decisions for individuals through assessment
Macro-level insights from training to solve business problems 
Chris Brannigan, CEO of Caspian Learning, is a neuroscientist who uses what he calls ‘Human Performance Intelligence’ to investigate, diagnose and treat business problems within organisations. These can be issues of compliance, risk or performance of any kind. The aim is to do a complete health check, using AI-driven, 3D simulated training scenarios and sophisticated behavioural analysis, right through to predictive analysis and recommendations for process, human and other types of change. The ambition is breathtaking.
Let’s take a real example, one that Chris has completed. How do you know how your tens or hundreds of thousands of employees perform under high risk? You don’t. The problem is that the risk is asymmetric. A few bad apples can incur the wrath of the regulators, who are getting quite feisty. You really do need to know what employees do, why they do it and what you need to do to change things for the better. The system learns from experts, so that there is an ideal model; employees then go through scenarios (distributed practice) which subtly gather data over 20 or so scenarios, with lots of different flavours. It then diagnoses the problems in terms of decision-making, reasoning and investigation. A diagnosis, along with a financial impact analysis, is delivered to senior executives and line managers, with specific actions. All of this is done using AI techniques that include machine learning and other forms of algorithmic and data analysis to improve the business. It is one very smart solution.
Note that the goal is not to improve training but to improve the business. The data, intelligence and predictive analytics all move towards decisions, actions and change. The diagnosis will identify geographic areas, cultural problems, specific processes and system weaknesses – all moving towards solutions that may be more training, investment decisions, system changes or personnel changes. All of this is based on modelling business outcomes.
Macro-level curricula selection for organisations
Coursera have 31 million registered users, huge data sets and 1,400 commercial partners and, as mentioned, are using AI to improve organisational learning. Large numbers of employees are taking large numbers of courses, and the choice of future courses is also large and getting bigger. Yet within an organisation, the data gathered on who takes what courses is limited. This data, when linked to the content (courses and elements within courses) and put to use through AI, can provide organisations with insights into demand, needs and activity, as well as recommendations on which courses to supply and to whom. It is not just Coursera who are doing this; Udacity also have an AI team who have produced interesting tools using sentiment analysis and chatbots that recommend courses. We’ve also done this at WildFire.
Macro-level course selection for individuals
At WildFire, we’ve developed an AI recommendation engine that does some nifty analysis on data from completed courses and recommends other courses for you from that data. We take databases of course interactions (courses taken and user-course interactions with learners), then use model-based collaborative filtering to create a scoring matrix. These matrices are sparse and need to be filled out using correlations that estimate unknown user interaction grades, and these are used to cluster courses into groups, based on similarities. We then use unsupervised learning to identify clusters of similar specialisations and find areas of overlap. The result is sets of specialisations that are similar to each other – insights that can lead to better investment in certain courses and determine which courses should be taken by whom.
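The steps above – a sparse learner-by-course matrix, filled in by a model, then courses grouped by similarity – can be sketched in a few lines. This is a generic illustration of model-based collaborative filtering (here an iterative low-rank SVD fill), not WildFire’s actual implementation; the ratings are invented.

```python
import numpy as np

# Sparse learner-by-course score matrix: 0 marks an unknown interaction.
R = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

mask = R > 0
filled = np.where(mask, R, R[mask].mean())  # initialise gaps with the mean

for _ in range(50):  # iterative low-rank (rank-2) completion
    U, s, Vt = np.linalg.svd(filled, full_matrices=False)
    low_rank = (U[:, :2] * s[:2]) @ Vt[:2, :]
    filled = np.where(mask, R, low_rank)     # keep the known scores fixed

# Cosine similarity between course columns: courses liked by the same
# learners end up clustered together.
cols = filled / np.linalg.norm(filled, axis=0)
sim = cols.T @ cols
print(np.round(sim, 2))
```

Here courses 0–1 and courses 2–3 form two clear clusters; a real system would run a proper clustering algorithm over a much larger similarity matrix.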
Micro-level insights and decisions within courses for individuals through adaptive learning
In learning, recommendation engines can also be used to recommend routes through courses, even learning strategies. They promise to increase the efficacy of online delivery through personalised learning, each learning experience being unique to each learner, drawing on data about the learner, other learners who have taken the course, as well as data from other courses taken by those learners. As learners vector through learning experiences at speeds related to their competences, faster learners save time, while learners who need more individualised support are less likely to drop out.
Recommender engines lift traditional online learning above the usual straight HTML delivery, which has little in the way of intelligent software, making the experience more relevant and efficient for the learner. They also provide scale and access from anywhere with online access, at any time. If their efficacy can be proved, there is every reason to suppose that their adoption will be substantial.
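A toy version of this kind of adaptive routing: keep a per-learner mastery estimate, always serve the unseen item whose difficulty is closest to it, and nudge the estimate after each answer. The item difficulties and update step here are invented for illustration; real adaptive systems use far richer models.

```python
# Minimal adaptive-route sketch: serve the closest-difficulty unseen item,
# then move the mastery estimate up or down based on the answer.
def next_item(mastery, items, seen):
    candidates = [(abs(diff - mastery), name)
                  for name, diff in items.items() if name not in seen]
    return min(candidates)[1] if candidates else None

def update(mastery, correct, step=0.1):
    return mastery + step if correct else mastery - step

items = {"basics": 0.2, "intermediate": 0.5, "advanced": 0.8}
mastery, seen = 0.3, set()

first = next_item(mastery, items, seen)       # "basics" is nearest to 0.3
seen.add(first)
mastery = update(mastery, correct=True)       # correct answer: estimate rises
second = next_item(mastery, items, seen)      # now "intermediate" is nearest
```

A faster learner answering correctly skips ahead sooner; a struggling learner is routed back to easier material – which is exactly the time-saving and drop-out argument above.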
Micro-level recommendations for external content within context while doing a course
In WildFire, we create content in minutes, not months, using various AI techniques. In a sense, the AI identifies learning points and creates user interactions within that content, but it also has a ‘curation’ engine that recommends external content linked to that specific piece of learning and produces links to that content automatically. This creates content that satisfies both those who need more explanation and detail, and the more curious learner.
This is exactly how experienced learners learn. One thing sparks off curiosity in another. In this case, we formalise the process with AI to find links that satisfy those needs. Sometimes it will be a simple internal document, at others external links to a definition, YouTube video, TED talk or other trusted and selected source.
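A minimal sketch of how such a curation step might work – WildFire’s actual engine is not public, so the term-overlap ranking and the resource list here are invented for illustration:

```python
# Rank a whitelist of trusted resources by word overlap with the
# current learning point; return the best matches as suggested links.
def curate(learning_point, resources, top_n=1):
    point_words = set(learning_point.lower().split())
    scored = sorted(
        resources.items(),
        key=lambda kv: len(point_words & set(kv[1].lower().split())),
        reverse=True,
    )
    return [title for title, _ in scored[:top_n]]

resources = {
    "Negative controls in diagnostics": "negative control diagnostic test baseline",
    "Administering allergens safely": "allergen skin administration safety",
}
print(curate("what is a negative control", resources))
```

A production system would use embeddings or entity linking rather than raw word overlap, but the shape of the task – learning point in, ranked trusted links out – is the same.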
Micro-level insights and decisions for individuals through assessment
Coursera have been building IRT (Item Response Theory) into machine learning software to analyse learners’ performance on assessments. This allows you to gauge the competences of your employees relative to each other, but also relative to other companies in your sector or generally. This is similar to the work done by Chris Brannigan at Caspian Learning.
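IRT itself is well documented, so here is the 1-parameter (Rasch) model in miniature: the probability of a correct answer as a function of learner ability and item difficulty, with a crude grid search to estimate ability from a set of responses. The difficulties and responses are illustrative only.

```python
import math

# Rasch (1PL) model: P(correct) = 1 / (1 + exp(-(ability - difficulty))).
def p_correct(ability, difficulty):
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def estimate_ability(responses):
    """responses: list of (item_difficulty, answered_correctly) pairs.
    Grid-search the ability that maximises the log-likelihood."""
    def log_likelihood(theta):
        return sum(
            math.log(p_correct(theta, d) if ok else 1 - p_correct(theta, d))
            for d, ok in responses
        )
    grid = [i / 10 for i in range(-40, 41)]  # theta in [-4.0, 4.0]
    return max(grid, key=log_likelihood)

# Got the two easier items right, the two harder ones wrong:
responses = [(-1.0, True), (0.0, True), (1.0, False), (2.0, False)]
theta = estimate_ability(responses)  # lands near 0.5
```

Because ability and difficulty sit on the same scale, estimates are comparable across people and across item banks – which is what makes the cross-company benchmarking described above possible.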
Conclusion
L&D talk a lot about business alignment but often don’t get very far down that track. We can see that AI has succeeded because it moves beyond L&D into other business units. What it gathers for an organisation is a unique data set that really does deliver recommendations for change. It’s light years ahead of happy sheets and Kirkpatrick. What’s more interesting is that it is the polar opposite of much of what is being done at present, with low-key, non-interventionist training. Even in the online learning world, blind adherence to SCORM has meant that most of the useful data, beyond mere completion, has not been gathered. This blind adherence to what was an ill-conceived standard will hold us back unless we move on.
This AI approach draws on behaviour of real people, uses sim/scenario-based data gathering, focuses on actual performance, captures expert performance, uses AI/machine intelligence to produce concrete recommendations. It’s all about business decision making and direct business impact. And here’s the rub – it gets better the more it is used.
There is no doubt in my mind that AI will change why we learn, what we learn and how we learn. I’ve been showing real examples of this for several years now at conferences, built an AI-in-learning company, WildFire, that creates online learning in minutes not months, and invested in others. Learning and Development has always been poor on data, evaluation and return on investment, relying on an old, outdated form of evaluation (Kirkpatrick). It’s time to move on and address that issue square on, with evaluation based on real data and efficacy based on actual demand, using that data.


Monday, August 06, 2018

Video is good but never enough - how to supplement it in minutes to get great learning

‘10 researched tips to produce great video in learning (some will surprise you)’ had concrete tips on producing video for online learning, but it was only half the story, as research also shows that video, in most cases, is rarely enough in learning.
Video not sufficient
Video is great at showing processes, procedures, real things moving in the real world, drama, even much-maligned talking heads, but it is poor on many other things, especially concepts, numbers and abstract meaning. When delivering WildFire-created content to nurses in the NHS, we discovered that processes and procedures were recalled from video but much of the detail was not. The knowledge that was not retained and recalled was often 'semantic' knowledge:
1) Numbers (doses, measurements, statistical results and so on) 
2) Names and concepts (drugs, pathogens, anatomy and so on)
This is not surprising, as there is a real difference between episodic and semantic memory. 
Episodic memory covers all those things remembered as experiences or events – you are an actor within these events. Semantic memory is facts, concepts and numbers, where meaning is independent of space and time, often thought of as words and symbols.
In healthcare, as in most professions, you need to know both. This is why video alone is rarely enough. One solution is to supplement video with learning that focuses on reinforcing the episodic and semantic knowledge, so that two plus two makes five.
Two plus two makes five
Our solution was to automatically grab the transcript (narration) of the videos. Some transcripts were already available and, for those that were not, we used the automatic transcript service on YouTube. This transcript was put through the WildFire process, where AI was used to automatically produce online learning with open-input questions to increase retention and recall. This allowed the learner to both watch the video (for process and procedure) then do the active learning, where they picked up the semantic knowledge, as well as reinforcing the processes and procedures.
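In spirit, the transcript-to-question step looks something like this sketch. It is not WildFire’s actual pipeline: here the key terms are hand-picked, where a real system would use NLP to identify the learning points automatically.

```python
import re

# Turn a transcript into open-input (cloze) questions: split into
# sentences, then blank out each key term for typed recall.
def make_cloze(transcript, key_terms):
    questions = []
    for sentence in re.split(r"(?<=[.!?])\s+", transcript):
        for term in key_terms:
            if term.lower() in sentence.lower():
                blanked = re.sub(re.escape(term), "_____",
                                 sentence, flags=re.IGNORECASE)
                questions.append((blanked, term))
    return questions

transcript = ("Do not take antihistamines for 4 days before the test. "
              "A negative control checks for a baseline skin reaction.")
qs = make_cloze(transcript, ["4 days", "negative control"])
for question, answer in qs:
    print(question, "->", answer)
```

Notice the targets are exactly the semantic items video fails on – numbers and concept names – while the video itself carries the procedure.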
Example
In a nurse training video on Allergy Tests, where the nurse administers allergens into the skin of the patient and the reactions are recorded, the video shows the nurse as she gets the patient comfortable with a pillow under his arm. She then asks him some questions (Any lotions on your skin? Taken any antihistamines in the last 4 days?). Other important learning points are to blot (not rub), tell the patient not to scratch and so on.
Now the video did a great job on the procedure – pillow under the arm, lancets in sharps bin, blot not rub, and so on. Where the video failed was in the number of days within which the patient had taken antihistamines, the names of the allergens and the concept of a negative control. This was then covered by asking the learners to recall and type in their answers (not MCQs) in WildFire, items such as 4 days, names of allergens, negative control etc. In addition, if the learner didn’t know, for example, what a negative control was, there were AI created links to explanations, describing what a negative control is within a diagnostic test.
The learner gets the best of both worlds, the visual learning through video and the semantic learning through WildFire, all in the right order and context.
Conclusion
Video is a fabulous learning medium, witness the popularity of YouTube and the success of video in learning, although there are some principles that make it better. When supplemented by WildFire produced content, you get a double dividend – visual episodic learning and semantic knowledge. If you have video content that you need to turn into powerful online learning, quickly, with high retention and recall, contact us at WildFire.


Friday, July 20, 2018

“Huge milestone in advancing artificial intelligence” (Gates) as AI becomes a team – big implications for learning?

An event took place this month that received little publicity but was described by Bill Gates as a “huge milestone in advancing artificial intelligence”. It could have profound implications for advances in AI. An OpenAI team, of five neural networks, and a lot of other techniques, beat human teams in the game Dota2.
To understand the significance of this, we must understand the complexity of the task. A computer environment like Dota2 is astoundingly complex and it is played by teams of five, all of whom determine long-term success, exploring the complex environment, making decisions in real time, identifying threats, employing clever team strategies. It’s a seriously bad, chaotic, complicated and fast environment made even messier by the fact that you’re playing against some very smart teams of humans.
To win, they created five separate neural networks, each representing a player, with an executive layer to determine the weightings for each player’s actions and so set priorities. Using reinforcement learning techniques and playing against itself millions of times (the equivalent of 180 years of playtime per day!), it learned fast. It is not yet at the level of the professional Dota2 teams, but it will get there.
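OpenAI described a ‘team spirit’ parameter that trades each agent’s own reward against the team’s. A toy sketch of that blending idea – the agents, actions and values below are invented, and the real system learns these values rather than hand-coding them:

```python
# Each agent holds action values for itself and for the team; an
# executive layer blends them with a team_spirit weight in [0, 1].
def team_decision(agent_values, team_values, team_spirit):
    decisions = {}
    for agent in agent_values:
        blended = {
            action: (1 - team_spirit) * agent_values[agent][action]
                    + team_spirit * team_values[agent][action]
            for action in agent_values[agent]
        }
        decisions[agent] = max(blended, key=blended.get)
    return decisions

agent_values = {"carry": {"farm gold": 2.0, "defend tower": 0.5}}
team_values = {"carry": {"farm gold": 0.2, "defend tower": 1.5}}

selfish = team_decision(agent_values, team_values, team_spirit=0.0)
cooperative = team_decision(agent_values, team_values, team_spirit=1.0)
```

With team spirit at zero the agent farms for itself; at one, it defends for the team – the long-term, team-first behaviour that makes this harder than chess or Go.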
What’s frightening is the speed of the training and actual learned competence across a ‘team’ in a team context. It needed to optimize team decisions and think about long-term goals, not short-term wins. This is way beyond single player games like chess and GO. 
The implications are huge as AI moves from being good at single, narrow tasks, to general tasks, using a modular approach.
Team AI in military
Let’s take the most obvious parallel. In a battle of robot soldiers versus humans, the robots and drones will simulate the mission, going through millions of possible scenarios, then, having learned all they need to know, attack in real time, problem-solving as a team as they go. Autonomous warfare will be won by the smartest software, organic or not. This is deeply worrying – so worrying that senior AI luminaries, such as Musk, and others within Google, signed a pledge this week saying they will not work on autonomous weapons. So let’s turn to the civilian world.
Team AI in robotics
I’ve written about shared learning in AI across teams of robots, which gives significant advantages in terms of speed and shared experience. Swarm and cloud robotics are very real. But let’s turn to something closer to home – business simulations.
Team AI in business simulations
Imagine a system that simulates the senior management of a company – its five main directors. It learns by gathering data about the environment – competitors, financial variables, consumer trends, cost modelling, instantaneous cost-benefit analysis, cashflow projections, resource allocation, recruitment policies, training. It may not be perfect, but it is likely to learn faster than any group of five humans and make decisions faster, beating possible competitors. Would a team AI be useful as an aid to decision making? Do you really need those expensive managers to learn this stuff and make these decisions?
Team AI in learning for individuals
Let’s bring it down to specific strategies for individuals. Suppose I want to maximise my ‘learning’ time. The model in the past has always been one teacher to one or many learners. Imagine pooling the expertise of many teachers to decide what you see and do next in your learning experience or journey. Imagine those teachers having different dimensions, being cross-curricular and so on. This breaks the traditional teaching mould. The very idea of a single teacher may just be an artifact of the old model of timetabling and institutions. Multi-teaching, by best-of-breed systems, may be a better model.
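One simple way to pool several ‘teachers’ is a confidence-weighted vote over what the learner should do next. The teacher systems, activities and confidence scores below are hypothetical, purely to show the shape of the idea:

```python
from collections import Counter

# Each specialist system proposes a next activity with a confidence;
# a weighted vote picks the winner.
def next_activity(proposals):
    """proposals: list of (activity, confidence) pairs, one per teacher."""
    votes = Counter()
    for activity, confidence in proposals:
        votes[activity] += confidence
    return votes.most_common(1)[0][0]

proposals = [
    ("spaced retrieval quiz", 0.9),  # retention system
    ("watch demo video", 0.6),       # content recommender
    ("spaced retrieval quiz", 0.5),  # adaptive engine
]
print(next_activity(proposals))  # → spaced retrieval quiz
```

More sophisticated ensembles would learn the weights per learner over time, but even this crude vote is already ‘multi-teaching’ in miniature.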
Team AI and blended learning
One could see AI determine the optimal blend for a course, or for you as an individual. If we take Blended Learning (not Blended Teaching) as our starting point, then, based on the learning task, the learner and the resources available from the AI teacher team, we could see guidance emerge on optimising the learning journey for that individual – a pedagogic expert that determines what particular type of teaching/learning experience you need at that particular moment.
Conclusion
Rather than seeing AI as a single entity and falsely imagining that it should simulate the behavior of a single ‘teacher’ we should see it as a set of different things that work as a team to help you learn. At the moment we use all sorts of ~AI to help you learn – Google, Recommendation engines, Engagement bots (Differ), Support bots (Jill Watson), Content creation (WildFire), Adaptive Learning (CogBooks), Spaced-practice (WildFire), even Wellbeing. Imagine bringing all of these together for you as an individual learner – coordinated, optimized, not a teacher but a team of teachers.


Wednesday, July 04, 2018

Data is not the new oil, more likely the new snakeoil….

Oil is messy and dirty (crude) when it comes out of the ground, but it is largely useful when fractioned. Up to 50% is used for petrol, 20% for distillate fuel (heating oil and diesel fuel) and 8% for jet fuel. The rest has many other useful purposes. The unwanted elements and compounds are a tiny percentage. Data is not the new oil. It is stored in weird ways and places; it is often old, useless, messy, embarrassing, secret or personal; it may be observed, derived or analytic; it may need to be anonymised, have training sets identified and be subject to GDPR. To quote that old malapropism, ‘data is a minefield of information’!
1. Data dumps
Data is really messy, with much of it in:
  • odd data structures
  • odd formats/encrypted
  • different databases
Just getting a hold of the stuff is difficult.
2. Defunct data
Then there’s the problem of relevance and utility, as much of it is:
  • Old
  • Useless
  • Messy
In fact, much of it could be deleted. We have so much of the stuff because we haven’t known what to do with it, don’t clean it and don’t know how to manage it.
3. Difficult data
There are also problems around data that is:
  • embarrassing
  • secret
There may be very good reasons for not opening up historic data, such as emails and internal communications. It may open up sizeable legal and HR risks for organisations. Think of the Wikileaks email dumps. It’s not like a barrel of oil, more like a can of worms. Like oil spills, we also have data leaks.
4. Different data
Once cleaned, one can see that there are many different types of data. Unlike oil, it has not so much fractions as different categories. In learning we can have ‘Personal’ data, provided by the person or from actions performed by that person with their full knowledge. This may be gender, age, educational background, needs, stated goals and so on. Then there’s ‘Observed’ data from the actions of the user – their routes, clicks, pauses and choices. You also have ‘Derived’ data, inferred from existing data to create new data, and higher-level ‘Analytic’ data from statistical and probability techniques related to that individual. Data may be created on the fly or stored.
5. Anonymised data
Just when you thought it was getting clearer, there is also ‘Anonymised’ data, which is a bit like oil of unknown origin. It is cleaned of any attributes that may relate it to specific individuals. This is rather difficult to achieve, as there are often techniques to reverse-engineer attribution to individuals.
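A minimal sketch of what basic anonymisation involves – salting and hashing direct identifiers and coarsening quasi-identifiers like age into bands. The field names and salt are invented; real anonymisation also has to consider combinations of fields (k-anonymity), which is exactly where re-identification attacks get in.

```python
import hashlib

SALT = "replace-with-a-secret-salt"  # illustrative only

def anonymise(record):
    out = dict(record)
    # Direct identifier -> salted hash (irreversible without the salt).
    out["learner_id"] = hashlib.sha256(
        (SALT + record["learner_id"]).encode()).hexdigest()[:12]
    # Quasi-identifier -> coarse band (37 -> "30s").
    out["age"] = f"{record['age'] // 10 * 10}s"
    return out

row = {"learner_id": "jane.doe", "age": 37, "score": 0.82}
anon = anonymise(row)
print(anon)
```

Note that the useful analytic value (the score) survives, while the fields that point back at a person are degraded – the trade-off at the heart of anonymisation.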
6. Supervised data
In AI there’s also ‘Training’ data, used for training AI systems, and ‘Production’ data, which the system actually uses when it is launched in the real world. This is not trivial. Given the problems stated above, it is not easy to get a suitable data set that is clean and reliable for training. Then, when you launch the service or product, the new data may be subject to all sorts of unforeseen problems not uncovered in training.
7. Paucity of data
But the problems don’t stop there. In the learning world, the data problem is even worse, as there is another issue – the paucity of data. Institutions are not gushing wells of data. Universities, for example, don’t even know how many students turn up for lectures. Data on students is paltry. The main problem with the use of data in learning is that we have so little of the stuff. SCORM, which has been around for 20-plus years, effectively stopped the collection of data with its focus on completion. This was the result of a stupid decision by a bunch of folk at ADL. This makes most data analytics projects next to useless. The data can be best handled in a spreadsheet. It is certainly not as large, clean and relevant as it needs to be to produce genuine insights.
Prep
Before entering these data analytics projects, ask yourself some serious questions about data. Data size by itself is overrated, but size still matters: whether n is tens, hundreds, thousands or millions, the Law of Small Numbers still applies. Don’t jump until you are clear about how much relevant and useful data you have, where it is, how clean it is and in what databases.
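A quick back-of-the-envelope check in this spirit: before trusting a completion rate or pass rate, look at the margin of error your n implies. This uses the standard 95% normal-approximation interval for a proportion; the rates below are invented.

```python
import math

# 95% margin of error for an observed proportion p over n learners.
def margin_of_error(p, n):
    return 1.96 * math.sqrt(p * (1 - p) / n)

small = margin_of_error(0.7, 30)    # a 70% pass rate from 30 learners
large = margin_of_error(0.7, 3000)  # the same rate from 3,000 learners
print(f"n=30: ±{small:.2f}, n=3000: ±{large:.3f}")
```

With 30 learners the ‘70% pass rate’ is really anywhere from the mid-50s to the mid-80s – spreadsheet territory, not grounds for an analytics programme.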
New types of data may be more fruitful than legacy data. In learning this could be dwell time on questions, open input data, wrong answers to questions and so on.
More often than not, what you have as data is really a set of proxies for phenomena. Be careful here, as your oil may actually be snakeoil.
Conclusion
GDPR has also made the management and use of data more difficult. All of this adds up to what I’d call the ‘data delusion’ – the idea that data is one thing, easy to manage and generally useful in data analytics projects in institutions and organisations. In general, it is not. That’s not to say you should ignore its uses – just don’t get sucked into data analytics projects in learning that promise lots but deliver little. Far better to focus on the use of data in adaptive learning, or in small-scale teaching and learning projects where relatively small amounts of data can be put to good use.

