Wednesday, April 24, 2019

The Geneva Learning Foundation is bringing AI-driven training to health workers in 90 countries

Wildfire is helping the Swiss non-profit tackle a wicked problem: while international organizations publish global guidelines, norms, and standards, they often lack an effective, scalable mechanism to support countries in turning these into action that leads to impact. What is required is low-cost, quickly produced, high-retention training.
So the Geneva Learning Foundation (TGLF) has partnered with artificial intelligence (AI) learning pioneer Wildfire to pilot cutting-edge learning technology with over 1,000 immunization professionals in 90 countries, many working at the district level. It is fascinating to see so much feedback come in from so many countries.
By automating the conversion of such guidelines into learning modules, and by interpreting open-response answers, Wildfire’s AI reduces the cost of training health workers to recall critical information needed in the field. This retention is a key step if global norms and standards are to translate into real impact on people’s health.
If the pilot is successful, Wildfire’s AI will be included in TGLF’s Scholar Approach, a state-of-the-art, evidence-based package of pedagogies to deliver high-quality, multilingual learning. This unique Approach has already been shown not only to enhance competencies but also to foster collaborative implementation of transformative projects that began as course work.
TGLF President Reda Sadki said: “The global community allocates considerable human and financial resources to training. This investment should go into pedagogical innovation to revolutionize health.”
As a Learning Innovation Partner to the Geneva Learning Foundation, our aim is to improve the adoption and application of digital learning toward achievement of the Sustainable Development Goals (SDGs). Three learning modules based on the World Health Organization’s Global Routine Immunization Strategies and Practices (GRISP) guidelines are now available to pilot participants, including alumni of the WHO Scholar Level 1 GRISP certification in routine immunization planning.
Conclusion
World health needs strong guidelines and solid practices in the field. We are delighted to be delivering this training, using AI for social good, delivered on mobile, in a way that is simple to use but results in real retention and recall.

Friday, April 12, 2019

Why ‘learning analytics’? Why ‘Learning Record Stores’?

There’s a ton of learning technologists saying their new strategy is data collection in 'learning record stores' and 'learning analytics'. On the whole, this is admirable but the danger is in spending this time and effort without asking ‘Why?’ Everyone’s talking about analytics but few are talking about the actual analysis to show how this will actually help increase the efficacy of the organisation. Some are switched on and know exactly what they want to explore and implement, others are like those that never throw anything out and just fill up their home with stuff – but not sure why. One problem is that people are shifting from first to sixth gear without doing much in-between. The industry has been stuck with SCORM for so long, along with a few pie charts and histograms, that it has not really developed the mindset or skills to make this analytics leap.
Decision making
In the end this is all about decision making. What decisions are you going to make on the back of insights from your data? Storing data away for future use may not be the best use of it. Perhaps the best use of data is dynamic: to create courses, provide feedback, adapt learning, turn text to speech for podcasts and so on. This is using AI in a precise fashion to solve specific learning problems. The least efficient use of data is storing it in huge pots, boiling it up and hoping that something, as yet undefined, emerges.
Visualisation
This is often mentioned and is necessary, but visualisation, in itself, means little. One visualises data for a purpose – in order to make a decision. It is not an end in itself, and it often masquerades as doing something useful when all it is actually doing is acting as a cul-de-sac.
Correlations with business data
Learning departments need to align with the business and business outcomes. Looking for correlations between, say, increases in sales and completed training gives us a powerful rationale for future strategies in learning. It need not be just sales: whatever outcomes the organisation has in its strategy need to be supported by learning and development. This may lift us out of the constraints of Kirkpatrick, cutting to the quick, which is business or organisational impact. We could at last free learning from the shackles of course delivery and deliver what the business really wants, and that’s results.
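As a sketch of what this could look like in practice – with entirely invented numbers – the correlation itself is a one-liner once training data and business data sit in the same table:

```python
# Hypothetical example: correlate training completion with sales growth by region.
# All figures are invented for illustration.
import pandas as pd

df = pd.DataFrame({
    "region": ["North", "South", "East", "West", "Central"],
    "completed_training_pct": [82, 45, 67, 90, 55],
    "sales_growth_pct": [6.1, 1.2, 3.8, 7.4, 2.0],
})

# Correlation is not causation, but a strong relationship gives a rationale
# for a controlled follow-up (e.g. a staged rollout of the training).
print(df["completed_training_pct"].corr(df["sales_growth_pct"]))
```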
Business diagnosis
Another model is to harvest data from training in a diagnostic fashion. My friend Chris Brannigan at Caspian Learning does this, using AI. You run sophisticated simulation training, use data analysis to identify insights, then make decisions to change things. To give a real example, they put the employees of a global bank through simulation training on loan risk analysis and found that the problems were not what they had imagined - handing out risky loans. In fact, in certain countries, they were rejecting ‘safe’ loans - being too risk averse. This deep insight into business process and skills weaknesses is invaluable. But you need to run sophisticated training, not clickthrough online learning. It has to expose weaknesses in actual performance.
Improve delivery
One can decide to let the data simply expose weaknesses in the training. This requires a very different mindset, where the whole point is to expose weaknesses in design and delivery. Is it too long? Do people actually remember what they need to know? Does it transfer? Again, much training will be found wanting. To be honest, I am somewhat doubtful about this. Most training is delivered without much in the way of critical analysis, so it is doubtful that this is going to happen any time soon.
Determine how people learn
One could look for insights into ‘how’ people learn. I’m even less convinced by this one. Recording what people just ‘do’ is not that revealing if the courses are clickthrough, demanding little cognitive effort. Just showing them video, animation, text and graphics, no matter how dazzling, is almost irrelevant if they have learnt little. This is a classic GIGO problem (Garbage In, Garbage Out).
Some imagine that insights are buried in there and will magically reveal themselves – think again. If you want insights into how people actually learn, set some time aside and look at the existing research in cognitive science. You’d be far better off looking at what the research actually says and redesigning your online learning around that science. Remember that these scientific findings have already gone through controlled studies, with a methodology that statistically attempts to get clean data on specific variables. This is what science does – it’s more than a match for your own harvested data set.
Data preparation
You may decide to just get good data and make it available to whoever wants to use it, a sort of open data approach to learning. But be careful. Almost all learning data is messy. It contains a ton of stuff that is just ‘messing about’ – window shopping. In addition to the paucity of data from most learning experiences, much of it is in odd data structures and formats, encrypted, spread across different databases, old, or even useless. Even if you do manage to get a useful, clean data set, you have to go through the process of separating ‘Personal’ data from ‘Observed’ data (what you observe people actually doing), ‘Derived’ data (deductions made from that data) and ‘Analysed’ data (the results of applying analysis to it). You may have to keep it anonymised, and the privacy issues may be difficult to manage. Remember, you’ll need real expertise to pull this off, and that is in very short supply.
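To make the Personal/Observed/Derived/Analysed distinction concrete, here is a small, hedged sketch on invented LMS-export rows; the column names and the ‘struggling’ rule are illustrative only:

```python
# Illustrative data preparation: split personal identifiers from observed data,
# anonymise, derive a simple flag, then produce an analysed aggregate.
import hashlib
import pandas as pd

raw = pd.DataFrame({
    "email": ["ann@example.com", "bob@example.com"],   # Personal
    "module": ["GDPR basics", "GDPR basics"],          # Observed
    "minutes_active": [12, 47],                        # Observed
    "quiz_score": [0.9, 0.4],                          # Observed
})

# Anonymised: replace the personal identifier with a one-way hash.
raw["learner_id"] = raw["email"].apply(lambda e: hashlib.sha256(e.encode()).hexdigest()[:10])
observed = raw.drop(columns=["email"])

# Derived: a deduction from observed data (a crude, assumed 'struggling' rule).
observed["struggling"] = (observed["quiz_score"] < 0.5) & (observed["minutes_active"] > 30)

# Analysed: an aggregate a dashboard or analyst would consume.
print(observed.groupby("module")[["quiz_score", "struggling"]].mean())
```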
To use AI/Machine learning
If you are serious about using AI and machine learning (they are not the same thing), then be prepared for some tough times. It is difficult to get things working from unstructured or structured data and you will need a really good training set, of substantial size, to even train your system. And that is just the start, as the data you will be using in implementation may be very different.
Recommendation engines
This is not easy. If you’ve read all of the above carefully, you’ll see how difficult it is to get a recommendation engine to work on data that is less than reliable. You may come to the decision that personal learning plans are actually best constructed using simpler software techniques on spreadsheet-level data.
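To illustrate what ‘spreadsheet-level’ can mean, here is a deliberately simple sketch: recommend each learner the module for their weakest topic. Learners, topics and scores are invented.

```python
# A 'spreadsheet-level' alternative to a recommendation engine: one rule over
# a small score table. All data is invented for illustration.
import pandas as pd

scores = pd.DataFrame({
    "learner": ["ann", "ann", "bob", "bob"],
    "topic": ["GDPR", "Phishing", "GDPR", "Phishing"],
    "score": [0.9, 0.55, 0.4, 0.8],
})

# Pick each learner's weakest topic and suggest revisiting that module.
weakest = scores.loc[scores.groupby("learner")["score"].idxmin()]
for _, row in weakest.iterrows():
    print(f"{row['learner']}: revisit the {row['topic']} module")
```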
Conclusion
The danger is that people get so enamoured with data collection and learning analytics that they forget what they’re actually there to do. Large tech companies use big data, but this is BIG data, not the trivial data sets that learning produces, often on single courses or within single institutions. In fact, Facebook is far more likely to use A/B testing than any fancy recommendation engine when deciding what content works best – a series of quick adaptations tested with real users – but few have the bandwidth and skills to make this happen.
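For completeness, the A/B comparison mentioned above needs nothing exotic either. A rough sketch with invented pass counts, using a simple two-proportion z-test:

```python
# Back-of-envelope A/B test: did course version B beat version A on a retention check?
# Counts are invented; this is a standard two-proportion z-test.
from math import sqrt
from statistics import NormalDist

passed_a, n_a = 112, 400   # version A: learners passing the follow-up check
passed_b, n_b = 141, 400   # version B

p_a, p_b = passed_a / n_a, passed_b / n_b
p_pool = (passed_a + passed_b) / (n_a + n_b)
z = (p_b - p_a) / sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"uplift: {p_b - p_a:.1%}, z = {z:.2f}, p = {p_value:.3f}")
```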

Thursday, April 04, 2019

Why AI in healthcare is working but IBM failed. What can we learn from this for learning?

First up, this is not a hype piece that says ‘Look at this wonderful stuff in medicine where Doctors will soon be replaced by software… the same will happen to teachers’. The lesson to learn from AI in healthcare is that AI is useful, but not in the way many thought.
The dream, pushed primarily by IBM, was that AI would change healthcare forever. IBM saw Watson winning the game show Jeopardy! as a marketing platform for what CEO Virginia Rometty called their ‘moonshot’, a suite of healthcare applications. This all started in 2011, followed by billions in investment and acquisitions. That was the plan but, in practice, things turned out differently.
The mistake was to think that mining big data would produce insights and that this would be the wellspring for progress in everything from research to diagnosis. That didn’t happen, and commercial products are few and far between. The data proved messy and difficult for NLP to use effectively. Trials in hospitals were disappointing. Diagnosis proved tricky. The Superdoctor dream of an all-round physician with access to way more knowledge and data than any human, one that would trounce professionals in the field, has had to be rethought.
Roger Schank has been a relentless critic of IBM’s misuse of the term AI. He is especially critical of their use of terms such as ‘cognitive computing’, an area he pioneered. Roger knows just how difficult these problems are to crack and sees IBM’s claims as marketing lies. He has a point. Another critic is Robert Wachter, chair of the department of medicine at the University of California, in his book The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine’s Computer Age.
So what went wrong? IBM’s Oncology Expert Advisor product used NLP (Natural Language Processing) to summarise patient cases, then searched patient databases to recommend optimal treatments. The problem was that patient data is very messy. It is not, as one might imagine, precise readings from investigative tests; in practice, it is a ton of non-standard notes, written in jargon and shorthand. Strangely enough, Doctors, through their messy processes and communication, proved to be remarkably resilient and useful. They could pick up on clues that IBM’s products failed to spot.
The Oncology Expert Advisor project was also criticised for not really being an AI project. In truth, many AI projects are a huge, often cobbled-together mix of quite traditional software and tricks to get things to work. IBM has proved to be less of a leading-edge AI company than Google, Microsoft or Amazon. They have a sell-first, deliver-later mentality. Rather than focus on some useful bullets aimed at clear and close targets, they hyped their moonshot. Unfortunately, it has yet to take off.
However…
However, and this is an important however, successes with AI in healthcare have come in more specific domains and tasks. There’s a lot to cover in healthcare:
Image analysis
Pathology
Genetic analysis
Patient monitoring
Healthcare administration
Mental health
Surgery
Clinical decision making
In the interpretation of imagery, where data sets are visual and biopsy confirmation is available, success is starting to flow – mammograms, retina scans, pathology slides, X-rays and so on. This is good, classifiable data.
Beyond scanned images, pathology slides are largely examined by eye, but image recognition can do this much faster and will, in time, do this with more accuracy. 
One of IBM’s rare successes has been their FDA-approved app, Sugar IQ, launched in 2018. This delivers personalized patient support for diabetes by monitoring glucose levels and giving recommendations on diet, lifestyle and medication. It is this narrow domain, clear input and defined, personalized outputs for patients that mark it out as a success. It is here that the real leverage of AI can be applied – in personalized patient delivery.
Another success has been genome analysis, which is becoming more common; a precise domain with exact data means that the input is clean. Watson for Genomics lists a patient’s genetic mutations and recommends treatments. This is another case of limited-domain input with sensible and measured outputs that can really help oncologists treat patients.
Another good domain is healthcare administration, which is often antiquated, inefficient and expensive. There are specific tasks within this area that can be tackled using AI: optimising schedules, robots delivering blood and medicines within hospitals, selecting drugs in pharmacies and so on.
In mental health, rather than depending on NLP techniques, such as sentiment analysis, to scan huge amounts of messaging or text data, simple chatbots like Woebot, which delivers a daily dose of personalised CBT (cognitive behavioural therapy), are proving more promising.
Robot surgery has got a lot of hype but in practice it really only exists at the level of laser-eye surgery and hair transplants. These are narrowly defined processes, with not a lot of variation in their execution. 
Where AI has not yet been successful is in the complex area of Doctors’ diagnosis and clinical decision making. This has proved much more difficult to crack, as AI’s need for clean data clashes with the real world of messy delivery.
So most of the low-hanging fruit lies in support functions, helping Doctors and patients, not replacing Doctors.
So what can we learn from this story about AI for learning?
Lessons learnt
There are several lessons here:
avoid the allure of big data solutions
avoid data that is messy
look for very specific problems
look for well-defined domains
AI is rarely enough on its own
focus on learners not teachers
I have been critical of the emphasis in learning on learning analytics (see my piece 'On average humans have one testicle') – the idea that problems will be solved through access to big data and machine learning, giving us insights that lead to diagnosis of students at risk of dropout. This is largely baloney. The data is messy and the promises often ridiculous. Above all, the data is small, so stick to a spreadsheet.
Way forward
Let’s not set off like a cartoon character running off a cliff, finding ourselves looking around in mid-air, then plummeting to earth. Let’s be careful with the hype and big data promises. Let us be honest about how messy our data is and how difficult it is to manage. Let us instead look for clear problems with clear solutions that use areas of AI that we know work – text to speech, speech to text, image recognition, entity analysis, semantic analysis and NLP for dialogue. We’ve been using all of these in WildFire to deliver learning experiences that are firmly based on good cognitive science, while being automated by AI. High-retention learning experiences that are created in minutes, not months.
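As an illustration of one of those workable techniques – entity analysis – here is a hedged sketch of turning a sentence into a recall prompt. This is a generic example using spaCy, not WildFire’s actual code; the sentence and the idea of blanking out entities are for illustration only.

```python
# Illustrative entity analysis: blank out detected entities to create recall prompts.
# Generic sketch with spaCy's small English model; not any product's real pipeline.
import spacy

nlp = spacy.load("en_core_web_sm")

passage = ("Sugar IQ, launched in 2018, monitors glucose levels and gives "
           "people with diabetes recommendations on diet and medication.")

doc = nlp(passage)
for ent in doc.ents:
    # Each detected entity becomes a gap the learner has to recall and type back.
    prompt = passage.replace(ent.text, "_____")
    print(f"[{ent.label_}] {prompt}")
```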
Conclusion
I have no doubts about AI improving the delivery of healthcare. I also have no doubts about its ability to deliver in education and training. What is necessary is a realistic definition of problems and solutions. Let's not be distracted by blue-sky moonshots and instead focus on the grounded problems.

Monday, April 01, 2019

Can you name the three winners of the Nobel Prize for their work on climate change? How online learning is tackling climate change

Climate change is a challenge. Above all, it is an educational challenge. It is not easy to get to grips with the complexities of the issue. I asked some teenagers last week if any of them could name the three winners of the Nobel Prize for their work on climate change. None could name any, and no one even knew that it had been awarded. (Answers at the end.)
I then asked about the Paris targets – again, no actual knowledge. How about something practical, like the inner workings of a wind turbine or the power equation for wind energy? Nothing. To be honest, I had the sketchiest of answers myself. So it was a joy to be doing some online learning for a major European renewable energy company.
We used WildFire to teach:
  1. Wind turbines
  2. European policy 

1. Wind turbines
We see those mighty, white towers and big blades all the time, but how do they work? Inside the ‘nacelle’ (no, I didn’t know that was its name either), the casing at the top of the tower, is a fascinating box of tricks – shafts, a gearbox, generators and controls for ‘yaw’ and ‘pitch’ (know what those are?). Then there’s the wind power equation. Once you understand this, you’ll realise why the biggest variable in the mix is wind speed, as generated power is proportional to air density × blade area × wind speed cubed. That word ‘cubed’ really matters – it means low wind, almost no energy; high wind, tons of energy.
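For the curious, here is that equation as a quick calculation. The standard textbook form is P = ½ρAv³Cp; the ½ factor and the power coefficient Cp (how efficiently the turbine captures the wind’s energy) are standard physics rather than spelled out above, and the rotor size below is invented.

```python
# Rough wind power calculation: P = 0.5 * air_density * blade_area * wind_speed**3 * Cp.
# The 0.5 factor and Cp (assumed 0.40 here) are standard physics assumptions.
import math

def wind_power_watts(wind_speed_ms, rotor_diameter_m, air_density=1.225, cp=0.40):
    blade_area = math.pi * (rotor_diameter_m / 2) ** 2   # swept area of the blades
    return 0.5 * air_density * blade_area * wind_speed_ms ** 3 * cp

# 'Cubed' really matters: doubling the wind speed gives roughly eight times the power.
for v in (5, 10):
    print(f"{v} m/s -> {wind_power_watts(v, rotor_diameter_m=100) / 1e6:.2f} MW")
```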

2. European policy
Policy is action, and it is good to know what Europe is doing and by when. There’s the decarbonisation policy, the Paris targets, electrification, and targets in transport, construction and industry. You’d be surprised at the differences between nations across Europe, and the scale of the problem is immense. Emissions, in particular, are a real challenge. There are the near-term 2030 targets and the 2050 targets. On policies, you need to know about incentives, taxes, subsidies and all sorts of market dynamics.

How did we do this?
The documents were sent to us as email attachments; we fed them into WildFire and produced two kinds of course:
1.    Detailed knowledge
2.    Free text input
The first identifies the key components in a wind turbine, key concepts like yaw and pitch, the variables in the wind power formula and so on. You read the content in sensible chunks, then have to type in your answers, either specific terms or free text, which are then semantically analysed, with feedback given.
In addition, for most concepts, the system automatically provides links out to further explanations. For example, if you don’t know what ‘torque’ is, a link pops up and you get a supplementary explanation (including an animation). This is all generated automatically.
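As a rough illustration of what ‘semantically analysed’ can mean for a free-text answer – not WildFire’s actual pipeline, just the general technique – an embedding model can score a learner’s response against a model answer. This sketch assumes the sentence-transformers library and an arbitrary acceptance threshold:

```python
# Illustrative semantic scoring of a free-text answer against a model answer.
# Not WildFire's real implementation; the model choice and threshold are assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

model_answer = ("Yaw turns the nacelle to face the wind; "
                "pitch rotates the blades to control their angle to the wind.")
learner_answer = "Yaw points the turbine into the wind and pitch angles the blades."

similarity = util.cos_sim(model.encode(model_answer), model.encode(learner_answer)).item()

# Assumed threshold: accept close answers, otherwise give feedback and a retry.
print("accepted" if similarity > 0.6 else "try again", round(similarity, 2))
```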

Fast
We have had literally no face-to-face meetings on this project, as the client is in Europe. The content was super-quick to produce, at low cost. Above all, as the learner has to actually retrieve and write what they think they have learnt, as opposed to simply clicking on stuff, they have to get it into long-term memory. This is high-retrieval, high-retention learning, not clickthrough content.

Curation
There is also the option to add curated content to the end of every small piece of learning, using the curation tool. This allows individual teachers and trainers to customise things to their own ends.

Conclusion
It is great to deliver a project where a social good is the aim. Climate change has its challenges, one of which is understanding the available renewable technology, another the policies and targets. Many countries now see education as a key pillar in their climate change initiatives. This is spot on. But it takes effort. It is one thing to skip school to protest, but this must be more than matched with well-informed knowledge and debate around what it actually takes to change things. The climate is changing and this must be matched with cognitive change – that, in the end, is all we have to prevent catastrophe.

PS
The 2007 Nobel Peace Prize was shared between the IPCC and Al Gore. The 2018 Nobel Prize went to William Nordhaus for his work on the economic modelling of climate change.