Sunday, May 19, 2019

How to turn video into deep learning

With video in learning, one can feel as though one is learning: the medium holds your attention but, as you are hurtled forward, that knowledge disappears off the back. It's like a shooting star; it looks and feels great, but the reality is that it burns up as it enters the atmosphere and rarely ever lands.
Video and learning
We have evolved to learn our first language, walk, recognise faces and so on. This primary knowledge was not learnt in the sense of being schooled or deliberately studied. It is embodied in our evolutionary past and evolved brains. Note that some of this knowledge is patently wrong. Our intuitive views of inertia, forces, astronomy, biology and many other things are mistaken, which is why we, as a species, developed science, maths, literature and… education. This secondary knowledge is not easily learnt – it has to be deliberately studied and takes effort. This includes maths, medicine, the sciences and most forms of intellectual and even practical endeavour. That brings us to the issue of how we learn this stuff.
Working and long-term memory
Let's start with the basics. What you are conscious of is what's in working memory, limited in capacity to 2-4 elements of information at any time. We can literally only hold these conscious thoughts in memory for 20 or so seconds. So our minds move through a learning experience with limited capacity and duration. This is true of all experience, and with video it has some interesting consequences.
We also have a long-term memory, which has no known limits in capacity or duration, although lifespan is its obvious limit. We can transfer thoughts from long-term memory back into working memory quickly and effortlessly. This is why 'knowing' matters. In maths, it is useful to automatically know your times table, to allow working memory to manipulate recalled results more efficiently. We also use existing knowledge to cope with and integrate novel information. The more you know, the easier it is to learn new information. Old, stored, processed information effectively expands working memory through effortless recall from long-term memory.
All of this raises the question of how we can get video-based learning into long-term memory.
Episodic and semantic memory
There is also the distinction, in long-term memory, between episodic and semantic memory. Episodic memories are those experiences such as what you did last night, what you ate for dinner, recalling your experience at a concert. They are, in a sense, like recalling short video sequences (albeit reconstructed). Semantic memory is the recall of facts, numbers, rules and language. They are different types of memory processed in different ways and places by the brain.
When dealing with video in learning, it is important to know what you are targeting. Video appeals far more to episodic than semantic memory – the recall of scenes, events, procedures, places and people doing things.
Element interactivity
When learning meaningful information that has to be processed, for example in multiplication, you have 2-4 registers for the numbers being multiplied. The elements have to be manipulated within working memory and that adds extra load. Element interactivity is always extra load. Learning simple additions or subtractions has low element interactivity, but multiplication is more difficult. Learning vocabulary has low element interactivity; learning how to put the words together into meaningful sentences is more difficult.
In video, element interactivity is very difficult, as the brain is coping with newly presented material and the pace is not under your control. This makes video a difficult medium for learning semantic information, as well as for consolidating learning through cognitive effort and deeper processing.
Video not sufficient
Quite simply, we engage in teaching, whether offline or online, to get things into long-term memory via working memory. You must take this learning theory into account when designing video content. When using video we tend to forget about working memory as a limitation, and the absence of opportunity to move working memory experiences into long-term memory. We also tend to shove in material that is more suited to other media – semantic content such as facts, figures and conceptual manipulations. So video is often too long, shows points too quickly and is packed with inappropriate content.
We can recognise that video has some great learning affordances in that it can capture experiences that one may not be able to experience easily, for real – human interactions, processes, procedures, places and so on. Video can also enhance learning experiences, reveal the internal thoughts of people with voiceover and use techniques that compress, focus in and highlight points that need to be learnt. When done well, it can also have an emotional or affective impact making it good for attitudinal change. The good news is that video has had a century or so to develop a rich grammar of techniques designed to telescope, highlight and get points across. The range of techniques from talking heads to drama, with sophisticated editing techniques and the ability to play with time, people and place, makes it a potent and engaging medium.
The mistake is to see video as a learning medium in itself. Video is a good learning medium if things are paced and reinforced, but it is made greater if the learner has the opportunity to supplement the video experience with some effortful learning.
Illusion of learning
However, the danger is that, on its own, video can encourage the illusion of learning. This phenomenon was uncovered by Bjork and others, showing that learners are easily fooled into thinking that learning experiences have stuck, when they have actually decayed from memory, often within the first 20 minutes. 
Video plus…
How do we make sure that the video learning experience is not lost and forgotten? The evidence is clear: the learner needs some effortful learning – they need to supplement their video learning experience with deeper learning that allows them to move that experience from short-term to long-term memory.
The first is repeated access to the video, so that second and third bites of the cherry are possible. Everything in the psychology of learning tells us that repeated access to content allows us to understand, process and embed learning for retention and later recall. While repeated watching helps consolidate the learning, it is not enough on its own, and it is an inefficient, long-winded learning strategy.
The second is to take notes. This increases retention significantly, by up to 20-30% if done well, as deeper processing comes into play when you write, generate your own words, draw diagrams and so on.
WildFire
The third is far more effective: engaging in a form of deeper, effortful learning that involves retrieval and recall. We have built a tool, WildFire, that does exactly this.
How do you ensure that your learning is not lost and forgotten? Strangely enough, it is by engaging in a learning experience that makes you recall what you think you've learnt. We grab the transcript of the video and put it into an AI engine that creates a supplementary learning experience, where you have to type in what you 'think' you know. This covers simple concepts and numbers, but also open-input sentences, where the AI semantically interprets your answers. This powerful form of retrieval learning not only gives you reinforcement through a second bite of the cherry but also consolidates the learning. Research has shown that recalling back into memory – literally looking away and thinking about what you know – is even more powerful than the original teaching experience or exposure. In addition, the AI creates links out to supplementary material (curates, if you wish) to further consolidate memory through deeper thought and processing.
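To make the retrieval idea concrete, here is a minimal sketch of how free-text answers can be scored semantically rather than by exact string match. This is not WildFire's actual engine (its internals aren't public); it assumes the open-source sentence-transformers library, and the model name and acceptance threshold are illustrative:

```python
# A sketch of semantic answer checking: embed the learner's free-text answer
# and the target sentence, then compare them with cosine similarity.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice

def answer_score(learner_answer: str, target_sentence: str) -> float:
    """Return a rough 0..1 semantic similarity between answer and target."""
    a, b = model.encode([learner_answer, target_sentence])
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Accept answers that are close in meaning, not just identical strings.
score = answer_score("working memory holds about four items",
                     "Working memory is limited to 2-4 elements")
print("accepted" if score > 0.7 else "try again")  # 0.7 is an assumed threshold
```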


Thursday, May 02, 2019

‘Machines Like Me’ by Ian McEwan – a flawed machinage a trois

Ian McEwan's 'Machines Like Me' is a machinage a trois between Charlie, Miranda and Adam. Now Ian can pen a sentence and, at times, writes beautifully, but this is a rather mechanical, predictable and flawed effort.
Robot Fallacy
The plot is remarkably similar to the 2015 threesome-with-a-robot movie Uncanny (which also has an Adam), a film somewhat better than this novel. But the real problem is the Robot Fallacy – the idea that AI is all about robots. It's not. AI, even robotics, is not about creating interesting characters for second-rate novels and films, and is not on a quest to create anthropoid human robots as some sort of undefined companions. Art likes to think it is, as art needs characterisation and physical entities. AI is mostly bits, not atoms; largely invisible and quite difficult to reveal. It is mostly online, but that's difficult for authors and film-makers. That's why the film Her was also superior to this novel – it doesn't fall into the idea that it's all about physical robots. McEwan's robot and plot limit any real depth of analysis, as it's stuck in the Mary Shelley Frankenstein myth, with Turing as the gratuitous Frankenstein. In fact, it is a simple retelling of that tale, yet another in a long line of dystopian views of technology. McEwan compounds the Robot Fallacy by making Adam appear, almost perfectly formed, from nowhere. In reality, AI is a long haul with tons of incremental trials and failures. Adam appears as if created by God. Then there's the confusion of complexity with autonomy. Steven Pinker and others have pointed out the muddle-headed nature of this line of thought in Enlightenment Now. It is easy to avoid autonomy in the engineering of such systems. The novel tries to introduce some pathos at the end but ultimately it's an old tale not very well told.
Oddities and flaws
Putting that aside, there are some real oddities, even clangers, in McEwan’s text. The robot often washes the dishes by hand, as if we have invented a realistic human companion but not a dishwasher. In fact, dishwashers are around, as one pops up, oddly as an analogy, later in the book. The robot can’t drive yet (self-driving cars appeared but didn’t work because of a traffic jam!). Yet self-driving cars make an appearance later in the book.
Counterfactuals are tricky to handle, as they make suspension of disbelief that much harder, and in this case the entire edifice of losing the Falklands war and muddling up political events seems like artifice without any real justification. One counterfactual completely threw me. It's one thing to counterfactually 'extend' Turing's life, another to recalibrate someone's birth date, taking it back a couple of decades, as in the appearance of Demis Hassabis (of DeepMind fame). Hassabis pops up as Turing's brilliant young colleague in 1968, odd as he wasn't born until 1976 (as stated on the final page)!
Then there's an even odder insertion into the novel – Brexit. McEwan is a famous Remain campaigner and, for no reason other than pettifoggery, he drags the topic into the narrative. I have no idea why. It has no causality within the plot and no relevance to the story. It just comes across as an inconsequential and personal gripe.
The yarn has one other fatal flaw – the odd way the child is introduced into the story, via a manufactured incident in the park, a continuing thread that is about as believable as a chocolate robot. I'm not the first to spot the straight-up snobbery in his handling of this plot line – working-class people as hapless thugs.
To be fair there are some interesting ideas, such as the couple choosing personality settings for their robot in a weird form of parenting and this blurring of boundaries is the book’s strength. The robot shines through as being by far the most interesting character in the book, curiously philosophical, and there’s some exploration of loyalty, justice and self.
Conclusion
Did I learn anything about AI from this novel? Unfortunately not. In the end it's a rather mechanical and, at times, petty work. It was difficult to hold suspension of disbelief, as so many points were unbelievable. McEwan seems to have lost his imaginative flair, along with his ability to surprise and transgress. His fictional progeny are more ciphers than people. In truth, AI is only software, and all of this angst around robots murdering us in our sleep is hyperbolic and doesn't really tackle the main issues around automation, or the good that comes out of such technology.


Wednesday, April 24, 2019

The Geneva Learning Foundation is bringing AI-driven training to health workers in 90 countries

Wildfire is helping the Swiss non-profit tackle a wicked problem: while international organizations publish global guidelines, norms and standards, they often lack an effective, scalable mechanism to support countries in turning these into action that leads to impact. What is required is low-cost, quick-conversion, high-retention training.
So The Geneva Learning Foundation (TGLF) has partnered with artificial intelligence (AI) learning pioneer Wildfire to pilot cutting-edge learning technology with over 1,000 immunization professionals in 90 countries, many working at the district level. It is fascinating to see so much feedback come in from so many countries.
By using AI to automate the conversion of such guidelines into learning modules, as well as to interpret open-response answers, Wildfire's AI reduces the cost of training health workers to recall critical information that is needed in the field. This retention is a key step if global norms and standards are to translate into real impact on people's health.
If the pilot is successful, Wildfire’s AI will be included in TGLF’s Scholar Approach, a state-of-the-art, evidence-based package of pedagogies to deliver high-quality, multi-lingual learning. This unique Approach has already been shown to not only enhance competencies but also to foster collaborative implementation of transformative projects that began as course work.
TGLF President Reda Sadki said: “The global community allocates considerable human and financial resources to training. This investment should go into pedagogical innovation to revolutionize health.”
As a Learning Innovation Partner to The Geneva Learning Foundation, our aim is to improve the adoption and application of digital learning toward achievement of the Sustainable Development Goals (SDGs). Three learning modules based on the World Health Organization's Global Routine Immunization Strategies and Practices (GRISP) guidelines are now available to pilot participants, including alumni of the WHO Scholar Level 1 GRISP certification in routine immunization planning.
Conclusion
World health needs strong guidelines and solid practices in the field. We are delighted to be delivering this training, using AI as a social good, deliverable on mobiles and in a way that is simple to use but results in real retention and recall.


Monday, April 22, 2019

Climate change: dematerialisation and online learning

The number of young adults with driving licences has fallen dramatically, so that over half of American 18-year-olds do not have a driving licence. This is partly due to the internet and their alternative investment in mobiles, laptops and connectivity. This is good news. I have never, ever driven a car, having lived in cities such as Edinburgh, London and now Brighton. I've never really been stuck, in terms of getting anywhere. I walk, take trains or public transport more than most. This has meant I've habitually learnt on the move, largely in what Marc Augé calls 'non-places' – trains, planes, automobiles, buses, hotels, airports, stations. I'm never without a laptop, book or mobile device for learning. Whether it's text, podcasts or video, m-learning has become my dominant form of informal learning. This has literally given me years of extra time to read, write and learn in the isolated and comfortable surroundings of buses, trains and planes. I actually look forward to travel, as I know I'll be able to read and think, even write in peace. Being locked away, uninterrupted in a comfortable environment is exactly what I need in terms of attention and reflection. I calculate that over the last 35 years of not driving, I've given myself pretty much a couple of extra degrees.
At the risk of sounding like a hobo, I also have only two pairs of shoes and a minimal amount of clothing. I never buy bottled water and have a lifelong principle – Occam’s Razor – use the minimal amount of entities to reach your given goal. 
Dematerialisation
More importantly, all my life I have worked in technology, which has delivered much to the world in terms of eradicating poverty, mindless labour, disease and hardship. Technology has dematerialised many activities. Mobile comms has replaced atoms with bits. Take music – we no longer have to listen on vinyl in paper sleeves (except for nostalgists) or unrecyclable compact discs, as most music is now streamed and literally has no substance.
Newspaper circulation has plummeted, and my phone delivers an unlimited amount of knowledge and communication that, in the past, would have been infrastructure-heavy and hugely wasteful. Paper production is a massive global polluter of land, water and air. It is the third-largest industrial polluter in North America, the fifth-biggest user of energy, and uses more water per ton of product than any other industry; paper in landfill sites accounts for around 35% of all waste by weight. Recycling helps, but even the de-inking process produces pollutants. Paper production still uses chlorine and chlorine-based chemicals, and dioxins are an almost inevitable part of the process. Water pollution is perhaps the worst, as pulp-mill waste water is oxygen-hungry and contains an array of harmful chemicals. Harmful gases and greenhouse gases are also emitted. On top of this, the web has given us the sharing economy, where bikes, cars, rooms and so on can be reused and shared. It would seem as though we're nearing what Ausubel called 'Peak Stuff'. This is all good, as the best type of energy saving is not using energy at all, or at least minimising the effort and resources needed.
Online learning and climate change
I have spent the whole of my adult life delivering a green product – online learning – which stops the need to travel and reduces the need for carbon-intensive, physical infrastructure.
More recently, we have (with help of my friend Inge) built and delivered a large amount of online education around renewable technologies, targets, policies and solutions. Knowledge is power and with knowledge we have the power to solve this problem. That’s why this project was so important. We used AI to create online education content in minutes not months, from just a few basic documents. Most of the projects we’ve created in WildFire have been without face-to-face meetings and this was no exception. We plan, deliver and project manage online.
Additionally, on climate change, the power of online education is not only its green credentials but also its power to inform. Even active protesters show precious little awareness of what the Paris agreements were, how the technology works and what the science actually says. We need to move beyond the bombast to practical, pragmatic and informed solutions. 
WindPower
First up was content on those huge triffid-like wind turbines. What do you call the thing that sits on top of the tower? (The nacelle.) What lies inside? (Lots of things.) What controls are there in terms of direction and so on? (Yaw, pitch and speed.) What is the wind power equation? (P = ½ρAv³ – this explains exactly why wind speed is the key variable.) We have a huge offshore wind turbine field just off Brighton and I'll never see them in the same light.
Policies
It's all very well demonstrating for zero emissions, but this has to be achieved through practical policies. Decarbonising economies requires the adoption of the right policy levers, and accelerating electrification is top of the list. Renewables are great but not enough; rather than simply producing less, we must minimise energy use per unit of economic output. Rather than ranting against the 'man' we must use technology and market-based instruments. Without a change in mindset this will be difficult, so it's all hands to the political pump to get things moving.
Conclusion
Technology – wind turbines, solar, electric vehicles, battery technology, AI-driven IoT and all sorts of future solutions – will solve this problem, but without cheaper, greener education and an awareness of what we have to do and why, this is unlikely to happen. One contribution is the rapid rise in online learning. This has already led to the disappearance of those large training centres I remember back in the day. Fewer people travel to train. It also means access to learning for anyone with online access.


Friday, April 12, 2019

Why ‘learning analytics’? Why ‘Learning Record Stores’?

There's a ton of learning technologists saying their new strategy is data collection in 'learning record stores' and 'learning analytics'. On the whole, this is admirable, but the danger is in spending this time and effort without asking 'Why?' Everyone's talking about analytics, but few are talking about the actual analysis, to show how this will actually help increase the efficacy of the organisation. Some are switched on and know exactly what they want to explore and implement; others are like those people who never throw anything out and just fill up their home with stuff, without being sure why. One problem is that people are shifting from first to sixth gear without doing much in-between. The industry has been stuck with SCORM for so long, along with a few pie charts and histograms, that it has not really developed the mindset or skills to make this analytics leap.
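For readers who haven't looked inside a learning record store, it essentially holds xAPI statements: simple actor-verb-object records of learning activity. A minimal sketch of posting one is below; the endpoint URL, course ID and credentials are hypothetical:

```python
# A sketch of the kind of record an LRS holds: an xAPI statement.
import requests

statement = {
    "actor": {"mbox": "mailto:learner@example.com", "name": "A Learner"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/completed",
             "display": {"en-US": "completed"}},
    "object": {"id": "https://example.com/courses/compliance-101",
               "definition": {"name": {"en-US": "Compliance 101"}}},
}

requests.post(
    "https://lrs.example.com/xAPI/statements",       # hypothetical LRS endpoint
    json=statement,
    headers={"X-Experience-API-Version": "1.0.3"},   # version header the spec requires
    auth=("key", "secret"),                          # hypothetical credentials
)
```

Collecting millions of these is the easy part; the 'Why?' above is about what analysis you actually run on them afterwards.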
Decision making
In the end this is all about decision making. What decisions are you going to make on the back of insights from your data? Storing data off for future use may not be the best use of data. Perhaps the best use of data is dynamically, to create courses, provide feedback, adapt learning, text to speech for podcasts and so on. This is using AI in a precise fashion to solve specific learning problems. The least efficient use of data is storing it in huge pots, boiling it up and hoping that something, as yet undefined, emerges.
Visualisation
This is often mentioned and is necessary, but visualisation, in itself, means little. One visualises data for a purpose – in order to make a decision. It is not an end in itself, and it often masquerades as doing something useful when all it is actually doing is acting as a cul-de-sac.
Correlations with business data
Learning departments need to align with the business and business outcomes. Looking for correlations between, say, increases in sales and completed training gives us a powerful rationale for future strategies in learning. It need not be just sales; whatever outcomes the organisation has in its strategy need to be supported by learning and development. This may lift us out of the constraints of Kirkpatrick, cutting to the quick, which is business or organisational impact. We could at last free learning from the shackles of course delivery and deliver what the business really wants, and that's results.
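As a hedged sketch of how simple this first step can be, assuming you can export per-employee training and sales figures (the file and column names here are invented):

```python
# Does completed training line up with a business outcome?
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("sales_and_training.csv")   # hypothetical per-employee export
r, p = pearsonr(df["courses_completed"], df["quarterly_sales"])
print(f"correlation r={r:.2f}, p={p:.3f}")

# A near-zero r (or a large p-value) is itself a finding: the training may not
# be moving the outcome the business cares about. And correlation is not causation.
```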
Business diagnosis
Another model is to harvest data from training in a diagnostic fashion. My friend Chris Brannigan at Caspian Learning does this, using AI. You run sophisticated simulation training, use data analysis to identify insights, then make decisions to change things. To give a real example, they put the employees of a global bank through simulation training on loan risk analysis and found that the problems were not what they had imagined - handing out risky loans. In fact, in certain countries, they were rejecting ‘safe’ loans - being too risk averse. This deep insight into business process and skills weaknesses is invaluable. But you need to run sophisticated training, not clickthrough online learning. It has to expose weaknesses in actual performance.
Improve delivery
One can decide to let the data simply expose weaknesses in the training. This requires a very different mindset, where the whole point is to expose weaknesses in design and delivery. Is it too long? Do people actually remember what they need to know? Does it transfer? Again, much training will be found wanting. To be honest, I am somewhat doubtful about this. Most training is delivered without much in the way of critical analysis, so it is doubtful that this is going to happen any time soon.
Determine how people learn
One could look for learning insights into 'how' people learn. I'm even less convinced on this one. Recording what people just 'do' is not that revealing if they are clickthrough courses, without much cognitive effort. Just showing them video, animation, text and graphics, no matter how dazzling, is almost irrelevant if they have learnt little. This is a classic GIGO problem (Garbage In, Garbage Out).
Some imagine that insights are buried in there and that they will magically reveal themselves – think again. If you want insights into how people actually learn, set some time aside and look at the existing research in cognitive science. You'd be far better looking at what the research actually says and redesigning your online learning around that science. Remember that these scientific findings have already gone through a process of controlled studies, with a methodology that statistically attempts to get clean data on specific variables. This is what science does – it's more than a match for your own harvested data set.
Data preparation
You may decide to just get good data and make it available to whoever wants to use it – a sort of open data approach to learning. But be careful. Almost all learning data is messy. It contains a ton of stuff that is just 'messing about' – window shopping. In addition to the paucity of data from most learning experiences, much of it is in odd data structures or odd formats, encrypted, in different databases, old, even useless. Even if you do manage to get a useful, clean data set, you have to go through the process of separating 'Personal' data from 'Observed' (what you observe people actually doing), 'Derived' (deductions made from that data) and 'Analysed' (analysis applied to the data). You may have to keep it 'Anonymised', and the privacy issues may be difficult to manage. Remember, you'll need real expertise to pull this off, and that is in very short supply.
To use AI/Machine learning
If you are serious about using AI and machine learning (they are not the same thing), then be prepared for some tough times. It is difficult to get things working from unstructured or structured data and you will need a really good training set, of substantial size, to even train your system. And that is just the start, as the data you will be using in implementation may be very different.
Recommendation engines
This is not easy. If you've read all of the above carefully, you'll see how difficult it is to get a recommendation engine to work on data that is less than reliable. You may come to the decision that personal learning plans are actually best constructed using simpler software techniques on spreadsheet levels of data.
Conclusion
The danger is that people get so enamoured with data collection and learning analytics that they forget what they're actually there to do. Large tech companies use big data, but this is BIG data, not the trivial data sets that learning produces, often on single courses or within single institutions. In fact, Facebook is far more likely to use A/B testing than any fancy recommendations when deciding what content works best, where a series of quick adaptations can be tested with real users – but few have the bandwidth and skills to make this happen.
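A/B testing of this kind needs little more than a two-proportion test. A minimal sketch, with invented counts, assuming the statsmodels library:

```python
# Compare completion (or pass) rates for two course variants.
from statsmodels.stats.proportion import proportions_ztest

passed = [182, 211]   # learners who passed variant A, variant B (invented)
shown = [400, 400]    # learners shown each variant

stat, p = proportions_ztest(count=passed, nobs=shown)
print(f"z={stat:.2f}, p={p:.3f}")   # a small p suggests a real difference
```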


Thursday, April 04, 2019

Why AI in healthcare is working but IBM failed. What can we learn from this for learning?

First up, this is not a hype piece that says 'Look at this wonderful stuff in medicine where doctors will soon be replaced by software… the same will happen to teachers'. The lessons to learn from AI in healthcare are that AI is useful, but not in the way many thought.
The dream, pushed primarily by IBM, who saw Watson winning the game show Jeopardy as a marketing platform for what IBM CEO Virginia Rometty called their ‘Moonshot’, a suite of healthcare applications, was that AI would change healthcare forever. This all started in 2011, followed by billions in investment and acquisitions. That was the plan but, in practice, things turned out differently.
The mistake was to think that mining big data would produce insights and that this would be the wellspring for progress in everything from research to diagnosis. That didn’t happen and commercial products are few and far between. The data proved messy and difficult for NLP to use effectively. Trials in hospitals were disappointing. Diagnosis proved tricky. The Superdoctor dream of an all-round physician with access to way more knowledge and way more data than any human, that would trounce professionals in the field, has had to be rethought.
Roger Schank has been a relentless critic of IBM's misuse of the term AI. He is especially critical of their use of terms such as 'cognitive computing', an area he pioneered. Roger knows just how difficult these problems are to crack and sees IBM's marketing as lies. He has a point. Another critic is Robert Wachter, chair of the Department of Medicine at the University of California, San Francisco, in his book The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine's Computer Age.
So what went wrong? IBM's Oncology Expert Advisor product used NLP (Natural Language Processing) to summarise patient cases, then searched patient databases to recommend optimal treatments. The problem was that patient data is very messy. It is not, as one would imagine, precise readings from investigative tests; in practice, it is a ton of non-standard notes, written in jargon and shorthand. Strangely enough, doctors, through their messy processes and communication, proved to be remarkably resilient and useful. They could pick up on clues that IBM's products failed to spot.
The Oncology Expert project was also criticised for not really being an AI project. In truth, many AI projects are a huge, cobbled-together mix of often quite traditional software and tricks to get things to work. IBM has proved to be less of a leading-edge AI company than Google, Microsoft or Amazon. They have a sell-first, deliver-later mentality. Rather than focus on some useful bullets aimed at clear and close targets, they hyped their moonshot. Unfortunately, it has yet to take off.
However…
However, and this is an important however, successes with AI in healthcare have come in more specific domains and tasks. There's a lot to cover in healthcare:
Image analysis
Pathology
Genetic analysis
Patient monitoring
Healthcare administration
Mental health
Surgery
Clinical decision making
In the interpretation of imagery, where data sets are visual and biopsy confirmation is available, success is starting to flow – mammograms, retina scans, pathology slides, X-rays and so on. This is good, classifiable data.
Beyond scanned images, pathology slides are largely examined by eye, but image recognition can do this much faster and will, in time, do this with more accuracy. 
One of IBM's rare successes has been their FDA-approved app Sugar.IQ, launched in 2018. This delivers personalized patient support for diabetes by monitoring glucose levels and giving recommendations on diet, lifestyle and medication. It is the narrow domain, clear input and defined, personalized outputs for patients that mark it out as a success. It is here that the real leverage of AI can be applied – in personalized patient delivery.
Another success has been in genome analysis, which is becoming more common – a precise domain where exact data means that the input is clean. Watson for Genomics lists a patient's genetic mutations and recommends treatments. This is another case of limited-domain input with sensible and measured outputs that can really help oncologists treat patients.
Another good domain is healthcare administration, often antiquated, inefficient and expensive. There are specific tasks within this area that can be tackled using AI, optimising schedules, robots delivering blood and medicines within hospitals, selecting drugs in pharmacies and so on.
In mental health, rather than depending on NLP techniques, such as sentiment analysis, to scan huge amounts of messaging or text data, simple chatbots like Woebot, which delivers a daily dose of personalised CBT, are proving more promising.
Robot surgery has got a lot of hype but in practice it really only exists at the level of laser-eye surgery and hair transplants. These are narrowly defined processes, with not a lot of variation in their execution. 
Where AI has not yet been successful is in the complex area of doctors' diagnosis and clinical decision making. This has proved much more difficult to crack, as AI's need for clean data clashes with the real world of messy delivery.
So most of the low-hanging fruit lies in support functions, helping doctors and patients, not replacing doctors.
So what can we learn from this story about AI for learning?
Lessons learnt
There are several lessons here:
avoid the allure of big data solutions
avoid data that is messy
look for very specific problems
look for well-defined domains
AI is rarely enough on its own
focus on learners not teachers
I have been critical of the emphasis, in learning (see my piece 'On average humans have one testicle'), on learning analytics, the idea that problems will be solved through access to big data and machine learning to give us insights that will lead to diagnosis of students at risk of dropout. This is largely baloney. The data is messy and the promises often ridiculous. Above all the data is small, so stick to a spreadsheet.
Way forward
Let's not set off like a cartoon character running off the cliff, finding ourselves looking around in mid-air, then plummeting to earth. Let's be careful with the hype and big data promises. Let us be honest about how messy our data is and how difficult it is to manage that data. Let us instead look for clear problems with clear solutions that use areas of AI that we know work – text to speech, speech to text, image recognition, entity analysis, semantic analysis and NLP for dialogue. We've been using all of these in WildFire to deliver learning experiences that are firmly based on good cognitive science, while being automated by AI: high-retention learning experiences that are created in minutes, not months.
Conclusion
I have no doubts about AI improving the delivery of healthcare. I also have no doubts about its ability to deliver in education and training. What is necessary is a realistic definition of problems and solutions. Let's not be distracted by the blue-sky moonshots and instead focus on the grounded problems.


Monday, April 01, 2019

Can you name the three winners of the Nobel Prize for their work on climate change? How online learning is tackling climate change

Climate change is a challenge. Above all, it is an educational challenge. It is not easy to get to grips with the complexities of the issue. I asked some teenagers last week if any could name the three winners of the Nobel Prize for their work on climate change. None could name any, and no one even knew that it had been awarded. (Answers at end.)
I then asked about the Paris targets – again, no actual knowledge. How about something practical, like the inner workings of a wind turbine or the power equation for wind energy? Nothing. To be honest, I had the sketchiest of answers myself. So it was a joy to be doing some online learning for a major European renewable energy company.
We used WildFire to teach:
  1. Wind turbines
  2. European policy 

1. Wind turbines
We see those mighty, white towers and big blades all the time, but how do they work? Inside the 'nacelle' (no, I didn't know that was its name either), the casing at the top of the tower, is a fascinating box of tricks – shafts, a gearbox, generators and controls for 'yaw' and 'pitch' (know what those are?). Then there's the wind power equation. Once you understand this, you'll realise why the biggest variable in the mix is wind speed, as generated power equals half the air density times the blade-swept area times the wind speed cubed. That word 'cubed' really matters – it means low wind, almost no energy; high wind, tons of energy. (There's a quick numeric illustration after the next section.)

2. European policy
Policy is action, and it is good to know what Europe is doing and by when. There's the decarbonisation policy, the Paris targets, electrification, and targets in transport, construction and industry. You'd be surprised at the differences between nations across Europe, and the scale of the problem is immense. Emissions, in particular, are a real challenge. There are the near-term 2030 targets and the 2050 targets. On policies, you need to know about incentives, taxes, subsidies and all sorts of market dynamics.
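Here is the promised numeric illustration of why 'cubed' matters, using the idealised wind power equation P = ½ρAv³ (a real turbine captures only a fraction of this, capped by the Betz limit of roughly 59%); the rotor size is an illustrative assumption:

```python
# Idealised power in the wind passing through a turbine's rotor disc.
import math

def wind_power_watts(v, rotor_diameter=120.0, air_density=1.225):
    """P = 1/2 * rho * A * v^3, with v in m/s and A the swept area in m^2."""
    area = math.pi * (rotor_diameter / 2) ** 2
    return 0.5 * air_density * area * v ** 3

for v in (3, 6, 12):   # doubling wind speed gives 8x the power
    print(f"{v:>2} m/s -> {wind_power_watts(v) / 1e6:.2f} MW")
```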

How did we do this?
The documents were sent to us as attachments by email; we fed them into WildFire and produced two kinds of course:
1.    Detailed knowledge
2.    Free text input
The first literally identifies the key components in a wind turbine, key concepts like yaw and pitch, the variables in the wind formula and so on. You read the content in sensible chunks, then have to type in your answers, either specific or free text, which are then semantically analysed, with feedback given.
In addition, for most concepts, the system automatically provides links out to further explanations. For example, if you don’t know what ‘torque’ is, a link pops up automatically and you get supplementary explanation (including an animation). This is all generated automatically.
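A simplified sketch of that auto-linking idea (not WildFire's actual implementation, which is not public) is a glossary lookup over each chunk of content:

```python
# Spot known concepts in a chunk of content and attach explanatory links.
import re

GLOSSARY = {   # illustrative term -> explainer mapping
    "torque": "https://en.wikipedia.org/wiki/Torque",
    "nacelle": "https://en.wikipedia.org/wiki/Nacelle",
    "yaw": "https://en.wikipedia.org/wiki/Yaw_(rotation)",
}

def links_for(chunk: str) -> dict:
    """Return explainer links for any glossary terms found in this chunk."""
    return {term: url for term, url in GLOSSARY.items()
            if re.search(rf"\b{re.escape(term)}\b", chunk, re.IGNORECASE)}

print(links_for("The generator converts torque from the main shaft."))
# {'torque': 'https://en.wikipedia.org/wiki/Torque'}
```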

Fast
We have literally had no face-to-face meetings on this project, as the client is in Europe. The content was super-quick to produce, at low cost. Above all, as the learner has to actually retrieve and write what they think they have learnt, as opposed to simply clicking on stuff, they have to get it into long-term memory. This is high-retrieval, high-retention learning, not clickthrough content.

Curation
There is also the option to add curated content to the end of every small piece of learning using the curation tool. This allows individual teachers and trainers to customise things to their own ends.

Conclusion
It is great to deliver a project where a social good is the aim. Climate change has its challenges, one of which is understanding the available renewable technology, another the policies and targets. Many countries now see education as a key pillar in their climate change initiatives. This is spot on. But it takes effort. It is one thing to skip school to protest, but this must be more than matched with good, informed knowledge and debate around what it actually takes to change things. The climate is changing and this must be matched with cognitive change – that, in the end, is all we have to prevent catastrophe.

PS
The 2007 Nobel Peace Prize was shared between the IPCC and Al Gore. The 2018 Nobel Prize in Economics went to William Nordhaus for his work on the economic modelling of climate change.


Thursday, March 28, 2019

Chatbots are being abused – but they’re fighting back!

Folk ask chatbots the weirdest of things. That's fine if your chatbot is, say, a dominatrix (yes, they do exist), but in customer care or learning chatbots it seems surprising. It's not. Users know that chatbots are really pieces of software, so they test them with rude and awkward questions. Swearing, sexual suggestions, requests to do odd things, and just being plain rude are common.
The Cleo chatbot has been asked out on a date over 2,000 times and asked to send naked photographs on over 1,000 occasions. To the latter it sends back a picture of a circuit board. Nice touch, and humour is often the best response. The financial chatbot Plum responds to swearing by saying "I might be a robot but I have digital feelings. Please don't swear." These are sensible responses; as Nass and Reeves found in their studies of how humans treat technology, we expect our tech to be polite.
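The pattern behind those replies is simple to sketch: check for abuse before normal intent handling and deflect in the bot's own voice. The wordlist and canned replies below are illustrative only:

```python
# A minimal polite-deflection layer in front of the normal NLU pipeline.
import random

ABUSE = {"swearword1", "swearword2"}   # stand-ins for a real abuse wordlist
DEFLECTIONS = [
    "I might be a robot but I have digital feelings. Please don't swear.",
    "Let's keep it friendly. Now, how can I help?",
]

def route_to_intents(message: str) -> str:
    return "Thanks, let me look that up."   # placeholder for the real NLU

def handle(message: str) -> str:
    if any(word in message.lower().split() for word in ABUSE):
        return random.choice(DEFLECTIONS)   # deflect with humour, then move on
    return route_to_intents(message)
```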
There are even worse disasters in 'botland'. InspiroBot creates inspiring quotes on nice photographs but often comes up with ridiculous rot. Tay, released by Microsoft, quickly became a sex-crazed Nazi, and BabyQ recommended that young Chinese people should go to the US to realise their dreams. They were, of course, shut down in hours. This is one of the problems with open, machine-learning bots: they have a life of their own. But awkward questions can be useful…
Play
People want to play with chatbots – that's fine. You often find that these questions are asked when someone first uses a chatbot or buys an Alexa. It's a sort of on-boarding process, where the new user gets used to the idea of typing replies or speaking to a machine.
Test limits
The odd questions tend to come at the start, as people stress-test the bot, then drop off dramatically. This is telling and actually quite useful, as users get to see how the bot works. They're sometimes window shopping or simply seeing where the limits lie. One can see how far the semantic interpretation of the natural language interface stretches by trying variants on the same question. Note that you can quickly tell whether it uses something like Google's Dialogflow, as opposed to a fixed, non-natural-language system.
Expectations 
It also helps calibrate and manage expectations. Using a bot is a bit like speaking to a very young child. You ask it a few questions, a bit of back and forth, then get its level. Actually, with some, it's like speaking to a dog, where all you can do is variants on 'fetch'. Once the user realises that the bot is not a general-purpose companion who will answer anything, or a teacher with super-teaching qualities, but has a purpose, usually a specific domain, like finance, health or a specific subject, and that questions beyond this are pointless, they settle down. You get that 'fair enough' response and they get on with the actual business of the bot.
Engagement
These little touches of humour and politeness serve a further purpose, in that they actually engage the user. If you get a witty or clever reply, you have a little more respect for the bot, or at least the designer of the bot. With a little clever scripting, this can make or break user acceptance. Some people will, inevitably, ask your bot to tell a joke – be ready for that one. A knock-knock joke is good, as it involves a short dialogue; a lightbulb joke also works.
Tone
These responses can also be used to set the tone of the bot. Good bots know their audience and set the right tone. It’s pointless being too hip and smart-assed with an older audience who may find it just annoying. Come to think of it, this is also true of younger audiences, who are similarly intolerant of clichés. You can use these responses to be edgy, light-hearted, serious, academic… whatever.
Conclusion
You'll find yourself dead-ending a lot with bots. They're nowhere near as smart as you first think. That's OK. They serve a function and are getting better. But it's good to offer a little freedom – allow people to play, explore, find limits, set expectations and increase engagement.


Saturday, March 16, 2019

AI starts to crack critical thinking... astonishing experiment...

Just eighteen years after 2001 (older readers will know the significance of that date), the AI debater – a 6-foot-high black stele with a woman's voice – used arguments, objections, rebuttals, even jokes, to tussle with her opponent. She lost but, in a way, she also won, as this points towards an interesting breed of critical thinking software. This line of AI has significance in the learning world.
How does it work?
First, she creates an opening speech by searching through millions of opening gambits, removing extraneous text and looking for the highest-probability claims and arguments, based on solid evidence. She then arranges these arguments thematically to give a four-minute speech. In critical conversation, she listens to your response and replies, debating the point step by step. This is where it gets clever, as she has to cope with logical dilemmas and structured debate and argument, drawing on a huge corpus of knowledge, way beyond what any human could read and remember.
Debate
In learning, working through a topic via dialogue, debate and discussion is often useful. Putting your ideas to the test, in an assignment or research task, or when writing an article for publication, would be a useful skill for my Alexa to deliver. It raises the game, as it pushes AI-generated responses beyond knowledge into reasoned argument and checks on evidence from trusted sources. But a debate is not the great win here. There are other more interesting and scalable uses.
Critical thinking
Much of the talk about 21st century skills is rather clichéd, with little in the way of evidence-based debate. The research suggests that these skills, far from being separate 'skills', are largely domain-specific. You don't get far in being a creative, critical and problem-solving thinker in, say, data science, if you don't know a lot about... well... data science. What's interesting about this experiment is the degree to which general debating skills – let's call it stating and defending or attacking a proposition – show how one can untangle, say, critical thinking into its components, as it has to be captured and delivered as software.
There are some key lessons here, as the logic of debate is actually the logic we know from Aristotle onwards, syllogistic and complex, often beyond the capability of the human brain. On the other hand, the heuristics we humans use are a real challenge for AI. But AI is rising to this challenge with all sorts of techniques: many species of supervised and unsupervised machine learning, fuzzy logic to cope (largely) with the imprecision of language and human expression, and a battery of statistical and probability theory to determine certainty.
This, along with GPT-2 (I've written about this here), which creates content, along with techniques embedded in Google Duplex around complex conversational rules, is moving learning AI into new territory, with real dialogue based on structured creation of content, voice and the flow of conversations and debate. Why is this important?
1. Teaching
When it reaches a certain standard, we can see how it starts to behave like a teacher: to engage with a learner in dialogue, interpret the strengths of arguments, debate with the student, even teach and assess critical thinking and problem solving. In a sense it may transform normal teaching, in being able to deliver personalised learning at this level, at scale. The skills of a good teacher or lecturer are to introduce a subject, engage learners, support learners and assess learners. Even if it does not perform the job of an experienced teacher, one can see how it could support teachers.
2. Communication skills
There is also the ability to raise one's game by using it as a foil to improve one's communication skills, as a learner, teacher, presenter, interviewer, coach, therapist or salesperson. Being able to persuade it that you are right, based on evidence, is something we could all benefit from. It strikes me that it could, in time, also identify and help correct various human biases, especially confirmation bias but many others. Daniel Kahneman, in his Thinking, Fast and Slow, makes an excellent point at the very end of the book when he says that these biases are basically 'uneducable'. In other words, they are there, and rather than trying to change them, which is near impossible, we must tame them.
3. Expert
With access to over 300 million articles, it has digested more than any human can read and remember in a lifetime. But this is just for reference. The degree to which it can use this as evidence for argument and advice is interesting. The experiment seems to support the idea that domain knowledge really does matter in critical thinking, something largely ignored in the superficial debate at conferences on 21st century skills. This may untangle this complex area by showing us how true expertise is developed and executed.
4. Practice
The advantage the machine has over humans is the consistent access to, and use of, very large knowledge bases. One can foresee a system that is an expert in a multitude of subjects, able to deliver scalable and sophisticated practice in not only knowledge but higher-order skills across a range of subjects. The development of expertise takes time, application and practice. This offers the opportunity to accelerate expertise. Of course, it also suggests that expertise may be replaced by machines. Read that sentence again, as it has huge consequences.
5. Assessment
If successful, such software could be a sophisticated way to assess learners' work, whether written work, essays or oral, as it puts their arguments to the test. This is the equivalent of a viva or oral exam. With more structured questions, one could see how more sophisticated and objective assessment, free from essay mills and cheating, could be delivered.
6. Decision making
One could also see a use in decision-making, where evidence-based arguments would be at least worth exploring, while humans still make the decisions. I’d love, as a manager, to make a decision based on what has been found to work, rather than guessing or relying on faddish decision making.
Conclusion
This will, eventually, be invaluable as a teaching assistant that never gets tired, inattentive, demotivated or crabby, and delivers quality learning experiences, not just answers to questions. It may also help eliminate human bias in educational processes, making them more meritocratic. Above all, it holds the promise of high-level teaching that is scalable and cheap. At the very least, it may lift the often crass debate around 21st century skills beyond its clichéd presentation as lists in bad PowerPoint presentations at conferences.


Thursday, March 07, 2019

Why learning professionals – managers, project managers, interactive designers, learning experience designers, whatever, should not ignore research

Why do learning professionals in L and D – managers, project managers, interactive designers, learning experience designers and so on – ignore research? It doesn't matter whether you are implementing opportunities for learning such as nudges, social opportunities, workflow learning and performance support, or designing pieces of content or full courses: you will be faced with deciding whether one learning strategy, tactic or approach is better than another. This can't be just about taking a horse to water – you must also make sure it drinks. Imagine a health system where all we do is design hospitals and opportunities for people to do healthy things, or give advice on how to cure yourself, run by people who do not know what the clinical research shows.
Whatever the learning experience, you need to know about learning.
Lawyers know the law, engineers know physics, but learning professionals often know little about learning theory. The consequences of this are, I think, severe. We're sometimes seen as faddish, adopting tactics that are neither researched nor anything more than à la mode. It leads to products that do not deliver learning or learning opportunities – social systems that lie fallow and unused, polished-looking rich media that actually hinders rather than helps learning. It makes the process of learning longer, more expensive and less efficacious. Worse still, much delivery may actually hinder rather than help learning, resulting in wasted effort or cognitive overload. It also makes us look unprofessional, not taken seriously by senior management (and learners).
We have seen the effect of flat-earth theories such as learning styles and whole-word teaching of literacy, and the devastating effect they can have, wasting time in corporate learning and producing kids with poor reading skills. In online learning, the rush to produce media-rich learning experiences often actually harms the learning process, producing non-effortful viewing, click-through online learning and cognitive overload. Leaderboards are launched but have to be abandoned. The refusal to accept evidence that most learning needs deliberate practice, whether through desirable difficulty, retrieval or spaced practice, is still a giant vacuum in the learning game.
So there are several reasons why research can usefully inform our professional lives.

1. Research debunks myths
One of the things research can achieve is to implore us to discard theories and practices which are shown to be wrong-headed, like VAK learning styles or whole-word teaching. These were both very popular theories, still held by large percentages of learning professionals. Yet research has shown them not only to be suspect as theories, but also as having no efficacy. There's a long list of current practice – Myers-Briggs, NLP, emotional intelligence, Gardner's multiple intelligences, Maslow's hierarchy of needs, Dale's cone of learning and so on – that research has debunked. Yet these practices carry on long after the debunking, like those cartoon figures who run off cliffs and are seen still hanging there, looking down…

2. Research informs practice
Whether it's general hypotheses, like 'Does this massive spending on diversity training actually work?', or, at the next level, 'Does this nudge learning delivery strategy based on the idea of hyperbolic discounting actually work better than single-point delivery?', research can help. There are specific learning strategies by learners: 'Does retrieval, spaced or desirable-difficulty practice increase retention?' Even at the very specific level of cognitive science, lots of small hypotheses can be tested, like interleaving. In online learning: 'What is the optimum number of options in a multiple-choice question? Is media rich mind rich?' As some of this research is truly counterintuitive, it also prevents us from being flat-earthers, believing something, like the sun going round the earth, just because it feels right.

3. Research informs product
As technology increasingly helps deliver solutions, it is useful to design technology on the basis of researched findings. If, for example, an AI adaptive system were designed on the basis of learning styles, as opposed to the diagnosis of identified cognitive errors, that would be a mistake. Indeed technology, especially smart technology, often embodies pedagogic approaches, baking in theory so that the practice can be enabled. I have built technology that is based wholly on several principles from cognitive science. I have also seen much technology that does not conform to good evidence-based theory.

4. Research helps us negotiate with stakeholders
Learning is something we all do. We've all gone through years of school, so it is something on which we all have opinions. This means that discussions with stakeholders and budget holders can be difficult. There is often an over-emphasis on how things 'look' and much superficial discussion about graphics, with little discussion about the actual desired outcome – the acquisition of knowledge and skills, and eventual performance. Research gives you the ability to navigate these questions from stakeholders by avoiding anecdote and relying on objective evidence.

5. Research helps us motivate learners
Research has shown that learners are strangely delusional about optimal learning strategies and what they think they have learnt. This really does matter, as what they want is not always what they actually need. Analogously, you, as a teacher or learning designer, are like a doctor advising a patient, who is unlikely to know exactly what they have to do to solve their problem. An evidence-based approach moves us beyond the simplicities of learning styles and too much focus on making things 'look' or 'feel' good. Explaining to a learner that this approach will get them to their goal quicker, pass that exam and perform better can benefit from making the research explicit to the learner.

6. Research helps you select tools
One of the biggest problems in the delivery of online learning is the way the tools shape what the learner sees, experiences and does. Far too many of these tools focus on look and feel, at the expense of cognitive effort, so we get lots of beautiful sliding effects and lots of bits of media. It is, in effect, souped-up PowerPoint. Even worse are the childish games templates that produce mazes and other nonsense that is a million miles away from proper gaming. We have a chance to escape this with smarter software and tools that allow the learner to do what they need to do to learn – open input, writing, doing things. This requires Natural Language Processing and lots of other new tech.

7. Research helps us professionalise within organisations
In navigating organisational politics, structures and budgeting, and in making your internal service appeal to senior management, research can be used to validate your proposals and approaches. HR and L and D have long complained about not being taken seriously enough by the business. Finance has the advantage of a body of established practice, massively influenced by technology and data. This is becoming true of marketing, production, even management, where data on the efficacy of different channels is now the norm. So it should be with learning. Alignment and impact matter. Personalised 'experiences' really do matter in the midst of complex learning.

Conclusion
If all of the above doesn't convince you, then I'd appeal to the simple idea of doing the right thing. It's not that all research is definitive, as science is always on the move, open to future falsification. But, as with research in medicine, physics in material science and engineering, chemistry in organic and inorganic production, and maths in AI, we work with the best that is available. We are duty-bound to do our best on the best available evidence, or we are not really a 'profession'.


Wednesday, March 06, 2019

Summarising learning materials using AI - paucity of data, abundance of stuff

We've been using AI to create online learning for some time now. Our approach is to avoid the use of big data, analytics and prediction software, as there are almost no contexts in which there is nearly enough data to make this work to meet the expectations of the buyer. AI, we believe, is far better at precise goals, such as identifying key learning points, creating links to external content, creating podcasts using text to speech, and the semantic interpretation of free-text input by learners. We've done all of this, but one thing always plagues the use of AI in learning… although there's a paucity of data, there's an abundance of stuff!

Paucity of data, abundance of stuff
Walk into many large organisations and you'll encounter a ton of documents and PowerPoints. They're often over-written and far too long to be useful in an efficient learning process. That doesn't put people off, and in many organisations we still have 50-120 or more PowerPoint slides delivered in a room with a projector, as training. It's not much better in Higher Education, where the one-hour lecture is still the dominant teaching method. The trick is to have a filter that can automate the shortening of all of this stuff.

Summarisation
To summarise or précis documents (text) down in size, to focus on the 'need-to-know' content, there are three processes:
1. Human edit
No matter what AI techniques you use to précis text, it is wise initially to edit out, by hand, the extraneous material that learners will not be expected to learn – for example, supplementary information, disclaimers, who wrote the document and so on. With large, well-structured documents, PDFs and PPTs it is often easy to simply identify the introductions or summaries in each section. These form ready-made summaries of the essential content for learning. Regard this step as simple data cleansing, or hand washing! Now you are ready for further steps with AI...
2. Extractive AI
This technique produces a summary that keeps the sentences intact and only 'extracts' the relevant material. We usually do a quick human edit first, then extract the relevant shortened text, which can then be used in WildFire, or on its own. This is especially useful where the content may already be subject to regulated control (approved by an expert, lawyer or regulator), for example medical content in the pharmaceutical industry, or compliance. (There's a simple sketch of this approach after this list.)
3. Abstractive AI
This produces a summary that is rewritten, using training data and machine learning. Note that this approach needs a large, domain-specific training set – by large we mean as large as possible; some training sets are literally gigabytes of data. That data also has to be cleaned.
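To make the extractive approach (step 2) concrete, here is a minimal frequency-based sketch: score each sentence by its content words and keep the top few, intact and in their original order. Real systems are considerably more sophisticated, and the input file is hypothetical:

```python
# Extractive summarisation: keep the highest-scoring original sentences.
import re
from collections import Counter

STOP = {"the", "a", "an", "of", "to", "and", "in", "is", "it", "for", "on"}

def extractive_summary(text: str, n: int = 3) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOP]
    freq = Counter(words)

    def score(s: str) -> float:
        ws = [w for w in re.findall(r"[a-z']+", s.lower()) if w not in STOP]
        return sum(freq[w] for w in ws) / (len(ws) or 1)

    top = set(sorted(sentences, key=score, reverse=True)[:n])
    return " ".join(s for s in sentences if s in top)   # preserve original order

text = open("policy_document.txt").read()   # hypothetical source document
print(extractive_summary(text))
```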

Conclusion

The end result is automatically shortened documents, from original large documents, PowerPoints, even video transcripts. These we can input into WildFire; rather than delivering intense training on huge pieces of content, you get the essentials. The summaries themselves can be useful in the context of the learning experience. So if you have a ton of documents and PowerPoints, we can shorten them quickly and produce online learning in minutes, not months, at a fraction of the cost of traditional online learning, with very high retention.
