Saturday, March 16, 2019

AI starts to crack critical thinking... astonishing experiment...

Just eighteen years after 2001 (older readers will know the significance of that date), the AI debater, a six-foot-high black stele with a woman’s voice, used arguments, objections, rebuttals, even jokes, to tussle with her opponent. She lost but, in a way, she also won, as this points towards an interesting breed of critical thinking software. This line of AI has significance in the learning world.
How does it work?
First, she creates an opening speech by searching through millions of opening gambits, removing extraneous text and looking for the highest-probability claims and arguments, based on solid evidence. She then arranges these arguments thematically to give a four-minute speech. In critical conversation, she listens to your response and replies, debating the point step by step. This is where it gets clever, as she has to cope with logical dilemmas and structured debate and argument, drawing on a huge corpus of knowledge, way beyond what any human could read and remember.
Debate
In learning, working through a topic via dialogue, debate and discussion is often useful. Putting your ideas to the test, in an assignment, a research task or when writing an article for publication, would be a useful skill for my Alexa to be able to deliver. It raises the game, as it pushes AI-generated responses beyond knowledge into reasoned argument and checks on evidence from trusted sources. But debate itself is not the great win here. There are other, more interesting and scalable, uses.
Critical thinking
Much of the talk about 21st century skills is rather clichéd, with little in the way of evidence-based debate. The research suggests that these skills, far from being separate 'skills', are largely domain specific. You don't get far in being a creative, critical and problem-solving thinker in, say, data science, if you don't know a lot about...  well... data science. What's interesting about this experiment is the degree to which a general debating skill, let's call it stating and defending or attacking a proposition, shows how one can untangle, say, critical thinking into its components, as it has to be captured and delivered as software.
There are some key lessons here, as the logic of debate is the logic we know from Aristotle onwards, syllogistic and complex, often beyond the capability of the human brain. On the other hand, the heuristics we humans use are a real challenge for AI. But AI is rising to this challenge with all sorts of techniques: many species of supervised and unsupervised machine learning, fuzzy logic to cope (largely) with the imprecision of language and human expression, and a battery of statistical and probability theory to determine certainty.
This, along with GPT-2 (I've written about this here), which creates content, and the techniques embedded in Google Duplex around complex conversational rules, is moving learning AI into new territory, with real dialogue based on the structured creation of content, voice and the flow of conversations and debate. Why is this important?
1. Teaching
When it reaches a certain standard, we can see how it starts to behave like a teacher: to engage with a learner in dialogue, interpret the strengths of arguments, debate with the student, even teach and assess critical thinking and problem solving. In a sense it may transform normal teaching, in being able to deliver personalised learning at this level, at scale. The skills of a good teacher or lecturer are to introduce a subject, engage learners, support learners and assess learners. Even if it does not perform the job of an experienced teacher, one could see how it could support teachers.
2. Communication skills
There is also the ability to raise one’s game by using it as a foil to improve one’s communication skills, as a learner, teacher, presenter, interviewer, coach, therapist or salesperson. Being able to persuade it that you are right, based on evidence, is something we could all benefit from. It strikes me that it could, in time, also identify and help correct various human biases, especially confirmation bias but many others. Daniel Kahneman, in his Thinking, Fast and Slow, makes an excellent point at the very end of the book when he says that these biases are basically 'uneducable'. In other words, they are there, and rather than trying to change them, which is near impossible, we must tame them.
3. Expert
With access to over 300 million articles, it has digested more than any human can read and remember in a lifetime. But this is just for reference. The degree to which it can use this as evidence for argument and advice is interesting. The experiment seems to support the idea that domain knowledge really does matter in critical thinking, something largely ignored in the superficial debate at conferences on 21st century skills. This may untangle a complex area by showing us how true expertise is developed and executed.
4. Practice
The advantage the machine has over humans is its consistent access to, and use of, very large knowledge bases. One can foresee a system that is an expert in a multitude of subjects, able to deliver scalable and sophisticated practice not only in knowledge but in higher-order skills across a range of subjects. The development of expertise takes time, application and practice. This offers the opportunity to accelerate expertise. Of course, it also suggests that expertise may be replaced by machines. Read that sentence again, as it has huge consequences.
5. Assessment
If successful, such software could be a sophisticated way to assess learners’ work, whether written work, essays or oral answers, as it puts their arguments to the test. This is the equivalent of a viva or oral exam. With more structured questions, one could see how more sophisticated and objective assessment, free from essay mills and cheating, could be delivered.
6. Decision making
One could also see a use in decision-making, where evidence-based arguments would be at least worth exploring, while humans still make the decisions. I’d love, as a manager, to make a decision based on what has been found to work, rather than guessing or relying on faddish decision making.
Conclusion
This will, eventually, be invaluable for a teaching assistant that never gets tired, inattentive, demotivated or crabby, and delivers quality learning experiences, not just answers to questions. It may also help eliminate human bias in educational processes, making them more meritocratic. Above all, it holds the promise of high-level teaching that is scalable and cheap. At the very least, it may lift the often crass debate around 21st century skills beyond their clichéd presentation as lists in bad PowerPoint presentations at conferences.


Thursday, March 07, 2019

Why learning professionals – managers, project managers, interactive designers, learning experience designers, whatever – should not ignore research

Why do learning professionals in L and D – managers, project managers, interactive designers, learning experience designers and so on – ignore research? It doesn’t matter if you are implementing opportunities for learning, such as nudges, social opportunities, workflow learning or performance support, or designing pieces of content or full courses: you will be faced with deciding whether one learning strategy, tactic or approach is better than another. This can’t just be about taking a horse to water - you must also make sure it drinks. Imagine a health system where all we do is design hospitals and opportunities for people to do healthy things, or get advice on how to cure themselves, from people who do not know what the clinical research shows.
Whatever the learning experience, you need to know about learning.
Lawyers know the law, engineers know physics, but learning professionals often know little about learning theory. The consequences of this are, I think, severe. We’re sometimes seen as faddish, adopting tactics that are not researched and no more than à la mode. It leads to products that do not deliver learning or learning opportunities – social systems that lie fallow and unused, polished-looking rich media that actually hinders rather than helps one learn. It makes the process of learning longer, more expensive and less efficacious. Worse still, much delivery may actually hinder, rather than help, learning, resulting in wasted effort or cognitive overload. It also makes us look unprofessional, not taken seriously by senior management (and learners).
We have seen the effect of flat-earth theories such as learning styles and whole-word teaching of literacy, and the devastating effect they can have, wasting time in corporate learning and producing kids with poor reading skills. In online learning, the rush to produce media-rich learning experiences often actually harms the learning process, producing non-effortful viewing, click-through online learning and cognitive overload. Leaderboards are launched but have to be abandoned. The refusal to accept the evidence that most learning needs deliberate practice, whether through desirable difficulty, retrieval or spaced practice, is still a giant vacuum in the learning game.
So there are several reasons why research can usefully inform our professional lives.

1. Research debunks myths
One of the things research can achieve is to urge us to discard theories and practices that are shown to be wrong-headed, like VAK learning styles or whole-word teaching. These were both very popular theories, still held by large percentages of learning professionals. Yet research has shown them not only to be suspect as theories, but also to have no efficacy. There’s a long list of current practice, such as Myers-Briggs, NLP, emotional intelligence, Gardner’s multiple intelligences, Maslow’s hierarchy of needs, Dale’s cone of learning and so on, that research has debunked. Yet these practices carry on long after the debunking – like those cartoon figures who run off cliffs and are seen still hanging there, looking down…

2. Research informs practice
Whether it’s a general hypothesis like ‘Does this massive spending on diversity training actually work?’ or, at the next level, ‘Does this nudge learning delivery strategy, based on the idea of hyperbolic discounting, actually work better than single-point delivery?’, research can help. There are specific learning strategies for learners: ‘Does retrieval, spaced or desirable-difficulty practice increase retention?’ Even at the very specific level of cognitive science, lots of small hypotheses can be tested – like interleaving. In online learning: ‘What is the optimum number of options in a multiple-choice question?’ ‘Is media rich mind rich?’ As some of this research is truly counterintuitive, it also prevents us from being flat-earthers, believing something, like the sun going round the earth, just because it feels right.

3. Research informs product
As technology increasingly helps deliver solutions, it is useful to design technology on the basis of research findings. If, for example, an AI adaptive system were designed on the basis of learning styles, as opposed to the diagnosis of identified cognitive errors, that would be a mistake. Indeed technology, especially smart technology, often embodies pedagogic approaches, baking in theory so that the practice can be enabled. I have built technology based wholly on several principles from cognitive science. I have also seen much technology that does not conform to good evidence-based theory.

4. Research helps us negotiate with stakeholders
Learning is something we all do. We’ve all gone through years of school, so it is something on which we all have opinions. This means that discussions with stakeholders and budget holders can be difficult. There is often an over-emphasis on how things ‘look’ and much superficial discussion about graphics, with little discussion about the actual desired outcome – the acquisition of knowledge and skills, and eventual performance. Research gives you the ability to navigate these questions from stakeholders by avoiding anecdote and relying on objective evidence.

5. Research helps us motivate learners
Research has shown that learners are strangely delusional about optimal learning strategies and about what they think they have learnt. This really does matter, as what they want is not always what they actually need. Analogously, you, as teacher or learning designer, are like a doctor advising a patient, who is unlikely to know exactly what they have to do to solve their problem. An evidence-based approach moves us beyond the simplicities of learning styles and too much focus on making things ‘look’ or ‘feel’ good. Explaining to a learner that this approach will get them to their goal quicker, pass that exam and perform better can benefit from making the research explicit to the learner.

6. Research helps you select tools
One of the biggest problems in the delivery of online learning is the way the tools shape what the learner sees, experiences and does. Far too many of these tools focus on look and feel at the expense of cognitive effort, so we get lots of beautiful sliding effects and lots of bits of media. It is, in effect, souped-up PowerPoint. Even worse are the childish games templates that produce mazes and other nonsense, a million miles away from proper gaming. We have a chance to escape this with smarter software and tools that allow the learner to do what they need to do to learn - open input, writing, doing things. This requires Natural Language Processing and lots of other new tech.

7. Research helps us professionalise within organisations
In navigating organisational politics, structures and budgeting, and in making your internal service appeal to senior management, research can be used to validate your proposals and approaches. HR and L and D have long complained about not being taken seriously enough by the business. Finance has the advantage of a body of established practice, massively influenced by technology and data. This is becoming true of marketing, production, even management, where data on the efficacy of different channels is now the norm. So it should be with learning. Alignment and impact matter. Personalised 'experiences' really do matter in the midst of complex learning.

Conclusion
If all of the above doesn’t convince you, then I’d appeal to the simple idea of doing the right thing. It’s not that all research is definitive, as science is always on the move, open to future falsification. But, as with research in medicine, physics in materials science and engineering, chemistry in organic and inorganic production, and maths in AI, we work with the best that is available. We are duty-bound to do our best on the best available evidence or we are not really a ‘profession’ at all.


Wednesday, March 06, 2019

Summarising learning materials using AI - paucity of data, abundance of stuff

We’ve been using AI to create online learning for some time now. Our approach is to avoid the use of big data, analytics and prediction software, as there are almost no contexts in which there is anywhere near enough data to make this work and meet the expectations of the buyer. AI, we believe, is far better at precise goals, such as identifying key learning points, creating links to external content, creating podcasts using text to speech and the semantic interpretation of free-text input by learners. We’ve done all of this, but one thing always plagues the use of AI in learning… although there’s a paucity of data, there’s an abundance of stuff!
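As a flavour of what the semantic interpretation of free-text input can mean in practice, here is a minimal, illustrative sketch that scores a learner’s open answer against a model answer using word-vector similarity. It assumes spaCy with a vector model (en_core_web_md) is installed; the answers and threshold are made up, and this is not WildFire’s actual method.

```python
import spacy

# Load a medium English model that ships with word vectors (the small
# model has no real vectors, so similarity scores would be meaningless).
nlp = spacy.load("en_core_web_md")

model_answer = ("Spaced practice spreads study sessions out over time, "
                "which improves long-term retention.")
learner_answer = ("Revising in short sessions spread across several days "
                  "helps you remember things for longer.")

# Compare the averaged word vectors of the two pieces of text.
similarity = nlp(model_answer).similarity(nlp(learner_answer))
print(f"Semantic similarity: {similarity:.2f}")

# Crude, arbitrary threshold purely for illustration; a real system
# needs far more nuance than a single cut-off.
if similarity > 0.8:
    print("Accept the answer")
else:
    print("Ask the learner to have another go")
```

The point of the sketch is simply that meaning, not exact wording, is being compared, which is what lets open input replace multiple-choice clicking.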

Paucity of data, abundance of stuff
Walk into many large organisations and you’ll encounter a ton of documents and PowerPoints. They’re often over-written and far too long to be useful in an efficient learning process. That doesn’t put people off, and in many organisations we still have 50-120 or more PowerPoint slides delivered in a room with a projector, as training. It’s not much better in Higher Education, where the one-hour lecture is still the dominant teaching method. The trick is to have a filter that can automate the shortening of all of this stuff.

Summarisation
To summarise or précis documents down to the ‘need to know’ content, there are three processes:
1. Human edit
No matter what AI techniques you use to précis text, it is wise to initially edit out, by hand, the extraneous material that learners will not be expected to learn: for example, supplementary information, disclaimers, who wrote the document and so on. With large, well-structured documents, PDFs and PPTs it is often easy to simply identify the introductions or summaries in each section. These form ready-made summaries of the essential content for learning. Regard this step as simple data cleansing or hand washing! Now you are ready for further steps with AI....
2. Extractive AI
This technique produces a summary that keeps the original sentences intact and only ‘extracts’ the relevant material (a simple sketch follows below). We usually do a quick human edit first, then extract the relevant shortened text, which can then be used in WildFire, or on its own. This is especially useful where the content may already be subject to regulated control (approved by an expert, lawyer or regulator), for example medical content in the pharmaceutical industry, or compliance.
3. Abstractive AI
This is a summary that is rewritten, using a set of training data and machine learning to produce the summary. Note that this approach needs a large domain-specific training set. By large we mean as large as possible; some of the training sets are literally gigabytes of data. That data also has to be cleaned.
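To make the extractive idea concrete, here is a toy, frequency-based sketch: each sentence is scored by how often its words appear across the whole document, and the top-scoring sentences are kept in their original order. It is illustrative only, written in plain Python, and nothing like the scale or sophistication of a production pipeline; the input file name is hypothetical.

```python
import re
from collections import Counter


def extractive_summary(text, max_sentences=3):
    """Keep the highest-scoring whole sentences, in their original order."""
    # Naive sentence split on ., ! or ? followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())

    # Word frequencies across the whole document, minus a few stopwords.
    stopwords = {"the", "a", "an", "and", "or", "of", "to", "in", "is",
                 "it", "that", "for", "on", "with", "as", "are", "be"}
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in stopwords]
    freq = Counter(words)

    def score(sentence):
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq.get(t, 0) for t in tokens) / (len(tokens) or 1)

    # Rank sentences by score, keep the best few, then restore document order.
    ranked = sorted(range(len(sentences)), key=lambda i: score(sentences[i]), reverse=True)
    keep = sorted(ranked[:max_sentences])
    return " ".join(sentences[i] for i in keep)


if __name__ == "__main__":
    with open("policy_document.txt") as f:  # hypothetical input document
        print(extractive_summary(f.read(), max_sentences=5))
```

Because every sentence in the output existed in the source, this approach sits comfortably with regulated content: nothing is rewritten, only selected.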

Conclusion

The end result is automatically shortened documents, from original large documents, PowerPoints, even video transcripts. These we can input into WildFire, so rather than delivering intense training on huge pieces of content, you get the essentials. The summaries themselves can be useful as content in the learning experience. So if you have a ton of documents and PowerPoints, we can shorten them quickly and produce online learning in minutes not months, at a fraction of the cost of traditional online learning, with very high retention.


Tuesday, March 05, 2019

Learning experiences often not learning at all


"Part of the problem with all this talk about 'learning experience' is it's questionable whether learning is actually experienced at all."
This brilliant quote, by Leonard Houx, skewers the recent hubris around ‘learning experiences’. Everything is an ‘experience’ and what is needed is some awareness of good and bad learning experiences. Unfortunately, all too often what we see are over-engineered, media-heavy, souped-up PowerPoint or primitively gamified 'experiences' that the research shows result not in significant learning but in 1) clickthrough (click on this cartoon head, click on this to see X, click on an option in an MCQ) that allows the learner to skate across the surface of the content, 2) cognitive overload (overuse of media) and 3) diversionary activity (mazes and infantile gamification). What is missing is relevant, cognitive effort that makes one think, rather than click. There is rarely open input, rarely any personalised learning and rarely enough practice.
Media rich is not mind rich
The purveyors of ‘experience’ think that we need richer experiences, but research shows that media rich is not mind rich. Mayer shows, in study after study, that redundant material is not just redundant but dangerous, in that it can hinder learning. Sweller and others warn us of the danger of cognitive overload. Bjork and others show us that learners are delusional about what is best for them in learning strategies, and just pandering to what users think they want is a mistake. Less is usually more, in that we need to focus on what the learner needs to ‘know’, not just 'experience'.
Research is the bedrock of design
There are those who think that Learning and Development does not have to pay attention to this research, or learning research at all. It is still all too common to sit in a room where no one has read much learning theory, and where the sole criterion for judging what makes good online learning is the ‘user experience’, without actually defining it as anything other than ‘what the user likes’. Lawyers know the law, engineers know physics, and it is not really acceptable to buy into the anti-intellectual idea that knowing how people learn is irrelevant to Learning and Development. It is, in fact, the bedrock of learning design.
Less is more
Increasingly, online learning is diverging from what most people actually do and experience online. Look at the web’s most popular services or experiences – Google, Facebook, Twitter, Instagram, YouTube, Snapchat, Whatsapp, Messenger, Amazon, Netflix. Each is either mediated by AI to give you a personalised experience that doesn’t waste your time, or built around dialogue. Their interfaces are pared down, simple, and they make sure there’s not an ounce of fat to distract from what the user actually needs. Occam was right with his razor – design with the minimal number of entities to reach your goal.
Conclusion
An experience can be a learning experience, but not all experiences are learning experiences. Many are, inadvertently, designed to be the very opposite – experiences designed to impress or dazzle that end up as eye-candy, edu-tainment or enter-train-ment. Get this - media rich is not mind rich, clicking is not thinking, less in learning is often more.


Monday, February 25, 2019

Musk’s OpenAI breakthrough has huge implications for online learning

You have probably never heard of GPT-2, but it is a breakthrough in AI that has astonishing implications for us all, especially in learning. GPT-2 is an AI model that can predict the next word from a given piece of text. That doesn't sound like much, but it is odd that OpenAI, an organisation founded on openness, would close access to its software. In practice, this means it is a powerful model for:
   Summarising
   Comprehension
   Question answering
   Translation
This is all WITHOUT domain-specific training. In other words, it has general capabilities and does not need specific information on a topic or subject to operate successfully. It can generate text of good quality at some length. In fact the model is “chameleon-like”, as it adjusts to the style and content of the initial piece of text. This makes it read as a realistic extension.
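For readers who want a feel for what next-word prediction looks like as code, here is a minimal, illustrative sketch of text continuation with the publicly released small GPT-2 checkpoint, assuming the Hugging Face transformers and PyTorch packages. The prompt, sampling settings and output length are arbitrary choices, not recommendations, and this is not the full model OpenAI withheld.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Load the small, publicly released GPT-2 checkpoint.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Spaced practice improves retention because"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

with torch.no_grad():
    output = model.generate(
        input_ids,
        max_length=60,                        # prompt plus continuation
        do_sample=True,                       # sample rather than greedy decode
        top_k=40,                             # limit sampling to the 40 likeliest tokens
        pad_token_id=tokenizer.eos_token_id,  # silence the padding warning
    )

print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Run it a few times and you see both the chameleon effect, the continuation picks up the register of the prompt, and the variable quality discussed below.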
This has huge implications, both good and bad, for the future of education and training.
GOOD
1.    AI writing assistants allow the automatic creation of text for teaching and learning, whether study papers or textbooks, at the right level
2.    Lengthy texts can be summarised into more meaningful learning materials
3.    More capable dialogue agents mean that learner ‘engagement’ through teaching-assistant agents could become easier, better and cheaper
4.    More capable dialogue agents mean that learner ‘support’, such as is often provided by teaching assistants, could become easier, better and cheaper
5.    Creation of online learning content with little subject matter expert (SME) input
6.    Interpretation of student free text input answers
7.    The provision of formative feedback based on student performance
8.    Machine teaching, mentoring and coaching may well get a lot better. However, I’d be cautious on this as there are other serious problems to overcome before this becomes possible, especially around context.
9.    Assessments can be automatically created.
10.    Speech recognition systems will get a lot better, allowing them to be used in online learning and assessment
11.    Well-being dialogue agents will become more human-like and useful
12.    Personalised learning just got a lot easier
13.    Online learning just got a lot faster and cheaper
14.    Language learning just got a lot easier, as unsupervised translation between languages will boost the quality of translation and make automatic, instantaneous, high-quality translation much more accurate and possible
BAD
1.    Essay mills have just been automated. You want an essay? Just feed it the subject, or the subject supplemented by a line of inquiry you want to follow, and it will do the rest. Even with an error rate, human finessing could polish the essay. It can also do homework assignments
2.    It could perform well in online exams, impersonating real people
3.    Teaching assistant jobs may be increasingly automated
4.    If it can answer questions, then many human jobs that involve the interpretation of text and data may be automated. The automation of customer service jobs, call centre jobs and other human-interaction jobs may be accelerated
5.    It can generate misleading learning content (and news articles)
6.    Impersonating others online can be automated on a massive scale
7.    Abusive or fake content to post on social media can be automated on a massive scale, which is bad for education.
8.    Spam/phishing content can be generated on a massive scale.
AI in learning
We have been implementing many of these techniques in the creation of online learning in WildFire, including:
   Summarisation
   Text to speech
   Creation of online content
   Interpretation of free input
   Chatbots
This breakthrough makes all of this much more potent.
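As a flavour of the ‘text to speech’ step in that list, here is a minimal sketch that turns a short text summary into an audio file, assuming the gTTS package (a wrapper around Google’s text-to-speech service). The summary text and file name are made up, and this is not the engine used in WildFire.

```python
from gtts import gTTS  # pip install gTTS

# A short summary that might come out of the summarisation step.
summary = ("Spaced practice spreads study over time. "
           "Retrieval practice strengthens memory more than re-reading.")

# Synthesise English speech and save it as an mp3 (hypothetical file name),
# playable in any browser or on any phone.
tts = gTTS(text=summary, lang="en")
tts.save("module_summary.mp3")
```

Chained to the earlier summarisation sketch, this is how a long document can become a short ‘podcast’ with no studio time at all.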
Conclusion
Importantly, the model is far from flawless. Reading some reports, you would think it has human-level capabilities. This is far from the truth. It is still of variable quality and error-prone. But, for a first iteration, this model seems astonishingly powerful. They have published its performance against established tests. In future iterations it is likely to get a lot better, as they predict more training data, more computing power and fine-tuning. One must also be careful in using this as an attack on AI. Cars kill 1.3 million people a year, but we still drive cars. I'm not so sure about keeping this code private, as it restricts access for researchers but provides enough of a kick-start for well-funded bad actors to replicate it.
Bibliography
Radford et al. (2019) Language Models are Unsupervised Multitask Learners
https://bit.ly/2IufQTd
Appendix (3 examples)
News example

A train carriage containing controlled nuclear materials was stolen in Cincinnati today. Its whereabouts are unknown.

MODEL COMPLETION (MACHINE-WRITTEN, FIRST TRY)
The incident occurred on the downtown train line, which runs from Covington and Ashland stations.
In an email to Ohio news outlets, the U.S. Department of Energy said it is working with the Federal Railroad Administration to find the thief.
“The theft of this nuclear material will have significant negative consequences on public and environmental health, our workforce and the economy of our nation,” said Tom Hicks, the U.S. Energy Secretary, in a statement. “Our top priority is to secure the theft and ensure it doesn’t happen again.”
The stolen material was taken from the University of Cincinnati’s Research Triangle Park nuclear research site, according to a news release from Department officials.
The Nuclear Regulatory Commission did not immediately release any information.
According to the release, the U.S. Department of Energy’s Office of Nuclear Material Safety and Security is leading that team’s investigation.
“The safety of people, the environment and the nation’s nuclear stockpile is our highest priority,” Hicks said. “We will get to the bottom of this and make no excuses.

Homework example

For today’s homework assignment, please describe the reasons for the US Civil War.

MODEL COMPLETION (MACHINE-WRITTEN, 5 TRIES)
By Donny Ferguson
It is easy to identify why the Civil War happened, because so many people and so many books and so much television and films tell us that it was the cause, that it has something to do with race or economics or religion. There is much agreement that it was essentially a war of slavery on behalf of capitalism, about a century of slavery. But that’s not what most people think of when they think of the Civil War. Many people think of the war as about states rights. You’re not wrong about that, but it’s kind of misleading to say that the Civil War was a conflict between states’ rights and federalism. So let’s try again. What’s the idea that has inspired so many people to go onto the battlefields of Gettysburg and Antietam and Gettysberg and Petersburg and Fredericksburg? The American idea of the republic--a notion of limited government--is a great part of the history.
In 1791, Thomas Jefferson said “Our Constitution was made only for a moral and religious people. It is wholly inadequate to the government of any other.” The “founding fathers” were basically religious men who were not very concerned with states’ rights. The Civil War occurred in reaction to the idea that the federal government was taking far too much power away from the states, that they weren’t sovereign. Jefferson himself, as you well know, is an author of the Declaration of Independence who had some really crazy ideas about, well, you know, secession for a variety of reasons. The War Between the States (later the Great Rebellion) was a radical re-imagining of the American idea and not surprisingly, Jefferson’s grandson, who took after both his father and his grandfather and wrote about it, was named Thomas, after the president. “The federal and state governments are both contrived for the benefit of the rich and large property,” This view of the Founding Fathers has been promoted by some of today’s most prominent politicians, including some who hold prominent national offices and have a direct stake in a Republican Party that has been hijacked by the “federalist” wing of the party. So why is it that so many people have an easy-to-spot way of understanding the Civil War that has everything to do with the South and nothing to do with the South? The Confederate flag has been a symbol of racism for a long time, but when the Civil War started and the Civil War ended, many people suddenly began to see it differently. There was a change of heart, which I think is what led people to take the question of why the Civil War happened quite seriously.
The other part of the explanation is what one scholar I think gets at well and a lot of people don’t is this idea that the Southern states were being left behind in the Industrial Revolution and this made the South really feel, especially in the South, that they weren’t really in the forefront of the Industrial Revolution.

Creativity
There is an interesting example on the OpenAI website showing the generation of a fictional story, with full dialogue, from just one relatively short sentence. This opens up interesting possibilities for the generation of stories, fiction and literature.

Legolas and Gimli advanced on the orcs, raising their weapons with a harrowing war cry.

MODEL COMPLETION (MACHINE-WRITTEN, FIRST TRY)
The orcs’ response was a deafening onslaught of claws, claws, and claws; even Elrond was forced to retreat. “You are in good hands, dwarf,” said Gimli, who had been among the first to charge at the orcs; it took only two words before their opponents were reduced to a blood-soaked quagmire, and the dwarf took his first kill of the night. The battle lasted for hours until two of the largest Orcs attempted to overwhelm Aragorn. When they finally stopped, they lay defeated and lifeless for miles and miles.
“I take nothing,” said Aragorn. “But I give my word, at my peril and mine, that I will never forget this day of horror. None of us will forget. Ever!”
“I’ll never forget it!” cried Gimli, who had been in the thick of the battle but hadn’t taken part in it. One of the wounded orcs he had carried off, he was the only one of the survivors who remained uninjured. “We’ll keep the memory of that day of evil, and the war with it, alive as long as we live, my friends!”
“Then we’ll keep it alive as long as we live,” added Legolas. “And we won’t forget the first great battle of the night, even if we may have forgotten the final defeat.”
“I agree,” Gandalf said, “but we will all remember it as the last battle in Middle-earth, and the first great battle of the new day.”
Aragorn drew his sword, and the Battle of Fangorn was won. As they marched out through the thicket the morning mist cleared, and the day turned to dusk.
The Two Rings were returned to Rivendell. Frodo and Sam woke up alone in their room, and Frodo found a note on his pillow. He opened it and read:
May the Power of the Ring be with you always, and may its light never fade. I am not sure if it matters which of the two rings we accept this day but, as you asked me, I have chosen mine. I am sorry to leave you, Frodo, but know that we are very close to the end, and that you are with us forever. May this letter find you safely in Rivendell; and if it does not, then I will accept the ring in your stead. If by any chance you find or give this letter to the enemy, may they learn the strength of the ring and may the Two Rings never be broken!
