Friday, October 19, 2018

"On average, humans have one testicle”... that's why most data analytics projects in education are misleading....

AI will have a huge impact on learning. It changes everything… why we learn, what we learn and how we learn. Of course, AI is not one thing; it is a huge array of mathematical and software techniques. Yet looking at the spend in education and training, people have been drawn to one very narrow area – data analytics. This, I think, is a mistake.
Much of this so-called use of AI is like reaching over the top of your head with your right hand to scratch your left ear. Complex algorithmic and machine learning approaches are likely to be more expensive, and far less reliable and verifiable, than simple measures like using a spreadsheet or making what little data you have available to faculty or managers in a visualised, digestible form. Beyond this, traditional statistics is likely to prove more fruitful. Data analytics has taken on the allure of AI, yet much of it is actually plain old statistics.
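To make the ‘plain old statistics’ point concrete, here is a minimal sketch (the departments and completion rates are invented for illustration): a few lines of standard-library Python answer the kind of question a heavyweight analytics project is often sold to answer.

```python
import statistics

# Invented example: course completion rates per department.
completions = {
    "Sales": [0.91, 0.84, 0.88, 0.95],
    "Engineering": [0.62, 0.55, 0.70, 0.58],
}

# A mean and a spread per department is often all the 'insight' needed.
for dept, rates in completions.items():
    mean = statistics.mean(rates)
    spread = statistics.stdev(rates)
    print(f"{dept}: mean {mean:.2f}, stdev {spread:.2f}")
```

No machine learning required: the gap between departments is visible at a glance, and a spreadsheet would do the same job.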
Data problems
Data is actually a huge problem here. They say that data is the new oil; it is more likely the new snake oil. It is stored in weird ways and places, often old, useless, messy, embarrassing, personal and even secret. It may need to be anonymised, training sets identified and all of it made subject to GDPR. To quote that old malapropism, ‘data is a minefield of information’. It may even be massively misleading, as in the testicle example.
Paucity of data
In the learning world, the data problem is even worse as there is another problem – the paucity of data. Institutions are not gushing wells of data. Universities, for example, don’t even know how many students turn up for lectures. I can tell you that the actual data, when collected, paints a picture of catastrophic absence. Data on students is paltry. The main problem with the use of data in learning is that we have so little of the stuff.
SCORM, which has been around for 20-plus years, all but stopped the collection of data with its focus on completion. This makes most data analytics projects next to useless. The data can best be handled in a spreadsheet. It is certainly not as large, clean and relevant as it needs to be to produce genuine insights.
Other data sources are similarly flawed, as there's little in the way of fine-grained data about actual performance. It's small data sets, often messy, poorly structured and not understood.
Data dumps
Data is often not as clean as you think it is, with much of it in:
·      odd data structures
·      odd formats/encrypted
·      different databases
Just getting a hold of the stuff is difficult.
Defunct data
Then there’s the problem of relevance and utility, as much of it is:
·      old
·      useless
·      messy
In fact, much of it could be deleted. We have so much of the stuff because we simply haven’t known what to do with it, don’t clean it and don’t know how to manage it.
Difficult data
There are also problems around data that can be:
·      embarrassing
·      secret
There may be very good reasons for not opening up historic data, such as emails and internal social communications. It may open up sizeable legal and HR risks for organisations. Think Wikileaks email dumps. Data is not like a barrel of oil, more like a can of worms.
Different types of data
Once cleaned, one can see that there are many different types of data. Unlike oil, it separates not so much into fractions as into different categories. In learning we can have ‘Personal’ data, provided by the person or from actions performed by that person with their full knowledge. This may be gender, age, educational background, needs, stated goals and so on. Then there’s ‘Observed’ data from the actions of the user: their routes, clicks, pauses and choices. You also have ‘Derived’ data, inferred from existing data to create new data, and higher-level ‘Analytic’ data from statistical and probability techniques related to that individual. Data may also be created on the fly.
Just when you thought it was getting clearer, there is also ‘Anonymised’ data, a bit like oil of unknown origin. It is cleaned of any attributes that may relate it to specific individuals. This is rather difficult to achieve, as there are often techniques to reverse engineer attribution to individuals.
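As a hedged illustration of why anonymisation is harder than it looks, this sketch (the salt, field names and student ID are all invented) pseudonymises student IDs with a salted hash. Note the caveat in the comments: this is pseudonymisation, not anonymisation, and anyone holding the salt or a candidate list of IDs can re-link the records.

```python
import hashlib

# Invented example: replace student IDs with salted hashes.
# This is pseudonymisation, NOT true anonymisation: anyone holding
# the salt (or a candidate list of IDs to hash and compare) can
# re-link records to individuals.
SALT = b"rotate-me-regularly"

def pseudonymise(student_id: str) -> str:
    return hashlib.sha256(SALT + student_id.encode()).hexdigest()[:12]

record = {"id": pseudonymise("s1234567"), "grade": "B", "dwell_secs": 184}
print(record)
```

The remaining attributes (grade, behaviour data) can themselves identify someone when combined, which is exactly the reverse-engineering risk described above.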
In AI there’s also ‘Training’ data used for training AI systems and ‘Production’ data which the system actually uses when it is launched in the real world. This is not trivial. Given the problems stated above, it is not easy to get a suitable data set, which is clean and reliable for training. Then, when you launch the service or product the new data may be subject to all sorts of unforeseen problems not uncovered in the training process. This is a rock on which many AI projects founder.
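A rough sketch of that training/production gap, with entirely invented numbers: hold out part of the cleaned data, then compare an incoming production batch against the training distribution as a crude drift check. Real projects use proper statistical tests; this only shows the shape of the problem.

```python
import random

# Invented example: split a cleaned data set into training and
# hold-out sets, then compare a production batch against the
# training distribution as a crude drift check.
random.seed(42)
dataset = [{"score": random.gauss(70, 10)} for _ in range(1000)]

random.shuffle(dataset)
cut = int(len(dataset) * 0.8)
train, holdout = dataset[:cut], dataset[cut:]

train_mean = sum(r["score"] for r in train) / len(train)

def looks_drifted(batch, tolerance=5.0):
    # Flag a batch whose mean sits far from the training mean.
    batch_mean = sum(r["score"] for r in batch) / len(batch)
    return abs(batch_mean - train_mean) > tolerance

# A production batch drawn from a different population should flag.
production = [{"score": random.gauss(55, 10)} for _ in range(200)]
print(looks_drifted(holdout), looks_drifted(production))
```

The hold-out set passes; the shifted production batch fails. This is the ‘unforeseen problems not uncovered in training’ trap in miniature.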
Data preparation
Before entering these data analytics projects, ask yourself some serious questions about ‘data’. Data size by itself is overrated, but size still matters: whether n is in the tens, hundreds, thousands or millions, the Law of Small Numbers applies. Don’t jump until you are clear about how much relevant and useful data you have, where it is, how clean it is and in what databases.
New types of data may be more fruitful than legacy data. In learning this could be dwell time on questions, open input data, wrong answers to questions and so on. More often than not, what you have as data is really a set of proxies for the phenomena you actually care about.
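What those ‘new types of data’ might look like as captured events is sketched below (the field names and values are invented): instead of a single completion flag, record the fine-grained behaviour the paragraph mentions – dwell time, the wrong answers given, open input.

```python
import json
import time

# Invented sketch: capture fine-grained learning events rather than
# a single SCORM-style completion flag.
def make_event(learner, question, answer, correct, dwell_secs):
    return {
        "learner": learner,
        "question": question,
        "answer": answer,          # keep wrong answers: they are data too
        "correct": correct,
        "dwell_secs": dwell_secs,  # dwell time on the question
        "ts": int(time.time()),
    }

event = make_event("learner-42", "q7", "mitochondria", True, 23.5)
print(json.dumps(event))
```

A stream of records like this is the raw material that completion-only tracking throws away.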
Action not analytics
The problem with spending all of your money on diagnosis, especially when the diagnosis points to a limited set of possible causes that were probably already known, is that the money is usually better spent on treatment. Look at improving student support, teaching and learning, not dodgy diagnosis.
In practice, even when those amazing (or not so amazing) insights come through, what do institutions actually do? Do they record lectures because students with English as a foreign language find some lecturers difficult and the psychology of learning screams at us to let students have repeated access to resources? Do they tackle the issue of poor teaching by specific lecturers? Do they question the use of lectures? Do they radically reduce response times on feedback to students? Do they drop the essay as a lazy and monolithic form of assessment? Or do they waffle on about improving the ‘student experience’ where nothing much changes?
I work in AI in learning, have an AI learning company, invest in AI EdTech companies, am on the board of an AI learning company, speak on the subject all over the world and write constantly about it. You’d expect me to be a big fan of data analytics and recommendation engines – I’m not. Not yet. I’d never say never, but so much of this seems like playing around with the problem, rather than facing up to solving the problem. That’s not to say you should ignore its uses – just don’t get sucked into data analytics projects in learning that promise lots but deliver little. Far better to focus on the use of data in adaptive learning or small-scale teaching and learning projects where relatively small amounts of data can be put to good use.
AI is many things, and a far better use of AI in learning is, in my opinion, to improve teaching through engagement, support, personalised and adaptive learning, better feedback, student support, active learning, content creation (WildFire) and assessment. All of these are available right now. They address the REAL problem – teaching and learning.

 Subscribe to RSS

Tuesday, October 16, 2018

Nudge learning

Things move fast in organisations. When Standard Life merged with Aberdeen Asset Management, an agile approach to changing behaviour in the new organisation was implemented: a training intervention that is itself agile and resulted in actual behavioural change. A huge traditional course, whether face-to-face or online, based on a diet of knowledge, would have been counterproductive in this fast-moving, post-merger commercial environment and seen as a bit old-school and non-agile. A series of short, sharp interventions that nudge people into applying agile in their own context and work environment was likely to work better. At least, that’s what Peter Yarrow, Head of Learning, thought – and I think he’s right. It was his brainchild.
His successful project used the 'nudge' technique. Nudge theory recommends small interventions to push people into changing behaviour. Famous examples include the image of a fly in men’s urinals, to improve aim and reduce cleaning costs! Opting out, rather than into organ donation is another. The psychological theory is laid out in the book 'Nudge: Improving Decisions About Health, Wealth and Happiness’ by Thaler and Sunstein. They could well have added ‘Learning’ to the title.
Nudge solution
In learning, Standard Life Aberdeen sent out small, professionally shot videos (on average 1 min 30 secs long), mainly talking heads from leaders and experts in the organisation, via email. In addition to the video, there was a ‘challenge’ to apply the lesson in their own working environment. I like this approach; it is a truly fresh and agile ‘nudge’ intervention. In their case it was general management techniques, but I feel the technique could be applied in response to all sorts of needs. Each starts with a proposition, or problem, followed by a suggested solution and finally, and crucially, a call to action. This is based on techniques also used in web and online design.
Example 1
Video on importance of comms
It’s hard to be a high performing team if colleagues don’t know each other well. Without trust, mutual respect and goodwill, performance will most likely remain middle of the road. Exceptional performance is fuelled by positive working relationships. Take this week’s challenge to get to know your colleagues better.
Example 2
Video on mentoring
Being mentored is a great way to develop and progress. But how do you get started? Begin by identifying someone you trust who has taken a career path you aspire to. Take this week’s challenge to learn more about making mentoring relationships work.
These videos and challenges were sent out by email and usage tracked. The take-up across the organisation surprised the training department and the feedback was very positive. People felt that it was integrated into their natural workflow (the videos were not too long or intrusive) and made more relevant by nudging them towards action as individuals in their specific jobs.
Suggestions for nudge learning
A great start, but rather than batch emails I’d use an algorithm to decide personal needs, take data from usage and get more precise in timing and targeting. This means harvesting more data, which one can do even with internal email systems. People get habituated and stop responding if they get too many emails.
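A minimal sketch of the algorithm I have in mind (names, fields and the seven-day gap are all invented): target nudges by need, and throttle per person so the emails do not become wallpaper.

```python
from datetime import date

# Invented sketch: decide who is due a nudge, throttled per person
# to avoid the habituation problem of too many emails.
MIN_GAP_DAYS = 7

def due_for_nudge(person, today):
    last = person.get("last_nudged")
    if last and (today - last).days < MIN_GAP_DAYS:
        return False  # nudged too recently: habituation risk
    return not person.get("challenge_done", False)

people = [
    {"name": "A", "challenge_done": False, "last_nudged": date(2018, 10, 1)},
    {"name": "B", "challenge_done": False, "last_nudged": date(2018, 10, 15)},
    {"name": "C", "challenge_done": True,  "last_nudged": None},
]
today = date(2018, 10, 16)
to_nudge = [p["name"] for p in people if due_for_nudge(p, today)]
print(to_nudge)
```

Only A qualifies: B was nudged yesterday, and C has already completed the challenge. The real win would be learning each person’s best gap and time of day from response data.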
On the challenges I’d use more of a pure marketing approach, a strong command verb at the start, really concise, with reason and emotional pull. Give your audience a reason why they should take the desired action, maybe a bit of FOMO (Fear Of Missing Out), maybe some compelling numbers. Writing calls to action is both a science and an art and it’s worth being a little creative.
I'd also do slightly more than just state the challenge. I'd get the user to do something there and then to make sure they got the main points in the video (we've done this by grabbing the transcript and getting the user to check they've understood the main points) using AI generated, open-input experiences with WildFire.
None of this is a criticism of Peter’s pioneering project, merely suggestions to make it more potent.
I really liked Peter’s fresh thinking around the ‘nudge’ thing. It has legs and could go in all sorts of directions. It is the combination of proven marketing techniques with learning that makes this approach fly. Few in marketing want to slab out hours and hours of content – they think first audience, second channels and third action. Their whole way of thinking is ‘less is more’. This also happens to be exactly what the psychology of learning tells us about learning experiences. The limits of working memory, cognitive overload, forgetting and the need for transfer mean doing less but doing it better.


Monday, October 15, 2018

Why Blockchain is a busted flush

I recently wrote a piece explaining why I think Blockchain may not be such a good idea in education and training, arguing that it is an opaque solution looking for problems, that is hungry on energy consumption, storage and bandwidth. In discussing this piece with others I uncovered what I think may be a deeper problem – that Blockchain itself may be a busted flush.
Cryptocurrencies based on Blockchain (or variants) have plummeted in value, many by 70-80%. In a blizzard of scams and frauds, ‘trust’, the very thing Blockchain was supposed to capture, has evaporated.
Professor Nouriel Roubini of NYU’s Stern School of Business – formerly Senior Economist for International Affairs in the White House’s Council of Economic Advisers during the Clinton Administration, with spells at the International Monetary Fund, the US Federal Reserve and the World Bank – describes Blockchain as nothing more than a ‘glorified spreadsheet’. He argues that it’s a dangerous neo-liberal idea that tries to disengage from governments, banks and other trusted sources. He may be right, as it has all the hallmarks of the bait-and-switch tactic: you promise democratisation, decentralisation and disintermediation with one hand, then concentrate power in miners and a few techno-folk and businesses with the other. The messiahs who suck you in, when hacked, simply fork to another currency. To allow this to flourish beyond the law and the normal protections of sovereign states may be a big mistake. Far from creating wealth, it may concentrate wealth.
More specifically, why would any organisation want to put their transactions on public, decentralised, peer-to-peer, permissionless ledgers? We have database technology that works, under the supervision and governance of organisations. In the end, people rely on governance, not just coders or Blockchain companies. In practice, another bait-and-switch process is taking place. Private blockchains with guarded permissions are not really blockchain at all but relatively small private databases under the control of the bank or organisation. They are not decentralised.
After giving a talk in 2016, where I saw some potential in Blockchain, I’m not ashamed to say I’ve flipped. It’s a bait-and-switch technology, a busted flush. Almost all of the projects in education will die.


Thursday, October 04, 2018

Blockchain looking more and more like a ball and chain?

I was at a conference of CEOs and analysts yesterday and saw presentation after presentation on Blockchain. Then, when I chatted to those in the audience, I found that not a single one:
  • Knew what it was
  • Actually used it
  • Could think of uses for it
In over 30 years in the tech industry, I’ve never known a concept so opaque cause so much fuss. More worrying was the complete lack of awareness about its:
  • energy consumption
  • storage needs
  • bandwidth needs
  • origins in crypto-currency
I gave a talk on Blockchain in learning two years ago in Berlin, where I was pretty upbeat, outlining its structure and possible uses in learning around trusted ledgers for qualifications and badges. I even got married using Bitcoin on Blockchain! Yet I’ve fallen out of love with this technology. I’ve still to see a single implementation in learning that is worth the candle. In truth, education and training does not want to be decentralised, democratised or disintermediated, as almost everyone in the field works in institutions that will protect themselves to the death. Payment aside – one could well imagine that happening, as the reduction of transaction costs makes commercial sense – Blockchain looks increasingly irrelevant in the learning world.
Despite all the diagrams and explanations about blocks, hashes and distributed databases, Blockchain remains an opaque term, even, I suspect, to those shovelling out grants and research money. It is not easy to grasp and needs some level of technical knowledge to unravel. I remember Jeff Staes, standing with his back to the audience in Berlin, screaming ‘What the fuck IS Blockchain?’. His point was that if we don’t know what it is why are we so sure about its uses in education and training?
Solution looking for problem
Software fails when there is no market demand, and people start looking for problems that are already solved by simpler and cheaper solutions. Many of the e-portfolio and qualifications problems are quite simply satisfied by existing systems, where centralised storage and security are already in place, especially in these times of data regulation (more on that later). Locking these problems down in a complex and opaque distributed database is starting to seem like overkill.
Badges are one of those ideas that I wish had worked but haven’t. Like Blockchain itself, the badges idea has run its course and demand has faltered. Lacking credibility, objectivity, transferability and motivational pull, they simply hit institutional resistance. So badges and other forms of micro-credentialism are, sadly, no longer a platform on which Blockchain solutions can flourish.
Data regulation
Decentralised databases make sense but there are serious problems in terms of data regulations. Data has, historically, been almost universally stored centrally in databases. On public blockchains, however, this data is massively replicated across the entire distributed database. One solution is to cleave off the trusted hardware for the management and privacy of transactions. Management and privacy are big issues here and I’m not convinced that those recommending Blockchain have solutions that will escape the massive potential fines that are enshrined in EU law. This is a legal minefield that may well block Blockchain projects, especially in the learning world. It will be interesting to see how this plays out. For the moment, I’d stay well clear, as the risks are too high.
Energy consumption
Bitcoin, which uses blockchain, has been estimated to use more electricity than the whole of Ireland and that consumption is rising, rapidly. The huge number of hash calculations across the distributed database gobbles up energy. As long as there is a margin in mining for bitcoin, this energy consumption will continue. One bitcoin mining facility in Russia was shut down because it defaulted on paying its vast electricity bill. In these days of sustainable growth and climate change, Blockchain has a BIG problem and it’s getting bigger.
Huge storage
Some see Blockchain as the new internet, in the sense that it will be the new form of co-operative cloud storage. But that is still a pipe dream. First, we need bigger pipes and second, way more storage. Remember that storage prices have fallen dramatically but Blockchain is a famously bloated system and still costs money to run. 
You may imagine that Blockchain is a simple, unitary piece of technology. It is not. The community is full of road-map wars and disagreements between factions. It will be some time before all of this is stable enough for real implementations.
Trust a problem
For a system that promises a trustworthy ledger, Blockchain is entirely dependent on ‘trust’. Yet its origins in Bitcoin and crypto-currency may well be its undoing. Few would deny that Bitcoin, in particular, is soaked in money laundering and other forms of criminal activity. Being secure and unhackable is a technical issue; gaining the trust of institutions and consumers is a psychological issue. Blockchain may never cross that Rubicon.
A chain is no stronger than its weakest link, and although Blockchain is a distributed, unhackable chain, its weak links lie outwith that technical chain – in opacity, complexity, regulation, energy consumption, bandwidth needs, storage needs and trust. Software fails when it is just too damn complex to get your head around and the consequences were not thought through in terms of regulation and other demands on bandwidth, processing power and storage. There is a chance that Blockchain may sneak in as the underlying, secure technology for the entire internet, with individual private keys, but it has to overcome the problems above and overturn the existing system. This, I think, is unlikely. Blockchain is starting to look less like a great solution and more like a ball and chain.


Saturday, September 22, 2018

Learning Designers will have to adapt or die. Here are 10 ways they need to adapt to AI….

Interactive Designers will have to adapt or die. As AI starts to play a major part in the online learning landscape, right across the learning journey, it is now being used for learner engagement, learner support, content creation, assessment and so on. It will eat relentlessly into the traditional skills that have been in play for nearly 35 years. The old, core skillset was writing, media production, interactions and assessment.
In one company, in which I’m a Director, we see a shift towards AI services and products and we’re having to identify individuals with the skills and attitudes to deal with this new demand. This means understanding the new technology (not trivial), learning how to write for chatbots and dealing more with AI-aided design and curation, rather than doing this for themselves. It’s a radical shift.
In another context, using services like WildFire, means not using traditional interactive designers, as the software largely does this job. It identifies the learning points, automatically creates the interactions, finds the curated links and assesses, formatively and summatively. It creates content in minutes not months. This is the way online learning is going. This stuff is here, now.
The gear-shift in skills is interesting and, although still uncertain, here are some suggestions based on my concrete experience of making and observing this shift in three separate companies.
1. Technical understanding
Designers, or whatever they’re called now or in the future, will need to know far more about what the software does, its functionality, strengths and weaknesses. In some large projects we have found that a knowledge of how the NLP works has been an invaluable skill, along with an ability to troubleshoot by diagnosing what the software can, or cannot do. Those with some technical understanding fare better here.
This is not to say that you need to be able to code or have AI or data scientist skills. It does mean that you will have to know, in detail, how the software works. If it uses semantic techniques, make the effort to understand the approach, along with its weaknesses and strengths. With chatbots, it is all too easy to set too high an expectation of performance. You will need to know where these lines are in terms of what you have to do as a designer. Similarly with data analysis. With traditional online learning, the software largely delivers static pages with no real semantic understanding, adaptability or intelligence. AI-created content is very different and has a sort of ‘life of its own’, especially when it uses machine learning. At the very least, get to know what the major areas of AI are, how they work, and feel comfortable with the vocabulary.
2. Writing
Text remains the core medium in online learning. It remains the core medium in online activity generally. We have seen the pendulum swing towards video, graphics and audio but text will remain a strong medium, as we read faster than we listen, it is editable and searchable. That's why much social media and messaging is still text at heart. When I ran a large traditional online learning company I regarded writing as the key skill for IDs. We put people through literacy tests before they started, no matter what qualifications they had. It proved to be a good predictor, as writing is not just about turn of phrase and style, it is really about communications, purpose, order, logic and structure. I was never a fan of ‘storytelling’ as an identifiable skill.
However, the sort of writing one has to do in the new world of AI has more to do with being sensitive to what NLP (Natural Language Processing) can do, and with dialogue. To write for chatbots one must really know what the technology can and cannot do, and also write natural dialogue (actually a rare skill). That’s why the US tech giants hire screenwriters for these tasks. You may also find yourself writing for ‘voice’. For example, WildFire automatically produces podcast audio using text to speech, and that needs to be written in a certain way. Beyond this, coping with synonyms and the vagaries of natural language processing needs an understanding of all sorts of NLP software techniques.
3. Interaction
Hopefully we will see a reduction in formulaic Multiple Choice Question production. MCQs are difficult to write and often flawed. Then there’s the often gratuitously used ‘drag and drop’ and the hideously patronising ‘Let’s see what Philip, Alisha and Sue think of this…’, where you click on a face and get a speech bubble of text. I find that this is the area where most online learning really sucks.
This, I think, will be an area of huge change, as the limited forms of MCQ start to be replaced by open input: words, numbers and short text answers. NLP allows us to interpret this text. We do all three in WildFire with little interactive design (only editing out which ones we want). There is also voice interaction to consider, which we have been implementing, so that the entire learning experience, all navigation and interaction, is voice-driven. This needs some extra skills in terms of managing expectations and dealing with the vagaries of speech recognition software. Personalisation may also have to be considered. I’m an investor and Director in CogBooks, one of the world’s most sophisticated adaptive learning companies; believe me, this software is sophisticated, and the sequencing has to be handled by software, not designers – that’s what makes personalisation at scale possible. With chatbots, where we’ve been designing everything from invisible LMS bots to tutorbots, the whole form of interaction changes and you need to see how they fit into the workflow through existing collaborative tools such as Slack or Microsoft Teams. There are a lot of opportunities out there.
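To show what interpreting open input can involve, here is a deliberately naive sketch (not how WildFire or any named product actually works; the targets and threshold are invented): accept an answer if it is close enough to a target word or one of its listed synonyms.

```python
from difflib import SequenceMatcher

# Naive open-input marking sketch: fuzzy-match the learner's answer
# against a target and its synonyms. Real NLP systems go far beyond
# string similarity, but this shows the basic shape.
def accept(answer, targets, threshold=0.8):
    answer = answer.strip().lower()
    return any(
        SequenceMatcher(None, answer, t.lower()).ratio() >= threshold
        for t in targets
    )

targets = ["mitochondria", "mitochondrion"]
print(accept("Mitochondria", targets))   # exact, case-insensitive
print(accept("mitochondira", targets))   # typo still close enough
print(accept("ribosome", targets))       # wrong answer rejected
```

Even this toy raises the design questions the paragraph points at: how forgiving should the threshold be, and which synonyms count as correct?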
4. Media production
As online learning became trapped in ‘media production’ most of the effort and budget went into the production of graphics (often illustrative and not meaningfully instructive), animation (often overworked) and video (not enough in itself). Media rich is not necessarily mind rich and the research from Mayer and others, showing that the excessive use of media can inhibit learning is often ignored. We will see this change as the balance shifts towards effortful and more efficient learning. There will still be the need for good media production but it will lessen as AI can produce text from audio, create text and dialogue. Video is never enough in learning and needs to be supplemented by other forms of active learning. AI can do this, making video an even stronger medium. Curation strategies are also important. We often produce content that is already there but AI helps automatically link to content or provides tools for curating content. Lastly, a word on design thinking. The danger is in seeing every learning experience as a uniquely designed thing, to be subjected to an expensive design thinking process, when design can be embodied in good interface design, use A/B testing and avoid the trap of seeing learning as all about look and feel. Design matters but learning matters more.
5. Assessment
So many online learning courses have a fairly arbitrary 70-80% pass threshold. The assessments are rarely the result of any serious thought about the actual level of competence needed, and if you don’t assess the other 20-30% it may, in healthcare, for example, kill someone. There are many ways in which assessment will be aided by AI in terms of the push towards 100% competence, adaptive assessment, digital identification and so on. This will be a feature of more adaptive, AI-driven content.
6. Data skills
SCORM is looking like an increasingly stupid limit on online learning. To be honest it was from its inception – I was there. Completion is useful but rarely enough. It is important to supplement SCORM with far more detailed data on user behaviours. But even when data is plentiful, it needs to be turned into information, visualised to make it useful. That is one set of skills that is useful, knowing how to visualise data. Information then has to be turned into knowledge and insights. This is where skills are often lacking. First you have to know the many different types of data in learning, how data sets are cleaned, then the techniques used to extract useful insights, often machine learning. You need to distinguish between data as the new oil and data as the new snake oil.
We take data, clean it, process it, then look for insights – clusters and other statistically significant techniques to find patterns and correlations. For example, do course completions correlate with an increase in sales in those retail outlets that complete the training? Training can then be seen as part of a business process where AI not only creates the learning but does the analysis and that is all in a virtual and virtuous loop that informs and improves the business. It is not that you require deep data scientist skills, but you need to become aware of the possibilities of data production, the danger of GIGO, garbage-in/garbage out and the techniques used in this area.
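The completions-and-sales question above can be sketched with a plain Pearson correlation (the outlet figures here are invented, and correlation is a first-pass check, not proof of causation):

```python
# Invented numbers: does outlet training completion track with a
# change in sales? Pearson correlation as a first-pass check.
completion = [0.35, 0.50, 0.62, 0.80, 0.90]   # completion rate per outlet
sales_lift = [1.2, 2.9, 3.1, 4.8, 5.4]        # % sales change per outlet

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = pearson(completion, sales_lift)
print(f"r = {r:.2f}")
```

A strong r would justify digging deeper (confounders, seasonality, store size); a weak one saves you a much more expensive analytics project.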
7. User testing
In one major project we produced so much content, so quickly, that the clients had trouble keeping up on quality control at their end. You will find that the QA process is very different, with quick access to the actual content, allowing for immediate testing. In fact, AI tends to produce fewer mistakes in my experience, as there is less human input, always a source of spelling, punctuation and other errors. I used to ask graphic artists to always cut and paste text, as retyping was a source of endless QA problems. The advantage of using AI-generated content is that all sides can screen share to solve residual problems on the actual content seen by the learner. We completed one large project without a single face-to-face meeting. This quick production also opens up the possibility of A/B testing with real learners – we have seen this used with gamification content, with surprising results.
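A hedged sketch of what an A/B readout might look like (the pass counts are invented): a two-proportion z-test on two course variants tells you whether variant B’s higher pass rate is more than chance would explain.

```python
import math

# Invented A/B test: did variant B's pass rate beat variant A's by
# more than chance? Two-proportion z-test, |z| > 1.96 ~ significant
# at the 5% level.
def two_proportion_z(passed_a, n_a, passed_b, n_b):
    p_a, p_b = passed_a / n_a, passed_b / n_b
    pooled = (passed_a + passed_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

z = two_proportion_z(passed_a=140, n_a=200, passed_b=168, n_b=200)
print(f"z = {z:.2f}")
```

With these made-up numbers the difference (70% vs 84%) comfortably clears the threshold; with small cohorts it often will not, which is exactly why fast, cheap content production makes experimentation viable.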
8. Learning theory
In my experience, few interactive designers can name many researchers or identify key pieces of research on, let’s say, the optimal number of options in an MCQ (answer at foot of this article), retrieval practice, length of video, effects of redundancy, spaced-practice theory, or even the rudiments of how memory works (episodic v semantic). This is elementary stuff but it is rarely taken seriously.
With the implementation of AI, the AI has to embody good pedagogic practice. This is interesting, as we can build good, well-researched, learning practice into the software. This is what we have been doing in WildFire, where effortful learning, open input, retrieval and spaced practice are baked into the software. Hopefully, this will drive online learning away from long-winded projects that take months to complete, towards production that takes minutes not months and learning experiences that focus on learning not appearance.
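As one example of baking researched practice into software, here is a minimal spaced-practice scheduler (not WildFire’s or any product’s actual algorithm; the intervals are invented): expanding review gaps, reset on a failed retrieval.

```python
from datetime import date, timedelta

# Minimal spaced-practice sketch: expanding review intervals,
# reset to the shortest interval when retrieval fails.
INTERVALS = [1, 3, 7, 14, 30]   # days between reviews

def next_review(stage, recalled, today):
    stage = min(stage + 1, len(INTERVALS) - 1) if recalled else 0
    return stage, today + timedelta(days=INTERVALS[stage])

today = date(2018, 10, 15)
stage, due = next_review(stage=0, recalled=True, today=today)
print(stage, due)    # recalled: move up to the 3-day interval
stage, due = next_review(stage=stage, recalled=False, today=due)
print(stage, due)    # forgot: back to the 1-day interval
```

The point is that the retrieval and spacing research becomes a dozen lines of policy inside the software, rather than something each designer has to remember to apply.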
9. Communications
Communication with AI developers and data scientists is a challenge. They know a lot about the software but often little about learning and the goals; designers, on the other hand, know a lot about communication, learning and goals. Agile techniques, with a shared whiteboard, are useful. There are formal agile techniques around identifying the user story, extracting features, then coming to agreed tasks. Comms are tougher in this world, so learn to be forgiving.
Then there’s communications with the client and SMEs. This can be particularly difficult, as some of the output is AI generated, and as AI is not remotely human (not conscious or cognitive), it can produce mistakes. You learn to deal with this when you work in this field – overfitting, false positives and so on. But this is often not easy for clients to understand, as they will be used to design documents, scripts and traditional QA techniques. I once had AI automatically produce a link for the word ‘blow’, a technique nurses ask of young patients when they’re using sharps or needles. The AI linked to the Wikipedia page for ‘blow’ – which was cocaine – easily remedied, but odd.
We have also worked to reduce iterations with SMEs, the cause of much of the high cost of online learning. If the AI is identifying learning points and curated content, using already approved documents, PPTs and videos, the need for SME input is lessened. As tools like WildFire produce content very quickly, the clients and SME can test and approve the actual content, not from scripts but in the form of the learning experience itself. This saves a ton of time.
10. Make the leap
AI is here. Few would argue against the claim that it will change the very nature of employment, and therefore it will change what you learn, how you learn and even why you learn. We are, at last, emerging from a 30-year paradigm of media production and multiple-choice questions, in largely flat and unintelligent learning experiences, towards smart, intelligent online learning that behaves more like a good teacher, where you are taught as an individual with a personalised experience, challenged and, rather than endlessly choosing from lists, engage in effortful learning, using dialogue, even voice. As a Learning Designer, Interactive Designer, Project Manager, Producer, whatever, this is the most exciting thing to have happened in the last 30 years of learning.
Most of the Interactive Designers I have known, worked with and hired over the last 30-plus years have been skilled people, sensitive to the needs of learners, but we must always be willing to 'learn', for that is our vocation. To stop learning is to do learning a disservice. So make the leap!


Monday, September 17, 2018

Breakthrough that literally opens up online learning? Using AI for free text input

When teachers ask learners whether they know something, they rarely ask multiple-choice questions. Yet the MCQ remains the staple in online learning, even at the level of scenario-based learning. Open input remains rare, yet there is ample evidence that it is superior in terms of retention and recall. Imagine allowing learners to type, in their own words, what they think they know about something, with AI doing the job of interpreting that input.
Open input
We’ve developed different levels of more natural open input that take online learning forward. The first involves using AI to identify the main concepts and getting learners to enter the text/numbers, rather than choosing from a list. The cognitive advantage is that the learner focuses on recalling the idea into their own mind, an act that has been shown to increase retention and recall. There is even evidence that this type of retrieval has a stronger learning effect than the original act of being taught the concept. These concepts then act as ’cues’ on which learners hang their learning, for recall. We know this works well.
Free text input
But let’s take this a stage further and try more general open input. The learner reads a number of related concepts in text, with graphics, even watching video, and has to type in a longer piece of text, in their own words. This we have also done. This longer form of open input allows the learner to rephrase and generate their thoughts, and the AI software does analysis on this text.
Ideally, one takes the learner through three levels:
1. Read text/interpret graphics/watch video
2. AI generated open-input with cues
3. AI generated open-input of fuller freeform text in your own words
This gives us a learning gradient of increasing difficulty and retrieval. You move from exposure and reflection, to guided effortful retrieval, to full, unaided retrieval. Our approach increases the efficacy of learning in terms of speed of learning, better retrieval and better recall, all generated and executed by AI.
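The first two levels can be illustrated with a toy sketch. This is not WildFire's actual pipeline – the concept-picking heuristic and the passage below are invented for illustration – but it shows the shape of the idea: identify candidate concepts in a passage, then blank them out so the learner must retrieve and type them rather than recognise them from a list.

```python
# Minimal sketch (not the real system): pick "concept" words from a
# passage by a crude word-length heuristic, then blank them out so the
# learner has to type each one back in from memory.
import re

COMMON = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "are",
          "that", "it", "for", "on", "as", "with", "by", "this"}

def pick_concepts(text, n=3):
    """Return the n longest uncommon words as candidate learning points."""
    words = re.findall(r"[A-Za-z]+", text)
    candidates = [w for w in words if w.lower() not in COMMON and len(w) > 4]
    # Longer, rarer words stand in for real NLP concept extraction
    ranked = sorted(set(candidates), key=lambda w: (-len(w), w))
    return ranked[:n]

def make_cloze(text, concepts):
    """Blank each concept so the learner retrieves it rather than recognises it."""
    cloze = text
    for c in concepts:
        cloze = re.sub(rf"\b{re.escape(c)}\b", "_" * len(c), cloze)
    return cloze

passage = "Retrieval practice strengthens memory more than passive rereading."
concepts = pick_concepts(passage)
print(make_cloze(passage, concepts))
```

A real system would use proper NLP to rank concepts by importance, but the learner-facing behaviour – type the missing word, given its context in a sentence – is the same.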
The process of interpreting the generated text, in your own words, copes with synonyms, words close in meaning and different sentence constructions, as it uses the very latest form of AI. It also uses the more formal data from the structured learning. We have also got this working with voice-only input, another breakthrough in learning, as it is a more natural form of expression in practice.
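As an illustration of the general idea only – the actual interpretation is far more sophisticated, and the synonym table here is a made-up stand-in – even a crude sketch can credit answers that use synonyms or different phrasing:

```python
# Toy sketch of synonym-aware answer matching: map synonyms to one
# canonical form, then fuzzy-match the normalised strings with the
# standard library's SequenceMatcher.
from difflib import SequenceMatcher

SYNONYMS = {  # hypothetical domain vocabulary, not a real lexicon
    "retrieval": {"recall", "remembering"},
    "memory": {"retention"},
}

def normalise(word):
    for canonical, alts in SYNONYMS.items():
        if word == canonical or word in alts:
            return canonical
    return word

def score_answer(learner_text, model_answer, threshold=0.6):
    """Crude semantic overlap: normalise synonyms, then fuzzy-match."""
    a = " ".join(normalise(w) for w in learner_text.lower().split())
    b = " ".join(normalise(w) for w in model_answer.lower().split())
    return SequenceMatcher(None, a, b).ratio() >= threshold

print(score_answer("recall strengthens retention",
                   "retrieval strengthens memory"))  # True: synonym-aware match
```

Modern NLP replaces the hand-made synonym table with learned word and sentence representations, but the design goal is the same: credit meaning, not exact wording.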
The opportunities for chatbots are also immense. 
If you work in corporate learning and want to know more, please contact us at WildFire and we can show you this in action.
Evidence for this approach
Much advice and most practice from educational institutions – re-reading, highlighting and underlining – is wasteful. In fact, these traditional techniques can be dangerous, as they give the illusion of mastery. Indeed, learners who use reading and re-reading show overconfidence in their mastery, compared to learners who take advantage of effortful learning.
Yet significant progress has been made in cognitive science research to identify more potent strategies for learning. The first strategy, mentioned as far back as Aristotle, and later by Francis Bacon and William James, is ‘effortful’ learning. It is what the learner does that matters. 
Simply reading, listening or watching, even repeating these experiences, is not enough. The learning is in the doing. The learner must be pushed to make the effort to retrieve their learning to make it stick in long-term memory.
Active retrieval
‘Active retrieval’ is the most powerful learning strategy, even more powerful than the original learning experience. The first solid research on retrieval was by Gates (1917), who tested children aged 8-16 on short biographies. Some simply re-read the material several times; others were told to look up and silently recite what they had read. The latter, who actively retrieved knowledge, showed better recall. Spitzer (1939) had over 3,000 11-12 year olds read 600-word articles, then tested them at intervals over two months. The greater the gap between testing (retrieval) and the original exposure or test, the greater the forgetting. The tests themselves seemed to halt forgetting. Tulving (1967) took this further with lists of 36 words, with repeated testing and retrieval. The retrieval led to as much learning as the original act of studying. This shifted the focus away from testing as mere assessment towards testing as retrieval, an act of learning in itself. Roediger et al. (2011) did a study on text material covering Egypt, Mesopotamia, India and China, in the context of real classes in a real school, a middle school in Columbia, Illinois. Retrieval tests, only a few minutes long, produced a full grade-level increase on the material that had been subject to retrieval. McDaniel (2011) did a further study on science subjects, with 16 year olds, on genetics, evolution and anatomy. Students who used retrieval quizzes scored 92% (A-) compared to 79% for those who did not. More than this, the effect of retrieval lasted longer when the students were tested eight months later. So we design learning as a retrieval experience, largely using open input, where you have to pull things from your memory and make a real effort to type in the missing words, given their context in a sentence.
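One common way this research gets operationalised in software is an expanding retrieval schedule, where each test is pushed further out from the last. The intervals below are illustrative assumptions, not figures from Spitzer or Roediger:

```python
# A hedged sketch of an expanding spaced-retrieval schedule: each
# successful test pushes the next one further into the future.
from datetime import date, timedelta

INTERVALS = [1, 3, 7, 21, 60]  # days between tests -- assumed, illustrative values

def schedule(start, intervals=INTERVALS):
    """Return the dates on which the learner should be re-tested."""
    when, out = start, []
    for gap in intervals:
        when = when + timedelta(days=gap)
        out.append(when)
    return out

for d in schedule(date(2018, 9, 17)):
    print(d.isoformat())
```

The point of the expansion is exactly what Spitzer observed: tests halt forgetting, so each retrieval buys a longer safe gap before the next one.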
Open input
Most online learning relies heavily on multiple-choice questions, which have become the staple of much e-learning content. These have been shown to be effective, as almost any type of test item is effective to a degree, but they have been shown to be less effective than open response, as they test recognition from a list, not whether the answer is actually known.
Duchastel and Nungester (1982) found that multiple-choice tests improve your performance on recognition in subsequent multiple-choice tests, and open input improves performance on recall from memory. This is called the ‘test practice effect’. Kang et al. (2007) showed, with 48 undergraduates reading academic journal-quality material, that open input is superior to multiple-choice (recognition) tasks. Multiple-choice testing had an effect similar to that of re-reading, whereas open input resulted in more effective student learning. McDaniel et al. (2007) repeated this experiment in a real course with 35 students enrolled in a web-based Brain and Behavior course at the University of New Mexico. The open-input quizzes produced more robust benefits than multiple-choice quizzes. ‘Desirable difficulties’ is a concept coined by Elizabeth and Robert Bjork to describe the desirability of creating learning experiences that trigger effort, deeper processing, encoding and retrieval, to enhance learning. The Bjorks have researched this phenomenon in detail to show that effortful retrieval and recall is desirable in learning, as it is the effort taken in retrieval that reinforces and consolidates that learning. A multiple-choice question is a test of recognition from a list; it does not elicit full recall from memory. Studies comparing multiple-choice with open retrieval show that when more effort is demanded of students, they have better retention. As open response takes cognitive effort, the very act of recalling knowledge also reinforces that knowledge in memory. Active recall develops and strengthens memory, and improves the process of recall in ways that passive consumption – reading, listening and watching – does not.
Design implications
Meaning matters, so we rely primarily on reading and open response, where meaningful recall is stimulated. Interestingly, even when the answer is not known, the act of trying to answer is itself a powerful form of learning – a stronger reinforcer, indeed, than the original exposure.
So, the deliberate choice of open-response questions, where the user types in the words, then more substantial open input, is a deliberate design strategy to take advantage of known AI and learning techniques to increase recall and retention. Note that no learner is subjected to the undesirable difficulty of getting stuck, as letters are revealed one by one, and the answer is given after three attempts. Hints are also possible in the system.
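The reveal behaviour described above can be sketched in a few lines. The exact rules here (leading letters revealed first, full answer at the third attempt) are my assumptions about one reasonable implementation, not the system's actual code:

```python
# Sketch of progressive hinting: each failed attempt exposes one more
# leading letter; after the attempt limit the full answer is shown,
# so no learner gets stuck.
def hint(answer, attempts, max_attempts=3):
    """Return the partially revealed answer for a given attempt count."""
    if attempts >= max_attempts:
        return answer
    revealed = answer[:attempts]
    return revealed + "_" * (len(answer) - attempts)

for n in range(4):
    print(hint("retrieval", n))
```

The design point is the trade-off named above: difficulty should be desirable, triggering effortful retrieval, but never so undesirable that the learner grinds to a halt.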
Bjork, R. A. (1994). Memory and metamemory considerations in the training of human beings. In J. Metcalfe & A. Shimamura (Eds.), Metacognition: Knowing about knowing (pp. 185–205). Cambridge, MA: MIT Press.
Bower, G. H. (1972). Mental imagery in associative learning. In L. W. Gregg (Ed.), Cognition in Learning and Memory. New York: Wiley.
Gardiner, J. M. (1988). Generation and priming effects in word fragment completion. Journal of Experimental Psychology: Learning, Memory and Cognition, 14, 495-501.
Butler, A. C., & Roediger, H. L. (2008). Feedback enhances the positive effects and reduces the negative effects of multiple-choice testing. Memory & Cognition, 36, 604-616.
Duchastel, P. C., & Nungester, R. J. (1982). Testing effects measured with alternate test forms. Journal of Educational Research, 75, 309-313.
Gates, A. I. (1917). Recitation as a factor in memorizing. Archives of Psychology, No. 40, 1-104. 
Hirshman, E. L., & Bjork, R. A. (1988). The generation effect: Support for a two-factor theory. Journal of Experimental Psychology: Learning, Memory, & Cognition, 14, 484–494. 
Jacoby, L. L. (1978). On interpreting the effects of repetition: Solving a problem versus remembering a solution. Journal of Verbal Learning and Verbal Behavior, 17, 649-667.
Kang, S. H. K., McDermott, K. B., & Roediger, H. L., III. (2007). Test format and corrective feedback modulate the effect of testing on long-term retention. European Journal of Cognitive Psychology, 19, 528-558.
McDaniel, M. A., Einstein, G. O., Dunay, P. K., & Cobb, R. (1986). Encoding difficulty and memory: Toward a unifying theory. Journal of Memory and Language, 25, 645-656.
McDaniel, M. A., Agarwal, P. K., Huelser, B. J., McDermott, K. B., & Roediger, H. L. (2011). Test-enhanced learning in a middle school science classroom: The effects of quiz frequency and placement. Journal of Educational Psychology, 103, 399-414.
Miller, G.A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63, 81-97. 
Richland, L. E., Bjork, R. A., Finley, J. R., & Linn, M. C. (2005). Linking cognitive science to education: Generation and interleaving effects. In B. G. Bara, L. Barsalou, & M. Bucciarelli (Eds.), Proceedings of the twenty-seventh annual conference of the cognitive science society. Mahwah, NJ: Erlbaum. 
Roediger, H. L., Agarwal, P. K., McDaniel, M. A., & McDermott, K. B. (2011). Test-enhanced learning in the classroom: Long-term improvements from quizzing. Journal of Experimental Psychology: Applied, 17, 382-395.
Spitzer, H. F. (1939). Studies in retention. Journal of Educational Psychology, 30, 641-656.
Tulving, E. (1967). The effects of presentation and recall of material in free-recall learning. Journal of Verbal Learning and Verbal Behavior, 6, 175-184.


Wednesday, September 12, 2018

Simple researched design feature would save organisations and learners a huge amount of money and time – yet hardly anyone does it

Multiple-choice questions are everywhere, from simple retrieval tests to high-stakes exams. They normally (not always) contain FOUR options: one right answer and three distractors. But who decided this? Is it more convention than researched practice?
Rodriguez (2005) studied 27 research papers on multiple-choice questions and found that the optimal number of options is THREE, not four. Vyas (2008) showed the same was true in medical education. But God is in the detail, and their findings surfaced some interesting phenomena.
Four or five options increase the effort the learner has to make, but this does not increase the reliability of the test.
Four options increase the effort needed to write questions and, as distractors are the most difficult part of a test item to identify and write, they were often very weak.
Reducing the number of options from four to three (surprisingly) increased the reliability of test scores.
Tests are shorter, leaving more time for teaching.
Next step
Of course, multiple-choice is, in itself, weaker than open input, which is why we can go one step further and have open response, either single words or short answers. Natural Language Processing allows AI not only to create such questions automatically (including MCQs if desired) but also to provide accurate student scores, saving organisations time and money. This is surely the way forward in online learning. Beyond this is voiced input, again a step forward, and AI has also allowed this type of input in online learning. If you are interested in learning more about this, see WildFire.
So, you can safely reduce the number of MCQ options from five or four to THREE and not reduce the reliability of the tests. Indeed, there is evidence that it improves the reliability of tests. Not only that, it saves organisations, teachers and learners time.
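A quick back-of-envelope check – not the reliability analysis in the meta-analysis itself, just an invented 30-item test – quantifies the one obvious trade-off: blind guessing pays slightly better with fewer options, which is what makes the reliability finding above so surprising:

```python
# Expected marks from pure guessing on an MCQ test: with uniform random
# guessing, each item contributes 1/num_options of a mark on average.
def expected_guess_score(num_items, num_options):
    """Expected marks from blind guessing across the whole test."""
    return num_items / num_options

for options in (5, 4, 3):
    print(options, "options ->", expected_guess_score(30, options), "marks by chance")
```

Rodriguez's point is that this small rise in the guessing floor is outweighed by better distractors and shorter tests, so overall reliability goes up, not down.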
Rodriguez, M. C. (2005). Three options are optimal for multiple-choice items: A meta-analysis of 80 years of research. Educational Measurement: Issues and Practice, 24(2), 3-13.
Vyas, R. (2008). Medical education. The National Medical Journal of India, 21(3).


Friday, September 07, 2018

Chatbots a gamechanger in learning? The BIG debate at LPI

This was the debate motion at the LPI conference in London. I was FOR the motion (Henry Stewart was AGAINST) and let me explain why.
1. AI is a gamechanger
AI will change the very nature of work. It may even change what it is to be human. This is a technological revolution as big as the internet and will therefore change what we learn, why we learn and how we learn. The top seven companies by market cap all have AI as a core strategy: Apple, Alphabet, Microsoft, Amazon, Tencent, Facebook and Alibaba. AI is a strategic concern for every sector and every business, even learning.
2. Evidence from consumers
Several radical shifts in consumer online behaviour move us towards chatbots. First, the entry of voice-activated bots into the home, connected to the IoT (Internet of Things) – Amazon Alexa and Google Home. Second, the rise of ‘voice’ as a natural form of communicating with technology – Siri, Cortana and other similar services. Over 10% of all search is now by voice. Third, the switch from social media to message/chat apps – chat overtook social media in late 2014 and the gap is growing; chat is the home screen for most young people.
3. Pedagogy in chatbots
Most teaching is through dialogue. The Socratic method may have been undermined by blackboards and their successors through to PowerPoint, but voice and dialogue are making a comeback. Speaking and listening through dialogue is our most natural interface. We’ve evolved these capabilities over 2 million years or so. It’s natural: we’re grammatical geniuses by age 3, without having to be taught to speak and listen. Within dialogue lie lots of pedagogically strong learning techniques: retrieval, elaboration, questions, answers, follow-ups, examples and so on. It just feels more natural.
4. Evidence in learning
An exit poll taken by Donald Taylor at the Learning Technologies conference this year showed personalised learning at No 1 and AI at No 3. The interest is clearly strong, and there are lots of real projects being delivered to real clients by WildFire, Learning Pool and others.
5. Chatbots across the learning journey
There are now real chatbot applications at points across the entire learning journey. I showed actual chatbot applications in learning in the following areas:
   Onboarding bots
   Learner engagement bots
   Learner support bots
   Invisible LMS bots
   Mentor bots
   Practice bots
   Assessment bots
   Wellbeing bots
If you want to know more about these actual projects, I'd be glad to help.
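To make the ‘practice bot’ species above concrete, here is a toy, rule-based sketch. Real chatbots use NLP and learn from use, and the question bank below is invented for illustration – but the pedagogic shape is the point: open-input retrieval questions, not lists to choose from.

```python
# Toy practice bot: asks open-input retrieval questions and checks
# typed answers against an expected keyword. A real bot would use NLP
# to interpret answers; this keyword check is a deliberate simplification.
QUESTIONS = [
    ("What strategy strengthens memory more than re-reading?", "retrieval"),
    ("What kind of practice spreads tests out over time?", "spaced"),
]

def practice_bot(get_answer):
    """Run through the question bank, returning the learner's score."""
    score = 0
    for prompt, expected in QUESTIONS:
        reply = get_answer(prompt).strip().lower()
        if expected in reply:
            score += 1
    return score

# Simulated learner standing in for live chat input:
answers = iter(["Retrieval practice", "blocked practice"])
print(practice_bot(lambda prompt: next(answers)))  # scores 1 of 2
```

Even at this trivial level, the bot embodies the pedagogy argued for throughout: the learner retrieves and types, rather than recognises and clicks.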
6. They’re learners
An important feature of modern chatbots, compared, say, to ELIZA from the 1960s, is the fact that they now learn. This matters, as the more you train and use them, the better they get. We used to have just human teachers and learners; we now have technology that is both a teacher and a learner.
7. It’s started
Technology is always ahead of the sociology, which is always ahead of learning and development. Yet we see in these many projects, even with relatively primitive technology, an emerging trend: the use of chatbot-delivered learning. In time, this will happen. Resistance is futile. 
Nigel Paine chaired the debate with his usual panache and teased questions out of the audience and the real debate ensued. The questions were rather good.
Q Has AI passed the Turing test?
First, there are many versions of the Turing test, but the evidence, from the many chatbots on social media all the way to Google Duplex, shows that it has been passed. Not for long, sustained and very detailed dialogue, but certainly within limited domains. Google Duplex showed that we’re getting there on sustained dialogue, and the next generation of Amazon’s Alexa and Google Home will have memory, context and personalisation in their chatbot software. It will come in time.
Q AI can never match the human brain
This is true but not always the point. We didn’t learn to fly by copying the wings of a bird – we invented new technology, the airplane. We didn’t go faster by looking at the legs of a cheetah – we invented the wheel. The human brain is actually a rather fragile entity. It takes 20 years and more of training to make it even remotely useful in the workplace; it is inattentive, easily overloaded, has a fallible memory, forgets most of what it tries to learn, has tons of biases (we're all racist and sexist), we can’t download, can’t network, and we die. But it is true that it is rather good at general things. This is why chatbots are best targeted at specific uses and domains, such as the eight species of chatbot I demonstrated.
Q Chatbots v people
Michelle Parry-Slater made a good point about chatbots not replacing people but working alongside people. This is important. Chatbots may replace some functions and roles but few suppose that all people will be eliminated by chatbots. We have to see them as being part of the landscape.
Q Chatbots need to capture pedagogy
Good question from Martin Couzins. Chatbots have to embody good pedagogy, and already do. Whether it’s models of engagement, support, learning objectives, invisible LMS, practice, assessment or wellbeing, the whole point is to use both the interface and back-end functionality (an important area for pedagogic capture) to deliver powerful learning based on evidence-based theory, such as retrieval, effortful learning, spaced practice and so on. This will improve rather than diminish or ignore pedagogy. In all of the examples I showed, pedagogy was first and foremost.
Q Will L and D skills have to change
Indeed. I have been training Interactive Designers in chatbot and AI skills, as this is already in demand. The days of simply producing media assets and multiple-choice questions are coming to a close – thankfully.
Oh, and we won the debate by some margin, with a significant number changing their minds from sceptics to believers along the way! But that doesn't really matter, as it was a self-selecting audience – they came, I'd imagine, because they were curious and had some affinity with the idea that chatbots have a role. My view is that these debates are good at conferences – by starting with a polarised position, the audience can move and shift around in the middle. The audience in this session were excellent, with great questions, as you've seen above. Note to conference organisers – we need more of this – it energises debate and audience participation.
