Monday, September 17, 2018

Breakthrough that literally opens up online learning? Using AI for free text input

When teachers want to find out whether learners know something, they rarely ask multiple-choice questions. Yet the MCQ remains the staple of online learning, even at the level of scenario-based learning. Open input remains rare, despite ample evidence that it is superior in terms of retention and recall. Imagine allowing learners to type, in their own words, what they think they know about something, with AI doing the job of interpreting that input.
Open input
We’ve developed different levels of more natural open input that take online learning forward. The first uses AI to identify the main concepts and asks learners to type in the text or numbers, rather than choosing from a list. The cognitive advantage is that the learner focuses on recalling the idea into their own mind, an act that has been shown to increase retention and recall. There is even evidence that this type of retrieval has a stronger learning effect than the original act of being taught the concept. These concepts then act as ‘cues’ on which learners hang their learning, for recall. We know this works well.
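As an illustration only – this is not the published WildFire pipeline, and the library, model name and frequency heuristic here are my own assumptions – a minimal sketch of how key concepts might be pulled from a passage to act as cues:

# Illustrative sketch: extract candidate 'cue' concepts from a passage.
# Assumes spaCy is installed with its small English model:
#   pip install spacy && python -m spacy download en_core_web_sm
from collections import Counter
import spacy

nlp = spacy.load("en_core_web_sm")

def extract_cues(text, max_cues=5):
    """Return the most frequent noun-phrase concepts as retrieval cues."""
    doc = nlp(text)
    # Normalise noun chunks to lower-cased lemmas, dropping stop words.
    candidates = [
        " ".join(tok.lemma_.lower() for tok in chunk if not tok.is_stop)
        for chunk in doc.noun_chunks
    ]
    counts = Counter(c for c in candidates if c)
    return [concept for concept, _ in counts.most_common(max_cues)]

if __name__ == "__main__":
    passage = ("Retrieval practice strengthens long-term memory because the "
               "effort of recalling a concept consolidates that concept.")
    print(extract_cues(passage))

In a real course, each extracted concept would become a blank the learner has to type in, rather than an option to pick from a list.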
Free text input
But let’s take this a stage further and try more general open input. The learner reads a number of related concepts in text, with graphics, perhaps watching video, and has to type in a longer piece of text, in their own words. This we have also done. This longer form of open input allows the learner to rephrase and generate their own thoughts, and the AI software then analyses that text.
Ideally, one takes the learner through three levels
1. Read text/interpret graphics/watch video
2. AI generated open-input with cues
3. AI generated open-input of fuller freeform text in your own words
This gives us a learning gradient of increasing levels of difficulty and retrieval. You move from exposure and reflection, to guided effortful retrieval, to full, unaided retrieval. Our approach increases the efficacy of learning in terms of speed of learning, better retrieval and better recall, all generated and executed by AI.
The interpretation of this generated text, in your own words, copes with synonyms, words close in meaning and different sentence constructions, as it uses the latest forms of AI. It also draws on the more formal data from the structured learning. We have also got this working with voice-only input, another breakthrough in learning, as speech is a more natural form of expression in practice.
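To make this concrete, here is a minimal, illustrative sketch – not the WildFire implementation, and both the sentence-encoder model and the pass threshold are assumptions – of how a free-text answer could be compared with a reference explanation so that synonyms and rephrasings still score well:

# Illustrative only: score a learner's free-text answer against a reference.
# Assumes: pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model; any sentence encoder works

def score_answer(learner_text, reference_text):
    """Return cosine similarity between the learner's answer and the reference."""
    emb = model.encode([learner_text, reference_text], convert_to_tensor=True)
    return float(util.cos_sim(emb[0], emb[1]))

if __name__ == "__main__":
    ref = "Retrieval practice strengthens memory more than re-reading."
    ans = "Actively recalling material beats passively reading it again."
    sim = score_answer(ans, ref)
    print(f"similarity: {sim:.2f}")
    print("looks good" if sim > 0.6 else "prompt for more")  # threshold is an assumption

In practice, a low score could simply prompt the learner to add more, rather than marking them wrong.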
The opportunities for chatbots are also immense.
If you work in corporate learning and want to know more, please contact us at WildFire and we can show you this in action.
Evidence for this approach
Much advice and most practice from educational institutions – re-reading, highlighting and underlining – is wasteful. In fact, these traditional techniques can be dangerous, as they give the illusion of mastery. Indeed, learners who use reading and re-reading show overconfidence in their mastery, compared to learners who take advantage of effortful learning.
Yet significant progress has been made in cognitive science research to identify more potent strategies for learning. The first strategy, mentioned as far back as Aristotle, and later by Francis Bacon and William James, is ‘effortful’ learning. It is what the learner does that matters.
Simply reading, listening or watching, even repeating these experiences, is not enough. The learning is in the doing. The learner must be pushed to make the effort to retrieve their learning to make it stick in long-term memory.
Active retrieval
‘Active retrieval’ is the most powerful learning strategy, even more powerful than the original learning experience. The first solid research on retrieval was by Gates (1917), who tested children aged 8-16 on short biographies. Some simply re-read the material several times; others were told to look up and silently recite what they had read. The latter, who actively retrieved knowledge, showed better recall. Spitzer (1939) had over 3,000 11-12 year olds read 600-word articles, then tested them at intervals over two months. The greater the gap between testing (retrieval) and the original exposure or test, the greater the forgetting. The tests themselves seemed to halt forgetting. Tulving (1967) took this further with lists of 36 words, with repeated testing and retrieval. The retrieval led to as much learning as the original act of studying. This shifted the focus away from testing as mere assessment towards testing as retrieval, an act of learning in itself. Roediger et al. (2011) studied text material covering Egypt, Mesopotamia, India and China, in the context of real classes at a middle school in Columbia, Illinois. Retrieval tests, only a few minutes long, produced a full grade-level increase on the material that had been subject to retrieval. McDaniel (2011) did a further study on science subjects, with 16 year olds, on genetics, evolution and anatomy. Students who used retrieval quizzes scored 92% (A-) compared with 79% for those who did not. More than this, the effect of retrieval lasted longer when the students were tested eight months later. So we design learning as a retrieval experience, largely using open input, where you have to pull things from your memory and make a real effort to type in the missing words, given their context in a sentence.
Open input
Most online learning relies heavily on multiple-choice questions, which have become the staple of much e-learning content. They have been shown to be effective, as almost any type of test item is effective to a degree, but they are less effective than open response, as they test recognition from a list, not whether something is actually known.
Duchastel and Nungester (1982) found that multiple-choice tests improve performance on recognition in subsequent multiple-choice tests, while open input improves performance on recall from memory. This is called the ‘test practice effect’. Kang et al. (2007) showed, with 48 undergraduates reading journal-quality academic material, that open input is superior to multiple-choice (recognition) tasks. Multiple-choice testing had an effect similar to that of re-reading, whereas open input resulted in more effective student learning. McDaniel et al. (2007) repeated this experiment in a real course, with 35 students enrolled in a web-based Brain and Behavior course at the University of New Mexico. The open-input quizzes produced more robust benefits than multiple-choice quizzes. ‘Desirable difficulties’ is a concept coined by Elizabeth and Robert Bjork to describe the desirability of creating learning experiences that trigger effort, deeper processing, encoding and retrieval, and so enhance learning. The Bjorks have researched this phenomenon in detail, showing that effortful retrieval and recall is desirable in learning, as it is the effort taken in retrieval that reinforces and consolidates that learning. A multiple-choice question is a test of recognition from a list; it does not elicit full recall from memory. Studies comparing multiple-choice with open retrieval show that when more effort is demanded of students, they have better retention. As open response takes cognitive effort, the very act of recalling knowledge also reinforces that knowledge in memory. Active recall develops and strengthens memory, improving the process of recall in ways that passive exposure – reading, listening and watching – does not.
Design implications
Meaning matters, so we rely primarily on reading and open response, where meaningful recall is stimulated. This act alone is a strong reinforcer, stronger indeed than the original exposure. Interestingly, even when the answer is not known, the act of trying to answer is itself a powerful form of learning.
So the choice of open-response questions, where the user types in the words, followed by more substantial open input, is a deliberate design strategy that takes advantage of known AI and learning techniques to increase recall and retention. Note that no learner is subjected to the undesirable difficulty of getting stuck: letters are revealed one by one, and the answer is given after three attempts. Hints are also possible in the system.
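As a toy illustration of the ‘letters revealed one by one’ mechanic described above – the exact attempt counts and reveal rules here are assumptions, not the precise WildFire behaviour – here is a short sketch:

# Toy sketch of a progressive-hint loop: reveal one more letter per failed attempt,
# then give the answer after three attempts. Details are illustrative assumptions.
def masked(answer, revealed):
    """Show the first `revealed` letters, mask the rest (keep spaces)."""
    return "".join(
        ch if i < revealed or ch == " " else "_"
        for i, ch in enumerate(answer)
    )

def ask(prompt, answer, max_attempts=3):
    for attempt in range(max_attempts):
        guess = input(f"{prompt}  {masked(answer, attempt)} > ").strip()
        if guess.lower() == answer.lower():
            print("Correct!")
            return True
        print("Not quite, here's another letter...")
    print(f"The answer was: {answer}")
    return False

if __name__ == "__main__":
    ask("The effort of pulling knowledge from memory is called active ____.",
        "retrieval")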
Bibliography
Bjork, R. A. (1994). Memory and metamemory considerations in the training of human beings. In J. Metcalfe & A. Shimamura (Eds.), Metacognition: Knowing about knowing (pp. 185–205). Cambridge, MA: MIT Press.
Bower, G. H. (1972). Mental imagery in associative learning. In L. W. Gregg (Ed.), Cognition in Learning and Memory. New York: Wiley.
Butler, A. C., & Roediger, H. L. (2008). Feedback enhances the positive effects and reduces the negative effects of multiple-choice testing. Memory & Cognition, 36, 604-616.
Duchastel, P. C., & Nungester, R. J. (1982). Testing effects measured with alternate test forms. Journal of Educational Research, 75, 309-313.
Gardiner, J. M. (1988). Generation and priming effects in word fragment completion. Journal of Experimental Psychology: Learning, Memory and Cognition, 14, 495-501.
Gates, A. I. (1917). Recitation as a factor in memorizing. Archives of Psychology, No. 40, 1-104. 
Hirshman, E. L., & Bjork, R. A. (1988). The generation effect: Support for a two-factor theory. Journal of Experimental Psychology: Learning, Memory, & Cognition, 14, 484–494. 
Jacoby, L. L. (1978). On interpreting the effects of repetition: Solving a problem versus remembering a solution. Journal of Verbal Learning and Verbal Behavior, 17, 649-667.
Kang, S. H. K., McDermott, K. B., & Roediger, H. L., III. (2007). Test format and corrective feedback modulate the effect of testing on long-term retention. European Journal of Cognitive Psychology, 19, 528-558.
McDaniel, M. A., Einstein, G. O., Dunay, P. K., & Cobb, R. (1986). Encoding difficulty and memory: Toward a unifying theory. Journal of Memory and Language, 25, 645-656.
McDaniel, M. A., Agarwal, P. K., Huelser, B. J., McDermott, K. B., & Roediger, H. L. (2011). Test-enhanced learning in a middle school science classroom: The effects of quiz frequency and placement. Journal of Educational Psychology, 103, 399-414.
Miller, G.A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63, 81-97. 
Richland, L. E., Bjork, R. A., Finley, J. R., & Linn, M. C. (2005). Linking cognitive science to education: Generation and interleaving effects. In B. G. Bara, L. Barsalou, & M. Bucciarelli (Eds.), Proceedings of the twenty-seventh annual conference of the cognitive science society. Mahwah, NJ: Erlbaum. 
Roediger, H. L., Agarwal, P. K., McDaniel, M. A., & McDermott, K. B. (2011). Test-enhanced learning in the classroom: Long-term improvements from quizzing. Journal of Experimental Psychology: Applied, 17, 382-395.
Spitzer, H. F. (1939). Studies in retention. Journal of Educational Psychology, 30, 641-656.
Tulving, E. (1967). The effects of presentation and recall of material in free-recall learning. Journal of Verbal Learning and Verbal Behavior, 6, 175-184.


Wednesday, September 12, 2018

Simple researched design feature would save organisations and learners a huge amount of money and time – yet hardly anyone does it

Multiple-choice questions are everywhere, from simple retrieval tests to high-stakes exams. They normally (not always) contain FOUR options: one right answer and three distractors. But who decided this? Is it more convention than researched practice?
Research
Rodriguez (2005) studied 27 research papers on multiple-choice questions and found that the optimal number of options is THREE, not four. Vyas (2008) showed the same was true in medical education. But God is in the detail, and their findings surfaced some interesting phenomena.
1. Four or five options increase the effort the learner has to make, but this does not increase the reliability of the test.
2. Four options increase the effort needed to write questions and, as distractors are the most difficult part of a test item to identify and write, they were often very weak.
3. Reducing the number of options from four to three (surprisingly) increased the reliability of test scores.
4. Tests are shorter, leaving more time for teaching.
Next step
Of course, multiple-choice is, in itself, weaker than open input, which is why we can go one step further and use open response, either single words or short answers. Natural Language Processing allows AI not only to create such questions automatically (MCQs too, if desired) but also to score student answers accurately, saving organisations time and money. This is surely the way forward in online learning. Beyond this is voice input, again a step forward, and AI has also made this type of input possible in online learning. If you are interested in learning more about this, see WildFire.
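Purely as an illustration – this is not the WildFire scoring engine, and the synonym list and 0.8 threshold are assumptions – one simple way to accept a typed short answer in place of a multiple-choice selection is to fuzzy-match the response against the correct answer and any accepted alternatives:

# Illustrative sketch: accept a typed short answer instead of an MCQ selection.
# Uses only the Python standard library.
from difflib import SequenceMatcher

def is_acceptable(response, correct, synonyms=(), threshold=0.8):
    """True if the response closely matches the answer or an accepted synonym."""
    targets = (correct,) + tuple(synonyms)
    response = response.strip().lower()
    return any(
        SequenceMatcher(None, response, t.lower()).ratio() >= threshold
        for t in targets
    )

if __name__ == "__main__":
    # Instead of offering three or four options, ask the open question
    # and score whatever the learner types.
    print(is_acceptable("three", "three", synonyms=("3",)))   # True
    print(is_acceptable("threee", "three"))                   # True (typo tolerated)
    print(is_acceptable("four", "three"))                     # False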
Conclusion
So, you can safely reduce the number of MCQ options from five or four to THREE without reducing the reliability of the tests. Indeed, there is evidence that it improves reliability. Not only that, it saves organisations, teachers and learners time.
Bibliography
Rodriguez (2005). “Three Options Are Optimal for Multiple-Choice Items: A Meta-Analysis of 80 Years of Research”. Educational Measurement: Issues and Practice, 24(2), June 2005, pp. 3-13.
Vyas, R. (2008). Medical education. The National Medical Journal of India, 21(3).


Friday, September 07, 2018

Chatbots a gamechanger in learning? The BIG debate at LPI

This was the debate motion at the LPI conference in London. I was FOR the motion (Henry Stewart was AGAINST) and let me explain why.
1. AI is a gamechanger
AI will change the very nature of work. It may even change what it is to be human. This is a technological revolution as big as the internet and will therefore change what we learn, why we learn and how we learn. The top seven companies by market cap all have AI as a core strategy: Apple, Alphabet, Microsoft, Amazon, Tencent, Facebook and Alibaba. AI is a strategic concern for every sector and every business, even learning.
2. Evidence from consumers
Several radical shifts in consumer online behaviour are moving us towards chatbots. First, the entry of voice-activated bots into the home, connected to the IoT (Internet of Things) – Amazon Alexa and Google Home. Second, the rise of ‘voice’ as a natural form of communicating with technology – Siri, Cortana and other similar services. Over 10% of all search is now by voice. Third, the switch from social media to messaging/chat apps – chat overtook social media in late 2014 and the gap is growing; chat is the home screen for most young people.
3. Pedagogy in chatbots
Most teaching is through dialogue. The Socratic method may have been undermined by blackboards and their successors, through to PowerPoint, but voice and dialogue are making a comeback. Speaking and listening through dialogue is our most natural interface. We’ve evolved these capabilities over two million years or so. It’s natural, and we’re grammatical geniuses by the age of three, without having to be taught to speak and listen. Within dialogue lie many pedagogically strong learning techniques: retrieval, elaboration, questions, answers, follow-ups, examples and so on. It just feels more natural.
4. Evidence in learning
An exit poll taken by Donald Taylor at the Learning Technologies conference this year showed personalised learning at No 1 and AI at No 3. The interest is clearly strong, and there are lots of real projects being delivered to real clients by WildFire, Learning Pool and others.
5. Chatbots across the learning journey
There are now real chatbot applications at points across the entire learning journey. I showed actual chatbot applications in learning in the following areas:
   Onboarding bots
   Learner engagement bots
   Learner support bots
   Invisible LMS bots
   Mentor bots
   Practice bots
   Assessment bots
   Wellbeing bots
If you want to know more about these actual projects, I'd be glad to help.
6. They’re learners
An important feature of modern chatbots, compared, say, to ELIZA from the 1960s, is that they now learn. This matters because the more you train and use them, the better they get. We used to have just human teachers and learners; we now have technology that is both a teacher and a learner.
7. It’s started
Technology is always ahead of the sociology, which is always ahead of learning and development. Yet we already see, in these many projects, even with relatively primitive technology, an emerging trend – the use of chatbots to deliver learning. In time, this will happen at scale. Resistance is futile.
Objections
Nigel Paine chaired the debate with his usual panache and teased questions out of the audience and the real debate ensued. The questions were rather good.
Q Has AI passed the Turing test?
First, there are many versions of the Turing test, but the evidence from the many chatbots on social media, all the way to Google Duplex, shows that it has been passed – not for long, sustained and very detailed dialogue, but certainly within limited domains. Google Duplex showed that we’re getting there on sustained dialogue, and the next generation of Amazon’s Alexa and Google Home will have memory, context and personalisation in their chatbot software. It will come in time.
Q AI can never match the human brain
This is true but not always the point. We didn’t learn to fly by copying the wings of a bird – we invented new technology, the airplane. We didn’t go faster by looking at the legs of a cheetah – we invented the wheel. The human brain is actually a rather fragile entity. It takes 20 years or more of training to make it even remotely useful in the workplace; it is inattentive, easily overloaded, has fallible memory, forgets most of what it tries to learn, has tons of biases (we're all racist and sexist), can’t download, can’t network, and it dies. But it is true that it is rather good at general things. This is why chatbots are best targeted at specific uses and domains, such as the eight species of chatbot I demonstrated.
Q Chatbots v people
Michelle Parry-Slater made a good point about chatbots not replacing people but working alongside them. This is important. Chatbots may replace some functions and roles, but few suppose that all people will be eliminated by chatbots. We have to see them as part of the landscape.
Q Chatbots need to capture pedagogy
Good question from Martin Couzins. Chatbots have to embody good pedagogy, and the best already do. Whether it’s engagement, support, learning objectives, an invisible LMS, practice, assessment or wellbeing, the whole point is to use both the interface and the back-end functionality (an important area for pedagogic capture) to deliver powerful learning based on evidence-based theory, such as retrieval, effortful learning, spaced practice and so on. This will improve rather than diminish or ignore pedagogy. In all of the examples I showed, pedagogy was first and foremost.
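As a toy illustration only – none of the chatbots demonstrated at the event are reproduced here, and the question bank and interval-doubling rule are assumptions – this sketch shows how a practice bot might weave retrieval and spaced practice into a dialogue turn:

# Toy sketch of a retrieval-practice chatbot turn with naive spaced scheduling.
from datetime import datetime, timedelta

QUESTIONS = {
    "What strategy beats re-reading for long-term retention?": "retrieval practice",
    "What do the Bjorks call difficulties that aid learning?": "desirable difficulties",
}

# Each question starts due now, with a one-day review interval.
schedule = {q: {"due": datetime.now(), "interval_days": 1} for q in QUESTIONS}

def practice_turn():
    now = datetime.now()
    for question, state in schedule.items():
        if state["due"] > now:
            continue  # not yet due for review
        answer = input(f"Bot: {question}\nYou: ").strip().lower()
        if answer == QUESTIONS[question].lower():
            print("Bot: Exactly right.")
            state["interval_days"] *= 2    # space successful items further out
        else:
            print(f"Bot: Close - the answer I had was '{QUESTIONS[question]}'.")
            state["interval_days"] = 1     # reset the spacing on a miss
        state["due"] = now + timedelta(days=state["interval_days"])

if __name__ == "__main__":
    practice_turn()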
Q Will L and D skills have to change
Indeed. I have been training interactive designers in chatbot and AI skills, as this is already in demand. The days of simply producing media assets and multiple-choice questions are coming to a close – thankfully.
Conclusion
Oh, and we won the debate by some margin, with a significant number changing their minds from sceptics to believers along the way! But that doesn't really matter, as it was a self-selecting audience – they came, I'd imagine, because they were curious and had some affinity with the idea that chatbots have a role. My view is that these debates are good at conferences – by starting with a polarised position, the audience can move and shift around in the middle. The audience in this session were excellent, with great questions, as you've seen above. Note to conference organisers – we need more of this – it energises debate and audience participation.


Tuesday, September 04, 2018

First randomised controlled trial of an employee “Wellness Programme” suggests they are a waste of money. Oh dear…

Jones et al (2018), in their study What Do Workplace Wellness Programmes Do?, took 12,000 employees, randomly assigned them into groups, and found no “significant causal effects of treatment on total medical expenditures, health behaviors, employee productivity, or self-reported health status in the first year”.
This is a huge business, around $8 billion in the US alone. Yet it is largely based on articles of faith, not research. This is a welcome study, as it gets rid of the self-selecting nature of the audiences so prevalent in other studies on well-being, which renders them largely useless as the basis of recommendations. 
Do they reduce sickness? No, they don’t. Do they result in people staying in their jobs, getting promoted or getting a pay rise? No, they don’t. Did they reduce medication or hospital visits? No, they didn’t. This was true for almost every one of the 37 features examined. The bottom line is that there is no bottom line, no return on investment.
The interesting conclusion by the authors of the study is that wellness programmes, far from helping the intended audience (the obese, smokers and so on), simply act as a screen for those who are already healthy, while the burden of cost is borne by all.


Bullshit Jobs - Futurist, Thought Leader, Leader... if you call yourself this, you're most probably not

If you call yourself a 'Futurist', 'Thought Leader' or even 'Leader', you're most probably not one. I keep coming across people at conferences and on social media who have these titles, yet have shallow theoretical and practical competences.
Futurists
I've lost count of the presentations I've seen that are merely anecdotes and examples culled from the internet, with some general nonsense about how 65% of all primary school kids are being taught for jobs that don't yet exist, or some other quote from Einstein that, on inspection, he never said.
Let me give you some real examples. Two years ago, I went to speak at DevLearn in Las Vegas. Now, one wants a keynote speaker to provide new, insightful thinking, but a guy called David Pogue did a second-rate Jim Carrey act. His ‘look at these wacky things on the internet’ shtick was a predictable routine. Kids can play the recorder on their iPhone! No, they don’t. Only a 50 year old who bills himself as a ‘futurist’ thinks that kids take this stuff seriously.
At DevLearn, we also got a guy called Adam Savage. I had never heard of him, but he’s a famous TV presenter in the US who hosts a show called Mythbusters. He spent an hour trying to claim that art and science were really the same thing, as both were really (and here comes his big insight) – storytelling. The problem is that the hapless Adam knew nothing about science or art. It was trite, reductionist and banal. Then there was the speaker on workplace learning, at OEB last year, who used the totally made-up “65% of kids… jobs that don’t exist” line.
My own view is that these conferences do need outsiders who can talk knowledgeably about learning and not just about observing their kids or delivering a thinly disguised autobiography. I want some real relevance. I’ve begun to tire of ‘futurists’ – they all seem to be relics from the past. 
Bullshit Jobs - the book
This is where David Graeber comes in. He’s written a rather fine book, called Bullshit Jobs, which identifies five types of jobs that he regards as bullshit. Graeber’s right: many people do jobs that, if they disappeared tomorrow, would make no difference to the world and might even make things simpler, more efficient and better. As a follow-up to the Graeber book, YouGov did a poll and found that 37% thought their jobs did not contribute meaningfully to the world. I find that both astonishing and all too real – in my experience, worryingly true. So what are those bullshit jobs?
Box tickers
Some of Graeber's bullshit jobs largely orbit around the concept of self-worth. Graeber identifies box tickers as one huge growth area. We know what this means in most organisations: those who deliver over-engineered and almost immediately forgotten compliance training, which is mostly about protecting the organisation from its own employees or satisfying some mythical insurance risk. It keeps them busy but also prevents others from getting on with their jobs. They forget almost all of it anyway.
It also includes all of those jobs created around abstract concepts such as diversity, equality or some other abstract threat. The job titles are a dead giveaway: Chief Imagination Officer, Life Coach… any title with future, life, innovation, leadership, creative, liaison, strategist, ideation, facilitator, diversity, equality and so on. All of this pimping of job titles, along with fancy new business cards, is a futile exercise in self- and organisational deception. It keeps non-productive people in non-productive jobs. Who hasn’t come across the pointless bureaucracy of organisations, from the process of signing in at reception, to getting wifi, and all sorts of other administrative baloney? But that is nothing compared to the mindless touting of mindfulness, NLP courses and other fanciful and faddish nonsense that HR peddles in organisations. Then there’s a layer of pretend measurement with useless Ponzi-scheme tools such as Myers-Briggs, unconscious bias courses, emotional intelligence, 21st-century skills and Kirkpatrick.
A second Graeber category is taskmasters, and he specifically targets middle-management jobs and leadership professionals. Who doesn’t find themselves, at some point in the week, doing something they know is pointless, instructed by someone whose job suggests pointless activity? The bullshit job boom has exploded in this area, with endless folk wanting to tell you that you’re a ‘Leader’. You all need ‘Leadership training’, apparently, as everyone’s a Leader these days, rendering the meaning of the word completely useless. Stanford’s Pfeffer nails this in his book Leadership BS.
All of this comes at a cost. We have systematically downgraded jobs where people do real things – plumbers, carpenters, carers, teachers, nurses and every other vocational occupation – paying them peanuts, while funding the rise of the robots, and I don’t mean technology, I mean purveyors of bullshit: all of those worthy middle-class jobs that pay people over the odds for being outraged on behalf of others. Leadership training has replaced good old-fashioned management training, abstractions replacing competences. Going to ‘Uni’ has become the only option for youngsters, often creating the expectation that they will go straight into bullshit jobs, managing others who do most of the useful work.
I disagree with Graeber’s hypothesis that capitalism, and its engine the Protestant work ethic, leads to keeping people busy just for the sake of being busy – business as busyness. I well remember the team leader in a summer job I had saying to me, ‘Listen, just look busy… just pretend to be busy’. I felt like saying, ‘You’re the boss, you pretend I’m busy’. Most of these bullshit jobs arise out of fear: the fear of being seen not to be progressive, the fear of regulation and litigation, the fear of not doing what everyone else is doing, with a heavy dose of groupthink.
We keep churning out these hopeless jobs in the hope that they will make the workplace more human, but all they do is dehumanise the workplace. They turn it into a place of quiet resentment and ridicule.


Saturday, September 01, 2018

Hearables are hear to stay in learning - podcasts, learning, language learning, tutoring, spaced-practice and cheating in exams!

Hearables are wearable devices that use your sense of hearing – smart headphones, if you wish – and they are becoming popular with the rise of voice as a significant form of interaction. With voice rising dramatically as a means of search, Amazon Echo and Google Home entering millions of homes, real-time translation, and earbuds for music and voice on mobiles, we are increasingly using hearing as the sense of choice for communications.
Apple’s removal of the audio jack and launch of their AirPods was a landmark in the shift towards wireless hearables, but other devices are also available or in development. Some rely on your smartphone; others, such as the Vinci, are independent, with local computing power and storage. You can even get them shaped for your own ears through 3D printing.
Voice in learning
The advantage of audio in learning lies in the simple fact that voice is primary in language: we are all auditory grammatical geniuses, able to listen and speak by the age of three, without instruction, whereas reading and writing take years of instruction and practice. It is a more natural, more human form of communication. It also leaves your imagination free to create or generate your own thoughts and interpretations. This pared-back input, arguably, allows deeper processing and learning, as it requires attention, focus and effortful learning. Most of our communication is through dialogue and hearing, not print, and most teaching takes place through hearing and dialogue.
So, there are several ways hearables could be used in learning:
1. Radio
Radio predates TV and modern media for learning. It remains a popular form of communication, as it is undemanding and leaves you hands-free (while making breakfast, driving the car and so on). It also has a long history in learning, in Australia and other regions where distances are huge and resources low. The straight delivery of radio via hearables is the baseline.
2. Podcasts
Podcasts have also become a popular medium, especially for learning. They appeal to the learner who wants to focus on hearing experts, often interviewing other experts, on specific topics. As they are downloadable, they provide audio on demand, when you have the time to listen, often in those periods when you can focus, listen and learn.
3. Online learning
We have been using voice to deliver online learning in WildFire, with zero typing, as all navigation and input is by voice. It is a fascinating experience and feels more like normal teaching and learning when compared to using a keyboard.
4. Language learning
Language learning is an obvious application, where listening and comprehension can be delivered to your personal needs, with appropriate levels of feedback, even voice and pronunciation recognition.
5. Translation
Translation in real time is already available through Google's Pixel Buds. The advantages, in terms of convenience but also language learning, have huge potential. It must surely be worth exploring the benefits for novice language learners of hearing translation and playback in the real world, where immersion and interaction with native speakers matter.
6. Tutoring
Have you ever had to call someone for help on how to fix something or get technical help on your computer? As a form of quick tutoring, voice is useful as you are hands free to try things, while you have access to experts anywhere on the planet.
7. Health
Hearables that deliver notifications on heart rate, oxygen saturation, calories burned, distance, speed and duration are already available and, as they can be used during exercise, may prove popular. This health learning loop also has the potential to modify behaviour, from diabetes to obesity.
8. Lifelong learning
As one gets older, reading, typing and other forms of interaction become more difficult. Hearables provide an easier and more convenient form of interaction, especially when combined with ‘reminders’ for those with memory problems.
9. Spaced practice
Audio could be used as a spaced-practice tool, pushed to you at intervals personalised to your needs and your own forgetting curve – more frequent at the start, then levelling out over time (a toy sketch of such a schedule appears after this list).
10. Exam cheating
Lastly, although undesirable, there have already been many reported instances of exam cheating using hearables. eBay is awash with cheating devices such as micro-earpieces, Bluetooth pens and so on. In some ways this shows how powerful such devices can be for the timely delivery of learning!
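Referring back to point 9, here is a toy sketch – the expanding-interval rule is an assumption, not a validated model of anyone’s forgetting curve – of how audio prompts might be scheduled more densely at the start and then spaced out:

# Toy sketch: expanding review intervals for audio spaced practice.
# The base interval and growth factor are illustrative assumptions.
from datetime import date, timedelta

def review_dates(start, reviews=6, base_days=1, growth=2.0):
    """Return review dates spaced ever further apart (1, 2, 4, 8... days)."""
    dates, gap, current = [], float(base_days), start
    for _ in range(reviews):
        current = current + timedelta(days=round(gap))
        dates.append(current)
        gap *= growth   # widen the gap after each review
    return dates

if __name__ == "__main__":
    for d in review_dates(date(2018, 9, 1)):
        print(d.isoformat())   # 2018-09-02, -04, -08, -16, then 2018-10-02, 2018-11-03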
Conclusion
Hearables are becoming part of the consumer technology landscape and, in terms of learning, will have an impact. Different devices have different affordances, but there is no doubt that hearing is a sense of choice for many people. Hearables, therefore, are hear to stay.


Thursday, August 30, 2018

Research shows Good Behaviour Game is constructivist nonsense

The Good Behaviour Game was touted for years by social constructivists as yet another silver bullet for classroom behaviour. Yet a large, well-funded trial, across 77 schools with 3,084 pupils, at a cost of £4,000 per school, has shown that it’s a dud. The EEF-funded evaluation showed that it was a waste of time and money.
Of course, it had that word ‘Game’ in its brand, and with ‘gamification’ being de rigueur, that gave it momentum, along with some outrageous claims about its efficacy. And being a non-interventionist approach (teachers were not allowed to interfere), it also played to the Ken Robinson/Rousseau myth that if we only let children be themselves, they will thrive. It also had that vital component, the social group, where children were expected to use and pick up those vital 21st century skills, such as collaboration, communication and teamwork. So its premises – 1) gamification, 2) natural development and 3) social learning – were found wanting.
Its creators claim that it is underpinned by theory that emerged in the 1960s: ‘life course’ and ‘social field theory’. Life course theory is right out of the social constructivist playbook, specifically codified in Constructing the Life Course by Gubrium and Holstein (2000) – the idea that one should ignore specific measures and instead implement practice and evaluate it holistically, at the social level. Social field theory is another constructivist theory, taken from sociology, that looks at social actors, how those actors construct social fields, and how they are affected by such fields.
Claims for GBG’s efficacy were nothing if not bold: improving behaviour, reducing mental health problems, crime, violence, anti-social behaviour, even substance abuse. Each game took 10-45 minutes and was supposed to result in better social behaviour; the game teams were balanced for gender and temperament. In truth, it was almost wholly a waste of time. The EEF summary is worth quoting in full:
“EEF Summary
Behaviour is a major concern for both teachers and students. EEF funded this project because GBG is an established programme, and previous evidence suggests it can improve behaviour, and may have a longer-term impact on attainment.
This trial found no evidence that GBG improves pupils’ reading skills or their behaviour (concentration, disruptive behaviour and pro-social behaviour) on average. There was also no effect found on teacher outcomes such as stress and teacher retention. However, there was some tentative evidence that boys at-risk of developing conduct problems showed improvements in behaviour. 
Most classes in the trial played the game less often and for shorter time periods than recommended, and a quarter of schools stopped before the end of the trial. However, classes who followed the programme closely did not get better results.
GBG is strictly manualised and this raised some challenges. In particular, some teachers felt the rule that they should not interact with students during the game was difficult for students with additional needs, and while some found that students got used to the challenge and thrived, others found the removal of their support counter-productive. The EEF will continue to look for effective programmes which support classroom management.”
Pretty conclusive results, and further reason for my long-held belief that the orthodoxy of social constructivism needs to be challenged, before it causes even more damage in teacher training and our schools.
More importantly, it skewers the whole idea that children are naturally self-regulating and that all teachers and parents have to do is create the right social environment and they will progress.
It’s all too easy to think that real learning is taking place in collaborative groups, ignoring the research on social loafing and the possibility that the weakest learners may suffer badly from this sort of non-guided collaboration, when all that’s happening is slow and inefficient learning – illusory learning. This trial showed that this was indeed the case, with weaker students floundering. Even at the level of actual teacher practice, the approach failed, with both teachers and pupils growing weary, sessions getting shorter and shorter, and many just giving up.
Conclusion
In evidence-based education, negative results are just as important as positive results, as they can prevent wasted time and effort in the classroom. I’d say this was conclusive and stops some of the crazier constructivist practice in its tracks. It is in line with the negative results around whole-word theory, the last constructivist theory to take root in education and then be found, through evidence, to be destructive.



Wednesday, August 29, 2018

Wikipedia’s bot army - shows us the way in governance of AI

Wikipedia is, for me, the digital wonder of the world. A free, user generated repository of knowledge, open to edits, across many languages, increasing in both breadth and depth. It is truly astonishing. But it has recently become a victim of its own success. As it scaled, it became difficult to manage. Human editorial processes have not been able to cope with the sheer number of additions, deletions, vandalism, rights violations, resizing of graphics, dead links, updating lists, blocking proxies, syntax fixing, tagging and so on. 
So would it surprise you to learn that an army of bots is, as we sleep, working on all of these tasks and many more? It surprised me.
There are nearly 3000 bot tasks identified for use in Wikipedia – so many that there is a Bot Approvals Group (BAG) with a Bot Policy that covers them all, whether fully or partially automated, helping humans with editorial tasks.
The policy rules are interesting. Your bot must be harmless and useful, must not consume resources unnecessarily, must perform only tasks for which there is consensus, must carefully adhere to relevant policies and guidelines, and must use informative, appropriately worded messages in any edit summaries or messages left for users.
So far so good, but the danger is that some bots malfunction and cause chaos. This is why Wikipedia’s bot governance is strict and strong. What is fascinating here is the glimpse we get into the future of online entities, where large amounts of dynamic data have to be protected while being put to use for human good. The Open Educational Resources people don’t like to mention Wikipedia – it is far too populist for their liking – but it remains the largest, most useful knowledge base we’ve ever seen. So what can we learn from Wikipedia and bots?
AI and Wikipedia
Wikipedia, as a computer-based system, is way superior to humans and even print, as it has perfect recall, effectively unlimited storage and 24/7 availability. On the other hand, it hits ceilings, such as the capacity of human editors to handle the traffic. This is where well-defined tasks can be automated, as previously mentioned. It is exactly how AI is best used: solving very specific, well-defined, repetitive tasks that occur 24/7 at scale. This leaves the editors free to do their job. Note that these bots are not machine-learning AI; they are pieces of software that filter and execute tasks. But the lessons for AI are clear.
At WildFire, we use AI to select related content to supplement learning experiences. This is a worthy aim, and there is no real editorial problem, as it is still entirely under human control: we can check, edit and change anything. Let me give you an example. Our system automatically creates links to Wikipedia, but as AI is not conscious or cognitive in any sense, it makes the occasional mistake. So in a medical programme, where the nurse had to ask the young patient to ‘blow’ while a lancet was being used to puncture his skin repeatedly in an allergy test, the AI automatically created a link to the page for cocaine. Oops! Easily edited out, but you get the idea. In the vast majority of cases it is accurate. You just need a QA process that catches the false positives.
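To make the link-plus-QA idea concrete, here is an illustrative sketch only – not the WildFire system – that asks the public MediaWiki search API for a candidate article for each extracted term and leaves the final accept/reject decision to a human reviewer:

# Illustrative sketch: suggest Wikipedia links for terms, then queue them for human review.
# Uses the public MediaWiki opensearch endpoint; error handling is kept minimal.
import requests

API = "https://en.wikipedia.org/w/api.php"

def suggest_link(term):
    """Return the top Wikipedia page URL for a term, or None if nothing is found."""
    params = {"action": "opensearch", "search": term, "limit": 1, "format": "json"}
    _titles, _descriptions, urls = requests.get(API, params=params, timeout=10).json()[1:]
    return urls[0] if urls else None

def review_queue(terms):
    """Collect suggestions; a human editor approves or rejects each one."""
    approved = {}
    for term in terms:
        url = suggest_link(term)
        if url and input(f"Link '{term}' to {url}? [y/n] ").lower().startswith("y"):
            approved[term] = url
    return approved

if __name__ == "__main__":
    print(review_queue(["allergy test", "lancet"]))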
Governance
Wikipedia has to handle this sort of ambiguity all the time, and it is not easy for software. The Winograd Schema Challenge offers $25,000 for software that can handle its awkward sentences with 90% accuracy – the nearest anyone has got is 58%. Roger Schank used Groucho Marx jokes! Software and data are brittle: they don’t bend, they break, which is why they still need a ton of human checking, advising and oversight.
This is a model worth copying: governance of the use of AI (let’s just call it autonomous software). Wikipedia, with its Bot Approvals Group and Bot Policy, offers a good example, within an open-source context, of good governance over data. It draws the line between bots and humans, but keeps humans in control.
Conclusion
The important lesson here is that the practitioners themselves know what has to be done. They are good people doing good things to keep the integrity of Wikipedia intact, as well as keeping it efficient. AI is like the god Shiva: it both creates and destroys. The problem with the dozens of ethics groups springing up is that all they see is the destruction. AI can be a force for good, but not if it is automatically treated as an ideological and deficit model. It seems, at times, as though there are more folk on ethics groups than actually doing anything in AI. Wikipedia shows us the way here – a steady, realistic system of governance that quietly does its work, while allowing the system to grow and retain its efficiencies, with humans in control.


Tuesday, August 28, 2018

How I got blocked by Tom Peters - you must bow to the cult of Leadership or be rejected as an apostate

Odd thing, this ‘Leadership’ business. I’ve been writing about it for ten years and get roughly the same reaction to every piece – outrage from those who sell their ‘Leadership’ services, either as consultants or trainers. In these cases, I refer them to the wise words of Upton Sinclair: “It is difficult to get a man to understand something, when his salary depends upon his not understanding it!”
But a far more interesting spat ensued when I wrote a piece criticising some of the best-selling books that got this whole Leadership bandwagon rolling, namely In Search of Excellence and Good to Great. Tom Peters himself joined the fray, as outraged Leadership consultants huffed and puffed. Some showed real Leadership by simply hurling abuse, accompanied by GIFs (showing outrage or dismissal), doing the very opposite of everything they claim to admire in all of this Leadership piffle. What I learnt was that there is no room for scepticism or critical thinking in the cult of Leadership – you must bow to the God of Leadership or be rejected as an apostate.
To be fair, Tom Peters retweeted the critical piece and my replies, so hats off to him on that front, but his responses bordered on the bizarre.

So far, so good. But I wasn’t blaming him and Collins for the crash. What I was actually saying is that the cult of Leadership, sustained, as Jeffrey Pfeffer showed, by a hyperactive business-book publishing and training industry, produced a tsunami of badly researched books full of simplistic bromides and silver bullets, exaggerating the role of Leaders and falsely claiming to have found the secrets of success. This, I argued, having personally seen it happen at RBS and other organisations, eventually led us up the garden path to the dung-heap that was the financial disaster of 2008, led by financial CEOs who were greedy, rapacious and clearly incompetent. They had been fattened on a diet of Leadership BS and led us to near-global financial disaster.

Hold on, Tom, I wasn’t saying you two were singularly responsible. I was making a much wider point about the exponential growth in publishing and training around Leadership, like Pfeffer in his book Leadership BS, showing that it had, arguably, led to near disaster.
As the cult of Leadership took hold, I knew that something was awry when I dared criticise IBM at the Masie conference. Elliot was chairing a session on enterprise training software and I pointed out that IBM had sold such a system to Hitler. In 1939, the CEO of IBM, Thomas Watson, flew across the Atlantic to meet Hitler. The meeting resulted in the Nazis leasing the mechanical equivalent of a Learning Management System (LMS). Data was stored as holes in punch cards to record details of people, including their skills, race and sexual inclination, and was used daily throughout the 12-year Reich. It was a vital piece of apparatus in the Final Solution, used to execute the very categories stored on the apparently innocent cards – Jews, Gypsies, the disabled and homosexuals – as documented in the book IBM and the Holocaust by Edwin Black. The cards were also used to organise slave labour and trains to the concentration camps. Elliot went apeshit at me. Why? IBM were sponsors of his conference. Lesson: this is about money, not morals.
I remember seeing Jack Welch, of GE, pop up time and time again at US conferences, talking about how it was necessary to get rid of 10% of your workforce a year, along with a whole host of so-called gurus who claimed to have found the Leadership gold at the end of the rainbow. There was just one problem: the evidence suggested that the CEOs of very large, successful companies turned out not to be the Leaders the theory said they were. Indeed, the CEOs of financial institutions turned out to be incompetent managers, driven by the hubris around Leadership, who drove their companies and the world’s financial system to the edge of catastrophe. Bailed out by the taxpayer, they showed little remorse and kept on taking the bonuses, mis-selling and behaving badly.
In the 90s and the whole post-2000 period we then saw the deification of the tech Leaders – Gates, Jobs, Dell, Zuckerberg, Musk and Bezos. ‘Who wants to be a billionaire?’ became the aspiration of a new generation, who also lapped up the biographies and Leadership books, this time with a ‘start-up’ spin. Yet they too proved to be all too keen on tax evasion, greed, share buybacks and a general disregard for the public good. Steve Jobs was a rather hideous individual – but no matter, the hopelessly utopian Leadership books kept on coming.
Jump to 2018 and Trump. How on earth did that happen? Oh, and before we in the EU get on our high horses, Italy did the same with Berlusconi, and Andrej Babis, a billionaire businessman, became Prime Minister of the Czech Republic in 2017. But back to Trump. He’s riding high in the polls, but let’s look at how he became President. First came the whole ‘I’m a successful business Leader’ shtick that gave him a platform on The Apprentice, then a campaign on the premise that he, as someone better suited to the role than traditional politicians, was the real deal, ‘the deal maker’. He even had his own sacred ‘Leadership’ text – The Art of the Deal. The polling is interesting: his supporters don’t care about his racism and sexism; what they admire is his ability to get things done. They have elected not a President but a CEO. This is the apotheosis of the cult of business leadership, the American Dream, reframed in terms of Leadership BS, turned into a nightmare.
And on it went, our Leadership guru descending into sarcasm and abuse. This is exactly what I have been writing about for the last ten years: the hubris around Leadership. Is this what Leadership is really about – going off in a hissy fit when you are challenged? It rather confirms what I have always thought – that this Leadership movement is actually a Ponzi scheme: write a book, talk at conferences, make a pile of cash… lead nothing but seminars, then take absolutely no responsibility when your data turns out to be wrong or the consequences are shown to be disastrous.
We have fetishised the word 'Leader'. Everyone is obsessed with doing Leadership training and reading fourth-rate paperbacks on this dubious subject. You're a leader, I'm a leader, we're all leaders now – rendering the very meaning of the word useless. What do you do for a living? I’m a ‘leader’. Cue laughter and ridicule. Have you ever heard anyone in an organisation say, ‘We need to ask our Leader’? Only if it was sneering sarcasm. The term was invented by people who sell management training, to fool us all into thinking that it's a noble calling, but it is all a bit phoney and exaggerated, and it often leads to dysfunctional behaviour.
As James Gupta said on this thread, “Leader, innovator… yes, they are legit and important roles, but if you have to call yourself one, you probably ain’t”. Then came even wiser words from a guy called Dick: “If your job title is a concept then maybe it’s not a real job.”
In the end Peters blocked me – even though I never followed him??!


Saturday, August 25, 2018

Why these best selling books on 'Leadership' got it disastrously wrong

I have two groaning shelves of business books. I used to read a lot of them – until I realised that most weren’t actually helping me at all. With hindsight, they tend to have three things in common: anecdotes, analogies and being hopelessly utopian. Even those that say they rely on data often get it all wrong. Worse still, some of these early best-sellers not only got things badly wrong, they created the cult of 'Leadership'.
Good To Great

Take the lauded Good To Great by Jim Collins. It claimed to be revolutionary, as it was based on oodles of research and real data. Collins claimed to have had 21 researchers working for five years, selecting 6,000 articles, with over 2,000 pages of interviews. Out of this data came a list of stellar companies – Abbott Labs, Circuit City, Fannie Mae, Gillette, Kimberly-Clark, Kroger, Nucor, Philip Morris, Pitney Bowes, Walgreens, Wells Fargo. The subtitle of the book was Why Some Companies Make the Leap... and Others Don't. Unfortunately, some leapt off a cliff, many underperformed, and the net result was that they were largely false dawns.
Fannie Mae came close to collapse in 2008, having failed to spot the risks on its $6 trillion mortgage book, and had to be bailed out. Senior staff, including two CEOs, were found to have taken out illegal loans from the company, as well as making contributions to politicians sitting on the committees regulating their industry. It didn’t stop there: since 2011 it has been embroiled in kickback charges, as well as securities fraud and a swarm of lawsuits. It could be described as the most deluded, fraudulent and badly run company in the US, led by incompetent, greedy liars and cheats.
Circuit City went bankrupt in 2009, having made some truly disastrous decisions at the top: dropping its large and successful big-appliance business, a stupid exclusive deal with Verizon that stopped it selling other brands of phone, terrible acquisitions, and a series of chops and changes that led to a rapid decline. It was unique in failing to capitalise on the growing technology markets, making decisions that were almost wholly wrong-headed. Its leaders took it down.
Wells Fargo has been plagued by controversy. The list of wrongdoing is depressingly long: money laundering, gouging overdraft payers, fines for mortgage misconduct, fines for inadequate risk disclosures, being sued over loan underwriting, fines for breaking credit card laws, massive accounting frauds, insider trading, even racketeering and accusations of excessive pay. This is, quite simply, one of the most corrupt and rapacious companies on the planet, led by greedy, fraudulent fools.
I could go on, but let’s summarise by saying that nine of the other stocks chosen by Collins have had lacklustre performance, regarded by the markets as journeyman stocks. Steven Levitt called the book out, showing that the stocks have underperformed the S&P average. His point was simple: Collins cherry-picked his data. So if you see this book recommended in Leadership courses, call it out – and call the trainer out.
In Search of Excellence
In Search of Excellence by Tom Peters was another best-seller that sparked the obsession with Leadership. The case against this book is, in many ways, more serious, as BusinessWeek claimed he had ‘faked’ the data. Chapman even wrote a book called In Search of Stupidity, showing that the list of ‘excellent’ companies was actually poor to indifferent. Peters had inadvertently picked companies that were dominant in their sectors but had then become lazy and sclerotic. It was a classic example of what Gary Smith highlighted as the famous Texas sharpshooter fallacy: you shoot a bullet, then draw a target around your bullet hole and claim you hit it. Simply joining up already successful dots is not data, it’s cherry-picking. Even then, he picked the wrong cherries, most of which were rotten inside.
If anything, things have got worse. There has been a relentless flood of books on leadership that make the same mistakes time and time again. At least Collins and Peters tried to use data; many are nothing more than anecdotes and analogies. Management is frustrating, difficult and messy. There are no easy bromides, and simply stating a series of vague, abstract concepts like authenticity, trust, empathy and so on is not enough.
We have fetishised 'Leadership'. You're a leader, I'm a leader, we're all leaders now – rendering the very meaning of the word useless. What do you do for a living? I’m a ‘leader’. Cue laughter and ridicule. Have you ever heard anyone in an organisation say, ‘We need to ask our Leader’? Only if it was sneering sarcasm. It was invented by people who sell management training to fool us all into thinking that it's a noble calling, but is it all a bit phoney and exaggerated, and does it lead to dysfunctional behaviour?
Conclusion
Back in the day, before ‘Leadership’ became a ‘thing’ in business, the cult was fuelled by these key books. They went beyond normal best-seller status to cult status. Everyone bought them and read them. These seminal, and actually fraudulent, books were the foundation stones of an industry that led to the financial crisis of 2008, which nearly took down the world’s entire financial system. We’re still in the shadow of that disaster, and many of the world’s current ills can be traced to that event – decades of austerity and increasing inequality. The trait that both the authors of these books and the so-called leaders we've ended up with – in politics, sport, entertainment and business – lack is integrity. The tragic end-point of this cult of Leadership is Trump, with his Art of the Deal. Worse still, corporate training is still in thrall to this nonsense, with ‘Leadership’ courses that pay homage to the utopian idea that there are silver-bullet solutions to the messy world that is management.


Friday, August 24, 2018

Drones - why they really do matter....

Drones are an underestimated technology. As they whizz about, quietly revolutionising film-making, photography, archaeology, agriculture, surveying, project management, wildlife conservation and the delivery of goods, food, post, even medicines into disaster zones, we will be seeing a lot more of them.
I’ll be in Kigali, Rwanda next month chairing an event at E-learning Africa on drones, as their use in Africa clearly benefits from the ‘leapfrog’ phenomenon – the idea that technology sometimes gains from being deployed where there is little or no existing service or technology. Rwanda and other African countries are already experimenting with drones in everything from agriculture to medical supply delivery.
Drones and AI
What makes them more interesting is the intelligence they now embody. First, their manoeuvrability. A friend of mine is a helicopter pilot and he rightly describes a helicopter as a complex set of moving parts, every one of which wants to kill you. A drone, however, has sophisticated algorithms that maintain stability, can be set off on a mission and can return to the spot it left from at the press of a button. But it is the autonomy of drones that is really interesting. Navigation and movement are aided by image recognition of the ground and other objects, to avoid collisions. Even foldable consumer drones now have anti-collision sensors on all sides and zoom lenses. They are the self-driving cars of the air.
MIT is even using VR environments to allow drones to learn to fly without expensive crashes: they literally fly in an empty room filled with VR-created obstacles and landscapes. Drones can not only learn to fly using AI, they can use AI for many other tasks, taking different forms of AI into the sky – image recognition, route calculation and so on.
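As a toy illustration of the route-calculation side only – real drone planners work in three dimensions with dynamics, wind and sensor input, so this is nothing more than a sketch – here is a grid-based breadth-first search that routes around known no-fly cells:

# Toy sketch: shortest route on a 2D grid with no-fly cells, via breadth-first search.
from collections import deque

def plan_route(grid, start, goal):
    """Return a list of (row, col) cells from start to goal avoiding 1-cells, or None."""
    rows, cols = len(grid), len(grid[0])
    queue, came_from = deque([start]), {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:                      # reconstruct the path by walking back
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in came_from:
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # no safe route found

if __name__ == "__main__":
    area = [[0, 0, 0, 0],
            [0, 1, 1, 0],   # 1 = no-fly cell (obstacle)
            [0, 0, 0, 0]]
    print(plan_route(area, (0, 0), (2, 3)))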
Drone abuse
But it is image recognition that also enables surveillance. A $200 drone can hover, shoot video of a crowd and use AI to identify potential and actual violent poses, such as strangling, punching, kicking, shooting and stabbing. These are early systems, but their use and abuse by police forces and/or authoritarian regimes is a cause for worry.
Google recently gave in to pressure from its own employees not to use its AI (TensorFlow) in Project Maven – image recognition from military drones. And let’s not forget that the drone industry is, at present, largely part of the military-industrial complex. The IoBT (Internet of Battlefield Things) is a thing. The military are already envisaging battles between autonomous battle objects – drones and other autonomous vehicles and robot soldiers.

On the delivery side, drones are a pretty effective drug mule into prisons. This has become a real problem, turning jails into markets for drugs where the prices are ten times higher. And for a truly petrifying view of drones carrying payloads in warfare, watch Slaughterbots. With use comes abuse.
But what makes them interesting in the developing world? 
There are two main uses of drones:
1. Imaging
2. Delivery
Although some other niche uses are being developed, such as delivering internet access and window cleaning, the main uses are as an eye in the sky or for dropping stuff off.
Action shots of skiing, surfing, mountain biking, climbing and many other sports have changed the whole way we see these events. So common are drone shots that we barely notice the bird’s-eye view. Action shots that used to require expensive helicopters are now much easier, and you can do things with drones that no one would have dared do in a helicopter. Even for ground action shots, a drone can add speed and sensation. In an interesting turn of events, the Game of Thrones producers have had the problem of too many drones: in addition to their own drones for shots, they’ve had to contend with snooper drones trying to get previews. But let’s turn to real life…
Already used in crop spraying, drones have other obvious imaging applications, showing irrigation, soil, crop yield and pest infestations. Drone imaging, with its spectral range, can see things at a scale the normal eye cannot. This should help to increase yields and efficiency. The global market for agricultural drones is expected, in one report, to reach $3.69 billion by 2022.
Animals close to extinction often live in areas too large to keep them safe from poachers, so drones are being used to track and look out for them. Tourist companies are using drone footage to promote safari holidays. In general, environmental care is being helped by the ability to track what is happening through cheap drone tech.
Large building projects and mines are being managed with the help of drones that can survey and plan, then track vehicles, actual progress and the build. It’s like having a project-management overview of the whole site whenever it is needed, with accurate real-time data. Once built, drones are also used to inspect roofs and even sell properties.
Collision tolerant drones are being used, not in the open air, but in confined spaces, such as tunnels, to inspect plant and pipes. They are small enough to get to places that are too tight or dangerous for humans.
Delivery drones
Amazon, Google, DHL, FedEx and dozens of other companies have been experimenting with drone delivery. All sorts of issues have to be overcome for drone delivery to become feasible, including reliability, safety, security, theft, cost, training, laws and regulations. But there seems to be an inevitability about this, especially if drones become cheap and reliable. That reliability depends very much on AI, in terms of flight, location and actual delivery.
In healthcare, medicines, vaccines, blood and samples can all be delivered by drone. Defibrillators with live feeds telling individuals nearby how to operate them have been prototyped in the Netherlands and the US. A company in the US has already delivered FAA-approved water, food and first aid kits in Nevada. Switzerland is creating a drone network for medical deliveries, and Zipline, in Rwanda, has partnered with the government to deliver blood and other medical supplies to 21 facilities. The benefits in terms of speed, cost, accurate delivery and lives saved are enormous. Sudden, unexpected disasters need fast, location-specific drop-offs of medical supplies, food and water.
The delivery of ordered items is already being trialled with pizzas, books and everything else that is relatively small, light and can be dropped off at a specific location. The Burrito Bomber, Tacocopter and Domicopter deliver fast food. IBM even have a patent for delivering coffee by drone.
Delivering the post by drone makes sense and trials have been done in Australia, Switzerland, Germany, Singapore and Ukraine. Again speed, reliability and cost are the appealing factors.
Conclusion
Tech can be used and abused, and drones are the perfect example: already killing machines, they also have the potential to save lives. The good news is that, in Africa, the attention is on the latter.
