Thursday, October 04, 2018

Blockchain looking more and more like a ball and chain?

I was at a conference of CEOs and analysts yesterday and saw presentation after presentation on Blockchain. Then, when I chatted to those in the audience, I found that not a single one:
  • Knew what it was
  • Actually used it
  • Could think of uses for it
In over 30 years in the tech industry, I’ve never known a concept so opaque that caused so much fuss. More worrying was the complete lack of awareness about its:
  • energy consumption
  • storage needs
  • bandwidth needs
  • origins in crypto-currency
I gave a talk on Blockchain in learning two years ago in Berlin, where I was pretty upbeat, outlining its structure and possible uses in learning around trusted ledgers for qualifications and badges. I even got married using Bitcoin on Blockchain! Yet I’ve fallen out of love with this technology. I’ve still to see a single implementation in learning that is worth the candle. In truth, education and training does not want to be decentralised, democratised or disintermediated, as almost everyone in the field works in an institution that will protect itself to the death. Put payment aside, as one could well imagine this happening once the reduction in transaction costs makes commercial sense; beyond that, Blockchain looks increasingly irrelevant in the learning world.
Despite all the diagrams and explanations about blocks, hashes and distributed databases, Blockchain remains an opaque term, even, I suspect, to those shovelling out grants and research money. It is not easy to grasp and needs some level of technical knowledge to unravel. I remember Jeff Staes, standing with his back to the audience in Berlin, screaming ‘What the fuck IS Blockchain?’. His point was that if we don’t know what it is why are we so sure about its uses in education and training?
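For those puzzling over the blocks and hashes, the core idea is actually small: each block stores the hash of the previous block, so any tampering with history breaks the chain. A minimal sketch in Python (a toy chain, not any production implementation):

```python
import hashlib
import json

def make_block(index, data, prev_hash):
    """A block records its data plus the hash of the previous block,
    so altering any earlier block invalidates every hash after it."""
    block = {"index": index, "data": data, "prev_hash": prev_hash}
    payload = json.dumps(
        {k: block[k] for k in ("index", "data", "prev_hash")},
        sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def verify(chain):
    """Recompute each hash and check the links between blocks."""
    for i, block in enumerate(chain):
        payload = json.dumps(
            {k: block[k] for k in ("index", "data", "prev_hash")},
            sort_keys=True).encode()
        if block["hash"] != hashlib.sha256(payload).hexdigest():
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = make_block(0, "genesis", "0" * 64)
chain = [genesis, make_block(1, "Alice pays Bob", genesis["hash"])]
assert verify(chain)

chain[0]["data"] = "tampered"   # rewriting history breaks verification
assert not verify(chain)
```

That tamper-evidence is the whole trick; everything else (consensus, mining, replication) is machinery built around it.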
Solution looking for problem
Software fails when there is no market demand, and people start looking for problems that are already solved by simpler and cheaper solutions. Many of the e-portfolio and qualifications problems are quite simply satisfied by existing systems, where centralised storage and security are already in place, especially in these times of data regulation (more of that later). Locking these problems down in a complex and opaque distributed database is starting to seem like overkill.
Badges are one of those ideas that I wish had worked but haven’t. Like Blockchain itself, the badges idea has run its course and demand has faltered. Lacking credibility, objectivity, transferability and motivational pull, they simply hit institutional resistance. So badges and other forms of micro-credentialism are, sadly, no longer a platform on which Blockchain solutions can flourish.
Data regulation
Decentralised databases make sense but there are serious problems in terms of data regulations. Data has, historically, been almost universally stored centrally in databases. On public blockchains, however, this data is massively replicated across the entire distributed database. One solution is to cleave off the trusted hardware for the management and privacy of transactions. Management and privacy are big issues here and I’m not convinced that those recommending Blockchain have solutions that will escape the massive potential fines that are enshrined in EU law. This is a legal minefield that may well block Blockchain projects, especially in the learning world. It will be interesting to see how this plays out. For the moment, I’d stay well clear, as the risks are too high.
Energy consumption
Bitcoin, which uses blockchain, has been estimated to use more electricity than the whole of Ireland, and that consumption is rising rapidly. The huge number of hash calculations across the distributed database gobbles up energy. As long as there is a margin in mining bitcoin, this energy consumption will continue. One bitcoin mining facility in Russia was shut down because it defaulted on paying its vast electricity bill. In these days of sustainable growth and climate change, Blockchain has a BIG problem, and it’s getting bigger.
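The energy cost comes from proof-of-work: miners brute-force a nonce until a block’s hash meets a difficulty target. A toy illustration of why the hashing adds up (real Bitcoin mining uses double SHA-256 at astronomically higher difficulty):

```python
import hashlib

def mine(block_data, difficulty):
    """Brute-force a nonce until the hash starts with `difficulty`
    hex zeros. Each extra zero multiplies the expected number of
    hash attempts by 16 - the root of mining's energy appetite."""
    nonce = 0
    target = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = mine("example block", 4)   # ~65,000 hashes on average
assert digest.startswith("0000")
```

Scale that loop up to the global network’s difficulty and the Ireland comparison stops sounding like hyperbole.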
Huge storage
Some see Blockchain as the new internet, in the sense that it will be the new form of co-operative cloud storage. But that is still a pipe dream. First, we need bigger pipes and second, way more storage. Remember that storage prices have fallen dramatically but Blockchain is a famously bloated system and still costs money to run. 
You may imagine that Blockchain is a simple, unitary piece of technology. It is not. The community is full of road-map wars and disagreements between factions. It will be some time before all of this is stable enough for real implementations.
Trust a problem
For a system that promises a trustworthy ledger, Blockchain is built on the premise of ‘trust’. Yet its origins in Bitcoin and crypto-currency may well be its undoing. Few would deny that Bitcoin, in particular, is soaked in money laundering and other forms of criminal activity. Being secure and unhackable is a technical issue; gaining the trust of institutions and consumers is a psychological one. Blockchain may never cross that Rubicon.
A chain is no stronger than its weakest link, and although Blockchain is a distributed, unhackable chain, its weak links lie outwith that technical chain – in terms of opacity, complexity, regulation, energy consumption, bandwidth needs, storage needs and trust. Software fails when it is just too damn complex to get your head around and the consequences have not been thought through in terms of regulation and other demands on bandwidth, processing power and storage. There is a chance that Blockchain may sneak in as the underlying, secure technology for the entire internet, with individual private keys, but it has to overcome the problems above and overturn the existing system. This, I think, is unlikely. Blockchain is starting to look less like a great solution and more like a ball and chain.


Saturday, September 22, 2018

Learning Designers will have to adapt or die. Here’s 10 ways they need to adapt to AI….

Interactive Designers will have to adapt or die. As AI starts to play a major part in the online learning landscape, right across the learning journey, it is now being used for learner engagement, learner support, content creation, assessment and so on. It will eat relentlessly into the traditional skills that have been in play for nearly 35 years. The old, core skillset was writing, media production, interactions and assessment.
In one company, in which I’m a Director, we see a shift towards AI services and products and we’re having to identify individuals with the skills and attitudes to deal with this new demand. This means understanding the new technology (not trivial), learning how to write for chatbots and dealing more with AI-aided design and curation, rather than doing this for themselves. It’s a radical shift.
In another context, using services like WildFire, means not using traditional interactive designers, as the software largely does this job. It identifies the learning points, automatically creates the interactions, finds the curated links and assesses, formatively and summatively. It creates content in minutes not months. This is the way online learning is going. This stuff is here, now.
The gear-shift in skills is interesting and, although still uncertain, here are some suggestions based on my concrete experience of making and observing this shift in three separate companies.
1. Technical understanding
Designers, or whatever they’re called now or in the future, will need to know far more about what the software does, its functionality, strengths and weaknesses. In some large projects we have found that a knowledge of how the NLP works has been an invaluable skill, along with an ability to troubleshoot by diagnosing what the software can, or cannot do. Those with some technical understanding fare better here.
This is not to say that you need to be able to code or have AI or data science skills. It does mean that you will have to know, in detail, how the software works. If it uses semantic techniques, make the effort to understand the approach, along with its weaknesses and strengths. With chatbots, it is all too easy to set too high an expectation of performance. You will need to know where these lines are in terms of what you have to do as a designer. Similarly with data analysis. With traditional online learning, the software largely delivers static pages with no real semantic understanding, adaptability or intelligence. AI-created content is very different and has a sort of ‘life of its own’, especially when it uses machine learning. At the very least, get to know what the major areas of AI are and how they work, and feel comfortable with the vocabulary.
2. Writing
Text remains the core medium in online learning. It remains the core medium in online activity generally. We have seen the pendulum swing towards video, graphics and audio, but text will remain a strong medium, as we read faster than we listen, and it is editable and searchable. That's why much social media and messaging is still text at heart. When I ran a large traditional online learning company, I regarded writing as the key skill for IDs. We put people through literacy tests before they started, no matter what qualifications they had. It proved to be a good predictor, as writing is not just about turn of phrase and style; it is really about communication, purpose, order, logic and structure. I was never a fan of ‘storytelling’ as an identifiable skill.
However, the sort of writing one has to do in the new world of AI has more to do with being sensitive to what NLP (Natural Language Processing) can do, and with writing dialogue. To write for chatbots one must really know what the technology can and cannot do, and also write natural dialogue (actually a rare skill). That’s why the US tech giants hire screenwriters for these tasks. You may also find yourself writing for ‘voice’. For example, WildFire automatically produces podcast audio using text-to-speech, and that needs to be written in a certain way. Beyond this, coping with synonyms and the vagaries of natural language processing needs an understanding of all sorts of NLP software techniques.
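To make the synonym problem concrete, here is a toy sketch (not WildFire’s actual method) of accepting a learner’s typed answer against an expected word, tolerating listed synonyms and minor typos; the synonym table and threshold are invented for illustration:

```python
from difflib import SequenceMatcher

# Hypothetical synonym table - in practice this might come from
# a thesaurus API or word embeddings rather than a hand-built dict.
SYNONYMS = {
    "remember": {"remember", "recall", "retrieve"},
}

def matches(expected, answer, threshold=0.85):
    """Accept the answer if it is close enough (fuzzy match) to the
    expected word or any of its listed synonyms."""
    answer = answer.strip().lower()
    for candidate in SYNONYMS.get(expected, {expected}):
        if SequenceMatcher(None, candidate, answer).ratio() >= threshold:
            return True
    return False

assert matches("remember", "Recall")
assert matches("remember", "retreive")   # typo is still accepted
assert not matches("remember", "forget")
```

Even this crude version shows why the designer needs to understand the matching technique: the threshold decides how forgiving the course feels.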
3. Interaction
Hopefully we will see a reduction in formulaic Multiple-Choice Question production. MCQs are difficult to write and often flawed. Then there’s the often gratuitous ‘drag and drop’ and the hideously patronising ‘Let’s see what Philip, Alisha and Sue think of this…’, where you click on a face and get a speech bubble of text. I find that this is the area where most online learning really sucks.
This, I think, will be an area of huge change, as the limited forms of MCQ start to be replaced by open input: words, numbers and short text answers. NLP allows us to interpret this text. We do all three in WildFire with little interactive design (only editing out which ones we want). There is also voice interaction to consider, which we have been implementing, so that the entire learning experience, all navigation and interaction, is voice-driven. This needs some extra skills in terms of managing expectations and dealing with the vagaries of speech recognition software. Personalisation may also have to be considered. I'm an investor and Director in one of the world's most sophisticated adaptive learning companies, CogBooks; believe me, this software is sophisticated, and the sequencing has to be handled by software, not designers – that's what makes personalisation at scale possible. With chatbots, where we've been designing everything from invisible LMS bots to tutorbots, the whole form of interaction changes and you need to see how they fit into workflow through existing collaborative tools such as Slack or Microsoft Teams. There are a lot of opportunities out there.
4. Media production
As online learning became trapped in ‘media production’, most of the effort and budget went into the production of graphics (often illustrative and not meaningfully instructive), animation (often overworked) and video (not enough in itself). Media rich is not necessarily mind rich, and the research from Mayer and others, showing that excessive use of media can inhibit learning, is often ignored. We will see this change as the balance shifts towards effortful and more efficient learning. There will still be a need for good media production, but it will lessen as AI can produce text from audio, and create text and dialogue. Video is never enough in learning and needs to be supplemented by other forms of active learning. AI can do this, making video an even stronger medium. Curation strategies are also important. We often produce content that is already there, but AI helps automatically link to content or provides tools for curating content. Lastly, a word on design thinking. The danger is in seeing every learning experience as a uniquely designed thing, to be subjected to an expensive design thinking process, when design can be embodied in good interface design, with A/B testing, avoiding the trap of seeing learning as all about look and feel. Design matters, but learning matters more.
5. Assessment
So many online learning courses have a fairly arbitrary 70-80% pass threshold. The assessments are rarely the result of any serious thought about the actual level of competence needed, and if you don’t assess the other 20-30%, it may, in healthcare, for example, kill someone. There are many ways in which assessment will be aided by AI, in terms of the push towards 100% competence, adaptive assessment, digital identification and so on. This will be a feature of more adaptive, AI-driven content.
6. Data skills
SCORM is looking like an increasingly stupid limit on online learning. To be honest, it was from its inception – I was there. Completion is useful but rarely enough. It is important to supplement SCORM with far more detailed data on user behaviours. But even when data is plentiful, it needs to be turned into information, visualised to make it useful. Knowing how to visualise data is one useful skill. Information then has to be turned into knowledge and insights. This is where skills are often lacking. First you have to know the many different types of data in learning and how data sets are cleaned, then the techniques used to extract useful insights, often machine learning. You need to distinguish between data as the new oil and data as the new snake oil.
We take data, clean it, process it, then look for insights – clusters and other statistical techniques to find patterns and correlations. For example, do course completions correlate with an increase in sales in those retail outlets that complete the training? Training can then be seen as part of a business process where AI not only creates the learning but does the analysis, all in a virtuous loop that informs and improves the business. It is not that you require deep data science skills, but you need to become aware of the possibilities of data production, the danger of GIGO (garbage in, garbage out) and the techniques used in this area.
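The completions-versus-sales check described above can be sketched as a plain Pearson correlation; the per-outlet figures here are invented purely for illustration:

```python
from math import sqrt

completion_rate = [0.20, 0.45, 0.60, 0.80, 0.95]   # per retail outlet
sales_growth    = [0.01, 0.03, 0.02, 0.06, 0.07]   # year-on-year

def pearson(xs, ys):
    """Pearson correlation coefficient, computed by hand:
    covariance of the two series divided by the product of
    their standard deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(completion_rate, sales_growth)
print(f"correlation: {r:.2f}")   # a strong positive correlation here
```

Correlation is not causation, of course, which is exactly why the insight-extraction step needs human judgement as well as software.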
7. User testing
In one major project we produced so much content, so quickly, that the clients had trouble keeping up on quality control at their end. You will find that the QA process is very different, with quick access to the actual content, allowing for immediate testing. In fact, AI tends to produce fewer mistakes, in my experience, as there is less human input, always a source of spelling, punctuation and other errors. I used to ask graphic artists to always cut and paste text, as it was a source of endless QA problems. The advantage of using AI-generated content is that all sides can screen-share to solve residual problems on the actual content seen by the learner. We completed one large project without a single face-to-face meeting. This quick production also opens up the possibility of A/B testing with real learners; we have seen such testing used with gamification content, with surprising results.
8. Learning theory
In my experience, few interactive designers can name many researchers or identify key pieces of research on, let's say, the optimal number of options in an MCQ (answer at foot of this article), retrieval practice, length of video, effects of redundancy, spaced-practice theory, even the rudiments of how memory works (episodic v semantic). This is elementary stuff, but it is rarely taken seriously.
With the implementation of AI, the AI has to embody good pedagogic practice. This is interesting, as we can build good, well-researched, learning practice into the software. This is what we have been doing in WildFire, where effortful learning, open input, retrieval and spaced practice are baked into the software. Hopefully, this will drive online learning away from long-winded projects that take months to complete, towards production that takes minutes not months and learning experiences that focus on learning not appearance.
9. Communications
Communication with AI developers and data scientists is a challenge. They know a lot about the software but often little about learning and its goals. On the other hand, designers know a lot about communication, learning and goals. Agile techniques, with a shared whiteboard, are useful. There are formal agile techniques around identifying the user story, extracting features, then coming to agreed tasks. Comms are tougher in this world, so learn to be forgiving.
Then there’s communication with the client and SMEs. This can be particularly difficult, as some of the output is AI generated, and as AI is not remotely human (not conscious or cognitive) it can produce mistakes. You learn to deal with this when you work in this field – overfitting, false positives and so on. But this is often not easy for clients to understand, as they will be used to design documents, scripts and traditional QA techniques. I once had AI automatically produce a link for the word ‘blow’, a technique nurses ask young patients to use when they’re handling sharps or needles. The AI linked to the Wikipedia page for ‘blow’ – which was about cocaine – easily remedied, but odd.
We have also worked to reduce iterations with SMEs, the cause of much of the high cost of online learning. If the AI is identifying learning points and curated content, using already approved documents, PPTs and videos, the need for SME input is lessened. As tools like WildFire produce content very quickly, the clients and SME can test and approve the actual content, not from scripts but in the form of the learning experience itself. This saves a ton of time.
10. Make the leap
AI is here. Few argue that it will change the very nature of employment and therefore it will change what you learn, how you learn and even why you learn. We are, at last, emerging from a 30-year paradigm of media production and multiple-choice questions, in largely flat and unintelligent learning experiences, towards smart, intelligent online learning that behaves more like a good teacher, where you are taught as an individual with a personalised experience, challenged and, rather than endlessly choosing from lists, engage in effortful learning, using dialogue, even voice. As a Learning Designer, Interactive Designer, Project Manager, Producer, whatever, this is the most exciting thing to have happened in the last 30 years of learning.
Most of the Interactive Designers I have known, worked with and hired over the last 30-plus years have been skilled people, sensitive to the needs of learners, but we must always be willing to 'learn', for that is our vocation. To stop learning is to do learning a disservice. So make the leap!


Monday, September 17, 2018

Breakthrough that literally opens up online learning? Using AI for free text input

When teachers ask learners whether they know something, they rarely ask them multiple-choice questions. Yet the MCQ remains the staple in online learning, even at the level of scenario-based learning. Open input remains rare. Yet there is ample evidence that it is superior in terms of retention and recall. Imagine allowing learners to type, in their own words, what they think they know about something, while AI does the job of interpreting that input.
Open input
We’ve developed different levels of more natural open input that take online learning forward. The first involves using AI to identify the main concepts and getting learners to enter the text/numbers, rather than choosing from a list. The cognitive advantage is that the learner focuses on recalling the idea into their own mind, an act that has been shown to increase retention and recall. There is even evidence that this type of retrieval has a stronger learning effect than the original act of being taught the concept. These concepts then act as ’cues’ on which learners hang their learning, for recall. We know this works well.
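The cued form of open input can be pictured with a toy cloze generator (a simplification, not the actual WildFire pipeline): key concepts are blanked out so the learner must retrieve and type them, rather than recognise them from a list:

```python
def make_cloze(sentence, concepts):
    """Blank out each listed concept in the sentence, returning the
    cloze question and the answers the learner must retrieve."""
    question = sentence
    answers = []
    for concept in concepts:
        if concept in question:
            question = question.replace(concept, "_" * len(concept))
            answers.append(concept)
    return question, answers

sentence = "Active retrieval strengthens long-term memory."
question, answers = make_cloze(sentence, ["retrieval", "memory"])
print(question)   # Active _________ strengthens long-term ______.
assert answers == ["retrieval", "memory"]
```

In a real system the concepts would be identified by the AI rather than supplied by hand, but the learner-facing result is the same: a blank that forces recall.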
Free text input
But let’s take this a stage further and try more general open input. The learner reads a number of related concepts in text, with graphics, even watching video, and has to type in a longer piece of text, in their own words. This we have also done. This longer form of open input allows the learner to rephrase and generate their thoughts, and the AI software analyses this text.
Ideally, one takes the learner through three levels:
1. Read text/interpret graphics/watch video
2. AI generated open-input with cues
3. AI generated open-input of fuller freeform text in your own words
This gives us a learning gradient in terms of increasing levels of difficulty and retrieval. You move from exposure and reflection, to guided effortful retrieval, to full, unaided retrieval. Our approach increases the efficacy of learning in terms of speed of learning, better retrieval and better recall, all generated and executed by AI.
The process of interpretation on the generated text, in your own words, copes with synonyms, words close in meaning and different sentence constructions, as it uses the very latest form of AI. It also uses the more formal data from the structured learning. We have also got this working by voice only input, another breakthrough in learning, as it is a more natural form of expression in practice. 
The opportunities for chatbots are also immense. 
If you work in corporate learning and want to know more, please contact us at WildFire and we can show you this in action.
Evidence for this approach
Much advice and most practice from educational institutions – re-reading, highlighting and underlining – is wasteful. In fact, these traditional techniques can be dangerous, as they give the illusion of mastery. Indeed, learners who use reading and re-reading show overconfidence in their mastery, compared to learners who take advantage of effortful learning.
Yet significant progress has been made in cognitive science research to identify more potent strategies for learning. The first strategy, mentioned as far back as Aristotle, Francis Bacon then William James, is ‘effortful’ learning. It is what the learner does that matters. 
Simply reading, listening or watching, even repeating these experiences, is not enough. The learning is in the doing. The learner must be pushed to make the effort to retrieve their learning to make it stick in long-term memory.
Active retrieval
 ‘Active retrieval’ is the most powerful learning strategy, even more powerful than the original learning experience. The first solid research on retrieval was by Gates (1917), who tested children aged 8-16 on short biographies. Some simply re-read the material several times; others were told to look up and silently recite what they had read. The latter, who actively retrieved knowledge, showed better recall. Spitzer (1939) made over 3,000 11-12 year olds read 600-word articles, then tested students at periods over two months. The greater the gap between testing (retrieval) and the original exposure or test, the greater the forgetting. The tests themselves seemed to halt forgetting. Tulving (1967) took this further with lists of 36 words, with repeated testing and retrieval. The retrieval led to as much learning as the original act of studying. This shifted the focus away from testing as just assessment to testing as retrieval, an act of learning in itself. Roediger et al. (2011) did a study on text material covering Egypt, Mesopotamia, India and China, in the context of real classes in a real school, a Middle School in Columbia, Illinois. Retrieval tests, only a few minutes long, produced a full grade-level increase on the material that had been subject to retrieval. McDaniel (2011) did a further study on science subjects, with 16 year olds, on genetics, evolution and anatomy. Students who used retrieval quizzes scored 92% (A-) compared to 79% for those who did not. More than this, the effect of retrieval lasted longer, when the students were tested eight months later. So we design learning as a retrieval experience, largely using open input, where you have to pull things from your memory and make a real effort to type in the missing words, given their context in a sentence.
Open input
Most online learning relies heavily on multiple-choice questions, which have become the staple of much e-learning content. These have been shown to be effective, as almost any type of test item is effective to a degree, but they are less effective than open response, as they test recognition from a list, not whether something is actually known.
Duchastel and Nungester (1982) found that multiple-choice tests improve your performance on recognition in subsequent multiple-choice tests, and open input improves performance on recall from memory. This is called the ‘test practice effect’. Kang et al. (2007) showed that, with 48 undergraduates reading academic journal-quality material, open input is superior to multiple-choice (recognition) tasks. Multiple-choice testing had an effect similar to that of re-reading, whereas open input resulted in more effective student learning. McDaniel et al. (2007) repeated this experiment in a real course with 35 students enrolled in a web-based Brain and Behavior course at the University of New Mexico. The open-input quizzes produced more robust benefits than multiple-choice quizzes. ‘Desirable difficulties’ is a concept coined by Elizabeth and Robert Bjork to describe the desirability of creating learning experiences that trigger effort, deeper processing, encoding and retrieval, to enhance learning. The Bjorks have researched this phenomenon in detail to show that effortful retrieval and recall is desirable in learning, as it is the effort taken in retrieval that reinforces and consolidates that learning. A multiple-choice question is a test of recognition from a list. It does not elicit full recall from memory. Studies comparing multiple-choice with open retrieval show that when more effort is demanded of students, they have better retention. As open response takes cognitive effort, the very act of recalling knowledge also reinforces that knowledge in memory. The act of active recall develops and strengthens memory. It improves the process of recall in ways that passive recall – reading, listening and watching – does not.
Design implications
Meaning matters and so we rely primarily on reading and open response, where meaningful recall is stimulated. This act alone, even when you don’t know the answer, is a strong reinforcer, stronger indeed, than the original exposure. Interestingly, even when the answer is not known, the act of trying to answer is also a powerful form of learning. 
So the choice of open-response questions, where the user types in the words, then more substantial open input, is a deliberate design strategy to take advantage of known AI and learning techniques to increase recall and retention. Note that no learner is subjected to the undesirable difficulty of getting stuck, as letters are revealed one by one, and the answer is given after three attempts. Hints are also possible in the system.
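The letter-by-letter reveal described above amounts to a simple hint function; a minimal sketch, assuming one extra letter is revealed per failed attempt:

```python
def hint(answer, attempts):
    """Reveal one leading letter per failed attempt; after three
    attempts, show the full answer so no learner stays stuck."""
    if attempts >= 3:
        return answer
    revealed = min(attempts, len(answer))
    return answer[:revealed] + "_" * (len(answer) - revealed)

assert hint("neuron", 0) == "______"
assert hint("neuron", 1) == "n_____"
assert hint("neuron", 2) == "ne____"
assert hint("neuron", 3) == "neuron"
```

The point of the design is to keep the difficulty desirable: effortful retrieval without the frustration of a dead end.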
Bjork, R. A. (1994). Memory and metamemory considerations in the training of human beings. In J. Metcalfe & A. Shimamura (Eds.), Metacognition: Knowing about knowing (pp. 185-205). Cambridge, MA: MIT Press.
Bower, G. H. (1972). Mental imagery in associative learning. In L. W. Gregg (Ed.), Cognition in learning and memory. New York: Wiley.
Gardiner, J. M. (1988). Generation and priming effects in word-fragment completion. Journal of Experimental Psychology: Learning, Memory, and Cognition, 14, 495-501.
Butler, A. C., & Roediger, H. L. (2008). Feedback enhances the positive effects and reduces the negative effects of multiple-choice testing. Memory & Cognition, 36, 604-616.
Duchastel, P. C., & Nungester, R. J. (1982). Testing effects measured with alternate test forms. Journal of Educational Research, 75, 309-313.
Gates, A. I. (1917). Recitation as a factor in memorizing. Archives of Psychology, No. 40, 1-104.
Hirshman, E. L., & Bjork, R. A. (1988). The generation effect: Support for a two-factor theory. Journal of Experimental Psychology: Learning, Memory, & Cognition, 14, 484-494.
Jacoby, L. L. (1978). On interpreting the effects of repetition: Solving a problem versus remembering a solution. Journal of Verbal Learning and Verbal Behavior, 17, 649-667.
Kang, S. H. K., McDermott, K. B., & Roediger, H. L., III. (2007). Test format and corrective feedback modulate the effect of testing on long-term retention. European Journal of Cognitive Psychology, 19, 528-558.
McDaniel, M. A., Einstein, G. O., Dunay, P. K., & Cobb, R. (1986). Encoding difficulty and memory: Toward a unifying theory. Journal of Memory and Language, 25, 645-656.
McDaniel, M. A., Agarwal, P. K., Huelser, B. J., McDermott, K. B., & Roediger, H. L. (2011). Test-enhanced learning in a middle school science classroom: The effects of quiz frequency and placement. Journal of Educational Psychology, 103, 399-414.
Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63, 81-97.
Richland, L. E., Bjork, R. A., Finley, J. R., & Linn, M. C. (2005). Linking cognitive science to education: Generation and interleaving effects. In B. G. Bara, L. Barsalou, & M. Bucciarelli (Eds.), Proceedings of the twenty-seventh annual conference of the Cognitive Science Society. Mahwah, NJ: Erlbaum.
Roediger, H. L., Agarwal, P. K., McDaniel, M. A., & McDermott, K. B. (2011). Test-enhanced learning in the classroom: Long-term improvements from quizzing. Journal of Experimental Psychology: Applied, 17, 382-395.
Spitzer, H. F. (1939). Studies in retention. Journal of Educational Psychology, 30, 641-656.
Tulving, E. (1967). The effects of presentation and recall of material in free-recall learning. Journal of Verbal Learning and Verbal Behavior, 6, 175-184.


Wednesday, September 12, 2018

Simple researched design feature would save organisations and learners a huge amount of money and time – yet hardly anyone does it

Multiple-choice questions are everywhere, from simple retrieval tests to high-stakes exams. They normally (not always) contain FOUR options: one right answer and three distractors. But who decided this? Is it more convention than researched practice?
Rodriguez (2005) studied 27 research papers on multiple-choice questions and found that the optimal number of options is THREE, not four. Vyas (2008) showed the same was true in medical education. But God is in the detail, and their findings surfaced some interesting phenomena:
  • Four/five options increase the effort the learner has to make, but this does not increase the reliability of the test.
  • Four options increase the effort needed to write questions and, as distractors are the most difficult part of a test item to identify and write, they were often very weak.
  • Reducing the number of options from four to three (surprisingly) increased the reliability of test scores.
  • Tests were shorter, leaving more time for teaching.
Next step
Of course, multiple-choice is, in itself, weaker than open input, which is why we can go one step further and have open response, either single words or short answers. Natural Language Processing allows AI not only to create such questions automatically but also to score student answers accurately. AI can create both of these question types (also MCQs if desired) automatically, saving organisations time and money. This is surely the way forward in online learning. Beyond this is voiced input, again a step forward, and AI has also allowed this type of input in online learning. If you are interested in learning more about this, see WildFire.
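To make the idea concrete, here is a minimal, illustrative sketch of open-input question generation and scoring. This is not WildFire's actual pipeline: the "longest non-stopword" heuristic stands in for real NLP keyword extraction, and the exact-match scorer stands in for proper semantic scoring.

```python
import re

STOPWORDS = {"the", "a", "an", "is", "are", "of", "to", "in", "and", "for", "on", "with"}

def make_open_question(sentence, term=None):
    """Turn a sentence into an open-input question by blanking a key term.
    If no term is given, pick the longest non-stopword as a crude
    stand-in for real NLP keyword extraction."""
    words = re.findall(r"[A-Za-z']+", sentence)
    candidates = [w for w in words if w.lower() not in STOPWORDS]
    answer = term or max(candidates, key=len)
    question = re.sub(r"\b" + re.escape(answer) + r"\b", "_____", sentence)
    return question, answer

def score_response(response, answer):
    """Accept case-insensitive exact matches; a real system would use
    stemming or semantic similarity here."""
    return response.strip().lower() == answer.lower()

q, a = make_open_question("The hippocampus consolidates episodic memory.")
# q is the sentence with the key term blanked out; a is the expected answer
```

Even this toy version shows why open input is cheap to automate: the question, the answer and the scoring all fall out of the source text itself, with no distractors to write.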
So, you can safely reduce the number of MCQ options from five or four to THREE and not reduce the reliability of the tests. Indeed, there is evidence that it improves reliability. Not only that, it saves organisations, teachers and learners time.
Rodriguez, M. C. (2005). Three options are optimal for multiple-choice items: A meta-analysis of 80 years of research. Educational Measurement: Issues and Practice, 24(2), 3-13.
Vyas, R. (2008). Medical education. The National Medical Journal of India, 21(3).


Friday, September 07, 2018

Chatbots a gamechanger in learning? The BIG debate at LPI

This was the debate motion at the LPI conference in London. I was FOR the motion (Henry Stewart was AGAINST) and let me explain why.
1. AI is a gamechanger
AI will change the very nature of work. It may even change what it is to be human. This is a technological revolution as big as the internet and will therefore change what we learn, why we learn and how we learn. The top seven companies by market cap all have AI as a core strategy: Apple, Alphabet, Microsoft, Amazon, Tencent, Facebook and Alibaba. AI is a strategic concern for every sector and every business, even learning.
2. Evidence from consumers
Several radical shifts in consumer online behaviour move us towards chatbots. First, the entry of voice-activated bots into the home, connected to the IoT (Internet of Things) – Amazon Alexa and Google Home. Second, the rise of 'voice' as a natural form of communicating with technology – Siri, Cortana and other similar services. Over 10% of all search is now by voice. Third, the switch from social media to message/chat apps – chat overtook social media in late 2014 and the gap is growing; chat is the home screen for most young people.
3. Pedagogy in chatbots
Most teaching is through dialogue. The Socratic method may have been undermined by blackboards and their successors through to PowerPoint, but voice and dialogue are making a comeback. Speaking and listening through dialogue is our most natural interface. We’ve evolved these capabilities over 2 million years or so. It’s natural and we’re grammatical geniuses aged 3, without having to be taught to speak and listen. Within dialogue lies lots of pedagogically strong learning techniques; retrieval, elaboration, questions, answers, follow ups, examples and so on. It just feels more natural.
4. Evidence in learning
An exit poll taken by Donald Taylor at the Learning Technologies conference this year showed personalised learning at No 1 and AI at No 3. The interest is clearly strong and there are lots of real projects being delivered to real clients by WildFire, Learning Pool and so on.
5. Chatbots across the learning journey
There are now real chatbot applications at points across the entire learning journey. I showed actual chatbot applications in learning in the following areas:
   Onboarding bots
   Learner engagement bots
   Learner support bots
   Invisible LMS bots
   Mentor bots
   Practice bots
   Assessment bots
   Wellbeing bots
If you want to know more about these actual projects, I'd be glad to help.
6. They’re learners
An important feature of modern chatbots, compared say to ELIZA from the 1960s, is the fact that they now learn. This matters as the more you train and use them, the better they get. We used to have just human teachers and learners, we now have technology that is both a teacher and learner.
7. It’s started
Technology is always ahead of the sociology, which is always ahead of learning and development. Yet in these many projects, even with relatively primitive technology, we see an emerging trend – chatbot-delivered learning. In time, this will happen. Resistance is futile.
Nigel Paine chaired the debate with his usual panache and teased questions out of the audience and the real debate ensued. The questions were rather good.
Q Has AI passed the Turing test?
First, there are many versions of the Turing test but the evidence from the many chatbots on social media all the way to Google Duplex, shows that it has been passed. Not for long, sustained and very detailed dialogue, but certainly within limited domains. Google Duplex showed that we’re getting there on sustained dialogue and the next generation of Amazon’s Alexa and Google Home will have memory, context and personalisation in their chatbot software. It will come in time.
Q AI can never match the human brain
This is true but not always the point. We didn't learn to fly by copying the wings of a bird – we invented new technology, the airplane. We didn't go faster by looking at the legs of a cheetah – we invented the wheel. The human brain is actually a rather fragile entity. It takes 20 years and more of training to make it even remotely useful in the workplace; it is inattentive, easily overloaded, has fallible memory, forgets most of what it tries to learn, has tons of biases (we're all racist and sexist), can't download, can't network, and we die. But it is true that it is rather good at general things. This is why chatbots are best targeted at specific uses and domains, such as the eight species of chatbot I demonstrated.
Q Chatbots v people
Michelle Parry-Slater made a good point about chatbots not replacing people but working alongside people. This is important. Chatbots may replace some functions and roles but few suppose that all people will be eliminated by chatbots. We have to see them as being part of the landscape.
Q Chatbots need to capture pedagogy
Good question from Martin Couzins. Chatbots have to embody good pedagogy and already do. Whether it's models of engagement, support, learning objectives, invisible LMS, practice, assessment or wellbeing, the whole point is to use both the interface and back-end functionality (an important area for pedagogic capture) to deliver powerful learning based on evidence-based theory, such as retrieval, effortful learning, spaced practice and so on. This will improve rather than diminish or ignore pedagogy. In all of the examples I showed, pedagogy was first and foremost.
Q Will L and D skills have to change
Indeed. I have been training interactive designers in chatbot and AI skills, as this is already in demand. The days of simply producing media assets and multiple-choice questions are coming to a close – thankfully.
Oh, and we won the debate by some margin, with a significant number changing their minds from sceptics to believers along the way! But that doesn't really matter, as it was a self-selecting audience – they came, I'd imagine, because they were curious and had some affinity with the idea that chatbots have a role. My view is that these debates are good at conferences – by starting with a polarised position, the audience can move and shift around in the middle. The audience in this session were excellent, with great questions, as you've seen above. Note to conference organisers – we need more of this – it energises debate and audience participation.


Tuesday, September 04, 2018

First randomised-controlled trial of an employee "Wellness Programme" suggests they are a waste of money. Oh dear…

Jones et al (2018), in their study What Do Workplace Wellness Programmes Do?, took 12,000 employees and randomly assigned them into groups, but found no "significant causal effects of treatment on total medical expenditures, health behaviors, employee productivity, or self-reported health status in the first year".
This is a huge business, around $8 billion in the US alone. Yet it is largely based on articles of faith, not research. This is a welcome study, as it gets rid of the self-selecting nature of the audiences so prevalent in other studies on well-being, which renders them largely useless as the basis of recommendations. 
Do they reduce sickness? No, they don't. Do they result in staying in your job, getting a promotion or a pay rise? No, they don't. Did they reduce medication or hospital visits? No, they didn't. This was true for almost every one of the 37 measures mentioned. The bottom line is that there is no bottom line – no return on investment.
The interesting conclusion by the authors of the study is that wellness programmes, far from helping the intended audience (the obese, smokers etc.), simply select for those who are already healthy, while the burden of cost is borne by all.


Bullshit Jobs - Futurist, Thought Leader, Leader... if you call yourself this, you're most probably not

If you call yourself a 'Futurist', 'Thought Leader' even 'Leader', you're most probably not one. I keep coming across people at conferences and on social media that have these titles, yet have shallow theoretical and practical competences.
I've lost count of the presentations I've seen that are merely anecdotes and examples culled from the internet, with some general nonsense about how 65% of all primary school kids are being taught for jobs that don't yet exist, or some other quote from Einstein that, on inspection, he never said.
Let me give you some real examples. Two years ago, I went to speak at DevLearn in Las Vegas. Now, one wants a keynote speaker to provide new, insightful thinking, but a guy called David Pogue did a second-rate Jim Carrey act. His 'look at these wacky things on the internet' shtick was a predictable routine. Kids can play the recorder on their iPhone! No, they don't. Only a 50-year-old who bills himself as a 'futurist' thinks that kids take this stuff seriously.
At Devlearn, we also got a guy called Adam Savage. I had never heard of him, but he’s a famous TV presenter in the US who hosts a show called Mythbusters. He spent an hour trying to claim that art and science were really the same thing, as both were really (and here comes his big insight) – storytelling. The problem is that the hapless Adam knew nothing about science or art. It was trite, reductionist and banal. Then there was the speaker on workplace learning, at OEB last year, who used the totally made up “65% of kids… jobs that don’t exist” line.
My own view is that these conferences do need outsiders who can talk knowledgeably about learning and not just about observing their kids or delivering a thinly disguised autobiography. I want some real relevance. I’ve begun to tire of ‘futurists’ – they all seem to be relics from the past. 
Bullshit Jobs - the book
This is where David Graeber comes in. He's written a rather fine book, called Bullshit Jobs, which identifies five types of jobs that he regards as bullshit. Graeber's right: many people do jobs that, if they disappeared tomorrow, would make no difference to the world and might even make things simpler, more efficient and better. As a follow-up to the book, YouGov did a poll and found that 37% thought their jobs did not contribute meaningfully to the world. I find that both astonishing and, in my experience, worryingly true. So what are those bullshit jobs?
Box tickers
Some of Graeber's jobs largely orbit around the concept of self-worth. Graeber identifies box tickers as one huge growth area in bullshit jobs. We know what this means in most organisations: those who deliver over-engineered and almost immediately forgotten compliance training, mostly about protecting the organisation from its own employees or satisfying some mythical insurance risk. It keeps them busy but also prevents others from getting on with their jobs. They forget almost all of it anyway.
It also includes all of those jobs created around abstract concepts such as diversity, equality or some other abstract threat. The job titles are a dead giveaway: Chief Imagination Officer, Life Coach… any title with future, life, innovation, leadership, creative, liaison, strategist, ideation, facilitator, diversity, equality and so on. All of this pimping of job titles, along with fancy new business cards, is a futile exercise in self and organisational deception. It keeps non-productive people in non-productive jobs. Who hasn't come across the pointless bureaucracy of organisations? From the process of signing in at reception, to getting wifi, and all sorts of other administrative baloney. But that is nothing compared to the mindless touting of mindfulness, NLP courses and other fanciful and faddish nonsense that HR peddles in organisations. Then there's a layer of pretend measurement with useless Ponzi-scheme tools such as Myers-Briggs, unconscious bias courses, emotional intelligence, 21stC skills and Kirkpatrick.
A second Graeber category is taskmasters, and he specifically targets middle-management jobs and leadership professionals. Who doesn't find themselves, at some point in the week, doing something they know is pointless, instructed by someone whose job suggests pointless activity? The bullshit job boom has exploded in this area, with endless folk wanting to tell you that you're a 'Leader'. You all need 'Leadership training', apparently, as everyone's a Leader these days, rendering the word completely meaningless. Stanford's Jeffrey Pfeffer nails this in his book Leadership BS.
All of this comes at a cost. We have systematically downgraded jobs where people do real things – plumbers, carpenters, carers, teachers, nurses and every other vocational occupation – paying them peanuts, while funding the rise of the robots. And I don't mean technology; I mean purveyors of bullshit, all of those worthy middle-class jobs that pay people over the odds for being outraged on behalf of others. Leadership training has replaced good old-fashioned management training, abstractions replacing competences. Going to 'Uni' has become the only option for youngsters, often creating the expectation that they will go straight into bullshit jobs, managing others who do most of the useful work.
I disagree with Graeber's hypothesis that capitalism, and its engine the Protestant work ethic, leads to keeping people busy just for the sake of being busy – business as busyness. I well remember the team leader in a summer job I had saying to me, 'Listen, just look busy… just pretend to be busy'. I felt like saying, 'You're the boss, you pretend I'm busy'. Most of these bullshit jobs arise out of fear: the fear of being seen not to be progressive, the fear of regulation and litigation, the fear of not doing what everyone else is doing, with a heavy dose of groupthink.
We keep churning out these hopeless jobs in the hope that they will make the workplace more human, but all they do is dehumanise the workplace. They turn it into a place of quiet resentment and ridicule.


Saturday, September 01, 2018

Hearables are hear to stay in learning - podcasts, learning, language learning, tutoring, spaced-practice and cheating in exams!

Hearables are wearable devices that use your sense of hearing – smart headphones, if you wish – that are becoming popular with the rise of voice as a significant form of interaction. With voice rising dramatically as a means of search, Amazon Echo/Google Home in millions of homes, real-time translation, and earbuds for music and voice on mobiles, we are increasingly using hearing as the sense of choice for communications.
Apple's removal of the audio jack and launch of its wireless AirPods was a landmark in the shift towards hearables, but other devices are also available or in development. Some rely on your smartphone; others, such as the Vinci, are independent, with local computing power and storage. You can even get them designed for your own ears through 3D printing.
Voice in learning
The advantages of audio in learning stem from the simple fact that voice is primary in language. We are all auditory grammatical geniuses, able to listen and speak by the age of three, without instruction, whereas reading and writing take years of instruction and practice. It is a more natural form of communication, more human. It also leaves your imagination free to create or generate your own thoughts and interpretations. This pared-back input, arguably, allows deeper processing and learning, as it requires attention, focus and effortful learning. Most of our communication is through dialogue and hearing, not print, and most teaching takes place through hearing and dialogue.
So, there are several ways hearables could be used in learning:
1. Radio
Radio predates TV and modern media for learning. It remains a popular form of communication, as it is undemanding and leaves you hands-free (while making breakfast, driving the car and so on). It also has a long history in learning, in Australia and other regions where distances are huge and resources low. The straight delivery of radio via hearables is the baseline.
2. Podcasts
Podcasts have also become a popular medium, especially for learning. They appeal to the learner who wants to focus on hearing experts, often interviewing other experts, on specific topics. As they are downloadable, they provide audio on demand, when you have the time to listen, often in those periods when you can focus, listen and learn.
3. Online learning
We have been using voice to deliver online learning in WildFire, with zero typing, as all navigation and input is by voice. It is a fascinating experience and feels more like natural teaching and learning, compared to using a keyboard.
4. Language learning
Language learning is an obvious application, where listening and comprehension can be delivered to your personal needs, with appropriate levels of feedback, even voice and pronunciation recognition.
5. Translation
Translation in real time is already available through Google's Pixel Buds. The advantages, in terms of convenience but also language learning, have huge potential. It must surely be worth exploring the benefits for novice language learners of hearing translation and playback in the real world, where immersion and interaction with native speakers matter.
6. Tutoring
Have you ever had to call someone for help on how to fix something or get technical help on your computer? As a form of quick tutoring, voice is useful as you are hands free to try things, while you have access to experts anywhere on the planet.
7. Health
Hearables that deliver notifications on heart rate, oxygen saturation, calories burned, distance, speed and duration are available and as they can be used during exercise, may prove popular. This health learning loop also has potential to modify behavior, from diabetes to obesity.
8. Lifelong learning
As one gets older, reading, typing and other forms of interaction become more difficult. Hearables provide an easier and more convenient form of interaction, especially when combined with ‘reminders’ for those with memory problems.
9. Spaced practice
Audio could be used as a spaced-practice tool, pushed to you at intervals personalised to your needs and your own forgetting curve, namely more at the start then levelling out over time.
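The scheduling logic described here can be sketched in a few lines. This is a deliberately simple expanding-gap model, not any particular product's algorithm; the starting gap, multiplier and reset rule are illustrative assumptions.

```python
from datetime import date, timedelta

def review_schedule(start, base_gap=1, factor=2.0, reviews=5):
    """Generate review dates with expanding gaps: frequent at first,
    levelling out over time, roughly tracking a forgetting curve."""
    gap, when, out = base_gap, start, []
    for _ in range(reviews):
        when = when + timedelta(days=round(gap))
        out.append(when)
        gap *= factor
    return out

def adjust(gap, recalled, factor=2.0):
    """Personalise crudely: a successful recall widens the next gap,
    a failure resets it to one day."""
    return gap * factor if recalled else 1

sched = review_schedule(date(2018, 9, 1))
# reviews land at gaps of 1, 2, 4, 8 and 16 days after the start date
```

A real system would replace `adjust` with a learner-specific model of the forgetting curve, but the shape of the schedule, dense early and sparse later, is the essence of spaced practice.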
10. Exam cheating
Lastly, although undesirable, there have already been many reported instances of exam cheating using hearables. Ebay is awash with cheating devices such as micro-earpieces, Bluetooth pens and so on. In some ways this shows how powerful such devices can be for the timely delivery of learning!
Hearables are becoming part of the consumer technology landscape and in terms of learning, will have an impact. Different devices have different affordances but there is no doubt that hearing is a sense of choice for many people. Hearables, therefore, are hear to stay.


Thursday, August 30, 2018

Research shows Good Behaviour Game is constructivist nonsense

The Good Behaviour Game was touted for years by social constructivists as yet another silver bullet for classroom behaviour. Yet a large, well-funded trial across 77 schools with 3,084 pupils, at a cost of £4,000 per school, has shown that it's a dud. The researchers (EEF) showed that it was a waste of time and money.
Of course, it had that word 'Game' in its brand, and 'gamification' being de rigueur gave it momentum, along with some outrageous claims about its efficacy. Being a non-interventionist approach (teachers were not allowed to interfere), it also played to the Ken Robinson/Rousseau myth that if we only let children be themselves, they will thrive. It also had that vital component, the social group, where children were expected to use and pick up those vital 21st-century skills, such as collaboration, communication and teamwork. So its premises – 1) gamification, 2) natural development and 3) social learning – were found wanting.
Its creators claim that it is underpinned by theory that emerged in the 1960s: 'life course' and 'social field theory'. Life course theory is right out of the social constructivist playbook, specifically codified in Constructing the Life Course by Gubrium and Holstein (2000) – the idea that one should ignore specific measures and instead implement practice and evaluate it holistically at the social level. Social field theory is another constructivist theory, taken from sociology, that looks at social actors, how they construct social fields, and how they are affected by those fields.
Claims for GBG's efficacy were nothing if not bold: improving behaviour, reducing mental health problems, crime, violence, anti-social behaviour, even substance abuse. Each game took 10-45 minutes and was supposed to result in better social behaviour; the game teams were balanced for gender and temperament. In truth, it was almost wholly a waste of time. The EEF summary is worth quoting in full:
“EEF Summary
Behaviour is a major concern for both teachers and students. EEF funded this project because GBG is an established programme, and previous evidence suggests it can improve behaviour, and may have a longer-term impact on attainment.
This trial found no evidence that GBG improves pupils’ reading skills or their behaviour (concentration, disruptive behaviour and pro-social behaviour) on average. There was also no effect found on teacher outcomes such as stress and teacher retention. However, there was some tentative evidence that boys at-risk of developing conduct problems showed improvements in behaviour. 
Most classes in the trial played the game less often and for shorter time periods than recommended, and a quarter of schools stopped before the end of the trial. However, classes who followed the programme closely did not get better results.
GBG is strictly manualised and this raised some challenges. In particular, some teachers felt the rule that they should not interact with students during the game was difficult for students with additional needs, and while some found that students got used to the challenge and thrived, others found the removal of their support counter-productive. The EEF will continue to look for effective programmes which support classroom management.”
Pretty conclusive results, and further reason for my long-held belief that the orthodoxy of social constructivism needs to be challenged before it causes even more damage in teacher training and our schools.
More importantly, it skewers the whole idea that children are naturally self-regulating and that all teachers and parents have to do is create the right social environment and they will progress.
It's all too easy to think that real learning is taking place in collaborative groups, ignoring the research on social loafing and the possibility that the weakest learners may suffer badly from this sort of unguided collaboration, when all that's happening is slow, inefficient, illusory learning. This trial showed that this was indeed the case, with weaker students floundering. Even at the level of actual teacher practice, the approach failed, with both teachers and pupils growing weary, sessions getting shorter and shorter, and many just giving up.
In evidence-based education, negative results are just as important as positive results, as they can stop wasted time and effort in the classroom. I'd say this was conclusive and stops some of the crazier constructivist practice in its tracks. It is in line with the negative results around whole-word theory, the last destructive theory that took root in education before being dismantled by evidence.


Wednesday, August 29, 2018

Wikipedia’s bot army - shows us the way in governance of AI

Wikipedia is, for me, the digital wonder of the world. A free, user generated repository of knowledge, open to edits, across many languages, increasing in both breadth and depth. It is truly astonishing. But it has recently become a victim of its own success. As it scaled, it became difficult to manage. Human editorial processes have not been able to cope with the sheer number of additions, deletions, vandalism, rights violations, resizing of graphics, dead links, updating lists, blocking proxies, syntax fixing, tagging and so on. 
So would it surprise you to learn that an army of bots is, as we sleep, working on all of these tasks and many more? It surprised me.
There are nearly 3000 bot tasks identified for use in Wikipedia. So many that there is a Bots Approval Group (BAG) with a Bot Policy that covers all of these, whether fully or partially automated, helping humans with editorial tasks. 
The policy rules are interesting. Your bot must be harmless and useful, must not consume resources unnecessarily, must perform only tasks for which there is consensus, must carefully adhere to relevant policies and guidelines, and must use informative, appropriately worded messages in any edit summaries or messages left for users.
So far so good but the danger is that some bots malfunction and cause chaos. This is why their bot governance is strict and strong. What is fascinating here, is the glimpse we have into the future of online entities, where large amounts of dynamic data have to be protected, while being allowed to be used for human good. The Open Educational Resources people don’t like to mention Wikipedia. It is far too populist for their liking but it remains the largest, most useful knowledge base we’ve ever seen. So what can we learn from Wikipedia and bots?
AI and Wikipedia
Wikipedia, as a computer-based system, is way superior to humans and even print, as it has perfect recall, unlimited storage and 24/7 performance. On the other hand, it hits ceilings, such as the ability of human editors to handle the traffic. This is where well-defined tasks can be automated, as previously mentioned. It is exactly how AI is best used: solving very specific, well-defined, repetitive tasks that occur 24/7 at scale. This leaves the editors free to do their job. Note that these bots are not machine learning AI; they are pieces of software that filter and execute tasks. But the lessons for AI are clear.
At WildFire, we use AI to select related content to supplement learning experiences. This is a worthy aim, and there is no real editorial problem, as it is still entirely under human control – we can check, edit and change any problems. Let me give you an example. Our system automatically creates links to Wikipedia but, as AI is not conscious or cognitive in any sense, it makes the occasional mistake. So in a medical programme, where the nurse had to ask the young patient to 'blow' while a lancet was being used to puncture his skin repeatedly in an allergy test, the AI automatically created a link to the page for cocaine. Ooops! Easily edited out, but you get the idea. In the vast majority of cases it is accurate. You just need a QA system that catches the false positives.
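That QA loop can be sketched as a simple filter. Everything here is illustrative – the lookup table, the blocklist and the function names are made up, not WildFire's actual system – but it shows the principle: automate the easy cases, route the risky ones to a human.

```python
def propose_links(terms, lookup, blocklist, review_queue):
    """Auto-link terms to article titles, but route anything on a
    blocklist (e.g. previously flagged false positives) to a human
    review queue instead of publishing it directly."""
    published = {}
    for term in terms:
        title = lookup.get(term.lower())
        if title is None:
            continue  # no candidate article; skip
        if title in blocklist:
            review_queue.append((term, title))  # a human decides
        else:
            published[term] = title
    return published

# Toy data echoing the allergy-test example above
lookup = {"blow": "Cocaine", "lancet": "Lancet", "allergy": "Allergy"}
blocklist = {"Cocaine"}
queue = []
links = propose_links(["blow", "lancet", "allergy"], lookup, blocklist, queue)
```

The blocklist grows as editors flag mistakes, so the automated pass gets safer over time while humans stay in control – the same division of labour Wikipedia's bot governance formalises.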
Wikipedia has to handle this sort of ambiguity all the time. This is not easy for software. The Winograd Schema Challenge offers $25,000 for software that can handle its awkward sentences with 90% accuracy – the nearest anyone has got is 58%. Roger Schank used Groucho Marx jokes! Software and data are brittle; they don't bend, they break, which is why it still needs a ton of human checking, advising and oversight.
This is a model worth copying: governance of the use of AI (let's just call it autonomous software). Wikipedia, with its Bot Approval Group and Bot Policy, offers a good example, within an open-source context, of good governance over data. It draws the line between bots and humans but keeps humans in control.
The important lesson here is that the practitioners themselves know what has to be done. They are good people doing good things to keep the integrity of Wikipedia intact, as well as keeping it efficient. AI is like the God Shiva, it both creates and destroys. The problem with the dozens of ethics groups springing up, is that all they see is the destruction. AI can be a force for good but not if it is automatically seen as an ideological and deficit model. It seems, at times, as though there’s more folk on ethics groups than actually doing anything on AI. Wikipedia shows us the way here – a steady, realistic system of governance, that quietly does its work, while allowing the system to grow and retain its efficiencies, with humans in control.


Tuesday, August 28, 2018

How I got blocked by Tom Peters - you must bow to the cult of Leadership or be rejected as an apostate

Odd thing, this 'Leadership' business. I've been writing about it for ten years and get roughly the same reaction to every piece – outrage from those who sell their 'Leadership' services, either as consultants or trainers. In these cases, I refer to the wise words of Upton Sinclair: "It is difficult to get a man to understand something, when his salary depends upon his not understanding it!"
But a far more interesting spat ensued when I wrote a piece criticising some of the best-selling books that got this whole Leadership bandwagon rolling, namely In Search of Excellence and Good to Great. Tom Peters himself joined the fray, as outraged Leadership consultants huffed and puffed. Some showed real Leadership by simply hurling abuse, accompanied by GIFs (showing outrage or dismissal), doing the very opposite of everything they claim to admire in all of this Leadership piffle. What I learnt was that there is no room for scepticism or critical thinking in the cult of Leadership – you must bow to the God of Leadership or be rejected as an apostate.
To be fair, Tom Peters retweeted the critical piece and my replies, so hats off to him on that front but his responses bordered on the bizarre.

So far, so good. But I wasn’t blaming him and Collins for the crash. What I was actually saying is that a cult of Leadership, sustained, as Jeffrey Pfeffer showed, by a hyperactive business book publishing and training industry, produced a tsunami of badly researched books full of simplistic bromides and silver bullets, exaggerating the role of Leaders and falsely claiming to have found the secrets of success. This, I argued, as I had personally seen it happen at RBS and other organisations, eventually led us up the garden path to the dung-heap that was the financial disaster in 2008, led by financial CEOs who were greedy, rapacious and clearly incompetent. They had been fattened on a diet of Leadership BS and led us to near global, financial disaster.

Hold on Tom, I wasn’t saying you two were singularly responsible. I was making a much wider point about the exponential growth in publishing and training around Leadership, as Pfeffer does in his book Leadership BS, showing that it had, arguably, led us to near disaster.
As the cult of Leadership took hold, I knew that something was awry when I dared criticise IBM at the Masie Conference. Elliot was chairing a session on enterprise training software and I pointed out that IBM had sold such a system to Hitler. In 1939, the CEO of IBM, Thomas Watson, flew across the Atlantic to meet Hitler. The meeting resulted in the Nazis leasing the mechanical equivalent of a Learning Management System (LMS). Data was stored as holes in punch cards recording details of people, including their skills, race and sexual inclination, and was used daily throughout the 12-year Reich. It was a vital piece of apparatus in the Final Solution, used to execute the very categories stored on the apparently innocent cards - Jews, Gypsies, the disabled and homosexuals - as documented in the book IBM and the Holocaust by Edwin Black. The cards were also used to organise slave labour and the trains to the concentration camps. Elliot went apeshit at me. Why? IBM were sponsors of his conference. Lesson: this is about money, not morals.
I remember seeing Jack Welch, of GE, pop up time and time again at US conferences to talk about how it was necessary to get rid of 10% of your workforce a year, along with a whole host of so-called gurus who claimed to have found the Leadership gold at the end of the rainbow. There was just one problem. The evidence suggested that the CEOs of very large, successful companies turned out not to be the Leaders the theory said they were. Indeed, the CEOs of financial institutions turned out to be incompetent managers, driven by the hubris around Leadership, who drove their companies and the world’s financial system to the edge of catastrophe. Bailed out by the taxpayer, they showed little remorse and kept on taking the bonuses, mis-selling and behaving badly.
In the 90s and the whole post-2000 period we then saw the deification of the tech Leaders – Gates, Jobs, Dell, Zuckerberg, Musk and Bezos. ‘Who wants to be a billionaire?’ became the aspiration of a new generation, who also lapped up the biographies and Leadership books, this time with a ‘start-up’ spin. Yet they too proved to be all too keen on tax evasion, greed, share buybacks and a general disregard for the public good. Steve Jobs was a rather hideous individual - but no matter - the hopelessly utopian Leadership books kept on coming.
Jump to 2018 and Trump. How on earth did that happen? Oh, and before we in the EU get on our high horses, Italy did the same with Berlusconi, and Andrej Babis, a billionaire businessman, became Prime Minister of the Czech Republic in 2017. But back to Trump. He’s riding high in the polls, but let’s look at how he got to become President. First came the whole ‘I’m a successful business Leader’ shtick that gave him a platform on The Apprentice, then a campaign on the premise that he, ‘the deal maker’, was better suited to the role than traditional politicians. He even had his own sacred ‘Leadership’ text – The Art of the Deal. The polling is interesting – his supporters don’t care about his racism and sexism; what they admire is his ability to get things done. They have elected not a President but a CEO. This is the apotheosis of the cult of business leadership: the American Dream, reframed in terms of Leadership BS, turned into a nightmare.
And on it went, our Leadership guru descending into sarcasm and abuse. This is exactly what I have been writing about for the last ten years – the hubris around Leadership. Is this what Leadership is really about - going off in a hissy fit when you are challenged? It rather confirms what I have always thought: that this Leadership movement is actually a Ponzi scheme – write a book, talk at conferences, make a pile of cash… lead nothing but seminars, then take absolutely no responsibility when your data turns out to be wrong or the consequences are shown to be disastrous.
We have fetishised the word 'Leader'. Everyone is obsessed with Leadership training and reading fourth-rate paperbacks on this dubious subject. You're a leader, I'm a leader, we're all leaders now - rendering the very meaning of the word useless. What do you do for a living? I’m a ‘leader’. Cue laughter and ridicule. Have you ever heard anyone in an organisation say, ‘We need to ask our Leader’? Only as sneering sarcasm. The word was invented by people who sell management training, to fool us all into thinking that it's a noble calling, but it is all a bit phoney and exaggerated, and often leads to dysfunctional behaviour.
As James Gupta said on the thread, “Leader, innovator… yes, they are legit and important roles, but if you have to call yourself one, you probably ain’t” – then even wiser words from a guy called Dick: “If your job title is a concept then maybe it’s not a real job.”
In the end Peters blocked me – even though I never even followed him!