Thursday, August 30, 2018

Research shows Good Behaviour Game is constructivist nonsense

The Good Behaviour Game was touted for years by social constructivists as yet another silver bullet for classroom behaviour. Yet a large, well-funded trial across 77 schools with 3,084 pupils, at a cost of £4,000 per school, has shown that it’s a dud. The researchers (the EEF) showed, quite literally, that it was a waste of time and money.
Of course, it had that word ‘Game’ in its brand, and with ‘gamification’ being de rigueur, that gave it momentum, along with some outrageous claims about its efficacy. Being a non-interventionist approach (teachers were not allowed to interfere), it also played to the Ken Robinson/Rousseau myth that if we only let children be themselves, they will thrive. It also had that vital component, the social group, where children were expected to pick up and use those vital 21st-century skills, such as collaboration, communication and teamwork. So its premises – 1) gamification, 2) natural development and 3) the social – were found wanting.
Its creators claim that it is underpinned by theory that emerged in the 1960s: ‘life course’ and ‘social field’ theory. Life course theory is right out of the social constructivist playbook, specifically codified in Constructing the Life Course by Gubrium and Holstein (2000) – the idea that one should ignore specific measures, implement practice, and evaluate that practice holistically at the social level. Social field theory is another constructivist theory, taken from sociology, that looks at social actors, how those actors construct social fields, and how they are affected by such fields.
Claims for GBG’s efficacy were nothing if not bold: improving behaviour, reducing mental health problems, crime, violence, anti-social behaviour, even substance abuse. Each game took 10-45 minutes, the game teams were balanced for gender and temperament, and the result was supposed to be better social behaviour. In truth, it was almost wholly a waste of time. The EEF summary is worth quoting in full:
“EEF Summary
Behaviour is a major concern for both teachers and students. EEF funded this project because GBG is an established programme, and previous evidence suggests it can improve behaviour, and may have a longer-term impact on attainment.
This trial found no evidence that GBG improves pupils’ reading skills or their behaviour (concentration, disruptive behaviour and pro-social behaviour) on average. There was also no effect found on teacher outcomes such as stress and teacher retention. However, there was some tentative evidence that boys at-risk of developing conduct problems showed improvements in behaviour. 
Most classes in the trial played the game less often and for shorter time periods than recommended, and a quarter of schools stopped before the end of the trial. However, classes who followed the programme closely did not get better results.
GBG is strictly manualised and this raised some challenges. In particular, some teachers felt the rule that they should not interact with students during the game was difficult for students with additional needs, and while some found that students got used to the challenge and thrived, others found the removal of their support counter-productive. The EEF will continue to look for effective programmes which support classroom management.”
Pretty conclusive results, and further reason for my long-held belief that the orthodoxy of social constructivism needs to be challenged before it causes even more damage in teacher training and our schools.
More importantly, it skewers the whole idea that children are naturally self-regulating and that all teachers and parents have to do is create the right social environment and they will progress.
It’s all too easy to think that real learning is taking place in collaborative groups, ignoring the research on social loafing and the possibility that the weakest learners may suffer badly from this sort of non-guided collaboration, when all that’s happening is slow, inefficient, illusory learning. This trial showed that this was indeed the case, with weaker students floundering. Even at the level of actual teacher practice the approach failed, with both teachers and pupils growing wary, sessions getting shorter and shorter, and many just giving up.
In evidence-based education, negative results are just as important as positive results, as they can stop wasted time and effort in the classroom. I’d say this was conclusive, and stops some of the crazier constructivist practice in its tracks. It is in line with the negative results around whole-word theory, the last constructivist theory that took root in education and was then shown, through evidence, to be destructive.

Wednesday, August 29, 2018

Wikipedia’s bot army - shows us the way in governance of AI

Wikipedia is, for me, the digital wonder of the world. A free, user generated repository of knowledge, open to edits, across many languages, increasing in both breadth and depth. It is truly astonishing. But it has recently become a victim of its own success. As it scaled, it became difficult to manage. Human editorial processes have not been able to cope with the sheer number of additions, deletions, vandalism, rights violations, resizing of graphics, dead links, updating lists, blocking proxies, syntax fixing, tagging and so on. 
So would it surprise you to learn that an army of bots is, as we sleep, working on all of these tasks and many more? It surprised me.
There are nearly 3,000 bot tasks identified for use in Wikipedia – so many that there is a Bot Approvals Group (BAG), with a Bot Policy that covers all of them, whether fully or partially automated, helping humans with editorial tasks.
The policy rules are interesting. Your bot must be harmless and useful, must not consume resources unnecessarily, must perform only tasks for which there is consensus, must carefully adhere to relevant policies and guidelines, and must use informative, appropriately worded messages in any edit summaries or messages left for users.
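To make this concrete, here is a minimal sketch of the kind of narrow, well-defined, repetitive syntax-fixing task a bot might perform. It is purely illustrative – real Wikipedia bots are typically built on the Pywikibot framework, must pass BAG approval, and do far more careful parsing than this:

```python
import re

def fix_wikitext_syntax(text):
    """A toy 'syntax-fixing' bot task: normalise wikitext headings
    and tidy blank lines. Illustrative only, not a real bot."""
    # Ensure spaces inside heading markers: "==Title==" -> "== Title =="
    text = re.sub(
        r"^(=+)[ \t]*(.*?)[ \t]*(=+)[ \t]*$",
        lambda m: f"{m.group(1)} {m.group(2)} {m.group(3)}",
        text,
        flags=re.MULTILINE,
    )
    # Collapse runs of blank lines left behind by earlier edits
    text = re.sub(r"\n{3,}", "\n\n", text)
    return text
```

The task is deterministic and idempotent – running it twice changes nothing – which is exactly the property that makes such jobs safe to hand to software while editors do the judgement work.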
So far so good, but the danger is that some bots malfunction and cause chaos. This is why their bot governance is strict and strong. What is fascinating here is the glimpse we get into the future of online entities, where large amounts of dynamic data have to be protected while being used for human good. The Open Educational Resources people don’t like to mention Wikipedia – it is far too populist for their liking – but it remains the largest, most useful knowledge base we’ve ever seen. So what can we learn from Wikipedia and bots?
AI and Wikipedia
Wikipedia, as a computer-based system, is way superior to humans and even print, as it has perfect recall, unlimited storage and 24/7 performance. On the other hand, it hits ceilings, such as the ability of human editors to handle the traffic. This is where well-defined tasks can be automated, as previously mentioned. It is exactly how AI is best used: solving very specific, well-defined, repetitive tasks that occur 24/7 at scale. This leaves the editors free to do their job. Note that these bots are not machine learning AI – they are pieces of software that filter and execute tasks – but the lessons for AI are clear.
At WildFire, we use AI to select related content to supplement learning experiences. This is a worthy aim, and there is no real editorial problem, as it is still entirely under human control: we can check, edit and change any problems. Let me give you an example. Our system automatically creates links to Wikipedia, but as AI is not conscious or cognitive in any sense, it makes the occasional mistake. So in a medical programme, where the nurse had to ask the young patient to ‘blow’ while a lancet was being used to puncture his skin repeatedly in an allergy test, the AI automatically created a link to the page for cocaine. Oops! Easily edited out, but you get the idea. In the vast majority of cases it is accurate. You just need a QA system that catches the false positives.
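A hypothetical sketch of how such a QA loop might work (WildFire’s actual pipeline is not public, so the function names and data here are invented): a linker proposes Wikipedia links for known terms, and a human-maintained blocklist of confirmed false positives suppresses known mistakes like the ‘blow’ → cocaine link.

```python
def link_terms(text, glossary, blocklist=frozenset()):
    """Propose Wikipedia links for glossary terms found in the text,
    skipping any target page a human QA pass has flagged as a false
    positive. Illustrative sketch only."""
    proposed = {}
    lowered = text.lower()
    for term, page in glossary.items():
        if term in lowered and page not in blocklist:
            proposed[term] = f"https://en.wikipedia.org/wiki/{page}"
    return proposed

# A naive glossary maps the word 'blow' to the wrong page...
glossary = {"blow": "Cocaine", "lancet": "Blood_lancet"}
text = "The nurse asked the patient to blow while the lancet was used."

# ...so after human review, 'Cocaine' goes on the blocklist and the
# bad link is suppressed, while the good 'lancet' link survives.
links = link_terms(text, glossary, blocklist={"Cocaine"})
```

The point of the design is that the machine does the high-volume proposing and the human does the low-volume vetoing, which is the same division of labour the Wikipedia bots rely on.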
Wikipedia has to handle this sort of ambiguity all the time. This is not easy for software. The Winograd Schema Challenge offers $25,000 for software that can handle its awkward sentences with 90% accuracy – the nearest anyone has got is 58%. Roger Schank used Groucho Marx jokes! Software and data are brittle – they don’t bend, they break – which is why it all still needs a ton of human checking, advising and oversight.
This is a model worth copying: governance on the use of AI (let’s just call it autonomous software). Wikipedia, with its Bot Approvals Group and Bot Policy, offers a good example, within an open source context, of good governance over data. It draws the line between bots and humans but keeps humans in control.
The important lesson here is that the practitioners themselves know what has to be done. They are good people doing good things to keep the integrity of Wikipedia intact, as well as keeping it efficient. AI is like the god Shiva: it both creates and destroys. The problem with the dozens of ethics groups springing up is that all they see is the destruction. AI can be a force for good, but not if it is automatically seen through an ideological and deficit model. It seems, at times, as though there are more folk on ethics groups than actually doing anything on AI. Wikipedia shows us the way here – a steady, realistic system of governance that quietly does its work, while allowing the system to grow and retain its efficiencies, with humans in control.

Tuesday, August 28, 2018

How I got blocked by Tom Peters - you must bow to the cult of Leadership or be rejected as an apostate

Odd thing this ‘Leadership’ business. I’ve been writing about it for ten years and get roughly the same reaction to every piece – one of outrage from those who sell their ‘Leadership’ services, either as consultants or trainers. In these cases, I refer to the wise words of Upton Sinclair, “It is difficult to get a man to understand something, when his salary depends upon his not understanding it!”

But a far more interesting spat ensued when I criticised some of the best-selling books that got this whole Leadership bandwagon rolling, namely In Search of Excellence and Good to Great. Tom Peters himself joined the fray, as outraged Leadership consultants huffed and puffed. Some showed real Leadership by simply hurling abuse, accompanied by GIFs (showing outrage or dismissal), doing the very opposite of everything they claim to admire in all of this Leadership piffle. What I learnt was that there is no room for scepticism or critical thinking in the cult of Leadership – you must bow to the God of Leadership or be rejected as an apostate.

To be fair, Tom Peters retweeted the critical piece and my replies, so hats off to him on that front but his responses bordered on the bizarre.

So far, so good. But I wasn’t blaming him and Collins for the crash. What I was actually saying is that a cult of Leadership, sustained, as Jeffrey Pfeffer showed, by a hyperactive business-book publishing and training industry, produced a tsunami of badly researched books full of simplistic bromides and silver bullets, exaggerating the role of Leaders and falsely claiming to have found the secrets of success. This, I argued, having personally seen it happen at RBS and other organisations, eventually led us up the garden path to the dung-heap that was the financial disaster of 2008, presided over by financial CEOs who were greedy, rapacious and clearly incompetent. They had been fattened on a diet of Leadership BS and led us to near-global financial disaster.

Hold on, Tom, I wasn’t saying you two were singularly responsible. I was making a much wider point about the exponential growth in publishing and training around Leadership – like Pfeffer in his book Leadership BS – showing that it had, arguably, led to near disaster.

As the cult of Leadership took hold, I knew that something was awry when I dared criticise IBM at the Masie Conference. Elliot was chairing a session on enterprise training software and I pointed out that IBM had sold such a system to Hitler. In 1939, the CEO of IBM, Thomas Watson, flew across the Atlantic to meet Hitler. The meeting resulted in the Nazis leasing the mechanical equivalent of a Learning Management System (LMS). Data was stored as holes in punch cards recording details of people, including their skills, race and sexual inclination, and was used daily throughout the 12-year Reich. It was a vital piece of apparatus in the Final Solution, used to execute people in the very categories stored on the apparently innocent cards – Jews, Gypsies, the disabled and homosexuals – as documented in the book IBM and the Holocaust by Edwin Black. The cards were also used to organise slave labour and the trains to the concentration camps. Elliot went apeshit at me. Why? IBM were sponsors of his conference. Lesson: this is about money, not morals.

I remember seeing Jack Welch, of GE, pop up time and time again at US conferences, talking about how it was necessary to get rid of 10% of your workforce a year, along with a whole host of so-called gurus who claimed to have found the Leadership gold at the end of the rainbow. There was just one problem. The evidence suggested that the CEOs of very large, successful companies turned out not to be the Leaders their theory said they were. Indeed, the CEOs of financial institutions turned out to be incompetent managers, driven by the hubris around Leadership, who drove their companies and the world’s financial system to the edge of catastrophe. Bailed out by the taxpayer, they showed little remorse and kept on taking the bonuses, mis-selling and behaving badly.

In the 90s and the whole post-2000 period we then saw the deification of the tech Leaders – Gates, Jobs, Dell, Zuckerberg, Musk and Bezos. ‘Who wants to be a billionaire?’ became the aspiration of a new generation, who also lapped up the biographies and Leadership books, this time with a ‘start-up’ spin. Yet they too proved all too keen on tax evasion, greed, share buybacks and a general disregard for the public good. Steve Jobs was a rather hideous individual – but no matter, the hopelessly utopian Leadership books kept on coming.

Jump to 2018 and Trump. How on earth did that happen? Oh, and before we in the EU get on our high horses, Italy did the same with Berlusconi, and Andrej Babiš, a billionaire businessman, became Prime Minister of the Czech Republic in 2017. But back to Trump. He’s riding high in the polls, but let’s look at how he became President. First the whole ‘I’m a successful business Leader’ shtick that gave him a platform on The Apprentice, then a campaign on the premise that he, ‘the deal maker’, was the real deal, better suited to the role than traditional politicians. He even had his own sacred ‘Leadership’ text – The Art of the Deal. The polling is interesting – his supporters don’t care about his racism and sexism; what they admire is his ability to get things done. They have elected not a President but a CEO. This is the apotheosis of the cult of business Leadership: the American Dream, reframed in terms of Leadership BS, turned into a nightmare.

And on it went, our Leadership guru descending into sarcasm and abuse. This is exactly what I have been writing about for the last ten years – the hubris around Leadership. Is this what Leadership is really about – going off in a hissy fit when you are challenged? It sort of confirms what I have always thought: that this Leadership movement is actually a Ponzi scheme – write a book, talk at conferences, make a pile of cash, lead nothing but seminars, then take absolutely no responsibility when your data turns out to be wrong or the consequences are shown to be disastrous.

We have fetishised the word 'Leader'. Everyone is obsessed with doing Leadership training and reading fourth-rate paperbacks on this dubious subject. You're a leader, I'm a leader, we're all leaders now – rendering the very meaning of the word useless. What do you do for a living? I’m a ‘leader’. Cue laughter and ridicule. Have you ever heard anyone in an organisation say, ‘We need to ask our Leader’? Only if it was sneering sarcasm. The term was invented by people who sell management training, to fool us all into thinking that it's a noble calling, but it is all a bit phoney and exaggerated, and often leads to dysfunctional behaviour.

As James Gupta said on this thread, “Leader, innovator… yes they are legit and important roles, but if you have to call yourself one, you probably ain’t”… then even wiser words from a guy called Dick: “If your job title is a concept then maybe it’s not a real job.”

In the end Peters blocked me – even though I never followed him??!

Saturday, August 25, 2018

Why these best selling books on 'Leadership' got it disastrously wrong

I have two groaning shelves of business books. I used to read a lot of these – until I realised that most weren’t actually helping me at all. With hindsight, they tend to have three things in common: anecdotes, analogies and being hopelessly utopian. Even those that say they rely on data often get it all wrong. Worse still, some of these early best-sellers not only got things badly wrong, they created the cult of 'Leadership'.
Good To Great

Take the lauded Good to Great by Jim Collins. It claimed to be revolutionary, as it was based on oodles of research and real data. Collins claimed to have had 21 researchers working for five years, selecting 6,000 articles, with over 2,000 pages of interviews. Out of this data came a list of stellar companies – Abbott Labs, Circuit City, Fannie Mae, Gillette, Kimberly-Clark, Kroger, Nucor, Philip Morris, Pitney Bowes, Walgreens, Wells Fargo. The subtitle of the book was Why Some Companies Make the Leap... and Others Don't. Unfortunately, some leapt off a cliff, many underperformed, and the net result was that they were largely false dawns.
Fannie Mae came close to collapse in 2008, having failed to spot the risks on its $6 trillion mortgage book, and had to be bailed out. Senior staff, including two CEOs, were found to have taken out illegal loans from the company, as well as making contributions to politicians sitting on committees regulating their industry. It didn’t stop there: since 2011 it has been embroiled in kickback charges, securities fraud and a swarm of lawsuits. It could be described as the most deluded, fraudulent and badly run company in the US, led by incompetent, greedy liars and cheats.
Circuit City went bankrupt in 2009, having made some truly disastrous decisions at the top: dropping its large and successful big-appliance business, a stupid exclusive deal with Verizon that stopped it selling other brands of phones, terrible acquisitions, and a series of chops and changes that led to rapid decline. It was unique in failing to capitalise on the growing technology markets, making decisions that were almost wholly wrong-headed. Its leaders took it down.
Wells Fargo has been plagued by controversy. The list of wrongdoing is depressingly long: money laundering, gouging overdraft payers, fines for mortgage misconduct, fines for inadequate risk disclosures, lawsuits over loan underwriting, fines for breaking credit card laws, massive accounting frauds, insider trading, even racketeering and accusations of excessive pay. This is, quite simply, one of the most corrupt and rapacious companies on the planet, led by greedy, fraudulent fools.
I could go on, but let’s summarise by saying that nine of the other stocks chosen by Collins have had lacklustre performance, regarded by the markets as journeymen stocks. Steven Levitt called the book out, showing that its stocks underperformed the S&P average. His point was simple: Collins cherry-picked his data. So if you see this book recommended in Leadership courses, call it out – call the trainer out.
In Search of Excellence
In Search of Excellence by Tom Peters was another best-seller that sparked off the obsession with Leadership. The case against this book is, in many ways, more serious, as BusinessWeek claimed he had ‘faked’ the data. Chapman even wrote a book called In Search of Stupidity, showing that Peters’ list of ‘excellent’ companies were actually poor to indifferent. He had inadvertently picked companies that were dominant in their sectors but had then become lazy and sclerotic. It was a classic example of what Gary Smith highlighted as the famous Texas sharpshooter fallacy: you shoot a bullet, then draw a target around your bullet hole and claim you hit it. Simply joining up already successful dots is not data, it’s cherry-picking. Even then, he picked the wrong cherries, most of which were rotten inside.
If anything, things have got worse. There's been a relentless flood of books on leadership that make the same mistakes time and time again. At least Collins and Peters tried to use data, many are nothing more than anecdote and analogies. Management is frustrating, difficult and messy. There are no easy bromides and simply stating a series of vague abstract concepts like authenticity, trust, empathy and so on, is not enough.
We have fetishised 'Leadership'. You're a leader, I'm a leader, we're all leaders now – rendering the very meaning of the word useless. What do you do for a living? I’m a ‘leader’. Cue laughter and ridicule. Have you ever heard anyone in an organisation say, ‘We need to ask our Leader’? Only if it was sneering sarcasm. The term was invented by people who sell management training to fool us all into thinking that it's a noble calling – but is it all a bit phoney and exaggerated, and does it lead to dysfunctional behaviour?
Back in the day, before ‘Leadership’ became a ‘thing’ in business, it was fuelled by these key books. They went beyond normal best-seller status to cult status. Everyone bought them and read them. These seminal, and actually fraudulent, books were the foundation stones for an industry that led to the financial crisis of 2008, which nearly took the world’s entire financial system down. We’re still in the shadow of that disaster, and many of the world’s current ills can be traced to that event – a decade of austerity and increasing inequality. The trait that both the authors of these books and the so-called leaders we've ended up with – in politics, sport, entertainment and business – lack is integrity. The tragic end-point of this cult of Leadership is Trump, with his Art of the Deal. Worse still, corporate training is still in thrall to this nonsense, with ‘Leadership’ courses that pay homage to the utopian idea that there are silver-bullet solutions to the messy world that is management.

Friday, August 24, 2018

Drones - why they really do matter....

Drones are an underestimated technology. As they whizz about, quietly revolutionising film-making, photography, archaeology, agriculture, surveying, project management, wildlife conservation, and the delivery of goods, food, post, even medicines into disaster zones, we will be seeing a lot more of them.
This year I was in Kigali, Rwanda, chairing an event at E-Learning Africa on drones, as their use in Africa clearly benefits from the ‘leapfrog’ phenomenon – the idea that technology sometimes gains from being deployed where there is little or no existing service or technology. Rwanda and other African countries are already experimenting with drones in everything from agriculture to medical supply delivery. I also spoke on drones at the Battle of Ideas in November.
Like any technology, they are a force for good but also have a downside. Take cars: we all drive them, yet 1.3 million people die gory deaths every year in car crashes, and that figure doesn't include the injured. Almost all tech has a downside.
So it is with drones. They save lives in Rwanda by delivering blood, but are used to kill in the Middle East and to disrupt entire airports for days, as at Gatwick.
Drones and AI
What makes them interesting is the intelligence they now embody. First, their manoeuvrability. My friend is a helicopter pilot and he rightly describes a helicopter as a complex set of moving parts, every one of which wants to kill you. A drone, however, has sophisticated algorithms that maintain stability, send it off on a mission and return it to the spot it left from, at the press of a button. But it is the autonomy of drones that is really interesting. Navigation and movement are aided by image recognition of the ground and other objects, to avoid collision. Even foldable consumer drones now have anti-collision sensors on all sides and zoom lenses. They are the self-driving cars of the air.
MIT is even using VR environments to let drones learn to fly without expensive crashes. They literally fly in an empty room filled with VR-created obstacles and landscapes. Drones can not only learn to fly using AI, they can use AI for many other tasks, taking different forms of AI into the sky – image recognition, route calculation. (Think about this for a moment. Drones can fly autonomously. This makes them incredibly dangerous when used by people who want to cause chaos or do harm.)
Drone abuse
Image recognition also enables surveillance. A $200 drone can hover, shoot video of a crowd and use AI to identify potential and actual violent poses, such as strangling, punching, kicking, shooting and stabbing. These are early systems, but their use and abuse by police forces and/or authoritarian regimes is a cause for worry.
Google recently gave in to pressure from its own employees not to use its AI (TensorFlow) in Project Maven – image recognition from military drones. And let’s not forget that the drone industry is, at present, largely part of the military-industrial complex. The IBOT (Internet Battle of Things) is a thing. The military are already envisaging battles between autonomous battle objects – drones and other autonomous vehicles and robot soldiers.
On the delivery side, drones are also pretty effective drug mules into prisons. This has become a real problem, turning jails into markets for drugs, where prices are 10x higher. And for a truly petrifying view of drone payloads in warfare, watch Slaughterbots. With use comes abuse.
In the developed world?
First, it is doubtful that drones will be used to deliver anything in complex, urban environments. It is certain that flying taxi drones will not take off. On drone taxis, as Elon Musk says, we already have them – they're called helicopters. You need a big beast of a drone to carry people, and the physics of this means lots (and I mean lots) of noise – that's why they're a non-starter. The social acceptance problem is huge. However, for specific line-of-sight uses by firefighters, police and so on, the uses are clear.
Drones are not integrated into the airspace, and that airspace is getting pretty full in the developed world. Pizzas and Amazon books are not going to be delivered to your home any time soon. There are safety, regulatory and social issues to overcome. Technology is always ahead of sociology and regulation. In the case of drones, the lag is enormous.
Technically, drones can deliver things safely and never collide. However, the potential for problems is through the roof, so they are limited to 'close to pilot' uses. Think of them as flying mobile phones, as they use much the same tech – they have not come from the world of aviation. In terms of regulation, we still see drones flown over crowds (illegal) and close to roads (illegal).
So what about irresponsible pilots? Those with ill intent can interfere with drones, not only blocking signals but even fooling them into thinking they're somewhere else. The potential for interrupting normal business is huge, as is the potential for delivering harmful payloads – think Skripal, think dirty bombs. We may have spent £100 billion on Trident, but our air traffic can be brought to a standstill by the mere presence of a £200 drone.
One solution is to demand that drones have internal intelligence that keeps them safe – so they cannot go near airports or planes, or crash into crowds (finding a safe place to land if a fault occurs), and so on. Sounds good, but this is not easy. A drone, like your car or an aircraft, uses GPS. That's fine when there's a driver or pilot, but in a drone it's a weakness: GPS can be jammed, and drones can even be told they're somewhere else. You can build a jammer for tens of dollars from YouTube videos.
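As a toy illustration of what such ‘internal intelligence’ means in code, here is a minimal geofence check against a hard-coded no-fly point. This is a sketch only: real systems use signed, regularly updated no-fly databases baked into the firmware, the coordinates below are approximate, and – as just noted – a jammed or spoofed GPS fix defeats any check of this kind.

```python
import math

# Hypothetical no-fly zone: an approximate point for Gatwick Airport
# and a simple exclusion radius. Real databases hold polygons, altitude
# limits and temporary restrictions, not single points.
NO_FLY_ZONES = {"Gatwick": (51.15, -0.18)}  # (latitude, longitude)
EXCLUSION_RADIUS_KM = 5.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in km."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 6371.0 * 2 * math.asin(math.sqrt(a))

def flight_allowed(lat, lon):
    """True only if the reported position is outside every no-fly zone.
    Note: trusts the GPS fix, which is exactly the weakness above."""
    return all(haversine_km(lat, lon, zlat, zlon) > EXCLUSION_RADIUS_KM
               for zlat, zlon in NO_FLY_ZONES.values())
```

The logic is trivial; the hard part, as the paragraph above says, is that the position being checked can be lied to.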
Of course, what many don't realise is that almost all commercial airliners are, in effect, drones. They fly and land autonomously, with the pilots doing mostly monitoring. There may be a future of autonomous drones, but it's way off without failsafes.
In the developing world? 
There are two main uses of drones:
1. Imaging
2. Delivery
Although some other niche uses are being developed, such as delivering internet access and window cleaning, the main uses are as an eye in the sky or dropping stuff off.
Action shots of skiing, surfing, mountain biking, climbing and many other sports have changed the whole way we see these events. So common are drone shots that we barely notice the bird’s-eye view. Action shots that used to require expensive helicopters are now much easier, and you can do things with drones that no one would have dared do in a helicopter. Even for ground action shots, a drone can add speed and sensation. In an interesting turn of events, the Game of Thrones producers have had the problem of too many drones: in addition to their own drones for shots, they’ve had to contend with snooper drones trying to get previews. But let’s turn to real life…
Drones are already used in crop spraying, and there are other obvious imaging applications, showing irrigation, soil, crop yield and pest infestations. Drone imaging can see things at scale, and with a spectral range, that the normal eye cannot. This should help to increase yields and efficiency. The global market for agricultural drones is expected, in one report, to reach $3.69 billion by 2022.
Animals close to extinction roam areas too large for rangers to keep them safe from poachers, so drones are being used to track and look out for them. Tourist companies are using drone footage to promote safari holidays. In general, environmental care is being helped by the ability to track what is happening through cheap drone tech.
Large building projects and mines are being managed with the help of drones that can survey, plan, then track vehicles, actual progress and the build itself. It’s like having a project-management overview of the whole site whenever it is needed, with accurate, realtime data. Once a building is up, drones are also used to inspect roofs and even to help sell properties.
Collision tolerant drones are being used, not in the open air, but in confined spaces, such as tunnels, to inspect plant and pipes. They are small enough to get to places that are too tight or dangerous for humans.
Delivery drones
Amazon, Google, DHL, FedEx and dozens of other retailers have been experimenting with drone delivery. All sorts of issues have to be overcome for drone delivery to become feasible, including reliability, safety, security, theft, cost, training, laws and regulations. But there seems to be an inevitability about this, especially if drones become cheap and reliable. That reliability depends very much on AI, in terms of flight, location and actual delivery.
In healthcare, medicines, vaccines, blood and samples can all be delivered by drone. Defibrillators with live feeds telling individuals nearby how to operate them have been prototyped in the Netherlands and the US. A company in the US has already made an FAA-approved delivery of water, food and a first aid kit in Nevada. Switzerland is creating a drone network for medical delivery, and Zipline, in Rwanda, has partnered with the government to deliver blood and other medical supplies to 21 facilities. The benefits in terms of speed, cost, accurate delivery and saved lives are enormous. Sudden, unexpected disasters need fast, location-specific drop-offs of medical supplies, food and water.
The delivery of ordered items is already being trialled with pizzas, books and everything else that is relatively small, light and can be dropped off at a specific location. The Burrito Bomber, Tacocopter and Domicopter deliver fast food. IBM even have a patent for delivering coffee by drone.
Delivering the post by drone makes sense, and trials have been done in Australia, Switzerland, Germany, Singapore and Ukraine. Again, speed, reliability and cost are the appealing factors.
Tech can be used and abused. Drones are the perfect example: already killing machines, they also have the potential to save lives. The good news is that in Africa the attention is on the latter. In the developed world, safety, social and regulatory environments mean that little is possible commercially.

Tuesday, August 21, 2018

Agile production – online learning needs to get its skates on

Agile – plenty of advocates for this method, as it is quick, iterative and gets results. Yet the online learning industry could be accused of being the very opposite, with long cumbersome schedules, often over-engineered content and costs to match.
When training becomes a slow and sluggish response to business problems, it ends up out of phase with the business or, worse, out of favour. Business managers are often surprised when their request for online training will take many months, not days, at £15-25k per hour of learning – oh, and you’ll not be able to evaluate much, as the data is unlikely to tell you anything useful (we have a thing called SCORM).
If Learning and Development is to remain relevant, it has to get out of the slow lane, with its glacial production processes and dark-ages design – all presentation and little learning. The problem is that online learning has become a media production process, where most of the budget goes into graphics, video and presentation, not learning. I’m not sure about calling someone an Interactive Designer if all the interaction they come up with is multiple-choice questions, drag and drop (damn, dropped it again) or, even worse, ‘click on Dave, Ahmid or Suzie to see why they think GDPR is a great idea’ – cue speech bubble.
We have to find a way to deliver online learning in minutes, not months, and switch away from these clumsy interactions. That means meaningful interactions that require cognitive retrieval, not mere recognition. Multiple choice and drag and drop are acts of recognition, not retrieval, and clicking on a cartoon image of a person is just banal. These physical interactions (click or drag) produce low retention. If it is knowledge, learners need to recall the answer from their own brains and then type or speak it. This has been shown to increase retention and recall, as it involves effort and deeper processing.
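One practical consequence of typed, free-recall answers is that marking needs tolerant matching, otherwise trivial typos get marked wrong. Here is a minimal sketch using Python’s standard library – an illustrative approach and threshold, not any particular product’s implementation:

```python
# A minimal sketch of scoring a typed, free-recall answer with fuzzy
# matching, so small typos don't count as failures. The 0.85 threshold
# is an assumption for illustration.
from difflib import SequenceMatcher

def normalise(text: str) -> str:
    """Lower-case and strip punctuation/extra whitespace before comparing."""
    return " ".join("".join(c for c in text.lower() if c.isalnum() or c.isspace()).split())

def score_recall(typed: str, target: str, threshold: float = 0.85) -> bool:
    """Accept the answer if it is close enough to the target term."""
    return SequenceMatcher(None, normalise(typed), normalise(target)).ratio() >= threshold

print(score_recall("Negative control", "negative control"))   # True
print(score_recall("negative controll", "negative control"))  # True - small typo accepted
print(score_recall("placebo", "negative control"))            # False
```

The threshold is a design choice: too strict and typos fail learners unfairly; too loose and wrong answers pass.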
There are four steps in the ‘agile’ production process:
1. Assess and edit content
2. Avoid SME iterations
3. AI generated online learning
4. Review actual content (online)
1. Assess and edit content
Review assets with advice on the suitability of content, editing down to ‘need to know’ learning. Cut it until it bleeds, then cut again. Take other content, regard it as desirable but not essential, and provide the detail as reference content through links.
2. Avoid SME iterations
Most delays in content production are caused by rounds of SME review, so go for minimal or no SME input. If you use good original video, PowerPoints or approved documents, you can avoid these lengthy and costly delays in production. Use the SME to clarify things that are unclear or ambiguous, not for design.
3. AI generated online learning
Use AI to identify learning points and create the interactions. You still have the opportunity to fine tune the content but the AI creates the learning experience, as well as links to relevant content.
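To make the ‘identify learning points’ idea concrete, here is a deliberately naive sketch: pull numbers and capitalised multi-word terms out of approved source text as candidate question items. Real systems use proper NLP; these regexes are only illustrative assumptions.

```python
# Naive stand-in for AI learning-point detection: numbers and Title Case
# multi-word terms become candidate items to turn into questions.
import re

def learning_points(text: str) -> list:
    """Return candidate learning points: numbers plus capitalised terms."""
    numbers = re.findall(r"\b\d+(?:\.\d+)?\b", text)
    terms = re.findall(r"\b[A-Z][a-z]+(?:\s[A-Z][a-z]+)+\b", text)
    return numbers + terms

text = "Under the General Data Protection Regulation, breaches must be reported within 72 hours."
print(learning_points(text))  # ['72', 'General Data Protection Regulation']
```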
4. Review actual content (online)
The review or QA process is another choke point. We get this done using the AI-generated content itself, so that the actual course is being reviewed. This can be done online, using screen sharing, with changes made live as the content is reviewed.
Having developed a method in WildFire that produces online content quickly, allowing you to deliver content the same day, we have seen huge reductions in both time taken and cost. One client calculated a £452,000 cost saving and produced so much content, so quickly, that the real bottleneck was simply testing. In a truly agile process, with not a single face-to-face meeting, the content assets were locked down, AI-generated content was produced, QA was done on the real courses and changes were made quickly. The final experience has increased sales and is now being used across the business in other contexts, continuing to improve an already stellar return on investment.
Total savings, compared to traditional online learning production, were calculated as “£438,000 plus £15,000 in salary costs”. Delivery has “freed up 15% of manager time” to do other things, and a “36% increase in sales has already been recognised in the first few months the training has been available”. As the client says, “With a bit of lateral thinking and a lot of tenacity – seemingly impossible timescales were met”.

Monday, August 20, 2018

Could this be the worst piece of online learning ever? Let me explain why it may well be…

PewDiePie is a legend among his 100+ million YouTube followers. He lives in my home town, Brighton, and has built his reputation on videos that praise, review and sometimes eviscerate games. Unusually, this piece of US gamified online learning, aimed at cybersecurity, was his target – and he nails it.
I’ve written a lot about how online learning has gone down a rabbit hole, with its overworked media, all presentation and no learning, its reliance on recognition rather than retrieval, and its sometimes (not always) ridiculous use of games and gamification. This is a hilarious example of condescending scenarios and awful multiple-choice questions, interspersed with screens full of text – even a game within a game. It’s so bad it’s good – as comedy.
Seriously though, it has all the hallmarks of where the online content market has gone wrong. I can only guess what this cost the client – but it was most likely a high five or even six figure sum.
Look and feel
Let’s be honest, it must have been hard to pull off, but it both looks and feels awful. I’m not sure where the art direction came from, but it is all over the place. I have come to loathe this cartoon-style learning. I find it condescending and patronising in equal measure, but this is what happens when you slam together disparate media, from 3D animation to clip art to screens crammed with text. It is a cartoon mess.
Media rich is not mind rich
It tries SO hard to be engaging with 3D animated characters but they are straight out of the clip-art, cliché playbook. Then the animated effects that slide, whoosh and pop up like a disjointed, surreal dream. We need to sit our teams of content designers down, and scream out the simple principle – LESS IS MORE. We have decades of research showing that all of this ‘noise’ inhibits, not enhances, learning.
The questions are, at times, mind-blowingly bad. Multiple-choice questions are OK but difficult to write well; they could have made the effort. You end up just clicking through, or laughing at some of the ridiculous options.
This is where it really goes up its own asshole. So determined are they to gamify everything that they completely destroy the learning. It is clearly a game designed by someone with no actual knowledge of computer games – you get a lot of this in online learning. So bad are the media mix and rewards that it is truly hilarious. But nothing prepares you for the ‘millionaire’ game within a game, doubling down on trite gamification. It is not that all gamification is bad, but so much of it is this badly executed, Pavlovian nonsense.
How can I sum this up? It should be compulsory to show this in ALL instructional design courses, as an example of how NOT to design learning. If you get to the very end, look out for the hilarious point where he downloads his PDF pass certificate and PewDiePie says it’s a ‘virus’. It is so surreal that it could pass for a deliberate piss-take. When online learning has come to this, you know it’s time for a rethink. Enough already with the graphics and grotesque gamification. It’s embarrassing. Stop. Slow down. Keep it simple. This is what used to be called edutainment, but it is neither edu nor tainment; it is the ugly cul-de-sac of an industry that has abandoned learning for crap media production.

Saturday, August 11, 2018

AI now plays a major recommendation role in L&D - resistance is futile

In 2014, at EPFL (École Polytechnique Fédérale de Lausanne) in Switzerland, I was speaking at a MOOC conference, explaining how MOOCs would eventually move towards satisfying real demand for vocational courses and not, as most attendees thought at the time, deliver standard HE courses. This proved to be right. The most popular MOOCs worldwide are not liberal arts courses but IT, business and healthcare courses. Udacity, Coursera, Udemy and EdX have all recognised this and shifted their business models. Futurelearn remains in the doldrums.
At that same conference I spoke to Andrew Ng and asked him about the application of AI (he’s a world-class AI expert) to online content. I had been pushing this since April 2013. His reply was interesting: “It’s too early and the problem is the difficulty in authoring the courses to be suitable for the application of AI”. He was right then, but he’s now done exactly that in Coursera.
AI Recommendation engines
Recommender systems are now mainstream in almost every area of online delivery: search (Google), social media (Facebook, Twitter, Instagram), e-commerce (Amazon) and entertainment (Netflix). AI is the new UI (User Interface), as almost all online experiences are mediated by recommendation engines.
Their success has been proven in almost every area of human endeavour, even more recently in health, where they provide diagnoses comparable, and at times superior, to those of clinicians. Yet they are rare in education and training. This is changing, as companies such as WildFire, Coursera and Udacity offer recommendation services that allow organisations and learners to identify and select courses based on existing skillsets and experience. We can expect this to be an area of real growth over the next few years.
Recommender engines in learning
Recommender systems are used at various levels in learning:
  • Macro-level insights from training to solve business problems 
  • Macro-level curricula selection for organisations
  • Macro-level course selection for individuals
  • Micro-level insights and decisions within courses for individuals through adaptive learning
  • Micro-level recommendations for external content within context while doing a course
  • Micro-level insights and decisions for individuals through assessment
Macro-level insights from training to solve business problems 
Chris Brannigan, CEO of Caspian Learning, is a neuroscientist who uses what he calls ‘Human Performance Intelligence’ to investigate, diagnose and treat business problems within organisations. These can be compliance issues, risk or performance of any kind. The aim is to do a complete health check, using AI-driven, 3D simulated training scenarios and sophisticated behavioural analysis, right through to predictive analysis and recommendations for process, human and other types of change. The ambition is breathtaking.
Let’s take a real example, one that Chris has completed. How do you know whether your tens or hundreds of thousands of employees perform well under high risk? You don’t. The problem is that the risk is asymmetric: a few bad apples can incur the wrath of the regulators, who are getting quite feisty. You really do need to know what employees do, why they do it and what you need to do to change things for the better. The system learns from experts, so that there is an ideal model, then employees go through 20 or so scenarios of different flavours (distributed practice) which subtly gather data. It then diagnoses problems in terms of decision-making, reasoning and investigation. A diagnosis, along with a financial impact analysis, is delivered to senior executives and line managers, with specific actions. All of this is done using AI techniques that include machine learning and other forms of algorithmic and data analysis to improve the business. It is one very smart solution.
Note that the goal is not to improve training but to improve the business. The data, intelligence, predictive analytics, all move towards decisions, actions and change. The diagnosis will identify geographic areas, cultural problems, specific processes, system weaknesses – all moving towards solutions that may be; more training, investment decisions, system changes or personnel changes. All of this is based on modelling business outcomes.
Macro-level curricula selection for organisations
Coursera have 31 million registered users, huge data sets and 1400 commercial partners and, as mentioned, are using AI to improve organisational learning. Large numbers of employees are taking large numbers of courses, and the choice of future courses is large and getting bigger. Yet within an organisation, the data gathered on who takes what courses is limited. This data, when linked to the content (courses and elements within courses) and put to use through AI, can provide insights leading to recommendations about demand, needs and activity, as well as recommendations on which courses to supply and to whom. It is not just Coursera doing this: Udacity also have an AI team who have produced interesting tools using sentiment analysis and chatbots that recommend courses. We’ve also done this at WildFire.
Macro-level course selection for individuals
At WildFire, we’ve developed an AI recommendation engine that does some nifty analysis on data from completed courses and recommends other courses from that data. We take databases of course interactions (courses taken and user-course interactions), then use model-based collaborative filtering to create a scoring matrix. These matrices are sparse and need to be filled out using correlations that estimate the unknown user-interaction grades; these are then used to cluster courses into groups based on similarity. We then use unsupervised learning to identify clusters of similar specialisations and find areas of overlap. The result is sets of specialisations that are similar to each other – insights that can lead to better investment in certain courses and determine which courses should be taken by whom.
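The general technique can be sketched in a few lines of NumPy: fill a sparse learner-course grade matrix with a low-rank SVD approximation (a standard form of model-based collaborative filtering), then group courses by the similarity of their learned profiles. The data and the rank-2 choice below are toy assumptions, not WildFire’s actual pipeline.

```python
import numpy as np

# Toy data: rows = learners, columns = courses; 0 = not taken (unknown grade)
R = np.array([
    [5., 4., 0., 1., 0.],
    [4., 5., 1., 0., 0.],
    [0., 1., 5., 4., 4.],
    [1., 0., 4., 5., 5.],
])

# Model-based collaborative filtering: a rank-2 SVD approximation predicts
# the unknown interaction grades.
U, s, Vt = np.linalg.svd(R, full_matrices=False)
R_filled = (U[:, :2] * s[:2]) @ Vt[:2, :]

# Each course now has a 2-D profile; courses with highly similar profiles
# form a cluster of related specialisations.
profiles = Vt[:2, :].T
norms = np.linalg.norm(profiles, axis=1)
cos = (profiles @ profiles.T) / np.outer(norms, norms)
cluster_with_first = [j for j in range(R.shape[1]) if cos[0, j] > 0.9]
print(cluster_with_first)  # courses 0 and 1 group together on this toy data
```

In production the similarity threshold would be replaced by a proper clustering step, but the shape of the computation is the same.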
Micro-level insights and decisions within courses for individuals through adaptive learning
In learning, recommendation engines can also be used to recommend routes through courses, even learning strategies. They promise to increase the efficacy of online delivery through personalised learning: each learning experience is unique to each learner, drawing on data about that learner, about other learners who have taken the course, and from other courses those learners have taken. As learners vector through learning experiences at speeds related to their competences, they save time – especially the faster learners – while drop-out is reduced among learners who need more individualised support.
Recommender engines lift traditional online learning above the usual straight HTML delivery, which has little in the way of intelligent software, making the experience more relevant and efficient for the learner. They also provide scale and access from anywhere with an online connection, at any time. If their efficacy can be proved, there is every reason to suppose that their adoption will be substantial.
Micro-level recommendations for external content within context while doing a course
In WildFire, we create content in minutes, not months, using various AI techniques. In essence, the AI identifies learning points and creates user interactions within that content, but it also has a ‘curation’ engine that recommends external content linked to each specific piece of learning and produces links to that content automatically. This creates content that satisfies both those who need more explanation and detail and the more curious learner.
This is exactly how experienced learners learn: one thing sparks curiosity about another. In this case, we formalise the process with AI to find links that satisfy those needs. Sometimes it will be a simple internal document, at other times external links to a definition, YouTube video, TED talk or another trusted, selected source.
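A hypothetical sketch of that curation step: given a term the learner didn’t know, build links to a whitelist of trusted sources. The source list and URL patterns here are illustrative assumptions; a real engine would also rank and verify the results.

```python
from urllib.parse import quote

# Hypothetical whitelist of trusted sources (illustrative, not a real list)
TRUSTED_SOURCES = {
    "Wikipedia": "https://en.wikipedia.org/wiki/{}",
    "YouTube search": "https://www.youtube.com/results?search_query={}",
}

def links_for(term: str) -> dict:
    """Build one URL per trusted source for a term the learner needs explained."""
    return {name: url.format(quote(term)) for name, url in TRUSTED_SOURCES.items()}

print(links_for("negative control")["Wikipedia"])
# https://en.wikipedia.org/wiki/negative%20control
```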
Micro-level insights and decisions for individuals through assessment
Coursera have been building IRT (Item Response Theory) into machine learning software to analyse learners’ performance in assessments. This allows you to gauge the competences of your employees relative to each other, but also relative to other companies in your sector or in general. This is similar to the work done by Chris Brannigan at Caspian Learning.
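For readers unfamiliar with IRT, the standard two-parameter logistic (2PL) model gives the probability that a learner of ability θ answers an item correctly, given the item’s discrimination a and difficulty b. The sketch below is textbook IRT, not Coursera’s code:

```python
import math

def p_correct(theta: float, a: float, b: float) -> float:
    """2PL model: P(correct) = 1 / (1 + exp(-a * (theta - b)))."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# An average learner (theta = 0) on an item of average difficulty (b = 0)
print(round(p_correct(0.0, 1.0, 0.0), 2))  # 0.5
# The same learner on a harder item (b = 1.5)
print(round(p_correct(0.0, 1.0, 1.5), 2))  # 0.18
```

Fitting a and b per item across thousands of learners is what makes ability estimates comparable across cohorts, or even across companies.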
L&D talks a lot about business alignment but often doesn’t get very far down that track. We can see that AI has succeeded because it moves beyond L&D into other business units. What it gathers for an organisation is a unique data set that really does deliver recommendations for change. It’s light years ahead of happy sheets and Kirkpatrick. What’s more interesting is that it is the polar opposite of much of what is being done at present, with low-key, non-interventionist training. Even in the online learning world, blind adherence to SCORM has meant that most of the useful data, beyond mere completion, has not been gathered. This blind adherence to an ill-conceived standard will hold us back unless we move on.
This AI approach draws on the behaviour of real people, uses sim/scenario-based data gathering, focuses on actual performance, captures expert performance and uses AI/machine intelligence to produce concrete recommendations. It’s all about business decision-making and direct business impact. And here’s the rub – it gets better the more it is used.
There is no doubt in my mind that AI will change why we learn, what we learn and how we learn. I’ve been showing real examples of this for several years now at conferences, built an AI-in-learning company, WildFire, that creates online learning in minutes, not months, and invested in others. Learning and Development has always been poor on data, evaluation and return on investment, relying on an old, outdated form of evaluation (Kirkpatrick). It’s time to move on and address that issue square on, with evaluation based on real data and efficacy based on actual demand, using that data.

Monday, August 06, 2018

Video is good but never enough - how to supplement it in minutes to get great learning

‘10 researched tips to produce great video in learning (some will surprise you)’ offered concrete tips on producing video for online learning, but it was only half the story, as research also shows that video, in most cases, is rarely enough in learning.
Video not sufficient
Video is great at showing processes, procedures, real things moving in the real world, drama, even much-maligned talking heads, but it is poor on many other things, especially concepts, numbers and abstract meaning. When delivering WildFire-created content to nurses in the NHS, we discovered that processes and procedures were recalled from video but much of the detail was not. The knowledge that was not retained and recalled was often ‘semantic’ knowledge:
1) Numbers (doses, measurements, statistical results and so on) 
2) Names and concepts (drugs, pathogens, anatomy and so on)
This is not surprising, as there is a real difference between episodic and semantic memory. 
Episodic memory holds all of those things remembered as experiences or events, where you are an actor within the event. Semantic memory holds facts, concepts and numbers, where meaning is independent of space and time, often thought of as words and symbols.
In healthcare, as in most professions, you need to know both. This is why video alone, is rarely enough. One solution is to supplement video with learning that focuses on reinforcing the episodic and semantic knowledge, so that two plus two makes five.
Two plus two makes five
Our solution was to automatically grab the transcript (narration) of the videos. Some transcripts were already available; for those that were not, we used the automatic transcription service on YouTube. The transcript was put through the WildFire process, where AI was used to automatically produce online learning with open-input questions to increase retention and recall. This allowed the learner to watch the video (for process and procedure) and then do the active learning, picking up the semantic knowledge as well as reinforcing the processes and procedures.
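The core move – blanking out the semantic items so they must be typed from memory – can be sketched naively as follows. This regex version is an illustrative assumption, not the actual WildFire pipeline, and the key-term list would come from an NLP step rather than being hand-supplied.

```python
# Turn a transcript sentence into open-input items by blanking each number
# or key term once; the learner must type the blanked answer from memory.
import re

def make_open_input_items(sentence: str, key_terms):
    """Return (prompt, answer) pairs with one target blanked per prompt."""
    targets = re.findall(r"\b\d+\b", sentence)
    targets += [t for t in key_terms if t.lower() in sentence.lower()]
    items = []
    for t in targets:
        pattern = re.compile(re.escape(t), re.IGNORECASE)
        items.append((pattern.sub("____", sentence, count=1), t))
    return items

sentence = "Ask if the patient has taken antihistamines in the last 4 days."
for prompt, answer in make_open_input_items(sentence, ["antihistamines"]):
    print(prompt, "->", answer)
```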
In a nurse training video on allergy tests, where the nurse administers allergens into the skin of the patient and the reactions are recorded, the video shows the nurse as she gets the patient comfortable with a pillow under his arm. She then asks him some questions (Any lotions on your skin? Taken any antihistamines in the last 4 days?). Other important learning points are to blot (not rub), tell the patient not to scratch, and so on.
Now the video did a great job on the procedure – pillow under the arm, lancets in the sharps bin, blot not rub, and so on. Where it failed was on the number of days within which the patient had taken antihistamines, the names of the allergens and the concept of a negative control. These were covered by asking learners to recall and type in their answers (not MCQs) in WildFire – items such as ‘4 days’, the allergen names and ‘negative control’. In addition, if the learner didn’t know, for example, what a negative control was, there were AI-created links to explanations describing what a negative control is within a diagnostic test.
The learner gets the best of both worlds, the visual learning through video and the semantic learning through WildFire, all in the right order and context.
Video is a fabulous learning medium, witness the popularity of YouTube and the success of video in learning, although there are some principles that make it better. When supplemented by WildFire produced content, you get a double dividend – visual episodic learning and semantic knowledge. If you have video content that you need to turn into powerful online learning, quickly, with high retention and recall, contact us at WildFire.