Saturday, August 29, 2020

More important than man on the moon - the melding of mind and machine

Last night we witnessed a live-streamed event that may prove more significant than the moon landing. Elon Musk showed the remarkable progress of Neuralink. AI, robotics, physics, material science, medicine and biology collided in a Big Bang event, where we saw an affordable device that can be inserted into your brain to solve important spinal and brain problems. By problems they meant memory loss, hearing loss, blindness, paralysis, extreme pain, seizures, strokes and brain damage. They also included mental health issues such as depression, anxiety, insomnia and addiction. Ultimately, I have no doubt that this will lead to a huge decrease in human suffering. God doesn’t seem to have solved the problem of human suffering; we as a species, through science, are on the brink of doing it by and for ourselves.

Current tech

Current technology (Utah array) has only 100 channels per array and the wires are rigid, inserted crudely with an air hammer. You have to wear a box on your head, with the risk of infection, and it requires a great deal of medical expertise. It does a valuable job but is low bandwidth and destroys about a sugarcube of brain matter. Nevertheless, it has greatly improved the lives of over 150,000 people.

Neuralink tech

Musk showed three little piggies in pens: one without an implant, one that had had an implant, since removed without any ill effects, and one with an implant (they showed its signal live). Using a robot as surgeon, the Neuralink device can be inserted in an hour, without a general anaesthetic, and you can be out of hospital the same day. The coin-sized device is inserted into the skull. Its fibres are only 5 microns in diameter (a human hair is 100 microns) and it has ten times the channels of the Utah array, with a megabit bandwidth rate, to and from your smartphone. All channels are read and write.

Smartphone talks and listens to brain

When writing to the brain, you don’t want to damage anything, and you need precise control over a range of electric fields in both time and space, as well as the delivery of a wide range of currents to different parts of the brain. The device uses Bluetooth to and from your smartphone. Indeed, it is the mass production of smartphone chips and sensors that has made this breakthrough possible.

Team Musk

What really made this possible was Elon Musk, a remarkable man who brought together a formidable team of AI experts, roboticists, material scientists, mechanical engineers, electrical engineers and neurologists. In the Q&A session afterwards, they were brilliant.

What next?

I discussed Neuralink in my book ‘AI for Learning’, speculating that at some distant time machine would meld with mind, and that this would open up possibilities for learning. I didn’t imagine that it would be kicked off just a few days after the book’s release… but here we have it. So what are the possibilities for learning?


At the very least this will give us insights into the way the brain works. We can ‘read’ the brain more precisely but also experiment to prove/disprove hypotheses on memory and learning. This will take a lot more than just reading ‘spikes’ (electrical impulses from one neuron to many) but it is a huge leap in terms of an affordable window into the brain. If we unlock memory formation, we have the key to efficient learning.


Our current interfaces, keyboards, touchscreen, gestures and voice, could also be bypassed, giving much faster ‘thought to and from machine’ communication by tapping into the phonological loop. This would be an altogether different form of interface, more akin to VR. Consciousness is a reconstructed representation of reality anyway and these new interfaces would be much more experiential as forms of consciousness, not just language.

Read memories

Memories are of many types and are complex, distributed things in the brain. Musk talked eloquently about being able to read memories, which means they can be stored for later retrieval. Imagine having cherished memories stored to be experienced later, like your wedding photos, only as felt conscious events, like episodic memories. There are conceptual problems with this, as memory is a reconstructive event, but at least these reconstructions could be read for later retrieval. At the wilder end of speculation, Musk imagined that you could ‘read’ your entire brain, with all of its memories, store this and implant it in another device.


This is not just about memories. It is our faculty of imagination that drives us as a species forward, not only in mathematics, AI and science (Neuralink is an exemplar) but also in art and creativity. Think of the possibilities in music and other art forms, the opportunities around the creative process, where we could have imagination prostheses.

Write memories

Reading memories is one thing. Imagine being able to ‘write’ memories to the brain. That is, essentially, a form of learning. If we can do this, we can accelerate learning. This would be a massive leap for our species. Learning is a slow and laborious process. It takes 20 years or more before we become functioning members of society, and even then we forget much of what we were taught and learned. Our brains are seriously hindered by the limited bandwidth and processing power of our working memory. Overcoming that block, by writing directly to the brain, would allow much faster learning. Could we eliminate great tranches of boring schooling? Such reading and writing of memories would, of course, have to be encrypted for privacy. You wouldn’t want your brain hacked!


In my book I talk about the philosophical discussion around extended consciousness and cognition. Some think the internet and personal devices like smartphones have already extended cognition. The Neuralink team are keenly aware that they may have opened up a window on the mind that may ultimately solve the hard problem of consciousness, something that has puzzled us for thousands of years. If we can really identify correlates between what we think in consciousness and what is happening in the brain and can even simulate and create consciousness, we are well on the way to solving that problem.

End to suffering

But the real win here is the opportunity to limit suffering, pain, physical disabilities, autism, learning difficulties and many forms of mental illness. It may also be possible to read electrical and chemical signals for other diseases, leading to their prevention. This is only the beginning, like the first transistor or telephone call. It is a scalable solution and, as versions roll out with more channels, better interpretation using AI and coverage of more areas of the brain, there are endless possibilities. This event was, for me, more important than man landing on the moon, as its focus is not on grand gestures and political showmanship but on reducing human suffering. That is a far more noble goal. It is about time we stopped obsessing over the ethics of AI, with its endless dystopian navel gazing, and recognised that it has revolutionary possibilities in the reduction of suffering.

FDA approved

The good news is that they have FDA Breakthrough Device designation and will be doing human trials soon. 

Sunday, August 23, 2020

Taylor (1856–1915): training as a formal function within organisations, essential to business growth

Frederick Winslow Taylor turned down Harvard for an apprenticeship, competed nationally at tennis and made his fortune in steel. After a four-year apprenticeship, he worked his way up to senior management roles and invented patented techniques, so his theories were grounded in real organisational experience, practice and success.

He is best known for his work in applying the scientific method to management. Taylor’s Principles were long respected in organisational planning and training but ‘Taylorism’ became a pejorative term, as we moved out of mass manufacturing and production into services. Yet his fundamental idea, that efficiencies should be sought in organisations, far from being abandoned, has remained the mainstay of management theory and practice for over a century. The Principles of Scientific Management (1911) was voted the most influential management book of the 20th century by the Academy of Management.

Four principles

Taylor's four principles of scientific management are worth repeating:

1.     Replace rule-of-thumb work methods with methods based on the scientific study of tasks

2.     Scientifically select, train, and develop each employee rather than passively leaving them to train themselves

3.     Provide detailed instruction and supervision of each worker in the performance of that worker's discrete task

4.     Divide work nearly equally between managers and workers, so that the managers apply scientific management principles to planning the work and the workers actually perform the tasks

This can be reduced to the scientific and analytic approach to productivity through the focus on tasks or process, then formal training, with a focus on performance. Management is the science of planning for performance. When stated that way, we can see why Drucker, a huge admirer of Taylor, saw him not only as the father of modern management but also the person who shaped the great wealth creating industries that lifted millions out of poverty. This is far from the derogatory descriptions of many who see him as the architect of exploitative capitalism.

What characterises Taylor’s Principles is his focus on measurement, standardisation, management and the division of labour. The modern obsession with management, as opposed to general employees, even the obsession with yet another class of management, leaders, all stems from Taylor. This fundamental distinction between management (who think and plan) and workers (who do and make things) was, of course, more pronounced in the great era of manufacturing. But who can deny that, even in the modern era, more dominated by services, it has been carried over into all aspects of organisational structure, planning and training?


His principles put training at the centre of his scientific process, with the selection, development and training of staff to be based on scientific principles. His legacy was therefore to have training as a formal function within organisations, essential to business growth. Formal, direct training was the key to improving productivity.

This focus on training, not in a general sense but in precise competences, has also had a lasting effect. Whatever the business goal or process, he recommended a scientific approach to the training of those performances, not as pure theory but as doing. Practice and the transfer of learning to actual competence were essential. In many ways we have backtracked on this by separating training off into a different realm, not the workplace but the classroom and now online. We may have drifted away from Taylor’s base principle that training is about actual, proven competences that transfer into practice in the workplace. In some ways we have forgotten these scientific principles, as training became, in places, more faddish, with less reliance on scientific research on how we learn and evidence-based practice. There is a contemporary movement to debunk the fads and myths that have crept into learning and training, which is Taylorist in approach.


Taylor’s world was one where most jobs were manual, so his focus on physical process was understandable. We now have the inverse, where manual work is now less than a tenth of all jobs, so his principles have to be adapted towards knowledge work. This means less focus on manual skills and more on cognitive skills.

As Taylor wanted to find ‘scientific’ solutions to production and performance problems, he recommended a single solution, with a binary split, where managers manage and plan, then workers do and make. This single scientific solution was replaced by less hierarchical approaches that distributed responsibility more widely in organisations, so that more personal responsibility is taken by all. Also, managers are no longer separated off to do pure planning. They take a more active role in the personal development and supervision of those they manage. Teamwork and collaboration, defined and researched by Belbin, Salas and Stodd, have also led to more democratised structures. Leadership has also been layered on to the management category.

In many organisations extreme and narrow specialisation is seen as inflexible, even demotivating. A more humanistic approach to management, where motivation, support, appraisal and personal development are seen as leading to higher productivity, has taken its place. Yet Taylor was not blind to these issues. Two of his four principles were about training people.

There have also been changes in the way business processes are perceived, with more focus on continuous improvement. Quality management, control and now sophisticated data-driven approaches address the sheer complexity of procurement, supply chain management, production and distribution.


Although modern commentators are often critical of Taylor, they effectively parrot his approach. Management consultants unwittingly apply his original schema, that separated out managers from workers and now, leaders.

His methods resulted in rejection by some owners and workers, but also in significantly higher wages in organisations that adopted them, when wages were linked to productivity, so the charge that he was merely a stooge of the owners is not entirely true. However, there is little doubt that he had a rather negative view of the working class. Overall, however, Drucker is right in saying that his management techniques lifted many out of poverty. The downside is that the focus on paying managers and leaders well has also led to massive levels of inequality, as modern economists like Piketty and the data show.

What is striking, however, is how little has changed. His basic distinction between management and workers has survived. Specialism still exists, and the focus on business process that leads to increased performance and productivity remains intact.


Taylor, F.W., 1911. The Principles of Scientific Management. New York: Harper & Brothers.

Drucker, P., 1974. Management: Tasks, Responsibilities, Practices. New York: Harper & Row.

Piketty, T., 2014. Capital in the Twenty-First Century, trans. A. Goldhammer. Cambridge, MA: Harvard University Press.

Friday, August 21, 2020

Universities are the perfect hub and spoke network for viral spread

Here’s an idea. Let’s take hundreds of thousands of young people, get them to travel to another city in the country, preferably far from their home town, put them in closed rooms together for hours on end with older people, let them mix, go to bars and party. Throw in a healthy dose of foreign students from countries all over the globe. Now, after getting them all into one container, shake this lethal cocktail, give it time to ripen, then send them all back home, just before Christmas, as the flu and other viruses peak. That’s essentially what Universities are doing around the world. You couldn’t design a better, more optimised system for viral spread, as it reaches almost every village, town and city in the country and abroad.

Why would you take such a risk? We know from recent exam results that school results dog-legged up, even though the children were not at school. We know that lectures can be online. We also know that tutorials can be held online. In fact, we know that entire degrees can be delivered online, because they are, at scale. I’ve attended graduation ceremonies for years, helping hand out degrees to such students.

In truth, Covid is exposing the hard reality of Higher Education: it is mostly about hanging out with other young people. This is what administrators call the ‘student experience’. That’s fine, but let’s be honest about where all that money goes. Beyond this there’s a lot of signalling – basically get a degree and put a sticker on your head saying ‘hire me’. Unfortunately, that sticker is starting to fall off; as so many people have degrees, their value has been commoditised.


But let’s get back to Covid. What is happening in the US is illustrative. There is so much cash at stake, from sports, accommodation, food and other non-educational services, and the institutions are scared shitless about having to lower costs or refund students and their parents for an online only experience, that they’re ‘toggling’.

‘Toggling’ is a term invented by Bryan Alexander for switching to and fro between campus and online provision, effectively playing chicken. What many Universities are doing is saying:

It’s OK, come to Campus…

And by the way pay up…

Oh no, the students are partying and infections are rising, we have to close…

Sorry no refunds, it’s their fault…


Scott Galloway threw a grenade into this car crash by publishing a spreadsheet that categorised institutions into those that will:

Thrive

This includes the elite Universities with strong brands, as they double down and adopt some online provision.

Survive

Universities with good brand equity, credential-to-cost ratio, and/or endowments that will weather the storm.

Struggle

He describes these as having ‘comorbidities’: high tuition rates, low numbers, poor endowments.

Be challenged (he said perish!)

High tuition costs, low endowments, dependence on international students, and weak brands.

His spreadsheet is here.


I’m in favour of K12 schools returning, if carefully monitored, as schooling is localised. On the whole, most kids attend their local school and tracing can be managed. Universities are different. They form a massive, national, evenly spread distribution network that spokes out to international locations. This is exactly what an evolving virus wants: an efficient, optimised delivery mechanism.

Tuesday, August 18, 2020

AI for Learning. So what's this book about?


This is, to my knowledge, the first general book about how AI can be used for learning, and by that I mean the whole gamut of education and training. It is not a technical book on AI. It is designed for the many people who teach, lecture, instruct or train, those involved in the administration, delivery, even policy around online learning, and the merely curious. It is essentially a practical book about using AI for learning, with real examples of real teaching and learning in real organizations with real learners.

AI changes everything. It changes how we work, shop, travel, entertain ourselves, socialize, deal with finance and healthcare. When online, AI mediates almost everything – Google, Google Scholar, YouTube, Facebook, Twitter, Instagram, TikTok, Amazon, Netflix. It would be bizarre to imagine that AI will have no role to play in learning – it already has. 

Both informally and formally, AI is now embedded in many of the tools real learners use for online learning – we search for knowledge using AI (Google, Google Scholar), we search for practical knowledge using AI (YouTube), Duolingo for languages, and CPD is becoming common on social media, almost all mediated by AI. It is everywhere, just largely invisible. This book is partly about the role of AI in informal learning but it is largely about its existing and potential role in formal learning – in schools, Universities and the workplace. AI changes the world, so it changes why we learn, what we learn and how we learn.

It looks at how smart AI can be, and is, used for both teaching and learning. For teachers it can reduce workload and complement what they do, helping them teach more effectively. For learners it can accelerate learning right across the learning journey: engagement, support, feedback, content creation, curation, adaptation, personalization and assessment. AI provides smart solutions to make people smarter.


So how did we get here? Well, AI didn’t spring from nowhere. It has a 2,500-year pedigree. What matters is where we are today – somewhere quite remarkable. AI is ‘the’ technology of the age. The most valuable tech companies in the world have AI as their core, strategic technology. As it lies behind much of what we see online, it literally supports the global web, driving use through personalization. Surprisingly, AI does this as an IDIOT SAVANT, profoundly stupid compared to humans, nowhere near the capabilities of a real teacher, but profoundly smart on specific tasks. Curiously, it can provide wonderfully effective techniques, such as adaptive feedback, on a scale impossible for humans, but doesn’t ‘know’ anything. It is ‘competence without comprehension’, but competence gets us a long way!

AI and teachers

In the book we first look at AI from the teacher or trainer’s perspective, showing that it is not a replacement for, but a valuable aid to, teaching. Robot teachers are beside the point, a bit like having robot drivers in self-driving cars. The dialectic between AI and teaching shows that there will be a synthesis and increased efficacy in teaching when its benefits are realized. Similarly for learners. AI is not a threat; it is a powerful teaching and learning tool.

AI is the new UI

AI underlies most interfaces online by mediating what you actually see on the screen. More recently it has provided voice interfaces, both text-to-speech and speech-to-text. This is important in learning, as most teaching is, in practice, delivered by voice. Then there is the wonderful world of chatbots, the return of the Socratic method, with real success in engagement, support and learning. There are lots of real examples of how these new interfaces, and in particular dialogue, will expand online learning.

AI creates content

A surprising development has been the use of AI to create online content. Tools like WildFire have been creating online content in minutes not months, with high-retention learning – using AI to semantically interpret answers and get away from traditional MCQs. AI can also enhance video, which suffers from being a transitory medium in terms of memory, like a shooting star leaving a trail of forgetting behind it, turning it into powerful, high-retention learning experiences. New adaptive learning platforms are proving powerful, personalizing learning at scale and delivering entire degrees. AI pushes organisations towards being serious learning organisations by producing and using data to improve performance, not only of the AI systems themselves but also of teachers and learners. Models such as GPT-3 are producing content that is indistinguishable, when tested, from human output. This shows that there is far more to AI than at first meets the AI!

AI and learning analytics

Learning is not an event, it is a process. Data describes, analyses, predicts and can prescribe that process. The book covers data types, the need for cleaning data, the practical issues around data use in learning and its use in learning analytics, along with personalized and adaptive learning, showing how AI can educate and train everyone uniquely. Data-driven approaches can also deliver push techniques, such as nudge learning and spaced practice, embodying potent pedagogic practice. New ecosystems of learning, such as Learning eXperience Platforms and Learning Record Stores, move us towards more dynamic forms of teaching and learning. Sentiment analysis, using AI to interpret subjective emotions in learning, is also covered. AI, in this sense, is the rocket, with data as its fuel. We explore in the book how you can move towards a more data-driven approach to learning.

AI in assessment

Then there’s assessment, which is being made easier and enhanced by AI. From student identification to the delivery of assessments and forms of assessment, AI promises to free assessment from the costs and restraints of the traditional exam hall. Plagiarism checking is also discussed, as is the semantic analysis of open input in assessment and essay marking.

What next for AI in learning?

Well, there will be a significant shift in the skills needed to use AI in learning, away from the traditional ‘media production’ mode, and these new skills are explained in detail. More seriously, you can’t have a book on AI for learning without tackling ‘ethics’, and so bias, transparency, race, gender and dehumanisation are all examined. The good news is that AI is not as good as many ethicists think it is, and not as bad as you fear. On employment, we look at something few have examined: the effect of AI on the employment of learning professionals.

AI: the Final Frontier

Finally, there is a cheeky look at the final frontier. What next? We look at how AI may accelerate learning through non-immersive and immersive, brain-based technology, as well as speculating on how this may all pan out in the future. It is literally mind-blowing.


In these times of pandemic, we have all had to adapt to online learning; teachers, learners and parents. Necessity has become the mother of invention, and this book offers a look at the future, where AI technology will provide the sophistication we need to make online learning smart, responsive and up to the future challenge of a changing world. AI is here, its use is irreversible and its role in learning inevitable. I hope the book answers any questions you may have on AI in learning; more importantly, I hope it inspires you to think about how you may use it in your organization.

Use code AHR20 here to get a 20% discount and free delivery in the UK and US.


Monday, August 17, 2020

Study on retention using Video plus AI-generated retrieval practice


The aim of this trial was to test the effectiveness of chunking video and placing effortful retrieval practice after each chunk. Chunking is the slicing of video content into several separate segments, or chunks, so that there is less cognitive load and forgetting. Retrieval practice is making learners recall what they think they know in order to reinforce it, increasing retention and subsequent recall. Two groups were compared. One was shown only a training video on Equality & Diversity produced for a large company; the other saw the same video chunked into smaller segments, with AI-generated retrieval practice at the end of each short segment. Both groups were tested immediately after the learning experience. The results showed a 61.5% greater mean score in the Video + AI-generated practice group over the Video-only group. This study shows that learning from video significantly benefits, in terms of reinforcement, retention and recall, from chunking plus AI-generated retrieval practice.


Video has become commonplace in learning, through YouTube and Vimeo in both the public domain and on private channels. It has also become common to deliver learning video content from a VLE (Virtual Learning Environment), LMS (Learning Management System) or LXP (Learning eXperience Platform). Other video specific platforms use Netflix-style carousel and other interfaces to deliver learning video content.

Yet little attention has been paid to the research suggesting that video should be enhanced with active learning. Research into the use of video for learning recommends several techniques that go beyond the watching of video on its own (Reeves & Nass, 1996; Zhang, 2006; Mayer, 2008; Brame, 2016; Chaohua, 2019).


Twenty-six participants were selected. The first group of thirteen watched the video only. The second group of thirteen watched the same video chunked into four meaningful segments, edited to match separate topics, interspersed with AI-generated retrieval practice, which required the learner to recall key ideas and concepts and type them in. These acts of recall and writing, created by the AI tool, reinforce learning. Any items that were not correct had to be repeatedly input until all were correct. A separate, identical written recall test was completed immediately after the learning experience by both groups.

Note that the retrieval practice tool used was WildFire. It creates online learning from the chunks of video, applying AI to the automatically generated video transcript to identify the key learning points, create questions and generate links to external content that enhances the learning experience. If the learner has not been able to retrieve the relevant concepts, it provides remedial practice until each concept is known. On input it accepts spelling variants, as well as British and American English.
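WildFire’s internals are proprietary, so purely as an illustration, the loop described above – type in recalled concepts, with missed items re-queued until all are correct – might be sketched like this (the function names and the crude normalisation rule are assumptions, not the product’s actual code):

```python
# Illustrative sketch of a retrieval-practice loop: missed concepts are
# re-queued until every one has been recalled correctly.

def normalise(answer: str) -> str:
    """Crude stand-in for fuzzy matching of spelling variants."""
    return answer.strip().lower().replace("-", " ")

def retrieval_practice(concepts: list[str], get_answer) -> int:
    """Repeat missed concepts until all are recalled; return total attempts.

    `get_answer(concept)` stands in for the learner typing a response.
    """
    remaining = list(concepts)
    attempts = 0
    while remaining:
        still_wrong = []
        for concept in remaining:
            attempts += 1
            if normalise(get_answer(concept)) != normalise(concept):
                still_wrong.append(concept)  # re-queue for another pass
        remaining = still_wrong
    return attempts
```

A real system would match against transcript-derived key concepts and accept many more input variants, but the remedial structure – loop until everything is retrieved – is the pedagogically important part.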


The Video + AI group scored significantly higher than the video group.

Figure 1 shows that Video + AI group had a 61.5% increase in mean retention, from a mean value of 9.00 to 14.54. 

Figure 2 shows that Video + AI group had a 61.5% increase in mean retention, from a mean value of 9.00 to 14.54.

In Figure 3, histograms of the two groups are compared showing that the Video + AI group has a higher mean and users scored higher more frequently.

In Figure 4, a box and whisker plot gives more insight into the respective distributions. The Video only group had a lower median value of 8 and a smaller range than the Video + AI group. The Video + AI group had a 75% increase in median score over the Video group.
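As a quick sanity check on the reported effect sizes (the raw individual scores are not given, so this uses only the summary statistics above), the percentages can be recomputed from the stated means and medians:

```python
# Recompute the reported effect sizes from the summary statistics:
# group means of 9.00 (Video only) and 14.54 (Video + AI), and a
# Video-only median of 8 with a reported 75% median increase.

def pct_increase(before: float, after: float) -> float:
    """Percentage increase from `before` to `after`."""
    return (after - before) / before * 100

mean_gain = pct_increase(9.00, 14.54)
median_video_ai = 8 * (1 + 75 / 100)  # implied Video + AI median

print(f"Mean gain: {mean_gain:.1f}%")  # ~61.6% here; reported as 61.5%,
                                       # presumably from unrounded means
print(f"Implied Video + AI median: {median_video_ai:g}")
```

The implied Video + AI median of 14 against a Video-only median of 8 is consistent with the box-and-whisker description above.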


We know from Guo (2014), using a large data set of learning video data gathered from MOOCs (Massive Open Online Courses), that learners drop out in large numbers at around six minutes. Engagement drops dramatically to 50% at 9-12 minutes and 20% beyond this. Evidence from other studies on attention, using eye-tracking, confirms this rapid drop in arousal (Risko, 2012). The suggestion is that learning videos should be six minutes or less. Chunking video down into smaller, meaningful segments achieves this aim and relieves the load on working memory.

Many can recall scenes from films and videos but far fewer can remember what was actually said. That is because our episodic memory is strong and video appeals to that form of visual memory and recall but video is poor for semantic memory and semantic knowledge, what we need to know in terms of language. One remembers the scene and can literally play it back in one’s mind but it is more difficult to remember facts and speech. This is why video is not so good at imparting detail and knowledge. There is a big difference between recalling episodes and knowing things.

Learning is a lasting change in long-term memory, and video suffers from the lack of opportunity to encode and consolidate memories. Your working memory lasts about 20 seconds and can only hold three or four things in mind at one time. Without the time to encode, these things can be quickly forgotten through cognitive overload or the failure to consolidate into long-term memory (Sweller, 1988). Our minds move through video at the pace of the narrator but, like a shooting star, the memories burn up behind us, as we have not had the opportunity to encode them into long-term memory. Without additional active, effortful learning, we forget. An additional, researched problem is that people often ‘think’ or ‘feel’ they have learnt from video but, as Bjork (2013) and others have shown, this can be ‘illusory’ learning. The learner mistakes the feeling of having learnt things for actual learning. When tested, they are shown to have learned less than they thought.

How do we reduce cognitive load in video for learning? Mayer (2003) and others have shown that text plus audio plus video on the screen, commonly seen in lecture capture, actually inhibits learning. One should not put captions, text or scripts on the screen while the narrator or person on the screen is talking. Fiorella (2019) proposes that learning improves when there are “visual rests” and that memory is enhanced when “people have a chance to stop and think about the information presented”. Chunking video down into smaller, meaningful segments and providing the opportunity for active, effortful learning will enhance learning both by reducing cognitive load and by increasing reinforcement, retention and recall.

But what exactly should learners do after and between these video chunks? MacHardy and Pardos (2015) show that the relationship between the video and the active learning must be meaningful and closely related; in a large data-mining exercise, they found that if the two are too loosely related, student attainment suffers. To increase reinforcement, retention and recall, Szpunar (2013), Roediger (2006) and Vural (2013) suggest that retrieving key concepts is a powerful learning technique. This was the aim of this study: to test the hypothesis that chunked video plus AI-generated retrieval practice increases reinforcement, retention and recall.
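As a minimal illustration of the approach (a hypothetical sketch, not the actual system used in the study or in WildFire), the pipeline can be thought of as two steps: split a video transcript into small chunks, then generate simple cloze-style retrieval items from the key concepts in each chunk.

```python
import re

def chunk_text(transcript, max_sentences=3):
    """Split a transcript into small chunks of a few sentences each."""
    sentences = re.split(r'(?<=[.!?])\s+', transcript.strip())
    return [' '.join(sentences[i:i + max_sentences])
            for i in range(0, len(sentences), max_sentences)]

def cloze_items(chunk, key_terms):
    """Create fill-in-the-blank retrieval items for key terms found in a chunk."""
    items = []
    for term in key_terms:
        if term.lower() in chunk.lower():
            pattern = re.compile(re.escape(term), re.IGNORECASE)
            items.append({'question': pattern.sub('_____', chunk, count=1),
                          'answer': term})
    return items

transcript = ("Working memory holds only a few items at once. "
              "Encoding moves information into long-term memory. "
              "Retrieval practice strengthens recall.")
for chunk in chunk_text(transcript, max_sentences=1):
    for item in cloze_items(chunk, ['working memory', 'encoding', 'retrieval practice']):
        print(item['question'], '->', item['answer'])
```

Real AI-generated retrieval practice uses NLP to identify the key concepts automatically; here the key terms are supplied by hand to keep the sketch self-contained.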

Practical applications

There are several possible applications of this form of enhanced video learning:

1. Existing video learning libraries can be made into far more effective learning experiences

2. New videos for learning can be made into far more effective learning experiences

Note that additional design recommendations identified during the study include:

1. Scripting the videos into a more ‘chaptered’ structure

2. Clear edit points on visuals and audio at the end of each planned chunk of video

3. Close relationship between the video and the retrieval practice


This trial provides evidence that the use of both chunked videos and AI-generated retrieval practice, in combination, significantly increases retention and recall and can be strongly recommended for both existing and new video learning content.


Bjork, R.A., Dunlosky, J. and Kornell, N., 2013. Self-regulated learning: Beliefs, techniques, and illusions. Annual Review of Psychology, 64, pp.417-444.

Brame, C.J., 2016. Effective educational videos: Principles and guidelines for maximizing student learning from video content. CBE—Life Sciences Education, 15(4), p.es6.

Ou, C., Joyner, D. and Goel, A., 2019. Developing videos for online learning: A 7-principle model. Online Learning.

Fiorella, L., van Gog, T., Hoogerheide, V. and Mayer, R.E., 2017. It’s all a matter of perspective: Viewing first-person video modeling examples promotes learning of an assembly task. Journal of Educational Psychology, 109(5), p.653.

Fiorella, L., Stull, A.T., Kuhlmann, S. and Mayer, R.E., 2019. Fostering generative learning from video lessons: Benefits of instructor-generated drawings and learner-generated explanations. Journal of Educational Psychology.

Guo, P.J., Kim, J. and Rubin, R., 2014. How video production affects student engagement: An empirical study of MOOC videos. In Proceedings of the First ACM Conference on Learning at Scale (L@S ’14). New York: ACM, pp.41-50.

MacHardy, Z. and Pardos, Z.A., 2015. Evaluating the relevance of educational videos using BKT and big data. In Proceedings of the 8th International Conference on Educational Data Mining, Madrid, Spain.

Mayer, R.E. and Moreno, R., 2003. Nine ways to reduce cognitive load in multimedia learning. Educational Psychologist, 38(1), pp.43-52.

Mayer, R.E., 2008. Applying the science of learning: Evidence-based principles for the design of multimedia instruction. American Psychologist, 63(8), p.760.

Reeves, B. and Nass, C.I., 1996. The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. Cambridge University Press.

Risko, E.F., Anderson, N., Sarwal, A., Engelhardt, M. and Kingstone, A., 2012. Everyday attention: Variation in mind wandering and memory in a lecture. Applied Cognitive Psychology, 26(2), pp.234-242.

Roediger III, H.L. and Karpicke, J.D., 2006. The power of testing memory: Basic research and implications for educational practice. Perspectives on Psychological Science, 1(3), pp.181-210.

Sweller, J., 1988. Cognitive load during problem solving: Effects on learning. Cognitive Science, 12(2), pp.257-285.

Szpunar, K.K., Khan, N.Y. and Schacter, D.L., 2013. Interpolated memory tests reduce mind wandering and improve learning of online lectures. Proceedings of the National Academy of Sciences, 110(16), pp.6313-6317.

Vural, O.F., 2013. The impact of a question-embedded video-based learning tool on e-learning. Educational Sciences: Theory and Practice, 13(2), pp.1315-1323.

Zhang, D., Zhou, L., Briggs, R.O. and Nunamaker Jr, J.F., 2006. Instructional video in e-learning: Assessing the impact of interactive video on learning effectiveness. Information & Management, 43(1), pp.15-27.

Sunday, August 16, 2020

Pixel is a powerful, portable, personal pocketful of AI....

My son’s an AI lad. He has expertise in object recognition (currently best in the world on fruit recognition, which can be used to increase yield). He’s also been involved in AI for learning, as he coded the new version of WildFire, an AI-driven content creation service for learning. So he’s my go-to guy for recommendations, and he swears by his Pixel smartphone from Google. As he says, “it’s literally AI in your pocket”. For me, the Pixel is a little sandbox for consumer AI, so it gives us insights into the way technology is moving and therefore the way online learning will move.

Not many products get better after you buy them, but that can be said of this smartphone. As a device it really does deserve to be called ‘smart’, as it uses in-device machine learning. The Pixel 4 uses Neural Core, a TPU chip with plenty of on-board AI features for everything from song recognition to computational photography. The Adaptive Battery feature even uses AI to predict when your battery will run out from your use patterns, and automatically reduces behind-the-scenes activity to lengthen battery life. The Pixel takes AI to a new level with:


Speech to text

Language and image recognition

Learner support

Capture media

Motion sense




Speech to text

Speech to text has come of age and the Pixel automatically transcribes videos. Live Caption will also handle podcasts and audio messages. You can record and export these transcripts and, as text is searchable, keyword triggers can also be set up. Note taking can be transformed if you use Google’s transcription service during Zoom calls.

You can start, save, and search recordings in the Recorder app using Google Assistant. Just say “Hey Google, start recording my voice” to start recording, or “Hey Google, find my voice recording about LXPs” to find that session you had recorded. The saved transcripts can also be easily exported to Google Docs, just choose a recording, tap “Transcript” to show the transcript, then tap the three dots menu on the top-right corner, and tap “Save text to Google Docs.” 

There are all sorts of NLP (Natural Language Processing) tricks you can pull off here, and we already use this in WildFire to transcribe videos, going further by using AI to automatically generate powerful online learning. We have also been using voice as input. Imagine online learning that allows open voice responses from a learner, with automatic semantic interpretation so that feedback can be provided until you get things right; that is exactly what we have done in WildFire.
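To make the idea concrete, here is a toy sketch of that feedback loop (this is not WildFire’s actual method; a real system would use proper semantic NLP, whereas this stand-in just measures string similarity with Python’s standard-library difflib):

```python
from difflib import SequenceMatcher

def score_response(expected, response):
    """Crude similarity score between an expected answer and a learner's spoken response."""
    return SequenceMatcher(None, expected.lower(), response.lower()).ratio()

def give_feedback(expected, response, threshold=0.8):
    """Accept answers close enough to the expected one; otherwise prompt a retry."""
    if score_response(expected, response) >= threshold:
        return "Correct - well done."
    return "Not quite - try again."

# Transcribed voice responses are plain text, so they plug straight in.
print(give_feedback("cognitive load", "cognitive load"))
print(give_feedback("cognitive load", "working memory"))
```

The point of the sketch is the loop, not the matcher: the learner keeps responding, and feedback is given until the semantic match clears the threshold.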

Language and image recognition

Google Lens must be one of the best, but least used, AI features on smartphones. You simply point and shoot at a plant, tree, flower, animal, work of art, landmark, restaurant, product and much more. The good news is that an ‘education’ feature for Lens is in the works. You will be able to point your camera at an assignment or homework question and get instant help. The word is that Google will focus initially on maths, and we have seen how Photomath uses AI to solve a mathematical problem and unpack the steps from question to answer. There is huge scope here for learner engagement, support and eventually online teaching with this line of development.

For languages, Lens already translates in real time, whether it be a foreign menu or words on a page. You needed to be online in the past, but it looks as though this will soon be possible offline.

In some subjects, imagery may be important: biology, geography, architecture, art. Image recognition leading to relevant educational links is already there; with a purely educational mode it could be made even more relevant to education.

But the real advantage comes with its text recognition, which springs off into interpretation, recommendations, transcription and translation. Most subjects would benefit from this kind of help. What most people will use it for in an education or training context is its ability to take text from the real world (a document, manual, whiteboard, book or business card) and turn it into text on the phone that can be used in any way you want. It’s the links from the text that matter: links to a free educational service, a person who can help, or a possible training course.

Learner support

The primary problem with most assistant interaction is that it is ‘single shot’: you ask for something and it responds, once. Google Assistant is, as expected, forging ahead with continued conversation, or multi-turn dialogue. You say “Okay, Google” and Google Assistant will respond, but it will also continue to listen for additional commands, continuing the dialogue until you say “Stop” or “Thank you” to end the conversation. This is a fiendishly difficult software problem to solve and needs AI to do it well.
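The difference between single-shot and multi-turn interaction is essentially one of conversational state. This hypothetical sketch (not Google’s implementation; the wake and end phrases are taken from the description above) shows the state machine in miniature:

```python
class MultiTurnAssistant:
    """Toy sketch of continued conversation: keep listening until an end phrase."""
    END_PHRASES = {"stop", "thank you"}

    def __init__(self):
        self.active = False   # are we inside a conversation?
        self.history = []     # earlier turns, available as context for later ones

    def hear(self, utterance):
        text = utterance.strip().lower()
        if text == "okay, google":
            self.active = True
            return "Listening..."
        if not self.active:
            return None  # single-shot assistants always end up back here
        if text in self.END_PHRASES:
            self.active = False
            return "Conversation ended."
        self.history.append(text)
        return f"Handling: {text}"

a = MultiTurnAssistant()
for said in ["Okay, Google", "play some jazz", "a bit louder", "Thank you"]:
    print(a.hear(said))
```

The hard AI problem is, of course, not this loop but deciding what each turn means given the accumulated history; the sketch only shows why that history has to be kept at all.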

This could be a big leap for learning, as you can take deeper dives into topics, linked to actions and sharing. What makes it easier is the transcription of your words on the screen to confirm that it has captured what you intended. You find that this all increases the sense of flow, of it being a conversation. This is the direction of travel for conversational interfaces and chatbots. True dialogue promises to provide more than just answers to questions; at some point it will also provide real Socratic dialogue, in other words, teaching.

Capture media

Phones have largely replaced cameras for most consumer use. Taking pictures and videos for Facebook, Twitter, Instagram and TikTok (see why TikTok is relevant to online learning) has become a core use of smartphones. Social media has migrated across media: from text, to text and images, to images only, to video, and now to media with the ability to create, filter and edit. In Pixel phones you see this happen at a very sophisticated level.

Want a sharp portrait, a good picture at night, images of the Milky Way or a decent zoomed image? Google kicks ass on computational photography. Using machine-learning-based white balance and multiple exposures to fix problems in an image, it turns you into an impressive photographer. That is where machine learning, on-device neural engines and overall improvements in both hardware and software performance raise the game in photography. The Pixel literally uses AI as a creative force in photography.
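The “multiple exposures” trick has a simple core idea worth seeing in code: random sensor noise averages out across frames while the true scene does not. This is a bare-bones sketch of exposure stacking (real computational photography adds alignment, weighting and machine-learned processing on top):

```python
def stack_exposures(frames):
    """Average several noisy exposures pixel-by-pixel to reduce random noise.

    `frames` is a list of equally sized 2D lists of pixel intensities.
    """
    height, width = len(frames[0]), len(frames[0][0])
    n = len(frames)
    return [[sum(f[y][x] for f in frames) / n for x in range(width)]
            for y in range(height)]

# Three noisy readings of the same two-pixel scene; averaging recovers
# the true values (100 and 200) from the jittery measurements.
frames = [[[98, 202]], [[102, 198]], [[100, 200]]]
print(stack_exposures(frames))  # [[100.0, 200.0]]
```

Night Sight and astrophotography modes are, at heart, this principle pushed hard: many short exposures stacked so faint signal accumulates while noise cancels.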

As video has become an important medium in learning, smartphones like these, combined with the sharing capability of online platforms, allow learning through video to happen with ease.


Motion sense

Basically, your smartphone is getting smarter, as it is now aware of where it is: not simply its GPS position, but where it is in relation to the world around it. Google’s Soli, a motion-sense chip, uses smart sensors and data analysis to detect how big something is, where it is and how close your phone is to that object. It shoots out electromagnetic waves, and these waves bounce back to be interpreted by AI so that positions and objects can be recognised. It has a 180-degree field of view, better than the human eye, which covers only 120 degrees, concentrated in a much smaller arc, as most of it is peripheral vision.

This is crazy, but as you go to pick up your Pixel, it sees your hand, switches on the face recognition sensors, recognises you and unlocks your phone. It recognises lots of face orientations, even upside down, for unlocking secure payment... all in one motion. This is only one of a number of applications for motion sense. And if you’re worried, as some were, about a phone that can be unlocked while you are sleeping, or even dead, they have introduced a blink recognition system.

Motion sense also delivers gesture recognition. This touchless approach could be huge in the future, especially in a more Covid-aware world. We have contactless payment, and contactless interfaces are now here. A swipe of the hand to move back and forth through songs, a pinch of the fingers for a button press; we could soon see an agreed language, like sign language, for interactions on lots of different devices: smartphones, laptops, AR, VR, controls within cars. We gesture all the time, almost unconsciously pointing to imaginary watches when describing time, and we have moved towards ever more transparent interfaces, with touchscreen, voice and now gestures.

AI for Learning

As I explain in my book ‘AI for Learning’, it is the invisible hand and eye of AI that fuelled this change. In learning, these frictionless interfaces are easier to learn and use. They also reduce cognitive load, leaving more bandwidth to learn.

Your phone may also know not just where you are, but what is around you, allowing the start of more sophisticated context reading for online job aids and learning. Suppose my phone knows what building I’m in, where I am in that building, what object I am close to, and also what project I’m working on; it can then make an educated guess as to what I’m likely to need in terms of push and pull nudges and support. This could be performance support on steroids, where the whole move towards learning in the workflow is enriched by AI.
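The “educated guess” above is, at its simplest, a rule match over the current context. This is a purely illustrative sketch (the context keys, rules and resources are all invented for the example, not any real product’s API):

```python
def suggest_support(context, rules):
    """Return support resources whose conditions all match the current context.

    `context` is a dict such as {'building': 'HQ', 'room': 'lab', 'project': 'onboarding'};
    `rules` is a list of (required-context, resource) pairs.
    """
    suggestions = []
    for conditions, resource in rules:
        if all(context.get(key) == value for key, value in conditions.items()):
            suggestions.append(resource)
    return suggestions

rules = [
    ({'room': 'lab', 'project': 'onboarding'}, 'Lab safety checklist'),
    ({'building': 'HQ'}, 'Visitor wifi guide'),
]
print(suggest_support({'building': 'HQ', 'room': 'lab', 'project': 'onboarding'}, rules))
```

A genuinely AI-driven version would learn these rules from behaviour rather than hand-code them, but the shape of the problem (context in, timely nudge out) is the same.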


The smartphone has been astoundingly successful as a consumer and professional device. From its brick-like dimensions in the 1970s and 80s, it quickly developed from voice-only into text, photographs, then video. Its interface evolved from buttons to touchscreens, and it is now a powerful computer that can do much of what a desktop computer can do, and more. But the real leap is its AI capabilities, as these devices have AI-embedded hardware for a lot more offline punch, as well as useful functionality. Your phone learns about you, personalises your experiences, knows where you are and now what’s around you. This all helps deliver the support you need to work, learn and improve your own expertise. We would be wise to look at the evolution of these devices as the evolution of how learners have interfaced, and will interface, with online learning. The main lesson is that the AI in every modern smartphone will be in all online learning in the future.

Friday, August 14, 2020

AI and ethics - it's not as good as you think and not as bad as you fear

Joanna Bryson, one of the world’s experts in AI and ethics, is right when she points out that the big problem in AI and ethics is ‘anthropomorphising’. AI is competence without comprehension: it can beat you at chess, Go and poker but doesn’t know it has won. Literally hundreds of AI and ethics groups have sprung up over the last couple of years. Some are serious international bodies, like the EU and IEEE, but it is important to examine the issue while remaining level-headed. The danger is that we destroy the social goods that AI offers by demonising it before it has been tried.

Having just launched a new book ‘AI for Learning’ in which I tackle these ethical issues in some detail, I thought I’d provide a taster for the ethical concerns as they may affect the world of learning. 


Let’s get one moral issue out of the way – the existential threat. This often centres around Ray Kurzweil’s ‘Singularity’, the idea that AI will at some point transcend human intelligence and become uncontrollable. Other AI experts, such as Stuart Russell, Brett Frischmann and Nick Bostrom, have speculated at length on ways in which runaway AI could be a threat to our species. Although there are possible scenarios where runaway AI leads to our demise as a species, this is not an issue that should worry us much in using AI for learning. Many, such as Steven Pinker, Daniel Dennett and other serious researchers in AI, are sceptical of these end-of-days theories. In any case, it is highly unlikely that AI for education will do much other than protect us from such scenarios.


Much more relevant is the topic of ‘bias’. The problem with many of the discussions around bias in AI is that the discussions themselves are loaded with biases: confirmation bias, negativity bias, immediacy bias and so on. Remember that AI is ‘competence without comprehension’, and competences can be changed, whereas all humans have cognitive biases, which are difficult to change. AI is just maths, software and data. Its bias is mathematical bias, for which there are definitions. It is easy to anthropomorphise these problems by seeing one form of bias as the same as the other. That aside, mathematical bias can be built into algorithms and data sets. What the science of statistics, and therefore AI, does is quantify and try to eliminate such biases. This is, essentially, a design problem, and I don’t see much of a problem in the learning game, where datasets tend to be quite small, for example in adaptive learning. It becomes a greater problem when using a model such as GPT-3 for learning, where the data set is massive and can literally produce essay-like content at the click of a button. Nevertheless, I think that the ability of AI to be blind to gender, race, sexuality and social class may, in learning, make it less biased than humans. We need to be careful when it comes to making decisions that humans often make, but at the level of learner engagement and support there is plenty of low-hanging fruit that need be of little ethical concern.


The most valuable companies in the world are AI companies, in that their core strategic technology is AI. As to the common charge that AI is largely written by white coders, I can only respond by saying that the total number of white AI coders is massively outgunned by Chinese, Asian and Indian coders. The CEOs of Microsoft and Alphabet (Google) were both born and educated in India, and the CEOs of the three top Chinese tech companies are Chinese. Having spent some time in Silicon Valley last year, I found it one of the most diverse working environments I’ve seen in terms of race. We can always do better, but this should not, in my view, be seen as a crippling ethical issue.


Gender is an altogether different issue and a much more intractable problem. There seems to be bias in the educational system among parents, teachers and others to steer girls away from STEM subjects and computer studies. But the idea that all algorithms are gender-biased is naïve. If such bias does arise one can work to eliminate the bias. Eliminating human gender bias is much more difficult.


It is true that some AI is not wholly transparent, especially deep learning using neural networks. However, we shouldn’t throw out the baby with the bathwater… and the bath. We all use Google and academics use Google Scholar, because they are reliably useful. They are not transparent. The problems arise when AI is used to say, select or assess students. Here, we must ensure that we use systems that are fair. A lot of work is going into technology that interprets other AI software and reveals their inner workings.


A danger expressed by some educators is that AI may automate and therefore dehumanise the process of learning. This is often within discussions of robot teachers. I discuss the fallacy of robot teachers in the book. It is largely a silly idea, as silly as having a robot driver in a self-driving car. It is literally getting the wrong end of the stick, as AI in learning is largely about support for learners. Far from dehumanising learning it may empower learners.


The impact of AI on employment is a lively political and economic topic. Yet, before Covid, we had record levels of employment in the US, UK and China. There seems to be a fair amount of scaremongering at learning conferences, where you commonly see completely fictional quotes, such as ‘65% of children entering primary school today will be doing jobs that have yet to exist’. Even academic studies tend to be hyperbolic, such as the Frey and Osborne (2013) report from Oxford University that claimed ‘47% of jobs will be automated in the next two decades’. Seven years in and the evidence that this is true is slim. What is clear is that skills in creating and using AI for learning will be necessary. Indeed, Covid has accelerated this process. I categorise and list these new skills in the book.


I touch upon all of these issues in the book and stick to my original premise that AI is ‘not as good as you think it is and not as bad as you fear’. Sure, there are ethical issues, but they are similar to general ethical issues in software and any area of human endeavour where technology is used. It is important not to see AI as separate from software and technology in general. That is why I’m on the side of Pinker and Dennett in saying these are manageable problems. We can use technology to police technology; indeed, AI is used to stop sexist, racist and hateful text and imagery from appearing online. Technology is always a balance between good and bad. We drive cars despite the fact that 1.3 million people die horrible deaths every year in crashes and many more are seriously injured. Let’s not demonise AI to such a degree that its benefits are not realised; as I discuss in the book, in education and training the benefits are considerable.


AI for Learning

The book ‘AI for Learning’ is available on Amazon. In addition to ethics it covers many facets of AI for learning; teaching, learning, learning support, content creation, chatbots, learning analytics, sentiment analysis, and assessment.



Bryson, J.J., Diamantis, M.E. and Grant, T.D., 2017. Of, for, and by the people: the legal lacuna of synthetic persons. Artificial Intelligence and Law, 25(3), pp.273-291.

Kurzweil, R., 2005. The singularity is near: When humans transcend biology. Penguin.

Russell, S., 2019. Human compatible: Artificial intelligence and the problem of control. Penguin.

Clark, D., Review of Human Compatible

Frischmann, B. and Selinger, E., 2018. Re-engineering humanity. Cambridge University Press.

Clark, D., Review of Re-engineering humanity

Bostrom, N., 2014. Superintelligence: Paths, Dangers, Strategies. Oxford University Press.

Frey, C.B. and Osborne, M.A., 2013. The future of employment: How susceptible are jobs to computerisation? Oxford Martin School working paper.

Pinker, S., 2018. Enlightenment now: The case for reason, science, humanism, and progress. Penguin.

Dennett, D.C., 2017. From bacteria to Bach and back: The evolution of minds. WW Norton & Company.

Clark, D., Review of From bacteria to Bach and back