Saturday, April 17, 2021

Be heard and not seen… amazing rise of the podcast…

The rise of the podcast: let’s face it, no one saw that coming. I knew it had arrived when I saw podcasts being reviewed in the pages of the press. I, like hundreds of millions of others, am an addict. I blogged about users wanting to turn their cameras OFF during online learning. Podcasts provide ample evidence that, if it’s just talking heads, the camera of the teacher, lecturer or trainer can also often be switched off. That’s why podcasts are so popular. Time and time again I hear people say they don't miss images and heads when ideas are being discussed, and that they prefer the informality of a conversation or interview to a didactic presentation.

Purity of the podcast

The joy of a podcast is its purity. It doesn’t nag you with over-earnest graphic design. You’re alone with your thoughts and there’s space to think. Oddly, the absence of distracting images or talking heads forces you to focus. Not seeing faces is a plus; they often add little and can be distracting. It’s what people say that matters, not what they look like. This frees the eyes and hands for other things, such as note taking. It’s a medium, not multimedia, and that’s its strength. You have to make an effort, cognitive effort, to actively listen. It’s hard to be lazy when listening to a podcast, whereas you can sit back and let a video wash over you. With audio you’re either in or out; there’s no half-way house.


It’s a rebel medium, with lots of causes. As mainstream media becomes ever more homogenous, our attention has gone online, and podcasts are part of the counter-culture on the web. We had the YouTubers; now we have the podcasters, such as Joe Rogan and a massive array of funny, out-there podcasters breaking all the rules. Traditional media seems so formulaic, so hidebound, with a limited range of voices. Podcasts shatter that model. There’s no editor, little censorship. Swearing is not unusual, taboo subjects common. There is a sense of being on the edge, out there.


Podcasts may have had their precedents in radio, but they are the child of a specific piece of technology. The portmanteau ‘podcast’ combines iPod and broadcast, and was coined by the columnist Ben Hammersley in a 2004 Guardian article. Its ease of production and distribution, streamed or downloaded, means it can be used on almost any device: computer, smartphone or audio speaker.


Key to podcast culture is the ‘series’ with some sort of identity, the podcaster(s), theme or brand. On-demand streaming and downloading gave it legs. It’s a medium in itself and has spawned an entire global industry of platforms, sponsorship and audiences. Podcasts tend to be more personal, with a lead podcaster and interviewee(s), more informal than traditional broadcast media. Conversation is the aim, not a didactic talk. You’re talking with and to people, not at them.


My favourite design principle for the design of learning (and design in general) is Occam's Razor: the minimum number of entities to reach your goal. It is also useful in teaching and for learners. The podcast is an exemplar of this type of design thinking. Give me things in the least cognitively loaded format. I’m happy with text if it’s just ideas, podcasts for discussions, graphics if I need something illustrated visually, video for drama and its other genres. Don’t pack out screens or use media that is not matched to the learning content. Less is more.

Friday, April 16, 2021

AI revolutionises Higher Education in China: Open University project gets UNESCO Prize

After writing a book, ‘AI for Learning’, I have given a lot of talks and podcasts on the need to use smart software to make people smarter. That means using the technology of the age: AI and data. In the book (p228), and in many of these talks, I explained how China is forging ahead with AI in this field. Unlike the West, China has focus, investment and a view that access to education needs to be cheaper, faster and smarter, with a massive increase in access.

Meanwhile, we continue with a view that Higher Education needs to remain scarce and expensive, very expensive. We put more attention into AI and Ethics than real projects. Our Universities and colleges do little more than write reports on AI for learning. That’s a shame.

Meanwhile the Open University of China has been awarded a UNESCO Prize for its use of AI to empower rural learners. Their ‘One College Student Per Village’ is an ambitious and inspiring initiative that puts equitable access at the heart of their offer. This is all about improving access to education for the poor. Running since 2004, financed by the Chinese Ministry of Education to tackle access problems, it does far more than reach out with infrastructure. AI lies at the heart of their efforts to provide scalability.


In its efforts to provide quality learning experiences, the OUC set up over 500 cloud-based classrooms and smart classrooms in poorer areas in 31 provinces, municipalities and autonomous regions. The trick was to make the courses demand-led by asking what local people wanted and providing largely practical, vocational courses. They also built the courses to be accessible on mobile devices for farmers and those in rural professions and places. The numbers are impressive:

·      29 programmes (using AI) 

·      825,827 learners enrolled

·      529,321 graduated

·      1,500 OUC study centres

·      300 online courses

·      100,000 mini-lectures

·      all open to the general public

AI for Learning

But the secret sauce, sweet without the sour, is AI. The learning is personalised using personal and aggregated data. This adaptive learning means that different students take different paths through the courses, a bit like the SatNav or GPS in your car: go off course and it re-sequences the content and provides feedback to get you back on course. I’ve been working with this for five years – believe me, it works.
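To make the SatNav analogy concrete, here is a toy sketch of an adaptive sequencer. It is illustrative only, not the OUC's actual system, and the topic names and the mastery-update rule are invented for the example: a wrong answer lowers a topic's mastery estimate, and the sequencer re-routes the learner to the weakest topic.

```python
# Toy adaptive sequencer (illustrative only, not any real product's algorithm).
# Each learner carries a mastery estimate per topic; wrong answers lower the
# estimate, so the path "re-routes", SatNav-style, towards weak topics.

def next_topic(mastery):
    """Pick the topic with the lowest mastery estimate."""
    return min(mastery, key=mastery.get)

def update(mastery, topic, correct, step=0.2):
    """Nudge a mastery estimate up or down after an answer, clamped to [0, 1]."""
    delta = step if correct else -step
    mastery[topic] = max(0.0, min(1.0, mastery[topic] + delta))
    return mastery

mastery = {"fractions": 0.5, "percentages": 0.5, "ratios": 0.5}
mastery = update(mastery, "ratios", correct=False)  # learner goes 'off course'
print(next_topic(mastery))                          # the weakest topic is served next
```

The point is not the arithmetic but the loop: personal data feeds the estimates, and the sequence of content is recomputed after every answer.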

The really clever stuff is the use of AI to recognise speech (speech-to-text), as well as semantic analysis of answers. This allows open input from students, as opposed to MCQs. I’ve also been working with this for some time in WildFire. This approach allows learners to answer or ask questions, which are automatically recognised through semantic analysis, with feedback then automatically provided by the system. This feedback (it should really be called feedforward) is what oils the wheels of learning and provides real scalability. Immediate feedback with learning opportunities means the system does not depend as much on human tutors. Semantic analysis of learner answers is something we’ve implemented. It is powerful and pedagogically superior to MCQs, as it is more realistic, requires greater cognitive effort and can be more diagnostic for the purposes of feedback.
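As a crude illustration of why open input is more diagnostic than MCQs: even a deliberately simple stand-in for semantic analysis (real systems use far richer NLP than this token overlap, and the example answers are invented) can report which concepts are missing, which is exactly what feedforward needs.

```python
# A deliberately simple stand-in for semantic answer analysis: token overlap
# against a model answer. Real systems go far beyond this, but even here the
# system can see WHICH concepts are missing and target the feedback.

def score_answer(answer, model_answer):
    """Return (overlap score, missing concepts) against a model answer."""
    got = set(answer.lower().split())
    want = set(model_answer.lower().split())
    missing = want - got
    score = len(want & got) / len(want)
    return score, missing

score, missing = score_answer(
    "photosynthesis uses sunlight and water",
    "photosynthesis uses sunlight water and carbon dioxide",
)
print(round(score, 2), sorted(missing))  # the missing terms drive the feedback
```

An MCQ only records right or wrong; open input plus analysis yields a gap list, which is what makes the feedback diagnostic.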

They also use AI for knowledge mapping, automatic content generation and smart chatbots for 24/7 online learner support. This use of AI for content creation is something we’ve been doing at WildFire. It reduces the cost of learning per student, as new content can be created quickly from documents, PowerPoints and videos. The point is to create content quickly and cheaply, using AI for semantic analysis and retrieval practice for high retention.

AI and assessment

The automation of assessment and essay marking is what allows them to scale sophisticated learning to so many people, free from the tyranny of time, place and expensive human effort. Automating much of the assessment allows human tutors to focus on closing knowledge and skills gaps, rather than marking. As Li Ganged, a tutor at the OUC, says, “Automated essay scoring is efficient in that I don’t have to mark these assignments myself but I can get a clear picture of where learners need help.” Using AI to create assessments on all of your content at little extra cost is also something we've been doing.


Another feature of this initiative, something our rigid system can’t really handle, is the mix of programmes, degrees, diplomas and short courses. The focus on vocational training is also something we desperately need but underfund in favour of longer degree courses. We have to move beyond our sclerotic system of high-cost content creation, high-cost delivery and dependence on physical campuses. There’s too much at stake here for us to focus on the education of the few at the expense of the many. Who would have thought that China would be leading the way? They’ve just clocked up over 18% economic growth, but it’s not just about that; it’s the simple fact that they are leading the world in educational innovation.

Wednesday, April 14, 2021

Micro-videos: What are they? How to make them sticky? How to deliver them? What’s a good interface? How to make them?

What are they?

Micro-videos are short videos. How short? Often very short: 15 seconds on TikTok, two minutes max on Twitter, up to six minutes or so at most. There is no absolute rule here, but the research suggests that people duck out of learning videos at around six minutes. The idea is to be short, avoid cognitive overload, be hard-hitting, increase retention and deliver relevant learning. The learning world has picked up on YouTube, Twitch, Facebook, Twitter, Instagram and TikTok. It’s not hard to find good examples.

How to make ‘em stick?

You can try too hard here, with too much animation, sound effects and noise. This is learning; learning is not a circus and we are not clowns. It is important to match the style to the content. Drama works for behavioural and attitudinal shift; in fact video’s primary strength is in motivation, attitudes and behaviour. It is not so hot on knowledge and conceptual learning. The transience effect means you quickly forget detail in video: like a shooting star, your 20-second attention span means the knowledge burns up behind you, and you forget. Think back to the last three-series box-set you watched. How much do you really remember? But you walk away with the ‘gist’ of things – impressions. Think of a micro-video not as the primary learning event but as the trigger or catalyst for further learning.

Some tricks to make ‘em stick? Here are six starters…

Surprise with a question, counterintuitive point or dramatic statement. First impressions matter… don’t open with a learning objective!

Take it slow. Learning needs attention and the mind needs time to digest ideas. Play it a little slower than they do in the movies.

Summarise at the end. Learning is a process and not an event so summarise points at the end.

Calls to action. Make them go off and DO something, then report back on what they did and what they found easy and difficult.

Leave them hanging… that’s what a good TV series will do… make you want to come back for more…

Follow up with some active learning using the narration from the video. We do this with WildFire, where AI creates the content.

In a sense, video is rarely ever enough. It needs to be supplemented by more active learning. It tends to give the illusion of learning.

How do you deliver them?

Most people will have an LMS/VLE. But this may be the most unsatisfactory method of delivery, as they are largely repositories, not designed for sophisticated delivery. That’s where an LXP (Learning eXperience Platform) scores better: you can pull and push micro-videos to and from learners in the workflow. Learning is a process, not an event. Emails can be just as powerful. I’ve seen some great examples of 90-second videos delivered by email, which is still a popular and powerful communications tool in organisations. Remember also that YouTube and Vimeo are learning platforms in their own right, with tools for privacy, editing and transcription. Use them. You may also want to consider analytics. YouTube and Vimeo work, as will your LMS or LXP. Just decide what data you want up front and what you want to do with it. Dashboards don’t make decisions, you do.

What’s a good interface?

An interface that has emerged as dominant is the Netflix, YouTube, Vimeo, Prime interface, with its tiles, horizontal scrolling for more, vertical scrolling for themes, along with search, maybe a 'playlist' or 'what’s new'. This makes great use of limited screen real estate and, above all, is now familiar to almost everyone on the planet. Never underestimate the power of search, especially deep search, into the narration and detailed content of videos. Mobile’s different. Instagram and TikTok are the masters there.

How do you make them?

Take your smartphone and record. It’s really that simple. You can also record in PowerPoint over your slide images. For more complex stuff there are tools like Vyond, Powtoon, Vlognow, Adobe Animate, Articulate Replay, Storyblocks – a ton of them. Although I’m not a great fan of animated, cartoony stuff; I often think it would be better as a single image, like an infographic. There’s Captivate and other similar tools for capturing ‘how to’ software tasks. Remember some simple rules about framing: it’s all in the eyes, so go for close-ups in learning. If you’re showing how to do something, shoot first-person, not third-person, i.e. put the learner in the shoes of the doer.


Micro-videos are coming of age, the result of their popularity on consumer devices, platforms and social media. But remember that learning is not entertainment. Learning micro-videos need to be made with learning in mind. If it’s edutainment you want, beware of too much ‘tainment’ and not enough ‘edu’. That’s the big mistake. For a much deeper look at the research on video for learning click here.

AI: America innovates, China implements, EU regulates... where does that leave the UK?

America innovates, China implements, EU regulates 

The EU's AI obsession is regulation. That's fine and I have little criticism of the direction of such regulation, apart from the usual bureaucracy. What I do find depressing is the dampening effect this has on actual effort. Our own little UK 'AI and Ethics' group was more like a Parish Council: a rather amateurish academic attempt to tell companies how to run their business, by people who don't know much about business. It amounted to little more than a rather dull checklist. Thankfully, it remains unknown and largely ignored. AI is not as good as you think it is and not as bad as you fear.

In AI for Learning in Higher Education, we have several world-class companies in the UK. One has received a seven-figure investment from a US university but has literally zero UK customers. Our effort in this area is dominated by third-rate AI-and-Ethics commentary. In AI itself, however, we have a ton of talent.


Where does this leave the UK? We should diverge from the EU here. In fact, we already have: DeepMind and other AI companies in the UK looked to the US, not Europe or China, for investment and markets. Similarly in my own field, AI for Learning, there is little UK–EU commercial or M&A activity; it is almost all UK–US. We need to stay innovative and look to those countries not obsessed by negativity around AI and ethics to move forward.

The investment community in London and the US is well connected and most of the deals are on that axis. This has increased post-Brexit, with even more alignment. The EU is linguistically diverse and much messier in terms of marketing and implementation. Few companies see the EU as their target market, preferring the much bigger US market, which is more aligned linguistically, culturally and financially.


China has made the investment and is actually forging ahead with AI for Learning. I've written about this here. They have a strategic view, with huge government targets and investments, that is markedly different from the EU. We have already seen the emergence of large-scale projects in schools and Universities. On the other hand, their attitude towards social scoring and surveillance technology leaves them open to criticism.

Appendix - EU legislation

As I say, the proposed EU legislation is OK, and it has been leaked (probably deliberately). It is, as expected, bureaucratic, with lots of quangos being set up – a typical piece of EU overkill. Some of it, however, is eminently sensible, and Google and others have been asking for this for some time. This is the right level for such discussions, provided it is aligned with other efforts from the IEEE and so on.

To summarise:

1.     Yet another Board! A European Artificial Intelligence Board (one representative for each of the EU27 countries, plus a representative of the Commission and the European Data Protection Supervisor)

2.     Digital Hubs and Testing Facilities to be set up

3.     Member states need inspection bodies for assessment and certification (3rd parties for 5 years)

4.     High risk AI systems tested before release

5.     High risk is, for example, face recognition for physical safety decisions in healthcare, transport or energy

6.     Authorisation for use of biometric identification in public domain

7.     Rules on exploitation of data

8.     Manipulation of human behaviour (to people’s detriment)

9.     Prevents mass surveillance

10.  Disclosure for deep fakes

11.  Voice agents cannot pretend to be human

12.  Emotion recognition has to be made explicit to user

13.  Ban ranking social behaviour (as in China)

14.  Self-assessment requirements for AI used for the purpose of determining access or assigning persons to educational and vocational training institutions

15.  Fines on a GDPR scale

16.  Aim is to prevent abuses with sizeable fines up to 4% global revenue 

17.  Notable exceptions for military and safeguarding public security

18.  SMEs to get privileged access

19.  Exemptions for training data 

20.  Notable get outs for member states (national security worries)

Saturday, April 10, 2021

Monkey plays computer game by mind control, mediated by AI. Here’s 10 implications for learning…

Pager is a macaque. He will go down in history as one of the first sentient beings to play a computer game just by thinking. He was trained to play ‘Pong’ using a joystick, which is fascinating in itself. Then, when the joystick was unplugged, he played by THOUGHT ALONE.

I wrote about this in my book ‘AI for Learning’.

So what are the possibilities for learning?

1. Invisible interface

First there is the promise of the brain interacting with the world without speech or movement. Our fingers are slow input devices; even speech is slow. Imagine being able to conjure up answers, have a dialogue, practise a language and engage with learning experiences without the messiness of an interface. The invisible interface eliminates all of that pecking away at screens and keyboards.

When reading and writing from and to the brain, you don’t want to damage anything and you need precise control over a range of electric fields in both time and space, also delivering a wide range of currents to different parts of the brain. The device uses Bluetooth to and from your smartphone. Indeed, it is the mass production of smartphone chips and sensors that have made this breakthrough possible. The smartphone may in the end be merely a bridge to its own obsolescence.

Our current interfaces, keyboards, touchscreen, gestures and voice, could also be bypassed, giving much faster thought ‘to and from machine’ by tapping into the phonological loop. This is an altogether different form of interface, more akin to VR. Consciousness is a reconstructed representation of reality anyway and these new interfaces would be much more experiential as forms of consciousness, not just language. Note that Pager is not executing imagined speech but actions.

2. Reducing cognitive load

This invisible interface feature alone will save immense amounts of cognitive effort, thereby reducing cognitive load. This matters, as cognitive load is a rate-limiting step in learning. To give but one example, when we watch video to learn, we have a 20-second span of attention and can hold only 2–4 things in our heads at any one time. This means that the learning experience is largely one of forgetting. We get the illusion of learning; we feel as though we’re learning but, like a shooting star, our memories burn up behind us. Unfortunately, this transience effect severely limits what we learn from video. In fact, this problem of overload is common to most learning.

At present, much of our cognitive effort, the key to learning, is wasted on the interface: learning how to use it, recognising icons and manipulating things to stretch the limited screen real estate. We’re so busy scanning, clicking, scrolling and manipulating that it harms learning. Eliminate that need and the effort goes into learning itself. UX design will disappear into understanding the psychology of learning, not the ergonomics of screens.

3. Accelerated learning

Learning is a relatively permanent change to long-term memory. If we can use AI, as they do in this experiment, to read data from our minds, then good pedagogy can be applied: immediate feedback to propel the learner forward. Feedback should be renamed feedforward, as its purpose is to accelerate learning. Fast, personalised feedback can be provided on the basis of what we are thinking. All sorts of other AI and data-driven techniques, which I examine in my book ‘AI for Learning’, come into play: personalised, adaptive, deep search, chatbots, nudge learning, learning in the flow of work. This advance unlocks many other uses of AI for learning.

It doesn’t end there. This experiment shows something we knew already: that mental rehearsal leads to learning. Note that the Neuralink system captures what Pager learns, calibrates it using AI, then uses that to do what Pager wants without any physical interface. They read Pager’s mind, literally his intentions, in realtime to predict what he wants to do.

It’s the decoding of Pager’s brain signals that is being used here. This is not just about the fibre implants; it is the AI-decoded data that does the smart work. You simply imagine something and the computer knows what you are thinking. These intentions can spark off actions anywhere on a network – for example, implants on the legs of paraplegics, allowing them to walk. More commonly, anyone could use a smartphone mentally, faster than anyone using it physically.

4. Insights on learning

At the very least this will give us insights into the way the brain works. We can ‘read’ the brain more precisely but also experiment to prove/disprove hypotheses on memory and learning. This will take a lot more than just reading ‘spikes’ (electrical impulses from one neuron to many) but it is a huge leap in terms of an affordable window into the brain. If we unlock memory formation, we have the key to efficient learning.
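To illustrate the decoding step described above, here is a toy sketch. It assumes the standard approach in the brain-computer interface literature, not Neuralink's actual code, and the data are simulated: calibrate a linear decoder from firing rates to intended cursor velocity while the joystick is plugged in, then predict intent from the spikes alone.

```python
# Toy spike decoder (illustrative, simulated data; not Neuralink's method).
# "Calibration": fit a linear map from per-channel firing rates to cursor
# velocity. "Joystick unplugged": predict intended movement from rates alone.

import numpy as np

rng = np.random.default_rng(0)
n_samples, n_channels = 200, 16

# Simulated firing rates, and a hidden linear tuning that generates velocity.
rates = rng.poisson(5.0, size=(n_samples, n_channels)).astype(float)
true_weights = rng.normal(size=(n_channels, 2))  # 2D cursor velocity
velocity = rates @ true_weights

# Calibration phase: least-squares fit of decoder weights from the data.
weights, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# Decoding phase: reconstruct intended movement from firing rates only.
predicted = rates @ weights
print(np.allclose(predicted, velocity, atol=1e-6))  # decoder recovers intent
```

Real decoders must cope with noise, drift and non-linearity, which is where the AI earns its keep, but the core idea is exactly this calibrate-then-predict loop.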

5. Read memories

Memories are of many types: complex, distributed phenomena in the brain. Musk talked eloquently about being able to read memories, meaning they could be stored for later retrieval. Imagine having cherished memories stored to be experienced later, like your wedding photos, only as felt, conscious events, like episodic memories. There are conceptual problems with this, as memory is a reconstructive event, but at least these reconstructions could be read for later retrieval. At the wilder end of speculation, Musk imagined that you could ‘read’ your entire brain, with all of its memories, store this and implant it in another device.

6. Write memories

Reading memories is one thing. Imagine being able to ‘write’ memories to the brain. That is, essentially, learning, especially if it bypasses the limitations of working memory. If we can do this, we can accelerate learning. This would be a massive leap for our species. Learning is a slow and laborious process. It takes 20 years or more before we become functioning members of society, and even then we forget much of what we were taught and learned. Our brains are seriously hindered by the limited bandwidth and processing power of our working memory. We are easily distracted, get demotivated, can’t upload or download, and sleep for one third of our lives. Overcoming those blocks, by writing directly to the brain, would allow much faster learning. Could we eliminate great tranches of boring schooling? Such reading and writing of memories would, of course, be encrypted for privacy. You wouldn’t want your brain hacked!

7. Imagination

This is not just about memories. It is our faculty of imagination that drives us forward as a species, not just in mathematics, AI and science but also in art and creativity. Think of the possibilities in music and other art forms, the opportunities around the creative process, where we could have imagination prostheses.

8. Consciousness

In my book I talk about the philosophical discussion around extended consciousness and cognition. Some think the internet and personal devices like smartphones have already extended cognition. The Neuralink team are keenly aware that they may have opened up a window on the mind that may ultimately solve the hard problem of consciousness, something that has puzzled us for thousands of years. If we can really identify correlates between what we think in consciousness and what is happening in the brain and can even simulate and create consciousness, we are well on the way to solving that problem.

9. End to suffering

But the real long-term win here is the opportunity to limit suffering, pain, physical disabilities, autism, learning difficulties and many forms of mental illness. It may also be possible to read electrical and chemical signals for other diseases, leading to their prevention. This is only the beginning, like the first transistor or telephone call. It is a scalable solution, and as versions roll out with more channels, better interpretation using AI and coverage of more areas of the brain, there are endless possibilities. This event was, for me, more important than man landing on the moon, as its focus is not on grand gestures and political showmanship but on reducing human suffering. That is a far more noble goal. It is about time we stopped obsessing over the ethics of AI, with its endless dystopian navel-gazing, and recognised that AI has revolutionary possibilities in the reduction of suffering.

10. Neural interfaces are here

Musk showed three little piggies in pens: one without an implant, one that had had an implant, since removed without any ill effects, and one with an implant (they showed the signal live). Using a robot as surgeon, the Neuralink tech can be inserted in an hour, without a general anaesthetic, and you can be out of hospital the same day. The coin-sized device sits in the skull, beneath the skin. Its fibres are only 5 microns in diameter (a human hair is 100 microns) and it has ten times the channels of the Utah array, with a megabit bandwidth to and from your smartphone. All channels are read and write.

From a pig in 2020 to playing a computer game in realtime in 2021. AI, robotics, physics, material science, medicine and biology collided in a Big Bang event, where we saw an affordable device that can be inserted into your brain to solve important spinal and brain problems. By problems they meant memory loss, hearing loss, blindness, paralysis, extreme pain, seizures, strokes and brain damage. They also included mental health issues such as depression, anxiety, insomnia and addiction. Ultimately, I have no doubt that this will lead to a huge decrease in human suffering. God doesn’t seem to have solved the problem of human suffering; we as a species, through science, are on the brink of doing it by and for ourselves.

Other companies are working on other neural interfaces. One promising line is a brain interface delivered via a stent in a brain blood vessel, a ‘stentrode’. This is easily inserted and gets incorporated into the tissue.

Tech for good...