Sunday, June 09, 2019

Can AI be creative?

A friend, Mark Harrison, filmed me for his film on AI and creativity. It made me think - really think. Can AI be creative? Easy to ask and difficult to answer, because it involves complex philosophical, aesthetic and technical issues. Whenever the subject is brought up in conversation, I can feel the visceral reaction among many - that creativity is that last bastion of humanity, what makes us human-all-too-human, and not the domain of machines. Yet...

Problems with definition
The problem with ‘creativity’ is that it is so difficult to pin down. The word ‘creative’ is a bit like Wittgenstein’s word ‘game’. A game can be a sport, board game, card game, even just bouncing a ball against a wall. It defies exact definition, as words are defined not by dictionaries but by use. So it is with ‘creativity’: it can describe a work of art, creative play in sport, creative decision making, even creative accounting.
This is also a problem with AI, the phrase coined by John McCarthy in 1956. There is no exact mathematical or computing definition for AI and it is many different things. Like ‘Ethics and AI’, ‘Aesthetics and AI’ suffers from difficulties in definition, anthropomorphism and often a failure to discuss the philosophical complexities of the issue. That, of course, does not prevent us from trying.

Is AI intrinsically creative?
Language, poetry, music and image generators have used many AI techniques, often in combination, as well as drawing on outside data sources: TALE-SPIN, MINSTREL and BRUTUS for storytelling, JAPE and STANDUP for jokes, ASPERA for poetry, AARON, NEvAr and The Painting Fool for visual works of art. In addition, the use of AI in areas such as research, maths, business and other domains could also be seen as intrinsically creative. When problems in these fields are solved in an innovative manner, are these creative acts? There are literally dozens of systems that claim to have produced creative output.
Some AI systems, often combinations of AI techniques, claim creative output, as they can produce, in a controlled or unpredictable manner, new, innovative and useful outputs. Contemporary AI techniques such as neural networks, machine learning and semantic networks, some claim, generate creative output in themselves. Stephen Thaler claims that neural networks and deep learning have already exhibited true creativity, as they are intrinsically creative.
Recently, GPT-2, a model from the not-for-profit OpenAI, showed how potent generative AI can be, producing articles, essays and text from just general queries and prompts. Google’s Deep Dream is another open-source resource that uses neural networks to produce strange psychedelic imagery, used in print and moving images, such as music videos. DeepArt produces new images in the style of famous artists.
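To give a flavour of how such text generation is driven in practice, here is a minimal sketch, assuming the Hugging Face transformers library and the small, publicly released GPT-2 checkpoint; the prompt is purely illustrative.

```python
from transformers import pipeline

# Minimal sketch: continue a short prompt with the small, publicly released
# GPT-2 model (assumes the 'transformers' package is installed).
generator = pipeline("text-generation", model="gpt2")
prompt = "Can AI be creative? Some researchers argue that"
result = generator(prompt, max_length=60, num_return_sequences=1)
print(result[0]["generated_text"])
```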
Others, however, argue that if software has to be programmed by humans it is by definition not creative, that AI can never be creative because it can do nothing other than transform inputs into outputs. The rebuttal is that this argument could also be applied to humans. So let’s look at some specific creative domains.

Creativity, AI and language
There is linguistic creativity using AI around many forms of language, everything from puns, sarcasm and irony to similes, metaphors, analogies, witticisms and jokes. Sometimes linguistic creativity involves the intensification of existing rules, sometimes the breaking of these rules. Narrative Science and many other companies have been using AI to generate text for sports, financial and other articles. These have been widely syndicated, published, read and evaluated.

Creativity, AI and games
DeepMind’s Atari agent, when it played Breakout, did something quite astonishing. It tunnelled up either side of the wall of bricks, so that the ball could bounce along the top and destroy the bricks from above, something human players rarely did. In Chess and Go we see this a lot: seemingly odd, unorthodox and surprising moves, that turn out to be turning points that win the game, are masterfully creative. In computer games such as Dota 2, AI agents are also beating humans in complex team environments.

Creativity, AI and music
The one area of computational creativity that has received most attention is music. Could AI-composed music win a Grammy? It hasn’t yet, but some argue that one day it could. Classical music, many would say, is a crowning human achievement. It is regarded as high art and its composition as creative and complex. Jazz is wonderfully improvisational. Whatever the genre, music has the ability to be transformative and plays a significant role in most of our lives. But can AI compose transformative music?
At a concert in Santa Cruz the audience clapped loudly and politely praised the pieces played. It was a success. No one knew that it had all been composed by AI. Its creator, or at least the author of the composer software, was David Cope, a professor at the University of California, Santa Cruz, and an expert in AI-composed music. He developed software called EMI (Experiments in Musical Intelligence) and has been creating AI-composed music for decades.
Prof Steve Larson, of the University of Oregon, heard about this concert and was sceptical. He challenged Cope to a showdown, a concert where three pianists would play three pieces, composed by:
   1. Bach
   2. EMI (AI)
   3. Larson himself
Bach was a natural choice, as his output is enormous and his style distinctive. Larson was certain of the outcome, and in front of a large audience of lecturers, students and music fans, in the University of Oregon Concert Hall, they played the three pieces. The result was fascinating. The audience believed that:
   1. Bach’s was composed by Larson
   2. EMI’s piece was by Bach
   3. Larson’s piece was composed by EMI.
Interesting result. (You can buy Cope’s album Classical Music Composed by Computer.) 
Iamus, named after the Greek god who could understand birdsong, created at the University of Malaga, composed a piece called Transits – Into the Abyss, which was performed by the London Symphony Orchestra in 2012 and also released as an album. Unlike Cope's software, Iamus creates original, modern pieces that are not based on any previous style or composer. Their Melomics website has an enormous catalogue of music and an API to allow you to integrate it into your software. They even offer adaptive music that reacts to your driving habits or lulls you to sleep in bed by reacting to your body movements.
Further examples of the Turing Test for music have been applied to work by Kulitta at Yale. But is a Turing test really necessary? One could argue that all we are doing is fooling people, and that a machine which passes by imitation is simply cheating. Cope has been creating music with computers since 1975, when he used punch cards on a mainframe. He really does believe that computers are creative. Others are not so sure and argue that his AI simply mimics the great work of the past and doesn’t produce new work. Then again, most human composers also borrow and steal from the past. The debate continues, as it should. What we need to do is look beneath the surface to see how AI works when it ‘composes’.
The mathematical nature of harmony and music has been known since the Pre-Socratics, and music also has strong connections with mathematics in terms of tempo, form, scales, pitch, transformations, inversions and so on. Its structural qualities make it a good candidate for AI production.
Remember - AI is not one thing, it is very many things. Most of its techniques have been used, in some form, to create music. Beyond mimicry, algorithms can be used to make compositional decisions. One of the more interesting phenomena is improvisation through algorithms that can, in a sense, randomise and play with algorithmic structures, such as Markov chains and Monte Carlo tree search, to create not deterministic outcomes but compositions that are uniquely generated. Evolutionary algorithms have been used to generate variations that are then honed towards a musical goal. Algorithms can also be combined to produce music. This use of multiple algorithms is not unusual in AI and often plays to the multiple modalities of musical structure, exploiting different strengths to produce aesthetically beautiful music. In a more recent development, machine learning presents data to the system, which learns from that data and goes on to refine and produce compositions, bringing an extra layer of compositional sophistication.
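To make the Markov chain idea concrete, here is a minimal, hypothetical sketch: a first-order chain over pitch names, with a hand-written transition table standing in for what a real system would learn from a corpus of scores.

```python
import random

# Toy first-order Markov chain over pitches. Real systems learn these
# transition tables from existing music rather than hard-coding them.
transitions = {
    "C": ["D", "E", "G"],
    "D": ["C", "E", "F"],
    "E": ["D", "F", "G"],
    "F": ["E", "G"],
    "G": ["C", "E", "A"],
    "A": ["G", "F"],
}

def generate_melody(start="C", length=16, seed=None):
    rng = random.Random(seed)
    note, melody = start, [start]
    for _ in range(length - 1):
        note = rng.choice(transitions.get(note, ["C"]))  # next note depends only on the current one
        melody.append(note)
    return melody

print(" ".join(generate_melody(seed=42)))  # a different seed gives a uniquely generated line
```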
We, and all composers, are organisms built from a bundle of organic algorithms shaped over millions of years. These algorithms are not tied to the material from which the composer is made. Whether the composer is man or machine, music is music. There is no compelling reason to think that organic composers can do things that non-organic algorithms will never be able to replicate, or even surpass.

Creativity, AI and aesthetics
The AI v human composition of music also opens up several interesting debates within aesthetics. What is art? Does ‘art’ reside in the work of art itself or in the act of appreciation or interrogation by the spectator? Does art need intention by a human artist or can other forms of ‘intelligence’ create art? Does AI challenge the institutional theory of art, as new forms of intelligent creation and judgement are in play? Does beauty itself contain algorithmic acts within our brains that are determined by our evolutionary past? AI opens up new vistas in the philosophy of art that challenge (possibly refute, possibly support) existing theories of aesthetics. This may indeed be a turning point in art. If art can be anything, can it be the product of AI? 
This area is rich in innovation and pushes us to think about what art is and could be. Is the defence of the ‘artist’ or ‘composer’ just a human conceit, built on the libertarian idea of human freedom and the sanctity of the individual, that makes us recoil from the idea of AI-generated music and art? The advent of computers, used by musicians to compose and in live performance, has produced amazing music, some created live, even through ‘live coding’. As in other areas where AI is delivering real solutions, music is being created that is music and is liked. These are early days, but it may be that musical composition, with its strong grounding in mathematical structures, is one of those things that AI will eventually do as well as, if not better than, we mere mortals.
Let’s focus on the question, ‘What is art?’ Is it defined in terms of the:

  1. aesthetic effect of the object itself 
  2. intention of the artist
  3. institutional affirmation
If it is 1, AI-created art is eminently possible, as many AI-created works have already been judged to be art.
If it is 2 and you need intention, then we will have to abandon hope for AI, or wait until AI has intention. ‘The Intentional Fallacy’ was written in 1946 and attempted to strip away the intention of the artist. This argument gained momentum in the 1960s with Barthes’ ‘death of the author’, Foucault’s abandonment of the ‘author function’ and Derrida’s ‘nothing but the text’.
If it is 3, then the arts community may at some time agree that something created by AI is art. Put it another way: if you define ‘creative’ as something that is, in its essence, ‘human’, then by definition you have to be human to be creative, and AI can, logically, never be creative. If, however, we accept that AI is not about copying what it is to be human, we leave room for it being called creative. We see this in technology all the time. We didn’t copy the flapping of bird wings, we invented the airplane. We didn’t go faster by copying the legs of a cheetah but by inventing the wheel.
So how do you decide what is art when generating AI output? It is easy to mimic or appropriate existing artworks to create what many regard as pastiches. One fundamental problem here is anthropomorphism. When we say ‘Can AI be creative?’ we may already have anthropomorphized the issue. Our benchmark may already be human capabilities and output, so that creative acts may be limited to human endeavour. 
We may, literally, in our language, be unable to envisage creative acts and works beyond our human-all-too-human abilities. What would such a thing be? How would we make that judgement? Some have proposed formal criteria, such as novelty and quality, to judge creative outputs. The danger with many systems is that the AI has produced lots of works but a human has ‘curated’ them, so that only the best are selected for scrutiny. Another solution is to posit an equivalent of Turing’s Test. The problem is that this assumes creativity is a judgement on the work itself, without requiring intention.

Beyond the human
We seem to want creative technology to be more human but this may be a red herring. It may well be that creativity is that layer that lies just beyond the edge of our normal capabilities – that’s certainly how creative acts are sometimes described, as pushing boundaries. So why not consider acts that come from another source, such as DeepMind, OpenAI or Watson? If AI transcends what it is to be human then we may have to accept that acts of creativity may do the same. Our expectations may have to change. In art we saw this with Duchamp’s urinal (Fountain). Could it be that a Duchamp-like event could take us into the next phase of art history, where it is precisely because it was NOT created by a human that it is considered art – art as a transgressive and transcendental act?

Conclusion
This is a lively field of human inquiry that has a long way to go. It is easy to jump to conclusions and underestimate the complexity of the issues, which need careful unpacking. We need to be clear in the language we use, the claims we make and the evaluative judgements we make, as it is too easy to come to premature conclusions. Moffat and Kelly (2006) produced evidence that people are biased against machines when making judgements about creativity. Others are too quick to claim that outputs constitute art.
There are several possible futures here, where: 
   AI plays no role in creative output
   AI enhances human creative output
   AI produces creative output that is valued, appreciated, bought and accepted as creative by humans 
   AI creations transcend those of humans and art becomes the domain of AI

Time and technology will tell…


Friday, June 07, 2019

Podcasts - 20 reasons why we should be using more podcasts in learning

Even the Obamas are in on the podcast act, signing a deal with Spotify. Hardly surprising, as over 50% of Americans have now listened to a podcast, very much a medium for people of working age, with listening figures dropping off in the 55+ age group. The Reuters Journalism, Media and Technology Trends report highlights audio as a significant growth area, and Spotify is investing $500 million in the medium.
I’m a podcast fan myself. Whether it is Talking Politics, where some of the best minds in political science discuss a contemporary political topic, or In Our Time, where history, philosophy, art and science are brought alive with a trio of academics. If I want an in-depth learning experience, this is often my medium of choice. For real depth I prefer text – books, papers, articles and blogs. For practical learning, video. For really practical learning – doing stuff and experience. But podcasts lie in that niche between long-form text and short-form video and have their own special allure, as well as being so very convenient. So why use podcasts in learning? What type of content is suitable? How does one make one?
Content
Podcasts tend to be long form and content rich. They are, in a sense, the opposite of microlearning or the tendency to reduce things into small pieces. They also have different cognitive affordances from video, text or graphics. Video is great for ‘showing’ things such as drama, objects, places, processes and procedures, with more of an emphasis on attitudinal or practical learning. Text may be better for semantic and symbolic knowledge, where the art of the wordsmith comes into play, and for subjects like maths. Graphics, of course, can visualise data and show schemas; diagrams can illustrate what you want to teach and photographs give a sense of realism. Podcasts, however, tend to deal with more conceptual knowledge, where ideas and discussion matter. They seem better at allowing experts or leaders to explain more complex thoughts and issues, where genuine discussion or stories can reveal the learning, with deeper levels of reflection and different perspectives. Relying on spoken language alone often gives them a depth that other media don't carry.
Convenience
Many like to listen to podcasts when walking, running, in the gym, in the car or commuting. The sheer convenience of time-shifting the experience, of using this dead time in what Marc Augé called ‘non-places’, even if only to hold off boredom, is what makes podcasts a form of productive, mobile learning. I personally like to listen while sitting down, with headphones, as I’m a note taker, but many listen when they are doing other things. This convenience factor is a big plus.
Natural
Oral communication is more natural and feels more authentic than written text, as it has many of the human flavours of the speaker, such as tone, intonation, accent, emotion and emphasis. Technically, we are grammatical geniuses aged three, able to listen and understand complex language, without formal learning. This makes such content easy to access, especially for those with lower levels of reading literacy. It is, in this sense, a very natural form of learning. This more frictionless form of communication allows us to take deeper dives, through attentive listening (as opposed to hearing), making them potent learning experiences.
Eavesdropping
Listening to a podcast, especially with headphones, can be an intense, intimate and private affair. Many podcast fans report that sense of eavesdropping on an intimate conversation; over time you feel as though you get to know the people. There is a sense of focus and attention that the learner feels, as if one were part of the conversation, literally sitting there next to the participant(s). So in this process of eavesdropping, how many participants should one have in a podcast? 
Monologue
It is hard to hold full attention for long when there is a single podcaster. Imagine sitting in a plane, asking someone a question, and they come back with a 40-minute reply. Although, as a fan of the comedian Bill Burr’s podcast, I know it can be done. The difference is that Bill has decades of experience as a stand-up comic and can hold an audience’s attention.
Dialogue
A more popular format is the interview. Joe Rogan is a good example, with massive audiences - there are many others. He interviews an individual, drawing out their stories and anecdotes. The questions in an interview format act as breaks, chunking the content down into meaningful pieces, making them easier to learn. It sometimes feels as if you, the learner, are asking the questions, and in that sense it feels like a personal dialogue.
Discussion
Some of my favourite podcasts, the BBC’s In Our Time and Talking Politics, often have three or four participants. Interestingly, they both have an anchor, Melvyn Bragg and David Runciman respectively, who holds the discussion together and gives it shape and direction. The advantage of this format is that it provides different angles on the same subject, sometimes different areas of expertise, even disagreements.
Episodes
Some of the most popular podcasts have been series, where they’ve built an audience over time. These segment the content and often have cliffhangers, to make you want to listen to the next one. In learning, of course, this has the advantage of splitting material over time, introducing spaced practice: taking just a minute or so to recap the previous episode and to summarise at the end, topping and tailing, improves retention.
Media rich is not mind rich
Mayer and others have, over decades, shown us, through pinpoint research with good controls, that rich media, used unwisely, can inhibit learning – in learning, less is often more. This has much to do with the limitations of working memory but also with using up cognitive channels. Yet online learning seems to ignore that simple, popular, single-channel medium – the podcast. Podcasts have the advantage of low cognitive bandwidth and low costs, along with several other advantages in learning.
Construct
Audio has the advantage of taking up only one channel, the auditory channel, leaving the mind free to generate, through the imagination, your own interpretation, allowing the brain to integrate new knowledge with your prior existing knowledge. As working memory has a limit of 4 or so registers, which we can hold for around 20 seconds, keeping some free from imagery can, for some types of knowledge, be a powerful advantage, especially for conceptual content, as it gives your working memory some time to interpret, even manipulate ideas.
Take notes
Podcasts have one great advantage over video or text/graphics. For active learners, the simple fact that you don’t have to look at a screen allows you to take notes. Research shows that note taking can increase retention by 20-30%. In learning podcasts it is important to recommend note taking, as listeners are hands- and eyes-free, allowing more sophisticated notes in their own words.
Speed control
Many listen to podcasts at 1.5 times normal speed, as they can still understand what is being said. We read faster than we listen, and many find that they can still get the full meaning at speeds beyond that of spoken delivery. This variability of speed allows different learners to listen at different rates, giving the learning an almost personalised advantage.
Content control
Another form of control is stop, back, forward and control over a visible timeline. Most find themselves doing a lot of this when using podcasts to learn: pausing when you don’t understand something, wanting to reflect more, skipping extraneous material, taking more detailed notes and so on. This, again, allows the learner to process content at a much deeper level for retention.
Audio quality
Nass and Reeves, in the research in their book The Media Equation, showed that although one can get away with low-fidelity images in video, this doesn’t work for audio. Poor quality audio has a significantly detrimental effect on learning, lowering retention. We have evolved visual systems that can adapt to twilight and distance. Our auditory systems are less forgiving and expect high-fidelity audio, as if delivered by a person speaking in front of you. Distance, volume, tinny sound and mechanical delivery all diminish attention and learning. Experienced podcast producers will recommend either a studio or as quiet an environment as possible, with a good microphone, to get the best results. Some avoid table-top mics and prefer lapel or head-mounted microphones.
Delivery
Reading content from a script can be a killer as delivery really does matter. Listeners want energy, passion and expert or academic gravitas. Humour often helps to punctuate, dwell, then move on. It is that sense of listening to an ‘expert’, also shown by Nass and Reeves to increase retention, that is so important in learning. Above all, podcasts seem to give authenticity to the ‘voice’ of the speaker. It must be and sound natural. Over-produced podcasts can often be counter-productive.
Preparation
A good podcast also needs good preparation. Make sure your technical set-up is clear. Then prep the participants. A structured script is useful, even if it is just a series of agreed questions, along with advice on keeping answers short. Test the levels and make sure the atmosphere is relaxed, to encourage good discussion.
Music
There’s an argument for having music as a lead-in, even leading out at the end, as it helps brand podcasts, especially if it is a series of episodes, but avoid laying down a music track behind the speakers – it just kills attention and retention.
While recording
‘Go again… this time shorter’ is often good advice, editing out the longer version. Try to avoid recording over several sessions, as it is difficult to get the same levels and sound the second and third time around. And if you think you can simply drop a word into a sentence that may have been mispronounced or was the wrong word, think again – this is notoriously difficult. For 'learning' podcasts, there is something to be said for more structure in the content and clear edit points for different learning objectives. There are also strong arguments for more recaps, summaries and repetition to increase retention.
AI generated podcasts
One can already generate speech from text with relative ease. This is passable but still a little ‘mechanical’. However, we are reaching a position where it will feel very natural, so automatic podcasts from text scripts will be quick and cheap to produce and one can change and update them by simply changing the text, without going back into a recording studio. We already do this in WildFire for short introductions to pieces of learning.
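As a hedged illustration of how cheap this already is, the sketch below uses the open-source pyttsx3 library to render a short scripted introduction to an audio file. This is not any particular product's pipeline, just the general idea of automatic narration from a text script.

```python
import pyttsx3  # offline text-to-speech; an illustrative choice of library

script = ("Welcome to this short introduction. In this episode we look at "
          "spaced practice and why recapping the last episode aids retention.")

engine = pyttsx3.init()
engine.setProperty("rate", 160)            # speaking rate in words per minute
engine.save_to_file(script, "intro.mp3")   # render the narration to an audio file
engine.runAndWait()                        # process the queued request
```

Updating the podcast is then just a matter of editing the text and re-running the script, with no return trip to a recording studio.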
Transcription
Google is introducing real time transcription. This is a boon for note taking, as you can annotate, add your own words, summarise, mind-map, whatever. This is often difficult when you have to ‘watch’ a lecturer, PowerPoint or video. With WildFire we have also grabbed podcast transcripts, used AI to generate active online content to supplement the listening experience and solidify knowledge.
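For the podcast-to-text step, something like the sketch below works today, using the open-source SpeechRecognition package and Google's free web recogniser; the file name is hypothetical, and production transcription services add accuracy, timestamps and speaker labels.

```python
import speech_recognition as sr  # the SpeechRecognition package; an illustrative choice

recognizer = sr.Recognizer()
with sr.AudioFile("podcast_clip.wav") as source:   # hypothetical audio file
    audio = recognizer.record(source)              # read the whole clip into memory

transcript = recognizer.recognize_google(audio)    # send audio to Google's free web API
print(transcript)                                  # raw text, ready for notes or further AI processing
```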
Conclusion
Before commissioning or producing podcasts, listen to a few. They’re everywhere on the web. But listen to those that are most popular. You will find all sorts of subjects, by all sorts of people and variations on formats. For learning, listen to some of the more serious podcasts, although there’s nothing wrong with lightening things up. I know of several companies who do ‘learning’ podcasts and have been on the end of quite a few. Given that it is a massively popular medium, cheap to produce, with significant advantages in terms of learning, why not give them a try?
Bibliography
Edison Research (2019) The Infinite Dial 2019. https://www.edisonresearch.com/infinite-dial-2019/
Llinares, D. (2018) Podcasting: New Aural Cultures and Digital Media
Nass, C. and Reeves, B. (1996) The Media Equation
Newman, N. (2019) Journalism, Media and Technology Trends and Predictions 2019. Reuters Institute. https://reutersinstitute.politics.ox.ac.uk/our-research/journalism-media-and-technology-trends-and-predictions-2019
PS
For some interesting, and detailed research on podcasts, try Steve Rayson's blog. He has a strong learning background and is doing detailed research on who uses podcasts and why. Some of the ideas in this piece have come from his blog.


Thursday, May 30, 2019

10 ways AI is used in video for learning – from deepfakes to personalisation

Video is the medium of the age and AI is the technology of the age. Combine the two and you have a potent mixture. I’ve been involved with both, working in a video production company, using video on all sorts of media, from interactive videotape machines, laserdiscs, compact discs and CD-i to streaming, even making a feature film called The Killer Tongue (you really don’t want to know). Believe me, that last one lost me a ton of money. I now run an AI-for-learning company, WildFire, and am writing a book, AI for Learning. I know these two worlds well. But how do they interact?

1. Edit
There are tools that allow you to edit video much faster and to a higher quality. Different cameras shoot different colour balances – that can be fixed with AI. Same actor in different scenes with different skin tones? That can be fixed with facial recognition and skin-tone matching, using AI. Need your music mixed down behind dialogue? Use AI. AI is increasingly used to fix, augment and enhance moving images. 


2. Fake
Of course, easy editing with AI also means easy fakes. AI-generated avatars have appeared as TV presenters, reading the news using text-to-speech software. One can have Obama saying whatever you like, from a voiceover artist mimicking his voice. Similarly, with a fake teacher, one can deliver talking-head content. Even more worrying is fake porn. Many famous actresses and actors have had their faces transposed to create ‘deepfake’ porn scenes. This mimics what is possible with fake homework, essays and text output using OpenAI’s GPT-2 software, judged so dangerous that OpenAI decided not to release the full model. Just feed it a question and it produces an essay. 

3. Create
Beyond fakery lies the world of complete video creation. Alibaba’s Aliwood software uses AI to create 20-second product videos from a company’s existing stills, identifying and selecting key images, close-ups and so on. The selected images are then edited together with AI and even change with musical shifts. They say it increases online purchases by 2.6%. Some video creation software goes further and also adds a text-to-speech narration, with edits at appropriate points. Many pop videos and films have been made with AI tools such as Deep Dream for image creation, along with style-capture and flow tools. There are even complete films made from AI-created scripts. We already see services that create learning videos quickly and cheaply using the same methods.

4. Caption
Once you have created a video, AI can also add captions. This type of software can even pick up on dog barks and other sounds and is now standard on TV, YouTube, Facebook, even Android phones, increasing accessibility. It is also useful in noisy environments. Language learners also commonly report captioning as having benefits in self-directed language learning. Although one must be careful here, as Mayer’s research shows that narration and on-screen text together can have an inhibitory effect on learning. 

5. Transcribe
Speech-to-text is also useful in transcription, where a learner may want the actual transcript of a video as notes. Some tools, such as WildFire, take these transcriptions and use them to create online learning that supplements the video with effortful learning. The learner watches the video, which is good for attitudinal and affective learning, even processes and procedures, but poor on semantic knowledge and detail. Adding an online treatment of the transcript, created and assessed by AI, can provide that extra dimension to the learning experience.

6. Translate
Once you have the transcribed text, translation is also possible. This has improved enormously, with reduced latency, from Google Translate to more sophisticated services. Google’s Translatotron promises to deliver speech-to-speech translation with an end-to-end model that can deliver accurate results with low latency. Advances like these will allow any video to be translated into multiple languages, allowing low-cost and quick global distribution of learning videos.

7. Filter
Ever wondered why YouTube and other video services prevent porn and other undesirable material from appearing? AI filters that use image recognition to search and delete. Facebook claims that AI now identifies 96.8% of prohibited content. It is not that AI does the whole job here. Removing dick pics and beheadings relies on algorithms and image recognition, but there is also community flagging and real people sitting watching this stuff. AI is increasingly used to protect us from undesirable content.

8. Find
Want to know something or do something? Searching YouTube is increasingly the first option chosen by learners. YouTube is probably the most used learning platform on the planet. Yet we tend to forget that it is only functional with good search. AI search techniques are what give YouTube its power. Note that YouTube search is different from Google search. Google uses authority, relevancy, site structure and organisation; whereas YouTube, being in control of all its content, uses growth in viewing, patterns in viewing, view time, peak view times, and social media features such as shares, comments, likes and repeat views. Search is what makes YouTube such a convenient learning tool.

9. Personalise
Video services such as YouTube, Vimeo and Netflix use AI to algorithmically present content. AI is the new UI, and most video content is served up in this personalised fashion. Netflix famously handed out a $1 million prize for a recommendation algorithm and has since refined its approach. This is exactly what is happening in adaptive learning systems, where individual and aggregated data is used to personalise and optimise the learning experience for each individual, so that everyone is educated uniquely.
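The underlying idea is simple collaborative filtering: recommend to a learner what similar learners watched. Below is a toy sketch with a made-up user-by-video watch matrix; real systems use far richer signals and vastly more data.

```python
import numpy as np

# Toy user-by-video matrix of watch counts (rows = learners, columns = videos).
watch = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

def recommend(user, k=1):
    # Cosine similarity between this learner and every other learner.
    norms = np.linalg.norm(watch, axis=1) * np.linalg.norm(watch[user]) + 1e-9
    sims = (watch @ watch[user]) / norms
    sims[user] = -1.0                          # ignore self-similarity
    neighbour = int(np.argmax(sims))           # the most similar learner
    unseen = np.where(watch[user] == 0)[0]     # videos this learner hasn't watched
    ranked = sorted(unseen, key=lambda v: -watch[neighbour, v])
    return [int(v) for v in ranked[:k]]

print(recommend(user=1))  # recommend what learner 1's nearest neighbour watched most
```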

10. Analyse
Talking of Netflix, there is now a huge amount of data collected on global services that can inform future decision making. This can be data about when people drop out of a show, literally showing favourite characters and sub-plots, which can be used to inform future script writing. Data on stars and genres can also be used to guide scripting and spend on original content. Similarly in learning, analytics around usage, drop-outs and so on can inform decisions about the efficacy of the learning.

Conclusion
All of the above are affecting, and will continue to affect, the delivery of video in learning. Several are already de facto techniques. We can expect them all to develop in line with advances in AI, as well as learner demand. This is clearly an example of where the learning world has lots to learn and lots to gain from consumer services. Most of the above techniques are being built, honed and delivered on consumer platforms first, then used in a learning context.


Sunday, May 19, 2019

How to turn video into deep learning

With video in learning one can feel as though one is learning, as the medium holds your attention, but as you are hurtled forward that knowledge disappears off the back. It’s like a shooting star: it looks and feels great, but the reality is that it burns up as it enters the atmosphere and rarely ever lands.
Video and learning
We have evolved to learn our first language, walk, recognise faces and so on. This primary knowledge was not learnt in the sense of being schooled or deliberately studied. It is embodied in our evolutionary past and evolved brains. Note that some of this knowledge is patently wrong. Our intuitive view of inertia, forces, astronomy, biology and many other things is mistaken, which is why we, as a species, developed science, maths, literature and… education. This secondary knowledge is not easily learnt – it has to be deliberately learned and takes effort. It includes maths, medicine, the sciences and most forms of intellectual and even practical endeavour. That brings us to the issue of how we learn this stuff.
Working and LT memory
Let’s start with the basics. What you are conscious of is what’s in working memory, limited in capacity to 2-4 elements of information at any time. We can literally only hold these conscious thoughts in memory for 20 or so seconds. So our minds move through a learning experience with limited capacity and duration. This is true of all experience, and with video it has some interesting consequences. 
We also have a long-term memory, which has no known limits in capacity or duration, although lifespan is its obvious limit. We can transfer thoughts from long-term memory back into working memory quickly and effortlessly. This is why ‘knowing’ matters. In maths, it is useful to know your times tables automatically, to allow working memory to manipulate recalled results more efficiently. We also use existing information to cope with and integrate novel information. The more you know, the easier it is to learn new information. Old, stored, processed information effectively expands working memory through effortless recall from long-term memory.
All of this raises the question of how we can get video-based learning into long-term memory.
Episodic and semantic memory
There is also the distinction, in long-term memory, between episodic and semantic memory. Episodic memories are those experiences such as what you did last night, what you ate for dinner, recalling your experience at a concert. They are, in a sense, like recalling short video sequences (albeit reconstructed). Semantic memory is the recall of facts, numbers, rules and language. They are different types of memory processed in different ways and places by the brain.
When dealing with video in learning, it is important to know what you are targeting. Video appeals far more to episodic than semantic memory – the recall of scenes, events, procedures, places and people doing things.
Element interactivity
When learning meaningful information that has to be processed, for example in multiplication, you have 2-4 registers for the numbers being multiplied. The elements have to be manipulated within working memory and that adds extra load. Element interactivity is always extra load. Learning simple additions or subtractions has low element interactivity, but multiplication is more difficult. Learning vocabulary has low element interactivity. Learning how to put the words together into meaningful sentences is more difficult.
In video, element interactivity is very difficult to handle, as the brain is coping with newly presented material and the pace is not under your control. This makes video a difficult medium for learning semantic information, as well as for consolidating learning through cognitive effort and deeper processing.
Video not sufficient
Quite simply, we engage in teaching, whether offline or online, to get things into long-term memory via working memory. You must take this learning theory into account when designing video content. When using video we tend to forget about working memory as a limitation and the absence of opportunity to move working-memory experiences into long-term memory. We also tend to shove in material that is more suited to other media – semantic content such as facts, figures and conceptual manipulations. So video is often too long, shows points too quickly and is packed with inappropriate content. 
We can recognise that video has some great learning affordances in that it can capture experiences that one may not be able to experience easily, for real – human interactions, processes, procedures, places and so on. Video can also enhance learning experiences, reveal the internal thoughts of people with voiceover and use techniques that compress, focus in and highlight points that need to be learnt. When done well, it can also have an emotional or affective impact making it good for attitudinal change. The good news is that video has had a century or so to develop a rich grammar of techniques designed to telescope, highlight and get points across. The range of techniques from talking heads to drama, with sophisticated editing techniques and the ability to play with time, people and place, makes it a potent and engaging medium.
The mistake is to see video as a complete learning medium in itself. Video is a good learning medium when things are paced and reinforced, but it is made far more effective when the learner has the opportunity to supplement the video experience with some effortful learning.
Illusion of learning
However, the danger is that, on its own, video can encourage the illusion of learning. This phenomenon was uncovered by Bjork and others, showing that learners are easily fooled into thinking that learning experiences have stuck, when they have actually decayed from memory, often within the first 20 minutes. 
Video plus…
How do we make sure that the video learning experience is not lost and forgotten? The evidence is clear: the learner needs some effortful learning – they need to supplement their video learning experience with deeper learning that allows them to move that experience from short-term to long-term memory.
The first is repeated access to the video, so that second and third bites of the cherry are possible. Everything in the psychology of learning tells us that repeated access to content allows us to understand, process and embed learning for retention and later recall. While repeated watching helps consolidate the learning, it is not enough and is an inefficient, long-winded learning strategy.
The second is to take notes. This increases retention significantly, by up to 20-30% if done well, as deeper processing comes into play as you write, generate your own words, draw diagrams and so on.
WildFire
The third is far more effective, and that is to engage in a form of deeper, effortful learning that involves retrieval and recall. We have built a tool, WildFire, that does exactly this.
How do you ensure that your learning is not lost and forgotten? Strangely enough, it is by engaging in a learning experience that makes you recall what you think you’ve learnt. We grab the transcript of the video and put it into an AI engine that creates a supplementary learning experience, where you have to type in what you ‘think’ you know. This covers simple concepts and numbers, but also open-input sentences, where the AI semantically interprets your answers. This powerful form of retrieval learning not only gives you reinforcement through a second bite of the cherry but also consolidates the learning. Research has shown that recalling back into memory – literally looking away and thinking about what you know – is even more powerful than the original teaching experience or exposure. In addition, the AI creates links out to supplementary material (curates, if you wish) to further consolidate memory through deeper thought and processing.
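As a rough illustration of the open-input idea (not WildFire's actual engine), the sketch below scores a learner's typed answer against a target sentence using TF-IDF vectors and cosine similarity; genuinely semantic interpretation goes well beyond word overlap, but the principle of scoring meaning rather than exact wording is the same.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Target sentence from the transcript and a learner's typed recollection (both illustrative).
target = "Repeated retrieval of information strengthens long-term memory."
answer = "Repeated retrieval strengthens our long-term memory."

vectors = TfidfVectorizer().fit_transform([target, answer])
score = cosine_similarity(vectors[0], vectors[1])[0, 0]

# A crude acceptance threshold; word overlap is only a proxy for meaning.
print(f"similarity {score:.2f} ->", "accepted" if score > 0.3 else "try again")
```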


Thursday, May 02, 2019

‘Machines Like Me’ by Ian McEwan – a flawed machinage a trois

Ian McEwan’s 'Machines Like Me' is a machinage a trois between Charlie, Miranda and Adam. Now Ian can pen a sentence and, at times, writes beautifully but this is a rather mechanical, predictable and, at times, flawed effort.
Robot Fallacy
The plot is remarkably similar to the 2015 threesome-with-a-robot movie Uncanny (which also has an Adam), and that film is somewhat better than this novel. But the real problem is the Robot Fallacy – the idea that AI is all about robots. It’s not. AI, even robotics, is not about creating interesting characters for second-rate novels and films, and is not on a quest to create anthropoid human robots as some sort of undefined companions. Art likes to think it is, as art needs characterisation and physical entities. AI is mostly bits, not atoms, largely invisible and quite difficult to reveal; it is mostly online, but that’s difficult for authors and film makers. That’s why the film Her was also superior to this novel – it doesn’t fall into the idea that it’s all about physical robots. McEwan’s robot and plot limit any real depth of analysis, as the book is stuck in the Mary Shelley Frankenstein myth, with Turing as the gratuitous Frankenstein. In fact, it is a simple retelling of that tale, yet another in a long line of dystopian views of technology. McEwan compounds the Robot Fallacy by making Adam appear, almost perfectly formed, from nowhere. In reality, AI is a long haul, with tons of incremental trials and failures. Adam appears as if created by God. Then there’s the confusion of complexity with autonomy. Steven Pinker and others have pointed out the muddle-headed nature of this line of thought in Enlightenment Now. It is easy to avoid autonomy in the engineering of such systems. The novel tries to introduce some pathos at the end, but ultimately it’s an old tale not very well told.
Oddities and flaws
Putting that aside, there are some real oddities, even clangers, in McEwan’s text. The robot often washes the dishes by hand, as if we have invented a realistic human companion but not a dishwasher. In fact, dishwashers are around, as one pops up, oddly as an analogy, later in the book. The robot can’t drive yet (self-driving cars appeared but didn’t work because of a traffic jam!). Yet self-driving cars make an appearance later in the book.
Counterfactuals are tricky to handle, as they make suspension of disbelief that much harder, and in this case the entire edifice of losing the Falklands war and muddling up political events seems like artifice without any real justification. One counterfactual completely threw me. It’s one thing to counterfactually ‘extend’ Turing’s life, another to recalibrate someone’s birth date, taking it back a couple of decades, as with the appearance of Demis Hassabis (of DeepMind fame). Hassabis pops up as Turing’s brilliant young colleague in 1968, odd as he wasn’t born until 1976 (as stated on the final page)!
Then there’s an even odder insertion into the novel – Brexit. McEwan is a famous Remain campaigner and, for no reason other than pettifoggery, he drags the topic into the narrative. I have no idea why. It has no causality within the plot and no relevance to the story. It just comes across as an inconsequential and personal gripe.
The yarn has one other fatal flaw – the odd way the child is introduced into the story, via a manufactured incident in the park, a continuing thread in the story that is about as believable as a chocolate robot. I’m not the first to spot the straight-up snobbery in his handling of this plot line – working-class people as hapless thugs.
To be fair there are some interesting ideas, such as the couple choosing personality settings for their robot in a weird form of parenting and this blurring of boundaries is the book’s strength. The robot shines through as being by far the most interesting character in the book, curiously philosophical, and there’s some exploration of loyalty, justice and self.
Conclusion
Did I learn anything about AI from this novel? Unfortunately not. In the end it’s a rather mechanical and, at times, petty work. It was difficult to hold suspension of disbelief, as so many points were unbelievable. McEwan seems to have lost his imaginative flair, along with his ability to surprise and transgress. His fictional progeny are more ciphers than people. In truth, AI is only software, and all of this angst around robots murdering us in our sleep is hyperbolic and doesn’t really tackle the main issues around automation, and perhaps the good that comes out of such technology.


Wednesday, April 24, 2019

The Geneva Learning Foundation is bringing AI-driven training to health workers in 90 countries

Wildfire is helping the Swiss non-profit tackle a wicked problem: while international organizations publish global guidelines, norms and standards, they often lack an effective, scalable mechanism to support countries in turning these into action that leads to impact. What is required is low-cost, quick-to-convert, high-retention training.
So the Geneva Learning Foundation (GLF) has partnered with artificial intelligence (AI) learning pioneer Wildfire to pilot cutting edge learning technology with over 1,000 immunization professionals in 90 countries, many working at the district level. It is fascinating to see so much feedback come in from so many countries.
By using AI to automate the conversion of such guidelines into learning modules, as well as to interpret open-response answers, Wildfire’s AI reduces the cost of training health workers to recall critical information that is needed in the field. This retention is a key step if global norms and standards are to translate into a real impact on people’s health.
If the pilot is successful, Wildfire’s AI will be included in TGLF’s Scholar Approach, a state-of-the-art, evidence-based package of pedagogies to deliver high-quality, multi-lingual learning. This unique Approach has already been shown to not only enhance competencies but also to foster collaborative implementation of transformative projects that began as course work.
TGLF President Reda Sadki said: “The global community allocates considerable human and financial resources to training. This investment should go into pedagogical innovation to revolutionize health.”
As a Learning Innovation Partner to the Geneva Learning Foundation, our aim is to improve the adoption and application of digital learning toward achievement of the Sustainable Development Goals (SDGs). Three learning modules based on the World Health Organization’s Global Routine Immunization Strategies and Practices (GRISP) guidelines are now available to pilot participants, including alumni of the WHO Scholar Level 1 GRISP certification in routine immunization planning.
Conclusion
World health needs strong guidelines and solid practices in the field. We are delighted to be delivering this training, using AI as a social good, deliverable on mobiles and in a way that is simple to use but results in real retention and recall.


Monday, April 22, 2019

Climate change: dematerialisation and online learning

The number of young adults with driving licences has fallen dramatically, so that over half of American 18-year-olds do not have a driving licence. This is partly due to the internet and their alternative investment in mobiles, laptops and connectivity. This is good news. I have never, ever driven a car, having lived in cities such as Edinburgh, London and now Brighton. I’ve never really been stuck, in terms of getting anywhere. I walk, take trains or public transport more than most. This has meant I’ve habitually learnt on the move, largely in what Marc Augé calls ‘non-places’ – trains, planes, automobiles, buses, hotels, airports, stations. I’m never without a laptop, book or mobile device for learning. Whether it’s text, podcasts or video, m-learning has become my dominant form of informal learning. This has literally given me years of extra time to read, write and learn in the isolated and comfortable surroundings of buses, trains and planes. I actually look forward to travel, as I know I’ll be able to read and think, even write in peace. Being locked away, uninterrupted, in a comfortable environment is exactly what I need in terms of attention and reflection. I calculate that over the last 35 years of not driving, I’ve given myself pretty much a couple of extra degrees. 
At the risk of sounding like a hobo, I also have only two pairs of shoes and a minimal amount of clothing. I never buy bottled water and have a lifelong principle – Occam’s Razor – use the minimal amount of entities to reach your given goal. 
Dematerialisation
More importantly, all my life I have worked in technology, which has delivered much to the world in terms of eradicating poverty, mindless labour, disease and hardship. Technology has dematerialised many activities. Mobile comms has replaced atoms with bits. Take music – we no longer have to listen on vinyl in paper sleeves (except for nostalgists) or unrecyclable compact discs, as most music is now streamed and literally has no substance.
Newspaper circulation has plummeted, and my phone delivers an unlimited amount of knowledge and communication that, in the past, would have been infrastructure heavy and hugely wasteful. Paper production is a massive global polluter of land, water and air. It is the third largest industrial polluter in North America, the fifth biggest user of energy, uses more water per ton of product than any other industry, and paper in landfill sites accounts for around 35% of all waste by weight. Recycling helps, but even the de-inking process produces pollutants. Paper production still uses chlorine and chlorine-based chemicals, and dioxins are an almost inevitable part of the production process. Water pollution is perhaps the worst, as pulp-mill waste water is oxygen hungry and contains an array of harmful chemicals. Harmful gases and greenhouse gases are also emitted. On top of this, the web has given us the sharing economy, where bikes, cars, rooms and so on can be reused and shared. It would seem as though we're nearing what Ausubel called 'Peak Stuff'. This is all good, as the best type of energy saving is not using energy at all, or at least minimising the effort and resources needed.
Online learning and climate change
I have spent the whole of my adult life delivering a green product – online learning – which stops the need to travel and reduces the need for carbon-intensive, physical infrastructure.
More recently, we have (with the help of my friend Inge) built and delivered a large amount of online education around renewable technologies, targets, policies and solutions. Knowledge is power, and with knowledge we have the power to solve this problem. That’s why this project was so important. We used AI to create online education content in minutes, not months, from just a few basic documents. Most of the projects we’ve created in WildFire have been done without face-to-face meetings and this was no exception. We plan, deliver and project manage online.
Additionally, on climate change, the power of online education is not only its green credentials but also its power to inform. Even active protesters show precious little awareness of what the Paris agreements were, how the technology works and what the science actually says. We need to move beyond the bombast to practical, pragmatic and informed solutions. 
WindPower
First up was content on those huge triffid-like wind turbines. What do you call the thing that sits on top of the tower? (The nacelle.) What lies inside? (Lots of things.) What controls are there in terms of direction and so on? (Yaw, pitch and speed.) What is the wind power equation? (P = ½ρAv³, which explains exactly why wind speed is the key variable, as power rises with the cube of the wind speed.) We have a huge offshore wind turbine field just off Brighton and I’ll never see it in the same light.
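A quick worked example, with assumed figures (standard sea-level air density and an 80-metre rotor radius), shows why the cubic term dominates: doubling the wind speed gives eight times the available power.

```python
import math

rho = 1.225                     # air density in kg/m^3 (assumed, sea level)
radius = 80.0                   # rotor radius in metres (assumed)
area = math.pi * radius ** 2    # swept area A

for v in (5, 10, 15):           # wind speed in m/s
    power_mw = 0.5 * rho * area * v ** 3 / 1e6   # P = 1/2 * rho * A * v^3, in megawatts
    print(f"{v:>2} m/s -> {power_mw:6.1f} MW available in the wind")
```

Note this is the power available in the wind itself; an actual turbine extracts only a fraction of it.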
Policies
It’s all very well demonstrating for zero emissions, but this has to be achieved through practical policies. Decarbonising economies requires the adoption of the right policy levers, and accelerating electrification is top of the list. Renewables are great but not enough; rather than simply producing less, we must minimise energy use per unit of economic output. Rather than ranting against the ‘man’, we must use technology and market-based instruments. Without a change in mindset this will be difficult, so it’s all hands to the political pump to get things moving.
Conclusion
Technology – wind turbines, solar, electric vehicles, battery technology, AI-driven IoT and all sorts of future solutions – will solve this problem, but without cheaper, greener education and an awareness of what we have to do and why, this is unlikely to happen. One contribution is the rapid rise in online learning. This has already led to the disappearance of those large training centres I remember back in the day. Fewer people travel to train. It also means access to learning for anyone with online access.


Friday, April 12, 2019

Why ‘learning analytics’? Why ‘Learning Record Stores’?

There’s a ton of learning technologists saying their new strategy is data collection in 'learning record stores' and 'learning analytics'. On the whole this is admirable, but the danger is in spending all this time and effort without asking ‘Why?’ Everyone’s talking about analytics, but few are talking about the actual analysis needed to show how this will increase the efficacy of the organisation. Some are switched on and know exactly what they want to explore and implement; others are like those people who never throw anything out and just fill up their home with stuff, without being sure why. One problem is that people are shifting from first to sixth gear without doing much in between. The industry has been stuck with SCORM for so long, along with a few pie charts and histograms, that it has not really developed the mindset or skills to make this analytics leap.
Decision making
In the end this is all about decision making. What decisions are you going to make on the back of insights from your data? Storing data off for future use may not be the best use of data. Perhaps the best use of data is dynamically, to create courses, provide feedback, adapt learning, text to speech for podcasts and so on. This is using AI in a precise fashion to solve specific learning problems. The least efficient use of data is storing it in huge pots, boiling it up and hoping that something, as yet undefined, emerges.
Visualisation
This is often mentioned and is necessary, but visualisation, in itself, means little. One visualises data for a purpose – in order to make a decision. It is not an end in itself, and it often masquerades as doing something useful when all it is actually doing is acting as a cul-de-sac.
Correlations with business data
Learning departments need to align with the business and business outcomes. Looking for correlations between, say, increases in sales and completed training gives us a powerful rationale for future strategies in learning. It need not be just sales. Whatever outcomes the organisation has in its strategy need to be supported by learning and development. This may lift us out of the constraints of Kirkpatrick, cutting to the quick, which is business or organisational impact. We could at last free learning from the shackles of course delivery and deliver what the business really wants, and that’s results.
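A minimal sketch of that kind of check is shown below, with hypothetical per-region figures for training completion and subsequent sales uplift; a real analysis would need to worry about confounders, lag and sample size before claiming anything causal.

```python
import numpy as np
from scipy import stats

# Hypothetical per-region data: proportion of staff completing the training
# and the percentage sales uplift in the following quarter.
completion   = np.array([0.35, 0.50, 0.62, 0.70, 0.80, 0.90])
sales_uplift = np.array([1.2, 2.5, 2.1, 3.8, 4.0, 5.1])

r, p = stats.pearsonr(completion, sales_uplift)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")   # correlation, not yet causation
```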
Business diagnosis
Another model is to harvest data from training in a diagnostic fashion. My friend Chris Brannigan at Caspian Learning does this, using AI. You run sophisticated simulation training, use data analysis to identify insights, then make decisions to change things. To give a real example, they put the employees of a global bank through simulation training on loan risk analysis and found that the problems were not what they had imagined - handing out risky loans. In fact, in certain countries, they were rejecting ‘safe’ loans - being too risk averse. This deep insight into business process and skills weaknesses is invaluable. But you need to run sophisticated training, not clickthrough online learning. It has to expose weaknesses in actual performance.
Improve delivery
One can decide to let the data simply expose weaknesses in the training. This requires a very different mindset, where the whole point is to expose weaknesses in design and delivery. Is it too long? Do people actually remember what they need to know? Does it transfer? Again, much training will be found wanting. To be honest, I am somewhat doubtful about this. Most training is delivered without much in the way of critical analysis, so it is doubtful that this is going to happen any time soon.
Determine how people learn
One could look for learning insights into ‘how’ people learn. I’m even less convinced on this one. Recording what people just ‘do’ is not that revealing if they are clickthrough courses, without much cognitive effort. Just showing them video, animation, text and graphics, no matter how dazzling is almost irrelevant if they have learnt little. This is a classic GIGO problem (Garbage In, Garbage Out). 
Some imagine that insights are buried in there and that they will magically reveal themselves  - think again. If you want insights into how people actually learn, set some time aside and look at the existing research in cognitive science. You’d be far better looking at what the research actually says and redesigning your online learning around that science. Remember that these scientific findings have already gone through a process of controlled studies, with a methodology that statistically attempts to get clean data on specific variables. This is what science does – it’s more than a match for your own harvested data set. 
Data preparation
You may decide to just get good data and make it available to whoever wants to use it, a sort of open data approach to learning. But be careful. Almost all learning data is messy. It contains a ton of stuff that is just ‘messing about’ – window shopping. In addition to the paucity of data from most learning experiences, much of it is in odd data structures and odd formats, encrypted, spread across different databases, old, even useless. Even if you do manage to get a useful, clean data set, you have to go through the process of separating ‘personal’ data from ‘observed’ data (what you observe people actually doing), ‘derived’ data (deductions made from that data) and ‘analysed’ data (the results of applying analysis to the data). You may have to keep it anonymised, and the privacy issues may be difficult to manage. Remember, you’ll need real expertise to pull this off, and that is in very short supply.
To use AI/Machine learning
If you are serious about using AI and machine learning (they are not the same thing), then be prepared for some tough times. It is difficult to get things working from unstructured or structured data and you will need a really good training set, of substantial size, to even train your system. And that is just the start, as the data you will be using in implementation may be very different.
Recommendation engines
This is not easy. If you’ve read all of the above carefully, you’ll see how difficult it is to get a recommendation engine to work on data that is less than reliable. You may come to the decision that personal learning plans are actually best constructed using simpler software techniques on spreadsheet-sized data.
Conclusion
The danger is that people get so enamoured with data collection and learning analytics that they forget what they’re actually there to do. Large tech companies use big data, but this is BIG data, not the trivial data sets that learning produces, often from single courses or within single institutions. In fact, Facebook is far more likely to use A/B testing than any fancy recommendation engine when deciding what content works best, where a series of quick adaptations can be tested with real users, but few learning teams have the bandwidth and skills to make this happen.
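To make the A/B point concrete, here is a minimal sketch of a two-proportion z-test on hypothetical completion rates for two versions of the same content; nothing more exotic than the standard library is needed.

```python
from statistics import NormalDist

# Hypothetical completion counts for two versions of the same content.
done_a, n_a = 180, 1000   # version A: completions, learners shown it
done_b, n_b = 225, 1000   # version B: completions, learners shown it

p_a, p_b = done_a / n_a, done_b / n_b
p_pool = (done_a + done_b) / (n_a + n_b)                      # pooled completion rate
se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5     # standard error of the difference
z = (p_b - p_a) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))                  # two-sided p-value

print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}  p = {p_value:.3f}")
```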
