Sunday, November 26, 2023

Online Educa Berlin 2023 - Fun three days? Damn right it was!


OEB in Berlin was intense this year, with a three hour workshop in a packed room, the Big Debate and a live podcast with my friend John Helmer. Disappointed that the Christmas markets were not yet open, but there was plenty more to get one's teeth into than a Bratwurst.

As is often the case, I found the keynotes a bit odd. An American jumped around like a firecracker, confusing shouting with substance, followed by a soporific speaker recommending a return to pencils. She was selling 'critical thinking' but seemed to have little of it. Truly mind numbing. Read every single word from a piece of paper. Then Luciano Floridi. I was looking forward to this, having read his work, but what a strange talk. He presented himself as a philosopher… I'm not so sure. He's actually more of a psychologist, who has made a name for himself as a philosopher of the 'digital'; I can't remember seeing a philosopher of the 'pencil'. He claimed you could split the whole of philosophy, nay human affairs, into "just two things" – the Socratic and Hobbesian views – people are either stupid or evil. I have been reading philosophy all of my life and have never come across a more idiotic and simplistic summary of either philosophy or human nature. He then massacred Wittgenstein. It is clear he was playing to the crowd.

 

Undeterred, I had some great conversations with people who actually know what they are talking about. That's what makes conferences so odd – the showbiz v substance. Substance is to be found in the casual encounters with new people, the smaller rooms, the bar, over a coffee. Lovely to see Gabor's AI project progress from just an idea last year; our Norwegian friends seem to have cracked the HE assessment issue; Glean are progressing with AI; and there were some real projects on AI that were lifting us out of the old paradigm.

 

So what did I learn?

 

People are still stuck in the LMS/cartoon content/video world

The exhibition seemed stuck in that world, a bit dead and often empty

Conversations were full of new ideas and ambition

AI is here and here to stay and lots of real projects shown

Superficial AI moralising was noticeably absent

AI is a technology way beyond what we've seen before

HE is in a panic over AI

L&D once again thinks it is about to be taken seriously at Board level – it is not

‘Critical thinking’ has become the predictable phrase people use when they don’t know what else to say – it’s now my litmus test for people who are walking away…

 

After three days of intense discussion people were ready to let rip on the Friday night and we did – a big meal, followed by a party. As usual, it was one of the best conferences of the year. It's about the people, many of whom were there last year and the year before. It's not in some hideous casino in Las Vegas, or worse a Disney venue in Florida, or some god-awful, anonymous conference centre. Fun three days? Damn right it was.

Saturday, November 11, 2023

EU AI legislation is firming up and I fear the worst. Danger of overregulation...

There are some good structural aspects of the legislation, in terms of classifying the size of organisations to avoid penalising small innovative firms and research, as well as classifying risks, but as the Irishman said when asked for directions... "Well sir... I wouldn't have started from here".

 


BANNED – social scoring, face recognition, dark patterns, manipulation

High risk – recruitment, employment, justice, law, immigration

Limited risk – chatbots, deep fakes, emotion recognition

Minimal risk – spam filters, computer games

 

But, as usual, the danger is in overreach. That can take place in several ways:

 

Catch-all

Like an oversized fishing net, the ban on biometric data may have the unintended consequence of banning useful applications such as deepfake detection, age recognition in child protection, detecting children in child pornography, accessibility features, safety applications, healthcare applications and so on. Innovation often produces benefits that were unforeseen at the start. We have already seen a Cambrian explosion of innovation from this technology, so category bans are unwise. I fear it is too late on this one.

 

Core tech not applications

Rather than focus on actual applications, they have an eye on general purpose AI and foundational models. This is playing with fire. By looking for deeper universal targets they may make the same mistake as they did over consent, losing millions of hours of productivity as we all deal with 'manage cookies' pop-ups and consent forms no one ever reads. An attack on, say, foundational 'open source' systems would be a disaster, yet it is hard to see how open source could be allowed. The proposed legislation also has an ill-defined concept of a 'frontier model' that could wipe out innovation in one blow. No one knows what it means.

 

Complexity

It hauls in the Digital Services Act (DSA), Digital Markets Act (DMA), General Data Protection Regulation (GDPR), the new regulation on political advertising, as well as the Platform Work Directive (PWD) – are you breathless reading that sentence? It could become an octopus with tentacles that reach out into all sorts of areas, making it impossible to interpret and police.

 

Copyright

Signs of the EU jumping the gun here and not in a good way. There is always the danger of some publishers (we know who you are) lobbying for restrictions on data for training models. This would put the EU at a huge disadvantage. To be honest, the EU is so far behind the curve now on foundational AI that it will make little difference but a move like this will condemn the EU to looking on at a two horse race, with China and the US racing round the track while the EU remains locked up in its own stalls.

 

Concrete

One problem with EU law is its fixity. Once mixed and set, it is like concrete – hard and unchangeable. Unlike common law, such as exists in England, the US, Canada and Australia, where things are less codified and easier to adapt to new circumstances, EU Roman law is top down and requires changes in the law through new legislative acts. If mistakes are made, you can't fix them.

 

Conceit

With only 5.8% of the world's population, the EU is under the illusion that it speaks for the whole world. Having been round the world this year, across several continents, I have news for them – it doesn't. Certainly not for the US, and as for China, not a chance. It doesn't even speak for Europe, as several countries are not in the EU. To be fair, one shouldn't design laws that pander to other states, but in this case Europe is so far behind that this may be the final n-AI-l in the coffin. Some think millions, if not trillions, of dollars are at stake in lost innovation and productivity gains. I hope not.

 

We had a taste of all this when Italy banned ChatGPT. They relented when they saw the consequences. I hope the EU applies Occam's razor, the minimum number of entities to reach their stated goals; unfortunately they have form here of overregulating.



Sunday, November 05, 2023

Bakhtin (1895-1975) dialogics, learning and AI

Mikhail Bakhtin, the Russian philosopher and literary critic, developed a theory of language which saw dialogue as primary. He took this idea and applied it to learning. By dialogue he meant social interaction between people, teachers and learners, learners and learners, learners and others, but also dialogue with the past. He was persecuted during the Soviet era but his work was rediscovered in the 1980s, and he has since become an important theorist in the philosophy of language and even learning through AI.

Dialogism

Bakhtin criticises the 'monologic' tradition in Western thought, where individuals are defined by religion, a concept of the finite self, the soul, and establishment belief. Individuals, for him, are open and must engage in dialogue with the world and others. Dialogism is his foundational idea that all language and thought are inherently dialogic. We learn language and come to understand it through the practice of dialogue. Learning means dialogue in many forms with different people, tools and perspectives in many different contexts. The idea was introduced in his Problems of Dostoevsky's Poetics (1929). This dialogue stands in stark contrast with the teacher norm of direct instruction or monologue. Learning emerges from dialogue, external and internal.

In his incomplete essay Toward a Philosophy of the Act (1986), written in the 1920s, he outlines a theory of human identity or mind based on dialogic development. There are three forms of identity:

I-for-Myself

I-for-the-Other

Other-for-Me

The I-for-Myself is untrustworthy. The 'I-for-the-Other', on the other hand, allows one to develop one's identity as a set of perspectives others have about you. The 'Other-for-Me' is the incorporation by others of their perception of you into their own identities. This is an expansive and interesting form of identity in an age of dialogue with others through social media and messaging technology, as well as dialogue with technology, which now plays a similar role.

Dialogism manifests itself in ‘heteroglossia’, with exposure to a multiplicity of voices and perspectives in a language. These can be parents, teachers, friends and those on social media. A heterogeneous group of voices that one can learn from.

Authoritative vs. Internally Persuasive Discourse

Authoritative Discourse is discourse that is embedded in a culture, enshrined in tradition. It is non-negotiable and taught as truth. It may be religious belief, science, the canon, parental belief and other assumed forms of knowledge and practice. It is often enshrined in an official curriculum or syllabus, which learners must memorise and regurgitate in exams.

Internally Persuasive Discourse is more personal, related to the learner’s experiences and views. The learner has to engage in dialogue with the established views to relate it to their own prior knowledge, adapt to it, assimilate it and create their own sense of meaning.

Although we learn through both these forms of discourse, learning is to move from the authoritative to the internally persuasive to find our own deeper forms of meaning.

Carnivalesque

Bakhtin also wrote about the concept of the 'carnivalesque' in literature, which subverts and liberates the assumptions of the dominant style or atmosphere through humour and chaos. In education, this can mean pedagogical strategies that disrupt traditional hierarchies and empower students to challenge and question.

Dialogism and AI

AI technology has now produced dialogic systems that may also meet Bakhtin's conditions. We can now communicate, in dialogue form, with a new form of 'other' that lifts us out of our I-for-Myself, allowing technology to play a role in I-for-the-Other and Other-for-Me.

Interestingly, traditional educators demand of the technology that it be a truth machine, when language is no such thing. Language is multi-perspective, at times messy, even carnivalesque. It points not just towards learning as dialogue but learning emerging from dialogue, in many different forms. LLMs and chatbots seem to be delivering this new form of learning.

Bakhtin would have loved the carnivalesque uses, such as 'answer as a pirate' and the odd poems. I rather like the fact that Musk's Grok chatbot is quite snarky! AI satisfies both authoritative and personal learning, and language allows for both. This is why I think Generative AI will have a profound effect on learning through dialogue, which is how most good teaching is done, both direct instruction and looser learner-centric dialogue.

Conclusion

He recognised the multifarious and messy nature of human communication, and therefore learning. For this he should be applauded, where theory is so often formulated in rigid and simplistic models. Bakhtin's work has influenced educational theorists who view learning as a social, dialogical process and who advocate for more participatory, student-centred approaches. Educators who draw on Bakhtin's ideas might focus on the role of discussion, debate, and dialogue in learning, and they might seek to create learning environments in which students' voices are heard and valued as part of the collective construction of knowledge.

Bibliography

Bakhtin, M.M. (1984) Problems of Dostoevsky's Poetics. Ed. and trans. Caryl Emerson. Minneapolis: University of Minnesota Press.

Bakhtin, M.M. (1993) Toward a Philosophy of the Act. Ed. Vadim Liapunov and Michael Holquist. Trans. Vadim Liapunov. Austin: University of Texas Press.

Bakhtin, M.M. (1981) The Dialogic Imagination: Four Essays by M.M. Bakhtin. Ed. Michael Holquist. Trans. Caryl Emerson and Michael Holquist. Austin and London: University of Texas Press.

 

 

 

Thursday, August 17, 2023

Why ‘storytelling’ sucks

The Story Paradox, by Jonathan Gottschall, will disturb you. It is taken as a given that stories are a force for good. That is the orthodoxy, especially among the supposedly educated. But what if the truth is that they are now a force for bad? Are we being drowned in a tsunami of stories from TED Talks, YouTube, TV, movies, Netflix, Prime, Disney, social media, education and workplace learning? Could it be that they distract, distort and deny reality?

We love stories but only the stories that confirm our own convictions. We seek out those narratives that tell us what we already believe, retell and reinforce them, over and over again. And maybe the most subversive story of all, is the one we tell ourselves, over and over again, that we are right and those other storytellers and stories are wrong.

 

Technology has given storytelling an immense boost. Social media is often a storytelling device for amplifying stories you love and attacking those you hate. Movies are often morality tales reflecting what the audience already believes and yearns to believe – like Barbie. Box sets are consumed in a series of 6-13 episodes, then subsequent series released. Streaming services have exploded with Netflix, Prime, Disney, HBO and many others. They have huge amounts of archived content and release more than any individual could consume. We live in this blizzard of confirmative storytelling.

 

It is common in my world to hear the virtues of 'storytelling' extolled by people who want to sound virtuous. Everything's a story, they claim, almost philosophically, except that is bad philosophy. Bounded in the nutshell of narrative, they flounder when that narrative hits any serious philosophical challenge, such as 'What do we know?' or 'What is real?' Epistemology and ontology are hard ideas and of little interest to storytellers. What they actually do is parrot the fuzzy echoes of bad French philosophy, which sees all as narrative.

 

I used to think this was just the mutterings of the philosophically challenged, until increasing numbers of people, nay large portions of the society I live in, started believing their own narrative about narratives. You really are your lived experience, we're told (except for those whose stories don't count, like the working class). Your gender is whatever narrative you choose, so here's a pick 'n' mix of genders and dispositions. And by the way, you must also use the pronouns I use in my story or you will be exposed as denying the truth of my 'story'. This is not just the tyranny of narrative, it is the tyranny of the author demanding that reader and speaker comply with their fictions. Of course, language, supposedly their handmaiden, doesn't work that way. No matter how hard you try, most people don't use the terms, as language is use, not imposed imperatives.

 

Even worse, children will be given stories about what choices of story they want to live in. These used to be called fairy tales but, rather than warn children of the dangers of wolves in sheep's clothing or of being too judgemental or trusting, they are being encouraged to see such stories as a menu of options, even if it means serious and irreversible medical intervention. Being a drag queen is no longer seen as little more than pantomime-dame performance, but as a lifestyle.

 

Suddenly, and it was sudden, we are drowning in narratives. Science and reason were consumed in the fierce bonfire of storytelling. The conceit of calling 2+2=4 a story is quite something. Of course it opened up all sorts of weird narratives on both the right and the left, although each side claims immunity, their respective narratives being true in the eyes of their respective tribes, the others' fictional conspiracies. And there's the rub. Madcap ideas, from there being no biological sex through to QAnon, are all permitted in the la la land of storytellers. That's the consequence of this belief in Santa Clauses. We get infantilised by narratives.

 

The world as stories, or the storied self, are inadequate theories, as they are what Popper called universal theories: you cannot deny them, as your critic will just claim your critique IS storytelling. That is BS and here's why.

 

It started in the 1960s with Barthes' 'Introduction to the Structural Analysis of Narratives' and has been gulped down by those who are actually too lazy to read French philosophy and prefer second hand narratives to thinking for themselves. Bruner was the equivalent in psychology, with 'The Narrative Construction of Reality'. To be fair, Lyotard was rightly suspicious of post-Enlightenment grand narratives (although the French bought into them, philosophy and intellectual thought being a matter of fashion). He was right on Freud and Marx. One super narrative, used and abused by Marxism, that people can be sliced and diced into the oppressed and oppressors, has been plucked from the literature, the dying remnants of dialectical materialism, and taken as gospel. I thought that Hitler and Goebbels on the right, and Stalin, then Pol Pot, on the left had put an end to that meta-narrative nonsense. But no, it was resurrected, wrapped up in Foucault-type obfuscation, none more absurd than Judith Butler's queerness, the very triumph of philosophical naivety, where everything is a performative story. Facts are trampled on and trumped by narratives, giving anyone permission, Trump being a master of the art, to say what they want, even to lie. The narrative turn in society turned out to be a rebounding and dangerous boomerang. Worse still, it is an intellectual cul de sac.

 

Life is not a story. Socrates was right: the stories and narratives we tell ourselves are usually the stories and narratives others have told us, especially those of the storied self and identity politics. You can see that in people who invariably have the same religion as their parents or adopt the lifestyle of their peer groups. In a world where stories are all that matter, peer groups with consistent narratives become all powerful. It is the sign of a closed, not an open, mind. Wallowing in storyland is not to think for yourself but to think tribally. Stories have shifted from being humane tales we tell each other to socialise, comfort and amuse ourselves, to something sinister, something akin to weapons of abuse.

 

Stories often bore me, especially backstories, or elaborations in advertising or learning. Columnists with their cosy tales of comfortable conceit. Every article now begins with an anecdote and ends up as a bad sermon or parable. Politicians push the story of the day. Marketing gurus like Seth Godin banging on about telling lies, sorry stories, in marketing.

 

Even worse, educators see themselves as storytellers. Every training course has to be fronted or supported by the crutch of storytelling, so you get tedious narratives about how Peter, Abigail and Sophie have a unique spin on the Data Protection Act. Click on the speech bubble to see what Alisha thinks about Data Protection. Really?

 

Everyone has a novel in them, so the story goes, but they don't, so that genre has descended into the self-indulgence of characters you must 'like' telling stories you 'empathise' with. Pop music has become a bland sewer of bad storytelling lyrics, teenage doggerel, albeit in perfect pitch through Auto-Tune. Storification has become so inflated that it has smothered art. If you don't conform, as a comedian at the Edinburgh Fringe, the venue will now cancel you on the whims of some students who work there.

The trouble with reliance on stories is that they tend to become beliefs and dogmas, then those with strong convictions make convicts of the others who want to tell other stories, even if they’re satirical or straight up funny. It has happened repeatedly in history with Fascism, Communism, Dictatorships and Theocracies. They will always come up against the cold reality that the world is not a story, certainly not the story they believe in.

 

In the beginning was not the word, it was the world. The world is not made up of stories. It was around before us and will be around after we are gone, along with our conceits. End of story.

Sunday, July 16, 2023

This is the worst AI will ever be, so focused are educators on the present they can’t see the future

One thing many have not grasped about this current explosion of AI is that it is moving fast – very fast. Performance improvement is real, rapid and often surprising. This is why we must be careful in fixating on what these models do at present. The phrase 'AI is the worst it will ever be' is relevant here. People, especially in ethical discussions, are often fixated on the past, old tools and the present, and not considering the future. It only took 66 years between the first flight and getting to the moon. Progress in AI will be much faster.

In addition to being the fastest adopted technology in the history of our species, it has another feature that many miss – it learns, adapts and adds features very, very quickly. You have to check in daily to keep up.

Learning technology

The models learn, not just from unsupervised training on gargantuan amounts of data but also from reinforcement learning from human feedback. LLMs reached escape velocity in functionality when the training set reached a certain size, and there is still no end in sight. Developments such as synthetic data will take it further. This simple fact, that this is the first technology to 'learn', and learn fast, at scale, continuously, across a range of media and tasks, is what makes it extraordinary.

 

Teaching technology

There is also the misconception around the word 'generative', the assumption that all it does is create blocks of predictable text. Wrong. Many of its best uses in learning are its ability to summarise, outline, provide guidance, support and many other pedagogic features that can be built into the software. This works and will mean tutors, teachers, teaching support, note-taking support, coaches and many other services will emerge that aid both teaching and learning. They are being developed in their hundreds as we speak.

 

Additive technology

On top of all this is the blending of generative AI with plug-ins, where everything from Wikipedia to advanced mathematics has been added to supplement its functionality. These are performance enhancers. Ashok Goel has blended his already successful teaching bot Jill Watson with ChatGPT to increase the efficacy of both. On top of this are APIs that give it even more potency. The reverse is also true, where generative AI supplements other tools. There is no end of online tools that have added generative AI to make themselves more productive, as it need not be a standalone tool. The sketch below shows the general pattern.
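To make the plug-in idea concrete, here is a minimal sketch of the tool-calling pattern using the OpenAI Python SDK. The model name, the lookup_wikipedia helper and its canned return value are my own illustrative assumptions, not any vendor's actual plug-in.

```python
# A minimal sketch of the plug-in pattern: the model decides when to call
# an external tool, and the tool's result is fed back for a grounded answer.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def lookup_wikipedia(topic: str) -> str:
    # Hypothetical stand-in for a real retrieval plug-in.
    return f"Summary of the Wikipedia article on {topic}..."

tools = [{
    "type": "function",
    "function": {
        "name": "lookup_wikipedia",
        "description": "Fetch a short factual summary of a topic.",
        "parameters": {
            "type": "object",
            "properties": {"topic": {"type": "string"}},
            "required": ["topic"],
        },
    },
}]

messages = [{"role": "user", "content": "Who was Mikhail Bakhtin?"}]
response = client.chat.completions.create(model="gpt-4", messages=messages, tools=tools)

# If the model chose to call the tool, run it and feed the result back.
message = response.choices[0].message
if message.tool_calls:
    call = message.tool_calls[0]
    result = lookup_wikipedia(**json.loads(call.function.arguments))
    messages += [message, {"role": "tool", "tool_call_id": call.id, "content": result}]
    final = client.chat.completions.create(model="gpt-4", messages=messages)
    print(final.choices[0].message.content)
```

The design point is that the language model supplies the dialogue and the plug-in supplies the facts or computation, each covering the other's weakness.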


Then there is use of, and translation between, hundreds of languages, including computer languages, even translation from text to code, images, video, 3D characters and 3D worlds... it is astounding how fast this has happened, oiling productivity, communications, sharing and learning. Minority languages are no longer ignored.


All of the world's largest technology companies are now AI companies (all in the US and China). The competition is intense and drives things forward. This blistering pace means they are experimenting, innovating and involving us in that process. The prizes of increased productivity, cheaper and faster learning, along with faster and better healthcare, are already being seen, if you have the eyes to look.


People tend to fossilise their view of technology; their negativity means they don't update their knowledge, experience and expectations. AI is largely Bayesian, it learns as it goes and it is not hanging around. People are profoundly non-Bayesian; they tend to rely on first impressions and stick with their fixed views through confirmation and negativity biases. They fear the future so stick to the present.

 

Conclusion

Those who do not see AI as developing fast, even exponentially, use their fixity of vision to criticise what has already been superseded. They poke fun at ChatGPT 3.5 without having tried ChatGPT 4, any plug-ins or any of the other services available. It's like using Wikipedia circa 2004 and saying 'look, it got this wrong'. They poke the bear with prompts designed to flush out mistakes, like children trying to break a new toy. Worse, they play the GIGO trick, garbage in: garbage out, then say 'look, it's garbage'.


This is the worst AI will ever be and it's way better than most journalists, teachers and commentators think, so we are in for a shock. The real digital divide is now between those with curiosity and those who refuse to listen. Anyone with access to a smartphone, laptop or tablet – basically almost all learners in the developed world – has access to this technology. The real divide is between those in the know and those not, those using it and those not, and that is the increasing gap between learners and teachers. So focused are educators on the present they can't see the future.

Thursday, July 13, 2023

AI is now opening its eyes, like Frankenstein awakening to the world

The AI frenzy hasn't lessened since OpenAI launched ChatGPT. The progress, widening functionality and competition have been relentless, with what sounds like the characters from a new children's puppet show – Bing, Bard, Ernie and Claude. This brought Microsoft, Google, Baidu and Anthropic into the race, actually a two horse race, the US and China.

It has accelerated the shift from search to chat. Google responded with Bard, the Chinese with Ernie's impressive benchmarks, and Claude has just entered the race with a 100k token context window and cheaper prices. They are all expanding their features but one particular thing caught my eye: the integration of 'Google Lens' into Bard, from Google. Let's focus on that for a moment.

 

Context matters

Large Language Models have focused on text input, as the dialogue or chat format works well with text prompting and text output. They are, after all, ‘language’ models but one of the weaknesses of such models is their lack of ‘context’. Which is why, when prompting, it is wise to describe the context within your prompt. It has no world model, doesn’t know anything about you or the real world in which you exist, your timelines, actions and so on. This means it has to guess your intent just from the words you use. What it lacks is a sense of the real world, to see what you see.
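To see what 'describing the context within your prompt' means in practice, here is a minimal sketch; both prompts are invented for illustration. The first leaves the model guessing intent; the second supplies the context it cannot otherwise know.

```python
# Two prompts for the same request. The model has no world model and no
# memory of who you are, so the second prompt spells out the context it
# would otherwise have to guess.
bare_prompt = "Suggest a revision plan."

contextual_prompt = (
    "You are tutoring a first-year biology undergraduate who has an exam "
    "on cell respiration in ten days and can study for one hour each "
    "evening. Suggest a day-by-day revision plan."
)
```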

 

Seeing is believing

Suppose it could see what you see? Bard, in integrating Google Lens, has just opened up its eyes to the real world. You point your smartphone at something and it interprets what it thinks it sees. It is a visual search engine that can ID objects, animals, plants, landmarks, places and no end of other useful things. It can also capture text as it appears in the real world on menus, signs, posters, written notes; as well as translating that text. Its real time translation is one of its wonders. It will also execute actions, like dialling telephone numbers. Product search is also there from barcodes, which opens up advertising opportunities. It even has style matching.
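As a hedged sketch of what 'opening its eyes' looks like through an API, here is a multimodal request using the OpenAI Python SDK; the model name and image URL are illustrative assumptions, and Lens itself is not scriptable this way.

```python
# A minimal sketch of a vision-enabled chat request: an image plus a
# question, returning an interpretation of what the model 'sees'.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o",  # assumed vision-capable model name
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What plant is this, and is it safe to touch?"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/photo-of-plant.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```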

More than meets the eye

OK, so large language models can now see and there’s more than meets the eye in that capability. This has huge long-term possibilities and consequences, as this input can be used to identify your intent in more detail. The fact that you are pointing your phone at something is a strong intent, that the object or place is of real, personal interest. That, with data about where you are, where you’ve been, even where you’re going, all fills out your intention.

 

This has huge implications for learning in biology, medicine, physics, chemistry, lab work, geography, geology, architecture, sports, the arts and any subject where visuals and real world context matters. It will know, to some degree, far more about your personal context, therefore intentions. Take one example, healthcare. With Google Lens one can see how skin, nails, eyes, retinas, eventually movements can be used to help diagnose medical problems. It has been used to fact check images, to see if they are, in fact, relevant to what is happening on the news.  One can clearly see it being useful in a lab or in the field, to help with learning through experiments or inquiry. Art objects, plants, rocks can all be identified. This is an input-output problem. The better the input, the better the output.

 

Performance support

Just as importantly, learning in the workplace is a contextualised event. AI can provide support and learning relevant to actual workplaces, airplanes, hospital wards, retail outlets, factories, alongside machines, in vehicles and offices - the actual places where work takes place - not abstract classrooms.


In the workplace, learning at the point of need for performance support can now see the machine, vehicle, place or object that is the subject of your need. Problems and needs are situated and so performance support, providing support at that moment of need, as pioneered by the likes of Bob Mosher and Alfred Remmits, can be contextualised. Workplace learning has long sought to solve this problem of context. We may well be moving towards solving this problem.

 

Moving from 2D to 3D virtual worlds

Moving into virtual worlds, my latest book, out later this year, argues that AI has accelerated the shift from 2D to 3D worlds for learning. Apple may not use the words 'artificial' or 'intelligence' but its new Vision Pro headset, which redefines computer interfaces, is packed full of the stuff, with eye, face and gesture tracking. Here the 3D world can be recognised by generative AI to give more relevant learning in context, real learning by doing. Again, context will be provided.

 

Conclusion

Generative AI was launched as a text service but it quickly moved into media generation. It is now opening its eyes, like Frankenstein awakening to the world. There is often confusion around whether Frankenstein was the creator or the created intelligence. With Generative AI, it is both, as we created the models but it is our culture and language that make up the LLM. We are looking at ourselves, the hive mind in the model. Interestingly, if AI is to have a world view, we may not want to feed it such a view; as with LLMs, we may want it to create a world view from what it experiences. We are making steps towards that exciting, and slightly terrifying, future.

Huw did we ever get to this?

I spoke to an interesting woman at the BBC once, where I gave a talk on the challenge of digital media to traditional TV. My talk was received like a turd left behind by a burglar, as they then saw the internet and YouTube as an irrelevant gadfly. But that’s another story. At that event I met this woman, from Northern Ireland, who trained fledgling newsreaders and presenters. She told me she had informally called her course ‘The Egos Have Landed’ as she had repeatedly seen an odd phenomenon, young, and not so young journalists and others, catapulted into fame, thinking they were something more than autocue puppets. Their exposure turned them into monstrous narcissists who then started having opinions they thought mattered, all because they read from a teleprompter or chatted to each other on a studio sofa.

Savile was the king of such monsters, a prolific paedophile lauded, and worse, protected by BBC managers. Everyone knew, everyone laughed it off. Roll the credits on decades of paedophiles from Rolf Harris to Stuart Hall and a string of Radio 1 DJs. They're an odd bunch. Kristian Digby, host of BBC1's To Buy Or Not To Buy, accidentally suffocated while attempting auto-erotic asphyxiation. We love a jolly frontman, as long as we don't hear about his not so jolly backroom behaviour. Schofield and Edwards are just the latest in a long line of friendly faces that mask disturbing behaviour. I'm not much concerned with their behaviour, as the witch hunts are so unedifying.
The deeper malaise is old media trying hard to avoid extinction. They need more front, as that’s the only thing they have left. Witness the recent disastrous interview by the BBC with Andrew Tate or Cathy Newman being demolished by Jordan Peterson. Whatever your views on these two odd chaps, they themselves have a lot of ‘front’, they’re smart, articulate and part of the counter-culture that has challenged TV. They ran rings around their stumbling, formulaic, ex-journalist interrogators.
The problem is too much focus on the ‘presentation’ layer. Presenters are really just juiced up human PowerPoints. I see this in tech all the time, its obsession with UX, then along comes Google – just type into a box, or ChatGPT, the same. TV has to put horrifically expensive lipstick on pigs because we want the truth watered down and mouthed out to us by what is known in the trade as ‘talking heads’. Loose Women, Quiz Shows and Reality TV are packed with these D-list ‘presenters’. They never die, just reappear as banal commentators on endless third rate entertainment programmes, the graveyards for clowns.
I have no idea why we think that news ’readers’ are worth listening to, outside of being working journalists. They’re the teleprompt and interview folk, and usually not very good at the latter, as their skills are with the written not spoken word. I was once introduced by Jackie Bird, a famous TV presenter in Scotland, as ‘Douglas’, even though I could see the word on the autocue was ‘Donald’. She was basically a bad parrot.
The problem is that they now get paid huge sums to ‘present’ homilies, seem like wholesome figures, often castigating others for their moral turpitude. We expect them to be our moral guardians, clean, pure, sensible and decent, when in truth they’re worse than most, as they often turn into overpaid narcissists. Will we miss Huw or will we manage without paying him £410,000 a year to read an autocue and behave like an old letch hunting down young ‘talent’?
I feel sorry for old Huw. He seems so ordinary, unremarkable and absent of charisma. Just a drone voice over royal events and a dull, earnest newsreader. I can't think of a single interesting sentence he ever uttered. He does stand out as someone without any obvious talent or presence.
TV is in trouble, as it is being crushed by the timeshifted streamers, social media and a dozen other alternatives. This is merely a sign of the old v new.

Tuesday, July 11, 2023

Is Ethics doing more HARM than GOOD in AI for learning?


I put this to an audience of Higher Education professionals at an Online Learning Conference yesterday at Leeds University.

I have an amazing piece of technology I've invented. It will bring astonishing levels of autonomy, freedom and excitement to billions of people. But here's the downside: 1.4 million people will die horrible, bloody, sometimes mangled deaths every year, with another couple of million maimed and injured. This World War level of casualties will strike every year, and is the price you have to pay. Would you say YES or NO?

Most rational souls would say NO. But let me reveal that technology – the automobile. We have come to an accommodation with the technology, as the benefits outweigh the downsides. AI may even bring in the self-driving car. My point is that we rush to judgement, as we are amateur ethicists and rely on gut feel, not reason.


This whole area, ethics, is oddly subject to a huge amount of bias as it is such an emotive subject. It plays to people's fears and prejudices, so objectivity is rare. Add new technology to the mix, along with a pile of stories in social media and you have a cocktail of wrong-headed certainty and exaggeration.

 

1. Deontological v Utilitarian

The offer I made at the start, I have put to many audiences. It is never taken up, as we are Utilitarians (calculating benefits against downsides) when it comes to actual decisions on using technology but dogmatic Deontologists (seeing morals as rules or moral laws) when it comes to thinking about ethics and technology. 

I am a fan of David Hume's Indirect Utilitarianism, refined by Harsanyi as preference Utilitarianism. For a good discussion of how this relates to ethical issues and AI, see Chapter 9 of Stuart Russell's excellent book Human Compatible (2019), where he attempts to translate this into achievable, controlled but effective AI. Curiously, Hume found himself cancelled by a few morally deluded students at the University of Edinburgh recently, who removed his name from the building which housed the Department of Philosophy. This was doubling down on the Religious Deontologists who refused him a Professorship in the 18th century, when he was one of the most respected intellectuals in the whole of Europe. Both groups are deluded Deontologists. He remains, in my opinion, the finest of the English speaking philosophers. This tension has existed in ethical thinking since the Enlightenment.

In truth, most of what passes for Ethics in AI these days is lazy 'moralising', moral high horses ridden by people with absolute certainty about their own values and rules, as if they were God-given. More than this, they want to impose those rules on others. They call themselves 'ethicists' but it is thinly disguised activism, as there is no real attempt to balance the debate out with the considerable benefits. It's an odd form of moral philosophy that only considers the downsides.

Google, Google Scholar, AI-mediated timelines on almost all social media, the filtering of harmful and pornographic material out of our email inboxes, the protection of our bank accounts – all use AI. The future suggests that other huge near-term upsides in terms of learning, healthcare and productivity are well underway.

There is a big difference between ‘ethics’ and ‘moralising’. Even a  basic understanding of ethics will reveal the complexity of the subject. We have thousands of years of serious intellectual debate around deontological, rights-based, duty-based, utilitarian and other ways of thinking about ethics. A pity we give it so little thought before passing judgement.

2. Duplicity

Thomas Nagel points out, in his book 'Equality and Partiality', that we often pronounce strong deontological, moral opinions but rarely apply them in our own behaviour. We talk a lot about, say, climate change, but drive large cars and fly off regularly on vacation. We talk about the climate emergency in academia but fly off for conferences at the drop of a sunhat, don't deliver learning online and believe in spending €28 billion flying largely rich students around Europe through Erasmus. You may want all of your AI to be fully 'transparent'. That's fine, but stop using Google and Google Scholar and almost every other online service, as they all use AI and it is far from transparent. My favourite example is those who are happy to 'probe' my unconscious in 'unconscious bias' training but decry the use of student data in learning on the grounds of privacy!

I'm just back from Senegal, where my fellow debating colleague Michael, from Kenya, berated the white saviours for denying Africa the opportunities that AI offers, denying young aspiring workers the chance to do human reinforcement training that pays above the average wage and gives people a step into IT employment. It's bizarre, he says, for white saviours on 80k to see this as exploitation.

3. To focus on AI is to focus on the wrong problem

Rather than climate change, the possibility of nuclear war, a demographic time bomb or increasing inequalities – AI is getting it in the neck, yet it may just solve some of these real and present problems. In particular, it may well increase productivity, democratise education and dramatically reduce the costs of healthcare. These are upsides that should not be thwarted by idle speculation.

At its most extreme, this speculation that 'AI will lead to extinction of the human species' seems to have turned into the Doomsday tail that wags the black dog, despite the fact there is no evidence at all that this is possible or likely. Focus on what is likely, not the fear-mongering that caught your attention on Twitter.

4. New technology always induces an exaggerated bout of ethical concern

Every man, woman, their uncle, aunt and dog is an armchair ethicist, but this is hardly new. It was ever thus. Plautus made the same point about the sundial in the 3rd century BC:

The gods confound the man who first found out how to distinguish hours!
Confound him too who in this place set up a sundial to cut and hack my days so wretchedly into small portions!

Plato thought writing would harm learning and memory in the Phaedrus, the Catholic Church fought the printing press (we still idiotically teach Latin in schools), travelling in trains at speed was going to kill us, rock 'n roll spelled the end of civilisation, calculators would paralyse our ability to do arithmetic, Y2K was going to cause the world to implode, computer games would turn us into violent psychopaths, screen time would rot the brain, the internet, Wikipedia, smartphones, social media… now AI.


As Steven Pinker rightly spotted, a predictable combination of negativity and confirmation bias leads to a predictable reaction to any new technology. This inexorably leads to an over-egging of ethical issues, as they confirm your initial bias.

 

5. Fake distractive ethics

Curiously, much of the language and many of the examples in the layperson's mind have come from shallow and fake news, which is actually a real concern in AI, with deep fakes. Take the famous NYT article where the journalist claimed ChatGPT had told him to leave his wife. On further reading, it shows he had prompted it towards this answer. If some stranger in a bar dropped you the line that his marriage was on the rocks, you'd put a significant bet on him being right to leave his wife. ChatGPT was actually on the money. It was a classic GIGO, Garbage In: Garbage Out, poke the bear story. Then there was the AI guided missile that supposedly turned back and hunted down its launcher – never happened, complete fake. The endless stream of clickbait 'look it can't do this', mostly using ChatGPT 3.5 (a bit like using Wikipedia circa 2004), flooded social media. This is the worst AI will ever be but hey, let's not consider the fact that first release technology almost always leads to dramatic improvement. Think long-term, folks, before using short-term clickbait to make judgements.

 

6. Argument from authority

Then there is the argument from authority. I'm a Professor, say people in strongly worded letters to the world, therefore I must be right. Two things matter here: domain experts often have a lousy track record, and they often lack expertise in philosophy, moral philosophy, the history of technology, politics and economics. To be fair, experts in AI are worth listening to, as they understand what is often difficult and opaque technology. Generative AI, in particular, is difficult to comprehend, in terms of what it is, how it works and why it works. It confounds even AI experts. But they are not experts on politics, ethics or regulation.

 

The letters that appeared in both 2015 and 2023, pushed by Tegmark's Future of Life Institute (whose role is ethical oversight), use the argument from authority. We're academics, we know what's right for you, the masses. The 2023 letter demanded that we immediately stop releasing AI for six months until the regulators caught up – a ridiculous and naive request that showed their political, economic and social naivety. I dislike this 'letter writing' lobbying. It carried the names of people who demanded they be taken off the list, as they had not given permission, and some have since rescinded the statement. In any case, authority alone is never enough.

 

Conclusion

This tsunami of shallow moralising is almost perfectly illustrated in Higher Education, where most of the debate around ethics has focused on plagiarism, when the actual problem is crap assessment. There is little consideration of the huge upsides and benefits for teachers and students alike. Learning, in my view, is the biggest beneficiary of this new form of AI, healthcare second. Hundreds of millions are already using it to learn.

In climbing into personal pulpits, we may fail to realise the benefits in learning. Personalised learning, allowing any learner to learn anything, at any time, from any place, is becoming a reality. Functioning, endlessly patient tutors that can teach any subject at any level in any language are on the horizon, universal teachers with a degree in any subject, driven by good learning science and pedagogy. The benefits for inclusion and accessibility are enormous, as is its potential to teach in any language.

It is not that there are no ethical problems, just that objective ethical debate is harmed when it becomes enveloped in a culture of absolute values and intolerant moralising. For every ethical problem that arises, there seem to be glib answers that are simple, confidently pronounced and often wrong.

I wrote this because I feel we are now in the position, in some countries and sectors, especially education, of getting bogged down in a swamp of amateur moralising on AI, suppressing the benefits. This has already happened in the EU, where the atmosphere is one of general negativity, seeing their role as regulators not creators. But the EU is only 5.7% of the world's population. Google has not released Bard in the EU, OpenAI have set up shop in London and when Italy banned ChatGPT it spooked investors. We are in danger of throwing the baby out with the bathwater – and the bath. In practice, learners are using this tech anyway; they are bypassing institutional inertia and high-horse ethical posing. Eric Atwell, at Leeds University, noted that all of his AI students ticked 'not interested' when it came to taking a module on ethics. They have a point. They know they'll get a lot of moralising and not much in the way of ethics. It is unethical not to be using AI in learning.

Indeed, ethics may be doing more harm than good by making AI less useful and efficient. Guardrailing and alignment may well be reducing the effectiveness of generative AI by placing too many constraints on output.

Leeds leads the way on HE and online learning events

Solid Online Learning Summit at Leeds University, open debate and discussion and some great people as speakers as well as expertise in the audience. I could only be there for one of the two days but it was worth the trip to Leeds, my second in a week. I like Leeds.

Irrepressible Neil Mosley 

First up, the irrepressible Neil Mosley, a knowledgeable, productive and honest broker of information on online learning in HE. Knows his stuff. He outlined the growth in the UK HE online learning market. I say growth but at 400k, in reality, it is a bit lacklustre, a point also made by both myself and Paul Bacsich. One could conclude that this is little more than a bit of an earner on the side, especially for foreign student income, rather than the strategic execution we see in the US. Taken by surprise by Covid, they seem to be retreating back into the old model and necessary expansion is slight. There is no real strategic intention to reduce costs and scale with online offers; it is often an attempt to milk the lucrative 'Masters Degree' market.

His characterisation of the ‘partnerships’ market was good:

OPMs (Online Programme Management)

Ex-MOOC platform companies

Short Course Companies

Service Companies (learning design etc)

The whole MOOC movement made lots of mistakes and they've now turned into 'courses'. The disaster that was FutureLearn, an organisation that simply ripped cash out of UK universities, distracted them from the real task of online learning and collapsed as they had no business expertise. Hiring your CEO from BBC Radio condemned them to a long decline into irrelevance. The OU was meant to open up HE to a wider audience and could have led the charge into online learning but the old boys club took over and has thwarted them at every turn.

Neil then looked at growth in the numbers and types of courses:

Degrees

MOOCs

Premium Short Courses

Micro-credentials

 

Micro-credentials

It is worth bringing in a later panel at this point, on 'Micro-credentials', which must be one of the most disastrous bits of HE marketing ever… such a stupid word, an explicit recognition that what you offer is a trite piece of paper, badge or some such nonsense. It is a demeaning and diminished term. The audience knew this but the panel seemed happy with it because it could be 'translated' – the worst response to any question on the day. This is what happens when you get people who know nothing about marketing talking about marketing. Not for the first time did the audience show real insight and expertise.

This rose by any other name stinks. A blatant attempt to, yet again, steal market share from those who do skills training well: FE and private providers. HE is hopeless at skills stuff but smells the cash and has been down lobbying the DfE; the panellist from Staffordshire admitted as much. The other panellist, from Wales, seemed to live on EU Erasmus grants, which have, rightly in my view, dried up. I did like the woman from Mexico, who was blunt and honest about her very different context. Once again money gets sucked up from actual skills delivery to pretend skills delivery in HE. They can't do this, and they justify this immoral move by tagging on the term 'Lifelong Learning'. It doesn't wash. HE is NOT in the Lifelong Learning sector, never was and never will be. There was also some baloney about 'badges' from a 'badges' man who we were told was some sort of lackey in the Royal Household. They will learn the hard way and fail to make money. Paul Bacsich made much the same point. I like Paul – he's been around the block several times and has a good nose for this waste, which I remember him describing as 'doomed to succeed'.

Learning Engineering

I enjoyed Aaron Kessler's talk on Learning Engineering, although I'm not a fan of the term 'engineering' here, as it is being used analogously. I feel that learning is a wide and messy business and doesn't always fit neatly into this paradigm. The insistence on using learners in the process of design suffers, I think, from the obvious fact that they don't know what they don't know and are often delusional about good learning theory and practice. But the talk was sound, as it stated what is obvious: that process matters, implementation is hard and evaluation harder. The push towards data was also, rightly, emphasised. Once again an audience member pointed out that most don't have the luxury of the complexities of abstract models, as they have tight deadlines (great point). Aaron very kindly gave me a copy of the 'Learning Engineering Toolkit' book, which has some pretty good stuff. I tackled the same stuff in my 'Learning Experience Design' book. We're all in the same boat here, rowing in the same direction.

Ethics and AI

My contribution was a short talk on Ethics and AI. I made the point that most Ethical AI is not ethics at all but 'moralising'. It's a complex issue, diminished when barely disguised activism enters the room. Lots of moral high horses are being ridden into the debate, clouding expertise. The fact that HE focused almost entirely on plagiarism as the moral issue says how far behind we are in our thinking about the use of AI in HE. The problem is not AI but crap assessment. My message was a bit depressing as I now think the UK and EU are way behind on both AI and AI for learning. The US and China are streaking ahead as we wallow in bad regulation. Eric Atwell, who teaches AI at Leeds, very kindly summed my talk up by agreeing with every last thing I had said! This was gratifying, as I find a great deal of good sense comes from practitioners, as opposed to arrivistes who have jumped on the ethical bandwagon. Adam Nosel made some good points about coaching and the need to maintain the human and social elements, as did Andrew Kirkton on some of the nitty gritty issues in HE.

Podcasts
I had breakfast with Bo from Warwick, who is doing some great work on podcasting in her institution. It is a subject close to my heart. We are stuck in a traditional paradigm in learning design, ignoring one of the most important mediums of our day. Not to use podcasting in learning is mad, as hundreds of millions listen to learning podcasts every day, of their own volition. We know a lot about how to do these well and Bo was on point here. Good to see young experts get a voice at this event.

Conclusion
These were merely my impressions written on the train back to Brighton, not an exhaustive summary, and even though I disagreed with some, that is the point. Margaret Korosec, Jo-Anne Murray, Megan Parsons and the rest of the team did a great job here, encouraging honest, open and sometimes uncomfortable debate. This is about moving forward, learning something new and moving on. To do that we need to look outwards. I'd have loved to have seen some people from FE here, as well as private providers (there were some). But this was only the first event. It was a shame I couldn't stay for the second day; the Tapas meal was fun, Leeds I love, and I met and spoke to some great people. Look forward to the second.

Sunday, June 25, 2023

Can machines have empathy and other emotions?

Can machines have empathy and other emotions? Yann LeCun thinks they can and I agree, but it is a qualified agreement. This will matter if AI is to become a universal teacher and have the qualities of an expert teacher.

 

One must start with what emotions are. There has been a good deal of research on this, by Krathwohl, Damasio and Immordino-Yang, Lakoff, and Panksepp. Good work has also been done on uncovering the role of emotion in learning by Nick Shackleton-Jones. I covered them all in this podcast.


We must also make the distinction between:


Emotional recognition

Display of emotion

Feeling emotions

 

Emotional recognition

The face is a primary indicator of emotions and we look for changes in facial muscles, such as raised eyebrows, narrowed or widened eyes, smiles, frowns, or a clenched jaw. Facial scanning can certainly identify emotions using this route. Eye contact is another, a solid gaze showing interest, even anger, while avoiding eye contact can indicate disinterest, shyness, unease or guilt. Microexpressions are also recognisable as expressing emotions. Note that all of this is often a weakness in humans, with a significant difference between men and women, and also in those with autism. Machine emotional recognition is well on its way to being better than most humans and will most likely surpass our abilities.
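As a hedged sketch of how facial scanning might be wired up, here is an image-classification call using the Hugging Face transformers pipeline; the model id and image filename are placeholders, not real artefacts.

```python
from transformers import pipeline

# "some-org/facial-emotion-model" is a placeholder id, not a real checkpoint;
# swap in any facial-emotion classifier from the Hugging Face hub.
classifier = pipeline("image-classification", model="some-org/facial-emotion-model")

# Classify a single frame (local image path) into emotion labels with scores.
results = classifier("student_webcam_frame.jpg")
for r in results:
    print(f"{r['label']}: {r['score']:.2f}")
```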

 

Vocal tone and volume are also significant: intonation, pitch, raised volume when aroused or angry; a quiet or softer tone when sad or reflective; upbeat when happy. Body language is another signal, clearly readable by scanning for folded arms and movements showing unease, disinterest or anger.

 

Even at the level of text, one can use sentiment analysis to spot a range of emotions, as emotions are encoded in language. LLMs show this quite dramatically. This can be used to semantically interpret text that reveals a whole range of emotions. It can be used over time, for example, to spot failing students who show negativity in a course. It can be used at an individual level or to provide insights into social media monitoring, public opinion, customer feedback, brand perception and other areas where understanding sentiment is valuable. As it improves with LLMs, it is getting better at this, though it may still struggle with sarcasm, irony and complex language use. A minimal example of the idea follows.
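Here is a minimal sketch of text sentiment analysis using the Hugging Face transformers pipeline; the default model is a simple positive/negative classifier, so treat it as a rough proxy for emotion rather than a full detector, and the forum posts are invented.

```python
from transformers import pipeline

# Default sentiment model is binary positive/negative: a rough proxy for
# emotion in text, useful for spotting sustained negativity over time.
classifier = pipeline("sentiment-analysis")

forum_posts = [
    "I'm really enjoying this module, the examples are great.",
    "I don't understand anything and the deadline is tomorrow.",
]

for post, result in zip(forum_posts, classifier(forum_posts)):
    print(f"{result['label']} ({result['score']:.2f}): {post}")
```

Run weekly over a course forum, even something this crude could flag the students whose posts trend negative, which is the 'failing students' use mentioned above.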

 

AI already understands music in some sense, even its emotional intent and effect. Spotify already classifies music using these criteria, using AI. This is not to say it feels emotion.

 

Even at the level of recognition, it could very well be that machines help humans control and modulate bad emotions. I’m sure that feedback loops can calm people down and encourage emotional intelligence. The fact that machines can read stimuli and respond faster than us may mean they become better at empathy than we could ever be. Recognising emotion will allow AI to respond appropriately to our needs and should not be dismissed. It can be used as a means to many ends, from education to mental healthcare. Chatbots are already being used to deliver CBT therapy.

 

Display of emotion

Emotions can be displayed without being felt. Actors can do this, written words in a novel can do this, and both can elicit strong human emotions. Coaches do this frequently. Machines can also do this. That has been clear since the earliest chatbots, such as ELIZA. Nass & Reeves showed, across 35 studies in The Media Equation, that this reading of human qualities and emotions into machines is common.
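A toy ELIZA-style exchange makes the point: a few pattern-matching rules are enough to display empathy without anything being felt, which is all ELIZA ever did:

```python
# Toy ELIZA-style responder: regex rules that *display* empathy
# by reflecting the user's words back, with no internal state at all.
import re

RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r".*", "Tell me more."),  # catch-all keeps the conversation going
]

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = re.match(pattern, utterance.lower())
        if match:
            return template.format(*match.groups())

print(respond("I feel anxious about my exams"))
# -> Why do you feel anxious about my exams?
```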


As Panksepp repeatedly says, we have a tendency to think of emotions as human and therefore ‘good’. Their evolutionary development means they are there for different reasons than we think, which is why they often overwhelm us or have dangerous as well as beneficial consequences. Most crime is driven by emotional impulses such as unpredictable anger, especially violent and sexual crime. This would lead us to conclude that the display of positive emotions should be encouraged and bad ones designed out of the system. There are already efforts to build fairness, kindness, altruism and mercy into systems. It is not just a matter of having a full set of emotions, more a matter of what emotions we want these systems to display or have.

 

Feeling emotions

This would require AI to be fully embedded in a physical nervous system that can feel, in the sense that we feel emotions in the brain. It also seems to require consciousness of the feelings themselves. We could dismiss this as impossible but there are halfway houses here, and there is another possibility. Geoffrey Hinton has posited the ‘mortal computer’, and hybrid computer-brain interfaces could very well blur this distinction, integrating thought with human emotions in ways not yet experienced, even subconsciously. But we may not need to go this far.

 

Are emotions necessary in teaching?

I have always been struck by Donald Norman’s argument: “Empathy… sounds wonderful but the search for empathy is simply misled.” He argued that this call for empathy in design is wrong-headed and that “the concept is impossible, and even if possible, wrong”. There is no way you can put yourself into the heads of the hundreds, thousands, even hundreds of thousands of learners. Not only is it impossible to understand individuals in this way, it is just not that useful. It is not empathy but data you need. Who are these people, what do they need to actually do and how can we help them? As people they will be hugely variable, but what they need to know and do, in order to achieve a goal, is relatively stable. This has little to do with empathy and a lot to do with understanding and reason.

 

Sure, the emotional side of learning is important and people like Norman have written and researched the subject extensively. Positive emotions help people learn (Um et al., 2012). Even negative emotions (D’Mello et al., 2014) can help people learn, stimulating attention and motivation, including mild stress (Vogel and Schwabe, 2016). We also know that emotions induce attention (Vuilleumier, 2005) and motivation that can be described as curiosity, where the novel or surprising stimulates active interest (Oudeyer et al., 2016). In short, emotional events are remembered longer, more clearly and more accurately than neutral events.

 

All too often we latch on to a noun in the learning world without thinking much about what it actually means or what experts in the field say about it, and bandy it about as though it were a certain truth. Trying to induce emotion in the teaching and design process may not be that relevant, or only relevant to the degree that mimicking emotion is enough. AI can be designed to nudge the learner towards positive emotions and away from the emotions, identified by Panksepp and others, that harm learning, such as fear, anxiety and anger. We are in such a rush to include ‘emotion’ in design that we confuse emotion in the learning process with emotion in the teacher and designer. It also seems like lazy signalling, a substitute for doing the hard analysis up front, defaulting to the loose language of concern and sympathy.

 

Conclusion

In discussing emotions we tend to think of them as a uniquely human phenomenon. They are not. Animals clearly have emotions. This is not a case for human exceptionalism. In other words, beings with less complexity than us can feel. At what point, then, can a bottom-up process create machines that can feel? We seem to be getting there, having come quite far by reaching ‘recognition’ and ‘display’.

 

If developments in AI have taught us one thing, it is to never say never. Exponential advances are now being made and this will continue, with some of the largest companies making huge investments, along with a significant shift in research and government intentions. We already have the recognition and display of emotions. The feeling of emotions may be far off, but it is unnecessary for many tasks, even teaching and learning.

 


In medicine, GPT-4 is already helping with empathy; patients can benefit from a machine that is both knowledgeable and empathetic. We see this already in healthcare in the Ayers (2023) research, where evaluators rated the chatbot responses significantly higher for both quality and empathy 79% of the time. That’s before the obvious benefits of being available 24/7, faster results, increased availability of healthcare in rural areas, access for the poor and a decreased workload for healthcare systems. It empowers the patient. For more on this area of AI helping patients with empathy, listen to Peter Lee’s excellent podcast here. He shows that even pseudo-empathy can run deep and be used in many interactions with teachers, doctors, in retail and so on.
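In practice this pseudo-empathy is largely produced by instruction: the empathy lives in the prompt, not in any felt state. A sketch, assuming the OpenAI Python client, with an illustrative system prompt (not the setup used in the Ayers study):

```python
# Elicit an empathetic register from a general-purpose model via the
# system prompt alone - the 'display' of emotion without the feeling.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are a patient, empathetic health assistant. "
                    "Acknowledge the patient's feelings before answering, "
                    "and never give a diagnosis."},
        {"role": "user",
         "content": "I've been getting headaches every day and I'm scared."},
    ],
)
print(response.choices[0].message.content)
```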

This is why I think the Universal Teacher and Universal Doctor are now on the horizon.

 

Bibliography

Ayers, J.W. et al., 2023. Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum. JAMA Internal Medicine, 183(6), pp.589-596.

Norman, D.A., 2004. Emotional design: Why we love (or hate) everyday things. Basic Civitas Books.

Norman, D., 2019. Why I Don't Believe in Empathic Design.

Um, E., Plass, J.L., Hayward, E.O. and Homer, B.D., 2012. Emotional design in multimedia learning. Journal of Educational Psychology, 104(2), p.485.

D’Mello, S., Lehman, B., Pekrun, R. and Graesser, A., 2014. Confusion can be beneficial for learning. Learning and Instruction, 29, pp.153-170.

Vogel, S. and Schwabe, L., 2016. Learning and memory under stress: implications for the classroom. npj Science of Learning, 1(1), pp.1-10.

Vuilleumier, P., 2005. How brains beware: neural mechanisms of emotional attention. Trends in Cognitive Sciences, 9(12), pp.585-594.

Oudeyer, P.Y., Gottlieb, J. and Lopes, M., 2016. Intrinsic motivation, curiosity, and learning: Theory and applications in educational technologies. Progress in Brain Research, 229, pp.257-284.

https://greatmindsonlearning.libsyn.com/affective-learning-with-donald-clark