Saturday, December 16, 2023

Babbage - genius but of little causal significance in history of computers

Charles Babbage (1791 - 1871) was a colourful British mathematician, inventor, and mechanical engineer. He made significant contributions to the field of computing through his pioneering work on the design of mechanical computers. 

A mathematician of great stature, he held the Lucasian Chair at Cambridge once held by Newton and received substantial Government funds to build a calculating Difference Engine, funds he used to go further and develop an Analytical Engine, more a computer than a calculator.


Machine to mind

For the first time we see, albeit in a still-mechanical product of the Industrial Revolution, the move from machine to mind. Babbage saw that human calculations were often full of errors and speculated whether steam could be used to do such calculations. This led to his lifetime focus on building such a machine.

Babbage had been given government money to conceive and develop a mechanical computing device, the Difference Engine. He designed it in the early 1820s and it was meant to automate the calculation of mathematical tables; it was basically a sophisticated calculator that used repeated addition. He did, in fact, go on to design a superior Analytical Engine, a far more complex machine that shifted its functionality from calculation to computation. Conceived by him in 1834, it was the first programmable, general-purpose computational engine and embodies almost all the logical features of a modern computer. Although a mechanical computer, it features an arithmetic logic unit, control flow with conditional branching, and memory, what he called the ‘store’. We should note that it is decimal, not binary, but it could automatically execute computations.
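The ‘repeated addition’ at the heart of the Difference Engine is the method of finite differences: for a polynomial, the n-th differences are constant, so once the first row of the table is set up, every further value needs only addition. A minimal modern sketch (an illustration of the principle, not Babbage's mechanism):

```python
def tabulate(initial, steps):
    """Tabulate a polynomial using only addition, as the Difference
    Engine did.

    `initial` holds the first value of the function followed by its
    first, second, ... differences; for a degree-n polynomial the
    n-th difference is constant, so the engine never multiplies.
    """
    diffs = list(initial)
    table = []
    for _ in range(steps):
        table.append(diffs[0])
        # Add each difference into the one above it: pure repeated addition.
        for i in range(len(diffs) - 1):
            diffs[i] += diffs[i + 1]
    return table

# f(x) = x^2 for x = 0, 1, 2, ...: value 0, first difference 1,
# constant second difference 2.
print(tabulate([0, 1, 2], 6))  # [0, 1, 4, 9, 16, 25]
```

Set up once with the right starting differences, a column of adding wheels can crank out an entire mathematical table, which is exactly what the engine was funded to do.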

Although this is an astonishing achievement, neither machine was fully built during his lifetime; nevertheless, Babbage's designs laid the groundwork for future developments in computing. His son Henry Babbage did build a part of the Difference Engine as a trial piece, completed before Babbage's death in 1871. A full Difference Engine was built in 1991 from materials that were available in Babbage's time and to tolerances achievable then. It weighs in at 5 tons, with 8,000 moving parts. Both can be seen in the Science Museum in London.


Babbage was a prickly character who alienated many, especially in the Government that generously funded his work. In a tale that has become common in UK computing, his theory was never turned into practice; those developments eventually came from the US. In the end he failed, but from the drawings alone Lady Byron called it a ‘thinking machine’, and her daughter Ada Lovelace asked for the ‘blueprints’ and became fascinated by the design and its potential.


Although he is described as the ‘father of computers’, there is no direct, causal influence between Babbage and the development of the modern computer. Babbage’s designs were not studied until the 1970s, so the modern computer cannot be a direct descendant of those designs. It was the pioneers of electronic computers in the 1940s who were the true progenitors of modern computers. There is a much stronger case for the idea that it was Hollerith and his census machines that had the real causal effect.


Swade, D., 2001. The Difference Engine: Charles Babbage and the Quest to Build the First Computer. Viking Penguin.

Hyman, A., 1985. Charles Babbage: Pioneer of the computer. Princeton University Press.

In Our Time, Ada Lovelace, featuring Patricia Fara, Senior Tutor at Clare College, Cambridge; Doron Swade, Visiting Professor in the History of Computing at Portsmouth University; John Fuegi, Visiting Professor in Biography at Kingston University.

Ada Lovelace - insightful but full of surprises...

Ada Lovelace (1815 - 1852) died at the age of 36 but had significant insights into computer science. She was the daughter of the poet Lord Byron and Lady Byron, but her parents parted only weeks after her birth. Her mother was interested in mathematics as well as social movements, established a series of schools and helped establish the University of London. It was she who ensured that Ada got a good, disciplined education in both science and mathematics.

After both attended a Babbage soiree in London, where her mother described Babbage’s engine as a ‘thinking machine’, they went on a tour round the Midlands, where they saw the Jacquard Loom. This was to inspire a series of thoughts, in the form of notes from Ada, on the potential of the Analytical Engine that Babbage had designed.

The mathematician Hannah Fry describes Ada as intelligent but also “manipulative and aggressive, a drug addict, a gambler and an adulteress!”

Analytical engine

Ada then collaborated closely with the mathematician and inventor Charles Babbage, who invented what some regard as the first modern computer: his Analytical Engine. This resulted in her translation, from French to English, of an article about Babbage's Analytical Engine written by the Italian mathematician Luigi Federico Menabrea (a future Prime Minister of Italy), to which she added extensive notes and annotations. These notes were three times as long as the original essay, were published in Scientific Memoirs Selected from the Transactions of Foreign Academies of Science and Learned Societies in 1843, and contained some seminal ideas on computing.


In these notes she described the potential for machines to perform operations beyond simple arithmetic calculations. In one of her notes she described an algorithm for calculating Bernoulli numbers, which is considered by some to be the world's first computer program, although recent scholarship has cast doubt on this. It is a detailed, tabulated set of sequential instructions that could be input into the Analytical Engine. This demonstrated her understanding of how machines could be programmed to perform various tasks, a fundamental concept in computer science and AI.
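To give a flavour of what that note computes: Bernoulli numbers can be generated one at a time from a fixed recurrence. The sketch below is a modern restatement of the mathematics, not Lovelace's tabulated instructions, which were laid out as a step-by-step table of operations for the engine:

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Return the Bernoulli numbers B_0..B_n as exact fractions,
    using the standard recurrence sum_{k=0}^{m} C(m+1, k) B_k = 0."""
    B = [Fraction(1)]  # B_0 = 1
    for m in range(1, n + 1):
        # Solve the recurrence for B_m given B_0..B_{m-1}.
        s = sum(comb(m + 1, k) * B[k] for k in range(m))
        B.append(-s / (m + 1))
    return B

print(bernoulli(6))  # B_0..B_6 = 1, -1/2, 1/6, 0, -1/30, 0, 1/42
```

Each pass through the loop uses only the results of earlier passes, which is why the calculation suits a sequential, programmable machine rather than a fixed calculator.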

Insightful though her notes were, she was not a top-flight mathematician and the supposed computer programme was really a sort of pseudocode with mathematical expressions. The claim that she wrote the first computer programme is regarded by some as exaggerated, and it was never executed on any machine as an actual programme. As the Babbage scholar Doron Swade, who led the building of Babbage's Difference Engine, claims, the concept and principle of a computer programme for this machine was actually Babbage's idea, as his notes of 1836/37 predate those of Lovelace, although her insights on computation beyond mathematics were absolutely original.

From calculation to computation

The notes contained the idea that instructions (programs) could be given to these machines to perform a wide range of tasks, making her a pioneer in the ‘concept’ of computer programming. Accomplished in embroidery, she described the possibility of input through punched cards, similar to the method used on the Jacquard loom. This loom was invented by Joseph-Marie Jacquard in the early 19th century and revolutionised the textile industry by allowing the automated production of intricate patterns in fabrics. Punched cards were used for patterns, each hole being an on/off switch, one card per line in a sequence of cards, a technique later used on mainframe computers in the 20th century.

Babbage saw his machines as dealing with numbers only. Lovelace saw that such machines could be seen as doing not just calculation but computation. Numbers can represent other things, representations of the world, and she speculated that computers could be used to create outputs other than mathematics, such as language, music and design. She understood that machines could be programmed to generate creative works. This anticipation of the creative potential of machines aligns with the field of generative AI, which focuses on developing algorithms that can produce creative content such as music, art and literature. This was her main insight, although there is no direct causal influence between her work and these developments.

Education and learning

Her views on education aligned with her own experiences and those of her mother Lady Byron. She received an extensive education in mathematics and science, which enabled her to mix with other intellectuals and practitioners in the field and make ground-breaking contributions to computing. She was an advocate for the intellectual and educational development of women and believed in providing women with opportunities for education in mathematics and the sciences, which was uncommon at that time. Lovelace's passion for learning and her advocacy for education for all, regardless of gender, continue to inspire educators and learners today.



Her role in inventing either the idea of computer programming or the first computer programme seems to have been quashed. The said programme was, of course, never used in the Analytical Engine, as it was never turned into actual code and the Analytical Engine was never built.

There was no real causal influence here on modern computing, no real continuity between Lovelace and modern computing. This is a legend constructed after the fact rather than a matter of history. Turing read her notes and admired her insights, and although one can argue that her influence came through Turing, who was at Bletchley Park, and that she influenced the Colossus machine, which decoded German ciphers, the causal link is tenuous and unproven. There is no direct, causal trail to modern computers, even through Babbage, as Babbage’s designs were not studied until the 1970s, so the modern computer is not a direct descendant of Babbage’s ideas. It was the pioneers of electronic computers in the 1940s who were the true progenitors of modern computers.


Ultimately, says Hannah Fry, her contribution was in seeing that computation was more than calculation, yet “Her work… had no tangible impact on the world whatsoever.” Nevertheless, Lovelace's passion for learning and her advocacy for education for all, regardless of gender, continue to inspire educators and learners today. The Ada Lovelace Institute in the UK is a good example of this legacy and a dozen biographies were published on the 200th anniversary of her birth in 2015.



Hollings, C., Martin, U. and Rice, A.C., 2018. Ada Lovelace: The Making of a Computer Scientist. Oxford: Bodleian Library.

In Our Time, Ada Lovelace, featuring Patricia Fara, Senior Tutor at Clare College, Cambridge; Doron Swade, Visiting Professor in the History of Computing at Portsmouth University; John Fuegi, Visiting Professor in Biography at Kingston University.

Hannah Fry

Friday, December 15, 2023

Google has several cards up their sleeves...

What can we expect from Google’s Gemini?

Google have several cards up their sleeve on Generative AI:

  1. They invented it!
  2. Awesome AI resources
  3. Google Search
  4. Google Scholar
  5. Google books
  6. YouTube
  7. Google Translate
  8. Google Maps
  9. Global reach, data centres and delivery
  10. Above all they have DeepMind


Gemini is their big response to OpenAI: a family of foundation models at three sizes:

Ultra – professional version of Bard
Pro – through Bard and Google products
Nano – on device, mobile version


An important word here is ‘multimodal’, as their model has been trained on and is capable of processing most media types, including text, images, video, audio and code. So far Generative AI has largely been a 'calculator for words'... but this lifts it into another realm. It is multimodal from the ground up (input not necessarily output). 

However, one must be careful in making the assumption that this moves it more towards how humans think. Our multimodality is quite different, especially the ways we deal with sight and sound. There is little crossover.

They make claims about its “sophisticated reasoning” and its ability to process complex information in different formats, as well as the analysis and extraction of data from research articles, being able to really distinguish the relevant from the irrelevant. On benchmarking, it has already outperformed human specialists on MMLU (Massive Multitask Language Understanding, covering topics like maths, history, law and ethics), scoring 90.0% against GPT-4's 86.4%.


This is all very exciting, as it shows how competition is accelerating progress. Google has taken a cautious approach by rolling out these products across next year. Despite their ability to read multimedia, the initial Gemini models will not – at least initially – produce images or videos.

Unlike OpenAI, they have an advertising business to protect and need to be careful. They know that the writing is on the generative wall for search. As usual, they are going for an integration-into-products approach, a rising tide rather than a single flood. That seems wise. After a tumultuous year, we are entering a more stable product-development phase. Google will not see any Board bun-fights, although there will be some surprises in store for sure.

Google have a universal, global mission, to harness our cultural information and make it available to all. Year one was revolution, year two will be evolution.

Watch video here.

Wednesday, December 13, 2023

Chatty robots are here...

A number of domestic and humanoid robot companies have been funded; some have already failed, but two stand out as being well funded because they have shown remarkable progress. If any are successful, and success means different things in different markets, there will be huge demand. Domestic robots are one thing, but functional working robots are quite another. Increases in productivity, along with zero salary costs, would create a radical shift in real-world, vocational jobs.

There have been ridiculous pretenders like Sophia, basically chatbots in cheap bits of plastic, but two efforts stand out: Figure and Tesla's Optimus.


Figure have been first to bring a general-purpose robot to life, with a superb robot integrated with OpenAI's ChatGPT. The dialogue is fascinating, with visual recognition of people and objects, interpretation of incoming speech, completion of tasks and what appear to be explanations of its own behaviour. There is a little latency, suggesting this will be a problem, but that hardly matters at this stage. One can imagine open-source, local models, trained inside the robot (models are not that big), with developing functionality around dialogue.

It is dialogic in the full sense of the word, using dialogue in language as well as recognition-and-action dialogue with the real world. Note that there is a limited set of real-world actions, limited by what the robot can and should do. Problems are broken down and turned into specific plans within certain constraints.

The demo is at normal speed and uses AI, namely an end-to-end neural network. We now see the integration of image recognition (seeing) and sound recognition through microphones with speech, interpretation, planning and then action. It uses a pre-trained multimodal model (we don't know which; probably better than GPT-4, and one specific to robotics) with language recognition and generation, along with actions. These robots are likely to use specific multimodal models.

Image recognition here is both recognition and reasoning. It also sounds like a human, as the speech-generation software handles this. Then it moves and performs actions: it keeps balanced, makes smooth movements, grasps, moves and places objects as a stand-alone robot.

Very impressive.


Musk has stated that his robot business could outperform his car business. Potentially, he is right. Robots change the nature not just of driving but of work itself, initially unsafe or mundane tasks. This would be an enormous shock to the economy, as 24-hour labour becomes cheap, plentiful and scalable.

He has told employees that Tesla could become a $10 trillion company on the back of this. The robot is the same size as a human, so it fits into existing work contexts as we do.

It is still 2023 and progress by Tesla is phenomenal. Neck movements really bring it to life; tactile sensing on the fingers can deal with an egg. The hands are where the action is; it is a waste of time walking and running if you can't do things. Five fingers, all the same length, make manufacturing and maintenance easier. The dexterity is amazing. Within a year, Musk tweeted, it will thread a needle.

There is nothing in the head, as it can all be in the body. The head is just a camera and possibly a display screen. The point is not to wholly mimic a human; airplanes do not wholly mimic birds, and the design is driven by needs.

The fact that it can walk shows they're after any domain where humans work and operate. We'll see fixed robots (factories), wheeled robots (limited area) and legged robots, I'm sure. Walking has toe articulation, a real advance. Looks slow but speed may not be the key issue. Most people are fairly stationary most of the time in life. It can squat and dance. Injection moulded body parts reduce weight and costs.

It's all in the actuators. An actuator is a device that converts energy (typically electrical, hydraulic or pneumatic) into mechanical motion. The trick is to make these of high enough quality and easy to manufacture.

It will, of course talk and listen. You will be able to tell it what to do and it will have Chatbot functionality. 

I can see these being your butler, something you can chat to, a companion. But there are thousands of potential real-world applications in the workplace. Think care homes, hospitals, retail.

Go further and think Teachers and Doctors?

Release date 3-5 years?

‘Engines of Engagement’ is a curious (authors’ description) book about Generative AI.

Julian Stodd is the progenitor of this project. If you know Julian, as I do, you’ll know this will be an interesting read. Together with his co-authors Sae Schatz and Geoff Stead, he has come up with something wonderful.

The book is fluid, a bit like a large language model. It has avoided the fixity of most books on the subject and is honest about the ambiguity of Generative AI. People find it difficult to think probabilistically, yet that is what one has to do with Generative AI. We demand certainty, right answers and truth-machines. Julian and his co-authors have admirably avoided this trap. Our brains are intrinsically probabilistic, as are these Generative AI tools, so the subject demands a different mindset, one that is more open and doesn’t fall into the trap of using pejorative language such as ‘hallucinations’. It has this great line, “AI isn’t perfect – because neither are we”, and it has more questions than answers, playful rather than dogmatic.

They have also avoided the endless moralising we hear from academics and quango folk with a lot of time on their hands, accompanied by little knowledge of AI or ethics, riding into the discussion on their moral high horses. They pose questions and recognise the complexity of the situation. It’s also relatively short, a blessing in this age of verbosity. So I’ll end my review here.

Lastly, a confession. I have a small piece in the book on ‘Human Exceptionalism’. Julian was open enough, as always, to take a risk on a piece that tries to demolish the idea that we are special and exceptional. Copernicus and Darwin put that to rest, yet we still hang on to our 21st century skills, whatever…. 

Well done to these three and See Saw Publishing. 

EU legislation is a mess... even Macron thinks it has gone way too far

The EU have stated their intention to implement some pretty stringent laws in their AI Act. Even Macron has warned that EU AI legislation will hamper European tech companies compared to rivals in the US, UK and China. Mistral is a very successful French company that may be hit by the AI Act's attack on foundation models.

It will, of course, take forever; there is a two-year process, with three different drafts from three different institutions – Parliament, Commission and Council. It is all a bit of a technocratic mess, with lots of potential bun fights.

Here are the problems:

A ban on emotion recognition may well hamper useful work in autism and mental health. In truth there is wiggle room here, as it is absent from two of the drafts.

A ban on facial recognition will face fierce resistance from police and intelligence agencies, on cases as wide as deepfake recognition, age recognition in child protection, accessibility features, safety applications, terrorism, trafficking and child porn. Expect a lot of back-tracking and exceptions here.

The ban on social scoring sounds good, but you are being assessed all the time for credit, loans, insurance and mortgages. In truth, Chinese-style social scoring does not exist in the EU, so this is an unnecessary law tackling a non-existent problem.

The big one is the move on foundation models and their training data. This will be fiercely fought, hence Macron’s early move. Attacking foundation models is like attacking the wind, not the boat and its destination. Others, such as the US and UK, have urged a more application-driven, sector-driven approach that focuses on uses, not the underlying models. France, Germany and Italy have fought this, so I’d expect backtracking.

The Act has several other shortcomings in terms of definition and implementation. It sees AI as a fixed entity. AI is intrinsically dynamic, as it learns and morphs as it operates. This makes it a moving target in terms of the object of legislation. On top of this AI is an integrated set of services, networks and organisations with various data sources. If I am a small AI company I am likely to be using various foundational models and delivery services from OpenAI, Microsoft, Facebook and Amazon, all as cloud services. One general-purpose system may be used for several separate services. It is difficult to see how responsibility can be assigned.

While the rest of the world gets on and uses this technology for astounding benefits in learning and healthcare, the EU seems determined to become a mere regulator. Not just a regulator, but one that loves complex, red-tape-driven solutions. Billions, if not trillions, in terms of productivity growth and innovation may be at stake here.

My guess is that the Act will face challenges in reaching consensus among member states, given the diverse interests and viewpoints on AI regulation. There will also be difficulties in implementing the guidelines and regulations proposed in the AI Act, due to technological complexities or resistance from AI stakeholders. The Act will therefore need revisions or updates to address emerging AI technologies and scenarios that were not initially considered.


The final problem with EU law is its fixity. Once mixed and set, it is like concrete – hard and unchangeable. Unlike common law, such as exists in England, US, Canada and Australia, where things are less codified and easier to adapt to new circumstances, EU Roman Law is top down and requires new changes in the law through legislative acts. If mistakes are made, you can’t fix them.


With only 5.8% of the world’s population, the EU is under the illusion that it speaks for the whole world. It does not.


When the Pope called for a global treaty on AI, we surely reached Peak BS on AI and ethics. The Catholic Church put Galileo under house arrest for the rest of his life and burnt Bruno at the stake for scientific heresy. Their list of banned books included Galileo, Descartes, Voltaire, Sartre and Kant until as late as 1966!


Sunday, November 26, 2023

Online Educa Berlin 2023 - Fun three days? Damn right it was!

OEB in Berlin was intense this year, with a three-hour workshop in a packed room, the Big Debate and a live podcast with my friend John Helmer. I was disappointed that the Christmas markets were not yet open, but there was plenty to get one’s teeth into besides a Bratwurst.

As is often the case, I found the keynotes a bit odd. An American jumped around like a firecracker, confusing shouting with substance, followed by a soporific speaker recommending a return to pencils. She was selling 'critical thinking' but seemed to have little of it. Truly mind-numbing: she read every single word from a piece of paper. Then Luciano Floridi. I was looking forward to this, having read his work, but what a strange talk. He presented himself as a philosopher… I’m not so sure. He’s actually more of a psychologist who has made a name for himself as a philosopher of the ‘digital’; I can’t remember seeing a philosopher of the ‘pencil’. He claimed you could split the whole of philosophy, nay human affairs, into “just two things” – the Socratic and the Hobbesian view: people are either stupid or evil. I have been reading philosophy all of my life and have never come across a more idiotic and simplistic summary of either philosophy or human nature. He then massacred Wittgenstein. It is clear he was playing to the crowd.


Undeterred, I had some great conversations with people who actually know what they are talking about. That’s what makes conferences so odd – the showbiz versus the substance. Substance is to be found in the casual encounters with new people, the smaller rooms, the bar, over a coffee. Lovely to see Gabor’s AI project progress from just an idea last year; our Norwegian friends seem to have cracked the HE assessment issue; Glean are progressing with AI; and there were some real projects on AI that were lifting us out of the old paradigm.


So what did I learn?


People are still stuck in the LMS/ cartoon content/video world

The exhibition seemed stuck in that world, a bit dead and often empty

Conversations were full of new ideas and ambition

AI is here and here to stay, and lots of real projects were shown

Superficial AI moralising was noticeably absent

AI is a technology way beyond what we’ve seen before

HE is in a panic over AI

L&D once again thinks it is about to be taken seriously at Board level – it is not

‘Critical thinking’ has become the predictable phrase people use when they don’t know what else to say – it’s now my litmus test for people who are walking away…


After three days of intense discussion people were ready to let rip in the Friday night and we did – a big meal, followed by a party. As usual, it was one of the best conferences of the year. It’s about the people, many who were there last year and the year before. It’s not in some hideous Casino in Las Vegas, or worse a Disney venue in Florida, or some god-awful, anonymous conference centre. Fun three days? Damn right it was.

Saturday, November 11, 2023

EU AI legislation is firming up and I fear the worst. Danger of overregulation...

There are some good structural aspects of the legislation, in terms of classifying the size of organisations to avoid penalising small innovative firms and research, as well as classifying risks, but as the Irishman said when asked for directions... "Well sir... I wouldn't have started from here".


BANNED        – social scoring, face recognition, dark patterns, manipulation

High risk         – recruitment, employment, justice, law, immigration

Limited risk     – chatbots, deep fakes, emotion recognition

Minimal risks  – spam filters, computer games


But, as usual, the danger is in over reach. That can take place in several ways:



Like an oversized fishing net, the ban on biometric data may have the unintended consequences of banning useful applications such as deepfake detection, age recognition in child protection, detecting children in child porn, accessibility features, safety applications, healthcare applications and so on. Innovation often produces benefits that were unforeseen at the start. We have already seen a Cambrian explosion of innovation from this technology, category bans are therefore unwise. I fear it is too late on this one.


Core tech not applications

Rather than focus on actual applications, they have an eye on general-purpose AI and foundation models. This is playing with fire. By looking for deeper universal targets they may make the same mistake as they did over consent, losing millions of hours of productivity as we all deal with ‘manage cookies’ pop-ups and consent forms that no one ever reads. An attack on, say, foundational ‘open source’ systems would be a disaster, yet it is hard to see how open source could be allowed. There is an ill-defined concept of a 'frontier model' in the proposed legislation that could wipe out innovation in one blow. No one knows what it means.



It hauls in the Digital Services Act (DSA), the Digital Markets Act (DMA), the General Data Protection Regulation (GDPR), the new regulation on political advertising and the Platform Work Directive (PWD) – are you breathless reading that sentence? It could become an octopus with tentacles that reach into all sorts of areas, making it impossible to interpret and police.



There are signs of the EU jumping the gun here, and not in a good way. There is always the danger of some publishers (we know who you are) lobbying for restrictions on data for training models. This would put the EU at a huge disadvantage. To be honest, the EU is so far behind the curve now on foundational AI that it will make little difference, but a move like this would condemn the EU to looking on at a two-horse race, with China and the US racing round the track while the EU remains locked in its own stalls.



One problem with EU law is its fixity. Once mixed and set, it is like concrete – hard and unchangeable. Unlike common law, such as exists in England, US, Canada and Australia, where things are less codified and easier to adapt to new circumstances, EU Roman Law is top down and requires new changes in the law through legislative acts. If mistakes are made, you can’t fix them.



With only 5.8% of the world’s population, the EU is under the illusion that it speaks for the whole world. Having been round the world this year, across several continents, I have news for them – it doesn’t. Certainly not for the US, and as for China, not a chance. It doesn’t even speak for Europe, as several countries are not in the EU. To be fair, one shouldn’t design laws that pander to other states, but in this case Europe is so far behind that this may be the final n-AI-l in the coffin. Some think millions, if not trillions, of dollars are at stake in lost innovation and productivity gains. I hope not.


We had a taste of all this when Italy banned ChatGPT; they relented when they saw the consequences. I hope the EU applies Occam’s razor – the minimum number of entities to reach their stated goals – but unfortunately they have form here for overregulating.

Sunday, November 05, 2023

Bakhtin (1895-1975) dialogics, learning and AI

Mikhail Bakhtin, the Russian philosopher and literary critic, developed a theory of language which saw dialogue as primary, an idea he applied to learning. By dialogue he meant social interaction between people – teachers and learners, learners and learners, learners and others – but also dialogue with the past. He was persecuted during the Soviet era but his work was rediscovered in the 1980s, and he has since become an important theorist in the philosophy of language and even in learning through AI.


Bakhtin criticised the 'monologic' tradition in Western thought, where individuals are defined by religion, a concept of the finite, the soul, and religious and establishment belief. Individuals, for him, are open and must engage in dialogue with the world and others. Dialogism is his foundational idea that all language and thought are inherently dialogic. We learn language and come to understand the world through the practice of dialogue. Learning means dialogue in many forms, with different people, tools and perspectives, in many different contexts. The idea was introduced in his Problems of Dostoevsky's Poetics (1929). This dialogue stands in stark contrast to the teacher norm of direct instruction or monologue. Learning emerges from dialogue, external and internal.

In his incomplete essay Toward a Philosophy of the Act (1986), written in the 1920s, he outlines a theory of human identity or mind based on dialogic development. There are three forms of identity:

I-for-Myself

I-for-the-Other

Other-for-Me

The I-for-Myself is untrustworthy. The ‘I-for-the-Other’, on the other hand, allows one to develop one’s identity as a set of perspectives others have about you. The ‘Other-for-Me’ is the incorporation by others of their perception of you into their own identities. This is an expansive and interesting form of identity in an age of dialogue with others through social media and messaging technology, as well as dialogue with technology, which now plays a similar role.

Dialogism manifests itself in ‘heteroglossia’, exposure to a multiplicity of voices and perspectives in a language. These can be parents, teachers, friends and those on social media: a heterogeneous group of voices that one can learn from.

Authoritative vs. Internally Persuasive Discourse

Authoritative Discourse is discourse that is embedded in a culture, enshrined in tradition. It is non-negotiable and taught as truth. It may be religious belief, science, the canon, parental belief or other assumed forms of knowledge and practice. It is often enshrined in an official curriculum or syllabus, which learners must memorise and regurgitate in exams.

Internally Persuasive Discourse is more personal, related to the learner’s experiences and views. The learner has to engage in dialogue with the established views to relate it to their own prior knowledge, adapt to it, assimilate it and create their own sense of meaning.

Although we learn through both these forms of discourse, learning is to move from the authoritative to the internally persuasive to find our own deeper forms of meaning.


Bakhtin also wrote about the concept of the "carnivalesque" in literature, which subverts and liberates the assumptions of the dominant style or atmosphere through humour and chaos. In education, this can mean pedagogical strategies that disrupt traditional hierarchies and empower students to challenge and question.

Dialogism and AI

AI technology has now produced dialogic systems that may also meet Bakhtin’s criteria. We can now communicate, in dialogue form, with a new form of ‘other’ that lifts us out of our I-for-Myself, allowing technology to play a role in I-for-the-Other and Other-for-Me.

Interestingly, traditional educators demand that the technology be a truth machine, when language is no such thing. Language is multi-perspectival, at times messy, even carnivalesque. It points not just towards learning as dialogue but learning emerging from dialogue, in many different forms. LLMs and chatbots seem to be delivering this new form of learning.

Bakhtin would have loved the carnivalesque uses, such as asking it to be a pirate, or the odd poems. I rather like the fact that Musk's Grok chatbot is quite snarky! AI satisfies both authoritative and personal learning, and language allows for both. This is why I think Generative AI will have a profound effect on learning through dialogue, which is how most good teaching is done, both direct instruction and looser learner-centric dialogue.


He recognised the multifarious and messy nature of human communication, and therefore learning. For this he should be applauded, in a field where theory is so often formulated in rigid and simplistic models. Bakhtin’s work has influenced educational theorists who view learning as a social, dialogical process and who advocate for more participatory, student-centred approaches. Educators who draw on Bakhtin’s ideas might focus on the role of discussion, debate, and dialogue in learning, and they might seek to create learning environments in which students’ voices are heard and valued as part of the collective construction of knowledge.


Bakhtin, M.M. (1984) Problems of Dostoevsky's Poetics. Ed. and trans. Caryl Emerson. Minneapolis: University of Minnesota Press.

Bakhtin, M.M. (1993) Toward a Philosophy of the Act. Ed. Vadim Liapunov and Michael Holquist. Trans. Vadim Liapunov. Austin: University of Texas Press.

Bakhtin, M.M. (1981) The Dialogic Imagination: Four Essays by M.M. Bakhtin. Ed. Michael Holquist. Trans. Caryl Emerson and Michael Holquist. Austin and London: University of Texas Press.




Thursday, August 17, 2023

Why ‘storytelling’ sucks

The Story Paradox, by Jonathan Gottschall, will disturb you. It is taken as a given that stories are a force for good. That is the orthodoxy, especially among the supposedly educated. But what if the truth is that they are now a force for bad? Are we being drowned in a tsunami of stories from TED Talks, YouTube, TV, movies, Netflix, Prime, Disney, social media, education and workplace learning? Could it be that they distract, distort and deny reality?

We love stories but only the stories that confirm our own convictions. We seek out those narratives that tell us what we already believe, retell and reinforce them, over and over again. And maybe the most subversive story of all, is the one we tell ourselves, over and over again, that we are right and those other storytellers and stories are wrong.


Technology has given storytelling an immense boost. Social media is often a storytelling device for amplifying stories you love and attacking those you hate. Movies are often morality tales reflecting what the audience already believes and yearns to believe, like Barbie. Box sets are consumed in series of 6-13 episodes, with subsequent series then released. Streaming services have exploded with Netflix, Prime, Disney, HBO and many others. They have huge amounts of archived content and release more than any individual could consume. We live in this blizzard of confirmative storytelling.


It is common in my world to hear the virtues of ‘storytelling’ extolled by people who want to sound virtuous. Everything’s a story, they claim, almost philosophically, except that it is bad philosophy. Bounded in the nutshell of narrative, they flounder when that narrative hits any serious philosophical challenge, such as ‘What do we know?’ or ‘What is real?’ Epistemology and ontology are hard ideas and of little interest to storytellers. What they actually do is parrot the fuzzy echoes of bad French philosophy, which sees all as narrative.


I used to think this was just the muttering of the philosophically challenged, until increasing numbers of people, nay large portions of the society I live in, started believing their own narrative about narratives. You really are your lived experience, we’re told (except for those whose stories don’t count, like the working class). Your gender is whatever narrative you choose, so here’s a pic ‘n mix of genders and dispositions. And by the way, you must also use the pronouns I use in my story or you will be exposed as denying the truth of my ‘story’. This is not just the tyranny of narrative, it is the tyranny of the author demanding that reader and speaker comply with their fictions. Of course, language, supposedly their handmaiden, doesn’t work that way. No matter how hard you try, most people don’t use the terms, as language is use, not imposed imperatives.


Even worse, children will be given stories about what choices of story they want to live in. These used to be called fairy tales, but rather than warn children of the dangers of wolves in sheep's clothing, or of being too judgemental or too trusting, they are being encouraged to see such stories as a menu of options, even if it means serious and irreversible medical intervention. Being a drag queen is no longer seen as little more than a pantomime dame, but as a lifestyle.


Suddenly, and it was sudden, we are drowning in narratives. Science and reason were consumed in the fierce bonfire of storytelling. The conceit of calling 2+2=4 a story is quite something. Of course it opened up all sorts of weird narratives on both the right and the left, although each side claims immunity, their respective narratives being true in the eyes of their respective tribes, the others’ fictional conspiracies. And there’s the rub. Madcap ideas, from there being no biological sex through to QAnon, are all permitted in the la-la land of storytellers. That’s the consequence of this belief in Santa Clauses. We get infantilised by narratives.


The world as stories, or the storied self, are inadequate theories; they are what Popper called universal theories, which you cannot deny, as your critic will just claim your critique IS storytelling. That is BS, and here’s why.


It started in the 1960s with Barthes’ ‘Introduction to the Structural Analysis of Narratives’ and has been gulped down by those who are actually too lazy to read French philosophy and prefer second-hand narratives to thinking for themselves. Bruner was the equivalent in psychology, with ‘The Narrative Construction of Reality’. To be fair, Lyotard was rightly suspicious of post-Enlightenment grand narratives (although the French bought into them, philosophy and intellectual thought being a matter of fashion). He was right on Freud and Marx. One super-narrative, used and abused by Marxism, that people can be sliced and diced into the oppressed and the oppressors, has been plucked from the literature, the dying remnant of dialectical materialism, and taken as gospel. I thought that Hitler and Goebbels on the right, and Stalin, then Pol Pot, on the left had put an end to that meta-narrative nonsense. But no, it was resurrected, wrapped up in Foucault-style obfuscation, none more absurd than Judith Butler’s queerness, the very triumph of philosophical naivety, where everything is a performative story. Facts are trampled on and trumped by narratives, giving anyone permission, Trump being a master of the art, to say what they want, even to lie. The narrative turn in society turned out to be a rebounding and dangerous boomerang. Worse still, it is an intellectual cul-de-sac.


Life is not a story. Socrates was right, the stories and narratives we tell ourselves are usually the stories and narratives others have told us, especially those of the storied self and identity politics. You can see that in people who invariably have the same religion as their parents or adopt the lifestyle of their peer groups. In a world where stories are all that matter, peer groups with consistent narratives become all powerful. It is the sign of a closed not an open mind. Wallowing in storyland is not to think for yourself but to think tribally. Stories have shifted from being humane tales we tell each other to socialise, comfort and amuse ourselves, to something sinister, something akin to weapons of abuse.


Stories often bore me, especially backstories, or elaborations in advertising or learning. Columnists with their cosy tales of comfortable conceit. Every article now begins with an anecdote and ends up as a bad sermon or parable. Politicians push the story of the day. Marketing gurus like Seth Godin banging on about telling lies, sorry stories, in marketing.


Even worse, educators see themselves as storytellers. Every training course has to be fronted or supported by the crutch of storytelling, so you get tedious narratives about how Peter, Abigail and Sophie have a unique spin on the Data Protection Act. Click on the speech bubble to see what Alisha thinks about Data Protection. Really?


Everyone has a novel in them, so the story goes, but they don’t, so that genre has descended into the self-indulgence of characters you must ‘like’ telling stories you ‘empathise’ with. Pop music has become a bland sewer of bad storytelling lyrics, teenage doggerel, albeit in perfect pitch through autotune. Storification has become so inflated that it has smothered art. If you don’t conform, as a comedian at the Edinburgh Fringe, the venue will now cancel you on the whims of some students who work there.

The trouble with reliance on stories is that they tend to become beliefs and dogmas; then those with strong convictions make convicts of the others who want to tell other stories, even if they’re satirical or straight-up funny. It has happened repeatedly in history with fascism, communism, dictatorships and theocracies. They will always come up against the cold reality that the world is not a story, certainly not the story they believe in.


In the beginning was not the word, it was the world. The world is not made up of stories. It was around before us and will be around after we are gone, along with our conceits. End of story.

Sunday, July 16, 2023

This is the worst AI will ever be, so focused are educators on the present they can’t see the future

One thing many have not grasped about the current explosion of AI is that it is moving fast, very fast. Performance improvement is real, rapid and often surprising. This is why we must be careful about fixating on what these models do at present. The phrase 'AI is the worst it will ever be' is relevant here. People, especially in ethical discussions, are often fixated on the past, old tools and the present, not considering the future. It only took 66 years to get from the first flight to the moon. Progress in AI will be much faster.

In addition to being the fastest-adopted technology in the history of our species, it has another feature that many miss: it learns, adapts and adds features very, very quickly. You have to check in daily to keep up.

Learning technology

The models learn, not just from unsupervised training on gargantuan amounts of data but also from reinforcement learning with human feedback. LLMs reached escape velocity in functionality when the training set reached a certain size, and there is still no end in sight. Developments such as synthetic data will take it further. This simple fact, that this is the first technology to ‘learn’, and learn fast, at scale, continuously, across a range of media and tasks, is what makes it extraordinary.
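The shape of that 'no end in sight' improvement is often described as a smooth power law: loss keeps falling as the training set grows. A minimal sketch of that shape, with every constant invented purely for illustration (none of these numbers come from any real model):

```python
# Toy power-law loss curve (constants a, alpha, floor are invented):
# predicted loss = irreducible floor + a * tokens^(-alpha).
# Loss keeps falling as training tokens grow, in ever smaller steps.
def toy_loss(tokens: float, a: float = 10.0, alpha: float = 0.08,
             floor: float = 1.7) -> float:
    return floor + a * tokens ** -alpha

for tokens in [1e9, 1e10, 1e11, 1e12]:
    print(f"{tokens:.0e} tokens -> loss {toy_loss(tokens):.2f}")
```

The point of the sketch is only the monotone trend: each order of magnitude of data still buys a gain, which is why fixating on today's performance misleads.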


Teaching technology

There is also a misconception around the word ‘generative’, the assumption that all it does is create blocks of predictable text. Wrong. Many of its best uses in learning are its abilities to summarise, outline, provide guidance, support and many other pedagogic features that can be built into software. This works, and will mean tutors, teachers, teaching support, note-taking support, coaches and many other services will emerge that aid both teaching and learning. They are being developed in their hundreds as we speak.


Additive technology

On top of all this is the blending of generative AI with plug-ins, where everything from Wikipedia to advanced mathematics has been added to supplement its functionality. These are performance enhancers. Ashok Goel has blended his already successful teaching bot Jill Watson with ChatGPT to increase the efficacy of both. On top of this are APIs that give it even more potency. The reverse is also true, where Generative AI supplements other tools. There is no end of online tools that have added generative AI to make themselves more productive, as it need not be a standalone tool.

It can use and translate between hundreds of languages, also computer languages, even translating from text into computer languages, images, video, 3D characters, 3D worlds... it is astounding how fast this has happened, oiling productivity, communications, sharing and learning. Minority languages are no longer ignored.

All of the world's largest technology companies are now AI companies (all in the US and China). The competition is intense and drives things forward. This blistering pace means they are experimenting, innovating and involving us in that process. The prizes of increased productivity, cheaper and faster learning, along with faster and better healthcare, are already being seen, if you have the eyes to look.

People tend to fossilise their view of technology; their negativity means they don’t update their knowledge, experience and expectations. AI is largely Bayesian: it learns as it goes and it is not hanging around. People are profoundly non-Bayesian: they tend to rely on first impressions and stick with their fixed views through confirmation and negativity biases. They fear the future, so stick to the present.
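The Bayesian contrast can be made concrete. A minimal sketch, with likelihoods invented purely for illustration: a Bayesian updater revises its belief with every new piece of evidence, which is exactly what a fixed first-impressions view refuses to do.

```python
# Bayes' rule for a binary hypothesis: the posterior becomes the new prior
# with each observation, so belief tracks the evidence rather than the
# first impression. All numbers here are illustrative.
def bayes_update(prior: float, p_obs_if_true: float,
                 p_obs_if_false: float) -> float:
    numerator = prior * p_obs_if_true
    return numerator / (numerator + (1 - prior) * p_obs_if_false)

# Hypothesis: "the tool is genuinely useful". Each positive experience is
# three times likelier if that is true (0.9 vs 0.3).
belief = 0.5  # start undecided
for _ in range(4):  # four positive experiences in a row
    belief = bayes_update(belief, 0.9, 0.3)
print(round(belief, 3))  # prints 0.988 -- belief rises with the evidence
```

A non-Bayesian, by contrast, keeps `belief` frozen at whatever the first impression set it to, which is the fossilisation the paragraph describes.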



Those who do not see AI as developing fast, even exponentially, use their fixity of vision to criticise what has already been superseded. They poke fun at ChatGPT 3.5 without having tried ChatGPT 4, any plug-ins or any of the other services available. It’s like using Wikipedia circa 2004 and saying ‘look, it got this wrong’. They poke the bear with prompts designed to flush out mistakes, like children trying to break a new toy. Worse, they play the GIGO trick, garbage in: garbage out, then say ‘look, it’s garbage’.

This is the worst AI will ever be, and it's way better than most journalists, teachers and commentators think, so we are in for a shock. The real digital divide is now between those with curiosity and those who refuse to listen. Anyone with access to a smartphone, laptop or tablet (that's basically almost all learners in the developed world) has access to this technology. The real divide is between those in the know and those not in the know, those using it and those not using it, and that is the increasing gap between learners and teachers. So focused are educators on the present that they can’t see the future.

Thursday, July 13, 2023

AI is now opening its eyes, like Frankenstein awakening to the world

The AI frenzy hasn’t lessened since OpenAI launched ChatGPT. The progress, widening functionality and competition have been relentless, with what sounds like the characters from a new children’s puppet show: Bing, Bard, Ernie and Claude. This brought Microsoft, Google, Baidu and Anthropic into the race, actually a two-horse race between the US and China.

It has accelerated the shift from search to chat. Google responded with Bard, the Chinese with Ernie’s impressive benchmarks, and Claude has just entered the race with a 100k token context and cheaper prices. They are all expanding their features, but one particular thing caught my eye: the integration of ‘Google Lens’ into Bard, from Google. Let’s focus on that for a moment.


Context matters

Large Language Models have focused on text input, as the dialogue or chat format works well with text prompting and text output. They are, after all, ‘language’ models, but one of their weaknesses is a lack of ‘context’. This is why, when prompting, it is wise to describe the context within your prompt. The model has no world model; it doesn’t know anything about you or the real world in which you exist, your timelines, actions and so on. This means it has to guess your intent just from the words you use. What it lacks is a sense of the real world, to see what you see.


Seeing is believing

Suppose it could see what you see? Bard, in integrating Google Lens, has just opened up its eyes to the real world. You point your smartphone at something and it interprets what it thinks it sees. It is a visual search engine that can ID objects, animals, plants, landmarks, places and no end of other useful things. It can also capture text as it appears in the real world on menus, signs, posters, written notes; as well as translating that text. Its real time translation is one of its wonders. It will also execute actions, like dialling telephone numbers. Product search is also there from barcodes, which opens up advertising opportunities. It even has style matching.

More than meets the eye

OK, so large language models can now see and there’s more than meets the eye in that capability. This has huge long-term possibilities and consequences, as this input can be used to identify your intent in more detail. The fact that you are pointing your phone at something is a strong intent, that the object or place is of real, personal interest. That, with data about where you are, where you’ve been, even where you’re going, all fills out your intention.


This has huge implications for learning in biology, medicine, physics, chemistry, lab work, geography, geology, architecture, sports, the arts and any subject where visuals and real world context matters. It will know, to some degree, far more about your personal context, therefore intentions. Take one example, healthcare. With Google Lens one can see how skin, nails, eyes, retinas, eventually movements can be used to help diagnose medical problems. It has been used to fact check images, to see if they are, in fact, relevant to what is happening on the news.  One can clearly see it being useful in a lab or in the field, to help with learning through experiments or inquiry. Art objects, plants, rocks can all be identified. This is an input-output problem. The better the input, the better the output.


Performance support

Just as importantly, learning in the workplace is a contextualised event. AI can provide support and learning relevant to actual workplaces, airplanes, hospital wards, retail outlets, factories, alongside machines, in vehicles and offices - the actual places where work takes place - not abstract classrooms.

In the workplace, learning at the point of need for performance support can now see the machine, vehicle, place or object that is the subject of your need. Problems and needs are situated and so performance support, providing support at that moment of need, as pioneered by the likes of Bob Mosher and Alfred Remmits, can be contextualised. Workplace learning has long sought to solve this problem of context. We may well be moving towards solving this problem.


Moving from 2D to 3D virtual worlds

Moving into virtual worlds: my latest book, out later this year, argues that AI has accelerated the shift from 2D to 3D worlds for learning. Apple may not use the words ‘artificial’ or ‘intelligence’, but its new Vision Pro headset, which redefines computer interfaces, is packed full of the stuff, with eye, face and gesture tracking. Here the 3D world can be recognised by generative AI to give more relevant learning in context, real learning by doing. Again, context will be provided.



Generative AI was launched as a text service but quickly moved into media generation. It is now opening its eyes, like Frankenstein awakening to the world. There is often confusion over whether Frankenstein was the creator or the created intelligence. With Generative AI, it is both: we created the models, but it is our culture and language that make up the LLM. We are looking at ourselves, the hive mind, in the model. Interestingly, if AI is to have a world view, we may not want to feed it such a view; as with LLMs, we may want it to create a world view from what it experiences. We are taking steps towards that exciting, and slightly terrifying, future.