Wednesday, March 29, 2023

Moral panic and AI regulation...

Open letters asking for bans strike me as the wrong approach: the unrepresentative few denying the many things they may want. It is a power play and profoundly undemocratic. I remember people demanding that we stop and block access to Wikipedia in schools and universities, and some actually did it.... blocked access to a knowledge base. Why? Power - they saw themselves as the sole purveyors of knowledge. The blockers now are largely in academia, as this is a technology they fear. They see themselves as overseers and this threatens their status.

Blocking technology is sometimes a churlish attempt to hold onto power. I note that one minute they despise Elon Musk, then suddenly see him as a saviour! Fickle bunch. We are months into the release of ChatGPT and have hardly seen the end of civilisation. The release was deliberate, to test with a large number of real users across the globe. That worked, and ChatGPT4 is miles better due to feedback and human training. I note that most of the examples I see on Twitter are still ChatGPT3.

You’d think, from the moral panic around AI, that no one was doing anything around ethics. Every man, woman and their dog is chucking out advice, frameworks, papers, rules, opinions and pronouncements on AI, as if they were the first to see ethical problems. Much of it is not ‘ethics’ at all, as there is barely a mention of the benefits. That is a big problem, as the net benefits also need to be identified in making an overall judgement. This is, of course, normal. Every major shift in technology gets this reaction - writing (read Plato), printing, calculators, the internet, Wikipedia, social media, computer games, smartphones… whenever a new tectonic plate rubs up against the old one, the old is subsumed beneath it and there is seismic activity, even a few volcanic outbursts!

We sometimes forget that there is a great deal of existing law and regulation that covers technology and its use. In addition to existing regulation, huge teams have been working on new regulation in dozens of countries and political blocs, like the EU. There has also been communication and alignment between them.

For example, if you have an AI solution to solve a real clinical problem, you need to certify it as it develops, through some pretty tough regulatory standards for Software as a Medical Device (SaMD). You cannot launch the product or service without jumping through these hoops, which are demanding and expensive. There is also GDPR and many other country-specific laws.


In the US, it is pretty much specific use cases at the moment, at state level, with little Federal law. The proposed Algorithmic Accountability Act of 2022 would require companies to assess the impacts of AI, and there is more proposed regulation going through the process as we speak.

The White House 'Blueprint for an AI Bill of Rights' has five principles:

  1. Safe and Effective Systems: You should be protected from unsafe or ineffective systems.

  2. Algorithmic Discrimination Protections: You should not face discrimination by algorithms and systems should be used and designed in an equitable way.

  3. Data Privacy: You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used.

  4. Notice and Explanation: You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.

  5. Alternative Options: You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.

There is much talk of an 'AI Bill of Rights’ but these are still regulatory guidelines, a blueprint for legislation. They get quite specific in certain areas, what the EU would call ‘high risk’ areas, such as HR, money lending and surveillance. That, I think, is the right approach, as there is a massive baby-and-bathwater problem here: being so strict on legislation that the benefits of AI are not realised.


The EU have been hard at it for several years now, since 2018, and although they tend to suffer from technocratic hubris, they have taken an angle that is pragmatic and easy to understand.

They have published proposals for a regulation called the Artificial Intelligence Act. It has some good stuff around the usual suspects - data quality, transparency, human oversight and accountability - and rightly tackles sector-specific issues. But its big idea, which is reasonable, is to classify systems by risk and regulate accordingly. The classification system identifies levels of risk that an AI system could pose and there are four tiers:

  1. unacceptable 

  2. high

  3. limited

  4. minimal

Minimal-risk systems will be unaffected and that is right, as people have been working for decades to do good, innovative work and that should continue. The others will be subject to scrutiny and reasonable regulation. The problem is that this approach can't easily cope with new products and ideas, as EU law is set in stone, unlike common law, which is more flexible. There are already signs that they will regulate so hard that innovation will be stifled. It is the EU, so it will tend towards overregulation, and the EU is only 5.7% of the world's population, so let’s not imagine that it holds all the cards here. In truth the EU is not a powerhouse in AI; most of the innovation and product is coming from the US. The EU law is expected in 2024.
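The tiered model is, in effect, a simple classification lookup. A minimal sketch of the idea, with illustrative use cases of the kind commentators have placed in each tier (the mapping here is hypothetical, not quoted from the Act's text):

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict obligations: conformity assessment, oversight, documentation"
    LIMITED = "transparency duties, e.g. disclose that users face an AI"
    MINIMAL = "no new obligations"

# Illustrative (hypothetical) mapping of use cases to tiers
EXAMPLES = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "CV-screening for hiring": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLES.items():
    print(f"{use_case}: {tier.name.lower()} risk ({tier.value})")
```

The point of the structure is that obligations attach to the tier, not to individual products, which is what makes the scheme easy to understand but slow to adapt when a new product fits no existing tier.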

The Council of Europe have also been publishing a large number of discussion documents in the field, including several in education.


China has a much more aggressive attitude towards regulation, with some good focus on preventing fake news and fraud against the elderly, but the fiery dragon has a long tail of problems. These laws are in place and any foreign companies in China must comply.

Its algorithm regulations, the Internet Information Service Algorithmic Recommendation Management Provisions, went into effect in March 2022. They tackle both general and specific issues, information service norms and user rights protection. They also demand audits and transparency. There is a focus on protecting users, especially minors and the elderly, from data harvesting - this, I think, is enlightened. They are also keen to avoid monopolies and want control over algorithmic manipulation, so are very specific with their targets:

Article 13 prohibits the algorithmic generation of fake news and requires online service news providers to be licensed (the sting in the tail)

Article 19 offers protection to the elderly by requiring online service providers to address the needs of older users, especially on fraud

Other targets include manipulating traffic numbers and promoting addictive content, hence the limiting of screen time for young people. It is here that things get very strange, as there are ‘ethics’ rules around ‘Upholding mainstream value’ (Government ethics), ‘Vigorously disseminating positive energy' (Government propaganda) and the ‘Prevention or reduction of controversies or disputes’ (toe the line - straightforward censorship).


Unlike the EU, the UK is taking its time and leaving sector-specific bodies to do their work within the existing law. I think this is right. We do not want to regulate out innovation. The AI sector is now strong and growing. They’re taking a de minimis approach, being careful and flexible. There are no statutory laws as yet, although GDPR is there.

They have just published a white paper outlining five principles that the regulators should consider to enable the safe and innovative use of AI in the industries they monitor:

  1. Safety, security and robustness: applications of AI should function in a secure, safe and robust way where risks are carefully managed

  2. Transparency and "explainability": organisations developing and deploying AI should be able to communicate when and how it is used and explain a system's decision-making process in an appropriate level of detail that matches the risks posed by the use of AI

  3. Fairness: AI should be used in a way which complies with the UK's existing laws, for example on equalities or data protection, and must not discriminate against individuals or create unfair commercial outcomes

  4. Accountability and governance: measures are needed to ensure there is appropriate oversight of the way AI is being used and clear accountability for the outcomes

  5. Contestability and redress: people need to have clear routes to dispute harmful outcomes or decisions generated by AI

We will see a slow, sensible and pragmatic approach, sensitive to new developments.


There is a geo-political AI race and that affects regulation. It is likely, in my opinion, that we will get US-European alignment, to keep us competitive in AI. There is an EU-US Trade and Technology Council also looking at alignment. I think we will see a parting of the ways between the US/Europe and China. Another scenario is that the EU overegg everything and go it alone without the US. This would be a big mistake and just push the EU further behind in harvesting the benefits from AI. The UK, post-Brexit, has the freedom to make more flexible choices.

However, one can see an interesting synthesis taking place, where we do the following:

1. Take an Occam's razor approach to regulation from the UK, the minimum number of legal regulations to meet our goal

2. Adopt the EU idea of regulation graded to the size and function of organisations, to protect innovation

3. Make sure the regulation is flexible enough to cope quickly with new advances in the technology

4. Take the Chinese approach of specific targets, such as 'protecting minors' and 'fraud against the elderly'

5. Have a unified global body issue first a set of guidelines, then cascade these back to nation states.



There is of course the WEF, there’s always the WEF, one of my least favourite organisations. There’s much rhetoric around the Fourth Industrial Revolution - it is neither industrial nor the fourth - often infantile. There are also a lot of long academic reports that are out of date before they are printed. I wouldn’t hold my breath.

Tuesday, March 21, 2023

We've just gone from a simple teacher-learner model to a new world of AI teachers - a new pedAIgogy

As new products from OpenAI, Google, Khan Academy, Duolingo and others are launched, with hundreds of millions using them, the learning game has taken a shift. The new pedAIgogy has unleashed a wave of innovation that changes our relationship with knowledge, away from transfer, search and access, towards dialogue and co-creation.

This takes place at several levels. At the level of global culture, LLMs literally take all of our accumulated culture (language, images, audio, video) and mirror it back to us. It takes place at the level of the individual, who can use it, talk to it and co-create knowledge. This is what Vygotsky talked about with socially constructed learning, mediated by 'tools'.

The learning game used to be simple. We had 'Teachers' and 'Learners'. 

Schools, Colleges, Universities and Workplace Learning (L&D) have this as their fundamental model or premise. This is still likely to continue as the model for young children, who have less autonomy in learning. But the world for everyone else has suddenly changed. Our whole relationship with knowledge and skills has changed. The nature of work will also change, so how we learn will change. We need fewer long-form courses and a more dynamic, personalised approach to learning, one that is also motivating and exciting.

That brings us to a fresh and different model, as there are two new kids on the block. 

  1. Human Teacher
  2. Human Learner
  3. AI Teacher (such as ChatGPT and its integration into tools such as Khan Academy & Duolingo)
  4. AI Learner (the AI model trained on a gargantuan amount of data and some human training)

We have moved from Human Teachers and Human Learners, as a dyad, to AI Teachers and AI Learners, as a tetrad. But there is a twist to this tale.

Human Teachers are skilled, but those skills tend to be subject-specific: they know one topic really well and are not generalists. They also have valuable teaching skills, but these level off or plateau.

Learners, however, need to learn more efficiently. 

The AI Learner gets exponentially better, and AI Teachers therefore get better as they draw upon these improvements from the AI Learner.

This means that the balance between teachers and AI changes. Teacher skills plateau, whereas AI Teachers and Learners get better.

AI Teachers get better across ALL subjects. AI Teachers are also available 24/7/365 and are becoming multimodal, able to deliver speech, text, graphics and video. They also deliver dialogue and effortful activity, such as case studies, examples, debate and assessment. We have a new pedagogy based on personal, one-to-one dialogue. This was something researched by Bloom in his paper, The 2 Sigma Problem (1984).

He compared a conventional lecture, a lecture with formative feedback (mastery learning) and one-to-one tuition. Taking the lecture as the mean, he found an astonishing result: formative feedback lifted the average student roughly one standard deviation above the mean, to the 84th percentile, and one-to-one tuition roughly two standard deviations, to the 98th percentile.
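The '2 sigma' name can be unpacked with a quick calculation: a shift of k standard deviations on a normal distribution corresponds to the percentile given by the normal cumulative distribution function, which is where the 84 and 98 figures come from. A minimal sketch:

```python
from statistics import NormalDist

def sigma_to_percentile(k: float) -> float:
    """Percentile reached by a student k standard deviations above the mean."""
    return NormalDist().cdf(k) * 100

print(round(sigma_to_percentile(0)))  # lecture baseline → 50
print(round(sigma_to_percentile(1)))  # formative feedback → 84
print(round(sigma_to_percentile(2)))  # one-to-one tuition → 98
```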

The final stage, and this is some way off, is the elimination of the human teacher, to provide one-to-one tuition using AI. We are now in that age.

This is an uncomfortable debate but we have now crossed that Rubicon. We can now see that the path to faster, cheaper and more effective learning is through faster, cheaper and smarter technology - that technology, as I've been saying for many years, is AI.



Love Vygotsky? You should love ChatGPT4
Can you name his two major works? Vygotsky is the most oft-quoted but rarely read learning theorist I know. Let me start by saying I am not an extreme social constructivist, but in using ChatGPT3 and 4, I have become more Vygotskian. ChatGPT and Bard are almost perfect examples of Vygotskian teachers. Let me explain.
Ultimately the strength of Vygotsky’s learning theory stands or falls on his social constructivism, the idea that learning is fundamentally a socially mediated and constructed activity. Psychology becomes sociology as all psychological phenomena are seen as social constructs. Vygotsky's theory does not propose distinct developmental stages but instead emphasizes the role of social interaction and cultural context in cognitive development. He believed that social interaction plays a critical role in children's cognitive development and argued that children learn through interactions with more knowledgeable individuals, who provide guidance and support.
This is exactly what ChatGPT4 does, in general, but also in a more formal teaching experience, as in Khan Academy's implementation. It provides the ‘knowledgeable other’. In fact, this ‘knowledgeable other’ is better than any one teacher, as it covers all subjects, at different levels, is available 24/7/365, and is endlessly patient, polite, encouraging and friendly.
This is the cardinal idea in Vygotsky’s psychology of education, that knowledge is constructed through mediation, yet it is not entirely clear what mediation entails and what he means by the ‘tools’ he refers to as mediators. In many contexts, it simply seems like a synonym for discussion between teacher and learner. However he does focus on being aware of the learner’s needs, so that they can ‘construct’ their own learning experience and changes the focus of teaching towards guidance and facilitation, as learners are not so much ‘educated’ by teachers as helped to construct their own meaning and learning.
This is exactly what ChatGPT4 does as a ‘tool’. It mediates and allows learners to construct their own sense and meaning by driving the learning process. It uses language, the key form of learning and social development for Vygotsky, to patiently go at the learner's own pace and level, and even identify mistakes.
Zone of Proximal Development (ZPD)
Vygotsky also prescribes a method of instruction that keeps the learner in the Zone of Proximal Development (ZPD), an idea that was neither original to him nor even fully developed in his work. The ZPD is the difference between what the learner knows and what the learner is capable of knowing or doing with mediated assistance. To progress, one must interact with peers who are ahead of the game through social interaction, a dialectical process between learner and peer. 
Bruner thought the concept was contradictory in that you don’t know what you don’t yet know. And if it simply means not pushing learners too far through complexity or cognitive overload, then the observation, or concept, seems rather obvious. Bruner was to point out the weakness of this idea but also replace it with the much more practical and useful concept of ‘scaffolding’.
ChatGPT4 is a brilliant scaffolder. Its patience and usefulness in providing dialogue to move through a topic is extraordinary. Khan Academy has put this to great use in the first iteration of their brilliant tutor service.


Vygotsky, L.S. and Cole, M., 1978. Mind in Society: Development of Higher Psychological Processes. Harvard University Press.
Vygotsky, L.S., 2012. Thought and Language. MIT Press.

Thursday, March 16, 2023

ChatGPT4 hits Duolingo. Game changer in language learning

I have been writing for years about how Duolingo points the way forward in personalised learning using AI. Duolingo has been using sophisticated algorithms for spaced practice, based on ‘half-life’ forgetting, for some time. This is what is behind the increased efficacy of the product: fine-grained personalisation around, not learning, but identifying how fast you forget. An interesting inversion.
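Duolingo has published the core idea as 'half-life regression': the probability of recalling an item decays exponentially with the time elapsed since last practice, scaled by a per-item half-life that the model learns from user data. A minimal sketch with illustrative numbers (the real system estimates each half-life from millions of practice sessions):

```python
def recall_probability(days_since_practice: float, half_life_days: float) -> float:
    """p = 2^(-elapsed / half-life): predicted recall halves with every half-life elapsed."""
    return 2.0 ** (-days_since_practice / half_life_days)

# After exactly one half-life, predicted recall is 50%
print(recall_probability(7, 7))   # → 0.5

# A scheduler can resurface an item when predicted recall dips below a threshold
def needs_review(days: float, half_life: float, threshold: float = 0.6) -> bool:
    return recall_probability(days, half_life) < threshold

print(needs_review(14, 7))  # 2^-2 = 0.25, below the threshold → True
```

The personalisation comes from the half-life itself: items you keep getting right earn longer half-lives and are practised less often, which is exactly the inversion described above.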


Fascinating to see that ChatGPT-4 is now marketed as a learning tool, and the first launch was integrated with Duolingo to give free-flowing, immersive conversations. This is a game changer in language learning as this new stuff is on a new level. 


ChatGPT4 provides two new levels of functionality.


1. Explain My Answer

When you get something wrong in learning a second language, as you frequently do, it can be frustrating not knowing what you got wrong. Duolingo now gives you an explanation ('elaborate') of what was wrong and can also give 'examples' to point you in the right direction, a bit like having a 24/7 native speaker or tutor to help you learn.


2. Roleplay

Here you can chat with someone 24/7, a native speaker who knows your level of competence. That person is AI. It uses human written scenarios as its basis but as it is generative AI, every conversation is, essentially, unique to you. This gives you much needed practice and immersion, always a problem when learning a second language. Once completed, each roleplay session gives you a report to suggest improvements.


We are at the start of a new era of online, freemium learning products in every subject that will revolutionise learning. These products will match and eventually exceed learning in school. Who can deny that language teaching is an area of catastrophic failure? Most spend years at school trying to learn a language and can barely order a cup of coffee at the end.



The whole idea of AI as a useful teacher is here. Honestly it's astounding. They have provided a Socratic approach to an algebra problem that is totally on point. Most people learn in the absence of a teacher or lecturer. They need constant scaffolding, someone to help them move forward, with feedback. This changes our whole relationship with what we need to know, and how we get to know it. Its reasoning ability is also off the scale.

We now have human teachers and human learners, but also AI teachers and AI that learns. It used to be a dyad; it is now a tetrad - that is the basis of the new pedAIgogy.

Personalised, tutor-led learning, in any subject, anywhere, at any time, for anyone. That has suddenly become real.

Wednesday, March 08, 2023

Best use of 'engagement' in learning I've ever heard using ChatGPT!

Gave a Keynote at City of Glasgow College on 'AI for Learning'. I've rarely seen an audience more engaged; the question and answer session could have gone on all day! Indeed, I spent the whole day there, including attending another excellent session run by Joe Wilson, and had innumerable conversations with teachers who approached me - this is Glasgow, people are open and friendly. Honestly, the entire day was positive, nay exciting, with a real buzz about this technology that I’ve never seen before when I give such talks. They weren’t obsessed with ‘academic integrity’; indeed, they were well aware that academics themselves pulled stuff from all sorts of sources, that assessment was all a bit lazy, and were smart enough to laugh and nod at these points.

Ally Robertson, who teaches troubled kids at West Lothian College, gave the best use of ChatGPT I’ve heard to date. He set his students, some of whom had been in serious trouble with the law, hoodies up, a challenge: ”Anybody have a use for this ChatGPT thing?” One lad joked that his Tinder profile was rubbish. “Could it do a better job on that?” They laughed, tried it, and that day his hits went from zero to 90! Ally loves the tech and sees it as a powerful engagement and learning tool, especially for these seriously disengaged learners.

Free from the hubris of Higher Education, these teachers were close to their students, personally committed to their success, and could immediately see the potential of the technology. Joe Wilson showed an aggregated site with dozens of generative AI tools and urged people to try them, which they were doing throughout the day. In other words, they gave permission to students and faculty to use the tools, as long as they mentioned that they had used them. This was so refreshing.

I should also thank Clair, Derek and Joe, the senior managers there, for inviting me. Joe, long a champion of vocational learning, took me to the Sloane bar, where we had a couple of pints and a bit of a laugh. The College is fantastic, the people dedicated to teaching, and Megan, the student who spoke and sat on the panel, was superb. This is what technology in learning should be about – a positive, objective appraisal of the potential of technology with plenty of real examples and conversations – it is, after all, called CHATgpt!

Thursday, March 02, 2023

PedAIgogy – new era of knowledge and learning where AI changes everything

I’m not sure we have fully grasped what has just happened with ChatGPT, or more generally, generative AI. It is a far more profound shift than we realise as it changes our very relationship with knowledge and learning.

Big bang

Knowledge and learning were, for most of our history, largely a matter of oral stories, cave paintings and simple 3D artefacts. Writing, around 5000 years ago, was the big bang of knowledge production, but our relationship to that knowledge was slight, as most of us remained illiterate and the elites kept it to themselves, Latin being just one example, the language of deliberate exclusion. The technology of papyrus, paper and vellum remained expensive, copying and reproduction laborious. Then printing amplified the big bang and took it to the masses, in their vernacular languages and through books.



The second big bang was the internet, where digital knowledge eventually became multimodal, largely through multimedia. Text was available through Wikipedia, newspapers, articles and books online. Audio through podcasts and music. Video through YouTube. Also 3D worlds through Google Earth and Maps.


What made the real pedagogic difference, however, was not content but access to content, through ‘search’. Google, Google Scholar and search for YouTube videos (a different form of search) were the pedagogic means to the end. Hyperlinks also allowed us to leap across and down into knowledge. Search continues to be developed through semantic search, which promises to be far more accurate.

In a fascinating comparison between using Google for search and ChatGPT, the latter was faster, coped with a better range of questions, often had better quality answers and was a better user experience, as it seemed like a more natural dialogue, not search and retrieve. There are still accuracy problems but it is clear that search has a challenger. It may not be dead but it is dying.

Many struggle at first with using language models, as you have to make the effort to make yourself clear through prompts. It is no longer click to retrieve; it is talk to get an answer within dialogue. It takes more effort to make yourself clear. Education has encouraged a non-dialogue lecture, inert-text, chalkboard and PowerPoint model that discourages dialogue, speech and inquiry. It's the difference between Google search/retrieve and dialogue using ChatGPT. These skills need to be encouraged and developed, as they are fundamental to inquiry and critical thinking.


The relationship with knowledge was also mediated by our relationships with others online. Suddenly we had ties to more than our close friends, relations and work colleagues. We could communicate and share knowledge with anyone online. People began to post, repost, comment, message, Zoom and see knowledge as accessible via others. This vastly expanded our reach into knowledge and learning.


An entirely different form of dialogue appeared to consumers in 2022 with ChatGPT, famously beating all records for adoption. A fiendishly simple interface, like search (but not really) as it draws on something approaching the sum of human stored knowledge. It is also like social in being trained by us all and involves more dialogue like ‘social’. It is also multimedia, as it can generate text, images, audio and video.

We are no longer in a world with just teachers and learners. We are now in a world of human teachers and human learners, but also technology that teaches and technology that learns. We can learn using it, and from it. We can also teach using it, and it can teach us. Pedagogically, we used to be in a dyad; we're now in a tetrad - that is pedAIgogy.


This is another big bang, the difference being the dynamic creation of knowledge, in real time, in co-created dialogue. We are no longer using technology simply to find knowledge and learn. We have moved forward to find, create, change, organise, synthesise, even evaluate knowledge and learning with technology. This is a new form of pedagogy I call ‘pedAIgogy’. We are co-creators, not just of text but in all media, multimedia creators, as well as learning and teaching in a far more complex relationship with technology.

We have only just begun to realise that we will now move forward, not by keeping everything at a distance but by embracing dialogue. Socrates and Plato were suspicious of writing for good reasons. In the Phaedrus Plato cautions us about being too reliant on a technology as simple as writing. It may have the opposite educational effect from that intended, as it creates a sense that something is learnt but actually results in forgetfulness. He warns us that writing may be the enemy of memory, as one is not generating from one’s own mind but the already written text.

In returning to a core Socratic relationship with knowledge, new forms of co-created literature, images, audio and video will emerge: new knowledge, new research, new art, new forms of teaching, new forms of learning. We have crossed a generative Rubicon and there is no going back. Neither should we want to, as this technology captures all of our thoughts. It is us. It reflects the many, not the few: the hive mind, the supermind.

There are dangers but let’s not imagine that scarcity in knowledge or learning was a great thing. Perhaps we have been drowning in a sea of text in education, learning, research and work for too long.

This may seem heretical, but have we been under the yoke of text-heavy institutions for too long, with scarce, expensive courses, plunging many into debt, in some cases for a lifetime? 20 solid years of reading and writing text – that, oddly, is our model. Yet those most in need of education still seem furthest from it and, despite serious, practical skills shortages, vocational learning has been decimated.

Most people want a ‘working knowledge’ of things they want to do, not over-engineered, PPT-led, abstract courses. This new era of PedAIgogy may herald a more dynamic way of formal teaching and learning. It may also swing us quickly toward performance support in the workplace, where the technology responds to needs. A demand driven, not supply-driven model.