Friday, September 06, 2024

AI is a provocation – that explains a lot of the bizarre reactions


Huge numbers of people are using ‘AI on the sly’ in schools, universities and workplaces. Institutions will try to preserve the problem to which they are a solution. The institutions are protecting themselves against their own learners and employees.

State of play

Even if we stopped right now, randomised controlled trials show large performance gains in real institutions for management, administration, ideas generation, translation, writing and coding using AI. But opinion among insiders (different from outsiders) has coalesced around the idea that AI will experience continuing rapid development for years. GPT5 is a real thing, we have not exhausted scaling, and many other techniques are in play, stimulated by huge investment and a recognition that this is now a massive shift in technology with huge impacts on all our lives in culture, economics, education, health, finance, entertainment and, above all, the nature of work itself. That’s why it will change why we learn, what we learn and how we learn.

Provocation

We would do well to ask why this has happened. I suspect it is because Generative AI is a brilliant ‘provocation’, not just because it is the fastest-adopted technology in the history of our species, but because it poses its greatest challenge. It destabilises the establishment, especially those who want to control power through knowledge. That’s why the reaction has been strongest among technocrats in Government and academia. Institutions just can’t deal with it.

They can’t deal with the fact that it behaves like us, because it is ‘us’. It has been trained on our vast cultural legacy. It throws traditional transfer-of-knowledge thinkers because they think they are in the game of teaching the ‘truth’ (as they see it), when it actually behaves like real people. They expect a search engine and get dialogue. Used to telling people what they need to know and do, they see the technology as a threat to their positions.

To be honest, it is pretty good at being traditionally smart – only 18 months in and it’s passing high-stakes exams in most subjects, now performing better than humans in many. And when it comes to the meat and potatoes of a largely ‘text-based’ education, it can do most of what we expect the average pupil or student to do, in seconds. We are meant to be teaching people to be critical, but anyone with a modest critical outlook, on their first use of GenAI, thinks ‘Holy shit… much of what I’m learning and do in my job can be done quickly, in most languages, at any time – and it is free.’ It makes learners think – is all of this essentially text-based learning worth my while? This is exactly what was uncovered in a recent Harvard student survey, where almost every student admitted to using AI as well as having a rethink about their education and careers. LINK

A lot of what learners are doing in school, college or work has started to seem a little predictable, tired, often irrelevant. All of that time writing essays, doing maths you’ll never use. It takes 20 years of pretty constant slog to get our young people even remotely ready for the workplace. We send our kids to school from 5 to 25, yet almost all of this time is spent on text - reading and writing oodles and oodles of text. Most vocational skills have been erased from curricula. No wonder they react by gaming the system, as they are assessed on text only.

All of this effort in playing the cat-and-mouse game around text consumption and production, rather than reflecting on the purpose of education, has led to a certain suspicion about this 20 years of schooling. It is hard to maintain enthusiasm for a process that has become odder and odder as the years go by. This is not to say that text is unimportant, only that it is not everything and, very often, a poor form of assessment.

Morality police

Another symptom of institutions not being able to cope with the provocation is the endless moralising by paid ‘AI and Ethics’ folk, with their assumption that they hold the keys to ethical behaviour when the rest of us do not. In fact, they are often no more qualified on the subject, show massive biases towards their own ‘values’ and often have a poor understanding of how the technology actually works. It’s a heady cocktail of pious moralising.

Worse still is their assumption that we, and by that I mean teachers, employees and learners, don’t really know what we’re doing, and need to be educated on matters of morality. They speak like the morality police, a well-paid professional cadre, demanding that we obey their exhortations. We poor amoral souls need their help in behaving in the right way. Sure, there are ethical issues, but that does not mean Iranian levels of policing.

Conclusion

If we really want our young people to become autonomous adults, and that is what I think education is about, then AI can help foster and support that autonomy. It already gives them amazing levels of agency, access and support. Let’s see how we can help them become more autonomous and use these tools to achieve their goals, not stuff that can be done in seconds by machines. Treat learners like machines and they’ll behave like machines. Treat learners like autonomous people and they’ll use the machines.


Founder v Manager mode - the meme

At a recent YC event, Brian Chesky’s talk challenged the conventional wisdom on scaling companies. It has become a massive meme. It struck a nerve with me and confirmed what I have experienced over 40 years in business. I've built and sold businesses, invested and got involved with others. Recently, I was in a business that we grew from scratch and sold to Private Equity, and I saw all of this play out: advisors shoved down our throats, bad advice, mostly from people who did not know the sector or market. I got out quickly as I saw their behaviour on day 1, riding roughshod over everyone, ignoring obvious points.

Back to Chesky. He shared how the advice he received 'hire good people and give them space' was disastrous for Airbnb. After studying Steve Jobs' approach, Chesky developed a more hands-on style that has been successful, as reflected in Airbnb’s strong financial performance.

Many founders echoed similar experiences, realising that the advice they received was tailored for managers, not founders. This highlighted two different modes of running a company: 'founder mode' and 'manager mode'. Unlike managers, founders need to stay deeply involved in key areas of their companies, even as they grow.

His talk revealed that there’s little formal knowledge about founder-led scaling. Founders have been left to figure it out themselves, but successful examples like Chesky’s show the importance of founders staying connected to their teams and not relying solely on delegation. This approach, while more complex than traditional management, often works better for fast-growing companies.

1. Don’t blindly follow conventional advice
The typical advice of "hire good people and give them space" doesn’t always work for founders. Be cautious when others tell you how to scale your company. This, for me, is the number one rule. Advisors are often stuck in their own world. I've never had a 'mentor' or 'coach'. I've seen success come from NOT taking the obvious advice. Advisors are so often just groupthink people, and you can find that stuff in seconds on ChatGPT.

2. Embrace 'Founder Mode'

Founders should run companies differently from professional managers. Stay involved in important details and don't feel pressured to delegate everything. Yip. You're not playing the game; you're trying to reinvent the game and get some edge. Oh, and joint CEOs never work!

3. Don’t switch to a manager mindset

Just because your company is growing doesn’t mean you should operate like a corporate manager. Your unique insight as a founder is key, so keep it in play. Perhaps the most important bit of advice - as soon as you descend into managerial mode, you lose the difference you're trying to make.

4. Have skip-level meetings

Don’t just talk to your direct reports. Get to know people further down the line. It helps maintain the company culture and keeps you in touch with what’s really happening. Yip, walk the floor - listen and act. Introduce your customers to as many people as possible. I used to have a stations of the cross tour, constantly taking customers round showing them the production process.

5. Develop your own style

Try out unconventional ideas, as they did at Apple, to keep things fresh and keep the company agile. So important. Ring the changes and make it seem exciting. This does not mean silly team-building events in escape rooms. Be yourself and don't follow leadership-course BS about empathy being everything - it's not.

6. Watch out for professional fakers

Be careful of people who seem great at "managing up" but don’t actually bring much value. Make sure your team truly shares your vision. There's a ton of 'let me grow your business' and 'entrepreneur courses' around. They're largely BS.

7. Delegate carefully

As your company grows, you’ll need to delegate, but make sure you do it based on trust that’s been earned, not just because it’s expected. This is so right. Delegation is one thing, losing control or low performance is another.

8. Trust your instincts

Don’t let others make you doubt your gut feelings, even if professional managers or advisors disagree with you. As a founder, your perspective is valuable. I remember bold decisions, like forking our code or switching out of our established sector - sometimes bold decisions have to be made.

9. Don’t misuse founder mode

Once the idea of “founder mode” becomes more popular, be careful not to use it as an excuse to avoid delegation. Also, watch out for non-founders trying to adopt it in the wrong way. When a PE company comes in you get a crop of largely useless 'advisors', often people from different sectors giving you sage advice - it's formulaic and never sage. In one case, in a hugely successful company, after hearing a series of these, I cashed out. The company is now run by spreadsheet bods.

10. Keep evolving

As your company scales, keep reassessing how you want to progress. You may need to adapt, but stay closely connected to the core vision of your business. I've seen this perseverance in people who love what they do and build over time. This is important. When AI, for example, is seen as a productivity amplifier, get on with it; don't wait for reports.

Don't be scared to get out when the money guys come in. If you're bought, take the money and go off and do something interesting. It gives you the freedom to do precisely that. Don't become a manager of someone else's business - you will hate it.




Thursday, September 05, 2024

Man who coined the phrase ‘Postmodern’ is often forgotten - Jean-François Lyotard

More fashionable names in Postmodern theory include Foucault, Derrida, Baudrillard and Barthes. But the man who coined the phrase ‘Postmodern’, Jean-François Lyotard, in The Postmodern Condition (1979), is often forgotten, as his views contradict those of his hipper colleagues.

As a far-left activist and academic in France, Algeria and the US (in the Critical Theory Department of the University of California, then Emory University) he explored the impact of postmodernity on a wide range of subjects; philosophy, epistemology, science, art, literature, film, music and culture. 

Meta- and Mini-narratives

His alternatives to ‘meta-narratives’ are personal ‘mini-narratives’ that reduce knowledge to personal experience. Objective, empirical evidence is trumped by lived experience, so that the mini-narratives of individuals and groups are placed above those of science, general ethics or society as a whole.

We see in Lyotard an explicit epistemic relativism (belief in personal or culturally specific truths or facts) and the advocacy of privileging ‘lived experience’ over empirical evidence. We also see the promotion of a version of pluralism which privileges the views of minority groups over the general consensus of scientists or liberal, democratic ethics which are presented as authoritarian and dogmatic. This is consistent in postmodern thought.

In The Postmodern Condition: A Report on Knowledge (1979) Lyotard explores the transformation of knowledge in postmodern society, focusing on how the decline of meta-narratives and the rise of performative knowledge affect educational practices. The Differend: Phrases in Dispute (1985) elaborates on the idea of different discourses and the importance of acknowledging differences in understanding and communication, relevant to educational contexts that value diverse perspectives.

Like Foucault, Jean-François Lyotard’s views on teaching and learning reflect his broader critique of knowledge and society. With the decline of grand narratives and the rise of performative, context-dependent knowledge, he pushed for an educational approach that values diversity, adaptability, and critical engagement. He also encouraged a move towards more flexible, pragmatic, and technologically integrated forms of learning that respond to the complexities of the postmodern condition.

The grand narratives of the Enlightenment (meta-narratives) that once legitimised knowledge are in decline. Universal reason and progress no longer rule and this decline affects how knowledge is produced and transmitted. Learning is increasingly legitimised through its utility and efficiency, rather than through universal or absolute claims. This shift influences educational practices and the goals of teaching and learning.

Knowledge, for Lyotard, changes with the dissolution of dominant narratives. The Enlightenment narratives of objectivity and truth are no longer applicable. This, he thinks, has caused a crisis in knowledge, as it has been commercialised, creating tensions between rich and poor, private sector and state.

Lyotard oddly used "paganism" metaphorically to describe a stance that rejects universal principles in favour of a multiplicity of perspectives and values. In this context, it contrasts with monotheistic or universal approaches to truth and ethics. This concept is linked to his broader critique of universalism, advocating instead for a recognition of diversity and the co-existence of different, sometimes conflicting, ways of life. 

Paralogy

Knowledge, if more fragmented with a plurality of perspectives, is opposed to the fixity of a ‘canon of universal knowledge’. Building on the value of these multiple perspectives, he introduces the concept of ‘paralogy’, the generation of new, often contradictory perspectives and diversity in thought, a pluralistic approach to learning. Localised, context-specific narratives are preferred to overarching truths. Teaching and learning should, therefore, focus on fostering diverse perspectives and critical thinking.

Knowledge is contingent and context-dependent and cannot be derived from universal principles. We must now look to its pragmatic effectiveness in specific contexts to combat traditional notions of academic authority and curriculum design. Traditional educational institutions perpetuate outdated meta-narratives and hierarchical structures, and resist the pluralistic and performative approaches that Lyotard advocates, so he called for reform.

Teaching and learning are open, dynamic processes rather than rigid, predetermined paths and should embrace uncertainty and complexity, encouraging students to engage with multiple perspectives and to be critical of established norms. It should involve innovation and experimentation, allowing students to explore and create new forms of knowledge rather than merely replicating existing ones.

Performative Knowledge

Lyotard sees learning increasingly judged by its performance, practical application, and efficiency rather than its slavish adherence to old narratives and universal truths. Performative Knowledge is valued for its ability to produce measurable outcomes and skills that can be immediately applied in practical contexts. This performative emphasis shifts the focus of education towards marketable skills and competencies. This ‘perspective’ influences curricula to prioritise skills and knowledge that have clear, immediate applications and can be quantified, often at the expense of more traditional, humanistic educational goals.

Language games

As part of his critique of grand narratives and stated truths, he also had a view on language and its uses. While the term "language games" was originally coined by Ludwig Wittgenstein, Lyotard popularised it in the context of postmodernism to describe how different groups use language according to their own rules and contexts, leading to a plurality of meanings rather than a single, unified understanding. This concept highlights the idea that knowledge and meaning are contingent on social contexts and that different communities or disciplines may operate according to different linguistic norms.

Technology

Lyotard also recognises the significant impact of technological advancements on teaching and learning. Technology changes how knowledge is accessed, distributed, and valued. In a world characterised by rapid technological advancements, the production and dissemination of knowledge become highly efficient. Educational institutions must adapt to these changes by integrating new technologies into their teaching methods. The rise of digital learning aligns with Lyotard's idea that knowledge is becoming increasingly performative and accessible in new forms, which led him to embrace learning technologies.

Yet he is critical of claims that knowledge is truth, as it is a slave to 'meta-narratives'. Science, in particular, he sees as a meta-narrative that puts knowledge in the hands of power and politics, thereby shedding its claim to objectivity. Faith in science, as he explains in The Inhuman (1988), legitimises the digital capture of knowledge and therefore faith in technology.

Critique

Despite the claim for a plurality of perspectives, there is often a fallback to universal claims, theories and values. Also, his attack on science as a meta-narrative doesn’t really explain why the scientific method, with falsification, lacks legitimacy, or what scientific knowledge has been delegitimised. It is a failure to recognise that many of the meta-narratives postmodernists criticise have methods that allow them to examine even themselves. They are also sceptical about claims to absolute truth and at least have processes of self-correction.

It is as if the progress we’ve made since the Enlightenment didn’t exist, that there was no Reformation, French Revolution, secular progress, no progression towards liberal democracies and values. Postmodernism doesn’t have a monopoly on emancipation; many of the advances made in the 60s and 70s were prior to Postmodernism, not caused by it. The Enlightenment was not just the well-spring but the theoretical basis upon which such progress was made, the very progress that allows the current generation of critical theorists to think and act for themselves.

Worse still, it destroys all possible methods of discussion, debate and disagreement, the foundations of liberal democracy; there is no arguing with it. All common ground and methods of falsification have disappeared or are interpreted as power plays. It has donned all the defensiveness of the meta-narratives it purports to despise.

Bibliography

Lyotard, J.-F. (1984). The Postmodern Condition: A Report on Knowledge. Minneapolis: University of Minnesota Press.

Lyotard, J.-F. (1988). The Differend: Phrases in Dispute. Translated by Georges Van Den Abbeele. Minneapolis: University of Minnesota Press.

Lyotard, J.-F. (1988). The Inhuman: Reflections on Time. Stanford: Stanford University Press.



UNESCO and AI - mostly rhetoric

I have been following the output of UNESCO on AI for some time, and have even debated against them (twice). It has been a dispiriting experience. Rather than useful effort and advice, it remains mired in abstract and often irrelevant frameworks. This is the world of conferences and reports, not the real world.


There is a stark contrast between the US and the EU, between the affirmative, voluntary, guidance-based approach of the US and the regulatory approach of the EU.


The US is forging ahead in education, its companies and Universities now way ahead of the EU. Most of the technology comes from the US, with few European examples. The investment in the US dwarfs that of the EU. Meanwhile, the EU sinks into a vale of despondency, its Universities doing little, its innovation way behind.

The Chair, while talking about bias, becomes hopelessly biased seconds later and makes a big blunder by calling the 'EU' AI Act the 'European' AI Act. That is quite simply wrong. Thankfully there are countries in Europe that are not in the EU or subject to this act. And China is notably sidelined, yet it has some excellent legislation that has been in place for a long time.

There is something odd, very Davos, about these people flying all over the world to discuss AI and ethics, especially when their core principle was, and I quote, 'Climate friendly AI'! In truth, UNESCO is irrelevant here. The world is using this technology, paying no regard to the millions of words these aloof world bodies throw out on their websites.

This may sound harsh but these top-down entities have no really useful role to play here. Mired in the rhetoric of 'values' they mean THEIR 'values'. This is not a revolution led by UNESCO, UN, OECD or any other of these bodies. They join the bandwagon long after it left town.

This is a shame, an opportunity lost. Rather than push the really positive, innovative and exciting opportunities, they sit on sofas, reading from prepared scripts and screens, remote from the actual technology and its uses. It's wholly performative - exceptionally well-paid people harping on about the poor. Indeed, they simply duplicate, at great cost, the same old long reports, frameworks, documents and statements, which are largely ignored, as the real world moves on paying them little or no regard.

In truth this AI shift is bottom up, driven by product releases, users and use. That's why their Teacher Competences document is mostly repetitive rhetoric. It will have no real impact, as it is far too abstract. The group is loaded with AI and Ethics people, low on people who have any actual experience in the application of AI in learning. This means a ton of abstract talk about ethics. The word 'ethics' is mentioned on almost every page. 

Teachers are teachers, not experts on ethics. The idea that they need to be competent in judging ethical issues at the political and technical level is very odd. It is all empty theory, low on practice. I've seen competency frameworks all my adult life - they're usually empty exercises by a mixture of academics and people who have little real practical experience, and they are often ignored or out of date by the time they hit the press.

The problem with a competency framework is that it needs real examples. Banging on about competences in ethics is completely misguided. That is not the role of the teacher. It's easy to conjure up little pyramids with these words but, without practical guidance, it's yet another document. You can read the entire document as a teacher and, to be honest, be none the wiser about what you actually need to do in your job.


Wednesday, September 04, 2024

Only a God will save us… some reflections on the enormity of AI

Greek dystopia

The Greeks understood, profoundly, the philosophy of technology. In Aeschylus’s Prometheus Bound, when Prometheus gifts the powers of metallurgy, writing and mathematics to man, Zeus punishes him with eternal torture. This warning is the first dystopian view of technology in Western culture. Mary Shelley subtitled Frankenstein ‘The Modern Prometheus’ and Hollywood has delivered for nearly a century on that dystopian vision. Art has largely been wary and critical of technology.

Yet we may not be thinking deeply enough about what AI brings. For all the chat about its power, we need to see it as a technological event that eclipses the invention of stone tools, writing, printing and the internet. It may be the culmination of all of these, as it already promises physical tools as robots, multimodal capabilities beyond the world of text, global dialogue with another intelligence at any time, from any place, on anything, in any language. It seems to transcend other technologies, with implications beyond past technologies into future unknowns. These are a few reflections on these unknowns.

God as maker

But there was another more considered view of technology in ancient Greece. Plato articulated the philosophy of technology, seeing the world, in Timaeus, as the work of an ‘Artisan’, the universe as a created entity, a technology. Aristotle makes the brilliant observation in his Physics, that technology not only mimics nature but continues “what nature cannot bring to a finish”. They set in train an idea that the universe was made and that there was a maker, the universe as a technological creation.

Monotheism rose on the back of cultures in the fertile crescent of the Middle East, who literally lived on the fruits of their tool-aided labour. The spade, the plough and the scythe gave them time to reflect. Interestingly, our first written records, on that beautifully permanent piece of technology, the clay tablet, are largely the accounts of agricultural produce and exchange. The rise of writing and efficient alphabets made writing the technology of various forms of capitalism and control, holding everything to account, even our sins. The great religious books shaped us for millennia, and still do.

The two-thousand-year history of Western culture after the Greeks bought into the myth of the universe as a piece of created technology. As we entered the age of industrial design and production, Paley formulated it as a modern argument for the existence of God from design, using technological imagery, the watch, to specify and prove the existence of a designed universe and therefore a designer - we call (him) God. In Natural Theology; or Evidences of the Existence and Attributes of the Deity, he uses an argument from analogy to compare the workings of a watch with the observed movements of the planets in the solar system to conclude that it shows signs of design and that there must be a designer. God as watchmaker, technologist, has been the dominant, popular, philosophical belief for two millennia.

Technology, in this sense, helped generate this metaphysical deity. It is this binary separation of the subject from the object that allows us to create new realms, heaven and earth, which gets a moral patina and becomes good and evil, heaven and hell. The machinations of the pastoral heaven and fiery foundry that is hell revealed the dystopian vision of the Greeks and continues in the more exaggerated form of Promethean, doomster AI ethics.

Technology is the manifestation of human conceptualisation and action, as it creates objects that enhance human powers, first physical then psychological. With the first hand-held axes, we turned natural materials to our own ends. With such tools we could hunt, expand and thrive, then control the energy from felled trees to create metals and forge even more powerful tools. Tools beget tools. 

Technology slew God

Technology may have suggested, then created God, but in the end it slew him. With Copernicus, who drew upon technology-generated data, we found ourselves at some distance from the centre of the Universe, not even at the centre of our own little whirl of planets. Darwin then destroyed the last conceit, that we were unique and created in the eyes of a God. We were the product of the blind watchmaker, a mechanical, double-helix process, not a maker, reduced to mere accidents of genetic generation, the sons not of Gods but genetic mistakes. Dawkins titled his book The Blind Watchmaker as an evolutionary counterpoint to Paley.

We have now resurrected a modern form of animism with AI and software, which we first used to our own ends but then realised that we ourselves are animistic beings, driven by software in our brains. The separation of us from the natural world is no longer viable. Human exceptionalism was wounded by Copernicus and Darwin, and has now been killed off by AI.

Anchors lost, we are adrift, but we humans are a cunning species. We not only make things up, we make things and make things happen.

We are makers

Once God was dead, in the Nietzschean sense of a conceptual death, we were left with ourselves and technology. We got our solace not from being created forms but by creating our own forms. We became little Gods and began to create our own universe. We abandoned the fields for factories and designed machines that could do the work of many men. What we learned was scale. We scaled agricultural production through technology in the agricultural revolution, scaled factory production in the industrial revolution, scaled mass production in the consumer revolution. Then more machines to take us to far-off places – the seaside, another country, the moon. We now scale the very thing that created this technology, ourselves. We alchemists of AI have learned to scale our own brains.

Small Gods

Eventually we realised that even we, as creators, could make machines that could know and think on our behalf. God had died, but little Gods are emerging. We may return to that pre-agricultural age, as hunters and gatherers, hunting for meaning, gathering ideas and enthusiasms and making new worlds for ourselves. In an age of abundance we, once more, will have to reflect on the brief folly of 9-5 work and learn to accept that it was never our fate, only an aberration. Technology now literally shapes our conception of place and space, with film, radio, TV and the web, but we spiders may have got entangled in our own created web and the danger is that it begins to spin us.

Technology is not now a ‘black box’, something separate from us. It has shaped our evolution, shaped our progress, shaped our thinking - it will shape our future. Forget the simplistic ‘it’s about people not technology’ trope. There has always been a complex dialectic between our species and technology, and that dialectic has suddenly got a lot more complex with AI. That dialogue has just got very real: as with the invention of writing, then printing, the sum total of human cultural knowledge was gathered, this time used to train LLMs, small Gods. We now engage in dialogue with these small Gods. We are speaking to a created small God - US.

Only a God can save us

As Martin Heidegger said in his famous Spiegel interview, “Only a God can save us”. What I think this commentator on being, technology and the human condition meant, was that technology has become something greater than us, something we now find difficult to even see, as its hand has become ever more subterranean and invisible. It is vital that we reflect on technology, not as a ‘thing-in-itself’, separate from us, but as part of us. Now that we know there may be no maker God, no omnipotent technologist, we have to face up to our own future as makers. For that we need to turn to philosophy – Plato, Aristotle, Nietzsche and Heidegger are a good start….

The postscript is that AI may, in the end, be the way forward even in philosophy. In the same way that the brain has limits on its ability to play chess or GO, it may also have limits on the application of reason and logic. Philosophical problems themselves may need the power of AI to find solutions to these intractable problems. AI may be the God that saves us....

Tuesday, September 03, 2024

AI in schools - how one large school organisation is kick-starting the process....


Never heard such a loud audience before I started! But that was positive, and indicative of the whole day. The Woodland Academy Trust have set out on a journey with AI and this is a leap on that journey – a series of presentations and workshops opening up the possibilities and practicalities.

My line in the Keynote was that the whole world is using ‘AI on the sly’ already – learners, parents, employees. Rather than let this uncontrolled activity happen, it is time to act. I’m not one for Gartner-curving everything, especially AI, as the hype curve is not always right. Printing, the personal computer, the internet, Google search, Wikipedia… major technical innovations tend to be pretty much linear ascents, as is AI. In fact, its ascent has been steeper than that of any technology in the history of our species. My focus was more on where you are and want to be on the maturity curve.

A.I. Kingsley

I was followed by a good, practical talk by Al Kingsley, or A.I. Kingsley. He pressed the slide clicker, only to see the screen rise and disappear towards the roof – ooops…. wrong clicker. He’s an entertaining presenter, so he turned it into a bit of fun. Never seen that before!

He made the point that the DfE and Ofsted are not really prepared for AI, then got super practical.

I liked his point, which I also emphasised, that tech needs to be EASY TO USE. He then proffered a plan:

Where to start with AI?

  • A few core tools
  • Set baseline policy
  • Let staff play and explore
  • Feedback and share
  • Agree policy (usefully showed an actual school policy)

That was followed by his vision of a school for a digital world. Ask where you are as a school, as all schools are at different points on the improvement journey. And don’t get obsessed with the classroom, as lots of tech sits outside it. Don’t try to do too many things at once – build confidence, set up pedagogy checks.

I liked his 6 pillars for a successful strategy slide:


  1. Innovate learning
  2. Student digital skills
  3. Teacher skills
  4. Technology & infrastructure
  5. Effective communication
  6. Everyone’s wellbeing

Tech is NOT the solution, it’s a lever. A final point was to develop CHAMPIONS – a question from the audience confirmed this, and it went down well. PedTech not EdTech!

Are we nearly there yet?

He was followed by Mark Anderson, with a presentation called ‘Are we nearly there yet?’

It started with an image of a worn shortcut path across the grass. AI helps shortcut your progress. Then a great list of tools that are already being used. Very useful.

A must-read paper was recommended: ‘The Future of AI in Education: 13 Things We Can Do to Minimise the Damage’. This was followed by some good lines. But darn, there’s that Gartner curve again – it gets everywhere.

After rightly dissing Bloom’s 2 sigma problem, he had a go at illustrating the limits of AI with the now famous meme:

How many Rs in strawberry?

ChatGPT

There are two "r"s in the word "strawberry."

I’m not so sure about these ‘edge’ cases, as who would ever ask that question? Much better and more likely teacher questions include:

What is the most common mistake children make when spelling strawberry?

ChatGPT

The most common mistake children often make when spelling "strawberry" is leaving out one of the "r"s, particularly the second "r" before the "y." 

Why is the word strawberry difficult for dyslexics?

ChatGPT

The word "strawberry" can be particularly challenging for individuals with dyslexia for several reasons:

1. Phonological Complexity:

Multiple Syllables: "Strawberry" is a three-syllable word ("straw-ber-ry"), and dyslexic individuals often struggle with breaking down and accurately sequencing the sounds in multi-syllable words.

Consonant Clusters: The word begins with the consonant cluster "str," which can be tricky to process and remember, especially for those who struggle with phonological awareness.

2. Irregular Spelling:

Irregular Phonics: The word "strawberry" contains irregular phonetic elements. For instance, the "aw" sound in "straw" and the silent "w" can be confusing because they do not directly correspond to typical phonetic rules that dyslexic individuals might rely on.

Double Letters: The presence of double letters ("rr") adds to the difficulty, as remembering which letters are doubled and where they appear in the word can be challenging.

3. Visual Similarities:

Letter Reversals: Dyslexic individuals often struggle with letter reversals or substitutions, such as confusing "b" and "d." The presence of "b" and "r" in close proximity, as in "berr," can lead to confusion.

Letter and Word Length: Longer words with many letters, like "strawberry," can be harder for dyslexic individuals to process visually and mentally. They may lose track of the sequence of letters or skip over parts of the word.

4. Memory Load:

Working Memory Challenges: Dyslexia is often associated with difficulties in working memory, making it harder to remember and sequence the individual components of longer and more complex words like "strawberry."

Where do strawberries come from?

ChatGPT

Long and brilliant answer…
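The strawberry miscount is easy to understand once you remember that LLMs read subword tokens, not letters. A minimal sketch below illustrates the idea; the token split shown is hypothetical, purely for illustration, not an actual tokenizer’s output:

```python
# LLMs process text as subword tokens rather than individual
# characters, which is why letter-counting questions trip them up.
# This token split is a hypothetical example, not a real tokenizer's.
hypothetical_tokens = ["str", "aw", "berry"]

# A program working at the character level counts trivially:
word = "".join(hypothetical_tokens)
print(word.count("r"))  # 3

# But the model "sees" whole tokens, with the r's buried inside them,
# so it never directly encounters three separate 'r' characters:
print([t.count("r") for t in hypothetical_tokens])  # [1, 0, 2]
```

This is why a question any spreadsheet could answer is an ‘edge case’ for a model that otherwise writes fluent essays: counting letters cuts against the grain of its input representation.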

AI and the humans

Then Fiona Aubrey-Smith with a talk on AI and the humans. She rightly put pedagogy first. 

Again, however, there was the assumption that technology always goes through a cycle of hype and failure. This is not always true. But I wholly agree with her point on the lack of agreement about what schooling is for…. though I wonder if that agreement is ever possible. It may well be that schools have different roles for different people. Politicians frame it as a national, economic issue. Some parents see it as a vehicle for their children’s success, others see it simply as something that happens. Teachers tend to see it in terms of their workplace.

She sees pedagogy as an umbrella term and I agree, to a point, as it stimulates questions around AI, such as: What does it mean to be a learner in an age of AI? What is the role of the teacher in an age of AI?

Then a point I REALLY agree with - digital technology too often focuses on teachers, not learners. This is so true. Government initiatives and spend too often focus on the teacher, but teaching is a means to an end, that end being the learner. Children spend 1267 hours per year in school – how much of that is limited by our practices and capacity? Let’s use AI to make us more efficient – not just faster but better. I also liked her point about using AI to promote metacognition in learners.

Conclusion

Altogether, a fine one-day event. I have a theory of education – defined as ‘creating autonomous adults’. I like the word ‘autonomy’ as it doesn’t define a fixed curriculum or teaching methods. What it does is focus entirely on the outcome for learners – autonomy in life, autonomy in finance and health. AI not only accelerates that goal, in terms of time to actual competences, it induces autonomy and agency in the learner. Fun day, fun people tackling a serious issue – AI in schools.

 

Monday, September 02, 2024

Motivation - in the blink of an AI…. an underestimated dimension of Generative AI

In the blink of an AI, I’ve seen people go from 0 to 60, motivated, keen to learn, creative and more productive. 

In just one example, I have seen Claude have a huge and immediate impact on a small business. Honestly, this is a very real example, with someone I know well. After we showed the CEO what AI could do, he stayed up for hours that night playing with it and discovered use after use. I have never seen him more excited about something he was told would improve his business. For all the networking, business mentors and support he’s had, this blew all of that out of the water and, like a fountain, it has continued to flow. Described by him as “like having a new skilled employee for free”, it is the gift that keeps on giving. Make no mistake, this will lead to growth in his business. The sheer enthusiasm of the CEO was infectious. It is particularly satisfying to see the people at the top get the fever, as it cascades downward – not just by example but by giving permission to others to use it in their own jobs. The dam bursts and the water flows.

We need to reflect on dimensions of Generative AI, other than its functionality, and productivity gains, as there is another greater prize to be won – motivation. Dig deeper and we find behavioural change lies at the root of its global use.

High and low agency

There’s a huge difference between high- and low-agency people in organisations. It’s the difference between those standing still on an escalator and those who briskly walk, even when it’s going down. You can often see and feel this when you deal with an organisation’s people. High-agency organisations have individuals or teams with significant autonomy and control over their work.

People are empowered to take ownership of their tasks and projects, and AI scaffolds this activity. They can make decisions, influence outcomes, and take initiative without excessive oversight or restrictive rules. High agency AI environments also foster creativity and innovation, as individuals feel free to experiment and explore new ideas without fear of failure or micromanagement. Of course, with greater freedom comes greater responsibility. High agency implies that individuals are accountable for their actions and outcomes, but this can also lead to a stronger sense of ownership and engagement.

What you tend to see in high agency environments are individuals with the power and autonomy to control their own work, which leads to higher job satisfaction, performance and well-being. Low-agency environments often lead to dissatisfaction, disengagement, and higher turnover rates.

AI and Intrinsic motivation

Self-Determination Theory (SDT) gives credence to this idea of agency. It shows that people have intrinsic motivations to act when they feel autonomous, competent, and connected to others. High agency aligns with these needs, leading to higher motivation and satisfaction.

Everyone has that ‘Holy shit’ feeling when using AI for the first time. More specifically, Self-Determination Theory, as defined by Edward Deci and Richard Ryan in their book ‘Self-Determination and Intrinsic Motivation in Human Behaviour’, sees the active self, being in control, as the primary driver behind personal growth and fulfilment. It sees intrinsic, not extrinsic, motivation as the driver of personal satisfaction and success: your own need for growth drives your other personal needs. This means growing in competence as well as feeling related or connected to other people.

The theory has three components:

  1. Autonomy - being in control, able to take action that results in actual change.
  2. Competence - learning knowledge and skills to achieve more autonomy.
  3. Connection or relatedness - feeling attached to other people.

AI provides autonomy for people by freeing them to do things they felt they were never capable of, so it gives immediate agency. It also gives that feeling of rapidly increasing competence, of learning quickly, which confirms the feeling of autonomy. On top of this, I’d argue that dialogue with an LLM is like speaking to another person or expert (see research by Nass and Reeves).

AI and wellbeing

Another dimension of motivation in AI is wellbeing. In ‘Lost Connections: Uncovering the Real Causes of Depression – and the Unexpected Solutions’, Johann Hari sees the causes of depression and anxiety not simply as the result of chemical imbalances in the brain but as largely social and environmental. Depression, anxiety and unhappiness at work stem from various forms of disconnection in people’s lives: disconnection from meaningful work, from meaningful values, from status and respect, from a hopeful or secure future, and from a sense of meaning, along with helplessness. AI can partly help (it is by no means the sole solution) by reconnecting the individual with meaningful work, rebooting intrinsic motivation through a strong sense of productivity and achievement.

We know that personal agency matters in terms of job satisfaction, wellbeing and staff retention. We also know that agency matters in learning. People learn faster when they feel a sense of agency, growth and achievement. AI is a door to learning.

Low floor, High ceiling, Wide walls 

Intrinsic motivation is amplified by the ease of the interface, along with the speed and breadth of results. Donald Norman said good technology should be invisible. The future of online learning is that it will be smart, and that these smarts will disappear. The invisible hand of AI will transform why, what and how we learn. But it was Seymour Papert who defined what this should look and feel like in practice. Papert’s concept of low floor, high ceiling and wide walls is wholly relevant to AI in both tasks and learning – foundational ideas in the design of productive and learning environments.

Low Floor is the ease with which a beginner can start using a tool or engaging in an activity. The entry point in AI is astonishingly simple – a chatbot letterbox or voice, accessible enough that novices, even children or those with no prior experience, can begin learning without feeling overwhelmed.

High Ceiling offers complexity and depth for those who wish to explore further. It means that learners can continue to build on their knowledge through dialogue, taking on more challenging tasks as their skills develop, something available and evolving fast in AI, with multimodality, coding, data analysis, agents and additional technology built into and around Generative AI.

Finally, we need Wide Walls – a depth, breadth and diversity of knowledge and paths that learners can take within a productive process, learning task or environment. Generative AI seems to have a degree in every subject, speaking dozens and dozens of languages. There are many different ways to engage with the material, catering to various interests and forms of expression. Do you want a full exposition, a brief summary, a checklist? Writing at the right level in a certain style, even a poem or story, in any language? An image, animation or video?

Generative AI has released, and continues to develop, these three features. A low-floor, high-ceiling, wide-walls interface promotes inclusivity, creativity and personalised learning. It encourages the design of educational tools and activities that are accessible to beginners (low floor), offer room for advanced exploration (high ceiling), and provide multiple ways to engage and express creativity (wide walls).

Conclusion

Let me add one other thing. Adding AI to improve productivity and quality and to meet individual and organisational objectives, especially if you build proprietary AI applications, adds value for everyone. As an individual, you will feel better using this technology, find a sense of release in being able to do things more quickly, have less stress and be more productive. It gives people a release of energy and purpose that many other deliberate interventions, such as courses and edicts from above, do not. It has an immediacy, with instant results, and gives a sense of wonder.

This is why most use of AI in organisations is ‘on the sly’. Organisations veer towards top-down bureaucratic solutions to problems, which are often cumbersome and require difficult skills to master. We finally have a technology that allows one to be more productive, learn faster and feel better.