Friday, September 06, 2024

AI is a provocation – that explains a lot of the bizarre reactions


Huge numbers of people are using AI 'on the sly' in schools, universities and workplaces. Institutions will try to preserve the problem to which they are a solution, protecting themselves against their own learners and employees.

State of play

Even if we stopped right now, randomised controlled trials show large performance gains in real institutions for management, administration, idea generation, translation, writing and coding using AI. But opinion among insiders (as distinct from outsiders) has coalesced around the idea that AI will experience continuing rapid development for years. GPT5 is a real thing, we have not exhausted scaling, and many other techniques are in play, stimulated by huge investment and a recognition that this is now a massive shift in technology with huge impacts on all our lives in culture, economics, education, health, finance, entertainment and, above all, the nature of work itself. That's why it will change why we learn, what we learn and how we learn.

Provocation

We would do well to ask why this has happened. I suspect it is because Generative AI is a brilliant 'provocation': not because it is the fastest-adopted technology in the history of our species, but because it is its greatest challenge. It destabilises the establishment, especially those who want to control power through knowledge. That's why the reaction has been strongest among technocrats in government and academia. Institutions just can't deal with it.

They can't deal with the fact that it behaves like us, because it is 'us': it has been trained on our vast cultural legacy. It throws traditional transfer-of-knowledge thinkers, who believe they are in the game of teaching the 'truth' (as they see it), when it actually behaves like real people. They expect a search engine and get dialogue. Used to telling people what they need to know and do, they see the technology as a threat to their positions.

To be honest, it is pretty good at being traditionally smart. Only 18 months in, it is passing high-stakes exams in most subjects and now performs better than humans in many. And when it comes to the meat and potatoes of a largely text-based education, it can do most of what we expect the average pupil or student to do, in seconds. We are meant to be teaching people to be critical, but anyone with a modest critical outlook, on their first use of GenAI, thinks 'Holy shit… much of what I'm learning, and do in my job, can be done quickly, in most languages, at any time, and it is free.' It makes learners ask: is all of this essentially text-based learning worth my while? This is exactly what was uncovered in a recent Harvard student survey, where almost every student admitted to using AI and to rethinking their education and career as a result. LINK

A lot of what learners do in school, college or work has started to seem a little predictable, tired and often irrelevant: all that time writing essays and doing maths you'll never use. It takes 20 years of pretty constant slog to get our young people even remotely ready for the workplace. We send our kids to school from 5 to 25, yet almost all of this time is spent on text, reading and writing oodles and oodles of text. Most vocational skills have been erased from curricula. No wonder learners react by gaming the system when they are assessed on text alone.

All of this effort playing the cat-and-mouse game around text consumption and production, rather than reflecting on the purpose of education, has led to a certain suspicion about these 20 years of schooling. It is hard to maintain enthusiasm for a process that has become odder and odder as the years go by. This is not to say that text is unimportant, only that it is not everything and is very often a poor form of assessment.

Morality police

Another symptom of institutions not being able to cope with the provocation is the endless moralising by paid 'AI and Ethics' folk, and their assumption that they hold the keys to ethical behaviour when the rest of us do not. In fact, they are often no more qualified on the subject, show massive biases towards their own 'values' and often have a poor understanding of how the technology actually works. It's a heady cocktail of pious moralising.

Worse still is their assumption that we, and by that I mean teachers, employees and learners, don't really know what we're doing and need to be educated on matters of morality. They speak like the morality police, a well-paid professional cadre demanding that we obey their exhortations, as if we poor amoral souls need their help to behave in the right way. Sure, there are ethical issues, but that does not mean we need Iranian levels of policing.

Conclusion

If we really want our young people to become autonomous adults, and that is what I think education is about, then AI can help foster and support that autonomy. It already gives them amazing levels of agency, access and support. Let's see how we can help them become more autonomous and use these tools to achieve their goals, rather than spending years on tasks machines can do in seconds. Treat learners like machines and they'll behave like machines. Treat learners like autonomous people and they'll use the machines.
