Politicians love a good report. Problem is, we produce them
like pills, in the hope that they will make things better, when all they do is
act as a placebo. It seems as though things are happening but they ain’t.
Whenever we are worried by something, in this case AI, we get a bunch of people, usually well past their sell-by date, to produce a ‘report’. To be fair, this is
a substantial piece of work, at 420 numbered sections and 74 recommendations,
but it’s all over the place, lacks focus and at times is way off the mark.
Ethics heavy
First, I’m not sure about a document that tries to climb and
descend a mountain at the same time. No sooner has something been stated as a
way forward, than it’s drowned under a wave of repetitive moralising. Although
they wisely stop short of blanket regulations, it’s full of pious statements
about dangers, challenges and ethics. As Hume said, you can’t derive an ought
from an is – and that’s exactly what they do, over and over again. It is
hopelessly utopian in its assumptions, not least that AI can be defined, never mind
regulated. Perhaps too much is attributed to its efficacy and promise. In the
end it’s just software.
Crass identity politics
There’s the usual obsession with identity politics and the idea that bias in algorithms will be solved as follows: “The main ways to address these kinds of biases are to ensure that developers are drawn from diverse gender, ethnic and socio-economic backgrounds.” Oh dear – not that tired old idea. All this shows is that the writers of the report have succumbed to the diversity lobby or suffer from a series of human biases, starting with confirmation bias – the confirmation that diversity will solve mathematical and ethical problems. Bias is a complex set of problems in both human affairs and AI – it needs sharp analysis, not Woolworth’s ‘pick and mix’ team building.
There’s one really puzzling sentence on this that sums up their naivety perfectly: “The prejudices of the past must not be unwittingly built into automated systems, and such systems must be carefully designed from the beginning.” Put aside the fact that this is largely what the House of Lords does for a living; it is not even wrong. AI has 2300 years of mathematics behind it – from the first identified algorithm in Euclid’s Elements, through centuries of theory in logic, probability, statistics and other areas of mathematics. AI is built on the past.
Exploiting AI
“The UK has an excellent track record of academic research in the field of artificial intelligence, but there is a long-standing issue with converting such research into commercially viable products.” Damn right. They’re once again pained over the age-old UK problem of spending oodles of public money on world-class research that doesn’t translate into commercial success. There is the usual error of equating AI
SMEs with University start-ups. Actually, many have nothing to do with
Universities. We need to support SMEs with business ideas. Yet where are
the people like me, who put their own money and energy into starting an AI
company and invest in others? Every AI academic in the land seems to have been
consulted, along with many who wouldn’t know AI if they saw it in their soup.
We know that our HE system is deeply anti-corporate. To assume that research
equals success is a complete non sequitur. We need to encourage innovation AND
commerce around AI – not just hose yet more money into Universities.
Usual suspects
Then there are the usual tired old suspects. First, a Global
Summit. Really? Nothing like a junket to advance our AI capability. Then a code
of conduct. Yet another one? Politicians do love codes of conduct. Then there
is the predictable call for a quango – creatively named the ‘AI Council’. It’s all so unimaginative.
AI in education
But the worst section by far is the one on EDUCATION. There is a great deal of soul-searching about AI in education, but only in the sense of teachers and curricula about AI. The big win here is using AI to improve and accelerate teaching and learning, yet the report barely touches it. This is what happens when you only talk to teachers about AI. It’s all about the curriculum and nothing about actual practice. This is a massive, wasted opportunity. I’m selling an AI learning company to the US as I write this. We’re already losing ground.
There’s something called the Hall-Pesenti review – whatever that is. I’ve worked in AI in learning for years, run an AI company (WildFire), have invested in AI in learning companies, speak all over the world on the topic, write constantly on the topic – yet have no idea what this is. That’s the problem – Parliament is an echo-chamber. They don’t really speak to the people who DO things.
Conclusion
To be fair, there’s some good stuff on healthcare and a
few shells over the bow for defence and autonomous weapons, but it’s a bit
tired, pious and lacks punch. It will, of course, fall stillborn from the
press.
1 comment:
Dear Donald
Great article. Education/schooling is very conservative; the current curriculum is really not preparing our kids for the changed world ahead. There will be a lot fewer jobs (the traditional School-Uni-Job route) - preparing kids to be mini entrepreneurs online is not in the lexicon.
Those with kids will have to do it themselves.
Ben