When AI is mentioned, it is only a matter of time before the word ‘bias’ is heard. The two seem to go together like ping and pong, especially in debates around AI in education. Yet the discussions are often merely examples of bias themselves – confirmation, negativity and availability biases. There is little analysis behind claims such as ‘AI programmers are largely white males, so all algorithms are biased – patriarchal and racist’, or the commonly uttered phrase ‘All algorithms are biased’. In practice, you hear the same few examples brought up time and time again: the black face/gorilla mislabelling, recruitment software, reoffender software. Most of these examples have their origin in Cathy O’Neil’s Weapons of Math Destruction or in internet memes. More on this later.
To be fair, AI is for most people an invisible force, the part of the iceberg that lies below the surface. AI is many things; it can be technically opaque, and true causality can be difficult to trace. So, to unpack this issue it may be wise to look at the premises of the argument, as this is where many of the misconceptions arise.
Coders and AI
First up, the charge that the root cause is male, white
coders. AI programmers these days are more likely to be Chinese or Indian than
white. AI is a global phenomenon, not confined to the western world.
The
Chinese government has invested a great deal in these skills through Artificial
Intelligence 2.0. The 13th Five-Year Plan (2016-2020), the Made in China 2025
program, Robotics Industry Development Plan and Three-Year Guidance for
Internet Plus Artificial Intelligence Plan (2016-2018) are all contributing to
boosting AI skills, research and development. India has an education system
that sees ‘engineering’ and ‘programming’ as admirable careers and a huge
outsourcing software industry with a $150 billion IT export business.
Even in Silicon Valley, Asian and Indian programmers are so prevalent that they feature in every sitcom on the subject. Even if the numbers were wrong, the idea that coders infect AI with racist code, like the spread of Ebola, is far-fetched. One wouldn’t deny the probable presence of some bias, but the idea that it is omnipresent is ridiculous.
Gender and AI
True, there is a gender differential, and this will continue, as there are gender differences when it comes to the focused, detail-oriented coding found in the higher echelons of AI programming. We know that autism, a constellation (not a spectrum) of cognitive traits, has a strong genetic component and is weighted towards males (and no, this is not merely a function of underdiagnosis in girls). For this reason alone, there is likely to be a gender difference in high-performance coding teams for the foreseeable future. In addition, the idea that these coders are unconsciously, or worse, consciously creating racist and sexist algorithms is an exaggeration. One has to work quite hard to do this, and to suggest that ALL algorithms are written in this way is another exaggeration. Some may be, but most are not.
Anthropomorphic bias and AI
The term Artificial Intelligence can in itself be a problem, as the word ‘intelligence’ is a genuinely misleading, anthropomorphic term. AI is not cognitive in any meaningful sense, not conscious, and not intelligent other than in the sense that it can perform some very specific tasks well. It may win at chess and Go, but it doesn’t know that it is even playing these games, never mind that it has won.
Anthropomorphic bias appears to arise from our natural ability to read the minds of others, leading us to attribute qualities to computers and software that are not actually there. Behind this basic confusion is the idea that AI is one thing. It is not: it encapsulates 2,500 years of mathematics, since Euclid put the first algorithm down on papyrus, and there are many schools of AI that take radically different approaches. The field is an array of different techniques, often mathematically quite separate from each other.
ALL humans are biased
First, it is true that ALL humans are biased, as shown by Nobel Prize-winning psychologist Daniel Kahneman and his colleague Amos Tversky, who exposed a whole pantheon of biases that we are largely born with and that are difficult to shift, even through education and training. Teaching is soaked in bias. There is socio-economic bias in policy, as it is often made by those who favour a certain type of education. Education can be bought privately, introducing inequalities. Gender, race and socio-economic bias is often found in the act of teaching itself. We know that gender bias is present in subtly directing girls away from STEM subjects, and we know that children from lower socio-economic groups are treated differently. Even so-called objective assessment is biased, often influenced by all sorts of cognitive factors – content bias, context bias, marking bias and so on.
Bias in thinking about AI
There are several human biases behind our thinking about AI. We have already mentioned Anthropomorphic bias: reading ‘bias’ into software is often the result of over-anthropomorphising it.
Availability bias arises when we frame our thinking around what comes easily to mind, rather than reasoning from the evidence. Crude images of robots enter the mind as characterising AI, as opposed to software or mathematics, which are not, for most, easy to call to mind or visualise. This skews our view of what AI is and of its dangers, often producing dystopian ‘Hollywood’ perspectives rather than objective judgement.
Then there’s Negativity
bias, where the negative has more impact than the positive, so the Rise of
the Robots and other dystopian visions come to mind more readily than positive
examples such as fraud detection or cancer diagnosis.
Most of all we have Confirmation bias, which leaps into action whenever we hear of something that seems like a threat and we want to confirm our view of it as ethically wrong.
Indeed, the accusation that all algorithms are biased is often (not always) a combination of ignorance about what algorithms are and the four human biases above – anthropomorphism, availability, negativity and confirmation. It is often a sign of bias in the objector, who wants to confirm their own deficit-based weltanschauung and apply a universal, dystopian interpretation to AI, with a healthy dose of neophobia (fear of the new).
ALL AI is not biased
In your first lesson on algorithms, you are likely to be taught some sorting mechanisms (there are many). It is difficult to see how sorting a set of random numbers into ascending order can be either sexist or racist. The point is that most algorithms are benign, doing a mechanical job, free from bias. They improve strength, precision and performance over time (robots in factories), compress and decompress communications, encrypt data, compute strategies in games (chess, Go, poker and so on), support diagnosis, investigation and treatment in healthcare, and reduce fraud in finance. Most algorithms, embedded in most contexts, are benign and free from bias.
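To make this concrete, here is the sort of routine a first lesson might cover – a minimal insertion sort sketch in Python (the names and example values are mine, purely for illustration). It compares numbers and nothing else; there is nowhere for a social prejudice to hide.

```python
def insertion_sort(values):
    """Sort a list of numbers into ascending order, in place."""
    for i in range(1, len(values)):
        current = values[i]
        j = i - 1
        # Shift every larger element one slot to the right...
        while j >= 0 and values[j] > current:
            values[j + 1] = values[j]
            j -= 1
        # ...then drop the current element into the gap.
        values[j + 1] = current
    return values

print(insertion_sort([5, 3, 8, 1, 9, 2]))  # [1, 2, 3, 5, 8, 9]
```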
Note that I said ‘most’, not ‘all’. It is not true to say that all algorithms and/or data sets are biased, unless one resorts to the idea that everything is socially constructed and therefore subject to bias. As Popper showed, an all-embracing theory of this kind admits no possible objection, as even the objections are interpreted as being part of the problem. This is, in effect, a sociological dead-end.
Bias in statistics and maths
AI is not conscious or aware of its purpose. It is, as Roger Schank kept saying, just software, and as such is not ‘biased’ in the way we attribute that word to humans. The biases in humans have evolved over millions of years, with additional cultural input. AI is maths, and we must be careful not to anthropomorphise the problem. There is a definition of ‘bias’ in statistics which is not a pejorative term, but is precisely defined as the difference between an estimator’s expected value and the true value of the parameter being estimated. If that difference is zero, the estimator is called unbiased. This is not so much bias as a precise recognition of differentials.
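In symbols, that definition is simply Bias(θ̂) = E[θ̂] − θ. A standard textbook example is the naive variance estimator that divides by n rather than n − 1; a short simulation (a sketch, with arbitrary example numbers) makes the bias visible and quantifiable:

```python
import random

def naive_variance(sample):
    """Variance estimate dividing by n - systematically too low."""
    m = sum(sample) / len(sample)
    return sum((x - m) ** 2 for x in sample) / len(sample)

# True distribution: uniform on [0, 1], whose variance is 1/12
random.seed(42)
estimates = [naive_variance([random.random() for _ in range(5)])
             for _ in range(100_000)]

expected = sum(estimates) / len(estimates)
print(f"E[estimator] ~ {expected:.4f}, true value = {1/12:.4f}")
# The gap (about -0.017 here) is statistical 'bias': precisely
# defined, measurable, and removed by dividing by n - 1 instead.
```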
However, human bias can be translated into other forms of statistical or mathematical bias. One must now distinguish between algorithms and data. There is no exact mathematical definition of ‘algorithm’; in practice, bias is most likely to be introduced through the weightings and techniques used. Data is where most of the problems arise. One example is poor sampling: too small a sample, or under- or over-representation of groups. Data collection can also be biased by faults in the gathering instruments themselves. Selection bias occurs when data is gathered selectively rather than randomly.
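A toy illustration of that last point (the population and numbers are invented for the sketch): estimate a population mean twice, once from a random sample and once from a collection process that can only reach part of the population.

```python
import random

random.seed(0)
# A synthetic population whose true mean is about 50
population = [random.gauss(50, 10) for _ in range(100_000)]
mean = lambda xs: sum(xs) / len(xs)

# Random sampling: every individual equally likely to be included
random_sample = random.sample(population, 1000)

# Selective sampling: a gathering process that only ever reaches
# individuals with values above 45 - inclusion is not random
selective_sample = [x for x in population if x > 45][:1000]

print(f"true mean        ~ {mean(population):.1f}")
print(f"random sample    ~ {mean(random_sample):.1f}")     # close
print(f"selective sample ~ {mean(selective_sample):.1f}")  # biased high
```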
However, the statistical approach at least recognises these biases and adopts scientific and mathematical methods to try to eliminate them. This is a key point: human bias often goes unchecked, while statistical and mathematical bias is subjected to rigorous checks. That is not to say the process is flawless, but error rates and techniques for quantifying statistical and mathematical bias have been developed over a long time, precisely to counter human bias. That is the essence of the scientific method.
An aside…
The word ‘algorithm’ induces a rather simplistic interpretation of AI. Some algorithms are not created by humans: code can create code, and some algorithms are deliberately generated in evolutionary AI to create variation, which is then selected against a fitness function. It’s complex. There are algorithms in nature that determine genetic outcomes, the way plants grow and many other natural phenomena. Some think that a set of deep algorithms determines the whole of life itself. Evolutionary AI allows algorithms to be generated by algorithms themselves, in an attempt to mimic evolution, by defining fitness and selecting the variants that work. While it is true that bias can creep into this process, it is wrong to claim that all algorithms are created solely by the hand of the coder.
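A minimal sketch of that variation-and-selection loop, assuming a toy fitness goal (get as close as possible to a target number): the surviving candidates were never written line by line by anyone.

```python
import random

TARGET = 42  # toy goal; fitness is closeness to this number

def fitness(candidate):
    return -abs(candidate - TARGET)  # higher is better

def evolve(generations=50, pop_size=20, mutation_scale=5.0):
    # Start from a random population of candidate solutions
    population = [random.uniform(-100, 100) for _ in range(pop_size)]
    for _ in range(generations):
        # Variation: each candidate spawns a randomly mutated child
        children = [c + random.gauss(0, mutation_scale) for c in population]
        # Selection: keep only the fittest half of parents + children
        population = sorted(population + children, key=fitness)[-pop_size:]
    return max(population, key=fitness)

print(evolve())  # converges towards 42 with no hand-coded solution
```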
AI and transparency
A common observation about contemporary AI is that its inner workings are opaque, especially machine learning using neural networks. But compare this to another social good – medicine. We know it works, but we often don’t know how. As Jon Clardy, a professor of biological chemistry and molecular pharmacology at Harvard Medical School, says, the idea that drugs are the result of a clean, logical search for molecules that work is a ‘fairytale’. Many drugs work, yet we have no idea why. Medicine tends to throw possible solutions at problems, then observe whether they work. Most AI is not like this, but some is. We need to be careful about bias, but in many cases, especially in education, we are more interested in outputs and attainment, which can be measured in relation to social equality and equality of opportunity. We have a far greater chance of tackling these problems using AI than by sticking with good, old-fashioned bias in human teaching.
FAIL means First Attempt In Learning
Nass and Reeves, through 35 studies in The Media Equation, showed that the temptation to anthropomorphise technology is always there. We must resist the temptation to think this is anything but bias. When an algorithm, for example, correlates a black face with a gorilla, it is not biased in the human sense of being racist; it is not a racist agent. The AI knows nothing of itself; it is just software. It is merely an attempt to execute code, and this sort of error is often how machine learning actually learns. Indeed, this repeated attempt at statistical optimisation lies at the very heart of what AI is. Failure is what makes it tick. The good news is that repeated failure results in improvement in machine learning, reinforcement learning, adversarial techniques and so on. It is often absolutely necessary to learn from mistakes to make progress. We need to applaud failure, not jump on the bias bandwagon.
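A minimal sketch of failure-driven learning, using a toy one-parameter model fitted by gradient descent (the data and learning rate are invented for illustration): each round, the error is measured and the parameter is nudged to shrink it. The mistakes are the training signal.

```python
# Fit y = w * x to toy data by repeatedly measuring and reducing error.
data = [(1.0, 3.1), (2.0, 5.9), (3.0, 9.2)]  # roughly y = 3x

w = 0.0    # initial guess - deliberately wrong
lr = 0.01  # learning rate
for step in range(200):
    # Gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # each measured 'failure' nudges w to a better value

print(f"learned w ~ {w:.2f}")  # close to 3: the errors did the teaching
```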
When Google was found to stick the label of gorilla on black faces in 2015, there is no doubt that the result was racist in the sense of causing offence. But rather than someone at Google being racist, or a piece of maths being racist in any intentional sense, this was a systems failure. The problem was spotted and Google responded within the hour. We need to recognise that technology is rarely foolproof; neither are humans. Failures will occur. Machines do not have the cognitive checks and balances that humans have on such cultural issues, but they can be changed and improved to avoid them. We need to see this as a process and not block progress on the back of outliers. We need to accept that these are mistakes and learn from them. If mistakes are made, call them out, eliminate the errors and move on. FAIL in this case means First Attempt In Learning. The correct response is not to dismiss AI because of these failures but to see them as opportunities for success.
The main problem here is not the very real issue of bias emanating from software, which we must strive to eliminate, but the simple contrarianism behind much of the debate. This was largely fuelled by one book…
Weapons of 'Math' Destruction – a sexed-up dossier on AI?
An unfortunate title, as O’Neil’s supposed WMDs are as illusory as Saddam Hussein’s mythical WMDs, the evidence similarly weak, sexed up and cherry-picked. This is the go-to book for those who want to stick it to AI by reading a pot-boiler. But rather than taking an honest look at the subject, O’Neil takes the ‘Weapons of Math Destruction’ line far too literally, unwittingly re-using a term that has come to mean exaggeration and untruths. The book has some good case studies and passages, but the search for truth is lost as she tries too hard to be a clickbait contrarian.
Bad examples
The first example borders on the bizarre. It concerns a teacher who is supposedly sacked because an algorithm said she should be. Yet the true cause, as revealed by O’Neil herself, was other teachers who had cheated on behalf of their students in tests. Interestingly, they were caught through statistical checking, as too many erasures were found on the test sheets. That’s more man than machine.
The second is even worse. Nobody really thinks that US College Rankings are algorithmic in any serious sense. The ranking models are quite simply statistically wrong. The problem is not the existence of fictional WMDs but schoolboy errors in the basic maths. It is a straw man: the rankings use subjective surveys and proxies, and everybody knows they are gamed. Malcolm Gladwell did a much better job of exposing them as self-fulfilling exercises in marketing. In fact, most of the problems uncovered in the book, if one does a deeper analysis, are human.
Take PredPol, the predictive policing software. Sure, it has its glitches, but the advantages vastly outweigh the disadvantages, and the system, and its use, evolve over time to eliminate the problems. The main problem here is a form of bias or one-sidedness in the analysis. Most technology has a downside. We drive cars despite the fact that well over a million people die gruesome and painful deaths every year in car accidents. Rather than tease out the complexity, even comparing upsides with downsides, we are given over-simplifications. The proposition that all algorithms are biased is as foolish as the idea that all algorithms are free from bias. This is a complex area that needs careful thought, and the real truth lies, as usual, somewhere in between. Technology often has this cost-benefit feature. To focus on just one side is quite simply a mathematical distortion.
The chapter
headings are also a dead giveaway - Bomb Parts, Shell Shocked, Arms Race,
Civilian Casualties, Ineligible to serve, Sweating Bullets, Collateral Damage,
No Safe Zone, The Targeted Civilian and Propaganda Machine. This is not 9/11
and the language of WMDs is hyperbolic - verging on propaganda itself.
At times O’Neil makes good points on data – small data sets, subjective survey data and proxies – but this is nothing new and features in any 101 statistics course. The mistake is to pin the bad-data problem on algorithms and AI; that is often a misattribution. Time and time again we get straw men in online advertising, personality tests, credit scoring, recruitment, insurance and social media. Sure, problems exist, but posing marginal errors as a global threat is a tactic that may sell books but is hardly objective. In this sense, O’Neil plays the very game she professes to despise – bias and exaggeration.
The final chapter is where it all goes badly wrong, with the laughable Hippocratic Oath. Here’s the first line of her imagined oath: “I will remember that I didn’t make the world, and it doesn’t satisfy my equations” – a flimsy line. There is, however, one interesting idea: that AI be used to police itself. A number of people are working on this, and it is a good example of seeing technology realistically, as a force for both good and bad, where the good will triumph if we use it for human good.
This book
relentlessly lays the blame at the door of AI for all kinds of injustices, but
mostly it exaggerates or fails to identify the real, root causes. The book is
readable, as it is lightly autobiographical, and does pose the right questions
about the dangers inherent in these technologies. Unfortunately it provides
exaggerated analyses and rarely the right answers. Let us remember that Weapons
of Mass Destruction turned out to be lies, used to promote a disastrous war.
They were sexed up through dodgy dossiers. So it is with this populist
paperback.
Conclusion
This is an important issue, but it is being clouded by often uninformed and exaggerated positions. AI is unique, in my view, in having a large number of well-funded entities set up to research and advise on the ethical issues around it. They are doing a good job of surfacing issues and suggesting solutions, and they will influence regulation and policy. Hyperbolic statements based on a few flawed, meme-like cases do not solve the problems that will inevitably arise. Technology is almost always a balance of upsides and downsides; let’s not throw the opportunities in education away on the basis of bias, whether in commentators or in AI.