Friday, August 14, 2020

AI and ethics - it's not as good as you think and not as bad as you fear

Joanna Bryson, one of the world’s experts in AI and ethics, is right when she points out that the big problem in AI and ethics is ‘anthropomorphising’. AI is competence without comprehension. It can beat you at chess, Go and poker but doesn’t know it has won. Literally hundreds of AI and ethics groups have sprung up over the last couple of years. Some are serious international bodies, such as the EU and the IEEE, but it is important to examine the issues while remaining level-headed. The danger is that we destroy the social goods that AI offers by demonising it before it has been tried.

Having just launched a new book ‘AI for Learning’ in which I tackle these ethical issues in some detail, I thought I’d provide a taster for the ethical concerns as they may affect the world of learning. 

Existential

Let’s get one moral issue out of the way – the existential threat. This often centres on Ray Kurzweil's ‘Singularity’, the idea that AI will at some point transcend human intelligence and become uncontrollable. Other AI experts, such as Stuart Russell, Brett Frischmann and Nick Bostrom, have speculated at length on ways in which runaway AI could be a threat to our species. Although there are possible scenarios in which runaway AI leads to our demise as a species, this is not an issue that should worry us much in using AI for learning. Many, such as Steven Pinker, Daniel Dennett and other serious thinkers on AI, are sceptical of these end-of-days theories. In any case, it is highly unlikely that AI for education will do much other than protect us from such scenarios.

Bias

Much more relevant is the topic of ‘bias’. The problem with many of the discussions around bias in AI is that the discussions themselves are loaded with biases: confirmation bias, negativity bias, immediacy bias and so on. Remember that AI is ‘competence without comprehension’, competences that can be changed, whereas all humans have cognitive biases, which are difficult to change. AI is just maths, software and data. This is mathematical bias, for which there are precise definitions. It is easy to anthropomorphise these problems by seeing one form of bias as the same as the other. That aside, mathematical bias can be built into algorithms and data sets. What the science of statistics, and therefore AI, does is quantify and try to eliminate such biases. This is, essentially, a design problem, and I don’t see much of a problem in the learning game, where datasets tend to be quite small, for example in adaptive learning. It becomes a greater problem when using a model such as GPT-3 for learning, where the data set is massive. It can literally produce essay-like content at the click of a button. Nevertheless, I think that the ability of AI to be blind to gender, race, sexuality and social class may, in learning, make it less biased than humans. We need to be careful when it comes to making decisions that humans often make, but at the level of learner engagement and support there is plenty of low-hanging fruit that need be of little ethical concern.
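To make the idea of quantifiable ‘mathematical bias’ concrete, here is a minimal sketch in Python of one standard fairness definition, demographic parity: the gap in positive-outcome rates between two groups. The model decisions and groups below are invented for illustration; real fairness audits use many such metrics, not just this one.

```python
# A minimal sketch of how mathematical bias can be quantified.
# "Demographic parity difference" is the gap in positive-outcome
# rates between two groups; 0.0 means no gap on this metric.

def positive_rate(outcomes):
    """Fraction of positive (1) outcomes in a list of 0/1 decisions."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(outcomes_a, outcomes_b):
    """Absolute gap in positive-outcome rates between groups A and B."""
    return abs(positive_rate(outcomes_a) - positive_rate(outcomes_b))

# Hypothetical model decisions (1 = learner offered extra support)
group_a = [1, 1, 0, 1, 1, 1, 1, 0]  # 6/8 = 0.75
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # 3/8 = 0.375

print(demographic_parity_difference(group_a, group_b))  # 0.375
```

Because the metric is explicit, the gap can be measured and designed out of a system, which is exactly what is so hard to do with human cognitive biases.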

Race

The most valuable companies in the world are AI companies, in the sense that their core strategic technology is AI. As to the common charge that AI is largely written by white coders, I can only respond by saying that white AI coders are massively outnumbered by Chinese, other Asian and Indian coders. The CEOs of Microsoft and Alphabet (Google) were both born and educated in India, and the CEOs of the three top Chinese tech companies are Chinese. Having spent some time in Silicon Valley last year, I found it one of the most diverse working environments I’ve seen in terms of race. We can always do better, but this should not, in my view, be seen as a crippling ethical issue.

Gender

Gender is an altogether different issue and a much more intractable problem. There seems to be a bias in the educational system, among parents, teachers and others, that steers girls away from STEM subjects and computer studies. But the idea that all algorithms are gender-biased is naïve. If such bias does arise, one can work to eliminate it. Eliminating human gender bias is much more difficult.

Transparency

It is true that some AI is not wholly transparent, especially deep learning using neural networks. However, we shouldn’t throw out the baby with the bathwater… and the bath. We all use Google, and academics use Google Scholar, because they are reliably useful. They are not transparent. The problems arise when AI is used to, say, select or assess students. Here, we must ensure that we use systems that are fair. A lot of work is going into technology that interprets other AI software and reveals its inner workings.
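As a flavour of how such interpretation tools work, here is a minimal sketch in Python of one common technique, permutation importance: treat the model as a black box, shuffle one input feature, and measure how much accuracy drops. The ‘model’ and data are invented for illustration; real interpretability tools are far more sophisticated, but the principle is the same.

```python
# A sketch of permutation importance: probing a black-box model
# by shuffling one input feature and measuring the accuracy drop.
import random

def black_box_model(features):
    # Stand-in for an opaque model: passes a student if their
    # quiz score (feature 0) is above 50; it ignores feature 1.
    return 1 if features[0] > 50 else 0

def accuracy(data, labels):
    """Fraction of examples the model classifies correctly."""
    return sum(black_box_model(x) == y for x, y in zip(data, labels)) / len(labels)

def permutation_importance(data, labels, feature_index, seed=0):
    """Drop in accuracy when one feature's values are shuffled."""
    rng = random.Random(seed)
    baseline = accuracy(data, labels)
    column = [row[feature_index] for row in data]
    rng.shuffle(column)
    shuffled = [row[:feature_index] + [v] + row[feature_index + 1:]
                for row, v in zip(data, column)]
    return baseline - accuracy(shuffled, labels)

# Hypothetical learner records: [quiz score, forum posts] -> pass/fail
data = [[80, 3], [20, 7], [65, 1], [40, 9], [90, 2], [30, 5]]
labels = [1, 0, 1, 0, 1, 0]

print(permutation_importance(data, labels, 0))  # usually positive: score matters
print(permutation_importance(data, labels, 1))  # 0.0: forum posts are ignored
```

Even without opening the model up, this kind of probing reveals which inputs actually drive its decisions, which is the practical question when AI is used to select or assess students.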

Dehumanisation

A danger expressed by some educators is that AI may automate, and therefore dehumanise, the process of learning. This often comes up in discussions of robot teachers. I discuss the fallacy of robot teachers in the book. It is largely a silly idea, as silly as having a robot driver in a self-driving car. It is literally getting the wrong end of the stick, as AI in learning is largely about support for learners. Far from dehumanising learning, it may empower learners.

Employment

The impact of AI on employment is a lively political and economic topic. Yet, before Covid, we had record levels of employment in the US, UK and China. There is a fair amount of scaremongering at learning conferences, where you commonly see fictional quotes such as ‘65% of children entering primary school today will be doing jobs that don’t yet exist’. Even academic studies tend to be hyperbolic, such as the Frey and Osborne (2013) report from Oxford University, which claimed that 47% of US jobs were at high risk of automation over the following two decades. Seven years on, the evidence that this is true is slim. What is clear is that skills in creating and using AI for learning will be necessary. Indeed, Covid has accelerated this process. I categorise and list these new skills in the book.

Conclusion

I touch upon all of these issues in the book and stick to my original premise that AI is ‘not as good as you think it is and not as bad as you fear’. Sure, there are ethical issues, but they are similar to the general ethical issues in software and in any area of human endeavour where technology is used. It is important not to see AI as separate from software and technology in general. That’s why I’m on the side of Pinker and Dennett in saying these are manageable problems. We can use technology to police technology. Indeed, AI is already used to stop sexist, racist and hateful text and imagery from appearing online. Technology is always a balance between good and bad. We drive cars despite the fact that 1.3 million people die horrible deaths every year in crashes and many more suffer serious injuries. Let’s not demonise AI to such a degree that its benefits are not realised and, as I discuss in the book, in education and training the benefits are considerable.

 

AI for Learning

The book ‘AI for Learning’ is available on Amazon. In addition to ethics, it covers many facets of AI for learning: teaching, learning, learner support, content creation, chatbots, learning analytics, sentiment analysis and assessment.

 

Bibliography

Bryson, J.J., Diamantis, M.E. and Grant, T.D., 2017. Of, for, and by the people: the legal lacuna of synthetic persons. Artificial Intelligence and Law, 25(3), pp. 273–291.

Kurzweil, R., 2005. The singularity is near: When humans transcend biology. Penguin.

Russell, S., 2019. Human compatible: Artificial intelligence and the problem of control. Penguin.

Frey, C.B. and Osborne, M.A., 2013. The future of employment: How susceptible are jobs to computerisation? Oxford Martin School, University of Oxford.

Clark, D. Review of Human Compatible: https://donaldclarkplanb.blogspot.com/search?q=Human+Compatible+by+Stuart+Russell+-+go+to+guy+on+AI+-+a+must+read..

Frischmann, B. and Selinger, E., 2018. Re-engineering humanity. Cambridge University Press.

Clark, D. Review of Re-engineering Humanity: https://donaldclarkplanb.blogspot.com/search?q=Frischmann

Bostrom, N., 2014. Superintelligence: Paths, Dangers, Strategies. Oxford University Press.

Pinker, S., 2018. Enlightenment now: The case for reason, science, humanism, and progress. Penguin.

Dennett, D.C., 2017. From bacteria to Bach and back: The evolution of minds. WW Norton & Company.

Clark, D. Review of From Bacteria to Bach and Back: https://donaldclarkplanb.blogspot.com/search?q=Dennett+-+why+we+need+polymaths+in+the+AI+ethics+debate

