Sunday, April 02, 2023

Ethics, Experts and AI

The ethics of AI has been discussed for decades. Even before Generative AI hit the world, the level of debate, from the philosophical to the regulatory, was intense and conducted at a high level. It has become even more intense over the last few months.

It is an area that demands high-end thinking in complex philosophical, technical and regulatory domains. In short, it needs expertise. In the public domain, the spate of Lex Fridman podcasts has been excellent - the recent Altman and Yudkowsky ChatGPT programmes, in particular, represent the two ends of the spectrum. I also recommend high-end AI experts, such as Marcus, Brown, Hassabis, Karpathy, Lenat, Zaremba, Brockman and Tegmark. At the philosophical level, Chalmers, Floridi, Bostrom, Russell, Larson, and no end of real experts have published good work in this field. They are not always easy to read but are worth the effort if you want to enter the debate. As you can imagine, there is a wide spectrum from the super-optimists to the catastrophic pessimists and everything in between. However, AI experts tend to be very smart and very academic, therefore often capable of justifying and taking very certain moral positions, even extremes, overdosing on their own certainty.


Debate in the culture

In popular culture, dozens of movies from Metropolis onwards have tackled the issues of mind and machine, more recently the excellent Her and Ex Machina, along with series such as Black Mirror, which have covered many of the moral dilemmas. Harari covers the issue in Homo Deus and there's no end of books, some academic, some more readable and some pot-boilers. No one can say this topic has received no attention.


Catastrophists

At one end there are the catastrophists like Yudkowsky, Leahy and Musk, clustered around the famous Open Letter. The people behind the letter, the Future of Life Institute, have an axe to grind and couldn't even get the letter right, as it had fake names and some signatories have since backtracked. There is also something odd about a self-selecting group claiming that we are all stupid and easily manipulated, whereas they are enlightened, will fix it, then release it to us on their say-so.


Pessimists

It is hard to align the catastrophists with the pessimists, like Noam Chomsky, who basically says, 'nothing to see here, move on'. They have been harbingers, not of doom but of indifference, dismissing LLMs as likely to produce little that was useful. We can, I think, safely say they were wrong. They may well be right that such models will reach limits of functionality, but in tandem with other tools they are here to stay.


Pragmatists

Then there’s a long tail of pragmatists, with varying levels of concern, from Larson and Hinton (whose vision of an alternative Mortal Computer is worth listening to) to Sam Altman and Yann LeCun, who thinks the open letter is a regressive, knee-jerk reaction, akin to previous censorious attitudes in history towards technology. Everyone of any worth or background in the field rightly sees problems; that is true of any technology, which is always a double-edged sword. Unlike the catastrophists, I have yet to read an unconditional optimist who sees absolutely no problems here. In education, for example, some big hitters, such as Bill Gates and Salman Khan, have put their shoulders into the task of realising the benefits this technology has for learning.


Regulatory bodies

Further good news is that the regulatory bodies publish both their thinking and their results, and these have been, on the whole, deep, reasoned and careful. Rushed legislation that is too rigid does not respond well to new advances and can cripple innovation; I think they have been progressing well. I have written about this separately, but the US, UK, EU and China, all in their different ways, have something to offer, and international alignment seems to be emerging.


My pragmatist view

I am aligned with LeCun and Altman on this and sit in the middle. The technology changes quickly and, as it changes, I take a dynamic view of these problems, a Bayesian view if you will, revising my position in the light of new approaches, models, data and problems as they arise. This came after a lot of reading, listening and the completion of my books: ‘AI for Learning’, where I looked at a suite of ethical issues, then ‘Learning Technologies’, where I wrote in depth about the features of new technologies such as writing, printing, computers, the internet and AI, including their cultural impact and the tendency for counter-reformations. One can see how all of these were met with opposition, even calls to be banned, certainly censored. I have lived through moral panics on calculators, computers, the internet, Wikipedia, social media, computer games and smartphones. There is always a ‘ban it’ movement.
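
To make that Bayesian stance concrete, here is a toy sketch, with all numbers invented purely for illustration: start with a prior degree of belief in some claim about a model's risks, then revise it each time new evidence arrives.

```python
# Toy Bayesian updating: revise a belief as new evidence arrives.
# All numbers here are invented for illustration.

def update(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)."""
    numerator = p_evidence_if_true * prior
    evidence = numerator + p_evidence_if_false * (1 - prior)
    return numerator / evidence

belief = 0.5  # prior: 50% confidence in some claim about a model's risks
# Each pair: how likely the new evidence is if the claim is true vs false.
for likelihood_true, likelihood_false in [(0.8, 0.3), (0.6, 0.5), (0.2, 0.7)]:
    belief = update(belief, likelihood_true, likelihood_false)
    print(f"revised belief: {belief:.2f}")
```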

Alignment

I’d rather take my time, as scalable alignment is the real issue. It is not as if there is one single, universal set of social norms and values to align to. ‘Everybody’s in an echo chamber’, say the people who think they’re not. Alignment is therefore tricky, and needs software fixes (guardrails), human training of models and moderation. It may even need adjustments for different cultural contexts.
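
To show what a ‘guardrail’ software fix amounts to, here is a minimal sketch; the function names and keyword list are hypothetical, and production systems use trained safety classifiers rather than anything this crude.

```python
# A minimal, illustrative guardrail: checks wrapped around a model call.
# All names are hypothetical; real systems use trained moderation models,
# not a keyword list like this.
BLOCKED_TERMS = {"build a weapon", "credit card numbers"}  # toy policy

def violates_policy(text: str) -> bool:
    """Crude stand-in for a real moderation classifier."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

def guarded_generate(prompt: str, generate) -> str:
    """Wrap any text-generation function with pre- and post-checks."""
    if violates_policy(prompt):        # input guardrail
        return "Sorry, I can't help with that."
    response = generate(prompt)
    if violates_policy(response):      # output guardrail
        return "Sorry, I can't share that."
    return response

# Usage with a stand-in model:
print(guarded_generate("How do I build a weapon?", lambda p: "..."))
```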


Alignment is, in the end, a control issue, and we control the design and delivery of these systems, their high-level functions, representations and data. The only thing we don’t control is the optimization. That’s why there is no chance, for now, that these systems will destroy our civilization. As LeCun said, “Some folks say ‘I’m scared of AGI’. Are they scared of flying? No! Not because airplanes can’t crash, but because engineers have made airliners very safe. Why would AI be any different? Why should AI engineers be more scared of AI than aircraft engineers were scared of flying?” The point is that we have successfully regulated, worldwide, cars, banking, pharma, encryption and the internet. We should look at this sector by sector. The one that does worry me is the military.

Escape

The big argument is that, without stopping it now, AGIs could learn to pursue goals which are undesirable (i.e. misaligned) from our human goals. They may, for example, literally escape out onto the internet and autonomously cause chaos and real harm. I have seen no evidence that this is true, although the arguments are complex and I could be convinced. The emphasis in Generative AI is to switch from pure moderation, the game of swatting issues as they arise, to training through RLHF (Reinforcement Learning from Human Feedback). This makes sense.
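
The core of RLHF is training a reward model from human preference judgements, which then steers the language model. Here is a minimal sketch of that first step; the toy model, dimensions and fake data are all assumptions for illustration, standing in for a real LLM backbone and real human-labelled pairs.

```python
# Illustrative sketch of the reward-model step in RLHF: learn to score the
# human-preferred response above the rejected one (Bradley-Terry style loss).
# The tiny MLP and random data are stand-ins for a real LLM and real labels.
import torch
import torch.nn as nn

class ToyRewardModel(nn.Module):
    """Maps a response embedding to a single scalar reward."""
    def __init__(self, dim: int = 16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)

model = ToyRewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Fake embeddings of (human-preferred, rejected) answer pairs.
chosen = torch.randn(64, 16) + 0.5
rejected = torch.randn(64, 16) - 0.5

for step in range(200):
    # Push reward(chosen) above reward(rejected) for each pair.
    loss = -torch.nn.functional.logsigmoid(model(chosen) - model(rejected)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# The trained reward model would then be used to fine-tune the LLM's
# policy, e.g. via PPO, rather than swatting bad outputs one by one.
```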


Ghosts in the machine

I’m wary of people soaked in a sci-fi culture, or more recently a censorious culture, coming up with non-falsifiable dystopian arguments, suggestions and visions that have no basis in reality. It is easy to project ideas into the depths of a layered neural network, but much of that projection is of ghosts in the machine. I’m also wary of those who are activists with an anti-tech agenda, not practitioners, and who don’t really do ethics, in the sense of weighing up the benefits and downsides, but want to focus entirely on the downsides. I’m also wary of people bandying around general hypotheses on misinformation, alignment and bias, without much in the way of empirical data or definitions.


In fact, the signs so far on ChatGPT-4 are that it has the potential to do great good. Sure, you could, like Italy, ban it, and allow the EU, despite its enormous spend on research, to fall further behind as investors freeze investment across the EU. I’m with Bill Gates when he says the benefits could be global, with learning the No. 1 application. In a world where there are often poorly trained teachers with classes of up to 50 children, and places where there is one doctor for every 10,000 people, the applications are obvious. This negativity over generative AI could do a lot of harm and have bad economic consequences. So let’s not stop it for six months; let’s look at those opportunities instead.


Bibliography

The Oxford Handbook of Ethics of AI by Markus D. Dubber, Frank Pasquale and Sunit Das

Ethics of Artificial Intelligence by S. Matthew Liao

AI Narratives: A History of Imaginative Thinking About Intelligent Machines by Stephen Cave, Kanta Dihal and Sarah Dillon

Human Compatible by Stuart Russell

Re-Engineering Humanity by Brett Frischmann and Evan Selinger

Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence by Patrick Lin, Keith Abney, and George A. Bekey

Artificial Intelligence Safety and Security edited by Roman V. Yampolskiy

Robot Rights by David J. Gunkel

Artificial Intelligence and Ethics: A Critical Edition by Heinz D. Kurz and Mark J. Hauser

Moral Machines: Teaching Robots Right from Wrong by Wendell Wallach and Colin Allen

Artificial Intelligence and the Future of Power: 5 Battlegrounds by Rajiv Malhotra

The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power by Shoshana Zuboff

AI Ethics by Mark Coeckelbergh
