Tuesday, January 30, 2024

Have we passed 'Peak Ethics' in AI?

We seem to have passed 'Peak Ethics' with AI. Now that little of any moral consequence has happened, we're coming back down to the real world: more focus on real utility, on applications and use in productivity, teaching and learning. It was all a bit dizzy, rarefied and weird up there... 

It’s sometimes harder, as the Scottish poet Norman MacCaig wrote, to come down a mountain than to climb it. Enthusiasm and certainty get you up; one must tread carefully coming back down to the real world.

Remember the letter? No one cares about it now. An Open Letter is an argument from authority, always suspect, and the very opposite of 'open'. It simply says: we're right, you're wrong. It's straight-up bullying.

After that famous letter, demanding a six-month halt, what actually happened? A lot of backtracking and embarrassment by some of the signatories. In all that time GPT-4 has remained king of the hill and the world has seen upsides with no real downsides. Italy banning ChatGPT looks like a childish gesture. Yudkowsky and Tegmark now seem a bit boorish, grifters selling Doom. It all seems so Y2K-ish, more cult-like, end-of-days and millenarian than realistic. All those folk who suddenly had 'AI and Ethics' in their titles seem a bit old-hat, boring and superfluous. Having worked with this technology for many years, I wondered at the time where they all came from, all of those experts in ‘AI’ and ‘Ethics’. I never saw any of them before November 2022. Never saw the projects they were involved in, their actual writing, the books they’d written. It was a pretty lonely world back then.

Suddenly, an army of arrivistes were seen talking earnestly on panels, running workshops and uttering memes such as ‘stochastic parrot’ with absolutely no idea what it meant or where the phrase came from. Heads of this and that, experts all, with zero practitioner experience, in just months!

All we’ve seen, 14 months in, from the research, is evidence of increased productivity, ideation and creativity, and even signs of reasoning and semantic sophistication. On top of that comes amazing multimodal capability: we can create images and video, speak to it and have it speak back, build our own avatars and chatbots, with fewer errors, better performance and massive reductions in price. Not a week passes without something wondrous happening. DeepMind continues to astonish with its Alpha software, and research is getting a boost in terms of planning and execution. In healthcare we've seen significant leaps.

Yann LeCun is in charge of AI at Meta. He knows more than anyone on the planet about moderating content using technology and AI. On Twitter, he made the point that despite GPT-2 and numerous other LLMs having been available in open source for five years now, there has been no flood of the "extremist synthetic propaganda" that "researchers" warned about. He has a point. This tech has been available for years and we have seen a couple of deepfakes, but hardly an overwhelming flood. Why? Partly AI detection. Sure, some gets through and more will, but it has been conspicuous by its absence.

LeCun’s point is that AI not only has the potential to solve some of our most pressing problems, especially those that pose an existential threat, but that it can also police itself. A little confirmation bias, negativity and sci-fi speculation is one thing, but the whole thing got out of hand. I suspect this is because it’s easier to ruminate on ethics, with lots of hand-wringing, than to get to know and use the technology in real projects.

Now that we’ve calmed down, and had time to try things out and see the potential, the world seems like a better place: less angst, less moralising. Having ridden into town on their moral high horses, the moralisers have found that people are not that interested in yet another repetitive report, framework or list of ethical platitudes. They’ve had to tie their high horses up and get inside the tent with the rest of us, using it, doing it. It was always thus. I have a whole presentation on how such moralisers blow their trumpets at the start of every technological advance: the sundial, writing, books, printing, radio, film, television, jazz, rock music, rap, Walkmans, typewriters, photocopiers, computers, the internet, social media, smartphones… and now AI.

Tech Doomerism, I’ve realised, is actually a form of advertising, a species of hype: clickbait examples, binary thinking of good and evil, a liberal dose of anthropomorphising AI, a narrow focus on the edges of the debate and a willingness to treat sci-fi as credible prediction. Elon Musk was the perfect example: he signed the letter demanding a stop to AI, then six months later released Grok-1!
