Monday, August 26, 2019

Dennett - why we need polymaths in the AI ethics debate

Daniel Dennett is a renowned philosopher, but also a polymath, with a deep understanding of psychology, science and maths. His book From Bacteria to Bach and Back is essential reading for anyone who wants a synthesis of human evolution and AI, and its consequences for the ethics of AI. Dennett has the depth of background in philosophy to understand the exaggerations and limits of AI and their consequences for ethics. He also grounds his views in biology, with humans as the benchmark and progenitor of AI.

Competence without comprehension
He takes a holistic view of AI. Just as the Darwinian evolution of life over nearly 4 billion years has been ‘competence without comprehension’, the result of blind design, what Dawkins called the ‘blind watchmaker’, so cultural evolution and AI are often competence without comprehension. We have all sorts of memes in our brains, but it is not clear that we know why they are there. Similarly with AI: Watson may have won Jeopardy!, but it didn’t know it had won. This basic concept of competence without comprehension has to be understood, as there is far too much exaggeration and anthropomorphism in the subject, especially in the ethics debate. His view, which I agree with, is that AI is not as good as you think it is and not as bad as you fear.

Bayesian brain
His vision, which has gained some traction in cognitive science, is that the brain is a prediction machine, using Bayesian hierarchical coding (Hinton 2007; Clark 2013; Hohwy 2013) to constantly model forward. Interestingly, he sees this as the cause of dreams and hallucinations – random and arbitrary attempts at Bayesian prediction. This is an interesting species of the computational model of the brain, and it explains why the brain has been such a productive, intuitive source of inspiration for AI, especially neural networks.
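To make the idea concrete, here is a minimal sketch of my own, not from Dennett or the cited papers: a single-level Gaussian belief that is repeatedly updated by precision-weighted prediction error, the basic move in Bayesian predictive coding. All the numbers and variable names are illustrative assumptions.

```python
# Minimal sketch (my illustration, not Dennett's): a one-level
# "prediction machine" that predicts a hidden quantity, observes a
# noisy sample, and updates its belief by precision-weighted
# Bayesian (Gaussian) inference.

import random

mu, var = 0.0, 10.0      # prior belief: mean and variance (uncertainty)
obs_var = 2.0            # assumed variance (noise) of the senses
true_value = 3.5         # hidden state of the world

for step in range(10):
    observation = random.gauss(true_value, obs_var ** 0.5)
    prediction_error = observation - mu
    # Precision weighting: trust the observation in proportion to how
    # uncertain the prior is relative to the sensory noise.
    gain = var / (var + obs_var)
    mu = mu + gain * prediction_error
    var = (1 - gain) * var          # posterior becomes the next prior
    print(f"step {step}: belief={mu:.2f} "
          f"(uncertainty={var:.2f}), error={prediction_error:+.2f}")
```

On this picture, dreams and hallucinations are what you get when the same update loop runs without reliable sensory input: predictions are generated, but nothing disciplines them.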

Cultural evolution
He then examines cultural evolution as the invasion, or infection, of the brain by memes, primarily words, which operate in a sort of Bayesian marketplace, without a single soul or executive function. These informational memes, like the products of Darwinian evolution, also show competence without comprehension, and fitness in the sense of being advantageous to themselves. That brings us back to the ethical considerations around AI. He also surfaces the contribution of James Mark Baldwin, an evolutionary theorist who saw 'learning' as an evolutionary accelerator.

AI
As he rightly says, we make children without actually understanding the entirety of the process; so it is with generative technology. Almost all AI is parasitic on human achievements: corpora of text, images, music, maths and so on. He is rightly sceptical about Strong AI, master algorithms and super-intelligent agents.
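A toy example of my own, not from the book, makes both points at once: a first-order Markov chain produces language-like output with zero comprehension, and it is entirely parasitic on its human corpus. The corpus here is an invented placeholder.

```python
# Toy illustration (mine, not Dennett's): a word-level Markov chain
# "generates" text competently enough to look language-like, yet
# comprehends nothing -- and depends wholly on its human-written corpus.

import random
from collections import defaultdict

corpus = ("the brain is a prediction machine and the machine predicts "
          "the next word without knowing what a word is").split()

# Build a table mapping each word to the words observed to follow it.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

word = random.choice(corpus)
output = [word]
for _ in range(12):
    candidates = follows[word]
    word = random.choice(candidates) if candidates else random.choice(corpus)
    output.append(word)

print(" ".join(output))  # fluent-ish, but meaningless to the generator
```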

We already trust systems that are way more competent than we humans are, and so we should. His call is for us to keep an eye on the boundary between mind and machine, as we have a tendency to over-estimate the 'comprehension' of machines, way beyond their actual competence, investing them with comprehension they do not have. We see this in even the most casual encounters with chatbots and devices such as Alexa or Google Home. We all too readily read intentionality, comprehension, even consciousness into technology when it is completely absent.

AI ethics
By adopting regulatory rules against false anthropomorphic claims, especially in advertising and marketing, we can steer ourselves through the ethical concerns around AI. Overreach, concealed anthropomorphism and false claims should be illegal, just as exaggerated claims and undisclosed side effects are regulated in the pharmaceutical industry. Tests, such as variations of Turing's test, can be used to probe a system's upper limits and actual competence. He is no fan of the demand for full transparency, which, he thinks, and I agree, is utopian. Many use Google Scholar, Google and other tools without knowing how they work. Competence without comprehension is not unusual.

Learning
His hope is that machines will open up “notorious pedagogical bottlenecks”, even act as “imagination prostheses”, working with and for us to solve big problems. We must recognise that the future is only partly, yet largely, in our control. Let our artificial intelligences depend on us, even “as we become more warily dependent on them”.
