Daniel Dennett is a philosopher, but also a polymath with a deep understanding of science and maths. His book, From Bacteria to Bach and Back, is essential reading for those who want a synthesis of human evolution and AI, and its consequences for the ethics of AI.
Holistic view of AI
He takes a holistic view of AI. Just as the Darwinian evolution of life over nearly 4 billion years has been ‘competence without comprehension’ – the result of blind design, what Dawkins called the ‘blind watchmaker’ – so cultural evolution and AI are often competence without comprehension. We have all sorts of memes in our brains, but it is not clear that we know why they are there. And in AI, Watson may have won Jeopardy!, but it did not know it had won.
His vision, which has gained some traction in cognitive science, is that the brain uses Bayesian hierarchical coding (Hinton 2007; Clark 2013; Hohwy 2013): it is a prediction machine, constantly modelling forward. Interestingly, he sees this as the cause of dreams and hallucinations – random and arbitrary attempts at Bayesian prediction.
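The prediction-machine idea can be sketched in a few lines. This is not Dennett's own formulation, just a minimal single-level illustration of the Bayesian updating the theory appeals to: the model predicts its next input, measures the prediction error, and shifts its belief in proportion to that error. The signal values, prior and noise level are invented for illustration.

```python
# A minimal sketch of Bayesian predictive updating: a Gaussian belief
# about a hidden value is revised by each noisy observation, with the
# size of the revision governed by the prediction error.
# All numbers here are illustrative assumptions, not from the book.

def predictive_coding(observations, prior=0.0, prior_var=1.0, noise_var=0.5):
    """Sequentially update a Gaussian belief given noisy observations."""
    mean, var = prior, prior_var
    errors = []
    for obs in observations:
        prediction = mean                # the 'forward model' guesses first
        error = obs - prediction         # prediction error
        gain = var / (var + noise_var)   # how much to trust the error
        mean = mean + gain * error       # belief moves toward the data
        var = (1 - gain) * var           # certainty grows with evidence
        errors.append(abs(error))
    return mean, errors

mean, errors = predictive_coding([2.1, 1.9, 2.0, 2.2, 1.8])
```

Run on a noisy stream centred on 2.0, the belief converges towards 2.0 and the prediction errors shrink, which is the core of the 'constantly modelling forward' picture.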
He then examines cultural evolution as the invasion, or infection, of the brain by memes, primarily words. These memes operate in a sort of Bayesian marketplace, without a single soul or executive function. Like the products of Darwinian evolution, these informational memes also show competence without comprehension, and fitness in the sense of being advantageous to themselves. That brings us back to the ethical considerations around AI.
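The 'fitness in the sense of being advantageous to themselves' point can be made concrete with a toy simulation. This is my illustration, not Dennett's: memes spread in proportion to their own transmissibility, with no central executive deciding which deserve to win, so the catchiest meme dominates regardless of whether it helps its hosts. The meme names and fitness values are invented.

```python
import random

# A toy 'marketplace of minds': each generation, every mind adopts a
# meme with probability proportional to (current share * fitness).
# No meme comprehends anything; the fittest replicator simply spreads.

def spread(memes, population=1000, generations=20, seed=42):
    """memes: dict of name -> transmissibility. Returns final counts."""
    rng = random.Random(seed)
    counts = {name: population // len(memes) for name in memes}
    for _ in range(generations):
        weights = [counts[n] * memes[n] for n in memes]
        picks = rng.choices(list(memes), weights=weights, k=population)
        counts = {n: picks.count(n) for n in memes}
    return counts

# 'catchy' wins purely because it is easy to copy, not because it is
# true or useful to the minds that carry it.
result = spread({"catchy": 1.5, "useful": 1.2, "dull": 0.8})
```

Even a small fitness edge compounds generation after generation, which is why selection among memes needs no comprehension anywhere in the system.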
As he rightly says, we make children without understanding the entirety of the process; so it is with generative technology. Almost all AI is parasitic on human achievements: corpora of text, music, maths and so on. He is rightly sceptical about Strong AI, master algorithms and super-intelligent agents.
We already trust systems that are far more competent than we are, and so we should. His call is for us to keep an eye on the boundaries between mind and machine, as we tend to over-estimate the comprehension of machines, far beyond their actual competence, investing them with comprehension they do not have. We see this in even the most casual encounters with chatbots and devices such as Alexa or Google Home. We all too readily read intentionality, comprehension, even consciousness into technology when it is completely absent.
By adopting regulatory rules around anthropomorphic false claims, especially in advertising and marketing, we can steer ourselves through the ethical concerns around AI. Overreach, concealed anthropomorphism and false claims should be illegal, just as exaggerated claims and undisclosed side effects are regulated in the pharmaceutical industry. Tests, such as variations of the Turing test, can be used to probe these systems' upper limits.
He is no fan of the demand for full transparency, which he thinks, and I agree, is utopian. Many use Google Scholar, Google and other tools without knowing how they work. Competence without comprehension is not unusual.
His hope is that machines will open up “notorious pedagogical bottlenecks”, even act as “imagination prostheses”, working with and for us to solve big problems. We must recognise that the future is only partly, yet largely, in our control. Let our artificial intelligences depend on us, even “as we become more warily dependent on them”.