Sunday, April 23, 2023

11 words in AI that I wish were used with more care...

I’m normally sanguine about the use of language in technology and avoid the definition game. Dictionary definitions are derived from actual use, not the other way around - meaning is use. The same goes for those who want to split hairs about pedagogy, didactics, andragogy, heutagogy… whatever. I prefer practical pedagogy to pedantry.

Another is the age-old dispute about whether ‘learning’ can be used as both a verb and a noun, some claiming the noun ‘learning’ is nonsense. Alexander Pope used it as a noun in 1709:


'A little learning is a dangerous thing'

The next two lines are instructive:

'Drink deep, or taste not the Pierian Spring
There shallow draughts intoxicate the brain'


In the same work he also wrote 'To err is human; to forgive, divine' and 'Fools rush in where angels fear to tread.' Pretty impressive for a 23-year-old! It is a wise warning to those who want to bandy about loose language in AI.

 

In the discourse around AI, several terms make me uneasy, mainly because other forces are at work, such as confirmation bias and anthropomorphism. We tend to treat tech as if it were animate, like us. Reeves & Nass did 35 brilliant studies on this, published in their wonderful book ‘The Media Equation’, well worth a read. But this also leads to all sorts of problems, as we tend to make abrupt contextual shifts, for example between different meanings of words like hallucination, bias and learning.

 

Another form of anthropomorphism is to bring utopian expectations to the table, such as the idea that software is or should be a perfect single source of truth. There can be no such thing if it includes different opinions and perspectives. Epistemology, the philosophical study of knowledge, quickly disabuses you of this idea, as do theories of ‘truth’. Take science, where findings are not ‘true’ in any absolute sense but provide the best theories for the given evidence at that time. In other words, avoid absolutes.

 

These category errors, the attribution of a concept from one context to something very different, are common in AI. First, the technology is often difficult to understand – how it is trained, the nature of knowledge propagation from Large Language Models (LLMs), what is known about how they work, the degree to which emergent qualities, such as reasoned behaviour, arise from such models and so on. But that should make us more, not less, wary of anthropomorphism.

 

Artificial

'Artificial' Intelligence has a pejorative ring, like 'artificial grass' or 'artificial sweeteners' - crude, man-made and inferior, a poor imitation. It qualifies the whole field as something less worthy, a bit fake, hence the clichéd jokes - 'artificial stupidity' and so on. Of course, it works on the tension between artificial and real, cleaving the two worlds apart.


Intelligence

John McCarthy, who coined the phrase ‘Artificial Intelligence’, came to regret it. Taking a word used largely in a human or animal context is an odd benchmark, given that humans have limited working memories, are forgetful, inattentive, full of biases, sleep eight hours a day, can’t network and die. We carry evolutionary baggage: genetics, instincts, emotions and limited cognitive abilities. In other words, human ‘intelligence’ is rather limited.


Generative

LLMs are certainly good at generating text and images, and will be at audio and video, but characterising them as 'generators' is, I think, a mistake. They are much more than this: reviewing, spotting errors, evaluating and judging things. Their summarising and other functions are astounding.


Ethics

The one area around AI that produces most ‘noise’ is a fundamental lack of understanding about what ‘ethics’ is as an activity. Ethics is not activism; activism is a post-ethical activity that comes from those who have reached a fixed and certain ethical conclusion. It tends to be shrill and accusatory, and anyone who blunts their certainty becomes their enemy. This faux ethics replaces actual ethics: the deep consideration of what is right and wrong, upsides and downsides, known and unknown, the domain of real moral philosophy. One could do worse than start with Aristotle, who recommended moderation on such issues; Hume, who understood the difficulty of deriving an ‘ought’ from an ‘is’; or even Mill, whose Utilitarian views can be a useful lens through which to identify whether there is a net benefit or deficit in a technology. It is not as if ethics is some kind of new kid on the block.

 

Hallucinations

Large Language Models tend to ‘hallucinate’. This is an odd semantic shift. Humans hallucinate, usually when they’re seriously ill, mentally ill or on psychedelic drugs, so the word brings with it a lot of pejorative baggage. Large Language Models do NOT hallucinate in the sense of imagining something in consciousness. They are competent without comprehension. They optimise, calculate weightings and probabilities, perform all sorts of mathematical wonders and generate data – they are NOT conscious, therefore cannot hallucinate. The word exaggerates the problem, suggesting that the technology is dysfunctional. In a sense it is a feature, not a bug, one that can be altered and improved, and that is exactly what is happening. ChatGPT produces poems and stories – are those hallucinations because they are fiction?

 

Bias

Another word that seems loaded with bias is ‘bias’! Meaning is use, and the meaning of bias has largely been about human bias, whether innate or social. Daniel Kahneman won a Nobel Prize for uncovering many of these. This makes the word difficult to apply to technology; it needs to be used carefully. There is statistical bias - indeed, statistics is in large part the science of identifying and dealing with bias, a field that aims to provide objective and accurate information about a population or phenomenon based on data. Bias can occur in software and statistical analysis, leading to incorrect conclusions or misleading results. There is also a mathematical definition of bias: the difference between an estimator's expected value and the true value of the parameter being estimated. Statisticians have developed a ton of methods to deal with bias - random sampling, stratified sampling, adjusting for bias, blind and double-blind studies and so on - which help ensure that results are as objective and accurate as possible. As AI uses mathematical and statistical techniques, it works hard to eliminate, or at least identify, bias. We need to be careful not to turn the ethical discussion of bias into a simple expression of bias. Above all, it needs objectivity and careful consideration.
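To make that mathematical sense concrete, here is a minimal sketch in Python (assuming only NumPy; the numbers are illustrative, not from any real dataset) showing how the naive variance formula is a biased estimator, while Bessel's correction removes the bias:

    import numpy as np

    rng = np.random.default_rng(0)
    true_var = 4.0           # known population variance
    n, trials = 10, 100_000  # small samples, many repeated experiments

    biased, corrected = [], []
    for _ in range(trials):
        sample = rng.normal(0.0, np.sqrt(true_var), size=n)
        biased.append(np.var(sample))             # divides by n: biased
        corrected.append(np.var(sample, ddof=1))  # divides by n-1: unbiased

    # Bias = E[estimator] - true parameter value
    print(f"naive estimator bias:     {np.mean(biased) - true_var:+.3f}")     # about -0.4
    print(f"corrected estimator bias: {np.mean(corrected) - true_var:+.3f}")  # about 0

The naive estimator comes out around -0.4 (its theoretical bias is -σ²/n = -4/10): a systematic, measurable deviation that can be quantified and corrected, which is exactly the sense in which statisticians use the word - not a moral failing.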

 

Reason

Another term that worries me is ‘reason’. We say that AI cannot reason. This is not true. Some forms of AI specialise in this skill, as they employ formal logic in the tool itself; others, like ChatGPT, show emergent reasoning. We humans are actually rather bad at logical reasoning; we second-guess and bring all sorts of cognitive biases to the table almost every time we think. It is right to question the reasoning ability of AI, but this is a complex issue, and if you want it to behave like humans, reason cannot be your benchmark.
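As a minimal sketch of what 'formal logic in the tool itself' can mean (the facts and rules here are hypothetical toy examples, in plain Python), symbolic AI systems derive new facts by mechanically applying inference rules - forward chaining - until nothing new follows:

    # Forward-chaining inference: apply rules until no new facts emerge.
    # Facts and rules are toy examples, not from any real system.
    facts = {"socrates is a man"}
    rules = [
        ({"socrates is a man"}, "socrates is mortal"),
        ({"socrates is mortal"}, "socrates will die"),
    ]

    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # A rule fires when all its premises are already known facts
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(facts)  # all three facts, derived step by step

Unlike human reasoning, this process never second-guesses itself and carries no cognitive biases - which is precisely why human reason is an odd benchmark.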


Learning

This word, in common parlance, referred to something unique to humans and many animals. When we talk about learning in AI, it has many different meanings, all involving very technical methods. We should not confuse the two, yet the basic idea that a mind or a machine is improved by the learning process is right. What is not right is the straight comparison. Both may have neural networks, but the similarity is more metaphorical than real. AI has gained a lot from looking at the brain as a network, and I have traced this in detail from Hebb onwards through McCulloch & Pitts, Rosenblatt, Rumelhart and Hinton. Yet the differences are huge – in the types of learning, the inputs, the nature of memory and so on.
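To see how thin the metaphor is, here is a minimal sketch (plain Python, toy numbers) of Hebb's rule - 'cells that fire together wire together' - where machine 'learning' reduces to arithmetic adjustment of a weight:

    # Hebbian learning: strengthen a connection whenever input and
    # output are active together. All values are illustrative.
    def hebbian_update(w, x, y, lr=0.1):
        """w: connection weight, x: input activity, y: output activity."""
        return w + lr * x * y

    w = 0.0
    for x, y in [(1, 1), (1, 1), (0, 1), (1, 0)]:  # toy activity pairs
        w = hebbian_update(w, x, y)

    print(w)  # 0.2 - the weight grew only when x and y fired together

Calling this 'learning' is fair shorthand, but it is a world away from what a child does.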

 

Alignment

This word worries me as many assume there is such a thing as a single set of human values to which we can align AI systems. That is not true. The value and moral domain is rife with differences and disputes. The danger is in thinking that the US, EU or some body like UNESCO knows what this magical set of human values is – they don’t. This can quickly turn into the imposition of one set of values on others. Alignment may be a conceit - 'be like us' they implore. No - be better.


Stochastic Parrot

Made popular in a 2021 paper by Bender and colleagues, the phrase is 'parroted' by people who are often a bit vague about what that paper actually claimed. That LLMs are probabilistic (stochastic) is correct, but that is only part of the story, a story which ignores the many other ingredients in the cake, such as Reinforcement Learning from Human Feedback (RLHF), where humans train the model to optimise identified policies. These models are trained with both unsupervised and supervised ('reward') methods. It is simply incorrect to say that they merely spit out probabilistic words or tokens from an unsupervised trained model. The word 'parrot' is also misleading, as it suggests direct copying or mimicry, which is exactly what LLMs do not do: they create freshly minted words and text.
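For what the 'stochastic' part actually amounts to, here is a minimal sketch (hypothetical tokens and scores, plain Python) of temperature sampling over a next-token distribution - the single step the slogan reduces the whole system to:

    import math, random

    # Hypothetical scores ('logits') a model might assign to candidate
    # next tokens - these numbers are invented for illustration.
    logits = {"a": 2.0, "dangerous": 1.2, "thing": 0.3, "parrot": -1.0}

    def sample_token(logits, temperature=1.0):
        """Softmax the logits, then draw one token at random - the stochastic step."""
        scaled = {t: v / temperature for t, v in logits.items()}
        total = sum(math.exp(v) for v in scaled.values())
        probs = {t: math.exp(v) / total for t, v in scaled.items()}
        return random.choices(list(probs), weights=list(probs.values()))[0]

    print([sample_token(logits, temperature=0.8) for _ in range(5)])

The sampling step is real, but it is the last stage of a pipeline shaped by pre-training, supervised fine-tuning and RLHF - the part of the story the slogan leaves out.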

 

Conclusion

These are just a few examples of the pitfalls of sloganeering and throwing words around as if we were certain of their meaning, when what we are actually doing is sleight of hand: using a word in one context and taking that meaning into another. We are in danger of parroting terms without reflecting on their actual use and meaning. Conceptual clarification is a good thing in this domain.
