Leslie Willcocks, London School of Economics
Ah, metaphors! Don’t you just love ’em? They do such great work packaging up all that meaning and knowledge. Some say even the brain itself operates metaphorically. But alas, metaphors are also terrible traps for the unwary, inventors and recipients alike. I am going to argue that being misled by metaphors is no idle frolic; it has serious consequences. Enter artificial intelligence (the brain metaphor) and robots.
With ‘AI’ (‘Artificial Intelligence’, and please note the inverted commas) I often feel dizzy enough to channel Alice in Through the Looking-Glass: “When I use a word,” Humpty Dumpty said in rather a scornful tone, “it means just what I choose it to mean—neither more nor less.” Yes, indeed. So first there is that word ‘intelligence’. Here is metaphorical thinking at work. Very intelligent people are great at chess and Go. Computers are now great at both. Therefore computers are intelligent. Robots look like people; therefore they have human attributes, including intelligence. Like humans, my computer learns; it applies algorithms through neural networks to produce recommended actions; therefore we can say it is intelligent. Loose, misleading, but too often how we think.
So to ‘AI’. Margaret Boden, a lifelong expert, sees ‘AI’ as seeking to make computers do the sorts of things that minds can do. She, like most experts, says that we are many decades away (if ever) from ‘strong AI’—where programs really do have understanding (consciousness and so on) in the way that people do. Not to be deterred, and wanting to keep the intelligence metaphor, let’s all retreat to ‘weak AI’: that is, building programs that demonstrate a capability that looks human, but without any claim that the programs actually possess human attributes. In practice, what we have at the moment, in most of our businesses, is at best ‘weak weak AI’—algorithms driven by massive computing power. What I like to call ‘statistics on steroids’! But this does not deter vendors, media, consultants and technologists from using the umbrella term of ‘AI’ for a battery of technologies that impossibly stretch the metaphor. The harsh truth is: if it’s artificial, it’s not intelligent; if it’s intelligent, it’s not artificial.
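To make ‘statistics on steroids’ concrete, here is a minimal sketch in Python of what much that is marketed as ‘AI’ reduces to: fitting a statistical curve to examples and extrapolating. The study-hours data and the task are invented purely for illustration; real systems differ mainly in scale, not in kind.

```python
from math import exp

# Invented toy data: hours of study -> passed the exam (1) or failed (0).
data = [(1.0, 0), (2.0, 0), (3.0, 0), (4.0, 1), (5.0, 1), (6.0, 1)]

def sigmoid(z):
    return 1.0 / (1.0 + exp(-z))

# Logistic regression fitted by gradient descent: pure curve-fitting,
# the statistical core of much that is sold as 'AI'.
w, b, lr = 0.0, 0.0, 0.1
for _ in range(5000):
    for x, y in data:
        p = sigmoid(w * x + b)   # the model's current 'belief'
        w -= lr * (p - y) * x    # nudge the weight to reduce error
        b -= lr * (p - y)        # nudge the bias to reduce error

# The 'prediction' is statistical extrapolation, not understanding.
print(f"P(pass | 3.5 hours of study) = {sigmoid(3.5 * w + b):.2f}")
```

Nothing in that loop knows what an exam is; it adjusts two numbers until a curve fits. Scale it up a billionfold and you have impressive machinery, but the same kind of machinery.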
The misleading metaphor in all this is that the brain resembles a computer, and a computer the brain. Actually, historically, we have always made a (faulty) two-way comparison of the brain to the technology of the time. Descartes was impressed by hydraulic figures in the royal garden and offered a hydraulic theory of the action of the brain. In the 19th century the brain/nervous system and telegraph systems were metaphorically aligned. There have been telephone theories, electric field theories and now computational and information-processing metaphors. The whole language of computing and AI is suffused with the fundamental misunderstanding that the brain is some kind of computer, and that machines have progressively human qualities. Machines are said to remember; understand; possess intelligence; make sense of data; know; even—most recently—empathise and create; and eventually exhibit consciousness. But none of these things are true today. This misdirecting, metaphorical language perhaps reflects the fairy tale that we want to believe: that they really are like us. But do the cognitive automation technologies we are now using—machine learning; neural networks; deep learning; algorithms; and all that, backed by massive computing power and memory—really have neural networks like human brains do, or is the terminology just wish fulfilment? Ah, where is Wittgenstein when you need him, to show the fly (us) the way out of the fly bottle? How bewitched by language we have become.
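One answer to that question is to look at how humble the ‘neuron’ in an artificial neural network actually is. A biological neuron is a living cell with rich electrochemical dynamics; its artificial namesake is a weighted sum pushed through a squashing function. A minimal sketch in Python (the numbers are arbitrary, chosen only for illustration):

```python
from math import exp

def artificial_neuron(inputs, weights, bias):
    # A weighted sum of the inputs...
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # ...squashed into the range (0, 1) by a sigmoid 'activation'.
    return 1.0 / (1.0 + exp(-z))

# Arbitrary illustrative values: three inputs, three weights, one bias.
print(artificial_neuron([0.5, 0.2, 0.9], [1.2, -0.7, 0.4], bias=-0.3))
```

A ‘deep’ network is layers of these sums feeding into one another. Whatever its practical power, calling that a brain is the metaphor doing the talking.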
‘Robot’ is a lesser word crime, but certainly on the crime scene. Yes, there are physical robots that can look like humans. So what? And where no physical robots appear (e.g. in robotic process automation (RPA) and cognitive automation deployments), it’s just software. ‘RPA’ itself was a phrase coined for marketing purposes back in 2012. It has been very successful in attracting attention—less so in clarifying exactly what was being sold!
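To see how un-robotic a software ‘robot’ is, here is a deliberately simplified sketch in Python of what a typical RPA deployment amounts to: a script shuttling structured data from one system to another. The invoice records and the entry step are invented for illustration; in a real deployment the script would drive a user interface or call an API.

```python
# Invented records standing in for data read from a source system.
invoices = [
    {"id": "INV-001", "amount": 120.50},
    {"id": "INV-002", "amount": 89.99},
]

def enter_into_accounts_system(record):
    # Placeholder for the real work: filling a form or calling an API.
    print(f"Entered {record['id']} for {record['amount']:.2f}")

# The whole 'digital workforce': a loop over records.
for record in invoices:
    enter_into_accounts_system(record)
```

Useful, certainly, and often valuable; but ‘robot’ here is marketing, not description.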
But the word ‘robot’ has deep metaphorical, historical and psychological resonances. Robots are found in Greek, Indian, Chinese and Persian myths, and throughout history. In Greek myth, for example, the gods’ blacksmith, Hephaestus, made the first mortal woman, Pandora, bearer of the evils for mankind in her notorious ‘box’. Robot myths are a repository for our anxieties and hopes when it comes to our self-created machines. As the technology becomes more virtual, opaque and less visible, so humans feel the need to make sense of the machines by rendering them in physical form. This appears to be a deep-seated human psychological need, not easily circumvented or substituted. But how misleading is it to think in that way?
Unfortunately, this metaphorising is not harmless. Much of it has serious consequences. We are in the hands of a technology industry and marketing departments that love moving the vocabulary on to the next big, new thing. The fact is that the whole area of automation has become a veritable ‘Tower of Babel’, proliferating terms such as ‘digital workforce’; ‘intelligent automation’; ‘algorithmic certainty’; the catch-all term ‘AI’; and many, many more to come. Basically, we are being seriously oversold to. Buying into technology as the solution, we inflate expectations, too often resulting in misapplications and disappointment. Conversely, many become cynical at the hype, and switch off from the real value that can be gained. Furthermore, for individuals, the psychological anxiety generated by the media focus on ‘AI’ can be disproportionate to the actual risks. There are, indeed, lots of bigger things to worry about.
Worse still, the misleading metaphor of ‘AI’s’ progressive, human-like qualities drives pessimistic anxieties or optimistic hopes of an ‘AI’ tsunami quickly and massively overwhelming us. This is not going to happen. But the ‘AI’ delusion fuels beliefs that technology will solve all our problems, or, conversely, that a massive job cull will occur as AI replaces humans, rendering human life pointless. The truth is more complex and nuanced. For example, our research suggests that over the next ten years net job loss will be low, but that there will be considerable, disruptive shifts in skills. But misdirected emotional energy can push us into the wrong actions and policies at societal, organisational and individual levels.
The massive hype around what is now called ‘artificial intelligence’ needs to be confronted and called out. Words and what they represent cannot be totally dissociated, and their relationships rendered fluid and moveable, in the way that the hype merchants and everyday usage would wish them to be. There are fundamental flaws in thinking going on here. Perhaps we need to recruit a better metaphor. Igor Aleksander, a world expert in ‘AI’, suggested that ‘smart computing’ was a better, if more humble, description of what he had been practising over the decades. Perhaps we will get a better one from biotechnology? Whatever it is, metaphor provides a fundamental way in which we think and make sense of the world. But it remains vitally important to identify the limits of every metaphor that we live by.
Note: This is a further-developed version of an article published in ‘Forbes Online’ in April 2020.
Leslie Willcocks
is professor in the department of management at LSE, and co-author of four recent books on service automation. His latest book, ‘Becoming Strategic With Robotic Process Automation’, is available to purchase online at www.sbpublishing.org.