A Review of:
Kate Crawford (2021) Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, Yale University Press, New Haven.
In Part 1 of this look at the realities behind ‘AI’, we highlighted two reality checks on the rhetoric of ‘AI’ as promise and panacea—the limitations of the technologies, and the dangers inherent in the use/abuse of statistical techniques. Kate Crawford is after bigger game. Her warnings are complementary, but bring to the fore further, much bigger concerns. The AI field’s original sin may be to compare the human mind with a computer and vice versa; but what if AI and big data are more like the oil and mining industries?
Her book is refreshing for materially rooting AI and digital technologies in and of the planet. The otherworldly, virtual attributes of AI, data, and the internet have physical, social, and human bases that are all too easily ignored, not least because we experience the virtual as different from the physical, as if they were separate worlds. Indeed, I would surmise that, for many of us, the virtual is a pleasant escape from the physical world in all its stickiness. Forget the physical world; let’s leave it behind.
Does this ‘forgetting’ matter? Unfortunately, yes, it does, on many counts. The strength of the book lies in spelling out the material, economic, and social foundations supporting this seemingly weightless, frictionless, ‘free’ world. Despite the celebratory tones in which ‘AI’ and digital technologies are often discussed—data extraction as a benevolent practice, ‘AI’ as necessary saviour, digital as environmentally friendly—there are all too often high costs and damaging consequences. Kate Crawford also addresses the all-important, perennial, human question: ‘cui bono?’—who gains—and, by implication, who does not.
Crawford begins by pointing out two myths: that non-human systems (e.g. computers and AI) are analogies for human minds, and that intelligence exists independently of, and distinct from, social, cultural, historical and political forces. This is familiar ground for readers of my first article. But it is not just that ‘AI’ is not intelligent; it is not artificial either. What we call ‘AI’ is embodied and material, made from natural resources, human labour, fuel, logistics, histories, infrastructures, and classifications. ‘AI’ depends entirely on a much wider set of social and political structures, and as such is fundamentally political, and, in this sense, a registry of power. Given this, Crawford portrays her book as an atlas showing where and how ‘AI’ is an extractive industry, dependent on exploiting energy and mineral resources from the planet, cheap labour, and data at scale.
Earth, and mining for ‘AI’, is our first stop-off point. Silver Peak in Nevada’s Clayton Valley sits on the edge of a massive underground lake of lithium—the ‘grey gold’ of multiple technology industries. Rechargeable lithium-ion batteries are essential for mobile devices, laptops, in-home digital assistants, data centre power backup, the internet, and every platform that runs on it. Crawford uses Clayton Valley as a modern example of the long history of extractive mining that, because it is profitable to some, ignores and does not account for its true costs and environmental damage, invariably suffered by others.
But it is not just lithium brine that is being extracted. The Cloud—the platform for the ‘AI’ industry—is made up of rocks, lithium, and crude oil. Each object in the extended ‘AI’ network, “from network routers, to batteries to data centres, is built using elements that required billions of years to form inside the earth.” Crawford documents in detail the exorbitant costs and enormous wastefulness of these extractive practices and the use behaviours that follow. Lithium sites abound, for example, in southwest Bolivia, central Congo, Mongolia, Indonesia, and Western Australia. Some 23 rare earth minerals, 95 percent of them supplied by China, are also mined; these are vital to multiple industries, including the hi-tech sector. Such extraction is done by keeping the real costs, in terms of conflict and economic and environmental damage, out of sight. For example, the ratio of usable materials to waste toxins is extreme: 0.2 percent versus 99.8 percent. Crawford is essentially attacking the myth of clean tech.
If minerals are the backbone, then electrical energy is the lifeblood of AI, the internet, and computing. The fossil fuel use, carbon footprint, and pollution that come from running our advanced computational infrastructure and applications are immense and rising. Data centres consume water and are amongst the world’s largest consumers of electricity, most of it generated from fossil fuels. The tech sector could contribute 14 percent of global greenhouse gas emissions by 2040. Crawford makes two fundamental points. Firstly, the rapid growth of cloud-based computation and ‘AI’ has been portrayed as environmentally friendly, but it has paradoxically driven an expansion of the frontiers of resource extraction and waste. Secondly, there is powerful resistance against engaging with the materialities and consequences of these technologies.
Labour is our next stop-off. Crawford begins in an obvious place: Amazon’s fulfilment centre in Robbinsville, New Jersey. Labour there is heavily monitored and rendered efficient as the necessary connective tissue in a massive technologically based system. Ironically, she notes, in this kind of operation humans are increasingly being treated as robots. Monitoring, time control, the separation of conceptual work and decision-making from execution—is this the future of work in the context of artificial intelligence? Certainly such a labour model has plenty of pre-history, and it has moved further into the ‘brave new workplace’, as Crawford interestingly documents. In my view it is not the only model; rather, we are seeing a more complex, flexible core-periphery model being applied, enhanced and facilitated by digital technologies. But Crawford is correct to point out the development of what Lilly Irani has called ‘human-fuelled automation’, carried out as often exploitative, mostly hidden ‘ghost work’(1) that fulfils the last mile of automation—those menial tasks that AI and automation cannot do … at least not yet. In practice much of so-called ‘AI’ is highly dependent on such work. Crawford calls it ‘faking AI’; Jeff Bezos refers to it as ‘artificial artificial intelligence’. Astra Taylor uses the very apt phrase ‘fauxtomation’(2).
Next, we come to data: its limitations, and its use as an exploited resource. The location is the US-based National Institute of Standards and Technology (NIST), tasked with setting standards for ‘AI’, including biometric data. Crawford uses the NIST library of biometric data sets—used for training AI algorithms—to forensically establish the limitations of such data and how it exemplifies the beliefs that everything is data and that it is there for the taking—mostly for free, and without consent. Such data is without context, meaning or specificity, and is scraped from multiple sources. Is this really a reliable source of the ‘ground truth’ we want for our machines? What is it, where does it come from, and who gains most from its use? And is there really no data like more data, even if it is so flawed? Crawford points out that what is the case with biometric data is also true for all the forms and sources of data feeding ‘AI’ (e.g., text, image, email, social media). And of course, the scale of the data pool—and the room for error—is now immense. For example, on an average day in 2019, some 350 million photos were uploaded to Facebook, and 500 million tweets were sent.
Data, it is said, has become the new oil, but in this case it is a resource to be captured, collected, classified, and consumed. Massive scale demands efficient ‘data-mining’ techniques for classifying, quantifying and extracting useful information—again note the ‘extraction’ theme. Meanwhile, human subjects inexorably become data subjects, and ethics are held at arm’s length in many ways. Data sets are removed from the context of collection. Training data can be skewed and riddled with errors, yet it is used to train AI predictive systems with real-world impacts, for example in predicting criminal acts. Data extraction increasingly proceeds without major concern for privacy, ethics and safety. And we are seeing the commercialised capture, for restricted use, of what was previously part of the commons.
For Crawford, classifying data—relatively easily done with today’s digital and ‘AI’ technologies—turns out to be a minefield. Too many key questions go unasked in the classification process. For example, what are our prior assumptions? Are there unspoken social and political theories buried therein? How dirty is the data? How do classifications interact with and impact the classified? There are all too many types of bias, affecting gender, race, and age, for example, and too many limits to de-biasing systems. None of this is helped by the fact that classificatory systems contain gaps and contradictions, and reduce context and complexity, all to make the world more computable and calculable. These strictures carry over into the use of ‘AI’ to identify emotions from facial data. The algorithms from Amazon, Microsoft, IBM and the like tend to use classification systems that focus on a small number of distinct emotions (the KDEF system(3), based on Ekman, suggests only six), assumed to be universally applicable across cultures. But Crawford provides much evidence that questions whether these categories are universal, whether a face really reveals what a person is thinking and feeling, and whether this can be detected accurately by machines. These are important reservations, given that such systems are being widely used, for example, in recruitment, training, appraisal, crime detection, and even political attacks.
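To make the last point concrete, here is a minimal sketch, in Python, of how a fixed taxonomy gets baked into an emotion classifier before any face is ever seen. The six labels mirror the KDEF/Ekman scheme described above; the classifier, its raw scores, and the function name are hypothetical illustrations, not any vendor’s actual system.

import numpy as np

# The label set is fixed at design time, before any training data is seen.
EKMAN_LABELS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

def classify_emotion(logits):
    """Turn one face's raw model outputs into a probability per fixed label.

    Whatever the face actually expresses (ambiguity, a blend, a culturally
    specific display), the softmax forces an answer from within the six
    predefined categories.
    """
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    probs = exp / exp.sum()
    return dict(zip(EKMAN_LABELS, probs))

# Hypothetical six-unit output for a single image.
raw_scores = np.array([0.2, -1.1, 0.5, 2.3, -0.4, 0.9])
print(classify_emotion(raw_scores))  # 'happiness' dominates, by construction

The design choice of a six-unit output layer is precisely the kind of unexamined prior assumption Crawford points to: no amount of de-biasing the training data can widen a label set that was closed from the start.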
The final parts of Crawford’s book focus on political and social issues and the intertwined AI industry. One chapter is devoted to the state and the intelligence agencies, and to how these have helped develop and utilise many of the techniques we now refer to as artificial intelligence. Superior technology can help balance military capabilities geo-politically. AI, computational warfare and robots constitute the ‘Third Offset’, following nuclear weapons and then the expansion of covert, logistical and conventional weapons in the 1970s and 1980s. Today, of course, it is only thinly acknowledged that there is a cyber war going on between several states.
Crawford focuses mainly on recent US developments, including the Algorithmic Warfare Cross-Functional Team, or Project Maven, run out of the Department of Defense but bringing into play the commercial tech sector. Indeed, more generally, local and state governments in the USA have outsourced key functions of the state to technology contractors. Palantir is used as just one example of how militarised forms of pattern detection and threat assessment are moving at scale into municipal-level institutions and services. Designed as an intelligence platform for the global war on terror, by 2018, in a shift towards big data surveillance, its various platforms were being used to detect Medicare fraud, in criminal probes, to screen air travellers, and to monitor immigrants. According to Crawford, given the limitations of AI, “inequity is not only deepened but tech washed” in these applications.
More broadly, beyond Palantir, surveillance capacities once overseen by the courts are now on offer in, for example, Apple’s App Store. Rapidly developing social credit systems are founded on such surveillance technologies, yet bring with them all the biases, limitations, breaches of privacy and consequent inequities signalled above and in our previous article. For Crawford, what is being assembled is a massive command-and-control infrastructure built from a combination of extractive data techniques, targeting logics and surveillance. She focuses on the USA, but could usefully direct her attention to several other countries leading the way here.
Crawford’s final chapters bring the threads together and also point toward the future: from the virtual to space. She argues that another story has to be told beyond the AI mystique and the ‘enchanted determinism’ whereby ‘AI’ is seen as beyond the known world, yet deterministic in its ability to discover patterns that can be applied with predictive certainty to everyday life. It is no coincidence that games figure highly in expositions of ‘AI’ prowess; they represent a closed, more certain world where computing and ‘AI’ techniques work most effectively.
The book certainly explores the limitations of ‘AI’. But, more importantly, it portrays the planetary infrastructure of AI as an extractive industry. The positive story needs to be challenged by re-linking ‘AI’ with the interests and forms of power it serves and magnifies. For Crawford these are tools “that see with the logics of capitalism, policing and militarization”. The book highlights the material genesis of ‘AI’, the political economy of its operations, and the discourse that presents ‘AI’ as immaterial and inevitable. In her Coda she finds the logic of space colonisation and frontier mining deeply interconnected with the tech billionaires and the AI industry they preside over. Space exploitation, like AI, is in many ways both an attack on, and a hedge against, Earth itself.
What to make of Atlas of AI? It is a well-informed, well-thought-through and well-argued counter to much of what we hear, read, and even think about when we consider the realm of ‘AI’. Information technology has never been neutral or merely informational. Its ability to extend our reach has made it inextricably political, and about control. I first wrote about this in 1987. Today the technologies are so much more powerful, extend so much further into the world, and are increasingly penetrating and shaping areas and issues—for example climate, war, communication, inequality, economic success—in ways not conceived of even ten years ago. ‘AI’ has massive potential and is in the vanguard of multiple emerging digital technologies. Crawford does not tell us how to manage ‘AI’ and related technologies going forward, but she does expand our understanding of the dangers inherent in their wider, under-theorised, uncritical application. As such, her book ignites a vital debate: do we go on as we are, and if not, what needs to change?
(1) See Gray, M. and Suri, S. (2019) Ghost Work, HMH, Boston.
(2) Taylor, A. (2018) The Automation Charade, Logic Magazine, August 1st.
(3) The Karolinska Directed Emotional Faces (KDEF) is a set of 4,900 pictures of human facial expressions used by researchers. The Averaged KDEF (AKDEF) is a set of averaged pictures created from the original KDEF images.