March 2026
Leslie Willcocks
Professor Emeritus
London School of Economics and Political Science
Wendy Currie
Professor at Audencia Business School, Nantes.
Hype has plagued the subject of technology futures since the birth of commercial computing. Leslie Willcocks and Wendy Currie argue that AI is no exception, that it has more in common with older technologies than most people realise and that overestimating the speed of technological change can lead to fear. On AI, post-truth is doing us few favours.
Artificial intelligence (AI) has become much more than a set of technologies that try to mimic human minds. Today, AI is what Richard Dawkins called a ‘meme’ – a unit that carries themed cultural ideas, symbols or practices. As such, the term carries much freight.
Today, ‘AI’ refers to a host of technologies often not strictly AI. It is frequently used as a marketing badge and applied to (mis)represent wider processes of digital transformation. It has become a repository for culturally widespread emotions and anxieties (hope, fear and outrage) related to technological change. But will it really be, as the meme suggests, stand-alone, ‘unprecedented’, rapidly deployed and highly ‘impactful’ – for good or ill? How much of this is hype, and how much fear?
AI has a history and context
Our work shows that, after four previous computing eras beginning in the 1960s, we are in the midst of a fifth. The ‘convergence/AI-centric era’ began in 2021 and will run until at least 2035. It will see further expansion of social media, mobile, analytics, big data and cloud infrastructure and services. We also expect the combining of these with blockchain, automation of knowledge work (such as robotic process automation, cognitive automation and AI), augmented reality, robotics, the Internet of Things and digital fabrication.
We are also seeing advances in new technologies such as quantum computing, Web 3, 5G and bioengineering. Research into base materials and component technologies continues to expand, enhancing speed, power, memory, calculation and judgment. Clearly, AI will hardly be the stand-alone driver of value. In fact, we posit a ‘law’ of combinatorial innovation: that the increasing integration of digital technologies in new ways will be the key to growing value exponentially.
Since the early 2020s investment has flowed into a heavily hyped new ‘AI revolution’. Call this an ‘AI summer’. But AI is not exactly new. The 1970s and the 1980s-90s saw periods of hype and disappointment in AI, resulting in two AI ‘winters’ when funding, interest and progress were significantly reduced. Will a third cold snap materialise?
Today’s AI technologies have undoubtedly seen advances. But they still depend on older statistical techniques, software and coding. The new potential is impressive and stems from massive data sets, increased processing power and enhanced memory capacity. However, most disinterested specialists suggest that, if we pursue this path, we are unlikely to produce anything unprecedentedly new such as artificial general intelligence (AGI).
Recurring themes in the AI meme include automation anxiety, fear of job loss, a perceived technology-regulation lag, a fear of missing out (known as ‘FOMO’), and a belief in an unprecedented digital future. But these were, in fact, present throughout all four previous computing eras. So why position it as so ‘new’?
To answer that question, we come to the hype that has plagued the subject of technology futures since the birth of commercial computing.
Interrogating AI hype
For over 50 years, new digital technologies have commonly been viewed through a ‘hype-fear’ lens. As AI has become the poster child for all automation, automation anxiety has crystallised into AI anxiety – a container for all digital ‘superagency’, but also for its perils and potential disasters.
A famous framework that supposedly deals with this techno-hype is the Gartner Hype Cycle, first published in 1995. But does it? Gartner suggests a pattern over time of 1) a technology trigger; 2) a peak of inflated expectations; 3) a trough of disillusionment; 4) a slope of enlightenment; and 5) a plateau of productivity. The cycle purports to explain how technologies develop and are perceived over time. It uses arresting language. But how adequate is it as a model and explanation?
Looking across 20 years of technologies, very few move through this cycle. And most key technologies adopted since 2000 were not identified early in their adoption phases. Many trends emerge briefly and quickly fade away, lacking lasting impact. Other technologies simply die. Gartner’s technical insight is often correct, but the implementation does not follow; several core long-term technical problems do not appear in hype cycle analyses. Some technologies keep receding into the future; many make progress when no one is looking.
Analysis by The Economist in 2024 found that the cycle is, in fact, a rarity. Only a fifth of technologies moved through the cycle; many are adopted without transitioning through the steps; a high number of technologies are flashes in the pan. Another study found that Gartner’s expectation of a normal technology passing through the stages within five to eight years did not hold; instead, the average hype cycle duration is 21.76 years.
The analytical capability of the hype cycle model can be easily questioned. In fact, the journey the Gartner hype cycle portrays is heavily flawed to the point of becoming a hyped part of the AI meme itself.
Stronger theory needs to be applied. Everett Rogers’ theory of diffusion of innovation, published in 1962, underlines that AI’s technical prowess is not the only or even leading factor in rate of adoption. There are also perceived attributes of the innovation, communication channels and the nature of the hosting social system. The organisational rate of innovation is shaped by the individual leader’s support for change, the organisation’s characteristics and openness to external systems.
Moreover, the innovation process unfolds over time through five phases: knowledge, persuasion, decision, implementation and confirmation. The hype-fear lens applied to AI tends to overlook major factors that will shape whether and how these technologies are adopted, and with what consequences.
Is AI really ‘unprecedentedly’ impactful?
We also need to confront a well-developed narrative pointing to the power, transformation and perils brought by technology. The storyline is familiar. Media, consultancies and technology vendors – and more recently social media – herald each emerging digital technology as distinctive, unprecedented and a ‘breakthrough’. One can see this with the internet, mobile phones, social media, blockchain, cloud, the metaverse and, from 2023, ChatGPT, agentic AI and GenAI. After years of exposure, this narrative has come to sound much more like marketing than reality. The story may be impactful, but it is hardly unprecedented.
The reality is that there are major implementation challenges that contradict this meme component. We see four domains needing to be navigated: digital hype, digital capability, useful digital and strategic digital. With AI we are certainly in the areas of hype and capability. But few corporates are beyond the pilot stage, let alone have strategic applications of these technologies.
Three critical implementation challenges must be solved before these technologies can deliver useful, let alone strategic, value.
Firstly, what often escapes remark in the AI meme is ‘the shock of the old’, that is, how much stays the same. But with information and digital technologies generally, this is hardly surprising. Organisations have made huge investments in technology infrastructure, skills and applications; these are embedded in multiple processes. These technologies have established dependencies but also drive value. Replacing inherited technical and organisational legacies, or fitting modern technology alongside them, is costly in time and resources, and also challenging. In the face of even more change, organisations may – and do – run out of absorption capacity.
Secondly, more empirically informed views of technology adoption need to become a key AI meme component. Exploiting these technologies in organisational contexts has been consistently sobering. When examining automation across decades, depending on the sector, only around 15 to 23 per cent of corporations optimise business value, and even these typically accrue only 68 per cent of the potential value. A common finding has been that up to 65 per cent of digital transformations are seriously disappointing.
Computer and digital technologies require adjustments and ongoing maintenance throughout their life cycle. Fitting them within an existing technical infrastructure is a challenging yet understated task. And if, as we think, the massive value anticipated from AI will come from technological synergies, this will make the technical route to digital transformation much more challenging, and probably much more delayed.
Thirdly, organisations are very siloed. Culture, processes, skills, data, technologies, managerial mindsets and strategies are difficult to integrate, which makes implementing modern technology through them challenging. As a result, the time horizon for developing a technology for full, optimal enterprise use can be quite long, and its impacts may well be stretched out.
Hope and fear
AI is heavily hyped. Automation anxiety is understandable, but it has, through various processes, been placed on steroids. AI, like previous technologies, is evolving much more slowly than anticipated, not least due to the sizeable adoption challenges it presents to organisations. Our study findings on automation anxiety, AI adoption and digital transformation are consistent with those of other observers who suggest, for example, that, apart from deep fakes, most AI fears are still speculative. Many seem all too familiar, often inherited from past scares – and many seem manageable. The 60 years between 1900 and 1960 saw technological and other innovations across industry, work and society that had much more profound impacts than those of the years that followed.
Overestimating the speed of technological change can lead to exaggerations, fears, anxieties and an anti-innovation mindset.
Today’s concerns often display an ahistorical, recency bias that contributes to unnecessary anxiety about the future. We need to continue to deconstruct populist narratives about AI in order to switch attention towards the limitations, inaccuracies and paradoxes of these representations – and towards more credible, evidence-based sources and studies. On AI, post-truth is doing us few favours.