JUL 2019
A Three-Part Conversation With Professor Leslie Willcocks, co-author of ‘Becoming Strategic With Robotic Process Automation’
What is the state of your research into the RPA industry?
Up to 2014, I didn’t know very much about robotic process automation, but I had studied all the other waves of IT, as it were, and used that knowledge to try to understand the level of hype around the RPA space and the subsequent automation and AI space, as they call it.
So, around about the end of 2014, we started researching actual deployments to find out if there were any out there. Jump forward five years, and we have now written three books, which contain all the empirical research, and I’m just finishing a fourth called Becoming Strategic with RPA. This latest book is very relevant because recent examples show us that although organisations are buying RPA, they’re not getting as much value out of it as they could, because they have too frequently behaved opportunistically, at too low a level, and not strategically enough.
We have now looked at 400-plus deployments over time, so we know what the outcomes are, and we can explain why they succeed and why they don’t. Basically, we think that the companies that adopt RPA fall into four kinds. One is the 20% who are leaders and doing very well. Another group of about 25% are trying to do very well but fall at managerial hurdles in various places. About 35-40% are sort of bumping along – they’ve got it, but they’re not really getting an awful lot out of it. Meanwhile, the remainder (about 20%) are pretty much losers or also-rans because they just don’t get it at all.
The general finding would be that 25% of the mistakes or problems experienced have to do with the technology, but 75% are more to do with the management of the technology. A lot of people unfortunately see RPA as – and it is sold as – a silver bullet. But in fact, RPA needs quite a lot of managing, just like any technology, although the pay-offs seem to be higher and quicker than in previous rounds of technology.
How does automation relate to job loss?
One idea is that robotic process automation and cognitive automation will probably create more jobs than they will lose, although a lot of people seem to think that it’s mainly job displacement. The problem here is that people don’t understand why organisations automate. They don’t automate to get rid of people – that would be a minor objective. They automate because they have such serious problems – a dramatic increase in the amount of work that needs to be done and skills shortages – that automation is the way out. Basically, they’re seeking to do a lot more with about the same number of people, or slightly fewer. So that’s one aspect. The second aspect is that when you use automation, it’s very rarely a 100% solution, in the sense that RPA has very limited functionality. You know, it has to be a very repetitive task you are seeking to automate, with large-scale volumes. You can write rules for it – it needs structured data. You then get a pretty good outcome. But there are so many ‘ifs’ and ‘buts’ that there would be quite a lot of work still for humans (or for cognitive automation technologies that can do more than RPA) to do. So there is always a need for people.
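The rules-over-structured-data constraint Willcocks describes can be sketched in a few lines. This is a minimal illustration, not any vendor’s API; the record fields, the approval threshold, and the rule itself are hypothetical:

```python
# Minimal sketch of an RPA-style rule over structured records.
# RPA can only act on structured fields it has explicit rules for;
# every 'if' and 'but' outside the rules is routed to a human.

def process_invoice(invoice):
    """Apply a simple deterministic rule to a structured record."""
    # Hypothetical rule: auto-approve small invoices from known suppliers.
    if invoice["supplier_known"] and invoice["amount"] <= 1000:
        return "auto-approved"
    # Everything else falls outside the rules and goes to a person.
    return "escalate-to-human"

invoices = [
    {"id": 1, "supplier_known": True, "amount": 250},
    {"id": 2, "supplier_known": False, "amount": 250},
    {"id": 3, "supplier_known": True, "amount": 5000},
]

results = [process_invoice(inv) for inv in invoices]
```

Only the first record satisfies the rule; the other two illustrate why, as the interview notes, there is always residual work for humans.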
Someone recently wrote a book called Ghost Work, and they reckon that the people who fill in for what automation cannot do add up to about 20 million globally at the moment. Consequently, the assumption that automation will be total is always going to be wrong, partly because automation can only do so much – even the most sophisticated technologies, even when you start calling them ‘artificial intelligence’. We did an analysis of this, as did McKinsey, and we came to the conclusion that, of 18 skill sets needed at work, you could probably automate nine or ten of them now or in the near future, but the rest will be quite difficult to automate. So people are not as dispensable in the work process as you might think.
What is creating this mountain of work you refer to?
One of the biggest omissions in the whole study of automation and the future of work is precisely around this point. Everywhere you go, there’s a dramatic increase in the amount of work to be done, and it comes from a number of features. The first one is the exponential data explosion. If the amount of data on the planet that you have to process is doubling every two years (and that’s a conservative estimate), you have an awful lot of data to manage, process, look for insights in, use for analytics, and store – and people can’t do that. There are just not enough people available. So automation helps you to cope there.
The second area, which is totally under-researched, is the massive increase in audit, regulation and bureaucracy, especially since 2008, which has produced a lot of work to be done. You’ve only got to look at any job. I mean, I’m a professor, and you would have thought that all I do is teach and research. But no, I spend 40% of my time on administration and justifying myself in terms of writing research ethics forms and all that sort of stuff. So all the regulation and bureaucracy means a massive increase in the amount of work to be done. It might not be productive, but it’s necessary because everyone insists upon it.
The third source is technology itself, because technology doesn’t just solve problems, it also creates them. The most obvious example would be cyber-security, which is a massive and growing market. Again, automation is a solution, but it creates lots of kinds of work that would otherwise not need doing. For example, you’ll be aware that Google and Microsoft, and companies like them, have a lot of people working to make sure that the content on their websites is actually acceptable. That can only be done by people, not by machines. So there’s this whole area of technology creating problems. A further example: a distracted set of workers using technology, getting interrupted by emails, mobile phones and so on, is a great drain on productivity because of the time it takes to put aside one piece of work, focus on something else, then pick up the first piece of work again. It stretches out the amount of time taken to complete that piece of work.
These are just a few examples of why there’s a mountain of work to be done, which is probably increasing by at least eight to ten percent per annum across the globe in all sectors – but no one’s mapping it.
To what extent do you think RPA creates opportunities for new work due to the fact that several of these bots need to be orchestrated and to be part of more complex and holistic business processes? Do you think over time this will be taken care of by other sets of more intelligent bots?
That’s what’s actually happening. We’re in an interesting phase at the moment (August 2019). People almost talk as if robotic process automation has finished, as if it’s been overtaken by cognitive automation and artificial intelligence. But let’s consider a couple of things. Firstly, the more advanced forms of artificial intelligence are not actually being utilised in businesses outside the top technology companies, defence, the military, secret services and places like that. The very advanced artificial intelligence is not there, and people are definitely not deploying it within businesses. What they are investing in is cognitive automation, which is machine learning, algorithms, visual image processing and natural language processing – all supported, massively, by greatly improved memory and huge increases in processing speed.
But none of this replaces RPA. RPA is essentially the execution engine, and these cognitive technologies surround RPA and help it to do things it couldn’t otherwise do. As an example, at the front end, an RPA system can only use structured data. But any cognitive technology (and there are plenty out there now) that can take unstructured data and structure it so that the RPA system can utilise it is very useful. Likewise, any cognitive technology that can do analytics – taking the data exhaust from an RPA system and extracting insight and analytical information – is also very beneficial. Cognitive technology is very powerful at handling not just text at the front end and the back end, but images and data in all sorts of forms, so that it can eventually be very useful to an RPA system. So what you’re seeing is that these cognitive technologies are what I call ‘RPA on steroids’. They greatly enhance the performance of RPA, but RPA is the essential execution engine. The present phase we’re in is one where the RPA providers and others are trying to establish RPA as a foundation into which you can very quickly link cognitive technologies that exploit the strengths of the RPA platform.
I see most of these cognitive technologies being enablers to what RPA does, and handling more unstructured data, or doing some sort of analytics on qualitative data so that the RPA can do better processing.
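The front-end role described above – a cognitive layer structuring free text so a rules engine can consume it – can be sketched simply. Here a regular expression stands in for a far more capable cognitive component; the message format, field names, and pattern are all illustrative assumptions:

```python
import re

# Hypothetical front end: turn an unstructured email sentence into the
# structured record an RPA-style rule engine needs. A real cognitive
# tool would handle far messier input; the regex is a stand-in.

def structure_request(text):
    """Extract a structured record from free text, or None if it can't."""
    match = re.search(r"refund of \$(\d+(?:\.\d{2})?) for order (\w+)", text)
    if match is None:
        return None  # still unstructured: leave it for a human
    return {"amount": float(match.group(1)), "order_id": match.group(2)}

record = structure_request("Please issue a refund of $49.99 for order A123.")
```

Once the text is structured, the downstream RPA rules can process it like any other record; anything the front end cannot structure stays with a person.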
But what about decisions and the extent to which you need to orchestrate the technology. Do you think that these sorts of things will remain a human job?
There is a limit to the decision-making that machines can do. Obviously, RPA can make limited decisions based on the rules over the structured data that it’s got. Cognitive technologies can make probabilistic decisions based on algorithms that have learned to be accurate from the training datasets that have been fed into them. So cognitive technologies produce decisions that go, “Well, we recommend A as the best option, B as the second best, C as the third best.” For example, if you want to send work to different people to resolve a problem, you could come out with a decision using probabilistic reasoning – A is the best person available, B is the second best, C is the third best – but it’s all highly dependent upon how good the algorithm is and how much context you can build into it.
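The A/B/C ranking described above amounts to scoring candidates and recommending the top few. A minimal sketch, in which the scores are made-up stand-ins for a trained model’s output rather than any real algorithm:

```python
# Sketch of probabilistic ranking: score each available person for a
# task and recommend the top three. The scores here are hypothetical
# placeholders for what a trained model would predict.

def rank_candidates(candidates):
    """Return the top three candidates, best predicted fit first."""
    return sorted(candidates, key=lambda c: c["score"], reverse=True)[:3]

candidates = [
    {"name": "A", "score": 0.91},
    {"name": "B", "score": 0.74},
    {"name": "C", "score": 0.62},
    {"name": "D", "score": 0.35},
]

top_three = rank_candidates(candidates)
```

The quality of the recommendation rests entirely on how well the scores capture context – which is exactly the dependency the interview points out.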
There are certain decisions that, ultimately, have to be made by human beings, as well as monitored by human beings, and you have to have humans standing by to check and approve the decision-making done by the technologies. There is a serious limitation, because there are certain things that machines can’t take into account in their decision-making. There’s just no way the technologies could have as much information as a human might have – machines don’t do empathy, or make decisions that motivate or demotivate people, for example. A human has these abilities and therefore could, possibly, understand the consequences of a decision better than a machine would. So, you know, there are always going to be humans involved with these machines in the vast majority of cases, especially when it comes to decision-making.
Are there effective practices emerging for RPA and cognitive deployment?
We identified 30 action principles for RPA. We ended up with 32 action principles for cognitive automation, 19 of which are distinctive to cognitive automation, in addition to the RPA action principles. So, for example, we found with RPA that you didn’t really have a problem getting the data in the right shape at the front end. It was either structured or not – and if it wasn’t, you couldn’t do very much about it. With cognitive automation, you really had to sort out your data – your whole organisation’s data – in order to get at the right training data and be able to assemble the large-scale datasets needed to train the algorithm. This was never a problem with RPA. The other thing about cognitive automation tools is that they are lifelong learners. When you set them to learn from incoming data, you wait until the tool is relatively accurate (at least as accurate as a human would be in terms of output). But thereafter, you improve it by feeding it more and more data, indefinitely. So you’ve got to treat it as a lifelong learner and expect not instant results but improvements over a number of years.
We found that cognitive automation was much more difficult to manage, because there was a limitless number of things it could do. RPA was easier, because it was pretty clear what it could do, and if you added imagination, you could probably think of innovations to pursue with it. But with RPA, you are limited by what the technology can actually do. The cognitive technologies, however, are so much bigger in their scope, and they open up so much more potential, that it becomes more difficult to make management decisions about what to use them for. And that creates prioritisation problems. You can’t just buy, say, IBM’s Watson, plug it in, and off you go. There are big questions about how to use it, where to use it and where to prioritise the spend. The other thing about cognitive technologies is that they’re much more expensive to implement, train and develop than RPA.