Will AI make you unemployed or finally give you the job you really want?

A CROWC Blog on the Future of Work and AI

Artificial Intelligence (AI) will transform the way we work, at least according to commentary in the media, blogosphere and academic journals. Simplifying a little, the debate has polarised into a predictable pattern. Critics point to the risk of AI automating and deskilling work, with job replacement as the most likely outcome. Proponents, on the other hand, argue that AI will take away all of the boring work, automating drudgery and allowing us – the humans – to focus on being more creative: the value-adding work that is difficult or impossible to automate (at least for now).

If AI really did take away the more tedious aspects of work, that could only be a good thing. As David Graeber noted in his best-selling book Bullshit Jobs, many of us secretly believe that our jobs are unnecessary and don’t contribute to the greater good. Graeber’s argument resonated with a lot of people, even if we might question his methods. Anyone working in a large organization today will recognise the stresses and time pressures arising from increased form-filling and monitoring. Paradoxically, however, whilst digitalisation is often sold as a route to debureaucratisation, entrenched technologies like email show that it can actually increase the workload of administrative, non-core activities. This was certainly the experience of many office workers going into COVID lockdowns. The flexibility of being able to work from home didn’t lead to a high-trust, autonomous working culture, but to increased surveillance and demands for continuous reporting to assuage managers’ anxieties. The result was a further erosion of work-life boundaries and new privacy violations.

This doesn’t mean that increased control, automation, and surveillance are the inevitable outcome of technological change. We often refer to ‘digitalisation’ and ‘AI’ as if they were single, fixed things rather than social processes. This misperception is baked into ideas of progress that frame technological change as an inevitable, evolutionary process of ‘things getting better’. The idea of digitalisation and technological progress has become a kind of secular faith. Anyone arguing against AI today is likely to be labelled a Luddite, pointlessly fighting against an inevitable future.

The future is not inevitable. It is inherently open, uncertain, and shaped by human actions and decisions. As sociologists and organizational researchers recognised long ago, technologies are designed, developed, and applied in social contexts. Decisions about which technologies to develop, how they are designed, and how they are used are all made by people in organizations. These decisions are therefore matters of power and politics: who has the ability and authority to frame the situation, who controls the resources to invest in change, and who determines how value is accounted for. In short, technological change is a social process as much as a technological one.

The social is more than just people, however. In organizations we are influenced by institutional factors. Most workplaces have a regular quarterly or annual reporting process. Financial statements position wages as a cost, while a new machine or system might count as an investment. These accounting conventions frame technological investment decisions by playing off employees against new technology. As I have repeatedly been told when researching innovation, managers have a Return-on-Investment (ROI) horizon in mind within which a new technology must pay for itself. In practice this is usually two to three years, meaning that a new machine must cost less than an employee over that period. Two to three years of salary and pension costs becomes the rough ‘value for money’ calculation applied to investing in technological change. This kind of accountancy logic is good business sense. The result, however, is a focus on short-term costs in which technology is pitted against employees rather than put in the service of them. Why would any business owner or manager invest in a new technology like AI and then keep the same number of staff? For the ‘augmentationist’ argument – that AI can augment human capabilities rather than replace them – to hold, innovation needs to lead to growth. If we are going to pay for new technology to take on routine work and free up our human employees for creative work, there needs to be growth in the firm to absorb the extra capacity. The only alternative would be a complete break with a cost-based logic of efficiency, which really would be a paradigm change in how we organize business.
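To make the arithmetic behind this ‘value for money’ calculation concrete, here is a minimal sketch in Python. The figures (salary, pension on-costs, machine price) and the helper `payback_years` are hypothetical illustrations rather than data from the research; the point is simply that under a two-to-three-year ROI rule, the benchmark against which the technology is judged is the cost of an employee.

```python
# A minimal sketch of the payback logic described above.
# All figures are hypothetical, for illustration only.

def payback_years(machine_cost: float, annual_saving: float) -> float:
    """Years for the investment to pay for itself out of annual savings."""
    return machine_cost / annual_saving

# Hypothetical example: the 'saving' is one employee's annual salary
# plus pension and other on-costs.
annual_employment_cost = 45_000 + 9_000   # salary + pension/on-costs
machine_cost = 110_000

years = payback_years(machine_cost, annual_employment_cost)
print(f"Payback period: {years:.1f} years")  # roughly 2.0 years

# Under a two-to-three-year ROI rule, the machine is approved only if
# it costs less than roughly 2-3 years of employment costs.
approved = machine_cost <= 3 * annual_employment_cost
print("Within the ROI horizon:", approved)
```

Framed this way, the calculation never asks what the employee could do with the freed-up time; the employee’s cost is simply the yardstick the machine has to beat.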

This means that the physical limits to growth are an obstacle to realising the goal of augmentation, and AI won’t escape those limits. AI data centres are already among the largest and fastest-growing users of electricity and water, leading Google to shift investment towards small-scale, private nuclear energy generation. We can hope that AI will somehow generate enough cumulative, small-scale efficiency savings to offset this additional demand on the earth’s natural resources, but so far there is no evidence that this is happening.

Another obstacle to the realisation of an AI-augmented future of work is that most organizational design takes place within an engineering paradigm that treats humans as a source of risk and error within a system. As work by Kendra Briken and colleagues from Strathclyde University has demonstrated, this perspective re-frames ‘human centrism’ as ‘risk-minimisation’ when designing organizational systems. Humans are a weakness to be protected (from boring or dangerous work), but also a source of error that the system must be protected from. These ideological and institutional factors shape how innovation plays out in practice. Whilst there are many possible futures of digital organizing and work, without transforming these social factors we are much more likely to pursue a future of automation and redundancy, at least in the short term, than to refocus on the human as the real centre of industry. To do that would require starting from a different point and designing our organizations around good work and what humans need in work and life, rather than what senior managers and investors say ‘the organization’ needs.