I am a fan of generative AI because I think it can enhance productivity significantly. And we desperately need to improve productivity to boost growth in an ageing society. Yet, when I read a recently published short note from Daron Acemoglu on the likely growth impact of AI, I couldn’t help but feel as if I had been doused in a bucket of ice water.
But first, for the uninitiated: if Daron Acemoglu writes a note about AI, everyone should pay attention. Daron is probably the world’s leading expert on automation and inequality. He was named the most cited economist in the world in 2015 and has won every prestigious economics award there is except the big one. But I guess that last one is just a matter of time.
So, what did he write about the likely impact of AI on productivity?
He built a model of the influence of AI on productivity and backed his theory up with early empirical evidence on productivity gains from generative AI. I know, I normally don’t write about economic theory and models, but I’ll make an exception for Daron because his stuff is so good, and it really challenges my notion of how much AI will boost productivity.
His model allows for AI to increase productivity through four different channels:
Automation: AI takes over certain tasks and reduces their cost. Think of legal contracts drafted by AI taking work away from paralegals, or AI-generated standardised software modules programmed by the machine rather than by humans.
Complementarity: AI can enhance productivity by providing assistance to humans engaged in a task. This could be software programmers using the standard modules from AI in a more complex context or journalists using AI to hone their texts or find data on a specific subject.
Deepening automation: AI can improve already automated tasks, as is the case with AI applied to manufacturing and process automation in factories or with AI-enhanced security software.
New tasks: AI may be able to perform entirely new tasks that no human could ever do or that we have never thought about in the past.
Starting with the GDP share of the tasks that are affected by AI and the likely cost savings from AI as evaluated in recent real-life experiments, Daron comes up with an estimate of productivity growth of…
…0.66% over the next ten years.
Not per year. In total.
Even when using the most optimistic estimates for cost savings, he could only get to 0.89%, not even 0.1% per year.
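To make the arithmetic concrete, here is a minimal back-of-envelope sketch of the kind of calculation described above: multiply the GDP share of tasks affected by AI by the average cost savings on those tasks. The two inputs below are illustrative placeholders chosen to land near the headline figure, not necessarily the exact values in Acemoglu’s note.

```typescript
// Back-of-envelope aggregation: economy-wide productivity gain over the decade
// ≈ (GDP share of AI-affected tasks) × (average cost savings on those tasks).
// Both inputs are assumed placeholder values, for illustration only.
const shareOfTasksAffected = 0.046; // assumed: ~4.6% of GDP-weighted tasks profitably automated
const avgCostSavings = 0.144;       // assumed: ~14.4% average cost saving on those tasks

const totalGainOverDecade = shareOfTasksAffected * avgCostSavings;
console.log(`Total gain over ten years: ${(totalGainOverDecade * 100).toFixed(2)}%`);   // ~0.66%
console.log(`Roughly per year: ${((totalGainOverDecade / 10) * 100).toFixed(3)}%`);     // ~0.066%
```

The striking thing is not the placeholder numbers but the structure: unless either the share of affected tasks or the savings per task turns out to be far larger, the product stays well below one percent for the whole decade.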
In his view, even the 0.66% increase in productivity over the coming decade may be an overstatement because so far, we have only empirically tested generative AI on easy tasks like software programming or text creation. But these are the quick wins. Over time, we will have to apply AI to harder tasks and that is where the productivity gains may slow down significantly.
Just think of self-driving cars, which were all the rage five years ago and have since gone nowhere, with companies like Apple abandoning their plans to develop autonomous cars altogether. If one takes into account that productivity gains slow down over time, Acemoglu estimates the combined productivity growth over the coming decade could be 0.53%.
If there is one good thing about Acemoglu’s estimates, it is that generative AI seems unlikely to increase wage inequality across society, because the applications of this new technology are spread evenly across all strata of society, with blue-collar and white-collar workers equally exposed. What does seem likely, however, is that the trend for our societies to become more capital-intensive and for the labour share of GDP to decline will extend even further, as workers’ bargaining power is eroded and more and more people are (partially) replaced by machines.
US labour share of GDP (Source: Bureau of Labor Statistics)
The result is quite astonishing, but I would disagree with some of the referenced papers on which it is based. As a software and data engineer, I would estimate the productivity gains to be much higher than the ~50% stated in the study by Peng et al., rather something between 100% and 1000% (yes, that is 10 times). This could also be due to the more powerful models available today and to increased experience in knowing which tasks to apply them to, and how. (The study's task was also unusual, since no software engineer worth their salt would (a) write a web server from scratch, and (b) do so in JavaScript. ;P)
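For what it's worth, here is a rough sketch of what "not from scratch" looks like in practice: a few lines on top of an existing framework. Express is used here purely as an example; the package, route and port are my assumptions, not anything taken from the study.

```typescript
// A toy illustration of leaning on an existing framework instead of writing an
// HTTP server from scratch. Assumes the "express" package is installed.
import express from "express";

const app = express();

// One trivial endpoint is enough to make the point.
app.get("/health", (_req, res) => {
  res.json({ status: "ok" });
});

app.listen(8000, () => {
  console.log("listening on http://localhost:8000"); // hypothetical port choice
});
```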