Warning: Today’s post is about a thought experiment, not empirical research. Normal service will be resumed tomorrow.
The rise of AI seems unstoppable, and with it come plenty of warnings about the machines taking over. The Wikipedia page on existential risks from AI has a long list of things that could go wrong and traces warnings against catastrophic risks from AI back to Alan Turing. So, if AI has even a tiny chance of destroying humanity, our planet, or both, shouldn’t we think about investing in mitigating or preventing these risks?
Charles Jones from Stanford University tried to crunch some numbers by building a cost-benefit model. He started with an analogy from the insurance industry. The statistical value of a life in the US is $10m. This means that the government or an insurance company should be willing to pay $100,000 to reduce the risk of death of a single individual by 1%. If that risk is spread over the next ten years, then prevention measures that reduce mortality by 1% over that period are worth $10,000 per year. Or, to turn it around, a $10m life insurance policy covering the next ten years, when you have a 1% risk of dying over that period, will cost you about $10,000 per year.
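A minimal sketch of that arithmetic, using only the round numbers from the insurance analogy above (a $10m value of a statistical life, a 1% risk reduction, a ten-year horizon):

```python
# Back-of-envelope value-of-a-statistical-life (VSL) arithmetic,
# using the round numbers from the insurance analogy above.

VSL = 10_000_000          # statistical value of a life in the US, $
risk_reduction = 0.01     # 1% reduction in the risk of death
horizon_years = 10        # risk spread over the next ten years

# Willingness to pay to cut the risk of death by 1% in total
total_wtp = VSL * risk_reduction          # $100,000

# Spread evenly over the ten-year horizon
annual_wtp = total_wtp / horizon_years    # $10,000 per year

print(f"Total willingness to pay: ${total_wtp:,.0f}")
print(f"Per year over {horizon_years} years: ${annual_wtp:,.0f}")
```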
OK, now calculate the amount of money the government should spend on mitigating the existential risks of AI if the risk of AI destroying the planet or mankind is 1% over the next ten years. Of course, we don’t know how large the risk is in practice, nor do we know how effective any mitigation measures will be. So let’s further assume that by engaging in mitigation against extreme risks from AI, we can cut the disaster risk in half. How much should we spend on AI disaster mitigation each year?
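To get a feel for the order of magnitude, here is a naive scaling of the same VSL logic to the whole economy. This is not the Jones (2025) model, which works through a utility framework rather than a simple multiplication; the population and GDP figures are assumed round numbers, and the disaster probability and mitigation effectiveness are the assumptions stated above.

```python
# Naive back-of-envelope scaling of the VSL logic to the whole US economy.
# NOT the Jones (2025) model; population and GDP are assumed round numbers.

VSL = 10_000_000            # statistical value of a life, $
population = 335_000_000    # assumed US population
annual_gdp = 27e12          # assumed US GDP, $ per year
horizon_years = 10

disaster_risk = 0.01        # assumed 1% chance of AI catastrophe within ten years
mitigation_effect = 0.5     # assumed mitigation halves that risk

# Value of the avoided risk across the whole population
risk_reduction = disaster_risk * mitigation_effect        # 0.5%
total_value = VSL * population * risk_reduction           # ~$16.8 trillion

# Spread over the ten-year horizon and expressed as a share of GDP
annual_spend = total_value / horizon_years                # ~$1.7 trillion per year
share_of_gdp = annual_spend / annual_gdp                  # ~6% of GDP

print(f"Implied annual spend: ${annual_spend/1e12:.1f} trillion")
print(f"Share of GDP: {share_of_gdp:.1%}")
```

Even this crude version lands in the mid-single digits of GDP per year; Jones’s full model, which values survival against consumption more carefully, arrives at the larger figures reported below.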
You won’t believe the results shown in the chart below. In this baseline scenario, the government should spend 15.3% of GDP each year to avoid the existential risks from AI. That is four times as much as the US government currently spends on defence.
If the risk of AI killing the human race over the next ten years is only half as large, we should still spend 8.3% of GDP each year to avoid that outcome. And if our mitigation measures are less effective and reduce existential risks by 20% rather than half, we should still be spending 5.9% of GDP on AI mitigation.
And none of that takes into account that if we destroy mankind in the next ten years, we lose all the future generations that are yet unborn. This is an entirely selfish calculation that only considers our survival over the next ten years. If we place value on the lives of our children and grandchildren, we should spend an enormous 29.5% of GDP each year on the mitigation of catastrophic risks from AI.
[Chart: Optimal spending to reduce the existential risk of AI. Source: Jones (2025)]
Do you think we will ever spend anything near enough to prevent the existential risks from AI? Of course not.
This is simply a theoretical exercise that tells us that if we were to take existential risks seriously, we would have a very different government budget.
Or to turn it around: the fact that we are spending nothing on existential risk prevention shows that, as a society, we are carelessly living in the moment without any concern for the future. As always, we will underprepare and then try to throw huge sums at the problem once we are in a crisis. That has been the story of mankind. And so far, we have mostly avoided societal collapse. Let’s hope it stays that way.
That's the problem with the Precautionary Principle. It calls for action in cases of risk of ruin, i.e. when risk is both global and potentially cascading. You can use the Principle locally to isolate yourself, like Europe does with GMOs, but often isolation won't work, or can't work (pandemics, or A.I.).
This is an interesting thought experiment. How do such numbers compare with the "real" calculated costs of climate change and the loss of lives there?
Afaik, the calculated costs of mitigation investments there are reasonable compared to the projected GDP losses. And the GDP loss does not include the value of lives, which makes it even more favourable to invest today.