Global Catastrophic Risks: An Impact-Focused Overview

Preventing global catastrophic risks is very important and highly neglected, though it is generally less tractable than other promising cause areas like global health and development or animal welfare. Overall, reducing the threat of catastrophic risks could be a highly promising cause area to work on – if you’re a good fit.

Read on to find out how we reached this conclusion.

What are global catastrophic risks?

A global catastrophic risk is the risk of an event that could cause harm, death, and damage on an enormous scale. One well-known example of such a risk is climate change: it threatens to increase the frequency of extreme weather events, cause widespread hunger and chronic water shortages, and make some areas of the world uninhabitable.

Unfortunately, though, climate change isn’t the only potentially catastrophic risk we face. Others include nuclear war, asteroid and comet collisions, and (as we’ve recently seen) pandemics. And, just as the invention of nuclear weapons created a new threat decades ago, emerging technologies like artificial intelligence may pose new threats of their own.

A note: This article was written in early 2022. Because global catastrophic risks encompass quickly evolving spaces, some of our figures and conclusions could be out of date by the time you’re reading this. 

How important are global catastrophic risks?

They’ll cause massive harm

Unsurprisingly, global catastrophic risks are very important – perhaps more so than any of our other top recommended cause areas. There are a few reasons for this, the first of which is obvious: global catastrophes could cause unprecedented amounts of damage, suffering, and loss of life.

A nuclear war, for example, would likely kill tens of millions of people in the initial exchange; global pandemics could kill many millions more; and climate change threatens to cause huge destruction in a number of ways, particularly if any of the more pessimistic projections come to pass. On top of this, some global catastrophic risks fall into the category of existential risk: risks serious enough to threaten human extinction (or to cause other harms from which civilization may never recover).

There’s a slim chance, for example, that a nuclear war could release enough smoke into the atmosphere to cool the Earth significantly. Crops would then become nearly impossible to grow, and we could face a famine severe enough to starve everyone on the planet.

Another potential existential threat could arise from new transformative technologies, like artificial intelligence (AI). AI is a complex and unprecedented technology, so there’s a lot of uncertainty around it. But increasing numbers of people in the field are worried about the catastrophic and existential risks it could entail – like the emergence of an incredibly powerful “superintelligent” AI. Though such an AI might be used to do great things if it comes to exist – like inventing new medical technologies – some are concerned that it might inadvertently do something to endanger humanity instead.

They’re surprisingly likely

On top of the damage they’ll cause, global catastrophic risks are also important because they are surprisingly likely to occur. For example, experts estimate that there’s around a 1% chance of nuclear war every year. Though this might sound low on its own, the risk stacks up over time: it implies a roughly 63% probability of nuclear war happening at some point over the next 100 years. Those are worse odds than a coin flip.
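For the curious, that 63% figure comes from a simple calculation, assuming (as a rough model) that the 1% annual probability stays constant and that each year is independent:

P(no nuclear war in 100 years) = (1 − 0.01)^100 ≈ 0.37
P(at least one nuclear war in 100 years) = 1 − 0.37 ≈ 0.63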

Additionally, some global catastrophic risks can make other catastrophes more likely. For instance, climate change may generate various forms of political instability, making global conflict more probable.

They’ll impact the future

Catastrophic risks are also important because of the impact they’ll have on future generations. The impacts of climate change, such as rising sea levels, could persist for many years to come, leaving the next few generations a quite different world from the one we have today. Worse, existential risks threaten to prevent future generations from even existing. If we don’t succumb to an existential catastrophe, humanity could exist for a very long time. This means that many trillions more humans than are alive today could exist in the future – but only if we don’t go extinct before then. So, while more near-term catastrophic risks could cause enormous damage, existential risks could rob a future from our children, and their children, and their children (and their children – you get the idea).

However, it’s worth bearing in mind that there’s quite a bit of variance in likelihood between different catastrophic and existential risks. For instance, risks like asteroid and comet collisions, as well as ominous-sounding super-volcano eruptions, are less likely to occur in the near future than the other risks discussed above – so they’re probably less pressing.

How neglected are global catastrophic risks?

Far fewer resources are being put into reducing catastrophic and existential risks than are needed to properly combat them (and probably far fewer than you might have hoped). This shouldn’t be shocking: financial and electoral incentives mean that corporations and governments are often focused on the near term. A commonly cited (and worrying) example is that the UN Biological Weapons Convention, which prohibits the development of biological weapons, has a smaller annual budget than an average branch of McDonald’s. More broadly, 80,000 Hours estimates that only $1 billion per year (adjusted for quality) is spent on preventing the most serious pandemics globally – a surprisingly small amount relative to the scale of the threat. For context, it’s estimated that Covid-19 will have cost the global economy $12.5 trillion by 2025.

Although AI is touted as one of the most likely sources of existential risk in the coming decades, it’s estimated that there are only 100-200 people working full-time on AI safety efforts. This means it’s probably even more neglected than global catastrophic biological risks. Given that 20,000+ people work to produce plastic bricks for LEGO, it seems plausible that AI risk could absorb a fair few more workers. (We’re not saying that plastic bricks aren’t important, just that they’re probably not 100 times as important as protecting humanity.)

It’s also clear that not enough is being done to combat climate change. The world is starting to pay a bit more attention to it (over $600bn was spent on addressing climate change in 2020), but this isn’t nearly sufficient. And, unfortunately, other catastrophic risks receive even less funding and attention than climate change, though recent years have seen more philanthropic money move into the area. For example, Open Philanthropy has spent hundreds of millions of dollars on a variety of projects within existential risk reduction over the last few years, with more expected to come.

Though these figures pale in comparison to global expenditure on many other issues – meaning catastrophic risk reduction is still highly neglected relative to its importance – we should expect that the cause area will become slightly less neglected over the medium term. Excitingly, this also means that there is a growing number of career opportunities, as well as a growing capacity for additional organizations to be started in this cause area.

How tractable are global catastrophic risks?

While global catastrophic risk reduction is both highly important and quite neglected, we think it’s generally less tractable than other promising cause areas. There are a few reasons for this.

First, in other cause areas like global health and development, there are lots of interventions that we can be confident will work, like distributing bednets to combat malaria. We can run experiments to quickly and directly test the results of various methods to combat these problems. This isn’t really the case for global catastrophic risks. Such risks are big, one-off events, rather than continuous problems like disease and hunger, so it’s much more difficult to test what works and what doesn’t. Instead, we have to make educated guesses, simulate what a catastrophe would look like in practice, or find other, less obvious indicators of success. Thus, mitigating global catastrophic risks generally requires more speculative efforts, at least compared to the actions we can take to help in other cause areas.

Second, it’s often just really technically hard to implement effective solutions to global catastrophic risks – even when we know broadly what sorts of interventions and technologies might work. Consider AI safety, the field of research working to make sure that advanced artificial intelligence works for the good of humanity. Given that superintelligent AI does not yet exist, and there are deep uncertainties about how such an AI might work in the future, it can be difficult to produce work now that will be useful or relevant if or when such AI comes to exist.

Finally, tackling existential risks will require decisions to be made by very senior decision makers – like politicians and leaders of large organizations. Convincing these influential actors to care about existential risk, and to draw up relevant policy or allocate funding, is often tricky. Their attention is spread over many different issues, so it’s difficult to get them to focus on any one issue, let alone ones that might not benefit them electorally or financially.

Why might you not prioritize global catastrophic risks?

Alongside concerns about tractability, there are other reasons you might choose to prioritize other cause areas over global catastrophic risks.

One reason is the possibility of unintentional harm. Because we’re uncertain about how best to combat these risks, there’s a chance that attempts to do so might backfire. For example, some have expressed concerns that certain research in AI alignment could end up accelerating the development of advanced AI, giving us less time to work out how to make it safe. Likewise, research in biosecurity may produce insights that are also usable by malicious actors wishing to start a pandemic. These potential backfires might make you more cautious about working on global catastrophic risks. Alternatively, though, you might decide that reducing the likelihood of these mistakes is a high priority within this cause area.

What can you do about global catastrophic risks?

Despite the challenges outlined above, we think it could be extremely valuable to dedicate your career to reducing global catastrophic risks. Though some attempts to avert catastrophe might not succeed, the ones that do succeed will have enormously high impact. And given the recent injection of funding into this cause area, now might be one of the best times to work to reduce existential risks, as organizations in this area look to scale up.

We highly recommend that you explore the brilliant resources over at 80,000 Hours, a career-advice organization with lots of great content on preventing global catastrophes. You might find it helpful to start with their article on the case for reducing existential risks. You can also browse their frequently updated job board for a more detailed picture of the different sorts of roles available within this cause area.

Additional resources