This week scientists published a report listing 12 risks that threaten human civilization. Many of those risks were non-existent a century ago and are the result of technological progress. But technology can also play a role in countering threats to our survival.

The report was issued by the Global Challenges Foundation, an organization that aims to reduce the risk of major global catastrophes. It was written by Dennis Pamlin and Dr. Stuart Armstrong with input from over 20 scientists.

The 12 risks all have the potential to destabilize systems – human or natural – to such an extent that the collapse of civilization or even human extinction could follow. The authors call this 'a category of risks where the worst possible impact in all practical senses is infinite', by which they mean the result is 'irreversible and lasts forever'.

First, to end the suspense, here are the 12 culprits threatening life as we know it.

Current risks
1. Extreme Climate Change
2. Nuclear War
3. Ecological Catastrophe
4. Global Pandemic
5. Global System Collapse

Exogenic risks
6. Major Asteroid Impact
7. Supervolcano

Emerging risks
8. Synthetic Biology
9. Nanotechnology
10. Artificial Intelligence
11. Uncertain Risks

Global policy risk
12. Future Bad Global Governance

Risky technology

Most of these risks are linked to, or are the direct result of, technological and economic development. The authors aren't alone in thinking that when it comes to survival, humankind has become its own biggest threat: 'Given humanity’s track record of surviving natural risks to date, it has been strongly argued that the greatest sources of risk over the coming century will be those that emerge from human activity, whether by “terror or error”: by deliberate use of weapons, or from the catastrophic impacts of accidents involving powerful technology', as the Centre for the Study of Existential Risk at the University of Cambridge puts it.

The unprecedented acceleration of technological and economic growth makes it increasingly difficult to predict which new technologies and scientific fields will emerge in the medium and long term, let alone to control them.

An example is the current discussion about Artificial Intelligence, or rather superintelligence: an intellect far surpassing that of any human. So far it has been impossible to determine a credible timeline for its emergence, or even whether it is attainable at all. But if it does come to pass, there is an inherent control problem: how would lesser human minds impose their will on a superior intelligence? It isn't a given that such an intelligence would prefer a pristine, quiet world free of messy humans, but it isn't inconceivable either.

Technological advancement has put us on a path where we may become the agents of our own demise, but it simultaneously has the potential to lift us out of this mess. 'The potential for technology to help solve existing and future global challenges is almost limitless. And so unfortunately is its potential to accelerate existing risks and create new ones', Pamlin and Armstrong write.

Prepare

The aim of the report is to raise awareness of these threats and inspire a global effort to minimize the risks where possible and prepare where needed. The authors note that these risks are often ignored because the probability of their occurrence is very low. But risk assessment isn't only about probability; it's also about impact: Risk = Probability × Impact. These risks deserve serious attention, the authors argue, because if they occur, we'll be confronted with 'impacts of a magnitude that pose a threat to human civilization, or even possibly to all human life'.
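
To see why that Probability × Impact arithmetic favors taking rare catastrophes seriously, here is a minimal sketch in Python. The scenario names and every number in it are hypothetical, invented purely for illustration; none of them come from the report itself.

```python
# Minimal sketch of the Risk = Probability x Impact idea.
# All probabilities and impact figures below are hypothetical, chosen only
# to illustrate why a very rare event can still dominate a risk ranking.

risks = {
    # name: (annual probability, impact in arbitrary damage units)
    "frequent small accident": (0.10, 1e2),    # common, cheap
    "rare global pandemic":    (0.001, 1e7),   # rare, very costly
    "major asteroid impact":   (1e-8, 1e13),   # almost never, catastrophic
}

# Rank the scenarios by expected loss (probability x impact), highest first.
ranked = sorted(risks.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True)

for name, (probability, impact) in ranked:
    expected_loss = probability * impact
    print(f"{name:24s} p={probability:<8g} impact={impact:<8g} "
          f"expected loss={expected_loss:g}")
```

In this made-up example, the asteroid scenario has by far the lowest probability yet tops the ranking, which is exactly the authors' point about risks whose worst possible impact is, for all practical purposes, infinite.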