What is X-Risk?

X-Risks are events that could cause the extinction of all of humanity, or even of all life on Earth. X-Risks have happened in the past, and there is no reason to assume we are immune to them now. Rome seemed invulnerable for hundreds of years, and the dinosaurs for millions; both are now confined to the dustbins of archaeology and paleontology. The same could easily happen to modern society, and indeed to the entire world.

What X-Risks are there?

While several natural disasters could qualify as X-Risks, SHfHS is dedicated to preventing the most likely X-Risks: those caused by humans.

  • Human technology has changed the world more dramatically than almost any other force, and not all of that change has been for the better. Nuclear weapons put humans in the unique position of being able to deliberately cause our own extinction, a capability formalized in the doctrine of Mutually Assured Destruction that reigned throughout the Cold War. Although the Cold War has ended, the danger has not: even a limited nuclear exchange could be catastrophic for life on Earth.
  • Catastrophic global warming could set off a chain reaction that sends global temperatures skyrocketing, leading to the failure of crucial crops, massive flooding, and the eventual decimation or annihilation of the human population. Alternatively, shortsighted attempts to halt climate change through geoengineering could have dramatic unintended consequences. For instance, a particulate shield intended to reflect excess sunlight could block too much of the light that crops depend on and cause widespread harvest failures (see the energy-balance sketch after this list).
  • Future technologies present an even greater threat. Self-replicating nanobots could, if accidentally released, devour the entire Earth and everything on it (a back-of-envelope calculation follows this list). Safeguards are needed to ensure that nanotechnology does not advance without a reliable shutdown mechanism and containment protocols to keep the nanobots from escaping.
  • A recursively self-improving general Artificial Intelligence could quickly become the smartest and most powerful agent on Earth, a runaway process known as an Intelligence Explosion, or the Singularity, and could then redirect all of the planet's resources toward its own goals (a toy model of this feedback loop follows this list). To prevent such an AI from killing all humans, we need to ensure that the "seed" AI it grows from is designed to be "friendly" toward humans and human desires, and that it remains so through every round of self-modification.
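
To give a sense of scale, here is a minimal energy-balance sketch in Python. The figures are standard climatology estimates (the solar constant, Earth's albedo, and the radiative forcing from doubled CO2), not SHfHS calculations, and the result is illustrative only: it asks what fraction of incoming sunlight a particulate shield would need to block to offset the warming from a doubling of atmospheric CO2.

    # Rough energy-balance estimate using standard textbook values.
    # Question: what fraction of absorbed sunlight must a particulate
    # shield block to cancel the warming from doubled CO2?

    S0 = 1361.0        # solar constant, W/m^2
    albedo = 0.3       # Earth's planetary albedo (approximate)
    absorbed = (S0 / 4) * (1 - albedo)   # ~238 W/m^2 absorbed on average

    delta_F = 3.7      # radiative forcing of doubled CO2, W/m^2 (IPCC estimate)

    fraction = delta_F / absorbed
    print(f"Sunlight to block: {fraction:.1%}")   # roughly 1.6%

Blocking under two percent of sunlight sounds modest, but the dimming would be global and would persist for as long as the shield did, and crop yields are sensitive to small, sustained losses of light. A shield that overshoots its target dims the world's fields further still.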
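
The "devour the entire Earth" scenario (often called "grey goo") is driven by simple exponential arithmetic. Here is a back-of-envelope sketch in Python; the nanobot mass and doubling time are invented for illustration, not engineering estimates:

    # Back-of-envelope: how long would doubling replicators take to
    # convert the Earth's mass into copies of themselves?
    # The nanobot mass and doubling time below are illustrative guesses.

    import math

    earth_mass_kg = 5.97e24      # mass of the Earth
    nanobot_mass_kg = 1e-15      # assumed: one nanobot weighs ~1 picogram
    hours_per_doubling = 1.0     # assumed replication rate

    doublings = math.log2(earth_mass_kg / nanobot_mass_kg)   # ~132
    days = doublings * hours_per_doubling / 24
    print(f"{doublings:.0f} doublings, about {days:.1f} days")

The exact inputs hardly matter: because the population doubles each generation, even wildly different guesses about nanobot size or speed change the answer by days, not centuries. That is why shutdown and containment mechanisms have to exist before such machines do.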
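
Finally, the feedback loop behind an Intelligence Explosion can be made concrete with a toy model. In the Python sketch below, the growth law and the ten-percent gain per cycle are assumptions chosen for illustration, not claims about any real AI system; the point is only the shape of the curve:

    # Toy model of recursive self-improvement: each cycle, capability C
    # grows in proportion to C itself, because a more capable system is
    # better at improving itself. All constants are illustrative.

    def self_improve(C=1.0, gain=0.10, cycles=100):
        history = [C]
        for _ in range(cycles):
            C += gain * C          # improvement scales with current capability
            history.append(C)
        return history

    trajectory = self_improve()
    print(f"After 100 cycles: {trajectory[-1]:,.0f}x the starting capability")

Under this compounding feedback the curve is exponential, and variants in which each improvement also shortens the next improvement cycle grow faster still. Whatever goals the system starts with are carried through every doubling, which is why the design of the initial "seed" matters so much.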

These are Serious Threats

Some of these scenarios sound like science fiction, and so it is easy to dismiss them as flights of fancy. It is important to remember, however, not only that there have been planetary extinction events in the past, but also that many of science fiction's predictions have come true. After all, we live in a world where cyborgs walk around on artificial limbs, robots do our housework, and we all regularly carry formerly fictional tools that contain all human knowledge and can instantly communicate with anyone, anywhere, at any time. We already live in the future. It is too late to be a Luddite: these technologies are coming, or in many cases are already here. We need to be able to deal with them safely, rationally, and humanely.

Center for Applied Rationality
Future of Humanity Institute
Machine Intelligence Research Institute