Click on any of the logos to the right to learn more about the organizations we support and their activities.

The following articles will help you gain an understanding of how to think about existential risks, and why most people don't.

  • Superintelligence: Paths, Dangers, Strategies by Nick Bostrom. Superintelligence is the definitive introduction to AI risk, and the closest there is to a primer for would-be world-savers in this domain. Bostrom covers the reasons recursively self-improving Artificial Intelligences have the potential to be extremely powerful. That power has tremendous potential to aid humanity, or to bring catastrophe if we don’t solve the specific difficulties of value alignment and stability under self-improvement before the first recursively self-improving AI is built.

    Thoroughly researched, the book goes into depth on each of several possible scenarios and the implications of each. Far from the starry-eyed, overzealous excitement about speculated developments that many of us have come to expect from futurism, Bostrom is rigorous and intellectually cautious, calmly laying out the arguments for the various positions. Even if you ultimately disagree with the conclusion that AGI poses an existential risk, I encourage you to deal seriously with Bostrom’s arguments. If you think the problem is trivial, or think you have an easy solution, that solution is almost certainly flawed, and this book has likely already covered it in detail. For anyone who already thinks the “singularity” might be important, Superintelligence is the best summation of the research done up to this point. Read this book; it’s an eye-opener.

  • Smarter Than Us: The Rise of Machine Intelligence by Stuart Armstrong. What happens when machines become smarter than humans? Humans steer the future not because we’re the strongest or the fastest but because we’re the smartest. When machines become smarter than humans, we’ll be handing them the steering wheel. What promises—and perils—will these powerful machines present? Stuart Armstrong’s new book navigates these questions with clarity and wit.

  • Our Final Invention: Artificial Intelligence and the End of the Human Era by James Barrat. Other books about the future of Artificial Intelligence (specifically Artificial General Intelligence, or AGI, also known as "strong" AI, seed AI, or thinking machines) tend to be about the power, the promise, and the wonder of AGI, and spend about two minutes on the dangers. Even if they literally say, "If we get this wrong, it will kill us all," the relative time spent tends to leave the reader unimpressed by the danger.

    This book explains the danger in detail. James Barrat interviewed many of the researchers in the field, plus knowledgeable external experts, to assemble this layman-readable overview. It is easy to comprehend, well organized, and as much about the people involved as the technologies. It points out the ordinary human motivations that lead people to inappropriately discount and deny dangers, and how those motivations play out in this field.

    Although he does not pitch a fund drive, it is clear that anyone can make a difference. Barrat identifies the very small number of tiny organizations that are working to reduce the danger. Small size means small contributions have a big impact. Ordinary people, experts in various fields (especially mathematics), students, and wealthy people can contribute and improve humanity's chances to survive this century and prosper.

    I invite you to read this book, then get into action. The clock is ticking.

  • Global Catastrophic Risks by Nick Bostrom & Milan M. Cirkovic.
  • Facing the Intelligence Explosion by Luke Muehlhauser.
  • Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards by Nick Bostrom. This paper surveys the various existential risks facing mankind, and provides good background for understanding what the threats are.
  • Cognitive Biases Potentially Affecting Judgment of Global Risks by Eliezer Yudkowsky. This lists some common cognitive biases suffered by people who have to think using ape brains, and how they affect how people deal with the possibility of extinction.
  • The Black Swan: The Impact of the Highly Improbable by Nassim Nicholas Taleb. This book presents the case that history is determined not by the common, highly regular, and probable occurrences of everyday life, but by the rare, extremely significant events that upend the common world.
  • A valuable list of introductory resources on AI safety has been gathered by the Future of Life Institute.
Center for Applied Rationality
Future of Humanity Institute
Machine Intelligence Research Institute