Parts 5 and 6
Week 5: Existential Risk
“So if we drop the baton, succumbing to an existential catastrophe, we would fail our ancestors in a multitude of ways. We would fail to achieve the dreams they hoped for; we would betray the trust they placed in us, their heirs; and we would fail in any duty we had to pay forward the work they did for us. To neglect existential risk might thus be to wrong not only the people of the future, but the people of the past.” – Toby Ord

This week we’ll cover the definition of an existential risk; examine why existential risks might be a moral priority; and explore why existential risks are so neglected by society.
Organisation Spotlight: Future of Humanity Institute

The Future of Humanity Institute (FHI) is a multidisciplinary research institute working on big-picture questions for human civilisation and exploring what can be done now to ensure a flourishing long-term future. Currently, their four main research areas are macrostrategy, AI safety, AI governance, and biosecurity.
Organisation Spotlight: Nuclear Threat Initiative

The Nuclear Threat Initiative (NTI) works to prevent catastrophic attacks of a nuclear, biological, radiological, chemical or cyber nature. Alongside other projects, they work with heads of state, scientists, and educators to develop policies to reduce reliance on nuclear weapons, prevent their use, and end them as a threat.
Required Materials
The Precipice – Chapter 2 – Existential Risk (65 mins.)
The Precipice – Chapter 4 – Anthropogenic Risks (65 mins.)
Policy and research ideas to reduce existential risk – 80,000 Hours (5 mins.)
Recommended reading
More to explore
The Precipice – Chapter 3 – Natural Risks – How big is the threat to humanity posed by asteroids and comets, supervolcanoes, stellar explosions, and other natural risks? (60 mins.)
The Vulnerable World Hypothesis – Future of Humanity Institute – Scientific and technological progress might change people’s capabilities or incentives in ways that would destabilize civilization. This paper introduces the concept of a vulnerable world: roughly, one in which there is some level of technological development at which civilization almost certainly gets devastated by default. (45 mins.)
Open until dangerous: the case for reforming research to reduce global catastrophic risk (Video – 50 mins.)
Dr Greg Lewis on COVID-19 & the importance of reducing global catastrophic biological risks (Podcast)
Global governance and international peace
Ambassador Bonnie Jenkins on 8 years of combating WMD terrorism – an interview with Bonnie Jenkins, Ambassador at the U.S. Department of State under the Obama administration, where she worked for eight years as Coordinator for Threat Reduction Programs in the Bureau of International Security and Nonproliferation. (Podcast – 1 hour 40 mins.)
Why effective altruists should care about global governance – Because global catastrophic risks transcend national borders, we need new global solutions that our current systems of global governance struggle to deliver. (Video – 20 mins.)
Destined for War: Can America and China Escape Thucydides’s Trap? (Book)
Climate change
Climate Change Problem Profile – 80,000 Hours – An analysis of the worst risks of climate change, and some of the most promising ways to reduce those risks. (30 mins.)
What can a technologist do about climate change? – A wide collection of technical projects to reduce the burning of fossil fuels. (60 mins.)
Nuclear security
Daniel Ellsberg on the creation of nuclear doomsday machines – Daniel Ellsberg on the institutional insanity that maintains large nuclear arsenals, and a practical plan for dismantling them (Podcast – 2 hours 45 mins.)
List of nuclear close calls – Wikipedia – A description of the thirteen events in human history so far that could have led to an unintended nuclear detonation (5 mins.)
Week 6: Emerging Technologies
One way to look for opportunities to accomplish as much good as possible is to ask: “Which developments might have an extremely large or irreversible impact on human civilisation?” This week we’ll explore a few technological trends that might be relevant to existential risk. We can’t cover all the major considerations for what the future will be like, but we aim to cover two key emerging technologies that might be less well known: transformative artificial intelligence and advances in biotechnology.
Organisation Spotlight: Center for Security and Emerging Technology

The Center for Security and Emerging Technology (CSET) is a policy research organisation that produces data-driven research at the intersection of security and technology, providing nonpartisan analysis to the US policy community. They currently focus on the effects of progress in artificial intelligence, advanced computing, and biotechnology. CSET aims to prepare the next generation of decision-makers to address the challenges and opportunities of emerging technologies. Their staff include renowned experts with experience directing intelligence and research operations at the National Security Council, the intelligence community, and the Departments of Homeland Security, Defense, and State.
Required Materials
The Precipice – Chapter 5 (pages 121-138) – Pandemics (25 mins.)
The case for taking AI seriously as a threat to humanity (30 mins.)
How Students Will Lead the Alternative Protein Revolution – Amy Huang (13 mins. at 2x speed)
Recommended reading
Global Catastrophic Risks – Chapter 20 – Biotechnology and Biosecurity – Biotechnological power is increasing exponentially, at a rate as fast as or faster than Moore’s law, as measured by the time needed to synthesise a given sequence of DNA. This has important implications for biosecurity. (60 mins.) A rough numerical illustration of this kind of doubling-time comparison appears after this list.
Efforts to Improve the Accuracy of Our Judgments and Forecasts – Open Philanthropy (10 mins.)
Some Background on Our Views Regarding Advanced Artificial Intelligence – Open Philanthropy Project – An explication of why there is a serious possibility that progress in artificial intelligence could precipitate a transition comparable to the Neolithic and Industrial revolutions. (60 mins.)
Three wild speculations from amateur quantitative macrohistory (10 mins.)
The Precipice – Chapter 5 (pages 138-152) – Unaligned Artificial Intelligence (25 mins.)
What Failure Looks Like – Two specific stories about what a very bad society-wide AI alignment failure could look like, which differ considerably from the classic “intelligence explosion” story. (12 mins.)
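As a rough numerical illustration of the doubling-time comparison in the biosecurity chapter above, the sketch below compounds two exponential trends over a decade. The 18-month and 12-month doubling times are assumptions chosen for illustration, not figures from the chapter.

```python
# Illustrative only: how a modestly faster doubling time compounds.
# Both doubling times are assumed for illustration, not taken from the chapter.
MOORE_DOUBLING_YEARS = 1.5  # classic Moore's-law doubling time (assumed)
FAST_DOUBLING_YEARS = 1.0   # hypothetical faster DNA-synthesis doubling time

def growth_factor(years: float, doubling_time_years: float) -> float:
    """Capability multiplier after `years`, doubling every `doubling_time_years`."""
    return 2 ** (years / doubling_time_years)

decade = 10
print(f"Doubling every 18 months: ~{growth_factor(decade, MOORE_DOUBLING_YEARS):.0f}x per decade")
print(f"Doubling every 12 months: ~{growth_factor(decade, FAST_DOUBLING_YEARS):.0f}x per decade")
# Prints ~102x vs ~1024x: a slightly shorter doubling time compounds into
# an order-of-magnitude gap within ten years.
```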
More to explore
Global historical trends
How big a deal was the Industrial Revolution? (1 hr. 20 mins.)
Modeling the Human Trajectory – Open Philanthropy Project (30 mins.)
Books on macrohistory: Guns, Germs, and Steel, Global Economic History: A Very Short Introduction, or Sapiens
Biosecurity
Current Topics in Microbiology and Immunology – Volume 424, Chapter 7 – Does Biotechnology Pose New Catastrophic Risks? – A description of the challenges of managing dual-use capabilities enabled by modern biotechnology. (60 mins.)
Explaining Our Bet on Sherlock Biosciences’ Innovations in Viral Diagnostics – Open Philanthropy Project – A report on their investment in Sherlock Biosciences to support the development of a diagnostic platform to quickly, easily, and inexpensively identify any human virus present in a sample. (15 mins.)
Shaping the development of artificial intelligence
What is artificial intelligence? Your AI questions, answered – Vox (40 mins.)
The new 30-person research team in DC investigating how emerging technologies could affect national security – 80,000 Hours – How might international security be altered if the impact of machine learning is similar in scope to that of electricity? (Podcast – 2 hrs.)
AGI Safety from First Principles – One AI PhD student’s take, from first principles, on the key factors behind the problem of aligning general AI. (1 hr. 15 mins.)
Potential Risks from Advanced Artificial Intelligence: The Philanthropic Opportunity – Open Philanthropy Project – Why reducing risks from AI might be one of the most outstanding philanthropic opportunities. (40 mins.)
Human Compatible: Artificial Intelligence and the Problem of Control (Book)
The Alignment Problem: Machine Learning and Human Values (Book)
Other
Technology Roulette: Managing Loss of Control as Many Militaries Pursue Technological Superiority – Center for a New American Security – An argument for how advances in military technology (including but not limited to AI) can impede relevant decision making and create risk, thus demanding greater attention by the national security establishment. (60 mins.)
Big nanotech: towards post-industrial manufacturing – The Guardian – an explanation of how atomically precise manufacturing could displace industrial production technologies and bring radical improvements in production cost, scope, and resource efficiency. (10 mins.)
AlphaGo – The Movie – DeepMind – A documentary exploring what artificial intelligence can reveal about the 3,000-year-old game of Go, and what that can teach us about the future potential of artificial intelligence. (Video – 1 hr. 30 mins.)
The Artificial Intelligence Revolution: Part 1 – A fun and interesting exploration of artificial intelligence by the popular blogger Tim Urban. (45 mins.)
The Future of Surveillance – An exploration of ways in which the future of surveillance could be bad, and an investigation into accountable, privacy-preserving surveillance protocols. (Video – 15 mins.)
Exercise (30 mins.)
Every day each of us makes judgments about the future in the face of uncertainty. Some of these judgments can have a huge impact on our lives, so it’s really important that we make them as accurately as possible. But what can you do if you have limited information about the future? This week we’ll practice making predictions, with the goal of honing your ability to make accurate judgments in uncertain situations.
The aim of the exercise is to help you become “well-calibrated.” This means that when you say you’re 50% confident, you’re right about 50% of the time, not more, not less; when you say you’re 90% confident, you’re right about 90% of the time; and so on. The app you’ll use contains thousands of questions – enough for many hours of calibration training – that will measure how accurate your predictions are and chart your improvement over time. Nobody is perfectly calibrated; in fact, most of us are overconfident. But various studies show that this kind of training can quickly improve the accuracy of your predictions.
Of course, most of the time we can’t check the answers to the questions life presents us with, and the predictions we’re trying to make in real life are aimed at complex events. The Calibrate Your Judgement tool helps you practice on simpler situations where the answer is already known, providing you with immediate feedback to help you improve.
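To make “well-calibrated” concrete, here is a minimal sketch of how calibration can be measured, assuming you have recorded each prediction’s stated confidence and whether it turned out to be correct. The data below is made up, and this is not how the Calibrate Your Judgement tool is implemented; it only illustrates the idea.

```python
# Measure calibration: group predictions by stated confidence, then compare
# each group's stated confidence against the fraction answered correctly.
from collections import defaultdict

# Made-up example data: (stated confidence, was the answer correct?)
predictions = [
    (0.5, True), (0.5, False), (0.6, True), (0.6, True),
    (0.7, False), (0.8, True), (0.9, True), (0.9, True), (0.9, False),
]

buckets = defaultdict(list)
for confidence, correct in predictions:
    buckets[confidence].append(correct)

# Perfect calibration: actual accuracy in each bucket matches the stated
# confidence. Accuracy consistently below it indicates overconfidence.
for confidence in sorted(buckets):
    outcomes = buckets[confidence]
    accuracy = sum(outcomes) / len(outcomes)
    print(f"stated {confidence:.0%} -> actual {accuracy:.0%} ({len(outcomes)} predictions)")
```

With enough predictions in each bucket, a persistent gap between stated confidence and actual accuracy is exactly the miscalibration this week’s training aims to shrink.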