Do you want a first-principles preparedness guide to prepare yourself and your loved ones for potential catastrophes?
Many of us have heard how we should prepare for potential disasters. But is such advice outdated given increasing threats from both AI and biotechnology? This post is both a test of people's appetite for an up-to-date, first-principles rationalist preparedness guide and a call for collaborators to create such a guide (even just reading and commenting on draft versions would be helpful).

In the rest of this post, I outline my preliminary thoughts on the contents of such a guide and the analysis I foresee needing to be done. I hope this is sufficient for readers to tell me whether they would find such a guide useful. I also hope readers can help improve the guide right from the outset:

* What sections/topics are missing from the guide?
* Is there better or additional analysis that would likely improve the guide?
* Is anything proposed in this post wrong or irrelevant?
* Is the guide likely to be useful at all?
* Any other comments likely to help people be better prepared for the catastrophes most likely to harm them

I am structuring this post the way I currently (and preliminarily) envision structuring the actual guide, so each section below roughly corresponds to a chapter/heading/section in the final preparedness guide, in the same order. The sections below also contain a mix of the content I foresee in these chapters and the analysis that needs to be performed.

It should be noted that one outcome of creating an updated, rationalist preparedness guide might be the realization that current preparedness advice is sufficient even for threats posed by emerging technologies like AI and biotechnology. Or it might be decided that the uncertainty around these new risks is so high that it is hard to anticipate how to prepare. Moreover, as I was writing this post, I realized that some sections below might in fact already be the draft text of the final guide. For
Note for future work:
Look at roles or institutions with explicit early-action triggers, for example nuclear early-warning / launch-on-warning systems, where early action is pre-approved and procedurally mediated because delay is irrecoverable.
This is not a claim, just a flag in case follow-on pieces explore how early-action systems are actually set up in practice.