Why SENS makes sense
Summary

In this post, you'll find why I think the SENS Research Foundation (SRF) is a great funding opportunity from an EA perspective, along with the interview questions I want to ask its Chief Science Officer, Aubrey de Grey. You are welcome to contribute your own questions in the comments or through a private message. Here is a brief summary of each section:

Introduction: Aging research looks extremely promising as a cause area from an EA perspective; under a total utilitarian view, it is probably second or third after existential risk mitigation. There are many reasons why it makes sense to donate across several EA cause areas: to reduce risk, because a particular intervention within one of them is especially effective, or because some cause areas are already well funded.

SRF's approach to aging research: SRF selects its research following the SENS general strategy, which divides aging into seven categories of damage, each with a corresponding line of research. This categorization is very similar to the one described in the landmark paper The Hallmarks of Aging. This damage-repair approach seems more effective and tractable than current geriatrics and biogerontology, which aim at slowing aging down: it enables Longevity Escape Velocity (LEV) and many more QALYs, it makes rejuvenation possible rather than having slower aging as the best-case scenario, and it doesn't require in-depth knowledge of our metabolism, which is extremely complicated and full of unknown unknowns.

Funding methodology and focus: Watching the talks Aubrey de Grey gives, we can see that the core tenets of EA (scope, tractability, and neglectedness) guide SRF's focus. After the general strategy is chosen, the research subcategories are prioritized, favoring the most difficult and neglected projects that need to catch up, so as to have the greatest impact on the date of LEV, a metric that Dr. de Grey's prioritization strategy addresses head-on and that constitutes the major source of impact.
Thanks a lot for writing this.
These disagreements mainly concern the relative power of future AIs, the polarity of takeoff, takeoff speed, and, more generally, the shape of future AIs. Do you also have specific disagreements about the difficulty of alignment? If nothing else, the fact that the future unfolds differently in your view should have implications for alignment efforts (though you might also have other considerations informing your view on alignment).
You partially answer this in the last point, saying: "But, equally, one could view these theses pessimistically." But what do you personally think? Are you more pessimistic, more optimistic, or roughly as pessimistic about humanity's chances of surviving AI progress? And why?