The Effective Altruism community has been an unexpected and pleasant surprise. I remember wishing there was a group out there that shared at least one of my ideals. Instead, I found one that shares three: global reduction of suffering, rationality, and longtermism. However, with each conference I attend, each post I read on the forum, and each new organization that gets created, I notice that most of the work falls into a few distinct categories: global development/health, animal welfare, biosecurity, climate change, nuclear risk/global conflict, and AI Safety. Don’t get me wrong, these are some of the most important areas one could possibly be working on (I’m currently focusing 90% of my energy on AI Safety myself). But I think there are at least five other areas that could benefit substantially from a small growth in interest.
 

Interplanetary Species Expansion

This might be the biggest surprise on the list. After all, space exploration is expensive and difficult. But there are very few people actually working on how to change humanity from being a Single Point of Failure System. If we are serious about longtermism and about truly decreasing x-risk, this might be one of the most crucial achievements needed. Nearly every x-risk is greatly reduced by it, perhaps even AGI*. And because expansion will be a very slow process, the sooner it begins, the greater the reduction in risk. One comparatively low-cost research area is studying biospheres: how a separate ecosystem and climate could be created in complete isolation. This can be studied on Earth. It has been decades since anyone attempted to create a closed ecological system, and advances here could even improve our chances of surviving on Earth if the climate proves inhospitable.

 

Life Extension

~100,000 people die from age-related diseases every day. ~100 billion people have died in our history. (Read that again.) Aging causes an immense amount of suffering, both to those who endure it for years and to those who must grieve. It also causes irrecoverable loss, and is perhaps the greatest tragedy that is treated as normal. If every person who dies of a preventable disease like malaria is a tragedy, I do not see why those dying of other preventable causes are not tragedies as well. Even if you believe extending the human lifespan is not important, consider the case where you’re wrong: if your perspective is incorrect, then ~100k more tragedies happen for every day we delay solving it.
 

Cryonics

This is related to Life Extension, but even more neglected, and probably even more impactful. The number of people actually working on cryonics to preserve human minds is easily below 100. A key advancement from one individual in research, technology, or organizational improvement could likely have enormous impact. The reason for doing this goes back to the idea of irrecoverable loss of sentient minds. As with life extension, if you do not believe cryonics to be important or even possible, consider the alternative where you’re wrong. If one day we do manage to bring people back from suspended animation, I believe humanity will weep for all those who were needlessly thrown in the dirt or the fire: for they are the ones there is no hope for, an irreversible tragedy. The main reason I think this isn’t being worked on more is that it is even "weirder" than most EA causes, despite making a good deal of sense.

 

Nanotechnology

A survey provided on the 80k website** places nanotechnology as having a 5% chance of causing human extinction, the same as artificial superintelligence***, and 4 percentage points higher than nuclear war. Many do not seem to dispute the possible danger of nanoweapons. Many agree that nanoweapons are possible. Many agree that nanotechnology is expanding, even if it’s no longer in the news. So, where are all the EAs tackling nanotech? Where are the organizations devoted to it? Where are the research institutions?**** Despite so many seeming to agree that this cause is important, there seems to be a perplexing lack of pursuit.

 

Coordination Failures

Most of humanity’s problems come from coordination failures. Nuclear war and proliferation are coordination failures: everyone would be safer if there were no nukes in the world, and very few people (with some obvious current world exceptions) actually benefit from many entities having them. Climate change is partially a coordination failure: everyone wants the benefits of reducing it, but no one wants to be the only one footing the bill. A large amount of AGI risk will likely come from coordination failures: everyone will be so concerned about others building dangerous AGI that they will be incentivized to build dangerous AGI first. Finding fundamental ways to solve this could not only radically decrease x-risk, but would probably make everyone’s lives unbelievably better. This is a big ask, though. Most attempts at this will likely fail, but I think even a 1-5% chance of success is worth putting far more effort into. We have already seen some achievements. As Eliezer Yudkowsky notes in Inadequate Equilibria, Kickstarter created a way for people to pledge money to a project that is only collected if the project gets enough funding to actually be created, so that no one ends up wasting their own money. Satoshi Nakamoto’s consensus mechanism created a way for agreements to be enforced without the need for government coercion. These were insights from a few individuals, drawing inspiration from a wide variety of domains. It is likely there are many others waiting to be discovered.
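To make the Kickstarter example concrete, here is a minimal sketch (in Python, with hypothetical names) of the assurance-contract logic it relies on: pledges are only collected if the shared goal is reached, so no individual risks wasting money by contributing unilaterally. This is just an illustration of the mechanism, not how Kickstarter is actually implemented.

```python
from dataclasses import dataclass, field

@dataclass
class AssuranceContract:
    """Threshold-pledge mechanism: pledges are promises that are
    only collected if the total meets the funding goal."""
    goal: float
    pledges: dict = field(default_factory=dict)

    def pledge(self, backer: str, amount: float) -> None:
        # Record a promise; nothing is charged at this point.
        self.pledges[backer] = self.pledges.get(backer, 0.0) + amount

    def settle(self) -> dict:
        # Charge everyone only if the goal is reached; otherwise no one pays.
        total = sum(self.pledges.values())
        if total >= self.goal:
            return {"funded": True, "charged": dict(self.pledges)}
        return {"funded": False, "charged": {}}

# Example: the project only goes ahead once pledges reach the goal.
contract = AssuranceContract(goal=1000)
contract.pledge("alice", 600)
contract.pledge("bob", 300)
print(contract.settle())   # {'funded': False, 'charged': {}} -- no one pays
contract.pledge("carol", 200)
print(contract.settle())   # {'funded': True, 'charged': {'alice': 600.0, ...}}
```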





*I do not think AGI risk is prevented by having multiple human bases, but the high uncertainty around how an AGI might kill us all leaves some chance that other home worlds would be safe from it. This is contingent on (1) the AGI not wishing to expand exponentially, and (2) the AGI not being specifically interested in our extinction. All other x-risks I know of (nuclear war, climate change, bioweapons, etc.) are substantially reduced by having other bases.

**80k actually places AI risk closer to 10%, and nanoweapons much lower.

***I believe this is far too low for AGI.

****There are a few. But institutions such as the Center for Responsible Nanotechnology don’t seem to have many people or much funding, and haven’t published anything in years.


 

18 comments

Strongly agree on life extension and the sheer scale of the damage caused by aging-related disease. Has always confused me somewhat that more EA attention hasn't gone towards this cause area considering how enormous the potential impact is and how well it has always seemed to perform to me on the important/tractable/neglected criteria.

~100,000 people die from age-related diseases every day. ~100 billion people have died in our history. (Read that again.) Aging causes an immense amount of suffering, both to those who endure it for years and to those who must grieve. It also causes irrecoverable loss, and is perhaps the greatest tragedy that is treated as normal. If every person who dies of a preventable disease like malaria is a tragedy, I do not see why those dying of other preventable causes are not tragedies as well. Even if you believe extending the human lifespan is not important, consider the case where you’re wrong: if your perspective is incorrect, then ~100k more tragedies happen for every day we delay solving it.

 

I agree that longevity research is worth closer attention by EA, but this argument needs work. And we need the right argument to merit EA attention.

What is life extension research?

Let's define three types of related research: life extension research (LER), ordinary biomedical research (OBR), and public health research (PHR).

In OBR, scientists induce specific diseases or injuries in model research organisms to study a causal pathway or treatment. In LER, they do not induce any specific disease.

In both PHR and LER, scientists study the relationship between environmental factors, behavior, biology, and health metrics. PHR examines environmental or behavioral changes to promote health metrics. LER examines biological changes to promote health metrics.

If life extension research isn't uniquely trying to extend life, what's the argument for focusing on this specific area?

It's plausible that by achieving a health-promoting environment and behaviors via PHR, and hammering down specific diseases via OBR, we could prolong health and life indefinitely.

So far, this hasn't happened, but it's possible that precision medicine is the answer to all our problems. By breaking diseases down into more subtypes, and equipping ourselves with tools to personalize each patient's course of treatment, we can much more reliably treat diseases as they occur. Perhaps we can also systematically tackle the individual symptoms of old age, from thymic involution to arthritis to wrinkles. By becoming excellent at treating each one individually, we can achieve big gains in healthspan and lifespan.

LER takes a complementary approach. It's widely accepted that normal, healthy living still subjects organisms to forms of damage, and that this damage accumulates over the lifespan.

What's more controversial is whether we can find enough tractable interventions to slow or reverse this damage. Biochemical pathways, genetic architectures, and tissue structures are ludicrously complicated. Intervening in many of the pathways considered root causes of aging, such as accumulated genetic damage, has so far proven incredibly hard even in research organisms in basic and preclinical studies, for both scientific and regulatory reasons.

To make LER an EA cause area, we need an argument that there are a lot of concrete, underexplored, and potentially tractable interventions we could be studying, or that the cause is even more important than the death count makes it seem.

Just this year, a flood of funding poured into longevity research, from governments and billionaires. So we need to identify life extension research that's not adequately covered by these funding sources. Is there simply still a lot of room for more funding? Are billionaires thinking too short-term? Is government too conservative? Are we worried that if the big gains from life extension come from privately held companies, we'll wind up missing out on those gains because they'll end up in the hands of the few?

An alternative to a tractability-and-neglect based argument is an importance-based argument. There's a lot of pessimism about the prospects for technical AI alignment. If serious life extension becomes a real possibility without depending on an AI singularity, that might convince AI capabilities researchers to slow down or stop their research and prioritize AI safety much more. Possibly, they might become more risk-averse, realizing that they no longer have to make their mark on humanity within the few decades that ordinary lifespans allow for a career. Possibly, they might even be creating AI with the main hope that the AI will cure aging and let them live a very long time. Showing that superintelligent AI isn't necessary for this outcome might convince them to slow down. If we're as pessimistic as Eliezer Yudkowsky about the prospects for technical AI alignment, then maybe we ought to move to an array of alternative strategies.

Is the "life extension research as AI safety intervention" argument reasonable?

There's a common trend of shoehorning this or that pet mainstream cause area into EA. Are we just doing that here? One way we can check is by seeing if the argument proves too much. Can we argue that climate change, funding for the arts, or abortion access in the USA is a pressing AI safety intervention using the same argument? If so, and if we'd find that argument dubious, then we should also be dubious about LER.

We can imagine the following arguments:

  • Fighting climate change is crucial for AI safety. A lot of AI capabilities researchers might be afraid their most likely cause of death is from climate change, or believe that AI capabilities will be crucial for fighting climate change effectively. If we can show them that we can fight climate change without relying on AI capabilities, maybe they'll stop!
  • Funding the arts is crucial for AI safety. A lot of AI capabilities researchers might think the world is too ugly and dull, with artists rehashing old styles or producing work that's ever more abstract and unpleasant. They might be hoping that superintelligent AI can revitalize the arts and dramatically enhance our wellbeing on a daily basis through artistic enjoyment. But if we simply fund the arts a lot more, we can show them that the world can be a more beautiful place without relying on AI capabilities research!
  • Pro-choice activism is crucial for AI safety. A lot of AI capabilities researchers might think that... OK, I have to admit I am having a hard time coming up with anything coherent here.

My gut check is that life extension is a much more compelling example of an intervention having AI safety implications as a secondary effect. I think the reason is that climate change is not likely to kill everybody, everybody has different opinions on what constitutes beauty, and there's already a lot of great art out there.

By contrast, life extension potentially impacts everybody, and there is no substitute for the benefit it would provide.

If death isn't the tragedy, then what is?

Right now, we haven't adequately worked out exactly how to specify our values. That sort of work is what Toby Ord describes as proper to the "Long Reflection." We're not there yet. We're on The Precipice, trying to create enough stability and long-term security to survive into the Long Reflection.

So we don't need to specify why individual death is or isn't bad, solve population ethics, or anything like that. In Ord's model, the key thing is for humanity and its capacity for future flourishing to survive and stabilize. If LER is an important way we can achieve that, then that becomes the most important argument in its favor as an EA cause area, at least from a mainstream longtermist perspective.

The tragedy of death during our current Precipice era of history is that the prospect of near-term old age and death terrifies individual people into doing terrible things and neglecting altruism. If we typically lived in good health to age 200, then trying to cram a whole high-achievement career, family, etc. into ages 20-65 would be a "live fast, die young" strategy. It only seems like mature adult behavior because nobody lives to age 200 right now.

Conclusion

I think it would help turn LER into an EA cause area if we emphasized the potential impact on AI safety and, more generally, on short-term values alignment with the longterm future. It would also help if we got very specific about room for more funding, identified tractable concrete interventions left inadequately explored, and made strong efforts to explain the difference between life extension, ordinary biomedical, and public health research, and why LER specifically needs more attention.

As much as the LER -> AI safety argument strikes me as plausible and important, it's not nearly good enough in the form I'm outlining here. Needs more work!

An alternative to a tractability-and-neglect based argument is an importance-based argument. There's a lot of pessimism about the prospects for technical AI alignment. If serious life extension becomes a real possibility without depending on an AI singularity, that might convince AI capabilities researchers to slow down or stop their research and prioritize AI safety much more. Possibly, they might become more risk-averse, realizing that they no longer have to make their mark on humanity within the few decades that ordinary lifespans allow for a career. Possibly, they might even be creating AI with the main hope that the AI will cure aging and let them live a very long time. Showing that superintelligent AI isn't necessary for this outcome might convince them to slow down. If we're as pessimistic as Eliezer Yudkowsky about the prospects for technical AI alignment, then maybe we ought to move to an array of alternative strategies.

This is a very interesting line of argument that I wish were true, but I'm not sure it's very convincing as it stands. We can hypothesize about capabilities researchers who are relying on making advancements in AI in order to make a mark during their finite lifespans, or in order for the AI to cure aging-related disease to save them from dying. But how many capabilities researchers are actually primarily motivated by these factors, such that solving aging would significantly move the needle in convincing them not to work on AI?

What's also missing is acknowledgement that some of the forces could push in the other direction - that solving the diseases of old age could contribute to greater AI risk in various ways. Aubrey de Grey is an example of a highly prominent figure in life extension and aging-related disease who was originally an AI capabilities researcher, and who only changed careers because he thought aging was both more neglected and more important.

Another possibility is that solving aging-related disease could extend the productive lifespan of capabilities researchers. John Carmack, for example, is a prodigious software engineer in his 50s who has recently decided to put all of his energy into AI capabilities research, and he's pushing on with this despite people trying to convince him of the risks[1]. Morbid and tasteless as it might sound, it's possible in principle that succeeding in life extension/aging-related-disease research would give people like him enough additional productive and healthy years to become the creator of doom, whereas in worlds like ours where such breakthroughs are not made, they are limited by when they are struck down by death or dementia.

Those are very small examples, but in any case it isn't obvious to me where things would balance out, considering the myriad complicated possible nth-order effects of such a massive change. You could speculate all day about these: maybe the sheer surplus of economic resources/growth from, e.g., not having to deal with the massive human capital loss/turnover caused by aging-related disease killing everyone after a while results in significantly more resources going into capabilities research, speeding up timelines. There are plenty of ways things could go.

  1. ^

    Eliezer Yudkowsky has personally tried to convince him about AI risk without success. This despite Carmack being an HPMOR fan.

I agree the argument needs fleshing out - only intended as a rough sketch.

There are three possibilities:

  1. Longevity research success -> AI capabilities researchers slow down b/c more risk-averse + achieved their immortality aims that motivated their AI research
  2. Longevity research success -> no effect on AI capabilities researcher activity
  3. Longevity research success -> Extends research career of AI capabilities researchers, accelerating AI discovery

You also appeal to just open-ended uncertainty - even if we come up with strong confident predictions on these specific mechanisms, we still haven't moved the needle on predicting the effect of longevity research success on AI timelines.

Here are a few quick responses.

  1. Longevity research success would also extend the careers of AI safety researchers. A counterargument is that AI safety researchers are mostly young. In the very short term, this may benefit AI capabilities research more than AI safety research. Over time, that may flip. However, with short AI timelines, longevity research is not an effective solution, because it's extremely unlikely that we'll have convincing proof we've achieved longevity escape velocity within the next 10-20 years. If we all became immortal now and AI capabilities were to be invented soon, this aspect might be net bad for safety. If we became immortal in 20 years and AI capabilities would otherwise be invented in 40 years, then both the safety and capabilities researchers get the benefit of career extension.
  2. Longevity research success may also make politicians and powerful people in the private sector (early beneficiaries of longevity research success) more risk-averse, making them regulate AI capabilities with more scrutiny. If they shut off the giant GPUs, it will be hard for capabilities research to succeed. It's even easier to imagine politicians + powerful businessmen allowing AI capabilities research to accelerate as a desperate longevity gamble than it is to imagine the AI capabilities researchers themselves pursuing it for that reason.
  3. It is difficult for researchers to switch from CS to biology and vice versa. I think de Grey is probably a rare exception, and I think the problem of longevity research success causing a flood of research into AI capabilities is unlikely. Indeed, I expect concrete wins in longevity research would pull people in the other direction as the field became superheated.
  4. We should emphasize that under longtermist EV calculus, we only need to become mildly confident that longevity research success has a positive sign to think it's overwhelmingly important.
  5. If we're extremely uncertain and we really truly think the issue is course-of-the-universe-determiningly important, then that just means we really ought to think it through, not stop at "I'm just very uncertain." What are some additional concrete scenarios where longevity research makes things better or worse? 

You also appeal to just open-ended uncertainty

I think it would be more accurate to say that I'm simply acknowledging the sheer complexity of the world and the massive ramifications that such a large change would have. Hypothesizing about a few possible downstream effects of something like life extension on something as far away from it causally as AI risk is all well and good, but I think you would need to put a lot of time and effort into it in order to be very confident at all about things like directionality of net effects overall. 

I would go as far as to say the implementation details of how we get life extension could themselves change the sign of the impact with regard to AI risk - there are enough different possible scenarios for how it could go, each amplifying different components of its impact on AI risk, to produce a different overall net effect.

What are some additional concrete scenarios where longevity research makes things better or worse? 

So first, you didn't respond to the example I gave with regard to preventing human capital waste (preventing people with experience/education/knowledge/expertise from dying of aging-related disease), and the additional slack from the extra general productive capacity in the economy more broadly that could go into AI capabilities research.

Here's another one. Let's say medicine and healthcare becomes a much smaller field after the advent of popularly available regenerative therapies that prevent diseases of old age. In this world people only need to see a medical professional when they face injury or the increasingly rare infection by a communicable disease. The demand for medical professionals largely disappears, and the best and brightest (medical programs often have the highest/most competitive entry requirements) who would have gone into medicine are routed elsewhere, including AI, accelerating capabilities and causing faster overall timelines.

An assumption that much might hinge on is that I expect differential technological development, with regard to capabilities versus safety, to pretty heavily favour accelerating capabilities over safety in circumstances where additional resources are made available for both. This isn't necessarily going to be the case, of course; for example, the resources could in theory be routed exclusively towards safety. But I just don't expect most worlds to go that way, or even for enough of the resources to be allocated towards safety that the additional resources yield positive expected value very often. Even something as basic as this, though, is subject to a lot of uncertainty.

Personally I’d be shocked if longevity medicine resulted in a downsizing of the healthcare industry.

Longevity medicine likely will displace some treatments for acute illness with various maintenance treatments to prevent onset of acute illness. There will be more monitoring, complex surgeries, all kinds of things to do.

And the medical profession doesn’t overlap that well with AI research. It’s a service industry with a helping of biochem. People who do medicine typically hate math. AI is a super hot industry. If people aren’t going into it, it’s because they don’t have great fit.

I don’t know enough about differential development arguments to respond to that bit right now.

Overall, I agree that the issue is complex, but I think it’s tractably complex and we shouldn’t overestimate the number of major uncertainties. If in general it were too hard to predict the macro consequences of strategy X, then it would not be possible to strategize at all. We clearly have a lot of confidence around here about the likelihood of AI doom. I think we need a good clean argument about why we can make confident predictions in certain areas and why we make “massive complexity” arguments in others.

I thought I did respond to your human capital waste example. Can you clarify the mechanism you’re proposing? Maybe it wasn’t clear to me.

With regard to the massive complexity argument, I think this points to a broader issue. Sometimes, we feel confident about the macroeconomic impact of X on Y. For example, people in the know seem pretty confident that the US insourcing the chip industry is bad for AI capability and thus good for AI safety. What is it that causes us to be confidently uncertain due to a “massive complexity” argument in the case of longevity, but mildly confident in the sign of the intervention in the case of chip insourcing?

I don’t know your view on chip insourcing, but I think it’s relevant to the argument whether you’d also make a “massive complexity” argument for that issue or not.

Edit: I misclicked submit too early. Will finish replying in another comment.

Let's define three types of related research: life extension research (LER), ordinary biomedical research (OBR), and public health research (PHR).

In OBR, scientists induce specific diseases or injuries in model research organisms to study a causal pathway or treatment. In LER, they do not induce any specific disease.

I would also count a third path that we might call tool-making. Building better gene-sequencers is tool-making. AlphaFold is tool-making. CRISPR is tool-making.

Many problems in biology might not be solvable with the current toolkit, and need the development of new tools before they can be solved.

I agree tools are important. I'm trying to define the difference between how LER/OBR/PHR go about using tools to improve health outcomes.

Interplanetary Species Expansion

Any argument that sees climate change as an existential risk should also see the chance of being able to create a base on another planet as essentially zero. The amount of work it takes to do the geoengineering needed to make Mars habitable is a lot more than it takes to manage even the most extreme global warming scenarios.

When we look at the economic effect that sanctions have on countries, because of how specialized some technology happens to be, Elon Musk's idea that 1 million people might be enough for a self-sustaining civilisation on Mars seems questionable with near-current technology.

The work that SpaceX does seems at the moment like a good project to lay necessary groundwork for doing anything on other planets. It's unclear whether there are currently other projects that would make sense. Once Starship is operational, other projects that use the technology might become more viable. 

When we look at the economic effect that sanctions have on countries, because of how specialized some technology happens to be, Elon Musk's idea that 1 million people might be enough for a self-sustaining civilisation on Mars seems questionable with near-current technology.

I sort of want to say it, but this is probably a worst-case scenario, and thus should set an upper bound here for sanctions, even with oil.

Yes, international sanctions have hurt, but Russia did most of the damage to itself, and the idiocy of Russia's economic policy was a big portion of why its economy is hurting.

The rational explanation is that Putin wants to destroy anything that could threaten authoritarianism long-term.

Russia is not the only case of sanctions. 

The chip sanctions against China are currently also in discussion and it's interesting how many different countries are involved in the supply chain for that technology.

The problem that the United States has with the Jones Act is another example that shows the necessity of scale for the best modern technologies. 

I notice most of them fall into a few distinct categories. Global development/health, biotech, climate change, nuclear risk/global conflict, and AI Safety.

Life Extension

Life extension is inherently about biotech. It makes sense to put more resources into life extension than we currently do, but the moment you believe that it's something qualitatively different from biotech, you are likely not going to do very effective work in the field, and will end up searching for the key under the streetlight.

I meant the focus on biotech in terms of the prevention/mitigation of bioweapons, rather than the positive side of biotech. I'll change the wording to avoid confusion.

Many agree that nanotechnology is expanding, even if it’s no longer in the news. 

Most of the expansion in nanotech is not about Drexler's self-replicating nanobots, which are the x-risk people are worried about.

Protein-folding advances seem to me the only thing pointing in that direction, and I'm unsure whether it helps to think about them in terms of nanotech risk rather than biorisk.

Re nanotechnology, you link to Ben Snodin's post as agreeing that nanotechnology is feasible, and then ask where all the nanotechnology research institutions are, but fail to mention that Snodin recommends only "2-3 people spending at least 50% of their time on this by 3 years from now". I guess I agree that there should be more EA research on nanotechnology, but I think you exaggerate the amount of attention it should have.

Re coordination failures, there is one group focused on it, the Game B community; however, they aren't EAs and I have little confidence that they'll make any progress. EA does have people working on improving institutional decision-making, which seems closely related, like the Effective Institutions Project. I think "solving coordination problems" more generally is not that neglected and/or tractable, given that there are strong incentives for a lot of people and organisations to do so already, but I may be wrong.

Re coordination failures, there is one group focused on it, the Game B community

The interesting thing about that article is that it doesn't say anything about how the Game B community actually organizes itself in a Game B way.

Yeah, I was excited when I heard Game B was being created. Will have to wait and see if it yields any fruit. Improving institutional decision-making targets more of the symptom than the cause, but it might work as a proxy solution, which is probably much easier.

 

"I think "solving coordination problems" more generally is not that neglected and/or tractable, given that there are strong incentives for a lot of people and organisations to do so already, but I may be wrong."

 

But this seems to be the core of coordination problems: everyone has a collective incentive to solve them, and yet we see failures all around us. I'm too pessimistic to think we can get to something like "dath ilan", but it seems like we can surely do better than our current SNAFU. I agree that it might not be tractable. I imagine it might depend on a few key breakthroughs that are able to outcompete less-than-optimal methods.