So you are effectively a revolutionary.

I'm not sure about that label; how government and societal structures will react to the eventual development of life extension technology remains to be seen, so revolutionary action may never be necessary. But regardless of which label you pick, it's true that I would prefer not to be killed merely so others can reproduce. I'm more indifferent about the specifics of how that should be achieved than you seem to imagine - there is a wide range of possible societies in which I am allowed to survive, not just variations on those you described.

I think that the next best thing you could do with the resources used to run me if you were to liquidate me would be very likely to be of less moral value than running me, at least to my lights, if not to others'.

The decision is between using those resources to support you vs using those resources to support someone else's child.

That's an example of something the resources could go towards, under some value systems, sure. Different value systems would suggest that different entities or purposes would make best moral use of those resources, of course.

To try to make things clear: yes, what I said is perfectly compatible with what you said. Your reply to this point feels like you're trying to tell me something you think I'm not aware of, but the point you're replying to already encompasses the example you gave - "someone else's child" is one possible candidate for "the next best thing you could do with the resources to run me" under some value systems.

I don't think you have engaged with my core point, so I'll state it again in a different way: continuous economic growth can support some mix of both reproduction and immortality, but at some point in the not-distant future the ease/speed of reproduction may outstrip economic growth, at which point there is a fundamental, inescapable choice that societies must make between rentier immortality and full reproduction rights.
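The tradeoff can be made concrete with a toy model (my own illustration, with invented symbols, not anything from the discussion above): suppose total productive capacity grows at rate $g$ while the number of minds grows at rate $r$.

```latex
% Toy model (illustrative only): resources available per mind over time.
% R(t): total resources, N(t): number of minds, both growing exponentially.
R(t) = R_0 e^{g t}, \qquad N(t) = N_0 e^{r t}
\quad\Longrightarrow\quad
\frac{R(t)}{N(t)} = \frac{R_0}{N_0}\, e^{(g - r)\, t}
```

If $r > g$, per-mind resources shrink toward zero, so a society must eventually either cap reproduction (forcing $r \le g$) or reclaim resources from existing minds - which is exactly the choice between rentier immortality and full reproduction rights.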

I think you may be confusing me for someone arguing for reproduction over immortality, or against rentier existence - I am not. I'm arguing simply that you haven't yet acknowledged the fundamental tradeoff and its consequences.

I thought I made myself very clear, but if you want I can try to say it again differently. I simply choose myself and my values over values that aren't mine.

The tradeoff between reproduction and immortality is only relevant if reproduction has some kind of benefit - if it doesn't, then you're trading off a good against something that has no value. Others, with different values, might face a difficult choice; for them the tradeoff is real. But for me, not so much.

As for the consequences, sacrificing immortality for reproduction means I die, which is itself the thing I'm trying to avoid. Sacrificing reproduction for immortality on the other hand seems to get me the thing I care about. The choice is fairly clear on the consequences.

Even on a societal level, I simply wish not to be killed, including for the purpose of allowing for the existence of other entities that I value less than my own existence, and whose values are not mine. I merely don't want the choice to be made for me in my own case, and if that can be guaranteed, I am more than fine with others being allowed to make their own choices for themselves too.

Suppose you asked me anyway what I would prefer for the rest of society. What I might advocate for others would depend heavily on individual factors. Maybe I would care about things like how much a particular existing person shares my values, and compare that to how much a new person would share my values. Eventually perhaps I would be happy with the makeup of the society I'm in, and prefer that no more reproduction take place. But really it's only an interesting question insofar as it's instrumentally relevant to much more important concerns, and it doesn't seem likely that I will be in a privileged position to affect such decisions in any case.

Of course I have a moral opportunity cost. However, I personally believe that this opportunity cost is low, or at least it seems that way to me. I think that the next best thing you could do with the resources used to run me if you were to liquidate me would be very likely to be of less moral value than running me, at least to my lights, if not to others'.

The question of what to do about scarcity of resources seems potentially very scary, then, for exactly the reasons you bring up - I don't think, for example, that a political zeitgeist which guarantees my death does a great job of maximizing what I believe to be valuable.

In the long term the evolution of a civilization does seem to benefit from turnover - i.e. fresh minds being born - which, due to the simple and completely unavoidable physics of energy costs, necessarily implies either indefinite economic growth or that some other minds must sleep.

I will say that I am skeptical that the "benefit" here captures what I think we should really care about. Perhaps some amount of turnover will help us compete successfully with alien civilisations we run across - I can understand that, though I hope it isn't necessary. But absent competitive pressures like this, I think it's okay to take a stand for your own life and values over those of newer, different minds with new, different values. Their values are not necessarily mine, and we should be careful not to sacrifice our own values for some nebulous "benefit" that may never come to be.

Of course, if it is your preference - if it is genuinely you truthfully pursuing your own values - to sleep or die so that some new minds can be born, then I can understand why you might choose to voluntarily sacrifice yourself. But I think it is a decision people should take very carefully, and I certainly don't wish for the civilisation I live in to make the choice for me and sacrifice me for such reasons.

The "10 years at most" part of the prediction is still open, to be fair.

While this seems to me to be true, as a non-maximally-competitive entity by various metrics myself, I see it more as an issue to overcome or sidestep somehow, in order to enjoy the relative slack I would prefer. It would seem distastefully Molochian to me if someone suggested that I and people like me should be retired/killed so the resources could power some more "efficient" entity, by whatever metrics that efficiency is calculated.

To me it seems likely that pursuing economic efficiencies of this kind could easily wipe out what I personally care about, at the very least. I see Hanson's Em worlds, for example, as probably quite hellish as a future, or, if we're luckier, closer to a "Disneyland with no Children"-style scenario.

I strongly hope that my values and people who share my values aren't outcompeted in this way in the future, as I want to be able to have nice things and enjoy my life. As we may yet succeed in extending the Dream Time, I would urge people to recognize that we still have the power to do so and preserve much of what we care about, and not be too eager to race to the bottom and sacrifice everything we know and love.

You also appeal to just open-ended uncertainty

I think it would be more accurate to say that I'm simply acknowledging the sheer complexity of the world and the massive ramifications that such a large change would have. Hypothesizing about a few possible downstream effects of something like life extension on something as causally distant as AI risk is all well and good, but I think you would need to put a lot of time and effort in to be at all confident about things like the directionality of net effects overall.

I would go so far as to say that the implementation details of how we get life extension could themselves change the sign of its impact on AI risk - there are enough different possible scenarios, each amplifying different components of that impact, to produce different overall net effects.

What are some additional concrete scenarios where longevity research makes things better or worse? 

First, you didn't respond to the example I gave regarding the prevention of human capital waste (people with experience/education/knowledge/expertise dying of aging-related disease), and the additional slack from the extra general productive capacity in the broader economy that is then able to go into AI capabilities research.

Here's another one. Let's say medicine and healthcare become a much smaller field after the advent of widely available regenerative therapies that prevent the diseases of old age. In this world people only need to see a medical professional when they face injury or the increasingly rare infection by a communicable disease. Demand for medical professionals collapses, and the best and brightest (medical programs often have the highest, most competitive entry requirements) who would have gone into medicine are routed elsewhere, including into AI, accelerating capabilities and shortening overall timelines.

An assumption that much might hinge on: I expect differential technological development to favour capabilities over safety fairly heavily in circumstances where additional resources are made available for both. This isn't necessarily going to be the case - the resources could in theory be routed exclusively towards safety - but I just don't expect most worlds to go that way, or even for a large enough share to be allocated towards safety that the additional resources very often produce positive expected value. Even something as basic as this is subject to a lot of uncertainty.
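That assumption can be stated a little more precisely (a sketch with invented symbols, not anything from the original exchange): suppose each marginal unit of freed-up resources is split between safety and capabilities.

```latex
% Net effect on AI risk of one marginal unit of freed resources.
% s: fraction allocated to safety, (1 - s): fraction to capabilities,
% v_s: marginal risk reduction per safety unit, v_c: marginal risk increase per capabilities unit.
\Delta(\text{risk}) = (1 - s)\, v_c - s\, v_s
```

The pessimistic expectation is that in most worlds $s < v_c / (v_c + v_s)$, so $\Delta(\text{risk}) > 0$ and the freed resources make things worse on net; the sign only flips when the safety share clears that threshold.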

I strongly agree on life extension and the sheer scale of the damage caused by aging-related disease. It has always somewhat confused me that more EA attention hasn't gone towards this cause area, considering how enormous the potential impact is and how well it has always seemed to me to perform on the importance/tractability/neglectedness criteria.

An alternative to a tractability-and-neglect based argument is an importance-based argument. There's a lot of pessimism about the prospects for technical AI alignment. If serious life extension becomes a real possibility without depending on an AI singularity, that might convince AI capabilities researchers to slow down or stop their research and prioritize AI safety much more. Possibly, they might become more risk-averse, realizing that they no longer have to make their mark on humanity within the few decades that ordinary lifespans allow for a career. Possibly, they might even be creating AI with the main hope that the AI will cure aging and let them live a very long time. Showing that superintelligent AI isn't necessary for this outcome might convince them to slow down. If we're as pessimistic as Eliezer Yudkowsky about the prospects for technical AI alignment, then maybe we ought to move to an array of alternative strategies.

This is a very interesting line of argument that I wish were true, but I'm not sure it is very convincing as it stands. We can hypothesize about capabilities researchers who rely on making advancements in AI in order to make a mark during their finite lifespans, or in order for the AI to cure aging-related disease and save them from dying. But how many capabilities researchers are actually primarily motivated by these factors, such that solving aging would significantly move the needle in convincing them not to work on AI?

What's also missing is any acknowledgement that some of the forces could push in the other direction - that solving the diseases of old age could contribute to greater AI risk in various ways. Aubrey de Grey, for example, is a highly prominent figure in life extension and aging-related disease research who was originally an AI researcher, and only changed careers because he thought aging was both more neglected and more important.

Another possibility is that solving aging-related disease could extend the productive lifespan of capabilities researchers. John Carmack, for example, is a prodigious software engineer in his 50s who has recently decided to put all of his energy into AI capabilities research, and he's pushing on with this despite people trying to convince him of the risks[1]. Morbid and tasteless as it might sound, it's possible in principle that success in life extension research would give people like him enough additional productive and healthy years to become the creator of doom, whereas in worlds like ours, where such breakthroughs are not made, they are limited by when they are struck down by death or dementia.

Those are very small examples, but in any case it isn't obvious to me where things would balance out, considering the myriad complicated nth-order effects of such a massive change. You could speculate all day about these: maybe the sheer surplus of economic resources/growth from, e.g., no longer suffering the massive human capital loss and turnover caused by aging-related disease killing everyone after a while results in significantly more resources going into capabilities research, speeding up timelines. There are plenty of ways things could go.

  1. ^

    Eliezer Yudkowsky has personally tried to convince him about AI risk without success. This despite Carmack being an HPMOR fan.

What you are looking for sounds very much like Vanessa Kosoy's agenda

As it so happens, the author of the post also wrote this overview post on Vanessa Kosoy's PreDCA protocol.
