Natália Mendonça

Sometimes I wish I had a halting oracle.


Anti-Aging: State of the Art

[F]ew members of [LessWrong] seem to be aware of the current state of the anti-aging field, and how close we are to developing effective anti-aging therapies. As a result, there is a much greater (and in my opinion, irrational) overemphasis on the Plan B of cryonics for life extension, rather than Plan A of solving aging. Both are important, but the latter is under-emphasised despite being a potentially more feasible strategy for life extension given the potentially high probability that cryonics will not work.

I think there is a good reason for there being more focus on cryonics than on solving aging on LessWrong. Cryonics is a service anyone with the means can purchase right now, whereas there is barely anything anyone can do to slow their aging (modulo getting young blood transfusions and perhaps taking a few drugs, neither of which works that well).

If you are a billionaire, or very knowledgeable about biology, you might be able to contribute somewhat to anti-aging research, but only a very small fraction of the population falls into either category, whereas pretty much anyone who can get life insurance in the US can get cryopreserved.

What would be a good name for the view that the value of our decisions is primarily determined by how they affect causally-disconnected regions of the multiverse?

Things outside of your future light cone (that is, things you cannot physically affect) can “subjunctively depend” on your decisions. If beings outside of your future light cone simulate your decision-making process (and base their own decisions on its output), then your decisions influence what happens there, even though no physical signal connects you to those regions. It can be helpful to take those effects into account when determining your decision-making process, and to act as if you were all of your copies at once.

Those were some of my takeaways from reading about functional decision theory (described in the post I linked above) and updateless decision theory.

What would be a good name for the view that the value of our decisions is primarily determined by how they affect causally-disconnected regions of the multiverse?

I still don't understand what you mean by "causally-disconnected" here. In physics, it's anything in your future light cone (under some mild technical assumptions).

I think you mean to say “causally-connected,” not “causally-disconnected”?

I’m referring to regions outside of our future light cone.

A causally disconnected part would be caring now about something already beyond the cosmological horizon.

Yes, that is what I’m referring to.

What would be a good name for the view that the value of our decisions is primarily determined by how they affect causally-disconnected regions of the multiverse?

Thanks for your comment :) The definition of causality I meant to use in the question is physical causality, which doesn’t refer to things like affecting what happens in causally-disconnected regions of the multiverse that simulate your decision-making process. I’m going to edit the question to make that clearer.

Engaging Seriously with Short Timelines

Thanks for pointing this out; you’re right that your net worth wouldn’t necessarily be correlated with world GDP in many plausible takeoff scenarios. I suppose the main determinants of whether that correlation stays as strong as it is today, or falls closer to zero, would be the viability of taxation and redistribution of wealth by governments, as well as of trade involving humans, during and after a takeoff. I wonder what I should expect the correlation to be.

ETA: After all, governments don’t redistribute human wealth to either horses or chimpanzees, and humans don’t engage in trade with them.

Engaging Seriously with Short Timelines
if things get crazy you want your capital to grow rapidly.

Why (if by "crazy" you mean "world output increasing rapidly")? Isn't investing to try to have much more money in case world output is very high somewhat like buying insurance to pay the cost of a taxi to the lottery office in case you win the lottery? Your net worth is positively correlated with world GDP, so worlds in which world GDP is higher are worlds in which you have more money, and thus worlds in which money has a lower marginal utility to you. People do tend to value being richer than others in addition to merely being rich, but perhaps not enough to generate the numbers needed to make those investments the obviously best choice.

(h/t to Avraham Eisenberg for this point)
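The marginal-utility argument above can be made concrete with a toy calculation. This is only an illustrative sketch with made-up numbers (the two wealth levels, the bet's cost and payout, and the use of log utility are all assumptions, not anything from the comment):

```python
import math

# Two equally likely worlds. Baseline wealth is already 10x higher in the
# high-GDP ("crazy") world, since net worth correlates with world output.
p_crazy = 0.5
wealth_normal = 100_000      # wealth if growth stays normal (assumed)
wealth_crazy = 1_000_000     # wealth if world output explodes (assumed)

def utility(w):
    # Log utility: a standard model of diminishing marginal utility of money.
    return math.log(w)

# Option A: no extra bet.
eu_no_bet = (1 - p_crazy) * utility(wealth_normal) \
          + p_crazy * utility(wealth_crazy)

# Option B: a bet that costs $10k in the normal world and pays an extra
# $20k in the crazy world. In expected dollars this bet is positive...
eu_bet = (1 - p_crazy) * utility(wealth_normal - 10_000) \
       + p_crazy * utility(wealth_crazy + 20_000)

# ...but it lowers expected utility, because the payout lands in the world
# where you are already rich and each extra dollar matters least.
print(eu_bet < eu_no_bet)  # True
```

Under these assumed numbers the bet gains $5,000 in expectation yet still loses expected utility, which is the sense in which betting on high-output worlds resembles insuring against winning the lottery.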

Competition: Amplify Rohin’s Prediction on AGI researchers & Safety Concerns

The third question is

Does X agree that there is at least one concern such that we have not yet solved it and we should not build superintelligent AGI until we do solve it?

Note the word “superintelligent.” This question would not resolve as “never” if the consensus specified in the question is reached after AGI is built (but before superintelligent AGI is built). Rohin Shah notes something similar in his comment:

even if we build human-level reasoning before a majority is reached, the question could still resolve positively after that, since human-level reasoning != AI researchers are out of a job

Unrelatedly, you should probably label your comment “aside.” [edit: I don't endorse this remark anymore.]

Six economics misconceptions of mine which I've resolved over the last few years

Agreed. It makes more sense to be proud of changing your mind when doing so means adopting a model that makes better predictions while being no more complex than the model you used to have, rather than merely making your model more complex.

What information on (or relevant to) modal immortality do you recommend?

I recommend Forever and Again: Necessary Conditions for “Quantum Immortality” and its Practical Implications by Alexey Turchin. I don’t endorse everything in it (especially not the use of “Time” on the x-axis of Figure 3, the assumption that there is such a thing as a “correct” theory of personal identity, or the claim that there is a risk of “losing something important about one’s own existence” when using a teletransporter). Still, it is fairly thorough, and it is one of the articles most relevant to modal immortality that I’ve found in the bibliography of Digital Immortality: Theory and Protocol for Indirect Mind Uploading, which you mentioned in your answer (and, for that matter, one of the most relevant I’ve found anywhere).

Project Proposal: Gears of Aging

I don’t see how that contradicts his claim. Having the data required to figure out X is really not the same as knowing X.
