Sometimes I wish I had a halting oracle.
Things outside of your future light cone (that is, things you cannot physically affect) can “subjunctively depend” on your decisions. If beings outside of your future light cone simulate your decision-making process (and base their own decisions on yours), you can affect things that happen there. It can be helpful to take into account those effects when you’re determining your decision-making process, and to act as if you were all of your copies at once.
Those were some of my takeaways from reading about functional decision theory (described in the post I linked above) and updateless decision theory.
I still don't understand what you mean by "causally-disconnected" here. In physics, it's anything in your future light cone (under some mild technical assumptions).
I think you mean to say “causally-connected,” not “causally-disconnected”?
I’m referring to regions outside of our future light cone.
An example of caring about a causally-disconnected part would be caring now about something already beyond the cosmological horizon.
Yes, that is what I’m referring to.
Thanks for your comment :) The definition of causality I meant to use in the question is physical causality, which excludes things like affecting what happens in causally-disconnected regions of the multiverse that simulate your decision-making process. I’m going to edit the question to make that clearer.
Thanks for pointing this out; you’re right that your net worth wouldn’t necessarily be correlated with world GDP in many plausible takeoff scenarios. I suppose the main determinants of whether the correlation between the two stays as strong as it is today or falls closer to zero would be the viability, during and after a takeoff, of things like taxation, government redistribution of wealth, and trade involving humans. I wonder what I should expect the correlation to be.
ETA: After all, governments don’t redistribute human wealth to either horses or chimpanzees, and humans don’t engage in trade with them.
if things get crazy you want your capital to grow rapidly.
Why (if by “crazy” you mean “world output increasing rapidly”)? Isn’t investing to try to have much more money in case world output is very high somewhat like buying insurance to pay for a taxi to the lottery office in case you win the lottery? Your net worth is positively correlated with world GDP, so worlds in which world GDP is higher are worlds in which you have more money, and thus worlds in which money has a lower marginal utility to you (see the numeric sketch below). People do tend to value being richer than others in addition to merely being rich, but perhaps not enough to generate the numbers you need to make those investments the obviously best choice.
(h/t to Avraham Eisenberg for this point)
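To make the marginal-utility point concrete, here is a minimal numeric sketch in Python. The log-utility assumption and the wealth figures are made up purely for illustration; they are not from the original discussion.

```python
# Minimal sketch: why a marginal dollar is worth less in worlds where
# world GDP (and hence, by assumption, your net worth) is much higher.
# The utility function and wealth figures below are illustrative assumptions.

wealth_normal = 100_000      # hypothetical net worth if world output grows normally
wealth_boom = 1_000_000      # hypothetical net worth if world output explodes

# With log utility u(w) = ln(w), the marginal utility of an extra dollar is 1 / w.
mu_normal = 1 / wealth_normal
mu_boom = 1 / wealth_boom

# A marginal dollar delivered in the boom world is worth ~10x less to you
# than one delivered in the normal world, which is the sense in which
# betting to be even richer there resembles insuring against winning the lottery.
print(mu_normal / mu_boom)  # -> 10.0
```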
The third question is:
Does X agree that there is at least one concern such that we have not yet solved it and we should not build superintelligent AGI until we do solve it?
Note the word “superintelligent.” This question would not resolve as “never” if the consensus specified in the question is reached after AGI is built (but before superintelligent AGI is built). Rohin Shah notes something similar in his comment:
even if we build human-level reasoning before a majority is reached, the question could still resolve positively after that, since human-level reasoning != AI researchers are out of a job
Unrelatedly, you should probably label your comment “aside.” [edit: I don't endorse this remark anymore.]
Agreed; I feel like it makes more sense to be proud of changing your mind when that means adopting a model that makes better predictions with complexity similar to or lower than your old model’s, rather than merely making your model more complex.
I recommend Forever and Again: Necessary Conditions for “Quantum Immortality” and its Practical Implications by Alexey Turchin. I don’t endorse everything in there (especially not the use of “Time” on the x-axis of Figure 3, the assumption that there is such a thing as a “correct” theory of personal identity, or the claim that there is a risk of “losing something important about one’s own existence” when using a teletransporter), but it is fairly thorough, and it is one of the articles most relevant to modal immortality that I’ve found in the bibliography of Digital Immortality: Theory and Protocol for Indirect Mind Uploading, which you mentioned in your answer (and, for that matter, one of the most relevant I’ve found anywhere).
I don’t see how that contradicts his claim. Having the data required to figure out X is really not the same as knowing X.
I think there is a good reason why LessWrong focuses more on cryonics than on solving aging. Cryonics is a service anyone with the means can purchase right now, whereas there is barely anything anyone can do to slow their aging (modulo getting young blood transfusions and perhaps taking a few drugs, neither of which works that well).
If you are a billionaire, or very knowledgeable about biology, you might be able to contribute somewhat to anti-aging research, but only a very small fraction of the population is either of those things, whereas pretty much anyone who can get life insurance in the US can get cryopreserved.