"You just said that you doubt that "more than a third" of EAs identify as LW-rationalist. Even aside from the fact that you can be one without identifying as one, one third shows a huge influence. I wouldn't find that one third of vegetarians are LW-rationalists, or 1/3 of atheists, for instance, even though those are popular positions here."
I feel like you're making a pretty elementary subset error there...
"The very fact that you're asking how to reconcile cryonics with EA shows that cryonics is not in the category of psychologically easy to give up things. Otherwise you'd just avoid cryonics immediately." No, I currently see no inside-view need to go for cryonics, emotionally or otherwise. Enough people I respect went for cryonics that my outside view was that they knew something I did not. That does not appear to be the case, and I see no reason to consider this further, at least until I grow substantially older or sicker. Nor do I see a need to continue this conversation.
Happy New Year.
"then rationality doesn't demand that I stop spending money on myself in order to be good." Well, yes, because whether you're "being" good is somewhat irrelevant. Objective conditions of the world don't change based on what you're "being" ontologically; reality is affected by what you do.
My terminal goals involve the alleviation of suffering, with the minimization of bad habits being an instrumental goal. It so happens that spending money on cryonics is unlikely to be the best way to achieve this goal (or so it appears; no strong arguments have been made in its favor as of today, which is what I initially asked for).
"you've ended up considering normal human behavior to be bad and you have a standard which no person can meet (including yourself)." Normality is not a terminal value of mine, and I doubt it is for you. Having an impossible goal would be absurd IF success/failure were binary. But it really isn't. There is so much suffering in the world that being halfway, or even a tenth of the way, successful still means a large reduction of suffering in the world.
"LW tries to get people to support MIRI based on rationality, multiplying utility, and ignoring warm fuzzies. Someone who believes all of that, but doesn't believe the part about the AI being a danger, would end up in EA, so in practice LW is associated with EA." Your argument is of the form: A, B, and C result in X, while A, B, and not-C result in Y, so "in practice" X and Y are associated. But this is bizarre when a lot of different things can result in Y, at best tangentially related to A and B, and completely independent of the truth of C. Plain ol' egalitarianism comes to mind, as do Rawls and libertarian theology.
I will ignore the ad hominem.
You still have not addressed the point that adopting new behaviors is qualitatively different psychologically than getting rid of old ones. And from an ethical, non-egotistical perspective, this difference is quite significant.
"By the same reasoning, the marginal utility of any amount used to improve your health is greater than the marginal utility of using it on malaria nets (except insofar as improving your health lets you survive to produce more money for malaria nets). In fact, the same could be said about any expenditure on yourself whatsoever, whether health-related or not."
Your point being?
"Cryonics is not special in this regard compared to all the other ways of spending money on yourself, which you do do."
I spend money on myself (less than you think, probably) because of a) inertia bias and b) signaling/minimizing weirdness points. For a), perhaps you and your peers are different, but I have substantially more self-control against starting new bad habits than against quitting old ones. Thus, it makes sense to apply greater scrutiny to new actions I might take than to giving up things I find emotionally difficult to lose. If you are more rational than me on this point, I congratulate you.
For b), outside of this tiny microcosm and some affiliated places, cryonics is highly unlikely to bring me greater status/minimize my perceived weirdness. Indeed, my prior is very strongly that it has the opposite effect.
Thus, while having a selective demand for rigor for newer ideas may initially seem off-putting to rationalists, I think it makes a lot of sense in practice.
"Though it amuses me to see one LW weird idea collide head on with another LW weird idea." I doubt Singer is an LW'er, and there were plenty of ethical people of an optimizing variety in the world before Singer or LW. EA as a term is closely tied to the LW-sphere, yes, but it's really just a collection of obvious-in-retrospect ideas put together. I doubt more than a third of the current population of EA identifies as LW/rationalist (I certainly don't), and I also strongly suspect that EA will outgrow or outlive LW, though I admit to some perhaps unjustifiable optimism on that front.
Your double negatives are confusing me. :) Can you clarify?
It's not at all obvious to me that the marginal utility of $120/year (at a time when I'm extremely healthy, as part of a demographic that's exceptionally long-lived) is greater than that of, e.g., 20 malaria nets (which is an absolute lower bound for any decision; there are ways I think I can leverage my donations significantly further). Can somebody clarify this intuition for me?
Note that many rationalists/LW'ers have taken the Giving What We Can pledge, the most famous of whom is Scott Alexander (Yvain):