Aka, coming up with a better term for applying LW-style rationality techniques to 'rational self-interest'.
Aka, a parallel to the current movement of 'Effective Altruism': where EA seeks the best available ways to fulfill one's values when those values focus roughly on improving the well-being and reducing the suffering of people in general, this would seek the best available ways to fulfill one's values when those values focus roughly on improving the well-being and reducing the suffering of oneself.
(I find that I may have use for this term both in reality and in my NaNoWriMo attempt.)
"Effective self-care" or "effective well-being".
Okay. The "effective" part in "Effective Altruism" refers to the tool (rationality). "Altruism" refers to the values. The cool thing about "Effective Altruism", compared to rationality in general (as in LW or CFAR), is that it's specific enough to allow a community to work on relatively concrete problems. EA is mostly about the global poor, animal welfare, existential risk, and a few others.
What I'd imagine "Effective self-care" would be about is such things as health, fitness, happiness, positive psychology, life-extension, etc. It wouldn't be about "everything that isn't covered by effective altruism", as that's too broad to be useful. Things like truth and beauty wouldn't be valued (aside from their instrumental value) by either altruism or self-care.
"Effective Egoism" sounds like the opposite of Effective Altruism, as if they are enemies. "Effective self-care" sounds like it complements Effective Altruism. You could argue that effective altruists should be interested in spreading effective self-care both amongst others, since altruism is about making others better off, and amongst themselves, because if you take good care of yourself you are in a better position to help others, and if you are efficient about it you have more resources to help others.
On the negative side, both terms might sound too medical. And self-care might sound too limited compared to what you might have in mind. For example, one might be under the impression that "self-care" is concerned with bringing happiness levels to "normal" or "average", instead of super duper high.
Consider this line to have gotten an extra thumbs-up from me. :)
The fact that you have highlighted the differences between these two closely-related concepts, which I hadn't managed to think through on my own, means this thread has been worthwhile whatever the result of the poll might be.
More brainstorm fodder here.
That's a very good suggestion list, and a good link; thank you kindly. :)
What's wrong with (instrumental) Rationality?
"Rationality" is the tool, but by itself, doesn't describe what goals and values the tool is being used to promote. There can be rational altruists, rational hedonists, rational omnicidal maniacs who want to eliminate suffering by eliminating life, rational egoists, and so on.
Let's poll. Which do you think captures it best?
(I apologize for not responding sooner; I've just realized I'm in one of my periodic bouts of anhedonia and social procrastination.)
My short absence seems to have given enough time to get a selection of votes in, and since I'm just about to actually apply the results of this discussion to my fiction, it's time to analyze the results.
I'm ruling out 'rational self-interest' as already being used to refer to a closely-related but not-quite-identical concept, so that I can have my characters discuss the differences.
It looks like 'rational' or 'effective' beat out all the other suggestions fairly handily. This happens to line up with my own instincts, so I'm willing to take it as confirmation.
And, again, it looks like we have two clear winners, with enough margin over the others to be confident they're actually the most popular.
Checking the votes of the available combinations of those, though
... there isn't quite as clear a preference for any one over any of the others. (As in, a single weird voter could have skewed the results.) But it looks like either 'effective egoism' or 'effective self-interest' is going to win out. ... And, at least for fictional purposes, I think I'll apply the Crazy Straws principle and simply have continuing arguments over which of the two terms should be applied in any given case.
I like "effective egoism" enough already; the alternatives I've seen suggested sound dumb, and this one sounds snappy. It might not be perfect for communicating exactly the right message of what the idea is about, but you can do that by explaining, whereas having a cool name can only be achieved by the name itself.
How bout "Egoism"? Like, the brand doesn't need a makeover. Those who get it, get it. Those who don't, well, if we cared about reaching out, some egoists we'd be, yeah?
Unfortunately, that term is somewhat overbroad, as it includes variants of egoism that I want to be able to avoid pointing to when I use 'Effective Egoism' (and/or its replacement term) to point to the specific category of egoism I'm focusing on. Eg, I want to avoid including certain forms of Randian Objectivism which lead to long-term harm, I want to exclude simple hedonism, and I definitely want to exclude uses of the term 'egoism' that aren't about the ethical aspects, such as psychological egoism.
I may be an egoist (by some definitions), and I may have schizoid personality disorder, but modern medicine has saved my life on more than one occasion, and if I subscribed to a form of egoism which disparaged such interactions, then I'd be doing myself harm. /Because/ I'm selfish, I care about promoting the good for other people, at least in certain specific ways.
Right, but the great thing about being an egoist is that you can endorse modern medicine saving lives, without having to be a lifesaver yourself. It is better if other people are altruists. Making them egoists would make leeching harder.
There are various counter-arguments, such as that if there are too few egoists and too many altruists, then the Overton Window will shift to the point that egoism can become socially disapproved of; or that altruism isn't even necessary for reasonably rational egoists to engage in positive-sum interactions which are nearly indistinguishable from altruistic behaviour, as has been explored in some depth by libertarian philosophers; or that any one egoist is unlikely to be able to persuade any significant number of altruists to become egoists, so the optimal egoist approach is more likely to focus attention on one's own actions rather than on persuading others to become egoists; and so on.
I guess. I feel if your egoism is more complicated than "Do whatever you want.", then you've kind of lost sight of the main thing. But obviously this comment is vulnerable to the same objection, so I guess I'll just close by saying that calling egoism where you end up caring about Overton Windows "effective egoism" seems pretty exactly wrong. There's a whole Fake Selfishness link on LW, yeah? That seems like what this is.
I /want/ to go camping on Phobos. There are certain practical problems in accomplishing that. Likewise, there are a great many practical problems in accomplishing many other, more ordinary things that I want to do. Some of those problems are soluble, depending on the resources I choose to throw at them; but with only a finite amount of resources, I have to make choices about /which/ of my wants to try to fulfill.
I'm aiming for not dying at all. (At least, not permanently.) Which leads, in this case, to not considering there to be much difference between having a few more seconds of life compared to one year of life, if those are the only two options; and as long as humanity survives, then there's a small but reasonable chance of an approximation of my mind being reconstructed, which, while not as good a choice as a full continuation of all my memories, is still better than nothing. So I would selfishly choose to save the Earth.
On the other hand, if I consider the original question...
... without assuming that I'm a member of humanity doomed to die anyway, such as if I'm an upload; I'm currently drafting a novel in which the consideration of precisely that question is a significant plot point, and it is not a One-Sided Policy Debate.
If I live in a world where someone in physical proximity to me is likely to be horribly tortured for fifty years, then I very likely live in a world where /I/ have a moderately high chance of being horribly tortured for fifty years. If I balance the odds, then a certainty of minor pain from a stubbed toe seems a small price to pay to not live in a world with even a moderate chance of me experiencing fifty years of horrible torture.
Mu; I do not think that such a guarantee is feasible.
I'm curious about the "long-term harm" you're concerned about.
I have had conversations with some self-proclaimed Objectivists who were of the opinion that sticking to their stated principles was more important than avoiding dying; in other words, that Objectivism was so manifestly correct that it was a suicide pact; restated again, they said that death was preferable to negotiation and tactical compromise on econo-political arguments; put still another way, they had not "learned how to lose". That form of Objectivism falls outside the parameters of the form of egoism I seek to name.
I get the impression that you are searching for a specific term that applies to your specific variant of egoism. I think it a good idea to provide a counterweight to the optimization of selfless goals. I'd prefer a clear two-sided distinction instead of two independent concepts that may overlap or leave cases open. One might also go for a continuous range.
How about distinguishing forms of rationalism in general by the weight placed on one's own interests versus others'? The two ends indeed already have names: egoism and altruism.
Hm... is it possible that my stab at a temporary term is actually sufficient as a permanent one?
Effective Altruism... Towards My Future Selves
egoism/altruism is a false dichotomy.
Yes, yes, we are all fundamentally merely computational algorithms running on the same sort of hardware substrate made of stardust, with no fundamental differences between one another. But if one piece of the universe which my algorithms identify as 'other' comes towards the piece of the universe which my algorithms identify as 'myself' while waving a knife and screaming, I'm still going to treat the 'other' differently than I will treat 'myself', and give myself's desire to run a higher priority than the other's desire to grab my wallet. Other bits of stardust's algorithms will lead to different behaviours, such as surrendering the wallet freely, and my algorithms find it useful to have words that can describe the different behaviours differently. Thus, even if the underlying theory is false, being able to describe the dichotomy still has value in terms of instrumental rationality, and in this case (using a sci-fi terminology analogy), there is no reason to coin a new word like 'smeerp' since the existing term 'rabbit' already exists and is generally understood well enough to allow both thought and communication.
I'm not making a handwavy philosophical argument. Pretending that you are at the production-possibility frontier of altruism/egoism is both a result of cognitive distortion and an inducement to further cognitive distortion.
Thank you for pointing out the term 'production-possibility frontier' in this context, which helps clarify some of my thoughts.
As it is, I don't actually disagree with you, in the main. More than once, I've mentioned that it's often the case that considering both effective altruism and effective egoism (by whatever name) as guides tends to lead towards the same behaviour, in most everyday situations.
“Effective Personal Hedonism”
“Effective Egoistic Hedonism”
“Effective Egocentric Hedonism”
“Effective Ego-Centered Hedonism”
“Effective Self-Centric Hedonism”
“Effective Self-Centered Hedonism”
I have previously been critical of Effective Altruism, comparing it to trying to be an Effective CooperateBot (example).
If we were to extend Effective CooperateBot to the other sorts of players in prisoner's dilemmas, by analogy we'd call egoists Effective FairBot / Effective PrudentBot, depending on how your brand of egoism deals with CooperateBots.
That's a mouthful and might be too technical, but it might be a nice way of reframing the question; when you said 'egoism,' I didn't know exactly what you meant.
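The bot taxonomy above can be sketched in toy form. In the literature these agents are defined via provability logic (proof search about each other's source code); the depth-limited mutual simulation below is a simplification of my own, and the function names and depth parameter are illustrative, not any standard implementation:

```python
# Toy sketch of CooperateBot / FairBot / PrudentBot behaviour in a
# one-shot prisoner's dilemma, approximated by bounded mutual simulation
# rather than the proof-theoretic definitions used in the literature.
C, D = "C", "D"

def cooperate_bot(opponent, depth=3):
    """Cooperates with everyone, unconditionally."""
    return C

def defect_bot(opponent, depth=3):
    """Defects against everyone, unconditionally."""
    return D

def fair_bot(opponent, depth=3):
    """Cooperate iff a depth-limited simulation says the opponent
    cooperates with us; optimistic at the recursion floor, so two
    FairBots settle on mutual cooperation."""
    if depth == 0:
        return C
    return C if opponent(fair_bot, depth - 1) == C else D

def prudent_bot(opponent, depth=3):
    """Like FairBot, but additionally require that the opponent would
    defect against DefectBot -- i.e. punish anyone exploitable as a
    CooperateBot. Probing with the full depth is safe here because
    defect_bot never calls its opponent."""
    if depth == 0:
        return C
    if opponent(prudent_bot, depth - 1) == C and opponent(defect_bot, depth) == D:
        return C
    return D

print(fair_bot(cooperate_bot))    # C: FairBot rewards the cooperator
print(prudent_bot(cooperate_bot)) # D: PrudentBot exploits the cooperator
print(prudent_bot(fair_bot))      # C: the two conditional bots cooperate
```

The difference in how the two conditional bots treat `cooperate_bot` is exactly the "depending on how your brand of egoism deals with CooperateBots" distinction from the comment above.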
Yes, I came across that term; but it doesn't quite fit, any more than the mere term 'altruism' fits the concept of 'effective altruism'.