I didn't downvote only because it was already negative enough. This didn't add anything to my understanding of your beliefs, or of any general approach to preferences or decision-making.
Extreme hypotheticals don't tell us much. They can be useful as extrapolations or to test how scalable our intuitions are (note: they rarely are), but they don't work as a starting point. More importantly, this argument hinges on the definition of happiness, and on the unstated (and, IMO, incorrect) assumption that happiness is time-consistent and reflectively available. In truth, people are very bad at predicting their future happiness level, and even worse at ascribing causality to that happiness.
Mind responding to:
For example, it's hard to argue someone has your best interest at heart if they advise you to say no to the following:
Suppose that an advanced team of neuroscientists and computer scientists could hook your brain up to a machine that gave you maximal, beyond-orgasmic pleasure for eternity. Then they will blast you and the pleasure machine into deep space at near light-speed so that you could never be interfered with but at the cost of a billion innocent lives. Would you let them do this for you?
As you didn't respond to my post despite commenting on it:
Mind responding to:
For example, it's hard to argue someone has your best interest at heart ...
I think my response is above - I have no intention of arguing either side of that. It's so far out of what's possible that there's really no information available about how one should or could react.
In truth, I would advise you to say no (or just walk away and not engage) if someone made you this offer - they're either dangerously delusional or trying to scam you.
It's so far out of what's possible that there's really no information available about how one should or could react.
Fine: suppose Bob gets an offer that makes his life 1% better at the cost of making everyone else's lives much worse. Could you have Bob's best interest at heart if you were to tell him not to take that offer?
You are basically making a claim about what you believe, but I don't think you have thought about the reader. Do you believe that what you wrote might convince someone who's currently not maximizing their own happiness?
You are also making a very maximalist claim. You not only claim that it's worth doing things to increase your happiness when there are tradeoffs; you are calling for maximizing it.
Do you believe that what you wrote might convince someone who's currently not maximizing their own happiness?
I do. Do you share the world-view I described?
I call it a fallacy even though, strictly speaking, it isn't one if happiness isn't your goal, because I believe it's irrational for happiness not to be your goal: you would be acting against your own best interest.
For example, it's hard to argue someone has your best interest at heart if they advise you to say no to the following:
Suppose that an advanced team of neuroscientists and computer scientists could hook your brain up to a machine that gave you maximal, beyond-orgasmic pleasure for eternity. Then they will blast you and the pleasure machine into deep space at near light-speed so that you could never be interfered with but at the cost of a billion innocent lives. Would you let them do this for you?
Saying no to this choice contradicts the idea of acting in your own self-interest. Hence I call not maximizing your own happiness a fallacy.
In addition, you should say yes because as soon as you reach that high state of happiness you are no longer concerned about ethics, regrets, or morals; the guilt you feel would disappear, and you would be happy that you made the decision to feel that level of happiness.
Since the above example is extremely unrealistic, here are some more realistic ones: