"Regarding the first question: evolution hasn’t made great pleasure as accessible to us as it has made pain. Fitness advantages from things like a good meal accumulate slowly, but a single injury can drop one’s fitness to zero, so the pain of an injury is felt more strongly than the joy of pizza. But even pizza, though quite an achievement, is far from the greatest pleasure imaginable.
Humankind has only recently begun exploring the landscape of bliss, compared to our long evolutionary history of pain. If you can’t imagine a pleasure great enough to make the trade-off worthwhile, consider that you may be falling prey to the availability heuristic. Pain is a lot more plentiful and salient, but it’s not a lot more important. The fact that pleasure is rare should only make it more valuable when offsetting pain, and an hour is a lot longer than 5 minutes."

What makes you think there's an equilibrium where the greatest pleasure imaginable is exactly as good as the greatest suffering imaginable is bad (that's at least what I think you think)? I think there's an asymmetry insofar as truly great suffering is hard to outweigh with great happiness. However, since no finite suffering can be infinitely bad, there has to be some amount of pleasure that outweighs 5 minutes of the greatest suffering imaginable; I just don't think 1 hour of the greatest pleasure is enough. Something like 1,000,000 years may be enough.

EDIT: 1,000,000 years might be over the top. Assuming 100 years of greatest pleasure outweigh 5 seconds of greatest suffering, 6,000 years of greatest pleasure should be enough.

"Taking seriously the position that life is not worth living should lead one to a philosophy of extinctionism – the stance that it would be pretty great if all humans died in their sleep tonight."

If you subscribe to timeless decision theory, you may still be against extinctionism even if you think life is net-negative, because if people expected to die painlessly in their sleep, they would be absolutely terrified, and that would be bad.
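The edit's 6,000-year figure can be checked directly, under the (assumed) premise that the value of pleasure and the disvalue of suffering each scale linearly with duration:

```python
SECONDS_PER_MINUTE = 60

# 5 minutes of the greatest suffering lasts 60 times as long
# as 5 seconds of it...
scale = (5 * SECONDS_PER_MINUTE) / 5

# ...so if 100 years of greatest pleasure offset 5 seconds,
# offsetting 5 minutes takes 60 times as many years.
years_needed = 100 * scale
print(years_needed)  # 6000.0
```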
If I understand correctly, you may also reach your position without using a non-causal decision theory if you mix utilitarianism with the deontological constraint of being honest (or at least meta-honest [see https://www.lesswrong.com/posts/xdwbX9pFEr7Pomaxv/meta-honesty-firming-up-honesty-around-its-edge-cases]) about the moral decisions you would make.

If people asked you whether you would kill/did kill a patient, and you couldn't confidently say "No" (because of the deontological constraint of (meta-)honesty), that would be pretty bad, so you must not kill the patient.

EDIT: Honesty must include keeping promises (to a reasonable degree -- it is always possible that something unexpected happens which you didn't even consider as an improbable possibility when making the promise) to avoid Parfit's Hitchhiker-like problems.
A slightly modified version: instead of choosing at once whether you want to take one box or both boxes, you first take box 1 (and see whether it contains $0 or $1,000,000), and only then do you decide whether you want to also take box 2.

Assume that you only care about the money; you don't care about doing the opposite of what Omega predicted.
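For reference, the expected-value case in the unmodified problem can be sketched as below. The payoffs and the predictor accuracy are assumptions taken from the standard presentation of Newcomb's problem (box 1 holds $1,000,000 iff Omega predicted one-boxing; box 2 always holds $1,000), not from the comment itself:

```python
def expected_winnings(one_box: bool, accuracy: float,
                      big: int = 1_000_000, small: int = 1_000) -> float:
    """Expected payout given a strategy and Omega's prediction accuracy."""
    if one_box:
        # Omega correctly predicted one-boxing with probability `accuracy`,
        # so box 1 is full that often.
        return accuracy * big
    # Omega correctly predicted two-boxing with probability `accuracy`,
    # so box 1 is full only when the prediction missed.
    return (1 - accuracy) * big + small

print(expected_winnings(True, accuracy=0.75))   # 750000.0
print(expected_winnings(False, accuracy=0.75))  # 251000.0
```

Even a modestly accurate predictor makes one-boxing come out ahead in expectation; the modification above is interesting precisely because seeing box 1's contents first tempts you toward the two-boxer's reasoning.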
Slightly related: suppose Omega forces you to choose a number 0 < p <= 1, and then, with probability p, you get tortured for 1/p² seconds. Assume that for any T, being tortured for 2T seconds is exactly twice as bad as being tortured for T seconds. Also assume that your memory gets erased afterwards (this is to make sure there won't be additional suffering from something like PTSD).

The expected number of seconds of torture is p * 1/p² = 1/p, so, in terms of expected value, you should choose p = 1 and be tortured for 1 second; the smaller the p you choose, the higher the expected torture. Would you actually choose p = 1 to minimize the expected torture, or would you rather choose a very low p (like 1/3^^^^3)?
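The arithmetic behind the dilemma is a one-liner; a minimal sketch (the function name is mine):

```python
def expected_torture_seconds(p: float) -> float:
    # With probability p you are tortured for 1/p**2 seconds,
    # so the expectation is p * (1/p**2) = 1/p.
    return p * (1 / p**2)

print(expected_torture_seconds(1.0))   # 1.0
print(expected_torture_seconds(0.25))  # 4.0  -- lower p, higher expectation
```

So the expectation-minimizing choice is p = 1: certain, but brief, torture.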
I think this could be considered one of the very basics of rational thinking. Like, if someone asked what rationality/being rational means and wanted a short answer, this Litany is a pretty good summary.
I once thought I could prove that the set of all natural numbers is as large as its power set. However, I was smart enough to acknowledge my limitations (what‘s more likely: that I made a mistake in my thinking I hadn‘t yet noticed, or that a theorem pretty much any professional mathematician accepts as true is actually false?), so I actively searched for errors in my thinking. Eventually, I noticed that my method only works for finite subsets (the set of all natural numbers is, indeed, as large as the set of all FINITE subsets), but not for infinite subsets.
Eliezer's method also works for all finite subsets, but not for infinite subsets.
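The finite-subset claim can be made concrete with the standard binary encoding, which pairs each finite subset of ℕ with a unique natural number (a sketch; the function names are mine):

```python
def subset_to_nat(s: set) -> int:
    """Encode a finite set of naturals as a single natural: sum of 2**i.
    This is a bijection between finite subsets of N and N itself --
    and it's exactly why the construction can't handle infinite subsets:
    for an infinite subset the sum would diverge."""
    return sum(2**i for i in s)

def nat_to_subset(n: int) -> set:
    """Decode: bit i of n is set iff i is in the subset."""
    return {i for i in range(n.bit_length()) if (n >> i) & 1}

print(subset_to_nat({0, 2, 5}))  # 37  (= 1 + 4 + 32)
print(nat_to_subset(37))         # {0, 2, 5}
```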
1. No, because their belief doesn't make any sense. It even contains logical contradictions, which makes it "super impossible", meaning there's no possible world where it could be true (the omnipotence paradox proves that omnipotence is logically inconsistent; a god which is nearly omnipotent, nearly omniscient and nearly omnibenevolent wouldn't allow suffering, which undoubtedly exists; "God wants to allow free will" isn't a valid defence, since a lot of suffering isn't caused by other humans, like illness and natural catastrophes) (note: I'm adding "nearly" to avoid paradoxes like the omnipotence paradox).
2. Belief isn't a choice: for example, you can't "choose" to believe that the continent of Australia doesn't actually exist. Therefore, I wouldn't be able to hold religious beliefs even if I acknowledged that doing so would bring greater happiness without negative side effects.
However, if we make the hypothetical world even less convenient by adding that I actually would be able to effectively self-deceive, and that there would be absolutely no negative side effects, then yes, I would choose to believe.
3. I'm already highly sympathetic towards the "Effective altruism" movement and donate a lot of money to their causes. The reason I'm not donating literally everything I don't need for survival is that I'm not morally perfect; I admit that.
(EDIT just to correct spelling)
There would actually be several changes:
I would stop being vegan.
I would stop donating money (note: I currently donate quite a lot of money to effective altruism projects).
I would stop caring about Fairtrade.
I would stop feeling guilty about anything I did, and stop making any moral considerations about my future behaviour.
If others are overly friendly, I would fully abuse this for my advantage.
I might insult or punch strangers "for fun" if I'm pretty sure I will never see them again (and they don't seem like the kind of person who seeks retribution).
I would become less willing to help others.
I would care very little about politics, and might not go voting.
I wouldn't be angry at anyone unless their action influences me personally (note: if they hurt a person with whom I have a relationship, that would influence me; if they hurt a stranger, it wouldn't).
And there would probably be quite a few more changes I haven't thought of yet.
I would still continue my current hobbies, and do things if I have a "feeling" that I "want" to do them. These "feelings" would only be stopped by fear of personal costs, not by moral considerations (and not making moral considerations would indeed make a change; see above).
More accurately, "absence of evidence you would expect to see if the statement is true" is evidence of absence.
If there's no evidence you'd expect if the statement is true, absence of evidence is not evidence of absence.
For example, if I tell you I've eaten cornflakes for breakfast, no matter whether or not the statement is true, you won't have any evidence in either direction (except for the statement itself) unless you're willing to investigate the matter (like, asking my roommates). In this case, absence of evidence is not evidence of absence.
Now, suppose we meet in person and I tell you I've eaten garlic just an hour before. You'd expect evidence if that statement were true (bad breath); in this case, absence of evidence is evidence of absence.
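Both cases fall out of Bayes' rule when you update on *not* observing the evidence. A minimal sketch; the priors and likelihoods below are illustrative numbers I've chosen, not anything from the examples:

```python
def posterior(prior: float, p_e_if_true: float, p_e_if_false: float,
              evidence_seen: bool) -> float:
    """Bayes' rule. With evidence_seen=False we condition on the
    *absence* of the evidence."""
    p_true = p_e_if_true if evidence_seen else 1 - p_e_if_true
    p_false = p_e_if_false if evidence_seen else 1 - p_e_if_false
    return p_true * prior / (p_true * prior + p_false * (1 - prior))

# Garlic: bad breath is likely given garlic (0.9), rare otherwise (0.05),
# so *not* smelling it pushes the probability well below the 0.5 prior.
print(round(posterior(0.5, 0.9, 0.05, evidence_seen=False), 3))  # 0.095

# Cornflakes: no observable trace either way (both likelihoods 0),
# so the "missing" evidence leaves the prior untouched.
print(posterior(0.5, 0.0, 0.0, evidence_seen=False))  # 0.5
```

The update is exactly as strong as the evidence was expected: when P(evidence | true) = P(evidence | false), its absence moves you nowhere.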
I've actually noticed this long before I've read the post. For me, the thought "I'm having many old thoughts" is itself an old thought now.
The same is true for the thought "the thought 'I'm having many old thoughts' is itself an old thought now", and so on.