I really like the quantum immortality / eternal suffering argument as an intellectual toy. However, to be a rationalist is to hold that all beliefs sit strictly between p > 0 and p < 1, without concluding in a general way that everything is unsure and possible. You know there is a nonzero probability that all human knowledge is bullshit and that we are dolls manipulated by a deceptive malin génie (Descartes' evil demon). The relative weight of the wave function's branches in which we never die is so close to zero that, beyond any reasonable degree of certainty, it is something that just doesn't happen in any practical way, or at least must not affect any practical decision.
This argument against suicide doesn't need infinite survival. A large fraction of suicide attempts end in serious injuries that also prevent the person from making further attempts. My guess is around 10 percent (I don't want to use AI to check).
For example, I read about a boy who shot himself in the head but missed and ended up destroying both his eyes. This means he will suffer for the rest of his life but will be unable to attempt suicide again.
Sure, but that's an argument against suicide that doesn't really need the backup of quantum properties.
As I said in another comment: if we apply the same argument to euthanasia, we do need QI. In the case of euthanasia the chance of misfiring is extremely small, something like 0.00001%, so the normal utility calculation doesn't work. But QI updates any arbitrarily small probability of misfiring to 1.
Agreed. And this argument stands without having to adopt the framework of QI, which is possibly difficult for someone undergoing a mental health crisis; such crises often affect the prefrontal cortex in a way that significantly impairs the very high-level reasoning this post requires. The post models a 'rationalist who has rejected standard arguments against suicide', but the target population for an actual intervention is more likely a 'rationalist in acute neurological crisis', who requires different tools.
But if we apply the same argument to euthanasia, we need QI: in the case of euthanasia the chance of misfiring is extremely small, something like 0.00001%, so the normal utility calculation doesn't work. QI, however, updates any arbitrarily small probability of misfiring to 1.
There is an assumption that in every timeline, either you die or it becomes good indefinitely. I am not seeing a reason for that assumption. What excludes a timeline from being bad indefinitely?
Nothing directly prevents this, but it seems reasonable to assume that "I don't care about humans" is a more likely failure mode for AI than "I care about humans in a way that is bad for them but won't kill them". I agree that it's a possibility though.
Mod note: this post violates our LLM Writing Policy for LessWrong and was incorrectly approved, so I have delisted the post to make it only accessible via link. I've not returned it to your drafts, because that would make the comments hard to access.
Please don't post more direct LLM output, or we'll remove your posting permissions.
What were the tells? In hindsight I can smell it in the way the exposition doesn't quite hold together at the joints, but it passed me by on first reading.
Near the end there's the "isn't just ineffective — it's actively harmful", which I believe I would have picked up on if something like that had been near the start, but I didn't notice anything so blatant elsewhere in the article.
I don't remember but, doing some Memory Archeology, I suspect:
– "quantum immortality" is a yellow flag, generally, esp. from a user I don't know well.
– I checked its LLM score, which was 80%.
– I looked at the opening paragraphs and was like "yep, it's got a bit of That Vibe."
Epistemic status: Mostly serious, disguised as dark humor. If this helps even one person, it was worth writing.
Trigger warning: This was written by an LLM after a philosophical discussion. If you think that inherently negates the argument, you do you.
Content warning: Frank discussion of suicide and suicidal ideation. No graphic content.
The Problem With Standard Arguments
If you're reading LessWrong, the standard arguments against suicide probably don't work on you. Let's be honest about why.
If you talked to your therapist yesterday and came away feeling better, this section is not for you. Close this tab. Your therapist is doing a good thing. What follows is specifically for people on whom that approach has already, verifiably, not worked.
"People care about you." — You know this is a manipulation. It's an attempt to impose social obligation where the question was about expected utility. Even if true, it doesn't address the actual calculation you're running. You can care about people and still conclude that the negative value of your continued experience outweighs the grief cost to others, especially discounted over time as they adapt.
"It gets better." — You know this is a base rate applied without reference to your specific situation. For some people it doesn't get better. Some distributions have heavy left tails. You've probably already thought about selection effects in survivorship narratives. The people for whom it didn't get better aren't around to post about it.
"Life is inherently valuable." — You've examined this claim. You may have rejected it. Or you may have taboo'd "inherently" and found nothing left. Either way, asserting it harder doesn't constitute an argument.
"Suicide is a permanent solution to a temporary problem." — You've considered the possibility that your problems are not, in fact, temporary, and that characterizing them as such is a fully general counterargument that proves nothing.
I'm not saying these arguments are bad in general. For most people, they work. The social-emotional framing resonates, the statistics provide genuine comfort, and the philosophical premises go unquestioned.
But there's a specific population — call it "people who actually take their stated beliefs seriously and follow through on the implications" — for whom the standard toolkit is essentially inert.
This is that population's argument.
The Setup
Consider the following exchange:
Terrible bedside manner. Potentially correct.
The Argument
The argument is simple and requires only one premise: that you assign nonzero credence to any metaphysical framework in which death is not a clean exit.
The cleanest version uses quantum immortality (QI). Under the Many-Worlds Interpretation of quantum mechanics, every quantum event causes the universe to branch into all possible outcomes. The quantum immortality hypothesis extends this: since you can only subjectively experience branches in which you are alive, your subjective experience always continues along branches where you survive.
If QI is true, suicide doesn't work. What works is attempting suicide. And failed suicide attempts have, empirically, catastrophic outcomes: brain damage from oxygen deprivation, organ failure from poisoning, paralysis, chronic pain, loss of motor function. The medical literature on this is unambiguous and grim.
So the expected value calculation becomes:
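A rough sketch, with $p$ standing for your credence that QI-style survival is real and the value terms left schematic:

$$
\mathbb{E}[\text{attempt}] \;=\; (1-p)\,V_{\text{clean exit}} \;+\; p\,V_{\text{survive, badly injured}}, \qquad
\mathbb{E}[\text{don't attempt}] \;=\; V_{\text{status quo}}.
$$

Here $V_{\text{survive, badly injured}}$ accumulates over an indefinitely long (possibly unbounded) stretch of suffering worse than the status quo, so even a small $p$ drags $\mathbb{E}[\text{attempt}]$ far below $\mathbb{E}[\text{don't attempt}]$.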
You don't need to believe QI is likely. You need to believe it is possible. Even at 5% credence on Many-Worlds + the anthropic selection hypothesis, the expected disvalue of attempting suicide is enormous, because you're multiplying a small probability by an infinite (or indefinitely long) duration of increased suffering.
This is a straightforward dominance argument. Suicide is not "life vs. death." It's "current suffering vs. (probability-weighted) much greater suffering." The option you thought was an exit is actually a gamble with horrible downside risk and no upside.
Generalizing Beyond QI
The argument doesn't depend on quantum mechanics specifically. It works under any framework where:
This covers:
The common thread: if you can't be certain that death is a clean, permanent cessation of experience, then attempting suicide has a nonzero probability of making things strictly worse. And the "strictly worse" scenario is really, really bad.
"Okay, But Then What? An Eternity of Suffering?"
There's an obvious objection: if QI means you're stuck here forever regardless, and your life is net-negative, then all I've argued is that you're trapped. Congratulations, you've replaced "I should kill myself" with "I'm doomed to suffer infinitely." Not exactly an upgrade.
But this misses something important about what QI actually implies when you combine it with the current trajectory of technology.
Your goal, under QI, is not to be happy right now. It's to survive long enough for circumstances to change. And one specific circumstance has a nonzero probability of occurring within your lifetime: someone builds a benevolent superintelligent AI.
Under QI, that nonzero probability is all you need. Here's why:
In branches where aligned AI gets built, your situation can improve dramatically — possibly to an extent that is currently unimaginable. Suffering gets solved. That's the whole point of the alignment project.
In branches where unaligned AI gets built, or where some other existential catastrophe occurs — well, you die in those branches. Under QI, those branches are pruned from your subjective experience. You don't experience them. They're irrelevant to your expected future.
In branches where AI takes a long time and you keep muddling through — you're no worse off than you are now.
So the payoff matrix looks like this: the only branches you subjectively experience going forward are ones where you survive. Among those, some contain aligned AI and are very good. The rest are roughly status quo. The "everything goes horribly wrong" scenarios are exactly the ones you don't experience, because those are the ones where you die.
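The same case split in compact, schematic form:

$$
\text{what you subjectively experience going forward} \;=\;
\begin{cases}
\text{very good} & \text{aligned AI gets built} \\
\text{roughly status quo} & \text{AI takes a long time} \\
\text{nothing (branch pruned)} & \text{unaligned AI or other catastrophe}
\end{cases}
$$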
QI doesn't just trap you. It guarantees that you eventually reach the good timeline, by pruning all the branches where things go irreversibly wrong. "Hold on" isn't just a coping strategy under QI — it's the optimal play in a game you're guaranteed to eventually win, because you can only lose by not playing.
This is, admittedly, a lot of metaphysical weight to put on a contested interpretation of quantum mechanics. But remember: you don't need to believe it. You need to not be certain it's false. And the expected value of "hold on for the possibility of a very good future that QI guarantees you'll eventually reach" is extremely high.
The Meta-Point
This argument is interesting not because quantum immortality is likely true: most physicists who accept MWI don't accept QI, and the additional assumptions about consciousness and identity it requires are far from established.
It's interesting because it demonstrates something about intervention design: the most effective argument for a given population might look nothing like what therapeutic textbooks recommend.
Standard anti-suicide interventions are optimized for neurotypical populations. They use emotional appeals, social leverage, and vague promises. For people who evaluate these as arguments and find them wanting, the interventions don't just fail — they backfire, because they confirm the suspicion that nobody has an actual reason for you to stay alive, just sentiments dressed up as reasoning.
The QI argument works on exactly the population where standard arguments fail, precisely because:
Sometimes the right intervention isn't "here's why life is worth living." Sometimes it's "the exit you think exists doesn't, so you might as well try to make things less bad from the inside."
That's bleak. It's also, for the right person, the only argument that has any traction at all. And if it buys that person enough time for circumstances to change — or for them to find better arguments, or therapy that actually works for them, or a community, or a project, or whatever — then it did its job.
A Disclaimer
Do not use this argument on normal people.
Seriously. For most humans, the standard arguments work fine, and this one will predictably backfire. If you tell someone in crisis that their life sucks but they should stay alive because of a physics thought experiment about infinite suffering, you will make things worse. People who are not steeped in decision theory and metaphysics will hear "your life sucks" and tune out everything after that.
This argument is a precision tool for a specific failure mode. It works on people who have already rejected the standard toolkit and who reason in terms of expected value and credence weights. Deploying it outside that population isn't just ineffective — it's actively harmful, because normalizing "your life sucks" as an opening therapeutic move is a terrible idea in almost all contexts.
Know your audience.
If you or someone you know is struggling, and this kind of argument isn't what you need right now, that's completely fine. Standard resources exist for a reason. The important thing is finding whatever works for you — and if the usual approaches aren't working, consider that the right therapist or framework might just be one you haven't found yet.