There's an upper limit to how bad it can be, relatively speaking, because you're already shedding copies of your genome in public all the time.
Yes, lol :)
I noticed after playing a bunch of games of a mafia-type game with some rationalists that when people made edgy jokes about being in the mob or whatever, they were more likely to end up actually being in the mob.
What schedule are you going to be posting these on? I've been eagerly looking forward to the next installment!
[Note: potential info hazard, but probably good to read if you already read the question.]
[Epistemic status: this stuff is all super speculative due to the nature of the scenarios involved. Based on my understanding of physics, neuroscience, and consciousness, I haven't seen anything that would rule this possibility out.]
All I want to know is, is this stuff just being pulled out of his butt? Like, an extremely unlikely hypothetical that nonetheless carries huge negative utility? I'd be okay with that, as I'm not a utilitarian. Or have these scenarios actually been considered plausible by AI theorists?
FWIW, I've thought about this a lot and independently came up with and considered all the scenarios mentioned in the Turchin excerpt. It used to really really freak me out, and I believed it on a gut level. Avoiding this kind of outcome was my main motivation for actually getting the insurance for cryonics (the part I was previously cryocrastinating on). However, I now believe that QI is not an s-Risk and don't feel personally worried about the possibility anymore.
One thing to note is that this is a potential problem in any sufficiently large universe, and doesn't depend on a many-worlds style interpretation being correct. Tegmark has a hierarchy of multiverse levels, which differ in ways that affect which scenarios we might face. I do believe in many-worlds (as a broad category of interpretations) though.
Lots of the comments here seem confused about how this works, so I'll recap. If I'm at the point of death where I'm still conscious, the next moment I'll experience will be (in expectation) whichever conscious state has the highest probability mass in the multiverse while also being a valid next conscious moment from the previous one. Note that this next conscious moment is not necessarily in the future of the previous moment. If the multiverse contains no such moments, then we would just die the normal way. If the multiverse includes lots of humans doing ancestor simulations, you could potentially end up in one of those, etc. The key is that out of all conscious beings in the multiverse who feel like this just happened to them, those are (tautologically) the ones having the subjective experience of the next valid conscious moment. And it's valid to care about these potential beings; AFAICT this is the same reason I care about my future selves (who do not exist yet) in the normal sense.
Regarding cryonics, it seems like the best way to preserve a significant amount of information about my last conscious moment. To whatever extent information about this is lost, a civilization that cares about this could optimize for likelihood of being a valid next conscious moment. I think this is the main actionable thing you can do for this. Of course, this only passes the buck to the future, since there is still the inevitable heat death of the universe to contend with.
Another scenario that seems especially plausible for sudden deaths is Aranyosi's[1]. In this case, the highest-probability-mass next conscious moment will be one based on the moment from a few seconds before, but with a "false" memory of having survived the sudden death. This has relatively high probability because people sometimes report having this kind of experience after a close call. But this again simply passes the buck to the future, where you're most likely to die from a gradual decline.
However, I think that by far, the most likely situation is common to death by aging, illness, or heat death of the universe. At the last moment of consciousness, the only next conscious moments that will be left will be in highly improbable worlds. But which world you are most likely to "wake up" in is still determined by Occam's razor. People seem to imagine that these improbable worlds will be ones where your consciousness remains in a similar state to the one you died in, but I think this is wrong.
Think carefully about what things actually need to happen to support a conscious experience. Some minimal set of neurons would need to be kept functional -- but beyond that, we should expect entropy to affect everything that is not causally upstream of the functionality of that set of neurons. Since strokes happen often, and don't always cause loss of consciousness, we can expect them to eventually occur in every region of the brain that is non-essential for consciousness. Because people can experience nerve damage to their sensory neurons without losing consciousness, we can expect that the ability to experience physical pain will decay. Emotional pain doesn't seem to be qualitatively that different from physical pain (e.g. it is also mitigated by NSAIDs), so I expect this will be true for pain in general.
So most of your body and most of your mind will still decay as normal; only the neuronal circuitry (and whatever else, perhaps blood circulation) absolutely essential to inducing a valid next conscious moment will miraculously survive. Anesthesia works by globally reducing synapse activity, so the initial stages of this would likely feel like going under anesthesia, but where you never quite go out. Because anesthetics stop pain (remember, this is still true when they're applied locally), and because by default we do not experience pain, I'm now pretty sure that even if QI is real, infinite agony is very unlikely.
Yeah, I think the engineer intuition is the bottleneck I'm pointing at here.
This rings really true with my own experiences; glad to see it written up so clearly!
I think that lots of meditation stuff (in particular The Mind Illuminated) is pointing at something like this. One of the goals is to train all of your subminds to pay attention to the same thing, which leads to increasing your ability to have an intention shared across subminds (which feels related to Romeo's post). Anyway, I think it's really great to have multiple different frames for approaching this kind of goal!
I think people make decisions based on accurate models of other people all the time. I think of Newcomb's problem as the limiting case where Omega has extremely accurate predictions, but that the solution is still relevant even when "Omega" is only 60% likely to guess correctly. A fun illustration of a computer program capable of predicting (most) humans this accurately is the Aaronson oracle.
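For anyone curious how a program can predict "random" human keypresses, here's a minimal sketch of the standard n-gram-counting approach. This is my own illustrative reconstruction, not Aaronson's actual code (the class name and parameters here are made up; the real demo, as I understand it, conditions on roughly the last five 'f'/'d' keypresses):

```python
from collections import defaultdict

class Oracle:
    """Predicts the next 'f'/'d' keypress from counts of what followed recent contexts."""

    def __init__(self, context_len=5):
        self.context_len = context_len
        # counts[context][key] = how often `key` has followed `context` so far
        self.counts = defaultdict(lambda: {"f": 0, "d": 0})
        self.history = ""

    def predict(self):
        # Guess whichever key has followed the current context more often,
        # defaulting to "f" on ties or never-before-seen contexts.
        c = self.counts[self.history[-self.context_len:]]
        return "f" if c["f"] >= c["d"] else "d"

    def observe(self, key):
        # Record the key actually pressed under the current context,
        # then extend the history.
        self.counts[self.history[-self.context_len:]][key] += 1
        self.history += key

# Against a predictable repeating "player", the oracle locks on after only
# a few presses; humans trying to be random leak similar (weaker) patterns.
oracle = Oracle(context_len=3)
keys = "fd" * 60  # a very predictable player: f, d, f, d, ...
correct = 0
for key in keys:
    if oracle.predict() == key:
        correct += 1
    oracle.observe(key)
accuracy = correct / len(keys)
```

On this toy input the oracle only errs while its counts are still empty, so its accuracy approaches 100%; against actual humans the win rate is lower but still reliably above chance, which is the point of the demo.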
This post has caused me to update my probability of this kind of scenario!
Another issue related to the information leakage: in the industrial revolution era, 30 years was plenty of time for people to understand and replicate leaked or stolen knowledge. But if the slower team managed to obtain the leading team's source code, it seems plausible that 3 years, or especially 0.3 years, would not be enough time to learn how to use that information as skillfully as the leading team can.
Is there a reason not to take it if you're younger than 40?