The general problem with Bostrom's argument is that it applies an incorrect probabilistic model. It implicitly assumes independence where there is a causal connection, and therefore arrives at a wrong conclusion, much like conventional reasoning about the Doomsday Argument or the Sleeping Beauty problem.
For future humans, say in the year 3000, to create simulations of the year 2025, the actual year 2025 has to happen first in base reality, and then every year after it up to 3000. We know this very well. Not a single simulation can happen unless the actual reality happens first.
And yet Bostrom models our knowledge about this setting as if we were participating in a probability experiment with a random sample drawn from many "simulation" outcomes and one "reality" outcome. The inadequacy of such modelling should be obvious. Consider:
There is a bag with a thousand balls. One red and 999 blue. First the red ball is picked from the bag. Then all the blue balls are picked one by one.
and compare it to
There is a bag with a thousand balls. One red and 999 blue. For a thousand iterations a random ball is picked from the bag.
Clearly, the second procedure is very different from the first. The mathematical model that describes it doesn't describe the first at all, for exactly the same reasons that Bostrom's model doesn't describe our knowledge state.
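Here is a minimal simulation sketch (plain Python; the names and the trial count are mine) contrasting the two procedures by where the red ball ends up in the draw order:

```python
import random

N = 1000  # one red ball and 999 blue balls

def fixed_procedure():
    """The red ball is always picked first; the blue balls follow one by one."""
    return 1  # draw position of the red ball is always 1

def random_procedure():
    """All N balls are picked one by one in a uniformly random order."""
    balls = ["red"] + ["blue"] * (N - 1)
    random.shuffle(balls)
    return balls.index("red") + 1  # 1-based draw position of the red ball

trials = 100_000
print("P(red comes first), fixed procedure: ",
      sum(fixed_procedure() == 1 for _ in range(trials)) / trials)   # exactly 1.0
print("P(red comes first), random procedure:",
      sum(random_procedure() == 1 for _ in range(trials)) / trials)  # about 1/1000
```

In the first procedure "the red ball comes first" is not one outcome among a thousand equiprobable ones, it is the only thing that can happen; that is the sense in which base reality necessarily precedes any simulations of it.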
Unless I'm misunderstanding what you mean by "In which ten billion interval"?
You seem to be.
Imagine all humans ever, ordered by date of birth. The first ten billion humans are in the first ten-billion interval, the second ten billion humans are in the second ten-billion interval, and so on.
We are in the 6th group, the 6th ten-billion interval. A different choice of spouse by one woman isn't going to change that.
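For concreteness, a tiny sketch of the bookkeeping (the birth rank of roughly 58 billion is a rough figure I'm assuming purely for illustration):

```python
TEN_BILLION = 10_000_000_000

def interval_index(birth_rank: int) -> int:
    """1-based index of the ten-billion interval a given birth rank falls into."""
    return (birth_rank - 1) // TEN_BILLION + 1

# Assuming a birth rank somewhere around 58 billion (an illustrative estimate),
# we land in the 6th interval, and no counterfactual marriage changes that.
print(interval_index(58_000_000_000))  # -> 6
```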
Also, in general, this is beside the point. The Doomsday argument is not about some alternative history we can imagine, where the past was different. It's about our actual history and its projection into the future. The facts of that history are given and not up for debate.
Consider an experiment where a coin is put Tails. Not tossed, simply always put Tails.
We say that the sample space of such an experiment consists of one outcome: Tails. This holds even though we can imagine a different experiment with alternative rules, where the coin is tossed or always put Heads.
In the domain of anthropics reasoning, the questions we're asking aren't of the form
They can be of all kinds of forms. The important part, which most people doing anthropic reasoning keep failing at, is not to assume as given things that you do not actually know, and to assume as given things that you actually do know. If you know that the sample space consists of one outcome, don't use a sample space consisting of a thousand.
An unknown number of n-sided die were thrown, and landed according to unknown metaphysics to produce the reality observed, which locally works according to known deterministic physics, but contains reflective reasoners able to generate internally coherent counterfactuals which include apparently plausible violations of "what would happen according to physics alone". Attempt to draw any conclusions about metaphysics.
I think you've done quite a good job of capturing what's wrong with standard anthropic reasoning.
Even otherwise reasonable people, rationalists, physicalists and reductionists, suddenly start talking about some poorly defined non-physical stuff they have no evidence for, as if it were a given. As if there is some blind spot, some systematic flaw in human minds, such that everything they know about systematic ways to find truth switches off as soon as the word "anthropics" is uttered. As if "anthropic reasoning" were some separate magisterium that excuses us from the common laws of rationality.
Why don't we take a huge step back and ask the standard questions first? How do we know that any dice were thrown at all? Where is this assumption coming from? What is this "metaphysics" thing we are talking about? Even if it were real, how could we know that it's real in the first place?
As with any application of probability theory, indeed any application of math, we are trying to construct a model that approximates reality to some degree: a map that describes a territory. In reality there is some process that created you. That process may very well be totally deterministic, but we don't know exactly how it works, and so we use an approximation. Our map incorporates our level of ignorance about the territory and represents it only to the best of our knowledge.
When we gain new knowledge about the territory, we reflect it on the map. We do not keep using an outdated map that assumes we never received this piece of evidence. When we learn that in all likelihood souls are not real and you are your body, it becomes clear that the outcome of you existing in the far future or the far past doesn't fit our knowledge of the territory. Our knowledge state no longer allows it. Our ignorance can no longer be represented by throwing some kind of dice. We know that you couldn't have gotten anything other than 6. Case closed.
What if my next-door neighbor's mother had married and settled down with a different man?
Then your neighbor wouldn't exist and the whole probability experiment wouldn't happen from their perspective.
The point is that if you consider all iterations in parallel, you can realize all possible outcomes of the sample space
Likewise if I consider every digit of pi in parallel, some of them are odd and some of them are even.
and assign a probability to each outcome occurring for a Bayesian superintelligence
And likewise I can assign probabilities based on how often a digit of pi that is unknown to me is even or odd. I'm not sure what a superintelligence has to do with it.
while in a consistent proof system, not all possible outcomes/statements can be proved
The same applies to a coin toss. I can't prove both "This particular coin toss is Heads" and "This particular coin toss is Tails", any more than I can simultaneously prove both "This particular digit of pi is odd" and "This particular digit of pi is even".
because for logical uncertainty, there is only 1 possible outcome no matter the amount of iterations
You just need to define your probability experiment more broadly, talking not about a particular digit of pi but about a random one, the same way we do for a toss of a coin.
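A small sketch of that broader experiment (plain Python; the digit string and trial count are mine). Each individual digit is a fixed fact, but "pick a position at random" is a perfectly ordinary probability experiment, and its probabilities track our ignorance about which position we got:

```python
import random

# First 50 decimal digits of pi: every one of them is a fixed fact.
PI_DIGITS = "14159265358979323846264338327950288419716939937510"

def random_digit_is_even() -> bool:
    """One iteration of 'pick one of the first 50 decimal digits of pi at random'."""
    position = random.randrange(len(PI_DIGITS))
    return int(PI_DIGITS[position]) % 2 == 0

trials = 100_000
estimate = sum(random_digit_is_even() for _ in range(trials)) / trials
actual = sum(int(d) % 2 == 0 for d in PI_DIGITS) / len(PI_DIGITS)
print(estimate, actual)  # the frequency estimate converges on the actual even-digit fraction
```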
There is always only one correct answer for which outcome from the sample space is actually realised in this particular iteration of the probability experiment.
This doesn't screw up our update procedure, because probability updates represent changes in our knowledge state about which iteration of the probability experiment this one could be, not changes in what actually happened in any particular iteration.
Your demand that programs be causally closed from the low-level representation of the hardware seems extremely limiting. Under such a paradigm, a program that checks which CPU it's being executed on and prints its name can't be conceptualized as a program.
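For instance, here is a complete program of exactly that kind (Python, standard library only). Its output depends on the hardware it happens to run on, yet it is unambiguously a program:

```python
import platform

# The output is not determined by the source code alone:
# it depends on which machine is executing it.
print("Running on:", platform.processor() or platform.machine())
```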
Your reasoning about levels of abstraction seems to be a map-territory confusion. Abstractions and their levels are in the map. Evolution doesn't create them or fail to create them; minds conceptualize what evolution created in terms of abstractions.
Granted, some things are easier to conceptualize in software/hardware terms than others, because they were specifically designed with this separation in mind. That makes the problem harder, not impossible. As for whether we'd end up with so much complexity that we couldn't execute it on a computer on the surface of the Earth, I would be very surprised if that were the case. Yes, a lot of things causally affect neurons, but that doesn't mean all of them are relevant to phenomenal consciousness, in the sense that without representing them the resulting program wouldn't be conscious. Brains do a bazillion other things as well.
In the worst case, we could say that human consciousness is a program, but such a complicated one that we would be better off looking for a different abstraction. Even that wouldn't mean we can't write some different, simpler conscious program.
You can't say "equiprobable" if you have no known set of possible outcomes to begin with.
Not really. Nothing prevents us from reasoning about a set with an unknown number of elements and saying that the measure is spread equally among them, however many there are. But this is irrelevant to the question at hand.
We know very well the size of the set of possible outcomes for "In which ten-billion interval could your birth rank have been?". That size is 1. No amount of pregnancy complications could postpone or hasten your birth enough for you to end up in a different ten-billion group.
Genuine question: what are your opinions on the breakfast hypothetical?
I think it's prudent to be careful with counterfactual reasoning on general principles, and, among other reasons, to prevent the kind of mistake you seem to be making: confusing
A) I've thrown a six-sided die, even though I could've thrown a 20-sided one; what is the probability of observing a 6?
and
B) I've thrown a six-sided die; what would the probability of observing a 6 have been, had I thrown a 20-sided die instead?
The fact that question B has an answer doesn't mean that question A has the same answer as well.
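Spelled out numerically (a trivial sketch in exact fractions, just to make the two questions visibly distinct):

```python
from fractions import Fraction

# Question A: the die actually thrown has six sides.
p_A = Fraction(1, 6)   # P(observe 6 | a six-sided die was thrown)

# Question B: a counterfactual in which a 20-sided die had been thrown instead.
p_B = Fraction(1, 20)  # P(observe 6 | a 20-sided die had been thrown)

# B having a well-defined answer (1/20) does not make it the answer to A (1/6).
print(p_A, p_B)
```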
As for whether the breakfast hypothetical is a good intelligence test, I doubt it. I can't remember a single person I've seen have trouble with the intuitive understanding of counterfactual reasoning. On the other hand, I've seen a bunch of principled hard determinists who didn't know how to formalize "couldness" in a compatibilist way and therefore were not sure that counterfactuals are coherent on philosophical grounds. At best, the distribution of intelligence is going to be bimodal.
All humans that actually were and all humans that actually will be. This is the framework of the Doomsday argument: it attempts to make a prediction about the actual number of humans in our actual reality, not in some counterfactual world.
Again, it's not my choice; it's how the argument was initially framed. I simply encourage us to stay on topic instead of wandering sideways and talking about something else.
I don't see how it's relevant. An ordered sequence can have some mutual information with a random one; that doesn't mean the same mathematical model describes the generation of both.