You can't say "equiprobable" if you have no known set of possible outcomes to begin with.
Not really. Nothing prevents us from reasoning about a set with an unknown number of elements and saying that the measure is spread equally among them, no matter how many of them there are. But this is irrelevant to the question at hand.
We know very well the size of the set of possible outcomes for "In which ten-billion interval could your birth rank have been?". That size is 1. No amount of pregnancy complications could postpone or hurry your birth so that you managed to end up in a different ten-billion group.
Genuine question: what are your opinions on the breakfast hypothetical?
I think it's prudent to be careful about counterfactual reasoning on general principles. Among other reasons, to avoid the kind of mistake you seem to be making: confusing
A) I've thrown a six-sided die, even though I could've thrown a 20-sided one; what is the probability of observing a 6?
and
B) I've thrown a six-sided die; what would the probability of observing a 6 have been, if I had thrown a 20-sided die instead?
The fact that question B has an answer doesn't mean that question A has the same answer.
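A minimal sketch of the distinction, as a toy simulation (the variable names and the simulation itself are just my illustration, not anything from the original exchange):

```python
import random

random.seed(0)
N = 100_000

# Question A: a six-sided die was actually thrown. The 20-sided die is only a
# counterfactual alternative, so it never enters this computation.
p_a = sum(random.randint(1, 6) == 6 for _ in range(N)) / N    # ~1/6

# Question B: a different experiment entirely, in which a 20-sided die is thrown.
p_b = sum(random.randint(1, 20) == 6 for _ in range(N)) / N   # ~1/20

print(f"P(6 | actually threw d6)      ~ {p_a:.3f} (exact: {1/6:.3f})")
print(f"P(6 | had thrown d20 instead) ~ {p_b:.3f} (exact: {1/20:.3f})")
```

The counterfactual die defines a different experiment; it never changes the answer to the question about the experiment that actually happened.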
As for whether the breakfast hypothetical is a good intelligence test, I doubt it. I can't remember a single person I've seen have problems with an intuitive understanding of counterfactual reasoning. On the other hand, I've seen a bunch of principled hard determinists who didn't know how to formalize "couldness" in a compatibilist way and therefore were not sure that counterfactuals are coherent on philosophical grounds. At best, the distribution of intelligence is going to be bimodal.
In your thought experiment only the qualia of redness and greenness are switched; everything else stays the same, including the qualia of finding something beautiful.
You claim that this doesn't lead to any causal effects in the world. I show you how it actually has physical consequences. The fact that this effect has an extra causal link to the qualia of beautifulness is beside the point. And of course the example with a selectively colour-blind person doesn't need to appeal to beautifulness at all.
Now, you may change your thought experiment in such a manner that some other qualia are affected in a compensatory way, but at that point the more or less intuitive thought experiment becomes complicated and controversial. Can you actually change qualia in such a compensatory way? Will there be some other unforeseen consequences of this change? How can we know that? Pieces of reality are connected to each other. If you claim that one can just affect a small part of the world and nothing else, you need to present some actual evidence in favor of such a weird claim.
Of course, the full debunking of zombie-like arguments comes from exposing all the flaws of the conceivability argument, which I'm addressing in the next post.
I think we can use the same method Eliezer applied to the regular epiphenomenalist Zombie argument to deal with this weaker one.
Whether your mind interprets a certain colour in a certain way actually has causal effects on the world. Namely, things that appear beautiful to you in our world may not appear beautiful to your qualia-inverted counterpart. Which naturally affects your behaviour: whether you look at a certain object more, whether you buy a certain object, and so on.
This is even more obvious for people with selective colour blindness. Suppose your mind is unable to distinguish between the qualia of blueness and redness. And suppose there are three objects: A is red, B is blue, and C is green. In our world you can't distinguish between objects A and B. But in the qualia-inverted world you wouldn't be able to distinguish between objects B and C.
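Here is a toy sketch of that consequence (the object names and colour mappings are mine, purely for illustration):

```python
# Physical colours of the three objects.
objects = {"A": "red", "B": "blue", "C": "green"}

# How physical colour maps to experienced qualia in each world.
normal_qualia   = {"red": "red", "blue": "blue", "green": "green"}
inverted_qualia = {"red": "green", "blue": "blue", "green": "red"}  # red/green qualia swapped

def indistinguishable_pairs(qualia_map):
    """Pairs of objects this observer can't tell apart (they can't distinguish red and blue qualia)."""
    collapse = {"red": "red-or-blue", "blue": "red-or-blue", "green": "green"}
    seen = {name: collapse[qualia_map[colour]] for name, colour in objects.items()}
    return [(x, y) for x in seen for y in seen if x < y and seen[x] == seen[y]]

print(indistinguishable_pairs(normal_qualia))    # [('A', 'B')]: can't tell A from B
print(indistinguishable_pairs(inverted_qualia))  # [('B', 'C')]: can't tell B from C
```

The two worlds disagree about which objects the person can tell apart, which is exactly the kind of behavioural difference the argument needs.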
And if you try to switch to a substance dualist version, all the reasoning from this post still stands.
"Random" is the null value we can give as an answer to the question "What is our prior?"
I think the word you are looking for here is "equiprobable".
It's proper to have an equiprobable prior between outcomes of a probability experiment if you do not have any reason to expect that one is more likely than another.
It's ridiculous to have an equiprobable prior between states that are not even possible outcomes of the experiment, to the best of your knowledge.
You are not an incorporeal ghost that could've inhabited any body throughout human history. You are your parents' child. You couldn't have been born before them or after they were already dead. Thinking otherwise is as silly as throwing a 6-sided die and then expecting to receive any outcome from a 20-sided die.
I was anthropically sampled out of some space
You were not anthropically sampled. You were born as a result of a physical process in the real world that you are trying to approximate as a probability experiment. This process had nothing to do with selecting universes that support conscious processes. It has already been instantiated in a specific universe and allows only a very limited time frame for your existence.
You would have to ignore all this knowledge and pretend that the process is completely different, without any evidence to back it up, in order to satisfy the conditions of the Doomsday argument.
All sampling is nonrandom if you bother to overcome your own ignorance about the sampling mechanism.
And after you have bothered to overcome your ignorance, you naturally can't keep treating the setting as random sampling.
With the Doomsday argument, we did bother: to the best of our knowledge, we are not a random sample from throughout all of human history. So, case closed.
The intuition that this is absurd is pointing at the fact that these technical details aren't what most people probably would care about, except if they insist on treating these probability numbers as real things and trying to make them follow consistent rules.
Except, this is exactly how people reason about the identities of everything.
Suppose you own a ball. And then a copy of this ball is created. Is there a 50% chance that you now own the newly created ball? Do you half-own both balls? Of course not! Your ball is the same physical object; no matter how many copies of it are created, you know which of the balls is yours.
Now, suppose that the two balls are shuffled so that you don't know which one is yours. Naturally, you assume that for each ball there is a 50% probability that it's "your ball". Not because the two balls are copies of each other - they were that even before the shuffling. This probability represents your knowledge state, and the shuffling made you less certain about which ball is yours.
And then suppose that one of these two balls is randomly selected and placed in a bag together with another identical ball. Now, to the best of your knowledge, there is a 50% probability that your ball is in the bag. And if a random ball is selected from the bag, there is a 25% chance that it's yours.
So as a result of such manipulations there are three identical balls: one has a 50% chance of being yours, while the other two each have a 25% chance of being yours. Is this a paradox? Of course not. So why does it suddenly become a paradox when we are talking about copies of humans?
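A quick Monte Carlo sketch of that arithmetic (the labels "yours", "copy", and "extra" are just simulation bookkeeping):

```python
import random

random.seed(1)
N = 100_000

left_out_is_yours = 0
random_bag_ball_is_yours = 0

for _ in range(N):
    # Two indistinguishable balls, one of them yours; shuffling makes you lose track.
    pair = ["yours", "copy"]
    random.shuffle(pair)

    # One of the pair is picked at random and dropped into a bag
    # that already holds a third identical ball.
    moved, left_out = pair[0], pair[1]
    bag = [moved, "extra"]

    left_out_is_yours += (left_out == "yours")

    # From your point of view the two balls in the bag are interchangeable,
    # so "some particular ball in the bag" is modelled as a uniform draw from it.
    random_bag_ball_is_yours += (random.choice(bag) == "yours")

print("P(the left-out ball is yours)         ~", left_out_is_yours / N)         # ~0.50
print("P(a random ball from the bag is yours) ~", random_bag_ball_is_yours / N)  # ~0.25
```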
The moment such numbers stop being convenient, like assigning different weights to copies you are actually indifferent between
But we are not indifferent between them! That's the whole point. The idea that we should be indifferent between them is an extra assumption, one we are not making while reasoning about the ownership of the balls. So why should we make it here?
Your demand that programs be causally closed from the low-level representation of the hardware seems extremely limiting. According to such a paradigm, a program that checks which CPU it's being executed on and prints its name can't be conceptualized as a program.
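For instance, something as short as this, in Python, already depends on which physical CPU it happens to run on (a toy illustration of my own, not anyone's proposed definition):

```python
# A program whose output depends on the hardware executing it, so it is not
# causally closed from the low-level hardware in the demanded sense.
import platform

cpu_name = platform.processor() or platform.machine()  # processor() can be empty on some systems
print(f"This program is running on: {cpu_name}")
```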
Your reasoning about levels of abstraction seems to be a map-territory confusion. Abstractions and their levels are in the map. Evolution doesn't create or fail to create them. Minds conceptualize what evolution created in terms of abstractions.
Granted, some things are easier to conceptualize in terms of software/hardware than others, because they were specifically designed with this separation in mind. This makes the problem harder, not impossible. As for whether we get so much complexity that we wouldn't be able to execute it on a computer on the surface of the Earth, I would be very surprised if that were the case. Yes, a lot of things causally affect neurons, but it doesn't mean that all of these things are relevant for phenomenal consciousness in the sense that without representing them the resulting program wouldn't be conscious. Brains do a bazillion other things as well.
In the worst case, we can say that human consciousness is a program, but such a complicated one that we are better off looking for a different abstraction. But even that wouldn't mean we can't write some different, simpler conscious program.