Sleeping Beauty Not Resolved

You are simply assuming that what I've calculated is irrelevant. But the only way to know absolutely for sure whether it is irrelevant is to actually do the calculation! That is, if you have information X and Y, and you think Y is irrelevant to proposition A, the only way you can justify leaving out Y is if Pr(A | X and Y) = Pr(A | X). We often make informal arguments as to why this is so, but an actual calculation showing that, in fact, Pr(A | X and Y) != Pr(A | X) always trumps an informal argument that they should be equal.
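To make the criterion concrete, here is a small sketch with a made-up joint distribution over three binary propositions (the numbers are mine, purely for illustration, not anything from the discussion above). It shows Y failing the irrelevance test: Pr(A | X and Y) differs from Pr(A | X), so Y cannot legitimately be dropped.

```python
from fractions import Fraction

# Hypothetical joint distribution over (A, X, Y); probabilities sum to 1.
joint = {
    # (A, X, Y): probability
    (True,  True,  True):  Fraction(1, 8),
    (True,  True,  False): Fraction(3, 8),
    (False, True,  True):  Fraction(2, 8),
    (False, True,  False): Fraction(1, 8),
    (True,  False, False): Fraction(1, 16),
    (False, False, True):  Fraction(1, 16),
}

def pr(pred):
    """Probability of the set of worlds satisfying pred."""
    return sum(p for w, p in joint.items() if pred(w))

def cond(pred_a, pred_b):
    """Conditional probability Pr(a | b)."""
    return pr(lambda w: pred_a(w) and pred_b(w)) / pr(pred_b)

p_a_given_x  = cond(lambda w: w[0], lambda w: w[1])            # Pr(A | X)
p_a_given_xy = cond(lambda w: w[0], lambda w: w[1] and w[2])   # Pr(A | X and Y)
print(p_a_given_x, p_a_given_xy)  # 4/7 vs 1/3 -- Y is relevant
```

The calculation settles the question mechanically: no informal argument about Y's irrelevance survives the two numbers coming out different.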

Your "probability of guessing the correct card" presupposes some decision rule for choosing a particular card to guess. Given a particular decision rule, we could compute this probability, but it is something entirely different from "the probability that the card is a king". If I assume that's just bad wording, and that you're actually talking about the frequency of heads when some condition occurs, well now you're doing frequentist probabilities, and we were talking about *epistemic* probabilities.
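To illustrate the distinction, here is a sketch (the decision rules and the color signal are hypothetical, just to make the point): the success probability of a guess varies with the decision rule, while "the probability that the card is a king" involves no rule at all.

```python
from fractions import Fraction

ranks = ["A","2","3","4","5","6","7","8","9","10","J","Q","K"]
suits = ["S","H","D","C"]   # spades/clubs are black, hearts/diamonds red
deck = [(r, s) for r in ranks for s in suits]

# "The probability that the card is a king": a fixed fact, no decision rule.
p_king = Fraction(sum(1 for r, _ in deck if r == "K"), len(deck))   # 1/13

# "The probability of guessing the correct card": depends on the rule used.
def p_correct(rule):
    return Fraction(sum(1 for card in deck if rule(card) == card), len(deck))

def color(card):
    return "red" if card[1] in ("H", "D") else "black"

# Hypothetical rule: told only the color, guess the king of hearts/spades.
rule_color = lambda card: ("K", "H") if color(card) == "red" else ("K", "S")
# Hypothetical rule: ignore everything, always guess the ace of spades.
rule_blind = lambda card: ("A", "S")

print(p_king, p_correct(rule_color), p_correct(rule_blind))  # 1/13 1/26 1/52
```

Different rules, different success frequencies; none of them is "the probability that the card is a king."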

Sleeping Beauty Not Resolved

But randomly awakening Beauty on only one day is a different scenario from waking her on both days. A priori you can't just substitute one for the other.
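One quick way to see that they are different scenarios: even the long-run awakening frequencies come out differently under the two protocols. (This computes frequencies only, not Beauty's epistemic probability, which is the quantity in dispute.)

```python
from fractions import Fraction

p_heads = Fraction(1, 2)  # fair coin

def heads_fraction_of_awakenings(wakes_if_heads, wakes_if_tails):
    """Long-run fraction of awakenings that occur in a heads run."""
    exp_heads = p_heads * wakes_if_heads
    exp_tails = (1 - p_heads) * wakes_if_tails
    return exp_heads / (exp_heads + exp_tails)

# Standard protocol: heads -> one awakening, tails -> two awakenings.
both_days = heads_fraction_of_awakenings(1, 2)       # 1/3
# Alternative: wake Beauty on exactly one (randomly chosen) day either way.
one_random_day = heads_fraction_of_awakenings(1, 1)  # 1/2
print(both_days, one_random_day)
```

Whatever one concludes about Beauty's credence, the two setups are statistically distinguishable, so replacing one with the other needs an argument.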

Sleeping Beauty Not Resolved

Yes, in exactly the same sense that *any* mathematical / logical model needs some justification of why it corresponds to the system or phenomenon under consideration. As I've mentioned before, though, if you are able to express your background knowledge in propositional form, then your probabilities are uniquely determined by that collection of propositional formulas. So this reduces to the usual modeling question in any application of logic -- does this set of propositional formulas appropriately express the relevant information I actually have available?
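As a toy illustration of "probabilities uniquely determined by a collection of propositional formulas": enumerate the truth assignments, condition on the formulas, and every probability is fixed. The uniform prior over atomic worlds below is my simplifying assumption for the sketch, not part of the argument above.

```python
from itertools import product
from fractions import Fraction

atoms = ["A", "B"]
worlds = list(product([False, True], repeat=len(atoms)))  # all truth assignments

def models(formula):
    """Worlds satisfying a propositional formula (given as a predicate)."""
    return [w for w in worlds if formula(dict(zip(atoms, w)))]

# Background knowledge X expressed as a propositional formula: "A or B".
X = lambda v: v["A"] or v["B"]
A = lambda v: v["A"]

# With indifference over the atomic worlds (my assumption), conditioning on X
# pins down every probability -- the modeling question is only whether X
# appropriately expresses the information actually available.
pr_A_given_X = Fraction(len(models(lambda v: A(v) and X(v))), len(models(X)))
print(pr_A_given_X)  # 2/3
```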

Bayesian Probability is for things that are Space-like Separated from You

This is the first thing I've read from Scott Garrabant, so "otherwise reputable" doesn't apply here. And I have frequently seen things written on LessWrong that display pretty significant misunderstandings of the philosophical basis of Bayesian probability, so that gives me a high prior to expect more of them.

Bayesian Probability is for things that are Space-like Separated from You

I'm not trying to be mean here, but this post is completely wrong at all levels. No, Bayesian probability is not just for things that are space-like separated from you. None of the theorems from which it is derived even refers to time.

So, you know the things in your past, so there is no need for probability there.

This simply is not true. There would be no need of detectives or historical researchers if it were true.

If you partially observe a fact, then I want to say you can decompose that fact into the part that you observed and the part that you didn't, and say that the part you observed is in your past, while the part you didn't observe is space-like separated from you.

You can say it, but it's not even approximately true. If someone flips a coin in front of me but covers it up just before it hits the table, I observe that a coin flip has occurred, but not whether it was heads or tails -- and that second event is definitely within my past light-cone.

You may have cached that you should use Bayesian probability to deal with things you are uncertain about.

No, I cached nothing. I first spent a considerable amount of time understanding Cox's Theorem in detail, which derives probability theory as the uniquely determined extension of classical propositional logic to a logic that handles uncertainty. There is some controversy about some of its assumptions, so I later proved and published my own theorem that arrives at the same conclusion (and more) using purely logical assumptions/requirements, all of the form, "our extended logic should retain this existing property of classical propositional logic."
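For readers who haven't seen Cox's Theorem, here is a rough sketch of one step (the product rule), paraphrased from memory rather than quoted from any particular presentation. One assumes the plausibility of a conjunction depends only on certain component plausibilities:

\[
(A \land B \mid C) = F\big((A \mid C),\ (B \mid A \land C)\big).
\]

Because conjunction is associative, expanding \((A \land B) \land D\) and \(A \land (B \land D)\) in the two possible orders forces the associativity equation

\[
F\big(F(x, y), z\big) = F\big(x, F(y, z)\big),
\]

whose well-behaved solutions are, after a monotone rescaling \(w\), just multiplication:

\[
w(A \land B \mid C) = w(A \mid C)\, w(B \mid A \land C),
\]

i.e., the product rule of probability. Note that nothing in the derivation refers to time.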

The problem is that the standard justifications of Bayesian probability are in a framework where the facts that you are uncertain about are not in any way affected by whether or not you believe them!

1) It's not clear this is really true. It seems to me that any situation that is affected by an agent's beliefs can be handled within Bayesian probability theory by modeling the agent.

2) So what?

Therefore, our reasons for liking Bayesian probability do not apply to our uncertainty about the things that are in our future!

This is a complete non sequitur. Even if I grant your premise, most things in my future are unaffected by my beliefs. The date on which the Sun will expand and engulf the Earth is in no way affected by any of my beliefs. Whether you will get lucky with that woman at the bar next Friday is in no way affected by any of my beliefs. And so on.

Book review: Pearl's Book of Why

path analysis requires scientific thinking, as does every exercise in causal inference. Statistics, as frequently practiced, discourages it, and encourages "canned" procedures instead.

Despite Pearl's early work on Bayesian networks, he doesn't seem to be very familiar with Bayesian statistics -- the above comment really only applies to frequentist statistics. Model construction and criticism ("scientific thinking") is an important part of Bayesian statistics. Causal thinking is common in Bayesian statistics, because causal intuition provides the most effective guide for Bayesian model building.

I've worked implementing Bayesian models of consumer behavior for marketing research, and these are grounded in microeconomic theory, models of consumer decision making processes, common patterns of deviation from strictly rational choice, etc.

Sleeping Beauty Not Resolved

I don't believe that the term "probability" is completely unambiguous once we start including weird scenarios that fall outside the scope which standard probability was intended to address.

The intended scope is *anything* that you can reason about using classical propositional logic. And if you can't reason about it using classical propositional logic, then there is still no ambiguity, because there are no probabilities.

Sleeping Beauty Not Resolved

You know, it has not actually been demonstrated that human consciousness can be mimicked by a Turing-equivalent computer.

The evidence is extremely strong that human minds are processes that occur in human brains. All known physical laws are Turing computable, and we have no hint of any sort of physical law that is not Turing computable. Since brains are physical systems, the previous two observations imply that it is highly likely that they can be simulated on a Turing-equivalent computer (given enough time and memory).

But regardless of that, the Sleeping Beauty problem is a question of epistemology, and the answer necessarily revolves around the information available to Beauty. None of this requires an actual human mind to be meaningful, and the required computations can be carried out by a simple machine. The only real question here is, what information does Beauty have available? Once we agree on that, the answer is determined.
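As a sketch of that last point: once an information model is fixed, the computation is mechanical. The weight assignments below are my own illustrative stand-ins for the two standard answers, not an endorsement of either; the dispute is over which weights correctly encode Beauty's information, not over the arithmetic.

```python
from fractions import Fraction

def credence_heads(weights):
    """Beauty's credence in heads, given weights over the possible
    awakening-situations: Heads-Monday, Tails-Monday, Tails-Tuesday."""
    total = sum(weights.values())
    return Fraction(weights["HM"], total)

# Two candidate information models (my labels, for illustration only):
thirder = {"HM": 1, "TM": 1, "TT": 1}   # indifference over awakenings
halfer  = {"HM": 2, "TM": 1, "TT": 1}   # indifference over coin outcomes
print(credence_heads(thirder), credence_heads(halfer))  # 1/3 1/2
```

Agree on the information model, and the answer falls out; no human mind is required for any of it.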

Sleeping Beauty Not Resolved

In these kinds of scenarios we need to define our reference class and then we calculate the probability for someone in this class.

No, that is not what probability theory tells us to do. Reference classes are a rough technique to try to come up with prior distributions. They are not part of probability theory per se, and they are problematic because often there is disagreement as to which is the correct reference class.

The context is *all* applications of probability theory. Look, when I tell you that A or not A is a rule of classical propositional logic, we don't argue about the context or what assumptions we are relying on. That's just a universal rule of classical logic. Ditto with conditioning on all the information you have. That's just one of the rules of epistemic probability theory that *always* applies. The only time you are allowed to NOT condition on some piece of known information is if you would get the same answer whether or not you conditioned on it. When we leave known information Y out and say it is "irrelevant", what that means is that Pr(A | Y and X) = Pr(A | X), where X is the rest of the information we're using. If I can show that these probabilities are NOT the same, then I have proven that Y is, in fact, relevant.