But you'd have to be one really stupid correctional officer to get an order to disable the cameras around Epstein's cell the night he was murdered, and not know who killed him after he dies.

I assume you mean "who ordered him killed."

Here's what a news report says happened:

A letter filed by Assistant US Attorneys Jason Swergold and Maurene Comey said "the footage contained on the preserved video was for the correct date and time, but captured a different tier than the one where Cell-1 was located", New York City media report.


"The requested video no longer exists on the backup system and has not since at least August 2019 as a result of technical errors."

Could be a lot more subtle than that. Just ask for the wrong video. A little mess-up in procedures. Maybe some operative clandestinely gets into the system and causes some "technical errors."

I'm not an expert on assassinations, and I suspect you aren't either. It seems to me that you're using the argument from lack of imagination -- "I can't think of a way to do it, therefore it can't be done." If, say, the CIA were behind Epstein and didn't want him to talk, is it unreasonable to suspect that they would know of many techniques to assassinate someone while covering their tracks that neither you nor I would have a clue about?

Note that I'm not claiming that there's a strong case that Epstein was assassinated, just that it's not so easy to rule out.

Domain methodsofrationality.com

I own the domain methodsofrationality.com, but I'm not really doing anything with it. If you want it, send me a message telling me what you plan to do with it. I'll give it to whoever, in my opinion, has the best use for it.

But when you do assert that basically the entire U.S. government has collaborated on murdering Epstein

Isn't this a straw man? If someone powerful wanted Epstein dead, how many people does that require, and how many of them even have to know why they're doing what they're doing? It seems to me that only one person -- the murderer -- absolutely has to be in on it. Other people could get orders that sound innocuous, or maybe just a little odd, without knowing the reasons behind them. And, of course, there are always versions of "Will no one rid me of this troublesome priest?" to ensure deniability.

The context is *all* applications of probability theory. Look, when I tell you that A or not A is a rule of classical propositional logic, we don't argue about the context or what assumptions we are relying on. That's just a universal rule of classical logic. Ditto with conditioning on all the information you have. That's just one of the rules of epistemic probability theory that *always* applies. The only time you are allowed to NOT condition on some piece of known information is if you would get the same answer whether or not you conditioned on it. When we leave known information Y out and say it is "irrelevant", what that means is that Pr(A | Y and X) = Pr(A | X), where X is the rest of the information we're using. If I can show that these probabilities are NOT the same, then I have proven that Y is, in fact, relevant.

You are simply assuming that what I've calculated is irrelevant. But the only way to know absolutely for sure whether it is irrelevant is to actually do the calculation! That is, if you have information X and Y, and you think Y is irrelevant to proposition A, the only way you can justify leaving out Y is if Pr(A | X and Y) = Pr(A | X). We often make informal arguments as to why this is so, but an actual calculation showing that, in fact, Pr(A | X and Y) != Pr(A | X) always trumps an informal argument that they should be equal.
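To make the relevance criterion concrete, here's a small illustrative calculation of my own (not from the original discussion), using a uniformly drawn playing card. We check whether Pr(A | X and Y) = Pr(A | X) for two different choices of Y; one turns out irrelevant, the other relevant:

```python
from fractions import Fraction

# Sample space: a standard 52-card deck, enumerated as (rank, suit).
ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
suits = ["hearts", "diamonds", "clubs", "spades"]
deck = [(r, s) for r in ranks for s in suits]

def pr(event, given):
    """Conditional probability Pr(event | given) over a uniform deck."""
    cond = [c for c in deck if given(c)]
    return Fraction(sum(1 for c in cond if event(c)), len(cond))

A = lambda c: c[0] == "K"              # A: the card is a king
X = lambda c: c[0] in ("J", "Q", "K")  # X: we know it is a face card

# Y1: the card is a heart. Pr(A | X and Y1) = Pr(A | X) = 1/3,
# so Y1 is irrelevant and may be left out.
Y1 = lambda c: c[1] == "hearts"
print(pr(A, X), pr(A, lambda c: X(c) and Y1(c)))   # 1/3  1/3

# Y2: the card is not a jack. Pr(A | X and Y2) = 1/2 != 1/3,
# so Y2 is relevant and must be conditioned on.
Y2 = lambda c: c[0] != "J"
print(pr(A, X), pr(A, lambda c: X(c) and Y2(c)))   # 1/3  1/2
```

The calculation, not an informal hunch, is what settles whether a piece of information can be dropped.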

Your "probability of guessing the correct card" presupposes some decision rule for choosing a particular card to guess. Given a particular decision rule, we could compute this probability, but it is something entirely different from "the probability that the card is a king". If I assume that's just bad wording, and that you're actually talking about the frequency of heads when some condition occurs, well now you're doing frequentist probabilities, and we were talking about *epistemic* probabilities.

But randomly awakening Beauty on only one day is a different scenario than waking her both days. A priori you can't just replace one with the other.
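To see that the two protocols really do differ, here's a quick Monte Carlo sketch (my own illustration; the function names are mine, and long-run frequencies are of course not the same thing as epistemic probabilities, but they show the setups are not interchangeable):

```python
import random

random.seed(0)

def wake_both_days(trials):
    """Standard protocol: heads -> wake Monday only; tails -> wake both days.
    Returns the fraction of awakenings on which the coin landed heads."""
    heads_awakenings = total_awakenings = 0
    for _ in range(trials):
        heads = random.random() < 0.5
        n_wakes = 1 if heads else 2
        total_awakenings += n_wakes
        if heads:
            heads_awakenings += n_wakes
    return heads_awakenings / total_awakenings

def wake_one_random_day(trials):
    """Alternative protocol: regardless of the coin, wake Beauty on
    exactly one randomly chosen day."""
    heads_awakenings = total_awakenings = 0
    for _ in range(trials):
        heads = random.random() < 0.5
        total_awakenings += 1          # always exactly one awakening
        if heads:
            heads_awakenings += 1
    return heads_awakenings / total_awakenings

print(wake_both_days(100_000))      # approx. 1/3
print(wake_one_random_day(100_000)) # approx. 1/2
```

The two protocols produce different heads-frequencies among awakenings, so substituting one for the other requires an argument, not an assumption.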

Yes, in exactly the same sense that *any* mathematical / logical model needs some justification of why it corresponds to the system or phenomenon under consideration. As I've mentioned before, though, if you are able to express your background knowledge in propositional form, then your probabilities are uniquely determined by that collection of propositional formulas. So this reduces to the usual modeling question in any application of logic -- does this set of propositional formulas appropriately express the relevant information I actually have available?

This is the first thing I've read from Scott Garrabrant, so "otherwise reputable" doesn't apply here. And I have frequently seen things written on LessWrong that display pretty significant misunderstandings of the philosophical basis of Bayesian probability, so that gives me a high prior probability of finding more of them.

I'm not trying to be mean here, but this post is completely wrong at all levels. No, Bayesian probability is not just for things that are space-like. None of the theorems from which it is derived even refer to time.

So, you know the things in your past, so there is no need for probability there.

This simply is not true. There would be no need of detectives or historical researchers if it were true.

If you partially observe a fact, then I want to say you can decompose that fact into the part that you observed and the part that you didn't, and say that the part you observed is in your past, while the part you didn't observe is space-like separated from you.

You can say it, but it's not even approximately true. If someone flips a coin in front of me but covers it up just before it hits the table, I observe that a coin flip has occurred, but not whether it was heads or tails -- and that second event is definitely within my past light-cone.

You may have cached that you should use Bayesian probability to deal with things you are uncertain about.

No, I cached nothing. I first spent a considerable amount of time understanding Cox's Theorem in detail, which derives probability theory as the uniquely determined extension of classical propositional logic to a logic that handles uncertainty. There is some controversy about some of its assumptions, so I later proved and published my own theorem that arrives at the same conclusion (and more) using purely logical assumptions/requirements, all of the form, "our extended logic should retain this existing property of classical propositional logic."

The problem is that the standard justifications of Bayesian probability are in a framework where the facts that you are uncertain about are not in any way affected by whether or not you believe them!

1) It's not clear this is really true. It seems to me that any situation that is affected by an agent's beliefs can be handled within Bayesian probability theory by modeling the agent.

2) So what?

Therefore, our reasons for liking Bayesian probability do not apply to our uncertainty about the things that are in our future!

This is a complete non sequitur. Even if I grant your premise, most things in my future are unaffected by my beliefs. The date on which the Sun will expand and engulf the Earth is in no way affected by any of my beliefs. Whether you will get lucky with that woman at the bar next Friday is in no way affected by any of my beliefs. And so on.

path analysis requires scientific thinking, as does every exercise in causal inference. Statistics, as frequently practiced, discourages it, and encourages "canned" procedures instead.

Despite Pearl's early work on Bayesian networks, he doesn't seem to be very familiar with Bayesian statistics -- the above comment really only applies to frequentist statistics. Model construction and criticism ("scientific thinking") is an important part of Bayesian statistics. Causal thinking is common in Bayesian statistics, because causal intuition provides the most effective guide for Bayesian model building.

I've worked implementing Bayesian models of consumer behavior for marketing research, and these are grounded in microeconomic theory, models of consumer decision making processes, common patterns of deviation from strictly rational choice, etc.
