tmercer · 9d

"you can still apply a rough estimate on the probability distribution."

No, you cannot. For things you have no idea about, there is no way to (rationally) estimate their probabilities.

"Bayesian updates usually produce a better estimate than your prior (and always better than your prior if you can do perfect updates, but that's impossible), and you can use many methods to guesstimate a prior distribution."

No. There are many, many things whose "priors" are 1, 0, or undefined, and updates on these are undefined. You can't know anything about their "distribution" because they aren't distributions. Everything is either true or false, 1 or 0. Probabilities only make sense when talking about human (or, more generally, agent) expectations/uncertainties.

"You can't completely rationally Bayesian reason about it, and that doesn't mean you can't try to Bayesian reason about it."

That's not what I mean, and it's not even what I wrote. I didn't say "completely". I said you can't Bayesian reason about it: you are being completely irrational when you even try to Bayesian reason about undefined, 1, or 0 things. What would trying to Bayesian reason about an undefined thing even look like to you?

Do you admit that you have no idea (probability/certainty/confidence-wise) about what might cause the sun not to rise tomorrow? Is that a good example, to you, of a completely undefined thing, for which there is no "prior"? It's one of the best to me, because the sun rising tomorrow is such a cornerstone example for introducing Bayesian reasoning. But to me it's a perfect example of why Bayesianism is utterly insane.

You don't get more certain that something will happen again just because something like it happened before. You can never prove a hypothesis/theory/belief right (because you can't prove a negative); you can only disprove hypotheses/theories/beliefs. So, with the sun, we have no idea what might cause it not to rise tomorrow, and therefore we can't Bayesian ourselves into any sort of "confidence" or "certainty" or "probability" about it.

A Bayesian alive but isolated from things that have died would believe itself immortal. That is not rational. Rationality is just failing to disprove the null hypothesis, not believing ever more strongly in the null merely because disconfirming evidence hasn't yet been encountered.

Back to the blog post: there are cases in which absence of evidence is evidence of absence, but this isn't one of them. If you look for something a theory/hypothesis/belief predicts, and you fail to find it, that is evidence against it. But "the fifth column exists" doesn't predict anything different from "the fifth column doesn't exist", so "the fifth column hasn't attacked (yet)" isn't evidence against it.
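To make that rule concrete, here's a minimal sketch (the numbers are my own toy values, not from the post): by Bayes' rule, "no attack" counts against a hypothesis only to the extent that the hypothesis predicts attacks more strongly than its negation does.

```python
def posterior(prior, p_attack_given_h, p_attack_given_not_h):
    """P(H | no attack observed), by Bayes' rule."""
    num = prior * (1 - p_attack_given_h)
    den = num + (1 - prior) * (1 - p_attack_given_not_h)
    return num / den

# If "fifth column exists" strongly predicts attacks (0.8) and its
# negation predicts none (0.0), a quiet period really is
# evidence of absence:
print(posterior(0.5, 0.8, 0.0))  # ~0.17, down from 0.5

# But if both hypotheses predict the same observations (no attack yet),
# the likelihood ratio is 1 and a rational mind doesn't move:
print(posterior(0.5, 0.0, 0.0))  # 0.5, unchanged
```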

tmercer · 12d

If "Observing nothing carries no information", then you should not be able to use it to update belief. I agree.

"Any belief must be updated based on new information."

I agree. Observing nothing carries no information, so you don't use it to update belief.

"I would say observing nothing carries the information that the action (sabotage) which your belief predicts to happen did not happen during the observation interval."

Yes, so if you observe no sabotage, then you do update against the existence of a fifth column that would, with some probability, have sabotaged by now (one of infinitely many possibilities). But you don't update against the existence of a fifth column that doesn't sabotage, or wouldn't have sabotaged YET, which are also infinitely many possibilities. Possibilities aren't probabilities, and you have no probability for what kind of fifth column you're dealing with, so you can't do any Bayesian reasoning.

I guess it's a general failure of Bayesian reasoning: you can't update confidence-1 beliefs, you can't update confidence-0 beliefs, and you can't update undefined beliefs. So, for example, you can't Bayesian reason about most of the important things in the universe, like whether the sun will rise tomorrow, because you have no idea what causes it rests on. You have a pretty good model of what might cause the sun to rise tomorrow, but no idea, complete uncertainty (not 0 with certainty, not 1 with certainty, not 50/50 uncertainty, just completely undefined certainty), about what would make the sun NOT rise tomorrow. So you can't (rationally) Bayesian reason about it. You can bet on it, but you can't rationally believe about it.
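A minimal sketch of the asymmetry being claimed here (the model and all numbers are mine, purely illustrative): split the fifth-column hypothesis into variants by how often they sabotage, and watch what quiet months do to each.

```python
# Hypothetical prior: half on "no fifth column", the rest split between
# a dormant variant (never sabotages) and an active one (30%/month).
hypotheses = {
    "no fifth column": (0.50, 0.0),  # (prior weight, P(sabotage per month))
    "dormant column":  (0.25, 0.0),
    "active column":   (0.25, 0.3),
}

def update_on_quiet_month(hyps):
    """One Bayesian update on the observation 'no sabotage this month'."""
    unnorm = {h: w * (1.0 - p) for h, (w, p) in hyps.items()}
    z = sum(unnorm.values())
    return {h: (unnorm[h] / z, hyps[h][1]) for h in hyps}

for _ in range(12):
    hypotheses = update_on_quiet_month(hypotheses)

for h, (w, _) in hypotheses.items():
    print(f"{h}: {w:.3f}")
# After a quiet year the active variant is nearly ruled out, but the
# dormant variant has GAINED weight: quiet months say nothing against
# a column that never (or not yet) sabotages. And a weight of exactly
# 0 or 1 would never move at all, since the update only rescales
# nonzero weights.
```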

I guessed 55,000 for the fast multiplication after this 65 anchor. I think the percentage of UN countries in Africa is <65%.

I think there's some nuance missing from Scenario 1. Authority doesn't include competence or expertise. Authority is believing someone because they're paid to be a scientist; but you absolutely SHOULD assign more weight to assertions from actual scientists: people who follow the scientific method to update their beliefs, people who don't hold the wrong default/null beliefs/hypotheses. These get mixed up all the time.

Authority: M.D., member of AHA, J.D., professional ______

Competence/expertise: performs a specific surgery with an X% success rate, wins X% of cases in a specific legal niche

You shouldn't care at all, not even ceteris paribus, about this definition of authority. You should care a lot about competence/expertise, because people can't become competent/expert without having accurate beliefs in their area of competence. So even if they're ignoring some piece of evidence, or arrived at their beliefs unconsciously or un-Bayesianly, you have strong evidence that their beliefs are close to correct.

Another problem with policies like this hypothetical nuclear reactor is that people don't have access to facts about the future or about hypothetical futures, so we're left with estimates. People don't acknowledge that their "facts" are actually estimates, and don't share how they're estimating things. If people did this, politics would be better. Just state your methods and assumptions, along with the cost and benefit estimates those methods and assumptions produce, and then people can pick the choice(s) with the best estimated benefits minus costs.

The other thing about political arguments is that people don't start with the foundation for all of the above, which is values. People often talk about "lives saved", which is ridiculous, because you can't save lives; you can only postpone deaths. People who don't agree on values aren't ready to look at estimated costs and benefits. If I value 3 years of postponed death at $100k and someone else values it at $10, then we're almost certain to disagree about which policies are best. Values are the prices at which you trade the things you like/want/value.
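A sketch of how that could look (the policies, numbers, and value parameter below are all hypothetical, not from any real proposal): publish the method and assumptions, and the ranking falls out of whatever values the reader plugs in.

```python
def net_benefit(deaths_postponed, cost, value_per_postponed_death):
    """Estimated benefits minus costs under an explicit value assumption."""
    return deaths_postponed * value_per_postponed_death - cost

# Policy A: postpones 1,000 deaths by ~3 years, costs $50M.
# Policy B: postpones 100 deaths by ~3 years, costs $1M.
for value in (100_000, 10):  # $ per 3 years of postponed death
    a = net_benefit(1_000, 50_000_000, value)
    b = net_benefit(100, 1_000_000, value)
    print(f"value ${value}: A={a:+,} B={b:+,} -> pick {'A' if a > b else 'B'}")
# At $100k per postponed death, A wins; at $10, B simply loses less.
# Same estimates, different values, opposite choices: the disagreement
# is about values, not facts.
```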

A few ideas:

You can't save a life. Every living thing is doomed to die. You can only postpone deaths.

Morality ought to be based on the expected values of the decisions people make or the actions they take, not on the actual outcomes. Morality includes the responsibility to correctly evaluate EV by gathering sufficient evidence.
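A small sketch of that EV framing, with made-up probabilities and payoffs:

```python
def expected_value(outcomes):
    """outcomes: list of (probability, value) pairs for one decision."""
    return sum(p * v for p, v in outcomes)

# Two decisions with invented numbers: driving home sober vs. drunk.
drive_sober = expected_value([(0.999, 0), (0.001, -1_000_000)])
drive_drunk = expected_value([(0.990, 0), (0.010, -1_000_000)])
print(drive_sober, drive_drunk)  # -1000.0  -10000.0

# Even if the drunk driver happens to get home safely and the sober
# one crashes, the drunk decision was the worse one: it had the lower
# EV given the evidence available when the decision was made.
```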

Earplug gang represent!

All the no-earplug sleepers are fools.

"you'll also immediately know the others all don't"

No. Receiving an anonymous love note from among the 6 in NO WAY informs you that 5 of the 6 DON'T have a crush on you. All it does is take the unspecified prior (the rate at which these 6 humans have a crush on you) and INCREASE it for all 6 of them.

@irmckenzie is right. There's no way you get < 10:1 with MORE positive (confirmatory) evidence for Bob than for a random stranger. All positive evidence HAS TO make a rational mind MORE certain the thing is true. Weak evidence, like the letter (which says only that AT LEAST 1 of the 6 has a crush), should move a rational mind LESS than strong evidence, like the wink, but it must move it all the same, and in the affirmative direction.

This is obviously correct. The error was that Rob interpreted the evidence incorrectly. Getting an anonymous letter DOES NOT inform a rational mind that Bob has 1:5 odds of crushing. It informs the rational mind that AT LEAST ONE of the 6 classmates has a crush on you. It DOES NOT inform a rational mind that 5 of the 6 classmates DO NOT have a crush on you. I also hated this. Obviously, two pieces of evidence should make Bob MORE LIKELY to have a crush on you than one. There's no baseline rate of humans having a crush on us, so the real prior isn't in the problem.
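Here's a toy model of why that's right (my own assumptions: each of the 6 classmates independently has a crush with some prior p, and the anonymous note proves only that at least one of them does):

```python
p = 0.10  # hypothetical prior that any given classmate has a crush
n = 6

# P(at least one of the six has a crush):
p_at_least_one = 1 - (1 - p) ** n

# P(Bob has a crush | at least one does), by Bayes' rule: "Bob has a
# crush" already implies "at least one does", so the numerator is p.
posterior_each = p / p_at_least_one

print(f"prior {p:.2f} -> posterior {posterior_each:.2f}, for all six")
# prior 0.10 -> posterior 0.21: the note moves EVERY classmate up,
# rules nobody out, and the wink then moves Bob up further still.
```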

I strongly believed the reverse on 1 and 4, and had very little belief either way on the rest. But it was enough that I began to suspect they were all false; perhaps the big white space beneath the list also tipped off my subconscious to that possibility. Can't find the paper on Sci-Hub. What are the answers?
