Veedrac

Comments
Optimality is the tiger, and agents are its teeth
Veedrac · 26d

Yes, your understanding matches what I was trying to convey. The feedback is appreciated, too.

leogao's Shortform
Veedrac · 2mo

It's just Bayes, but I'll give it a shot.

You're having a conversation with someone. They believe certain things are more probable than others. They mention a reference class: if you look at this grouping of claims, most of them are wrong. Then you consider the set of hypotheses: under each of them, how plausible is the observation that claims in this grouping tend to be wrong? Some of them pass easily, e.g. the hypothesis that this is just another such claim. Some pass less easily: under those, the claim is either a modal part of this group and yet uncommon on base rate, or else nonmodal or not part of the group at all. You continue, with maybe a different reference class, or an observation about the scenario.

Hopefully this illustrates the point. Reference classes are just evidence about the world. There's no special operation needed for them.
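
A minimal numeric sketch of that point (the hypothesis names and all probabilities here are mine, purely for illustration): the reference class enters as an ordinary likelihood term, with no special operation attached.

```python
# Sketch: a reference class entering a Bayes update as ordinary evidence.
# All numbers are illustrative, not taken from the discussion.

# Hypotheses about the claim under discussion, with prior probabilities.
priors = {
    "claim true":  0.5,
    "claim false": 0.5,
}

# Evidence E: the claim belongs to a grouping in which most claims are
# wrong. P(E | H): unsurprising if the claim is false, less expected if true.
likelihoods = {
    "claim true":  0.2,
    "claim false": 0.6,
}

# Ordinary Bayes update: P(H | E) is proportional to P(E | H) * P(H).
unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posteriors = {h: mass / total for h, mass in unnormalized.items()}

for h, p in posteriors.items():
    print(f"P({h} | in mostly-wrong grouping) = {p:.2f}")
# Prints 0.25 / 0.75: the reference class shifted the odds without any
# dedicated 'reference class' machinery.
```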

leogao's Shortform
Veedrac · 2mo

Firstly, it's just not more reasonable. When you ask yourself "Is a machine learning run going to lead to human extinction?" you should not first say "How trustworthy are people who have historically claimed the world is ending?"

But you should absolutely ask "does it look like I'm making the same mistakes they did, and how would I notice if it were so?" Sometimes you are indeed in a cult with your methods of reason subverted, or having a psychotic break, or captured by a content filter that hides the counterevidence, or subject to one of the many more mundane and pervasive failures of the same kind.

leogao's Shortform
Veedrac · 2mo

> c. I don't get this one. I'm pretty sure I said that if you believe that you're in a highly adversarial epistemic environment, then you should become more distrusting of evidence about memetically fit claims.

Well, sure; it's just that you seemed to frame this as a binary on/off thing, where sometimes you're exposed and need to count it and sometimes you're not. Whereas to me it's basically never implausible that a belief has been exposed to selection pressures; the question is one of probabilities and degrees.

leogao's Shortform
Veedrac · 2mo

I think you're underestimating the inferential gap here. I'm not sure why you'd think the Bayes updating rule is meant to "tell you anything about" the original post. My claim was that the whole proposal about selecting reference classes was framed badly and you should just do (approximate) Bayes instead.

leogao's Shortform
Veedrac · 2mo

I think the framing that sits better with me is ‘You should meet people where they're at.' If they seem to need confidence that you're arguing from a place of reason, that's probably indeed the place to start.

leogao's Shortform
Veedrac · 2mo

What argument are you referring to when you say "doesn't tell you anything about the original argument"?

My framing is basically this: you generally don't start a conversation with someone as a blank pre-priors slate that you get to inject your priors into. The prior is what you get handed, and then the question is how people should respond to the evidence and arguments available. Well, you should use (read: approximate) the basic Bayesian update rule: hypotheses under which an observation is unlikely become that much less probable.
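
Spelled out, that rule is just the standard one (nothing here is specific to this thread):

$$P(H_i \mid E) = \frac{P(E \mid H_i)\,P(H_i)}{\sum_j P(E \mid H_j)\,P(H_j)}$$

Hypotheses under which the evidence E is unlikely lose posterior mass in exact proportion to that unlikelihood.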

leogao's Shortform
Veedrac · 2mo

I agree this is an interesting philosophical question, but again I'm not sure why you're bringing it up.

Given your link, maybe you think my mentioning Bayes referred to some method of selecting a single final hypothesis? It didn't; I'm using it to refer to the Bayesian update rule.

leogao's Shortform
Veedrac · 2mo

> The heuristic "be more skeptical of claims that would have big implications if true" makes sense only when you suspect a claim may have been adversarially optimized for memetic fitness; it is not otherwise true that "a claim that something really bad is going to happen is fundamentally less likely to be true than other claims".

This seems wrong to me.

a. More smaller things happen, and there are fewer kinds of smaller thing that happen.
b. I bet that, on average, people genuinely have more evidence for the small claims they state than for the big ones.
c. The skepticism you should apply because claims of this kind are frequently adversarially generated shouldn't first depend on your deciding to be skeptical of this particular claim.

If you'll forgive the lack of charity, ISTM that leogao is making IMO largely true points about the reference class and then doing the wrong thing with those points, and you're reacting to the thing being done wrong at the end, but trying to do so in part by disagreeing with the points being made about the reference class. leogao is right that people are reasonable in being skeptical of this class of claims on priors, and right that when communicating with someone it's often best to start within their framing. You are right that regardless it's still correct to evaluate the sum of evidence for and against a proposition, and that other people failing to communicate honestly in this reference class doesn't mean we ought to throw out, or stop contributing to, the good-faith conversations available to us.

leogao's Shortform
Veedrac · 2mo

I'm not really sure what that has to do with my comment. My point is that the original post seemed to be operating as if you find the argmax reference class, start there, and then allow arguments. My point isn't that their prior is wrong; it's that this whole operation is wrong.

I think you're also maybe assuming I'm saying the prior looks something like {reference class A, reference class B}, with us arguing about the relative probability of each. It doesn't; a prior should be over all valid explanations of the prior evidence. Reference classes come in because they're evidence about the base rates of particular causal structures; you can ask, 'given the propensity for the world to look this way, how should I be correcting the probabilities of the hypotheses under consideration? Which new hypotheses should I be explicitly tracking?'

I can see where the original post might have gone astray. People have limits on what they can think about, and it's normal to narrow one's consideration to the single most likely hypothesis. But it's important to be aware of what you're approximating here, else you get into a confusion where you have two valid reference classes and start telling people that there's a correct one to start arguing from.

Posts

Veedrac's Shortform (1y)
Post-history is written by the martyrs (3y)
Optimality is the tiger, and agents are its teeth (3y)
Moore's Law, AI, and the pace of progress (4y)