Jim Buhler

www.jimbuhler.site

> some factors, like the Coriolis force, are systematic.

Yup! (a related comment fwiw).

> If the distribution was "a ring of 1 m around the aimed point" then you would know for sure you won't hit the terrorist that way

Well, not if you account for other factors that might luckily compensate exactly for the Coriolis effect (e.g., the wind). But yeah, assuming a Gaussian distribution whose peak is "target hit" (rather than "kid hit" or "rock over there hit") just because that's where you happen to be aiming (ignoring the Coriolis effect, the wind, and all the rest) seems very suspiciously convenient.
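To make the "suspiciously convenient" bit concrete, here's a toy numpy sketch (my own construction, all numbers invented): if every perturbation really were zero-mean and symmetric, the landing distribution would peak at the aim point; add a single systematic term of the Coriolis sort and the peak moves off it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Case 1: twenty small zero-mean, symmetric perturbations per shot
# (gusts, hand tremor). Their sum is symmetric about 0, so the most
# likely lateral offset is 0 -- i.e., the aim point.
symmetric = rng.normal(0.0, 0.05, size=(n, 20)).sum(axis=1)  # meters

# Case 2: same noise plus one systematic term (a Coriolis-like
# deflection of 0.3 m, a number I'm making up). The peak is now
# 0.3 m off the aim point, not on it.
biased = symmetric + 0.3

print(f"median offset, symmetric noise only: {np.median(symmetric):+.3f} m")
print(f"median offset, plus systematic term: {np.median(biased):+.3f} m")
```

So centering the Gaussian on the aim point is exactly the assumption that the systematic terms cancel, which is the thing in dispute.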

Interesting, thanks!

I guess one could object that in your even-more-clueless sniper example, applying the POI between Hit and Not Hit is just as arbitrary as applying it between, e.g., Hit, Hit on his right, and Hit on his left. This is what Greaves (2016) -- and maybe others? -- called the "problem of multiple partitions". In my original scenario, people might argue that there isn't such a problem and that there is only one sensible way to apply POI. So it'd be ok to apply it in my case and not in yours.

I don't know what to make of this objection, though. I'm not sure it makes sense. It feels a bit arbitrary to say "we can apply POI but only when there is one way of applying it that clearly seems more sensible". Maybe this problem of multiple partitions is a reason to reject POI altogether (at least in situations of what Greaves calls "complex cluelessness", like in my sniper example).

Yeah, I guess I meant something like "aim as if there were no external factors other than gravity".

> Do you agree with AnthonyC’s view that the bullet’s perturbations are well-modeled by a random walk? If so, maybe I’ll simulate it if I have time and report back - but it only makes sense to do that if you agree that the random walk model is appropriate in the first place.

Oh yeah, good question. I'm not sure, because random walk models are stochastic and seem to model situations of what Greaves (2016) calls "simple cluelessness". Here, we're in a case she would call "complex": there are systematic reasons to believe the bullet will go right (the Earth's rotation, say) and systematic reasons to believe it will go left (the wind we see blowing left). The problem is not that the deviation is random/chaotic, but that we are incapable of weighing the evidence for left against the evidence for right, incapable to the point where we cannot update away from a radically agnostic prior on whether the bullet will hit the target or the kid.
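In case it helps when/if you simulate it, here's roughly what I'd expect that model to look like (my own guess at AnthonyC's setup, with entirely invented magnitudes). The random-walk part is the sum of many small independent kicks; the complex-cluelessness part is the two systematic drifts, whose relative size we can only represent with a prior.

```python
import numpy as np

rng = np.random.default_rng(1)
trials = 50_000

# Random-walk part: the lateral offset accumulates 100 small
# independent kicks over the flight.
walk = rng.normal(0.0, 0.005, size=(trials, 100)).sum(axis=1)  # meters

# Systematic part: Coriolis pushes right (+), wind pushes left (-).
# Being clueless about their relative size, we put wide priors on both
# (all magnitudes here are invented for illustration).
coriolis = rng.uniform(0.0, 0.5, size=trials)
wind = rng.uniform(0.0, 0.5, size=trials)
lateral = walk + coriolis - wind

# Target torso 0.5 m wide at the aim point; kid 0.5 m wide, 0.5 m to the left.
p_target = np.mean(np.abs(lateral) < 0.25)
p_kid = np.mean(np.abs(lateral + 0.5) < 0.25)
print(f"P(hit target) ~ {p_target:.2f}, P(hit kid) ~ {p_kid:.2f}")

# Stretch the wind prior to rng.uniform(0.0, 2.0) and the kid edges past
# the target: the verdict tracks the priors we feed in, which is exactly
# where the cluelessness worry bites.
```

Whether the output means anything then depends entirely on whether those priors were defensible in the first place, which is my whole question.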

> My answer: because strictly monotonic[1] probability distribution prior to accounting for external factors

Ok so that's defo what I think assuming no external factors, yes. But if I know that there are external factors, I know the bullet will deviate for sure. I don't know where, but I know it will. And it might luckily deviate a bit back and forth and come back exactly where I aimed, but I don't get how I can rationally believe that's any more likely than it doing something else and landing 10 centimeters to the right. And I feel like what everyone in the comments so far is saying is basically "Well, POI!", taking it for granted as self-evident, but afaict, no one has actually justified why we should use POI rather than simply remain radically agnostic on whether the bullet is more likely to hit the target than the kid. I feel like your intuition pump, for example, implicitly assumes POI and is sort of justifying POI with POI.

Interesting, thanks. My intuition is that if you draw a circle of, say, a dozen (?) meters around the target, there's no spot within that circle that is more or less likely to be hit than any other, and it's only outside the circle that you start having something like a normal distribution. I really don't see why I should think 35 centimeters on the target's right is any more (or less) likely than 42 centimeters on his left. Can you think of any good reason why I should think that? (Not saying my intuition is better than yours. I just want to get where I'm wrong, if I am.)

I'm just interested in the POI thing, yeah.

> At some sufficiently far distance, it is essentially landing in a random spot in a normal distribution around the intended target

Say I tell you the bullet landed either 35 centimeters on the target's right or 42 centimeters on his left, and ask you to bet on which one you think it is. Are you indifferent/agnostic, or do you favor 35 very (very very very very) slightly? (If the former, you reject the POI. If the latter, you embrace it. Or at least that's my understanding. If you don't find it more likely that the bullet hits a spot a bit closer to the target, then you don't agree with the superior that aiming at the target makes you more likely to hit him rather than the child, all else equal.)
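To put rough numbers on "very (very very very very) slightly", here's a quick sketch under the Gaussian picture (the sigma is a value I'm making up; any value gives the same qualitative answer): any isotropic, radially decreasing density favors the 35 cm spot over the 42 cm one, but only barely. Under my uniform-circle intuition above, the ratio would be exactly 1, which is the whole disagreement.

```python
import numpy as np

sigma = 5.0  # made-up spread of the landing distribution, in meters

def density(r: float) -> float:
    """Isotropic 2D Gaussian centered on the aim point, evaluated at radius r."""
    return float(np.exp(-r**2 / (2 * sigma**2)) / (2 * np.pi * sigma**2))

ratio = density(0.35) / density(0.42)
print(f"density ratio, 35 cm vs 42 cm: {ratio:.6f}")  # ~1.001079 with sigma = 5
```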

> Without an objective standard of “winning” to turn to, this leaves us searching for new principles that could guide us in the face of indeterminacy. But that’s all for another post.

First time ever I am left hanging by a LW post. Genuinely.

Thanks! I guess myopia is a specific example of one form of scope-insensitivity (which has to do with long-term thinking, according to this at least), yes.

> This is plausibly a beneficial alignment property, but like every plausibly beneficial alignment property, we don't yet know how to instill them in a system via ML training.

I didn't follow discussions around myopia and didn't have this context (e.g., I thought maybe people didn't find myopia promising at all to begin with or something), so thanks a lot. That's very helpful.
