Thank you so much for the explanation! I haven’t looked into the behavior of dependent distributions in the context of the CLT at all. It’s totally believable that non-independence could destroy convergence properties in a way that non-identicality doesn’t. I’m on my phone right now but will probably add a disclaimer to the end of this post to reflect your challenge to it. Thanks again~
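For anyone else following along, here’s a minimal toy sketch of the kind of failure I have in mind (my own construction, not anything from the parent comment): make the summands identically distributed but perfectly dependent by setting them all equal to one shared draw. The standardized sum then spreads out like √n instead of converging to a standard normal.

```python
import numpy as np

# Toy illustration (my own construction, not from the discussion above):
# identically distributed but perfectly dependent summands.
# Let X_1 = ... = X_n all equal one shared draw Z ~ Uniform(0, 1).
# Each X_i has mean 1/2 and variance 1/12, yet the standardized sum
#   (S_n - n/2) / sqrt(n/12) = sqrt(12 * n) * (Z - 1/2)
# spreads out like sqrt(n) instead of converging to N(0, 1).

rng = np.random.default_rng(0)
for n in [10, 100, 1000]:
    Z = rng.uniform(size=100_000)                 # one draw shared by all n summands
    S = n * Z                                     # S_n = X_1 + ... + X_n = n * Z
    standardized = (S - n / 2) / np.sqrt(n / 12)
    print(n, standardized.std())                  # grows like sqrt(n), not ~1
```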
Appreciate it! And you're right, that was a mistake in the last section - just fixed it. Thanks!
Hmm - a big problem if so! Can you give me an example of the kind of intermediate conditional distribution you mean?
I don’t agree that one generally wants maximum optionality. Too much optionality for first strikes makes you a bigger threat, which may worry others to the point that they first-strike you. If, say, the Jamaican government woke up tomorrow to find itself in possession of nukes and launchers for them, it would be a headache: they’d probably wish it had never happened and try to conspicuously give up the weapons, so that other nations didn’t view them as a nuclear threat. Giving up first-strike options can often make you safer, which I don’t think is true for second-strike options.
That makes sense.
I’m surprised to hear you are concerned about censorship at Substack. I read this link https://on.substack.com/p/substacks-view-of-content-moderation a while back and thought it suggested a pretty strong commitment to not censoring. I don’t really know anything about WordPress though, so maybe they’re even more committed in this regard?
When I read in the main post that the inclusion of Oil Ooze was confusing, I thought my magic box might be the guilty one!
Was anyone able to use Bayesian probability theory on this problem directly to do things besides 1-v-1 hero matchups? If so, what was your approach? I couldn’t figure out how to start.
Yup, it would be - I guess I’m saying that I don’t think swapping the columns has any effect. To the model, during training, it’s just 38 unnamed columns. Swapping the first 19 with the last 19 shouldn’t do anything? Weird weird weird
Actually, I’ve thought about it more, and I don’t think it’s possible for a flip to change the predictor like this. Flipping is equivalent to swapping the first 19 columns of the matrices with the last 19 columns and bit-flipping the response. That should yield prediction vectors v_flip satisfying 1 - v_flip = v_original. So my money is currently on something being off with the code that does the flipping.
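Here’s a rough sanity check of that argument on synthetic data (logistic regression is only a stand-in; I’m assuming a deterministic, permutation-equivariant learner, and v_original/v_flip just mirror the notation above): permuting the two 19-column blocks leaves the predictions unchanged, and bit-flipping the labels negates the decision function, so together they should give 1 - v_flip = v_original up to solver tolerance.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Rough sanity check of the flip symmetry, on made-up data.
# Logistic regression stands in for whatever model was actually used;
# any deterministic, permutation-equivariant learner should behave
# the same way.

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 38)).astype(float)  # 19 + 19 unnamed columns
y = rng.integers(0, 2, size=500)

X_flip = np.hstack([X[:, 19:], X[:, :19]])  # swap the two 19-column blocks
y_flip = 1 - y                              # bit-flip the response

v_original = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]
v_flip = LogisticRegression(max_iter=1000).fit(X_flip, y_flip).predict_proba(X_flip)[:, 1]

# Should be ~0 up to solver tolerance, i.e. v_flip = 1 - v_original:
print(np.max(np.abs(v_flip - (1 - v_original))))
```

If the real flipping code doesn’t reproduce this, that’s evidence the bug is in the flip itself rather than in the model.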