Availability: Almost all times between 10 AM and PM, California time, regardless of day. Highly flexible hours. Text over voice is preferred, I'm easiest to reach on Discord. The LW Walled Garden can also be nice.
A note to clarify for confused readers of the proof. We started out by assuming □(cross→U=−10), and cross. We conclude □(cross→U=10)∨□(cross→U=0) by how the agent works. But the step from there to □⊥ (i.e., inconsistency of PA) isn't entirely spelled out in this post.

Pretty much, it follows from a proof by contradiction. Assume con(PA), i.e., ¬□⊥. It happens to be a theorem of con(PA) that the agent can't prove in advance what it will do, i.e., ¬□(¬cross). (I can spell this out in more detail if anyone wants.) However, combining □(cross→U=−10) with □(cross→U=10) (or the other option) gets you □(¬cross), which, along with ¬□(¬cross), gets you ⊥. So PA isn't consistent, i.e., □⊥.
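Schematically, the contradiction above can be laid out as follows (a sketch of the argument, using □ for PA-provability, not the full formal proof):

```latex
\begin{align*}
&\text{(1)}\quad \Box(\mathrm{cross}\to U=-10) && \text{assumption}\\
&\text{(2)}\quad \Box(\mathrm{cross}\to U=10)\ \lor\ \Box(\mathrm{cross}\to U=0) && \text{by how the agent works}\\
&\text{(3)}\quad \Box(\mathrm{cross}\to\bot) && \text{(1) with either disjunct of (2): } U \text{ can't be two values}\\
&\text{(4)}\quad \Box(\neg\mathrm{cross}) && \text{from (3)}\\
&\text{(5)}\quad \neg\Box(\neg\mathrm{cross}) && \text{theorem, under the assumption } \mathrm{con(PA)}\\
&\text{(6)}\quad \bot && \text{(4) and (5)}
\end{align*}
```

Discharging the assumption con(PA) via the contradiction in (6) yields ¬con(PA), i.e., □⊥.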
In the proof of Lemma 3, it should be "Finally, since $\chi_{F_C}(z,z)=z$, we have that $\mathrm{poly}_{F_C}(z)\cdot\mathrm{poly}_{F_{B\setminus C}}(z)=Q_F(z)$. Thus, $Q_F(z)\cdot Q_F(x\cap y\cap z)$ and $Q_F(x\cap z)\cdot Q_F(y\cap z)$ are both equal to $\mathrm{poly}_{F_C}(x\cap z)\cdot\mathrm{poly}_{F_{B\setminus C}}(y\cap z)\cdot\mathrm{poly}_{F_C}(z)\cdot\mathrm{poly}_{F_{B\setminus C}}(z)$." instead.
Any idea how well this would generalize to stuff like Chicken, or to games with more than 2 players or more than 2 moves?
I was subclinically depressed, acquired some bupropion from Canada, and it's been extremely worthwhile.
I don't know; we're hunting for it. Relaxations of dynamic consistency would be extremely interesting if found, and I'll let you know if we turn up anything nifty.
Looks good. Re: the dispute over normal bayesianism: For me, "environment" denotes "thingy that can freely interact with any policy in order to produce a probability distribution over histories". This is a different type signature than a probability distribution over histories, which doesn't have a degree of freedom corresponding to which policy you pick.

But for infra-bayes, we can associate a classical environment with the set of probability distributions over histories (for the various possible choices of policy), and then the two distinct notions become the same sort of thing (a set of probability distributions over histories, some of which can be made inconsistent by how you act), so you can compare them.
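As a toy illustration of the type-signature point (my own sketch with made-up names, not code from any actual infra-Bayes formalism): a classical environment has a policy-shaped hole in its type, while the object you get by ranging over all policies is a plain set of distributions.

```python
from typing import Callable, Dict, List, Tuple

# Toy types: a history is a tuple of (action, observation) pairs;
# a distribution is a dict from histories to probabilities.
History = Tuple[Tuple[str, str], ...]
Dist = Dict[History, float]
Policy = Callable[[History], str]  # history -> next action

def classical_environment(policy: Policy) -> Dist:
    """An environment freely interacts with ANY policy to produce a
    distribution over histories -- note the policy argument."""
    a = policy(())  # ask the policy for its first action
    # one fair-coin observation follows the first action
    return {((a, "heads"),): 0.5, ((a, "tails"),): 0.5}

def associated_set(env: Callable[[Policy], Dist],
                   policies: List[Policy]) -> List[Dist]:
    """The set of distributions the environment produces as the policy
    ranges over the options; which member is 'real' depends on how you act."""
    return [env(p) for p in policies]

always_left: Policy = lambda h: "L"
always_right: Policy = lambda h: "R"
dists = associated_set(classical_environment, [always_left, always_right])
# Two distributions, one per policy. A single probability distribution
# over histories has no such policy degree of freedom.
```

The point of the encoding is just that both notions now live at the same type (sets of distributions over histories), so they can be compared directly.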
I'd say this is mostly accurate, but I'd amend number 3. There's still a sort of non-causal influence going on in pseudocausal problems; you can easily formalize counterfactual mugging and XOR blackmail as pseudocausal problems (you need acausal specifically for transparent Newcomb, not vanilla Newcomb). But it's specifically a sort of influence like "reality will adjust itself so contradictions don't happen, and there may be correlations between what happened in the past, or in other branches, and what your action is now, so you can exploit this by acting to make bad outcomes inconsistent". It's purely action-based, in a way that manages to capture some but not all weird decision-theoretic scenarios.

In normal bayesianism, you do not have a pseudocausal-causal equivalence. Every ordinary environment is straight-up causal.
Re: points 1, 2: Check this out. For the specific case of 0 to even bits, ??? to odd bits, I think Solomonoff can probably get that, but not more general relations.

Re: point 3: Solomonoff is about stochastic environments that just take your action as an input, and aren't reading your policy. For infra-Bayes, you can deal with policy-dependent environments without issue: you can consider hard-coding in every possible policy to get a family of stochastic environments, and UDT behavior naturally falls out of this encoding. There's still some open work to be done on which sorts of policy-dependent environments are learnable (inferrable from observations), but it's pretty straightforward to cram all sorts of weird decision-theory scenarios in as infra-Bayes hypotheses, and do the right thing in them.
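A minimal sketch of the "hard-code every possible policy" trick (my own toy construction with invented names, illustrating only the shape of the encoding): a policy-dependent environment reads your whole policy, and fixing each candidate policy in turn leaves an ordinary stochastic environment that only takes actions as input.

```python
from typing import Callable, Dict

# A (finite, enumerable) policy: observation -> action.
Policy = Dict[str, str]

def policy_dependent_env(policy: Policy, action: str) -> Dict[str, float]:
    """A Newcomb-like 'predictor' environment: the observation depends on
    what your policy WOULD do, not just on the action actually taken."""
    if policy["start"] == "one-box":
        return {"box-full": 1.0}
    return {"box-empty": 1.0}

def hardcode(env: Callable[[Policy, str], Dict[str, float]],
             policy: Policy) -> Callable[[str], Dict[str, float]]:
    """Fix the policy argument, leaving an ordinary stochastic
    environment: action -> distribution over observations."""
    return lambda action: env(policy, action)

# Enumerate every policy over the single observation "start" ...
all_policies = [{"start": a} for a in ("one-box", "two-box")]
# ... to get a family of ordinary stochastic environments, one per policy.
family = [hardcode(policy_dependent_env, p) for p in all_policies]
```

Learning which member of the family you are actually in is what recovers UDT-style behavior from ordinary hypotheses; the open learnability question above is about when that identification is possible from observations.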