I am currently conducting research on cooperation and conflict in non-causal contexts (particularly between AI systems).
I've recently completed a Master's degree in moral philosophy (wrote my thesis on s-risks and evidential cooperation).
Previously, I was EA France's community director. I also briefly worked on event management and community building at the Center on Long-Term Risk.
Thanks a lot for these comments, Oscar! :)
I think something can't be both neat and so vague as to use a word like 'significant'.
I forgot to copy-paste a footnote clarifying that "as made explicit in the Appendix, what 'significant' exactly means depends on the payoffs of the game"! Fixed. I agree it's still vague, but I guess it has to be, since the payoffs are left unspecified?
In the EDT section of Perfect-copy PD, you replace some p's with q's and vice versa, but not all. Is there a principled reason for this? Maybe it is just a mistake and it should be U_Alice(p) = 4p - p^2 - p + 1 = 1 + 3p - p^2 and U_Bob(q) = 4q - q^2 - q + 1 = 1 + 3q - q^2.
Also a copy-pasting mistake. Thanks for catching it! :)
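If it helps, here's a quick sketch of the corrected expression in Python. The payoff values T=5, R=3, P=1, S=0 are my assumption here (they are the values that reproduce 1 + 3p - p^2; the game's actual payoffs may differ):

```python
# Hypothetical PD payoffs chosen to reproduce U(p) = 1 + 3p - p^2:
# T (temptation) = 5, R (reward) = 3, P (punishment) = 1, S (sucker) = 0.
T, R, P, S = 5, 3, 1, 0

def u_alice(p, q):
    """Alice's expected utility when she cooperates with prob p and Bob with prob q."""
    return p*q*R + p*(1-q)*S + (1-p)*q*T + (1-p)*(1-q)*P

def u_edt(p):
    """EDT in the perfect-copy PD: Alice conditions on q = p."""
    return u_alice(p, p)  # algebraically equal to 1 + 3p - p**2

# U is increasing on [0, 1], so EDT recommends full cooperation (p = 1).
ps = [i / 100 for i in range(101)]
best_p = max(ps, key=u_edt)
```

With these assumed payoffs, `u_edt` is maximized at p = 1 with expected utility 3, i.e. EDT cooperates against a perfect copy.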
I am unconvinced of the utility of the concept of compatible decision theories. In my mind I am just thinking of it as 'entanglement can only happen if both players use decision theories that allow for superrationality'. I am worried your framing would imply that two CDT players are entangled, when I think they are not; they just happen to both always defect.
This may be an unimportant detail, but, interestingly, I opted for this concept of "compatible DTs" precisely because I wanted to imply that two CDT players may be decision-entangled! Say CDT-agent David plays a PD against a perfect copy of himself. Their decisions to defect are entangled, right? Whatever David does, his copy does the same (although David sort of "ignores" this when he makes his decision). David is very unlikely to be decision-entangled with any random CDT agent, however; in that case, the mutual defection is just a "coincidence", not the result of some dependence between their respective reasoning/choices.

I didn't mean the concept of "decision-entanglement" to presuppose superrationality. I want CDT-David to agree/admit that he is decision-entangled with his perfect copy. Nonetheless, since he doesn't buy superrationality, I know that he won't factor the decision-entanglement into his expected value optimization (he won't "factor in the possibility that p=q"). That's why you need significant credence in both decision-entanglement and superrationality to get cooperation here. :)
Also, if decision-entanglement is an objective feature of the world, then I would think it shouldn't depend on what decision theory I personally hold. I could be CDTer who happens to have a perfect copy and so be decision-entangeled, while still refusing to believe in superrationality.
Agreed, but if you're a CDTer, you can't be decision-entangled with an EDTer, right? Say you're both told you're decision-entangled. What happens? Well, you don't care, so you still defect while the EDTer cooperates. Different decisions. So... the two of you weren't entangled after all. The person who told you you were was mistaken.
So yes, decision-entanglement can't depend on your DT per se, but doesn't it have to depend on its "compatibility" with the other's DT for there to be any dependence between your algorithms/choices? How could a CDTer and an EDTer be decision-entangled in a PD?
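To make the "compatibility" point concrete, here's a toy sketch (the policy functions are invented for illustration, not taken from the post). A decision theory is modeled as a map from one's credence in decision-entanglement to an action, and a claimed entanglement is only tenable if the two policies output the same action:

```python
# Illustrative toy model: a decision theory maps a credence in
# decision-entanglement to an action in the PD.

def cdt_policy(credence_entangled):
    # CDT ignores the (non-causal) dependence: defection dominates.
    return "defect"

def edt_policy(credence_entangled, threshold=0.5):
    # EDT factors in p = q once credence in entanglement is high enough.
    # The 0.5 threshold is an arbitrary stand-in for the payoff-dependent cutoff.
    return "cooperate" if credence_entangled > threshold else "defect"

def consistent_with_entanglement(policy_a, policy_b, credence=0.9):
    # A claimed entanglement survives only if the policies' outputs match.
    return policy_a(credence) == policy_b(credence)
```

On this model, two CDT copies pass the check (their defections match), while a CDT/EDT pair fails it: whoever told them they were entangled was mistaken.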
Not very confident about my answers. Feel free to object. :) And thanks for making me rethink my assumptions/definitions!
Interesting! Did thinking about those variants make you update your credences in SIA/SSA (or some other view)?
(Btw, maybe it's worth adding the motivation for thinking about these problems to the intro of the post.) :)