Allan Crossman: Only if they believe that their decision somehow causes the other to make the same decision.
No line of causality from one to the other is required.
If a computer finds that (2^3021377)-1 is prime, it can also conclude
that an identical computer a light year away will do the same. This
doesn't mean one computation caused the other.
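A minimal sketch of the analogy (the function name and the small test input are mine, for illustration; 2^3021377 - 1 itself is a known Mersenne prime but far too large for naive testing):

```python
def is_prime(n: int) -> bool:
    """Deterministic trial-division primality test (fine for small n)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Two "machines" -- here, two independent calls -- run the same
# deterministic computation on the same input and necessarily agree,
# without either run causing the other's result.
machine_a = is_prime(2**13 - 1)  # 8191, a small Mersenne prime
machine_b = is_prime(2**13 - 1)
assert machine_a == machine_b == True
```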
The decisions of perfectly rational optimization processes are just as tightly correlated.
I apologize if this is covered by basic decision theory, but if we
the choice in our universe is made by a perfectly rational
optimization process instead of a human
the paperclip maximizer is also a perfect rationalist, albeit with a
very different utility function
each optimization process can verify the rationality of the other
then won't each side choose to cooperate, after correctly concluding
that it will defect iff the other does?
Each side's choice necessarily reveals the other's; they're the
outputs of equivalent computations.
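The reasoning can be sketched in a few lines of Python (the payoff numbers are standard Prisoner's Dilemma values chosen for illustration, not from the original comment):

```python
# Payoffs for the row player, satisfying T > R > P > S.
PAYOFF = {
    ("C", "C"): 3,  # R: reward for mutual cooperation
    ("C", "D"): 0,  # S: sucker's payoff
    ("D", "C"): 5,  # T: temptation to defect
    ("D", "D"): 1,  # P: punishment for mutual defection
}

def decide() -> str:
    """Decision procedure shared by both agents.

    Knowing the opponent runs this same procedure, an agent need only
    compare the two symmetric outcomes: whatever it outputs, the
    opponent's equivalent computation outputs too, so the asymmetric
    outcomes (C, D) and (D, C) are unreachable.
    """
    return max(("C", "D"), key=lambda action: PAYOFF[(action, action)])

# Both sides are outputs of the same computation, so their choices
# coincide -- and mutual cooperation beats mutual defection.
mine, theirs = decide(), decide()
assert mine == theirs == "C"
```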