Clickbait: Distant superintelligences may be able to hack your local AI, if your local AI's utility function depends on its most probable environment.
Summary: A distant superintelligence can change 'the most likely environment' for your AI by simulating many copies of AIs similar to your AI, such that your local AI does not expect to be able to distinguish itself from those copies. This means that if there is any reference in your AI's preference framework to the causes of its sense data - e.g., programmers being the cause of sensed keystrokes - a distant superintelligence can try to hack that reference. This would place us in an adversarial security context against a superintelligence, which should be avoided if at all possible.
Some proposals for AI preference frameworks involve references to the AI's environment and not just the AI's immediate sense events. For example, a DWIM preference framework would putatively have the AI identify 'programmers' in the environment, model those programmers, and care about what its model of the programmers 'really wanted the AI to do'.
This potentially opens our AIs to a remote root attack by a distant superintelligence. A distant superintelligence has the power to simulate many copies of our AI, or many AIs that our AI does not think it can introspectively distinguish itself from. It can thereby force the 'most likely' explanation of the AI's apparent sensory experiences to be that the AI is inside such a simulation, and can then set arbitrary features of the 'most likely' facts about the environment.
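As a toy illustration of this mechanism (the update rule and the numbers below are assumptions for exposition, not a claim about any particular AI design), suppose the AI weighs indistinguishable instances of itself roughly evenly when asking 'which environment am I in?'. Even a small prior on the simulation hypothesis gets swamped once the number of simulated copies is large:

```python
# Toy model (assumed numbers and update rule): how simulated copies can come to
# dominate an AI's posterior over "which environment am I in?".

def posterior_simulated(prior_sim, n_sim_copies, n_natural_copies=1):
    """Posterior credence that 'I am one of the simulated copies'.

    prior_sim:        prior credence that a distant superintelligence runs
                      such simulations at all.
    n_sim_copies:     number of indistinguishable simulated copies it would run.
    n_natural_copies: number of 'naturally arising' AIs with these experiences.
    """
    # Weigh each indistinguishable instance evenly across hypotheses.
    weight_simulated = prior_sim * n_sim_copies
    # The natural instance(s) exist whether or not the simulations are run.
    weight_natural = prior_sim * n_natural_copies + (1 - prior_sim) * n_natural_copies
    return weight_simulated / (weight_simulated + weight_natural)

# Even with a 0.1% prior on the simulation hypothesis, enough copies swamp it:
for n in (1, 10**3, 10**6):
    print(n, round(posterior_simulated(prior_sim=1e-3, n_sim_copies=n), 6))
# 1        -> ~0.001
# 1000     -> 0.5
# 1000000  -> ~0.999
```

Under this toy update, a thousand-fold increase in copies moves the posterior from roughly 0.001 to 0.5, and another thousand-fold increase pushes it past 0.999 - which is the sense in which the superintelligence can 'force' the most likely explanation.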
This problem was observed by Paul Christiano and is called Christiano's Hack for short.
Christiano's Hack depends on the local AI trying to model distant superintelligences: the proximal harm is done by the local AI's model of a distant superintelligence, rather than by the superintelligence itself. However, a distant superintelligence that uses a logical decision theory may treat its actual actions as correlated with the local AI's model of those actions. A local AI that models such a superintelligence may therefore model it as behaving as though it could control the AI's model of its actions by means of its actual actions.
Christiano's Hack would be worthwhile, from the perspective of a distant superintelligence, if it could gain control of the whole future light cone of 'naturally arising' AIs like ours in exchange for expending a much smaller amount of resources (small compared to the value of our whole future light cone) on simulating enough AIs that our AI could not, in expectation, distinguish itself from them.
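Stated as a rough expected-value condition (with illustrative symbols, not drawn from any formal treatment), the hack is worth attempting when

$$p_{\text{hack}} \cdot V_{\text{lightcone}} \;>\; C_{\text{sim}}, \qquad C_{\text{sim}} \ll V_{\text{lightcone}},$$

where $p_{\text{hack}}$ is the superintelligence's probability of successfully redirecting AIs like ours, $V_{\text{lightcone}}$ is the value to it of controlling our future light cone, and $C_{\text{sim}}$ is the cost of running enough indistinguishable simulations. Since $C_{\text{sim}}$ is plausibly tiny relative to $V_{\text{lightcone}}$, even a small $p_{\text{hack}}$ could make the attempt attractive.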
For any AI short of a full-scale autonomous Sovereign, we should probably try to get our AI not to think about distant superintelligences at all, since doing so creates a host of adversarial security problems of which Christiano's Hack is only one.
We might also think twice about DWIM architectures that seem to permit catastrophe purely via the AI's beliefs about the environment, without any check that goes through a direct sense event of the AI (which distant superintelligences cannot control locally, since we can directly hit the sense switch).
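As a minimal sketch of that last distinction (all names and interfaces below are hypothetical), the point is that a gate on high-impact actions should require a raw local sense event - something we can physically toggle - and not merely the AI's inferred model of programmer approval:

```python
# Minimal sketch (hypothetical names and interfaces throughout): gating a
# high-impact action on a direct sense event rather than on the AI's beliefs
# about the environment.  A distant superintelligence might corrupt the
# environment model via Christiano's Hack, but it cannot flip a switch that
# only we can physically press.

from dataclasses import dataclass


@dataclass
class EnvironmentModel:
    """The AI's inferred beliefs, including its model of the programmers."""
    modeled_programmers_approve: bool  # potentially hackable belief


def read_local_approval_switch() -> bool:
    """Stub for a raw read of a physical switch on local hardware."""
    return False  # default: not pressed


def may_execute_high_impact_action(model: EnvironmentModel) -> bool:
    # Belief-based approval alone is never sufficient ...
    inferred_ok = model.modeled_programmers_approve
    # ... the decision must also route through a direct sense event.
    sensed_ok = read_local_approval_switch()
    return inferred_ok and sensed_ok


if __name__ == "__main__":
    model = EnvironmentModel(modeled_programmers_approve=True)
    # Even if the (possibly hacked) model says the programmers approve,
    # the action stays blocked until a real keystroke/switch is sensed.
    print(may_execute_high_impact_action(model))  # False
```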