My understanding is that the US Marshals aren't accountable only to the Court, either. They take their day-to-day direction from the Director of the US Marshals Service, a presidential appointee, who in turn reports to the US Attorney General, another presidential appointee.
This makes me even more doubtful that the US Marshals would side with SCOTUS and, eg, arrest the President in a worst-case constitutional crisis. Both the Marshals' boss and their boss's boss would likely side with the President, having been chosen by him for their loyalty.
It's sometimes suggested that, as a defense against superpersuasion, one could get a trusted AI to paraphrase all of one's incoming messages. That way an attacker can't subliminally steer the target by choosing exactly the right words and phrases to push their buttons.
Maybe one could also defend against superhuman persuasion and manipulation by getting a trusted AI to paraphrase all of one's outgoing messages. That way an attacker can't pick up on subtle clues about the target's psychology embedded in their writing style. If this paraphrasing strategy works, the attacker is denied the training data they need to figure out how to push the target's buttons.
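Roughly what I have in mind, as a toy sketch in Python (the names, like ParaphraseGateway, are made up, and the stub paraphraser just stands in for whatever trusted model you'd actually call):

```python
# Minimal sketch of the "paraphrase both directions" defense described above.
# All names here are hypothetical; the stub is a placeholder for a real
# trusted-model call.

from dataclasses import dataclass
from typing import Callable

# A paraphraser is just a function from text to (hopefully) semantically
# equivalent text in different words.
Paraphraser = Callable[[str], str]


def stub_paraphraser(text: str) -> str:
    """Stand-in for a trusted model; here it only normalizes whitespace."""
    return " ".join(text.split())


@dataclass
class ParaphraseGateway:
    """Routes every message through the trusted paraphraser, in both directions."""
    paraphrase: Paraphraser

    def receive(self, incoming: str) -> str:
        # Strips the attacker's exact word choices before the target reads them,
        # blunting any button-pushing that depends on precise phrasing.
        return self.paraphrase(incoming)

    def send(self, outgoing: str) -> str:
        # Strips the target's stylistic fingerprints before the attacker sees them,
        # denying the training data needed to model the target's psychology.
        return self.paraphrase(outgoing)


if __name__ == "__main__":
    gateway = ParaphraseGateway(paraphrase=stub_paraphraser)
    print(gateway.receive("You, of all people,   surely see why this matters."))
    print(gateway.send("Honestly I'm   not sure what I think yet."))
```

The design point is just that the same trusted paraphraser sits on both directions of the channel, so neither the attacker's exact wording nor the target's writing style ever crosses the boundary.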
If your two assumptions hold, takeover by misaligned AGIs that go on to create space-faring civilizations looks much worse than existential disasters that simply wipe humanity out (eg, nuclear extinction). In the latter case, an alien civilization will soon claim the resources humanity would have taken over had we become a space-faring civ, and they'll create just as much value with those resources as we would have. Assuming that all civs eventually come to an end at some fixed time (see this article by Toby Ord), some potential value will still be lost, since the delay before the aliens arrive shrinks the window in which the resources get used, but far less than would be lost if a misaligned AI used all the resources in our future light cone for something valueless while excluding aliens from using them. So the main strategic implication is that we should try to build fail-safes into AGI to prevent it from becoming grabby in the event of alignment failure.
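To put the comparison in toy terms (the symbols here are mine, not from the article): say the light-cone resources would generate value at rate v from now, t_0, until the common end time T, and an alien civ would reach them after a delay Δt.

```latex
% Toy model (my notation): value accrues at rate v from t_0 until the common
% end time T; an alien civ reaches our resources after a delay \Delta t.
\[
  L_{\text{extinction}} \approx v\,\Delta t
  \qquad\text{vs.}\qquad
  L_{\text{grabby misaligned AI}} \approx v\,(T - t_0),
\]
\[
  \text{so } L_{\text{extinction}} \ll L_{\text{grabby misaligned AI}}
  \quad\text{whenever}\quad \Delta t \ll T - t_0 .
\]
```

The grabby-failure case is worse precisely because it burns the entire remaining window and locks the aliens out of it, whereas extinction only wastes the delay before they arrive.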
Interesting! Thanks for the link.