Jonathan Birch recently published an interesting critique of Bostrom's simulation argument. Here's the abstract:
Nick Bostrom’s ‘Simulation Argument’ purports to show that, unless we are confident that advanced ‘posthuman’ civilizations are either extremely rare or extremely rarely interested in running simulations of their own ancestors, we should assign significant credence to the hypothesis that we are simulated. I argue that Bostrom does not succeed in grounding this constraint on credence. I first show that the Simulation Argument requires a curious form of selective scepticism, for it presupposes that we possess good evidence for claims about the physical limits of computation and yet lack good evidence for claims about our own physical constitution. I then show that two ways of modifying the argument so as to remove the need for this presupposition fail to preserve the original conclusion. Finally, I argue that, while there are unusual circumstances in which Bostrom’s selective scepticism might be reasonable, we do not currently find ourselves in such circumstances. There is no good reason to uphold the selective scepticism the Simulation Argument presupposes. There is thus no good reason to believe its conclusion.
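For readers without Bostrom's original paper to hand, the quantitative core of his argument (my summary, not Birch's) runs as follows. Let $f_p$ be the fraction of human-level civilizations that reach a posthuman stage, $\bar{N}$ the average number of ancestor-simulations run by a posthuman civilization, and $H$ the average number of individuals who live in a civilization before it reaches that stage. Then the expected fraction of observers with human-type experiences who are simulated is

$$f_{\text{sim}} \;=\; \frac{f_p \,\bar{N}\, H}{f_p \,\bar{N}\, H + H} \;=\; \frac{f_p \,\bar{N}}{f_p \,\bar{N} + 1}.$$

Unless $f_p \bar{N}$ is very small, $f_{\text{sim}}$ is close to 1. Hence the trilemma: either posthuman civilizations are extremely rare ($f_p \approx 0$), or they rarely run ancestor-simulations ($\bar{N} \approx 0$), or we should assign significant credence to being simulated ourselves. Birch's target is the evidential footing of this constraint, not the arithmetic.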
Birch's paper is behind a paywall, but I have uploaded it to my shared Dropbox folder, here.
EDIT: I emailed the author and am glad to see that he's decided to participate in the discussion below.

The Simulation Argument is incoherent in the first place, and no complicated refutation is needed to show this. It is simply nonsensical to speak of entities in "another" universe simulating "our" universe, since the word "universe" already means "everything that exists." (Note that more liberal definitions, like "universe = everything we can even conceive of existing," only make the incoherence more direct: the speaker talks of everything she can conceive of existing "plus more" that she is also conceiving as existing, which is immediately contradictory.)
By the way, this is the same reason an AI in a box can never know that it's in a box. No matter how intelligent it may be, it is an incoherent notion for an AI in a box to conceive of something "outside the box." Not even a superintelligence gets a free pass on self-contradiction.
This seems like a silly linguistic nitpick - e.g., perhaps other people use "universe" to mean our particular set of three dimensions of space and one dimension of time, or perhaps they use "universe" to mean everything that is causally connected, forwards and backwards, to our own existence, etc.
If the Simulation Argument used the word "local set o...