Let us stipulate the following:

- The universe is fundamentally discrete / deterministic. (Seems reasonably plausible.)
- There's a way to perform hypercomputation, in the sense that one can build or leverage a Turing-complete mechanism which can perform infinitely many computations in seemingly finite time, and store an infinite amount of data in seemingly finite space. (Seems wildly implausible.)

Take your hypercomputer and write some trivial code that systematically executes every possible algorithm, including the programs that never halt and so require infinitely many steps to run to completion. Among just a few other things, this program will eventually stumble onto a perfect simulation of our own universe. Suppose you make sure that simulation has time to run until it catches up with the current moment, so your universe and the simulation are synced up temporally (from your reference frame, perhaps, if GR is an issue).
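The "trivial code" here is essentially a dovetailer: interleave the execution of every program, admitting a new program each round and giving every admitted program one more step, so each program eventually gets unboundedly many steps. A minimal sketch, assuming toy generators as stand-ins for an enumeration of programs (`enumerate_programs` is my invention for illustration; a real dovetailer would enumerate Turing-machine descriptions):

```python
from itertools import count

def enumerate_programs():
    """Stand-in for a real program enumeration.

    A genuine dovetailer would enumerate all Turing machines; here we
    yield toy non-halting counters so the interleaving is runnable.
    """
    def counter(k):
        n = 0
        while True:
            yield (k, n)  # "program k" at its n-th step
            n += 1
    for k in count():
        yield counter(k)

def dovetail(max_rounds):
    """Round r: admit program r, then step every admitted program once.

    As max_rounds grows, every program receives arbitrarily many steps;
    a hypercomputer would simply take max_rounds to infinity.
    """
    programs = []
    source = enumerate_programs()
    trace = []
    for _ in range(max_rounds):
        programs.append(next(source))   # admit one new program per round
        for prog in programs:
            trace.append(next(prog))    # one step for each admitted program
    return trace

steps = dovetail(3)
# steps == [(0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0)]
```

On an ordinary machine this only ever runs finitely many steps of finitely many programs; the stipulated hypercomputer is what upgrades the same schedule into running every program to completion.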

Then take a mallet and smash your hypercomputer.

By my reasoning, it seems like it should be turtles all the way down, with infinitely many copies of you and your universe simulation if you were to drill down and examine that stack. By assertion, these are *perfect* simulations of the universe, such that not only will you be unable to tell them apart, but to the point where they're isomorphic and are arguably one and the same thing.

In the slightly 'weaker' version of this, there remains a 'topmost reality'. Even though you seemed to begin there, subjectively it's all the same, so you're Almost Surely in one of the infinite simulations, meaning you and your universe will wink out of existence as soon as you destroy your copy of it. Of course, the only you that will actually care is the one that will continue to exist, as well as presumably conclude that the whole thing was very anticlimactic.

In the stronger version, the simulated universe and your universe truly are the same thing, so when you destroy it, you don't leave behind even a single copy of yourself that can breathe a sigh of relief.

This is obviously grossly speculative and not especially in my wheelhouse, so I expect I got more than one thing wrong about arithmetical hierarchies or selection effects or more basic logical flaws, or what have you. By all means point them out if you're inclined. I'm especially curious if any of those mistakes immediately takes apart the whole argument, in which case it's probably a thing I should learn about.

The common resolution to this is the belief (possibly provable, given rigorous definitions) that the existence of a hypercomputer is incompatible with the universe being discrete and finite. That is, regardless of the plausibility of #1 and of #2 separately, they cannot both be true at the same time.

Which? Discreteness and determinism are not equivalent.

Suppose you messed up your synchronization and the simulation ran ahead to the "anticlimactic sigh" phase. Smashing the computer while the simulation is still running will break the simulation, either by distorting it or by terminating it. In either case the simulation is not perfect.

Even if we shift back to the case where the synchronisation succeeds, the computer will never reach some states that its programming would call for. Thus it's not isomorphic to the realities; the "mere instantiation", being vulnerable to "bugs" in the original sense of biological creatures interfering with the silicon order, cannot have that perfection. For example, iterating over the programs that take 1000 steps to terminate would not include a faithful image of our universe. If we know beforehand that the computer is going to be smashed, that is knowledge that the program can only be so long, as it will for sure terminate there.