Yeah, classical computers might need a lot of resources to simulate quantum mechanics, but quantum computers have no such limitation, so it's probably not relevant to the simulation argument. Note that the paper itself doesn't mention the simulation argument; that angle was added by journalists working under evil incentives.
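To make "a lot of resources" concrete, here's a back-of-the-envelope sketch (my own illustration, not from the paper): a dense state vector for n qubits holds 2^n complex amplitudes, so the memory needed for a brute-force classical simulation grows exponentially with qubit count.

```python
def state_vector_bytes(n_qubits: int) -> int:
    """Memory (bytes) for a dense n-qubit state vector,
    assuming 16 bytes per complex128 amplitude."""
    return (2 ** n_qubits) * 16

# Exponential blow-up: ~50 qubits already needs petabytes of RAM.
for n in (10, 30, 50):
    print(n, state_vector_bytes(n))
```

This is only the naive state-vector method; cleverer techniques exist for special cases, which is part of what the complexity debate is about.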
EurekAlert mentions the simulation argument, and the page implies that this was a press release from Oxford - even providing a media contact - though I have not found the document on Oxford's own website.
I am also skeptical of what the paper (arxiv) is actually saying, on a technical level. It reminds me of another paper a few months ago, which was hyped as exhibiting a "gravitational anomaly" in a condensed-matter system. From all that I could make out, there was no actual gravitational effect involved, only a formal analogy.
This paper seems to engage in exactly the same equivocation, now with the objective of proving something about computational complexity. But I'll have to study it in more detail to be sure.
I disagree with the claim. I'm posting this link to foster discussion of what might be simulated and what might not.
I might agree with the technical claim - that precisely simulating the macroscopic results of quantum effects is infeasible - if I were qualified to judge it, which I am not. But I don't think faithful simulation is necessary. If scientists can come up with measurable macroscopic effects (like the cited one), then a sufficiently sophisticated simulation can produce observations matching those expectations.
Agreed. The real point of a simulation is to use fewer computational resources to get approximately the same result as reality, depending on the goal of the simulation. So it may simulate only the surface of things, as computer games do.
Why would we need more research to work out that the simulation hypothesis is a bad idea? Computational universality implies that if we were being simulated on a computer, it would be impossible for us to know about the underlying hardware. Any hardware that implements a universal set of gates can support universal computation, and there are many different universal gate sets, so you can't tell which gates are being used by looking at the results of a computation. The simulation hypothesis therefore does no work in explaining what we observe. It also implies that we can't understand the real laws of physics - the physics of the simulator - since no experiment we conduct can tell us anything about the hardware. Another problem: the simulation might be programmed to change the laws of physics arbitrarily, which would ruin all of our existing knowledge of physics and everything else.
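A toy illustration of the gate-set point, using nothing beyond ordinary boolean logic: the same function (XOR) built entirely from NAND gates and entirely from NOR gates - two different universal gate sets - has identical input-output behavior, so the outputs alone can't reveal which gates the hardware used.

```python
def nand(a: int, b: int) -> int:
    return 1 - (a & b)

def nor(a: int, b: int) -> int:
    return 1 - (a | b)

def xor_from_nand(a: int, b: int) -> int:
    """XOR built from 4 NAND gates."""
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

def xor_from_nor(a: int, b: int) -> int:
    """XOR built from 5 NOR gates."""
    t = nor(a, b)
    w = nor(nor(a, t), nor(b, t))  # this subcircuit computes XNOR
    return nor(w, w)               # NOR(x, x) = NOT x, giving XOR

# Both circuits agree on every input, so an observer of outputs
# cannot tell which gate set implements the computation.
for a in (0, 1):
    for b in (0, 1):
        assert xor_from_nand(a, b) == xor_from_nor(a, b) == a ^ b
```

The same indistinguishability argument scales up to any computation, which is the crux of the objection above.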
There are no answers to these criticisms, so the simulation hypothesis is false.
The strength of the claim being made by Slashdot, and the lack of any examination of ways in which it could be false by whoever wrote Slashdot's summary, both invite skepticism.
I'm of the opinion that we are in base reality regardless, though. The reason is that the incentive for running a simulation is to observe the behavior of the system being simulated. If you have some vertical stack of simulations all simulating intelligent agents in a virtual world, and most of these simulations are simulating basically the same thing, that makes simulation very costly, because the 0th-level simulators won't learn anything from a simulation run by the simulants that they wouldn't learn from the base-level simulation. They would have an incentive to develop ways to starve non-useful simulant activity of computing resources.
An abstract question here, but does this paper prove we're not living in a simulation, or merely that, with our current knowledge, we can't prove whether we're in one or not?
The bigger question, of course, would be how many simulations deep we are, and how long until we make our own.
I'd say it proves that we are not living in a simulation that
a) runs in a universe that has the same computational constraints as ours and
b) simulates quantum effects faithfully at macroscopic levels
To take this a step further: while this doesn't prove we're not in a simulation, I think that if you accept that our universe can't be simulated from a universe that looks like ours, it destroys the whole anthropic/probability argument in favor of simulations, because that argument seems to rely on the claim that we will eventually create a singularity that simulates a lot of universes like ours. If that's not possible, then the main positive argument for the simulation hypothesis gets a lot weaker.
Maybe there's a higher-level universe with more permissive computational constraints, maybe not, but either way I'm not sure how you can make a probability argument for or against it.