So I've been trying to read the Quantum Physics sequence. I think I've understood about half of it - I've been rushed, and haven't really sat down and worked through the math. And so I apologize in advance for any mistakes I make here.

It seems like classical mechanics with quantized time is really easy to simulate with a computer: every step, you just calculate the forces, use them to update the velocities, and then use the new velocities to update the positions.
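
(For concreteness, here is a minimal sketch of that loop in Python. The spring-like force pulling toward the origin, the masses, and the step size are all made-up illustrations, not anything specific.)

```python
import numpy as np

def step(pos, vel, mass, dt):
    """One quantized time step: compute the force, update the velocity, then the position."""
    force = -pos                       # toy force pulling toward the origin (illustrative only)
    vel = vel + (force / mass) * dt    # figure out where the velocity is going
    pos = pos + vel * dt               # add the resulting change to the position
    return pos, vel

pos, vel = np.array([1.0, 0.0]), np.array([0.0, 1.0])
for _ in range(1000):
    pos, vel = step(pos, vel, mass=1.0, dt=0.01)
print(pos, vel)
```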

Then when you change to relativity, it seems like it's suddenly a lot harder to implement. Whereas classical mechanics is easy on a computer, it seems to me that you would have to set up a system where the outcomes of relativity are explicitly stated, while the classical outcomes are implicit.

The same thing seems to occur, except more so, with quantum physics. Continuous wave functions seem to be far harder than discrete particles. Similarly, the whole thing about "no individual particle identity" seems more difficult, although as I think of it now, I suppose this could be because the whole concept of particles is naive.

It doesn't seem like the computation rules simply get harder as we learn more physics. After all, trying to do thermal physics got a lot easier when we started using the ideal gas model.

Also, it's not just that ever-improving theories must be ever more difficult to implement on a computer. Imagine that we lived inside Conway's Game of Life. We would figure out all kinds of high-level physics, which would probably be way more complex than the underlying B3/S23 rule we would eventually discover.

It feels like the actual implemented physics shouldn't much affect how computation works. After all, we live in a quantum universe and classical physics is still simpler to compute.

Is there any value to this speculation?

35 comments

It feels like the actual implemented physics shouldn't much affect how computation works. After all, we live in a quantum universe and classical physics is still simpler to compute.

If I understand Scott Aaronson correctly (his site seeks to correct public understanding of quantum computing), any quantum system can be simulated with a classical system, but with exponential slowdown (as far as we know -- it hasn't been proven that no one will come up with a clever way to do it with less of a slowdown). But everything computable via a quantum system is computable via a classical system, just not necessarily as fast.

The formal name for the question of whether classical computers can work as efficiently as quantum computers is "Is BQP = P?". (BQP = problems solvable in polynomial time using a quantum computer; P = problems solvable in polynomial time with a classical computer.)


To be concrete, see Aaronson's works here and here for more specific analysis. I think that, just as Silas said, the issue is that you must worry about an exponentially larger set of classical states in quantum mechanics. When (or if) we build quantum computers, part of the point will be that they natively avoid this slowdown by virtue of being able to produce computations that are themselves superpositions of different states. If you then carefully engineer the way in which you make a measurement, you can put probability bounds on the likelihood that the observer version of yourself happens to be in the Everett branch corresponding to the correct answer. By repeating the computation, you can make the probability that "you" are not in the "right" Everett branch smaller and smaller, at least if you subscribe to the many-worlds interpretation.

You may also like to watch Aaronson's 2011 Buhl lecture as linked from this post. Among other things, it highlights reasons why we might regard computability and complexity theory as being more fundamental than physics, i.e. that complexity theory would actually enable you to predict what physics should be like in some sense.

Discrete particles vs. continuous wave functions is a red herring, I think. It's true that in a simulation of QM one would have to approximate amplitudes up to some finite precision (and approximate infinite dimensional Hilbert spaces using finite dimensional Hilbert spaces). But this is not a problem that is unique to QM. Simulating classical mechanics also requires approximating the positions and momenta of particles to finite precision.

You are right, though, that computing quantum mechanics is harder than computing classical mechanics. This is true even if we completely discretize both theories. The length of a vector representing the state of a classical system is linear in the size of the system, but the length of a vector representing the state of a quantum system is exponential in the size of the system.
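
(A toy illustration of that scaling, assuming two-state particles; the numbers are only meant to show linear versus exponential growth in the length of the state description.)

```python
for n in (10, 20, 30):
    classical = 2 * n   # e.g. one position and one momentum per particle: grows linearly
    quantum = 2 ** n    # one complex amplitude per joint configuration: grows exponentially
    print(f"n = {n}: classical state vector ~{classical} numbers, quantum ~{quantum} amplitudes")
```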

Continuous wave functions seem to be far harder than discrete particles.

True, but consider that discrete particles are a special case of continuous wave functions: The case where the phase-space distribution is a delta function or some very good approximation to it. If you limit your quantum mechanics to cases that classical mechanics can describe well, then you are operating, in effect, on delta functions, and this simplifies the math immensely.

The apparent simplicity of classical mechanics comes from looking only at these special cases; quantum mechanics deals with more general situations, so it is unavoidably more complex. But if you made a quantum-mechanics program to predict only classical situations, it would not necessarily be more complex than the corresponding program using the laws of Newton, who is not forgotten.

I presume that you refer to the difference between point particles and fields. The wave function in QM is essentially a classical field, where for every point in space you have a set of numbers describing the quantum state. This state evolves in time according to the time-dependent Schrodinger equation, until "something classical" happens, at which point you have to throw dice to pick one of the preferred states (also known as the wave-function collapse or the MWI world split).

The part of QM which is just the evolution of the Schrodinger equation is computationally equivalent to modeling diffusion or heat flow: the equation structure is very similar, with complex numbers instead of reals. The "measurement" part (calculating an outcome) is nearly trivial: just pick one of the outcomes (or one of the worlds, if you are a fan of MWI) according to the Born rule.
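
(A minimal sketch of that "measurement" step, with a made-up four-point discretized wave function; only the Born-rule sampling itself is the point here.)

```python
import numpy as np

psi = np.array([0.1 + 0.2j, 0.5 - 0.1j, 0.3 + 0.4j, 0.2 + 0.0j])  # hypothetical amplitudes
psi = psi / np.linalg.norm(psi)                 # normalize so the probabilities sum to 1

probs = np.abs(psi) ** 2                        # Born rule: probability = |amplitude|^2
outcome = np.random.choice(len(psi), p=probs)   # "throw dice" to pick one outcome / world
print(outcome, probs)
```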

While it is true that there are many shortcuts to solving the Schrodinger equation numerically, and a set of these shortcuts is what tends to be studied in most QM courses, there is no substitute for numerical evolution in the general case.

Quantum computing is a rather different beast from regular quantum physics, just like classical computing is different from classical physics: computing is an abstraction level on top, which lets you think only about the parts of the problem that are relevant, and not worry about the underlying details. Quantum computing is no more about quantum mechanics than classical computer science is about the physics of silicon gates.

One final point. It is a general observation that understanding of any scientific topic is greatly enhanced by teaching it, and teaching it to the ultimate idiot savant, a computer, is bound to probe your understanding of the topic extensively. So, if you want to learn QM, get your hands dirty with one of the computational projects like this one, and if you want to learn quantum computing, write a simulation of Shor's algorithm or something similar.

There are a variety of ways of measuring how complex something is. Three of them seem relevant here. The first is: given a sequence a_i of integers, what is the smallest classical Turing machine which on input i outputs a_i? This is really tough in general, and quickly leads to unsolvability issues. In general, the simpler question "what is the smallest Turing machine which outputs string S" is called the Kolmogorov complexity of S and is essentially undecidable in the general case. One can look at empirical Kolmogorov complexity and ask the same question about the smallest known machine. This turns out to be not that useful, as I understand it.

More effective methods look not at the Turing machine's size but at how much tape space or how much time it takes up. You may have heard of this sort of thing in the context of P ?= NP. In this context, the set of decision problems which can be solved reliably by a quantum computer in polynomial time is BQP. We do have specific problems that seem to be in BQP but that do not seem to be solvable in polynomial time on a classical computer. Although this has not been formally proven, recent work by Scott Aaronson leads one to suspect the stronger statement that BQP includes problems which are outside what is called the polynomial hierarchy, which is a much broader computational framework. If this is the case, then simulating quantum computation classically must be very hard.

Note that when talking about BQP one is getting essentially a single bit of information out, so the computation can always be performed with finite resources (in fact it is in PSPACE, and thus bounded by exponential time). If one wants to predict more of the system's behavior, then you need a lot more resources, and it seems it could take potentially infinite resources to fully predict even basic behavior.


Classical physics isn't exactly computable, because some systems are critically dependent on their initial conditions, and an exact specification of the initial conditions requires an infinite amount of information (an unlimited-precision real number).
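
(An illustration of that sensitivity, using the chaotic logistic map as a stand-in for such a system; the parameter 3.9 and the 1e-12 perturbation are arbitrary choices.)

```python
x, y = 0.4, 0.4 + 1e-12          # two initial conditions differing far below any realistic precision
for step in range(61):
    if step % 10 == 0:
        print(step, abs(x - y))  # the gap grows until the truncated digits dominate the prediction
    x, y = 3.9 * x * (1 - x), 3.9 * y * (1 - y)
```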

Just a quick question in case anybody knows... the complex amplitude value that each point in configuration space contributes to the amplitude of some future point in configuration space... the phase can be any value, but is the magnitude variable too? Or are there just lots of vector additions of the same sized vectors pointing in different directions?

As pragmatist says, the magnitude can vary, but note that the integral of the squared magnitude over the whole of phase space is constant - this is the unitarity condition, also known as "conservation of probability". So the magnitudes cannot vary completely freely.
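
(A quick numerical check of that for a finite-dimensional toy system: a randomly generated Hermitian "Hamiltonian", exponentiated into a unitary, leaves the total squared magnitude at 1. The matrix size and the time step are arbitrary choices.)

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
H = (A + A.conj().T) / 2            # a random Hermitian "Hamiltonian"
U = expm(-1j * H * 0.1)             # the unitary evolution it generates over a small time step

psi = rng.normal(size=8) + 1j * rng.normal(size=8)
psi = psi / np.linalg.norm(psi)     # total squared magnitude is 1
print(np.linalg.norm(U @ psi))      # still 1: unitarity conserves the total probability
```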

The "conservation of probability" only applies in low energy limits where the numbers of particles are conserved. So if there are 6 electrons, the integral over all (problem) space of the magnitude of the electron wave function will add up to 6.

But in problems where the energies are high enough, positron-electron pairs can be created from the vacuum. In problems like this, the total number of electrons is variable, is uncertain, and the "conservation of probability" does not apply.

What Alejandro said. I phrased myself carefully; "the whole of phase space" can in some cases include numbers of particles as one of the variables. Unitarity still holds.

This seems wrong to me. If you are in this high-energy case, then you should use QFT and the argument of your wavefunction is a field configuration, not particle positions. And conservation of probability still applies, in the sense that the wavefunction for the quantum field evolves unitarily.

It is just that the observable "number of particles" is not conserved. In most states, it does not even have a defined value (i.e. you can get superpositions of states with different numbers of particles). But properly defined, unitarity is still valid; indeed it is one of the cornerstones of quantum field theory.

The magnitude is variable as well. A wavefunction is a map from configuration space to the entire complex plane, not just the unit circle on the complex plane.

Indeed, and I might add, since the probability of an outcome is the modulus squared of the wavefunction, if the range of the wavefunction was just the unit circle, then all outcomes would always have equal probability.

If you're talking about the complexity of the program, as opposed to how much computing power it takes to actually run, time-dependent quantum physics is pretty easy. You just take an n^m lattice (that is, n x n x n x ... x n, m times) of complex numbers, apply the Hamiltonian to the current amplitudes (a constant times the sum of the second derivatives along each dimension, plus the potential energy), multiply the result by -i, and add a tiny fraction of that to the current amplitudes. You have to do this to really high precision to prevent error from accumulating (though you could improve the scheme to make error accumulate more slowly).
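
(Roughly that scheme in one dimension might look like the sketch below. The harmonic potential, grid size, and step sizes are arbitrary choices of mine, with hbar = m = 1 and periodic boundaries; a real simulation would use a norm-preserving integrator rather than this naive Euler step.)

```python
import numpy as np

n, dx, dt = 200, 0.1, 1e-4
x = (np.arange(n) - n / 2) * dx
V = 0.5 * x**2                                   # made-up harmonic potential
psi = np.exp(-(x - 1.0) ** 2).astype(complex)    # some initial wave packet
psi /= np.linalg.norm(psi)

def H(psi):
    # a constant times the second derivative (finite differences) plus the potential term
    lap = (np.roll(psi, 1) - 2 * psi + np.roll(psi, -1)) / dx**2
    return -0.5 * lap + V * psi

for _ in range(10000):
    psi = psi + (-1j) * H(psi) * dt              # add a tiny fraction of -i*H*psi each step
print(np.sum(np.abs(psi) ** 2))                  # should stay close to 1 if dt is small enough
```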

Time-independent quantum mechanics isn't so easy. You could calculate it propagating out from some initial condition, but I think this will result in runaway amplitude. I suspect that if you mess with the Born rule and make the probability go down as the position moves further from the origin, you could get it to work about right.

If that doesn't work, then if you calculate enough of these improper universes, you should be able to get a proper one as a linear combination of them. Just keep track of how much they're running away, and add them so that it all goes to zero.

I'm not certain what you mean by time independent quantum mechanics in this case. Do you mean identifying energy eigenstates and their eigenvalues?

I mean Timeless Physics. I accidentally called it "time independent" because it's governed by the time-independent Schroedinger equation.

It's basically equivalent to assuming that the entire universe is a standing wave.

But the universe isn't in an energy eigenstate. Things have been known to happen, from time to time, which they wouldn't if it were.

Perhaps "multiverse" would have been a better word.

Imagine an electron in an eigenstate. It's moving around the atom, but the entire waveform is stationary. Similarly, if the entire universe is in an eigenstate, the universe would still be moving, but the probability for a given universe would stay constant. Put another way, as time passes in this universe, a parallel universe just like this one was moments ago moves into its place. We have no way of proving that this doesn't happen. In fact, to some extent, it must. The immediate past is almost exactly like the present, so a universe that looks just like it must have similar amplitude to this one now.

Do you speak about calculating eigenfunctions of the Hamiltonian, then? If so, it is not such a big problem numerically. In one spatial dimension it can be done in a straightforward fashion: select an energy E, choose two distinct pairs of values of ψ(x) and dψ(x)/dx at some point x0, numerically solve the Schrödinger equation for both pairs starting at x0, finding ψ1(x) and ψ2(x), then look for a linear combination ψ = a ψ1 + b ψ2 such that ψ(x) = 0 at both boundaries, and scan over all values of E to find those for which this is possible; those are the eigenvalues.
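
(A simplified variant of that shooting procedure, as a sketch: instead of combining two solutions, it integrates from one boundary with an arbitrary slope and scans E for sign changes at the other boundary, for an infinite square well on [0, 1] with hbar = m = 1. The grid size and energy range are arbitrary choices.)

```python
import numpy as np

def psi_at_right_boundary(E, n_steps=2000):
    """Integrate -1/2 psi'' = E psi from x = 0 (where psi = 0) to x = 1 and return psi(1)."""
    dx = 1.0 / n_steps
    psi, dpsi = 0.0, 1.0                # arbitrary nonzero slope at the left boundary
    for _ in range(n_steps):
        dpsi += (-2.0 * E * psi) * dx   # psi'' = -2 E psi  (V = 0 inside the well)
        psi += dpsi * dx
    return psi

# Scan energies; a sign change in psi(1) brackets an eigenvalue (exact values: (n*pi)^2 / 2).
energies = np.linspace(0.1, 60.0, 1200)
values = [psi_at_right_boundary(E) for E in energies]
for E_lo, E_hi, v_lo, v_hi in zip(energies[:-1], energies[1:], values[:-1], values[1:]):
    if v_lo * v_hi < 0:
        print("eigenvalue near", (E_lo + E_hi) / 2)
```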

In more spatial dimensions you can do the same if you can separate the variables. If not, you can still do something. For example, if you want to find the ground state, first make sure that the Hamiltonian is positive (if not, add a constant). Then begin with an arbitrary wave function (which satisfies the boundary conditions, but isn't stationary) and let it evolve in imaginary time, i.e. solve the Schrödinger equation without the i at the time derivative. All components will be suppressed exponentially, but the lowest-energy component will vanish most slowly (or remain constant if you managed to choose the additive constant so that the lowest energy is zero). Effectively you get the ground-state wave function.
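
(A minimal sketch of that imaginary-time trick on a 1-D grid with a toy harmonic well; the grid, step sizes, and potential are my own arbitrary choices. For this well the printed ground-state energy should come out near 0.5.)

```python
import numpy as np

n, dx, dtau = 200, 0.1, 1e-3
x = (np.arange(n) - n / 2) * dx
V = 0.5 * x**2                                    # toy harmonic well (hbar = m = omega = 1)

def H(psi):
    lap = (np.roll(psi, 1) - 2 * psi + np.roll(psi, -1)) / dx**2
    return -0.5 * lap + V * psi

psi = np.random.default_rng(0).normal(size=n)     # arbitrary non-stationary starting function
for _ in range(20000):
    psi = psi - dtau * H(psi)                     # Schroedinger equation "without the i"
    psi /= np.linalg.norm(psi)                    # renormalize; higher-energy components decay fastest

print(psi @ H(psi))                               # Rayleigh quotient ~ ground-state energy ~ 0.5
```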

By the way, I find the "Timeless Physics" label somewhat confusing here. As I understand it, it is a way of looking at physics that doesn't give time a special role. Whatever one thinks about the ontological status of time shouldn't influence whether one is interested in stationary solutions or not.

Do you speak about calculating eigenfunctions of the Hamiltonian, then?

That would only tell you what values are possible for E, not what to set ψ(0) and ψ'(0) to, wouldn't it? That said, I doubt finding those would be much harder.

As I understand it, it is a way of looking at physics that doesn't give time a special role.

Not exactly. It's a different theory. If you believe time exists, the past and future wouldn't interfere with each other. If you believe it doesn't, there's only one "time" and everything interferes. To my knowledge, it's not different enough to actually test, like MWI vs. Copenhagen, but it's different.

That would only tell you what values are possible for E, not what to set ψ(0) and ψ'(0) to,

They are already calculated in the process - they are a ψ1(0) + b ψ2(0) and a ψ1'(0) + b ψ2'(0), respectively.

Not exactly. It's a different theory. If you believe time exists, the past and future wouldn't interfere with each other. If you believe it doesn't, there's only one "time" and everything interferes.

I don't understand what interference of past and future means.

They are already calculated in the process - they are a ψ1(0) + b ψ2(0) and a ψ1'(0) + b ψ2'(0), respectively.

Wouldn't that just make it zero at that point, rather than keeping it from increasing without limit as you get further away?

I don't understand what interference of past and future means.

According to normal MWI, there's a universe in the present that's just like this one was in the immediate past. Each of them has its own amplitude. If you wanted to calculate the probability of a universe that looks exactly like that, you'd take the probabilities of those universes individually and add them together. With timeless physics, there's only one universe that looks like that. If you were to add two waveforms, perhaps one representing then and one now, you'd add the amplitudes and then square the magnitude of that.
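
(A toy numerical version of that distinction, with two made-up amplitudes for the matching configuration.)

```python
a = 0.3 + 0.4j   # hypothetical amplitude for "that universe" reached via the past configuration
b = 0.3 - 0.4j   # hypothetical amplitude for it reached via the present one

print(abs(a) ** 2 + abs(b) ** 2)   # separate universes: add probabilities            -> 0.5
print(abs(a + b) ** 2)             # one timeless configuration: add amplitudes first -> 0.36
```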

I try to make a ψ1 + b ψ2 zero at the boundaries, not at the origin. If the boundaries lie at +/- infinity, I have to approximate them by some finite number (when I was doing that last time, I used infinity = 8; it worked well for the respective problem). If this is what you were asking about.

According to normal MWI, there's a universe in the present that's just like this one was in the immediate past. Each of them has its own amplitude. If you wanted to calculate the probability of a universe that looks exactly like that, you'd take the probabilities of those universes individually and add them together.

I'm sorry, but I can't parse the demonstrative pronouns. Please denote the universes by symbols.

It all depends on your system. If you're computing the fully quantum system of a spin glass on a regular grid using an Ising Hamiltonian or something like that, it can be computed far more straightforwardly than most fully classical systems with a similar number of particles.

It also depends on the approximations you're making. You mention the ideal gas law, but that doesn't apply to gases sufficiently far from thermal equilibrium, or even mechanical equilibrium. If you pop a balloon in outer space, the gases will self-sort by momentum, and thus cool faster than the ideal gas law over their volume would suggest.

So, each time we learn new physics that isn't clever approximation, new effects are being added. They will necessarily add computational complexity, unless they allow new, different approximations to be made which more than make up for that new complexity.

By "relativity" you mean what? If relativistic mechanics of point particles in a given background, then it is computationally as complex as classical mechanics. If the background (i.e. gravitational and electromagnetic fields) is to be determined dynamically, then it is harder because you have infinitely many degrees of freedom. But that's the case of many non-relativistic classical systems (fluid dynamics...) too.

it seems to me that you would have to set up a system where the outcomes of relativity are explicitly stated, while the classical outcomes are implicit

What does this mean? I can think of several interpretations, all of them false:

  1. It is impossible to calculate the evolution of a relativistic system from the initial state alone; you also need to know the final state.
  2. It is possible to calculate the relativistic evolution from the initial state, but you have to do it symbolically, numerical methods won't work.
  3. It is impossible to calculate anything numerically in a relativistic system, solutions have to be guessed and later verified, they can't be systematically constructed.

It is impossible to calculate anything numerically in a relativistic system, solutions have to be guessed and later verified, they can't be systematically constructed.

From a practical point of view, in general relativity this is almost true.

Because there is a large gauge freedom in the choice of coordinates, and a random choice of gauge will likely produce a coordinate singularity somewhere, beyond which you will not see. So you don't reconstruct the whole spacetime history, but you can reconstruct at least something, and perhaps use different coordinates to move further. Of course there are problems with precision whenever the equations are sufficiently non-linear, but that's nothing specific to relativity.

Not just that: you are free to choose a gauge that only "kicks in" in the future. In fact there is no unique well-defined future history, just a future defined up to gauge, even if you fix a choice of gauge for the present.

Gauge fixing has to be done for all of history, otherwise there are fewer equations than dynamical variables, of course.

The point is that this is very hard to do for general relativity.

Our best models of things require an infinite number of classical bits to completely specify. Though it's not like that's new - it takes an infinite number of bits to specify an arbitrary classical velocity too.

The basic rules are already pretty simple. It's just that representing them on a classical computer is impossible.

I'm not talking about bit count so much as complexity of the program.


I'm not talking about bit count so much as complexity of the program.

That is the complexity.

[This comment is no longer endorsed by its author]