Followup to: Decoherence is Pointless
In "Decoherence is Pointless", we talked about quantum states such as
(Human-BLANK) * ((Sensor-LEFT * Atom-LEFT) + (Sensor-RIGHT * Atom-RIGHT))
which describes the evolution of a quantum system just after a sensor has measured an atom, and right before a human has looked at the sensor—or before the human has interacted gravitationally with the sensor, for that matter. (It doesn't take much interaction to decohere objects the size of a human.)
But this is only one way of looking at the amplitude distribution—a way that makes it easy to see objects like humans, sensors, and atoms. There are other ways of looking at this amplitude distribution—different choices of basis—that will make the decoherence less obvious.
Suppose that you have the "entangled" (non-independent) state:
(Sensor-LEFT * Atom-LEFT) + (Sensor-RIGHT * Atom-RIGHT)
considering now only the sensor and the atom.
This state looks nicely diagonalized—separated into two distinct blobs. But by linearity, we can take apart a quantum amplitude distribution any way we like, and get the same laws of physics back out. So in a different basis, we might end up writing (Sensor-LEFT * Atom-LEFT) as:
(0.5(Sensor-LEFT + Sensor-RIGHT) + 0.5(Sensor-LEFT - Sensor-RIGHT)) * (0.5(Atom-RIGHT + Atom-LEFT) - 0.5(Atom-RIGHT - Atom-LEFT))
(Don't laugh. There are legitimate reasons for physicists to reformulate their quantum representations in weird ways.)
The result works out the same, of course. But if you view the entangled state in a basis made up of linearly independent components like (Sensor-LEFT - Sensor-RIGHT) and (Atom-RIGHT - Atom-LEFT), you see a differently shaped amplitude distribution, and it may not look like the blobs are separated.
Oh noes! The decoherence has disappeared!
...or that's the source of a huge academic literature asking, "Doesn't the decoherence interpretation require us to choose a preferred basis?"
To which the short answer is: Choosing a basis is an isomorphism; it doesn't change any experimental predictions. Decoherence is an experimentally visible phenomenon or we would not have to protect quantum computers from it. You can't protect a quantum computer by "choosing the right basis" instead of using environmental shielding. Likewise, looking at splitting humans from another angle won't make their decoherence go away.
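That short answer can even be checked with a toy calculation. Below is a minimal sketch, assuming two-component kets for the sensor and the atom and a particular rotation of the sensor's basis (both are my own illustrative choices, not anything fixed by the physics): the coefficients of the amplitude distribution change shape under the basis change, but every measurable quantity comes out identical.

```python
# Minimal sketch: a change of basis is a unitary transformation, so while the
# written-out coefficients change, norms and overlaps (hence all experimental
# predictions) stay exactly the same. Kets and rotation chosen for illustration.
import numpy as np

LEFT = np.array([1.0, 0.0])
RIGHT = np.array([0.0, 1.0])

# The entangled state (Sensor-LEFT * Atom-LEFT) + (Sensor-RIGHT * Atom-RIGHT),
# normalized:
state = (np.kron(LEFT, LEFT) + np.kron(RIGHT, RIGHT)) / np.sqrt(2)

# Rotate the sensor factor into the basis (LEFT + RIGHT), (LEFT - RIGHT):
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
U = np.kron(H, np.eye(2))
state_new = U @ state

# The blobs look different in the new basis...
assert not np.allclose(state, state_new)
# ...but the total probability and any overlap are unchanged:
assert np.isclose(state_new @ state_new, state @ state)
test = np.kron(LEFT, RIGHT)
assert np.isclose(abs((U @ test) @ state_new), abs(test @ state))
```

The same check goes through for any unitary change of basis: that is all "choosing a basis is an isomorphism" means.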
But this is an issue that you're bound to encounter if you pursue quantum mechanics, especially if you talk to anyone from the Old School, and so it may be worth expanding on this reply.
After all, if the short answer is as obvious as I've made it sound, then why, oh why, would anyone ever think you could eliminate an experimentally visible phenomenon like decoherence, by isomorphically reformulating the mathematical representation of quantum physics?
That's a bit difficult to describe in one mere blog post. It has to do with history. You know the warning I gave about dragging history into explanations of QM... so consider yourself warned: Quantum mechanics is simpler than the arguments we have about quantum mechanics. But here, then, is the history:
Once upon a time,
Long ago and far away, back when the theory of quantum mechanics was first being developed,
No one had ever thought of decoherence. The question of why a human researcher only saw one thing at a time, was a Great Mystery with no obvious answer.
You had to interpret quantum mechanics to get an answer back out of it. Like reading meanings into an oracle. And there were different, competing interpretations. In one popular interpretation, when you "measured" a system, the Quantum Spaghetti Monster would eat all but one blob of amplitude, at some unspecified time that was exactly right to give you whatever experimental result you actually saw.
Needless to say, this "interpretation" wasn't in the quantum equations. You had to add in the extra postulate of a Quantum Spaghetti Monster on top, in addition to the differential equations you had fixed experimentally for describing how an amplitude distribution evolved.
Along came Hugh Everett and said, "Hey, maybe the formalism just describes the way the universe is, without any need to 'interpret' it."
But people were so used to adding extra postulates to interpret quantum mechanics, and so unused to the idea of amplitude distributions as real, that they couldn't see this new "interpretation" as anything except an additional Decoherence Postulate which said:
"When clouds of amplitude become separated enough, the Quantum Spaghetti Monster steps in and creates a new world corresponding to each cloud of amplitude."
So then they asked:
"Exactly how separated do two clouds of amplitude have to be, quantitatively speaking, in order to invoke the instantaneous action of the Quantum Spaghetti Monster? And in which basis does the Quantum Spaghetti Monster measure separation?"
But, in the modern view of quantum mechanics—which is accepted by everyone except for a handful of old fogeys who may or may not still constitute a numerical majority—well, as David Wallace puts it:
"If I were to pick one theme as central to the tangled development of the Everett interpretation of quantum mechanics, it would probably be: the formalism is to be left alone."
Decoherence is not an extra phenomenon. Decoherence is not something that has to be proposed additionally. There is no Decoherence Postulate on top of standard QM. It is implicit in the standard rules. Decoherence is just what happens by default, given the standard quantum equations, unless the Quantum Spaghetti Monster intervenes.
Some still claim that the quantum equations are unreal—a mere model that just happens to give amazingly good experimental predictions. But then decoherence is what happens to the particles in the "unreal model", if you apply the rules universally and uniformly. It is denying decoherence that requires you to postulate an extra law of physics, or an act of the Quantum Spaghetti Monster.
(Needless to say, no one has ever observed a quantum system behaving coherently, when the untouched equations say it should be decoherent; nor observed a quantum system behaving decoherently, when the untouched equations say it should be coherent.)
If you're talking about anything that isn't in the equations, you must not be talking about "decoherence". The standard equations of QM, uninterpreted, do not talk about a Quantum Spaghetti Monster creating new worlds. So if you ask when the Quantum Spaghetti Monster creates a new world, and you can't answer the question just by looking at the equations, then you must not be talking about "decoherence". QED.
Which basis you use in your calculations makes no difference to standard QM. "Decoherence" is a phenomenon implicit in standard QM. Which basis you use makes no difference to "decoherence". QED.
Changing your view of the configuration space can change your view of the blobs of amplitude, but ultimately the same physical events happen for the same causal reasons. Momentum basis, position basis, position basis with a different relativistic space of simultaneity—it doesn't matter to QM, ergo it doesn't matter to decoherence.
If this were not so, you could do an experiment to find out which basis was the right one! Decoherence is an experimentally visible phenomenon—that's why we have to protect quantum computers from it.
Ah, but then where is the decoherence in
(0.5(Sensor-LEFT + Sensor-RIGHT) + 0.5(Sensor-LEFT - Sensor-RIGHT)) * (0.5(Atom-RIGHT + Atom-LEFT) - 0.5(Atom-RIGHT - Atom-LEFT)) + (0.5(Sensor-LEFT + Sensor-RIGHT) - 0.5(Sensor-LEFT - Sensor-RIGHT)) * (0.5(Atom-RIGHT + Atom-LEFT) + 0.5(Atom-RIGHT - Atom-LEFT))
The decoherence is still there. We've just made it harder for a human to see, in the new representation.
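If you don't trust the algebra by eye, the intimidating expression above can be expanded mechanically and checked to be the very same state. A quick sketch, assuming illustrative two-component kets:

```python
# Expanding the "new representation" term by term gives back exactly
# (Sensor-LEFT * Atom-LEFT) + (Sensor-RIGHT * Atom-RIGHT) -- nothing physical
# has changed, only how the distribution is written down.
import numpy as np

sL, sR = np.array([1.0, 0.0]), np.array([0.0, 1.0])   # sensor kets
aL, aR = np.array([1.0, 0.0]), np.array([0.0, 1.0])   # atom kets

original = np.kron(sL, aL) + np.kron(sR, aR)

rewritten = (
      np.kron(0.5 * (sL + sR) + 0.5 * (sL - sR),
              0.5 * (aR + aL) - 0.5 * (aR - aL))
    + np.kron(0.5 * (sL + sR) - 0.5 * (sL - sR),
              0.5 * (aR + aL) + 0.5 * (aR - aL))
)
assert np.allclose(rewritten, original)   # same amplitude distribution
```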
The main interesting fact I would point to, about this amazing new representation, is that we can no longer calculate its evolution with local causality. For a technical definition of what I mean by "causality" or "local", see Judea Pearl's Causality. Roughly, to compute the evolution of an amplitude cloud in a locally causal basis, each point in configuration space only has to look at its infinitesimal neighborhood to determine its instantaneous change. As I understand quantum physics—I pray to some physicist to correct me if I'm wrong—the position basis is local in this sense.
(Note: It's okay to pray to physicists, because physicists actually exist and can answer prayers.)
However, once you start breaking down the amplitude distribution into components like (Sensor-RIGHT - Sensor-LEFT), then the flow of amplitude, and the flow of causality, is no longer local within the new configuration space. You can still calculate it, but you have to use nonlocal calculations.
In essence, you've obscured the chessboard by subtracting the queen's position from the king's position. All the information is still there, but it's harder to see.
When it comes to talking about whether "decoherence" has occurred in the quantum state of a human brain, what should intuitively matter is questions like, "Does the event of a neuron firing in Human-LEFT have a noticeable influence on whether a corresponding neuron fires in Human-RIGHT?" You can choose a basis that will mix up the amplitude for Human-LEFT and Human-RIGHT, in your calculations. You cannot, however, choose a basis that makes a human neuron fire when it would not otherwise have fired; any more than you can choose a basis that will protect a quantum computer without the trouble of shielding, or choose a basis that will make apples fall upward instead of down, etcetera.
The formalism is to be left alone! If you're talking about anything that isn't in the equations, you're not talking about decoherence! Decoherence is part of the invariant essence that doesn't change no matter how you spin your basis—just like the physical reality of apples and quantum computers and brains.
There may be a kind of Mind Projection Fallacy at work here. A tendency to see the basis itself as real—something that a Quantum Spaghetti Monster might come in and act upon—because you spend so much time calculating with it.
In a strange way, I think, this sort of jump is actively encouraged by the Old School idea that the amplitude distributions aren't real. If you were told the amplitude distributions were physically real, you would (hopefully) get in the habit of looking past mere representations, to see through to some invariant essence inside—a reality that doesn't change no matter how you choose to represent it.
But people are told the amplitude distribution is not real. The calculation itself is all there is, and has no virtue save its mysteriously excellent experimental predictions. And so there is no point in trying to see through the calculations to something within.
Then why not interpret all this talk of "decoherence" in terms of an arbitrarily chosen basis? Isn't that all there is to interpret—the calculation that you did in some representation or another? Why not complain, if—having thus interpreted decoherence—the separatedness of amplitude blobs seems to change, when you change the basis? Why try to see through to the neurons, or the flows of causality, when you've been told that the calculations are all?
(This notion of seeing through—looking for an essence, and not being distracted by surfaces—is one that pops up again and again, and again and again and again, in the Way of Rationality.)
Another possible problem is that the calculations are crisp, but the essences inside them are not. Write out an integral, and the symbols are digitally distinct. But an entire apple, or an entire brain, is larger than anything you can handle formally.
Yet the form of that crisp integral will change when you change your basis; and that sloppy real essence will remain invariant. Reformulating your equations won't remove a dagger, or silence a firing neuron, or shield a quantum computer from decoherence.
The phenomenon of decoherence within brains and sensors, may not be any more crisply defined than the brains and sensors themselves. Brains, as high-level phenomena, don't always make a clear appearance in fundamental equations. Apples aren't crisp, you might say.
For historical reasons, some Old School physicists are accustomed to QM being "interpreted" using extra postulates that involve crisp actions by the Quantum Spaghetti Monster—eating blobs of amplitude at a particular instant, or creating worlds at a particular instant. Since the equations aren't supposed to be real, the sloppy borders of real things are not looked for, and the crisp calculations are primary. This makes it hard to see through to a real (but uncrisp) phenomenon among real (but uncrisp) brains and apples, invariant under changes of crisp (but arbitrary) representation.
Likewise, any change of representation that makes apples harder to see, or brains harder to see, will make decoherence within brains harder to see. But it won't change the apple, the brain, or the decoherence.
As always, any philosophical problems that result from "brain" or "person" or "consciousness" not being crisply defined, are not the responsibility of physicists or of any fundamental physical theory. Nor are they limited to decoherent quantum physics particularly, appearing likewise in splitting brains constructed under classical physics, etcetera.
Coming tomorrow (hopefully): The Born Probabilities, aka, that mysterious thing we do with the squared modulus to get our experimental predictions.
Part of The Quantum Physics Sequence
Next post: "The Born Probabilities"
Previous post: "Decoherence is Pointless"
I can't say that I've understood everything in the series on QM, but it has been immensely useful for me in beginning to understand it. And, in general, most of what I've read of OB I've found useful - especially the comments, because I find myself "catching up" with a lot of these ideas, and I have a tendency when I'm catching up to the ideas of someone more intelligent than me to not find fault where I would if I had a better grasp of the ideas.

Though I'm still not very far along on the path to being a rationalist, I know that it's a path I've been trying to walk my whole life, despite the fact that much of it was spent stumbling and tripping through religion, popular politics, and arguments that were more about proving who was "right" rather than finding out what was right. I'm glad to have found yet another resource for walking the path, especially one as useful as this one.

I haven't commented here before, but I just thought I'd toss in that I really appreciate the writing you do here (and yours as well, Robin) and I'm glad that I stumbled across this blog.
Hrm... I wonder if there are other bases with local behavior besides the usual positional one?

If yes, does what we see as decoherence automatically "look decoherent" in that basis too?
Psy-Kosh: in the basis of eigenvectors of the Hamiltonian, not only is the equation local, but nothing even moves.
Chris, forgive me if this is a foolish question, but wouldn't the components corresponding to eigenvalues of the Hamiltonian change only by a constant complex factor, rather than not changing at all?
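For what it's worth, a quick numerical sketch bears this suspicion out (the two-level Hamiltonian is invented for illustration, and hbar is set to 1): an energy eigenvector is only multiplied by a time-dependent phase, rotating in the complex plane at a rate set by its eigenvalue.

```python
# Sketch (hbar = 1, Hamiltonian made up): under U(t) = exp(-i H t), an energy
# eigenvector changes only by the phase exp(-i E t).
import numpy as np

H = np.array([[1.0, 0.5], [0.5, 2.0]])   # a Hermitian toy Hamiltonian
E, V = np.linalg.eigh(H)                 # eigenvalues, orthonormal eigenvectors

t = 0.7
U = V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T   # time evolution exp(-i H t)

psi0 = V[:, 0]                           # an energy eigenvector
psi_t = U @ psi0
# psi_t equals psi0 times a pure phase proportional to the eigenvalue E[0]:
assert np.allclose(psi_t, np.exp(-1j * E[0] * t) * psi0)
```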
Chris, Eliezer: Yeah, at least from what I recall studying, the time development for Hamiltonian eigenvectors basically has them spinning around the complex plane (with the rate of rotation being a function of the eigenvalue; in fact, I believe it is directly proportional).
Actually, this discussion leads me to wonder something: What properties does a matrix have to have such that its eigenvectors form a complete basis?
The eigenvectors of a matrix form a complete orthogonal basis if and only if the matrix commutes with its Hermitian conjugate (i.e. the complex conjugate of its transpose). Matrices with this property are called "normal". Any Hamiltonian is Hermitian: it is equal to its Hermitian conjugate. Any quantum time evolution operator is unitary: its Hermitian conjugate is its inverse. Any matrix commutes with itself and its inverse, so the eigenvectors of any Hamiltonian or time evolution operator will always form a complete orthogonal basis. (I don't remember what the answer is if you don't require the basis to be orthogonal.)
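The criterion above is easy to poke at numerically. A sketch, with a matrix invented for illustration:

```python
# A 90-degree rotation matrix is unitary, hence normal (it commutes with its
# Hermitian conjugate), so its eigenvectors form a complete orthonormal basis.
import numpy as np

R = np.array([[0.0, -1.0], [1.0, 0.0]])              # rotation by 90 degrees
assert np.allclose(R @ R.conj().T, R.conj().T @ R)   # normal

w, V = np.linalg.eig(R)                  # eigenvalues are +i and -i
# The (unit-norm) eigenvectors of this normal matrix are orthogonal:
assert np.allclose(V.conj().T @ V, np.eye(2))
```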
It would be a pleasure and a treat to join the recent discussion on QM, especially the Ebborian interlude, but I cannot afford the dozens of hours of study and reflection it would take to get to the point where I could actually contribute to the discussion.
If I ever find myself with the luxury of being able to study QM, this blog or the book that comes from it is where I would go first for written study material. (I'd probably need a reliable mathematical treatment, too, but those are easy to find.)
Physics is the study of ultimate reality!
QM is in my humble opinion humankind's greatest achievement.
And yeah, I was wondering what the answer was if I don't necessarily demand them to be orthogonal, just that I require them to span the space.
Anyways, I'm right now reading through Down with Determinants. Maybe that'll have the answer in there.
(Actually, the part which I get to, at least for finite dimensional spaces, is already effectively in there: The number of distinct eigenvalues has to equal the dimension of the space. Of course, the question of what has to be true about a linear operator for that to hold is something I'm wondering. :))
"The number of distinct eigenvalues has to equal the dimension of the space."
That may be a sufficient condition but it is definitely not a necessary one. The identity matrix has only one eigenvalue, but it has a set of eigenvectors that span the space.
Stephen: whoops. Just realized that and came here to post that correction, and you already did. :)
I don't really follow a lot of what you've written on this, so maybe this isn't fair, but I'll put it out there anyway:
I have a hard time seeing much difference between you (Eliezer Yudkowsky) and the people you keep describing as wrong. They don't look beyond the surface, you look beyond it and see something that looks just like the surface (or the surface that's easiest to look at). They layer mysterious things on top of the theory to explain it, you layer mysterious things on top of physics to explain it. Their explanations all have fatal flaws, yours has just one serious problem. Their explanations don't actually explain anything, yours renames things (e.g. probability becomes "subjective expectation") without clearing up the cause of their relationships -- at least, not yet.
Psy-Kosh, Stephen: A finite-dimensional complex matrix has a complete basis of eigenvectors (i.e. it is diagonalizable) if and only if every generalized eigenvector is also an eigenvector. Intuitively, this means roughly that there are n independent directions (where n is the size of the matrix) such that vectors along these directions are stretched or shrunk uniformly by the matrix.
Try googling "jordan normal form", that may help clarify the situation.
I don't know the answer in the infinite-dimensional case.
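In the finite-dimensional case, the failure mode is easy to exhibit with the simplest Jordan block; a quick sketch:

```python
# The 2x2 Jordan block is not normal, and it has a generalized eigenvector
# that is not an eigenvector -- so its eigenvectors fail to span the space.
import numpy as np

J = np.array([[1.0, 1.0], [0.0, 1.0]])
assert not np.allclose(J @ J.conj().T, J.conj().T @ J)   # not normal

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
assert np.allclose((J - np.eye(2)) @ e2, e1)   # e2 is a generalized eigenvector
assert not np.allclose(J @ e2, 1.0 * e2)       # ...but not an eigenvector

w, V = np.linalg.eig(J)
# Numerically, both computed eigenvectors point in the same direction:
assert abs(np.linalg.det(V)) < 1e-8
```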
Roland, make that two. Though this mooshed my head.
I used to really enjoy thinking about how weird QM was. Look! The little photon goes through both holes at the same time! Not really any more though, it's starting to seem a little bit...ordinary.
Which is a good thing, of course.
Quick question - since you can't integrate over a single point, does that preclude the existence of any 'motionless' particle? Anything that ceased to have an appreciable (Planck-length?) amplitude spread would, in effect, not be there? That would chime with the transform duality thingy between location and velocity.
Hope I get chatting to someone who thinks in terms of quantum/classical dualities at some point, purely so that I can use the line "you're very clever, old man, but it's all amplitudes, all the time."
It's odd that the QM sequence is so little commented-on and voted-on, which suggests it's little-read. Which is particularly strange in that so much of EY's philosophy appears to build directly on his interpretation of QM. Does anyone have ideas on why? Are people just reading the headlines and going along with what they seem to say, and not reading the posts themselves and particularly not their comments?
The QM sequence was originally posted at overcoming bias, and was later posted here when LW was created. That explains its lack of comments and votes relative to posts made here originally. However, if there's a lack of comments and votes relative to other parts of the sequences (which were almost all originally posted at overcoming bias), then you've noticed something.
If this puzzle exists, I'd guess many people didn't read them because they were turned off by the math early on in the sequence.
I've been ploughing through the sequences in my idle reading time, more or less in wiki order, and yes, these have noticeably fewer votes and comments than other sequences. The QM sequence is the only place I've seen EY posts with votes of 0 or even -1. This suggests to me a lot fewer readers. (Perhaps displaying up and down totals, as per Reddit, would help distinguish "controversial" from "nobody cares".)
Controversial is a decent possibility. What EY says IS controversial among physicists, and that may be the source of some of his downvotes.
The lack of comments compared to other sequences doesn't fit that, though.
The QM sequence is also linked to a lot less than other posts, as it tends to be less directly relevant to conversation topics.
Is this really the case? It seems to me that that the interpretation of QM (and almost all micro-level details of fundamental physics) ought to be (and in Eliezer's case, are) independent of "macro-level" philosophy. Eliezer could justify his reductionism, his Bayesianism, his utilitarian ethics, his atheism, his opposition to most kinds of moral discounting, his intuitions regarding decision theory, his models of mind and of language, and his futurism - he could justify all these things even if he were a strict Newtonian believer in simple determinism who models all apparent indeterminacy as ignorance of the true initial conditions.
To my mind, the micro assumptions don't change the macro conclusions, they only change the way we talk about and justify them.
I agree with you that one should reach most if not all of the same conclusions from a strict Newtonian perspective (or from a Copenhagenite perspective, and so on). But the way it's talked about does scare me, because it's difficult for me to tell why they believe the things they believe, and opaque reasoning rings several warning bells.
That is, to answer your original question- "Is this really the case?"- it certainly is the case that it appears that EY's philosophy builds directly on his interpretation of QM. When judging by appearances, we have to take the language into account, and to go deeper requires that you go down the rabbit hole to tell whether or not EY's philosophy actually requires those things- and that rabbit hole is one that is forbidding for non-mathematicians and oddly disquieting for physicists (at least, that's my impression as a physicist). QM is an inferential distance minefield.
It seems to me that MWI is just a convenient visualization trick, and thus there is equivalence, but I don't feel I understand EY's philosophy and its development well enough to argue for that interpretation.
Agree. It would be nice to have Eliezer's take on this question.
That's as I understand it, too. However, I think that he also means that QM gives some additional evidence that consciousness is not substrate-dependent, as for instance Massimo Pigliucci meant in the Bloggingheads.TV discussion, because given QM there is no unique time-continuous neuron-number-124 in brain-234 etc. etc. at all. Only functions.
For a discussion of ems this helps. Pigliucci, on the other hand, meant that substrate-independence would imply a dualism. Which left me somewhat puzzled, as he seemed to accept that there is more than one consciousness in the universe, but now I start drifting off...
I got this from Quantum Explanations:
That is, Eliezer brought QM up at all as part of a philosophical discussion, because he felt he had to in order to make his philosophical points. You may then argue (as you seem to in your comment) that he did not in fact have to bring in QM to make his points, but he felt he had to, per that quote.
And then there's Timeless Identity, which expressly claims to be the philosophical payoff from the QM sequence. Given that post and the introduction I quoted from Quantum Explanations, I really don't see how you can deny that his philosophy builds directly on his interpretation of QM.
It appears you are right. Eliezer derives his conclusions regarding zombies, personal identity, and the philosophy of transporters and duplicators from his understanding of QM.
On the other hand, I reach exactly the same conclusions on these issues without really understanding QM. Of course, I have the advantage over Eliezer that I have read far less Philosophy. :)
Hah, same here.
People shouldn't build too much of their philosophy on top of the MWI, IMO. If evidence that relatively "distant" worlds are being deleted is found then they would have to revisit it all. That doesn't seem terribly likely - but we can hardly rule it out. Occam's razor just doesn't rule against it that strongly.
Love the Philosophy jibe! :)
Well, ISTM that this sort of reductionism/functionalism is still right in a classical universe, just going by the whole notion of beliefs should pay rent; but it's not forced like it is in the actual universe.
A technical subject. The gist seemed to be: Rah, MWI.
I've thought the MWI was correct since way back in the 1980s - after reading this - and so didn't feel an urgent need to be lectured on its virtues.
"No one had ever thought of decoherence. The question of why a human researcher only saw one thing at a time, was a Great Mystery with no obvious answer."
This is not true, and saying things like this will reduce your credibility in the eyes of intelligent observers. In "The Present State of Quantum Mechanics" Schroedinger writes
(This is in translation, but I don't think you can deny in good faith that he understands decoherence and almost certainly grasps the predicted existence of many worlds).
You should consider changing the way you talk about the history of quantum mechanics (and probably learning more about the history) before writing at more length about it.
Well, that's an interesting quote, but did he come out and say that QM was all there was, no exceptions ever, and collapse is not real? If he did, it was in private and did not spread, for when Everett (re-?)proposed it later, it was exceedingly controversial and derided.
And certainly decoherence is a considerably more complicated beast than that; the mere notion that QM is all there is, is NOT sufficient to understand decoherence, not by a long shot.
Yes. He said it in the passage I quoted. ("it would not be quite right to say that the psi-function of the object...should now change leap-fashion because of a mental act." You could quibble with the word 'quite,' but I think the surrounding text is plenty clear.) His understanding comes through in his writing more generally. The fact that one person has understood something (or many) does not preclude it from being controversial some time later.
I don't know quite what you mean. In what way is decoherence "more complicated," and than what? It looks to me like Schrodinger understands exactly what is going on.
I think he is expressing dissatisfaction with QM rather than endorsing MWI. I found a different quote, from 1950, that seems to support the former.
That isn't that clear a statement of his views, but it is from a letter written in reply to Einstein, who said
(Both quotes are taken from Karl Przibram's Letters on wave mechanics: Schrodinger, Planck, Einstein, Lorentz p. 35-38.)
This is clearly against quantum mechanics rather than in support of MWI. They both realize that QM's ontology needs to be revised, but neither knows how.