Background: Our mental models of the universe can contain uncertainty or probability-links, as in a causal network. One may have a deterministic understanding of a phenomenon if the probability-values are all 0 or 1.

Question: Beyond that, is it meaningful to distinguish whether or not the *universe itself* is deterministic or nondeterministic?

For example, is it meaningful to say that the Copenhagen interpretation of QM implies a "nondeterministic universe", while Many Worlds implies a "deterministic universe"?


In MWI, the future state of the universe is uniquely determined by the past state of the universe and the laws of physics. In Copenhagen, the future state of the universe isn't uniquely determined by those things, but it is uniquely determined by those things plus a lot of additional bits that represent how each measurement goes. You could either call those bits part of the state of the universe (in which case Copenhagen is deterministic) or you could call them something else (in which case Copenhagen is nondeterministic), so it seems like a matter of convention. The usual convention is to call the bits something other than part of the state of the universe, making Copenhagen nondeterministic, but I don't think there's a fully principled way, across theories, to decide what to call part of the state of the universe.
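To make the convention point concrete, here is a minimal sketch (a toy dynamics with hypothetical names like `step` and `evolve`, not any real physics): the same update rule looks nondeterministic if the "measurement bits" are supplied at run time, and perfectly deterministic if they are counted as part of the state.

```python
import random

def step(state, bit):
    # Toy "law of physics": the next state depends on the current state
    # plus one external bit representing how a measurement goes.
    return (2 * state + bit) % 97

def evolve(initial_state, bits):
    """Evolve the toy universe given the full list of measurement bits."""
    state = initial_state
    for b in bits:
        state = step(state, b)
    return state

# If the "extra bits" are fixed in advance (i.e. counted as part of the
# state), evolution is deterministic: same inputs, same future.
bits = [random.randint(0, 1) for _ in range(10)]
assert evolve(5, bits) == evolve(5, bits)

# If instead the bits are drawn fresh on each run, reruns from the same
# initial state can diverge - the "nondeterministic" reading of the
# very same dynamics.
```

The two readings differ only in where we draw the boundary of "the state", which is the convention the answer describes.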

Thanks I think that clarifies everything I'm wondering about. If we had a feature like Stack Overflow's "accepted answer" this would be it for me :)

In Copenhagen, the future state of the universe isn’t uniquely determined by those things, but is uniquely determined by those things plus a lot of additional bits that represent how each measurement goes.

That's rather misleading. If the extra bits pop into existence at time T, then the outcome at time T+1 isn't determined by the conditions at time T-1, as standardly envisaged by determinism. You're kind of redefining determinism.

What does it mean for a bit to pop into existence? As I see it, if I measure a particle's spin at time t, then it's either timelessly the case that the result is "up" or timelessly the case that the result is "down". Maybe this is an issue of the A-theory versus the B-theory of time?
I thought you were referring to collapse in that way.

Interesting question! My answer is basically a long warning about essentialism: this question might seem like it's stepping down from the realm of human models to the realm of actual things, to ask about the essence of those things. But I think better answers are going to come from stepping up from the realm of models to the realm of meta-models, to ask about the properties of models.

At the most basic level of description, things - the quantum fields or branes or whatever - just do what they do. They don't do it nondeterministically, but they also don't do it deterministically! Without recourse to human models, all us humans can say is that the things just do what they do - models are the things that make talk about categories possible in the first place.

Any answer of the sort "no, things can always be rendered into a deterministic form by treating 'random' results as fixed constants" or "yes, there are perfectly valid classes of models that include nondeterminism" is going to be an answer about models, within some meta-level framework. And that's fine!

This can seem unsatisfying because it goes against our essentialist instinct - that the properties in our models should reflect the real properties that things have. If water is considered wet, it's because water has the basic property or essence of wetness (so the instinct goes).

Note that this doesn't explain any of the mechanics or physics of wetness. If you could look inside someone's head as they were performing this essentialist maneuver, they would start with a model ("water is wet"), then they would notice their model ("I model water as wet"), then they would justify themselves to themselves, in a sort of reassuring pat on the back ("I model water as wet because water is really, specially wet").

I think that this line of self-reassuring reasoning is flawed, and a much better explanation of wetness would be in terms of surface tension and intermolecular forces and so on - illuminating the functional and causal story behind our model of the world, rather than believing you've explained wetness in terms of "real wetness". Also see the story about Bleggs and Rubes.

Long story short, any good explanation for why we should or shouldn't have nondeterminism in a model is either going to be about how to choose good models, or it's going to be a causal and functional story that doesn't preserve nondeterminism (or determinism) as an essence.

I think there's an interesting question about physics in whether or not (and how) we should include nondeterminism as an option in fundamental theories. But first I just wanted to warn that the question "models aside, are things really nondeterministic" is not going to have an interesting answer.

At the most basic level of description, things—the quantum fields or branes or whatever—just do what they do. They don’t do it nondeterministically, but they also don’t do it deterministically

How do you know? If that claim isn't based on a model, what is it based on?

much better explanation of wetness would be in terms of surface tension and intermolecular forces and so on

Why? Because they are real properties? Are you saying that only some properties are real, or no properties are real?

Long story short, any good explanation for why we should or shouldn't have nondeterminism in a model is either going to be about how to choose good models, or it's going to be a causal and functional story that doesn't preserve nondeterminism (or determinism) as an essence.
Charlie Steiner (3y):
I'm happy to reply that the message of my comment as a whole applies to this part - my claim about "what things do at a basic level of description" is a meta-model claim about what you can say about things at different levels of description. It's human nature to interpret this as a claim that things have some essence of "just doing what they do" that is a competitor to the essences of determinism and nondeterminism, but there is no such essence for the same reasons I'm already talking about in the comment. Maybe I could have worded it more carefully to prevent this reading, but I figure that would sacrifice more clarity than it gained.

The point is not about some "basic nature of things," the point is about some "basic level of description." We might imagine someone saying "I know there are some deterministic models of atoms and some nondeterministic models, but are the atoms really deterministic or not?" Where this "really" seems to mean some atheoretic direct understanding of the nature of atoms. My point, in short, is that atheoretic understanding is fruitless ("It's just one damn thing after another") and the instinct that says it's desirable is misleading.

Because they're part of a detailed model of the world that helps tell a "functional and causal story" about the phenomenon. If I was going to badmouth one set of essences just to prop up another, I would have said so :P My point is that this residue is never going to be the "Real Properties," they're just going to be the same theory-laden properties as always. What makes a theory of everything a theory of everything is not that it provides a final answer for which properties are the real properties that atoms have in some atheoretic direct way. It's that it provides a useful framework in which we can understand all (literally all) sorts of stuff.
I found parts of your comment as a whole to be unclear or underargued, which is why I asked the questions. I don't see how you can know what the most basic level of description looks like. (I previously phrased that as a question, which did me no good.) A claim to the effect that no theoretical term applies atheoretically might prove too much - it might be more of a general point than the OP was getting at. Well, there is a kind of fake indeterminism based on an observer's lack of information. Someone could be asking a question about that rather than about what is true atheoretically. You seem to be happy enough with "functional and causal". Are you arguing that or stating it? Maybe. Physicalism asserts the opposite, so an argument would be helpful.

I completely agree with the answer above. I'll also add that, on an object level, all of the models agree about the outcomes of every experiment we've ever been able to do. It really doesn't matter whether you think of an isotope as having a 50% chance of decaying within 12 years, or whether you think of yourself as branching, over the next 12 years, into universes where the nucleus has decayed, with 50% of the amplitude-squared. As the saying goes (often attributed to Feynman, though coined by David Mermin), "shut up and calculate" - the models work, but asking what they mean is a one-way ticket to epistemology.

You haven't said what you mean by meaningful. It seems that we can test aspects (pun intended) of the issue: the Bell inequalities, Penrose's proposed test for collapse, and so on.

4 comments

Consider reading The Ghost in the Quantum Turing Machine; it deals with this question in a clear, accessible, and unbiased way, and includes a proposal for non-determinism.

Looks like there are a lot of topics in there besides the question of whether physical nondeterminism is a meaningful concept. Can you summarize or point to the relevant section?

You can certainly get anthropic uncertainty in a universe that allows you to be duplicated. In a universe that duplicates, and the duplicates can never interact, we would see the appearance of randomness. Mathematically, randomness is defined in terms of the set of all possibilities.

An ontology that allows universes to be intrinsically random seems well defined. However, it can be considered as a syntactic shortcut for describing universes that are anthropically random.
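The "anthropically random" idea can be sketched concretely (a toy enumeration with a hypothetical `branch_all` helper, not a physical model): duplication is fully deterministic - every copy exists - yet any single copy, identified only by its own bit-history, sees a sequence of bits it could not have predicted.

```python
from itertools import product

def branch_all(n_steps):
    """Deterministically enumerate every observer-copy after n duplications.

    Nothing random happens here: all branches exist, and the enumeration
    rule is fixed. But an individual copy, identified by its bit-history,
    has no way to predict its own next bit.
    """
    return [list(bits) for bits in product([0, 1], repeat=n_steps)]

copies = branch_all(3)
# All 8 copies exist. The ensemble statistics of any single step match
# a fair coin, even though the branching rule is fully deterministic.
fraction_ones = sum(c[0] for c in copies) / len(copies)  # 0.5
```

This is the sense in which "intrinsically random" can be read as shorthand for "deterministic branching plus self-locating uncertainty".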

In the time-travel game Achron, my sense of what determinism is was really stress-tested, and I think it teased out distinctions that are not relevant in more casual settings.

The game has a world where, as long as time travel is not involved, things appear very reliable and deterministic. When time travel influences things, there are previously unfamiliar concepts to take into account. There are mechanics about that, and they form a system. The overall rules end up being deterministic, in that the enlarged ontology works like clockwork. However, from the perspective of an entity that is not privy to the more esoteric parts of the ontology, things are not pure chaos, but their sense of determinism will be of a different kind than what ontologically holds.

In the system you can have a subsystem caught in a grandfather paradox, which approximately means that in half the timelines the system will be in one state and in half the timelines it will be in another state. It's not obvious to non-time-travellers how paradoxes work (but there actually are rules about it). In all the timelines they can be in the exact same epistemological state before they come in contact with such paradoxed systems. After they interact, they are aware of what the system's state is in this timeline. Of course, they do not think in terms of multiple timelines, but it happens that in different timelines they are now in different epistemological states (i.e., the paradox has "spread" to them). The state of the system pretty much must appear stochastic to them before they have made such an interaction.

A being in such a position might be well served to take note of when "weird" things happen, and they might be able to narrow down what the relevant choice outcomes might be. For example, if it is now T and someone is ordered to enter a chronoporter at T+20 to go back 10 seconds and shoot themselves at T+10, you know that at T+15 the ordered person will either be alive or dead, i.e., you know that those are the relevant alternatives. However, because you don't know the esoterics, you don't have the capability to determine which one it will be. The situation has strong parallels to Schrödinger's cat. In this game, though, we know that God does not in fact throw dice, although we might make use of two kinds of time to specify the esoterics. But even if we know that on the game-mechanics level no dice are thrown, it seems useful to refer to the fact that a person without good access to the second kind of time will find it extremely hard, or outright impossible, to figure things out. It's not because they observe the system sloppily or are undiligent. So in a sense the stochasticity is not illusory for them; it really is effectively stochastic for them.

So it becomes meaningful and useful to say something to the effect of "The best linear-time understanding of the game-verse will necessarily be stochastic".
