Is this viable physics?



12 Answers

First of all, I'm very unsurprised that you can get special and general relativity out of something like this. Relativity fundamentally just isn't that complicated, and you can see basically relativistic phenomena pop out of all sorts of natural setups where you have some sort of space with an emergent distance metric.

The real question is how this approach handles quantum mechanics. The fact that causal graph updates produce branching structure that's consistent with quantum mechanics is nice—and certainly suggestive that graphs could form a nice underlying substrate for quantum field theory (which isn't really new; I would have told you that before reading this)—but it's not a solution in and of itself. And again what the article calls “branchial space” does look vaguely like what you want out of Hilbert space on top of an underlying graph substrate. And it's certainly nice that it connects entanglement to distance, but again that was already theorized to be true in ER = EPR. Beyond that, though, it doesn't seem to really have all that much additional content—the best steelman I can give is that it's saying “hey, graphs could be a really good underlying substrate for QFT,” which I agree with, but isn't really all that new, and leaves the bulk of the work still undone.

That being said—credit where credit is due—I think this is in fact working on what is imo the "right problem" to be working on if you want to find an actual theory of everything. And it's certainly nice to have more of the math worked out for quantum mechanics on top of graphs. But beyond that I don't think this really amounts to much yet other than being pointed in the right direction (which does make it promising in terms of potentially producing real results eventually, even if it doesn't have them right now).

TL;DR: This looks fairly pointed in the right direction to me but not really all that novel.

EDIT 1: If you're interested in some of the existing work on quantum mechanics on top of graphs, Sean Carroll wrote up a pretty accessible explanation of how that could work in this 2016 post (which also does a good job of summarizing what is basically my view on the subject).

EDIT 2: It looks like Scott Aaronson has a proof that a previous version of Wolfram's graph stuff is incompatible with quantum mechanics—if you really want to figure out how legit this stuff is I'd probably recommend taking a look at that and determining whether it still applies to this version.

I agree with both evhub's answer and Charlie Steiner/TheMajor's answers: these models don't really do anything that previous models couldn't do, and they don't really offer near-term experimentally-testable predictions. However, I think these both miss the main value of the contribution. Wolfram sums it up well in this sentence:

I have to say that I don’t think our recent discoveries shed any particular light on [simplicity of the fundamental laws]—because they basically say that lots of things in physics are generic, and independent of the specifics of the underlying rule, however simple or complex it may be.

That last sentence is the real contribution of this work: "lots of things in physics are generic, and independent of the specifics of the underlying rule, however simple or complex it may be". I think Wolfram & co are demonstrating that certain physical laws are generic to a much greater extent than was previously realized.

Drawing an analogy to existing theoretical physics, this isn't like general relativity or quantum mechanics (which made new testable predictions) or like unification (which integrates different physical phenomena into one model). Instead, a good analogy is Noether's Theorem. Noether's Theorem says that conserved quantities in physics come from the symmetry of the underlying laws - i.e. momentum is conserved because physical laws are the same throughout space, energy is conserved because the laws are the same over time, etc. It shows that momentum/energy conservation aren't just physical phenomena of our universe, they're mathematical phenomena which apply to large classes of dynamical systems.

Wolfram & co are doing something similar. They're showing that e.g. the Einstein field equations aren't just a physical phenomenon of our universe, they're a mathematical phenomenon which applies to a large class of systems.
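For readers who want the one-line version of Noether's theorem (standard textbook material, not something from Wolfram's post), the simplest case looks like this:

```latex
% Simplest case of Noether's theorem: translation symmetry implies
% momentum conservation. If L(q, \dot{q}) is invariant under
% q \to q + \epsilon, then \partial L / \partial q = 0, and the
% Euler--Lagrange equation
\frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial L}{\partial \dot{q}}
  - \frac{\partial L}{\partial q} = 0
% reduces to conservation of the conjugate momentum:
\frac{\mathrm{d}p}{\mathrm{d}t} = 0,
\qquad p \equiv \frac{\partial L}{\partial \dot{q}} .
```

The point of the analogy is that this derivation never mentions which universe we live in; it applies to any Lagrangian with the symmetry.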

I think Wolfram's "theory" is complete gibberish. Reading through "some relativistic and gravitational properties of the Wolfram model" I haven't encountered a single claim that was simultaneously novel, correct and non-trivial.

Using a set of rules for hypergraph evolution they construct a directed graph. Then they decide to embed it into a lattice that they equip with the Minkowski metric. This embedding is completely ad hoc. It establishes as much connection between their formalism and relativity, as writing the two formalisms next to each other on the same page would.

Their "proof" of Lorentz covariance consists of observing that they can apply a Lorentz transformation (but there is nothing non-trivial it preserves). At some point they mention the concept of "discrete Lorentzian metric" without giving the definition. As far as I know it is a completely non-standard notion and I have no idea what it means. Later they talk about discrete analogues of concepts in Riemannian geometry and completely ignore the Lorentzian signature.

Then they claim to derive Einstein's equation by assuming that the "dimensionality" of their causal graph converges, which is supposed to imply that something they call "global dimension anomaly" goes to zero. They claim that this global dimension anomaly corresponds to the Einstein-Hilbert action in the continuum limit. Only, instead of concluding the action converges to zero, they inexplicably conclude the *variation* of the action converges to zero, which is equivalent to the Einstein equation.

Alas, no theory of everything there.

Thanks for writing this. I hesitated before commenting, because I am not an expert on physics, but something just felt *wrong*. It took some time to pinpoint the source of wrongness, but now it seems to me that the author is (I assume unknowingly) playing the following game:

1) Find something that is Turing-complete

The important thing is that it should be something simple, where the Turing-completeness comes as a *surprise*. A programming language would be bad. A Turing machine would have been great a few decades ago, but is bad now. A system for replacing structures in a directed graph... yeah, this type of thing. Until people get used to it; then you would need a fresh example.

2) Argue that you *could* build a universe using this thing

Yes, technically true. If something is Turing-complete, you can use it to implement anything that can be implemented on a computer. Therefore, assuming that a universe could be simulated on a hypothetical computer, it could also be simulated using that thing.

But the fact that many different things can be Turing-complete means that their technical details are irrelevant (beyond the fact that they cause the Turing-completeness) for the simulated object. Just because...
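To make the kind of system being discussed concrete, here is a toy string-replacement sketch (my own illustration, not Wolfram's actual rules). Systems this simple can already be Turing-complete, which is precisely why "it could simulate our universe" carries so little information:

```python
# Toy illustration (not Wolfram's model): repeatedly applying a
# substring-replacement rule. Rule systems of this general kind can be
# Turing-complete, so "it can compute anything computable" is unsurprising.

def step(state, rules):
    """Apply the first matching rule at the leftmost position, once."""
    for i in range(len(state)):
        for lhs, rhs in rules:
            if state.startswith(lhs, i):
                return state[:i] + rhs + state[i + len(lhs):]
    return state  # no rule applies: the system has halted

# A trivial rule set: "AB" -> "BA" swaps A's past B's, bubble-sort style.
rules = [("AB", "BA")]
state = "ABAB"
while True:
    nxt = step(state, rules)
    if nxt == state:  # fixed point reached
        break
    state = nxt

print(state)  # "BBAA" once no rule matches
```

Nothing here tells you anything about physics; it only shows that "a simple rewriting system can in principle do X" is a very weak claim.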


Love this description. All of the results I've skimmed look an awful lot like showing that a thing which can correspond to space and time (your "thing that is Turing-complete") allows you to rederive things about our space and time.
That being said, I still think exploring various translations and embeddings of mathematical, physical, and computational paradigms into each other is a very valuable exercise and may shed light on very important properties of abstract systems in the future. Also, cool compressed explanation of how some concepts in physics fit together, even if somewhat shallowly.


It might lead to new insights by showing that some properties are shared by all Turing-simulatable universes.


The property such universes have in common is that they are computable on a hypothetical classical computer with unlimited capacity. (Potentially very inefficiently, like maybe computing one Planck unit of time in a tiny part of the simulated universe would require greater computing capacity than our universe could provide during its entire existence. These are mathematical abstractions unrelated to the real world.)
That implies a few things, for example that in none of these universes could you solve the halting problem. (That is, there would be certain potentially infinite calculations whose output you would not be able to predict in limited time.)
But Wolfram's theory doesn't provide any benefit here, other than circular reasoning. "If the universe is Turing-complete, then it can be simulated using Wolfram's latest favorite system, which would imply that it is Turing-complete." Why not just skip that part?

Using a set of rules for hypergraph evolution they construct a directed graph. Then they decide to embed it into a lattice that they equip with the Minkowski metric. This embedding is completely ad hoc. It establishes as much connection between their formalism and relativity, as writing the two formalisms next to each other on the same page would.

Some discussion by Gorard here, which makes it sound like the Minkowski embedding was meant as an illustration, and not meant to do any heavy lifting. Given that, it's not surprising that it might seem a bit ad hoc.


The problem is, there is no heavy lifting. "We made a causal network, therefore, special relativity". Sorry, but no, you need to actually explain why the vacuum seems Lorentz invariant on the macroscopic level, something that's highly non-obvious given that you start from something discrete. A discrete object cannot be Lorentz invariant, the best you can hope for is something like a probability measure over discrete objects that is Lorentz invariant, but there is nothing like that in the paper. Moreover, if the embedding is just an illustration, then where do they even get the Riemannian metric that is supposed to satisfy the Einstein equation?

Thanks for this reply. It's interesting to see such wildly different answers to this question. I guess that's what you get when most people are giving hot takes built after a skim. You seem to have looked at this more closely than anyone else so far, so I'm updating heavily in your direction.

This reminds me STRIKINGLY of Sean Carroll's musings on the way to approach quantum gravity using the concept of emergent spacetime. He posits that space could emerge from the graph of all entanglements between variables, with 'more entangled' becoming 'closer together' rather than the other way around. He has some very preliminary math showing similar things to those here, specifically that under certain assumptions you get the equations of general relativity out of it.

See:

https://www.youtube.com/watch?v=jHLfMXvQqX8

https://www.preposterousuniverse.com/blog/2016/07/18/space-emerging-from-quantum-mechanics/

https://arxiv.org/abs/1606.08444

https://arxiv.org/abs/1712.02803

EDIT: On a sort of stylistic note... I am reminded of the way that in every epoch, whatever is newly understood but hardest to understand gets described in terms of the most successful and powerful technology or concept of the day. Classically, along one historical stream, minds and nervous systems were talked about in hydraulic terms, then in terms of wiring diagrams, then in terms of computation. At this moment in history computation is a very powerful set of organizing metaphors and tools, and could stand to kick open new areas. That being said, I would bet that one would be able to find other formalisms that are equivalent after kicking down the door...

That being said, I would bet that one would be able to find other formalisms that are equivalent after kicking down the door...

At least, we've now hit one limit in the shape of universal computation: No new formalism will be able to do something that couldn't be done with computers. (Unless we're gravely missing something about what's going on in the universe...)

This is actual physics work, but it's also not going to lead to any sort of prediction of our own universe anytime in the next 20 years at least. Take string theory's problems with being compatible with anything (and therefore predicting / retrodicting nothing) and magnify them by 100.

Also, it seems like they're incredibly literal in interpreting space as graph distance, time as ticks of the rules, and amplitude as number of possible realizations via the rules. These present big incompatibilities with relativity and QM, but before I say they've for sure overlooked something it would probably behoove me to read like 300 pages of what they've written. Except I'm not going to, because that sounds boring, and see above about 0 predictive power.
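To make "space as graph distance" concrete, here is a minimal sketch (a toy graph of my own, nothing from the papers): the "distance" between two points is just the length of the shortest path between the corresponding nodes.

```python
# Minimal sketch of "space as graph distance" on a toy graph:
# distance(u, v) = shortest-path length, found by breadth-first search.
from collections import deque

def graph_distance(adj, src, dst):
    """Return the shortest-path length from src to dst, or -1 if unreachable."""
    seen = {src}
    queue = deque([(src, 0)])
    while queue:
        node, d = queue.popleft()
        if node == dst:
            return d
        for nbr in adj.get(node, ()):
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, d + 1))
    return -1

# A small 6-cycle: distances wrap around, loosely like a closed space.
adj = {0: [1, 5], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4, 0]}
print(graph_distance(adj, 0, 3))  # 3 (halfway around the cycle)
```

The incompatibility worry above is that this notion of distance is discrete and frame-dependent, while relativity demands a Lorentz-invariant continuum limit.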

I am a bit of a physicist, and I really really hope this can be a good step forward. It certainly has the feel of being new enough to have a snowball's chance in hell, at least. Some of his graph-based ideas match what I've also been pondering, though on nowhere near as grand a level: how to relate cellular structures to Lorentz invariance.

This approach also purports to describe quantum mechanics, including the measurement process, and general relativity in the same language, sort of. Which would be neat. My quick browse through the "technical introduction" didn't let me form a coherent opinion about its viability or quality. But at least it's not Gisin's "let's just use intuitionist math instead of the standard math" approach. On the other hand, Scott Aaronson seems to be skeptical.

Ultimately, the real test will be the predictions that this approach makes that are outside of what QM and GR predict already. And how well they can be tested.

I think Wolfram is probably doing excellent maths, but is doing physics in a somewhat backwards way.

I think good physics starts with observations, things that we notice as patterns or similar. Then seek good explanations of what we see.

In the link they start with an extremely general mathematical framework, with infinitely many different possible update rules. We have only ever (as a civilisation) seen a finite number of data points, and that will always be the case. Therefore, among this infinite number of update rules there are infinitely many that (given the right interpretational handles) can "explain" all human experiments ever performed perfectly. Of that infinite set of rules, the vast majority are about to be disproved by the very next observation.

I think that the mathematical structure Wolfram lays out is powerful enough that, without specifics, it can support *anything*. Any kind of universe. That may include ours, but that doesn't tell us anything useful, because it also includes all the nonsense. By starting with the maths and then trying to "work up to" the physics I worry that this is like The Library of Babel of physics theories. Something equivalent to a true theory of everything is in the Wolfram framework somewhere, just like it is in the Library of Babel somewhere.

The fundamental flaw, as I see it, is from trying to start with the maths. Better to pick a single observation that current theories don't explain well, try and fix that one problem. Most of the time this just reveals an error in the experiment or perhaps misuse of existing theory. But every so often it shows a glaring problem with the models we have, that is how we make physics better.

[Library of Babel : https://en.wikipedia.org/wiki/The_Library_of_Babel]

From the perspective of mathematical logic, string replacement systems can be as powerful as a fully functional computer, and the proposed graph evolution systems have the same power. The author presents many well-explained features of the system, and I was persuaded to try thinking about some scientific topics from the viewpoint of "graph evolution".

If in the future the author or others obtain new physics findings using this system, that would be evidence that the new "fundamental ontology" has some advantages.

However, at this moment, I have not found any provable advantage of this system compared with other string replacement systems. I would like to view this work as a thoughtful and well-explained scientific investigation, but its value to people is not yet proved.

Ref: The power of string replacement is well explained in the final chapter of the book *The Essential Turing*.

Because representations are so sticky it's easy when discovering a new (to you) one to rederive everything you already know in terms of the new representation and get very excited that you are doing something new.

This seems equivalent to Tegmark Level IV Multiverse to me. Very simple, and probably our universe is somewhere in there, but doesn't have enough explanatory power to be considered a Theory of Everything in the physical sense.

I've tried to read through the linked page, and swapped to "academic reading" (checking the pictures, and sometimes the first and last lines of paragraphs) halfway through. I think this is not viable.

There is a host of "theories of the universe" with a similar structure on a meta-level, consisting of some kind of emergent complexity. It is important to keep in mind the strength of a theory lies in what it forbids, not in what it permits. To date most theories of the universe fail this test hard, by being so vague and nonspecific that any scientific concept can be pattern-matched to some aspect of it. Judging by what I've read so far this is no exception (and in fact, I suspect that the reason Wolfram references so many big scientific theories is because large concepts are easier to pattern-match, whereas specific predictions are not as open to interpretation). Why will his patterns produce Einstein's equations (note that they currently do no such thing, he states we first need to "find the right universe"), and not Newton's, or Einstein's with double the speed of light?

As always with these nonspecific "theories" it is very difficult to nail down one specific weakness. But currently all I'm seeing are red flags. I predict serious media attention and possibly some relevant discoveries in physics (some of the paragraphs sounded better than all other crackpot theories I've seen), but the majority of it seems wrong/worthless.

The technical reports do seem to contain at least one strong, surprising prediction:

This [multiway formulation of QM] leads to an immediate, and potentially testable, prediction of our interpretation of quantum mechanics: namely that, following appropriate coarse-graining...the class of problems that can be solved efficiently by quantum computers should be identical to the class of problems that can be solved efficiently by classical computers. More precisely, we predict in this appropriately coarse-grained case that P = BQP...

Of course Wolfram and Gorard are not the only people to say this, but it's definitely a minority view these days and would be very striking if it were somehow proved.

It appears that an implicit prediction is that at least a good fraction of dark matter would consist of almost arbitrarily low mass low interaction particles in obscene quantities, that froze out at absurd temperatures in the early universe before momentum-redshifting to near zero velocity such that they behave more like matter than radiation in gravity wells (unlike neutrinos which normally move far too fast to stay bound anywhere).

So if BQP turns out to be larger than P, will Wolfram announce his theory wrong or rather modify its predictions? This would be a great test of... himself.

I think – very tentatively – that it could be viable.

I highly recommend Wolfram's previous book, available for free here on one of his sites:

I recommend it both on its own as well as crucial context for his recent post.

Wolfram's statement about needing to "find the specific rule for our universe" describes a problem that any theory of everything is likely to have. String theory noticeably had this same problem.

Take some interesting ideas that allow larger structures to be built up, run an awful lot of computer simulations, and then have somebody who knows a huge amount about physics look for those outputs that match how parts of the universe have been previously modeled (with some success).

Where are the predictions? There are no predictions about basic stuff we know, like the electron orbitals of a Hydrogen atom, let alone predictions about stuff we don't know.

This work looks interesting, and Wolfram is a great storyteller. I hope something comes of it, but at the moment it is pure hand-waving: just-so stories found in the output of a few computer runs.

This beautiful epic grandiose... thing seems pretty impressive to me. It seems like someone took the lessons of Game of Life, and then used computers to explore those lessons, and sure enough they are well on their way to a Theory of Everything.

But I'm not a physicist so I don't know how novel (or even error-free) this is. Which is why I'm asking.

On the one hand, it seems several orders of magnitude too good to be true. It explains so many phenomena so elegantly that it makes me wonder if what's really going on is that Wolfram is staring at his computer too much and seeing patterns in the noise.

On the other hand, for years I've thought -- and I expect most of us have thought this too -- that when the final theory of physics is found it would be something like this. Some very simple rule that, when applied repeatedly zillions of times, generates the world around us. A fundamental ontology that consists of objects and relations, rather than different kinds of particles or waves or whatnot. Discrete, not continuous.