[ Question ]

Is this viable physics?

by Daniel Kokotajlo · 1 min read · 14th Apr 2020 · 26 comments


Epistemic Review · World Modeling

This beautiful epic grandiose... thing seems pretty impressive to me. It seems like someone took the lessons of Game of Life, and then used computers to explore those lessons, and sure enough they are well on their way to a Theory of Everything.

But I'm not a physicist so I don't know how novel (or even error-free) this is. Which is why I'm asking.

On the one hand, it seems several orders of magnitude too good to be true. It explains so many phenomena so elegantly that it makes me wonder if what's really going on is that Wolfram is staring at his computer too much and seeing patterns in the noise.

On the other hand, for years I've thought -- and I expect most of us have thought this too -- that when the final theory of physics is found it would be something like this. Some very simple rule that, when applied repeatedly zillions of times, generates the world around us. A fundamental ontology that consists of objects and relations, rather than different kinds of particles or waves or whatnot. Discrete, not continuous.


11 Answers

First of all, I'm very unsurprised that you can get special and general relativity out of something like this. Relativity fundamentally just isn't that complicated, and you can see what are basically relativistic phenomena pop out of all sorts of natural setups where you have some sort of space with an emergent distance metric.

The real question is how this approach handles quantum mechanics. The fact that causal graph updates produce branching structure that's consistent with quantum mechanics is nice—and certainly suggestive that graphs could form a nice underlying substrate for quantum field theory (which isn't really new; I would have told you that before reading this)—but it's not a solution in and of itself. And again what the article calls “branchial space” does look vaguely like what you want out of Hilbert space on top of an underlying graph substrate. And it's certainly nice that it connects entanglement to distance, but again that was already theorized to be true in ER = EPR. Beyond that, though, it doesn't seem to really have all that much additional content—the best steelman I can give is that it's saying “hey, graphs could be a really good underlying substrate for QFT,” which I agree with, but isn't really all that new, and leaves the bulk of the work still undone.

That being said—credit where credit is due—I think this is in fact working on what is imo the “right problem” to be working on if you want to find an actual theory of everything. And it's certainly nice to have more of the math worked out for quantum mechanics on top of graphs. But beyond that I don't think this really amounts to much yet other than being pointed in the right direction (which does make it promising in terms of potentially producing real results eventually, even if doesn't have them right now).

TL;DR: This looks fairly pointed in the right direction to me but not really all that novel.

EDIT 1: If you're interested in some of the existing work on quantum mechanics on top of graphs, Sean Carroll wrote up a pretty accessible explanation of how that could work in this 2016 post (which also does a good job of summarizing what is basically my view on the subject).

EDIT 2: It looks like Scott Aaronson has a proof that a previous version of Wolfram's graph stuff is incompatible with quantum mechanics—if you really want to figure out how legit this stuff is I'd probably recommend taking a look at that and determining whether it still applies to this version.

I agree with both evhub's answer and Charlie Steiner/TheMajor's answers: these models don't really do anything that previous models couldn't do, and they don't really offer near-term experimentally-testable predictions. However, I think these both miss the main value of the contribution. Wolfram sums it up well in this sentence:

I have to say that I don’t think our recent discoveries shed any particular light on [simplicity of the fundamental laws]—because they basically say that lots of things in physics are generic, and independent of the specifics of the underlying rule, however simple or complex it may be.

That last sentence is the real contribution of this work: "lots of things in physics are generic, and independent of the specifics of the underlying rule, however simple or complex it may be". I think Wolfram & co are demonstrating that certain physical laws are generic to a much greater extent than was previously realized.

Drawing an analogy to existing theoretical physics, this isn't like general relativity or quantum mechanics (which made new testable predictions) or like unification (which integrates different physical phenomena into one model). Instead, a good analogy is Noether's Theorem. Noether's Theorem says that conserved quantities in physics come from the symmetry of the underlying laws - i.e. momentum is conserved because physical laws are the same throughout space, energy is conserved because the laws are the same over time, etc. It shows that momentum/energy conservation aren't just physical phenomena of our universe, they're mathematical phenomena which apply to large classes of dynamical systems.
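To make the analogy concrete, here is the standard one-line instance of Noether's theorem (textbook material, not from the answer above): time-translation symmetry gives energy conservation. For a Lagrangian $L(q, \dot q)$ with no explicit time dependence,

```latex
\frac{dL}{dt}
  = \frac{\partial L}{\partial q}\,\dot q
  + \frac{\partial L}{\partial \dot q}\,\ddot q
  \overset{\text{Euler--Lagrange}}{=}
  \frac{d}{dt}\!\left(\frac{\partial L}{\partial \dot q}\,\dot q\right)
\;\Longrightarrow\;
\frac{d}{dt}\underbrace{\left(\frac{\partial L}{\partial \dot q}\,\dot q - L\right)}_{\text{energy }E} = 0 .
```

Nothing in this derivation refers to the specific form of $L$ — which is exactly the sense of "generic" being claimed for the graph results.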

Wolfram & co are doing something similar. They're showing that e.g. the Einstein field equations aren't just a physical phenomenon of our universe, they're a mathematical phenomenon which applies to a large class of systems.

I think Wolfram's "theory" is complete gibberish. Reading through "some relativistic and gravitational properties of the Wolfram model" I haven't encountered a single claim that was simultaneously novel, correct and non-trivial.

Using a set of rules for hypergraph evolution they construct a directed graph. Then they decide to embed it into a lattice that they equip with the Minkowski metric. This embedding is completely ad hoc. It establishes as much connection between their formalism and relativity as writing the two formalisms next to each other on the same page would.

Their "proof" of Lorentz covariance consists of observing that they can apply a Lorentz transformation (but there is nothing non-trivial it preserves). At some point they mention the concept of "discrete Lorentzian metric" without giving the definition. As far as I know it is a completely non-standard notion and I have no idea what it means. Later they talk about discrete analogues of concepts in Riemannian geometry and completely ignore the Lorentzian signature.

Then they claim to derive Einstein's equation by assuming that the "dimensionality" of their causal graph converges, which is supposed to imply that something they call "global dimension anomaly" goes to zero. They claim that this global dimension anomaly corresponds to the Einstein-Hilbert action in the continuum limit. Only, instead of concluding the action converges to zero, they inexplicably conclude the variation of the action converges to zero, which is equivalent to the Einstein equation.

Alas, no theory of everything there.

This reminds me STRIKINGLY of Sean Carroll's musings on the way to approach quantum gravity using the concept of emergent spacetime. He posits that space could emerge from the graph of all entanglements between variables, with 'more entangled' becoming 'closer together' rather than the other way around. He has some very preliminary math showing results similar to those here, specifically that under certain assumptions you get the equations of general relativity out of it.

EDIT: On a sort of stylistic note... I am reminded of the way that, in every epoch, whatever is newly understood but hardest to understand gets described in terms of the most successful and powerful technology or concept of the day. Classically, along one historical stream, minds and nervous systems were talked about in hydraulic terms, then in terms of wiring diagrams, then in terms of computation. At this moment in history computation is a very powerful set of organizing metaphors and tools, and could stand to kick open new areas. That being said, I would bet that one could find other, equivalent formalisms after kicking down the door...

This is actual physics work, but it's also not going to lead to any sort of prediction of our own universe anytime in the next 20 years at least. Take string theory's problems with being compatible with anything (and therefore predicting / retrodicting nothing) and magnify them by 100.

Also, it seems like they're incredibly literal in interpreting space as graph distance, time as ticks of the rules, and amplitude as number of possible realizations via the rules. These present big incompatibilities with relativity and QM, but before I say they've for sure overlooked something it would probably behoove me to read like 300 pages of what they've written. Except I'm not going to, because that sounds boring, and see above about 0 predictive power.

I am a bit of a physicist, and I really really hope this can be a good step forward. It certainly has the feel of being new enough to have a snowball's chance in hell, at least. Some of his graph-based ideas match what I've also been pondering, though on nowhere near as grand a level: how to relate cellular structures to Lorentz invariance.


This approach also purports to describe quantum mechanics, including the measurement process, and general relativity in the same language, sort of. Which would be neat. My quick browse through the "technical introduction" didn't let me form a coherent opinion about its viability or quality. But at least it's not Gisin's "let's just use intuitionist math instead of the standard math" approach. On the other hand, Scott Aaronson seems to be skeptical.

Ultimately, the real test will be the predictions that this approach makes that are outside of what QM and GR predict already. And how well they can be tested.

From the perspective of mathematical logic, string replacement systems can be as powerful as a fully functional computer. The proposed graph evolution systems have the same power. The author explains many good features of the system well, and I was persuaded to try thinking about some science topics from the viewpoint of "graph evolution".
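As a tiny illustration of string replacement doing computation (my own example, not from the post), here is a leftmost-first rewriter whose two rules compute unary addition. This particular rule set is nowhere near universal, but it shows how pure substitution can carry out a computation:

```python
# Minimal string replacement system: repeatedly apply the first rule
# that matches, at its leftmost occurrence, until no rule applies.

def rewrite(s, rules):
    while True:
        for lhs, rhs in rules:
            i = s.find(lhs)
            if i != -1:
                s = s[:i] + rhs + s[i + len(lhs):]
                break
        else:
            return s             # no rule matched: normal form reached

# "1+" -> "+1" bubbles the plus sign leftward; a lone "+" then vanishes.
rules = [("1+", "+1"), ("+", "")]
print(rewrite("111+11", rules))   # 11111  (3 + 2 = 5 in unary)
print(rewrite("1+1+1", rules))    # 111    (1 + 1 + 1 = 3 in unary)
```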

If in the future the author or others can obtain new physics findings by using this system, then evidently the new "fundamental ontology" has some advantages.

However, at this moment, I have not found any provable advantages of this system compared with other string replacement systems. I would view this work as a thoughtful and well explained scientific investigation -- but its value to people is not yet proved.

Ref: The power of string replacement is well explained in the last article of the book "The Essential Turing".

Because representations are so sticky it's easy when discovering a new (to you) one to rederive everything you already know in terms of the new representation and get very excited that you are doing something new.

This seems equivalent to Tegmark Level IV Multiverse to me. Very simple, and probably our universe is somewhere in there, but doesn't have enough explanatory power to be considered a Theory of Everything in the physical sense.

I've tried to read through the linked page, and swapped to "academic reading" (checking the pictures, and sometimes the first and last lines of paragraphs) halfway through. I think this is not viable.

There is a host of "theories of the universe" with a similar structure on a meta-level, consisting of some kind of emergent complexity. It is important to keep in mind the strength of a theory lies in what it forbids, not in what it permits. To date most theories of the universe fail this test hard, by being so vague and nonspecific that any scientific concept can be pattern-matched to some aspect of it. Judging by what I've read so far this is no exception (and in fact, I suspect that the reason Wolfram references so many big scientific theories is because large concepts are easier to pattern-match, whereas specific predictions are not as open to interpretation). Why will his patterns produce Einstein's equations (note that they currently do no such thing, he states we first need to "find the right universe"), and not Newton's, or Einstein's with double the speed of light?

As always with these nonspecific "theories" it is very difficult to nail down one specific weakness. But currently all I'm seeing are red flags. I predict serious media attention and possibly some relevant discoveries in physics (some of the paragraphs sounded better than all other crackpot theories I've seen), but the majority of it seems wrong/worthless.

I think – very tentatively – that it could be viable.

I highly recommend Wolfram's previous book, available for free here on one of his sites:

I recommend it both on its own as well as crucial context for his recent post.

Wolfram's statement about needing to "find the specific rule for our universe" describes a problem that any theory of everything is likely to have. String theory notably has this same problem.