A theory of everything has been the Holy Grail of some of the greatest physicists of the last century, from Albert Einstein to Stephen Hawking. A theory of everything is a single, all-encompassing theoretical framework that would coherently explain and link together all the physical aspects of the universe, from the atomic scale to the scale of galaxies, from the choppiness of the quantum world to the smoothness of relativity. The motive for seeking one is simple: the Standard Model and general relativity tell us very different things about the ontology of reality. Albert Einstein discussed the uneasiness that physicists feel at there being “two distinct fields totally independent of each other by their nature” when he gave his Nobel Lecture in 1923. In the past, String theory and M-theory have been put forward as theories of everything.

For the last century, the general theory of relativity and quantum mechanics have been the basis of all modern physics. Crudely, general relativity focuses on gravity in order to understand the universe at large scales and high masses. It is the theory of galaxies, stars, planets, clusters of galaxies and other such large phenomena. Quantum mechanics, on the other hand, focuses on the three non-gravitational forces in order to understand the universe at very small scales and low masses. Quantum mechanics underpins the Standard Model, which describes the three non-gravitational forces: the weak nuclear, strong nuclear, and electromagnetic forces, along with the observed elementary particles. Uniting these two frameworks is one of the great unresolved problems of physics; Einstein worked on it even on his deathbed.

In his lecture Gödel and the End of Physics, Hawking argued that Gödel’s incompleteness theorems, which state that “any finite system of axioms is not sufficient to prove every result in mathematics”, imply that a theory of everything is impossible. According to an article in Interesting Engineering, this has not dissuaded physicists from trying, with Stephon Alexander et al. proposing a framework for thinking about a theory of everything.

The Autodidactic Universe

In The Autodidactic Universe, Alexander et al. invert the problem: instead of asking what the laws governing the physical world are, they ask why we have the laws we do. Although they acknowledge that a theory of everything is still far off, they are less skeptical than Hawking that the job can be done. They argue that the universe “learns its own laws” and is, in effect, autodidactic. The universe does this by exploring the landscape of all possible laws, which the authors express as a certain class of matrix models. Each of these matrix models is placed in correspondence with both a mathematical model of a learning machine, such as a deep recurrent, cyclic neural network, and a gauge/gravity theory. There is, therefore, a correspondence between each solution of a physical theory and a run of a neural network. This autodidactic system, of course, has no supervision. The authors propose that if a neural network model can learn without supervision, then the same can be said of a corresponding physical theory.
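To make that correspondence a little more concrete, here is a toy sketch in Python. It is an illustration only, not the authors’ actual construction: the matrix size, the tanh update rule, and names such as step and W are invented for this example. It shows how one and the same update rule can be read either as the discrete-time evolution of a simple matrix model or as the hidden-state update of a recurrent neural network, so that a “run” of the network is a trajectory, or solution, of the corresponding dynamical system.

    import numpy as np

    rng = np.random.default_rng(0)

    N = 8                                                # size of the matrix/state
    W = rng.normal(scale=1.0 / np.sqrt(N), size=(N, N))  # couplings / recurrent weights
    x = rng.normal(size=N)                               # initial state / hidden state

    def step(state, weights):
        # One update: linear mixing by the couplings, then a nonlinearity.
        # Read as a matrix model, `weights` encodes the law; read as a
        # recurrent network, it is the weight matrix acting on the hidden state.
        return np.tanh(weights @ state)

    trajectory = [x]
    for _ in range(50):
        trajectory.append(step(trajectory[-1], W))

    # One "run" of the network is one trajectory of the dynamical system;
    # in the paper's language, each such run corresponds to a solution of
    # the associated theory.
    print(np.round(trajectory[-1], 3))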

Commonalities Between General Relativity and the Standard Model

According to Alexander et al., the Standard Model and general relativity have much in common. The most profound connection between the two is that both are built on gauge theories and symmetry principles, and this shared mathematics describes how objects move and interact. String theory drew on these commonalities in pursuit of a theory of everything, treating elementary particles as strings, or one-dimensional objects. Unfortunately, this gave rise to a frothiness that proved intractable, with far too many, and too complex, candidate laws, what Alexander et al. refer to as a “multiverse of theories”: a landscape that included the Standard Model as well as general relativity, alongside many other possible laws, some of which differ by effects too small to be tested. Although physicists such as Brian Greene still believe in String theory, Alexander believes that String theory leads to a dead end.
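As a standard textbook illustration of what a gauge symmetry is (an example chosen here, not taken from the article): in electromagnetism the potential can be shifted by the gradient of an arbitrary function without changing anything measurable,

    A_\mu \;\rightarrow\; A_\mu + \partial_\mu \lambda(x),
    \qquad
    F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu \;\rightarrow\; F_{\mu\nu},

so the field strength, and with it every observable, is left unchanged. The Standard Model generalizes this idea to larger symmetry groups, and general relativity rests on an analogous invariance, this time under changes of coordinates.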


To Alexander et al., the physical world is the product of many iterations in which the universe has striven to find a stable configuration of laws, something that arrived only with the present one, allowing the universe to evolve consistently.

A Meta-Law Makes Autodidacticism Possible

Richard Dawkins liked to speak of a “blind watchmaker”, so perhaps it is apt to talk of a “blind learner”, guided by what Alexander et al. refer to as a “meta-law”. This meta-law enables the universe to learn and to keep trying different configurations of laws. It is also, to a certain degree at least, the universe itself.
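A crude way to picture such a blind learner, purely as a toy sketch and not anything drawn from the paper (the stability test and the random search over couplings below are invented for the illustration), is a loop that keeps drawing candidate “laws” at random and settles on the first one whose dynamics stop changing:

    import numpy as np

    rng = np.random.default_rng(1)
    N = 8

    def evolve(weights, steps=200):
        # Run the toy dynamics from a fixed starting state under a candidate law.
        state = np.ones(N)
        for _ in range(steps):
            state = np.tanh(weights @ state)
        return state

    def is_stable(weights, tol=1e-6):
        # Call a candidate law "stable" if its long-run state has stopped moving.
        final = evolve(weights)
        return np.allclose(final, np.tanh(weights @ final), atol=tol)

    # Blindly try configurations of laws until a stable one is found.
    attempts = 0
    while True:
        attempts += 1
        W = rng.normal(scale=0.5 / np.sqrt(N), size=(N, N))
        if is_stable(W):
            break

    print(f"settled on a stable configuration after {attempts} attempt(s)")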


Alexander et al. liken their attempt to Charles Darwin’s theory of evolution. Darwin resolved the problem of why certain species emerged and others did not by proposing a theory that explained why any species would come about. The notion of survival of the fittest and the importance of passing on valuable traits was a massive achievement, yet biologists have spent the last 160 years filling in the details.


Alexander et al.’s theory is not physics’ version of the theory of evolution, but it is, as they say, a baby step toward a theory of everything.