Theoretical physicists have no more than five good, unique insights in a lifetime. We then spend years chasing down the implications of those insights. Might computers at least help us with, if not supplant us in, this effort? Maybe, maybe not. I gave a talk, did a lot of good listening, and served as a sort of unofficial chair of the session, because APS didn’t assign one for session Z13. (Not so unusual: peruse the session index and a handful don’t have chairs, e.g. sessions B10 and Y08.) That was a new and interesting experience. I could see the overarching themes of the talks APS put in the session. One of the most interesting concepts was the good work being done at the University of Chicago, and many other places, to automate certain aspects of theoretical physics. As a theoretical physicist, I find this both concerning and encouraging.

The aspect of theoretical physics they are trying to automate is testing multiple hypotheses against available data and other well-established theoretical constraints. How a human theoretical physicist does this was well illustrated by the excellent talk given by Oliver Tattersall. He tests modified theories of gravity by working out their consequences mathematically the old-fashioned way, then comparing the results to available data. He said what he’d most like to see from LISA is gravitational wave spectroscopy, so we can tease out the quasinormal modes: a signature of changes to general relativity that may hide in gravitational wave signals.
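To make the quasinormal-mode idea concrete, here is a toy sketch (my own illustration, not Tattersall’s actual analysis): the ringdown after a merger can be modeled as a sum of damped sinusoids, and fitting the frequency and damping time of a mode is how a deviation from the general-relativity prediction would show up. The signal, noise level, and mode values below are all made up.

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy model: a black-hole ringdown is a sum of damped sinusoids
# (quasinormal modes). Here we fit a single mode to a synthetic
# signal to recover its frequency and damping time; a shift of
# either away from the general-relativity value would hint at
# modified gravity. All numbers here are invented for illustration.

def ringdown(t, amp, freq, tau, phase):
    return amp * np.exp(-t / tau) * np.cos(2 * np.pi * freq * t + phase)

rng = np.random.default_rng(0)
t = np.linspace(0, 0.05, 2000)  # 50 ms of synthetic "data"
signal = ringdown(t, amp=1.0, freq=250.0, tau=0.004, phase=0.3)
signal += 0.05 * rng.standard_normal(t.size)  # detector-like noise

# Least-squares fit starting from a deliberately wrong guess.
popt, _ = curve_fit(ringdown, t, signal, p0=[0.5, 200.0, 0.01, 0.0])
amp, freq, tau, phase = popt
print(f"recovered frequency {freq:.1f} Hz, damping time {tau * 1e3:.2f} ms")
```

The point of the exercise: once the mode is extracted, comparing (frequency, damping time) pairs against a theory’s prediction is a purely mechanical step, which is exactly the kind of step a computer can take over.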



Then we have the interesting work of Reed Essick and Philippe Landry, both from the University of Chicago. They use machine learning to build a model in which, quoting their abstract, “...this non-parametric scheme can be tuned to resemble existing nuclear theoretic models or specific nuclear phenomenology to a specifiable degree self-consistently.” Speaking with them a little after the session, they described this as designing a “theoretical physicist”. I was mesmerized by the concept, even though the “theoretical physicist” they design is quite limited in scope: simply a physicist who knows a lot of other people’s theories but never proposes a truly novel approach of their own. Knowing other people’s theories is something all good theoretical physicists do. It is how we know which theories to consider and whether a new idea is needed.
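As I understand the quoted idea, a sketch of it might look like this: draw candidate curves from a Gaussian process whose mean is a reference theory, with the variance acting as the “tuning knob” that sets how closely the draws resemble that theory. This is my own toy illustration of the concept, not Essick and Landry’s actual scheme; the reference curve and kernel below are made up.

```python
import numpy as np

# Toy version of a "non-parametric scheme tuned to resemble an
# existing model": curves are drawn from a Gaussian process whose
# mean is a reference theory; the kernel amplitude controls how far
# the draws may stray from it. (My own illustration of the quoted
# idea, not the authors' actual implementation.)

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 50)

def reference_model(x):
    return np.sin(2 * np.pi * x)  # stand-in for some established theory curve

def squared_exp_kernel(x, amplitude, length):
    d = x[:, None] - x[None, :]
    return amplitude**2 * np.exp(-0.5 * (d / length) ** 2)

def draw_candidate(amplitude):
    # Small jitter on the diagonal keeps the covariance numerically valid.
    cov = squared_exp_kernel(x, amplitude, 0.2) + 1e-10 * np.eye(x.size)
    return rng.multivariate_normal(reference_model(x), cov)

tight = draw_candidate(amplitude=0.05)  # hugs the reference theory
loose = draw_candidate(amplitude=0.5)   # agnostic, more exploratory
print(np.max(np.abs(tight - reference_model(x))),
      np.max(np.abs(loose - reference_model(x))))
```

Dialing the amplitude down recovers something close to the existing model; dialing it up lets the “designed physicist” entertain curves no published theory predicts, which is the tunability the abstract describes.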

This concept, as it evolves, could be a good idea: a way to rapidly test a new theory. We human theoretical physicists could come up with a new “fundamental principle”, then derive equations of motion or field equations from it. The computer would then access a database of all relevant data, quantitatively compare the theory to the data, and reach an objective conclusion on whether the theory is credible.
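A minimal sketch of that last “automated referee” step, assuming a theory’s predictions can be reduced to curves fit against measurements: score each candidate with an information criterion and keep the best. The two “theories” and the data here are invented for illustration; nothing below is anyone’s actual pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of automated theory comparison: fit each candidate theory's
# prediction to the same database of measurements, then rank them with
# the Bayesian information criterion (chi-squared plus a penalty for
# extra free parameters). Data and models are made up for illustration.

rng = np.random.default_rng(1)
x = np.linspace(1.0, 10.0, 40)
sigma = 0.05
y = 2.0 / x + rng.normal(0.0, sigma, x.size)  # synthetic "observations"

def theory_a(x, a):            # e.g. an inverse-law prediction
    return a / x

def theory_b(x, a, b):         # e.g. a competing exponential law
    return a * np.exp(-b * x)

def bic(model, n_params):
    popt, _ = curve_fit(model, x, y, p0=[1.0] * n_params)
    chi2 = np.sum(((y - model(x, *popt)) / sigma) ** 2)
    return chi2 + n_params * np.log(x.size)  # penalize extra parameters

scores = {"theory A": bic(theory_a, 1), "theory B": bic(theory_b, 2)}
best = min(scores, key=scores.get)
print(f"favored model: {best}")
```

The verdict is objective in the narrow sense that anyone running the same data and scoring rule gets the same answer; choosing the candidate theories in the first place is still the human part.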

Right now we seem to choose whom to fund, publish, support, or employ based on what someone’s name is, or where they went to school, etc.


A way of removing that kind of bias from the process could be good. However, I’d warn anyone working on this against thinking that knowing a lot of theory is all there is to theoretical physics. In the airport, by chance, a postdoc at Northwestern struck up a conversation with me about the conference. They reminded me of that creative aspect. (Sorry, I don’t recall your name, but I do remember you work on searching for sterile neutrinos using rockets. Cool.) It can be easy to forget that initial moment of insight and how rare a spark it is, even when you experience it. You get only a few of those in a lifetime, and then you do years of work to follow up on them... which is the part they propose to have a neural network do.

In short, will we someday have computers able to do what we human theoreticians do? To take what is established and turn it on its head. To learn the established principles, then question the basic assumptions in order to find new ones: that is the task of a theoretical physicist. Will a computer someday be able to see the world from another point of view?



Tomorrow I will collect my thoughts and write a breakdown of what it was like giving a talk and acting as a sort of moderator where none was assigned.