Theories of everything in physics, by their nature, need to be theories of almost anything. How can we call a theory that is so hard to test we can't test it, right now, science? George Ellis and Joe Silk have stated in Nature that we really should not call such theories science, that they are more like pure mathematics.  My fellow Science 2.0 blogger, Johannes Koelman, addressed this earlier. In this blog I will attempt to show, by simple boolean logic, that a theory can still be science even if we don't yet have the technology, or the creative thinking, to devise an experiment to check it.  Science is at least testable in principle, even if not currently falsifiable by a single experiment.
The idea that we can do this at all comes from a comment on that blog by George Crewes.  However, he does not apply this notion explicitly to theories of everything, or to M-Theory specifically, so I will.  The length of this warrants its own blog entry.


The theory that caused this discussion is the most popular one in theoretical cosmology: M-Theory.  M-Theory is a unification of the various string theories with a theory called supergravity.  Super as in it uses supersymmetry, a kind of symmetry that combines the space-time symmetry of relativity with internal symmetries.  These theories were found to be related by what are known as dualities: in various limits and under various transformations one theory can become another, which makes the theories equivalent.  Other than knowing the various string theories and supergravity, we don't know much about M-Theory.

The problem is M-Theory seems to predict almost anything. 


M-Theory is so compelling in part because the vibrational modes of strings, springs, etc. are well understood mathematics, both classically and quantum mechanically.  They are something all physicists learn to work with at length. By adding extra ingredients to the known physics of oscillators, theorists have been able to model many possible combinations of vibrations which give different vacuum states.  In M-Theory it is more like a ground state excitation of whatever d-dimensional Dirichlet boundary surface (known as a D-brane) is under consideration.  If d=1 it is a 1-brane, or a string; if d=2 it is a 2-brane, or a membrane; and so on.  These objects, combining in various ways, give all the known particles and fields, and many which haven't been observed yet.
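To make the "well understood oscillators" point concrete, here is a minimal sketch in Python of ordinary quantum mechanics, not M-Theory: the quantized energy levels of a single harmonic oscillator. String and brane excitations are built out of towers of modes like this one.

```python
# Toy illustration (ordinary quantum mechanics, not M-Theory): the energy
# levels of a single harmonic oscillator, E_n = hbar * omega * (n + 1/2).
# String and brane excitations are built out of towers of modes like this.

HBAR = 1.0545718e-34  # reduced Planck constant, in J*s

def oscillator_energy(n, omega):
    """Energy, in joules, of the n-th level of a quantum harmonic oscillator."""
    return HBAR * omega * (n + 0.5)

# First few levels for an oscillator with angular frequency 1e15 rad/s
for n in range(4):
    print(n, oscillator_energy(n, 1.0e15))
```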

The one number that can quantify just how many possible realities M-Theory predicts is the number of possible vacuum states allowed by the theory.  The last time I checked, the estimate was at least a one followed by 100 zeroes and at most a one followed by 1000 zeroes!  A vacuum state in standard QFT is a state with zero particles (but not zero energy).
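Where do numbers that large come from?  Roughly, from combinatorics: when a construction involves hundreds of independent discrete choices, the counts multiply.  The figures in this sketch are invented purely to illustrate the multiplication, not taken from any real compactification.

```python
# Invented numbers, real combinatorics: why vacuum counts get astronomically
# large.  If each of many independent choices can take a handful of discrete
# values, the total number of combinations is values ** choices.
values_per_choice = 10    # hypothetical discrete values per choice
number_of_choices = 500   # hypothetical number of independent choices

vacua = values_per_choice ** number_of_choices
print(len(str(vacua)) - 1)  # prints 500: a one followed by 500 zeroes
```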

Each vacuum state corresponds to a set of vibrations, which correspond to various particles with their charges, masses, and fundamental constants.  Each vacuum would be a separate universe which we could never observe.  What's more, any number of vacua could describe the universe we have observed in experiments and astronomical observations to date.  Yet each of those so-called anthropic vacua could look very different at high energy, near the big bang, or near/inside a black hole (having consequences for astrophysics and cosmology).  We don't yet have the technology to test those differences.

Theories of everything would have to be theories of anything. 

Theories which try to explain all of known physics must explain what we know about ordinary matter and gravity, and give some explanation for dark matter and dark energy.  They have to be flexible enough to explain every property of those particles and fields.  They have to account for every possible way that those particles can interact.  They have to predict what is impossible as well.  Given all of that, it seems likely that such a theory would have so many adjustable parameters (even if we don't call them that) that it would be able to predict almost anything.
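A toy curve-fitting analogy (not M-Theory itself) shows why a surplus of adjustable parameters is a problem: a model with as many free parameters as data points can reproduce any data set exactly, so a good fit by itself is no test.

```python
# Toy curve-fitting analogy, not M-Theory: with as many free parameters as
# data points, a model reproduces any data set exactly, so the fit alone
# tests nothing.  A degree-(n-1) polynomial "explains" four arbitrary points.
import numpy as np

xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = np.array([5.0, -2.0, 7.0, 0.5])            # arbitrary, made-up data

coeffs = np.polyfit(xs, ys, deg=len(xs) - 1)    # 4 parameters for 4 points
print(np.allclose(np.polyval(coeffs, xs), ys))  # True: a perfect "fit"
```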

How can a theory which predicts almost anything, and which we can barely test, still be science?

For a long time I thought such a theory couldn't be science.  

Science is at least testable even if not falsifiable. 

Treat a theory as a logical proposition P.  An experiment E tries to test whether P is TRUE or FALSE.

A typical simple hypothesis is easily testable.  The math of a theory predicts that y=mx+b.  If the experiment E finds that, within the error bars, y=mx+b, then P is TRUE.  If not, then P is FALSE.
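As a minimal sketch of what "E tests P" means, here is a toy version in code: the measurements are fabricated, and the point-by-point error-bar check stands in for a proper statistical fit.

```python
# Minimal sketch of an experiment E testing a proposition P: is the data
# consistent with y = m*x + b within its error bars?  The measurements and
# the point-by-point check are toy choices, not a real statistical analysis.

def experiment_E(xs, ys, sigmas, m, b):
    """Return True (P passes) if every y lies within its error bar of m*x + b."""
    return all(abs(y - (m * x + b)) <= sigma
               for x, y, sigma in zip(xs, ys, sigmas))

xs     = [0.0, 1.0, 2.0, 3.0]   # fabricated measurements
ys     = [1.1, 2.9, 5.2, 6.8]
sigmas = [0.3, 0.3, 0.3, 0.3]   # error bars

print(experiment_E(xs, ys, sigmas, m=2.0, b=1.0))  # True: E(P)=TRUE for this data
```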


Now consider M-Theory and call it the proposition P.  M-Theory has as an ingredient the ability to post-dict both General Relativity and the standard model of particle physics, and it gives candidates for dark matter and dark energy.  So for all of those observations E(P)=TRUE.


The question is: can we, in principle, find an experiment at high energy which would give E(P)=FALSE in a universe where we can still exist?   YES!  M-Theory predicts a finite number of possibilities.  So all we need to do is design astronomical observations or particle physics experiments whose results are each individually allowed by some M-Theory vacuum, but which no single vacuum allows together.  We would need to find phenomena M-Theory says can't be in the same universe.
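In boolean terms the argument looks like the sketch below.  The vacuum lists and phenomenon names are invented stand-ins; the point is only the logic: the theory survives while at least one allowed vacuum contains everything we have observed.

```python
# Boolean sketch of the argument, with invented names.  The theory is modeled
# as a finite list of allowed vacua, each a set of phenomena it permits.  The
# theory survives only while some single vacuum contains everything observed.

ALLOWED_VACUA = [  # hypothetical stand-ins for M-Theory's finite list of vacua
    {"higgsino_at_low_mass", "light_gravitino"},
    {"higgsino_at_low_mass", "heavy_neutralino"},
]

def E_of_P(observations, allowed_vacua=ALLOWED_VACUA):
    """E(P): TRUE if at least one allowed vacuum contains all observed phenomena."""
    return any(observations <= vacuum for vacuum in allowed_vacua)

print(E_of_P({"higgsino_at_low_mass"}))                 # True
print(E_of_P({"light_gravitino", "heavy_neutralino"}))  # False: P falsified
```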


Suppose we find a particle at the LHC with the mass of a supposed Higgsino (the supersymmetric partner of the Higgs). Great, but then a while later we find a neutralino or gravitino which is inconsistent with that Higgsino to such a degree that the combination is not an allowed M-Theory vacuum.  That would disprove M-Theory, and that is possible.


There are astrophysicists who are trying to use M-Theory to construct models of the big bang and of black holes which may provide tests in the near future.


One might say the problem is that when an experiment is done and E(P)=FALSE in some regime, the theorists just reinvent the theory.  The energy scale slips, etc.  So what?  Isn't devising a new version of a hypothesis, when an older version is found to be deficient, another part of the scientific method?


It would be unscientific to insist that theorists must abandon a model which can explain so much without also proposing a better model, one which has the same power but isn't a theory of so many things.


Ellis and Silk are right about one thing.  Theorists who insist that a theory does not need to be tested due to its mathematical elegance are wrong.  Theories need to be tested; that is what makes them something other than religion.  If we accept that theories never need to be tested, then we might as well give credit to God for creating such a dandy universe for us.



I have noticed a consequence of this thinking.  Some young would-be physicists don't think that theory can be scientific, and so they shy away from it altogether.





TL;DR: Even a theory that predicts an unimaginable number of experimental outcomes can still have predictions that are mutually exclusive.  If the theory P says that a TRUE result for experiment E1 implies a FALSE result for experiment E2, and we find that both E1 and E2 come out TRUE, then P is FALSE.