Will The Standard Model Die By The Hands Of Its Dearest Child?
By Tommaso Dorigo | July 23rd 2009 05:25 PM | 19 comments

A new paper on the arXiv caught my attention this evening for several reasons. First of all, because two of its five authors (J.Ellis, J.R.Espinosa, G.F.Giudice, A.Hoecker, and A.Riotto) are (or have been) my colleagues at Padova University; second, because the title is quite catchy; third, because the results it presents are indeed valuable food for thought.

The paper is titled "The Probable Fate of the Standard Model". Here, contrary to what is fashionable nowadays, rather than predicting the demise of the Standard Model through the discovery of this or that brand of new physics unexplained within its boundaries, the authors consider whether the SM can survive a precision measurement of the Higgs mass, given that only a narrow range of mass values allows the SM to work at arbitrarily high energies.

The theoretical arguments at the basis of the study are not trivial to understand, and even less trivial to summarize to non-experts. I honestly think it is above my head to provide an explanation below (or above) the graduate student level, so I will just provide a "summary for experts", whatever that may mean. However, some of the conclusions I report on at the end might be readable by all.

Aim of the study

The authors start by recalling the theoretical shortcomings of the Standard Model, in particular the need to fine-tune the Higgs mass, but they make it clear that they are not challenging the model on that ground:

"There are, of course, plenty of theoretical arguments why the Higgs sector of the SM is inadequate, many of them related to the apparently unnatural fine-tuning of its parameters, but we have in mind a more direct empirical argument based on the available experimental information about the Higgs sector".

What they set out to do is to examine, in the light of the "available experimental information", the theoretical boundaries within which the Higgs mass must lie for the SM to be a good theory of Nature at any energy scale. The first boundary is given by vacuum stability bounds on the Higgs potential: as the energy scale at which the model is tested increases, these bounds become tighter, and they basically allow the SM to be valid for all energies up to the Planck scale (set at $2 \times 10^{18}$ GeV) only if the Higgs boson has a mass above 135 GeV or so. The second boundary comes from the need for a well-behaved theory to be "computable": too large a value of the Higgs mass makes the Higgs self-coupling "blow up" at large energies, so that the theory loses the property of being perturbatively calculable.
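To make the two boundaries concrete, here is a toy numerical sketch of the mechanism behind them: the renormalization-group running of the Higgs self-coupling. This is my own illustrative code, not the paper's calculation - I use one-loop beta functions, tree-level matching, and crude Euler stepping, while the authors use two-loop running and careful matching - so the returned scales are only indicative. But it shows the qualitative behavior: a light Higgs drives the self-coupling negative at high energy (vacuum instability), while a heavy one makes it blow up (loss of perturbativity).

```python
import math

def run_lambda(mh, mt=173.0, v=246.0, t_max=37.0, dt=1e-3):
    """Toy one-loop RG running of the SM Higgs quartic coupling.

    t = ln(mu/mt); t_max ~ 37 corresponds to mu ~ 2e18 GeV (Planck scale).
    Returns ("unstable"|"nonperturbative"|"stable", scale_in_GeV or None).
    """
    lam = mh**2 / (2.0 * v**2)        # tree-level matching at mu = mt
    yt = math.sqrt(2.0) * 163.0 / v   # top Yukawa (MS-bar top mass ~ 163 GeV)
    g1, g2, g3 = 0.36, 0.65, 1.16     # gauge couplings at mu ~ mt (g1 = g')
    k = 1.0 / (16.0 * math.pi**2)
    t = 0.0
    while t < t_max:
        # one-loop SM beta functions (each times 16 pi^2)
        b_g1 = (41.0 / 6.0) * g1**3
        b_g2 = -(19.0 / 6.0) * g2**3
        b_g3 = -7.0 * g3**3
        b_yt = yt * (4.5 * yt**2 - 8.0 * g3**2
                     - 2.25 * g2**2 - (17.0 / 12.0) * g1**2)
        b_lam = (24.0 * lam**2 + 12.0 * lam * yt**2 - 6.0 * yt**4
                 - 3.0 * lam * (3.0 * g2**2 + g1**2)
                 + (3.0 / 8.0) * (2.0 * g2**4 + (g1**2 + g2**2)**2))
        g1 += k * b_g1 * dt
        g2 += k * b_g2 * dt
        g3 += k * b_g3 * dt
        yt += k * b_yt * dt
        lam += k * b_lam * dt
        t += dt
        if lam < 0.0:                   # potential turns over: vacuum instability
            return "unstable", mt * math.exp(t)
        if lam > 4.0 * math.pi:         # self-coupling blows up: perturbativity lost
            return "nonperturbative", mt * math.exp(t)
    return "stable", None

for mh in (115.0, 120.0, 130.0, 200.0):
    fate, scale = run_lambda(mh)
    print(f"m_h = {mh:.0f} GeV: {fate}"
          + ("" if scale is None else f" at ~{scale:.1e} GeV"))
```

The function reports which fate the coupling meets first, and at roughly what scale, when run from the top mass up to the Planck scale: the lower Higgs masses hit the instability boundary, the heavy one the non-perturbativity boundary.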

The boundaries are shown in the figure below, taken from the paper. It displays, as a function of the logarithm of the energy scale Lambda up to which the SM is supposed to remain a valid, perturbatively calculable theory, the range of Higgs masses that makes this possible.

A Higgs mass close to the LEP II lower limit runs into trouble at high energy, when the Higgs potential may develop additional minima which can be reached by quantum tunneling, depending on the temperature of the Universe. The green area shows the lower bound and it is not a narrow line because of uncertainties in the calculation. The blue and red areas are more stringent bounds below which metastability occurs. At high Higgs masses, instead, two different blue lines mark the fuzzy boundary of the region where perturbative calculations are enabled or prevented by the size of the Higgs boson self-coupling. Also shown in the figure are direct search limits, in grey.

The recent precise measurements of the top quark and W boson masses, together with the machinery of global fits to Standard Model observables (most of which are still those determined by LEP and SLD in the nineties) and with the direct limits on the Higgs mass coming from LEP II ($m_h > 114.4$ GeV at 95% CL) and the Tevatron (an exclusion of the region $160 < m_h < 170$ GeV at 95% CL), can all be used together to verify which scenario is most likely: whether the Higgs mass is one which makes the SM unworkable above a certain energy scale, or one which makes the Higgs potential unstable above some energy, or whether instead it is compatible with a SM which lives happily all the way to the Planck scale, where new physics associated with quantum gravity must surely appear. The experimental inputs can be summarized by the Gfitter plot shown on the right above, whose yellow band displays the variation from the minimum chisquared of the global fit (which includes direct bounds) as a function of the Higgs boson mass.
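A side note on how such plots are read: in the Gaussian approximation, the 1-CL ("p-value") curves quoted below are a direct translation of the fit's delta-chisquared at each Higgs mass. A minimal sketch of that translation - my own, assuming one fitted parameter (one degree of freedom); Gfitter's actual treatment is more sophisticated:

```python
import math

def one_minus_cl(delta_chi2):
    """1 - CL for a given delta chi^2 above the fit minimum,
    assuming one fitted parameter and Gaussian errors."""
    return math.erfc(math.sqrt(delta_chi2 / 2.0))

print(one_minus_cl(1.0))   # ~0.32: the 1-sigma level
print(one_minus_cl(3.84))  # ~0.05: the conventional 95% CL boundary
```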

Results of the study

The results of the analysis of all experimental inputs are nicely summarized in another figure (below), which presents the probability that current data give to different scenarios, as a function of Higgs mass. Here, the behavior of the Higgs potential at infinitely high energies is considered, to check whether the SM can or cannot be valid at any energy scale. The red region is the one where the Higgs potential is liable to become unstable in time scales shorter than the age of the Universe; the blue one marks situations where the quantum tunneling out of the ordinary minimum of the Higgs potential is slow enough to make it an acceptable solution; the dark green area shows a situation where the Higgs potential is stable against thermal fluctuations defined by the Planck scale; and the light green area is the solution which makes the SM a candidate for a theory that survives to the Planck scale untroubled by changes of behavior of the Higgs potential or by non-perturbativity of the Higgs coupling. The grey area is the region which makes the theory non-calculable at high energy.

Finally, overlaid on those areas is the computed probability of Higgs boson masses, extracted from the most recent experimental inputs as discussed above.

It transpires that there is a wide chunk of green in a region where the blue curve has respectable values of probability (1-CL). The authors compute that the probability that the Higgs mass lies in a region where the Higgs self-coupling becomes non-perturbative at high energy (the grey area on the right) is less than 1%: this is an important addition to our understanding of the limitations of the Standard Model, because the "blowing up" of the Higgs self-coupling has at times been cited as one of the potential shortcomings of the Standard Model at high energy, and a reason for believing that new physics should set in at energy scales well below the one of quantum gravity.

Another point to take from the figure above is that the highest probabilities coincide with a region where it is not perfectly clear whether the Higgs potential is well-behaved in the ultraviolet regime. Because of that ambiguity at low masses, the authors consider two specific scenarios in the second part of their paper, in which the Higgs mass is assumed to be measured by a long LHC run at 115 or 120 GeV, with a small (0.1%) uncertainty.

In such scenarios, one could indeed conclusively prove that the Standard Model has to break down at energy scales well below the Planck scale! Take for example the case of $m_h = 120$ GeV in the figure below, which gives the 1-CL probability for the Standard Model not to run into problems of vacuum stability as a function of the scale at which it is probed. Here, the probability values are due to uncertainties in the calculations of the global electroweak observables, while the two green colours show the effect of theoretical errors in the calculations of the stability bounds.

In Summary...

The authors note in their conclusions that:
"the present data exhibit no clear preference between scenarios in which the SM survives up to the Planck scale, and in which it develops new minima at a scale Lambda and becomes metastable with respect to either thermal or zero-temperature fluctuations."
They finally make the point that the
"discovery of the Higgs boson might reveal quite conclusively the possible fate of the Standard Model. For example, if the SM Higgs boson were to be discovered with a mass of 120 GeV, the effective potential of the SM would develop a new vacuum at log_10(Lambda/GeV)<10.4 and remain in a metastable state, unless new physics beyond the SM intervenes."
All in all, I found this study a fresh look at data we have seen and scrutinized for quite a few months now. The point that a discovery of the Higgs boson at specific values of mass may prove that the SM is bound to break down, rather than making it an even greater, unsinkable success, is remarkable. The SM might be sentenced to death by the discovery of the very particle which crowns it!

Hi Tommaso, I also enjoyed the paper and think that, as you said, it gives a fresh look at things.

One thing I would like to comment on, however, is that there seems to be a misunderstanding as to what is meant by saying that discovering some new physics (whatever that may be) will prove to be the demise of the SM. Comments like

"The SM might be sentenced to death by the discovery of the very particle which crowns it!"

do not make any sense and are at best misleading. Practically all work in physics beyond the SM (BSM) relies on the validity and proven success of the SM. The scalar sector of the SM is certainly a mystery as of today, and its simplest realization is what we usually mean by the SM. Discovery of the Higgs will certainly add some beautiful stones to its crown (regardless of which type of Higgs!), but most importantly, its discovery and subsequent study will hopefully shed some light on areas we do not really understand now. This is basically the seed for most of what has been done in BSM!

I honestly do not think anyone working on BSM physics has predicted the demise of the SM. If somebody did, I would be very interested in checking what was suggested as a replacement!

Saludos.....
Hi Alfredo,
of course. It is not a "death" that the SM is sentenced to if it is found that it cannot work at all energies, but rather an observation on its maximum range of validity as an effective theory. When I said that the authors of the paper had chosen a catchy title for their work ("the fate of the SM"), I decided I would make an even more resounding one for my writeup. It's journalism :)
As for deceiving readers, well, I think those who read this article are mature enough to figure it out by themselves. Sometimes I take more care to avoid misleading my audience, but in this case I do not think it matters, given the complexity of the topic.
Cheers,
T.
Well, that's all nice, and I agree with all the expectations of the authors - the ones written between the lines. Still, this is no real argument about what to expect. Of course, if the Standard Model is complete up to the Planck scale, the Higgs mass is constrained to lie somewhere in a limited region: it's a prediction.
Whoever believes that the SM is the full story up to the Planck scale must also believe that the Higgs will be found e.g. at 135 GeV or something like that. I am not sure what these people do with dark matter in their minds, and so on. But if I believed that the SM were true up to the Planck scale, I would surely have no problem believing its predictions about any electroweak phenomenon, including very accurate predictions of the Higgs mass, as long as this prediction remained alive (not yet falsified).

What I find puzzling is the contrast between the "theoretical arguments" and the "direct empirical arguments" they offer. This distinction is non-existent. In fact, at this very moment, the argument they offer is just another theoretical argument - de facto equivalent to the fine-tuning of the Higgs sector parameters (needed to get sensible physics at very low, sub-TeV energies), like any other theoretical argument indicating that the Higgs sector of the SM is inadequate because its high-energy behavior requires a lot of tuning.

Such a theoretical argument will become an empirical argument once the experiments confirm its assumptions about the Higgs mass. But the same thing is true for any theoretical argument in science. If an appropriate experimental test is made, any meaningful theoretical argument in science becomes an empirical argument. But as long as the measurements are only "expected", the experiments are gedanken (or planned) ones, and the argument building upon them is a theoretical argument. ;-)

Concerning the very matter, the more one actually thinks about the high-energy behavior of any of these theories, the more inadequate the bare Standard Model looks. I think that every sane high-energy physicist thinks that physics near the fundamental scale - perhaps close to the Planck scale but surely higher than the LHC scale - is more fundamental than its low-energy manifestations, and should follow "more natural rules", while the phenomena at low (and LHC) energies (i.e. long distances) are derived and may look convoluted.

The real criterion whether a model is natural or pretty comes from its high-energy behavior, and e.g. MSSM is surely prettier in this respect than SM.

Even if one knew nothing about the high-energy restrictions on the parameters (and the ability of SUSY to explain them etc.), the assumption that there only has to be one doublet is clearly just a "guess". This naive type of minimality has been falsified so many times in the history of science that I think that people should have already learned a lesson. One Higgs doublet is simple and Nature would be stupid to do really awkward things with 876 Higgses (unless they're explained). But it's just comparably likely that there is one Higgs as the combined probability that there are 2 Higgses or 3 Higgses or 6 Higgses (2 doublets per generation). One shouldn't try hard to convince himself that we know something if we clearly don't know it - and the number of Higgs fields is an example.
The view that I adopt is the following: the SM is an effective framework, most likely not valid up to the Planck scale (and perhaps not even at the EWSB scale), that is extremely successful in the energy range tested so far. The existence or lack thereof of a Higgs at around 135 GeV as a prediction of the SM assumes the SM is valid up to the Planck scale, so I do not consider this part of the SM (as defined above).

As you mention, there are several reasons to expect new physics (including dark matter of course, but also understanding the mass patterns of the observed particles). Also, it is my gut feeling that the actual physics of electroweak symmetry breaking will involve physics beyond the SM.

In this tenor I think it is misleading to say that finding new physics at or above the EW scale will spell the demise of the SM (as understood above). It will certainly kill the idea/belief that the SM is some sort of ultimate theory attempting to explain everything. In fact, we do not need to wait for the Higgs(es) to be discovered; we already know this idea is deluded.
Dear Alfredo, it is surely a legitimate viewpoint that the SM is just an effective theory and shouldn't be extrapolated.
At the same time, extrapolation of a theory is the simplest and most natural expectation about what happens beyond its tested range of validity. Physicists in the past were often too eager to bring new physics - e.g. the 1938 conference shows how widespread this "permanent revolution" thinking used to be.

However, many theories and principles were found to be valid much further than initially expected, many new particles were found to be much heavier than initially expected, and so on. So while I agree that the SM is unlikely to be the full story, the MSSM could be damn close to it, up to the GUT scale, as the coupling unification indicates.

We never know how far, and whether, the existing theory can be extrapolated, if it is consistent there. But the assumption that "it is valid much further than tested now" is always a priori comparably likely to the assumption that it breaks even where it doesn't have to. And I would also claim that a theory that can be valid - without "fixes" - much further than in the currently tested range of validity is more beautiful, more robust, and in this sense probably more likely to be true.
Hi Lubos,

I have to agree, the number of Higgs doublets is a free choice at this point. The minimality of the Standard Model is not an asset, but a declaration of ignorance. However, minimality is such a powerful working base for experimental investigations in Physics that we should follow its predictions as our best guess.
Cheers,
T.
Dear Tommaso, sure, I agree that minimality should play a role, and so on. But what I disagree with is written between the lines of your comment: namely that the first idea or feeling about "minimality" has to be the final word. In other words, I disagree with your implicit assumption that there is no evolution of the concept of "minimality".
It is not the case: there is evolution, after all. There have been many particles and phenomena found in the past that still look "non-minimal" and like useless games of Nature - who ordered the muon? On the other hand, there are also many insights that used to look non-minimal but currently look minimal, because we know certain previously unknown qualitative conditions that rule out the previously minimal options.

So people were imagining that just the W bosons - the charged currents - were enough and minimal for the weak interactions. But in fact, spin 1 bosons have to interact via gauge interactions, and the only realistic completion involves both W and Z bosons, from an SU(2) x U(1) theory. It's really the minimal choice to get the observed beta decay from a structure that is as consistent a QFT as our current ideas about QFTs dictate. The Z bosons surely looked non-minimal for beta decay in the past.

In the same way, only leptons or only quarks could look minimal, but anomalies imply that you need both. One generation of them could look minimal, but CP violation is needed for leptogenesis and the birth of our world (to mention an "anthropic" argument: there may be others), so one also needs three generations. The Higgsless Standard Model could be minimal except that it's not unitary.

Now, non-supersymmetric SM can look consistent as a QFT but it is very plausible that quantum gravity dictates that SUSY has to exist at some scale, for some general stability or consistency of the framework. I don't have a full proof of this assertion at this moment, but based on our analysis of well-known models consistent at higher energies, those in string theory, it's damn possible that a similar statement is true and will be proved in the future.

Whether or not it strictly holds, it's surely the perspective that string-related physicists take. The supersymmetric models are more minimal than the non-supersymmetric ones - because the fundamental theory has some SUSY at some scale (superstring theory is surely a better starting point for the real world than bosonic string theory!), and additional stuff is needed to break SUSY, i.e. to add an unnatural thing to the theory. All non-SUSY theories are just effective approximations from this viewpoint, ones that actually have much more stuff than SUSY theories, stuff that is just hidden in their effective descriptions.

The change of the notion of "minimality" has many stronger, subtler examples in more technical considerations. For example, consider the representation of a particle responsible for breaking SUSY or communicating SUSY breaking. The fundamental rep may look more minimal, but more specific constructions can show that the antisymmetric tensor is actually more minimal from a stringy viewpoint, and so on, because it is linked to a simpler singularity with smaller topological invariants, so to say.

I could go on and on and on to argue that the idea that the "minimality" is always determined forever is a symptom of a frozen mind, not a sustainable rational attitude in science. The same comment surely applies to the counting of the Higgs boson. We're preferring one because we're uncertain about more detailed arguments how many Higgses there should be, and "one" has a higher weight among positive integers with a Poisson or any similar distribution. ;-)
Minimality in HEP is just a statement about Occam's razor. Which itself is just a statement of Bayesian logic and a consistent way of assigning priors. The priors of course are updated whenever Nature chooses not to cooperate (say by giving neutrinos a small mass, or inventing a muon).

I agree that theoretical insights do play a role. For instance, the 'naive' simplest explanation of dark matter would be to simply add a single scalar on top of the Standard Model and make it weakly interacting. On the other hand, that is clearly disfavored, because we have a bunch of good theoretical prejudices against adding random scalars with no additional motivation. The naive minimal explanation becomes non-minimal when viewed with modern insight.
More compelling of course is if that scalar actually solved something else (like the strong CP problem). Hence the axion was (at least for a while) well motivated. Of course the Bayesian prior for this simple scenario has now been updated to the point where it's almost, but not quite, empirically ruled out (the allowed mass range lives in a very small window), and it became apparent that it caused a host of other problems.

Hi Tommaso, it seems that the allowed window for the Higgs is right where the LHC has the lowest sensitivity. How long will it take to collect enough data at, say, 135 GeV?
That is a tough call Daniel, but I will present the most recent sensitivity estimates for a SM Higgs boson at the LHC in a month. For now I can eyeball a couple of years of data, but in a few weeks I will come back to you with a more precise estimate.
Cheers,
T.
Extrapolating effective theories to high energies may tell us something about their limits of validity. This kind of extrapolation is most useful with non-renormalizable effective theories such as the Fermi model.

For renormalizable effective theories, however, such extrapolations are usually not meaningful. For example, extrapolating pure QED to arbitrarily high energies leads to unphysical results such as the Landau pole, which tells us that the theory by itself is incomplete. But it tells us nothing about the new physics that appears at much lower scales, such as QCD.
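[To make the commenter's QED example quantitative, here is a toy estimate using the textbook one-loop running with the electron loop only; including the other charged fermions moves the pole considerably, so the exact exponent should not be taken seriously - only its absurd size matters. - Ed.]

```python
import math

alpha_me = 1.0 / 137.036   # fine-structure constant at the electron mass
me_gev = 0.000511          # electron mass in GeV

# One-loop QED running with a single charged lepton:
#   1/alpha(mu) = 1/alpha(me) - (2 / (3*pi)) * ln(mu / me)
# The Landau pole sits where 1/alpha(mu) -> 0:
ln_pole_over_me = 3.0 * math.pi / (2.0 * alpha_me)
log10_pole_gev = (ln_pole_over_me + math.log(me_gev)) / math.log(10.0)

print(f"Landau pole at ~10^{log10_pole_gev:.0f} GeV")  # absurdly far above the Planck scale
```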

With the Standard Model something similar is probably the case.

Tommaso, you say that "... there is a wide chunk of green in a region where the blue curve has respectable values of probability ...".

So, if the Higgs were to be found by the LHC in such a region (say, around 145 GeV where the blue curve has a peak), would it be fair to modify your title to:
"The Standard Model will Live by the Hand of its Dearest Child"
and
that the Standard Model might then "... survive... to the Planck scale untroubled by changes of behavior of the Higgs potential or by non-perturbativity of the Higgs coupling ...".

Tony Smith

Well, Tony, a 145 GeV Higgs just fails to show there MUST be NP at some scale lower than Mp. But it cannot be taken as a proof in favor of the absence of NP IMO.
Cheers,
T.
However, if a Higgs were to be found around 185 GeV,
at the little (possibly statistically insignificant) peak just inside the gray region of Non-perturbativity,
then
I guess we could clearly say that the Standard Model suffers from NP. As to what that might mean physically, consider the following quote (from the book "Journeys Beyond the Standard Model" (Perseus 1999) by Pierre Ramond):
"... This does not necessarily mean that the theory is incomplete, only that we can no longer handle it ... it is natural to think that this effect is caused by new strong interactions, and that the Higgs actually is a composite ...".
So,
could you say that finding a Higgs around 185 GeV would support the idea of a composite Higgs ?

Tony Smith

Hi Tony, I am unable to answer meaningfully your question, although I may offer a couple of notes:
1) the requirement that the SM stay perturbatively calculable throughout its energy domain of validity does not look so strong to me: it rather appears a somewhat aesthetic requirement. The low-mass bound imposed by the form of the potential is much more stringent, since a potential that is not well-behaved calls into question the whole mechanism.
2) even if one were to accept that the non-perturbativity bound is absolute, I do not see it pointing directly to a composite Higgs. Maybe all it would say is that there is some new physics not far away.

These are just my two cents...
Cheers,
T.
I don't know if I will have time to read it, but I don't see where the fresh look is.

Perturbativity and vacuum stability bounds have been known for a long time: perturbativity is a weak requirement (our unmotivated desire to have an easily calculable theory), and the (large!) scale of new physics from vacuum stability is just a calculation.

It's an effective theory..

Personally, it's 145 GeV that would be a disaster, the end of HEP.

It should also be honestly remarked that precision tests depend heavily on new physics that may be just around the corner (and thus they give no strict info on the Higgs alone), that new physics is anyway required for dark matter, and that aesthetically one may hope for a more complete theory (Left-Right symmetry.. unification..)

FAS

Wasn't Higgs observed as a t-channel of top quark decay already?

This is the same situation, like the search for hidden dimensions, while ignoring Casimir force, or like search for Lorentz symmetry violation while ignoring gravity lensing and so on.. Why we should pay for such RE-search?

Zephir, no, as far as I recall the Higgs has not been observed yet, but it might have escaped my scarce attention. Can you provide more detail on what you refer to?
Cheers,
T.
He may have been confused by articles similar to Take That LHC: Fermi Scores Again In Discovering Rare Single Top Quark, because the single top is a background for Higgs.   Statistically, he (and a lot of physicists) are right that with 10 inverse femtobarns of collision data in each Fermi experiment by the end of 2010, Higgs must be in there.   Finding it is something else.
...but it might have escaped my scarce attention.
... ha ha.  Nicely understated.    It would be on the cover of every magazine, of course, and New Scientist would have a title like "God Found ..."