*Jacques Distler is a Professor of Physics at the University of Texas at Austin, and a distinguished theorist, as well as a physics blogger. Along with experimentalist Gordon Watts (who covered $250), he took my $1000 bet that the LHC would not discover new physics in its first 10/fb of proton-proton collision data. I discussed my take on the bet in a previous post; here Jacques explains his point of view, why he took the bet, and what he thinks of the present situation with new physics searches at the high-energy frontier.*

The article below has appeared today at Distler's blog, and I reproduce it here with his permission.

* * *

It’s been 20 years since I had the surreal experience of turning on C-Span late at night to see my future boss, Steve Weinberg, testify before Congress on behalf of the SSC.

Steve, alas, was unsuccessful; the SSC was cancelled, and the High Energy Physics community threw our collective eggs in the basket of the LHC. The SSC, at sqrt(s) = 40 TeV, was *designed* as a discovery machine for TeV-scale physics. The LHC, with a design energy of sqrt(s) = 14 TeV, is the best one could do, using the existing LEP tunnel. It was guaranteed to discover the Higgs. But for new physics, one would have to be somewhat lucky.

14 TeV sounds like more than enough energy to hunt for new particles with masses of a few TeV. But that appearance is deceptive. The protons circulating in a hadron collider are like sacks of marbles, and each marble ("parton", if you want to sound sophisticated) carries only a fraction of the total kinetic energy of the proton. At the energies we are talking about, the collisions are actually parton-parton collisions. So it's the energy of the pair of partons undergoing the actual collision that matters. And that energy is typically *far less* than the nominal sqrt(s). In fact, things are slightly *worse* than the metaphor implies. Each sack contains a variable number of marbles, and the mean number of marbles (sharing, between them, the total kinetic energy of the proton) increases with increasing sqrt(s).
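A toy Monte Carlo makes the point concrete. The distribution below is my own crude stand-in for a real parton distribution function (steeply falling, favouring small momentum fractions) and is not fitted to data; the point is only that the typical parton-parton energy sqrt(x1·x2·s) comes out far below sqrt(s):

```python
import math
import random

def sample_x(rng, xmin=1e-3):
    """Draw a momentum fraction x with toy density roughly (1 - x)**3 / x."""
    while True:
        x = xmin ** rng.random()           # density proportional to 1/x on (xmin, 1]
        if rng.random() < (1.0 - x) ** 3:  # accept/reject for the (1-x)^3 falloff
            return x

rng = random.Random(42)
sqrt_s = 14000.0                           # LHC design energy, in GeV
energies = []
for _ in range(100_000):
    x1, x2 = sample_x(rng), sample_x(rng)
    # parton-parton centre-of-mass energy: sqrt(s-hat) = sqrt(x1 * x2 * s)
    energies.append(math.sqrt(x1 * x2) * sqrt_s)

energies.sort()
median = energies[len(energies) // 2]
frac_above_2tev = sum(e > 2000.0 for e in energies) / len(energies)
print(f"median parton-parton energy: {median:.0f} GeV")
print(f"fraction of collisions above 2 TeV: {frac_above_2tev:.4f}")
```

Even with this cartoon PDF, the median collision energy lands at a few hundred GeV, and only a small tail of collisions probes the multi-TeV range.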

The upshot is that, at a hadron collider, the “interesting” collisions — the ones where, by chance, the colliding partons happen to carry a large-enough fraction of the proton’s total energy — are few and far between. To some extent, you can compensate for their rarity by increasing the total number of collisions (running the machine at higher luminosity). That introduces its own difficulties, but it’s the tradeoff that the designers of the LHC needed to make.
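The luminosity tradeoff is simple arithmetic: the expected number of events is the cross-section times the integrated luminosity. A sketch with made-up numbers (neither value is a measured cross-section):

```python
# Expected event count N = cross-section x integrated luminosity.
# Both numbers below are illustrative, not real measurements.
sigma_fb = 50.0        # hypothetical signal cross-section, in femtobarns
lumi_fb_inv = 10.0     # integrated luminosity: the 10/fb of the bet
n_expected = sigma_fb * lumi_fb_inv
print(n_expected)      # expected signal events, before cuts and efficiencies
```

Doubling the luminosity doubles the expected yield of rare collisions, which is why running harder can partially substitute for running at higher energy.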

Still, there are (or were) lots of scenarios with new physics, accessible to the LHC. And theorists, being perennial optimists, put a lot of effort into exploring those scenarios. Moreover, I think we'd have to go back to Isabelle to find an example of an accelerator which opened up a new range of energies and didn't find *anything* new. So, back in 2006, when Tommaso Dorigo proposed a bet, I was willing to take the position that the LHC *would* discover new physics.

I didn’t, however, like Tommaso’s original terms (a new particle discovery, announced before the end of 2010).

Experience with previous machines, like the Tevatron, is that startup dates tend to slip, and that it can often take years to ramp up to the full design luminosity. As it turns out, the LHC had barely begun to collect data by then, and the very first trickle of physics results started coming out in October of 2010. So I had wisely insisted that, rather than fixing a date, we agree on a fixed amount of data collected (10 fb^−1), plus a suitable period (12 months) for the analyses to be done.

Moreover (for reasons that I will recall, below), I thought the “new particle” criterion too narrow, and substituted “a 5σ discrepancy with the Standard Model.”
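For reference, "5σ" is the particle physicists' conventional discovery threshold; under a Gaussian null hypothesis it corresponds to a one-sided p-value of about 3 × 10^−7. A quick check:

```python
import math

# One-sided tail probability of a standard normal beyond z sigma,
# computed via the complementary error function.
def p_value(z):
    return 0.5 * math.erfc(z / math.sqrt(2))

print(f"5 sigma -> p = {p_value(5.0):.2e}")   # about 2.9e-07
```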

Those terms seemed pretty solid to me, and I agreed to put $750 behind them.

One thing which I didn't count on was the 2008 quench incident, which led to the aforementioned delay in starting up the LHC and (more important for the bet at hand) to its operation at about half of the design energy (sqrt(s) = 7–8 TeV) up through 2013.

Historically, the ramp-up in energy tends to be much easier and (since it drastically improves the "reach" for new physics) tends to be accomplished much more quickly than the ramp-up in luminosity. So I fully expected that *most* of that first 10 fb^−1 would be collected at sqrt(s) = 14 TeV. Alas, *none* of it was (and, foolish me for not insisting on a provision about the sqrt(s) of the data).

What about the “new particle” criterion?

There are lots of scenarios where you would see a stark deviation from SM expectations at the LHC, but still be unable to ascribe that deviation to a new particle of a particular mass, etc. For example, much excitement was generated by the initial measurements of the H→γγ branching ratio, which were higher than the SM prediction by 2–3σ. With more data, that discrepancy seems to have gone away, but *imagine* if it had *persisted*. We would now find ourselves with a 5σ deviation from the SM — clear indication of the existence of new heavy charged particle(s) which couple strongly to the Higgs. But, since they only contribute to H→γγ via a loop, we would have almost no handle on their mass or other quantum numbers.
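That counterfactual can be made quantitative. For a statistics-dominated measurement, an excess of fixed relative size grows in significance like the square root of the integrated luminosity, so a persistent 2.5σ excess at 10 fb^−1 would cross 5σ around 40 fb^−1. A minimal sketch (the starting numbers are illustrative, not the actual H→γγ figures):

```python
import math

# If a deviation of fixed relative size is statistics-dominated, its
# significance scales like sqrt(integrated luminosity).
def projected_significance(sigma_now, lumi_now, lumi_future):
    return sigma_now * math.sqrt(lumi_future / lumi_now)

# A hypothetical 2.5 sigma excess seen in 10/fb, if it persisted:
for lumi in (10.0, 25.0, 40.0):
    print(f"{lumi:5.1f}/fb -> {projected_significance(2.5, 10.0, lumi):.2f} sigma")
```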

Well, it’s been a little over a year since we reached the 10fb^−1 mark. The Lepton-Photon Conference seemed like a natural end-point for the wager. If there had been a discovery to announce, that would have been the natural venue.

Needless to say, there were no big announcements at the Lepton-Photon Conference. And, since the LHC is shut down for an upgrade until 2015, there won’t be any forthcoming. So Tommaso is $750 richer.

Would the outcome (aside from being delayed for another ~3 years) have been any different had I been smart enough to add a stipulation about sqrt(s)? Put differently, would I be willing to bet on the 2015 LHC run uncovering new BSM physics?

The answer, I think, is: not unless you were willing to give me some substantial odds (at least 5–1; if I think about it, maybe even higher).

Knowing the mass of the Higgs (∼125 GeV) rules out huge swaths of BSM ideas. Seeing absolutely *nothing* in the 7 and 8 TeV data (not even the sort of 2–3σ deviations that, while not sufficient to claim a "discovery," might at least serve as tantalizing hints of things to come) disfavours even more.

The probability (in my Bayesian estimation) that the LHC will discover BSM physics has gone from fairly likely (as witnessed by my previous willingness to take even-odds) to rather unlikely. N.B.: that’s not quite the same thing as saying that there’s no BSM physics at these energies; rather that, if it’s there, the LHC won’t be able to see it (at least, not without accumulating many years worth of data).
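The shift from even odds to roughly 5-to-1 can be sketched as a single Bayes-theorem update. The likelihoods below are purely illustrative assumptions of mine, not Distler's actual numbers; the point is only the shape of the calculation:

```python
# Toy Bayesian update on P(BSM physics is accessible to the LHC).
prior = 0.5                  # "even odds", as in the original bet
p_null_given_bsm = 0.15      # assumed: chance of a null 7-8 TeV result even if BSM is there
p_null_given_no_bsm = 0.75   # assumed: chance of a null result if it is not

# Bayes' theorem: P(BSM | null) = P(null | BSM) P(BSM) / P(null)
posterior = (p_null_given_bsm * prior) / (
    p_null_given_bsm * prior + p_null_given_no_bsm * (1 - prior)
)
print(f"posterior P(BSM accessible | null result) = {posterior:.3f}")
```

With these made-up likelihoods the posterior comes out near 1/6, i.e. about the 5-to-1 odds mentioned above; different assumed likelihoods would of course move the number.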

Ironically, a better bet for discovering new physics in this energy range might be on an ILC, running as a precision Higgs factory. I’ll leave it to you to calculate the odds that such a machine gets built.

Rereading the comments on Tommaso's post (and other things he's written), you might well think this discussion is a proxy for a narrower one, about the status of supersymmetry. The 7- and 8-TeV runs at the LHC have, indeed, been very unkind to the MSSM. But they have been even more unkind to other BSM ideas. So

- While the probability that the LHC will see any BSM physics (supersymmetric or not) has plunged dramatically,

- the *conditional probability* that, if the LHC were to see BSM physics, said new physics would turn out to be supersymmetry, has gone *up*.

That may be of little immediate consolation (and not an obviously-exploitable vehicle for making back some of the money I lost), but it is motivation for my experimental colleagues to spend the next couple of years thinking about how to optimize their searches to tease the maximum amount of information out of the post-upgrade LHC data.
