    Naturalness Explained With The Roulette
    By Tommaso Dorigo | October 9th 2012 06:00 AM | 12 comments

    Five years ago I was fascinated by an analogy used by my friend Michelangelo Mangano to explain the problem of naturalness, a crucial issue in fundamental physics, and maybe the single biggest hint we have that new physics beyond the standard model of particle physics should exist, and not lie too far from our current experimental reach.

    Today I am preparing a talk for "ComunicareFisica 2012", a conference taking place in Torino this week, on the powers and limits of analogy in the explanation of fundamental physics. So I found it natural to revisit Mangano's analogy, and in doing so I realized that it could be significantly improved. Before I discuss the improvement, let me state briefly what the naturalness problem is.

    The mass of the Higgs boson is known today: 125 GeV. Even before we measured it, however, we knew it had to be in that ballpark - indirect evidence was already around. From a theoretical standpoint, however, it is hard to understand why it is not orders and orders of magnitude larger. There are, in fact, quantum corrections to the Higgs boson mass that arise from diagrams involving loops of virtual particles. It is as if the Higgs "dresses up" by continuously emitting and reabsorbing virtual particles, and this activity affects its mass.

    Now, the magnitude of these corrections (there are a dozen of them, both positive and negative) must be enormous if we allow the virtual particles in the quantum loops to have any value of momentum, up to the Planck mass scale (an energy scale of quadrillions of TeV). And yet their combined effect magically cancels to very high accuracy, leaving the Higgs mass at a "mere" 125 GeV. This fact has been taken to imply that there must exist a cut-off in the available range of momenta of the virtual particles in the quantum loops. The logical explanation for the physical origin of such a maximum value is that at those very high energies some unknown new physics turns on.

    Mangano thus cooked up the following analogy:

    "Imagine to ask ten friends to give you each a irrational real number in the range ]-1,1[. You take the ten numbers and compute their sum, discovering that the result differs from zero only at the thirtifourth decimal place (0.00000....00001).
    What do you conclude ? Are you willing to believe it a chance, or you take it as evidence that your friends conspired somehow for that result to come up ?"
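
    (For the numerically inclined, here is a minimal sketch - in plain C++, with a flat distribution in ]-1,1[ as my own stand-in for how the friends pick their numbers - of just how unlikely such a cancellation is: play the game over and over and record the best cancellation ever seen. After ten million games the smallest |sum| is typically of order 10^-7, nowhere near 10^-34.)

    #include <cstdio>
    #include <cmath>
    #include <random>

    int main() {
        std::mt19937_64 rng(12345);                           // fixed seed for reproducibility
        std::uniform_real_distribution<double> u(-1.0, 1.0);  // one friend's number in ]-1,1[
        const long trials = 10000000;                         // ten million games
        double best = 1e9;                                    // smallest |sum| seen so far
        for (long t = 0; t < trials; ++t) {
            double sum = 0.0;
            for (int i = 0; i < 10; ++i) sum += u(rng);       // ten friends, ten numbers
            if (std::fabs(sum) < best) best = std::fabs(sum);
        }
        std::printf("smallest |sum| in %ld games: %g\n", trials, best);
        return 0;
    }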

    If we examine this analogy we notice that it is not clear what its deductive power is. What is there in the target (the ten friends and the summing game) that is known, and which is unknown in the source (the quantum corrections)? The tiny probability that the sum of large numbers gives a small result is not a concept requiring an analogy to be absorbed. The listener is certainly capable of considering the game of ten numbers, but that system contains nothing that makes the quantum corrections to the Higgs mass any easier to comprehend. Moreover, taking a ]-1,1[ interval may be elegant, but it takes us farther from the idea of the enormity of the cut-off at the Planck mass.

    Having identified the shortcomings of Mangano's analogy, we can improve it by constructing a target which has as a parameter the size of the ten numbers: from the smallness of their sum we can then deduce the size of that parameter, and the need for a small cut-off. Here is my bid, then:

    "Imagine that Bob, a friend of yours, plays no-limits roulette, betting sums on red ten times. Each amount is determined at random, but all are smaller of a pre-determined maximum number M; Bob does not tell you what M is though.
    After the ten bets Bob has one less dollar than he originally had in his pocket. What can we deduce on the maximum M ? May we believe it was M= a quadrillion dollars ? Of course not! We are led to believe that M was equal to just a few dollars."
    [If you need a precise stipulation: The ten amounts can be thought to be given by a call to the root function gRandom->Uniform(0,1): that is each of them is x_i = y_i*M, with y_i a random number chosen with a flat probability distribution between zero and one.]
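
    (To see the inference at work numerically, here is a minimal sketch in plain C++ rather than ROOT, with red treated as a fair coin flip for simplicity: it estimates, for a few choices of M, the probability that ten such bets leave Bob within one dollar of where he started. That probability shrinks roughly like 1/M, which is why a one-dollar loss points to a small M rather than a quadrillion-dollar one.)

    #include <cstdio>
    #include <cmath>
    #include <random>

    // Fraction of ten-bet games ending within one dollar of the starting stack,
    // for a given maximum bet M. Red is treated as a fair 50/50 outcome here.
    double closeCallFraction(double M, long games, std::mt19937_64& rng) {
        std::uniform_real_distribution<double> y(0.0, 1.0);  // plays the role of gRandom->Uniform(0,1)
        std::bernoulli_distribution red(0.5);                // did red come up?
        long close = 0;
        for (long g = 0; g < games; ++g) {
            double net = 0.0;
            for (int i = 0; i < 10; ++i) {
                double bet = y(rng) * M;                     // x_i = y_i * M
                net += red(rng) ? bet : -bet;                // win or lose the amount bet
            }
            if (std::fabs(net) <= 1.0) ++close;
        }
        return double(close) / double(games);
    }

    int main() {
        std::mt19937_64 rng(42);
        const long games = 10000000;                         // ten million games per choice of M
        const double Ms[] = {10.0, 1000.0, 100000.0};
        for (double M : Ms)
            std::printf("M = %9.0f dollars: P(|net| <= 1$) ~ %g\n", M, closeCallFraction(M, games, rng));
        return 0;
    }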

    This analogy is better than the original one because it allows us to understand more quickly how theoretical physicists deduce that there must exist a cut-off - new physics at an energy scale not so much higher than the mass of the Higgs boson itself. The focus of the analogy here is not so much on the paradox of large numbers canceling each other - which is easy to explain even just considering the source - but on the inference we can draw about the magnitude of M.

    Comments

    "Bill the billionaire" could be better than "Bob, a friend of yours". It would communicate the prior expectation that the number is large.

    Bill the billionaire plays no limits roulette. He takes all the chips he had in his pocket, puts them on the table and plays ten spins of the wheel, pushing a random part of his stack onto red every time. After the ten bets, Bill has a single dollar less than he started with. Did he start with a million? Probably not. Then why did he have just a few dollars in his pocket and why does he bother playing for peanuts?

    Wouldn't this naturalness problem occur if the existing theory were just a numerical approximation to a theory which is different mathematically (but not necessarily in its physics)? As if we "blindly" computed the sum of (1/k^2, k=1..inf): we would find that the sum is for some "magic" reason pi^2/6, and we wouldn't understand why unless we knew, for example, how to rewrite this series as the Fourier expansion of x^2.
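
    (If one wants to see that "blindness" numerically, here is a minimal sketch in plain C++: the partial sums creep toward 1.6449..., and nothing in the arithmetic hints at why that number should be pi^2/6.)

    #include <cstdio>
    #include <cmath>

    int main() {
        double sum = 0.0;
        for (long k = 1; k <= 100000000; ++k)                // one hundred million terms
            sum += 1.0 / (double(k) * double(k));
        const double pi = std::acos(-1.0);
        std::printf("sum of 1/k^2 (1e8 terms): %.10f\n", sum);
        std::printf("pi^2/6                  : %.10f\n", pi * pi / 6.0);
        return 0;
    }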

    The fact that we would consider the described outcome of the roulette game as "not natural" depends, of course, on the fact that we know how a roulette table works. The trouble in physics is that we *don't* know how nature's roulette table works. That is, we don't know according to what underlying principles nature chooses its parameters. We don't even know what the "fundamental" parameters are.

    In my opinion, splitting a physical observable like the Higgs mass into tree level and loop level contributions may be completely artificial. It is based on the fact that we come from a classical world and therefore like to describe nature in terms of a classical theory (= a Lagrangian) plus some quantum corrections. We ascribe a fundamental meaning to the parameters appearing in the Lagrangian, but why shouldn't the truly fundamental parameter instead be the physical Higgs mass (= the location of the pole in its propagator)?

    Maybe the Lagrangian formalism, which leads to the apparently miraculous cancellations between the tree level Higgs mass and the higher-order corrections, is simply not the best way of describing nature. Not that I have a better way readily available ...

    Johannes Koelman
    "Imagine that Bob, a friend of yours, plays no-limits roulette, betting sums on red ten times. Each amount is determined at random, but all are smaller of a pre-determined maximum number M; Bob does not tell you what M is though. 
    After the ten bets Bob has one less dollar than he originally had in his pocket. What can we deduce on the maximum M ? May we believe it was M= a quadrillion dollars ? Of course not! We are led to believe that M was equal to just a few dollars."




    This analogy makes the critical assumption that the amounts bet are random. In the real case of the Higgs, this translates into the assumption that the various diagram contributions to the Higgs mass are distributed randomly - an assumption for which I see no justification whatsoever.

    Rather than postulating "new physics" (translating into a cut-off) at energies not far above the Higgs mass, I would conclude that our current theories fail to identify a profound correlation between the diagrammatic contributions. 

    In other words, I would be tempted to construct an analogy based on a series that 'magically' sums to a small figure. Something like: Sum Sin(k), k=1..N for N>>1.
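
    (A quick numerical check of that candidate, as a minimal sketch in plain C++: the partial sums of sin(k) never exceed 1/sin(1/2), about 2.09, in absolute value, however large N gets - the "magic" here being the closed form sin(N/2)*sin((N+1)/2)/sin(1/2).)

    #include <cstdio>
    #include <cmath>

    int main() {
        const long N = 10000000;                             // N >> 1
        double sum = 0.0, maxAbs = 0.0;
        for (long k = 1; k <= N; ++k) {
            sum += std::sin(double(k));
            maxAbs = std::fmax(maxAbs, std::fabs(sum));      // largest |partial sum| so far
        }
        // closed form: sum_{k=1}^{N} sin(k) = sin(N/2) * sin((N+1)/2) / sin(1/2)
        const double closedForm = std::sin(N / 2.0) * std::sin((N + 1) / 2.0) / std::sin(0.5);
        std::printf("partial sum at N = %ld: %.6f (closed form %.6f)\n", N, sum, closedForm);
        std::printf("largest |partial sum| seen: %.6f (bound 1/sin(1/2) = %.6f)\n",
                    maxAbs, 1.0 / std::sin(0.5));
        return 0;
    }
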
    dorigo
    Hi Johannes,

    but these different contributions depend on different (free) parameters of the model, so their relative "randomness" is a good basis for the analogy.

    Also, I would be very, very wary of using a sine series to explain naturalness. How many people can understand that?

    Cheers,
    T.
    Johannes Koelman
    Well, you have a point that correlations between free parameters in our current model would by definition point to "new physics" (defined as physics not contained in the present model). But why would such new physics manifest itself as a cut-off?
     
    dorigo
    The cut-off is an approximation. It's just like saying that you're allowed to work with Newtonian mechanics to compute the motion of bodies as long as v < εc (say), with ε much smaller than 1. If new physics kicks in, it changes the rules of the game, and one is not allowed to integrate those quantum loops all the way up to any conceivable energy. As things stand, the cut-off is the Planck mass - where we know that the rules of the game must change.

    Cheers,
    T.
    Forget about supersymmetry for a moment. Given the existing bosons and fermions, how many additional species of bosons (or fermions) are needed for the corrections to the Higgs mass to cancel? Assume that these bosons or fermions can have only known quantum numbers.

    dorigo
    Not a well-defined question, Anonyrat. No new bosons are needed to cancel the contributions to the Higgs mass; only, if the cancellation occurs with no new physics entering at some low mass scale, this equates to a very high level of fine-tuning - sort of what was explained in the post. The more additional free parameters you add, the less fine-tuning you need to invoke. SUSY is appealing because it radically solves the issue, with contributions canceling one by one thanks to the symmetry between fermions and bosons.

    Cheers,
    T.
    Second try. Supersymmetry solves the problem by having for each boson a fermion with the right quantum numbers so that each boson loop is cancelled out by a fermion loop.

    The Standard Model does not have these cancellations because the bosons don't match up with the fermions. But instead of doubling the number of particles like supersymmetry does, what is the minimal additional content that I need to add to the standard model so that boson and fermion loops effectively cancel out (at high energies where they are all essentially massless)?

    dorigo
    I don't think anybody knows the answer to this question, Anonyrat, because in principle the answer could even be zero, i.e. there might be no need to add anything: the cancellation already occurs, the Higgs is light, there is no new physics up to the Planck mass, and all is good. The fact is that this cancellation appears accidental, and not "order by order, particle by particle" as in SUSY.

    Cheers,
    T.
    Most phenomena in everyday life that we know about have cut-off scales. The cup on your desk is a cup only up to a few joules of energy; even half a meter of falling can break it into pieces. The pieces break into smaller crumbs at higher energies, and so on. Nevertheless, you use the cup with forces just a bit below the limit. What I want to highlight is that most objects have only a small energy range in which they maintain their features - only a few orders of magnitude between the energy required to create them and the energy that could destroy them. We got used to this, and we use this belief as a prior when constructing new theories for new phenomena. The Higgs is like that: it is very hard to believe that there is nothing new up to 10^19 GeV, because the small-range prior suppresses the posterior probability of any large-range theory.

    Another prior like this is that we are used to engineering things with large safety margins, so the built structure tolerates a few percent deviation in the parameters of the building blocks without the need for re-planning. This feature is totally missing in the hierarchy and naturalness problems, as the calculations require very high precision.

    My prior is that strong correlations in a theory usually mean that the theory is missing a degree of freedom: we are trying to describe something in 3D when it is actually a surface.