    The New Higgs Mass From ATLAS: Still Twin Peaks
    By Tommaso Dorigo | June 17th 2014 06:20 AM | 12 comments

    One and a half years ago ATLAS produced measurements of the Higgs boson mass using their selected samples of H->gamma gamma and H->ZZ*->4-lepton decay candidates, based on data collected in 2011 and 2012. That preliminary measurement was rather surprising, as the two independent determinations appeared to disagree with one another at the 2.5-sigma level. The matter even spurred some online debate (see e.g. my blog entry), and a few gambling addicts wagered $100 that those might be two distinct particle states.

    You of course know my opinion on the matter: as CMS has been measuring two very consistent mass values for the two Higgs boson decay modes, with accuracy superior to that of ATLAS and with a combined mass right in the middle of the two ATLAS determinations, the 2.5-sigma effect is certainly just a fluke or some unknown systematic bias affecting the ATLAS results.

    While we wait for more data to decide the issue of whether we are in the presence of two distinct resonances or not - we will be collecting more Higgs bosons in 2015 than we have in our hands so far - it is nice to see a reanalysis of the ATLAS Higgs samples, which is the experiment's final word on the mass of the Higgs boson based on the 7- and 8-TeV proton-proton collisions collected until 2012. It appeared as a preprint two days ago. I should be precise and say that it is not just a reanalysis: ATLAS has added some 33% more statistics to this measurement, coming from the latter part of the 2012 run.

    The bottom line of the complex and very detailed measurements made by ATLAS is summarized graphically in the figure below, which shows 1- and 2-sigma contours in the mass versus signal rate plane. You clearly see that the ATLAS data insist on finding two distinct minima for the two datasets, although they have gotten a bit closer than they used to be.



    The two mass measurements are quoted as follows in the paper:


    You can see that the discrepancy is almost exactly at the 2-sigma level today. One interesting thing to notice is that while in the previous ATLAS result the H->gamma gamma rate measurement was higher than the H->ZZ one, now it is the opposite. The reanalysis in any case finds results compatible with the old ones, if a bit more precise thanks to the increased statistics.
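
    For the record, the back-of-the-envelope computation behind such a statement is simply the difference between the two determinations divided by their uncertainties summed in quadrature, in the approximation of Gaussian, uncorrelated errors. Here is a minimal Python sketch, with placeholder numbers rather than the values quoted in the paper:

        import math

        def discrepancy_sigma(m1, s1, m2, s2):
            # Significance, in standard deviations, of the difference between two
            # independent measurements m1 +/- s1 and m2 +/- s2, assuming Gaussian
            # uncertainties and no correlation between the two.
            return abs(m1 - m2) / math.sqrt(s1**2 + s2**2)

        # Illustrative placeholder values in GeV, not the paper's numbers:
        print(discrepancy_sigma(126.0, 0.5, 124.5, 0.5))   # about 2.1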

    So is it one particle or two particles? Well, of course it is just one particle! But the funny twin peaks of ATLAS will live on for another year or two, I suppose. If the 2015 data insist on showing a difference between the determinations from the two final states, the conclusion we will be forced to draw is that there is a well-hidden, nasty systematic effect that spoils the energy scale of the ATLAS detector (for either photons or leptons) at the level of 1%... But we are not there yet!

    Comments

    I have a question about how high-energy physicists report uncertainties. I notice that they frequently report uncertainties to two significant figures. I acknowledge that HEPs probably know a lot more about statistics than I do, but that goes against what we normally teach our introductory physics students. We usually teach that there should be only one significant digit in the uncertainty of a result (unless the digit is a 1, but let's not worry about the exception for now). For example, above the combined Higgs mass is quoted as 125.36 +/- 0.41 GeV.
    In most introductory science courses, we would tell our students that this should instead be reported as 125.4 +/- 0.4 GeV. The reason is that if one is "uncertain" about the tenths place, then one really has no idea what belongs in the hundredths place. If one doesn't really know what number belongs in the tenths place, then any information one gives about the hundredths place is speculation. Are the introductory science textbooks wrong? Why do HEPs report two digits of uncertainty?
    I apologize that this is slightly off topic, but I have wondered about it for a while. Since this post is at least to some extent about the uncertainties involved, I thought this was as good a time as any to ask.

    Hfarmer
    The reason is that actual HEP measurements are often made to far more sig figs than can be printed, so in publications they round off to one or two sig figs. Yet in reality the measurement could have 10. (Consider the CODATA value for the mass of the electron: http://physics.nist.gov/cgi-bin/cuu/Value?me ) After all, we're talking about "weighing" a subatomic particle through its interactions with better-known subatomic particles. They can measure it with about the same accuracy that we can measure the energy of gamma-ray photons, etc.
    Different disciplines develop different rules for this. In my thesis in astronomy, for instance, I had to compare data gathered with very different sig figs and margins of error. Say, angular measurements down to 0.001 arcseconds, and distances determined only to within +/- 1 kiloparsec, used to compute the distances between objects in the sky in parsecs. I ended up rounding to the measurement with the fewest sig figs across the board.
    Science advances as much by mistakes as by plans.
    I don't really understand that answer - if you are saying that the measurement is so precise that they cannot practically be printed, then the uncertainty would be in a digit that is not printed either. That cannot be the case in the Higgs measurement quoted above. The uncertainty really is in the tenths of GeV place - they didn't just round off to that because they didn't want to print all the digits. If it were the latter, their mass discrepancy would be a lot more than 2 sigma.

    Hfarmer
    Dr. Dorigo's answer below this one really is the best answer... but I will try to explain my point again.
    I don't really understand that answer - if you are saying that the measurement is so precise that they cannot practically be printed, then the uncertainty would be in a digit that is not printed either.

    Not quite. Consider this fact, which makes what you see reported in particle physics measurements different from anything one would encounter in an intro physics course:
    "Calculations based on the Standard Model has been tested in some cases to 1 part in 10 billion. So far all experimental data is consistent with the Standard Model." - http://www.lepp.cornell.edu/~ryd/SM.pdf

    That is 1/10,000,000,000, or 0.0000000001. So suppose the actual measured number is 125.9795829309.
    The intro physics best practice would be to round that off to 125.979582931. (Indeed, if one were to download raw data for processing in one's own project, that data would have more figures than would be printed in a paper.)


    The practice that Tommaso explains so well is to report the confidence intervals. Those tell you how precise the information is.

    Between 125.979582931 and 125.98 +/- 0.42, the second option is much more useful and a more precise statement of what is known and how well it is known.
    Science advances as much by mistakes as by plans.
    dorigo
    Dear KJ,

    two significant digits are in general a good way of reporting an uncertainty. There are exceptions, of course, and there are even arguments against any specific rule. For instance, the numbers 0.099 and 0.011 both have two significant digits, but while reporting them this way gives the former a relative 1% accuracy, it gives the latter a 10% accuracy.

    The Review of Particle Properties has a definite recipe for how to report measurements:

    [...] if the three highest-order digits of the error lie between 100 and 354, we round to two significant digits. If they lie between 355 and 949, we round to one significant digit. Finally, if they lie between 950 and 999, we round up to 1000 and keep two significant digits. In all cases, the central value is given with a precision that matches that of the error. So, for example, the result (coming from an average) 0.827+-0.119 would appear as 0.83+-0.12, while 0.827+-0.367 would turn into 0.8+-0.4.
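
    A minimal Python sketch of that recipe may make it concrete (my own illustration written for this post, not official PDG code, and the function name is made up):

        import math

        def pdg_round(value, error):
            # Round an uncertainty and its central value following the recipe
            # quoted above. "Three highest-order digits of the error":
            # e.g. 0.119 -> 119, 0.367 -> 367.
            exponent = math.floor(math.log10(error))
            three_digits = int(round(error / 10.0**(exponent - 2)))
            if three_digits >= 950:      # 950-999: round up to 1000, keep two digits
                error = 10.0**(exponent + 1)
                exponent += 1
                digits = 2
            elif three_digits >= 355:    # 355-949: one significant digit
                digits = 1
            else:                        # 100-354: two significant digits
                digits = 2
            decimals = -(exponent - (digits - 1))
            return round(value, decimals), round(error, decimals)

        print(pdg_round(0.827, 0.119))   # (0.83, 0.12)
        print(pdg_round(0.827, 0.367))   # (0.8, 0.4)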

    As you can see, it is a rather convoluted rule. I do not subscribe to it, but I believe that any choice has upsides and downsides, so it is better to stick to one rule and forget the issue altogether.

    Note that there are specific use cases in which reporting a larger number of digits is useful. This is the case e.g. when one is reporting an intermediate result, which gets averaged with others or further processed. In that case, rounding off can produce an unwanted loss of precision.

    Now for the Higgs mass: when we report an uncertainty of 0.41 GeV we really mean 0.41 and not 0.4. That is because the uncertainty has been estimated very carefully, and that second digit is useful. Do not be deceived by your line of reasoning ("if the first digit is uncertain, then why bother about the second"). If I write 125.36+-0.41 I mean that there is a 68.3% probability that the true value lies in the interval 124.95 : 125.77 GeV. If I instead report it as 125.4+-0.4 I am implicitly saying that there is a 68.3% probability that the true value is between 125 and 125.8 GeV. (This of course holds only in the Gaussian approximation). It may not look like a big difference but it may indeed change some conclusion in specific cases.
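
    In code, just to make the difference explicit (a minimal sketch using only the numbers quoted above, in the Gaussian approximation):

        # 68.3% intervals implied by the two ways of quoting the same result (GeV):
        for centre, err in [(125.36, 0.41), (125.4, 0.4)]:
            print(f"{centre} +/- {err}  ->  [{centre - err:.2f}, {centre + err:.2f}]")

        # 125.36 +/- 0.41  ->  [124.95, 125.77]
        # 125.4 +/- 0.4  ->  [125.00, 125.80]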

    Cheers,
    T.





    Thanks. Thinking about it in terms of confidence interval helps this make sense.

    Not sure where you got this:
    "ATLAS has added some 33% more statistics in this measurement, coming from the latter part of the 2012 run."
    It is not true.

    dorigo
    Dear Anon,

    the statement is correct. I am referring to this measurement as an update of the one reported in my December 2012 blog post, where ATLAS was reporting a measurement based on a total of 18/fb. Now these have become 25/fb.

    Check your sources,
    T.
    Tommaso,

    The previous commenter has a point: a combined mass measurement using all 2012 statistics was released in March 2013.

    https://atlas.web.cern.ch/Atlas/GROUPS/PHYSICS/CONFNOTES/ATLAS-CONF-2013...

    I guess the most correct comparison would have been with this result, and not the one you discuss in your December 2012 blog.

    dorigo
    Thanks but no, I disagree - I was referring to my old coverage of that result, which was the basis of: 1) several speculations; 2) a 2.5-sigma effect; 3) a $100 bet that I took. I agree that the text above was not clear in this respect...

    Cheers,
    T.
    ATLAS mass uncertainty is +/-0.41 (quoted above)
    CMS uncertainty is +/-0.42 (CMS-PAS-HIG-13-005)

    Your statement "CMS has been measuring two very consistent mass values for the two Higgs boson decay modes, with accuracy superior to that of ATLAS" is wrong.

    Also, it is precision, not accuracy.

    dorigo
    Hi Anon,

    true, it is precision, not accuracy.
    CMS had measured the Higgs mass better than ATLAS until now. The new ATLAS result is better than that of CMS, but that is a new situation... We'll see how long it lasts.

    Cheers,
    T.