    No New Heavy Quarks In ATLAS
    By Tommaso Dorigo | February 17th 2012 07:12 AM | 8 comments

    A nice new search for heavy quarks has been completed by the ATLAS collaboration in 7 TeV proton-proton collision data collected in 2011. The ideas behind the search are instructive, so I will spend some time describing them before discussing the results and their meaning.

    Quarks: properties and decays

    According to the Standard Model of particle physics, matter at the subnuclear level takes two forms: quarks and leptons. What distinguishes these particles is that the former, in addition to an electric and a weak charge, carry units of an additional quantum number called colour charge. Colour comes in three kinds, say red, green, and blue, and each comes in positive and negative units: quarks carry colour, antiquarks carry anti-colour.

    The colour field of which they are a source binds quarks in colour-neutral combinations: a quark-antiquark pair with opposite colour is called a meson, and a combination of three quarks of the three different colour charges is called a baryon. Both mesons and baryons are colour-neutral.

    The colour-neutral combinations of quarks are fundamental for the existence of all the matter structures we know. But quarks also carry additional quantum numbers, which distinguish them and allow us to classify them in three different "generations": the up and down quarks belong to the first generation, the charm and strange quarks to the second, and the top and bottom quarks to the third. The up quark is the least massive, so it is stable, being unable to decay into anything lighter; all the others are instead short-lived.

    The decay of a heavy quark occurs when the particle "emits" a weak vector boson, the W particle. The W is the true object of desire of the alchemists of old, since it is the key to the transmutation of matter. For example, a down quark inside a neutron may turn into an up quark by emitting a W boson: the latter immediately decays into an electron-neutrino pair, and the neutron has turned into a proton, changing an atomic nucleus of atomic number Z into its Z+1 neighbor in Mendeleev's periodic table.

    The above mechanism, more precisely known as "beta decay", is a form of radioactive decay, and it is responsible for turning quarks of any kind into lighter ones. Experiments searching for new heavy quarks (heavier than the massive top quark, because lighter ones would have already been detected by earlier experiments) usually assume that this mechanism continues to work, although interesting searches have also been performed which hypothesize that the transmutation is mediated by Z bosons. ATLAS looks for W-boson-mediated decays in its new search.

    If a quark is heavier than the W boson, its decay to a lighter cousin occurs at blistering speed. Two ingredients determine this speed: the mass difference between the initial particle and its daughters (the larger this is, the faster the decay takes place) and a factor called the "Cabibbo-Kobayashi-Maskawa matrix element". Most searches for new heavy quarks, such as the one we are about to discuss, assume that this factor is not too small; otherwise the heavy quark would spend a significant amount of time inside a heavy colourless hadron before disintegrating into lighter particles.

    But why additional quark families?

    Soon after the fourth quark, the charm, was discovered in 1974, physicists grew convinced that a third family needed to exist. This would accommodate in a simple theoretical setting a funny property of weak interactions observed back in 1964, the phenomenon of CP violation. Kobayashi and Maskawa had in fact already postulated, in 1973, that if quarks were real, then to explain CP violation in weak interactions there needed to exist three generations of quarks, at a minimum.

    At a minimum. So by the time the sixth quark (the top) was discovered, in 1995 by the CDF and DZERO collaborations at the Fermilab Tevatron collider, one could have thought that more were in store. However, by then our enthusiasm had been cooled by the result of a study of the "invisible" Z boson decays: the LEP collider had produced millions of Z bosons in electron-positron collisions at 91 GeV, and only 20% of them had been seen to go "missing", decaying into three different kinds of neutrino-antineutrino pairs. If matter were made up of more than three generations, one would have expected more missing Z events.

    For a while, three kinds of neutrinos meant the end of the hopes for a fourth generation of matter fields, including quarks: one could not introduce an additional quark family in the theory without adding a corresponding lepton-neutrino pair, lest the theory become utterly inconsistent. But then came the discovery by SuperKamiokande, in 1998, that neutrinos do have a tiny but non-zero mass!

    If neutrinos do have mass, then a fourth kind of neutrino might exist, and be more massive than half the Z boson mass: in that case, no contribution would be expected to invisible Z decays. Everything can be consistent again, with four, or five, or who knows how many more generations of matter.

    Things are not so simple, however. Although the Z cannot decay into fermion pairs of mass larger than 45 GeV, the presence of these particles in virtual electroweak processes -ones to which the physics of Z bosons is very sensitive- would modify the observed values of measured quantities that LEP and other experiments have studied with care in the course of the last thirty years. The so-called "oblique" parameters would not be so oblique, the Z would take offence and decay more asymmetrically, and other electroweak processes would misbehave.

    But here we are well into the realm of detailed model predictions. And theoretical models, of course, are called such because they are only a description of reality. What if those models are incorrect? Maybe the Standard Model could accommodate new families of quarks and leptons without requiring such a huge overhaul, after all.

    The ATLAS search

    So it makes sense for the LHC experiments to search for these new heavy particles. ATLAS did it by assuming that the new quarks get pair-produced in QCD interactions (which is the case if they are coloured like the good old ones), and that they decay by charged-current weak interactions, i.e. by emitting a W boson, like all the others.

    The picture suddenly becomes very close to that of top quark pair production and decay: the only difference is that the top quark decays into a W boson and a bottom quark, while the new hypothetical fourth-generation quark is taken to decay into a W boson and any kind of light quark. In fact, not knowing the values of the Cabibbo-Kobayashi-Maskawa mixing matrix elements for these new quarks, one must keep open the possibility that the heavy quark may decay into any of the lighter ones, except the top, which is too massive and would require a separate treatment.

    So ATLAS looks for events with two W bosons and two hadronic jets, without trying to characterize the latter as coming from bottom-quark hadronization (as is instead done in top quark searches). The cleanest final state involves the decay of each W boson into an electron-neutrino or a muon-neutrino pair, because high-energy electrons and muons are rare in hadronic collisions.

    Once one selects events with two high-momentum charged leptons, two jets, and significant missing transverse energy (the latter due to the neutrinos), backgrounds are mostly due to leptonic Z decays and top pair production.

    The Z decay background is reduced by requiring that the missing energy be large when the two leptons have the same flavour (electron-positron or muon-antimuon pairs), and that the lepton pair mass lie outside the 81-101 GeV window.
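
    To make the selection concrete, here is a minimal sketch of this kind of event selection in Python. The event fields and all numerical thresholds except the 81-101 GeV window are illustrative assumptions of mine, not the actual ATLAS cuts.

    # Hypothetical sketch of the dilepton selection described in the text.
    # Field names and pT / MET thresholds are illustrative, not the ATLAS values.
    def passes_selection(event):
        """event: dict with 'leptons' (each a dict with 'pt' in GeV and 'flavour'),
        'jets' (each a dict with 'pt' in GeV), 'met' (GeV) and 'mll' (GeV)."""
        leptons = [l for l in event['leptons'] if l['pt'] > 25.0]   # assumed threshold
        jets    = [j for j in event['jets']    if j['pt'] > 25.0]   # assumed threshold

        # Exactly two high-momentum charged leptons and at least two jets.
        if len(leptons) != 2 or len(jets) < 2:
            return False

        # Significant missing transverse energy (the two neutrinos).
        if event['met'] < 40.0:                                     # assumed threshold
            return False

        # Z -> ll suppression: for same-flavour pairs, require larger MET and a
        # dilepton mass outside the 81-101 GeV window around the Z peak.
        if leptons[0]['flavour'] == leptons[1]['flavour']:
            if event['met'] < 60.0:                                 # assumed threshold
                return False
            if 81.0 < event['mll'] < 101.0:
                return False

        return True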

    To discriminate the heavy quarks from top production, it would be good to reconstruct the mass of the decaying object: the top has a mass of 175 GeV, while here we are looking for heavy quarks with masses above 250 GeV, so constructing an observable quantity connected to the mass of the quark would allow a good separation. But how to do this in dilepton decays, which involve two neutrinos? We do not know where the neutrinos have gone, since we can only measure the combined transverse component of their momentum. A kinematical fit to the quark pair decay is insufficiently constrained, given that the constraints (quark and antiquark mass equality, plus the known mass of the W bosons, plus total transverse momentum equal to zero) are fewer than the unknowns (the mass of the heavy quark, and the neutrino momenta).

    ATLAS notices that the large heavy quark mass causes the W bosons emitted in its decay to have a large boost: this causes the neutrino and the charged lepton produced in the subsequent W boson decay to travel close in angle, as exemplified in the figure on the right (on the abscissa is the angular separation in radians, on the y axis the transverse momentum of the W; the population shown refers to a heavy quark of mass 350 GeV). One can therefore assume that each neutrino has been emitted in the direction of its charged lepton, and this suffices to allow a kinematic fit to the unknown heavy quark mass to converge meaningfully.
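
    Here is a minimal numerical sketch of the collinearity assumption in Python (my own illustration, not the ATLAS code): taking each neutrino to fly along its charged lepton, the two unknown neutrino momenta follow from the two measured components of the missing transverse momentum, and one can then form the mass of each lepton-neutrino-jet system.

    # Sketch of the collinear-neutrino assumption; all momenta and energies in GeV.
    import numpy as np

    def collinear_neutrinos(p3_lep1, p3_lep2, met_x, met_y):
        """Assume each neutrino momentum is a positive multiple of its lepton's
        3-momentum (px, py, pz) and solve the 2x2 system given by the missing
        transverse momentum. Returns two massless neutrino 4-vectors
        (E, px, py, pz), or None if the solution is unphysical."""
        l1, l2 = np.asarray(p3_lep1, float), np.asarray(p3_lep2, float)
        A = np.array([[l1[0], l2[0]],
                      [l1[1], l2[1]]])
        a1, a2 = np.linalg.solve(A, np.array([met_x, met_y]))
        if a1 <= 0 or a2 <= 0:
            return None
        nu1, nu2 = a1 * l1, a2 * l2
        return (np.append(np.linalg.norm(nu1), nu1),
                np.append(np.linalg.norm(nu2), nu2))

    def collinear_mass(p4_lepton, p4_neutrino, p4_jet):
        """Invariant mass of the lepton + neutrino + jet system (a W q candidate)."""
        p = (np.asarray(p4_lepton, float) + np.asarray(p4_neutrino, float)
             + np.asarray(p4_jet, float))
        return np.sqrt(max(p[0]**2 - np.dot(p[1:], p[1:]), 0.0))

    The real kinematic fit also imposes the W mass and the equality of the two heavy quark masses, and must choose the lepton-jet pairing; I omit all of that here.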

    The resulting "collinear mass" is shown for different heavy quark masses in the figure below: a good separation of signal and top background is achieved.

    A selection of the data aimed at maximizing the "significance" (this is the term used in the ATLAS paper) S/sqrt(S+B) is then performed separately for each mass hypothesis. The selection is based on the event kinematics and is derived from a comparison of simulated signal and background events. I will offer some criticism of this criterion at the end...

    Finally, a likelihood fit is performed on the collinear mass distribution for each mass hypothesis. This allows one to set a limit on the number of signal events present in the data, using the CLs criterion, basically a prescription for computing the probability of the observed data given a hypothesis on the amount of signal present in it. Below you can see the mass distribution in the case of the search for a 400 GeV quark.
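
    For readers unfamiliar with the CLs prescription, the toy sketch below shows its logic for a plain counting experiment; the ATLAS limit actually comes from a likelihood fit to the collinear mass shape, with systematic uncertainties included, so this only conveys the idea, and the input numbers are made up.

    # Toy CLs for a counting experiment: a signal yield s is excluded at 95% CL
    # when CLs = CL(s+b) / CL(b) falls below 0.05.
    from scipy.stats import poisson

    def cls(n_obs, b, s):
        cl_sb = poisson.cdf(n_obs, b + s)   # P(N <= n_obs | signal + background)
        cl_b  = poisson.cdf(n_obs, b)       # P(N <= n_obs | background only)
        return cl_sb / cl_b

    def signal_upper_limit(n_obs, b, alpha=0.05, step=0.1):
        """Smallest signal yield for which CLs drops below alpha."""
        s = 0.0
        while cls(n_obs, b, s) > alpha:
            s += step
        return s

    # Made-up example: 33 events observed with 23 expected from background.
    print(signal_upper_limit(33, 23.0))   # 95% CL upper limit on the signal yield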


    The set of upper limits on the number of signal events is translated into a corresponding set of upper limits on the signal cross section, and this in turn allows one to obtain a lower limit on the mass of the hypothetical quarks by producing the graph below, which shows the cross section upper limit versus mass (in purple) along with the theoretical prediction for the signal cross section, which is also a (of course decreasing) function of the quark mass. The observed upper limit is complemented by a "Brazil band" describing the range of limits that the search methodology was expected to return.
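
    The last step, turning the crossing of the two curves into a mass limit, is simple enough to sketch; all numbers below are invented for illustration and are not the ATLAS values.

    # Invented cross sections (pb) on a hypothetical mass grid (GeV): masses where
    # the predicted cross section exceeds the observed upper limit are excluded.
    import numpy as np

    masses       = np.array([250., 300., 350., 400., 450.])
    sigma_limit  = np.array([ 3.0,  2.0,  1.5,  1.2,  1.0])   # observed 95% CL upper limits
    sigma_theory = np.array([20.0,  6.0,  1.6,  0.5,  0.2])   # theoretical prediction

    excluded = masses[sigma_theory > sigma_limit]
    print(f"Excluded quark masses (on this grid): up to {excluded.max():.0f} GeV")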



    The point where the theoretical curve meets the upper limit is found for a quark mass of 350 GeV: ATLAS therefore rules out lighter new quarks, at 95% confidence level, under the assumption of 100% decay of the heavy quark into Wq final states.

    The above results appear rather mysterious to me. If I look at the table of expected background events and observed data events for the various selections (see right), I see that ATLAS observes more events than expected, in all cases. The departures are mild, of the order of one to 1.5 standard deviations, but they are all in the same direction; and yet the observed limit is below the expected one for quark masses of 300 and 350 GeV. How come?

    The fact that one collectively sees more events than the background prediction does not mean that a likelihood fit which also uses the shapes of the distributions and the background uncertainties must perforce find a positive signal; but it is highly peculiar to see the opposite behaviour, that is, a better result than expected in the presence of an excess of data. At 350 GeV, ATLAS expects 148 +22 -18 events and sees 180. The relevant histogram is the one on the left.

    I might be wrong (and I have only read the ATLAS paper very quickly, so I might be overlooking something), but it seems strange to me that ATLAS obtains a better-than-expected limit by fitting the above distribution. I hope my ATLAS colleagues will correct me and clarify the observed behaviour.


    The sting is at the end: a mild criticism of the ATLAS analysis

    Besides the request for clarification just discussed, I promised above I would have something to say about the way the ATLAS search is optimized, so let me do that here. In general, the function Q=S/sqrt(S+B) is a good indicator of how much a signal will "stand out" in a given set of data, if one performs a counting experiment where one's background prediction is B. The reason is that upon observing a total number of events N=B+S, the standard deviation on N -a Poisson random variable- is sqrt(N)=sqrt(B+S), so the number of standard deviations separating the observation N=B+S from the prediction B is indeed Q=S/sqrt(B+S).

    It is well known that the above reasoning fails when N is small, because the Gaussian approximation implicit in using the standard deviation of the Poisson distribution rapidly becomes invalid. Leaving aside statistical jargon, what happens with small event counts is that there is an added benefit, not captured by the above calculation, in making B as small as possible. Let us take the number of background and signal events in the mass plot above as the starting point for a purely didactic example.

    In the 280-480 GeV range, the one most populated by signal (and thus the region to which the ATLAS likelihood fit is most sensitive), I count about 10 expected signal events and 23 expected background events. It is fair to say that the likelihood results will be most affected by the amount of signal and background in that region, for the MQ=400 GeV hypothesis.

    S=10 and B=23 corresponds to a Q-value of S/sqrt(S+B)=1.74. Let us take as an alternative working point one obtained with a purely hypothetical tighter selection, yielding (say) S=5 and B=4 events in the same mass region: the higher signal-to-noise ratio would here be paid for with a worse value of Q=5/sqrt(9)=1.67. So ATLAS, maximizing Q in its "optimization", would choose to stick with the baseline selection of S=10 vs B=23, even if some different choice of cuts were to offer the chance of settling on the tighter, higher-S/N working point.

    But if I naively compute the probability that 33 or more events are observed when 23 are predicted, I get a p-value of 3%; while if I compute the probability that 9 or more events are observed when 4 are expected from backgrounds, the p-value is 2%: in other words, the lower-Q working point of S=5, B=4 should be preferred to the higher-Q one of S=10, B=23 that ATLAS chose in its "optimization", because the smaller S=5 event signal would "stand out" more: the background hypothesis would be more disfavoured by the data. This is just an example to stress that for small event counts, what matters is the S/N ratio rather than the Q-value.
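
    The comparison is easy to reproduce. The short Python snippet below computes the Q-value and the exact Poisson probability of observing at least S+B events when B are expected, for the two working points just discussed; the printed p-values round to the 3% and 2% quoted above.

    # Q-value versus exact Poisson p-value for the two didactic working points.
    from math import sqrt
    from scipy.stats import poisson

    for S, B in [(10, 23), (5, 4)]:
        q = S / sqrt(S + B)
        p = poisson.sf(S + B - 1, B)    # P(N >= S+B | Poisson mean B)
        print(f"S={S:2d}, B={B:2d}: Q = {q:.2f}, p-value = {p:.3f}")

    # Prints approximately:
    #   S=10, B=23: Q = 1.74, p-value = 0.029
    #   S= 5, B= 4: Q = 1.67, p-value = 0.021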

    The literature is full of possible improvements to the rather rough Q-value formula employed by ATLAS, which address the problem I highlighted above; nevertheless, all these "approximate" pseudo-significances represent a meaningful way to optimize one's analysis only when it is not possible to carry out a full pseudo-analysis accounting for the fitting method, the related systematics, and all the other nuisances that make one end result better than another. What I wish to stress is that the word "optimization" should be used with parsimony and caution in similar situations!

    Comments

    Dear Tommaso
    not directly related to this post, but in the line of a comparison of statistical results, I have a simple question about the very recent ATLAS and CMS papers regarding the Higgs search in the four-lepton decay channel. In both papers the background distributions are reported, in Fig. 1 of the CMS paper and Fig. 4 of the ATLAS paper; comparing the distributions done with 10 GeV bins, it turns out that in the low mass region the ATLAS background is a factor of two less than the CMS background. The maximum signal "clustering" is similar: 3 events around 124 GeV for ATLAS, 3 events at 119 GeV for CMS. Looking at the local p-values, CMS reports a 2.5 sigma local significance at 119 GeV, and ATLAS 2.1 sigma at 124 GeV. This seems incompatible with the fact that CMS has more background than ATLAS. Is there any explanation for this apparent discrepancy? Thanks.

    dorigo
    Dear Anon,

    sorry for not noticing your question earlier. You should be careful of the binning, which is wider in ATLAS than in CMS in those plots. The background levels are in fact very similar if you correctly take the binning into account.

    Best,
    T.
    Dear Tommaso
    thank you for your answer, but by comparing the histograms with similar binning (10 GeV) it appears that ATLAS has almost a factor 2 less background.
    Regards

    dorigo
    What histograms do you refer to: the low-mass zoomed-in ones, or the full mass range ones? The latter may be done differently. The low-mass ones have much, much narrower binning than 10 GeV, fortunately! (Our resolution at low mass for the H->ZZ is a GeV or so.)

    I suppose you refer to the full mass range ones. There, CMS has a peak in their background distribution at about 200 GeV, where the rate is of the order of 0.6 events per GeV.
    ATLAS has a peak at 200 GeV too, with a rate of 0.54 events per GeV. I do not know what you are talking about; can you point to the plots?

    In any case, please note that the leptons in these H->ZZ candidates pass different selection criteria in ATLAS and CMS. If CMS has larger acceptance, for instance, it will also have larger background rates. What should be compared is the expected limit in the two searches, and there CMS is actually doing slightly better.

    Thanks,
    T.
    Dear Tommaso
    I refer to the two histograms in Fig. 1 of the CMS paper arXiv:1202.1997 and to the two histograms in Fig. 4 of the ATLAS paper arXiv:1202.1415 (also reported in Fig. 4 of the note ATLAS-CONF-2011-162). The difference is in the low mass region, between 100 and 150 GeV. The full range histograms are both binned at 10 GeV; the CMS histogram in Fig. 1a of arXiv:1202.1997 shows the two background contributions ZZ (pink) and Z+X (green), and their sum reaches a maximum value in the low mass region of about 2.5 ev/10 GeV. By contrast, the maximum value in the low mass region of the ATLAS histogram in Fig. 4a is about 1.4 ev/10 GeV.
    As you point out, the zoom into the low mass region is done with different bins, 2 GeV in the case of CMS and 5 GeV for ATLAS; summing all the bin contents from 100 to 150 GeV in the case of CMS, the integrated background in this range is 11.47 events, while doing the same exercise on the ATLAS zoomed histogram one gets 5.46 events in the same interval. This is the factor of two difference I referred to.
    The maximum clustering of events in the low mass region is comparable for the two experiments, 3 CMS events at 119 GeV and 3 ATLAS events at 124 GeV. The conversion of these numbers into p-value and significance is my question: CMS declares a local significance at 119 GeV of 2.5 sigma, ATLAS a local significance of 2.1 sigma at 124 GeV; but given the fact that CMS in this region has a factor two more background it should get a lower significance than ATLAS, not higher.
    Thanks

    dorigo
    Hi Anon,

    thanks for your message. Now I see exactly what you are referring to.

    I should say first off that one cannot judge significances by eye, especially in the presence of background uncertainties (which are displayed in the ATLAS plot but not in the CMS one) and of different signal resolutions. But I will try to do my best to provide a qualitative answer to your puzzle.

    First of all, the CMS plot has the background histograms stacked. I believe you are counting the green one twice, because you consider them independently, while CMS plots the pink one "on top of" the green one. So you are overestimating the CMS background level: it is at most 1.85 events in the 125-145 GeV region, whereas ATLAS has indeed a lower value, about 1.3 events. Not a factor of 2 higher, but 42% higher.

    With the same acceptance, the most sensitive experiment should be indeed ATLAS, which has smaller background as you reckon. However, it turns out that ATLAS is the weaker one, because it has lower acceptance for ZZ events, due to the different identification cuts on leptons of lower energy (I believe the largest difference is due to the superior muon ID capabilities of CMS).

    But of course, when we discuss significances, it is not the acceptance which counts, so let us turn to the experimentally observed data. The other ingredient, as I've mentioned above, is the resolution. The three CMS events cluster tightly at 119 GeV, and their individual mass resolutions are very good (they are all four-muon events) -the resolutions are displayed at the bottom of the zoom-in figure you point at. This causes their contribution to build up on top of a background of merely 0.6-0.7 events. The three ATLAS events at 124 GeV, on the other hand, have worse resolution, as one may see by looking at the expected signal shapes overlaid. So they are still 3 events on top of 0.7 events, as the relevant zoom-in figure shows.

    I think background uncertainties and resolutions are the two factors which cannot be estimated by eye and which cause a slightly different perception with respect to the local significances quoted by the two experiments. For the rest, I should mention that the two collaborations use exactly the same prescription to compute their limits and significance (I mean based on the same software).

    Cheers,
    T.
    I've not read the paper yet (and I am a member of the CMS collaboration, so I cannot be accused of having a bias in favor of ATLAS) but I'm not surprised by the fact that despite an excess of data they find a stricter limit than expected: they are certainly fitting the shape of the final discriminant variable while keeping the background floating within its uncertainty (which, I argue, is what should normally be done).

    Before hitting the "submit" button I just jumped to the Results section to check, and indeed:

    "The rates of background and signal events are fit ted simultaneously."

    So, no surprise at all: if you look at the plot that you chose to post, you can see by eye that the shape of this distribution, in data, is quite "background-like": it even has a deficit in the signal region with respect to the sidebands around the expected peak.
    Something that we usually do in CMS, at least in the Top group (other sub-communities within our experiment may have different habits), is to normalize each process to the prediction from the fit itself. This prevents exactly this kind of confusion.

    dorigo
    Hi Andrea,

    I see both excesses and deficits in the signal region (say 200-450 GeV). I agree one cannot judge by eye, but they overall have a 1.5 sigma excess of event counts, and in these conditions fitting no signal continues to look fishy to me.

    Best,
    T.