    DZERO Refutes New CDF Dijet Resonance!
    By Tommaso Dorigo | June 10th 2011 09:33 AM

    And here they come. Much awaited (and anticipated), today the DZERO collaboration presents its findings in the search for the same dijet resonance which made it to the New York Times as well as to several physics blogs around the web, and which brought frantic theorists back to the blackboard to try and figure out a model that could accommodate the cumbersome new find.

    You need to have been sleeping for the last four months in order to have missed the news about the CDF signal (I don't even link my own posts on this except the last one -please dig in the last months of stuff if you feel the urge to). In a nutshell, however, what CDF sees is an enhancement in the invariant mass distribution of pairs of jets produced together with a leptonically-decaying W boson in the 2-TeV proton-antiproton collisions at the Tevatron. This enhancement fits very well the hypothesis that the W boson is produced, with a cross section of about 4 picobarns, together with a 145-GeV resonance. CDF quotes a significance of over four standard deviations from the analysis of a dataset corresponding to an integrated luminosity of 7.3 inverse femtobarns -all the W bosons plus jets they could collect from 500 trillion collisions. The signal can be seen in the background-subtracted plot shown on the right panel of the figure below: it is represented by the blue Gaussian function.
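
    (For readers who want to see concretely what the "invariant mass of a pair of jets" is: below is a minimal, purely illustrative sketch in Python -hypothetical four-momenta, not CDF code- showing how the dijet mass is computed from the summed four-vectors of the two jets.)

    import math

    def dijet_mass(jet1, jet2):
        """Invariant mass (GeV) of a jet pair, from (E, px, py, pz) four-vectors in GeV."""
        E  = jet1[0] + jet2[0]
        px = jet1[1] + jet2[1]
        py = jet1[2] + jet2[2]
        pz = jet1[3] + jet2[3]
        m2 = E**2 - (px**2 + py**2 + pz**2)
        return math.sqrt(max(m2, 0.0))   # guard against tiny negative values from rounding

    # Two made-up jets, roughly back-to-back in the transverse plane:
    j1 = (82.0,  75.0,  20.0, 25.0)
    j2 = (90.0, -70.0, -30.0, 40.0)
    print(dijet_mass(j1, j2))   # ~159 GeV for these made-up numbers; the CDF excess sits near 145 GeV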


    Now, it is a general rule of thumb that under normal circumstances two eyes must see the same thing if they belong to the same head. CDF and DZERO are expected to see things similarly, because the two detectors are not too different overall, at least in their sensitivity to high-energy processes. So if the CDF signal were true, DZERO should also see it.

    Unfortunately, DZERO had better things to do than study W+dijet events in recent times, and their latest analysis looking for a diboson decay process in the mixed leptonic-hadronic final state (say, for instance, WW-> lv jj) had stopped looking after studying little more than one inverse femtobarn of data. They saw the dibosons, and started doing more important things. But then the CDF signal came, and DZERO found the time and energy to redo the analysis with four times more data.

    The results are as follows: DZERO sees no enhancement in the 145-GeV region. They find that the invariant mass distribution of jet pairs collected together with leptonically-decaying W bosons is well described by the simple sum of well-known Standard Model processes, and they need no Gaussian bump anywhere in the spectrum. You can see the dijet mass spectrum found by DZERO in the figure below.



    From the plot you may observe that the background composition of the Wjj data in the DZERO analysis is quite similar to the one predicted by CDF. This is not a surprise: once you select such a final state, most of the time you end up with W production together with two jets emitted by the initial state of the collision (the blue histogram); sometimes you have top pair decays where you miss one additional lepton or a few jets (the purple histogram); and a few other times you mistake for a W what is in fact just hadronic jet activity somehow misread by the detector as a lepton-neutrino signal (in grey). The red histogram is instead the primary motivation of such searches -the diboson signal whereby one W decays into leptons and a further boson -W or Z- decays to a pair of jets, thus producing a bump in the 80-90 GeV ballpark.

    As you note, the data follow the predicted background very well everywhere. There is no hint of a deviation, while if the CDF signal were present in the DZERO data it would certainly show up. This is evidenced by the figure below, which shows a background-subtracted distribution (all backgrounds except the diboson WW/WZ processes have been subtracted from the observed experimental data, and this results in the black points with error bars). The background uncertainty is here plotted as an empty blue band sitting at zero, and the signal corresponding to the CDF observation is the hatched black histogram.
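
    (To make the "background-subtracted" wording concrete: below is a minimal sketch with toy numbers -not the DZERO data- of what such a plot amounts to. All non-diboson backgrounds are subtracted bin by bin from the observed counts, and the statistical uncertainty of the data is carried over to the points.)

    import numpy as np

    # Toy per-bin counts for a dijet-mass histogram (purely illustrative numbers):
    data     = np.array([520., 610., 580., 430., 300., 190.])
    wjets    = np.array([400., 480., 470., 360., 255., 165.])   # W + jets
    top      = np.array([ 40.,  50.,  45.,  30.,  20.,  12.])   # top pairs
    multijet = np.array([ 55.,  50.,  35.,  25.,  15.,   8.])   # misidentified QCD

    # Subtract everything except the diboson (WW/WZ) contribution:
    subtracted = data - (wjets + top + multijet)
    stat_err   = np.sqrt(data)   # Poisson error on the observed counts

    for i, (val, err) in enumerate(zip(subtracted, stat_err)):
        print(f"bin {i}: {val:+6.1f} +- {err:4.1f}")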



    From the observed agreement of data and background, and from a calculation of the predicted lineshape of the invariant mass that a narrow resonant state would produce after detector reconstruction, DZERO proceeds to extract a limit on the cross section that the tentative new particle could have, given the observed data. This results in the figure shown below.



    The black curve is the upper limit found by DZERO on the rate of production of a WX state decaying into W+dijets, as a function of the tentative mass of the X state. The hatched line shows the median expected limit -the median of upper limits that DZERO would expect to set, if the experiment were repeated many times with as many equal-sized, independent datasets all homogeneously collected in the same experimental conditions.
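
    (If "median expected limit" sounds obscure, here is a minimal sketch of the idea with a single-bin counting experiment and toy numbers -a drastic simplification of the DZERO machinery, which handles the full mass spectrum and its systematics: throw many background-only pseudo-experiments, compute an upper limit on the signal cross section for each, and take the median and the 68% band of those limits.)

    import numpy as np
    from scipy.stats import poisson
    from scipy.optimize import brentq

    rng = np.random.default_rng(42)

    # Toy single-bin counting experiment (illustrative numbers only):
    b    = 50.0      # expected background events in the signal window
    eps  = 0.002     # signal efficiency times acceptance (made up)
    lumi = 4300.0    # integrated luminosity in pb^-1

    def s_upper_limit(n_obs, b, cl=0.95):
        """Classical upper limit on the signal mean s, given n_obs events and background b."""
        f = lambda s: poisson.cdf(n_obs, s + b) - (1.0 - cl)
        if f(0.0) <= 0.0:          # strong downward fluctuation: clamp the limit at zero signal
            return 0.0             # (real analyses use more careful prescriptions here)
        return brentq(f, 0.0, 500.0)

    # Background-only pseudo-experiments, each giving a cross-section limit in pb:
    limits = [s_upper_limit(rng.poisson(b), b) / (eps * lumi) for _ in range(2000)]

    print(f"median expected limit: {np.median(limits):.2f} pb")
    print(f"68% band: {np.percentile(limits, 16):.2f} - {np.percentile(limits, 84):.2f} pb")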

    You can see that a 4-picobarn resonance would never have escaped the analysis: the limit is half that value, and the green and yellow bands show that only in very unlucky instances would DZERO fail to see it if it had a 3-picobarn cross section. However, much more interesting is the following figure, which is a bit more complicated to explain. I'll do my best.



    The vertical axis this time is not a physical quantity but just a statistical estimator, called "Log-Likelihood Ratio" (LLR). This is a number you can extract from the mass distributions by fitting either for background alone (hypothesis H_0, described by the black dashed line) or for background plus a Gaussian signal (the alternative hypothesis H_1, described by the red dashed line). Because the two hypotheses H_0 and H_1 differ, the expected LLR differs between them, and increasingly so as the mass of the hypothetical X particle increases (you can understand this behaviour from the fact that backgrounds die out as the dijet mass increases, so a given signal would be easier to spot there).
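
    (For the technically inclined, here is a minimal sketch of how such an LLR can be built from a binned mass spectrum. Everything below is a toy -made-up background shape, a made-up Gaussian signal template, no systematic uncertainties- and not the DZERO likelihood, but it shows the mechanics: compare the Poisson likelihood of the observed counts under background-only and under background-plus-signal.)

    import numpy as np
    from scipy.stats import norm, poisson

    # Toy binned dijet-mass spectrum (GeV) with a smoothly falling background template:
    edges   = np.arange(40.0, 300.0, 10.0)
    centers = 0.5 * (edges[:-1] + edges[1:])
    bkg     = 5000.0 * np.exp(-centers / 60.0)       # expected background counts per bin

    def signal_template(mass, n_events=100.0, width=15.0):
        """Gaussian bump at the given mass, normalised to n_events in total."""
        shape = norm.pdf(centers, loc=mass, scale=width)
        return n_events * shape / shape.sum()

    sig = signal_template(145.0)

    # Pseudo-data drawn from the background-only hypothesis H_0:
    rng  = np.random.default_rng(1)
    data = rng.poisson(bkg)

    def log_likelihood(n_obs, mu):
        """Sum over bins of the Poisson log-likelihood of n_obs given expected counts mu."""
        return poisson.logpmf(n_obs, mu).sum()

    # LLR = -2 ln [ L(H_1: bkg+sig) / L(H_0: bkg) ]; positive values favour background-only.
    llr = -2.0 * (log_likelihood(data, bkg + sig) - log_likelihood(data, bkg))
    print(f"LLR = {llr:.2f}")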

    The data have a LLR which follows closely the H_0 hypothesis -unsurprising, given the results of the previous figure. But this one figure does contain some additional information: it tells you just how significant the H_1 hypothesis would be, if it were true. Take the full red line, which represents the result that you would expect to get if an X particle of 4 picobarns did exist, with a mass of 145 GeV as found by CDF. You would find that the LLR would start to deviate from the H_0 prediction for mass hypotheses above 120 GeV, reach a maximum deviation at 145 GeV, and then return to conform to H_0 once the tentative mass is hypothesized to be too large to fit the signal that a 4-picobarn, 145-GeV particle would yield.

    In other words, the red line is what you should observe if the CDF data were produced by a new particle. Instead, DZERO observes the full black line, which lies at an LLR distance corresponding to four standard deviations. One may thus conclude that the CDF signal is not observed in the DZERO data, and that the latter disproves the former.

    Now, this is all you need to know, but in truth there is something more to say, because the DZERO analysis must be doing something differently from what CDF does. Otherwise, what should we conclude from the fact that the two datasets appear so horribly in disagreement?

    The devil, as they say, is in the details. But unfortunately, from the information available to me, I see no detail whatsoever that could make a real difference in the observed dijet mass spectra. DZERO does apply some corrections to the Monte Carlo simulations, to account for differences observed between simulation and real data in the reconstruction efficiency of leptons and jets. Other corrections applied to the MC, which are derived from data, are those that account for trigger effects. But I see no reweighting applied other than that, and so I must conclude that DZERO did their homework conscientiously and that the CDF analysis must have something weird going on somewhere.

    Which is even weirder, if I think about it. The CDF analysis underwent a deep level of scrutiny; a group redid the full analysis from scratch after it was first presented; and the added data that CDF could throw in after the first observation of a 3.5 standard deviation effect (in 4.3/fb of data) has confirmed the effect, which is clearly of systematic nature. What is going on?

    I guess we might never really find out what is going on. If more time and effort is invested in this issue by CDF, by DZERO, and by the other experiments, we might end up understanding more about the structure of the backgrounds contributing to this data selection, and maybe also pinpoint some peculiarities of the jet energy measurement of CDF. But I venture to predict that the interest in this funny effect will slowly and steadily decrease. The final blow is probably going to be dealt in a few months, when ATLAS and CMS will present their own findings with statistics three times smaller than DZERO's, but with better sensitivity, owing to the better quality of the newer detectors and to the higher cross section of W plus jets processes in the higher-energy LHC collisions.

    What is sure is that I will let you know promptly about those searches, too. My interest is also economic: I have bet $200 that the CDF signal is not real, and I hope I will collect the cash by the end of the summer... (By the way: when I put the betting offer out I had no knowledge of the DZERO analysis result.)

    Comments

    "DZERO does apply some corrections to the Monte Carlo simulations, to account for differences observed between simulation and real data in the reconstruction efficiency of leptons and jets."

    I found this interesting. As a layman, I'm wondering if this sort of thing could end up hiding a real (but unknown) physical process?

    dorigo
    Hi Anon,

    of course it could. Everything you do to massage your data may result in unwanted effects, if you are not careful. But I think what they do is very straightforward - DZERO has huge samples of clean leptons and jets on which to play all sorts of games and cross-check every bit of their procedure.

    Best,
    T.
    Is it possible that the leptonic decay rate of W might be slightly different from the Standard Model prediction? Or is that also ruled out by the D0 result?

    dorigo
    Hi Anon,

    the W branching fractions are well known and agree with the SM to the precision available today. In any case, that has no bearing on whether something is observed together with the W or not. I.e., the W in the CDF events always decays to leptons, because their selection enforces it.

    Cheers,
    T.
    It seems to me that the D0 analysis from 4.3 fb-1 data uses a dotted line CDF reference from their 7.3 fb-1 data set (expecting 100 events at the peak around 150 GeV). Shouldn’t that expectation be scaled to ~46 events consistent with the original CDF data set of 4.3 fb-1? In my blog post http://theoryofeverything.org/wordpress/?p=357, the overlay of the 4.3 fb-1 charts shows more consistency than reported.

    The two plots have different bin sizes -- you can't overlay them directly. The shape of the purported gaussian on the D0 plot has been scaled appropriately to account for this.

    The bin sizes are not that different (8 vs 10 GeV) - and that can't account for a doubling of expected events for 4.3 fb-1 of data. The shape and center of the gaussian are consistent with the first CDF 4.3 fb-1 data set - but the peak is doubled in expected events.
    Subsequent CDF data shifted the center to 147 GeV with 7.3 fb-1 of data. If you shift the peak to 148 and lower it (or find more D0 data to add - as they did with CDF) the data isn't a null result.

    In the graph immediately following:
    "the CDF observation is the hatched black histogram."
    Why is there a deficit at around 255 GeV? It seems that datapoint is significantly outside the bands. I am somewhat suspicious.

    dorigo
    No way! That is a perfectly likely statistical fluctuation in a many-bin histogram!

    Error bars represent 68% coverage - so you expect that in, say, 50 bins there must be of the order of 50*(1-0.68) = 16 bins which disagree by at least one error bar width with the prediction. Add to that the fact that besides the statistical error bar there are systematic errors, and you understand that your observation of the single downward fluke at 255 is not significant.
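
    (If you want to check that back-of-the-envelope number yourself, under the simplifying assumption of independent bins with Gaussian errors:)

    from scipy.stats import binom

    n_bins, p_out = 50, 1.0 - 0.68    # chance for one bin to fall outside its 1-sigma bar
    print(binom.mean(n_bins, p_out))  # ~16 bins expected outside the band
    print(binom.std(n_bins, p_out))   # ~3.3 bins of spread, so a few more or fewer is unremarkable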

    Cheers,
    T.
    Perhaps one bin deficit might be reasonable, but in the data I see a pattern of deficits at 115, 135, 165, 185 and 255 GeV, with spacings of about 20, 30, 20 and 70. Now those deficits are marginal against the background, but I will remain suspicious.

    Maybe the alpha centaurians are goofing with us and planted the bump in the CDF detector.

    Clearly the Wjj signal is longitude-dependent. It varies between Iowa and Illinois with a wavelength of a few hundred meters. D0 just happens to be in the wrong place. So sad....

    Tommaso, after reading the D0 paper this morning I agreed with your conclusions. They did not even re-weight; all looked pretty solid. However, the talk raised many red flags... Not so sure anymore...

    Walter, what were the red flags that you saw raised by the talk?

    Here is something about the talk that seemed strange to me: A question from the audience was:
    Why did D0 make a specific 4.3 sigma refutation against a bump with cross-section exactly 4 pb
    when CDF had only said that the bump had cross-section "of the order of 4 pb", which might be consistent with the upper end of the D0 best fit cross-section range of 0.82 ( +0.82 -0.83) pb ?

    It seems that D0 was trying to set up a straw man version of the CDF bump (i.e. EXACTLY 4 pb) that would be easy to refute strongly (i.e. 4.3 sigma)
    instead of
    trying to see what common ground might exist between the CDF and D0 analyses (for example, maybe a bump with 1.6 pb cross section).

    Another thing:
    why does the CDF QCD background peak around 75 GeV/c2
    while the D0 QCD background peaks around 105 GeV/c2,
    a circumstance about which Resonaances (with kudos to Jay) said:
    "... the data are consistent between the 2 experiments
    but they do not agree about the background,
    with D0 predicting more background events near 150 GeV.
    Looking closer at the plots,
    it seems the main difference lies in the prediction of the QCD multijet contribution ..."

    In short, it seems to me that, as Resonaances said:
    "... the 2 Tevatron experiments got into an epic standoff. ...
    We need a shootout to decide who's right ...".

    As to who is to run the "shootout", Fermilab Today said:
    "... Fermilab Director Pierre Oddone and ... CDF and D0 ... have agreed to create a task force ...[which]... will consist of member from both experiments and .. Estia Eichten and Keith Ellis ...".

    It will be interesting to see the interplay of the deliberations of the task force
    with
    the analysis efforts of ATLAS and CMS that will be ongoing at the same time.

    Tony

    PS - I noticed that at the end an audience member said that the discussion should be taken offline, at which the question session was closed and transmission terminated. Also, Joe Haley's transparencies from the talk have not yet been posted to the Fermilab site, although it said that the link to the transparencies would "be active in the evening of Friday 10 June" and it is now about 15 minutes to midnight 10 June Central (Fermilab) Time.

    Hi Tony, I do not want to give a core-dump here on the blogs. But here is a very short list of some questions:
    - different jet energy scales for "quark" and "gluon" jets (up to 5% shifts between the two JES's)? This is done just for this analysis??? What processes were used to fix these scales? How does it impact other D0 measurements?
    - what are they doing with the renormalization/factorization scales and the parameters used in MLM matching? It sounded as if they were treated as free fitting parameters?
    - D0's multijet background fake contribution is double that of CDF's. It is normalized using the transverse mass distribution. How believable is this normalization, it is crucial to get this one right...
    - The actual backgrounds estimated in the di-jet mass distribution are very different between CDF and D0, even if you look at the bins with invariant masses above 200 GeV. The only possible explanation is the cone size difference (0.4 for CDF, 0.5 for D0) combined with the exclusivity requirement. It's a bit puzzling right now...
    - D0 let the WW and WZ cross section normalizations float and find roughly 2 times what one would expect from NLO. CDF is very close to NLO expectation. Somewhat troubling...
    - The explanation of the excess around 150 GeV in the D0 thesis (which uses W+2jets, muons only and the same luminosity) was that, with the Higgs-search-like cuts used in the thesis, the top background caused this excess. Very interesting, not sure I buy that yet. (In the thesis the bump was gone after a reweighting of Alpgen.)

    The fact that QCD contamination and its shape are very different between CDF and D0 is not suspicious at all.
    In an analysis where tight selection cuts are applied on a lepton candidate, like this one, or where there is a selection based on missing transverse energy, and again this is here the case, what we call "QCD background" can pass the selection mostly because of instrumental effects which create an artificial missing energy or make a hadron pass the leptonic selection. Different detectors obviously have different instrumental effects; but even for the same detector, different selection criteria would yield different QCD contaminations, with different shapes.

    dorigo
    Hi Tony,

    thanks for the notes on the talk -which I did not follow.

    Since the DZERO analysis is less powerful than the CDF one in terms of the statistics used, and since the two results are in utter disagreement (although maybe not so much as DZERO claims, since they indeed focused on 4±0 pb as a reference), it is normal that things progress from here jointly with a task force. We have, in other words, a disagreement which only more studies (or other experiments) can resolve.

    BTW the bet should not be paid yet....
    Cheers,
    T.
    Why do we conclude CDF and D0 disagree?
    Taking the subtracted plots at 4.3 fb^-1 from both CDF and D0, I can draw two hypotheses: either CDF has an upward fluctuation from the SM expectation, or there is some extra source of dijets feeding in with a cross section in the range of 1-2 pb. Both hypotheses are in agreement with the observations.
    The prudent thing to do next is to collect more data. Luckily CDF did just that last week! It made the hypothesis that there is an extra source of dijets more likely, strongly disfavoring the hypothesis that this is a fluctuation from the SM.
    I'm disappointed in D0 for misrepresenting the statistics. I am even more disappointed in CDF for letting this go. By agreeing to the task force they seem to agree that D0 refutes CDF. Get some backbone, CDF, and defend your results.....

    Nobody is letting it go.... Agreeing to the task force means exactly the opposite!

    Tommaso,

    "has confirmed the effect, which is clearly of systematic nature."

    Do you still believe no detector effect is involved? What is the source of the systematics in your opinion? Any clue?

    By the way, what kind of reweighting effect did you have in mind, before seeing the DZero result, that could have accounted for the discrepancy? (The ones you describe now are the ones I thought of, and I saw no reason to believe they could be the explanation. Since you agree with that, I'm wondering what else you had in mind.)

    Thanks,

    Bernhard

    dorigo
    Hi Bernhard,

    systematic means that it is something not statistical, nothing more.

    It could be a detector mismodeling, or a basic problem with our knowledge of background processes, or a feature of multijet events.

    A reweighting of Monte Carlo can be done by looking at some meaningful distribution in data and in simulation, and taking the ratio of the two distributions as a weight. Each MC event gets multiplied by a weight which depends on the value of that variable, thus "reabsorbing" the discrepancy. I think this is what DZERO (and others) do when they reweight their MC.
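
    (A minimal sketch of that kind of histogram-ratio reweighting, with toy distributions -I do not know which variable DZERO actually uses, so everything below is illustrative:)

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy "data" and "MC" values of some control variable (say, a jet-pT-like quantity in GeV):
    data_vals = rng.exponential(scale=55.0, size=50000)
    mc_vals   = rng.exponential(scale=50.0, size=50000)   # slightly mismodelled in the simulation

    edges = np.linspace(0.0, 300.0, 31)
    h_data, _ = np.histogram(data_vals, bins=edges, density=True)
    h_mc,   _ = np.histogram(mc_vals,   bins=edges, density=True)

    # Per-bin weight = data/MC ratio (weight 1 where the MC histogram is empty):
    ratio = np.divide(h_data, h_mc, out=np.ones_like(h_data), where=h_mc > 0)

    # Each MC event gets the weight of the bin its variable falls into:
    bin_idx = np.clip(np.digitize(mc_vals, edges) - 1, 0, len(ratio) - 1)
    weights = ratio[bin_idx]

    # After reweighting, the weighted MC distribution follows the data in this variable:
    h_mc_rw, _ = np.histogram(mc_vals, bins=edges, weights=weights, density=True)
    print(np.abs(h_mc_rw - h_data).max())   # residual difference, essentially zero by construction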

    Cheers,
    T.
    Dear Tommaso,

    Then fine, I think we agree. I however tend to think that detector mismodeling goes hand in hand with not having understood some feature of the detector, however small. When you said that the detectors are very well understood, I thought you were dismissing that completely, but I made the wrong assumption.

    Thanks for the answer!

    Sure, Higgs boson never explained the whole. But attempted to explain the oldest concepts of modern physics in quantum framework, which was given a raw demonstration by Galileo in 1589 by dropping objects from the Leaning Tower of Pisa. We start learning Physics with his laws of force and friction but never learn more than that ever during our whole tenure as students. A quantum explanation of mass and gravity thus will give a body to the 100 year old painting of quantum mechanics.

    However, it's not Higgs boson, but unnamed particles of two origins, which create gravity between 'masses'. The discovery already took place in 2010 and has now been reported as a USPTO application which will be officially published by the US Patent Office. Some general landmarks are on my site http://www.anadish.com/. I have refrained from giving details, as the details are already under publication.

    I have a naive uninformed layman question:

    Could variation in D0 trigger shape requirements affect data samples used in calculating QCD background,
    shifting the D0 QCD background to higher Dijet mass values than the CDF QCD background,
    and
    therefore reducing the bump signal seen by D0 ?

    Here is why I ask that question:

    D0 says that its dijet mass figure (figure 1 in the D0 paper and shown in the body of the blog above) is the combination of two separate plots for the electron channel and for the muon channel (figure 9 in the D0 paper).

    It is clear from figure 9 that the electron channel Multijet QCD background is much larger than the muon channel Multijet QCD background,
    so
    the QCD background difference between CDF (peak around 75 GeV/c2) and D0 (peak around 105 GeV/c2) seems to be primarily due to the D0 electron channel Multijet QCD background.

    The D0 paper discusses how D0 handled the muon channel differently from the electron channel, saying:

    "... In the muon channel, the multijet background is modeled with data events that fail the muon isolation requirements, but pass all other selections.

    In the electron channel, the multijet background is estimated using a data sample containing events that pass loosened electron quality requirements, but fail the tight electron quality criteria. ...".

    In my naive efforts to compare
    the D0 electron channel method "estimated using a data sample"
    with
    the D0 muon channel method "modeled with data events"
    I went to the recent D0 paper at arxiv 1106.1457 which for multijets referred to "the data-driven method" ... used because "estimation of this background from Monte Carlo simulations is not reliable".
    That "data-driven method" was described in another D0 paper at arxiv 0705.2788 which said:
    "... The background within the selected samples ... also include contributions from multijet events in which a jet is misidentified as an electron ...
    These instrumental backgrounds are collectively called “multijet backgrounds”, and their contribution is estimated directly from data since Monte Carlo simulations do not describe them reliably.
    In order to estimate the contribution of the multijet background to the selected data samples we define two samples of events in each channel, a “loose” and a “tight” set where the latter is a subset of the former.
    ...
    As for the shape of the multijet background, for a given variable it is predicted using a data sample where the full selection has been applied except for the tight lepton requirement. ...
    we calculate Eb, the ratio of the number of tight events to the number of loose events ...
    In the electron channel ... we observe a statistically significant variation of Eb between different data-taking periods. ...
    We attribute this increase to the more stringent electron shower shape requirements applied at trigger level 3 ...".

    Tony

    PS - As to what CDF did, their paper at arxiv 1104.0699 referred to the thesis of Viviana Cavaliere which said:
    "... The quality of our QCD fits combined with the almost independent checks on ..[ delta phi]... give us very good confidence in our estimation of the QCD contribution, both in the electron and muon samples. In addition, the QCD fraction is not fixed in the fit, but allowed to float with a gaussian constraint.
    We then look at the multijet QCD Mjj distribution, to check how sensitive we are to the particular QCD selections. We consider ... alternative templates ... We do not observe any particular difference among the templates, but given
    the low statistics, we will use the alternative templates to assess the systematic uncertainty related to the QCD shape. ...".
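
    For readers not familiar with the loose/tight jargon in the D0 quotes above, here is a minimal sketch of the generic "matrix method" that language refers to (a textbook version with made-up numbers; the actual D0 implementation and efficiencies are in the papers cited above):

    import numpy as np

    def multijet_in_tight(n_loose, n_tight, eff_real, eff_fake):
        """
        Generic loose/tight 'matrix method' for a data-driven multijet estimate.

        n_loose, n_tight : observed event counts in the loose and tight selections
                           (tight is a subset of loose)
        eff_real         : probability for a real lepton passing loose to also pass tight
        eff_fake         : same probability for a misidentified jet ("fake" lepton)

        Solves   n_loose = n_real + n_fake
                 n_tight = eff_real * n_real + eff_fake * n_fake
        and returns the estimated fake (multijet) contribution in the tight sample.
        """
        A = np.array([[1.0,      1.0],
                      [eff_real, eff_fake]])
        n_real, n_fake = np.linalg.solve(A, np.array([n_loose, n_tight]))
        return eff_fake * n_fake

    # Made-up numbers, purely for illustration:
    print(multijet_in_tight(n_loose=12000, n_tight=8000, eff_real=0.85, eff_fake=0.20))   # ~680 events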

    Slides of the Joseph Haley Fermilab talk are now available at Fermilab
    and
    slides of the Aran Garcia-Bellido Perugia talk are now available at CERN

    The two sets of slides are similar, but the Perugia set has some slides that I did not notice in the Fermilab set, such as:
    Perugia 37 - a nice picture about Limit Setting.
    Perugia 38-42 - slides about the CDF Dijet Mass Excess
    Perugia 43 - about D0 lvqq Measurement
    Perugia 44-46 - about Re-weighting Bias

    On the other hand, the Fermilab set has slides such as 6 to 16 with background information about D0 and Event Selection and a few others that I did not notice in the Perugia set.

    Tony

    I said it all along.... at CDF they've got systematics the size of a province!!!
    County-wide systematics, CDF guys!!!!

    --
    - meetings
    + analysis

    Marco Frasca on The Gauge Connection Blog about Back from Paris for the Eleventh Workshop on Non-Perturbative Quantum Chromodynamics said "the most shocking declaration from an experimentalist: “We do not understand the proton”. The reason for this arises from the results presented by people from CERN working at LHC. They showed a systematic deviation of their Montecarlo simulations from experimental data. This means for us, working in this area, that their modeling of low-energy QCD is bad and their possible estimation of the background unsure. There is no way currently to get an exact evaluation of the proton scattering section.".

    Is that why D0 said in arxiv 1106.1457 that for multijet background calculations (particularly including the electron channel in the studies of the CDF Wjj bump) a "data-driven method" was used because "estimation of this background from Monte Carlo simulations is not reliable" ?

    Could that explain at least some of the discrepancy between D0 and CDF analyses of the Wjj bump,
    particularly in light of the fact that the QCD Multijets background
    in the D0 electron channel data is centered around 105 GeV/c2
    while
    in the CDF analysis it is centered around 75 GeV/c2
    so that
    the D0 QCD Multijet background might be high in the relevant region of the Wjj bump (120 to 160 GeV/c2) thus obscuring a Wjj signal seen by CDF
    because the CDF method of calculating electron-channel QCD Multijet background may have been significantly different ?

    Since Marco Frasca quoted an experimentalist at the Paris QCD conference as saying that "results ... from ... LHC ... showed a systematic deviation of their Montecarlo simulations from experimental data ... their modeling of low-energy QCD is bad and their possible estimation of the background unsure"
    could there arise in ATLAS and CMS analyses some discrepancies similar to that between D0 and CDF
    or
    if ATLAS and CMS decide to use the same methods of QCD multijet background calculation, then could both be wrong and nobody would be able to give an independent check?

    Tony

    dorigo
    Hi Tony,

    I do not know. Our understanding of the proton, however, has little to do with our understanding of high-pT QCD processes. The underlying theory may well be the same, but it is in an entirely different energy regime. In the non-perturbative realm of structure formation we cannot calculate.

    Cheers,
    T.