    DZERO Confirms New Chi_b(3P) State
    By Tommaso Dorigo | March 28th 2012 02:43 AM | 14 comments
    While most High-Energy Physicists nowadays are kept busy with the idle search for non-existent new physics beyond the standard model in the form of improbable Supersymmetric particles, phantom leptoquarks, fairy Z' resonances, putative colorons, invented gravitinos, and what not, the subset of lucky experimentalists who decided to go against the flow and kept their feet on the ground are provided with endless entertainment in the study of resonances that are as real as your breakfast today. 

    Such is the case of B-physics analysts in the DZERO experiment, for instance, who yesterday published the signal of a nice new resonance - one that everybody knew would be there, but which only ATLAS had so far cared to look for.

    Chi_b states are particles composed of a bottom-antibottom quark pair, just like the Upsilon particles observed by Lederman in 1977. Unlike in the Upsilons, the quarks in the chi_b particles are in a P-wave configuration of relative angular momentum. This makes a big difference, because these particles cannot then decay into lepton pairs as the Upsilons do; the latter are readily observed in their Y->μμ decay because they carry one unit of spin, which can be carried away by a virtual photon, and the virtual photon then materializes into the lepton pair. So chi_b states are less easy to detect.

    One way out is to search for radiative decays of a chi_b state into an Upsilon and a photon. We can detect these resonances via their radiative decays because they are heavier than the corresponding Upsilon states they decay into; otherwise they would be prevented from decaying by energy conservation.

    Such a search was recently performed by the ATLAS collaboration at the CERN LHC. They searched for muon pairs making the mass of the Y(1S) or Y(2S), attached a photon candidate to the pair, and computed the total three-body mass; together with the lower-energy resonances, they saw a bump at 10539 ± 4 ± 8 MeV, which they interpreted as the chi_b(3P), a particle not yet catalogued in the Review of Particle Properties: new ground to claim for ATLAS. One such three-body mass spectrum is shown on the right: you can clearly see the three separate states, the chi_b(1P), chi_b(2P), and the new-found chi_b(3P). I wrote about that analysis last December here, shortly after the ATLAS discovery.
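    The three-body mass mentioned above is just the invariant mass of the two muons plus the photon candidate. As a minimal sketch of how that quantity is computed from measured four-momenta (my own illustrative code with made-up kinematics, not anything from the ATLAS analysis):

```python
import math

MUON_MASS = 0.10566  # GeV

def four_momentum(pt, eta, phi, mass):
    """Build (E, px, py, pz) in GeV from transverse momentum, pseudorapidity, azimuth, mass."""
    px, py = pt * math.cos(phi), pt * math.sin(phi)
    pz = pt * math.sinh(eta)
    energy = math.sqrt(px ** 2 + py ** 2 + pz ** 2 + mass ** 2)
    return (energy, px, py, pz)

def invariant_mass(*vectors):
    """Invariant mass of the summed four-vectors: m^2 = E^2 - |p|^2."""
    e, px, py, pz = (sum(component) for component in zip(*vectors))
    return math.sqrt(max(e ** 2 - px ** 2 - py ** 2 - pz ** 2, 0.0))

# Made-up kinematics: two muons forming an Upsilon candidate, plus a soft photon.
mu_plus = four_momentum(6.0, 0.3, 0.1, MUON_MASS)
mu_minus = four_momentum(5.0, -0.2, 2.9, MUON_MASS)
photon = four_momentum(1.2, 0.1, 1.5, 0.0)

m_mumu = invariant_mass(mu_plus, mu_minus)               # dimuon (Upsilon candidate) mass
m_mumugamma = invariant_mass(mu_plus, mu_minus, photon)  # chi_b candidate mass
```

    In the real analyses one would of course first require m_mumu to be compatible with the Y(1S) or Y(2S) mass before forming the three-body combination.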

    Now DZERO has just performed a similar search, and confirmed the ATLAS finding. To search for μμγ combinations, DZERO chose to rely solely on photon candidates reconstructed as electron-positron pairs, because the photon emitted in the radiative decay of chi_b particles is usually too soft to be detected and measured effectively in the DZERO calorimeter. Instead, photons which hit nuclei of material in the DZERO silicon tracker may convert into electron-positron pairs, which are later well reconstructed by the tracking algorithms. The figure on the right shows the reconstructed vertex position of a sample of electron-positron pairs in DZERO, in the plane transverse to the beam direction. The structure of the DZERO silicon tracker is evidenced by the high density of points at the locations of conversion vertices.

    The three-body mass found by DZERO in 1.3 inverse femtobarns of 2 TeV proton-antiproton collisions is shown on the right. The three states also seen by ATLAS are clearly observable over the predicted background. DZERO can thus confirm the ATLAS finding, measuring a mass m(chi_b(3P)) = 10551 ± 14 ± 17 MeV, in good agreement with the previous determination. Note that DZERO has a worse mass resolution, so the three states they see are less well separated in the mass distribution. Nevertheless, the significance of the 3P state is above 5 standard deviations, so this particle can safely be archived as a confirmed state.
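    As a back-of-the-envelope check of that "good agreement" (a sketch of my own, not part of either analysis; the helper names are mine), one can add the statistical and systematic uncertainties in quadrature and compute the pull between the two mass determinations:

```python
import math

def total_uncertainty(stat, syst):
    """Statistical and systematic uncertainties combined in quadrature."""
    return math.hypot(stat, syst)

def pull(m1, s1, m2, s2):
    """Separation of two independent measurements, in combined standard deviations."""
    return abs(m1 - m2) / math.hypot(s1, s2)

atlas_mass, atlas_err = 10539.0, total_uncertainty(4.0, 8.0)    # MeV
dzero_mass, dzero_err = 10551.0, total_uncertainty(14.0, 17.0)  # MeV

n_sigma = pull(atlas_mass, atlas_err, dzero_mass, dzero_err)
# n_sigma comes out around half a standard deviation: the two masses are compatible
```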

    Comments

    Great article and hard physics, but this is why you rock, T!

    While most High-Energy Physicists nowadays are kept busy with the idle search for non-existent new physics beyond the standard model in the form of improbable Supersymmetric particles, phantom leptoquarks, fairy Z' resonances, putative colorons, invented gravitinos, and what not, the subset of lucky experimentalists who decided to go against the flow and kept their feet on the ground are provided with endless entertainment in the study of resonances that are as real as your breakfast today.

    Graciously overlooking the first off topic, unnecessary, and overly confrontational paragraph this is otherwise a nice and interesting article ;-)

    lumidek
    "... study of resonances that are as real as your breakfast today."
    They are also as fundamental as my breakfast today.
    I very seriously question that, Luboš. It's impressive to see how sharply defined a pure quantum state is: there is virtually no intrinsic scatter or variance about the peaks, it's just resonance and damping. The critical feature is the points of inflection, where the rate of change, the second differential, changes sign: above they are right next to the peak, but on a Gaussian or normal, at closer to 50% peak frequency. Other physical distributions like the Poisson and gamma are similar.

    Counting quantum numbers won't get you to the play of events, still less process. Tell me, do you have to count calories to book a flight to the Red Dwarf Restaurant?

    lumidek
    Dear Orwin, a pure quantum state e.g. energy eigenstate is defined totally sharply in principle but this fact has been known since the mid 1920s and one doesn't have to investigate obscure examples of a quarkonium to see it. If you want more comments of mine about chi_b(3P), see

    http://motls.blogspot.com/2011/12/lhc-is-chib3p-new-particle.html

    I don't have to count calories for a flight to a restaurant and I don't need to study excitations of a quarkonium to see or check the postulates of quantum mechanics and sharpness of energy eigenstates or whatever is fundamental. 

    This is quite a traditional distortion by various experimenters who work on problems that are extremely far from being fundamental. They try to link their research to fundamental questions but the link is always just about marketing; their actual research isn't bringing anything new to our understanding of the fundamental questions.


    There are many examples of it. People in acoustics present some of their experiments as "sonic black holes" because black holes are viewed as cool. Atomic physicists are also trying to find arrangements that look like the speed of light is surpassed because special relativity is cool. The excited states of a quarkonium is just a messy system - much like the states of a large molecule in molecular physics. It's being presented as close to the "cutting edge" because one needs higher energies than those in molecular physics. But the actual physical substance of these systems' inner workings are completely analogous and both molecular and light-quark "nonrelativistic" physics are still far from the energy frontier.


    Studying 17th excited state of the 89th most important bound state of quarks is a legitimate scientific enterprise but one shouldn't try to pretend that it's qualitatively more important than e.g. studying the 13th excited state of the 135th most important molecule in Nature. None of these things can remotely compare to the importance of the Higgs boson or even superpartners.
    Correction, I should have said curvature for the second differential.

    No, Luboš, spectra were not resolved in the 1920s at anything like this accuracy, and when the statistical interpretation of quantum mechanics came out, no-one had much idea how to unpack it in actual statistics.

    And to claim acoustics is irrelevant is derisory: have you not heard of a Debye phonon, a Polyakov metric, or an ion-acoustic wave? Or a shock wave, a soliton, or a Korteweg-de Vries equation? You give the impression that your fundamentalism is first-order, linear, corpuscular, and in these ways thoroughly dated.

    It concerns me that the best universities are investing heavily in interdisciplinary research, but this doesn't seem to work outside of the life sciences. They've just run off with your precious graphene to revolutionize DNA sequencing and uncovered quantum computing in chlorophyll conductivity! And they're out there hunting for qubits of negentropy...

    It's not going to be productive to fight over the Higgs enigma, or excited states of matter. What we need is active collaboration with statisticians, mathematicians, logicians, systems analysts and programmers, which is why I appreciate Tommaso's perspective. But again, the life sciences are in there ahead of you.

    lumidek
    Sorry, Orwin. When you write things like
    No, Luboš, spectra were not resolved in the 1920s at anything like this accuracy
    you're just saying things that are totally untrue. In the 1920s (and earlier than that, i.e. when they didn't understand the right theory), people would study spectra of atoms and/or molecules and they were already (experimentally) known much more accurately than spectra of excited hadrons will be known for the next centuries simply because it's easier to do experiments with low-energy atomic physics near the energy of 1 eV than it is to do experiments with nuclear physics at MeVs.
    If I say the same thing from the opposite side, it's impressive that subnuclear physicists may make experiments with hadrons that are much more demanding than the experiments with atomic and molecular physics people have known for a century. But while they're more demanding, they're neither more accurate nor more fundamental. It's about the same kind of physics investigated at a different energy scale.
    And to claim acoustics is irrelevant is derisory: have you not heard of a Debye phonon, a Polyakov metric or an ion-acoustic wave? 
    I haven't discussed these three concepts; I may have heard about them or not (amusingly enough, the Polyakov metric is a concept helping to define perturbative string theory in the BRST formalism; it's really formalism more than deep physics but it's not acoustics and it's still more important than the concepts from acoustics). At any rate, it's completely irrelevant for my  comments. I said that concepts in fields like acoustics (including your three concepts - but I discussed mainly concepts that try to imitate things in fundamental physics) are much less fundamental than concepts in particle physics. It follows immediately from the definition of the word "fundamental". Check e.g. Wikipedia to see that "fundamental" is about the "foundations of reality" which acoustics surely isn't. Particle physics operates in all phenomena in Nature, everything is "made out of it" while acoustics just investigates a tiny subset of somewhat ill-defined and emergent processes, the "sound". There is nothing personal about my comparison of the fundamental nature of statistics and particle physics; after all, I understand both particle physics and statistics more than you do, so you don't earn much if one field is confirmed to be more fundamental than the other.

    In reality, interdisciplinary studies are mainly a tool to lower the quality of education, because people who get degrees in interdisciplinary "disciplines" usually don't understand a single one of these disciplines properly, or at least don't have to. Whenever they're shown to be ignorant about one of the adjacent old-fashioned disciplines, they suggest that they may know another one, but they actually don't know either. Climate change "science" has grown into the most embarrassing example of this pathological evolution we have as of today. At any rate, if you also think that the climate change alarmist "scientists" should be executed, we agree at least about something.

    Tommaso, do you know why the DZERO and CDF folks never publish full 8-10 fb-1 results? I've seen quite a few recently that only use a fraction of that data. Is it that the way the data is stored from year to year makes it hard to aggregate, or is it based somehow on the processing time required to perform the analysis, etc.?

    dorigo
    Never is a strong word. They have published their Higgs results using their full datasets already.
    The fact is, sometimes the added statistics wins you very little, in the face of the other trouble it brings in. Some triggers were repeatedly modified during data taking because the luminosity kept increasing, and this means that the resulting datasets are hard to analyze as one single piece. So in these cases it makes sense to select a uniform chunk and do that first.

    A good example is the measurement of the W boson mass by CDF, recently released. The reason for not doing that analysis with the full statistics is a compound of all these things. That is, the measurement is systematics dominated, and the systematics come from sources constrained in part by subsidiary measurements in other datasets. If the other datasets are taken under different running conditions, however, their estimate may be affected non-trivially. So even the systematics, which in principle could also scale down with data size (though not with sqrt(data); more often with smaller powers of the data size), are not easily reducible by increasing the data sample.

    Best,
    T.
    Tommaso, I know you believe that the Higgs is "right there" (in fact, you believe it has already been discovered) and that you do not really have a lot of faith in SUSY (to put it mildly). Having said that, what do you think the LHC experimentalists should focus on in the last year of pp running before the two-year shutdown to maximize the chances of putting a hole through the SM picture? Surely not looking for chi resonances, no?

    PS Your RSS feed is empty in the last week or so. Probably some glitch.

    dorigo
    Hi,

    I simply believe there is no such possibility (and I bet accordingly $1000 on the fact). But if you want an answer, I think that there is much more left to understand in low-energy spectroscopy than in high-energy tails, as far as LHC accessible information goes.

    Cheers,
    T.
    lumidek
    How far would you be willing to go when you say that there can't be beyond-the-SM physics? Are you aware of the Higgs potential instability? And what about the hierarchy problem? Do you deny naturalness in any context or just this one?
    I am obviously agnostic as to whether the LHC will "already" see something beyond the Higgs boson, slightly more inclined to think that it will, but there's of course no proof either way. I wonder what may possibly lead you to a "belief" in one particular answer, especially the No answer...

    dorigo
    Dear Lubos,

    I know the SM is not the end of it. I am as convinced of that as you are.

    However, I have observed a disconcertingly good agreement in too large a number of observable quantities in the Standard Model, many of which would be affected in non-trivial ways by new physics. This observation has strong statistical power, due to the large number of tests that have been performed in HEP experiments during the last forty years.

    The LHC is a bigger, better machine than its predecessors, but it does not constitute, in my opinion, such an enormous paradigm shift from the earlier instruments that we may be so darn optimistic about the necessity that it will find what lies beyond. Whatever the LHC would find must have been hiding so deviously in all past measurements, to avoid being recognized earlier, that I simply do not buy into the story.

    Cheers,
    T.
    lumidek
    Dear Tommaso, what you write here sounds sensible and I would even agree that the LHC is not a qualitative paradigm shift. Nevertheless, I still think that you're implicitly underestimating the LHC....

    It's a collider that may study analogous effects at energies that are 1 order-of-magnitude higher than its predecessor, I mean the Tevatron.

    One order of magnitude isn't a "huge difference" that guarantees that something new has to occur, but it is a relatively large difference. For example, it's hard to find a "decade", or order of magnitude, of mass in which there are no particles. There are particles between 100-1000 GeV, between 10-100 GeV, between 1-10 GeV, between 0.1-1 GeV, and so on. It would look a bit strange and irregular if right above a TeV there were nothing, the first nothing of this type.

    If the LHC doesn't see any new physics relevant for the hierarchy problem, not even the 14 TeV LHC, then the Higgs mass will look fine-tuned even to people like me  who have never been dogmatic about naturalness, who considered the "little hierarchy problems" to be a self-inflicted wound, and who would always tolerate fine-tuning by 2 orders of magnitude because 0.01 is of order one according to my taste.

    But it will be more than 2 orders if the LHC never sees anything new.

    Moreover, there's also cosmological evidence that there should probably be a dark matter particle below 10 TeV or so which is also "more likely than not" observable by the LHC. Those things aren't a proof that there will be something beyond the SM but in my opinion, they're obviously sensible arguments that show that the opinion that the LHC surely can't see anything new are irrational.

    Things mostly agree, with some 2-sigma deviations in various graphs etc. But that doesn't mean that the agreement will always hold. Of course, one always needs to double the total luminosity to enhance 2-sigma to 3-sigma or 3-sigma to 5-sigma so it's unlikely that discoveries will occur in the known channels very soon. But as the total number of collisions is almost exponentially growing, the chance to see new physics is kept uniform in time.

    Even if you are just comparing a particular channel precisely to the SM predictions, every time you double the total number of collisions, there's a "completely new chance" to see a 3-sigma deviation even though there was no 2-sigma deviation before the doubling. The discrepancies may start to occur at every moment.
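    The counting behind this argument can be sketched in a few lines (a toy of my own with made-up rates, not a statement about any real channel): if the signal and background both grow linearly with luminosity, the naive significance S/sqrt(B) grows only as the square root of the luminosity, so each doubling of the dataset multiplies it by sqrt(2) ≈ 1.41.

```python
import math

def significance(signal_per_fb, background_per_fb, lumi_fb):
    """Naive counting significance S / sqrt(B) for rates given per inverse femtobarn."""
    signal = signal_per_fb * lumi_fb
    background = background_per_fb * lumi_fb
    return signal / math.sqrt(background)

# Made-up rates giving a 2-sigma excess in the first inverse femtobarn.
z_1fb = significance(10.0, 25.0, 1.0)  # 10 / sqrt(25) = 2 sigma
z_2fb = significance(10.0, 25.0, 2.0)  # doubled dataset: grows by sqrt(2)
```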

    I think you are implicitly using mathematical induction to conclude that if nothing new has been seen so far, it has to stay so. But you're missing that this induction step is unproved and the progress keeps on going. If you agree that every time you multiply the energy by 10 and/or the luminosity by 100 you should find a new particle or two on average, then the rate of finding new things at the LHC is a new particle per 2-3 years or so.

    Of course, we just had the Higgs but it's not the end of time yet.