    Luminosity, Michel Parameter, Phase Space: What A Lousy Title For A Great Post
    By Tommaso Dorigo | January 2nd 2010 11:54 AM | 10 comments
    After re-emerging from a rather debilitating New Year's Eve banquet, I feel I can provide my own answers to the second batch of physics questions I proposed a few days ago to the most active readers of this column.

    Be sure about one thing: the answers to the three questions have already been given in some form by a few of the readers in the comments thread; I will nonetheless provide my own explanations, and in so doing I might pick a graph or two to illustrate better the essence of the problems. But first, there was a bonus question included in the package, and nobody found the solution to it. Here is the bonus question again:

    "What do you get if you put together three sexy red quarks ?"

    The answer is

    "I dunno, but I'm getting a haDR-on just thinking of it!"

    Okay, leaving such trivialities (for which I have to thank Robyn M.) behind us, we can now discuss the physics. To make this piece easier to read, I paste below the three questions:

    1) The LHC experiments will search for Z bosons in
    their early 2010 data. The Z decay to muon pairs, in particular,
    provides a means to verify the correct alignment of tracking detectors
    and the precise modeling of the magnetic field inside the solenoid,
    which bends charged tracks traversing its volume.
    If the signal cross section equals 50 nanobarns, and only three decays in a hundred produce muon pairs, calculate the integrated luminosity required to identify 100 candidates, assuming that the efficiency with which a muon is detected is 70%.
    Hint: you will need to use the formula N = σ L, and all the information provided above.

    2) In the decay of stopped muons into an electron, a muon neutrino, and an electron antineutrino,
    the produced electron is observed to have an energy spectrum peaking
    close to the maximum allowed value. What is this maximum value, and
    what causes the preferential decay to energetic electrons ?
    Hint: you might find inspiration in the answer to the first question I posted on Dec. 26th.

    3) The decay to an electron-neutrino pair of the W
    boson occurs one-ninth of the time, because the W may also decay to the
    other lepton pairs and to light quarks, and the universality of charged
    weak currents guarantees an equal treatment of all fermion pairs. The
    question is: if the W boson had a mass of 300 GeV, what would the rate
    of electron-neutrino decays be ?
    Hint: the top quark would play a role...
    1 - The first question was the easiest to answer, in my opinion. It is, in fact, a very simple multiplication of the given data, but if you have never done a similar computation you might easily get confused. So let me go over it in some more detail than I would think necessary.

    The number of subatomic interactions producing a particular reaction, whose probability of occurring is encoded in its cross section σ, is indeed obtained by multiplying that cross section by a "flux" factor, described by what we call "integrated luminosity", L. Since cross sections are measured in units of area, it is natural that a multiplication of that area by a quantity having units of "number per area" -id est, a flux integrated over time- gives as a result a pure number -the number of collisions producing the wanted process. L has indeed units of inverse area, and so we do write N = σ L, as in the hint above.

    Now, we want to know the luminosity needed to see 100 Z decays to muon pairs: to see 100 decays in your detector, however, you have to produce many more Z bosons, because only 3% of them do decay into muon pairs; and further, you get to reconstruct both muons only 70% x 70% = 49% of the time, because of the incomplete acceptance of your detector and the imperfect identification algorithms. So if we want to see 100 decays, we need to produce more: the number is given by 100 / (0.03 x 0.49), roughly (don't expect that I fetch my pocket calculator for such a calculation!).

    So the answer we seek is simply L = 100 / (50 nb x 0.03 x 0.49), or 134 inverse nanobarns, give or take a few. It is an integrated luminosity that the LHC will collect in less than a day of running at startup; the same integrated flux is achieved nowadays by the Tevatron in half an hour, but the Tevatron is a perfectly tuned machine -the LHC at full speed should collect it in less than a minute.
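    For readers who want to redo the arithmetic explicitly, here is a minimal sketch in Python; the variable names are mine, and the inputs are just the numbers quoted above. The exact arithmetic lands at about 136 inverse nanobarns, in the same ballpark as the ~134 quoted above; as the next paragraphs argue, the difference is well below the intrinsic Poisson spread anyway.

# Rough numerical check of the luminosity estimate discussed above.
# Inputs are the figures quoted in the text; names are illustrative.
sigma_Z = 50.0    # Z production cross section, in nanobarns
br_mumu = 0.03    # branching fraction of Z to muon pairs (~3%)
eff_mu = 0.70     # single-muon detection efficiency

n_wanted = 100                                  # Z -> mu mu decays we want to see
eff_pair = eff_mu * eff_mu                      # both muons must be reconstructed: 49%
n_produced = n_wanted / (br_mumu * eff_pair)    # Z bosons that must be produced
lumi = n_produced / sigma_Z                     # integrated luminosity L = N / sigma, in 1/nb

print(f"Z bosons to produce: {n_produced:.0f}")   # about 6800
print(f"Integrated luminosity: {lumi:.0f} /nb")   # about 136 /nb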

    I highlighted "give or take a few" above to remind myself to make a point here. Before I go to the next exercise, let me explain one thing about back-of-the-envelope calculations like the one above. Since we want to know the integrated luminosity needed to see 100 detected Z decays to dimuons, we know that we do not need L to be computed with a 1% accuracy, since those 100 Z decays obey Poisson statistics, and therefore even if we knew perfectly the Z cross section, and the dimuon branching fraction, and the detection efficiency, all that precision would be useless: the determined L would be subject to a random fluctuation of the order of 10% of itself -that is, the intrinsic variability of 100: +-10.
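    If you prefer to see that 10% spread numerically rather than take the square-root-of-N rule on faith, here is a small toy in Python (my own illustration, not part of the original argument): it throws many pseudo-experiments that each expect 100 decays and looks at the scatter of the observed counts.

# Toy illustration of the ~10% Poisson fluctuation on a count of 100.
import numpy as np

rng = np.random.default_rng(seed=1)
counts = rng.poisson(lam=100, size=100_000)   # observed Z -> mu mu decays per pseudo-experiment

print(f"mean observed count: {counts.mean():.1f}")   # close to 100
print(f"spread (std. dev.):  {counts.std():.1f}")    # close to 10, i.e. a ~10% fluctuation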

    What I am trying to explain is that the answer to the question, for an experimental physicist, is not "to see 100 Z decays to dimuon pairs we need to acquire an integrated luminosity of L = 134/nb", but rather "to see 100 Z decays to dimuon pairs we need to acquire an integrated luminosity of about 130 inverse nanobarns". The second answer, thanks to the intrinsic inaccuracy which it contains, is -to me- actually more accurate!!! Remember this, because it is an important lesson: the uncertainty is more important than the measurement. In saying "about 130" we make it clear that we are not serious about the last digit, and so we provide MORE important and accurate information than "=134"!

    Let me stress it again (and I will make it a "say of the week" one day): given the choice between being told the result of a measurement without its uncertainty, and being told the uncertainty of the measurement without the value of the measurement itself, you should choose the second!

    2 - The key to understanding the behavior of the weak muon decay is to realize that the process proceeds through a V-A charged weak current: V stands for "vector", and A stands for "axial-vector". I am not going to start writing down gamma matrices and spinors here: I could explain that to you, but it would take too much of my patience for today.

    Instead, let me just try to explain it as follows. We have three final state particles: two of them (the neutrinos) basically massless, the third (the electron) also almost massless. For such low-mass particles the momenta involved make them all ultra-relativistic, so that helicity (the projection of spin along their direction of motion) is a good quantum number: we can use it in our reasoning. Two of the decay particles (the electron and the muon neutrino) want to be left-handed, i.e. have a -1/2 helicity (as dictated by the weak interactions); the third (the electron antineutrino) wants to be right-handed, and thus to have a helicity of +1/2.

    Now let us take an x-axis (do not get scared if I slip into calling it a "quantization axis") as the one along which the original muon spin is aligned: the muon has spin "+1/2" along x. The vector current which produces the weak muon decay "flips" the spin of the muon, such that its sibling -the particle carrying away the "muon-ness", the muon neutrino- has spin opposite to that of the parent. This means that the muon neutrino will travel away in the positive x direction, with spin -1/2 along that direction.

    The unit of spin necessary for this spin flip -a "+1" along x- is carried away by the electron-electron antineutrino system, against which the muon neutrino is recoiling. Since the electron wants to be left-handed and the electron antineutrino wants to be right-handed, if the two traveled in the same direction they would cancel each other's spins: but we just said that the system has to carry away a full +1 unit of spin. This is only possible if they are emitted back-to-back, with the electron traveling along the negative x direction (with spin +1/2 along x), and the electron antineutrino traveling along the positive x direction (with spin +1/2 along x again).

    We thus have a preference for the decay to yield the two neutrinos traveling together in the direction of the original muon spin, and the electron shooting out in the opposite direction. If that is the case, the momenta of the two neutrinos will have to balance that of the electron. Since the three particles have masses much smaller than the parent mass (105 MeV), their energies are practically equal to their momenta. The electron gets an energy of 52.5 MeV, and the two neutrinos share the other 52.5 MeV. The mass of the muon, that is, has been converted into the momenta of the decay products (and minimally into the electron mass).
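    For the record, the exact endpoint of the electron spectrum follows from treating the two collinear neutrinos as a single massless system recoiling against the electron; here is a one-line numerical check (my own numbers, standard mass values rather than the rounded ones used above).

# Kinematic endpoint of the electron energy in muon decay at rest:
# E_max = (m_mu**2 + m_e**2) / (2 * m_mu), i.e. a two-body decay into the
# electron and a massless system made of the two collinear neutrinos.
m_mu = 105.66   # muon mass, MeV
m_e = 0.511     # electron mass, MeV

E_max = (m_mu**2 + m_e**2) / (2.0 * m_mu)
print(f"E_max = {E_max:.2f} MeV")   # about 52.8 MeV, essentially half the muon mass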

    Please bear in mind that the real situation is not so clear-cut as I have described it above, of course: the emitted electron is not perfectly left-handed because its mass is not negligible; and I have considerably simplified the picture by considering the extreme case of a perfect alignment of the three final state particles along an axis, spin flip, etcetera. Despite my inaccurate treatment, I think it appears clearly that there is a preference for the configuration discussed. A sketch of the electron energy distribution from stopped muons is shown on the right: as you see, rather than a fair share of the parent's energy (105/3=35 MeV) the most probable electron energy is higher, close to the maximum allowed value.

    3 - This question was rather easy to answer, if you correctly accounted for the fact that quarks need to be counted three times, because there are three different species of each: red, green, and blue (or pick any other trio of colours you fancy). So, a 300 GeV W boson would not just decay into electron-neutrino, muon-neutrino, tau-neutrino, up-antidown, and charm-antistrange pairs: it would also readily produce top-antibottom pairs, one for each of the three colours those quarks may take.

    Now, the democratic nature of the charged-current weak interaction makes the regular W decay one-ninth of the time into each of the listed pairs; a 300 GeV W would be just as democratic, and each of the now 12 possible final states would get a one-twelfth chance of occurring.
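    The channel counting above can be written out explicitly; here is a minimal sketch (tree level, ignoring the phase-space effect discussed next, with channel lists of my own).

# Counting the open W decay channels at tree level (no phase-space or CKM effects).
N_COLOURS = 3
lepton_pairs = ["e nu", "mu nu", "tau nu"]

# Real W (mass ~80 GeV): only the two light quark doublets are kinematically open.
quark_pairs_real = ["u dbar", "c sbar"]
channels_real = len(lepton_pairs) + N_COLOURS * len(quark_pairs_real)     # 3 + 6 = 9

# Hypothetical 300 GeV W: the t bbar channel opens up as well.
quark_pairs_heavy = quark_pairs_real + ["t bbar"]
channels_heavy = len(lepton_pairs) + N_COLOURS * len(quark_pairs_heavy)   # 3 + 9 = 12

print(f"B(W -> e nu) for the real W:    1/{channels_real}")    # 1/9
print(f"B(W -> e nu) for a 300 GeV W:   1/{channels_heavy}")   # 1/12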

    As one reader correctly pointed out (I think it was Lubos), the top quark is quite heavy, and its mass cannot be neglected in the decay. The large mass of the top requires a large "investment" of part of the 300 GeV released in the hypothetical W boson's disintegration. This would actually make the top-antibottom mode less probable than the other quark decays! It is a subtlety, but it is useful to point it out. The disadvantage of the top-bottom decay would be due to something we call in jargon "phase space suppression".


    You can see the suppression quite clearly in another calculation, which was done twenty years ago, when the top quark had not been discovered yet. Back then we already knew very well that the W boson had a mass of 80.4 GeV, but the top could still be lighter than the W, so one could compute, as a function of the then-unknown top quark mass, the fraction of W decays that would yield electrons: this number should be one ninth if the top is heavier than the W, and smaller if the top is lighter. The curve in the graph above shows exactly how the "phase space suppression" acts to make it less and less probable that a W decays into tb pairs as the top mass gets higher and closer to the kinematic limit (what is shown on the vertical axis in the figure is the inverse of the fraction of W decays to electrons, a number close to 9 for a large top mass and close to 12 for a small top mass).
     
    In the figure you actually see a direct exploitation of this very phase-space suppression effect. Since the total number of modes into which the W boson may decay can be inferred by directly measuring its natural width -how thick the peak is, a number proportional to the speed of the decay, which in turn depends on the number of possible ways it can disintegrate- the determination of the rate of W->electron neutrino decays can be turned into a value on the vertical axis (the hatched horizontal band at 9.14+-0.36). That, combined with the theoretical curve shown in black, allowed a lower limit to be set on the top mass! The arrow shows the limit, M(top)>62 GeV, at 90% confidence level (the value corresponding to the 90% upper bound of the hatched band, 9.93). The result shown in the CDF figure above was produced twenty years ago, and it is now of only historical value. And didactic, too!
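    The shape of that curve can be sketched with a deliberately crude tree-level model (my own approximation: all fermions massless except the top, with the bottom mass and QCD and CKM corrections neglected), in which the W -> t bbar partial width is suppressed by the factor (1 - x)^2 (1 + x/2), where x = m_t^2 / M_W^2. It will not reproduce the published CDF curve exactly, but it shows the trend from 12 down to 9.

# Crude sketch of 1 / B(W -> e nu) as a function of the top mass, at tree level,
# with massless fermions except the top and no QCD/CKM corrections.
M_W = 80.4   # W mass, GeV

def inverse_electron_fraction(m_top):
    """Approximate 1 / B(W -> e nu) for a given top quark mass (GeV)."""
    light_channels = 3 + 3 * 2             # 3 lepton pairs + 2 light quark doublets x 3 colours
    if m_top >= M_W:
        return float(light_channels)       # t bbar closed: the familiar 9
    x = (m_top / M_W) ** 2
    tb_channel = 3 * (1 - x) ** 2 * (1 + x / 2)   # colour factor times phase-space suppression
    return light_channels + tb_channel

for m_top in (0.0, 40.0, 60.0, 80.4):
    print(f"m_top = {m_top:5.1f} GeV  ->  1/B(W -> e nu) ~ {inverse_electron_fraction(m_top):.2f}")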

    Comments

    Ciao Tommaso,

    posting three fun questions like this is a good idea - I hope your readership enjoyed them.

    I've always liked the last problem on W decays, and decided to poach a little bit and post my own commentary at Collider Blog, written before this latest post of yours. I hope you don't mind! ;)

    Michael

    dorigo
    Hi Michael,

    of course I don't mind! I am actually happy to see you have been posting quite frequently on your blog recently (contrary to what I wrote in the comment I left a moment ago there!). So I added a link to your site from here.

    Cheers,
    T.
    Tommaso, may I say you are unfair once more! When the person who ventured the only answer to the bonus question wrote "various puns with hadrons [etc]", that was what she meant -- probably put *too* politely. But thanks for the rest!

    I count 58 Google hits for misspellings of "hadron" already on arXiv. Nearly a half dozen in the abstracts alone. Another mention won't hurt.

    lumidek
    That's an interesting cultural gap.
    In my corners, accuracy and rigor have always been relatively important. In your corners, Tommaso, *inaccuracy* and sloppiness are more important, as you told us. ;-)

    It's very perverse to deliberately write a highly inaccurate, rounded value in order to emphasize the "uncertainty". You claim that the uncertainty is 10%. There's no physical law that would determine that the error margin for luminosity must be 10%. It's just your invention.

    As you could have copied from the readers, the ratio you couldn't calculate is 136, not 134, so the rounded value would be 140, not 130 inverse nanobarns. You introduced a new error that is almost as big as the standard deviation - you effectively doubled the error. That's too bad. It's sensible to remove excessive, "too accurate" digits of a number that is more uncertain. But you must know the limits: doubling the error is too much of a good thing.

    You don't have to have a pocket calculator in your pocket. It's enough to use e.g. Chrome for browsing, and download

    https://chrome.google.com/extensions/detail/okchmhdoihblcikdcedjpofciafc...


    my old calculator, or another calculator, which is available within a fraction of a second, directly in the same window. See also a line-based competition

    https://chrome.google.com/extensions/detail/acgimceffoceigocablmjdpebeod...


    Get other Chrome extensions I did:

    https://chrome.google.com/extensions/search?q=lubos.motl


    Cheers
    LM
    dorigo
    Dear Lubos,
    inaccuracy is fundamental for outreach. Without the omission of inessential or distracting details it is impossible to keep readers' interest in tough topics. I have found that those readers who miss the omitted detail or spot the approximation are quick to point it out in the comments thread, increasing the interest of the piece.
    But you know these things, and are just trying to be a troll. What you have to prove, I fail to understand.
    Cheers,
    T.
    lumidek
    Come on, Tommaso, you know that what you write is just plain rubbish, at least in this context - you just try to be obnoxious. You wrote that a result that was close to 136 (everyone else but you wrote it as 136!) was 130, and you emphasized that it's better to write a wrong number than the more correct number. I just wrote it was preposterous....

    If you think that it's harder for readers to read 136 than 130 (which should have been 140, anyway), you're just wrong. Quite on the contrary, the people who calculated the right figure, 136, could have been made uncertain by your bizarrely inaccurate figure, 130. Is it the same thing? Did they make an error in the calculation? Well, they didn't.

    There could be a fair discussion of whether you may be forgiven for having doubled the error margin of the luminosity. But there can't be any discussion about the fact that it is preposterous for you to criticize all those who have written down a sensibly accurate figure of 136 instead of 130. Such a criticism from your keyboard is completely indefensible. By the way, the normal error of the Tevatron luminosity is dominated by the uncertainty of the inelastic cross section, which is just 3%; see

    http://scholar.google.com/scholar?hl=en&q=tevatron+"error+of+the+luminosity"&btnG=Search&as_sdt=2000&as_ylo=&as_vis=0


    And more generally, inaccuracies are often helpful to "distort" the final picture and create demagogy - in much more serious situations than some irrelevant 136 vs 130 debate. But we may differ because distorting science is something that you actively like. I don't. Sorry.

    Cheers
    LM

    dorigo
    Ouch, Lubos, be careful!

    You already ventured into a discussion on luminosity with me once, as you well remember. You started off being as offensive as you can be, I told you quietly to retreat, you insisted, and finally you had to apologize not just to me, but to the readers of my blog and to the CDF collaboration, as many readers still remember.

    Now I was making a rather general point, that accuracy is not required in outreach. You are not satisfied, and you decide to insist on the detail. But you fail. Thrice. You force me to explain to you what other readers of the post above have already understood.

    When I say that 130 (or 140) is "more accurate", in its lack of accuracy, than the precisely computed number, 136, I am making an important point. But you fail to understand it. That number is the "luminosity needed to collect 100 Z decays". 100, you should not forget, is a variable following Poisson statistics -it varies. One experiment collects 136 inverse nanobarns, but still gets only 85 Z decays. Another, with 130, already has 110 Z decays in the bag.... It is an intrinsic fluctuation of the number. Because of that, the "luminosity needed to collect 100 Z decays" is a quantity affected by uncertainty at the same level: it is, one might say, determined only to within 10% -because 100 has a statistical error of 10. This is not a commonly known form of error, but it is a very real and frequent one: the ill-defined nature of the quantity one is determining.

    A different thing would have been to ask "what is the luminosity for which the expected number of collected Z boson decays is 100", in which case you would be right. No, wait, you would be just "less wrong".

    Why is that so? It is because, my dear Lubos, you err twice more on this one! First, you quote the Tevatron as having a 3% uncertainty on its luminosity (pointing to a CDF paper on the precision of the luminosity monitors, which you have not even read), while what counts is the total error on the luminosity acquired by the detector, and both CDF and DZERO have it at 6%, not 3% (sorry, but I have no time to explain to you why that is so; you are well advised to document yourself now). Second, because you forget that the experiment we have been discussing is at the LHC, not at the Tevatron! And the LHC will have an even larger uncertainty on the luminosity than the Tevatron.

    Anyway, I have to admit I do not expect you to understand the experimental subtleties that make the luminosity error what it is at CDF and DZERO, but since I know you are likely to cast some smoke on this mistake of yours, here are two of the most precise recent determinations of cross sections -which start to be dominated by the luminosity uncertainty- at the Tevatron:
    CDF: sigma(ttbar)=7.27+-0.71+-0.46+-0.42 (lumi) pb -yes, the last one is the luminosity error - which is 6%, not 3%;
    DZERO: sigma(ttbar)=6.75...+-0.39(lumi) pb - the paper even says it clearly at the bottom line on page 5, "the systematic uncertainty on the luminosity is 6.1%". Not surprising, since DZERO relies on CDF for the luminosity, as does the Tevatron itself! Poor Lubos, next time avoid a google search and try to read a paper or two, it will do you good.

    So, you see: even if we forgot, as you did, the Poisson uncertainty, the error on the cross section, and the error on branching fraction and detector acceptance, we would still be left with a 136*0.06= 8 inverse nanobarn uncertainty (mind you, 8, and not 8.16, since the uncertainty in the luminosity is itself uncertain). So 136 is a number just as good as 130, or 140. But as I said, it is more correct to convey the information that even the second digit (the three in 130) is uncertain, due to the Poisson error, as I tried to teach stubborn souls like you in my post.

    Sorry for being pedantic, Lubos, but you seem to always want to lose these discussions, and you bring it upon yourself.

    Cheers,
    T.
    lumidek
    This was no "outreach". It was a would-be solution to a quantitative homework problem for your readers. And by the way, accuracy is valuable even in "outreach".
    Even if the error is 6%, you still almost doubled it. You just can't arbitrarily change the mean value by a standard deviation.

    The reason why I had apologized was that I used to guess that your collaboration wouldn't be willing to sign off on your preposterous paper with cross sections thrice the original - already preposterously high - values. It was, and I was wrong.

    You know, and everyone else knows very well, that the main point of all these misunderstandings was that your paper on lepton jets was crackpottery. That's what really matters - not some ludicrous debate about whether some number that should be zero is 70 or 200.

    Cheers
    LM