A good part of basic research in fundamental physics focuses on the definition, prediction, and measurement of quantities that put the current theory - the Standard Model - to the test in the most stringent way possible. The choice of the quantities on which to base our comparisons between theoretical prediction and measurement is critical: it requires understanding what may make the comparison imprecise (e.g. experimental systematics affecting the measurement) or fruitless (e.g. theoretical assumptions, or a bad definition of the quantity to measure).
One clear example, which I used last week in my Subnuclear Physics lectures to undergraduates in Padova, is the measurement of the W and Z boson cross sections at the Tevatron proton-antiproton collider.
The "cross section" of a particle reaction tells us how probable it is that a head-on collision will give rise to it. In the case of W and Z boson production, this can be readily computed from electroweak theory and other data, including some functions which describe the probability that the quarks actually involved in the hard collision carry a given fraction of the total proton energy. Theorists can produce very accurate predictions for the W and Z cross section, because the technology to perform these calculations is quite advanced.
(Above: the transverse mass of Z candidates extracted by CDF in Run II, a sample which will provide the electron energy calibration necessary for a precise W mass measurement).
To measure the W and Z boson cross sections, experimentalists need to put together several inputs. We are talking about a counting experiment: we determine how many W or Z bosons are collected in our datasets, and work back to the cross section by accounting for the size of the dataset and the identification efficiencies. Ignoring the rule of thumb that dictates that any line of math in a text will halve the number of its readers, I am going to quote below the master formula for determining the cross section from the number of events N, the integrated luminosity L, and the efficiency E. It is the following:

sigma = N / (E x L)
Now, imagine you want to check electroweak theory calculations by comparing theory and measurement on the W and Z cross sections. You can do two things: compare separately theory and experiment on W and Z production rate, or compare the ratio of the two rates. These two strategies are quite different in scope and in precision, as I hope I will be able to explain to you in the following. But to do that, I need to consider a specific example: an early measurement provided by the CDF experiment at Fermilab, using data collected between 1992 and 1993. I could have picked a more recent example, but that is the one I used for my students, so that is what you also get today.
We need some data. Let us say we found N(W)=20,000 W boson candidates and N(Z)=2000 Z boson candidates, in a dataset corresponding to an integrated luminosity L= 20 inverse picobarns. The efficiency to collect W bosons is E(W)=0.05, the efficiency to collect Z bosons is E(Z)=0.02. Mind you, these numbers include the fact that we restrict our search to leptonic W and Z decays: we only try to catch these particles when they yield energetic electrons or muons. In other words, the efficiencies E(W) and E(Z) include the so-called "branching fraction" of the particles into electrons or muons. But that's a detail you can happily forget here.
With those numbers, the W and Z cross sections can be obtained by using the simple formula shown above: we thus find

sigma(W) = 20,000 / (0.05 x 20 pb^-1) = 20 nb
sigma(Z) = 2000 / (0.02 x 20 pb^-1) = 5 nb
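As a quick sanity check, the arithmetic above can be reproduced in a few lines of Python. The variable names are mine, and the numbers are the toy values quoted in the text, not official CDF results:

```python
# Toy cross-section computation: sigma = N / (E x L)
N_W, N_Z = 20000, 2000        # candidate counts
E_W, E_Z = 0.05, 0.02         # overall efficiencies (incl. branching fractions)
L = 20.0                      # integrated luminosity in pb^-1

sigma_W = N_W / (E_W * L)     # cross section in pb
sigma_Z = N_Z / (E_Z * L)
print(sigma_W / 1000.0, "nb")  # -> 20.0 nb
print(sigma_Z / 1000.0, "nb")  # -> 5.0 nb
```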
All is well: the above numbers can be compared with the precise theory prediction. But a number without an error is like a ruler without inch marks: absolutely useless. So we need to account for statistical and systematic uncertainties. In so doing, we need to find the relative statistical uncertainties, as well as the relative uncertainty on E(W), E(Z), and L.
Now, as far as statistical uncertainties are concerned, these are found by applying Poisson statistics, which is relevant to the counting of events. You need not worry about this, but just accept that the relative uncertainty in a count is a fraction equal to sqrt(1/N): so 20,000 has a relative uncertainty of about 0.7 percent, and 2000 a relative uncertainty of about 2.2 percent.
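The two percentages quoted above follow directly from the 1/sqrt(N) rule; a two-line check (with the toy counts used in this post) looks like this:

```python
import math

# Poisson relative uncertainty on a count N is 1/sqrt(N)
for N in (20000, 2000):
    print(N, "->", round(100.0 / math.sqrt(N), 1), "percent")
# 20000 -> 0.7 percent
# 2000  -> 2.2 percent
```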
And how about systematic uncertainties on the cross sections? These are due to the fact that we obtain cross sections by multiplying numbers which are not known precisely. Efficiencies are measured in the detector with good accuracy, say 1 percent, while the integrated luminosity is a much harder quantity to determine, and is typically known only with a precision of 6 percent.
Putting everything together is easier than you might think: if we treat each source of uncertainty as uncorrelated with the others - which in this example is a hypothesis that holds water - we just have to add all the sources in quadrature. And since each source affects the cross section measurement with a multiplicative factor, we can combine the relative uncertainties as follows:

dsigma/sigma = sqrt[ (dN/N)^2 + (dE/E)^2 + (dL/L)^2 ]

For the W this gives sqrt(0.007^2 + 0.01^2 + 0.06^2) = 0.061, or about 6.1%; for the Z, sqrt(0.022^2 + 0.01^2 + 0.06^2) = 0.065, or about 6.5%.
We thus find that the two experimental determinations are totally dominated by the systematic uncertainty on the integrated luminosity, 6 percent! Our data-theory comparison of the cross sections is limited in its accuracy by how well we know the size of our datasets.
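The quadrature combination is easy to script; here is a minimal sketch (my own helper function, using the toy uncertainties quoted above), which makes the dominance of the 6% luminosity term evident:

```python
import math

def add_in_quadrature(*relative_errors):
    """Combine uncorrelated relative uncertainties."""
    return math.sqrt(sum(r * r for r in relative_errors))

stat_W = 1.0 / math.sqrt(20000)   # ~0.7% statistical
stat_Z = 1.0 / math.sqrt(2000)    # ~2.2% statistical
eff, lumi = 0.01, 0.06            # 1% efficiency, 6% luminosity

print(round(100 * add_in_quadrature(stat_W, eff, lumi), 1))  # -> 6.1
print(round(100 * add_in_quadrature(stat_Z, eff, lumi), 1))  # -> 6.5
```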
Enter cross section ratios: we now decide we are not too concerned with the absolute value of the W and Z cross sections, and that we want to measure their ratio instead, R=sigma(W)/sigma(Z). This is still a powerful check of electroweak theory, because the ratio R is sensitive to the different couplings of quarks to the W and Z bosons, as well as to other subtleties.
Let us see what we can say for the ratio R: R = 20 nb / 5 nb = 4.0. A nice round result. But the relevant question is still: what is the relative uncertainty on R?
It is here that one glimpses the power of taking the ratio of quantities as a check of theoretical calculations. One could be tempted to say that since both numerator and denominator in R are affected by a 6% uncertainty, the ratio should be given an error equal to the quadrature sum of 0.06 and 0.06, or 0.085. Wrong!! Actually, doubly wrong.
Upon hearing my claim that the above 8.5% uncertainty is a gross mistake, you might realize that the luminosity with which the W and Z events were collected is the same. So one does not need to count it twice: is R affected by a mere 6% uncertainty, then?
Wrong again. We really need to consider the formula for R and do the math. Let us see:

R = sigma(W) / sigma(Z) = [N(W) / (E(W) x L)] / [N(Z) / (E(Z) x L)]
This can be rewritten as

R = [N(W) x E(Z)] / [N(Z) x E(W)]
Luminosity has disappeared! In fact, in R we are only interested in the relative number of W and Z events, which should not depend on the sample size. The uncertainty on R is thus obtained by accounting for the statistical and systematic sources as follows:

dR/R = sqrt[ (dN(W)/N(W))^2 + (dN(Z)/N(Z))^2 + (dE(W)/E(W))^2 + (dE(Z)/E(Z))^2 ] = sqrt(0.007^2 + 0.022^2 + 0.01^2 + 0.01^2) = 0.027, or about 2.7%.
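The same combination, now without the luminosity term, can be checked in a few lines (again with the toy numbers of this post):

```python
import math

# In the ratio R the luminosity cancels, so only the counting and
# efficiency uncertainties survive (taken here as uncorrelated).
rel = [1.0 / math.sqrt(20000),  # W statistics
       1.0 / math.sqrt(2000),   # Z statistics
       0.01,                    # E(W)
       0.01]                    # E(Z)
dR_over_R = math.sqrt(sum(r * r for r in rel))
print(round(100 * dR_over_R, 1))  # -> 2.7
```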
R is thus known more than twice as precisely as each individual cross section! This allows for a very interesting check with theory. Note, however, that when measuring a ratio of cross sections, one gains something while losing something else. In fact, the absolute measurements are very informative, because they also test how well we know the structure of the proton - in particular, the fraction of quarks carrying enough of the proton's momentum to produce W or Z bosons. But fortunately, there are other ways to get around the luminosity uncertainty and remain sensitive to the proton structure. One such way is to measure the asymmetry in the production of the bosons - something which, however, deserves a separate article to be illustrated.
 Integrated luminosity L is a number describing the amount of collisions we are considering: the size of the dataset.
 Efficiency is a number smaller than 1.0 which describes how likely it is that an event of a particular kind (say a W decay candidate) is identified as such.
In the discussion above I have neglected to mention it, but maybe if you have gotten this far down the post, you do want to know it: the efficiencies of W and Z identification, as you might have guessed, are also quite correlated, since both W and Z particles are identified through the charged leptons they yield in the decay. So when we consider the ratio E(Z)/E(W) in the formula for R, much of that uncertainty also cancels, and the uncertainty on R is further reduced.