This is the third part of Chapter 3 of the book "Anomaly! Collider Physics and the Quest for New Phenomena at Fermilab". The chapter recounts the pioneering measurement of the Z mass by the CDF detector, and the competition with SLAC during the summer of 1989. The title of the post is the same as that of chapter 3, and it refers to the way some SLAC physicists called their Fermilab colleagues, whose hadron collider was in their eyes obviously inferior to the electron-positron linear collider. For part 1, see here. For part 2, see here.

---


Unfathomable Code

The computer algorithm which performed the fit of position hits measured by the tracking detectors, extracting the most probable trajectory of charged particles, was still unfinished business in 1989. A working version of the code existed and did reasonably well, but due to the complexity of the problem, the program lent itself to further significant improvements. It was thanks to Peter Berge, the "tracking guru" of CDF and one of the many unsung heroes of the experiment, that the tracking algorithm reached the level of sophistication required for a precision measurement of the Z mass. Peter was considered a magician by his colleagues. Among other things, he was probably the only CDF member capable of effortlessly reading and writing PostScript, a programming language used for creating vector graphics which is not meant to be written by humans, but by other programs. Berge worked alone and rarely left his small office in the oldest section of the CDF trailers complex. The office was often plagued with water leaks from the roof during rainfalls. He did not complain: when the dripping started, he used to place a bucket under the largest leak, close to the entrance of his office. He then taped to the bucket a handwritten notice which read "Do not disturb: leak test in progress." The pun was for insiders only: a leak test is a standard procedure employed during the commissioning of a gas detector. It was a joking reference to the CTC, the focus of Berge's work.

Building on Berge's improved tracking code, Aseet Mukherjee was the guy who pulled off the remarkable feat of inserting the beam constraint in the fit of charged particle trajectories. The beam constraint was also Berge's idea. The tracking code in Run 0 did not use the knowledge of the beam position in the fit to charged particle trajectories. Neglecting that information made sense in general, as it meant treating in the same way particles that originated from the decay in flight of a long-lived parent and those that originated in the primary interaction. Only the latter have back-propagated trajectories that intercept the path of the proton and antiproton beams. However, it made a lot of sense to enforce that condition on the trajectories of the decay products of the extremely short-lived Z boson, using a mathematical constraint in the fitting procedure. Such a beam constraint would significantly improve the precision of the momentum measurement, since it amounted to adding to the track one very well-measured spatial point at one end of its helical path: a pivot.
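The power of such a pivot can be illustrated with a toy straight-line fit. Everything below, numbers included, is an invented sketch, not the CDF code: the real fit was a helix fit in the magnetic field. Still, the arithmetic shows the essence of the idea: adding one very well-measured point at one end of the trajectory markedly shrinks the uncertainty on the fitted slope, the stand-in here for the curvature from which momentum is derived.

```python
import math

# Toy illustration (not the CDF code): uncertainty on the slope of a
# weighted least-squares straight-line fit, with and without one very
# precise extra point at x=0 playing the role of the beam spot.
# All numbers are invented for illustration.

def slope_error(points):
    """points: list of (x, sigma); returns the fitted-slope uncertainty."""
    S = sum(1.0 / s**2 for _, s in points)
    Sx = sum(x / s**2 for x, s in points)
    Sxx = sum(x * x / s**2 for x, s in points)
    return math.sqrt(S / (S * Sxx - Sx * Sx))

sigma_hit = 0.02  # assumed 200-micron hit resolution, in cm
layers = [(20.0 + 110.0 * i / 9.0, sigma_hit) for i in range(10)]  # layer radii, cm

err_free = slope_error(layers)
err_constrained = slope_error(layers + [(0.0, 0.0035)])  # 35-micron "beam" point

print(f"slope error, free fit        : {err_free:.2e}")
print(f"slope error, beam-constrained: {err_constrained:.2e}")
```

With these invented resolutions, the single precise point at the origin roughly halves the slope uncertainty, despite the ten ordinary hits already in the fit.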

In order to explain the procedure employed by Aseet to determine the coordinates of the collision point to be used for the beam constraint procedure, it is necessary to point out that the Tevatron collider bunched the circulating protons and antiprotons in 11-inch-long packets. Collisions at
the center of the detector were consequently spread out along the beam by the same length. The position of each collision could be determined by fitting all the resulting charged tracks to their common origin. The precision obtained on the coordinate along the beam was more than sufficient,
as its knowledge did not have a large impact on the fit of particle trajectories. On the other hand, the position in the plane perpendicular to the beam was crucial for the determination of transverse momenta. Using the knowledge of the coordinate along the beam and of the precise beam trajectory,
which is a very thin line, Aseet could determine the transverse position of the collision with high precision. The only problem was that the procedure required quite a bit of fiddling with Terabytes of data, as hundreds of different fits on as many data-taking periods were needed to extract the required beam line parameters.
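In code, the final step reduces to a simple extrapolation. The sketch below, with invented parameter values and names, shows how per-period beam-line fit results (a transverse offset and a small slope in each view) give the transverse collision point once the coordinate along the beam is known:

```python
# Hypothetical illustration of the idea (invented numbers, not CDF
# calibration constants): the beam line in each data-taking period is
# parameterized by a transverse offset and a tiny slope with respect
# to the nominal beam axis; given the vertex position along the beam,
# the transverse collision point follows by extrapolation.

beam_params = {  # per-period beam-line fit results (cm, cm/cm)
    "period_A": {"x0": 0.012, "dxdz": 3.0e-5, "y0": -0.034, "dydz": -1.5e-5},
}

def transverse_vertex(period, z_vertex):
    """Extrapolate the beam line to z_vertex (cm) for the given period."""
    p = beam_params[period]
    return (p["x0"] + p["dxdz"] * z_vertex,
            p["y0"] + p["dydz"] * z_vertex)

x, y = transverse_vertex("period_A", z_vertex=12.5)
print(f"x = {x:.5f} cm, y = {y:.5f} cm")
```

The hard part in practice was not this formula but the bookkeeping: fitting the four parameters separately for each of the hundreds of data-taking periods.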

Aseet (see picture) was well known for his mathematical skills. Once he produced the computer code that re-measured the trajectory of charged particles using the beam constraint, he was hailed as a hero by his colleagues. The better measured tracks had a dramatic effect on the momentum resolution, as demonstrated by the fact that the invariant mass peaks of particles reconstructed with the re-measured momenta narrowed down quite significantly. That proved that the measurement had become more precise. Encouraged by the general enthusiasm for his new method, Aseet kept changing his code on a daily basis for a while, providing a string of incremental improvements. That was a good thing, of course, but it forced everybody to re-run their own analysis programs one, two, three times a day in order to incorporate Aseet's improvements and benefit from them. It simply drove people crazy.

Steve Errede decided to look into Aseet's source code to try and figure out whether the beam-constrained fitting could at last be signed off. That way, people would spend their time focusing on their own studies rather than on the spinning and re-spinning of the data. Time was running short, and the better was increasingly looking like an enemy of the good. Also, Steve was genuinely interested in how the algorithm worked and wanted to satiate his curiosity. When Aseet sent Steve his fitting program, Steve suffered a shock. The code was long and complex, and this much he had anticipated. To make matters worse, the code did not contain any explanatory comments of the kind programmers insert at the most cryptic junctures amidst the Fortran instructions. That alone was enough to stall Steve's attempts at making sense of the long computer routine. But, in addition, the names of variables and parameters were absolutely nondescriptive of their meaning. As a result, the program was almost as unreadable to a human eye as its compiled image would have been, a machine-crunchable string of zeroes and ones. Steve had gotten more than he had bargained for! After spending a few sorry hours on the code, he decided that understanding it was not a priority; he had other things to worry about.

The kind of trouble that Errede ran into on that occasion was not an exception but rather a common occurrence in CDF. With few exceptions, particle physicists are not professional computer scientists. They usually acquire their programming skills by nose-diving into complex code projects when they join an experiment as undergraduates or graduate students. Hence, they typically learn to write very untidy, hard-to-read programs. They often completely neglect the good practice of inserting descriptive comments here and there. That is the result of being under constant time pressure. Also, young physicists are keen to use their own arbitrary conventions for the naming of variables and other coding choices. Unless the "Offline coordinator" (the scientist responsible for software development, who coordinates a group of software developers) enforces very strict rules and conventions for the programs that perform event reconstruction, calibrations, or other common applications, the result can be close to chaotic. However, in the case of Aseet's beam-constrained
fit, the source of trouble was not his coding habits. Aseet was an expert programmer and he knew how to work under pressure. In order to quickly produce a version of the track fitter which added a fixed point to the track at the very precisely determined beam position, he had used a fairly complex mathematical method. The formulas had been taken from a scientific article, and Aseet had found it practical and time-effective to stick to the naming conventions used there, to keep better control of what he was doing. The source of Steve's frustration was the choice of variable names in the article. Along with the code, Aseet had pointed Steve to that reference; but Steve, in turn pressed by time, had not read it.


"Three MeV?! You Must Be Kidding Me!"

The CDF scientists involved in the Z mass analysis were all quite skilled and determined, and they worked around the clock for six weeks in a row. Barry Wicklund painstakingly kept improving the analytical description of the detector material in the simulation of the apparatus. That gave a better agreement between the E/p distribution of electrons in data and simulation, and hence a smaller calibration error. Bob Wagner, an Argonne colleague of Barry's nicknamed "Argobob" to avoid confusion with a colleague of the same name, wrote simulation programs to understand the effects of electromagnetic radiation in Z production, improving the modeling of the Z mass distribution. William Trischuk took care of generating large samples of simulated Z boson decays. Morris Binkley helped Aseet with the beam constraint code, which required as input the measurements of track positions in the vicinity of the beam. Those were provided by the VTPC detector, which Morris knew inside out. Hovhannes Keutelian, Errede's graduate student, developed and tested the program that produced
a fit of the Z mass histograms: that was the code which would obtain the final result. The combined effort put together during those six weeks would pay great dividends in the future, since the problems the Z group had to solve stayed solved. The improved calibration of energy
and momentum measurements, the more precise tracking algorithm, and the better tuned simulations would be an asset to CDF, reducing systematic uncertainties in hundreds of future measurements.

One of the crucial tasks was to calibrate the momentum measurement provided by the tracking chamber to as high a precision as possible. Only once that is done can the Z mass be precisely measured from the momenta of the two muons in Z->mumu; the calibration of the electron energy
with the E/p method also requires that input. To calibrate the momentum measurement, one may use the tracks produced by the decays of J/Psi mesons to muon pairs. In 1989, the mass of the J/Psi meson was already known with an uncertainty smaller than 1 MeV thanks to earlier measurements,
so its signal was a perfect "reference" for the calibration procedure. Using the large sample of identified J/Psi decays to muon pairs, the measured J/Psi mass came out 3 MeV lower than expected. This was a one per mille downward bias. What was its source? The mass was measured with particle momenta, and momenta were determined from the curvature of their trajectories and the value of the magnetic field. Steve Errede and Bob Wagner could not believe that the precisely determined intensity of the magnetic field could be wrong by a part in a thousand. Maybe the bias was due to incorrectly modeled radiation effects? They decided to pose the question to the in-house theorist, Michelangelo Mangano.
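The essence of the check can be sketched in a few lines, with invented muon momenta standing in for real tracks (this is not CDF code, and the numbers are chosen only so the pair mass lands near the reference value): reconstruct the dimuon invariant mass and compare it with the J/Psi mass known from earlier experiments.

```python
import math

# Toy sketch of the calibration check, with invented muon momenta:
# reconstruct the invariant mass of a muon pair and compare the result
# with the reference J/Psi mass measured by earlier experiments.

M_MU = 0.1056584     # muon mass, GeV
M_JPSI_REF = 3.0969  # reference J/Psi mass, GeV (approximate value)

def invariant_mass(p1, p2):
    """Invariant mass of a muon pair from the two momentum 3-vectors (GeV)."""
    e1 = math.sqrt(M_MU**2 + sum(c * c for c in p1))
    e2 = math.sqrt(M_MU**2 + sum(c * c for c in p2))
    px, py, pz = (a + b for a, b in zip(p1, p2))
    return math.sqrt((e1 + e2)**2 - px*px - py*py - pz*pz)

m = invariant_mass((1.58, 0.0, 0.0), (-1.47, 0.5, 0.0))
print(f"m(mumu) = {m:.4f} GeV, shift = {1000 * (m - M_JPSI_REF):+.1f} MeV")
```

Repeating this over thousands of J/Psi candidates and fitting the resulting mass peak is what exposed the systematic 3 MeV offset.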

Michelangelo (see picture) was a young and brilliant theoretical physicist who in 1988 had joined the CDF experiment to work on QCD measurements. He had previously been a post-doctoral scientist at Fermilab in the theory division. There, he had developed new techniques for the calculation of
strong interaction processes. This made him an invaluable resource for the experimental studies that the QCD group in CDF was starting to carry out. At that time, QCD was not yet a very well-understood theory, and the CDF measurements were extremely interesting. The Tevatron was stepping for the first time in a totally new energy regime, where the understanding of strong interactions needed to be tested with experimental data.

The basic processes which allowed Mangano to carry out his QCD studies were those yielding many hadronic jets, whose energy was back then also still in need of a precise calibration. The typical uncertainties of measured jet energies were tens of GeV, hence thousands of times larger than the ones Errede and Wagner were puzzling over! When they explained their problem to Michelangelo, he could not help bursting into a hearty laugh: "Three MeV? M-e-V? Are you guys kidding me?" But indeed, a 3-MeV shift was a quite significant effect for the J/Psi. The statistical uncertainty on the mass measurement was just 1 MeV, so that 3-MeV shift could only be interpreted as a systematic bias. A failure to understand its source and correct for it would force Errede and Wagner to ascribe the bias to an unknown systematic uncertainty associated with the momentum measurement,
significantly blowing up the uncertainty on the Z mass. Michelangelo spent a good deal of time working on the problem from a theoretical perspective, but he could find nothing that could account for the mass shift. Indeed, final-state QED radiation did affect the measurement, but its effect was well understood and under control. The problem had to be elsewhere. And it was finally discovered that what was to blame was the experimentalists' habit of rounding numbers!

If you give a perfectly machined circular ring to an experimentalist and a theorist, asking them to tell you what its circumference is, the theorist will tell you it is pi times its diameter and will stop there with this perfectly correct, albeit unspecific answer. The experimentalist will instead duly measure the diameter as well as she can with a ruler, then multiply that by 3.14, reporting the result with three digits of accuracy. She will not bother to use a dozen digits in the expression of pi: the experimental uncertainty on the estimate of the diameter is larger than a few percent, so three
significant digits for pi are more than sufficient. However, if another experimentalist measured the diameter with a laser caliper, obtaining a value with six significant digits, and proceeded to multiply that by 3.14 forgetting that pi is in fact 3.1415926..., she would be wasting the high precision of her apparatus! A similar thing was happening with the reconstruction code that spewed out particle momenta from fits to the track trajectories measured in the CTC. The conversion from curvature to
momentum required an implicit multiplication by the speed of light, which equals 299,792,458 meters per second. Whoever had written the routine had not bothered to look up the exact value, inserting instead the approximate value of 300,000,000 meters per second commonly employed for back-of-the-envelope calculations. The difference was less than one per mille, but was still huge considering the high precision of the tracking measurement. That error had surfaced as a puzzling 0.1% shift in the momentum scale!
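The size of the effect is easy to reproduce. In the standard curvature-to-momentum conversion, p[GeV/c] ~ 0.3 B[T] R[m], the factor 0.3 is really the speed of light times 1e-9, so rounding c propagates directly into the momentum scale. The back-of-the-envelope check below (a sketch, not the CDF routine) shows the fractional bias and its knock-on effect on a J/Psi-sized mass:

```python
# The rounding bug in a nutshell: the curvature-to-momentum conversion
# contains the speed of light, so rounding c biases every momentum --
# and hence every reconstructed mass -- by the same fractional amount.

C_EXACT = 299_792_458.0  # speed of light, m/s
C_ROUND = 300_000_000.0  # back-of-the-envelope value used in the code

bias = (C_ROUND - C_EXACT) / C_EXACT
print(f"fractional momentum-scale bias: {bias:.2e}")

# Applied to a mass around the J/Psi value (~3096.9 MeV), a scale error
# of this size moves the peak by about 2 MeV -- the same order as the
# puzzling shift described in the text.
m_jpsi = 3096.9  # MeV
print(f"mass shift: {m_jpsi * bias:.1f} MeV")
```

The fractional bias comes out just under one per mille, exactly the ballpark in which the J/Psi peak had gone astray.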

(to be continued in part 4)

-----

Tommaso Dorigo is an experimental particle physicist, who works for the INFN at the University of Padova, and collaborates with the CMS experiment at the CERN LHC. He coordinates the European network AMVA4NewPhysics as well as research in accelerator-based physics for INFN-Padova, and is an editor of the journal Reviews in Physics. In 2016 Dorigo published the book “Anomaly! Collider physics and the quest for new phenomena at Fermilab”. You can purchase a copy of the book by clicking on the book cover in the column on the right.