    Triggering - The Subtle Art Of Being Picky
    By Tommaso Dorigo | January 17th 2010 08:39 AM
    The success of today's particle physics experiments relies to a surprisingly large extent on a seldom-discussed capability of the giant apparatuses that detect the faint echoes of subatomic particles hitting, or punching through, their sensitive regions: the capability of triggering.

    All hadron collider experiments, such as those run by the CDF and DZERO collaborations at the Fermilab Tevatron, or the ATLAS and CMS collaborations at the CERN Large Hadron Collider, are endowed with complex, state-of-the-art, outstandingly designed, and precisely crafted systems which are generically called "triggers". Without the perfect operation of their triggering systems, those beautiful experiments would be like behind-the-back-handcuffed jugglers, utterly unable to make any meaningful use of their ultra-sophisticated capabilities.



    (Above, a view of the CMS detector during assembly).

    I wish to explain why hadron collider experiments need a trigger system today, and in so doing I will have to digress into an explanation of a few ancillary topics. Let me start by telling you what really happens when two hadrons collide.

    Hadron collisions are boring -well, most of them are!

    Even if, through years of R&D and the design of a powerful accelerator, you have made sure that the energy of your protons is the highest ever achieved, when you launch them one against the other you are still by no means likely to produce a very energetic collision, let alone interesting, massive new particles! In fact, the real collision is between constituents, not between the two protons as a whole, and it takes a lucky chance for a constituent to carry a large fraction of the total energy of its parent proton.

    Protons are like bags of junk, and they are capable of flying one through the other without much happening. The quarks and gluons they contain are "hard" objects instead: it occasionally happens that a tin can inside one bag comes on a collision course with a bottle of gin contained in the other bag, and then -only then- an interesting collision takes place. Glass bits will fly away in specific directions, and we will learn something about the brands that the owner of the bag likes to drink.

    Now, as much as I like the above example with bags of rubbish, let me leave it alone. I need to explain that even in the rare cases when the interaction between proton constituents is indeed energetic, usually nothing much of interest happens. This is because the most likely interaction between one of the quarks and gluons bound inside a proton and its counterpart in the other proton is one governed by Quantum Chromodynamics -QCD for insiders. And QCD is not something we need to study in any more detail than we have in the last thirty years!

    Indeed, no: the Large Hadron Collider has not been built to increase our knowledge of QCD-mediated processes, despite the fact that it will nevertheless do so, to a very significant degree. The LHC, with the experiments sitting in its underground caverns, has been built to study the origin of electroweak symmetry breaking, and to search for new physics beyond the standard model.

    Now, electroweak processes are exceedingly rare compared to QCD processes when we take protons as the projectiles: for instance, only one collision in a million involves the exchange of an electroweak boson; and to produce one single Higgs boson, the particle which will finally answer many of our open questions on subatomic physics, we need to collide ten billion proton pairs, or more.

    A reference candle: the Higgs

    The rarity of the processes we are interested in studying poses an enormous problem to the design of a hadron collider experiment. Let us take as a benchmark the goal of collecting a thousand Higgs boson events in the very clean "four-muon" final state: such a sample (together with the others that are concurrently gathered) is sufficiently large to yield answers to some critical questions about the nature of the Higgs field and electroweak symmetry breaking.


    Unfortunately, if we want to collect a thousand "clean" Higgs boson decays we need to produce many more, for several reasons. First of all, the detector efficiency for correctly measuring all the final states of the decay is not 100%.

    A H -> ZZ -> 4-muon decay will produce four energetic muons, and if our detector is 90% efficient for each muon from such decays, a 4-muon event will be seen as such only 0.9^4 ≈ 65% of the time. But more importantly, the multitude of possible decays of the Higgs makes the four-muon final state a rarity: the H decays to a pair of Z bosons only twice every ten times, in the most favourable circumstances; and the Z boson decays to a pair of muons only 3.3% of the time... All in all, only 0.65 x 0.2 x 0.033 x 0.033, or about 0.0001 of Higgs decays, produce four well-identified muons in our detector. So we need to produce ten million of them to get a thousand!
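    For readers who like to check the arithmetic, here is the same back-of-the-envelope calculation as a small Python sketch. The efficiency and branching fractions are the rounded figures quoted above, not precise values; the exact product comes out near 0.00014, which I round down to 0.0001 to get the round "ten million":

        # Fraction of Higgs bosons yielding four well-identified muons,
        # using the rounded figures quoted in the text.
        eff_muon = 0.90       # per-muon detection efficiency
        br_h_to_zz = 0.20     # H -> ZZ branching fraction (favourable case)
        br_z_to_mumu = 0.033  # Z -> mu mu branching fraction

        frac = eff_muon**4 * br_h_to_zz * br_z_to_mumu**2
        print(f"fraction of clean 4-muon decays: {frac:.5f}")
        print(f"Higgs bosons needed for 1000 events: {1000 / frac:.1e}")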

    A hundred thousand trillion collisions

    Now, given the above-mentioned rarity (the one-in-ten-billion chance) of Higgs production, ten million Higgs events require us to produce a hundred thousand trillion collisions -a number which is better written as 10^17. How to picture 10^17 in a way we can grasp? If you had that many dollars, you could give about 15 million to every human being. If we are talking of sand grains, 10^17 of them are enough to dress up a hundred miles of shoreline.

    10^17 collisions are thus quite a lot. The question is then: how long does it take to collect such a large sample of collisions in our detector? This is not the most meaningful way to pose the question -much better is to change perspective, and refer everything to the time we are willing to wait.

    So let us start by asking ourselves what is an acceptable time to wait while our detector collects the data. Let me take seven years as a reasonable time span during which we expect to collect the information of the proton collisions. Seven years amount to about two thousand working days, for a typical accelerator. Machinists are not lazier than postmen or attorneys: but such large machines require periodic shutdowns, servicing, and other down-time activities that reduce the number of good days during which collisions can be produced.

    In two thousand working days there are, say, a hundred and seventy million seconds; but even a very efficient collider will not be running at full power continuously, and the same goes for the detector, so I feel justified in taking 100 million seconds as the full operational time of our giant toy over seven years. If we are to collect 10^17 collisions in a hundred million seconds, we are looking at 10^17/10^8 = 10^9, or a billion events per second that need to be acquired, digitized, and stored on mass storage.
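    The same estimate in a few lines of Python, with the rounded numbers used above:

        # From seven years of running to a required collision rate.
        naive_seconds = 2_000 * 86_400  # ~2000 working days -> ~1.7e8 seconds
        live_seconds = 1e8              # rounded down for machine and detector downtime
        collisions_needed = 1e17
        print(f"required rate: {collisions_needed / live_seconds:.0e} collisions/s")  # 1e+09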

    A billion events per second? For crying out loud -Houston, we have a problem! Actually, there are at least two big problems with such a large rate.

    The first problem is that the detector takes some time to record the passage of the particles, and even more time to be read out. The physical processes that take place in the detecting elements, allowing crossing particles to be identified, are very fast but not instantaneous; the electronics reading out the signals of those detecting elements also takes its time. We are simply not capable of acquiring data at the rate we are talking about here, i.e. one event per nanosecond -not with all of the detector subsystems, at least: some of them are slower than others.

    The second problem, and the most grievous one, is that today's detectors produce on the order of a megabyte of raw data every time they are read out, and with today's or tomorrow's technology, storing on the fly a billion megabytes per second -that is, a thousand terabytes per second- is an unmanageable task.
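    To see how hopeless storing everything would be, just multiply the two round numbers:

        # Raw throughput if every collision were written out.
        event_size = 1e6   # ~1 MB of raw data per readout
        rate = 1e9         # collisions per second, from the estimate above
        print(f"{event_size * rate / 1e12:.0f} TB/s")  # 1000 TB/s, a petabyte per second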

    Does the above mean that the whole endeavour is useless? No: remember, I said it at the beginning: most of the collisions are not interesting. We may throw them away, if we are capable of selecting and storing the really golden ones. It is a happy coincidence that we do not need to store all the events, because we could not do it anyway!

    So, despite the approximations, and the fact that I am hiding from your view the added complication of collecting and storing multiple events in a single detector readout -something which may "ease" the problem by up to a factor of 100- this is, in a nutshell, why hadron collider experiments need a trigger system.

    So, the Trigger...

    A trigger is an online selection system which takes care of reading out the fastest components of the detector, and deciding on the fly, with that partial information, whether the event is interesting enough to be stored to disk, or deleted forever. It is a critical device! What we discard during data-taking will never make it into our analyses. So the selection needs to be very wise.



    (Above, a block diagram of the CMS L1 trigger).

    Triggers usually make heavy use of a parallelized design, exploiting the symmetry of the detector: the same operation may be performed at the same time on different parts of the detector. Yet on a global scale they are essentially serial devices: in fact, they are usually divided into "levels". At CMS there are only two such levels, while the other collider experiments have three. The idea is that each level selects the data that becomes the input for the following one, which has more time available and more information with which to make a better decision.
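    As a toy illustration of this level-by-level funnelling, here is a minimal Python sketch; the rejection factors and level names are invented round numbers, purely for illustration, not the experiments' real figures:

        import random

        def level_1(event):
            # fast, hardware-like decision on a partial readout: keeps ~1 event in 10,000
            return random.random() < 1e-4

        def high_level(event):
            # slower, software decision on the full readout: keeps ~1 event in 15
            return random.random() < 1 / 15

        events = range(1_000_000)
        after_l1 = [e for e in events if level_1(e)]      # each level sees only
        stored = [e for e in after_l1 if high_level(e)]   # its predecessor's output
        print(f"input: {len(events)}, after level 1: {len(after_l1)}, stored: {len(stored)}")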


    In CMS the first-level trigger uses a readout of very specific, fast components of the detector, and tries to sort out the most energetic collisions, and those that are likely to contain interesting objects, such as electron or muon candidates (particles of high value in hadronic collisions, because they only originate from electroweak interactions!). The decision is made in a very short time -of the order of a microsecond or so- with custom-made electronic components, and it is usually a "NAY!". That is, the first-level trigger typically rejects 9,999 events in every ten thousand!

    By cutting the input rate by a factor of 10,000, the first-level trigger allows the second-level trigger of CMS to breathe. At the LHC, the collision rate is not a billion per second as I calculated above in back-of-the-envelope style, but rather 40 million bunch crossings per second. The reason is that at full power a single crossing between a "bunch" of protons orbiting in one direction and the opposite one coming toward it will on average yield 25 proton-proton collisions! That is, by having very dense proton bunches, we increase the number of collisions occurring simultaneously. An "event" is thus the combined result of those 25 simultaneous interactions -it will be the task of the analysis experts to sort them out offline.


    Now, what was an input rate of 40 million events per second becomes a rate of four thousand (mind you, I am quoting numbers rather liberally here -I am not even bothering to check the real figures in the CMS technical design report. I do it on purpose: what I would like you to grasp is the essence of the idea, not the details of the implementation).
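    A quick sanity check of these round numbers (which, as said, I am quoting liberally):

        # Reconciling 40 million crossings/s with the earlier billion collisions/s,
        # then applying the first-level rejection factor.
        crossing_rate = 40e6   # bunch crossings per second
        pileup = 25            # proton-proton collisions per crossing, on average
        print(f"collisions per second: {crossing_rate * pileup:.0e}")  # 1e+09

        l1_keep = 1e-4         # the first level keeps ~1 crossing in 10,000
        print(f"rate into the high-level trigger: {crossing_rate * l1_keep:.0f} Hz")  # 4000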

    Four thousand events per second? This is quite manageable! In 250 microseconds the whole detector may be read out with care, the information processed in fast computer systems, and a well-informed decision taken on whether the event contains electrons, or muons, or suspiciously large missing energy (a hint of the production of neutrinos, or dark matter particles!), or possibly new massive particles that exploded into very energetic jets of hadrons.

    The devices performing this final decision at CMS are called "High-Level Triggers". Note the plural: this is a collection of software programs optimized for speed, each of them scanning the detector information in search of a signal of one or two interesting objects -electrons, or muons, or jets, or missing energy, or photons, or tau candidates. Each trigger takes its own decision, and if that decision is a "Yes", the event is sent to an appropriate output stream.
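    In software terms, one may picture the high-level triggers as a set of independent filter functions, each routing accepted events to its own stream. The sketch below is purely illustrative: the object names, thresholds, and stream names are invented for this post, not CMS code:

        def has_energetic_muon(event):
            # hypothetical threshold: any muon above 20 GeV of transverse momentum
            return any(pt > 20.0 for pt in event.get("muon_pts", []))

        def has_missing_energy(event):
            # hypothetical threshold: more than 100 GeV of missing energy
            return event.get("met", 0.0) > 100.0

        hlt_paths = {
            "muon_stream": has_energetic_muon,
            "met_stream": has_missing_energy,
        }

        def route(event):
            # every path takes its own decision; an event may enter several streams
            return [stream for stream, accepts in hlt_paths.items() if accepts(event)]

        print(route({"muon_pts": [35.0], "met": 12.0}))  # ['muon_stream']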

    The details of how the storage is handled are beyond the scope of this far too long article, and I will not discuss them here. What is interesting is that of the forty million collisions occurring per second in the core of the detector, only two or three hundred are stored for later analysis!

    So in the end, what CMS does -like ATLAS, and the other experiments operating at hadron colliders- is to restrict its search to a very narrow subset of the original bounty of collisions. If the trigger system has done its work well, those few survivors will tell us what we want to know. If, however, the trigger let interesting events of some exotic kind slip through its mesh, to be lost forever, we would be in trouble!

    We are very confident that no interesting physical processes get lost down the drain. Yet despite our self-confidence, we have set "parachute triggers" in place, which collect events without applying any selection. That is, these triggers do select events, but they do so blindly: one event in a million or so gets recorded regardless of its features. By studying those "unbiased" events, which passed thanks to a random choice, we gain confidence that we did not lose anything interesting!
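    The logic of such a "parachute" trigger is trivial by design. A minimal sketch, where the prescale factor of one million is the round number quoted above:

        import random

        class ParachuteTrigger:
            """Keeps ~one event in N at random, blind to the event's features."""
            def __init__(self, prescale=1_000_000):
                self.prescale = prescale

            def accept(self, event):
                # no selection at all: a random one-in-a-million pass
                return random.random() < 1 / self.prescale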

    Conclusions

    I hope this long post has been less selective than the CMS trigger: of the ten thousand readers who will load it in the next few weeks, maybe a thousand will make it through the first page, and maybe a hundred through the first half. I would be happy if ten readers were interested enough in the topic to read through to this conclusions section: if you did so, you belong to the select few who justified the writing of the whole piece, just as the few events that get selected every second by the CMS trigger justify the whole multi-billion-dollar project!

    Comments

    Extremely interesting.

    Your blog has renewed my interest in particle physics in a big way,

    Very interesting. I'm sure there will be more than 10 people to reach the end. Please do not succumb to the Twitter limit!

    dorigo
    theVoid&Adam,
    glad I made 20% of my target of ten readers already!
    Cheers,
    T.
    Thanks for the article! Very interesting to read.

    ... and another 10% ... keep on postin' :-)

    Very interesting, I have also finished it.
    I enjoy a lot reading your blog, which keeps my interest in particle physics growing each day.

    Thank you Tommaso.

    You got another one... we are 5 already...

    Well, it seems we are 6...

    Thank You Tommaso
    I enjoyed your walk-through of the challenges involved in collecting the selected events that may produce the Higgs signature.
    It also puts into perspective the amount of time it will take before the experiment collects enough signatures to say whether they have the Higgs or are looking at new physics beyond the standard model.
    In the meantime (and beyond), I plan to keep stopping by every day and learn something new.

    Please keep posting :)

    Always interesting posts, from start to finish.

    logicman
    Data processing is always a bottleneck in science.

    When I first started running linguistic text analyses on a PC, the 8086 processor would take a week for one ASCII text book.  My current 2-core Pentium will similarly process 10,000 ASCII books in 30 seconds.  Part of that speed increase is down to the hard-drive data rate/bandwidth, since my programs make extensive use of virtual memory*.

    From looking at your TX-RX trigger link circuit board, I would say that your biggest problem is bandwidth: that basic board design goes back to the 1960s, and the surface-mount components look definitely '80s to me.  Perhaps the next 10 years will bring the advances (and budget!) needed for you to process more raw data 'on-the-fly'.

    * Virtual memory: use of hard drive space to temporarily store intermediate program data where the computer memory is insufficient for the task.
    Thank You Tommaso,

    This is a very interesting post. It does illustrate one of the many challenges you are facing in this fascinating exploration of matter.

    I was wondering, since you said "Triggers usually make heavy use of a parallelized design", what kind of processing logic you use to make the triggers -are you using FPGAs?

    Only one more to trigger your happiness,

    Namaste

    S.

    Incredibly well-written post. I read it with great pleasure - until the end! Science is science when it is explainable in understandable words.

    dorigo
    Well, thanks everybody - this is very encouraging. We already got of the order of ten readers down to the end of the post, on a total readership of barely 260 so far!
    Reader feedback is important not just for my large ego, but to gauge whether the article is written in a sufficiently clear and easy way, or whether it is too arcane... Looks like I managed to make things understandable this time.
    Cheers,
    T.
    I have been reading about the LHC for a year, but I did not get an idea of the limitations of the detecting system in the LHC.
    Your article has given me an idea of the importance of the detecting system in the LHC. Will we be able to detect a reasonable number of Higgs bosons in this experiment?
    Arun.

    dorigo
    Hi Arun,

    there are two main experiments collecting LHC collisions, ATLAS and CMS - I belong to the CMS collaboration so I discuss CMS in the article above, although ATLAS has pretty similar constraints and a three-level trigger.

    The focus has been on the LHC, because that is the wild bet that was placed twenty years ago, and now materializes in a fantastic machine. In the meantime, however, the two giant detectors have been designed and built, and have been tested with cosmic rays during the last two years. There is little doubt that they will work as planned, as the recent results on the first collisions collected at the end of 2009 confirm (see a post I published here a week or two ago).

    The issue is really how much data the two experiments will be able to receive from the LHC, which is by far the most daring part of the whole complex.

    Cheers,
    T.
    Hi,
    I have written a blog on the LHC. It is of a general nature, intended for a non-science reader.
    Will you please spare some time to go through it and give your comments/suggestions?
    Thanks.

    Sorry.
    My blog site is: arunsdixit.blogspot.com and my blog name is `My Blog.'
    Arun.

    rholley
    I have forwarded this link to "our chap" who teaches particles and symmetry.
    Robert H. Olley / Quondam Physics Department / University of Reading / England
    Hi Tommaso,

    I made it through your post and got the feeling that I understood it. Which goes to show that you are very good at explaining complex problems (all those blogging years!).

    Is it important that CMS uses two levels of triggers and all the others three?

    What do people at electron colliders do? Can they afford to be more inclusive due to the lower event rates? Or can they skimp on the electronics for triggers? To phrase the question differently, do the enormous event rates at the LHC mean that you have a Guinness Book of Records entry for the most restrictive triggers in high energy physics?

    Cheers,
    Martin

    dorigo
    Hi Martin,

    two levels are enough, as CMS shows, but three might in my opinion be a more logical choice. A fast intermediate level allows one to fine-tune some selection criteria, increasing the acceptance for objects which the first level is not capable of recognizing very precisely, and which cannot all be passed to a high-level trigger. CMS opted for a two-level design to exploit the increasing availability of fast software processors. I am unable to judge from here whether it will end up being the better choice or not.

    As for electron colliders, they still need a trigger, but they do not have the problem of rate that hadron colliders have. I think the highest rates are in any case those of fixed-target experiments.

    Cheers,
    T.
    I liked the "bag of garbage" description of the proton. So does that make nuclear physics the equivalent to dumpster diving?

    Before I started grad school at U. Cal., Irvine I worked that summer for the physics department. My primary task was designing and building a CAMAC trigger module. It was one of the early time projection chambers. The experiment was looking for double beta decay in a selenium isotope.

    Good article; I think you may have to revise your readership model to allow for more than 10 expected events.

    Great post. Thanks for putting it all in perspective.

    Please explain the concept of triggering and its mechanism for a lay science reader.

    Arun.

    dorigo
    Arun, what do you mean? I think I did exactly that above -or did you want something more specific?

    Maybe you want to know what exactly happens when a proton-proton collision occurs: particles are produced and emitted in all directions, and they hit the detector components. A meaningful question would then be: "how does that generate a trigger?"

    The detector elements are read out when a bunch of protons crosses another in the core of the detector. The resulting information is processed by the front-end electronics, and already at this stage some information is available on whether there are interesting particle candidates. The information is collected by a hardware system that takes a decision. This is what is called a "Level-1 trigger". Following that decision, the full detector information for that collision is collected and sent to the high-level triggers.

    Cheers,
    T.
    No doubt a very instructive post and an interesting, original standpoint, but let me add a few side remarks:

    “Protons are like bags of junk, and they are capable of flying one through the other without much happening. The quarks and gluons they contain are "hard" objects”. This is surely a risky metaphor that may invite one to think that the bag content -quarks, gluons, and QCD- would be junk. Sarcasm aside, the pedagogic purpose is quite illustrative. Nevertheless, the concept of a structuring orbital would fit better in the scientific context.

    But I have a more serious comment to add about the standard model, i.e. the conceptual instrument with which all data are interpreted:

    “And QCD is not something we need to study in any more detail than we have in the last thirty years!” A quite interesting and surprising comment in spite of being somewhat belated. Let me express my opinion about the standard model.

    I have been following the weird developments of the standard model for 4 decades. Besides thinking that it is a raving model, it does not solve anything but just makes things worse. Its conceptual weft is based on no fewer than 62 primordial elements. Let us re-count them. Just to account for hadrons, the sole particles supposedly composed of quarks, 6 families of quarks are needed, with their 3 colours, which gives 18 different quarks, plus the same number of anti-quarks, for a total of 36 different quarks.

    Still, 26 elementary particles stand outside this scheme, not being composed of quarks. These are: 12 different leptons, 8 types of gluons, the 3 vector bosons W+, W- and Z0, the photon, the presumed graviton and the Higgs boson. So, the standard model stands on 62 primordial elements. How poor is its reductive power. And let us not get fooled: putting these particles in doubtful families does not reduce their number.

    Should I recall that our universe is made of only 4 stable elementary particles: the proton and the electron, which provide its material body, plus the photon (proceeding from transitions of spin 1) and the neutrino (proceeding from transitions of spin 1/2, and which manifests in 3 different states). Possibly, the still undetected, fugacious graviton should be added. The presumed other particles are no more than fugacious excited quantum states, with an average mean-life between 10^-6 and 10^-23 s. They are just quantum scintillas.

    Can any reductive and clarifying value be conferred to a model based on 62 primordial elements? It surely looks quite doubtful.

    Protons are void in essence, and their presumed constituents, quarks and gluons, also generically called partons, are scarcely observed to scatter in proton collisions. What is more void than an orbital? Considering partons to be quarks and gluons belongs to the Standard Model dogma. In the proposed orbital model of the proton structure, which applies to all elementary particles and not just to hadrons, the parton becomes nothing else but the everyday integer electric charge.

    The basic standpoint of the Orbital Model is the concept of an orbital (the same as in atomic physics) of a single parton (having two states: the positive and the negative integer electric charge), and not a bag full of different types of partons. It is not convenient to complicate things more than needed. This proposal was published in 1999, but has not been considered by any official channel. What is the reason?

    Those with some curiosity in knowing more about the Orbital Model may turn on the following links:

    Fundamentals of the Orbital Conception of Elementary Particles http://arxiv.org/abs/hep-ph/0102268

    The proton gyromagnetic g-factor: an electromagnetic model http://arxiv.org/abs/0912.4962

    Nature and Quantization of the Proton Mass: An Electromagnetic Model http://arxiv.org/abs/physics/0512108

    And to distract a while you may also take a look at the following animation:

    Standard Model versus Orbital Model http://www.youtube.com/watch?v=fQrIuu31pFA

    Interesting and informative as always, down to the last drop. I've just been doing work on a compression scheme for some data acquisition (in a completely unrelated field) that needs to be relayed through a narrow pipe; if we can't compress it enough we'll have to resort to triggers.

    excellent post! conveys all that is important and in such an understandable way, kudos to you and thanks for making this material so accessible to your readers!