    How SUSY Gets Excluded: A New ATLAS Search
    By Tommaso Dorigo | October 23rd 2011 11:53 AM

    October is passing and the neutrino saga continues to make headlines here and there, but I know that the excitement is bound to slowly fade, as the preprint claiming superluminal speeds ages on the arXiv without being submitted to a scientific journal.

    So it is better for us to return to our usual business: the exclusion of possible new physics models at the Large Hadron Collider. Mind you, we are not talking here of superluminal speeds or other quite fancy and unexpected things that nobody really believed could exist: before the LHC turned on, the majority of my colleagues were confident that the CMS and ATLAS detectors would see the first hints of Supersymmetric particle decays within weeks, or even days, of the first collisions. In retrospect (if you did not do it years back already) one is bound to ask oneself: how gullible can a physicist be?

    SUSY is around the corner. But where is the corner?

    Indeed, physicists are usually optimistic in the predictions they make for future discoveries. And the field of high-energy collider physics is no exception.

    Because new physics must be there, see: the Standard Model cannot work by itself all the way up to the Planck mass scale, and on the other hand there is a lot of stuff going on at the energy scale where electroweak symmetry breaking occurs -the one corresponding to the mass of the weak vector bosons. And that is the energy scale which we are currently in a position to test with the LHC.

    Given the above, it is both reasonable and convenient to believe that any new physics should make itself visible at the same mass scale at which it allegedly "mends" the shortcomings of the Standard Model. And SUSY, the theory that postulates the existence of a doubled set of particles and fields, is among the best candidates for new physics on the market.

    But things are turning out differently. Rather than seeing spectacular new cascade decays of SUSY particles, we are amassing large amounts of data that diligently follow the Standard Model predictions. Depending on how you look at it, this is either comforting or depressing.

    Actually, we understand our data so well that in the tens of thousands of distributions we have checked in ATLAS and CMS while collecting the data, we have so far found only a handful of 3-sigma or 4-sigma bumps here and there. And these fluctuations invariably die off when more statistics become available.

    I know, you'd die to see the bumps we bump into in our study of the data. Sorry -of course we cannot share that early and unmotivated excitement with outsiders. That would damage the perception of good science by the public, see. But okay, this is a different topic from the one I wish to discuss here today.

    The ATLAS analysis: jets and missing energy

    So let us go back to the topic in the title. ATLAS published a few days ago a neat analysis of 1.35 inverse femtobarns of proton-proton collision data, equivalent to roughly a hundred trillion collisions. They selected events containing six energetic hadronic jets plus a large amount of missing transverse energy: a signature to which the decay of Supersymmetric particles should contribute, if those existed and had a mass small enough to be copiously produced.
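
    (If you wonder where the "hundred trillion" comes from: it is just the integrated luminosity times the total inelastic proton-proton cross section. Here is my own back-of-the-envelope check in Python, assuming an inelastic cross section of roughly 70 millibarns at 7 TeV -an approximate number, not taken from the paper:)

```python
# Back-of-the-envelope: collisions = integrated luminosity x inelastic cross section.
# The 70 mb inelastic pp cross section at 7 TeV is an approximate, assumed value.
lumi_invfb = 1.35             # integrated luminosity, in inverse femtobarns
sigma_inel_mb = 70.0          # inelastic pp cross section, in millibarns
mb_to_fb = 1e12               # 1 millibarn = 10^12 femtobarns
n_collisions = lumi_invfb * sigma_inel_mb * mb_to_fb
print(f"{n_collisions:.2e} collisions")   # ~9.5e13, roughly a hundred trillion
```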

    I feel I am losing some of you already, so let me explain in easy terms what that signature is and why it could be the signal of Supersymmetry. What we call a hadronic jet is the result of violently kicking a quark or gluon off a proton. Quarks and gluons are the constituents of protons, but they cannot be pulled off as you would extract a raisin from a plum cake.

    If you pulled a quark off, you would feel a force increasing with the distance you take it from the proton, until the energy you spend turned into an additional quark-antiquark pair popped out of the vacuum: the antiquark would bind tightly with the quark you had in your hands, while the other quark would rush back inside the proton to balance the quantum numbers of the latter. You would end up with a meson (a quark-antiquark bound state) in your hands, not a single free quark. If you then pulled the quark and antiquark in the meson apart, you would cause another quark-antiquark pair to pop out of the vacuum: your efforts (and the energy spent) would thus create two new mesons, but again, no free quarks.

    The phenomenon above, repeated multiple times, is responsible for so-called "hadronization", and occurs whenever a violent collision pushes a quark off a proton with high energy. The end result is a spray of light hadrons which collectively retains the energy and direction of the original quark. As anticipated, we call such things hadronic jets: we measure their kinematics in the calorimeter systems as if they were a single "physics object". They are not; yet the simplification is quite useful.

    Typically, a proton-proton collision creates a pair of jets emitted back-to-back in the plane transverse to the incoming protons. But often more jets are created. This occurs when the two original quarks radiate off gluons before hadronizing: gluon radiation is not too different from electromagnetic radiation off a moving electric charge.

    And what about missing transverse energy? I must have explained it a hundred times in this blog, but there are always new readers to care for, so here goes. Missing transverse energy is the signal that one or more particles are flying out of the collision point unseen by the detector. These particles are usually neutrinos, but if SUSY were real they might also be fancier bodies called "neutralinos".

    Neutralinos are not the superpartners of neutrinos, but that is a detail I prefer to avoid discussing now. The lightest neutralinos in the particle spectrum are similar in their behavior to neutrinos, in that they do not interact with the matter in the detector, so they escape unseen. Now, what happens when a particle escapes unseen? The rest of the particles produced in the interaction appear to "recoil" against nothing. A neutralino goes to the right and you do not see it; the particles you do see all go to the left, and you wonder what the heck is going on. It is just as if you sent a billiard ball against another one head-on, and observed both balls moving to the left after the collision. Conservation of momentum forbids that! In the case of particles, something unseen must have flown in the other direction.
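
    (For the computationally minded, here is a minimal sketch of the idea -my own toy illustration, not the actual ATLAS reconstruction, which sums calorimeter deposits: the missing transverse energy is the magnitude of the vector needed to balance all the visible transverse momenta.)

```python
import math

# Toy illustration of missing transverse energy (MET): the momentum imbalance
# in the plane transverse to the beams. If the visible particles do not sum to
# zero transverse momentum, something unseen carried the difference away.
def missing_et(visible_particles):
    """visible_particles: list of (pt, phi) pairs, pt in GeV, phi in radians."""
    px = sum(pt * math.cos(phi) for pt, phi in visible_particles)
    py = sum(pt * math.sin(phi) for pt, phi in visible_particles)
    return math.hypot(px, py)   # magnitude of what must have escaped unseen

# Two jets almost back-to-back plus a softer third one: an imbalance remains.
jets = [(120.0, 0.1), (95.0, 3.0), (40.0, 1.5)]
print(f"MET = {missing_et(jets):.1f} GeV")
```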

    So, jets and missing energy. I have not explained to you just why a neutralino would arise preferentially together with many hadronic jets in an LHC proton-proton collision. This occurs because what would be produced first is a pair of squarks or a pair of gluinos, the superpartners of quarks and gluons. Squarks and gluinos feel the strong force and are therefore produced copiously, if they exist and are not too massive. They then decay into other, lighter supersymmetric particles, emitting ordinary quarks or gluons as they do.

    [Figure: cartoon of a typical SUSY decay chain, as described below.]

    The total process ends up with the two lightest supersymmetric particles, the neutralinos, plus (in most cases) a large number of hadronic jets. That is the signature sought by ATLAS. A cartoon picturing a typical chain of decays is shown above: a proton-proton collision creates a pair of gluinos (red); one of them is followed to show its decay to a quark-squark pair, the subsequent decay of the squark into a quark plus the second-lightest neutralino, then the decay of the latter into a lepton-slepton pair, and finally the lightest neutralino plus a further lepton. Note that the leptons present here are not searched for by the ATLAS analysis, which retains an inclusive nature (and has higher efficiency than more exclusive searches).

    So what did ATLAS see? Their results can be summarized in the sample figure on the right, which I took from the paper. The data (black points) line up very well with the sum of expected backgrounds in a histogram of the observed missing transverse energy (here normalized by the square root of the total visible energy, but please ignore that detail). Supersymmetric particles would produce an enhancement at high values of missing transverse energy, as shown by the dashed histogram.

    From the data plot to the exclusion plot

    The agreement of the data with Standard Model processes allows ATLAS to place stringent new limits on the production cross section of superparticles. These cross section limits are then compared to the predicted value of the new physics model, determining whether the data exclude it or not. To explain how this is done, let me give an example.

    Say that for a certain set of unknown SUSY parameters (a gluino mass of 500 GeV, etcetera) the theory predicted a cross section of 100 femtobarns for the sought signal: this means that in a 1.4 inverse femtobarn dataset a total of 140 signal events should have been produced -that's right, since to get the number of events you just multiply the cross section by the integrated luminosity.

    Now, let us assume you have made a tight event selection which, according to your simulation, should accept one in every 10 signal events (but reject backgrounds 999 times out of 1000, so that it is a sound thing to do). When you fit your data, the fit returns a probability density function (PDF) for the number of signal events N which is peaked at zero, and has a tail dying off quickly as N exceeds a few units.

    Let us reflect on the meaning of this PDF. It is the result of your search: it tells you how likely it is that the data in your hands contain a certain number of signal events. To get your 95% confidence-level limit you now need to find the number of events N_95 which divides the probability density function into two parts, one on the left containing 95% of the area, and the other on the right containing only 5% of it.

    If you find that N_95 equals 7, for instance, you can say that the search "excludes the presence of 7 or more signal events in the data" at 95% confidence level. This implies that a smaller signal is less incompatible with the data you got, while a larger signal is increasingly more unlikely. Also observe that 95% is a convention: even if a real signal were there, in one case out of twenty your analysis would exclude it.
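
    (To make the recipe concrete, here is a minimal numerical sketch -my own toy, with a made-up exponential density standing in for the PDF your fit would actually return:)

```python
import numpy as np

# Toy stand-in for the fit output: a PDF for the signal yield N, peaked at
# zero and dying off quickly. The exponential shape and its scale are made up.
n = np.linspace(0.0, 50.0, 5001)
pdf = np.exp(-n / 2.5)
pdf /= np.trapz(pdf, n)                  # normalize to unit area

dx = n[1] - n[0]
cdf = np.cumsum(pdf) * dx                # cumulative probability, left to right
n95 = n[np.searchsorted(cdf, 0.95)]      # point with 95% of the area to its left
print(f"N_95 = {n95:.1f}")               # here about 7.5 signal events
```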

    Since the 140 theoretically predicted events, multiplied by the 10 percent efficiency, make 14 predicted events in the final selection, your finding that N_95 = 7 allows you to say that the cross section must be smaller than half the theory prediction. The ratio of observed limit to prediction being smaller than unity in turn allows you to claim that you have excluded the new physics model corresponding to the choice of parameters you made when you simulated your signal.
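
    (In code form, this toy exclusion arithmetic boils down to a few lines; the numbers are the made-up ones from the example above, not from the actual analysis:)

```python
# Toy exclusion arithmetic with the made-up numbers from the example above.
xsec_fb    = 100.0   # hypothesized signal cross section, femtobarns
lumi_invfb = 1.4     # integrated luminosity, inverse femtobarns
efficiency = 0.10    # selection efficiency, from simulation
n95        = 7.0     # 95% CL upper limit on selected signal events, from the fit

predicted  = xsec_fb * lumi_invfb * efficiency   # 14 expected selected events
xsec_limit = n95 / (lumi_invfb * efficiency)     # 50 fb cross section limit
print(predicted, xsec_limit, xsec_limit / xsec_fb < 1.0)   # 14.0 50.0 True
```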

    Since Supersymmetry is not a physics model but an infinite set of physics models, depending on the values of more than a hundred unknown parameters, you can then play the game of determining which parameter space points are excluded by your data. This requires you to redo some of the analysis steps: you change the gluino mass from 500 to 550 GeV and re-run your simulation, recompute the signal cross section, and determine the new efficiency (it will be higher than 10%, in general, since the more massive your signal particles are, the easier it is for them to pass your kinematical cuts, which usually are of the form "energy above some threshold").

    You can then redo your fits with a modified signal template, determine a new N_95, and compare with the theory prediction (which is of course different for a different set of parameters). Suppose N_95 is now 10, and the theory prediction is 8: the point is not excluded this time. Your data allowed you to exclude a gluino mass of 500 GeV, but cannot exclude 550 GeV.
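
    (Schematically, the scan is just a loop over mass points. In the sketch below -my own, with toy stand-in functions so that it runs, since each step really hides a full simulation and fit campaign- the cross section falls steeply with the gluino mass, the efficiency rises slowly, and each point is flagged excluded or not:)

```python
# Schematic scan over gluino masses. The three helpers are toy stand-ins:
# in the real analysis each one hides a full simulation or fitting campaign.
LUMI_INVFB = 1.4

def toy_cross_section_fb(m):     # made-up steeply falling cross section
    return 100.0 * (500.0 / m) ** 10

def toy_efficiency(m):           # made-up: heavier signals pass cuts more easily
    return min(0.10 + 0.0002 * (m - 500.0), 0.30)

def toy_n95(m):                  # made-up: limit taken as mass-independent
    return 7.0

def scan_gluino_masses(masses_gev):
    excluded = {}
    for m in masses_gev:
        predicted = toy_cross_section_fb(m) * LUMI_INVFB * toy_efficiency(m)
        excluded[m] = toy_n95(m) < predicted   # excluded if limit < prediction
    return excluded

print(scan_gluino_masses([500, 550, 600]))     # {500: True, 550: False, 600: False}
```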

    Note that the above procedure is not immediate -it requires quite a bit of gymnastics. And since the SUSY parameter space has over 100 dimensions, it is utterly impossible to compare your data to all of these hypotheses one by one. What ATLAS does -in line with the other experiments- is to select a subset of the theories which can be specified by the values of far fewer parameters. In the so-called "mSUGRA model", for instance, you need to specify no more than five of them. Further choose two among the most relevant ones -say, the universal masses of scalars and fermions at the grand-unification scale- and you can construct a two-dimensional plot, each point of which is a different theory.

    Oh joy, finally you can display your complicated analysis results in a concise and tidy-looking form! You draw a line separating the parameter space points you have excluded from those you haven't, do the same for past experiments, and you are done -anybody looking at the graph will immediately see how much of the unknown territory you have managed to explore.
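
    (As an illustration of this last step -a toy of my own, with a fake exclusion rule in place of the real per-point limit computation- one can grid the two mSUGRA parameters and let matplotlib trace the boundary:)

```python
import numpy as np
import matplotlib.pyplot as plt

# Toy exclusion map in the (m0, m1/2) plane. The "excluded" rule below is a
# fake stand-in for running the whole limit-setting machinery at each point.
m0  = np.linspace(100.0, 1200.0, 120)    # universal scalar mass, GeV
m12 = np.linspace(100.0, 500.0, 120)     # universal gaugino (fermion) mass, GeV
M0, M12 = np.meshgrid(m0, m12)

excluded = (M0 / 1100.0) ** 2 + (M12 / 450.0) ** 2 < 1.0   # fake boundary shape

plt.contour(M0, M12, excluded.astype(float), levels=[0.5], colors="purple")
plt.xlabel("m0 [GeV]")
plt.ylabel("m1/2 [GeV]")
plt.title("Toy 95% CL exclusion boundary")
plt.show()
```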

    [A few notes on the graph: the two quantities on the axes are the universal masses of scalars (on the horizontal axis) and fermions (on the vertical axis). The filled areas contain parameter space points excluded by previous experiments; the thin black lines show the masses of the gluinos and squarks that the parameter space points correspond to. The ATLAS exclusion region is the one to the left of and below the thick purple curve. In green is shown the result of another ATLAS search, which considered events with smaller jet multiplicities, again containing large missing Et.]

    Expected versus observed limits

    One further detail is missing, however. What are those dashed blue lines bracketing the observed limit in the parameter space? They are concisely labeled "+-1 sigma expected limit", but what does that really mean?

    Those lines answer a different question from the most pointed one of "what parameter space points did you exclude with your search?". The question they address is "given your super-duper detector, your fantastic data, and your cunning analysis, how far into the parameter space were you expecting to exclude, in the absence of SUSY?". To answer this you need to go back to your computer, and sit through long sessions of so-called "pseudoexperiment generation".

    A pseudoexperiment consists of generating fake data sampled randomly from the background processes alone. The size of the data should be the same as that of the actual dataset you collected (what a statistician would call "conditioning"), but details may vary. Once you have the "toy" data, you pass it through your analysis chain and determine a new exclusion set -a set of parameter space points you would exclude if that toy data were the real thing.

    If you repeat the generation of pseudoexperiments a large number of times, you can then associate with each parameter space point a probability that it would have been excluded, computed as a simple ratio of successes over trials. You finally draw lines passing through the points excluded at least 84% of the time, 50% of the time, and only 16% of the time. Those are the dashed curves in the figure.
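
    (Here is a minimal sketch of the machinery -my own toy, where a single counting experiment with a known Poisson background stands in for the full analysis chain, and the 16th/50th/84th percentiles of the toy limits play the role of the expected band:)

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(1)

B = 10.0          # assumed expected background in the signal region (made up)
N_TOYS = 5000     # number of background-only pseudoexperiments

def upper_limit_95(n_obs, bkg):
    """Crude 95% CL upper limit on the signal yield s for a counting
    experiment: the smallest s with P(N <= n_obs | bkg + s) < 5%."""
    s = 0.0
    while poisson.cdf(n_obs, bkg + s) >= 0.05:
        s += 0.1
    return s

# One limit per toy dataset, each generated from the background-only hypothesis.
limits = np.array([upper_limit_95(n, B) for n in rng.poisson(B, N_TOYS)])

lo, med, hi = np.percentile(limits, [16, 50, 84])   # the +-1 sigma expected band
print(f"expected limit: {med:.1f}  (-1 sigma: {lo:.1f}, +1 sigma: {hi:.1f})")
```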

    A comparison of the actual exclusion curve with those "plus-or-minus-one-sigma expected" curves does not make you terribly more intelligent: it only reveals how lucky you have been in your exclusion game. If your curve stays on the "minus-one-sigma" side, you have been unlucky -your data are a bit more signal-like than the Standard Model simulation predicted, and you cannot exclude as much of the parameter space as you might have for data following precisely the SM prediction. This, incidentally, might also arise if the data did contain a SUSY signal, one you were not sensitive enough to detect. On the other hand, if your limit extends past the average expected one, your data "underfluctuated", i.e. they are even less signal-like than backgrounds alone, allowing you to set tighter limits.

    Summing up

    To conclude, I apologize for this long post to the very few of you who kept reading this far down. I hope I have clarified a little the cuisine behind these exclusion graphs, which are so common in particle physics nowadays. Of course: we cannot find signals, and yet we need to publish papers!

    So we publish exclusion plots, and pretend it's all that matters. And since there is no real boundary in the parameter space of SUSY, every time we extend our investigation we are free to change the axis boundaries to higher gluino-squark masses, showing that there remains a large swath of parameter space still to be explored...

    Comments

    How did the 95 per cent Confidence Level become the Consensus Standard for exclusion plots in particle physics?

    The weakness of a 95 per cent CL is shown by XKCD here:
    http://www.xkcd.com/882/

    Is the real reason that 99 per cent (or higher) is not the Consensus Standard for Higgs exclusion
    that
    the current data level accumulated by the LHC can now only reach 95 per cent
    and as you say
    "... we cannot find signals, and yet we need to publish papers!
    So we publish exclusion plots, and pretend it's all what matters. ..."
    and
    the best LHC Higgs exclusion plots that can be published with current data are at 95 per cent CL.

    Does that mean that we should treat the current torrent of 95 per cent Higgs exclusion zone plots and articles as
    unrealistic stuff that should be ignored in serious discussion of the search for how the Higgs really works,
    but
    necessary stuff to satisfy the "need to publish papers" and to keep the politicians who provide the funding happy?

    Tony

    Sorry, guys, but the Higgs is just a chimera because quarks are composite. You may say now 'come on, we haven’t seen it', and the truth is that we have seen several indications of it. The first one was found in 1956 by Hofstadter when he determined the charge distributions of both nucleons. (one can see them around p. 450 (depending on edition) of the Berkeley Physics Course, vol. 1 (Mechanics)). We clearly see that both nucleons have two layers of internal constituents. Unfortunately these results were put aside from 1964 on due to the success of the quark model and of QCD later on. From 1985 on we began to see more indications of compositeness, but we were so enthusiastic with the SM that we didn’t pay much attention to them. A partial list of them: 1) in 1983 the European Muon Collaboration (EMC) at CERN found that the quarks of nucleons are slower when the nucleons are inside nuclei; 2) in 1988 the SLAC E143 Collaboration and the Spin Muon Collaboration found that the three quarks of the proton account for only half of its total spin (other subsequent collaborations (EMC in 1989 and Hermes in 2007) have confirmed this result which is called the proton spin puzzle); 3) in 1995 CDF at Fermilab found hard collisions among quarks indicating that they have constituents (this was not published because CDF didn’t reach a final consensus); 4) Gerald Miller at Argonne (Phys. Rev. Lett. 99, 112001 (2007)) found that close to its center the neutron has a negative charge equal to -1/3e (inside the positive region with +1/2e); 5) new measurements of the EMC effect have been carried out by J. Arrington et al. at Jefferson Lab and they have shown that the effect is much stronger than was previously observed; etc.
    Gerald Miller wrongly attributed the -1/3 charge at the neutron center to d quarks, but as the neutron is a udd system we know (from QCD) that none of the three quarks spends much time at the center.
    The relevant paper on this subject is Weak decays of hadrons reveal compositeness of quarks which can be accessed from Google (it is at the top on the subjects Weak decays of hadrons, Decays of Hadrons and Weak decays).

    Therefore, we should go back and probe further the nucleons in the low energy scale, and carry on Miller's experiment with the proton.

    dorigo
    Hi Mario,

    quite an interesting comment. I think you might as well be right, but the point is that, quarks being composite or not, the Higgs mechanism is still the most economical way to make the Standard Model work. So I do not see the two things in conflict.

    Cheers,
    T.
    dorigo
    Yes!

    Or, to better specify at NLO: The 95% confidence-level results are INDICATIVE of whether a given searched process is favoured or disfavoured by the data. Indeed, the word "excluded" is ugly and untrue.

    In certain fora physicists are indeed trying to agree on changing the wording to "disfavoured".

    Cheers,
    T.
    Thank you for being honest. Now I understand 'finding nothing' is as difficult as 'finding something', perhaps even more so.

    "one is bound to ask oneself: how gullible can a physicist be ?"

    Isn't it a little more complicated than this? After all, the great advantage of SUSY always being just around the next corner is that it provides a readymade additional justification for building the next-generation machine. This, in fact, has been SUSY's real role for the last twenty years, first with the Tevatron (Top Quark +SUSY) and then with the LHC (Higgs+SUSY). Isn't this why physicists tend to be a little coy about dismissing the eternally failing SUSY? She is indeed uniquely valuable, as a huge slice of string theory also requires SUSY for its continued existence.
    She may be showing her age now but there's life in the old girl yet! And if we don't manage to squeeze another machine out of her, well, she can change her name to Split Supersymmetry and enjoy many years of continued favour in the string theory community.

    Great explanation of the exclusion plot, btw.

    dorigo
    Hi DB,
    I agree, it is more complicated. I was being sarcastic to some extent. But I do mean that many colleagues were really overenthusiastic about SUSY being within reach of the LHC. For the Tevatron, though, it was different -I do not think that SUSY was among the motivations for building the CDF or DZERO experiments. CDF was built to find the top quark; DZERO was built to measure the W mass precisely. Then in Run II the goals were a precision top quark campaign, B_s oscillations and CP violation, and the Higgs; I think SUSY was not among the main ingredients in budgetary proposals in 1996-2000.

    Cheers,
    T.
    Tommaso Dorigo: So we publish exclusion plots, and pretend it's all that matters. And since there is no real boundary in the parameter space of SUSY, every time we extend our investigation we are free to change the axis boundaries to higher gluino-squark masses, showing that there remains a large swath of parameter space still to be explored...

    Been following your work for a long time. If you were to hold still in a given, "we are free to change the axis boundaries to higher gluino-squark masses" do you think you could have found something? Obviously, you are saying no? Is that correct?

    Best,

    dorigo
    Hi Plato,
    I am not sure I understood what your question is -must be my poor English. What does it mean, "to hold still in a given"?
    T.
    Tommaso, you wrote:

    Because new physics must be there, see: the Standard Model cannot work by itself all the way to the Planck mass scale.

    Mike Peskin, in http://arxiv.org/abs/1110.3805 (linked from Peter Woit's blog) wrote (emphasis added)

    There is one more issue to discuss: Couldn't it just be the Standard Model? The answer to this question is, unfortunately, yes. We know that there are phenomena in Nature that the Standard Model cannot explain, most prominently, the dark matter of the universe and the nonzero values of neutrino masses. However, there exist very plausible models in which the explanations for those phenomena lie ten or more orders of magnitude above the reach of current accelerators. Within the realm that we can explore directly with particle physics experiments, there is no hint or anomaly we see today that could not simply disappear with improved measurements, leaving the Standard Model unchallenged. If the Higgs boson mass is above the LEP lower bound of 114 GeV and below the upper limit from the LHC quoted in Section 9, the Standard Model is self-consistent up to very high energies, all the way to the Planck scale [151].

    Thus, a possible outcome of the LHC experiments could be the end of experimental particle physics. The LHC experiments would confirm the predictions of the Standard Model at accessible energies, and would give no guidance as to the location of a higher energy scale at which the Standard Model might break down.

    dorigo
    Hi Arun,

    and who am I to disagree with Peskin? In fact, I totally agree with him! And thanks for the relevant quote.

    Indeed, the Standard Model cannot be a definitive theory, but there is no guarantee that its extension or breakdown can be found anywhere closer than 10^18 GeV or so. In fact one might claim the opposite: the very fact that the Higgs boson is in the neighborhood of 120 GeV (if we find it there in the next few months) is an indication that the model works up to very high energy scales without incurring unbounded potentials, etcetera.

    What I was writing in the post was a bit tongue-in-cheek -I have made the point so many times in this blog (that I do not believe anything new except a Higgs will be found by the LHC) that I was trying to provide the majority report in my introduction! I thought my jargon was a give-away (", see", or "it is both reasonable and convenient").

    Cheers,
    T.
    Actually, for a Higgs at 120 GeV the upper end of the new physics scale is around 10^9 GeV << M_GUT. If the Higgs is indeed lighter than 135 GeV, that would be the first direct evidence (apart possibly from neutrino masses) that the SM as an effective theory is not the whole story up to M_Planck. So that would actually be an exciting result.

    Thanks for taking the time to write this long post. It was very informative and interesting, and filled in some gaps in my understanding of what is going on. As a layman it is interesting for me to watch the drama of the SUSY exclusions. It is beginning to have the feel of a failed theory, where the proponents go to ever increasing lengths to try to save it. At some point all that human energy is better spent looking for the alternative that explains what the data are telling us.
    - Vance


    dorigo
    (message sent on behalf of Tony Smith):

    As Arun said  "... If the Higgs boson mass is above the LEP lower bound of 114 GeV and below the upper limit ... the Standard Model is self-consistent up to very high energies, all the way to the Planck scale ...".

    However, I do not see why that would mean "... the end of experimental particle physics ..."
    because
    the LHC could explore a considerable (several TeV) energy range above the energy level of ElectroWeak Symmetry breaking in which range some interesting questions emerge:

    1 - all particles would be massless, so what happens to the Kobayashi-Maskawa mixing matrix and CP violation in the massless realm?

    2 - with no mass, what would happen to neutrino oscillations?

    3 - what Twistor/Conformal phenomena might emerge in that massless realm (perhaps shedding light on Dark Energy)?

    4 - the Full Higgs and W+ and W- and W0 and Original Photon would no longer be combined
    into Remnant Scalar Higgs and W+ and W- and Z0 and Our Photon
    so how would their interactions look?

    5 - ... etc ... ?

    Tony

    Hello,

    one question if I may. If it turns out that no Supersymmetry is observed in the lab in the long run, does that mean String theory, and basically M theory, is out the door?

    Thank you.

    Hi Anon,

    If I may jump in here - No ;)

    Sociologically, it will surely be damaged, but physics-wise, not so much. There is no really good reason, apart from the Higgs hierarchy problem, why there should be low energy observable supersymmetry. My suspicion is that one could use something other than a Calabi-Yau space to compactify the 6 extra dimensions, and that would then kill off all the low energy supersymmetry. It would maybe be nearly impossible to do calculations in that, but as we know, nature knows perfectly well how to do nonperturbative calculations, and thus doesn't mind :)

    Apart from the Higgs hierarchy problem and the cosmological constant, I should maybe say, although the latter is terribly hard already in the SUSY case.

    Actually, there is more: coupling unification and dark matter.

    So essentially, if the SUSY scale is much higher, there is no experimental evidence even hinting at the existence of SUSY.

    Maybe people were too greedy and tried solving too many problems in one sweep -who knows?

    Sure, those are reasons why one would like SUSY, but the question was whether string theory would be excluded when/if TeV-scale SUSY is ruled out.

    Great post!
    I started reading your blog about two years ago (around the time the LHC started, again), and I have really learnt a lot about signals and exclusion plots. Before that, the most extreme experimental plot I could understand was the 95% CL ellipse plot for the S and T parameters. Thanks very much!

    I have a question concerning these SUSY exclusions for stops, Tommaso -

    All these SUSY exclusion plots seem to treat all the squarks the same (or at least collectively), but in reality I've often heard that the bounds on the stop squark are weaker because it decays into tops. Do you know how true that is? This might be kind of important, since the top gives big contributions to the fine-tuning, and if a relatively light stop were still possible, that might be partly ameliorated. But as much as I look, I don't find any statements about it...

    dorigo
    Dear Alex,

    searches for specific sparticle signatures are less in fashion lately, since everybody is using the high CM energy of the LHC to try and push up the global limits on constrained MSSM universal parameters.
    The simplification of restricting oneself to the cMSSM or other models is reasonable to me, but of course we are also studying exclusive final states. Searches for stops, sbottoms, charginos and neutralinos are underway in the collaborations.
    For instance the search for light stops and sbottoms is covered by a paper published by the ATLAS Collaboration, Phys. Lett. B 701 (2011) 398.
    Cheers,
    T.
    Thanks Tommaso,

    so my suspicion was correct: there is really not that much one can reliably say right now about these nonstandard scenarios... That's interesting, and maybe also a sign that the general wisdom about how supersymmetry is excluded below the TeV scale should be taken with a grain of salt until more general analyses have been performed with >1 fb^-1. At least that's what I tell myself ;)

    "Neutralinos are not the superpartners of neutralinos"

    Did you mean "Neutralinos are not the superpartners of neutrinos"?

    dorigo
    Oops, yep. Thanks!
    T.