    Why I Am Rooting For Lubos
    By Tommaso Dorigo | June 15th 2014 04:12 PM | 30 comments
    With three months still to go and 663 teams participating, the Higgs challenge has not yet entered its hot phase, and yet there is already a lot to watch on the leaderboard at the Kaggle site.
    In the last few days there has been a total revolution in the leading positions, and a considerable increase in the best scores. And Lubos Motl is third again (he would be first if the other positions had not moved), implicitly answering some detractors who commented on a previous post here on the matter. See the standings below.


    "Ok, but why are you rooting for Motl ?," you might ask. After all, he has often been rather harsh with me, getting close to positions liable of a libel cause. My main guilt to him is to be a friend of Peter Woit, whom he perceives like the absolute evil, as Peter has been and is a critic of string theory. Plus, politically I belong to the left, while he is on rather reactionary positions. Further, I do believe that we are negatively affecting our climate with CO2 emissions, while he says it's baloney. Anyway, why am I supporting him ?

    Well, first of all, I will always prefer an outsider with talent and good ideas to a big team with lots of resources and experience, and I have the impression that some of the leaders of the Higgs challenge belong to the latter category. And it would be nice to see Lubos give a talk at CERN explaining to our ATLAS colleagues how to optimize their discriminating tools for complex multivariate searches.

    One further question you might have is, "Doesn't a theorist winning a categorization contest sound like a defeat for the experimentalists?" After all, there are dozens of experimentalists participating in the challenge, too, and they are the very physicists who analyze LHC data in search of new particles - is it not like saying that the LHC experiments are staffed with incompetent ignoramuses? No, that is not the case, and let me explain why.

    Finding the Higgs in a background-ridden dataset does involve a high-performance selection of signal-like events. There are dozens of ways to perform such a selection, and some approaches are known to produce better discrimination than others. What one does with such tools is to take the initial dataset, which contains, say, 100 Higgs boson events immersed in 100000 background events, and work one's way down to a super-selected sample where, by discarding background-like events, one is left with 20 Higgs events and 25 background ones.

    A "counting experiment" may then allow the physicist to say "AHA: I expected 25+-5 events if the data were only made up by backgrounds, but I see 45: there is a significant excess due to what could be Higgs boson decays in the data!".  The "background-only" hypothesis is very unlikely in such a circumstance, and we can speak of "evidence" for the Higgs signal contribution in our selected data.

    It is important to realize that a selection as stringent as the one in the example above rests on the belief that the kinematics of the signal and backgrounds conform exactly to those derived from simulation programs. Now, experimental physicists know that it is very dangerous to trust these simulations down to the last bit: in the past forty years dozens of spurious signals have been observed in super-selected data, obtained by relying too much on the tails of the distribution of this or that kinematical variable. See the graph below to understand exactly what I mean.



    While it is easy, when studying a single kinematical distribution, to understand when one may rely on the simulation and when one should instead be careful, things get murkier when one considers a multi-dimensional space of 30 variables. The inputs to these Monte Carlo simulations are not only our theoretical knowledge of the physical processes producing the various particle reactions, but just as much a lot of modeling - of the parton distribution functions that decide how large a fraction of the protons' momenta is carried by the partons that give rise to the actual subnuclear process; of the fragmentation of final-state quarks and gluons into jets of stable particles; of the interactions that take place as those stable particles hit our detectors.

    You clearly see that we have a problem. How can we be sure that we are perfectly simulating all corners of a 30-dimensional phase space? We cannot. What we can do is carefully work out estimates of the systematic uncertainty that our modeling brings in. The final "background estimate" of a tight multivariate selection is thus plagued not just by the statistical uncertainty, but by that systematic error as well. Now, it is very hard to estimate a systematic uncertainty correctly even when we discuss a single one-dimensional distribution; imagine how hard it becomes when we deal with 30 dimensions all together, and with a complex algorithm we have no intuition about, which freely picks around this space the regions where more signal should be hiding.

    Because of the problem of estimating the systematic uncertainty of a complicated selection, experimentalists usually try not to push the selection too far. More stringent, more optimized selections may suffer from larger systematic uncertainties, because the stricter those selections are, the larger the uncertainty on the accuracy of the simulations in the narrow regions of phase space that get selected as signal-rich. Because of that, there is a compromise to strike.
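    (A hedged sketch of this compromise, using a common rule-of-thumb approximation in which the systematic uncertainty on the background is added in quadrature with its Poisson fluctuation, Z = s/sqrt(b + (delta*b)^2); the numbers are invented for illustration:)

        import math

        def approx_significance(s, b, delta):
            """Approximate significance of s signal events over b expected
            background events, with fractional systematic uncertainty delta on b."""
            return s / math.sqrt(b + (delta * b) ** 2)

        # Loose selection: huge background, but a well-modeled region (small delta).
        print(approx_significance(s=50, b=2000, delta=0.02))   # ~0.8

        # Tight selection: far better s/b, but a poorly modeled corner of phase
        # space inflates the systematic, eating back much of the nominal gain.
        print(approx_significance(s=20, b=25, delta=0.05))     # ~3.9 if modeling were reliable
        print(approx_significance(s=20, b=25, delta=0.40))     # ~1.8 once it is not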

    The compromise is usually to select variables one is relatively confident the simulation describes well, and to avoid pushing the investigation of the multi-dimensional space too far. You therefore understand that experimental physicists, while quite interested in knowing which software algorithms produce the best discriminating power on specific problems, will not automatically let those algorithms run at their optimized points, but will rather interact with the software and restrict its freedom a bit, preventing it from using too many variables or selecting too narrow regions of phase space.

    Knowing the above, you understand why, as an experimental physicist, I do not feel dispossessed of my legitimate expertise as a data analyst - I may not always pick the most powerful classification method, but the reason is usually that one has to consider everything together when optimizing a search - not just the statistical significance after the selection, but also the systematic uncertainties that the method carries with it.

    Comments

    He is also a misogynist and a racist. It's hard to like him, especially if a person is, for example, black or, as in my case, married to a black woman. It's hard for me to contain my thoughts of killing him, though that is unlikely to happen, since I wouldn't like to be in jail, and given the consequences, and given that he is such a worthless human being, it is not worth the effort.

    But this is not exactly why I wouldn't like to see him win: despite everything he went through, it was not enough to make him let go of his horrible arrogance. He cannot admit to being wrong, even when he is. I remember when he had a hard time admitting he was wrong about some issue with experimental data, I think. I am sure you will remember; I don't care to look for it. For things involving non-string theories, he resembles a 5-year-old throwing tantrums. So I hardly expect a sane analysis, even though he could do it.

    I can also clearly see he is the one working harder, far harder, than all the other contestants (it seems the only one with more submissions than him is officially a 3-person team). So if he loses after making hundreds of submissions, every day at the submission limit, that might do something to make him a bit more humble, I really hope.

    I think your problem is that you can't handle the truth being the way that it is, and so you use language to psychologically manipulate people into pretending they don't see that the emperor is naked.

    There will always be bigots like you using hate to shut out the truth that there are average differences between men and women, between sub-Saharan Africans and native Europeans. The internet is freely available to all, yet you'll find the Kaggle competition dominated by white men rather than by women of African ancestry.

    Lubos is a character: so far, Lubos has not advanced the quest for unification, not even by a small step. Lubos could use his intellect to tell us where the fine structure constant 1/137.04 comes from, but he is unable and even unwilling to do so. What he writes, for example on extra dimensions, is in conflict with experimental data. The way he treats other people, including women, is not really an example to follow, to say the least. The universe is playing a nasty trick on him: it is telling him that he is smarter and morally better than most people he knows, while the facts show the opposite. And other people tend to look at the facts. That is his tragedy.

    dorigo
    Sorry guys, but that's already a bit farther than I wanted to get in this thread. Please stop discussing Lubos Motl here. He is what he is, but this post is actually about systematic uncertainties (although I will admit that I put him in the title as I am using the fact that a theorist is scoring well in this challenge to drive home my points).

    So Anon, you would like to kill a person because that person is a racist? And does that make you a better person because...?

    Cheers,
    T.
    Yes, that's the reason. No, it wouldn't make me a better person at all; I just feel like it for its own sake. Obviously I won't do it, since I don't want to spend time in jail and... I don't have the means to do that.

    dorigo
    Sorry Anon, what I meant is that what you said already puts you at a very low level on my scale of humanity.
    T.
    I really don't care. The way you are is excellent for me, anyway.

    BTW, notice that I live in a developing country, and the suffering of multitudes of poor people, including my wife, who had to scavenge garbage, makes me furious.

    I have been following Lubos Motl's blogging for years. He is one of the most intelligent people I have ever read or met, and I have been around PhD circles for years.

    I do not think that Lubos is much different from any of us. We all hold strong views and constantly label people internally. The difference is that he is honest enough to communicate his thoughts to the world!

    His career path is symptomatic of the damage that political correctness does to truly free intellectual debate and to thinkers. In the current political climate, people are more and more careful about what to say and what not to say.

    Well, I feel that we are not politically correct enough on some points. So, while it is good that he was "damaged", I don't think he was punished enough.

    /* I do not think that Lubos is much different from any of us */

    I really don't think so http://motls.blogspot.cz/2014/06/a-czech-anti-maidan-warrior.html

    I do not consider LM honest anymore.

    First of all - his idiotic political opinions. I am Czech and I know the context that many of his readers are missing. He regularly meets with former KGB/STB agents (the STB was the communist secret police). Look at his recent blog post about the birthday party of Klaus, where he had a coffee with Tříska (an STB agent). Read this post about Klaus:
    http://www.frontpagemag.com/2010/jamie-glazov/kgb-yesterday-today-and-to...
    Read Motl's idiotic opinions about the crisis in Ukraine and ask yourself honestly: is he paid by the Russians, or is he just that immoral and dumb? And if he is paid by the Russians to do propaganda, could he also be paid in the case of global warming propaganda? My answer is: very probably yes.

    Second - his physics. His "logical arrow of time" ideas are crackpot. It is circular reasoning (a tautology): time is asymmetric because our logic is asymmetric, and our logic is asymmetric because time is asymmetric. He needs an additional axiom about the initial conditions (low entropy at the beginning of the universe). When I politely told him this, he banned me from his blog without really offering any kind of valid argument (not very intellectually honest).
    I even came to the conclusion that he has a similar mental block concerning quantum mechanics. He confuses a description of reality with reality itself, and he never really understood the objections of Einstein or Bell. He just aggressively kicks and screams and behaves like some retarded immature kid. If you try to read his blog, you need to filter out a lot of insults, personal attacks and irony to find some kind of argument. If you ask him questions like "how does nature remember the correlations during entanglement if nothing exists prior to measurement, i.e. not even the correlations", he calls you an idiot and repeats his bullshit about how these things were understood in 1925, and you never really get an honest and straight answer.

    So no, I do not think that he is an honest and moral person. He is at best mentally challenged (Asperger's), at worst immoral and corrupt and paid by the KGB (FSB).

    LOL, is it a highest-level poker strategy or just acceptance of loss? Don't work harder, work smarter. Hard work is good for computers only.

    T, this is off topic - but any comments on LHCb's latest talks on a 2.5 sigma result showing a 25% discrepancy in the universality of b decays predicted by the Standard Model? https://cds.cern.ch/record/1706212/files/LHCb-TALK-2014-108.pdf
    and http://www.symmetrymagazine.org/article/june-2014/lhcb-glimpses-possible...

    dorigo
    My comment on that? A minor experiment trying to attract attention ;-)
    2.5 sigma is nothing. It will go away.
    Cheers,
    T.
    Hmm, it seems that the efficiency of the algorithm is not enough, since it will get false positives, is that it? So that means wasted cycles on the computers. So it's better to keep a lower efficiency while improving on the speed of the code.

    /* according to the Standard Model, this type of decay should have created an equal number of electrons and muons. Instead, they found that electrons were produced 25 percent more often. */

    The more stable particles are usually produced in excess during decays. If the Standard Model cannot account for it, it cannot even explain why 2nd- and 3rd-generation particles are only rarely produced during radioactive decays...

    dorigo
    Dear Zephir,

    please stop spreading false information. What you write ranges from deceiving ("the more stable particles are usually produced in excess") to false ("the standard model cannot account for it") to not even wrong ("cannot even explain why....").

    1 - the first comment is deceiving. In some cases it is true that "more stable particles" are more frequent, but this usually has to do with the fact that the lighter a particle is, the more stable it is; and lighter particles are more frequently produced in decays because there is more phase space available to produce them. Note that this is not always the case: e.g., the positive pion decays more frequently to muon-neutrino pairs (the muon lives 2x10^-6 s) than to electron-neutrino pairs (the electron lifetime is >> the age of the universe); this fact is due to the conservation of helicity at high energy (the pion has spin zero, so the two leptons have to be emitted with oppositely aligned spins, which forces the heavier of them to be right-handed, suppressing the decay to electrons, which have a harder time fulfilling that requirement). The size of this suppression is spelled out after point 3 below.

    2 - the Standard Model beautifully accounts for all of that, in every detail. E.g. the "universality" of the weak charged current is a very well tested property of weak interactions.

    3 - radioactive decays usually do not have enough energy to produce massive states (2nd and 3rd generation leptons, e.g.).
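    (For the record - a standard textbook result, added here for context and not part of the original comment - the tree-level prediction behind point 1 reads

        \frac{\Gamma(\pi^+ \to e^+ \nu_e)}{\Gamma(\pi^+ \to \mu^+ \nu_\mu)} = \frac{m_e^2}{m_\mu^2} \left( \frac{m_\pi^2 - m_e^2}{m_\pi^2 - m_\mu^2} \right)^2 \approx 1.3 \times 10^{-4},

    i.e. the electron channel is suppressed by four orders of magnitude despite its larger phase space.)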

    Cheers,
    T.
    /* the Standard Model beautifully accounts for all of that, in every detail */

    Yep, this is actually just my point, if you didn't realize it. So why couldn't it account for the excess of muons in the case of this meson decay? Why is this phenomenological triviality considered New Physics? Why can't we simply say that the decay of mesons doesn't have enough energy to produce the massive states? Why are physicists making a hype out of it?

    dorigo
    Dear Zephir,

    I believe you are asking too much of a simple explanation. The subtleties behind the LHCb measurement cannot be trivialized to a level at which they "answer" your questions. The SM explains all that we have observed so far. Now there is a small deviation from predictions; if that were an established fact, it would be new physics, because the SM is a quite precise theory with no freedom to tweak it to account for an extra branching fraction; but it is just a 2-sigma effect, and if you ask me - it will go away.

    The explanation you suggest works for some perfectly well-understood physics, not for the one studied in the LHCb analysis (which of course already accounts for everything known).

    Cheers,
    T.
    Ok thanks. Always helpful to know your view!

    Still achieving higher scores with the hyperball? Apparently Lubos will lose $100 if anyone gets over 3.8, so you may know something the rest of us don't :)

    Given that the top contender right now seems not to be a physicist (educated guess, has taken part in other Kaggle contests), you might get a decisive advantage by taking knowledge of real-world physics into account. I assume the mass biases approximated with your hyperball are something standard methods would not capture correctly?
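    (For context on the "3.8" quoted above: entries in the challenge are ranked by the Approximate Median Significance. A minimal sketch of the metric, assuming the standard HiggsML definition with regularization term b_reg = 10:)

        import math

        def ams(s, b, b_reg=10.0):
            """AMS for the weighted signal sum s and weighted background sum b
            of the events a classifier labels as signal."""
            return math.sqrt(2.0 * ((s + b + b_reg) * math.log(1.0 + s / (b + b_reg)) - s))

        print(ams(40.0, 100.0))   # ~3.6, in the ballpark of the leaderboard scores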

    dorigo
    Dear JJ,

    I have been working on the HB algorithm intermittently, but the problem I have is one of CPU - it is extremely demanding to run my genetic algorithm on the kNN I use. I am presently running 5 evolution jobs that have been there since last week; when I have a chance I will try to use their results to do an application run - but that will take two weeks! I am not too concerned though; as I said, my interest in this challenge is only in seeing how well a non-boosted algorithm does in comparison with those tools. Right now I already know I can do better than the off-the-shelf BDT, but I would like to reach 3.60 or so to be satisfied. At that point, I could consider feeding the discriminant functions I get from my algorithm into a BDT; otherwise I won't bother.

    As for physics: no, it is not going to help much. First of all, because ATLAS has already done the dirty work quite extensively - the first 15 variables are derived quantities, and it is unlikely that one can improve on them without knowing the subtleties of the detector; second, it is disallowed: your algorithm must produce its results without you picking variables or combining them. Or at least that is what I understood of the spirit of the competition.

    As for mass biases, no - the HB is used here for classification, not regression (I assume you meant that I could "correct" the reconstructed masses with it: I can do that, but it is a different job).

    Cheers,
    T.
    Ok, I got the impression from

    http://www.science20.com/quantum_diaries_survivor/thoughts_hyperball_alg...

    that you used it to correct the masses and get an advantage from that. Otherwise it just looks like a "nearest-neighbor" approach using distances weighted differently for each dimension, right?

    "second, it is disallowed: your algorithm must produce its results without you picking variables or combining them"

    I didn't realize this was the case. That limits the possibilities quite a lot.

    dorigo
    Hi,

    yes, what is described there is a regression problem. In this case no regression is possible, as there is not enough information available to correct dijet masses.

    The algorithm I am using here is a very advanced form of kNN, using random subspace search to "destabilize" the output and allow for improvements. There are several subtleties, I will have a post about that one day...

    Cheers,
    T.
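    (Since the thread keeps circling back to it, here is a minimal sketch of the general idea under discussion - a nearest-neighbor classifier whose distance metric weighs each dimension differently, the weight vector being a natural target for a genetic or random-subspace search. An illustration under those assumptions, not Tommaso's actual Hyperball code:)

        import numpy as np

        def weighted_knn_score(X_train, y_train, X_test, weights, k=15):
            """Score test events as signal-like with a kNN whose metric
            rescales each input dimension by a weight; setting a weight
            to zero drops that variable, i.e. picks a random subspace."""
            Xw, Xt = X_train * weights, X_test * weights
            scores = np.empty(len(Xt))
            for i, x in enumerate(Xt):
                d2 = np.sum((Xw - x) ** 2, axis=1)    # squared weighted distances
                nn = np.argpartition(d2, k)[:k]       # indices of the k nearest
                scores[i] = y_train[nn].mean()        # signal fraction among them
            return scores

        # Toy usage: 2000 training events with 5 variables, labels 1=signal, 0=background.
        rng = np.random.default_rng(0)
        X_tr, y_tr = rng.normal(size=(2000, 5)), rng.integers(0, 2, 2000)
        w = rng.uniform(0, 1, 5)   # one candidate weight vector from the search
        print(weighted_knn_score(X_tr, y_tr, rng.normal(size=(10, 5)), w))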
    Is it just the winner that will be giving a talk to the ATLAS team at CERN?

    As things stand, it looks as if the top 5 are all capable of overtaking one another, so perhaps more competitors should be invited if things are very close at the end. At the very least, the top 20 could be invited to post a brief description of their method, and then be invited to give a talk if it's seen as original and interesting enough.

    dorigo
    Hi Larry,

    inviting people to give talks is expensive - inviting 20 people might cost as much as the $13000 allotment for the competition.

    I am certainly interested in knowing what each of the high scorers is doing; but perhaps there are other ways to get that information, if they agree to share it...

    Cheers,
    T.
    @Tommaso,

    The top 20 or 10 could submit an article to the organisers of the Kaggle competition, giving a rough overview of their method, with confidentiality assured.

    The top 5 methods, judged on elegance and originality, would then be announced on this blog or via the Kaggle competition, with no prizes given out. It's just a boost to the prestige and reputation of the competitors, and something to put on their CV.

    @ Tommaso whoops!

    It looks as if the competition organisers have already thought of this, with additional prizes:

    HEP meets ML Award:

    An Award will be given to the team that, as judged by the ATLAS collaboration members on the organizing committee, creates a model that is most useful for the ATLAS experiment: optimizes accuracy, simplicity/straightforwardness of approach, performance requirements (CPU and memory demands), and robustness with respect to lack of training statistics. The winning team will be invited to meet the ATLAS collaboration physicists at CERN, with up to $2,700 (2000€) to cover their travel expenses.

    NIPS Workshop

    Strong performers in this competition may be invited to contribute to a NIPS workshop associated with this competition (pending acceptance from NIPS) with limited travel and conference support. Details will be posted here as they are known.

    Awesome!

    dorigo
    Yes, I had read that - but I would be interested in a contribution from all the top scorers, not just the one they judge worthy of it. My judgement may in fact differ from that of the ATLAS folks...

    Cheers,
    T.