Particle physics conferences are a place where you can listen to many different topics - not just news about the latest precision tests of the Standard Model or searches for new particles at the energy frontier. If we exclude the very small, workshop-like events where people gather to focus on one very specific topic, all other events do allow for some contamination from reports on parallel fields of research. The reason is of course that there is significant cross-fertilization between these fields.
Scientists often hop from one field to another as they develop their careers. It is not hard for a researcher who has developed his or her expertise in collider physics to move to astroparticle physics, where the focus may be quite different but the tools of the trade are often very similar (calorimeters, silicon trackers, neutrino detectors). Another example is the contiguity of particle physics and nuclear physics: there, the shared interest in quark-gluon plasma is the bridge between the two communities.

A good particle physics conference must therefore provide updates on a number of fronts. Parallel sessions can provide the right setting for such reports, and indeed large conferences host a dozen or more of them. Take ICHEP 2016 as an example: it featured over a dozen parallel sessions, with topics ranging from string theory to top quark physics, from detector R&D to neutrinos, from dark matter to heavy ion physics, from formal theory developments to interactions with industry.

But not statistics.

Statistics is a different field of study. In most universities around the world, you can graduate in Physics or you can graduate in Statistics (you can also graduate in Statistical Physics, but let's not get confused). These are as different as are, e.g., Architecture and Art History. You can learn a lot about Architecture by studying Art History, but the two are in general quite unrelated subjects.

So I ran an experiment, as I happened to find myself in the right spot at the right moment. As the scientific coordinator of an international network that brings together physicists and statisticians to develop machine learning tools for physics and industry, and as an organizer of an international conference, I decided to try my hand at offering a parallel session on "Statistical Methods for Physics Analysis in the XXI Century" at the "XII Quark Confinement and the Hadron Spectrum" conference, held in Thessaloniki (Greece) from August 28th to September 3rd. Through the network I could offer a sponsorship to the conference, which helped with the organization of the session as well as with a few outreach initiatives we organized as satellite events.

Would it be possible to bring physicists to listen to statistics-themed presentations at a conference focused on particle and nuclear physics? I thought it would be tough. Many colleagues in HEP do realize that statistical tools are crucial to extracting better results from their data, but few really take the time to think about the matter in any depth.

I tried to attack the problem by putting together a list of talks of as wide an interest as possible, while of course keeping them fully centered on statistical issues. I also decided to allow for shorter talks, ideally given by students and post-docs, who could bring their own specific data analysis problems, and the solutions they adopted, to the attention of an arena of experts.

In the end, this coalesced into a two-afternoon session, with a total of 18 talks - 11 long ones (25' plus 5' for questions) and 7 short ones (15'+5'). I had to work on the invitations, but eventually I was very happy with the outcome, as I could bring to Thessaloniki experts such as Eilam Gross, Luca Lista, Harrison Prosper, Sergey Gleyzer, Jean Cleymens, and many others (who will excuse me for not citing them here). Below is the program of the sessions.

[Program of the two parallel sessions]
In addition to the parallel session, I asked Eilam to also give a plenary talk on "Frequentist Statistics for Physicists" on the morning before the first parallel session. It was very well received: Eilam gave a brilliant lecture, which he opened by explaining how he had tested the hypothesis that he was at the wrong conference, noting that he knew only 6 of the participants - a much smaller fraction than at the events he usually attends.

QCHS XII ended up gathering about 380 participants. There were 10 parallel sessions running, well, in parallel. Since at any given time (due to early departures or late arrivals) one should count on roughly 75% of the participants being present, and since the parallel sessions are also a time when participants may find it convenient to take some time off, one could naively expect the typical parallel session to draw about 25 attendees on average. Would the statistics session be on par with that?
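(For what it is worth, the naive arithmetic behind that figure is simply 380 participants × 0.75 presence / 10 sessions ≈ 28 people nominally available per session; the further trim down to about 25 is my own rough allowance for those who skip the parallels altogether, not a number anyone measured.)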

It was. There were always more than 20 attendees at the talks, and over 30 during some of the most interesting ones. I was very happy to see that the experiment was successful: not only did the session attract physicists who had come to the conference to listen to their own stuff - heavy ion physics and QCD - but it also collected some of those who had been invited to give a talk on a "sideline" topic (astro-hep, or theory developments in topics not connected with QCD). There is interest in these matters in our community nowadays!

Needless to say, I am very eager to repeat this experiment at another conference. It would be very good to do so at EPS 2017, which will be held in Venice next July - but unfortunately that event has a rather rigid format, and it would be hard to squeeze in something "exogenous". But if you are organizing an event somewhere else and are reading these lines, please consider adding statistics to the menu!