Today I am giving the opening speech at a workshop with the same title as this post. The workshop takes place at the Center for Particle Physics and Phenomenology of the Université catholique de Louvain, in Belgium, and it follows a hybrid format - we will have 33 in-person attendees and 72 more attending by video link. 
The workshop is organized by the MODE collaboration, which I lead. It is a small group of physicists and computer scientists from 10 institutions in Europe and America who have realized that today's deep learning technology allows us to raise the bar on our optimization tasks - we are now targeting the full optimization of the design of some of the most complex instruments ever built by humankind, particle detectors.

Below I offer you a preview of my opening speech. Enjoy! Text in [square] brackets is added here for clarification.

----------------

[skipping thanks and other similar introductory statements]

Why are we here? Why another machine learning (ML) workshop? There are plenty of great workshops around on machine learning applications to fundamental physics, and they succeed in gathering innovative ideas into a melting pot - e.g., the ML4JETS series. Indeed, over the past two months alone I ran two workshops (at the QCHS and ICNFP conferences) and lectured at two more.

In our field, the machine learning revolution first changed the way we do our data analysis; and now a transition is taking place, from maximizing a ROC curve [a figure of merit for classification problems] to other applications. Today, we are learning to use ML to ease the hard problems we are facing. E.g., in HEP [high-energy physics] we study how to apply it to
- mitigate pileup in high-luminosity collider operation
- evolve pattern recognition to higher performance, in harsher environments 
- improve triggering capabilities of our data acquisition pipelines
- produce fast simulations of physics processes and reconstruction
- improve inference from given datasets

Those and other applications essentially target the same thing: they try to IMPROVE OUR SENSITIVITY - in other words, to get us more bang for the buck.

That's great, so let's have more of that!

All the above approaches are great, indeed, but they only deal with one part of the system at a time. Yet today's computer science tools allow us to take a more holistic view of the problem of getting more from our funds.

Three years ago I was in a board meeting of a funding agency, and I recall experiencing a strong cognitive dissonance as I heard a distinguished colleague not far from retirement propose the design of a detector for a future big collider, leveraging sound and robust construction principles which, however, dated back to the last century. Great ideas, excellent cutting-edge technology, fail-safe layouts. And yet, a dissonance.

I also recall having read an outreach piece in which the same colleague correctly pointed out how in HEP we have grown into the idea that detector development has a logic of its own, independent of the goals for which we want to build detectors. We do not do detector R&D just to improve our chances of discovering this or that new physics process - we do it for its own sake! So, part of my cognitive dissonance came from realizing that the same people who have built (and mostly already spent) a career by focusing on detector R&D are those who take crucial decisions on how the next generation of physicists will go, in 20 years, from a Feynman diagram to a summary statistic.

I will not argue against the honorable tradition of detector development, or against our general reliance on the experience of our seniors - why, I belong to that class myself, alas! But I will point out here that the times of the all-purpose physicist are long gone. The Fermis, the Rutherfords, the Ledermans and the Rubbias are a thing of the past. Today you can no longer excel at being a theorist, a data analyst, and a detector builder at the same time!

Back to my cognitive dissonance: the other part of it is the realization that in 20 years we will not use a Kalman filter to do track reconstruction. It will be an AI that does that, and that AI may be smart enough to curse our short-sightedness in failing to design instruments it could have exploited to better results. In other words, a misalignment is lurking!

So, how do we design a detector today, and what can we do to improve on the procedure, and to make it ready for the software capabilities of the future? Even if we leave aside my honorable colleague and his idea, I have to acknowledge that detector design rests on some well-consolidated paradigms which appear inescapable at first sight - walking away from them seems very hard.

1) Robustness: we are after invisible, microscopic phenomena, so we think we need to measure things in different, complementary ways, enabling cross-calibration and verifications. This is all good, but it comes at a price - redundancy is the enemy of optimization.

2) In collider detectors, but also in other fundamental physics instruments, we rely on the principle of "track first, destroy later": we first measure the trajectories of charged particles in a lightweight tracker, and then destroy particles, neutral and charged alike, in a calorimeter. This paradigm is older than I am, but is it still justified in light of, e.g.,
- the performance of today's silicon pixel detectors
- boosted jet tagging needs
- new AI-driven pattern recognition and particle flow capabilities?

3) One should also mention the symmetry of layouts, in which I include the stacking of 2D structures when designing 3D objects. Ease of assembly is nice, but it also potentially walks us away from optimality. Nowadays the 3D printing of detectors is a reality, and submicron technology also allows us to use a third dimension. Are we so lucky that the best construction choices always lie in the 2D subspace of a 3D manifold?

Computer science today offers new solutions to our tasks. But we have to take the matter into our own hands - those solutions require interfaces which only we, the physicists, can meaningfully construct. That is why I am so excited to see that today we are putting together a mixed community of researchers with tough problems and computer scientists who are willing to help find solutions.

In more detail, this workshop is organized by the MODE collaboration, and we are focusing explicitly on attacking the very hard problem of detector optimization using differentiable programming (DP). Why DP? Well, optimization does not specifically require a fully differentiable model, in general. But the latter is more likely, IMHO, to learn new, innovative ways to exploit the complexity of the design space.
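To make the idea a bit more concrete, here is a minimal toy sketch of gradient-based design optimization - purely illustrative, not MODE code: the "detector" is a single made-up layer-thickness parameter, and the resolution and cost terms in the objective are invented for the example. The point is only that, once the map from design to figure of merit is differentiable, the design can be tuned by following gradients.

```python
# Toy sketch of differentiable detector design (illustrative only, not MODE code).
# A made-up surrogate maps one design parameter (a layer thickness) to a figure
# of merit combining "resolution" and a cost penalty; we minimize it by gradient descent.
import jax
import jax.numpy as jnp

def objective(thickness):
    resolution_term = 1.0 / jnp.sqrt(thickness)  # resolution improves with thickness
    cost_term = 0.05 * thickness                 # material budget / cost penalty
    return resolution_term + cost_term

grad_fn = jax.grad(objective)

thickness = 1.0   # initial design
lr = 0.5          # learning rate
for _ in range(200):
    thickness = thickness - lr * grad_fn(thickness)

print(f"optimized thickness ~ {float(thickness):.2f}")  # analytic optimum: 10**(2/3) ~ 4.64
```

In a real detector, of course, the map from design to figure of merit involves stochastic simulation and reconstruction, and that is exactly where differentiable surrogates and the techniques discussed at this workshop come in.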

Is it worth taking this on? In early attempts at cases with O(10) free parameters, we see factor-of-two improvements in the relevant metrics as typical returns. A factor of two is very large in itself; what improvements become possible when the design space has O(100-1000) dimensions, I let you wonder.

So, ok - but can we really pull it off, for anything but the most trivial applications? I do not have an answer to this which does not run the risk of being dubbed wishful thinking. But innovation requires investment of effort, and foresight (or, as Groucho Marx would put it, the secret of success is honesty and fair dealing - if you can fake those, you've got it made). Personally, I am sure that what we have in mind will one day be commonplace. Right now, though, it may well seem too audacious.

Last year I wasted three months of my life putting together an ERC Advanced Grant application based on the research plan above. The answer (from two of the four referees, who clearly came across as detector experts)? They laughed it off. How naive! Designing a particle detector with ML? This guy has lost his marbles.

What those reviewers were not able to read in the five-page summary they evaluated is that the software capabilities to explore, in a systematic albeit approximate way, the design space of even very complex, collider-detector-scale instruments do exist. What does not yet exist is the infrastructure to put them to work. That is a very hard task, but not an impossible one. And the dividends are potentially huge; not to mention that we have a moral obligation to spend our research funds as well as possible, in a world where global challenges (pandemics, overpopulation, climate change) force us to direct our resources to applied-science solutions!

What MODE is about is working toward a versatile, scalable, customizable infrastructure in which a generic detector design task can be encoded, along with all the players (pattern recognition, nuisance parameters, cost constraints, and, crucially, a well-constructed objective function), in a way that allows the space of design solutions to be scanned automatically. To do this, we aim to tackle a variety of use cases, from easy to hard, and within multiple domains (HEP, astro-HEP, nuclear, and neutrino physics). This way we will gradually build up a library of solutions, which will necessarily be modular (differentiable pipelines are made up of modules taking on separate tasks) and recyclable to some extent. So we will gain expertise, gradually increase our capability to solve harder problems, and reduce the time to results.
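As a cartoon of what "modular and differentiable" could mean in practice, consider the sketch below - again purely illustrative, with made-up modules and toy resolution and cost models, not the MODE infrastructure. Each stage (simulation surrogate, reconstruction, objective with a cost term) is a differentiable function, so the whole chain can be differentiated end to end with respect to the design parameters, and individual modules can be swapped out or reused.

```python
# Cartoon of a modular, end-to-end differentiable design pipeline
# (illustrative only, not the MODE infrastructure).
import jax
import jax.numpy as jnp

def simulate(design, key):
    # Toy surrogate of physics + detector response for a given design:
    # the smearing ("resolution") depends on the design parameters.
    k1, k2 = jax.random.split(key)
    truth = jax.random.normal(k1, (1000,))
    smearing = 0.1 + jnp.exp(-jnp.sum(design))   # made-up resolution model
    hits = truth + smearing * jax.random.normal(k2, (1000,))
    return truth, hits

def reconstruct(hits):
    # Stand-in for a differentiable reconstruction module (e.g. a neural network).
    return hits

def objective(design, key):
    truth, hits = simulate(design, key)
    reco = reconstruct(hits)
    resolution = jnp.mean((reco - truth) ** 2)   # physics figure of merit
    cost = 0.01 * jnp.sum(design ** 2)           # cost-constraint term
    return resolution + cost

key = jax.random.PRNGKey(0)
design = jnp.array([0.5, 0.5])                    # initial design parameters
grad = jax.grad(objective)(design, key)           # gradients flow through all modules
print(grad)
```

In the real problem each of these boxes would itself be a complex, possibly learned, differentiable model, and nuisance parameters would enter the objective alongside the design ones; the modularity is what makes the pieces reusable across use cases.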

The solutions we produce for any use case will grant us visibility and publications, and will be steps in the right direction - that of making optimal use of research funds and improving the discovery and measurement potential of our instruments. In addition, I am convinced - and this is strengthened by the presence here of colleagues who work in industry - that what we aim to develop will have spin-off potential. E.g.:
- Muon tomography
- Hadron therapy
- Radiation shielding (for, e.g., nuclear plants or human space travel...)

You are here today, in person or by avatar, because you have worked on topics related to the above research plan, or because you have an interest in doing so in the future. Hence you are the right audience to address when I say we need help to improve the effectiveness of our action... I thus hope you will join MODE: according to our statute, to be accepted as a member you only need to
- be interested in our research plan
- vow to contribute to it in the future

I stop here, and I wish you three days of fruitful discussions!


---

Tommaso Dorigo (see his personal web page here) is an experimental particle physicist who works for the INFN and the University of Padova, and collaborates with the CMS experiment at the CERN LHC. He coordinates the MODE Collaboration, a group of physicists and computer scientists from eight institutions in Europe and the US who aim to enable end-to-end optimization of detector design with differentiable programming. Dorigo is an editor of the journals Reviews in Physics and Physics Open. In 2016 Dorigo published the book "Anomaly! Collider Physics and the Quest for New Phenomena at Fermilab", an insider view of the sociology of big particle physics experiments. You can get a copy of the book on Amazon, or contact him to get a free pdf copy if you have limited financial means.