Researchers in fundamental science tend to "stick to what works" and avoid disruptive innovations until these are shown to be well-tested and robust. Yet the recent advances in computer science that have led to the spread of deep neural networks, ultimately enabled by the large increases in computing power of the past few decades (Moore's law), cannot be ignored. And they have not been ignored: the 2012 discovery of the Higgs boson, for instance, relied heavily on machine learning techniques to improve the sensitivity to the sought particle signals in the ATLAS and CMS detectors.
Another paradigm shift is still waiting to happen, though: it involves the realization that today we can aim for the holistic optimization of the complex systems of detectors we use for experiments in particle physics. Building particle detectors is a subtle art, and some of my colleagues probably react with knee-jerk disgust if one floats the idea that a machine could inform the optimal design of their instruments. With that in mind, we must be gentler, and rather speak of "human in the middle" schemes and of making tools available that can guide the hand of the expert designer.
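To make the idea of holistic, end-to-end optimization a bit more concrete, here is a minimal sketch, not MODE's actual pipeline, of the kind of differentiable chain one has in mind: a design parameter feeds a (toy) simulated detector response, which feeds a physics figure of merit, and gradients flow all the way back to the design choice. All functions, formulas, and numbers below are illustrative assumptions, not a real detector model.

```python
# Toy sketch of end-to-end detector optimization (illustrative only):
#   design parameter -> simulated response -> figure of merit,
# minimized by plain gradient descent through a differentiable chain.
import jax
import jax.numpy as jnp

def resolution(thickness):
    # Hypothetical model: relative energy resolution of a calorimeter layer.
    # A thicker absorber improves the stochastic (sampling) term but the
    # noise-like term grows with material; both coefficients are made up.
    stochastic = 0.15 / jnp.sqrt(thickness)
    noise = 0.02 * thickness
    return jnp.sqrt(stochastic**2 + noise**2)

def objective(thickness):
    # Figure of merit to minimize: resolution plus a crude cost penalty
    # that stands in for budget or space constraints (again, made up).
    return resolution(thickness) + 0.01 * thickness

grad_fn = jax.grad(objective)   # gradient of the whole chain w.r.t. the design

thickness = 1.0        # initial design choice (arbitrary units)
learning_rate = 0.1
for step in range(200):
    thickness = thickness - learning_rate * grad_fn(thickness)

print(f"optimized thickness: {float(thickness):.3f}, "
      f"resolution: {float(resolution(thickness)):.4f}")
```

The point of the sketch is not the toy numbers, but the structure: once every piece of the chain is differentiable (or replaced by a differentiable surrogate), the design parameters of the instrument can be tuned directly against the physics goal, rather than one subsystem at a time.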

Either way, the co-design of software and hardware has become a hot topic in fundamental science too over the past few years, especially since I founded the MODE collaboration, a group that today counts 40 participants from 25 institutions on three continents. Since yesterday, MODE has been running a workshop in Princeton to discuss recent developments in the field of end-to-end optimization of experiments. This is the third workshop in the series, and I have so far been very pleased by the quality and content of the presentations (mine included ;-) ).

Here is the workshop poster:

And below is a picture of some of the participants, in front of the venue (the Neuroscience Institute, where we are hosted).