Applications for MSCA Post-doctoral Fellowships are open, and will remain so until September 10 this year. What that means is that if you have less than 8 years of experience after your Ph.D., you can pair up with a research institute in Europe to present a research plan, and the European Commission may decide to fund it for two years (plus 6 months in industry in some cases).

In order for your application to have a chance to win funding, you need to: 
  1. have a great research topic in mind, 
  2. be ready to invest some time in writing a great application, and 
  3. pair up with an outstanding supervisor at a renowned research institute. 

Now, if you are a physicist or computer scientist with a good pedigree and skills in machine learning, this is your lucky day: I am available to write that application with you, if you are willing to come and work at INFN-Padova!

Where's the catch, you might be wondering. There's no catch, in the sense that if we team up and write an application together, there is a good chance that you will get funded, period. Plus, the EC post-doctoral fellowships are very well-paid positions, and Padova is a nice city in Italy; and last but not least, I am the most awesomest super-dupervisor you may hope to ever have!

My experience in collaborating with Ph.D. students and post-doctoral scientists, and with research teaming in general, has been forged in decades of participation in and leadership of large and small research groups, direction of doctoral networks, supervision and networking; and it rests on my deep loathing of hierarchy, as I am convinced that as researchers we work together for the common good and the advancement of science, not for somebody else. Finally, as President of USERN - a 26,000-member network for collaborative, interdisciplinary scientific research and education across borders - I am working to improve the career opportunities of researchers worldwide. 

All the above said, I think the main reason for coming to work with me is simply that I can offer you the chance to join my group and help prepare a second AI revolution in fundamental science. Let me explain how.

A new impending AI revolution for fundamental science

We all know what happened in 2012: in that year, trained machine learning models surpassed human performance at image recognition tasks. That moment marked the divide between a world where machine learning was a niche activity for geeks and a world where we cannot do without machine learning for data analysis. And in physics, 2012 marks the discovery of the Higgs boson, which was pulled off by exploiting the power of machine learning classification methods. 

Overnight, large collaborations went from preventing data analyses based on machine learning from being published (because internal skepticism stalled the internal review procedures) to discouraging data analyses that were *not* based on machine learning from being carried out. How's that for a paradigm change? Physicists are very conservative, and wary of embracing novelty until they fully understand the benefits of the new tools; but once they do, they do not turn back, and they move forward fast. That is what has happened over the past decade or so, with large benefits for our measurements of fundamental physics parameters and our searches for new physics.

But the world, too, has not stood still: new AI-powered methods have been finding more and more applications in countless human activities beyond research. Of course they have - they promise fast growth and performance increases. AI methods now power autonomous driving, speech and image recognition, computer vision, and large language models, to name just a few use cases. 

Today, fundamental science must take another bold step forward and embrace AI models that allow for the end-to-end optimization of scientific instruments and industrial apparatus - systems that work by extracting multi-dimensional data from physical processes and by performing pattern recognition and inference, eliciting information about the physical world or other important knowledge from complex systems.

Co-design: the next paradigm change

What I am talking about is co-design. If we stick with the analysis of particle collisions (my field of expertise, or at least my starting point), today we are basically capable of extracting almost perfect information from the mind-bogglingly complex high-energy interactions of particles and the resulting data. We do this by using high-fidelity simulations of the physics processes and advanced neural network architectures designed specifically (by us) for those tasks. I stress "by us" because fundamental research cannot benefit directly from advances in AI and computer science: we need to develop our own interfaces and specialized tools, because there is no money to be made from them. 

However, in an extreme schematization, fundamental science experiments are all composed of two parts: a hardware apparatus that extracts data from the physical processes of relevance, and a software framework that analyses those data to measure stuff. It is hard to hide the fact that until now we have only been able to optimize the second part of that system. The optimization of a piece of hardware is usually much harder to pull off than that of an analysis chain. This is only partly because the hardware is the result of tens of thousands of design parameters, material choices, and budget allocation decisions - software tools can be just as complex. More crucially, the hardware produces data through stochastic physics processes which are very hard to model. This hinders the creation of continuous models of the whole system - the enabling step for gradient-based optimization, which is the powerhouse under the hood of most AI methods.
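
To make the argument concrete, here is a minimal toy sketch (in PyTorch, with entirely invented names and numbers) of the situation I just described: a stochastic, non-differentiable "detector" simulator sits between a design parameter and the data, so gradient descent can train the reconstruction software, but it never reaches the hardware.

```python
import torch
import torch.nn as nn

def detector_sim(design_thickness, true_energy):
    # Stand-in for a stochastic, non-differentiable simulator of the hardware:
    # the smearing depends on the design, but no gradients can flow through it.
    with torch.no_grad():
        resolution = 0.1 + 0.5 / design_thickness
        return true_energy * (1.0 + resolution * torch.randn_like(true_energy))

reco_net = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
design_thickness = torch.tensor(2.0, requires_grad=True)   # hardware parameter
opt = torch.optim.Adam(reco_net.parameters(), lr=1e-3)      # software only!

for step in range(1000):
    true_energy = torch.rand(256, 1) * 100.0
    measured = detector_sim(design_thickness, true_energy)
    loss = ((reco_net(measured) - true_energy) ** 2).mean()  # reconstruction error
    opt.zero_grad(); loss.backward(); opt.step()

print(design_thickness.grad)  # None: the hardware design never "learns"
```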

A second AI revolution in fundamental science will take place when we empower ourselves to optimize the hardware and the software of our experiments together. This will result in a perfect alignment of the two optimization procedures, and it is what I summarized above with the magic word "co-design". How can we pull that off? 
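
Here is what that co-design step could look like in the same toy setting, under the (big) assumption that a differentiable surrogate of the detector response is available - for instance a neural network trained on the full simulation, or a relaxed analytic model, with the randomness supplied from outside. A single optimizer then updates the hardware parameter and the reconstruction network together against one physics-plus-budget objective. Again, all names and numbers are illustrative.

```python
import torch
import torch.nn as nn

def surrogate_detector(design_thickness, true_energy, noise):
    # Differentiable stand-in for the stochastic detector response: the
    # randomness is supplied externally, so gradients with respect to the
    # design parameter can flow through the whole pipeline.
    resolution = 0.1 + 0.5 / design_thickness
    return true_energy * (1.0 + resolution * noise)

reco_net = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
design_thickness = torch.tensor(2.0, requires_grad=True)

# One optimizer for hardware *and* software: that is the co-design step.
opt = torch.optim.Adam([{"params": reco_net.parameters(), "lr": 1e-3},
                        {"params": [design_thickness], "lr": 1e-2}])

cost_weight = 0.05  # toy penalty: a thicker detector costs more money

for step in range(1000):
    true_energy = torch.rand(256, 1) * 100.0
    noise = torch.randn(256, 1)
    measured = surrogate_detector(design_thickness, true_energy, noise)
    reco_loss = ((reco_net(measured) - true_energy) ** 2).mean()
    loss = reco_loss + cost_weight * design_thickness    # physics + budget
    opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        design_thickness.clamp_(min=0.5)  # keep the toy design physical

print(float(design_thickness))  # now the design itself has moved toward an optimum
```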

Make no mistake: it is damn hard to produce a software pipeline that models a complex scientific experiment in all its relevant parts, and then adjusts its design parameters (which live in spaces of tens of thousands of dimensions) to find the optimal solution. 

The high dimensionality of the parameter space is not worrisome - in profit-driven endeavours it is normal to optimize systems that have billions of parameters (see ChatGPT, e.g.). The problem is that we need accurate simulations of the stochastic processes that generate our data if we want to predict the performance of any given detector configuration and pattern recognition software. But it can be done, and moreover we _need_ to get on with it, as funding for large scientific endeavors is becoming less secure by the day. We need to ensure that we squeeze every bit of value from the dollars spent to construct our instruments.
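
One hedged sketch of how that prediction step could be organized in practice: run the expensive stochastic simulation for a sample of candidate designs, fit a differentiable surrogate of the resulting figure of merit, and then descend the gradient of the surrogate. The toy "simulation" below, and its optimum, are invented purely for illustration.

```python
import torch
import torch.nn as nn

def run_full_simulation(design):
    # Stand-in for an expensive stochastic pipeline (simulation + analysis),
    # returning a noisy figure of merit to be minimized (resolution + cost).
    with torch.no_grad():
        return (0.1 + 0.5 / design) + 0.05 * design + 0.02 * torch.randn(())

# 1) Collect simulated (design, figure-of-merit) pairs.
designs = torch.rand(200, 1) * 9.0 + 1.0                  # designs in [1, 10]
scores = torch.stack([run_full_simulation(d) for d in designs])

# 2) Fit a differentiable surrogate of the figure of merit.
surrogate = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-2)
for _ in range(2000):
    loss = ((surrogate(designs) - scores) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# 3) Optimize the design by gradient descent on the surrogate.
design = torch.tensor([[5.0]], requires_grad=True)
opt_d = torch.optim.Adam([design], lr=0.05)
for _ in range(500):
    loss = surrogate(design).sum()
    opt_d.zero_grad(); loss.backward(); opt_d.step()

print(design.item())  # should land near this toy model's optimum (~3.2)
```

The accuracy of step 1 is the whole game: if the simulation does not faithfully capture the stochastic detector response, the surrogate will confidently point to the wrong design.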

In large scientific endeavors there is one further problem that makes it even more important to work on co-design. Today we design instruments that may take data twenty years from now! The Large Hadron Collider, for example, was conceived at the end of the 1980s, designed in the following decade, and built in the next. Finally commissioned in 2009, it will operate until the late 2030s. 

Now, this means that if we have the arrogance to design a detector for a future collider today with the methods we have used over the past 50-60 years, we are almost certain to build misalignment and suboptimality into the instrument, as 20 years down the line it will be some super-smart artificial intelligence that analyzes those data. And I am as sure as it gets that the AI will be cursing us small-brained humans for constructing a device that does not allow the extraction of all the useful information. Misalignment is virtually guaranteed on such time scales, given the explosive development of AI. 

What to do? The only solution is to include in our model of the system the superhuman capabilities of future software methods, and then gauge how the optimal solution moves in the parameter space as we tweak that dial toward less-than-perfect performance.
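
The toy example below illustrates that idea: a hypothetical "reco_power" dial stands in for the assumed capability of future reconstruction software, and re-running the design optimization for a few settings of the dial shows how the optimal hardware shifts. The model and the numbers are, once more, purely illustrative.

```python
import torch

def figure_of_merit(design, reco_power):
    # Toy model: better future reconstruction (reco_power -> 1) mitigates the
    # resolution penalty of a thinner, cheaper detector.
    resolution = (0.1 + 0.5 / design) * (1.5 - reco_power)
    cost = 0.05 * design
    return resolution + cost

for reco_power in [0.5, 0.8, 1.0]:
    design = torch.tensor(5.0, requires_grad=True)
    opt = torch.optim.Adam([design], lr=0.05)
    for _ in range(2000):
        loss = figure_of_merit(design, reco_power)
        opt.zero_grad(); loss.backward(); opt.step()
    print(f"reco_power={reco_power:.1f} -> optimal design ~ {design.item():.2f}")
```

In this toy model, the better the assumed future reconstruction, the thinner (and cheaper) the optimal detector: the hardware optimum genuinely depends on the software we expect to have.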

It is complicated, but we have started to do it

The fact that the challenge is a very hard one should trigger you at this point - otherwise, sorry for wasting your time today. But if you feel tickled, please stay and keep reading. Over the past few years, a small group of visionaries I pulled together (the MODE Collaboration) has set out to prove that we already have the tools and the technology to produce a full alignment of hardware and software optimization, and to demonstrate that the gains that modus operandi yields - in terms of saved research money, in terms of the performance of the resulting instruments and experiments, and in terms of the resulting scientific output, if you will - are huge.

A long journey always starts with a first step. We have made a few already: we demonstrated the end-to-end optimization of a few experiments of small to medium size, and as we proceed we are becoming able to partly reuse our models, for our problems all share some commonalities, and modularity allows us to reuse pieces of the software pipelines.

By the way, I should mention that another group is looking into co-design: it is Working Group 2 of the EUCAIF, a large group of physicists and computer scientists in Europe looking at the integration of AI methods in fundamental science research. Of course I am leading that working group, but I would be happier if more colleagues joined that effort, as we need more brainpower - young minds, enthusiastic researchers!

My offer

Do you have a Ph.D. in Physics, Maths, or Computer Science, acquired less than 8 years ago? Are you quick with software development (Python, PyTorch, and TensorFlow are things you are comfortable working with)? Does the above plan look like fun? Send me your extended CV before July 10th and I will get in touch with you! If we resonate, we will team up to write a strong application for an MSCA Post-Doctoral Fellowship, which would bring you to work in Padova for the INFN in my research group.
See the ad on Euraxess.

I am eager to hear from you!