At a recent meeting of the editorial board of a journal I serve on, it was decided to produce a special issue (to commemorate an important anniversary). As I liked the idea I got a bit carried away, and proposed to write an article for it.

In general, writing a scientific article is not something you do on demand - it requires one to first perform original research, and then to obtain interesting results. While I do have several ongoing projects that provide both of those ingredients, the corresponding publications are meant for other journals and will not be single-author ones. But in my position I know I can write something more general - i.e., describe broadly the topic I have been working on (using AI for experiment design), without running too great a risk of either lacking content or shooting up the bullshit index.

Since my blog is sometimes little more than a notebook where I dump ideas waiting for future development, I think it is appropriate to post here the title and abstract of the paper I will prepare for that journal's special issue, with the promise to my readers that I will follow up later with more informative content on the matter. So here goes. I will also comment on it below.

Title:

On the Utility Function of Future Experiments in Fundamental Physics

Abstract:
The design of the complex instruments used for broad frontier explorations of fundamental physics has in the past relied on experience and on well-established paradigms, which provided for robust construction and budget choices. The advent of artificial intelligence (AI), however, demands a rethinking of those design procedures and a deconstruction of those paradigms, to avoid suboptimal data-extraction mechanisms and a misalignment of the instruments with their intended goals. An important step in this process is the realization that a quantitative definition of the global utility of an experiment, however multi-purpose and wide-ranging, becomes mandatory if we want to allow for a full exploration of the design space by new AI technologies; we discuss here its implications.

The idea of the article is to debunk a myth - the notion that large experiments in fundamental physics cannot place precise estimates on the relative value of the scientific goals they set out to achieve. Indeed, as I have said and written several times, we do exactly the opposite: even in highly multi-purpose hadron collider experiments, we carefully define a trigger strategy whereby we allocate resources and bandwidth to the collection of physical processes of different relevance. In fact, if you think about it, designing a billion-dollar experiment would be impossible if one were unable to appraise the relative worth of its different scientific objectives.
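To make that concrete, here is a toy sketch in Python of the utility-based reasoning behind a trigger menu. All process names, rates, and per-event values below are invented for illustration: the point is that with a linear utility and a fixed bandwidth budget, the optimal allocation is the fractional-knapsack one, filling the budget in decreasing order of value per recorded event.

```python
# Toy model: allocate a fixed trigger bandwidth among physics processes.
# All rates and per-event "scientific values" are made up for illustration.

from dataclasses import dataclass

@dataclass
class Process:
    name: str
    rate_hz: float          # rate at which the process passes a loose selection
    value_per_event: float  # assumed relative scientific value of one recorded event

processes = [
    Process("dijet QCD",        rate_hz=50000.0, value_per_event=0.001),
    Process("W/Z leptonic",     rate_hz=200.0,   value_per_event=0.5),
    Process("Higgs candidates", rate_hz=5.0,     value_per_event=10.0),
    Process("exotica",          rate_hz=20.0,    value_per_event=2.0),
]

def allocate_bandwidth(processes, budget_hz):
    """Maximize total value per second under a total output-rate budget.

    With a linear utility this is a fractional knapsack: record processes
    in decreasing order of value per event; the first process that no
    longer fits gets prescaled, and everything below it is cut entirely.
    """
    allocation, remaining = {}, budget_hz
    for p in sorted(processes, key=lambda p: p.value_per_event, reverse=True):
        accepted = min(p.rate_hz, remaining)  # accepted output rate, in Hz
        allocation[p.name] = accepted
        remaining -= accepted
    return allocation

if __name__ == "__main__":
    for name, rate in allocate_bandwidth(processes, budget_hz=1000.0).items():
        print(f"{name:18s} -> record {rate:7.1f} Hz")
```

The algorithm is trivial; the interesting column is value_per_event. That number is a quantitative statement about relative scientific worth, and every working trigger menu implicitly contains one.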

So, in a nutshell: I know this topic is controversial, but I also know that in the end I have a strong argument in favor of the notion that writing a quantitative objective function is possible in all setups. The other part of the argument is that doing so is not only possible, but has become unavoidable. That is where the new AI tools come into play: if you do not explain to your AI friend what you want, you are setting yourself up for certain failure.
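To illustrate that last point, here is a deliberately silly one-parameter "experiment" (a calorimeter whose thickness trades energy resolution against cost), where the designer's preferences are written down as an explicit utility that any optimizer - AI-powered or not - can then climb. All functions, weights, and numbers are invented for the sketch.

```python
# Schematic sketch (all models and numbers invented): once the objective
# is written down explicitly, design optimization becomes a search problem.

import random

def expected_precision(thickness_cm):
    """Toy physics model: a thicker calorimeter measures energy better,
    with diminishing returns."""
    return 1.0 - 0.5 ** (thickness_cm / 20.0)

def cost(thickness_cm):
    """Toy cost model: cost grows linearly with the amount of material."""
    return 0.01 * thickness_cm

def utility(thickness_cm, w_physics=1.0, w_cost=1.0):
    """The explicit statement of what we want: physics reach minus cost.
    The weights encode value judgments that are usually left implicit."""
    return w_physics * expected_precision(thickness_cm) - w_cost * cost(thickness_cm)

# Any optimizer will do once the objective exists; here, plain random search.
best = max((random.uniform(1.0, 100.0) for _ in range(10_000)), key=utility)
print(f"optimal thickness ~ {best:.1f} cm, utility = {utility(best):.3f}")
```

Once the weights w_physics and w_cost are on the table, the optimization is the easy part; the hard - and, I argue, unavoidable - part is agreeing on the weights.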

I expect I will write this paper in the next couple of months, and I also think I will write more on the topic here, so stay tuned if it is of interest to you...