Physicists are accustomed, in their day-to-day activities, to thinking rationally about phenomena. Their job is to measure constants and parameters without bias and with as little uncertainty as possible, to interpret the results, and to construct worldviews that expand our understanding of the universe. In doing so, they are guided by well-established principles: the scientific method, the idea of falsifiability, Occam's razor, and scientific rigor in general, whereby hypotheses not verified beyond doubt, and in multiple ways, are by default considered false. Their actions are also shaped by the built-in certainty that the authoritativeness of their word, spoken or written, is their only means of bringing bread to their table: hence they are frightened beyond description of making fools of themselves in print through sloppiness or mistakes.
It would seem natural to assume that academics who have subscribed to, absorbed, and stuck to the above criteria and principles, throughout lifetimes or half-lives of study and research, will exhibit a similar approach when confronted with situations where their tools may still be handy, even though the domain of application is not the familiar one they usually move within. In a sense, one would be entitled to believe that earning a Ph.D. in Physics diffused the fundamentals of the scientific method into their bodily fluids, never to leave them until they kick the bucket.
On the other hand, during the past month I have witnessed, on Facebook, Twitter, WhatsApp, mailing lists, and so on, a rather widespread attitude in a large number of colleagues, who have started to entertain themselves with publicly available data on contagions, deaths, hospitalized patients, and the related geographical information. It began as a trickle of graphs posted on Facebook or Twitter, showing an exponential function overlaid on a few data points; very quickly it became a flood of plots of all kinds, where the data were tortured until they confessed they wanted to plateau somewhere. This was invariably done using this or that one-dimensional parametrization, picked without justification, "because the chi-square looks good".
I saw curves describing data from one country overlaid with curves describing data from other countries, shifted according to ad-hoc criteria; I saw wild extrapolations to plateaus that would never materialize. I saw fits that used Gaussian approximations for the uncertainties, ignoring that the data had completely different sampling distributions; unintelligible internal correlations that would freeze even the most rabid data fitter in a normal context; and systematic uncertainties so big you could drive a pickup truck between the error bars. I could go on, of course. My focus here, however, is not to discuss the shortcomings of the models, the imprecision of the procedures, or the ad-hockery of the methodologies deployed by esteemed colleagues. What I wish to discuss is what triggered that behaviour, and to draw some lessons from the observation.
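To make the trap concrete, here is a minimal sketch, with entirely invented numbers, of what goes wrong in those extrapolations: data generated from a logistic curve, but observed only in its early quasi-exponential phase, are fitted almost perfectly by a pure exponential, and yet the extrapolation overshoots the true plateau by orders of magnitude. A good chi-square on the fitted range tells you nothing about the plateau.

```python
import math

# Hypothetical "true" epidemic curve: a logistic with plateau K.
K, r, t0 = 100000.0, 0.25, 40.0
def logistic(t):
    return K / (1.0 + math.exp(-r * (t - t0)))

# "Observed" data: only the early, quasi-exponential phase (days 0..15).
days = list(range(16))
cases = [logistic(t) for t in days]

# Closed-form log-linear least squares fit of cases ~ a * exp(b * t).
n = len(days)
logs = [math.log(c) for c in cases]
xbar = sum(days) / n
ybar = sum(logs) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(days, logs)) \
    / sum((x - xbar) ** 2 for x in days)
a = math.exp(ybar - b * xbar)

# On the fitted range the exponential is nearly indistinguishable from the truth...
max_rel_err = max(abs(a * math.exp(b * t) - logistic(t)) / logistic(t)
                  for t in days)

# ...but extrapolated to day 60 it overshoots the true plateau wildly.
extrapolated = a * math.exp(b * 60)
print(f"max relative error on fitted range: {max_rel_err:.4f}")
print(f"exponential extrapolation at day 60: {extrapolated:.0f}")
print(f"true value at day 60 (plateau K = {K:.0f}): {logistic(60):.0f}")
```

The numbers and the logistic form are illustrative, not a model of any real outbreak; the point is only that a fit can look excellent where the data are, and still be wildly wrong where they are not.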
It was Dick Feynman who once said - I am paraphrasing, since I am too lazy to look it up - "I believe that a scientist dealing with a non-scientific problem is as dumb as the next guy." But here we are observing scientists dealing with a scientific problem, so the premise is very different. What causes my colleagues to feel this urge to fit a curve through data points, when that is not even what domain experts do? Why don't they spend part of their confined time at home learning the basics of epidemiology, studying diffusion models, or, more constructively still, getting in touch with academics from that other field of science, in search of hints or to offer help with their special data-analysis skills?
In other words, where has the scientific method gone? Is it really the case that a physicist confined at home magically and instantly feels freed from the professional burden of the painstaking checking of details and assumptions, the obsession with preliminary studies of systematic and biasing effects that may invalidate the data, the rigorous assessment of the methodology?
I must say I believe this virus outbreak has already taken a toll on all of us. From the psychological point of view we have been hit hard: to fight the pandemic we have been forced to cut our social ties, to stop all our normal academic activities, and to confine ourselves at home, where thought does not flow as fluently as it does within the dear walls of our snug offices. It is as if, by being removed from our Physics departments, we had instantly been stripped of our medals and flashes. And yet we feel that as scientists we should know better about what awaits us: our superiority complex demands that we consider ourselves the only ones who understand numbers, the only ones who truly grasp the frightening behaviour of power-law growth, the only ones who get it.
So there goes another fit in the wall.
As I said at the very beginning, I hope my colleagues understand that I am not accusing anybody here. For I have, many times during these days of confinement, felt that urge to interpolate the data and make sense of the apparent inconsistencies they present. But I resisted it, largely because I knew it would be a dilettantesque form of entertainment - and I already have a good substitute for data fitting in that department, as I spend hours playing online blitz chess, where I am indeed a dilettante (although a good one at that). We are human beings: we want entertainment, and we sometimes find it in weird ways. Nothing to be too concerned about.
But the bottom line is clear to me: the psychological implications of home confinement, in a situation of fear and uncertainty, are simply not good for scientific research. We tend to become complacent, as we feel more protected from outside criticism. Or maybe the armchair at home is just too comfortable compared with the chair at our office desk. Besides, social media are not a peer-reviewed scientific outlet, and in a rapidly evolving situation nobody will accuse us of sloppy data analysis if we indulge in the activity.
In a word: we have (temporarily, I hope) turned ourselves into crackpots.
And the other take-home point, the most important one I think, is that the scientific method is something to treasure, hold dear, and protect. It is not something we can take for granted, or something that, once learned, will stick with us for the years to come. Scientific truths can only be learnt through a painful, slow process made of small uphill steps. To fit epidemiological data you must first understand the basics of epidemiology, then read the literature, then read all the recent research, and then turn to the research it cites. And finally, one day, you may succeed in putting together a model of the diffusion of Covid-19 that is on the same level as your last paper on string dualities, or on electroweak observables, or your last limit on the gluino mass.
So, please, save the scientific method. It may be one of the few things we can cling to in a very uncertain future. And for God's sake, we will need it as badly as ever.