As I reported in a post a few days ago, the Italian sentencing of seven scientists to six years of imprisonment for their misassessment of the risks faced by the population of L'Aquila, which was soon thereafter struck by a powerful earthquake that killed 309 people and injured 2000, has raised interest and consternation worldwide and spurred a debate which is not likely to end soon.
Who is guilty?
I personally believe that the real offenders in the L'Aquila case are the politicians who put in charge a "great risks" committee which could be steered and manipulated at will, and who then put words in its members' mouths as they pleased on the eve of the earthquake. That this is exactly what happened has now been proven by wiretapped conversations between the head of the Italian civil protection, Guido Bertolaso, and members of the committee.
Also guilty, and even more so, are of course the criminals who constructed buildings in a seismic area with defective materials, earning huge profits. These belong to the same race of subhumans as those who were reported laughing on the phone upon hearing the news that L'Aquila had been destroyed by the earthquake and would soon need their companies for the rebuilding (yes, they laughed).
What I think is worth focusing on here, however, is neither Italian politics nor the way politicians manipulate information for their own agendas - scientists have no more power than private citizens in dealing with that problem. I believe the issue is how one may communicate the estimate of a risk when that risk is not of order unity, say 30% or 50% (as for instance the risk of a large water surge on the coast of New York at the time of writing, due to the arrival of hurricane Sandy), but rather 0.002% or 0.1%.
If you read these lines you are probably among the top N% of the population with familiarity with such small p-values, and I have no doubt you would agree that a 0.002% risk of serious injury or death is one you can occasionally take without worrying much, while a 0.1% risk is worth all your attention. But how many people around us are capable of handling the difference between those numbers?
Whatsa p-value?
If I were the lawyer of a scientist brought to court for excessively downplaying the risk of a catastrophic event in communicating with the public, I would center my defence on that particular point: at least in Italy, citizens are not capable of handling that kind of information. If I release a statement that the risk of a strong earthquake in the coming month has risen from 0.002% to 0.1%, I give citizens the tools to decide on their own whether, say, to invest money and time into temporarily moving to another town; but how many will really know what it is logical to do, and what 0.1% really means? Would it not be better to strive for imprecise but understandable information rather than for correct but obscure information? I confess I am not sure of the answer to this.
Add to the above the fact that a p-value is not enough by itself. Besides obviously relating the probability to the considered time span (a 0.1% risk per day is one thing, and a 0.1% risk per year quite another! - yet note how this is seldom specified), I would want the best possible estimate of the p-value as a function of the magnitude of the earthquake. Personally I would be happy with a ROOT file containing a two-dimensional graph, but maybe others would prefer a spreadsheet ;-)
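To make the time-span point concrete, here is a minimal sketch (my own illustration with made-up numbers, not an estimate of any real seismic risk) of how a small daily probability compounds over longer periods, assuming each day carries an independent chance of the event:

```python
def cumulative_risk(p_daily: float, days: int) -> float:
    """Probability of at least one event in `days` days,
    assuming each day carries an independent probability p_daily."""
    return 1.0 - (1.0 - p_daily) ** days

p = 0.001  # a 0.1% risk *per day* (hypothetical)
print(f"over 1 day:   {cumulative_risk(p, 1):.3%}")
print(f"over 30 days: {cumulative_risk(p, 30):.3%}")
print(f"over 1 year:  {cumulative_risk(p, 365):.3%}")
```

The same headline number, 0.1%, corresponds to roughly a 3% monthly risk and a 30% yearly risk if it is quoted per day - which is exactly why the time span must always be specified.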
The alternative to speaking about p-values could be to make a global risk analysis for the average citizen, and report only the final result. A risk analysis involves a cost function (which is of course subjective, but could be "averaged out" over the entire population) and a probability density function describing, in this case, how likely it is that citizens get injured or killed if they decide either to leave or to stay in their high-risk home area. Would I, upon performing that calculation and finding that it is better for citizens to continue living normally during a period of increased risk, be justified in telling them "Go back home and do not worry, the slightly increased risk of an earthquake is not significant enough to warrant a change in your habits"? It would be a half lie, but would it really be worse than correct information which is harder for most to handle? Especially if compounded with additional false information spread by alarmists armed with a radon detector, as in the case we are discussing.
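Such an analysis can be caricatured in a few lines. The numbers below are entirely invented for illustration; only the decision rule - pick the action with the lower expected cost - is the standard one:

```python
def expected_cost(p_event: float, cost_if_event: float, fixed_cost: float) -> float:
    """Expected cost of an action: the probability-weighted harm
    plus whatever the action itself costs."""
    return p_event * cost_if_event + fixed_cost

# All numbers hypothetical, in arbitrary units:
p_quake = 0.001          # assumed monthly probability of a damaging quake
harm_if_stay = 1_000_000 # average cost of injury/loss if you stay and it strikes
cost_of_moving = 5_000   # cost of temporarily relocating for a month

stay = expected_cost(p_quake, harm_if_stay, 0.0)
move = expected_cost(0.0, 0.0, cost_of_moving)  # moving removes the exposure

print(f"expected cost of staying: {stay:.0f}")
print(f"expected cost of moving:  {move:.0f}")
print("advice:", "stay" if stay < move else "move")
```

With these invented numbers the calculation favours staying; of course the real difficulty, as argued above, lies in choosing the cost function and in averaging it honestly over a whole population.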
I know what you would choose: you want the p-value, the graphs, the hard data, the whole shebang. Yet the matter is not so clear-cut to me. My impression, anyway, is that we first need to change our society, fostering more education in scientific and quantitative reasoning. We need to explain what small chances mean, and how to take decisions based on a logical assessment of all the variables in the system.
Alas, teaching how to correctly use small p-values is surprisingly hard, because although everybody has experience with events which have very small odds - a relative who wins a large sum at the lottery, somebody who dies in an airplane crash, etcetera - very few are capable of looking at the big picture, evaluating the look-elsewhere effect for the situation at hand, and correctly assessing statistical fluctuations.
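A quick way to see the look-elsewhere effect at work (again my own sketch, with hypothetical numbers): an event that is nearly impossible for any one individual becomes nearly certain once you consider a whole population of people it could happen to:

```python
p_rare = 1e-4        # a 0.01% yearly risk for any single person (hypothetical)
population = 100_000 # number of people one could "look at"

# Probability that *nobody* in the population experiences the event,
# assuming independence between individuals.
p_nobody = (1.0 - p_rare) ** population

print(f"chance nobody is hit:  {p_nobody:.6f}")
print(f"chance someone is hit: {1.0 - p_nobody:.6f}")
print(f"expected number of cases: {population * p_rare:.0f}")
```

With these numbers, about ten people per year experience the "one in ten thousand" event, so hearing of one such case in the news tells us nothing about an increased risk - yet that is precisely the inference most people draw.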
The bottom line is my whole starting point, the motivation for writing this blog: as scientists and informed citizens, it is our duty to educate the public in rational thinking.
Convicting Scientists For Miscommunicating Risk: What We Should Focus On