UPDATE: BBC Radio contacted me to let me know that they have corrected their mistake. I am very glad to hear it! So you can continue reading the BBC after all!

Probability inversion is one of the nastiest mistakes one can make when handling the results of a statistical analysis, invalidating the interpretation of the data at its roots, to the point that the whole work effectively becomes useless. Unfortunately, it is a very common entertainment for journalists reporting scientific results, and oftentimes scientists themselves fall into the trap.

In a nutshell, the issue is taking a statement on the probability of observing some data, given some hypothesis, and turning it into a statement of the probability of the hypothesis given the observed data. If I observe that a chunk of LHC data does not contain any Higgs boson and calculate a probability of 0.001 of observing such an effect if the Higgs boson does exist, that does not mean that there is a 99.9% probability that the Higgs boson does not exist, given the observed data!
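To see the point in numbers, here is a minimal sketch of Bayes' theorem applied to the Higgs example above. Only the 0.001 is taken from the text; the prior and the probability of the data under the no-Higgs hypothesis are assumptions of mine, chosen purely for illustration. The point is that turning P(data | hypothesis) into P(hypothesis | data) requires extra ingredients, so the 0.001 alone can never yield the "99.9%" claim:

```python
# A minimal Bayes-theorem sketch with made-up numbers, showing why
# P(data | Higgs exists) = 0.001 does NOT imply P(no Higgs | data) = 0.999.

p_data_if_higgs    = 0.001  # from the example: probability of such data if the Higgs exists
p_data_if_no_higgs = 0.50   # assumption: such data are quite ordinary if there is no Higgs
prior_higgs        = 0.99   # assumption: prior degree of belief that the Higgs exists

# Bayes' theorem: P(Higgs | data) = P(data | Higgs) * P(Higgs) / P(data)
p_data = p_data_if_higgs * prior_higgs + p_data_if_no_higgs * (1.0 - prior_higgs)
posterior_higgs = p_data_if_higgs * prior_higgs / p_data

print(f"P(no Higgs | data) = {1.0 - posterior_higgs:.3f}")  # ~0.835, nowhere near 0.999
```

Change the assumed prior or the likelihood under the alternative and the posterior moves accordingly; the 0.001 by itself determines nothing about the hypothesis.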

The inversion mistake is nasty because it is quite easy to make: we are accustomed to reasoning in terms of the probabilities of causes. We observe some data and should stick to making statements on the probability of the data, but we much prefer discussing hypotheses, and so we err. (Note that I will resist the temptation to enter a discussion of Bayesianism here, since the focus of this article is elsewhere.)

For today's example of probability inversion, let us take the BBC News site, which features a fun piece titled "Does Chocolate make you clever?". The piece argues that there is a strong correlation between the number of Nobel laureates a country has produced and that country's chocolate consumption:

"Messerli took the number of Nobel Prize winners in a country as an indicator of general national intelligence and compared that with the nation's chocolate consumption. The results - published in the New England Journal of Medicine - were striking.

"When you correlate the two - the chocolate consumption with the number of Nobel prize laureates per capita - there is an incredibly close relationship," he says. "This correlation has a 'P value' of 0.0001. This means there is a less than one-in-10,000 probability that this correlation is simply down to chance."

Note that this is written in a rather cryptic way. It is totally unclear what it means to say that a statistic - the correlation coefficient - has a p-value of 0.0001: does it mean that the particular value found happens only once in ten thousand trials? Or that a larger correlation is found only once in ten thousand cases when there is no correlation? Or a smaller one? See, these are the pitfalls of careless reporting of mathematical expressions. Anyway, we are entitled to assume, for the sake of argument, that the correlation coefficient was found to be large, and that one as large or larger has been estimated to occur only once in ten thousand trials if there is no real correlation between the two quantities being studied (see, we are talking about the probability of the data!).
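To make that correct reading concrete, here is a sketch of how such a p-value could be computed with a permutation test. The data arrays below are invented placeholders, not Messerli's actual numbers, and the one-sided convention is my assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data, for illustration only: chocolate consumption
# (kg/year/capita) and Nobel laureates per 10 million inhabitants
# for a handful of countries. NOT the figures from the study.
chocolate = np.array([4.5, 8.5, 11.9, 2.7, 6.3, 10.2, 5.1, 9.4])
nobels    = np.array([5.5, 24.3, 31.5, 1.9, 11.0, 25.8, 7.6, 19.1])

r_obs = np.corrcoef(chocolate, nobels)[0, 1]

# Permutation test: under the null hypothesis of no association,
# shuffling one variable destroys any real correlation. The p-value
# is the fraction of shuffles yielding a correlation at least as
# large as the observed one - a probability of the DATA given the
# null hypothesis, not a probability of the null hypothesis itself.
n_perm = 100_000
count = 0
for _ in range(n_perm):
    r = np.corrcoef(chocolate, rng.permutation(nobels))[0, 1]
    if r >= r_obs:
        count += 1

p_value = count / n_perm
print(f"observed r = {r_obs:.3f}, one-sided p-value ~ {p_value:.5f}")
```

Note what the computed number is: the frequency of data at least as extreme as those observed, assuming no correlation exists. Nothing in the procedure assigns a probability to the hypothesis itself.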

Now, what is meant by "This means there is a less than one-in-10,000 probability that this correlation is simply down to chance"? This is the culprit. Does it mean that the observed data are very odd, if one assumes there is no chocolate effect on the brains of future Nobel Prize recipients? Or does it rather mean that the hypothesis that chocolate does nothing special to your brain is extremely improbable? I bet a dime that 99.99% of BBC News readers will get the latter. Probability inversion trap!

(I leave alone the issue of what being "smart" may possibly have to do with getting a Nobel Prize, because I am in a kind mood today.)

At this point, my advice to Dr. Messerli, or to the journalist if the fault lies in the reporting of the sentence (alas, this is often the case), is to go eat a bar of chocolate. It will do nothing for their mental abilities, but they will probably enjoy it.