Ah, the pleasure of study! I had forgotten the immense intellectual pleasure one may derive from reading a stimulating, informative book. And if half a lifetime has passed since the last time you studied something, and what is left of it in your brain is just Culture, then reading it again combines the pleasure of discovery (rediscovery, in this case) with that of putting things in perspective, combining the bits of information you recollect with all the knowledge you have acquired since you last put the book down.

I am currently reading Frederick James' wonderful book "Statistical Methods in Experimental Physics", which is now in its second, revised, enlarged, clean-looking, and awesome-smelling edition; at less than twenty-five bucks, it is a shame not to buy it. And every page hides a treasure. Let me offer you an example, which I will simplify from James' exposition.

Suppose you have a balance, one of those two-armed things, with left and right dishes on which you can place both the object to weigh and the reference weights used for the measurement. And suppose you want to measure the weight of two objects, A and B, by performing two separate measurements. The balance is assumed to yield a measurement with a fixed uncertainty, say E = 1 g, one gram. What are you going to do?

99.9% of us would do what is straightforward: put A on one dish and measure it by adding weights to the other dish; then put B on the dish and measure it in the same way. Simple, huh? Both measurements will be affected by a $E_A=E_B=1$ g uncertainty, and we would be confident we have done the right thing. With two measurements and two quantities to determine, we cannot do better, can we?

We could, if we knew some very basic statistical wizardry, having read the fundamentals of statistics in James' book or elsewhere. Here is how.

You perform a combined measurement using both objects together, in two different configurations (meaning that you do not measure the weight of A+B twice, for instance). Let us say you first measure the sum $S=A+B$, by putting both objects on the same dish, and then the difference $D=A-B$ of the two weights, by putting one on each dish. What gives?

A lot. Having measured S and D, both with an uncertainty still equal to one gram, you can now derive A and B back: simple algebra (a system of two equations in two unknowns) yields $A=(S+D)/2$ and $B=(S-D)/2$. So far so good. But what is the uncertainty on A and B now? We need error propagation to compute it:

$A = \frac{S}{2}+\frac{D}{2} \Rightarrow E_A = \sqrt{(\frac{E_S}{2})^2+(\frac{E_D}{2})^2}=\sqrt{E^2/2}= \frac{E}{\sqrt 2}$,
$B = \frac{S}{2}-\frac{D}{2} \Rightarrow E_B = \sqrt{(\frac{E_S}{2})^2+(\frac{E_D}{2})^2}=\sqrt{E^2/2}= \frac{E}{\sqrt 2}$.
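As a sanity check, here is a quick Monte Carlo sketch of the argument (not from the book; the true weights, seed, and number of trials are arbitrary choices of mine):

```python
# Monte Carlo check of the sqrt(2) error reduction.
import numpy as np

rng = np.random.default_rng(42)
A_true, B_true = 60.0, 40.0   # hypothetical true weights, in grams
sigma = 1.0                   # fixed balance uncertainty, E = 1 g
n = 200_000                   # number of simulated experiments

# Scheme 1: weigh A directly.
A_direct = A_true + rng.normal(0.0, sigma, n)

# Scheme 2: measure S = A + B and D = A - B, then invert.
S = (A_true + B_true) + rng.normal(0.0, sigma, n)
D = (A_true - B_true) + rng.normal(0.0, sigma, n)
A_combined = (S + D) / 2

print(f"direct   std(A) = {A_direct.std():.3f}")    # close to 1.0
print(f"combined std(A) = {A_combined.std():.3f}")  # close to 1/sqrt(2) = 0.707
```

The empirical spread of the combined estimate comes out a factor $\sqrt 2$ smaller than the direct one, as the propagation formula promises.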

Lo and behold, the errors on A and B are both smaller by a factor of $\sqrt 2$: about 0.7 grams each! What devilish magic is at the source of this error shrinkage? The answer is that, ultimately, the source is the optimal use of information. A combination of measurements in which both objects participate, and which are not degenerate with each other, is the optimal procedure. When the 99.9% of us set A on the left dish and left B on the table, B was begging to participate, but we ignored it; and we later measured B ignoring A. Part of the information was lost in that careless act.

Unconvinced? For a more formal derivation, you may refer to James' book. But do not try the experiment at home! The presence of dangerous systematic biases might cause a strong headache, frustration, and irritability.

This article could be complete as is, but being an unrepentant night-writer, I will add a bonus, this time fishing from my own cooking: a minute of homework. Let us imagine this time that our balance yields an uncertainty which is a fixed fraction k of the measured weight, say 1% (a much more realistic hypothesis, by the way!). What changes in the argument above? A lot. If we make two independent measurements of A and B, we end up with uncertainties $E_A=kA$ and $E_B=kB$. If we instead measure the sum S and the difference D, this time we get the following uncertainties:

$E_A= k \sqrt{\frac{A^2+B^2}{2}}$,
$E_B= k \sqrt{\frac{A^2+B^2}{2}}$.

Our procedure has democratically shared the uncertainty between the two objects! If they had equal weight, the procedure would not change the uncertainties by an inch with respect to the independent measurements of A and B, since inside the radicals we would get two equal terms, so that $E_A=kA, E_B=kB$; but for different weights, their initial uncertainties now converge to a middle ground. Imagine what happens if A is much larger than B: A's uncertainty has decreased by a factor of $\sqrt{2}$, while B's uncertainty has increased a lot. This corresponds to the limiting case of a negligible B, because measuring the sum and the difference of A and B is then equivalent to measuring A twice, and two measurements are bound to reduce the uncertainty by a factor of $\sqrt 2$. Quod Erat Demonstrandum.
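The homework can be checked numerically as well. Below is a sketch for the relative-uncertainty balance, again with weights and trial counts chosen arbitrarily by me, taking A much larger than B:

```python
# Monte Carlo for a balance whose error is a fixed fraction k of the reading.
import numpy as np

rng = np.random.default_rng(1)
k = 0.01                      # 1% relative uncertainty
A_true, B_true = 90.0, 10.0   # hypothetical weights: A much larger than B
n = 200_000

# Combined scheme: the error on each reading scales with its magnitude.
S = (A_true + B_true) + rng.normal(0.0, k * (A_true + B_true), n)
D = (A_true - B_true) + rng.normal(0.0, k * abs(A_true - B_true), n)
A_comb = (S + D) / 2
B_comb = (S - D) / 2

expected = k * np.sqrt((A_true**2 + B_true**2) / 2)
print(f"direct:   E_A = {k*A_true:.3f}, E_B = {k*B_true:.3f}")
print(f"combined: std(A) = {A_comb.std():.3f}, std(B) = {B_comb.std():.3f}")
print(f"formula:  k*sqrt((A^2+B^2)/2) = {expected:.3f}")
```

With these numbers, both combined uncertainties land near the common value $k\sqrt{(A^2+B^2)/2}$: smaller than the direct $kA$ for the heavy object, larger than the direct $kB$ for the light one, just as the text argues.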