Superfluid liquid Helium is shot under very high pressure out of a tiny nozzle and into vacuum. In front of the nozzle, the excess pressure bursts the liquid violently apart into a myriad of fragments. A cloud of ultra-small droplets comes into existence. The liquid beam is almost completely atomized.
If you take a randomly drawn droplet from the explosion, the number of atoms inside it is most often just one, a single atom. Finding two atoms is less likely, three atoms even less so, and so on.
Similar fractionations occur in earthquakes, the stock market, you name it. The law that describes the probability of the fragment sizes N, in this case the so-called exponential distribution, is a fundamental, widely applicable decay law. It is the green curve (gray where it overlaps with the red one) labeled with a green "EXP" in the following plot.
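The ever-decreasing likelihood of larger fragments can be sketched in a few lines of code. This is only an illustration: the average fragment size U = 1.0 is an arbitrary choice, and exp_prob is a hypothetical helper implementing the exponential density e^(-N/U)/U discussed in the footnote below.

```python
import math

def exp_prob(n, avg):
    """Exponential probability density e^(-n/avg)/avg for fragment size n,
    given an average fragment size `avg`."""
    return math.exp(-n / avg) / avg

# With an average fragment size of one atom, single atoms dominate,
# two-atom fragments are rarer, three-atom fragments rarer still:
probs = [exp_prob(n, 1.0) for n in (1, 2, 3)]
assert probs[0] > probs[1] > probs[2]
```

Each step up in fragment size multiplies the probability by the same constant factor e^(-1/avg), which is what makes the decay law so simple.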
Blowing up stuff is fun, but we do not just want to destroy, we want to grow, especially in the stock market, right? But also in biology, nanotechnology, and many other important fields. So now look at something apparently totally different, at the very opposite of violent destruction, namely a typical random growth.
We shoot a beam of Helium gas out of the same nozzle. The pressure is again very high, so that outside of the nozzle, the gas flies apart again. Since the starting temperature is now slightly higher, we are dealing with a gas instead of a liquid, and the strong expansion after the nozzle cools the gas down strongly. This means that the gas condenses, just like the atmosphere's moisture condenses into clouds, with the droplets growing until they eventually fall out of the sky as raindrops.
We have a cloud of droplets again, but the probability of large droplets is not necessarily smaller than the probability of small droplets. Now, large droplets are not more likely to burst apart. On the contrary, large guys are likely to meet and eat more small ones and grow even larger. In economics, this is described by an old German proverb: The devil always shits on the biggest heap!
In more cultivated terms: While decays and fractionation often lead to power laws and exponential (EXP) distributions [1,2], random growth processes mostly give rise to so-called log-normal (LN) distributions, be it in biology, economics, or cluster physics [3,4,5]. A typical log-normal is the red curve labeled "LN" in the picture above.
You can immediately see the big difference between the two. The LN has a maximum hump that the EXP cannot have. This maximum can wander about. If the Helium gas starts out warmer for example, the maximum will be further to the left. This additional freedom lies at the heart of the mystery about the curious relation I will describe below.
The difference seems to merely confirm how different these two processes are, violent fractionation and growth. The LN that describes growth has two adjustable parameters: the average size of a grown object such as a droplet, call it Average(N), and the so-called standard deviation ΔN, which describes how broad the distribution of sizes is. If, for example, almost all droplets have close to the average size, then ΔN is small. If many droplets are much smaller and many are much larger than average, ΔN is large.
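This two-parameter freedom can be made concrete with a small sketch. The mean value 10.0 and the shape parameters below are arbitrary illustration choices, and lognormal_std is a hypothetical helper; it uses the standard textbook formulas for the log-normal, where the variance equals (e^(σ²) − 1) times the squared mean:

```python
import math

def lognormal_std(mean, sigma):
    """Standard deviation of a log-normal distribution with the given
    mean and shape parameter sigma (sigma is the standard deviation of
    the underlying normal). Variance = (e^(sigma^2) - 1) * mean^2."""
    return math.sqrt((math.exp(sigma**2) - 1.0) * mean**2)

# Same average droplet size, two very different breadths:
narrow = lognormal_std(mean=10.0, sigma=0.3)   # small spread
broad = lognormal_std(mean=10.0, sigma=1.0)    # large spread
assert narrow < broad
```

For the exponential, no such knob exists: once the average is fixed, the breadth is fixed with it, as the footnote calculation below the post shows.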
For the EXP that describes destructive processes, there is no such freedom. For the EXP, the average is strictly proportional to the breadth of the variety of sizes. One can see this easily from a simple calculation*.
Now here is the intriguing mystery. There are many parameters that influence growth processes, even in the well delineated example of the Helium gas beam condensing into droplets: The temperature, the pressure, and the diameter of the nozzle. Nevertheless, it seems to be a quite common observation that averages are chained to deviations Δ, much like in explosive fractionations, although this is unexpected in the case of growth processes.
It gets even better: Not only is there this unexpected loss of freedom, but in the droplet experiment described, for example, the relation Average(N) = ΔN also holds for the growth**. And better yet: By slowly heating the liquid Helium in front of the nozzle until it is a gas, the experiment can smoothly transition from the destructive regime into the growth regime. This means that the curious relation Average(N) = ΔN in the regime of growth seems to be “inherited” from the destructive regime, where it is much better understood. But how so? How on earth does blowing stuff up extremely violently constrain apparently unrelated growth mechanisms? This is the mystery.
There seems to be a hidden symmetry between growth and destruction. This symmetry could tell us something new about random growth. It moreover has practical implications. Nanotechnology researchers often promise that further research will likely allow so-called “size selection”, which means they promise to grow a certain particle, some bio-active atomic cluster for example, at any desired average size while ensuring small sample variability. Surely, if the variability is chained to the average, this is an empty promise.
Only by understanding the nature of the strange relation can we hope to disentangle the parameters and suggest methods to improve size selection. The insights into important random growth processes may be significant.
Any suggestions as to what the hidden symmetry between growth and destruction is all about? Do not look at the solution, which I will also write about soon (Update: here it is), before enjoying the fun of thinking up something for yourself. It will make the answer so much more … – well, you will see.
[1] D. E. Grady and M. E. Kipp, J. Appl. Phys. 58, 1210 (1985)
[2] B. L. Holian and D. E. Grady, Phys. Rev. Lett. 60, 1355 (1988)
[3] M. Villarica, M. J. Casey, J. Goodisman, and J. Chaiken, J. Chem. Phys. 98, 4610 (1993)
[4] C. R. Wang, R. B. Huang, Z. Y. Liu, and L. S. Zheng, Chem. Phys. Lett. 227, 103 (1994)
[5] E. Limpert, W. A. Stahel, and M. Abbt, BioScience 51, 341 (2001)
[6] S. Vongehr, S. C. Tang, and X. K. Meng, "On the Apparently Fixed Dispersion of Size Distributions", J. Comput. Theor. Nanosci. 8(4), 598-602 (2011)
* The exponential probability density is P(N) = e^(−N/U)/U. For the average of any parameter X that depends on N, integrate the parameter multiplied by the probability P from N = 0 to infinity: Average(N) = INT[P·N] = INT[(N/U)·e^(−N/U)] = U·INT[x·e^(−x)] = U. The square of the standard deviation generally equals the average of the squares minus the square of the average: [ΔN]² = Average(N²) − [Average(N)]² = INT[P·N²] − U² = 2U² − U² = U². Hence ΔN = U = Average(N).
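The footnote's result, Average(N) = ΔN = U, can also be checked numerically with a quick Monte Carlo sketch; the value U = 5.0 and the sample count are arbitrary choices for illustration:

```python
import random
import statistics

# Draw many fragment sizes from an exponential distribution with
# average U; both the sample mean and the sample standard deviation
# should then come out close to U, confirming Average(N) = ΔN.
random.seed(0)
U = 5.0
samples = [random.expovariate(1.0 / U) for _ in range(200_000)]
mean = statistics.fmean(samples)
std = statistics.pstdev(samples)
assert abs(mean - U) / U < 0.05
assert abs(std - U) / U < 0.05
```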
** The widely known formula employs the full width at half maximum (FWHM), not the standard deviation. There are many more simplifications down to blogging level, and presenting it all rigorously here would be very tedious indeed. The gist, however, is not distorted by this: there seems to be a relation between the average and the deviation that should not exist!