I received the text below from Jim Markovitch, and decided it was fun enough to make a guest post entry with it. Markovitch worked for the world's largest supplier of corporate credit information, where he designed and implemented algorithms to estimate the probability of the equivalence, for credit purposes, of two name/address records. More recently he has adapted these algorithms to help identify unusually efficient approximations of fundamental constants. Let us see what this is about - TD


Quantum Diaries Survivor readers who happen to play chess will be familiar with the concept of the overworked defender: a piece is "overworked" if it performs multiple defensive functions. Chess players help themselves understand their opponent's position by identifying just such weaknesses.

There is an equivalent concept in physics which I will call the overworked constant: it is a constant introduced to help fit one set of data, which somehow also manages to fit other, seemingly unrelated, data. In this sense Planck's constant h is a classic overworked constant. Presumably, the more overworked a constant is, the more likely it is to be fundamental. Physicists help themselves understand physics with the aid of just such constants.

Now consider that if

 x = 10 − 1/30000
then
 x² + (x/3)³ = 137.036 000 0023... ,
which fits the 2010 CODATA fine structure constant inverse of 137.035 999 074 to within 6.8 parts per billion (ppb). If x also manages to fit other, seemingly unrelated, data, then x too is, in my parlance, overworked and may for that reason be fundamental.
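The arithmetic can be checked directly; a minimal sketch in Python, using the CODATA value quoted above:

```python
from decimal import Decimal, getcontext

# High-precision check of x² + (x/3)³ for x = 10 − 1/30000.
getcontext().prec = 30
x = Decimal(10) - Decimal(1) / Decimal(30000)
value = x ** 2 + (x / 3) ** 3

codata_2010 = Decimal("137.035999074")  # 2010 CODATA inverse FSC, as quoted
ppb = abs(value - codata_2010) / codata_2010 * Decimal(10) ** 9

print(value)  # ≈ 137.0360000023
print(ppb)    # ≈ 6.8 (parts per billion)
```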

As it turns out, substituting for x above gives

 ( 10 − 1/30000 )² + ( 10/3 − 1/(30000×3) )³ = 137.036 000 0023...
whose four constants are "overworked" in that they also closely reproduce the sines squared of the experimental quark and lepton mixing angles (viz., L12, L13, L23, Q12, Q13, and Q23).

[I -TD- am compelled to add here, to allow uninformed readers to understand what is being discussed, that the "mixing angles" that Jim talks about here are the elements of two matrices which relate the mass and flavour eigenstates of quarks and leptons, respectively. The quark mixing matrix is called "Cabibbo-Kobayashi-Maskawa matrix" from the name of the theorists who conceived it (Cabibbo had the original thought of mixing d and s quarks through an angle to explain the phenomenology of weak interactions; Kobayashi and Maskawa extended the formalism to three generations of quarks to include complex phases in the parametrization, thereby allowing CP violation in weak interaction processes). The lepton matrix is much less well known, and its elements still quite uncertain.]


 10 = 1/(sin² Q12 / sin² L23)
 1/30000 = sin² L13 / sin² Q23
 10/3 = 1/sin² L12
 1/(30000×3) = sin² Q13 .

It follows that

 sin² L12 = 3/10
 sin² Q13 = 1/(30000×3) .
And if we assume that
 sin² L23 = 0.5 ,
then it follows that
 sin² Q12 = 0.05 .
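These identifications can be checked with exact arithmetic; a small sketch under the assumptions just stated:

```python
from fractions import Fraction

# Sines squared implied by the relations above (exact rational arithmetic).
sin2_L12 = Fraction(3, 10)         # from 10/3 = 1/sin² L12
sin2_Q13 = Fraction(1, 30000 * 3)  # from 1/(30000×3) = sin² Q13
sin2_L23 = Fraction(1, 2)          # assumed above
sin2_Q12 = sin2_L23 / 10           # from 10 = sin² L23 / sin² Q12

print(float(sin2_Q12))  # 0.05
print(float(sin2_Q13))  # ≈ 0.0000111...
```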

Moreover, because above

 1/30000 = sin² L13 / sin² Q23 ,
it follows that
 sin² L13 = (1/30000) × sin² Q23 ,
which, given that Q23 measures roughly 2.4 degrees, produces an L13 of roughly 1/70th of a degree, and a sin² L13 of ∼10⁻⁷.
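This back-of-the-envelope value can be reproduced numerically; a sketch taking Q23 as 2.4 degrees, as quoted above:

```python
import math

# sin² L13 = (1/30000) × sin² Q23, with Q23 taken as 2.4 degrees.
q23 = math.radians(2.4)
sin2_l13 = math.sin(q23) ** 2 / 30000

# Convert back to an angle for comparison with "roughly 1/70th of a degree".
l13_deg = math.degrees(math.asin(math.sqrt(sin2_l13)))
print(sin2_l13)  # ≈ 6e-8, i.e. of order 10⁻⁷
print(l13_deg)   # ≈ 0.014 degrees, roughly 1/70th of a degree
```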

    Sine Squared    Predicted Value
       sin² L12     0.3
       sin² L13     ∼10⁻⁷
       sin² L23     0.5
       sin² Q12     0.05
       sin² Q13     0.000011111...
       sin² Q23     -

This value for L13 differs from experiment by 2.7 standard deviations, the largest error among the above predictions for L12, L13, Q12, and Q13. (See Ceccucci, Ligeti, and Sakai, Feb. 2010, and Schwetz, Tortola, and Valle, Aug. 2011, for recent quark and lepton mixing data.)

It is logical to wonder whether all of the above are accidental: the fits achieved for L12, L13, Q12, and Q13, and the precise fit of the FSC inverse.

• With respect to the quark and lepton mixing angles, one can estimate the probability that four randomly-generated angles in the interval [0°, 90°] will fit within 2.7 standard deviations the experimental quark and lepton mixing angles L12, Q13, Q12, and L13. Naturally, all 4×3×2=24 ways that four such generated angles can be paired with the four experimental angles must be taken into account. Monte Carlo methods reveal that four randomly-generated angles in the interval [0°, 90°] can be expected to fit experimental L12, Q13, Q12, and L13 within 2.7 standard deviations once in about every 5,000,000 trials.

• With respect to the FSC inverse, its earlier fit to 6.8 ppb represents about eight decimal digits' worth of information.
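The Monte Carlo estimate described in the first bullet can be sketched as follows. The experimental central values and one-sigma uncertainties below are illustrative placeholders only, not the actual data from the cited references, so the code shows the method rather than reproduces the 1-in-5,000,000 figure:

```python
import itertools
import random

# PLACEHOLDER experimental angles (degrees) and 1-sigma uncertainties for
# L12, Q12, Q13, L13 -- illustrative values only, not the real data.
EXP_DEG = [34.0, 13.0, 0.2, 1.0]
SIG_DEG = [1.0, 0.1, 0.02, 0.5]

def trial_fits(angles, exp=EXP_DEG, sig=SIG_DEG, n_sigma=2.7):
    """True if SOME pairing of the four generated angles with the four
    experimental angles puts every angle within n_sigma of its target;
    all 4! = 24 pairings are tried, as the text requires."""
    return any(
        all(abs(a - e) <= n_sigma * s for a, e, s in zip(perm, exp, sig))
        for perm in itertools.permutations(angles)
    )

def estimate_hit_rate(n_trials, seed=0, **kwargs):
    """Fraction of trials in which four angles drawn uniformly from
    [0°, 90°] fit the experimental set under some pairing."""
    rng = random.Random(seed)
    hits = sum(
        trial_fits([rng.uniform(0.0, 90.0) for _ in range(4)], **kwargs)
        for _ in range(n_trials)
    )
    return hits / n_trials
```

With realistically tight uncertainties the hit rate is tiny, so a serious run needs many millions of trials; the text reports roughly one hit per 5,000,000.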

That the simple round numbers 10 and 1/30000 reproduce eight digits' worth of empirical data is, in itself, excellent evidence of a non-accidental relationship. Why should such simplification be possible? That 10 and 1/30000 independently reproduce the experimental values for L12, Q13, Q12, and L13, despite odds of roughly 5,000,000 to 1 against, represents a separate form of simplification, and confirms that they are overworked to a high degree. This is arresting evidence that 10 and 1/30000 relate non-accidentally to the FSC inverse and the quark and lepton mixing angles.

Moreover, additional evidence is readily provided by an alternative to the above method, which produces the above values plus experimental Q23 (the only mixing angle whose sine squared is not calculated above); see A Mathematical Model of the Quark and Lepton Mixing Angles (2011 Update) for details. Tables I and II on pages 10 and 11 of this source summarize all eight predictions made by this more robust method and how they have fared since 2007; these eight predictions comprise not merely the mixing matrices' six angles, but also their two phases.