    Physical Reality: Less Is More
    By Johannes Koelman | October 25th 2010
    Physical reality is composed of properties like distance, duration, velocity, area, volume, mass, energy, and temperature. To quantify these properties you need to measure them. And the act of measuring boils down to comparing against an agreed yardstick, a unit of measurement such as a foot, a gram, etc. 

    Do you need a dedicated yardstick for each quantifiable property?

    If the answer to this question were 'yes', then physics as we know it would not be possible. We would not be able to relate the various properties to each other; the laws of physics would not exist. Fortunately, the answer to the question is a clear 'no'. We need far fewer units than one might expect based on the number of physical properties.

    Consider measuring velocity. You do not need a unit of speed if you already have agreed on a yardstick for distance (a meter, a foot, a yard, a mile or whatever you agree upon) and a unit for duration (a second, a day, a heartbeat or whatever you can define as a repeatable duration). An alternative way of saying the same is: "you can build a speedometer with nothing more than a yardstick and a stopwatch". 
     
    So what is the minimum number of distinct yardsticks needed to quantify physical reality?

    The widely accepted answer is 'three'. With a ruler, a stopwatch and a weighing scale you can in principle measure all elements of reality. In physics jargon, this is usually phrased as "the number of physical dimensions of reality is three". I prefer to refrain from using the term 'number of dimensions' to indicate the number of independent yardsticks, as this might suggest a link to the spatial dimensions we perceive. Just like the number of physical dimensions, the number of spatial dimensions is also widely agreed to equal three*, yet the two numbers have absolutely nothing to do with each other. In fact, as we will see, the number of physical dimensions we carry reflects our ignorance of physical reality more than anything else.

    But there is another 'three' that pops up in fundamental physics, and that number is related to the number of yardsticks. It is the number of fundamental physical constants that need yardsticks to express their magnitude. Physicists again use the term 'dimension' and refer to these as dimensionful constants. These are to be contrasted with dimensionless constants that are pure numbers, such as the proton-to-electron mass ratio and the fine structure constant. The number of fundamental physical constants that are not pure numbers, but numbers dependent on the system of units used, is three. These three are: Maxwell's speed-of-light constant c, Planck's quantum constant h, and Newton's gravitational constant G. For reasons that will soon become clear, I prefer to represent these three constants in a slightly modified but equivalent way: c, h, and c⁴/G.

    In the following it will become evident that the number of dimensionful physical constants must equal the number of independent yardsticks. This is related to the character of the ultimate theory of physics: the still elusive theory of everything (TOE). A direct consequence of this identity is that the fundamental dimensionful constants can themselves be used as natural yardsticks. This was first proposed back in 1899 by Max Planck, and the resulting natural units became known as Planck units.
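    To make Planck's proposal concrete, here is a minimal Python sketch of his recipe (the constant values are approximate, and I use the conventional ħ = h/2π):

    ```python
    import math

    # Approximate SI values; hbar = h / (2*pi) by convention.
    c = 2.99792458e8        # speed of light, m/s
    hbar = 1.0545718e-34    # reduced Planck constant, kg*m^2/s
    G = 6.674e-11           # Newton's gravitational constant, m^3/(kg*s^2)

    # Planck's 1899 recipe: combine the three dimensionful constants into
    # natural yardsticks for length, time and mass.
    planck_length = math.sqrt(hbar * G / c**3)   # ~1.6e-35 m
    planck_time   = math.sqrt(hbar * G / c**5)   # ~5.4e-44 s
    planck_mass   = math.sqrt(hbar * c / G)      # ~2.2e-8 kg

    print(f"Planck length: {planck_length:.2e} m")
    print(f"Planck time:   {planck_time:.2e} s")
    print(f"Planck mass:   {planck_mass:.2e} kg")
    ```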


    Spurious units and apparent constants of nature

    So our question on the required number of yardsticks is answered. But that answer triggers another question:

    If we need only three units of measurement, why does the metric SI system contain as many as seven base units, and why do imperial systems like the US customary system contain an almost uncountable number of units?

    From a physics perspective, all systems of units currently in vogue are flawed, but to different degrees. When it comes to defining a system of units, the adage "less is more" is key. The fewer units, the better. The level of sophistication of a system of units is inversely proportional to the number of base units it carries.

    The imperial systems - and also the metric system - contain spurious units, units that are expressible in the other units. A case in point is the American system, which contains the unit 'gallon' for volume next to the unit 'foot' for length. Having the unit foot, one does not need a separate unit for volume: with the yardstick 'foot' available, one can construct a volume of one cubic foot and use that as the yardstick for volumes.

    Another way of looking at this issue is to start from the fact that 7.48051948052 gallons fit in a cubic foot. Is this number a fundamental constant of nature? A number yielding insight into the relationship between volume and length?

    No, it is a manmade conversion factor. It is an exact number that can be written as 576/77 gal/ft³. Whenever an exact number is assigned to a dimensionful physical constant, you can be sure you are dealing with an artifact, a conversion factor, rather than a constant of nature, which can only be known to finite precision. Keep this observation in mind; we will encounter other such examples at deeper levels.
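    You can check the exactness yourself: both units are defined exactly in terms of the inch (a US gallon is exactly 231 cubic inches). A minimal Python sketch:

    ```python
    from fractions import Fraction

    # Both units are defined exactly in terms of the inch:
    gallon_in_cubic_inches = Fraction(231)        # US gallon: exactly 231 in^3
    cubic_foot_in_cubic_inches = Fraction(12)**3  # (12 in)^3 = 1728 in^3

    gallons_per_cubic_foot = cubic_foot_in_cubic_inches / gallon_in_cubic_inches
    print(gallons_per_cubic_foot)          # 576/77, exact, hence manmade
    print(float(gallons_per_cubic_foot))   # 7.480519480519...
    ```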

    The metric SI system is only marginally better than the various imperial systems. Amongst its seven base units (the yardsticks for time, distance, mass, temperature, electrical current, amount of substance, and luminous intensity) several spurious units are hiding. An example is the unit for electrical current, the Ampere. Carrying the Ampere as a unit next to the units meter, second and kilogram causes the conversion factor of 5,000,000 A²·s²/(kg·m) to appear. Here too, the occurrence of an exact number indicates the presence of a spurious unit, in this case the unit Ampere.

    The mathematical constant 2π divided by the conversion factor 5,000,000 A²·s²/(kg·m) is known as the magnetic permeability of the vacuum. Sounds pretty much like a natural constant, right? By now you should know better. Natural constants carry an uncertainty; the magnetic permeability of the vacuum does not. The vacuum permeability is a conversion factor similar to the gallon-per-cubic-foot conversion factor: a constant created by mankind, not by nature.
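    To see the arithmetic: 2π divided by 5,000,000 is exactly 4π × 10⁻⁷, the textbook value of the vacuum permeability. A minimal Python check (the equality is exact in real arithmetic; the tolerance below only absorbs floating-point rounding):

    ```python
    import math

    # SI fixes the vacuum permeability exactly: mu_0 = 4*pi*1e-7 kg*m/(A^2*s^2).
    # That is the same statement as mu_0 = 2*pi / 5,000,000: an exact,
    # manmade number, not a measured property of nature.
    mu_0 = 4 * math.pi * 1e-7
    # isclose only absorbs floating-point rounding; in exact arithmetic
    # the two expressions are identical.
    print(math.isclose(mu_0, 2 * math.pi / 5_000_000, rel_tol=1e-15))  # True
    ```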

    The same holds for Boltzmann's constant, which relates temperature to energy. Contrary to popular belief, Boltzmann's constant is not a natural constant but a conversion factor: one resulting from the introduction of the kelvin as the unit for temperature, a spurious introduction related to the failure at the time to notice that temperature is nothing more than energy per atom. Another way of saying this is that one can get rid of thermometers provided one is capable of measuring the individual energies of large collections of atoms.
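    For illustration, a minimal sketch that uses Boltzmann's constant as the kelvin-to-joule conversion factor it really is (the numbers are approximate and only meant to show the scale):

    ```python
    # Boltzmann's constant as a kelvin-to-joule conversion factor:
    # temperature is, up to this manmade factor, just energy per particle.
    k_B = 1.38e-23    # J/K (approximate)

    T_room = 293.0                       # room temperature, K
    energy_per_particle = k_B * T_room   # the same temperature, now in joules
    print(f"{energy_per_particle:.2e} J")  # ~4.0e-21 J
    ```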

    Extending this critical review of the SI base units, one finds that a total of four out of the seven base units are spurious. In SI, all that is needed to measure reality are the three units 'kilogram', 'meter' and 'second'. As remarked earlier, there are only three physical dimensions.

    Yet, something is not right. In the SI system, the meter is operationally defined in terms of the second, using the speed of light c as conversion factor. But this conversion factor is defined as exactly 299,792,458 meters per second. And we just argued that the presence of an exact dimensionful constant indicates a spurious unit.

    What is happening here? Is the speed of light a fundamental constant of nature, or is it a manmade conversion factor? And as the speed of light is defined with zero uncertainty, does it mean we have a spurious unit of measurement hidden amongst the three fundamental units?

    We need to delve a bit deeper into fundamental physics. Starting from Newtonian mechanics, we will explore the route that brings us via special and general relativity to quantum gravity (the 'theory of everything'). The tool that we will be using is dimensional analysis. So, rather than attempting to represent the full theories, we will zoom in on the 'yardsticks' required to describe the features of the progressively more general theories. The outcomes of this exercise will probably surprise you.


    [Figure: Centuries of physics condensed in a single Venn diagram. The various theories are grouped according to which aspects of reality are ignored. Red circle (h = 0): theories ignoring quantum effects; blue circle (G = 0): theories ignoring gravity; green circle (1/c = 0): theories ignoring relativistic effects. Specific theories are labelled NM: Newtonian mechanics, SR: special relativity, QM: quantum mechanics (non-relativistic), CG: Newton-Cartan gravity, GR: general relativity, QFT: quantum field theory, CQG: Cartan quantum gravity, QG/TOE: quantum gravity, also known as theory of everything.]
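    For the programmatically inclined, the diagram can be read as a cube with axes 1/c, G and h. A minimal Python sketch enumerating its eight corners, with the labels taken from the caption above:

    ```python
    from itertools import product

    # The Venn diagram as a cube: each theory is labelled by which of the
    # constants 1/c, G, h it treats as zero (i.e. which effects it ignores).
    theories = {
        (True,  True,  True ): "NM:  Newtonian mechanics",
        (False, True,  True ): "SR:  special relativity",
        (True,  True,  False): "QM:  quantum mechanics (non-relativistic)",
        (True,  False, True ): "CG:  Newton-Cartan gravity",
        (False, False, True ): "GR:  general relativity",
        (False, True,  False): "QFT: quantum field theory",
        (True,  False, False): "CQG: Cartan quantum gravity",
        (False, False, False): "QG:  quantum gravity / theory of everything",
    }

    for corner in product([True, False], repeat=3):
        inv_c_zero, G_zero, h_zero = corner
        ignored = [name for name, zero in [("relativity", inv_c_zero),
                                           ("gravity", G_zero),
                                           ("quantum effects", h_zero)] if zero]
        print(f"{theories[corner]:45s} ignores: {', '.join(ignored) or 'nothing'}")
    ```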


    From Newton to Einstein 

    At the end of the 19th century, when the reality of atoms and molecules sank in, it became apparent that a physical reality of atoms and molecules behaving according to Newton's laws would need no more than four different yardsticks:
    Newtonian Reality  <---  Energy + Momentum + Distance + Duration
    Distance and duration (or space and time) form 'the stage' on which all of Newtonian dynamics takes place. Momentum changes in accordance with Newton's second law, and energy provides the link between momentum and velocity, thereby allowing the momentum changes to be linked to movements relative to 'the stage'.

    In the above, energy could be replaced by mass, momentum could be replaced by velocity, and so on. This is not relevant. What is relevant is that a Newtonian description seems to require four physical quantities. However, as we have already seen, these four quantities cannot be independent, as the number of independent yardsticks is limited to three. Apparently the above Newtonian perspective gives too rich a description of physical reality.

    This was the situation that Albert Einstein faced in 1905. His view on physical reality dramatically reduced the number of physical units required. In a stroke of genius he reduced the number of physical dimensions, not by one, but by two. He did so by clarifying that distance and duration are two manifestations of one-and-the-same thing.** For lack of a better term I will refer to this more fundamental physical dimension as 'SpaceTimeExtent'. Interestingly, in order to combine distance and duration into one physical dimension, Einstein also had to combine energy and momentum into a single more general concept, which I will refer to as 'SpaceTimeContent':
    (Distance)²  --->  (c) * SpaceTimeExtent
    (Duration)²  --->  (1/c) * SpaceTimeExtent
    (Energy)²  --->  (c) * SpaceTimeContent
    (Momentum)²  --->  (1/c) * SpaceTimeContent
    The result of this transformation is a special relativity description of physical reality that contains only two physical dimensions:
    Special Relativistic Reality  <---  SpaceTimeContent + SpaceTimeExtent
    The quantity 'SpaceTimeExtent' is effectively a space-time area (in SI measured in units of meters times seconds), and 'SpaceTimeContent' is the product of energy and momentum (one might say: an area in energy-momentum space). Both quantities are more fundamental than the four Newtonian concepts of distance, duration, energy and momentum. The quantity SpaceTimeExtent measures the square of the separation between two events in space-time. In a later post I will elaborate on SpaceTimeExtent as a means to understand Special Relativity without being burdened by any math.    
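    Dimensional analysis of this repackaging is easy to automate. A minimal Python sketch, representing each quantity's unit as a (kg, m, s) exponent tuple, checks the four decompositions above:

    ```python
    # Units as (kg, m, s) exponent tuples; multiplying quantities adds exponents.
    def mul(a, b): return tuple(x + y for x, y in zip(a, b))
    def inv(a): return tuple(-x for x in a)

    C       = (0, 1, -1)   # speed of light: m/s
    EXTENT  = (0, 1, 1)    # SpaceTimeExtent: m*s
    CONTENT = (2, 3, -3)   # SpaceTimeContent: kg^2*m^3/s^3

    assert mul(C, EXTENT)       == (0, 2, 0)   # (Distance)^2
    assert mul(inv(C), EXTENT)  == (0, 0, 2)   # (Duration)^2
    assert mul(C, CONTENT)      == (2, 4, -4)  # (Energy)^2
    assert mul(inv(C), CONTENT) == (2, 2, -2)  # (Momentum)^2
    print("all four decompositions are dimensionally consistent")
    ```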

    We haven't yet discussed the constant c used in the above transformation from Newtonian to relativistic reality. As you probably have guessed: it is the same c that we encountered above, the constant known as 'the speed of light', relating 299,792,458 meters to one second.

    As long as we work with the more fundamental concepts SpaceTimeContent and SpaceTimeExtent, we do not need this constant c. Progressive insight (describing reality using these more fundamental concepts) has effectively made the constant disappear. This gives us a clear answer to the question "is the speed of light a fundamental constant or not?" Obviously, if it were a fundamental constant, it would not have been possible to make it disappear from the theory. The speed of light is a manmade conversion factor that doesn't raise its ugly head unless we decompose SpaceTimeExtent into the less fundamental but more familiar variables distance and duration, and only if we use an awkward system of units: one containing incompatible units for time and distance.

    In essence, Einstein demoted the speed of light from its status as a fundamental physical constant to a mere conversion factor equal to 299,792,458 m/s. A conversion factor not in any way more fundamental than the volume-to-length conversion factor 7.48051948052 gal/ft³.

    Note that before Einstein we required three yardsticks (in the metric system the units kg, m and s) as well as three fundamental constants: c, h and c⁴/G. When Einstein completed his special theory of relativity, we were left with only two yardsticks (in the metric system: m·s and kg²·m³/s³) and two fundamental constants: h and c⁴/G.

    But Einstein wasn't finished yet. 


    Einstein strikes again

    Einstein worked ten more years, and in 1915 presented his general theory of relativity: a magnificent generalization of his earlier theory of relativity, one that allowed him to include the effects of gravity. And again, Einstein managed to reduce the number of yardsticks required. This time, he did so by clarifying that SpaceTimeContent and SpaceTimeExtent are two manifestations of one-and-the-same thing.*** This 'thing' is known as the action, or more specifically, the Einstein-Hilbert action:
    SpaceTimeExtent  --->  (G/c⁴) * Action
    SpaceTimeContent  --->  (c⁴/G) * Action
    The result of this transformation is a description of physical reality that contains only one physical dimension:
    General Relativistic Reality  <---  Action
    With this result, Einstein eliminated SpaceTimeExtent from the description of physical reality, thereby demonstrating there is no room and no need for a stage on which the physics takes place. The stage got replaced by something more fundamental: the action, the play itself.

    The constant c⁴/G appearing in the transformation from SpaceTimeExtent and SpaceTimeContent into Action is a force constant associated with gravity. It represents the self-gravitational force generated when space-time is deformed to its maximum. This maximum is reached when a black hole horizon shields further curvature from our observation. This force constant, too, is no more than a conversion factor: one that pops up when working in terms of physical properties less fundamental than the Einstein-Hilbert action.
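    The same exponent bookkeeping confirms Einstein's second reduction (a minimal sketch in the style of the earlier one):

    ```python
    # Same (kg, m, s) exponent bookkeeping as in the special-relativity check.
    def mul(a, b): return tuple(x + y for x, y in zip(a, b))

    EXTENT    = (0, 1, 1)    # SpaceTimeExtent:  m*s
    CONTENT   = (2, 3, -3)   # SpaceTimeContent: kg^2*m^3/s^3
    ACTION    = (1, 2, -1)   # action (and Planck's h): kg*m^2/s
    G_OVER_C4 = (-1, -1, 2)  # G/c^4: the inverse of a force
    C4_OVER_G = (1, 1, -2)   # c^4/G: a force, kg*m/s^2

    assert mul(G_OVER_C4, ACTION) == EXTENT   # SpaceTimeExtent  = (G/c^4) * Action
    assert mul(C4_OVER_G, ACTION) == CONTENT  # SpaceTimeContent = (c^4/G) * Action
    print("both reductions to the action are dimensionally consistent")
    ```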

    All in all, Einstein reduced the number of independent yardsticks from three to two and finally to one. The only remaining yardstick is the one that measures the action, in SI represented by the unit kg·m²/s. The number of dimensionful physical constants got similarly reduced from three to two and finally to one. The only remaining constant is Planck's constant h, measured with the same yardstick as the action.

    It seems physical reality got reduced to one yardstick and one fundamental constant. But we are not done yet, as I haven't told you about an amazing path of research that ran partly in parallel to Einstein's efforts: quantum physics.


    From Einstein to Feynman

    Einstein was instrumental in the advent of quantum physics, but turned his back on this theory when he felt its philosophical consequences were getting too weird. Others continued, and eventually a younger generation including brilliant physicists such as Richard Feynman took over. As a result of these efforts, quantum physics became without doubt the most successful theoretical framework ever constructed in physics. A success that left Einstein sidelined in his later life.

    One profound new insight resulting from quantum theory is that one cannot associate a single number with the concept 'action'. This is directly related to Werner Heisenberg's uncertainty relations, which stipulate that products such as distance times momentum, and duration times energy, can be determined no more precisely than roughly to within Planck's constant h.

    It was Richard Feynman who in 1948 showed how to cast the concept 'action' into a form compatible with the principles of quantum physics. Effectively, to describe a system quantum-mechanically, the action expressed in units of Planck's constant gets associated with a sum over all alternatives, such that constructive and destructive interference takes place between the alternatives summed over:
    Action/h  --->  SumOverAlternatives    
    The precise way in which Feynman sums are constructed is beyond the scope of this post.**** What is relevant, though, is that this Feynman-sum approach yielded highly successful theories known collectively under the name quantum field theory (QFT). QFT describes the whole of physical reality except for gravitational phenomena. In particular, all of modern particle physics is based on QFT. It is no exaggeration to state that QFT is the most strictly tested theory of nature.
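    To give at least a flavor of such a sum (the announced guest post will do much better), here is a minimal toy sketch for a free particle in natural units (m = ħ = 1). The family of two-segment paths and all numbers are my own illustrative choices, not a full path integral:

    ```python
    import numpy as np

    # Toy sum over alternatives for a free particle (natural units, m = hbar = 1).
    # Each path runs from x=0 at t=0 to x=1 at t=2 via a midpoint x_mid at t=1;
    # its action is S = 0.5*v1^2*dt + 0.5*v2^2*dt for the two straight segments.
    dt = 1.0
    x_mids = np.linspace(-20.0, 21.0, 100001)   # the alternatives summed over

    v1 = (x_mids - 0.0) / dt
    v2 = (1.0 - x_mids) / dt
    S = 0.5 * v1**2 * dt + 0.5 * v2**2 * dt     # action, already in units of hbar

    amplitudes = np.exp(1j * S)   # each alternative contributes exp(i*S/hbar)
    total = amplitudes.sum()

    # Far-flung paths have wildly spinning phases and cancel; paths near the
    # straight line (x_mid = 0.5, where S is stationary) add up constructively.
    print("stationary-action midpoint:", x_mids[np.argmin(S)])     # ~0.5
    print("|sum|^2 (unnormalized probability):", abs(total)**2)
    ```

    (Scaling S up, i.e. making the action large in units of ħ, concentrates the constructive interference ever more sharply around the classical path.)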

     
    The Final Frontier

    Congratulations: now that you have digested all the above, it should be obvious how to derive the ultimate physics theory, the long-awaited theory of everything (TOE). We take Feynman's SumOverAlternatives approach and apply it to the action in Einstein's general theory of relativity.
         
    With the Einstein-Hilbert action divided by Planck's constant replaced by a Feynman sum, the result will be a description of physical reality based on one single dimensionless quantity:
    TOE Reality  <---  SumOverAlternatives
    A description of physical reality that requires no physical units at all. A physical reality that reduces the whole universe to mere counting. Why has no one thought of this?

    Well, the fact is that countless physicists have tried exactly this. Feynman was one of them. Each attempt, however, resulted in disappointment. No matter how the Feynman sum for the general relativistic action is constructed, when evaluated it invariably adds up to infinity. Generations of physicists have tried to tame the divergences in the theoretical description, but no one has succeeded in preventing meaningless infinities from popping up.

    Does this mean a TOE does not exist?

    Not at all. The conviction that a TOE can be constructed might seem wishful thinking, but there are reasons to believe that a divergence-free TOE exists, one that requires a profound rethinking of the fundamental concepts of physics. A potential solution could be realized by finding a way to transform the Einstein-Hilbert action into a holographic description, and to apply the Feynman-sum approach not to the Einstein-Hilbert action but to the corresponding holographic action. The hope is that the vastly reduced number of paths in the holographic Feynman sum would tame the divergences.

    Here I cannot go into any depth on these issues. Rather, I want to conclude with some thoughts on what a TOE would achieve in terms of its description of physical reality. As should be clear from the above, a TOE will describe the whole physical universe in terms of one single dimensionless quantity, the Feynman sum. All of physics will in essence be reduced to counting rather than measuring. Dimensionful physical constants will be absent from this theory. The TOE will be a theory written in pure numbers that don't require any yardsticks.

    The character of the TOE will be that of a probabilistic theory. The sum over all alternatives will yield a result that need not be a positive number. The square of the absolute value of this number will be the probability assigned to the predicted outcome. The TOE will likely predict the large-scale features of the universe, such as the cosmic acceleration, within a narrow uncertainty range. But the vast majority of small-scale detailed outcomes, such as coin flips, will be predicted with 50/50 probabilities. The TOE will not be a magic crystal ball.

    Is this disappointing? Not at all. As James Hartle has stressed, this is a hopeful message for all working on the construction of the TOE. It is only because so little of the complexity of the present universe will be predicted by the TOE that we stand a chance of discovering it.

    The real value of a TOE will stretch far beyond its predictions. It is the identification of the true elements of reality, the unification of all of physical reality into one single concept, that will give the TOE its value. Less is more, particularly when it comes to understanding nature.


    Notes
    * Yes, I know: modern physics makes us doubt even this very fact, hence the phrasing "widely agreed to equal three". Given that the two numbers are unrelated, all conclusions from this blogpost hold regardless of the exact number of spatial dimensions.
    ** Einstein actually did not achieve this on his own. Einstein was gifted with a brilliant physical intuition, but was less than brilliant when it came to casting his intuition into the most general and most elegant mathematical form. It was the mathematician Hermann Minkowski who, in 1907, formulated Einstein's ideas in terms of a four-dimensional space-time.
    *** Again, Einstein needed help from a mathematician to formulate his theory into an elegant form. This time it was David Hilbert who showed Einstein the way.
    **** If things go as planned, the next Hammock Physicist post will be a special guest post written by an extraordinary guest. In this guest post you will be presented with a beautiful and simple example of a Feynman sum. Watch this space...

    Comments

    Respect. This is the best summary of the history of physics ever.

    I know who I would like to explain this to me (hint: he writes REALLY well about these matters, and has been involved in them very, very deeply for some time). If you could get him to write the post, that would really be extraordinary.

    I will watch this space with anticipation.

    Anyhow, you know the joke (paraphrased, as I forgot the exact formulation):

    In classical physics, the three-body problem could not be solved (I know, it can, but this is a joke). Move on to quantum physics, and the two-body problem could not be solved anymore. Then came QED, and they could not solve the one-body problem anymore. Finally, in QFT, the vacuum state cannot be solved anymore.

    There are different ways to think about a TOE, but the fact that doing so does not bring much is quite a problem for science. Before trying, then, some should work to make science deserve what it should deserve, and this for the scientists who deserve it (Karl R. Popper helped with this).

    Excellent article.

    I really like the diagram of physical theories! With seven out of the eight parameter combinations covered, I hope some day we will read here on this blog all about the theory covering the 8th combination: the Theory-of-Everything. (Will it be a theory of life, the universe & everything?)

    Amateur Astronomer
    This is the best presentation I've seen of a concept that has been around for a long time. Since Eddington there has been the expectation of finding a dimensionless theory of everything based on counting pure numbers. The obvious problem is how to express a pure number. Our decimal system is not a pure number system, for the same reasons Johannes was able to dispose of the several physical dimensions. Binary digits, octal numbers, hexadecimal strings, and Roman numerals all give different representations that are intended to represent a pure number, but with an arbitrary system.

    Previous researchers spent years attempting to describe everything in terms of the constant π to avoid the arbitrary choice of a number system. Some scientists still expect that type of TOE to be developed. I don't believe that approach will succeed, because the constant π is a purely abstract mathematical construction that doesn't occur in the physical world, just as pure circles, points, lines, and planes do not exist anywhere. The fine structure constant is dimensionless and has a physical existence, suggesting that it might be the basis of a TOE. Now people are arguing about whether it is constant or variable over time.

    Planck units represent the best attempt at a dimensionless TOE that has been made. It is still unfinished. The biggest problem with it is that Planck units do not easily relate to the scale of human thoughts. The unfinished part is how to express QFT in Planck units that encompass general relativity. Feynman's sum over paths does not have a problem of infinities; it always adds up to a 100% probability. It is QFT that has a legacy of infinities, unresolved in the argument over the Dirac sea of energy. To complete the TOE it is necessary to resolve the issue and get a finite energy density in the vacuum of space.

    Until recently there was no way to put a finite limit on the Dirac energy, except with numerology, as Eddington did and many others are trying to do. There were more unknowns than equations to describe them. Numerology always fails, Nicholas of Cusa notwithstanding, for the same reasons that number systems are arbitrary. In the past year a finite limit was placed on the Dirac sea using a partition function to provide the missing equations. If that method prevails, there is no reason that Feynman's sum over paths cannot be brought into QFT with probabilities adding up to 100%. Then the remaining problem will be in finding a pure way to represent a pure number in terms that relate to the human scale of thought.
    Bonny Bonobo alias Brat
    But there is another 'three' that pops up in fundamental physics, and that number is related to the number of yardsticks. It is the number of fundamental physical constants that need yardsticks to express their magnitude. Physicists again use the term 'dimension' and refer to these as dimensionful constants. These are to be contrasted with dimensionless constants that are pure numbers, such as the proton-to-electron mass ratio and the fine structure constant.

    The character of the TOE will be that of a probabilistic theory. The sum over all alternatives will yield a result that need not be a positive number. The square of the absolute value of this number will be the probability assigned to the predicted outcome. The TOE will likely predict the large-scale features of the universe, such as the cosmic acceleration, within a narrow uncertainty range.
    If the fundamental, theoretically dimensionless fine-structure constant is possibly not a constant, as Jerry mentioned above, would that affect your predictions about the character of the TOE? See the article by News staff called 'If The Fine-Structure Constant Varies, Then The Laws Of Physics Throughout The Universe Do Too' at http://www.science20.com/news_articles/if_finestructure_constant_varies_...

    “In physics, the fine-structure constant (usually denoted α, the Greek letter alpha) is a fundamental physical constant, namely the coupling constant characterizing the strength of the electromagnetic interaction. The numerical value of α is the same in all systems of units, because α is a dimensionless quantity”. See http://en.wikipedia.org/wiki/Fine-structure_constant
    Amateur Astronomer
    Helen, the fine structure constant might actually turn out to be the basis for a theory of everything, even if it changes slowly over time. A large or rapid change would be a problem, because it would impact the stability of everything in the physical universe. A slow change over time or in a magnetic gravity field might be a good thing for fine structure and give a forward direction to the finding of a TOE. That's why I put so much importance on the experimental results in your first reference. It moves physical science forward a decade and gives a way to decide which new theories are best. http://www.science20.com/news_articles/if_finestructure_constant_varies_...

    There are other candidates for a TOE, like the ratio of masses for electrons and protons, or the energy ratio between a neutron and a hydrogen atom. Very possibly all of these things are related to the fine structure, such that a TOE would be defined in terms of at least one of them. To develop the relation between fine structure and the proton mass ratio, it is necessary to resolve the stalemate over magnetic fields interacting with gravity. There are plenty of theories competing for acceptance. The experimental data you referred to is the best work so far toward finding a winner.

    Notice in your second reference that the fine structure constant is defined as a ratio between some of the same things Johannes eliminated in his article. Essentially it represents a ratio of the ability of empty space to resist the flow of AC electricity compared to the strength of the action caused by the electric charge on one electron. It lacks a gravitational component to become a complete TOE. Partition theory goes a bit farther and relates gravity to the fine structure. Here fine structure determines the average number N of virtual electric charges, like electrons and positrons, in one zero-point oscillator: N = √(π/2α). Notice the use of the abstract constant π, and the use of an arbitrary number system containing the symbol 2. Because of these things, partition has never been offered as a TOE, only a vehicle for developing a TOE. By the end of the Australian summer there might be additional results from the experimental data in your reference, sufficient to provide the missing link to gravity. General relativity has some theoretical work in that direction, but it didn't predict the experimental results. So we can expect to get a slightly different GR, and some attempts at writing a TOE.
    Aitch
    Essentially it represents a ratio of the ability of empty space to resist the flow of AC electricity compared to the strength of the action caused by the electric charge on one electron. It lacks a gravitational component to become a complete TOE.
    Sounds extraordinarily like a variant of cause and effect, using the AC impedance transform of space and the rotational action of an electron.
    Can I ask why the AC impedance transform is used relative to an electrostatically [DC] charged body?
    Gravity would seem to be effective irrespective of whether the electron is charged/spinning or not, and it seems connected to zero-point energy, I believe, though I have not seen any postulates as to whether there is a simple polar relationship, or something deeper

    Aitch
    Johannes Koelman
    All -- thank you all for your kind words. It is good to know there are still plenty of folks browsing the Internet who are interested in science-heavy thoughts expressed in well over 140 characters. (Honestly, this is my longest post ever, and before posting I doubted whether I should split it into two parts so as not to scare away too many potential readers.)

    And Rob, I hope the next blogpost will not disappoint you!
    Amateur Astronomer
    Aitch, the ratio you referred to, of AC impedance combined with the action of an electric charge, is one way to represent a harmonic oscillator, like the frequency controller in an LC circuit. It is a nonlinear cause and effect, but also a storage device for conservation of energy, and a filter to eliminate noise. Fine structure had a different origin, in the splitting of spectral lines. Only later was the ratio discovered. It is derived, not arbitrarily defined.

    The way to bring gravity into this comment is to realize that all examples of electrically charged particles have rest mass. For that reason I think of gravity potential as being a fundamental property of space, even in flat space or wherever it is possible to propagate electromagnetic fields. When Planck units are applied to the partition theory, the average frequency limit of the LC oscillator is defined by the virtual mass in the polarized vacuum. In that way the energy becomes finite. When energy is finite, there is competition for a share from each type of storage device. A stress is created in the vacuum when something like an electron is put into it. From ordinary machines it is well known that placing an oscillator under stress changes the frequency and amplitude. A response like that happens in the zero point every time a particle passes through space or a wave propagates.

    I would guess the competition in the zero point is between an expression of gravity potential and an expression of electromagnetic potential for a share of the oscillator energy. That is the only way I know of to define the virtual mass and the energy level of the oscillator. The expression can be thought of as a ratio between virtual particles that are electrically charged and those that are not. Energy that is expressed as electric charge is not available for expression as virtual mass. When the energy is equally divided, Reissner-Nordström predicts flat space at the quantum scale of about 30 oscillators. Placing a stress in a region of space causes the partition to be unequal.

    For QFT and sum over paths, the vacuum partition with finite energy represents a set of selection rules that associates paths into pairs or larger groups having mutual exclusion within a group and competition for energy. Infinities are removed by a procedure similar to renormalization, in which matched groups of opposite polarity cancel out. It can also be represented as a selection process on a holographic surface. The recent data on fine structure from quasars suggests that fine structure is also shifted by the stress of gravity and magnetic fields, two types of curvature. If that opinion prevails and is developed into a derived relationship, it might be sufficient to complete a TOE based on fine structure.
    Aitch
    Jerry
    I think I lost track around Stueckelberg.....pre Feynman

    Johannes' explanations give me hope of re-integrating my disconnect, but as an ex-electronics engineer [audio] I find the idea of competition amongst storage devices/accumulators puzzling....unless there is some impedance variability?
    ....but then there would need to be different diagram maps for energy levels, gravity, and potentials, virtual and real, otherwise it's apples and oranges to me, despite my best efforts

    It may just be my inability to absorb so many abstractions as are demanded by fine structure, Planck units, together with QFT.....with its 'strangeness' of new language ;-)

    I would love to see a TOE that isn't just at the end of my foot

    Aitch
    Dr. Koelman:
    It is good to know there are still plenty of folks browsing the Internet who are interested in science-heavy thoughts expressed in well over 140 characters. (Honestly, this is my longest post ever, and before posting I doubted whether I should split it into two parts so as not to scare away too many potential readers.)

    Found your blog through a comment of yours over at Cosmic Variance a while back. I'm just an interested layman and have been reading physics/cosmology/astronomy blogs for years now. Physical Reality: Less Is More may just be my new favorite of all time. You have a knack for breaking things down in a way that makes it easy to comprehend. (Ethan Siegelesque, if you will)

    You should write a book. Seriously. I'll buy two if that helps. *ka-ching*

    In the meantime, I look forward to the guest post and your, " In a later post I will elaborate on SpaceTimeExtent as a means to understand Special Relativity without being burdened by any math. " post.

    Well done. Thanks again.

    Johannes - As I began to read your column, I immediately realized that it is a treasure. I paid the ultimate compliment of printing it out so that I could absorb its ideas at my leisure. I agree with Rob R. Write a book. It should be easy - just string together selected columns you have written for Science 2.0.

    Performing a Feynman sum is devilishly difficult for me for everything except the very simplest cases. There is one example that I think one could actually do with young kids on a playground. You can use one of those measuring wheels that are rolled along pavement. What you do is mount a large arrow, to indicate phase, on one of the spokes. You are not going to keep track of total distance, so any wheel that can be rolled along in a smooth way from an initial point to a final point will do. You start the wheel with its phase arrow at a set position (say six o'clock), pointing straight down onto the starting point. The kids then roll the wheels along a variety of paths slightly shifted from each other and note where the phase arrow ends up at the finish point. The final positions of the arrows are all vector-summed. To keep this short I'll skip the fine-tuning. It's an experiment that I have long wanted to perform. It can be used to illustrate principles in optics. One could have a tall center mound in the way and discover that a path that minimizes the action curves around the mound (like starlight bending around the sun).
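    For anyone who wants to try this indoors first: a minimal Python sketch of the wheel experiment. The wheel circumference and the family of kinked paths are my own illustrative choices:

    ```python
    import numpy as np

    # Each path's arrow ends up rotated by (path length)/(wheel circumference)
    # full turns; the final arrows of all paths are then vector-summed.
    circumference = 0.1   # wheel circumference, in the same units as the paths

    def path_length(x_mid):
        # Two straight segments: (0, 0) -> (x_mid, 0.5) -> (1, 1)
        return np.hypot(x_mid, 0.5) + np.hypot(1.0 - x_mid, 0.5)

    x_mids = np.linspace(-5.0, 6.0, 200001)     # a family of kinked trial paths
    lengths = path_length(x_mids)
    arrows = np.exp(2j * np.pi * lengths / circumference)  # final arrow directions

    # Arrows from near-straight paths line up; the rest spin around and cancel.
    print("shortest path via x_mid =", x_mids[np.argmin(lengths)])  # ~0.5
    print("|vector sum| =", abs(arrows.sum()))
    ```

    A smaller circumference plays the role of a smaller ħ: the interference then singles out the straight path ever more sharply.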

    Fields can be defined by the change in vectors and other geometric objects parallel transported around closed loops.

    Often, what you want to keep track of is the change of phase. Phase is like the orientation of a unit vector in the complex plane. A minus sign rotates something 180 degrees. An imaginary factor i is a 90-degree rotation (counterclockwise). Two 90-degree rotations are the same as 180, therefore i·i = -1.

    There is some debate as to the necessity of using anything more sophisticated than complex numbers. I do not think that Feynman ever needed to go to quaternions or octonions when doing his Feynman sums. The only thing that mattered was how the different phases of the amplitudes lined up, to see whether there would be constructive or destructive interference (amongst nearby alternatives).

    In cases (like equilibrium or nearly adiabatic processes) where a Principle of Least Action can be expected to apply, there is a subgroup of objects (or even just one) for which a calculus-like first-order variation from these objects to the neighboring ones causes only a second-order change in the computed total action S.

    These objects and their neighboring ones would have their phases almost in alignment and would therefore constructively interfere. The vector sum of their amplitudes (the phase arrows at the finish line) is large as is its norm-squared probability.

    The probability is maximized for paths (alternatives, entire spacetimes even) that satisfy δS = 0. This is how the Principle of Least Action is derived, instead of simply being stated as a law of nature (as was done, for example, by Hero of Alexandria in the ancient days).

    Again, I find myself at the end of the line of comments. I am trying to shorten the time lag between Johannes unveiling his latest creation and me reading it with time to think.
    I lost a little sleep over my note above, where I indicated that the measuring wheel's counter need not be relevant. Actually, it is useful to know the total action for each alternative. This way one can confirm that the paths for which a first-order deviation from the path (history, alternative) causes only a second-order change in the total action indeed satisfy a Principle of Least Action.

    The Feynman sum treats each and every possible route with equal weighting. It is completely impartial and thus imposes no extra rules as to what is permissible. The vast number of alternatives that are away from the minimizing paths cancel each other out in the sum. How exactly to lop off infinities to keep the sums finite is a problem that Feynman felt was never solved to any sort of intellectual satisfaction. The new hope is to map the sum over alternatives onto a holographic surface that has fewer degrees of freedom and finite limits on the number of bits of information per unit area. Solve the path integral in this arena and then map the answer back into the familiar spaces between the holographic screens. In this way one can perhaps prove that we spend most of our time in the most probable of all worlds, and outline that world's general features.

    Let us see what the guest artist can pull from his (or her) magic bag of tricks.

    Johannes Koelman
    "Performing a Feyman sum is devilishly difficult for me for everything except the very simplest cases." Not only for you, Anon. Feynman sums are conceptually very attractive, yet computationally mostly a burden. I hope the next post will bring you an example even simpler than 'the very simplest cases' you mentioned. A Mickey-Mouse model for which you can sum the paths while relaxing in a hammock. I like your 'measuring wheel analogy'. It is a good model for Feynman sums of free particles. .
    Incredible post, thanks. Please write a (popular) book someday.

    Hi Johannes,

    I don't know if it is a coincidence, but you provided an excellent answer to my previous question in "Why Physicists Are Smug". Thanks in any case; it's a joy reading you. There are, however, other fundamental constants which you have not touched, and which (if I remember correctly) are not derived, such as those in the Standard Model. A good TOE will have to explain those too!

    Johannes Koelman
    Happy to hear your questions do get answered (in the end)! Yes, there are other constants, but these are dimensionless (or can be rendered dimensionless). In comparison with the dimensionful c, h and G, these are more fundamental. A TOE will be dramatically more convincing the more of these it explains.
    Did the guest post ever materialize?