Holographic Dark Universe
    By Johannes Koelman | May 25th 2009 01:37 AM

    When Albert Einstein constructed his general theory of relativity he decided to resort to some reverse engineering and introduced a 'pressure' term in his equations. The value of this pressure was chosen such that it kept the general relativistic description of the universe stable against the gravitational attraction of the matter filling the universe.

    Einstein never really liked this fudge factor, but it was the only way to get the equations of general relativity to describe a universe that is static in size.

    More than ten years later, Edwin Hubble's observations showed that the universe is in fact not static, but expanding. With this, the need for the pressure term disappeared. Einstein must have felt floored: had he only stuck to the bare equations without the fudge factor, he could have predicted the universe to be non-static. Ever since, Einstein referred to the introduction of the pressure term as 'his biggest blunder'.

    Had Einstein lived to the very end of the 20th century, he would certainly have revised this verdict. Sure, our universe is expanding, but since the end of the '90s we have known that this expansion is accelerating. Today the universe is expanding faster than yesterday, and tomorrow it will be expanding faster still. Without Einstein's fudge factor a decelerating expansion is to be expected, and the pressure term is needed to switch from a description yielding a decelerating universe to one that yields an accelerating universe.
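    The switch from deceleration to acceleration can be illustrated with a standard textbook relation (not from the article): for a flat universe filled with matter and a cosmological constant, the deceleration parameter is q0 = Ωm/2 − ΩΛ, and a negative q0 means accelerating expansion. A minimal sketch, assuming the commonly quoted values Ωm ≈ 0.3, ΩΛ ≈ 0.7:

```python
# Sign of the cosmic deceleration parameter q0 = Om/2 - OL for a flat
# FLRW universe (standard result, not from the article).
# Negative q0 means the expansion accelerates.

def deceleration_parameter(omega_matter, omega_lambda):
    """q0 for a flat universe with pressureless matter and Lambda."""
    return omega_matter / 2.0 - omega_lambda

# Without Einstein's 'pressure' term (Lambda = 0): deceleration.
q_no_lambda = deceleration_parameter(1.0, 0.0)   # q0 = +0.5

# With the observed dark-energy fraction (~0.7): acceleration.
q_observed = deceleration_parameter(0.3, 0.7)    # q0 = -0.55

print(q_no_lambda, q_observed)
```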

    What is causing this pressure that is pushing space apart at an ever accelerating rate? Cosmologists refer to 'dark energy' permeating space as what propels this cosmic acceleration. In order to explain the observed accelerated expansion of the universe, this dark energy must comprise the vast majority of the total energy content of the universe. Recent observations yield a 'dark energy density' in the universe corresponding roughly to one Planck energy (or equivalently: one Planck mass of about 0.00002 gram) per 1000 km cubed. The fact that this tiny density constitutes the dominant component of our universe just demonstrates the vast emptiness of space.
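    As a sanity check on the quoted figure, the observed dark energy density can be compared to one Planck mass per (1000 km)³. The sketch below is my own back-of-envelope estimate, assuming H0 ≈ 70 km/s/Mpc and a dark-energy fraction of 0.7; the two densities agree to within a factor of a few, consistent with "roughly":

```python
import math

# Back-of-envelope check (mine, not from the article): is the observed
# dark energy density roughly one Planck mass per (1000 km)^3?
G        = 6.674e-11   # m^3 kg^-1 s^-2
H0       = 2.27e-18    # assumed Hubble constant ~70 km/s/Mpc, in 1/s
m_planck = 2.18e-8     # Planck mass, kg (~0.00002 gram, as in the article)

rho_crit = 3 * H0**2 / (8 * math.pi * G)   # critical density, kg/m^3
rho_de   = 0.7 * rho_crit                  # dark energy ~70% of the total

rho_article = m_planck / (1.0e6)**3        # one Planck mass per (1000 km)^3

print(rho_de, rho_article, rho_de / rho_article)   # ratio ~0.3
```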

    But what is this 'dark energy'? No one knows. The most likely explanation is that dark energy is quantum mechanical in origin. In fact, most physicists would probably agree that dark energy results from quantum fluctuations, if only this led to predictions of the right magnitude for the dark energy effect. However, the standard quantum field-theoretical (QFT) approach leads to an overestimate of the dark energy density. How much of an overestimate? Well, any statement one can make on this tends to be an understatement. In fact, according to standard quantum field theory, vacuum fluctuations would lead to an energy density of one Planck energy quantum per Planck length cubed. That is a Planck energy per cube with sides of 0.000 000 000 000 000 000 000 000 000 000 000 016 m. A volume a wee bit different from 1000 km cubed.
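    Just how large this mismatch is can be made concrete with a two-line estimate. The sketch below is my own, using rounded CODATA values; it reproduces the famous 'worst prediction in physics' gap of roughly 122 orders of magnitude:

```python
import math

# Rough estimate (mine, not the article's) of the size of the
# 'cosmological constant problem': QFT's one-Planck-quantum-per-Planck-
# volume vacuum energy versus the observed dark energy density.
l_planck = 1.616e-35   # Planck length, m
m_planck = 2.18e-8     # Planck mass, kg

rho_qft = m_planck / l_planck**3    # one Planck mass per Planck volume
rho_obs = m_planck / (1.0e6)**3     # ~one Planck mass per (1000 km)^3

mismatch = math.log10(rho_qft / rho_obs)   # orders of magnitude
print(mismatch)   # roughly 122
```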

    This mismatch between the theoretical and experimental values for the dark energy density is euphemistically referred to in the scientific literature as 'the problem of the cosmological constant'. Some seek the boundaries of euphemism and tongue-in-cheek refer to the mismatch as the 'fine tuning problem'. Others declare it more appropriately to be 'the biggest embarrassment in theoretical physics'.

    Where have we gone wrong? Surely one would expect an error of such gigantic proportions to be easy to track down. Yet, more than a decade after the discovery of the accelerating cosmic expansion, experts are still puzzled. Far-fetched solutions such as exotic forms of energy, tachyons, dilatons, and quantum quintessence have been proposed. None of these proposals have acquired many followers.

    Now, I am not a cosmologist and certainly not an expert in the field, so I am sure no one will blame me for describing here yet another potential blind alley on the long and winding road of scientific trial and error towards the distant goal of understanding our dark universe. So let us take the liberty to follow Einstein and apply again some reverse engineering to the problem.

    A simple dimensional analysis hints at a solution. There are two key length scales entering the problem: the Planck length ℓ and the diameter of the universe L. The contrast between the two is vast: 61 orders of magnitude. Wouldn't it be a huge surprise if these two extreme length scales could be combined into a volume of the right size to describe the dark energy density? Well - surprise, surprise - this is easy to achieve. The experimental value of the dark energy density happens to coincide with one Planck quantum per volume of size ℓL². Yet, as we saw above, standard quantum field theory predicts a zero-point energy density of one Planck quantum per ℓ³. Can we change two of the ℓ's in this equation into L's?
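    The coincidence is easy to verify numerically. The sketch below is my own order-of-magnitude check, assuming a diameter L ≈ 8.8×10²⁶ m for the observable universe; ℓL² indeed lands within an order of magnitude or two of the (1000 km)³ quoted above:

```python
# Order-of-magnitude check (mine, with an assumed value for L) that the
# combined volume l * L^2 lands near the (1000 km)^3 quoted for the
# observed dark energy density.
l_planck = 1.616e-35   # Planck length, m
L        = 8.8e26      # assumed diameter of the observable universe, m

v_holo    = l_planck * L**2   # the combined volume l * L^2
v_article = (1.0e6)**3        # (1000 km)^3 in m^3

print(v_holo / v_article)     # within an order of magnitude or two of 1
```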

    Yes we can. Key is to realize that the ℓ³ volume enters into the theoretical description because standard QFT assumes one degree of freedom per Planck cube. So according to QFT our universe has a total of (L/ℓ)³ degrees of freedom. This however ignores the holographic nature of our universe that was postulated by Gerard 't Hooft in 1993. The holographic principle states that standard QFT vastly overestimates the number of degrees of freedom available. More precisely, the holographic principle forbids a system of linear size L to have more than (L/ℓ)² degrees of freedom. So, this in itself already changes one ℓ in the equation for the dark energy density into an L. But there is more. QFT associates a zero-point energy of one Planck unit with each degree of freedom. In a holographic description this is unlikely to be correct. The degrees of freedom in the holographic description are non-local, and as a result the wavelengths corresponding to the zero-point motion are linked to the macroscopic length L, and not to the microscopic length ℓ. This effect (embodied in the so-called 'UV/IR connection') gives us another swap between ℓ and L in the equation for the dark energy density, so that with all holographic effects incorporated we arrive at ℓ/L Planck energies per volume of size ℓ²L, or equivalently, one Planck energy quantum per volume of size ℓL². The dark energy density thus derived happens to be the highest density that can be achieved without risking a gigantic gravitational collapse of the whole universe.
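    The counting argument can be summarized in Planck units. The following sketch is my paraphrase of the reasoning, with an assumed value N ≈ 5.4×10⁶¹ for the diameter of the universe in Planck lengths (the '61 orders of magnitude' mentioned earlier):

```python
# Sketch of the counting argument in Planck units (my paraphrase).
# N = L/l is the size of the universe in Planck lengths; energies are in
# Planck energies, volumes in Planck volumes.
N = 5.4e61   # assumed diameter of the universe in Planck lengths

# Standard QFT: N^3 degrees of freedom, one Planck energy each.
rho_qft = (N**3 * 1.0) / N**3           # = 1 Planck energy per Planck volume

# Holography: only N^2 degrees of freedom ('t Hooft's bound), and the
# UV/IR connection gives each a zero-point energy of only 1/N Planck units.
rho_holo = (N**2 * (1.0 / N)) / N**3    # = 1/N^2, i.e. one Planck quantum
                                        #   per volume l * L^2

print(rho_qft, rho_holo)
```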

    Is this all the correct way to look at the expansion of our universe? I don't know. What I do know, is that if the above is in essence correct, holographic considerations will be an integral element of the still elusive theory of quantum gravity. It is also clear that the strict holographic cut-offs to the number of degrees of freedom and the allowed energies per degree of freedom will be of immense help to regularize this theory of quantum gravity. History tells us that experimentally demonstrated discrepancies in our understanding of the fundamental laws of physics never last for more than a few decades. So I dare to make the prediction that early this century we will witness a revolution in our thinking about the universe in the form of a fully consistent theory of quantum gravity. These are exciting times!


    Rick Ryals
    Einstein never really liked this fudge factor, but it was the only way to get the equations of general relativity to describe a universe that is static in size.

    And he didn't know that matter generation in this model causes the universe to expand at an accelerating rate, or he never would have abandoned it:

    This is a very interesting comment. I conclude from geological facts and planetology data that telluric planets are growing in size and mass (including ours). It follows that matter must condense from some unknown energy source inside planets (there is no good evidence yet for stars). It suggests that the total mass of the universe is possibly increasing.
    Evidently, a new cosmological theory will have to include these parameters, and you might be onto something.


    Rick Ryals
    Well, it doesn't quite work that way in Einstein's finite model, but good luck to you, regardless.  The total mass of the universe doesn't change, rather the mass-energy for matter generation is taken from the finite vacuum structure.  There are several very simple ways to comprehend this, but this might be the easiest:

    What happens to the vacuum when you rip a huge hole in the mass-energy that comprises it?

    It leaves a **real** hole in the vacuum and that necessarily increases negative pressure, so the vacuum expands, while the positive gravitational effect of the newly created particle serves to keep the system stable by counterbalancing this effect.

    The mass energy that comprises the vacuum becomes more rarefied as you do this, so it requires a greater volume of the vacuum to attain the matter density each time that you make a particle pair. So the vacuum expands at an accelerating rate, but without the runaway effect that caused Dr. Einstein to abandon this model in the first place.

    As I said here:
    The most obvious way to create new matter in Einstein's model, (the most compatible with the spirit of general relativity), also holds it flat and stable, so any other conclusions that have been made since Einstein abandoned his notion without this knowledge, are therefore subject to suspect review!

    And that's per the scientific method, fools, so you owe Einstein another little look-see BEFORE you go any further... duh... and that's such a no-brainer that any elementary-school kid would see the obvious truth in that statement.

    Theoretical righteousness is a very dangerous thing.

    Too bad the PhD turn-coat won't comment... ;)
    "What happens to the vacuum when you rip a huge hole in the mass-energy that comprises it? It leaves a **real** hole in the vacuum and that necessarily increases negative pressure, so the vacuum expands, while the positive gravitational effect of the newly created particle serves to keep the system stable by counterbalancing this effect."

    Wow, sounds interesting. Does your theory also predict the holes to be holograms and the negative energy to be a Planck per 1000 km cubed? From the blog I figure that in a hologram universe with diameter of N Planck lengths the negative energy is of order N Plancks. In a *normal* universe this would be N*N*N Plancks, a way too large value. Mind-blowing stuff...

    Sauo-peng Kowh
    I come from China. I believe that a comprehensive understanding of the holographic principle will make for new breakthroughs in physics!
    Johannes Koelman
    Hi all, thanks for the comments.
    The article focuses on the link between holography and dark energy, and is very brief on the subject of holography itself. For those interested in a popular introduction to holography, I can recommend the SciAm article by Juan Maldacena.
    Sauo-peng Kowh
    Hi! I am now constructing a comprehensive understanding of the holographic principle from a 1+1-dimensional theory in which there is no concept of time and space.

    We expect that the universe should be able to emerge from this 1+1-dimensional theory. Are you interested in it?

     Sauo-peng Kowh
    I love to hear different viewpoints about Albert's ideas about the ether. I think that the term "vacuum of space" is, or should be, considered a relative term when speaking of the ether. Yes, the ether is mostly a hard vacuum when contemplating condensed-matter items like planets, meteors, atmospheric gases and what is thought of as things that reflect light, which we call normal matter. The pressure of the material space consists of is a constant whether it is deep space or your back yard. When you hear a thunder clap from a lightning bolt you have just heard space collapse inward on itself, with nearly the same pressure that exists between galaxies. I understand how dumb this must sound, but if a partial vacuum were responsible for the sound of a thunder clap, I doubt it would make any noise other than a sucking sound. If there were no air pressure in the lightning's column, it would not make the thunderous claps we hear.

    This message is about vacuum energy and a way of looking at space curvature with regard to the cosmological constant in general relativity.

    Optics and quantum mechanics require space to be filled with oscillators that have energy of ½hf, to propagate the flow of waves and particles. There has been a long standing debate about how many oscillators there are and what frequencies they have.


    The debate centers around general relativity and how energy curves space. Empty vacuum space has measurable properties such as light speed c, gravity G, the electric and magnetic constants ε and μ, and other constants like Planck's constant h, and Boltzmann's constant k. Those properties can be compared to each other in ways that predict a very strong vacuum energy in space.

    A cosmological constant must be close to zero for agreement with astronomical observations. Space would be tightly curved if the cosmological constant was large.

    The only known way to have large energy that does not curve space was published by Peter Bergmann. He published a famous book “Introduction to the Theory of Relativity” reprinted by Dover in 1976. On page 206 in equation set 13.34 a solution to Einstein’s field equations predicts that electromagnetic energy counteracts gravity in curvature of space (g_rs) and time (g_44).

    When the same principle is applied to vacuum energy, there can be enormous energy in space with a zero cosmological constant and no curvature. All that is necessary is that the zero point energy must be equally partitioned between electromagnetic energy and gravitational energy, giving ¼hf energy to each.

    A semi classical deterministic model can be constructed of virtual particle pairs each of virtual mass m, that may be uncharged or electrically charged with virtual ± q. It describes an average action per oscillator over a group of 30 or more oscillators to avoid the quantum wave functions and probabilities that apply to single oscillators.

    The gravitational energy oscillates between two states. There is gravitational potential energy when the virtual pair is separated by one wave length, and dynamic energy when the pair recombines at a center point. All measurements are made in flat space.

    (1) m^2 G / λ = 2mc^2 = ¼hf, half of the zero point energy,

    (2) λf = c

    (3) m^2 = ¼hc / G

    (4) h^2 f^2 = 64 m^2 c^4 = 16 hc^5 / G

    (5) f^2 = 16 c^5 / hG, the frequency squared,

    (6) λ^2 = (¼)^2 hG / c^3, the wave length squared.
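    Relations (1)-(6) are at least internally consistent, which is easy to verify numerically. The sketch below is my own cross-check using SI constants; it says nothing about the physical validity of the model, only that the algebra closes:

```python
import math

# Numerical cross-check of the oscillator relations (1)-(6) above,
# using SI constants (consistency check only, not an endorsement).
h = 6.62607e-34   # Planck constant, J s
c = 2.99792e8     # light speed, m/s
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2

m   = math.sqrt(0.25 * h * c / G)      # eq. (3): virtual mass
f   = math.sqrt(16 * c**5 / (h * G))   # eq. (5): frequency
lam = c / f                            # eq. (2): wavelength

# eq. (1): gravitational energy = dynamic energy = 1/4 hf
lhs = m**2 * G / lam
mid = 2 * m * c**2
rhs = 0.25 * h * f
print(lhs, mid, rhs)   # all three agree
```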

    The other half of the zero point energy is represented by an LC electronic oscillator that exchanges energy between virtual static electricity and virtual magnetic fields, using the same frequency and wave lengths as the gravitational energy.

    (7) q^2 / m^2 = 4π ε G where m is given in equation (3); ε is the electrostatic constant of the vacuum.

    (8) q^2 = π ε h c the absolute value of charge squared.

    The capacitance C is defined by

    (9) ½ q^2 / C = ¼hf maximum capacitor energy

    (10) C = 2 q^2 / hf = 2π ε λ = (2π)^2 ε λ / 2π

    The inductance L is given by

    (11) L = (1/4π^2)μλ / 2π = μλ / 8π^3 where μ is the magnetic constant of the vacuum.

    Reactive impedance is given by

    (12) Sqrt(L/C) = ( Sqrt(μ/ε) )/ 4π^2

    Energy density in vacuum space is

    (13) ½hf / λ^3 = ½hf^2 / λ^2 c

    (14) ½hf^2 / λ^2 c = ½ (16)^2 c^7 / hG^2, a very large energy field.

    This model predicts each oscillator in flat space to have a fairly large virtual mass and on average 14.67 virtual electron electric charges. The average impedance per oscillator is about 9.5 ohms.
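    The two numbers quoted here follow directly from equations (8) and (12). My own quick check with CODATA values (again only a consistency check of the formulas, not of the model) reproduces both:

```python
import math

# Check of the '14.67 virtual electron charges' and '~9.5 ohms'
# quoted above, from eqs. (8) and (12).
eps0 = 8.8541878e-12   # electric constant, F/m
mu0  = 1.2566371e-6    # magnetic constant, H/m
h    = 6.62607e-34     # Planck constant, J s
c    = 2.99792e8       # light speed, m/s
e    = 1.6021766e-19   # elementary charge, C

q = math.sqrt(math.pi * eps0 * h * c)          # eq. (8): virtual charge
charges = q / e                                # in units of electron charges

z = math.sqrt(mu0 / eps0) / (4 * math.pi**2)   # eq. (12): reactive impedance

print(round(charges, 2), round(z, 2))          # 14.67 charges, 9.54 ohms
```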

    Vacuum energy is very large but not infinite, sufficient to hold the properties of space nearly constant everywhere unless over powered by the gravity of a black hole or a very powerful observable energy field like electromagnetic energy.

    Calculations of this type have been done since the time of Dirac, with the difference in this message for equal partition between electromagnetic and gravitational energy to give a flat space and zero cosmological constant in a strong energy field.

    This calculation forms the basis of a model for variable light speed and variable gravity competing for vacuum energy, in ways that are too small to be detected on earth or near the sun with present technology.

    The prediction of that model is that light speed goes to zero at the event horizon of a black hole, where gravity G is predicted to increase to twice the normal value it has in flat space.

    In this model the observable gravitational energy of a curved space is taken from the virtual electromagnetic energy of the vacuum, such that the total energy remains constant in the regions where there is no mass. The opposite occurs in a negatively curved space where galaxies are accelerating apart and the prediction is that an observable electromagnetic excess is taken from the virtual gravitational energy of the vacuum, again giving a constant total energy in the absence of mass and electric charge.

    When taken to the extreme at an event horizon, the model has no electromagnetic energy or virtual electric charges, and the oscillation frequency goes to zero, as does Planck’s constant.

    In this way there are a number of competing models that could be created, and all of them could be represented in the probabilistic form of wave mechanics.

    So the cosmological embarrassment is a small one and that is only because of the long delay in getting a theory. The technology that is described here was available to a few in 1920, and to everyone by 1935.

    These predictions may be tested by astronomical methods.

    Some symbols like exponents, fractions, and Greek letters like lambda in the copy and paste may not convert properly from Times New Roman to blog font. So a DOC version of the message is available on request.

    This is a reprint of my previous blog on this site that has been moved to another site inhabited by math majors.

    All comments are welcome.

    Vacuum Polarization is the term used for a lot of physical processes where particle pairs are created from the vacuum. Sometimes it happens spontaneously, and other times it can result from deliberate actions.

    Electric fields can polarize the vacuum, but that is not very efficient because the charge tends to leak and short circuit the device. For the same reason we do not use many electrostatic motors on machines.

    Magnetic polarization of the vacuum is a more interesting topic, where the power levels can be very high. Usually electromagnets are applied for research work, and the energy is said to be conserved with the particle energy taken from the magnetic field. For a sufficiently strong field the particle creation becomes very fast.

    A different situation occurs when permanent magnets are applied to the vacuum polarization. For a sufficiently strong field, the particles still occur in pairs and carry away energy. Then the discussion is about whether the energy came from the vacuum, or from the permanent magnets. In this discussion it is possible for the permanent magnets to be weakened by the particles, but also possible for the vacuum to lend the energy in exchange for other things.

    In the uncertainty principle the vacuum lends a certain amount of energy for a certain length of time and then takes it back. If a system is biased by non-random processes, then the vacuum might not be able to recover the energy in the usual way, but will find some other way to get the energy back. Then the question is about how the vacuum will respond to the overdue account.

    Since about 1918 research has been done on this topic, although it was not widely known until 1935 when the scientific words were chosen to describe it.

    The key point is that the vacuum sometimes acts spontaneously to create particle pairs. Certainly in those cases the energy of the particles is coming from the vacuum. In the equal partition model there is a chance that a zero point oscillator can get enough energy from its surroundings to separate a pair of virtual masses by more than one wavelength. Then the pair become real particles, and presumably the zero point oscillator returns to a slightly lower energy level.

    In the permanent magnet case there are reasons to expect that part of the energy is coming from the vacuum, as in the spontaneous case. Half of the particles are antimatter and they make gamma rays when they hit something solid. In the early work the process was done in glassware without shielding.

    What remains is the unanswered question about how the vacuum responds to the over due loan. The vacuum energy enforces all of the physical laws, and when there is less energy, there is less enforcement.

    EQUAL PARTITION OF VACUUM ENERGY unifies cosmology with quantum mechanics, putting an end to 'the biggest embarrassment in theoretical physics'.

    The model was created by combining three concepts that have been around for a long time.

    Equal partition of energy supports the foundations of kinetic theory, heat capacity, and the gas laws.

    Competition of gravity and electromagnetic fields for control of space curvature dates from the early days of general relativity, and is routinely used in the stress energy tensor for observable fields.

    The vacuum energy is well established in virtual particles of zero point quantum mechanics, as the foundation upon which all of the physical laws depend.

    Combining the three concepts together as was shown above in a previous message, Einstein's cosmological constant becomes a variable tensor that is calculated by subtracting the electromagnetic vacuum energy from the gravitational vacuum energy.

    When the result is zero, it describes flat space with no field gradients.

    Originally the cosmological constant was thought to be a small positive number. Later it was thought to be zero. Now with accelerating galaxies, the cosmological constant has a small negative value.

    All of these changes in science have caused a great deal of excitement in the rewriting of theories and design of experiments.

    For partition of vacuum energy, all of the changes in the cosmological constant result in only a tiny percentage change in the excess of gravitational or electromagnetic energy.

    Present thinking is that the vacuum energy is almost equally partitioned with a slight excess of electromagnetic energy bending space backward and pushing the galaxies apart.

    Every advancement in science creates some sort of problem for previous work. Partition of the vacuum is predicting that in the big bang, the photon energy was enough to completely nullify gravity. Then the conditions in the first few seconds were considerably different than was previously thought.

    VACUUM ENERGY CAN BE MEASURED indirectly from the observable properties of space. Vacuum space has inertial and electromagnetic properties like light speed c, gravity G, the electric and magnetic constants ε and μ, and other constants like Planck's constant h, and Boltzmann's constant k. Those properties can be compared to each other in ways that predict a very strong vacuum energy in space.

    Equal partition of energy explains how the energy can be very great and not cause much curvature in space. It doesn't explain why we can't see or feel vacuum energy or measure it as a radiation field or particle collisions.

    One of the hardest questions to answer is what geometrical shape the ZPE has and how it oscillates in ways that are not directly observable.

    Science accepts the claims that neutrinos travel with energy that is hard to detect, so the ZPE is not entirely unreasonable. It’s just new.

    Electromagnetic radiation can be found in dipoles, quadrupoles, and higher harmonics. Gravity is said to have quadrupoles and higher harmonics, but not dipoles.

    A similar type of thinking is to give ZPE higher harmonics but not quadrupoles or dipoles. Then it would not necessarily be observable by direct measurements.

    The simplest choice is eight poles per ZPE resonator, with 2, 3, or more resonators linked in clusters by induction. The clustering is helpful in wave propagation, but is most useful to meet the rather strict requirements of symmetry about any plane or axis through the center in flat space.

    A lot of estimates have been made about how to maximize the energy density in coupled resonators, with 26 resonators per cluster being the favorite choice, but it's not proven.

    The science has reached a point at which it is able to produce theories of how the ZPE is structured, and how it changes under stress of gravity and magnetic fields. Then it can be tested with new ways to make measurements.

    There is another possible way to have large energy field and no curvature without electromagnetic competition.

    Johannes gave the triangle example for the Dirac equation in a previous article.

    In the Planck mass there are virtual particle pairs created where half of the virtual mass is antimatter. In the Dirac equation the negative root of energy was interpreted to be antimatter, and that is how Dirac predicted antiparticles.

    If half of the mass energy is positive (matter) and the other half is negative (antimatter) then the net mass energy is zero and space is flat without reference to electromagnetic energy.

    This theory is not in favor at this moment because galaxies are accelerating and there is no evidence for an excess of antimatter between galaxies.

    It is possible that the concept given in this message will have to be added to the Equal Partition example given above in another message.

    A Note About Partition Functions For The Less Advanced Readers,

    Partition functions are used in a great many physical systems to describe how the energy is distributed.

    Every type of system that has more than one possible state has a partition function.

    Then for zero point vacuum energy there is likely to be a partition function that has one profile in flat space and different profiles when space is curved forward or backward.

    Equal partition was the first partition function that was discovered to have physical significance. It essentially requires a system to be in thermal equilibrium, with an equal amount of energy in each of the possible states.

    Quantum mechanics has many other types of partition functions, for deviations of single quanta from thermal equilibrium.

    So if flat space is in thermal equilibrium, an equal partition is expected for an average of 30 or more measurements.

    To be strictly correct for a smaller number of measurements, one of the other partition functions would be used.

    Notice that equal partition only applies to the average with flat space in a quiet neighborhood. In curved space or when there is a lot of traffic for particles and waves, the partition will be unequal.
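    The idea of equal partition as a high-temperature limit can be illustrated with the simplest possible partition function. The toy example below is my own, not part of the commenter's model: a two-level system with energy gap dE approaches equal occupation of its states as kT grows, which is the 'equal partition' limit discussed above.

```python
import math

# Toy illustration (mine) of a partition function: a two-state system
# with energy gap dE at temperature T (energies in units of kT).
def occupations(dE, kT):
    """Boltzmann occupation probabilities of a two-level system."""
    z = 1.0 + math.exp(-dE / kT)   # partition function Z
    return 1.0 / z, math.exp(-dE / kT) / z

p_cold = occupations(dE=1.0, kT=0.1)    # low T: ground state dominates
p_hot  = occupations(dE=1.0, kT=100.0)  # high T: nearly equal partition
print(p_cold, p_hot)
```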

    New news! My research has found that electrons and other subatomic particles have their own alphabet and
    they know how to translate all languages, and are also capable of interpreting positive and negative intentions.
    To know more, just call me @ (818) 912-3888. If you're the media, get set for censorship. I don't fully know how far
    this is going to go; the implications may also explain the reasons for war, in abstracts.

    I have a blog arguing against the expansion of the universe, with arguments that show this is impossible.

    I study the evidence that people consider supports the Big Bang.

    I have arguments and hypotheses at: