    How To Count A Black Hole
    By Johannes Koelman | August 2nd 2010 09:00 PM
    Imagine you are tasked to build the ultimate computer memory. You are provided with an unlimited budget and all the resources you need. 

    How big a memory capacity could you build? 

    You decide to put it to a test. Being a clever cookie, you don't just order a huge pile of computer memory chips. You know you can do far better than relying on standard components. So you start designing and building ultra high density storage devices: high energy modules each capable of storing an insane amount of information. Soon you have set up a manufacturing line for these memory devices, and you find yourself managing a huge workforce connecting the devices together into one single big memory bank. An immense memory structure starts to grow. In the months following, modules continue getting added, and the total data storage capacity keeps growing proportional to the volume of the structure. 

    Then one day a nerdy guy in a raincoat, who introduces himself as 'Gerard', comes along. He looks with interest at the huge memory structure evolving, and scribbles fanatically on a notepad. Before he leaves he takes you aside and makes a puzzling remark: "According to the holographic principle there is a fundamental limit to the information capacity of space, and this limit increases with the surface area. Because volume increases more rapidly than surface area, at some point nature will play a very dirty trick on your growing volume of memory."



    Gerard, the prophet of doom, during his visit, depicted in front of the memory volume under construction. Clearly visible are the liquid nitrogen supply pipes required to cool the structure.

    You have no clue what the guy is talking about, and immediately after he leaves, the construction work demands your attention again. Soon you forget about the strange encounter. Months go by, and the memory structure keeps growing and reaches truly gargantuan proportions. Then one day, 21st December 2012, early in the morning, when you are in your office and just bringing a cup of coffee to your mouth, a deep rumbling noise emerges from the construction site... 


    Quantum meets gravity
    The holographic principle is a fundamental result obtained by combining key features of the two pillars of modern physics: quantum theory and general relativity. Two pillars which, despite huge efforts over many years, remain incompatible. Any fundamental result applicable to the 'region of overlap' between quantum physics and general relativity therefore constitutes a major breakthrough. Unfortunately, we don't have many of these, and surely the holographic principle stands out as the prime example of such a breakthrough. 

    Below I will demonstrate that the holographic principle boils down to 'counting a black hole'. It is about deriving the number of quantum degrees of freedom (or equivalently, the number of bits) that need to be specified to describe the physical state of a black hole in all its details.   

    Consider a spherical region of space. Quantum physics tells us that a quantum of energy localized in this region cannot be arbitrarily small. The minimum energy of such a quantum is inversely proportional to the circumference of the region. Make the circumference twice the size, and the minimum quantum shrinks to half. What happens when one creates a quantum containing less than the minimum energy? Well, nothing in particular happens; it is just that Heisenberg's uncertainty principle will prevent such a low-energy quantum from fitting within the spherical volume specified.
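To put a number on this, here is a quick sketch in Python. It takes the minimum quantum to be a photon whose wavelength equals the region's circumference (the picture made quantitative at the end of this post); the constant values and the function name are mine, using standard SI values.

```python
import math

h = 6.626e-34   # Planck's constant (J s)
c = 2.998e8     # speed of light (m/s)

def min_quantum_energy(radius_m):
    """Minimum photon energy that fits in a sphere of radius R:
    a photon whose wavelength equals the circumference 2*pi*R."""
    wavelength = 2 * math.pi * radius_m
    return h * c / wavelength

# Doubling the circumference halves the minimum quantum:
print(min_quantum_energy(1.0) / min_quantum_energy(2.0))  # -> 2.0
```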

    Ok, so we accept that a finite region can only contain energy in steps, and that the step size is inversely proportional to the circumference of the region. However, things become really interesting when we consider the other end of the spectrum: the maximum energy that can be contained in a finite region.
     
    Einstein's theory of gravity tells us that there is indeed a limit to the total amount of energy that can be stored within a region of space. This maximum energy is proportional to the region's diameter. Give the region twice the diameter, and the maximum energy doubles. What happens if one keeps on adding energy into a volume of space such that the maximum energy content gets reached and then exceeded? The German physicist Karl Schwarzschild worked this out for us during World War I, when fatally ill and stuck in the trenches at the Russian front. (Anyone got a better candidate for the title 'most dedicated scientist ever'?) What Schwarzschild found was that when the maximum energy of a spherical volume is reached a black hole must have formed that fills the whole volume. When more energy gets added, this black hole starts to outgrow the volume, with its energy content spilling over into the neighboring volume.  


    Human genius meets human cruelty: the idea of a black hole originated in the trenches during World War I. 
       
    The conclusion is inevitable. A finite volume not only enforces energy storage in finite steps, but also limits the total energy that can be stored. We can combine these constraints imposed by quantum theory and gravity theory, respectively, by calculating the ratio of the total maximum energy over the minimum quantum. In doing so a constraint emerges on the number of quanta that can be contained in a spherical region. This maximum number is proportional to the product of the sphere's diameter times its circumference or, in other words, proportional to the surface area of the sphere.  
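The scaling argument above can be checked numerically. The sketch below (my own function names; the exact coefficients anticipate the derivation at the end of this post) divides the gravitational maximum by the quantum minimum and confirms that doubling the radius quadruples the number of quanta, just as the surface area does.

```python
import math

G = 6.674e-11   # gravitational constant (m^3 kg^-1 s^-2)
h = 6.626e-34   # Planck's constant (J s)
c = 2.998e8     # speed of light (m/s)

def max_energy(R):
    # Gravity bound: maximum energy proportional to the diameter
    return c**4 * R / (2 * G)

def min_quantum(R):
    # Quantum bound: minimum quantum inversely proportional to the circumference
    return h * c / (2 * math.pi * R)

def max_quanta(R):
    # Maximum number of quanta the region can hold
    return max_energy(R) / min_quantum(R)

# Doubling the radius doubles both diameter and circumference,
# so the maximum number of quanta grows fourfold -- like the area:
print(max_quanta(2.0) / max_quanta(1.0))  # ratio of about 4
```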

    This constraint tells us something deep and profound about the characteristics of the still elusive theory of quantum gravity. To describe the full physics taking place within a spherical region of space, you need to specify all the quanta. This requires a number of bits proportional to the surface area of the region. In other words: what happens within the region can be thought of as encoded in surface bits. If that is indeed true, the fundamental laws of physics that describe what happens within a given volume should take the shape of some kind of fast-paced quantum game-of-life at the surface of the volume. The phenomena we observe within a region of space are thereby reduced to a mere holographic projection of a quantum game taking place on the region's boundary. 

    And that, in a nutshell, is the holographic principle. That was not too difficult, was it? Independent of your answer to that question, you probably want to skip the rest of this blog post. 

    But for those who insist on seeing some math, be my guest:


    Bohr's black hole 

    Just as the hydrogen atom formed the playground that allowed the derivation of quantum mechanics, the Schwarzschild black hole acts as a playground for quantum gravity models, and for the holographic principle in particular. A simple derivation of the fact that the maximum entropy of a region in space is proportional to its area goes along the lines of the well-known Bohr model of the hydrogen atom. Like the Bohr model, the derivation given below is far from rigorous, yet it does correctly represent the key physics ingredients. 

    Here goes.

    We start by observing that, according to Newton's theory of gravity, an object of a certain mass will prevent slowly moving objects in its vicinity from escaping its gravitational attraction. Starting from Newton's one-over-the-distance gravitational energy and doing the math, it follows that any object within a distance R from a mass M and moving at a velocity less than the escape velocity v determined by

    v^2 = 2 G M / R

    will be gravitationally bound to the mass M. To prevent the formation of a black hole, we require the escape velocity v to be smaller than the speed of light c. We also use Einstein's E = M c^2 to eliminate M in favor of E, so as to make explicit that not just rest energy (mass) but any form of energy results in a gravitational attraction. It follows that the amount of energy that can be stored in a spherical region of radius R is bounded by

    E ≤ c^4 R / 2G 

    When this bound is saturated, a black hole of radius R has formed.
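As a sanity check on this bound (a sketch in Python; the function name and rounded SI constants are mine), the Sun's Schwarzschild radius of roughly 2.95 km should give back a mass of about one solar mass:

```python
G = 6.674e-11   # gravitational constant (m^3 kg^-1 s^-2)
c = 2.998e8     # speed of light (m/s)

def schwarzschild_energy(R):
    """Maximum energy a sphere of radius R can hold before a black hole
    of that radius forms: E = c^4 R / (2 G)."""
    return c**4 * R / (2 * G)

# The Sun's Schwarzschild radius is about 2.95 km; converting the
# saturating energy back to a mass via M = E / c^2 should recover
# roughly 2e30 kg, the solar mass.
E = schwarzschild_energy(2.95e3)
print(E / c**2)
```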

    Now we need to bring energy quantization into the picture. Einstein taught us that light of frequency f comes in photons with energy hf each, where h is Planck’s constant. The lowest energy quanta that can be accommodated in a spherical region of given size are photons with a wavelength equal to the region's circumference. Equating the product of the wavelength and the frequency to the speed of light, it follows that the energy content of the region is quantized in steps

    dE = h c / 2π R

    Combining the last two equations, it follows that a spherical region bounded by an area A can contain a number of quanta equal to 

    E / dE ≤ c^3 A / 4 G h 
       
    If each quantum carries one nat of information, it follows that the entropy of a spherical region with area A is bounded by

    S ≤ k c^3 A / 4 G h 

    where k is Boltzmann's constant. Again, the bound gets saturated (the equality sign applies) once a black hole with surface area A has formed. 
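To get a feel for how enormous this bound is, here is a back-of-the-envelope evaluation in Python (my own function name, rounded SI constants; one bit equals ln 2 nats) for a sphere of just one centimeter radius:

```python
import math

G = 6.674e-11   # gravitational constant (m^3 kg^-1 s^-2)
h = 6.626e-34   # Planck's constant (J s)
c = 2.998e8     # speed of light (m/s)
k = 1.381e-23   # Boltzmann's constant (J/K)

def entropy_bound(R):
    """Holographic entropy bound S <= k c^3 A / (4 G h)
    for a sphere of radius R with area A = 4 pi R^2."""
    A = 4 * math.pi * R**2
    return k * c**3 * A / (4 * G * h)

# A sphere of 1 cm radius already bounds an astronomical amount
# of information -- far beyond any conceivable memory bank:
S = entropy_bound(0.01)
nats = S / k                 # number of nats
bits = nats / math.log(2)    # one bit = ln(2) nats
print(f"{bits:.1e} bits")    # about 2.8e65 bits
```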

    Notwithstanding the hand-waving nature of the derivation, and although we have used Newton's law of gravity rather than Einstein's, and despite having applied a Bohr-like description rather than a full quantum treatment, the correct holographic description results: an entropy bound proportional to surface area. Also the constant of proportionality between entropy and surface area is correct within a factor of 2π (note that the constant h features in the equation, rather than h-bar). 

    Not too bad for a simple back-of-the-envelope estimate of a fundamental result at the tricky intersection of quantum physics with gravitational physics, a dark enigmatic area considered by many the exclusive domain of superstring theory.

    Comments

    Short summary:

    CERN's biggest threat to mankind is their computers. Lol.

    Amateur Astronomer
    I agree that holographic theory is a good way to combine key features of quantum theory and general relativity. With a slight difference: I tend to think of each Planck volume containing more than one bit of information, described by a partition function, because of the different types of energy and polarizations that are observed in space. Erik Verlinde appeared to be using 4 Planck areas to represent one unit of entropy in his paper at the ICHEP conference in Paris on 22 July. http://indico.cern.ch/getFile.py/access?contribId=1201&sessionId=47&resI...

    Your article and Verlinde’s presentation are a step in the right direction, but I really believe that we should be talking about Planck volume instead of Planck surface, even if the volume is a hollow shell only one Planck length thick. Technically a surface area contains nothing but a set of locations in space. Even to color the surface black or white to signify one bit of information implies a third dimension of at least one unit thickness to contain the pigment.

    In your example one concept I differ on is how much information inside the hologram is required to be represented on the hologram. One special case is where the computer was designed to store data as dynamical acoustics (sound) contained within a pressure vessel and separated from the hologram by a vacuum space. Some secondary feature of the sound might appear in the hologram, but it is doubtful that all of the data would be found there. I would argue that the hologram is only required to contain the information that is physically propagated to distant locations. That is gravity, changes of gravity, net electric charge, magnetic fields, and thermally produced microwaves. The example is extreme, but it establishes a principle.

    When we talk about Planck volume the discussion picks up where you left off, connecting the microscopic degrees of freedom to the hologram as Verlinde did in his slide presentation. 
Then the next logical step seems to be something like Schrödinger’s statistical thermodynamics with average energy E and partition function Z:

    S = E/T + k ln Z

Verlinde has avoided going that far, because it wasn’t necessary for his presentation, and it forces the issue about how many energy states there are in one Planck space, and how they are partitioned. He did very cleverly connect the macroscopic energy (general relativity) associated with a local mass to the microscopic degrees of freedom (quantum mechanics) of a gravity field, somewhat in agreement with the connection you described in your article. The future direction appears to be toward relating the degrees of freedom to the partition function, and eventually toward a quantum mechanical function that separates the average energy into the possible energy partitions. I guess the objective is to eventually arrive at a physical description of the Planck space and how the local mass alters the partition function. At least I hope that is the direction it is going. A very good article. Thanks.
    Johannes Koelman
    Thanks Jerry, you make some interesting comments. "... I really believe that we should be talking about Planck volume instead of Planck surface, even if the volume is a hollow shell only one Planck length thick." Could be. The qualitative semi-classical picture described above will not give an answer to that. On the other hand, we really need to start without carrying the notions of space and time. The real challenge is to get some description on how quantum degrees of freedom can lead to the emergence of spacetime. "I would argue that the hologram is only required to contain the information that is physically propagated to distant locations. That is gravity, changes of gravity, net electric charge, magnetic fields, and thermally produced microwaves." Is there any other information (that would not be subject to Hawking evaporation)? "I guess the objective is to eventually arrive at a physical description of the Planck space and how the local mass alters the partition function. At least I hope that is the direction it is going." I echo that! This blogpost is limited to the counting of the black hole's degrees of freedom. The Bohr-model-like picture that emerges (maximally redshifted photons skating along the horizon with wavelengths equal to the horizon circumference) touches upon black hole complementarity. I might come back to that in a future post.
    Amateur Astronomer
    “The real challenge is to get some description on how quantum degrees of freedom can lead to the emergence of spacetime.” You gave this helpful explanation that is seldom seen in popular media. It makes Verlinde’s method easier to understand. He bypassed the questions of space time and built his theory on degrees of freedom. Other writers have done that, but they didn’t say why.

    A system with zero degrees of freedom would have entropy of minus infinity. That doesn’t look like a starting place. Infinity in physics means that the math is wrong. If there was no energy and only one degree of freedom in a system, the entropy would be zero. That seems to be the starting point for the cosmos. Then we need something like the uncertainty principle, but probably with a different value of the Planck constant. With uncertainty, energy and time emerge as a pair. Momentum emerges with distance (space) as another pair. It looks like space and time emerge from one degree of freedom, together with energy and momentum, when that degree of freedom allows the uncertainty principle to operate. The missing part is how the energy and momentum, or space and time, accumulate. For that we have the principle of radiant focusing to provide the non-randomness that is required for accumulation. I guess there is a chance that someone will develop a theory of emergent space time with enough detail that it can be tested.

    From these first few steps it appears that a theory of creation will be developed from the third law of thermodynamics as it was represented by Boltzmann and Schrödinger. That brings me to a final topic. College doesn’t teach the third law to many students. It only appears in a few specialized graduate programs, and then it is only slightly touched on. Almost every day some educated person approaches me with a question about creation, with reference to the second law of thermodynamics. 
It would be interesting to see an article about the third law written for general practitioners, by someone who specializes in that part of physics. The average reader is totally lost in discussion of creation without it. Thanks Johannes
    Johannes Koelman
    "A system with zero degrees of freedom would have entropy of minus infinity. [..] If there was no energy and only one degree of freedom in a system, the entropy would be zero." Hmmm... Think here you got degrees of freedom mixed up with number of states. The difference between the two is a logarithm...
    Amateur Astronomer
    Yes, you are correct. I should have said when there is only one possible state there are no degrees of freedom and the entropy is zero. Thanks for the correction.
    "Then one day, 21st December 2012" .... nice one

    Amateur Astronomer
    Johannes, did you notice that your equations for dE/E and S in terms of c, h, and G are essentially the same as Paul Dirac derived for his sea of energy? All you are lacking for a proof of the Dirac Sea is to divide the total area A into Planck areas, like Verlinde did in his presentation, then express each Planck area as a function of the Planck length. You actually get the proof for Dirac Sea of Energy if the Planck areas are represented as spherical triangles of equal sides on an event horizon.
    Johannes Koelman
    Jerry, do you have a reference for me?
    Amateur Astronomer
    Dirac Sea of Energy in the present context is one Planck Energy per Planck Volume. http://en.wikipedia.org/wiki/Planck_units If you take the Planck area to be 2hG/c^3, and n is the number of Planck areas in A, then entropy takes on the classical value. S = kn/2 Dirac Sea of Energy is very large but not infinite as some people try to say. The quantum LC circuit equivalent puts a finite value on the energy when the electromagnetic potential is set equal to the gravitational potential. Then I believe the total entropy would be kn/2, but the gravitational half would be kn/4. So the average Planck area for this purpose should be hG/c^3 in agreement with LC theory and equal partition. I gave a complete derivation of the LC models for Z and equal partition in previous comments on your other articles. On this occasion I will not reference myself, so the easiest-to-find reference for an incomplete derivation is given here. http://en.wikipedia.org/wiki/Quantum_LC_circuit Notice that without virtual mass and a partition function for gravitational potential there is no way to define an average LC frequency or the number of oscillators in a volume. Then the old argument about infinite energy occurs. Instead of taking the potential gravity of a virtual mass in ZPE, you took the gravity of a black hole, which is essentially the same energy density converted in the same way to energy. With the LC model your energy equation becomes something like E = V c^7/hG Or a similar function that differs by no more than a small coefficient of Pi depending on your preference of average Planck areas. The dE is just the Planck energy of one Planck volume, or the Dirac Sea of Energy that makes a type of proof if it is all written out correctly. (n/V) dE = c^7/hG That brings a question about Planck volume. The radial dimension is nearly mashed flat by gravity as predicted in the Schwarzschild metric. Nearly flat I said. 
Actually Schwarzschild mashed it completely flat, but that leads to the conclusion that a black hole has no volume inside the event horizon, and nothing but a singularity. A lot of scientists are rewriting that part of General Relativity, so my choice of a nearly flat radial direction sounds more like something that could contain information S and energy E stored in a zero point oscillator. I don’t have a concise reference from a peer reviewed paper. The conclusion is that empty space has to contain a lot of energy potential to resist the collapse of a black hole. The event horizon is at the radius where that potential is exhausted, giving a good estimate of how much energy is in the vacuum. In summary there are two parts of the proof. First the energy is finite and measurable indirectly from properties of space and energy partition in the LC model. Second the potential energy density from the LC model in flat space is the same as the observable energy at the event horizon where the potential has been fully converted to gravity.
    Johannes Koelman
    Jerry, I think I roughly understand your line of reasoning. Keep in mind, however, that the intuitive picture emerging from the work of Bousso is that of light-like cones (including light-like cones folded back onto themselves, also referred to as horizons) that need to be quantized holographically. In terms of the 'quantum LC circuitry' you refer to, this corresponds to a wiring that is:
    A) light-like
    B) much sparser than one would intuitively expect.
    Gravity is highly effective in 'thinning out' degrees of freedom, thereby generating an ultra-austere description of the universe. The correct semi-classical picture here is that of photons skating along light-like cones and horizons in discrete directions. (See post on the 'mikado model' of the universe.)

    I need to add that all of the above is intuition; no-one has yet come up with a realistic and consistent 'holographic LC circuit' or 'mikado' model. But if such a model exists, gravity need not be built into it; it will emerge statistically.
    Amateur Astronomer
    Raphael Bousso reminds me of Richard Feynman’s early years, both in style and productivity. I rushed out to watch another batch of his videos as soon as you mentioned his name. Your explanation was very helpful and I agree fairly well with the direction new research is taking toward unification. Entropic gravity emerging statistically from fundamental principles seems to be one of the biggest advancements in recent times. My concept of a sparse field may be different than yours. I think in terms of force and power operating over a distance at light speed with no mass transferred in the power flux. It is described in a very well known equation. P = c F For example the power flow required for gravity to hold me in my office chair is 200 mega watts. That much power is hard to describe as sparse. On the other hand it is easy to understand the power flow applied to your statement. “Gravity is highly effective in 'thinning out' degrees of freedom,’ Curvature of Earth gravity is said to be only 3 parts per billion different from flat space. It means the vacuum is able to lend 300 million times more power than it is already providing in my office gravity. Then the hard thing to imagine is that a black hole has reached that limit of power. Without reference to Dirac or quantum mechanics, the vacuum has been described here with a very large power flux. It converts easily to a huge potential energy density. Then the question of a cosmological constant must be resolved. Raphael Bousso has resolved the issue of curvature and the cosmological constant through his multiverse theory. To me it is a type of partition for space time that addresses the requirements for mathematical consistency. The large potential energy density can be partitioned to allow a small cosmological constant. 
In more conventional terms, the partition function Z can also be applied to the cosmological constant, but as you indicated, Z would insert gravity as a parameter, and the objective is to have gravity emerge from fundamentals.
    The story about Gerard was hilarious, thanks.

    Just want to say thanks for these blog posts. As an undergrad I find them very enlightening and informative!