    Is Information Physical? What Does That Mean?
    By Massimo Pigliucci | February 24th 2014

    For a while now I’ve been reading Jim Baggott’s Farewell to Reality: How Modern Physics Has Betrayed the Search for Scientific Truth, a fascinating tour through cutting-edge theoretical physics, led by someone with a physics background and a healthy (I think) dose of skepticism about the latest declarations from string theorists and the like.

    Chapter 10 of the book goes through the so-called “black holes war” (BHW) that stretched for two and a half decades between Stephen Hawking on one side and Leonard Susskind, Gerard ’t Hooft, and others on the other. The BHW is interesting because Baggott turns it into an illustration of what he thinks is the problem with current theoretical physics, a problem that has much to do with philosophical theories of truth and with the difference between physics and metaphysics.

    The BHW began with a challenge issued by Hawking at a scientific gathering back in 1981. Quantum theory maintains that the information carried by the wave function of a quantum object cannot be destroyed; it must be preserved, because it connects past and future. But Hawking (who is a relativist, not a quantum theorist) had arrived at the conclusion that black holes evaporate over time, emitting what is now known as Hawking radiation. Since everything that ends up inside a black hole’s event horizon can be thought of as representing bits of information, Hawking concluded that as the black hole evaporates that information is not just scrambled, as previously thought, but actually destroyed, thereby contradicting a crucial tenet of quantum theory. Oops!
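
    To give a sense of the scales involved (these numbers are the standard textbook form of Hawking’s prediction, not something spelled out in the post): a black hole of mass M is predicted to radiate like a black body at the Hawking temperature

        T_H = ħc^3 / (8πGMk_B)

    which for a black hole of one solar mass comes to roughly 6 × 10^-8 K, far colder than the cosmic microwave background, with a corresponding evaporation time that scales as M^3 and runs to more than 10^67 years. That gives some idea of how far the evaporation scenario sits from anything we can currently observe.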

    You can see why Susskind and ’t Hooft, who are quantum theorists, didn’t like this, ahem, a single bit. The BHW was on.

    It took Susskind, ’t Hooft and Don Page a number of years to do it, but they finally came up with a serious counter to Hawking’s challenge, indeed one that led Hawking to admit defeat in 2004. The best known visual metaphor that captures the response is Susskind’s famous “holographic universe.” The principle essentially states that the information contained in an n-dimensional region (a three-dimensional black hole, for instance) is equivalent to the information found on its (n-1)-dimensional boundary (for example, the surface of said black hole).
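
    To put a number on the boundary idea: the standard Bekenstein-Hawking formula (quoted here as background; it is not written out in the post) ties a black hole’s entropy, and hence its maximum information content, to the area A of its horizon rather than to its volume,

        S = k_B c^3 A / (4Għ)

    which works out to roughly one bit for every 4 ln 2 Planck areas of horizon. The fact that the information capacity of a region scales with the area of its boundary, not with its volume, is the quantitative seed of the holographic principle.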

    Susskind boldly proposed that the universe itself behaves as a hologram, i.e., that all the information that constitutes our three-dimensional world is actually encoded on the universe’s equivalent of a black hole’s event horizon (the so-called cosmic horizon).

    If true, this would mean that “reality” as we understand it is an illusion, with the action actually going on at the cosmic horizon. Baggott ingeniously compares this to a sort of reverse Plato’s cave: it isn’t the three-dimensional world that is reflected in a pale way on the walls of a cave where people are chained and can only see shadows of the real thing; it is the three-dimensional world that is a (holographic) projection of the information stored at the cosmic horizon.

    Is your mind spinning properly? Good.

    What does any of this have to do with the BHW? That became clear in 1998, when Juan Maldacena (theoretically) demonstrated a “superstring duality”: it turns out that the physics of an n-dimensional spacetime described by a particular type of superstring theory (which one doesn’t really matter for our purposes here, but you’ll find all the details in Baggott’s book) is equivalent to the physics described by a quantum field theory applied to the (n-1)-dimensional boundary of that same spacetime. This result has deep connections with the idea of a holographic universe, so much so that Susskind eventually wrote in triumph:
    Whatever else Maldacena and Witten had done, they had proved beyond any shadow of a doubt that information would never be lost behind a black hole horizon. The string theorists would understand this immediately; the relativists would take longer [ouch!]. But the war was over.
    Indeed, as I said, Hawking conceded in 2004, thus ending the BHW, despite some rather large caveats attached to the Maldacena-Witten results, such as that, you know, they actually describe a universe that is not at all like our own.
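
    For readers who want the canonical statement of the duality (this is the standard formulation, filled in here rather than taken from Baggott’s summary), Maldacena’s result relates

        type IIB superstring theory on AdS_5 × S^5   ≅   N = 4 supersymmetric Yang-Mills theory on the 4-dimensional boundary

    where AdS_5 is five-dimensional anti-de Sitter space, a spacetime with a negative cosmological constant. Our universe’s cosmological constant appears to be positive, which is the precise sense in which the duality describes a universe that is not at all like our own.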

    And now comes what Baggott properly refers to as the reality check. Let us start with the obvious, but somehow overlooked, fact that we only have (very) indirect evidence of the very existence of black holes, the celestial objects at the center of the dispute sketched above. And let us continue with the additional fact that we have no way of investigating the internal properties of black holes, even theoretically (because the laws of physics as we understand them break down inside a black hole’s event horizon). We don’t actually know whether Hawking radiation is a real physical phenomenon, nor whether black holes do evaporate.

    To put it another way, the entire BHW was waged on theoretical grounds, by exploring the consequences of mathematical theories that are connected to, but not at all firmly grounded in, what experimental physics and astronomy are actually capable of telling us. How, then, do we know if any of the above is “true”? Well, that depends on what you mean by truth or, more precisely, on what sort of philosophical account of truth (and of science) you subscribe to.

    There are several theories of truth in epistemology, but the two major contenders, especially as far as the sort of discussion we are having is concerned, are the correspondence and the coherence theories. Roughly speaking, the correspondence theory of truth is what scientists (usually without explicitly thinking about it this way) deploy: in science a statement, hypothesis or theory is considered (provisionally, of course) true if it appears to correspond with the way things actually are out there. So, for instance, it is true that I wrote this essay on an airplane on my way between Rome and New York, because this statement corresponds with reality as ascertainable via a number of empirically verifiable facts (e.g., my airplane tickets, witnesses who saw me boarding, deplaning and writing on my iPad in between, time stamps encoded in the file I generated, and so on). 

    A coherentist account of truth seems to me to be more appropriate for fields like mathematics, logic, and perhaps (to a point) moral reasoning. Coherentism is concerned with the internal consistency of a given account, eschewing any reference to correspondence with a reality that, by definition, we can only access indirectly (after all, if you wish to measure the degree of correspondence between your theories and the way things really are, it would seem that you need some kind of direct access to the latter; but you don’t have it, which is why you need theories to begin with; there are ways around this, but they would lead us too far from the matter at hand).

    Back to the outcome of the BHW: in what sense is the holographic principle “true,” given our short discussion of theories of truth? As Baggott reminds his readers, the principle hasn’t been established by way of empirical observations or experiments, so it cannot possibly be true in the sense of the correspondence theory. Rather, it has been arrived at by way of superstring theory, which is itself a theoretical structure that has, so far, not been empirically tested either. The holographic principle, therefore, is true, at best, in the sense of the coherence theory of truth. But the history of physics is littered with examples of beautifully coherent theories that turned out to be wrong when the empirical verdict finally came in. Perhaps Hawking conceded a bit prematurely, after all.

    Finally, back to the idea that “information is physical.” What does that mean? Baggott summarizes the two possibilities thusly: “The scientific interpretation acknowledges that information is not much different from other physical quantities [like, say, temperature]. But, as such, it is a secondary quality [italics in the original] … The metaphysical interpretation suggests that information exists independently of the physical system, that it is a primary quality [original italics].” He concludes that he has no problem with either interpretation, as long as nobody attempts to pass the second one off as science. I couldn’t agree more.
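
    One concrete way to cash out the first, “scientific” interpretation (worth recalling here because the slogan “information is physical” is originally Rolf Landauer’s, though neither Landauer nor this formula appears in the post) is Landauer’s principle: erasing a single bit of information in a system held at temperature T must dissipate at least

        E = k_B T ln 2

    of energy as heat. On that reading information is physical in the modest, secondary-quality sense: it is always carried by some physical degrees of freedom, and manipulating it carries thermodynamic costs, without any need to promote it to a free-standing primary ingredient of reality.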

    Originally appeared on Rationally Speaking.

    Comments

    I find this article's take on theory in science confusing. For example: it was BECAUSE of experimental evidence, in ADDITION to consistency requirements, that we arrived at GR. Experimental evidence strongly supported (at least local) Lorentz symmetry and the equivalence principle. People tried all kinds of theories of relativistic gravity ... how was their success decided? Experimental evidence. It is actually due to the wealth of experimental evidence that many theories could be discarded without even needing a single dedicated experiment to test them. Eventually GR (and a few other theories that can be equivalent to GR with some parameter choices) remained as a consistent explanation of the evidence.

    Does this mean we can't use the predictions of GR until we experimentally "prove" it in all circumstances? That is silly; you cannot "prove" something in that context. GR is the best model of the experimental evidence of gravity that we currently have, and if experimental evidence eventually shows a deviation, we can then use this to obtain even more precise models. But to throw out its predictions as "ungrounded" in experiment is misleading.

    All of the extrapolations have so far withstood newer experimental data (strong-gravity tests with pulsars, gravitational waves showing the infall of binaries, etc.), and based on experimental evidence of quantum effects, we don't expect any deviation until reaching the quantum gravity regime. So for some predictions we are extrapolating, yes. But "ungrounded" in experiment? NO.

    "we have no way of investigating the internal properties of black holes, even theoretically (because the laws of physics as we understand them break down inside a black hole’s event horizon)."

    This is wrong. We can discuss GR predictions if you like, but nothing magical happens at an event horizon. Even in flat spacetime, there are event horizons, as the finite speed of light yields causal event horizons. Maybe you meant to refer to the singularity GR predicts. But then this doesn't fit with your other arguments, which refer to a horizon.
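
    (A standard textbook illustration of the commenter's point, added here for concreteness rather than drawn from the comment: an observer undergoing constant proper acceleration a through flat Minkowski spacetime has a Rindler horizon a distance

        d = c^2 / a

    behind them. Light emitted from beyond that surface can never catch up with the observer for as long as the acceleration is maintained, even though spacetime there is perfectly flat and nothing locally unusual happens at the horizon itself.)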

    I feel that in this article you are confusing two things:
    Currently, (1) theorists have to use incredibly sophisticated math to obtain a theory that has the necessary properties to "correspond" to the current wealth of experimental data, and (2) the only current way to "consistently" describe the experimental data yields some theories that predict consequences well beyond our current experimental capabilities.

    Note that we are led to these theories FROM the experimental data, and that even in (2), the candidate theories aren't untestable, as they need to match the wealth of current experimental data; it is just that some of the consequences they predict are beyond current tests. You seem to be trying to leap from the inability to currently test some consequences to claiming the theories have no grounding in experiment at all. That is just false logic.

    Look, our current experimental evidence has led us to string theory. It does have some conceptual features which are unsettling to our intuition, and theorists would LOVE an amazing insight that would give some alternate theories to test out. However, no good contender has come about yet. Does this "prove" string theory is the only solution? No, of course not. But your claim that string theory is somehow beyond comparison with experiment is equally silly. Currently, the experimental evidence is so restrictive that we can't come up with any other theories that are internally consistent and can be consistent ("correspond") with all the experimental evidence.

    If you start advocating that no theory needs to be mathematically consistent, how far are we allowed to shrink the "range of applicability" before it becomes absurd to you? Would you be okay if we took the demand for experimental evidence to an extreme, got rid of even induction, and said science is nothing more than a book of facts: I measured X on Y at time Z? That has no predictive power.

    A good theory has BOTH "correspondence" to experimental evidence and mathematical "consistency" in its range of applicability.

    I've never quite felt comfortable with discussions of the "holographic principle" in regard to the Universe, but it took me some time to understand why. I'm sure it's a limitation of my understanding of how holography works, and of how the term is being used in this case.

    Conventional holography involves using a laser beam, splitting it, letting one half of the beam bounce off an object, and then recombining the beams so that the recombined beam strikes a photographic film or other image-recording system. Illuminate the developed hologram with a laser beam, and you get an image that you can turn and see some degree of parallax in. The information for the image is stored in the relationships, in the interference patterns created.
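
    In slightly more formal terms (a standard optics statement, added for concreteness rather than drawn from the comment): if E_o is the object beam and E_r the reference beam at the recording plane, the film records the intensity

        I = |E_o + E_r|^2 = |E_o|^2 + |E_r|^2 + E_o E_r* + E_o* E_r

    and it is the cross terms that preserve the relative phase of the object beam. That is why re-illuminating the developed hologram with the reference beam reconstructs an image with depth and parallax, rather than a flat photograph.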

    However, there's one thing the hologram can't show you about the object you photographed: the interior. This issue would, I would think, arise as well regarding the holographic "event horizon" of the Universe, though I expect that the mathematical physicists would say that it records information down to the level where the particles recorded only have exteriors, with no interiors (probably something to do with the Planck length).

    Am I right here, or is there something much simpler I'm missing?

    Massimo Pigliucci is one of the philosophers of science writing today. Here he has a tilt at my favourite topics in philosophy and the philosophy of information theory. He mentions the debate between Stephen Hawking and Leonard Susskind and Gerard 't Hooft, which is centered on the nature of information. Hawking is happy to refer to what is in black holes in terms of bits. This is not so popular with quantum mechanics gurus who think that the universe is fundamentally probabilistic.

    The first comment that I can add is that Norbert Wiener's famous (among philosophers of physics and biology) exhortation that "information is information, not matter or energy..." still rings true to me. However, in my thesis I argue that wherever there is physical structure, there is information, and nowhere else. Another observation worth considering is that Pigliucci seems to consider that information must reduce to truth, or depends upon truth. Pigliucci is in good company: the world's leading philosophers of information, the late Fred Dretske and the prolific current scholar in the field Luciano Floridi, agree with him.

    In my thesis and in a paper soon to be published I have argued that information is in fact intrinsically causally semantic but has no truth value: truth depends upon information and not vice versa. So Leonard Susskind and Gerard 't Hooft might be more comfortable with this view, since it does not involve what has been called digital ontology: the idea that the universe somehow reduces to binary yes/no questions.