Epistemological emergence is the idea that complex systems cannot, as a matter of practice, be described in terms of their component units because of our epistemic limitations, that is, our inability to carry out the computations. According to ontological emergence, on the other hand, a full understanding of complex systems in terms of their components is not possible in principle, not merely for practical reasons, because new levels of causality appear at higher levels of organization.
Take my favorite example: an engineer working on the Brooklyn Bridge. Epistemological emergence says that the engineer cannot work at the quantum mechanical level because he doesn’t have sufficiently powerful computers and enough time in his schedule to do so, though it would be possible in principle. But if ontological emergence is true, then the engineer had better work at intermediate, macroscopic levels of analysis, because those are the causally relevant ones.
Kauffman sees ontological emergence as more powerful than the epistemological flavor, and he subscribes to both (well, the ontological kind logically entails the epistemological kind anyway). But more powerful to what end? To defeat reductionism, for which he accepts physicist Steven Weinberg’s definition: “the explanatory arrows always point downward.” Reductionism of this sort is problematic for various reasons, according to Kauffman, including that with it “comes the conviction that a court proceeding to try a man for murder is ‘really’ nothing but the movement of atoms, electrons, and other particles in space.” Kauffman is really worried about free will.
Part of the problem here is that it is hard to define what emergence is. I stick to the basics and think of emergent properties as those properties arising from non-linear, non-additive interactions among the component parts of a system (as the popular refrain goes, “the whole is more than the sum of its parts,” the “more” arising from multiplicative or other non-additive relations among the parts). The advantage is that one can then measure the degree of emergence quantitatively, for instance with statistical tools like the analysis of variance.
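To make that last claim concrete, here is a toy calculation (the numbers and the helper function are mine, purely for illustration): an analysis-of-variance style decomposition separates the additive main effects of two “components” from the non-additive interaction between them, and it is that interaction term which, on this definition, quantifies emergence.

```python
# A toy sketch, with made-up numbers: decompose a two-way table of outcomes
# into additive effects plus an interaction (non-additive) residual.

def anova_interaction(table):
    """Split a 2-way table into grand mean, row effects, column effects,
    and the interaction residuals the additive model fails to capture."""
    rows, cols = len(table), len(table[0])
    grand = sum(sum(r) for r in table) / (rows * cols)
    row_eff = [sum(r) / cols - grand for r in table]
    col_eff = [sum(table[i][j] for i in range(rows)) / rows - grand
               for j in range(cols)]
    # Interaction: actual value minus the additive prediction.
    inter = [[table[i][j] - (grand + row_eff[i] + col_eff[j])
              for j in range(cols)] for i in range(rows)]
    return grand, row_eff, col_eff, inter

# A purely additive system: each component contributes a fixed amount.
additive = [[10, 20],
            [30, 40]]
# A non-additive system: the parts multiply rather than add.
emergent = [[1 * 1, 1 * 2],
            [3 * 1, 3 * 2]]

for name, tbl in [("additive", additive), ("emergent", emergent)]:
    _, _, _, inter = anova_interaction(tbl)
    ss_inter = sum(x * x for row in inter for x in row)
    print(name, "interaction sum of squares:", round(ss_inter, 3))
# → additive interaction sum of squares: 0.0
# → emergent interaction sum of squares: 1.0
```

When the additive model reproduces the table exactly, the interaction sum of squares is zero and nothing is “emergent” in this sense; any non-zero remainder is the measurable degree to which the whole departs from the sum of its parts.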
Kauffman describes three examples of emergence: the origin of life, the origin of agency (i.e., the capacity to make decisions), and the origin of consciousness. I find all three examples poorly chosen. While there is no question that plenty of non-linear interactions are involved in systems that have made each of these three transitions, unfortunately we understand very little about the transitions themselves. It is hard to imagine how one can explain a mystery (emergence) with an enigma (the origin of life, agency, or consciousness).
I have often discussed two of my favorite, much more mundane, examples, which I think allow for a better grasp of emergence, and help eliminate the aura of spooky mysticism that often surrounds the topic: water and houses. The properties of the molecular form of water are not the simple sum of the properties of the individual atoms of oxygen and hydrogen that make up the molecule. The reason is that new properties emerge in the system once we combine atoms of a given type in a certain spatial arrangement, and then again once molecules in turn interact as higher (than atoms) units. Sure enough, physicists and chemists have historically had a really hard time trying to predict the physical-chemical properties of water from first (quantum) principles, though we can understand and describe them when the analysis shifts to the (higher) molecular level. Even if they eventually succeed, remember that water is just about the simplest example of emergence one can think of, many, many orders of magnitude removed from the stuff Kauffman is interested in.
The example of the house is due to Richard Lewontin, and was originally meant to explain the difference between gene-environment interaction (my technical area of expertise) and simple nature-or-nurture thinking. If you build a house of bricks and lime, you could, in principle, ask what the relative contribution of the two components is to the finished edifice. You might, for instance, simply weigh the bricks and the lime, and conclude that X% of the house is “due to” the bricks and Y% is “due to” the lime. But you would be spectacularly missing the point, of course, because a house is not just the sum of a certain number of bricks and a certain amount of lime; it is the result of a precise pattern of alternating bricks and lime. (Substitute “genes” for bricks and “environment” for lime and you get Lewontin’s original point.)
Kauffman faces a second problem with his rather simple definition of reductionism (borrowed from a physicist, no less!). One can easily concede material reductionism, the idea that everything is made of the same “stuff” (quarks, strings, whatever), without having to go so far as agreeing to process reductionism, the very different and much stronger proposition that causation always originates only at the bottom level.
It would be silly to deny material reductionism, unless you happen to be a mind-body dualist (virtually no scientist is, and very few philosophers). On the other hand, it seems to me that the idea that “the explanatory arrows always point downwards” is pretty difficult to defend. While causality certainly is a slippery notion, the “cause” of, say, the recent woes of the housing sector in the United States is best understood at the level of human individual and societal interactions, not at that of quarks (of which humans, indubitably, are made).
So it is in fact true, from the point of view of material reductionism, that a court proceeding to try a man for murder is ‘really’ nothing but the movement of atoms, electrons, and other particles in space. But to mistake that for a successful example of process reductionism would be precisely like being satisfied with the explanation of a house in terms of counting bricks and weighing lime.