Cosmologist Sean Carroll is one of many who have recently answered the annual question posed by Edge.org, which this year was: What scientific idea is ready for retirement? Sean, whom I’ve met at the Naturalism workshop he organized not long ago, and for whom I have the highest respect both as a scientist and as a writer, picked “falsifiability.”

Which is odd, since the concept — as Sean knows very well — is not a scientific, but rather a philosophical one.

Now, contra some other skeptics of my acquaintance, at least one of whom was present at the above-mentioned workshop, Sean is actually somewhat knowledgeable about and definitely respectful of philosophy of science, as is evident even in the Edge piece. Which means that what follows isn’t going to be yet another diatribe about scientism or borderline anti-intellectualism (phew!).

Rather, I’m interested in Sean’s short essay because he deals directly with the very subject matter I just covered in my recent post based on Jim Baggott’s thought-provoking book, Farewell to Reality: How Modern Physics Has Betrayed the Search for Scientific Truth.

Before we proceed, I should also point out that I’m not interested in debating physics with Sean, since he is the expert in that realm (if Jerry Coyne wants to debate evolutionary biology with me, that’s another matter…).

Indeed, I’m happy to watch the ongoing conversation between Carroll (and others) and critics of some trends in contemporary theoretical physics (like Baggott, Lee Smolin and Peter Woit) from the outside — which is actually a pretty good job description for a philosopher of science.

Rather, I’m interested in the philosophical aspects of Sean’s Edge essay and in what they say about his conception of science. I have, of course, invited Sean to respond to this post, if he wishes.

Sean begins the essay by attributing the idea of falsificationism to philosopher Karl Popper, correctly framing it within the broader issue of demarcationism (in this case, between science and pseudoscience). Sean immediately points out that demarcationism, while “a well-meaning idea,” is “a blunt instrument” when it comes to separating scientific from non-scientific theorizing, and he is right about that.

Indeed, trouble for Popper’s view began even before it was fully articulated, at the hands of the philosophically inclined physicist Pierre Duhem, who raised exactly the same objections that Sean summarizes in his Edge piece. Fundamentally, Duhem noted that in actual scientific practice there is a complex relationship between theory and observation or experiment (what philosophers refer to as the theory-ladenness of empirical results). Any given set of empirical data (say, from a particle accelerator experiment) doesn’t, strictly speaking, test the theory itself, but rather a complex web of notions: the focal theory (say, the Standard Model), a number of corollary theories and assumptions needed to build it, as well as assumptions about the correct functioning of the measurement instruments, the way the data are analyzed, and so on. If there is a mismatch, Duhem argued, scientists don’t immediately throw away the theory. The first thing they are likely to do is check the calculations and the instrumentation, moving then to the auxiliary assumptions, and only after repeated failures under different conditions will they finally abandon the theory (assuming they had strong reasons to take it seriously to begin with).
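
To see why this undermines naive falsificationism, it helps to spell out the logic. The schematic below is my own rendering of the standard textbook formalization of Duhem’s point, not anything from Sean’s essay:

```latex
% Naive falsificationism: hypothesis H predicts observation O,
% so H -> O; observing not-O refutes H by modus tollens.
% Duhem's point: in practice H never faces the data alone. The
% prediction also depends on auxiliary assumptions A_1 ... A_n
% (instrument calibration, background theory, data analysis):
\[
(H \land A_1 \land \dots \land A_n) \rightarrow O
\]
% A failed prediction therefore only licenses
\[
\lnot (H \land A_1 \land \dots \land A_n)
\;\equiv\;
\lnot H \lor \lnot A_1 \lor \dots \lor \lnot A_n
\]
% i.e., something in the conjunction is wrong, but logic alone
% cannot say whether the culprit is H or one of the auxiliaries.
```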

Later, around the middle of the 20th century, the influential philosopher W.V.O. Quine expanded Duhem’s analysis into what is now known as the Duhem-Quine thesis: scientific (or really, any) knowledge is the result of a complex web of interconnected beliefs, which include not just the elements mentioned by Duhem (i.e., those most closely connected to the theory under scrutiny), but also far-removed notions about the world and how it works, up to and including mathematics and logic itself.

This should not be taken as counsel for despair: scientific theories can still be tested, and regularly are. But if we are to speak precisely, what we are testing every time is our entire web of knowledge. If something goes wrong, the problem could in principle reside anywhere in the web. It is then up to clever and creative scientists to focus on the most likely culprits, eliminate the ones they can, and eventually reach a consensus as a community regarding the soundness of the theory being “tested.” That’s why science is just as much an art as it is a logical pursuit.

So far, so good. Sean then proceeds to state that “String theory and other approaches to quantum gravity involve phenomena that are likely to manifest themselves only at energies enormously higher than anything we have access to here on Earth. The cosmological multiverse and the many-worlds interpretation of quantum mechanics posit other realms that are impossible for us to access directly. Some scientists, leaning on Popper, have suggested that these theories are non-scientific because they are not falsifiable.”

If some scientists have indeed leveraged Popper in order to criticize string theory, the multiverse and all the other speculative ideas of modern theoretical physics, those scientists really ought to take my Philosophy of Science 101 course before they write another line on the subject. But I think the problem is actually a bit more complex and nuanced than Sean leads his readers to believe.

He continues: “The truth is the opposite. Whether or not we can observe them directly, the entities involved in these theories are either real or they are not. Refusing to contemplate their possible existence on the grounds of some a priori principle, even though they might play a crucial role in how the world works, is as non-scientific as it gets.”

Well, not exactly. To begin with, I sincerely doubt that critics of those theories refuse to contemplate the existence of strings, branes, and the like. Their point, rather, is that these hypothetical entities (“unobservables” in the lingo of philosophy of science) have in fact been contemplated, for decades, and so far nothing much has come out of it, empirically speaking. After all, Smolin, Woit, and Baggott observe, physics is a science, and science is supposed to make contact with the empirical world, at some point. The longer a theory fails to do so, the more problematic it ought to be considered. That’s all.

Sean does provide his own rough alternative to falsifiability. He claims that the two central features of any scientific theory are that it is definite and that it is empirical. While there is a lot more to be said about the nature of scientific theorizing (and yes, I understand that Sean is not a philosopher of science, and moreover that Edge probably strictly limits the length of the responses it seeks), let’s go with it for a moment.

Sean says that “by ‘definite’ we simply mean that they say something clear and unambiguous about how reality functions.” He argues that string theory does precisely that, insofar as it says that in certain regions of parameter space particles behave as one-dimensional strings. He is right, of course, but the criterion is far too inclusive. For instance, someone could argue that the statement “God is a conscious being or entity who exists outside of time and space” is also quite “definite.” We all understand what this means, ironically all the more so after modern physics has helped us make sense of what it may mean to be “outside of time and space.” Whatever “was” “there” before the Big Bang was, from the point of view of our universe, outside (our) time and (our) space. So, to say something definite (as opposed to something postmodernistically nonsensical) is certainly a good thing, but it ain’t enough to pinpoint good scientific theories.

What about the empirical part? Here is, according to Sean, where the smelly stuff hits the fan. As mentioned above, he rejects a straightforward application of the principle of falsifiability, for reasons similar to those brought up so long ago by Duhem. But what then? Sean mentions some examples of what Baggott calls “fairy tale physics,” such as the idea of a multiverse. His strategy is interesting, and revealing. He begins by stating that the multiverse offers a potential solution to the problem of fine-tuning in cosmology, i.e., the question of why so many physical constants seem to have taken values that appear to be uncannily tailored to produce a universe “friendly” to life. (I actually think that people who seriously maintain that this universe is friendly to life haven’t gotten around much in our galactic neighborhood, but that’s a different story.)

He continues: “If the universe we see around us is the only one there is, the vacuum energy is a unique constant of nature, and we are faced with the problem of explaining it. If, on the other hand, we live in a multiverse, the vacuum energy could be completely different in different regions, and an explanation suggests itself immediately: in regions where the vacuum energy is much larger, conditions are inhospitable to the existence of life. There is therefore a selection effect, and we should predict a small value of the vacuum energy. Indeed, using this precise reasoning, Steven Weinberg did predict the value of the vacuum energy, long before the acceleration of the universe was discovered.”
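
For readers who want the skeleton of Weinberg’s argument, here is a back-of-the-envelope sketch; this is my reconstruction of the standard account, with order-of-magnitude numbers only, not something taken from Sean’s essay:

```latex
% Weinberg's 1987 anthropic bound, in rough outline.
% Matter density dilutes with cosmic expansion,
%   rho_m(z) = rho_{m,0} (1 + z)^3,
% while the vacuum energy density rho_Lambda stays constant.
% For galaxies (and hence observers) to form, matter must still
% dominate at the redshift z_gal of galaxy formation (z_gal of
% order a few), which bounds the vacuum energy:
\[
\rho_\Lambda \;\lesssim\; \rho_m(z_{\mathrm{gal}})
  = \rho_{m,0}\,(1 + z_{\mathrm{gal}})^3
  \;\sim\; 10^{2}\,\rho_{m,0}
\]
% A much larger vacuum energy would have halted structure
% formation before observers could arise; hence the "selection
% effect" prediction of a small, but not necessarily zero, value.
```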

Notice two problems here: first, according to Baggott, Weinberg’s prediction was a matter of straightforward (if brilliant) physics, and it was conceptually independent of the fine-tuning problem. The same goes, a fortiori, for another famous prediction, by Fred Hoyle back in the ’50s, about the cosmic production of carbon. That one, which is nowadays often trumpeted as an example of how science has advanced by deploying the anthropic principle, was actually put forth (and confirmed empirically) before the very idea of an anthropic principle was formulated in the ’60s.

More crucially, again as pointed out by Baggott, the reasoning basically boils down to this: we have an empirically unsubstantiated but nice theoretical complex (the multiverse) that would very neatly solve the nagging fine-tuning problem, so we think the theoretical complex is on the mark. This is dangerously close to circular reasoning. The fact, if it is a fact, that the idea of a multiverse may help us with cosmological fine-tuning is not evidence or reason in favor of the multiverse itself. The latter needs to stand on its own.

And yet Sean comes perilously close to proposing just that: “We can’t (as far as we know) observe other parts of the multiverse directly. But their existence has a dramatic effect on how we account for the data in the part of the multiverse we do observe.” I truly don’t think I’m reading him uncharitably here, and again, I’m not the only one to read some cosmologists’ statements in this fashion.

None of the above should be construed as suggesting that ideas like the multiverse or string theory are somehow pseudoscientific. They are complex, elegant speculations somewhat grounded in well-established physics. Nor is anyone suggesting that barriers be put around the work or imagination of cosmologists and string theorists.

Go ahead, knock yourselves out and surprise and astonish the rest of us. But at some point the fundamental physics community might want to ask itself whether it has crossed into territory that begins to look a lot more like metaphysics than physics.

And this comes from someone who doesn’t think metaphysics is a dirty word…

Originally appeared on Rationally Speaking