Being a skeptic is a rather lonely art. People often mistake you for a cynic, and I’m not using either term in the classical philosophical sense, of course.

In ancient Greece, the cynics were people who wished to live in harmony with nature, rejecting material goods (the root of the word means “dog-like,” and there are various interpretations as to its origin). The Western equivalent of Buddhist monks, if you will.

The skeptics, on the other hand, were philosophers who claimed that since nothing can be known for certain, the only rational thing to do is to suspend judgment on everything. That’s not what I’m talking about.

A skeptic in the modern sense of the term, let’s say from Hume forward, is someone who thinks that belief in X ought to be proportional to the amount of evidence supporting X. Or, in Carl Sagan’s famous popularization of the same principle, extraordinary claims require extraordinary evidence. In that sense, then, what I will call positive skeptics do not automatically reject new claims; they weigh them according to the evidence.

And of course we aren’t cynics in the modern sense of the term either, i.e. we don’t follow Groucho Marx when he famously said “Whatever it is, I’m against it!” (Of course, he was joking, though that seems to be the motto of the current Republican party.)

Now, you would think that few people would object to the pretty straightforward idea (which can actually be formalized using a Bayesian statistical framework) that one’s beliefs should be adjusted to the available evidence. You would also think it hard to disapprove of the corollary that — since the evidence keeps changing and our assessment of it is perennially imperfect — one ought not to subscribe to absolute beliefs of any sort (except in logic and mathematics: 2+2=4 regardless of any “evidence”). Boy, would you be wrong!
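
To see the idea at work, here is a minimal sketch in Python (the priors and likelihoods are invented purely for illustration): the same piece of evidence that makes an ordinary claim credible barely budges an extraordinary one, which is Sagan’s dictum expressed in arithmetic.

```python
def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """Bayes' theorem: P(H|E) = P(E|H) P(H) / P(E)."""
    p_evidence = (p_evidence_if_true * prior
                  + p_evidence_if_false * (1 - prior))
    return p_evidence_if_true * prior / p_evidence

# An ordinary claim (prior 0.5) plus evidence ten times likelier
# if the claim is true yields strong belief:
print(posterior(0.5, 0.10, 0.01))    # ~0.91

# The same evidence applied to an extraordinary claim
# (prior 1 in 100,000) barely moves the needle:
print(posterior(1e-5, 0.10, 0.01))   # ~0.0001
```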

For one thing, the positive skeptic finds herself more often (in fact much more often) than not in a position to (provisionally) reject a given claim rather than (provisionally) accept it. Why, you might ask? Shouldn’t the expected likelihood of the truth of a claim a priori be something like 50-50, in which case the skeptic should accept and reject beliefs in roughly equal measure? No, as it happens, things aren’t quite that nicely symmetrical.

One way to understand this is to think about a simple concept that everyone learns in statistics 101 (everyone who takes statistics 101, that is): the difference between type I and type II error. A type I error is the one you make if you reject a null hypothesis when it is in fact true. In medicine this is called a false positive: for instance, you are tested for HIV and your doctor, based on the results of the test, rejects the default (null) hypothesis that you are healthy; if you are in fact healthy, the good doctor has committed a type I error. It happens (and you will spend many sleepless nights as a consequence).

A type II error is the converse: it takes place when one accepts a null hypothesis which is in fact not true. In our example above, the doctor concludes that you are healthy, but in reality you do have the disease. You can imagine the dire consequences of making a type II error, also known as a false negative, in that sort of situation. (The smart asses among us usually add that there is also a type III error: not remembering which one is type I and which type II...)
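
To get a feel for how often the two errors bite, here is a short sketch (the prevalence, sensitivity, and specificity figures below are made up for illustration, not real HIV statistics):

```python
def error_counts(prevalence, sensitivity, specificity, population=1_000_000):
    """Expected type I and type II errors for a screening test."""
    sick = prevalence * population
    healthy = population - sick
    false_pos = healthy * (1 - specificity)  # type I: healthy flagged as sick
    false_neg = sick * (1 - sensitivity)     # type II: sick flagged as healthy
    return false_pos, false_neg

# Illustrative numbers: 0.3% prevalence, 99% sensitivity, 98% specificity.
fp, fn = error_counts(0.003, 0.99, 0.98)
print(f"false positives: {fp:,.0f}")  # 19,940 sleepless-night cases
print(f"false negatives: {fn:,.0f}")  # 30 missed cases
```

Notice that with a rare condition the false positives (19,940 in this toy example) swamp the true positives (2,970), which is why a single positive screen is never the end of the story.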

What’s that got to do with skepticism? Whenever confronted with a new claim, it’s reasonable to think that the null hypothesis is that the claim is not true. That is, the default position is one of skepticism. Now the tricky part is that the rates of type I and type II errors are inversely related: push one down and you automatically push the other up (there is only one way out of this trade-off, and that’s to do the hard work of collecting more data).

So if you decide to be conservative (statistically, not politically), you will raise the bar for evidence, thereby lowering the chances of rejecting the null hypothesis and accepting the new belief when it is not in fact true. Unfortunately, you are also simultaneously increasing your chances of accepting the null and rejecting the new belief when in fact the latter is true.
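
The trade-off is easy to see in a toy power calculation. The sketch below uses a one-sided z-test with an invented effect size and sample sizes: as α (the type I rate you tolerate) is pushed down, β (the type II rate) climbs, and only more data brings both down together.

```python
import math
from statistics import NormalDist

def type_ii_rate(alpha, effect, sigma, n):
    """β for a one-sided z-test of H0: μ = 0 against H1: μ = effect."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha)  # the evidence bar set by α
    return z.cdf(z_crit - effect * math.sqrt(n) / sigma)

for alpha in (0.10, 0.05, 0.01, 0.001):
    beta = type_ii_rate(alpha, effect=0.5, sigma=1.0, n=20)
    print(f"α = {alpha:<6} β = {beta:.3f}")   # β rises from ~0.17 to ~0.80

# The one escape hatch: collect more data.
print(type_ii_rate(0.001, effect=0.5, sigma=1.0, n=100))  # β ≈ 0.03
```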

Human beings are thus bound to navigate the treacherous waters between Scylla and Charybdis, between being too skeptical and too gullible. And yet, the two monsters are not of equal strength: if we accept the assumption that there is only one reality out there, then the number of false hypotheses must be inordinately higher than the number of correct ones. In other words, there must be many more ways of being wrong than of being right. Take the discovery that DNA is a double helix (the true answer, as far as we know). It could have been a single helix (like RNA), or a triple one (as Linus Pauling suggested before Watson and Crick got it right). Or it could have been a much more complicated molecule, with 20 helices, or 50. Or it might not have been a helical structure at all. And so on.

So when trying to steer the course between skepticism and gullibility, it makes sense to stay much closer to the Scylla of skepticism than to bring our ship of beliefs within reach of the much larger and more menacing Charybdis of gullibility. The net result of this prudent policy, however, is that even positive skeptics are bound to reject a lot of beliefs, with the side effect that their popularity plunges. As I said, it’s a lonely art, but you can take comfort in the psychological satisfaction of being right much more often than not. This will not get you many girls or drinking buddies, though.

(Caveat: I have actually argued in a technical paper that we should abandon the whole idea of null hypotheses and embrace more sophisticated approaches to the comparison of competing explanations. But that’s another story, and it doesn’t change the basic reasoning of this post.)