“John is a man. All men are mortal. Therefore, John is mortal.” This argument, from two premises to a conclusion, is a deductive argument. The conclusion logically follows from the premises; equivalently, it is logically impossible for the conclusion to be false if the premises are true. Mathematics is the primary domain of deductive argument, but our everyday and scientific lives are filled mostly with another kind of argument.

Not all arguments are deductive, and ‘inductive’ is the adjective labeling any non-deductive argument. Induction is the kind of argument in which we typically engage.

“John is a man. Most men die before their 100th birthday. Therefore, John will die before his 100th birthday.” The conclusion of this argument can, in principle, be false while the premises are true; the premises do not logically entail the conclusion that John will die before his 100th birthday. It is nevertheless a pretty good argument.

It is through inductive arguments that we learn about our world. Any time a claim about infinitely many things is made on the evidence of only finitely many things, this is induction; e.g., when you draw a best-fit line through data points, your line consists of infinitely many points, and thus infinitely many claims. Generalizations are kinds of induction. Even more generally, any time a claim is made about more than what is given in the evidence itself, one is engaging in induction.

It is with induction that courtrooms and juries grapple. When simpler hypotheses are favored, or when hypotheses that postulate unnecessary entities are disfavored (Occam’s Razor), this is induction. When medical doctors diagnose, they are doing induction. Most learning consists of induction: seeing a few examples of some rule and eventually catching on. Children engage in induction when they learn the particular grammatical rules of their language, or when they learn to believe that objects going out of sight do not go out of existence. When rats or pigeons learn, they are acting inductively. On the basis of retinal information, the visual system generates a percept, its guess about what is in the world in front of the observer, despite the fact that there are always infinitely many ways the world could be that would lead to the same retinal information; the visual system thus engages in induction. And if ten bass are pulled from a lake known to contain at most two kinds of fish—bass and carp—it is induction when one thinks the next fish pulled will be a bass, or that the probability that the next will be a bass is more than 1/2.

Probabilistic conclusions are still inductive conclusions when the premises do not logically entail them, and there is nothing about having fished ten or one million bass that logically entails that a bass is more probable on the next pull, much less some specific probability that the next will be a bass. It is entirely possible, for example, that the probability of a bass is now decreased: “it is about time for a carp.”

Although we carry out induction all the time, and although all our knowledge of the world depends crucially on it, there are severe problems in our understanding of it.

What we would like to have is a theory that can do the following: The theory would take as input (i) a set of hypotheses and (ii) all the evidence known concerning those hypotheses. The theory would then assign each hypothesis a probability value quantifying the degree of confidence one logically ought to have in the hypothesis, given all the evidence. This theory would interpret probabilities as logical probabilities (a notion due to Carnap) and might be called a theory of logical induction, or a theory of logical probability. (Logical probability can be distinguished from other interpretations of probability. The subjective interpretation, for example, takes a probability to be how confident a person actually is in the hypothesis, as opposed to how confident the person ought to be. The frequency interpretation takes a probability to be, roughly, the relative frequency with which the hypothesis has been realized in the past.)

Such a theory would tell us the proper way to proceed with our inductions, i.e., it would tell us the proper “inductive method.” [An inductive method is a way by which evidence is utilized to determine a posteriori beliefs in the hypotheses. Intuitively, an inductive method is a box with evidence and hypotheses as input, and a posteriori beliefs in the hypotheses as output.]
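
To make the box picture concrete, here is a minimal sketch, in Python, of one familiar instance of such a box: Bayesian updating. Everything in it is my own illustration rather than anything from the chapter, and notice that the box only runs once it is handed a set of prior weights, which is exactly the kind of a priori input whose justification is at issue in what follows.

```python
# A minimal sketch of an "inductive method" as a box: hypotheses and evidence
# go in, a posteriori beliefs come out. This particular box is Bayesian
# updating; the prior weights it demands are an extra, undefended input.

def inductive_method(priors, likelihood, evidence):
    """priors: dict mapping each hypothesis to an a priori weight.
    likelihood: function (evidence, hypothesis) -> P(evidence | hypothesis).
    Returns a dict mapping each hypothesis to its a posteriori belief."""
    unnormalized = {h: priors[h] * likelihood(evidence, h) for h in priors}
    total = sum(unnormalized.values())
    return {h: weight / total for h, weight in unnormalized.items()}

# Illustrative hypotheses about the lake (my own numbers): "mostly bass"
# means 90% of the fish are bass, "mostly carp" means only 10% are.
priors = {"mostly bass": 0.5, "mostly carp": 0.5}   # the a priori step
likelihood = lambda n_bass, h: (0.9 if h == "mostly bass" else 0.1) ** n_bass
print(inductive_method(priors, likelihood, evidence=10))  # ten bass pulled, no carp
```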

When we fish ten bass from the lake, we could use the theory to tell us exactly how confident we should be in the next fish being a bass. The theory could be used to tell us whether and how much we should be more confident in simpler hypotheses. And when presented with data points, the theory would tell us which curve ought to be interpolated through the data.

Notice that the kind of theory we would like to have is a theory about what we ought to do in certain circumstances, namely inductive circumstances. It is a prescriptive theory we are looking for. In this way it is actually a lot like theories in ethics, which attempt to justify why one ought or ought not do some act.

Now here is the problem: No one has yet been able to develop a successful such theory!

Given a set of hypotheses and all the known evidence, it sure seems as if there is a single right way to inductively proceed. For example, if all your data lie perfectly along a line—and that is all the evidence you have to go on—it seems intuitively obvious that you should draw a line through the data, rather than, say, some curvy polynomial passing through each point. And after seeing a million bass in the lake—and assuming these observations are all you have to help you—it has just got to be right to start betting on bass, not carp.
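
That intuition is hard to shake, but it is worth seeing how little the evidence itself settles. Here is a small sketch, entirely my own illustration with made-up points, of a curvy rival hypothesis that fits the same data exactly yet disagrees with the line about every unobserved point.

```python
import math

xs = [0, 1, 2, 3, 4]                 # made-up observed x-values
ys = [2 * x + 1 for x in xs]         # the evidence: points lying on a line

line = lambda x: 2 * x + 1
# A rival hypothesis: the same line plus a "bump" term that vanishes at
# every observed x, so it agrees with the evidence point for point.
wiggle = lambda x: 2 * x + 1 + math.prod(x - xi for xi in xs)

assert all(line(x) == y and wiggle(x) == y for x, y in zip(xs, ys))
print(line(5), wiggle(5))            # 11 vs. 131: they part ways off the data
```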

Believe it or not, however, we are still not able to defend, or justify, why one really ought to inductively behave in those fashions, as rational as they seem. Instead, there are multiple inductive methods that seem to be just as good as one another, in terms of justification. (Hume is the philosopher who made this problem most apparent.)

The hypothesis set and evidence need to be input into some inductive method in order to obtain beliefs in light of the evidence. But the inductive method is, to this day, left variable. Different people can pick different inductive methods without violating any mathematical laws, and so come to believe different things even though they have the same evidence before them.

But do we not use inductive methods in science, and do we not have justifications for them? Surely we are not picking inductive methods willy-nilly!

In order to defend inductive methods as we actually use them today, we make extra assumptions, assumptions going beyond the data at hand.

For example, we sometimes simply assume that lines are more a priori probable than parabolas (i.e., more probable before any evidence is taken into account), and this helps us conclude that a line through the data should be given greater confidence than the other curves. And for fishing at the lake, we sometimes make an a priori assumption that, if we pull n fish from the lake, the probability of getting n bass and no carp is the same as the probability of getting n-1 bass and one carp, which is the same as the probability of getting n-2 bass and two carp, and so on; this assumption makes it possible to begin to favor bass as more and more bass, and no carp, are pulled from the lake.
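
To see what that lake assumption buys, here is a small sketch, my own illustration rather than anything from the chapter. If we add the further assumption that each composition's probability is split evenly among the orderings that could realize it, the stated assumption yields Laplace's rule of succession: after seeing k bass in n pulls, the probability that the next fish is a bass comes out to (k+1)/(n+2), so ten straight bass give 11/12.

```python
from fractions import Fraction
from math import comb

def seq_prob(k_bass, m_pulled):
    """A priori probability of one particular sequence of m pulls containing
    k bass, under the assumption in the text (each of the m+1 compositions is
    equally probable) plus the added assumption that a composition's
    probability is divided evenly among its orderings."""
    return Fraction(1, m_pulled + 1) / comb(m_pulled, k_bass)

def prob_next_bass(k_bass, n_pulled):
    """P(next pull is a bass | a particular sequence with k bass in n pulls)."""
    return seq_prob(k_bass + 1, n_pulled + 1) / seq_prob(k_bass, n_pulled)

for n in (1, 10, 100):
    print(n, prob_next_bass(n, n))   # 2/3, 11/12, 101/102: (n+1)/(n+2) after n straight bass
```

The rule drops out only after the a priori assumption has been put in by hand.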

Making different a priori assumptions would, in each case, lead to different inductive methods, i.e., lead to different ways of assigning inductive confidence values, or logical probabilities, to the hypotheses.
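
For instance, again as my own illustration, suppose one instead took every particular sequence of pulls, rather than every composition, to be equally probable a priori. Nothing mathematical rules this out, and yet the resulting inductive method never learns from experience at all; it is essentially one of the confirmation functions Carnap himself examined.

```python
from fractions import Fraction

# A rival a priori assumption: every particular sequence of m pulls is equally
# probable (there are 2**m of them), rather than every composition.

def seq_prob(m_pulled):
    return Fraction(1, 2 ** m_pulled)    # the same for every sequence

def prob_next_bass(n_pulled):
    """P(next pull is a bass | any particular observed sequence of n pulls)."""
    return seq_prob(n_pulled + 1) / seq_prob(n_pulled)

print(prob_next_bass(10))        # 1/2, even after ten straight bass
print(prob_next_bass(1000))      # still 1/2 after a thousand straight bass
```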

But what justifies our making these a priori assumptions? That’s the problem. If we had a theory of logical probability—the sought-after kind of theory I mentioned earlier—we would not have to make any such undefended assumption. We would know how we logically ought to proceed in learning about our world. By making these a priori assumptions, we are just a priori choosing an inductive method; we are not bypassing the problem of justifying the inductive method.

I said earlier that the problem is that “no one has yet been able to develop a successful such theory.” This radically understates the dilemma. It suggests that there could really be a theory of logical probability, and that we have just not found it yet.

It is distressing, but true, that there simply cannot be a theory of logical probability! At least, not a theory that, given only the evidence and the hypotheses as input, outputs the degrees of confidence one really “should” have. 

The reason is that to defend any one way of inductively proceeding requires adding constraints of some kind—perhaps in the form of extra assumptions—constraints that lead to a distribution of logical probabilities on the hypothesis set even before any evidence is brought to bear. That is, to get induction going, one needs something equivalent to a priori assumptions about the logical probabilities of the hypotheses. 

But what justifies saying that these hypotheses, a priori, simply must have certain degrees of confidence? Any theory of logical probability aiming to answer once and for all how to inductively proceed must essentially make an a priori assumption about the hypotheses, and this is just what we were hoping to avoid with our theory of logical probability.

That is, the goal of a theory of logical induction is to explain why we are justified in our inductive beliefs, and it does us no good to simply assume inductive beliefs in order to explain other inductive beliefs; inductive beliefs are what we are trying to explain!



=====



Adapted from chapter 3 of my first book, The Brain from 25,000 Feet. In that chapter I present a mathematico-philosophical “solution” to the riddle of induction (a theory published jointly with Tim Barber). The full story can be read here: http://www.changizi.com/ChangiziBrain25000Chapter3.pdf