In 1955, Nelson Goodman published his celebrated reevaluation of Hume’s problem of induction.1 He argued that Hume never attempted to find deductively sufficient justification for inductive predictions. Rather, Hume’s account of constant conjunction and the like was meant to specify the rules of induction as a new kind of reasoning distinct from the kind used in deduction. Goodman went on to argue that Hume’s real problem was not only misunderstood but also unsolved. He rejected Hume’s generalization principle as an insufficient explanation of the predictive practices it attempts to describe. The generalization principle equally supports intuitively unacceptable hypotheses, most of which contain what are called “goodmanized” or “ill-behaved” predicates. Goodman calls the problem of distinguishing ill- from well-behaved predicates the “new riddle of induction.” In the same publication, Goodman offered a solution to this riddle that appeals to the entrenchment (the frequency of use in prediction) of the predicates used in a hypothesis. Like Hume, Goodman saw a deep and interesting problem for which he offered an insufficient solution. In this paper, I argue that we employ something like the so-called law of parsimony to reject hypotheses with ill-behaved predicates, and that this is why hypotheses with entrenched predicates overrule others.
Hume’s generalization principle runs something like this: predicting the presence of some property A in a thing B is supported to the degree that A and B have been observed together, so long as B has not been observed without A. For example, we feel justified in predicting that the next emerald will be green because we have seen many green emeralds and have never seen an emerald that is not green. This principle describes what we do in inductive practice. However, Goodman realized that it is not strong enough to rule out what we do not do in inductive practice. What if we chose “grue” rather than green to plug in for A in the formula above? Grue things are defined as either green and observed before some time t, or blue and not observed before t. Based on all of our observations of emeralds and the generalization principle, we could correlate emeralds with grue just as strongly as we can correlate them with green. The trouble is that if we did form the belief that all emeralds are grue, we would predict that the first emerald observed after t will be blue.2 Since the generalization principle cannot rule out the prediction that the next emerald will be blue, it is insufficient. We need to add something to our account of induction that rules out predictions we would not make.
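The definition just given can be put in first-order notation (a sketch in my own symbols, not Goodman’s; O_t(x) abbreviates “x is observed before t”):

```latex
\forall x\,\bigl[\mathrm{Grue}(x) \leftrightarrow
  \bigl(\mathrm{Green}(x) \wedge O_t(x)\bigr) \vee
  \bigl(\mathrm{Blue}(x) \wedge \neg O_t(x)\bigr)\bigr]
```

Every emerald observed so far satisfies both Green(x) and O_t(x), and hence satisfies Grue(x); the two hypotheses are therefore equally supported by the evidence to date, yet they diverge on any emerald first observed after t.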
Now, the new problem of induction lies in distinguishing reasonable hypotheses from intuitively unacceptable ones. The unacceptable hypotheses contain ill-behaved predicates, so we must say something about what makes them ill-behaved without reference to intuition. Goodman attempts to solve the problem by pointing to what he calls the entrenchment of predicates. Entrenchment is given by the frequency with which a predicate has been used in prediction. A hypothesis is then ruled out if it uses a predicate that could have been replaced with a better entrenched one. At first glance, this account seems to disallow innovation in scientific theories, since no new term would be entrenched and all would thereby be ruled out. Goodman therefore supplements the account by saying that a predicate may inherit entrenchment from parent predicates and that all coextensive predicates have equal entrenchment. Thereby, an unfamiliar predicate may actually gain entrenchment before anyone ever uses it. Goodman shows his criteria to be effective in rejecting ill-behaved predicates in a variety of examples.3
Goodman’s solution may exactly distinguish ill- from well-behaved predicates, but it (intentionally) says nothing about why predicates become entrenched.4 In fact, he maintains that predicates are well-behaved because they are well entrenched rather than the other way around. Goodman would have us think that it just so happened that green is the entrenched predicate, and that it could have turned out that grue was better entrenched, in which case we would think green a weird and unnatural construction. This conclusion seems as unnatural as the ill-behaved predicates themselves. It is true that grue is less entrenched than green, but it is so only because grue never could have become as deeply entrenched as green is now; in fact, it can never become entrenched at all. We need a description of why grue seems as weird to us now as it would have to the first person ever to think to call something green.
Such a description requires a correct understanding of how induction works. Induction is reasoning from something to whatever caused it, but we are not limited to the thing itself in the evidence we can use in inferring its cause. It is true that the hypotheses “All emeralds are green” and “All emeralds are grue” are equally supported by any set of green emeralds, but we use much more than the set of green emeralds to judge the legitimacy of any hypothesis we might generalize from them.5 We also take into account certain other concepts that we have devised from observation. In particular, we bring to bear something like what is commonly called the law of parsimony when judging hypotheses. Should a new hypothesis require too many new assumptions, we are likely to reject it. In induction, our explanation must also account for every other observation judged to work in a way similar to the phenomenon to be explained.
We also gain the concept of a mechanism from experience. Here “mechanism” denotes that which we posit to explain regularities in the world. We recognize some group of events as a regularity, such as things falling to the earth, and then we posit a mechanism that is supposed to have caused all of these events. Broadly speaking, science tries to determine which mechanisms are best to use in describing regularities. For example, Aristotle posited a mechanism, which he called earth, to explain why things move toward the earth. Modern science has since replaced Aristotle’s earth mechanism with that of gravity. Scientific laws are meant to calculate and predict the way a mechanism works. Whether any mechanisms actually exist, and whether our scientific practices are zeroing in on the correct mechanisms if they do, are beside the point here. Since we are only trying to account for why we judge certain predicates to be ill-behaved, we only need to see that we use the concept of a mechanism when making that judgement.
By bringing the concept of a mechanism into our judgements of hypotheses, we see that much more is implied by the hypothesis “All emeralds are grue” than we are willing to accept. In order for the set of emeralds to be a subset of the things that are grue, we must also accept the following: 1) Some emeralds are blue. 2) We will observe all green emeralds before t. 3) We will not observe any blue emerald before t. For 2 and 3, we must posit at least one mechanism that does not come into play with the hypothesis “All emeralds are green.” We would need this mechanism to explain why all green emeralds must be observed before any blue one can appear. For 1, given the current understanding of how we see color, we must posit at least one extra mechanism that affects the molecular structure of emeralds, or the wavelength they emit, or our perceptual organs.
At this point, this solution is basically identical to that of entrenchment. We judge grue as requiring too many mechanisms only in comparison to the mechanisms required for green. If the account stopped here, it would be vulnerable to the objection that an account of the acceptability of predicates based on the mechanisms involved cannot show that some predicates could never have been used. In that case, we would have only pushed the problem back a step, and we would now need to account for why we posit the mechanisms we do. Why couldn’t we have posited the grue mechanism instead of the green one? It might seem possible that, as Goodman implies,6 if we start with the concept of grue and its counterpart bleen, then green (as grue if observed before time t, and bleen otherwise) would seem like the ill-behaved predicate. Provisionally, we can say that we cannot define green in terms of grue and bleen because grue is already defined in terms of green and blue. But if it were somehow possible to arrive at the definition of grue without reference to green or blue, say if someone could pre-theoretically pick out grue things without thinking of or even knowing what green and blue are, this objection would have weight, and entrenchment would be the best theory we could come up with.
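The symmetry Goodman has in mind can be made explicit. If grue and bleen are taken as primitive, green and blue are definable from them by exactly the same logical form as the definitions that run the other way (again a sketch in my own notation, with O_t(x) for “x is observed before t”):

```latex
\mathrm{Green}(x) \leftrightarrow \bigl(\mathrm{Grue}(x) \wedge O_t(x)\bigr) \vee \bigl(\mathrm{Bleen}(x) \wedge \neg O_t(x)\bigr)\\
\mathrm{Blue}(x) \leftrightarrow \bigl(\mathrm{Bleen}(x) \wedge O_t(x)\bigr) \vee \bigl(\mathrm{Grue}(x) \wedge \neg O_t(x)\bigr)
```

Formally, then, neither pair of predicates is simpler than the other; any asymmetry must lie in how the predicates could be learned and applied, not in their logical form.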
It is not possible to understand grue without first understanding green and blue. Suppose we met someone who was able to pick out grue things pre-theoretically, and we put her in front of a table of green and blue things. She would pick out all the things we call green and call them grue. If we showed her something we would call blue, she would have to say something like “You showed me that too early for it to be grue. That thing is bleen.” Assuming she thought t had not yet arrived, all things we would call green, she would call grue, and all things we would call blue, she would call bleen. From this it is clear that, even without knowing the words “green” and “blue,” she would still have to be able to distinguish things based on their visual appearances. She would also have to be able to compare the current time to t. Thus, it is impossible for someone to pre-theoretically distinguish grue things without being able to distinguish green and blue. This paper has focused on the rejection of the predicate grue alone. The same comments could be made about any other predicate of the form “X if observed before t, and Y otherwise,” where X and Y are mutually exclusive predicates. I suspect the same comments would also apply to any ill-behaved predicate.
Just as it is at least conceivable that someone could pick out all things falling into a category without explicitly defining that category, entrenchment may exactly predict the predicates we judge to be ill-behaved, but that does not mean it is the rule by which we judge them. Parsimony seems to match more closely the steps of reasoning we use in judging the legitimacy of hypotheses. Most likely, there are several inductive steps involved in such a judgement, and other principles or concepts are employed along with the mechanism and the law of parsimony. Should the rules of induction ever be made explicit, some reference to parsimony of mechanisms would surely be involved.
2 In this paper, I will only be concerned with this specific kind of prediction that we want to rule out, namely those involving some predicate defined as “X if observed before t, and Y otherwise.” Application of this solution to other ill-behaved predicates is beyond the scope of the current paper.