In this post I’m going to examine Plantinga’s EAAN (Evolutionary Argument Against Naturalism), which he presented in the second part of his debate with Draper.
The argument begins by pointing out that (according to most versions of naturalism) a “belief” is a structure or persistent process in the brain; we might as well assume that it’s a structure. But not just any structure: this structure must have certain neurophysiological (NP) properties to qualify as a belief. And since it is a belief, it must be a belief that p for some proposition p. We can call p the “content” of this belief. Since we’re distinguishing between the belief itself and its content, let’s denote the belief itself by P. (So P is the belief, the NP structure, and p is its content.)
Now Plantinga considers two possibilities: that P affects behavior by virtue of its content p and that it doesn’t. This is the first point, IMHO, where his argument goes off the rails.
To illustrate the point, a computer analogy might be helpful. Consider a computer running a chess-playing program. We might naturally say that at a certain point it “decides” to “move” the bishop to king’s knight six because it “believes”, based on its calculations, that this move will force mate in four. But what does this mean? Surely the computer can’t properly be said to “believe” anything, or to be “trying” to win? Well, actually it can be said to do these things (whether it does so “properly” is just a matter of linguistic convention). To be sure, at the lowest level of analysis all that’s happening in the computer is that a great number of elementary particles are moving and interacting with one another in accordance with the laws of physics. A higher level of analysis would refer to RAM, hard drives, CPUs, and the details of the way they implement the program code. Only at the next level of interpretation would we begin to refer to chess, to selecting “moves” based on criteria designed to maximize the computer’s chances of “winning”, and so on.

A perfectly accurate description of what’s going on inside the computer is possible in terms of either of the two lower levels. It is perfectly correct to say that the computer’s actions are caused by the operation of the basic laws of physics, or that they are caused by running that particular code (with the specified inputs) on a computer with that particular structure. Neither of these causal explanations brings in the notion of “chess” or “winning”. At the next higher level, however, it makes perfectly good sense to say that the computer’s “decision” to move the bishop is based on its “belief” that this move will force mate, and it is equally natural to say that the “content” of this belief is that the move in question will force mate in four. But this is an interpretation or description of the computer’s internal state, and so of course it plays no causal role, strictly speaking, in the computer’s behavior. So in this sense it’s perfectly correct to say that the computer’s moving the bishop to king’s knight six is caused only by the “syntactic” properties of its “belief” (i.e., the structure of the code being executed, the CPU, etc.) and not by the fact that the “content” of this belief is that this move will force mate in four.
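To make the levels-of-description point concrete, here is a minimal sketch in Python. Everything in it is invented for illustration: the toy “engine”, the sentinel score for mate, the field names. The point is only that the machine’s behavior is caused by code operating on a structure, while the “content” is our interpretation of that same structure:

```python
from dataclasses import dataclass

# Low-level description: a "belief" is just a structure in memory.
# Nothing in it mentions chess, winning, or mate.
@dataclass
class EngineState:
    best_move: str    # e.g. "Bg6" -- just a string the search produced
    eval_score: int   # search score; 32000 is this toy engine's "mate" sentinel

def choose_move(state: EngineState) -> str:
    # Causal story #1: the move is produced by this code operating on
    # this structure. No "content" appears anywhere in the causal chain.
    return state.best_move

def describe(state: EngineState) -> str:
    # Causal story #2 (higher level): *we* interpret the same structure
    # as a belief with a content. The interpretation adds nothing physical.
    if state.eval_score >= 32000 - 10:
        return f"The engine believes {state.best_move} forces mate."
    return f"The engine believes {state.best_move} is its best option."

state = EngineState(best_move="Bg6", eval_score=32000 - 4)
print(choose_move(state))   # what the machine actually does
print(describe(state))      # our higher-level description of why it did it
```

Both print statements are about one and the same structure; the second just describes it at the level where “belief” and “content” live.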
But in another sense it is perfectly valid to say that the computer’s “moving” the bishop to king’s knight six is caused by the “content” of this belief – i.e., by the fact that the belief in question is a belief that this move will force mate in four. This is simply a higher-level interpretation or description of what’s going on than the other two; this in no way makes it less valid. In fact, for most practical purposes it is a much better explanation, because it identifies the crucial aspects of the computer’s internal state that caused it to do what it did, and thus makes its “actions” far more comprehensible.
In fact, we use this kind of causal explanation all the time. For example, we say that Mrs. Brown’s death was “caused” by pneumonia; we do not say that she died because of complex biochemical processes (which we might then proceed to describe in detail for each cell) that were initiated by the introduction of a number of complex unicellular organisms (which we might also describe in detail) into her body. One might say that, strictly speaking, the fact that she died of pneumonia is merely an interpretation or description of these events, and not the events themselves, and that, since no one ever died from an interpretation or description, Mrs. Brown didn’t “really” die of pneumonia after all. Or, to borrow Plantinga’s terminology, we might say that it was the syntactic properties of the disease (i.e., the detailed biochemical processes that constituted it) that caused her death, and not the “semantic” properties, such as the fact that it was pneumonia and not, say, a common cold. But none of this makes it invalid to say that her death was caused by pneumonia.
So whether we say that an action (such as the computer’s moving the bishop to king’s knight six) was caused by the content of a belief or by the belief itself is (in a great many cases) purely a matter of point of view, or better yet, of linguistic convention.
Now let’s return to Plantinga’s dichotomy: either a belief’s causal efficacy is due to its content, or it is not. In a strict sense the content of a belief does not play any causal role (since it has no physical existence; it’s merely an interpretation or description of the belief). But in ordinary language we attribute causal roles to such things all the time, and it’s no more of an abuse of language to say that the content of our beliefs plays a causal role in our actions than it is to say that Mrs. Brown’s death was caused by pneumonia.
Before proceeding further, perhaps we should say a little more about what it means to say that the content of belief P is the proposition p. As the computer analogy illustrated, the content of a belief is in some sense an interpretation or description of that belief. But what does that mean? Well, if we were talking about a belief held by a dog, for example, the matter would be complicated. We’d have to look at the behavior of the dog as a whole and infer beliefs from it. In fact, because of this it may be argued that we can’t properly apply the term “belief” to a dog in the literal sense. But humans (at least normal, adult humans) differ from dogs and other animals in a crucial respect: they have language. Why does this matter? Because propositions are ordinarily expressed in terms of language. Thus, suppose that Smith is disposed to say, under appropriate circumstances (specifically, conditions in which his intent is to convey information rather than to mislead), “That bridge is dangerous”, or to answer the question, “Do you believe that that bridge is dangerous?” with “yes”. The thing that creates his disposition to say these things is, of course, the NP structure that we call his belief that the bridge is dangerous, and the content of this belief is the proposition “That bridge is dangerous”. More generally, if someone is disposed to affirm a proposition under conditions where his intent is to convey information, then he has the belief that this proposition is true; and if he has this belief he will be disposed to affirm that proposition under such conditions. Note that I’m not trying to suggest a foolproof way to determine whether a person has a certain belief (or rather a belief with a certain content), but merely to indicate what it means to say that he has it; what kind of state of affairs would be such that it would be appropriate to say that he has it, whether or not we are able to reliably determine whether such a state of affairs actually obtains.
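One way to picture the distinction between the belief (the structure) and its content (a proposition) is a toy data model. This is my own sketch, not a claim about psychology or neuroscience; the class, its fields, and the dispositional test are all invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Belief:
    # The "structure": in a brain this would be an NP state; here it is
    # simply an object in memory. Its content is the proposition p that
    # the structure is interpreted as expressing.
    content: str

    def disposed_to_affirm(self, proposition: str, intent_honest: bool) -> bool:
        # The dispositional test from the text: under conditions where the
        # agent's intent is to convey information rather than to mislead,
        # having the belief just is being disposed to affirm its content.
        return intent_honest and proposition == self.content

smith = Belief(content="That bridge is dangerous")
print(smith.disposed_to_affirm("That bridge is dangerous", intent_honest=True))   # True
print(smith.disposed_to_affirm("That bridge is dangerous", intent_honest=False))  # False
```

The object itself is what does the causal work in this model; the string it carries is what we point to when we describe that work.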
With beliefs and the contents of beliefs understood in this way, it’s obviously an analytic truth that the contents of a belief “correspond” to the belief itself in the way that common sense would suggest: a belief whose content is “That bridge is dangerous” will tend to produce behavior (including verbal “behavior”) of just the sort that one would expect of a person who believes that the bridge in question is dangerous.
Now Plantinga seems to think that, in order for the contents of beliefs to be subject to natural selection (so that beliefs with true contents are more likely to be selected), those contents would have to have a causal influence on our actions. But that’s not true. Take that dangerous bridge, for example. (We’ll assume here that it really is dangerous.) Natural selection clearly could select for cognitive processes (CPs) that tend to produce (under the conditions that actually obtain) beliefs whose content is that the bridge is dangerous. For example, those whose CPs tend to yield beliefs with the content that the bridge is safe might tend to be at a severe reproductive disadvantage as a result of dying at an early age. So while natural selection cannot select directly on the basis of the contents of beliefs, it can certainly select for cognitive processes that tend to produce beliefs whose contents are true. (Note that we’re talking here only about whether it’s possible for natural selection to select for beliefs with true contents – or rather, for CPs that tend to produce such beliefs. We’ll get to the question of whether this is actually likely to happen later.)
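Here is a toy simulation of that point; it is my own sketch with made-up parameters, not anything drawn from Plantinga or Draper. Selection in this model never “sees” the content of any belief. It acts only on behavior and survival, yet the reliability of the belief-forming process – its tendency to produce true contents – still climbs:

```python
import random

random.seed(0)

# Toy model (invented numbers): each agent's cognitive process has a
# "reliability" -- the probability that it produces a TRUE belief about
# whether the bridge is dangerous. The bridge really is dangerous.
def generation(pop, trials=5, death_risk=0.2):
    survivors = []
    for reliability in pop:
        alive = True
        for _ in range(trials):
            true_belief = random.random() < reliability
            # A true belief ("dangerous") makes the agent avoid the bridge.
            # A false belief ("safe") makes the agent cross, and maybe die.
            if not true_belief and random.random() < death_risk:
                alive = False
                break
        if alive:
            # Two offspring, each with slightly mutated reliability.
            survivors += [min(1.0, max(0.0, reliability + random.gauss(0, 0.02)))
                          for _ in range(2)]
    return survivors[:200]  # keep the population size bounded

pop = [0.5] * 200  # start with coin-flip belief-forming processes
for _ in range(30):
    pop = generation(pop)
print(f"mean reliability after 30 generations: {sum(pop) / len(pop):.2f}")
```

Nothing in the code lets contents “push” on anything; selecting for true contents simply falls out of selection on behavior.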
With this understanding, it’s clear that the question of whether the content of a belief “enters the causal chain leading to behavior” is a red herring. Of course it doesn’t – not strictly speaking anyway. The content of a belief is a proposition, and propositions do not enter into causal chains. But it doesn’t matter. Natural selection can select for cognitive processes that produce beliefs with true contents, and this is what matters.
Now it’s time to turn to the next question that Plantinga considers, which is, as he puts it, whether “beliefs are connected with behavior in such a way that false belief would produce maladaptive behavior, behavior that would tend to reduce the probability of the believer's surviving and reproducing”. Plantinga argues that the answer is “no”. Specifically, he argues that the proportion of true beliefs among adaptive beliefs would not be expected to be especially high. He points out that
“For every true adaptive belief it seems we can easily think of a false belief that leads to the same adaptive behavior”, and concludes that “The fact that my behavior (or that of my ancestors) has been adaptive, therefore, is at best a third-rate reason for thinking my beliefs mostly true and my cognitive faculties reliable--and that is true even given the commonsense view of the relation of belief to behavior. So we can't sensibly argue from the fact that our behavior (or that of our ancestors) has been adaptive, to the conclusion that our beliefs are mostly true...”
Now this argument is downright absurd. In the first place, it’s just not true that “for every true adaptive belief [there is] a false belief that leads to the same adaptive behavior”. There are so many counterexamples to this that it hardly seems worthwhile to belabor the point. What Plantinga probably means is that for every true belief, and every specific situation in which it might affect behavior, there is a false belief that would produce the same behavior. But this is hardly the same thing! A single true belief can produce adaptive behavior in a wide variety of situations – behavior that could only be “reproduced” by false beliefs if a different false belief were postulated for each one of those situations. More importantly, perhaps, most of our behavior is not of the simple “reactive” kind (like running away from a tiger) that Plantinga likes to talk about, but is “goal-oriented”. In order to achieve even a simple goal it is generally necessary to act on the basis of a great number of beliefs, all of which contribute in an essential way to the achievement of the goal. It’s just ludicrously implausible that a set of false beliefs might “just happen” to contribute in an analogous way to the achievement of the goal.
An example: The other day I went into town to shop. First, I made a list of the things I wanted to buy because I believed it would help me remember what to get and ultimately would result in my getting them. Next I got out my garage key because I believed that it would open the garage door. Then I got into the car because I believed that it was capable of getting me to town. I followed a particular route because I believed that it would take me to Wal-Mart. I went to Wal-Mart because I believed that I would find there certain items that I wanted. And so it went, as I went from store to store, buying various things because I believed that they would be useful. Then I headed to a restaurant because I believed that I was hungry and that I could get a meal there that I would enjoy. Finally I headed back to where I had started because I believed that my house would still be in the same place that I left it.
I could add hundreds, if not thousands, of other ways in which I acted the way I did because of various beliefs, but by now the point should be reasonably clear. It’s just completely implausible that I might have had a set of false beliefs that would, by sheer luck, have resulted in a successful shopping expedition.
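To put rough numbers on “sheer luck” (the figures are invented; only the shape of the calculation matters): suppose each step of such an errand depends on a belief, and that a false belief nevertheless yields the right action by coincidence with probability q. If the coincidences are independent, all n steps go right with probability q^n, which collapses fast:

```python
# Invented figures, just to show how quickly the odds collapse.
# q: probability that a FALSE belief coincidentally yields the right action
# n: number of belief-dependent steps in the errand
for q in (0.5, 0.9):
    for n in (10, 50, 200):
        print(f"q = {q}, n = {n}: joint success probability = {q ** n:.1e}")
```

Even granting each false belief a 90% chance of accidentally producing the right behavior, a 200-step day succeeds less than once in a billion tries.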
Another example (fortunately fictitious): My wife starts to have serious pains in her abdomen. I call the doctor’s office because I believe that it might be serious enough to require medical attention. After listening to my description of the symptoms, he forms the belief that she probably has appendicitis, because he believes the stuff he learned in medical school. He recommends that I get her to an emergency room as soon as possible because he believes that her life is in danger and believes that I’ll probably take his advice. On hearing this I call an ambulance service, telling them what the doctor told me, because I believe that this will induce them to send an ambulance which will take her to the hospital. They send the ambulance, based on my directions, because they believe me and believe that the directions I gave to my house are accurate. When we get to the hospital, we are admitted immediately because the staff believes that this could be a life-threatening emergency. After some tests they decide that it is and schedule immediate surgery because they believe that it could save her life. The surgeon does his thing based on many beliefs that he acquired from medical school and experience with such cases. As it turns out, it really was appendicitis and she would have died if she hadn’t received the proper treatment in time.
Although this example is more dramatic than most, it is similar to everyday life in that it involves complex interactions among a number of people who are acting cooperatively to achieve a common end, and each of whose actions is based on a whole complex of beliefs.
I don’t think that even Plantinga is clever enough to come up with any remotely plausible way in which everyone involved in this episode might have come to engage in this survival-enhancing behavior without having a whole myriad of true beliefs. The falsehood of even a tiny fraction of the beliefs involved would have resulted in her death.
A final point: as I’ve alluded to several times, natural selection does not select for beliefs themselves; it selects for cognitive mechanisms that produce beliefs. It’s not hard to see how natural selection could favor beliefs that tend systematically to enhance survival: it could select for mechanisms that produce beliefs that “correspond to” or “reflect” the actual conditions in which the individuals involved live. But even if we spent years thinking up clever scenarios in which certain specific false beliefs could produce survival-enhancing behaviors, it’s perfectly obvious that there’s no way natural selection could “select for” cognitive mechanisms that would generate just the right false beliefs needed to produce the requisite survival-enhancing behavior in the vast majority of cases.
So while Plantinga thinks it’s hard to see any reason to expect that P(R/N&E&C) would be very high – R being the proposition that our cognitive faculties are reliable, conditioned on naturalism (N), evolution (E), and the commonsense view of the relation of belief to behavior (C) – and suggests that a “generous” value might be 0.9, it seems obvious on reflection that this probability is in fact extremely high: so high as to be indistinguishable in practice from unity.