Criticizing the two-concept framework

A standard view in the philosophy of mind is that there are two very different concepts of the mental: the phenomenal and the functional. Terms that express phenomenal concepts refer to the way that mental states feel from the first-person perspective. Terms that express functional concepts refer to the causal roles that mental states play in relation to one's behavior and other mental states. Many ordinary mental terms apply to mental phenomena that have both phenomenal and functional properties (terms like "pain," "perceive," and "desire"). Philosophers often claim that terms like these conflate distinct phenomena: really, there are two concepts of pain, applying to two different kinds of mental states, which the ordinary use of the term "pain" runs together. The same is true of other such terms. Although their phenomenal and functional uses reliably coincide, they are conceptually distinct (we can, supposedly, imagine one existing without the other and vice versa), and so scientists and philosophers should not treat them as the same. Call this view the "two-concept framework."

While I accept that phenomenal and functional properties may be distinct, I am convinced that the two-concept framework is largely unjustified. One line of criticism that the two-concept framework supports is the charge that scientists conflate the functional concept expressed by the term "consciousness" with the phenomenal concept expressed by the term (see Ned Block's "On a confusion about a function of consciousness"). The argument, which philosophers rarely make explicit, goes something like this:

  1. Scientists present their theories of consciousness as aimed at phenomenal mental states but then go on to offer (broadly construed) functionalist theories. So either scientists confuse the phenomenal concept expressed by the term "consciousness" with the functional concept expressed by the term, or they believe that phenomenal mental states can be explained by functional mental states.
  2. Phenomenal mental states cannot be explained in terms of functionally specified mental states (the explanatory gap).
  3. So either scientists are obviously mistaken or they conflate the different concepts expressed by the term "consciousness" (from 1 and 2).
  4. So scientists conflate the different concepts expressed by the term "consciousness" (3 and the principle of charity).

In short, scientists are either obviously mistaken or subtly confused, and the principle of charity recommends the latter interpretation. The crux of the argument is the second premise, the explanatory gap. The explanatory gap is the special difficulty of explaining how mental states with phenomenal character arise from the material states of one's nervous system. The trouble with this argument, as I argue in my dissertation, is that the explanatory gap isn't obvious, since many philosophers and scientists deny its existence. Not only that, the premise is question-begging in the present case, since many of the scientists defending these theories explicitly reject the explanatory gap.

But setting those issues aside, let's assume that scientists do conflate phenomenal and functional properties, and let's suppose that they accept the explanatory gap. Does it follow that the scientists' theories can't be aimed at conscious mental states with phenomenal properties? I think the answer is still no. The reason is that, so far as I can tell, philosophers have provided little reason to think that there are two distinct kinds of mental states, the phenomenal and the functional. Since the two reliably coincide, it is entirely reasonable to suppose that they are features of one and the same kind of mental state, even if one accepts that there is an explanatory gap. If that is right, then scientists can offer theories of consciousness while neither bridging the explanatory gap nor denying its existence. Compare: in setting out to give a theory of time, one is likely to include in one's initial description of time the so-called "arrow of time," the fact that the universe is always moving (through time) from states of lower entropy to states of higher entropy. Theories of time, however, have yet to explain the arrow of time. Should we conclude, then, that none of these theories are really theories of time, or that whatever explains the arrow of time must be something over and above time itself? Probably not.

Further, the assumption that phenomenal and functional mental states are distinct, separable kinds of states leads to the problem I discussed in my last post, effectively blocking (from the philosophical perspective) the possibility of scientific progress on consciousness. Some have called this the "harder" problem of consciousness, and if what I argued there is right, there is reason to think that the problem is even worse than philosophers thought.

I argue in my dissertation that the concept of consciousness is a cluster concept and that, despite its many distinct features, there is good reason to believe that scientists are converging on a common target of explanation. That target cannot, and should not, be boiled down to phenomenal properties alone or functional properties alone. Although many philosophers have chosen to study the phenomenal and functional properties of consciousness separately from one another, scientists see them as two sides of the same coin. And no, this doesn't mean that scientists are conflating different "concepts of consciousness," as so many philosophers are convinced.

Neural correlates of consciousness

Suppose that a group of scientists aim to discover the neural correlates of phenomenal color experiences. To do this, they need to distinguish the correlates that they are really interested in from what one might call "mere" correlates. Some neural correlates are too broad (e.g., whole brain states, or states that include lots of background conditions like a properly functioning heart). Some neural correlates are too narrow (e.g., properties of the electromagnetic fields generated by neural states). Somehow, "mere" correlates like these must be excluded. To get around problems of this sort, Chalmers introduces the notion of a minimal neural system: a neural system that suffices for the states of consciousness in question but contains no proper part that suffices for those states. He then defines a neural correlate of consciousness (an NCC) as follows:

An NCC is a minimal neural system N such that there is a mapping from states of N to states of consciousness, where a given state of N is sufficient, under conditions C, for the corresponding state of consciousness.
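To make the quantificational structure of the definition explicit, here is a rough formalization (my own gloss in LaTeX notation, not Chalmers' own formulation), where $S(N)$ is the set of states of a neural system $N$, $\Phi$ is the set of states of consciousness in question, and $n \Rightarrow_C s$ abbreviates "state $n$ suffices, under conditions $C$, for conscious state $s$":

\[
N \text{ is an NCC} \iff \exists f : S(N) \to \Phi \;\Big[\, \forall n \in S(N)\; \big(n \Rightarrow_C f(n)\big) \,\Big] \;\wedge\; \neg\exists\, N' \subsetneq N \text{ satisfying the same condition.}
\]

The second conjunct simply restates the minimality requirement: without it, whole-brain states (and anything larger) would trivially count as NCCs.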

Some background: Chalmers once argued that there are two distinct kinds of concepts that we can use to think about mental phenomena, the phenomenal and the psychological. Phenomenal concepts pick out mental phenomena solely by the way that they subjectively feel, whereas psychological concepts pick out mental phenomena by the functional roles that they play. Conscious states, Chalmers argued, are picked out by phenomenal concepts alone, never by psychological concepts.

With this in mind, I think there is an overlooked problem with distinguishing features of the nervous system that are doing the right kind of causal work from features that are free-riders. A free-rider, as I am thinking about it, is a feature of a neural system that does no causal work necessary for performing the system's functions but that necessarily accompanies the performance of those functions in a normal, well-functioning nervous system (e.g., the electromagnetic fields generated by neural processes might count as free-riders). In order to distinguish the causally efficacious features of a minimal neural system from free-riders, however, one must have at one's disposal facts about how the phenomenal states in question function. But functional roles are exactly the sort of feature that, according to Chalmers, phenomenal states lack.

Chalmers also suggests a second principle that individuates NCCs by their contents: "An NCC (for content) is a minimal neural representational system N such that representation of a content in N is sufficient, under conditions C, for representation of that content in consciousness." This may help alleviate the problem, but I'm skeptical that it eliminates it. From what I understand, there is plenty of redundancy in neural representational systems. And even if it does fix the problem, it requires some way to make sense of assigning contents to phenomenal states. Chalmers would presumably do this by appealing to his panpsychist, dual-aspect views about information. But most scientists won't be happy with that answer. Barring dualism, I think the solution is to abandon Chalmers' two-concept framework. Conscious states, as we individuate them, are not solely phenomenal. They also include psychological features, and in particular contents.

The ancient origins of consciousness

My dissertation adviser has asked me to co-write a book review with him of Feinberg and Mallatt's new book, The Ancient Origins of Consciousness. So here are a few preliminary thoughts.

The book tries to shed light on some of the more puzzling features of consciousness, and in particular to bridge the infamous explanatory gap, through an investigation of its evolutionary origins. While I am not convinced that it makes much progress in bridging the explanatory gap, it does advance a number of interesting and, from my perspective, well-defended hypotheses about the evolutionary origins and neurological basis of consciousness. A central claim of the book is that the origins of consciousness, in particular sensory consciousness (aka phenomenal consciousness), lie much further back than most scientists currently believe. Feinberg and Mallatt estimate that the first conscious organisms appeared between 560 and 520 million years ago (mya), with the first vertebrates during the Cambrian explosion. They also defend a number of interesting claims about what sort of neuronal complexity must exist to support different kinds of consciousness. They suggest that at least three levels of neuronal processing are needed to support sensory consciousness; they argue that sensory consciousness does not depend on the corticothalamic system, the area of the brain to which the emergence of sensory consciousness was traditionally attributed; and they reject the view that affective consciousness (evaluative feelings) depends on the cerebral cortex. As a philosopher interested in the evolutionary history of consciousness, I found the story they tell compelling and richly informative.

Feinberg and Mallatt introduce a list of criteria for singling out consciousness, similar in many ways to other criterial definitions on offer. But unlike other definitions, theirs places special emphasis on the presence of automatic, fast-acting reflexes and their relationship to multi-layer, nested neural hierarchies that produce isomorphic neural maps using information carried by the reflexes. Throughout the book they try to show, persuasively as far as I can tell, that early vertebrates (and perhaps some invertebrates) satisfy all their criteria. The CliffsNotes version: the first sensory improvements that evolved at the beginning of the Cambrian explosion triggered an arms race between arthropod predators and vertebrate prey, which led, in turn, to much more advanced sensory modalities (especially vision), multisensory processing, neural hierarchies, and finally mental imaging, the sort of capacities that appear in their criterial definition of consciousness. Specifically, they estimate that the first conscious vertebrates appeared between 560 and 520 mya, when the first high-resolution eyes and mental imaging evolved. They also argue that primitive interoceptive and affective consciousness were already present among early vertebrates.

On a more philosophical note, the book promises to show that several of the most difficult explanatory hurdles posed by phenomenal consciousness can be overcome by investigating its evolutionary origins and neurological basis. In particular, Feinberg and Mallatt set out to explain four features of consciousness that philosophy and science have so far struggled to account for: referral (or aboutness), mental unity, qualia, and mental causation. They argue that these features can be explained by their "neurobiological naturalism," an extension of Searle's biological naturalism. The basic idea is that we can explain the puzzling features of consciousness entirely in terms of the neurobiological features of nervous systems. The details of their proposal are not discussed in a serious way until the last few sections of the last chapter of the book. Much of the discussion there is simply a reiteration of the criterial definition given earlier in the book and doesn't seem to draw in a significant way on the evolutionary story developed in the previous chapters. One gets the sense that there are really two projects here, one about the evolutionary history of consciousness and one about the explanatory gap, and it isn't obvious that they fit together or usefully inform one another.

The discussion in the last chapter draws heavily from Feinberg's recent article, "Neuroontology, neurobiological naturalism, and consciousness: A challenge to scientific reduction and a solution." Like Searle, Feinberg holds that consciousness is "emergent" and "irreducible," but only in a very weak sense. Consciousness is irreducible because it isn't a feature of any of the individual elements that make up nervous systems, and it is emergent because it only manifests as a higher-level relational feature of whole nervous systems. Their proposals are as follows:

  1. Referral: an embodied process of nervous systems that emerges out of the relationship between fast automatic reflexes and functionally nested, multi-layer neural hierarchies that produce isomorphic maps from information carried by the reflexes.
  2. Mental Unity: sensory information carried by automatic reflexes is unified by nested, multi-layer neural hierarchies.
  3. Qualia: "the result of a unique, mulitfactorial neurobiological substrate and recursive interaction within and between higher and lower neurohierarchical levels."
  4. Mental Causation: mental phenomena are identical to processes in the nervous system which have causal efficacy.

Unfortunately, Feinberg and Mallatt say so little about what they take to be in need of explanation that it is difficult to assess whether or not their theory does the job. The philosophical problems posed by these features are, needless to say, complex, and there is significant disagreement about what exactly the problems are. While their discussion of neural hierarchies offers some interesting insight into referral, mental unity, and perhaps mental causation, it is not immediately clear what their contributions amount to. Further philosophical work remains to be done.

Philosophers will be especially disappointed by Feinberg and Mallatt's discussion of qualia. This is, perhaps, not surprising, given that explaining qualia is standardly regarded as the really "hard" problem of consciousness. But since one of the main goals of the book is to bridge the explanatory gap, it is still a letdown. The basic problem is that they offer virtually no explanation of why particular experiences are accompanied by particular qualia: why, for example, the experience of seeing something red is accompanied by phenomenal redness and not, say, phenomenal blueness. To address this issue, Feinberg and Mallatt argue as follows:

We reply that our integrated approach that combines the neurobiological, neuroevolutionary, and neurophilosophical domains is necessary to answer this seemingly impossible question. If we ask you: "Why does subjective red feel 'red,' and pain 'hurt'?" what would you say? First, you could argue that we know that the neurobiology of color processing and pain processing are quite different, so they shouldn't feel the same (a neurobiological answer). Second, you could say that the distinction between the feeling of red and pain evolved because there is strong adaptive value in a response to harm that differs from a response to color (a neuroevolutionary answer).

Whatever the virtues of this reply, it offers no new insight into the debates surrounding qualia, and it shows little awareness of the details that make the explanatory gap so difficult to bridge. On the assumption that qualia exist, there is nothing terribly mysterious about why pain sensations and blue sensations should elicit different qualia. The really mysterious questions are why they should elicit the specific qualia that they do, and why they elicit any qualia at all. As Chalmers points out, there seems to be nothing incoherent about the idea of creatures that lack phenomenal consciousness and yet are physically and psychofunctionally just like us. This is the really "hard" problem of consciousness, and it is a problem that Feinberg and Mallatt simply do not address.

None of this is to suggest that the "hard" problem of consciousness is well-formed or that scientists should take it seriously. Perhaps the very notion of qualia is incoherent. The point is simply that Feinberg and Mallatt have not addressed the problem on its own terms.