Animal consciousness

Are non-human animals conscious? Throughout history, many prominent philosophers and scientists have denied that non-human animals are conscious. Descartes, for example, held that animals are mere mindless automata. In recent years, however, scientific leaders have come to answer this question in the affirmative, even going so far as to sign a public declaration asserting that, yes, many animals are indeed conscious. Despite this emerging consensus, some prominent philosophers continue to cast doubt on whether or not animals are conscious. Peter Carruthers, in particular, has recently argued that there are no facts of the matter about whether or not animals are conscious, and that scientists should stop asking the question. His argument depends on the assumption that the global workspace theory of consciousness is correct, a theory that currently enjoys wide currency in cognitive science. Since analogous arguments could be made for many other popular theories of consciousness, however, his ideas likely point to general reasons to doubt that there are facts of the matter about whether or not animals are conscious.

The global workspace theory, first introduced by cognitive scientist Bernard Baars, is currently one of the most well-regarded theories of consciousness. The theory proposes that the distinction between conscious mental states and unconscious mental states is best explained in terms of the accessibility of their representational content. Whereas the content of unconscious mental states is restricted to cognitive systems that perform relatively automatic, domain-specific tasks, the content of conscious mental states is made widely available to many different cognitive systems. This is done through what Baars calls the global workspace, a cognitive system in which the contents of consciousness are consolidated, unified, and distributed (or globally broadcasted) to other cognitive systems. Baars often describes the global workspace as a "theater of the mind" that works to shine a spotlight on information that is relevant to decision-making and learning. By limiting what information is made available to decision-making processes, the global workspace makes it possible for multi-modal organisms with complex muscular systems to respond intelligently in real time to their constantly changing environments.

Whereas unconscious processing is quick, massively parallel, and highly reliable, conscious processing is slow, serial, and error-prone. Despite these drawbacks, conscious processing is better suited to facilitate learning and guided control over action because it is creative, adaptive, and capable of arriving at novel solutions to complex problems. When first learning a new skill, one is conscious of every movement. As one becomes more adept at the skill, however, one's awareness of the fine movements required to perform the skill becomes increasingly unconscious. This is a good thing: it means the necessary processing occurs more quickly and is less prone to mistakes. While the processing required is still determined by the goals assigned by the global workspace, its execution is increasingly automatic and reflexive.

Carruthers argues that the global workspace theory, if true, implies that there are no facts of the matter about whether or not non-human animals are conscious. To arrive at this conclusion, he begins from the fact that the global workspace is framed in terms of the cognitive capacities of the human mind. Strictly speaking, few if any non-human animals broadcast information in exactly the way that humans do. So, strictly speaking, few if any non-human animals process information in exactly the way the global workspace requires. Instead, animals can only be said to broadcast information to a certain degree, relative to the degree to which their cognitive capacities overlap with human cognitive capacities.

Carruthers then insists that consciousness does not come in degrees. Although we may be conscious of certain cognitive content to greater or lesser degrees, we are not conscious to greater or lesser degrees. Consciousness is an all-or-nothing phenomenon. So, he concludes, there is a “mismatch” between the concept of consciousness and the concept of global broadcasting, despite the fact that, according to the global workspace theory, they have the same application.

So what are our options? According to Carruthers, there are just three possibilities. First, one could reject the assumption that consciousness and global access are identical. After all, they seem to have different properties. In particular, global broadcasting admits of degrees, but phenomenal consciousness does not. To reject this possibility, Carruthers insists that global broadcasting is defined relative to human cognitive capacities. The claim is that human consciousness is identical to global broadcasting, relative to the cognitive capacities of normal humans. So long as the global workspace can explain human consciousness, the fact that it does not extend to animals is not a problem. Since scientists cannot confirm whether or not animals are conscious in the first place, the fact that our best current theory of consciousness implies that animals are not conscious is not enough to reject the theory.

The second option is to grant that consciousness is an all-or-nothing phenomenon while insisting that there is a categorical albeit vague boundary between the degree of global broadcasting that suffices for consciousness and the degree that does not. The trouble with this idea, Carruthers argues, is that there are no determinate facts of the matter about how similar the mental states of animals must be in order to count as globally broadcasted states. The mental states of animals will be more similar in some respects and less similar in others depending on which cognitive capacities the animal in question possesses. In order to arrive at a fixed ranking, there must be some way to weight the importance of different cognitive capacities. But, Carruthers suggests, there are no facts of the matter about the relative importance of different cognitive capacities encompassed by the global workspace.

The third option is to argue that there are no facts of the matter about whether or not non-human animals are conscious in the first place. This is the option that Carruthers chooses. His defense of this option rests on two assumptions:

  1. If the global workspace theory is correct, then conscious mental states are identical to globally broadcasted nonconceptual contents.

  2. The global workspace theory is fully reductive and, if true, precludes the possibility that conscious mental states are anything beyond globally broadcasted representational states.

If that is right, then the fact that cognitive processing in animals only resembles the global workspace to a certain degree presents no problem. According to the global workspace theory, there are no special features of consciousness over and above its representational features. To ask whether an animal’s cognitive processes are similar enough to the global workspace to be sufficient for consciousness only counts as a substantive question if it is assumed that consciousness is something over and above globally broadcasted content.

I want to offer three responses to Carruthers’ argument. Like him, I will assume that the global workspace theory of consciousness is correct.

My first line of response is that it is unlikely that defenders of the global workspace intend to relativize broadcasting to the cognitive capacities of humans. Although the global workspace was introduced relative to human cognitive capacities, it does not automatically follow, as Carruthers seems to think, that the notion of global access must be defined in terms of human cognitive capacities. As it turns out, most defenders of the global workspace theory believe that many non-human animals are conscious, and they do not seem to recognize any tension in their views. Baars has written extensively on animal consciousness, as have many of his followers. If Baars and his followers thought that global broadcasting should be defined in terms of cognitive capacities that are unique to humans, then they should not even entertain the possibility that other animals are also capable of global broadcasting. But they clearly do. Not only do they think that many animals possess the right cognitive capacities, but much of the experimental evidence that they appeal to in support of the global workspace theory comes from animal research in comparative psychology.

Second, even if global broadcasting were relativized to the cognitive capacities of humans, the question becomes, which humans, and which cognitive capacities? Since there is a great range of diversity in human cognition and human cognitive capacities, there is a danger that whichever humans and whichever cognitive capacities are selected, a great many humans will come out as unconscious. Young children and cognitively disabled people do not possess the same cognitive capacities as normal, healthy adults. Nevertheless, it seems obvious that at least some young children and at least some cognitively disabled people are conscious in the same way and to the same extent that healthy adult humans are conscious. This is not to beg any important questions or oversimplify the issue: it is an interesting and difficult question which cognitive capacities are required for conscious experience and which are not. The view that Carruthers defends, however, seems to imply that there is no answer to this question. If scientists interpret the global workspace in the way that, according to him, they should, then it appears they must conclude that there are no facts of the matter about whether or not children and cognitively disabled people are conscious because they do not possess the full catalog of cognitive capacities that healthy human adults do. But surely there are such facts of the matter—surely it is a substantive question whether or not newborn infants are conscious. A good theory of consciousness ought to position scientists to answer such questions.

More broadly, individual humans often differ in their cognitive, perceptual, and motor capacities. Presumably, these differences in capacities reflect differences in their cognitive systems. If so, global broadcasting for me may be tied to different cognitive capacities than global broadcasting for you. How can I be sure, then, that you are conscious in the same way that I am? Carruthers' answer seems to be that we can rely on first-person testimony, something that animals, in his view, are unable to provide. But it is unclear why first-person testimony is any better than other behavioral and neurological evidence for consciousness that scientists rely on to detect, for example, whether or not patients with locked-in syndrome are conscious. Either way, one must make an inference to the best explanation, and either way, it is conceivable that one is mistaken. While our evidence that other humans are conscious is certainly stronger, the difference is only a matter of degree.

Third, I want to suggest there may be ways to weight features of the similarity space between global broadcasting in humans and analogous processes in animals. In particular, it may be possible to arrive at conclusions about the nature of conscious experience through evolutionary biology. Although conscious experience seems to be causally implicated in various ways with learning, decision-making, and behavior, scientists do not know which of these capacities are more central to its nature and which are more peripheral. My suggestion is that scientists may be able to use evolutionary biology to arrive at conclusions about which of these capacities are constitutive of conscious experience and which are not. In particular, it could lead to an understanding of how the cognitive structure of conscious experience works to achieve its natural functions. This, in turn, should allow scientists to determine which aspects of its cognitive structure are more critical to its functions and which less so, providing a means to weight the similarity space between global broadcasting in humans and analogous processes in non-human animals. All of this presupposes, of course, that it is possible to learn something about the nature of consciousness by investigating the evolutionary origins of consciousness in non-human animals. But given that animal studies in comparative psychology are already central pieces of evidence for the global workspace theory, I think it is fair game.

No, your vote doesn't make a difference

Why should you vote?

The standard reason is that you should vote to "make a difference." At face value, this suggests that you should vote to affect the outcome of the election.

But do you really believe that, were you not to vote, the outcome would be any different?

A recent study concluded that the odds that your vote would have "made a difference" in the 2008 election were about 1 in 60 million. In swing states, the odds improve to about 1 in 10 million. Perhaps you think those are good enough odds to justify paying attention to the election, educating yourself about the candidates, and making the trip to your local polling place. Personally, if the only reason to vote is to get someone elected, I think I'll stay home.
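To make those odds concrete, here is a rough back-of-the-envelope sketch. The probabilities are the ones cited above; the dollar value placed on your preferred candidate winning is a purely hypothetical placeholder, not a figure from the study.

```python
# Rough expected impact of a single vote, using the odds cited above.
# The "value" assigned to your preferred outcome is a hypothetical
# placeholder chosen only for illustration.

p_decisive_average = 1 / 60_000_000  # average voter, 2008 election
p_decisive_swing = 1 / 10_000_000    # voter in a swing state

value_of_outcome = 1_000_000_000  # hypothetical: $1B of perceived social value

impact_average = p_decisive_average * value_of_outcome
impact_swing = p_decisive_swing * value_of_outcome

print(f"average voter:     ${impact_average:,.2f}")   # → $16.67
print(f"swing-state voter: ${impact_swing:,.2f}")     # → $100.00
```

Even when an enormous value is placed on the outcome, the expected impact of a single vote stays modest, which is the point the paragraph is pressing.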

To be clear, I am not saying that your vote doesn't matter. What I'm saying is that, if it matters, it isn't because it affects the outcome of the election.

So I ask again: why should you vote? Why does your vote matter?

The answer that I favor is that you should vote in order to play your part in the democratic process. If democracy is to be successful, it requires widespread participation. To the extent that you believe in the democratic process, you believe that everyone ought to participate. So you should vote for the same sort of reason that you should keep your promises, tell the truth, and help those in need, even when you can get away with doing otherwise and even when doing so confers no benefit on you. You should vote because it's the right thing to do.

Supposing that you should vote for moral reasons, and not for self-interested reasons, a further question one might ask is: how should you vote?

This question is generally answered in one of two ways:

  1. Vote for the presidential candidate whose values and policies most reflect your own.

  2. Vote for the presidential candidate, of the candidates most likely to win, whose values and policies you find least objectionable.

The second answer, sometimes called the lesser evil voting strategy (or LEV, for short), is typically supported on the grounds that your vote "makes a difference." Accordingly, you should cast it in the way that is likely to have the best consequences.

This idea is defended by some very smart people. But since your vote is overwhelmingly unlikely to have any effect on the outcome, it is difficult to know what to make of this kind of justification.

Another way to decide between moral rules, one that respects the idea that you should consider the likely consequences without implying that your vote literally "makes a difference," is to consider what would happen if everyone were to follow them. The question then becomes: would it generally be a better world if everyone were to vote according to the first principle or the second principle?

It's unclear how often the two principles would reach the same verdict. But I can think of at least two sorts of scenarios in which they wouldn't. The first scenario seems to support the LEV strategy. The second seems to support voting for the candidate whose values most reflect your own.

First scenario: Suppose that there is one candidate that a majority of the population dislikes and two candidates that are both liked by a majority. But, in voting for the candidates that best reflect their values, 45% of the vote goes to the candidate that is most disliked, 40% to one of the two remaining candidates, and 15% to the last (I realize this is an oversimplification given the way that actual US presidential elections work, but, suffice it to say, an analogous scenario could play out through the electoral college at the state level). The most disliked candidate wins, a consequence that would have been avoided had people voted according to the LEV strategy.
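The arithmetic of this scenario can be sketched in a few lines of Python. The candidate labels are illustrative placeholders, and the percentages are the ones from the scenario above.

```python
# Scenario 1: sincere voting elects the candidate a majority dislikes,
# while LEV-style coordination avoids that outcome.
# Candidate labels are illustrative; percentages follow the scenario above.

sincere = {"disliked": 45, "alternative_a": 40, "alternative_b": 15}
sincere_winner = max(sincere, key=sincere.get)
print(sincere_winner)  # → disliked (45% plurality, despite 55% preferring others)

# Under LEV, alternative_b's 15% shifts to the stronger alternative:
lev = {"disliked": 45, "alternative_a": 40 + 15}
lev_winner = max(lev, key=lev.get)
print(lev_winner)  # → alternative_a (55% majority)
```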

Second scenario: Imagine that the mainstream, established parties have both chosen candidates that are despised by the majority of the population. And imagine that certain third-party or established independent candidates would be more desirable to the majority of the population. If everyone were already committed to LEV reasoning, those alternatives would never register as "likely to win," and so would never receive the votes of the majority that prefers them. This suggests that you shouldn't decide whom you're going to vote for on the basis of LEV reasoning months ahead of the election, especially given the effects of modern polling on the outcome of the election.

Given the current state of the US voting system, there are probably scenarios in which the LEV strategy is justified. Even so, it’s hard to know when we’re actually in such scenarios, and LEV reasoning has the potential to do a lot of harm. It has the potential, in particular, to foster unquestioning support for established parties that represent the interests of the few rather than the many. 

After the mainstream parties finish their primaries and nominate their candidates, the third parties need to be given the chance to rally support and provide people with a legitimate alternative. By turning to LEV reasoning too soon, we acquiesce to the possibility that there will be no viable candidates that the majority of the population actually supports.

My recommendation, then, is that we work to reform the voting system and, in the meantime, avoid appealing to the LEV strategy unless it's very clear that not doing so will lead to the election of a much worse candidate.

Criticizing the two-concept framework

A standard view in the philosophy of mind is that there are two very different concepts of the mental: the phenomenal and the functional. Terms that express phenomenal concepts refer to the way that mental states feel from the first-person perspective. Terms that express functional concepts refer to the causal roles that mental states play in relation to one's behavior and other mental states. Many ordinary mental terms apply to mental phenomena that have both phenomenal properties and functional properties—terms like "pain," "perceive," and "desire." Philosophers often claim that terms like these conflate distinct phenomena: really, there are two concepts of pain, and two kinds of mental states they apply to, that the ordinary use of the term "pain" conflates. The same is true of other such terms. Although their phenomenal and functional uses reliably coincide, they are conceptually distinct (we can, supposedly, imagine one existing without the other and vice versa), and so scientists and philosophers should not treat them as the same. Call this view the "two-concept framework."

While I accept that phenomenal and functional properties may be distinct, I am convinced that the two-concept framework is largely unjustified. One line of criticism that the two-concept framework supports is the charge that scientists conflate the functional concept expressed by the term "consciousness" with the phenomenal concept expressed by the term (see Ned Block's "On a confusion about a function of consciousness"). The argument, which philosophers rarely make explicit, is something like this:

  1. Scientists aim their theories of consciousness squarely at phenomenal mental states, yet the theories they go on to offer are (broadly construed) functionalist. So either scientists confuse the phenomenal concept expressed by the term "consciousness" with the functional concept expressed by the term, or they believe that phenomenal mental states can be explained by functional mental states.
  2. Phenomenal mental states cannot be explained in terms of functionally specified mental states (the explanatory gap).
  3. So either scientists are obviously mistaken or they conflate the different concepts expressed by the term "consciousness" (from 1 and 2).
  4. So scientists conflate the different concepts expressed by the term "consciousness" (3 and the principle of charity).

In short, scientists are either obviously mistaken or subtly confused, and the principle of charity recommends the latter interpretation. The crux of the argument is premise two, the explanatory gap. The explanatory gap is the special difficulty of explaining how mental states with phenomenal character arise from the material states of one's nervous system. The trouble with this argument, as I argue in my dissertation, is that the explanatory gap isn't obvious, since many philosophers and scientists deny its existence. Not only that, the premise is question-begging in the present case, since many of the scientists defending these theories explicitly reject the explanatory gap.

But setting those issues aside, let's assume that scientists do conflate phenomenal and functional properties and let's suppose that they accept the explanatory gap. Does it follow that the scientists' theories can't be aimed at conscious mental states with phenomenal properties? I think the answer is still no. The reason is that, so far as I can tell, philosophers have provided little reason to think that there are two distinct kinds of mental states, the phenomenal and the functional. Since the two reliably coincide, it is entirely reasonable to suppose that they are features of one and the same kind of mental state, even if one accepts that there is an explanatory gap. If that is right, then scientists can offer theories of consciousness while neither bridging the explanatory gap nor denying its existence. Compare: in setting out to give a theory of time, one is likely to include in one's initial description of time the so-called "arrow of time," the fact that the universe is always moving (through time) from states of lower entropy to states of higher entropy. Theories of time, however, have yet to explain the arrow of time. Should we conclude, then, that none of these theories are really theories of time, or that whatever explains the arrow of time must be something over and above time itself? Probably not.

Further, the assumption that phenomenal and functional mental states are distinct, separable kinds of states leads to the problem I discussed in my last post, effectively blocking (from the philosophical perspective) the possibility of scientific progress on consciousness. Some have called this the "harder" problem of consciousness, and if what I argued there is right, there is reason to think that the problem is even worse than philosophers have thought.

I argue in my dissertation that the concept of consciousness is a cluster concept and that, despite its many distinct features, there is good reason to believe that scientists are converging on a common target of explanation. It cannot, and should not, be boiled down to phenomenal properties alone or functional properties alone. Although many philosophers have chosen to study the phenomenal and functional properties of consciousness separately from one another, scientists see them as two sides of the same coin. And no, this doesn't mean that scientists are conflating different "concepts of consciousness," as so many philosophers are convinced.