Neural correlates of consciousness

Suppose that a group of scientists aims to discover the neural correlates of phenomenal color experiences. To do this, they need to distinguish the correlates that they are really interested in from what one might call "mere" correlates. Some neural correlates are too broad (e.g., whole-brain states, or states that include lots of background conditions like a properly functioning heart). Some neural correlates are too narrow (e.g., properties of the electromagnetic fields generated by neural states). Somehow, "mere" correlates like these must be excluded. To get around problems of this sort, Chalmers introduces the notion of a minimal neural system. A minimal neural system is one that suffices for the states of consciousness in question but contains no proper part that suffices for those states. He then defines a neural correlate of consciousness (an NCC) as follows:

An NCC is a minimal neural system N such that there is a mapping from states of N to states of consciousness, where a given state of N is sufficient, under conditions C, for the corresponding state of consciousness.
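The definition can be rendered semi-formally as follows. This is a rough sketch in my own notation, not Chalmers': $S(N)$ stands for the states of system $N$, $\phi$ for the mapping from neural states to states of consciousness, and $\mathrm{Suff}_C(n, c)$ for "$n$ is sufficient for conscious state $c$ under conditions $C$."

```latex
% N is an NCC iff every state of N suffices (under conditions C)
% for its corresponding conscious state, and no proper part of N
% has states that do the same -- the minimality clause.
\[
\mathrm{NCC}(N) \iff
  \forall n \in S(N)\,\bigl[\mathrm{Suff}_C\bigl(n, \phi(n)\bigr)\bigr]
  \;\wedge\;
  \neg\,\exists N' \subsetneq N\;
  \forall n' \in S(N')\,\bigl[\mathrm{Suff}_C\bigl(n', \phi(n')\bigr)\bigr]
\]
```

The second conjunct captures why whole-brain states are excluded (they have sufficient proper parts), while the sufficiency requirement is what the too-narrow candidates fail.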

Some background: Chalmers once argued that there are two distinct kinds of concepts that we can use to think about mental phenomena, the phenomenal and the psychological. Phenomenal concepts pick out mental phenomena solely by the way that they subjectively feel, whereas psychological concepts pick out mental phenomena by the functional roles that they play. Conscious states, Chalmers argued, are picked out by phenomenal concepts alone, never by psychological concepts.

With this in mind, I think there is an overlooked problem with distinguishing features of the nervous system that are doing the right kind of causal work from features that are free-riders. A free-rider, as I am thinking of it, is a feature of a neural system that does no causal work necessary for performing the system's functions, but that necessarily accompanies that work in a normal, well-functioning nervous system (e.g., the electromagnetic fields generated by neural processes might count as free-riders). To distinguish the causally efficacious features of a minimal neural system from free-riders, however, one must have at one's disposal facts about how the phenomenal states in question function. But a functional role is exactly the sort of feature that, according to Chalmers, phenomenal states are supposed to lack.

Chalmers also suggests a second principle that individuates NCCs by their contents: "An NCC (for content) is a minimal neural representational system N such that representation of a content in N is sufficient, under conditions C, for representation of that content in consciousness." This may help alleviate the problem, but I am skeptical that it eliminates it. From what I understand, there is plenty of redundancy in neural representational systems. And even if it does fix the problem, it requires some way of making sense of assigning contents to phenomenal states. Chalmers would presumably do this by appealing to his panpsychist, dual-aspect views on information. But most scientists won't be happy with that answer. Barring dualism, I think the solution is to abandon Chalmers' two-concept framework. Conscious states, as we individuate them, are not solely phenomenal. They also include psychological features, and in particular contents.