What’s the meaning of the word “red”? There are different approaches one might take to answering this question.
A representationalist approach to linguistic meaning will answer this question by saying that “red” represents a specific non-linguistic quality, namely, redness. But what is redness? One might think that no answer to this question can be given. Clearly, however, we can at least provide a partial answer. We can say, for instance, that redness is a color, that something’s being scarlet or crimson implies that it is red, that something’s being red (all over) is incompatible with its being green (all over), and so on. Saying such things, one specifies, in relational terms, what it is for something to be red. The core idea of an alternative approach to linguistic meaning, known as inferentialism, is that, in saying such things, all one is really doing is expressing the inferential rules governing the use of the term “red,” and it is in terms of these rules that the meaning of “red” is to be understood.
An inferentialist approach to linguistic meaning understands the meaning of a word in terms of how it contributes to the meanings of sentences, and understands the meanings of sentences in terms of how they inferentially relate to other sentences, most fundamentally, their implying and being implied by other sentences and their being incompatible with other sentences. For instance, “x is red” implies “x is colored,” is implied by “x is scarlet” or “x is crimson,” is incompatible with “x is green,” and so on. Clearly, however, such a purely inferential specification of the meaning of “red” can’t suffice to completely specify its meaning, right? Almost all so-called “inferentialists” will answer this question by saying “Of course not!”
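To make the picture concrete, here is a toy sketch, entirely my own illustration rather than anything from the inferentialist literature, of treating a fragment of meaning as a web of inferential relations among sentence schemas. The `implies` and `incompatible` structures just encode the examples in the text:

```python
# Toy model (my own illustration): a meaning-constituting web of
# inferential relations among sentence schemas.
implies = {
    "x is scarlet": {"x is red"},
    "x is crimson": {"x is red"},
    "x is red": {"x is colored"},
}
incompatible = {("x is red", "x is green")}

def entails(premise, conclusion, seen=None):
    """Follow the implication relation transitively."""
    seen = seen or set()
    if premise == conclusion:
        return True
    for next_sentence in implies.get(premise, ()):
        if next_sentence not in seen:
            seen.add(next_sentence)
            if entails(next_sentence, conclusion, seen):
                return True
    return False

print(entails("x is scarlet", "x is colored"))  # True
print(("x is red", "x is green") in incompatible)  # True
```

On this toy picture, the “meaning” of “red” just is its position in such a web: what entails it, what it entails, and what it excludes.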
According to the standard version of inferentialism, which I’ll call “quasi-inferentialism,” accounting for the meaning of a sentence requires appealing to more than just inferential relations between sentences. On the quasi-inferentialist account, there must be, in addition to inferential relations between sentences, relations between perceptual states and sentences—so-called “language-entries”—as well as relations between sentences and intentional actions—so-called “language-exits.” Focusing just on the case of perception, it’s clearly essential to the meaning of “red” that one can come to know that something’s red, thus being in a position to correctly apply the word “red” to that thing, by seeing that it’s red. For instance, one can come to be entitled to the claim “The ball is red” in virtue of seeing the following red ball:
The quasi-inferentialist accounts for this aspect of the meaning of “red” by including in their semantic theory the following “language-entry rule”:
Quasi-inferentialism, however, faces a basic problem. It is advertised as an account of the meaning of words like “red.” However, in spelling out the account, we end up using the word “red” to specify the circumstance under which one is entitled to use the word “red.” We’re thus appealing to the very meaning for which we’re supposed to be inferentially accounting in giving our account. Thus, if we really try to give an account of the meaning of “red” along these lines, our account would be circular.
Of course, one might think that there’s no non-circular account of the meaning of words like “red” to be given, and so we should simply accept that we’re always going to appeal to the meaning of “red” in articulating our account of its meaning. If one thinks that, however, there is little reason to adopt any sort of substantive theory of meaning at all, be it inferentialist or quasi-inferentialist. Rather than trying to articulate the meaning of “red” in terms of the complex set of rules governing its use, one might as well just state one very simple rule that perfectly suffices to capture its use:
To resolve the problem, let us return to the starting thought that it’s essential to the meaning of the word “red” that one know that something’s red by seeing it. My basic positive proposal is simple. Rather than attempting to account for this aspect of the meaning of the word “red” by appealing to a quasi-inferential relation between a perceptual circumstance and the use of the word “red,” we can account for it in purely inferential terms simply by articulating the inferential relations between “red,” “sees,” and related terms. Let me explain.
In articulating what it is for something to be red, we might say such things as that if one is in a position to see that some object x is red—looking at x in good lighting—and one has color vision, then one will see and thereby know that x is red. The core inferentialist thought, applied in this case, is that we can think of what we say here as the expression of inferential rules governing the use of “red.” Spelling this out explicitly, we can inferentially define the predicate “is in a position to see that x is red” in terms of inferences like the following:
I’m aware that this account of the meaning of “red,” which appeals to nothing but inferential relations between sentences, is going to be met with skepticism from most readers. Let me address some concerns and articulate some consequences.
One initial worry is that, on this account, there will be no way to distinguish the meanings of words like “red” and “green.” One might wonder about a mapping of the language onto itself where “red” and “green” are switched with all of the inferential relations being preserved, for instance, with “crimson” being mapped to “forest green,” “colored” being mapped to itself, and so on. Of course, it’s true that, for all of the inferences I’ve specified as examples thus far, this is possible. Yet, on this account, the meanings of these words are understood in terms of the entire web of inferences, which includes more than just relations between color words, and, in the context of this whole web, they are indeed distinct. For instance, “x is a tomato” along with “x is red” implies “x is ripe,” whereas “x is a tomato” along with “x is green” implies “x is unripe.” By inferentially connecting color words such as “red” and “green” to non-color words like “ripe” and “unripe,” we can account for their distinctive conceptual significance. I acknowledge that one is likely to feel that something must be left out of this account. I had this feeling too when I first started developing this theory. My claim, however, is that nothing is left out.
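The permutation argument above can be made vivid with another toy sketch of my own (the sentence names are hypothetical stand-ins, not anything from the text beyond its examples): swapping “red” and “green” maps the purely color-internal inferences onto themselves, but fails to preserve the whole web once the links to “ripe” and “unripe” are included:

```python
# Toy illustration (my own) of the red/green permutation argument.
# Inferences are (premise, conclusion) pairs.
color_only = {
    ("x is crimson", "x is red"),
    ("x is forest green", "x is green"),
    ("x is red", "x is colored"),
    ("x is green", "x is colored"),
}
whole_web = color_only | {
    ("x is a tomato and x is red", "x is ripe"),
    ("x is a tomato and x is green", "x is unripe"),
}

# The red/green swap, expressed sentence-by-sentence; sentences
# not listed (e.g. "x is colored") map to themselves.
swap_sentence = {
    "x is red": "x is green", "x is green": "x is red",
    "x is crimson": "x is forest green",
    "x is forest green": "x is crimson",
    "x is a tomato and x is red": "x is a tomato and x is green",
    "x is a tomato and x is green": "x is a tomato and x is red",
}

def apply_swap(sentence):
    return swap_sentence.get(sentence, sentence)

def image(relations):
    """Apply the swap to every inference in a set of inferences."""
    return {(apply_swap(p), apply_swap(q)) for p, q in relations}

print(image(color_only) == color_only)  # True: a symmetry of color talk
print(image(whole_web) == whole_web)    # False: broken by ripe/unripe
```

The swap is an automorphism of the color-internal fragment but not of the whole web, which is just the point: the full inferential role of “red” is not interchangeable with that of “green.”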
To make this claim vivid, consider Mary, the color scientist who’s been in a black-and-white room since birth and so has never experienced color, but has reached the theoretical limit of what can be known about the colors without actually having experienced them. Most people have the intuition that she does not know what it is for something to be red. I disagree. On this account, she knows just what it is for something to be red, since she grasps all of the inferential relations between sentences that articulate the content of “red.” Though she’s never herself used the term non-inferentially, she knows just the conditions under which it can be non-inferentially used, and that is all that is required in order to count as knowing the meaning of “red” on this account. Of course, one might be inclined to just pound the table and assert here “But she doesn’t know what it is for something to be red!” However, I don’t see a non-question-begging argument for this claim.
Here’s another consequence of this view of meaning. There has recently been much debate about whether a Large Language Model, trained on nothing but linguistic data, without any sort of “sensory grounding,” could actually understand natural language. Some philosophers and computer scientists have argued that such a model can’t understand language at all, whereas others have argued that such a model could only understand a specific subset of natural language expressions, for instance, those belonging to pure mathematics and logic. This account has the radical consequence that a model trained on nothing but linguistic data could in principle grasp—completely grasp—the meanings of all natural language expressions, even those, like “red,” that are essentially such as to be deployed perceptually. Once again, on this account, the meaning of “red” is constituted—completely constituted—by the inferential relations sentences containing it bear to other sentences, and a Large Language Model is in principle capable of grasping all such relations. Understanding how such an expression can be non-inferentially deployed is an essential aspect of grasping its meaning, but actually deploying such an expression non-inferentially is not. Once again, this consequence of the account might be unintuitive to many, but I don’t see a non-question-begging argument against it.
I develop this argument for this radical form of inferentialism, which has been termed “hyper-inferentialism,” in my recent paper, “How to Be a Hyper-Inferentialist.” Check it out if you’re interested in hearing more of the details!