Symbol grounding
The Symbol Grounding Problem is related to the problem of how words (symbols) get their meanings, and hence to the problem of what meaning itself really is. The problem of meaning is in turn related to the problem of consciousness, or how it is that mental states are meaningful. According to a widely held theory of cognition called "computationalism," cognition (i.e., thinking) is just a form of computation. But computation in turn is just formal symbol manipulation: symbols are manipulated according to rules that are based on the symbols' shapes, not their meanings. How are those symbols (e.g., the words in our heads) connected to the things they refer to? It cannot be through the mediation of an external interpreter's head, because that would lead to an infinite regress, just as looking up the meanings of words in a (unilingual) dictionary of a language that one does not understand would lead to an infinite regress. The symbols in an autonomous hybrid symbolic+sensorimotor system—a Turing-scale robot consisting of both a symbol system and a sensorimotor system that reliably connects its internal symbols to the external objects they refer to, so it can interact with them Turing-indistinguishably from the way a person does—would be grounded. But whether its symbols would have meaning rather than just grounding is something that even the robotic Turing Test—hence cognitive science itself—cannot determine, or explain.
Words and Meanings
We know since Frege that the thing that a word refers to (i.e., its referent) is not the same as its meaning (or "sense"). This is most clearly illustrated using the proper names of concrete individuals, but it is also true of names of kinds of things and of abstract properties: (1) "Tony Blair," (2) "the UK's former prime minister," and (3) "Cherie Blair's husband" all have the same referent, but not the same meaning.
Some have suggested that the meaning of a (referring) word is the rule or features that one must use in order to successfully pick out its referent. In that respect, (2) and (3) come closer to wearing their meanings on their sleeves, because they are explicitly stating a rule for picking out their referents: "Find whoever is the UK's former PM, or whoever is Cherie's current husband". But that does not settle the matter, because there's still the problem of the meaning of the components of that rule ("UK," "former," "current," "PM," "Cherie," "husband"), and how to pick them out.
Perhaps "Tony Blair" (or better still, just "Tony") does not have this recursive component problem, because it points straight to its referent, but how? If the meaning is the rule for picking out the referent, what is that rule, when we come down to non-decomposable components like proper names of individuals (or names of kinds, as in "an unmarried man" is a "bachelor")?
It is probably unreasonable to expect us to know the rule for picking out the intended referents of our words—to know it explicitly, at least. Our brains do need to have the "know-how" to execute the rule, whatever it happens to be: they need to be able to actually pick out the intended referents of our words, such as "Tony Blair" or "bachelor." But we do not need to know consciously how our brains do that; we needn't know the rule. We can leave it to cognitive science and neuroscience to find out how our brains do it, and then explain the rule to us explicitly.
So if we take a word's meaning to be the means of picking out its referent, then meanings are in our brains. That is meaning in the narrow sense. If we use "meaning" in a wider sense, then we may want to say that meanings include both the referents themselves and the means of picking them out. So if a word (say, "Tony-Blair") is located inside an entity (e.g., oneself) that can use the word and pick out its referent, then the word's wide meaning consists of both the means that that entity uses to pick out its referent, and the referent itself: a wide causal nexus between (1) a head, (2) a word inside it, (3) an object outside it, and (4) whatever "processing" is required in order to successfully connect the inner word to the outer object.
But what if the "entity" in which a word is located is not a head but a piece of paper (or a computer screen)? What is its meaning then? Surely all the (referring) words on this screen, for example, have meanings, just as they have referents.
Consciousness
Here is where the problem of consciousness rears its head. For there would be no connection at all between scratches on paper and any intended referents if there were no minds mediating those intentions, via their own internal means of picking out those intended referents.

So the meaning of a word on a page is "ungrounded." Nor would looking it up in a dictionary help: If one tried to look up the meaning of a word one did not understand in a (unilingual) dictionary of a language one did not already understand, one would just cycle endlessly from one meaningless definition to another. One's search for meaning would be ungrounded. In contrast, the meanings of the words in one's head—those words one does understand—are "grounded" (by a means that cognitive neuroscience might eventually reveal to us). And that grounding of the meanings of the words in one's head mediates between the words on any external page one reads (and understands) and the external objects to which those words refer.
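The dictionary regress can be made concrete with a toy sketch (a hypothetical illustration in Python, not part of the original entry; the miniature dictionary and the search routine are invented for the example). With no words understood in advance, the lookup can only circle among other symbols; seeding it with even one already-grounded word lets it terminate.

```python
# Toy illustration of the dictionary-go-round: every word is defined only
# in terms of other words, so a lookup that starts with no understood words
# can only cycle among symbols, never reaching a referent.
toy_dictionary = {                      # hypothetical miniature unilingual dictionary
    "bachelor": ["unmarried", "man"],
    "unmarried": ["not", "married"],
    "married":   ["having", "a", "spouse"],
    "spouse":    ["married", "person"],
    "man":       ["adult", "male", "person"],
}

def search_for_meaning(word, understood=frozenset(), max_steps=20):
    """Follow definitions until an already-understood word is reached.

    With an empty 'understood' set the search just cycles: the symbols are
    ungrounded. Seeding 'understood' with grounded words lets it terminate.
    """
    frontier = [word]
    for step in range(max_steps):
        current = frontier.pop(0)
        if current in understood:
            return f"'{word}' grounded via '{current}' after {step} steps"
        # Not understood: expand it into yet more words (or leave it opaque).
        frontier.extend(toy_dictionary.get(current, []))
        if not frontier:
            return f"'{word}' leads only to undefined symbols"
    return f"gave up after {max_steps} steps: still circling among symbols"

print(search_for_meaning("bachelor"))                          # never grounds
print(search_for_meaning("bachelor", understood={"person"}))   # grounds after a few steps
```

The point of the toy is only the structure of the problem: definitions are made of words, so a definition can ground a word only for someone whose other words are already grounded.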
Symbol Grounding and Computation
What about the meaning of a word inside a computer? Is it like the word on the page or like the word in one's head? This is where the Symbol Grounding Problem comes in. Is a dynamic process transpiring in a computer more like the static paper page, or more like another dynamical system, the brain?
There is a school of thought according to which the computer is more like the brain—or rather, the brain is more like the computer: According to this view (called "computationalism," a variety of functionalism), the future theory explaining how the brain picks out its referents (the theory that cognitive neuroscience may eventually arrive at) will be a purely computational one (Pylyshyn 1984). A computational theory is a theory at the software level. It is essentially a computer program: a set of rules for manipulating symbols. And software is "implementation-independent." That means that whatever it is that a program is doing, it will do the same thing no matter what hardware it is executed on. The physical details of the dynamical system implementing the computation are irrelevant to the computation itself, which is purely formal; any hardware that can run the computation will do, and all physical implementations of that particular computer program are equivalent, computationally.
A computer can execute any computation. Hence once computationalism finds the right computer program, the same one that our brain is running when there is meaning transpiring in our heads, meaning will be transpiring in that computer too, when it is executing that program.
How would we know that we have the right computer program? It would have to be able to pass the Turing Test (TT). That means it would have to be capable of corresponding with any human being as a pen-pal, for a lifetime, without ever being in any way distinguishable from a real human pen-pal.
Searle's Chinese Room Argument
It was in order to show that computationalism is incorrect that Searle formulated his celebrated "Chinese Room Argument," in which he pointed out that if the Turing Test were conducted in Chinese, then he himself, Searle (who does not understand Chinese), could execute the very same program that the computer was executing without knowing what any of the words he was manipulating meant. So if there's no meaning going on inside Searle's head when he is implementing the program, then there's no meaning going on inside the computer when it is the one implementing the program either, computation being implementation-independent.
How does Searle know that there is no meaning going on in his head when he is executing the TT-passing program? Exactly the same way he knows whether there is or is not meaning going on inside his head under any other conditions: He understands the words of English, whereas the Chinese symbols that he is manipulating according to the program's rules mean nothing whatsoever to him (and there is no one else in his head for them to mean anything to). The symbols that are coming in, being rulefully manipulated, and then being sent out by any implementation of the TT-passing computer program, whether Searle or a computer, are like the ungrounded words on a page, not the grounded words in a head.
Note that in pointing out that the Chinese words would be meaningless to him under those conditions, Searle has appealed to consciousness. Otherwise one could argue that there would be meaning going on in Searle's head under those conditions, but that Searle himself would simply not be conscious of it. That is called the "Systems Reply" to Searle's Chinese Room Argument, and Searle rejects the Systems Reply as being merely a reiteration, in the face of negative evidence, of the very thesis (computationalism) that is on trial in his thought-experiment: "Are words in a running computation like the ungrounded words on a page, meaningless without the mediation of brains, or are they like the grounded words in brains?"
In this either/or question, the (still undefined) word "ungrounded" has implicitly relied on the difference between inert words on a page and consciously meaningful words in our heads. And Searle is reminding us that under these conditions (the Chinese TT), the words in his head would not be consciously meaningful, hence they would still be as ungrounded as the inert words on a page.
So if Searle is right, that (1) both the words on a page and those in any running computer program (including a TT-passing computer program) are meaningless in and of themselves, and hence that (2) whatever it is that the brain is doing to generate meaning, it can't be just implementation-independent computation, then what is the brain doing to generate meaning (Harnad 2001a)?
Formulation of Symbol Grounding Problem
To answer this question we have to formulate the symbol grounding problem itself (Harnad 1990):

First we have to define "symbol": A symbol is any object that is part of a symbol system. (The notion of a single symbol in isolation is not a useful one.) Symbols are arbitrary in their shape. A symbol system is a set of symbols and syntactic rules for manipulating them on the basis of their shapes (not their meanings). The symbols are systematically interpretable as having meanings and referents, but their shape is arbitrary in relation to their meanings and the shape of their referents.
A numeral is as good an example as any: Numerals (e.g., "1," "2," "3") are part of a symbol system (arithmetic) consisting of shape-based rules for combining the symbols into ruleful strings. "2" means what we mean by "two", but its shape in no way resembles, nor is it connected to, "two-ness." Yet the symbol system is systematically interpretable as making true statements about numbers (e.g. "1 + 1 = 2").
It is critical to understand this property: the symbol-manipulation rules are based on shape rather than meaning (the symbols are treated as primitive and undefined, insofar as the rules are concerned), yet the symbols and their ruleful combinations are all meaningfully interpretable. It should be evident in the case of formal arithmetic that although the symbols make sense, that sense is in our heads and not in the symbol system. The numerals in a running desk calculator are as meaningless as the numerals on a page of hand-calculations. Only in our minds do they take on meaning (Harnad 1994).
This is not to deprecate the property of systematic interpretability: We select and design formal symbol systems (algorithms) precisely because we want to know and use their systematic properties; the systematic correspondence between scratches on paper and quantities in the universe is a remarkable and extremely powerful property. But it is not the same thing as meaning, which is a property of certain things going on in our heads.
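Such a purely formal symbol system can be sketched directly (a hypothetical toy in Python, invented for illustration): the rules below operate only on the shapes of numeral strings, yet to us the results are systematically interpretable as arithmetic truths.

```python
# A tiny formal symbol system (hypothetical illustration): numerals are
# represented as strings of the stroke symbol "|", and the rules below
# operate only on the shapes of the strings, never on what they "mean".

def successor(numeral: str) -> str:
    """Shape-based rule: append one stroke."""
    return numeral + "|"

def add(left: str, right: str) -> str:
    """Shape-based rule: move strokes across, one at a time,
    until the right-hand string is empty."""
    while right:
        left, right = successor(left), right[:-1]
    return left

one, two = "|", "||"
result = add(one, one)
# The system itself only rewrites strings of strokes; it is we, the
# interpreters, who read "||" as the number two.
print(result == two)   # True: "1 + 1 = 2" under our interpretation
```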
Requirements for Symbol Grounding
Another symbol system is natural language (Fodor 1975). On paper, or in a computer, language too is just a formal symbol system, manipulable by rules based on the arbitrary shapes of words. But in the brain, meaningless strings of squiggles become meaningful thoughts. Harnad has pointed to two properties that might be required to make this difference.
Capacity to Pick Out Referents
One property that the symbols on static paper or even in a dynamic computer lack that symbols in a brain possess is the capacity to pick out their referents. This is what we were discussing earlier, and it is what the hitherto undefined term "grounding" refers to. A symbol system alone, whether static or dynamic, cannot have this capacity (any more than a book can), because picking out referents is not just a computational (implementation-independent) property; it is a dynamical (implementation-dependent) property.

To be grounded, the symbol system would have to be augmented with nonsymbolic, sensorimotor capacities—the capacity to interact autonomously with that world of objects, events, actions, properties and states that its symbols are systematically interpretable (by us) as referring to. It would have to be able to pick out the referents of its symbols, and its sensorimotor interactions with the world would have to fit coherently with the symbols' interpretations.
The symbols, in other words, need to be connected directly to (i.e., grounded in) their referents; the connection must not be dependent only on the connections made by the brains of external interpreters like us. Just the symbol system alone, without this capacity for direct grounding, is not a viable candidate for being whatever it is that is really going on in our brains when we think meaningful thoughts (Cangelosi & Harnad 2001).
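The contrast between an ungrounded token and a grounded one can be sketched as follows (a hypothetical toy in Python; the feature names and detector are invented for the example, not a proposed mechanism): a grounded symbol carries its own sensorimotor routine for picking out its referent, so no external interpreter has to make the connection.

```python
# Hypothetical toy contrast between ungrounded and grounded symbols.
# An ungrounded symbol is just a token; a grounded one is a token paired
# with a sensorimotor routine that picks out its referent from raw input.

from dataclasses import dataclass
from typing import Callable, Dict

# Pretend "sensory input" is a crude feature dictionary for one object in view.
SensoryInput = Dict[str, float]

@dataclass
class GroundedSymbol:
    name: str
    detector: Callable[[SensoryInput], bool]   # connects the token to its referent

    def picks_out(self, scene: SensoryInput) -> bool:
        return self.detector(scene)

# Ungrounded: a bare token whose connection to apples exists only in our heads.
ungrounded_apple = "apple"

# Grounded (toy): the token comes with a feature-based detector.
grounded_apple = GroundedSymbol(
    name="apple",
    detector=lambda scene: scene.get("round", 0) > 0.8 and scene.get("red", 0) > 0.5,
)

scene = {"round": 0.9, "red": 0.7, "stem": 1.0}   # made-up sensory features
print(grounded_apple.picks_out(scene))   # True: the system itself finds the referent
# 'ungrounded_apple' can only be interpreted as referring to apples by us.
```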
Consciousness
The necessity of groundedness, in other words, takes us from the level of the pen-pal Turing Test, which is purely symbolic (computational), to the robotic Turing Test, which is hybrid symbolic/sensorimotor (Harnad 2000, 2007). Meaning is grounded in the robotic capacity to detect, categorize, identify, and act upon the things that words and sentences refer to (see entries for Affordance and for Categorical Perception).
To categorize is to do the right thing with the right kind of thing. The categorizer must be able to detect the sensorimotor features of the members of the category that reliably distinguish them from the nonmembers. These feature-detectors must either be inborn or learned. The learning can be based on trial-and-error induction, guided by feedback from the consequences of correct and incorrect categorization; or, in our own linguistic species, the learning can also be based on verbal descriptions or definitions. The description or definition of a new category, however, can only convey the category and ground its name if the words in the definition are themselves already grounded category names (Blondin-Massé et al. 2008). So ultimately grounding has to be sensorimotor, to avoid infinite regress (Harnad 2005).
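The two routes to a grounded category described above can be illustrated with a small sketch (hypothetical Python, with invented features and a simple error-correction rule; it is not the specific learning mechanism the entry has in mind): a detector is learned from corrective feedback, and a new category name is then grounded verbally by composing names that are already grounded.

```python
# Toy illustration (hypothetical features and data) of the two routes to a
# grounded category: (1) trial-and-error learning of a feature-based detector
# from corrective feedback, and (2) verbal composition of already-grounded names.

def learn_detector(samples, features, epochs=20, lr=1.0):
    """Learn feature weights from corrective feedback (simple perceptron rule).

    samples: list of (feature_dict, is_member) pairs, i.e. feedback from the
    consequences of correct and incorrect categorization.
    """
    weights = {f: 0.0 for f in features}
    bias = 0.0
    for _ in range(epochs):
        for scene, is_member in samples:
            score = bias + sum(weights[f] * scene.get(f, 0.0) for f in features)
            predicted = score > 0
            if predicted != is_member:               # feedback: categorization was wrong
                direction = 1.0 if is_member else -1.0
                for f in features:
                    weights[f] += lr * direction * scene.get(f, 0.0)
                bias += lr * direction
    return lambda scene: bias + sum(weights[f] * scene.get(f, 0.0) for f in features) > 0

# Route 1: ground "striped" and "horse-shaped" directly, by sensorimotor feedback.
striped_samples = [({"stripes": 1.0}, True), ({"stripes": 0.0}, False)]
horse_samples   = [({"horse_shape": 1.0}, True), ({"horse_shape": 0.0}, False)]
is_striped      = learn_detector(striped_samples, ["stripes"])
is_horse_shaped = learn_detector(horse_samples, ["horse_shape"])

# Route 2: ground a NEW name purely verbally, but only because its defining
# words are already grounded: "a zebra is a striped, horse-shaped animal."
is_zebra = lambda scene: is_striped(scene) and is_horse_shaped(scene)

print(is_zebra({"stripes": 1.0, "horse_shape": 1.0}))   # True
print(is_zebra({"stripes": 0.0, "horse_shape": 1.0}))   # False
```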
But if groundedness is a necessary condition for meaning, is it a sufficient one? Not necessarily, for it is possible that even a robot that could pass the Turing Test, "living" amongst the rest of us indistinguishably for a lifetime, would fail to have in its head what Searle has in his: It could be a Zombie, with no one home, feeling feelings, meaning meanings (Harnad 1995).
Harnad thus points to consciousness as a second property. The problem of discovering the causal mechanism for successfully picking out the referent of a category name can in principle be solved by cognitive science. But the problem of explaining how consciousness can play an independent role in doing so is probably insoluble, except on pain of telekinetic dualism. Perhaps symbol grounding (i.e., robotic TT capacity) is enough to ensure that conscious meaning is present too, perhaps not. But in either case, there is no way we can hope to be any the wiser—and that is Turing's methodological point (Harnad 2001b, 2003, 2006).
Symbol Grounding and Brentano's Notion of Intentionality
"Intentionality" has been called the "mark of the mental" because of some observations by the philosopher BrentanoFranz Brentano
Franz Clemens Honoratus Hermann Brentano was an influential German philosopher and psychologist whose influence was felt by other such luminaries as Sigmund Freud, Edmund Husserl, Kazimierz Twardowski and Alexius Meinong, who followed and adapted his views.-Life:Brentano was born at Marienberg am...
to the effect that mental states always have an inherent, intended (mental) object or content toward which they are "directed": One sees something, wants something, believes something, desires something, understands something, means something etc.; and that something is always something one has in mind. Having a mental object is part of having anything in mind. Hence it is the mark of the mental. There are no "free-floating" mental states that do not also have a mental object. Even hallucinations and imaginings have an object, and even feeling depressed feels like something. Nor is the object the "external" physical object, when there is one. One may see a real chair, but the "intentional" object of one's "intentional state" is the mental chair one has in mind. (Yet another term for intentionality has been "aboutness" or "representationality": thoughts are always about something; they are (mental) "representations" of something; but that something is what it is that the thinker has in mind, not whatever external object may or may not correspond to it.)
If this all sounds like skating over the surface of a problem rather than a real break-through, then the foregoing description has had its intended effect: No, the problem of intentionality is not the symbol grounding problem; nor is grounding symbols the solution to the problem of intentionality. The symbols inside an autonomous dynamical symbol system that is able to pass the robotic Turing Test are grounded, in that, unlike in the case of an ungrounded symbol system, they do not depend on the mediation of the mind of an external interpreter to connect them to the external objects that they are interpretable (by the interpreter) as being "about"; the connection is autonomous, direct, and unmediated. But grounding is not meaning. Grounding is an input/output performance function. Grounding connects the sensory inputs from external objects to internal symbols and states occurring within an autonomous sensorimotor system, guiding the system's resulting processing and output.
Meaning, in contrast, is something mental. But to try to put a halt to the name-game of proliferating nonexplanatory synonyms for the mind/body problem without solving it (or, worse, implying that there is more than one mind/body problem), let us cite just one more thing that requires no further explication: feeling. The only thing that distinguishes an internal state that merely has grounding from one that has meaning is that it feels like something to be in the meaning state, whereas it does not feel like anything to be in the merely grounded functional state. Grounding is a functional matter; feeling is a felt matter. And that is the real source of Brentano's vexed peekaboo relation between "intentionality" and its internal "intentional object": All mental states, in addition to being the functional states of an autonomous dynamical system, are also feeling states: Feelings are not merely "functed," as all other physical states are; feelings are also felt.
Hence feeling is the real mark of the mental. But the symbol grounding problem is not the same as the mind/body problem, let alone a solution to it. The mind/body problem is actually the feeling/function problem: Symbol-grounding touches only its functional component. This does not detract from the importance of the symbol grounding problem, but just reflects that it is a keystone piece of the bigger puzzle called the mind.
Note: This article is based on an entry originally published in the Nature/Macmillan Encyclopedia of Cognitive Science that has since been revised by the author and the Wikipedia community.
See also
- Categorical perception
- Chinese room
- Communicative action
- Consciousness
- Formal language
- Formal system
- Functionalism (philosophy of mind)
- Hermeneutics
- Interpretation (semantics)
- Physical symbol system
- Pragmatics
- Semantics
- Semeiotic
- Semiosis
- Semiotics
- Sign
- Sign relation
- Situated cognition
- Syntax
- Turing machine