Cognitive Science

First published Mon Sep 23, 1996; substantive revision Mon Apr 30, 2007

Cognitive science is the interdisciplinary study of mind and intelligence, embracing philosophy, psychology, artificial intelligence, neuroscience, linguistics, and anthropology. Its intellectual origins are in the mid-1950s when researchers in several fields began to develop theories of mind based on complex representations and computational procedures. Its organizational origins are in the mid-1970s when the Cognitive Science Society was formed and the journal Cognitive Science began. Since then, more than sixty universities in North America, Europe, Asia, and Australia have established cognitive science programs, and many others have instituted courses in cognitive science.

1. History

Attempts to understand the mind and its operation go back at least to the Ancient Greeks, when philosophers such as Plato and Aristotle tried to explain the nature of human knowledge. The study of mind remained the province of philosophy until the nineteenth century, when experimental psychology developed. Wilhelm Wundt and his students initiated laboratory methods for studying mental operations more systematically. Within a few decades, however, experimental psychology became dominated by behaviorism, a view that virtually denied the existence of mind. According to behaviorists such as J. B. Watson, psychology should restrict itself to examining the relation between observable stimuli and observable behavioral responses. Talk of consciousness and mental representations was banished from respectable scientific discussion. Especially in North America, behaviorism dominated the psychological scene through the 1950s.

Around 1956, the intellectual landscape began to change dramatically. George Miller summarized numerous studies which showed that the capacity of human thinking is limited, with short-term memory, for example, limited to around seven items. He proposed that memory limitations can be overcome by recoding information into chunks, mental representations that require mental procedures for encoding and decoding the information. At this time, primitive computers had been around for only a few years, but pioneers such as John McCarthy, Marvin Minsky, Allen Newell, and Herbert Simon were founding the field of artificial intelligence. In addition, Noam Chomsky rejected behaviorist assumptions about language as a learned habit and proposed instead to explain language comprehension in terms of mental grammars consisting of rules. The six thinkers mentioned in this paragraph can be viewed as the founders of cognitive science.

2. Methods

Cognitive science has unifying theoretical ideas, but we have to appreciate the diversity of outlooks and methods that researchers in different fields bring to the study of mind and intelligence. Although cognitive psychologists today often engage in theorizing and computational modeling, their primary method is experimentation with human participants. People, usually undergraduates satisfying course requirements, are brought into the laboratory so that different kinds of thinking can be studied under controlled conditions. For example, psychologists have experimentally examined the kinds of mistakes people make in deductive reasoning, the ways that people form and apply concepts, the speed with which people think using mental images, and the performance of people solving problems using analogies. Our conclusions about how the mind works must be based on more than "common sense" and introspection, since these can give a misleading picture of mental operations, many of which are not consciously accessible. Psychological experiments that carefully approach mental operations from diverse directions are therefore crucial for cognitive science to be scientific.

Although theory without experiment is empty, experiment without theory is blind. To address the crucial questions about the nature of mind, the psychological experiments need to be interpretable within a theoretical framework that postulates mental representations and procedures. One of the best ways of developing theoretical frameworks is by forming and testing computational models intended to be analogous to mental operations. To complement psychological experiments on deductive reasoning, concept formation, mental imagery, and analogical problem solving, researchers have developed computational models that simulate aspects of human performance. Designing, building, and experimenting with computational models is the central method of artificial intelligence (AI), the branch of computer science concerned with intelligent systems. Ideally in cognitive science, computational models and psychological experimentation go hand in hand, but much important work in AI has examined the power of different approaches to knowledge representation in relative isolation from experimental psychology.

While some linguists do psychological experiments or develop computational models, most currently use different methods. For linguists in the Chomskian tradition, the main theoretical task is to identify grammatical principles that provide the basic structure of human languages. Identification takes place by noticing subtle differences between grammatical and ungrammatical utterances. In English, for example, the sentences "She hit the ball" and "What do you like?" are grammatical, but "She the hit ball" and "What does you like?" are not. A grammar of English will explain why the former are acceptable but not the latter.
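
To make the idea of grammaticality as rule-derivability concrete, here is a minimal sketch in Python of a toy context-free grammar that accepts "She hit the ball" and rejects "She the hit ball". The rules, the four-word lexicon, and the function name derivable are assumptions invented for this illustration, not a serious fragment of English grammar.

    # Toy context-free grammar: grammaticality = derivability from rules.
    # Rules and lexicon are illustrative assumptions.
    LEXICON = {"she": "Pron", "hit": "V", "the": "Det", "ball": "N"}
    RULES = {
        "S":  [["NP", "VP"]],
        "NP": [["Pron"], ["Det", "N"]],
        "VP": [["V", "NP"]],
    }

    def derivable(symbol, words):
        """True if the word sequence can be derived from the given symbol."""
        if symbol not in RULES:                      # lexical category
            return len(words) == 1 and LEXICON.get(words[0]) == symbol
        for expansion in RULES[symbol]:
            if len(expansion) == 1 and derivable(expansion[0], words):
                return True
            if len(expansion) == 2:                  # try every split point
                for i in range(1, len(words)):
                    if (derivable(expansion[0], words[:i])
                            and derivable(expansion[1], words[i:])):
                        return True
        return False

    print(derivable("S", "she hit the ball".split()))  # True: grammatical
    print(derivable("S", "she the hit ball".split()))  # False: ungrammatical

A real grammar in the Chomskian tradition is far richer, but the principle the sketch displays is the same: an utterance is grammatical just in case the rules can generate it.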

Like cognitive psychologists, neuroscientists often perform controlled experiments, but their observations are very different, since neuroscientists are concerned directly with the nature of the brain. With nonhuman subjects, researchers can insert electrodes and record the firing of individual neurons. With humans for whom this technique would be too invasive, it has become possible in recent years to use magnetic and positron scanning devices to observe what is happening in different parts of the brain while people are doing various mental tasks. For example, brain scans have identified the regions of the brain involved in mental imagery and word interpretation. Additional evidence about brain functioning is gathered by observing the performance of people whose brains have been damaged in identifiable ways. A stroke, for example, in a part of the brain dedicated to language can produce deficits such as the inability to utter sentences. Like cognitive psychology, neuroscience is often theoretical as well as experimental, and theory development is frequently aided by developing computational models of the behavior of groups of neurons.

Cognitive anthropology expands the examination of human thinking to consider how thought works in different cultural settings. The study of mind should obviously not be restricted to how English speakers think but should consider possible differences in modes of thinking across cultures. Cognitive science is becoming increasingly aware of the need to view the operations of mind in particular physical and social environments. For cultural anthropologists, the main method is ethnography, which requires living and interacting with members of a culture to a sufficient extent that their social and cognitive systems become apparent. Cognitive anthropologists have investigated, for example, the similarities and differences across cultures in words for colors.

With a few exceptions, philosophers generally do not perform systematic empirical observations or construct computational models. But philosophy remains important to cognitive science because it deals with fundamental issues that underlie the experimental and computational approach to mind. Abstract questions such as the nature of representation and computation need not be addressed in the everyday practice of psychology or artificial intelligence, but they inevitably arise when researchers think deeply about what they are doing. Philosophy also deals with general questions such as the relation of mind and body and with methodological questions such as the nature of explanations found in cognitive science. In addition, philosophy concerns itself with normative questions about how people should think as well as with descriptive ones about how they do. In addition to the theoretical goal of understanding human thinking, cognitive science can have the practical goal of improving it, which requires normative reflection on what we want thinking to be. Philosophy of mind does not have a distinct method, but should share with the best theoretical work in other fields a concern with empirical results.

In its weakest form, cognitive science is just the sum of the fields mentioned: psychology, artificial intelligence, linguistics, neuroscience, anthropology, and philosophy. Interdisciplinary work becomes much more interesting when there is theoretical and experimental convergence on conclusions about the nature of mind. For example, psychology and artificial intelligence can be combined through computational models of how people behave in experiments. The best way to grasp the complexity of human thinking is to use multiple methods, especially psychological and neurological experiments and computational models. Theoretically, the most fertile approach has been to understand the mind in terms of representation and computation.

3. Representation and Computation

The central hypothesis of cognitive science is that thinking can best be understood in terms of representational structures in the mind and computational procedures that operate on those structures. While there is much disagreement about the nature of the representations and computations that constitute thinking, the central hypothesis is general enough to encompass the current range of thinking in cognitive science, including connectionist theories which model thinking using artificial neural networks.

Most work in cognitive science assumes that the mind has mental representations analogous to computer data structures, and computational procedures similar to computational algorithms. Cognitive theorists have proposed that the mind contains such mental representations as logical propositions, rules, concepts, images, and analogies, and that it uses mental procedures such as deduction, search, matching, rotating, and retrieval. The dominant mind-computer analogy in cognitive science has taken on a novel twist from the use of another analog, the brain.

Connectionists have proposed novel ideas about representation and computation that use neurons and their connections as inspirations for data structures, and neuron firing and spreading activation as inspirations for algorithms. Cognitive science then works with a complex 3-way analogy among the mind, the brain, and computers. Mind, brain, and computation can each be used to suggest new ideas about the others. There is no single computational model of mind, since different kinds of computers and programming approaches suggest different ways in which the mind might work. The computers that most of us work with today are serial processors, performing one instruction at a time, but the brain and some recently developed computers are parallel processors, capable of doing many operations at once.

4. Theoretical Approaches

Here is a schematic summary of current theories about the nature of the representations and computations that explain how the mind works.

4.1 Formal logic

Formal logic provides some powerful tools for looking at the nature of representation and computation. Propositional and predicate calculus serve to express many complex kinds of knowledge, and many inferences can be understood in terms of logical deduction with inference rules such as modus ponens. The explanation schema for the logical approach is:

Explanation target:

  • Why do people make the inferences they do?

Explanatory pattern:

  • People have mental representations similar to sentences in predicate logic.
  • People have deductive and inductive procedures that operate on those sentences.
  • The deductive and inductive procedures, applied to the sentences, produce the inferences.

It is not certain, however, that logic provides the core ideas about representation and computation needed for cognitive science, since more efficient and psychologically natural methods of computation may be needed to explain human thinking.
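
As a concrete illustration of this schema, here is a minimal sketch, assuming knowledge is stored as atomic sentence names plus (antecedent, consequent) conditionals, of deduction by repeated modus ponens. The sentence names and the function modus_ponens_closure are invented for the example.

    # Deduction by repeated modus ponens over sentence-like representations.
    # Facts and conditionals below are illustrative assumptions.
    def modus_ponens_closure(facts, conditionals):
        """From p and IF p THEN q, infer q; repeat until nothing new follows."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for antecedent, consequent in conditionals:
                if antecedent in derived and consequent not in derived:
                    derived.add(consequent)
                    changed = True
        return derived

    facts = {"socrates_is_human"}
    conditionals = [("socrates_is_human", "socrates_is_mortal"),
                    ("socrates_is_mortal", "socrates_will_die")]
    print(sorted(modus_ponens_closure(facts, conditionals)))
    # ['socrates_is_human', 'socrates_is_mortal', 'socrates_will_die']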

4.2 Rules

Much of human knowledge is naturally described in terms of rules of the form IF … THEN …, and many kinds of thinking such as planning can be modeled by rule-based systems. The explanation schema used is:

Explanation target:

Explanatory pattern:

Computational models based on rules have provided detailed simulations of a wide range of psychological experiments, from cryptarithmetic problem solving to skill acquisition to language use. Rule-based systems have also been of practical importance in suggesting how to improve learning and how to develop intelligent machine systems.
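
The following sketch shows the flavor of such a rule-based (production) system on a toy planning problem: rules of the form IF conditions THEN additions fire against working memory until nothing more follows. The tea-making rules, the fact names, and the function run are assumptions invented for illustration.

    # A toy production system: IF <conditions> THEN <additions>.
    # Rules and facts are illustrative assumptions.
    RULES = [
        ({"goal_make_tea", "have_kettle"}, {"water_boiled"}),
        ({"water_boiled", "have_teabag"},  {"tea_made"}),
    ]

    def run(working_memory, rules):
        """Fire every rule whose IF-part is satisfied; repeat until stable."""
        memory = set(working_memory)
        fired = True
        while fired:
            fired = False
            for conditions, additions in rules:
                if conditions <= memory and not additions <= memory:
                    memory |= additions
                    fired = True
        return memory

    print(run({"goal_make_tea", "have_kettle", "have_teabag"}, RULES))
    # includes 'water_boiled' and then 'tea_made', derived in sequence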

4.3 Concepts

Concepts, which partly correspond to the words in spoken and written language, are an important kind of mental representation. There are computational and psychological reasons for abandoning the classical view that concepts have strict definitions. Instead, concepts can be viewed as sets of typical features. Concept application is then a matter of getting an approximate match between concepts and the world. Schemas and scripts are more complex than concepts that correspond to words, but they are similar in that they consist of bundles of features that can be matched and applied to new situations. The explanatory schema used in concept-based systems is:

Explanation target:

Explanatory pattern:
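
The sketch below illustrates concept application as approximate matching against typical features rather than checking a strict definition. The bird features, their weights, and the 0.6 threshold are assumptions invented for illustration.

    # Concept application as approximate matching of typical features.
    # Features, weights, and threshold are illustrative assumptions.
    BIRD = {"has_feathers": 1.0, "lays_eggs": 0.9, "flies": 0.8, "sings": 0.4}

    def typicality(concept, instance):
        """Weighted share of the concept's typical features the instance has."""
        return sum(w for f, w in concept.items() if f in instance) / sum(concept.values())

    robin   = {"has_feathers", "lays_eggs", "flies", "sings"}
    penguin = {"has_feathers", "lays_eggs"}
    print(typicality(BIRD, robin))    # 1.0  : a highly typical bird
    print(typicality(BIRD, penguin))  # ~0.61: still matches, but less typically
    # Apply the concept when the score clears a threshold, say 0.6.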

4.4 Analogies

Analogies play an important role in human thinking, in areas as diverse as problem solving, decision making, explanation, and linguistic communication. Computational models simulate how people retrieve and map source analogs in order to apply them to target situations. The explanation schema for analogies is:

Explanation target:

Explanatory pattern:

The constraints of similarity, structure, and purpose overcome the difficult problem of how previous experiences can be found and used to help with new problems. Not all thinking is analogical, and using inappropriate analogies can hinder thinking, but analogies can be very effective in applications such as education and design.
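
Here is a minimal sketch of the retrieval step alone, using feature overlap to find a source analog for Duncker's radiation problem. The case representations, the Jaccard overlap measure, and the function names are simplifying assumptions; real computational models of analogy also enforce structural correspondence, which this sketch omits.

    # Analogical retrieval by feature overlap: find the most similar stored
    # source case, then carry its solution across. Cases are assumptions.
    CASES = [
        ({"attacker", "stronghold", "single_route_blocked", "many_routes"},
         "divide forces and converge from several directions"),
        ({"sauce", "too_salty", "cooking"},
         "dilute and simmer"),
    ]

    def jaccard(a, b):
        return len(a & b) / len(a | b)

    def retrieve(target, cases):
        """Return the (features, solution) case with the greatest overlap."""
        return max(cases, key=lambda case: jaccard(case[0], target))

    tumor_problem = {"tumor", "rays", "single_route_blocked", "many_routes"}
    features, solution = retrieve(tumor_problem, CASES)
    print(solution)  # the fortress solution, mapped onto converging weak rays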

4.5 Images

Visual and other kinds of images play an important role in human thinking. Pictorial representations capture visual and spatial information in a much more usable form than lengthy verbal descriptions. Computational procedures well suited to visual representations include inspecting, finding, zooming, rotating, and transforming. Such operations can be very useful for generating plans and explanations in domains to which pictorial representations apply. The explanatory schema for visual representation is:

Explanation target:

Explanatory pattern:

Imagery can aid learning, and some metaphorical aspects of language may have their roots in imagery. Psychological experiments suggest that visual procedures such as scanning and rotating employ imagery, and recent neurophysiological results confirm a close physical link between reasoning with mental imagery and perception.
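
As a toy illustration of one such procedure, here is a sketch that rotates a pictorial representation stored as a 2-D grid; the grid encoding of an "L" shape is an assumption made for the example.

    # An imagery-style operation: rotating a picture stored as a 2-D grid.
    image = [
        "X..",
        "X..",
        "XXX",
    ]

    def rotate_90_clockwise(grid):
        """Rotate the whole picture, as one might rotate a mental image."""
        return ["".join(row) for row in zip(*reversed(grid))]

    for row in rotate_90_clockwise(image):
        print(row)
    # XXX
    # X..
    # X..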

4.6 Connectionism

Connectionist networks consisting of simple nodes and links are very useful for understanding psychological processes that involve parallel constraint satisfaction. Such processes include aspects of vision, decision making, explanation selection, and meaning making in language comprehension. Connectionist models can simulate learning by methods that include Hebbian learning and backpropagation. The explanatory schema for the connectionist approach is:

Explanation target:

Explanatory pattern:

Simulations of various psychological experiments have shown the psychological relevance of the connectionist models, which are, however, only very rough approximations to actual neural networks.
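
Here is a minimal sketch of Hebbian learning, one of the learning methods mentioned above: a connection weight grows in proportion to the joint activation of the units it links. The network size, the activation values, and the learning rate are illustrative assumptions.

    # Hebbian learning on a tiny two-layer network.
    def hebbian_update(weights, pre, post, lr=0.25):
        """w[i][j] += lr * pre[i] * post[j]: units that fire together wire together."""
        return [[w + lr * pre[i] * post[j] for j, w in enumerate(row)]
                for i, row in enumerate(weights)]

    weights = [[0.0, 0.0], [0.0, 0.0]]   # 2 input units, 2 output units
    pre, post = [1.0, 0.0], [1.0, 1.0]   # first input active, both outputs firing
    for _ in range(3):                   # three co-activations
        weights = hebbian_update(weights, pre, post)
    print(weights)  # [[0.75, 0.75], [0.0, 0.0]]: only the active input's links grow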

4.7 Theoretical neuroscience

Theoretical neuroscience is the attempt to develop mathematical and computational theories and models of the structures and processes of the brains of humans and other animals. It differs from connectionism in trying to be more biologically accurate by modeling the behavior of large numbers of realistic neurons organized into functionally significant brain areas. In recent years, computational models of the brain have become biologically richer, both with respect to employing more realistic neurons such as ones that spike and have chemical pathways, and with respect to simulating the interactions among different areas of the brain such as the hippocampus and the cortex. These models are not strictly an alternative to computational accounts in terms of logic, rules, concepts, analogies, images, and connections, but should mesh with them and show how mental functioning can be performed at the neural level. The explanatory schema for theoretical neuroscience is:

Explanation target:

Explanatory pattern:

From the perspective of theoretical neuroscience, mental representations are patterns of neural activity, and inference is transformation of such patterns.
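
To illustrate what a "more realistic" neuron than a connectionist node can look like, here is a sketch of a leaky integrate-and-fire unit, a standard idealization of a spiking neuron in theoretical neuroscience. All the constants (leak, threshold, input current) are illustrative assumptions.

    # A leaky integrate-and-fire unit: membrane potential decays, input
    # accumulates, and crossing the threshold emits a spike and resets.
    def simulate(steps=50, input_current=1.2, threshold=1.0, leak=0.9):
        v, spike_times = 0.0, []
        for t in range(steps):
            v = leak * v + 0.1 * input_current   # decay plus driving input
            if v >= threshold:                   # threshold crossed: spike
                spike_times.append(t)
                v = 0.0                          # reset after the spike
        return spike_times

    print(simulate())  # steps at which the model neuron fires, e.g. [17, 35]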

5. Philosophical Relevance

Some philosophy, in particular naturalistic philosophy of mind, is part of cognitive science. But the interdisciplinary field of cognitive science is relevant to philosophy in several ways. First, the psychological, computational, and other results of cognitive science investigations have important potential applications to traditional philosophical problems in epistemology, metaphysics, and ethics. Second, cognitive science can serve as an object of philosophical critique, particularly concerning the central assumption that thinking is representational and computational. Third and more constructively, cognitive science can be taken as an object of investigation in the philosophy of science, generating reflections on the methodology and presuppositions of the enterprise.

5.1 Philosophical Applications

Much philosophical research today is naturalistic, treating philosophical investigations as continuous with empirical work in fields such as psychology. From a naturalistic perspective, philosophy of mind is closely allied with theoretical and experimental work in cognitive science. Metaphysical conclusions about the nature of mind are to be reached, not by a priori speculation, but by informed reflection on scientific developments in fields such as computer science and neuroscience. Similarly, epistemology is not a stand-alone conceptual exercise, but depends on and benefits from scientific findings concerning mental structures and learning procedures. Even ethics can benefit by bringing greater understanding of the psychology of moral thinking to bear on ethical questions such as the nature of deliberations concerning right and wrong. Goldman (1993) provides a concise review of applications of cognitive science to epistemology, philosophy of science, philosophy of mind, metaphysics, and ethics. Ongoing developments in cognitive science are highly relevant to a range of such philosophical problems.

Additional philosophical problems arise from examining the presuppositions of current approaches to cognitive science.

5.2 Critique of Cognitive Science

The claim that human minds work by representation and computation is an empirical conjecture and might be wrong. Although the computational-representational approach to cognitive science has been successful in explaining many aspects of human problem solving, learning, and language use, some philosophical critics such as Hubert Dreyfus (1992) and John Searle (1992) have claimed that this approach is fundamentally mistaken. Critics of cognitive science have offered such challenges as:

  1. The emotion challenge: Cognitive science neglects the important role of emotions in human thinking.
  2. The consciousness challenge: Cognitive science ignores the importance of consciousness in human thinking.
  3. The world challenge: Cognitive science disregards the significant role of physical environments in human thinking.
  4. The body challenge: Cognitive science neglects the contribution of the body to human thought and action.
  5. The social challenge: Human thought is inherently social in ways that cognitive science ignores.
  6. The dynamical systems challenge: The mind is a dynamical system, not a computational system.
  7. The mathematics challenge: Mathematical results show that human thinking cannot be computational in the standard sense, so the brain must operate differently, perhaps as a quantum computer.

Thagard (2005) argues that all these challenges can best be met by expanding and supplementing the computational-representational approach, not by abandoning it.

5.3 Philosophy of Cognitive Science

Cognitive science raises many interesting methodological questions that are worthy of investigation by philosophers of science. What is the nature of representation? What role do computational models play in the development of cognitive theories? What is the relation among apparently competing accounts of mind involving symbolic processing, neural networks, and dynamical systems? What is the relation among the various fields of cognitive science such as psychology, linguistics, and neuroscience? Are psychological phenomena subject to reductionist explanations via neuroscience? Von Eckardt (1993) and Clark (2001) provide discussions of some of the philosophical issues that arise in cognitive science. Bechtel et al. (2001) collect useful articles on the philosophy of neuroscience.

The increasing prominence of neural explanations in cognitive, social, developmental, and clinical psychology raises important philosophical questions about explanation and reduction. Anti-reductionism, according to which psychological explanations are completely independent of neurological ones, is becoming increasingly implausible, but it remains controversial to what extent psychology can be reduced to neuroscience and molecular biology (see McCauley, 2007, for a comprehensive survey). Essential to answering questions about the nature of reduction are answers to questions about the nature of explanation. Explanations in psychology, neuroscience, and biology in general are plausibly viewed as descriptions of mechanisms, which are systems of parts that interact to produce regular changes (Bechtel and Abrahamsen, 2005). In psychological explanations, the parts are mental representations that interact by computational procedures to produce new representations. In neuroscientific explanations, the parts are neural populations that interact by electrochemical processes to produce new activity in neural populations. If progress in theoretical neuroscience continues, it should become possible to tie psychological to neurological explanations by showing how mental representations such as concepts are constituted by activities in neural populations, and how computational procedures such as spreading activation among concepts are carried out by neural processes.

Acknowledgment

With the kind permission of MIT Press, this page incorporates some material from the first and second editions of P. Thagard, Mind: Introduction to Cognitive Science.

Related Entries

artificial intelligence | behaviorism | concepts | connectionism | consciousness | emotion | folk psychology: as a theory | folk psychology: as mental simulation | identity theory of mind | innate/acquired distinction | innateness: and contemporary theories of cognition | intentionality | language of thought hypothesis | meaning holism | memory | mental content: causal theories of | mental imagery | mental representation | mind: computational theory of | mind: modularity of | neuroscience, philosophy of | propositional attitude reports

Copyright © 2007 by
Paul Thagard <pthagard@watarts.uwaterloo.ca>

Mental Representation

First published Thu Mar 30, 2000; substantive revision Wed Jul 7, 2004

The notion of a "mental representation" is, arguably, in the first instance a theoretical construct of cognitive science. As such, it is a basic concept of the Computational Theory of Mind, according to which cognitive states and processes are constituted by the occurrence, transformation and storage (in the mind/brain) of information-bearing structures (representations) of one kind or another.

However, on the assumption that a representation is an object with semantic properties (content, reference, truth-conditions, truth-value, etc.), a mental representation may be more broadly construed as a mental object with semantic properties. As such, mental representations (and the states and processes that involve them) need not be understood only in computational terms. On this broader construal, mental representation is a philosophical topic with roots in antiquity and a rich history and literature predating the recent "cognitive revolution." Though most contemporary philosophers of mind acknowledge the relevance and importance of cognitive science, they vary in their degree of engagement with its literature, methods and results; and there remain, for many, issues concerning the representational properties of the mind that can be addressed independently of the computational hypothesis.

Though the term 'Representational Theory of Mind' is sometimes used almost interchangeably with 'Computational Theory of Mind', I will use it here to refer to any theory that postulates the existence of semantically evaluable mental objects, including philosophy's stock in trade mentalia — thoughts, concepts, percepts, ideas, impressions, notions, rules, schemas, images, phantasms, etc. — as well as the various sorts of "subpersonal" representations postulated by cognitive science. Representational theories may thus be contrasted with theories, such as those of Baker (1995), Collins (1987), Dennett (1987), Gibson (1966, 1979), Reid (1764/1997), Stich (1983) and Thau (2002), which deny the existence of such things.

1. The Representational Theory of Mind

The Representational Theory of Mind (RTM) (which goes back at least to Aristotle) takes as its starting point commonsense mental states, such as thoughts, beliefs, desires, perceptions and images. Such states are said to have "intentionality" — they are about or refer to things, and may be evaluated with respect to properties like consistency, truth, appropriateness and accuracy. (For example, the thought that cousins are not related is inconsistent, the belief that Elvis is dead is true, the desire to eat the moon is inappropriate, a visual experience of a ripe strawberry as red is accurate, an image of George W. Bush with dreadlocks is inaccurate.)

RTM defines such intentional mental states as relations to mental representations, and explains the intentionality of the former in terms of the semantic properties of the latter. For example, to believe that Elvis is dead is to be appropriately related to a mental representation whose propositional content is that Elvis is dead. (The desire that Elvis be dead, the fear that he is dead, the regret that he is dead, etc., involve different relations to the same mental representation.) To perceive a strawberry is to have a sensory experience of some kind which is appropriately related to (e.g., caused by) the strawberry.

RTM also understands mental processes such as thinking, reasoning and imagining as sequences of intentional mental states. For example, to imagine the moon rising over a mountain is to entertain a series of mental images of the moon (and a mountain). To infer a proposition q from the propositions p and if p then q is (among other things) to have a sequence of thoughts of the form p, if p then q, q.

Contemporary philosophers of mind have typically supposed (or at least hoped) that the mind can be naturalized — i.e., that all mental facts have explanations in the terms of natural science. This assumption is shared within cognitive science, which attempts to provide accounts of mental states and processes in terms (ultimately) of features of the brain and central nervous system. In the course of doing so, the various sub-disciplines of cognitive science (including cognitive and computational psychology and cognitive and computational neuroscience) postulate a number of different kinds of structures and processes, many of which are not directly implicated by mental states and processes as commonsensically conceived. There remains, however, a shared commitment to the idea that mental states and processes are to be explained in terms of mental representations.

In philosophy, recent debates about mental representation have centered around the existence of propositional attitudes (beliefs, desires, etc.) and the determination of their contents (how they come to be about what they are about), and the existence of phenomenal properties and their relation to the content of thought and perceptual experience. Within cognitive science itself, the philosophically relevant debates have been focused on the computational architecture of the brain and central nervous system, and the compatibility of scientific and commonsense accounts of mentality.

2. Propositional Attitudes

Intentional Realists such as Dretske (e.g., 1988) and Fodor (e.g., 1987) note that the generalizations we apply in everyday life in predicting and explaining each other's behavior (often collectively referred to as "folk psychology") are both remarkably successful and indispensable. What a person believes, doubts, desires, fears, etc. is a highly reliable indicator of what that person will do; and we have no other way of making sense of each other's behavior than by ascribing such states and applying the relevant generalizations. We are thus committed to the basic truth of commonsense psychology and, hence, to the existence of the states its generalizations refer to. (Some realists, such as Fodor, also hold that commonsense psychology will be vindicated by cognitive science, given that propositional attitudes can be construed as computational relations to mental representations.)

Intentional Eliminativists, such as Churchland, (perhaps) Dennett and (at one time) Stich argue that no such things as propositional attitudes (and their constituent representational states) are implicated by the successful explanation and prediction of our mental lives and behavior. Churchland denies that the generalizations of commonsense propositional-attitude psychology are true. He (1981) argues that folk psychology is a theory of the mind with a long history of failure and decline, and that it resists incorporation into the framework of modern scientific theories (including cognitive psychology). As such, it is comparable to alchemy and phlogiston theory, and ought to suffer a comparable fate. Commonsense psychology is false, and the states (and representations) it postulates simply don't exist. (It should be noted that Churchland is not an eliminativist about mental representation tout court. See, e.g., Churchland 1989.)

Dennett (1987a) grants that the generalizations of commonsense psychology are true and indispensable, but denies that this is sufficient reason to believe in the entities they appear to refer to. He argues that to give an intentional explanation of a system's behavior is merely to adopt the "intentional stance" toward it. If the strategy of assigning contentful states to a system and predicting and explaining its behavior (on the assumption that it is rational — i.e., that it behaves as it should, given the propositional attitudes it should have in its environment) is successful, then the system is intentional, and the propositional-attitude generalizations we apply to it are true. But there is nothing more to having a propositional attitude than this. (See Dennett 1987a: 29.)

Though he has been taken to be thus claiming that intentional explanations should be construed instrumentally, Dennett (1991) insists that he is a "moderate" realist about propositional attitudes, since he believes that the patterns in the behavior and behavioral dispositions of a system on the basis of which we (truly) attribute intentional states to it are objectively real. In the event that there are two or more explanatorily adequate but substantially different systems of intentional ascriptions to an individual, however, Dennett claims there is no fact of the matter about what the system believes (1987b, 1991). This does suggest an irrealism at least with respect to the sorts of things Fodor and Dretske take beliefs to be; though it is not the view that there is simply nothing in the world that makes intentional explanations true.

(Davidson 1973, 1974 and Lewis 1974 also defend the view that what it is to have a propositional attitude is just to be interpretable in a particular way. It is, however, not entirely clear whether they intend their views to imply irrealism about propositional attitudes.)

Stich (1983) argues that cognitive psychology does not (or, in any case, should not) taxonomize mental states by their semantic properties at all, since attribution of psychological states by content is sensitive to factors that render it problematic in the context of a scientific psychology. Cognitive psychology seeks causal explanations of behavior and cognition, and the causal powers of a mental state are determined by its intrinsic "structural" or "syntactic" properties. The semantic properties of a mental state, however, are determined by its extrinsic properties — e.g., its history, environmental or intramental relations. Hence, such properties cannot figure in causal-scientific explanations of behavior. (Fodor 1994 and Dretske 1988 are realist attempts to come to grips with some of these problems.) Stich proposes a syntactic theory of the mind, on which the semantic properties of mental states play no explanatory role. (Stich has changed his views on a number of these issues. See Stich 1996.)

3. Conceptual and Non-Conceptual Representation

It is a traditional assumption among realists about mental representations that representational states come in two basic varieties (cf. Boghossian 1995). There are those, such as thoughts, which are composed of concepts and have no phenomenal ("what-it's-like") features ("qualia"), and those, such as sensory experiences, which have phenomenal features but no conceptual constituents. (Nonconceptual content is usually defined as a kind of content that states of a creature lacking concepts might nonetheless enjoy.[1]) On this taxonomy, mental states can represent either in a way analogous to expressions of natural languages or in a way analogous to drawings, paintings, maps or photographs. (Perceptual states, such as seeing that something is blue, are sometimes thought of as hybrid states, consisting of, for example, a non-conceptual sensory experience and a thought, or some more integrated compound of sensory and conceptual components.)

Some historical discussions of the representational properties of mind (e.g., Aristotle 1984, Locke 1689/1975, Hume 1739/1978) seem to assume that nonconceptual representations — percepts ("impressions"), images ("ideas") and the like — are the only kinds of mental representations, and that the mind represents the world in virtue of being in states that resemble things in it. On such a view, all representational states have their content in virtue of their phenomenal features. Powerful arguments, however, focusing on the lack of generality (Berkeley 1975), ambiguity (Wittgenstein 1953) and non-compositionality (Fodor 1981c) of sensory and imagistic representations, as well as their unsuitability to function as logical (Frege 1918/1997, Geach 1957) or mathematical (Frege 1884/1953) concepts, and the symmetry of resemblance (Goodman 1976), convinced philosophers that no theory of mind can get by with only nonconceptual representations construed in this way.

Contemporary disagreement over nonconceptual representation concerns the existence and nature of phenomenal properties and the role they play in determining the content of sensory experience. Dennett (1988), for example, denies that there are such things as qualia at all; while Brandom (2002), McDowell (1994), Rey (1991) and Sellars (1956) deny that they are needed to explain the content of sensory experience. Among those who accept that experiences have phenomenal content, some (Dretske, Lycan, Tye) argue that it is reducible to a kind of intentional content, while others (Block, Loar, Peacocke) argue that it is irreducible. (See the discussion in the next section.)

There has also been dissent from the traditional claim that conceptual representations (thoughts, beliefs) lack phenomenology. Chalmers (1996), Flanagan (1992), Goldman (1993), Horgan and Tiensen (2003), Jackendoff (1987), Levine (1993, 1995, 2001), McGinn (1991a), Pitt (2004), Searle (1992), Siewert (1998) and Strawson (1994), claim that purely symbolic (conscious) representational states themselves have a (perhaps proprietary) phenomenology. If this claim is correct, the question of what role phenomenology plays in the determination of content rearises for conceptual representation; and the eliminativist ambitions of Sellars, Brandom, Rey, et al. would meet a new obstacle. (It would also raise prima facie problems for reductivist representationalism (see the next section).)

4. Representationalism and Phenomenalism

Among realists about phenomenal properties, the central division is between representationalists (also called "representationists" and "intentionalists") — e.g., Dretske (1995), Harman (1990), Leeds (1993), Lycan (1987, 1996), Rey (1991), Thau (2002), Tye (1995, 2000) — and phenomenalists (also called "phenomenists" and "qualia freaks") — e.g., Block (1996, 2003), Chalmers (1996, 2004), Evans (1982), Loar (2003a, 2003b), Peacocke (1983, 1989, 1992, 2001), Raffman (1995), Shoemaker (1990). Representationalists claim that the phenomenal character of a mental state is reducible to a kind of intentional content. Phenomenalists claim that the phenomenal character of a mental state is not so reducible.

The representationalist thesis is often formulated as the claim that phenomenal properties are representational or intentional. However, this formulation is ambiguous between a reductive and a non-reductive claim (though the term 'representationalism' is most often used for the reductive claim). (See Chalmers 2004.) On one hand, it could mean that the phenomenal content of an experience is a kind of intentional content (the properties it represents). On the other, it could mean that the (irreducible) phenomenal properties of an experience determine an intentional content. Representationalists such as Dretske, Lycan and Tye would assent to the former claim, whereas phenomenalists such as Block, Chalmers, Loar and Peacocke would assent to the latter. (Among phenomenalists, there is further disagreement about whether qualia are intrinsically representational (Loar) or not (Block, Peacocke).)

Most (reductive) representationalists are motivated by the conviction that one or another naturalistic explanation of intentionality (see the next section) is, in broad outline, correct, and by the desire to complete the naturalization of the mental by applying such theories to the problem of phenomenality. (Needless to say, most phenomenalists (Chalmers is the major exception) are just as eager to naturalize the phenomenal — though not in the same way.)

The main argument for representationalism appeals to the transparency of experience (cf. Tye 2000: 45-51). The properties that characterize what it's like to have a perceptual experience are presented in experience as properties of objects perceived: in attending to an experience, one seems to "see through it" to the objects and properties it is an experience of.[2] They are not presented as properties of the experience itself. If nonetheless they were properties of the experience, perception would be massively deceptive. But perception is not massively deceptive. According to the representationalist, the phenomenal character of an experience is due to its representing objective, non-experiential properties. (In veridical perception, these properties are locally instantiated; in illusion and hallucination, they are not.) On this view, introspection is indirect perception: one comes to know what phenomenal features one's experience has by coming to know what objective features it represents.

In order to account for the intuitive differences between conceptual and sensory representations, representationalists appeal to their structural or functional differences. Dretske (1995), for example, distinguishes experiences and thoughts on the basis of the origin and nature of their functions: an experience of a property P is a state of a system whose evolved function is to indicate the presence of P in the environment; a thought representing the property P, on the other hand, is a state of a system whose assigned (learned) function is to calibrate the output of the experiential system. Rey (1991) takes both thoughts and experiences to be relations to sentences in the language of thought, and distinguishes them on the basis of (the functional roles of) such sentences' constituent predicates. Lycan (1987, 1996) distinguishes them in terms of their functional-computational profiles. Tye (2000) distinguishes them in terms of their functional roles and the intrinsic structure of their vehicles: thoughts are representations in a language-like medium, whereas experiences are image-like representations consisting of "symbol-filled arrays." (Cf. the account of mental images in Tye 1991.)

Phenomenalists tend to make use of the same sorts of features (function, intrinsic structure) in explaining some of the intuitive differences between thoughts and experiences; but they do not suppose that such features exhaust the differences between phenomenal and non-phenomenal representations. For the phenomenalist, it is the phenomenal properties of experiences — qualia themselves — that constitute the fundamental difference between experience and thought. Peacocke (1992), for example, develops the notion of a perceptual "scenario" (an assignment of phenomenal properties to coordinates of a three-dimensional egocentric space), whose content is "correct" (a semantic property) if in the corresponding "scene" (the portion of the external world represented by the scenario) properties are distributed as their phenomenal analogues are in the scenario.

Another sort of representation championed by phenomenalists (e.g., Block, Chalmers (2003) and Loar (1996)) is the "phenomenal concept" — a conceptual/phenomenal hybrid consisting of a phenomenological "sample" (an image or an occurrent sensation) integrated with (or functioning as) a conceptual component. Phenomenal concepts are postulated to account for the apparent fact (among others) that, as McGinn (1991b) puts it, "you cannot form [introspective] concepts of conscious properties unless you yourself instantiate those properties." One cannot have a phenomenal concept of a phenomenal property P, and, hence, phenomenal beliefs about P, without having experience of P, because P itself is (in some way) constitutive of the concept of P. (Cf. Jackson 1982, 1986 and Nagel 1974.)

5. Imagery

Though imagery has played an important role in the history of philosophy of mind, the important contemporary literature on it is primarily psychological. In a series of psychological experiments done in the 1970s (summarized in Kosslyn 1980 and Shepard and Cooper 1982), subjects' response time in tasks involving mental manipulation and examination of presented figures was found to vary in proportion to the spatial properties (size, orientation, etc.) of the figures presented. The question of how these experimental results are to be explained has kindled a lively debate on the nature of imagery and imagination.

Kosslyn (1980) claims that the results suggest that the tasks were accomplished via the examination and manipulation of mental representations that themselves have spatial properties — i.e., pictorial representations, or images. Others, principally Pylyshyn (1979, 1981a, 1981b, 2003), argue that the empirical facts can be explained in terms exclusively of discursive, or propositional representations and cognitive processes defined over them. (Pylyshyn takes such representations to be sentences in a language of thought.)

The idea that pictorial representations are literally pictures in the head is not taken seriously by proponents of the pictorial view of imagery (see, e.g., Kosslyn and Pomerantz 1977). The claim is, rather, that mental images represent in a way that is relevantly like the way pictures represent. (Attention has been focused on visual imagery — hence the designation 'pictorial'; though of course there may be imagery in other modalities — auditory, olfactory, etc. — as well.)

The distinction between pictorial and discursive representation can be characterized in terms of the distinction between analog and digital representation (Goodman 1976). This distinction has itself been variously understood (Fodor & Pylyshyn 1981, Goodman 1976, Haugeland 1981, Lewis 1971, McGinn 1989), though a widely accepted construal is that analog representation is continuous (i.e., in virtue of continuously variable properties of the representation), while digital representation is discrete (i.e., in virtue of properties a representation either has or doesn't have) (Dretske 1981). (An analog/digital distinction may also be made with respect to cognitive processes (Block 1983).) On this understanding of the analog/digital distinction, imagistic representations, which represent in virtue of properties that may vary continuously (such as being more or less bright, loud, vivid, etc.), would be analog, while conceptual representations, whose properties do not vary continuously (a thought cannot be more or less about Elvis: either it is or it is not) would be digital.

It might be supposed that the pictorial/discursive distinction is best made in terms of the phenomenal/nonphenomenal distinction, but it is not obvious that this is the case. For one thing, there may be nonphenomenal properties of representations that vary continuously. Moreover, there are ways of understanding pictorial representation that presuppose neither phenomenality nor analogicity. According to Kosslyn (1980, 1982, 1983), a mental representation is "quasi-pictorial" when every part of the representation corresponds to a part of the object represented, and relative distances between parts of the object represented are preserved among the parts of the representation. But distances between parts of a representation can be defined functionally rather than spatially — for example, in terms of the number of discrete computational steps required to combine stored information about them. (Cf. Rey 1981.)

Tye (1991) proposes a view of images on which they are hybrid representations, consisting both of pictorial and discursive elements. On Tye's account, images are "(labeled) interpreted symbol-filled arrays." The symbols represent discursively, while their arrangement in arrays has representational significance (the location of each "cell" in the array represents a specific viewer-centered 2-D location on the surface of the imagined object).

6. Content Determination

The contents of mental representations are typically taken to be abstract objects (properties, relations, propositions, sets, etc.). A pressing question, especially for the naturalist, is how mental representations come to have their contents. Here the issue is not how to naturalize content (abstract objects can't be naturalized), but, rather, how to provide a naturalistic account of the content-determining relations between mental representations and the abstract objects they express. There are two basic types of contemporary naturalistic theories of content-determination, causal-informational and functional.[3]

Causal-informational theories (Dretske 1981, 1988, 1995) hold that the content of a mental representation is grounded in the information it carries about what does (Devitt 1996) or would (Fodor 1987, 1990a) cause it to occur.[4] There is, however, widespread agreement that causal-informational relations are not sufficient to determine the content of mental representations. Such relations are common, but representation is not. Tree trunks, smoke, thermostats and ringing telephones carry information about what they are causally related to, but they do not represent (in the relevant sense) what they carry information about. Further, a representation can be caused by something it does not represent, and can represent something that has not caused it.

The main attempts to specify what makes a causal-informational state a mental representation are Asymmetric Dependency Theories (e.g., Fodor 1987, 1990a, 1994) and Teleological Theories (Fodor 1990b, Millikan 1984, Papineau 1987, Dretske 1988, 1995). The Asymmetric Dependency Theory distinguishes merely informational relations from representational relations on the basis of their higher-order relations to each other: informational relations depend upon representational relations, but not vice-versa. For example, if tokens of a mental state type are reliably caused by horses, cows-on-dark-nights, zebras-in-the-mist and Great Danes, then they carry information about horses, etc. If, however, such tokens are caused by cows-on-dark-nights, etc. because they were caused by horses, but not vice versa, then they represent horses (or the property horse).

According to Teleological Theories, representational relations are those a representation-producing mechanism has the selected (by evolution or learning) function of establishing. For example, zebra-caused horse-representations do not mean zebra, because the mechanism by which such tokens are produced has the selected function of indicating horses, not zebras. The horse-representation-producing mechanism that responds to zebras is malfunctioning.

Functional theories (Block 1986, Harman 1973) hold that the content of a mental representation is grounded in its (causal, computational, inferential) relations to other mental representations. They differ on whether relata should include all other mental representations or only some of them, and on whether to include external states of affairs. The view that the content of a mental representation is determined by its inferential/computational relations with all other representations is holism; the view that it is determined by relations to only some other mental states is localism (or molecularism). (The view that the content of a mental state depends on none of its relations to other mental states is atomism.) Functional theories that recognize no content-determining external relata have been called solipsistic (Harman 1987). Some theorists posit distinct roles for internal and external connections, the former determining semantic properties analogous to sense, the latter determining semantic properties analogous to reference (McGinn 1982, Sterelny 1989).

(Reductive) representationalists (Dretske, Lycan, Tye) usually take one or another of these theories to provide an explanation of the (non-conceptual) content of experiential states. They thus tend to be externalists (see the next section) about phenomenological as well as conceptual content. Phenomenalists and non-reductive representationalists (Block, Chalmers, Loar, Peacocke, Siewert), on the other hand, take it that the representational content of such states is (at least in part) determined by their intrinsic phenomenal properties. Further, those who advocate a phenomenology-based approach to conceptual content (Horgan and Tiensen, Loar, Pitt, Searle, Siewert) also seem to be committed to internalist individuation of the content (if not the reference) of such states.

7. Internalism and Externalism

Generally, those who, like informational theorists, think relations to one's (natural or social) environment are (at least partially) determinative of the content of mental representations are externalists (e.g., Burge 1979, 1986b, McGinn 1977, Putnam 1975), whereas those who, like some proponents of functional theories, think representational content is determined by an individual's intrinsic properties alone, are internalists (or individualists; cf. Putnam 1975, Fodor 1981b).

This issue is widely taken to be of central importance, since psychological explanation, whether commonsense or scientific, is supposed to be both causal and content-based. (Beliefs and desires cause the behaviors they do because they have the contents they do. For example, the desire that one have a beer and the beliefs that there is beer in the refrigerator and that the refrigerator is in the kitchen may explain one's getting up and going to the kitchen.) If, however, a mental representation's having a particular content is due to factors extrinsic to it, it is unclear how its having that content could determine its causal powers, which, arguably, must be intrinsic (see Stich 1983, Fodor 1982, 1987, 1994). Some who accept the standard arguments for externalism have argued that internal factors determine a component of the content of a mental representation. They say that mental representations have both "narrow" content (determined by intrinsic factors) and "wide" or "broad" content (determined by narrow content plus extrinsic factors). (This distinction may be applied to the sub-personal representations of cognitive science as well as to those of commonsense psychology. See von Eckardt 1993: 189.)

Narrow content has been variously construed. Putnam (1975), Fodor (1982: 114; 1994: 39ff), and Block (1986: 627ff), for example, seem to understand it as something like de dicto content (i.e., Fregean sense, or perhaps character, à la Kaplan 1989). On this construal, narrow content is context-independent and directly expressible. Fodor (1987) and Block (1986), however, have also characterized narrow content as radically inexpressible. On this construal, narrow content is a kind of proto-content, or content-determinant, and can be specified only indirectly, via specifications of context/wide-content pairings. On both construals, narrow contents are characterized as functions from context to (wide) content. The narrow content of a representation is determined by properties intrinsic to it or its possessor such as its syntactic structure or its intramental computational or inferential role (or its phenomenology — see, e.g., Searle 1992, Siewert 1998, Pitt forthcoming).

Burge (1986b) has argued that causation-based worries about externalist individuation of psychological content, and the introduction of the narrow notion, are misguided. Fodor (1994, 1998) has more recently urged that a scientific psychology might not need narrow content in order to supply naturalistic (causal) explanations of human cognition and action, since the sorts of cases they were introduced to handle, viz., Twin-Earth cases and Frege cases, are either nomologically impossible or dismissible as exceptions to non-strict psychological laws.

8. The Computational Theory of Mind

The leading contemporary version of the Representational Theory of Mind, the Computational Theory of Mind (CTM), claims that the brain is a kind of computer and that mental processes are computations. According to CTM, cognitive states are constituted by computational relations to mental representations of various kinds, and cognitive processes are sequences of such states.

CTM develops RTM by attempting to explain all psychological states and processes in terms of mental representation. In the course of constructing detailed empirical theories of human and animal cognition and developing models of cognitive processes implementable in artificial information processing systems, cognitive scientists have proposed a variety of types of mental representations. While some of these may be suited to be mental relata of commonsense psychological states, some — so-called "subpersonal" or "sub-doxastic" representations — are not. Though many philosophers believe that CTM can provide the best scientific explanations of cognition and behavior, there is disagreement over whether such explanations will vindicate the commonsense psychological explanations of prescientific RTM.

According to Stich's (1983) Syntactic Theory of Mind, for example, computational theories of psychological states should concern themselves only with the formal properties of the objects those states are relations to. Commitment to the explanatory relevance of content, however, is for most cognitive scientists fundamental (Fodor 1981a, Pylyshyn 1984, Von Eckardt 1993). That mental processes are computations, that computations are rule-governed sequences of semantically evaluable objects, and that the rules apply to the symbols in virtue of their content, are central tenets of mainstream cognitive science.

Explanations in cognitive science appeal to many different kinds of mental representation, including, for example, the "mental models" of Johnson-Laird 1983, the "retinal arrays," "primal sketches" and "2½-D sketches" of Marr 1982, the "frames" of Minsky 1974, the "sub-symbolic" structures of Smolensky 1989, the "quasi-pictures" of Kosslyn 1980, and the "interpreted symbol-filled arrays" of Tye 1991, in addition to representations that may be appropriate to the explanation of commonsense psychological states. Computational explanations have been offered of, among other mental phenomena, belief (Fodor 1975, Field 1978), visual perception (Marr 1982, Osherson, et al. 1990), rationality (Newell and Simon 1972, Fodor 1975, Johnson-Laird and Wason 1977), language learning and use (Chomsky 1965, Pinker 1989), and musical comprehension (Lerdahl and Jackendoff 1983).

A fundamental disagreement among proponents of CTM concerns the realization of personal-level representations (e.g., thoughts) and processes (e.g., inferences) in the brain. The central debate here is between proponents of Classical Architectures and proponents of Connectionist Architectures.

The classicists (e.g., Turing 1950, Fodor 1975, Fodor and Pylyshyn 1988, Marr 1982, Newell and Simon 1976) hold that mental representations are symbolic structures, which typically have semantically evaluable constituents, and that mental processes are rule-governed manipulations of them that are sensitive to their constituent structure. The connectionists (e.g., McCulloch & Pitts 1943, Rumelhart 1989, Rumelhart and McClelland 1986, Smolensky 1988) hold that mental representations are realized by patterns of activation in a network of simple processors ("nodes") and that mental processes consist of the spreading activation of such patterns. The nodes themselves are, typically, not taken to be semantically evaluable; nor do the patterns have semantically evaluable constituents. (There are, however, "localist" versions of Connectionism on which individual nodes are taken to have semantic properties (e.g., Ballard 1986, Ballard & Hayes 1984), though it is arguable that localist theories are neither definitive nor representative of the connectionist program (Smolensky 1988, 1991, Chalmers 1993).)

Classicists are motivated (in part) by properties thought seems to share with language. Fodor's Language of Thought Hypothesis (LOTH) (Fodor 1975, 1987), according to which the system of mental symbols constituting the neural basis of thought is structured like a language, provides a well-worked-out version of the classical approach as applied to commonsense psychology. (Cf. also Marr 1982 for an application of the classical approach in scientific psychology.) According to the LOTH, the potential infinity of complex representational mental states is generated from a finite stock of primitive representational states, in accordance with recursive formation rules. This combinatorial structure accounts for the productivity and systematicity of the system of mental representations. As in the case of symbolic languages, including natural languages (though Fodor does not suppose either that the LOTH explains only linguistic capacities or that only verbal creatures have this sort of cognitive architecture), these properties of thought are explained by appeal to the content of the representational units and their combinability into contentful complexes. That is, the semantics of both language and thought is compositional: the content of a complex representation is determined by the contents of its constituents and their structural configuration.
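To make the combinatorial idea concrete, here is a minimal sketch of a language-of-thought-style symbol system: finitely many primitives, a recursive formation rule, and a compositional content function. The encoding is illustrative only, not Fodor's own formalism, and it borrows the example of ocelots taking snuff that appears later in this entry.

```python
# A minimal sketch of a language-of-thought-style symbol system:
# finitely many primitives, a recursive formation rule, and a
# compositional semantics. The encoding is illustrative, not Fodor's.

lexicon = {"ocelots": "OCELOTS", "take": "TAKE", "snuff": "SNUFF"}

def predicate(verb, subj, obj):
    """Formation rule: build a complex symbol from constituents.
    Because it can apply recursively, finitely many primitives
    generate a potential infinity of complex representations."""
    return ("PRED", verb, subj, obj)

def content(rep):
    """Compositional semantics: the content of a complex is fixed by
    the contents of its constituents and their configuration."""
    if isinstance(rep, str):                  # primitive symbol
        return lexicon[rep]
    _, verb, subj, obj = rep                  # complex symbol
    return f"{content(verb)}({content(subj)}, {content(obj)})"

thought = predicate("take", "ocelots", "snuff")
print(content(thought))   # TAKE(OCELOTS, SNUFF)
# Systematicity: the same rule immediately yields the related thought
# predicate("take", "snuff", "ocelots"), with content TAKE(SNUFF, OCELOTS).
```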

Connectionists are motivated mainly by a consideration of the architecture of the brain, which apparently consists of layered networks of interconnected neurons. They argue that this sort of architecture is unsuited to carrying out classical serial computations. For one thing, processing in the brain is typically massively parallel. In addition, the elements whose manipulation drives computation in connectionist networks (principally, the connections between nodes) are neither semantically compositional nor semantically evaluable, as they are on the classical approach. This contrast with classical computationalism is often characterized by saying that representation is, with respect to computation, distributed as opposed to local: representation is local if it is computationally basic; and distributed if it is not. (Another way of putting this is to say that for classicists mental representations are computationally atomic, whereas for connectionists they are not.)
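The local/distributed contrast can be pictured with a toy vector encoding; the following is an assumption-laden illustration of my own, not drawn from the entry.

```python
# Toy contrast between local and distributed representation.
# "Units" are positions in a list; items and activations are made up.

# Local coding: each item has its own dedicated unit, so the
# representation is computationally atomic and each unit is
# individually interpretable.
local_cat = [1, 0, 0]   # unit 0 simply means "cat"
local_dog = [0, 1, 0]   # unit 1 simply means "dog"

# Distributed coding: each item is a pattern of activation over the
# same units; no single unit is semantically evaluable on its own,
# and patterns for similar items overlap.
distributed_cat = [0.9, 0.2, 0.7]
distributed_dog = [0.8, 0.3, 0.6]
```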

Moreover, connectionists argue that information processing as it occurs in connectionist networks more closely resembles some features of actual human cognitive functioning. For example, whereas on the classical view learning involves something like hypothesis formation and testing (Fodor 1981c), on the connectionist model it is a matter of the evolving distribution of "weights" (connection strengths) between nodes, and typically does not involve the formulation of hypotheses regarding the identity conditions for the objects of knowledge. The connectionist network is "trained up" by repeated exposure to the objects it is to learn to distinguish; and, though networks typically require many more exposures to the objects than humans do, this seems to model at least one feature of this type of human learning quite well. (Cf. the sonar example in Churchland 1989.)
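A minimal worked example may help here. The following sketch, a toy perceptron with made-up data standing in for connectionist learning generally, shows learning as nothing but the evolution of connection weights under repeated exposure to examples; no hypotheses are formulated or tested.

```python
import numpy as np

# Toy connectionist learning: the delta rule adjusts connection
# weights after each exposure to a labeled example. Data and
# hyperparameters are arbitrary illustrations.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)    # the distinction to be learned

w = np.zeros(2)
b = 0.0
lr = 0.1
for epoch in range(20):                      # "trained up" by repeated exposure
    for xi, yi in zip(X, y):
        pred = 1.0 if xi @ w + b > 0 else 0.0
        err = yi - pred
        w += lr * err * xi                   # weight (strength) update
        b += lr * err

accuracy = np.mean([(xi @ w + b > 0) == yi for xi, yi in zip(X, y)])
print(f"accuracy after training: {accuracy:.2f}")
```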

Further, degradation in the performance of such networks in response to damage is gradual, not sudden as in the case of a classical information processor, and hence more accurately models the loss of human cognitive function as it typically occurs in response to brain damage. It is also sometimes claimed that connectionist systems show the kind of flexibility in response to novel situations typical of human cognition — situations in which classical systems are relatively "brittle" or "fragile."
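The graceful-degradation point can be illustrated with a toy lesioning experiment; here random weights stand in for a trained network, so everything below is illustrative only.

```python
import numpy as np

# Graceful degradation, sketched: zeroing out ("lesioning")
# progressively more connections perturbs the network's output
# progressively, rather than abruptly destroying it.
rng = np.random.default_rng(1)
W = rng.normal(size=(50, 10))        # stand-in for trained connection weights
x = rng.normal(size=50)              # an arbitrary input pattern
baseline = W.T @ x

for frac in (0.0, 0.25, 0.5, 0.75):
    damaged = W.copy()
    lesion = rng.random(W.shape) < frac   # knock out a fraction of connections
    damaged[lesion] = 0.0
    drift = np.linalg.norm(damaged.T @ x - baseline)
    print(f"{int(frac * 100):3d}% lesioned -> output drift {drift:.2f}")
```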

Some philosophers have maintained that connectionism entails that there are no propositional attitudes. Ramsey, Stich and Garon (1990) have argued that if connectionist models of cognition are basically correct, then there are no discrete representational states as conceived in ordinary commonsense psychology and classical cognitive science. Others, however (e.g., Smolensky 1989), hold that certain types of higher-level patterns of activity in a neural network may be roughly identified with the representational states of commonsense psychology. Still others (e.g., Fodor & Pylyshyn 1988, Heil 1991, Horgan and Tienson 1996) argue that language-of-thought style representation is both necessary in general and realizable within connectionist architectures. (MacDonald & MacDonald 1995 collects the central contemporary papers in the classicist/connectionist debate, and provides useful introductory material as well. See also Von Eckardt 2004.)

Stich (1983) accepts that mental processes are computational but denies that computations are sequences of mental representations; others accept the notion of mental representation but deny that CTM provides the correct account of mental states and processes.

Van Gelder (1995) denies that psychological processes are computational. He argues that cognitive systems are dynamic, and that cognitive states are not relations to mental symbols, but quantifiable states of a complex system consisting of (in the case of human beings) a nervous system, a body and the environment in which they are embedded. Cognitive processes are not rule-governed sequences of discrete symbolic states, but continuous, evolving total states of dynamic systems determined by continuous, simultaneous and mutually determining states of the systems' components. Representation in a dynamic system is essentially information-theoretic, though the bearers of information are not symbols, but state variables or parameters. (See also Port and Van Gelder 1995; Clark 1997a, 1997b.)
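For contrast with the computational picture, here is a toy dynamical-system sketch in this spirit. The equations are arbitrary illustrations, not a model from the literature: the "cognitive state" is a point in a continuous state space whose components evolve simultaneously and mutually determine one another.

```python
import numpy as np

# A toy dynamical system: cognition as the continuous evolution of
# quantifiable state variables, not a sequence of discrete symbolic
# states. The equations below are made up for illustration.
def step(state, dt=0.01):
    x, y = state                      # two mutually determining variables
    dx = -y + 0.1 * x * (1 - x**2)    # each variable's rate of change
    dy = x + 0.1 * y * (1 - y**2)     # depends on the other's current value
    return np.array([x + dx * dt, y + dy * dt])

state = np.array([1.0, 0.0])
for t in range(1000):                 # continuous evolution, discretized
    state = step(state)
print(state)
```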

Horst (1996), on the other hand, argues that though computational models may be useful in scientific psychology, they are of no help in achieving a philosophical understanding of the intentionality of commonsense mental states. CTM attempts to reduce the intentionality of such states to the intentionality of the mental symbols they are relations to. But, Horst claims, the relevant notion of symbolic content is essentially bound up with the notions of convention and intention. So CTM involves itself in a vicious circularity: the very properties that are supposed to be reduced are (tacitly) appealed to in the reduction.

9. Thought and Language

To say that a mental object has semantic properties is, paradigmatically, to say that it may be about, or be true or false of, an object or objects, or that it may be true or false simpliciter. Suppose I think that ocelots take snuff. I am thinking about ocelots, and if what I think of them (that they take snuff) is true of them, then my thought is true. According to RTM such states are to be explained as relations between agents and mental representations. To think that ocelots take snuff is to token in some way a mental representation whose content is that ocelots take snuff. On this view, the semantic properties of mental states are the semantic properties of the representations they are relations to.

Linguistic acts seem to share such properties with mental states. Suppose I say that ocelots take snuff. I am talking about ocelots, and if what I say of them (that they take snuff) is true of them, then my utterance is true. Now, to say that ocelots take snuff is (in part) to utter a sentence that means that ocelots take snuff. Many philosophers have thought that the semantic properties of linguistic expressions are inherited from the intentional mental states they are conventionally used to express (Grice 1957, Fodor 1978, Schiffer 1972/1988, Searle 1983). On this view, the semantic properties of linguistic expressions are the semantic properties of the representations that are the mental relata of the states they are conventionally used to express.

Others, however, e.g., Davidson (1975, 1982), have suggested that the kind of thought human beings are capable of is not possible without language, so that the dependency might be reversed, or somehow mutual (see also Sellars 1956; but see Martin 1987 for a defense of the claim that thought is possible without language, and Chisholm and Sellars 1958). Schiffer (1987) subsequently despaired of the success of what he calls "Intention-Based Semantics."

It is also widely held that in addition to having such properties as reference, truth-conditions and truth — so-called extensional properties — expressions of natural languages also have intensional properties, in virtue of expressing properties or propositions — i.e., in virtue of having meanings or senses, where two expressions may have the same reference, truth-conditions or truth value, yet express different properties or propositions (Frege 1892/1997). If the semantic properties of natural-language expressions are inherited from the thoughts and concepts they express (or vice versa, or both), then an analogous distinction may be appropriate for mental representations.
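Frege's contrast can be put in miniature as follows. This is a toy illustration with made-up entities, adapting the classic "creature with a heart"/"creature with kidneys" example: two predicates can coincide in extension while differing in the property they express.

```python
# Toy illustration: same extension, different intensional properties.
# The "world" and its creatures are made up for the example.
creatures = [
    {"name": "a", "has_heart": True, "has_kidneys": True},
    {"name": "b", "has_heart": True, "has_kidneys": True},
]

def has_heart(c):      # expresses one property
    return c["has_heart"]

def has_kidneys(c):    # expresses a different property
    return c["has_kidneys"]

ext_heart = {c["name"] for c in creatures if has_heart(c)}
ext_kidneys = {c["name"] for c in creatures if has_kidneys(c)}

# In this world the two predicates are true of exactly the same things,
# yet the properties (senses) they express remain distinct.
print(ext_heart == ext_kidneys)   # True
```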


Acknowledgments

Thanks to Brad Armour-Garb, Mark Balaguer, Dave Chalmers, Jim Garson, John Heil, Jeff Poland, Bill Robinson, Galen Strawson, Adam Vinueza and (especially) Barbara Von Eckardt for comments on earlier versions of this entry.

Copyright © 2004 by
David Pitt <dalanpitt@yahoo.com>