Interactions Between Perception and Concepts
One of our research goals is to expand our understanding of perception, cognition, and their interactions. Traditionally, work in human perception has been disconnected from research on more sophisticated cognitive functioning. One of the first distinctions that an undergraduate psychology student learns is between “low-level,” simple, “merely” perceptual processes and “high-level,” “true” cognition. This distinction is echoed by philosophers who differentiate sense data from cognitive inferences about sense data. Our research is based on the premise that this distinction is misleading and counterproductive. Perception is far more sophisticated than usually thought, raising it to the level of “high-level” cognition. Reciprocally, “high-level” cognition is fundamentally grounded in our perceptual abilities. Even when we strive for disembodied, symbolic abstraction, our cognitive processes retain their connection to perception. Our research is an attempt to build bridges between our perceptual and conceptual systems.
Much of our current research argues for an influence of concept learning on our perceptual abilities. Certainly, in many domains, experts (radiologists, wine tasters, chicken sexers, chess masters, and fishers) seem to have developed specialized perceptual tools for analyzing the stimuli in their domain of expertise. We are interested in describing mechanisms of perceptual learning and in implementing these mechanisms in neural network models.
One mechanism of perceptual change is selective attention, by which perceptual dimensions that are relevant for an important categorization become sensitized. One general experimental method that my laboratory has often used to explore selective attention is to first have people learn a categorization for a substantial length of time (anywhere from half an hour to 20 hours), and then give them a perceptually based test to see whether this initial category learning task has altered their later performance. Goldstone (1994) found that dimensions relevant during the initial categorization phase become sensitized for a subsequent task in which subjects judge whether two stimuli are physically identical. A related approach was taken by Goldstone (1995), who showed that people automatically categorize objects into groups (based on shape), and that these categories cause systematic distortions in people’s perception of color. In addition to selective attention to entire dimensions, we have also found evidence for selective attention to regions within a perceptual dimension that lie near the boundary between two learned categories (Goldstone, 1994; Goldstone, Steyvers, & Larimer, 1996).
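The sensitization idea can be sketched as attention-weighted similarity in the spirit of exemplar models such as Kruschke's ALCOVE. This is a minimal illustration, not the laboratory's fitted model; the dimension values, update rule, and learning rate below are all assumptions made for the example.

```python
import numpy as np

def weighted_similarity(x, exemplar, weights, c=1.0):
    """Similarity falls off exponentially with attention-weighted distance."""
    return np.exp(-c * np.sum(weights * np.abs(x - exemplar)))

# Two dimensions; only dimension 0 distinguishes the two category exemplars.
exemplar_a = np.array([0.2, 0.5])
exemplar_b = np.array([0.8, 0.5])
weights = np.array([0.5, 0.5])      # start with attention spread evenly

# Caricature of sensitization: nudge attention toward the diagnostic
# dimension while keeping total attention fixed (a capacity limit).
lr = 0.1
separation = np.abs(exemplar_a - exemplar_b)      # only dim 0 separates them
for _ in range(20):
    weights += lr * (separation - weights * separation.sum())
    weights = np.clip(weights, 0.0, None)
    weights /= weights.sum()

print(np.round(weights, 2))          # attention has shifted toward dimension 0

# An item that matches exemplar A only on the attended dimension still looks
# very similar to A, because the unattended dimension barely counts.
item = np.array([0.2, 0.9])
print(round(float(weighted_similarity(item, exemplar_a, weights)), 2))
```

The normalization step is what makes attention selective rather than merely additive: sensitizing one dimension necessarily de-emphasizes the other.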
Two other, seemingly contradictory mechanisms of perceptual learning are differentiation and unitization. Through differentiation, perceptual dimensions that were originally psychologically fused together become separated as a result of training on a categorization that makes one but not the other dimension relevant (Goldstone, 1994; Goldstone & Steyvers, 2001). Relatedly, we have found that the way in which an object is broken down into parts depends on how relevant those parts have been for a previous categorization (Pevtzow & Goldstone, 1994). Natural ways of perceiving an object can be abandoned if less natural ways involve parts that have been useful for categorization. Through unitization, a single functional unit is created for a complex pattern, and this unit can be identified without an analytic process of breaking the pattern down into components and identifying each component (Goldstone, 2000; Goldstone, Steyvers, Spencer-Smith, & Kersten, 2000). There is an apparent contradiction between experience creating larger “chunks” via unitization and dividing an object into more clearly delineated components via differentiation. This incongruity can be resolved at a more abstract level: unitization and differentiation are both processes that build representations of the appropriate size for the tasks at hand (Goldstone, 1998).
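The unitization contrast can be caricatured in a few lines of code. This is an illustration only, not a model from the papers cited above, and the component names are hypothetical: the analytic route verifies each component in turn, so its cost grows with the number of parts, while the unitized route compares the whole pattern against one stored chunk.

```python
# Hypothetical unitized chunk for a complex pattern (names are invented).
CHUNK = frozenset({"left_hook", "right_arc", "top_bar"})

def analytic_identify(stimulus_parts):
    """Componential route: check parts one at a time, counting comparisons."""
    steps = 0
    for part in CHUNK:
        steps += 1
        if part not in stimulus_parts:
            return False, steps
    return True, steps

def unitized_identify(stimulus_parts):
    """Unitized route: a single comparison against the stored chunk."""
    return frozenset(stimulus_parts) == CHUNK, 1

stimulus = {"left_hook", "right_arc", "top_bar"}
print(analytic_identify(stimulus))   # (True, 3): cost scales with the parts
print(unitized_identify(stimulus))   # (True, 1): one-step identification
```

In this caricature, both routes give the same answer; what experience changes is the cost of reaching it, which is one way to read the claim that unitization and differentiation both build appropriately sized representations.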
A final mechanism of perceptual learning is feature creation: the development of functionally novel perceptual organizations. We have made both theoretical (Goldstone & Schyns, 1994; Schyns, Goldstone, & Thibaut, 1998) and empirical arguments (Goldstone, 2003; Goldstone et al., 2000; Goldstone, Lippa, & Shiffrin, 2001) that perception supports cognition by flexibly adapting to the requirements imposed by cognitive tasks. Perception may not be stable, but its departures from stability may facilitate rather than hamper its ability to support cognition. Rather than creating concepts by composing a fixed set of primitive elements, it appears that novel elements can be created if they are needed for concept learning and if they obey certain psychophysical constraints.
Our laboratory has built neural network models of the perceptual flexibility that we have observed in the laboratory. The most important component of these models is a set of “detector” units that intervene between the inputs (photographs of the stimuli shown to subjects) and the outputs (categorization responses or perceptual judgments). The system cannot view the world directly, but must instead view the world filtered through its detectors. Our detectors are built to be flexibly tuned to the stimuli and tasks confronting the system. For example, detectors concentrate where they are needed most for a task (Goldstone, Steyvers, & Larimer, 1996). More recent work has shown how a neural network, when given a set of input patterns, can create a set of building blocks that, when combined, can reconstruct those input patterns (Goldstone, 2003).
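One simple way to see how detectors can come to concentrate where they are needed is competitive learning. The sketch below is illustrative, not the published model: the best-matching detector is tuned a little toward each input, so detectors migrate into the dense regions of stimulus space. All dimensions, cluster locations, and learning parameters are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# 100 two-dimensional "stimuli" falling into two dense regions of input space.
stimuli = np.vstack([rng.normal(0.2, 0.02, (50, 2)),
                     rng.normal(0.8, 0.02, (50, 2))])

# Initialize four detectors on randomly chosen stimuli.
detectors = stimuli[rng.choice(len(stimuli), size=4, replace=False)].copy()
lr = 0.2

for _ in range(100):                  # repeated exposure to the stimulus set
    for x in rng.permutation(stimuli):
        # The detector closest to the input wins and is tuned toward it.
        winner = np.argmin(np.linalg.norm(detectors - x, axis=1))
        detectors[winner] += lr * (x - detectors[winner])

print(np.round(detectors, 2))        # detectors end up near the dense regions
```

Because only the winning detector moves, detectors effectively divide the stimulus space among themselves, ending up where stimuli actually occur rather than spread uniformly over the space.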
There are two very different ideas about what concepts are like in cognitive science. Linguists and artificial intelligence researchers often stress the interrelations between concepts: concepts are defined in terms of their connections to other concepts in a system. On the other hand, most work on concept learning in psychology has stressed relatively isolated representations of concepts (Goldstone & Kersten, 2003). We have developed a neural network model that incorporates both isolated and interrelated aspects of concepts, and we are currently testing this model empirically. We have also developed a number of measures of the isolatedness/interrelatedness of a concept, and a number of manipulations that can alter the degree of interrelatedness of laboratory-trained concepts (Goldstone, 1996a; Goldstone, Steyvers, & Rogosky, 2003).
A common theme of my research on concept learning is that although our concepts are abstract, they are also inherently connected to our perceptual world (see Goldstone & Barsalou, 1998; Goldstone, 1994a). Many of the devices that we use for analyzing our perceptual world, such as spatial organization (Goldstone, Steyvers, Spencer-Smith, & Kersten, 2000) and selective attention (Kersten, Goldstone, & Shaffert, 1998), are borrowed for more abstract conceptual tasks involving language, reasoning, and decision making.
An illustrated guide to ways that concepts influence our perception (made perceptually manifest by Joe Lee, conceived by Rob Goldstone)
- Attention Weighting
- Categorical Perception
- Assimilation and Contrast
Our Selected Papers Relevant to Perceptual Learning, Conceptual Learning, and Conceptual Representation
Visit our paper repository for abstracts. Clicking on a paper below downloads a PDF version; the repository has additional formats.
Goldstone, R. L., Feng, Y., & Rogosky, B. (in press). In D. Pecher & R. Zwaan (Eds.) Grounding cognition: The role of perception and action in memory, language, and thinking. Cambridge: Cambridge University Press.
Goldstone, R. L. (2004). Believing is seeing. American Psychological Society Observer, 17, 23-26. Available online at: http://www.psychologicalscience.org/observer/getArticle.cfm?id=1595
Goldstone, R. L. (2003). Learning to perceive while perceiving to learn. In R. Kimchi, M. Behrmann, & C. Olson (Eds.) Perceptual organization in vision: Behavioral and neural perspectives. Mahwah, New Jersey: Lawrence Erlbaum Associates. (pp. 233-278).
Goldstone, R. L., & Johansen, M. K. (2003). Conceptual development from origins to asymptotes. In D. Rakison & L. Oakes (Eds.) Categories and concepts in early development. (pp. 403-418). Oxford, England: Oxford University Press.
Goldstone, R. L., & Kersten, A. (2003). Concepts and Categories. In A. F. Healy & R. W. Proctor (Eds.) Comprehensive handbook of psychology, Volume 4: Experimental psychology. (pp. 591-621). New York: Wiley.
Lippa, Y., & Goldstone, R. L. (2001). The acquisition of automatic response biases through stimulus-response mapping and categorization determined by a compatibility task. Memory & Cognition, 29, 1051-1060.
Goldstone, R. L. (2000). A neural network model of concept-influenced segmentation. Proceedings of the Twenty-second Annual Conference of the Cognitive Science Society. Hillsdale, New Jersey: Lawrence Erlbaum Associates. (pp. 172-177).
Goldstone, R. L., Steyvers, M., Spencer-Smith, J., & Kersten, A. (2000). Interactions between perceptual and conceptual learning. In E. Dietrich & A. B. Markman (Eds.) Cognitive dynamics: Conceptual change in humans and machines. Mahwah, New Jersey: Lawrence Erlbaum Associates. (pp. 191-228).
Goldstone, R. L., Schyns, P. G., & Medin, D. L. (1997). Learning to bridge between perception and cognition. In R. L. Goldstone, P. G. Schyns, & D. L. Medin (Eds.) Psychology of learning and motivation: Perceptual learning, Vol. 36. (pp. 1-14). San Diego, CA: Academic Press.
Goldstone, R. L., Steyvers, M., & Larimer, K. (1996). Categorical perception of novel dimensions. Proceedings of the Eighteenth Annual Conference of the Cognitive Science Society. (pp. 243-248). Hillsdale, New Jersey: Lawrence Erlbaum Associates.
Other Related Research on Interactions Between Concepts and Perception
Larry Barsalou has developed and empirically tested Perceptual Symbols Theory, according to which even abstract concepts are grounded in our perceptual-motor systems.
Lera Boroditsky has explored how our language and conceptual systems influence our perceptual abilities.
Stevan Harnad has done computational modeling of acquired categorical perception, and philosophical work on perceptually grounded symbols.
A large, distributed group in England, including Ian Davies, Jules Davidoff, Emre Ozgen, Philippe Schyns, Paul Sowden, and Deb Roberson, is studying cross-cultural, experiential, and linguistic influences on color perception, object recognition, and expert perception.
Back in the United States, the Perceptual Expertise Network, including Isabel Gauthier, Daniel Bub, Gary Cottrell, Tom Palmeri, Robert Schultz, David Sheinberg, Jim Tanaka, and Michael Tarr, is exploring the development of perceptual expertise from behavioral, computational, and neural perspectives.
Marlene Behrmann has done considerable research on the influences of familiarity and experience on perceptual and attentional tasks. She has collaborated with Michael Mozer and Richard Zemel in building computational models of perceptual learning.