Cognitive Psychology

Cognitive Psychology Theories

The primary goal of cognitive psychology is to understand mental activity through the scientific method. Because mental activity mediates between the stimuli presented to a person and the person's response, and is therefore not directly observable, cognitive science is heavily theory laden. Theories attempt to explain the results of a large number of studies and to generate predictions that can be directly tested. A good theory should reduce complex behavior to a limited set of principles that explain why some phenomena occur in some circumstances and not in others. There are, however, some noteworthy general limitations to theories. Because most cognitive theory is based on experimentation, in which independent variables are manipulated and their influence on dependent variables is measured, there is an inherent limit on how well one can model the structures and processes intervening between the manipulations and behavior. Indeed, Anderson (1976) has argued that behavioral data may not allow one to distinguish between theories that assume very different representations and processes. Theories must then be guided by additional criteria such as parsimony, effectiveness, generality, and accuracy.

Given the difficulty of cognitive theory development, how does one build confidence in a theory? Converging operations is a method that cognitive scientists have used extensively to discriminate among alternative theoretical accounts of particular patterns of data (Garner, Hake, & Eriksen, 1956). Converging operations are two or more experimental operations that together eliminate an alternative theoretical account of a set of data. If Theory A is consistently supported after being pitted against reasonable competing accounts, then confidence in Theory A increases.

To give the reader some appreciation for cognitive theories, a brief overview of some of the theoretical issues addressed in cognitive psychology will be presented. Obviously, it would be impossible to cover the richness of theory development in such a limited space; we have therefore chosen a few issues that have stirred controversy in the field.

Bottom-up vs. Interactive Models of Pattern Recognition

Models of perception attempt to explain, in large part, how patterns are recognized. Our intuitions might suggest the following "bottom-up" stream of events: patterns in the environment activate sensory receptive systems (e.g., eyes and ears), and these systems provide signals that are transformed into higher-level representations carrying information about the identity of a stimulus pattern. For example, the pandemonium model of letter recognition (Selfridge & Neisser, 1960) is a classic bottom-up feature-detection model in which stimuli first activate a set of feature detectors (e.g., for vertical lines, horizontal lines, oblique patterns, and diagonals), and these feature detectors combine to activate relevant letters (e.g., the letter E would be activated by the presence of three horizontal lines and one vertical line). Ultimately, the most activated letter is selected as the target to report. Interestingly, not long after Selfridge's model was introduced, electrophysiological studies provided converging evidence for feature-like detectors in nonhuman species (e.g., Hubel & Wiesel, 1962).
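The logic of pandemonium-style feature detection can be sketched in a few lines of code. This is a minimal illustration, not the original model: the feature inventory below is hypothetical and far smaller than Selfridge's, and the scoring rule is a simple match-minus-mismatch count.

```python
# Hypothetical feature counts for a few letters (the real model used
# a much richer feature set).
FEATURES = {
    "E": {"vertical": 1, "horizontal": 3},
    "F": {"vertical": 1, "horizontal": 2},
    "T": {"vertical": 1, "horizontal": 1},
}

def recognize(stimulus_features):
    """Each 'letter demon' shouts in proportion to its matching features;
    the 'decision demon' simply picks the loudest letter."""
    scores = {}
    for letter, feats in FEATURES.items():
        # Credit each matched feature; penalize features the letter
        # expects but the stimulus lacks.
        match = sum(min(stimulus_features.get(f, 0), n) for f, n in feats.items())
        missing = sum(max(n - stimulus_features.get(f, 0), 0) for f, n in feats.items())
        scores[letter] = match - missing
    return max(scores, key=scores.get)

print(recognize({"vertical": 1, "horizontal": 3}))  # "E"
```

The flow is purely bottom-up: activation runs from features to letters, with no influence of higher-level knowledge on which letter wins.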

Although evidence for feature detectors exists, and the bottom-up approach is intuitively appealing, there is also support for an alternative perspective, the interactive model, which assumes that pattern recognition is not controlled solely by the stimulus but is aided by preexisting memory representations. One of the classic findings supporting the interactive position is the word superiority effect: letters embedded within words are better perceived than letters embedded in nonwords or presented in isolation. The theoretical conundrum this finding presents is: how can the word representation influence the letters that make up the word, when the letters must already have been identified en route to recognizing the word? These findings led McClelland (1979) to propose that higher-order mental representations influence recognition via a processing cascade. Specifically, early in perception, before letter recognition is complete, letter units begin receiving activation, and partial activation is transferred to higher-order representations (e.g., words). These higher-order representations then transmit partial activation back down to the relevant letter representations, which helps constrain the perception of those letters.
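The cascade idea can be illustrated with a toy update step. This is not the full McClelland interactive-activation model; the word list, activation values, and feedback parameter below are all invented for illustration. Partially activated letters feed word units, and word units feed activation back down to letters consistent with known words.

```python
# A tiny lexicon of known four-letter words (hypothetical).
WORDS = ["WORK", "WORD", "WEAK"]

def interactive_step(letter_act, feedback=0.5):
    """letter_act: one dict per letter position, mapping candidate
    letters to bottom-up activation. Returns updated activations after
    one bottom-up + top-down pass."""
    # Bottom-up: each word's activation is the summed support from its letters.
    word_act = {w: sum(letter_act[i].get(ch, 0.0) for i, ch in enumerate(w))
                for w in WORDS}
    # Top-down: each word passes a share of its activation back to its letters.
    new_act = [dict(pos) for pos in letter_act]
    for w, act in word_act.items():
        for i, ch in enumerate(w):
            new_act[i][ch] = new_act[i].get(ch, 0.0) + feedback * act / len(w)
    return new_act

# A degraded final letter: "WOR?" where K and D are equally (weakly) supported.
letters = [{"W": 1.0}, {"O": 1.0}, {"R": 1.0}, {"K": 0.3, "D": 0.3}]
updated = interactive_step(letters)
```

After one pass, feedback from WORK and WORD boosts the weak K and D units well above their bottom-up levels, which is the sense in which word knowledge constrains letter perception.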

The interactive perspective, with both bottom-up and top-down processes, has been very influential because it suggests that the stimulus is not the only source of information; rather, the perceiver adds information over time to construct the perceptual experience. It is precisely this added information that provides a way of understanding perceptual illusions and, potentially, memories of events that never occurred. Our perceptions and memories involve an elaborate interaction between the external stimulus and preexisting knowledge.

One of the major theoretical debates in this area concerns the extent of interaction among distinct systems within the processing architecture. According to the modular approach (e.g., Fodor, 1983), there are dedicated systems that only feed information forward from lower-level systems to higher-level systems. Other theorists believe there is almost complete interactivity across systems. For example, an area of research that has amassed considerable empirical and theoretical debate concerns how the appropriate meanings of ambiguous words are resolved in sentence contexts. The modular approach suggests that when an ambiguous word is processed (e.g., organ can refer to a musical instrument or a body part), a prior sentence context such as "The musician played both the piano and organ" does not influence which meaning becomes initially activated (i.e., both the musical-instrument and the body meanings of organ would initially be activated). In contrast, the interactive approach suggests that prior sentence context should control which meaning becomes initially activated (i.e., only the contextually relevant meaning is activated). Although the original research in this area strongly supported the modular approach, more recent work indicates that a strong sentence context can influence the initial interpretation of ambiguous linguistic structures.

In summary, one goal of cognitive theory is to explain how patterns are recognized. Early models were primarily bottom-up processors, flowing from the sensory systems to higher-level systems. However, cognitive research and theory development suggest that pattern recognition is also influenced by top-down conceptual processes, reflecting the interactive nature of the processing architecture.

Attentional Selection: Early or Late

One of the most difficult issues cognitive scientists have had to grapple with is how to empirically address and theoretically model human attention. For example, how do people at a crowded party ignore distracting information and focus on (i.e., attend to) one conversation? As in pattern recognition, we all have intuitions about attention, but how does one develop a theory of attention based on experimental studies? Researchers have used metaphors such as attentional filters, switches, reservoirs of capacity, spotlights, and executive processors, among many others. Although attention research ultimately touches on all areas of cognitive psychology, most researchers work on specific aspects of attention, such as the locus of attentional selection, its relationship to consciousness, and attentional control and automaticity.

Much of the early theoretical debate focused on the extent to which unattended stimuli are processed. Early-selection models postulated that selection occurs at a relatively early level in the system, before meaning has been extracted. Initial support came from classic studies using the dichotic listening task, in which listeners performed a very demanding primary task on one ear (verbally repeating information presented over headphones, i.e., shadowing) while information was simultaneously presented to the other ear (e.g., Cherry, 1953). The results suggested that participants noticed little of the information presented to the unattended ear; they did not even notice a switch to a different language. However, researchers soon realized that attentional selection is not an all-or-none phenomenon. For example, if a highly relevant stimulus (such as the person's own name) is presented in the unattended channel, the person can in fact recall information from that channel (Moray, 1959). Returning to the crowded-party example, one can tune out most of the other conversations, but if one hears something highly personally relevant (e.g., one's own name), one is likely to attend to it. Hence, although unattended information appears to be attenuated, some signal can still get through, and such a signal can push highly relevant stimuli over the threshold.
The topic of attentional selection has invoked widespread interest not only in studies of healthy young adults but also in the neuropsychological literature, because in some patient populations (such as those with attention-deficit disorders or schizophrenia) there may be a breakdown in the attentional selection system, allowing too much information into the system and thereby overloading any limited-capacity aspect of processing.

Related to attentional selection is the control of attention. Again, our intuitions suggest that we control what we attend to. However, researchers have become interested in situations where the effects of variables are outside the individual's attentional control. One classic example is the Stroop task, in which one is asked to name the color in which words are printed. When words are printed in incongruent colors (e.g., the word red printed in blue), color naming slows considerably compared to naming the color of a neutral word such as run. Some researchers have argued that this interference occurs because words invoke a qualitatively distinct type of processing, referred to as automatic processing, in which words activate their meaning automatically (outside attentional control), producing conflict when the color and word information are inconsistent. Automatic processes are those that are well practiced and under consistent stimulus-to-response mappings; for example, the processing of the meaning of the word blue is consistent and highly practiced. Because these processes have in some sense been wired into the system, they fall outside the scope of attentional control. Researchers have addressed theoretically interesting questions about the development of automaticity, such as the role of conscious control, its time course, the influence of practice, and even its neurophysiological substrates. Thus, the distinction between automatic and attentionally controlled processes has been a central theme in current theory development.
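The Stroop pattern described above can be caricatured in code. The timing parameters below are purely illustrative (not fitted to any data set); the point is only the structure of the account: word reading contributes whether or not it is wanted, and adds a cost when it conflicts with the ink color.

```python
# Hypothetical timing parameters for a toy Stroop account.
BASE_NAMING_MS = 600     # assumed time to name an ink color
CONFLICT_COST_MS = 120   # assumed slowdown from an incongruent color word

COLOR_WORDS = {"red", "blue", "green"}

def naming_time(ink_color, word):
    """Word reading is treated as automatic: it always runs, and it
    produces interference when the word is a mismatching color name."""
    if word in COLOR_WORDS and word != ink_color:
        return BASE_NAMING_MS + CONFLICT_COST_MS  # e.g., "red" printed in blue
    return BASE_NAMING_MS                          # neutral ("run") or congruent

print(naming_time("blue", "red"))  # 720: incongruent trial is slower
print(naming_time("blue", "run"))  # 600: neutral word
```

Note that nothing in the model lets the "participant" switch word reading off, which is exactly the sense in which the process is outside attentional control.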

Separate vs. Unitary Memory Systems

Our intuitions suggest that there are a number of distinct types of memory systems. For example, rehearsing a telephone number until it is dialed seems quite distinct from recalling what one had for breakfast, which in turn seems distinct from providing the definition of the low-frequency word orb from memory. Indeed, a rich history of memory research has been viewed as supporting distinct memory systems: short-term, long-term, implicit, explicit, and so on. Are these types of memory reflective of distinct memory systems, or are they best understood in terms of a single system that utilizes different processes? The debate over memory types has a long tradition in cognitive psychology. Atkinson and Shiffrin (1968) introduced an information-processing model comprising sensory, short-term, and long-term memory stores. Shortly thereafter, however, Craik and Lockhart (1972) advanced a unitary view of memory referred to as depth of processing. The idea was that the level at which information is initially processed determines how well it is encoded: memory for information processed at a shallow level (e.g., visual features) differs from memory for information processed at a deep level (e.g., meaning). Thus, the distinction between short- and long-term memory could also be viewed as a distinction between different types of processes that vary in the quality of memory-trace strength.

In addition to the distinction between short- and long-term memory systems, distinctions have been made between declarative/explicit memory (directly recollecting an earlier experience) and procedural/implicit memory (the benefit of an earlier exposure to a stimulus on an indirect measure). For example, manipulations of encoding conditions that produce a particular result on explicit measures (e.g., recall of a list of words) can produce opposite effects on implicit measures (e.g., perceptual identification of a visually degraded word) (e.g., Jacoby, 1983). These dissociations would appear to support distinct memory systems. However, this evidence was challenged by Roediger, Weldon, and Challis (1989), who argued that many of the dissociations in the literature could also be accommodated within the transfer-appropriate-processing (TAP) framework, which emphasizes the match between encoding operations and retrieval operations. They noted that studies of implicit memory often emphasized data-driven processes, whereas studies of explicit memory often emphasized conceptually driven processes. They also argued that if dissociations were the criterion for separate systems, we would need many more than just two or three distinct systems.

Finally, even the dissociation between abstract category information and individual episodic experiences has been challenged. Posner and Keele (1968), among others, argued for a distinct representation for prototypes/categories (e.g., dog), which represents the common attributes of members within a category (e.g., collie, poodle, beagle). More recent work by Hintzman (1986) and Barsalou (1991) has demonstrated that the evidence supporting qualitatively distinct representations for instances and categories can be accommodated by a model that assumes only one type of instance-based memory system. These theorists argue that the apparent distinction between categories and instances falls quite naturally out of correlations among features across members of a category: collies, poodles, and beagles all have four legs, bark, have fur, make good pets, and so on. It is the similarity across these features that produces the dog category.
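A minimal exemplar model makes this point concrete. The sketch below is in the spirit of Hintzman's multiple-trace idea but is not his MINERVA 2 model: only individual instances are stored (with hypothetical binary features), yet probing memory returns a similarity-weighted blend, an "echo", that behaves like a prototype no one ever stored.

```python
# Stored instances only; features are hypothetical:
# [four_legs, barks, fur, good_pet]
memory = [
    [1, 1, 1, 1],  # collie
    [1, 1, 1, 1],  # poodle
    [1, 1, 1, 0],  # beagle (illustrative variation on one feature)
]

def echo(probe):
    """Return the similarity-weighted blend of all stored instances.
    The blend ('echo') plays the role of a prototype."""
    def sim(a, b):
        # Simple similarity: proportion of matching features.
        return sum(x == y for x, y in zip(a, b)) / len(a)
    weights = [sim(probe, inst) for inst in memory]
    total = sum(weights)
    return [sum(w * inst[i] for w, inst in zip(weights, memory)) / total
            for i in range(len(probe))]

# Probing with a novel, partially dog-like instance yields a dog-like echo:
# features shared by all instances come back at full strength.
print(echo([1, 1, 0, 1]))
```

Because every stored dog has four legs, barks, and has fur, those features dominate the echo; no separate "dog prototype" representation is needed.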

Although theoretical debate continues over distinct memory systems versus distinct processes engaged by different tasks, there is evidence for some memory-system distinctions. For example, amnesics perform poorly on explicit memory tasks, while their performance on implicit tasks is often normal; the lesion in these individuals thus appears to affect primarily one system while leaving the other intact (Squire, 1987). Moreover, evidence from brain-imaging studies is beginning to support distinct memory systems (Nyberg, Cabeza, & Tulving, 1996). Thus, although some memory-system dissociations are more apparent than real, others are in fact real.

Analog vs. Propositional Representations of Mental Images

Humans have little difficulty imagining stimuli that are typically perceived via the senses; for example, we can readily imagine a shiny red apple or a yellow school bus. The theoretical issue that has concerned researchers in this area is the form of representation used to generate these images. For example, do mental images demand a qualitatively different form of representation than the one we use to process language?

One popular notion of imagery posits that the mental code retains, in analog form, the spatial and sensory properties of the external stimuli we perceive. For example, an analog representation of the neighborhood in which we live would preserve the relative distances between houses and their sizes. Accordingly, the time it takes to mentally scan between two objects in a mental image should reflect their relative distance from each other, and many experiments have demonstrated this to be the case (e.g., Kosslyn & Pomerantz, 1977). The alternative view posits that mental images are represented as abstract propositions. On this account, mental images, language, and all other information rely on one primitive code that the brain uses to process every type of information (i.e., the Language of Thought; Fodor, 1975), and the generation of images occurs after this primitive code is accessed.

Recently there has been some progress in this theoretical debate, much of it arising from studies of the neuropsychological underpinnings of mental imagery. For example, Kosslyn, Thompson, Kim, and Alpert (1995) demonstrated, via brain-imaging studies, not only that visual images activate areas of the brain dedicated to visual processing, but also that activations within neural systems across perception and imagery appear to be correlated across stimuli varying in size. Thus, there appears to be a link between the neural systems that underlie imagery and those that underlie actual visual perception of the stimulus. Moreover, studies of individuals with brain lesions have produced dissociations between different aspects of visual imagery, such as the spatial versus the visual nature of the image (e.g., Farah, Hammond, Levine, & Calvanio, 1988). It is clear that important constraints have been placed on theories of visual imagery by both behavioral and neuropsychological evidence.

Connectionist vs. Symbolic Representations

One issue that has recently received considerable attention is the level of description needed for models of higher-level cognition such as language processing and problem solving. For example, how might one build a theory of orthography, phonology, or syntax within a language? Based on linguistic theory, one might assume a set of rules that specify how the constituents of a language can be combined. For example, a rule might specify that a vowel preceding the letter e at the end of a word, as in gave, should be elongated. Such rules provide a descriptive account of many phenomena in language processing. Unfortunately, as with most rules, there are many exceptions: according to the above rule, the word have should be pronounced differently. Thus, linguistic models are often forced to provide a separate processing route for such exceptions.

Within the past decade, there has been increased appreciation for an alternative way of modeling aspects of human cognition: connectionist modeling. Connectionist models typically assume a relatively simple set of processing units arranged in distinct layers, with all the units within a layer connected to all the units in adjacent layers. These models assume no rules and are mathematically specified. Knowledge of a domain is contained in the values of the weighted connections linking units, which are either built into the model or adjusted by a gradual learning algorithm that updates the weights based on the frequency of exposure to a given stimulus and the deviation of the current output from the correct response. Interestingly, the general principles of connectionist modeling have been used to account for many aspects of cognition (e.g., pattern recognition, speech production, category learning).
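The learning principle described above, adjusting weights from the discrepancy between output and target, can be shown with a single-layer network trained by the delta rule. This is a deliberately tiny sketch (no hidden layer, invented input/output patterns), not any published model of word pronunciation.

```python
import random

random.seed(0)  # deterministic illustration
n_in, n_out = 4, 2
# Knowledge lives entirely in these weights, initialized near zero.
weights = [[random.uniform(-0.1, 0.1) for _ in range(n_in)]
           for _ in range(n_out)]

def forward(x):
    """Each output unit is a weighted sum of the input units."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in weights]

def train(patterns, lr=0.1, epochs=200):
    """Delta rule: nudge each weight in proportion to its input and to
    the error (target minus output) -- no rules, just gradual tuning."""
    for _ in range(epochs):
        for x, target in patterns:
            out = forward(x)
            for j in range(n_out):
                err = target[j] - out[j]
                for i in range(n_in):
                    weights[j][i] += lr * err * x[i]

# Hypothetical input -> output feature patterns (e.g., spelling -> sound bits).
patterns = [([1, 0, 1, 0], [1, 0]),
            ([0, 1, 0, 1], [0, 1])]
train(patterns)
```

After repeated exposure, the network maps each input pattern to its target without any stated rule; "exceptions" in such models are simply additional patterns that bend the weights, which is why a separate exception route is not required.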

Clearly there has been some tension between symbolic rule-based theories and connectionist theories (e.g., Fodor & Pylyshyn, 1988). One might argue that symbolic models reflect the first wave of cognitive theorizing. These models are often metaphorical in nature (performance is modeled by a specific set of stages and a specific set of rules at each stage), and they remain central in current theories of human cognition. On the other hand, connectionist models have a level of computational specificity that is quite appealing, and they carry at least some sense of neural plausibility: the simple processing units bear a surface resemblance to neurons, whereas rules are difficult to envisage within a neural network. Ultimately, the adequacy of such models may lie in their ability to provide new insights into a set of empirical observations. Because both types of models have advantages, it is likely that both first-wave metaphorical models and second-wave connectionist models will continue to be central to theoretical accounts of human cognition (Spieler & Balota, 1997).

Summary

The present article has provided a brief overview of a few of the theoretical controversies and issues that have been the focus of theory development in cognitive psychology. In each of these areas, we have shown how basic research allows one to distinguish between alternative theories. Cognitive psychology has made considerable progress in understanding mental activity and is now in the excellent position of taking advantage of new technologies (e.g., connectionist modeling, neuroimaging) to provide important advances in theory development.

See also:

  • Cognitive Psychology History
  • Cognitive Psychology Research Methods
  • Cognitive Psychology Bibliography
