Social psychology research methods are the ways in which researchers measure variables and design studies to test hypotheses. Research in social psychology is generally carried out to construct and test causal theories. The emphasis on causation is implicit in the overall goals of the science: to understand phenomena by bringing them within the compass of general causal laws and to show how undesirable social conditions may be changed by altering their causal antecedents. Of course, some individual research studies, such as those aimed simply at describing existing states of affairs or at constructing and refining measurement instruments, do not directly address causal issues. Still, such studies are best regarded as part of an overall research process that focuses on building causal theory.
Social Psychology Research Methods
- Autobiographical Narratives
- Bogus Pipeline
- Content Analysis
- Control Condition
- Cross-Lagged Panel Correlation
- Demand Characteristics
- Ecological Validity
- Forced Compliance Technique
- Implicit Association Test
- Lost Letter Technique
- Mundane Realism
- Nonexperimental Designs
- Order Effects
- Path Analysis
- Placebo Effect
- Quasi-Experimental Design
- Semantic Differential
- Social Relations Model
- Sociometric Status
- Structural Equation Modeling
- Twin Studies
To evaluate the utility of different types of research in advancing causal theories, Cook and Campbell (1979) listed four criteria or forms of validity. Statistical conclusion validity depends on the use of appropriate statistical tests and adequate sample sizes, which are relatively independent of the choice of research methods. The other three forms of validity are closely related to research methods. Internal validity is the confidence with which one can conclude that the causal or independent variable in a study (here termed x) actually had a causal influence on the dependent variable in the particular study (y). Construct validity is the confidence with which one can draw a correspondence between the concrete variables in the particular study (x and y) and the abstract theoretical constructs that they are intended to operationalize (X and Y). Hence, construct validity assesses the confidence with which a causal relationship can be inferred between X and Y. External validity takes two different forms depending on the type of research. When research is intended to apply in a simple and direct way to a defined setting and population (as public opinion surveys are intended to portray the opinions of the entire voting-age population of a state or country), then external validity measures the extent to which this sample-to-population generalization is viable. Such research is termed particularistic. In the more common situation of universalistic research, intended to test general causal theories, external validity amounts to a vaguer and more general question about the limits of the effects. Are they valid across different types of people, settings, or cultures?
Methods of research are defined by a number of cross-cutting factors including the research setting, the population studied, the research design, and the techniques of data collection used. Social psychology values methodological diversity, for as Cook and Campbell (1979) explained, a theoretical prediction that passes tests involving multiple methods is stronger than one that has been repeatedly tested in only a single way. Nevertheless, the field’s mainstream and defining research method, used in probably three-quarters or more of published research, is the laboratory experiment.
Laboratory Research Methods
The laboratory as a research setting is defined by its flexibility. It is a stage upon which the researcher can produce and direct whatever sequence of events is required to implement a planned study. The high degree of researcher control that is possible in the lab means that experimental designs (which require control over the specific experiences of each participant) can readily be implemented. Logically, research settings and populations are independent issues. However, in practice, laboratory studies generally use college students as participants. This restriction to well-educated, generally attentive young adults carries potential limitations on external validity, which are due not to the laboratory setting itself but rather to the populations studied. These limitations are increasingly being remedied as social psychologists investigate more diverse populations.
Laboratory experiments are the bread and butter of social psychology (Aronson, Ellsworth, Carlsmith, & Gonzales, 1990). The use of experimental design, in which the study’s independent variable is manipulated to create randomly assigned groups of participants, means that internal validity is high and generally unproblematic. Criticism of lab experimentation generally involves its construct validity. Lab research offers many examples of effective, meaningful manipulations and valid measures of dependent variables, but also instances of variables that strike participants as artificial and meaningless and hence may not correspond to the intended theoretical constructs.
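The logic of random assignment underlying experimental design can be sketched in a few lines of code. The condition names and group size here are invented for illustration; the essential point is that chance alone determines which condition each participant experiences, so the groups are equivalent in expectation.

```python
import random

def randomly_assign(participants, conditions=("treatment", "control")):
    """Shuffle the participant pool and deal it round-robin into
    conditions, so group membership is determined by chance alone."""
    pool = list(participants)
    random.shuffle(pool)
    return {cond: pool[i::len(conditions)] for i, cond in enumerate(conditions)}

# Hypothetical study with 40 participants and two conditions.
groups = randomly_assign(range(40))
# Each condition receives 20 participants, chosen at random.
```

Because any systematic pre-existing difference between the groups can arise only by chance, an observed difference on the dependent variable can be attributed to the manipulation, which is the basis of the high internal validity noted above.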
Laboratory experiments can be roughly divided into three types (Judd, Smith, & Kidder, 1991). Scenario or impact studies include some of the classics of social psychology. Researchers have had confederates posing as other participants report obviously incorrect judgments of the lengths of lines in order to test hypotheses about social influence on real participants’ judgments. Other researchers have staged emergencies in the lab to assess the impact of various factors on participants’ willingness to help others who appeared to be in danger. When scenario studies are effectively designed and implemented, the experiences become real for the subjects, leaving most questions about construct validity behind. For the study participants, for instance, their behaviors are responses to a real emergency in which they happen to find themselves. Still, the possibility of low construct validity due to the lab’s artificiality exists, as participants may sit back and wonder whether the other participants’ seeming consensus or the seeming emergency might not be contrived, just part of the experiment. Of course, the use of deception in research poses ethical as well as practical issues that must be carefully considered (T. D. Cook, in Judd, Smith, & Kidder, 1991, pp. 477-558).
Other laboratory experiments can be termed judgment studies, in which subjects assess stimuli, usually complex ones (often including persons or social groups), and report their judgments, evaluations, inferences, or other reactions. Examples include most research on stereotyping and prejudice, in which information about an individual is presented, with different participants learning that the individual belongs to different social groups (e.g., he is Hispanic or Anglo in ethnicity). Their evaluations or judgments may reveal the content and operation of group stereotypes that affect their thinking about the target individual. Judgment studies also may be high or low in construct validity. When participants treat the task as realistic and meaningful, it can assess their honest reactions to members of different groups. Even though most people other than corporate personnel officers may find making judgments about individuals described on paper an unusual task, we all do make judgments about others from secondhand information every day—whether they are characters in novels, individuals described in news stories, or ex-spouses described by our acquaintances. However, construct validity can also be low, for example if participants figure out that “they are looking at stereotypes of Hispanics here” and react to their perceptions of the research purpose rather than their perceptions of the target persons.
Finally, some laboratory experiments are performance studies in which some aspect of the participants’ performance of a task reveals the thoughts or feelings that are under study. No explicit judgments (such as checking a box from 1 to 7 on a rating scale) may be required. Instead, subjects are asked to read some material and later to report all that they can recall of it or to respond to words presented on a screen by pressing a key as rapidly as possible to indicate whether each word is positive or negative. Within social psychology, performance studies are most common in the area of social cognition. In performance studies, the hypotheses and the predicted responses are generally less transparent than in judgment studies. For example, a suspicious participant may decide to respond in counterstereotypical ways to the Hispanic character he is asked to rate but will ordinarily have no idea what pattern of memory or response-time performance will confirm or falsify the hypothesis that he holds a stereotype of the group. Even if he knew the relevant pattern, his performance has real limits; he may not be able to remember more or process faster in order to falsely generate counterstereotypic data. Thus, in performance studies, construct validity depends more on the careful theoretically based validation of the data-collection method itself than on the participant’s perceptions of the intentions of the study.
Laboratory Nonexperimental Studies
Although the control and flexibility that are characteristic of the laboratory can be used to good advantage to increase internal validity by implementing experimental designs, nonexperimental laboratory research is also common. The laboratory is used to set up specific conditions for participants to experience or to permit videotaping or other types of detailed observation. For example, two participants might be unobtrusively videotaped as they hold an informal get-acquainted conversation, and the videotape later coded in terms of their verbal and nonverbal behaviors. Or a small group may engage in a problem-solving task and their interactions coded for a study of leadership in informal groups. In cases like these, the flexibility and control of the laboratory can be used effectively to create the conditions under which specific observations can be made, even when no experimental manipulations are part of the research design.
Nonlaboratory Research Methods
Outside of the laboratory setting the researcher’s ability to control events is much weaker. In particular, experimental designs (with their near-guarantee of high internal validity) are usually difficult to implement outside of the lab. In nonlaboratory (often called field) settings, construct validity can be either high or low. Important and meaningful independent and dependent variables can often be studied. Consider research on the effects of models or of the number of bystanders on an individual’s probability of offering help to a person in need, or the effects of a person’s self-concept on his or her psychological adjustment to a diagnosis of cancer. On the other hand, research outside the laboratory, embedded in the complexity of real life, always has to deal with a variety of potentially confounding variables.
Nonlaboratory research is sometimes simply assumed to be higher than laboratory research in external validity or generalizability, but this is not necessarily the case. In contrast to the connotation of the term “the field,” diverse nonlaboratory settings such as a street corner, an industrial lunchroom, or a hospital emergency room differ among themselves at least as much as laboratory and nonlaboratory settings do. The types of people who typically inhabit these settings differ as well. Thus, the fact that a given study was conducted in a field setting does not in any way guarantee that its results will generalize to other nonlaboratory settings. The only true test of the external validity of a nonlaboratory finding (as of a laboratory finding) is replication.
Experimental research outside the laboratory, though often difficult to implement, has many potential strengths. An example of a field experiment might be a study of bystanders offering help to a person in need, where a situation of apparent need is constructed (e.g., a person standing next to a disabled car by the side of the road) and researchers can assess the effects of various manipulations (such as a billboard advocating community responsibility located earlier along the road) on the number of offers of help. The manipulations would be added and removed according to a random schedule in order to assure that the groups of bystanders exposed to the different manipulations were equivalent. Internal validity is high because of the use of experimental design, while construct validity can also be high because the manipulations and measures derive from a meaningful and realistic setting. Finally, in particularistic research, external validity can be high when the field experiment takes place with the actual setting and population of interest (such as an experiment on the effects of different working conditions on productivity in an industrial setting). In universalistic research, it is important to keep in mind that nonlaboratory settings differ and the fact that a study was conducted outside the laboratory does not guarantee that its results are broadly generalizable.
Quasi-experimental designs (Cook & Campbell, 1979) can guard against some but not all of the threats to internal validity that true experiments rule out. However, they impose lower demands for strict control and may be easier to implement outside the laboratory than experimental designs. Quasi-experimentation usually involves manipulation but not random assignment. For example, an ad campaign promoting seat belt use may be implemented on television stations in one city but not in a comparable control city, and changes in drivers’ behavior measured in both cities. Except for the lower internal validity, the same considerations apply to quasi-experiments as to field experiments, just discussed.
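One common way to analyze such a two-city design is a difference-in-differences comparison: the change in the campaign city minus the change in the control city, which subtracts out trends affecting both cities. The seat-belt-use rates below are invented for illustration.

```python
# Hypothetical proportions of drivers observed wearing seat belts,
# measured before and after the ad campaign in each city.
before = {"campaign_city": 0.48, "control_city": 0.50}
after = {"campaign_city": 0.61, "control_city": 0.53}

# The campaign city's change minus the control city's change
# estimates the campaign's effect while removing any change
# (e.g., a statewide trend) common to both cities.
effect = (after["campaign_city"] - before["campaign_city"]) - (
    after["control_city"] - before["control_city"]
)
print(round(effect, 2))  # prints 0.1
```

The remaining internal-validity threat is that, without random assignment, the two cities may differ in ways that produce different trends even in the absence of the campaign.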
Many social sciences including sociology and political science rely heavily on survey research methods, and some social psychological work fits this mold as well. For example, a researcher interested in the effects of intergroup contact on prejudice may conduct a survey to question people about the extent of their contacts with members of other races and also about their degree of prejudice. Surveys typically involve (a) an effort to collect data from a representative sample of the population of research interest (e.g., the voters in a particular state) or from the entire population (e.g., all the employees of a firm); and (b) the use of self-report data collection methods. Surveys may be conducted by personal interviews, telephone interviews, or written self-administered questionnaires; these modes of data collection each have their own strengths and weaknesses in terms of cost and data quality (Judd, Smith, & Kidder, 1991). The limitations of survey research include low internal validity stemming from the typically nonexperimental design (no manipulations are ordinarily employed, except for variations in question wording embedded within the questionnaire). Limitations in construct validity arise from the method of data collection (self-report, which may involve biases of various sorts). Thus, in the example survey study just mentioned, low internal validity means that low prejudice might cause increased intergroup contact (rather than the reverse causal sequence that is of theoretical interest), and low construct validity means that questions about prejudice may be answered dishonestly by respondents who believe that prejudice is socially unacceptable. The external validity of surveys can be high, especially for particularistic research where generalization from the sample to a specific target population is the key issue.
Naturalistic Observational Studies
Some research observes naturally occurring social behaviors, gaining high construct validity by measurement in realistic settings and populations, but losing internal validity through a lack of experimental design. For example, researchers interested in the climate of intergroup relations in an elementary school may unobtrusively observe the extent of racial segregation in seating patterns in the school lunchroom.
Analysis of Archival Data
Social psychologists have also tested research hypotheses by examining records kept in official or unofficial archives: government records, newspaper stories, library circulation records, and so on. For example, tests of the idea that heat increases aggression could involve examination of official weather records and crime statistics to determine whether there are more homicides on hot days. Archival data can offer objective and complete coverage of a population of interest, going beyond self-reports to assessments of real and important life outcomes, but can be weak in construct validity if archival measures do not correspond directly to the psychological construct of interest. For example, homicide as legally defined is not precisely equivalent to the psychological notion of aggression. In addition, archival data analyses typically involve nonexperimental designs and hence low internal validity.
Research without Primary Data Collection
Some research in social psychology involves not the collection of new data but the further analysis and comparison of existing studies. This approach, termed meta-analysis, involves the quantitative summary of multiple primary studies on a given topic. For example, many studies may have investigated (as their major goal or as a subsidiary issue) sex differences in helping behavior. A meta-analysis of all this research may be conducted to draw conclusions about (a) the overall or average difference (yielding the conclusion that across all the situations studied, men help more than women do or vice versa) and (b) the factors or conditions that influence the effect (e.g., conclusions that women help more than men in private situations but the reverse is true in public situations). The goals of meta-analysis are thus similar to those of conventional or narrative literature reviews, but quantitative techniques are used to make the conclusions more precise and objective (Mullen and Norman, in Judd, Smith, & Kidder, 1991, pp. 425-449). Results supported by meta-analytic methods can generally be considered quite strong on the counts of construct validity (to the extent that different studies use multiple different operationalizations of the constructs) and external validity (to the extent that the research samples a variety of settings and participant populations).
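A minimal sketch of the quantitative summary step, under the common fixed-effect assumption: each study contributes an effect size (e.g., a standardized difference in helping, positive when men helped more) weighted by the inverse of its variance, so larger and more precise studies count more toward the overall estimate. The effect sizes and variances below are invented for illustration.

```python
# Hypothetical (effect_size, variance) pairs from primary studies
# of sex differences in helping behavior.
studies = [(0.30, 0.04), (0.10, 0.01), (-0.05, 0.02), (0.22, 0.05)]

# Fixed-effect meta-analytic estimate: an inverse-variance
# weighted average of the per-study effect sizes.
weights = [1.0 / var for (_, var) in studies]
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
```

A fuller analysis would also test whether the studies' effects are homogeneous and, as in point (b) above, whether moderators such as the public or private character of the situation explain the variation.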
Computer simulation is not a substitute for data collection but a method for deducing the implications of a theory. A computer program is written embodying the assumptions of the theory, and then run to generate the theory’s predictions under specified conditions. Simulation is most appropriately used with theories that are too complex for unaided intuition to clearly generate their predictions. When simulation is used, the other steps in the overall research process must be carried out as always: the theory’s implications become research hypotheses to be tested by comparing them to actual data from research participants, and if the hypotheses mismatch the data then the theory must be modified or discarded.
As stated earlier, laboratory experimentation is by far the most common method in social psychology today, but other methods are also valued. The best-accepted research findings are those that can be obtained repeatedly in different settings and populations, with diverse research methods.
- Abelson, R. P. (1995). Statistics as principled argument. Hillsdale, NJ: Erlbaum.
- Aronson, E., Ellsworth, P. C., Carlsmith, J. M., & Gonzales, M. H. (1990). Methods of research in social psychology (2nd ed.). New York: McGraw-Hill.
- Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation. Chicago: Rand McNally.
- Judd, C. M., Smith, E. R., & Kidder, L. H. (1991). Research methods in social relations (6th ed.). Fort Worth, TX: Holt, Rinehart, and Winston.
- Mook, D. G. (1983). In defense of external invalidity. American Psychologist, 38, 379-387.
- Reis, H. T., & Judd, C. M. (Eds.). (in press). Handbook of research methods in social psychology. Cambridge, UK: Cambridge University Press.