Lay beliefs about factors that influence the reliability of eyewitness testimony have been assessed with a variety of survey and experimental methods. When compared with expert opinion about the effects of these factors, the lay public frequently holds beliefs that would be considered incorrect in the light of psychological research on eyewitness memory.
A brief example provides the framework for understanding the relevance of lay beliefs about eyewitness memory to legal decision making and criminal justice procedures: A man presents a note to a bank teller and tells everyone to get on the floor. A security agent rushes the robber and is shot, but the thief escapes. Six weeks later, a man named Simon Chung is apprehended. His picture is included in a collection of photos that is shown to the teller, the wounded security officer, other employees, and the bank customers. The teller and four customers identify Chung as the robber, whereas the bank security guard and another three employees do not. Chung is charged with the crime and the case proceeds to trial. The prosecution believes that the five eyewitness identifications make up a strong case against Chung. At trial, Chung’s defense team presents a cognitive psychologist who, if given the opportunity, will testify that a number of features of the robbery and of the defendant reduce the reliability of the identification evidence. Defense counsel argues that jurors need to be aware of these factors if Chung is to receive a fair trial. The judge considers the expert’s testimony and, over the objections of the prosecution, decides that the expert should be allowed to give evidence.
The proffering of expert testimony at trial occurs frequently in common law countries. Judges decide whether an expert will be heard on the basis of several legal criteria, the most important of which for present purposes is the judge’s assessment of the level of lay or juror knowledge about eyewitness testimony. If the substance of an expert’s presentation is deemed to be relevant to the case and to lie outside the jurors’ ken, experience, or common knowledge, expert testimony intended to inform the jurors will likely be deemed admissible. Only an expert in the specific subject area may provide what is called opinion evidence on the matter. Based on his or her own knowledge and evaluation of the expert testimony, the judge decides whether members of the jury are, as a group, sufficiently informed and, if not, whether the quality and reliability of their deliberations will benefit from an expert’s presentation. Given the adversarial nature of common law procedures, opposing counsel may also proffer an expert who has a different interpretation of the importance of the relevant eyewitness factors.
Regardless of the decisions made by judges in these situations, there is little research that can tell us whether their assessments of jurors’ lay beliefs about eyewitness factors are likely to be correct. Furthermore, although the question is of interest in its own right, few studies have assessed whether judges (or trial counsel) themselves hold correct beliefs concerning eyewitness issues. The question raised here, however, is the following: On what basis do judges decide whether jurors are sufficiently informed (or have “common knowledge”)? Scientific investigations of lay beliefs about eyewitness memory have been conducted and, on occasion, judges’ assessments are informed by descriptions of this line of research.
To describe the history, methods, and results of that research, a few words are first needed about topics within the field of eyewitness psychology—topics about which the lay public may be examined as to their beliefs and knowledge. Briefly, the field of eyewitness memory research examines the myriad factors that may influence witnesses’ recollections of an event, the people present, their behaviors, and the context in which the event occurred. The usual scheme for categorizing these factors is based on a distinction between (a) those that are under the control of the criminal justice system and, as a result, may be manipulated to improve the reliability of eyewitness evidence, such as investigative interviews and suspect identification procedures, and (b) those whose impact on the reliability of testimony may only be estimated and that are not under the control of the justice system. The latter includes a very large group of factors, such as the age of the witness, lighting at the crime scene, witnesses’ levels of stress, the confidence held by an eyewitness, and the presence of a weapon. Research has demonstrated that these factors can produce general memory impairments, but the effects are variable and unpredictable with regard to specific individuals.
Approaches to the Assessment of Lay Knowledge
Although the scientific investigation of eyewitness memory is more than 100 years old, it is primarily the last 40 years of research that have provided a substantive and reliable foundation of data. To assess public beliefs concerning the eyewitness factors examined in this research, both direct and indirect methods have been employed. The introductory robbery scenario serves to illustrate both. One factor that has long been considered relevant to the reliability of eyewitness identification is the correspondence between the race of the witness and that of the suspect. Identification reliability has often been found to be higher when both are members of the same racial group than when the two people belong to different racial groups—an outcome called the “other-race” effect. Using the direct, or survey, approach to the assessment of lay beliefs, respondents might be asked to agree or disagree with statements such as “People are better at recognizing members of their own racial group than those of a different race” or to choose among response alternatives to the following statement: “When people are asked to identify someone of a racial group different from their own, they are just as likely (or more likely, less likely, or don’t know) to be correct as when the person is of their own racial group.” Using an indirect approach, by contrast, respondents may receive a brief written vignette in which the respective races of the witness and perpetrator are either not mentioned at all (a control condition), described as the same, or described as different. The vignette may in fact summarize an actual experiment in which identification rates were examined as a function of variations in the racial similarity variable. After reading the vignette, the respondents estimate the probability that the witness’s identification decision is correct, an estimate often called a “postdiction” in relation to the actual experiment.
Differences in the probability estimates from participants who received the different vignettes are taken to reflect public beliefs about the direction and magnitude of the relationship between witness-suspect race and eyewitness memory reliability. If, for example, Simon Chung is Asian, but the witnesses are Caucasian, juror beliefs about the relevance of this distinction may be important to their assessments of the identification evidence. Of course, when compared with the effects of the variable on actual identification accuracy in research experiments, these response differences may reflect wholly erroneous beliefs.
These kinds of data from public samples will only be helpful to a judge if he or she has a basis for assessing the accuracy of the beliefs of survey respondents and research participants and, by extension, the public. What is needed, therefore, is a distillation of eyewitness research that provides the “correct” answer for each of the eyewitness factors present in a case. These correct answers have been made available to courts in two ways. In the first, survey researchers explicitly compare the public survey and research outcomes with their interpretations of what the scientific research literature has revealed. In the second, the survey researchers instead compare their findings with the results of surveys of “eyewitness experts” (themselves researchers) concerning the effects of variables on eyewitness reliability. The most recent of the expert surveys was completed in 2001 by Saul Kassin and colleagues, who tabulated the responses of 64 experts to each of 30 propositions about eyewitness factors, including, for example, the effects of delay, weapon presence, other-race identification, stress, age, and lineup construction techniques. To date, no factor has received unanimity from the experts as to its impact on eyewitness memory. Instead, to determine what is currently “correct,” courts may look at general agreement among experts, or a consensus of opinion. For example, of the 30 propositions presented to experts by Kassin, only 16 achieved a consensus of 80% agreement across experts. As a summarizing statement, however, when the responses collected from lay participants using both direct and indirect methods are compared with the consensual opinions of the experts about these factors, the comparisons frequently reveal significant differences between the beliefs of experts and those of members of the public.
The earliest surveys were completed in the early 1980s and tested university students with multiple-choice questions. The majority of participants did not give the correct answer to most items, including the effect of violence on recall accuracy, the relationship between witness accuracy and confidence, memory for faces, the effects of training or experience on identification performance, and the other-race effect. Subsequent surveys of other students, legal professionals, potential jurors, and community respondents in the United Kingdom, Australia, and Canada produced similar results: More than half the participants did not identify the known relationships between eyewitness accuracy and confidence, event violence, event duration estimates, trained observers, older witnesses, verbal descriptions, and child suggestibility. These surveys were followed by those in which Likert-type scale items (ratings on 7-point agree-disagree scales) were presented to samples of college students and community adults, with highly comparable results: Almost half the respondents disagreed with expert opinion on many items. Lay responses were, nonetheless, often similar to those of experts on a subset of the items: the effects of attitudes and expectations, wording of questions, weapon focus, event violence, and estimates of the duration of events. More recently, an assessment of the responses of potential jurors in Tennessee to items from Kassin’s survey of experts produced a similar outcome: Jurors responded significantly differently than experts on 26 of 30 items, with magnitudes of disagreement ranging from 11% to 67%. A small sample of actual jurors from Washington, D.C., was also surveyed in 1990: Fewer than half the participants agreed with the correct responses. Furthermore, in a 2005 telephone survey, a large sample of potential jurors in Washington, D.C., was questioned about a smaller number of eyewitness factors.
The authors argued that their results support the view that potential jurors often differ from experts in their opinions about and understanding of many issues. Finally, Canadian researchers recently constructed surveys in a manner intended to reduce jargon and professional terminology to improve understanding by survey respondents. Their results strongly suggest that assessments of lay beliefs are influenced by question format and that prior research may have underestimated current levels of lay knowledge concerning a number of factors, for example, the relationship between confidence and accuracy. Nonetheless, even with the friendlier survey format, disagreement with the experts was apparent for approximately 50% of the eyewitness topics.
The indirect approach to assessing lay knowledge is based on the distinction between having knowledge and making use of it. The direct-method survey research above has emphasized the former. With indirect methods, on the other hand, participant responses are used as the basis for determining whether existing beliefs appear to have influenced the respondents’ judgments about the reliability of eyewitness testimony. In other examples of this approach, researchers attempt to increase the levels of knowledge of participants who serve as “mock jurors” and then ask whether such knowledge appears to be integrated in judgments about eyewitness reliability and defendant guilt.
In the first group of studies, research participants estimated the likelihood of accurate person identification by an eyewitness in situations that varied along several dimensions that had, in fact, been manipulated in actual experiments—for example, levels of witness confidence, crime seriousness, and lineup bias. To determine whether participants were sensitive to these factors as determinants of eyewitness reliability, their “postdictive” estimates were compared with the effects of these same variables in the laboratory research. In general, participants appeared to be quite insensitive to the manipulated factors: Estimates of identification accuracy were overly optimistic, considerable reliance was erroneously placed on witness confidence, and the estimates usually failed to reflect the real effects of the variables. Another indirect approach examines data collected from “mock jurors” who reached verdicts (and other judgments of witness credibility) after reading case descriptions in which eyewitness variables known to influence identification accuracy had been manipulated. The results revealed that the factors recognized by experts as important determinants of eyewitness accuracy generally have not been shown to influence mock jurors’ verdicts or credibility evaluations, whereas some factors known to be unrelated to witness accuracy (e.g., confidence) did affect such evaluations. Similarly, there is a disparity between the factors that mock jurors say are important to eyewitness reliability and the impact of those factors on their decisions when case evidence is actually presented to them.
Furthermore, it is one thing to correctly identify explicitly stated, general relationships between eyewitness factors and memory but quite another to have the depth of knowledge needed to appreciate the conceptual distinctions made at trial by experts about these factors as they arise in specific cases. To examine these questions, researchers have asked whether beliefs demonstrably held by mock jurors (without the benefit of expert testimony) appear to be integrated into their decisions when they are presented with a case description that includes the relevant eyewitness factors (e.g., the cross-race effect). In one of the few investigations of this question, Brian Cutler and colleagues found that even when jurors had specific knowledge of the limitations of eyewitness identification, the information was not well integrated into their decision making. Similarly, other researchers have recently found that mock jurors who demonstrably have more knowledge than others do not necessarily show sensitivity to the eyewitness factors relevant to a case. In summary, researchers have concluded that there is little evidence that mock jurors’ existing knowledge is readily incorporated into their decisions regarding a written vignette.
Finally, Brian Cutler and colleagues have also attempted to improve levels of mock juror knowledge through the presentation of expert testimony before judgments are made about cases in which eyewitness identification factors are manipulated. This research has been completed in laboratory settings with mock jurors, and as a result, its generalizability to courtrooms and jury deliberations is unknown. Nonetheless, these studies suggest that whereas the presentation of relevant expert testimony may increase low levels of juror knowledge or awareness of relevant eyewitness factors, the integration of this knowledge into juror decision making may or may not be successful, depending on the particular variables of interest. Thus, although expert testimony has been recommended as a safeguard against weak juror understanding of eyewitness factors, it does not appear to be particularly effective.
Potential Difficulties with Evaluations of Lay Knowledge
A number of issues are relevant to the reliability and validity of the assessments of lay belief described above. First, public survey results have limited temporal validity because public beliefs and knowledge change over time. These changes likely result from improved scientific understanding and its dissemination to the general public through various media and through integration into formal education.
Second, considerably more research is required to determine the extent to which survey and mock trial responses accurately reflect the beliefs of jury-eligible participants. This issue concerns the sensitivity and reliability of the various assessment procedures described above and the extent to which lay responses may be directly compared with those of experts. For example, even with ostensibly identical foci of the questions posed to experts and the public, the response options provided have not been identical. Similarly, if statements are written by experts and offered without change to survey participants, on what basis can we argue that the public understands the statements in a manner similar to that of the experts? Furthermore, the translation of the expert items into meaningful statements for lay respondents is difficult and suggests that real understanding of these issues by jurors (and by judges, trial counsel, and experts alike) will only be gained with more in-depth interviews, open-ended questions, and the use of techniques that can assess response consistency within individuals across both question formats and time.
A third question is whether the samples surveyed to date actually represent the population of individuals who may be called for jury duty and who serve as jurors. Many studies have relied on undergraduate students who, albeit jury-eligible in most cases, arguably are not representative of actual jurors: In fact, university students infrequently serve on actual juries. Even studies that included community samples suffer from weak representativeness, because there may be important demographic and attitudinal differences between community members who, once called, appear for jury duty and those who fail to appear. A more compelling approach would be to collect data from actual jurors who have participated in trials or to survey community members who have been called and have appeared for jury duty but have yet to be assigned to a particular case.
In summary, a fairly consistent description of juror knowledge emerges across a wide variety of assessment methods; specifically, jurors appear to have limited understanding of eyewitness issues and research findings.
- Benton, T. R., McDonnell, S., Ross, D. F., Thomas, W. N., & Bradshaw, E. (2007). Has eyewitness testimony research penetrated the American legal system? A synthesis of case history, juror knowledge, and expert testimony. In R. C. L. Lindsay, D. F. Ross, J. D. Read, & M. P. Toglia (Eds.), Handbook of eyewitness psychology: Vol. 2. Memory for people. Mahwah, NJ: Lawrence Erlbaum.
- Benton, T. R., Ross, D. F., Bradshaw, E., Thomas, W. N., & Bradshaw, G. S. (2006). Eyewitness memory is still not common sense: Comparing jurors, judges, and law enforcement to eyewitness experts. Applied Cognitive Psychology, 20, 115-130.
- Cutler, B. L., & Penrod, S. D. (1995). Mistaken identification: The eyewitness, psychology, and the law. New York: Cambridge University Press.
- Devenport, J. L., Penrod, S. D., & Cutler, B. L. (1997). Eyewitness identification evidence: Evaluating commonsense evaluations. Psychology, Public Policy, and Law, 3, 338-361.
- Kassin, S. M., Tubb, V. A., Hosch, H. M., & Memon, A. (2001). On the “general acceptance” of eyewitness testimony research. American Psychologist, 56, 405-416.
- Schmechel, R. S., O’Toole, T. P., Easterly, C., & Loftus, E. F. (2006). Beyond the ken? Testing jurors’ understanding of eyewitness reliability evidence. Jurimetrics, 46, 177-214.