Cognitive Ability Tests

Although there are many definitions of cognitive ability, most focus on the notion that cognitive ability is both a determinant and a product of human learning. A common definition of cognitive ability describes it as a general mental capability that involves, among other things, the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience. Given this definition, it is easy to see why cognitive ability is often used synonymously with the term intelligence and why it has long been an important construct for industrial and organizational psychologists.

Cognitive Ability Testing in Industry

Over the past century, a number of mental tests have been created to measure both general cognitive ability and specific abilities or aptitudes. In particular, such tests have been used to assess preemployment candidates in industry since the 1920s. These tests usually contain questions related to mathematical, verbal, spatial, and mechanical material and are typically administered in paper-and-pencil format. For example, cognitive ability tests have been created to measure people’s specific aptitude to solve math problems, read and answer questions about written material, and mentally rotate figures. Given the long history of these tests in industry, it is not surprising that cognitive ability tests are in widespread, global use in companies for predicting on-the-job performance and a number of other important workplace outcomes.

Prediction of Workplace Outcomes

Cognitive ability tests have long been used in industry to assess preemployment candidates because of the strength with which they predict on-the-job performance for all jobs. In fact, extensive meta-analytic evidence from validity studies across a range of jobs (e.g., clerical, military, sales, and white-collar positions) makes a strong case for g, or general mental ability (GMA), as the single best predictor of performance. Meta-analytic studies conducted in the United States report predictive validity between GMA and performance ranging from .31 (refinery workers) to .73 (computer programmers), and meta-analytic studies from the European Community report a similar range and an overall operational validity of .62.

As this wide range of correlations in the United States and Europe suggests, the predictive validity of GMA for job performance varies with the complexity of the job. For example, manual labor jobs tend to have validity coefficients around .25, whereas most white-collar jobs tend to have coefficients ranging from .50 to .60. Job complexity, in other words, moderates the validity of GMA for predicting job performance. Along with job complexity, another factor originally thought to affect the GMA-performance relationship is job experience. In particular, it was thought that this relationship would weaken as job experience increased. Recent investigation, however, shows that the predictive validity of GMA is stable and does not decrease over time with increased job knowledge. This finding makes sense because GMA is believed to be the factor that turns experience into increased performance.

In addition to their use in predicting job performance, cognitive ability tests have been used successfully to predict other important workplace criteria. For example, U.S. meta-analytic research findings show a strong predictive correlation of .62 between GMA and training performance, whereas European findings report a correlation of .54. Meta-analytic research also shows that GMA is related to occupational attainment, though findings vary depending on whether the investigation is cross-sectional (.62) or longitudinal (.51). Mean cognitive ability scores increase, and standard deviations and score ranges decrease, with increasing occupational level; in other words, it seems that those low in GMA have a more difficult time attaining high-level occupations.

To this point, only the relationship between GMA and workplace outcomes has been discussed, despite the existence of specific aptitude theory, which posits that different jobs require different specific abilities and that testing for these specific abilities should optimize the prediction of workplace outcomes. The reason for this omission is that specific aptitude tests measure GMA in addition to some specific ability, and it is the GMA component of these tests, not the specific aptitude components (e.g., mathematical, spatial, verbal), that predicts workplace outcomes. In other words, assessing a candidate’s specific aptitudes apart from GMA appears to add little or nothing to the prediction of posthire performance. One area in which specific abilities may be important in industry, however, is vocational counseling, where within-person aptitude differences can be useful in placing individuals in positions.

Utility of Cognitive Ability Testing in Preemployment Selection

In addition to investigating the predictive validity of GMA for important workplace outcomes, industrial and organizational psychologists have examined the utility, or practical value, of GMA compared with and combined with other common selection tests. Cognitive ability tests (predictive validity of .51) appear to be the best choice when a single selection instrument is used to screen candidates. When multiple instruments are used to screen inexperienced candidates, cognitive ability tests should be combined with an integrity test (composite validity of .65), a structured behavioral interview (composite validity of .63), or a conscientiousness test (composite validity of .60). When screening experienced candidates, however, a combination of cognitive ability and work sample testing (composite validity of .63) may offer the most utility.
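The composite validities above reflect the standard multiple-correlation formula for combining two predictors of a criterion. As a minimal sketch, the following assumes illustrative inputs: the .51 GMA validity from this entry, plus an integrity-test validity of .41 and a near-zero GMA-integrity intercorrelation, values consistent with the meta-analytic literature but not stated in this entry.

```python
from math import sqrt

def composite_validity(r1, r2, r12):
    """Multiple correlation R of a criterion with two predictors,
    given their criterion validities r1 and r2 and their
    intercorrelation r12 (standard two-predictor formula)."""
    return sqrt((r1**2 + r2**2 - 2 * r1 * r2 * r12) / (1 - r12**2))

# Illustrative values: GMA validity .51 (from this entry); the integrity-test
# validity (.41) and near-zero intercorrelation are assumed for illustration.
r = composite_validity(0.51, 0.41, 0.0)
print(round(r, 2))  # 0.65
```

Note that because integrity tests correlate only weakly with GMA, they add nearly their full validity to the composite, which is why they pair so well with cognitive ability tests.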

Prevalence of and Concerns about Cognitive Ability Testing in Industry

Surveys on the prevalence of GMA test batteries and specific aptitude tests in industry report that they are used at similar levels across countries. For example, it appears that approximately 16% of companies in the United States administer GMA tests, and 42% use aptitude tests. In Canada, it is reported that 43% of companies administer aptitude tests. In the European Community, surveys report that GMA test usage ranges from 6% (Germany) and 20% (Italy) to 70% (United Kingdom) and 74% (Finland), and that aptitude test usage ranges from 8% (Germany) to 72% (Spain). In addition, one survey reported that 56.2% of companies in Australia use cognitive tests.

Although these percentages indicate widespread global use of GMA and aptitude tests, their prevalence is far below the nearly 100% global use of interviews as screening instruments. These numbers also show that many companies are not using cognitive ability tests despite the accumulated evidence of their superior predictive validity relative to other selection instruments.

There are many reasons for these results, including the cost of testing candidates, the difficulty of administering paper-and-pencil or computerized tests under supervised conditions, the possibility of negative candidate reactions, the lengthening of the recruiting process when tests are added, and concerns about test security. In addition, industry remains concerned about legal challenges to the use of cognitive ability tests in employment screening, because the tests are perceived as being plagued by measurement bias against protected groups. The persistence of this perception, despite extensive empirical evidence that professionally developed cognitive ability tests are not biased against any group, is something of a mystery. Although the question of measurement bias has been empirically settled, many of the issues surrounding cognitive ability testing are exacerbated by recruiting and selection processes that are increasingly managed and delivered through the Internet under unsupervised conditions.

Summary

More than 100 years of theory development and empirical investigation have shown cognitive ability tests to be valid and fair predictors of important job outcomes in companies throughout the world. Despite overwhelming evidence of their success and their strengths compared with other selection instruments, however, many companies are still not using them. It is hoped that cognitive ability researchers and test developers will focus on technological advances in delivering cognitive tests and on developing tests with less adverse impact that can be transported globally. Such advances will surely allow more companies to benefit from cognitive ability testing.

References:

  1. Hunter, J. E., & Hunter, R. F. (1984). Validity and utility of alternative predictors of job performance. Psychological Bulletin, 96, 72-98.
  2. Jensen, A. R. (1980). Varieties of mental test items. In A. R. Jensen, Bias in mental testing (pp. 125-168). New York: Free Press.
  3. Jensen, A. R. (1998). Construct, vehicles, and measurements. In A. R. Jensen, The g factor: The science of mental ability (pp. 306-349). Westport, CT: Praeger.
  4. Keith, T. Z. (1997). Using confirmatory factor analysis to aid in understanding the constructs measured by intelligence tests. In D. P. Flanagan, J. L. Genshaft, & P. L. Harrison (Eds.), Contemporary intellectual assessment: Theories, tests, and issues (pp. 373-402). New York: Guilford Press.
  5. Murphy, K. R., Cronin, B. E., & Tam, A. P. (2003). Controversy and consensus regarding the use of cognitive ability testing in organizations. Journal of Applied Psychology, 88, 660-671.
  6. Salgado, J. F., Anderson, N., Moscoso, S., Bertua, C., & De Fruyt, F. (2003). International validity generalization of GMA and cognitive abilities: A European Community meta-analysis. Personnel Psychology, 56, 573-605.