Biodata, or biographical data, are paper-and-pencil measures that ask respondents to reflect or report on their life experiences. Scores from biodata are typically used in conjunction with other employment measures for predicting individual performance in a given job. Biodata have been used across a wide range of occupations as an indicator of the likelihood of job success, where success may be defined as task-specific job performance, teamwork, or other organizationally relevant outcomes. Biodata can therefore be a useful tool for organizations seeking to examine job applicants’ backgrounds in a consistent, transparent, and fair manner. The fundamental premise underlying the use of these measures is that past experience should be a reasonable predictor of future work behavior. That is, it is assumed that individuals shape their life experiences, and they are also shaped by these experiences. Given that these processes are relatively continuous over time, having critical information about an individual’s previous experiences should allow for making more accurate predictions about future behavior—even above and beyond predictions one could make from measures of cognitive ability, personality, motivation, and interests.
Biodata items can vary substantially in content and specificity. For instance, some items may be relatively personality oriented, making the underlying experiences of interest difficult to identify (e.g., “To what extent does your happiness depend on how things are going at work?”). On the other hand, they may be more situation specific or overt, making it relatively easier to identify the purpose of the item (e.g., “Approximately how many books have you read in the past three months?”). In all cases, however, biodata items require the respondent to recall and report their characteristics and experiences. Therefore, the usefulness of these items depends in part on the extent to which individuals are able to accurately perceive, store, and recall this information and their willingness to report it truthfully. It is known from the cognitive psychology literature that individuals vary widely in the efficiency and effectiveness of their memory storage and retrieval processes, and it is known from the organizational psychology literature that faking answers to items that have no right answer (such as biodata and personality items) is a serious concern in the employment setting.
In addition to items being covert or overt in nature, the underlying personal characteristics tapped by biodata instruments also vary widely across forms. Either implicitly or by design, biodata items typically reflect specific experiences tied to constructs such as ability, personality, motivation, interpersonal skills, and interests. In some cases, these items may be fairly pure measures of a given construct, but in other cases, the items may relate to several constructs. This clearly has implications for empirically examining and interpreting the underlying factor structure and reliability of biodata instruments. Both test-retest reliability and internal consistency (coefficient alpha) should be considered when examining biodata reliability, with test-retest reliability being more sensible when no strong factors exist in the measure.
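Coefficient alpha is straightforward to compute from a respondents-by-items score matrix. The sketch below (in Python, with the function name and data purely illustrative) implements the standard formula.

```python
import numpy as np

def coefficient_alpha(item_scores):
    """Cronbach's coefficient alpha for a respondents-by-items matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    """
    items = np.asarray(item_scores, dtype=float)
    k = items.shape[1]                              # number of items
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_variances / total_variance)
```

Because biodata items often tap several constructs at once, a low alpha need not indicate a flawed measure; as noted above, test-retest correlations may be the more appropriate reliability index when no strong common factor is expected.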
Although biodata instruments vary widely in terms of many characteristics (e.g., content, length, scoring), these measures have consistently been found to demonstrate criterion-related validity across occupations. Correlations between scores on these measures and indices of job performance (e.g., supervisor ratings) tend to be approximately .30. Furthermore, these measures have demonstrated incremental validity above and beyond measures of general cognitive ability and the five-factor model personality constructs (emotional stability, extraversion, openness to experience, agreeableness, conscientiousness). Thus, these assessments provide useful information regarding likely occupational success beyond that provided by measures of broad individual differences, themselves known to be valuable predictors of organizational behavior.
Item Attributes
As noted, biodata items can differ in a number of ways. F. A. Mael has presented a useful outline of 10 major biodata item attributes.
- Historical versus hypothetical (i.e., past behaviors versus predicted or anticipated behaviors in “what if” scenarios)
- External versus internal (i.e., behaviors versus attitudes)
- Objective versus subjective (i.e., observable-countable events versus self-perceptions)
- Firsthand versus secondhand (i.e., self-descriptions versus how people would say others describe them)
- Discrete versus summative (i.e., single events versus averaging over a period of time)
- Verifiable versus nonverifiable
- Controllable versus noncontrollable (i.e., circumstances could or could not be influenced by the individual’s own decisions)
- Equal access versus unequal access (i.e., access to opportunities with respect to the group being tested)
- Job relevant versus nonjob relevant
- Noninvasive versus invasive (i.e., matters usually kept private)
Scoring Biodata Measures
A number of biodata measure scoring methods have been proposed. In situations where linkages between items and constructs are relatively clear, scoring can be quite straightforward. For example, item content may have been developed to tap a specific set of constructs or categorizations might be supported by subject matter expert item sorting. In these cases, each item might be scored along a single underlying continuum (i.e., more is better), consistent with the approach used with traditional Likert-scale self-report measures of personality.
Alternatively, criterion keying offers a more complex, empirically driven scoring method. This approach involves obtaining item responses and relevant criterion scores for a sample of individuals. Mean criterion scores or criterion-related validity coefficients are calculated for each response option, across all items. These values are then used as item response weights for scoring purposes. These are strictly empirical weights that can be adjusted, for example, when range restriction effects can be estimated or when nonlinear patterns are found for what conceptually appear to be relatively continuous response options. Keying items based on empirical relationships can also be carried out using personality or other individual difference measures, rather than a criterion measure. This type of scoring may be particularly useful in situations where respondents are motivated to present themselves in a socially desirable manner (e.g., job applicant contexts). In these situations, it may be relatively easy for test takers to manipulate scores on traditional personality measures, whereas scores on a set of personality-relevant but objective or verifiable biodata items may be less susceptible to this sort of response distortion.
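The basic option-keying step can be sketched as follows. This is a minimal illustration with hypothetical function names and data: mean criterion scores are computed for each response option and then applied as weights when scoring new respondents.

```python
import numpy as np

def derive_option_weights(responses, criterion):
    """Empirical option weights: the mean criterion score of everyone
    who chose each option, computed separately for each item.

    responses: (n_respondents, n_items) array of option codes
    criterion: (n_respondents,) array of, e.g., performance ratings
    Returns a list of {option: weight} dicts, one dict per item.
    """
    responses = np.asarray(responses)
    criterion = np.asarray(criterion, dtype=float)
    return [{opt: criterion[item == opt].mean() for opt in np.unique(item)}
            for item in responses.T]

def apply_key(responses, weights):
    """Score respondents by summing the option weights across items."""
    return [sum(w[r] for w, r in zip(weights, row))
            for row in np.asarray(responses)]
```

In practice the raw option means would typically be smoothed or adjusted (e.g., for range restriction or implausible nonlinearities, as described above) before being used operationally.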
One other approach, referred to as configural scoring, involves placing individuals into subgroups based on their profiles of biodata scores. An attempt is made to identify subgroups in an initial sample that are internally consistent but externally distinct. The mean biodata profiles from these subgroups may be linked to relevant organizational criteria and then labeled (e.g., a “goal-oriented leaders” profile). Individuals who subsequently complete the biodata measure are assigned to these subgroups based on an index of similarity between their biodata profile and the mean subgroup profile (e.g., squared Euclidean distance or profile correlation). These assignments may then inform various decisions about individuals, such as hiring, placement, training, and development decisions.
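The assignment step reduces to a nearest-profile rule. A minimal sketch, assuming squared Euclidean distance as the similarity index (subgroup labels and profile values below are hypothetical):

```python
import numpy as np

def assign_subgroup(profile, subgroup_means):
    """Assign a biodata profile to the subgroup whose mean profile is
    nearest by squared Euclidean distance.

    profile: one respondent's biodata scale scores
    subgroup_means: {label: mean profile} from the derivation sample
    """
    profile = np.asarray(profile, dtype=float)
    labels = list(subgroup_means)
    distances = [np.sum((profile - np.asarray(subgroup_means[label], dtype=float)) ** 2)
                 for label in labels]
    return labels[int(np.argmin(distances))]
```

A profile correlation could be substituted for the distance index when the shape of the profile matters more than its elevation.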
Although numerous scoring approaches have proven useful, two general recommendations appear most appropriate. First, scoring methods should be informed by both rational and empirical considerations. A rational or theory-based approach is often very useful for item development, item revision, and score use and interpretation. Clearly, empirical findings that suggest revisions to the conceptual foundation of the measure should not be ignored; this information may lead to both improved prediction and theoretical understanding. Second, a given approach to item scoring developed on one sample should be cross-validated on an independent sample. Any scoring method with weights derived from one particular sample will capitalize on chance to some degree. Therefore, cross-validation is necessary to ensure that findings from the derivation sample (e.g., strong criterion-related validity, reduced group mean differences) are robust.
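The derivation/cross-validation logic can be sketched as follows. This is an illustration only (function names are hypothetical, and the keying and scoring routines are passed in as arguments); in practice the sample would be randomly partitioned and the holdout validity compared against the derivation-sample validity to gauge shrinkage.

```python
import numpy as np

def holdout_validity(responses, criterion, derive_weights, apply_weights,
                     split=0.5):
    """Derive scoring weights on the first portion of the sample, then
    report criterion-related validity (Pearson r) in the held-out rest.

    Weights derived and evaluated on the same sample capitalize on
    chance; the holdout r estimates the cross-validated validity.
    """
    responses = np.asarray(responses)
    criterion = np.asarray(criterion, dtype=float)
    cut = int(len(criterion) * split)
    weights = derive_weights(responses[:cut], criterion[:cut])
    holdout_scores = apply_weights(responses[cut:], weights)
    return float(np.corrcoef(holdout_scores, criterion[cut:])[0, 1])
```

The same split-sample check applies to any empirically derived key, including configural subgroup solutions.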
Test-Taker Reactions to Biodata Measures
Given that biodata items ask respondents about personal characteristics and life experiences, the potential for negative test-taker reactions to these instruments exists, particularly when they contain items whose purpose is not transparent. Reviews of test-taker reactions research indicate that, compared with other personnel selection measures, biodata tend to be rated as moderate in terms of favorability. Specifically, these measures generally score around the midpoint of favorability rating scales and are typically rated lower than interviews, resumes, and cognitive ability tests, but higher than integrity tests. However, reactions to biodata measures also vary across studies, likely due to the diversity of these instruments. In general, biodata measures are viewed more favorably when the content is perceived as job relevant and a fair reflection of the individual’s life experiences.
References:
- Dean, M. A., & Russell, C. J. (2005). An examination of biodata theory-based constructs in a field context. International Journal of Selection and Assessment, 13, 139-149.
- Mael, F. A. (1991). A conceptual rationale for the domain and attributes of biodata items. Personnel Psychology, 44, 763-792.
- Mount, M. K., Witt, L. A., & Barrick, M. R. (2000). Incremental validity of empirically keyed biodata scales over GMA and the five factor personality constructs. Personnel Psychology, 53, 299-323.
- Oswald, F. L., Schmitt, N., Kim, B. H., Ramsay, L. J., & Gillespie, M. A. (2004). Developing a biodata measure and situational judgment inventory as predictors of college student performance. Journal of Applied Psychology, 89, 187-207.
- Ouellette, J. A., & Wood, W. (1998). Habit and intention in everyday life: The multiple processes by which past behavior predicts future behavior. Psychological Bulletin, 124, 54-74.
- Reiter-Palmon, R., & Connelly, M. S. (2000). Item selection counts: A comparison of empirical key and rational scale validities in theory-based and non-theory-based item pools. Journal of Applied Psychology, 85, 143-151.
- Stokes, G. S., & Cooper, L. A. (2004). Biodata. In J. C. Thomas (Ed.), Comprehensive handbook of psychological assessment: Vol. 4. Industrial and organizational assessment (pp. 243-268). Hoboken, NJ: Wiley.