Job Performance

Almost all efforts of managers and human resources consultants have the objective of improving individual employee job performance, either directly or indirectly.

Efforts such as personnel selection and training aim to improve performance directly, whereas interventions in other organizational processes (e.g., culture, climate, or team processes, such as reducing conflict and increasing coordination among organizational members) attempt to improve performance indirectly. The popular press is full of anecdotes of top executives who have lost their jobs because of incompetence in performing them. Given this state of organizational research and practice, it is necessary to have a clear understanding of what job performance is and of the issues involved in its measurement. Job performance models have been developed to specify the content domain of job performance as well as to clarify the relationships of individual difference variables (e.g., personality) and organizational characteristics (e.g., reward systems) to job performance. In this summary, we first examine the content domain of job performance and then consider some relevant measurement issues.

Specifying the Content Domain of Job Performance

What constitutes job performance? This simple question becomes complex when we consider the nuances associated with it. Is job performance just what the organization has legitimately hired an employee to do? If so, a careful job analysis should provide the contours of what should be included when the job performance of an individual is assessed. These aspects of job performance may include performance on tasks, communication, leadership, and avoidance of counterproductive behaviors. Different models of job performance make distinctions among different aspects of performance. For example, one popular model of job performance distinguishes between task performance and contextual performance. Task performance is defined as the proficiency with which job incumbents perform activities that are formally recognized as part of their jobs, that is, activities that contribute to the technical core of the organization. Contextual performance comprises the activities that an individual incumbent performs to support the social environment in which the technical core must function. This distinction between task and contextual performance has been developed further in other models of job performance, some of which divide contextual performance into further subdimensions.

The literature on the dimensions of job performance has suffered from ambiguous conceptualizations.

Organizational researchers have used virtually any individual difference variable that could contribute to the productivity, efficiency, or profitability of the unit or organization as a measure of job performance. Even when appropriate measures have been used to assess aspects of job performance, consistent agreement about the dimensions of job performance has been difficult to achieve. To understand which measures cluster together to assess a dimension of job performance, organizational researchers have collected data on the individual measures and then correlated and factor-analyzed those data. Yet decades of research along these lines have not converged on any unique solution, partly because it is infeasible to collect data on all measures from the same sample of job incumbents.

Recent years have seen the use of meta-analytic methods to cumulate data across several individual studies. These methods enable organizational researchers to assess the covariation of several job performance measures even when not all measures are obtained from the same sample of incumbents. A comprehensive study that included all measures used in articles published in the Journal of Applied Psychology between 1917 and 2004 (see References) found that all the different measures were positively correlated. These positive correlations suggest that a common factor underlies the different measures of job performance and the different dimensions into which job performance is sliced by different researchers and practitioners. They also imply that, in general, employees who perform well on one dimension are likely to perform well on other dimensions. Thus, organizational interventions aimed at improving one dimension are likely to have a larger impact than usually acknowledged on other dimensions.
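The basic step in cumulating correlations across studies can be sketched as a sample-size-weighted mean correlation, the starting point of psychometric meta-analysis. The study sample sizes and correlations below are hypothetical, for illustration only.

```python
# Minimal sketch of cumulating correlations across studies: a sample-size-
# weighted mean correlation. All study data below are hypothetical.
studies = [
    # (sample size, observed correlation between two performance measures)
    (120, 0.35),
    (300, 0.42),
    (85,  0.28),
    (210, 0.38),
]

total_n = sum(n for n, _ in studies)
mean_r = sum(n * r for n, r in studies) / total_n
print(round(mean_r, 2))  # → 0.38
```

Weighting by sample size gives larger studies proportionally more influence on the cumulated estimate, which is why covariation can be assessed even when no single study measured every pair of dimensions.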

The fact that a general factor underlies the different measures of job performance does not preclude or diminish the importance of individual dimensions or facets of job performance. What it implies is that, depending on the context and the needs of the researcher or organization, the content domain of job performance can be sliced in different ways. For an organization in a service industry, it may be pertinent to distinguish between customer friendliness and working well with coworkers, treating them as two separate dimensions. To assess employee job performance in an industrial manufacturing plant, it may make sense to combine the two into one overall dimension of interpersonal competence. Different conceptualizations and dimensional views of job performance are relevant for different organizational interventions (e.g., selection versus training). This renders moot any controversy over which set of job performance dimensions is correct.

Recognizing that the different sets of dimensions postulated depend on the context and purpose of measurement, we can employ a two-dimensional grid to classify the taxonomies of job performance dimensions postulated in the extant literature. The first dimension of this grid distinguishes whether a taxonomy was developed for a particular job or occupation or to be applicable across occupations. For example, one taxonomy of job performance dimensions focuses exclusively on entry-level workers in the retail industry and proposes the following nine dimensions of job performance: adherence to rules, industriousness, thoroughness, schedule flexibility, attendance, off-task behavior, unruliness, theft, and drug use. Although this set is appropriate for retail industry employees, schedule flexibility may not be a relevant dimension in another occupation. In contrast, another taxonomy of job performance, developed to be applicable across occupations, proposes the following eight dimensions into which the job performance content domain can be sliced: job-specific task proficiency, non-job-specific task proficiency, written and oral communication competence, demonstrating effort, maintaining personal discipline, facilitating team and peer performance, supervision, and administration. Occupation-specific taxonomies yield richer context-specific information, whereas cross-occupational taxonomies have the advantage of generalizability. It should be noted, however, that regardless of whether the taxonomies are occupation specific or applicable across occupations, the dimensions are positively correlated, suggesting the presence of a common factor of job performance across all dimensions. In fact, the general common factor usually explains as much as 60% of the variance in each dimension.
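The idea of a general factor explaining a large share of the variance in each dimension can be illustrated with a small computation. The correlation matrix below is hypothetical (four positively correlated performance dimensions), and the first principal component is used here only as a rough stand-in for a common factor.

```python
import numpy as np

# Hypothetical correlation matrix for four positively correlated job
# performance dimensions (a "positive manifold"). Illustrative only.
R = np.array([
    [1.00, 0.55, 0.60, 0.50],
    [0.55, 1.00, 0.58, 0.52],
    [0.60, 0.58, 1.00, 0.56],
    [0.50, 0.52, 0.56, 1.00],
])

# For a correlation matrix, the largest eigenvalue divided by the number of
# dimensions gives the proportion of total variance explained by the first
# principal component, a rough proxy for the general factor.
eigenvalues = np.linalg.eigvalsh(R)  # returned in ascending order
general_factor_share = eigenvalues[-1] / R.shape[0]
print(f"{general_factor_share:.0%}")
```

With uniformly positive correlations of this size, the first component accounts for roughly two thirds of the total variance, in the same range as the figure cited above.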

In addition to classifying taxonomies as occupation specific or not, we can group them by whether they focus on specific aspects of performance or on the whole domain of job performance. For example, several taxonomies of counterproductive work behaviors have been proposed in recent years. Their goal is not to define the entire content domain of job performance but to home in on specific aspects of performance. Overlaying this classification with the previous one (occupation specific versus across occupations) results in a four-cell classification of taxonomies. This two-dimensional grid serves the useful purpose of summarizing the numerous taxonomies of job performance content postulated in the literature.

Measurement Issues in Job Performance Assessment

Given the centrality of job performance assessment to several high-stakes decisions in organizations (personnel selection, merit pay, legal defense of organizational policies, etc.), accurate measurement of job performance is critical. Measures of job performance can be objective: organizational records of units produced, errors made, absences, promotion rates, accidents, or turnover. Alternatively, measures can be subjective: judgmental evaluations in the form of either rankings or ratings. Judgmental evaluations can be made by supervisors, peers, or subordinates, or even by the employees themselves; sometimes organizations also obtain judgmental evaluations from customers. The purpose of the assessment (e.g., deciding merit pay raises) may render some evaluation sources (e.g., self-ratings) untenable. A voluminous literature on multisource, or 360-degree, feedback addresses the relative merits of different evaluators and argues for integrating the different perspectives into a comprehensive evaluation of an individual employee.

Given that specific dimensions of job performance can be evaluated by different sources (supervisors, peers, etc.), a natural question arises as to the equivalence of those sources in evaluating a particular job performance dimension (e.g., leadership). That is, do supervisors and peers mean the same thing when evaluating an employee as demonstrating good leadership? One hypothesis is that the different sources observe, emphasize, and mean different behaviors when rating an employee’s leadership. In this example, supervisors may rate employees’ leadership based on how well they get tasks done through subordinates, whereas peers may rate leadership based on how well the employee coordinates with colleagues. If this hypothesis is correct, leadership ratings provided by supervisors and those provided by peers for a set of employees should correlate less than perfectly. Recent cumulative research suggests that although the observed correlation is indeed less than perfect, its magnitude is attenuated by unreliable measurement; once corrections for unreliability are applied, the corrected correlation is much closer to 1.0 than previously believed. Thus, it appears that the different sources (peers, supervisors, etc.) rate the same dimension, albeit with emphasis on different observable behaviors. That different sources are rating the same dimension does not detract from the merit of multisource assessments. An analogy with test construction is useful here. A test of intelligence has several items, and by integrating responses across items we obtain a better (i.e., more comprehensive in its coverage of the content domain of intelligence) and more reliable assessment of intelligence. Similarly, multisource assessments ensure comprehensive coverage of the behaviors under consideration and enhance the reliability of the composite evaluation.
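The correction for unreliability mentioned above follows the classical correction-for-attenuation formula: the observed correlation is divided by the square root of the product of the two measures' reliabilities. A minimal sketch, with hypothetical observed correlations and reliability estimates:

```python
import math

def disattenuate(r_observed: float, rel_x: float, rel_y: float) -> float:
    """Classical correction for attenuation: r / sqrt(rel_x * rel_y).

    The estimate is capped at 1.0, since sampling error in the inputs can
    push the corrected value past the theoretical maximum.
    """
    corrected = r_observed / math.sqrt(rel_x * rel_y)
    return min(corrected, 1.0)

# Hypothetical numbers: supervisor and peer leadership ratings correlate .40,
# with interrater reliabilities of .52 (supervisors) and .42 (peers).
print(round(disattenuate(0.40, 0.52, 0.42), 2))  # → 0.86
```

The example shows the pattern described in the text: a modest observed correlation between sources can imply a near-perfect correlation at the construct level once measurement unreliability is removed.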

Another measurement issue in job performance assessment is definitional in nature. Some researchers have argued that job performance measures should include only behaviors, not the outcomes of those behaviors. Thus, the efforts made by an employee (e.g., the number of contacts made with potential customers) should be evaluated, but the outcomes (e.g., the number of units sold) should not be. The reasoning behind this distinction rests on the degree of control an individual employee has over behaviors versus outcomes: an individual’s performance should be evaluated based on what is under the employee’s control. However, control is relative. One could argue that the number of papers written by a professor is part of job performance but the number of papers published is not (because publication depends on several factors outside the professor’s control); yet even the number of papers written is not strictly under the professor’s control. Given this endless circle of arguments, it is preferable to define job performance as comprising both evaluatable behaviors and outcomes.

Other measurement issues are likely to come to the forefront of debate in the coming years. Almost all taxonomies of job performance dimensions fail to take into account the temporal relationships across dimensions. For example, interpersonal competence at time 1 can result in better productivity at time 2. Exploring these dynamic relationships among job performance dimensions is not the same as the issue of criterion dynamicity, which refers to the hypothesis that individuals will improve in their performance on a dimension over time. Cumulative research shows that although there are mean changes in performance over time (i.e., people improve with experience), the relative rank ordering of individuals on a dimension stays fairly constant over time.

We close this section on measurement issues with a note on the reliability of job performance assessments. Reliability is the consistency of measurement, and depending on the answer to the question “Consistency over what?” there are several types of reliability. When a job performance item is measured at two points in time (assuming no change in true performance levels), the correlation indexing consistency across the two occasions is the test-retest (or rate-rerate) reliability. When a job performance dimension is assessed with several items, consistency across the items is captured by the coefficient alpha. When two raters rate the performance of a set of employees, interrater reliability captures the consistency across raters. Cumulative research has shown that there is a large idiosyncratic component in each rater’s ratings. To the extent that organizational researchers and practitioners are interested in the component of job performance assessments shared across raters, interrater reliabilities are the appropriate reliability coefficients to use.
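The coefficient alpha mentioned above has a standard computational form: the number of items, the item variances, and the variance of the total score. A minimal sketch, using a hypothetical matrix of ratings:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Coefficient alpha for an (employees x items) score matrix."""
    k = scores.shape[1]                     # number of items
    item_vars = scores.var(axis=0, ddof=1)  # per-item sample variances
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 5 employees rated on a 4-item job performance scale.
ratings = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
])
print(round(cronbach_alpha(ratings), 2))  # → 0.96
```

Alpha indexes consistency across items; the analogous consistency across raters (interrater reliability) would instead be computed from a matrix of raters' scores for the same employees, which is why the two coefficients answer different "consistency over what?" questions.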

Conclusions

Job performance is a central variable in organizational research and interventions. Several models of job performance have been developed to slice the content of job performance into different sets of dimensions. The different models can be placed into one of the four cells of a two-dimensional grid that takes into account (a) whether the model is occupation specific or applicable across occupations and (b) whether the entire domain of job performance or a specific section is targeted. Despite the different ways the job performance domain can be partitioned into dimensions, all dimensions are positively correlated, suggesting a common underlying factor across all job performance dimensions. The measurement of job performance raises several issues, such as the temporal relationships across dimensions and the choice of an appropriate reliability coefficient. Given the large idiosyncratic component in individual raters’ ratings, interrater reliability coefficients should be used. Cumulative research also suggests (a) equivalence of different sources of ratings and (b) lack of criterion dynamicity. Job performance assessments are critical to high-stakes decision making in organizations, and continued research is likely to improve existing job performance models.

References:

  1. Austin, J. T., & Villanova, P. (1992). The criterion problem: 1917-1992. Journal of Applied Psychology, 77, 836-874.
  2. Borman, W., & Motowidlo, S. J. (1993). Expanding the criterion domain to include elements of contextual performance. In N. Schmitt & W. Borman (Eds.), Personnel selection in organizations (pp. 71-98). San Francisco: Jossey-Bass.
  3. Campbell, J. P., McCloy, R. A., Oppler, S. H., & Sager, C. E. (1993). A theory of performance. In N. Schmitt & W. Borman (Eds.), Personnel selection in organizations (pp. 35-70). San Francisco: Jossey-Bass.
  4. Gruys, M. L., & Sackett, P. R. (2003). Investigating the dimensionality of counterproductive work behavior. International Journal of Selection and Assessment, 11, 30-42.
  5. Hunt, S. T. (1996). Generic work behavior: An investigation into the dimensions of entry-level hourly job performance. Personnel Psychology, 49, 51-83.
  6. Viswesvaran, C., & Ones, D. S. (2000). Perspectives on models of job performance. International Journal of Selection and Assessment, 8, 216-227.
  7. Viswesvaran, C., & Ones, D. S. (2005). Job performance: Assessment issues in personnel selection. In A. Evers, N. Anderson, & O. Voskuijl (Eds.), Handbook of personnel selection (pp. 354-375). Malden, MA: Blackwell.
  8. Viswesvaran, C., Schmidt, F. L., & Ones, D. S. (2005). Is there a general factor in job performance ratings? A meta-analytic framework for disentangling substantive and error influences. Journal of Applied Psychology, 90, 108-131.
