Job Knowledge Testing




Job knowledge is critical to successful job performance. Job performance can be viewed as being determined by one’s declarative knowledge (knowledge of facts, rules, and procedures—a job’s requirements), procedural knowledge and skill (knowing how and being able to do what the job requires), and motivation. In the job performance literature, job knowledge is the declarative knowledge of interest.

Job analysis studies often use job knowledge as an important job descriptor. A typical job analysis will identify the tasks performed by job incumbents, as well as the knowledge, skills, and abilities required to successfully perform those tasks. In this context, knowledge can be defined as the degree to which one has mastered a body of material (facts and theory) directly involved in the performance of a job. Competency studies also typically yield some knowledge-based competencies.



How Is Job Knowledge Measured?

Although job knowledge is sometimes assessed using ratings (e.g., made by interviewers or supervisors), it is typically measured more directly and objectively with multiple-choice tests. Such tests are developed to be content valid (i.e., to cover knowledge areas in proportion to their importance to the job as determined through job analysis). Many strategies can help ensure the quality of such tests. For example, a test blueprint (based on a job analysis) is developed to specify test content; the blueprint reflects the appropriate weighting of knowledge areas. Item-writing guidelines improve the readability and clarity of test items and help prevent “test-wise” examinees from performing inappropriately well on the test. It is also good practice to develop test questions that go beyond simple recall and definitions, instead requiring some amount of analysis or reasoning; some test developers use Bloom’s taxonomy as a framework for doing so. Another strategy is to use visual aids (e.g., illustrations, photos, graphics) to make the questions look more job-relevant and to limit the degree to which test scores depend on reading ability. Job experts should also review items for accuracy and provide judgments about the relevance and importance of each item to help document the content validity of the test.
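To make the blueprint idea concrete, the short Python sketch below, using hypothetical knowledge areas and importance weights, shows one way to allocate items across knowledge areas in proportion to their job-analysis weights.

```python
# Hypothetical blueprint: importance weights from a job analysis, summing to 1.0.
blueprint_weights = {
    "safety procedures": 0.35,
    "equipment operation": 0.30,
    "recordkeeping": 0.20,
    "regulations": 0.15,
}

test_length = 60  # intended number of items on the test

# Allocate items in proportion to each area's weight, rounding to whole items,
# then adjust the most heavily weighted area so the counts sum to test_length.
item_counts = {area: round(w * test_length) for area, w in blueprint_weights.items()}
largest_area = max(blueprint_weights, key=blueprint_weights.get)
item_counts[largest_area] += test_length - sum(item_counts.values())

for area, n in item_counts.items():
    print(f"{area}: {n} items")
```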

Developers are increasingly using item formats other than traditional multiple choice, now that such formats can be easily administered and scored by computer. These formats include multiple-response (e.g., check all that apply), matching, drag-and-drop, and ranking. Some of these formats cover more content more efficiently than traditional item formats do, and varying the formats can make the test more engaging for examinees. It is important, however, to consider how to combine scores from different types of items so that the resulting total test score weights them appropriately. For example, how do you combine the score on a five-part matching item (in which examinees may be given partial credit for getting some, but not all, parts right) with the scores from several multiple-choice items (scored one point each) so that the reliability and validity of the total score are maximized? The answer might vary depending on the primary testing goal (e.g., maximizing content validity or correlations with other measures).
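As a minimal sketch of this weighting question (with hypothetical item scores, not a recommended scoring rule), the Python example below contrasts two simple ways of combining a partial-credit matching item with several one-point multiple-choice items.

```python
# Hypothetical responses: multiple-choice items scored 0/1 and a five-part
# matching item scored with partial credit (0-5 parts correct).
mc_scores = [1, 0, 1, 1, 0]         # five multiple-choice items, 1 point each
matching_parts_correct = 4          # parts correct out of 5 on the matching item

# Option A: give the matching item the same maximum weight as one MC item
# by rescaling its partial-credit score to the 0-1 range.
matching_rescaled = matching_parts_correct / 5
total_equal_weight = sum(mc_scores) + matching_rescaled

# Option B: let the matching item count as much as five MC items by keeping
# its raw partial-credit score.
total_raw_weight = sum(mc_scores) + matching_parts_correct

print(f"Equal-weight total: {total_equal_weight:.2f} out of 6")
print(f"Raw-weight total:   {total_raw_weight} out of 10")
```

Which combination rule is preferable depends on the testing goal noted above; the point of the sketch is only that the choice must be made deliberately rather than by default.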

A job knowledge test can be developed, scored, and evaluated using classical test theory (CTT) and item response theory (IRT) strategies. Of these, CTT strategies have the advantage of being particularly useful for providing diagnostic information about items (e.g., percentage of examinees selecting each response option and option-total score correlations) that can be used to improve them through rewriting. Because they provide a common underlying metric, IRT strategies are particularly useful if the test uses several item formats or if multiple forms of the test are required, but they require larger sample sizes to yield reliable information. If sample sizes permit, it is good practice to use both types of analytic strategies.
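For illustration, a minimal CTT-style item analysis might look like the following Python sketch, which computes each item's difficulty (proportion correct) and a corrected item-total correlation from a small, made-up matrix of scored responses; a full analysis would also examine each response option, as described above.

```python
import numpy as np

# Hypothetical scored responses: rows are examinees, columns are items (1 = correct).
scores = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 1],
    [1, 1, 1, 0],
    [0, 0, 0, 1],
    [1, 1, 1, 1],
])

total = scores.sum(axis=1)  # total test score for each examinee

for j in range(scores.shape[1]):
    difficulty = scores[:, j].mean()                      # proportion correct (item p-value)
    rest = total - scores[:, j]                           # total score excluding this item
    item_total_r = np.corrcoef(scores[:, j], rest)[0, 1]  # corrected item-total correlation
    print(f"Item {j + 1}: p = {difficulty:.2f}, corrected item-total r = {item_total_r:.2f}")
```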

Using Job Knowledge to Predict Performance

When hiring or promoting from a pool of experienced or relevantly educated candidates, an employer should consider including job knowledge as a component of the selection process. This is done most often in the context of job interviews. There is precedent for using tests of job knowledge in selection, but it can be expensive to develop and maintain a test for this purpose. Employers often look for relevant certifications (offered through either industry- or association-based testing programs) as a way to gauge whether individuals have sufficient job knowledge prior to hiring or promotion. When job knowledge is used to predict performance, it is important to consider what knowledge is required at entry versus what can be acquired on the job, a distinction that can be made during the job analysis.

Little published research addresses the validity of job knowledge measures used for employee selection. One would expect the predictive validity of a well-designed multiple-choice test to be relatively strong when there is a close correspondence between test content and job requirements. As with cognitive ability tests, however, job knowledge tests tend to exhibit Black-White race differences in performance. The differences for job knowledge tests tend to be on the order of a half standard deviation, in contrast to the full standard deviation difference often observed on cognitive ability tests.
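The standard deviation figures above are standardized mean differences (Cohen's d); the short Python sketch below, using made-up group scores, shows how such a difference is computed.

```python
import numpy as np

# Hypothetical test scores for two groups (illustrative numbers only).
group_a = np.array([78, 85, 90, 72, 88, 81])
group_b = np.array([70, 76, 82, 68, 79, 74])

# Pooled standard deviation, then the standardized mean difference (Cohen's d).
n_a, n_b = len(group_a), len(group_b)
pooled_var = ((n_a - 1) * group_a.var(ddof=1) + (n_b - 1) * group_b.var(ddof=1)) / (n_a + n_b - 2)
d = (group_a.mean() - group_b.mean()) / np.sqrt(pooled_var)

print(f"Standardized mean difference d = {d:.2f}")
```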

Using Job Knowledge to Measure Performance

There are several relevant applications of job knowledge testing to measure performance. The primary application is probably seen in the vast number of credentialing (certification and licensure) testing programs offered. Job knowledge tests are also used as job performance criterion measures in criterion-related validation research.

Job knowledge tests, however, do not tell the whole story about an examinee’s capacity to perform a job. Performance tests is a term used for higher-fidelity assessments that require examinees to perform parts of a job in a simulated environment. The managerial assessment center is one long-standing form of such testing, typically used for selection and development. There is a current surge of interest in using computer-based tests to develop more realistic performance measures. For example, the architect licensure examination not only includes multiple-choice questions but also requires candidates to draft designs on the computer. Software certification testing programs are another example, as they also increasingly use high-fidelity simulations of work activities. It is important to recognize, however, that such tests still leave the motivational aspects of performance unmeasured. Performing well on a knowledge or performance test captures a large part of the picture, but it is not the same as measuring job performance.

References:

  1. Bloom, B. S., Engelhart, M. D., Furst, E. J., Hill, W. H., & Krathwohl, D. R. (1956). Taxonomy of educational objectives: The classification of educational goals. Handbook I: Cognitive domain. New York: David McKay.
  2. Campbell, J. P., McCloy, R. A., Oppler, S. H., & Sager, C. E. (1993). A theory of performance. In N. Schmitt, W. C. Borman, & associates (Eds.), Personnel selection in organizations (pp. 35-70). San Francisco: Jossey-Bass.
  3. Haladyna, T. M. (1997). Writing test items to evaluate higher order thinking. Boston: Allyn & Bacon.
  4. Roth, P. L., Huffcutt, A. I., & Bobko, P. (2003). Ethnic group differences in measures of job performance: A new meta-analysis. Journal of Applied Psychology, 88(4), 694-706.
  5. Sackett, P. R., Schmitt, N., Ellingson, J. E., & Kabin, M. B. (2001). High-stakes testing in employment, credentialing, and higher education: Prospects in a post-affirmative action world. American Psychologist, 56(4), 302-318.
