Computer assessment, Web-based assessment, and computer adaptive testing (CAT) all refer to a class of personnel instruments that use computer technology for selection and assessment. The most general of these terms, computer assessment, refers to any assessment instrument that is presented using a computer interface. Web-based assessment is a specialized form of computer assessment that relies on features of the World Wide Web for the administration of the assessment. Finally, CAT uses computer technology to administer tests in an unconventional manner: this form of testing adapts to test takers based on their past responses, producing a testing experience that is tailored to each individual.
The simplest form of computerized assessment consists of taking a paper-and-pencil instrument and presenting its items on a computer. These tests are often referred to as page-turner tests because the computer is used simply to move test takers from one item to the next, like turning from page to page in a traditional paper-and-pencil assessment. However, computer technology can be used much more extensively in an assessment system. For example, computers allow test developers to include multimedia elements such as audio and video files in their assessments. In addition, computer technology affords the opportunity for more interactive assessment than is possible in the paper-and-pencil format; for example, an assessment could include a computerized “in basket” that responds to an individual’s actions or provides information when queried.
Web-based assessment takes computerized assessment one step further by incorporating the Internet into the assessment process. The capabilities of the Internet allow for enhanced flexibility: assessments can be administered in a wide variety of locations and at different times without the need for specialized software. Web-based assessment permits test takers to complete assessment batteries in the comfort and privacy of their own homes, at whatever time is most convenient for them, without a proctor to administer the test. Internet technology also creates opportunities for unique assessments that are not possible using computers alone. For example, Web-based interviewing can use videoconferencing technology to conduct interviews that fall somewhere between face-to-face and telephone interviews in terms of interactivity.
Computer Adaptive Testing
Computer adaptive testing presents a markedly different application of computer technology in assessment. Conventional tests typically consist of a set number of items that all test takers are exposed to. Because most tests contain a mixture of easy, moderately difficult, and difficult items, some test takers are exposed to items that are inappropriate for their ability. For example, high-ability test takers are required to answer some very easy items, and low-ability examinees are forced to wrestle with some extremely difficult items. Because high performers tend to get all of the easy items correct, these items do not help to differentiate among high-ability examinees. The same is true for low-ability examinees, who have little chance of success on difficult items. Because these inappropriate items do not differentiate among test takers of similar ability, a more efficient solution would be to ask test takers to respond only to items that are appropriate for their ability level. This is where CAT comes into play.
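The intuition that mismatched items carry little information can be made concrete with the item information function from item response theory. The sketch below assumes a Rasch (one-parameter logistic) model, under which an item's Fisher information is p(1 - p) and peaks when item difficulty matches examinee ability; the specific ability and difficulty values are invented for illustration.

```python
import math

def p_correct(theta, b):
    """Rasch (1PL) probability that an examinee of ability theta
    answers an item of difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_information(theta, b):
    """Fisher information of a Rasch item: p * (1 - p).
    Largest when difficulty matches ability (p = 0.5)."""
    p = p_correct(theta, b)
    return p * (1.0 - p)

high_ability = 2.0
# A very easy item tells us almost nothing about a strong examinee...
info_easy = item_information(high_ability, -2.0)
# ...while an item matched to the examinee's ability is maximally informative.
info_matched = item_information(high_ability, 2.0)
```

On these illustrative numbers, the matched item carries information 0.25 (its maximum possible value), more than ten times that of the far-too-easy item, which is why presenting only ability-appropriate items is so much more efficient.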
The process of CAT is as follows: An examinee responds to an initial item. The adaptive test then uses a statistical framework called item response theory (IRT) to generate an estimate of the examinee’s ability, and based on this estimate, the computer selects an item of appropriate difficulty to present next. This procedure repeats after each item is answered until some stopping criterion is reached, such as answering a prespecified number of items, reaching a time limit, or achieving a certain level of measurement precision. Because the presentation of items is tailored to examinees, test takers no longer have to answer questions that are extremely easy or exceedingly difficult for them. Because these inappropriate items have been eliminated, adaptive testing procedures are much more efficient. In addition, because the items presented are tailored to a particular individual, CAT provides a more precise estimate of a test taker’s true score.
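The adaptive loop described above can be sketched in a few lines. This is a minimal illustration, not a production implementation: it assumes a Rasch (1PL) IRT model, a crude grid-search maximum-likelihood ability estimate, a fixed-length stopping rule, and a simulated examinee whose responses are generated from a known "true" ability. All function names and parameter values here are invented for the example.

```python
import math
import random

def rasch_prob(theta, b):
    """Probability of a correct response under the Rasch (1PL) IRT model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def estimate_theta(responses, difficulties):
    """Crude maximum-likelihood ability estimate via grid search over theta."""
    grid = [g / 10.0 for g in range(-40, 41)]  # theta from -4.0 to 4.0
    def log_lik(theta):
        return sum(math.log(rasch_prob(theta, b)) if r
                   else math.log(1.0 - rasch_prob(theta, b))
                   for r, b in zip(responses, difficulties))
    return max(grid, key=log_lik)

def adaptive_test(item_bank, true_theta, n_items=10, seed=0):
    """Administer a fixed-length CAT against a simulated examinee:
    pick the unused item whose difficulty is closest to the current
    ability estimate, record the (simulated) response, re-estimate."""
    rng = random.Random(seed)
    theta = 0.0                      # start at average ability
    asked, responses, difficulties = set(), [], []
    for _ in range(n_items):
        # Item selection: for Rasch items, the most informative item is
        # the one whose difficulty is closest to the current estimate.
        item = min((i for i in range(len(item_bank)) if i not in asked),
                   key=lambda i: abs(item_bank[i] - theta))
        asked.add(item)
        b = item_bank[item]
        responses.append(rng.random() < rasch_prob(true_theta, b))
        difficulties.append(b)
        theta = estimate_theta(responses, difficulties)  # update ability
    return theta
```

For instance, with an item bank of 25 difficulties spaced from -3 to +3, a 15-item run walks the difficulty of presented items toward the examinee's ability and returns an estimate on the same -4 to +4 scale as the grid. Operational CATs replace each piece (two- or three-parameter models, Bayesian estimation, information-based stopping rules), but the loop structure is the same.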
This process of tailoring items to a particular examinee creates a testing experience quite unlike conventional tests. An obvious consequence of CAT is that not all test takers receive the same items. Unlike conventional tests, which administer a fixed set of items to all examinees, CAT presents items based on individual response patterns. Thus, two examinees taking the test in the same place and at the same time might receive two completely different sets of questions. Computer adaptive tests also differ in the way an individual’s score is calculated. On conventional tests, an individual’s test score is determined by the number of questions he or she answered correctly. On an adaptive test, however, scores are based not solely on the number of items answered correctly but also on which items were answered correctly: test takers are rewarded more for answering difficult questions correctly than for answering easy ones. Finally, unlike traditional paper-and-pencil tests, which allow test takers to skip items and return to them later, review their answers to previous items, and change answers to items already completed, adaptive tests usually permit none of these actions. Instead, test takers must advance through adaptive tests linearly, answering each question before moving on to the next, with no opportunity to go back.
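A toy numeric example shows how which items were answered correctly drives the score. It again assumes a Rasch (1PL) model with a grid-search maximum-likelihood estimate; the item difficulties and response patterns are invented. Two simulated examinees each answer two of three items correctly, but because they saw different item sets (as happens in CAT), the one who succeeded on the harder items receives a markedly higher ability estimate.

```python
import math

def rasch_prob(theta, b):
    """Probability of a correct answer under the Rasch (1PL) model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def ml_theta(responses, difficulties):
    """Maximum-likelihood ability estimate via a simple grid search."""
    grid = [g / 100.0 for g in range(-400, 401)]  # theta from -4.0 to 4.0
    def log_lik(theta):
        return sum(math.log(rasch_prob(theta, b)) if r
                   else math.log(1.0 - rasch_prob(theta, b))
                   for r, b in zip(responses, difficulties))
    return max(grid, key=log_lik)

# Both examinees get 2 of 3 items right, but on different item sets:
theta_a = ml_theta([True, True, False], [-2.0, -1.0, 0.0])  # easy items
theta_b = ml_theta([True, True, False], [0.0, 1.0, 2.0])    # hard items
```

A number-correct score would treat the two examinees identically (2 of 3), yet `theta_b` exceeds `theta_a` by about two points on the ability scale, exactly the difficulty gap between the two item sets.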
The integration of technology and assessment confers a number of advantages, the most obvious being the potential for new and different types of assessment. Computerized multimedia or interactive assessments have the potential to increase the perceived realism of the assessment, thereby improving face validity and even criterion-related validity. In addition, novel performance indexes, such as response latencies, can be collected using computer technology and may further enhance validity or reduce adverse impact. In the case of CAT, these assessments are considerably more precise and efficient, typically taking one third to one half the time of a conventional test. Adaptive tests also provide increased test security: because test takers receive items tailored specifically to them, cheating by copying answers or memorizing a fixed form is extremely difficult. Similarly, conventional computerized tests may be more secure because there are no paper test forms that can be compromised.
Another advantage of computerized assessments is their ability to provide instantaneous feedback to test takers regarding their performance. Despite the large up-front costs of the technology, computerized assessments can be economically advantageous. Because there are no printing costs, the cost of creating new tests or switching to a different test form is negligible. Scoring and data compilation are automated, so no staff time is required and the results are less prone to clerical error. In the case of Web-based assessment, the cost of test proctors can be eliminated. Web-based assessment confers additional advantages and cost savings because it can be administered anywhere Internet access is available.
As with any new technology, there are a number of potential pitfalls that must be avoided to make full use of these techniques. One major concern with the use of technologically sophisticated assessments is adverse impact, especially because of the known disparity in access to technology among different groups. Test security must also be managed differently when technology is involved. Computerized assessments can enhance test security because there is no opportunity for paper test forms or booklets to be compromised, but test administrators must protect the computerized item banks as well as the computerized records of individuals’ responses. Unproctored Web-based assessment creates the additional security dilemma of not knowing exactly who is completing a particular assessment or whether the respondent is working alone or receiving assistance.
When an existing test is moved to a computerized format, it is important to consider measurement equivalence: whether a test administered by computer produces a score equivalent to the score one would obtain on the paper-and-pencil version of that test. Research shows that tests administered adaptively are equivalent to conventionally administered assessments. Cognitive ability tests also produce similar scores regardless of whether they are administered in paper-and-pencil or computer format. However, equivalence decreases dramatically for speeded tests (tests with stringent time limits). Care should also be taken when computerizing noncognitive assessments, because the measurement equivalence of noncognitive batteries remains relatively unknown.
The problem of measurement equivalence also extends to proctored versus unproctored computerized tests. Web-based procedures afford more opportunities for assessments to be completed in an unproctored setting, but the question remains whether scores obtained without a proctor are equivalent to those that might be obtained in a supervised administration.
Finally, because technologically sophisticated assessment procedures are very different from traditional procedures that applicants are accustomed to, test takers’ reactions to the procedures must be taken into account. This is particularly true for testing procedures that use novel item types or for CAT, which uses an unfamiliar testing procedure.
Computerized assessment, Web-based assessment, and CAT provide a number of advantages over conventionally administered assessments and will likely dominate the selection and assessment landscape in the future. However, care must be taken when implementing these technologically sophisticated assessments to ensure that the reliability and validity of the procedures are established and maintained.