Credentialing is a process for granting a designation, such as a certificate or license, by measuring an individual’s competence in a specific knowledge, skill, or performance area. The purpose of credentialing is to assure the public that an individual meets the minimum requirements within an area of competence, typically an occupation or profession. There are three principal types of credentialing. Licensure, the most restrictive type, refers to the mandatory government requirement needed to practice an occupation or profession. The second type, certification, is a voluntary process that is traditionally established by nongovernmental agencies. The final type of credentialing, registration, is the least restrictive and typically requires individuals to apply for a credential through a governmental or private agency.
Although registration is usually a mandatory process, individuals are not required to demonstrate a level of achievement beyond applying for the title. Licensure and certification, on the other hand, require an individual to successfully complete an educational program or demonstrate relevant job experience, as well as complete a test or assessment. Despite the distinctions between licensure and certification, the development process for these two types of credentials is similar.
Identifying Credentialing Exam Content
The first step in developing a credentialing exam involves gathering information to demonstrate that the exam content is directly linked to the occupation of interest. This is known as content validity. The most common approach for content validating a credentialing test is using an occupational analysis (also known as a practice analysis). An occupational analysis differs from a job analysis in that it focuses on an entire profession rather than a specific job. One of the most critical decisions during the occupational analysis for a credentialing exam is defining the targeted level of expertise to be tested. If the examination is designed to credential practitioners who possess the minimum level of required competence in an occupation, the analysis should focus only on entry-level requirements. However, if the examination targets intermediate or expert practitioners, particular attention should be given to identifying the level of competence to be targeted by the examination. Although a variety of techniques exist for gathering occupational analysis information, all occupational analyses should rely on input from individuals who are deeply knowledgeable about the profession.
Regardless of the occupational analysis technique employed, the results should be used to develop the credentialing exam’s content outline (also known as a test blueprint). The content outline is used to convey the organization of the exam by describing the content domains covered on the exam, as well as the amount of emphasis given to each content domain. Most credentialing exams contain many content domains, which are traditionally organized by similar tasks, knowledge areas, or skills. The amount of emphasis given to each content domain should reflect the findings of the occupational analysis—that is, the domains identified as most important in the occupational analysis should receive the most emphasis on the exam, whereas domains of lesser importance should receive a smaller amount of weight. A well-developed content outline should not only guide the development of the test questions but also facilitate candidates’ preparation for an exam.
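As an illustration, the translation from occupational-analysis weights to question counts on a content outline can be sketched as follows. The domain names, weights, and exam length here are entirely hypothetical:

```python
# Hypothetical content outline: domain weights derived from an
# occupational analysis. Weights reflect rated importance and sum to 1.0.
domain_weights = {
    "Patient assessment": 0.35,
    "Treatment planning": 0.25,
    "Intervention": 0.25,
    "Ethics and law": 0.15,
}

exam_length = 200  # total number of questions on the exam

# Allocate questions to each domain in proportion to its weight,
# rounding to whole items.
blueprint = {domain: round(weight * exam_length)
             for domain, weight in domain_weights.items()}

for domain, n_items in blueprint.items():
    print(f"{domain}: {n_items} questions")
```

In practice the rounded counts may need a final adjustment so they sum exactly to the intended exam length, and the weights themselves come from SME judgments gathered during the occupational analysis.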
Developing the Credentialing Exam
Many different question formats can be used on credentialing exams. Test developers traditionally use the results of the occupational analysis to identify the most appropriate format(s) for an exam. Multiple-choice questions are the most commonly used format for credentialing exams, particularly for exams that focus on measuring knowledge domains. An extensive amount of research supports the effectiveness of this type of question for credentialing exams.
However, for many credentialing programs, the multiple-choice format may not be the most effective way to gauge an examinee’s proficiency. Other viable formats include performance-based assessments and simulations. Performance-based assessments are intended to elicit responses with more than one correct answer and with multiple dimensions. For example, a physician’s credentialing exam might include questions that require an examinee to identify multiple ways to diagnose and treat an illness. Because of the open-ended nature of these questions, the grading is often done by subject-matter experts (SMEs). Simulations present an occupation-related scenario that is measured under standardized conditions. For example, a flight simulation might offer an effective test format for some content areas on a pilot certification exam. Simulations are often scored using a standardized checklist or rating process to ensure that all raters are properly calibrated.
Regardless of the format, the process for developing a credentialing exam remains the same. The first step is to assemble SMEs to write test questions or to inform the structure of a performance assessment or simulation. The SMEs should represent the specialty areas within the occupation and have expertise in the content domains measured on the exam. The SMEs should then be provided with training on how to write test questions. The training should review the relevant components of the exam specifications, acceptable question formats, and techniques for developing effective questions. Because effective question writing is an iterative process, each question typically goes through an extensive review by the SMEs. Once the questions have been reviewed, a different group of SMEs content-validates the questions. This process involves reviewing each question to ensure the accuracy, importance, and suitability of the content in relation to the target areas on the exam outline. Questions that are deemed acceptable are pilot tested to obtain preliminary data, and questions with acceptable pilot test data are then used to construct an exam that conforms to the test specifications. Each of these steps depends heavily on the involvement of SMEs to ensure accurate representation of the content areas in the exam, as well as an industrial psychologist, psychometrician, or similarly trained measurement professional to oversee the process and conduct many of the more technical tasks.
Setting Performance Standards
Once an exam is developed, defining the passing point on the exam becomes of primary importance. There are two broad categories of standard-setting techniques. The first category involves the use of normative standards. With this technique, pass/fail decisions are made by comparing a candidate’s performance to the performance of other candidates. This technique is common in many personnel selection situations, in which the goal is to select the best from many candidates who may be qualified; however, it is rarely used for credentialing because it guarantees that some examinees will fail regardless of whether candidates demonstrate an acceptable level of competence. Because of this issue, nearly all credentialing programs rely on the second category of techniques that use absolute standards.
In their simplest form, absolute standards offer a policy statement about what constitutes acceptable performance on an exam (e.g., all candidates with a score above 70% will pass). However, this technique fails to account for the actual level of performance required for competent practice, the difficulty of the exam, or the characteristics of the candidate population (e.g., it is possible that all candidates sitting for an exam are competent). Typically, a more appropriate method for setting an absolute performance standard is to include a strategy for linking decisions about examination performance to criteria for acceptable practice of the profession. This involves the use of a criterion-referenced approach, which sets performance standards by evaluating the test content and making judgments about the expected or observed candidate performance. The most commonly used method for accomplishing this is the Angoff method, although several variants and alternative methods exist. These methods ensure that candidate test performance is directly linked to a minimum performance standard of the profession.
Trends in Credentialing
Cultural pressures are placing greater demands on individuals and organizations to be accountable for their competence, performance, and results. These pressures have prompted an increase in the use of testing and assessment to gauge educational outcomes and have now extended into the realm of occupational certification. There are signs that the certification of competence is of increasing importance in business organizations. Software developers have long been able to take credentialing exams to demonstrate competence in specific computer languages. Hiring organizations may then factor these measures into their selection requirements. Some companies even use a form of credentialing in their career management systems—for example, by requiring a passing score on a competence assessment before an employee may apply for promotion. By using properly constructed examination processes to establish competence, organizations are better able to make fair and accurate workplace decisions about individuals.
- Angoff, W. H. (1971). Scales, norms, and equivalent scores. In R. L. Thorndike (Ed.), Educational measurement (pp. 508-600). Washington, DC: American Council on Education.
- Browning, A. H., Bugbee, A. C., & Mullins, M. (Eds.). (1996). Certification: A NOCA handbook. Washington, DC: National Organization for Competency Assurance.
- Xing, D., & Hambleton, R. K. (2004). Impact of test design, item quality, and item bank size on the psychometric properties of computer-based credentialing examinations. Educational and Psychological Measurement, 64, 5-22.
- Impara, J. C. (Ed.). (1995). Licensure testing: Purposes, procedures, and practices. Lincoln, NE: Buros Institute of Mental Measurements.
- Mancall, E. L., Bashook, P. G., & Dockery, J. L. (Eds.). (1994). Establishing standards for board certification. Evanston, IL: American Board of Medical Specialties.
- National Commission for Certifying Agencies. (2003). Standards for the accreditation of certification programs. Washington, DC: National Organization for Competency Assurance.