Historically, program evaluation has been used as a tool for assessing the merits of educational and governmental programs, where public funding demands a demonstration of accountability. The basic tenet underlying program evaluation that makes it so useful in this context is its reliance on methods that integrate science and practice to produce reliable and actionable information for decision makers. During the past decade, program evaluation has also become increasingly recognized as a useful tool for helping for-profit organizations implement and enhance human resource (HR) programs to achieve key business outcomes. Successful companies understand that survival and growth in the marketplace cannot occur without programs that are designed to improve competitive performance and productivity, engage employees in the organization’s mission, and create an environment where people want to work. Recognizing the impact that HR programs have on employees and the company’s bottom line, organizations need practical tools to accurately and efficiently evaluate program quality, so they can take the necessary actions to either improve or replace them.
The field of program evaluation is based on the commonsense notion that programs should produce demonstrable benefits. Evaluation is a discipline of study that concentrates on determining the value or merit of an object. The term program in this article refers to the object of the evaluation and includes such organizational functions as recruitment and staffing, compensation, performance management, succession planning, training, team building, organizational communications, and health and work-life balance.
Evaluations can help organizations identify how a program can be improved on an ongoing basis or examine its overall worth. The first approach, called formative evaluation, is usually conducted while the program is being formed or implemented and will generally lead to recommendations that focus on program adjustments. The specific findings might be used to identify program challenges and opportunities and provide strategies for continuous improvement. Formative evaluations seek to improve efficiency and ensure that the program is responsive to changing organizational needs.
An evaluation that is conducted to examine a program’s overall worth is called a summative evaluation and will generally be performed when an organization is attempting to determine whether the program should be replaced rather than modified. This approach focuses on the program’s outcomes and their value to the organization. The specific findings are used to address accountability or the overall merits of the program. Some decisions to replace a program or major parts of the program are easy because of major program deficiencies. Most such decisions, however, will be more difficult because of the need to weigh the program’s multiple strengths and weaknesses, as well as other considerations such as constraints on budget, staff, and time.
A Six-Phase Approach to Program Evaluation
There are a variety of approaches for conducting an evaluation, but most proceed through a similar sequence of steps and decision points. We have grouped these steps and decision points into a six-phase approach for executing a successful evaluation:
- Identifying stakeholders, evaluators, and evaluation questions
- Planning the evaluation
- Collecting data
- Analyzing and interpreting the data
- Communicating findings and insights
- Using the results
Although other approaches and actual HR program evaluations may over- or underemphasize some steps within these phases or accomplish a step in an earlier or later phase, any evaluation will need to address the activities covered within each of the six phases.
Deviations from these six phases may be related to the nature of the specific HR program being evaluated, characteristics of the organization, composition of the evaluation team, or a variety of resource considerations.
Phase 1: Identify Stakeholders, Evaluators, and Evaluation Questions
Phase 1 requires three major sets of decisions that will have implications throughout the HR program evaluation. The identification of stakeholders is a critical first step toward ensuring that the evaluation is appropriately structured and that the results will be relevant. Stakeholders are those individuals with a direct interest in the program because they either depend on it or are directly involved in its execution in some way. An organization’s leaders, the HR and legal departments, as well as other internal groups are often important stakeholders in an HR program evaluation. External stakeholders such as stockholders and customers might also need to be considered because of their potential investment in the targeted program. Accounting for stakeholders’ different perspectives from the start of a program evaluation can bring two important benefits: increased buy-in to the process and decreased resistance to change. Sometimes representative groups of stakeholders also serve on an advisory panel to the evaluation team to provide guidance throughout the process and assist with needed resources.
Another decision to be made in phase 1 involves identifying the evaluators. Although a single evaluator can conduct evaluations, we and other professionals generally recommend that a team be formed to plan and execute the evaluation process. A team approach speeds up the process and increases the likelihood that the right combination of skills is present to generate valid findings and recommendations. Using a single evaluator may limit the choice of evaluation methods to those with which that evaluator is most comfortable, rather than those most appropriate to the evaluation questions. Also, more than one explanation can often be given for a finding, and the ability to see patterns and alternative interpretations is enhanced when a team conducts an evaluation.
Identifying evaluation questions constitutes a third type of critical decision made in phase 1. An essential ingredient to valid findings and recommendations is identifying well-focused, answerable questions at the beginning of the project that address the needs of the stakeholders. The way the evaluation questions are posed has implications for the kinds and sources of data to be collected, data analyses, and the conclusions that can be drawn. Therefore, the evaluation team must arrive at evaluation questions that not only address the needs of the stakeholders but that are also answerable within the organizational constraints that the team will face.
In many cases, stakeholder groups, evaluators, and evaluation questions will be obvious based on the nature of the HR program and the events that led to its evaluation. Any number of events can precipitate a decision to conduct an HR program evaluation. These events can vary from a regularly scheduled review of a program that appears to be working properly, such as a review of the HR information system every three years, to the need for a program to be certified, which could be a safety inspection in a nuclear power plant, to a major revamping of a program caused by a significant or high-visibility event such as a publicized gender discrimination case.
Phase 2: Plan the Evaluation
Phase 2 focuses on designing the HR program evaluation, developing a budget, and constructing the timeline to accomplish the steps throughout the next four phases of the evaluation. A good evaluation design enhances the credibility of findings and recommendations by incorporating a sound methodological approach, minimizing time and resource requirements, and ensuring stakeholder buy-in. A well-executed evaluation requires a good deal of front-end planning to ensure that the factors likely to affect the quality of the results can be addressed. Failure to spend the time necessary to fully plan the evaluation can result in a good deal of rework, missed milestones, unmet expectations, and other problems that make findings and recommendations difficult to sell to upper management and other stakeholders.
An often-overlooked aspect of the planning phase is the need to develop a realistic budget that is reviewed and approved by the sponsors of the evaluation. The evaluation team’s budget should include, among other things, staffing, travel, special equipment, and space requirements. The extent of the evaluation plan will depend on the size and scope of the HR program being evaluated and the methods used in the analysis. The goal is to obtain credible answers to the evaluation questions through sound methodology and by using only those organizational resources that are absolutely required.
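Although the article does not provide a worked budget, a rough sketch of how such a budget might be tallied is shown below; the categories follow the text above, but the figures and the 10% contingency are hypothetical assumptions included only to make the planning step concrete.

```python
# A hypothetical, back-of-the-envelope evaluation budget (illustrative only;
# categories follow the text above, figures are assumptions).
budget_items = {
    "staffing (40 evaluator-days x $600/day)": 40 * 600,
    "travel to field sites": 3_500,
    "special equipment and survey tooling": 2_000,
    "meeting and workspace costs": 1_000,
}

subtotal = sum(budget_items.values())
contingency = 0.10 * subtotal  # reserve for rework and scope changes

for item, cost in budget_items.items():
    print(f"{item:45s} ${cost:>9,.0f}")
print(f"{'contingency (10%)':45s} ${contingency:>9,.0f}")
print(f"{'total':45s} ${subtotal + contingency:>9,.0f}")
```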
Phase 3: Collect Data
In most HR program evaluations, data collection will require more time than any other phase. The credibility of the evaluation’s conclusions and recommendations rests largely on the quality of the data assembled, so a good deal of attention needs to be paid to getting it right. It is critical that this phase be carefully planned so that the data adequately answer the evaluation questions and provide the evidence needed to support decisions regarding the targeted program.
The tasks performed for this phase are concentrated on four primary sets of overlapping activities, which include
- ensuring that the proper data collection methods have been selected to properly evaluate the HR program,
- using data collection strategies that take into account organizational resource limitations,
- establishing quality control measures, and
- building efficiency into the data collection process.
A program evaluation will only be as good as the data used to evaluate its effectiveness. The ultimate goal is to deliver the most useful and accurate information to key stakeholders in the most cost-effective and realistic manner.
In general, it is wise to use multiple methods of data collection to ensure the accuracy, consistency, and quality of results. Specifically, a combination of quantitative methods such as surveys and qualitative methods such as interviews will typically result in a richer understanding of the program and more confidence in the accuracy of the results.
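As a rough illustration of how quantitative and qualitative evidence might be set side by side, the following sketch (not part of the original text; all ratings and theme labels are hypothetical) summarizes survey ratings alongside themes coded from interview notes.

```python
# A hedged sketch of combining quantitative (survey) and qualitative
# (interview) evidence; all data and theme labels are hypothetical.
from collections import Counter
from statistics import mean, median

# Quantitative: 1-5 satisfaction ratings of the HR program from a survey.
survey_ratings = [4, 3, 5, 2, 4, 4, 3, 5, 2, 4]

# Qualitative: themes coded from interview transcripts by the evaluation team.
interview_themes = [
    "unclear eligibility rules", "slow approvals", "unclear eligibility rules",
    "good manager support", "slow approvals", "unclear eligibility rules",
]

print(f"survey mean = {mean(survey_ratings):.1f}, median = {median(survey_ratings)}")
for theme, count in Counter(interview_themes).most_common():
    print(f"{count}x  {theme}")

# If low ratings co-occur with frequently coded problem themes, the two
# sources corroborate each other, which is the point of using multiple methods.
```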
Phase 4: Analyze and Interpret Data
Statistical data analyses and interpretation of the results are an integral part of most HR program evaluations. The evaluation plan and goals should dictate the types of statistical analyses to be used in interpreting the data. Many evaluation questions can be answered through the use of simple descriptive statistics, such as frequency distributions, means and medians, and cross-tabulations. Other questions may require more sophisticated analyses that highlight trends and surface important subtleties in the data. The use of advanced statistical techniques may require specialized professional knowledge unavailable among the evaluation team members. If so, the team may need to obtain outside assistance. The evaluation team is ultimately responsible for using statistical procedures that will generate practically meaningful interpretations and address the evaluation questions.
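To make the notion of simple descriptive statistics concrete, here is a minimal sketch using pandas on hypothetical survey responses; the column names and data are assumptions for illustration, not drawn from the article.

```python
# Minimal descriptive-statistics sketch (frequencies, mean/median,
# cross-tabulation) on hypothetical evaluation data.
import pandas as pd

df = pd.DataFrame({
    "department": ["Sales", "Sales", "IT", "IT", "HR", "Sales", "HR", "IT"],
    "rating":     [4, 5, 3, 2, 4, 5, 3, 4],            # 1-5 satisfaction scale
    "completed":  ["yes", "yes", "no", "no", "yes", "yes", "yes", "no"],
})

# Frequency distribution of ratings.
print(df["rating"].value_counts().sort_index())

# Central tendency: mean and median rating.
print("mean:", df["rating"].mean(), "median:", df["rating"].median())

# Cross-tabulation of department by program completion.
print(pd.crosstab(df["department"], df["completed"]))
```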
Simpler is often better in choosing statistical procedures because the evaluation team must be able to explain the procedures, assumptions, and findings to key stakeholders who are likely to be less methodologically sophisticated than the team members. The inability to explain and defend the procedures used to generate findings—particularly those that might disagree with a key stakeholder’s perspective—could lead to concerns about those findings, as well as the total program evaluation effort.
Phase 5: Communicate Findings and Insights
Phase 5 focuses on strategies for ensuring that evaluation results are meaningfully communicated. With all the information produced by an evaluation, the evaluation team must differentiate what is essential to communicate from what is simply interesting and identify the most effective medium for disseminating information to each stakeholder group. Regardless of the group, the information must be conveyed in a way that engenders ownership of the results and motivation to act on the findings.
Each stakeholder group will likely have its own set of questions and criteria for judging program effectiveness. As such, the evaluation team needs to engage these groups in discussions about how and when to best communicate the progress and findings of the evaluation. Gaining a commitment to an ongoing dialogue with stakeholders increases ownership of and motivation to act on what is learned. Nurturing this relationship throughout the project helps the evaluation team make timely and appropriate refinements to the evaluation design, questions, methods, and data interpretations.
The extent and nature of these information exchanges should be established during the planning phase of the evaluation (i.e., phase 2). Thereafter, the agreed-on communication plan, with timelines and milestones, should be followed throughout the evaluation.
Phase 6: Use the Results
A chief criticism that emerges from the literature on program evaluation is that evaluation reports frequently go unread and findings are rarely used. Although credible findings should be enough to drive actions, this is rarely a sufficient condition. Putting knowledge to use is probably the most important yet most intractable challenge facing program evaluators. Furthermore, the literature on both program evaluation and organizational development indicates that planned interventions and change within an organization are likely to be met with resistance. The nature and source of this resistance will depend on the program, stakeholders involved, and culture of the organization. By understanding that resistance to change is a natural state for individuals and organizations, the program evaluation team can better anticipate and address this challenge to the use of program evaluation results.
Decisions about whether to implement recommendations (e.g., to adjust, replace, or drop an HR program) will be driven by various considerations. Ideally, the nature of the stakeholder questions and the resulting findings heavily influence how recommendations are formulated. In addition, the evaluation approach, such as formative versus summative, will influence which recommendations are implemented. A primary consideration in the adjust-replace-drop decision is cost. In most cases, the short-term costs will probably favor modification of the existing program, and the long-term costs will probably favor replacement. It should be noted that replacing an HR program is almost always more disruptive than adjusting an existing system. In these situations it is not uncommon for program staff members, users, and other key stakeholders to take a short-term perspective and prefer work-arounds and other program inefficiencies instead of the uncertainty that comes with a replacement program.
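To make the short-term versus long-term trade-off concrete, the sketch below compares cumulative costs of modifying versus replacing a program over a five-year horizon; the figures are hypothetical assumptions, not data from the article.

```python
# Hypothetical cumulative-cost comparison for the adjust-vs-replace decision.
# All figures are illustrative assumptions.
modify_upfront, modify_annual = 20_000, 35_000     # cheaper now, costlier to run
replace_upfront, replace_annual = 90_000, 15_000   # costlier now, cheaper to run

for year in range(1, 6):
    modify_total = modify_upfront + modify_annual * year
    replace_total = replace_upfront + replace_annual * year
    note = "<- replacement now cheaper" if replace_total < modify_total else ""
    print(f"year {year}: modify ${modify_total:,}   replace ${replace_total:,}   {note}")

# With these assumptions, the short-term numbers favor modification and the
# long-term numbers favor replacement, mirroring the trade-off described above.
```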
The Joint Committee on Standards for Educational Evaluation (founded by the American Educational Research Association, the American Psychological Association, and the National Council on Measurement in Education in 1975) published a set of standards organized around the major tasks conducted in a program evaluation. Anyone embarking on a program evaluation would benefit from a review of these standards. Other useful readings on the subject are listed in the reference section.
References:
- Davidson, E. J. (2004). Evaluation methodology basics: The nuts and bolts of sound evaluation. Thousand Oaks, CA: Sage.
- Edwards, J. E., Scott, J. C., & Raju, N. S. (Eds.). (2003). The human resources program-evaluation handbook. Thousand Oaks, CA: Sage.
- Joint Committee on Standards for Educational Evaluation. (1994). The program evaluation standards: How to assess evaluations of educational programs (2nd ed.). Thousand Oaks, CA: Sage.
- Morris, L. L., Fitz-Gibbon, C. T., & Freeman, M. E. (1987). How to communicate evaluation findings. Beverly Hills, CA: Sage.
- National Science Foundation. (1993). User-friendly handbook for project evaluation: Science, mathematics, engineering and technology education (NSF 93-152). Arlington, VA: Author.
- Patton, M. Q. (1996). Utilization-focused evaluation (3rd ed.). Thousand Oaks, CA: Sage.
- Rose, D. S., & Davidson, E. J. (2003). Overview of program evaluation. In J. E. Edwards, J. C. Scott, & N. S. Raju (Eds.), The human resources program-evaluation handbook (pp. 3-26). Thousand Oaks, CA: Sage.
- Rossi, P. H., & Freeman, H. E. (1993). Evaluation: A systematic approach (5th ed.). Newbury Park, CA: Sage.
- Scriven, M. (1991). Evaluation thesaurus (4th ed.). Beverly Hills, CA: Sage.
- Wholey, J. S., Hatry, H. P., & Newcomer, K. E. (Eds.). (1994). Handbook of practical program evaluation. San Francisco: Jossey-Bass.