Survey Definition
A survey can be broadly defined as a detailed investigation of a topic. Although interviews and focus groups are often included under this broad umbrella, the term survey has become synonymous with a questionnaire approach to research. Surveys are arguably the most common approach to data collection in organizations, primarily because of their broad applicability. They can be used to gather information both inside and outside the organization. Surveys can be used to assess employee attitudes, gauge readiness for organizational change efforts, gather performance feedback, or measure customer satisfaction. To gather accurate information for any of these purposes, certain steps need to be followed; however, users often underestimate the time necessary to do a survey properly. The critical steps of planning, designing, communicating, administering, analyzing, and addressing the results of a survey are a serious undertaking for any organization.
Defining the Purpose and Goals of the Survey
Before any survey project can begin in earnest, the goals and expected outcomes of the project must be clearly defined. The goals will drive the content to cover, the questions to ask, the people to ask, and the format to use. Therefore, the researcher needs to determine whether the survey is intended to take the pulse of the organization, identify necessary action to take, or explore new products, policies, or other changes. The researcher also needs to include key stakeholders in the planning process. Organization members who will be asked to address the results of the survey must be included at the earliest stages of the process. This is also a good time to gather the support of senior management, not to mention union officers, if applicable. Without the buy-in of the people at the top of the organization, a survey project can easily be subverted.
Once support from key players has been obtained, the process of identifying the information to be gathered can begin. This process is driven by how the information will be used. If retention of key talent is the goal, for example, employee satisfaction surveys need to address topics that influence employee engagement (supervision, the work itself, coworkers, growth opportunities). Customer satisfaction surveys must address key products and services and how they are delivered. The goal of any survey is to gather information that will help to improve the organization.
Designing the Survey
Designing the Instrument
Some initial decisions need to be made about the type of questions that will be asked. First, the balance of open-ended (i.e., write-in) versus closed-ended (Likert-type) questions must be considered. Open-ended questions provide a wealth of information but also take significantly longer to code and interpret.
Open-ended questions also provide an opportunity for respondents to ramble, so the questions need to be very specific to prevent unintended or uninterpretable responses. Closed-ended questions are much easier to summarize but give respondents no real opportunity for elaboration, and they present their own problems of construction. In an attempt to gather more information, survey designers often write unintentionally double-barreled questions that ask for two separate pieces of information in one item (e.g., asking respondents to rate their pay and benefits together). Post hoc interpretation of such items is impossible because the respondents' focus is unclear (did they rate pay, benefits, or both?). Questions need to address issues that respondents know about, avoid jargon or acronyms that respondents may not recognize, and use the simplest language possible. Place the most sensitive questions toward the end of the survey to avoid losing potential respondents before they really begin.
To ensure that questions ask what is intended, a brief pretest of items can prevent headaches later. Ask a small group to review the questions to confirm that they are worded properly. You also want to check how long the survey takes to complete (so your invitation letter doesn't misstate the time required) and test any skip patterns to ensure respondents see the right questions. The flow and naturalness of the question order can also be assessed during the pretest. For online surveys, URLs and hyperlinks must be tested to confirm that the survey tool does what you expect and that the final survey looks the way it should.
Deciding How to Administer
Before the survey can be administered, several additional decisions need to be made. Will the survey go to all employees or a smaller sample of them? Will the survey be done online or using paper and pencil? If paper surveys are used, will they be administered by mail or in group administrations at work locations? Are incentives for participation needed? When is the best time to administer?
The decision to conduct a census or a sample survey is related to other stages of the survey process. If the survey is intended to build commitment to further action or change, then the entire organization should be included. If results will be shared within every department in the organization (with the expectation that department-level action will be taken), a census is required. If the survey is intended to take the pulse of how the organization as a whole feels about an issue (or issues), then a sample may be sufficient. Sample surveys allow the organization to survey more often without running the risk of survey fatigue. They may raise suspicion about why particular people were invited, however, and they may be easier for respondents to ignore because not everyone around them is taking the survey. Sample surveys also require support from human resources (and potentially information technology) to identify eligible employees and select them randomly. If results will be broken down into smaller groups (e.g., by division, geographic region, or demographic group), a stratified sampling approach may be needed to ensure valid results for the relevant subgroups, as sketched below.
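For illustration only, the following is a minimal sketch of proportional stratified random sampling in Python. The roster, field names, and sampling fraction are hypothetical; in practice the eligible-employee list would come from human resources or information technology systems.

```python
import random
from collections import defaultdict

def stratified_sample(employees, strata_key, fraction, seed=42):
    """Draw a proportional stratified random sample of survey invitees.

    employees  -- list of dicts, e.g., {"id": 1, "division": "Sales"}
    strata_key -- field that defines the strata (e.g., "division")
    fraction   -- proportion of each stratum to invite (between 0 and 1)
    """
    rng = random.Random(seed)

    # Group employees into strata by the chosen field.
    strata = defaultdict(list)
    for person in employees:
        strata[person[strata_key]].append(person)

    invited = []
    for group in strata.values():
        # Sample proportionally, but keep at least one invitee per stratum.
        n = max(1, round(len(group) * fraction))
        invited.extend(rng.sample(group, n))
    return invited

# Example: invite roughly 20% of each division.
roster = [
    {"id": 1, "division": "Sales"},
    {"id": 2, "division": "Sales"},
    {"id": 3, "division": "Operations"},
    {"id": 4, "division": "Operations"},
    {"id": 5, "division": "Finance"},
]
print(stratified_sample(roster, "division", 0.2))
```

Sampling within each stratum keeps every division represented in rough proportion to its size, which is what makes the subgroup breakdowns described above defensible.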
Communicating the Survey
The next step in administering the survey is communicating that it will be happening. Response rates suffer if the survey is not communicated well. The level of communication needed is a function of the organization's survey culture. In companies that have not done surveys before, extensive communication is needed beforehand to explain why the survey is being done, why responses are critical, how the results will be used, and how the organization is committed to action. In organizations that have had negative experiences with surveys in the past, the communication needs to focus on how things will be different this time. The medium of communication will vary from company to company: e-mails from senior managers may be enough in some organizations, whereas public addresses may be needed in others. Whatever medium the organization usually uses to communicate important events should be used for the survey as well.
Administering the Survey
The medium by which the survey is administered is a critical decision. If all employees have access to the Internet or an e-mail system, an online or network survey may be advisable. Online surveys allow for easier tracking of response rates and remove the need for the data entry involved with paper surveys. As a result, online surveys are generally more cost-effective compared to other approaches. However, online surveys can fall prey to network problems and suspicions that responses are not truly anonymous. Online surveys also require that access to the survey be controlled. If the workforce does not have ready access to the Internet or company network, then paper surveys may be required. Paper surveys are what most people think of automatically, and thus there is some comfort with this format. They also allow for group administration, which can help to increase response rates. Voice-response surveys can be done by phone but typically require the survey to be very short and not very complex. Fax-based surveys are still used but have faded in popularity.
Incentives for participation are usually less of a concern for internal employee surveys because people are expected to participate. For external surveys (e.g., customer satisfaction surveys), however, incentives may help boost response rates. Research continues on which incentives are most effective, but lotteries tied to participation actually appear to depress response rates below those achieved with no incentive at all, whereas more immediate incentives appear to work better. The promise of sharing results can also motivate some respondents.
Finally, the timing of the survey is critical. Organizations should avoid exceptionally busy times (April for accountants, November and December in retail) or times when many employees are expected to be away (summertime or holidays). Organizations need to provide employees with enough time to respond and must accommodate individual travel, vacations, and leave. Having a survey available for only one week may cause employees to miss the opportunity to respond. Two weeks is a short administration period, and two months is relatively long. Of course, online surveys may require a shorter window, and mail surveys need to allow time for postal service. Similarly, the timing of survey action planning must be considered. The survey should be timed so that immediate communication of results and preliminary follow-up action can be taken shortly after the survey is complete and results have been analyzed.
Analyzing the Survey Results
There are as many ways to analyze survey results as there are questions to be asked. The analysis approach must be geared to the audience that will be receiving the information. Although regression analyses and other higher-level statistics may provide very useful information, they are not appropriate for every audience. Generally, the percentage of favorable responses to individual questions or groups of questions is summarized. The mean values of items or groups of items may also be compared. For example, categories or questions with relatively high mean values identify areas of strength and categories or questions with relatively low mean values represent opportunities for improvement.
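As a minimal sketch, the percent-favorable and mean summaries described above could be computed as follows, assuming Likert responses coded 1 to 5 with 4 and 5 treated as favorable; the item names, department labels, and data values are hypothetical.

```python
import pandas as pd

# Hypothetical item-level responses on a 1-5 Likert scale.
responses = pd.DataFrame({
    "department":  ["Sales", "Sales", "Ops", "Ops", "Finance"],
    "pay":         [4, 2, 5, 3, 4],
    "supervision": [5, 4, 3, 2, 4],
})

items = ["pay", "supervision"]

# Percentage favorable: share of respondents choosing 4 or 5 on each item.
pct_favorable = (responses[items] >= 4).mean() * 100

# Mean item scores, overall and by department, to spot strengths and gaps.
overall_means = responses[items].mean()
dept_means = responses.groupby("department")[items].mean()

print(pct_favorable.round(1))
print(overall_means.round(2))
print(dept_means.round(2))
```

Summaries like these can then be sorted so that the highest-scoring items flag areas of strength and the lowest-scoring items flag opportunities for improvement, as described above.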
Presenting the Survey Results and Taking Action
The culture within an organization influences how survey results are fed back to employees. Some organizations share results at the department level and then move up the hierarchy, with senior management actually receiving lower-level results last. Other organizations ask for results to be presented at the top first, and then results are rolled out from the top to the bottom of the organization. The direction of the rollout is another opportunity for the company to communicate the importance of the survey and where it expects action to occur.
The results shared depend on the audience and should focus on “what’s in it for them.” Therefore, the results shared with a frontline department will be very different from those shared with the senior management team. Departments want to know how the group felt, where their scores are high and where they are low, and how they compare with other groups (or with the rest of the chain of command). Senior managers want to know which parts of the organization are working well and which require their immediate attention. They also want to know how the organization’s results compare with industry (or competitor) norms.
This raises the important question of whether the focus of analysis and interpretation should be internal or external to the organization. For a first survey, external norms may be an unnecessary distraction when analysis should focus on internal strengths and weaknesses. After initial internal baselines are established, making comparisons to benchmark norms on subsequent surveys can be useful. Once again, the purpose of the survey drives the organization’s focus.
When improvement areas are identified, the organization (or department) must decide where to begin—it cannot necessarily take on every challenge. At this point, commitment from senior management (or department management) is critical. Those who need to make the changes will not be on board unless they believe they will have the needed resources and support.
Summary
Surveys are a popular tool in organizations but suffer from the widespread assumption that anyone can conduct one well. Unfortunately, it is very easy to do a survey poorly. By following the steps outlined here, an organization can avoid the key pitfalls of survey research and realize the benefits of an effective survey.