Ethics in Industrial/Organizational Research

Ethics has to do with defining what is meant by right and wrong, or good and bad, and with justifying, according to some rational system, what one ought to do or what sort of person one should be. As applied to the conduct of research with human participants, the ethics of research concerns the proper treatment and protection of those participants by researchers. This overlaps with, but differs in intent from, the ethics of science, which concerns the protection of the scientific enterprise itself through norms for maintaining its integrity (e.g., providing full and accurate descriptions of procedures in published research reports).

Deontological (i.e., rule-based) moral principles generally guide the ethical strictures concerning research participation: treating people with dignity and respect for their autonomy (so they are free to decide whether to participate in the research and whether to continue their participation); having concern for their well-being and avoiding the infliction of harm (so that if deception or withholding information can be justified by a rigorous review, adequate debriefing and dehoaxing will be provided); abiding by principles of justice or fairness (so that people are not coerced into participation by virtue of their lesser social status or other factors); and displaying honesty, integrity, and trustworthiness (so that promises made regarding the confidentiality of replies and the potential benefits, discomforts, or risks of participation are fulfilled). However, consequentialist or utilitarian analyses are frequently used in deciding whether the aggregate benefits of a proposed research study outweigh any potential harms involved.

The Nature Of Industrial/Organizational Research: Who Benefits?

Most research in the social and behavioral sciences is not conducted to benefit those who participate in it as sources of information. Some studies seek to advance our general understanding of psychological processes or constructs (e.g., “What are the determinants of organizational citizenship behavior?”). The students or employees from whom we hope to learn the answers to such questions may have little interest in the questions or the research and cannot expect to benefit from it. Even much of the applied research that aims to improve the functioning of the organization in which it is implemented is unlikely to improve the welfare of those particular research participants (e.g., applicants or employees in a test validation study). Nevertheless, some organizational studies are of the sort that will yield benefits for the organization as a whole as well as for the participants (e.g., an experimental evaluation of alternative training procedures, the best of which may then be implemented to the benefit of all). And some organizational research activities are components of what might more accurately be described as interventions (the introduction of changes in policies, programs, or practices) that are directly aimed at improving aspects of the work life of some segment of the organization.

Therefore, potential research participants, even company employees, should not be assumed to be obliged to cooperate with a researcher's plans. That is one important reason why much of the ethics of research concerns the appropriateness of the conditions under which prospective participants are recruited and the voluntariness of their agreement to participate. Other reasons similarly involve basic ethical principles such as respect for persons (their inherent dignity and autonomy) and issues of justice. The essentials of ethical research consist of assuring voluntary participation and informed consent; eliminating coercive influences; securing privacy and confidentiality for participants; minimizing the deception of participants; and providing debriefing, feedback, and dehoaxing (correcting any adverse effects of deception). These concerns are codified and operationalized in the U.S. Department of Health and Human Services (DHHS) Federal Policy for the Protection of Human Subjects and the American Psychological Association (APA) Ethical Principles of Psychologists and Code of Conduct.

The History of Research Regulations

Applied psychologists such as Allen J. Kimmel and other social scientists working under the auspices of the National Research Council have traced the events that led to a growing concern for the protection of participants in biomedical, social, behavioral, and economic sciences research in the 30 years following the end of World War II. These are some critical events:

  • At the end of the war, the world was repulsed when it was learned that bona fide medical doctors had been conducting gruesome and horrific experiments on concentration camp inmates.
  • For 40 years, beginning in 1932, the U.S. Public Health Service Tuskegee Study withheld treatment for syphilis and deceived almost 400 Black men about the (non)treatment they were receiving, resulting in up to 100 deaths by 1969. Treatment was not provided until after the experiment was uncovered in a 1972 newspaper report.
  • In the mid-1950s, in the Wichita Jury Study, law professors secretly recorded the deliberations of several juries without the knowledge of the plaintiffs, the defendants, or the jurors themselves.
  • In the early 1960s, a social psychologist, Stanley Milgram, tricked his research participants into believing that they were administering stronger and stronger electric shocks to learners (actually, Milgram’s confederates) whenever the learners responded incorrectly in a “learning experiment.” A majority of participants remained obedient to the experimenter even when the learners supposedly being punished appeared to be in considerable pain and the participants themselves were experiencing great discomfort and ambivalence about what they were ostensibly doing.

In 1947, the modern era of biomedical research oversight was ushered in by the Nuremberg Code, whose 10 principles emphasize that participation in experimental research must be voluntary, that participants must be free to discontinue their participation, and that the possibility of harm and risk must be minimized and outweighed by the potential benefits of the experiment. At about the same time, the APA was also deliberating standards of ethics, including issues pertaining to psychological research, resulting in its first formal ethics code in 1953; the current Ethical Principles of Psychologists and Code of Conduct has been revised several times since, most recently in 1992 and 2002.

It is not clear, however, that research psychologists (and other social scientists whose disciplines had also developed codes) paid much attention to their ethical standards of research until the federal government began taking larger and larger steps into the picture. In 1966 the United States Public Health Service (USPHS) required federally funded medical research facilities to establish review committees (the first institutional review boards, or IRBs) to review proposed research; in 1969 the policy was extended to include behavioral and social science research. These policies were further developed by the Department of Health, Education and Welfare (DHEW) in 1971 (The Institutional Guide to DHEW Policy on Protection of Human Subjects) and 1974 (45 Code of Federal Regulations [CFR] 46).

In 1979, the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research produced Ethical Principles and Guidelines for the Protection of Human Subjects of Research (“The Belmont Report”). It placed research with human subjects firmly within a moral framework by emphasizing the relevance of three ethical principles: respect for persons (acknowledging their autonomy and protecting those with diminished autonomy); beneficence and nonmaleficence (the obligation to maximize possible benefits to participants and to do them no harm); and justice (fairness in the distribution of the benefits from, and burdens of, research). In 1981, the National Institutes of Health (NIH) Office for Protection From Research Risks (OPRR) revised 45 CFR 46. Finally, in 1991 the Department of Health and Human Services (DHHS, the successor to DHEW) published a further revision (45 CFR 46, subpart A) that was adopted as the “Common Rule” by all relevant governmental agencies and applies to all research conducted at institutions that receive any federal funding. In 2000, the Office for Human Research Protections (OHRP) was established in DHHS, with expanded educational responsibilities as well as regulatory oversight.

Since 1966, there has been almost continuous criticism by the social and behavioral sciences research community of the regulatory process conducted by the IRBs that judge the acceptability of proposed studies. These complaints and the administrative reactions to them, as well as several national investigations of the operation of IRBs, have produced a cyclical waxing and waning in the scope, restrictiveness, and permissiveness of the successive revisions to the regulations. The most frequent complaints have been (a) lack of consistent standards of evaluation across IRBs; (b) lack of expertise among IRB members to evaluate proposals from many different disciplines; (c) greater concern for the bureaucratic formalities of signed consent forms than for furthering prospective participants’ understanding of the proposed research; (d) inappropriate critiquing of technical aspects of the research design rather than limiting the review to the protection of participants; and (e) failure to take advantage of the flexibility built into the Common Rule, for example, by requiring full board review rather than expedited review or even an exemption from review for minimal-risk research (studies, like most in industrial/organizational research, that entail procedures no more risky than ordinary life activities or routine physical or psychological tests).

Current Regulations

An IRB review of proposed studies begins with consideration of four sequential questions (a schematic sketch of this triage logic follows the list):

  1. Is it research? Under the Common Rule, research is a systematic investigation designed to develop or contribute to generalizable knowledge. Projects (e.g., attitude surveys, test validation studies) intended solely for internal organizational use are not included, but submitting the findings for publication would constitute an attempt to contribute to generalizable knowledge. (IRB approval might be possible ex post facto for the now-archival data if it is anonymous.)
  2. Does it involve human participants? Such involvement means obtaining private information from living persons who may be individually identifiable. This would exclude qualitative and quantitative literature reviews that are limited to secondary analyses of aggregate data.
  3. Does it fall into one of the categories of research exempt from IRB review? The IRB (not the researcher) can determine that the proposed study does not require review. This includes such projects as evaluations of instructional strategies in educational settings, and the use of survey and interview procedures or the collection of existing data—as long as the respondents cannot be identified.
  4. If not exempt, does it entail no more than minimal risk, so that it is eligible for expedited review? Under expedited review, only the IRB chair (or a designee) reviews the study rather than the full board, and the study cannot be disapproved (only the full board can do that). This category includes research on individual or group behavior or on the characteristics of individuals (e.g., perception, cognition, game theory, and test development) in which the subjects’ behavior is not manipulated and the research does not involve stress to the subjects.
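
These four questions amount to a sequential decision procedure. The following sketch in Python is purely illustrative: the field names and outcome strings are hypothetical simplifications rather than regulatory language, and no such code could substitute for an actual IRB determination.

    from dataclasses import dataclass

    @dataclass
    class Proposal:
        """Hypothetical summary of a proposed study (illustrative fields only)."""
        generalizable_knowledge: bool  # Q1: designed to contribute to generalizable knowledge?
        identifiable_humans: bool      # Q2: private data from identifiable living persons?
        exempt_category: bool          # Q3: e.g., surveys with unidentifiable respondents
        minimal_risk: bool             # Q4: no riskier than ordinary life activities

    def irb_triage(p: Proposal) -> str:
        """Walk the four sequential questions in the order given above."""
        if not p.generalizable_knowledge:
            return "not research under the Common Rule; no IRB review required"
        if not p.identifiable_humans:
            return "no human participants involved; no IRB review required"
        if p.exempt_category:
            return "exempt from review (a determination made by the IRB, not the researcher)"
        if p.minimal_risk:
            return "eligible for expedited review by the IRB chair or designee"
        return "full board review required"

    # Example: an anonymous attitude survey whose findings will be published
    print(irb_triage(Proposal(True, True, True, True)))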

Basic Components of Ethical Research

Privacy, Confidentiality, and Informed Consent

People have a right to maintain their privacy, that is, the right to determine how much information about themselves will be revealed to others, in what form, and under what circumstances. Empirical research indicates that perceived violations of privacy depend on the nature of the information being sought, how public the setting is in which a person’s behavior is being studied, people’s expectations regarding privacy, and the anonymity of the research data (i.e., assurance that the data cannot be linked directly to the respondent). Perceived violation probably also depends on the degree of trust participants place in the researcher. Confidentiality refers to people’s right to have the information they provide kept private, as well as to the agreements made with them regarding what will be done with the data. Establishing conditions of anonymity is the best way to guarantee privacy and complete confidentiality. Most of the time, a psychologist’s research participants (e.g., employees or students) will assume that the information they provide is to be kept confidential. Therefore, it is imperative that any limitations on confidentiality be explained carefully before securing the person’s participation. For example, studying the determinants of voluntary employee turnover may require retaining the identity of data sources so that antecedent information can be linked with later separation data. This should be explained to potential participants, and the personal identifiers that provide the link should be destroyed once the matching is done.
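
As a minimal sketch of that last practice, assuming a Python workflow with the pandas library (the column names and values here are invented for illustration), the linking and subsequent destruction of identifiers might look like this:

    import pandas as pd

    # Hypothetical Time 1 survey responses, keyed by a temporary identifier
    survey = pd.DataFrame({
        "emp_id": [101, 102, 103],
        "job_satisfaction": [4.2, 2.1, 3.8],
    })

    # Hypothetical separation records collected later
    turnover = pd.DataFrame({
        "emp_id": [101, 102, 103],
        "voluntarily_left": [False, True, False],
    })

    # Link antecedent and separation data, then drop the identifier so the
    # analysis file can no longer be traced to individual respondents
    linked = survey.merge(turnover, on="emp_id")
    anonymized = linked.drop(columns=["emp_id"])

In practice, the identifier column (and any key file mapping it to names) would also be securely deleted at the source once matching is complete, not merely dropped from the analysis file.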

Describing any limitations on confidentiality to prospective research participants is part of the information that must be supplied as a prerequisite to obtaining their informed consent to participate. To summarize briefly the Common Rule and the APA ethics code (Standards 3.10, 8.02, 9.03), additional information to be provided in clear, unambiguous, readily understandable language includes the purpose, expected duration, and procedures of the research; any factors that might reasonably be foreseen to affect a person’s decision to participate; the participant’s right to decline to participate or to discontinue participation, along with any consequences of not participating; and whom to contact with questions about the research or the participant’s rights. Opportunity should also be provided for the person to have any questions about the research answered. Although obtaining a signed consent form containing the above information from each participant is the default option, a waiver can be granted under the Common Rule in a number of circumstances: for minimal-risk research; for studies of educational programs, routine assessments of organizational practices, archival data, or the collection of test, survey, or interview data, as long as participants cannot be identified; for studies that could not be done unless the waiver were granted; and when the signed consent form would be the only record linking the respondent with the research and a breach of confidentiality would be potentially harmful (e.g., an employee survey of admitted unethical behavior).

The Use of Deception

When individuals are misled (active deception) or not told (passive deception) about significant features of the research in which they are being asked to participate, it arguably violates the principles of respect, autonomy, and trust underlying the researcher-participant relationship, as well as the spirit and intent of informed consent. Consequently, deception has been an extremely controversial practice in social science research, much more prevalent in social psychology than in industrial/organizational psychology owing to differences in the topics studied and the predominant use of laboratory versus field studies, respectively. The most common forms of deception are concealing the true purpose of the study or aspects of its procedures by giving participants false instructions or false information about stimulus materials, using a confederate to mislead them, and providing erroneous or manufactured feedback about their performance on some task.

Although critics such as Diana Baumrind argue that deception is never justified because of the long-term potential harms it inflicts on participants, on the reputation of the profession, and on society, deception is generally viewed as conditionally appropriate. For example, the APA ethical code indicates that it may be justified by the potential significant value of the study, if the research question cannot be investigated effectively by nondeceptive means, and as long as the deception is explained promptly to the participants and any misconceptions induced in participants are corrected. It is extremely unlikely, however, that any substantial deception would be viewed as appropriate in industrial/organizational research conducted with employee participants in an organizational setting.

References:

  1. American Psychological Association. (2002). Ethical principles of psychologists and code of conduct. American Psychologist, 57, 1060-1073. Retrieved March 18, 2006, from http://www.apa.org/ethics
  2. Baumrind, D. (1985). Research using intentional deception: Ethical issues revisited. American Psychologist, 40, 165-174.
  3. Department of Health and Human Services. (1991). Public Health Service Act: Protection of human subjects. Title 45, Code of Federal Regulations [CFR], Part 46. Retrieved March 18, 2006, from http://www.hhs.gov/ohrp/humansubjects/guidance/45cfr46.htm
  4. Kimmel, A. J. (1996). Ethical issues in behavioral research: A survey. Cambridge, MA: Blackwell.
  5. Lefkowitz, J. (2003). Ethics and values in industrial-organizational psychology. Mahwah, NJ: Lawrence Erlbaum.
  6. National Research Council of the National Academies. (2001). Protecting participants and facilitating social and behavioral sciences research. Washington, DC: National Academies Press.
  7. Sales, B. D., & Folkman, S. (Eds.). (2000). Ethics in research with human participants. Washington, DC: American Psychological Association.