Quantitative Methodologies

Quantitative methodologies can be generally defined as the various procedures used to examine differences between groups and relationships between variables with respect to quantifiable phenomena. The list of potentially quantifiable phenomena is immense and includes any type of behavior, attitude, perception, knowledge domain, or other characteristic that can be measured numerically. Quantitative methodologies are applied across a variety of disciplines in the physical, biological, and social sciences and reflect contributions from the fields of statistics and measurement. Statistics comprises the procedures employed for summarizing numeric data and testing hypotheses, while measurement encompasses the processes used to assign meaningful numbers to the traits or variables of interest to a researcher.

The types of research designs in which quantitative methods are used include true experiments (also known as randomized experiments), quasi-experiments, and nonexperimental designs (also termed passive observational or correlational designs). What differs among these designs is the extent to which the researcher exercises control over certain factors in the study and the extent to which randomization is present. Moreover, quantitative methods can be classified into those used solely for descriptive purposes (i.e., to characterize data in terms of distributional shape, central tendency, and variability) and those used for drawing inferences based on tests of hypotheses.

Origins of Quantitative Methodology

Historical Background

The quantitative methodologies in current use evolved from various disciplines, including biology, mathematics, psychology, sociology, statistics, econometrics, and political science, and can be classified broadly into the fields of statistics and measurement. Although the two fields share common roots (statistical methods are used to conduct measurement analyses, and meaningful statistical tests depend on valid and reliable measures), they nevertheless have distinct historical origins.

For anyone who has taken a course in statistics, the term often conjures images of memorized formulas, lists of rules, and tedious calculations. However, the early history of modern statistics was far from dull; it was often characterized by volatile ideological and philosophical clashes, bitter rivalries, and, in some cases, lifelong enmity among the key players in the development of the field. The first evidence of modern statistical reasoning appeared in the late 17th century, when the concepts that provide the foundation for modern statistical theory, such as probability, variability, and chance, were first proposed. Many would argue that among the most profound developments in the history of statistics were Karl Pearson’s mathematical formulation of the correlation coefficient in 1896 and Sir Ronald A. Fisher’s first publication describing analysis of variance in 1921. Their work not only incited debate within the statistical community but also served as the impetus for subsequent advances in the field, following both Pearson’s nonexperimental design tradition and Fisher’s randomized experimentation approach. During the past half century, new statistical procedures have extended our ability to study quantitative phenomena by accommodating simultaneous testing of multiple dependent variables (e.g., multivariate analysis of variance), modeling of more complex relationships among multiple variables (e.g., path analysis and structural equation modeling), and analysis of complex data structures (e.g., hierarchical linear modeling and multilevel modeling).

Somewhat parallel to the history of statistical methodology was the development of measurement methodology, or psychometrics, which can trace its ancestry to early philosophers such as Plato and to scientists such as Galileo who espoused the importance of quantifying physical phenomena. However, many modern measurement principles can be more directly attributed to advances in the field of psychology. For example, Gustav Fechner, a founder of quantitative psychology, was among the first to propose methods for obtaining psychological measures under rigorously controlled conditions. Other early contributions to measurement from the field of psychology include Charles Spearman’s development of factor analysis in 1904 as a means for understanding intelligence, L. L. Thurstone’s scaling methods for measuring attitudes, and James McKeen Cattell’s writings on the importance of large samples as the basis for group intelligence tests. Since the mid-1900s, among the most significant innovations in the field of measurement have been improvements in procedures for estimating reliability, such as the reliability coefficient presented by Lee J. Cronbach in 1951, as well as the development of item response theory, which now dominates large-scale standardized testing and has made possible advances in the computerized adaptive testing used with many assessments, such as the Graduate Record Examination.

Philosophical Foundation

While some would contend that quantitative methodology stems largely from logical positivism, in fact, quantitative methodology, particularly as applied in the social sciences, derives primarily from postpositivism. Logical positivists espouse a deterministic view of reality, in which knowledge can be absolutely verified in a world of order and regularity. In contrast, postpositivists view reality as less certain and more probabilistic than deterministic. Researchers using methods of statistical inference likewise operate within a paradigm in which statements about knowledge must be couched in uncertainty and probability. Another fundamental tenet of postpositivism that underlies statistical practice is Karl Popper’s notion of falsification. As any student in a beginning statistics course knows, hypotheses are tested not for the purpose of verification but for the purpose of falsification; that is, a hypothesis can never be proven true but can only be disproven. Thus, the statistical methods that exemplify much of quantitative methodology require a philosophical framework that is considerably less restrictive than logical positivism.

Concepts and Underlying Principles

Probability and Chance

A basic premise of inferential statistical analysis is the desire to use samples to infer characteristics about a larger population. Because any given sample represents only a subset of the larger population of interest, there is a possibility that the particular sample selected for a study does not adequately represent the population even if the sample is randomly selected. Consequently, there is always an element of chance or uncertainty when attempting to use sample data to estimate population characteristics or parameters. For the same reason, researchers using statistical procedures to conduct hypothesis testing can never be 100% certain that the results of their tests are correct rather than merely reflecting the idiosyncratic characteristics of the sample selected for their study.
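
To make this sampling uncertainty concrete, the brief sketch below (a hypothetical illustration, not part of the original article) draws repeated random samples of 30 from a simulated population and shows how the sample means scatter around the population mean; it assumes NumPy is available.

```python
# Hypothetical sketch: sampling variability of the mean.
# The population parameters and sample size are invented for illustration.
import numpy as np

rng = np.random.default_rng(42)
population = rng.normal(loc=100, scale=15, size=100_000)  # simulated trait scores

# Draw 1,000 random samples of n = 30 and record each sample mean
sample_means = [rng.choice(population, size=30, replace=False).mean()
                for _ in range(1_000)]

print(f"Population mean:              {population.mean():.2f}")
print(f"Mean of 1,000 sample means:   {np.mean(sample_means):.2f}")
print(f"SD of sample means (error):   {np.std(sample_means):.2f}")
```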

This concept of uncertainty in quantitative analysis is the basis for significance testing in statistics and also embodies the risk of committing either a type 1 or type 2 error. A type 1 error is analogous to a false positive and occurs when a researcher wrongly concludes that a relationship or difference exists when in fact no such relationship or difference exists in the population. Such an error does not represent an ethical breach but rather an incorrect finding that can arise for any number of reasons, including features of the study’s design, characteristics of the subjects in the study, or violation of one or more statistical assumptions (the prescribed conditions that must hold for a given statistical test to be appropriate). There is always some risk of committing a type 1 error when conducting statistical tests. When researchers set a level of significance, or alpha (the complement of the confidence level), prior to conducting their analyses, they are in essence specifying the probability of a type 1 error they are willing to risk.
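
The following sketch illustrates this idea by simulation: two groups are repeatedly drawn from the same population, so the null hypothesis is true and the proportion of "significant" t tests approximates the type 1 error rate. The data, sample sizes, and alpha of .05 are hypothetical choices, and NumPy and SciPy are assumed to be available.

```python
# Hypothetical sketch: estimating the type 1 error rate by simulation.
# Both groups come from the SAME population, so any "significant"
# difference is a false positive.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05
n_tests = 10_000
false_positives = 0

for _ in range(n_tests):
    a = rng.normal(0, 1, size=25)   # group 1: null hypothesis is true
    b = rng.normal(0, 1, size=25)   # group 2: identical population
    _, p = stats.ttest_ind(a, b)
    if p < alpha:
        false_positives += 1

print(f"Empirical type 1 error rate: {false_positives / n_tests:.3f}")  # roughly 0.05
```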

The other type of decision error that is always possible is type 2 error, which is akin to a false negative. Researchers committing type 2 errors fail to detect significant differences or relationships that are actually present in the population. In such cases, their tests are said to lack power. Statistical tests may have insufficient power due to various factors, including small sample size, excessively stringent levels of significance, violation of statistical assumptions, the study’s design, or unreliable measures of the variables.
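
As a rough illustration of how sample size affects power, the hypothetical simulation below estimates the proportion of two-sample t tests that detect a true half-standard-deviation difference at a small and a moderate sample size; the effect size, sample sizes, and number of simulations are arbitrary choices, and NumPy and SciPy are assumed.

```python
# Hypothetical sketch: small samples inflate type 2 error (reduce power).
# A real mean difference of 0.5 SD exists, but with n = 10 per group
# the t test frequently fails to detect it.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def empirical_power(n_per_group, effect_size=0.5, alpha=0.05, n_sims=5_000):
    """Estimate power as the proportion of simulations that reject the null."""
    hits = 0
    for _ in range(n_sims):
        a = rng.normal(0.0, 1.0, size=n_per_group)
        b = rng.normal(effect_size, 1.0, size=n_per_group)
        if stats.ttest_ind(a, b).pvalue < alpha:
            hits += 1
    return hits / n_sims

print(f"Power with n = 10 per group: {empirical_power(10):.2f}")   # roughly .18
print(f"Power with n = 64 per group: {empirical_power(64):.2f}")   # roughly .80
```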

Regarding the relative seriousness of the two types of errors, there is no absolute consensus; it clearly depends upon the nature of the study and the potential consequences of the error. For example, in some types of medical research, failing to identify a drug as potentially effective in mitigating a life-threatening illness (a type 2 error) arguably represents a more egregious mistake than incorrectly determining such a drug to be effective when it is not (a type 1 error). However, by convention, most researchers tend to set alpha (the risk of a type 1 error) at .05 or .01 and the acceptable risk of a type 2 error at .20 (equivalent to power of .80), suggesting less tolerance for type 1 errors.

Variability

Inherent in the use of quantitative methodology is an interest in understanding variability. In fact, it was curiosity about variation in such diverse phenomena as the size of sweet peas, the chest girths of Scottish soldiers, and the characteristics of hops used in brewing beer that drove development of early statistical theory. Through their observations, scientists noted both regularity and unpredictability in the variation associated with the events they observed. This fascination with variability provides the basis for the current inferential statistical analyses that researchers apply to understand, explain, or predict differences. Such differences might manifest themselves in disparities between groups, changes across time, or relationships between variables. Likewise, the field of measurement is founded on an interest in assessment and classification as a means for comprehending diversity. Consequently, the presence of differences among the phenomena to be studied is assumed and is requisite to meaningful application of quantitative methodology.

Variability, from the perspective of quantitative methodology, is often classified as systematic versus random. This distinction holds in both statistics and measurement. In statistical analysis, tests of significance typically comprise two components: explained and unexplained variance. The explained component represents variability attributable to the factors being examined in the study, such as a treatment effect or motivational influences, while the unexplained or random component reflects factors outside the control of the study, such as individual differences or environmental distractions. Accordingly, researchers engaging in statistical analyses seek to maximize systematic variance and to minimize random variance, which is often treated as noise or nuisance. Likewise, in measurement, the variability in a set of scores, whether from psychological tests or physiological measures, is often partitioned into systematic variance associated with the trait of interest and random error variance due to faulty measurement.
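
The partitioning of variability can be shown with a small worked example. The sketch below, which uses invented scores for two hypothetical groups, decomposes the total sum of squares into between-group (explained) and within-group (unexplained) components, as in a one-way analysis of variance; NumPy is assumed.

```python
# Hypothetical sketch: partitioning total variability into explained
# (between-group) and unexplained (within-group) components.
import numpy as np

groups = {
    "treatment": np.array([14.0, 16.0, 15.0, 18.0, 17.0]),  # made-up scores
    "control":   np.array([11.0, 12.0, 10.0, 13.0, 14.0]),
}

all_scores = np.concatenate(list(groups.values()))
grand_mean = all_scores.mean()

ss_total = ((all_scores - grand_mean) ** 2).sum()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups.values())
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups.values())

print(f"SS total   = {ss_total:.2f}")
print(f"SS between = {ss_between:.2f} (explained / systematic)")
print(f"SS within  = {ss_within:.2f} (unexplained / random)")
# SS between + SS within reproduces SS total
```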

Error

Error is the presence of any type of random or systematic variance that adversely affects a study’s outcome or the soundness of a measure. As noted above, there are two different types of decision errors critical to conducting statistical hypothesis testing. However, the concept of error in quantitative methodology comes in many guises. Sampling error, measurement error, and prediction error are among the sources of extraneous variance that compromise the utility of information obtained through statistical analysis and measurement. Concepts of measurement reliability and validity are related to the presence or absence of random and systematic error, respectively. Examples of random errors that could reduce reliability include examinees’ mood states, guessing, memory lapses, and scoring errors. In contrast, potential systematic errors that could diminish validity include learning disabilities, reading comprehension, and personality attributes that consistently contribute to either inflated or underestimated scores on various types of measures. Errors in general are considered undesirable in applying quantitative methodology, whatever form they take. Much of the advice on designing sound research focuses on attempts to minimize the various sources of potential error to the greatest degree possible.
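
A brief simulated example may clarify the distinction: in the hypothetical sketch below, random error lowers the correlation between two administrations of a measure (its reliability), whereas a constant systematic error shifts the scores without affecting that correlation. All values are invented, and NumPy is assumed.

```python
# Hypothetical sketch: random versus systematic measurement error.
import numpy as np

rng = np.random.default_rng(7)
true_scores = rng.normal(50, 10, size=500)   # simulated true trait levels

# Two administrations contaminated by random error (e.g., guessing, mood)
form_1 = true_scores + rng.normal(0, 5, size=500)
form_2 = true_scores + rng.normal(0, 5, size=500)

# One administration with a constant systematic error (e.g., a scoring bias)
biased = true_scores + 4.0

print(f"Reliability with random error:    r = {np.corrcoef(form_1, form_2)[0, 1]:.2f}")
print(f"Correlation despite constant bias: r = {np.corrcoef(true_scores, biased)[0, 1]:.2f}")
print(f"Mean shift from systematic error:  {biased.mean() - true_scores.mean():.1f} points")
```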

Control

Control in the context of quantitative methodology is essentially the ability to minimize or account for extraneous variability by ruling out plausible, alternative explanations for the findings of a study. Where sufficient control is absent, researchers are unable to unambiguously interpret the findings of their studies. Control can take many forms, including manipulation of conditions within a study, holding certain factors constant, randomization, and inclusion of potentially confounding variables in order to study their effects. Both manipulation and randomization are hallmarks of the experimental research designs pioneered by Fisher. Such designs, which form the basis for evidence-based research, are often dubbed the gold standard because of their ability to control for most extraneous variables. Both nonexperimental and quasi-experimental designs are sometimes considered less rigorous because of their inability to adequately control for extraneous factors. Nevertheless, many researchers use these types of designs when the topics for their research do not lend themselves either to manipulation of variables or to randomization.

Types of Quantitative Methods

Statistical Procedures

There is no single taxonomy for classifying the countless statistical procedures that have been or currently are in use, and compilation of a complete list would be daunting. However, several classification criteria can be used that illustrate features of different methods. For example, statistical procedures are often dichotomized into descriptive versus inferential. Within inferential statistics, procedures are often organized according to how variables are measured (e.g., whether they are categorical or continuous), whether analyses involve one or more than one independent and/or dependent variable, whether data are collected at one point in time or at two or more points in time, or to what extent assumptions are made about underlying characteristics of the data. Examples of commonly used statistical procedures include analysis of variance to examine mean differences among groups, the chi-square test of independence to analyze associations between categorical variables, and multiple regression to determine the extent to which a set of variables explains or predicts some outcome. Newer, more complex multivariate procedures that have begun to see greater application in the past few decades include structural equation modeling, hierarchical linear modeling, and growth curve analysis.
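
As a simple illustration of one such procedure, the sketch below runs a chi-square test of independence on a made-up contingency table; the counts and variable labels are hypothetical, and SciPy is assumed to be available.

```python
# Hypothetical sketch: chi-square test of independence between two
# categorical variables, using an invented 2 x 3 contingency table.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: two respondent groups; columns: preferred instructional format
observed = np.array([
    [30, 45, 25],
    [35, 30, 35],
])

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.3f}")
# A small p-value would suggest the two categorical variables are associated.
```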

Measurement Procedures

Measurement procedures are quantitative tools used primarily to evaluate reliability and validity in order to determine the appropriateness of scores for a given purpose and population. Traditional approaches to assessing reliability include Cronbach’s alpha and the test-retest method. In some settings, procedures based on item response theory (IRT) have replaced traditional methods. Although originally used almost exclusively with large-scale tests, IRT-based methods, particularly Rasch analysis, have become increasingly popular for analyzing scores from affective measures such as personality inventories and attitude scales. Validity-related procedures comprise a variety of techniques, including factor analysis, correlations, and other statistical methods used to provide support for the meaning and appropriateness of scores from a particular measure.
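
For illustration, the sketch below computes Cronbach's alpha from a small, invented items-by-persons score matrix using the standard formula, alpha = k/(k - 1) * (1 - sum of item variances / variance of total scores); the data are hypothetical, and NumPy is assumed.

```python
# Hypothetical sketch: Cronbach's alpha for a five-item measure
# administered to six examinees (all values invented).
import numpy as np

scores = np.array([          # rows = examinees, columns = items
    [4, 5, 4, 4, 5],
    [2, 3, 3, 2, 2],
    [5, 5, 4, 5, 5],
    [3, 3, 2, 3, 3],
    [4, 4, 5, 4, 4],
    [1, 2, 2, 1, 2],
])

k = scores.shape[1]
item_variances = scores.var(axis=0, ddof=1)      # variance of each item
total_variance = scores.sum(axis=1).var(ddof=1)  # variance of total scores
alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

print(f"Cronbach's alpha = {alpha:.2f}")
```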

The Future of Quantitative Methodology

Given the emphasis on evidence-based research currently being mandated in a number of arenas, it is likely that quantitative research will continue to flourish. The No Child Left Behind legislation and other federal directives not only emphasize the application of randomized trials but also stipulate the use of instruments producing valid and reliable scores, implying a preference for quantitative outcome measures. In addition, increasing computing power is enabling the development of sophisticated statistical and psychometric methods that will allow quantitative researchers to analyze their data in ways that cannot currently be envisioned. Twenty years ago, for example, hierarchical linear modeling was primarily a concept; now it appears in an increasing number of applied studies to more accurately capture data that are clustered in some hierarchical fashion, such as when students share the same classroom or clients share the same therapist. It is likely that future developments in quantitative methods will continue to expand our ability to understand the complexities of behaviors, events, and processes, as well as their interrelationships, to the extent that such phenomena can be quantified.
