Inferential Statistics

A frequent goal of collecting data is to allow inferences to be drawn about a population from a sample. In such cases, inferential statistics provide the basis for drawing conclusions that go beyond the observed data. A common example is evaluating the likelihood that an observed effect (e.g., a difference in group means) is not attributable to chance. Unlike descriptive statistics, which simply summarize observed data, inferential statistics are used to make more general statements about the world beyond the data. Most frequently reported inferential statistics are derived from the general linear model; examples include t tests, analysis of variance (ANOVA), linear regression analysis, and factor analysis.

Inferential statistics are based on the notions of sampling and probability. The problem to be overcome in conducting research is that data are typically collected from a sample drawn from a larger population of interest. If the sample data are not representative of the population, inferences drawn from them are not warranted. For example, consider a researcher who is interested in how employees feel about a new organizational policy and who samples 20 individuals from an organization of 1,000 employees. Inferences drawn from this sample are likely to be less justified than if the sample had been larger (e.g., 100 employees). Beyond sample size, the characteristics of the sample also matter: 20 randomly selected individuals may be more representative than 100 individuals who share some unique characteristic (e.g., all work the night shift), and thus may provide a stronger basis for drawing inferences about employee attitudes.
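To make the role of sample size concrete, the brief simulation below (a sketch in Python; the population values and the 1-to-7 attitude scale are hypothetical) draws repeated random samples of 20 and of 100 from a simulated population of 1,000 employees and compares how tightly the resulting sample means cluster around the population mean:

```python
import random
import statistics

random.seed(42)

# Hypothetical population: attitude scores for 1,000 employees on a 1-7 scale.
population = [random.gauss(4.5, 1.2) for _ in range(1000)]
true_mean = statistics.mean(population)

# Draw 1,000 random samples at each size and compare the spread of the
# resulting sample means around the population mean.
for n in (20, 100):
    sample_means = [statistics.mean(random.sample(population, n))
                    for _ in range(1000)]
    print(f"n = {n:3d}: mean of estimates = {statistics.mean(sample_means):.2f}, "
          f"SD of estimates = {statistics.stdev(sample_means):.3f}")

print(f"population mean = {true_mean:.2f}")
```

The spread of the estimates (their empirical standard error) shrinks as n grows, which is the statistical sense in which larger random samples justify stronger inferences.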

Probability is another important concept underlying inferential statistics. Probability theory provides the basis for accepting or rejecting hypotheses. Traditional hypothesis testing casts inferential statistics as the basis of a dichotomous decision: if the null hypothesis were true (i.e., there is no effect), what is the probability of observing an effect at least as large as the one obtained? When that probability is low (typically less than 5%), the null hypothesis is rejected and the inference is made that the observed effect is unlikely to be the result of chance. Instead, the effect is considered real.
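As a minimal sketch of this decision rule (using scipy.stats; the two groups and their scores are hypothetical), a two-sample t test yields a p value that is compared against the conventional 5% threshold:

```python
from scipy import stats

# Hypothetical attitude scores from two groups of employees.
group_a = [72, 75, 78, 80, 69, 74, 77, 81, 73, 76]
group_b = [70, 68, 74, 71, 66, 69, 72, 70, 67, 73]

# Two-sample t test of the null hypothesis of no difference in means.
t_stat, p_value = stats.ttest_ind(group_a, group_b)

# Reject the null hypothesis when p falls below the conventional 5% threshold.
alpha = 0.05
if p_value < alpha:
    print(f"t = {t_stat:.2f}, p = {p_value:.4f} < {alpha}: reject the null")
else:
    print(f"t = {t_stat:.2f}, p = {p_value:.4f} >= {alpha}: fail to reject the null")
```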

As an alternative to null hypothesis significance testing, confidence intervals may be used as the basis for drawing inferences. The fundamental idea underlying confidence intervals is that a given effect may be overestimated or underestimated in any specific study. A confidence interval is centered on the point estimate of the observed effect, and its width is determined by the standard error of the estimate and the chosen level of confidence (e.g., 95%). A benefit of confidence intervals is that they provide more information (i.e., a range of plausible values for the estimated parameter) than a traditional dichotomous hypothesis test.
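A sketch of this construction follows (hypothetical data; a normal-approximation interval for a mean, using only the Python standard library):

```python
import math
import statistics

# Hypothetical sample of attitude scores on a 1-7 scale.
sample = [4.2, 5.1, 3.8, 4.9, 4.4, 5.3, 4.0, 4.7, 4.5, 5.0]

n = len(sample)
point_estimate = statistics.mean(sample)              # center of the interval
std_error = statistics.stdev(sample) / math.sqrt(n)  # width is driven by this

# 95% interval using the normal critical value 1.96; with a sample this
# small, a t critical value (about 2.26 for df = 9) would be more exact.
margin = 1.96 * std_error
lower, upper = point_estimate - margin, point_estimate + margin
print(f"point estimate = {point_estimate:.2f}")
print(f"95% CI = [{lower:.2f}, {upper:.2f}]")
```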

Generally, inferential statistics require four pieces of information. First, a measure of the size of the observed effect is required. This effect size estimate can take the form of a standardized difference between groups (e.g., Cohen's d) or the magnitude of a relationship between variables (e.g., the correlation r). These effects are estimates of the population parameter of interest, and they support stronger inferences to the extent that they are unbiased. All else being equal, the larger an observed effect, the more likely it is that an inference will be drawn that the effect is real. Second, sample size influences the inferences drawn from data; in general, larger samples permit stronger inferences than smaller samples. Third, the Type I error rate (i.e., α, the probability of rejecting a true null hypothesis) chosen for the statistical test influences the inferences drawn from the results. Finally, the power of the statistical test (the probability of detecting a true effect) influences the extent to which inferences are justified; greater power leads to stronger inferences.
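These four quantities are mathematically linked: fixing any three determines the fourth. The sketch below (assuming statsmodels is installed; the effect size, α, and sample size values are illustrative) uses its power routines for a two-sample t test:

```python
import math
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Given effect size d = 0.5, alpha = .05, and 64 participants per group,
# solve for the power of a two-sample t test.
power = analysis.solve_power(effect_size=0.5, nobs1=64, alpha=0.05)
print(f"power = {power:.2f}")  # roughly .80

# Conversely, solve for the per-group n needed to reach power = .80.
n_needed = analysis.solve_power(effect_size=0.5, power=0.80, alpha=0.05)
print(f"required n per group = {math.ceil(n_needed)}")
```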
