Cognitive Task Analysis

Cognitive task analysis (CTA) refers to a suite of scientific methods designed to identify the cognitive skills, strategies, and knowledge required to perform tasks proficiently. The goal of CTA is to use this information to improve instruction, training, and technological design (e.g., decision aids), with the aims of making work more efficient, productive, satisfying, and of higher quality, or of accelerating proficiency.

Background and Historical Development

CTA has a long history, spanning multiple communities of practice, ranging from those studying perception and thinking to those studying the behavioral aspects of work and, subsequently, skilled performance in applied settings. Prior to the 1980s, these communities included (in approximate chronological order) those practicing introspection, applied and industrial psychology, task analysis, ergonomics, human factors, and instructional design.

In the early 1900s, physical labor dominated many aspects of work, and physical issues, such as fatigue and injury risk, were of concern. Accordingly, task analytic methods were often behaviorally oriented, designed to decrease inefficiencies and to increase productivity. Classic examples of task analysis include Frederick W. Taylor's time–motion studies of factory workers and Frank and Lillian Gilbreth's study of bricklayers. Although worker behavior was emphasized during early task analyses, the decomposition of tasks using such methods rarely excluded cognitive aspects of work. However, over time, as work became more reliant on higher order cognition, the focus of task analysis shifted.

Despite historical antecedents, the term CTA did not emerge until the 1980s, when it was used to understand the cognitive activities involved in man–machine systems. A key turning point was a 1982 Nuclear Regulatory Commission workshop on Cognitive Modeling of Nuclear Plant Operators, attended by cognitive and human factors psychologists David Woods, Donald Norman, Thomas Sheridan, William Rouse, and Thomas Moran. The changing nature of technology in the workplace and the increasing complexity of work systems resulted in a greater emphasis on cognitive work and, therefore, a greater need for cognitively oriented task analytic methods.

From the 1980s, a system-oriented perspective on CTA prevailed, focused on understanding adaptive cognitive work in complex contexts. This joint-cognitive systems or sociotechnical systems perspective viewed cognitive work as an embedded phenomenon, inextricably tied to the context in which it occurs. In naturalistic contexts, (a) systems may comprise multiple (human and technological) components; (b) task goals are frequently ill-defined; (c) planning may only be possible at general levels of abstraction; (d) information may be limited; and (e) complexity, uncertainty, time constraints, and stress are the norm. Cognition in these contexts is often emergent or distributed across individuals and technology. Moreover, cognition is also constrained by broader professional, organizational, and institutional contexts, which influence the strategies, plans, goals, processes, and policies employed. Consequently, CTA has evolved as a means to study adaptive, resilient, and collaborative cognition in simulated environments and, especially, in the field.

CTA has been championed by the cognitive systems engineering and naturalistic decision-making communities, and by cognitive scientists studying expertise, including cognitive anthropologists and ethnographers. The range of CTA methods is vast. Rather than present an exhaustive list of methods, the reader is referred to Beth Crandall, Gary Klein, and Robert R. Hoffman's 2006 book Working Minds. A more detailed description of one particular method, the critical decision method, is provided later in this entry.

In general, CTA is most closely associated with knowledge elicitation and cognitive process-tracing data collection methods, of which there are several classes: self-reports, automated capture-based techniques, observation, and interviews. As with all empirical methods, each has strengths and limitations. Self-reports, such as questionnaires, rating scales, and diaries, permit efficient data collection and interpretation, which can be automated by computer. However, valid psychometric scale development takes time and effort. Moreover, self-reports do not afford the opportunity for additional exploration of the data, which may be problematic given the possibility of participant self-presentation or amotivation.

Automated capture includes computer-based tasks or those implemented in simulated task environments (e.g., the situation awareness global assessment technique; temporal or spatial occlusion). These methods allow direct capture of important cognitive behavior in situ, such as making a specific prediction or decision. However, they may require supplementary methods (e.g., a priori goal-directed task analysis, verbal reports, eye movements) to generate design recommendations. Additionally, they are costly to establish, require extensive system and scenario development, and have limited flexibility.

Observational methods, such as ethnographic immersion and shadowing, provide first-hand information about how events unfold (and permit post hoc verification of information elicited via other methods). However, as with many CTA techniques, they require domain knowledge to identify useful targets for data coding and interpretation. Observation is not always feasible (e.g., for low-frequency, life-threatening events), can be intrusive, and, per the Hawthorne effect, can change the behavior of those being observed.

Structured and semi-structured interview techniques (e.g., the critical decision method and the crystal ball technique) that employ directed probes can reveal information that other methods would not. In addition to conducting naturalistic studies, interviews and observations can be combined with experiment-like tasks (e.g., 20 questions, card sorting) to generate useful insights into the cognitive processes underlying superior performance. However, trained interviewers are required, interviewees' memory is fallible, and the validity of retrospective reports has been questioned.

The Critical Decision Method

The critical decision method (CDM) was developed by Gary Klein, Robert Hoffman, and colleagues and is adapted from the critical incident technique (CIT) developed by John Flanagan and colleagues (including Paul Fitts), which was designed to generate a functional description of, and identify the critical requirements for, on-the-job performance in military settings. Like the CIT, rather than probe for general knowledge, the CDM is a case-based, retrospective, semi-structured interview method. It uses multiple sweeps to elicit a participant's thinking during a specific, non-routine incident in which they were an active decision maker and played a central role. A particular goal of the CDM is to focus the interviewee on the parts of the incident that most affected their decision making and to elicit information about the macrocognitive functions and processes, such as situation assessment, sensemaking, (re)planning, and decision making, that supported proficient performance. Although elicitation is not limited to events that could be directly observed, the use of skilled individuals and specific, challenging, and recent events in which they were emotionally invested was hypothesized to scaffold memory recall based on their elaborate encoding of such incidents.

In the first sweep of the CDM, a brief (e.g., 30- to 60-second) outline of a specific incident is elicited (see below for pointers on framing the interview). In a second sweep, a detailed timeline is constructed to elaborate on the incident. This should highlight critical points where the interviewee made good or bad decisions; where goals or understanding changed; where the situation itself changed; or where the interviewee acted, failed to act, could have acted differently, or relied on personal expertise. Once delineated, the timeline is restated back to the interviewee to develop a shared understanding of the facts and to resolve inconsistencies.

In the third, progressively deepening, sweep, the interviewer tries to build a contextualized and comprehensive description of the incident from the interviewee's point of view. The goal is to identify the interviewee's knowledge, perceptions, expectations, options, goals, judgments, confusions, concerns, and uncertainties at each point. Targeted probe questions are used to investigate each point in turn. The CDM provides a list of useful probes for many different types of situations. For instance, probes for investigating decision points and shifts in situation assessment may include: "What was it about the situation that let you know what was going to happen?" and "What were your overriding concerns at that point?" Probes for investigating cues, expert strategies, and goals may include: "What were you noticing at that point?" "What information did you use in making this decision?" and "What were you hoping to accomplish at this point?"

A fourth sweep can be used to gain additional insight into the interviewee's experience, skills, and knowledge using "what if" queries, for instance, to determine how a novice may have handled the situation differently. Although some probes require reflection on strategizing and decision making, which increases subjectivity and may affect reliability, they provide a rich source for hypothesis formation and theory development. With subsequent analysis, these data can be leveraged into design-based hypotheses for training or technology development to aid future performance and learning.
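The four sweeps described above can be sketched as a simple protocol structure that an interviewer might adapt when planning a session. This is only an illustration: the field names and summary wording below are invented, not part of any standard CDM toolkit; the two probes are taken from the examples quoted earlier.

```python
# A minimal, hypothetical sketch of the four CDM sweeps as data.
# Field names and goal summaries are illustrative only.
cdm_sweeps = [
    {"sweep": 1, "goal": "Elicit a brief (30- to 60-second) incident outline"},
    {"sweep": 2, "goal": "Construct, restate, and verify a detailed timeline"},
    {"sweep": 3, "goal": "Deepen each timeline point with targeted probes",
     "probes": ["What were you noticing at that point?",
                "What information did you use in making this decision?"]},
    {"sweep": 4, "goal": "Explore expertise with 'what if' queries"},
]

# Walking the structure prints a session agenda.
for s in cdm_sweeps:
    print(f"Sweep {s['sweep']}: {s['goal']}")
```

Holding the protocol as data, rather than prose notes, makes it easy to attach elicited material (timelines, probe answers) to the sweep that produced it.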

From Elicitation to Design

To meet the goal of communicating results (e.g., from a set of CDM interviews) for improving system performance, CTA embraces a range of data analysis and knowledge representation methods. Quantitative and qualitative methods are leveraged to understand the data, including hit rates, reaction times, thematic or categorical analyses, chronological or protocol analyses, and conceptual models and statistical or computational models.

As with all qualitative analyses, the goal of the CTA practitioner is to unravel the story contained in the data. After conducting data and quality control checks, analysis is used to identify and organize the cognitive elements so that patterns in the data can emerge. Importantly, the identification of elements should be informed by asking cognitive questions of the data, such as: What didn't they see? What information did they use? Where did they search? What were their concerns? What were they thinking about? At this stage, knowing the data is key, and organizational procedures, such as categorizing, sorting, making lists, counting, and descriptive statistics, can be used to assist in this effort.
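As a hedged sketch of one such organizational procedure, the snippet below counts cognitive-element codes across hypothetical interview excerpts. The excerpts, participant labels, and code names are all invented for illustration; real coding schemes come from the analyst's cognitive questions.

```python
from collections import Counter

# Hypothetical excerpts from CDM interviews, each tagged by an analyst
# with one or more cognitive-element codes (all data invented).
coded_excerpts = [
    {"participant": "P1", "codes": ["cue", "concern"]},
    {"participant": "P1", "codes": ["decision"]},
    {"participant": "P2", "codes": ["cue", "expectation"]},
    {"participant": "P3", "codes": ["cue", "concern", "decision"]},
]

# Counting code frequencies is one simple way to let patterns emerge
# before moving on to higher order structure.
code_counts = Counter(c for e in coded_excerpts for c in e["codes"])
print(code_counts.most_common())
```

Even a frequency table this crude can flag which elements (here, cues) recur often enough to warrant a dedicated representation later in the analysis.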

Once individual elements have been identified, the next goal is to identify the higher order data structure that describes the relationships between elements. This is done by generalizing across participants to describe regularities in the data (e.g., by looking for co-occurrences and re-occurrences, or their absence, that might signify a pattern); organizing elements into inclusive, overarching formats (e.g., creating tables of difficult decisions, cues used, and strategies employed); looking for similarities or differences across groups (e.g., cues used by experts but not novices); and using statistical analyses to examine differences and relationships.
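The expert–novice comparison mentioned above can be sketched as a simple set difference over elicited cue inventories. The group labels and cue names below are invented (loosely firefighting-flavored) purely for illustration.

```python
# Hypothetical cue inventories aggregated from expert and novice
# interviews (all cue names invented for illustration).
cues_by_group = {
    "expert": {"engine tone", "smoke color", "floor sponginess", "radio traffic"},
    "novice": {"smoke color", "radio traffic"},
}

# Cues reported by experts but not novices may mark perceptual skill
# worth targeting in training or decision aids.
expert_only = cues_by_group["expert"] - cues_by_group["novice"]
print(sorted(expert_only))
```

The same set operations (intersection for shared cues, symmetric difference for divergences) support the other cross-group comparisons described in the text.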

Following the data analysis, knowledge must be represented in a useful form. Fortunately, many forms of representation exist as a natural part of the analysis process. However, representations developed early in the process will be data driven, whereas those developed later should be meaning driven. Narrative-based representations (story summaries) can extend participants' verbalizations to highlight what is implicit in the data. Graphical chronologies, such as timelines that retain the order of events, can be used to represent the multiple viewpoints of team members and permit subjective and objective accounts to be linked. Data organizers, such as decision requirements tables, permit multiple sources of information to be synthesized to form an integrated representation. Conceptual mapping tools allow knowledge structures to be graphically and hierarchically represented. Process diagrams, such as decision ladders, represent cognition in action and provide insights into aspects of cognitive complexity that might otherwise appear to be simple.
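As a hedged sketch of one data organizer, a decision requirements table can be held as a row-per-decision structure and rendered as text. The column names, decisions, cues, and strategies below are all invented examples, not prescribed fields of the method.

```python
# A minimal, hypothetical decision requirements table: one row per
# difficult decision, synthesizing cues and strategies across interviews.
decision_requirements = [
    {"decision": "Enter the building?",
     "why_difficult": "structural integrity uncertain",
     "cues": ["roof sag", "smoke color"],
     "strategies": ["mental simulation", "consult incident commander"]},
    {"decision": "Call for backup?",
     "why_difficult": "resource cost versus escalation risk",
     "cues": ["rate of spread"],
     "strategies": ["compare with prior incidents"]},
]

# Render the table as aligned text, e.g., for a report appendix.
for row in decision_requirements:
    print(f"{row['decision']:<20} | {', '.join(row['cues']):<22} | "
          f"{', '.join(row['strategies'])}")
```

Keeping each row tied to a single difficult decision is what lets the table integrate material from multiple interviews into one representation.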

The selection of CTA methods should be driven by framing questions, including: What is the primary issue or problem to be addressed by the CTA? What is the deliverable? Inexperienced CTA practitioners frequently underspecify the project and adopt a method-driven, rather than problem-focused, approach. To overcome these biases and to direct project resources efficiently, practitioners should become familiar with the target domain, the study of micro- and macrocognition, and the range of CTA methods available, and they should conduct preliminary investigations to identify the most cognitively challenging task components.

Translating CTA into actual design is often the least well-executed and most ambiguous step. However, it need not be. Making explicit what will be delivered (a training plan, intervention, or decision aid) and agreeing on this with the target users permits the original goal, positively changing (system) behavior, to be attained. To do this effectively, however, the CTA practitioner needs to identify the stakeholders involved, understand their goals and needs, and determine the intended use of the deliverable. Frequently, data exist that can help frame a CTA project, for instance, behavioral task analyses or documented evidence about training or system inadequacies. Generating outcomes without reference to these issues will likely result in design recommendations or tools that do not generate an effective, or even adequate, solution to the problem. Ultimately, the goal of any CTA is to use mixed methods to generate products that leverage expert data in a way that can improve performance.


Further Readings

  1. Crandall, B. W., Klein, G. A., & Hoffman, R. R. (2006). Working minds: A practitioner's guide to cognitive task analysis. Cambridge, MA: MIT Press.
  2. Hoffman, R. R., Crandall, B. W., & Shadbolt, N. (1998). Use of the critical decision method to elicit expert knowledge: A case study in the methodology of cognitive task analysis. Human Factors, 40, 254–276.
  3. Hoffman, R. R., & Militello, L. (2008). Perspectives on cognitive task analysis: Historical origins and modern communities of practice. New York: Psychology Press.
  4. Hoffman, R. R., Ward, P., Feltovich, P. J., DiBello, L., Fiore, S. M., & Andrews, D. (2013). Accelerated expertise: Training for high proficiency in a complex world. New York: Psychology Press.
  5. Klein, G. A., Calderwood, R., & Clinton-Cirocco, A. (1986). Rapid decision making on the fire ground. Human Factors and Ergonomics Society Annual Meeting Proceedings, 30, 576–580.
  6. Salmon, P. M., Stanton, N. A., Gibbon, A. C., Jenkins, D. P., & Walker, G. H. (2010). Human factors methods and sports science: A practical guide. Boca Raton, FL: CRC Press.
  7. Schraagen, J. M., Chipman, S. F., & Shalin, V. L. (2000). Cognitive task analysis. Mahwah, NJ: Erlbaum.

See also: