The term “counseling process” pertains to the unfolding events, characteristics, or conditions that emerge during or as a consequence of the interaction between a counselor and a client. An exemplar of the counseling process is the therapeutic relationship that develops within counseling sessions. Likewise, engaging in homework assignments outside of sessions is also considered a component of the counseling process. The notion of process encompasses both the counselor’s activities with the client and how transformative changes occur within the client.
In contrast, the “counseling outcome” refers to the tangible results or impacts of the counseling experience. Outcomes denote the observable shifts that manifest in the client, directly or indirectly, as a result of counseling. Although the counseling process is assumed to influence outcomes, research has struggled to demonstrate consistent, definitive connections between measures of process and measures of outcome.
Measurement of both counseling process and outcome has posed considerable challenges throughout the history of counseling and psychotherapy research. A central issue in measuring process and outcome lies in the absence of a consensus regarding what should be measured and how these measurements should be applied in practice or research. This is referred to as the “units of measurement” predicament: what factors should be assessed with individual clients or in specific studies?
Process researchers have explored diverse variables, encompassing factors like the amount of dialogue from counselors or clients, counselor competence, adherence of therapists to their prescribed roles or treatment protocols, clients’ emotional experiences, the strength of the therapeutic alliance, client responses of defensiveness or resistance, the severity of clients’ issues, and language utilization. As researchers have highlighted, the concept of process encompasses a multifaceted range of aspects.
The sections that follow review the measurement principles typically used to evaluate psychological tests, assess a representative selection of counseling process and outcome measures, and outline potential directions for future research on these measures.
General Measurement Principles
When assessing process and outcome measures, established measurement principles like reliability and validity serve as cornerstones. Reliability entails the consistency of measurement and is commonly evaluated through statistical analyses of internal consistency, test-retest reliability, and inter-rater reliability.
Internal consistency is gauged using coefficients like alpha, which indicates how consistently individual test items contribute to the total score. Test-retest reliability examines measurement consistency over time, especially for stable psychological traits where repeated administrations should yield consistent scores. Interrater reliability assesses whether multiple judges can evaluate psychological attributes or events in a similar manner, often measured using the intraclass correlation and kappa statistic.
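To ground these indices, the following Python sketch computes coefficient alpha for a simulated item-response matrix, a test-retest correlation across two administrations, and Cohen’s kappa for two raters (an intraclass correlation would require an additional package and is omitted). The data, sample sizes, and item counts are simulated placeholders rather than values from any published measure.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)

def cronbach_alpha(items: np.ndarray) -> float:
    """Coefficient alpha for a respondents-by-items score matrix."""
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulated 7-point ratings for 200 clients on a 12-item scale whose items
# share a common underlying trait, so internal consistency should be high.
trait = rng.normal(0, 1, size=(200, 1))
noise = rng.normal(0, 1, size=(200, 12))
responses = np.clip(np.rint(4 + 1.5 * trait + noise), 1, 7)
print("coefficient alpha:", round(cronbach_alpha(responses), 2))

# Test-retest reliability: correlate total scores from two administrations.
time1 = responses.sum(axis=1)
time2 = time1 + rng.normal(0, 3, size=time1.shape)   # simulated retest
r, _ = pearsonr(time1, time2)
print("test-retest r:", round(r, 2))

# Interrater reliability: kappa for two judges assigning categorical codes.
rater_a = rng.integers(0, 3, size=100)
rater_b = rater_a.copy()
disagree = rng.random(100) < 0.2                      # ~20% of codes differ
rater_b[disagree] = rng.integers(0, 3, size=disagree.sum())
print("Cohen's kappa:", round(cohen_kappa_score(rater_a, rater_b), 2))
```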
Validity is approached from two angles: whether a test truly assesses the intended construct and whether its scores can be effectively used for specific purposes. The first aspect concerns whether a test accurately reflects a particular construct, differentiating it from other constructs or from sources of error. One such error source is social desirability, the tendency of respondents to present themselves favorably, which can skew their reports of behaviors like smoking or drinking in a counseling context.
The second facet of validity takes into account the test’s intended purpose. Esteemed organizations like the American Educational Research Association (AERA), the American Psychological Association (APA), and the National Council on Measurement in Education (NCME) emphasize the significance of testing purpose. For outcome measures intended to assess change, their sensitivity to change aligns directly with their construct validity.
Multiple methods exist to evaluate validity. Convergent validity involves examining correlations between a test of a particular construct and another test assessing the same or a related construct, where substantial correlations support validity. Discriminant validity, in contrast, examines correlations between a test and measures of dissimilar constructs, where low correlations support validity. Test developers also employ factor analysis, a statistical procedure in which a wide range of items is administered to a substantial group of participants and the resulting scores are analyzed. The analysis assumes that the diverse items measure a smaller set of underlying factors or traits that collectively explain patterns in the responses, and it thereby helps uncover the constructs underlying a set of items or tests.
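A minimal sketch, using simulated data and invented variable names, of how these analyses look in code: correlating two anxiety measures (convergent) and an unrelated vocational-interest measure (discriminant), then fitting an exploratory factor analysis to item responses generated from two latent factors. The two-factor structure and all names are illustrative assumptions, not results from any actual instrument.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n = 300

# Simulated scores: two anxiety measures (similar constructs) and a
# vocational-interest measure (dissimilar construct).
anxiety_a = rng.normal(50, 10, n)
anxiety_b = anxiety_a * 0.8 + rng.normal(0, 6, n)   # overlaps with anxiety_a
vocational = rng.normal(50, 10, n)                  # unrelated construct

corr = np.corrcoef([anxiety_a, anxiety_b, vocational])
print("convergent r (anxiety A vs. B):", round(corr[0, 1], 2))        # expected high
print("discriminant r (anxiety A vs. vocational):", round(corr[0, 2], 2))  # expected near zero

# Exploratory factor analysis: 10 items assumed to reflect 2 underlying factors.
latent = rng.normal(size=(n, 2))
loadings = rng.normal(size=(2, 10))
items = latent @ loadings + rng.normal(0, 0.5, size=(n, 10))

fa = FactorAnalysis(n_components=2).fit(items)
print("estimated loadings (items x factors):")
print(np.round(fa.components_.T, 2))
```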
Counseling Process Measures
In the realm of counseling process measurement, several instruments offer insights into the interactions and dynamics between counselors and clients. Five notable process measures are reviewed here, each in terms of its characteristics, reliability and validity findings, associated research, and potential issues.
Five Process Measures
- Working Alliance Inventory (WAI). Comprising 36 items, the WAI is administered in parallel forms to clients, counselors, and observers, and a 12-item short form has also been introduced. Rooted in Bordin’s concept of the working alliance, the WAI measures the alliance as it emerges from client-counselor agreement on the goals and tasks of counseling and from the emotional bond between them. Three subscales (Tasks, Goals, Bond) and a total score are generated from the WAI. Sample items like “I believe (my counselor) likes me” and “(My counselor) and I are working toward mutually agreed-upon goals” exemplify its content. Over 100 studies have employed the WAI, consistently finding high internal consistency and modest predictive power for therapy outcomes. Nevertheless, certain studies have raised questions about the underlying constructs assessed by the WAI, explanations for the strong correlations among its subscales, and which source (client, counselor, or observer) better predicts outcomes. Research also suggests a link between a stronger working alliance and a client’s willingness to express negative emotions, which may be crucial for a successful counseling process.
- Session Evaluation Questionnaire (SEQ). Devised to gauge clients’ perceptions of a counseling session’s impact, the SEQ relies on 24 items rated on a 7-point semantic differential scale. This format presents pairs of opposing adjectives with gradations in between, and respondents choose the number aligning with their experience. Subscale scores for Depth, Smoothness, Positivity, and Arousal, along with a total score, characterize the session’s effects. Depth assesses the client’s in-session perception of the session’s significance, while Smoothness evaluates perceptions of safety versus distress during the session. Positivity gauges post-session confidence and happiness, while Arousal indicates the level of post-session activity and excitement. Research indicates that the in-session Depth and Smoothness scales are notably related to post-session Positivity, though not to Arousal. Like other process measures, questions remain about the SEQ’s relationship to counseling outcome—specifically, the extent to which a single session affects the overall counseling outcome.
- Expectations About Counseling-Brief Form (EAC-B). A critical factor in clients’ decisions to seek counseling lies in their expectations about its process and outcomes. The Expectations About Counseling-Brief Form (EAC-B) serves as a widely employed research measure in this domain. Respondents answer 66 items on a 7-point Likert scale ranging from “not true” to “definitely true.” These responses generate scores across 17 scales, organized into four areas: Client Attitudes and Behaviors, Counselor Attitudes and Behaviors, Counselor Characteristics, and Counseling Process and Outcome. Studies of the internal consistency and test-retest reliability of the subscales have generally supported most of the scales. Construct validity assessments indicate that the EAC-B evaluates expectations about counseling, differentiating them from perceptions. Notably, the scale has demonstrated applicability across diverse populations, including Hispanic and rural samples. Despite being one of the more technically sophisticated process measures, little research on the EAC-B has been published since the early 1990s.
- Client Reactions System (CRS). Devised by Hill and colleagues, the CRS delves into the array of responses clients experience in reaction to counselor interventions. The measure encompasses 21 reaction categories—14 positive and 7 negative—that illuminate clients’ dynamic responses. Positive categories include “Feelings,” capturing the client’s heightened emotional depth; “Supported,” reflecting a sense of counselor care; “Relieved,” signifying diminished anxiety or depression; and “Responsibility,” indicating a shift towards personal accountability. In contrast, the negative categories include “Misunderstood,” highlighting instances where the client perceives inaccuracies or judgment, and “Scared,” conveying feelings of apprehension. To administer the CRS, clients watch a session videotape, pausing after each counselor intervention to rate their reactions. While questions about the reliability and validity of similar measures have surfaced, relatively few studies have explored the full potential of the CRS.
- Counseling Self-Estimate Inventory (COSE). The COSE is grounded in Bandura’s self-efficacy theory, which posits that expectations of personal competence profoundly shape the initiation and persistence of behavior. This notion guided the development of measures such as the COSE, which assesses counselors’ beliefs about their own counseling competence. To create the COSE, Larson and colleagues administered Likert-format items to 213 students enrolled in master’s-level counseling courses and then subjected the responses to factor analysis. The analysis yielded five factors that capture distinct dimensions of counselors’ self-efficacy perceptions.
Microskills: Within the COSE framework, “Microskills” items pertain to the content covered in basic counseling and communication skills training. This factor examines counselors’ confidence in their mastery of essential techniques that form the foundation of effective counseling interactions.
Process: The “Process” factor delves into counselors’ self-efficacy regarding the integration of their responses when working directly with clients. This dimension captures their belief in their ability to navigate the intricate dynamics of the counseling process.
Difficult Client Behaviors: This factor gauges counselors’ self-efficacy in handling challenging situations, particularly those involving clients who are unresponsive or unmotivated. A higher score in this category signifies a strong sense of competence in addressing such scenarios.
Cultural Competence: The “Cultural Competence” factor focuses on counselors’ confidence in interacting adeptly with clients from diverse ethnic or cultural backgrounds. This facet reflects their preparedness to navigate cultural nuances with sensitivity.
Awareness of Values: Within the COSE, “Awareness of Values” items evaluate counselors’ tendency to project their values and biases onto clients. This factor examines the degree to which counselors are attuned to potential value-related influences on the counseling process.
The integration of Bandura’s self-efficacy theory into the COSE’s framework underscores the significance of counselors’ beliefs in their own competence. As process-oriented measures continue to evolve, this theoretical foundation offers valuable insights into the psychological underpinnings that drive counselors’ behavior and interactions with their clients. Further research could explore the correlation between counselors’ self-efficacy perceptions, their actual performance in counseling sessions, and the resulting outcomes. Such investigations have the potential to enhance counselor training and optimize their effectiveness in facilitating positive client progress.
The scales within the Counseling Self-Estimate Inventory (COSE) have showcased a range of internal consistency and test-retest reliability, spanning from moderate to high levels. These commendable reliability indices signify the stability and consistency of the measurement tool over time and across different administrations. Furthermore, the COSE scales exhibit moderate correlations, aligning with expectations, when compared to other measures such as self-concept, anxiety levels, problem-solving appraisal, and the performance of counselors in their counseling roles.
However, a noteworthy issue arises in relation to the Process and Microskills subscales within the COSE. This matter revolves around the wording of the items, which introduces a confounding element. The positive or negative nature of the phrasing in these subscales has the potential to influence respondents’ perceptions and responses, potentially biasing the outcomes. This complicates the interpretation of the results and raises concerns about the true measurement of counselors’ self-efficacy regarding certain counseling processes and fundamental skills.
While the COSE holds promise as a comprehensive measure of counselors’ self-efficacy perceptions, addressing the confounding item wording is essential to ensure the integrity and accuracy of the outcomes. Future revisions of the COSE may benefit from reevaluating and refining the phrasing of items to eliminate potential bias and enhance the validity of the measure. This enhancement would contribute to a more robust assessment of counselors’ self-efficacy across various dimensions, thereby supporting their professional growth and development.
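One routine safeguard against the wording confound described above is to reverse-key negatively phrased items before subscale scores are summed, so that every item points in the same direction. The sketch below shows this for a hypothetical six-point Likert subscale; the item positions and keying are invented for illustration and do not reflect the published COSE scoring key.

```python
import numpy as np

LIKERT_MAX = 6  # assume items rated 1-6

def score_subscale(responses: np.ndarray, reverse_items: list[int]) -> np.ndarray:
    """Sum subscale scores after reverse-keying negatively worded items.

    responses: respondents-by-items matrix of 1..LIKERT_MAX ratings.
    reverse_items: column indices of negatively worded items.
    """
    keyed = responses.astype(float).copy()
    # A rating of 1 becomes 6, 2 becomes 5, and so on.
    keyed[:, reverse_items] = (LIKERT_MAX + 1) - keyed[:, reverse_items]
    return keyed.sum(axis=1)

# Hypothetical 5-item subscale; items 1 and 3 are negatively worded.
responses = np.array([[6, 2, 5, 1, 6],
                      [4, 3, 4, 3, 4]])
print(score_subscale(responses, reverse_items=[1, 3]))
```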
Additional Process Measures
Furthermore, a multitude of additional process measures have found their way into the counseling literature, each offering unique insights into the intricacies of the counseling process. These measures shed light on various dimensions and aspects of the therapeutic interaction, enriching our understanding of how counselors and clients engage and collaborate to achieve positive outcomes.
Among these process measures are the Expectations about Counseling scale, which explores clients’ anticipated experiences and outcomes from counseling sessions. The Counselor Rating Form and Counselor Effectiveness Scale provide perspectives on clients’ perceptions of their counselors’ efficacy and effectiveness in guiding them through the therapeutic journey. The Barrett-Lennard Relationship Inventory delves into the quality of the therapeutic relationship itself, evaluating the degree of warmth, empathy, and genuineness perceived by the client.
In the realm of counselor evaluation, the Counselor Effectiveness Rating Scale offers a comprehensive assessment of counselors’ competencies and impact on clients’ well-being. The Personal Attributes Inventory delves into the personal attributes of counselors that may influence their effectiveness. The Counselor Evaluation Inventory provides a systematic evaluation of the counselor’s skills and abilities based on observable behaviors.
Additionally, the Structural Analysis of Social Behavior uncovers patterns in the client-counselor interaction, offering insights into communication dynamics and relationship patterns. These measures, among others, exemplify the rich tapestry of process constructs that the field of counseling explores. They empower researchers and practitioners alike to dissect, understand, and optimize the therapeutic process, ultimately fostering enhanced outcomes for clients and contributing to the growth and evolution of the counseling discipline.
Counseling Outcome Measures
While numerous process measures predominantly find their use in research studies, both researchers and practicing counselors find great value in employing outcome measures. Outcome measures are pivotal tools that serve multiple purposes in the counseling field. Researchers utilize them to conduct efficacy and effectiveness studies that shed light on the effectiveness of different therapeutic approaches. Efficacy studies, often conducted in controlled settings like university clinics, involve specific therapeutic methods applied to homogeneous groups to examine the impact on targeted issues such as depression or anxiety. In contrast, effectiveness studies take place in real-world settings, like community mental health centers, where a diverse clientele with varying psychological concerns seek assistance.
In practice environments, outcome measures are frequently mandated by managed care companies and other funding bodies to substantiate the efficacy of counseling services. Community mental health organizations, hospitals, and educational institutions utilize these measures to demonstrate the tangible benefits of their services for clients and their families. In effectiveness studies and practical scenarios, outcome measures tend to encompass a broad spectrum of problem domains, aiming to provide comprehensive insights into the outcomes of counseling interventions.
However, similar to the challenges encountered with process measures, a series of complexities emerges when dealing with counseling outcome measures. These measures differ in terms of their:
- Content: They may focus on aspects such as intrapsychic functioning (e.g., anxiety and depression), interpersonal relationships (e.g., conflicts), or social roles (e.g., work or academic performance).
- Source of information: Information can be collected from clients, counselors, significant others, observers, experts, and even societal agents like teachers.
- Methods: Outcome measures can be based on self-reports, external ratings, behavioral observations, physiological measurements, or projective techniques.
Within this landscape, no definitive consensus exists among researchers or counselors about the ideal combination of content, source, or method to employ in specific situations. Nevertheless, it is clear that each of these elements plays a role in shaping the results of outcome studies, which are commonly summarized as standardized effect sizes (a minimal effect-size computation is sketched after the list below). Some key findings include:
- Counselor and expert ratings often yield larger estimates of change than client self-reports.
- Global assessments of client functioning tend to show more pronounced effects than evaluations of specific symptoms.
- Measures focusing on short-term therapy targets (e.g., reducing alcohol consumption) tend to produce greater effects than those aiming for long-term mental health improvements.
- Measures of negative affective states, like depression and anxiety, tend to demonstrate more immediate effects from counseling than measures of interpersonal conflicts or social role functioning.
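As referenced above, comparisons of this kind typically rest on standardized effect sizes. The sketch below computes a paired pre-post Cohen’s d for the same simulated clients as rated by two sources; the larger counselor-rated change is built into the simulation purely to mirror the first finding in the list, not derived from real data.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 80

def cohens_d_paired(pre: np.ndarray, post: np.ndarray) -> float:
    """Standardized mean change: mean pre-post difference over the SD of differences."""
    diff = pre - post           # positive when symptom scores drop
    return diff.mean() / diff.std(ddof=1)

# Simulated symptom ratings (higher = more distress) before and after counseling.
client_pre = rng.normal(60, 10, n)
client_post = client_pre - rng.normal(5, 8, n)          # modest self-reported improvement
counselor_pre = client_pre + rng.normal(0, 5, n)
counselor_post = counselor_pre - rng.normal(10, 8, n)   # larger counselor-rated change

print("client self-report d:", round(cohens_d_paired(client_pre, client_post), 2))
print("counselor rating d:  ", round(cohens_d_paired(counselor_pre, counselor_post), 2))
```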
Amidst these considerations, outcome measures stand as essential tools that illuminate the impact and effectiveness of counseling interventions, aiding both researchers and practitioners in their pursuit of enhancing clients’ well-being and fostering positive change.
Four Outcome Measures
Several prominent scales are commonly employed to measure counseling outcomes for children, adolescents, and adults, offering insights into the efficacy of therapeutic interventions. Widely recognized tools such as the Child Behavior Checklist, the Conners Rating Scales, and the Minnesota Multiphasic Personality Inventory-2 (MMPI-2) have gained prominence because of their diagnostic and screening capabilities. However, many of these scales are lengthy, costly, and filled with items that are not well suited to detecting change in a practical counseling setting.
While research participants might be willing to invest an extended period, often an hour or more, in completing a battery of measures, this approach might not be as feasible for actual clients seeking relief from distressing conditions. In practice, clients are primarily driven by the desire for relief and are less inclined to engage with measures that exceed a single page in length, tasks that surpass 5 minutes in completion, or activities that seem disconnected from the core provision of counseling services.
To offer a glimpse into the landscape of outcome measurement, the following section introduces four illustrative examples of outcome measures. Among these, two are specifically designed for evaluating depression and anxiety outcomes, while the remaining two exemplify a broader global outcome measure and a comprehensive measure that covers various domains of outcomes. Through these examples, the diverse array of tools available for assessing the effectiveness of counseling interventions becomes evident, catering to the unique needs and preferences of both researchers and clients.
- Beck Depression Inventory (BDI). Built upon careful observations of both individuals with depression and those without, the Beck Depression Inventory (BDI) emerges as a significant tool for gauging the intricacies of depression. This assessment comprises 21 multiple-choice items, each designed to evaluate a distinct facet of depression. Completed by clients, every item presents a quartet of statements that range from an absence of depressive symptoms to their severe manifestation. For instance, respondents might choose between statements like “I do not feel sad” and “I am so sad or unhappy that I can’t stand it.” The assessed depression symptoms and attitudes span mood, guilt, suicidal tendencies, irritability, sleep disruptions, and appetite loss. Extensive research affirms that BDI scores show strong internal consistency and correlate highly with alternative depression measurement methods. Furthermore, these scores exhibit sensitivity to changes prompted by various interventions, including medication and counseling. Nonetheless, a contrasting finding emerges from research: more than half of the individuals initially identified as depressed by the BDI shift categories upon retesting, even when the reassessment period spans a mere few hours or days. This volatility in certain BDI items prompts inquiries into whether score changes indeed signify counseling-induced improvements or are influenced by methodological factors. The BDI thus holds dual significance as a valuable tool for assessing depression while also instigating discussions around its stability and the nuanced factors influencing its outcomes. A minimal scoring sketch follows this list of measures.
- State-Trait Anxiety Inventory (STAI). Delving into the realm of anxiety assessment, the State-Trait Anxiety Inventory (STAI) emerges as a robust tool consisting of two distinct self-report scales, each comprising 20 items. These scales are grounded in a conceptual framework that views anxiety as a signal alerting an individual to potential danger. The differentiation lies in their focus: the State Anxiety scale captures transient emotional states that vary across situations, while the Trait Anxiety scale delves into the more enduring facets of anxiety. Within these scales, one might encounter items like “I feel content” juxtaposed with “I feel nervous.” A careful evaluation of research illuminates anticipated nuances. For instance, the State Anxiety scale exhibits lower test-retest reliability, reflective of its transient nature, whereas both scales demonstrate commendable internal consistency across diverse samples. The State Anxiety scale’s responsiveness to treatment interventions stands out, particularly when assessing the impact of counseling aimed at alleviating test anxiety.
Despite the STAI’s design to distinguish anxiety from depression, a notable finding looms large: most anxiety and depression measures correlate moderately to highly with one another. This observation lacks a comprehensive explanation and raises doubts about the validity of anxiety and depression measurement tools. The interconnectedness of these states raises questions about the fundamental nature of the constructs and the extent to which they overlap or share underlying mechanisms. In uncovering these intersections, the STAI not only provides insights into anxiety but also prompts a reevaluation of the complex relationship between emotional states.
- Global Assessment of Functioning (GAF). Taking a closer look at outcome measurement, the Global Assessment of Functioning (GAF) stands out as a prevalent and concise tool that has garnered popularity over the past decades. Its widespread adoption can be attributed, in part, to its simplicity: a single-item, 100-point rating scale that clinicians use to gauge a client’s overall functioning and symptomatology. This global rating aims to encapsulate symptoms and functioning across a wide spectrum, from work performance to suicide risk, within daily, weekly, or monthly timeframes. GAF ratings are generally made at the beginning and end of counseling, yet the demands of managed care often necessitate more frequent reporting during ongoing treatment. Researchers report moderate test-retest reliability figures ranging from .60 to .80; despite the GAF’s ubiquity, additional psychometric data remain limited. A fundamental concern, however, is the GAF’s vulnerability to manipulation: the transparency of the rating system allows counselors to adjust the rating to present the client as sufficiently distressed to justify treatment, at the potential expense of validity. A recent survey indicates that counselors regard global GAF-type data as among the least informative for assessing outcomes. This underscores the need for a more comprehensive and nuanced approach to outcome measurement, one that recognizes both the GAF’s convenience and its limitations.
- Outcome Questionnaire (OQ-45). Developed by Lambert and colleagues, the Outcome Questionnaire (OQ-45) is a 45-item self-report measure designed expressly for tracking client change over the course of counseling. Clients rate each item on a 5-point scale, and responses yield a total score along with subscale scores covering symptom distress, interpersonal relations, and social role functioning, domains that correspond to the content areas outcome measures commonly sample. The OQ-45 is brief enough for routine, repeated administration, often session by session, which makes it attractive in managed care and agency settings that require ongoing documentation of client progress. Reported internal consistency and test-retest reliability are adequate, and research with counseling center clients has examined, and generally supported, the measure’s sensitivity to change. As with other outcome measures, questions remain about how well a single omnibus instrument can capture the specific changes targeted in a given client’s counseling; a brief sketch of such session-by-session score tracking follows this list.
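To make BDI scoring concrete: each item offers four statements conventionally coded 0 to 3, so the 21 items sum to a total between 0 and 63, with higher totals indicating more severe depression. The sketch below applies that summing logic to a pre- and post-counseling administration; the severity bands reflect commonly cited cutoff ranges and are included for illustration only, not as diagnostic rules.

```python
from typing import Sequence

def bdi_total(item_ratings: Sequence[int]) -> int:
    """Sum 21 item ratings, each coded 0 (symptom absent) to 3 (severe)."""
    if len(item_ratings) != 21 or not all(0 <= r <= 3 for r in item_ratings):
        raise ValueError("expected 21 ratings, each between 0 and 3")
    return sum(item_ratings)

def severity_label(total: int) -> str:
    """Illustrative severity bands based on commonly cited BDI cutoffs."""
    if total <= 9:
        return "minimal"
    if total <= 18:
        return "mild"
    if total <= 29:
        return "moderate"
    return "severe"

# Example: pre- and post-counseling administrations for one client.
pre = [2] * 21                  # total 42
post = [1] * 10 + [0] * 11      # total 10
for label, ratings in [("pre", pre), ("post", post)]:
    total = bdi_total(ratings)
    print(label, total, severity_label(total))
```

Because the OQ-45 is built for repeated administration, a typical workflow tracks a client’s total score session by session and flags shifts that exceed a change criterion. The sketch below illustrates that workflow with simulated weekly totals; the threshold is a placeholder parameter, not the instrument’s published reliable-change value.

```python
def flag_session_changes(totals: list[float], change_threshold: float = 10.0) -> list[str]:
    """Label each session's OQ-style total relative to intake.

    totals: total scores in session order (higher = more distress).
    change_threshold: placeholder for a reliable-change criterion.
    """
    baseline = totals[0]
    labels = []
    for score in totals:
        shift = baseline - score              # positive = improvement
        if shift >= change_threshold:
            labels.append("improved")
        elif shift <= -change_threshold:
            labels.append("deteriorated")
        else:
            labels.append("no reliable change")
    return labels

# Simulated weekly totals for one client across six sessions.
weekly_totals = [92, 88, 85, 79, 74, 70]
for session, (score, label) in enumerate(zip(weekly_totals, flag_session_changes(weekly_totals)), 1):
    print(f"session {session}: total={score} ({label})")
```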
Future Directions
In the realm of psychological assessment, the traditional focus has been on the creation of items that effectively differentiate individuals based on relatively enduring traits like intelligence or vocational preferences. However, a more intricate and demanding challenge lies in the measurement of change – a concept that has become increasingly central in the context of counseling and therapy. Test developers are currently embarking on a new phase, shaping the second generation of tests designed explicitly to gauge the shifts that occur either directly or indirectly as a consequence of counseling interactions.
Within the professional psychological landscape, the emergence of empirical studies exploring methodologies to assess and amplify change-sensitive measures has only recently gained momentum. This endeavor is guided by the recognition that distinct procedures for test construction and item analysis are essential to pinpoint items that accurately capture the effects of counseling. These analytical techniques serve to test the contrasting propositions: whether observable alterations at the item level are a product of counseling itself or whether they stem from extraneous factors unrelated to the therapeutic process.
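A simple version of such an item analysis compares pre- and post-counseling responses item by item and retains items whose standardized change in a counseled group clearly exceeds the change observed in an untreated comparison group. The sketch below illustrates this general logic with simulated data and an arbitrary selection cutoff; it is not a reproduction of any specific published procedure.

```python
import numpy as np

rng = np.random.default_rng(3)
n_clients, n_items = 120, 15

# Simulated item responses (0-4 severity ratings) at intake and termination.
treated_pre = rng.integers(0, 5, size=(n_clients, n_items)).astype(float)
treated_post = treated_pre + rng.integers(-1, 2, size=(n_clients, n_items))   # ordinary fluctuation
treated_post[:, :5] -= rng.integers(0, 3, size=(n_clients, 5))                # items 0-4 also improve with counseling
treated_post = treated_post.clip(0, 4)

control_pre = rng.integers(0, 5, size=(n_clients, n_items)).astype(float)
control_post = (control_pre + rng.integers(-1, 2, size=(n_clients, n_items))).clip(0, 4)

def item_change_d(pre: np.ndarray, post: np.ndarray) -> np.ndarray:
    """Per-item standardized mean change (positive = symptom reduction)."""
    diff = pre - post
    return diff.mean(axis=0) / diff.std(axis=0, ddof=1)

# An item is flagged as change-sensitive when treated-group change clearly
# exceeds control-group change; the 0.5 cutoff is arbitrary and illustrative.
sensitivity = item_change_d(treated_pre, treated_post) - item_change_d(control_pre, control_post)
selected = np.where(sensitivity > 0.5)[0]
print("change-sensitive items:", selected)
```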
As this new wave of effort gains traction, it holds the promise of unraveling crucial inquiries that have long intrigued both researchers and practitioners. Questions such as “What specific changes transpire through counseling?” and “What intricate mechanisms underlie the processes that drive positive change in therapeutic interventions?” are primed to receive enhanced answers, paving the way for a deeper comprehension of the dynamics that shape the efficacy of counseling and illuminating the path forward for improved outcome measurement practices.
References:
- Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review, 84, 191-215.
- Bordin, E. S. (1979). The generalizability of the psychoanalytic concept of the working alliance. Psychotherapy: Theory, Research, and Practice, 16, 252-260.
- Hill, C. E., & Lambert, M. J. (2004). Methodological issues in studying psychotherapy processes and outcome. In M. J. Lambert (Ed.), Bergin and Garfield’s handbook of psychotherapy and behavior change (5th ed., pp. 84-135). New York: Wiley.
- Hoffman, B., & Meier, S. T. (2001). An individualized approach to managed mental health care in colleges and universities: A case study. Journal of College Student Psychotherapy, 15, 49-64.
- Horvath, A. O., & Greenberg, L. S. (1989). Development and validation of the Working Alliance Inventory. Journal of Counseling Psychology, 36, 223-233.
- Kendall, P. C., Hollon, S. D., Beck, A. T., Hammen, C. L., & Ingram, R. E. (1987). Issues and recommendations regarding use of the Beck Depression Inventory. Cognitive Therapy and Research, 11, 289-299.
- Martin, D. J., Garske, J. P., & Davis, M. K. (2000). Relation of the therapeutic alliance with outcome and other variables: A meta-analytic review. Journal of Consulting and Clinical Psychology, 68, 438-450.
- Maruish, M. E. (1999). The use of psychological testing for treatment planning and outcome assessment (2nd ed.). Mahwah, NJ: Lawrence Erlbaum.
- Meier, S. T. (2004). Improving design sensitivity through intervention-sensitive measures. American Journal of Evaluation, 25, 321-334.
- Orlinsky, D. E., Ronnestad, M. H., & Willutzki, U. (2004). Fifty years of psychotherapy process-outcome research: Continuity and change. In M. J. Lambert (Ed.), Bergin and Garfield’s handbook of psychotherapy and behavior change (5th ed., pp. 307-389). New York: Wiley.
- Spielberger, C. D., & Sydeman, S. (1994). State-Trait Anxiety Inventory and State-Trait Anger Expression Inventory. In M. Maruish (Ed.), The use of psychological testing for treatment planning and outcome assessment (pp. 292-321). Hillsdale, NJ: Lawrence Erlbaum.
- Stiles, W. B., Honos-Webb, L., & Knobloch, L. M. (1999). Treatment process research methods. In P. C. Kendall, J. N. Butcher, & G. N. Holmbeck (Eds.), Handbook of research methods in clinical psychology (2nd ed., pp. 364-402). New York: Wiley.
- Stiles, W. B., & Snow, J. S. (1984). Counseling session impact as viewed by novice counselors and their clients. Journal of Counseling Psychology, 31, 3-12.
- Vermeersch, D. A., Whipple, J. L., Lambert, M. J., Hawkins, E. J., Burchfield, C. M., & Okiishi, J. C. (2004). Outcome Questionnaire: Is it sensitive to changes in counseling center clients? Journal of Counseling Psychology, 51, 38-49.