Results 1 - 20 of 81
1.
J Couns Psychol ; 71(4): 203-214, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38949778

ABSTRACT

Mental health researchers have focused on promoting culturally sensitive clinical care (Herman et al., 2007; Whaley & Davis, 2007), emphasizing the need to understand how biases may impact client well-being. Clients report that their therapists commit racial microaggressions (subtle, sometimes unintentional racial slights) during treatment (Owen et al., 2014). Yet existing studies often rely on clients' retrospective evaluations and cannot establish the causal impact of microaggressions of varying ambiguity on clients. This study uses an experimental analogue design to examine offensiveness, emotional reactions, and evaluations of the interaction across three distinct levels of microaggression statements: subtle, moderate, and overt. We recruited 158 adult African American participants and randomly assigned them to watch a brief counseling vignette. We found significant differences between the control condition and all three microaggression conditions on all outcome variables, but no significant differences among the microaggression conditions themselves. This study, in conjunction with previous correlational research, highlights the detrimental impact of microaggressions within psychotherapy, regardless of how racially explicit their content is. (PsycInfo Database Record (c) 2024 APA, all rights reserved).


Subject(s)
Aggression , Black or African American , Professional-Patient Relations , Psychotherapy , Humans , Adult , Male , Black or African American/psychology , Female , Aggression/psychology , Psychotherapy/methods , Racism/psychology , Middle Aged , Young Adult
2.
Psychiatr Serv ; : appips20230648, 2024 Jul 19.
Article in English | MEDLINE | ID: mdl-39026467

ABSTRACT

OBJECTIVE: Counselor assessment of suicide risk is one key component of crisis counseling, and standards require risk assessment in every crisis counseling conversation. Efforts to increase risk assessment frequency are limited by quality improvement tools that rely on human evaluation of conversations, which is labor intensive, slow, and impossible to scale. Advances in machine learning (ML) have made possible the development of tools that can automatically and immediately detect the presence of risk assessment in crisis counseling conversations. METHODS: To train models, a coding team labeled every statement in 476 crisis counseling calls (193,257 statements) for a core element of risk assessment. The authors then fine-tuned a transformer-based ML model with the labeled data, utilizing separate training, validation, and test data sets. RESULTS: Generally, the evaluated ML model was highly consistent with human raters. For detecting any risk assessment, ML model agreement with human ratings was 98% of human interrater agreement. Across specific labels, average F1 (the harmonic mean of precision and recall) was 0.86 at the call level and 0.66 at the statement level and often varied as a result of a low base rate for some risk labels. CONCLUSIONS: ML models can reliably detect the presence of suicide risk assessment in crisis counseling conversations, presenting an opportunity to scale quality improvement efforts.
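The F1 statistic reported above is the harmonic mean of precision and recall, which is why labels with a low base rate (often high precision but weak recall) can pull the statement-level average down. A minimal sketch of the computation (the function name is illustrative, not from the paper):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall; defined as 0 when both are 0."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# A label with high precision but low recall (common under low base rates)
# yields a modest F1, mirroring the lower statement-level averages above.
high_precision_low_recall = f1_score(0.9, 0.5)   # ≈ 0.64
balanced = f1_score(0.86, 0.86)                  # = 0.86
```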

3.
Psychotherapy (Chic) ; 2024 Feb 01.
Article in English | MEDLINE | ID: mdl-38300571

ABSTRACT

Recent scholarship has highlighted the value of therapists adopting a multicultural orientation (MCO) within psychotherapy. A newly developed performance-based measure of MCO capacities, the MCO performance task (MCO-PT), asks therapists to respond to video-based vignettes of clients sharing culturally relevant information in therapy. The MCO-PT yields scores for the three aspects of MCO: cultural humility (i.e., adoption of a nonsuperior, other-oriented stance toward clients), cultural opportunities (i.e., seizing or making moments in session to ask about clients' cultural identities), and cultural comfort (i.e., therapists' ease in cultural conversations). Although promising, the MCO-PT relies on labor-intensive human coding. The present study evaluated whether scoring of MCO-PT transcripts can be automated using modern machine learning and natural language processing methods. We included a sample of 100 participants (n = 613 MCO-PT responses). Machine learning models achieved near-human reliability for the average score across all domains (Spearman's ρ = .75, p < .0001) and for cultural opportunity (ρ = .81, p < .0001). Performance was less robust for cultural humility (ρ = .46, p < .001) and poorest for cultural comfort (ρ = .41, p < .001). These results suggest that we may be on the cusp of machine learning-based training paradigms that could give therapists feedback and deliberate practice on key behaviors, including aspects of MCO.
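The Spearman's ρ values above are Pearson correlations computed on rank-transformed scores. A self-contained sketch that assigns average ranks to ties (one common convention, assumed here; libraries such as `scipy.stats.spearmanr` implement the same idea):

```python
def _ranks(xs):
    """Rank values from 1..n, giving tied values their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1                      # extend over a run of tied values
        avg = (i + j) / 2 + 1           # average of ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman correlation; assumes equal-length, non-constant inputs."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```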

4.
Addict Sci Clin Pract ; 19(1): 8, 2024 01 20.
Article in English | MEDLINE | ID: mdl-38245783

ABSTRACT

BACKGROUND: The opioid epidemic has expanded substance use treatment services and strained the clinical workforce serving people with opioid use disorder. Focusing on evidence-based counseling practices like motivational interviewing may interest counselors and their supervisors, but time-intensive adherence tasks like session recording and feedback remain aspirational in busy community-based opioid treatment programs (OTPs). The need to improve and systematize clinical training and supervision might be addressed by the growing field of machine learning and natural language-based technology, which can promote counseling skill via self- and supervisor-monitoring of counseling session recordings. METHODS: Counselors in an OTP were given the opportunity to use an artificial intelligence-based, HIPAA-compliant recording and supervision platform (Lyssn.io) to record counseling sessions. We then conducted four focus groups, two with counselors and two with supervisors, to understand the integration of the technology with practice and supervision. Questions centered on the acceptability of the clinical supervision software and its potential in an OTP setting; we conducted a thematic coding of the responses. RESULTS: Counselors and clinical supervisors experienced the clinical supervision software as beneficial to counselor training, professional development, and clinical supervision. Focus group participants reported that the software could help counselors learn and improve motivational interviewing skills. Counselors said that using the technology highlights the value of counseling encounters (versus paperwork). Clinical supervisors noted that the software could help meet national clinical supervision guidelines and local requirements. Counselors and clinical supervisors alike discussed potential challenges of requiring session recording.
CONCLUSIONS: Implementing evidence-based counseling practices can help the population served in OTPs; a further benefit of focusing on clinical skills is to affirm the value of counselors' roles. Machine learning technology can have a positive impact on clinical practice among counselors and clinical supervisors in opioid treatment programs, settings whose clinical workforce continues to be strained by the opioid epidemic. Using technology to focus on clinical skill building may enhance counselors' and clinical supervisors' overall experience of their work.


Subject(s)
Analgesics, Opioid , Artificial Intelligence , Humans , Analgesics, Opioid/therapeutic use , Preceptorship , Counseling/methods , Technology
5.
JAMA Netw Open ; 7(1): e2352590, 2024 Jan 02.
Article in English | MEDLINE | ID: mdl-38252437

ABSTRACT

Importance: Use of asynchronous text-based counseling is rapidly growing as an easy-to-access approach to behavioral health care. As with in-person treatment, it is challenging to assess reliably because measures of process and content do not scale. Objective: To use machine learning to evaluate clinical content and client-reported outcomes in a large sample of text-based counseling episodes of care. Design, Setting, and Participants: In this quality improvement study, participants received text-based counseling between 2014 and 2019; data analysis was conducted from September 22, 2022, to November 28, 2023. The deidentified content of messages was retained as a part of ongoing quality assurance. Treatment was asynchronous text-based counseling via an online and mobile therapy app (Talkspace). Therapists were licensed to provide mental health treatment and were either independent contractors or employees of the product company. Participants were self-referred via online sign-up, received services via their insurance or self-pay, and were assigned a diagnosis by their health care professional. Exposure: All clients received counseling services from a licensed mental health clinician. Main Outcomes and Measures: The primary outcomes were client engagement in counseling (number of weeks), treatment satisfaction, and changes in client symptoms, measured via the 8-item version of the Patient Health Questionnaire (PHQ-8). A previously trained, transformer-based, deep learning model automatically categorized messages into types of therapist interventions and summaries of clinical content. Results: The total sample included 166 644 clients treated by 4973 therapists (20 600 274 messages). Participating clients were predominantly female (75.23%), aged 26 to 35 years (55.4%), single (37.88%), holders of a bachelor's degree (59.13%), and White (61.8%). There was substantial variability in intervention use and treatment content across therapists.
A series of mixed-effects regressions indicated that collectively, interventions and clinical content were associated with key outcomes: engagement (multiple R = 0.43), satisfaction (multiple R = 0.46), and change in PHQ-8 score (multiple R = 0.13). Conclusions and Relevance: This quality improvement study found associations between therapist interventions, clinical content, and client-reported outcomes. Consistent with traditional forms of counseling, higher amounts of supportive counseling were associated with improved outcomes. These findings suggest that machine learning-based evaluations of content may increase the scale and specificity of psychotherapy research.


Subject(s)
Counseling , Mental Health , Female , Humans , Male , Psychotherapy , Data Analysis , Machine Learning
6.
Couns Psychother Res ; 23(2): 378-388, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37457038

ABSTRACT

Psychotherapy can be an emotionally laden conversation in which both verbal and non-verbal interventions may affect the therapeutic process. Prior research has reported mixed results on how clients react emotionally during a silence after the therapist finishes talking, potentially because it studied a limited range of silences with primarily qualitative and self-report methodologies. A quantitative exploration may illuminate new findings. Utilizing research and automatic data processing from the field of linguistics, we analysed the full range of silence lengths (0.2 to 24.01 seconds) and two measures of emotional expression, vocally encoded arousal and the emotional valence of the words spoken, in 84 audio-recorded Motivational Interviewing sessions. We hypothesized that both the level and the variance of client emotional expression would change as a function of silence length; however, given the mixed results in the literature, the direction of emotional change was unclear. We conducted a multilevel linear regression to examine how the level of client emotional expression changed with silence length, and an ANOVA to examine the variability of client emotional expression across silence lengths. Both analyses indicated that as silence length increased, emotional expression largely remained the same. Broadly, we demonstrated a weak connection between silence length and emotional expression, finding no persuasive evidence that silence leads to client emotional processing and expression.

7.
Front Psychiatry ; 14: 1110527, 2023.
Article in English | MEDLINE | ID: mdl-37032952

ABSTRACT

Introduction: With the increasing utilization of text-based suicide crisis counseling, new means of identifying at-risk clients must be explored. Natural language processing (NLP) holds promise for evaluating the content of crisis counseling; here we use a data-driven approach to evaluate NLP methods for identifying client suicide risk. Methods: De-identified crisis counseling data from a regional text-based crisis encounter and mobile tipline application were used to evaluate two modeling approaches for classifying client suicide risk levels. A manual evaluation of model errors and system behavior was conducted. Results: The neural model achieved a lower false-negative rate than a term frequency-inverse document frequency (tf-idf) model. While 75% of the neural model's false-negative encounters included some discussion of suicidality, 62.5% saw a resolution of the client's initial concerns. Similarly, the neural model detected signals of suicidality in 60.6% of false-positive encounters. Discussion: The neural model demonstrated greater sensitivity in detecting client suicide risk. A manual assessment of errors and model performance reflected these same findings, detecting higher levels of risk in many of the false-positive encounters and lower levels of risk in many of the false negatives. NLP-based models can detect the suicide risk of text-based crisis encounters from the encounter's content.
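The tf-idf baseline compared above represents each encounter by its term frequencies reweighted by inverse document frequency, so terms common to all encounters carry little weight. A minimal sketch of that representation (the smoothed-idf formula is one common variant, assumed here rather than taken from the paper):

```python
import math
from collections import Counter

def tfidf(docs):
    """Map each tokenized document to a {term: tf-idf weight} dict."""
    n = len(docs)
    df = Counter()                       # document frequency per term
    for doc in docs:
        df.update(set(doc))
    # smoothed idf so ubiquitous terms stay finite but near-flat
    idf = {t: math.log((1 + n) / (1 + c)) + 1 for t, c in df.items()}
    out = []
    for doc in docs:
        tf = Counter(doc)
        out.append({t: (c / len(doc)) * idf[t] for t, c in tf.items()})
    return out
```

A rare, clinically salient term ends up weighted more heavily than filler words that appear in every encounter.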

8.
Psychother Res ; 33(7): 898-917, 2023 09.
Article in English | MEDLINE | ID: mdl-37001119

ABSTRACT

Objective: This paper highlights the facilitation of dyadic synchrony as a core psychotherapist skill that occurs at the non-verbal level and underlies many other therapeutic methods. We define dyadic synchrony, differentiate it from similar constructs, and provide an excerpt illustrating dyadic synchrony in a psychotherapy session. Method: We then present a systematic review of 17 studies that have examined the associations between dyadic synchrony and psychotherapy outcomes. We also conduct a meta-analysis of 8 studies that examined whether there is more synchrony between clients and therapists than would be expected by chance. Results: Weighted box score analysis revealed that the overall association of synchrony and proximal as well as distal outcomes was neutral to mildly positive. The results of the meta-analysis indicated that real client-therapist dyad pairs exhibited synchronized behavioral patterns to a much greater extent than a sample of randomly paired people who did not actually speak. Conclusion: Our discussion revolves around how synchrony can be facilitated in a beneficial way, as well as situations in which it may not be beneficial. We conclude with training implications and therapeutic practices.


Subject(s)
Professional-Patient Relations , Psychotherapy , Humans , Psychotherapy/methods , Treatment Outcome
9.
Psychotherapy (Chic) ; 60(2): 149-158, 2023 06.
Article in English | MEDLINE | ID: mdl-36301302

ABSTRACT

Supportive counseling skills like empathy and active listening are critical ingredients of all psychotherapies, but most research relies on client or therapist reports of the treatment process. This study used machine-learning models trained to evaluate counseling skills to assess supportive skill use in 3,917 session recordings. We analyzed overall skill use and variation in practice patterns using a series of mixed-effects models. On average, therapists scored moderately high on observer-rated empathy (i.e., 3.8 out of 5); 3.3% of therapists' utterances in a session were open questions, and 12.9% were reflections. However, there were substantial differences in skill use across therapists, as well as across clients within therapists' caseloads. These findings highlight the substantial variability in the process of counseling that clients may experience when they access psychotherapy. We discuss the findings in the context of the need for therapists to be responsive and flexible with their clients, as well as the potential costs of lacking a more uniform experience of care.


Subject(s)
Professional-Patient Relations , Psychotherapy , Humans , Empathy , Counseling
10.
J Couns Psychol ; 70(1): 81-89, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36174188

ABSTRACT

Meta-analyses have established the alliance as the most robust predictor of outcome in psychotherapy. A growing number of studies have evaluated potential threats to the conclusion that alliance is a causal factor in psychotherapy. One potential threat that has not been systematically examined is the possibility that the alliance-outcome association is driven by low alliance outliers. We examined the influence of removing low alliance outliers on the alliance-outcome association using data drawn from two large-scale, naturalistic psychotherapy data sets (Ns = 1,052; 11,029). These data sets differed in setting (university counseling center, community mental health center), country (United States and Canada), alliance measure (four-item Working Alliance Inventory Short Form Revised, 10-item Session Rating Scale), and outcome measure (Counseling Center Assessment of Psychological Symptoms-34, Outcome Questionnaire-45). We examined the impact of treating outliers in five different ways: retaining them, removing values three or two standard deviations from the mean, and winsorizing values three or two standard deviations from the mean. We also examined the effect of outliers after disaggregating alliance ratings into within-therapist and between-therapist components. The alliance-outcome correlation and the proportion of variance in posttest outcomes explained by alliance when controlling for pretest outcomes were similar regardless of how low alliance outliers were treated (change in r ≤ .04, change in R² ≤ 1%). Results from the disaggregation were similar. Thus, it appears that the alliance-outcome association is not an artifact of the influence of low alliance outliers.
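Two of the outlier treatments compared above, removal and winsorizing at two or three standard deviations from the mean, can be sketched as follows (function name and defaults are illustrative, not the study's code):

```python
def treat_outliers(xs, k=2.0, method="winsorize"):
    """Remove or winsorize values more than k sample SDs from the mean."""
    n = len(xs)
    mean = sum(xs) / n
    sd = (sum((x - mean) ** 2 for x in xs) / (n - 1)) ** 0.5
    lo, hi = mean - k * sd, mean + k * sd
    if method == "remove":
        return [x for x in xs if lo <= x <= hi]
    # winsorize: clamp extreme values to the k-SD bounds instead of dropping
    return [min(max(x, lo), hi) for x in xs]
```

Retaining outliers is simply the identity treatment; the study's finding is that all five choices barely move the alliance-outcome correlation.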


Subject(s)
Therapeutic Alliance , Humans , Professional-Patient Relations , Psychotherapy/methods , Outcome Assessment, Health Care , Surveys and Questionnaires , Treatment Outcome
11.
BMC Health Serv Res ; 22(1): 1177, 2022 Sep 20.
Article in English | MEDLINE | ID: mdl-36127689

ABSTRACT

BACKGROUND: Each year, millions of Americans receive evidence-based psychotherapies (EBPs) like cognitive behavioral therapy (CBT) for the treatment of mental and behavioral health problems. Yet there is at present no scalable method for evaluating the quality of psychotherapy services, leaving EBP quality and effectiveness largely unmeasured and unknown. Project AFFECT will develop and evaluate an AI-based software system to automatically estimate CBT fidelity from a recording of a CBT session. Project AFFECT is an NIMH-funded research partnership between the Penn Collaborative for CBT and Implementation Science and Lyssn.io, Inc. ("Lyssn"), a start-up developing AI-based technologies that are objective, scalable, and cost-efficient, to support training, supervision, and quality assurance of EBPs. Lyssn provides HIPAA-compliant, cloud-based software for secure recording, sharing, and reviewing of therapy sessions, which includes AI-generated metrics for CBT. The proposed tool will build from and be integrated into this core platform. METHODS: Phase I will work from an existing software prototype to develop a LyssnCBT user interface geared to the needs of community mental health (CMH) agencies. Core activities include a user-centered design focus group and interviews with community mental health therapists, supervisors, and administrators to inform the design and development of LyssnCBT. LyssnCBT will be evaluated for usability and implementation readiness in a final stage of Phase I. Phase II will conduct a stepped-wedge, hybrid implementation-effectiveness randomized trial (N = 1,875 clients) to evaluate the effectiveness of LyssnCBT in improving therapist CBT skills and client outcomes and reducing client drop-out. Analyses will also examine the hypothesized mechanism of action underlying LyssnCBT.
DISCUSSION: Successful execution will provide automated, scalable CBT fidelity feedback for the first time, supporting high-quality training, supervision, and quality assurance, and providing a core technology foundation that could support the quality delivery of a range of EBPs in the future. TRIAL REGISTRATION: ClinicalTrials.gov; NCT05340738; approved 4/21/2022.


Subject(s)
Artificial Intelligence , Cognitive Behavioral Therapy , Cognitive Behavioral Therapy/methods , Feedback , Humans , Mental Health , Psychotherapy , United States
12.
Adm Policy Ment Health ; 49(3): 343-356, 2022 05.
Article in English | MEDLINE | ID: mdl-34537885

ABSTRACT

To capitalize on investments in evidence-based practices, technology is needed to scale up fidelity assessment and supervision. Stakeholder feedback may facilitate the adoption of such tools. This evaluation gathered stakeholder feedback and preferences to explore the feasibility of implementing an automated fidelity-scoring supervision tool in community mental health settings. A partially mixed, sequential research method design was used, including focus group discussions with community mental health therapists (n = 18) and clinical leadership (n = 12) to explore typical supervision practices, followed by discussion of an automated fidelity feedback tool embedded in a cloud-based supervision platform. Interpretation of the qualitative findings was enhanced by quantitative measures of participants' use of technology and perceptions of the tool's acceptability, appropriateness, and feasibility. Initial perceptions of acceptability, appropriateness, and feasibility were positive and increased after introduction of the automated tool. Standard supervision was described as collaboratively guided and focused on clinical content, self-care, and documentation. Participants highlighted the tool's utility for supervision, training, and professional growth, but questioned its ability to evaluate rapport, cultural responsiveness, and non-verbal communication. Concerns were raised about privacy and about the impact of low scores on therapist confidence. Desired features included intervention labeling and transparency about how scores relate to session content. Opportunities for asynchronous, remote, and targeted supervision were particularly valued. Stakeholder feedback suggests that automated fidelity measurement could augment supervision practices. Future research should examine the relations among the use of such supervision tools, clinician skill, and client outcomes.


Subject(s)
Artificial Intelligence , Cognitive Behavioral Therapy , Attitude , Cognitive Behavioral Therapy/methods , Focus Groups , Humans , Research Design
13.
Behav Res Methods ; 54(2): 690-711, 2022 04.
Article in English | MEDLINE | ID: mdl-34346043

ABSTRACT

With the growing prevalence of psychological interventions, it is vital to have measures which rate the effectiveness of psychological care to assist in training, supervision, and quality assurance of services. Traditionally, quality assessment is addressed by human raters who evaluate recorded sessions along specific dimensions, often codified through constructs relevant to the approach and domain. This is, however, a cost-prohibitive and time-consuming method that leads to poor feasibility and limited use in real-world settings. To facilitate this process, we have developed an automated competency rating tool able to process the raw recorded audio of a session, analyzing who spoke when, what they said, and how the health professional used language to provide therapy. Focusing on a use case of a specific type of psychotherapy called "motivational interviewing", our system gives comprehensive feedback to the therapist, including information about the dynamics of the session (e.g., therapist's vs. client's talking time), low-level psychological language descriptors (e.g., type of questions asked), as well as other high-level behavioral constructs (e.g., the extent to which the therapist understands the clients' perspective). We describe our platform and its performance using a dataset of more than 5000 recordings drawn from its deployment in a real-world clinical setting used to assist training of new therapists. Widespread use of automated psychotherapy rating tools may augment experts' capabilities by providing an avenue for more effective training and skill improvement, eventually leading to more positive clinical outcomes.


Subject(s)
Professional-Patient Relations , Speech , Humans , Language , Psychotherapy/methods
14.
IEEE Trans Affect Comput ; 13(1): 508-518, 2022.
Article in English | MEDLINE | ID: mdl-36704750

ABSTRACT

We propose a methodology for estimating human behaviors in psychotherapy sessions using multi-label and multi-task learning paradigms. We discuss the problem of behavioral coding, in which data from human interactions are annotated with labels describing relevant human behaviors of interest. We describe two related, yet distinct, corpora consisting of therapist-client interactions in psychotherapy sessions. We experimentally compare the proposed learning approaches for estimating behaviors of interest in these datasets. Specifically, we compare single- and multiple-label learning approaches and single- and multiple-task learning approaches, and we evaluate the performance of these approaches when incorporating turn context. We demonstrate the prediction performance gains that can be achieved with the proposed paradigms and discuss the insights these models provide into these complex interactions.

15.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 1836-1839, 2021 11.
Article in English | MEDLINE | ID: mdl-34891644

ABSTRACT

Cognitive Behavioral Therapy (CBT) is a goal-oriented psychotherapy for mental health concerns delivered in a conversational setting. The quality of a CBT session is typically assessed by trained human raters who manually assign pre-defined session-level behavioral codes. In this paper, we develop an end-to-end pipeline that converts speech audio to diarized and transcribed text and extracts linguistic features to code CBT sessions automatically. We investigate both word-level and utterance-level features and propose feature fusion strategies to combine them. The utterance-level features include dialog act tags as well as behavioral codes drawn from another well-known talk therapy, Motivational Interviewing (MI). We propose a novel method to augment the word-based features with the utterance-level tags for subsequent CBT code estimation. Experiments show that our new fusion strategy outperforms all of the studied features, both when used individually and when fused by direct concatenation. We also find that incorporating a sentence segmentation module can further improve the overall system, given the preponderance of multi-utterance conversational turns in CBT sessions.


Subject(s)
Cognitive Behavioral Therapy , Motivational Interviewing , Humans , Psychotherapy
16.
Behav Res Methods ; 53(5): 2069-2082, 2021 10.
Article in English | MEDLINE | ID: mdl-33754322

ABSTRACT

Emotional distress is a common reason for seeking psychotherapy, and sharing emotional material is central to the process of psychotherapy. However, systematic research examining patterns of emotional exchange during psychotherapy sessions is often limited in scale. Traditional methods for identifying emotion in psychotherapy rely on labor-intensive observer ratings, on client or therapist ratings obtained before or after sessions, or on manually extracting ratings of emotion from session transcripts using dictionaries of positive and negative words that do not take the context of a sentence into account. Recent advances in machine learning, in particular natural language processing, have made it possible for mental health researchers to identify sentiment, or emotion, in therapist-client interactions at a scale that would be unattainable with more traditional methods. To extend the findings of Tanana et al. (2016), we compared their unigram sentiment model with LIWC, a dictionary-based model commonly used in psychotherapy research, and with a newer NLP model, BERT. We used the human ratings from a database of 97,497 psychotherapy utterances to train the BERT model. The unigram sentiment model (kappa = 0.31) outperformed LIWC (kappa = 0.25), and BERT outperformed both (kappa = 0.48).
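The kappa values above are Cohen's kappa: observed agreement with human raters, corrected for the agreement expected by chance given each rater's label distribution. A minimal sketch for two label sequences (function name is illustrative; `sklearn.metrics.cohen_kappa_score` provides the same statistic):

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Chance-corrected agreement between two equal-length label sequences."""
    n = len(r1)
    observed = sum(a == b for a, b in zip(r1, r2)) / n
    c1, c2 = Counter(r1), Counter(r2)
    # chance agreement: probability both raters pick the same label at random
    expected = sum(c1[k] * c2[k] for k in c1) / (n * n)
    return (observed - expected) / (1 - expected)
```

Kappa of 0 means no better than chance, which is why it is a stricter yardstick than raw accuracy for comparing sentiment models.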


Subject(s)
Natural Language Processing , Psychotherapy , Emotions , Humans , Language , Machine Learning
17.
J Couns Psychol ; 68(4): 418-424, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33764115

ABSTRACT

OBJECTIVE: Mental health disparities between racial/ethnic minority (REM) and White individuals are well documented. These disparities extend into psychotherapy and have been observed among clients receiving care at university/college counseling centers. However, less is known about whether a campus's racial/ethnic (RE) composition affects psychotherapy outcomes for REM and White clients. METHOD: This study examined psychotherapy outcomes for 16,011 clients who engaged in services at 33 university/college counseling centers. Each of these clients completed the Behavioral Health Measure as a part of routine practice. Campus RE composition was coded from publicly available data. RESULTS: White clients had better therapy outcomes than REM clients at campuses with more White students. For universities 1 SD below the mean percentage of White students, the average difference in therapy outcomes between White and REM clients was Cohen's d = .21 (with White students experiencing more improvement); for universities 1 SD above the mean, the between-group outcome disparity was greater (Cohen's d = .38). CONCLUSION: Therapists and higher education professionals should consider environmental impacts on counseling services. Implications for higher education, counseling centers, and mental health disparities are provided.
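Cohen's d, the effect size used above, is the difference between two group means divided by their pooled standard deviation, so d = .21 vs. d = .38 compares outcome gaps on a common scale. A minimal sketch (function name is illustrative):

```python
def cohens_d(a, b):
    """Standardized mean difference between two samples, pooled-SD version."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    pooled_sd = (((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)) ** 0.5
    return (ma - mb) / pooled_sd
```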


Subject(s)
Psychotherapy , Universities , Ethnicity , Humans , Minority Groups , Racial Groups
18.
Patient Educ Couns ; 104(8): 2098-2105, 2021 08.
Article in English | MEDLINE | ID: mdl-33468364

ABSTRACT

OBJECTIVE: To train machine learning models that automatically predict the emotional valence of patient and physician utterances in primary care visits. METHODS: Using transcripts from 353 primary care office visits with 350 patients and 84 physicians (Cook, 2002 [1]; Tai-Seale et al., 2015 [2]), we developed two machine learning models (a recurrent neural network with a hierarchical structure and a logistic regression classifier) to recognize the emotional valence (positive, negative, neutral) (Posner et al., 2005 [3]) of each utterance. We examined the agreement of human-generated ratings of emotional valence with the models' ratings. RESULTS: Agreement of the recurrent neural network model's emotion ratings with human ratings was comparable to human-human inter-rater agreement: the weighted average of the correlation coefficients was 0.60 for the model with human raters, matching the 0.60 agreement among human raters. CONCLUSIONS: The recurrent neural network model predicted the emotional valence of patients and physicians in primary care visits with reliability similar to that of human raters. PRACTICE IMPLICATIONS: As the first machine learning-based evaluation of emotion recognition in primary care visit conversations, our work provides valuable baselines for future applications that might help monitor patient emotional signals, support physicians in empathic communication, or examine the role of emotion in patient-centered care.


Subject(s)
Emotions , Physicians , Communication , Humans , Office Visits , Primary Health Care , Reproducibility of Results
19.
J Couns Psychol ; 68(2): 149-155, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33252919

ABSTRACT

Efforts to help therapists improve their multicultural competence (MCC) rely on measures that can distinguish between different levels of competence. MCC is often assessed by asking clients to rate their experiences with their therapists. However, differences in client ratings of therapist MCC do not necessarily provide information about the relative performance of therapists and can be influenced by other factors, including the client's own characteristics. In this study, we used a repeated measures design of 8,497 observations from 1,458 clients across 35 therapists to clarify the proportion of variability in MCC ratings attributed to the therapist versus the client and to better understand the extent to which an MCC measure detects therapist differences. Overall, we found that a small amount of variability in MCC ratings was attributed to the therapist (2%) and a substantial amount to the client (70%). These findings suggest that our measure of MCC primarily detected differences at the client level rather than the therapist level, indicating that therapist MCC scores were largely dependent on the client. Clinical implications and recommendations for future MCC research and measurement are discussed. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
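The therapist-versus-client partition above is a variance decomposition across nested levels. A toy simulation can illustrate the idea; all variance components below are invented to roughly mirror the reported 2%/70% split, and the crude group-means estimator stands in for the multilevel model an actual analysis would use:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated MCC ratings: small therapist effect, large client effect
n_ther, cli_per_ther, obs_per_cli = 35, 40, 6
ther = rng.normal(0, np.sqrt(0.02), n_ther)                       # ~2% of variance
cli = rng.normal(0, np.sqrt(0.70), (n_ther, cli_per_ther))        # ~70% of variance
eps = rng.normal(0, np.sqrt(0.28), (n_ther, cli_per_ther, obs_per_cli))
ratings = ther[:, None, None] + cli[:, :, None] + eps

# Crude decomposition: variance of therapist means vs. total variance
therapist_share = ratings.mean(axis=(1, 2)).var() / ratings.var()
print(therapist_share)
```

With most variance placed at the client level, the estimated therapist share stays small, matching the paper's conclusion that client ratings of MCC mostly reflect who the client is rather than which therapist they saw.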


Subject(s)
Cultural Diversity , Professional Competence , Professional-Patient Relations , Psychotherapists/psychology , Psychotherapy/standards , Adult , Female , Humans , Male
20.
Psychother Res ; 31(3): 281-288, 2021 03.
Article in English | MEDLINE | ID: mdl-32172682

ABSTRACT

Objective: Therapist interpersonal skills are foundational to psychotherapy. However, assessment is labor intensive and infrequent. This study evaluated whether machine learning (ML) tools can automatically assess therapist interpersonal skills. Method: Data were drawn from a previous study in which 164 undergraduate students (i.e., not clinical trainees) completed the Facilitative Interpersonal Skills (FIS) task. This task involves responding to video vignettes depicting interpersonally challenging moments in psychotherapy. Trained raters scored the responses. We used an elastic net model on top of a term frequency-inverse document frequency representation to predict FIS scores. Results: Models predicted FIS total and item-level scores above chance (rhos = .27-.53, ps < .001), achieving 31-60% of human reliability. Models explained 13-24% of the variance in FIS total and item-level scores on a held-out set of data (R2), with the exception of the two items most reliant on vocal cues (verbal fluency, emotional expression), for which models explained ≤1% of variance. Conclusion: ML may be a promising approach for automating assessment of constructs like interpersonal skill previously coded by humans. ML may perform best when the standardized stimuli limit the "space" of potential responses (vs. naturalistic psychotherapy) and when models have access to the same data available to raters (i.e., transcripts).
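The modeling approach named above, an elastic net regression over TF-IDF features, can be sketched as follows. This assumes scikit-learn, and the responses and scores are invented for illustration (real FIS ratings are made by trained coders on transcribed task responses):

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import ElasticNet
from sklearn.pipeline import make_pipeline

# Hypothetical transcribed responses with rater-assigned skill scores
responses = [
    "I can hear how frustrating that has been for you",
    "You should just stop worrying about it",
    "It sounds like you are feeling stuck and alone",
    "Let's move on to the next topic",
    "That must have been really painful to go through",
    "I don't know what you want me to say",
]
scores = np.array([4.5, 1.5, 4.0, 2.0, 4.5, 1.0])

# TF-IDF representation feeding an elastic net (L1 + L2) regression
model = make_pipeline(TfidfVectorizer(), ElasticNet(alpha=0.01))
model.fit(responses, scores)
preds = model.predict(responses)
print(preds)
```

Because the model sees only lexical features, it cannot recover vocal-cue information, which is consistent with the near-zero variance explained for the verbal fluency and emotional expression items.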


Subject(s)
Psychotherapy , Social Skills , Clinical Competence , Computers , Humans , Machine Learning , Reproducibility of Results