Results 1 - 8 of 8
1.
Implement Res Pract ; 4: 26334895231187906, 2023.
Article in English | MEDLINE | ID: mdl-37790171

ABSTRACT

Background: Evidence-based parenting programs effectively prevent the onset and escalation of child and adolescent behavioral health problems. When programs have been taken to scale, declines in the quality of implementation diminish intervention effects. Gold-standard methods of implementation monitoring are cost-prohibitive and impractical in resource-scarce delivery systems. Technological developments using computational linguistics and machine learning offer an opportunity to assess fidelity in a low-burden, timely, and comprehensive manner.
Methods: In this study, we test two natural language processing (NLP) methods [i.e., Term Frequency-Inverse Document Frequency (TF-IDF) and Bidirectional Encoder Representations from Transformers (BERT)] to assess the delivery of the Family Check-Up 4 Health (FCU4Health) program in a type 2 hybrid effectiveness-implementation trial conducted in primary care settings that serve primarily Latino families. We trained and evaluated models using 116 English and 81 Spanish-language transcripts from the 113 families who initiated FCU4Health services. We evaluated the concurrent validity of the TF-IDF and BERT models against observer ratings of program sessions on the COACH measure of competent adherence. Following the Implementation Cascade model, we assessed predictive validity using multiple indicators of parent engagement, which have been demonstrated to predict improvements in parenting and child outcomes.
Results: Both TF-IDF and BERT ratings were significantly associated with observer ratings and engagement outcomes. Using mean squared error, results demonstrated improvement over baseline for observer ratings from a range of 0.83-1.02 to 0.62-0.76, an average improvement of 24%. Similarly, results demonstrated improvement over baseline for parent engagement indicators from a range of 0.81-27.3 to 0.62-19.50, an approximate average improvement of 18%.
Conclusions: These results demonstrate the potential for NLP methods to assess implementation in evidence-based parenting programs delivered at scale. Future directions are presented. Trial registration: NCT03013309 ClinicalTrials.gov.


Research has shown that evidence-based parenting programs effectively prevent the onset and escalation of child and adolescent behavioral health problems. However, if they are not implemented with fidelity, there is a potential that they will not produce the same effects. Gold-standard methods of implementation monitoring include observation of program sessions, which is expensive and difficult to sustain in delivery settings with limited resources. Using data from a trial of the Family Check-Up 4 Health program in primary care settings that served Latino families, we investigated the potential of natural language processing (NLP), a form of machine learning, to monitor program delivery. NLP-based ratings were significantly associated with independent observer ratings of fidelity and with participant engagement outcomes. These results demonstrate the potential for NLP methods to monitor implementation in evidence-based parenting programs delivered at scale.
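The TF-IDF half of the modeling approach can be illustrated with a minimal pure-Python sketch. The mini-corpus below is hypothetical and stands in for session transcripts; the study's actual feature pipeline, vocabulary handling, and weighting variant are not specified here.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Map each tokenized document to {term: tf-idf weight}.

    TF is the term count normalized by document length; IDF is the
    standard log(N / document frequency).
    """
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # each doc counts a term at most once
    vectors = []
    for doc in docs:
        counts = Counter(doc)
        vectors.append({
            term: (count / len(doc)) * math.log(n / df[term])
            for term, count in counts.items()
        })
    return vectors

# Hypothetical mini-corpus standing in for session transcripts.
docs = [
    "let s talk about routines and limit setting".split(),
    "tell me about school routines this week".split(),
    "how did limit setting go this week".split(),
]
vecs = tfidf_vectors(docs)
```

In a pipeline like the one described, vectors of this kind would serve as inputs to a model predicting observer fidelity ratings; terms that occur in every transcript receive zero weight, while distinctive terms are up-weighted.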

2.
PLoS One ; 17(12): e0278604, 2022.
Article in English | MEDLINE | ID: mdl-36542600

ABSTRACT

Contemporary media is full of images that reflect traditional gender notions and stereotypes, some of which may perpetuate harmful gender representations. In an effort to highlight the occurrence of these adverse portrayals, researchers have proposed machine-learning methods to identify stereotypes in the language patterns found in character dialogues. However, not all harmful stereotypes are communicated through dialogue alone. As a complementary approach, we present a large-scale machine-learning framework that automatically identifies characters' actions from scene descriptions found in movie scripts. For this work, we collected 1.2+ million scene descriptions from 912 movie scripts, covering more than 50 thousand actions and 20 thousand movie characters. Our framework allows us to study systematic gender differences in movie portrayals at scale. We show this through a series of statistical analyses that highlight differences in gender portrayals. Our findings provide further evidence for claims from prior media studies, including that: (i) male characters display higher agency than female characters; (ii) female characters are more frequently the subject of gaze; and (iii) male characters are less likely to display affection. We hope that these data resources and findings help raise awareness of portrayals of character actions that reflect harmful gender stereotypes, and demonstrate novel possibilities for computational approaches in media analysis.


Subject(s)
Dancing , Motion Pictures , Humans , Male , Female , Linguistics , Language , Sex Factors
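The kind of statistical contrast described above can be sketched in a few lines. The (gender, action) pairs and the "agentive" verb set below are illustrative assumptions for this sketch, not the paper's lexicon or data.

```python
# Hypothetical (gender, action-verb) pairs, as if extracted from scene
# descriptions by the action-identification framework.
pairs = [
    ("M", "decides"), ("M", "fights"), ("M", "gazes"),
    ("F", "gazes"), ("F", "smiles"), ("F", "decides"),
    ("M", "decides"), ("F", "gazes"),
]

# Illustrative "agency" verb set; a real analysis would use a
# validated lexicon.
AGENTIVE = {"decides", "fights", "leads"}

def agency_rate(pairs, gender):
    """Fraction of a gender's actions that fall in the agentive set."""
    actions = [a for g, a in pairs if g == gender]
    return sum(a in AGENTIVE for a in actions) / len(actions)

male_rate = agency_rate(pairs, "M")
female_rate = agency_rate(pairs, "F")
```

Comparing such per-gender rates across hundreds of scripts (with an appropriate significance test) is the shape of the analyses the abstract describes.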
3.
Behav Res Methods ; 54(2): 690-711, 2022 04.
Article in English | MEDLINE | ID: mdl-34346043

ABSTRACT

With the growing prevalence of psychological interventions, it is vital to have measures which rate the effectiveness of psychological care to assist in training, supervision, and quality assurance of services. Traditionally, quality assessment is addressed by human raters who evaluate recorded sessions along specific dimensions, often codified through constructs relevant to the approach and domain. This is, however, a cost-prohibitive and time-consuming method that leads to poor feasibility and limited use in real-world settings. To facilitate this process, we have developed an automated competency rating tool able to process the raw recorded audio of a session, analyzing who spoke when, what they said, and how the health professional used language to provide therapy. Focusing on a use case of a specific type of psychotherapy called "motivational interviewing", our system gives comprehensive feedback to the therapist, including information about the dynamics of the session (e.g., therapist's vs. client's talking time), low-level psychological language descriptors (e.g., type of questions asked), as well as other high-level behavioral constructs (e.g., the extent to which the therapist understands the clients' perspective). We describe our platform and its performance using a dataset of more than 5000 recordings drawn from its deployment in a real-world clinical setting used to assist training of new therapists. Widespread use of automated psychotherapy rating tools may augment experts' capabilities by providing an avenue for more effective training and skill improvement, eventually leading to more positive clinical outcomes.


Subject(s)
Professional-Patient Relations , Speech , Humans , Language , Psychotherapy/methods
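Session-dynamics features such as talking time and question counts can be computed directly from diarized output. The turn tuples below are hypothetical; a real pipeline would obtain them from automatic speech recognition plus speaker diarization.

```python
# Hypothetical diarized turns: (speaker, start_sec, end_sec, text).
turns = [
    ("therapist", 0.0, 12.5, "What brings you in today?"),
    ("client", 12.5, 55.0, "I have been trying to cut back on drinking."),
    ("therapist", 55.0, 63.0, "Tell me more about that."),
    ("client", 63.0, 110.0, "It started after I changed jobs last year."),
]

def talk_time(turns, speaker):
    """Total seconds attributed to one speaker."""
    return sum(end - start for who, start, end, _ in turns if who == speaker)

def question_count(turns, speaker):
    """Crude question tally based on question marks in the transcript."""
    return sum(t.count("?") for who, _, _, t in turns if who == speaker)

therapist_share = talk_time(turns, "therapist") / (
    talk_time(turns, "therapist") + talk_time(turns, "client")
)
```

A low therapist talking-time share and a high ratio of open questions are the kinds of low-level descriptors the feedback tool reports.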
4.
PLoS One ; 16(10): e0258639, 2021.
Article in English | MEDLINE | ID: mdl-34679105

ABSTRACT

During a psychotherapy session, the counselor typically adopts techniques which are codified along specific dimensions (e.g., 'displays warmth and confidence', or 'attempts to set up collaboration') to facilitate the evaluation of the session. These constructs, traditionally scored by trained human raters, reflect the complex nature of psychotherapy and depend heavily on the context of the interaction. Recent advances in deep contextualized language models offer an avenue for accurate in-domain linguistic representations, which can lead to robust recognition and scoring of such psychotherapy-relevant behavioral constructs and support quality assurance and supervision. In this work, we propose a BERT-based model for automatic behavioral scoring of a specific type of psychotherapy, called Cognitive Behavioral Therapy (CBT), where prior work is limited to frequency-based language features and/or short text excerpts which do not capture the unique elements involved in a spontaneous, long conversational interaction. The model focuses on the classification of therapy sessions with respect to the overall score achieved on the widely used Cognitive Therapy Rating Scale (CTRS), but is trained in a multi-task manner in order to achieve higher interpretability. BERT-based representations are further augmented with available therapy metadata, providing relevant non-linguistic context and leading to consistent performance improvements. We train and evaluate our models on a set of 1,118 real-world therapy sessions, recorded and automatically transcribed. Our best model achieves an F1 score of 72.61% on the binary classification task of low vs. high total CTRS.


Subject(s)
Cognitive Behavioral Therapy/methods , Mental Disorders/therapy , Clinical Competence , Data Interpretation, Statistical , Female , Humans , Male , Models, Psychological , Natural Language Processing , Psychiatric Status Rating Scales
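The reported evaluation metric, F1 on the binary low vs. high total CTRS task, can be reproduced with a short function. The labels and predictions below are hypothetical stand-ins.

```python
def f1_score(y_true, y_pred, positive=1):
    """F1 = harmonic mean of precision and recall for the positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical low (0) vs. high (1) CTRS labels and model predictions.
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 1, 1]
```

Here tp = 3, fp = 1, fn = 1, so precision = recall = 0.75 and F1 = 0.75.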
5.
Sci Rep ; 11(1): 11730, 2021 06 03.
Article in English | MEDLINE | ID: mdl-34083579

ABSTRACT

Machine learning (ML) models have demonstrated the power of utilizing clinical instruments to provide tools that give domain experts additional insight into complex clinical diagnoses. In this context, such tools should have two additional properties: interpretability, the ability to audit and understand the decision function; and robustness, the ability to assign the correct label in spite of missing or noisy inputs. This work formulates diagnostic classification as a decision-making process and utilizes Q-learning to build classifiers that meet these criteria. As an exemplary task, we simulate the process of differentiating Autism Spectrum Disorder from Attention-Deficit/Hyperactivity Disorder in verbal, school-aged children. This application highlights how reinforcement learning frameworks can be used to train more robust classifiers by jointly learning to maximize diagnostic accuracy while minimizing the amount of information required.


Subject(s)
Clinical Decision-Making/methods , Decision Support Systems, Clinical , Machine Learning , Software , Algorithms , Attention Deficit Disorder with Hyperactivity/diagnosis , Autism Spectrum Disorder/diagnosis , Humans , Models, Theoretical
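A toy version of the Q-learning formulation can be sketched as follows: the agent may pay a small cost to query a symptom or commit to a diagnosis, so it learns to trade diagnostic accuracy against the amount of information gathered. The one-symptom environment, rewards, and hyperparameters are illustrative assumptions, far simpler than the paper's actual task.

```python
import random

random.seed(0)

# Toy diagnostic MDP: one binary symptom fully determines the label.
# "ask" (available only at the start) reveals the symptom at a small
# cost; "diag0"/"diag1" commit to a diagnosis and end the episode.
ALPHA, GAMMA, EPSILON, EPISODES = 0.5, 0.9, 0.3, 2000
START = "start"

def actions_for(state):
    return ["ask", "diag0", "diag1"] if state == START else ["diag0", "diag1"]

Q = {}

def q(state, action):
    return Q.get((state, action), 0.0)

def greedy(state):
    return max(actions_for(state), key=lambda a: q(state, a))

for _ in range(EPISODES):
    symptom = random.randint(0, 1)  # ground-truth label equals the symptom
    state = START
    while True:
        acts = actions_for(state)
        action = random.choice(acts) if random.random() < EPSILON else greedy(state)
        if action == "ask":
            reward, nxt, done = -0.1, (symptom,), False
        else:
            reward = 1.0 if int(action[-1]) == symptom else -1.0
            nxt, done = None, True
        # Standard Q-learning update toward reward + discounted max future value.
        target = reward if done else reward + GAMMA * max(q(nxt, a) for a in actions_for(nxt))
        Q[(state, action)] = q(state, action) + ALPHA * (target - q(state, action))
        if done:
            break
        state = nxt
```

After training, the greedy policy diagnoses correctly once the symptom has been observed, illustrating how the reward structure couples accuracy with information cost.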
6.
J Couns Psychol ; 67(4): 438-448, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32614225

ABSTRACT

Artificial intelligence generally, and machine learning specifically, have become deeply woven into modern life and technology. Machine learning is dramatically changing scientific research and industry and may also hold promise for addressing limitations encountered in mental health care and psychotherapy. The current paper introduces machine learning and natural language processing as related methodologies that may prove valuable for automating the assessment of meaningful aspects of treatment. Prediction of therapeutic alliance from session recordings is used as a case in point. Recordings from 1,235 sessions of 386 clients seen by 40 therapists at a university counseling center were processed using automatic speech recognition software. Machine learning algorithms learned to predict client ratings of therapeutic alliance exclusively from session linguistic content. Using a portion of the data to train the model, the algorithms modestly predicted alliance ratings from session content in an independent test set (Spearman's ρ = .15, p < .001). These results highlight the potential to harness natural language processing and machine learning to predict a key psychotherapy process variable that is relatively distal from linguistic content. Six practical suggestions for conducting psychotherapy research using machine learning are presented, along with several directions for future research. Questions of dissemination and implementation may be particularly important to explore as machine learning improves in its ability to automate assessment of psychotherapy process and outcome. (PsycInfo Database Record (c) 2020 APA, all rights reserved).


Subject(s)
Biomedical Research/methods , Machine Learning , Mental Disorders/therapy , Natural Language Processing , Psychotherapy/methods , Therapeutic Alliance , Adolescent , Adult , Biomedical Research/trends , Counseling/methods , Counseling/trends , Female , Humans , Machine Learning/trends , Male , Mental Disorders/psychology , Professional-Patient Relations , Psychotherapeutic Processes , Psychotherapy/trends , Universities/trends , Young Adult
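The alliance result above is reported as a Spearman correlation (ρ = .15). As a reference point, Spearman's rho is simply the Pearson correlation of the rank vectors, which can be computed in pure Python (the example inputs are arbitrary):

```python
def ranks(xs):
    """Average 1-based ranks, with ties receiving the mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1  # extend the tie group
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

Because it operates on ranks, rho captures any monotonic relation between predicted and self-reported alliance, not just a linear one.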
7.
PLoS One ; 15(1): e0225695, 2020.
Article in English | MEDLINE | ID: mdl-31940347

ABSTRACT

Individuals with serious mental illness experience changes in their clinical states over time that are difficult to assess and that result in increased disease burden and care utilization. It is not known whether features derived from speech can serve as a transdiagnostic marker of these clinical states. This study evaluates the feasibility of collecting speech samples from people with serious mental illness and explores their potential utility for tracking changes in clinical state over time. Patients (n = 47) were recruited from a community-based mental health clinic with diagnoses of bipolar disorder, major depressive disorder, schizophrenia, or schizoaffective disorder. Patients used an interactive voice response system for at least 4 months to provide speech samples. Clinic providers (n = 13) reviewed responses and provided global assessment ratings. We computed features of speech and used machine learning to create models of outcome measures trained using either population data or an individual's own data over time. The system was feasible to use, recording 1101 phone calls and 117 hours of speech. Most (92%) of the patients agreed that it was easy to use. The individually trained models demonstrated the highest correlation with provider ratings (rho = 0.78, p < 0.001). Population-level models demonstrated statistically significant correlations with provider global assessment ratings (rho = 0.44, p < 0.001), future provider ratings (rho = 0.33, p < 0.05), the BASIS-24 summary score, depression subscore, and self-harm subscore (rho = 0.25, 0.25, and 0.28, respectively; p < 0.05), and the SF-12 mental health subscore (rho = 0.25, p < 0.05), but not with other BASIS-24 or SF-12 subscores. This study brings together longitudinal collection of objective behavioral markers with a transdiagnostic, personalized approach to tracking mental health clinical state in a community-based clinical setting.


Subject(s)
Computational Biology/methods , Mental Disorders/epidemiology , Speech , Female , Humans , Male , Middle Aged , Pilot Projects , Residence Characteristics , Support Vector Machine
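One reason individually trained models can outperform population models, as observed above, is that patients differ in baseline levels of both speech features and ratings. A minimal sketch with ordinary least squares on hypothetical (feature, rating) pairs; the data, feature, and rating scale are illustrative only:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Two hypothetical patients with the same feature-rating slope but
# different baselines (rating = feature + 1 vs. feature + 4).
patient_a = ([1.0, 2.0, 3.0], [2.0, 3.0, 4.0])
patient_b = ([1.0, 2.0, 3.0], [5.0, 6.0, 7.0])

# A population model pools everyone; individual models fit each patient.
pooled = fit_line(patient_a[0] + patient_b[0], patient_a[1] + patient_b[1])
individual_a = fit_line(*patient_a)
```

The pooled fit averages the two intercepts (b = 2.5), so its predictions are systematically off for each patient, while the per-patient fit recovers patient A's baseline exactly (b = 1.0).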
8.
Interspeech ; 2019: 1901-1905, 2019 Sep.
Article in English | MEDLINE | ID: mdl-36703954

ABSTRACT

Psychotherapy, from a narrative perspective, is the process in which a client relates an ongoing life story to a therapist. In each session, a client will recount events from their life, some of which stand out as more significant than others. These significant stories can ultimately shape one's identity. In this work we study these narratives in the context of therapeutic alliance, a self-reported measure of the perceived shared bond between client and therapist. We propose that alliance can be predicted from the interactions between certain types of clients and types of therapists. To validate this method, we obtained 1235 transcribed sessions with client-reported alliance and trained an unsupervised approach to discover groups of therapists and clients based on common types of narrative characters, or personae. We measure the strength of the relation between personae and alliance in two experiments. Our results show that (1) alliance can be explained by the interactions between the discovered character types, and (2) models trained on therapist and client personae achieve significant performance gains compared to competitive supervised baselines. Finally, exploratory analysis reveals important character traits that lead to an improved perception of alliance.
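The paper's unsupervised method is not specified in this abstract; as an illustrative stand-in, plain k-means over per-character action-count vectors shows how character types ("personae") can emerge from co-occurrence counts. The data, the two verb-class features, and the deterministic initialization are all assumptions of this sketch.

```python
def kmeans(points, k, iters=20):
    """Plain k-means with deterministic initialization (first k points)."""
    centroids = [list(p) for p in points[:k]]
    assign = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest centroid by squared Euclidean distance.
        for i, p in enumerate(points):
            assign[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])),
            )
        # Update step: move each centroid to the mean of its members.
        for c in range(k):
            members = [points[i] for i in range(len(points)) if assign[i] == c]
            if members:
                centroids[c] = [sum(dim) / len(members) for dim in zip(*members)]
    return assign

# Hypothetical per-character counts over two illustrative verb classes
# (e.g., "supportive" vs. "directive" utterances).
characters = [(9.0, 1.0), (8.0, 2.0), (1.0, 9.0), (2.0, 8.0)]
personae = kmeans(characters, k=2)
```

On this toy data the first two characters and the last two fall into separate clusters; persona labels like these could then be related to alliance ratings, as in the paper's experiments.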
