Results 1 - 3 of 3
1.
PLoS One; 19(6): e0303653, 2024.
Article in English | MEDLINE | ID: mdl-38941299

ABSTRACT

BACKGROUND: Racism and implicit bias underlie disparities in health care access, treatment, and outcomes. An emerging approach to examining health disparities is the study of stigmatizing language in the electronic health record (EHR). OBJECTIVES: We sought to summarize the existing literature on stigmatizing language documented in the EHR. To this end, we conducted a scoping review to identify, describe, and evaluate the current body of literature on stigmatizing language in clinician notes. METHODS: We searched the PubMed, Cumulative Index of Nursing and Allied Health Literature (CINAHL), and Embase databases in May 2022 and conducted a hand search of IEEE to identify studies investigating stigmatizing language in clinical documentation. We included all studies published through April 2022. The results of each search were uploaded into EndNote X9 software, de-duplicated using the Bramer method, and then exported to Covidence software for title and abstract screening. RESULTS: Studies (N = 9) used cross-sectional (n = 3), qualitative (n = 3), mixed methods (n = 2), and retrospective cohort (n = 1) designs. Stigmatizing language was defined via content analysis of clinical documentation (n = 4), literature review (n = 2), interviews with clinicians (n = 3) and patients (n = 1), expert panel consultation, and task force guidelines (n = 1). Natural language processing (NLP) was used in four studies to identify and extract stigmatizing words from clinical notes. All of the studies reviewed concluded that negative clinician attitudes and the use of stigmatizing language in documentation could negatively affect patients' perception of care or health outcomes. DISCUSSION: The current literature indicates that NLP is an emerging approach to identifying stigmatizing language documented in the EHR. NLP-based solutions can be developed and integrated into routine documentation systems to screen for stigmatizing language and alert clinicians or their supervisors. Potential interventions resulting from this research could raise awareness of how implicit biases affect communication patterns and help achieve equitable health care for diverse populations.
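The screening idea proposed in the discussion above is straightforward to prototype. As a bare illustration only, a lexicon-based screen over note text might look like the Python sketch below; the term list and function name are hypothetical placeholders, not a validated instrument drawn from the reviewed studies.

```python
import re

# Hypothetical lexicon: the reviewed studies do not publish a canonical
# term list here, so these entries are illustrative placeholders only.
STIGMATIZING_TERMS = {"noncompliant", "drug-seeking", "difficult", "claims", "insists"}

def flag_stigmatizing_language(note_text: str) -> list[str]:
    """Return the lexicon terms found in a clinical note (case-insensitive)."""
    tokens = re.findall(r"[a-z'-]+", note_text.lower())
    return sorted(set(tokens) & STIGMATIZING_TERMS)

# Invented example note, not drawn from any study corpus.
note = "Patient is noncompliant with medication and insists on an early refill."
hits = flag_stigmatizing_language(note)
if hits:
    print("Flagged terms for review:", ", ".join(hits))
```

A production screen would need a validated lexicon, handling of negation and context, and a clinician-facing alert workflow; this sketch shows only the core matching step.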


Subjects
Documentation, Electronic Health Records, Humans, Language, Stereotyping, Racism
2.
Matern Child Health J; 28(3): 578-586, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38147277

ABSTRACT

INTRODUCTION: Stigma and bias related to race and other minoritized statuses may underlie disparities in pregnancy and birth outcomes. One emerging method for identifying bias is the study of stigmatizing language in the electronic health record. The objective of our study was to develop natural language processing (NLP) methods to accurately and automatically identify two types of stigmatizing language in labor and birth notes: marginalizing language and its complement, power/privilege language. METHODS: We analyzed notes for all birthing people > 20 weeks' gestation admitted for labor and birth at two hospitals during 2017. We then applied text preprocessing techniques, using term frequency-inverse document frequency (TF-IDF) values as inputs, and tested machine learning classification algorithms to identify stigmatizing and power/privilege language in clinical notes. The algorithms assessed included Decision Trees, Random Forest, and Support Vector Machines. Additionally, we applied a feature importance evaluation method (InfoGain) to discern words highly correlated with these language categories. RESULTS: For marginalizing language, Decision Trees yielded the best classification, with an F-score of 0.73. For power/privilege language, Support Vector Machines performed best, achieving an F-score of 0.91. These results demonstrate the effectiveness of the selected machine learning methods for classifying language categories in clinical notes. CONCLUSION: We identified well-performing machine learning methods to automatically detect stigmatizing language in clinical notes. To our knowledge, this is the first study to use NLP performance metrics to evaluate machine learning methods for discerning stigmatizing language. Future studies should refine and evaluate NLP methods further, incorporating the latest algorithms rooted in deep learning.
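The abstract names the feature representation (TF-IDF) and the classifier families but not an implementation. A minimal sketch of such a pipeline, assuming scikit-learn, might look like the following; the toy notes, labels, and train/test split are invented stand-ins for the study's de-identified corpus, and mutual information serves only as a rough analogue of the InfoGain ranking.

```python
# Sketch of a TF-IDF + classifier pipeline, assuming scikit-learn.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import mutual_info_classif
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Invented toy examples, not the study's data.
notes = [
    "patient refuses to follow instructions and is argumentative",
    "pleasant and cooperative, well prepared for delivery",
    "claims she is in severe pain despite a normal exam",
    "supportive family at bedside, patient coping well",
    "noncompliant with prenatal visits, poor historian",
    "employed, asking appropriate questions about newborn care",
    "insists on early discharge against medical advice",
    "calm and engaged throughout labor",
]
labels = [1, 0, 1, 0, 1, 0, 1, 0]  # 1 = marginalizing language present

X_train, X_test, y_train, y_test = train_test_split(
    notes, labels, test_size=0.25, stratify=labels, random_state=42
)

for name, clf in [
    ("Decision Tree", DecisionTreeClassifier(random_state=42)),
    ("Random Forest", RandomForestClassifier(random_state=42)),
    ("SVM", SVC(kernel="linear")),
]:
    model = make_pipeline(TfidfVectorizer(), clf)  # TF-IDF values as inputs
    model.fit(X_train, y_train)
    print(f"{name} F1: {f1_score(y_test, model.predict(X_test)):.2f}")

# Rough analogue of the InfoGain step: rank terms by mutual information
# with the label to surface words most associated with each category.
vec = TfidfVectorizer()
X = vec.fit_transform(notes)
mi = mutual_info_classif(X, labels, random_state=42)
top = sorted(zip(vec.get_feature_names_out(), mi), key=lambda t: -t[1])[:5]
print("Top terms:", [word for word, _ in top])
```

With a corpus this small the scores are meaningless; the point is the shape of the pipeline, with raw text vectorized inside the pipeline so the TF-IDF vocabulary is fit only on training data.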


Subjects
Algorithms, Natural Language Processing, Female, Humans, Electronic Health Records, Machine Learning, Language
3.
Nurs Inq; 30(3): e12557, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37073504

ABSTRACT

The presence of stigmatizing language in the electronic health record (EHR) has been used to measure the implicit biases that underlie health inequities. The purpose of this study was to identify stigmatizing language in the clinical notes of pregnant people during the birth admission. We conducted a qualitative analysis of N = 1117 birth admission EHR notes from two urban hospitals in 2017. We identified stigmatizing language in 61 notes (5.4%), in categories such as Disapproval (39.3%), Questioning patient credibility (37.7%), Difficult patient (21.3%), Stereotyping (1.6%), and Unilateral decisions (1.6%). We also defined a new stigmatizing language category indicating Power/privilege. This was present in 37 notes (3.3%) and signaled approval of social status, upholding a hierarchy of bias. Stigmatizing language was most frequently identified in birth admission triage notes (16%) and least frequently in social work initial assessments (13.7%). We found that clinicians from various disciplines recorded stigmatizing language in the medical records of birthing people. This language was used to question birthing people's credibility and to convey disapproval of their decision-making abilities for themselves or their newborns. We also reported a Power/privilege language bias in the inconsistent documentation of traits considered favorable for patient outcomes (e.g., employment status). Future work on stigmatizing language may inform tailored interventions to improve perinatal outcomes for all birthing people and their families.


Subjects
Language, Stereotyping, Newborn, Pregnancy, Female, Humans, Electronic Health Records