Results 1 - 20 of 49
1.
Cancer Biomark ; 2024 Mar 07.
Article in English | MEDLINE | ID: mdl-38517780

ABSTRACT

BACKGROUND: Large community cohorts are useful for lung cancer research, allowing for the analysis of risk factors and development of predictive models. OBJECTIVE: A robust methodology for (1) identifying lung cancer and pulmonary nodule diagnoses and (2) associating multimodal longitudinal data with these events from electronic health records (EHRs) is needed to optimally curate cohorts at scale. METHODS: In this study, we leveraged (1) SNOMED concepts to develop ICD-based decision rules for building a cohort that captured lung cancer and pulmonary nodules and (2) clinical knowledge to define time windows for collecting longitudinal imaging and clinical concepts. We curated three cohorts with clinical data and repeated imaging for subjects with pulmonary nodules from Vanderbilt University Medical Center. RESULTS: Our approach achieved an estimated sensitivity of 0.930 (95% CI: [0.879, 0.969]), specificity of 0.996 (95% CI: [0.989, 1.00]), positive predictive value of 0.979 (95% CI: [0.959, 1.000]), and negative predictive value of 0.987 (95% CI: [0.976, 0.994]) for distinguishing subjects with lung cancer from subjects with SPNs. CONCLUSION: This work represents a general strategy for high-throughput curation of multimodal longitudinal cohorts at risk for lung cancer from routinely collected EHRs.
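The operating characteristics reported above follow directly from a confusion matrix; a minimal sketch with illustrative counts (not the study's data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard operating characteristics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)  # recall among true lung cancer cases
    specificity = tn / (tn + fp)  # correct rejections among nodule-only subjects
    ppv = tp / (tp + fp)          # positive predictive value
    npv = tn / (tn + fn)          # negative predictive value
    return sensitivity, specificity, ppv, npv

# Illustrative counts chosen only to show the calculation.
sens, spec, ppv, npv = diagnostic_metrics(tp=93, fp=2, tn=498, fn=7)
print(sens, spec)  # 0.93 0.996
```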

2.
NPJ Digit Med ; 7(1): 53, 2024 Mar 01.
Article in English | MEDLINE | ID: mdl-38429353

ABSTRACT

The rising popularity of artificial intelligence in healthcare highlights the problem that a computational model achieving super-human clinical performance at its training sites may perform substantially worse at new sites. In this perspective, we argue that we should typically expect this failure to transport, and we present common sources for it, divided into those under the control of the experimenter and those inherent to the clinical data-generating process. Of the inherent sources, we look more closely at site-specific clinical practices that can affect the data distribution, and we propose a potential solution intended to isolate the imprint of those practices on the data from the patterns of disease cause and effect that are the usual target of probabilistic clinical models.

3.
Comput Biol Med ; 171: 108122, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38417381

ABSTRACT

Treatments ideally mitigate pathogenesis, or the detrimental effects of the root causes of disease. However, existing definitions of treatment effect fail to account for pathogenic mechanism. We therefore introduce the Treated Root causal Effects (TRE) metric which measures the ability of a treatment to modify root causal effects. We leverage TREs to automatically identify treatment targets and cluster patients who respond similarly to treatment. The proposed algorithm learns a partially linear causal model to extract the root causal effects of each variable and then estimates TREs for target discovery and downstream subtyping. We maintain interpretability even without assuming an invertible structural equation model. Experiments across a range of datasets corroborate the generality of the proposed approach.


Subjects
Algorithms, Theoretical Models, Humans
4.
J Am Med Inform Assoc ; 31(4): 968-974, 2024 Apr 03.
Article in English | MEDLINE | ID: mdl-38383050

ABSTRACT

OBJECTIVE: To develop and evaluate a data-driven process to generate suggestions for improving alert criteria using explainable artificial intelligence (XAI) approaches. METHODS: We extracted data on alerts generated from January 1, 2019 to December 31, 2020, at Vanderbilt University Medical Center. We developed machine learning models to predict user responses to alerts. We applied XAI techniques to generate global explanations and local explanations. We evaluated the generated suggestions by comparing them with the alerts' historical change logs and through stakeholder interviews. Suggestions that either matched (or partially matched) changes already made to the alert or were considered clinically correct were classified as helpful. RESULTS: The final dataset included 2 991 823 firings with 2689 features. Among the 5 machine learning models, the LightGBM model achieved the highest area under the ROC curve: 0.919 [0.918, 0.920]. We identified 96 helpful suggestions. A total of 278 807 firings (9.3%) could have been eliminated. Some of the suggestions also revealed workflow and education issues. CONCLUSION: We developed a data-driven process to generate suggestions for improving alert criteria using XAI techniques. Our approach could identify improvements regarding clinical decision support (CDS) that might be overlooked or delayed in manual reviews. It also unveils a secondary purpose for the XAI: to improve quality by discovering scenarios where CDS alerts are not accepted due to workflow, education, or staffing issues.
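The core loop (fit a response model, check discrimination, extract a global explanation) can be sketched with scikit-learn stand-ins; here a gradient-boosting classifier and permutation importance substitute for the paper's LightGBM and XAI method, and the "firings" are synthetic:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Synthetic alert firings: feature 0 strongly drives whether the user accepts.
X = rng.normal(size=(2000, 5))
y = (X[:, 0] + 0.1 * rng.normal(size=2000) > 0).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])

# Global explanation: high-importance features suggest alert criteria to review.
imp = permutation_importance(model, X, y, n_repeats=5, random_state=0)
top_feature = int(np.argmax(imp.importances_mean))
```

In the paper's setting, local explanations of firings the model predicts will be ignored are what surface candidate criterion changes.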


Subjects
Artificial Intelligence, Clinical Decision Support Systems, Humans, Machine Learning, Academic Medical Centers, Educational Status
5.
Med Image Anal ; 90: 102939, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37725868

ABSTRACT

Transformer-based models, capable of learning better global dependencies, have recently demonstrated exceptional representation learning capabilities in computer vision and medical image analysis. Transformers reformat the image into separate patches and realize global communication via the self-attention mechanism. However, positional information between patches is hard to preserve in such 1D sequences, and losing it can lead to sub-optimal performance when dealing with large amounts of heterogeneous tissues of various sizes in 3D medical image segmentation. Additionally, current methods are not robust and efficient for heavy-duty medical segmentation tasks such as predicting a large number of tissue classes or modeling globally inter-connected tissue structures. To address such challenges, and inspired by the nested hierarchical structures in vision transformers, we propose a novel 3D medical image segmentation method (UNesT), employing a simplified and faster-converging transformer encoder design that achieves local communication among spatially adjacent patch sequences by aggregating them hierarchically. We extensively validate our method on multiple challenging datasets, consisting of multiple modalities, anatomies, and a wide range of tissue classes, including 133 structures in the brain, 14 organs in the abdomen, 4 hierarchical components in the kidneys, inter-connected kidney tumors, and brain tumors. We show that UNesT consistently achieves state-of-the-art performance and evaluate its generalizability and data efficiency. In particular, the model performs whole-brain segmentation of the complete ROI with 133 tissue classes in a single network, outperforming the prior state-of-the-art method SLANT27, an ensemble of 27 networks. Our model increases the mean DSC score on the publicly available Colin and CANDI datasets from 0.7264 to 0.7444 and from 0.6968 to 0.7025, respectively.
Code, pre-trained models, and use case pipeline are available at: https://github.com/MASILab/UNesT.
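The DSC figures quoted above are the standard Dice overlap, averaged over tissue classes for multi-class segmentation; a minimal NumPy sketch:

```python
import numpy as np

def dice(pred, truth, label):
    """Dice similarity coefficient for one tissue class."""
    p, t = (pred == label), (truth == label)
    denom = p.sum() + t.sum()
    return 2.0 * np.logical_and(p, t).sum() / denom if denom else 1.0

def mean_dsc(pred, truth, labels):
    """Mean DSC over classes, as reported for the 133-class whole-brain task."""
    return float(np.mean([dice(pred, truth, c) for c in labels]))

pred = np.array([[0, 1], [1, 2]])
truth = np.array([[0, 1], [2, 2]])
score = mean_dsc(pred, truth, labels=[0, 1, 2])
```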

6.
Article in English | MEDLINE | ID: mdl-37465096

ABSTRACT

Features learned from single radiologic images are unable to provide information about whether and how much a lesion may be changing over time. Time-dependent features computed from repeated images can capture those changes and help identify malignant lesions by their temporal behavior. However, longitudinal medical imaging presents the unique challenge of sparse, irregular time intervals in data acquisition. While self-attention has been shown to be a versatile and efficient learning mechanism for time series and natural images, its potential for interpreting temporal distance between sparse, irregularly sampled spatial features has not been explored. In this work, we propose two interpretations of a time-distance vision transformer (ViT) by using (1) vector embeddings of continuous time and (2) a temporal emphasis model to scale self-attention weights. The two algorithms are evaluated based on benign versus malignant lung cancer discrimination of synthetic pulmonary nodules and lung screening computed tomography studies from the National Lung Screening Trial (NLST). Experiments evaluating the time-distance ViTs on synthetic nodules show a fundamental improvement in classifying irregularly sampled longitudinal images when compared to standard ViTs. In cross-validation on screening chest CTs from the NLST, our methods (0.785 and 0.786 AUC respectively) significantly outperform a cross-sectional approach (0.734 AUC) and match the discriminative performance of the leading longitudinal medical imaging algorithm (0.779 AUC) on benign versus malignant classification. This work represents the first self-attention-based framework for classifying longitudinal medical images. Our code is available at https://github.com/tom1193/time-distance-transformer.
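The second interpretation, scaling self-attention weights with a temporal emphasis model, can be illustrated in a few lines; the exponential-decay emphasis used here is an assumption for illustration, not necessarily the paper's exact functional form:

```python
import numpy as np

def time_distance_attention(scores, times, t_query, decay=0.5):
    """Scale raw attention scores by a temporal emphasis before normalizing.

    scores: raw query-key similarities for one query (shape [n_keys])
    times: acquisition time of each key scan (e.g., years)
    t_query: time of the query scan; temporally distant scans are down-weighted.
    """
    emphasis = np.exp(-decay * np.abs(t_query - np.asarray(times)))
    w = scores * emphasis
    w = np.exp(w - w.max())  # softmax over the emphasized scores
    return w / w.sum()

# Three prior scans acquired at years 0, 2, and 3, attended from a year-3 scan.
attn = time_distance_attention(np.array([1.0, 1.0, 1.0]), [0.0, 2.0, 3.0], 3.0)
```

With equal raw similarities, the most recent scan receives the largest attention weight, which is the intended effect for sparse, irregularly sampled studies.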

7.
Radiology ; 308(1): e222937, 2023 07.
Article in English | MEDLINE | ID: mdl-37489991

ABSTRACT

Background An artificial intelligence (AI) algorithm has been developed for fully automated body composition assessment of lung cancer screening noncontrast low-dose CT of the chest (LDCT) scans, but the utility of these measurements in disease risk prediction models has not been assessed. Purpose To evaluate the added value of CT-based AI-derived body composition measurements in risk prediction of lung cancer incidence, lung cancer death, cardiovascular disease (CVD) death, and all-cause mortality in the National Lung Screening Trial (NLST). Materials and Methods In this secondary analysis of the NLST, body composition measurements, including area and attenuation attributes of skeletal muscle and subcutaneous adipose tissue, were derived from baseline LDCT examinations by using a previously developed AI algorithm. The added value of these measurements was assessed with sex- and cause-specific Cox proportional hazards models with and without the AI-derived body composition measurements for predicting lung cancer incidence, lung cancer death, CVD death, and all-cause mortality. Models were adjusted for confounding variables including age; body mass index; quantitative emphysema; coronary artery calcification; history of diabetes, heart disease, hypertension, and stroke; and other PLCOM2012 lung cancer risk factors. Goodness-of-fit improvements were assessed with the likelihood ratio test. Results Among 20 768 included participants (median age, 61 years [IQR, 57-65 years]; 12 317 men), 865 were diagnosed with lung cancer and 4180 died during follow-up. 
Including the AI-derived body composition measurements improved risk prediction for lung cancer death (male participants: χ2 = 23.09, P < .001; female participants: χ2 = 15.04, P = .002), CVD death (males: χ2 = 69.94, P < .001; females: χ2 = 16.60, P < .001), and all-cause mortality (males: χ2 = 248.13, P < .001; females: χ2 = 94.54, P < .001), but not for lung cancer incidence (male participants: χ2 = 2.53, P = .11; female participants: χ2 = 1.73, P = .19). Conclusion The body composition measurements automatically derived from baseline low-dose CT examinations added predictive value for lung cancer death, CVD death, and all-cause death, but not for lung cancer incidence in the NLST. Clinical trial registration no. NCT00047385 © RSNA, 2023 Supplemental material is available for this article. See also the editorial by Fintelmann in this issue.
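The goodness-of-fit comparison above is a standard likelihood ratio test between nested Cox models; a sketch using the reported χ² for lung cancer death in male participants (the degrees of freedom are an assumption here, taken as the number of added body-composition terms, which the abstract does not state):

```python
from scipy.stats import chi2

def lr_test(loglik_full, loglik_reduced, df):
    """Likelihood ratio test for nested models: 2 * (llf - llr) ~ chi2(df)."""
    stat = 2.0 * (loglik_full - loglik_reduced)
    return stat, chi2.sf(stat, df)

# Reported chi-square for lung cancer death, male participants: 23.09.
# df=4 is an assumed count of added body-composition covariates.
p = chi2.sf(23.09, df=4)
```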


Subjects
Cardiovascular Diseases, Lung Neoplasms, Female, Male, Humans, Middle Aged, Early Detection of Cancer, Artificial Intelligence, Body Composition, Lung
8.
Med Image Comput Comput Assist Interv ; 14221: 649-659, 2023 Oct.
Article in English | MEDLINE | ID: mdl-38779102

ABSTRACT

The accuracy of predictive models for solitary pulmonary nodule (SPN) diagnosis can be greatly increased by incorporating repeat imaging and medical context, such as electronic health records (EHRs). However, clinically routine modalities such as imaging and diagnostic codes can be asynchronous and irregularly sampled over different time scales, which are obstacles to longitudinal multimodal learning. In this work, we propose a transformer-based multimodal strategy to integrate repeat imaging with longitudinal clinical signatures from routinely collected EHRs for SPN classification. We perform unsupervised disentanglement of latent clinical signatures and leverage time-distance scaled self-attention to jointly learn from clinical signature expressions and chest computed tomography (CT) scans. Our classifier is pretrained on 2,668 scans from a public dataset and 1,149 subjects with longitudinal chest CTs, billing codes, medications, and laboratory tests from EHRs of our home institution. Evaluation on 227 subjects with challenging SPNs revealed a significant AUC improvement over a longitudinal multimodal baseline (0.824 vs 0.752 AUC), as well as improvements over a single cross-section multimodal scenario (0.809 AUC) and a longitudinal imaging-only scenario (0.741 AUC). This work demonstrates significant advantages with a novel approach for co-learning longitudinal imaging and non-imaging phenotypes with transformers. Code available at https://github.com/MASILab/lmsignatures.

9.
Comput Biol Med ; 150: 106113, 2022 11.
Article in English | MEDLINE | ID: mdl-36198225

ABSTRACT

OBJECTIVE: Patients with indeterminate pulmonary nodules (IPNs) with an intermediate to high probability of lung cancer generally undergo invasive diagnostic procedures. Chest computed tomography (CT) images and clinical data have been used to estimate the pretest probability of lung cancer. In this study, we apply a deep learning network to integrate multi-modal data from CT images and clinical data (including blood-based biomarkers) to improve lung cancer diagnosis. Our goal is to reduce uncertainty and to avoid morbidity, mortality, and over- and undertreatment of patients with IPNs. METHOD: We use a retrospective study design with cross-validation and external validation from four different sites. We introduce a deep learning framework with a two-path structure to learn from CT images and clinical data. The proposed model can learn and predict with a single modality if the multi-modal data are not complete. We used 1284 patients in the learning cohort for model development. Three external sites (with 155, 136, and 96 patients, respectively) provided patient data for external validation. We compare our model to widely applied clinical prediction models (the Mayo and Brock models) and image-only methods (e.g., the Liao et al. model). RESULTS: Our co-learning model improves upon the performance of the clinical-factor-only (Mayo and Brock) and image-only (Liao et al.) models in both cross-validation on the learning cohort (e.g., AUC: 0.787 (ours) vs. 0.707-0.719 (baselines), reported on validation folds) and external validation using three datasets: University of Pittsburgh Medical Center (0.918 (ours) vs. 0.828-0.886 (baselines)), Detection of Early Cancer Among Military Personnel (0.712 vs. 0.576-0.709), and University of Colorado Denver (0.847 vs. 0.679-0.746). In addition, our model achieves better reclassification performance (cNRI 0.04 to 0.20) in all cross- and external-validation sets compared to the Mayo model.
CONCLUSIONS: Lung cancer risk estimation in patients with IPNs can benefit from the co-learning of CT images and clinical data. Learning from more subjects, even when some have only a single modality, can improve prediction accuracy. An integrated deep learning model can achieve reasonable discrimination and reclassification performance.
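The categorical net reclassification improvement (NRI) quoted above measures how a new model moves subjects across risk categories relative to a baseline; a minimal sketch with hypothetical category assignments:

```python
def categorical_nri(old_cat, new_cat, event):
    """Net reclassification improvement across risk categories.

    old_cat/new_cat: risk-category index per subject under each model
    event: 1 if the subject had the outcome (e.g., lung cancer), else 0.
    """
    up = [n > o for o, n in zip(old_cat, new_cat)]
    down = [n < o for o, n in zip(old_cat, new_cat)]
    ev = [i for i, e in enumerate(event) if e]
    ne = [i for i, e in enumerate(event) if not e]
    nri_event = (sum(up[i] for i in ev) - sum(down[i] for i in ev)) / len(ev)
    nri_nonevent = (sum(down[i] for i in ne) - sum(up[i] for i in ne)) / len(ne)
    return nri_event + nri_nonevent

# Hypothetical data: moving cases up and controls down is rewarded.
nri = categorical_nri([0, 1, 1, 2], [1, 1, 0, 1], [1, 1, 0, 0])
```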


Subjects
Deep Learning, Lung Neoplasms, Multiple Pulmonary Nodules, Humans, Retrospective Studies, Uncertainty, Multiple Pulmonary Nodules/diagnostic imaging, Lung Neoplasms/diagnostic imaging
10.
Neuroinformatics ; 20(2): 483-505, 2022 04.
Article in English | MEDLINE | ID: mdl-34981404

ABSTRACT

Along with the increasing availability of electronic medical record (EMR) data, phenome-wide association studies (PheWAS) and phenome-disease association studies (PheDAS) have become a prominent, first-line method of analysis for uncovering the secrets of EMR data. Despite this recent growth, there is a lack of approachable software tools for conducting these analyses on large-scale EMR cohorts. In this article, we introduce pyPheWAS, an open-source python package for conducting PheDAS and related analyses. This toolkit includes 1) data preparation, such as cohort censoring and age-matching; 2) traditional PheDAS analysis of ICD-9 and ICD-10 billing codes; 3) PheDAS analysis applied to a novel EMR phenotype mapping: current procedural terminology (CPT) codes; and 4) novelty analysis of significant disease-phenotype associations found through PheDAS. The pyPheWAS toolkit is approachable and comprehensive, encapsulating data preparation through result visualization within a simple command-line interface. The toolkit is designed for the ever-growing scale of available EMR data, with the ability to analyze cohorts of 100,000+ patients in less than 2 hours. Through a case study of Down syndrome and other intellectual developmental disabilities, we demonstrate the ability of pyPheWAS to discover both known and potentially novel disease-phenotype associations across different experiment designs and disease groups. The software and user documentation are available in open source at https://github.com/MASILab/pyPheWAS.
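At its core, a PheDAS tests each billing-code-derived phenotype for association with case status. The pyPheWAS package uses regression with covariates; the scan's shape can be illustrated with a simplified Fisher-exact version (a sketch only, not the package's actual API):

```python
from scipy.stats import fisher_exact

def phedas_scan(tables, alpha=0.05):
    """Test each phecode's 2x2 table (case/control x code present/absent).

    tables: dict phecode -> [[case_with, case_without], [ctrl_with, ctrl_without]]
    Returns phecodes significant after Bonferroni correction.
    """
    hits = {}
    for code, table in tables.items():
        odds, p = fisher_exact(table)
        if p < alpha / len(tables):  # Bonferroni over all tested phecodes
            hits[code] = p
    return hits

tables = {
    "008.0": [[40, 10], [10, 40]],  # strongly enriched in cases
    "250.2": [[25, 25], [25, 25]],  # no association
}
hits = phedas_scan(tables)
```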


Subjects
Electronic Health Records, Genome-Wide Association Study, Genome-Wide Association Study/methods, Phenotype, Software
11.
J Biol Rhythms ; 36(6): 595-601, 2021 12.
Article in English | MEDLINE | ID: mdl-34696614

ABSTRACT

False negative tests for SARS-CoV-2 are common and have important public health and medical implications. We tested the hypothesis of diurnal variation in viral shedding by assessing the proportion of positive versus negative SARS-CoV-2 reverse transcription polymerase chain reaction (RT-PCR) tests and cycle time (Ct) values among positive samples by the time of day. Among 86,342 clinical tests performed among symptomatic and asymptomatic patients in a regional health care network in the southeastern United States from March to August 2020, we found evidence for diurnal variation in the proportion of positive SARS-CoV-2 tests, with a peak around 1400 h and 1.7-fold variation over the day after adjustment for age, sex, race, testing location, month, and day of week and lower Ct values during the day for positive samples. These findings have important implications for public health testing and vaccination strategies.
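Diurnal variation of this kind is commonly modeled with a cosinor fit, i.e., regressing the outcome on sine and cosine of time of day; a sketch on synthetic data with a peak placed at 1400 h (illustrative only, not the study's data or its full adjusted model):

```python
import numpy as np

def cosinor_peak_hour(hours, values):
    """Fit values ~ a + b*cos(wt) + c*sin(wt), w = 2*pi/24; return the peak hour."""
    w = 2 * np.pi / 24
    X = np.column_stack([np.ones_like(hours), np.cos(w * hours), np.sin(w * hours)])
    a, b, c = np.linalg.lstsq(X, values, rcond=None)[0]
    peak = np.arctan2(c, b) / w  # acrophase, converted back to hours
    return peak % 24

hours = np.arange(0, 24, 0.5)
# Synthetic positivity proportion peaking at 1400 h.
values = 0.1 + 0.03 * np.cos(2 * np.pi / 24 * (hours - 14))
peak = cosinor_peak_hour(hours, values)
```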


Subjects
COVID-19, SARS-CoV-2, COVID-19 Testing, Circadian Rhythm, Humans, Polymerase Chain Reaction
12.
Sci Rep ; 11(1): 18618, 2021 09 20.
Article in English | MEDLINE | ID: mdl-34545125

ABSTRACT

Heart failure (HF) has no cure and, for HF with preserved ejection fraction (HFpEF), no life-extending treatments. Defining the clinical epidemiology of HF could facilitate earlier identification of high-risk individuals. We define the clinical epidemiology of HF subtypes (HFpEF and HF with reduced ejection fraction [HFrEF]) identified among 2.7 million individuals receiving routine clinical care. Differences in patterns and rates of accumulation of comorbidities, frequency of hospitalization, and use of specialty care were defined for each HF subtype. Among 28,156 HF cases, 8322 (30%) were HFpEF and 11,677 (42%) were HFrEF. HFpEF was the more prevalent subtype among older women. A total of 177 phenotypes were differentially associated with HFpEF versus HFrEF. HFrEF was more frequently associated with diagnoses related to ischemic cardiac injury, while HFpEF was associated more with non-cardiac comorbidities and HF symptoms. These comorbidity patterns were frequently present 3 years prior to an HFpEF diagnosis. HF subtypes demonstrated distinct patterns of clinical comorbidities and disease progression. For HFpEF, these comorbidities were often non-cardiac and manifested prior to the onset of an HF diagnosis. Recognizing these comorbidity patterns along the care continuum may present a window of opportunity to identify individuals at risk for developing incident HFpEF.


Subjects
Heart Failure/classification, Adult, Aged, Aged 80 and over, Algorithms, Comorbidity, Disease Progression, Female, Heart Disease Risk Factors, Heart Failure/epidemiology, Heart Failure/physiopathology, Humans, Machine Learning, Male, Middle Aged, Phenotype, Risk Reduction Behavior, Stroke Volume
13.
Appl Clin Inform ; 12(1): 164-169, 2021 01.
Article in English | MEDLINE | ID: mdl-33657635

ABSTRACT

BACKGROUND: The data visualization literature asserts that the details of the optimal data display must be tailored to the specific task, the background of the user, and the characteristics of the data. The general organizing principle of a concept-oriented display is known to be useful for many tasks and data types. OBJECTIVES: In this project, we used general principles of data visualization and a co-design process to produce a clinical display tailored to a specific cognitive task, chosen from the anesthesia domain but with clear generalizability to other clinical tasks. To support the work of the anesthesia-in-charge (AIC), our task was, for a given day, to depict the acuity level and complexity of each patient in the collection of those that will be operated on the following day. The AIC uses this information to optimally allocate anesthesia staff and providers across operating rooms. METHODS: We used a co-design process to collaborate with participants who work in the AIC role. We conducted two in-depth interviews with AICs and engaged them in subsequent input on iterative design solutions. RESULTS: Through the co-design process, we found (1) the need to carefully match the level of detail in the display to the level required by the clinical task, (2) the impedance caused by irrelevant information on the screen, such as icons relevant only to other tasks, and (3) the desire for a specific but optional trajectory of increasingly detailed textual summaries. CONCLUSION: This study reports a real-world clinical informatics development project that engaged users as co-designers. Our process led to the user-preferred design of a single binary flag to identify the subset of patients needing further investigation, and then a trajectory of increasingly detailed, text-based abstractions for each patient that can be displayed when more information is needed.


Subjects
Data Display, Medical Informatics, Delivery of Health Care, Humans, Operating Rooms, Perioperative Care
14.
J Am Med Inform Assoc ; 28(3): 596-604, 2021 03 01.
Article in English | MEDLINE | ID: mdl-33277896

ABSTRACT

OBJECTIVE: Simulating electronic health record data offers an opportunity to resolve the tension between data sharing and patient privacy. Recent techniques based on generative adversarial networks have shown promise but neglect the temporal aspect of healthcare. We introduce a generative framework for simulating the trajectory of patients' diagnoses and measures to evaluate utility and privacy. MATERIALS AND METHODS: The framework simulates date-stamped diagnosis sequences based on a 2-stage process that 1) sequentially extracts temporal patterns from clinical visits and 2) generates synthetic data conditioned on the learned patterns. We designed 3 utility measures to characterize the extent to which the framework maintains feature correlations and temporal patterns in clinical events. We evaluated the framework with billing codes, represented as phenome-wide association study codes (phecodes), from over 500 000 Vanderbilt University Medical Center electronic health records. We further assessed the privacy risks based on membership inference and attribute disclosure attacks. RESULTS: The simulated temporal sequences exhibited similar characteristics to real sequences on the utility measures. Notably, diagnosis prediction models based on real versus synthetic temporal data exhibited an average relative difference in area under the ROC curve of 1.6% with standard deviation of 3.8% for 1276 phecodes. Additionally, the relative difference in the mean occurrence age and time between visits were 4.9% and 4.2%, respectively. The privacy risks in synthetic data, with respect to the membership and attribute inference were negligible. CONCLUSION: This investigation indicates that temporal diagnosis code sequences can be simulated in a manner that provides utility and respects privacy.


Subjects
Computer Simulation, Confidentiality, Electronic Health Records, Statistical Models, Academic Medical Centers, Current Procedural Terminology, Diagnosis, Disease/classification, Hospital Charges/classification, Humans, Information Dissemination, Tennessee, Time Factors
15.
J Clin Anesth ; 68: 110114, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33142248

ABSTRACT

STUDY OBJECTIVE: A challenge in reducing unwanted care variation is effectively managing the wide variety of performed surgical procedures. While an organization may perform thousands of types of cases, privacy and logistical constraints prevent review of previous cases to learn about prior practices. To bridge this gap, we developed a system for extracting key data from anesthesia records. Our objective was to determine whether usage of the system would improve case planning performance for anesthesia residents. DESIGN: Randomized, cross-over trial. SETTING: Vanderbilt University Medical Center. MEASUREMENTS: We developed a web-based data visualization tool for reviewing de-identified anesthesia records. First-year anesthesia residents were recruited and performed simulated case planning tasks (e.g., selecting an anesthetic type) across six case scenarios using a randomized, cross-over design after a baseline assessment. An algorithm scored case planning performance based on care components selected by residents occurring frequently among prior anesthetics, on a 0-4 point scale. Linear mixed effects regression quantified the tool effect on the average performance score, adjusting for potential confounders. MAIN RESULTS: We analyzed 516 survey questionnaires from 19 residents. The mean performance score was 2.55 ± SD 0.32. Utilization of the tool was associated with an average score improvement of 0.120 points (95% CI 0.060 to 0.179; p < 0.001). Additionally, a 0.055 point improvement due to the "learning effect" was observed from each assessment to the next (95% CI 0.034 to 0.077; p < 0.001). Assessment score was also significantly associated with specific case scenarios (p < 0.001). CONCLUSIONS: This study demonstrated the feasibility of developing a clinical data visualization system that aggregated key anesthetic information and found that use of the tool modestly improved residents' performance in simulated case planning.


Subjects
Anesthesia, Internship and Residency, Academic Medical Centers, Anesthesia/adverse effects, Clinical Competence, Cross-Over Studies, Humans
16.
J Biomed Inform ; 112: 103611, 2020 12.
Article in English | MEDLINE | ID: mdl-33157313

ABSTRACT

Model calibration, critical to the success and safety of clinical prediction models, deteriorates over time in response to the dynamic nature of clinical environments. To support informed, data-driven model updating strategies, we present and evaluate a calibration drift detection system. Methods are developed for maintaining dynamic calibration curves with optimized online stochastic gradient descent and for detecting increasing miscalibration with adaptive sliding windows. These methods are generalizable to support diverse prediction models developed using a variety of learning algorithms and customizable to address the unique needs of clinical use cases. In both simulation and case studies, our system accurately detected calibration drift. When drift is detected, our system further provides actionable alerts by including information on a window of recent data that may be appropriate for model updating. Simulations showed these windows were primarily composed of data accruing after drift onset, supporting the potential utility of the windows for model updating. By promoting model updating as calibration deteriorates rather than on pre-determined schedules, implementations of our drift detection system may minimize interim periods of insufficient model accuracy and focus analytic resources on those models most in need of attention.
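The drift-detection idea above (compare observed outcome rates with predicted probabilities over a recent window and alert when miscalibration grows) can be sketched simply; the adaptive windowing and online SGD machinery of the actual system are omitted, and the data are synthetic:

```python
import numpy as np

def detect_drift(preds, outcomes, window=100, tol=0.1):
    """Return the first index where mean calibration error in the trailing
    window exceeds tol, or None. The flagged window is a candidate for
    model updating, as in the described system.

    preds: predicted probabilities; outcomes: observed 0/1 labels.
    """
    preds, outcomes = np.asarray(preds, float), np.asarray(outcomes, float)
    for i in range(window, len(preds) + 1):
        err = abs(preds[i - window:i].mean() - outcomes[i - window:i].mean())
        if err > tol:
            return i
    return None

# A model predicting 0.2 throughout; the true event rate jumps from 20% to 60%
# at index 300, simulating calibration drift onset.
preds = [0.2] * 600
outcomes = [0, 0, 0, 0, 1] * 60 + [1, 1, 1, 0, 0] * 60
idx = detect_drift(preds, outcomes)
```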


Subjects
Algorithms, Statistical Models, Calibration, Prognosis
17.
Appl Clin Inform ; 11(5): 700-709, 2020 10.
Article in English | MEDLINE | ID: mdl-33086396

ABSTRACT

BACKGROUND: Suboptimal information display in electronic health records (EHRs) is a notorious pain point for users. Designing an effective display is difficult, due in part to the complex and varied nature of clinical practice. OBJECTIVE: This article aims to understand the goals, constraints, frustrations, and mental models of inpatient medical providers when accessing EHR data, to better inform the display of clinical information. METHODS: A multidisciplinary ethnographic study of inpatient medical providers. RESULTS: Our participants' primary goal was usually to assemble a clinical picture around a given question, under the constraints of time pressure and incomplete information. To do so, they tend to use a mental model of multiple layers of abstraction when thinking of patients and disease; they prefer immediate pattern recognition strategies for answering clinical questions, with breadth-first or depth-first search strategies used subsequently if needed; and they are sensitive to data relevance, completeness, and reliability when reading a record. CONCLUSION: These results conflict with the ubiquitous display design practice of separating data by type (test results, medications, notes, etc.), a mismatch that is known to encumber efficient mental processing by increasing both navigation burden and memory demands on users. A popular and obvious solution is to select or filter the data to display exactly what is presumed to be relevant to the clinical question, but this solution is both brittle and mistrusted by users. A less brittle approach that is more aligned with our users' mental model could use abstraction to summarize details instead of filtering to hide data. An abstraction-based approach could allow clinicians to more easily assemble a clinical picture, to use immediate pattern recognition strategies, and to adjust the level of displayed detail to their particular needs. 
It could also help the user notice unanticipated patterns and to fluidly shift attention as understanding evolves.


Subjects
Electronic Health Records, Inpatients, Humans, Reproducibility of Results, User-Centered Design
18.
Am J Health Syst Pharm ; 77(19): 1556-1570, 2020 09 18.
Article in English | MEDLINE | ID: mdl-32620944

ABSTRACT

PURPOSE: To provide pharmacists and other clinicians with a basic understanding of the underlying principles and practical applications of artificial intelligence (AI) in the medication-use process. SUMMARY: "Artificial intelligence" is a general term used to describe the theory and development of computer systems that perform tasks normally requiring human cognition, such as perception, language understanding, reasoning, learning, planning, and problem solving. Following the fundamental theorem of informatics, a better term for AI would be "augmented intelligence," or leveraging the strengths of computers and the strengths of clinicians together to obtain improved outcomes for patients. Understanding the vocabulary of and methods used in AI will help clinicians productively communicate with data scientists to collaborate on developing models that augment patient care. This primer includes discussion of approaches to identifying problems in practice that could benefit from application of AI and those that would not, as well as methods of training, validating, implementing, evaluating, and maintaining AI models. Some key limitations of AI related to the medication-use process are also discussed. CONCLUSION: As medication-use domain experts, pharmacists play a key role in developing and evaluating AI in healthcare. An understanding of the core concepts of AI is necessary to collaborate with data scientists and to critically evaluate AI's place in patient care, especially as clinical practice continues to evolve and develop.


Subjects
Pharmaceutical Services , Pharmacy , Physicians , Artificial Intelligence , Delivery of Health Care , Humans
19.
Lect Notes Monogr Ser ; 12446: 112-121, 2020.
Article in English | MEDLINE | ID: mdl-34456459

ABSTRACT

Semi-supervised methods have an increasing impact on computer vision tasks, making use of scarce labels on large datasets, yet these approaches have not been well translated to medical imaging. Of particular interest, the MixMatch method achieves significant performance improvement over popular semi-supervised learning methods with scarce labels in the CIFAR-10 dataset. In a complementary approach, Nullspace Tuning on equivalence classes offers the potential to leverage multiple subject scans when the ground truth for the subject is unknown. This work is the first to (1) explore MixMatch with Nullspace Tuning in the context of medical imaging and (2) characterize the impacts of the methods with diminishing labels. We consider two distinct medical imaging domains: skin lesion diagnosis and lung cancer prediction. In both cases we evaluate models trained with diminishing labeled data using supervised, MixMatch, and Nullspace Tuning methods, as well as MixMatch and Nullspace Tuning combined. The combination of MixMatch and Nullspace Tuning achieves an AUC of 0.755 in lung cancer diagnosis with only 200 labeled subjects on the National Lung Screening Trial and a balanced multi-class accuracy of 77% with only 779 labeled examples on HAM10000. This performance is similar to that of the fully supervised methods when all labels are available. In advancing data-driven methods in medical imaging, it is important to consider the use of current state-of-the-art semi-supervised learning methods from the greater machine learning community and their impact on the limitations of data acquisition and annotation.
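The Nullspace Tuning idea summarized above encourages a model to make identical predictions for inputs known to belong to the same equivalence class (e.g., repeated scans of one subject), even when the subject's label is unknown. A minimal sketch of such an equivalence-class penalty is below; the function name and the pure-Python formulation are illustrative assumptions, not the authors' implementation:

```python
def nullspace_tuning_penalty(predictions, equivalence_ids):
    """Mean squared disagreement between prediction vectors that share an
    equivalence class (e.g., repeated scans of the same subject).

    predictions: list of prediction vectors (lists of floats)
    equivalence_ids: parallel list of class identifiers
    """
    # Group predictions by their equivalence class.
    groups = {}
    for pred, eq in zip(predictions, equivalence_ids):
        groups.setdefault(eq, []).append(pred)

    penalty, n_pairs = 0.0, 0
    for members in groups.values():
        # Penalize every within-class pair for disagreeing.
        for i in range(len(members)):
            for j in range(i + 1, len(members)):
                penalty += sum((a - b) ** 2
                               for a, b in zip(members[i], members[j]))
                n_pairs += 1
    return penalty / n_pairs if n_pairs else 0.0


# In training, this term would be added to a supervised loss, e.g.:
#   loss = supervised_loss + lam * nullspace_tuning_penalty(preds, subject_ids)
```

The penalty is zero when same-subject predictions agree exactly and grows with their disagreement, which is what lets unlabeled repeat scans contribute a training signal.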

20.
PLoS One ; 14(11): e0225495, 2019.
Article in English | MEDLINE | ID: mdl-31774837

ABSTRACT

Increasing reliance on electronic medical records at large medical centers provides unique opportunities to perform population-level analyses exploring disease progression and etiology. The massive accumulation of diagnostic, procedure, and laboratory codes in one place has enabled the exploration of co-occurring conditions, their risk factors, and potential prognostic factors. While most of the readily identifiable associations in medical records are (now) well known to the scientific community, there is no doubt that many more relationships remain to be uncovered in EMR data. In this paper, we introduce a novel finding index to help with that task. This new index uses data mined in real time from PubMed abstracts to indicate the extent to which empirically discovered associations are already known (i.e., present in the scientific literature). Our methods leverage second-generation p-values, which better identify associations that are truly clinically meaningful. We illustrate our new method with three examples: Autism Spectrum Disorder, Alzheimer's Disease, and Optic Neuritis. Our results demonstrate wide utility for identifying new associations in EMR data that have the highest priority among the complex web of correlations and causalities. Data scientists and clinicians can work together more effectively to discover novel associations that are both empirically reliable and clinically understudied.


Subjects
Alzheimer Disease/epidemiology , Autism Spectrum Disorder/epidemiology , Electronic Health Records/statistics & numerical data , Optic Neuritis/epidemiology , Alzheimer Disease/pathology , Autism Spectrum Disorder/pathology , Comorbidity , Datasets as Topic , Humans , Optic Neuritis/pathology