1.
Article in English | MEDLINE | ID: mdl-38981939

ABSTRACT

PURPOSE: This project examines ChatGPT's potential to enhance the readability of patient educational materials about interventional radiology (IR) procedures.

METHODS AND MATERIALS: The descriptions of IR procedures from the Cardiovascular and Interventional Radiological Society of Europe (CIRSE) were used as the original text. Readability scores were calculated using three metrics: Flesch Reading Ease (FRE), Gunning Fog (GF), and the Automated Readability Index (ARI), via an online calculator (https://readabilityformulas.com). FRE is scored on a scale of 0-100, where 100 indicates easy-to-read text; GF and ARI represent the grade level required to comprehend the text. The DISCERN instrument measured credibility and reliability. ChatGPT was prompted to simplify the texts to a fifth-grade reading level, and readability and DISCERN scores were then recalculated for comparison. Statistical significance was determined using a Wilcoxon signed-rank test. Articles were subsequently organized into subgroups and analyzed.

RESULTS: 73 interventional radiology procedures from CIRSE were analyzed. The original FRE score of 47.2 (Difficult) improved to 78.4 (Fairly Easy) after ChatGPT simplification. GF and ARI scores dropped from 14.4 and 11.2 to 7.8 and 5.8, respectively, a significant improvement (p < 0.001). However, the average DISCERN score decreased from 3.73 to 2.99 (p < 0.001) after ChatGPT simplification.

CONCLUSION: This study shows ChatGPT's ability to make interventional radiology descriptions more readable but highlights its difficulty in maintaining the original's reliability, suggesting the need for human review and prompt engineering to enhance outcomes.

LEVEL OF EVIDENCE: Level 6.
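The three readability metrics named in the methods have standard published formulas. The following Python sketch illustrates them; it is not the calculator used in the study (readabilityformulas.com), and the vowel-group syllable heuristic is a crude stand-in for the syllable dictionaries real calculators use:

```python
import re

def readability_scores(text):
    """Compute (FRE, Gunning Fog, ARI) for a text using the standard formulas.

    Syllables are estimated by counting vowel groups -- an assumption;
    dictionary-based counters are more accurate.
    """
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))

    def syllables(word):
        # crude heuristic: each run of vowels counts as one syllable
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    n_syll = sum(syllables(w) for w in words)
    n_chars = sum(len(w) for w in words)
    # Gunning Fog counts "complex" words: those with 3+ syllables
    n_complex = sum(1 for w in words if syllables(w) >= 3)

    fre = 206.835 - 1.015 * (n_words / sentences) - 84.6 * (n_syll / n_words)
    gf = 0.4 * ((n_words / sentences) + 100.0 * (n_complex / n_words))
    ari = 4.71 * (n_chars / n_words) + 0.5 * (n_words / sentences) - 21.43
    return fre, gf, ari
```

Under these formulas, lower FRE means harder text while lower GF/ARI means an easier (lower) grade level, which is why simplification raised the first score and lowered the other two.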

2.
Struct Dyn ; 11(3): 034701, 2024 May.
Article in English | MEDLINE | ID: mdl-38774441

ABSTRACT

Studying protein dynamics and conformational heterogeneity is crucial for understanding biomolecular systems and treating disease. Despite the deposition of over 215,000 macromolecular structures in the Protein Data Bank and the advent of AI-based structure prediction tools such as AlphaFold2, RoseTTAFold, and ESMFold, static representations are typically produced, which fail to fully capture macromolecular motion. Here, we discuss the importance of integrating experimental structures with computational clustering to explore the conformational landscapes that manifest protein function. We describe the method developed by the Protein Data Bank in Europe - Knowledge Base to identify distinct conformational states, demonstrate the resource's primary use cases through examples, and discuss the need for further efforts to annotate protein conformations with functional information. Such initiatives will be crucial in unlocking the potential of protein dynamics data, expediting drug discovery research, and deepening our understanding of macromolecular mechanisms.

3.
J Am Coll Radiol ; 21(7): 1072-1078, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38224925

ABSTRACT

BACKGROUND AND PURPOSE: Large language models (LLMs) have seen explosive growth, but their potential role in medical applications remains underexplored. Our study investigates the capability of LLMs to predict the most appropriate imaging study for specific clinical presentations across subspecialty areas in radiology.

METHODS AND MATERIALS: Chat Generative Pretrained Transformer (ChatGPT) by OpenAI and Glass AI by Glass Health were tested on 1,075 clinical scenarios from 11 ACR expert panels to determine the most appropriate imaging study, benchmarked against the ACR Appropriateness Criteria. LLM responses were scored on a scale of 0 to 3, with partial scores given for nonspecific answers. Two responses were generated per clinical presentation and averaged to yield that presentation's score; clinical presentation scores within each topic area were averaged to yield the topic's score; and the topic scores within a panel were averaged to determine the panel's final score. The Pearson correlation coefficient (R value) was calculated for each panel to assess context-specific performance.

RESULTS: Glass AI scored significantly higher than ChatGPT (2.32 ± 0.67 versus 2.08 ± 0.74, P = .002). Both LLMs performed best in the Polytrauma, Breast, and Vascular panels and worst in the Neurologic, Musculoskeletal, and Cardiac panels. Glass AI outperformed ChatGPT in 10 of 11 panels, the exception being Obstetrics and Gynecology. Agreement between the two LLMs was highest in the Pediatrics, Neurologic, and Thoracic panels, and disagreement was greatest in the Vascular, Breast, and Urologic panels.

CONCLUSION: LLMs can be used to predict appropriate imaging studies, with Glass AI's superior performance indicating the benefit of additional medical-text training. This supports the potential of LLMs in radiologic decision making.
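The hierarchical averaging described in the methods (two responses per presentation, presentation averages per topic, topic averages per panel) can be sketched in a few lines of Python; the data layout and function names here are hypothetical, and the plain Pearson formula stands in for whatever statistical package the authors used:

```python
from statistics import mean

def panel_score(panel):
    """panel: {topic: [(resp1, resp2), ...]} with each response scored 0-3.
    Average the two responses per presentation, then presentations per
    topic, then topics for the panel's final score."""
    topic_scores = []
    for presentations in panel.values():
        presentation_scores = [(a + b) / 2 for a, b in presentations]
        topic_scores.append(mean(presentation_scores))
    return mean(topic_scores)

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient between two score lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```

Note that averaging topics (rather than pooling all presentations) weights each topic equally within a panel, regardless of how many scenarios it contains.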


Subject(s)
Radiology , Humans , Clinical Decision-Making
4.
J Am Coll Radiol ; 20(10): 1004-1009, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37423349

ABSTRACT

PURPOSE: Large language models (LLMs) have demonstrated a level of competency within the medical field. The aim of this study was to explore the ability of LLMs to predict the best neuroradiologic imaging modality given specific clinical presentations, and to determine whether LLMs can outperform an experienced neuroradiologist in this regard.

METHODS: ChatGPT and Glass AI, a health care-based LLM by Glass Health, were used. ChatGPT was prompted to rank the three best neuroimaging modalities, whereas only the best response was taken from Glass AI and from the neuroradiologist. The responses were compared with the ACR Appropriateness Criteria for 147 conditions. Clinical scenarios were passed to each LLM twice to account for stochasticity. Each output was scored out of 3 on the basis of the criteria, with partial scores given for nonspecific answers.

RESULTS: ChatGPT and Glass AI scored 1.75 and 1.83, respectively, with no statistically significant difference. The neuroradiologist scored 2.20, significantly outperforming both LLMs. ChatGPT was also the more inconsistent of the two LLMs: the score difference between its two outputs per scenario was statistically significant, as were the score differences between its ranked responses.

CONCLUSIONS: LLMs perform well in selecting appropriate neuroradiologic imaging procedures when prompted with specific clinical scenarios. ChatGPT performed on par with Glass AI, suggesting that with medical-text training, ChatGPT could significantly improve in this application. Neither LLM outperformed an experienced neuroradiologist, indicating the need for continued improvement in the medical context.


Subject(s)
Language , Neuroimaging , Humans , Radiologists
5.
Biol Bull ; 243(1): 50-75, 2022 Aug.
Article in English | MEDLINE | ID: mdl-36108034

ABSTRACT

Sea star wasting, marked in a variety of sea star species by varying degrees of skin lesions followed by disintegration, recently caused one of the largest marine die-offs ever recorded on the west coast of North America, killing billions of sea stars. Despite the important ramifications this mortality had for coastal benthic ecosystems, such as increased abundance of prey, little is known about the causes of the disease or the mechanisms of its progression. Although studies have pointed to a range of causal mechanisms, including viruses and environmental effects, the broad spatial and depth range of affected populations leaves many questions open about both infectious and non-infectious mechanisms. Wasting appears to start with degradation of mutable connective tissue in the body wall, leading to disintegration of the epidermis. Here, we briefly review basic sea star biology in the context of sea star wasting and present our current knowledge and hypotheses related to the symptoms, the microbiome, the viruses, and the associated environmental stressors. Throughout the article, we also highlight knowledge gaps and the data needed to better understand sea star wasting mechanistically, its causes, and potential management.


Subject(s)
Ecosystem , Starfish , Animals , Biology