Results 1 - 2 of 2
1.
Radiol Artif Intell ; : e240067, 2024 Jul 17.
Article in English | MEDLINE | ID: mdl-39017032

ABSTRACT

"Just Accepted" papers have undergone full peer review and have been accepted for publication in Radiology: Artificial Intelligence. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content. The diagnostic performance of an artificial intelligence (AI) clinical decision support (CDS) solution for acute intracranial hemorrhage (ICH) detection was assessed in a large teleradiology practice. The impact on radiologist read times and system efficiency was also quantified. A total of 61,704 consecutive noncontrast head CTs (NCHCT) were retrospectively evaluated. System performance was calculated along with mean and median read time values for NCHCT pre-AI (baseline; August 2021-May 2022) and post-AI (January 2023-February 2024). The AI solution had a sensitivity of 75.6%, specificity of 92.1%, accuracy of 91.7%, prevalence of 2.70%, and positive predictive value of 21.1%. Of the 56,745 post-AI NCHCT with no bleed identified by a radiologist, examinations falsely flagged as suspected ICH by the AI solution (n = 4,464) took 9min40sec on average/8min7sec median to interpret as compared with 8min25sec average/6min48ec median for unremarkable NCHCT pre-AI (n = 49,007) (P < .001) and 8min38sec average/6min53sec median post-AI when ICH was not suspected by the AI solution (n = 52,281) (P < .001). NCHCT with no bleed identified by the AI but reported as positive for ICH by the radiologist (n = 384) took 14min23sec on average/13min35sec median to interpret as compared with a read time of 13min34sec mean/12min30sec median for NCHCT correctly reported as a bleed by the AI (n = 1192) (P = .04). With lengthened read times for falsely flagged examinations, system inefficiencies may outweigh the potential benefits of using the tool in a high volume, low prevalence environment. ©RSNA, 2024.

2.
Curr Probl Diagn Radiol ; 48(6): 535-542, 2019.
Article in English | MEDLINE | ID: mdl-30244814

ABSTRACT

Recognizing and preventing diagnostic errors is an increasingly emphasized topic across medicine, and abdominal imaging is no exception. Peer-learning strives for quality improvement through understanding why errors occur and identifying opportunities to prevent errors from recurring. In an effort to learn from mistakes, our abdominal imaging section initiated a Peer Learning Conference, where errors are discussed and compartmentalized into one or more of the following categories: Observation, Interpretation, Communication, and Inadequate Data Gathering. In this manuscript, the structure of our Peer Learning Conference is introduced and the components of each discrepancy category are described in detail. Images are included to highlight learning points through exemplary cases from the conference.


Subjects
Diagnostic Errors/classification, Diagnostic Errors/prevention & control, Peer Review, Health Care, Radiography, Abdominal/standards, Radiology/education, Clinical Competence/standards, Congresses as Topic, Formative Feedback, Humans, Quality Assurance, Health Care