Results 1 - 12 of 12
1.
PLoS One; 19(6): e0296985, 2024.
Article in English | MEDLINE | ID: mdl-38889117

ABSTRACT

Deep neural networks have been widely adopted in numerous domains due to their high performance and accessibility to developers and application-specific end-users. Fundamental to image-based applications is the development of Convolutional Neural Networks (CNNs), which possess the ability to automatically extract features from data. However, comprehending these complex models and their learned representations, which typically comprise millions of parameters and numerous layers, remains a challenge for both developers and end-users. This challenge arises due to the absence of interpretable and transparent tools to make sense of black-box models. There exists a growing body of Explainable Artificial Intelligence (XAI) literature, including a collection of methods denoted Class Activation Maps (CAMs), that seek to demystify what representations the model learns from the data, how it informs a given prediction, and why it, at times, performs poorly in certain tasks. We propose a novel XAI visualization method denoted CAManim that seeks to simultaneously broaden and focus end-user understanding of CNN predictions by animating the CAM-based network activation maps through all layers, effectively depicting from end-to-end how a model progressively arrives at the final layer activation. Herein, we demonstrate that CAManim works with any CAM-based method and various CNN architectures. Beyond qualitative model assessments, we additionally propose a novel quantitative assessment that expands upon the Remove and Debias (ROAD) metric, pairing the qualitative end-to-end network visual explanations assessment with our novel quantitative "yellow brick ROAD" assessment (ybROAD). This builds upon prior research to address the increasing demand for interpretable, robust, and transparent model assessment methodology, ultimately improving an end-user's trust in a given model's predictions. Examples and source code can be found at: https://omni-ml.github.io/pytorch-grad-cam-anim/.
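As a rough, hedged illustration of the layer-by-layer idea (not the authors' CAManim implementation, which is available at the URL above), the following Python sketch computes one Grad-CAM heat map per convolutional layer of a torchvision ResNet-18 using the open-source pytorch-grad-cam package; stitching the resulting frames together yields the kind of end-to-end animation described. The target class index and the random input image are placeholders.

```python
# Sketch only: per-layer Grad-CAM frames with pytorch-grad-cam and torchvision.
import numpy as np
import torch
from torchvision.models import resnet18, ResNet18_Weights
from pytorch_grad_cam import GradCAM
from pytorch_grad_cam.utils.image import show_cam_on_image
from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget

model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()
rgb_img = np.random.rand(224, 224, 3).astype(np.float32)       # stand-in image in [0, 1]
input_tensor = torch.from_numpy(rgb_img).permute(2, 0, 1)[None]

conv_layers = [m for m in model.modules() if isinstance(m, torch.nn.Conv2d)]
frames = []
for layer in conv_layers:                                       # shallow to deep
    cam = GradCAM(model=model, target_layers=[layer])
    heatmap = cam(input_tensor=input_tensor,
                  targets=[ClassifierOutputTarget(0)])[0]       # (H, W) map in [0, 1]
    frames.append(show_cam_on_image(rgb_img, heatmap, use_rgb=True))
# `frames` can now be written out as an animation, e.g. with imageio.mimsave.
```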


Subjects
Neural Networks, Computer; Artificial Intelligence; Humans; Algorithms; Deep Learning
2.
Sci Rep; 14(1): 9013, 2024 Apr 19.
Article in English | MEDLINE | ID: mdl-38641713

ABSTRACT

Deep learning algorithms have demonstrated remarkable potential in clinical diagnostics, particularly in medical imaging. In this study, we investigated the application of deep learning models to the early detection of fetal kidney anomalies. To provide an enhanced interpretation of those models' predictions, we proposed an adapted two-class representation and developed a multi-class model interpretation approach for problems with more than two labels and variable hierarchical grouping of labels. Additionally, we employed the explainable AI (XAI) visualization tools Grad-CAM and HiResCAM to gain insights into model predictions and identify reasons for misclassifications. The study dataset consisted of 969 ultrasound images from unique patients: 646 control images and 323 cases of kidney anomalies, including 259 cases of unilateral urinary tract dilation and 64 cases of unilateral multicystic dysplastic kidney. The best-performing model achieved a cross-validated area under the ROC curve of 91.28% ± 0.52%, with an overall accuracy of 84.03% ± 0.76%, sensitivity of 77.39% ± 1.99%, and specificity of 87.35% ± 1.28%. Our findings emphasize the potential of deep learning models in predicting kidney anomalies from limited prenatal ultrasound imagery. The proposed adaptations in model representation and interpretation represent a novel solution to multi-class prediction problems.
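One plausible, simplified reading of the "adapted two-class representation" is collapsing the multi-class outputs into a control-versus-any-anomaly probability. The sketch below illustrates that grouping with PyTorch; the class indices and inputs are purely illustrative assumptions, not the authors' labels or model.

```python
# Hedged sketch: collapse 3-class logits into a binary any-anomaly probability.
import torch
import torch.nn.functional as F

CONTROL, UT_DILATION, MCDK = 0, 1, 2        # hypothetical label indices

def anomaly_probability(logits: torch.Tensor) -> torch.Tensor:
    """Return P(any kidney anomaly) per image from 3-class logits."""
    probs = F.softmax(logits, dim=1)                    # (N, 3)
    return probs[:, [UT_DILATION, MCDK]].sum(dim=1)     # (N,)

logits = torch.randn(4, 3)           # stand-in for model outputs on 4 images
print(anomaly_probability(logits))   # binary scores usable for ROC analysis
```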


Subjects
Deep Learning; Kidney Diseases; Urinary Tract; Pregnancy; Female; Humans; Ultrasonography, Prenatal/methods; Prenatal Diagnosis/methods; Kidney Diseases/diagnostic imaging; Urinary Tract/abnormalities
3.
J Obstet Gynaecol Can; 46(6): 102435, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38458270

ABSTRACT

OBJECTIVES: To compare surgeon responses regarding their surgical plan before and after receiving a patient-specific three-dimensional (3D)-printed model of a patient's multifibroid uterus created from their magnetic resonance imaging. METHODS: 3D-printed models were derived from standard-of-care pelvic magnetic resonance images of patients scheduled for surgical intervention for a multifibroid uterus. Relevant anatomical structures were printed using a combination of transparent and opaque resin types. 3D models were used for 7 surgical cases (5 myomectomies, 2 hysterectomies). A staff surgeon and 1 or 2 surgical fellows were present for each case. Surgeons completed a questionnaire before and after receiving the model, documenting surgical approach, perceived difficulty, and confidence in the surgical plan. A postoperative questionnaire was used to assess surgeon experience with the 3D models. RESULTS: Two staff surgeons and 3 clinical fellows participated in this study. A total of 15 surgeon responses were collected across the 7 cases. After viewing the models, an increase in perceived surgical difficulty and confidence in the surgical plan was reported in 12/15 and 7/15 responses, respectively. Anticipated surgical time had a mean ± SD absolute change of 44.0 ± 47.9 minutes, and anticipated blood loss had an absolute change of 100 ± 103.5 cc. Two of 15 responses reported a change in pre-surgical approach. Intraoperative model reference was reported to change the dissection route in 8/15 surgeon responses. On average, surgeons rated their experience using 3D models 8.6/10 for pre-surgical planning and 8.1/10 for intraoperative reference. CONCLUSIONS: Patient-specific 3D anatomical models may be a useful tool to increase a surgeon's understanding of complex gynaecologic anatomy and to improve their surgical plan. Future work is needed to evaluate the impact of 3D models on surgical outcomes in gynaecology.


Subjects
Magnetic Resonance Imaging; Models, Anatomic; Printing, Three-Dimensional; Uterus; Humans; Female; Uterus/surgery; Uterus/diagnostic imaging; Uterus/anatomy & histology; Uterine Neoplasms/surgery; Uterine Neoplasms/diagnostic imaging; Uterine Myomectomy/methods; Hysterectomy/methods; Leiomyoma/surgery; Leiomyoma/diagnostic imaging; Leiomyoma/pathology; Adult; Surgeons
4.
Can Assoc Radiol J; 74(4): 713-722, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37070854

ABSTRACT

PURPOSE: Rapid identification of hematoma expansion (HE) risk at baseline is a priority in intracerebral hemorrhage (ICH) patients and may impact clinical decision making. Predictive scores using clinical features and Non-Contrast Computed Tomography (NCCT)-based features exist; however, the extent to which each feature set contributes to identification is not well established. This paper aims to investigate the relative value of clinical, radiological, and radiomics features in HE prediction. METHODS: Original data were retrospectively obtained from three major prospective clinical trials ["Spot Sign" Selection of Intracerebral Hemorrhage to Guide Hemostatic Therapy (SPOTLIGHT), NCT01359202; The Spot Sign for Predicting and Treating ICH Growth Study (STOP-IT), NCT00810888]. Patients' baseline and follow-up scans following ICH were included. Clinical, NCCT radiological, and radiomics features were extracted, and multivariate modeling was conducted on each feature set. RESULTS: 317 patients from 38 sites met the inclusion criteria. Warfarin use (p = 0.001) and GCS score (p = 0.046) were significant clinical predictors of HE. The best-performing model for HE prediction included clinical, radiological, and radiomic features, with an area under the curve (AUC) of 87.7%. NCCT radiological features improved the AUC of the clinical benchmark model by 6.5% and of the combined clinical and radiomic model by 6.4%. Addition of radiomics features improved the goodness of fit of both the clinical (p = 0.012) and the clinical plus NCCT radiological (p = 0.007) models, with marginal improvements in AUC. Inclusion of NCCT radiological signs was best for ruling out HE, whereas the radiomic features were best for ruling in HE. CONCLUSION: NCCT-based radiological and radiomics features can improve HE prediction when added to clinical features.
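For readers unfamiliar with this kind of feature-set comparison, the sketch below shows the general pattern (cross-validated logistic-regression AUCs for nested feature sets) using scikit-learn and synthetic data; the feature counts, data, and model choice are placeholders, not the study's variables or pipeline.

```python
# Hedged sketch: compare nested feature sets by cross-validated AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 317                                     # cohort size from the abstract
clinical   = rng.normal(size=(n, 5))        # e.g. warfarin use, GCS, ...
radiologic = rng.normal(size=(n, 8))        # NCCT signs
radiomic   = rng.normal(size=(n, 20))       # texture/shape features
y = rng.integers(0, 2, size=n)              # hematoma expansion (0/1)

feature_sets = {
    "clinical": clinical,
    "clinical + NCCT": np.hstack([clinical, radiologic]),
    "clinical + NCCT + radiomics": np.hstack([clinical, radiologic, radiomic]),
}
for name, X in feature_sets.items():
    auc = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                          cv=5, scoring="roc_auc").mean()
    print(f"{name}: cross-validated AUC = {auc:.2f}")
```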


Subjects
Cerebral Hemorrhage; Hematoma; Humans; Retrospective Studies; Prospective Studies; Cerebral Hemorrhage/diagnostic imaging; Hematoma/diagnostic imaging; Tomography, X-Ray Computed
5.
3D Print Med; 9(1): 6, 2023 Mar 17.
Article in English | MEDLINE | ID: mdl-36932284

ABSTRACT

OBJECTIVE: Developments in 3-dimensional (3D) printing technology have made it possible to produce high-quality, affordable 3D-printed models for use in medicine. As a result, a growing number of assessments of this approach are being published in the medical literature. The objective of this study was to outline the clinical applications of individualized 3D printing in gynecology through a scoping review. DATA SOURCES: Four medical databases (Medline, Embase, Cochrane CENTRAL, Scopus) and grey literature were searched for publications meeting eligibility criteria up to 31 May 2021. STUDY ELIGIBILITY CRITERIA: Publications were included if they were published in English, had a gynecologic context, and involved production of patient-specific 3D-printed product(s). STUDY APPRAISAL AND SYNTHESIS METHODS: Studies were manually screened and assessed for eligibility by two independent reviewers, and data were extracted using pre-established criteria in Covidence software. RESULTS: Overall, 32 studies (15 abstracts, 17 full-text articles) were included in the scoping review. Most studies were either case reports (12/32, 38%) or case series (15/32, 47%). Gynecologic sub-specialties in which the 3D-printed models were intended for use included gynecologic oncology (21/32, 66%), benign gynecology (6/32, 19%), pediatrics (2/32, 6%), urogynecology (2/32, 6%), and reproductive endocrinology and infertility (1/32, 3%). Twenty studies (63%) printed 5 or fewer models, and 6/32 studies (19%) printed more than 5 (up to 50) models. Types of 3D models printed included anatomical models (11/32, 34%), medical devices (2/32, 6%), and template/guide/cylindrical applicators for brachytherapy (19/32, 59%). CONCLUSIONS: Our scoping review has outlined novel clinical applications for individualized 3D-printed models in gynecology. To date, they have mainly been used for production of patient-specific 3D-printed brachytherapy guides/applicators in patients with gynecologic cancer. However, individualized 3D printing shows great promise for utility in surgical planning, surgical education, and production of patient-specific devices across gynecologic subspecialties. Evidence supporting the clinical value of individualized 3D printing in gynecology is limited by studies with small sample sizes and non-standardized reporting, which should be the focus of future studies.

6.
Neuroradiology; 64(12): 2357-2362, 2022 Dec.
Article in English | MEDLINE | ID: mdl-35913525

ABSTRACT

PURPOSE: Data extraction from radiology free-text reports is time consuming when performed manually. Recently, more automated extraction methods using natural language processing (NLP) have been proposed. A previously developed rule-based NLP algorithm showed promise in its ability to extract stroke-related data from radiology reports. We aimed to externally validate the accuracy of CHARTextract, a rule-based NLP algorithm, in extracting stroke-related data from free-text radiology reports. METHODS: Free-text reports of CT angiography (CTA) and perfusion (CTP) studies of consecutive patients with acute ischemic stroke admitted to a regional stroke center for endovascular thrombectomy were analyzed from January 2015 to 2021. Stroke-related variables were manually extracted from the clinical reports as the reference standard, including proximal and distal anterior circulation occlusion, posterior circulation occlusion, presence of ischemia or hemorrhage, Alberta stroke program early CT score (ASPECTS), and collateral status. These variables were simultaneously extracted using the rule-based NLP algorithm. The NLP algorithm's accuracy, specificity, sensitivity, positive predictive value (PPV), and negative predictive value (NPV) were assessed. RESULTS: The NLP algorithm's accuracy was >90% for identifying distal anterior occlusion, posterior circulation occlusion, hemorrhage, and ASPECTS. Accuracy was 85%, 74%, and 79% for proximal anterior circulation occlusion, presence of ischemia, and collateral status, respectively. The algorithm confirmed the absence of variables from radiology reports with 87-100% accuracy. CONCLUSIONS: Rule-based NLP has moderate to good performance for stroke-related data extraction from free-text imaging reports. The algorithm's accuracy was affected by inconsistent report styles and lexicon among reporting radiologists.
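CHARTextract's rule engine is not reproduced here, but the toy Python sketch below conveys the general flavour of rule-based extraction from free-text reports (keyword and regex rules per variable). The rules, variable names, and example report are illustrative assumptions; negation and word-boundary handling are deliberately omitted, which is one reason such systems are sensitive to report style.

```python
# Toy rule-based extraction sketch; not CHARTextract's actual rules.
import re

VESSEL_TERMS = {
    "proximal_anterior_occlusion": ["m1", "ica", "carotid terminus"],
    "posterior_circulation_occlusion": ["basilar", "vertebral", "pca"],
}

def has_occlusion(text: str, vessels) -> bool:
    """Rule: the report mentions an occlusion together with one of the vessels."""
    t = text.lower()
    return "occlus" in t and any(v in t for v in vessels)

def extract(report: str) -> dict:
    """Apply simple keyword/regex rules; negation handling is omitted for brevity."""
    aspects = re.search(r"\baspects?\b\D{0,10}(\d{1,2})", report, re.IGNORECASE)
    result = {name: has_occlusion(report, vessels)
              for name, vessels in VESSEL_TERMS.items()}
    result["hemorrhage"] = bool(re.search(r"\bh(?:a)?emorrhage\b", report, re.IGNORECASE))
    result["aspects"] = int(aspects.group(1)) if aspects else None
    return result

print(extract("CTA head: Occlusion of the left M1 segment. ASPECTS 8."))
```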


Subjects
Ischemic Stroke; Stroke; Humans; Natural Language Processing; Stroke/diagnostic imaging; Algorithms; Automation
7.
PLoS One; 17(6): e0269323, 2022.
Article in English | MEDLINE | ID: mdl-35731736

ABSTRACT

OBJECTIVE: To develop and internally validate a deep-learning algorithm for the diagnosis of cystic hygromas in the first trimester from fetal ultrasound images. METHODS: All first-trimester ultrasound scans with a diagnosis of a cystic hygroma between 11 and 14 weeks gestation at our tertiary care centre in Ontario, Canada, were studied. Ultrasound scans with normal nuchal translucency (NT) were used as controls. The dataset was partitioned with 75% of images used for model training and 25% used for model validation. Images were analyzed using a DenseNet model, and the accuracy of the trained model in correctly identifying cases of cystic hygroma was assessed by calculating sensitivity, specificity, and the area under the receiver-operating characteristic (ROC) curve. Gradient-weighted class activation heat maps (Grad-CAM) were generated to assess model interpretability. RESULTS: The dataset included 289 sagittal fetal ultrasound images: 129 cystic hygroma cases and 160 normal NT controls. Overall model accuracy was 93% (95% CI: 88-98%), sensitivity 92% (95% CI: 79-100%), specificity 94% (95% CI: 91-96%), and the area under the ROC curve 0.94 (95% CI: 0.89-1.0). Grad-CAM heat maps demonstrated that the model predictions were driven primarily by the fetal posterior cervical area. CONCLUSIONS: Our findings demonstrate that deep-learning algorithms can achieve high accuracy in diagnostic interpretation of cystic hygroma in the first trimester, validated against expert clinical assessment.
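As a hedged sketch of the kind of model setup described (data loading, augmentation, training loop, and Grad-CAM are omitted), the snippet below adapts a torchvision DenseNet-121 to a two-class cystic hygroma vs. normal NT head and runs one illustrative training step on synthetic tensors in place of the ultrasound images; it is not the study's code.

```python
# Minimal sketch: DenseNet-121 with a binary classification head.
import torch
import torch.nn as nn
from torchvision.models import densenet121

model = densenet121(weights=None)                        # pretrained weights optional
model.classifier = nn.Linear(model.classifier.in_features, 2)

images = torch.randn(8, 3, 224, 224)                     # stand-in ultrasound batch
labels = torch.randint(0, 2, (8,))                       # 0 = normal NT, 1 = cystic hygroma
loss = nn.CrossEntropyLoss()(model(images), labels)
loss.backward()                                          # one illustrative training step
```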


Subjects
Deep Learning; Lymphangioma, Cystic; Chromosome Aberrations; Female; Humans; Lymphangioma, Cystic/diagnostic imaging; Ontario; Pregnancy; Pregnancy Trimester, First; Ultrasonography, Prenatal
8.
3D Print Med; 7(1): 17, 2021 Jul 05.
Article in English | MEDLINE | ID: mdl-34224043

ABSTRACT

BACKGROUND: Patient-specific three-dimensional (3D) models can be derived from two-dimensional medical images, such as magnetic resonance (MR) images. 3D models have been shown to improve anatomical comprehension by providing more accurate assessments of anatomical volumes and better perspectives of structural orientations relative to adjacent structures. The clinical benefit of using patient-specific 3D-printed models has been highlighted in the fields of orthopaedics, cardiothoracic surgery, and neurosurgery for the purpose of pre-surgical planning. However, reports on the clinical use of 3D-printed models in the field of gynecology are limited. MAIN TEXT: This article aims to provide a brief overview of the principles of 3D printing and the steps required to derive patient-specific, anatomically accurate 3D-printed models of gynecologic anatomy from MR images. Examples of 3D-printed models for uterine fibroids and endometriosis are presented, along with a discussion of the barriers to clinical uptake and the future directions for 3D printing in the field of gynecological surgery. CONCLUSION: Successful gynecologic surgery requires a thorough understanding of the patient's anatomy and burden of disease. Future use of patient-specific 3D-printed models is encouraged so that their clinical benefit can be better understood and evidence can be gathered to support their use in standard of care.

9.
3D Print Med; 7(1): 2, 2021 Jan 06.
Article in English | MEDLINE | ID: mdl-33409814

ABSTRACT

BACKGROUND: This prospective study investigated whether the use of a 3D-printed model facilitates novice learning of radiologic anatomy on multiplanar computed tomography (CT) when compared to traditional 2D-based learning tools; specifically, whether the use of a 3D-printed model improved interpretation of multiplanar CT tracheobronchial anatomy. METHODS: Thirty-one medical students (10 F, 21 M) from years one to three were recruited, matched for gender and level of training, and randomized to a 2D or 3D group. Students underwent a 20-min self-study session using either a 2D-printed image or a 3D-printed model of the tracheobronchial tree. Immediately after, students answered 10 multiple-choice questions (Test 1) identifying tracheobronchial tree branches on multiplanar CT images. Two weeks later, an identical test (Test 2) was used to assess retention of information. Mean scores of the 2D and 3D groups were calculated. Student's t test was used to compare mean differences in test scores, and analysis of variance (ANOVA) was used to assess the interaction of gender, CT imaging plane, and time on test scores between the two groups. RESULTS: For Test 1, the 2D group had a higher mean score than the 3D group, although the difference was not statistically significant (7.69 vs. 7.43, p = 0.39). Mean scores for Test 2 were significantly lower than for Test 1 (7.00 vs. 7.57, p = 0.03), with a mean score decline in the 2D group (Test 1 = 7.69, Test 2 = 6.63, p = 0.03) and a similar score in the 3D group (Test 1 and 2 = 7.43). There was no statistically significant interaction of gender and test score over time. A significant interaction between group and time of test was found for axial CT images but not for coronal images. CONCLUSIONS: Use of a 3D-printed model of the tracheobronchial anatomy had no immediate advantage over traditional 2D-printed images for learning CT anatomy. However, use of a 3D model improved students' ability to retain learned information, irrespective of gender.
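The score comparisons described can be reproduced in outline with SciPy, as in the hedged sketch below; only the group means come from the abstract, while the group sizes and score spreads are assumptions, and the mixed ANOVA for the group-by-time interaction is not shown.

```python
# Hedged sketch: independent-samples and paired t tests on synthetic scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
scores_2d_t1 = rng.normal(7.69, 1.0, 16)   # means taken from the abstract;
scores_3d_t1 = rng.normal(7.43, 1.0, 15)   # group sizes and SDs are assumed
scores_2d_t2 = rng.normal(6.63, 1.0, 16)

print(stats.ttest_ind(scores_2d_t1, scores_3d_t1))   # 2D vs 3D at Test 1
print(stats.ttest_rel(scores_2d_t1, scores_2d_t2))   # 2D group, Test 1 vs Test 2
```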

12.
3D Print Med; 3(1): 14, 2017.
Article in English | MEDLINE | ID: mdl-29782619

ABSTRACT

In this work, we provide specific clinical examples to demonstrate basic practical techniques involved in image segmentation, computer-aided design, and 3D printing. A step-by-step approach using United States Food and Drug Administration-cleared software is provided to enhance surgical intervention in a patient with a complex superior sulcus tumor. Furthermore, patient-specific device creation is demonstrated using dedicated computer-aided design software. Relevant anatomy for these tasks is obtained from CT Digital Imaging and Communications in Medicine (DICOM) images, leading to the generation of 3D-printable files and delivery of these files to a 3D printer.
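The workflow in the article relies on FDA-cleared commercial software; purely as an open-source illustration of the segmentation-to-printable-file step it describes, the sketch below thresholds a synthetic volume (standing in for a DICOM-derived CT segmentation), extracts a surface mesh with marching cubes, and writes an STL file that a printer's slicer can consume, using scikit-image and trimesh.

```python
# Open-source sketch: segmented volume -> surface mesh -> STL file.
import numpy as np
from skimage import measure
import trimesh

# Synthetic "segmented anatomy": a binary sphere inside a 64^3 volume.
z, y, x = np.mgrid[:64, :64, :64]
volume = (((x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2) < 20 ** 2).astype(np.float32)

verts, faces, _, _ = measure.marching_cubes(volume, level=0.5, spacing=(1.0, 1.0, 1.0))
mesh = trimesh.Trimesh(vertices=verts, faces=faces)
mesh.export("anatomy_model.stl")        # file a 3D printer's slicer can consume
```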
