Results 1 - 20 of 34,375
1.
Radiol Cardiothorac Imaging ; 6(3): e230177, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38722232

ABSTRACT

Purpose To develop a deep learning model for increasing cardiac cine frame rate while maintaining spatial resolution and scan time. Materials and Methods A transformer-based model was trained and tested on a retrospective sample of cine images from 5840 patients (mean age, 55 years ± 19 [SD]; 3527 male patients) referred for clinical cardiac MRI from 2003 to 2021 at nine centers; images were acquired using 1.5- and 3-T scanners from three vendors. Data from three centers were used for training and testing (4:1 ratio). The remaining data were used for external testing. Cines with downsampled frame rates were restored using linear, bicubic, and model-based interpolation. The root mean square error between interpolated and original cine images was modeled using ordinary least squares regression. In a prospective study of 49 participants referred for clinical cardiac MRI (mean age, 56 years ± 13; 25 male participants) and 12 healthy participants (mean age, 51 years ± 16; eight male participants), the model was applied to cines acquired at 25 frames per second (fps), thereby doubling the frame rate, and these interpolated cines were compared with actual 50-fps cines. The preference of two readers based on perceived temporal smoothness and image quality was evaluated using a noninferiority margin of 10%. Results The model generated artifact-free interpolated images. Ordinary least squares regression analysis accounting for vendor and field strength showed lower error (P < .001) with model-based interpolation compared with linear and bicubic interpolation in internal and external test sets. The highest proportion of reader choices was "no preference" (84 of 122) between actual and interpolated 50-fps cines. The 90% CI for the difference between reader proportions favoring collected (15 of 122) and interpolated (23 of 122) high-frame-rate cines was -0.01 to 0.14, indicating noninferiority. Conclusion A transformer-based deep learning model increased cardiac cine frame rates while preserving both spatial resolution and scan time, resulting in images with quality comparable to that of images obtained at actual high frame rates. Keywords: Functional MRI, Heart, Cardiac, Deep Learning, High Frame Rate Supplemental material is available for this article. © RSNA, 2024.
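To make the interpolation baseline concrete, below is a minimal sketch of linear frame interpolation and the RMSE comparison described above; the array shapes and random stand-in cine are illustrative assumptions, not the study's data pipeline or its transformer model.

```python
# Linear-interpolation baseline and RMSE metric, as a hedged sketch.
import numpy as np

def linear_interpolate_frames(cine: np.ndarray) -> np.ndarray:
    """Double the frame rate of a (T, H, W) cine by averaging adjacent frames."""
    mids = 0.5 * (cine[:-1] + cine[1:])                    # in-between frames
    out = np.empty((2 * cine.shape[0] - 1, *cine.shape[1:]), dtype=cine.dtype)
    out[0::2] = cine                                       # original frames
    out[1::2] = mids                                       # interpolated frames
    return out

def rmse(reference: np.ndarray, estimate: np.ndarray) -> float:
    return float(np.sqrt(np.mean((reference - estimate) ** 2)))

# Example: downsample a 50-fps cine to 25 fps, restore it, and score the result.
cine_50fps = np.random.rand(50, 192, 192).astype(np.float32)  # stand-in data
cine_25fps = cine_50fps[0::2]
restored = linear_interpolate_frames(cine_25fps)
print(rmse(cine_50fps[: restored.shape[0]], restored))
```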


Subject(s)
Deep Learning , Magnetic Resonance Imaging, Cine , Humans , Male , Magnetic Resonance Imaging, Cine/methods , Middle Aged , Female , Prospective Studies , Retrospective Studies , Heart/diagnostic imaging , Image Interpretation, Computer-Assisted/methods
2.
BMC Med Imaging ; 24(1): 107, 2024 May 11.
Article in English | MEDLINE | ID: mdl-38734629

ABSTRACT

This study addresses the critical challenge of detecting brain tumors using MRI images, a pivotal task in medical diagnostics that demands high accuracy and interpretability. While deep learning has shown remarkable success in medical image analysis, there remains a substantial need for models that are not only accurate but also interpretable to healthcare professionals. The existing methodologies, predominantly deep learning-based, often act as black boxes, providing little insight into their decision-making process. This research introduces an integrated approach using ResNet50, a deep learning model, combined with Gradient-weighted Class Activation Mapping (Grad-CAM) to offer a transparent and explainable framework for brain tumor detection. We employed a dataset of MRI images, enhanced through data augmentation, to train and validate our model. The results demonstrate a significant improvement in model performance, with a testing accuracy of 98.52% and precision-recall metrics exceeding 98%, showcasing the model's effectiveness in distinguishing tumor presence. The application of Grad-CAM provides insightful visual explanations, illustrating the model's focus areas in making predictions. This fusion of high accuracy and explainability holds profound implications for medical diagnostics, offering a pathway towards more reliable and interpretable brain tumor detection tools.
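For illustration, a minimal Grad-CAM sketch in the spirit of the ResNet50 pipeline described above, using forward and backward hooks on the final convolutional stage; the two-class head, untrained weights, and random input are placeholders for the study's fine-tuned model and MRI data.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet50

model = resnet50(weights=None)                        # swap in fine-tuned weights
model.fc = torch.nn.Linear(model.fc.in_features, 2)   # e.g. tumor / no tumor
model.eval()

feats, grads = {}, {}
layer = model.layer4                                  # last conv stage of ResNet50
layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

x = torch.randn(1, 3, 224, 224)                       # stand-in MRI slice (RGB)
score = model(x)[0, 1]                                # logit of the "tumor" class
score.backward()

w = grads["a"].mean(dim=(2, 3), keepdim=True)         # pooled gradients per channel
cam = F.relu((w * feats["a"]).sum(dim=1))             # weighted sum of feature maps
cam = F.interpolate(cam[None], size=x.shape[2:], mode="bilinear")[0, 0]
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
print(cam.shape)                                      # heat map to overlay on input
```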


Subject(s)
Brain Neoplasms , Deep Learning , Magnetic Resonance Imaging , Humans , Brain Neoplasms/diagnostic imaging , Magnetic Resonance Imaging/methods , Image Interpretation, Computer-Assisted/methods
3.
BMC Med Imaging ; 24(1): 110, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38750436

ABSTRACT

Brain tumor classification using MRI images is a crucial yet challenging task in medical imaging. Accurate diagnosis is vital for effective treatment planning but is often hindered by the complex nature of tumor morphology and variations in imaging. Traditional methodologies primarily rely on manual interpretation of MRI images, supplemented by conventional machine learning techniques. These approaches often lack the robustness and scalability needed for precise and automated tumor classification. The major limitations include a high degree of manual intervention, potential for human error, limited ability to handle large datasets, and lack of generalizability to diverse tumor types and imaging conditions. To address these challenges, we propose a federated learning-based deep learning model that leverages the power of Convolutional Neural Networks (CNNs) for automated and accurate brain tumor classification. This innovative approach not only emphasizes the use of a modified VGG16 architecture optimized for brain MRI images but also highlights the significance of federated learning and transfer learning in the medical imaging domain. Federated learning enables decentralized model training across multiple clients without compromising data privacy, addressing the critical need for confidentiality in medical data handling. The model architecture benefits from transfer learning by utilizing a pre-trained CNN, which significantly enhances its ability to classify brain tumors accurately by leveraging knowledge gained from vast and diverse datasets. Our model is trained on a diverse dataset combining the figshare, SARTAJ, and Br35H datasets, employing a federated learning approach for decentralized, privacy-preserving model training. The adoption of transfer learning further bolsters the model's performance, making it adept at handling the intricate variations in MRI images associated with different types of brain tumors. The model demonstrates high precision (0.99 for glioma, 0.95 for meningioma, 1.00 for no tumor, and 0.98 for pituitary), recall, and F1-scores in classification, outperforming existing methods. The overall accuracy stands at 98%, showcasing the model's efficacy in classifying various tumor types accurately and highlighting the transformative potential of federated learning and transfer learning for enhancing brain tumor classification using MRI images.
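The abstract does not state the aggregation rule, so the sketch below uses standard federated averaging (FedAvg) as a stand-in for the decentralized training it describes; the commented usage names hypothetical helpers (build_modified_vgg16, train_locally, clinical_sites).

```python
# FedAvg sketch: only model weights cross site boundaries, never raw images.
import copy

def fedavg(client_states, client_sizes):
    """Average client state_dicts, weighted by local dataset size."""
    total = float(sum(client_sizes))
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        avg[key] = sum(s[key] * (n / total)
                       for s, n in zip(client_states, client_sizes))
    return avg

# One communication round (hypothetical helpers for illustration only):
# global_model = build_modified_vgg16()
# states, sizes = [], []
# for site in clinical_sites:                 # raw images never leave a site
#     local = copy.deepcopy(global_model)
#     train_locally(local, site.loader)
#     states.append(local.state_dict())
#     sizes.append(len(site.loader.dataset))
# global_model.load_state_dict(fedavg(states, sizes))
```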


Subject(s)
Brain Neoplasms , Deep Learning , Magnetic Resonance Imaging , Humans , Brain Neoplasms/diagnostic imaging , Brain Neoplasms/classification , Magnetic Resonance Imaging/methods , Neural Networks, Computer , Machine Learning , Image Interpretation, Computer-Assisted/methods
4.
Sci Rep ; 14(1): 11701, 2024 05 22.
Article in English | MEDLINE | ID: mdl-38778034

ABSTRACT

Due to the lack of sufficient labeled data for the prostate and the extensive and complex semantic information in ultrasound images, accurately and quickly segmenting the prostate in transrectal ultrasound (TRUS) images remains a challenging task. In this context, this paper proposes a solution for TRUS image segmentation using an end-to-end bidirectional semantic constraint method, namely the BiSeC model. The experimental results show that, compared with classic or popular deep learning methods, this method has better segmentation performance, with a Dice Similarity Coefficient (DSC) of 96.74% and an Intersection over Union (IoU) of 93.71%. Our model achieves a good balance between actual boundaries and noise areas, reducing costs while ensuring the accuracy and speed of segmentation.
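For reference, a minimal sketch of the two reported metrics, the Dice Similarity Coefficient (DSC) and Intersection over Union (IoU), on binary masks; the toy masks are illustrative.

```python
import numpy as np

def dice_and_iou(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8):
    """Compute DSC and IoU for two binary masks of identical shape."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    dice = 2.0 * inter / (pred.sum() + truth.sum() + eps)
    iou = inter / (np.logical_or(pred, truth).sum() + eps)
    return dice, iou

# Example with toy 2D masks:
pred = np.zeros((64, 64), dtype=np.uint8); pred[10:40, 10:40] = 1
truth = np.zeros((64, 64), dtype=np.uint8); truth[15:45, 15:45] = 1
print(dice_and_iou(pred, truth))
```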


Subject(s)
Prostate , Prostatic Neoplasms , Semantics , Ultrasonography , Male , Humans , Ultrasonography/methods , Prostate/diagnostic imaging , Prostatic Neoplasms/diagnostic imaging , Deep Learning , Image Processing, Computer-Assisted/methods , Algorithms , Image Interpretation, Computer-Assisted/methods
5.
Stud Health Technol Inform ; 314: 123-124, 2024 May 23.
Article in English | MEDLINE | ID: mdl-38785016

ABSTRACT

This paper aims to propose an approach leveraging Artificial Intelligence (AI) to diagnose thalassemia through medical imaging. The idea is to employ a U-net neural network architecture for precise erythrocyte morphology detection and classification in thalassemia diagnosis. This accomplishment was realized by developing and assessing a supervised semantic segmentation model of blood smear images, coupled with the deployment of various data engineering techniques. This methodology enables new applications in tailored medical interventions and contributes to the evolution of AI within precision healthcare, establishing new benchmarks in personalized treatment planning and disease management.


Subject(s)
Artificial Intelligence , Thalassemia , Humans , Thalassemia/diagnosis , Thalassemia/blood , Neural Networks, Computer , Image Interpretation, Computer-Assisted/methods
6.
Sci Rep ; 14(1): 11678, 2024 05 22.
Article in English | MEDLINE | ID: mdl-38778219

ABSTRACT

Polyps are abnormal tissue clumps growing primarily on the inner linings of the gastrointestinal tract. While such clumps are generally harmless, they can potentially evolve into pathological tumors, and thus require long-term observation and monitoring. Polyp segmentation in gastrointestinal endoscopy images is an important stage for polyp monitoring and subsequent treatment. However, this segmentation task faces multiple challenges: the low contrast of the polyp boundaries, the varied polyp appearance, and the co-occurrence of multiple polyps. So, in this paper, an implicit edge-guided cross-layer fusion network (IECFNet) is proposed for polyp segmentation. The codec pair is used to generate an initial saliency map, the implicit edge-enhanced context attention module aggregates the feature graph output from the encoding and decoding to generate the rough prediction, and the multi-scale feature reasoning module is used to generate final predictions. Polyp segmentation experiments have been conducted on five popular polyp image datasets (Kvasir, CVC-ClinicDB, ETIS, CVC-ColonDB, and CVC-300), and the experimental results show that the proposed method significantly outperforms a conventional method, especially with an accuracy margin of 7.9% on the ETIS dataset.


Subject(s)
Colonic Polyps , Humans , Colonic Polyps/pathology , Colonic Polyps/diagnostic imaging , Algorithms , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Image Interpretation, Computer-Assisted/methods , Polyps/pathology , Polyps/diagnostic imaging , Endoscopy, Gastrointestinal/methods
7.
BMC Med Imaging ; 24(1): 118, 2024 May 21.
Article in English | MEDLINE | ID: mdl-38773391

ABSTRACT

Brain tumor diagnosis using MRI scans poses significant challenges due to the complex nature of tumor appearances and variations. Traditional methods often require extensive manual intervention and are prone to human error, leading to misdiagnosis and delayed treatment. Current approaches primarily include manual examination by radiologists and conventional machine learning techniques. These methods rely heavily on feature extraction and classification algorithms, which may not capture the intricate patterns present in brain MRI images. Conventional techniques often suffer from limited accuracy and generalizability, mainly due to the high variability in tumor appearance and the subjective nature of manual interpretation. Additionally, traditional machine learning models may struggle with the high-dimensional data inherent in MRI images. To address these limitations, our research introduces a deep learning-based model utilizing convolutional neural networks (CNNs). Our model employs a sequential CNN architecture with multiple convolutional, max-pooling, and dropout layers, followed by dense layers for classification. The proposed model demonstrates a significant improvement in diagnostic accuracy, achieving an overall accuracy of 98% on the test dataset. The precision, recall, and F1-scores ranging from 97 to 98%, with ROC-AUC ranging from 99 to 100% for each tumor category, further substantiate the model's effectiveness. Additionally, the utilization of Grad-CAM visualizations provides insights into the model's decision-making process, enhancing interpretability. This research addresses the pressing need for enhanced diagnostic accuracy in identifying brain tumors through MRI imaging, tackling challenges such as variability in tumor appearance and the need for rapid, reliable diagnostic tools.
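A minimal Keras sketch of the sequential conv/max-pool/dropout/dense architecture described above; the filter counts, input size, and four-class output are illustrative assumptions rather than the authors' exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(224, 224, 1)),        # single-channel MRI slice (assumed)
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Dropout(0.25),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Dropout(0.25),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(4, activation="softmax"),    # e.g. glioma/meningioma/pituitary/none
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```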


Subject(s)
Brain Neoplasms , Deep Learning , Magnetic Resonance Imaging , Neural Networks, Computer , Humans , Brain Neoplasms/diagnostic imaging , Brain Neoplasms/classification , Magnetic Resonance Imaging/methods , Algorithms , Image Interpretation, Computer-Assisted/methods , Male , Female
8.
Comput Biol Med ; 175: 108412, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38691914

ABSTRACT

Brain tumor segmentation and classification play a crucial role in the diagnosis and treatment planning of brain tumors. Accurate and efficient methods for identifying tumor regions and classifying different tumor types are essential for guiding medical interventions. This study comprehensively reviews brain tumor segmentation and classification techniques, exploring various approaches based on image processing, machine learning, and deep learning. Furthermore, our study aims to review existing methodologies, discuss their advantages and limitations, and highlight recent advancements in this field. The impact of existing segmentation and classification techniques for automated brain tumor detection is also critically examined using various open-source datasets of Magnetic Resonance Images (MRI) of different modalities. Moreover, this study highlights the challenges that segmentation and classification techniques face across datasets with various MRI modalities, to enable researchers to develop innovative and robust solutions for automated brain tumor detection. The results of this study contribute to the development of automated and robust solutions for analyzing brain tumors, ultimately aiding medical professionals in making informed decisions and providing better patient care.


Subject(s)
Brain Neoplasms , Magnetic Resonance Imaging , Humans , Brain Neoplasms/diagnostic imaging , Magnetic Resonance Imaging/methods , Deep Learning , Image Interpretation, Computer-Assisted/methods , Brain/diagnostic imaging , Machine Learning , Image Processing, Computer-Assisted/methods , Neuroimaging/methods
9.
Comput Biol Med ; 175: 108459, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38701588

ABSTRACT

Diabetic retinopathy (DR) is the most common diabetic complication, which usually leads to retinal damage, vision loss, and even blindness. A computer-aided DR grading system has a significant impact on helping ophthalmologists with rapid screening and diagnosis. Recent advances in fundus photography have precipitated the development of novel retinal imaging cameras and their subsequent implementation in clinical practice. However, most deep learning-based algorithms for DR grading demonstrate limited generalization across domains. This inferior performance stems from variance in imaging protocols and devices inducing domain shifts. We posit that declining model performance between domains arises from learning spurious correlations in the data. Incorporating do-operations from causality analysis into model architectures may mitigate this issue and improve generalizability. Specifically, a novel universal structural causal model (SCM) was proposed to analyze spurious correlations in fundus imaging. Building on this, a causality-inspired diabetic retinopathy grading framework named CauDR was developed to eliminate spurious correlations and achieve more generalizable DR diagnostics. Furthermore, existing datasets were reorganized into a 4DR benchmark for the domain generalization (DG) scenario. Results demonstrate the effectiveness and the state-of-the-art (SOTA) performance of CauDR.


Subject(s)
Diabetic Retinopathy , Diabetic Retinopathy/diagnostic imaging , Diabetic Retinopathy/diagnosis , Humans , Fundus Oculi , Algorithms , Deep Learning , Image Interpretation, Computer-Assisted/methods
10.
Comput Biol Med ; 175: 108523, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38701591

ABSTRACT

Diabetic retinopathy is considered one of the most common diseases that can lead to blindness in the working age, and the chance of developing it increases as long as a person suffers from diabetes. Protecting the sight of the patient or decelerating the evolution of this disease depends on its early detection as well as identifying the exact levels of this pathology, which is done manually by ophthalmologists. This manual process is very demanding in terms of the time and experience of an expert ophthalmologist, which makes developing an automated method to aid in the diagnosis of diabetic retinopathy an essential and urgent need. In this paper, we propose a new hybrid deep learning method based on a fine-tuned vision transformer and a modified capsule network for automatic diabetic retinopathy severity level prediction. The proposed approach applies a range of computer vision operations in the preprocessing step, including the power-law transformation technique and the contrast-limited adaptive histogram equalization technique, while the classification step builds on a fine-tuned vision transformer and a modified capsule network combined with a classification model. The effectiveness of our approach was evaluated on four datasets (APTOS, Messidor-2, DDR, and EyePACS) for the task of diabetic retinopathy severity grading. We attained excellent test accuracy scores on the four datasets of 88.18%, 87.78%, 80.36%, and 78.64%, respectively. Comparing our results with the state of the art, we reached better performance.
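A minimal sketch of the two named preprocessing operations, a power-law (gamma) transform followed by contrast-limited adaptive histogram equalization (CLAHE), using OpenCV; the gamma value, tile settings, and file path are illustrative.

```python
import cv2
import numpy as np

def preprocess_fundus(path: str, gamma: float = 1.5) -> np.ndarray:
    """Apply a power-law transform, then CLAHE, to a grayscale fundus image."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    norm = gray.astype(np.float32) / 255.0
    power = np.power(norm, gamma)                       # power-law transformation
    img8 = (power * 255.0).astype(np.uint8)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(img8)                            # contrast-limited AHE

# enhanced = preprocess_fundus("fundus.png")            # placeholder filename
```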


Subject(s)
Deep Learning , Diabetic Retinopathy , Diabetic Retinopathy/diagnostic imaging , Humans , Neural Networks, Computer , Databases, Factual , Image Interpretation, Computer-Assisted/methods , Algorithms
11.
Artif Intell Med ; 152: 102872, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38701636

ABSTRACT

Accurately measuring the evolution of Multiple Sclerosis (MS) with magnetic resonance imaging (MRI) critically informs understanding of disease progression and helps to direct therapeutic strategy. Deep learning models have shown promise for automatically segmenting MS lesions, but the scarcity of accurately annotated data hinders progress in this area. Obtaining sufficient data from a single clinical site is challenging and does not address the heterogeneous need for model robustness. Conversely, the collection of data from multiple sites introduces data privacy concerns and potential label noise due to varying annotation standards. To address this dilemma, we explore the use of the federated learning framework while considering label noise. Our approach enables collaboration among multiple clinical sites without compromising data privacy under a federated learning paradigm that incorporates a noise-robust training strategy based on label correction. Specifically, we introduce a Decoupled Hard Label Correction (DHLC) strategy that considers the imbalanced distribution and fuzzy boundaries of MS lesions, enabling the correction of false annotations based on prediction confidence. We also introduce a Centrally Enhanced Label Correction (CELC) strategy, which leverages the aggregated central model as a correction teacher for all sites, enhancing the reliability of the correction process. Extensive experiments conducted on two multi-site datasets demonstrate the effectiveness and robustness of our proposed methods, indicating their potential for clinical applications in multi-site collaborations to train better deep learning models with lower cost in data collection and annotation.
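As a rough illustration of confidence-based label correction in the spirit of the DHLC strategy above (the actual method additionally handles class imbalance and fuzzy lesion boundaries, which this sketch omits), a minimal sketch with illustrative thresholds:

```python
import numpy as np

def correct_labels(prob_lesion: np.ndarray, labels: np.ndarray,
                   hi: float = 0.95, lo: float = 0.05) -> np.ndarray:
    """Flip annotations where the model confidently disagrees with them."""
    corrected = labels.copy()
    corrected[(prob_lesion > hi) & (labels == 0)] = 1   # confident missed lesion
    corrected[(prob_lesion < lo) & (labels == 1)] = 0   # confident false label
    return corrected

# Example with toy per-voxel lesion probabilities and noisy annotations:
probs = np.array([0.99, 0.50, 0.02, 0.97])
noisy = np.array([0, 0, 1, 1])
print(correct_labels(probs, noisy))                     # -> [1 0 0 1]
```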


Subject(s)
Deep Learning , Magnetic Resonance Imaging , Multiple Sclerosis , Multiple Sclerosis/diagnostic imaging , Humans , Magnetic Resonance Imaging/methods , Image Interpretation, Computer-Assisted/methods , Image Processing, Computer-Assisted/methods
12.
Comput Methods Programs Biomed ; 250: 108205, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38703435

ABSTRACT

The pancreas is a vital organ in the digestive system with significant health implications. It is imperative to evaluate and identify malignant pancreatic lesions promptly in light of the high mortality rate linked to such malignancies. Endoscopic ultrasound (EUS) is a non-invasive, precise technique to detect pancreatic disorders, but it is highly operator dependent. Artificial intelligence (AI), including traditional machine learning (ML) and deep learning (DL) techniques, can play a pivotal role in enhancing the performance of EUS regardless of the operator. AI performs a critical function in the detection, classification, and segmentation of medical images. The utilization of AI-assisted systems has improved the accuracy and productivity of pancreatic analysis, including the detection of diverse pancreatic disorders (e.g., pancreatitis, masses, and cysts) as well as landmarks and parenchyma. This systematic review examines the rapidly developing domain of AI-assisted systems in EUS of the pancreas. Its objective is to present a thorough overview of the current research status and developments in this area. The paper explores the significant challenges of AI-assisted systems in pancreatic EUS imaging, highlights the potential of AI techniques in addressing these challenges, and suggests the scope for future research in the domain of AI-assisted EUS systems.


Subject(s)
Artificial Intelligence , Endosonography , Pancreas , Humans , Endosonography/methods , Pancreas/diagnostic imaging , Machine Learning , Deep Learning , Pancreatic Neoplasms/diagnostic imaging , Pancreatic Diseases/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Image Processing, Computer-Assisted/methods
13.
Comput Biol Med ; 175: 108549, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38704901

ABSTRACT

In this paper, we propose a multi-task learning (MTL) network based on label-level fusion of metadata and hand-crafted features, using unsupervised clustering to generate new clustering labels as an optimization goal. We propose an MTL module (MTLM) that incorporates an attention mechanism to enable the model to learn more integrated, variable information, and a dynamic strategy to adjust the loss weights of different tasks and trade off the contributions of multiple branches. Instead of feature-level fusion, we propose label-level fusion and combine the results of the proposed MTLM with those of the image classification network to achieve better lesion prediction on multiple dermatological datasets. We verify the effectiveness of the proposed model by quantitative and qualitative measures. The MTL network using multi-modal clues and label-level fusion yields a significant performance improvement for skin lesion classification.
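The abstract does not specify the dynamic loss-weighting strategy, so the sketch below uses the common learnable log-variance weighting as a stand-in for trading off the contributions of multiple task branches:

```python
import torch
import torch.nn as nn

class WeightedMultiTaskLoss(nn.Module):
    """Combine per-task losses with learned log-variance weights (stand-in)."""
    def __init__(self, n_tasks: int):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(n_tasks))  # one weight per task

    def forward(self, task_losses):
        total = 0.0
        for i, loss in enumerate(task_losses):
            precision = torch.exp(-self.log_vars[i])         # down-weights noisy tasks
            total = total + precision * loss + self.log_vars[i]
        return total

# Example: combine a lesion-classification loss with a clustering-label loss.
criterion = WeightedMultiTaskLoss(n_tasks=2)
loss = criterion([torch.tensor(0.7, requires_grad=True),
                  torch.tensor(1.2, requires_grad=True)])
loss.backward()
```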


Subject(s)
Skin , Humans , Skin/diagnostic imaging , Skin/pathology , Image Interpretation, Computer-Assisted/methods , Machine Learning , Skin Neoplasms/diagnostic imaging , Skin Neoplasms/pathology , Neural Networks, Computer , Algorithms , Skin Diseases/diagnostic imaging
14.
Skin Res Technol ; 30(5): e13607, 2024 May.
Article in English | MEDLINE | ID: mdl-38742379

ABSTRACT

BACKGROUND: Timely diagnosis plays a critical role in determining melanoma prognosis, prompting the development of deep learning models to aid clinicians. Questions persist regarding the efficacy of clinical images alone or in conjunction with dermoscopy images for model training. This study aims to compare the classification performance for melanoma of three types of CNN models: those trained on clinical images, dermoscopy images, and a combination of paired clinical and dermoscopy images from the same lesion. MATERIALS AND METHODS: We divided 914 image pairs into training, validation, and test sets. Models were built using pre-trained Inception-ResNetV2 convolutional layers for feature extraction, followed by binary classification. Training comprised 20 models per CNN type using sets of random hyperparameters. Best models were chosen based on validation AUC-ROC. RESULTS: Significant AUC-ROC differences were found between clinical versus dermoscopy models (0.661 vs. 0.869, p < 0.001) and clinical versus clinical + dermoscopy models (0.661 vs. 0.822, p = 0.001). Significant sensitivity differences were found between clinical and dermoscopy models (0.513 vs. 0.799, p = 0.01), dermoscopy versus clinical + dermoscopy models (0.799 vs. 1.000, p = 0.02), and clinical versus clinical + dermoscopy models (0.513 vs. 1.000, p < 0.001). Significant specificity differences were found between dermoscopy versus clinical + dermoscopy models (0.800 vs. 0.288, p < 0.001) and clinical versus clinical + dermoscopy models (0.650 vs. 0.288, p < 0.001). CONCLUSION: CNN models trained on dermoscopy images outperformed those relying solely on clinical images under our study conditions. The potential advantages of incorporating paired clinical and dermoscopy images for CNN-based melanoma classification appear less clear based on our findings.
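A minimal sketch of the transfer-learning setup described in MATERIALS AND METHODS: frozen pre-trained Inception-ResNetV2 convolutional layers followed by a binary classification head; the head layout and hyperparameters are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionResNetV2

base = InceptionResNetV2(include_top=False, weights="imagenet",
                         input_shape=(299, 299, 3))
base.trainable = False                        # use pre-trained features only

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),                      # assumed regularization
    layers.Dense(1, activation="sigmoid"),    # melanoma vs. benign
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc_roc")])
```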


Subject(s)
Dermoscopy , Melanoma , Neural Networks, Computer , Skin Neoplasms , Humans , Melanoma/diagnostic imaging , Melanoma/pathology , Melanoma/classification , Dermoscopy/methods , Skin Neoplasms/diagnostic imaging , Skin Neoplasms/pathology , Skin Neoplasms/classification , Deep Learning , Sensitivity and Specificity , Female , ROC Curve , Image Interpretation, Computer-Assisted/methods , Male
15.
Arch Dermatol Res ; 316(6): 275, 2024 May 25.
Article in English | MEDLINE | ID: mdl-38796546

ABSTRACT

PURPOSE: A skin lesion refers to an area of the skin that exhibits anomalous growth or distinctive visual characteristics compared to the surrounding skin. Benign skin lesions are noncancerous and generally pose no threat. These irregular skin growths can vary in appearance. On the other hand, malignant skin lesions correspond to skin cancer, which happens to be the most prevalent form of cancer in the United States. Skin cancer involves the unusual proliferation of skin cells anywhere on the body. The conventional method for detecting skin cancer is comparatively painful. METHODS: This work involves the automated prediction of skin cancer and its types using a two-stage Convolutional Neural Network (CNN). The first stage extracts low-level features and the second stage extracts high-level features. Feature selection is done using these two CNNs and the ABCD (Asymmetry, Border irregularity, Colour variation, and Diameter) technique. The features extracted from the two CNNs are fused with the ABCD features and fed into classifiers for the final prediction. The classifiers employed in this work include ensemble learning methods such as gradient boosting and XGBoost, as well as machine learning classifiers like decision trees and logistic regression. This methodology is evaluated using the International Skin Imaging Collaboration (ISIC) 2018 and 2019 datasets. RESULTS: The first-stage CNN, which is used for creation of the new dataset, achieved an accuracy of 97.92%, and the second-stage CNN, which is used for feature selection, achieved an accuracy of 98.86%. Classification results were obtained both with and without fusion of features. CONCLUSION: The two-stage prediction model achieved better results with feature fusion.
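A minimal sketch of the feature-fusion step described above, concatenating CNN feature vectors with ABCD measurements and feeding them to the named classifiers; the feature extraction is mocked with random data, and all dimensions are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
cnn_feats = rng.normal(size=(500, 256))       # stand-in deep features
abcd_feats = rng.normal(size=(500, 4))        # asymmetry, border, colour, diameter
X = np.concatenate([cnn_feats, abcd_feats], axis=1)   # feature fusion
y = rng.integers(0, 2, size=500)              # benign / malignant labels

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.2, random_state=0)
for clf in (XGBClassifier(n_estimators=200, eval_metric="logloss"),
            LogisticRegression(max_iter=1000)):
    clf.fit(Xtr, ytr)
    print(type(clf).__name__, clf.score(Xte, yte))
```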


Subject(s)
Melanoma , Neural Networks, Computer , Skin Neoplasms , Humans , Melanoma/diagnosis , Melanoma/pathology , Skin Neoplasms/diagnosis , Skin Neoplasms/pathology , Skin/pathology , Skin/diagnostic imaging , Machine Learning , Deep Learning , Image Interpretation, Computer-Assisted/methods , Melanoma, Cutaneous Malignant , Dermoscopy/methods
16.
Radiology ; 311(2): e230750, 2024 May.
Article in English | MEDLINE | ID: mdl-38713024

ABSTRACT

Background Multiparametric MRI (mpMRI) improves prostate cancer (PCa) detection compared with systematic biopsy, but its interpretation is prone to interreader variation, which results in performance inconsistency. Artificial intelligence (AI) models can assist in mpMRI interpretation, but large training data sets and extensive model testing are required. Purpose To evaluate a biparametric MRI AI algorithm for intraprostatic lesion detection and segmentation and to compare its performance with radiologist readings and biopsy results. Materials and Methods This secondary analysis of a prospective registry included consecutive patients with suspected or known PCa who underwent mpMRI, US-guided systematic biopsy, or combined systematic and MRI/US fusion-guided biopsy between April 2019 and September 2022. All lesions were prospectively evaluated using Prostate Imaging Reporting and Data System version 2.1. The lesion- and participant-level performance of a previously developed cascaded deep learning algorithm was compared with histopathologic outcomes and radiologist readings using sensitivity, positive predictive value (PPV), and Dice similarity coefficient (DSC). Results A total of 658 male participants (median age, 67 years [IQR, 61-71 years]) with 1029 MRI-visible lesions were included. At histopathologic analysis, 45% (294 of 658) of participants had lesions of International Society of Urological Pathology (ISUP) grade group (GG) 2 or higher. The algorithm identified 96% (282 of 294; 95% CI: 94%, 98%) of all participants with clinically significant PCa, whereas the radiologist identified 98% (287 of 294; 95% CI: 96%, 99%; P = .23). The algorithm identified 84% (103 of 122), 96% (152 of 159), 96% (47 of 49), 95% (38 of 40), and 98% (45 of 46) of participants with ISUP GG 1, 2, 3, 4, and 5 lesions, respectively. In the lesion-level analysis using radiologist ground truth, the detection sensitivity was 55% (569 of 1029; 95% CI: 52%, 58%), and the PPV was 57% (535 of 934; 95% CI: 54%, 61%). The mean number of false-positive lesions per participant was 0.61 (range, 0-3). The lesion segmentation DSC was 0.29. Conclusion The AI algorithm detected cancer-suspicious lesions on biparametric MRI scans with a performance comparable to that of an experienced radiologist. Moreover, the algorithm reliably predicted clinically significant lesions at histopathologic examination. ClinicalTrials.gov Identifier: NCT03354416 © RSNA, 2024 Supplemental material is available for this article.


Subject(s)
Deep Learning , Multiparametric Magnetic Resonance Imaging , Prostatic Neoplasms , Male , Humans , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/pathology , Aged , Prospective Studies , Multiparametric Magnetic Resonance Imaging/methods , Middle Aged , Algorithms , Prostate/diagnostic imaging , Prostate/pathology , Image-Guided Biopsy/methods , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging/methods
17.
Clin Radiol ; 79(7): e892-e899, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38719689

ABSTRACT

PURPOSE: We aimed to evaluate the feasibility of the non-contrast-enhanced T1 sequence for texture analysis of breast cancer lesions to predict their estrogen receptor status. METHODS: The study included 85 pathologically proven breast cancer lesions in 53 patients. Immunohistochemical studies were performed to determine estrogen receptor (ER) status. Lesions were divided into two groups: ER+ve and ER-ve. Texture analysis using second-order features [the co-occurrence matrix (11 features)] was applied to both T1 and dynamic contrast-enhanced (DCE) MRI images for each lesion. Texture features gained from both T1 and DCE images were analyzed to obtain cut-off values using ROC curves to sort lesions according to their estrogen receptor status. RESULTS: Angular second moment and some of the entropy-based features showed statistically significant cut-off values for differentiating between the two groups [P-values for pre- and post-contrast images: AngSecMom (0.001, 0.008), sum entropy (0.003, 0.005), and entropy (0.033, 0.019), respectively]. On comparing the AUCs between pre- and post-contrast images, the differences were statistically insignificant. Sum of squares, sum variance, and sum average showed statistically significant cut-off points only on pre-contrast images [P-values: sum of squares (0.018), sum variance (0.024), and sum average (0.039)]. CONCLUSIONS: Texture analysis features showed promising results in predicting the estrogen receptor status of breast cancer lesions on non-contrast T1 images.
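For illustration, a minimal sketch computing two of the co-occurrence-matrix (GLCM) features named in the abstract, the angular second moment and a GLCM entropy, with scikit-image; the distance/angle settings and the random stand-in ROI are illustrative.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

img = (np.random.rand(64, 64) * 255).astype(np.uint8)   # stand-in T1 ROI
glcm = graycomatrix(img, distances=[1], angles=[0],
                    levels=256, symmetric=True, normed=True)

asm = graycoprops(glcm, "ASM")[0, 0]                     # angular second moment
p = glcm[:, :, 0, 0]                                     # normalized co-occurrences
entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))          # GLCM entropy
print(f"ASM={asm:.4g}, entropy={entropy:.4g}")
```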


Subject(s)
Breast Neoplasms , Feasibility Studies , Magnetic Resonance Imaging , Receptors, Estrogen , Humans , Breast Neoplasms/diagnostic imaging , Breast Neoplasms/metabolism , Female , Magnetic Resonance Imaging/methods , Middle Aged , Receptors, Estrogen/metabolism , Adult , Aged , Image Interpretation, Computer-Assisted/methods , Breast/diagnostic imaging , Contrast Media , Retrospective Studies
18.
Korean J Radiol ; 25(6): 550-558, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38807336

ABSTRACT

Hepatocellular carcinoma (HCC) is a biologically heterogeneous tumor characterized by varying degrees of aggressiveness. The current treatment strategy for HCC is predominantly determined by the overall tumor burden, and does not address the diverse prognoses of patients with HCC owing to its heterogeneity. Therefore, the prognostication of HCC using imaging data is crucial for optimizing patient management. Although some radiologic features have been demonstrated to be indicative of the biologic behavior of HCC, traditional radiologic methods for HCC prognostication are based on visually-assessed prognostic findings, and are limited by subjectivity and inter-observer variability. Consequently, artificial intelligence has emerged as a promising method for image-based prognostication of HCC. Unlike traditional radiologic image analysis, artificial intelligence based on radiomics or deep learning utilizes numerous image-derived quantitative features, potentially offering an objective, detailed, and comprehensive analysis of the tumor phenotypes. Artificial intelligence, particularly radiomics has displayed potential in a variety of applications, including the prediction of microvascular invasion, recurrence risk after locoregional treatment, and response to systemic therapy. This review highlights the potential value of artificial intelligence in the prognostication of HCC as well as its limitations and future prospects.


Subject(s)
Artificial Intelligence , Carcinoma, Hepatocellular , Liver Neoplasms , Humans , Liver Neoplasms/diagnostic imaging , Liver Neoplasms/therapy , Liver Neoplasms/pathology , Carcinoma, Hepatocellular/diagnostic imaging , Carcinoma, Hepatocellular/therapy , Carcinoma, Hepatocellular/pathology , Prognosis , Image Interpretation, Computer-Assisted/methods
19.
Sci Rep ; 14(1): 12567, 2024 05 31.
Article in English | MEDLINE | ID: mdl-38821977

ABSTRACT

In recent years, the growth spurt of medical imaging data has led to the development of various machine learning algorithms for healthcare applications. The MedMNISTv2 dataset, a comprehensive benchmark for 2D biomedical image classification, encompasses diverse medical imaging modalities such as fundus camera, breast ultrasound, colon pathology, and blood cell microscopy. Highly accurate classification on these datasets is crucial for identifying various diseases and determining the course of treatment. This research paper presents a comprehensive analysis of four subsets within the MedMNISTv2 dataset: BloodMNIST, BreastMNIST, PathMNIST, and RetinaMNIST. These subsets cover diverse data modalities and sample sizes, and were selected to analyze the efficiency of the model across data modalities. The study assesses the Vision Transformer model's ability to capture the intricate patterns and features crucial for these medical image classification tasks, and thereby to substantially exceed the benchmark metrics. The methodology includes preprocessing the input images, followed by training the ViT-base-patch16-224 model on the mentioned datasets. The performance of the model is assessed using key metrics and by comparing the classification accuracies achieved with the benchmark accuracies. With the assistance of ViT, the new benchmarks achieved for BloodMNIST, BreastMNIST, PathMNIST, and RetinaMNIST are 97.90%, 90.38%, 94.62%, and 57%, respectively. The study highlights the promise of vision transformer models in medical image analysis, preparing the way for their adoption and further exploration in healthcare applications, aiming to enhance diagnostic accuracy and assist medical professionals in clinical decision-making.
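A minimal sketch of fine-tuning ViT-base-patch16-224 with Hugging Face Transformers, as named above; the label count matches BloodMNIST's eight classes, while the batch and the single training step are abbreviated stand-ins for a full training loop.

```python
import torch
from transformers import ViTForImageClassification

model = ViTForImageClassification.from_pretrained(
    "google/vit-base-patch16-224",
    num_labels=8,                          # e.g. BloodMNIST classes
    ignore_mismatched_sizes=True,          # replace the 1000-class ImageNet head
)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# One illustrative training step on a stand-in batch (images assumed already
# resized and normalized to the 224x224 ViT input upstream):
pixel_values = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, 8, (4,))
out = model(pixel_values=pixel_values, labels=labels)  # loss computed internally
out.loss.backward()
optimizer.step()
```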


Subject(s)
Algorithms , Humans , Machine Learning , Image Processing, Computer-Assisted/methods , Diagnostic Imaging/methods , Databases, Factual , Image Interpretation, Computer-Assisted/methods
20.
Breast Cancer Res ; 26(1): 85, 2024 May 28.
Article in English | MEDLINE | ID: mdl-38807211

ABSTRACT

BACKGROUND: Abbreviated breast MRI (FAST MRI) is being introduced into clinical practice to screen women with mammographically dense breasts or with a personal history of breast cancer. This study aimed to optimise diagnostic accuracy through the adaptation of interpretation-training. METHODS: A FAST MRI interpretation-training programme (short presentations and guided hands-on workstation teaching) was adapted to provide additional training during the assessment task (interpretation of an enriched dataset of 125 FAST MRI scans) by giving readers feedback about the true outcome of each scan immediately after each scan was interpreted (formative assessment). Reader interaction with the FAST MRI scans used developed software (RiViewer) that recorded reader opinions and reading times for each scan. The training programme was additionally adapted for remote e-learning delivery. STUDY DESIGN: Prospective, blinded interpretation of an enriched dataset by multiple readers. RESULTS: 43 mammogram readers completed the training, 22 who interpreted breast MRI in their clinical role (Group 1) and 21 who did not (Group 2). Overall sensitivity was 83% (95%CI 81-84%; 1994/2408), specificity 94% (95%CI 93-94%; 7806/8338), readers' agreement with the true outcome kappa = 0.75 (95%CI 0.74-0.77) and diagnostic odds ratio = 70.67 (95%CI 61.59-81.09). Group 1 readers showed similar sensitivity (84%) to Group 2 (82% p = 0.14), but slightly higher specificity (94% v. 93%, p = 0.001). Concordance with the ground truth increased significantly with the number of FAST MRI scans read through the formative assessment task (p = 0.002) but by differing amounts depending on whether or not a reader had previously attended FAST MRI training (interaction p = 0.02). Concordance with the ground truth was significantly associated with reading batch size (p = 0.02), tending to worsen when more than 50 scans were read per batch. Group 1 took a median of 56 seconds (range 8-47,466) to interpret each FAST MRI scan compared with 78 (14-22,830, p < 0.0001) for Group 2. CONCLUSIONS: Provision of immediate feedback to mammogram readers during the assessment test set reading task increased specificity for FAST MRI interpretation and achieved high diagnostic accuracy. Optimal reading-batch size for FAST MRI was 50 reads per batch. Trial registration (25/09/2019): ISRCTN16624917.
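The headline reader-performance statistics can be reproduced from the counts reported above; a minimal sketch follows (the kappa value would additionally require per-scan reader/outcome pairings, which the abstract does not provide).

```python
# Reader-performance statistics from the reported 2x2 counts.
tp = 1994                    # abnormal scans correctly recalled
fn = 2408 - tp               # abnormal scans missed
tn = 7806                    # normal scans correctly cleared
fp = 8338 - tn               # normal scans incorrectly recalled

sensitivity = tp / (tp + fn)             # 1994/2408, ~83%
specificity = tn / (tn + fp)             # 7806/8338, ~94%
dor = (tp * tn) / (fp * fn)              # diagnostic odds ratio, ~70.67

print(f"sensitivity={sensitivity:.1%}, specificity={specificity:.1%}, DOR={dor:.2f}")
```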


Subject(s)
Breast Neoplasms , Learning Curve , Magnetic Resonance Imaging , Mammography , Humans , Female , Breast Neoplasms/diagnostic imaging , Breast Neoplasms/diagnosis , Magnetic Resonance Imaging/methods , Mammography/methods , Middle Aged , Early Detection of Cancer/methods , Prospective Studies , Aged , Sensitivity and Specificity , Image Interpretation, Computer-Assisted/methods , Breast/diagnostic imaging , Breast/pathology