Results 1 - 20 of 226
1.
Article in English | MEDLINE | ID: mdl-38715895

ABSTRACT

Objectives: To identify and classify submucosal tumors by building and validating a radiomics model with gastrointestinal endoscopic ultrasonography (EUS) images. Methods: A total of 144 patients diagnosed with submucosal tumors through gastrointestinal EUS were collected between January 2019 and October 2020. From each patient's EUS images, 1952 radiomic features were extracted. Statistical testing and a customized least absolute shrinkage and selection operator (LASSO) regression were used for feature selection. Subsequently, an extremely randomized trees algorithm was used to construct a robust radiomics classification model tailored for gastrointestinal EUS images. Model performance was measured by the area under the receiver operating characteristic curve. Results: The radiomics model comprised 30 selected features that showed good discrimination performance in the validation cohorts. During validation, the area under the receiver operating characteristic curve was 0.9203, and the mean value after 10-fold cross-validation was 0.9260, indicating excellent stability and calibration. These results support the clinical utility of the model. Conclusions: Using the curated dataset of gastrointestinal EUS examinations from our collaborating hospital, we have developed a well-performing radiomics model. It can be used for personalized and non-invasive prediction of the type of submucosal tumor, aiding physicians in early treatment and in managing tumor progression.
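The pipeline this abstract describes — LASSO-based feature selection followed by an extremely randomized trees classifier evaluated by ROC AUC and 10-fold cross-validation — can be sketched with scikit-learn. Everything below is illustrative: the synthetic data merely mimics the 144-patient, 1952-feature shape, and the hyperparameters are placeholders, not the authors' settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for the 1952 radiomic features per patient.
X, y = make_classification(n_samples=144, n_features=1952, n_informative=30,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0,
                                          stratify=y)

# LASSO drives most coefficients to zero; keep the surviving features.
selector = SelectFromModel(LassoCV(cv=5, random_state=0)).fit(X_tr, y_tr)
X_tr_sel, X_te_sel = selector.transform(X_tr), selector.transform(X_te)

# Extremely randomized trees on the selected features.
clf = ExtraTreesClassifier(n_estimators=200, random_state=0).fit(X_tr_sel, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te_sel)[:, 1])

# 10-fold cross-validated AUC, echoing the abstract's stability check.
cv_auc = cross_val_score(clf, selector.transform(X), y, cv=10,
                         scoring="roc_auc").mean()
print(f"hold-out AUC: {auc:.3f}, 10-fold mean AUC: {cv_auc:.3f}")
```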

2.
Biomed Eng Lett ; 14(5): 1069-1077, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39220025

ABSTRACT

Multiclass classification of brain tumors from magnetic resonance (MR) images is challenging due to high inter-class similarities. To this end, convolutional neural networks (CNNs) have been widely adopted in recent studies. However, conventional CNN architectures fail to capture the small lesion patterns of brain tumors. To tackle this issue, in this paper we propose a global transformer network, dubbed GT-Net, for multiclass brain tumor classification. GT-Net mainly comprises a global transformer module (GTM) introduced on top of a backbone network. A generalized self-attention block (GSB) is proposed to capture feature inter-dependencies across both the spatial and channel dimensions, facilitating the extraction of detailed tumor lesion information while ignoring less important information. Further, multiple GSB heads are used in the GTM to leverage global feature dependencies. We evaluate GT-Net on a benchmark dataset with several backbone networks, and the results demonstrate the effectiveness of the GTM. Furthermore, comparison with state-of-the-art methods validates the superiority of our model.
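One way to read the GSB idea — self-attention applied along both the spatial and the channel axes of a feature map, then fused — is sketched below in plain NumPy. This is a minimal interpretation for illustration, not the paper's architecture; the grid size, channel count, and unprojected query/key/value are all simplifying assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    """Scaled dot-product self-attention over the first axis of x: (n, d)."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)          # (n, n) pairwise similarities
    return softmax(scores, axis=-1) @ x    # re-weighted features

rng = np.random.default_rng(0)
feat = rng.normal(size=(8 * 8, 16))        # 8x8 spatial grid, 16 channels

spatial_out = self_attention(feat)         # dependencies across positions
channel_out = self_attention(feat.T).T     # dependencies across channels
out = spatial_out + channel_out            # fuse both views

print(out.shape)
```

Transposing the feature matrix turns the same attention primitive into channel attention, which is the trick that lets one block cover both dimensions.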

4.
Curr Oncol ; 31(9): 5057-5079, 2024 Aug 28.
Article in English | MEDLINE | ID: mdl-39330002

ABSTRACT

Multi-task learning (MTL) methods are widely applied in breast imaging for lesion area perception and classification to assist in breast cancer diagnosis and personalized treatment. A typical MTL paradigm is the shared-backbone network architecture, which can lead to information-sharing conflicts and a decline, or even failure, of the main task's performance. Extracting richer lesion features while alleviating information-sharing conflicts has therefore become a significant challenge for breast cancer classification. This study proposes a novel Multi-Feature Fusion Multi-Task (MFFMT) model to address this issue. Firstly, to better capture the local and global feature relationships of lesion areas, a Contextual Lesion Enhancement Perception (CLEP) module is designed, which integrates channel attention mechanisms with detailed spatial positional information to extract more comprehensive lesion features. Secondly, a novel Multi-Feature Fusion (MFF) module is presented. The MFF module extracts the differential features that distinguish lesion-specific characteristics from the semantic features used for tumor classification, and also enhances the feature information they share. Experimental results on two public breast ultrasound imaging datasets validate the effectiveness of the proposed method. Additionally, a comprehensive study of the impact of various factors on the model's performance is conducted to gain a deeper understanding of the working mechanism of the proposed framework.


Subjects
Breast Neoplasms; Deep Learning; Humans; Breast Neoplasms/diagnostic imaging; Female; Ultrasonography, Mammary/methods; Image Interpretation, Computer-Assisted/methods
5.
Bioengineering (Basel) ; 11(8)2024 Aug 07.
Article in English | MEDLINE | ID: mdl-39199758

ABSTRACT

Lung cancer, the second most common type of cancer worldwide, presents significant health challenges. Detecting this disease early is essential for improving patient outcomes and simplifying treatment. In this study, we propose a hybrid framework that combines deep learning (DL) with quantum computing to enhance the accuracy of lung cancer detection using chest radiographs (CXR) and computed tomography (CT) images. Our system utilizes pre-trained models for feature extraction and quantum circuits for classification, achieving state-of-the-art performance across various metrics. Not only does our system achieve an overall accuracy of 92.12%, but it also excels in other crucial performance measures, such as sensitivity (94%), specificity (90%), F1-score (93%), and precision (92%). These results demonstrate that our hybrid approach can identify lung cancer signatures more accurately than traditional methods. Moreover, the incorporation of quantum computing enhances processing speed and scalability, making our system a promising tool for early lung cancer screening and diagnosis. By leveraging the strengths of quantum computing, our approach surpasses traditional methods in speed, accuracy, and efficiency. This study highlights the potential of hybrid computational technologies to transform early cancer detection, paving the way for wider clinical applications and improved patient care outcomes.

6.
Front Neuroinform ; 18: 1403732, 2024.
Article in English | MEDLINE | ID: mdl-39139696

ABSTRACT

Introduction: Brain diseases, particularly the classification of gliomas and brain metastases and the prediction of hemorrhagic transformation (HT) in strokes, pose significant challenges in healthcare. Existing methods, relying predominantly on clinical data or imaging-based techniques such as radiomics, often fall short of satisfactory classification accuracy. These methods fail to adequately capture the nuanced features crucial for accurate diagnosis, often hindered by noise and an inability to integrate information across various scales. Methods: We propose a novel approach that combines mask attention mechanisms with multi-scale feature fusion for multimodal brain disease classification tasks, termed M3, which aims to extract features highly relevant to the disease. The extracted features are dimensionally reduced using Principal Component Analysis (PCA) and then classified with a Support Vector Machine (SVM) to obtain the predictive results. Results: Our methodology underwent rigorous testing on multi-parametric MRI datasets for both brain tumors and strokes. The results demonstrate a significant improvement in addressing critical clinical challenges, including the classification of gliomas and brain metastases and the prediction of hemorrhagic stroke transformations. Ablation studies further validate the effectiveness of our attention mechanism and feature fusion modules. Discussion: These findings underscore the potential of our approach to meet and exceed current clinical diagnostic demands, offering promising prospects for enhancing healthcare outcomes in the diagnosis and treatment of brain diseases.
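The final stage described here — PCA for dimensionality reduction feeding an SVM classifier — is a standard pattern and can be sketched with scikit-learn. The synthetic features below stand in for the attention-extracted multimodal features; the component count and kernel are illustrative choices, not the paper's.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for fused multimodal features.
X, y = make_classification(n_samples=200, n_features=256, n_informative=20,
                           random_state=0)

# Standardize, project onto 32 principal components, classify with an RBF SVM.
model = make_pipeline(StandardScaler(), PCA(n_components=32), SVC(kernel="rbf"))
acc = cross_val_score(model, X, y, cv=5).mean()
print(f"5-fold accuracy: {acc:.3f}")
```

Wrapping the three steps in a pipeline keeps the PCA fit inside each cross-validation fold, avoiding leakage from the test split.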

7.
Biomedicines ; 12(7)2024 Jun 23.
Article in English | MEDLINE | ID: mdl-39061969

ABSTRACT

Brain tumor classification is essential for clinical diagnosis and treatment planning. Deep learning models have shown great promise in this task, but they are often challenged by the complex and diverse nature of brain tumors. To address this challenge, we propose a novel deep residual and region-based convolutional neural network (CNN) architecture, called Res-BRNet, for brain tumor classification using magnetic resonance imaging (MRI) scans. Res-BRNet employs a systematic combination of regional and boundary-based operations within modified spatial and residual blocks. The spatial blocks extract homogeneity, heterogeneity, and boundary-related features of brain tumors, while the residual blocks significantly capture local and global texture variations. We evaluated the performance of Res-BRNet on a challenging dataset collected from Kaggle repositories, Br35H, and figshare, containing various tumor categories, including meningioma, glioma, pituitary, and healthy images. Res-BRNet outperformed standard CNN models, achieving excellent accuracy (98.22%), sensitivity (0.9811), F1-score (0.9841), and precision (0.9822). Our results suggest that Res-BRNet is a promising tool for brain tumor classification, with the potential to improve the accuracy and efficiency of clinical diagnosis and treatment planning.

8.
Front Neuroinform ; 18: 1414925, 2024.
Article in English | MEDLINE | ID: mdl-38957549

ABSTRACT

Background: The Rotation Invariant Vision Transformer (RViT) is a novel deep learning model tailored for brain tumor classification using MRI scans. Methods: RViT incorporates rotated patch embeddings to enhance the accuracy of brain tumor identification. Results: Evaluation on the Brain Tumor MRI Dataset from Kaggle demonstrates RViT's superior performance with sensitivity (1.0), specificity (0.975), F1-score (0.984), Matthew's Correlation Coefficient (MCC) (0.972), and an overall accuracy of 0.986. Conclusion: RViT outperforms the standard Vision Transformer model and several existing techniques, highlighting its efficacy in medical imaging. The study confirms that integrating rotational patch embeddings improves the model's capability to handle diverse orientations, a common challenge in tumor imaging. The specialized architecture and rotational invariance approach of RViT have the potential to enhance current methodologies for brain tumor detection and extend to other complex imaging tasks.
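The core idea — making the patch representation stable under orientation changes — can be illustrated in NumPy by pooling a patch's features over its 90-degree rotations. This is a simplified reading for illustration only, not RViT's actual learned embedding.

```python
import numpy as np

def rotation_invariant_embedding(patch):
    """Average a patch over its four 90-degree rotations, then flatten."""
    rotations = [np.rot90(patch, k) for k in range(4)]
    return np.mean([r.ravel() for r in rotations], axis=0)

rng = np.random.default_rng(0)
patch = rng.normal(size=(16, 16))

emb = rotation_invariant_embedding(patch)
emb_rot = rotation_invariant_embedding(np.rot90(patch))

# Rotating the input permutes the same set of four rotations,
# so the pooled embedding is unchanged.
print(np.allclose(emb, emb_rot))
```

Averaging over the rotation group is the simplest invariance construction; a learned model can achieve the same effect with richer, orientation-aware features.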

9.
Cureus ; 16(6): e61483, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38952601

ABSTRACT

This research study explores the effectiveness of a machine learning image classification model in the accurate identification of various types of brain tumors. The tumor types under consideration are gliomas, meningiomas, and pituitary tumors. These are some of the most common types of brain tumors and pose significant challenges for accurate diagnosis and treatment. The model examined in this study is built on the Google Teachable Machine platform (Alphabet Inc., Mountain View, CA). Google Teachable Machine is a machine learning image classification platform built on TensorFlow, a popular open-source machine learning platform. The model was specifically evaluated for its ability to differentiate between normal brains and the aforementioned tumor types in MRI images. MRI images are a common tool in the diagnosis of brain tumors, but the challenge lies in the accurate classification of the tumors, which is where the machine learning model comes into play. The model is trained to recognize patterns in the MRI images that correspond to the different tumor types. The model's performance was assessed using several metrics, including precision, recall, and F1 score, generated from a confusion matrix analysis and performance graphs. A confusion matrix is a table often used to describe the performance of a classification model. Precision measures the model's ability to correctly identify positive instances among all instances it identified as positive. Recall, on the other hand, measures the model's ability to correctly identify positive instances among all actual positive instances. The F1 score combines precision and recall, providing a single metric for model performance. The results of the study were promising.
The Google Teachable Machine model demonstrated high performance, with accuracy, precision, recall, and F1 scores ranging between 0.84 and 1.00. This suggests that the model is highly effective in accurately classifying the different types of brain tumors. This study provides insights into the potential of machine learning models in the accurate classification of brain tumors. The findings of this study lay the groundwork for further research in this area and have implications for the diagnosis and treatment of brain tumors. The study also highlights the potential of machine learning in enhancing the field of medical imaging and diagnosis. With the increasing complexity and volume of medical data, machine learning models like the one evaluated in this study could play a crucial role in improving the accuracy and efficiency of diagnoses. Furthermore, the study underscores the importance of continued research and development in this field to further refine these models and overcome any potential limitations or challenges. Overall, the study contributes to the field of medical imaging and machine learning and sets the stage for future research and advancements in this area.
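The metrics this abstract defines in prose follow directly from confusion-matrix counts. A minimal sketch, with made-up counts for one tumor class (not the study's data):

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall, and F1 from confusion-matrix counts."""
    precision = tp / (tp + fp)   # correct positives among predicted positives
    recall = tp / (tp + fn)      # correct positives among actual positives
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

# Illustrative counts: 90 true positives, 10 false positives, 6 false negatives.
p, r, f1 = precision_recall_f1(tp=90, fp=10, fn=6)
print(f"precision={p:.3f} recall={r:.3f} F1={f1:.3f}")
```

Because F1 is the harmonic mean, it sits between precision and recall but is pulled toward the smaller of the two.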

10.
J Neurosci Methods ; 410: 110227, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39038716

ABSTRACT

BACKGROUND: Accurately diagnosing brain tumors from MRI scans is crucial for effective treatment planning. While traditional methods heavily rely on radiologist expertise, the integration of AI, particularly Convolutional Neural Networks (CNNs), has shown promise in improving accuracy. However, the lack of transparency in AI decision-making processes presents a challenge for clinical adoption. METHODS: Recent advancements in deep learning, particularly the utilization of CNNs, have facilitated the development of models for medical image analysis. In this study, we employed the EfficientNetB0 architecture and integrated explainable AI techniques to enhance both accuracy and interpretability. Grad-CAM visualization was utilized to highlight significant areas in MRI scans influencing classification decisions. RESULTS: Our model achieved a classification accuracy of 98.72% across four categories of brain tumors (glioma, meningioma, no tumor, pituitary), with precision and recall exceeding 97% for all categories. The incorporation of explainable AI techniques was validated through visual inspection of Grad-CAM heatmaps, which aligned well with established diagnostic markers in MRI scans. CONCLUSION: The AI-enhanced EfficientNetB0 framework with explainable AI techniques significantly improves brain tumor classification accuracy to 98.72%, offering clear visual insights into the decision-making process. This method enhances diagnostic reliability and trust, demonstrating substantial potential for clinical adoption in medical diagnostics.
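Grad-CAM itself is a small computation: weight each feature map of the last convolutional layer by the mean gradient of the class score with respect to it, sum, and keep only positive evidence. A framework-free NumPy sketch with random stand-in tensors (the real activations and gradients would come from the trained EfficientNetB0):

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """feature_maps, gradients: (channels, H, W) from the last conv layer."""
    # Global-average-pool the gradients to get one weight per channel.
    weights = gradients.mean(axis=(1, 2))                    # (channels,)
    # Weighted sum of feature maps, then ReLU keeps positive evidence only.
    cam = np.maximum((weights[:, None, None] * feature_maps).sum(axis=0), 0)
    # Normalize to [0, 1] for overlay on the MRI slice.
    return cam / cam.max() if cam.max() > 0 else cam

rng = np.random.default_rng(0)
maps = rng.normal(size=(64, 7, 7))      # stand-in activations
grads = rng.normal(size=(64, 7, 7))     # stand-in class-score gradients

heatmap = grad_cam(maps, grads)
print(heatmap.shape)
```

In practice the low-resolution heatmap is upsampled to the input image size before overlaying.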


Subjects
Brain Neoplasms; Deep Learning; Magnetic Resonance Imaging; Humans; Brain Neoplasms/diagnostic imaging; Magnetic Resonance Imaging/methods; Meningioma/diagnostic imaging; Glioma/diagnostic imaging; Neuroimaging/methods; Neuroimaging/standards; Image Interpretation, Computer-Assisted/methods; Neural Networks, Computer
11.
Electromagn Biol Med ; : 1-15, 2024 Jul 30.
Article in English | MEDLINE | ID: mdl-39081005

ABSTRACT

Efficient and accurate classification of brain tumor categories remains a critical challenge in medical imaging. While existing techniques have made strides, their reliance on generic features often leads to suboptimal results. To overcome these issues, a Multimodal Contrastive Domain Sharing Generative Adversarial Network for Improved Brain Tumor Classification Based on Efficient Invariant Feature Centric Growth Analysis (MCDS-GNN-IBTC-CGA) is proposed in this manuscript. Here, the input images are gathered from a brain tumor dataset. The input images are then preprocessed using a Range-Doppler Matched Filter (RDMF) to improve image quality. Ternary Pattern and Discrete Wavelet Transforms (TPDWT) are then employed for feature extraction, focusing on white and gray mass, edge correlation, and depth features. The proposed method leverages the Multimodal Contrastive Domain Sharing Generative Adversarial Network (MCDS-GNN) to categorize brain tumor images into glioma, meningioma, and pituitary tumors. Finally, the Coati Optimization Algorithm (COA) optimizes MCDS-GNN's weight parameters. The proposed MCDS-GNN-IBTC-CGA is empirically evaluated using accuracy, specificity, sensitivity, precision, F1-score, and Mean Square Error (MSE). Here, MCDS-GNN-IBTC-CGA attains 12.75%, 11.39%, 13.35%, 11.42% and 12.98% greater accuracy compared with existing state-of-the-art techniques: MRI brain tumor categorization using parallel deep convolutional neural networks (PDCNN-BTC), attention-guided convolutional neural network for brain tumor categorization (AGCNN-BTC), intelligent driven deep residual learning for brain tumor categorization (DCRN-BTC), fully convolutional neural networks for brain tumor classification (FCNN-BTC), and Convolutional Neural Network and Multi-Layer Perceptron based brain tumor classification (CNN-MLP-BTC), respectively.


The proposed MCDS-GNN-IBTC-CGA method starts by cleaning brain tumor images with RDMF and extracting features using TPDWT, focusing on color and texture. Subsequently, the MCDS-GNN artificial intelligence system categorizes tumors into types like Glioma and Meningioma. To enhance accuracy, COA fine-tunes the MCDS-GNN parameters. Ultimately, this approach aids in more effective diagnosis and treatment of brain tumors.

12.
BMC Med Imaging ; 24(1): 133, 2024 Jun 05.
Article in English | MEDLINE | ID: mdl-38840240

ABSTRACT

BACKGROUND: Breast cancer is the most common cancer among women, and ultrasound is a common tool for early screening. Nowadays, deep learning techniques are applied as auxiliary tools to provide predictive results for doctors deciding whether to pursue further examinations or treatments. This study aimed to develop a hybrid learning approach for breast ultrasound classification by extracting more potential features from local and multi-center ultrasound data. METHODS: We proposed a hybrid learning approach to classify breast tumors into benign and malignant. Three multi-center datasets (BUSI, BUS, OASBUD) were used to pretrain a model by federated learning, and the model was then fine-tuned locally on each dataset. The proposed model consisted of a convolutional neural network (CNN) and a graph neural network (GNN), aiming to extract features from images at a spatial level and from graphs at a geometric level. The input images are small-sized and free from pixel-level labels, and the input graphs are generated automatically in an unsupervised manner, which saves labor and memory space. RESULTS: The classification AUROC of our proposed method is 0.911, 0.871 and 0.767 for BUSI, BUS and OASBUD; the balanced accuracy is 87.6%, 85.2% and 61.4% respectively. The results show that our method outperforms conventional methods. CONCLUSIONS: Our hybrid approach can learn the inter-feature among multi-center data and the intra-feature of local data. It shows potential in aiding doctors with breast tumor classification in ultrasound at an early stage.
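The pretraining step — federated learning across centers without sharing images — reduces, in its simplest form (federated averaging, FedAvg), to averaging locally trained weights in proportion to each center's sample count. A NumPy sketch with toy parameter vectors and made-up client sizes; the real model is the CNN-GNN hybrid, not these two-element vectors.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """FedAvg: weight each client's parameters by its share of the data."""
    sizes = np.asarray(client_sizes, dtype=float)
    shares = sizes / sizes.sum()
    return sum(s * w for s, w in zip(shares, client_weights))

# Toy parameter vectors from three centers (standing in for BUSI, BUS, OASBUD).
w_a = np.array([1.0, 2.0])
w_b = np.array([3.0, 4.0])
w_c = np.array([5.0, 6.0])

# Hypothetical sample counts per center.
global_w = federated_average([w_a, w_b, w_c], client_sizes=[100, 200, 100])
print(global_w)  # → [3. 4.]
```

In a full round, the server broadcasts `global_w` back to the clients, which train locally again; local fine-tuning as in the abstract then adapts the shared model to each center.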


Subjects
Breast Neoplasms; Deep Learning; Neural Networks, Computer; Ultrasonography, Mammary; Humans; Breast Neoplasms/diagnostic imaging; Female; Ultrasonography, Mammary/methods; Image Interpretation, Computer-Assisted/methods; Adult
13.
Neuro Oncol ; 26(10): 1805-1822, 2024 Oct 03.
Article in English | MEDLINE | ID: mdl-38912846

ABSTRACT

The 2016 and 2021 World Health Organization Classifications of central nervous system tumors have resulted in a major improvement in the classification of isocitrate dehydrogenase (IDH)-mutant gliomas. With more effective treatments, many patients experience prolonged survival. However, treatment guidelines are often still based on information from historical series comprising both patients with IDH wild-type and IDH-mutant tumors. They provide recommendations for radiotherapy and chemotherapy for so-called high-risk patients, usually based on residual tumor after surgery and age over 40. More up-to-date studies give better insight into the clinical, radiological, and molecular factors associated with the outcome of patients with IDH-mutant glioma. These insights should be used today for risk stratification and treatment decisions. In many patients with IDH-mutant grade 2 and 3 glioma, postponing radiotherapy and chemotherapy is safe if patients are carefully monitored, and will not jeopardize their overall outcome. With the INDIGO trial showing patient benefit from the IDH inhibitor vorasidenib, there is a sizable population in which it seems reasonable to try this class of agents before recommending radio-chemotherapy, with its delayed adverse event profile affecting quality of survival. Ongoing trials should help to further identify the patients who benefit from this treatment.


Subjects
Brain Neoplasms; Glioma; Isocitrate Dehydrogenase; Mutation; Neoplasm Grading; Humans; Isocitrate Dehydrogenase/genetics; Isocitrate Dehydrogenase/antagonists & inhibitors; Glioma/genetics; Glioma/drug therapy; Glioma/pathology; Brain Neoplasms/genetics; Brain Neoplasms/drug therapy; Brain Neoplasms/pathology; Brain Neoplasms/therapy; Age Factors; Clinical Decision-Making; Enzyme Inhibitors/therapeutic use
14.
BMC Med Imaging ; 24(1): 110, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38750436

ABSTRACT

Brain tumor classification using MRI images is a crucial yet challenging task in medical imaging. Accurate diagnosis is vital for effective treatment planning but is often hindered by the complex nature of tumor morphology and variations in imaging. Traditional methodologies primarily rely on manual interpretation of MRI images, supplemented by conventional machine learning techniques. These approaches often lack the robustness and scalability needed for precise and automated tumor classification; their major limitations include a high degree of manual intervention, potential for human error, limited ability to handle large datasets, and lack of generalizability to diverse tumor types and imaging conditions. To address these challenges, we propose a federated learning-based deep learning model that leverages the power of Convolutional Neural Networks (CNNs) for automated and accurate brain tumor classification. This approach not only employs a modified VGG16 architecture optimized for brain MRI images but also highlights the significance of federated learning and transfer learning in the medical imaging domain. Federated learning enables decentralized model training across multiple clients without compromising data privacy, addressing the critical need for confidentiality in medical data handling. The model architecture also benefits from transfer learning by utilizing a pre-trained CNN, which significantly enhances its ability to classify brain tumors accurately by leveraging knowledge gained from vast and diverse datasets. Our model is trained on a diverse dataset combining the figshare, SARTAJ, and Br35H datasets, employing a federated learning approach for decentralized, privacy-preserving model training. The adoption of transfer learning further bolsters the model's performance, making it adept at handling the intricate variations in MRI images associated with different types of brain tumors.
The model demonstrates high precision (0.99 for glioma, 0.95 for meningioma, 1.00 for no tumor, and 0.98 for pituitary), recall, and F1-scores in classification, outperforming existing methods. The overall accuracy stands at 98%, showcasing the model's efficacy in classifying various tumor types accurately, thus highlighting the transformative potential of federated learning and transfer learning in enhancing brain tumor classification using MRI images.


Subjects
Brain Neoplasms; Deep Learning; Magnetic Resonance Imaging; Humans; Brain Neoplasms/diagnostic imaging; Brain Neoplasms/classification; Magnetic Resonance Imaging/methods; Neural Networks, Computer; Machine Learning; Image Interpretation, Computer-Assisted/methods
15.
Comput Biol Med ; 175: 108412, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38691914

ABSTRACT

Brain tumor segmentation and classification play a crucial role in the diagnosis and treatment planning of brain tumors. Accurate and efficient methods for identifying tumor regions and classifying different tumor types are essential for guiding medical interventions. This study comprehensively reviews brain tumor segmentation and classification techniques, exploring approaches based on image processing, machine learning, and deep learning. Furthermore, our study reviews existing methodologies, discusses their advantages and limitations, and highlights recent advancements in this field. The impact of existing segmentation and classification techniques for automated brain tumor detection is also critically examined using various open-source datasets of Magnetic Resonance Images (MRI) of different modalities. Moreover, our study highlights the challenges related to segmentation and classification techniques and to datasets with various MRI modalities, to enable researchers to develop innovative and robust solutions for automated brain tumor detection. The results of this study contribute to the development of automated and robust solutions for analyzing brain tumors, ultimately aiding medical professionals in making informed decisions and providing better patient care.


Subjects
Brain Neoplasms; Magnetic Resonance Imaging; Humans; Brain Neoplasms/diagnostic imaging; Magnetic Resonance Imaging/methods; Deep Learning; Image Interpretation, Computer-Assisted/methods; Brain/diagnostic imaging; Machine Learning; Image Processing, Computer-Assisted/methods; Neuroimaging/methods
17.
J Neurooncol ; 168(3): 515-524, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38811523

ABSTRACT

PURPOSE: Accurate classification of cancer subgroups is essential for precision medicine, tailoring treatments to individual patients based on their cancer subtypes. In recent years, advances in high-throughput sequencing technologies have enabled the generation of large-scale transcriptomic data from cancer samples. These data have provided opportunities for developing computational methods that can improve cancer subtyping and enable better personalized treatment strategies. METHODS: In this study, we evaluated different feature selection schemes in the context of meningioma classification. To integrate interpretable features from the bulk (n = 77 samples) and single-cell profiling (~10K cells), we developed an algorithm named CLIPPR, which combines the top-performing single-cell models, RNA-inferred copy number variation (CNV) signals, and the initial bulk model to create a meta-model. RESULTS: While the scheme relying solely on bulk transcriptomic data showed good classification accuracy, it exhibited confusion between malignant and benign molecular classes in approximately 8% of meningioma samples. In contrast, models trained on features learned from meningioma single-cell data accurately resolved the subgroups confused by bulk transcriptomic data but showed limited overall accuracy. CLIPPR showed superior overall accuracy and resolved benign-malignant confusion, as validated on n = 789 bulk meningioma samples gathered from multiple institutions. Finally, we showed the generalizability of our algorithm using our in-house single-cell (~200K cells) and bulk TCGA glioma data (n = 711 samples). CONCLUSION: Overall, our algorithm CLIPPR synergizes the resolution of single-cell data with the depth of bulk sequencing and enables improved cancer subgroup diagnoses and insights into their biology.
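The meta-model idea — feeding the outputs of several base models into a final classifier — follows the general stacking pattern, which can be sketched with scikit-learn. The base learners and synthetic data below are generic placeholders; CLIPPR's actual components are single-cell models, CNV signals, and a bulk model.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for bulk expression features.
X, y = make_classification(n_samples=300, n_features=50, n_informative=10,
                           random_state=0)

# Two generic base models play the role of the specialized classifiers;
# a logistic-regression meta-model combines their predictions.
meta = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("svm", SVC(probability=True, random_state=0))],
    final_estimator=LogisticRegression(),
)
acc = cross_val_score(meta, X, y, cv=5).mean()
print(f"5-fold accuracy: {acc:.3f}")
```

StackingClassifier trains the meta-model on cross-validated base predictions, so the combiner learns which base model to trust without overfitting to their training-set outputs.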


Subjects
Algorithms; Meningeal Neoplasms; Meningioma; Sequence Analysis, RNA; Single-Cell Analysis; Humans; Single-Cell Analysis/methods; Meningeal Neoplasms/genetics; Meningeal Neoplasms/pathology; Meningeal Neoplasms/classification; Meningioma/genetics; Meningioma/pathology; Meningioma/classification; Sequence Analysis, RNA/methods; DNA Copy Number Variations; Biomarkers, Tumor/genetics; High-Throughput Nucleotide Sequencing/methods; Transcriptome; Gene Expression Profiling/methods
18.
Diagnostics (Basel) ; 14(10)2024 May 11.
Article in English | MEDLINE | ID: mdl-38786294

ABSTRACT

Deep learning (DL) networks have shown attractive performance in medical image processing tasks such as brain tumor classification. However, they are often criticized as mysterious "black boxes": the opaqueness of the model and its reasoning process make it difficult for health workers to decide whether to trust the prediction outcomes. In this study, we develop an interpretable multi-part attention network (IMPA-Net) for brain tumor classification to enhance the interpretability and trustworthiness of classification outcomes. The proposed model not only predicts the tumor grade but also provides a global explanation of the model's interpretability and a local explanation as justification for the proffered prediction. The global explanation is represented as a group of feature patterns that the model learns to distinguish the high-grade glioma (HGG) and low-grade glioma (LGG) classes. The local explanation interprets the reasoning process of an individual prediction by calculating the similarity between the prototypical parts of the image and a group of pre-learned task-related features. Experiments conducted on the BraTS2017 dataset demonstrate that IMPA-Net is a verifiable model for the classification task. Two radiologists assessed 86% of the feature patterns as valid representations of task-relevant medical features. The model shows a classification accuracy of 92.12%, of which 81.17% was evaluated as trustworthy based on local explanations. Our interpretable model is a trustworthy model that can be used as a decision aid for glioma classification. Compared with black-box CNNs, it allows health workers and patients to understand the reasoning process and trust the prediction outcomes.
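The local-explanation step — scoring an image region against pre-learned prototypical features by similarity — can be sketched with cosine similarity in NumPy. The vectors below are random illustrations, not the network's learned prototypes.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors, in [-1, 1]."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

rng = np.random.default_rng(0)
prototypes = rng.normal(size=(5, 32))   # 5 pre-learned prototypical parts
image_part = rng.normal(size=32)        # feature vector of one image region

# Similarity of the region to each prototype justifies the prediction locally:
# "this region looks like prototype k of class HGG".
scores = np.array([cosine_similarity(image_part, p) for p in prototypes])
best = int(scores.argmax())
print(f"most similar prototype: {best}, score: {scores[best]:.3f}")
```

Reporting the top-scoring prototypes alongside the prediction is what turns the score into a human-checkable justification.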

19.
Sci Rep ; 14(1): 11977, 2024 05 25.
Article in English | MEDLINE | ID: mdl-38796531

ABSTRACT

The preoperative diagnosis of brain tumors is important for therapeutic planning as it contributes to the tumors' prognosis. In the last few years, developments in artificial intelligence and machine learning have contributed greatly to medicine, especially the diagnosis of brain tumor grades through radiological and magnetic resonance images. Due to the complexity of tumor descriptors in medical images, assessing the accurate grade of a glioma is a major challenge for physicians. We propose a new classification system for glioma grading that integrates novel MRI features with an ensemble learning method, called Ensemble Learning based on Adaptive Power Mean Combiner (EL-APMC). We evaluate and compare the performance of the EL-APMC algorithm with twenty-one classifier models representing state-of-the-art machine learning algorithms. Results show that the EL-APMC algorithm achieved the best performance in terms of classification accuracy (88.73%) and F1-score (93.12%) on the MRI brain tumor dataset BRATS2015. In addition, we showed that the differences in classification results among the twenty-two classifier models are statistically significant. We believe that the EL-APMC algorithm is an effective method for classification on small-sized datasets, which are common in medical fields. The proposed method provides an effective system for glioma classification with high reliability and accurate clinical findings.
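A power-mean combiner fuses the class probabilities of base classifiers through a generalized mean whose exponent p interpolates between the minimum (p → -∞), geometric mean (p → 0), arithmetic mean (p = 1), and maximum (p → +∞); EL-APMC adapts this exponent, while the sketch below fixes it. The toy probabilities are illustrative, not the study's outputs.

```python
import numpy as np

def power_mean_combine(probas, p=1.0):
    """Combine (n_classifiers, n_classes) probability rows via the power mean."""
    probas = np.asarray(probas, dtype=float)
    combined = np.mean(probas ** p, axis=0) ** (1.0 / p)
    return combined / combined.sum()    # renormalize to a distribution

# Three toy classifiers scoring the same sample over three glioma grades.
probas = [[0.7, 0.2, 0.1],
          [0.6, 0.3, 0.1],
          [0.5, 0.3, 0.2]]

combined = power_mean_combine(probas, p=2.0)
pred = int(combined.argmax())
print(pred)  # → 0
```

Raising p makes the combiner reward classes on which at least one base classifier is confident; lowering it demands consensus, which is the knob an adaptive scheme can tune per problem.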


Subjects
Algorithms; Brain Neoplasms; Glioma; Machine Learning; Magnetic Resonance Imaging; Neoplasm Grading; Humans; Glioma/diagnostic imaging; Glioma/classification; Glioma/pathology; Magnetic Resonance Imaging/methods; Brain Neoplasms/diagnostic imaging; Brain Neoplasms/classification; Brain Neoplasms/pathology
20.
Front Oncol ; 14: 1363756, 2024.
Article in English | MEDLINE | ID: mdl-38746679

ABSTRACT

Objectives: The diagnosis and treatment of brain tumors have greatly benefited from extensive research in traditional radiomics, leading to improved efficiency for clinicians. With the rapid development of cutting-edge technologies, especially deep learning, further improvements in accuracy and automation are expected. In this study, we explored a hybrid deep learning scheme that integrates several advanced techniques to achieve reliable diagnosis of primary brain tumors with enhanced classification performance and interpretability. Methods: This study retrospectively included 230 patients with primary brain tumors, including 97 meningiomas, 66 gliomas and 67 pituitary tumors, from the First Affiliated Hospital of Yangtze University. The effectiveness of the proposed scheme was validated on the included data and a commonly used public dataset. Based on super-resolution reconstruction and dynamic learning rate annealing strategies, we compared the classification results of several deep learning models. The multi-classification performance was further improved by combining feature transfer and machine learning. Classification performance metrics included accuracy (ACC), area under the curve (AUC), sensitivity (SEN), and specificity (SPE). Results: In the deep learning tests conducted on two datasets, the DenseNet121 model achieved the highest classification performance, with five-test accuracies of 0.989 ± 0.006 and 0.967 ± 0.013, and AUCs of 0.999 ± 0.001 and 0.994 ± 0.005, respectively. In the hybrid deep learning tests, LightGBM, a promising classifier, achieved accuracies of 0.989 and 0.984, improved from the original deep learning scheme's 0.987 and 0.965. Sensitivities for both datasets were 0.985, specificities were 0.988 and 0.984, respectively, and relatively desirable receiver operating characteristic (ROC) curves were obtained. In addition, model visualization studies further verified the reliability and interpretability of the results.
Conclusions: These results illustrated that deep learning models combining several advanced technologies can reliably improve the performance, automation, and interpretability of primary brain tumor diagnosis, which is crucial for further brain tumor diagnostic research and individualized treatment.
