Results 1 - 20 of 1,722
1.
Article in English | MEDLINE | ID: mdl-38992406

ABSTRACT

Artificial intelligence (AI) refers to computer-based methodologies that use data to teach a computer to solve pre-defined tasks; these methods can be applied to identify patterns in large multi-modal data sources. AI applications in inflammatory bowel disease (IBD) include predicting response to therapy, disease activity scoring of endoscopy, drug discovery, and identifying bowel damage in images. As a complex disease with entangled relationships among genomics, metabolomics, the microbiome, and the environment, IBD stands to benefit greatly from methodologies that can handle this complexity. We describe current applications of AI in IBD, identify critical challenges, and propose future directions.

2.
Front Med (Lausanne) ; 11: 1372091, 2024.
Article in English | MEDLINE | ID: mdl-38962734

ABSTRACT

Introduction: Microaneurysms serve as early signs of diabetic retinopathy, and their accurate detection is critical for effective treatment. Because of their low contrast and similarity to retinal vessels, microaneurysms are difficult to distinguish from background noise and vessels in fluorescein fundus angiography (FFA) images. Methods: We present a model for automatic detection of microaneurysms. FFA images were pre-processed using top-hat transformation, gray-stretching, and Gaussian filtering to eliminate noise. Candidate microaneurysms were coarsely segmented using an improved matched-filter algorithm, and real microaneurysms were then segmented by a morphological strategy. To evaluate segmentation performance, the proposed model was compared against other models, including Otsu's method, region growing, global thresholding, matched filtering, fuzzy c-means, and k-means, on both self-constructed and publicly available datasets. Performance metrics including accuracy, sensitivity, specificity, positive predictive value, and intersection-over-union were calculated. Results: The proposed model outperforms the other models on all of these metrics, and its segmentation results closely align with the benchmark standard. Conclusion: The proposed model offers a robust and accurate approach to microaneurysm detection in FFA images, outperforming existing methods and demonstrating promise for clinical application in the diagnosis and effective treatment of diabetic retinopathy.
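A minimal sketch of the pre-processing and coarse-segmentation stages described above (top-hat transform, gray-stretching, Gaussian filtering, matched filtering), assuming OpenCV. The kernel sizes, the Gaussian matched-filter template, the response threshold, and the file path are illustrative choices, not the paper's tuned parameters.

```python
import cv2
import numpy as np

def preprocess_ffa(img: np.ndarray) -> np.ndarray:
    # Top-hat transform: emphasize small bright structures on a dark background
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    tophat = cv2.morphologyEx(img, cv2.MORPH_TOPHAT, kernel)
    # Gray-stretching: rescale intensities to the full 0-255 range
    stretched = cv2.normalize(tophat, None, 0, 255, cv2.NORM_MINMAX)
    # Gaussian filter: suppress high-frequency noise
    return cv2.GaussianBlur(stretched, (5, 5), 1.0)

def matched_filter_response(img: np.ndarray, sigma: float = 1.5) -> np.ndarray:
    # Zero-mean 2-D Gaussian template approximating a microaneurysm profile
    size = int(6 * sigma) | 1                      # odd kernel size
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    template = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    template -= template.mean()
    return cv2.filter2D(img.astype(np.float32), -1, template)

img = cv2.imread("ffa_image.png", cv2.IMREAD_GRAYSCALE)  # assumed input path
response = matched_filter_response(preprocess_ffa(img))
# Keep the strongest 1% of responses as candidate microaneurysms
candidates = (response > np.percentile(response, 99)).astype(np.uint8)
```

In the paper's pipeline, the candidate mask would then be refined by the morphological strategy to reject vessel fragments.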

3.
Clin Imaging ; 113: 110231, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38964173

ABSTRACT

PURPOSE: Qualitative findings in Crohn's disease (CD) can be challenging to reliably report and quantify. We evaluated machine learning methodologies to both standardize the detection of common qualitative findings of ileal CD and determine finding spatial localization on CT enterography (CTE). MATERIALS AND METHODS: Subjects with ileal CD and a CTE from a single-center retrospective study between 2016 and 2021 were included. 165 CTEs were reviewed by two fellowship-trained abdominal radiologists for the presence and spatial distribution of five qualitative CD findings: mural enhancement, mural stratification, stenosis, wall thickening, and mesenteric fat stranding. A Random Forest (RF) ensemble model using automatically extracted, specialist-directed bowel features and an unbiased convolutional neural network (CNN) were developed to predict the presence of qualitative findings. Model performance was assessed using area under the curve (AUC), sensitivity, specificity, accuracy, and kappa agreement statistics. RESULTS: In 165 subjects with 29,895 individual qualitative finding assessments, agreement between radiologists for localization was good to very good (κ = 0.66 to 0.73), except for mesenteric fat stranding (κ = 0.47). RF prediction models had excellent performance, with an overall AUC, sensitivity, and specificity of 0.91, 0.81, and 0.85, respectively. Agreement between the RF model and radiologists for localization of CD findings approximated agreement between radiologists (κ = 0.67 to 0.76). Unbiased CNN models, without the benefit of disease knowledge, performed very similarly to RF models that used specialist-defined imaging features. CONCLUSION: Machine learning techniques for CTE image analysis can identify the presence, location, and distribution of qualitative CD findings with performance similar to that of experienced radiologists.
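A small sketch of the evaluation statistics used above: Cohen's kappa for model-versus-radiologist (or radiologist-versus-radiologist) agreement on per-finding presence calls, and ROC AUC from predicted probabilities, via scikit-learn. The labels and probabilities below are illustrative, not study data.

```python
from sklearn.metrics import cohen_kappa_score, roc_auc_score

# Finding present (1) / absent (0) per assessed bowel segment (illustrative)
rater_a = [1, 0, 1, 1, 0, 1]
rater_b = [1, 0, 1, 0, 0, 1]
kappa = cohen_kappa_score(rater_a, rater_b)  # chance-corrected agreement

# AUC for a model's predicted probability that a finding is present
y_true = [1, 0, 1, 1, 0, 1]
y_prob = [0.9, 0.2, 0.7, 0.4, 0.1, 0.8]
auc = roc_auc_score(y_true, y_prob)
print(f"kappa={kappa:.2f}, AUC={auc:.2f}")
```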

4.
Article in Chinese | MEDLINE | ID: mdl-38973043

ABSTRACT

Objective: To build a VGG-based computer-aided diagnostic model for chronic sinusitis and evaluate its efficacy. Methods: ①A total of 5 000 frames of diagnosed sinus CT images were collected. The normal group consisted of 1 000 frames (250 frames each of the maxillary, frontal, ethmoid, and sphenoid sinuses), while the abnormal group consisted of 4 000 frames (1 000 frames each of maxillary, frontal, ethmoid, and sphenoid sinusitis). ②The models were trained and simulated to obtain five classification models for the normal group and the sphenoid, frontal, ethmoid, and maxillary sinusitis groups, respectively. The classification efficacy of the models was evaluated objectively in six dimensions: accuracy, precision, sensitivity, specificity, interpretation time, and area under the ROC curve (AUC). ③Two hundred randomly selected images were read by the model and by three groups of physicians (low, middle, and high seniority) to constitute a comparative experiment. The efficacy of the model was objectively evaluated using the aforementioned indexes in conjunction with clinical analysis. Results: ①Simulation experiment: The overall recognition accuracy of the model was 83.94%, with a precision of 89.52%, sensitivity of 83.94%, specificity of 95.99%, and an average interpretation time of 0.2 s per frame. The AUC was 0.865 (95%CI 0.849-0.881) for sphenoid sinusitis, 0.924 (0.991-0.936) for frontal sinusitis, 0.895 (0.880-0.909) for ethmoid sinusitis, and 0.974 (0.967-0.982) for maxillary sinusitis. ②Comparative experiment: In terms of recognition accuracy, the model achieved 84.52%, versus 78.50% for the low-seniority physician group, 80.50% for the middle-seniority group, and 83.50% for the high-seniority group. In terms of recognition precision, the model achieved 85.67%, versus 79.72%, 82.67%, and 83.66% for the low-, middle-, and high-seniority groups, respectively. In terms of recognition sensitivity, the model achieved 84.52%, versus 78.50%, 80.50%, and 83.50%, respectively. In terms of recognition specificity, the model achieved 96.58%, versus 94.63%, 95.13%, and 95.88%, respectively. In terms of time consumption, the model averaged 0.20 s per frame, versus 2.35 s, 1.98 s, and 2.19 s for the low-, middle-, and high-seniority groups. Conclusion: The deep learning-based artificial intelligence diagnostic model for chronic sinusitis demonstrates good classification performance and high diagnostic efficacy.


Subject(s)
Sinusitis , Tomography, X-Ray Computed , Humans , Chronic Disease , Tomography, X-Ray Computed/methods , Sinusitis/classification , Sinusitis/diagnostic imaging , Diagnosis, Computer-Assisted/methods , Sensitivity and Specificity , Maxillary Sinusitis/diagnostic imaging , Maxillary Sinusitis/classification , Maxillary Sinus/diagnostic imaging , ROC Curve
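As context for the VGG-based classifier in this entry, a minimal fine-tuning sketch assuming torchvision's VGG16 with an ImageNet-pretrained backbone; the paper's exact VGG variant, CT preprocessing, and hyperparameters are not specified, so those details are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# Five classes: normal, maxillary, frontal, ethmoid, sphenoid sinusitis
NUM_CLASSES = 5

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
# Replace the final fully connected layer for the 5-class task
model.classifier[6] = nn.Linear(model.classifier[6].in_features, NUM_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    # One gradient step on a batch of CT frames (N, 3, 224, 224 assumed)
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```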
5.
Cell Biochem Funct ; 42(5): e4088, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38973163

ABSTRACT

The field of image processing is advancing rapidly to support professionals in analyzing histological images obtained from biopsies, with the primary objective of enhancing diagnosis and prognostic evaluation. Various forms of cancer can be diagnosed by employing different segmentation techniques followed by post-processing approaches that identify distinct neoplastic areas, and computer-based approaches enable more objective and efficient analysis by experts. The progressive advancement of histological image analysis holds significant importance in modern medicine. This paper provides an overview of current advances in segmentation and classification approaches for images of follicular lymphoma. It analyzes the primary image processing techniques used in the stages of preprocessing, segmentation of the region of interest, classification, and postprocessing as described in the existing literature, and examines the strengths and weaknesses of these approaches. Additionally, the study covers validation procedures and explores prospective avenues for future research in the segmentation of neoplasias.


Subject(s)
Diagnosis, Computer-Assisted , Image Processing, Computer-Assisted , Lymphoma, Follicular , Lymphoma, Follicular/diagnosis , Lymphoma, Follicular/pathology , Humans
6.
Scand J Gastroenterol ; : 1-8, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38950889

ABSTRACT

OBJECTIVES: Recently, artificial intelligence (AI) has been applied to clinical diagnosis. Although AI has already been developed for gastrointestinal (GI) tract endoscopy, few studies have applied AI to endoscopic ultrasound (EUS) images. In this study, we used a computer-assisted diagnosis (CAD) system with deep learning analysis of EUS images (EUS-CAD) and assessed its ability to differentiate GI stromal tumors (GISTs) from other mesenchymal tumors, as well as its risk classification performance. MATERIALS AND METHODS: A total of 101 pathologically confirmed cases of subepithelial lesions (SELs) arising from the muscularis propria layer, including 69 GISTs, 17 leiomyomas, and 15 schwannomas, were examined. A total of 3,283 EUS images were used for training and five-fold cross-validation, and 827 images were independently tested for diagnosing GISTs. For the risk classification of the 69 GISTs, comprising very-low-, low-, intermediate-, and high-risk GISTs, 2,784 EUS images were used for training and three-fold cross-validation. RESULTS: For the differential diagnosis of GISTs among all SELs, the accuracy, sensitivity, specificity, and area under the receiver operating characteristic (ROC) curve were 80.4%, 82.9%, 75.3%, and 0.865, respectively, whereas those for intermediate- and high-risk GISTs were 71.8%, 70.2%, 72.0%, and 0.771, respectively. CONCLUSIONS: The EUS-CAD system showed a good diagnostic yield in differentiating GISTs from other mesenchymal tumors and demonstrated the feasibility of GIST risk classification. This system can help determine whether treatment is necessary based on EUS imaging alone, without additional invasive examinations.

7.
Cas Lek Cesk ; 162(7-8): 283-289, 2024.
Article in English | MEDLINE | ID: mdl-38981713

ABSTRACT

In recent years, healthcare has been undergoing significant changes driven by technological innovation, with artificial intelligence (AI) as a key trend. In radiodiagnostics in particular, studies suggest AI has the potential to enhance accuracy and efficiency. We focus on AI's role in diagnosing pulmonary lesions, which may indicate lung cancer, on chest X-rays. Despite lower sensitivity compared with methods such as chest CT, X-rays, owing to their routine use, often provide the first detection of lung lesions. We present our deep learning-based solution aimed at improving lung lesion detection, especially in the early stages of illness. We then share results from our previous studies validating this model in two different clinical settings: a general hospital with a low prevalence of findings and a specialized oncology center. In a quantitative comparison with the conclusions of radiologists of different experience levels, our model achieves high sensitivity but lower specificity than the comparison radiologists. Given clinical requirements and the nature of AI-assisted diagnostics, the experience and clinical reasoning of the physician remain crucial, so we currently favor models with higher sensitivity over specificity: even unlikely suspicions are presented to the physician. Based on these results, artificial intelligence can be expected to play a key role in radiology as a supporting tool for the specialists who evaluate images. Achieving this requires solving not only technical but also medical and regulatory issues, and it is crucial to have access to quality, reliable information about both the benefits and the limitations of machine learning and AI in medicine.


Subject(s)
Artificial Intelligence , Lung Neoplasms , Radiography, Thoracic , Humans , Lung Neoplasms/diagnostic imaging , Czech Republic , Retrospective Studies , Sensitivity and Specificity , Early Detection of Cancer/methods , Deep Learning
8.
JMIR Dermatol ; 7: e48811, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38954807

ABSTRACT

BACKGROUND: Dermatology is an ideal specialty for artificial intelligence (AI)-driven image recognition to improve diagnostic accuracy and patient care. Lack of dermatologists in many parts of the world and the high frequency of cutaneous disorders and malignancies highlight the increasing need for AI-aided diagnosis. Although AI-based applications for the identification of dermatological conditions are widely available, research assessing their reliability and accuracy is lacking. OBJECTIVE: The aim of this study was to analyze the efficacy of the Aysa AI app as a preliminary diagnostic tool for various dermatological conditions in a semiurban town in India. METHODS: This observational cross-sectional study included patients over the age of 2 years who visited the dermatology clinic. Images of lesions from individuals with various skin disorders were uploaded to the app after obtaining informed consent. The app was used to make a patient profile, identify lesion morphology, plot the location on a human model, and answer questions regarding duration and symptoms. The app presented eight differential diagnoses, which were compared with the clinical diagnosis. The model's performance was evaluated using sensitivity, specificity, accuracy, positive predictive value, negative predictive value, and F1-score. Comparison of categorical variables was performed with the χ2 test and statistical significance was considered at P<.05. RESULTS: A total of 700 patients were part of the study. A wide variety of skin conditions were grouped into 12 categories. The AI model had a mean top-1 sensitivity of 71% (95% CI 61.5%-74.3%), top-3 sensitivity of 86.1% (95% CI 83.4%-88.6%), and all-8 sensitivity of 95.1% (95% CI 93.3%-96.6%). The top-1 sensitivities for diagnosis of skin infestations, disorders of keratinization, other inflammatory conditions, and bacterial infections were 85.7%, 85.7%, 82.7%, and 81.8%, respectively. In the case of photodermatoses and malignant tumors, the top-1 sensitivities were 33.3% and 10%, respectively. Each category had a strong correlation between the clinical diagnosis and the probable diagnoses (P<.001). CONCLUSIONS: The Aysa app showed promising results in identifying most dermatoses.


Subject(s)
Artificial Intelligence , Mobile Applications , Skin Diseases , Humans , Cross-Sectional Studies , Skin Diseases/diagnosis , Male , Female , Adult , Middle Aged , Sensitivity and Specificity , Reproducibility of Results , India , Adolescent , Dermatology/methods , Aged , Young Adult , Diagnosis, Differential , Child
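A small sketch of the top-k sensitivity metric reported in this study: a case counts as detected if the clinical diagnosis appears among the model's k highest-ranked differentials. The data structures and example diagnoses below are illustrative assumptions.

```python
from typing import List

def top_k_sensitivity(true_dx: List[str],
                      ranked_dx: List[List[str]],
                      k: int) -> float:
    """Fraction of cases whose true diagnosis is in the top-k differentials."""
    hits = sum(truth in ranked[:k] for truth, ranked in zip(true_dx, ranked_dx))
    return hits / len(true_dx)

# Example with top-1 and top-3 cutoffs, as in the study design (up to all-8)
truth = ["psoriasis", "tinea corporis"]
preds = [["psoriasis", "eczema", "lichen planus"],
         ["eczema", "tinea corporis", "psoriasis"]]
print(top_k_sensitivity(truth, preds, k=1))  # 0.5
print(top_k_sensitivity(truth, preds, k=3))  # 1.0
```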
9.
Phys Med Biol ; 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38955331

ABSTRACT

OBJECTIVE: The trend in the medical field is toward intelligent detection-based diagnostic systems, yet these methods are often seen as "black boxes" due to their lack of interpretability. This makes it difficult to identify the reasons for misdiagnoses and to improve accuracy, creating risks of misdiagnosis and delayed treatment, so enhancing the interpretability of diagnostic models is crucial for improving patient outcomes. To date, only limited research exists on deep learning-based prediction of spontaneous pneumothorax, a pulmonary condition that affects lung ventilation and venous return. APPROACH: This study develops an integrated medical image analysis system that uses an explainable deep learning model for image recognition and visualization, achieving an interpretable automatic diagnosis process. MAIN RESULTS: The system achieves 95.56% accuracy in pneumothorax classification, and its visualizations emphasize the significance of the blood vessel penetration defect in clinical judgment. SIGNIFICANCE: This improves model trustworthiness, reduces uncertainty, and supports accurate diagnosis, leading to better outcomes for patients and better utilization of medical resources. Future research can implement new deep learning models to detect and diagnose other lung diseases and enhance the generalizability of this system.
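The abstract does not name its visualization technique; as one common way to make a CNN classifier's decision inspectable, here is a minimal Grad-CAM sketch (gradient-weighted activation maps over a chosen convolutional layer). This illustrates the general approach under assumed interfaces, not the paper's specific method.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, target_layer, image: torch.Tensor, class_idx: int):
    # Capture the target layer's activations and output gradients via hooks
    activations, gradients = {}, {}
    h1 = target_layer.register_forward_hook(
        lambda m, i, o: activations.update(a=o))
    h2 = target_layer.register_full_backward_hook(
        lambda m, gi, go: gradients.update(g=go[0]))
    try:
        score = model(image)[0, class_idx]   # logit of the class to explain
        model.zero_grad()
        score.backward()
        # Channel weights = global-average-pooled gradients
        weights = gradients["g"].mean(dim=(2, 3), keepdim=True)
        cam = F.relu((weights * activations["a"]).sum(dim=1))  # (N, H, W)
        cam = F.interpolate(cam.unsqueeze(1), size=image.shape[-2:],
                            mode="bilinear", align_corners=False)
        return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    finally:
        h1.remove()
        h2.remove()
```

The normalized map can be overlaid on the input X-ray to show which regions drove the pneumothorax call.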

10.
Med Image Anal ; 97: 103262, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38986351

ABSTRACT

Automatic image-based severity estimation is an important task in computer-aided diagnosis, and severity estimation by deep learning requires a large amount of training data to achieve high performance. In general, severity estimation uses training data annotated with discrete (i.e., quantized) severity labels. Annotating discrete labels is often difficult in images with ambiguous severity, and the annotation cost is high. In contrast, relative annotation, in which the severity of a pair of images is compared, avoids quantizing severity and is thus easier. Relative disease severity can be estimated using a learning-to-rank framework with relative annotations, but the number of pairs that could be annotated is enormous, so the selection of appropriate pairs is essential for relative annotation. In this paper, we propose a deep Bayesian active learning-to-rank method that automatically selects appropriate pairs for relative annotation. Our method preferentially annotates unlabeled pairs with high learning efficiency, as judged from the model uncertainty of the samples. We prove the theoretical basis for adapting Bayesian neural networks to pairwise learning-to-rank and demonstrate the efficiency of our method through experiments on endoscopic images of ulcerative colitis on both private and public datasets. We also show that our method achieves high performance under significant class imbalance because it automatically selects samples from the minority classes.
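A minimal sketch of the two ingredients described above, under assumed details: a RankNet-style pairwise loss for relative severity labels, and Monte-Carlo dropout variance as the Bayesian uncertainty used to pick the next pairs to annotate. The model interface (one scalar severity score per image) is an assumption.

```python
import torch
import torch.nn.functional as F

def pairwise_rank_loss(score_a: torch.Tensor, score_b: torch.Tensor,
                       label: torch.Tensor) -> torch.Tensor:
    # label = 1.0 if image A is more severe than image B, else 0.0
    return F.binary_cross_entropy_with_logits(score_a - score_b, label)

@torch.no_grad()
def mc_dropout_uncertainty(model, images: torch.Tensor,
                           n_samples: int = 20) -> torch.Tensor:
    model.train()  # keep dropout active at inference time (MC dropout)
    scores = torch.stack([model(images).squeeze(-1)
                          for _ in range(n_samples)])
    return scores.var(dim=0)  # high variance = informative sample

def select_pairs(model, pool_images: torch.Tensor, n_pairs: int):
    # Pair up the most uncertain unlabeled samples for relative annotation
    u = mc_dropout_uncertainty(model, pool_images)
    idx = u.argsort(descending=True)[: 2 * n_pairs]
    return list(zip(idx[0::2].tolist(), idx[1::2].tolist()))
```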

11.
Front Big Data ; 7: 1401981, 2024.
Article in English | MEDLINE | ID: mdl-38994120

ABSTRACT

Tuberculosis (TB) is a chronic, pathogenic disease that can be life-threatening. Many people are severely affected by TB owing to inaccurate or late diagnosis and lack of treatment, so early detection is important to protect patients from the severity of the disease and its consequences. Traditionally, manual methods such as reading chest X-rays and CT scans have been used for TB detection; these approaches are time-consuming and often fail to achieve optimal results. Several researchers have therefore focused on automated TB prediction, but existing approaches still suffer from limited accuracy, overfitting, and slow speed. To improve TB prediction, the proposed research adds a Selection Focal Fusion (SFF) block with an attention mechanism to the You Only Look Once v8 (YOLOv8, Ultralytics, Los Angeles, United States) object detection model, trained on the Kaggle TBX-11k dataset. YOLOv8 is used for its ability to detect multiple objects in a single pass; however, it struggles with small objects and fine-grained classification. The SFF technique is incorporated to improve detection performance and reduce the missed-detection rate for small objects. The efficacy of the proposed mechanism is evaluated using performance metrics such as recall, precision, F1-score, and mean average precision (mAP), and comparison with existing models demonstrates the efficiency of the proposed approach. This research is intended to contribute to the medical field and assist radiologists in identifying tuberculosis with the YOLOv8-based model.
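The SFF attention block is the paper's custom modification and is not reproduced here; as context, this is how a stock YOLOv8 detector is fine-tuned and evaluated with the ultralytics package. The dataset YAML path and image file name are illustrative assumptions.

```python
from ultralytics import YOLO

# Start from a pretrained YOLOv8 nano checkpoint
model = YOLO("yolov8n.pt")

# Fine-tune on a dataset config pointing at images and TB bounding-box labels
model.train(data="tbx11k.yaml", epochs=100, imgsz=640)

# Validation reports precision, recall, and mAP on the held-out split
metrics = model.val()

# Run detection on a new chest X-ray with a confidence threshold
results = model.predict("chest_xray.png", conf=0.25)
```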

13.
J Med Imaging (Bellingham) ; 11(4): 044501, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38993628

ABSTRACT

Purpose: Medical imaging-based machine learning (ML) for computer-aided diagnosis of in vivo lesions consists of two basic modules: (i) feature extraction from non-invasively acquired medical images and (ii) feature classification for predicting the malignancy of lesions detected or localized in those images. This study investigates their individual performances for diagnosis of low-dose computed tomography (CT) screening-detected lesions of pulmonary nodules and colorectal polyps. Approach: Three feature extraction methods were investigated. One uses the mathematical descriptor of gray-level co-occurrence image texture measure to extract Haralick image texture features (HFs). One uses a convolutional neural network (CNN) architecture to extract deep learning (DL) image abstractive features (DFs). The third uses the interactions between lesion tissues and the X-ray energy of CT to extract tissue-energy specific characteristic features (TFs). All three categories of extracted features were classified by a random forest (RF) classifier, with comparison to the DL-CNN method, which reads the images, extracts the DFs, and classifies the DFs in an end-to-end manner. ML diagnosis of lesions, or prediction of lesion malignancy, was measured by the area under the receiver operating characteristic curve (AUC). Three lesion image datasets were used, with the lesions' tissue pathology reports as the learning labels. Results: Experiments on the three datasets produced AUC values of 0.724 to 0.878 for the HFs, 0.652 to 0.965 for the DFs, and 0.985 to 0.996 for the TFs, compared to 0.694 to 0.964 for the end-to-end DL-CNN. These outcomes indicate that the RF classifier performed comparably to the DL-CNN classification module and that extraction of tissue-energy specific characteristic features dramatically improved the AUC. Conclusions: The feature extraction module is more important than the feature classification module, and extraction of tissue-energy specific characteristic features is more important than extraction of image abstractive and characteristic features.
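A minimal sketch of the HF + RF pipeline described above: gray-level co-occurrence (Haralick-style) texture features computed per lesion ROI with scikit-image, classified by a random forest, and scored by ROC AUC. The distances, angles, and property set are illustrative assumptions, not the study's feature definition.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

PROPS = ["contrast", "correlation", "energy", "homogeneity"]

def haralick_features(roi: np.ndarray) -> np.ndarray:
    # roi: 2-D uint8 lesion patch; GLCM at distance 1 for two directions
    glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return np.hstack([graycoprops(glcm, p).ravel() for p in PROPS])

def train_and_score(rois_train, y_train, rois_test, y_test) -> float:
    X_train = np.vstack([haralick_features(r) for r in rois_train])
    X_test = np.vstack([haralick_features(r) for r in rois_test])
    rf = RandomForestClassifier(n_estimators=500, random_state=0)
    rf.fit(X_train, y_train)
    # AUC from the predicted probability of malignancy
    return roc_auc_score(y_test, rf.predict_proba(X_test)[:, 1])
```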

14.
Jpn J Radiol ; 2024 Jun 13.
Article in English | MEDLINE | ID: mdl-38867035

ABSTRACT

PURPOSE: To assess the diagnostic accuracy of ChatGPT-4V in interpreting a set of four chest CT slices for each case of COVID-19, non-small cell lung cancer (NSCLC), and control cases, thereby evaluating its potential as an AI tool in radiological diagnostics. MATERIALS AND METHODS: In this retrospective study, 60 CT scans from The Cancer Imaging Archive, covering COVID-19, NSCLC, and control cases, were analyzed using ChatGPT-4V. A radiologist selected four CT slices from each scan for evaluation. ChatGPT-4V's interpretations were compared against the gold-standard diagnoses and assessed by two radiologists. Statistical analyses focused on accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV), along with an examination of the impact of pathology location and lobe involvement. RESULTS: ChatGPT-4V showed an overall diagnostic accuracy of 56.76%. For NSCLC, sensitivity was 27.27% and specificity was 60.47%. For COVID-19 detection, sensitivity was 13.64% and specificity was 64.29%. For control cases, sensitivity was 31.82% and specificity was 95.24%. The highest sensitivity (83.33%) was observed in cases involving all lung lobes. Chi-squared analysis indicated significant differences in sensitivity across categories and in relation to the location and lobar involvement of pathologies. CONCLUSION: ChatGPT-4V demonstrated variable diagnostic performance in chest CT interpretation, with notable proficiency in specific scenarios. This underscores the challenges cross-modal AI models like ChatGPT-4V face in radiology and points toward significant areas for improvement to ensure dependability. The study emphasizes the importance of enhancing these models for broader, more reliable medical use.
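A small sketch of the per-class metrics and chi-squared comparison used above, computed from one-vs-rest confusion counts with scipy. The counts in the contingency table are illustrative values consistent with the reported sensitivities, not the study's actual data.

```python
from scipy.stats import chi2_contingency

def binary_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    # Standard diagnostic metrics from a binary confusion matrix
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Chi-squared test for differences in detection rate across categories
table = [[6, 16],   # NSCLC: detected vs missed (illustrative counts)
         [3, 19],   # COVID-19
         [7, 15]]   # control
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.3f}, dof={dof}")
```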

15.
J Imaging Inform Med ; 2024 Jun 26.
Article in English | MEDLINE | ID: mdl-38926264

ABSTRACT

Breast cancer is the most common cancer in women. Ultrasound is one of the most widely used techniques for diagnosis, but an expert in the field is necessary to interpret the test. Computer-aided diagnosis (CAD) systems aim to help physicians during this process. Experts use the Breast Imaging-Reporting and Data System (BI-RADS) to describe tumors according to several features (shape, margin, orientation...) and estimate their malignancy, with a common language. To aid in tumor diagnosis with BI-RADS explanations, this paper presents a deep neural network for tumor detection, description, and classification. An expert radiologist described 749 nodules taken from public datasets using BI-RADS terms. The YOLO detection algorithm is used to obtain regions of interest (ROIs), and then a model based on a multi-class classification architecture receives each ROI as input and outputs the BI-RADS descriptors, the BI-RADS classification (with 6 categories), and a Boolean classification of malignancy. Six hundred of the nodules were used for 10-fold cross-validation (CV) and 149 for testing. The accuracy of this model was compared with state-of-the-art CNNs for the same task. The model outperforms plain classifiers in agreement with the expert (Cohen's kappa), with a mean over the descriptors of 0.58 in CV and 0.64 in testing, while the second-best model yielded kappas of 0.55 and 0.59, respectively. Adding YOLO to the model significantly enhances performance (by 0.16 in CV and 0.09 in testing). More importantly, training the model with BI-RADS descriptors enables the explainability of the Boolean malignancy classification without reducing accuracy.
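A minimal sketch of the multi-output head described above: one backbone embedding feeds separate heads for each BI-RADS descriptor, the 6-level BI-RADS category, and the Boolean malignancy call. The descriptor names, class counts, and feature dimension are illustrative assumptions.

```python
import torch.nn as nn

# Assumed descriptor cardinalities; the paper's full BI-RADS set differs
DESCRIPTORS = {"shape": 3, "margin": 5, "orientation": 2}

class BiRadsHead(nn.Module):
    def __init__(self, feat_dim: int = 512):
        super().__init__()
        # One classification head per BI-RADS descriptor
        self.descriptor_heads = nn.ModuleDict(
            {name: nn.Linear(feat_dim, n) for name, n in DESCRIPTORS.items()})
        self.birads = nn.Linear(feat_dim, 6)      # 6 BI-RADS categories
        self.malignant = nn.Linear(feat_dim, 1)   # Boolean malignancy logit

    def forward(self, feats):
        # feats: (N, feat_dim) embedding of a YOLO-cropped ROI
        return ({k: h(feats) for k, h in self.descriptor_heads.items()},
                self.birads(feats),
                self.malignant(feats))
```

Training would sum a cross-entropy loss per descriptor and category head with a binary cross-entropy loss on the malignancy logit.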

16.
Tomography ; 10(6): 848-868, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38921942

ABSTRACT

Computer-aided diagnosis systems play a crucial role in the diagnosis and early detection of breast cancer. However, most current methods focus primarily on the dual-view analysis of a single breast, thereby neglecting the potentially valuable information between bilateral mammograms. In this paper, we propose a Four-View Correlation and Contrastive Joint Learning Network (FV-Net) for the classification of bilateral mammogram images. Specifically, FV-Net focuses on extracting and matching features across the four views of bilateral mammograms while maximizing both their similarities and dissimilarities. Through the Cross-Mammogram Dual-Pathway Attention Module, feature matching between bilateral mammogram views is achieved, capturing the consistency and complementary features across mammograms and effectively reducing feature misalignment. In the reconstituted feature maps derived from bilateral mammograms, the Bilateral-Mammogram Contrastive Joint Learning module performs associative contrastive learning on positive and negative sample pairs within each local region. This aims to maximize the correlation between similar local features and enhance the differentiation between dissimilar features across the bilateral mammogram representations. Our experimental results on a test set comprising 20% of the combined Mini-DDSM and Vindr-mamo datasets, as well as on the INbreast dataset, show that our model exhibits superior performance in breast cancer classification compared to competing methods.


Subject(s)
Breast Neoplasms , Mammography , Radiographic Image Interpretation, Computer-Assisted , Humans , Breast Neoplasms/diagnostic imaging , Mammography/methods , Female , Radiographic Image Interpretation, Computer-Assisted/methods , Breast/diagnostic imaging , Breast/pathology , Diagnosis, Computer-Assisted/methods , Machine Learning , Algorithms
17.
Bioengineering (Basel) ; 11(6)2024 Jun 05.
Article in English | MEDLINE | ID: mdl-38927807

ABSTRACT

Ameloblastoma (AM), periapical cyst (PC), and chronic suppurative osteomyelitis (CSO) are prevalent maxillofacial diseases with similar imaging characteristics but different treatments, thus making preoperative differential diagnosis crucial. Existing deep learning methods for diagnosis often require manual delineation in tagging the regions of interest (ROIs), which triggers some challenges in practical application. We propose a new model of Wavelet Extraction and Fusion Module with Vision Transformer (WaveletFusion-ViT) for automatic diagnosis using CBCT panoramic images. In this study, 539 samples containing healthy (n = 154), AM (n = 181), PC (n = 102), and CSO (n = 102) were acquired by CBCT for classification, with an additional 2000 healthy samples for pre-training the domain-adaptive network (DAN). The WaveletFusion-ViT model was initialized with pre-trained weights obtained from the DAN and further trained using semi-supervised learning (SSL) methods. After five-fold cross-validation, the model achieved average sensitivity, specificity, accuracy, and AUC scores of 79.60%, 94.48%, 91.47%, and 0.942, respectively. Remarkably, our method achieved 91.47% accuracy using less than 20% labeled samples, surpassing the fully supervised approach's accuracy of 89.05%. Despite these promising results, this study's limitations include a low number of CSO cases and a relatively lower accuracy for this condition, which should be addressed in future research. This research is regarded as an innovative approach as it deviates from the fully supervised learning paradigm typically employed in previous studies. The WaveletFusion-ViT model effectively combines SSL methods to effectively diagnose three types of CBCT panoramic images using only a small portion of labeled data.

18.
Bioengineering (Basel) ; 11(6)2024 Jun 19.
Article in English | MEDLINE | ID: mdl-38927865

ABSTRACT

Prostate cancer is a significant health concern with high mortality rates and substantial economic impact, and early detection plays a crucial role in improving patient outcomes. This study introduces a non-invasive computer-aided diagnosis (CAD) system that leverages intravoxel incoherent motion (IVIM) parameters for the detection and diagnosis of prostate cancer (PCa). IVIM imaging enables the differentiation of water molecule diffusion within capillaries and outside vessels, offering valuable insights into tumor characteristics. The proposed approach uses a two-step segmentation pipeline built on three U-Net architectures to extract tumor-containing regions of interest (ROIs) from the segmented images. The performance of the CAD system is thoroughly evaluated, considering the optimal classifier and IVIM parameters for differentiation and comparing the diagnostic value of IVIM parameters with the commonly used apparent diffusion coefficient (ADC). The results demonstrate that the combination of central zone (CZ) and peripheral zone (PZ) features with a Random Forest Classifier (RFC) yields the best performance: an accuracy of 84.08% and a balanced accuracy of 82.60%, with high sensitivity (93.24%), reasonable specificity (71.96%), good precision (81.48%), and an F1-score of 86.96%. These findings highlight the effectiveness of the proposed CAD system in accurately segmenting and diagnosing PCa. This study represents a significant advancement in non-invasive methods for early detection and diagnosis of PCa, showcasing the potential of IVIM parameters in combination with machine learning techniques to improve patient outcomes and reduce healthcare costs.
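A minimal sketch of the voxel-wise IVIM parameter estimation that underlies the features above: fitting the standard bi-exponential IVIM signal model S(b) = S0·(f·e^(−b·D*) + (1−f)·e^(−b·D)) to multi-b-value diffusion data with scipy. The b-values, initial guesses, and bounds are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def ivim_model(b, s0, f, d_star, d):
    # Bi-exponential IVIM: perfusion (f, D*) plus true diffusion (D)
    return s0 * (f * np.exp(-b * d_star) + (1 - f) * np.exp(-b * d))

b_values = np.array([0, 50, 100, 200, 400, 800], dtype=float)  # s/mm^2

def fit_voxel(signal: np.ndarray):
    # Initial guesses for S0, f, D* (mm^2/s), D (mm^2/s) -- illustrative
    p0 = [signal[0], 0.1, 0.01, 0.001]
    bounds = ([0, 0, 0, 0], [np.inf, 1, 0.1, 0.01])
    popt, _ = curve_fit(ivim_model, b_values, signal, p0=p0, bounds=bounds)
    s0, f, d_star, d = popt
    # Perfusion fraction, pseudo-diffusion, and diffusion coefficients
    return f, d_star, d
```

Maps of f, D*, and D fitted this way (per zone, per ROI) are the kind of IVIM features a classifier such as the RFC would consume.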

19.
Diagnostics (Basel) ; 14(12)2024 Jun 17.
Article in English | MEDLINE | ID: mdl-38928696

ABSTRACT

Alzheimer's disease (AD) is a neurological disorder that significantly impairs cognitive function, leading to memory loss and eventually death. AD progresses through three stages: an early stage, mild cognitive impairment (MCI, the middle stage), and dementia. Early diagnosis of Alzheimer's disease is crucial and can improve survival among patients, but traditional diagnosis through regular checkups and manual examination is challenging. Advances in computer-aided diagnosis (CAD) systems have led to a variety of artificial intelligence and deep learning-based methods for rapid AD detection. This survey explores the different modalities, feature extraction methods, datasets, machine learning techniques, and validation methods used in AD detection. We reviewed 116 relevant papers from repositories including Elsevier (45), IEEE (25), Springer (19), Wiley (6), PLOS One (5), MDPI (3), World Scientific (3), Frontiers (3), PeerJ (2), Hindawi (2), IOS Press (1), and multiple other sources (2). The review is presented in tables for ease of reference, allowing readers to quickly grasp the key findings of each study. Additionally, this review addresses the challenges in the current literature and emphasizes the importance of interpretability and explainability in understanding deep learning model predictions. The primary goal is to assess existing techniques for AD identification and highlight obstacles to guide future research.

20.
Diagnostics (Basel) ; 14(11)2024 May 24.
Article in English | MEDLINE | ID: mdl-38893619

ABSTRACT

Diabetic retinopathy (DR) arises from blood vessel damage and is a leading cause of blindness worldwide. Clinical professionals rely on examining fundus images to diagnose the disease, but this process is tedious and prone to error, so computer-assisted techniques can help clinicians detect the severity levels of the disease. Experiments on automated diagnosis employing convolutional neural networks (CNNs) have produced impressive outcomes in medical imaging. At the same time, retinal image grading for DR severity has predominantly focused on spatial features; spectral features deserve further exploration for more effective performance on this task. Analysing spectral features plays a vital role in various tasks, including identifying specific objects or materials, anomaly detection, and differentiation between classes or categories within an image. In this context, a model incorporating a Wavelet CNN and a Support Vector Machine has been introduced and assessed to classify clinically significant grades of DR from retinal fundus images. The experiments were conducted on the EyePACS dataset, and the performance of the proposed model was evaluated on the following metrics: precision, recall, F1-score, accuracy, and AUC score. The results demonstrate better performance compared to other state-of-the-art techniques.
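A minimal sketch of the spectral-feature idea described above: a 2-D discrete wavelet decomposition of a fundus image, summary statistics per sub-band as features, and an SVM classifier. The paper couples a Wavelet CNN with the SVM; this simplified hand-crafted variant, along with the wavelet choice and decomposition level, is an illustrative assumption.

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def wavelet_features(img: np.ndarray, wavelet: str = "haar",
                     level: int = 3) -> np.ndarray:
    # Multi-level 2-D DWT: one approximation band + 3 detail bands per level
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    feats = [np.mean(coeffs[0]), np.std(coeffs[0])]  # approximation band
    for detail in coeffs[1:]:                        # (cH, cV, cD) per level
        for band in detail:
            feats += [np.mean(np.abs(band)), np.std(band)]
    return np.array(feats)

def train_dr_grader(images, grades):
    # images: list of 2-D grayscale fundus arrays; grades: severity labels
    X = np.vstack([wavelet_features(im) for im in images])
    clf = SVC(kernel="rbf", probability=True)
    clf.fit(X, grades)
    return clf
```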
