Results 1 - 20 of 1,692
1.
Article in Chinese | MEDLINE | ID: mdl-38973043

ABSTRACT

Objective: To build a VGG-based computer-aided diagnostic model for chronic sinusitis and evaluate its efficacy. Methods: ① A total of 5 000 frames of diagnosed sinus CT images were collected. The normal group consisted of 1 000 frames (250 frames each of the maxillary, frontal, ethmoid, and sphenoid sinuses), while the abnormal group consisted of 4 000 frames (1 000 frames each of maxillary sinusitis, frontal sinusitis, ethmoid sinusitis, and sphenoid sinusitis). ② The models were trained and tested to obtain five classification models for the normal, sphenoid sinusitis, frontal sinusitis, ethmoid sinusitis, and maxillary sinusitis groups, respectively. The classification efficacy of the models was evaluated objectively in six dimensions: accuracy, precision, sensitivity, specificity, interpretation time, and area under the ROC curve (AUC). ③ Two hundred randomly selected images were read by the model and by three groups of physicians (low, middle, and high seniority) to constitute a comparative experiment. The efficacy of the model was objectively evaluated using the aforementioned evaluation indexes in conjunction with clinical analysis. Results: ① Simulation experiment: The overall recognition accuracy of the model was 83.94%, with a precision of 89.52%, sensitivity of 83.94%, specificity of 95.99%, and an average interpretation time of 0.2 s per frame. The AUC was 0.865 (95%CI 0.849-0.881) for sphenoid sinusitis, 0.924 (0.911-0.936) for frontal sinusitis, 0.895 (0.880-0.909) for ethmoid sinusitis, and 0.974 (0.967-0.982) for maxillary sinusitis. ② Comparison experiment: In terms of recognition accuracy, the model reached 84.52%, the low-seniority physicians group 78.50%, the middle-seniority physicians group 80.50%, and the high-seniority physicians group 83.50%. In terms of recognition precision, the model reached 85.67%, the low-seniority group 79.72%, the middle-seniority group 82.67%, and the high-seniority group 83.66%. In terms of recognition sensitivity, the model reached 84.52%, the low-seniority group 78.50%, the middle-seniority group 80.50%, and the high-seniority group 83.50%. In terms of recognition specificity, the model reached 96.58%, the low-seniority group 94.63%, the middle-seniority group 95.13%, and the high-seniority group 95.88%. In terms of time consumption, the average interpretation time per frame was 0.20 s for the model, 2.35 s for the low-seniority group, 1.98 s for the middle-seniority group, and 2.19 s for the high-seniority group. Conclusion: The deep learning-based artificial intelligence diagnostic model for chronic sinusitis shows good classification performance and high diagnostic efficacy.
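
To make the evaluation dimensions above concrete, here is a minimal sketch (not the authors' code; labels and scores are random placeholders) of per-class sensitivity, specificity, and one-vs-rest AUC for a five-class sinus CT classifier:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

classes = ["normal", "sphenoid", "frontal", "ethmoid", "maxillary"]
rng = np.random.default_rng(0)
y_true = rng.integers(0, 5, size=1000)            # placeholder ground truth
y_prob = rng.dirichlet(np.ones(5), size=1000)     # placeholder softmax scores
y_pred = y_prob.argmax(axis=1)

cm = confusion_matrix(y_true, y_pred, labels=np.arange(5))
for k, name in enumerate(classes):
    tp = cm[k, k]
    fn = cm[k].sum() - tp
    fp = cm[:, k].sum() - tp
    tn = cm.sum() - tp - fn - fp
    sens = tp / (tp + fn)                          # per-class sensitivity
    spec = tn / (tn + fp)                          # per-class specificity
    auc = roc_auc_score((y_true == k).astype(int), y_prob[:, k])  # one-vs-rest AUC
    print(f"{name}: sensitivity={sens:.3f} specificity={spec:.3f} AUC={auc:.3f}")
```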


Subjects
Sinusitis , Tomography, X-Ray Computed , Humans , Chronic Disease , Tomography, X-Ray Computed/methods , Sinusitis/classification , Sinusitis/diagnostic imaging , Diagnosis, Computer-Assisted/methods , Sensitivity and Specificity , Maxillary Sinusitis/diagnostic imaging , Maxillary Sinusitis/classification , Maxillary Sinus/diagnostic imaging , ROC Curve
2.
Cell Biochem Funct ; 42(5): e4088, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38973163

ABSTRACT

The field of image processing is experiencing significant advancements to support professionals in analyzing histological images obtained from biopsies, with the primary objective of enhancing diagnosis and prognostic evaluation. Various forms of cancer can be diagnosed by employing different segmentation techniques followed by postprocessing approaches that can identify distinct neoplastic areas. Computational approaches enable experts to perform more objective and efficient analyses. The progressive advancement of histological image analysis holds significant importance in modern medicine. This paper provides an overview of current advances in segmentation and classification approaches for images of follicular lymphoma. It analyzes the primary image processing techniques utilized in the stages of preprocessing, segmentation of the region of interest, classification, and postprocessing described in the existing literature, and examines the strengths and weaknesses associated with these approaches. Additionally, the study encompasses an examination of validation procedures and an exploration of prospective future research directions in the segmentation of neoplasias.


Subjects
Diagnosis, Computer-Assisted , Image Processing, Computer-Assisted , Lymphoma, Follicular , Lymphoma, Follicular/diagnosis , Lymphoma, Follicular/pathology , Humans
3.
JMIR Dermatol ; 7: e48811, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38954807

ABSTRACT

BACKGROUND: Dermatology is an ideal specialty for artificial intelligence (AI)-driven image recognition to improve diagnostic accuracy and patient care. The lack of dermatologists in many parts of the world and the high frequency of cutaneous disorders and malignancies highlight the increasing need for AI-aided diagnosis. Although AI-based applications for the identification of dermatological conditions are widely available, research assessing their reliability and accuracy is lacking. OBJECTIVE: The aim of this study was to analyze the efficacy of the Aysa AI app as a preliminary diagnostic tool for various dermatological conditions in a semiurban town in India. METHODS: This observational cross-sectional study included patients over the age of 2 years who visited the dermatology clinic. Images of lesions from individuals with various skin disorders were uploaded to the app after obtaining informed consent. The app was used to create a patient profile, identify lesion morphology, plot the location on a human model, and answer questions regarding duration and symptoms. The app presented eight differential diagnoses, which were compared with the clinical diagnosis. The model's performance was evaluated using sensitivity, specificity, accuracy, positive predictive value, negative predictive value, and F1-score. Categorical variables were compared with the χ2 test, and statistical significance was set at P<.05. RESULTS: A total of 700 patients were part of the study. A wide variety of skin conditions were grouped into 12 categories. The AI model had a mean top-1 sensitivity of 71% (95% CI 61.5%-74.3%), top-3 sensitivity of 86.1% (95% CI 83.4%-88.6%), and all-8 sensitivity of 95.1% (95% CI 93.3%-96.6%). The top-1 sensitivities for the diagnosis of skin infestations, disorders of keratinization, other inflammatory conditions, and bacterial infections were 85.7%, 85.7%, 82.7%, and 81.8%, respectively. For photodermatoses and malignant tumors, the top-1 sensitivities were 33.3% and 10%, respectively. For each category, there was a strong correlation between the clinical diagnosis and the model's probable diagnoses (P<.001). CONCLUSIONS: The Aysa app showed promising results in identifying most dermatoses.
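
The top-k sensitivities reported above follow a simple rule: a case counts as a hit if the clinical diagnosis appears among the app's k highest-ranked differentials. A small illustrative sketch (hypothetical diagnoses, not study data):

```python
def top_k_sensitivity(true_labels, ranked_differentials, k):
    """Fraction of cases whose true diagnosis is in the top-k ranked list."""
    hits = sum(t in row[:k] for t, row in zip(true_labels, ranked_differentials))
    return hits / len(true_labels)

true_labels = ["psoriasis", "tinea corporis"]
ranked = [
    ["eczema", "psoriasis", "seborrheic dermatitis"],   # 3 of 8 shown
    ["tinea corporis", "nummular eczema", "psoriasis"],
]
for k in (1, 3):
    print(f"top-{k} sensitivity = {top_k_sensitivity(true_labels, ranked, k):.2f}")
```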


Subjects
Artificial Intelligence , Mobile Applications , Skin Diseases , Humans , Cross-Sectional Studies , Skin Diseases/diagnosis , Male , Female , Adult , Middle Aged , Sensitivity and Specificity , Reproducibility of Results , India , Adolescent , Dermatology/methods , Aged , Young Adult , Diagnosis, Differential , Child
4.
Phys Med Biol ; 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38955331

ABSTRACT

OBJECTIVE: The trend in the medical field is towards intelligent detection-based medical diagnostic systems. However, these methods are often seen as "black boxes" due to their lack of interpretability. This situation presents challenges in identifying the reasons for misdiagnoses and improving accuracy, and it carries potential risks of misdiagnosis and delayed treatment. Enhancing the interpretability of diagnostic models is therefore crucial for improving patient outcomes and reducing treatment delays. So far, only limited research exists on deep learning-based prediction of spontaneous pneumothorax, a pulmonary disease that affects lung ventilation and venous return. APPROACH: This study develops an integrated medical image analysis system using an explainable deep learning model for image recognition and visualization to achieve an interpretable automatic diagnosis process. MAIN RESULTS: The system achieves an impressive 95.56% accuracy in pneumothorax classification, and its visualizations emphasize the significance of the blood vessel penetration defect in clinical judgment. SIGNIFICANCE: This could improve model trustworthiness, reduce uncertainty, and support accurate diagnosis of various lung diseases, resulting in better medical outcomes for patients and better utilization of medical resources. Future research can focus on implementing new deep learning models to detect and diagnose other lung diseases, which would enhance the generalizability of this system.
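
The abstract mentions explainability through visualization but does not name the technique here; Grad-CAM is one common choice, so the sketch below assumes it, with a torchvision ResNet standing in for the study's network and a random tensor for a CT slice:

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None)   # stand-in for the study's network (assumption)
model.eval()
feats, grads = {}, {}

layer = model.layer4                          # last conv block
layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

x = torch.randn(1, 3, 224, 224)               # placeholder "CT slice"
score = model(x)[0].max()                     # top-class logit
score.backward()

w = grads["a"].mean(dim=(2, 3), keepdim=True)         # channel weights
cam = F.relu((w * feats["a"]).sum(dim=1))             # class activation map
cam = F.interpolate(cam[None], size=x.shape[2:], mode="bilinear")[0, 0].detach()
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
```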

5.
Front Med (Lausanne) ; 11: 1372091, 2024.
Article in English | MEDLINE | ID: mdl-38962734

ABSTRACT

Introduction: Microaneurysms serve as early signs of diabetic retinopathy, and their accurate detection is critical for effective treatment. Due to their low contrast and similarity to retinal vessels, distinguishing microaneurysms from background noise and retinal vessels in fluorescein fundus angiography (FFA) images poses a significant challenge. Methods: We present a model for the automatic detection of microaneurysms. FFA images were pre-processed using top-hat transformation, gray-stretching, and Gaussian filtering to eliminate noise. Candidate microaneurysms were coarsely segmented using an improved matched filter algorithm, and real microaneurysms were then segmented by a morphological strategy. To evaluate segmentation performance, our proposed model was compared against other models, including Otsu's method, region growing, global thresholding, matched filtering, fuzzy c-means, and k-means, using both self-constructed and publicly available datasets. Performance metrics such as accuracy, sensitivity, specificity, positive predictive value, and intersection-over-union were calculated. Results: The proposed model outperforms the other models in terms of accuracy, sensitivity, specificity, positive predictive value, and intersection-over-union. The segmentation results obtained with our model closely align with the benchmark standard. Our model demonstrates significant advantages for microaneurysm segmentation in FFA images and holds promise for clinical application in the diagnosis of diabetic retinopathy. Conclusion: The proposed model offers a robust and accurate approach to microaneurysm detection, outperforming existing methods and demonstrating potential for clinical application in the effective treatment of diabetic retinopathy.
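
A minimal sketch of the pre-processing chain described above (assuming OpenCV and a grayscale FFA frame at a hypothetical path):

```python
import cv2
import numpy as np

img = cv2.imread("ffa_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file

# Top-hat: original minus morphological opening, keeps small bright structures.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
tophat = cv2.morphologyEx(img, cv2.MORPH_TOPHAT, kernel)

# Gray-stretching: linearly rescale intensities to the full 0-255 range.
lo, hi = tophat.min(), tophat.max()
stretched = ((tophat - lo) * (255.0 / max(hi - lo, 1))).astype(np.uint8)

# Gaussian filter to suppress residual high-frequency noise.
denoised = cv2.GaussianBlur(stretched, (5, 5), sigmaX=1.0)
```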

6.
Clin Imaging ; 113: 110231, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38964173

ABSTRACT

PURPOSE: Qualitative findings in Crohn's disease (CD) can be challenging to reliably report and quantify. We evaluated machine learning methodologies to both standardize the detection of common qualitative findings of ileal CD and determine finding spatial localization on CT enterography (CTE). MATERIALS AND METHODS: Subjects with ileal CD and a CTE from a single-center retrospective study between 2016 and 2021 were included. 165 CTEs were reviewed by two fellowship-trained abdominal radiologists for the presence and spatial distribution of five qualitative CD findings: mural enhancement, mural stratification, stenosis, wall thickening, and mesenteric fat stranding. A Random Forest (RF) ensemble model using automatically extracted specialist-directed bowel features and an unbiased convolutional neural network (CNN) were developed to predict the presence of qualitative findings. Model performance was assessed using area under the curve (AUC), sensitivity, specificity, accuracy, and kappa agreement statistics. RESULTS: In 165 subjects with 29,895 individual qualitative finding assessments, agreement between radiologists for localization was good to very good (κ = 0.66 to 0.73), except for mesenteric fat stranding (κ = 0.47). RF prediction models had excellent performance, with an overall AUC, sensitivity, and specificity of 0.91, 0.81, and 0.85, respectively. RF model and radiologist agreement for localization of CD findings approximated agreement between radiologists (κ = 0.67 to 0.76). Unbiased CNN models without the benefit of disease knowledge had very similar performance to RF models, which used specialist-defined imaging features. CONCLUSION: Machine learning techniques for CTE image analysis can identify the presence, location, and distribution of qualitative CD findings with performance similar to that of experienced radiologists.
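
As an illustration of the RF step above, a sketch with synthetic placeholder features and labels (not the study data), reporting AUC and kappa as in the paper:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score, cohen_kappa_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(165, 12))        # e.g., wall thickness, enhancement stats
y = rng.integers(0, 2, size=165)      # finding present / absent

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)

print("AUC:  ", roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]))
print("kappa:", cohen_kappa_score(y_te, rf.predict(X_te)))
```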

7.
Cas Lek Cesk ; 162(7-8): 283-289, 2024.
Article in English | MEDLINE | ID: mdl-38981713

ABSTRACT

In recent years, healthcare has been undergoing significant changes driven by technological innovation, with artificial intelligence (AI) as a key trend. In radiodiagnostics in particular, studies suggest that AI has the potential to enhance accuracy and efficiency. We focus on AI's role in diagnosing pulmonary lesions, which could indicate lung cancer, based on chest X-rays. Despite lower sensitivity compared with other methods such as chest CT, the chest X-ray, owing to its routine use, often provides the first detection of lung lesions. We present our deep learning-based solution aimed at improving lung lesion detection, especially at early stages of the illness. We then share results from our previous studies validating this model in two different clinical settings: a general hospital with low-prevalence findings and a specialized oncology center. In a quantitative comparison with the conclusions of radiologists of different levels of experience, our model achieves high sensitivity but lower specificity than the comparison radiologists. In the context of clinical requirements and AI-assisted diagnostics, the experience and clinical reasoning of the doctor play a crucial role; we therefore currently lean towards models that favor sensitivity over specificity, so that even unlikely suspicions are presented to the doctor. Based on these results, artificial intelligence can be expected to play a key role in radiology in the future as a supporting tool for specialists. To achieve this, it is necessary to solve not only technical but also medical and regulatory issues. It is crucial to have access to quality, reliable information not only about the benefits but also about the limitations of machine learning and AI in medicine.


Subjects
Artificial Intelligence , Lung Neoplasms , Radiography, Thoracic , Humans , Lung Neoplasms/diagnostic imaging , Czech Republic , Retrospective Studies , Sensitivity and Specificity , Early Detection of Cancer/methods , Deep Learning
8.
Scand J Gastroenterol ; : 1-8, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38950889

ABSTRACT

OBJECTIVES: Recently, artificial intelligence (AI) has been applied to clinical diagnosis. Although AI has already been developed for gastrointestinal (GI) tract endoscopy, few studies have applied AI to endoscopic ultrasound (EUS) images. In this study, we used a computer-assisted diagnosis (CAD) system with deep learning analysis of EUS images (EUS-CAD) and assessed its ability to differentiate GI stromal tumors (GISTs) from other mesenchymal tumors, as well as its risk classification performance. MATERIALS AND METHODS: A total of 101 pathologically confirmed cases of subepithelial lesions (SELs) arising from the muscularis propria layer, including 69 GISTs, 17 leiomyomas and 15 schwannomas, were examined. A total of 3283 EUS images were used for training and five-fold cross-validation, and 827 images were independently tested for diagnosing GISTs. For the risk classification of the 69 GISTs, comprising very-low-, low-, intermediate- and high-risk GISTs, 2784 EUS images were used for training and three-fold cross-validation. RESULTS: For the differential diagnosis of GISTs among all SELs, the accuracy, sensitivity, specificity and area under the receiver operating characteristic (ROC) curve were 80.4%, 82.9%, 75.3% and 0.865, respectively, whereas those for intermediate- and high-risk GISTs were 71.8%, 70.2%, 72.0% and 0.771, respectively. CONCLUSIONS: The EUS-CAD system showed a good diagnostic yield in differentiating GISTs from other mesenchymal tumors and successfully demonstrated the feasibility of GIST risk classification. This system can help determine whether treatment is necessary based on EUS imaging alone, without the need for additional invasive examinations.
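
One practical point about the cross-validation above, shown as a hedged sketch with synthetic data: when many EUS frames come from each lesion, folds should be split by case rather than by image, so frames of one lesion never straddle training and validation:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(7)
n_images = 500
X = rng.normal(size=(n_images, 32))          # placeholder image features
y = rng.integers(0, 2, size=n_images)        # GIST vs. other mesenchymal tumor
cases = rng.integers(0, 101, size=n_images)  # lesion/case ID for each frame

# GroupKFold keeps all frames of one case in the same fold.
scores = cross_val_score(RandomForestClassifier(200), X, y,
                         cv=GroupKFold(n_splits=5), groups=cases)
print(scores.round(3))
```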

9.
Diagnostics (Basel) ; 14(11)2024 May 24.
Article in English | MEDLINE | ID: mdl-38893619

ABSTRACT

Diabetic retinopathy (DR) arises from blood vessel damage and is a leading cause of blindness worldwide. Clinical professionals rely on examining fundus images to diagnose the disease, but this process is tedious and prone to error. Computer-assisted techniques can help clinicians detect the severity levels of the disease. Experiments involving automated diagnosis employing convolutional neural networks (CNNs) have produced impressive outcomes in medical imaging. At the same time, retinal image grading for detecting DR severity levels has predominantly focused on spatial features; spectral features merit further exploration for more effective performance of this task. Analysing spectral features plays a vital role in various tasks, including identifying specific objects or materials, anomaly detection, and differentiation between classes or categories within an image. In this context, a model incorporating a Wavelet CNN and a Support Vector Machine has been introduced and assessed to classify clinically significant grades of DR from retinal fundus images. The experiments were conducted on the EyePACS dataset, and the performance of the proposed model was evaluated on the following metrics: precision, recall, F1-score, accuracy, and AUC score. The results obtained demonstrate better performance compared to other state-of-the-art techniques.
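
The paper couples a Wavelet CNN with an SVM; the simplified sketch below substitutes hand-crafted wavelet sub-band energies (via PyWavelets) for the CNN, to illustrate the spectral-feature idea on placeholder data:

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def wavelet_features(img, wavelet="haar", level=2):
    """Energy of each wavelet sub-band as a compact spectral descriptor."""
    coeffs = pywt.wavedec2(img, wavelet=wavelet, level=level)
    feats = [np.mean(coeffs[0] ** 2)]                 # approximation energy
    for cH, cV, cD in coeffs[1:]:                     # detail sub-bands
        feats += [np.mean(cH ** 2), np.mean(cV ** 2), np.mean(cD ** 2)]
    return np.array(feats)

rng = np.random.default_rng(0)
images = rng.normal(size=(40, 64, 64))                # placeholder fundus crops
grades = rng.integers(0, 5, size=40)                  # DR severity 0-4

X = np.stack([wavelet_features(im) for im in images])
clf = SVC(kernel="rbf").fit(X, grades)
print(clf.predict(X[:5]))
```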

10.
Diagnostics (Basel) ; 14(11)2024 May 28.
Article in English | MEDLINE | ID: mdl-38893643

ABSTRACT

The evaluation of mammographic breast density, a critical indicator of breast cancer risk, is traditionally performed by radiologists via visual inspection of mammography images, utilizing the Breast Imaging-Reporting and Data System (BI-RADS) breast density categories. However, this method is subject to substantial interobserver variability, leading to inconsistencies and potential inaccuracies in density assessment and subsequent risk estimations. To address this, we present a deep learning-based automatic detection algorithm (DLAD) designed for the automated evaluation of breast density. Our multicentric, multi-reader study leverages a diverse dataset of 122 full-field digital mammography studies (488 images in CC and MLO projections) sourced from three institutions. We invited two experienced radiologists to conduct a retrospective analysis, establishing a ground truth for 72 mammography studies (BI-RADS class A: 18, BI-RADS class B: 43, BI-RADS class C: 7, BI-RADS class D: 4). The efficacy of the DLAD was then compared to the performance of five independent radiologists with varying levels of experience. The DLAD showed robust performance, achieving an accuracy of 0.819 (95% CI: 0.736-0.903), along with an F1 score of 0.798 (0.594-0.905), precision of 0.806 (0.596-0.896), recall of 0.830 (0.650-0.946), and a Cohen's kappa (κ) of 0.708 (0.562-0.841). Its performance matched, and in four cases exceeded, that of the individual radiologists. The statistical analysis did not reveal a significant difference in accuracy between the DLAD and the radiologists, underscoring the model's competitive diagnostic alignment with professional radiologist assessments. These results demonstrate that the deep learning-based automatic detection algorithm can enhance the accuracy and consistency of breast density assessments, offering a reliable tool for improving breast cancer screening outcomes.
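
A worked sketch of the agreement statistic reported above: Cohen's kappa computed from a 4x4 BI-RADS density confusion table (illustrative counts, not the study's):

```python
import numpy as np

cm = np.array([[15,  3, 0, 0],    # rows: reference A-D, cols: model A-D
               [ 2, 38, 3, 0],
               [ 0,  2, 4, 1],
               [ 0,  0, 1, 3]])

n = cm.sum()
p_o = np.trace(cm) / n                          # observed agreement
p_e = (cm.sum(0) * cm.sum(1)).sum() / n**2      # chance agreement
kappa = (p_o - p_e) / (1 - p_e)
print(f"observed={p_o:.3f} chance={p_e:.3f} kappa={kappa:.3f}")
```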

11.
Int J Med Inform ; 189: 105523, 2024 Jun 13.
Article in English | MEDLINE | ID: mdl-38901270

ABSTRACT

BACKGROUND: The surge in emergency head CT imaging and artificial intelligence (AI) advancements, especially deep learning (DL) and convolutional neural networks (CNN), have accelerated the development of computer-aided diagnosis (CADx) for emergency imaging. External validation assesses model generalizability, providing preliminary evidence of clinical potential. OBJECTIVES: This study systematically reviews externally validated CNN-CADx models for emergency head CT scans, critically appraises their diagnostic test accuracy (DTA), and assesses adherence to reporting guidelines. METHODS: Studies comparing CNN-CADx model performance to a reference standard were eligible. The review was registered in PROSPERO (CRD42023411641) and conducted on Medline, Embase, EBM-Reviews and Web of Science following the PRISMA-DTA guideline. DTA reporting was systematically extracted and appraised using standardised checklists (STARD, CHARMS, CLAIM, TRIPOD, PROBAST, QUADAS-2). RESULTS: Six of 5636 identified studies were eligible. The common target condition was intracranial haemorrhage (ICH), and the intended workflow role was auxiliary to experts. Due to methodological and clinical between-study variation, meta-analysis was inappropriate. Scan-level sensitivity exceeded 90% in 5/6 studies, while specificities ranged from 58.0% to 97.7%. The SROC 95% predictive region was markedly broader than the confidence region, extending above 50% sensitivity and 20% specificity. All studies had unclear or high risk of bias and concern for applicability (QUADAS-2, PROBAST), and reporting adherence was below 50% in 20 of 32 TRIPOD items. CONCLUSION: Only 0.1% of the identified studies (6/5636) met the eligibility criteria. The evidence on the DTA of CNN-CADx models for emergency head CT scans remains limited in the scope of this review, as the reviewed studies were scarce, inapt for meta-analysis and undermined by inadequate methodological conduct and reporting. Properly conducted external validation remains a preliminary step for evaluating the clinical potential of AI-CADx models, but prospective and pragmatic clinical validation in comparative trials remains most crucial. In conclusion, future AI-CADx research should be methodologically standardized and reported in a clinically meaningful way to avoid research waste.

12.
J Imaging Inform Med ; 2024 Jun 26.
Article in English | MEDLINE | ID: mdl-38926264

ABSTRACT

Breast cancer is the most common cancer in women. Ultrasound is one of the most widely used diagnostic techniques, but an expert in the field is necessary to interpret the examination. Computer-aided diagnosis (CAD) systems aim to help physicians during this process. Experts use the Breast Imaging-Reporting and Data System (BI-RADS) to describe tumors according to several features (shape, margin, orientation...) and to estimate their malignancy, with a common language. To aid in tumor diagnosis with BI-RADS explanations, this paper presents a deep neural network for tumor detection, description, and classification. An expert radiologist described 749 nodules taken from public datasets using BI-RADS terms. The YOLO detection algorithm is used to obtain Regions of Interest (ROIs), and then a model based on a multi-class classification architecture receives each ROI as input and outputs the BI-RADS descriptors, the BI-RADS classification (with 6 categories), and a Boolean classification of malignancy. Six hundred of the nodules were used for 10-fold cross-validation (CV) and 149 for testing. The accuracy of this model was compared with state-of-the-art CNNs for the same task. This model outperforms plain classifiers in agreement with the expert (Cohen's kappa), with a mean over the descriptors of 0.58 in CV and 0.64 in testing, while the second-best model yielded kappas of 0.55 and 0.59, respectively. Adding YOLO to the model significantly enhances performance (by 0.16 in CV and 0.09 in testing). More importantly, training the model with BI-RADS descriptors enables the explainability of the Boolean malignancy classification without reducing accuracy.
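
A minimal PyTorch sketch (hypothetical shapes and heads, not the published architecture) of the multi-output design described above: one shared backbone over a YOLO-cropped ROI feeding separate heads for descriptors, the 6-way BI-RADS category, and malignancy:

```python
import torch
import torch.nn as nn

class BiRadsHeads(nn.Module):
    def __init__(self, feat_dim=256, n_shape=4, n_margin=5):
        super().__init__()
        self.backbone = nn.Sequential(                 # stand-in feature extractor
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, feat_dim))
        self.shape = nn.Linear(feat_dim, n_shape)      # descriptor: shape
        self.margin = nn.Linear(feat_dim, n_margin)    # descriptor: margin
        self.birads = nn.Linear(feat_dim, 6)           # BI-RADS category
        self.malignant = nn.Linear(feat_dim, 1)        # Boolean malignancy

    def forward(self, roi):
        z = self.backbone(roi)
        return self.shape(z), self.margin(z), self.birads(z), self.malignant(z)

roi = torch.randn(8, 1, 96, 96)                        # hypothetical YOLO crops
shape, margin, birads, mal = BiRadsHeads()(roi)
print(shape.shape, birads.shape, mal.shape)
```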

13.
Ultrasound Med Biol ; 2024 Jun 22.
Article in English | MEDLINE | ID: mdl-38910034

ABSTRACT

BACKGROUND: Ultrasound image examination has become the preferred choice for diagnosing metabolic dysfunction-associated steatotic liver disease (MASLD) due to its non-invasive nature. Computer-aided diagnosis (CAD) technology can help doctors avoid deviations in the detection and classification of MASLD. METHOD: We propose a hybrid model that integrates a pre-trained VGG16 network with an attention mechanism and a stacking ensemble learning model, capable of multi-scale feature aggregation based on the self-attention mechanism and multi-classifier fusion (logistic regression, random forest, support vector machine) based on stacking ensemble learning. The proposed hybrid method achieves four-way classification of normal, mild, moderate, and severe fatty liver based on ultrasound images. RESULT AND CONCLUSION: Our proposed hybrid model reaches an accuracy of 91.34% and exhibits superior robustness against interference, outperforming traditional neural network algorithms. Experimental results show that, compared with the pre-trained VGG16 model, adding the self-attention mechanism improves the accuracy by 3.02%. Using the stacking ensemble learning model as a classifier further increases the accuracy to 91.34%, exceeding any single classifier such as LR (89.86%), SVM (90.34%), and RF (90.73%). The proposed hybrid method can effectively improve the efficiency and accuracy of MASLD ultrasound image detection.
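
A hedged sketch of the stacking step described above, with synthetic vectors standing in for the VGG16+attention features:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 64))            # placeholder deep features
y = rng.integers(0, 4, size=300)          # normal / mild / moderate / severe

# Base learners are fused by a meta-learner trained on out-of-fold predictions.
stack = StackingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(n_estimators=200)),
                ("svm", SVC(probability=True))],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5)
stack.fit(X, y)
print(stack.predict(X[:5]))
```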

14.
Biomed Phys Eng Express ; 10(4)2024 Jun 18.
Article in English | MEDLINE | ID: mdl-38848695

ABSTRACT

Recent advancements in computational intelligence, deep learning, and computer-aided detection have had a significant impact on the field of medical imaging. The task of image segmentation, which involves accurately interpreting and identifying the content of an image, has garnered much attention. The main objective of this task is to separate objects from the background, thereby simplifying and enhancing the significance of the image. However, existing methods for image segmentation have limitations when applied to certain types of images. This survey paper aims to highlight the importance of image segmentation techniques by providing a thorough examination of their advantages and disadvantages. The accurate detection of cancer regions in medical images is crucial for ensuring effective treatment. This study also provides an extensive analysis of Computer-Aided Diagnosis (CAD) systems for cancer identification, with a focus on recent research advancements. The paper critically assesses various techniques for cancer detection and compares their effectiveness. Convolutional neural networks (CNNs) have attracted particular interest due to their ability to segment and classify medical images in large datasets, thanks to their capacity for self-learning and decision-making.


Subjects
Algorithms , Artificial Intelligence , Diagnostic Imaging , Image Processing, Computer-Assisted , Neoplasms , Neural Networks, Computer , Humans , Neoplasms/diagnostic imaging , Neoplasms/diagnosis , Image Processing, Computer-Assisted/methods , Diagnostic Imaging/methods , Diagnosis, Computer-Assisted/methods , Deep Learning
15.
Tomography ; 10(6): 848-868, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38921942

ABSTRACT

Computer-aided diagnosis systems play a crucial role in the diagnosis and early detection of breast cancer. However, most current methods focus primarily on the dual-view analysis of a single breast, thereby neglecting the potentially valuable information between bilateral mammograms. In this paper, we propose a Four-View Correlation and Contrastive Joint Learning Network (FV-Net) for the classification of bilateral mammogram images. Specifically, FV-Net focuses on extracting and matching features across the four views of bilateral mammograms while maximizing both their similarities and dissimilarities. Through the Cross-Mammogram Dual-Pathway Attention Module, feature matching between bilateral mammogram views is achieved, capturing the consistency and complementary features across mammograms and effectively reducing feature misalignment. In the reconstituted feature maps derived from bilateral mammograms, the Bilateral-Mammogram Contrastive Joint Learning module performs associative contrastive learning on positive and negative sample pairs within each local region. This aims to maximize the correlation between similar local features and enhance the differentiation between dissimilar features across the bilateral mammogram representations. Our experimental results on a test set comprising 20% of the combined Mini-DDSM and Vindr-mamo datasets, as well as on the INbreast dataset, show that our model exhibits superior performance in breast cancer classification compared to competing methods.
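
The contrastive joint learning ingredient above can be illustrated with an NT-Xent-style loss over paired local features, pulling positive pairs together and pushing all other pairs apart; the sketch below uses random embeddings and does not reproduce FV-Net's modules:

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.1):
    """z1[i] and z2[i] form a positive pair; all other rows act as negatives."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)
    sim = z @ z.t() / tau
    sim.fill_diagonal_(float("-inf"))           # exclude self-similarity
    n = z1.size(0)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

z1 = torch.randn(16, 128)    # local features from one mammogram view
z2 = torch.randn(16, 128)    # matched features from the paired view
print(nt_xent(z1, z2).item())
```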


Subjects
Breast Neoplasms , Mammography , Radiographic Image Interpretation, Computer-Assisted , Humans , Breast Neoplasms/diagnostic imaging , Mammography/methods , Female , Radiographic Image Interpretation, Computer-Assisted/methods , Breast/diagnostic imaging , Breast/pathology , Diagnosis, Computer-Assisted/methods , Machine Learning , Algorithms
16.
Bioengineering (Basel) ; 11(6)2024 Jun 05.
Article in English | MEDLINE | ID: mdl-38927807

ABSTRACT

Ameloblastoma (AM), periapical cyst (PC), and chronic suppurative osteomyelitis (CSO) are prevalent maxillofacial diseases with similar imaging characteristics but different treatments, making preoperative differential diagnosis crucial. Existing deep learning methods for diagnosis often require manual delineation to tag the regions of interest (ROIs), which poses challenges in practical application. We propose a new model, Wavelet Extraction and Fusion Module with Vision Transformer (WaveletFusion-ViT), for automatic diagnosis using CBCT panoramic images. In this study, 539 samples comprising healthy (n = 154), AM (n = 181), PC (n = 102), and CSO (n = 102) cases were acquired by CBCT for classification, with an additional 2000 healthy samples for pre-training the domain-adaptive network (DAN). The WaveletFusion-ViT model was initialized with pre-trained weights obtained from the DAN and further trained using semi-supervised learning (SSL) methods. After five-fold cross-validation, the model achieved average sensitivity, specificity, accuracy, and AUC of 79.60%, 94.48%, 91.47%, and 0.942, respectively. Remarkably, our method achieved 91.47% accuracy using less than 20% labeled samples, surpassing the fully supervised approach's accuracy of 89.05%. Despite these promising results, this study's limitations include a low number of CSO cases and a relatively lower accuracy for this condition, which should be addressed in future research. This research is regarded as an innovative approach as it deviates from the fully supervised learning paradigm typically employed in previous studies. The WaveletFusion-ViT model combines SSL methods to effectively diagnose three types of CBCT panoramic images using only a small portion of labeled data.
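
The abstract specifies semi-supervised learning but not the exact recipe; pseudo-labeling (self-training) is one common SSL scheme, assumed in the sketch below on synthetic features:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
X_lab = rng.normal(size=(100, 16))            # small labeled set (<20%)
y_lab = rng.integers(0, 4, size=100)          # healthy / AM / PC / CSO
X_unl = rng.normal(size=(400, 16))            # larger unlabeled pool

clf = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
for _ in range(3):                            # self-training rounds
    proba = clf.predict_proba(X_unl)
    conf = proba.max(axis=1) > 0.9            # keep only confident pseudo-labels
    if not conf.any():
        break
    X_aug = np.vstack([X_lab, X_unl[conf]])
    y_aug = np.concatenate([y_lab, proba.argmax(axis=1)[conf]])
    clf = LogisticRegression(max_iter=1000).fit(X_aug, y_aug)
```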

17.
Bioengineering (Basel) ; 11(6)2024 Jun 19.
Article in English | MEDLINE | ID: mdl-38927865

ABSTRACT

Prostate cancer is a significant health concern with high mortality rates and substantial economic impact. Early detection plays a crucial role in improving patient outcomes. This study introduces a non-invasive computer-aided diagnosis (CAD) system that leverages intravoxel incoherent motion (IVIM) parameters for the detection and diagnosis of prostate cancer (PCa). IVIM imaging enables the differentiation of water molecule diffusion within capillaries and outside vessels, offering valuable insights into tumor characteristics. The proposed approach uses a two-step segmentation process based on three U-Net architectures to extract tumor-containing regions of interest (ROIs) from the segmented images. The performance of the CAD system is thoroughly evaluated, considering the optimal classifier and IVIM parameters for differentiation and comparing the diagnostic value of IVIM parameters with the commonly used apparent diffusion coefficient (ADC). The results demonstrate that the combination of central zone (CZ) and peripheral zone (PZ) features with the Random Forest Classifier (RFC) yields the best performance, with an accuracy of 84.08% and a balanced accuracy of 82.60%. This combination showcases high sensitivity (93.24%) and reasonable specificity (71.96%), along with good precision (81.48%) and F1 score (86.96%). These findings highlight the effectiveness of the proposed CAD system in accurately segmenting and diagnosing PCa. This study represents a significant advancement in non-invasive methods for the early detection and diagnosis of PCa, showcasing the potential of IVIM parameters in combination with machine learning techniques. The developed solution has the potential to revolutionize PCa diagnosis, leading to improved patient outcomes and reduced healthcare costs.
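
The IVIM parameters above come from the bi-exponential signal model S(b)/S(0) = f·exp(-b·D*) + (1-f)·exp(-b·D); a sketch of a per-voxel least-squares fit on synthetic signals (b-values and noise are illustrative only):

```python
import numpy as np
from scipy.optimize import curve_fit

def ivim(b, f, D_star, D):
    # f: perfusion fraction, D*: pseudo-diffusion, D: tissue diffusion
    return f * np.exp(-b * D_star) + (1 - f) * np.exp(-b * D)

b = np.array([0, 50, 100, 200, 400, 600, 800], dtype=float)
true = (0.15, 0.020, 0.0012)                         # f, D*, D
rng = np.random.default_rng(3)
signal = ivim(b, *true) + rng.normal(0, 0.01, b.size)

(f, D_star, D), _ = curve_fit(ivim, b, signal, p0=(0.1, 0.01, 0.001),
                              bounds=([0, 0.003, 0], [0.5, 0.1, 0.003]))
print(f"f={f:.3f}  D*={D_star:.4f}  D={D:.5f} mm^2/s")
```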

18.
Med Image Anal ; 97: 103224, 2024 May 31.
Article in English | MEDLINE | ID: mdl-38850624

ABSTRACT

Many real-world image recognition problems, such as diagnostic medical imaging exams, are "long-tailed" - there are a few common findings followed by many more relatively rare conditions. In chest radiography, diagnosis is both a long-tailed and multi-label problem, as patients often present with multiple findings simultaneously. While researchers have begun to study the problem of long-tailed learning in medical image recognition, few have studied the interaction of label imbalance and label co-occurrence posed by long-tailed, multi-label disease classification. To engage with the research community on this emerging topic, we conducted an open challenge, CXR-LT, on long-tailed, multi-label thorax disease classification from chest X-rays (CXRs). We publicly release a large-scale benchmark dataset of over 350,000 CXRs, each labeled with at least one of 26 clinical findings following a long-tailed distribution. We synthesize common themes of top-performing solutions, providing practical recommendations for long-tailed, multi-label medical image classification. Finally, we use these insights to propose a path forward involving vision-language foundation models for few- and zero-shot disease classification.
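
One standard remedy for the label imbalance discussed above (a generic sketch, not a description of any particular challenge entry) is positive-class re-weighting in the multi-label binary cross-entropy, with weights derived from label frequencies:

```python
import torch
import torch.nn as nn

n_findings = 26
n_images = 350_000
label_counts = torch.randint(50, 50_000, (n_findings,)).float()  # placeholder

# Rare findings get proportionally larger positive weights.
pos_weight = (n_images - label_counts) / label_counts
criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)

logits = torch.randn(16, n_findings)                   # model outputs
targets = torch.randint(0, 2, (16, n_findings)).float()
print(criterion(logits, targets).item())
```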

19.
Comput Med Imaging Graph ; 116: 102399, 2024 May 20.
Article in English | MEDLINE | ID: mdl-38833895

ABSTRACT

Lung cancer screening (LCS) using annual computed tomography (CT) scanning significantly reduces mortality by detecting cancerous lung nodules at an earlier stage. Deep learning algorithms can improve nodule malignancy risk stratification. However, they have typically been used to analyse single-time-point CT data when detecting malignant nodules on either baseline or incident CT LCS rounds. Deep learning algorithms have the greatest value in two respects: assessing nodule change across time-series CT scans, where subtle changes may be challenging to identify using the human eye alone, and detecting nodules developing on incident screening rounds, where cancers are generally smaller and more challenging to detect confidently. Here, we show the performance of our Deep learning-based Computer-Aided Diagnosis model integrating Nodule and Lung imaging data with clinical Metadata Longitudinally (DeepCAD-NLM-L) for malignancy prediction. DeepCAD-NLM-L showed improved performance (AUC = 88%) compared with models utilizing single-time-point data alone. DeepCAD-NLM-L also demonstrated comparable and complementary performance to radiologists when interpreting the most challenging nodules typically found in LCS programs, and similar performance to radiologists when assessed on an out-of-distribution imaging dataset. The results emphasize the advantages of using time-series and multimodal analyses when interpreting malignancy risk in LCS.
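
A sketch (PyTorch, all shapes hypothetical) of the longitudinal multimodal idea above: per-timepoint image features summarized over the scan series and fused with clinical metadata; this is not the DeepCAD-NLM-L architecture itself:

```python
import torch
import torch.nn as nn

class LongitudinalFusion(nn.Module):
    def __init__(self, img_dim=128, meta_dim=8, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(img_dim, hidden, batch_first=True)  # across timepoints
        self.head = nn.Sequential(
            nn.Linear(hidden + meta_dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, img_seq, meta):
        _, h = self.rnn(img_seq)              # summarize the CT time series
        z = torch.cat([h[-1], meta], dim=1)   # fuse with clinical metadata
        return self.head(z)                   # malignancy logit

img_seq = torch.randn(4, 3, 128)   # batch of 4, three annual scans' features
meta = torch.randn(4, 8)           # e.g., age, smoking history (placeholders)
print(LongitudinalFusion()(img_seq, meta).shape)   # -> torch.Size([4, 1])
```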

20.
Jpn J Radiol ; 2024 Jun 13.
Article in English | MEDLINE | ID: mdl-38867035

ABSTRACT

PURPOSE: To assess the diagnostic accuracy of ChatGPT-4V in interpreting a set of four chest CT slices for each case of COVID-19, non-small cell lung cancer (NSCLC), and control cases, thereby evaluating its potential as an AI tool in radiological diagnostics. MATERIALS AND METHODS: In this retrospective study, 60 CT scans from The Cancer Imaging Archive, covering COVID-19, NSCLC, and control cases, were analyzed using ChatGPT-4V. A radiologist selected four CT slices from each scan for evaluation. ChatGPT-4V's interpretations were compared against the gold-standard diagnoses and assessed by two radiologists. Statistical analyses focused on accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV), along with an examination of the impact of pathology location and lobe involvement. RESULTS: ChatGPT-4V showed an overall diagnostic accuracy of 56.76%. For NSCLC, sensitivity was 27.27% and specificity was 60.47%. For COVID-19 detection, sensitivity was 13.64% and specificity was 64.29%. For control cases, sensitivity was 31.82%, with a specificity of 95.24%. The highest sensitivity (83.33%) was observed in cases involving all lung lobes. Chi-squared analysis indicated significant differences in sensitivity across categories and in relation to the location and lobar involvement of pathologies. CONCLUSION: ChatGPT-4V demonstrated variable diagnostic performance in chest CT interpretation, with notable proficiency in specific scenarios. This underscores the challenges facing cross-modal AI models like ChatGPT-4V in radiology and points toward significant areas for improvement to ensure dependability. The study emphasizes the importance of enhancing such models for broader, more reliable medical use.
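
A sketch of the chi-squared comparison reported above, applied to a hypothetical 2x3 table of correct versus missed calls per category; the counts are illustrative, not study data:

```python
import numpy as np
from scipy.stats import chi2_contingency

#                COVID-19  NSCLC  control
table = np.array([[ 3,       6,     7],    # correctly diagnosed
                  [19,      16,    15]])   # missed
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.3f}")
```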
