Results 1 - 11 of 11
1.
Sci Rep ; 13(1): 21849, 2023 12 09.
Article in English | MEDLINE | ID: mdl-38071254

ABSTRACT

Early detection of prostate cancer (PCa) and benign prostatic hyperplasia (BPH) is crucial for maintaining the health and well-being of aging male populations. This study aims to evaluate the performance of transfer learning with convolutional neural networks (CNNs) for efficient classification of PCa and BPH in transrectal ultrasound (TRUS) images. A retrospective experimental design was employed in this study, with 1380 TRUS images for PCa and 1530 for BPH. Seven state-of-the-art deep learning (DL) methods were employed as classifiers with transfer learning applied to popular CNN architectures. Performance indices, including sensitivity, specificity, accuracy, positive predictive value (PPV), negative predictive value (NPV), Kappa value, and Hindex (Youden's index), were used to assess the feasibility and efficacy of the CNN methods. The CNN methods with transfer learning demonstrated a high classification performance for TRUS images, with all accuracy, specificity, sensitivity, PPV, NPV, Kappa, and Hindex values surpassing 0.9400. The optimal accuracy, sensitivity, and specificity reached 0.9987, 0.9980, and 0.9980, respectively, as evaluated using twofold cross-validation. The investigated CNN methods with transfer learning showcased their efficiency and ability for the classification of PCa and BPH in TRUS images. Notably, the EfficientNetV2 with transfer learning displayed a high degree of effectiveness in distinguishing between PCa and BPH, making it a promising tool for future diagnostic applications.
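The diagnostic indices reported above (sensitivity, specificity, accuracy, PPV, NPV, and Youden's index) all derive from a 2×2 confusion matrix; a minimal sketch in Python, with hypothetical counts rather than the study's data:

```python
def diagnostic_indices(tp, fp, tn, fn):
    """Diagnostic performance indices from a 2x2 confusion matrix."""
    sensitivity = tp / (tp + fn)            # true-positive rate
    specificity = tn / (tn + fp)            # true-negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    ppv = tp / (tp + fp)                    # positive predictive value
    npv = tn / (tn + fn)                    # negative predictive value
    youden = sensitivity + specificity - 1  # Youden's index (J)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "accuracy": accuracy, "ppv": ppv, "npv": npv, "youden": youden}

# Hypothetical counts for a PCa-vs-BPH test split (not from the paper):
print(diagnostic_indices(tp=680, fp=5, tn=760, fn=10))
```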


Subject(s)
Prostatic Hyperplasia , Prostatic Neoplasms , Male , Humans , Prostatic Hyperplasia/diagnostic imaging , Retrospective Studies , Prostatic Neoplasms/diagnostic imaging , Neural Networks, Computer , Machine Learning
2.
J Xray Sci Technol ; 31(6): 1315-1332, 2023.
Article in English | MEDLINE | ID: mdl-37840464

ABSTRACT

BACKGROUND: Dental panoramic imaging plays a pivotal role in dentistry for diagnosis and treatment planning. However, correctly positioning patients can be challenging for technicians due to the complexity of the imaging equipment and variations in patient anatomy, leading to positioning errors. These errors can compromise image quality and potentially result in misdiagnoses. OBJECTIVE: This research aims to develop and validate a deep learning model capable of accurately and efficiently identifying multiple positioning errors in dental panoramic imaging. METHODS AND MATERIALS: This retrospective study used 552 panoramic images selected from a hospital Picture Archiving and Communication System (PACS). We defined six types of errors (E1-E6), namely: (1) slumped position, (2) chin tipped low, (3) open lip, (4) head turned to one side, (5) head tilted to one side, and (6) tongue against the palate. First, six Convolutional Neural Network (CNN) models were employed to extract image features, which were then fused using transfer learning. Next, a Support Vector Machine (SVM) was applied to create a classifier for multiple positioning errors, using the fused image features. Finally, classifier performance was evaluated using three indices: precision, recall rate, and accuracy. RESULTS: Experimental results show that the fusion of image features with six binary SVM classifiers yielded high accuracy, recall rates, and precision. Specifically, the classifier achieved an accuracy of 0.832 for identifying multiple positioning errors. CONCLUSIONS: This study demonstrates that six SVM classifiers effectively identify multiple positioning errors in dental panoramic imaging. The fusion of extracted image features and the use of SVM classifiers improve diagnostic precision, suggesting potential enhancements in dental imaging efficiency and diagnostic accuracy. Future research should consider larger datasets and explore real-time clinical application.
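The classification stage described above — fused image features feeding one binary SVM per error type — can be sketched with synthetic feature vectors standing in for the CNN outputs (the dimensions and labels here are illustrative assumptions, not the paper's):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200

# Six binary positioning-error labels (E1-E6), one column per error:
labels = rng.integers(0, 2, size=(n, 6))

# Stand-ins for fused CNN features: six informative dimensions (one per
# error type) plus noise dimensions from a second "backbone":
informative = (2 * labels - 1) * 1.5 + rng.normal(scale=0.3, size=(n, 6))
noise = rng.normal(size=(n, 8))
fused = np.hstack([informative, noise])  # feature-level fusion

# One binary SVM per error type, as in the paper's setup:
classifiers = [SVC(kernel="linear").fit(fused, labels[:, k]) for k in range(6)]
preds = np.column_stack([clf.predict(fused) for clf in classifiers])
print("mean training accuracy:", (preds == labels).mean())
```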


Subject(s)
Deep Learning , Radiology Information Systems , Humans , Retrospective Studies , Diagnostic Imaging , Neural Networks, Computer
3.
Healthcare (Basel) ; 11(15)2023 Aug 07.
Article in English | MEDLINE | ID: mdl-37570467

ABSTRACT

This study focuses on overcoming challenges in classifying eye diseases using color fundus photographs by leveraging deep learning techniques, aiming to enhance early detection and diagnosis accuracy. We utilized a dataset of 6392 color fundus photographs across eight disease categories, which was later augmented to 17,766 images. Five well-known convolutional neural networks (CNNs), namely efficientnetb0, mobilenetv2, shufflenet, resnet50, and resnet101, and a custom-built CNN were integrated and trained on this dataset. Image sizes were standardized, and model performance was evaluated via accuracy, Kappa coefficient, and precision metrics. Shufflenet and efficientnetb0 demonstrated strong performances, while our custom 17-layer CNN outperformed all with an accuracy of 0.930 and a Kappa coefficient of 0.920. Furthermore, we found that fusing image features with classical machine learning classifiers increased performance, with Logistic Regression showing the best results. Our study highlights the potential of AI and deep learning models in accurately classifying eye diseases and demonstrates the efficacy of custom-built models and the fusion of deep learning and classical methods. Future work should focus on validating these methods across larger datasets and assessing their real-world applicability.
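The evaluation metrics named above (accuracy, Kappa coefficient, precision) are available in scikit-learn; a small illustration on toy labels (the eight categories here are placeholders, not the study's disease classes):

```python
from sklearn.metrics import accuracy_score, cohen_kappa_score, precision_score

# Toy ground-truth and predicted category labels (0-7), purely illustrative:
y_true = [0, 1, 2, 2, 3, 4, 5, 6, 7, 7, 1, 0]
y_pred = [0, 1, 2, 3, 3, 4, 5, 6, 7, 7, 1, 0]

print("accuracy :", accuracy_score(y_true, y_pred))
print("kappa    :", cohen_kappa_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred,
                                    average="macro", zero_division=0))
```

Cohen's Kappa corrects raw accuracy for chance agreement, which matters when class frequencies are imbalanced, as is common in disease datasets.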

4.
Healthcare (Basel) ; 11(10)2023 May 10.
Article in English | MEDLINE | ID: mdl-37239653

ABSTRACT

Convolutional neural networks (CNNs) have shown promise in accurately diagnosing coronavirus disease 2019 (COVID-19) and bacterial pneumonia using chest X-ray images. However, determining the optimal feature extraction approach is challenging. This study investigates the use of fusion-extracted features by deep networks to improve the accuracy of COVID-19 and bacterial pneumonia classification with chest X-ray radiography. A Fusion CNN method was developed using five different deep learning models after transfer learning to extract image features. The combined features were used to build a support vector machine (SVM) classifier with an RBF kernel. The performance of the model was evaluated using accuracy, Kappa values, recall rate, and precision scores. The Fusion CNN model achieved an accuracy and Kappa value of 0.994 and 0.991, with precision scores for the normal, COVID-19, and bacterial groups of 0.991, 0.998, and 0.994, respectively. The results indicate that the Fusion CNN models with the SVM classifier provided reliable and accurate classification performance, with Kappa values no less than 0.990. The Fusion CNN approach could thus be a possible solution for further enhancing accuracy. The study demonstrates the potential of deep learning and fusion-extracted features for accurate COVID-19 and bacterial pneumonia classification with chest X-ray radiography.
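The RBF kernel behind the SVM classifier is K(x, y) = exp(-γ‖x − y‖²); a quick sanity check of this formula against scikit-learn's implementation, using random vectors as stand-ins for the fused features:

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 8))  # five fused feature vectors (illustrative)
gamma = 0.1

# Manual RBF kernel: K[i, j] = exp(-gamma * ||x_i - x_j||^2)
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
K_manual = np.exp(-gamma * sq_dists)

print(np.allclose(K_manual, rbf_kernel(X, gamma=gamma)))  # True
```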

5.
Healthcare (Basel) ; 10(12)2022 Nov 27.
Article in English | MEDLINE | ID: mdl-36553906

ABSTRACT

According to statistics from the Health Promotion Administration of Taiwan's Ministry of Health and Welfare, over ten thousand women are diagnosed with breast cancer every year. Mammography is widely used to detect breast cancer. However, it is limited by the operator's technique, the cooperation of the subjects, and the subjective interpretation by the physician, which results in inconsistent identification. Therefore, this study explores the use of a deep neural network algorithm for the classification of mammography images. In the experimental design, a retrospective study was used to collect imaging data from actual clinical cases. The mammography images were collected and classified according to the Breast Imaging Reporting and Data System (BI-RADS). In terms of model building, a fully convolutional dense connection network (FC-DCN) was used for the network backbone. All images underwent image preprocessing, a data augmentation method, and transfer learning to build the mammography image classification model. The results show that the model's accuracy, sensitivity, and specificity were 86.37%, 100%, and 72.73%, respectively. The FC-DCN framework effectively reduces the number of training parameters and yields a reasonable mammography image classification model.
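The abstract does not specify which augmentations were used; as an illustration, a few standard label-preserving transforms (flips and 90° rotations) can be generated with NumPy alone:

```python
import numpy as np

def augment(img):
    """Return simple augmented variants of a 2-D image array:
    horizontal flip, vertical flip, and 90/180/270-degree rotations."""
    return [np.fliplr(img), np.flipud(img),
            np.rot90(img, 1), np.rot90(img, 2), np.rot90(img, 3)]

img = np.arange(12).reshape(3, 4)  # stand-in for a mammogram array
variants = augment(img)
print(len(variants), [v.shape for v in variants])
```

Each transform preserves the BI-RADS label of the image, so a dataset can be expanded several-fold without new annotation.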

6.
Diagnostics (Basel) ; 12(6)2022 Jun 13.
Article in English | MEDLINE | ID: mdl-35741267

ABSTRACT

Chest X-ray (CXR) is widely used to diagnose conditions affecting the chest, its contents, and its nearby structures. In this study, we used a private data set containing 1630 CXR images with disease labels; most of the images were disease-free, but the others contained multiple sites of abnormalities. Here, we used deep convolutional neural network (CNN) models to extract feature representations and to identify possible diseases in these images. We also used transfer learning combined with large open-source image data sets to resolve the problems of insufficient training data and to optimize the classification model. We further assessed the effects on transfer learning of different approaches to reusing pretrained weights (model finetuning and layer transfer), source data sets of different sizes and similarity levels to the target data (ImageNet, ChestX-ray, and CheXpert), methods of integrating source data sets into transfer learning (initiating, concatenating, and co-training), and backbone CNN models (ResNet50 and DenseNet121). The results demonstrated that transfer learning applied with the model finetuning approach typically afforded better prediction models. When only one source data set was adopted, ChestX-ray performed better than CheXpert; however, after ImageNet-pretrained initial weights were incorporated, CheXpert performed better. ResNet50 performed better in initiating transfer learning, whereas DenseNet121 performed better in concatenating and co-training transfer learning. Transfer learning with multiple source data sets was preferable to transfer learning with a single source data set. Overall, transfer learning can further enhance prediction capabilities and reduce computing costs for CXR images.
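The two weight-reuse approaches compared above differ mainly in which parameters stay trainable: model finetuning updates the whole network, while layer transfer freezes the transferred backbone and trains only the new head. A PyTorch sketch with a tiny stand-in backbone (not the paper's ResNet50/DenseNet121):

```python
import torch
import torch.nn as nn

# A small stand-in for a pretrained CXR backbone:
backbone = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
head = nn.Linear(16, 2)  # new task head, e.g. normal vs abnormal
model = nn.Sequential(backbone, head)

def set_layer_transfer(model, frozen=True):
    """Layer transfer: freeze the transferred backbone, train only the head.
    With frozen=False, all layers train (model finetuning)."""
    for p in model[0].parameters():
        p.requires_grad = not frozen

set_layer_transfer(model, frozen=True)   # layer transfer
n_lt = sum(p.numel() for p in model.parameters() if p.requires_grad)
print("trainable params (layer transfer):", n_lt)

set_layer_transfer(model, frozen=False)  # model finetuning
n_ft = sum(p.numel() for p in model.parameters() if p.requires_grad)
print("trainable params (finetuning):   ", n_ft)

out = model(torch.zeros(1, 1, 32, 32))   # one dummy grayscale CXR
print("output shape:", tuple(out.shape))
```

Freezing the backbone also lowers computing cost per step, which is one reason layer transfer is attractive when the target data set is small.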

7.
J Xray Sci Technol ; 30(5): 953-966, 2022.
Article in English | MEDLINE | ID: mdl-35754254

ABSTRACT

BACKGROUND: Segmenting liver organs or lesions depicted on computed tomography (CT) images can help with tumor staging and treatment. However, most existing image segmentation technologies rely on manual or semi-automatic analysis, making the analysis process costly and time-consuming. OBJECTIVE: This research aims to develop and apply a deep learning network architecture to segment liver tumors automatically after fine-tuning its parameters. METHODS AND MATERIALS: The medical imaging data were obtained from the International Symposium on Biomedical Imaging (ISBI) and include 3D abdominal CT scans of 131 patients diagnosed with liver tumors. From these CT scans, 7,190 2D CT images were extracted along with labeled binary images, which were regarded as the gold standard for evaluating the results segmented by a Fully Convolutional Network (FCN). The FCN backbones examined in this study were Xception, InceptionResNetv2, MobileNetv2, ResNet18, and ResNet50. Meanwhile, parameters including the optimizer (SGDM or ADAM), number of epochs, and batch size were investigated. CT images were randomly divided into training and testing sets at a ratio of 9:1. Several evaluation indices, including Global Accuracy, Mean Accuracy, Mean IoU (Intersection over Union), Weighted IoU, and Mean BF Score, were applied to evaluate tumor segmentation results on the testing images. RESULTS: The Global Accuracy, Mean Accuracy, Mean IoU, Weighted IoU, and Mean BF Score were 0.999, 0.969, 0.954, 0.998, and 0.962 using ResNet50 in the FCN with the SGDM optimizer, a batch size of 12, and 9 epochs. Fine-tuning the FCN parameters proved important: the top 20 FCN models achieved higher tumor segmentation accuracy, with Mean IoU over 0.900. Among these top models, InceptionResNetv2, MobileNetv2, ResNet18, ResNet50, and Xception appeared 9, 6, 3, 5, and 2 times, respectively; InceptionResNetv2 therefore showed higher performance than the others.
CONCLUSIONS: This study developed and tested an automated liver tumor segmentation model based on the FCN. The results demonstrate that many deep learning backbones, including InceptionResNetv2, MobileNetv2, ResNet18, ResNet50, and Xception, have high potential to segment liver tumors from CT images with accuracy exceeding 90%. However, accurately segmenting tiny tumors with FCN models remains difficult.
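Two of the reported indices, Global Accuracy and Mean IoU, are straightforward to compute from binary masks; a minimal NumPy sketch on a toy prediction/ground-truth pair:

```python
import numpy as np

def segmentation_scores(pred, gt):
    """Global accuracy and mean per-class IoU for binary
    (tumor vs background) segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    global_acc = (pred == gt).mean()
    iou_fg = (pred & gt).sum() / (pred | gt).sum()      # tumor IoU
    iou_bg = (~pred & ~gt).sum() / (~pred | ~gt).sum()  # background IoU
    return global_acc, (iou_fg + iou_bg) / 2            # accuracy, Mean IoU

pred = np.array([[1, 1, 0, 0],
                 [1, 0, 0, 0]])
gt   = np.array([[1, 1, 0, 0],
                 [0, 0, 0, 0]])
print(segmentation_scores(pred, gt))
```

Because liver tumors occupy few pixels, global accuracy can be near 1.0 even for poor segmentations; Mean IoU weights the tumor class equally, which is why it is the stricter index here.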


Subject(s)
Liver Neoplasms , Tomography, X-Ray Computed , Abdomen/diagnostic imaging , Humans , Image Processing, Computer-Assisted/methods , Liver Neoplasms/diagnostic imaging , Tomography, X-Ray Computed/methods
8.
Fundam Clin Pharmacol ; 35(4): 634-644, 2021 Aug.
Article in English | MEDLINE | ID: mdl-33278834

ABSTRACT

Intracerebral hemorrhage (ICH) is a common and severe neurological disorder associated with high morbidity and mortality rates. Despite extensive research into its pathology, there are no clinically approved neuroprotective treatments for ICH. Increasing evidence has revealed that inflammatory responses mediate the pathophysiological processes of brain injury following ICH. Experimental ICH was induced by direct infusion of 100 µL of fresh (non-heparinized) autologous whole blood into the right basal ganglia of Sprague-Dawley rats at a constant rate (10 µL/min). The simvastatin group was administered simvastatin (15 mg/kg), and the combination therapy group was administered simvastatin (10 mg/kg) and ezetimibe (10 mg/kg). Magnetic resonance imaging (MRI), the forelimb use asymmetry test, the Morris water maze test, and two biomarkers were used to evaluate the effects of simvastatin and the combination therapy. MRI revealed that combination therapy resulted in significantly reduced perihematomal edema. Biomarker analyses revealed that both treatments led to significantly reduced endothelial inflammatory responses. The forelimb use asymmetry test revealed that both treatment groups had significantly improved neurological outcomes. The Morris water maze test revealed improved neurological function after combined therapy, which also led to less neuronal loss in the hippocampal CA1 region. In conclusion, simvastatin-ezetimibe combination therapy can improve neurological function, attenuate the endothelial inflammatory response, and reduce neuronal loss in the hippocampal CA1 region in a rat model of ICH.


Subject(s)
Cerebral Hemorrhage/drug therapy , Ezetimibe/pharmacology , Neuroprotective Agents/pharmacology , Simvastatin/pharmacology , Animals , Cerebral Hemorrhage/metabolism , Disease Models, Animal , Drug Therapy, Combination , Ezetimibe/therapeutic use , Hippocampus/metabolism , Intercellular Signaling Peptides and Proteins/metabolism , Male , Maze Learning , Neurons/drug effects , Neuroprotective Agents/therapeutic use , Rats , Rats, Sprague-Dawley , Simvastatin/therapeutic use
9.
J Xray Sci Technol ; 24(3): 353-9, 2016 03 17.
Article in English | MEDLINE | ID: mdl-27257874

ABSTRACT

BACKGROUND: Coronary artery disease (CAD) remains the leading cause of death worldwide. Currently, cardiac multi-detector computed tomography (MDCT) is widely used to diagnose CAD. The purpose of this study is to identify informative and useful predictors from the left ventricle (LV) in early CAD patients using cardiac MDCT images. MATERIALS AND METHODS: The study group comprised 42 subjects who underwent a screening health examination, including laboratory testing and cardiac angiography by 64-slice MDCT angiography. Two geometrical characteristics and one image density measure, defined as shape, size, and stiffness, were extracted from the MDCT images. The t-test, logistic regression, and receiver operating characteristic (ROC) curve were applied to assess and identify the significant predictors. The Kappa statistic was used to examine agreement with physicians' judgments (treated as the gold standard of truth, GOT). RESULTS: The proposed three characteristics of LV MDCT images are important predictors and risk factors for early CAD patients. These predictors achieved AUC values over 0.80 and elevated odds ratios. The Kappa statistic was 0.68 when shape and stiffness were combined in logistic regression. CONCLUSIONS: The shape, size, and stiffness of the left ventricle on MDCT can serve as effective indicators in early CAD patients. Moreover, combining shape and stiffness in logistic regression provides substantial agreement with physicians' judgments.
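The analysis pipeline — logistic regression on LV characteristics, ROC/AUC evaluation, and Kappa agreement — can be sketched with a toy synthetic cohort (the shape/stiffness values below are invented for illustration, not the study's measurements):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, cohen_kappa_score

# Toy synthetic (shape, stiffness) measurements -- not the study's data:
X = np.array([[1.6, 1.3], [1.4, 1.1], [1.8, 1.5], [1.2, 0.9], [1.5, 1.4],   # early CAD
              [0.2, 0.1], [0.4, 0.3], [0.1, 0.2], [0.3, 0.0], [0.0, 0.4]])  # controls
y = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])

clf = LogisticRegression().fit(X, y)  # combine shape + stiffness
scores = clf.predict_proba(X)[:, 1]   # predicted CAD probability
print("AUC  :", roc_auc_score(y, scores))
print("Kappa:", cohen_kappa_score(y, clf.predict(X)))
```

The AUC measures how well the combined predictors rank patients, while Kappa measures chance-corrected agreement between the model's decisions and a reference judgment.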


Subject(s)
Coronary Artery Disease/diagnostic imaging , Heart Ventricles/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Adult , Aged , Female , Humans , Male , Middle Aged , ROC Curve , Retrospective Studies
10.
J Xray Sci Technol ; 24(1): 133-43, 2016.
Article in English | MEDLINE | ID: mdl-26890904

ABSTRACT

PURPOSE: A novel diagnostic method using the standard deviation (SD) of the apparent diffusion coefficient (ADC) from diffusion-weighted imaging (DWI) magnetic resonance imaging (MRI) is applied for the differential diagnosis of primary chest cancers, metastatic tumors, and benign tumors. MATERIALS AND METHODS: This retrospective study enrolled 27 patients (20 male, 7 female; age range, 15-85 years; mean age, 68 years) who had thoracic mass lesions in the last three years and underwent a chest MRI examination at our institution. In total, 29 mass lesions were analyzed using the SD of ADC from DWI. Lesions were divided into five groups: primary lung cancers (N = 10), esophageal cancers (N = 5), metastatic tumors (N = 8), benign tumors (N = 3), and inflammatory lesions (N = 3). Quantitative assessment of the MRI parameters of the mass lesions was performed; the ADC value was acquired as the average over the entire tumor area. Error plots, the t-test, and the area under the receiver operating characteristic curve (AUC) were applied for statistical analysis. RESULTS: The SD of ADC (mean ± SD) was (4.867 ± 1.359) × 10⁻⁴ mm²/s in primary lung cancers and (3.598 ± 0.350) × 10⁻⁴ mm²/s in metastatic tumors. These values differed significantly (P < 0.05), with an AUC of 0.800 (P < 0.05). The mean SD of ADC was (4.532 ± 1.406) × 10⁻⁴ mm²/s for malignant tumors (including primary lung cancers and esophageal cancers) and (2.973 ± 0.364) × 10⁻⁴ mm²/s for benign tumors; the difference between malignant and benign chest tumors was significant (P < 0.01), and the corresponding AUC was 0.967 (P < 0.05). The mean ADC values for primary lung cancers, metastatic tumors, and benign tumors were not significantly different (P > 0.05). CONCLUSIONS: The SD of ADC from DWI can be used for the differential diagnosis of chest lesions.
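The statistical comparison described — group values of the SD of ADC, a t-test, and the AUC — can be reproduced on hypothetical example values chosen to resemble the reported group means (not the study's measurements):

```python
import numpy as np
from scipy import stats
from sklearn.metrics import roc_auc_score

# Hypothetical per-lesion SD-of-ADC values (units of 1e-4 mm^2/s), chosen to
# resemble the reported group means -- not the study's measurements:
primary    = np.array([4.9, 6.1, 3.2, 5.5, 4.0, 4.7, 6.8, 3.9, 5.2, 4.5])  # N = 10
metastatic = np.array([3.6, 3.2, 3.9, 3.5, 3.8, 3.3, 4.0, 3.5])            # N = 8

t, p = stats.ttest_ind(primary, metastatic, equal_var=False)  # Welch's t-test
labels = np.r_[np.ones(primary.size), np.zeros(metastatic.size)]
auc = roc_auc_score(labels, np.r_[primary, metastatic])       # SD as the score
print(f"t = {t:.2f}, p = {p:.4f}, AUC = {auc:.3f}")
```

Using the SD value itself as the ranking score, the AUC directly quantifies how well it separates the two lesion groups.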


Subject(s)
Diffusion Magnetic Resonance Imaging/methods , Esophageal Neoplasms/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Lung Neoplasms/diagnostic imaging , Adolescent , Adult , Aged , Aged, 80 and over , Area Under Curve , Female , Humans , Male , Middle Aged , Retrospective Studies , Young Adult
11.
J Chin Med Assoc ; 68(5): 226-9, 2005 May.
Article in English | MEDLINE | ID: mdl-15909728

ABSTRACT

BACKGROUND: We report our experience with patients who had acquired immunodeficiency syndrome (AIDS) and who presented with signs and symptoms suggesting acute appendicitis. METHODS: Observational data are documented for 9 patients with AIDS who underwent surgery for acute appendicitis. RESULTS: Of the 9 patients, 6 (66.7%) had acute appendicitis without perforation, while the other 3 (33.3%) had perforated appendicitis. An elevated preoperative temperature was found in 4 patients without perforation (66.7%), and in 1 patient with perforation (33.3%). An elevated white blood cell count was found in all 6 patients without perforation (100%), but in none with perforation (0%). The mean interval from surgical referral to laparotomy was 61.1 hours, the mean hospital stay was 9.3 days, and the perioperative mortality rate was 22.2%. CONCLUSION: Our experience should alert emergency medical staff who care for AIDS patients to the need for early diagnosis and prompt surgical treatment of appendicitis.


Subject(s)
Acquired Immunodeficiency Syndrome/complications , Appendicitis/diagnosis , Acute Disease , Adult , Appendicitis/surgery , Female , Humans , Laparoscopy , Male , Middle Aged