1.
Digit Health ; 9: 20552076231203676, 2023.
Article in English | MEDLINE | ID: mdl-37766903

ABSTRACT

Prolonged hyperglycemia can cause diabetic retinopathy (DR), a major contributor to blindness. Many cases of DR could be avoided if the disease were identified and addressed promptly. In recent years, many deep learning (DL)-based algorithms have been proposed to support automated DR screening. In this study, DR and its stages were identified from retinal scans in the "Asia Pacific Tele-Ophthalmology Society (APTOS) 2019 Blindness Detection" dataset using a DL model that encompassed four scenarios. Augmentation strategies were then applied to produce a comprehensive dataset with consistent hyperparameters across all test cases. A Convolutional Neural Network model was used for the classification step, and several enhancement methods were applied to improve image quality. Employing contrast limited adaptive histogram equalization (CLAHE) and enhanced super-resolution generative adversarial network (ESRGAN) techniques for image enhancement, the proposed approach detected DR across all five severity stages of the APTOS 2019 grading with a highest accuracy of 97.83%, a top-2 accuracy of 99.31%, and a top-3 accuracy of 99.88%. In addition, evaluation metrics (precision, recall, and F1-score) were computed on APTOS 2019 to analyze the efficacy of the suggested model. The proposed approach also proved more effective at DR detection than both state-of-the-art methods and conventional DL models.
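To make the CLAHE enhancement step mentioned above concrete, the following minimal sketch applies CLAHE to a fundus image with OpenCV. The file path and the CLAHE parameters (clip limit, tile grid) are illustrative assumptions, not values reported in the abstract.

```python
import cv2

# Load a retinal fundus image (BGR); the path is hypothetical.
img = cv2.imread("fundus.png")

# CLAHE works on a single channel, so equalize only the lightness channel of
# the LAB representation; this preserves colour information.
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
l, a, b = cv2.split(lab)

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))  # assumed settings
l_eq = clahe.apply(l)

enhanced = cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)
cv2.imwrite("fundus_clahe.png", enhanced)
```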

2.
Digit Health ; 9: 20552076231194942, 2023.
Article in English | MEDLINE | ID: mdl-37588156

ABSTRACT

Objective: Diabetic retinopathy (DR) can often be prevented from causing irreversible vision loss if it is caught early and treated properly. In this work, a deep learning (DL) model is employed to accurately identify all five stages of DR. Methods: The suggested methodology presents two cases, one with and one without image enhancement. A balanced dataset meeting the same criteria in both cases is then generated using augmentation methods. The DenseNet-121-based model performed exceptionally well on the Asia Pacific Tele-Ophthalmology Society (APTOS) and dataset for diabetic retinopathy (DDR) datasets compared with other methods for identifying the five stages of DR. Results: Our proposed model achieved the highest test accuracy of 98.36%, a top-2 accuracy of 100%, and a top-3 accuracy of 100% on the APTOS dataset, and the highest test accuracy of 79.67%, a top-2 accuracy of 92.76%, and a top-3 accuracy of 98.94% on the DDR dataset. Additional criteria (precision, recall, and F1-score) for gauging the efficacy of the proposed model were computed on APTOS and DDR. Conclusions: Feeding the model higher-quality images increased its efficiency and learning ability compared with both state-of-the-art methods and the non-enhanced model.
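The top-2 and top-3 accuracies quoted above count a prediction as correct when the true grade is among the model's two or three highest-probability classes. The sketch below shows one way to compute them with NumPy; the probability and label arrays are toy placeholders, not APTOS or DDR data.

```python
import numpy as np

def top_k_accuracy(probs: np.ndarray, labels: np.ndarray, k: int) -> float:
    """Fraction of samples whose true label is among the k highest-probability classes."""
    top_k_preds = np.argsort(probs, axis=1)[:, -k:]      # top-k predicted classes per sample
    hits = np.any(top_k_preds == labels[:, None], axis=1)
    return float(hits.mean())

# Toy example: 4 samples, 5 DR grades (0 = no DR ... 4 = proliferative).
probs = np.array([
    [0.70, 0.10, 0.10, 0.05, 0.05],
    [0.05, 0.60, 0.20, 0.10, 0.05],
    [0.10, 0.30, 0.25, 0.20, 0.15],
    [0.05, 0.05, 0.10, 0.30, 0.50],
])
labels = np.array([0, 2, 1, 4])

print(top_k_accuracy(probs, labels, k=1))  # standard accuracy
print(top_k_accuracy(probs, labels, k=2))  # top-2 accuracy
print(top_k_accuracy(probs, labels, k=3))  # top-3 accuracy
```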

3.
Diagnostics (Basel) ; 13(14)2023 Jul 14.
Article in English | MEDLINE | ID: mdl-37510123

ABSTRACT

One of the primary causes of blindness in the diabetic population is diabetic retinopathy (DR). Many people could have their sight saved if DR were detected and treated in time. Numerous deep learning (DL)-based methods have been presented to assist human analysis. Using a DL model with three scenarios, this research classified DR and its severity stages from fundus images in the "APTOS 2019 Blindness Detection" dataset. After adopting the DL model, augmentation methods were implemented to generate a balanced dataset with consistent input parameters across all test scenarios. The DenseNet-121 model was employed as the final classification step. Several methods, including Enhanced Super-Resolution Generative Adversarial Networks (ESRGAN), Histogram Equalization (HIST), and Contrast Limited Adaptive Histogram Equalization (CLAHE), were used to enhance image quality. The suggested model detected DR across all five APTOS 2019 severity grades with the highest test accuracy of 98.36%, a top-2 accuracy of 100%, and a top-3 accuracy of 100%. Further evaluation criteria (precision, recall, and F1-score) for gauging the efficacy of the proposed model were computed on APTOS 2019. Furthermore, CLAHE + ESRGAN proved more effective for DR classification than both state-of-the-art methods and the other enhancement methods tested.
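A DenseNet-121 classifier for the five DR grades can be set up as in the hedged sketch below (TensorFlow/Keras). The input size, dropout rate, optimizer settings, and metrics are assumptions for illustration, not the configuration reported in the abstract.

```python
import tensorflow as tf

# Pretrained DenseNet-121 backbone with global average pooling.
base = tf.keras.applications.DenseNet121(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3), pooling="avg")

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.3),                    # assumed regularization
    tf.keras.layers.Dense(5, activation="softmax"),  # five DR severity grades
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-4),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy",
             tf.keras.metrics.SparseTopKCategoricalAccuracy(k=2, name="top2")],
)
model.summary()
```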

4.
Diagnostics (Basel) ; 13(10)2023 May 22.
Article in English | MEDLINE | ID: mdl-37238299

ABSTRACT

When it comes to skin tumors and cancers, melanoma ranks among the most prevalent and deadly. With the advancement of deep learning and computer vision, it is now possible to determine quickly and accurately whether a patient has a malignancy. This is significant because prompt identification greatly decreases the likelihood of a fatal outcome. Artificial intelligence has the potential to improve healthcare in many ways, including melanoma diagnosis. In short, this research employed an Inception-V3 and InceptionResNet-V2 strategy for melanoma recognition. The newly added top layers were trained first, and the previously frozen feature-extraction layers were then fine-tuned. The study used the HAM10000 dataset, which contains an imbalanced sample of seven classes of skin lesions; data augmentation was used to correct this imbalance. The proposed models outperformed the results of the previous investigation, with an effectiveness of 0.89 for Inception-V3 and 0.91 for InceptionResNet-V2.
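The "train the new top layers, then fine-tune the previously frozen layers" strategy described above can be sketched as the two-phase Keras recipe below. The class count, learning rates, epoch counts, and the train_ds/val_ds datasets are placeholders, not the paper's reported settings.

```python
import tensorflow as tf

num_classes = 7  # HAM10000's seven lesion categories (assumption; adapt for a binary melanoma task)

base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3), pooling="avg")
base.trainable = False  # phase 1: freeze the pretrained feature extractor

inputs = tf.keras.Input(shape=(299, 299, 3))
x = base(inputs, training=False)
outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)   # train the new top layers

# Phase 2: unfreeze the backbone and fine-tune at a much lower learning rate.
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)   # fine-tune the unfrozen layers
```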

5.
Healthcare (Basel) ; 11(6)2023 Mar 15.
Article in English | MEDLINE | ID: mdl-36981520

ABSTRACT

Vision loss can be avoided if diabetic retinopathy (DR) is diagnosed and treated promptly. The five main DR stages are none, mild, moderate, severe, and proliferative. In this study, a deep learning (DL) model is presented that diagnoses all five stages of DR more accurately than previous methods. The suggested method presents two scenarios: case 1 with image enhancement, using a contrast limited adaptive histogram equalization (CLAHE) filtering algorithm in conjunction with an enhanced super-resolution generative adversarial network (ESRGAN), and case 2 without image enhancement. Augmentation techniques were then performed to generate a balanced dataset using the same parameters for both cases. Using Inception-V3 applied to the Asia Pacific Tele-Ophthalmology Society (APTOS) dataset, the developed model achieved an accuracy of 98.7% for case 1 and 80.87% for case 2, which is greater than existing methods for detecting the five stages of DR. It was demonstrated that using CLAHE and ESRGAN improves a model's performance and learning ability.
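The balanced-dataset step described above amounts to deciding how many augmented copies of each DR grade are needed so every class reaches the size of the largest one. A small sketch of that bookkeeping is shown below; the class counts are made-up placeholders, not the APTOS distribution.

```python
from collections import Counter

# Placeholder per-grade image counts (not the real APTOS numbers).
class_counts = Counter({"none": 1800, "mild": 370, "moderate": 1000,
                        "severe": 190, "proliferative": 300})
target = max(class_counts.values())

for grade, n in class_counts.items():
    extra = target - n            # images to synthesize via augmentation
    per_image = -(-extra // n)    # ceil(extra / n) augmented copies per original
    print(f"{grade:14s} originals={n:5d} extra_needed={extra:5d} copies_per_original≈{per_image}")
```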

6.
Healthcare (Basel) ; 10(12)2022 Dec 08.
Article in English | MEDLINE | ID: mdl-36554004

ABSTRACT

One of the most prevalent cancers worldwide is skin cancer, and it is becoming more common as the population ages. As a general rule, the earlier skin cancer can be diagnosed, the better. As a result of the success of deep learning (DL) algorithms in other industries, there has been a substantial increase in automated diagnosis systems in healthcare. This work proposes a DL method for extracting the lesion zone with precision. First, the image is enhanced using Enhanced Super-Resolution Generative Adversarial Networks (ESRGAN) to improve its quality, and then Regions of Interest (ROIs) are segmented from the full image. Data augmentation was employed to correct the class imbalance. The image is then analyzed with a convolutional neural network (CNN) and a modified version of ResNet-50 to classify skin lesions. The analysis used an imbalanced sample of seven classes of skin lesions from the HAM10000 dataset. With an accuracy of 0.86, a precision of 0.84, a recall of 0.86, and an F-score of 0.86, the proposed CNN-based model outperformed the earlier study's results by a significant margin. The study culminates in an improved automated method for diagnosing skin cancer that benefits medical professionals and patients.
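As a stand-in for the ROI segmentation step in the pipeline above (not the paper's actual segmentation method), the sketch below crops a lesion region with simple Otsu thresholding in OpenCV before it would be passed to the classifier. The file names are hypothetical.

```python
import cv2

img = cv2.imread("lesion.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (5, 5), 0)

# Otsu threshold; lesions are usually darker than surrounding skin,
# so invert the binary mask to make the lesion the foreground.
_, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    largest = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(largest)
    roi = img[y:y + h, x:x + w]          # cropped lesion region fed to the CNN
    cv2.imwrite("lesion_roi.jpg", roi)
```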

7.
Sensors (Basel) ; 22(17)2022 Sep 01.
Article in English | MEDLINE | ID: mdl-36081083

ABSTRACT

The Internet of Things (IoT) refers to a system of interconnected, internet-connected devices and sensors that allows the collection and dissemination of data. The data provided by these sensors may include outliers or exhibit anomalous behavior as a result of attack activities or device failure, for example. However, the majority of existing outlier detection algorithms rely on labeled data, which is frequently hard to obtain in the IoT domain. More crucially, the IoT's data volume is continually increasing, necessitating the ability to predict and identify the classes of future data. In this study, we propose an unsupervised technique based on a deep Variational Auto-Encoder (VAE) to detect outliers in IoT data, leveraging the VAE's reconstruction ability and the low-dimensional latent representation of the input data. First, the input data are standardized. Then, the VAE reconstructs the input from the low-dimensional representation of its latent variables. Finally, the reconstruction error between the original observation and the reconstructed one is used as an outlier score. Our model was trained only on unlabeled normal data in an unsupervised manner and was evaluated on the Statlog (Landsat Satellite) dataset. The unsupervised model achieved promising results comparable to state-of-the-art outlier detection schemes, with a precision of ≈90% and an F1 score of 79%.
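A minimal sketch of this reconstruction-error scoring idea is given below: train a small VAE on standardized normal data only, then score new samples by how poorly they are reconstructed. The layer sizes, latent dimension, epoch count, and the random placeholder data are assumptions, not the paper's architecture or the Statlog dataset itself.

```python
import numpy as np
import tensorflow as tf

input_dim, latent_dim = 36, 4    # Statlog (Landsat Satellite) has 36 features; latent size is assumed

class Sampling(tf.keras.layers.Layer):
    """Reparameterization trick: z = mu + sigma * eps, with the KL term added as a layer loss."""
    def call(self, inputs):
        z_mean, z_log_var = inputs
        eps = tf.random.normal(tf.shape(z_mean))
        kl = -0.5 * tf.reduce_mean(
            tf.reduce_sum(1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var), axis=1))
        self.add_loss(kl)   # KL divergence between q(z|x) and the standard normal prior
        return z_mean + tf.exp(0.5 * z_log_var) * eps

# Encoder -> latent sample -> decoder, trained end-to-end on reconstruction.
x_in = tf.keras.Input(shape=(input_dim,))
h = tf.keras.layers.Dense(16, activation="relu")(x_in)
z_mean = tf.keras.layers.Dense(latent_dim)(h)
z_log_var = tf.keras.layers.Dense(latent_dim)(h)
z = Sampling()([z_mean, z_log_var])
h_dec = tf.keras.layers.Dense(16, activation="relu")(z)
x_rec = tf.keras.layers.Dense(input_dim)(h_dec)

vae = tf.keras.Model(x_in, x_rec)
vae.compile(optimizer="adam", loss="mse")   # MSE = reconstruction term; KL added by Sampling

# Train only on (standardized) normal data -- placeholder random data here.
x_normal = np.random.randn(1000, input_dim).astype("float32")
vae.fit(x_normal, x_normal, epochs=5, batch_size=64, verbose=0)

def outlier_score(x):
    """Per-sample squared reconstruction error; larger scores indicate likelier outliers."""
    x_hat = vae.predict(x, verbose=0)
    return np.sum((x - x_hat) ** 2, axis=1)

print(outlier_score(np.random.randn(5, input_dim).astype("float32")))
```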

8.
Healthcare (Basel) ; 10(7)2022 Jun 24.
Article in English | MEDLINE | ID: mdl-35885710

ABSTRACT

A growing number of genetic and metabolic anomalies have been found to lead to cancer, which is generally fatal. Cancerous cells may spread to any part of the body, where they can be life-threatening. Skin cancer is one of the most common types of cancer, and its frequency is increasing worldwide. The main subtypes of skin cancer are squamous and basal cell carcinomas and melanoma, which is clinically aggressive and responsible for most deaths. Therefore, skin cancer screening is necessary. One of the best methods to accurately and swiftly identify skin cancer is deep learning (DL). In this research, a convolutional neural network (CNN) was used to detect the two primary types of tumors, malignant and benign, using the ISIC2018 dataset. This dataset comprises 3533 skin lesions, including benign, malignant, nonmelanocytic, and melanocytic tumors. The images were first retouched and improved using ESRGAN, then augmented, normalized, and resized during the preprocessing step. Skin lesion images were classified using a CNN whose results were aggregated over many repetitions. Multiple transfer learning models, such as ResNet50, InceptionV3, and InceptionResNet, were then used for fine-tuning. In addition to experimenting with several models (the designed CNN, ResNet50, InceptionV3, and InceptionResNet), this study's innovation and contribution is the use of ESRGAN as a preprocessing step. Our designed model showed results comparable to the pretrained models. Simulations using the ISIC 2018 skin lesion dataset showed that the suggested strategy was successful: the CNN achieved an 83.2% accuracy rate, compared with the ResNet50 (83.7%), InceptionV3 (85.8%), and InceptionResNet (84%) models.
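To illustrate how ESRGAN can sit in front of a classifier as a preprocessing step, the sketch below upscales a lesion image with the publicly available ESRGAN generator on TensorFlow Hub ("captain-pool/esrgan-tf2"). That model and the file paths are assumptions for illustration; they are almost certainly not the exact generator the authors used.

```python
import tensorflow as tf
import tensorflow_hub as hub

# Publicly available ESRGAN generator (4x upscaling) -- an assumption, not the paper's model.
esrgan = hub.load("https://tfhub.dev/captain-pool/esrgan-tf2/1")

def super_resolve(path: str) -> tf.Tensor:
    """Read an image, crop to dimensions divisible by 4, run ESRGAN, return uint8 pixels."""
    img = tf.image.decode_image(tf.io.read_file(path), channels=3, expand_animations=False)
    h = (tf.shape(img)[0] // 4) * 4
    w = (tf.shape(img)[1] // 4) * 4
    img = tf.cast(img[:h, :w, :], tf.float32)[tf.newaxis, ...]   # add batch dimension
    sr = esrgan(img)                                             # super-resolved output
    sr = tf.clip_by_value(sr, 0, 255)
    return tf.cast(tf.squeeze(sr, axis=0), tf.uint8)

enhanced = super_resolve("skin_lesion.jpg")                      # hypothetical input file
tf.io.write_file("skin_lesion_sr.png", tf.io.encode_png(enhanced))
```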

9.
Healthcare (Basel) ; 10(2)2022 Feb 10.
Article in English | MEDLINE | ID: mdl-35206957

ABSTRACT

The coronavirus disease (COVID-19) is rapidly spreading around the world. Early diagnosis and isolation of COVID-19 patients have proven crucial in slowing the disease's spread. One of the best options for detecting COVID-19 reliably and easily is deep learning (DL). Two different DL approaches based on a pretrained neural network model (ResNet-50) for COVID-19 detection using chest X-ray (CXR) images are proposed in this study. Augmenting, enhancing, normalizing, and resizing CXR images to a fixed size are all part of the preprocessing stage. This research proposes a DL method for classifying CXR images based on an ensemble built from multiple runs of a modified version of ResNet-50. The proposed system is evaluated on two publicly available benchmark datasets frequently used by researchers: the COVID-19 Image Data Collection (IDC) and CXR Images (Pneumonia). Based on the performance results obtained, the proposed system outperforms existing methods such as VGG and DenseNet, with values exceeding 99.63% on many metrics, including accuracy, precision, recall, F1-score, and area under the curve (AUC).
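The ensembling idea described above can be reduced to averaging the softmax outputs of several independently trained runs of the same classifier and taking the argmax, as in the hedged sketch below. The model paths and input batch are placeholders, not artifacts from the study.

```python
import numpy as np
import tensorflow as tf

# Hypothetical paths to N separately trained runs of the same modified ResNet-50.
run_paths = ["resnet50_run1.keras", "resnet50_run2.keras", "resnet50_run3.keras"]
models = [tf.keras.models.load_model(p) for p in run_paths]

def ensemble_predict(x: np.ndarray) -> np.ndarray:
    """Average class probabilities over all runs, then pick the most likely class."""
    probs = np.mean([m.predict(x, verbose=0) for m in models], axis=0)
    return np.argmax(probs, axis=1)

# x_test: a batch of preprocessed CXR images, e.g. shape (batch, 224, 224, 3).
# labels_pred = ensemble_predict(x_test)
```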

10.
Sensors (Basel) ; 22(3)2022 Jan 24.
Article in English | MEDLINE | ID: mdl-35161622

ABSTRACT

Breast cancer is among the leading causes of mortality for women worldwide, so developing early detection and diagnosis techniques is essential for their well-being. In mammography, attention has turned to deep learning (DL) models, which radiologists can use to overcome the shortcomings of human observers. Transfer learning is used to distinguish malignant from benign breast cancer by fine-tuning multiple pre-trained models. In this study, we introduce a framework based on the principle of transfer learning. In addition, a mixture of augmentation strategies, including several rotation combinations, scaling, and shifting, was used to increase the number of mammographic images, prevent overfitting, and produce stable outcomes. On the Mammographic Image Analysis Society (MIAS) dataset, the proposed system achieved an accuracy of 89.5% using ResNet-50 (residual network-50) and 70% using the NASNet-Mobile network. The proposed system demonstrated that pre-trained classification networks are significantly more effective and efficient, making them more suitable for medical imaging, particularly with small training datasets.
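The rotation, scaling, and shifting augmentation described above maps naturally onto Keras' ImageDataGenerator, as in the sketch below. The specific parameter values and the directory layout are assumptions, not the settings reported in the abstract.

```python
import tensorflow as tf

augmenter = tf.keras.preprocessing.image.ImageDataGenerator(
    rotation_range=30,        # random rotations up to ±30 degrees (assumed)
    zoom_range=0.1,           # random scaling by ±10%
    width_shift_range=0.1,    # horizontal shifting
    height_shift_range=0.1,   # vertical shifting
    rescale=1.0 / 255,        # pixel normalization
)

# Hypothetical directory of mammogram images arranged one subfolder per class.
train_flow = augmenter.flow_from_directory(
    "mias/train", target_size=(224, 224), batch_size=32, class_mode="binary")

# model.fit(train_flow, epochs=20)   # fed directly to a fine-tuned ResNet-50
```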


Subject(s)
Breast Neoplasms , Breast Neoplasms/diagnostic imaging , Disease Progression , Female , Humans , Machine Learning , Mammography , Neural Networks, Computer
11.
J Comput Assist Tomogr ; 43(6): 870-876, 2019.
Article in English | MEDLINE | ID: mdl-31453974

ABSTRACT

AIM: This study aimed to evaluate potential dose savings and image quality for a revised whole-body computed tomography protocol for trauma patients after implementing the Adaptive Statistical Iterative Reconstruction V (ASiR-V) algorithm, and to compare it with the routine protocol. MATERIALS AND METHODS: One hundred trauma patients were classified into 2 groups using 2 different scanning protocols. Group A (n = 50; age, 32.48 ± 8.09 years) underwent the routine 3-phase protocol. Group B (n = 50; age, 35.94 ± 13.57 years) underwent a biphasic injection protocol comprising an unenhanced scan of the brain and cervical spine, followed by a 1-step acquisition of the thorax, abdomen, and pelvis. The ASiR-V level was kept at 50% for all examinations, and the studies were then reconstructed at a 0% ASiR-V level. Radiation dose, total acquisition time, and image count were compared between groups A and B. Two radiologists independently graded image quality and artifacts between the two groups and the two ASiR-V levels (0% and 50%). RESULTS: The mean (±SD) dose-length product for postcontrast scans was 1602.3 ± 271.8 mGy·cm in group A, higher than the 951.1 ± 359.6 mGy·cm in group B (P < 0.001). The biphasic injection protocol gave a dose reduction of 40.4% and reduced the total acquisition time by 11.4% and the image count by 37.6%. There was no statistically significant difference between the image quality scores of the two groups, although group A scored higher grades (4.62 ± 0.56 and 4.56 ± 0.67). Similarly, the image quality scores for the two ASiR-V levels in both groups were not significantly different. CONCLUSIONS: The biphasic computed tomography protocol reduced radiation dose while maintaining diagnostic accuracy and image quality after implementing the ASiR-V algorithm.


Subject(s)
Radiographic Image Interpretation, Computer-Assisted/methods , Whole Body Imaging/methods , Wounds and Injuries/diagnostic imaging , Adolescent , Adult , Female , Humans , Male , Middle Aged , Radiation Dosage , Sensitivity and Specificity , Tomography, X-Ray Computed , Young Adult