Results 1 - 20 of 21
1.
Comput Med Imaging Graph ; 111: 102320, 2024 01.
Article in English | MEDLINE | ID: mdl-38134726

ABSTRACT

Medical imaging, specifically chest X-ray image analysis, is a crucial component of early disease detection and screening in healthcare. Deep learning techniques, such as convolutional neural networks (CNNs), have emerged as powerful tools for computer-aided diagnosis (CAD) in chest X-ray image analysis. These techniques have shown promising results in automating tasks such as classification, detection, and segmentation of abnormalities in chest X-ray images, with the potential to surpass human radiologists. In this review, we provide an overview of the importance of chest X-ray image analysis, historical developments, impact of deep learning techniques, and availability of labeled databases. We specifically focus on advancements and challenges in radiology report generation using deep learning, highlighting potential future advancements in this area. The use of deep learning for report generation has the potential to reduce the burden on radiologists, improve patient care, and enhance the accuracy and efficiency of chest X-ray image analysis in medical imaging.


Subjects
Deep Learning; Humans; X-Rays; Neural Networks, Computer; Thorax; Diagnosis, Computer-Assisted/methods
2.
Comput Biol Med ; 163: 107133, 2023 09.
Article in English | MEDLINE | ID: mdl-37327756

ABSTRACT

This paper presents a novel framework for breast cancer detection using mammogram images. The proposed solution aims to output an explainable classification from a mammogram image. The classification approach uses a Case-Based Reasoning (CBR) system. CBR accuracy strongly depends on the quality of the extracted features. To achieve relevant classification, we propose a pipeline that includes image enhancement and data augmentation to improve the quality of extracted features and provide a final diagnosis. An efficient segmentation method based on a U-Net architecture is used to extract Regions of Interest (RoI) from mammograms. The purpose is to combine deep learning (DL) with CBR to improve classification accuracy. DL provides accurate mammogram segmentation, while CBR gives an explainable and accurate classification. The proposed approach was tested on the CBIS-DDSM dataset and achieved high performance, with an accuracy (Acc) of 86.71% and a recall of 91.34%, outperforming some well-known machine learning (ML) and DL approaches.
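At its core, the CBR step retrieves the past cases most similar to the query's feature vector and votes on their labels, which is what makes the classification explainable. A minimal sketch of that retrieve-and-vote loop (the `features`/`label` record layout and k=3 are illustrative assumptions, not the paper's implementation):

```python
import math

def retrieve_similar(case_base, query, k=3):
    """Return the k most similar past cases by Euclidean distance."""
    return sorted(case_base, key=lambda c: math.dist(c["features"], query))[:k]

def classify(case_base, query, k=3):
    """Majority vote over the retrieved neighbours; the neighbours are
    returned too, so a diagnosis can be justified by concrete past cases."""
    neighbours = retrieve_similar(case_base, query, k)
    labels = [c["label"] for c in neighbours]
    return max(set(labels), key=labels.count), neighbours
```

Returning the neighbours alongside the label is what allows the system to explain a prediction by pointing at the most similar prior cases.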


Subjects
Breast Neoplasms; Humans; Female; Breast Neoplasms/diagnostic imaging; Mammography/methods; Machine Learning; Image Enhancement
3.
Viruses ; 15(6)2023 06 06.
Article in English | MEDLINE | ID: mdl-37376626

ABSTRACT

COVID-19, which is caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), is one of the worst pandemics in recent history. The identification of patients suspected to be infected with COVID-19 is crucial to reducing its spread. We aimed to validate and test a deep learning model to detect COVID-19 based on chest X-rays. The recent deep convolutional neural network (CNN) RegNetX032 was adapted for detecting COVID-19 from chest X-ray (CXR) images using reverse transcription polymerase chain reaction (RT-PCR) as a reference. The model was customized and trained on five datasets containing more than 15,000 CXR images (including 4148 COVID-19-positive cases) and then tested on 321 images (150 COVID-19-positive) from Montfort Hospital. Twenty percent of the data from the five datasets were used as validation data for hyperparameter optimization. Each CXR image was processed by the model to detect COVID-19. Multiple binary classifications were proposed, such as: COVID-19 vs. normal, COVID-19 + pneumonia vs. normal, and pneumonia vs. normal. The performance results were based on the area under the curve (AUC), sensitivity, and specificity. In addition, an explainability model was developed that demonstrated the high performance and high generalization degree of the proposed model in detecting and highlighting the signs of the disease. The fine-tuned RegNetX032 model achieved an overall accuracy score of 96.0%, with an AUC score of 99.1%. The model showed a superior sensitivity of 98.0% in detecting signs from CXR images of COVID-19 patients, and a specificity of 93.0% in detecting healthy CXR images. A second scenario compared COVID-19 + pneumonia vs. normal (healthy X-ray) patients. The model achieved an overall score of 99.1% (AUC) with a sensitivity of 96.0% and specificity of 93.0% on the Montfort dataset.
For the validation set, the model achieved an average accuracy of 98.6%, an AUC score of 98.0%, a sensitivity of 98.0%, and a specificity of 96.0% for detection (COVID-19 patients vs. healthy patients). The second scenario compared COVID-19 + pneumonia vs. normal patients. The model achieved an overall score of 98.8% (AUC) with a sensitivity of 97.0% and a specificity of 96.0%. This robust deep learning model demonstrated excellent performance in detecting COVID-19 from chest X-rays. It could be used to automate COVID-19 detection and improve decision making for patient triage and isolation in hospital settings, and could also serve as a complementary aid for radiologists or clinicians in differential diagnosis and decision making.
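The sensitivity, specificity, and accuracy figures reported above all derive from the same confusion-matrix counts. A small helper showing the arithmetic (the example counts in the test are hypothetical, not the study's actual confusion matrix):

```python
def binary_metrics(tp, fp, tn, fn):
    """Sensitivity (recall on positives), specificity (recall on
    negatives), and overall accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy
```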


Subjects
COVID-19; Deep Learning; Pneumonia; Humans; COVID-19/diagnostic imaging; SARS-CoV-2; X-Rays
4.
SN Comput Sci ; 4(4): 388, 2023.
Article in English | MEDLINE | ID: mdl-37200562

ABSTRACT

X-ray images are the most widely used medical imaging modality. They are affordable, non-dangerous, accessible, and can be used to identify different diseases. Multiple computer-aided detection (CAD) systems using deep learning (DL) algorithms have recently been proposed to support radiologists in identifying different diseases on medical images. In this paper, we propose a novel two-step approach for chest disease classification. The first step is a multi-class classification that sorts X-ray images by affected organ into three classes (normal, lung disease, and heart disease). The second step is a binary classification of seven specific lung and heart diseases. We use a consolidated dataset of 26,316 chest X-ray (CXR) images. Two deep learning methods are proposed in this paper. The first, called DC-ChestNet, is based on ensembling deep convolutional neural network (DCNN) models. The second, named VT-ChestNet, is based on a modified transformer model. VT-ChestNet achieved the best performance, outperforming DC-ChestNet and state-of-the-art models (DenseNet121, DenseNet201, EfficientNetB5, and Xception). VT-ChestNet obtained an area under the curve (AUC) of 95.13% for the first step. For the second step, it obtained an average AUC of 99.26% for heart diseases and an average AUC of 99.57% for lung diseases.
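The two-step design amounts to routing: a coarse organ-level classifier decides which bank of binary disease classifiers to run. A minimal sketch of that control flow with stubbed-in classifiers (all names are illustrative; the paper's actual models are a DCNN ensemble and a modified transformer):

```python
def two_step_diagnosis(image, organ_classifier, disease_classifiers):
    """Step 1: route the X-ray to normal / lung / heart.
    Step 2: run the per-disease binary classifiers for that organ."""
    organ = organ_classifier(image)  # "normal", "lung" or "heart"
    if organ == "normal":
        return organ, {}
    findings = {name: clf(image)
                for name, clf in disease_classifiers[organ].items()}
    return organ, findings
```

Routing first keeps each second-stage classifier focused on a narrow binary decision, which is one plausible reason the per-disease AUCs are so high.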

5.
SN Comput Sci ; 4(4): 414, 2023.
Article in English | MEDLINE | ID: mdl-37252339

ABSTRACT

Accurate segmentation of the lungs in CXR images is the basis of an automated CXR image analysis system. It helps radiologists detect lung areas and subtle signs of disease, and improves the diagnosis process for patients. However, precise semantic segmentation of the lungs is a challenging task due to the edges of the rib cage, the wide variation in lung shape, and lungs affected by diseases. In this paper, we address the problem of lung segmentation in healthy and unhealthy CXR images. Five models were developed and used to detect and segment lung regions. Two loss functions and three benchmark datasets were employed to evaluate these models. Experimental results showed that the proposed models were able to extract salient global and local features from the input CXR images. The best-performing model achieved an F1 score of 97.47%, outperforming recently published models. The models proved able to separate lung regions from rib cage and clavicle edges and to segment lung shapes that vary with age and gender, as well as challenging cases of lungs affected by anomalies such as tuberculosis and the presence of nodules.
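Segmentation quality in such studies is scored by comparing the predicted mask against a ground-truth mask; for binary masks the Dice coefficient coincides with the foreground F1 score quoted above. A minimal sketch (flat 0/1 lists stand in for mask arrays; this is illustrative, not the paper's evaluation code):

```python
def dice_score(pred, target):
    """Dice coefficient between two binary masks given as flat 0/1 lists.
    For binary masks this equals the F1 score of the foreground class."""
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 2 * intersection / total if total else 1.0
```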

6.
J Imaging ; 9(3)2023 Mar 07.
Article in English | MEDLINE | ID: mdl-36976113

ABSTRACT

With the widespread use of deep learning in leading systems, it has become the mainstream approach in the table detection field. Some tables are difficult to detect because of figure-like layouts or small size. To address this problem, we propose a novel method, called DCTable, to improve Faster R-CNN for table detection. DCTable extracts more discriminative features using a backbone with dilated convolutions in order to improve the quality of region proposals. Another main contribution of this paper is anchor optimization using an Intersection over Union (IoU)-balanced loss to train the RPN and reduce the false positive rate. This is followed by an RoI Align layer, used instead of RoI pooling, which improves accuracy when mapping table proposal candidates by eliminating coarse misalignment and using bilinear interpolation. Training and testing on public datasets showed the effectiveness of the algorithm and a considerable improvement of the F1-score on the ICDAR 2017-POD, ICDAR 2019, Marmot and RVL-CDIP datasets.
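The IoU quantity underlying the IoU-balanced loss is simply the ratio of the overlap area to the combined area of two boxes. A minimal sketch of the computation (the corner-coordinate `(x1, y1, x2, y2)` box format is an assumption for illustration):

```python
def box_iou(a, b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0
```

An IoU-balanced loss reweights RPN training samples by this overlap score, so well-localized proposals contribute more than marginal ones.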

7.
Diagnostics (Basel) ; 13(1)2023 Jan 03.
Article in English | MEDLINE | ID: mdl-36611451

ABSTRACT

Chest X-ray radiography (CXR) is among the most frequently used medical imaging modalities. It has a preeminent value in the detection of multiple life-threatening diseases. Radiologists can visually inspect CXR images for the presence of diseases. Most thoracic diseases have very similar patterns, which makes diagnosis prone to human error and leads to misdiagnosis. Computer-aided detection (CAD) of lung diseases in CXR images is among the popular topics in medical imaging research. Machine learning (ML) and deep learning (DL) provide techniques to make this task more efficient and faster. Numerous experiments in the diagnosis of various diseases have proved the potential of these techniques. In comparison to previous reviews, our study describes in detail several publicly available CXR datasets for different diseases. It presents an overview of recent deep learning models using CXR images to detect chest diseases, such as VGG, ResNet, DenseNet, Inception, EfficientNet, RetinaNet, and ensemble learning methods that combine multiple models. It summarizes the techniques used for CXR image preprocessing (enhancement, segmentation, bone suppression, and data augmentation) to improve image quality and address data imbalance issues, as well as the use of DL models to speed up the diagnosis process. This review also discusses the challenges present in the published literature and highlights the importance of interpretability and explainability for better understanding DL models' detections. In addition, it outlines a direction for researchers to help develop more effective models for early and automatic detection of chest diseases.

8.
Curr Oncol ; 29(11): 8767-8793, 2022 11 16.
Article in English | MEDLINE | ID: mdl-36421343

ABSTRACT

Recent advances in deep learning have enhanced medical imaging research. Breast cancer is the most prevalent cancer among women, and many applications have been developed to improve its early detection. The purpose of this review is to examine how various deep learning methods can be applied to breast cancer screening workflows. We summarize deep learning methods, data availability, and different screening modalities for breast cancer, including mammography, thermography, ultrasound, and magnetic resonance imaging. We then explore deep learning in diagnostic breast imaging and survey the literature. In conclusion, we discuss some of the limitations and opportunities of integrating artificial intelligence into breast cancer clinical practice.


Subjects
Breast Neoplasms; Deep Learning; Radiology; Female; Humans; Artificial Intelligence; Breast Neoplasms/diagnostic imaging; Radiography
9.
J Clin Med ; 11(11)2022 May 26.
Article in English | MEDLINE | ID: mdl-35683400

ABSTRACT

The rapid spread of COVID-19 across the globe since its emergence has pushed many countries' healthcare systems to the verge of collapse. To restrict the spread of the disease and lessen the ongoing burden on the healthcare system, it is critical to appropriately identify COVID-19-positive individuals and isolate them as soon as possible. The primary COVID-19 screening test, RT-PCR, although accurate and reliable, has a long turnaround time. More recently, various researchers have demonstrated the use of deep learning approaches on chest X-ray (CXR) images for COVID-19 detection. However, existing deep convolutional neural network (CNN) methods fail to capture the global context due to their inherent image-specific inductive bias. In this article, we investigated the use of vision transformers (ViT) for detecting COVID-19 in CXR images. Several ViT models were fine-tuned for the multiclass classification problem (COVID-19, pneumonia, and normal cases). A dataset consisting of 7598 COVID-19 CXR images, 8552 CXR images of healthy patients, and 5674 pneumonia CXR images was used. The obtained results achieved high performance, with an area under the curve (AUC) of 0.99 for multi-class classification (COVID-19 vs. other pneumonia vs. normal). The sensitivity for the COVID-19 class reached 0.99. We demonstrated that the obtained results outperformed comparable state-of-the-art CNN-based models for detecting COVID-19 on CXR images. The attention map for the proposed model showed that it is able to efficiently identify the signs of COVID-19.

10.
Cancers (Basel) ; 14(11)2022 May 27.
Article in English | MEDLINE | ID: mdl-35681643

ABSTRACT

Automated medical data analysis plays a significant role in modern medicine and in cancer diagnosis/prognosis, enabling highly reliable and generalizable systems. In this study, an automated breast cancer screening method for ultrasound imaging is proposed. A convolutional deep autoencoder model is presented for simultaneous segmentation and radiomic extraction: the model segments the breast lesions while concurrently extracting radiomic features. With our deep model, we perform breast lesion segmentation, which is linked to low-dimensional deep-radiomic extraction (four features). Similarly, we used high-dimensional conventional radiomic features and applied spectral embedding techniques to reduce their number from 354 to 12. A total of 780 ultrasound images (437 benign, 210 malignant, and 133 normal) were used to train and validate the models in this study. To diagnose malignant lesions, we performed training, hyperparameter tuning, cross-validation, and testing with a random forest model. This resulted in a binary classification accuracy of 78.5% (65.1-84.1%) for the maximal (full multivariate) cross-validated model for a combination of radiomic groups.

11.
Sensors (Basel) ; 22(5)2022 Mar 03.
Article in English | MEDLINE | ID: mdl-35271126

ABSTRACT

Wildfires are a worldwide natural disaster causing significant economic damage and loss of life. Experts predict that wildfires will increase in the coming years, mainly due to climate change. Early detection and prediction of fire spread can help reduce affected areas and improve firefighting. Numerous systems have been developed to detect fire. Recently, unmanned aerial vehicles were employed to tackle this problem due to their high flexibility, low cost, and ability to cover wide areas during the day or night. However, they are still limited by challenging problems such as small fire size, background complexity, and image degradation. To deal with the aforementioned limitations, we adapted and optimized deep learning methods to detect wildfire at an early stage. A novel deep ensemble learning method, which combines EfficientNet-B5 and DenseNet-201 models, is proposed to identify and classify wildfire using aerial images. In addition, two vision transformers (TransUNet and TransFire) and a deep convolutional model (EfficientSeg) were employed to segment wildfire regions and determine the precise fire areas. The obtained results are promising and show the efficiency of using deep learning and vision transformers for wildfire classification and segmentation. The proposed model for wildfire classification obtained an accuracy of 85.12%, outperforming many state-of-the-art works, and proved able to classify wildfire even in small fire areas. The best semantic segmentation models achieved an F1-score of 99.9% for the TransUNet architecture and 99.82% for the TransFire architecture, surpassing recently published models. More specifically, we demonstrated the ability of these models to extract the finer details of wildfire from aerial images, overcoming current model limitations such as background complexity and small wildfire areas.


Subjects
Deep Learning; Fires; Wildfires; Climate Change
12.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 2794-2797, 2021 11.
Article in English | MEDLINE | ID: mdl-34891829

ABSTRACT

Disease detection using chest X-ray (CXR) images is one of the most popular radiology methods to diagnose diseases through visual inspection of abnormal symptoms in the lung region. A wide variety of diseases, such as pneumonia, heart failure, and lung cancer, can be detected using CXRs. Although CXRs can show the symptoms of a variety of diseases, detecting and manually classifying those diseases can be difficult and time-consuming, adding to clinicians' workload. Research shows that nearly 90% of mistakes made in lung cancer diagnosis involved chest radiography. A variety of algorithms and computer-aided diagnosis (CAD) tools have been proposed to assist radiologists in the interpretation of medical images and reduce diagnosis errors. In this work, we propose a deep learning approach to screen multiple diseases using more than 220,000 images from the CheXpert dataset. The proposed binary relevance approach using deep convolutional neural networks (CNNs) achieves high performance and outperforms past published work in this area. Clinical relevance: this application can be used to support physicians and speed up diagnosis. The proposed CAD can increase confidence in the diagnosis or suggest a second opinion, and can also be used in emergency situations when a radiologist is not available immediately.
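Binary relevance decomposes the multi-label problem into one independent binary decision per disease, so a single CXR can carry several findings simultaneously. A minimal sketch of the final per-label thresholding step (disease names, scores, and the 0.5 threshold are illustrative assumptions):

```python
def binary_relevance(scores, threshold=0.5):
    """Binary relevance: each disease is an independent binary problem,
    so each per-label score is thresholded separately and one image may
    be flagged with several findings at once."""
    return {disease: s >= threshold for disease, s in scores.items()}
```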


Subjects
Deep Learning; Neural Networks, Computer; Radiography, Thoracic; Thorax; X-Rays
13.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 3074-3077, 2021 11.
Article in English | MEDLINE | ID: mdl-34891892

ABSTRACT

Melanoma is considered one of the world's deadliest cancers. This type of skin cancer can spread to other areas of the body if not detected at an early stage. Convolutional neural network (CNN)-based classifiers are currently considered among the most effective melanoma detection techniques. This study presents the use of recent deep CNN approaches to detect melanoma skin cancer and investigate suspicious lesions. Tests were conducted using a set of more than 36,000 images extracted from multiple datasets. The obtained results show that the best-performing deep learning approach achieves high scores, with an accuracy and area under the curve (AUC) above 99%.


Subjects
Deep Learning; Melanoma; Skin Neoplasms; Dermoscopy; Humans; Melanoma/diagnosis; Neural Networks, Computer; Skin Neoplasms/diagnosis
14.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 3297-3300, 2021 11.
Article in English | MEDLINE | ID: mdl-34891945

ABSTRACT

COVID-19 is an acute severe respiratory disease caused by the novel coronavirus SARS-CoV-2. After its first appearance in Wuhan (China), it spread rapidly across the world and became a pandemic, with a devastating effect on everyday life, public health, and the world economy. The use of advanced artificial intelligence (AI) techniques combined with radiological imaging can help speed up the detection of this disease. In this study, we propose the development of recent deep learning models for automatic COVID-19 detection using computed tomography (CT) images. The proposed models are fine-tuned and optimized to provide accurate results for multiclass classification of COVID-19 vs. community-acquired pneumonia (CAP) vs. normal cases. Tests were conducted at both the image and patient level and show that the proposed algorithms achieve very high scores. In addition, an explainability algorithm was developed to help visualize the symptoms of the disease detected by the best-performing deep model.


Subjects
COVID-19; Artificial Intelligence; Humans; Neural Networks, Computer; SARS-CoV-2; Tomography, X-Ray Computed
15.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 3336-3339, 2021 11.
Article in English | MEDLINE | ID: mdl-34891954

ABSTRACT

Early fundus screening is a cost-effective and efficient approach to reducing ophthalmic-disease-related blindness, but manual evaluation is time-consuming. Ophthalmic disease detection studies have shown interesting results thanks to advances in deep learning techniques, yet the majority of them are limited to a single disease. In this paper, we study various deep learning models for eye disease detection, with several optimizations performed. The results show that the best model achieves high scores, with an AUC of 98.31% for six diseases and an AUC of 96.04% for eight diseases.


Subjects
Deep Learning; Eye Diseases; Ophthalmology; Eye Diseases/diagnosis; Fundus Oculi; Humans
16.
J Clin Med ; 10(14)2021 Jul 14.
Article in English | MEDLINE | ID: mdl-34300266

ABSTRACT

The COVID-19 pandemic continues to spread globally at a rapid pace, and its rapid detection remains a challenge due to its high infectivity and limited testing availability. One of the most readily available imaging modalities in clinical routine is chest X-ray (CXR), which is often used for diagnostic purposes. Here, we propose a computer-aided detection of COVID-19 in CXR imaging using deep and conventional radiomic features. First, we used a 2D U-Net model to segment the lung lobes. Then, we extracted deep latent-space radiomics by applying a deep convolutional autoencoder (ConvAE) with internal dense layers to extract low-dimensional deep radiomics. We used the Johnson-Lindenstrauss (JL) lemma, Laplacian scoring (LS), and principal component analysis (PCA) to reduce the dimensionality of the conventional radiomics. The generated low-dimensional deep and conventional radiomics were integrated to distinguish COVID-19 from pneumonia and healthy patients. We used 704 CXR images to train the entire model (i.e., U-Net, ConvAE, and feature selection in conventional radiomics). Afterward, we independently validated the whole system using a study cohort of 1597 cases. We trained and tested a random forest model for detecting COVID-19 cases through multivariate binary-class and multiclass classification. The maximal (full multivariate) model using a combination of the two radiomic groups yields a cross-validated classification accuracy of 72.6% (69.4-74.4%) for multiclass and 89.6% (88.4-90.7%) for binary-class classification.
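The JL-lemma step amounts to multiplying the radiomic feature vector by a random Gaussian matrix, which preserves pairwise distances up to small distortion with high probability. A minimal pure-Python sketch (the 354-to-12 reduction from the abstract is used only as example dimensions; this is not the study's implementation):

```python
import random

def jl_project(x, out_dim, seed=0):
    """Random Gaussian projection of vector x into out_dim dimensions.
    Each output coordinate is a scaled random linear combination of the
    input; by the Johnson-Lindenstrauss lemma, pairwise distances are
    approximately preserved with high probability."""
    rng = random.Random(seed)
    scale = 1 / out_dim ** 0.5
    return [scale * sum(rng.gauss(0, 1) * xi for xi in x)
            for _ in range(out_dim)]
```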

17.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 1335-1338, 2020 07.
Article in English | MEDLINE | ID: mdl-33018235

ABSTRACT

Lung cancer is considered the deadliest cancer worldwide. To detect it, radiologists need to inspect multiple computed tomography (CT) scans, a tedious and time-consuming task. In recent years, promising methods based on deep learning object detection algorithms have been proposed for automatic nodule detection and classification. With those techniques, computer-aided detection (CAD) software can be developed to alleviate radiologists' burden and help speed up the screening process. However, only a limited number of the available object detection frameworks have been used for this purpose, and it can be challenging to know which one to choose as a baseline for developing a new application for this task. Hence, in this work we propose a benchmark of recent state-of-the-art deep learning detectors, such as Faster R-CNN, YOLO, SSD, RetinaNet, and EfficientDet, on the challenging task of pulmonary nodule detection. Evaluation is done using automatically segmented 2D images extracted from volumetric chest CT scans.


Subjects
Deep Learning; Lung Neoplasms; Algorithms; Humans; Lung Neoplasms/diagnosis; Software; Tomography, X-Ray Computed
18.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 1966-1969, 2020 07.
Article in English | MEDLINE | ID: mdl-33018388

ABSTRACT

Diabetic retinopathy (DR) is a medical condition due to diabetes mellitus that can damage the patient's retina and cause blood leaks. It can cause symptoms ranging from mild vision problems to complete blindness if not treated in time. In this work, we propose the use of a deep learning architecture based on a recent convolutional neural network called EfficientNet to detect referable diabetic retinopathy (RDR) and vision-threatening DR. Tests were conducted on two public datasets, EyePACS and APTOS 2019. The obtained results achieve state-of-the-art performance and show that the proposed network leads to higher classification rates, achieving an area under the curve (AUC) of 0.984 for RDR and 0.990 for vision-threatening DR on the EyePACS dataset. Similar performance is obtained on the APTOS 2019 dataset, with an AUC of 0.966 and 0.998 for referable and vision-threatening DR, respectively. An explainability algorithm was also developed and shows the efficiency of the proposed approach in detecting DR signs.


Subjects
Diabetes Mellitus; Diabetic Retinopathy; Algorithms; Area Under Curve; Diabetic Retinopathy/diagnosis; Humans; Neural Networks, Computer; Retina
19.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 5312-5315, 2020 07.
Article in English | MEDLINE | ID: mdl-33019183

ABSTRACT

With cancer remaining one of the main challenges of modern medicine, much effort is devoted to oncology research. Since early diagnosis is a highly important factor in the treatment of many types of cancer, screening tests have become a popular research subject. Technical and technological advances have brought down the price of genome sequencing and have improved understanding of the relationship between DNA, RNA, and tumor sites. These advances have sparked an interest in personalized and precision medicine research. In this work, we propose a deep neural network classifier to identify the anatomical site of a tumor. Using 27 TCGA miRNA stem-loop cohorts, we classify tumors into 20 anatomical sites with 96.9% accuracy. Our results demonstrate the possibility of using stem-loop expression data for accurate cancer localization.


Subjects
MicroRNAs; Neoplasms; Deep Learning; Humans; MicroRNAs/genetics; Neoplasms/diagnosis; Neural Networks, Computer; Precision Medicine
20.
J Med Imaging (Bellingham) ; 7(4): 044503, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32904519

ABSTRACT

Purpose: Diabetic retinopathy (DR) is characterized by retinal lesions affecting people who have had diabetes for several years. It is one of the leading causes of visual impairment worldwide. To diagnose this disease, ophthalmologists need to manually analyze retinal fundus images. Computer-aided diagnosis systems can help alleviate this burden by automatically detecting DR on retinal images, thus saving physicians' precious time and reducing costs. The objective of this study is to develop a deep learning algorithm capable of detecting DR on retinal fundus images. Nine public datasets and more than 90,000 images are used to assess the efficiency of the proposed technique. In addition, an explainability algorithm is developed to visually show the DR signs detected by the deep model. Approach: The proposed deep learning algorithm fine-tunes a pretrained deep convolutional neural network for DR detection. The model is trained on a subset of the EyePACS dataset using a cosine annealing strategy with warm-up for decaying the learning rate, thus improving the training accuracy. Tests are conducted on the nine datasets. An explainability algorithm based on gradient-weighted class activation mapping is developed to visually show the signs the model selects to classify retina images as DR. Results: The proposed network leads to higher classification rates, with an area under the curve (AUC) of 0.986, sensitivity of 0.958, and specificity of 0.971 for EyePACS. For MESSIDOR, MESSIDOR-2, DIARETDB0, DIARETDB1, STARE, IDRID, E-ophtha, and UoA-DR, the AUC is 0.963, 0.979, 0.986, 0.988, 0.964, 0.957, 0.984, and 0.990, respectively. Conclusions: The obtained results achieve state-of-the-art performance and outperform previously published works trained only on publicly available datasets. The proposed approach can robustly classify fundus images and detect DR. An explainability model was developed and showed that our model was able to efficiently identify different signs of DR.
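The cosine annealing schedule with warm-up mentioned in the Approach can be written as a closed-form learning-rate function of the training step. A minimal sketch (the parameter values in the usage are illustrative; the study's actual schedule settings are not given here):

```python
import math

def lr_at_step(step, total_steps, base_lr, warmup_steps=0, min_lr=0.0):
    """Linear warm-up to base_lr over warmup_steps, then cosine decay
    from base_lr down to min_lr over the remaining steps."""
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return min_lr + 0.5 * (base_lr - min_lr) * (1 + math.cos(math.pi * progress))
```

Warm-up avoids large updates while batch statistics are still unstable, and the cosine tail lets the fine-tuned network settle into a minimum gently.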
