Results 1 - 3 of 3
1.
Article in Chinese | WPRIM | ID: wpr-908584

ABSTRACT

Objective: To propose and evaluate a cycle-consistent adversarial network (CycleGAN) for enhancing low-quality fundus images, such as blurred, underexposed, and overexposed images.

Methods: A dataset of 700 high-quality and 700 low-quality fundus images selected from the EyePACS dataset was used to train the image enhancement network in this study. The selected images were cropped and uniformly scaled to 512×512 pixels. CycleGAN was built from two generative models and two discriminative models. The generative models produced matching high-/low-quality images from input low-/high-quality fundus images, and the discriminative models determined whether an image was original or generated. The proposed algorithm was compared with three image enhancement algorithms, contrast-limited adaptive histogram equalization (CLAHE), dynamic histogram equalization (DHE), and multi-scale retinex with color restoration (MSRCR), using qualitative visual assessment together with clarity, BRISQUE, hue, and saturation as quantitative indicators. The original and enhanced images were fed to a diabetic retinopathy (DR) diagnostic network, and the diagnostic accuracy and specificity were compared.

Results: CycleGAN achieved the best results in enhancing the three types of low-quality fundus images (blurred, underexposed, and overexposed). The enhanced fundus images had high contrast, rich colors, and clear optic disc and blood vessel structures. The clarity of images enhanced by CycleGAN was second only to the CLAHE algorithm. The BRISQUE quality score of images enhanced by CycleGAN was 0.571, which was 10.2%, 7.3%, and 10.0% higher than that of the CLAHE, DHE, and MSRCR algorithms, respectively. CycleGAN achieved 103.03 in hue and 123.24 in saturation, both higher than those of the other three algorithms. CycleGAN took only 35 seconds to enhance 100 images, slower only than CLAHE. The images enhanced by CycleGAN achieved an accuracy of 96.75% and a specificity of 99.60% in DR diagnosis, both higher than those of the original images.

Conclusions: CycleGAN can effectively enhance low-quality blurred, underexposed, and overexposed fundus images and improve the accuracy of a computer-aided DR diagnostic network. The enhanced fundus images are helpful for doctors carrying out pathological analysis and may have great application value in the clinical diagnosis of ophthalmology.
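The CLAHE baseline above builds on plain histogram equalization. As a minimal illustration of that underlying idea (a hypothetical NumPy sketch, not the study's code), global equalization of an 8-bit grayscale image looks like this; CLAHE extends it by equalizing contrast-limited local tiles instead of the whole image:

```python
import numpy as np

def hist_equalize(img: np.ndarray) -> np.ndarray:
    """Global histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]  # first non-zero CDF value
    # Map each grey level through the normalized cumulative distribution.
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]

# A low-contrast ramp (levels 100-139) spreads out to the full 0-255 range.
low_contrast = np.tile(np.arange(100, 140, dtype=np.uint8), (8, 1))
out = hist_equalize(low_contrast)
```

In this toy case the 40 input levels are stretched across the whole intensity range, which is the contrast gain the baseline algorithms above exploit.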

2.
Article in Chinese | WPRIM | ID: wpr-908586

ABSTRACT

Objective: To evaluate the efficiency of the deep learning-based ResNet50-OC model for multi-class classification of color fundus photographs.

Methods: A proprietary dataset (PD) collected in July 2018 at BenQ Hospital of Nanjing Medical University and the EyePACS dataset were included. The included images were classified by clinical ophthalmologists into five types: high quality, underexposed, overexposed, blurred edges, and lens flare. The training dataset contained 1 000 images per type (800 from EyePACS and 200 from PD) and the testing dataset 500 images per type (400 from EyePACS and 100 from PD), giving 5 000 training images and 2 500 testing images in total. All images were normalized and augmented. Transfer learning was used to initialize the network parameters, on the basis of which the current mainstream deep learning classification networks (VGG, Inception-ResNet-v2, ResNet, DenseNet) were compared. ResNet50, the network with the best accuracy and Micro F1 value, was selected as the main network of the classification model. During training, the One-Cycle strategy was introduced to accelerate model convergence, yielding the optimal model ResNet50-OC. ResNet50-OC was applied to multi-class classification of fundus image quality, and the accuracy and Micro F1 values of ResNet50 and ResNet50-OC were evaluated.

Results: The multi-class classification accuracy and Micro F1 values of ResNet50 were significantly higher than those of VGG, Inception-ResNet-v2, ResNet34, and DenseNet. The classification accuracy of the ResNet50-OC model reached 98.77% after 15 rounds of training, higher than the 98.76% of the ResNet50 model after 50 rounds. The Micro F1 value of ResNet50-OC was 98.78% after 15 rounds of training, the same as that of ResNet50 after 50 rounds.

Conclusions: The proposed ResNet50-OC model is accurate and effective for multi-class classification of color fundus photograph quality. The One-Cycle strategy can reduce the amount of training required and improve classification efficiency.
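The One-Cycle strategy mentioned above ramps the learning rate up to a peak and then anneals it back down over a single training run, which is what lets the model converge in fewer rounds. A minimal cosine-annealed sketch (illustrative parameter values, not the study's settings):

```python
import math

def one_cycle_lr(step: int, total_steps: int,
                 max_lr: float = 0.1, pct_warmup: float = 0.3,
                 div_factor: float = 25.0) -> float:
    """One-Cycle schedule: warm up from max_lr/div_factor to max_lr,
    then cosine-anneal down well below the starting rate."""
    warmup_steps = int(total_steps * pct_warmup)
    start_lr = max_lr / div_factor
    if step < warmup_steps:
        t = step / max(1, warmup_steps)  # 0 -> 1 during warmup
        return start_lr + (max_lr - start_lr) * 0.5 * (1 - math.cos(math.pi * t))
    t = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    final_lr = start_lr * 0.01  # end far below the initial rate
    return final_lr + (max_lr - final_lr) * 0.5 * (1 + math.cos(math.pi * t))

lrs = [one_cycle_lr(s, 100) for s in range(100)]
```

The single rise-and-fall shape is the defining feature; practical implementations (e.g. PyTorch's `OneCycleLR`) often pair it with an inverse momentum schedule.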

3.
Article in Chinese | WPRIM | ID: wpr-753206

ABSTRACT

Objective: To propose a deep learning-based retinal image quality classification network, FA-Net, to make convolutional neural networks (CNN) more suitable for image quality assessment in eye disease screening systems.

Methods: The main network of FA-Net was based on VGG-19, to which an attention mechanism was added. Transfer learning was used during training, with ImageNet weights initializing the network. The attention net is based on foreground extraction: it extracts the blood vessels and suspected lesion regions and assigns higher weights to regions of interest to enhance the learning of these important areas.

Results: A total of 2894 fundus images were used to train FA-Net. FA-Net achieved 97.65% classification accuracy on a test set of 2170 fundus images, with a sensitivity of 0.978, a specificity of 0.960, and an area under the curve (AUC) of 0.995.

Conclusions: Compared with other CNNs, the proposed FA-Net has better classification performance and can evaluate retinal fundus image quality more accurately and efficiently. The network takes into account the human visual system (HVS) and the human attention mechanism. Adding the attention module to the VGG-19 structure makes the classification results more interpretable while also improving classification performance.
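The attention weighting described above (higher weights on vessel and lesion regions) can be pictured as a spatial mask multiplied onto a feature map. The toy NumPy sketch below illustrates that general pattern only; it is not FA-Net's actual module, and the mask logits here are hand-made stand-ins for the foreground-extraction branch:

```python
import numpy as np

def apply_spatial_attention(features: np.ndarray, mask_logits: np.ndarray) -> np.ndarray:
    """Weight a (C, H, W) feature map by a per-pixel attention mask.

    `mask_logits` (H, W) would come from a foreground branch (vessel /
    lesion extraction); a sigmoid squashes it to (0, 1) so regions of
    interest contribute more to the subsequent layers.
    """
    attention = 1.0 / (1.0 + np.exp(-mask_logits))  # sigmoid -> (0, 1)
    return features * attention[None, :, :]         # broadcast over channels

rng = np.random.default_rng(0)
feats = rng.standard_normal((3, 4, 4))
logits = np.full((4, 4), -5.0)   # background: strongly suppressed
logits[1:3, 1:3] = 5.0           # "region of interest": passed through
out = apply_spatial_attention(feats, logits)
```

Features inside the masked region survive almost unchanged (sigmoid(5) ≈ 0.99) while background features are attenuated to under 1% of their value, which is the "assign higher weights to regions of interest" behavior the abstract describes.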
