Results 1 - 5 of 5
1.
Biomed Opt Express; 15(2): 1233-1252, 2024 Feb 01.
Article in English | MEDLINE | ID: mdl-38404302

ABSTRACT

Optical coherence tomography (OCT) inevitably suffers from speckles originating from multiply scattered photons owing to its low-coherence interferometry nature. Although various deep learning schemes have been proposed for OCT despeckling, they typically require ground-truth images, which are difficult to collect in clinical practice. To alleviate the influence of speckles without requiring ground-truth images, this paper presents a self-supervised deep learning scheme, namely a Self2Self strategy (S2Snet), for OCT despeckling using a single noisy image. The main architecture is the Self2Self network, with its partial convolutions replaced by gated convolution layers. Both the input images and their Bernoulli-sampled instances are first fed into the network, a devised loss function is then integrated into the network to remove the background noise, and the denoised output is finally estimated as the average of multiple predicted outputs. Experiments with various OCT datasets are conducted to verify the effectiveness of the proposed S2Snet scheme. Comparisons with existing methods demonstrate that S2Snet not only outperforms existing self-supervised deep learning methods but also achieves better performance than non-deep-learning ones in different cases. Specifically, S2Snet improves PSNR and SSIM by 3.41% and 2.37%, respectively, compared with the original Self2Self network, and by 19.9% and 22.7% compared with the well-known non-deep-learning NWSR method.
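
A minimal PyTorch sketch of the single-image, Bernoulli-sampling idea behind S2Snet: a tiny gated-convolution network is fitted to one noisy B-scan, the loss is computed only on pixels hidden by each Bernoulli mask, and the final estimate averages several masked predictions. All names (GatedConv2d, S2SDespeckler, bernoulli_despeckle), the network depth, and the hyper-parameters are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedConv2d(nn.Module):
    """Convolution whose output is modulated by a learned sigmoid gate."""
    def __init__(self, c_in, c_out, k=3):
        super().__init__()
        self.feat = nn.Conv2d(c_in, c_out, k, padding=k // 2)
        self.gate = nn.Conv2d(c_in, c_out, k, padding=k // 2)

    def forward(self, x):
        return F.leaky_relu(self.feat(x), 0.1) * torch.sigmoid(self.gate(x))

class S2SDespeckler(nn.Module):
    """Tiny gated-convolution network standing in for the full Self2Self architecture."""
    def __init__(self, ch=32):
        super().__init__()
        self.body = nn.Sequential(
            GatedConv2d(1, ch), GatedConv2d(ch, ch), nn.Conv2d(ch, 1, 3, padding=1))

    def forward(self, x):
        return self.body(x)

def bernoulli_despeckle(noisy, p=0.7, iters=200, samples=50, lr=1e-3):
    """Fit the network to one noisy image; supervise only the dropped pixels."""
    model = S2SDespeckler()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(iters):
        mask = torch.bernoulli(torch.full_like(noisy, p))    # keep each pixel with prob. p
        pred = model(noisy * mask)
        loss = (((pred - noisy) * (1 - mask)) ** 2).mean()   # loss on hidden pixels only
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():                                    # average several masked predictions
        out = torch.stack([model(noisy * torch.bernoulli(torch.full_like(noisy, p)))
                           for _ in range(samples)]).mean(0)
    return out

if __name__ == "__main__":
    noisy_bscan = torch.rand(1, 1, 64, 64)                   # placeholder OCT B-scan
    print(bernoulli_despeckle(noisy_bscan, iters=10, samples=5).shape)
```

The dropout probability p and the number of averaged predictions trade denoising strength against runtime; the paper's devised loss and full architecture are considerably richer than this sketch.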

2.
Comput Biol Med; 165: 107319, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37611427

ABSTRACT

As a leading cause of blindness worldwide, macular edema (ME) is mainly characterized by the accumulation of sub-retinal fluid (SRF), intraretinal fluid (IRF), and pigment epithelial detachment (PED); the delineation of SRF, IRF, and PED, also known as ME segmentation, has therefore become a crucial issue in ophthalmology. Because manual ME segmentation in retinal optical coherence tomography (OCT) images is subjective and time-consuming, automatic computer-aided systems are highly desired in clinical practice. This paper proposes a novel loss-balanced parallel decoding network, namely PadNet, for ME segmentation. PadNet mainly consists of an encoder and three parallel decoder modules serving as segmentation, contour, and diffusion branches, which extract the ME characteristics, capture the contour-area features, and expand the ME area from center to edge, respectively. A new loss-balanced joint loss function with three components, one for each parallel decoding branch, is also devised for training. Experiments are conducted on three public datasets to verify the effectiveness of PadNet, and its performance is compared with that of five state-of-the-art methods. Results show that PadNet improves ME segmentation accuracy by 8.1%, 11.1%, 0.6%, 1.4%, and 8.3% compared with UNet, sASPP, MsTGANet, YNet, and RetiFluidNet, respectively, which convincingly demonstrates that the proposed PadNet is robust and effective for ME segmentation in different cases.


Subject(s)
Macular Edema; Retinal Detachment; Humans; Tomography, Optical Coherence/methods; Retina/diagnostic imaging; Macular Edema/diagnostic imaging; Retinal Detachment/diagnostic imaging
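
To make the parallel-decoding idea in entry 2 concrete, here is a hedged PyTorch sketch: one shared encoder feeds three decoder heads (segmentation, contour, diffusion) whose losses are combined with balancing weights. The module names, the toy encoder, and the weights (1.0, 0.5, 0.5) are assumptions for illustration only, not the published PadNet code.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1),
                         nn.BatchNorm2d(c_out), nn.ReLU(inplace=True))

class ParallelDecoderNet(nn.Module):
    def __init__(self, n_classes=4, ch=32):           # background + SRF/IRF/PED
        super().__init__()
        self.encoder = nn.Sequential(conv_block(1, ch), conv_block(ch, ch))
        self.seg_dec = nn.Conv2d(ch, n_classes, 1)     # segmentation branch
        self.ctr_dec = nn.Conv2d(ch, 1, 1)             # contour branch
        self.dif_dec = nn.Conv2d(ch, n_classes, 1)     # diffusion (center-to-edge) branch

    def forward(self, x):
        f = self.encoder(x)
        return self.seg_dec(f), self.ctr_dec(f), self.dif_dec(f)

def joint_loss(outputs, seg_gt, contour_gt, w=(1.0, 0.5, 0.5)):
    """Balanced sum of the three branch losses (weights are illustrative)."""
    seg, ctr, dif = outputs
    ce, bce = nn.CrossEntropyLoss(), nn.BCEWithLogitsLoss()
    return w[0] * ce(seg, seg_gt) + w[1] * bce(ctr, contour_gt) + w[2] * ce(dif, seg_gt)

if __name__ == "__main__":
    net = ParallelDecoderNet()
    bscan = torch.rand(2, 1, 64, 64)                   # placeholder OCT B-scans
    seg_gt = torch.randint(0, 4, (2, 64, 64))          # placeholder class labels
    contour_gt = torch.rand(2, 1, 64, 64).round()      # placeholder contour mask
    loss = joint_loss(net(bscan), seg_gt, contour_gt)
    loss.backward()
    print(float(loss))
```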
3.
Biomed Opt Express; 14(6): 2773-2795, 2023 Jun 01.
Article in English | MEDLINE | ID: mdl-37342690

ABSTRACT

As a low-coherence interferometry-based imaging modality, optical coherence tomography (OCT) inevitably suffers from speckles originating from multiply scattered photons. Speckles hide tissue microstructures and degrade the accuracy of disease diagnosis, thus hindering OCT clinical applications. Various methods have been proposed to address this issue, yet they suffer from heavy computational load, the lack of high-quality clean image priors, or both. In this paper, a novel self-supervised deep learning scheme, namely a Blind2Unblind network with a refinement strategy (B2Unet), is proposed for OCT speckle reduction using only a single noisy image. Specifically, the overall B2Unet network architecture is presented first; then, a global-aware mask mapper and a dedicated loss function are devised to improve image perception and to optimize the sampled blind spots of the mask mapper, respectively. To make the blind spots visible to B2Unet, a new re-visible loss is also designed, and its convergence is discussed with the speckle properties taken into account. Extensive experiments with different OCT image datasets are finally conducted to compare B2Unet with state-of-the-art existing methods. Both qualitative and quantitative results convincingly demonstrate that B2Unet outperforms state-of-the-art model-based and fully supervised deep learning methods, and that it is robust and capable of effectively suppressing speckles while preserving important tissue microstructures in OCT images in different cases.
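
The following PyTorch fragment sketches one way a Blind2Unblind-style training step could look for entry 3: disjoint blind-spot masks cover every pixel exactly once, a global mask mapper reassembles the masked predictions into one image, and a re-visible term couples that reconstruction with the unmasked prediction. The tiny network, the mask layout, and the weight lam are simplifying assumptions; the paper's actual re-visible loss and convergence analysis are more involved.

```python
import torch
import torch.nn as nn

def make_disjoint_masks(h, w, cell=2):
    """Return cell*cell binary masks; each pixel is masked in exactly one of them."""
    masks = []
    for i in range(cell):
        for j in range(cell):
            m = torch.zeros(1, 1, h, w)
            m[..., i::cell, j::cell] = 1.0
            masks.append(m)
    return masks

def training_step(model, noisy, lam=2.0):
    h, w = noisy.shape[-2:]
    blind = torch.zeros_like(noisy)
    for m in make_disjoint_masks(h, w):
        pred = model(noisy * (1 - m))          # network never sees the masked pixels
        blind = blind + pred * m               # mask mapper: keep only the hidden positions
    with torch.no_grad():
        visible = model(noisy)                 # unmasked prediction (no gradient)
    # re-visible objective: match the noisy target while staying consistent with the
    # visible (unmasked) prediction
    return ((blind - noisy) ** 2).mean() + lam * ((blind - visible) ** 2).mean()

if __name__ == "__main__":
    model = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(16, 1, 3, padding=1))
    noisy = torch.rand(1, 1, 64, 64)           # placeholder noisy OCT image
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss = training_step(model, noisy)
    opt.zero_grad(); loss.backward(); opt.step()
    print(float(loss))
```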

4.
J Med Imaging (Bellingham); 10(2): 024006, 2023 Mar.
Article in English | MEDLINE | ID: mdl-37009058

ABSTRACT

Purpose: Optical coherence tomography (OCT) is a noninvasive, high-resolution imaging modality capable of providing both cross-sectional and three-dimensional images of tissue microstructures. Owing to its low-coherence interferometry nature, however, OCT inevitably suffers from speckles, which diminish image quality and hinder precise disease diagnosis; despeckling mechanisms are therefore highly desired to alleviate the influence of speckles on OCT images. Approach: We propose a multiscale denoising generative adversarial network (MDGAN) for speckle reduction in OCT images. A cascaded multiscale module is adopted as the MDGAN basic block to raise the network's learning capability and exploit multiscale context, a spatial attention mechanism is then introduced to refine the denoised images, and a deep back-projection layer is finally employed to alternately upscale and downscale the feature maps of MDGAN for richer feature learning. Results: Experiments with two different OCT image datasets were conducted to verify the effectiveness of the proposed MDGAN scheme. Compared with state-of-the-art existing methods, MDGAN improves both peak signal-to-noise ratio and signal-to-noise ratio by up to 3 dB, with its structural similarity index measure and contrast-to-noise ratio being only 1.4% and 1.3% lower than those of the best existing methods. Conclusions: Results demonstrate that MDGAN is effective and robust for OCT image speckle reduction and outperforms the best state-of-the-art denoising methods in different cases. It could help alleviate the influence of speckles in OCT images and improve OCT imaging-based diagnosis.
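
A rough PyTorch sketch of the GAN setup described in entry 4: a generator built from a multiscale convolution block and a spatial-attention gate is paired with a small patch discriminator, and the generator objective mixes an L1 term with an adversarial term. The kernel sizes, the 0.01 adversarial weight, and the omission of the deep back-projection layer are all illustrative simplifications, not the MDGAN implementation.

```python
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    """Parallel 3x3 / 5x5 / 7x7 convolutions fused back into one feature map."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(c_in, c_out, k, padding=k // 2) for k in (3, 5, 7))
        self.fuse = nn.Conv2d(3 * c_out, c_out, 1)

    def forward(self, x):
        return torch.relu(self.fuse(torch.cat([b(x) for b in self.branches], dim=1)))

class SpatialAttention(nn.Module):
    """Per-pixel gate predicted from channel-wise mean and max statistics."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, 7, padding=3)

    def forward(self, x):
        stats = torch.cat([x.mean(1, keepdim=True), x.max(1, keepdim=True).values], dim=1)
        return x * torch.sigmoid(self.conv(stats))

generator = nn.Sequential(MultiScaleBlock(1, 32), SpatialAttention(),
                          nn.Conv2d(32, 1, 3, padding=1))
discriminator = nn.Sequential(nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                              nn.Conv2d(16, 1, 4, stride=2, padding=1))

if __name__ == "__main__":
    noisy, clean = torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64)   # placeholder pair
    bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
    fake = generator(noisy)
    # generator objective: stay close to the clean target while fooling the discriminator
    g_loss = l1(fake, clean) + 0.01 * bce(discriminator(fake),
                                          torch.ones_like(discriminator(fake)))
    # discriminator objective: separate clean patches from generated ones
    d_loss = bce(discriminator(clean), torch.ones_like(discriminator(clean))) + \
             bce(discriminator(fake.detach()), torch.zeros_like(discriminator(fake.detach())))
    print(float(g_loss), float(d_loss))
```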

5.
J Biophotonics; 15(10): e202200067, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35704010

ABSTRACT

Automatic optical coherence tomography angiography (OCTA) vessel segmentation is of great significance for retinal disease diagnosis, yet the complex vascular structure makes the segmentation task challenging. This paper reports a novel end-to-end three-stage convolutional neural network, CGNet, which integrates channel and position attention (CPA) modules with graph reasoning for retinal OCTA vessel segmentation. Specifically, in the coarse stage, both CPA and graph reasoning network (GRN) modules are integrated between a U-shaped encoder and decoder to acquire vessel confidence maps. In the fine stage, these confidence maps are concatenated with the original image and the generated fine image map into a 3-channel input to refine the retinal micro-vasculature. Finally, the fine and refined maps are fused in the refining stage to produce the segmentation results. Experiments with different public datasets are conducted to verify the efficacy of the proposed CGNet. Results show that, by employing the end-to-end training scheme and the integrated CPA and GRN modules, CGNet achieves an area under the ROC curve (AUC) of 94.29% and 85.62% on the two datasets, outperforming state-of-the-art existing methods with improved operability and reduced complexity in different cases. Code is available at https://github.com/GE-123-cpu/CGnet-for-vessel-segmentation.


Subject(s)
Image Processing, Computer-Assisted; Tomography, Optical Coherence; Algorithms; Fluorescein Angiography; Image Processing, Computer-Assisted/methods; Retinal Vessels/diagnostic imaging
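
A compact PyTorch sketch of the coarse-to-fine pipeline in entry 5: a coarse network yields a vessel confidence map, which is stacked with the original OCTA image and a fine map into a 3-channel refinement input, and the fine and refined maps are fused into the final segmentation. The toy CPA module and every name here are stand-in assumptions for the published CGNet blocks (graph reasoning is omitted for brevity).

```python
import torch
import torch.nn as nn

class ChannelPositionAttention(nn.Module):
    """Channel gate (global pooling + 1x1 conv) followed by a position gate (1x1 conv)."""
    def __init__(self, ch):
        super().__init__()
        self.channel = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Conv2d(ch, ch, 1), nn.Sigmoid())
        self.position = nn.Sequential(nn.Conv2d(ch, 1, 1), nn.Sigmoid())

    def forward(self, x):
        x = x * self.channel(x)           # re-weight channels
        return x * self.position(x)       # re-weight spatial positions

def small_net(c_in, ch=16):
    return nn.Sequential(nn.Conv2d(c_in, ch, 3, padding=1), nn.ReLU(),
                         ChannelPositionAttention(ch), nn.Conv2d(ch, 1, 3, padding=1))

class ThreeStageSegmenter(nn.Module):
    def __init__(self):
        super().__init__()
        self.coarse = small_net(1)        # stage 1: vessel confidence map
        self.fine = small_net(3)          # stage 2: refine from [image, confidence, fine map]
        self.fuse = nn.Conv2d(2, 1, 1)    # stage 3: fuse fine and refined maps

    def forward(self, x):
        conf = torch.sigmoid(self.coarse(x))
        fine_in = torch.cat([x, conf, x * conf], dim=1)   # 3-channel refinement input
        refined = torch.sigmoid(self.fine(fine_in))
        return torch.sigmoid(self.fuse(torch.cat([conf, refined], dim=1)))

if __name__ == "__main__":
    octa = torch.rand(1, 1, 64, 64)       # placeholder OCTA en-face image
    print(ThreeStageSegmenter()(octa).shape)
```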