Results 1 - 5 of 5
1.
IEEE Trans Med Imaging ; 43(4): 1323-1336, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38015687

ABSTRACT

Medical imaging provides many valuable clues about anatomical structure and pathological characteristics. However, image degradation is a common issue in clinical practice and can adversely impact observation and diagnosis by both physicians and algorithms. Although extensive enhancement models have been developed, these models require thorough pre-training before deployment and fail to exploit the potential value of inference data after deployment. In this paper, we propose an algorithm for source-free unsupervised domain adaptive medical image enhancement (SAME), which adapts and optimizes enhancement models using test data in the inference phase. A structure-preserving enhancement network is first constructed to learn a robust source model from synthesized training data. A teacher-student model is then initialized with the source model and performs source-free unsupervised domain adaptation (SFUDA) via knowledge distillation on the test data. Additionally, a pseudo-label picker is developed to boost the knowledge distillation of enhancement tasks. Experiments on ten datasets from three medical image modalities validate the advantage of the proposed algorithm, and setting analyses and ablation studies further interpret the effectiveness of SAME. The remarkable enhancement performance and benefits for downstream tasks demonstrate the potential and generalizability of SAME. The code is available at https://github.com/liamheng/Annotation-free-Medical-Image-Enhancement.
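The teacher-student adaptation loop described above can be sketched in miniature. This is a toy illustration only: the "model" is a single scalar weight applied per pixel, and the residual-quantile rule standing in for the pseudo-label picker is our assumption, not the paper's actual design.

```python
import numpy as np

# Toy "enhancement model": one scalar weight applied per pixel,
# standing in for a full structure-preserving enhancement network.
def enhance(w, x):
    return w * x

def sfuda_step(w_student, w_teacher, x_test, lr=0.1, ema=0.99):
    """One source-free adaptation step on unlabeled test data."""
    pseudo = enhance(w_teacher, x_test)            # teacher pseudo-labels
    pred = enhance(w_student, x_test)              # student predictions
    residual = np.abs(pred - pseudo)
    # Hypothetical pseudo-label picker: keep the lowest-residual samples
    keep = residual <= np.quantile(residual, 0.8)
    # Gradient of 0.5 * (w*x - pseudo)^2 w.r.t. w on the kept samples
    grad = np.mean((pred - pseudo)[keep] * x_test[keep])
    w_student = w_student - lr * grad
    # Teacher tracks the student via an exponential moving average
    w_teacher = ema * w_teacher + (1 - ema) * w_student
    return w_student, w_teacher
```

Each step distills the teacher's output into the student on selected test samples, then slowly updates the teacher, which is the common shape of EMA-based SFUDA.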


Subject(s)
Algorithms , Image Enhancement , Humans , Image Processing, Computer-Assisted
2.
BMC Ophthalmol ; 23(1): 451, 2023 Nov 13.
Article in English | MEDLINE | ID: mdl-37953270

ABSTRACT

BACKGROUND: The purpose of this study was to investigate changes in retinal layers in patients with age-related macular degeneration (AMD) treated with anti-vascular endothelial growth factor (anti-VEGF) agents and to evaluate whether these changes affect treatment response. METHODS: This study included 496 patients with AMD or polypoidal choroidal vasculopathy (PCV) who were treated with anti-VEGF agents and followed up for at least 6 months. A comprehensive analysis of the retinal layers affecting visual acuity was conducted. Because whole-layer average thickness can wash out local differences toward the mean, each retinal layer was additionally divided into subregions and layer thickness was analyzed per region. The labeled data will be made publicly available for further research. RESULTS: Compared to baseline, significant improvement in visual acuity was observed at the 6-month follow-up. A statistically significant reduction in central retinal thickness and in the thickness of each separate retinal layer was also observed (p < 0.05). Among all retinal layers, the thickness from the external limiting membrane to the retinal pigment epithelium/Bruch's membrane (ELM to RPE/BrM) showed the greatest reduction. Furthermore, the subregional assessment revealed that the ELM to RPE/BrM thickness decreased more than that of the other layers in every region. CONCLUSION: Treatment with anti-VEGF agents effectively reduced thickness in each retinal layer as well as in the retina as a whole, and the treatment effect may be concentrated at the edema site. These findings could inform the development of more precise and targeted therapies for AMD.
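The subregional analysis motivated above can be illustrated with a minimal sketch. The 3x3 grid and the 2D thickness map are illustrative assumptions; the study's actual sectorization is not specified in the abstract.

```python
import numpy as np

def regional_means(thickness_map, n_rows=3, n_cols=3):
    """Mean thickness per subregion of a 2D layer-thickness map,
    so that localized change is not washed out by the global mean."""
    h, w = thickness_map.shape
    means = np.empty((n_rows, n_cols))
    for i in range(n_rows):
        for j in range(n_cols):
            block = thickness_map[i * h // n_rows:(i + 1) * h // n_rows,
                                  j * w // n_cols:(j + 1) * w // n_cols]
            means[i, j] = block.mean()
    return means
```

A map that is uniform except for one thickened patch shows why this matters: the global mean barely moves, while the affected subregion stands out clearly.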


Subject(s)
Macular Degeneration , Ranibizumab , Humans , Ranibizumab/therapeutic use , Angiogenesis Inhibitors/therapeutic use , Vascular Endothelial Growth Factor A , Retina , Macular Degeneration/drug therapy , Intravitreal Injections , Tomography, Optical Coherence , Retrospective Studies
3.
Int J Comput Assist Radiol Surg ; 18(10): 1769-1781, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37199827

ABSTRACT

PURPOSE: Automatic surgical instrument segmentation is a crucial step in robot-assisted surgery. Encoder-decoder methods often fuse high-level and low-level features directly through skip connections to supplement detail, but fusing irrelevant information also increases misclassification and erroneous segmentation, especially in complex surgical scenes. In addition, uneven illumination can make instruments resemble background tissue, which greatly increases the difficulty of automatic segmentation. This paper proposes a novel network to address these problems. METHODS: We guide the network to select features that are effective for instrument segmentation, and name the resulting network the context-guided bidirectional attention network (CGBA-Net). A guidance connection attention (GCA) module is inserted into the network to adaptively filter out irrelevant low-level features. Moreover, we propose a bidirectional attention (BA) module within the GCA module to capture both local information and local-global dependencies of surgical scenes, providing accurate instrument features. RESULTS: The superiority of CGBA-Net is verified on multi-instrument segmentation using two publicly available datasets from different surgical scenarios: an endoscopic vision dataset (EndoVis 2018) and a cataract surgery dataset. Extensive experiments demonstrate that CGBA-Net outperforms state-of-the-art methods on both datasets, and an ablation study confirms the effectiveness of the proposed modules. CONCLUSION: The proposed CGBA-Net improves the accuracy of multi-instrument segmentation, classifying and segmenting instruments accurately, and the proposed modules effectively provide instrument-related features to the network.
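The idea of guidance-based filtering of skip-connection features can be sketched as a gated skip. This is a generic attention-gating pattern under our own assumptions (per-channel sigmoid gate, a single guidance vector), not the actual GCA/BA design.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def guided_skip(low_feat, high_feat, w):
    """High-level guidance produces a per-channel sigmoid gate that
    suppresses irrelevant low-level channels before skip fusion.
    `w` maps the guidance vector to per-channel gate logits."""
    gate = sigmoid(high_feat @ w)   # (C,) gate derived from guidance
    return low_feat * gate          # filtered low-level features
```

When the guidance assigns a strongly negative logit to a channel, that channel's low-level detail is effectively dropped before fusion, which is the filtering behavior the GCA module is described as providing.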


Subject(s)
Cataract Extraction , Ophthalmology , Robotic Surgical Procedures , Humans , Lighting , Surgical Instruments , Image Processing, Computer-Assisted
4.
Comput Biol Med ; 146: 105628, 2022 07.
Article in English | MEDLINE | ID: mdl-35609472

ABSTRACT

Medical image segmentation is fundamental to computer-aided diagnosis and surgery. Various attention modules have been proposed to improve segmentation results, but many have limitations for medical image segmentation, such as heavy computation and weak applicability across frameworks. To address these problems, we propose a new attention module named FGAM (Feature Guided Attention Module), a simple but pluggable and effective module for medical image segmentation. FGAM exploits the representational power of the encoder and decoder features. Specifically, the shallow decoder layers contain abundant information and are treated as a queryable feature dictionary in FGAM. The module contains a parameter-free activator and can be removed after training of the various encoder-decoder networks. The efficacy of FGAM is demonstrated on various encoder-decoder models using five datasets: four publicly available datasets and one in-house dataset.
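The "queryable feature dictionary" with a parameter-free activator can be sketched as plain scaled dot-product attention with no learned weights: queries retrieve from a dictionary of shallow decoder features. The pairing of queries and dictionary here is our reading of the abstract, not the exact FGAM wiring.

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def feature_guided_attention(queries, dictionary):
    """Parameter-free attention: each query row retrieves a weighted
    combination of dictionary rows (shallow decoder features).
    With no learnable weights, the module can be removed after
    training without leaving orphaned parameters behind."""
    scores = queries @ dictionary.T / np.sqrt(dictionary.shape[1])
    return softmax(scores) @ dictionary
```

Because the operation is purely a function of its inputs, it shapes the encoder-decoder features during training but adds nothing that must ship at inference time, matching the "deletable after training" property claimed for FGAM.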


Subject(s)
Diagnosis, Computer-Assisted , Image Processing, Computer-Assisted , Attention , Image Processing, Computer-Assisted/methods
5.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 2672-2675, 2021 11.
Article in English | MEDLINE | ID: mdl-34891802

ABSTRACT

Surgical instrument segmentation is critical for computer-aided surgery systems. Most deep-learning-based algorithms use either multi-scale or multi-level information alone, which can lead to ambiguous semantic information. In this paper, we propose a new neural network that extracts both multi-scale and multi-level features on a U-Net backbone. Specifically, a cascaded, double-convolution feature pyramid is fed into the U-Net. We then propose a DFP (Dilation Feature-Pyramid) module for the decoder, which extracts multi-scale and multi-level information. The proposed algorithm is evaluated on two publicly available datasets, and extensive experiments show that it is superior to competing methods on all five evaluation metrics.
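The dilation idea behind the DFP module can be shown in one dimension: the same kernel applied at several dilation rates captures multiple scales with no extra parameters. This is a minimal stand-in, not the module's 2D implementation; the kernel and rates are illustrative.

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """Valid 1D correlation with a dilated kernel: a larger dilation
    widens the receptive field without adding parameters."""
    k = len(kernel)
    span = (k - 1) * dilation
    out = np.zeros(len(x) - span)
    for i in range(len(out)):
        out[i] = sum(kernel[j] * x[i + j * dilation] for j in range(k))
    return out

def dilation_pyramid(x, kernel, rates=(1, 2, 4)):
    """Stack responses at several dilation rates (cropped to the
    shortest) as a stand-in for multi-scale feature extraction."""
    outs = [dilated_conv1d(x, kernel, d) for d in rates]
    n = min(len(o) for o in outs)
    return np.stack([o[:n] for o in outs])
```

On a linear ramp, a difference kernel at dilation d responds with magnitude d, so each pyramid level sees the signal at a distinct scale.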


Subject(s)
Image Processing, Computer-Assisted , Neural Networks, Computer , Algorithms , Semantics , Surgical Instruments