Results 1 - 4 of 4
1.
IEEE Trans Pattern Anal Mach Intell ; 45(12): 15912-15929, 2023 12.
Article in English | MEDLINE | ID: mdl-37494162

ABSTRACT

Contrastive learning, which captures general representations from unlabeled images to initialize medical analysis models, has proven effective in alleviating the high demand for expensive annotations. Current methods focus mainly on instance-wise comparisons to learn globally discriminative features, but overlook the local details needed to distinguish tiny anatomical structures, lesions, and tissues. To address this challenge, this paper proposes a general unsupervised representation learning framework, named local discrimination (LD), which learns locally discriminative features for medical images by closely embedding semantically similar pixels and identifying regions of similar structure across different images. Specifically, the model is equipped with an embedding module for pixel-wise embedding and a clustering module for generating segmentations. These two modules are unified by optimizing a novel region discrimination loss function in a mutually beneficial mechanism, which enables the model to reflect structural information and to measure pixel-wise and region-wise similarity. Furthermore, based on LD, we propose a center-sensitive one-shot landmark localization algorithm and a shape-guided cross-modality segmentation model to foster the generalizability of our approach. When transferred to downstream tasks, the learned representations generalize better, outperforming those of 18 state-of-the-art (SOTA) methods and winning 9 of 12 downstream tasks. On the challenging lesion segmentation tasks in particular, the proposed method achieves significantly better performance.
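The abstract's region discrimination idea — softly assigning each pixel embedding to a region prototype and pulling pixels toward their assigned prototype — can be sketched in a toy numpy form. All names and the exact loss below are our own illustration under that reading of the abstract, not the authors' released code:

```python
import numpy as np

def region_discrimination_loss(embeddings, prototypes, tau=0.1):
    """Toy sketch of a region-discrimination objective.

    embeddings: (N, D) L2-normalized pixel embeddings
    prototypes: (K, D) L2-normalized region/cluster centers
    Each pixel is softly assigned to a prototype via a temperature-scaled
    softmax over cosine similarities; the loss rewards pixels that attach
    confidently to exactly one region prototype.
    """
    sims = embeddings @ prototypes.T / tau            # (N, K) scaled cosine sims
    sims -= sims.max(axis=1, keepdims=True)           # numerical stability
    probs = np.exp(sims) / np.exp(sims).sum(axis=1, keepdims=True)
    assign = probs.argmax(axis=1)                     # hard region assignment
    # negative log-likelihood of each pixel's assigned region
    return -np.log(probs[np.arange(len(assign)), assign] + 1e-12).mean()
```

When every pixel embedding coincides with one of the prototypes, the loss is near zero, matching the intuition that well-clustered pixel embeddings define clean regions.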


Subject(s)
Algorithms , Unsupervised Machine Learning , Cluster Analysis , Image Processing, Computer-Assisted
2.
Med Image Anal ; 67: 101876, 2021 01.
Article in English | MEDLINE | ID: mdl-33197863

ABSTRACT

Fully convolutional networks (FCNs) trained with abundant labeled data have proven to be a powerful and efficient solution for medical image segmentation. However, FCNs often fail to achieve satisfactory results due to the scarcity of labeled data and the significant variability of appearance in medical imaging. To address this issue, this paper proposes a conjugate fully convolutional network (CFCN), in which pairwise samples are input to capture a rich context representation and guide each other through a fusion module. To avoid the overfitting caused by intra-class heterogeneity and boundary ambiguity when training samples are few, we explicitly exploit prior information from the label space, termed proxy supervision. We further extend the CFCN to a compact conjugate fully convolutional network (C2FCN), which has a single head that fits the proxy supervision, without the two additional decoder branches that CFCN requires to fit the ground truth of the input pairs. At test time, the segmentation probability is inferred from the logical relation implied by the proxy supervision. Quantitative evaluation on the Liver Tumor Segmentation (LiTS) and Combined (CT-MR) Healthy Abdominal Organ Segmentation (CHAOS) datasets shows that the proposed framework achieves significant performance improvements on both binary and multi-category segmentation, especially with limited training data. The source code is available at https://github.com/renzhenwang/pairwise_segmentation.
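The core of proxy supervision is that one target defined in the label space can encode both masks of an input pair, and the pair can be recovered by the inverse logical relation at test time. One plausible encoding — our illustration, not necessarily the paper's exact scheme — assigns each pixel one of four proxy classes from the two binary masks:

```python
import numpy as np

def make_proxy_label(mask_a, mask_b):
    """Encode a pair of binary masks into a single 4-class proxy label:
    class = 2*a + b, so each proxy class fixes the (a, b) combination.
    An illustrative stand-in for the paper's label-space prior."""
    return 2 * mask_a.astype(int) + mask_b.astype(int)

def decode_pair(proxy):
    """Recover both masks from the proxy via the inverse logical relation,
    as a single-head network's prediction would be decoded at test time."""
    return proxy // 2, proxy % 2
```

A single head predicting the 4-class proxy thus carries the same information as two mask decoders, which is the efficiency argument behind C2FCN.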


Subject(s)
Image Processing, Computer-Assisted , Liver Neoplasms , Diagnostic Imaging , Humans , Liver Neoplasms/diagnostic imaging , Probability
3.
IEEE Trans Med Imaging ; 39(12): 3843-3854, 2020 12.
Article in English | MEDLINE | ID: mdl-32746128

ABSTRACT

Automatic rib fracture recognition from chest X-ray images is clinically important yet challenging, owing to the weak saliency of fractures. Weakly Supervised Learning (WSL) models recognize fractures by learning from large-scale image-level labels. In WSL, Class Activation Maps (CAMs) are expected to provide spatial interpretations of classification decisions. However, the high-responding regions of CAMs, namely the Supporting Regions, may erroneously lock onto regions irrelevant to fractures, raising concerns about the reliability of WSL models in clinical applications. Existing Mixed Supervised Learning (MSL) models utilize object-level labels to guide the fitting of WSL-derived CAMs; however, the large quantity of precisely delineated labels that MSL presupposes is rarely available for rib fracture tasks. To address these problems, this paper proposes a novel MSL framework. First, by embedding adversarial classification learning into the WSL framework, the proposed Biased Correlation Decoupling and Instance Separation Enhancing strategies indirectly guide CAMs toward true fractures. This CAM guidance is insensitive to variations in the shape and size of object descriptions, enabling robust learning from bounding boxes. Second, to further reduce annotation cost in MSL, a CAM-based Active Learning strategy identifies and requests annotation for samples whose Supporting Regions cannot be confidently localized; the demand for object-level labels is thus reduced without compromising performance. On a chest X-ray rib fracture dataset of 10,966 images, experimental results show that our method produces rational Supporting Regions that interpret its classification decisions and outperforms competing methods at the expense of annotating 20% of the positive samples with bounding boxes.
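For readers unfamiliar with CAMs, the standard formulation (Zhou et al., 2016, which this abstract builds on) is a weighted sum of the final convolutional feature maps, weighted by the target class's classifier weights. A minimal numpy sketch of that general recipe, with our own variable names:

```python
import numpy as np

def class_activation_map(features, fc_weights, class_idx):
    """Compute a Class Activation Map for one image.

    features:   (C, H, W) final conv-layer feature maps
    fc_weights: (num_classes, C) weights of the classifier that follows
                global average pooling
    Returns an (H, W) map normalized to [0, 1]; its high-responding
    pixels form the 'Supporting Region' discussed in the abstract.
    """
    cam = np.tensordot(fc_weights[class_idx], features, axes=1)  # (H, W)
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()
    return cam
```

The paper's contribution is not this map itself but constraining where its Supporting Regions land, using bounding boxes and active learning.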


Subject(s)
Rib Fractures , Humans , Radiography , Reproducibility of Results , Rib Fractures/diagnostic imaging
4.
IEEE Trans Med Imaging ; 38(6): 1501-1512, 2019 06.
Article in English | MEDLINE | ID: mdl-30530359

ABSTRACT

Early diagnosis and continuous monitoring of patients with eye diseases have been major concerns in computer-aided detection. Detecting one or several specific types of retinal lesions has seen significant breakthroughs in computer-aided screening over the past few decades. However, owing to the variety of retinal lesions and the complexity of normal anatomical structures, automatically detecting lesions of unknown and diverse types in a retina remains a challenging task. This paper proposes a weakly supervised method for this task that requires only a set of normal and abnormal retinal images, without annotations of lesion locations or types. Specifically, a fundus image is understood as a superposition of background, blood vessels, and background noise (with lesions included in the noise for abnormal images). The background is formulated as a low-rank structure after a series of simple preprocessing steps, including spatial alignment, color normalization, and blood vessel removal. The background noise is regarded as a stochastic variable, modeled as a Gaussian for normal images and as a mixture of Gaussians for abnormal images. The proposed method encodes both the background knowledge of fundus images and the background noise in a single model, and jointly optimizes the model over normal and abnormal images, thereby fully depicting the low-rank subspace of the background and distinguishing lesions from background noise in abnormal fundus images. Experimental results demonstrate that the proposed method achieves high accuracy and outperforms previous related methods.
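The two modeling ingredients above — a low-rank shared background and a Gaussian noise model whose outliers mark lesion candidates — can be sketched with a plain SVD and a per-pixel negative log-likelihood. This is an illustrative reading of the decomposition, not the paper's exact solver:

```python
import numpy as np

def low_rank_background(images, rank=3):
    """Approximate the shared background of aligned, preprocessed fundus
    images: stack each flattened image as a column and keep only the top
    singular components (a truncated-SVD sketch of the low-rank term)."""
    U, s, Vt = np.linalg.svd(images, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

def lesion_score(residual, mu, sigma):
    """Per-pixel negative log-likelihood (up to a constant) of the
    background-subtracted residual under the Gaussian fitted on normal
    images; high scores flag pixels the normal model cannot explain,
    i.e. lesion candidates."""
    z = (residual - mu) / sigma
    return 0.5 * z ** 2
```

In the paper, the low-rank background and the Gaussian / mixture-of-Gaussians noise are fitted jointly rather than in the two separate steps shown here; the sketch only conveys why lesions surface as high-scoring residual pixels.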


Subject(s)
Diagnostic Techniques, Ophthalmological , Image Interpretation, Computer-Assisted/methods , Retina/diagnostic imaging , Retinal Diseases/diagnostic imaging , Supervised Machine Learning , Algorithms , Fundus Oculi , Humans , Normal Distribution