Results 1 - 7 of 7
1.
Article in English | MEDLINE | ID: mdl-38294925

ABSTRACT

Federated learning enables multiple hospitals to cooperatively learn a shared model without privacy disclosure. Existing methods commonly assume that the data from different hospitals share the same modalities. However, this assumption is difficult to fully satisfy in practice, since imaging guidelines may differ between hospitals, leaving only a limited number of individuals with the same set of modalities. To this end, we formulate this practical yet challenging cross-modal vertical federated learning task, in which data from multiple hospitals have different modalities and only a small amount of multi-modality data is collected from the same individuals. To tackle this situation, we develop a novel framework, namely Federated Consistent Regularization constrained Feature Disentanglement (Fed-CRFD), for boosting MRI reconstruction by effectively exploiting the overlapping samples (i.e., the same patients with different modalities at different hospitals) and addressing the domain shift caused by different modalities. In particular, Fed-CRFD employs an intra-client feature disentanglement scheme to decouple data into modality-invariant and modality-specific features, where the modality-invariant features are leveraged to mitigate the domain shift. In addition, a cross-client latent representation consistency constraint is imposed on the overlapping samples to further align the modality-invariant features extracted from different modalities. Hence, our method fully exploits the multi-source data from hospitals while alleviating the domain shift problem. Extensive experiments on two typical MRI datasets demonstrate that our network clearly outperforms state-of-the-art MRI reconstruction methods. The source code is available at https://github.com/IAMJackYan/FedCRFD.
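As a rough illustration of the cross-client consistency idea described above, the following PyTorch sketch pulls together the modality-invariant features of the same overlapping patient produced at two different hospitals; the function and tensor names are hypothetical and not taken from the released code.

import torch
import torch.nn.functional as F

def cross_client_consistency(inv_feat_a, inv_feat_b):
    # Hypothetical consistency term: encourage the modality-invariant
    # features of the same patient, encoded at two different clients,
    # to agree. Both inputs are (batch, dim) tensors.
    return F.mse_loss(inv_feat_a, inv_feat_b)

# Toy usage with five overlapping patients and 128-dim features.
feat_client_a = torch.randn(5, 128)
feat_client_b = torch.randn(5, 128)
print(cross_client_consistency(feat_client_a, feat_client_b))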

2.
Med Image Anal ; 90: 102973, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37757643

ABSTRACT

In medical image analysis, accurate lesion segmentation benefits subsequent clinical diagnosis and treatment planning. Various deep-learning-based methods have been proposed for the segmentation task. Although they achieve promising performance, fully-supervised approaches require pixel-level annotations for model training, which are tedious and time-consuming for experienced radiologists to collect. In this paper, we propose a weakly semi-supervised segmentation framework, called Point Segmentation Transformer (Point SEGTR). The framework utilizes a small amount of fully-supervised data with pixel-level segmentation masks and a large amount of weakly-supervised data with point-level annotations (i.e., a single annotated point inside each object) for network training, which significantly reduces the demand for pixel-level annotations. To fully exploit the pixel-level and point-level annotations, we propose two regularization terms, i.e., multi-point consistency and symmetric consistency, to boost the quality of pseudo labels, which are then used to train a student model for inference. Extensive experiments are conducted on three endoscopy datasets with different lesion structures and body sites (e.g., colorectal and nasopharynx). Comprehensive experimental results substantiate the effectiveness and generality of the proposed method, as well as its potential to loosen the requirement for pixel-level annotations, which is valuable for clinical applications.
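As a loose illustration of the symmetric-consistency regularizer mentioned above (not the authors' implementation), the PyTorch sketch below penalizes disagreement between the prediction for an image and the un-flipped prediction for its horizontally flipped copy; the model interface is assumed.

import torch
import torch.nn.functional as F

def symmetric_consistency(model, image):
    # Hypothetical regularizer: predictions should be equivariant to a
    # horizontal flip. `model` maps (B, C, H, W) images to mask logits
    # of the same spatial size.
    pred = model(image)
    pred_flip = model(torch.flip(image, dims=[-1]))
    return F.mse_loss(torch.flip(pred_flip, dims=[-1]), pred)

# Toy usage with a 1x1 convolution standing in for the segmentation model.
toy_model = torch.nn.Conv2d(3, 1, kernel_size=1)
print(symmetric_consistency(toy_model, torch.randn(2, 3, 64, 64)))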

3.
IEEE Trans Med Imaging ; 2022 Jul 19.
Article in English | MEDLINE | ID: mdl-35853072

ABSTRACT

Unsupervised domain adaptation (UDA), which aims to enhance the segmentation performance of deep models on unlabeled data, has recently drawn much attention. In this paper, we propose a novel UDA method (namely DLaST) for medical image segmentation via disentanglement learning and self-training. Disentanglement learning factorizes an image into domain-invariant anatomy and domain-specific modality components. To make the most of disentanglement learning, we propose a novel shape constraint to boost the adaptation performance. The self-training strategy further adaptively improves the segmentation performance of the model on the target domain through adversarial learning and pseudo-labeling, which implicitly facilitates feature alignment in the anatomy space. Experimental results demonstrate that the proposed method outperforms state-of-the-art UDA methods for medical image segmentation on three public datasets, i.e., a cardiac dataset, an abdominal dataset and a brain dataset. The code will be released soon.
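The pseudo-labeling step of a self-training strategy of this kind can be sketched as follows; this is a generic confidence-thresholding variant, not necessarily the exact rule used in DLaST, and the names and threshold are assumptions.

import torch

def make_pseudo_labels(target_logits, threshold=0.9):
    # Hypothetical self-training step: keep only target-domain pixels whose
    # softmax confidence exceeds `threshold`; low-confidence pixels get the
    # ignore label -1 and are excluded from the segmentation loss.
    probs = torch.softmax(target_logits, dim=1)      # (B, classes, H, W)
    conf, labels = probs.max(dim=1)                  # per-pixel confidence / class
    labels[conf < threshold] = -1
    return labels

# Toy usage on random logits for a 4-class problem.
print(make_pseudo_labels(torch.randn(1, 4, 8, 8)).shape)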

4.
IEEE Trans Med Imaging ; 41(4): 869-880, 2022 Apr.
Article in English | MEDLINE | ID: mdl-34752391

ABSTRACT

Computed tomography (CT) images are often impaired by artifacts caused by metallic implants within patients, which adversely affect subsequent clinical diagnosis and treatment. Although existing deep-learning-based approaches have achieved promising success in metal artifact reduction (MAR) for CT images, most of them treat the task as a general image restoration problem and rely on off-the-shelf network modules for image quality enhancement. Such frameworks therefore lack sufficient model interpretability for the specific task. Moreover, existing MAR techniques largely neglect the intrinsic prior knowledge underlying metal-corrupted CT images, which could benefit MAR performance. In this paper, we propose a deep interpretable convolutional dictionary network (DICDNet) for the MAR task. In particular, we first observe that metal artifacts always present non-local streaking and star-shaped patterns in CT images. Based on this observation, a convolutional dictionary model is deployed to encode the metal artifacts. To solve the model, we propose an optimization algorithm based on the proximal gradient technique. With only simple operators, the iterative steps of the algorithm can be unfolded into corresponding network modules with specific physical meanings. Comprehensive experiments on synthesized and clinical datasets substantiate the effectiveness of the proposed DICDNet as well as its superior interpretability compared to current state-of-the-art MAR methods. Code is available at https://github.com/hongwang01/DICDNet.


Subject(s)
Artifacts; Image Processing, Computer-Assisted; Image Processing, Computer-Assisted/methods; Algorithms; Humans; Metals; Tomography, X-Ray Computed/methods
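A minimal sketch of the two ingredients named in the abstract above, a convolutional-dictionary synthesis of the artifact layer and the soft-thresholding (proximal) operator used in ISTA-style iterations, is given below; shapes and names are illustrative and are not the released DICDNet code.

import torch
import torch.nn.functional as F

def soft_threshold(x, lam):
    # Proximal operator of the L1 norm, the building block of proximal
    # gradient (ISTA-style) iterations.
    return torch.sign(x) * torch.clamp(torch.abs(x) - lam, min=0.0)

def synthesize_artifact(filters, codes):
    # Hypothetical convolutional-dictionary synthesis: the artifact layer is
    # the sum over K small filters convolved with their sparse feature maps.
    # filters: (K, 1, h, w); codes: (B, K, H, W); output: (B, 1, H, W).
    return F.conv2d(codes, filters.transpose(0, 1), padding=filters.shape[-1] // 2)

# Toy usage: 8 filters of size 3x3 applied to random sparse codes.
filters = torch.randn(8, 1, 3, 3)
codes = soft_threshold(torch.randn(2, 8, 64, 64), lam=0.5)
print(synthesize_artifact(filters, codes).shape)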
5.
IEEE Trans Med Imaging ; 40(12): 3641-3651, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34197318

ABSTRACT

As labeled anomalous medical images are usually difficult to acquire, especially for rare diseases, deep-learning-based methods, which rely heavily on large amounts of labeled data, cannot yield satisfactory performance. Compared to anomalous data, normal images that do not require lesion annotation are much easier to collect. In this paper, we propose an anomaly detection framework, namely SALAD, extracting Self-supervised and trAnsLation-consistent features for Anomaly Detection. The proposed SALAD is a reconstruction-based method that learns the manifold of normal data through an encode-and-reconstruct translation between image and latent spaces. In particular, two constraints (i.e., a structure similarity loss and a center constraint loss) are proposed to regulate the cross-space (i.e., image and feature) translation, which enforce the model to learn translation-consistent and representative features from the normal data. Furthermore, a self-supervised learning module is integrated into our framework to further boost the anomaly detection accuracy by deeply exploiting useful information in the raw normal data. An anomaly score, as a measure to separate anomalous data from healthy data, is constructed based on the learned self-supervised-and-translation-consistent features. Extensive experiments are conducted on optical coherence tomography (OCT) and chest X-ray datasets. The experimental results demonstrate the effectiveness of our approach.


Subject(s)
Tomography, Optical Coherence
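As a rough sketch of how a reconstruction-based anomaly score of this kind can be assembled (illustrative only; the weighting and exact terms in SALAD may differ), consider:

import torch

def anomaly_score(x, x_rec, z, z_rec, alpha=0.5):
    # Hypothetical score combining image-space reconstruction error with
    # latent-space (translation) consistency; larger values indicate samples
    # farther from the learned manifold of normal data.
    img_err = torch.mean((x - x_rec) ** 2, dim=(1, 2, 3))  # per-sample image error
    lat_err = torch.mean((z - z_rec) ** 2, dim=1)          # per-sample latent error
    return alpha * img_err + (1.0 - alpha) * lat_err

# Toy usage on a batch of two images with 64-dim latent codes.
x, x_rec = torch.randn(2, 1, 32, 32), torch.randn(2, 1, 32, 32)
z, z_rec = torch.randn(2, 64), torch.randn(2, 64)
print(anomaly_score(x, x_rec, z, z_rec))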
6.
IEEE Trans Neural Netw Learn Syst ; 31(5): 1461-1474, 2020 May.
Article in English | MEDLINE | ID: mdl-31295122

ABSTRACT

This paper proposes a novel end-to-end learning model, called the skip-connected covariance (SCCov) network, for remote sensing scene classification (RSSC). The main contribution is to embed two novel modules into the traditional convolutional neural network (CNN) model, i.e., skip connections and covariance pooling. The advantages of SCCov are twofold. First, by means of skip connections, the multi-resolution feature maps produced by the CNN are combined, which helps address the large-scale variance present in RSSC data sets. Second, by using covariance pooling, we can fully exploit the second-order information contained in these multi-resolution feature maps, allowing the CNN to learn more representative features for RSSC problems. Experimental results on three large-scale benchmark data sets demonstrate that the proposed SCCov network achieves very competitive or superior classification performance compared with current state-of-the-art RSSC techniques, while using far fewer parameters. Specifically, SCCov needs only 10% of the parameters used by its counterparts.
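Covariance pooling itself is straightforward to sketch: for a (B, C, H, W) feature map, it replaces first-order global pooling with the C x C channel covariance computed over spatial positions. The snippet below is a generic illustration, not the SCCov implementation.

import torch

def covariance_pooling(feat):
    # Second-order pooling: return the (B, C, C) covariance of the C channels
    # over the H*W spatial positions of a (B, C, H, W) feature map.
    b, c, h, w = feat.shape
    x = feat.reshape(b, c, h * w)
    x = x - x.mean(dim=2, keepdim=True)            # center each channel
    return x @ x.transpose(1, 2) / (h * w - 1)

# Toy usage: pool a random feature map with 16 channels.
print(covariance_pooling(torch.randn(2, 16, 7, 7)).shape)   # torch.Size([2, 16, 16])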

7.
J Opt Soc Am A Opt Image Sci Vis ; 34(2): 252-258, 2017 Feb 01.
Article in English | MEDLINE | ID: mdl-28157851

ABSTRACT

In this paper, an effective CANDECOMP/PARAFAC tensor-based compression (CPTBC) approach is proposed for on-ground hyperspectral images (HSIs). By treating the observed HSI cube as a whole third-order tensor, the proposed CPTBC method utilizes the CANDECOMP/PARAFAC tensor decomposition to decompose the original HSI data into the sum of R rank-1 tensors, which simultaneously exploits both the spatial and spectral information of HSIs. Compared with the original HSI data, the R rank-1 tensors have far fewer non-zero entries, and these entries are sparse and follow a regular distribution. Therefore, the HSI can be efficiently compressed into R rank-1 tensors with the proposed CPTBC method. Our experimental results on three real HSIs demonstrate the superiority of the proposed CPTBC method over several well-known compression approaches; the average PSNR improvements of the proposed method over the six compared methods (i.e., MPEG4, band-wise JPEG2000, TD, 3D-SPECK, 3D-TCE, and 3D-TARP) exceed 13, 10, 6, 4, 3, and 3 dB, respectively.
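The core operation, approximating the HSI cube by a sum of R rank-1 tensors via CANDECOMP/PARAFAC, can be reproduced in a few lines with the tensorly library (assuming its parafac / cp_to_tensor API); the cube below is random toy data, not a real HSI.

import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

# Toy stand-in for a hyperspectral cube: rows x cols x spectral bands.
hsi = np.random.rand(64, 64, 32)

# Decompose into R rank-1 tensors; storing only the three factor matrices
# ((64 + 64 + 32) * R values) acts as the compressed representation.
R = 10
weights, factors = parafac(tl.tensor(hsi), rank=R)

# Reconstruct and report the relative approximation error.
hsi_rec = tl.cp_to_tensor((weights, factors))
print(np.linalg.norm(hsi - hsi_rec) / np.linalg.norm(hsi))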
