Results 1 - 11 of 11
1.
IEEE Trans Med Imaging ; PP, 2024 Jun 26.
Article in English | MEDLINE | ID: mdl-38923479

ABSTRACT

Intrathoracic airway segmentation in computed tomography is a prerequisite for the analysis of various respiratory diseases such as chronic obstructive pulmonary disease, asthma, and lung cancer. Due to the low imaging contrast and noise exacerbated at peripheral branches, as well as the topological complexity and intra-class imbalance of the airway tree, it remains challenging for deep learning-based methods to segment the complete airway tree (i.e., to extract the deeper branches). Unlike organs with simpler shapes or topology, the airway's complex tree structure makes generating the "ground truth" label extremely burdensome (up to 7 hours of manual or 3 hours of semi-automatic annotation per case). Most existing airway datasets are incompletely labeled, which in turn limits the completeness of computer-segmented airways. In this paper, we propose a new anatomy-aware multi-class airway segmentation method enhanced by topology-guided iterative self-learning. Based on the natural airway anatomy, we formulate a simple yet highly effective anatomy-aware multi-class segmentation task that intuitively handles the severe intra-class imbalance of the airway. To address the incomplete-labeling issue, we propose a tailored iterative self-learning scheme that segments toward the complete airway tree. To generate pseudo-labels with higher sensitivity (while retaining similar specificity), we introduce a novel breakage attention map and design a topology-guided pseudo-label refinement method that iteratively reconnects the broken branches commonly present in initial pseudo-labels. Extensive experiments were conducted on four datasets, including two public challenges. The proposed method achieves top performance in both the EXACT'09 challenge (by average score) and the ATM'22 challenge (by weighted average score). On a public BAS dataset and a private lung cancer dataset, our method significantly improves on previous leading approaches, extracting at least 6.1% (absolute) more detected tree length and 5.2% more tree branches while maintaining comparable precision.
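The topology-guided reconnection idea can be illustrated with a minimal 1-D sketch: treat a sampled branch centerline as a binary sequence and bridge short breakages that are bounded by foreground on both sides. This is an illustrative analogue only; the paper's actual method operates on 3-D airway trees with a learned breakage attention map, and `bridge_small_gaps` and `max_gap` are hypothetical names and parameters.

```python
import numpy as np

def bridge_small_gaps(centerline: np.ndarray, max_gap: int = 2) -> np.ndarray:
    """Fill runs of 0s no longer than `max_gap` that lie between 1s.

    `centerline` is a 1-D binary array standing in for a sampled airway
    branch centerline; short zero-runs model breakages in a pseudo-label.
    """
    out = centerline.astype(np.int64).copy()
    n = len(out)
    i = 0
    while i < n:
        if out[i] == 0:
            j = i
            while j < n and out[j] == 0:
                j += 1
            # bridge only interior gaps bounded by foreground on both sides
            if i > 0 and j < n and (j - i) <= max_gap:
                out[i:j] = 1
            i = j
        else:
            i += 1
    return out

broken = np.array([1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0])
repaired = bridge_small_gaps(broken, max_gap=2)
# the length-2 gap at indices 2-3 is bridged; the length-3 gap and the
# trailing (unbounded) gap are left untouched
```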

2.
IEEE Trans Med Imaging ; 43(1): 96-107, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37399157

ABSTRACT

Deep learning has been widely used in medical image segmentation and beyond. However, the performance of existing medical image segmentation models is limited by the difficulty of obtaining sufficient high-quality labeled data, owing to the prohibitive cost of annotation. To alleviate this limitation, we propose a new text-augmented medical image segmentation model, LViT (Language meets Vision Transformer). In LViT, medical text annotation is incorporated to compensate for quality deficiencies in the image data. In addition, the text information guides the generation of pseudo labels of improved quality in semi-supervised learning. We also propose an Exponential Pseudo-label Iteration mechanism (EPI) to help the Pixel-Level Attention Module (PLAM) preserve local image features in the semi-supervised LViT setting. In our model, an LV (Language-Vision) loss is designed to supervise the training on unlabeled images using text information directly. For evaluation, we construct three multimodal medical segmentation datasets (image + text) containing X-ray and CT images. Experimental results show that the proposed LViT achieves superior segmentation performance in both fully supervised and semi-supervised settings. The code and datasets are available at https://github.com/HUANGLIZI/LViT.
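The "exponential" in EPI suggests an exponentially weighted update of pseudo-label probabilities across training iterations. A minimal sketch of such an update, assuming a simple exponential-moving-average form with a hypothetical smoothing factor `beta` (the paper's exact formulation may differ):

```python
import numpy as np

def epi_update(prev_pseudo: np.ndarray, new_pred: np.ndarray, beta: float = 0.7) -> np.ndarray:
    """Exponentially smooth pseudo-label probabilities across iterations.

    Keeps a fraction `beta` of the previous pseudo label and blends in the
    current model prediction, damping iteration-to-iteration noise.
    """
    return beta * prev_pseudo + (1.0 - beta) * new_pred

prev = np.array([0.9, 0.1, 0.5])   # pseudo-label probabilities from the last round
pred = np.array([0.6, 0.3, 0.7])   # current model prediction
updated = epi_update(prev, pred, beta=0.5)
# → array([0.75, 0.2 , 0.6 ])
```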


Subjects
Language , Supervised Machine Learning , Computer-Assisted Image Processing
3.
Med Image Anal ; 90: 102957, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37716199

ABSTRACT

Open international challenges have become the de facto standard for assessing computer vision and image analysis algorithms. In recent years, new methods have extended the reach of pulmonary airway segmentation closer to the limit of image resolution. Yet since the EXACT'09 pulmonary airway segmentation challenge, little effort has been directed toward quantitative comparison of the newly emerged algorithms, despite the maturity of deep learning-based approaches and extensive clinical interest in resolving the finer details of distal airways for early intervention in pulmonary disease. Publicly annotated datasets remain extremely limited, hindering the development of data-driven methods and the detailed performance evaluation of new algorithms. To provide a benchmark for the medical imaging community, we organized the Multi-site, Multi-domain Airway Tree Modeling (ATM'22) challenge, held as an official challenge event during the MICCAI 2022 conference. ATM'22 provides large-scale CT scans with detailed pulmonary airway annotation: 500 CT scans (300 for training, 50 for validation, and 150 for testing). The dataset was collected from different sites and includes a portion of noisy COVID-19 CTs with ground-glass opacity and consolidation. Twenty-three teams participated in the entire phase of the challenge, and the algorithms of the top ten teams are reviewed in this paper. Both quantitative and qualitative results revealed that deep learning models embedding topological continuity enhancement achieved superior performance in general. The ATM'22 challenge maintains an open-call design: the training data and the gold-standard evaluation are available upon successful registration via its homepage (https://atm22.grand-challenge.org/).
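Airway challenges of this kind are typically scored with topology-aware metrics such as the detected tree-length rate: the fraction of the ground-truth centerline that the prediction covers. A simplified sketch of that metric (ignoring physical voxel spacing, which an official evaluation would normally weight by; function and variable names are illustrative):

```python
import numpy as np

def tree_length_detected(gt_skeleton: np.ndarray, pred_mask: np.ndarray) -> float:
    """Fraction of ground-truth centerline voxels covered by the prediction."""
    skel = gt_skeleton.astype(bool)
    return float(np.logical_and(skel, pred_mask.astype(bool)).sum() / skel.sum())

# toy 2-D example: a 4-voxel centerline, of which the prediction covers 3
gt = np.zeros((4, 4), dtype=int)
gt[1, :] = 1
pred = np.zeros((4, 4), dtype=int)
pred[1, :3] = 1
rate = tree_length_detected(gt, pred)   # → 0.75
```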


Subjects
Lung Diseases , Trees , Humans , X-Ray Computed Tomography/methods , Computer-Assisted Image Processing/methods , Algorithms , Lung/diagnostic imaging
4.
NPJ Digit Med ; 6(1): 116, 2023 Jun 21.
Article in English | MEDLINE | ID: mdl-37344684

ABSTRACT

Cerebrovascular disease is a leading cause of death globally. Prevention and early intervention are known to be the most effective forms of its management. Non-invasive imaging methods hold great promise for early stratification, but at present they lack the sensitivity needed for personalized prognosis. Resting-state functional magnetic resonance imaging (rs-fMRI), a powerful tool previously used for mapping neural activity, is available in most hospitals. Here we show that rs-fMRI can be used to map cerebral hemodynamic function and delineate impairment. By exploiting time variations in the breathing pattern during rs-fMRI, deep learning enables reproducible mapping of cerebrovascular reactivity (CVR) and bolus arrival time (BAT) of the human brain, using resting-state CO2 fluctuations as a natural "contrast medium". The deep-learning network is trained with CVR and BAT maps obtained with a reference CO2-inhalation MRI method, using data from young and older healthy subjects and from patients with Moyamoya disease and brain tumors. We demonstrate the performance of deep-learning cerebrovascular mapping in detecting vascular abnormalities, evaluating revascularization effects, and characterizing vascular alterations in normal aging. In addition, cerebrovascular maps obtained with the proposed method exhibit excellent reproducibility in both healthy volunteers and stroke patients. Deep-learning resting-state vascular imaging thus has the potential to become a useful tool in clinical cerebrovascular imaging.
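A conventional (non-deep-learning) way to estimate a BAT-like delay is to find the lag at which a voxel's time series best correlates with the CO2 regressor. The sketch below illustrates that lagged-correlation idea only; it is not the paper's trained-network approach, and the function name and synthetic signals are illustrative.

```python
import numpy as np

def bolus_arrival_lag(co2: np.ndarray, voxel_ts: np.ndarray, max_lag: int) -> int:
    """Lag (in samples) at which the voxel time series best matches the CO2 regressor."""
    n = len(voxel_ts) - max_lag          # common window length for all candidate lags
    x = co2[:n].astype(float)
    x = (x - x.mean()) / x.std()
    best_lag, best_r = 0, -np.inf
    for lag in range(max_lag + 1):
        y = voxel_ts[lag:lag + n].astype(float)
        y = (y - y.mean()) / y.std()
        r = float(np.mean(x * y))        # Pearson correlation at this lag
        if r > best_r:
            best_r, best_lag = r, lag
    return best_lag

# synthetic example: a smooth "CO2 bump" arriving 3 samples late at the voxel
t = np.arange(100)
co2 = np.exp(-((t - 40.0) ** 2) / 50.0)
voxel = np.concatenate([np.zeros(3), co2])[:100]
lag = bolus_arrival_lag(co2, voxel, max_lag=6)   # → 3
```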

5.
IEEE Trans Med Imaging ; 41(8): 2033-2047, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35192462

ABSTRACT

Fast and accurate MRI reconstruction from undersampled data is crucial in clinical practice. Deep learning-based reconstruction methods have shown promising advances in recent years; however, recovering fine details from undersampled data remains challenging. In this paper, we introduce a novel deep learning-based method, Pyramid Convolutional RNN (PC-RNN), to reconstruct images at multiple scales. Formulating MRI reconstruction as an inverse problem, we design the PC-RNN model with three convolutional RNN (ConvRNN) modules that iteratively learn features at multiple scales. Each ConvRNN module reconstructs images at a different scale, and the reconstructed images are combined by a final CNN module in a pyramid fashion. The multi-scale ConvRNN modules thus learn a coarse-to-fine image reconstruction. Unlike other common reconstruction methods for parallel imaging, PC-RNN does not employ coil sensitivity maps for multi-coil data; it directly models the multiple coils as multi-channel inputs. A coil compression technique is applied to standardize data with varying numbers of coils, leading to more efficient training. We evaluate our model on the fastMRI knee and brain datasets, and the results show that the proposed model outperforms other methods and recovers more details. The proposed method was one of the winning solutions in the 2019 fastMRI competition.
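Coil compression is commonly implemented by projecting the physical coils onto the dominant singular directions of the multi-coil data (SVD-based compression). The abstract does not specify which algorithm is used, so the following is a generic sketch with illustrative names:

```python
import numpy as np

def compress_coils(kspace: np.ndarray, n_virtual: int) -> np.ndarray:
    """Project multi-coil k-space (coils x samples) onto its top singular directions.

    Reduces any number of physical coils to a fixed number of virtual coils,
    which standardizes the channel dimension across acquisitions.
    """
    u, s, vh = np.linalg.svd(kspace, full_matrices=False)
    return u[:, :n_virtual].conj().T @ kspace    # shape: (n_virtual, samples)

rng = np.random.default_rng(0)
coils = rng.standard_normal((8, 256)) + 1j * rng.standard_normal((8, 256))
virtual = compress_coils(coils, n_virtual=4)     # 8 physical coils -> 4 virtual coils
# virtual.shape == (4, 256)
```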


Subjects
Computer-Assisted Image Processing , Magnetic Resonance Imaging , Brain/diagnostic imaging , Computer-Assisted Image Processing/methods
6.
Article in English | MEDLINE | ID: mdl-34661201

ABSTRACT

Reconstructing magnetic resonance (MR) images from undersampled data is a challenging problem due to the various artifacts introduced by the undersampling operation. Recent deep learning-based methods for MR image reconstruction usually leverage a generic auto-encoder architecture, which captures low-level features in its initial layers and high-level features in its deeper layers. Such networks focus largely on global features, which may not be optimal for reconstructing the fully sampled image. In this paper, we propose an Over-and-Under Complete Convolutional Recurrent Neural Network (OUCR), which consists of an overcomplete and an undercomplete Convolutional Recurrent Neural Network (CRNN). The overcomplete branch pays special attention to learning local structures by restraining the receptive field of the network. Combining it with the undercomplete branch leads to a network that focuses more on low-level features without losing the global structures. Extensive experiments on two datasets demonstrate that the proposed method achieves significant improvements over compressed sensing and popular deep learning-based methods, with fewer trainable parameters.
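The receptive-field contrast between the two branches follows from standard convolution arithmetic: downsampling multiplies the per-layer receptive-field growth, while keeping resolution restrains it. A small sketch of that arithmetic (the overcomplete branch actually upsamples, so its receptive field grows even more slowly than the stride-1 stand-in used here):

```python
def receptive_field(layers):
    """Receptive field of stacked (kernel, stride) conv layers.

    After each layer: r += (kernel - 1) * jump, where jump is the product
    of all preceding strides.
    """
    r, jump = 1, 1
    for kernel, stride in layers:
        r += (kernel - 1) * jump
        jump *= stride
    return r

# undercomplete path: 3x3 convs with stride-2 downsampling grow the RF quickly
under = receptive_field([(3, 2), (3, 2), (3, 2)])   # → 15
# stride-1 stand-in for the overcomplete path: the RF grows much more slowly
over = receptive_field([(3, 1), (3, 1), (3, 1)])    # → 7
```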

7.
IEEE Trans Med Imaging ; 40(10): 2832-2844, 2021 Oct.
Article in English | MEDLINE | ID: mdl-33351754

ABSTRACT

Data-driven automatic approaches have demonstrated great potential for resolving various clinical diagnostic dilemmas in neuro-oncology, especially with the help of standard anatomic and advanced molecular MR images. However, data quantity and quality remain a key determinant of, and a significant limit on, the potential applications. In our previous work, we explored the synthesis of anatomic and molecular MR image networks (SAMR) in patients with post-treatment malignant gliomas. In this work, we extend that approach with a confidence-guided SAMR (CG-SAMR) that synthesizes data from lesion contour information to multi-modal MR images, including T1-weighted (T1w), gadolinium-enhanced T1w (Gd-T1w), T2-weighted (T2w), and fluid-attenuated inversion recovery (FLAIR), as well as the molecular amide proton transfer-weighted (APTw) sequence. We introduce a module that guides the synthesis based on a confidence measure of the intermediate results. Furthermore, we extend the proposed architecture to allow training with unpaired data. Extensive experiments on real clinical data demonstrate that the proposed model performs better than current state-of-the-art synthesis methods. Our code is available at https://github.com/guopengf/CG-SAMR.


Subjects
Glioma , Magnetic Resonance Imaging , Glioma/diagnostic imaging , Humans
8.
Article in English | MEDLINE | ID: mdl-35444379

ABSTRACT

Fast and accurate reconstruction of magnetic resonance (MR) images from undersampled data is important in many clinical applications. In recent years, deep learning-based methods have been shown to produce superior performance on MR image reconstruction. However, these methods require large amounts of data, which are difficult to collect and share given the high cost of acquisition and medical data privacy regulations. To overcome this challenge, we propose a federated learning (FL) based solution that takes advantage of the MR data available at different institutions while preserving patients' privacy. However, the generalizability of models trained in the FL setting can still be suboptimal due to domain shift, which results from data being collected at multiple institutions with different sensors, disease types, acquisition protocols, and so on. To circumvent this challenge, we propose cross-site modeling for MR image reconstruction, in which the intermediate latent features learned at the different source sites are aligned with the distribution of the latent features at the target site. Extensive experiments are conducted to provide insights into FL for MR image reconstruction. The experimental results demonstrate that the proposed framework is a promising way to utilize multi-institutional data, without compromising patients' privacy, for improved MR image reconstruction. Our code is available at https://github.com/guopengf/FL-MRCM.
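As background, the basic FL aggregation step (FedAvg-style size-weighted averaging of client parameters) can be sketched as follows; note this is the generic baseline, not the paper's cross-site latent-feature alignment, and all names are illustrative:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Size-weighted average of per-client parameter lists (FedAvg).

    `client_weights` is a list (one entry per client) of lists of numpy
    arrays (one array per layer); `client_sizes` holds each client's
    number of local training samples.
    """
    total = sum(client_sizes)
    coef = np.array(client_sizes, dtype=float) / total
    n_layers = len(client_weights[0])
    stacked = [np.stack([w[i] for w in client_weights]) for i in range(n_layers)]
    # contract the client axis of each stacked layer with the weight vector
    return [np.tensordot(coef, layer, axes=1) for layer in stacked]

w_a = [np.array([1.0, 2.0])]          # client A's single-layer parameters
w_b = [np.array([3.0, 4.0])]          # client B's single-layer parameters
avg = fedavg([w_a, w_b], client_sizes=[1, 3])
# → [array([2.5, 3.5])]  (client B contributes 3/4 of the weight)
```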

9.
Med Image Comput Comput Assist Interv ; 12262: 104-113, 2020 Oct.
Article in English | MEDLINE | ID: mdl-33073265

ABSTRACT

Data-driven automatic approaches have demonstrated great potential for resolving various clinical diagnostic dilemmas for patients with malignant gliomas in neuro-oncology, with the help of conventional and advanced molecular MR images. However, the lack of sufficient annotated MRI data has greatly impeded the development of such automatic methods. Conventional data augmentation approaches, including flipping, scaling, rotation, and distortion, are not capable of generating data with diverse image content. In this paper, we propose a method, called synthesis of anatomic and molecular MR images network (SAMR), which can simultaneously synthesize data from arbitrarily manipulated lesion information on multiple anatomic and molecular MRI sequences, including T1-weighted (T1w), gadolinium-enhanced T1w (Gd-T1w), T2-weighted (T2w), fluid-attenuated inversion recovery (FLAIR), and amide proton transfer-weighted (APTw). The proposed framework consists of a stretch-out up-sampling module, a brain atlas encoder, a segmentation consistency module, and multi-scale label-wise discriminators. Extensive experiments on real clinical data demonstrate that the proposed model performs significantly better than state-of-the-art synthesis methods.

10.
Article in English | MEDLINE | ID: mdl-33103161

ABSTRACT

The current protocol for amide proton transfer-weighted (APTw) imaging commonly starts with the acquisition of high-resolution T2-weighted (T2w) images, followed by APTw imaging at a particular geometry and at locations (i.e., slices) determined from the acquired T2w images. Although many advanced MRI reconstruction methods have been proposed to accelerate MRI, existing methods for APTw MRI lack the capability to exploit the structural information in the already-acquired T2w images for reconstruction. In this paper, we present a novel APTw image reconstruction framework that accelerates APTw imaging by reconstructing APTw images directly from highly undersampled k-space data and the corresponding T2w image at the same location. The proposed framework starts with a novel sparse representation-based slice matching algorithm that finds the matched T2w slice given only the undersampled APTw image. A Recurrent Feature Sharing Reconstruction network (RFS-Rec) is designed to utilize intermediate features extracted from the matched T2w image by a Convolutional Recurrent Neural Network (CRNN), so that the missing structural information can be incorporated into the undersampled APTw raw image, effectively improving the quality of the reconstructed APTw image. We evaluate the proposed method on two real datasets consisting of brain data from rats and humans. Extensive experiments demonstrate that the proposed RFS-Rec approach outperforms state-of-the-art methods.
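The paper's slice matching uses a sparse-representation formulation; a simplified stand-in that conveys the idea is to pick the volume slice with the highest normalized correlation to the query image. All names below are illustrative, and the data are synthetic:

```python
import numpy as np

def best_matching_slice(query: np.ndarray, volume: np.ndarray) -> int:
    """Index of the volume slice most correlated with the query image."""
    q = query.ravel().astype(float)
    q = (q - q.mean()) / (q.std() + 1e-12)
    best_idx, best_r = 0, -np.inf
    for idx in range(volume.shape[0]):
        s = volume[idx].ravel().astype(float)
        s = (s - s.mean()) / (s.std() + 1e-12)
        r = float(np.mean(q * s))        # normalized correlation
        if r > best_r:
            best_r, best_idx = r, idx
    return best_idx

# synthetic test: a noisy copy of slice 3 should match back to slice 3
rng = np.random.default_rng(1)
vol = rng.standard_normal((5, 8, 8))
noisy = vol[3] + 0.05 * rng.standard_normal((8, 8))
match = best_matching_slice(noisy, vol)
```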

11.
Int J Comput Assist Radiol Surg ; 15(7): 1127-1135, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32430694

ABSTRACT

PURPOSE: Automatic bone surface segmentation is one of the fundamental tasks of ultrasound (US)-guided computer-assisted orthopedic surgery procedures. However, due to various US imaging artifacts, manual operation of the transducer during acquisition, and differing machine settings, many existing methods cannot handle the large variation of bone surface responses in the collected data without manual parameter selection. Even fully automatic methods, such as deep learning-based methods, suffer from dataset bias, causing networks to perform poorly on US data that differ from the training set. METHODS: In this work, an intensity-invariant convolutional neural network (CNN) architecture is proposed for robust segmentation of bone surfaces from US data obtained from two different US machines with varying acquisition settings. The proposed CNN takes a US image as input and simultaneously generates two intermediate output images, denoted the local phase tensor (LPT) and the global context tensor (GCT), from two branches that are invariant to intensity variations. The LPT and GCT are fused to generate the final segmentation map. During training, the LPT network branch is supervised by a precalculated ground truth that requires no manual annotation. RESULTS: The proposed method is evaluated on 1227 in vivo US scans collected on two US machines, including a portable handheld ultrasound scanner, by scanning various bone surfaces of 28 volunteers. Validation on both US machines not only shows statistically significant improvements in cross-machine segmentation of bone surfaces compared to state-of-the-art methods, but also achieves a computation time of 30 milliseconds per image, a [Formula: see text] improvement over the state of the art. CONCLUSION: The encouraging results obtained in this initial study suggest that the proposed method is promising enough for further evaluation. Future work will include extensive validation of the method on new US data collected from various machines using different acquisition settings. We will also evaluate the potential of using the segmented bone surfaces as input to a point set-based registration method.


Subjects
Bone and Bones/surgery , Computer-Assisted Image Processing/methods , Computer-Assisted Surgery , Interventional Ultrasonography/methods , Artifacts , Bone and Bones/diagnostic imaging , Deep Learning , Humans , Young Adult