Results 1 - 4 of 4
1.
Quant Imaging Med Surg ; 14(1): 1122-1140, 2024 Jan 03.
Article in English | MEDLINE | ID: mdl-38223046

ABSTRACT

Background and Objective: Automatic tumor segmentation is a critical component of clinical diagnosis and treatment. Although single-modal imaging provides useful information, multi-modal imaging offers a more comprehensive understanding of the tumor, and multi-modal tumor segmentation has become an essential topic in medical image processing. With the remarkable performance of deep learning (DL) methods in medical image analysis, DL-based multi-modal tumor segmentation has attracted significant attention. This study aimed to provide an overview of recent DL-based multi-modal tumor segmentation methods. Methods: The PubMed and Google Scholar databases were systematically searched for English articles from the past 5 years using the keywords "multi-modal", "deep learning", and "tumor segmentation". The date range was from 1 January 2018 to 1 June 2023. A total of 78 English articles were reviewed. Key Content and Findings: We introduce public datasets, evaluation methods, and multi-modal data processing. We also summarize common DL network structures, techniques, and multi-modal image fusion methods used in different tumor segmentation tasks. Finally, we conclude this study by presenting perspectives for future research. Conclusions: DL techniques are a powerful tool for multi-modal tumor segmentation. By fusing data from different modalities, a DL framework can effectively exploit the complementary characteristics of each modality to improve segmentation accuracy.
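The fusion methods surveyed above are commonly grouped by the stage at which modalities are combined. As a minimal illustrative sketch (hypothetical NumPy code with toy arrays, not taken from any reviewed paper), input-level ("early") and decision-level ("late") fusion can be contrasted as follows:

```python
import numpy as np

# Hypothetical toy 2D slices from two modalities (e.g. PET and CT), shape (H, W).
pet = np.random.rand(4, 4)
ct = np.random.rand(4, 4)

# Early (input-level) fusion: stack modalities as channels before the network.
early = np.stack([pet, ct], axis=0)   # shape (2, 4, 4), fed to one shared model

# Late (decision-level) fusion: average per-modality probability maps.
prob_pet = 1.0 / (1.0 + np.exp(-pet))  # placeholder per-modality network outputs
prob_ct = 1.0 / (1.0 + np.exp(-ct))
late = (prob_pet + prob_ct) / 2.0      # shape (4, 4), one fused probability map
```

Intermediate (feature-level) fusion, which many surveyed networks use, sits between these two: per-modality encoders extract features that are merged inside the network.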

2.
Phys Med Biol ; 68(18)2023 09 12.
Article in English | MEDLINE | ID: mdl-37549672

ABSTRACT

Objective. Whole-body positron emission tomography/computed tomography (PET/CT) scans are an important tool for diagnosing various malignancies (e.g. malignant melanoma, lymphoma, or lung cancer), and accurate segmentation of tumors is a key part of subsequent treatment. In recent years, convolutional neural network based segmentation methods have been extensively investigated. However, these methods often give inaccurate segmentation results, such as oversegmentation and undersegmentation. To address these issues, we propose a postprocessing method based on a graph convolutional network (GCN) to refine inaccurate segmentation results and improve the overall segmentation accuracy. Approach. First, nnU-Net is used as an initial segmentation framework, and the uncertainty in its results is analyzed. Certain and uncertain pixels are used to establish the nodes of a graph; each node forms edges with its 6 neighbors, and 32 randomly selected uncertain nodes form additional edges. The highly uncertain nodes are the subsequent refinement targets. Second, the nnU-Net results at the certain nodes are used as labels, yielding a semisupervised graph learning problem, and the uncertain part is optimized by training the GCN to improve segmentation performance. This constitutes our proposed nnU-Net + GCN segmentation framework. Main results. We perform tumor segmentation experiments with the PET/CT dataset from the MICCAI 2022 autoPET challenge. Among these data, 30 cases are randomly selected for testing, and the experimental results show that the false-positive rate is effectively reduced by nnU-Net + GCN refinement. In quantitative analysis, there is an improvement of 2.1% in the average Dice score, 6.4 in the 95% Hausdorff distance (HD95), and 1.7 in the average symmetric surface distance. Significance. The quantitative and qualitative evaluation results show that the GCN postprocessing method can effectively improve tumor segmentation performance.
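The split into certain and uncertain pixels that seeds the graph can be illustrated with a toy uncertainty estimate. This sketch is hypothetical: it uses pixel-wise softmax entropy and an arbitrary median threshold, whereas the paper's exact uncertainty measure and thresholds may differ.

```python
import numpy as np

def softmax(logits, axis=0):
    # Numerically stable softmax over the class axis.
    e = np.exp(logits - logits.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Toy logits from an initial segmentation network: 2 classes over a 4x4 slice.
rng = np.random.default_rng(0)
logits = rng.normal(size=(2, 4, 4))

probs = softmax(logits, axis=0)
# Pixel-wise predictive entropy as an uncertainty score (higher = less certain).
entropy = -(probs * np.log(probs + 1e-12)).sum(axis=0)

# An arbitrary threshold splits pixels into "certain" nodes (kept as pseudo-labels)
# and "uncertain" nodes (targets for GCN refinement).
tau = np.median(entropy)
uncertain_mask = entropy > tau
```

Certain-node predictions would then serve as fixed labels while the GCN is trained to relabel the uncertain nodes, matching the semisupervised setup described above.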


Subject(s)
Lung Neoplasms; Positron Emission Tomography Computed Tomography; Humans; Positron Emission Tomography Computed Tomography/methods; Tomography, X-Ray Computed/methods; Neural Networks, Computer; Imaging, Three-Dimensional/methods; Lung Neoplasms/diagnostic imaging; Image Processing, Computer-Assisted/methods
3.
Quant Imaging Med Surg ; 11(2): 749-762, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33532274

ABSTRACT

BACKGROUND: Reducing the radiation tracer dose and scanning time during positron emission tomography (PET) imaging can reduce the cost of the tracer, reduce motion artifacts, and increase the efficiency of the scanner. However, the reconstructed images tend to be noisy, so reconstructing high-quality images from low-count (LC) data is very important. We therefore propose a deep learning method called LCPR-Net, which directly reconstructs full-count (FC) PET images from the corresponding LC sinogram data. METHODS: Based on the framework of a generative adversarial network (GAN), we enforce a cyclic consistency constraint on the least-squares loss to establish a nonlinear end-to-end mapping from LC sinograms to FC images. In this process, we merge a convolutional neural network (CNN) and a residual network for feature extraction and image reconstruction. In addition, a domain transform (DT) operation supplies a priori information to the cycle-consistent GAN (CycleGAN), avoiding the need for a large amount of computational resources to learn this transformation. RESULTS: The main advantages of this method are as follows. First, the network can use LC sinogram data as input to directly reconstruct an FC PET image, and reconstruction is faster than model-based iterative reconstruction. Second, reconstruction within the CycleGAN framework improves the quality of the reconstructed image. CONCLUSIONS: Compared with other state-of-the-art methods, the quantitative and qualitative evaluation results show that the proposed method is accurate and effective for FC PET image reconstruction.
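The losses named in the Methods — a least-squares adversarial loss combined with a cycle-consistency constraint — have standard forms that can be sketched as plain functions. This is an illustrative NumPy sketch of the generic LSGAN and CycleGAN loss terms, not the authors' implementation; the weight `lam` is a conventional placeholder value.

```python
import numpy as np

def lsgan_d_loss(d_real, d_fake):
    # Least-squares discriminator loss: push real outputs toward 1, fakes toward 0.
    return 0.5 * np.mean((d_real - 1.0) ** 2) + 0.5 * np.mean(d_fake ** 2)

def lsgan_g_loss(d_fake):
    # Generator tries to make the discriminator output 1 on generated images.
    return 0.5 * np.mean((d_fake - 1.0) ** 2)

def cycle_loss(x, x_reconstructed, lam=10.0):
    # L1 cycle-consistency: mapping forward and back should recover the input.
    return lam * np.mean(np.abs(x - x_reconstructed))
```

In a CycleGAN-style setup, the total generator objective sums the adversarial terms for both mapping directions plus the cycle-consistency terms for both reconstructions.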

4.
Phys Med Biol ; 65(21): 215010, 2020 11 03.
Article in English | MEDLINE | ID: mdl-32663812

ABSTRACT

Positron emission tomography (PET) imaging plays an indispensable role in early disease detection and postoperative patient staging diagnosis. However, PET imaging requires not only additional computed tomography (CT) imaging to provide detailed anatomical information but also attenuation correction (AC) maps calculated from CT images for precise PET quantification, which inevitably demands that patients undergo additional doses of ionizing radiation. To reduce the radiation dose and simultaneously obtain high-quality PET/CT images, in this work, we present an alternative based on deep learning that can estimate synthetic attenuation corrected PET (sAC PET) and synthetic CT (sCT) images from non-attenuation corrected PET (NAC PET) scans for whole-body PET/CT imaging. Our model consists of two stages: the first stage removes noise and artefacts in the NAC PET images to generate sAC PET images, and the second stage synthesizes CT images from the sAC PET images obtained in the first stage. Both stages employ the same deep Wasserstein generative adversarial network and identical loss functions, which encourage the proposed model to generate more realistic and satisfying output images. To evaluate the performance of the proposed algorithm, we conducted a comprehensive study on a total of 45 sets of paired PET/CT images of clinical patients. The final experimental results demonstrated that both the generated sAC PET and sCT images showed great similarity to true AC PET and true CT images based on both qualitative and quantitative analyses. These results also indicate that in the future, our proposed algorithm has tremendous potential for reducing the need for additional anatomic imaging in hybrid PET/CT systems or the need for lengthy MR sequence acquisition in hybrid PET/MRI systems.
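The two-stage chaining and the Wasserstein adversarial objective described above can be sketched abstractly. This hypothetical NumPy sketch shows the standard WGAN critic/generator losses and the stage-1 → stage-2 composition; the actual network architectures, loss weights, and any gradient-penalty details are not given in the abstract and are omitted here.

```python
import numpy as np

def wgan_critic_loss(c_real, c_fake):
    # The Wasserstein critic maximizes E[c(real)] - E[c(fake)];
    # training minimizes the negated difference.
    return -(np.mean(c_real) - np.mean(c_fake))

def wgan_generator_loss(c_fake):
    # The generator maximizes the critic's score on its outputs.
    return -np.mean(c_fake)

def two_stage(nac_pet, stage1, stage2):
    # Stage 1: NAC PET -> synthetic attenuation-corrected PET (sAC PET).
    sac_pet = stage1(nac_pet)
    # Stage 2: sAC PET -> synthetic CT (sCT).
    sct = stage2(sac_pet)
    return sac_pet, sct
```

The key design point is that stage 2 consumes the denoised stage-1 output rather than the raw NAC PET, so artefact removal and CT synthesis are learned as separate, simpler mappings.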


Subject(s)
Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Positron Emission Tomography Computed Tomography; Adult; Artifacts; Female; Humans; Male