2.
Eur Radiol ; 2024 Apr 25.
Article in English | MEDLINE | ID: mdl-38662100

ABSTRACT

OBJECTIVES: In lung cancer, one of the main limitations for the optimal integration of the biological and anatomical information derived from Positron Emission Tomography (PET) and Computed Tomography (CT) is the time and expertise required to evaluate the different respiratory phases. In this study, we present two open-source models that automatically segment lung tumors on PET and CT, with and without motion compensation.

MATERIALS AND METHODS: This study involved time-bin gated (4D) and non-gated (3D) PET/CT images from two prospective lung cancer cohorts (Trials 108237 and 108472) and one retrospective cohort. For model construction, the ground truth (GT) was defined by consensus of two experts, and the nnU-Net with 5-fold cross-validation was applied to 560 4D-images for PET and 100 3D-images for CT. The test sets included 270 4D-images and 19 3D-images for PET, and 80 4D-images and 27 3D-images for CT, recruited at 10 different centres.

RESULTS: In the performance evaluation with the multicentre test sets, the Dice Similarity Coefficients (DSC) obtained for our PET model were DSC(4D-PET) = 0.74 ± 0.06, a 19% relative improvement over the DSC between experts, and DSC(3D-PET) = 0.82 ± 0.11. The performance for CT was DSC(4D-CT) = 0.61 ± 0.28 and DSC(3D-CT) = 0.63 ± 0.34, relative improvements of 4% and 15% over the DSC between experts.

CONCLUSIONS: The performance evaluation demonstrated that the automatic segmentation models can achieve accuracy comparable to manual segmentation and thus hold promise for clinical application. The resulting models can be freely downloaded and employed to support the integration of 3D- or 4D-PET/CT and to facilitate the evaluation of its impact on lung cancer clinical practice.

CLINICAL RELEVANCE STATEMENT: We provide two open-source nnU-Net models for the automatic segmentation of lung tumors on PET/CT to facilitate the optimal integration of biological and anatomical information in clinical practice. The models' performance exceeds the variability observed between the experts' manual segmentations, for images both with and without motion compensation, allowing clinical practice to take advantage of the more accurate and robust 4D quantification.

KEY POINTS: Lung tumor segmentation on PET/CT imaging is limited by respiratory motion, and manual delineation is time-consuming and suffers from inter- and intra-observer variability. Our segmentation models outperformed the manual segmentations produced by the different experts. Automating PET image segmentation allows for easier clinical implementation of biological information.
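The Dice Similarity Coefficient (DSC) reported throughout these studies measures voxel-wise overlap between a predicted segmentation and the ground truth. As a minimal illustration only (not the authors' implementation), it can be computed from two binary NumPy masks as follows:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice Similarity Coefficient between two binary masks.

    DSC = 2|A ∩ B| / (|A| + |B|); ranges from 0 (no overlap) to 1 (identical).
    """
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        # Both masks empty: conventionally treated as perfect agreement.
        return 1.0
    return 2.0 * float(intersection) / float(total)
```

The same formula applies per time bin for 4D-gated images; reported values such as DSC(4D-PET) = 0.74 are averages over test cases.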

3.
Radiother Oncol ; 188: 109774, 2023 11.
Article in English | MEDLINE | ID: mdl-37394103

ABSTRACT

PURPOSE: With the increased use of focal radiation dose escalation for primary prostate cancer (PCa), accurate delineation of the gross tumor volume (GTV) in prostate-specific membrane antigen PET (PSMA-PET) becomes crucial. Manual approaches are time-consuming and observer-dependent. The purpose of this study was to create a deep learning model for the accurate delineation of the intraprostatic GTV in PSMA-PET.

METHODS: A 3D U-Net was trained on 128 different 18F-PSMA-1007 PET images from three different institutions. Testing was done on 52 patients, including one independent internal cohort (Freiburg: n = 19) and three independent external cohorts (Dresden: n = 14, 18F-PSMA-1007; Boston: Massachusetts General Hospital (MGH): n = 9, 18F-DCFPyL-PSMA; and Dana-Farber Cancer Institute (DFCI): n = 10, 68Ga-PSMA-11). Expert contours were generated in consensus using a validated technique. CNN predictions were compared to expert contours using the Dice similarity coefficient (DSC). Co-registered whole-mount histology was used for the internal testing cohort to assess sensitivity and specificity.

RESULTS: Median DSCs were Freiburg: 0.82 (IQR: 0.73-0.88), Dresden: 0.71 (IQR: 0.53-0.75), MGH: 0.80 (IQR: 0.64-0.83) and DFCI: 0.80 (IQR: 0.67-0.84). Median sensitivities for CNN and expert contours were 0.88 (IQR: 0.68-0.97) and 0.85 (IQR: 0.75-0.88) (p = 0.40), respectively. GTV volumes did not differ significantly (p > 0.1 for all comparisons). Median specificities of 0.83 (IQR: 0.57-0.97) and 0.88 (IQR: 0.69-0.98) were observed for CNN and expert contours (p = 0.014), respectively. CNN prediction took 3.81 seconds per patient on average.

CONCLUSION: The CNN was trained and tested on internal and external datasets as well as against a histopathology reference, achieving fast GTV segmentation for three PSMA-PET tracers with high diagnostic accuracy comparable to manual experts.
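The sensitivity and specificity reported above are voxel-wise rates of a binary segmentation judged against a reference mask (here derived from co-registered histology). A hedged sketch of how such rates follow from the confusion-matrix counts (illustrative only, not the study's code):

```python
import numpy as np

def sensitivity_specificity(pred, truth):
    """Voxel-wise sensitivity and specificity of a binary segmentation.

    sensitivity = TP / (TP + FN)  -- fraction of reference voxels recovered
    specificity = TN / (TN + FP)  -- fraction of background voxels left out
    """
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tp = np.logical_and(pred, truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    tn = np.logical_and(~pred, ~truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    sens = tp / (tp + fn) if (tp + fn) else float("nan")
    spec = tn / (tn + fp) if (tn + fp) else float("nan")
    return float(sens), float(spec)
```

In segmentation studies the median of these per-patient values, rather than a pooled rate, is typically reported, as in the abstract above.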


Subjects
Deep Learning , Prostatic Neoplasms , Male , Humans , Tumor Burden , Positron Emission Tomography Computed Tomography/methods , Radiotherapy Planning, Computer-Assisted/methods , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/radiotherapy , Prostatic Neoplasms/pathology
4.
Comput Med Imaging Graph ; 107: 102241, 2023 07.
Article in English | MEDLINE | ID: mdl-37201475

ABSTRACT

In healthcare, a growing number of physicians and support staff are striving to facilitate personalized radiotherapy regimens for patients with prostate cancer, because each patient's biology is unique and a one-size-fits-all approach is inefficient. A crucial step for customizing radiotherapy planning and gaining fundamental information about the disease is the identification and delineation of targeted structures. However, accurate biomedical image segmentation is time-consuming, requires considerable experience and is prone to observer variability. In the past decade, the use of deep learning models has increased significantly in the field of medical image segmentation. At present, deep learning models can delineate a vast number of anatomical structures at a clinician's level. These models not only reduce workload but can also offer an unbiased characterization of the disease. The main architectures used in segmentation are the U-Net and its variants, which exhibit outstanding performance. However, reproducing results or directly comparing methods is often limited by closed data sources and the large heterogeneity among medical images. With this in mind, our intention is to provide a reliable source for assessing deep learning models. As an example, we chose the challenging task of delineating the prostate gland in multi-modal images. First, this paper provides a comprehensive review of current state-of-the-art convolutional neural networks for 3D prostate segmentation. Second, using public and in-house CT and MR datasets of varying properties, we created a framework for an objective comparison of automatic prostate segmentation algorithms. The framework was used for rigorous evaluations of the models, highlighting their strengths and weaknesses.
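An objective comparison framework of the kind described reduces, at its core, to scoring every candidate model against the same ground-truth cases with a common overlap metric. A minimal sketch of that core loop (the `benchmark` helper and model names are hypothetical, not part of the published framework):

```python
import numpy as np
from statistics import median

def dice(pred, truth):
    """Dice overlap between two binary masks (1.0 if both are empty)."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    denom = pred.sum() + truth.sum()
    return 1.0 if denom == 0 else 2.0 * float(np.logical_and(pred, truth).sum()) / float(denom)

def benchmark(models, cases):
    """Median Dice per model over a shared list of (image, ground_truth) cases.

    `models` maps a model name to a callable image -> predicted mask, so every
    algorithm is evaluated on identical data with an identical metric.
    """
    results = {}
    for name, predict in models.items():
        scores = [dice(predict(image), gt) for image, gt in cases]
        results[name] = median(scores)
    return results
```

Reporting a per-case distribution (median, IQR) rather than a single pooled score is what makes the strengths and weaknesses of each model visible across heterogeneous datasets.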


Assuntos
Prostate , Prostatic Neoplasms , Male , Humans , Prostate/diagnostic imaging , Benchmarking , Neural Networks, Computer , Algorithms , Prostatic Neoplasms/diagnostic imaging , Image Processing, Computer-Assisted/methods