Results 1 - 3 of 3
1.
Med Image Anal ; 97: 103249, 2024 Jun 27.
Article in English | MEDLINE | ID: mdl-38963972

ABSTRACT

Image registration is an essential step in many medical image analysis tasks. Traditional methods for image registration are primarily optimization-driven, finding the optimal deformations that maximize the similarity between two images. Recent learning-based methods, trained to directly predict transformations between two images, run much faster but suffer from performance deficiencies due to domain shift. Here we present a new neural-network-based image registration framework, called NIR (Neural Image Registration), which is based on optimization but uses deep neural networks to model deformations between image pairs. NIR represents the transformation between two images with a continuous function implemented via neural fields, which receives a 3D coordinate as input and outputs the corresponding deformation vector. NIR provides two ways of generating a deformation field: directly outputting a displacement vector field for general deformable registration, or outputting a velocity vector field and integrating it to derive the deformation field for diffeomorphic image registration. The optimal registration is discovered by updating the parameters of the neural field via stochastic mini-batch gradient descent. We describe several design choices that facilitate model optimization, including coordinate encoding, sinusoidal activation, coordinate sampling, and intensity sampling. NIR is evaluated on two 3D MR brain scan datasets, demonstrating highly competitive performance in terms of both registration accuracy and regularity. Compared with traditional optimization-based methods, our approach achieves better results in shorter computation times. In addition, our methods exhibit superior performance on a cross-dataset registration task compared with pre-trained learning-based methods.
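The abstract does not include code, but the core idea it describes (a coordinate network with positional encoding and sinusoidal activations that maps a 3D coordinate to a displacement vector) can be sketched in a few lines. The following is a minimal NumPy illustration, not the authors' implementation; all names, layer sizes, and the frequency count are assumptions, and only the untrained forward pass is shown.

```python
import numpy as np

def positional_encoding(coords, num_freqs=4):
    """Map (N, 3) coordinates to sin/cos features at increasing frequencies."""
    feats = [coords]
    for k in range(num_freqs):
        feats.append(np.sin((2.0 ** k) * np.pi * coords))
        feats.append(np.cos((2.0 ** k) * np.pi * coords))
    return np.concatenate(feats, axis=-1)

class NeuralDisplacementField:
    """Tiny coordinate MLP with sinusoidal activations: (x, y, z) -> (dx, dy, dz)."""

    def __init__(self, in_dim, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 1.0 / np.sqrt(in_dim), (in_dim, hidden))
        self.w2 = rng.normal(0.0, 1.0 / np.sqrt(hidden), (hidden, hidden))
        # Zero-initialised output layer: the field starts as the identity
        # transform, a common choice when optimising a registration from scratch.
        self.w3 = np.zeros((hidden, 3))

    def __call__(self, coords):
        h = np.sin(positional_encoding(coords) @ self.w1)
        h = np.sin(h @ self.w2)
        return h @ self.w3  # per-point displacement vectors

coords = np.random.default_rng(1).uniform(-1.0, 1.0, (128, 3))
enc_dim = positional_encoding(coords).shape[1]   # 3 + 2 * 4 * 3 = 27
field = NeuralDisplacementField(in_dim=enc_dim)
warped = coords + field(coords)  # deformed sample positions
```

In the framework the abstract describes, the weights of such a field would then be fitted per image pair by mini-batch gradient descent on an image-similarity loss; the velocity-field variant would instead integrate the network output over pseudo-time to obtain a diffeomorphic deformation.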

2.
Radiother Oncol ; 160: 175-184, 2021 07.
Article in English | MEDLINE | ID: mdl-33961914

ABSTRACT

BACKGROUND AND PURPOSE: Delineating organs at risk (OARs) on computed tomography (CT) images is an essential step in radiation therapy; however, it is notoriously time-consuming and prone to inter-observer variation. Herein, we report a deep learning-based automatic segmentation (AS) algorithm (WBNet) that can accurately and efficiently delineate all major OARs in the entire body directly on CT scans. MATERIALS AND METHODS: We collected 755 CT scans of the head and neck, thorax, abdomen, and pelvis and manually delineated 50 OARs on the CT images. The CT images with contours were split into training and test sets consisting of 505 and 250 cases, respectively, to develop and validate WBNet. The volumetric Dice similarity coefficient (DSC) and 95th-percentile Hausdorff distance (95% HD) were calculated to evaluate delineation quality for each OAR. We compared the performance of WBNet with that of three AS algorithms: a commercial multi-atlas-based automatic segmentation (ABAS) package and two deep learning-based AS algorithms, namely AnatomyNet and nnU-Net. We also evaluated the time savings and dose accuracy of WBNet. RESULTS: WBNet achieved average DSCs of 0.84 and 0.81 on in-house and public datasets, respectively, outperforming ABAS, AnatomyNet, and nnU-Net. WBNet reduced delineation time significantly and performed well in treatment planning, with clinically acceptable dose differences compared with manual delineation. CONCLUSION: This study shows the feasibility and benefits of using WBNet in clinical practice.
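The two evaluation metrics this abstract names, the volumetric DSC and the 95% HD, are standard segmentation measures and can be sketched in a few lines of NumPy. The function names and toy inputs below are illustrative, not taken from the study; in practice the HD point sets would be the surface voxels of the predicted and manual contours, scaled by voxel spacing.

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Volumetric Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

def hd95(points_a, points_b):
    """Symmetric 95th-percentile Hausdorff distance between two point sets."""
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=-1)
    return max(np.percentile(d.min(axis=1), 95),
               np.percentile(d.min(axis=0), 95))

pred = np.array([[1, 1, 0],
                 [0, 1, 0]])
gt = np.array([[1, 0, 0],
               [0, 1, 1]])
dsc = dice_coefficient(pred, gt)  # 2 * 2 / (3 + 3) = 0.667
```

Using the 95th percentile rather than the maximum surface distance makes the metric robust to a few outlier contour points, which is why it is preferred over the plain Hausdorff distance for delineation quality.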


Subjects
Deep Learning , Head and Neck Neoplasms , Humans , Image Processing, Computer-Assisted , Organs at Risk , Radiotherapy Planning, Computer-Assisted , Tomography, X-Ray Computed
3.
J Comput Assist Tomogr ; 45(2): 191-202, 2021.
Article in English | MEDLINE | ID: mdl-33273161

ABSTRACT

OBJECTIVE: This study aimed to preoperatively differentiate primary gastric lymphoma from Borrmann type IV gastric cancer using a heterogeneity nomogram based on routine contrast-enhanced computed tomography (CT) images. METHODS: We enrolled 189 patients from 2 hospitals (90 in the training cohort and 99 in the validation cohort). Subjective findings, including high-enhanced mucosal sign, high-enhanced serosa sign, a nodular or irregular outer layer of the gastric wall, and perigastric fat infiltration, were assessed to construct a subjective-finding model. A deep learning model was developed to segment tumor areas, from which 1680 three-dimensional heterogeneity radiomic parameters, including first-order entropy, second-order entropy, and texture complexity, were extracted to build a heterogeneity signature by least absolute shrinkage and selection operator (LASSO) logistic regression. A nomogram integrating the heterogeneity signature and subjective findings was developed by multivariate logistic regression. The diagnostic performance of the nomogram was assessed by discrimination and clinical usefulness. RESULTS: High-enhanced serosa sign and a nodular or irregular outer layer of the gastric wall were identified as independent predictors for building the subjective-finding model. High-enhanced serosa sign and the heterogeneity signature were significant predictors for differentiating the 2 groups (all P < 0.05). The area under the curve of the heterogeneity nomogram was 0.932 (95% confidence interval, 0.863-0.973) in the validation cohort. Decision curve analysis and stratified analysis confirmed the clinical utility of the heterogeneity nomogram. CONCLUSIONS: The proposed heterogeneity radiomic nomogram based on contrast-enhanced CT images may help differentiate primary gastric lymphoma from Borrmann type IV gastric cancer preoperatively.
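The signature-building step this abstract names, LASSO logistic regression, selects a sparse subset of the 1680 radiomic features by driving uninformative coefficients exactly to zero. The following is a self-contained NumPy sketch using proximal gradient descent on toy data; the data, dimensions, and hyperparameters are hypothetical and not taken from the study.

```python
import numpy as np

def lasso_logistic(X, y, lam=0.05, lr=0.1, iters=2000):
    """L1-penalised logistic regression fit by proximal gradient descent."""
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(iters):
        grad = X.T @ (1.0 / (1.0 + np.exp(-(X @ w))) - y) / n
        w = w - lr * grad
        # Soft-thresholding (the proximal operator of the L1 penalty)
        # zeroes out coefficients whose gradient signal is below lam.
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)
    return w

# Toy data: only the first 2 of 10 "radiomic features" carry signal.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = (X[:, 0] - X[:, 1] + 0.3 * rng.normal(size=200) > 0).astype(float)

w = lasso_logistic(X, y)
selected = np.flatnonzero(np.abs(w) > 1e-6)  # indices of surviving features
```

The surviving features' weighted sum is the "signature" score; in a study like this one it would then enter the multivariate logistic regression alongside the subjective CT findings to form the nomogram.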


Subjects
Lymphoma, Non-Hodgkin/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods , Stomach Neoplasms/diagnostic imaging , Tomography, X-Ray Computed/methods , Adult , Aged , Deep Learning , Diagnosis, Differential , Female , Humans , Male , Middle Aged , Nomograms , Retrospective Studies