Results 1 - 2 of 2
1.
Comput Med Imaging Graph; 109: 102300, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37776676

ABSTRACT

Computed tomography (CT) synthesis from cone-beam computed tomography (CBCT) is a key step in adaptive radiotherapy: the synthetic CT is used to calculate dose so that the radiotherapy plan can be corrected and adjusted in a timely manner. The cycle-consistent adversarial network (CycleGAN) is commonly used for CT synthesis, but it has two shortcomings: (a) the cycle consistency loss presumes that the conversion between domains is bijective, yet the CBCT-to-CT conversion does not fully satisfy a bijective relationship, and (b) it does not exploit the complementary information among the multiple sets of CBCTs acquired for the same patient. To address these problems, we propose a novel framework named the sequence-aware contrastive generative network (SCGN), which introduces an attention sequence fusion module to improve CBCT quality. In addition, it not only applies contrastive learning to the generative adversarial networks (GANs) so that feature extraction pays more attention to the anatomical structure of the CBCT, but also uses a new generator to improve the accuracy of anatomical details. Experimental results on our datasets show that our method significantly outperforms existing unsupervised CT synthesis methods.
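For readers unfamiliar with the defect described in (a), the following is a minimal PyTorch-style sketch of the standard cycle consistency loss that CycleGAN-based synthesis relies on. The generator names G_ct and G_cbct are illustrative assumptions, not the authors' code; the loss is only well-founded when the two generators are near-inverses, i.e. when the domain translation is bijective.

    import torch.nn as nn

    l1 = nn.L1Loss()

    def cycle_consistency_loss(real_cbct, real_ct, G_ct, G_cbct):
        # G_ct maps CBCT -> synthetic CT; G_cbct maps CT -> synthetic CBCT.
        # Reconstructing each image through a full round trip assumes the two
        # mappings are (approximate) inverses -- the bijectivity premise that,
        # per the abstract, the CBCT<->CT conversion does not fully satisfy.
        recon_cbct = G_cbct(G_ct(real_cbct))   # CBCT -> CT -> CBCT
        recon_ct = G_ct(G_cbct(real_ct))       # CT -> CBCT -> CT
        return l1(recon_cbct, real_cbct) + l1(recon_ct, real_ct)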


Subjects
Spiral Cone-Beam Computed Tomography; Humans; Image Processing, Computer-Assisted/methods; Tomography, X-Ray Computed; Cone-Beam Computed Tomography
2.
IEEE J Biomed Health Inform; 27(7): 3455-3466, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37099474

ABSTRACT

Deformable multi-modal medical image registration aligns the anatomical structures of different modalities to the same coordinate system through a spatial transformation. Because ground-truth registration labels are difficult to collect, existing methods often adopt an unsupervised multi-modal registration setting. However, it is hard to design satisfactory metrics to measure the similarity of multi-modal images, which heavily limits multi-modal registration performance. Moreover, because the same organ has different contrast in different modalities, it is difficult to extract and fuse the representations of the different modal images. To address these issues, we propose a novel unsupervised multi-modal adversarial registration framework that uses image-to-image translation to translate a medical image from one modality to another, so that well-defined uni-modal metrics can be used to train the models. Within this framework, we propose two improvements to promote accurate registration. First, to keep the translation network from learning spatial deformation, we propose a geometry-consistent training scheme that encourages the translation network to learn the modality mapping alone. Second, we propose a novel semi-shared multi-scale registration network that extracts features of multi-modal images effectively and predicts multi-scale registration fields in a coarse-to-fine manner, allowing large deformation areas to be registered accurately. Extensive experiments on brain and pelvic datasets demonstrate the superiority of the proposed method over existing methods and reveal that our framework has great potential in clinical application.
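As an illustration of the first improvement, one plausible way to discourage a translation network from encoding geometry is to require it to commute with a known spatial transform: translating then transforming should give the same result as transforming then translating. The PyTorch-style sketch below uses hypothetical names and a horizontal flip as the transform; it is one possible instantiation of a geometry-consistency constraint, not the authors' implementation.

    import torch
    import torch.nn as nn

    l1 = nn.L1Loss()

    def geometry_consistency_loss(x, translate, transform):
        # If 'translate' changes appearance only (the modality mapping) and
        # not geometry, it should commute with any fixed spatial transform:
        #   transform(translate(x)) == translate(transform(x))
        return l1(transform(translate(x)), translate(transform(x)))

    # Usage sketch, with a horizontal flip as the spatial transform:
    # flip = lambda t: torch.flip(t, dims=[-1])
    # loss = geometry_consistency_loss(mr_batch, translator, flip)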


Subjects
Brain; Magnetic Resonance Imaging; Humans; Magnetic Resonance Imaging/methods; Brain/diagnostic imaging; Image Processing, Computer-Assisted/methods