1.
Comput Med Imaging Graph; 109: 102300, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37776676

ABSTRACT

Computed tomography (CT) synthesis from cone-beam computed tomography (CBCT) is a key step in adaptive radiotherapy: the synthetic CT is used to calculate dose so that the treatment plan can be corrected and adjusted in a timely manner. The cycle-consistent adversarial network (CycleGAN) is commonly used for CT synthesis tasks, but it has two defects: (a) the cycle-consistency loss presumes that the conversion between domains is bijective, whereas CBCT-to-CT conversion does not fully satisfy this bijective relationship, and (b) it does not exploit the complementary information among the multiple sets of CBCTs acquired for the same patient. To address these problems, we propose a novel framework named the sequence-aware contrastive generative network (SCGN), which introduces an attention sequence fusion module to improve CBCT quality. In addition, it not only applies contrastive learning to the generative adversarial network (GAN) so that feature extraction attends more closely to the anatomical structure of the CBCT, but also uses a new generator to improve the accuracy of anatomical details. Experimental results on our datasets show that our method significantly outperforms existing unsupervised CT synthesis methods.
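No code accompanies this record; as a rough illustration of the contrastive-learning component described above, the sketch below implements a patch-based InfoNCE loss in the style of CUT (contrastive unpaired translation), which pulls each synthetic-CT feature patch toward the CBCT feature patch at the same location and pushes it away from patches at other locations. All names (`patch_nce_loss`, `feat_src`, `feat_out`) and hyperparameters are illustrative assumptions, not details from the SCGN paper.

```python
# Hedged sketch (PyTorch assumed): a CUT-style patch InfoNCE loss.
# feat_src / feat_out: (B, C, H, W) encoder features of the input CBCT
# and the synthesized CT at the same layer; illustrative names only.
import torch
import torch.nn.functional as F

def patch_nce_loss(feat_src, feat_out, num_patches=256, tau=0.07):
    b, c, h, w = feat_src.shape
    num_patches = min(num_patches, h * w)
    # Flatten the spatial grid and sample the same locations in both maps,
    # so each query's positive key is the patch at the same position.
    src = feat_src.flatten(2).permute(0, 2, 1)        # (B, H*W, C)
    out = feat_out.flatten(2).permute(0, 2, 1)
    idx = torch.randperm(h * w, device=feat_src.device)[:num_patches]
    q = F.normalize(out[:, idx], dim=-1)              # queries: synthetic CT
    k = F.normalize(src[:, idx], dim=-1)              # keys: input CBCT
    logits = torch.bmm(q, k.transpose(1, 2)) / tau    # (B, P, P) similarities
    # Diagonal entries are positives; off-diagonal entries act as negatives.
    target = torch.arange(num_patches, device=q.device).repeat(b)
    return F.cross_entropy(logits.flatten(0, 1), target)
```

Anchoring positives at matching spatial locations ties the synthesized image to the input's anatomy without requiring the bijective assumption of a cycle-consistency loss.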


Subject(s)
Spiral Cone-Beam Computed Tomography; Humans; Image Processing, Computer-Assisted/methods; Tomography, X-Ray Computed; Cone-Beam Computed Tomography
2.
IEEE J Biomed Health Inform; 27(7): 3455-3466, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37099474

ABSTRACT

Deformable multi-modal medical image registration aligns the anatomical structures of different modalities to the same coordinate system through a spatial transformation. Because ground-truth registration labels are difficult to collect, existing methods typically adopt an unsupervised setting. However, it is hard to design satisfactory similarity metrics for multi-modal images, which heavily limits registration performance. Moreover, because the same organ appears with different contrast across modalities, it is difficult to extract and fuse the representations of the different modal images. To address these issues, we propose a novel unsupervised multi-modal adversarial registration framework that uses image-to-image translation to map a medical image from one modality to another, so that well-defined uni-modal metrics can be used to train the models. Within this framework, we propose two improvements to promote accurate registration. First, to prevent the translation network from learning spatial deformation, we propose a geometry-consistent training scheme that encourages it to learn the modality mapping alone. Second, we propose a novel semi-shared multi-scale registration network that extracts features of multi-modal images effectively and predicts multi-scale registration fields in a coarse-to-fine manner to accurately register large deformation areas. Extensive experiments on brain and pelvic datasets demonstrate the superiority of the proposed method over existing methods and reveal its great potential for clinical application.
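No reference implementation is attached to this record either; the sketch below shows one plausible reading of the geometry-consistent training scheme, assuming a fixed 90-degree rotation as the geometric transform (`translator` stands for any image-to-image network; the paper's actual transforms and loss weighting may differ).

```python
# Hedged sketch (PyTorch assumed): a geometry-consistency penalty.
import torch
import torch.nn.functional as F

def geometry_consistency_loss(translator, x):
    """Translating then rotating should match rotating then translating;
    any gap means the translator encodes spatial deformation instead of
    a pure modality mapping, which is what the scheme discourages."""
    rot = lambda t: torch.rot90(t, k=1, dims=(-2, -1))  # fixed 90-degree transform
    return F.l1_loss(rot(translator(x)), translator(rot(x)))
```

In training, such a term would be added to the usual adversarial and translation losses with its own weight, leaving the registration network free to model the actual deformation.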


Subject(s)
Brain; Magnetic Resonance Imaging; Humans; Magnetic Resonance Imaging/methods; Brain/diagnostic imaging; Image Processing, Computer-Assisted/methods