Deep learning method for magnetic resonance imaging fluid-attenuated inversion recovery image synthesis
J. biomed. eng / Sheng wu yi xue gong cheng xue za zhi / 生物医学工程学杂志; (6): 903-911, 2023.
Article in Chinese | WPRIM | ID: wpr-1008915
Responsible library: WPRO
ABSTRACT
Magnetic resonance imaging (MRI) can acquire multi-modal images with different contrasts, providing rich information for clinical diagnosis. However, some contrasts may not be scanned, or the acquired images may fail to meet diagnostic requirements, because of poor patient cooperation or limited scanning conditions. Image synthesis techniques have become a way to compensate for such missing or degraded images. In recent years, deep learning has been widely used in the field of MRI synthesis. This paper proposes a synthesis network based on multi-modal fusion: feature encoders first encode the features of multiple unimodal images separately, a feature fusion module then fuses the features of the different modalities, and the network finally generates the target-modality image. A dynamically weighted combined loss function based on the spatial domain and the K-space domain is introduced to improve the similarity measure between the target image and the predicted image. Experimental validation and quantitative comparison show that the proposed multi-modal fusion deep learning network can effectively synthesize high-quality MRI fluid-attenuated inversion recovery (FLAIR) images. In summary, the proposed method can shorten the patient's MRI scanning time and address the clinical problem of FLAIR images that are missing or of insufficient quality for diagnosis.
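The abstract does not give implementation details of the combined loss. As a rough illustration only, the sketch below shows one way a spatial-domain plus K-space loss could be written in PyTorch; the function name, the fixed `k_weight` parameter, and the use of L1 distance are assumptions, and the paper's dynamic weighting scheme is not reproduced here.

```python
# Minimal sketch (not the authors' code) of a combined spatial + K-space loss,
# assuming predicted and target FLAIR images as real tensors of shape (B, 1, H, W).
import torch
import torch.nn.functional as F

def combined_loss(pred: torch.Tensor, target: torch.Tensor, k_weight: float = 0.5) -> torch.Tensor:
    """L1 loss in the image (spatial) domain plus L1 loss on K-space magnitudes.

    `k_weight` is a fixed stand-in for the paper's dynamic weighting, whose
    exact schedule is not described in the abstract.
    """
    # Spatial-domain term: pixel-wise L1 between predicted and target images.
    spatial = F.l1_loss(pred, target)

    # K-space term: compare the 2-D Fourier transforms of the two images.
    pred_k = torch.fft.fft2(pred)
    target_k = torch.fft.fft2(target)
    kspace = F.l1_loss(torch.abs(pred_k), torch.abs(target_k))

    return (1.0 - k_weight) * spatial + k_weight * kspace
```

With `k_weight = 0.5` the two terms contribute equally; a dynamic scheme, as described in the abstract, would adjust this balance during training.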
Full text: 1
Database: WPRIM
Main subject: Image Processing, Computer-Assisted / Magnetic Resonance Imaging / Deep Learning
Limit: Humans
Language: Chinese
Journal: J. biomed. eng / Sheng wu yi xue gong cheng xue za zhi
Publication year: 2023
Document type: Article