Results 1 - 2 of 2
1.
Phys Eng Sci Med; 47(1): 73-85, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37870728

ABSTRACT

Dedicated breast positron emission tomography (db-PET) is more sensitive than whole-body positron emission tomography and is therefore expected to detect early-stage breast cancer and to assess treatment efficacy. However, sensitivity decreases on the chest wall side at the edge of the detector, resulting in a relative increase in noise and a decrease in detectability. Longer acquisition times and injection of larger amounts of tracer improve image quality but increase the burden on the patient. Therefore, this study aimed to improve the quality of images reconstructed from shorter-acquisition-time data using deep learning, which has recently been widely used for noise reduction. In the proposed method, a multi-adaptive denoising filter-bank structure was introduced by training a separate model for each detector region, because the noise characteristics of db-PET images vary with location. Input and ideal (target) images were reconstructed from 1- and 7-min acquisitions, respectively, using list-mode data. The deep learning model used residual learning with an encoder-decoder structure. The image quality achieved by the proposed method was superior to that of existing noise-reduction filters such as Gaussian and non-local means filters. Furthermore, there was no significant difference in the maximum standardized uptake value before and after filtering with the proposed method. Taken together, the proposed method is useful as a noise-reduction filter for db-PET images, as it can reduce the scan time, the amount of radiotracer, and thus the patient burden in db-PET examinations. A hedged code sketch of such a denoising network follows this record.


Subjects
Breast, Positron-Emission Tomography, Humans, Positron-Emission Tomography/methods, Thorax
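
A minimal sketch of the denoising approach described in the abstract above: residual learning with an encoder-decoder, and one network per detector region to form the multi-adaptive filter bank. This assumes PyTorch; the class name ResidualDenoiser, the layer sizes, and the region labels are illustrative assumptions, not the authors' published architecture.

import torch
import torch.nn as nn

class ResidualDenoiser(nn.Module):
    """Encoder-decoder that estimates the noise map and subtracts it
    from the input (residual learning)."""
    def __init__(self, ch=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(ch * 2, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, 3, padding=1),
        )

    def forward(self, x):
        noise = self.decoder(self.encoder(x))  # estimated noise component
        return x - noise                       # residual connection

# One network per detector region forms the "filter bank"; the region
# names below are hypothetical placeholders.
filter_bank = {name: ResidualDenoiser() for name in ("chest_wall_edge", "center")}

def denoise(slice_1min, region):
    """Apply the region-specific model to a 2-D slice reconstructed
    from 1-min data; training against 7-min targets is omitted here."""
    with torch.no_grad():
        x = slice_1min.unsqueeze(0).unsqueeze(0)  # add batch/channel dims
        return filter_bank[region](x).squeeze()

In training, each region's network would be fit with an L1 or L2 loss between its denoised 1-min reconstruction and the corresponding 7-min reconstruction of the same list-mode data.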
2.
Nihon Hoshasen Gijutsu Gakkai Zasshi; 77(12): 1416-1423, 2021.
Article in Japanese | MEDLINE | ID: mdl-34924478

ABSTRACT

In magnetic resonance imaging (MRI) examinations, there is a trade-off among acquisition time, resolution, and signal-to-noise ratio (SNR). High-resolution images are expected to improve the detection of small lesions, but ensuring a high SNR requires longer imaging times. If the number of signal averages is reduced to shorten the imaging time, the spatial resolution, in terms of slice thickness and in-plane resolution, must be relaxed to ensure an adequate SNR. A combination of acceleration and denoising using deep learning has been reported previously; however, although it may be useful as an on-board noise-reduction technique on the scanner, it cannot be used for general purposes. We studied the effect of recently developed general-purpose, image-based noise-reduction software (iNoir) on MRI by measuring the SNR and other parameters such as contrast, resolution, and the noise power spectrum (NPS). The NPS was influenced by the processing mode, whereas the contrast was unaffected. Regarding resolution, edge information was retained and was preserved better by iNoir 3D than by iNoir 2D. However, as the strength of the noise-reduction processing increased, the edge slope in low-contrast areas was smoothed, giving a visually blurred impression. A hedged sketch of the SNR and NPS measurements follows this record.


Subjects
Magnetic Resonance Imaging, Software, Signal-To-Noise Ratio
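
The abstract above evaluates noise-reduction software by measuring SNR, contrast, resolution, and the NPS. The sketch below shows conventional subtraction-method estimates of SNR and a 2-D NPS from two repeated acquisitions of a uniform phantom, assuming NumPy; the function names, ROI sizes, and pixel spacing are illustrative and not taken from the paper's measurement protocol.

import numpy as np

def snr_subtraction(img1, img2, roi):
    """SNR by the subtraction method: mean signal in an ROI divided by
    the noise SD of the difference image, corrected by sqrt(2)."""
    y, x, h, w = roi
    signal = img1[y:y + h, x:x + w].mean()
    noise_sd = (img1 - img2)[y:y + h, x:x + w].std() / np.sqrt(2)
    return signal / noise_sd

def nps_2d(img1, img2, roi_size=64, pixel_mm=1.0):
    """2-D NPS averaged over non-overlapping ROIs of the difference image."""
    diff = (img1 - img2) / np.sqrt(2)      # compensate doubled noise power
    ny, nx = diff.shape
    spectra = []
    for y0 in range(0, ny - roi_size + 1, roi_size):
        for x0 in range(0, nx - roi_size + 1, roi_size):
            roi = diff[y0:y0 + roi_size, x0:x0 + roi_size]
            roi = roi - roi.mean()         # remove the DC term
            spectra.append(np.abs(np.fft.fft2(roi)) ** 2)
    nps = np.mean(spectra, axis=0) * (pixel_mm ** 2) / (roi_size ** 2)
    return np.fft.fftshift(nps)            # zero frequency at the center

Applying these metrics to images processed at different strengths or modes is one way to reproduce the kind of comparison the abstract reports, namely that the NPS changes with the processing mode while contrast is unaffected.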