1.
J Nucl Med; 63(3): 468-475, 2022 Mar.
Article in English | MEDLINE | ID: mdl-34301782

ABSTRACT

Attenuation correction remains a challenge in pelvic PET/MRI. In addition to the segmentation/model-based approaches, deep learning methods have shown promise in synthesizing accurate pelvic attenuation maps (µ-maps). However, these methods often misclassify air pockets in the digestive tract, potentially introducing bias in the reconstructed PET images. The aims of this work were to develop deep learning-based methods to automatically segment air pockets and generate pseudo-CT images from CAIPIRINHA-accelerated MR Dixon images. Methods: A convolutional neural network (CNN) was trained to segment air pockets using 3-dimensional CAIPIRINHA-accelerated MR Dixon datasets from 35 subjects and was evaluated against semiautomated segmentations. A separate CNN was trained to synthesize pseudo-CT µ-maps from the Dixon images. Its accuracy was evaluated by comparing the deep learning-, model-, and CT-based µ-maps using data from 30 of the subjects. Finally, the impact of different µ-maps and air pocket segmentation methods on the PET quantification was investigated. Results: Air pockets segmented using the CNN agreed well with semiautomated segmentations, with a mean Dice similarity coefficient of 0.75. The volumetric similarity score between the two segmentations was 0.85 ± 0.14. The mean absolute relative changes with respect to the CT-based µ-maps were 2.6% and 5.1% in the whole pelvis for the deep learning-based and model-based µ-maps, respectively. The average relative change between PET images reconstructed with deep learning-based and CT-based µ-maps was 2.6%. Conclusion: We developed a deep learning-based method to automatically segment air pockets from CAIPIRINHA-accelerated Dixon images, with accuracy comparable to that of semiautomated segmentations. The µ-maps synthesized using a deep learning-based method from CAIPIRINHA-accelerated Dixon images were more accurate than those generated with the model-based approach available on integrated PET/MRI scanners.
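The two overlap metrics reported above (Dice similarity coefficient and volumetric similarity) can be computed directly from a pair of binary segmentation masks. A minimal NumPy sketch; the function names are illustrative and not from the paper:

```python
import numpy as np

def dice_coefficient(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient (DSC) between two binary masks:
    2|A ∩ B| / (|A| + |B|). Defined as 1.0 when both masks are empty."""
    a, b = a.astype(bool), b.astype(bool)
    denom = int(a.sum()) + int(b.sum())
    return float(2.0 * np.logical_and(a, b).sum() / denom) if denom else 1.0

def volumetric_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Volumetric similarity: 1 - |Va - Vb| / (Va + Vb), where Va and Vb
    are the voxel counts of the two masks."""
    va, vb = int(a.astype(bool).sum()), int(b.astype(bool).sum())
    return 1.0 - abs(va - vb) / (va + vb) if (va + vb) else 1.0
```

Note that volumetric similarity compares only total volumes, so two non-overlapping masks of equal size score 1.0; reporting it alongside Dice, as the abstract does, separates volume agreement from spatial overlap.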


Subjects
Deep Learning; Humans; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Pelvis/diagnostic imaging; Positron-Emission Tomography/methods; Tomography, X-Ray Computed
2.
Front Neurol; 12: 742654, 2021.
Article in English | MEDLINE | ID: mdl-35002915

ABSTRACT

Objective: This study aimed to prove the concept of a new optical video-based system to measure Parkinson's disease (PD) features remotely using a standard, accessible webcam. Methods: We consecutively enrolled a cohort of 42 patients with PD and healthy subjects (HSs). The participants were recorded performing MDS-UPDRS III bradykinesia upper limb tasks with a computer webcam. The video frames were processed using artificial intelligence algorithms that track hand movements. The video-extracted features were correlated with clinical ratings on the Movement Disorder Society revision of the Unified Parkinson's Disease Rating Scale and with inertial measurement units (IMUs). The developed classifiers were validated on an independent dataset. Results: We found significant differences in motor performance between the patients with PD and the HSs in all bradykinesia upper limb motor tasks. The best-performing classifiers were unilateral finger tapping and hand movement speed. The model correlated both with the IMUs for quantitative assessment of motor function and with the clinical scales, demonstrating concurrent validity with the existing methods. Conclusions: We present the proof of concept of a novel webcam-based technology to remotely detect parkinsonian features using artificial intelligence. This method achieved high preliminary diagnostic accuracy and could easily be expanded to other disease manifestations to support PD management.
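A hand-movement-speed feature of the kind the classifiers above rely on reduces to the rate of change of a tracked distance over time. A hypothetical sketch, assuming a per-frame thumb-index fingertip distance series has already been extracted from the video (the study's actual feature definitions are not given in the abstract):

```python
import numpy as np

def mean_movement_speed(distances: np.ndarray, fps: float) -> float:
    """Mean absolute frame-to-frame change in a tracked landmark distance,
    scaled by the frame rate to yield a speed in distance units per second.
    Slower, smaller changes (bradykinesia) give lower values."""
    return float(np.mean(np.abs(np.diff(distances))) * fps)
```

In practice such a feature would be computed per task repetition and fed, together with amplitude and rhythm features, into the classifiers described above.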

3.
J Nucl Med; 60(3): 429-435, 2019 Mar.
Article in English | MEDLINE | ID: mdl-30166357

ABSTRACT

Whole-body attenuation correction (AC) is still challenging in combined PET/MR scanners. We describe Dixon-VIBE Deep Learning (DIVIDE), a deep-learning network that allows synthesizing pelvis pseudo-CT maps based only on the standard Dixon volumetric interpolated breath-hold examination (Dixon-VIBE) images currently acquired for AC in some commercial scanners. Methods: We propose a network that maps between the four 2-dimensional (2D) Dixon MR images (water, fat, in-phase, and out-of-phase) and their corresponding 2D CT image. In contrast to previous methods, we used transposed convolutions to learn the up-sampling parameters, we used whole 2D slices to provide context information, and we pretrained the network with brain images. Twenty-eight datasets obtained from 19 patients who underwent PET/CT and PET/MR examinations were used to evaluate the proposed method. We assessed the accuracy of the µ-maps and reconstructed PET images by performing voxel- and region-based analysis comparing the SUVs (in g/mL) obtained after AC using the Dixon-VIBE (PETDixon), DIVIDE (PETDIVIDE), and CT-based (PETCT) methods. Additionally, the bias in quantification was estimated in synthetic lesions defined in the prostate, rectum, pelvis, and spine. Results: Absolute mean relative change values relative to CT AC were lower than 2% on average for the DIVIDE method in every region of interest except for bone tissue, where it was lower than 4% and 6.75 times smaller than the relative change of the Dixon method. There was an excellent voxel-by-voxel correlation between PETCT and PETDIVIDE (R2 = 0.9998, P < 0.01). 
The Bland-Altman plot between PETCT and PETDIVIDE showed that the average of the differences and the variability were lower (mean PETCT-PETDIVIDE SUV, 0.0003; PETCT-PETDIVIDE SD, 0.0094; 95% confidence interval, [-0.0180, 0.0188]) than the average of differences between PETCT and PETDixon (mean PETCT-PETDixon SUV, 0.0006; PETCT-PETDixon SD, 0.0264; 95% confidence interval, [-0.0510, 0.0524]). Statistically significant changes in PET data quantification were observed between the 2 methods in the synthetic lesions, with the largest improvement in femur and spine lesions. Conclusion: The DIVIDE method can accurately synthesize a pelvis pseudo-CT scan from standard Dixon-VIBE images, allowing for accurate AC in combined PET/MR scanners. Additionally, our implementation allows rapid pseudo-CT synthesis, making it suitable for routine applications and even allowing retrospective processing of Dixon-VIBE data.
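The Bland-Altman statistics quoted above (mean of the paired differences, their SD, and a 95% interval of mean ± 1.96 SD) can be reproduced generically. A sketch assuming two paired arrays of voxel- or region-level SUVs; this is the standard formulation, not code from the paper:

```python
import numpy as np

def bland_altman(ref: np.ndarray, test: np.ndarray):
    """Bland-Altman agreement statistics for paired measurements.
    Returns the mean difference (bias), the sample SD of the differences,
    and the 95% limits of agreement (mean ± 1.96 SD)."""
    diff = np.asarray(ref, dtype=float) - np.asarray(test, dtype=float)
    mean_diff = float(diff.mean())
    sd = float(diff.std(ddof=1))  # sample SD, as used for limits of agreement
    return mean_diff, sd, (mean_diff - 1.96 * sd, mean_diff + 1.96 * sd)
```

A narrower SD (and hence tighter limits) for PETDIVIDE vs. PETCT than for PETDixon vs. PETCT is exactly what the numbers in the abstract report.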


Subjects
Deep Learning; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging; Multimodal Imaging; Positron-Emission Tomography; Tomography, X-Ray Computed; Adult; Aged; Aged, 80 and over; Humans; Male; Middle Aged; Prostatic Neoplasms/diagnostic imaging