Results 1 - 2 of 2
1.
Annu Int Conf IEEE Eng Med Biol Soc; 2020: 1588-1591, 2020 Jul.
Article in English | MEDLINE | ID: mdl-33018297

ABSTRACT

Simulating medical images such as X-rays is of key interest for reducing radiation exposure in non-diagnostic visualization scenarios. Previous state-of-the-art methods rely on ray tracing, which requires 3D models. To our knowledge, no approach exists for cases where point clouds from depth cameras and other sensors are the only input modality. We propose a method for estimating an X-ray image from a generic point cloud using a conditional generative adversarial network (CGAN). We train a pix2pix CGAN to translate point cloud images into X-ray images using a dataset created with our custom synthetic data generator. Additionally, point clouds of multiple densities are examined to determine the effect of density on the image translation problem. The results show that this type of network can predict X-ray images from point clouds. Higher point cloud densities outperformed the two lowest densities; however, networks trained with high-density point clouds did not differ significantly from those trained with medium densities. We demonstrate that CGANs can be applied to image translation problems in the medical domain and show the feasibility of this approach when 3D models are not available. Further work includes overcoming the occlusion and quality limitations of the generic approach and applying CGANs to other medical image translation problems.
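The pix2pix training used in this abstract optimizes a combined objective: an adversarial term that pushes generated X-rays toward the real-image distribution, plus a weighted L1 term that keeps them close to the ground truth. A minimal NumPy sketch of that standard pix2pix generator loss, assuming the default lambda of 100; all tensors below are illustrative placeholders, not the paper's data:

```python
import numpy as np

def pix2pix_generator_loss(d_fake, fake_xray, real_xray, lam=100.0):
    """Generator loss: fool the discriminator + stay close to the target.

    d_fake    -- discriminator probabilities on generated (input, fake) pairs
    fake_xray -- generated X-ray image
    real_xray -- ground-truth X-ray image
    lam       -- L1 weight (pix2pix uses lambda = 100 by default)
    """
    eps = 1e-12
    adv = -np.mean(np.log(d_fake + eps))          # non-saturating GAN loss
    l1 = np.mean(np.abs(fake_xray - real_xray))   # pixel-wise reconstruction
    return adv + lam * l1

# Toy example with random "images" in [0, 1]:
rng = np.random.default_rng(0)
fake = rng.random((64, 64))
real = rng.random((64, 64))
d_out = np.full((8, 8), 0.5)          # patch-discriminator output, undecided
loss = pix2pix_generator_loss(d_out, fake, real)
print(loss)
```

The heavy L1 weighting is what makes the output structurally faithful to the target X-ray rather than merely plausible to the discriminator.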


Subjects
Neural Networks, Computer; X-Rays
2.
Int J Comput Assist Radiol Surg ; 15(6): 973-980, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32342258

ABSTRACT

PURPOSE: We propose a novel methodology for generating synthetic X-rays from 2D RGB images. This method creates accurate simulations for use in non-diagnostic visualization problems where the only input comes from a generic camera. Traditional methods are restricted to running simulation algorithms on 3D computer models. To address this limitation, we propose a method of synthetic X-ray generation using conditional generative adversarial networks (CGANs). METHODS: We create a custom synthetic X-ray dataset generator that produces image triplets of X-ray images, pose images, and RGB images for natural hand poses sampled from the NYU hand pose dataset. This dataset is used to train two general-purpose CGAN networks, pix2pix and CycleGAN, as well as our novel architecture, pix2xray, which expands the pix2pix architecture to incorporate the hand pose into the network. RESULTS: Our results demonstrate that pix2xray outperforms both pix2pix and CycleGAN in producing higher-quality X-ray images. Our approach achieves the highest similarity metrics, with pix2pix second and CycleGAN worst. Our network performs better in difficult cases involving high occlusion due to occluded poses or large rotations. CONCLUSION: Overall, our work establishes a baseline showing that synthetic X-rays can be simulated from 2D RGB input. We establish the need for additional data, such as the hand pose, to produce clearer results, and show that future research should focus on more specialized architectures to improve overall image clarity and structure.
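The pix2xray idea of feeding the hand pose into the network alongside the RGB image can be sketched as a channel-wise concatenation of the two inputs before they enter the generator. A hypothetical illustration in NumPy, assuming a single-channel pose map; the function name and shapes are assumptions for illustration, not the paper's actual interface:

```python
import numpy as np

def build_conditioned_input(rgb, pose):
    """Stack an RGB image (H, W, 3) with a one-channel pose map (H, W).

    Returns an (H, W, 4) array the generator would consume, so the
    network sees both appearance and pose in a single input tensor.
    """
    if rgb.shape[:2] != pose.shape:
        raise ValueError("RGB and pose images must share spatial dimensions")
    pose = pose[..., np.newaxis]                  # (H, W) -> (H, W, 1)
    return np.concatenate([rgb, pose], axis=-1)   # (H, W, 4)

rgb = np.zeros((128, 128, 3), dtype=np.float32)   # dummy camera frame
pose = np.ones((128, 128), dtype=np.float32)      # dummy pose map
x = build_conditioned_input(rgb, pose)
print(x.shape)
```

Concatenating the conditioning signal at the input, rather than injecting it deeper in the network, keeps the rest of the pix2pix encoder-decoder unchanged, which matches the abstract's description of pix2xray as an expansion of the pix2pix architecture.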


Subjects
Image Processing, Computer-Assisted/methods; Radiography/methods; X-Rays; Algorithms; Computer Simulation; Humans