Results 1 - 2 of 2
1.
Article in English | MEDLINE | ID: mdl-37027614

ABSTRACT

The combination of augmented reality (AR) and medicine is an important trend in current research. The powerful display and interaction capabilities of AR systems can assist doctors in performing more complex operations. Because the tooth is an exposed rigid structure, dental AR is an active research direction with clear application potential. However, none of the existing dental AR solutions are designed for wearable AR devices such as AR glasses. Moreover, these methods rely on high-precision scanning equipment or auxiliary positioning markers, which greatly increases the operational complexity and cost of clinical AR. In this work, we propose ImTooth, a simple and accurate dental AR system driven by a neural implicit model and adapted for AR glasses. Building on the modeling capabilities and differentiable optimization properties of state-of-the-art neural implicit representations, our system fuses reconstruction and registration in a single network, greatly simplifying existing dental AR solutions and enabling reconstruction, registration, and interaction. Specifically, our method learns a scale-preserving voxel-based neural implicit model from multi-view images captured from a textureless plaster model of the tooth. In addition to color and surface geometry, our representation also learns a consistent edge feature. By leveraging depth and edge information, our system can register the model to real images without additional training. In practice, our system uses a single Microsoft HoloLens 2 as the only sensor and display device. Experiments show that our method reconstructs high-precision models and achieves accurate registration, and that it is robust to weak, repetitive, and inconsistent textures. We also show that our system can be easily integrated into dental diagnostic and therapeutic procedures, such as bracket placement guidance.
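The registration idea summarized in this abstract, aligning a rigid model to observations by differentiable optimization against an implicit surface, can be illustrated with a small self-contained toy. The sketch below is not the ImTooth pipeline: it replaces the learned neural implicit model with an SDF sampled on a voxel grid, replaces the depth/edge image cues with 3D surface points, and the function names (axis_angle_to_matrix, query_sdf) are introduced here only for illustration. It shows how trilinear SDF interpolation plus automatic differentiation lets gradient descent recover a rigid pose.

```python
# Toy sketch: rigid registration of observed surface points against a
# voxel-based SDF by differentiable optimization (illustrative only).
import torch
import torch.nn.functional as F

def skew(k):
    zero = torch.zeros((), dtype=k.dtype)
    return torch.stack([
        torch.stack([zero, -k[2], k[1]]),
        torch.stack([k[2], zero, -k[0]]),
        torch.stack([-k[1], k[0], zero]),
    ])

def axis_angle_to_matrix(w):
    """Rodrigues formula; differentiable w.r.t. the axis-angle vector w."""
    theta = torch.sqrt((w * w).sum() + 1e-12)
    K = skew(w / theta)
    eye = torch.eye(3, dtype=w.dtype)
    return eye + torch.sin(theta) * K + (1 - torch.cos(theta)) * (K @ K)

def query_sdf(sdf_grid, pts):
    """Trilinearly interpolate SDF values at points in [-1, 1]^3."""
    grid = pts.view(1, -1, 1, 1, 3)          # grid_sample expects (x, y, z)
    vals = F.grid_sample(sdf_grid, grid, mode="bilinear", align_corners=True)
    return vals.view(-1)

# Toy "model": the SDF of a sphere of radius 0.5 sampled on a 64^3 grid.
res = 64
lin = torch.linspace(-1.0, 1.0, res)
zz, yy, xx = torch.meshgrid(lin, lin, lin, indexing="ij")   # (D, H, W) order
sdf_grid = (torch.sqrt(xx**2 + yy**2 + zz**2) - 0.5).view(1, 1, res, res, res)

# Toy "observation": surface points moved by an unknown rigid transform.
dirs = torch.randn(2000, 3)
pts_model = 0.5 * dirs / dirs.norm(dim=1, keepdim=True)
true_R = axis_angle_to_matrix(torch.tensor([0.0, 0.2, 0.0]))
pts_obs = pts_model @ true_R.T + torch.tensor([0.10, -0.05, 0.0])

# Recover a pose by pushing the transformed points onto the SDF zero level set.
w = torch.zeros(3, requires_grad=True)
t = torch.zeros(3, requires_grad=True)
opt = torch.optim.Adam([w, t], lr=1e-2)
for step in range(300):
    p = pts_obs @ axis_angle_to_matrix(w).T + t
    loss = query_sdf(sdf_grid, p).abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
print(f"residual after optimization: {loss.item():.4f}")
```

The same point-to-implicit residual, driven by depth or edge observations instead of synthetic points, is the kind of objective a differentiable implicit representation makes straightforward to minimize.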

2.
Article in English | MEDLINE | ID: mdl-36459607

ABSTRACT

Virtual content creation and interaction play an important role in modern 3D applications. Recovering detailed 3D models from real scenes can significantly expand the scope of such applications and has been studied for decades in the computer vision and computer graphics communities. In this work, we propose Vox-Surf, a voxel-based implicit surface representation. Vox-Surf divides space into finite sparse voxels, where each voxel is a basic geometry unit that stores geometry and appearance information on its corner vertices. Owing to the sparsity inherited from the voxel representation, Vox-Surf is suitable for almost any scene and can be easily trained end-to-end from multi-view images. We use a progressive training procedure that gradually culls empty voxels and keeps only valid voxels for further optimization, which greatly reduces the number of sample points and improves inference speed. Experiments show that Vox-Surf learns fine surface details and accurate colors with less memory and faster rendering than previous methods. The resulting fine voxels can also serve as bounding volumes for collision detection, which is useful for 3D interaction. We also show potential applications of Vox-Surf in scene editing and augmented reality. The source code is publicly available at https://github.com/zju3dv/Vox-Surf.
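As a rough illustration of the data structure this abstract describes (sparse voxels with features stored on shared corner vertices, trilinear interpolation at query points, and progressive culling of empty voxels), here is a small NumPy sketch. It is not the released Vox-Surf implementation (see the GitHub link above): the class name, culling threshold, and random corner features are placeholders for illustration, and in the actual method the corner features are learned rather than random.

```python
# Toy sparse voxel grid with corner-vertex features, trilinear interpolation,
# and progressive culling of voxels far from the surface (illustrative only).
import numpy as np

class SparseVoxelGrid:
    def __init__(self, voxel_size, feat_dim, rng=None):
        self.voxel_size = voxel_size
        self.feat_dim = feat_dim
        self.rng = rng or np.random.default_rng(0)
        self.voxels = set()    # occupied voxel indices (ix, iy, iz)
        self.corners = {}      # corner index -> feature vector (shared)

    def add_voxel(self, idx):
        self.voxels.add(idx)
        ix, iy, iz = idx
        for dx in (0, 1):
            for dy in (0, 1):
                for dz in (0, 1):
                    c = (ix + dx, iy + dy, iz + dz)
                    if c not in self.corners:
                        self.corners[c] = self.rng.standard_normal(self.feat_dim)

    def query(self, p):
        """Trilinearly interpolate the 8 corner features of the voxel
        containing p; returns None if p lies in empty space."""
        idx = tuple(np.floor(np.asarray(p) / self.voxel_size).astype(int))
        if idx not in self.voxels:
            return None
        local = np.asarray(p) / self.voxel_size - np.asarray(idx)  # in [0, 1)^3
        feat = np.zeros(self.feat_dim)
        for dx in (0, 1):
            for dy in (0, 1):
                for dz in (0, 1):
                    w = ((local[0] if dx else 1 - local[0]) *
                         (local[1] if dy else 1 - local[1]) *
                         (local[2] if dz else 1 - local[2]))
                    feat += w * self.corners[(idx[0] + dx, idx[1] + dy, idx[2] + dz)]
        return feat

    def cull(self, sdf_fn, threshold):
        """Progressive culling: drop voxels whose centre is far from the
        surface according to the current SDF estimate, then drop orphaned
        corner features."""
        keep = set()
        for idx in self.voxels:
            centre = (np.asarray(idx) + 0.5) * self.voxel_size
            if abs(sdf_fn(centre)) <= threshold:
                keep.add(idx)
        self.voxels = keep
        used = {(i + dx, j + dy, k + dz) for (i, j, k) in keep
                for dx in (0, 1) for dy in (0, 1) for dz in (0, 1)}
        self.corners = {c: f for c, f in self.corners.items() if c in used}

# Usage sketch: occupy a coarse grid around the unit sphere, then cull voxels
# that an analytic SDF says are far from the surface.
sphere_sdf = lambda p: np.linalg.norm(p) - 1.0
grid = SparseVoxelGrid(voxel_size=0.25, feat_dim=8)
for i in range(-5, 5):
    for j in range(-5, 5):
        for k in range(-5, 5):
            grid.add_voxel((i, j, k))
before = len(grid.voxels)
grid.cull(sphere_sdf, threshold=0.25)
print(f"voxels: {before} -> {len(grid.voxels)}")
print("feature near the surface:", grid.query(np.array([1.0, 0.02, 0.03]))[:3])
```

Sharing corner features between neighbouring voxels is what keeps the representation compact, and culling restricts both training samples and ray-marching steps to the thin shell of voxels that actually contains the surface.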
