Results 1 - 2 of 2
1.
Med Image Anal; 98: 103322, 2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39197301

ABSTRACT

In this study, we address critical barriers hindering the widespread adoption of surgical navigation in orthopedic surgeries, including time constraints, cost implications, radiation concerns, and integration within the surgical workflow. Our recent work, X23D, demonstrated an approach for generating 3D anatomical models of the spine from only a few intraoperative fluoroscopic images. This approach negates the need for conventional registration-based surgical navigation by creating a direct intraoperative 3D reconstruction of the anatomy. Despite these strides, the practical application of X23D has been limited by a significant domain gap between synthetic training data and real intraoperative images. In response, we devised a novel data collection protocol to assemble a paired dataset consisting of synthetic and real fluoroscopic images captured from identical perspectives. Leveraging this unique dataset, we refined our deep learning model through transfer learning, effectively bridging the domain gap between synthetic and real X-ray data. We introduce an approach combining style transfer with the curated paired dataset: real X-ray images are transformed into the synthetic domain, enabling the in-silico-trained X23D model to achieve high accuracy in real-world settings. Our results demonstrate that the refined model can rapidly generate accurate 3D reconstructions of the entire lumbar spine from as few as three intraoperative fluoroscopic shots. The enhanced model reached sufficient accuracy, achieving an 84% F1 score and matching the benchmark previously set with synthetic data alone. Moreover, with a computational time of just 81.1 ms, our approach offers the real-time capability vital for integration into active surgical procedures. By investigating optimal imaging setups and view-angle dependencies, we further validated the practicality and reliability of our system in a clinical environment.
Our research represents a promising advancement in intraoperative 3D reconstruction. This innovation has the potential to enhance intraoperative surgical planning, navigation, and surgical robotics.
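The 84% F1 score reported above can be made concrete with a minimal sketch of a voxel-wise F1 computation between a predicted and a ground-truth binary occupancy grid. This is an illustrative assumption about the evaluation, not the paper's exact protocol; the function name `f1_score_voxels` and the toy grids are hypothetical.

```python
import numpy as np

def f1_score_voxels(pred: np.ndarray, gt: np.ndarray) -> float:
    """Voxel-wise F1 (equivalently, Dice) between two binary occupancy grids."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()    # voxels occupied in both
    fp = np.logical_and(pred, ~gt).sum()   # predicted but not in ground truth
    fn = np.logical_and(~pred, gt).sum()   # missed ground-truth voxels
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy example: ground truth has a 2x2x2 block, prediction over-segments it
gt = np.zeros((4, 4, 4), dtype=bool)
gt[1:3, 1:3, 1:3] = True            # 8 occupied voxels
pred = np.zeros_like(gt)
pred[1:3, 1:3, 1:4] = True          # 12 predicted voxels, 8 overlapping
print(round(f1_score_voxels(pred, gt), 2))  # 0.8
```

Because F1 balances precision and recall, it penalizes both over- and under-segmentation of the reconstructed vertebrae symmetrically.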

2.
Med Image Anal; 91: 103027, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37992494

ABSTRACT

Established surgical navigation systems for pedicle screw placement have been proven to be accurate, but still reveal limitations in registration or surgical guidance. Registration of preoperative data to the intraoperative anatomy remains a time-consuming, error-prone task that includes exposure to harmful radiation. Surgical guidance through conventional displays has well-known drawbacks, as information cannot be presented in-situ and from the surgeon's perspective. Consequently, radiation-free and more automatic registration methods with subsequent surgeon-centric navigation feedback are desirable. In this work, we present a marker-less approach that automatically solves the registration problem for lumbar spinal fusion surgery in a radiation-free manner. A deep neural network was trained to segment the lumbar spine and simultaneously predict its orientation, yielding an initial pose for the preoperative models, which is then refined for each vertebra individually and updated in real time with GPU acceleration while handling surgeon occlusions. Intuitive surgical guidance is provided through integration into an augmented reality based navigation system. The registration method was verified on a public dataset with a median registration success rate of 100%, a median target registration error of 2.7 mm, a median screw trajectory error of 1.6°, and a median screw entry point error of 2.3 mm. Additionally, the whole pipeline was validated in an ex-vivo surgery, yielding 100% screw placement accuracy and a median target registration error of 1.0 mm. Our results meet clinical demands and emphasize the potential of RGB-D data for fully automatic registration approaches in combination with augmented reality guidance.
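The target registration error (TRE) quoted above is, in general, the distance between anatomical target points after applying the estimated registration transform and their ground-truth locations. A minimal sketch, assuming a 4x4 homogeneous transform and corresponding (N, 3) point sets; the function name and toy values are hypothetical:

```python
import numpy as np

def target_registration_error(T: np.ndarray, points: np.ndarray,
                              gt_points: np.ndarray) -> float:
    """Median Euclidean distance (mm) between registered and ground-truth targets.

    T         : 4x4 homogeneous transform estimated by the registration
    points    : (N, 3) target points in the source frame
    gt_points : (N, 3) corresponding ground-truth points in the target frame
    """
    homog = np.hstack([points, np.ones((len(points), 1))])  # to homogeneous coords
    mapped = (T @ homog.T).T[:, :3]                          # apply transform
    return float(np.median(np.linalg.norm(mapped - gt_points, axis=1)))

# Toy check: a pure 2 mm translation error along x gives a 2.0 mm TRE
T = np.eye(4)
T[0, 3] = 2.0
pts = np.array([[0.0, 0.0, 0.0], [10.0, 5.0, 1.0]])
print(target_registration_error(T, pts, pts))  # 2.0
```

Reporting the median rather than the mean, as the abstract does, makes the metric robust to a few outlier vertebrae with poor pose refinement.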


Subject(s)
Pedicle Screws, Spinal Fusion, Surgery, Computer-Assisted, Humans, Spine/diagnostic imaging, Spine/surgery, Surgery, Computer-Assisted/methods, Lumbar Vertebrae/diagnostic imaging, Lumbar Vertebrae/surgery, Spinal Fusion/methods