Results 1 - 8 of 8
1.
Int J Comput Assist Radiol Surg ; 19(7): 1301-1312, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38709423

ABSTRACT

PURPOSE: Specialized robotic and surgical tools are increasing the complexity of operating rooms (ORs), requiring elaborate preparation, especially when techniques or devices are to be used for the first time. Spatial planning can improve efficiency and identify procedural obstacles ahead of time, but real ORs offer little availability for optimizing space utilization. Methods for creating reconstructions of physical setups, i.e., digital twins, are needed to enable immersive spatial planning of such complex environments in virtual reality. METHODS: We present a neural rendering-based method for creating immersive digital twins of complex medical environments and devices from casual video capture, enabling spatial planning of surgical scenarios. To evaluate our approach, we recreate two operating rooms and ten objects through neural reconstruction, then conduct a user study with 21 graduate students carrying out planning tasks in the resulting virtual environment. We analyze task load, presence, and perceived utility, as well as exploration and interaction behavior, compared to low-visual-complexity versions of the same environments. RESULTS: Results show significantly increased perceived utility and presence using the neural reconstruction-based environments, combined with higher perceived workload and more exploratory behavior. There is no significant difference in interactivity. CONCLUSION: We explore the feasibility of using modern reconstruction techniques to create digital twins of complex medical environments and objects. Without requiring expert knowledge or specialized hardware, users can create, explore, and interact with objects in virtual environments. Results indicate benefits such as high perceived utility while remaining technically approachable, suggesting promise for spatial planning and beyond.
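For readers unfamiliar with how neural rendering methods form an image, the snippet below sketches the standard volume rendering quadrature that NeRF-style reconstruction pipelines share. It is a generic illustration of that one step, not code from the paper; all names and values are ours.

```python
import numpy as np

def composite(densities, colors, deltas):
    """NeRF-style quadrature along one ray:
    C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i,
    where T_i = prod_{j<i} exp(-sigma_j * delta_j) is transmittance."""
    alpha = 1.0 - np.exp(-densities * deltas)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = trans * alpha
    return (weights[:, None] * colors).sum(axis=0)

if __name__ == "__main__":
    n = 64
    sigma = np.linspace(0.0, 2.0, n)           # made-up densities along the ray
    rgb = np.tile([[0.8, 0.6, 0.5]], (n, 1))   # made-up per-sample colors
    delta = np.full(n, 0.05)                   # sample spacing
    print(composite(sigma, rgb, delta))        # rendered pixel color
```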


Subjects
Operating Rooms, Virtual Reality, Humans, User-Computer Interface, Female, Male, Adult, Feasibility Studies, Robotic Surgical Procedures/methods
2.
Int J Comput Assist Radiol Surg ; 19(6): 1165-1173, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38619790

ABSTRACT

PURPOSE: The expanding capabilities of surgical systems bring with them increasing complexity in the interfaces that humans use to control them. Robotic C-arm X-ray imaging systems, for instance, often require manipulation of independent axes via joysticks, while higher-level control options hide inside device-specific menus. The complexity of these interfaces hinders "ready-to-hand" use of high-level functions. Natural language offers a flexible, familiar interface for surgeons to express their desired outcome rather than remembering the steps necessary to achieve it, enabling direct access to task-aware, patient-specific C-arm functionality. METHODS: We present an English-language voice interface for controlling a robotic X-ray imaging system with task-aware functions for pelvic trauma surgery. Our fully integrated system uses a large language model (LLM) to convert natural spoken commands into machine-readable instructions, enabling low-level commands like "Tilt back a bit" to increase the angular tilt, or patient-specific directions like "Go to the obturator oblique view of the right ramus," based on automated image analysis. RESULTS: We evaluate our system with 212 prompts provided by an attending physician, for which the system performed satisfactory actions 97% of the time. To test the fully integrated system, we conducted a real-time study in which an attending physician placed orthopedic hardware along desired trajectories through an anthropomorphic phantom, interacting solely with the X-ray system via voice. CONCLUSION: Voice interfaces offer a convenient, flexible way for surgeons to manipulate C-arms based on desired outcomes rather than device-specific processes. As LLMs grow increasingly capable, so too will their applications in supporting higher-level interactions with surgical assistance systems.
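The paper does not publish its prompt or instruction schema, so the sketch below only illustrates the general pattern: a system prompt constrains the LLM to emit a machine-readable JSON instruction, which is parsed and validated before being dispatched to the C-arm. The schema fields and names (interpret_command, call_llm) are hypothetical, and the LLM call is stubbed so the example runs offline.

```python
import json

# Hypothetical instruction schema for a robotic C-arm; the field names are
# illustrative, not taken from the paper or any vendor API.
SYSTEM_PROMPT = """You control a robotic C-arm. Reply with one JSON object:
{"action": "relative_move" | "goto_view",
 "axis": "tilt" | "rotation" | null,
 "delta_deg": <number or null>,
 "view": "<named view>" | null}"""

def call_llm(system: str, user: str) -> str:
    """Placeholder for a real chat-completion API call. Returns a canned
    response here so the sketch runs without network access."""
    return '{"action": "relative_move", "axis": "tilt", "delta_deg": -5, "view": null}'

def interpret_command(utterance: str) -> dict:
    """Map a spoken command to a validated, machine-readable instruction."""
    raw = call_llm(SYSTEM_PROMPT, utterance)
    instruction = json.loads(raw)  # fail loudly on malformed model output
    assert instruction["action"] in {"relative_move", "goto_view"}
    return instruction

if __name__ == "__main__":
    print(interpret_command("Tilt back a bit."))
```

Validating the parsed JSON before execution is the key design choice: a hallucinated or malformed instruction is rejected rather than silently moving the imaging system.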


Subjects
Robotic Surgical Procedures, Humans, Robotic Surgical Procedures/methods, Robotic Surgical Procedures/instrumentation, User-Computer Interface, Pelvis/surgery, Natural Language Processing
3.
Int J Comput Assist Radiol Surg ; 19(6): 1213-1222, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38642297

ABSTRACT

PURPOSE: Teamwork in surgery depends on a shared mental model of success, i.e., a common understanding of objectives in the operating room. A shared model leads to increased engagement among team members and is associated with fewer complications and overall better outcomes for patients. However, clinical training typically focuses on role-specific skills, leaving individuals to acquire a shared model indirectly through on-the-job experience. METHODS: We investigate whether virtual reality (VR) cross-training, i.e., exposure to other roles, can enhance a shared mental model for non-surgeons more directly. Our study focuses on X-ray-guided pelvic trauma surgery, a procedure where successful communication depends on the shared model between the surgeon and a C-arm technologist. We present a VR environment supporting both roles and evaluate a cross-training curriculum in which non-surgeons swap roles with the surgeon. RESULTS: Exposure to the surgical task resulted in higher engagement with the C-arm technologist role in VR, as measured by the mental demand and effort expended by participants (p < 0.001). It also had a significant effect on non-surgeons' mental model of the overall task; novice participants' estimation of the mental demand and effort required for the surgeon's task increased after training, while their perception of overall performance decreased (p < 0.05), indicating a gap in understanding based solely on observation. This phenomenon was also present for a professional C-arm technologist. CONCLUSION: Until now, VR applications for clinical training have focused on virtualizing existing curricula. We demonstrate how novel approaches that are not possible outside a virtual environment, such as role swapping, may enhance the shared mental model of surgical teams by contextualizing each individual's role within the overall task in a time- and cost-efficient manner. As workflows grow increasingly sophisticated, we see VR curricula as being able to directly foster a shared model for success, ultimately benefiting patient outcomes through more effective teamwork in surgery.
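The abstract reports paired, within-subject changes in rated demand and effort before and after cross-training. A common way to test such a change is a paired t-test, sketched below with made-up ratings; the abstract does not state which test produced the quoted p-values, so treat this only as an illustration of the comparison being made.

```python
import numpy as np
from scipy import stats

# Illustrative pre/post ratings (e.g., NASA-TLX-style mental demand on a
# 0-100 scale); these values are invented, not the study's data.
pre  = np.array([35, 40, 30, 45, 50, 38, 42, 36, 44, 39])
post = np.array([55, 60, 48, 62, 70, 58, 61, 52, 66, 57])

# Paired test: did the same participants rate demand higher after training?
t, p = stats.ttest_rel(post, pre)
print(f"t = {t:.2f}, p = {p:.4f}")
```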


Subjects
Patient Care Team, Virtual Reality, Humans, Female, Male, Curriculum, Clinical Competence, Adult, Computer-Assisted Surgery/methods, Computer-Assisted Surgery/education, Surgeons/education, Surgeons/psychology
4.
Article in English | MEDLINE | ID: mdl-37555199

ABSTRACT

Robotic X-ray C-arm imaging systems can precisely achieve any position and orientation relative to the patient. However, informing the system exactly what pose corresponds to a desired view is challenging. Currently, these systems are operated by the surgeon using joysticks, but this interaction paradigm is not necessarily effective, because users may be unable to efficiently actuate more than a single axis of the system simultaneously. Moreover, novel robotic imaging systems, such as the Brainlab Loop-X, allow for independent source and detector movements, adding even more complexity. To address this challenge, we consider complementary interfaces for the surgeon to command robotic X-ray systems effectively. Specifically, we consider three interaction paradigms: (1) the use of a pointer to specify the principal ray of the desired view relative to the anatomy, (2) the same pointer, but combined with a mixed reality environment to synchronously render digitally reconstructed radiographs from the tool's pose, and (3) the same mixed reality environment but with a virtual X-ray source instead of the pointer. Initial human-in-the-loop evaluation with an attending trauma surgeon indicates that mixed reality interfaces for robotic X-ray system control are promising and may contribute to substantially reducing the number of X-ray images acquired solely during "fluoro hunting" for the desired view or standard plane.
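Paradigms (2) and (3) hinge on rendering a digitally reconstructed radiograph (DRR) from a tool or virtual-source pose. The sketch below is a deliberately simplified parallel-beam DRR, not the cone-beam renderer such a system would actually use; it only illustrates the rotate-then-integrate idea, with all parameters chosen for the toy example.

```python
import numpy as np
from scipy.ndimage import rotate

def drr(volume: np.ndarray, tilt_deg: float, rot_deg: float) -> np.ndarray:
    """Parallel-beam DRR: rotate the attenuation volume to the requested
    view, integrate along the ray direction, and apply Beer-Lambert.
    Real C-arms have cone-beam geometry; this is illustration only."""
    v = rotate(volume, rot_deg, axes=(0, 1), reshape=False, order=1)
    v = rotate(v, tilt_deg, axes=(0, 2), reshape=False, order=1)
    line_integral = v.sum(axis=0)       # exponent in Beer-Lambert law
    return np.exp(-line_integral)       # transmitted intensity image

if __name__ == "__main__":
    ct = np.zeros((64, 64, 64))
    ct[24:40, 24:40, 24:40] = 0.02      # toy attenuating block
    image = drr(ct, tilt_deg=15.0, rot_deg=30.0)
    print(image.shape, image.min(), image.max())
```

In the mixed reality paradigms, re-rendering this image each time the tracked pointer moves is what gives the surgeon synchronous visual feedback about the view they are specifying.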

5.
Int J Comput Assist Radiol Surg ; 18(7): 1201-1208, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37213057

ABSTRACT

PURPOSE: Percutaneous fracture fixation involves multiple X-ray acquisitions to determine adequate tool trajectories in bony anatomy. To reduce time spent adjusting the X-ray imager's gantry, avoid excess acquisitions, and anticipate inadequate trajectories before penetrating bone, we propose an autonomous system for intra-operative feedback that combines robotic X-ray imaging and machine learning for automated image acquisition and interpretation, respectively. METHODS: Our approach reconstructs an appropriate trajectory from a two-image sequence, where the optimal second viewpoint is determined based on analysis of the first image. A deep neural network detects the tool and corridor, here a K-wire and the superior pubic ramus, respectively, in these radiographs. The reconstructed corridor and K-wire pose are compared to determine the likelihood of cortical breach, and both are visualized for the clinician in a mixed reality environment that is spatially registered to the patient and delivered by an optical see-through head-mounted display. RESULTS: We assess the upper bounds on system performance through in silico evaluation across 11 CTs with fractures present, in which the corridor and K-wire are adequately reconstructed. In post hoc analysis of radiographs across 3 cadaveric specimens, our system determines the appropriate trajectory to within 2.8 ± 1.3 mm and 2.7 ± 1.8°. CONCLUSION: An expert user study with an anthropomorphic phantom demonstrates how our autonomous, integrated system requires fewer images and less movement to guide and confirm adequate placement compared to current clinical practice. Code and data are available.
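Reconstructing a trajectory from a two-image sequence reduces, at its core, to triangulating corresponding points (e.g., the detected K-wire tip and entry point) from two calibrated views. The following is a minimal linear (DLT) triangulation sketch with synthetic projection matrices; the paper's full pipeline additionally handles detection and optimal second-view selection, which are not shown here.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2: 3x4 projection matrices; x1, x2: 2D image points (pixels)."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

if __name__ == "__main__":
    K = np.array([[1000., 0., 256.], [0., 1000., 256.], [0., 0., 1.]])
    th = np.deg2rad(30.0)                        # second view rotated 30 deg
    R = np.array([[np.cos(th), 0., np.sin(th)],
                  [0., 1., 0.],
                  [-np.sin(th), 0., np.cos(th)]])
    C2 = np.array([-1.0, 0.0, 0.0])              # second camera center
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, (-R @ C2).reshape(3, 1)])

    def project(P, X):
        x = P @ np.append(X, 1.0)
        return x[:2] / x[2]

    X_true = np.array([0.1, -0.05, 2.0])         # e.g., a K-wire tip
    X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
    print(np.allclose(X_hat, X_true, atol=1e-6)) # True
```

Triangulating two points along the wire yields its 3D pose, which can then be compared against the reconstructed corridor.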


Subjects
Bone Fractures, Three-Dimensional Imaging, Humans, X-Rays, Three-Dimensional Imaging/methods, Fluoroscopy/methods, X-Ray Computed Tomography/methods, Bone Fractures/diagnostic imaging, Bone Fractures/surgery, Fracture Fixation, Internal Fracture Fixation/methods
6.
Nat Mach Intell ; 5(3): 294-308, 2023 Mar.
Article in English | MEDLINE | ID: mdl-38523605

ABSTRACT

Artificial intelligence (AI) now enables automated interpretation of medical images. However, AI's potential use for interventional image analysis remains largely untapped. This is because the post hoc analysis of data collected during live procedures has fundamental and practical limitations, including ethical considerations, expense, scalability, data integrity, and a lack of ground truth. Here we demonstrate that creating realistic simulated images from human models is a viable alternative and complement to large-scale in situ data collection. We show that training AI image analysis models on realistically synthesized data, combined with contemporary domain generalization techniques, results in machine learning models that perform comparably on real data to models trained on a precisely matched real-data training set. We find that our model transfer paradigm for X-ray image analysis, which we refer to as SyntheX, can even outperform real-data-trained models due to the effectiveness of training on a larger dataset. SyntheX provides an opportunity to markedly accelerate the conception, design, and evaluation of X-ray-based intelligent systems. In addition, SyntheX makes it possible to test novel instrumentation, design complementary surgical approaches, and envision novel techniques that improve outcomes, save time, or mitigate human error, free from the ethical and practical considerations of live human data collection.
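As a generic illustration of the "contemporary domain generalization techniques" the abstract refers to, the sketch below applies heavy appearance randomization to a synthesized image so that a model cannot overfit to simulation-specific appearance. The specific transforms and parameters are our assumptions for illustration, not SyntheX's actual pipeline (see the paper and its released code for that).

```python
import torch
from torchvision import transforms

# Appearance randomization in the spirit of sim-to-real domain
# generalization; the exact augmentations used by SyntheX differ.
sim_to_real_augment = transforms.Compose([
    transforms.RandomApply([transforms.GaussianBlur(5, sigma=(0.1, 2.0))], p=0.5),
    transforms.ColorJitter(brightness=0.5, contrast=0.5),
    transforms.RandomInvert(p=0.5),     # X-ray polarity convention varies
    transforms.RandomErasing(p=0.25),   # occlusions from tools/collimation
])

if __name__ == "__main__":
    # Stand-in for a synthesized X-ray, replicated to 3 channels for
    # compatibility with the color transforms above.
    fake_drr = torch.rand(1, 256, 256).repeat(3, 1, 1)
    augmented = sim_to_real_augment(fake_drr)
    print(augmented.shape)
```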

7.
Med Image Comput Comput Assist Interv ; 14228: 133-143, 2023 Oct.
Article in English | MEDLINE | ID: mdl-38617200

ABSTRACT

Surgical phase recognition (SPR) is a crucial element in the digital transformation of the modern operating theater. While SPR based on video sources is well established, incorporation of interventional X-ray sequences has not yet been explored. This paper presents Pelphix, a first approach to SPR for X-ray-guided percutaneous pelvic fracture fixation, which models the procedure at four levels of granularity - corridor, activity, view, and frame value - simulating the pelvic fracture fixation workflow as a Markov process to provide fully annotated training data. Using added supervision from detection of bony corridors, tools, and anatomy, we learn image representations that are fed into a transformer model to regress surgical phases at the four granularity levels. Our approach demonstrates the feasibility of X-ray-based SPR, achieving an average accuracy of 99.2% on simulated sequences and 71.7% in cadaver studies across all granularity levels, with up to 84% accuracy for the target corridor in real data. This work constitutes the first step toward SPR for the X-ray domain, establishing an approach to categorizing phases in X-ray-guided surgery, simulating realistic image sequences to enable machine learning model development, and demonstrating that this approach is feasible for the analysis of real procedures. As X-ray-based SPR continues to mature, it will benefit procedures in orthopedic surgery, angiography, and interventional radiology by equipping intelligent surgical systems with situational awareness in the operating room.
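The fully annotated training data comes from simulating the workflow as a Markov process: because phases are sampled from the model, every generated frame carries its phase label for free. A toy version of that idea, with an illustrative state space rather than Pelphix's actual one, looks like this:

```python
import random

# Transition table for a toy wire-placement workflow; states and
# probabilities are illustrative only.
TRANSITIONS = {
    "position_wire": [("fluoro_check", 0.7), ("position_wire", 0.3)],
    "fluoro_check":  [("advance_wire", 0.6), ("position_wire", 0.4)],
    "advance_wire":  [("fluoro_check", 0.5), ("done", 0.5)],
}

def sample_sequence(start="position_wire", max_steps=20, seed=0):
    """Sample one phase-annotated sequence from the Markov model."""
    rng = random.Random(seed)
    state, seq = start, []
    for _ in range(max_steps):
        seq.append(state)
        if state == "done":
            break
        states, probs = zip(*TRANSITIONS[state])
        state = rng.choices(states, weights=probs)[0]
    return seq

if __name__ == "__main__":
    print(sample_sequence())
```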

8.
Article in English | MEDLINE | ID: mdl-38617810

ABSTRACT

Intraoperative imaging using C-arm X-ray systems enables percutaneous management of fractures by providing real-time visualization of tool-to-tissue relationships. However, estimating appropriate positioning of surgical instruments, such as K-wires, relative to safe bony corridors is challenging due to the projective nature of X-ray images: tool pose in the plane containing the principal ray is difficult to assess, necessitating the acquisition of numerous views of the anatomy. This task is especially demanding in complex anatomy, such as the superior pubic ramus of the pelvis, and results in high cognitive load and repeat attempts even for experienced trauma surgeons. A perception-based algorithm that interprets interventional radiographs during internal fixation to infer the likelihood of cortical breach - especially early on, when the wire has not been advanced - might reduce both the number of X-rays acquired for verification and the likelihood of repeat attempts. In this manuscript, we present first steps toward developing such an algorithm. We devise a strategy for in silico collection and annotation of X-ray images suitable for detecting cortical breach of a K-wire in the superior pubic ramus, including images with visible fractures. Beginning with minimal manual annotations of correct trajectories, we randomly perturb entry and exit points and project the 3D scene using a physics-based forward model to obtain a large number of 2D X-ray images with and without cortical breach. We report baseline results for anticipating cortical breach at various K-wire insertion depths, achieving an AUROC score of 0.68 at 50% insertion. Code and data are available at github.com/benjamindkilleen/cortical-breach-detection.
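The data-collection strategy, perturbing annotated entry and exit points and labeling the resulting trajectories, can be sketched geometrically: a trajectory "breaches" if any point along it leaves a corridor model. The sketch below uses a crude cylinder as a corridor stand-in and skips the physics-based image projection entirely; all names, radii, and noise scales are our assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_trajectory(entry, exit_pt, sigma=2.0):
    """Randomly perturb annotated entry/exit points (mm), as in the
    in silico data-collection strategy described above."""
    return entry + rng.normal(0, sigma, 3), exit_pt + rng.normal(0, sigma, 3)

def breaches(entry, exit_pt, axis_p0, axis_p1, radius, n=50):
    """Label a trajectory as a cortical breach if any sample point along
    it leaves a cylindrical stand-in for the bony corridor."""
    t = np.linspace(0, 1, n)[:, None]
    pts = entry + t * (exit_pt - entry)                  # points along wire
    axis = axis_p1 - axis_p0
    u = np.clip((pts - axis_p0) @ axis / (axis @ axis), 0, 1)[:, None]
    dist = np.linalg.norm(pts - (axis_p0 + u * axis), axis=1)
    return bool((dist > radius).any())

if __name__ == "__main__":
    entry, exit_pt = np.array([0., 0., 0.]), np.array([0., 0., 100.])
    p0, p1, r = np.array([0., 0., -10.]), np.array([0., 0., 110.]), 5.0
    labels = [breaches(*perturb_trajectory(entry, exit_pt), p0, p1, r)
              for _ in range(10)]
    print(labels)   # breach / no-breach labels for the perturbed samples
```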
