ABSTRACT
Interaction with computer-based medical devices in the operating room is often challenging for surgeons due to sterility requirements and the complexity of interventional procedures. Typical solutions, such as delegating the interaction task to an assistant, can be inefficient. We propose a method for gesture-based interaction in the operating room that surgeons can customize to their personal requirements and interventional workflow. Given training examples for each desired gesture, our system learns low-dimensional manifold models that enable it to recognize gestures and to track particular poses for fine-grained control. By capturing the surgeon's movements with a few wireless body-worn inertial sensors, we avoid the issues of camera-based systems, such as sensitivity to illumination and occlusions. Thanks to a component-based framework implementation, our method can easily be connected to different medical devices. Our experiments show that the approach robustly recognizes learned gestures and distinguishes them from other movements.
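The abstract leaves the form of the manifold models unspecified. As a minimal sketch of the general idea, one can learn a linear (PCA) subspace per gesture from its training examples and classify a new sensor reading by reconstruction error, rejecting movements that no model explains well. The function names, the linear-manifold assumption, and the rejection threshold below are illustrative, not the authors' actual implementation:

```python
import numpy as np

def fit_gesture_model(examples, n_components=2):
    # Each training example is a flattened sequence of inertial-sensor
    # readings; the examples of one gesture are assumed to lie near a
    # low-dimensional linear manifold.
    X = np.asarray(examples, dtype=float)
    mean = X.mean(axis=0)
    # PCA via SVD: the top right-singular vectors span the manifold.
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]

def reconstruction_error(model, x):
    # Distance of an observation from the learned gesture manifold.
    mean, basis = model
    centered = np.asarray(x, dtype=float) - mean
    projected = basis.T @ (basis @ centered)
    return float(np.linalg.norm(centered - projected))

def classify(models, x, threshold):
    # Assign the gesture whose manifold best explains the observation;
    # reject as "other movement" if no model fits well enough.
    errors = {name: reconstruction_error(m, x) for name, m in models.items()}
    best = min(errors, key=errors.get)
    return best if errors[best] <= threshold else None
```

The rejection threshold is what lets such a recognizer distinguish learned gestures from unrelated movements, as the experiments in the abstract require.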
Subject(s)
Operating Rooms, Surgery, Computer-Assisted/instrumentation, Surgical Procedures, Operative/methods, User-Computer Interface, Algorithms, Computer Systems, Equipment Design, Gestures, Humans, Learning, Movement, Surgery, Computer-Assisted/methods, Task Performance and Analysis

ABSTRACT
In the current clinical workflow of endovascular abdominal aortic repair (EVAR), a stent graft is inserted into the aneurysmatic aorta under 2D angiographic imaging. Because the X-ray visualization lacks depth information, it is difficult, in particular for junior physicians, to place the stent graft at the preoperatively defined position within the aorta. Advanced 3D visualization of stent grafts is therefore highly desirable. In this paper, we present a novel algorithm that automatically matches a 3D model of the stent graft to an intraoperative 2D image showing the device. Through automatic preprocessing and a global-to-local registration approach, we eliminate user interaction while still achieving the desired robustness. The complexity of our registration scheme is reduced by a semi-simultaneous optimization strategy that incorporates constraints corresponding to the geometric model of the stent graft. Experiments on synthetic, phantom, and real interventional data show that the presented method matches the stent graft model to the 2D image data with good accuracy.
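As a rough illustration of the 2D/3D matching principle behind such a registration (not the paper's constrained, semi-simultaneous scheme), the sketch below projects a 3D point model through an assumed pinhole X-ray geometry and refines a rigid translation by Gauss-Newton on the reprojection error. `project` and `register_translation` are hypothetical names; rotation, deformation, and the stent-specific geometric constraints are omitted for brevity:

```python
import numpy as np

def project(points_3d, focal=1000.0):
    # Simple pinhole model of X-ray image formation: source at the
    # origin, detector plane perpendicular to the z-axis.
    p = np.asarray(points_3d, dtype=float)
    return focal * p[:, :2] / p[:, 2:3]

def register_translation(model_3d, observed_2d, t0, iters=20):
    # Refine a 3D translation so that the projected model points match
    # the 2D detections, minimizing the reprojection error.
    t = np.asarray(t0, dtype=float)
    eps = 1e-6
    for _ in range(iters):
        r = (project(model_3d + t) - observed_2d).ravel()
        # Numerical Jacobian of the residual w.r.t. the 3 translation
        # parameters (forward differences).
        J = np.empty((r.size, 3))
        for k in range(3):
            dt = np.zeros(3)
            dt[k] = eps
            J[:, k] = ((project(model_3d + t + dt) - observed_2d).ravel() - r) / eps
        # Gauss-Newton update via least squares.
        t -= np.linalg.lstsq(J, r, rcond=None)[0]
    return t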