1.
Ann Med Surg (Lond) ; 65: 102268, 2021 May.
Article in English | MEDLINE | ID: mdl-33898035

ABSTRACT

BACKGROUND: Excessive tool-tissue interaction forces often result in tissue damage and intraoperative complications, while insufficient forces prevent the completion of the task. This review sought to explore the tool-tissue interaction forces exerted by instruments during surgery across different specialities, tissues, manoeuvres and experience levels. MATERIALS & METHODS: A PRISMA-guided systematic review was carried out using Embase, Medline and Web of Science databases. RESULTS: Of 462 articles screened, 45 studies discussing surgical tool-tissue forces were included. The studies were categorized into 9 different specialities with the mean of average forces lowest for ophthalmology (0.04N) and highest for orthopaedic surgery (210N). Nervous tissue required the least amount of force to manipulate (mean of average: 0.4N), whilst connective tissue (including bone) required the most (mean of average: 45.8N). For manoeuvres, drilling recorded the highest forces (mean of average: 14N), whilst sharp dissection recorded the lowest (mean of average: 0.03N). When comparing differences in the mean of average forces between groups, novices exerted 22.7% more force than experts, and the presence of a feedback mechanism (e.g. audio) reduced exerted forces by 47.9%. CONCLUSIONS: The measurement of tool-tissue forces is a novel but rapidly expanding field. The range of forces applied varies according to surgical speciality, tissue, manoeuvre, operator experience and feedback provided. Knowledge of the safe range of surgical forces will improve surgical safety whilst maintaining effectiveness. Measuring forces during surgery may provide an objective metric for training and assessment. Development of smart instruments, robotics and integrated feedback systems will facilitate this.
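The two headline percentages in this review are simple relative comparisons. A small illustrative sketch, noting that only the percentages come from the abstract; the 1.0 N expert baseline below is hypothetical:

```python
def percent_more(a, b):
    """How much larger a is than b, as a percentage of b."""
    return (a - b) / b * 100.0

def percent_reduction(before, after):
    """Relative reduction from before to after, as a percentage of before."""
    return (before - after) / before * 100.0

# If experts averaged 1.0 N on a task, a 22.7% excess means novices
# averaged about 1.227 N on the same task:
print(round(percent_more(1.227, 1.0), 1))       # 22.7

# A 47.9% reduction from feedback would take 1.0 N down to 0.521 N:
print(round(percent_reduction(1.0, 0.521), 1))  # 47.9
```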

2.
IEEE Trans Biomed Eng ; 67(12): 3452-3463, 2020 12.
Article in English | MEDLINE | ID: mdl-32746002

ABSTRACT

OBJECTIVE: Intraoperative palpation is a surgical gesture jeopardized by the lack of haptic feedback that affects robotic minimally invasive surgery. Restoring force reflection in teleoperated systems may improve both surgeon performance and procedure outcomes. METHODS: A force-based sensing approach was developed, based on a cable-driven parallel manipulator with anticipated seamless and low-cost integration capabilities in teleoperated robotic surgery. No force sensor on the end-effector is used; instead, tissue probing forces are estimated from measured cable tensions. A user study involving surgical trainees (n = 22) was conducted to experimentally evaluate the platform in two palpation-based test-cases on silicone phantoms. Two modalities were compared: visual feedback alone and combined visual + haptic feedback available at the master site. RESULTS: Surgical trainees' preference for the modality providing both visual and haptic feedback is corroborated by both quantitative and qualitative metrics. Hard-nodule detection sensitivity improves (94.35 ± 9.1% vs 76.09 ± 19.15% for visual feedback alone), while smaller forces are exerted (4.13 ± 1.02 N vs 4.82 ± 0.81 N for visual feedback alone) on the phantom tissues. At the same time, the subjective perceived workload decreases. CONCLUSION: Tissue-probe contact forces are estimated in a low-cost and novel way, without the need for force sensors on the end-effector. Haptics demonstrated an improvement in the tumor detection rate, a reduction of the probing forces, and a decrease in the perceived workload for the trainees. SIGNIFICANCE: Clear benefits are demonstrated from combining cable-driven parallel manipulators and haptics during robotic minimally invasive procedures. The translation of robotic intraoperative palpation to clinical practice could improve the detection and dissection of cancer nodules.


Subject(s)
Robotic Surgical Procedures , Robotics , Feedback , Minimally Invasive Surgical Procedures , Palpation
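The key idea of the study above, estimating probe forces from cable tensions instead of an end-effector sensor, can be sketched with point-mass statics. This is a minimal illustration, not the paper's estimator: it assumes a point end-effector, massless frictionless cables, and known geometry.

```python
import math

def estimate_probe_force(anchors, effector, tensions, weight=0.0):
    """Estimate the external (tissue-probing) force on a point end-effector
    of a cable-driven parallel manipulator from measured cable tensions.

    At static equilibrium the external force balances the sum of cable
    pulls plus gravity: F_ext = -(sum_i t_i * u_i + W), where u_i is the
    unit vector from the effector toward anchor i.

    anchors  : list of (x, y, z) cable anchor points
    effector : (x, y, z) end-effector position from the kinematics
    tensions : measured cable tensions t_i in newtons
    weight   : gravity force magnitude on the effector, acting along -z
    """
    fx = fy = fz = 0.0
    for (ax, ay, az), t in zip(anchors, tensions):
        dx, dy, dz = ax - effector[0], ay - effector[1], az - effector[2]
        norm = math.sqrt(dx * dx + dy * dy + dz * dz)
        fx += t * dx / norm
        fy += t * dy / norm
        fz += t * dz / norm
    fz -= weight
    return (-fx, -fy, -fz)

# Two opposing cables along x: a 2 N tension imbalance implies the tissue
# is pushing the probe with 2 N in the -x direction.
f = estimate_probe_force([(1, 0, 0), (-1, 0, 0)], (0, 0, 0), [3.0, 1.0])
print([round(c, 3) + 0.0 for c in f])  # [-2.0, 0.0, 0.0]
```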
3.
Soft Robot ; 6(4): 423-443, 2019 08.
Article in English | MEDLINE | ID: mdl-30920355

ABSTRACT

Soft robotic devices have desirable traits for applications in minimally invasive surgery (MIS), but many interdisciplinary challenges remain unsolved. To understand current technologies, we carried out a keyword search using the Web of Science and Scopus databases, applied inclusion and exclusion criteria, and compared several characteristics of the soft robotic devices for MIS in the resulting articles. There was low diversity in the device designs and a wide-ranging level of detail regarding their capabilities. We propose a standardized comparison methodology to characterize soft robotics for various MIS applications, which will aid designers producing the next generation of devices.


Subject(s)
Minimally Invasive Surgical Procedures/instrumentation , Robotics/instrumentation , Surgery, Computer-Assisted/instrumentation , Equipment Design/instrumentation
4.
Front Robot AI ; 6: 141, 2019.
Article in English | MEDLINE | ID: mdl-33501156

ABSTRACT

Minimally Invasive Surgery (MIS) imposes a trade-off between non-invasive access and surgical capability. Treatment of early gastric cancers over 20 mm in diameter can be achieved by performing Endoscopic Submucosal Dissection (ESD) with a flexible endoscope; however, this procedure is technically challenging, suffers from extended operation times and requires extensive training. To facilitate the ESD procedure, we have created a deployable cable-driven robot that increases the surgical capabilities of the flexible endoscope while attempting to minimize the impact on the access that it offers. Using a low-profile inflatable support structure in the shape of a hollow hexagonal prism, our robot can fold around the flexible endoscope and, when the target site has been reached, expand its volume by 73.16% and increase its radial stiffness. A sheath around the variable stiffness structure delivers a series of force transmission cables that connect to two independent tubular end-effectors through which standard flexible endoscopic instruments can pass and be anchored. Using a simple control scheme based on the length of each cable, the pose of the two instruments can be controlled by haptic controllers in each hand of the user. The forces exerted by a single instrument were measured, and a maximum magnitude of 8.29 N was observed along a single axis. The working channels and tip control of the flexible endoscope remain in use in conjunction with our robot, and a procedure imitating the demands of ESD was successfully carried out by a novice user. Not only does this robot facilitate difficult surgical techniques, but it can be easily customized and rapidly produced at low cost due to a programmatic design approach.

5.
Int J Comput Assist Radiol Surg ; 12(7): 1131-1140, 2017 Jul.
Article in English | MEDLINE | ID: mdl-28397111

ABSTRACT

PURPOSE: Improved surgical outcome and patient safety in the operating theatre are constant challenges. We hypothesise that a framework that collects and utilises information from multiple sources, especially perceptually enabled information, could help to meet the above goals. This paper presents some core functionalities of a wider low-cost framework under development that allows perceptually enabled interaction within the surgical environment. METHODS: The synergy of wearable eye-tracking and advanced computer vision methodologies, such as SLAM, is exploited. As a demonstration of one of the framework's possible functionalities, an articulated collaborative robotic arm with a laser pointer is integrated and the set-up is used to project the surgeon's fixation point in 3D space. RESULTS: The implementation is evaluated over 60 fixations on predefined targets, with distances between the subject and the targets of 92-212 cm and between the robot and the targets of 42-193 cm. The median overall system error is currently 3.98 cm. Its real-time potential is also highlighted. CONCLUSIONS: The work presented here represents an introduction and preliminary experimental validation of core functionalities of a larger framework under development. The proposed framework is geared towards a safer and more efficient surgical theatre.


Subject(s)
Eye Movement Measurements , Fixation, Ocular , Operating Rooms , Robotics/methods , Workflow , Humans , Image Interpretation, Computer-Assisted , Minimally Invasive Surgical Procedures
6.
Neuroimage ; 64: 267-76, 2013 Jan 01.
Article in English | MEDLINE | ID: mdl-22960153

ABSTRACT

Longitudinal changes in cortical function are known to accompany motor skills learning, and can be detected as an evolution in the activation map. These changes include attenuation in activation in the prefrontal cortex and increased activation in primary and secondary motor regions, the cerebellum and posterior parietal cortex. Despite this, comparatively little is known regarding the impact of the mode or type of training on the speed of activation map plasticity and on longitudinal variation in network architectures. To address this, we randomised twenty-one subjects to learn a complex motor tracking task delivered across six practice sessions in either "free-hand" or "gaze-contingent motor control" mode, during which frontoparietal cortical function was evaluated using functional near infrared spectroscopy. Results demonstrate that upon practice termination, gaze-assisted learners had achieved superior technical performance compared to free-hand learners. Furthermore, evolution in frontoparietal activation foci indicative of expertise was achieved at an earlier stage in practice amongst gaze-assisted learners. Both groups exhibited economical small world topology; however, networks in learners randomised to gaze-assistance were less costly and showed higher values of local efficiency suggesting improved frontoparietal communication in this group. We conclude that the benefits of gaze-assisted motor learning are evidenced by improved technical accuracy, more rapid task internalisation and greater neuronal efficiency. This form of assisted motor learning may have occupational relevance for high precision control such as in surgery or following re-learning as part of stroke rehabilitation.


Subject(s)
Attention/physiology , Fixation, Ocular/physiology , Frontal Lobe/physiology , Learning/physiology , Motor Skills/physiology , Nerve Net/physiology , Parietal Lobe/physiology , Adult , Female , Humans , Male , Neural Pathways/physiology , Volition/physiology
7.
Ann Biomed Eng ; 40(10): 2156-67, 2012 Oct.
Article in English | MEDLINE | ID: mdl-22581476

ABSTRACT

The use of multiple robots for performing complex tasks is becoming a common practice for many robot applications. When different operators are involved, effective cooperation with anticipated manoeuvres is important for seamless, synergistic control of all the end-effectors. In this paper, the concept of Collaborative Gaze Channelling (CGC) is presented for improved control of surgical robots for a shared task. Through eye tracking, the fixations of each operator are monitored and presented in a shared surgical workspace. CGC permits remote or physically separated collaborators to share their intention by visualising the eye gaze of their counterparts, and thus recovers, to a certain extent, the information of mutual intent that we rely upon in a vis-à-vis working setting. In this study, the efficiency of surgical manipulation with and without CGC for controlling a pair of bimanual surgical robots is evaluated by analysing the level of coordination of two independent operators. Fitts' law is used to compare the quality of movement with or without CGC. A total of 40 subjects have been recruited for this study and the results show that the proposed CGC framework exhibits significant improvement (p < 0.05) on all the motion indices used for quality assessment. This study demonstrates that visual guidance is an implicit yet effective way of communication during collaborative tasks for robotic surgery. Detailed experimental validation results demonstrate the potential clinical value of the proposed CGC framework.


Subject(s)
Robotics/instrumentation , Robotics/methods , Video-Assisted Surgery/instrumentation , Video-Assisted Surgery/methods , Humans , Male
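Fitts' law, used in the study above to compare movement quality with and without Collaborative Gaze Channelling, reduces each reach to an index of difficulty and a throughput. A minimal sketch using the common Shannon formulation; the paper's exact formulation and motion indices are not specified in the abstract:

```python
import math

def fitts_id(distance, width):
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(distance / width + 1.0)

def throughput(distance, width, movement_time):
    """Throughput in bits per second: index of difficulty over movement time."""
    return fitts_id(distance, width) / movement_time

# A 150 mm reach to a 10 mm target completed in 0.8 s:
print(fitts_id(150, 10))         # 4.0  (log2(16))
print(throughput(150, 10, 0.8))  # 5.0
```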
8.
Surg Endosc ; 26(7): 2003-9, 2012 Jul.
Article in English | MEDLINE | ID: mdl-22258302

ABSTRACT

BACKGROUND: Eye-tracking technology has been shown to improve trainee performance in the aircraft industry, radiology, and surgery. The ability to track the point-of-regard of a supervisor and reflect this onto a subject's laparoscopic screen to aid instruction of a simulated task is attractive, in particular when considering the multilingual make-up of modern surgical teams and the development of collaborative surgical techniques. We aimed to develop a bespoke interface to project a supervisor's point-of-regard onto a subject's laparoscopic screen and to investigate whether the supervisor's eye-gaze could be used as a tool to aid the identification of a target during a surgical-simulated task. METHODS: We developed software to project a supervisor's point-of-regard onto a subject's screen whilst undertaking surgically related laparoscopic tasks. Twenty-eight subjects with varying levels of operative experience and proficiency in English undertook a series of surgically minded laparoscopic tasks. Subjects were instructed with verbal cues (V), a cursor reflecting the supervisor's eye-gaze (E), or both (VE). Performance metrics included time to complete tasks, eye-gaze latency, and number of errors. RESULTS: Completion times and number of errors were significantly reduced when eye-gaze instruction was employed (VE, E). In addition, the time taken for the subject to correctly focus on the target (latency) was significantly reduced. CONCLUSIONS: We have successfully demonstrated the effectiveness of a novel framework to enable a supervisor's eye-gaze to be projected onto a trainee's laparoscopic screen. Furthermore, we have shown that utilizing eye-tracking technology to provide visual instruction improves completion times and reduces errors in a simulated environment. Although this technology requires significant development, the potential applications are wide-ranging.


Subject(s)
Computer Simulation , Education, Medical/methods , Eye Movements , Fixation, Ocular , Laparoscopy/education , Teaching Materials , Analysis of Variance , Equipment Design , Female , Humans , Laparoscopy/instrumentation , Male , Reinforcement, Verbal , Software
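The latency metric described above (time for the subject to correctly focus on the target) can be sketched as the time to the first gaze sample inside a target acceptance region. The circular area-of-interest formulation below is an assumption for illustration, not necessarily the study's exact definition:

```python
def gaze_latency(samples, target, radius):
    """Time (s) from instruction onset until the first gaze sample lands
    within `radius` of the target; None if the target is never fixated.

    samples : list of (t, x, y) gaze samples, t in seconds from onset
    target  : (x, y) target centre on screen
    radius  : acceptance radius in the same units as x, y
    """
    tx, ty = target
    for t, x, y in samples:
        if (x - tx) ** 2 + (y - ty) ** 2 <= radius ** 2:
            return t
    return None

samples = [(0.00, 10, 10), (0.05, 40, 35), (0.10, 98, 102), (0.15, 100, 100)]
print(gaze_latency(samples, (100, 100), 5))  # 0.1
```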
9.
Med Image Anal ; 16(3): 612-31, 2012 Apr.
Article in English | MEDLINE | ID: mdl-20889367

ABSTRACT

The success of MIS is coupled with an increasing demand on surgeons' manual dexterity and visuomotor coordination due to the complexity of instrument manipulations. The use of master-slave surgical robots has avoided many of the drawbacks of MIS, but at the same time, has increased the physical separation between the surgeon and the patient. Tissue deformation combined with restricted workspace and visibility of an already cluttered environment can raise critical issues related to surgical precision and safety. Reconnecting the essential visuomotor sensory feedback is important for the safe practice of robot-assisted MIS procedures. This paper introduces a novel gaze-contingent framework for real-time haptic feedback and virtual fixtures by transforming visual sensory information into physical constraints that can interact with the motor sensory channel. We demonstrate how motor tracking of deforming tissue can be made more effective and accurate through the concept of Gaze-Contingent Motor Channelling. The method is also extended to 3D by introducing the concept of Gaze-Contingent Haptic Constraints where eye gaze is used to dynamically prescribe and update safety boundaries during robot-assisted MIS without prior knowledge of the soft-tissue morphology. Initial validation results on both simulated and robot assisted phantom procedures demonstrate the potential clinical value of the technique. In order to assess the associated cognitive demand of the proposed concepts, functional Near-Infrared Spectroscopy is used and preliminary results are discussed.


Subject(s)
Fixation, Ocular/physiology , Minimally Invasive Surgical Procedures/methods , Robotics/methods , Surgery, Computer-Assisted/methods , Touch/physiology , User-Computer Interface , Cognitive Reserve/physiology , Eye Movement Measurements , Humans
10.
Article in English | MEDLINE | ID: mdl-22255557

ABSTRACT

A gaze-contingent autofocus system using an eye-tracker and liquid lens has been constructed for use with a surgical robot, making it possible to rapidly (within tens of milliseconds) change focus using only eye-control. This paper reports the results of a user test comparing the eye-tracker to a surgical robot's in-built mechanical focusing system. In the clinical environment, this intuitive interface removes the need for an external mechanical control and improves the speed at which surgeons can make decisions, based on the visible features. Possible applications include microsurgery and gastrointestinal procedures where the object distance changes due to breathing and/or peristalsis.


Subject(s)
Endoscopes , Lenses , Minimally Invasive Surgical Procedures/instrumentation , Robotics/instrumentation , Surgery, Computer-Assisted/instrumentation , Equipment Design , Equipment Failure Analysis , Feedback
11.
Med Image Comput Comput Assist Interv ; 13(Pt 3): 319-26, 2010.
Article in English | MEDLINE | ID: mdl-20879415

ABSTRACT

Novel robotic technologies utilised in surgery need assessment for their effects on the user as well as on technical performance. In this paper, the evolution in 'cognitive burden' across visuomotor learning is quantified using a combination of functional near infrared spectroscopy (fNIRS) and graph theory. The results demonstrate escalating costs within the activated cortical network during the intermediate phase of learning which is manifest as an increase in cognitive burden. This innovative application of graph theory and fNIRS enables the economic evaluation of brain behaviour underpinning task execution and how this may be impacted by novel technology and learning. Consequently, this may shed light on how robotic technologies improve human-machine interaction and augment minimally invasive surgical skills acquisition. This work has significant implications for the development and assessment of emergent robotic technologies at cortical level and in elucidating learning-related plasticity in terms of inter-regional cortical connectivity.


Subject(s)
Algorithms , Brain Mapping/methods , Cognition/physiology , Learning/physiology , Movement/physiology , Spectroscopy, Near-Infrared/methods , Visual Perception/physiology , Humans
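The network economy and efficiency measures used in this study and the fNIRS learning study above come from graph theory. A small self-contained sketch of local efficiency (average inverse shortest-path length within each node's neighbourhood subgraph) on an unweighted graph; the studies' actual pipelines (channel selection, thresholding, weighting) are not reproduced here:

```python
from collections import deque

def bfs_dist(adj, nodes, src):
    """Shortest hop counts from src within the induced subgraph `nodes`."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v in nodes and v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def efficiency(adj, nodes):
    """Average inverse shortest-path length over node pairs in `nodes`."""
    nodes = set(nodes)
    n = len(nodes)
    if n < 2:
        return 0.0
    total = 0.0
    for u in nodes:
        d = bfs_dist(adj, nodes, u)
        total += sum(1.0 / d[v] for v in d if v != u)
    return total / (n * (n - 1))

def local_efficiency(adj):
    """Mean efficiency of each node's neighbourhood subgraph."""
    return sum(efficiency(adj, adj[u]) for u in adj) / len(adj)

# Triangle: every neighbourhood pair is directly connected, so 1.0.
adj = {'a': {'b', 'c'}, 'b': {'a', 'c'}, 'c': {'a', 'b'}}
print(local_efficiency(adj))  # 1.0
```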
12.
IEEE Trans Biomed Eng ; 56(3): 889-92, 2009 Mar.
Article in English | MEDLINE | ID: mdl-19272898

ABSTRACT

This paper presents an articulated robotic-controlled device to facilitate large-area in vivo tissue imaging and characterization through the integration of miniaturized reflected white light and fluorescence intensity imaging for minimally invasive surgery (MIS). The device is composed of a long, rigid shaft with a robotically controlled distal tip featuring three degrees of in-plane articulation and one degree of rotational freedom. The constraints imposed by the articulated section, coupled with the small footprint available in MIS devices, require a novel optical configuration to ensure effective target illumination and image acquisition. A tunable coherent supercontinuum laser source is used to provide sequential white light and fluorescence illumination through a multimode fiber (200 microm diameter), and the reflected images are transmitted to an image acquisition system using a 10,000 pixel flexible fiber image guide (590 microm diameter). By using controlled joint actuation to trace overlapping trajectories, the device allows effective imaging of a larger field of view than a traditional dual-mode laparoscope. A first-generation prototype of the device and its initial phantom and ex vivo tissue characterization results are described. The results demonstrate the potential of the device to be used as a new platform for in vivo tissue characterization and navigation for MIS.


Subject(s)
Image Enhancement/instrumentation , Lasers , Minimally Invasive Surgical Procedures/instrumentation , Optics and Photonics/instrumentation , Robotics/instrumentation , Equipment Design , Fiber Optic Technology , Fluorescence , Humans , Phantoms, Imaging , Reproducibility of Results
13.
Rep U S ; 2009: 2783-2788, 2009 Oct 15.
Article in English | MEDLINE | ID: mdl-24748996

ABSTRACT

This paper presents a human-robot interface with perceptual docking to allow for the control of multiple microrobots. The aim is to demonstrate that real-time eye tracking can be used for empowering robots with human vision by using knowledge acquired in situ. Several microrobots can be directly controlled through a combination of manual and eye control. The novel control environment is demonstrated on a virtual biopsy of a gastric lesion through an endoluminal approach. Twenty-one subjects were recruited to test the control environment. Statistical analysis was conducted on the completion time of the task using keyboard control and the proposed eye-tracking framework. Integrating the system with the perceptual docking framework demonstrated a statistically significant improvement in task execution.

14.
Med Image Comput Comput Assist Interv ; 12(Pt 1): 353-60, 2009.
Article in English | MEDLINE | ID: mdl-20426007

ABSTRACT

With increasing demand on intra-operative navigation and motion compensation during robotic assisted minimally invasive surgery, real-time 3D deformation recovery remains a central problem. Currently the majority of existing methods rely on salient features, where the inherent paucity of distinctive landmarks implies either a semi-dense reconstruction or the use of strong geometrical constraints. In this study, we propose a gaze-contingent depth reconstruction scheme by integrating human perception with semi-dense stereo and p-q based shading information. Depth inference is carried out in real-time through a novel application of Bayesian chains without smoothness priors. The practical value of the scheme is highlighted by detailed validation using a beating heart phantom model with known geometry to verify the performance of gaze-contingent 3D surface reconstruction and deformation recovery.


Subject(s)
Cardiovascular Surgical Procedures/methods , Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Robotics/methods , Surgery, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , User-Computer Interface , Algorithms , Computer Graphics , Computer Simulation , Humans , Image Enhancement/methods , Models, Anatomic , Models, Cardiovascular , Phantoms, Imaging , Reproducibility of Results , Sensitivity and Specificity
15.
Article in English | MEDLINE | ID: mdl-20426014

ABSTRACT

In robot-assisted procedures, the surgeon's ability can be enhanced by navigation guidance through the use of virtual fixtures or active constraints. This paper presents a real-time modeling scheme for dynamic active constraints with fast and simple mesh adaptation under cardiac deformation and changes in anatomic structure. A smooth tubular pathway is constructed which provides assistance for a flexible hyper-redundant robot to circumnavigate the heart with the aim of undertaking bilateral pulmonary vein isolation as part of a modified maze procedure for the treatment of debilitating arrhythmia and atrial fibrillation. In contrast to existing approaches, the method incorporates detailed geometrical constraints with explicit manipulation margins of the forbidden region for an entire articulated surgical instrument, rather than just the end-effector itself. Detailed experimental validation is conducted to demonstrate the speed and accuracy of the instrument navigation with and without the use of the proposed dynamic constraints.


Subject(s)
Cardiovascular Surgical Procedures/methods , Computer Graphics , Imaging, Three-Dimensional/methods , Man-Machine Systems , Robotics/methods , Surgery, Computer-Assisted/methods , User-Computer Interface
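A dynamic active constraint of the tubular-pathway kind described above can be sketched as a projection: if the controlled point strays outside a tube of given radius around a polyline centreline, it is pulled back to the tube boundary. This is a geometric illustration only; the paper constrains an entire articulated instrument with explicit manipulation margins, not just a point, and updates the mesh under cardiac deformation.

```python
import math

def closest_on_segment(p, a, b):
    """Closest point to p on the segment from a to b."""
    ab = [b[i] - a[i] for i in range(3)]
    ap = [p[i] - a[i] for i in range(3)]
    denom = sum(c * c for c in ab)
    t = 0.0 if denom == 0 else max(0.0, min(1.0, sum(ap[i] * ab[i] for i in range(3)) / denom))
    return [a[i] + t * ab[i] for i in range(3)]

def constrain_to_tube(p, centerline, radius):
    """If p leaves the tube of given radius around the polyline
    centerline, project it back onto the tube boundary."""
    best, best_d = None, float('inf')
    for a, b in zip(centerline, centerline[1:]):
        q = closest_on_segment(p, a, b)
        d = math.dist(p, q)
        if d < best_d:
            best, best_d = q, d
    if best_d <= radius:
        return list(p)                  # inside the safe pathway
    scale = radius / best_d
    return [best[i] + scale * (p[i] - best[i]) for i in range(3)]

# A point 3 units off a straight centreline is pulled back to radius 1.
centerline = [(0, 0, 0), (10, 0, 0)]
print(constrain_to_tube((5, 3, 0), centerline, 1.0))  # [5.0, 1.0, 0.0]
```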
16.
Med Image Comput Comput Assist Interv ; 11(Pt 2): 347-55, 2008.
Article in English | MEDLINE | ID: mdl-18982624

ABSTRACT

The use of focused energy delivery in robotic assisted surgery for atrial fibrillation requires accurate prescription of ablation paths. In this paper, an original framework based on fusing human and machine vision for providing gaze-contingent control in robotic assisted surgery is presented. With the proposed method, binocular eye tracking is used to estimate the 3D fixations of the surgeon, which are further refined by considering the camera geometry and the consistency of image features at reprojected fixations. Nonparametric clustering is then used to optimize the point distribution to provide an accurate ablation path. For experimental validation, a study where eight subjects prescribe an ablation path on the right atrium of the heart using only their gaze control is presented. The accuracy of the proposed method is validated using a phantom heart model with known 3D ground truth.


Subject(s)
Algorithms , Catheter Ablation/methods , Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Robotics/methods , Surgery, Computer-Assisted/methods
17.
Med Image Comput Comput Assist Interv ; 11(Pt 2): 676-83, 2008.
Article in English | MEDLINE | ID: mdl-18982663

ABSTRACT

The use of master-slave surgical robots for Minimally Invasive Surgery (MIS) has created a physical separation between the surgeon and the patient. Reconnecting the essential visuomotor sensory feedback is important for the safe practice of robotic assisted MIS procedures. This paper introduces a novel gaze contingent framework with real-time haptic feedback by transforming visual sensory information into physical constraints that can interact with the motor sensory channel. We demonstrate how motor tracking of deforming tissue can be made more effective and accurate through the concept of gaze-contingent motor channelling. The method also uses 3D eye gaze to dynamically prescribe and update safety boundaries during robotic assisted MIS without prior knowledge of the soft-tissue morphology. Initial validation results on both simulated and robotic assisted phantom procedures demonstrate the potential clinical value of the technique.


Subject(s)
Fixation, Ocular , Man-Machine Systems , Minimally Invasive Surgical Procedures/methods , Robotics/methods , Surgery, Computer-Assisted/methods , Touch , User-Computer Interface , Biomimetics/methods , Humans , Image Interpretation, Computer-Assisted/methods
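The gaze-contingent motor channelling described above can be illustrated as a saturated virtual spring pulling the tool tip toward the 3D fixation point. The gain and saturation values below are illustrative, not the paper's:

```python
import math

def channelling_force(tool, gaze, k=20.0, f_max=3.0):
    """Virtual spring pulling the tool tip toward the 3D fixation point,
    saturated at f_max so the haptic cue stays gentle.

    tool, gaze : (x, y, z) positions in the same frame, metres
    k          : spring constant (N/m), illustrative value
    f_max      : force saturation limit (N), illustrative value
    """
    d = [g - t for g, t in zip(gaze, tool)]
    dist = math.sqrt(sum(c * c for c in d))
    if dist == 0.0:
        return [0.0, 0.0, 0.0]
    mag = min(k * dist, f_max)          # saturate the pull
    return [mag * c / dist for c in d]

# Tool tip 0.05 m to the left of the fixation point: a 1 N pull toward it.
print([round(c, 6) for c in channelling_force((0.0, 0.0, 0.0), (0.05, 0.0, 0.0))])
# [1.0, 0.0, 0.0]
```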
18.
Comput Aided Surg ; 12(6): 335-46, 2007 Nov.
Article in English | MEDLINE | ID: mdl-18066949

ABSTRACT

Laparoscopic surgery poses many different constraints for the operating surgeon, resulting in a slow uptake of advanced laparoscopic procedures. Traditional approaches to the assessment of surgical performance rely on prior classification of a cohort of surgeons' technical skills for validation, which may introduce subjective bias to the outcome. In this study, Hidden Markov Models (HMMs) are used to learn surgical maneuvers from 11 subjects with mixed abilities. By using the leave-one-out method, the HMMs are trained without prior clustering of subjects into different skill levels, and the output likelihood indicates the similarity of a particular subject's motion trajectories to those of the group. The results show that after a short period of training, the novices become more similar to the group when compared to the initial pre-training assessment. The study demonstrates the strength of the proposed method in ranking the quality of trajectories of the subjects, highlighting its value in minimizing the subjective bias in skills assessment for minimally invasive surgery.


Subject(s)
Clinical Competence , Laparoscopy , Markov Chains
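The HMM likelihood scoring described above rests on the forward algorithm. A minimal scaled implementation for a discrete-observation HMM; the study's observations are continuous motion trajectories, so this is a simplified sketch:

```python
import math

def log_likelihood(obs, pi, A, B):
    """Forward algorithm for a discrete-observation HMM: log P(obs | model).

    obs : observation symbol indices
    pi  : initial state probabilities, length N
    A   : N x N transition matrix, A[i][j] = P(state j | state i)
    B   : N x M emission matrix, B[i][o] = P(symbol o | state i)
    """
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    ll = 0.0
    for o in obs[1:]:
        s = sum(alpha)                  # scale to avoid underflow,
        ll += math.log(s)               # accumulating the log scale factors
        alpha = [a / s for a in alpha]
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                 for j in range(n)]
    return ll + math.log(sum(alpha))

# Sanity check: a one-state fair-coin model gives P = 0.5 ** 3 for 3 flips.
print(log_likelihood([0, 1, 0], [1.0], [[1.0]], [[0.5, 0.5]]))  # ~= log(0.125)
```

In leave-one-out scoring of the kind the study uses, the model is trained on all subjects but one, and each held-out subject's trajectory is ranked by this log-likelihood against the group.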
19.
Article in English | MEDLINE | ID: mdl-18044625

ABSTRACT

With the increasing sophistication of surgical robots, the use of motion stabilisation for enhancing the performance of micro-surgical tasks is an actively pursued research topic. The use of mechanical stabilisation devices has certain advantages, in terms of both simplicity and consistency. The technique, however, can complicate the existing surgical workflow and interfere with an already crowded MIS operated cavity. With the advent of reliable vision-based real-time and in situ in vivo techniques on 3D-deformation recovery, current effort is being directed towards the use of optical based techniques for achieving adaptive motion stabilisation. The purpose of this paper is to assess the effect of virtual stabilization on foveal/parafoveal vision during robotic assisted MIS. Detailed psychovisual experiments have been performed. Results show that stabilisation of the whole visual field is not necessary and it is sufficient to perform accurate motion tracking and deformation compensation within a relatively small area that is directly under foveal vision. The results have also confirmed that under the current motion stabilisation regime, the deformation of the periphery does not affect the visual acuity and there is no indication of the deformation velocity of the periphery affecting foveal sensitivity. These findings are expected to have a direct implication on the future design of visual stabilisation methods for robotic assisted MIS.


Subject(s)
Fixation, Ocular/physiology , Image Enhancement/methods , Minimally Invasive Surgical Procedures/methods , Surgery, Computer-Assisted/methods , Task Performance and Analysis , User-Computer Interface , Visual Perception/physiology , Artifacts , Humans , Image Interpretation, Computer-Assisted/methods , Movement/physiology
20.
Comput Aided Surg ; 11(5): 256-66, 2006 Sep.
Article in English | MEDLINE | ID: mdl-17127651

ABSTRACT

OBJECTIVE: Recovering tissue depth and deformation during robotically assisted minimally invasive procedures is an important step towards motion compensation, stabilization and co-registration with preoperative data. This work demonstrates that eye gaze derived from binocular eye tracking can be effectively used to recover 3D motion and deformation of the soft tissue. METHODS: A binocular eye-tracking device was integrated into the stereoscopic surgical console. After calibration, the 3D fixation point of the participating subjects could be accurately resolved in real time. A CT-scanned phantom heart model was used to demonstrate the accuracy of gaze-contingent depth extraction and motion stabilization of the soft tissue. The dynamic response of the oculomotor system was assessed with the proposed framework by using autoregressive modeling techniques. In vivo data were also used to perform gaze-contingent decoupling of cardiac and respiratory motion. RESULTS: Depth reconstruction, deformation tracking, and motion stabilization of the soft tissue were possible with binocular eye tracking. The dynamic response of the oculomotor system was able to cope with frequencies likely to occur under most routine minimally invasive surgical operations. CONCLUSION: The proposed framework presents a novel approach towards the tight integration of a human and a surgical robot where interaction in response to sensing is required to be under the control of the operating surgeon.


Subject(s)
Minimally Invasive Surgical Procedures/instrumentation , Pattern Recognition, Automated/methods , Pattern Recognition, Visual/physiology , Robotics/instrumentation , Surgery, Computer-Assisted/instrumentation , Algorithms , Artificial Intelligence , Attention/physiology , Computer Simulation , Humans , Image Interpretation, Computer-Assisted , Image Processing, Computer-Assisted , Imaging, Three-Dimensional , Minimally Invasive Surgical Procedures/methods , Motion , Photic Stimulation , Photogrammetry , Pilot Projects , Robotics/methods , Surgery, Computer-Assisted/methods , Tomography, X-Ray Computed
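The autoregressive modelling used above to characterise the dynamic response of the oculomotor system can be sketched as a least-squares AR(p) fit. This plain-Python version solves the normal equations directly and is illustrative, not the paper's pipeline:

```python
def fit_ar(x, p):
    """Least-squares fit of an AR(p) model x[t] = sum_k a[k] * x[t-k-1] + e.

    Builds the regression matrix of p lagged samples, forms the normal
    equations, and solves them by Gaussian elimination with partial
    pivoting; returns the coefficients a[0..p-1].
    """
    n = len(x)
    rows = [[x[t - k - 1] for k in range(p)] for t in range(p, n)]
    y = [x[t] for t in range(p, n)]
    # Normal equations: (X^T X) a = X^T y
    M = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
    b = [sum(r[i] * yt for r, yt in zip(rows, y)) for i in range(p)]
    for c in range(p):
        piv = max(range(c, p), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        b[c], b[piv] = b[piv], b[c]
        for r in range(c + 1, p):
            f = M[r][c] / M[c][c]
            for j in range(c, p):
                M[r][j] -= f * M[c][j]
            b[r] -= f * b[c]
    a = [0.0] * p
    for r in range(p - 1, -1, -1):
        a[r] = (b[r] - sum(M[r][j] * a[j] for j in range(r + 1, p))) / M[r][r]
    return a

# A noiseless AR(1) series x[t] = 0.9 * x[t-1] is recovered exactly.
x = [1.0]
for _ in range(50):
    x.append(0.9 * x[-1])
print(round(fit_ar(x, 1)[0], 6))  # 0.9
```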