Results 1 - 11 of 11
1.
J Neurosci Methods; 368: 109453, 2022 Feb 15.
Article in English | MEDLINE | ID: mdl-34968626

ABSTRACT

BACKGROUND: Camera images encode large amounts of visual information about an animal and its environment, enabling high-fidelity 3D reconstruction of both using computer vision methods. Most systems, whether markerless (e.g., deep learning based) or marker-based, require multiple cameras to track features across multiple points of view to enable such 3D reconstruction. However, such systems can be expensive and are challenging to set up in small-animal research apparatuses. NEW METHODS: We present an open-source, marker-based system for tracking the head of a rodent for behavioral research that requires only a single camera with a potentially wide field of view. The system features a lightweight visual target and computer vision algorithms that together enable high-accuracy tracking of the six-degree-of-freedom position and orientation of the animal's head. The system, which requires only a single camera positioned above the behavioral arena, robustly reconstructs the pose over a wide range of head angles (360° in yaw, and approximately ±120° in roll and pitch). RESULTS: Experiments with live animals demonstrate that the system reliably identifies rat head position and orientation. Evaluations against a commercial optical tracker show that the system achieves accuracy rivaling commercial multi-camera systems. COMPARISON WITH EXISTING METHODS: Our solution significantly improves upon existing monocular marker-based tracking methods, both in accuracy and in allowable range of motion. CONCLUSIONS: The proposed system enables the study of complex behaviors by providing robust, fine-scale measurements of rodent head motions over a wide range of orientations.
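For readers unfamiliar with monocular pose estimation, the sketch below shows the general recipe that single-camera, marker-based trackers of this kind build on: given the known 3D geometry of a marker and the detected 2D image locations of its features, a perspective-n-point solver recovers the full 6-DOF pose. This is a minimal illustration using OpenCV's solvePnP with hypothetical marker geometry and calibration values, not the paper's actual target design or algorithm.

```python
import cv2
import numpy as np

# Hypothetical planar marker: 3D feature coordinates (mm) in the
# marker's own frame. Real target geometry is design-specific.
object_pts = np.array([[0.0, 0.0, 0.0],
                       [20.0, 0.0, 0.0],
                       [20.0, 20.0, 0.0],
                       [0.0, 20.0, 0.0]])

# 2D pixel locations of the same features detected in one image.
image_pts = np.array([[320.0, 240.0],
                      [388.0, 244.0],
                      [384.0, 310.0],
                      [317.0, 306.0]])

# Intrinsics from a prior camera calibration (placeholder values).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)  # assume negligible lens distortion

# Recover the marker's full 6-DOF pose relative to the camera
# from this single view.
ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist,
                              flags=cv2.SOLVEPNP_ITERATIVE)
R, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix
print("rotation:\n", R, "\ntranslation (mm):", tvec.ravel())
```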


Subject(s)
Algorithms; Optical Devices; Animals; Computers; Motion; Rats
2.
Front Robot AI; 8: 747917, 2021.
Article in English | MEDLINE | ID: mdl-34926590

ABSTRACT

Approaches to robotic manufacturing, assembly, and servicing of in-space assets range from autonomous operation to direct teleoperation, with many forms of semi-autonomous teleoperation in between. Because most approaches require one or more human operators at some level, it is important to explore the control and visualization interfaces available to those operators, taking into account the challenges due to significant telemetry time delay. We consider one motivating application of remote teleoperation, which is ground-based control of a robot on-orbit for satellite servicing. This paper presents a model-based architecture that: 1) improves visualization and situation awareness, 2) enables more effective human/robot interaction and control, and 3) detects task failures based on anomalous sensor feedback. We illustrate elements of the architecture by drawing on 10 years of our research in this area. The paper further reports the results of several multi-user experiments to evaluate the model-based architecture, on ground-based test platforms, for satellite servicing tasks subject to round-trip communication latencies of several seconds. The most significant performance gains were obtained by enhancing the operators' situation awareness via improved visualization and by enabling them to precisely specify intended motion. In contrast, changes to the control interface, including model-mediated control or an immersive 3D environment, often reduced the reported task load but did not significantly improve task performance. Considering the challenges of fully autonomous intervention, we expect that some form of teleoperation will continue to be necessary for robotic in-situ servicing, assembly, and manufacturing tasks for the foreseeable future. We propose that effective teleoperation can be enabled by modeling the remote environment, providing operators with a fused view of the real environment and virtual model, and incorporating interfaces and control strategies that enable interactive planning, precise operation, and prompt detection of errors.
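As a minimal illustration of the model-mediated idea mentioned above, the sketch below simulates the defining property of such interfaces: the operator's commands act immediately on a local virtual model (for visualization and predictive feedback), while the same commands reach the remote robot only after the telemetry delay. The 1-D dynamics and all names are hypothetical; this is not the paper's architecture.

```python
from collections import deque

DELAY_STEPS = 30  # e.g., a 3 s one-way delay at a 10 Hz control rate
uplink = deque([0.0] * DELAY_STEPS)  # commands in transit to the robot

local_model_pos = 0.0   # operator-side virtual model (instant feedback)
remote_robot_pos = 0.0  # actual robot, driven only by delayed commands

def step(cmd_vel, dt=0.1):
    """One control tick: the virtual model responds immediately to the
    operator, while the real robot executes the same command
    DELAY_STEPS ticks later."""
    global local_model_pos, remote_robot_pos
    local_model_pos += cmd_vel * dt        # predictive display update
    uplink.append(cmd_vel)
    remote_robot_pos += uplink.popleft() * dt

for _ in range(100):
    step(cmd_vel=0.5)
# The model leads the robot by exactly the delay's worth of motion.
print(local_model_pos, remote_robot_pos)  # 5.0 vs 3.5
```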

3.
Front Robot AI; 8: 612964, 2021.
Article in English | MEDLINE | ID: mdl-34250025

ABSTRACT

Since the first reports of a novel coronavirus (SARS-CoV-2) in December 2019, over 33 million people have been infected worldwide and approximately 1 million have died from COVID-19, the disease caused by this virus. In the United States alone, there have been approximately 7 million cases and over 200,000 deaths. This outbreak has placed an enormous strain on healthcare systems and workers. Severe cases require hospital care, and 8.5% of patients require mechanical ventilation in an intensive care unit (ICU). One major challenge is the necessity for clinical care personnel to don and doff cumbersome personal protective equipment (PPE) in order to enter an ICU to make simple adjustments to ventilator settings. Although future ventilators and other ICU equipment may be controllable remotely through computer networks, the enormous installed base of existing ventilators does not have this capability. This paper reports the development of a simple, low-cost telerobotic system that permits adjustment of ventilator settings from outside the ICU. The system consists of a small Cartesian robot capable of operating a ventilator touch screen under camera-based visual control from a wirelessly connected tablet master device located outside the room. Engineering system tests demonstrated that the open-loop mechanical repeatability of the device was 7.5 mm, and that the average positioning error of the robotic finger under visual servoing control was 5.94 mm. Successful usability tests in a simulated ICU environment were carried out and are reported. In addition to enabling a significant reduction in PPE consumption, the prototype system has been shown in a preliminary evaluation to significantly reduce the total time required for a respiratory therapist to perform typical setting adjustments on a commercial ventilator, including donning and doffing PPE, from 271 to 109 s.
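The sketch below illustrates the kind of proportional, image-based visual servoing loop such a system could use to position its robotic finger over a touchscreen target: measure the fingertip's pixel error in the camera image and command a small corrective Cartesian step until the error falls below a tolerance. The `detect_fingertip` and `move_xy_mm` interfaces and all constants are hypothetical placeholders, not the reported implementation.

```python
import numpy as np

PIXELS_PER_MM = 4.0   # assumed camera scale over the flat touch screen
GAIN = 0.5            # proportional gain on the pixel error
TOL_PX = 4            # stop when within roughly 1 mm of the target

def visual_servo_to(target_px, detect_fingertip, move_xy_mm,
                    max_iters=50):
    """Drive the robot fingertip toward a touchscreen target using
    proportional image-based visual servoing. `detect_fingertip`
    (camera) and `move_xy_mm` (robot) are hypothetical interfaces."""
    for _ in range(max_iters):
        finger_px = detect_fingertip()           # (u, v) in pixels
        err = np.asarray(target_px) - finger_px  # pixel error
        if np.linalg.norm(err) < TOL_PX:
            return True                          # target reached
        move_xy_mm(GAIN * err / PIXELS_PER_MM)   # small corrective step
    return False                                 # failed to converge
```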

4.
Curr Biol; 28(24): 4029-4036.e4, 2018 Dec 17.
Article in English | MEDLINE | ID: mdl-30503617

ABSTRACT

Active sensing involves the production of motor signals for the purpose of acquiring sensory information [1-3]. The most common form of active sensing, found across animal taxa and behaviors, involves the generation of movements, e.g., whisking [4-6], touching [7, 8], sniffing [9, 10], and eye movements [11]. Active sensing movements profoundly affect the information carried by sensory feedback pathways [12-15] and are modulated by both top-down goals (e.g., measuring weight versus texture [1, 16]) and bottom-up stimuli (e.g., lights on or off [12]), but it remains unclear whether and how these movements are controlled in relation to the ongoing feedback they generate. To investigate the control of movements for active sensing, we created an experimental apparatus for freely swimming weakly electric fish, Eigenmannia virescens, that modulates the gain of reafferent feedback by adjusting the position of a refuge based on real-time videographic measurements of fish position. We discovered that fish robustly regulate sensory slip via closed-loop control of active sensing movements. Specifically, as fish performed the task of maintaining position inside the refuge [17-22], they dramatically up- or downregulated fore-aft active sensing movements in relation to a 4-fold change of experimentally modulated reafferent gain. These changes in swimming movements served to maintain a constant magnitude of sensory slip. The magnitude of sensory slip depended on the presence or absence of visual cues. These results indicate that fish use two controllers: one that controls the acquisition of information by regulating feedback from active sensing movements and another that maintains position in the refuge, a control structure that may be ubiquitous in animals [23, 24].
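One plausible reading of the closed-loop manipulation, sketched below: the refuge is servoed as a scaled copy of the measured fish position, so the experimenter's gain directly sets how much of the fish's own movement is fed back to it as sensory slip. This is an illustrative toy model under assumed 1-D kinematics, not the authors' exact control law.

```python
def refuge_step(fish_pos, gain):
    """Command the refuge as a scaled copy of the measured fish
    position. The sensory slip the fish experiences is then
    (gain - 1) times its own movement: gain = 1 cancels reafferent
    feedback entirely, gain = 0 leaves the refuge stationary."""
    refuge_cmd = gain * fish_pos          # from real-time video tracking
    sensory_slip = refuge_cmd - fish_pos  # what the fish senses
    return refuge_cmd, sensory_slip

# Example: attenuated vs. amplified reafferent feedback for the same
# 2 cm fore-aft excursion of the fish.
for g in (0.0, 0.5, 1.0, 2.0):
    print(g, refuge_step(fish_pos=2.0, gain=g))
```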


Subject(s)
Feedback, Sensory/physiology; Gymnotiformes/physiology; Swimming/physiology; Animals; Video Recording
5.
Med Image Comput Comput Assist Interv; 15(Pt 1): 397-404, 2012.
Article in English | MEDLINE | ID: mdl-23285576

ABSTRACT

Current technical limitations in retinal surgery hinder the ability of surgeons to identify and localize surgical targets, increasing operating times and the risk of surgical error. In this paper we present a hybrid tracking and mosaicking method for augmented reality in retinal surgery. The system combines direct and feature-based tracking methods. A novel extension for direct visual tracking using a robust image similarity measure in color images is also proposed. Several experiments conducted on phantom, in vivo rabbit, and human images attest to the ability of the method to cope with the challenging retinal surgery scenario. Applications of the proposed method for tele-mentoring and intra-operative guidance are demonstrated.
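As a concrete stand-in for the feature-based half of such a hybrid tracker, the sketch below estimates the frame-to-frame homography of a roughly planar retinal view from matched ORB features with RANSAC outlier rejection, using standard OpenCV calls. The paper's method, including its robust color similarity measure for direct tracking, is more sophisticated; this only illustrates the feature-based building block.

```python
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def register_frames(prev_gray, cur_gray):
    """Estimate the inter-frame homography from matched ORB features;
    returns None when too few reliable matches are found."""
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(cur_gray, None)
    if des1 is None or des2 is None:
        return None
    matches = matcher.match(des1, des2)
    if len(matches) < 8:
        return None
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC makes the estimate robust to mismatched features.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H
```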


Subject(s)
Retina/surgery; Animals; Humans; Image Interpretation, Computer-Assisted/methods; Image Processing, Computer-Assisted; Imaging, Three-Dimensional/methods; Models, Statistical; Pattern Recognition, Automated/methods; Phantoms, Imaging; Rabbits; Reproducibility of Results; Retina/pathology; Robotics; Subtraction Technique; Surgery, Computer-Assisted
6.
J Thorac Cardiovasc Surg; 143(3): 528-34, 2012 Mar.
Article in English | MEDLINE | ID: mdl-22172215

ABSTRACT

OBJECTIVES: Current robotic training approaches lack criteria for automatically assessing and tracking (over time) technical skills separately from clinical proficiency. We describe the development and validation of a novel automated and objective framework for the assessment of training. METHODS: We are able to record all system variables (stereo instrument video, hand and instrument motion, button and pedal events) from the da Vinci surgical systems using a portable archival system integrated with the robotic surgical system. Data can be collected unsupervised, and the archival system does not change system operations in any way. Our open-ended multicenter protocol is collecting surgical skill benchmarking data from 24 trainees until they reach surgical proficiency, subject only to their continued availability. Two independent experts performed structured assessments (objective structured assessment of technical skills) on longitudinal data from 8 novice and 4 expert surgeons to generate baseline data for training and to validate our computerized statistical analysis methods in identifying the ranges of operational and clinical skill measures. RESULTS: Objective differences in operational and technical skill between known experts and other subjects were quantified. The longitudinal learning curves and statistical analysis for trainee performance measures are reported. Graphic representations of the skills, developed for feedback to the trainees, are also included. CONCLUSIONS: We describe an open-ended longitudinal study and an automated motion recognition system capable of objectively differentiating between clinical and technical operational skills in robotic surgery. Our results demonstrate a convergence of trainee skill parameters toward those derived from expert robotic surgeons during the course of our training protocol.
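To make the statistical-analysis step concrete, here is a minimal sketch of how per-trial features derived from recorded telemetry could feed a cross-validated classifier separating experts from trainees (the subject terms list a support vector machine). The features, shapes, and random placeholder data are illustrative assumptions only, not the paper's pipeline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Placeholder per-trial features derived from recorded telemetry,
# e.g., task time, instrument path length, camera-movement frequency,
# master workspace volume. Shapes and values are illustrative only.
rng = np.random.default_rng(0)
X = rng.random((60, 4))       # 60 trials x 4 features
y = np.repeat([0, 1], 30)     # 0 = trainee trial, 1 = expert trial

# Standardize features, then fit an RBF-kernel SVM.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy: %.2f +/- %.2f"
      % (scores.mean(), scores.std()))
```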


Subject(s)
Education, Medical, Graduate; Learning Curve; Motor Skills; Robotics/education; Support Vector Machine; Surgery, Computer-Assisted/education; Task Performance and Analysis; Analysis of Variance; Automation; Clinical Competence; Cluster Analysis; Humans; Longitudinal Studies; Reproducibility of Results; Time Factors; United States
7.
Int J Med Robot; 8(1): 118-24, 2012 Mar.
Article in English | MEDLINE | ID: mdl-22114003

ABSTRACT

BACKGROUND: With the increased use of robotic surgery in specialties including urology, development of training methods has also intensified. However, current approaches lack the ability to discriminate between operational and surgical skills. METHODS: An automated recording system was used to longitudinally (monthly) acquire instrument motion/telemetry and video for four basic surgical skills: suturing, manipulation, transection, and dissection. Statistical models were then developed to discriminate the human-machine skill differences between practicing expert surgeons and trainees. RESULTS: Data from six trainees and two experts were analyzed to validate the first statistical models of operational skills and to demonstrate classification with very high accuracy (91.7% for the masters and 88.2% for camera motion) and sensitivity. CONCLUSIONS: The paper reports on a longitudinal study aimed at tracking robotic surgery trainees to proficiency, and on methods capable of objectively assessing operational and technical skills that will be used to assess trainee progress at the participating institutions.
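Below is a sketch of the kind of kinematic features one might extract from recorded instrument telemetry before fitting such statistical models. The specific features (path length, mean speed, a jerk-based smoothness score) are common choices in the skill-assessment literature and are assumptions here, not the paper's feature set.

```python
import numpy as np

def kinematic_features(positions, dt):
    """Summarize one trial's instrument telemetry (N x 3 positions in
    meters, sampled every `dt` seconds) into simple skill-related
    features. Feature choice is illustrative."""
    steps = np.diff(positions, axis=0)
    step_len = np.linalg.norm(steps, axis=1)
    speed = step_len / dt                    # instantaneous speed
    jerk = np.diff(speed, n=2) / dt ** 2     # smoothness proxy
    return {
        "path_length_m": step_len.sum(),
        "mean_speed_mps": speed.mean(),
        "rms_jerk": np.sqrt(np.mean(jerk ** 2)),
    }

# Example on synthetic telemetry sampled at 50 Hz.
traj = np.cumsum(np.random.default_rng(1).normal(0, 1e-3, (500, 3)), axis=0)
print(kinematic_features(traj, dt=0.02))
```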


Subject(s)
General Surgery/methods; Robotics/methods; Telemetry/methods; Algorithms; Automation; Clinical Competence; Computer Simulation; Equipment Design; General Surgery/education; Humans; Man-Machine Systems; Models, Statistical; Motion; Reproducibility of Results; Robotics/education
8.
Midas J; 2011: 2-9, 2011 Oct 01.
Article in English | MEDLINE | ID: mdl-25243238

ABSTRACT

This paper presents the rationale for the use of a component-based architecture for computer-assisted intervention (CAI) systems, including the ability to reuse components and to easily develop distributed systems. We introduce three additional capabilities, however, that we believe are especially important for research and development of CAI systems. The first is the ability to deploy components among different processes (as conventionally done) or within the same process (for optimal real-time performance), without requiring source-level modifications to the component. This is particularly relevant for real-time video processing, where the use of multiple processes could cause perceptible delays in the video stream. The second key feature is the ability to dynamically reconfigure the system. In a system composed of multiple processes on multiple computers, this allows one process to be restarted (e.g., after correcting a problem) and reconnected to the rest of the system, which is more convenient than restarting the entire distributed application and enables better fault recovery. The third key feature is the availability of run-time tools for data collection, interactive control, and introspection, and offline tools for data analysis and playback. The above features are provided by the open-source cisst software package, which forms the basis for the Surgical Assistant Workstation (SAW) framework. A complex computer-assisted intervention system for retinal microsurgery is presented as an example that relies on these features. This system integrates robotics, stereo microscopy, force sensing, and optical coherence tomography (OCT) imaging to transcend the current limitations of vitreoretinal surgery.
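The deployment-transparency idea can be illustrated with a toy component model: components expose the same queue-based interfaces regardless of where they run, and a configurator decides whether to wire them directly in-process (no serialization, suitable for video) or to bridge the same interfaces across processes. This sketch is a hypothetical illustration and does not reflect the actual cisst/SAW API.

```python
import queue
import threading

class Component:
    """Minimal component: an inbox/outbox pair and a periodic step().
    How the queues are wired (shared in-process, or bridged over a
    network) is decided by a configurator, not by the component's
    own code."""
    def __init__(self, name):
        self.name = name
        self.inbox = queue.Queue()
        self.outbox = None  # wired up by the configurator

    def step(self):
        raise NotImplementedError  # subclasses do the actual work

    def run(self, stop_event, period_s=0.01):
        # stop_event is a threading.Event shared with the supervisor,
        # allowing the component to be stopped and later reconnected.
        while not stop_event.is_set():
            self.step()
            stop_event.wait(period_s)

def connect_in_process(producer, consumer):
    """Same-process wiring: the producer writes straight into the
    consumer's queue with no serialization, which matters for
    real-time video. A cross-process configurator would bridge the
    identical interfaces over sockets, leaving both components
    unchanged."""
    producer.outbox = consumer.inbox
```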

9.
Midas J; 2011 Jun.
Article in English | MEDLINE | ID: mdl-24398557

ABSTRACT

This paper presents the design of a tele-robotic microsurgical platform for developing cooperative and tele-operative control schemes, sensor-based smart instruments, user interfaces, and new surgical techniques, with eye surgery as the driving application. The system is built using the distributed component-based cisst libraries and the Surgical Assistant Workstation framework. It includes a cooperatively controlled EyeRobot2, a da Vinci Master manipulator, and a remote stereo visualization system. We use constrained-optimization-based virtual fixture control to provide a virtual remote center of motion (vRCM) and haptic feedback. Such a system can be used in a hybrid setup, combining local cooperative control with remote tele-operation, in which an experienced surgeon can provide hand-over-hand tutoring to a novice user. In another scheme, the system can provide haptic feedback based on virtual fixtures constructed from real-time force and proximity sensor information.
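A toy version of a constrained-optimization virtual fixture for a remote center of motion is sketched below: find the 6-DOF tool velocity closest to the commanded one, subject to the tool axis having no lateral velocity at the RCM point, solved as an equality-constrained least-squares (KKT) system. The formulation and dimensions are illustrative assumptions, not the paper's controller.

```python
import numpy as np

def skew(a):
    """Cross-product matrix: skew(a) @ b == a x b."""
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

def rcm_constrained_velocity(x_des, shaft_dir, r):
    """Find [v; w] closest to the desired 6-vector x_des such that the
    tool axis has no lateral velocity at the RCM point. `shaft_dir` is
    the tool-axis direction; `r` points from tool tip to the RCM."""
    d = shaft_dir / np.linalg.norm(shaft_dir)
    P_perp = np.eye(3) - np.outer(d, d)  # strips along-shaft motion
    # Velocity of the axis point at the RCM: v_rcm = v + w x r.
    A = P_perp @ np.hstack([np.eye(3), -skew(r)])
    # KKT system for: min ||x - x_des||^2  s.t.  A x = 0.
    KKT = np.block([[np.eye(6), A.T],
                    [A, np.zeros((3, 3))]])
    rhs = np.concatenate([x_des, np.zeros(3)])
    sol = np.linalg.lstsq(KKT, rhs, rcond=None)[0]
    return sol[:3], sol[3:]  # constrained linear, angular velocity

# Example: a sideways command gets reshaped to pivot about the port.
v, w = rcm_constrained_velocity(np.array([1, 0, 0, 0, 0, 0.0]),
                                shaft_dir=np.array([0, 0, 1.0]),
                                r=np.array([0, 0, 0.05]))
print(v, w)
```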

10.
Urology; 73(4): 896-900, 2009 Apr.
Article in English | MEDLINE | ID: mdl-19193404

ABSTRACT

OBJECTIVES: To investigate a markerless tracking system for real-time stereo-endoscopic visualization of preoperative computed tomographic imaging as an augmented display during robot-assisted laparoscopic partial nephrectomy. METHODS: Stereoscopic video segments from one patient undergoing robot-assisted laparoscopic partial nephrectomy for a tumor and from another undergoing the procedure for a partial staghorn renal calculus were processed to evaluate the performance of a three-dimensional (3D)-to-3D registration algorithm. After both cases, we registered a segment of the video recording to the corresponding preoperative 3D computed tomography image. After calibrating the camera and overlay, 3D-to-3D registration was created between the model and the surgical recording using a modified iterative closest point technique. Image-based tracking technology tracked selected fixed points on the kidney surface to augment the image-to-model registration. RESULTS: Our investigation demonstrated that we can identify and track the kidney surface in real time in intraoperative video recordings and overlay the 3D models of the kidney, tumor (or stone), and collecting system semitransparently. Using a basic computer research platform, we achieved an update rate of 10 Hz and an overlay latency of 4 frames. The accuracy of the 3D registration was 1 mm. CONCLUSIONS: Augmented reality overlay of reconstructed 3D computed tomography images onto real-time stereo video footage is possible using iterative closest point and image-based surface tracking technology that does not require external navigation tracking systems or preplaced surface markers. Additional studies are needed to assess the precision and to achieve fully automated registration and display for intraoperative use.
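For reference, the sketch below implements textbook point-to-point iterative closest point with SVD-based (Kabsch) rigid alignment, the core of the registration step. The paper uses a modified ICP variant, so this is only the baseline idea applied to hypothetical surface point sets.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, iters=20):
    """Rigidly align `src` (N x 3, e.g., a stereo-reconstructed kidney
    surface) to `dst` (M x 3, e.g., a CT-derived surface model) with
    plain point-to-point ICP. Returns the accumulated R, t."""
    tree = cKDTree(dst)
    R, t = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)   # closest-point correspondences
        matched = dst[idx]
        # Kabsch: best rigid transform for current correspondences.
        mu_s, mu_d = cur.mean(0), matched.mean(0)
        H = (cur - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ D @ U.T    # D guards against reflections
        t_step = mu_d - R_step @ mu_s
        cur = cur @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step
    return R, t
```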


Subject(s)
Imaging, Three-Dimensional; Kidney Calculi/diagnosis; Kidney Calculi/surgery; Kidney Neoplasms/diagnosis; Kidney Neoplasms/surgery; Laparoscopy/methods; Nephrectomy/methods; Robotics; Surgery, Computer-Assisted; Tomography, X-Ray Computed; Algorithms; Computer Systems; Feasibility Studies; Humans; Video Recording
11.
Stud Health Technol Inform; 132: 396-401, 2008.
Article in English | MEDLINE | ID: mdl-18391329

ABSTRACT

The ability to accurately recognize elementary surgical gestures is a stepping stone to automated surgical assessment and surgical training. However, as the pool of subjects increases, variation in surgical techniques and unanticipated motion increases the challenge of creating robust statistical models of gestures. This paper examines the applicability of advanced modeling techniques from automated speech recognition to the problem of increasing variability in surgical motions. In particular, we demonstrate the effectiveness of automatically bootstrapped user-adaptive models on diverse data acquired from the da Vinci surgical robot.
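A baseline version of the speech-recognition recipe applied to surgical motion, sketched below with the hmmlearn library: train one Gaussian HMM per gesture on stacked kinematic frames, then label an unseen segment by maximum log-likelihood. The paper's contribution, bootstrapped user-adaptive models, builds on top of such baseline models; the data shapes and state counts here are assumptions.

```python
from hmmlearn import hmm

def train_gesture_models(data_by_gesture, n_states=3):
    """Fit one Gaussian HMM per surgical gesture, mirroring per-word
    acoustic models in speech recognition. Input format (assumed):
    {gesture_label: (X, lengths)} where X stacks kinematic frames
    (e.g., 6-DOF instrument motion) and lengths marks trial
    boundaries within X."""
    models = {}
    for label, (X, lengths) in data_by_gesture.items():
        m = hmm.GaussianHMM(n_components=n_states,
                            covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[label] = m
    return models

def classify(models, X_trial):
    """Label an unseen motion segment by maximum log-likelihood
    across the per-gesture models."""
    return max(models, key=lambda g: models[g].score(X_trial))
```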


Subject(s)
Computer Simulation; General Surgery/methods; Gestures; Humans; Models, Statistical; Robotics; Speech Recognition Software; United States; User-Computer Interface