Results 1 - 20 of 27
1.
Int J Comput Assist Radiol Surg ; 19(6): 1147-1155, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38598140

ABSTRACT

PURPOSE: This paper evaluates user performance in telesurgical tasks with the da Vinci Research Kit (dVRK), comparing unilateral teleoperation, bilateral teleoperation with force sensors, and bilateral teleoperation with sensorless force estimation. METHODS: A four-channel teleoperation system with disturbance observers and sensorless force estimation with learning-based dynamic compensation was developed. Palpation experiments were conducted with 12 users who tried to locate tumors hidden in tissue phantoms with their fingers or through handheld or teleoperated laparoscopic instruments with visual, force sensor, or sensorless force estimation feedback. In a peg transfer experiment with 10 users, the contribution of sensorless haptic feedback with/without learning-based dynamic compensation was assessed using NASA TLX surveys, measured free motion speeds and forces, environment interaction forces, and experiment completion times. RESULTS: The first study showed a 30% increase in accuracy in detecting tumors with sensorless haptic feedback over visual feedback, with only a 5-10% drop in accuracy compared with sensor feedback or direct instrument contact. The second study showed that sensorless feedback can reduce interaction forces due to incidental contacts by approximately a factor of three compared with unilateral teleoperation. The cost is an increase in free motion forces and physical effort, which we show can be mitigated with dynamic compensation. CONCLUSION: We demonstrate the benefits of sensorless haptic feedback in teleoperated surgery systems, especially with dynamic compensation, and show that it can improve surgical performance without hardware modifications.
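As a rough illustration of how sensorless force estimation can work, the sketch below implements a one-degree-of-freedom momentum observer whose residual converges to the external force. The mass, gain, time step, and simulated dynamics are illustrative assumptions, not the paper's actual dVRK implementation.

```python
# One-DoF momentum-observer sketch of sensorless external-force estimation.
# The mass, gain, time step, and simulated dynamics are illustrative
# assumptions, not the paper's actual dVRK implementation.
dt, K, m = 1e-3, 50.0, 1.0
v, p_hat, r = 0.0, 0.0, 0.0
f_ext, tau = 2.0, 0.0        # constant external force, zero motor torque
for _ in range(2000):
    v += (tau + f_ext) / m * dt     # simulated plant: m*dv/dt = tau + f_ext
    p = m * v                       # "measured" momentum
    p_hat += (tau + r) * dt         # observer prediction uses tau and residual
    r = K * (p - p_hat)             # residual converges toward f_ext
print(round(r, 2))
```

The residual dynamics satisfy dr/dt = K(f_ext - r), so the estimate converges exponentially to the applied force without any force sensor.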


Subject(s)
Robotic Surgical Procedures , Humans , Robotic Surgical Procedures/methods , Robotic Surgical Procedures/instrumentation , Phantoms, Imaging , Equipment Design , Telemedicine/instrumentation , Palpation/methods , Palpation/instrumentation , User-Computer Interface , Feedback , Robotics/instrumentation , Robotics/methods , Laparoscopy/methods , Laparoscopy/instrumentation
2.
IEEE Robot Autom Lett ; 8(3): 1287-1294, 2023 Mar.
Article in English | MEDLINE | ID: mdl-37997605

ABSTRACT

This paper introduces the first integrated real-time intraoperative surgical guidance system in which the endoscope camera of the da Vinci surgical robot and a transrectal ultrasound (TRUS) transducer are co-registered using photoacoustic markers detected in both fluorescence (FL) and photoacoustic (PA) imaging. The co-registered system enables the TRUS transducer to track the laser spot illuminated by a pulsed laser diode attached to the surgical instrument, providing both FL and PA images of the surgical region of interest (ROI). As a result, the generated photoacoustic marker is visualized and localized in the da Vinci endoscopic FL images, and the corresponding tracking can be conducted by rotating the TRUS transducer to display the PA image of the marker. A quantitative evaluation revealed average registration and tracking errors of 0.84 mm and 1.16°, respectively. This study shows that co-registered photoacoustic marker tracking can be effectively deployed intraoperatively using TRUS+PA imaging, providing functional guidance of the surgical ROI.
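Co-registering two imaging frames from corresponding marker points is commonly done with a least-squares rigid fit. The sketch below uses the classic SVD-based (Arun/Kabsch) method on synthetic points, as a generic stand-in for the paper's actual photoacoustic-marker registration pipeline.

```python
import numpy as np

def rigid_register(A, B):
    """Least-squares rigid transform (R, t) mapping point set A onto B.
    Classic SVD (Arun/Kabsch) method; a generic sketch, not the paper's
    exact co-registration pipeline."""
    ca, cb = A.mean(0), B.mean(0)
    H = (A - ca).T @ (B - cb)                 # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflection
    R = Vt.T @ D @ U.T
    return R, cb - R @ ca

# Synthetic check: recover a known rotation and translation of marker points
rng = np.random.default_rng(0)
A = rng.random((6, 3))
th = 0.3
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 1.0])
B = A @ R_true.T + t_true
R, t = rigid_register(A, B)
print(np.allclose(R, R_true), np.allclose(t, t_true))
```

With noise-free correspondences the recovered transform matches the ground truth to machine precision; real marker detections would add noise that this least-squares fit averages out.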

3.
Sensors (Basel) ; 22(14)2022 Jul 17.
Article in English | MEDLINE | ID: mdl-35891016

ABSTRACT

Developing image-guided robotic systems requires access to flexible, open-source software. For image guidance, the open-source medical imaging platform 3D Slicer is one of the most adopted tools that can be used for research and prototyping. Similarly, for robotics, the open-source middleware suite robot operating system (ROS) is the standard development framework. In the past, there have been several "ad hoc" attempts made to bridge both tools; however, they are all reliant on middleware and custom interfaces. Additionally, none of these attempts have been successful in bridging access to the full suite of tools provided by ROS or 3D Slicer. Therefore, in this paper, we present the SlicerROS2 module, which was designed for the direct use of ROS2 packages and libraries within 3D Slicer. The module was developed to enable real-time visualization of robots, accommodate different robot configurations, and facilitate data transfer in both directions (between ROS and Slicer). We demonstrate the system on multiple robots with different configurations, evaluate the system performance and discuss an image-guided robotic intervention that can be prototyped with this module. This module can serve as a starting point for clinical system development that reduces the need for custom interfaces and time-intensive platform setup.


Subject(s)
Robotics , Diagnostic Imaging , Software
4.
Front Robot AI ; 8: 747917, 2021.
Article in English | MEDLINE | ID: mdl-34926590

ABSTRACT

Approaches to robotic manufacturing, assembly, and servicing of in-space assets range from autonomous operation to direct teleoperation, with many forms of semi-autonomous teleoperation in between. Because most approaches require one or more human operators at some level, it is important to explore the control and visualization interfaces available to those operators, taking into account the challenges due to significant telemetry time delay. We consider one motivating application of remote teleoperation, which is ground-based control of a robot on-orbit for satellite servicing. This paper presents a model-based architecture that: 1) improves visualization and situation awareness, 2) enables more effective human/robot interaction and control, and 3) detects task failures based on anomalous sensor feedback. We illustrate elements of the architecture by drawing on 10 years of our research in this area. The paper further reports the results of several multi-user experiments to evaluate the model-based architecture, on ground-based test platforms, for satellite servicing tasks subject to round-trip communication latencies of several seconds. The most significant performance gains were obtained by enhancing the operators' situation awareness via improved visualization and by enabling them to precisely specify intended motion. In contrast, changes to the control interface, including model-mediated control or an immersive 3D environment, often reduced the reported task load but did not significantly improve task performance. Considering the challenges of fully autonomous intervention, we expect that some form of teleoperation will continue to be necessary for robotic in-situ servicing, assembly, and manufacturing tasks for the foreseeable future. We propose that effective teleoperation can be enabled by modeling the remote environment, providing operators with a fused view of the real environment and virtual model, and incorporating interfaces and control strategies that enable interactive planning, precise operation, and prompt detection of errors.

5.
Front Robot AI ; 8: 612964, 2021.
Article in English | MEDLINE | ID: mdl-34250025

ABSTRACT

Since the first reports of a novel coronavirus (SARS-CoV-2) in December 2019, over 33 million people have been infected worldwide and approximately 1 million have died from COVID-19, the disease caused by this virus. In the United States alone, there have been approximately 7 million cases and over 200,000 deaths. This outbreak has placed an enormous strain on healthcare systems and workers. Severe cases require hospital care, and 8.5% of patients require mechanical ventilation in an intensive care unit (ICU). One major challenge is the necessity for clinical care personnel to don and doff cumbersome personal protective equipment (PPE) in order to enter an ICU to make simple adjustments to ventilator settings. Although future ventilators and other ICU equipment may be controllable remotely through computer networks, the enormous installed base of existing ventilators does not have this capability. This paper reports the development of a simple, low cost telerobotic system that permits adjustment of ventilator settings from outside the ICU. The system consists of a small Cartesian robot capable of operating a ventilator touch screen with camera vision control via a wirelessly connected tablet master device located outside the room. Engineering system tests demonstrated that the open-loop mechanical repeatability of the device was 7.5 mm, and that the average positioning error of the robotic finger under visual servoing control was 5.94 mm. Successful usability tests in a simulated ICU environment were carried out and are reported. In addition to enabling a significant reduction in PPE consumption, the prototype system has been shown in a preliminary evaluation to significantly reduce the total time required for a respiratory therapist to perform typical setting adjustments on a commercial ventilator, including donning and doffing PPE, from 271 to 109 s.
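The visual servoing idea, commanding the robot finger from the pixel error between its detected tip and the target on the touch screen, can be sketched as a simple proportional loop. The camera scale, gain, and stop threshold below are illustrative assumptions, not the system's calibrated values.

```python
# Toy visual-servoing loop: a proportional controller drives the robot
# finger toward a target detected in camera pixels. The scale, gain, and
# 0.5 mm stop threshold are illustrative assumptions, not the paper's values.
PX_PER_MM = 4.0          # assumed camera calibration scale
GAIN = 0.5               # proportional gain on the position error
target_px, finger_px = 320.0, 100.0
steps = 0
while abs(target_px - finger_px) > 0.5 * PX_PER_MM:   # stop within ~0.5 mm
    err_mm = (target_px - finger_px) / PX_PER_MM      # pixel error -> mm
    finger_px += GAIN * err_mm * PX_PER_MM            # commanded move, re-imaged
    steps += 1
print(steps)
```

Because each iteration halves the remaining error, the loop converges in a handful of camera frames; closing the loop on the camera image is what lets a mechanically coarse robot achieve the few-millimeter accuracy needed to press touch-screen buttons.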

6.
IEEE Trans Med Robot Bionics ; 2(2): 176-187, 2020 May.
Article in English | MEDLINE | ID: mdl-32699833

ABSTRACT

High-resolution real-time intraocular imaging of the retina at the cellular level is very challenging due to the vulnerable and confined space within the eyeball as well as the limited availability of appropriate modalities. A probe-based confocal laser endomicroscopy (pCLE) system can be a potential imaging modality for improved diagnosis. The ability to visualize the retina at the cellular level could provide information that may predict surgical outcomes. The adoption of intraocular pCLE scanning is currently limited by the narrow field of view and the micron-scale range of focus. In the absence of motion compensation, physiological tremor of the surgeon's hand and patient movements also contribute to the deterioration of image quality. Therefore, an image-based hybrid control strategy is proposed to mitigate these challenges. The proposed hybrid control strategy enables shared control of the pCLE probe between surgeon and robot to scan the retina precisely, free of hand tremor, with the advantage of an image-based auto-focus algorithm that optimizes the quality of pCLE images. The hybrid control strategy is deployed in two frameworks, cooperative and teleoperated. Better image quality, smoother motion, and reduced workload are all achieved in a statistically significant manner with the hybrid control frameworks.
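The image-based auto-focus component can be illustrated with a toy hill-climb that steps the probe axially to maximize an image-sharpness score. The Gaussian sharpness model, step size, and termination threshold are assumptions for illustration, not the pCLE system's actual algorithm.

```python
import numpy as np

# Toy auto-focus search: step the (simulated) probe axially and keep the
# position that maximizes an image-sharpness score. The Gaussian sharpness
# model, step size, and stop threshold are illustrative assumptions.
def sharpness(z, z_focus=2.0):
    """Stand-in focus metric that peaks at the focal plane."""
    return float(np.exp(-((z - z_focus) ** 2)))

z, step = 0.0, 0.4
while True:
    here, fwd = sharpness(z), sharpness(z + step)
    if fwd > here:
        z += step            # keep climbing toward the peak
    elif step > 0.05:
        step /= 2            # refine the step size near the peak
    else:
        break
print(round(z, 1))
```

In a real system the sharpness score would be computed from the live pCLE frame (for example, image gradient energy) rather than from an analytic model, but the search structure is the same.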

7.
Int J Med Robot ; 15(4): e1999, 2019 Aug.
Article in English | MEDLINE | ID: mdl-30970387

ABSTRACT

BACKGROUND: It has been suggested that the lack of haptic feedback, formerly considered a limitation of the da Vinci robotic system, does not affect robotic surgeons because of training and compensation based on visual feedback. However, conclusive studies are still missing, and interest in force reflection is rising again. METHODS: We integrated a seven-DoF master into the da Vinci Research Kit. We designed tissue grasping, palpation, and incision tasks with robotic surgeons, to be performed by three groups of users (expert surgeons, medical residents, and nonsurgeons; five users per group), either with or without haptic feedback. Task-specific quantitative metrics and a questionnaire were used for assessment. RESULTS: Force reflection made a statistically significant difference for both palpation (improved inclusion detection rate) and incision (decreased tissue damage). CONCLUSIONS: Haptic feedback can improve key surgical outcomes for tasks requiring a pronounced cognitive burden for the surgeon, possibly at the cost of longer completion times.


Subject(s)
Hand Strength , Robotic Surgical Procedures/education , Robotic Surgical Procedures/instrumentation , Surgeons , Adult , Equipment Design , Feedback, Sensory , Female , Humans , Male , Palpation , Software , Surgical Wound , Surveys and Questionnaires , Touch
8.
Healthc Technol Lett ; 5(5): 194-200, 2018 Oct.
Article in English | MEDLINE | ID: mdl-30800322

ABSTRACT

In robot-assisted laparoscopic surgery, the first assistant (FA) is responsible for tasks such as robot docking, passing necessary materials, manipulating hand-held instruments, and helping with trocar planning and placement. The performance of the FA is critical for the outcome of the surgery. The authors introduce ARssist, an augmented reality application based on an optical see-through head-mounted display, to help the FA perform these tasks. ARssist offers (i) real-time three-dimensional rendering of the robotic instruments, hand-held instruments, and endoscope based on a hybrid tracking scheme and (ii) real-time stereo endoscopy that is configurable to suit the FA's hand-eye coordination when operating based on endoscopy feedback. ARssist has the potential to help the FA perform his/her task more efficiently, and hence improve the outcome of robot-assisted laparoscopic surgeries.

9.
Sci Robot ; 2(11), 2017 Oct 25.
Article in English | MEDLINE | ID: mdl-33157887

ABSTRACT

Robot Operating System (ROS) celebrates its 10th anniversary on 7 November 2017.

10.
Med Phys ; 41(9): 091712, 2014 Sep.
Article in English | MEDLINE | ID: mdl-25186387

ABSTRACT

PURPOSE: Brachytherapy is a standard option of care for prostate cancer patients but may be improved by dynamic dose calculation based on localized seed positions. The American Brachytherapy Society states that the major current limitation of intraoperative treatment planning is the inability to localize the seeds in relation to the prostate. An image-guidance system was therefore developed to localize seeds for dynamic dose calculation. METHODS: The proposed system is based on transrectal ultrasound (TRUS) and mobile C-arm fluoroscopy, while using a simple fiducial with seed-like markers to compute pose from the nonencoded C-arm. Three or more fluoroscopic images and an ultrasound volume are acquired and processed by a pipeline of algorithms: (1) seed segmentation, (2) fiducial detection with pose estimation, (3) seed matching with reconstruction, and (4) fluoroscopy-to-TRUS registration. RESULTS: The system was evaluated on ten phantom cases, resulting in an overall mean error of 1.3 mm. The system was also tested on 37 patients and each algorithm was evaluated. Seed segmentation resulted in a 1% false negative rate and 2% false positive rate. Fiducial detection with pose estimation resulted in a 98% detection rate. Seed matching with reconstruction had a mean error of 0.4 mm. Fluoroscopy-to-TRUS registration had a mean error of 1.3 mm. Moreover, a comparison of dose calculations between the authors' intraoperative method and an independent postoperative method shows a small difference of 7% and 2% for D90 and V100, respectively. Finally, the system demonstrated the ability to detect cold spots and required a total processing time of approximately 1 min. CONCLUSIONS: The proposed image-guidance system is the first practical approach to dynamic dose calculation, outperforming earlier solutions in terms of robustness, ease of use, and functional completeness.
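Step (3), seed matching with reconstruction, ultimately reduces to triangulating each seed from its matched 2D detections. The sketch below shows the least-squares intersection of back-projected rays on synthetic geometry; the ray origins and directions are illustrative assumptions, not the paper's C-arm projection model.

```python
import numpy as np

# Sketch of seed reconstruction from multiple views: given each view's
# back-projected ray through a detected seed, the 3D seed position is the
# least-squares point closest to all rays. Geometry here is synthetic.
def reconstruct(origins, dirs):
    """Least-squares point closest to all rays (origin + s * direction)."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, dirs):
        d = d / np.linalg.norm(d)
        M = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += M
        b += M @ o
    return np.linalg.solve(A, b)

seed = np.array([1.0, 2.0, 3.0])
origins = [np.array([0.0, 0.0, 0.0]),
           np.array([5.0, 0.0, 0.0]),
           np.array([0.0, 5.0, 0.0])]
dirs = [seed - o for o in origins]       # noise-free rays through the true seed
print(np.round(reconstruct(origins, dirs), 3))
```

With noisy detections the same formula returns the point minimizing the sum of squared ray distances, which is why three or more views make the reconstruction well conditioned.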


Subject(s)
Brachytherapy/methods , Fluoroscopy/methods , Prostatic Neoplasms/radiotherapy , Radiometry/methods , Radiotherapy, Image-Guided/methods , Ultrasonography/methods , Algorithms , Fiducial Markers , Fluoroscopy/instrumentation , Humans , Image Processing, Computer-Assisted/methods , Male , Phantoms, Imaging , Prostate/diagnostic imaging , Prostatic Neoplasms/diagnostic imaging , Radiometry/instrumentation , Radiotherapy Planning, Computer-Assisted/instrumentation , Radiotherapy Planning, Computer-Assisted/methods , Radiotherapy, Image-Guided/instrumentation , Time , Ultrasonography/instrumentation
11.
J Robot Surg ; 7(3): 217-25, 2013 Sep.
Article in English | MEDLINE | ID: mdl-25525474

ABSTRACT

This paper presents the development and evaluation of video augmentation on the stereoscopic da Vinci S system with intraoperative image guidance for base of tongue tumor resection in transoral robotic surgery (TORS). The proposed workflow for image-guided TORS begins by identifying and segmenting critical oropharyngeal structures (e.g., the tumor and adjacent arteries and nerves) from preoperative computed tomography (CT) and/or magnetic resonance (MR) imaging. These preoperatively planned data can be deformably registered to the intraoperative endoscopic view using mobile C-arm cone-beam computed tomography (CBCT) [1, 2]. Augmentation of the TORS endoscopic video to delineate surgical targets and critical structures has the potential to improve navigation, spatial orientation, and confidence in tumor resection. Experiments in animal specimens achieved statistically significant improvement in target localization error when comparing the proposed image guidance system to simulated current practice.

12.
Proc SPIE Int Soc Opt Eng ; 8671, 2013 Mar 08.
Article in English | MEDLINE | ID: mdl-24392207

ABSTRACT

The lack of dynamic dosimetry tools for permanent prostate brachytherapy causes otherwise avoidable problems in prostate cancer patient care. The goal of this work is to satisfy this need in a readily adoptable manner. Using the ubiquitous ultrasound scanner and mobile non-isocentric C-arm, we show that dynamic dosimetry is now possible with only the addition of an arbitrarily configured marker-based fiducial. Not only is the system easily configured from accessible hardware, but it is also simple and convenient, requiring little training from technicians. Furthermore, the proposed system is built upon robust algorithms of seed segmentation, fiducial detection, seed reconstruction, and image registration. All individual steps of the pipeline have been thoroughly tested, and the system as a whole has been validated on a study of 25 patients. The system has shown excellent results of accurately computing dose, and does so with minimal manual intervention, therefore showing promise for widespread adoption of dynamic dosimetry.

13.
Med Image Anal ; 16(7): 1347-58, 2012 Oct.
Article in English | MEDLINE | ID: mdl-22784870

ABSTRACT

Prostate brachytherapy is a treatment for prostate cancer using radioactive seeds that are permanently implanted in the prostate. The treatment success depends on adequate coverage of the target gland with a therapeutic dose, while sparing the surrounding tissue. Since seed implantation is performed under transrectal ultrasound (TRUS) imaging, intraoperative localization of the seeds in ultrasound can provide physicians with dynamic dose assessment and plan modification. However, since all the seeds cannot be seen in the ultrasound images, registration between ultrasound and fluoroscopy is a practical solution for intraoperative dosimetry. In this manuscript, we introduce a new image-based nonrigid registration method that obviates the need for manual seed segmentation in TRUS images and compensates for the prostate displacement and deformation due to TRUS probe pressure. First, we filter the ultrasound images for subsequent registration using thresholding and Gaussian blurring. Second, a computationally efficient point-to-volume similarity metric, an affine transformation and an evolutionary optimizer are used in the registration loop. A phantom study showed final registration errors of 0.84 ± 0.45 mm compared to ground truth. In a study on data from 10 patients, the registration algorithm showed overall seed-to-seed errors of 1.7 ± 1.0 mm and 1.5 ± 0.9 mm for rigid and nonrigid registration methods, respectively, performed in approximately 30 s per patient.


Subject(s)
Brachytherapy/methods , Prostatic Neoplasms/diagnosis , Prostatic Neoplasms/radiotherapy , Radiotherapy Planning, Computer-Assisted/methods , Radiotherapy, Image-Guided/methods , Tomography, X-Ray Computed/methods , Ultrasonography/methods , Humans , Male , Radiometry/methods , Radiotherapy Dosage , Reproducibility of Results , Sensitivity and Specificity , Subtraction Technique
14.
Med Eng Phys ; 34(1): 64-77, 2012 Jan.
Article in English | MEDLINE | ID: mdl-21802975

ABSTRACT

Prostate brachytherapy guided by transrectal ultrasound is a common treatment option for early stage prostate cancer. Prostate cancer accounts for 28% of cancer cases and 11% of cancer deaths in men, with 217,730 estimated new cases and 32,050 estimated deaths in 2010 in the United States alone. The major current limitation is the inability to reliably localize implanted radiation seeds spatially in relation to the prostate. Multimodality approaches that incorporate X-ray for seed localization have been proposed, but they require both accurate tracking of the imaging device and segmentation of the seeds. Some use image-based radiographic fiducials to track the X-ray device, but manual intervention is needed to select proper regions of interest for segmenting both the tracking fiducial and the seeds, to evaluate the segmentation results, and to correct the segmentations in the case of segmentation failure, thus requiring a significant amount of extra time in the operating room. In this paper, we present an automatic segmentation algorithm that simultaneously segments the tracking fiducial and brachytherapy seeds, thereby minimizing the need for manual intervention. In addition, through the innovative use of image processing techniques such as mathematical morphology, Hough transforms, and RANSAC, our method can detect and separate overlapping seeds that are common in brachytherapy implant images. Our algorithm was validated on 55 phantom and 206 patient images, successfully segmenting both the fiducial and seeds with a mean seed segmentation rate of 96% and sub-millimeter accuracy.
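Of the techniques named above, RANSAC is the easiest to sketch: fitting a line to the fiducial's collinear markers while rejecting scattered seed detections as outliers. The points, distance tolerance, and iteration count below are illustrative assumptions, not the paper's parameters.

```python
import random

# Generic RANSAC line fit, sketching how collinear fiducial markers might
# be separated from scattered seed detections in an X-ray image.
# Points, tolerance, and iteration count are illustrative assumptions.
def ransac_line(pts, iters=200, tol=0.5, seed=0):
    rng = random.Random(seed)
    best = []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(pts, 2)
        a, b, c = y2 - y1, x1 - x2, x2 * y1 - x1 * y2   # line ax + by + c = 0
        norm = (a * a + b * b) ** 0.5
        if norm == 0:
            continue        # degenerate sample
        inliers = [p for p in pts
                   if abs(a * p[0] + b * p[1] + c) / norm < tol]
        if len(inliers) > len(best):
            best = inliers
    return best

fiducial = [(float(x), 2.0 * x + 1.0) for x in range(5)]   # collinear markers
seeds = [(1.3, 7.9), (3.7, 0.4), (4.1, 5.5)]               # scattered seeds
inliers = ransac_line(fiducial + seeds, seed=1)
print(sorted(inliers) == sorted(fiducial))
```

The consensus set recovers exactly the collinear markers, leaving the remaining detections to be treated as seed candidates.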


Subject(s)
Brachytherapy , Fiducial Markers , Image Processing, Computer-Assisted/methods , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/radiotherapy , Tomography, X-Ray Computed/standards , Automation , Humans , Image Processing, Computer-Assisted/instrumentation , Male , Microspheres , Phantoms, Imaging , Tomography, X-Ray Computed/instrumentation
15.
Med Image Comput Comput Assist Interv ; 14(Pt 2): 615-22, 2011.
Article in English | MEDLINE | ID: mdl-21995080

ABSTRACT

Ultrasound-fluoroscopy fusion is a key step toward intraoperative dosimetry for prostate brachytherapy. We propose a method for intensity-based registration of fluoroscopy to ultrasound that obviates the need for the seed segmentation required by seed-based registration. We employ image thresholding and morphological and Gaussian filtering to enhance the image intensity distribution of the ultrasound volume. Finally, we find the registration parameters by maximizing a point-to-volume similarity metric. We conducted an experiment on a ground truth phantom and achieved a registration error of 0.7 ± 0.2 mm. Our clinical results on 5 patient data sets show excellent visual agreement between the registered seeds and the ultrasound volume, with a seed-to-seed registration error of 1.8 ± 0.9 mm. With low registration error, high computational speed, and no need for manual seed segmentation, our method is promising for clinical application.
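The point-to-volume similarity idea can be sketched as scoring a candidate transform by summing volume intensities at the transformed seed positions. The synthetic volume, seed coordinates, and brute-force translation search below are illustrative assumptions; a real system would search a richer transform space with a proper optimizer.

```python
import numpy as np

# Sketch of a point-to-volume similarity metric: score a candidate
# translation by summing ultrasound-volume intensities at the transformed
# seed positions. Volume, seeds, and search range are synthetic assumptions.
def similarity(seeds, volume, t):
    """Sum volume intensities at the voxels nearest to seeds + t."""
    idx = np.rint(seeds + t).astype(int)
    ok = np.all((idx >= 0) & (idx < np.array(volume.shape)), axis=1)
    return volume[tuple(idx[ok].T)].sum()   # out-of-bounds seeds are ignored

vol = np.zeros((20, 20, 20))
true_seeds = np.array([[5, 5, 5], [10, 8, 6], [7, 12, 9]])
for s in true_seeds:
    vol[tuple(s)] = 1.0                     # one bright voxel per visible seed
fluoro_seeds = true_seeds - [2, 0, 1]       # seeds in the fluoroscopy frame

# Exhaustive translation search (a stand-in for a real optimizer)
shifts = [(i, j, k) for i in range(-3, 4)
          for j in range(-3, 4) for k in range(-3, 4)]
best = max(shifts, key=lambda t: similarity(fluoro_seeds, vol, np.array(t)))
print(best)
```

The score peaks at the shift that places every fluoroscopy seed on a bright ultrasound voxel, which is exactly why no explicit seed segmentation of the ultrasound is needed.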


Subject(s)
Brachytherapy/methods , Prostate/pathology , Prostatic Neoplasms/radiotherapy , Radiotherapy, Computer-Assisted/methods , Algorithms , Computers , Humans , Male , Models, Statistical , Normal Distribution , Phantoms, Imaging , Reproducibility of Results , Software
16.
Phys Med Biol ; 56(15): 5011-27, 2011 Aug 07.
Article in English | MEDLINE | ID: mdl-21772077

ABSTRACT

The success of prostate brachytherapy critically depends on delivering adequate dose to the prostate gland, and the capability of intraoperatively localizing implanted seeds provides potential for dose evaluation and optimization during therapy. REDMAPS is a recently reported algorithm that carries out seed localization by detecting, matching and reconstructing seeds in only a few seconds from three acquired X-ray images (Lee et al., 2011, IEEE Trans. Med. Imaging 29: 38-51). In this paper, we present an automatic pose correction (APC) process that is combined with REDMAPS to allow for both more accurate seed reconstruction and the use of images with relatively large pose errors. APC uses a set of reconstructed seeds as a fiducial and corrects the image pose by minimizing the overall projection error. The seed matching and APC are iteratively computed until a stopping condition is met. Simulations and clinical studies show that APC significantly improves the reconstructions with an overall average matching rate of ⩾99.4%, reconstruction error of ⩽0.5 mm, and matching solution optimality of ⩾99.8%.


Subject(s)
Artifacts , Brachytherapy/methods , Imaging, Three-Dimensional/methods , Prostatic Neoplasms/radiotherapy , Automation , Fluoroscopy , Humans , Intraoperative Period , Male , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/surgery , Time Factors
17.
Brachytherapy ; 10(1): 57-63, 2011.
Article in English | MEDLINE | ID: mdl-20729152

ABSTRACT

PURPOSE: Optimization of prostate brachytherapy is constrained by tissue deflection of needles and the fixed spacing of template holes. We developed and clinically tested a robotic guide toward the goal of allowing greater freedom of needle placement. METHODS AND MATERIALS: The robot consists of a small tubular needle guide attached to a robotically controlled arm. The apparatus is mounted and calibrated to operate in the same coordinate frame as a standard template. Translations of ±40 mm in the x and y directions over the perineum are possible. Needle insertion is performed manually. RESULTS: Five patients were treated in an institutional review board-approved study. Confirmatory measurements of robotic movements for the initial 3 patients using infrared tracking showed a mean error of 0.489 mm (standard deviation, 0.328 mm). Fine adjustments in needle positioning were possible when tissue deflection was encountered; adjustments were performed in 54 (30.2%) of 179 needles placed, with 36 (20.1%) of 179 adjustments of >2 mm. Twenty-seven insertions were intentionally altered to positions between the standard template grid holes to improve the dosimetric plan or to avoid structures such as the pubic bone and blood vessels. CONCLUSIONS: Robotic needle positioning provided a means of compensating for needle deflections and the ability to intentionally place needles into areas between the standard template holes. To our knowledge, these results represent the first clinical testing of such a system. Future work will include direct control of the robot by the physician, software algorithms to help avoid robot collisions with the ultrasound probe, and testing of the angulation capability in the clinical setting.


Subject(s)
Brachytherapy/instrumentation , Needles , Prostatic Neoplasms/radiotherapy , Feasibility Studies , Humans , Male , Pilot Projects , Prospective Studies , Prostatic Neoplasms/diagnostic imaging , Robotics , Ultrasonography
18.
Midas J ; 2011: 2-9, 2011 Oct 01.
Article in English | MEDLINE | ID: mdl-25243238

ABSTRACT

This paper presents the rationale for the use of a component-based architecture for computer-assisted intervention (CAI) systems, including the ability to reuse components and to easily develop distributed systems. We introduce three additional capabilities, however, that we believe are especially important for research and development of CAI systems. The first is the ability to deploy components among different processes (as conventionally done) or within the same process (for optimal real-time performance), without requiring source-level modifications to the component. This is particularly relevant for real-time video processing, where the use of multiple processes could cause perceptible delays in the video stream. The second key feature is the ability to dynamically reconfigure the system. In a system composed of multiple processes on multiple computers, this allows one process to be restarted (e.g., after correcting a problem) and reconnected to the rest of the system, which is more convenient than restarting the entire distributed application and enables better fault recovery. The third key feature is the availability of run-time tools for data collection, interactive control, and introspection, and offline tools for data analysis and playback. The above features are provided by the open-source cisst software package, which forms the basis for the Surgical Assistant Workstation (SAW) framework. A complex computer-assisted intervention system for retinal microsurgery is presented as an example that relies on these features. This system integrates robotics, stereo microscopy, force sensing, and optical coherence tomography (OCT) imaging to transcend the current limitations of vitreoretinal surgery.

19.
Brachytherapy ; 10(2): 98-106, 2011.
Article in English | MEDLINE | ID: mdl-20692212

ABSTRACT

PURPOSE: To evaluate a prototypical system of dynamic intraoperative dosimetry for prostate brachytherapy using registered ultrasound and fluoroscopy (RUF) with a nonisocentric C-arm (GE OEC, Salt Lake City, UT) and to compare intraoperative dosimetry of RUF as well as ultrasound-based seed localization (USD) with Day 0 CT dosimetry. METHODS: Seed positions were independently determined using RUF and USD. RUF uses a radio-opaque fiducial for registration to ultrasound and 3-dimensional reconstruction of seeds relative to the prostate using nonisocentric C-arm fluoroscopy. Postimplant CT was performed on Day 0. Squared differences between dosimetric measures for RUF vs. CT and USD vs. CT were calculated and mean squared differences evaluated. A paired t test was used to evaluate which method was more closely aligned with CT. Accuracies of USD and RUF compared with CT were estimated using a nonparametric approach. RESULTS: Six patients were treated, and RUF was compared with USD. RUF identified areas of underdosage intraoperatively in all patients, and a median of 5 additional seeds was placed. In 40 of 42 measures, RUF was equally or more closely correlated with CT than USD. USD showed statistically significant variation from CT for 6 of 7 parameters, compared with 1 of 7 parameters for RUF. Mean squared differences from CT were significantly smaller for RUF in 4 of 7 parameters compared with USD. CONCLUSIONS: Dynamic intraoperative dosimetry is possible with a conventional nonisocentric C-arm. Compared with the USD method, RUF-based intraoperative dosimetry was more closely aligned with immediate postimplant CT. RUF identified areas of underdosage that were not detected using USD.


Subject(s)
Brachytherapy/methods , Prostatic Neoplasms/diagnosis , Prostatic Neoplasms/radiotherapy , Prosthesis Implantation/methods , Radiometry/methods , Surgery, Computer-Assisted/methods , Aged , Brachytherapy/instrumentation , Humans , Male , Middle Aged , Radiotherapy Dosage , Tomography, X-Ray Computed/methods , Treatment Outcome , Ultrasonography/methods
20.
Midas J ; 2011 Jun.
Article in English | MEDLINE | ID: mdl-24398557

ABSTRACT

This paper presents the design of a tele-robotic microsurgical platform for the development of cooperative and tele-operative control schemes, sensor-based smart instruments, user interfaces, and new surgical techniques, with eye surgery as the driving application. The system is built using the distributed component-based cisst libraries and the Surgical Assistant Workstation framework. It includes a cooperatively controlled EyeRobot2, a da Vinci Master manipulator, and a remote stereo visualization system. We use constrained-optimization-based virtual fixture control to provide a Virtual Remote-Center-of-Motion (vRCM) and haptic feedback. Such a system can be used in a hybrid setup, combining local cooperative control with remote tele-operation, where an experienced surgeon can provide hand-over-hand tutoring to a novice user. In another scheme, the system can provide haptic feedback based on virtual fixtures constructed from real-time force and proximity sensor information.
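A remote-center-of-motion virtual fixture can be approximated by removing commanded velocity components that would move the tool sideways through the entry point. The sketch below does this with a simple projection; it is an assumed simplification of the paper's constrained-optimization virtual fixtures, with made-up geometry.

```python
import numpy as np

# Virtual-fixture sketch: project the commanded tool velocity onto the
# instrument axis through a remote center of motion. This is a simplified
# stand-in for the paper's constrained-optimization formulation; the
# geometry below is an assumed example.
def vrcm_filter(v_cmd, axis):
    """Keep only the velocity component along the RCM axis."""
    axis = axis / np.linalg.norm(axis)
    return axis * float(axis @ v_cmd)

v_cmd = np.array([1.0, 2.0, 3.0])       # operator's commanded velocity
axis = np.array([0.0, 0.0, 1.0])        # instrument axis through the RCM
print(vrcm_filter(v_cmd, axis))         # lateral components are removed
```

In the full formulation the projection becomes one linearized constraint among several inside a constrained least-squares solve, which also lets the fixture render haptic resistance against the disallowed directions.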
