Results 1 - 20 of 66
1.
Int J Comput Assist Radiol Surg ; 19(7): 1273-1280, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38816649

ABSTRACT

PURPOSE: Skull-base surgery demands exceptional precision when removing bone in the lateral skull base. Robotic assistance can alleviate the effect of human sensory-motor limitations. However, the stiffness and inertia of the robot can significantly impact the surgeon's perception and control of the tool-to-tissue interaction forces. METHODS: We present a situation-aware force control technique aimed at regulating interaction forces during robot-assisted skull-base drilling. The contextual interaction information derived from the digital twin environment is used to enhance sensory perception and suppress undesired high forces. RESULTS: To validate our approach, we conducted initial feasibility experiments involving one medical student and two engineering students. The experiment focused on further drilling around critical structures following cortical mastoidectomy. The results demonstrate that robotic assistance coupled with our proposed control scheme effectively limited undesired interaction forces when compared to robotic assistance without the proposed force control. CONCLUSIONS: The proposed force control technique shows promise in significantly reducing undesired interaction forces during robot-assisted skull-base surgery. These findings contribute to the ongoing efforts to enhance surgical precision and safety in complex procedures involving the lateral skull base.
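The situational force regulation described above can be illustrated with a minimal sketch: a cap on the allowable tool force that shrinks as the digital twin reports a smaller distance to a critical structure. The function names, distances, and force thresholds below are illustrative assumptions, not the paper's actual controller.

```python
def force_limit(distance_mm, f_max=3.0, d_safe=5.0, d_stop=0.5):
    """Allowable interaction-force cap (N) that shrinks as the tool
    approaches a critical structure (all distances in mm)."""
    if distance_mm >= d_safe:
        return f_max
    if distance_mm <= d_stop:
        return 0.0
    # Linear ramp between the stop and safe distances.
    return f_max * (distance_mm - d_stop) / (d_safe - d_stop)

def regulate(commanded_force, distance_mm):
    """Saturate the commanded tool force using the situational cap."""
    cap = force_limit(distance_mm)
    return max(-cap, min(cap, commanded_force))
```

A full controller would feed such a cap into the robot's admittance or impedance loop rather than simply clipping the command, but the saturation step captures the idea.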


Subjects
Robotic Surgical Procedures; Skull Base; Humans; Skull Base/surgery; Robotic Surgical Procedures/methods; Feasibility Studies; Mastoidectomy/methods
2.
Int J Comput Assist Radiol Surg ; 19(6): 1147-1155, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38598140

ABSTRACT

PURPOSE: This paper evaluates user performance in telesurgical tasks with the da Vinci Research Kit (dVRK), comparing unilateral teleoperation, bilateral teleoperation with force sensors, and bilateral teleoperation with sensorless force estimation. METHODS: A four-channel teleoperation system with disturbance observers and sensorless force estimation with learning-based dynamic compensation was developed. Palpation experiments were conducted with 12 users, who tried to locate tumors hidden in tissue phantoms with their fingers or through handheld or teleoperated laparoscopic instruments with visual, force-sensor, or sensorless force-estimation feedback. In a peg transfer experiment with 10 users, the contribution of sensorless haptic feedback with and without learning-based dynamic compensation was assessed using NASA TLX surveys, measured free-motion speeds and forces, environment interaction forces, and experiment completion times. RESULTS: The first study showed a 30% increase in accuracy in detecting tumors with sensorless haptic feedback over visual feedback, with only a 5-10% drop in accuracy compared with sensor feedback or direct instrument contact. The second study showed that sensorless feedback can help reduce interaction forces due to incidental contacts by about 3 times compared with unilateral teleoperation. The cost is an increase in free-motion forces and physical effort. We show that it is possible to improve this with dynamic compensation. CONCLUSION: We demonstrate the benefits of sensorless haptic feedback in teleoperated surgery systems, especially with dynamic compensation, and that it can improve surgical performance without hardware modifications.
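Sensorless force estimation of the kind described above is commonly built on a disturbance (momentum) observer. The single-joint sketch below is a generic textbook version under an assumed linear model (constant inertia, viscous damping); the dVRK implementation, its full dynamic model, and the learning-based compensation are considerably more involved.

```python
def estimate_external_torque(tau_cmd, vel, dt,
                             inertia=0.01, damping=0.002, gain=50.0):
    """Momentum-observer estimate of external torque on one joint.

    Integrates the torque explained by the assumed model and compares
    the resulting momentum estimate with the measured momentum; the
    residual r converges to the external torque."""
    est = []
    p_hat = 0.0            # estimated generalized momentum
    r = 0.0                # observer residual, approximates tau_ext
    for k in range(len(tau_cmd)):
        p = inertia * vel[k]                          # measured momentum
        p_hat += (tau_cmd[k] - damping * vel[k] + r) * dt
        r = gain * (p - p_hat)
        est.append(r)
    return est
```

With a constant external torque applied to a simulated joint, the residual settles to that torque within a few observer time constants (1/gain seconds).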


Subjects
Robotic Surgical Procedures; Humans; Robotic Surgical Procedures/methods; Robotic Surgical Procedures/instrumentation; Phantoms, Imaging; Equipment Design; Telemedicine/instrumentation; Palpation/methods; Palpation/instrumentation; User-Computer Interface; Feedback; Robotics/instrumentation; Robotics/methods; Laparoscopy/methods; Laparoscopy/instrumentation
3.
Healthc Technol Lett ; 11(2-3): 179-188, 2024.
Article in English | MEDLINE | ID: mdl-38638499

ABSTRACT

Surgical robotics has revolutionized the field of surgery, facilitating complex procedures in operating rooms. However, the current teleoperation systems often rely on bulky consoles, which limit the mobility of surgeons. This restriction reduces surgeons' awareness of the patient during procedures and narrows the range of implementation scenarios. To address these challenges, an alternative solution is proposed: a mixed reality-based teleoperation system. This system leverages hand gestures, head motion tracking, and speech commands to enable the teleoperation of surgical robots. The implementation focuses on the da Vinci research kit (dVRK) and utilizes the capabilities of Microsoft HoloLens 2. The system's effectiveness is evaluated through camera navigation tasks and peg transfer tasks. The results indicate that, in comparison to manipulator-based teleoperation, the system demonstrates comparable viability in endoscope teleoperation. However, it falls short in instrument teleoperation, highlighting the need for further improvements in hand gesture recognition and video display quality.

4.
Int J Comput Assist Radiol Surg ; 19(1): 51-59, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37347346

ABSTRACT

PURPOSE: A virtual reality (VR) system, where surgeons can practice procedures on virtual anatomies, is a scalable and cost-effective alternative to cadaveric training. The fully digitized virtual surgeries can also be used to assess the surgeon's skills using measurements that are otherwise hard to collect in reality. Thus, we present the Fully Immersive Virtual Reality System (FIVRS) for skull-base surgery, which combines surgical simulation software with a high-fidelity hardware setup. METHODS: FIVRS allows surgeons to follow normal clinical workflows inside the VR environment. FIVRS uses advanced rendering designs and drilling algorithms for realistic bone ablation. A head-mounted display with ergonomics similar to that of surgical microscopes is used to improve immersiveness. Extensive multi-modal data are recorded for post-analysis, including eye gaze, motion, force, and video of the surgery. A user-friendly interface is also designed to ease the learning curve of using FIVRS. RESULTS: We present results from a user study involving surgeons with various levels of expertise. The preliminary data recorded by FIVRS differentiate between participants with different levels of expertise, promising future research on automatic skill assessment. Furthermore, informal feedback from the study participants about the system's intuitiveness and immersiveness was positive. CONCLUSION: We present FIVRS, a fully immersive VR system for skull-base surgery. FIVRS features a realistic software simulation coupled with modern hardware for improved realism. The system is completely open source and provides feature-rich data in an industry-standard format.


Subjects
Virtual Reality; Humans; Computer Simulation; Software; User-Computer Interface; Clinical Competence; Skull/surgery
5.
Med Phys ; 50(6): 3418-3434, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36841948

ABSTRACT

BACKGROUND: In breast CT, scattered photons form a large portion of the acquired signal, adversely impacting image quality throughout the frequency response of the imaging system. Prior studies provided evidence for a new image acquisition design, dubbed Narrow Beam Breast CT (NB-bCT), in preventing scatter acquisition. PURPOSE: Here, we report the design, implementation, and initial characterization of the first NB-bCT prototype. METHODS: The imaging system's apparatus is composed of two primary assemblies: a dynamic Fluence Modulator (collimator) and a photon-counting line detector. The design of the assemblies enables them to operate in lockstep during image acquisition, converting sourced x-rays into a moving narrow beam. During a projection, this narrow beam sweeps the entire fan angle coverage of the imaging system. Each assembly comprises a metal housing, a sensory system, and a robotic system. A controller unit handles their relative movements. To study the impact of fluence modulation on the signal received in the detector, three physical breast phantoms, representative of small, average, and large breasts, were developed and imaged, and the acquired projections were analyzed. The scatter acquisition in each projection as a function of breast phantom size was investigated. The imaging system's spatial resolution at the center and periphery of the field of view was measured. RESULTS: Minimal acquisition of scattered rays occurs during image acquisition with NB-bCT; the scatter-to-primary ratios for the small, average, and large breast phantoms were 0.05, 0.07, and 0.9, respectively. A system spatial resolution of 5.2 lp/mm at 10% max MTF and 2.9 lp/mm at 50% max MTF at the center of the field of view was achieved, with minimal loss with the shift toward the corner (5.0 lp/mm at 10% max MTF and 2.5 lp/mm at 50% max MTF).
CONCLUSION: The disclosed development, implementation, and characterization of a physical NB-bCT prototype system demonstrates a new method of CT-based image acquisition that yields high spatial resolution while minimizing scatter-components in acquired projections. This methodology holds promise for high-resolution CT-imaging applications in which reduction of scatter contamination is desirable.


Subjects
Tomography, X-Ray Computed; Tomography, X-Ray Computed/methods; Phantoms, Imaging; Scattering, Radiation
6.
IEEE Trans Med Robot Bionics ; 5(4): 966-977, 2023 Nov.
Article in English | MEDLINE | ID: mdl-38779126

ABSTRACT

As one of the most commonly performed spinal interventions in routine clinical practice, lumbar punctures are usually done with only hand palpation and trial-and-error. Failures can prolong procedure time and introduce complications such as cerebrospinal fluid leaks and headaches. Therefore, an effective needle insertion guidance method is desired. In this work, we present a complete lumbar puncture guidance system with the integration of (1) a wearable mechatronic ultrasound imaging device, (2) volume-reconstruction and bone surface estimation algorithms and (3) two alternative augmented reality user interfaces for needle guidance, including a HoloLens-based and a tablet-based solution. We conducted a quantitative evaluation of the end-to-end navigation accuracy, which shows that our system can achieve an overall needle navigation accuracy of 2.83 mm and 2.76 mm for the Tablet-based and the HoloLens-based solutions, respectively. In addition, we conducted a preliminary user study to qualitatively evaluate the effectiveness and ergonomics of our system on lumbar phantoms. The results show that users were able to successfully reach the target in an average of 1.12 and 1.14 needle insertion attempts for Tablet-based and HoloLens-based systems, respectively, exhibiting the potential to reduce the failure rates of lumbar puncture procedures with the proposed lumbar-puncture guidance.

7.
Front Oncol ; 12: 996537, 2022.
Article in English | MEDLINE | ID: mdl-36237341

ABSTRACT

Purpose: In this study, we aim to further evaluate the accuracy of ultrasound tracking for intra-fraction pancreatic tumor motion during radiotherapy through a phantom-based study. Methods: Twelve patients with pancreatic cancer who were treated with stereotactic body radiation therapy were enrolled in this study. The displacement points of the respiratory cycle were acquired from 4DCT and transferred to a motion platform to mimic realistic breathing movements in our phantom study. An ultrasound abdominal phantom was placed and fixed in the motion platform. The ground truth of phantom movement was recorded by tracking an optical tracker attached to the phantom. One tumor inside the phantom was the tracking target. In the evaluation, the monitoring results from the ultrasound system were compared with the phantom motion recorded by the infrared camera. Differences between the infrared-monitored motion and the ultrasound-tracked motion were analyzed by calculating the root-mean-square error. Results: For 82.2% of the ultrasound-tracked motion, the difference between the ultrasound tracking displacement and the infrared-monitored motion was within 0.5 mm. Only 0.7% of the ultrasound tracking failed to track accurately (difference > 2.5 mm). These differences between ultrasound-tracked motion and infrared-monitored motion do not correlate with respiratory displacement, velocity, or acceleration by linear regression analysis. Conclusions: The highly accurate monitoring results of this phantom study show that the ultrasound tracking system may be a potential method for real-time target monitoring, allowing more accurate delivery of radiation doses.
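The error analysis above reduces to comparing paired displacement traces. A small sketch of the two metrics involved, the root-mean-square error and the fraction of samples within a tolerance, with illustrative function names:

```python
import math

def rmse(tracked, reference):
    """Root-mean-square error between ultrasound-tracked and
    infrared-monitored displacement traces (paired samples, in mm)."""
    assert len(tracked) == len(reference)
    return math.sqrt(sum((t - r) ** 2 for t, r in zip(tracked, reference))
                     / len(tracked))

def fraction_within(tracked, reference, tol_mm):
    """Fraction of samples whose tracking error is within tol_mm."""
    n = sum(1 for t, r in zip(tracked, reference) if abs(t - r) <= tol_mm)
    return n / len(tracked)
```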

8.
Sensors (Basel) ; 22(14)2022 Jul 17.
Article in English | MEDLINE | ID: mdl-35891016

ABSTRACT

Developing image-guided robotic systems requires access to flexible, open-source software. For image guidance, the open-source medical imaging platform 3D Slicer is one of the most adopted tools that can be used for research and prototyping. Similarly, for robotics, the open-source middleware suite robot operating system (ROS) is the standard development framework. In the past, there have been several "ad hoc" attempts made to bridge both tools; however, they are all reliant on middleware and custom interfaces. Additionally, none of these attempts have been successful in bridging access to the full suite of tools provided by ROS or 3D Slicer. Therefore, in this paper, we present the SlicerROS2 module, which was designed for the direct use of ROS2 packages and libraries within 3D Slicer. The module was developed to enable real-time visualization of robots, accommodate different robot configurations, and facilitate data transfer in both directions (between ROS and Slicer). We demonstrate the system on multiple robots with different configurations, evaluate the system performance, and discuss an image-guided robotic intervention that can be prototyped with this module. This module can serve as a starting point for clinical system development that reduces the need for custom interfaces and time-intensive platform setup.


Subjects
Robotics; Diagnostic Imaging; Reactive Oxygen Species; Software
9.
Int J Comput Assist Radiol Surg ; 17(5): 903-910, 2022 May.
Article in English | MEDLINE | ID: mdl-35384551

ABSTRACT

PURPOSE: Using the da Vinci Research Kit (dVRK), we propose and experimentally demonstrate transfer learning (Xfer) of dynamics between different configurations and robots distributed around the world. This can extend recent research using neural networks to estimate the dynamics of the patient side manipulator (PSM) to provide accurate external end-effector force estimation, by adapting it to different robots and instruments, and in different configurations, with additional forces applied on the instruments as they pass through the trocar. METHODS: The goal of the learned models is to predict internal joint torques during robot motion. First, exhaustive training is performed during free-space (FS) motion, using several configurations to include gravity effects. Second, to adapt to different setups, a limited amount of training data is collected and then the neural network is updated through Xfer. RESULTS: Xfer can adapt a FS network trained on one robot, in one configuration, with a particular instrument, to provide comparable joint torque estimation for a different robot, in a different configuration, using a different instrument, and inserted through a trocar. The robustness of this approach is demonstrated with multiple PSMs (sampled from the dVRK community), instruments, configurations and trocar ports. CONCLUSION: Xfer provides significant improvements in prediction errors without the need for complete training from scratch and is robust over a wide range of robots, kinematic configurations, surgical instruments, and patient-specific setups.
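The Xfer idea above, exhaustive free-space training followed by a brief warm-started update on a handful of samples from the new setup, can be caricatured with a tiny linear "torque model" standing in for the paper's neural network. Everything below (the model, the data, the trocar-like offset) is an illustrative assumption.

```python
def train(xs, ys, w=None, lr=0.05, epochs=500):
    """Fit a 1-D linear model y ~ w[0] + w[1]*x by gradient descent on
    mean-squared error; pass w to warm-start (the transfer-learning step)."""
    w = list(w) if w else [0.0, 0.0]
    n = len(xs)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            err = w[0] + w[1] * x - y
            g0 += err / n
            g1 += err * x / n
        w[0] -= lr * g0
        w[1] -= lr * g1
    return w

# "Free-space" pretraining on plentiful data...
xs = [i / 10 for i in range(50)]
w_fs = train(xs, [0.3 * x for x in xs])
# ...then Xfer: a few samples from a new setup whose torques carry a
# constant offset (loosely mimicking forces added at the trocar).
xs_new = [1.0, 2.0, 3.0]
w_new = train(xs_new, [0.3 * x + 0.2 for x in xs_new], w=w_fs, epochs=1000)
```

The warm start lets the small dataset correct only the discrepancy (here, the offset) instead of relearning the whole model from scratch.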


Subjects
Robotics; Biomechanical Phenomena; Humans; Neural Networks, Computer; Surgical Instruments; Torque
10.
Int J Comput Assist Radiol Surg ; 17(5): 911-920, 2022 May.
Article in English | MEDLINE | ID: mdl-35334043

ABSTRACT

PURPOSE: Ultrasound-guided spine interventions often suffer from the insufficient visualization of key anatomical structures due to the complex shapes of the self-shadowing vertebrae. Therefore, we propose an ultrasound imaging paradigm, AutoInFocus (automatic insonification optimization with controlled ultrasound), to improve the key structure visibility. METHODS: A phased-array probe is used in conjunction with a motion platform to image a controlled workspace, and the resulting images from multiple insonification angles are combined to reveal the target anatomy. This idea is first evaluated in simulation and then realized as a robotic platform and a miniaturized patch device. A spine phantom (CIRS) and its CT scan were used in the evaluation experiments to quantitatively and qualitatively analyze the advantages of the proposed method over the traditional approach. RESULTS: We showed in simulation that the proposed system setup increased the visibility of interspinous space boundary, a key feature for lumbar puncture guidance, from 44.13 to 67.73% on average, and the 3D spine surface coverage from 14.31 to 35.87%, compared to traditional imaging setup. We also demonstrated the feasibility of both robotic and patch-based realizations in a spine phantom study. CONCLUSION: This work lays the foundation for a new imaging paradigm that leverages redundant and controlled insonification to allow for imaging optimization of the complex vertebrae anatomy, making it possible for high-quality visualization of key anatomies during ultrasound-guided spine interventions.
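One simple way to realize the "combine images from multiple insonification angles" step is maximum-intensity compounding over registered sweeps: a pixel shadowed from one angle keeps the echo it received from another. This is a generic sketch of the concept, not the paper's actual combination rule.

```python
def compound(sweeps):
    """Combine registered B-mode sweeps of the same region, acquired
    from different insonification angles, by keeping the maximum echo
    intensity per pixel (sweeps: list of equal-size 2-D lists)."""
    rows, cols = len(sweeps[0]), len(sweeps[0][0])
    return [[max(s[r][c] for s in sweeps) for c in range(cols)]
            for r in range(rows)]
```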


Subjects
Spine; Tomography, X-Ray Computed; Humans; Phantoms, Imaging; Spine/diagnostic imaging; Ultrasonography/methods; Ultrasonography, Interventional/methods
11.
IEEE Trans Vis Comput Graph ; 28(7): 2550-2562, 2022 07.
Article in English | MEDLINE | ID: mdl-33170780

ABSTRACT

Head-mounted loupes can increase the user's visual acuity to observe the details of an object. On the other hand, optical see-through head-mounted displays (OST-HMD) are able to provide virtual augmentations registered with real objects. In this article, we propose AR-Loupe, combining the advantages of loupes and OST-HMDs, to offer augmented reality in the user's magnified field of vision. Specifically, AR-Loupe integrates a commercial OST-HMD, Magic Leap One, and binocular Galilean magnifying loupes, with customized 3D-printed attachments. We model the combination of the user's eye, the screen of the OST-HMD, and the optical loupe as a pinhole camera. The calibration of AR-Loupe involves interactive view segmentation and an adapted version of the stereo single point active alignment method (Stereo-SPAAM). We conducted a two-phase multi-user study to evaluate AR-Loupe. The users were able to achieve sub-millimeter accuracy (0.82 mm) on average, which is significantly smaller compared to normal AR guidance (1.49 mm). The mean calibration time was 268.46 s. With the increased size of real objects through optical magnification and the registered augmentation, AR-Loupe can aid users in high-precision tasks with better visual acuity and higher accuracy.
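In a pinhole model of the eye-screen-loupe combination, calibration (here via Stereo-SPAAM) amounts to estimating the projection that maps 3-D points in the eye frame to screen pixels. A minimal sketch of the forward projection, with invented intrinsic parameters:

```python
def project(point_eye, fx, fy, cx, cy):
    """Pinhole projection of a 3-D point (eye frame, z forward, meters)
    onto 2-D screen pixels, given focal lengths (fx, fy) in pixels and
    principal point (cx, cy)."""
    x, y, z = point_eye
    return (fx * x / z + cx, fy * y / z + cy)
```

Calibration would fit these parameters (plus the full 3x4 projection) from pairs of aligned 3-D points and screen locations collected interactively.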


Subjects
Augmented Reality; Smart Glasses; Calibration; Computer Graphics; User-Computer Interface
12.
Front Robot AI ; 8: 747917, 2021.
Article in English | MEDLINE | ID: mdl-34926590

ABSTRACT

Approaches to robotic manufacturing, assembly, and servicing of in-space assets range from autonomous operation to direct teleoperation, with many forms of semi-autonomous teleoperation in between. Because most approaches require one or more human operators at some level, it is important to explore the control and visualization interfaces available to those operators, taking into account the challenges due to significant telemetry time delay. We consider one motivating application of remote teleoperation, which is ground-based control of a robot on-orbit for satellite servicing. This paper presents a model-based architecture that: 1) improves visualization and situation awareness, 2) enables more effective human/robot interaction and control, and 3) detects task failures based on anomalous sensor feedback. We illustrate elements of the architecture by drawing on 10 years of our research in this area. The paper further reports the results of several multi-user experiments to evaluate the model-based architecture, on ground-based test platforms, for satellite servicing tasks subject to round-trip communication latencies of several seconds. The most significant performance gains were obtained by enhancing the operators' situation awareness via improved visualization and by enabling them to precisely specify intended motion. In contrast, changes to the control interface, including model-mediated control or an immersive 3D environment, often reduced the reported task load but did not significantly improve task performance. Considering the challenges of fully autonomous intervention, we expect that some form of teleoperation will continue to be necessary for robotic in-situ servicing, assembly, and manufacturing tasks for the foreseeable future. 
We propose that effective teleoperation can be enabled by modeling the remote environment, providing operators with a fused view of the real environment and virtual model, and incorporating interfaces and control strategies that enable interactive planning, precise operation, and prompt detection of errors.

13.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 4836-4839, 2021 11.
Article in English | MEDLINE | ID: mdl-34892292

ABSTRACT

Functional medical imaging systems can provide insights into brain activity during various tasks, but most current imaging systems are bulky devices that are not compatible with many human movements. Our motivating application is to perform Positron Emission Tomography (PET) imaging of subjects during sitting, upright standing and locomotion studies on a treadmill. The proposed long-term solution is to construct a robotic system that can support an imaging system surrounding the subject's head, and then move the system to accommodate natural motion. This paper presents the first steps toward this approach, which are to analyze human head motion, determine initial design parameters for the robotic system, and verify the concept in simulation.


Subjects
Robotic Surgical Procedures; Robotics; Brain/diagnostic imaging; Humans; Motion; Positron-Emission Tomography
14.
Front Robot AI ; 8: 612964, 2021.
Article in English | MEDLINE | ID: mdl-34250025

ABSTRACT

Since the first reports of a novel coronavirus (SARS-CoV-2) in December 2019, over 33 million people have been infected worldwide and approximately 1 million people worldwide have died from the disease caused by this virus, COVID-19. In the United States alone, there have been approximately 7 million cases and over 200,000 deaths. This outbreak has placed an enormous strain on healthcare systems and workers. Severe cases require hospital care, and 8.5% of patients require mechanical ventilation in an intensive care unit (ICU). One major challenge is the necessity for clinical care personnel to don and doff cumbersome personal protective equipment (PPE) in order to enter an ICU unit to make simple adjustments to ventilator settings. Although future ventilators and other ICU equipment may be controllable remotely through computer networks, the enormous installed base of existing ventilators does not have this capability. This paper reports the development of a simple, low-cost telerobotic system that permits adjustment of ventilator settings from outside the ICU. The system consists of a small Cartesian robot capable of operating a ventilator touch screen with camera vision control via a wirelessly connected tablet master device located outside the room. Engineering system tests demonstrated that the open-loop mechanical repeatability of the device was 7.5 mm, and that the average positioning error of the robotic finger under visual servoing control was 5.94 mm. Successful usability tests in a simulated ICU environment were carried out and are reported. In addition to enabling a significant reduction in PPE consumption, the prototype system has been shown in a preliminary evaluation to significantly reduce the total time required for a respiratory therapist to perform typical setting adjustments on a commercial ventilator, including donning and doffing PPE, from 271 to 109 s.
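The visual-servoing positioning reported above can be sketched as a proportional pixel-to-motion update with a step limit, a common pattern for camera-in-the-loop positioning. The gain (implicitly mm per pixel) and step limit below are invented for illustration and are not the system's actual tuning.

```python
def servo_step(pixel_err, gain=0.4, step_limit_mm=2.0):
    """One proportional visual-servoing update: convert the fingertip's
    pixel error seen by the camera into a bounded Cartesian correction
    (mm) that drives the error toward zero."""
    def clamp(v):
        return max(-step_limit_mm, min(step_limit_mm, v))
    return (clamp(-gain * pixel_err[0]), clamp(-gain * pixel_err[1]))
```

Iterating this update until the pixel error falls below a threshold closes the loop around the robot's open-loop repeatability.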

15.
IEEE ASME Trans Mechatron ; 26(1): 369-380, 2021 Feb.
Article in English | MEDLINE | ID: mdl-34025108

ABSTRACT

This paper presents the development and experimental evaluation of a redundant robotic system for the less-invasive treatment of osteolysis (bone degradation) behind the acetabular implant during total hip replacement revision surgery. The system comprises a rigid-link positioning robot and a Continuum Dexterous Manipulator (CDM) equipped with highly flexible debriding tools and a Fiber Bragg Grating (FBG)-based sensor. The robot and the continuum manipulator are controlled concurrently via an optimization-based framework using the Tip Position Estimation (TPE) from the FBG sensor as feedback. Performance of the system is evaluated on a setup that consists of an acetabular cup and a saw-bone phantom simulating the bone behind the cup. Experiments consist of performing the surgical procedure on the simulated phantom setup. CDM TPE using FBGs, target location placement, cutting performance, and the concurrent control algorithm's capability in achieving the desired tasks are evaluated. The mean and standard deviation of the CDM TPE error from the FBG sensor and the robotic system are 0.50 mm and 0.18 mm, respectively. Using the developed surgical system, accurate positioning and successful cutting of desired straight-line and curvilinear paths on saw-bone phantoms behind the cup with different densities are demonstrated. Compared to conventional rigid tools, the workspace reach behind the acetabular cup is 2.47 times greater when using the developed robotic system.

16.
Int J Comput Assist Radiol Surg ; 16(5): 779-787, 2021 May.
Article in English | MEDLINE | ID: mdl-33759079

ABSTRACT

PURPOSE: Multi- and cross-modal learning consolidates information from multiple data sources which may offer a holistic representation of complex scenarios. Cross-modal learning is particularly interesting, because synchronized data streams are immediately useful as self-supervisory signals. The prospect of achieving self-supervised continual learning in surgical robotics is exciting as it may enable lifelong learning that adapts to different surgeons and cases, ultimately leading to a more general machine understanding of surgical processes. METHODS: We present a learning paradigm using synchronous video and kinematics from robot-mediated surgery. Our approach relies on an encoder-decoder network that maps optical flow to the corresponding kinematics sequence. Clustering on the latent representations reveals meaningful groupings for surgeon gesture and skill level. We demonstrate the generalizability of the representations on the JIGSAWS dataset by classifying skill and gestures on tasks not used for training. RESULTS: For tasks seen in training, we report a 59 to 70% accuracy in surgical gestures classification. On tasks beyond the training setup, we note a 45 to 65% accuracy. Qualitatively, we find that unseen gestures form clusters in the latent space of novice actions, which may enable the automatic identification of novel interactions in a lifelong learning scenario. CONCLUSION: From predicting the synchronous kinematics sequence, optical flow representations of surgical scenes emerge that separate well even for new tasks that the model had not seen before. While the representations are useful immediately for a variety of tasks, the self-supervised learning paradigm may enable research in lifelong and user-specific learning.


Subjects
Gestures; Robotic Surgical Procedures; Surgeons; Algorithms; Biomechanical Phenomena; Humans; Learning; Machine Learning; Reproducibility of Results; Robotics; Video Recording
17.
Int J Comput Assist Radiol Surg ; 15(5): 811-818, 2020 May.
Article in English | MEDLINE | ID: mdl-32323207

ABSTRACT

PURPOSE: Surgical simulations play an increasingly important role in surgeon education and developing algorithms that enable robots to perform surgical subtasks. To model anatomy, finite element method (FEM) simulations have been held as the gold standard for calculating accurate soft tissue deformation. Unfortunately, their accuracy is highly dependent on the simulation parameters, which can be difficult to obtain. METHODS: In this work, we investigate how live data acquired during any robotic endoscopic surgical procedure may be used to correct for inaccurate FEM simulation results. Since FEMs are calculated from initial parameters and cannot directly incorporate observations, we propose to add a correction factor that accounts for the discrepancy between simulation and observations. We train a network to predict this correction factor. RESULTS: To evaluate our method, we use an open-source da Vinci Surgical System to probe a soft tissue phantom and replay the interaction in simulation. We train the network to correct for the difference between the predicted mesh position and the measured point cloud. This results in 15-30% improvement in the mean distance, demonstrating the effectiveness of our approach across a large range of simulation parameters. CONCLUSION: We show a first step towards a framework that synergistically combines the benefits of model-based simulation and real-time observations. It corrects discrepancies between simulation and the scene that results from inaccurate modeling parameters. This can provide a more accurate simulation environment for surgeons and better data with which to train algorithms.
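The correction-factor idea reduces to: train a network on the residual between the FEM-predicted mesh and the observed point cloud, then add its prediction back onto the simulation output. With the network stubbed out and nodes already matched to their closest observed points (an assumption for this sketch), the data flow is just:

```python
def residuals(predicted, observed):
    """Training targets for the correction network: per-node
    discrepancy between FEM-predicted positions and the measured
    point cloud (lists of per-node coordinate lists)."""
    return [[o - p for p, o in zip(pn, on)]
            for pn, on in zip(predicted, observed)]

def apply_correction(predicted, correction):
    """Corrected simulation = FEM prediction + predicted correction."""
    return [[p + c for p, c in zip(pn, cn)]
            for pn, cn in zip(predicted, correction)]
```

In the paper's setting, the correction comes from a learned model evaluated on new interactions rather than from the residuals themselves.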


Subjects
Computer Simulation; Deep Learning; Models, Anatomic; Robotic Surgical Procedures/education; Algorithms; Biomechanical Phenomena/physiology; Humans; Neural Networks, Computer; Phantoms, Imaging
18.
Phys Med Biol ; 64(18): 185006, 2019 09 11.
Article in English | MEDLINE | ID: mdl-31323649

ABSTRACT

We have previously developed a robotic ultrasound imaging system for motion monitoring in abdominal radiation therapy. Owing to the slow speed of ultrasound image processing, our previous system could only track abdominal motions under breath-hold. To overcome this limitation, a novel 2D-based image processing method for tracking intra-fraction respiratory motion is proposed. Fifty-seven different anatomical features acquired from 27 sets of 2D ultrasound sequences were used in this study. Three 2D ultrasound sequences were acquired with the robotic ultrasound system from three healthy volunteers. The remaining datasets were provided by the 2015 MICCAI Challenge on Liver Ultrasound Tracking. All datasets were preprocessed to extract the feature point, and a patient-specific motion pattern was extracted by principal component analysis and slow feature analysis (SFA). The tracking finds the most similar frame (or indexed frame) by a k-dimensional-tree-based nearest neighbor search for estimating the tracked object location. A template image was updated dynamically through the indexed frame to perform a fast template matching (TM) within a learned smaller search region on the incoming frame. The mean tracking error between manually annotated landmarks and the location extracted from the indexed training frame is 1.80 ± 1.42 mm. Adding a fast TM procedure within a small search region reduces the mean tracking error to 1.14 ± 1.16 mm. The tracking time per frame is 15 ms, which is well below the frame acquisition time. Furthermore, the anatomical reproducibility was measured by analyzing the location's anatomical landmark relative to the probe; the position-controlled probe has better reproducibility and yields a smaller mean error across all three volunteer cases, compared to the force-controlled probe (2.69 versus 11.20 mm in the superior-inferior direction and 1.19 versus 8.21 mm in the anterior-posterior direction). 
Our method reduces the processing time for tracking respiratory motion significantly, which can reduce the delivery uncertainty.
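The indexed-frame lookup at the core of the tracker above is a nearest-neighbor query in the learned (PCA/SFA) feature space; the paper uses a k-d tree for speed, for which the brute-force sketch below is a stand-in.

```python
def nearest_frame(query, features):
    """Index of the training frame whose feature vector is closest
    (squared Euclidean distance) to the query features."""
    def d2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(range(len(features)), key=lambda i: d2(query, features[i]))
```

The returned index selects the template image for the subsequent fast template-matching step within the learned search region.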


Subjects
Abdomen/diagnostic imaging; Abdomen/radiation effects; Dose Fractionation, Radiation; Machine Learning; Motion; Radiotherapy, Image-Guided/methods; Respiration; Healthy Volunteers; Humans; Image Processing, Computer-Assisted; Radiotherapy Planning, Computer-Assisted; Reproducibility of Results; Ultrasonography
19.
Annu Int Conf IEEE Eng Med Biol Soc ; 2018: 4065-4068, 2018 Jul.
Article in English | MEDLINE | ID: mdl-30441249

ABSTRACT

One cause of preventable death is a lack of proper skills for providing critical care. The conventional course taught to non-medical individuals delivers instruction in advanced emergency procedures as a verbal block of instructions in a standardized presentation (for example, an instructional video). In the present study, we evaluate the benefits of using an OST-HMD for training of caregivers in an emergency medical environment. A rich user interface was implemented that provides 3D visual aids, including images, text, and tracked 3D overlays corresponding to each task that needs to be performed. A user study with 20 participants was conducted, in which each subject was trained on two tasks, performing one task with the HMD and the other with standard training. Two evaluations were performed, the first immediately after the training and a second three weeks later. Our results indicate that using a mixed reality HMD is more engaging, improves the time-on-task, and increases the confidence level of users in providing emergency and critical care.


Subjects
User-Computer Interface; Audiovisual Aids; Humans
20.
Annu Int Conf IEEE Eng Med Biol Soc ; 2018: 2162-2165, 2018 Jul.
Article in English | MEDLINE | ID: mdl-30440832

ABSTRACT

Training with simulation systems has become a primary alternative for learning the fundamental skills of robotic surgery. However, there is no consensus on a standard training curriculum: sessions defined a priori by expert trainers or self-directed by the trainees lack consistency. This study proposes an adaptive approach that structures the curriculum on the basis of an objective assessment of the trainee's performance. The work comprised an experimental session with 12 participants training on virtual reality tasks with the da Vinci Research Kit surgical console. Half of the subjects self-managed their training session, while the others underwent the adaptive training. The final performance of the latter trainees was found to be higher than that of the former (p=0.002), showing how outcome-based, dynamic designs could constitute a promising advance in robotic surgical training.


Subjects
Robotic Surgical Procedures; Virtual Reality; Clinical Competence; Computer Simulation; Curriculum; Pilot Projects