Results 1 - 20 of 322
1.
Trauma Violence Abuse ; : 15248380241253041, 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38828776

ABSTRACT

Young people who transition to adulthood from out-of-home care (OOHC) are more likely to experience a range of poorer outcomes relative to their same-age peers in the community. This systematic review assessed the effectiveness of policies or interventions (hereafter "interventions") aimed at improving housing, health, education, economic, and psychosocial outcomes for youth leaving OOHC (hereafter "care leavers"). Eleven databases of published literature were reviewed along with gray literature. Eligible studies used randomized or quasi-experimental designs and assessed interventions that provided support to care leavers prior to, during, or after they left OOHC. Primary outcomes were housing and homelessness, health and well-being, education, economic and employment, criminal and delinquent behavior, and risky behavior, while secondary outcomes were supportive relationships and life skills. Where possible, results were pooled in a meta-analysis. Certainty of evidence was assessed using Grading of Recommendations Assessment, Development and Evaluation. Fourteen studies published in 27 reports were identified that examined independent living programs (ILPs) (n = 5), intensive support services (n = 2), coaching and peer support (C&PSP) (n = 2), transitional housing (n = 1), health information or coaching (n = 2), and extended care (n = 2). All but one study was conducted in the United States. Twenty small meta-analyses were undertaken encompassing ILPs and C&PSP, with two showing results that favored the intervention with certainty. The level of confidence in each meta-analysis was considered very low. A significant risk of bias was identified in each of the included studies. While some interventions showed promise, particularly extended care, the scope and strength of included evidence is insufficient to recommend any included approach.
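The pooling step mentioned in this review can be pictured with the standard inverse-variance weighted average of study effect sizes. Below is a minimal fixed-effect sketch with hypothetical effect sizes and variances; the review's actual pooling model and data are described in its full text.

```python
import numpy as np

def pool_fixed_effect(effects, variances):
    """Inverse-variance weighted fixed-effect pooled estimate and its variance."""
    w = 1.0 / np.asarray(variances, dtype=float)  # weight = 1 / variance
    est = np.sum(w * np.asarray(effects, dtype=float)) / np.sum(w)
    var = 1.0 / np.sum(w)
    return est, var

# Two hypothetical study effect sizes (e.g. standardized mean differences)
est, var = pool_fixed_effect([0.30, 0.10], [0.04, 0.09])
```

The more precise study (variance 0.04) dominates the pooled estimate, which is the intended behavior of inverse-variance weighting.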

2.
Article in English | MEDLINE | ID: mdl-38775904

ABSTRACT

PURPOSE: Monocular SLAM algorithms are the key enabling technology for image-based surgical navigation systems in endoscopic procedures. Due to the visual feature scarcity and unique lighting conditions encountered in endoscopy, classical SLAM approaches perform inconsistently. Many recent approaches to endoscopic SLAM rely on deep learning models. They show promising results when optimized on singular domains such as arthroscopy, sinus endoscopy, colonoscopy, or laparoscopy, but are limited by an inability to generalize to different domains without retraining. METHODS: To address this generality issue, we propose OneSLAM, a monocular SLAM algorithm for surgical endoscopy that works out of the box across several endoscopic domains, including sinus endoscopy, colonoscopy, arthroscopy, and laparoscopy. Our pipeline builds upon robust tracking-any-point (TAP) foundation models to reliably track sparse correspondences across multiple frames and runs local bundle adjustment to jointly optimize camera poses and a sparse 3D reconstruction of the anatomy. RESULTS: We compare the performance of our method against three strong baselines previously proposed for monocular SLAM in endoscopy and general scenes. OneSLAM presents better or comparable performance than existing approaches targeted to that specific data in all four tested domains, generalizing across domains without the need for retraining. CONCLUSION: OneSLAM benefits from the convincing performance of TAP foundation models yet generalizes to endoscopic sequences of different anatomies, all while demonstrating better or comparable performance relative to domain-specific SLAM approaches. Future research on global loop closure will investigate how to reliably detect loops in endoscopic scenes to reduce accumulated drift and enhance long-term navigation capabilities.
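The local bundle adjustment mentioned above minimizes reprojection error between tracked 2D points and projections of the sparse 3D map. A minimal sketch of that residual function is below; the pinhole intrinsics, single camera pose, and points are hypothetical placeholders, not OneSLAM's actual pipeline.

```python
import numpy as np

def reprojection_residuals(points3d, R, t, K, obs2d):
    """Residuals between observed 2D track points (e.g. TAP correspondences)
    and pinhole projections of the sparse 3D map -- the quantity local
    bundle adjustment minimizes over camera poses and structure."""
    cam = points3d @ R.T + t          # world -> camera frame
    pix = cam @ K.T                   # apply intrinsics
    pix = pix[:, :2] / pix[:, 2:3]    # perspective divide
    return (pix - obs2d).ravel()

# Hypothetical intrinsics and two perfectly observed points
K = np.array([[100.0, 0.0, 50.0], [0.0, 100.0, 50.0], [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0, 2.0], [1.0, 0.0, 2.0]])
obs = np.array([[50.0, 50.0], [100.0, 50.0]])  # exact projections
res = reprojection_residuals(pts, np.eye(3), np.zeros(3), K, obs)
```

In a real pipeline these residuals would be fed to a nonlinear least-squares solver that iterates over poses and 3D points.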

3.
Article in English | MEDLINE | ID: mdl-38816649

ABSTRACT

PURPOSE: Skull-base surgery demands exceptional precision when removing bone in the lateral skull base. Robotic assistance can alleviate the effect of human sensory-motor limitations. However, the stiffness and inertia of the robot can significantly impact the surgeon's perception and control of the tool-to-tissue interaction forces. METHODS: We present a situation-aware force control technique aimed at regulating interaction forces during robot-assisted skull-base drilling. Contextual interaction information derived from the digital twin environment is used to enhance sensory perception and suppress undesired high forces. RESULTS: To validate our approach, we conducted initial feasibility experiments involving one medical student and two engineering students. The experiment focused on further drilling around critical structures following cortical mastoidectomy. The results demonstrate that robotic assistance coupled with the proposed control scheme effectively limited undesired interaction forces compared to robotic assistance without it. CONCLUSIONS: The proposed force control technique shows promise in significantly reducing undesired interaction forces during robot-assisted skull-base surgery. These findings contribute to ongoing efforts to enhance surgical precision and safety in complex procedures involving the lateral skull base.
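The abstract does not give the control law, but the idea of suppressing undesired high forces can be illustrated with a generic cooperative-admittance sketch in which the commanded tool velocity is scaled down as the sensed interaction force approaches a limit. The gain and force limit here are hypothetical, not the paper's parameters.

```python
def limited_velocity(f_sensed, f_max, gain):
    """Cooperative admittance control sketch: tool velocity follows the
    sensed force, scaled toward zero as |force| approaches f_max."""
    scale = max(0.0, 1.0 - abs(f_sensed) / f_max)  # soften near the limit
    return gain * f_sensed * scale

# Hypothetical values: 2 N sensed, 5 N limit, 0.1 (m/s)/N admittance gain
v = limited_velocity(f_sensed=2.0, f_max=5.0, gain=0.1)
```

At or beyond the limit the commanded velocity drops to zero, which is the "suppress undesired high forces" behavior in its simplest form.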

4.
Article in English | MEDLINE | ID: mdl-38753135

ABSTRACT

PURPOSE: Preoperative imaging plays a pivotal role in sinus surgery, where CT offers patient-specific insights into complex anatomy, enabling real-time intraoperative navigation to complement endoscopic imaging. However, surgery elicits anatomical changes not represented in the preoperative model, generating an inaccurate basis for navigation as surgery progresses. METHODS: We propose a first vision-based approach to update the preoperative 3D anatomical model using intraoperative endoscopic video in navigated sinus surgery, where relative camera poses are known. We rely on comparisons of intraoperative monocular depth estimates and preoperative depth renders to identify modified regions. The new depths are integrated in these regions through volumetric fusion in a truncated signed distance function representation to generate an intraoperative 3D model that reflects tissue manipulation. RESULTS: We quantitatively evaluate our approach by sequentially updating models for a five-step surgical progression in an ex vivo specimen. We compute the error between correspondences from the updated model and ground-truth intraoperative CT in the region of anatomical modification. The resulting models show a decrease in error during surgical progression, as opposed to an increase when no update is employed. CONCLUSION: Our findings suggest that preoperative 3D anatomical models can be updated using intraoperative endoscopic video in navigated sinus surgery. Future work will investigate improvements to monocular depth estimation as well as removing the need for external navigation systems. The resulting ability to continuously update the patient model may provide surgeons with a more precise understanding of the current anatomical state and paves the way toward a digital twin paradigm for sinus surgery.
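The update step can be pictured as comparing intraoperative depth estimates against preoperative depth renders and fusing new depth only where they disagree. The toy numpy sketch below uses a per-pixel blend with a hypothetical threshold and weight; the paper's method performs proper TSDF volumetric fusion, which this stands in for only conceptually.

```python
import numpy as np

def update_depth(pre_render, intra_depth, thresh=2.0, alpha=0.5):
    """Flag pixels where intraoperative depth deviates from the preoperative
    render by more than `thresh`, then blend new depth into those regions.
    A toy stand-in for weighted TSDF fusion; `thresh`/`alpha` are hypothetical."""
    modified = np.abs(intra_depth - pre_render) > thresh
    fused = pre_render.copy()
    fused[modified] = (1 - alpha) * pre_render[modified] + alpha * intra_depth[modified]
    return fused, modified

pre = np.full((2, 2), 10.0)                       # preoperative render (mm)
intra = np.array([[10.1, 10.2], [15.0, 10.0]])    # one pixel changed by surgery
fused, mask = update_depth(pre, intra)
```

Only the pixel with a 5 mm discrepancy is flagged and updated; small depth-estimation noise is left untouched.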

5.
Int J Comput Assist Radiol Surg ; 19(6): 1213-1222, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38642297

ABSTRACT

PURPOSE: Teamwork in surgery depends on a shared mental model of success, i.e., a common understanding of objectives in the operating room. A shared model leads to increased engagement among team members and is associated with fewer complications and overall better outcomes for patients. However, clinical training typically focuses on role-specific skills, leaving individuals to acquire a shared model indirectly through on-the-job experience. METHODS: We investigate whether virtual reality (VR) cross-training, i.e., exposure to other roles, can enhance a shared mental model for non-surgeons more directly. Our study focuses on X-ray guided pelvic trauma surgery, a procedure where successful communication depends on the shared model between the surgeon and a C-arm technologist. We present a VR environment supporting both roles and evaluate a cross-training curriculum in which non-surgeons swap roles with the surgeon. RESULTS: Exposure to the surgical task resulted in higher engagement with the C-arm technologist role in VR, as measured by the mental demand and effort expended by participants (p < 0.001). It also had a significant effect on non-surgeons' mental model of the overall task; novice participants' estimation of the mental demand and effort required for the surgeon's task increased after training, while their perception of overall performance decreased (p < 0.05), indicating a gap in understanding based solely on observation. This phenomenon was also present for a professional C-arm technologist. CONCLUSION: Until now, VR applications for clinical training have focused on virtualizing existing curricula. We demonstrate how novel approaches that are not possible outside a virtual environment, such as role swapping, may enhance the shared mental model of surgical teams by contextualizing each individual's role within the overall task in a time- and cost-efficient manner.
As workflows grow increasingly sophisticated, we see VR curricula as being able to directly foster a shared model for success, ultimately benefiting patient outcomes through more effective teamwork in surgery.


Subject(s)
Patient Care Team , Virtual Reality , Humans , Female , Male , Curriculum , Clinical Competence , Adult , Surgery, Computer-Assisted/methods , Surgery, Computer-Assisted/education , Surgeons/education , Surgeons/psychology
6.
Article in English | MEDLINE | ID: mdl-38686594

ABSTRACT

OBJECTIVE: Obtaining automated, objective 3-dimensional (3D) models of the Eustachian tube (ET) and the internal carotid artery (ICA) from computed tomography (CT) scans could provide useful navigational and diagnostic information for ET pathologies and interventions. We aim to develop a deep learning (DL) pipeline to automatically segment the ET and ICA and use these segmentations to compute distances between these structures. STUDY DESIGN: Retrospective cohort. SETTING: Tertiary referral center. METHODS: From a database of 30 CT scans, 60 ET and ICA pairs were manually segmented and used to train an nnU-Net model, a DL segmentation framework. These segmentations were also used to develop a quantitative tool to capture the magnitude and location of the minimum distance point (MDP) between ET and ICA. Performance metrics for the nnU-Net automated segmentations were calculated via the average Hausdorff distance (AHD) and dice similarity coefficient (DSC). RESULTS: The AHD for the ET and ICA were 0.922 and 0.246 mm, respectively. Similarly, the DSC values for the ET and ICA were 0.578 and 0.884. The mean MDP from ET to ICA in the cartilaginous region was 2.6 mm (0.7-5.3 mm) and was located on average 1.9 mm caudal from the bony cartilaginous junction. CONCLUSION: This study describes the first end-to-end DL pipeline for automated ET and ICA segmentation and analyzes distances between these structures. In addition to helping to ensure the safe selection of patients for ET dilation, this method can facilitate large-scale studies exploring the relationship between ET pathologies and the 3D shape of the ET.
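The two quantities reported above, the Dice similarity coefficient and the minimum distance point between structures, are straightforward to compute from voxel masks and surface points. A small numpy sketch with synthetic data follows; this is illustrative only, not the study's nnU-Net pipeline.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two boolean segmentation masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def min_distance_point(pts_a, pts_b):
    """Smallest distance between two point sets (e.g. ET and ICA surface
    points, in mm) and the point on A where it occurs, by brute force."""
    diff = pts_a[:, None, :] - pts_b[None, :, :]
    d = np.sqrt((diff ** 2).sum(axis=2))
    i, j = np.unravel_index(np.argmin(d), d.shape)
    return float(d[i, j]), pts_a[i]

# Synthetic masks and surface points
mask_a = np.array([[1, 1], [0, 0]])
mask_b = np.array([[1, 0], [0, 0]])
dsc = dice(mask_a, mask_b)

et = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0]])
ica = np.array([[5.0, 0.0, 0.0]])
d_min, mdp = min_distance_point(et, ica)
```

For large real segmentations a KD-tree query would replace the brute-force pairwise distances, but the definition of the MDP is the same.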

7.
Mol Biol Cell ; 35(6): mr3, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38630519

ABSTRACT

Dendritic spines, the mushroom-shaped extensions along dendritic shafts of excitatory neurons, are critical for synaptic function and are one of the first neuronal structures disrupted in neurodevelopmental and neurodegenerative diseases. Microtubule (MT) polymerization into dendritic spines is an activity-dependent process capable of affecting spine shape and function. Studies have shown that MT polymerization into spines occurs specifically in spines undergoing plastic changes. However, discerning the function of MT invasion of dendritic spines requires the specific inhibition of MT polymerization into spines, while leaving MT dynamics in the dendritic shaft, synaptically connected axons and associated glial cells intact. This is not possible with the unrestricted, bath application of pharmacological compounds. To specifically disrupt MT entry into spines we coupled a MT elimination domain (MTED) from the Efa6 protein to the actin filament-binding peptide LifeAct. LifeAct was chosen because actin filaments are highly concentrated in spines and are necessary for MT invasions. Temporally controlled expression of this LifeAct-MTED construct inhibits MT entry into dendritic spines, while preserving typical MT dynamics in the dendrite shaft. Expression of this construct will allow for the determination of the function of MT invasion of spines and more broadly, to discern how MT-actin interactions affect cellular processes.


Subject(s)
Dendritic Spines , Microtubules , Polymerization , Microtubules/metabolism , Dendritic Spines/metabolism , Animals , Actins/metabolism , Actin Cytoskeleton/metabolism , Neurons/metabolism , Rats , Microfilament Proteins/metabolism
8.
Article in English | MEDLINE | ID: mdl-38488231

ABSTRACT

OBJECTIVE: Use microscopic video-based tracking of laryngeal surgical instruments to investigate the effect of robot assistance on instrument tremor. STUDY DESIGN: Experimental trial. SETTING: Tertiary Academic Medical Center. METHODS: In this randomized cross-over trial, 36 videos were recorded from 6 surgeons performing left and right cordectomies on cadaveric pig larynges. These recordings captured 3 distinct conditions: without robotic assistance, with robot-assisted scissors, and with robot-assisted graspers. To assess tool tremor, we employed computer vision-based algorithms for tracking surgical tools. Absolute tremor bandpower and normalized path length were utilized as quantitative measures. Wilcoxon rank sum exact tests were employed for statistical analyses and comparisons between trials. Additionally, surveys were administered to assess the perceived ease of use of the robotic system. RESULTS: Absolute tremor bandpower showed a significant decrease when using robot-assisted instruments compared to freehand instruments (P = .012). Normalized path length significantly decreased with robot-assisted compared to freehand trials (P = .001). For the scissors, robot-assisted trials resulted in a significant decrease in absolute tremor bandpower (P = .002) and normalized path length (P < .001). For the graspers, there was no significant difference in absolute tremor bandpower (P = .4), but there was a significantly lower normalized path length in the robot-assisted trials (P = .03). CONCLUSION: This study demonstrated that computer-vision-based approaches can be used to assess tool motion in simulated microlaryngeal procedures. The results suggest that robot assistance is capable of reducing instrument tremor.
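Both motion metrics can be sketched directly from a tracked tip trajectory. Below is a minimal 1-D numpy version with a synthetic 8 Hz tremor signal; the tremor band limits and sampling rate are hypothetical, not the study's settings.

```python
import numpy as np

def tremor_bandpower(pos, fs, band=(4.0, 12.0)):
    """Power of a tracked tip coordinate within a tremor frequency band,
    from the FFT power spectrum (band limits are a common hand-tremor
    range, assumed here for illustration)."""
    x = pos - pos.mean()
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    sel = (freqs >= band[0]) & (freqs <= band[1])
    return power[sel].sum()

def normalized_path_length(pos):
    """Total tip path length divided by straight-line displacement;
    1.0 for a perfectly direct motion, larger with wander/tremor."""
    steps = np.abs(np.diff(pos)).sum()
    return steps / max(abs(pos[-1] - pos[0]), 1e-9)

fs = 100.0
t = np.arange(100) / fs
tip = 0.05 * np.sin(2 * np.pi * 8.0 * t)  # pure 8 Hz tremor, 0.05 mm amplitude
bp = tremor_bandpower(tip, fs)
```

A steadier (robot-assisted) trajectory would yield both lower bandpower and a normalized path length closer to 1.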

9.
bioRxiv ; 2024 Mar 04.
Article in English | MEDLINE | ID: mdl-38496454

ABSTRACT

Dendritic spines, the mushroom-shaped extensions along dendritic shafts of excitatory neurons, are critical for synaptic function and are one of the first neuronal structures disrupted in neurodevelopmental and neurodegenerative diseases. Microtubule (MT) polymerization into dendritic spines is an activity-dependent process capable of affecting spine shape and function. Studies have shown that MT polymerization into spines occurs specifically in spines undergoing plastic changes. However, discerning the function of MT invasion of dendritic spines requires the specific inhibition of MT polymerization into spines, while leaving MT dynamics in the dendritic shaft, synaptically connected axons and associated glial cells intact. This is not possible with the unrestricted, bath application of pharmacological compounds. To specifically disrupt MT entry into spines we coupled a MT elimination domain (MTED) from the Efa6 protein to the actin filament-binding peptide LifeAct. LifeAct was chosen because actin filaments are highly concentrated in spines and are necessary for MT invasions. Temporally controlled expression of this LifeAct-MTED construct inhibits MT entry into dendritic spines, while preserving typical MT dynamics in the dendrite shaft. Expression of this construct will allow for the determination of the function of MT invasion of spines and more broadly, to discern how MT-actin interactions affect cellular processes.

10.
IEEE Trans Med Robot Bionics ; 6(1): 135-145, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38304756

ABSTRACT

Subretinal injection methods and other procedures for treating retinal conditions and diseases (many considered incurable) have been limited in scope due to limited human motor control. This study demonstrates the next-generation, cooperatively controlled Steady-Hand Eye Robot (SHER 3.0), a precise and intuitive-to-use robotic platform achieving clinical standards for targeting accuracy and resolution in subretinal injections. The system design and basic kinematics are reported, and a deflection model for the incorporated delta stage and validation experiments are presented. This model optimizes the delta stage parameters, maximizing the global conditioning index and minimizing torsional compliance. Five tests measuring accuracy, repeatability, and deflection show the optimized stage design achieves a tip accuracy of <30 µm, tip repeatability of 9.3 µm and 0.02°, and deflections between 20 and 350 µm/N. Future work will use updated control models to refine tip positioning outcomes, which will be tested on in vivo animal models.

11.
Int J Comput Assist Radiol Surg ; 19(2): 199-208, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37610603

ABSTRACT

PURPOSE: Effective robot-assisted laparoscopic prostatectomy requires integration of a transrectal ultrasound (TRUS) imaging system, the most widely used modality in prostate imaging. However, manual manipulation of the ultrasound transducer during the procedure would significantly interfere with the surgery. We therefore propose an image co-registration algorithm based on a photoacoustic marker (PM) method, in which ultrasound/photoacoustic (US/PA) images are registered to the endoscopic camera images, ultimately enabling the TRUS transducer to automatically track the surgical instrument. METHODS: An optimization-based algorithm is proposed to co-register the images from the two imaging modalities. The principle of light propagation and the uncertainty in PM detection are modeled in this algorithm to improve its stability and accuracy. The algorithm is validated using the previously developed US/PA image-guided system with a da Vinci surgical robot. RESULTS: The target registration error (TRE) is measured to evaluate the proposed algorithm. In both simulation and experimental demonstration, the proposed algorithm achieved sub-centimeter accuracy, which is clinically acceptable (1.15 ± 0.29 mm in the experimental evaluation). The result is also comparable with our previous approach (1.05 ± 0.37 mm), and the proposed method can be implemented with a normal white-light stereo camera and does not require highly accurate localization of the PM. CONCLUSION: The proposed frame registration algorithm enables a simple yet efficient integration of a commercial US/PA imaging system into the laparoscopic surgical setting by leveraging the characteristic properties of acoustic wave propagation and laser excitation, contributing to automated US/PA image-guided surgical intervention applications.
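The target registration error reported here (and in several other entries on this page) is simply the distance between corresponding target points after the registration has been applied. A minimal sketch with synthetic points:

```python
import numpy as np

def target_registration_error(pts_fixed, pts_registered):
    """Mean Euclidean distance between corresponding target points after
    registration -- the TRE metric reported in the abstract."""
    return float(np.linalg.norm(pts_fixed - pts_registered, axis=1).mean())

# Synthetic ground-truth targets and their registered counterparts (mm)
gt = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
reg = np.array([[0.0, 3.0, 0.0], [10.0, 0.0, 4.0]])
tre = target_registration_error(gt, reg)
```

Reporting a mean ± standard deviation, as the abstract does, would additionally take the per-point standard deviation of the same distances.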


Subject(s)
Laparoscopy , Prostatic Neoplasms , Robotics , Surgery, Computer-Assisted , Male , Humans , Imaging, Three-Dimensional/methods , Ultrasonography/methods , Surgery, Computer-Assisted/methods , Algorithms , Prostatectomy/methods , Prostatic Neoplasms/surgery
12.
IEEE Trans Med Imaging ; 43(1): 275-285, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37549070

ABSTRACT

Image-based 2D/3D registration is a critical technique for fluoroscopy-guided surgical interventions. Conventional intensity-based 2D/3D registration approaches suffer from a limited capture range due to the presence of local minima in hand-crafted image similarity functions. In this work, we aim to extend the 2D/3D registration capture range with a fully differentiable deep network framework that learns to approximate a convex-shaped similarity function. The network uses a novel Projective Spatial Transformer (ProST) module that has unique differentiability with respect to 3D pose parameters and is trained using an innovative double-backward gradient-driven loss function. We compare against the most popular learning-based pose regression methods in the literature and use the well-established CMA-ES intensity-based registration as a benchmark. We report registration pose error, target registration error (TRE), and success rate (SR) with a threshold of 10 mm for mean TRE. For the pelvis anatomy, the median TRE of ProST followed by CMA-ES is 4.4 mm with an SR of 65.6% in simulation, and 2.2 mm with an SR of 73.2% on real data. The CMA-ES SRs without ProST registration are 28.5% and 36.0% in simulation and on real data, respectively. Our results suggest that the proposed ProST network learns a practical similarity function, which vastly extends the capture range of conventional intensity-based 2D/3D registration. We believe the unique differentiability of ProST has the potential to benefit related 3D medical imaging research applications. The source code is available at https://github.com/gaocong13/Projective-Spatial-Transformers.


Subject(s)
Imaging, Three-Dimensional , Pelvis , Imaging, Three-Dimensional/methods , Fluoroscopy/methods , Software , Algorithms
13.
Int J Comput Assist Radiol Surg ; 19(1): 51-59, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37347346

ABSTRACT

PURPOSE: A virtual reality (VR) system, where surgeons can practice procedures on virtual anatomies, is a scalable and cost-effective alternative to cadaveric training. The fully digitized virtual surgeries can also be used to assess the surgeon's skills using measurements that are otherwise hard to collect in reality. Thus, we present the Fully Immersive Virtual Reality System (FIVRS) for skull-base surgery, which combines surgical simulation software with a high-fidelity hardware setup. METHODS: FIVRS allows surgeons to follow normal clinical workflows inside the VR environment. FIVRS uses advanced rendering designs and drilling algorithms for realistic bone ablation. A head-mounted display with ergonomics similar to that of surgical microscopes is used to improve immersiveness. Extensive multi-modal data are recorded for post-analysis, including eye gaze, motion, force, and video of the surgery. A user-friendly interface is also designed to ease the learning curve of using FIVRS. RESULTS: We present results from a user study involving surgeons with various levels of expertise. The preliminary data recorded by FIVRS differentiate between participants with different levels of expertise, promising future research on automatic skill assessment. Furthermore, informal feedback from the study participants about the system's intuitiveness and immersiveness was positive. CONCLUSION: We present FIVRS, a fully immersive VR system for skull-base surgery. FIVRS features a realistic software simulation coupled with modern hardware for improved realism. The system is completely open source and provides feature-rich data in an industry-standard format.


Subject(s)
Virtual Reality , Humans , Computer Simulation , Software , User-Computer Interface , Clinical Competence , Skull/surgery
14.
Adv Sci (Weinh) ; 11(7): e2305495, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38072667

ABSTRACT

Magnetic resonance imaging (MRI) demonstrates clear advantages over other imaging modalities in neurosurgery with its ability to delineate critical neurovascular structures and cancerous tissue in high-resolution 3D anatomical roadmaps. However, its application has been limited to interventions performed based on static pre- and post-operative imaging, where errors accrue from stereotactic frame setup, image registration, and brain shift. To leverage the powerful intra-operative functions of MRI (e.g., instrument tracking and monitoring of physiological changes and tissue temperature) in MRI-guided bilateral stereotactic neurosurgery, a multi-stage robotic positioner is proposed. The system positions cannula/needle instruments using a lightweight (203 g) and compact (Ø97 × 81 mm) skull-mounted structure that fits within most standard imaging head coils. With an optimized soft-robotics design, the system operates in two stages: i) manual coarse adjustment performed interactively by the surgeon (workspace of ±30°), ii) automatic fine adjustment with precise (<0.2° orientation error), responsive (1.4 Hz bandwidth), and high-resolution (0.058°) soft robotic positioning. Orientation locking provides sufficient transmission stiffness (4.07 N/mm) for instrument advancement. The system's clinical workflow and accuracy are validated with lab-based (<0.8 mm) and MRI-based testing on skull phantoms (<1.7 mm) and a cadaver subject (<2.2 mm). Custom-made wireless omni-directional tracking markers facilitated robot registration under MRI.


Subject(s)
Neurosurgery , Robotics , Neurosurgical Procedures/methods , Brain , Magnetic Resonance Imaging/methods
15.
IEEE Robot Autom Lett ; 8(3): 1287-1294, 2023 Mar.
Article in English | MEDLINE | ID: mdl-37997605

ABSTRACT

This paper introduces the first integrated real-time intraoperative surgical guidance system, in which an endoscope camera of da Vinci surgical robot and a transrectal ultrasound (TRUS) transducer are co-registered using photoacoustic markers that are detected in both fluorescence (FL) and photoacoustic (PA) imaging. The co-registered system enables the TRUS transducer to track the laser spot illuminated by a pulsed-laser-diode attached to the surgical instrument, providing both FL and PA images of the surgical region-of-interest (ROI). As a result, the generated photoacoustic marker is visualized and localized in the da Vinci endoscopic FL images, and the corresponding tracking can be conducted by rotating the TRUS transducer to display the PA image of the marker. A quantitative evaluation revealed that the average registration and tracking errors were 0.84 mm and 1.16°, respectively. This study shows that the co-registered photoacoustic marker tracking can be effectively deployed intraoperatively using TRUS+PA imaging providing functional guidance of the surgical ROI.

16.
IEEE Robot Autom Lett ; 8(3): 1343-1350, 2023 Mar.
Article in English | MEDLINE | ID: mdl-37637101

ABSTRACT

An in situ needle manipulation technique used by physicians when performing spinal injections is modeled to study its effect on needle shape and needle tip position. A mechanics-based model is proposed and solved using the finite element method. A test setup is presented to mimic the needle manipulation motion. Tissue phantoms made from plastisol, as well as porcine skeletal muscle samples, are used to evaluate the model's accuracy against medical images. The effects of different compression models and model parameters on model accuracy are studied, and the effect of needle-tissue interaction on the needle's remote center of motion is examined. With the correct combination of compression model and model parameters, the simulation is able to predict needle tip position with submillimeter accuracy.

17.
Article in English | MEDLINE | ID: mdl-37555199

ABSTRACT

Robotic X-ray C-arm imaging systems can precisely achieve any position and orientation relative to the patient. Informing the system, however, what pose exactly corresponds to a desired view is challenging. Currently these systems are operated by the surgeon using joysticks, but this interaction paradigm is not necessarily effective because users may be unable to efficiently actuate more than a single axis of the system simultaneously. Moreover, novel robotic imaging systems, such as the Brainlab Loop-X, allow for independent source and detector movements, adding even more complexity. To address this challenge, we consider complementary interfaces for the surgeon to command robotic X-ray systems effectively. Specifically, we consider three interaction paradigms: (1) the use of a pointer to specify the principal ray of the desired view relative to the anatomy, (2) the same pointer, but combined with a mixed reality environment to synchronously render digitally reconstructed radiographs from the tool's pose, and (3) the same mixed reality environment but with a virtual X-ray source instead of the pointer. Initial human-in-the-loop evaluation with an attending trauma surgeon indicates that mixed reality interfaces for robotic X-ray system control are promising and may contribute to substantially reducing the number of X-ray images acquired solely during "fluoro hunting" for the desired view or standard plane.

18.
Chem Commun (Camb) ; 59(60): 9243-9246, 2023 Jul 25.
Article in English | MEDLINE | ID: mdl-37424373

ABSTRACT

A commercial zeolite is shown to be a highly effective heterogeneous catalyst for the Friedel-Crafts alkylation of mandelic acid with aromatic substrates. The reaction yields mixed diarylacetic acids in one step, avoiding the need for inert-atmosphere techniques or superacids. The observed reaction pathways are zeolite-framework dependent, with only the FAU framework giving very high selectivity to the mixed diarylacetic acids.

19.
IEEE Trans Robot ; 39(2): 1373-1387, 2023 Apr.
Article in English | MEDLINE | ID: mdl-37377922

ABSTRACT

Notable challenges during retinal surgery lend themselves to robotic assistance, which has proven beneficial in providing safe, steady-hand manipulation. Efficient assistance from the robot relies heavily on accurate sensing of surgical states (e.g., instrument tip localization and tool-to-tissue interaction forces). Many existing tool-tip localization methods require preoperative frame registrations or instrument calibrations. In this study, using an iterative approach that combines vision- and force-based methods, we develop calibration- and registration-independent (RI) algorithms to provide online estimates of instrument stiffness (least squares and adaptive). The estimates are then combined with a state-space model based on the forward kinematics (FWK) of the Steady-Hand Eye Robot (SHER) and fiber Bragg grating (FBG) sensor measurements. This is accomplished using a Kalman filtering (KF) approach to improve deflected instrument tip position estimates during robot-assisted eye surgery. The conducted experiments demonstrate that when the online RI stiffness estimates are used, instrument tip localization results surpass those obtained from preoperative offline stiffness calibrations.
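The idea of an online least-squares stiffness estimate can be illustrated with a scalar recursive least-squares update on samples of f = kx. The forgetting factor and data below are hypothetical; the paper's adaptive estimator and its fusion with the Kalman filter are considerably more involved.

```python
import numpy as np

def rls_stiffness(deflections, forces, lam=0.99):
    """Online recursive least-squares estimate of a scalar stiffness k in
    f = k * x, updated one (deflection, force) sample at a time.
    `lam` is a forgetting factor (hypothetical value)."""
    k, p = 0.0, 1e6                         # initial estimate and covariance
    for x, f in zip(deflections, forces):
        g = p * x / (lam + x * p * x)       # gain
        k = k + g * (f - k * x)             # innovation update
        p = (p - g * x * p) / lam           # covariance update
    return k

x = np.linspace(0.1, 1.0, 50)   # deflections (mm)
f = 2.0 * x                     # noiseless samples from a stiffness of 2.0
k_hat = rls_stiffness(x, f)
```

With noiseless data the estimate converges to the true stiffness; in practice the running estimate would feed the state-space tip-position filter at each step.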

20.
Int J Comput Assist Radiol Surg ; 18(7): 1303-1310, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37266885

ABSTRACT

PURPOSE: Tracking the 3D motion of the surgical tool and the patient anatomy is a fundamental requirement for computer-assisted skull-base surgery. The estimated motion can be used both for intra-operative guidance and for downstream skill analysis. Recovering such motion solely from surgical videos is desirable, as it is compliant with current clinical workflows and instrumentation. METHODS: We present Tracker of Anatomy and Tool (TAToo). TAToo jointly tracks the rigid 3D motion of the patient skull and surgical drill from stereo microscopic videos. TAToo estimates motion via an iterative optimization process in an end-to-end differentiable form. For robust tracking performance, TAToo adopts a probabilistic formulation and enforces geometric constraints on the object level. RESULTS: We validate TAToo on both simulation data, where ground truth motion is available, as well as on anthropomorphic phantom data, where optical tracking provides a strong baseline. We report sub-millimeter and millimeter inter-frame tracking accuracy for skull and drill, respectively, with rotation errors below [Formula: see text]. We further illustrate how TAToo may be used in a surgical navigation setting. CONCLUSIONS: We present TAToo, which simultaneously tracks the surgical tool and the patient anatomy in skull-base surgery. TAToo directly predicts the motion from surgical videos, without the need of any markers. Our results show that the performance of TAToo compares favorably to competing approaches. Future work will include fine-tuning of our depth network to reach a 1 mm clinical accuracy goal desired for surgical applications in the skull base.
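Recovering a rigid inter-frame motion from matched 3D points, the core quantity TAToo estimates, is classically done with an SVD-based least-squares fit (the Kabsch algorithm). The sketch below shows that classical baseline, not TAToo's learned iterative optimization.

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (R, t) aligning src to dst via SVD
    (Kabsch): one classical way to recover inter-frame rigid motion from
    matched 3D points."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)           # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

src = np.array([[0.0, 0, 0], [1.0, 0, 0], [0.0, 1, 0], [0.0, 0, 1]])
t_true = np.array([1.0, 2.0, 3.0])
dst = src + t_true                          # pure translation, for checking
R, t = rigid_fit(src, dst)
```

Applied frame-to-frame on tracked skull or drill points, these transforms chain into the trajectories whose sub-millimeter errors the abstract reports.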


Subject(s)
Neurosurgical Procedures , Surgery, Computer-Assisted , Humans , Neurosurgical Procedures/methods , Surgery, Computer-Assisted/methods , Computer Simulation , Skull Base/diagnostic imaging , Skull Base/surgery