Results 1 - 18 of 18
1.
Surg Innov ; 29(3): 353-359, 2022 Jun.
Article in English | MEDLINE | ID: mdl-33517863

ABSTRACT

Purpose. See-through head-mounted displays (HMDs) can be used to view fluoroscopic imaging during orthopedic surgical procedures. The goals of this study were to determine whether HMDs reduce procedure time, the number of fluoroscopic images required, or the number of head turns by the surgeon compared with standard monitors. Methods. Sixteen orthopedic surgery residents each performed fluoroscopy-guided drilling of 8 holes for placement of tibial nail distal interlocking screws in an anatomical model, with 4 holes drilled while using the HMD and 4 holes drilled while using a standard monitor. Procedure time, number of fluoroscopic images needed, and number of head turns by the resident during the procedure were compared between the 2 modalities. Statistical significance was set at P < .05. Results. Mean (SD) procedure time did not differ significantly between attempts using the standard monitor (55 [37] seconds) and the HMD (56 [31] seconds) (P = .73). Nor did the mean number of fluoroscopic images differ significantly between the standard monitor and the HMD (9 [5] images for each) (P = .84). Residents turned their heads significantly more times when using the standard monitor (9 [5] times) than when using the HMD (1 [2] times) (P < .001). Conclusions. Head-mounted displays lessened the need for residents to turn their heads away from the surgical field while drilling holes for tibial nail distal interlocking screws in an anatomical model; however, there was no difference in procedure time or number of fluoroscopic images needed between the HMD and the standard monitor.
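
The abstract does not name the statistical tests used; for a paired design like this (each resident used both modalities), a paired t-test or Wilcoxon signed-rank test is the usual choice. A minimal sketch with illustrative data (not the study's raw measurements), assuming NumPy and SciPy:

```python
import numpy as np
from scipy import stats

# Illustrative per-resident procedure times (seconds); NOT the study's data.
rng = np.random.default_rng(0)
time_monitor = rng.normal(55, 37, 16).clip(min=10)  # standard monitor
time_hmd = rng.normal(56, 31, 16).clip(min=10)      # head-mounted display

# Each of the 16 residents used both modalities, so the comparison is paired.
t_stat, p_t = stats.ttest_rel(time_monitor, time_hmd)
w_stat, p_w = stats.wilcoxon(time_monitor - time_hmd)
print(f"paired t-test p = {p_t:.3f}; Wilcoxon signed-rank p = {p_w:.3f}")
```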


Subject(s)
Orthopedic Procedures; Fluoroscopy; Monitoring, Physiologic
2.
Sci Rep ; 10(1): 5643, 2020 Mar 27.
Article in English | MEDLINE | ID: mdl-32221327

ABSTRACT

Minimally invasive treatment of vascular disease demands dynamic navigation through complex blood vessel pathways and accurate placement of an interventional device, which has resulted in increased reliance on fluoroscopic guidance and commensurate radiation exposure to the patient and staff. Here we introduce a guidance system inspired by electric fish that incorporates measurements from a newly designed electrogenic sensory catheter with preoperative imaging to provide continuous feedback to guide vascular procedures without additional contrast injection, radiation, image registration, or external tracking. Electrodes near the catheter tip simultaneously create a weak electric field and measure the impedance, which changes with the internal geometry of the vessel as the catheter advances through the vasculature. The impedance time series is then mapped to a preoperative vessel model to determine the relative position of the catheter within the vessel tree. We present navigation in a synthetic vessel tree based on our mapping technique. Experiments in a porcine model demonstrated the sensor's ability to detect cross-sectional area variation in vivo. These initial results demonstrate the capability and potential of this novel bioimpedance-based navigation technology as a non-fluoroscopic technique to augment existing imaging methods.
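
A toy sketch of the mapping idea described above: impedance varies inversely with vessel cross-sectional area, so a measured impedance window can be located along a precomputed centerline profile by sliding-window matching. All names and numbers below are illustrative, not from the paper:

```python
import numpy as np

# Hypothetical 1D vessel model: cross-sectional area (mm^2) sampled every
# 1 mm along one branch centerline, derived from preoperative imaging.
model_area = np.concatenate([np.full(80, 50.0),
                             np.linspace(50, 20, 40),
                             np.full(80, 20.0)])
model_z = 1.0 / model_area          # impedance grows as the vessel narrows

def locate(measured_z, model_z):
    """Slide the measured impedance window along the model profile and
    return the centerline offset (in samples) with the smallest SSD."""
    n = len(measured_z)
    def norm(x):  # normalize to tolerate unknown sensor gain/offset
        return (x - x.mean()) / (x.std() + 1e-12)
    m = norm(np.asarray(measured_z))
    costs = [np.sum((norm(model_z[i:i + n]) - m) ** 2)
             for i in range(len(model_z) - n)]
    return int(np.argmin(costs))

# Simulated pullback covering samples 60..120 of the model, with noise.
measured = model_z[60:120] + np.random.default_rng(1).normal(0, 1e-4, 60)
print("estimated catheter position:", locate(measured, model_z))  # ~60
```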


Subject(s)
Catheters; Endovascular Procedures/instrumentation; Animals; Endovascular Procedures/methods; Equipment Design/instrumentation; Equipment Design/methods; Female; Fluoroscopy/instrumentation; Fluoroscopy/methods; Imaging, Three-Dimensional/instrumentation; Imaging, Three-Dimensional/methods; Surgery, Computer-Assisted/instrumentation; Surgery, Computer-Assisted/methods; Swine
4.
Med Phys ; 45(6): 2463-2475, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29569728

ABSTRACT

PURPOSE: Cone-beam computed tomography (CBCT) is one of the primary imaging modalities in radiation therapy, dentistry, and orthopedic interventions. While CBCT provides crucial intraoperative information, it is constrained by a limited imaging volume, which reduces its effectiveness. This paper introduces an approach allowing real-time intraoperative stitching of overlapping and nonoverlapping CBCT volumes to enable 3D measurements on large anatomical structures. METHODS: A CBCT-capable mobile C-arm is augmented with a red-green-blue-depth (RGBD) camera. An offline cocalibration of the two imaging modalities results in coregistered video, infrared, and x-ray views of the surgical scene. Then, automatic stitching of multiple small, nonoverlapping CBCT volumes is possible by recovering the relative motion of the C-arm with respect to the patient based on the camera observations. We propose three methods to recover the relative pose: RGB-based tracking of visual markers that are placed near the surgical site, RGBD-based simultaneous localization and mapping (SLAM) of the surgical scene which incorporates both color and depth information for pose estimation, and surface tracking of the patient using only depth data provided by the RGBD sensor. RESULTS: On an animal cadaver, we show stitching errors as low as 0.33, 0.91, and 1.72 mm when the visual marker, RGBD SLAM, and surface data are used for tracking, respectively. CONCLUSIONS: The proposed method overcomes one of the major limitations of CBCT C-arm systems by integrating vision-based tracking and expanding the imaging volume without any intraoperative use of calibration grids or external tracking systems. We believe this solution to be most appropriate for 3D intraoperative verification of several orthopedic procedures.
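
The stitching reduces to composing rigid transforms. A sketch with 4x4 homogeneous matrices, where the variable names (the two tracked camera poses and the offline camera-to-CBCT co-calibration X) are hypothetical stand-ins, not the paper's notation:

```python
import numpy as np

def volume_to_volume(T_scene_cam1, T_scene_cam2, X):
    """Rigid transform mapping points in CBCT volume 2 into volume 1.

    T_scene_cam*: 4x4 camera poses (camera -> scene) recovered by the
                  vision-based tracker at the two C-arm stations.
    X:            4x4 offline co-calibration (camera -> CBCT volume),
                  constant because the camera is rigidly mounted.
    """
    T_c1_c2 = np.linalg.inv(T_scene_cam1) @ T_scene_cam2  # relative camera motion
    return X @ T_c1_c2 @ np.linalg.inv(X)                 # conjugate into CBCT space
```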


Subject(s)
Cone-Beam Computed Tomography/methods; Imaging, Three-Dimensional/methods; Minimally Invasive Surgical Procedures/methods; Pattern Recognition, Automated/methods; Animals; Calibration; Cone-Beam Computed Tomography/instrumentation; Femur/diagnostic imaging; Femur/surgery; Fiducial Markers; Humans; Imaging, Three-Dimensional/instrumentation; Infrared Rays; Intraoperative Period; Minimally Invasive Surgical Procedures/instrumentation; Orthopedic Procedures; Phantoms, Imaging; Swine; Time Factors; Video Recording
5.
J Med Imaging (Bellingham) ; 5(2): 021205, 2018 Apr.
Article in English | MEDLINE | ID: mdl-29322072

ABSTRACT

Reproducibly achieving proper implant alignment is a critical step in total hip arthroplasty procedures that has been shown to substantially affect patient outcome. In current practice, correct alignment of the acetabular cup is verified in C-arm x-ray images that are acquired in an anterior-posterior (AP) view. Favorable surgical outcome is, therefore, heavily dependent on the surgeon's experience in understanding the 3-D orientation of a hemispheric implant from 2-D AP projection images. This work proposes an easy-to-use intraoperative component planning system based on two C-arm x-ray images combined with 3-D augmented reality (AR) visualization, which simplifies impactor and cup placement according to the plan by providing a real-time RGBD data overlay. We evaluate the feasibility of our system in a user study comprising four orthopedic surgeons at the Johns Hopkins Hospital and report errors in translation, anteversion, and abduction as low as 1.98 mm, 1.10 deg, and 0.53 deg, respectively. The promising performance of this AR solution shows that deploying this system could eliminate the need for excessive radiation, simplify the intervention, and enable reproducibly accurate placement of acetabular implants.
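
With two calibrated C-arm views, a planned 3-D point can be recovered from its two 2-D projections by linear (DLT) triangulation. A generic sketch, with the projection matrices assumed known from calibration (not taken from the paper):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two C-arm views.
    P1, P2: 3x4 X-ray projection matrices; x1, x2: 2D detector points (px)."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)   # least-squares null vector of A
    X = vt[-1]
    return X[:3] / X[3]           # dehomogenize
```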

6.
Healthc Technol Lett ; 4(5): 168-173, 2017 Oct.
Article in English | MEDLINE | ID: mdl-29184659

ABSTRACT

Orthopaedic surgeons still follow the decades-old workflow of using dozens of two-dimensional fluoroscopic images to drill through complex 3D structures, e.g. the pelvis. This Letter presents a mixed reality support system, which incorporates multi-modal data fusion and model-based surgical tool tracking to support screw placement in orthopaedic surgery. A red-green-blue-depth camera is rigidly attached to a mobile C-arm and is calibrated to the cone-beam computed tomography (CBCT) imaging space via the iterative closest point (ICP) algorithm. This allows real-time automatic fusion of reconstructed surfaces and/or 3D point clouds with synthetic fluoroscopic images obtained through CBCT imaging. An adapted 3D model-based tracking algorithm with automatic tool segmentation allows surgical tools to be tracked even when partially occluded by the hand. The proposed interactive 3D mixed reality environment provides an intuitive understanding of the surgical site and supports surgeons in quickly localising the entry point and orienting the surgical tool during screw placement. The authors validate the augmentation by measuring target registration error and also evaluate the tracking accuracy in the presence of partial occlusion.
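
A minimal point-to-point ICP in the spirit of the calibration step named above, using a KD-tree for correspondences and the SVD-based Kabsch solution for each rigid update; a generic sketch, not the authors' implementation:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, iters=50):
    """Point-to-point ICP aligning src (Nx3, e.g. RGBD surface) to dst
    (Mx3, e.g. surface extracted from the CBCT volume). Returns 4x4 T."""
    T = np.eye(4)
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)            # nearest-neighbor matches
        d = dst[idx]
        mu_s, mu_d = cur.mean(0), d.mean(0)
        H = (cur - mu_s).T @ (d - mu_d)     # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T                      # Kabsch rotation
        if np.linalg.det(R) < 0:            # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_d - R @ mu_s
        cur = cur @ R.T + t
        step = np.eye(4); step[:3, :3] = R; step[:3, 3] = t
        T = step @ T
    return T
```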

7.
Int J Comput Assist Radiol Surg ; 12(7): 1221-1230, 2017 Jul.
Article in English | MEDLINE | ID: mdl-28527025

ABSTRACT

PURPOSE: In minimally invasive interventions assisted by C-arm imaging, there is a demand to fuse the intra-interventional 2D C-arm image with pre-interventional 3D patient data to enable surgical guidance. The commonly used intensity-based 2D/3D registration has a limited capture range and is sensitive to initialization. We propose to utilize an opto/X-ray C-arm system which allows the registration to be maintained during the intervention by automating the re-initialization of the 2D/3D image registration. Consequently, the surgical workflow is not disrupted and the interaction time for manual initialization is eliminated. METHODS: We utilize two distinct vision-based tracking techniques to estimate the relative poses between different C-arm arrangements: (1) global tracking using fused depth information and (2) an RGBD SLAM system for surgical scene tracking. A highly accurate multi-view calibration between the RGBD and C-arm imaging devices is achieved using a custom-made multimodal calibration target. RESULTS: Several in vitro studies are conducted on a pelvis-femur phantom that is encased in gelatin and covered with drapes to simulate a clinically realistic scenario. The mean target registration errors (mTRE) for re-initialization using depth-only and RGB + depth tracking are 13.23 mm and 11.81 mm, respectively. 2D/3D registration yielded a 75% success rate using this automatic re-initialization, compared with only 23% using a random initialization. CONCLUSION: The pose-aware C-arm contributes to the 2D/3D registration process by globally re-initializing the relationship between the C-arm image and pre-interventional CT data. This system performs inside-out tracking, is self-contained, and does not require any external tracking devices.
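
The reported mTRE can be computed from an estimated and a ground-truth transform over a set of target points; a small generic sketch (the success threshold below is only a placeholder, since the paper's criterion is not stated here):

```python
import numpy as np

def mtre(T_est, T_gt, targets):
    """Mean target registration error: average distance between targets
    mapped by the estimated and ground-truth 3D transforms (both 4x4)."""
    pts = np.c_[targets, np.ones(len(targets))]          # homogeneous Nx4
    d = (pts @ T_est.T)[:, :3] - (pts @ T_gt.T)[:, :3]
    return np.linalg.norm(d, axis=1).mean()

# A registration attempt might be called "successful" when, e.g.,
# mtre(...) < 10.0 (mm) -- an illustrative threshold only.
```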


Subject(s)
Imaging, Three-Dimensional/methods; Minimally Invasive Surgical Procedures/methods; Calibration; Femur; Humans; Multimodal Imaging; Pelvis; Phantoms, Imaging
8.
Int J Comput Assist Radiol Surg ; 12(6): 1003-1011, 2017 Jun.
Article in English | MEDLINE | ID: mdl-28321804

ABSTRACT

PURPOSE: We present an evaluation of the reproducibility of measurements performed using robotic ultrasound imaging in comparison with expert-operated sonography. Robotic imaging may be a valuable contribution to interventional procedures, but requires reproducibility for acceptance in clinical routine. We study this by comparing repeated measurements based on robotic and expert-operated ultrasound imaging. METHODS: Robotic ultrasound acquisition is performed in three steps under user guidance: First, the patient is observed using a 3D camera on the robot end effector, and the user selects the region of interest. This allows automatic planning of the robot trajectory. Next, the robot executes a sweeping motion following the planned trajectory, during which the ultrasound images and tracking data are recorded. As the robot is compliant, deviations from the path are possible, for instance due to patient motion. Finally, the ultrasound slices are compounded to create a volume. Repeated acquisitions can be performed automatically by comparing the previous and current patient surface. RESULTS: After repeated image acquisitions, measurements based on acquisitions performed by the robotic system and the expert are compared. Within our case series, the expert measured the anterior-posterior, longitudinal, and transverse lengths of both the left and right thyroid lobes of each of 4 healthy volunteers 3 times, providing 72 measurements. Subsequently, the same procedure was performed using the robotic system, resulting in a cumulative total of 144 clinically relevant measurements. Our results clearly indicate that robotic ultrasound enables more repeatable measurements. CONCLUSIONS: A robotic ultrasound platform leads to more reproducible data, which is of crucial importance for planning and executing interventions.
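
Repeatability in a study like this can be summarized as the pooled within-subject standard deviation over the repeated measurements; a sketch with made-up numbers (not the study's data):

```python
import numpy as np

# Hypothetical repeated measurements (mm) of one thyroid-lobe length:
# 3 repetitions per volunteer, expert vs robot. NOT the study's data.
expert = np.array([[44.1, 46.8, 43.0], [51.2, 49.0, 52.5],
                   [47.3, 45.1, 48.9], [40.2, 42.8, 41.0]])
robot  = np.array([[44.9, 45.2, 44.7], [50.8, 51.1, 50.5],
                   [46.9, 47.2, 47.0], [41.1, 41.4, 40.9]])

def repeatability_sd(x):
    """Pooled within-subject SD across repetitions (lower = more repeatable)."""
    return np.sqrt(x.var(axis=1, ddof=1).mean())

print(f"expert SD: {repeatability_sd(expert):.2f} mm, "
      f"robot SD: {repeatability_sd(robot):.2f} mm")
```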


Subject(s)
Robotics/methods; Ultrasonography/methods; Humans; Reproducibility of Results; Robotic Surgical Procedures; Thyroid Gland/diagnostic imaging
9.
Int J Comput Assist Radiol Surg ; 12(6): 901-910, 2017 Jun.
Article in English | MEDLINE | ID: mdl-28343301

ABSTRACT

PURPOSE: Optical see-through head-mounted displays (OST-HMD) feature an unhindered and instantaneous view of the surgery site and can enable a mixed reality experience for surgeons during procedures. In this paper, we present a systematic approach to identify the criteria for evaluation of OST-HMD technologies for specific clinical scenarios, which benefit from using an object-anchored 2D-display visualizing medical information. METHODS: Criteria for evaluating the performance of OST-HMDs for visualization of medical information and its usage are identified and proposed. These include text readability, contrast perception, task load, frame rate, and system lag. We choose to compare three commercially available OST-HMDs, which are representatives of currently available head-mounted display technologies. A multi-user study and an offline experiment are conducted to evaluate their performance. RESULTS: Statistical analysis demonstrates that Microsoft HoloLens performs best among the three tested OST-HMDs, in terms of contrast perception, task load, and frame rate, while ODG R-7 offers similar text readability. The integration of indoor localization and fiducial tracking on the HoloLens provides significantly less system lag in a relatively motionless scenario. CONCLUSIONS: With ever more OST-HMDs appearing on the market, the proposed criteria could be used in the evaluation of their suitability for mixed reality surgical intervention. Currently, Microsoft HoloLens may be more suitable than ODG R-7 and Epson Moverio BT-200 for clinical usability in terms of the evaluated criteria. To the best of our knowledge, this is the first paper that presents a methodology and conducts experiments to evaluate and compare OST-HMDs for their use as object-anchored 2D-display during interventions.
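
Two of the proposed criteria, frame rate and system lag, reduce to simple timestamp arithmetic; a generic sketch with hypothetical timestamp arrays, not tied to any particular HMD SDK:

```python
import numpy as np

def frame_rate_hz(render_ts):
    """Mean frame rate from per-frame render timestamps (seconds)."""
    return 1.0 / np.diff(np.asarray(render_ts)).mean()

def mean_lag_s(event_ts, display_ts):
    """Mean end-to-end system lag: elapsed time from each tracked motion
    event to the frame on which the anchored 2D display reflects it."""
    return (np.asarray(display_ts) - np.asarray(event_ts)).mean()

print(frame_rate_hz([0.000, 0.017, 0.033, 0.050]))  # ~60 Hz
```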


Subject(s)
Data Display; Equipment Design; Orthopedic Procedures/instrumentation; User-Computer Interface; Computer Graphics; Head; Humans
10.
Int J Comput Assist Radiol Surg ; 12(7): 1211-1219, 2017 Jul.
Article in English | MEDLINE | ID: mdl-28343303

ABSTRACT

PURPOSE: Cone-Beam Computed Tomography (CBCT) is an important 3D imaging technology for orthopedic, trauma, radiotherapy, angiography, and dental applications. The major limitation of CBCT is poor image quality due to scattered radiation, truncation, and patient movement. In this work, we propose to incorporate information from a co-registered Red-Green-Blue-Depth (RGBD) sensor attached near the detector plane of the C-arm to improve the reconstruction quality and to correct for undesired rigid patient movement. METHODS: Calibration of the RGBD and C-arm imaging devices is performed in two steps: (i) calibration of the RGBD sensor and the X-ray source using a multimodal checkerboard pattern, and (ii) calibration of the RGBD surface reconstruction to the CBCT volume. The patient surface is acquired during the CBCT scan and then used as prior information for the reconstruction using Maximum-Likelihood Expectation-Maximization. An RGBD-based simultaneous localization and mapping method is utilized to estimate the rigid patient movement during scanning. RESULTS: Performance is quantified and demonstrated using artificial data and bone phantoms with and without metal implants. Finally, we present movement-corrected CBCT reconstructions based on RGBD data on an animal specimen, where the average voxel intensity difference reduces from 0.157 without correction to 0.022 with correction. CONCLUSION: This work investigated the advantages of a C-arm X-ray imaging system with an attached RGBD sensor. The experiments show the benefits of the opto/X-ray imaging system in: (i) improving the quality of reconstruction by incorporating the surface information of the patient, reducing streak artifacts as well as the number of required projections, and (ii) recovering the scanning trajectory for the reconstruction in the presence of undesired rigid patient movement.
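
The reconstruction step uses Maximum-Likelihood Expectation-Maximization; the generic MLEM iteration for a linear model y ≈ Ax is x ← x · Aᵀ(y / Ax) / Aᵀ1. A sketch with a toy system matrix (not the CBCT geometry, and without the paper's surface prior, which would additionally restrict the support of x):

```python
import numpy as np

def mlem(A, y, n_iter=50, x0=None):
    """Generic MLEM iterations for y ~ A @ x with nonnegative x."""
    x = np.ones(A.shape[1]) if x0 is None else x0.copy()
    sens = A.T @ np.ones(A.shape[0])          # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                          # forward projection
        x *= (A.T @ (y / np.maximum(proj, 1e-12))) / np.maximum(sens, 1e-12)
    return x

# Toy check: 4 detector bins, 3 voxels, noiseless data.
A = np.array([[1., 1., 0.], [0., 1., 1.], [1., 0., 1.], [1., 1., 1.]])
x_true = np.array([2., 1., 3.])
print(mlem(A, A @ x_true, 500))               # converges toward x_true
```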


Subject(s)
Cone-Beam Computed Tomography/methods; Imaging, Three-Dimensional/methods; Calibration; Humans; Phantoms, Imaging
11.
IEEE Trans Med Imaging ; 36(2): 538-548, 2017 Feb.
Article in English | MEDLINE | ID: mdl-27831861

ABSTRACT

Robotic ultrasound has the potential to assist and guide physicians during interventions. In this work, we present a set of methods and a workflow to enable autonomous MRI-guided ultrasound acquisitions. Our approach uses a structured-light 3D scanner for patient-to-robot and image-to-patient calibration, which in turn is used to plan 3D ultrasound trajectories. These MRI-based trajectories are followed autonomously by the robot and are further refined online using automatic MRI/US registration. Despite the low spatial resolution of structured-light scanners, the initial planned acquisition path can be followed with an accuracy of 2.46 ± 0.96 mm. This leads to a good initialization of the MRI/US registration: the 3D-scan-based alignment for planning and acquisition shows an accuracy (distance between planned ultrasound and MRI) of 4.47 mm, which improves to 0.97 mm after an online update of the calibration based on a closed-loop registration.
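
Planning a 3D ultrasound trajectory from a surface scan amounts to turning surface points and normals into probe poses; a sketch with hypothetical inputs (p, n, sweep_dir), not the authors' planner:

```python
import numpy as np

def probe_pose(p, n, sweep_dir):
    """4x4 probe pose: origin at surface point p, z-axis along the inward
    surface normal (-n), x-axis along the sweep direction projected onto
    the tangent plane. Assumes sweep_dir is not parallel to the normal."""
    z = -n / np.linalg.norm(n)                  # press into the surface
    x = sweep_dir - np.dot(sweep_dir, z) * z    # tangential component
    x /= np.linalg.norm(x)
    y = np.cross(z, x)
    T = np.eye(4)
    T[:3, 0], T[:3, 1], T[:3, 2], T[:3, 3] = x, y, z, p
    return T
```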


Subject(s)
Magnetic Resonance Imaging; Ultrasonography; Feasibility Studies; Humans; Imaging, Three-Dimensional; Robotics
12.
Int J Comput Assist Radiol Surg ; 11(6): 967-75, 2016 Jun.
Article in English | MEDLINE | ID: mdl-27059022

ABSTRACT

PURPOSE: This work proposes a novel algorithm to register cone-beam computed tomography (CBCT) volumes and 3D optical (RGBD) camera views. The co-registered real-time RGBD camera and CBCT imaging enable a novel augmented reality solution for orthopedic surgeries, which allows arbitrary views using digitally reconstructed radiographs overlaid on the patient's reconstructed surface without the need to move the C-arm. METHODS: An RGBD camera is rigidly mounted on the C-arm near the detector. We introduce a calibration method based on the simultaneous reconstruction of the surface and the CBCT scan of an object. The transformation between the two coordinate spaces is recovered using Fast Point Feature Histogram descriptors and the Iterative Closest Point algorithm. RESULTS: Several experiments are performed to assess the repeatability and the accuracy of this method. Target registration error is measured on multiple visual and radio-opaque landmarks to evaluate the accuracy of the registration. Mixed reality visualizations from arbitrary angles are also presented for simulated orthopedic surgeries. CONCLUSION: To the best of our knowledge, this is the first calibration method which uses only tomographic and RGBD reconstructions, meaning that it does not impose a particular shape on the phantom. We demonstrate a marker-less calibration of CBCT volumes and 3D depth cameras, achieving reasonable registration accuracy. This design requires a one-time factory calibration, is self-contained, and could be integrated into existing mobile C-arms to provide real-time augmented reality views from arbitrary angles.
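
The described pipeline, FPFH descriptors for a coarse match followed by ICP refinement, maps closely onto Open3D's registration module; a sketch assuming a recent Open3D (>= 0.13) API, not the authors' code:

```python
import open3d as o3d

def coarse_then_fine(src_pcd, dst_pcd, voxel=3.0):
    """FPFH + RANSAC coarse alignment followed by point-to-plane ICP."""
    reg = o3d.pipelines.registration
    def preprocess(p):
        q = p.voxel_down_sample(voxel)
        q.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel, max_nn=30))
        f = reg.compute_fpfh_feature(
            q, o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=100))
        return q, f
    s, sf = preprocess(src_pcd)
    d, df = preprocess(dst_pcd)
    coarse = reg.registration_ransac_based_on_feature_matching(
        s, d, sf, df, True, 1.5 * voxel,
        reg.TransformationEstimationPointToPoint(False), 3,
        [reg.CorrespondenceCheckerBasedOnDistance(1.5 * voxel)],
        reg.RANSACConvergenceCriteria(100000, 0.999))
    fine = reg.registration_icp(s, d, voxel, coarse.transformation,
                                reg.TransformationEstimationPointToPlane())
    return fine.transformation
```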


Subject(s)
Algorithms; Cone-Beam Computed Tomography/methods; Imaging, Three-Dimensional; Monitoring, Intraoperative/methods; Phantoms, Imaging; Calibration; Humans; Reproducibility of Results
13.
Int J Comput Assist Radiol Surg ; 11(6): 1173-81, 2016 Jun.
Article in English | MEDLINE | ID: mdl-27097600

ABSTRACT

PURPOSE: Precise needle placement is an important task during several medical procedures. Ultrasound imaging is often used to guide the needle toward the target region in soft tissue. This task remains challenging due to the dependence on image quality and the operator, the limited field of view, a moving target, and a moving needle. In this paper, we present a novel dual-robot framework for robotic needle insertion under robotic ultrasound guidance. METHOD: We integrated force-controlled ultrasound image acquisition, registration of preoperative and intraoperative images, vision-based robot control, and target localization, in combination with a novel needle tracking algorithm. The framework allows robotic needle insertion to target a preoperatively defined region of interest while enabling real-time visualization and adaptive trajectory planning to provide safe and quick interactions. We assessed the framework by considering both static and moving targets embedded in water and tissue-mimicking gelatin. RESULTS: The presented dual-robot tracking algorithms allow for accurate needle placement, targeting the region of interest with an error of around 1 mm. CONCLUSION: To the best of our knowledge, this is the first use of two independent robots, one for imaging and one for needle insertion, that are simultaneously controlled using image processing algorithms. Experimental results show the feasibility and demonstrate the accuracy and robustness of the process.
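
Closing the loop between a tracked needle tip and a target in the ultrasound image can be sketched as a proportional visual-servoing law with a safety clip on speed; the names and gains below are illustrative, not the paper's controller:

```python
import numpy as np

def servo_step(tip_xy, target_xy, gain=0.5, v_max=2.0):
    """One proportional visual-servoing update in the US image plane.
    tip_xy: tracked needle tip (mm); target_xy: goal (mm).
    Returns a velocity command (mm/s), clipped to v_max for safety."""
    err = np.asarray(target_xy) - np.asarray(tip_xy)
    v = gain * err
    speed = np.linalg.norm(v)
    return v if speed <= v_max else v * (v_max / speed)
```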


Subject(s)
Algorithms; Robotic Surgical Procedures/methods; Surgery, Computer-Assisted/methods; Equipment Design; Humans; Image Processing, Computer-Assisted; Needles; Phantoms, Imaging; Software; Software Design; Ultrasonography/methods
14.
Int J Comput Assist Radiol Surg ; 11(6): 1007-14, 2016 Jun.
Article in English | MEDLINE | ID: mdl-26995603

ABSTRACT

PURPOSE: In many orthopedic surgeries, there is a demand for correctly placing medical instruments (e.g., K-wire or drill) to perform bone fracture repairs. The main challenge is the mental alignment of X-ray images acquired using a C-arm, the medical instruments, and the patient, which increases dramatically in complexity during pelvic surgeries. Current solutions include the continuous acquisition of many intra-operative X-ray images from various views, which results in high radiation exposure, long surgical durations, and significant effort and frustration for the surgical staff. This work conducts a preclinical usability study to test and evaluate mixed reality visualization techniques using intra-operative X-ray, optical, and RGBD imaging to augment the surgeon's view and assist accurate placement of tools. METHOD: We design and perform a usability study to compare the performance of surgeons and their task load using three different mixed reality systems during K-wire placement. The three systems are interventional X-ray imaging, X-ray augmentation on 2D video, and 3D surface reconstruction augmented by digitally reconstructed radiographs and live tool visualization. RESULTS: The evaluation criteria include duration, number of X-ray images acquired, placement accuracy, and surgical task load, observed during 21 clinically relevant interventions performed by surgeons on phantoms. Finally, we test for statistically significant improvements and show that the mixed reality visualization leads to significantly improved efficiency. CONCLUSION: The 3D visualization of patient, tool, and DRR shows clear advantages over conventional X-ray imaging and provides intuitive feedback for placing medical tools correctly and efficiently.
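
The abstract does not name the statistical tests applied; comparing an outcome such as placement error across the three systems could, for example, use a Kruskal-Wallis test. A sketch with illustrative numbers (7 interventions per system, matching the 21 reported):

```python
from scipy import stats

# Illustrative placement errors (mm) under the three systems; NOT study data.
xray_only  = [4.2, 5.1, 3.8, 4.9, 5.6, 4.4, 3.9]
video_ar   = [3.1, 2.8, 3.6, 2.9, 3.3, 3.0, 2.7]
surface_ar = [2.2, 1.9, 2.5, 2.1, 2.4, 1.8, 2.0]

# Kruskal-Wallis: non-parametric test for a difference among the 3 groups.
h, p = stats.kruskal(xray_only, video_ar, surface_ar)
print(f"H = {h:.2f}, p = {p:.4f}")
```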


Subject(s)
Bone Wires; Fracture Fixation, Internal/methods; Fractures, Bone/surgery; Pelvic Bones/surgery; Phantoms, Imaging; Radiography, Interventional/methods; Tomography, X-Ray Computed/methods; Fractures, Bone/diagnosis; Humans; Imaging, Three-Dimensional/methods; Pelvic Bones/diagnostic imaging; Pelvic Bones/injuries
15.
IEEE Trans Med Imaging ; 35(3): 830-8, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26561283

ABSTRACT

In this paper we present the use of a drop-in gamma probe for intra-operative Single-Photon Emission Computed Tomography (SPECT) imaging in the context of minimally invasive robot-assisted interventions. The probe is designed to be inserted into and reside within the abdominal cavity during the intervention. It is grasped during the procedure by a robotic laparoscopic gripper, enabling full six-degrees-of-freedom handling by the surgeon. We demonstrate the first deployment of the tracked probe for intra-operative in-patient robotic SPECT enabling augmented-reality image guidance. The hybrid mechanical- and image-based in-patient probe tracking is shown to have an accuracy of 0.2 mm. The overall system performance is evaluated and tested with a phantom for gynecological sentinel lymph node interventions and compared to ground-truth data, yielding a mean reconstruction accuracy of 0.67 mm.
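
One naive way to turn tracked probe poses and count readings into an activity map is distance-weighted accumulation on a voxel grid; this sketch is a deliberate simplification for illustration, not the paper's reconstruction method:

```python
import numpy as np

def backproject(poses, counts, grid_shape=(40, 40, 40), voxel=2.0, sigma=6.0):
    """Naive tracked-probe activity map: distribute each count reading over
    voxels near the probe tip, weighted by a Gaussian of distance.
    poses: sequence of 4x4 probe poses (tip at the translation part);
    counts: matching sequence of detector readings."""
    acc = np.zeros(grid_shape)
    wsum = np.zeros(grid_shape)
    centers = np.indices(grid_shape).reshape(3, -1).T * voxel  # voxel centers (mm)
    for T, c in zip(poses, counts):
        d = np.linalg.norm(centers - T[:3, 3], axis=1)
        w = np.exp(-0.5 * (d / sigma) ** 2).reshape(grid_shape)
        acc += w * c
        wsum += w
    return acc / np.maximum(wsum, 1e-9)   # weight-normalized activity
```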


Subject(s)
Lymph Nodes/diagnostic imaging; Robotic Surgical Procedures/methods; Sentinel Lymph Node Biopsy/methods; Tomography, Emission-Computed, Single-Photon/methods; Equipment Design; Humans; Lymph Nodes/surgery; Minimally Invasive Surgical Procedures; Phantoms, Imaging; Robotic Surgical Procedures/instrumentation; Sentinel Lymph Node Biopsy/instrumentation; Tomography, Emission-Computed, Single-Photon/instrumentation
16.
IEEE Trans Med Imaging ; 34(2): 599-607, 2015 Feb.
Article in English | MEDLINE | ID: mdl-25343757

ABSTRACT

This paper presents an approach to predict the deformation of the lungs and surrounding organs during respiration. The framework incorporates a computational model of the respiratory system, which comprises an anatomical model extracted from computed tomography (CT) images at end-expiration (EE), and a biomechanical model of the respiratory physiology, including the material behavior and interactions between organs. A personalization step is performed to automatically estimate patient-specific thoracic pressure, which drives the biomechanical model. The zone-wise pressure values are obtained using a trust-region optimizer, in which the estimated motion is compared to CT images at end-inspiration (EI). A detailed convergence analysis in terms of mesh resolution, time stepping, and number of pressure zones on the surface of the thoracic cavity is carried out. The method is then tested on five public datasets. Results show that the model is able to predict the respiratory motion with an average landmark error of 3.40 ± 1.0 mm over the entire respiratory cycle. The estimated 3-D lung motion may serve as an advanced 3-D surrogate for more accurate medical image reconstruction and patient respiratory analysis.
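
The personalization step is essentially a trust-region least-squares fit of zone-wise pressures to the observed end-inspiration landmarks; a sketch using SciPy's trust-region-reflective solver, with `simulate` standing in for the biomechanical model (both names are assumptions, not the paper's code):

```python
import numpy as np
from scipy.optimize import least_squares

def personalize_pressures(simulate, landmarks_ei, p0):
    """Estimate zone-wise thoracic pressures with a trust-region solver.
    simulate(p): runs the biomechanical model and returns predicted
                 end-inspiration landmark positions (Nx3) for pressures p.
    landmarks_ei: observed EI landmark positions (Nx3, from CT)."""
    def residual(p):
        return (simulate(p) - landmarks_ei).ravel()   # mm errors, flattened
    res = least_squares(residual, p0, method="trf")   # trust-region reflective
    return res.x
```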


Subject(s)
Biomechanical Phenomena/physiology; Four-Dimensional Computed Tomography/methods; Image Processing, Computer-Assisted/methods; Lung; Respiration; Computer Simulation; Humans; Lung/anatomy & histology; Lung/physiology; Models, Biological; Movement; Precision Medicine/methods
17.
Med Image Anal ; 18(8): 1312-9, 2014 Dec.
Article in English | MEDLINE | ID: mdl-24842859

ABSTRACT

To enable image-guided neurosurgery, the alignment of pre-interventional magnetic resonance imaging (MRI) and intra-operative ultrasound (US) is commonly required. We present two automatic image registration algorithms using the similarity measure Linear Correlation of Linear Combination (LC(2)) to align either freehand US slices or US volumes with MRI images. Both approaches allow an automatic and robust registration, while the three-dimensional method yields a significantly improved percentage of optimally aligned registrations for randomly chosen clinically relevant initializations. This study presents a detailed description of the methodology and an extensive evaluation showing an accuracy of 2.51 mm, precision of 0.85 mm, and capture range of 15 mm (>95% convergence) using 14 clinical neurosurgical cases.
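
On a single patch, LC(2) measures how much of the US intensity variance is explained by a least-squares fit to MRI intensity, MRI gradient magnitude, and a constant; the full measure is a patch-variance-weighted sum of such terms. A minimal sketch of the per-patch computation:

```python
import numpy as np

def lc2_patch(us, mri, grad):
    """LC2 on one patch: fraction of US intensity variance explained by a
    linear combination of MRI intensity, MRI gradient magnitude, and 1."""
    u = us.ravel()
    M = np.c_[mri.ravel(), grad.ravel(), np.ones(u.size)]
    coef, *_ = np.linalg.lstsq(M, u, rcond=None)   # least-squares fit
    var_u = u.var()
    if var_u < 1e-12:                               # flat US patch: no signal
        return 0.0
    resid = u - M @ coef
    return 1.0 - resid.var() / var_u                # 1 = perfectly explained
```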


Subject(s)
Brain Neoplasms/surgery; Echoencephalography/methods; Image Interpretation, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Neurosurgical Procedures/methods; Subtraction Technique; Surgery, Computer-Assisted/methods; Artificial Intelligence; Brain Neoplasms/diagnosis; Humans; Image Enhancement/methods; Imaging, Three-Dimensional/methods; Multimodal Imaging/methods; Pattern Recognition, Automated/methods; Reproducibility of Results; Sensitivity and Specificity
18.
Article in English | MEDLINE | ID: mdl-24505646

ABSTRACT

Automatic and robust registration of pre-operative magnetic resonance imaging (MRI) and intra-operative ultrasound (US) is essential to neurosurgery. We reformulate and extend an approach which uses a Linear Correlation of Linear Combination (LC2)-based similarity metric, yielding a novel algorithm which allows for fully automatic US-MRI registration in a matter of seconds. It is invariant with respect to the unknown and locally varying relationship between US image intensities and both MRI intensity and its gradient. The overall method recovers both the global rigid alignment and the parameters of a free-form deformation (FFD) model. The algorithm is evaluated on 14 clinical neurosurgical cases with tumors, with an average landmark-based error of 2.52 mm for the rigid transformation. In addition, we systematically study the accuracy, precision, and capture range of the algorithm, as well as its sensitivity to different choices of parameters.
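
The rigid stage can be sketched as derivative-free maximization of LC2 over six pose parameters; Powell's method below stands in for whatever optimizer the authors actually used, and `lc2` is a similarity function such as the patchwise sketch shown under item 17:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.ndimage import affine_transform
from scipy.spatial.transform import Rotation

def rigid_lc2_registration(us_vol, mri_vol, mri_grad, lc2):
    """Recover 6 rigid parameters (3 Euler angles in radians, 3 translations
    in voxels) maximizing LC2 between the US volume and the resampled MRI."""
    def cost(params):
        R = Rotation.from_euler("xyz", params[:3]).as_matrix()
        # Resample MRI and its gradient magnitude under the candidate pose.
        m = affine_transform(mri_vol, R, offset=params[3:])
        g = affine_transform(mri_grad, R, offset=params[3:])
        return -lc2(us_vol, m, g)                 # minimize negative similarity
    res = minimize(cost, np.zeros(6), method="Powell")  # derivative-free
    return res.x
```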


Subject(s)
Brain Neoplasms/diagnosis; Brain Neoplasms/surgery; Echoencephalography/methods; Magnetic Resonance Imaging/methods; Neurosurgical Procedures/methods; Surgery, Computer-Assisted/methods; Ultrasonography/methods; Algorithms; Humans; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Pattern Recognition, Automated/methods; Reproducibility of Results; Sensitivity and Specificity; Subtraction Technique