1.
Comput Assist Surg (Abingdon) ; 28(1): 2275522, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37942523

ABSTRACT

A system for performance assessment and quality assurance (QA) of surgical trackers is reported, based on principles of geometric accuracy and statistical process control (SPC) for routine longitudinal testing. A simple QA test phantom was designed, in which the number and distribution of registration fiducials were determined by drawing from analytical models for target registration error (TRE). A tracker testbed was configured with open-source software for measurement of a TRE-based accuracy metric ε and Jitter (J). Six trackers were tested: 2 electromagnetic (EM - Aurora) and 4 infrared (IR - 1 Spectra, 1 Vega, and 2 Vicra), all from NDI (Waterloo, ON). Phase I SPC analysis of the Shewhart mean (x̄) and standard deviation (s) determined system control limits. Phase II involved weekly QA of each system for up to 32 weeks and identified Pass, Note, Alert, and Failure action rules. The process permitted QA in <1 min. Phase I control limits were established for all trackers: EM trackers exhibited higher upper control limits than IR trackers in ε (EM: x̄ε ∼2.8-3.3 mm; IR: x̄ε ∼1.6-2.0 mm) and Jitter (EM: x̄jitter ∼0.30-0.33 mm; IR: x̄jitter ∼0.08-0.10 mm), and older trackers showed evidence of degradation - e.g., higher Jitter for the older Vicra (p < .05). Phase II longitudinal tests yielded 676 outcomes, in which a total of 4 Failures were noted - 3 resolved by intervention (metal interference for EM trackers) and 1 owing to restrictive control limits for a new system (Vega). Weekly tests also yielded 40 Notes and 16 Alerts, each spontaneously resolved in subsequent monitoring.


Subject(s)
Surgery, Computer-Assisted , Humans , Phantoms, Imaging , Software
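The Phase I control-limit computation in the abstract above can be sketched as follows. This is a minimal illustration assuming per-session repeated measurements and a plain 3σ rule on the x̄ chart; the paper's Note/Alert run rules and tabulated chart constants (e.g., A3) are not reproduced, and the function and variable names are hypothetical:

```python
import numpy as np

def shewhart_limits(phase1, n_sigma=3.0):
    """Phase I SPC: estimate center line and control limits for an
    x-bar chart from repeated QA sessions.

    phase1: array of shape (sessions, repeats) of a QA metric, e.g.
    a TRE-based accuracy metric or jitter. Uses a simple 3-sigma rule
    rather than tabulated chart constants.
    """
    xbar = phase1.mean(axis=1)                  # per-session means
    center = xbar.mean()                        # grand mean (center line)
    sbar = phase1.std(axis=1, ddof=1).mean()    # mean within-session std
    half_width = n_sigma * sbar / np.sqrt(phase1.shape[1])
    return center, center - half_width, center + half_width

def classify(value, lcl, ucl):
    """Simplified Phase II action rule: Pass inside the limits, Failure
    outside. The abstract's Note/Alert rules would add run-based checks
    over consecutive sessions."""
    return "Pass" if lcl <= value <= ucl else "Failure"
```

In use, `shewhart_limits` would be fit once on the Phase I data for each tracker, and each weekly Phase II measurement would then be classified against the frozen limits.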
2.
Article in English | MEDLINE | ID: mdl-37937146

ABSTRACT

Purpose: Cone-beam CT (CBCT) is widespread in abdominal interventional imaging, but its long acquisition time makes it susceptible to patient motion. Image-based autofocus has shown success in CBCT deformable motion compensation, via deep autofocus metrics and multi-region optimization, but it is challenged by the large parameter dimensionality required to capture intricate motion trajectories. This work leverages the differentiable nature of deep autofocus metrics to build a novel optimization strategy, Multi-Stage Adaptive Spline Autofocus (MASA), for compensation of complex deformable motion in abdominal CBCT. Methods: MASA poses the autofocus problem as a multi-stage adaptive sampling of the motion trajectory, represented in a Hermite spline basis with variable amplitude and knot temporal positioning. The adaptive method permits simultaneous optimization of the sampling phase, local temporal sampling density, and time-dependent amplitude of the motion trajectory. The optimization is performed in a multi-stage schedule with an increasing number of knots that progressively accommodates complex trajectories in late stages, preconditioned by coarser components from early stages, with minimal increase in dimensionality. MASA was evaluated in controlled simulation experiments with two types of motion trajectories: i) combinations of slow drifts with sudden jerk (sigmoid) motion; and ii) combinations of periodic motion sources of varying frequency into multi-frequency trajectories. Further validation was obtained in clinical data from liver CBCT featuring motion of contrast-enhanced vessels and soft-tissue structures. Results: The adaptive sampling strategy provided successful motion compensation in sigmoid trajectories compared to fixed sampling strategies (mean SSIM increase of 0.026 vs. 0.011). Inspection of the estimated motion showed the capability of MASA to automatically allocate larger sampling density to parts of the scan timeline featuring sudden motion, effectively accommodating complex motion without increasing the problem dimension. Experiments on multi-frequency trajectories with 3-stage MASA (5, 10, and 15 knots) yielded a twofold SSIM increase compared to single-stage autofocus with 15 knots (0.076 vs. 0.040). Application of MASA to clinical datasets resulted in simultaneous improvement in the delineation of both contrast-enhanced vessels and soft-tissue structures in the liver. Conclusion: A new autofocus framework, MASA, was developed, including a novel multi-stage technique for adaptive temporal sampling of the motion trajectory in combination with fully differentiable deep autofocus metrics. This adaptive sampling approach is a crucial step toward application of deformable motion compensation to complex temporal motion trajectories.
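The Hermite-spline trajectory parameterization described above can be sketched as a plain NumPy evaluation; a minimal one-component illustration assuming unit-spaced knot parameters (the multi-stage knot schedule and autofocus optimization are not reproduced, and all names are hypothetical):

```python
import numpy as np

def hermite_trajectory(knot_t, knot_a, knot_m, t):
    """Evaluate a cubic Hermite spline motion trajectory.

    knot_t: strictly increasing knot times; knot_a: amplitudes at the
    knots; knot_m: slopes at the knots; t: evaluation times (e.g.
    projection timestamps). Varying knot_t changes the local temporal
    sampling density; varying knot_a changes the amplitude.
    """
    knot_t, knot_a, knot_m = map(np.asarray, (knot_t, knot_a, knot_m))
    t = np.asarray(t, dtype=float)
    # index of the interval containing each evaluation time
    i = np.clip(np.searchsorted(knot_t, t, side="right") - 1, 0, len(knot_t) - 2)
    h = knot_t[i + 1] - knot_t[i]
    u = (t - knot_t[i]) / h
    # cubic Hermite basis functions on [0, 1]
    h00 = 2*u**3 - 3*u**2 + 1
    h10 = u**3 - 2*u**2 + u
    h01 = -2*u**3 + 3*u**2
    h11 = u**3 - u**2
    return (h00*knot_a[i] + h10*h*knot_m[i]
            + h01*knot_a[i + 1] + h11*h*knot_m[i + 1])
```

A multi-stage schedule in the spirit of the abstract would fit a coarse knot set first, then re-sample the fitted trajectory at a denser knot set to initialize the next stage.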

3.
Phys Med Biol ; 68(21)2023 10 18.
Article in English | MEDLINE | ID: mdl-37774711

ABSTRACT

Objective. Surgical guidewires are commonly used in placing fixation implants to stabilize fractures. Accurate positioning of these instruments is challenged by difficulties in 3D reckoning from 2D fluoroscopy. This work aims to enhance accuracy and reduce exposure times by providing 3D navigation for guidewire placement from as few as two fluoroscopic images. Approach. Our approach combines machine learning-based segmentation with the geometric model of the imager to determine the 3D poses of guidewires. Instrument tips are encoded as individual keypoints, and the segmentation masks are processed to estimate the trajectory. Correspondence between detections in multiple views is established using the pre-calibrated system geometry, and the corresponding features are backprojected to obtain the 3D pose. Guidewire 3D directions were computed using both an analytical and an optimization-based method. The complete approach was evaluated in cadaveric specimens with respect to potential confounding effects from the imaging geometry and radiographic scene clutter due to other instruments. Main results. The detection network identified the guidewire tips within 2.2 mm and guidewire directions within 1.1°, in 2D detector coordinates. Feature correspondence rejected false detections, particularly in images with other instruments, to achieve 83% precision and 90% recall. Estimating the 3D direction via numerical optimization showed added robustness for guidewires aligned with the gantry rotation plane. Guidewire tips and directions were localized in 3D world coordinates with a median accuracy of 1.8 mm and 2.7°, respectively. Significance. The paper reports a new method for automatic 2D detection and 3D localization of guidewires from pairs of fluoroscopic images. Localized guidewires can be virtually overlaid on the patient's pre-operative 3D scan during the intervention. Accurate pose determination for multiple guidewires from two images offers the potential to reduce radiation dose by minimizing the need for repeated imaging and provides quantitative feedback prior to implant placement.


Subject(s)
Fractures, Bone , Orthopedic Procedures , Surgery, Computer-Assisted , Humans , Orthopedic Procedures/methods , Surgery, Computer-Assisted/methods , Fractures, Bone/surgery , Fluoroscopy/methods , Imaging, Three-Dimensional/methods
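The backprojection step in the abstract above (obtaining a 3D point from corresponding 2D detections in two calibrated views) can be illustrated with the standard midpoint method for two rays; a minimal sketch under the assumption that each detection has already been converted to a unit ray direction from its x-ray source, not the paper's exact implementation:

```python
import numpy as np

def triangulate_rays(o1, d1, o2, d2):
    """Closest-point (midpoint) triangulation of two backprojected rays.

    o1, o2: ray origins (x-ray source positions); d1, d2: unit direction
    vectors from each source through the 2D detection on its detector.
    Returns the midpoint of the shortest segment joining the rays.
    """
    w0 = o1 - o2
    b = d1 @ d2                     # cosine between the two rays
    d = d1 @ w0
    e = d2 @ w0
    denom = 1.0 - b * b             # ~0 only for (near-)parallel rays
    t1 = (b * e - d) / denom        # parameter along ray 1
    t2 = (e - b * d) / denom        # parameter along ray 2
    p1 = o1 + t1 * d1               # closest point on ray 1
    p2 = o2 + t2 * d2               # closest point on ray 2
    return 0.5 * (p1 + p2)          # midpoint estimate of the 3D tip
```

For noisy detections the two rays do not intersect exactly, and the length of the segment p1-p2 gives a useful consistency measure for rejecting false correspondences.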
4.
Article in English | MEDLINE | ID: mdl-37143861

ABSTRACT

Purpose: Existing methods to improve the accuracy of tibiofibular joint reduction present workflow challenges, high radiation exposure, and a lack of accuracy and precision, leading to poor surgical outcomes. To address these limitations, we propose a method to perform robot-assisted joint reduction using intraoperative imaging to align the dislocated fibula to a target pose relative to the tibia. Methods: The approach (1) localizes the robot via 3D-2D registration of a custom plate adapter attached to its end effector, (2) localizes the tibia and fibula using multi-body 3D-2D registration, and (3) drives the robot to reduce the dislocated fibula according to the target plan. The custom robot adapter was designed to interface directly with the fibular plate while presenting radiographic features to aid registration. Registration accuracy was evaluated on a cadaveric ankle specimen, and the feasibility of robotic guidance was assessed by manipulating a dislocated fibula in a cadaver ankle. Results: Using standard AP and mortise radiographic views, registration errors were measured to be less than 1 mm and 1° for the robot adapter and the ankle bones. Experiments in a cadaveric specimen revealed up to 4 mm deviations from the intended path, which were reduced to <2 mm using corrective actions guided by intraoperative imaging and 3D-2D registration. Conclusions: Preclinical studies suggest that significant robot flex and tibial motion occur during fibula manipulation, motivating the use of the proposed method to dynamically correct the robot trajectory. Accurate robot registration was achieved via the use of fiducials embedded within the custom design. Future work will evaluate the approach on a custom radiolucent robot design currently under construction and verify the solution on additional cadaveric specimens.

5.
Article in English | MEDLINE | ID: mdl-38226358

ABSTRACT

Purpose: To advance the development of radiomic models of bone quality using the recently introduced Ultra-High Resolution CT (UHR CT), we investigate the inter-scan reproducibility of trabecular bone texture features with respect to spatially-variant azimuthal and radial blurs associated with focal spot elongation and gantry rotation. Methods: The UHR CT system features 250×250 µm detector pixels and an x-ray source with a 0.4×0.5 mm focal spot. Visualization of details down to ~150 µm has been reported for this device. A cadaveric femur was imaged on UHR CT at three radial locations within the field-of-view: 0 cm (isocenter), 9 cm from the isocenter, and 18 cm from the isocenter; the non-stationary blurs are expected to worsen with increasing radial displacement. Gray-level co-occurrence (GLCM) and gray-level run length (GLRLM) texture features were extracted from 237 trabecular regions of interest (ROIs, 5 cm diameter) placed at corresponding locations in the femoral head in scans obtained at the different shifts. We evaluated the concordance correlation coefficient (CCC) between texture features at 0 cm (reference) and at 9 cm and 18 cm. We also investigated whether the spatially-variant blurs affect K-means clustering of trabecular bone ROIs based on their texture features. Results: The average CCCs (against the 0 cm reference) for GLCM and GLRLM features were ~0.7 at 9 cm. At 18 cm, the average CCCs were reduced to ~0.17 for GLCM and ~0.26 for GLRLM. The non-stationary blurs are incorporated in radiomic features of cancellous bone, leading to inconsistencies in clustering of trabecular ROIs between different radial locations: the intersection-over-union overlap of corresponding (most similar) clusters between the 0 cm and 9 cm shifts was >70%, but dropped to <60% for the majority of corresponding clusters between the 0 cm and 18 cm shifts. Conclusion: Non-stationary CT system blurs reduce the inter-scan reproducibility of texture features of trabecular bone in UHR CT, especially for locations >15 cm from the isocenter. Radiomic models of bone quality derived from UHR CT measurements at isocenter might need to be revised before application in peripheral body sites such as the hips.
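The reproducibility measure used above, Lin's concordance correlation coefficient, has a compact closed form; a minimal sketch (the function name and the population-variance convention are illustrative choices, not taken from the paper):

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient between paired
    measurements, e.g. a texture feature at the reference position (x)
    and at a radially shifted position (y). 1 = perfect inter-scan
    reproducibility; both bias and decorrelation reduce the value."""
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return 2.0 * cov / (x.var() + y.var() + (mx - my) ** 2)
```

Unlike the Pearson correlation, the denominator term (mx - my)² penalizes a systematic offset between the two scans, which is exactly the kind of blur-induced shift investigated in the abstract.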

6.
Article in English | MEDLINE | ID: mdl-36381251

ABSTRACT

Cone-beam CT (CBCT) is widely used for guidance in interventional radiology, but it is susceptible to motion artifacts. Motion in interventional CBCT features a complex combination of diverse sources, including quasi-periodic, consistent motion patterns such as respiratory motion, and aperiodic, quasi-random motion such as peristalsis. Recent developments in image-based motion compensation methods include approaches that combine autofocus techniques with deep learning models for extraction of image features pertinent to CBCT motion. Training of such deep autofocus models requires the generation of large amounts of realistic, motion-corrupted CBCT. Previous works on motion simulation were mostly focused on quasi-periodic motion patterns, and reliable simulation of complex combined motion with quasi-random components remains an unaddressed challenge. This work presents a framework aimed at synthesis of realistic motion trajectories for simulation of deformable motion in soft-tissue CBCT. The approach leveraged the capability of conditional generative adversarial network (GAN) models to learn the complex underlying motion present in unlabeled, motion-corrupted CBCT volumes. The approach is designed for training with unpaired clinical CBCT in an unsupervised fashion. This work presents a first feasibility study, in which the model was trained with simulated data featuring known motion, providing a controlled scenario for validation of the proposed approach prior to extension to clinical data. Our proof-of-concept study illustrated the potential of the model to generate realistic, variable simulation of CBCT deformable motion fields, consistent with three trends underlying the designed training data: i) the synthetic motion induced only diffeomorphic deformations, with Jacobian determinant larger than zero; ii) the synthetic motion showed a median displacement of 0.5 mm in regions predominantly static in the training data (e.g., the posterior aspect of the patient lying supine), compared to a median displacement of 3.8 mm in regions more prone to motion; and iii) the synthetic motion exhibited predominant directionality consistent with the training set, resulting in larger motion in the superior-inferior direction (median and maximum amplitude of 4.58 mm and 20 mm, >2x larger than the two remaining directions). Together, these results show the feasibility of realistic motion simulation and synthesis of variable CBCT data with the proposed framework.
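The diffeomorphism check mentioned in trend i) above (Jacobian determinant larger than zero everywhere) can be computed directly from a discrete displacement field; a minimal finite-difference sketch assuming unit voxel spacing, not the paper's implementation:

```python
import numpy as np

def jacobian_determinant(disp):
    """Voxel-wise Jacobian determinant of the mapping phi(x) = x + u(x)
    for a displacement field disp of shape (3, D, H, W). Determinants
    > 0 everywhere indicate a locally invertible (diffeomorphic)
    deformation; unit voxel spacing is assumed."""
    # grads[c, a] = d u_c / d x_a via central finite differences
    grads = np.stack([np.stack(np.gradient(disp[c]), axis=0)
                      for c in range(3)])
    for c in range(3):
        grads[c, c] += 1.0                       # identity part d x_c / d x_c
    J = np.moveaxis(grads, (0, 1), (-2, -1))     # (D, H, W, 3, 3)
    return np.linalg.det(J)                      # batched determinant
```

A zero displacement field gives a determinant of exactly 1 at every voxel, and folding (a non-invertible deformation) shows up as determinants ≤ 0.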

7.
Article in English | MEDLINE | ID: mdl-36381250

ABSTRACT

Deformable motion is one of the main challenges to image quality in interventional cone-beam CT (CBCT). Autofocus methods have been successfully applied for deformable motion compensation in CBCT, using multi-region joint optimization approaches that leverage the moderately smooth spatial variation of the deformable motion field within a local neighborhood. However, conventional autofocus metrics favor images with sharp appearance but do not guarantee the preservation of anatomical structures. Our previous work (DL-VIF) showed that deep convolutional neural networks (CNNs) can reproduce metrics of structural similarity (visual information fidelity - VIF), removing the need for a matched motion-free reference and providing quantification of motion degradation and structural integrity. Application of DL-VIF within local neighborhoods is challenged by the large variability of local image content across a CBCT volume and requires global context information for successful evaluation of motion effects. In this work, we propose a novel deep autofocus metric based on a context-aware, multi-resolution, deep CNN design. In addition to the inclusion of contextual information, the resulting metric generates a voxel-wise distribution of reference-free VIF values. The new metric, denoted CADL-VIF, was trained on simulated CBCT abdomen scans with deformable motion at random locations and with amplitude up to 30 mm. CADL-VIF achieved good correlation with the ground truth VIF map across all test cases, with R2 = 0.843 and slope = 0.941. When integrated into a multi-ROI deformable motion compensation method, CADL-VIF consistently reduced motion artifacts, yielding an average increase in SSIM of 0.129 in regions with severe motion and 0.113 in regions with mild motion.
This work demonstrated the capability of CADL-VIF to recognize anatomical structures and penalize unrealistic images, which is a key step in developing reliable autofocus for complex deformable motion compensation in CBCT.

8.
Article in English | MEDLINE | ID: mdl-36381563

ABSTRACT

Purpose: Cone-beam CT has become commonplace for 3D guidance in interventional radiology (IR), especially for vascular procedures in which identification of small vascular structures is crucial. However, its long image acquisition time poses a limit to image quality due to soft-tissue deformable motion that hampers the visibility of small vessels. Autofocus motion compensation has shown promising potential for soft-tissue deformable motion compensation, but it lacks specificity to the imaging task. This work presents an approach for deformable motion compensation targeted at imaging of vascular structures. Methods: The proposed method consists of a two-stage framework for: i) identification of contrast-enhanced blood vessels in 2D projection data and delineation of an approximate region covering the vascular target in the volume space; and ii) a novel autofocus approach including a metric designed to promote the presence of vascular structures, acting solely in the region of interest. The vesselness of the image is quantified via evaluation of the properties of the 3D image Hessian, yielding a vesselness filter that assigns larger values to voxels likely to be part of a tubular structure. A cost metric is designed to promote large vesselness values and spatial sparsity, as expected in regions of fine vascularity. A targeted autofocus method was designed by combining the presented metric with a conventional autofocus term acting outside of the region of interest. The resulting method was evaluated on simulated data including synthetic vascularity merged with real anatomical features obtained from MDCT data. Further evaluation was obtained in two clinical datasets acquired during TACE procedures with a robotic C-arm (Artis Zeego, Siemens Healthineers). Results: The targeted vascular autofocus effectively restored the shape and contrast of the contrast-enhanced vascularity in the simulation cases, resulting in improved visibility and reduced artifacts. Segmentations performed with a single threshold value on the target vascular regions yielded a net increase of up to 42% in DICE coefficient computed against the static reference. Motion compensation in clinical datasets resulted in improved visibility of vascular structures, observed in maximum intensity projections of the contrast-enhanced liver vessel tree. Conclusion: Targeted motion compensation for vascular imaging showed promising performance for increased identification of small vascular structures in the presence of motion. The development of autofocus metrics and methods tailored to vascular imaging opens the way for reliable compensation of deformable motion while preserving the integrity of anatomical structures in the image.
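The Hessian-based vesselness filtering described above follows a well-known pattern (Frangi-style eigenvalue analysis of the 3D Hessian); the sketch below is a simplified single-scale version for bright tubular structures, with illustrative default parameters that are assumptions, not values from the paper:

```python
import numpy as np

def vesselness(vol, alpha=0.5, beta=0.5, c=0.25):
    """Simplified Frangi-style vesselness from the 3D image Hessian.
    Bright tubular voxels (|l1| << |l2| ~ |l3|, with l2, l3 < 0) score
    high. alpha/beta/c are illustrative defaults, not from the paper."""
    g = np.gradient(vol)
    H = np.empty(vol.shape + (3, 3))
    for i in range(3):
        gi = np.gradient(g[i])                   # second derivatives
        for j in range(3):
            H[..., i, j] = gi[j]
    ev = np.linalg.eigvalsh(H)                   # batched, ascending order
    order = np.argsort(np.abs(ev), axis=-1)      # re-sort by |lambda|
    l = np.take_along_axis(ev, order, axis=-1)
    l1, l2, l3 = l[..., 0], l[..., 1], l[..., 2]
    eps = 1e-10
    Ra = np.abs(l2) / (np.abs(l3) + eps)                 # plate vs. tube
    Rb = np.abs(l1) / (np.sqrt(np.abs(l2 * l3)) + eps)   # blob deviation
    S2 = l1**2 + l2**2 + l3**2                           # structure strength
    v = ((1 - np.exp(-Ra**2 / (2 * alpha**2)))
         * np.exp(-Rb**2 / (2 * beta**2))
         * (1 - np.exp(-S2 / (2 * c**2))))
    v[(l2 > 0) | (l3 > 0)] = 0.0                 # not a bright tube
    return v
```

A multi-scale version would smooth the volume at several Gaussian scales and take the per-voxel maximum response, matching vessels of different calibers.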

9.
Comput Methods Programs Biomed ; 227: 107222, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36370597

ABSTRACT

PURPOSE: Effective aggregation of intraoperative x-ray images that capture the patient anatomy from multiple view-angles has the potential to enable and improve automated image analysis that can be readily performed during surgery. We present multi-perspective region-based neural networks that leverage knowledge of the imaging geometry for automatic vertebrae labeling in Long-Film images - a novel tomographic imaging modality with an extended field-of-view for spine imaging. METHOD: A multi-perspective network architecture was designed to exploit small view-angle disparities produced by a multi-slot collimator and consolidate information from overlapping image regions. A second network incorporates large view-angle disparities to jointly perform labeling on images from multiple views (viz., AP and lateral). A recurrent module incorporates contextual information and enforces anatomical order for the detected vertebrae. The three modules are combined to form the multi-view multi-slot (MVMS) network for labeling vertebrae using images from all available perspectives. The network was trained on images synthesized from 297 CT images and tested on 50 AP and 50 lateral Long-Film images acquired from 13 cadaveric specimens. Labeling performance of the multi-perspective networks was evaluated with respect to the number of vertebrae appearances and the presence of surgical instrumentation. RESULTS: The MVMS network achieved an F1 score of >96% and an average vertebral localization error of 3.3 mm, with 88.3% labeling accuracy on both AP and lateral images (15.5% and 35.0% higher than conventional Faster R-CNN on AP and lateral views, respectively). Aggregation of multiple appearances of the same vertebra using the multi-slot network significantly improved labeling accuracy (p < 0.05). Using the multi-view network, labeling accuracy on the more challenging lateral views was improved to the same level as that of the AP views. The approach demonstrated robustness to the presence of surgical instrumentation, commonly encountered in intraoperative images, and achieved comparable performance in images with and without instrumentation (88.9% vs. 91.2% labeling accuracy). CONCLUSION: The MVMS network demonstrated effective multi-perspective aggregation, providing a means for accurate, automated vertebrae labeling during spine surgery. The algorithms may be generalized to other imaging tasks and modalities that involve multiple views with view-angle disparities (e.g., bi-plane radiography). Predicted labels can help avoid adverse events during surgery (e.g., wrong-level surgery), establish correspondence with labels in preoperative modalities to facilitate image registration, and enable automated measurement of spinal alignment metrics for intraoperative assessment of spinal curvature.


Subject(s)
Neural Networks, Computer , Spine , Humans , Spine/diagnostic imaging , Spine/surgery , Algorithms , Image Processing, Computer-Assisted
10.
Phys Med Biol ; 68(1)2022 12 22.
Article in English | MEDLINE | ID: mdl-36317269

ABSTRACT

Purpose. Target localization in pulmonary interventions (e.g. transbronchial biopsy of a lung nodule) is challenged by deformable motion and may benefit from fluoroscopic overlay of the target to provide accurate guidance. We present and evaluate a 3D-2D image registration method for fluoroscopic overlay in the presence of tissue deformation using a multi-resolution/multi-scale (MRMS) framework with an objective function that drives registration primarily by soft-tissue image gradients. Methods. The MRMS method registers 3D cone-beam CT to 2D fluoroscopy without gating of respiratory phase by coarse-to-fine resampling and global-to-local rescaling about target regions-of-interest. A variation of the gradient orientation (GO) similarity metric (denoted GO') was developed to downweight bone gradients and drive registration via soft-tissue gradients. Performance was evaluated in terms of projection distance error at isocenter (PDEiso). Phantom studies determined nominal algorithm parameters and capture range. Preclinical studies used a freshly deceased, ventilated porcine specimen to evaluate performance in the presence of real tissue deformation and a broad range of 3D-2D image mismatch. Results. Nominal algorithm parameters were identified that provided robust performance over a broad range of motion (0-20 mm), including an adaptive parameter selection technique to accommodate unknown mismatch in respiratory phase. The GO' metric yielded median PDEiso = 1.2 mm, compared to 6.2 mm for conventional GO. Preclinical studies with real lung deformation demonstrated median PDEiso = 1.3 mm with MRMS + GO' registration, compared to 2.2 mm with a conventional transform. Runtime was 26 s and can be reduced to 2.5 s given a prior registration within ∼5 mm as initialization. Conclusions. MRMS registration via soft-tissue gradients achieved accurate fluoroscopic overlay in the presence of deformable lung motion. By driving registration via soft-tissue image gradients, the method avoided false local minima presented by bones and was robust to a wide range of motion magnitudes.


Subject(s)
Imaging, Three-Dimensional , Surgery, Computer-Assisted , Animals , Swine , Imaging, Three-Dimensional/methods , Cone-Beam Computed Tomography/methods , Lung/diagnostic imaging , Surgery, Computer-Assisted/methods , Fluoroscopy/methods , Algorithms
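The idea of a gradient-orientation similarity that downweights bone gradients can be illustrated with a toy metric; this is a stand-in for intuition only, assuming a precomputed bone mask and an arbitrary 0.1 downweight, and is not the GO/GO' formulation from the abstract:

```python
import numpy as np

def go_similarity(fixed, moving, bone_mask=None, bone_weight=0.1):
    """Toy gradient-orientation similarity between two 2D images (e.g.
    a DRR and a fluoro frame): weighted mean squared cosine of the
    angle between local gradients. Pixels in bone_mask contribute with
    weight bone_weight so that soft-tissue gradients drive the score."""
    fx, fy = np.gradient(fixed)
    mx, my = np.gradient(moving)
    dot = fx * mx + fy * my
    norm = np.sqrt((fx**2 + fy**2) * (mx**2 + my**2)) + 1e-12
    cos2 = (dot / norm) ** 2        # 1 when gradients are (anti)parallel
    w = np.ones_like(cos2)
    if bone_mask is not None:
        w[bone_mask] = bone_weight  # downweight bone pixels
    return float((w * cos2).sum() / w.sum())
```

In an actual 3D-2D registration loop, such a score would be evaluated between the measured fluoroscopy frame and DRRs rendered from the CBCT at candidate poses, and maximized over the pose parameters.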
11.
Phys Med Biol ; 67(16)2022 08 16.
Article in English | MEDLINE | ID: mdl-35905731

ABSTRACT

Cone-beam computed tomography (CBCT) imaging is becoming increasingly important for a wide range of applications such as image-guided surgery, image-guided radiation therapy, and diagnostic imaging such as breast and orthopaedic imaging. The potential benefits of non-circular source-detector trajectories were recognized in early work to improve the completeness of CBCT sampling and extend the field of view (FOV). Another important feature of interventional imaging is that prior knowledge of patient anatomy, such as a preoperative CBCT or prior CT, is commonly available. This provides the opportunity to integrate such prior information into the image acquisition process via customized CBCT source-detector trajectories. Such customized trajectories can be designed to optimize task-specific imaging performance, providing intervention- or patient-specific imaging settings. Recently developed robotic CBCT C-arms, as well as novel multi-source CBCT imaging systems with additional degrees of freedom, make it possible to expand scanning geometries far beyond the conventional circular source-detector trajectory. This development has inspired the research community to improve image quality by modifying the acquisition geometry, as opposed to hardware or algorithms. Recently proposed techniques in this field facilitate image quality improvement, FOV extension, radiation dose reduction, metal artifact reduction, and 3D imaging under kinematic constraints. Because of the great practical value and the increasing importance of CBCT imaging in image-guided therapy for clinical and preclinical applications, as well as in industry, this paper focuses on a review and discussion of the available literature in the field of CBCT trajectory optimization. To the best of our knowledge, this is the first exhaustive literature review of customized CBCT trajectories, aiming to update the community with in-depth information on current progress and future trends.


Subject(s)
Radiotherapy, Image-Guided , Surgery, Computer-Assisted , Algorithms , Cone-Beam Computed Tomography/methods , Humans , Image Processing, Computer-Assisted/methods , Phantoms, Imaging
12.
Phys Med Biol ; 67(12)2022 06 16.
Article in English | MEDLINE | ID: mdl-35636391

ABSTRACT

Purpose. Patient motion artifacts present a prevalent challenge to image quality in interventional cone-beam CT (CBCT). We propose a novel reference-free similarity metric (DL-VIF) that leverages the capability of deep convolutional neural networks (CNNs) to learn features associated with motion artifacts within realistic anatomical features. DL-VIF aims to address shortcomings of conventional metrics of motion-induced image quality degradation that favor characteristics associated with motion-free images, such as sharpness or piecewise constancy, but lack any awareness of the underlying anatomy, potentially promoting images depicting unrealistic image content. DL-VIF was integrated into an autofocus motion compensation framework to test its performance for motion estimation in interventional CBCT. Methods. DL-VIF is a reference-free surrogate for the previously reported visual information fidelity (VIF) metric, computed against a motion-free reference, generated using a CNN trained on simulated motion-corrupted and motion-free CBCT data. Relatively shallow (2-ResBlock) and deep (3-ResBlock) CNN architectures were trained and tested to assess sensitivity to motion artifacts and generalizability to unseen anatomy and motion patterns. DL-VIF was integrated into an autofocus framework for rigid motion compensation in head/brain CBCT and assessed in simulation and cadaver studies in comparison to a conventional gradient entropy metric. Results. The 2-ResBlock architecture better reflected motion severity and extrapolated to unseen data, whereas the 3-ResBlock architecture was found more susceptible to overfitting, limiting its generalizability to unseen scenarios. DL-VIF outperformed gradient entropy in simulation studies, yielding average multi-resolution structural similarity index (SSIM) improvements over the uncompensated image of 0.068 and 0.034, respectively, referenced to motion-free images. DL-VIF was also more robust in motion compensation, evidenced by reduced variance in SSIM for various motion patterns (σDL-VIF = 0.008 versus σgradient entropy = 0.019). Similarly, in cadaver studies, DL-VIF demonstrated superior motion compensation compared to gradient entropy (an average SSIM improvement of 0.043 (5%) versus little improvement and even degradation in SSIM, respectively) and visually improved image quality even in severely motion-corrupted images. Conclusion. The studies demonstrated the feasibility of building reference-free similarity metrics for quantification of motion-induced image quality degradation and distortion of anatomical structures in CBCT. DL-VIF provides a reliable surrogate for motion severity, penalizes unrealistic distortions, and presents a valuable new objective function for autofocus motion compensation in CBCT.


Subject(s)
Algorithms , Cone-Beam Computed Tomography , Artifacts , Cadaver , Cone-Beam Computed Tomography/methods , Humans , Image Processing, Computer-Assisted/methods , Motion
13.
Phys Med Biol ; 67(12)2022 06 10.
Article in English | MEDLINE | ID: mdl-35609586

ABSTRACT

Objective. The accuracy of navigation in minimally invasive neurosurgery is often challenged by deep brain deformations (up to 10 mm due to egress of cerebrospinal fluid during the neuroendoscopic approach). We propose a deep learning-based deformable registration method to address such deformations between preoperative MR and intraoperative CBCT. Approach. The registration method uses a joint image synthesis and registration network (denoted JSR) to simultaneously synthesize MR and CBCT images to the CT domain and perform CT-domain registration using a multi-resolution pyramid. JSR was first trained using a simulated dataset (simulated CBCT and simulated deformations) and then refined on real clinical images via transfer learning. The performance of the multi-resolution JSR was compared to a single-resolution architecture as well as a series of alternative registration methods (symmetric normalization (SyN), VoxelMorph, and image synthesis-based registration methods). Main results. JSR achieved a median Dice coefficient (DSC) of 0.69 in deep brain structures and a median target registration error (TRE) of 1.94 mm in the simulation dataset, an improvement over the single-resolution architecture (median DSC = 0.68 and median TRE = 2.14 mm). Additionally, JSR achieved superior registration compared to alternative methods - e.g., SyN (median DSC = 0.54, median TRE = 2.77 mm) and VoxelMorph (median DSC = 0.52, median TRE = 2.66 mm) - and provided a registration runtime of less than 3 s. Similarly, in the clinical dataset, JSR achieved median DSC = 0.72 and median TRE = 2.05 mm. Significance. The multi-resolution JSR network resolved deep brain deformations between MR and CBCT images with performance superior to other state-of-the-art methods. The accuracy and runtime support translation of the method to further clinical studies in high-precision neurosurgery.


Subject(s)
Image Processing, Computer-Assisted , Spiral Cone-Beam Computed Tomography , Algorithms , Cone-Beam Computed Tomography/methods , Image Processing, Computer-Assisted/methods
14.
Med Image Anal ; 75: 102292, 2022 01.
Article in English | MEDLINE | ID: mdl-34784539

ABSTRACT

PURPOSE: The accuracy of minimally invasive, intracranial neurosurgery can be challenged by deformation of brain tissue - e.g., up to 10 mm due to egress of cerebrospinal fluid during neuroendoscopic approach. We report an unsupervised, deep learning-based registration framework to resolve such deformations between preoperative MR and intraoperative CT with fast runtime for neurosurgical guidance. METHODS: The framework incorporates subnetworks for MR and CT image synthesis with a dual-channel registration subnetwork (with synthesis uncertainty providing spatially varying weights on the dual-channel loss) to estimate a diffeomorphic deformation field from both the MR and CT channels. End-to-end training is proposed that jointly optimizes both the synthesis and registration subnetworks. The proposed framework was investigated using three datasets: (1) paired MR/CT with simulated deformations; (2) paired MR/CT with real deformations; and (3) a neurosurgery dataset with real deformation. Two state-of-the-art methods (Symmetric Normalization and VoxelMorph) were implemented as a basis of comparison, and variations in the proposed dual-channel network were investigated, including single-channel registration, fusion without uncertainty weighting, and conventional sequential training of the synthesis and registration subnetworks. RESULTS: The proposed method achieved: (1) Dice coefficient = 0.82±0.07 and TRE = 1.2 ± 0.6 mm on paired MR/CT with simulated deformations; (2) Dice coefficient = 0.83 ± 0.07 and TRE = 1.4 ± 0.7 mm on paired MR/CT with real deformations; and (3) Dice = 0.79 ± 0.13 and TRE = 1.6 ± 1.0 mm on the neurosurgery dataset with real deformations. 
The dual-channel registration with uncertainty weighting demonstrated superior performance (e.g., TRE = 1.2 ± 0.6 mm) compared to single-channel registration (TRE = 1.6 ± 1.0 mm, p < 0.05 for CT channel and TRE = 1.3 ± 0.7 mm for MR channel) and dual-channel registration without uncertainty weighting (TRE = 1.4 ± 0.8 mm, p < 0.05). End-to-end training of the synthesis and registration subnetworks also improved performance compared to the conventional sequential training strategy (TRE = 1.3 ± 0.6 mm). Registration runtime with the proposed network was ∼3 s. CONCLUSION: The deformable registration framework based on dual-channel MR/CT registration with spatially varying weights and end-to-end training achieved geometric accuracy and runtime that was superior to state-of-the-art baseline methods and various ablations of the proposed network. The accuracy and runtime of the method may be compatible with the requirements of high-precision neurosurgery.
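The spatially varying weighting of the dual-channel loss can be sketched under an inverse-variance assumption: voxels where image synthesis was uncertain (larger predicted sigma) contribute less to the fused loss. This is a hedged illustration (the paper's exact weighting scheme may differ; all names are hypothetical).

```python
import numpy as np

def uncertainty_weighted_loss(loss_mr, loss_ct, sigma_mr, sigma_ct):
    """Per-voxel fusion of two channel losses, down-weighting voxels where
    the corresponding synthesis was uncertain (inverse-variance weights)."""
    w_mr = 1.0 / (sigma_mr ** 2 + 1e-8)
    w_ct = 1.0 / (sigma_ct ** 2 + 1e-8)
    total = w_mr + w_ct
    fused = (w_mr * loss_mr + w_ct * loss_ct) / total  # normalized per voxel
    return fused.mean()
```

With equal uncertainty the fused loss is the plain average of the two channels; as one channel's sigma grows, the fused loss tends toward the other channel alone, which is the behavior the ablation (fusion without uncertainty weighting) removes.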


Subject(s)
Image Processing, Computer-Assisted , Tomography, X-Ray Computed , Humans , Algorithms , Neurosurgical Procedures , Uncertainty
15.
Phys Med Biol ; 66(21)2021 11 01.
Article in English | MEDLINE | ID: mdl-34644684

ABSTRACT

Purpose. Accurate neuroelectrode placement is essential to effective monitoring or stimulation of neurosurgery targets. This work presents and evaluates a method that combines deep learning and model-based deformable 3D-2D registration to guide and verify neuroelectrode placement using intraoperative imaging. Methods. The registration method consists of three stages: (1) detection of neuroelectrodes in a pair of fluoroscopy images using a deep learning approach; (2) determination of correspondence and initial 3D localization among neuroelectrode detections in the two projection images; and (3) deformable 3D-2D registration of neuroelectrodes according to a physical device model. The method was evaluated in phantom, cadaver, and clinical studies in terms of (a) the accuracy of neuroelectrode registration and (b) the quality of metal artifact reduction (MAR) in cone-beam CT (CBCT) in which the deformably registered neuroelectrode models are taken as input to the MAR. Results. The combined deep learning and model-based deformable 3D-2D registration approach achieved 0.2 ± 0.1 mm accuracy in cadaver studies and 0.6 ± 0.3 mm accuracy in clinical studies. The detection network and 3D correspondence provided initialization of 3D-2D registration within 2 mm, which facilitated end-to-end registration runtime within 10 s. Metal artifacts, quantified as the standard deviation in voxel values in tissue adjacent to neuroelectrodes, were reduced by 72% in phantom studies and by 60% in first clinical studies. Conclusions. The method combines the speed and generalizability of deep learning (for initialization) with the precision and reliability of physical model-based registration to achieve accurate deformable 3D-2D registration and MAR in functional neurosurgery. Accurate 3D-2D guidance from fluoroscopy could overcome limitations associated with deformation in conventional navigation, and improved MAR could improve CBCT verification of neuroelectrode placement.
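Initial 3D localization from corresponding detections in two projection views reduces, in its simplest form, to two-ray triangulation. A minimal midpoint-triangulation sketch is shown below, assuming known ray origins and directions per view; this is a textbook construction, not the authors' implementation.

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Closest-point (midpoint) triangulation of two 3D rays p_i(t) = c_i + t*d_i:
    returns the midpoint of the shortest segment connecting the rays."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Solve [d1 -d2] [t1 t2]^T = c2 - c1 in least squares for the ray parameters
    A = np.stack([d1, -d2], axis=1)
    t = np.linalg.lstsq(A, c2 - c1, rcond=None)[0]
    p1 = c1 + t[0] * d1
    p2 = c2 + t[1] * d2
    return (p1 + p2) / 2
```

For exactly intersecting rays the midpoint coincides with the intersection; for noisy detections it gives a sensible least-squares compromise, adequate as an initialization to within a few millimeters as the abstract describes.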


Subject(s)
Algorithms , Cone-Beam Computed Tomography , Cadaver , Cone-Beam Computed Tomography/methods , Humans , Imaging, Three-Dimensional/methods , Phantoms, Imaging , Reproducibility of Results
16.
Phys Med Biol ; 66(12)2021 06 21.
Article in English | MEDLINE | ID: mdl-34082413

ABSTRACT

Purpose. Accurate localization and labeling of vertebrae in computed tomography (CT) is an important step toward more quantitative, automated diagnostic analysis and surgical planning. In this paper, we present a framework (called Ortho2D) for vertebral labeling in CT in a manner that is accurate and memory-efficient. Methods. Ortho2D uses two independent Faster R-CNN (region-based convolutional neural network) networks to detect and classify vertebrae in orthogonal (sagittal and coronal) CT slices. The 2D detections are clustered in 3D to localize vertebrae centroids in the volumetric CT and classify the region (cervical, thoracic, lumbar, or sacral) and vertebral level. A post-process sorting method incorporates the confidence in network output to refine classifications and reduce outliers. Ortho2D was evaluated on a publicly available dataset containing 302 normal and pathological spine CT images with and without surgical instrumentation. Labeling accuracy and memory requirements were assessed in comparison to other recently reported methods. The memory efficiency of Ortho2D permitted extension to high-resolution CT to investigate the potential for further boosts to labeling performance. Results. Ortho2D achieved overall vertebrae detection accuracy of 97.1%, region identification accuracy of 94.3%, and individual vertebral level identification accuracy of 91.0%. The framework achieved 95.8% and 83.6% level identification accuracy in images without and with surgical instrumentation, respectively. Ortho2D met or exceeded the performance of previously reported 2D and 3D labeling methods and reduced memory consumption by a factor of ∼50 (at 1 mm voxel size) compared to a 3D U-Net, allowing extension to higher resolution datasets than normally afforded. The accuracy of level identification increased from 80.1% (for standard/low resolution CT) to 95.1% (for high-resolution CT). Conclusions.
The Ortho2D method achieved vertebrae labeling performance that is comparable to other recently reported methods with significant reduction in memory consumption, permitting further performance boosts via application to high-resolution CT.
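The step of clustering per-slice 2D detections into 3D vertebra centroids can be sketched with a simple greedy scheme: detections whose 3D positions fall within a radius of an existing cluster's running centroid are merged, and each cluster's mean becomes a candidate vertebra centroid. This is illustrative only; the published method's clustering and confidence-based sorting are more elaborate, and the radius below is an assumed parameter.

```python
import numpy as np

def cluster_detections(points, radius=10.0):
    """Greedy 3D clustering: group detections (one per 2D slice) whose
    centers fall within `radius` mm of a cluster centroid; return centroids."""
    points = [np.asarray(p, float) for p in points]
    clusters = []  # each cluster is a list of member points
    for p in points:
        for c in clusters:
            if np.linalg.norm(p - np.mean(c, axis=0)) < radius:
                c.append(p)
                break
        else:  # no nearby cluster found: start a new one
            clusters.append([p])
    return [np.mean(c, axis=0) for c in clusters]
```

Because each sagittal or coronal detection constrains only two coordinates plus its slice index, aggregating many slice-wise detections this way recovers a full 3D centroid per vertebra while keeping all network inference 2D (the source of the memory savings).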


Subject(s)
Spine , Tomography, X-Ray Computed , Lumbar Vertebrae , Neural Networks, Computer
17.
Phys Med Biol ; 66(5): 055010, 2021 02 17.
Article in English | MEDLINE | ID: mdl-33594993

ABSTRACT

Image-guided therapies in the abdomen and pelvis are often hindered by motion artifacts in cone-beam CT (CBCT) arising from complex, non-periodic, deformable organ motion during long scan times (5-30 s). We propose a deformable image-based motion compensation method to address these challenges and improve CBCT guidance. Motion compensation is achieved by selecting a set of small regions of interest in the uncompensated image to minimize a cost function consisting of an autofocus objective and spatiotemporal regularization penalties. Motion trajectories are estimated using an iterative optimization algorithm (CMA-ES) and used to interpolate a 4D spatiotemporal motion vector field. The motion-compensated image is reconstructed using a modified filtered backprojection approach. Being image-based, the method does not require additional input besides the raw CBCT projection data and system geometry that are used for image reconstruction. Experimental studies investigated: (1) various autofocus objective functions, analyzed using a digital phantom with a range of sinusoidal motion magnitude (4, 8, 12, 16, 20 mm); (2) spatiotemporal regularization, studied using a CT dataset from The Cancer Imaging Archive with deformable sinusoidal motion of variable magnitude (10, 15, 20, 25 mm); and (3) performance in complex anatomy, evaluated in cadavers undergoing simple and complex motion imaged on a CBCT-capable mobile C-arm system (Cios Spin 3D, Siemens Healthineers, Forchheim, Germany). Gradient entropy was found to be the best autofocus objective for soft-tissue CBCT, increasing structural similarity (SSIM) by 42%-92% over the range of motion magnitudes investigated. The optimal temporal regularization strength was found to vary widely (0.5-5 mm⁻²) over the range of motion magnitudes investigated, whereas optimal spatial regularization strength was relatively constant (0.1). 
In cadaver studies, deformable motion compensation was shown to improve local SSIM by ∼17% for simple motion and ∼21% for complex motion and provided strong visual improvement of motion artifacts (reduction of blurring and streaks and improved visibility of soft-tissue edges). The studies demonstrate the robustness of deformable motion compensation to a range of motion magnitudes, frequencies, and other factors (e.g. truncation and scatter).
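The gradient-entropy autofocus objective identified above as best for soft-tissue CBCT can be sketched as follows. This is a common formulation (entropy of the normalized gradient-magnitude distribution, where sharper images concentrate gradient energy and score lower); the paper's exact definition may differ in normalization.

```python
import numpy as np

def gradient_entropy(img, eps=1e-8):
    """Autofocus objective: entropy of the normalized gradient-magnitude
    distribution. Sharp (motion-free) images concentrate gradients at edges,
    giving lower entropy than blurred, motion-corrupted images."""
    gy, gx = np.gradient(img.astype(float))
    g = np.sqrt(gx ** 2 + gy ** 2)
    p = g / (g.sum() + eps)          # normalize to a distribution
    return -np.sum(p * np.log(p + eps))
```

In an autofocus loop, the motion parameters are adjusted to minimize this value over the selected regions of interest, with the spatiotemporal penalties discouraging implausible trajectories.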


Subject(s)
Cone-Beam Computed Tomography , Image Processing, Computer-Assisted/methods , Organ Motion , Algorithms , Artifacts , Humans , Phantoms, Imaging , Time Factors
18.
Phys Med Biol ; 66(5): 055012, 2021 02 20.
Article in English | MEDLINE | ID: mdl-33477131

ABSTRACT

Model-based iterative reconstruction (MBIR) for cone-beam CT (CBCT) offers better noise-resolution tradeoff and image quality than analytical methods for acquisition protocols with low x-ray dose or limited data, but with increased computational burden that poses a drawback to routine application in clinical scenarios. This work develops a comprehensive framework for acceleration of MBIR in the form of penalized weighted least squares optimized with ordered subsets separable quadratic surrogates. The optimization was scheduled on a set of stages forming a morphological pyramid varying in voxel size. Transition between stages was controlled with a convergence criterion based on the deviation between the mid-band noise power spectrum (NPS) measured on a homogeneous region of the evolving reconstruction and that expected for the converged image, computed with an analytical model that used projection data and the reconstruction parameters. A stochastic backprojector was developed by introducing a random perturbation to the sampling position of each voxel for each ray in the reconstruction within a voxel-based backprojector, breaking the deterministic pattern of sampling artifacts when combined with an unmatched Siddon forward projector. This fast forward- and backprojector pair was incorporated into a multi-resolution reconstruction strategy to provide support for objects partially outside of the field of view. Acceleration from ordered subsets was combined with momentum accumulation stabilized with an adaptive technique that automatically resets the accumulated momentum when it diverges noticeably from the current iteration update. The framework was evaluated with CBCT data of a realistic abdomen phantom acquired on an imaging x-ray bench and with clinical CBCT data from an angiography robotic C-arm (Artis Zeego, Siemens Healthineers, Forchheim, Germany) acquired during a liver embolization procedure. 
Image fidelity was assessed with the structural similarity index (SSIM) computed with a converged reconstruction. The accelerated framework provided accurate reconstructions in 60 s (SSIM = 0.97) and as little as 27 s (SSIM = 0.94) for soft-tissue evaluation. The use of simple forward and backprojectors resulted in 9.3× acceleration. Accumulation of momentum provided extra ∼3.5× acceleration with stable convergence for 6-30 subsets. The NPS-driven morphological pyramid resulted in initial faster convergence, achieving similar SSIM with 1.5× lower runtime than the single-stage optimization. Acceleration of MBIR to provide reconstruction time compatible with clinical applications is feasible via architectures that integrate algorithmic acceleration with approaches to provide stable convergence, and optimization schedules that maximize convergence speed.
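The adaptive momentum technique described above, resetting accumulated momentum when it no longer agrees with the current update, can be sketched in a generic gradient-descent setting. This is an illustration of the restart idea only; the actual OS-SQS update, step sizes, and divergence test in the paper differ.

```python
import numpy as np

def momentum_descent_with_restart(grad, x0, lr=0.1, beta=0.9, iters=100):
    """Heavy-ball gradient descent with an adaptive restart: when the
    accumulated momentum is aligned with the gradient (i.e. points uphill
    relative to the descent direction), the momentum is reset to zero,
    guarding against the oscillation/divergence accelerated methods can show."""
    x = np.asarray(x0, float)
    v = np.zeros_like(x)
    for _ in range(iters):
        g = grad(x)
        if np.dot(v, g) > 0:       # momentum opposes descent: reset it
            v = np.zeros_like(x)
        v = beta * v - lr * g      # accumulate momentum
        x = x + v
    return x
```

The restart condition here is the standard gradient-based test (momentum dotted with the gradient); it keeps the fast early progress of momentum while suppressing the overshoot near the solution, which is the stability behavior the abstract reports for 6-30 subsets.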


Subject(s)
Algorithms , Cone-Beam Computed Tomography/methods , Image Processing, Computer-Assisted/methods , Phantoms, Imaging , Artifacts , Germany , Humans
19.
Article in English | MEDLINE | ID: mdl-35982943

ABSTRACT

Purpose: Deep brain stimulation is a neurosurgical procedure used in treatment of a growing spectrum of movement disorders. Inaccuracies in electrode placement, however, can result in poor symptom control or adverse effects and confound variability in clinical outcomes. A deformable 3D-2D registration method is presented for high-precision 3D guidance of neuroelectrodes. Methods: The approach employs a model-based, deformable algorithm for 3D-2D image registration. Variations in lead design are captured in a parametric 3D model based on a B-spline curve. The registration is solved through iterative optimization of 16 degrees-of-freedom that maximize image similarity between the 2 acquired radiographs and simulated forward projections of the neuroelectrode model. The approach was evaluated in phantom models with respect to pertinent imaging parameters, including view selection and imaging dose. Results: The results demonstrate an accuracy of (0.2 ± 0.2) mm in 3D localization of individual electrodes. The solution was observed to be robust to changes in pertinent imaging parameters, maintaining accurate localization with ≥20° view separation and at 1/10th the dose of a standard fluoroscopy frame. Conclusions: The presented approach provides the means for guiding neuroelectrode placement from 2 low-dose radiographic images in a manner that accommodates potential deformations at the target anatomical site. Future work will focus on improving runtime through learning-based initialization, application in reducing reconstruction metal artifacts for 3D verification of placement, and extensive evaluation in clinical data from an IRB study underway.
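A parametric lead model based on a B-spline curve can be illustrated with a uniform cubic B-spline evaluated over one span of four control points. This is the standard basis-matrix formulation; the 16-degree-of-freedom device model itself is not reproduced here.

```python
import numpy as np

def cubic_bspline_point(ctrl, t):
    """Evaluate a uniform cubic B-spline at parameter t in [0, 1] for a
    single span of four 3D control points (rows of `ctrl`, shape (4, 3))."""
    # Standard uniform cubic B-spline basis matrix (divided by 6)
    B = np.array([[-1, 3, -3, 1],
                  [3, -6, 3, 0],
                  [-3, 0, 3, 0],
                  [1, 4, 1, 0]]) / 6.0
    T = np.array([t ** 3, t ** 2, t, 1.0])
    return T @ B @ np.asarray(ctrl, float)
```

Chaining spans of such curves gives a smooth, low-dimensional centerline whose control points can serve as the optimization variables, so a flexible lead is described by a handful of degrees of freedom rather than per-electrode positions.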

20.
Article in English | MEDLINE | ID: mdl-36090307

ABSTRACT

Purpose: A method and prototype for a fluoroscopically-guided surgical robot is reported for assisting pelvic fracture fixation. The approach extends the compatibility of existing guidance methods with C-arms that are in mainstream use (without prior geometric calibration) using an online calibration of the C-arm geometry automated via registration to patient anatomy. We report the first preclinical studies of this method in cadaver for evaluation of geometric accuracy. Methods: The robot is placed over the patient within the imaging field-of-view and radiographs are acquired as the robot rotates an attached instrument. The radiographs are then used to perform an online geometric calibration via 3D-2D image registration, which solves for the intrinsic and extrinsic parameters of the C-arm imaging system with respect to the patient. The solved projective geometry is then used to register the robot to the patient and drive the robot to planned trajectories. This method is applied to a robotic system consisting of a drill guide instrument for guidewire placement and evaluated in experiments using a cadaver specimen. Results: Robotic drill guide alignment to trajectories defined in the cadaver pelvis was accurate within 2 mm and 1° (on average) using the calibration-free approach. Conformance of trajectories within bone corridors was confirmed in cadaver by extrapolating the aligned drill guide trajectory into the cadaver pelvis. Conclusion: This study demonstrates the accuracy of image-guided robotic positioning without prior calibration of the C-arm gantry, facilitating the use of surgical robots with simpler imaging devices that cannot establish or maintain an offline calibration. Future work includes testing of the system in a clinical setting with trained orthopaedic surgeons and residents.
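The intrinsic and extrinsic parameters solved by such an online calibration define a standard 3x4 perspective projection matrix mapping patient coordinates to detector pixels. A minimal sketch of forming and applying it (textbook pinhole-camera model, not the registration algorithm itself; the numeric parameters in the usage below are made up for illustration):

```python
import numpy as np

def projection_matrix(K, R, t):
    """3x4 perspective projection P = K [R | t] mapping homogeneous 3D points
    (patient frame) to 2D detector coordinates."""
    return K @ np.hstack([R, t.reshape(3, 1)])

def project(P, X):
    """Project a 3D point with matrix P; returns 2D pixel coordinates."""
    x = P @ np.append(X, 1.0)   # homogeneous projection
    return x[:2] / x[2]         # perspective divide
```

Once P is known for each radiograph, planned 3D trajectories can be projected for verification, and conversely the robot pose can be registered to the patient by aligning projections of its instrument with the acquired images.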
