Results 1 - 8 of 8
1.
Int J Comput Assist Radiol Surg ; 16(6): 943-953, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33973113

ABSTRACT

PURPOSE: Accurate and efficient spine registration is crucial to the success of spine image guidance. However, changes in spine pose cause intervertebral motion that can lead to significant registration errors. In this study, we develop a geometrical rectification technique via nonlinear principal component analysis (NLPCA) to achieve level-wise vertebral registration that is robust to large changes in spine pose.

METHODS: We used explanted porcine spines and live pigs to develop and test our technique. Each sample was scanned with preoperative CT (pCT) in an initial pose and rescanned with intraoperative stereovision (iSV) in a different surgical posture. Patient registration rectified arbitrary spinal postures in pCT and iSV into a common, neutral pose through a parameterized moving-frame approach. Topologically encoded 2D depth-projection images were then generated to establish invertible point-to-pixel correspondences. Level-wise point correspondences between pCT and iSV vertebral surfaces were generated via 2D image registration. Finally, closed-form, level-wise rigid registration of each vertebra was obtained by directly mapping 3D surface point pairs. Implanted mini-screws were used as fiducial markers to measure registration accuracy.

RESULTS: In seven explanted porcine spines and two live animal surgeries (maximum in-spine pose change of 87.5 mm and 32.7 degrees, averaged across all spines), average target registration errors (TRE) of 1.70 ± 0.15 mm and 1.85 ± 0.16 mm were achieved, respectively. The automated spine rectification took 3-5 min, followed by an additional 30 s for depth-image projection and level-wise registration.

CONCLUSIONS: The accuracy and efficiency of the proposed level-wise spine registration support its application in human open spine surgeries. The registration framework itself may also be applicable to other intraoperative imaging modalities, such as ultrasound and MRI, which may expand the utility of the approach in spine registration in general.
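
The final step above, closed-form level-wise rigid registration from corresponded 3D surface point pairs, is a standard least-squares rigid fit. The NLPCA rectification and depth-projection steps are specific to the paper, but the closed-form fit can be sketched as below; the point arrays and the synthetic pose change are hypothetical placeholders, not data from the study.

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst for two
    corresponded N x 3 point sets (closed-form Kabsch/Umeyama solution)."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_mean).T @ (dst - dst_mean)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection: force det(R) = +1.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dst_mean - R @ src_mean
    return R, t

# Hypothetical corresponded surface points for one vertebral level.
rng = np.random.default_rng(0)
pct_points = rng.random((200, 3)) * 50.0                 # preoperative CT surface points (mm)
theta = np.deg2rad(25.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
isv_points = pct_points @ R_true.T + np.array([12.0, -4.0, 7.0])  # same points in the iSV pose

R, t = rigid_fit(pct_points, isv_points)
residual = np.linalg.norm(pct_points @ R.T + t - isv_points, axis=1)
print(f"mean residual: {residual.mean():.2e} mm")
```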


Subject(s)
Imaging, Three-Dimensional/methods; Magnetic Resonance Imaging/methods; Spinal Diseases/diagnosis; Spine/diagnostic imaging; Surgery, Computer-Assisted/methods; Ultrasonography/methods; Animals; Disease Models, Animal; Fiducial Markers; Humans; Spinal Diseases/surgery; Spine/surgery; Swine
2.
PLoS One ; 13(5): e0197992, 2018.
Article in English | MEDLINE | ID: mdl-29795640

ABSTRACT

Developing an accurate and reliable injury predictor is central to the biomechanical study of traumatic brain injury. State-of-the-art efforts continue to rely on empirical, scalar metrics based on kinematics or on model-estimated tissue responses explicitly pre-defined in a specific brain region of interest, which can suffer from information loss. Performance has also typically been evaluated on a single training dataset without cross-validation. In this study, we developed a deep learning approach for concussion classification using implicit features of the entire field of voxel-wise white matter fiber strains. Using reconstructed American National Football League (NFL) injury cases, leave-one-out cross-validation was employed to objectively compare injury prediction performance against two baseline machine learning classifiers (support vector machine (SVM) and random forest (RF)) and four scalar metrics via univariate logistic regression (Brain Injury Criterion (BrIC), cumulative strain damage measure of the whole brain (CSDM-WB) and of the corpus callosum (CSDM-CC), and peak fiber strain in the CC). Feature-based machine learning classifiers, including deep learning, SVM, and RF, consistently outperformed all scalar injury metrics across all performance categories (e.g., leave-one-out accuracy of 0.828-0.862 vs. 0.690-0.776, and .632+ error of 0.148-0.176 vs. 0.207-0.292). Further, deep learning achieved the best cross-validation accuracy, sensitivity, AUC, and .632+ error. These findings demonstrate the superior performance of deep learning in concussion prediction and suggest its promise for future applications in biomechanical investigations of traumatic brain injury.
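
The evaluation protocol described above, leave-one-out cross-validation of feature-based classifiers against scalar-metric logistic regression, can be illustrated with a short sketch. The feature matrix, scalar metric, and labels below are synthetic stand-ins (not the reconstructed NFL cases), and an SVM and a random forest stand in for the paper's deep learning model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_voxel = rng.random((58, 500))                  # stand-in for voxel-wise fiber strain features
x_scalar = X_voxel.max(axis=1, keepdims=True)    # stand-in for a single scalar injury metric
y = rng.integers(0, 2, size=58)                  # 1 = concussed, 0 = non-injury (synthetic)

models = {
    "SVM on voxel-wise features": (make_pipeline(StandardScaler(), SVC()), X_voxel),
    "Random forest on voxel-wise features": (RandomForestClassifier(n_estimators=200, random_state=0), X_voxel),
    "Logistic regression on scalar metric": (LogisticRegression(), x_scalar),
}

loo = LeaveOneOut()
for name, (model, X) in models.items():
    acc = cross_val_score(model, X, y, cv=loo).mean()    # leave-one-out accuracy
    print(f"{name}: LOO accuracy = {acc:.3f}")
```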


Subject(s)
Brain Concussion/classification; Brain Concussion/pathology; Brain Mapping/methods; Decision Trees; Football; Machine Learning; White Matter/pathology; Humans; ROC Curve
3.
Oper Neurosurg (Hagerstown) ; 15(6): 686-691, 2018 12 01.
Article in English | MEDLINE | ID: mdl-29518246

ABSTRACT

BACKGROUND: Current methods of spine registration for image guidance have a variety of limitations related to accuracy, efficiency, and cost.

OBJECTIVE: To define the accuracy of stereovision-mediated co-registration of a spinal surgical field.

METHODS: A total of 10 explanted porcine spines were used. Dorsal soft tissue was removed to a variable degree. Bone screw fiducials were placed in each spine, and high-resolution computed tomography (CT) scanning was performed. Stereoscopic images were then obtained using a tracked, calibrated stereoscopic camera system; the images were processed, reconstructed, and segmented in a semi-automated manner. A multistart registration of the reconstructed spinal surface with preoperative CT was performed. Target registration error (TRE) in the region of the laminae and facets was then determined using bone screw fiducials not included in the original registration process. Each spine also underwent multilevel laminectomy, and TRE was recalculated for varying amounts of bone removal.

RESULTS: The mean TRE of stereovision registration was 2.19 ± 0.69 mm when all soft tissue was removed and 2.49 ± 0.74 mm when limited soft tissue removal was performed. Accuracy of the registration process was not adversely affected by laminectomy.

CONCLUSION: Stereovision offers a promising means of registering an open, dorsal spinal surgical field. In this study, the overall mean accuracy of the registration was 2.21 mm, even when bony anatomy was partially obscured by soft tissue or when partial midline laminectomy had been performed.
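
Target registration error as reported above is the distance between registered and true positions of fiducials that were held out of the registration itself. A minimal sketch of that calculation follows; the transform and fiducial coordinates are hypothetical placeholders.

```python
import numpy as np

def target_registration_error(R, t, fiducials_src, fiducials_dst):
    """Per-fiducial distance (mm) between transformed source fiducials and their
    true positions; the fiducials are N x 3 arrays that were NOT used to
    estimate the registration (R, t)."""
    mapped = fiducials_src @ R.T + t
    return np.linalg.norm(mapped - fiducials_dst, axis=1)

# Hypothetical example: near-identity registration with a small residual offset.
R = np.eye(3)
t = np.array([0.5, -0.3, 0.2])                                   # mm
fid_ct = np.array([[10.0, 20.0, 30.0], [40.0, 25.0, 12.0]])      # held-out bone-screw tips (CT space)
fid_true = fid_ct + np.array([0.4, -0.2, 0.1])                   # their true intraoperative positions
tre = target_registration_error(R, t, fid_ct, fid_true)
print(f"mean TRE = {tre.mean():.2f} mm")
```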


Subject(s)
Bone Screws; Spine/surgery; Surgery, Computer-Assisted; Animals; Fiducial Markers; Spine/diagnostic imaging; Swine; Tomography, X-Ray Computed/methods
4.
Biomech Model Mechanobiol ; 16(5): 1709-1727, 2017 Oct.
Article in English | MEDLINE | ID: mdl-28500358

ABSTRACT

Reliable prediction and diagnosis of concussion is important for its effective clinical management. Previous model-based studies largely employ peak responses from a single element in a pre-selected anatomical region of interest (ROI) and utilize a single training dataset for injury prediction. A more systematic and rigorous approach is necessary to scrutinize the entire set of white matter (WM) ROIs as well as ROI-constrained neural tracts. To this end, we evaluated the injury prediction performance of the 50 deep WM regions using predictor variables based on strains obtained from simulating the 58 reconstructed American National Football League head impacts. To evaluate performance objectively, repeated random subsampling was employed to split the impacts into independent training and testing datasets (39 and 19 cases, respectively, with 100 trials). Univariate logistic regressions were conducted on the training datasets to compute the area under the receiver operating characteristic curve (AUC), while accuracy, sensitivity, and specificity were reported on the testing datasets. Two tract-wise injury susceptibilities were identified as the best overall via a pair-wise permutation test. They had comparable AUC, accuracy, and sensitivity, with the highest values occurring in the superior longitudinal fasciculus (SLF; 0.867-0.879, 84.4-85.2%, and 84.1-84.6%, respectively). Using metrics based on WM fiber strain, the most vulnerable ROIs included the genu of the corpus callosum, the cerebral peduncle, and the uncinate fasciculus, while the genu and main body of the corpus callosum and the SLF were among the most vulnerable tracts. Even for one un-concussed athlete, injury susceptibility of the right cingulum (hippocampus) was elevated. These findings highlight the unique injury-discriminatory potential of computational models and may provide important insight into how best to incorporate WM structural anisotropy for the investigation of brain injury.
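
The repeated random subsampling protocol above (100 trials of a 39/19 split, univariate logistic regression per region, AUC on training data and accuracy on testing data) can be sketched as follows; the per-ROI strain predictors and injury labels are synthetic placeholders, not the reconstructed impact data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import StratifiedShuffleSplit

rng = np.random.default_rng(0)
n_impacts, n_rois = 58, 50
strain = rng.random((n_impacts, n_rois))        # stand-in for per-ROI peak fiber strain
injury = rng.integers(0, 2, size=n_impacts)     # 1 = injury, 0 = non-injury (synthetic)

# 100 trials, each splitting the 58 impacts into 39 training and 19 testing cases.
splitter = StratifiedShuffleSplit(n_splits=100, test_size=19, random_state=0)
auc = np.zeros((100, n_rois))
acc = np.zeros((100, n_rois))
for trial, (train, test) in enumerate(splitter.split(strain, injury)):
    for roi in range(n_rois):
        x_train = strain[train, roi:roi + 1]     # univariate predictor for this ROI
        x_test = strain[test, roi:roi + 1]
        model = LogisticRegression().fit(x_train, injury[train])
        auc[trial, roi] = roc_auc_score(injury[train], model.predict_proba(x_train)[:, 1])
        acc[trial, roi] = accuracy_score(injury[test], model.predict(x_test))

print("best ROI by mean training AUC:", auc.mean(axis=0).argmax())
print("best ROI by mean testing accuracy:", acc.mean(axis=0).argmax())
```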


Subject(s)
Brain Injuries/pathology; White Matter/pathology; Area Under Curve; Athletes; Craniocerebral Trauma/pathology; Databases as Topic; Humans; Models, Biological
5.
Comput Med Imaging Graph ; 51: 11-9, 2016 07.
Article in English | MEDLINE | ID: mdl-27104497

ABSTRACT

Automatic vertebra recognition, including the identification and naming of vertebra locations across multiple image modalities, is in high demand for spinal clinical diagnosis, where large amounts of imaging data from various modalities are frequently and interchangeably used. However, recognition is challenging due to variations in MR/CT appearance and in the shape and pose of the vertebrae. In this paper, we propose a method for multi-modal vertebra recognition using a novel deep learning architecture called the Transformed Deep Convolution Network (TDCN). This architecture fuses image features from different modalities in an unsupervised manner and automatically rectifies the pose of the vertebra. The fusion of MR and CT image features improves the discriminative power of the feature representation and enhances the invariance of the vertebra pattern, which allows us to automatically process images of different contrast, resolution, and protocol, even with different sizes and orientations. The feature fusion and pose rectification are naturally incorporated in a multi-layer deep learning network. Experimental results show that our method outperforms existing detection methods and provides fully automatic location+naming+pose recognition for routine clinical practice.
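
The TDCN architecture is not specified here in enough detail to reproduce, but the general idea of fusing features from two modalities inside one network can be illustrated with a generic two-branch convolutional sketch (PyTorch; the layer sizes, concatenation fusion, and patch dimensions are all assumptions, not the authors' design).

```python
import torch
import torch.nn as nn

class TwoBranchFusionNet(nn.Module):
    """Generic two-branch CNN that fuses MR and CT patch features by
    concatenation before classification. Illustrative only; this is not
    the TDCN architecture."""

    def __init__(self, n_classes: int):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
        self.mr_branch = branch()
        self.ct_branch = branch()
        self.classifier = nn.Linear(32 + 32, n_classes)

    def forward(self, mr_patch, ct_patch):
        # Concatenate per-modality feature vectors, then classify the vertebra label.
        fused = torch.cat([self.mr_branch(mr_patch), self.ct_branch(ct_patch)], dim=1)
        return self.classifier(fused)

# Hypothetical usage on 64 x 64 single-channel vertebra patches.
net = TwoBranchFusionNet(n_classes=24)      # e.g., one label per vertebral level
mr = torch.randn(8, 1, 64, 64)
ct = torch.randn(8, 1, 64, 64)
print(net(mr, ct).shape)                    # torch.Size([8, 24])
```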


Subject(s)
Machine Learning; Spine/diagnostic imaging; Automation; Humans
6.
IEEE Trans Med Imaging ; 34(8): 1676-93, 2015 Aug.
Article in English | MEDLINE | ID: mdl-25594966

ABSTRACT

Computer-aided diagnosis of spine problems relies on the automatic identification of spine structures in images. The task of automatic vertebra recognition is to identify global spine and local vertebra structural information such as spine shape, vertebra location, and pose. Vertebra recognition is challenging due to the large appearance variations across image modalities/views and the high geometric distortions in spine shape. Existing vertebra recognition methods are usually simplified to vertebra detection, which mainly focuses on identifying vertebra locations and labels but cannot support further quantitative spine assessment. In this paper, we propose a vertebra recognition method using a 3D deformable hierarchical model (DHM) to achieve cross-modality local vertebra location+pose identification with accurate vertebra labeling, as well as global 3D spine shape recovery. We recast vertebra recognition as deformable model matching, fitting the input spine images with the 3D DHM via deformations. The 3D model-matching mechanism provides more comprehensive simultaneous identification of vertebra location+pose+label than traditional location+label detection, and also provides an articulated 3D mesh model for the input spine section. Moreover, the DHM can conduct versatile recognition on volume and multi-slice data, and even on a single slice. Experiments show that our method can successfully extract vertebra locations, labels, and poses from multi-slice T1/T2 MR and volume CT, and can reconstruct the 3D spine model for different image views such as lumbar, cervical, and even the whole spine. The resulting vertebra information and the recovered shape can be used for quantitative diagnosis of spine problems and can be easily digitized and integrated into modern medical PACS systems.


Subject(s)
Imaging, Three-Dimensional/methods; Multimodal Imaging/methods; Spine/anatomy & histology; Spine/diagnostic imaging; Algorithms; Databases, Factual; Humans; Magnetic Resonance Imaging/methods; Models, Theoretical; Spinal Fractures/diagnostic imaging; Spinal Fractures/pathology; Spondylosis/diagnostic imaging; Spondylosis/pathology; Tomography, X-Ray Computed/methods
7.
Chem Commun (Camb) ; 51(4): 761-4, 2015 Jan 14.
Article in English | MEDLINE | ID: mdl-25421649

ABSTRACT

Monodisperse silver nanocages (AgNCs) with specific interiors were successfully synthesized by an azeotropic distillation (AD)-assisted method and exhibited excellent catalytic activity for the reduction of 4-nitrophenol (4-NP) to 4-aminophenol (4-AP), owing to their unique hollow morphology and the small thickness of the silver shell.


Subject(s)
Nanostructures/chemistry; Nitrophenols/chemistry; Silver/chemistry; Aminophenols/chemistry; Catalysis; Distillation; Nanostructures/ultrastructure; Oxidation-Reduction
8.
IEEE Trans Image Process ; 22(6): 2343-55, 2013 Jun.
Article in English | MEDLINE | ID: mdl-23481858

ABSTRACT

The efficient and robust extraction of invariant patterns from an image is a long-standing problem in computer vision. Invariant structures are often related to repetitive or near-repetitive patterns. The perception of repetitive patterns in an image is strongly linked to the visual interpretation and composition of textures. Repetitive patterns are products of both repetitive structures and repetitive reflections or color patterns. In other words, patterns that exhibit near-stationary behavior provide rich information about objects, their shapes, and their texture in an image. In this paper, we propose a new algorithm for repetitive pattern detection and grouping. The algorithm follows the classical region-growing image segmentation scheme. It uses a mean-shift-like dynamic to group local image patches into clusters, and exploits a continuous joint alignment to 1) match similar patches and 2) refine the subspace grouping. We also propose an algorithm for inferring the composition structure of the repetitive patterns. The inference algorithm constructs a data-driven structural completion field, which merges the detected repetitive patterns into specific global geometric structures. The result of this higher-level grouping of image patterns can be used to infer the geometry of objects and estimate the general layout of a crowded scene.
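
The patch-grouping step above relies on a mean-shift-like dynamic. A minimal illustration of grouping local patch descriptors with off-the-shelf mean-shift clustering is sketched below on a synthetic repetitive image; the simple mean/standard-deviation descriptors are placeholders, and the continuous joint-alignment refinement is not included.

```python
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth
from sklearn.feature_extraction.image import extract_patches_2d

rng = np.random.default_rng(0)
# Synthetic image: a repetitive bright/dark checker pattern plus noise.
tile = np.kron([[1.0, 0.2], [0.2, 1.0]], np.ones((8, 8)))
image = np.tile(tile, (4, 4)) + 0.05 * rng.standard_normal((64, 64))

# Local patches and simple descriptors (mean and standard deviation per patch).
patches = extract_patches_2d(image, (8, 8), max_patches=500, random_state=0)
features = np.stack([patches.mean(axis=(1, 2)), patches.std(axis=(1, 2))], axis=1)

# Mean-shift groups patches whose descriptors lie near the same density mode.
bandwidth = estimate_bandwidth(features, quantile=0.2, random_state=0)
labels = MeanShift(bandwidth=bandwidth).fit_predict(features)
print("patch clusters found:", len(np.unique(labels)))
```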


Subject(s)
Image Processing, Computer-Assisted/methods; Pattern Recognition, Automated/methods; Algorithms; Biometric Identification; Cluster Analysis; Humans