Results 1 - 14 of 14
1.
PeerJ ; 6: e4374, 2018.
Article in English | MEDLINE | ID: mdl-29492335

ABSTRACT

Paleontological research increasingly uses high-resolution micro-computed tomography (µCT) to study the inner architecture of modern and fossil bone material to answer important questions regarding vertebrate evolution. This non-destructive method allows for the measurement of otherwise inaccessible morphology. Digital measurement is predicated on the accurate segmentation of modern or fossilized bone from other structures imaged in µCT scans, as errors in segmentation can result in inaccurate calculations of structural parameters. Several approaches to image segmentation have been proposed with varying degrees of automation, ranging from completely manual segmentation to the selection of input parameters required for computational algorithms. Many of these segmentation algorithms provide speed and reproducibility at the cost of the flexibility that manual segmentation provides. In particular, the segmentation of modern and fossil bone in the presence of materials such as desiccated soft tissue, soil matrix or precipitated crystalline material can be difficult. Here we present a free, open-source segmentation application capable of segmenting modern and fossil bone that also reduces subjective user decisions to a minimum. We compare the effectiveness of this algorithm with another leading method by using both to measure the parameters of a reference object of known dimensions, as well as to segment an example problematic fossil scan. The results demonstrate that the medical image analysis-clustering method produces accurate segmentations and offers more flexibility than alternatives of equivalent precision. Its free availability, flexibility to deal with non-bone inclusions and limited need for user input give it broad applicability in anthropological, anatomical, and paleontological contexts.
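The abstract does not spell out the clustering step itself, but the general idea of intensity clustering for bone segmentation can be sketched with a minimal 1-D k-means over voxel values. This is an illustration only, not the paper's algorithm; all class names and intensity values below are invented.

```python
import numpy as np

def kmeans_1d(values, k=3, iters=25):
    """Minimal 1-D k-means over voxel intensities (e.g. air / matrix / bone)."""
    # initialize cluster centers from intensity percentiles
    centers = np.percentile(values, np.linspace(0, 100, k))
    for _ in range(iters):
        # assign each voxel to its nearest center, then re-estimate centers
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        centers = np.array([values[labels == j].mean() for j in range(k)])
    return labels, centers

# synthetic "scan": three intensity populations with noise (invented values)
rng = np.random.default_rng(0)
vals = np.concatenate([rng.normal(10, 2, 200),    # air / background
                       rng.normal(100, 5, 200),   # soil matrix
                       rng.normal(200, 8, 200)])  # bone
labels, centers = kmeans_1d(vals)
bone_mask = labels == np.argmax(centers)          # segment the brightest class
```

In a real µCT workflow the same idea would run on the full 3-D volume, typically with more classes and spatial regularization.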

2.
IEEE J Biomed Health Inform ; 21(5): 1315-1326, 2017 09.
Article in English | MEDLINE | ID: mdl-28880152

ABSTRACT

Cardiac magnetic resonance perfusion examinations enable noninvasive quantification of myocardial blood flow. However, motion between frames due to breathing must be corrected for quantitative analysis. Although several methods have been proposed, there is a lack of widely available benchmarks to compare different algorithms. We sought to compare many algorithms from several groups in an open benchmark challenge. Nine clinical studies from two different centers comprising normal and diseased myocardium at both rest and stress were made available for this study. The primary validation measure was regional myocardial blood flow based on the transfer coefficient (Ktrans), which was computed using a compartment model, and the myocardial perfusion reserve (MPR) index. The ground truth was calculated using contours drawn manually on all frames by a single observer, and visually inspected by a second observer. Six groups participated and 19 different motion correction algorithms were compared. Each method used one of three different motion models: rigid, global affine, or local deformation. The similarity metric also varied, with methods employing sum-of-squared differences, mutual information, or cross-correlation. There were no significant differences in Ktrans or MPR across the different motion models or similarity metrics. Compared with the ground truth, only Ktrans for the sum-of-squared differences metric, and for local deformation motion models, had significant bias. In conclusion, the open benchmark enabled evaluation of clinical perfusion indices over a wide range of methods. In particular, there was no benefit of nonrigid registration techniques over the other methods evaluated in this study. The benchmark data and results are available from the Cardiac Atlas Project ( www.cardiacatlas.org).
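The abstract does not specify which compartment model was used; as a hedged sketch, the standard (Tofts) one-compartment relation from which a Ktrans would be derived can be written as follows. All parameter values and the arterial input function are invented for illustration.

```python
import numpy as np

def tofts_tissue_curve(aif, dt, ktrans, kep):
    """Tissue concentration for a one-compartment (standard Tofts) model:
    C_t(t) = Ktrans * (AIF convolved with exp(-kep * t))."""
    t = np.arange(len(aif)) * dt
    irf = np.exp(-kep * t)                      # impulse response of the compartment
    return ktrans * np.convolve(aif, irf)[:len(aif)] * dt

dt = 0.5                                        # seconds per frame (invented)
t = np.arange(0, 60, dt)
aif = (t / 4.0) * np.exp(-t / 4.0)              # toy gamma-variate arterial input
rest = tofts_tissue_curve(aif, dt, ktrans=0.6, kep=0.3)
stress = tofts_tissue_curve(aif, dt, ktrans=1.8, kep=0.3)
mpr = 1.8 / 0.6                                 # perfusion reserve: stress / rest Ktrans
```

Because the tissue curve is linear in Ktrans, the MPR index can equivalently be read off the curve amplitudes when kep is shared between rest and stress.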


Subject(s)
Cardiac Imaging Techniques , Heart/diagnostic imaging , Image Processing, Computer-Assisted , Magnetic Resonance Angiography , Movement/physiology , Algorithms , Benchmarking , Cardiac Imaging Techniques/methods , Cardiac Imaging Techniques/standards , Humans , Image Processing, Computer-Assisted/methods , Image Processing, Computer-Assisted/standards , Magnetic Resonance Angiography/methods , Magnetic Resonance Angiography/standards
3.
Br J Ophthalmol ; 99(10): 1430-4, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26089215

ABSTRACT

BACKGROUND: Clinical studies report vision impairment after blunt frontal head trauma. A possible cause is damage to the optic nerve bundle within the optic canal due to microfractures of the anterior skull base, leading to indirect traumatic optic neuropathy. METHODS: A finite element study simulating impact forces of different magnitudes on the paramedian forehead was initiated. The set-up consisted of a high-resolution skull model with about 740 000 elements and a blunt impactor, and was solved in a transient, time-dependent simulation. Individual bone material parameters were calculated for each volume element to increase realism. RESULTS: Results showed stress propagation from the frontal impact towards the optic foramen and the chiasm even at low-force, fist-like impacts. Higher impacts produced stress patterns corresponding to typical fracture patterns of the anterior skull base, including the optic canal. The transient simulation revealed two stress peaks, indicating oscillation. CONCLUSIONS: It can be concluded that even comparatively low stresses and oscillation in the optic foramen may cause micro-damage not discernible on CT or MRI, explaining subsequent vision loss. Higher impacts lead to typical comminuted fractures, which may affect the integrity of the optic canal. Finite element simulation can be used effectively to study head trauma and its clinical consequences.


Subject(s)
Craniocerebral Trauma/diagnostic imaging , Image Processing, Computer-Assisted , Optic Chiasm/diagnostic imaging , Skull Base/diagnostic imaging , Vision, Low/etiology , Wounds, Nonpenetrating/diagnostic imaging , Biomechanical Phenomena , Craniocerebral Trauma/complications , Finite Element Analysis , Humans , Optic Chiasm/injuries , Radiography , Vision, Low/physiopathology , Wounds, Nonpenetrating/complications
4.
Head Face Med ; 11: 21, 2015 Jun 16.
Article in English | MEDLINE | ID: mdl-26077866

ABSTRACT

INTRODUCTION: Zygomatic fractures form a major entity in craniomaxillofacial traumatology. Few studies have dealt with their biomechanical basics and none with the role of the facial soft tissues. Therefore, this study investigated whether facial soft tissue plays a protective role in lateral midfacial trauma. METHODS: A head-to-head encounter was simulated by way of finite element analysis. The impact was investigated in two scenarios, with and without soft tissues, to demonstrate potential protective effects. To achieve realism, a transient simulation was chosen, which considers temporal dynamics and realistic material parameters derived from CT grey values. RESULTS: The simulation results showed a typical zygomatic fracture with all relevant fracture lines. Including soft tissues did not change the maximum bony stress pattern, but increased the time period from impact to maximal stress by 1.3 msec. CONCLUSIONS: Although this delay could have clinical implications, facial soft tissues may be disregarded in biomechanical simulations of the lateral midface if only the bony structures are to be investigated. Soft tissue appears to act only as a temporal buffer.


Subject(s)
Facial Bones/injuries , Zygomatic Fractures/physiopathology , Biomechanical Phenomena , Computer Simulation , Face , Finite Element Analysis , Humans
5.
Gigascience ; 3: 23, 2014.
Article in English | MEDLINE | ID: mdl-25392734

ABSTRACT

BACKGROUND: Perfusion quantification by using first-pass gadolinium-enhanced myocardial perfusion magnetic resonance imaging (MRI) has proved to be a reliable tool for the diagnosis of coronary artery disease that leads to reduced blood flow to the myocardium. The image series resulting from such acquisition usually exhibits a breathing motion that needs to be compensated for if a further automatic analysis of the perfusion is to be executed. Various algorithms have been presented to facilitate such a motion compensation, but the lack of publicly available data sets hinders a proper, reproducible comparison of these algorithms. MATERIAL: Free-breathing perfusion MRI series of ten patients considered clinically to have a stress perfusion defect were acquired; for each patient, a rest and a stress study were performed. Manual segmentations of the left ventricle myocardium and the right-left ventricle insertion point are provided for all images in order to make a unified validation of the motion compensation algorithms and the perfusion analysis possible. In addition, all the scripts and the software required to run the experiments are provided alongside the data, and to enable interested parties to directly run the experiments themselves, the test bed is also provided as a virtual hard disk. FINDINGS: To illustrate the utility of the data set, two motion compensation algorithms with publicly available implementations were applied to the data, and earlier reported results about the performance of these algorithms could be confirmed. CONCLUSION: The data repository alongside the evaluation test bed provides the option to reliably compare motion compensation algorithms for myocardial perfusion MRI. In addition, we encourage researchers to add their own annotations to the data set, either to provide inter-observer comparisons of segmentations, or to make other applications possible, for example, the validation of segmentation algorithms.

6.
Comput Methods Programs Biomed ; 115(2): 76-94, 2014 Jul.
Article in English | MEDLINE | ID: mdl-24768617

ABSTRACT

We present MBIS (Multivariate Bayesian Image Segmentation tool), a clustering tool based on the mixture of multivariate normal distributions model. MBIS supports multichannel bias field correction based on a B-spline model. A second methodological novelty is the inclusion of graph-cuts optimization for the stationary anisotropic hidden Markov random field model. Along with MBIS, we release an evaluation framework that contains three different experiments on multi-site data. We first validate the accuracy of segmentation and the estimated bias field for each channel. MBIS outperforms a widely used segmentation tool in a cross-comparison evaluation. The second experiment demonstrates the robustness of results on atlas-free segmentation of two image sets from scan-rescan protocols on 21 healthy subjects. Multivariate segmentation is more replicable than the monospectral counterpart on T1-weighted images. Finally, we provide a third experiment to illustrate how MBIS can be used in a large-scale study of tissue volume change with increasing age in 584 healthy subjects. This last result is meaningful as multivariate segmentation performs robustly without the need for prior knowledge.
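A minimal, single-channel sketch of the mixture-of-normals idea behind MBIS, without the bias field correction or the hidden Markov random field extensions the paper adds, might look like the following. The data and all parameters are invented; this is expectation-maximization for a two-class 1-D Gaussian mixture, not the MBIS implementation.

```python
import numpy as np

def em_gmm_1d(x, iters=60):
    """Fit a two-component 1-D Gaussian mixture by expectation-maximization."""
    mu = np.array([x.min(), x.max()])
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each sample
        pdf = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) \
            / (sigma * np.sqrt(2 * np.pi))
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        n = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / n
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n)
        pi = n / len(x)
    return mu, sigma, pi

# invented "tissue intensities": two well-separated classes
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(5.0, 1.0, 300)])
mu, sigma, pi = em_gmm_1d(x)
```

The multivariate, multichannel case replaces the scalar Gaussians with multivariate normals over the stacked channels, which is where the robustness reported in the abstract comes from.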


Subject(s)
Bayes Theorem , Brain/anatomy & histology , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging/statistics & numerical data , Adult , Aged , Aged, 80 and over , Aging/pathology , Algorithms , Brain/pathology , Cluster Analysis , Humans , Markov Chains , Middle Aged , Models, Statistical , Multivariate Analysis , Organ Size , Software , Young Adult
7.
Med Image Anal ; 18(1): 22-35, 2014 Jan.
Article in English | MEDLINE | ID: mdl-24080528

ABSTRACT

Accurate detection of liver lesions is of great importance in hepatic surgery planning. Recent studies have shown that the detection rate of liver lesions is significantly higher in gadoxetic acid-enhanced magnetic resonance imaging (Gd-EOB-DTPA-enhanced MRI) than in contrast-enhanced portal-phase computed tomography (CT); however, the latter remains essential because of its high specificity, good performance in estimating liver volumes and better vessel visibility. To characterize liver lesions using both the above image modalities, we propose a multimodal nonrigid registration framework using organ-focused mutual information (OF-MI). This proposal tries to improve mutual information (MI) based registration by adding spatial information, benefiting from the availability of expert liver segmentation in clinical protocols. The incorporation of an additional information channel containing liver segmentation information was studied. A dataset of real clinical images and simulated images was used in the validation process. A Gd-EOB-DTPA-enhanced MRI simulation framework is presented. To evaluate results, warping index errors were calculated for the simulated data, and landmark-based and surface-based errors were calculated for the real data. An improvement of the registration accuracy for OF-MI as compared with MI was found for both simulated and real datasets. Statistical significance of the difference was tested and confirmed in the simulated dataset (p<0.01).
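Plain mutual information, which OF-MI extends with an organ-segmentation channel, can be estimated from a joint intensity histogram. The sketch below shows only the baseline MI with invented test images; the paper's organ-focused variant and its spatial channel are not reproduced here.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Estimate MI of two aligned images from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)     # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)     # marginal of image b
    nz = pxy > 0                            # avoid log(0) on empty bins
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
img = rng.random((64, 64))
warped = img ** 2             # same structure under a nonlinear intensity map
noise = rng.random((64, 64))  # structurally unrelated image
```

The `warped` example illustrates why MI suits multimodal CT-MRI registration: the measure stays high under a nonlinear intensity mapping, where a sum-of-squared-differences criterion would fail.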


Subject(s)
Gadolinium DTPA , Liver Neoplasms/diagnosis , Magnetic Resonance Imaging/methods , Multimodal Imaging/methods , Pattern Recognition, Automated/methods , Subtraction Technique , Tomography, X-Ray Computed/methods , Algorithms , Contrast Media , Humans , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Reproducibility of Results , Sensitivity and Specificity
8.
Source Code Biol Med ; 8(1): 20, 2013 Oct 11.
Article in English | MEDLINE | ID: mdl-24119305

ABSTRACT

BACKGROUND: Gray-scale images make up the bulk of data in bio-medical image analysis, and hence the main focus of many image processing tasks lies in the processing of these monochrome images. With ever-improving acquisition devices, spatial and temporal image resolution increases, and data sets become very large. Various image processing frameworks exist that make the development of new algorithms easy by using high-level programming languages or visual programming. These frameworks are also accessible to researchers who have little or no background in software development, because they take care of otherwise complex tasks. Specifically, the management of working memory is taken care of automatically, usually at the price of requiring more of it. As a result, processing large data sets with these tools becomes increasingly difficult on workstation-class computers. One alternative to using these high-level processing tools is the development of new algorithms in a language like C++, which gives the developer full control over how memory is handled; but the resulting workflow for the prototyping of new algorithms is rather time-intensive, and also not appropriate for a researcher with little or no knowledge of software development. Another alternative is to use command-line tools that run image processing tasks, use the hard disk to store intermediate results, and provide automation through shell scripts. Although not as convenient as, e.g., visual programming, this approach is still accessible to researchers without a background in computer science. However, only a few tools exist that provide this kind of processing interface; they are usually quite task-specific, and do not provide a clear approach for shaping a new command-line tool from a prototype shell script.
RESULTS: The proposed framework, MIA, provides a combination of command-line tools, plug-ins, and libraries that make it possible to run image processing tasks interactively in a command shell and to prototype using the shell's scripting language. Since the hard disk serves as temporary storage, memory management is usually a non-issue in the prototyping phase. By using string-based descriptions for filters, optimizers, and the like, the transition from shell scripts to full-fledged programs implemented in C++ is also made easy. In addition, its design, based on atomic plug-ins and single-task command-line tools, makes it easy to extend MIA, usually without the requirement to touch or recompile existing code. CONCLUSION: In this article, we describe the general design of MIA, a general-purpose framework for gray-scale image processing. We demonstrate the applicability of the software with example applications from three different research scenarios, namely motion compensation in myocardial perfusion imaging, the processing of high-resolution image data that arises in virtual anthropology, and retrospective analysis of treatment outcome in orthognathic surgery. With MIA, prototyping algorithms by using shell scripts that combine small, single-task command-line tools is a viable alternative to the use of high-level languages, an approach that is especially useful when large data sets need to be processed.

9.
Neuroinformatics ; 11(1): 77-89, 2013 Jan.
Article in English | MEDLINE | ID: mdl-22903439

ABSTRACT

Subtraction of Ictal SPECT Co-registered to MRI (SISCOM) is an imaging technique used to localize the epileptogenic focus in patients with intractable partial epilepsy. The aim of this study was to determine the accuracy of registration algorithms involved in SISCOM analysis using FocusDET, a new user-friendly application. To this end, Monte Carlo simulation was employed to generate realistic SPECT studies. Simulated sinograms were reconstructed by using the Filtered BackProjection (FBP) algorithm and an Ordered Subsets Expectation Maximization (OSEM) reconstruction method that included compensation for all degradations. Registration errors in SPECT-SPECT and SPECT-MRI registration were evaluated by comparing the theoretical and actual transforms. Patient studies with well-localized epilepsy were also included in the registration assessment. Global registration errors, including SPECT-SPECT and SPECT-MRI registration errors, were less than 1.2 mm on average and in no case exceeded the voxel size (3.32 mm) of the SPECT studies. Although images reconstructed using OSEM led to lower registration errors than images reconstructed with FBP, the differences between OSEM and FBP reconstructions were less than 0.2 mm on average. This indicates that correction for degradations does not play a major role in the SISCOM process, thereby facilitating the application of the methodology in centers where OSEM is not implemented with correction of all degradations. These findings, together with those obtained by clinicians from patients via MRI, interictal and ictal SPECT and video-EEG, show that FocusDET is a robust application for performing SISCOM analysis in clinical practice.
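Leaving aside the reconstruction and registration steps the paper actually evaluates, the subtraction-and-threshold core of a SISCOM-style analysis might be sketched as follows. The mean-count normalization and the z-score threshold are simplified assumptions, not FocusDET's implementation.

```python
import numpy as np

def siscom(ictal, interictal, z=3.0):
    """Normalize two co-registered SPECT volumes to equal mean counts,
    subtract, and keep voxels more than z SDs above the mean difference."""
    diff = ictal / ictal.mean() - interictal / interictal.mean()
    focus = diff > diff.mean() + z * diff.std()
    return diff, focus

# invented volumes: a shared perfusion pattern plus independent noise,
# with a single hyperperfused "focus" voxel added to the ictal study
rng = np.random.default_rng(0)
base = rng.normal(100.0, 5.0, (16, 16, 16))
interictal = base + rng.normal(0.0, 5.0, base.shape)
ictal = base + rng.normal(0.0, 5.0, base.shape)
ictal[8, 8, 8] += 80.0
diff, focus = siscom(ictal, interictal)
```

In the full method, the thresholded difference volume is then overlaid on the co-registered MRI, which is why the registration accuracy studied in the paper matters.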


Subject(s)
Brain/diagnostic imaging , Diagnostic Errors/statistics & numerical data , Epilepsies, Partial/diagnostic imaging , Image Interpretation, Computer-Assisted , Image Processing, Computer-Assisted/statistics & numerical data , Algorithms , Electroencephalography , Humans , Image Interpretation, Computer-Assisted/methods , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging , Monte Carlo Method , Subtraction Technique , Tomography, Emission-Computed, Single-Photon
10.
Med Image Anal ; 16(5): 1015-28, 2012 Jul.
Article in English | MEDLINE | ID: mdl-22465078

ABSTRACT

Images acquired during free breathing using first-pass gadolinium-enhanced myocardial perfusion magnetic resonance imaging (MRI) exhibit a quasiperiodic motion pattern that needs to be compensated for if a further automatic analysis of the perfusion is to be executed. In this work, we present a method to compensate for this movement by combining independent component analysis (ICA) and image registration: First, we use ICA and a time-frequency analysis to identify the motion and separate it from the intensity change induced by the contrast agent. Then, synthetic reference images are created by recombining all the independent components but the one related to the motion. Therefore, the resulting image series does not exhibit motion and its images have intensities similar to those of their original counterparts. Motion compensation is then achieved by using a multi-pass image registration procedure. We tested our method on 39 image series acquired from 13 patients, covering the basal, mid and apical areas of the left heart ventricle and consisting of 58 perfusion images each. We validated our method by comparing manually tracked intensity profiles of the myocardial sections to automatically generated ones before and after registration of 13 patient data sets (39 distinct slices). We compared linear, non-linear, and combined ICA-based registration approaches and previously published motion compensation schemes. Considering run-time and accuracy, a two-step ICA-based motion compensation scheme that first optimizes a translation and then a non-linear transformation performed best and achieves registration of the whole series in 32 ± 12 s on a recent workstation. The proposed scheme improves the Pearson correlation coefficient between manually and automatically obtained time-intensity curves from 0.84 ± 0.19 before registration to 0.96 ± 0.06 after registration.
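A dependency-free sketch of the synthetic-reference idea follows, with two simplifications: PCA via SVD stands in for the paper's ICA, and the motion component is identified by correlation with a known oscillation rather than by the paper's time-frequency analysis. All signals are invented.

```python
import numpy as np

# toy series: a contrast-uptake component plus a quasiperiodic motion component
rng = np.random.default_rng(1)
n_frames, n_pix = 58, 100
t = np.arange(n_frames)
uptake = 1.0 - np.exp(-t / 10.0)            # contrast-agent intensity change
breathing = np.sin(2 * np.pi * t / 8.0)     # quasiperiodic motion surrogate
X = np.outer(uptake, rng.normal(size=n_pix)) \
    + np.outer(breathing, rng.normal(size=n_pix))

# decompose the centered series, drop the component most correlated with
# the oscillation, and recombine the rest into "synthetic references"
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
corr = [abs(np.corrcoef(U[:, k], breathing)[0, 1]) for k in range(len(s))]
s[int(np.argmax(corr))] = 0.0
refs = U @ np.diag(s) @ Vt + X.mean(axis=0)
```

In the paper's pipeline, each original frame would then be registered to its motion-free synthetic reference, which has similar intensities but no breathing-induced displacement.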


Subject(s)
Artifacts , Coronary Artery Disease/diagnosis , Image Enhancement/methods , Magnetic Resonance Angiography/methods , Myocardial Perfusion Imaging/methods , Pattern Recognition, Automated/methods , Respiratory-Gated Imaging Techniques/methods , Humans , Reproducibility of Results , Respiratory Mechanics , Sensitivity and Specificity , Subtraction Technique
11.
IEEE Trans Med Imaging ; 29(8): 1516-27, 2010 Aug.
Article in English | MEDLINE | ID: mdl-20442043

ABSTRACT

Free-breathing image acquisition is desirable in first-pass gadolinium-enhanced magnetic resonance imaging (MRI), but the breathing movements hinder the direct automatic analysis of the myocardial perfusion and the qualitative readout by visual tracking. Nonrigid registration can be used to compensate for these movements but needs to deal with local contrast and intensity changes with time. We propose an automatic registration scheme that exploits the quasiperiodicity of free breathing to decouple movement from intensity change. First, we identify and register a subset of the images corresponding to the same phase of the breathing cycle. This registration step deals with small differences caused by movement but maintains the full range of intensity change. The remaining images are then registered to synthetic references that are created as a linear combination of images belonging to the already registered subset. Because of the quasiperiodic respiratory movement, the subset images are distributed evenly over time and, therefore, the synthetic references exhibit intensities similar to their corresponding unregistered images. Thus, this second registration step needs to account only for the movement. Validation experiments were performed on data obtained from six patients, three slices per patient, and the automatically obtained perfusion profiles were compared with profiles obtained by manually segmenting the myocardium. The results show that our automatic approach is well suited to compensate for the free-breathing movement and that it achieves a significant improvement in the average Pearson correlation coefficient between manually and automatically obtained perfusion profiles before (0.87 ± 0.18) and after (0.96 ± 0.09) registration.


Subject(s)
Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Myocardial Perfusion Imaging/methods , Pattern Recognition, Automated/methods , Algorithms , Gadolinium , Heart/physiology , Humans , Movement/physiology , Reproducibility of Results , Respiration
12.
Article in English | MEDLINE | ID: mdl-19163436

ABSTRACT

Breathing movements during the image acquisition of first-pass gadolinium-enhanced, myocardial perfusion Magnetic Resonance Imaging (MRI) hinder a direct automatic analysis of the blood flow of the myocardium. In addition, a qualitative readout by visual tracking is more difficult as well. Non-rigid registration can be used to compensate for these movements in the image series. Because of the local contrast and intensity change over time, the registration criterion needs to be chosen carefully. We propose a measure based on Normalized Gradient Fields (NGF) as the registration criterion. Since this measure requires strong gradients in the images, we also test combining the measure with the Sum of Squared Differences (SSD) to maintain registration forces over the whole image area. To ensure smoothness, we employ a Laplacian regularizer and use the B-spline based approach to describe the transformation of the image space. Our experiments show that by using NGF good registration results can be obtained for images exhibiting high intensity contrast. For images with low intensity contrast, combining NGF and SSD improves the registration results significantly over using NGF only. Both measures are differentiable, making the application of fast, gradient-based optimizers possible.
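An NGF-style distance can be sketched in the common formulation of Haber and Modersitzki; the paper's exact discretization and edge parameter may differ, and the images below are invented.

```python
import numpy as np

def ngf(img, eps=1e-2):
    """Normalized gradient field: gradients scaled to near-unit length,
    with eps suppressing directions in near-flat regions."""
    gy, gx = np.gradient(img)
    mag = np.sqrt(gx ** 2 + gy ** 2 + eps ** 2)
    return gx / mag, gy / mag

def ngf_distance(a, b, eps=1e-2):
    """NGF distance: small where the edge directions of a and b align,
    regardless of the images' absolute intensities."""
    ax, ay = ngf(a, eps)
    bx, by = ngf(b, eps)
    return float((1.0 - (ax * bx + ay * by) ** 2).sum())

y, x = np.mgrid[0:32, 0:32]
ref = np.sin(x / 4.0)
rescaled = 2.0 * ref + 1.0           # same structure, different intensity scale
shifted = np.sin((x + 3) / 4.0)      # same structure, displaced by 3 pixels
```

The comparison illustrates why NGF suits contrast-changing series: the distance ignores the intensity rescaling but penalizes the spatial displacement, which is exactly the behavior a perfusion registration criterion needs.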


Subject(s)
Magnetic Resonance Imaging/methods , Myocardium/pathology , Respiration , Algorithms , Automation , Contrast Media/pharmacology , Electronic Data Processing , Gadolinium/pharmacology , Humans , Models, Statistical , Motion , Perfusion , Reproducibility of Results , Software
13.
IEEE Trans Med Imaging ; 23(2): 246-55, 2004 Feb.
Article in English | MEDLINE | ID: mdl-14964568

ABSTRACT

This paper is concerned with the detection of multiple small brain lesions from magnetic resonance imaging (MRI) data. A model based on the marked point process framework is designed to detect Virchow-Robin spaces (VRSs). These tubular-shaped spaces are due to retraction of the brain parenchyma from its supplying arteries. VRSs are described by simple geometrical objects that are introduced as small tubular structures. Their radiometric properties are embedded in a data term. A prior model includes interactions describing the clustering property of VRSs. A Reversible Jump Markov Chain Monte Carlo (RJMCMC) algorithm optimizes the proposed model, obtained by multiplying the prior and the data model. Example results are shown on T1-weighted MRI datasets of elderly subjects.


Subject(s)
Algorithms , Brain Diseases/diagnosis , Brain/pathology , Central Nervous System Cysts/diagnosis , Image Interpretation, Computer-Assisted/methods , Pattern Recognition, Automated , Signal Processing, Computer-Assisted , Humans , Magnetic Resonance Imaging , Reproducibility of Results , Sensitivity and Specificity
14.
IEEE Trans Med Imaging ; 21(8): 946-52, 2002 Aug.
Article in English | MEDLINE | ID: mdl-12472267

ABSTRACT

Though fluid dynamics offers a good approach to nonrigid registration and gives accurate results, even with large-scale deformations, its application is still very time-consuming. We introduce and discuss different approaches to solving the core problem of nonrigid registration, the partial differential equation of fluid dynamics. We focus on the solvers, their computational costs and the accuracy of registration. Numerical experiments show that relaxation is currently the best approach, especially when reducing the cost per iteration by focusing the updates on deformation spots.
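Relaxation can be illustrated on the simpler 2-D Poisson equation as a stand-in for the fluid-dynamics PDE (the actual operator in fluid registration is more complex, and the boundary data here are invented). A minimal Gauss-Seidel sweep, which updates each unknown in place using its four neighbors, looks like this:

```python
import numpy as np

def gauss_seidel(u, f, iters=500, h=1.0):
    """Gauss-Seidel relaxation for the 2-D Poisson equation lap(u) = f,
    keeping the boundary values fixed (Dirichlet conditions)."""
    for _ in range(iters):
        for i in range(1, u.shape[0] - 1):
            for j in range(1, u.shape[1] - 1):
                # in-place update from the 5-point stencil
                u[i, j] = 0.25 * (u[i - 1, j] + u[i + 1, j]
                                  + u[i, j - 1] + u[i, j + 1] - h * h * f[i, j])
    return u

n = 16
u = np.zeros((n, n))
u[0, :] = 1.0                  # one "hot" boundary edge
u = gauss_seidel(u, np.zeros((n, n)))
```

The paper's observation about cost per iteration corresponds here to restricting the double loop to grid points where the deformation is still changing, instead of sweeping the full domain every iteration.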


Subject(s)
Algorithms , Brain/anatomy & histology , Image Enhancement/methods , Magnetic Resonance Imaging/methods , Rheology/methods , Subtraction Technique , Computer Simulation , Elasticity , Humans , Models, Neurological , Quality Control , Reproducibility of Results , Sensitivity and Specificity