Results 1 - 8 of 8
1.
Phys Med Biol; 61(13): N322-36, 2016 Jul 07.
Article in English | MEDLINE | ID: mdl-27280456

ABSTRACT

In this technical note we propose a rapid and scalable software solution for the processing of PET list-mode data, which allows the efficient integration of list-mode data processing into the workflow of image reconstruction and analysis. All processing is performed on the graphics processing unit (GPU), making use of streamed and concurrent kernel execution overlapped with data transfers between disk and CPU memory as well as between CPU and GPU memory. This approach leads to fast generation of multiple bootstrap realisations, and when combined with fast image reconstruction and analysis, it enables assessment of uncertainties of any image statistic and of any component of the image generation process (e.g. random correction, image processing) within reasonable time frames (e.g. within five minutes per realisation). This is of particular value when handling complex chains of image generation and processing. The software outputs the following: (1) an estimate of expected random event data for noise reduction; (2) dynamic prompt and random sinograms of span-1 and span-11; and (3) variance estimates based on multiple bootstrap realisations of (1) and (2), assuming reasonable count levels for acceptable accuracy. In addition, the software produces statistics and visualisations for immediate quality control and crude motion detection, such as: (1) count rate curves; (2) centre-of-mass plots of the radiodistribution for motion detection; (3) a video of dynamic projection views for fast visual list-mode skimming and inspection; and (4) full normalisation factor sinograms. To demonstrate the software, we present an example of the above processing for fast uncertainty estimation of regional SUVR (standardised uptake value ratio) calculation for a single PET scan of (18)F-florbetapir using the Siemens Biograph mMR scanner.
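The core resampling step is straightforward to illustrate. Below is a minimal CPU sketch of generating one bootstrap realisation from list-mode events, assuming the events have already been loaded into a NumPy array; the array layout, the `load_listmode` helper and the file name are hypothetical, and the software described above performs this on the GPU with streamed kernels.

```python
import numpy as np

def bootstrap_realisation(events: np.ndarray,
                          rng: np.random.Generator) -> np.ndarray:
    """Resample list-mode events with replacement to form one replicate."""
    n = events.shape[0]
    idx = rng.integers(0, n, size=n)  # draw n event indices uniformly, with replacement
    return events[idx]

rng = np.random.default_rng(seed=0)
# events = load_listmode("scan.lm")              # hypothetical loader
# replicate = bootstrap_realisation(events, rng)
```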


Subject(s)
Image Processing, Computer-Assisted/methods; Positron-Emission Tomography; Uncertainty; Signal-To-Noise Ratio; Software; Time Factors
2.
Phys Med Biol; 60(1): 279-99, 2015 Jan 07.
Article in English | MEDLINE | ID: mdl-25490178

ABSTRACT

Bootstrap resampling has been successfully used for estimating the statistical uncertainty of parameters such as tissue metabolism, blood flow or displacement fields for image registration. The performance of bootstrap resampling as applied to PET list-mode data of the human brain and dedicated phantoms is assessed in a novel and systematic way such that: (1) the assessment is carried out in two resampling stages: the 'real world' stage, where multiple reference datasets of varying statistical level are generated, and the 'bootstrap world' stage, where corresponding bootstrap replicates are generated from the reference datasets; (2) all resampled datasets are reconstructed, yielding images from which multiple voxel and region-of-interest (ROI) values are extracted to form corresponding distributions between the two stages; (3) the difference between the distributions from both stages is quantified using the Jensen-Shannon divergence and the first four moments. It was found that the bootstrap distributions are consistently different from the real world distributions across the statistical levels. The difference was explained by a shift in the mean (up to 33% for voxels and 14% for ROIs) proportional to the inverse square root of the statistical level (number of counts). The other moments were well replicated by the bootstrap, although for very low statistical levels the estimation of the variance was poor. Therefore, the bootstrap method should be used with care when estimating systematic errors (bias) and variance at very low statistical levels, such as in early time frames of dynamic acquisitions, where the underlying population may not be sufficiently represented.
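The comparison metric in step (3) is simple to compute. Here is a minimal sketch of the Jensen-Shannon divergence between the 'real world' and 'bootstrap world' distributions, assuming the voxel or ROI values have already been histogrammed onto a common binning (the bin count is an illustrative assumption).

```python
import numpy as np

def js_divergence(p: np.ndarray, q: np.ndarray) -> float:
    """Jensen-Shannon divergence (base 2) between two discrete distributions."""
    p = p / p.sum()
    q = q / q.sum()
    m = 0.5 * (p + q)                   # mixture distribution

    def kl(a: np.ndarray, b: np.ndarray) -> float:
        mask = a > 0                    # 0*log(0) terms contribute nothing
        return float(np.sum(a[mask] * np.log2(a[mask] / b[mask])))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# real, edges = np.histogram(real_world_values, bins=64)   # a common binning
# boot, _     = np.histogram(bootstrap_values, bins=edges) # is assumed
# d = js_divergence(real.astype(float), boot.astype(float))
```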


Subject(s)
Brain/diagnostic imaging; Image Processing, Computer-Assisted/methods; Models, Theoretical; Phantoms, Imaging; Positron-Emission Tomography/methods; Signal Processing, Computer-Assisted; Spinal Cord/diagnostic imaging; Algorithms; Computer Simulation; Humans; Image Interpretation, Computer-Assisted/methods; Likelihood Functions; Monte Carlo Method; Positron-Emission Tomography/instrumentation
3.
Phys Med Biol; 58(15): 5061-83, 2013 Aug 07.
Article in English | MEDLINE | ID: mdl-23831633

ABSTRACT

Recent studies have demonstrated the benefits of a resolution model within iterative reconstruction algorithms in an attempt to account for effects that degrade the spatial resolution of the reconstructed images. However, these algorithms suffer from slower convergence rates, compared to algorithms where no resolution model is used, due to the additional need to solve an image deconvolution problem. In this paper, a recently proposed algorithm, which decouples the tomographic and image deconvolution problems within an image-based expectation maximization (EM) framework, was evaluated. This separation is convenient because more computational effort can be placed on the image deconvolution problem, thereby accelerating convergence. Since the computational cost of solving the image deconvolution problem is relatively small, multiple image-based EM iterations do not significantly increase the overall reconstruction time. The proposed algorithm was evaluated using 2D simulations as well as measured 3D data acquired on the high-resolution research tomograph. Results showed that bias reduction can be accelerated by interleaving multiple iterations of the image-based EM algorithm, which solves the resolution model problem, with a single EM iteration solving the tomographic problem. Significant improvements were observed particularly for voxels located on the boundaries between high-contrast regions within the imaged object and for small regions of interest, where resolution recovery is usually more challenging. Minor differences were observed between the proposed nested algorithm and the single iteration normally performed when an optimal number of iterations is performed for each algorithm. However, with the proposed nested approach, convergence is significantly accelerated, enabling reconstruction with far fewer tomographic iterations (up to 70% fewer for small regions). Nevertheless, the optimal number of nested image-based EM iterations is hard to define and should be selected according to the given application.
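To make the interleaving concrete, here is a minimal sketch of the nested structure described above: one tomographic EM update followed by several image-based (Richardson-Lucy style) EM sub-iterations for the resolution-model deconvolution. The `proj`, `backproj`, `blur` and `blur_adj` operators are hypothetical placeholders for the forward projector, back-projector, PSF model and its adjoint, and the update is simplified relative to the paper's exact formulation.

```python
import numpy as np

def nested_em(x, proj, backproj, blur, blur_adj, data,
              n_tomo=10, n_img=8, eps=1e-10):
    """Simplified nested EM: tomographic MLEM on the blurred image,
    interleaved with image-based EM (deconvolution) sub-iterations."""
    sens = backproj(np.ones_like(data))                      # sensitivity image
    for _ in range(n_tomo):
        z = blur(x)                                          # resolution-blurred image
        z *= backproj(data / (proj(z) + eps)) / (sens + eps) # one tomographic EM step
        for _ in range(n_img):                               # nested deconvolution:
            x *= blur_adj(z / (blur(x) + eps))               # solve blur(x) ~ z
    return x
```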


Subject(s)
Algorithms; Image Processing, Computer-Assisted/methods; Models, Theoretical; Fluorodeoxyglucose F18; Humans; Imaging, Three-Dimensional; Phantoms, Imaging; Positron-Emission Tomography; Time Factors
4.
Phys Med Biol; 56(21): N247-61, 2011 Nov 07.
Article in English | MEDLINE | ID: mdl-21983701

ABSTRACT

This note presents a practical approach to the custom design of PET phantoms, enabling the use of digital radioactive distributions with high quantitative accuracy and spatial resolution. The phantom design allows planar sources of any radioactivity distribution to be imaged in transaxial and axial (sagittal or coronal) planes. Although the design presented here is tailored to the high-resolution research tomograph (HRRT), the methods can be adapted to almost any PET scanner. The design has many advantages, but a number of practical issues had to be overcome, such as positioning of the printed source, calibration, and the uniformity and reproducibility of printing. A well counter (WC) was used in the calibration procedure to find the nonlinear relationship between digital voxel intensities and the actual measured radioactive concentrations. Repeated printing together with WC measurements and computed radiography (CR) using phosphor imaging plates (IPs) were used to evaluate the reproducibility and uniformity of the printing. Results show satisfactory printing uniformity and reproducibility; however, the calibration depends on the printing mode and the physical state of the cartridge. As a demonstration of the utility of printed phantoms, the image resolution and quantitative accuracy of reconstructed HRRT images are assessed. There is very good quantitative agreement in the calibration procedure between HRRT, CR and WC measurements. However, the high resolution of CR and its quantitative accuracy, supported by WC measurements, made it possible to show the degraded resolution of HRRT brain images caused by the partial-volume effect and the limits of iterative image reconstruction.
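The calibration step above amounts to fitting a nonlinear curve mapping printed intensity to measured activity concentration. A minimal sketch follows, assuming a low-order polynomial form and illustrative sample values (both the functional form and the numbers are assumptions for demonstration, not the note's measured data).

```python
import numpy as np

# Illustrative calibration points: printed digital intensity vs
# well-counter activity concentration (values invented for the sketch).
intensity = np.array([20.0, 60.0, 120.0, 180.0, 240.0])
activity = np.array([1.1, 3.5, 7.8, 12.4, 17.9])  # kBq/ml

coeffs = np.polyfit(intensity, activity, deg=2)   # quadratic fit (assumed form)
calibrate = np.poly1d(coeffs)

# Map a new printed intensity to its estimated activity concentration:
print(calibrate(100.0))
```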


Subject(s)
Image Enhancement/instrumentation; Phantoms, Imaging; Positron-Emission Tomography/instrumentation; Brain/diagnostic imaging; Brain/pathology; Calibration; Equipment Design; Humans; Image Enhancement/methods; Phosphorus; Positron-Emission Tomography/methods; Reproducibility of Results
5.
Neuroimage; 56(3): 1382-5, 2011 Jun 01.
Article in English | MEDLINE | ID: mdl-21338696

ABSTRACT

The assessment of accuracy and robustness of multivariate analysis of FDG-PET brain images, as presented in [Markiewicz, P.J., Matthews, J.C., Declerck, J., Herholz, K., 2009. Robustness of multivariate image analysis assessed by resampling techniques and applied to FDG-PET scans of patients with Alzheimer's disease. Neuroimage 46, 472-485.] using a homogeneous sample (from one centre) of small size, is here verified using a heterogeneous sample (from multiple centres) of much larger size. Originally the analysis, which included principal component analysis (PCA) and Fisher discriminant analysis (FDA), was established using a sample of 42 subjects (19 normal controls (NCs) and 23 Alzheimer's disease (AD) patients); here it is verified using an independent sample of 166 subjects (86 NCs and 80 ADs) obtained from the ADNI database. It is shown that bootstrap resampling, combined with the largest principal angle between PCA subspaces as a metric and with deliberate simulation of clinical misdiagnosis, can predict the robustness of the multivariate analysis when used with new datasets. Cross-validation (CV) and the .632 bootstrap overestimated the predictive accuracy, encouraging less robust solutions. It is also shown that the type of PET scanner and the image reconstruction method have an impact on such analysis and affect the accuracy of the verification sample.
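The subspace metric used above can be computed directly from orthonormal PC bases. A minimal sketch, assuming `U1` and `U2` are matrices whose orthonormal columns span the two PCA subspaces being compared:

```python
import numpy as np

def largest_principal_angle(U1: np.ndarray, U2: np.ndarray) -> float:
    """Largest principal angle (radians) between span(U1) and span(U2),
    given orthonormal basis matrices (columns = principal components)."""
    s = np.linalg.svd(U1.T @ U2, compute_uv=False)  # singular values = cosines of angles
    s = np.clip(s, 0.0, 1.0)                        # guard against round-off
    return float(np.arccos(s.min()))                # smallest cosine -> largest angle
```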


Subject(s)
Image Processing, Computer-Assisted/methods; Multivariate Analysis; Positron-Emission Tomography/methods; Aged; Alzheimer Disease/diagnostic imaging; Databases, Factual; Discriminant Analysis; Female; Fluorodeoxyglucose F18; Humans; Male; Principal Component Analysis; Radiopharmaceuticals; Reproducibility of Results
6.
Neuroimage; 56(2): 782-7, 2011 May 15.
Article in English | MEDLINE | ID: mdl-20595075

ABSTRACT

In neuroimaging it is useful to obtain robust and accurate estimates of relationships between the image-derived data and separately derived covariates such as clinical and demographic measures. Due to the high dimensionality of brain images, complex image analysis is typically used to extract certain image features, which may or may not relate to the covariates. These correlations, which explain variance within the image data, are frequently of interest. Principal component analysis (PCA) is used to extract image features from a sample of 42 FDG-PET brain images (19 normal controls (NCs), 23 Alzheimer's disease (AD) patients). For the three most robust PCs, the correlation of the PC scores with i) the Mini Mental Status Exam (MMSE) score and ii) age is examined. The key aspects of this work are the assessment of: i) the robustness and significance of the correlations using bootstrap resampling; ii) the influence of the PCA on the robustness of the correlations; iii) the impact of two intensity normalization methods (global and cerebellar). Results show that: i) Pearson's statistics can lead to overoptimistic results; ii) the robustness of the correlations deteriorates with the number of PCs; iii) the correlations are strongly influenced by the method of intensity normalization: the correlation with cognitive impairment is stronger and more significant for PC1 with global normalization, whereas the correlations with age are stronger and more robust for PC2 with cerebellar normalization.
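A minimal sketch of the bootstrap assessment of one such correlation, e.g. between PC1 scores and MMSE, resampling subjects with replacement (the variable names and the 95% interval choice are illustrative assumptions):

```python
import numpy as np

def bootstrap_corr(x: np.ndarray, y: np.ndarray,
                   n_boot: int = 10000, seed: int = 0):
    """Bootstrap distribution of Pearson's r between paired samples x, y."""
    rng = np.random.default_rng(seed)
    n = len(x)
    r = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)            # resample subjects with replacement
        r[b] = np.corrcoef(x[idx], y[idx])[0, 1]    # Pearson's r of the replicate
    return r.mean(), np.percentile(r, [2.5, 97.5])  # bootstrap mean and 95% interval

# r_mean, (lo, hi) = bootstrap_corr(pc1_scores, mmse)  # hypothetical inputs
```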


Subject(s)
Alzheimer Disease/diagnostic imaging; Brain Mapping/methods; Brain/diagnostic imaging; Image Processing, Computer-Assisted/methods; Principal Component Analysis/methods; Fluorodeoxyglucose F18; Humans; Neuropsychological Tests; Positron-Emission Tomography; Radiopharmaceuticals
7.
Neuroimage; 46(2): 472-85, 2009 Jun.
Article in English | MEDLINE | ID: mdl-19385015

ABSTRACT

For finite and noisy samples, the extraction of robust features or patterns representative of the population is a formidable task in which over-interpretation is not uncommon. In this work, resampling techniques have been applied to a sample of 42 FDG-PET brain images of 19 healthy volunteers (HVs) and 23 Alzheimer's disease (AD) patients to assess the robustness of image features extracted through principal component analysis (PCA) and Fisher discriminant analysis (FDA). The objectives of this work are to: 1) determine the variance described by the PCA relative to the population variance; 2) assess the robustness of the PCA to the population sample using the largest principal angle between PCA subspaces; 3) assess the robustness and accuracy of the FDA. Since the sample does not have histopathological data, the impact of possible clinical misdiagnosis on the discriminant analysis is investigated. The PCA can describe up to 40% of the total population variability. No more than the first three or four PCs can be regarded as robust, and on these a robust FDA can be built. Standard error images showed that regions close to the falx and around the ventricles are less stable. Using the first three PCs, sensitivity and specificity were 90.5% and 96.9%, respectively. The use of resampling techniques to evaluate the robustness of multivariate image analysis methods enables researchers to avoid over-interpretation when applying such methods to neuroimaging studies, which often have small sample sizes.
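A minimal sketch of such a pipeline using scikit-learn, with Fisher discriminant analysis realised as linear discriminant analysis on the first three PC scores; `X` (subjects x voxels), binary labels `y` and `X_new` are assumed given, and the component count follows the abstract's finding:

```python
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

pca = PCA(n_components=3)          # keep the three most robust PCs
scores = pca.fit_transform(X)      # project images onto the PC subspace
fda = LinearDiscriminantAnalysis()
fda.fit(scores, y)                 # y: 0 = healthy volunteer, 1 = AD

# Classify new images by projecting them onto the same subspace:
# predictions = fda.predict(pca.transform(X_new))
```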


Subject(s)
Algorithms; Alzheimer Disease/diagnostic imaging; Brain/diagnostic imaging; Fluorodeoxyglucose F18; Image Interpretation, Computer-Assisted/methods; Positron-Emission Tomography/methods; Female; Humans; Image Enhancement/methods; Male; Middle Aged; Multivariate Analysis; Principal Component Analysis; Radiopharmaceuticals; Reproducibility of Results; Sample Size; Sensitivity and Specificity; Young Adult
8.
Phys Med Biol; 52(3): 829-47, 2007 Feb 07.
Article in English | MEDLINE | ID: mdl-17228124

ABSTRACT

A new technique for modelling multiple-order Compton scatter, which uses the absolute probabilities relating the image space to the projection space in 3D whole-body PET, is presented. The details considered in this work give valuable insight into the scatter problem, particularly for multiple scatter. Such modelling is advantageous for large attenuating media, where scatter is a dominant component of the measured data and where multiple scatter may dominate the total scatter depending on the energy threshold and object size. The model offers distinct features that set it apart from previous research: (1) specification of the scatter distribution for each voxel based on the transmission data, the physics of Compton scattering and the specification of a given PET system; (2) independence from the true activity distribution; (3) in principle, no scaling or iterative process is required to find the distribution; (4) explicit multiple-scatter modelling; (5) no scatter subtraction or addition to the forward model when included in the system matrix used with statistical image reconstruction methods; (6) adaptability to many different scatter compensation methods, from simple and fast to more sophisticated and therefore slower methods; (7) accuracy equivalent to that of a Monte Carlo model. The scatter model has been validated using Monte Carlo simulation (SimSET).
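One physics ingredient of any such model is the Klein-Nishina differential cross-section governing the angular probability of Compton scattering of 511 keV annihilation photons. A minimal sketch of its standard form follows; the integration into per-voxel scatter probabilities, which is the substance of the paper, is not shown.

```python
import numpy as np

R_E = 2.8179403262e-15  # classical electron radius (m)

def klein_nishina(theta: np.ndarray, e_kev: float = 511.0) -> np.ndarray:
    """Klein-Nishina differential cross-section d(sigma)/d(Omega) in m^2/sr
    for a photon of energy e_kev scattered through angle theta (radians)."""
    k = e_kev / 511.0                            # energy / electron rest energy
    p = 1.0 / (1.0 + k * (1.0 - np.cos(theta)))  # scattered-to-incident energy ratio
    return 0.5 * R_E**2 * p**2 * (p + 1.0 / p - np.sin(theta)**2)

# Example: relative probability of 30-degree vs 60-degree scatter
# ratio = klein_nishina(np.radians(30)) / klein_nishina(np.radians(60))
```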


Subject(s)
Positron-Emission Tomography/statistics & numerical data; Algorithms; Biophysical Phenomena; Biophysics; Humans; Imaging, Three-Dimensional/statistics & numerical data; Models, Theoretical; Monte Carlo Method; Photons; Scattering, Radiation