1.
Med Phys ; 50(10): 6047-6059, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37538038

ABSTRACT

BACKGROUND: Physiological motion, such as respiratory motion, has become a limiting factor in the spatial resolution of positron emission tomography (PET) imaging as the resolution of PET detectors continues to improve. Motion-induced misregistration between PET and CT images can also cause attenuation correction artifacts. Respiratory gating can be used to freeze the motion and to reduce motion-induced artifacts. PURPOSE: In this study, we propose a robust data-driven approach using an unsupervised deep clustering network that employs an autoencoder (AE) to extract latent features for respiratory gating. METHODS: We first divide list-mode PET data into short-time frames. The short-time frame images are reconstructed without attenuation, scatter, or randoms correction to avoid attenuation mismatch artifacts and to reduce image reconstruction time. The deep AE is then trained using the reconstructed short-time frame images to extract latent features for respiratory gating. No additional data are required for the AE training. K-means clustering is subsequently used to perform respiratory gating based on the latent features extracted by the deep AE. The effectiveness of our proposed Deep Clustering method was evaluated using physical phantom and real patient datasets. The performance was compared against phase gating based on an external signal (External) and image-based principal component analysis (PCA) with K-means clustering (Image PCA). RESULTS: The proposed method produced gated images with higher contrast and sharper myocardium boundaries than those obtained using the External gating method and Image PCA. Quantitatively, the gated images generated by the proposed Deep Clustering method showed larger center of mass (COM) displacement and higher lesion contrast than those obtained using the other two methods. CONCLUSIONS: The effectiveness of our proposed method was validated using physical phantom and real patient data. The results showed that our proposed framework provides superior gating compared with the conventional External method and Image PCA.
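
As an informal illustration of the gating pipeline described above (short frames, autoencoder latent features, K-means gates), the sketch below uses a small fully connected autoencoder on flattened frames and random placeholder data; the frame size, latent dimension, gate count, and training settings are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of AE-based deep clustering for respiratory gating.
# Placeholder data and illustrative sizes only -- not the paper's network.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

class FrameAutoencoder(nn.Module):
    def __init__(self, n_voxels, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_voxels, 256), nn.ReLU(),
                                     nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, n_voxels))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def gate_frames(frames, n_gates=6, epochs=100):
    """frames: (n_frames, n_voxels) tensor of short-time frame images."""
    model = FrameAutoencoder(frames.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):            # unsupervised training on the frames themselves
        opt.zero_grad()
        recon, _ = model(frames)
        loss = nn.functional.mse_loss(recon, frames)
        loss.backward()
        opt.step()
    with torch.no_grad():
        _, latent = model(frames)
    # K-means on the latent features assigns each short frame to a respiratory gate.
    return KMeans(n_clusters=n_gates, n_init=10).fit_predict(latent.numpy())

frames = torch.rand(300, 4096)         # 300 toy "frames" of 4096 voxels each
print(gate_frames(frames)[:20])
```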

2.
IEEE Trans Med Imaging ; 41(5): 1230-1241, 2022 05.
Article in English | MEDLINE | ID: mdl-34928789

ABSTRACT

Respiratory motion is one of the main sources of motion artifacts in positron emission tomography (PET) imaging. The emission image and patient motion can be estimated simultaneously from respiratory gated data through a joint estimation framework. However, conventional motion estimation methods based on registration of a pair of images are sensitive to noise. The goal of this study is to develop a robust joint estimation method that incorporates a deep learning (DL)-based image registration approach for motion estimation. We propose a joint estimation framework by incorporating a learned image registration network into a regularized PET image reconstruction. The joint estimation was formulated as a constrained optimization problem with moving gated images related to a fixed image via the deep neural network. The constrained optimization problem is solved by the alternating direction method of multipliers (ADMM) algorithm. The effectiveness of the algorithm was demonstrated using simulated and real data. We compared the proposed DL-ADMM joint estimation algorithm with a monotonic iterative joint estimation. Motion compensated reconstructions using pre-calculated deformation fields by DL-based (DL-MC recon) and iterative (iterative-MC recon) image registration were also included for comparison. Our simulation study shows that the proposed DL-ADMM joint estimation method reduces bias compared to the ungated image without increasing noise and outperforms the competing methods. In the real data study, our proposed method also generated higher lesion contrast and sharper liver boundaries compared to the ungated image and had lower noise than the reference gated image.
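
For intuition about the ADMM splitting sketched above, the toy script below alternates (1) penalized gated-image updates, (2) a reference-image update, and (3) dual updates for a 1-D problem in which the registration network is replaced by a known circular shift and the reconstruction step by a quadratic surrogate; every operator and number is a placeholder rather than the paper's model.

```python
# Schematic ADMM loop for a constrained formulation of the form
#   estimate gated images x_g subject to x_g = W(ref) for each gate g,
# written with toy quadratic surrogates so the script runs end to end.
import numpy as np

rng = np.random.default_rng(0)
n, n_gates, rho = 64, 4, 1.0
y = [rng.poisson(5.0, n) for _ in range(n_gates)]   # toy gated data
shifts = [0, 1, 2, 3]                               # fixed "motion" per gate

def warp(ref, shift):
    return np.roll(ref, shift)        # stand-in for the learned registration network

def x_update(y_g, warped_ref, u_g):
    # Closed-form minimizer of 0.5*(x - y)^2 + rho/2*(x - warped_ref + u)^2,
    # standing in for a regularized gated reconstruction step.
    return (y_g + rho * (warped_ref - u_g)) / (1.0 + rho)

ref = np.mean(y, axis=0)
x = [ref.copy() for _ in range(n_gates)]
u = [np.zeros(n) for _ in range(n_gates)]

for it in range(20):
    warped = [warp(ref, s) for s in shifts]
    x = [x_update(y[g], warped[g], u[g]) for g in range(n_gates)]          # 1) gates
    ref = np.mean([np.roll(x[g] + u[g], -shifts[g]) for g in range(n_gates)],
                  axis=0)                                                  # 2) reference
    warped = [warp(ref, s) for s in shifts]
    u = [u[g] + x[g] - warped[g] for g in range(n_gates)]                  # 3) duals

print(np.round(ref[:8], 2))
```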


Subjects
Deep Learning, Algorithms, Artifacts, Humans, Image Processing, Computer-Assisted/methods, Motion, Positron-Emission Tomography/methods
3.
PET Clin ; 16(4): 483-492, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34353746

ABSTRACT

Artificial intelligence (AI) has significant potential to positively impact and advance medical imaging, including positron emission tomography (PET) imaging applications. AI has the ability to enhance and optimize all aspects of the PET imaging chain, from patient scheduling and setup through protocoling, data acquisition, detector signal processing, reconstruction, and image processing to interpretation. AI poses industry-specific challenges that will need to be addressed and overcome to maximize the future potential of AI in PET. This article provides an overview of these industry-specific challenges for the development, standardization, commercialization, and clinical adoption of AI, and explores the potential enhancements to PET imaging brought by AI in the near future. In particular, the combination of on-demand image reconstruction, AI, and custom-designed data-processing workflows may open new possibilities for innovation that would positively impact the industry and ultimately patients.


Subjects
Artificial Intelligence, Positron-Emission Tomography, Humans, Image Processing, Computer-Assisted, Radiography
4.
Med Phys ; 48(9): 5244-5258, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34129690

ABSTRACT

PURPOSE: The development of PET/CT and PET/MR scanners provides opportunities for improving PET image quality by using anatomical information. In this paper, we propose a novel co-learning three-dimensional (3D) convolutional neural network (CNN) to extract modality-specific features from PET/CT image pairs and integrate complementary features into an iterative reconstruction framework to improve PET image reconstruction. METHODS: We used a pretrained deep neural network to represent PET images. The network was trained using low-count PET and CT image pairs as inputs and high-count PET images as labels. This network was then incorporated into a constrained maximum likelihood framework to regularize PET image reconstruction. Two different network structures were investigated for the integration of anatomical information from CT images. One was a multichannel CNN, which treated PET and CT volumes as separate channels of the input. The other was a multibranch CNN, which implemented separate encoders for PET and CT images to extract latent features and fed the combined latent features into a decoder. Using computer-based Monte Carlo simulations and two real patient datasets, the proposed method was compared with existing methods, including maximum likelihood expectation maximization (MLEM) reconstruction, a kernel-based reconstruction, and a CNN-based deep penalty method with and without anatomical guidance. RESULTS: Reconstructed images showed that the proposed constrained ML reconstruction approach produced higher quality images than the competing methods. The tumors in the lung region had higher contrast in the proposed constrained ML reconstruction than in the CNN-based deep penalty reconstruction. The image quality was further improved by incorporating the anatomical information. Moreover, the liver standard deviation was lower in the proposed approach than in all competing methods at a matched lesion contrast. CONCLUSIONS: The supervised co-learning strategy can improve the performance of constrained maximum likelihood reconstruction. Compared with existing techniques, the proposed method produced a better lesion contrast versus background standard deviation trade-off curve, which can potentially improve lesion detection.
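
The difference between the two input structures compared above can be made concrete with a few lines of PyTorch; the layer widths and volume size below are illustrative assumptions, not the paper's architecture.

```python
# Sketch of the two anatomically guided input structures: a multichannel 3D CNN
# (PET and CT stacked as channels) versus a multibranch CNN (separate encoders,
# fused latent features). Toy sizes only.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv3d(c_in, c_out, 3, padding=1), nn.ReLU())

class MultiChannelCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(2, 16), conv_block(16, 16),
                                 nn.Conv3d(16, 1, 3, padding=1))
    def forward(self, pet, ct):
        return self.net(torch.cat([pet, ct], dim=1))     # 2-channel input

class MultiBranchCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.pet_enc = conv_block(1, 16)
        self.ct_enc = conv_block(1, 16)
        self.decoder = nn.Sequential(conv_block(32, 16),
                                     nn.Conv3d(16, 1, 3, padding=1))
    def forward(self, pet, ct):
        z = torch.cat([self.pet_enc(pet), self.ct_enc(ct)], dim=1)
        return self.decoder(z)                            # fused latent features

pet = torch.rand(1, 1, 32, 32, 32)   # toy low-count PET volume
ct = torch.rand(1, 1, 32, 32, 32)    # matched CT volume
print(MultiChannelCNN()(pet, ct).shape, MultiBranchCNN()(pet, ct).shape)
```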


Subjects
Image Processing, Computer-Assisted, Positron Emission Tomography Computed Tomography, Humans, Neural Networks, Computer, Positron-Emission Tomography, Tomography, X-Ray Computed
5.
EJNMMI Phys ; 8(1): 31, 2021 Mar 25.
Article in English | MEDLINE | ID: mdl-33765233

ABSTRACT

BACKGROUND: Deep learning (DL)-based image quality improvement is a novel technique based on convolutional neural networks. The aim of this study was to compare the clinical value of 18F-fluorodeoxyglucose positron emission tomography (18F-FDG PET) images obtained with the DL method with those obtained using a Gaussian filter. METHODS: Fifty patients with a mean age of 64.4 (range, 19-88) years who underwent 18F-FDG PET/CT between April 2019 and May 2019 were included in the study. PET images were obtained with the DL method in addition to conventional images reconstructed with three-dimensional time-of-flight ordered subset expectation maximization and filtered with a Gaussian filter as a baseline for comparison. The reconstructed images were reviewed by two nuclear medicine physicians and scored from 1 (poor) to 5 (excellent) for tumor delineation, overall image quality, and image noise. For the semi-quantitative analysis, standardized uptake values in tumors and healthy tissues were compared between images obtained using the DL method and those obtained with a Gaussian filter. RESULTS: Images acquired using the DL method scored significantly higher for tumor delineation, overall image quality, and image noise compared to baseline (P < 0.001). The Fleiss' kappa value for overall inter-reader agreement was 0.78. The standardized uptake values in tumors obtained with the DL method were significantly higher than those acquired using a Gaussian filter (P < 0.001). CONCLUSIONS: The deep learning method improves the quality of PET images.

6.
Phys Med Biol ; 65(12): 125016, 2020 06 23.
Article in English | MEDLINE | ID: mdl-32357352

ABSTRACT

Positron emission tomography (PET) image reconstruction is an ill-posed inverse problem, and the reconstructed images suffer from high noise due to the limited number of detected events. Prior information can be used to improve the quality of reconstructed PET images. Deep neural networks have also been applied to regularized image reconstruction. One method is to use a pretrained denoising neural network to represent the PET image and to perform a constrained maximum likelihood estimation. In this work, we propose to use a generative adversarial network (GAN) to further improve the network performance. We also modify the objective function to include a data-matching term on the network input. Experimental studies using computer-based Monte Carlo simulations and real patient datasets demonstrate that the proposed method leads to noticeable improvements over the kernel-based and U-net-based regularization methods in terms of lesion contrast recovery versus background noise trade-offs.
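
A generic adversarial training loop for a denoising network of this kind looks roughly like the sketch below: an L2 fidelity term to the high-count label plus a weighted adversarial term from a discriminator. The architectures, weights, and data are toy placeholders, and the constrained reconstruction step and the data-matching term on the network input are not reproduced here.

```python
# Toy GAN-style training of a denoising network: L2 + adversarial loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

G = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
                  nn.Conv3d(8, 1, 3, padding=1))              # denoising generator
D = nn.Sequential(nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
                  nn.Flatten(), nn.Linear(8 * 8 * 8 * 8, 1))  # simple discriminator

opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()
adv_weight = 1e-3

low_count = torch.rand(2, 1, 16, 16, 16)    # toy low-count inputs
high_count = torch.rand(2, 1, 16, 16, 16)   # toy high-count labels
real, fake = torch.ones(2, 1), torch.zeros(2, 1)

for step in range(5):
    # Discriminator step: separate high-count images from denoised outputs.
    with torch.no_grad():
        denoised = G(low_count)
    d_loss = bce(D(high_count), real) + bce(D(denoised), fake)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator step: L2 fidelity to the high-count label plus adversarial term.
    denoised = G(low_count)
    g_loss = F.mse_loss(denoised, high_count) + adv_weight * bce(D(denoised), real)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(float(g_loss))
```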


Subjects
Image Processing, Computer-Assisted/methods, Neural Networks, Computer, Positron-Emission Tomography, Computer Simulation, Databases, Factual, Humans, Monte Carlo Method
7.
Phys Med Biol ; 65(15): 155003, 2020 07 30.
Article in English | MEDLINE | ID: mdl-32244230

ABSTRACT

Artifacts caused by patient breathing and movement during PET data acquisition affect image quality. Respiratory gating is commonly used to gate the list-mode PET data into multiple bins over a respiratory cycle. Non-rigid registration of respiratory-gated PET images can reduce motion artifacts and preserve count statistics, but it is time-consuming. In this work, we propose an unsupervised non-rigid image registration framework using deep learning for motion correction. Our network uses a differentiable spatial transformer layer to warp the moving image to the fixed image and uses a stacked structure for deformation field refinement. Estimated deformation fields were incorporated into an iterative image reconstruction algorithm to perform motion-compensated PET image reconstruction. We validated the proposed method using simulation and clinical data and implemented an iterative image registration approach for comparison. Motion-compensated reconstructions were compared with ungated images. Our simulation study showed that the motion-compensated methods can generate images with sharp boundaries and reveal more details in the heart region compared with the ungated image. The resulting normalized root-mean-square error (NRMSE) was 24.3 ± 1.7% for the deep learning-based motion correction, 31.1 ± 1.4% for the iterative registration-based motion correction, and 41.9 ± 2.0% for the ungated reconstruction. The proposed deep learning-based motion correction reduced the bias compared with the ungated image without increasing the noise level and outperformed the iterative registration-based method. In the real data study, both motion-compensated images provided higher lesion contrast and sharper liver boundaries than the ungated image and had lower noise than the reference gated image. The contrast obtained with the proposed deep neural network method was higher than that of the ungated image and the iterative registration method at any matched noise level.
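
The core of an unsupervised registration network with a differentiable spatial transformer can be written in a few lines: predict a dense displacement field, warp the moving gate with grid_sample, and minimize image dissimilarity plus a smoothness penalty. Volume sizes, the network, and the loss weights below are illustrative, and the stacked refinement structure of the paper is omitted.

```python
# Minimal unsupervised deformable registration sketch with a spatial transformer.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv3d(2, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv3d(16, 3, 3, padding=1))  # 3-channel displacement
    def forward(self, moving, fixed):
        return self.net(torch.cat([moving, fixed], dim=1))

def warp(moving, disp):
    # Identity sampling grid plus the predicted displacement
    # (displacements are in normalized [-1, 1] grid units for grid_sample).
    n = moving.shape[0]
    grid = F.affine_grid(torch.eye(3, 4).unsqueeze(0).repeat(n, 1, 1),
                         moving.shape, align_corners=False)
    grid = grid + disp.permute(0, 2, 3, 4, 1)
    return F.grid_sample(moving, grid, align_corners=False)

net = RegNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
moving = torch.rand(1, 1, 16, 16, 16)    # toy gated volumes
fixed = torch.rand(1, 1, 16, 16, 16)

for _ in range(10):
    disp = net(moving, fixed)
    warped = warp(moving, disp)
    sim = F.mse_loss(warped, fixed)                       # image similarity term
    smooth = (disp.diff(dim=2).abs().mean() + disp.diff(dim=3).abs().mean()
              + disp.diff(dim=4).abs().mean())            # smoothness penalty
    loss = sim + 0.1 * smooth
    opt.zero_grad(); loss.backward(); opt.step()

print(float(loss))
```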


Subjects
Deep Learning, Image Processing, Computer-Assisted/methods, Movement, Positron-Emission Tomography, Respiratory-Gated Imaging Techniques, Artifacts, Humans
8.
Phys Med Biol ; 62(12): 5114-5130, 2017 Jun 21.
Article in English | MEDLINE | ID: mdl-28402287

ABSTRACT

Penalized likelihood (PL) reconstruction has demonstrated potential to improve the image quality of positron emission tomography (PET) over the unregularized ordered-subsets expectation-maximization (OSEM) algorithm. However, selecting proper regularization parameters in PL reconstruction has been challenging due to the lack of ground truth and the variation of penalty functions. Here we present a method to choose regularization parameters using a cross-validation log-likelihood (CVLL) function. This new method does not require any knowledge of the true image and is directly applicable to list-mode PET data. We performed a statistical analysis of the mean and variance of the CVLL. The results show that the CVLL provides an unbiased estimate of the log-likelihood function calculated using the noise-free data. The predicted variance can be used to verify the statistical significance of the difference between CVLL values. The proposed method was validated using simulation studies and also applied to real patient data. Images reconstructed using the optimal parameters selected by the proposed method show good visual image quality.
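
The cross-validation idea can be illustrated on a toy Poisson problem: thin the counts into two independent halves, reconstruct from one half with each candidate regularization parameter, and score each candidate by the Poisson log-likelihood of the held-out half. The 1-D penalized least-squares "reconstruction" below is a stand-in, not the list-mode PET formulation of the paper.

```python
# Toy cross-validation log-likelihood for regularization parameter selection.
import numpy as np

rng = np.random.default_rng(1)
truth = 20 + 10 * np.sin(np.linspace(0, 3 * np.pi, 128)) ** 2
counts = rng.poisson(truth)

# Binomial thinning with p = 0.5 splits Poisson counts into independent halves.
train = rng.binomial(counts, 0.5)
valid = counts - train

def reconstruct(y, beta):
    # Penalized least-squares surrogate: solve (I + beta * D^T D) lam = y,
    # where D is a first-difference (smoothness) operator.
    n = y.size
    D = np.diff(np.eye(n), axis=0)
    A = np.eye(n) + beta * D.T @ D
    return np.clip(np.linalg.solve(A, y.astype(float)), 1e-3, None)

def poisson_loglik(y, lam):
    return float(np.sum(y * np.log(lam) - lam))   # terms constant in y dropped

for beta in [0.0, 1.0, 10.0, 100.0]:
    lam = reconstruct(train, beta)                # fit on the training half
    print(f"beta={beta:6.1f}  CVLL={poisson_loglik(valid, lam):9.1f}")
```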


Subjects
Image Processing, Computer-Assisted/methods, Positron-Emission Tomography, Algorithms, Humans, Likelihood Functions, Phantoms, Imaging
9.
Phys Med Biol ; 60(19): 7437-60, 2015 Oct 07.
Article in English | MEDLINE | ID: mdl-26352168

ABSTRACT

For PET/CT systems, PET image reconstruction requires corresponding CT images for anatomical localization and attenuation correction. In the case of PET respiratory gating, multiple gated CT scans can offer phase-matched attenuation and motion correction, at the expense of increased radiation dose. We aim to minimize the dose of the CT scan while preserving adequate image quality for the purpose of PET attenuation correction by introducing sparse view CT data acquisition. We investigated sparse view CT acquisition protocols resulting in ultra-low dose CT scans designed for PET attenuation correction. We analyzed the tradeoffs between the number of views and the integrated tube current per view for a given dose using CT and PET simulations of a 3D NCAT phantom with lesions inserted into the liver and lung. We simulated seven CT acquisition protocols with {984, 328, 123, 41, 24, 12, 8} views per rotation at a gantry rotation time of 0.35 s. One standard dose and four ultra-low dose levels, namely 0.35 mAs, 0.175 mAs, 0.0875 mAs, and 0.04375 mAs, were investigated. Both the analytical Feldkamp, Davis and Kress (FDK) algorithm and the Model Based Iterative Reconstruction (MBIR) algorithm were used for CT image reconstruction. We also evaluated the impact of sinogram interpolation to estimate the missing projection measurements due to sparse view data acquisition. For MBIR, we used a penalized weighted least squares (PWLS) cost function with an approximate total-variation (TV) regularizing penalty function. We compared a tube pulsing mode and a continuous exposure mode for sparse view data acquisition. Global PET ensemble root-mean-square error (RMSE) and local ensemble lesion activity error were used as quantitative evaluation metrics for PET image quality. With sparse view sampling, it is possible to greatly reduce the CT scan dose when it is primarily used for PET attenuation correction, with little or no measurable effect on the PET image. For the four ultra-low dose levels simulated, sparse view protocols with 41 and 24 views best balanced the tradeoff between electronic noise and aliasing artifacts. In terms of lesion activity error and ensemble RMSE of the PET images, these two protocols, when combined with MBIR, are able to provide results that are comparable to the baseline full dose CT scan. View interpolation significantly improves the performance of FDK reconstruction but was not necessary for MBIR. With the more technically feasible continuous exposure data acquisition, the CT images show an increase in azimuthal blur compared to tube pulsing. However, this blurring generally does not have a measurable impact on PET reconstructed images. Our simulations demonstrated that ultra-low-dose CT-based attenuation correction can be achieved at dose levels on the order of 0.044 mAs with little impact on PET image quality. Highly sparse 41- or 24-view ultra-low dose CT scans are feasible for PET attenuation correction, providing the best tradeoff between electronic noise and view aliasing artifacts. The continuous exposure acquisition mode could potentially be implemented in current commercially available scanners, thus enabling sparse view data acquisition without requiring x-ray tubes capable of operating in a pulsing mode.
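
The views-versus-exposure tradeoff at a fixed integrated tube current is simple arithmetic: in a tube-pulsing mode, each of the listed protocols divides the per-rotation mAs equally among its views. The snippet below just tabulates that split for the dose levels quoted in the abstract.

```python
# Per-view exposure for each sparse-view protocol at each ultra-low dose level.
views_per_rotation = [984, 328, 123, 41, 24, 12, 8]
dose_levels_mAs = [0.35, 0.175, 0.0875, 0.04375]   # integrated mAs per rotation

for total_mAs in dose_levels_mAs:
    per_view = [1000.0 * total_mAs / v for v in views_per_rotation]   # micro-As per view
    summary = ", ".join(f"{v} views: {p:.2f} uAs"
                        for v, p in zip(views_per_rotation, per_view))
    print(f"{total_mAs:.5f} mAs -> {summary}")
```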


Subjects
Algorithms, Image Processing, Computer-Assisted/methods, Liver/diagnostic imaging, Lung/diagnostic imaging, Phantoms, Imaging, Positron-Emission Tomography/methods, Tomography, X-Ray Computed/methods, Humans, Multimodal Imaging/methods, Radiation Dosage, Signal-To-Noise Ratio
10.
Phys Med Biol ; 60(15): 5733-51, 2015 Aug 07.
Article in English | MEDLINE | ID: mdl-26158503

ABSTRACT

Ordered subset expectation maximization (OSEM) is the most widely used algorithm for clinical PET image reconstruction. OSEM is usually stopped early and post-filtered to control image noise and does not necessarily achieve optimal quantitation accuracy. As an alternative to OSEM, we have recently implemented a penalized likelihood (PL) image reconstruction algorithm for clinical PET using the relative difference penalty, with the aim of improving quantitation accuracy without compromising visual image quality. Preliminary clinical studies have demonstrated that visual image quality, including lesion conspicuity, in images reconstructed by the PL algorithm is better than or at least as good as that of OSEM images. In this paper we evaluate the lesion quantitation accuracy of the PL algorithm with the relative difference penalty compared to OSEM by using various data sets, including phantom data acquired with an anthropomorphic torso phantom, an extended oval phantom, and the NEMA image quality phantom; clinical data; and hybrid clinical data generated by adding simulated lesion data to clinical data. We focus on mean standardized uptake values and compare them for PL and OSEM using both time-of-flight (TOF) and non-TOF data. The results demonstrate improvements of PL in lesion quantitation accuracy compared to OSEM, with a particular improvement in cold background regions such as the lungs.
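
For reference, the relative difference penalty applied between two neighboring voxel values has the form psi(x_j, x_k) = (x_j - x_k)^2 / (x_j + x_k + gamma * |x_j - x_k|); the tiny sketch below evaluates it to show how it grows more slowly than a quadratic for large differences. The gamma value is arbitrary, and this is only an illustration, not the scanner vendor's implementation.

```python
# Relative difference penalty between a pair of neighboring voxel values.
import numpy as np

def relative_difference_penalty(xj, xk, gamma=2.0, eps=1e-12):
    diff = xj - xk
    return diff ** 2 / (xj + xk + gamma * np.abs(diff) + eps)

# The penalty grows sub-quadratically for large differences, so edges between
# hot lesions and cold background are penalized relatively less.
for d in [1.0, 5.0, 20.0]:
    print(f"difference {d:5.1f} -> penalty {relative_difference_penalty(10.0 + d, 10.0):.3f}")
```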


Subjects
Algorithms, Image Processing, Computer-Assisted/methods, Likelihood Functions, Liver Diseases/diagnostic imaging, Phantoms, Imaging, Positron-Emission Tomography/methods, Anthropometry, Bayes Theorem, Humans, Image Interpretation, Computer-Assisted, Liver Diseases/pathology, Probability
11.
Phys Med Biol ; 60(13): 5241-59, 2015 Jul 07.
Article in English | MEDLINE | ID: mdl-26086713

ABSTRACT

Quantitative PET imaging is widely used in clinical diagnosis in oncology and neuroimaging. Accurate normalization correction for the efficiency of each line-of-response is essential for accurate quantitative PET image reconstruction. In this paper, we propose a normalization calibration method that uses the delayed-window coincidence events acquired during the phantom or patient scan. The proposed method could dramatically reduce the 'ring' artifacts caused by mismatched system count-rates between the calibration and phantom/patient datasets. Moreover, a modified algorithm for mean detector efficiency estimation is proposed, which could generate crystal efficiency maps with more uniform variance. Both phantom and real patient datasets are used for evaluation. The results show that the proposed method could lead to better uniformity in reconstructed images by removing ring artifacts, and to more uniform axial variance profiles, especially around the axial edge slices of the scanner. The proposed method also has the potential benefit of simplifying the normalization calibration procedure, since the calibration can be performed using the delayed-window dataset acquired on the fly.
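
The general flavor of crystal-efficiency estimation can be conveyed by a toy fan-sum iteration: model the counts on each line of response as eff_i * eff_j * mean, and repeatedly update each crystal's efficiency from the total counts in its fan. This is only a generic illustration with an assumed-known mean rate, not the delayed-window calibration or the modified mean-efficiency estimator of the paper.

```python
# Toy fan-sum style crystal efficiency estimation from pairwise coincidence counts.
import numpy as np

rng = np.random.default_rng(0)
n_crystals = 64
true_eff = rng.uniform(0.8, 1.2, n_crystals)
mean_rate = 50.0                                     # assumed-known mean counts per LOR

# Crude "uniform phantom" model: counts on LOR (i, j) ~ Poisson(rate * eff_i * eff_j).
counts = rng.poisson(mean_rate * np.outer(true_eff, true_eff)).astype(float)
np.fill_diagonal(counts, 0.0)

eff = np.ones(n_crystals)
for _ in range(20):
    fan_counts = counts.sum(axis=1)                      # counts in each crystal's fan
    fan_expected = mean_rate * eff * (eff.sum() - eff)   # model prediction of the fan sum
    eff = eff * fan_counts / fan_expected                # multiplicative fixed-point update

print(np.round(eff / true_eff, 3)[:8])                   # ratios should be close to 1
```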


Subjects
Image Interpretation, Computer-Assisted/methods, Image Processing, Computer-Assisted/methods, Liver/diagnostic imaging, Phantoms, Imaging, Positron-Emission Tomography/instrumentation, Positron-Emission Tomography/methods, Algorithms, Calibration, Humans, Patient Positioning
12.
Article in English | MEDLINE | ID: mdl-26185410

ABSTRACT

Extremely low-dose CT acquisitions for the purpose of PET attenuation correction will have a high level of noise and biasing artifacts due to factors such as photon starvation. This work explores a priori knowledge appropriate for CT iterative image reconstruction for PET attenuation correction. We investigate the maximum a posteriori (MAP) framework with cluster-based, multinomial priors for the direct reconstruction of the PET attenuation map. The objective function for direct iterative attenuation map reconstruction was modeled as a Poisson log-likelihood with prior terms consisting of quadratic (Q) and mixture (M) distributions. The attenuation map is assumed to have values in four clusters: air+background, lung, soft tissue, and bone. Under this assumption, the mixture prior was a mixture probability density function consisting of one exponential and three Gaussian distributions. The relative proportion of each cluster was jointly estimated during each voxel update of the direct iterative coordinate descent (dICD) method. Noise-free data were generated from the NCAT phantom and Poisson noise was added. Reconstruction with FBP (ramp filter) was performed on the noise-free (ground truth) and noisy data. For the noisy data, dICD reconstruction was performed with combinations of different prior strength parameters (β and γ) for the Q- and M-penalties. The combination of quadratic and mixture penalties reduces the RMSE by 18.7% compared to post-smoothed iterative reconstruction and by only 0.7% compared to the quadratic penalty alone. For direct PET attenuation map reconstruction from ultra-low dose CT acquisitions, the combination of quadratic and mixture priors offers regularization of both variance and bias and is a potential method to derive attenuation maps with negligible patient dose. However, the small improvement in quantitative accuracy relative to the substantial increase in algorithm complexity does not currently justify the use of mixture-based PET attenuation priors for reconstruction of CT images for PET attenuation correction.
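
A four-cluster mixture prior of the kind described (one exponential plus three Gaussians) can be written down directly as a negative log-density; the mixing proportions and the rough 511 keV attenuation values below are made-up placeholders, and the joint re-estimation of the proportions during reconstruction is not shown.

```python
# Negative log of a four-component mixture prior on attenuation-map voxel values.
import numpy as np
from scipy.stats import expon, norm

# (mixing proportion, distribution); attenuation in 1/cm, illustrative values only.
components = [
    (0.40, expon(scale=0.002)),           # air / background
    (0.15, norm(loc=0.030, scale=0.010)), # lung
    (0.35, norm(loc=0.096, scale=0.010)), # soft tissue
    (0.10, norm(loc=0.130, scale=0.020)), # bone
]

def neg_log_mixture_prior(mu):
    """Penalty contributed by a voxel value mu under the mixture prior."""
    pdf = sum(w * dist.pdf(mu) for w, dist in components)
    return -np.log(pdf + 1e-30)

for mu in [0.0, 0.030, 0.096, 0.130, 0.250]:
    print(f"mu={mu:5.3f} 1/cm  penalty={neg_log_mixture_prior(mu):8.3f}")
```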

13.
Phys Med Biol ; 57(2): 309-28, 2012 Jan 21.
Article in English | MEDLINE | ID: mdl-22156174

ABSTRACT

A challenge for positron emission tomography/computed tomography (PET/CT) quantitation is patient respiratory motion, which can cause an underestimation of lesion activity uptake and an overestimation of lesion volume. Several respiratory motion correction methods benefit from longer duration CT scans that are phase matched with PET scans. However, even with the lowest-dose CT techniques currently available, extended duration cine CT scans impart a substantial radiation dose. This study evaluates methods designed to reduce CT radiation dose in PET/CT scanning. We investigated selected combinations of dose-reduced acquisition and noise suppression methods that take advantage of the reduced requirement on CT for PET attenuation correction (AC). These include reducing the CT tube current, optimizing the CT tube voltage, adding filtration, and CT sinogram smoothing and clipping. We explored the impact of these methods on PET quantitation via simulations on different digital phantoms. The CT tube current can be reduced much further for AC than in low-dose CT protocols. Spectra that are higher energy and narrower are generally more dose efficient with respect to PET image quality. Sinogram smoothing could be used to compensate for the increased noise and artifacts in dose-reduced CT images, which allows a further reduction of CT dose with no penalty for PET image quantitation. We showed that when the CT is not used for diagnostic or anatomical localization purposes, ultra-low dose CT for PET/CT is feasible. The significant dose reduction strategies proposed here could enable respiratory motion compensation methods that require extended duration CT scans and reduce radiation exposure in general for all PET/CT imaging.


Subjects
Multimodal Imaging/methods, Positron-Emission Tomography, Radiation Dosage, Tomography, X-Ray Computed, Phantoms, Imaging, Signal-To-Noise Ratio
14.
IEEE Trans Med Imaging ; 26(1): 58-67, 2007 Jan.
Article in English | MEDLINE | ID: mdl-17243584

ABSTRACT

We describe a fast and globally convergent fully four-dimensional incremental gradient (4DIG) algorithm to estimate the continuous-time tracer density from list mode positron emission tomography (PET) data. Detection of 511-keV photon pairs produced by positron-electron annihilation is modeled as an inhomogeneous Poisson process whose rate function is parameterized using cubic B-splines. The rate functions are estimated by minimizing the cost function formed by the sum of the negative log-likelihood of arrival times, spatial and temporal roughness penalties, and a negativity penalty. We first derive a computable bound for the norm of the optimal temporal basis function coefficients. Based on this bound we then construct and prove convergence of an incremental gradient algorithm. Fully 4-D simulations demonstrate the substantially faster convergence behavior of the 4DIG algorithm relative to preconditioned conjugate gradient. Four-dimensional reconstructions of real data are also included to illustrate the performance of this method.
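
The list-mode arrival-time likelihood that underlies this approach is that of an inhomogeneous Poisson process, log L = sum_k log lambda(t_k) - integral_0^T lambda(t) dt, with lambda(t) expanded in a cubic B-spline basis. The single-voxel toy below builds such a rate function, simulates arrival times by thinning, and evaluates that log-likelihood; all numbers are synthetic, and the spatial model, penalties, and incremental gradient optimizer of the paper are omitted.

```python
# Single-voxel toy: cubic B-spline rate function and list-mode Poisson log-likelihood.
import numpy as np
from scipy.interpolate import BSpline

T = 60.0                                        # scan duration in seconds
knots = np.concatenate(([0, 0, 0], np.linspace(0, T, 7), [T, T, T]))
coeffs = np.array([5, 8, 15, 20, 18, 12, 8, 6, 5], dtype=float)   # control vertices
rate = BSpline(knots, coeffs, k=3)              # cubic B-spline rate (events/s)

rng = np.random.default_rng(0)
# Simulate arrival times by thinning a homogeneous process at an upper bound on the rate.
bound = coeffs.max()
t_cand = np.sort(rng.uniform(0, T, rng.poisson(bound * T)))
events = t_cand[rng.uniform(0, bound, t_cand.size) < rate(t_cand)]

def log_likelihood(spline, event_times, t_max, n_quad=4000):
    # log L = sum_k log(lambda(t_k)) - integral of lambda(t) over [0, t_max].
    tq = np.linspace(0, t_max, n_quad, endpoint=False) + 0.5 * t_max / n_quad
    integral = np.sum(np.clip(spline(tq), 1e-12, None)) * (t_max / n_quad)
    return float(np.sum(np.log(np.clip(spline(event_times), 1e-12, None))) - integral)

print(f"{len(events)} events, log-likelihood = {log_likelihood(rate, events, T):.1f}")
```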


Subjects
Algorithms, Image Enhancement/methods, Image Interpretation, Computer-Assisted/methods, Imaging, Three-Dimensional/methods, Information Storage and Retrieval/methods, Positron-Emission Tomography/methods, Subtraction Technique, Animals, Mice, Reproducibility of Results, Sensitivity and Specificity, Time Factors
15.
IEEE Trans Med Imaging ; 25(1): 42-54, 2006 Jan.
Article in English | MEDLINE | ID: mdl-16398413

ABSTRACT

We derive computationally efficient methods for the estimation of the mean and variance properties of penalized likelihood dynamic positron emission tomography (PET) images. This allows us to predict the accuracy of reconstructed activity estimates and to compare reconstruction algorithms theoretically. We combine a bin-mode approach, in which data are modeled as a collection of independent Poisson random variables at each spatiotemporal bin, with the space-time separability of the imaging equation and penalties to derive rapidly computable analytic mean and variance approximations. We use these approximations to compare the bias/variance properties of our dynamic PET image reconstruction algorithm with those of multiframe static PET reconstructions.


Subjects
Algorithms, Artificial Intelligence, Data Interpretation, Statistical, Image Enhancement/methods, Image Interpretation, Computer-Assisted/methods, Information Storage and Retrieval/methods, Positron-Emission Tomography/methods, Databases, Factual, Imaging, Three-Dimensional/methods, Likelihood Functions, Reproducibility of Results, Sensitivity and Specificity
16.
IEEE Trans Med Imaging ; 23(9): 1057-64, 2004 Sep.
Article in English | MEDLINE | ID: mdl-15377114

ABSTRACT

The Fisher information matrix (FIM) plays a key role in the analysis and applications of statistical image reconstruction methods based on Poisson data models. The elements of the FIM are a function of the reciprocal of the mean values of the sinogram elements. Conventional plug-in FIM estimation methods do not work well at low counts, where the FIM estimate is highly sensitive to the reciprocal mean estimates at individual detector pairs. A generalized error look-up table (GELT) method is developed to estimate the reciprocal of the mean of the sinogram data. This approach is also extended to randoms-precorrected data. Based on these techniques, an accurate FIM estimate is obtained for both Poisson and randoms-precorrected data. As an application, the new GELT method is used to improve resolution uniformity and achieve near-uniform image resolution in low-count situations.
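
The low-count problem with naive plug-in estimates can be checked numerically: for Poisson data y with a small mean, an estimator such as 1/max(y, 1) is a strongly biased estimate of 1/E[y], which is exactly the quantity the FIM elements require. The Monte Carlo check below illustrates that bias; it does not reproduce the GELT correction itself.

```python
# Bias of a naive plug-in estimate of the reciprocal sinogram mean at low counts.
import numpy as np

rng = np.random.default_rng(0)
for mean in [0.5, 1.0, 2.0, 5.0, 20.0]:
    y = rng.poisson(mean, 200_000)
    plug_in = np.mean(1.0 / np.maximum(y, 1))     # naive plug-in estimator
    print(f"E[y]={mean:5.1f}  true 1/E[y]={1.0 / mean:6.3f}  mean plug-in={plug_in:6.3f}")
```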


Subjects
Algorithms, Brain/diagnostic imaging, Image Enhancement/methods, Image Interpretation, Computer-Assisted/methods, Models, Biological, Models, Statistical, Tomography, Emission-Computed/methods, Humans, Phantoms, Imaging, Reproducibility of Results, Sensitivity and Specificity, Tomography, Emission-Computed/instrumentation
17.
IEEE Trans Med Imaging ; 21(4): 396-404, 2002 Apr.
Article in English | MEDLINE | ID: mdl-12022627

ABSTRACT

We describe a method for computing a continuous-time estimate of tracer density using list-mode positron emission tomography data. Emissions in each voxel are modeled as an inhomogeneous Poisson process whose rate function can be represented using a cubic B-spline basis. The rate functions are estimated by maximizing the likelihood of the arrival times of detected photon pairs over the control vertices of the spline, modified by quadratic spatial and temporal smoothness penalties and a penalty term to enforce nonnegativity. Randoms rate functions are estimated by assuming independence between the spatial and temporal randoms distributions. Similarly, scatter rate functions are estimated by assuming spatiotemporal independence and that the temporal distribution of the scatter is proportional to the temporal distribution of the trues. A quantitative evaluation was performed using simulated data, and the method is also demonstrated in a human study using 11C-raclopride.


Subjects
Algorithms, Computer Simulation, Image Enhancement/methods, Models, Statistical, Tomography, Emission-Computed/methods, Brain/metabolism, Carbon Radioisotopes/pharmacokinetics, Humans, Phantoms, Imaging, Poisson Distribution, Raclopride/pharmacokinetics, Radiopharmaceuticals/pharmacokinetics, Tissue Distribution, Tomography, Emission-Computed/instrumentation