1.
Med Phys ; 50(5): 2733-2758, 2023 May.
Article in English | MEDLINE | ID: mdl-36705079

ABSTRACT

BACKGROUND: Noise amplification in material decomposition is a major obstacle to exploiting photon-counting computed tomography (PCCT). Regularization techniques and neighborhood filters have been widely used, but degraded spatial resolution and bias are concerns. PURPOSE: This paper proposes likelihood-based bilateral filters that can be applied to pre-estimated basis sinograms to reduce noise while minimally affecting spatial resolution and accuracy. METHODS: The proposed method requires system models (e.g., incident spectrum, detector response) to calculate the likelihood. First, it performs maximum likelihood (ML)-based estimation in the projection domain to obtain basis sinograms. The estimated basis sinograms suffer from severe noise but are asymptotically unbiased and do not degrade spatial resolution. The method then calculates the neighborhood likelihoods for a given measurement at the center pixel using the neighborhood estimates and designs the weights based on the distance between likelihoods. The filter is also analyzed in terms of statistical inference, and two variations are introduced: one that requires a significance level instead of an empirical hyperparameter, and a measurement-based filter that can be applied without the system models when accurate estimates are available. The proposed methods were validated by analyzing the local properties of noise and spatial resolution and the global trends of noise and bias using numerical thorax and abdominal phantoms for a two-material decomposition (water and bone). They were compared to conventional neighborhood filters and to model-based iterative reconstruction with an edge-preserving penalty applied in the basis images. RESULTS: The proposed method showed comparable or superior performance for the local and global properties relative to conventional methods in many cases. Thorax phantom: the full width at half maximum (FWHM) decreased by -2% to 31% (-2% indicates an increase of 2% compared to the best conventional method), and the global bias was reduced by 2%-19% compared to other methods at similar noise levels (local: 51% of the ML, global: 49%) in the water basis image. The FWHM decreased by 8%-31%, and the global bias was reduced by 9%-44% at similar noise levels (local: 44% of the ML, global: 36%) in the CT image at 65 keV. Abdominal phantom: the FWHM decreased by 10%-32%, and the global bias was reduced by 3%-35% compared to other methods at similar noise levels (local: 66% of the ML, global: 67%) in the water basis image. The FWHM decreased by -11% to 47%, and the global bias was reduced by 13%-35% at similar noise levels (local: 71% of the ML, global: 70%) in the CT image at 60 keV. CONCLUSIONS: This paper introduced likelihood-based bilateral filters as a post-processing method applied to ML-based estimates of basis sinograms. The proposed filters effectively reduced noise in the basis images and the synthesized monochromatic CT images, showing the potential of likelihood-based filters in the projection domain as a substitute for conventional regularization or filtering methods.
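
The neighborhood-likelihood weighting described above can be illustrated with a minimal Python sketch. This is not the authors' implementation: the names (poisson_neg_log_likelihood, likelihood_bilateral_filter, forward_model, sigma_s, sigma_l) are hypothetical, a Poisson measurement model is assumed, and the Gaussian-of-deviance weight is a simplified stand-in for the filter weights designed in the paper.

    import numpy as np

    def poisson_neg_log_likelihood(counts, expected):
        """Poisson negative log-likelihood (up to a constant) of measured counts
        given the expected counts predicted by a forward model."""
        expected = np.maximum(expected, 1e-12)
        return np.sum(expected - counts * np.log(expected))

    def likelihood_bilateral_filter(basis_sinos, counts, forward_model,
                                    half_width=2, sigma_s=1.5, sigma_l=1.0):
        """Filter ML-estimated basis sinograms; each neighbor is weighted by how
        well its estimate explains the center pixel's measured counts.

        basis_sinos : (M, H, W) pre-estimated basis line-integrals (M materials)
        counts      : (B, H, W) measured counts in B energy bins
        forward_model(a) -> (B,) expected counts for a length-M basis vector a
        """
        M, H, W = basis_sinos.shape
        out = np.empty_like(basis_sinos)
        for i in range(H):
            for j in range(W):
                nll0 = poisson_neg_log_likelihood(
                    counts[:, i, j], forward_model(basis_sinos[:, i, j]))
                num, den = np.zeros(M), 0.0
                for di in range(-half_width, half_width + 1):
                    for dj in range(-half_width, half_width + 1):
                        ni, nj = i + di, j + dj
                        if not (0 <= ni < H and 0 <= nj < W):
                            continue
                        a_n = basis_sinos[:, ni, nj]
                        # likelihood "distance": extra deviance incurred when the
                        # neighbor's estimate explains the center measurement
                        nll_n = poisson_neg_log_likelihood(counts[:, i, j],
                                                           forward_model(a_n))
                        w = (np.exp(-(di**2 + dj**2) / (2 * sigma_s**2))
                             * np.exp(-max(nll_n - nll0, 0.0) / sigma_l))
                        num += w * a_n
                        den += w
                out[:, i, j] = num / den
        return out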


Subjects
Algorithms; Image Processing, Computer-Assisted; Likelihood Functions; Image Processing, Computer-Assisted/methods; Tomography, X-Ray Computed/methods; Thorax; Phantoms, Imaging
2.
Med Phys ; 48(10): 6531-6535, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34169523

ABSTRACT

PURPOSE: This study aims to develop a calibration-based estimator for photon-counting detector (PCD)-based x-ray computed tomography. METHODS: We propose a nearest neighborhood (NN)-based estimator, which searches for the nearest calibration data point for a given PCD output and sets the associated basis line-integrals as the estimate. The nearest-neighbor search can be accelerated using a pre-calculated k-d tree for the calibration data. RESULTS: The proposed method is compared to the model-based maximum likelihood (ML) estimator. In a slab phantom study, both the ML and NN-based methods achieve the Cramér-Rao lower bound and are unbiased for various combinations of three basis materials (water, bone, and gold). The proposed method is also validated for K-edge imaging and yields almost unbiased Au concentrations in the region of interest. CONCLUSIONS: The proposed NN-based method is demonstrated to be as accurate as the model-based ML estimator, but it is computationally efficient and requires only calibration measurements.
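
A minimal Python sketch of the calibration-table lookup is given below; the function names and data layout are hypothetical, and scipy's cKDTree stands in for whatever k-d tree implementation the authors used.

    import numpy as np
    from scipy.spatial import cKDTree

    def build_nn_estimator(calib_counts, calib_line_integrals):
        """calib_counts         : (N, B) PCD outputs (counts per energy bin) for N
                                  known calibration slab combinations
           calib_line_integrals : (N, M) corresponding basis line-integrals
                                  (e.g., water, bone, and gold thicknesses)"""
        tree = cKDTree(calib_counts)  # pre-calculated k-d tree accelerates the search

        def estimate(measured_counts):
            # nearest calibration entry in count space -> its basis line-integrals
            _, idx = tree.query(np.atleast_2d(measured_counts), k=1)
            return calib_line_integrals[idx]

        return estimate

    # usage sketch: estimate(sinogram_counts.reshape(-1, n_bins)) returns one
    # basis line-integral vector per detector reading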


Subjects
Photons; Tomography, X-Ray Computed; Calibration; Phantoms, Imaging
3.
Phys Med Biol ; 66(14)2021 07 09.
Article in English | MEDLINE | ID: mdl-34144545

ABSTRACT

Nanoparticle agents, combined with targeting factors that react with lesions, enable lesion-specific CT imaging, so identifying nanoparticle agents has the potential to improve clinical diagnosis. Thanks to the energy sensitivity of the photon-counting detector (PCD), the K-edge of nanoparticle agents in the clinical x-ray energy range can be exploited to identify the agents. In this paper, we propose a novel data-driven approach for nanoparticle agent identification using the PCD. We generate two sets of training data consisting of PCD measurements from calibration phantoms, one in the presence of the nanoparticle agent and the other in its absence. For a given sinogram of PCD counts, the proposed method calculates the normalized log-likelihood sinogram for each class (class 1: with the agent, class 2: without the agent) using the K nearest neighbors (KNN) estimator, backprojects the sinograms, and compares the backprojection images to identify the agent. We also proved that the proposed algorithm is equivalent to maximum likelihood-based classification. We studied the robustness to dose reduction with gold nanoparticles as the K-edge contrast medium and demonstrated that the proposed method identifies targets with different concentrations of the agents without background noise.
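
The per-class likelihood comparison can be sketched as below. The k-NN density estimator and the ray-wise comparison are simplifications (the paper backprojects the normalized log-likelihood sinograms and compares the images), and all names are hypothetical.

    import numpy as np
    from scipy.spatial import cKDTree

    def knn_log_likelihood(train_counts, query_counts, k=10):
        """k-NN density estimate of log p(query | class) from calibration PCD counts,
        using p ~ k / (N * V_d * r_k**d); the d-ball volume constant is dropped
        because it cancels when the two class likelihoods are compared."""
        n, d = train_counts.shape
        r, _ = cKDTree(train_counts).query(query_counts, k=k)
        r_k = np.maximum(r[:, -1], 1e-12)          # distance to the k-th neighbor
        return np.log(k / n) - d * np.log(r_k)

    def classify_rays(train_with_agent, train_without_agent, measured, k=10):
        """Ray-wise decision: does each measurement look more like the 'with agent'
        calibration class (True) or the 'without agent' class (False)?"""
        ll_with = knn_log_likelihood(train_with_agent, measured, k)
        ll_without = knn_log_likelihood(train_without_agent, measured, k)
        return ll_with > ll_without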


Subjects
Gold; Metal Nanoparticles; Likelihood Functions; Phantoms, Imaging; Photons; Tomography, X-Ray Computed
4.
Med Phys ; 45(11): 4822-4843, 2018 Nov.
Article in English | MEDLINE | ID: mdl-30136278

ABSTRACT

PURPOSE: Smaller pixel sizes of x-ray photon counting detectors (PCDs) benefit count rate capabilities but increase cross-talk and "double-counting" between neighboring PCD pixels. When an x-ray photon produces multiple (n) counts at neighboring (sub-)pixels and these counts are added during a post-acquisition N × N binning process, the variance of the final PCD output-pixel will be larger than its mean. Meanwhile, anti-scatter grids are placed at the pixel boundaries in most x-ray CT systems and will decrease cross-talk between sub-pixels, because the grids mask the sub-pixels underneath them, block the primary x-rays, and increase the separation distance between active sub-pixels. The aim of this paper was, first, to study the PCD statistics with various N × N binning schemes and three different masking methods in the presence of cross-talk, and second, to assess one of the most fundamental performance measures of x-ray CT: soft tissue contrast visibility. METHODS: We used a PCD cross-talk model (Photon counting toolkit, PcTK) to produce cross-talk data between 3 × 3 neighboring sub-pixels and calculated the mean, variance, and covariance of output-pixels with each N × N binning scheme [4 × 4 binning, 2 × 2 binning, and 1 × 1 binning (i.e., no binning)] and three different sub-pixel masking methods (no mask, 1-D mask, and 2-D mask). We then set up a simulation to evaluate soft tissue contrast visibility. X-rays of 120 kVp were attenuated by 10-40 cm of water, with the right side of the PCDs seeing 0.5 cm thicker water than the left side. A pair of output-pixels across the left-right boundary was used to assess the sensitivity index (SI or d'), which typically ranges from 0 to 1, is a generalized signal-to-noise ratio, and is a statistic used in signal detection theory. RESULTS: Binning a larger number of sub-pixels resulted in larger mean counts and a larger variance-to-mean ratio when the lower threshold of the energy window was below half of the incident energy. Mean counts were in the order of no mask (the largest), 1-D mask, and 2-D mask, but the difference in variance-to-mean ratio was small. For a given sub-pixel size and masking method, binning more sub-pixels degraded the normalized SI values, but the difference between 4 × 4 binning and 1 × 1 binning was typically less than 0.06. The 1-D mask provided better normalized SI values than no mask and the 2-D mask for the side-by-side case, and the improvements were larger with less binning, although the difference was less than 0.10. The 2-D mask was the best for the embedded case. The normalized SI values for combined binning, sub-pixel size, and masking were in the order of 1 × 1 (900 µm)² binning, 2 × 2 (450 µm)² binning, and 4 × 4 (225 µm)² binning for a given masking method, but the differences between them were typically 0.02-0.05. CONCLUSION: We have evaluated the effect of double-counting between PCD sub-pixels with various binning and masking methods. SI values were better with less binning and larger sub-pixels. The difference among the various binning and masking methods, however, was typically less than 0.06, which might result in a dose penalty of 13% if the CT system were linear.
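
A minimal sketch of the two quantities discussed above, the binned output-pixel statistics and the sensitivity index, assuming hypothetical names and a simple pooled-variance form of d' (the paper's normalized SI may be defined differently):

    import numpy as np

    def binned_pixel_stats(sub_means, sub_cov, bin_mask):
        """Mean and variance of an output pixel formed by summing the sub-pixels
        selected by bin_mask; double-counting makes sub_cov non-diagonal, which is
        why the output variance can exceed the output mean."""
        w = bin_mask.astype(float).ravel()
        mean = w @ sub_means.ravel()
        var = w @ sub_cov @ w
        return mean, var

    def sensitivity_index(mu_left, var_left, mu_right, var_right):
        """d' between the two output pixels straddling the contrast boundary,
        a generalized signal-to-noise ratio from signal detection theory."""
        return abs(mu_left - mu_right) / np.sqrt(0.5 * (var_left + var_right))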


Subjects
Photons; Scintillation Counting/instrumentation
5.
Invest Radiol ; 53(7): 432-439, 2018 07.
Article in English | MEDLINE | ID: mdl-29543692

ABSTRACT

OBJECTIVES: A novel imaging technique ("X-map") has been developed to identify acute ischemic lesions in stroke patients using non-contrast-enhanced dual-energy computed tomography (NE-DE-CT). Using the 3-material decomposition technique, the original X-map ("X-map 1.0") eliminates fat and bone from the images, suppresses the gray matter (GM)-white matter (WM) tissue contrast, and makes signals of edema induced by severe ischemia easier to detect. The aim of this study was to address the following 2 problems with the X-map 1.0: (1) biases in CT numbers (or artifacts) near the skull in NE-DE-CT images and (2) large intrapatient and interpatient variations in X-map 1.0 values. MATERIALS AND METHODS: We improved both an iterative beam-hardening correction (iBHC) method and the X-map algorithm. The new iBHC (iBHC2) modeled x-ray physics more accurately. The new X-map ("X-map 2.0") estimated regional GM values, thus maximizing the ability to suppress the GM-WM contrast, make edema signals quantitative, and enhance the edema signals that denote an increased water density for each pixel. We performed a retrospective study of 11 patients (3 men, 8 women; mean age, 76.3 years; range, 68-90 years) who presented to the emergency department with symptoms of acute stroke. Images were reconstructed with the old iBHC (iBHC1) and the iBHC2, and biases in CT numbers near the skull were measured. Both X-map 2.0 maps and X-map 1.0 maps were computed from iBHC2 images, both with and without a material decomposition-based edema signal enhancement (ESE) process. X-map values were measured at 5 to 9 locations on GM without infarct per patient; the mean value was calculated for each patient (the patient-mean X-map value) and subtracted from the measured X-map values to generate zero-mean X-map values. The standard deviation of the patient-mean X-map values over multiple patients denotes the interpatient variation; the standard deviation over multiple zero-mean X-map values denotes the intrapatient variation. The Levene F test was performed to assess the difference in the standard deviations between algorithms. Using data from 5 patients who had diffusion-weighted imaging (DWI) within 2 hours of NE-DE-CT, mean values at and near ischemic lesions were measured at 7 to 14 locations per patient on X-map images, CT images (low kV and high kV), and DWI images. The Pearson correlation coefficient was calculated between a normalized increase in DWI signals and either X-map or CT. RESULTS: The bias in CT numbers was lower with iBHC2 than with iBHC1 in both high- and low-kV images (2.5 ± 2.0 HU [95% confidence interval (CI), 1.3-3.8 HU] for iBHC2 vs 6.9 ± 2.3 HU [95% CI, 5.4-8.3 HU] for iBHC1 with high-kV images, P < 0.01; 1.5 ± 3.6 HU [95% CI, -0.8 to 3.7 HU] vs 12.8 ± 3.3 HU [95% CI, 10.7-14.8 HU] with low-kV images, P < 0.01). The interpatient variation was smaller with X-map 2.0 than with X-map 1.0, both with and without ESE (4.3 [95% CI, 3.0-7.6] for X-map 2.0 vs 19.0 [95% CI, 13.3-22.4] for X-map 1.0, both with ESE, P < 0.01; 3.0 [95% CI, 2.1-5.3] vs 12.0 [95% CI, 8.4-21.0] without ESE, P < 0.01). The intrapatient variation was also smaller with X-map 2.0 than with X-map 1.0 (6.2 [95% CI, 5.3-7.3] vs 8.5 [95% CI, 7.3-10.1] with ESE, P = 0.0122; 4.1 [95% CI, 3.6-4.9] vs 6.3 [95% CI, 5.5-7.6] without ESE, P < 0.01). The best 3 correlation coefficients (R) with DWI signals were -0.733 (95% CI, -0.845 to -0.560, P < 0.001) for X-map 2.0 with ESE, -0.642 (95% CI, -0.787 to -0.429, P < 0.001) for high-kV CT, and -0.609 (95% CI, -0.766 to -0.384, P < 0.001) for X-map 1.0 with ESE. CONCLUSION: Both problems outlined in the objectives were addressed by improving the iBHC and X-map algorithms. The iBHC2 reduced the bias in CT numbers and improved the visibility of GM-WM contrast throughout the brain. The combination of iBHC2 and X-map 2.0 with ESE decreased both intrapatient and interpatient variations of edema signals significantly and showed a strong correlation with DWI signals in terms of the strength of edema signals.
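
The variance decomposition and statistical tests described in the methods can be sketched as follows; the array names are hypothetical, and the commented calls only indicate where scipy's levene and pearsonr would be applied.

    import numpy as np
    from scipy.stats import levene, pearsonr

    def variation_components(xmap_values_per_patient):
        """xmap_values_per_patient: list of 1-D arrays, one per patient, of X-map
        values measured on non-infarcted GM. Returns (interpatient SD of the
        patient-mean values, intrapatient SD of the pooled zero-mean values)."""
        patient_means = np.array([v.mean() for v in xmap_values_per_patient])
        zero_mean = np.concatenate([v - v.mean() for v in xmap_values_per_patient])
        return patient_means.std(ddof=1), zero_mean.std(ddof=1)

    # F, p = levene(zero_mean_xmap20, zero_mean_xmap10)        # compare intrapatient variances
    # r, p = pearsonr(normalized_dwi_increase, xmap20_values)  # edema-signal correlation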


Subjects
Brain Ischemia/diagnostic imaging; Edema/diagnostic imaging; Image Enhancement/methods; Image Processing, Computer-Assisted/methods; Stroke/diagnostic imaging; Tomography, X-Ray Computed/methods; Aged; Aged, 80 and over; Algorithms; Brain/diagnostic imaging; Brain/physiopathology; Brain Ischemia/complications; Brain Ischemia/physiopathology; Edema/complications; Edema/physiopathology; Female; Humans; Male; Reproducibility of Results; Retrospective Studies; Stroke/complications; Stroke/physiopathology
6.
Med Phys ; 45(5): 1985-1998, 2018 May.
Article in English | MEDLINE | ID: mdl-29537627

ABSTRACT

PURPOSE: The interpixel cross-talk of energy-sensitive photon counting x-ray detectors (PCDs) has been studied and an analytical model (version 2.1) has been developed for double-counting between neighboring pixels due to charge sharing and K-shell fluorescence x-ray emission followed by its reabsorption (Taguchi K, et al., Medical Physics 2016;43(12):6386-6404). While model version 2.1 simulated the spectral degradation well, it had the following problems that have recently been found to be significant: (1) the spectrum is inaccurate for smaller pixel sizes; (2) the charge cloud size must be smaller than the pixel size; (3) the model underestimates the spectrum/counts for 10-40 keV; and (4) the model cannot handle n-tuple-counting with n > 2 (i.e., triple-counting or higher). These problems are inherent to the design of model version 2.1; therefore, we developed a new model that addresses them in this study. METHODS: We propose a new PCD cross-talk model (version 3.2; PcTK for "photon counting toolkit") that is based on a completely different design concept from the previous version. It uses a numerical approach and starts with a 2-D model of charge sharing (as opposed to an analytical approach and a 1-D model in version 2.1) and addresses all four problems. The model takes the following factors into account: (1) shift-variant electron density of the charge cloud (Gaussian-distributed), (2) detection efficiency, (3) interactions between photons and PCDs via the photoelectric effect, and (4) electronic noise. Correlated noisy PCD data can be generated using either a multivariate normal random number generator or a Poisson random number generator. The effect of the two parameters, the effective charge cloud diameter (d0) and the pixel size (dpix), was studied, and results were compared with Monte Carlo (MC) simulations and the previous model version 2.1. Finally, a script for a CT image quality assessment workflow was developed, which started from a few material density images, generated material-specific sinogram (line-integral) data and noisy PCD data with spectral distortion using model version 3.2, and reconstructed PCD-CT images for four energy windows. RESULTS: Model version 3.2 addressed all four problems listed above. The spectra with dpix = 56-113 µm agreed qualitatively with those of a Medipix3 detector with dpix = 55-110 µm without charge summing mode. The counts for 10-40 keV were larger than with the previous model (version 2.1) and agreed with MC simulations very well (root-mean-square difference values with model version 3.2 decreased to 16%-67% of the values with version 2.1). There were many non-zero off-diagonal elements associated with n-tuple-counting with n > 2 in the normalized covariance matrix of 3 × 3 neighboring pixels. Reconstructed images showed biases and artifacts attributed to the spectral distortion due to charge sharing and fluorescence x-rays. CONCLUSION: We have developed a new PCD model for spatio-energetic cross-talk and correlation between PCD pixels. The workflow demonstrated the utility of the model for general or task-specific image quality assessments for PCD-CT. Note: The program (PcTK) and the workflow scripts have been made available to academic researchers. Interested readers should visit the website (pctk.jhu.edu) or contact the corresponding author.
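
One of the two data-generation modes mentioned above (the multivariate normal route) can be sketched in a few lines; the function name and interface are hypothetical, and the rounding/clipping is a simplification of what PcTK actually does.

    import numpy as np

    def correlated_pcd_counts(mean, cov, n_realizations, seed=None):
        """Draw correlated noisy PCD counts for, e.g., a 3 x 3 pixel neighborhood.

        mean : (K,) expected counts per (pixel, energy-bin) element
        cov  : (K, K) covariance with cross-talk-induced off-diagonal terms
        """
        rng = np.random.default_rng(seed)
        x = rng.multivariate_normal(mean, cov, size=n_realizations)
        return np.clip(np.rint(x), 0, None).astype(int)   # non-negative integer counts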


Subjects
Monte Carlo Method; Photons; Quality Assurance, Health Care/methods; Tomography, X-Ray Computed; Workflow; Image Processing, Computer-Assisted; Signal-To-Noise Ratio
7.
IEEE Trans Med Imaging ; 36(11): 2389-2403, 2017 11.
Article in English | MEDLINE | ID: mdl-28866486

ABSTRACT

Photon counting detectors (PCDs) provide multiple energy-dependent measurements for estimating basis line-integrals. However, the measured spectrum is distorted by the spectral response effect (SRE) via charge sharing, K-fluorescence emission, and so on. Thus, in order to avoid bias and artifacts in images, the SRE needs to be compensated. For this purpose, we recently developed a computationally efficient three-step algorithm for PCD-CT without contrast agents by approximating smooth X-ray transmittance using low-order polynomial bases. It compensated for the SRE by incorporating the SRE model in a linearized estimation process and achieved nearly the minimum variance and unbiased (MVU) estimator. In this paper, we extend the three-step algorithm to K-edge imaging applications by designing optimal bases using a low-rank approximation to model X-ray transmittances with arbitrary shapes (i.e., smooth without the K-edge or discontinuous with the K-edge). The bases can be used to approximate the X-ray transmittance and to linearize the PCD measurement model, and the three-step estimator can then be derived as in the previous approach: estimating the X-ray transmittance in the first step, estimating basis line-integrals including that of the contrast agent in the second step, and correcting for bias in the third step. We demonstrate that the proposed method is more accurate and stable than the low-order polynomial-based approaches through extensive simulation studies using gadolinium for the K-edge imaging application. We also demonstrate that the proposed method nearly achieves the MVU estimator and is more stable than the conventional maximum likelihood estimator in high-attenuation cases with fewer photon counts.
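
The basis-design step can be illustrated with a small sketch: take representative transmittance curves (with and without the K-edge) and use their leading singular vectors as the low-rank bases. The names are hypothetical, and this is only the generic Eckart-Young construction, not the paper's specific basis optimization.

    import numpy as np

    def transmittance_bases(transmittance_samples, rank):
        """transmittance_samples : (N, E) X-ray transmittances sampled over E energies
        for N representative thickness/contrast combinations. Returns (rank, E)
        orthonormal bases; the leading right singular vectors give the best rank-r
        approximation in the least-squares sense."""
        _, _, vt = np.linalg.svd(transmittance_samples, full_matrices=False)
        return vt[:rank]

    def project(transmittance, bases):
        """Coefficients and low-rank approximation of one transmittance curve."""
        coeffs = bases @ transmittance
        return coeffs, bases.T @ coeffs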


Subjects
Algorithms; Image Processing, Computer-Assisted/methods; Radiography/methods; Contrast Media; Gadolinium; Humans; Models, Statistical; Phantoms, Imaging; Photons; Radiography, Abdominal
8.
Med Phys ; 43(12): 6386, 2016 Dec.
Article in English | MEDLINE | ID: mdl-27908175

ABSTRACT

PURPOSE: An x-ray photon interacts with photon counting detectors (PCDs) and generates an electron charge cloud or multiple clouds. The clouds (and thus the photon energy) may be split between two adjacent PCD pixels when the interaction occurs near pixel boundaries, producing a count at both pixels. This is called double-counting with charge sharing. (A photoelectric effect with K-shell fluorescence x-ray emission would result in double-counting as well.) As a result, PCD data are spatially and energetically correlated, although the output of individual PCD pixels is Poisson distributed. Major problems include the lack of a detector noise model for the spatio-energetic cross-talk and the lack of a computationally efficient simulation tool for generating correlated Poisson data. A Monte Carlo (MC) simulation can accurately simulate these phenomena and produce noisy data; however, it is not computationally efficient. METHODS: In this study, the authors developed a new detector model and implemented it in an efficient software simulator that uses a Poisson random number generator to produce correlated noisy integer counts. The detector model takes the following effects into account: (1) detection efficiency; (2) incomplete charge collection and ballistic effect; (3) interaction with PCDs via the photoelectric effect (with or without K-shell fluorescence x-ray emission, which may escape from the PCDs or be reabsorbed); and (4) electronic noise. The correlation was modeled using two simplifying assumptions: energy conservation and mutual exclusiveness, the latter meaning that no more than two pixels measure energy from one photon. The effect of the model parameters was studied and results were compared with MC simulations. The agreement, with respect to the spectrum, was evaluated using the reduced χ² statistic, a weighted sum of squared errors, χ²_red (≥1), where χ²_red = 1 indicates a perfect fit. RESULTS: The model produced spectra with flat field irradiation that qualitatively agree with previous studies. The spectra generated with different model and geometry parameters allowed for understanding the effect of the parameters on the spectrum and the correlation of data. The agreement between the model and MC data was very strong. The mean spectra with 90 keV and 140 kVp agreed exceptionally well: χ²_red values were 1.049 with 90 keV data and 1.007 with 140 kVp data. The degrees of cross-talk (in terms of the relative increase from single-pixel irradiation to flat field irradiation) were 22% with 90 keV and 19% with 140 kVp for MC simulations, and 21% and 17%, respectively, for the model. The covariance was in strong qualitative agreement, although it was overestimated. The noisy data generation was very efficient, taking less than a CPU minute as opposed to CPU hours for MC simulators. CONCLUSIONS: The authors have developed a novel, computationally efficient PCD model that takes into account double-counting and the resulting spatio-energetic correlation between PCD pixels. The MC simulation validated its accuracy.
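
The goodness-of-fit measure used above is, in essence, the following; the variance input and the function name are assumptions of this sketch.

    import numpy as np

    def reduced_chi_square(model_spectrum, mc_spectrum, mc_variance, n_params=0):
        """Reduced chi-square between a model-predicted spectrum and a Monte Carlo
        reference (a weighted sum of squared errors; 1 indicates a perfect fit)."""
        resid = model_spectrum - mc_spectrum
        dof = resid.size - n_params
        return np.sum(resid**2 / np.maximum(mc_variance, 1e-12)) / dof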


Subjects
Models, Theoretical; Photons; Poisson Distribution; Spectrum Analysis
9.
Opt Express ; 21(22): 26589-604, 2013 Nov 04.
Article in English | MEDLINE | ID: mdl-24216880

ABSTRACT

Some optical properties of a highly scattering medium, such as tissue, can be reconstructed non-invasively by diffuse optical tomography (DOT). Since the inverse problem of DOT is severely ill-posed and nonlinear, iterative methods that update Green's function have been widely used to recover accurate optical parameters. However, recent research has shown that the joint sparse recovery principle can provide an important clue in achieving reconstructions without an iterative update of Green's function. One of the main limitations of the previous work is that it can only be applied to absorption parameter reconstruction. In this paper, we extended this theory to estimate the absorption and scattering parameters simultaneously when the background optical properties are known. The main idea for such an extension is that a joint sparse recovery step gives us unknown fluence on the estimated support set, which eliminates the nonlinearity in an integral equation for the simultaneous estimation of the optical parameters. Our numerical results show that the proposed algorithm reduces the cross-talk artifacts between the parameters and provides improved reconstruction results compared to existing methods.


Subjects
Absorption; Algorithms; Image Interpretation, Computer-Assisted/methods; Light; Scattering, Radiation; Tomography, Optical/methods; Computer Simulation; Models, Theoretical; Reproducibility of Results; Sensitivity and Specificity
10.
IEEE Trans Med Imaging ; 30(5): 1129-42, 2011 May.
Article in English | MEDLINE | ID: mdl-21402507

ABSTRACT

Diffuse optical tomography (DOT) is a sensitive and relatively low-cost imaging modality that reconstructs the optical properties of a highly scattering medium. However, due to the diffusive nature of light propagation, the problem is severely ill-conditioned and highly nonlinear. Even though nonlinear iterative methods have been commonly used, they are computationally expensive, especially for three-dimensional imaging geometries. Recently, compressed sensing theory has provided a systematic understanding of high-resolution reconstruction of sparse objects in many imaging problems; hence, the goal of this paper is to extend the theory to the diffuse optical tomography problem. The main contributions of this paper are to formulate the imaging problem as a joint sparse recovery problem in a compressive sensing framework and to propose a novel noniterative and exact inversion algorithm that achieves ℓ0 optimality as the rank of the measurements increases to the unknown sparsity level. The algorithm is based on the recently discovered generalized MUSIC criterion, which exploits the advantages of both compressive sensing and array signal processing. A theoretical criterion for optimizing the imaging geometry is provided, and simulation results confirm that the new algorithm outperforms existing algorithms and reliably reconstructs the optical inhomogeneities when the optical background is known to reasonable accuracy.
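
A generic MUSIC-style support-localization step (not the paper's generalized MUSIC criterion) can be sketched as below, assuming the background Green's function is known; all names are hypothetical and the DOT-specific modeling is omitted.

    import numpy as np

    def music_scores(measurements, greens_columns, sparsity):
        """measurements   : (m, r) multi-illumination measurement matrix
        greens_columns : (m, p) background Green's function sampled at p candidate
                         voxel locations
        sparsity       : assumed number of optical inhomogeneities
        Returns one score per candidate location; peaks indicate the support."""
        u, _, _ = np.linalg.svd(measurements, full_matrices=False)
        signal_sub = u[:, :sparsity]                        # signal subspace
        g = greens_columns / np.linalg.norm(greens_columns, axis=0, keepdims=True)
        resid = g - signal_sub @ (signal_sub.T @ g)         # noise-subspace component
        return 1.0 / np.maximum(np.sum(resid**2, axis=0), 1e-12)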


Subjects
Image Processing, Computer-Assisted/methods; Signal Processing, Computer-Assisted; Tomography, Optical/methods; Algorithms; Animals; Computer Simulation; Mice; Models, Theoretical; Phantoms, Imaging; Tomography, Optical/instrumentation