Results 1 - 20 of 24
1.
Phys Med Biol ; 68(24), 2023 Dec 08.
Article in English | MEDLINE | ID: mdl-37802071

ABSTRACT

Objective. Over the past several decades, dual-energy CT (DECT) imaging has seen significant advancements due to its ability to distinguish between materials. DECT statistical iterative reconstruction (SIR) has exhibited potential for noise reduction and enhanced accuracy. However, its slow convergence and substantial computational demands render the elapsed time for 3D DECT SIR often clinically unacceptable. The objective of this study is to accelerate 3D DECT SIR while maintaining subpercentage or near-subpercentage accuracy. Approach. We incorporate DECT SIR into a deep-learning model-based unrolling network for 3D DECT reconstruction (MB-DECTNet), which can be trained end-to-end. This deep learning-based approach is designed to learn shortcuts between initial conditions and the stationary points of iterative algorithms while preserving the unbiased estimation property of model-based algorithms. MB-DECTNet comprises multiple stacked update blocks, each containing a data consistency layer (DC) and a spatial mixer layer, with the DC layer functioning as a one-step update from any traditional iterative algorithm. Main results. The quantitative results indicate that our proposed MB-DECTNet surpasses both the traditional image-domain technique (MB-DECTNet reduces average bias by a factor of 10) and a pure deep learning method (MB-DECTNet reduces average bias by a factor of 8.8), offering the potential for accurate attenuation coefficient estimation, akin to traditional statistical algorithms, but with considerably reduced computational costs. This approach achieves 0.13% bias and 1.92% mean absolute error and reconstructs a full image of a head in less than 12 min. Additionally, we show that the MB-DECTNet output can serve as an initializer for DECT SIR, leading to further improvements in results. Significance. This study presents a model-based deep unrolling network for accurate 3D DECT reconstruction, achieving subpercentage error in estimating virtual monoenergetic images for a full head at 60 and 150 keV in 30 min, representing a 40-fold speedup compared to traditional approaches. These findings have significant implications for accelerating DECT SIR and making it more clinically feasible.


Subject(s)
Head; Image Processing, Computer-Assisted; Image Processing, Computer-Assisted/methods; Phantoms, Imaging; Tomography, X-Ray Computed/methods; Algorithms
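The update block described in this abstract pairs a data-consistency (DC) step with a learned spatial mixer. The paper's exact operators and layers are not given here, so the following PyTorch sketch is a hypothetical, minimal analogue: the DC layer is written as one gradient-type update of a quadratic data-fit term under an assumed linear forward operator A (with adjoint At), and the spatial mixer is a small convolutional block (2D here for brevity, whereas the paper targets 3D reconstruction).

```python
import torch
import torch.nn as nn

class UpdateBlock(nn.Module):
    """One unrolled stage: a data-consistency (DC) step followed by a learned spatial mixer.
    The forward operator A and its adjoint At are assumptions supplied by the caller."""
    def __init__(self, channels=2):
        super().__init__()
        self.step = nn.Parameter(torch.tensor(0.1))       # learned DC step size
        self.mixer = nn.Sequential(                        # hypothetical spatial mixer
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, x, y, A, At):
        x = x - self.step * At(A(x) - y)                   # one-step update of a data-fit term
        return x + self.mixer(x)                           # residual spatial mixing

class MBDECTNetSketch(nn.Module):
    """Stack of update blocks trained end-to-end (a sketch, not the published architecture)."""
    def __init__(self, n_blocks=4, channels=2):
        super().__init__()
        self.blocks = nn.ModuleList([UpdateBlock(channels) for _ in range(n_blocks)])

    def forward(self, x0, y, A, At):
        x = x0
        for blk in self.blocks:
            x = blk(x, y, A, At)
        return x

# Toy usage with an identity "forward model" standing in for the CT system operator.
if __name__ == "__main__":
    A = At = lambda z: z
    net = MBDECTNetSketch()
    x0 = torch.zeros(1, 2, 64, 64)           # initial low/high-energy basis images
    y = torch.randn(1, 2, 64, 64)            # stand-in for preprocessed measurements
    print(net(x0, y, A, At).shape)
```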
2.
Phys Med Biol ; 68(14), 2023 Jul 05.
Article in English | MEDLINE | ID: mdl-37327796

ABSTRACT

Objective. Dual-energy computed tomography (DECT) has been widely used to reconstruct numerous types of images due to its ability to better discriminate tissue properties. Sequential scanning is a popular dual-energy data acquisition method as it requires no specialized hardware. However, patient motion between two sequential scans may lead to severe motion artifacts in DECT statistical iterative reconstruction (SIR) images. The objective is to reduce the motion artifacts in such reconstructions. Approach. We propose a motion-compensation scheme that incorporates a deformation vector field into any DECT SIR. The deformation vector field is estimated via the multi-modality symmetric deformable registration method. The precalculated registration mapping and its inverse or adjoint are then embedded into each iteration of the iterative DECT algorithm. Main results. Results from a simulated case and a clinical case show that the proposed framework is capable of reducing motion artifacts in DECT SIRs. Percentage mean square errors in regions of interest in the simulated and clinical cases were reduced from 4.6% to 0.5% and 6.8% to 0.8%, respectively. A perturbation analysis was then performed to determine errors in approximating the continuous deformation by using the deformation field and interpolation. Our findings show that errors in our method are mostly propagated through the target image and amplified by the inverse matrix of the combination of the Fisher information and Hessian of the penalty term. Significance. We have proposed a novel motion-compensation scheme that incorporates a 3D registration method into the joint statistical iterative DECT algorithm in order to reduce motion artifacts caused by inter-scan motion. We successfully demonstrate that inter-scan motion corrections can be integrated into the DECT SIR process, enabling accurate imaging of radiological quantities on conventional single-energy CT (SECT) scanners, without significant loss of either computational efficiency or accuracy.


Subject(s)
Algorithms; Tomography, X-Ray Computed; Humans; Tomography, X-Ray Computed/methods; Motion; Phantoms, Imaging; Artifacts
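The scheme above precomputes a deformation vector field (DVF) and embeds the mapping and its inverse or adjoint in every iteration of the DECT SIR. As a rough single-energy illustration only (the published method operates inside a joint dual-energy statistical algorithm), the sketch below wraps a plain least-squares gradient step with a DVF warp implemented via scipy's map_coordinates, using the inverse warp as an approximate adjoint.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp(img, dvf):
    """Resample img at positions displaced by dvf (shape (2, H, W)); linear interpolation."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    coords = np.array([yy + dvf[0], xx + dvf[1]])
    return map_coordinates(img, coords, order=1, mode="nearest")

def motion_compensated_step(x, y, A, At, dvf, dvf_inv, step=0.5):
    """One gradient step on ||A(warp(x)) - y||^2, with the precomputed warp embedded in the
    update and its (approximate) inverse used as the adjoint, per the idea described above."""
    residual = A(warp(x, dvf)) - y            # forward model applied to the warped image
    return x - step * warp(At(residual), dvf_inv)

# Toy usage: identity system operator, a small synthetic translation as the "motion".
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    truth = rng.random((64, 64))
    dvf = np.zeros((2, 64, 64)); dvf[1] = 1.5           # 1.5-pixel horizontal shift
    dvf_inv = -dvf                                       # exact inverse for a pure translation
    A = At = lambda z: z
    y = A(warp(truth, dvf))
    x = np.zeros_like(truth)
    for _ in range(200):
        x = motion_compensated_step(x, y, A, At, dvf, dvf_inv)
    print(float(np.abs(x - truth).mean()))               # small residual error
```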
3.
Med Phys ; 49(3): 1599-1618, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35029302

ABSTRACT

PURPOSE: To assess the potential of a joint dual-energy computerized tomography (CT) reconstruction process (a joint statistical image reconstruction method built on a basis vector model, JSIR-BVM) implemented on a 16-slice commercial CT scanner to measure high spatial resolution stopping-power ratio (SPR) maps with uncertainties of less than 1%. METHODS: JSIR-BVM was used to reconstruct images of effective electron density and mean excitation energy from dual-energy CT (DECT) sinograms for 10 high-purity samples of known density and atomic composition inserted into head and body phantoms. The measured DECT data consisted of 90 and 140 kVp axial sinograms serially acquired on a Philips Brilliance Big Bore CT scanner without beam-hardening corrections. The corresponding SPRs were subsequently measured directly via ion chamber measurements on a MEVION S250 superconducting synchrocyclotron and evaluated theoretically from the known sample compositions and densities. Deviations of SPR values from their theoretically calculated and directly measured ground-truth values were evaluated for our JSIR-BVM method and our implementation of the Hünemohr-Saito (H-S) DECT image-domain decomposition technique for SPR imaging. A thorough uncertainty analysis was then performed for five different scenarios (comparison of JSIR-BVM stopping-power ratio/stopping power (SPR/SP) to International Commission on Radiation Units and Measurements benchmarks; comparison of JSIR-BVM SPR to measured benchmarks; and uncertainties in JSIR-BVM SPR/SP maps for patients of unknown composition) per the Joint Committee for Guides in Metrology and the Guide to the Expression of Uncertainty in Measurement, including the impact of uncertainties in measured photon spectra, sample composition and density, photon cross section and I-value models, and random measurement uncertainty. Estimated SPR uncertainty for three main tissue groups in patients of unknown composition and the weighted proportion of each tissue type for three proton treatment sites were then used to derive a composite range uncertainty for our method. RESULTS: Mean JSIR-BVM SPR estimates deviated by less than 1% from their theoretical and directly measured ground-truth values for most inserts and phantom geometries except for high-density Delrin and Teflon samples with SPR error relative to proton measurements of 1.1% and -1.0% (head phantom) and 1.1% and -1.1% (body phantom). The overall root-mean-square (RMS) deviations over all samples were 0.39% and 0.52% (head phantom) and 0.43% and 0.57% (body phantom) relative to theoretical and directly measured ground-truth SPRs, respectively. The corresponding RMS (maximum) errors for the image-domain decomposition method were 2.68% and 2.73% (4.68% and 4.99%) for the head phantom and 0.71% and 0.87% (1.37% and 1.66%) for the body phantom. Compared to H-S SPR maps, JSIR-BVM yielded 30% sharper and twofold sharper images for soft tissues and bone-like surrogates, respectively, while reducing noise by factors of 6 and 3, respectively. The uncertainty (coverage factor k = 1) of the DECT-to-benchmark values comparison ranged from 0.5% to 1.5% and is dominated by scanning-beam photon-spectra uncertainties. An analysis of the SPR uncertainty for patients of unknown composition showed a JSIR-BVM uncertainty of 0.65%, 1.21%, and 0.77% for soft-, lung-, and bony-tissue groups, which led to a composite range uncertainty of 0.6-0.9%.
CONCLUSIONS: Observed JSIR-BVM SPR estimation errors were all less than 50% of the estimated k = 1 total uncertainty of our benchmarking experiment, demonstrating that JSIR-BVM high spatial resolution, low-noise SPR mapping is feasible and is robust to variations in the geometry of the scanned object. In contrast, the much larger H-S SPR estimation errors are dominated by imaging noise and residual beam-hardening artifacts. While the uncertainties characteristic of our current JSIR-BVM implementation can be as large as 1.5%, achieving < 1% total uncertainty is feasible by improving the accuracy of scanner-specific scatter-profile and photon-spectrum estimates. With its robustness to beam-hardening artifact, image noise, and variations in phantom size and geometry, JSIR-BVM has the potential to achieve high spatial-resolution SPR mapping with subpercentage accuracy and estimated uncertainty in the clinical setting.


Subject(s)
Protons; Tomography, X-Ray Computed; Humans; Image Processing, Computer-Assisted/methods; Phantoms, Imaging; Tomography, X-Ray Computed/methods; Uncertainty
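The final step above combines per-tissue-group SPR uncertainties with the proportion of each tissue type traversed by the beam to obtain a composite range uncertainty. The snippet below shows one plausible combination (path-fraction weighting with a quadrature sum) using the k = 1 group uncertainties quoted in the abstract; the path fractions are invented for illustration and are not the values used in the paper.

```python
import numpy as np

# Per-tissue-group SPR uncertainties (k = 1) reported in the abstract above, in percent.
u = {"soft": 0.65, "lung": 1.21, "bone": 0.77}

# Hypothetical water-equivalent path fractions for one treatment site (must sum to 1).
w = {"soft": 0.80, "lung": 0.05, "bone": 0.15}

# One plausible composite: quadrature sum of path-weighted group uncertainties.
composite = np.sqrt(sum((w[t] * u[t]) ** 2 for t in u))
print(f"composite range uncertainty ~ {composite:.2f}%")
```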
4.
Article in English | MEDLINE | ID: mdl-32025078

ABSTRACT

Photon counting CT (PCCT) is an x-ray imaging technique that has undergone great development in the past decade. PCCT has the potential to improve dose efficiency and low-dose performance. In this paper, we propose a statistics-based iterative algorithm to perform a direct reconstruction of material-decomposed images. Compared with the conventional sinogram-based decomposition method, which has degraded performance in low-dose scenarios, the multi-energy alternating minimization algorithm for photon counting CT (MEAM-PCCT) can generate accurate material-decomposed images with much smaller biases.

5.
Med Phys ; 46(1): 273-285, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30421790

ABSTRACT

PURPOSE: To experimentally commission a dual-energy CT (DECT) joint statistical image reconstruction (JSIR) method, which is built on a linear basis vector model (BVM) of material characterization, for proton stopping power ratio (SPR) estimation. METHODS: The JSIR-BVM method builds on the relationship between the energy-dependent photon attenuation coefficients and the proton stopping power via a pair of BVM component weights. The two BVM component images are simultaneously reconstructed from the acquired DECT sinograms and then used to predict the electron density and mean excitation energy (I-value), which are required by the Bethe equation for SPR computation. A post-reconstruction image-based DECT method, which utilizes the two separate CT images reconstructed via the scanner's software, was implemented for comparison. The DECT measurement data were acquired on a Philips Brilliance scanner at 90 and 140 kVp for two phantoms of different sizes. Each phantom contains 12 different soft and bony tissue surrogates with known compositions. The SPR estimation results were compared to the reference values computed from the known compositions. The differences in the computed water equivalent path lengths (WEPL) across the phantoms between the two methods were also compared. RESULTS: The overall root-mean-square (RMS) SPR estimation errors of the JSIR-BVM method are 0.33% and 0.37% for the head- and body-sized phantoms, respectively, and all SPR estimates of the test samples are within 0.7% of the reference ground truth. The image-based method achieves overall RMS errors of 2.35% and 2.50% for the head- and body-sized phantoms, respectively. The JSIR-BVM method also reduces the pixel-wise random variation by 4-fold to 6-fold within homogeneous regions compared to the image-based method. The average differences between the JSIR-BVM method and the image-based method are 0.54% and 1.02% for the head- and body-sized phantoms, respectively. CONCLUSION: By taking advantage of an accurate polychromatic CT data model and a model-based DECT statistical reconstruction algorithm, the JSIR-BVM method accounts for both systematic bias and random noise in the acquired DECT measurement data. Therefore, the JSIR-BVM method achieves good accuracy and precision on proton SPR estimation for various tissue surrogates and object sizes. In contrast, the experimentally achievable accuracy of the image-based method may be limited by the uncertainties in the image formation process. These results suggest that the JSIR-BVM method has the potential for more accurate SPR prediction compared to post-reconstruction image-based methods in clinical settings.


Subject(s)
Image Processing, Computer-Assisted/methods; Protons; Tomography, X-Ray Computed; Phantoms, Imaging
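The JSIR-BVM pipeline described above converts the reconstructed electron density and I-value into SPR through the Bethe equation. The snippet below evaluates the standard Bethe-based SPR relative to water for a given relative electron density, I-value, and proton kinetic energy (no shell or density-effect corrections); the numerical inputs in the example are illustrative, not values taken from the paper.

```python
import numpy as np

MEC2 = 0.511e6    # electron rest energy, eV
MPC2 = 938.272e6  # proton rest energy, eV

def spr_bethe(rho_e_rel, i_value_ev, t_mev, i_water_ev=75.0):
    """Stopping-power ratio to water from relative electron density and I-value
    (Bethe formula without corrections); 75 eV is a commonly used water I-value."""
    e_total = t_mev * 1e6 + MPC2
    beta2 = 1.0 - (MPC2 / e_total) ** 2
    def bracket(i_ev):
        return np.log(2.0 * MEC2 * beta2 / (i_ev * (1.0 - beta2))) - beta2
    return rho_e_rel * bracket(i_value_ev) / bracket(i_water_ev)

# Illustrative inputs (not taken from the paper): a cortical-bone-like tissue at 150 MeV.
print(round(spr_bethe(rho_e_rel=1.70, i_value_ev=110.0, t_mev=150.0), 3))
```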
6.
Sci Rep ; 8(1): 9286, 2018 Jun 18.
Article in English | MEDLINE | ID: mdl-29915334

ABSTRACT

Computed tomography (CT) examinations are commonly used to predict lung nodule malignancy in patients and have been shown to improve noninvasive early diagnosis of lung cancer. It remains challenging for computational approaches to achieve performance comparable to that of experienced radiologists. Here we present NoduleX, a systematic approach to predict lung nodule malignancy from CT data, based on deep learning convolutional neural networks (CNNs). For training and validation, we analyze >1000 lung nodules in images from the LIDC/IDRI cohort. All nodules were identified and classified by four experienced thoracic radiologists who participated in the LIDC project. NoduleX achieves high accuracy for nodule malignancy classification, with an AUC of ~0.99. This is commensurate with the analysis of the dataset by experienced radiologists. Our approach, NoduleX, provides an effective framework for highly accurate nodule malignancy prediction with the model trained on a large patient population. Our results are replicable with software available at http://bioinformatics.astate.edu/NoduleX.


Subject(s)
Lung Neoplasms/diagnostic imaging; Lung Neoplasms/diagnosis; Models, Biological; Solitary Pulmonary Nodule/diagnostic imaging; Tomography, X-Ray Computed; Cohort Studies; Databases as Topic; Humans; Image Processing, Computer-Assisted; Neural Networks, Computer; ROC Curve; Software
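NoduleX is described above as a CNN-based malignancy classifier for CT nodules. The published architecture is not reproduced here; the sketch below is a generic small 3D CNN in PyTorch showing the overall pattern (volumetric patch in, malignancy probability out), under assumed patch size and layer choices.

```python
import torch
import torch.nn as nn

class NoduleCNNSketch(nn.Module):
    """A small 3D CNN for nodule malignancy scoring on fixed-size CT patches.
    A generic stand-in for the CNN described above, not the published NoduleX model."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 8 * 8 * 8, 64), nn.ReLU(), nn.Linear(64, 1),
        )

    def forward(self, x):                          # x: (batch, 1, 32, 32, 32) CT patch
        return self.classifier(self.features(x))   # malignancy logit

if __name__ == "__main__":
    model = NoduleCNNSketch()
    patch = torch.randn(4, 1, 32, 32, 32)          # four hypothetical nodule patches
    print(torch.sigmoid(model(patch)).shape)       # (4, 1) malignancy probabilities
```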
7.
Med Phys ; 45(5): 2129-2142, 2018 May.
Article in English | MEDLINE | ID: mdl-29570809

ABSTRACT

PURPOSE: The purpose of this study was to assess the performance of a novel dual-energy CT (DECT) approach for proton stopping power ratio (SPR) mapping that integrates image reconstruction and material characterization using a joint statistical image reconstruction (JSIR) method based on a linear basis vector model (BVM). A systematic comparison between the JSIR-BVM method and previously described DECT image- and sinogram-domain decomposition approaches is also carried out on synthetic data. METHODS: The JSIR-BVM method was implemented to estimate the electron densities and mean excitation energies (I-values) required by the Bethe equation for SPR mapping. In addition, image- and sinogram-domain DECT methods based on three available SPR models, including BVM, were implemented for comparison. The intrinsic SPR modeling accuracy of the three models was first validated. Synthetic DECT transmission sinograms of two 330 mm diameter phantoms, each containing 17 soft and bony tissues (for a total of 34) of known composition, were then generated with spectra of 90 and 140 kVp. The estimation accuracy of the reconstructed SPR images was evaluated for the seven investigated methods. The impact of phantom size and insert location on SPR estimation accuracy was also investigated. RESULTS: All three selected DECT-SPR models predict the SPR of all tissue types with less than 0.2% RMS errors under idealized conditions with no reconstruction uncertainties. When applied to synthetic sinograms, the JSIR-BVM method achieves the best performance with mean and RMS-average errors of less than 0.05% and 0.3%, respectively, for all noise levels, while the image- and sinogram-domain decomposition methods show increasing mean and RMS-average errors with increasing noise level. The JSIR-BVM method also reduces statistical SPR variation by sixfold compared to the other methods. A 25% phantom diameter change causes up to 4% SPR differences for the image-domain decomposition approach, while the JSIR-BVM method and sinogram-domain decomposition methods are insensitive to size change. CONCLUSION: Among all the investigated methods, the JSIR-BVM method achieves the best performance for SPR estimation in our simulation phantom study. This novel method is robust with respect to sinogram noise and residual beam-hardening effects, yielding SPR estimation errors comparable to the intrinsic BVM modeling error. In contrast, the achievable SPR estimation accuracy of the image- and sinogram-domain decomposition methods is dominated by the CT image intensity uncertainties introduced by the reconstruction and decomposition processes.


Subject(s)
Image Processing, Computer-Assisted/methods; Protons; Statistics as Topic; Tomography, X-Ray Computed; Algorithms; Uncertainty
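One of the comparison methods above is image-domain decomposition, in which basis component weights are solved for pixel-by-pixel from the two reconstructed CT images. The sketch below shows that per-pixel 2x2 linear solve with fabricated effective basis attenuation values (the real method must additionally contend with beam hardening and noise, which is the point of the comparison).

```python
import numpy as np

# Image-domain two-material decomposition: at each pixel, solve
#   [mu_low ]   [f1(E_low)  f2(E_low) ] [c1]
#   [mu_high] = [f1(E_high) f2(E_high)] [c2]
# for the basis weights (c1, c2). The effective basis values below are made up for illustration.
B = np.array([[0.28, 0.35],      # basis attenuations at the low-kVp effective energy (1/cm)
              [0.22, 0.25]])     # basis attenuations at the high-kVp effective energy (1/cm)

def decompose(mu_low, mu_high):
    """Per-pixel linear solve for the two basis weights from two co-registered CT images."""
    rhs = np.stack([mu_low.ravel(), mu_high.ravel()])      # (2, n_pixels)
    c = np.linalg.solve(B, rhs)                            # (2, n_pixels)
    return c.reshape(2, *mu_low.shape)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    c_true = rng.random((2, 32, 32))
    mu_low, mu_high = np.tensordot(B, c_true, axes=1)      # synthetic noiseless images
    c_est = decompose(mu_low, mu_high)
    print(float(np.abs(c_est - c_true).max()))             # ~0 up to round-off
```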
8.
Article in English | MEDLINE | ID: mdl-28572719

ABSTRACT

Model-based image reconstruction (MBIR) techniques have the potential to generate high-quality images from noisy measurements and a small number of projections, which can reduce the x-ray dose to patients. These MBIR techniques rely on projection and backprojection to refine an image estimate. One widely used projector for these modern MBIR-based techniques is the branchless distance-driven (DD) projector and backprojector. While this method produces superior quality images, the computational cost of iterative updates keeps it from being ubiquitous in clinical applications. In this paper, we provide several new parallelization ideas for concurrent execution of the DD projectors in multi-GPU systems using CUDA programming tools. We have introduced some novel schemes for dividing the projection data and image voxels over multiple GPUs to avoid runtime overhead and inter-device synchronization issues. We have also reduced the complexity of the algorithm's overlap calculation by eliminating the common projection plane and directly projecting the detector boundaries onto image voxel boundaries. To reduce the time required for calculating the overlap between the detector edges and image voxel boundaries, we have proposed a pre-accumulation technique to accumulate image intensities in perpendicular 2D image slabs (from a 3D image) before projection and after backprojection to ensure our DD kernels run faster in parallel GPU threads. For the implementation of our iterative MBIR technique, we use a parallel multi-GPU version of the alternating minimization (AM) algorithm with penalized likelihood update. The time performance using our proposed reconstruction method with Siemens Sensation 16 patient scan data shows an average speedup of 24 times using a single TITAN X GPU and 74 times using 3 TITAN X GPUs in parallel for combined projection and backprojection.
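One ingredient described above is the pre-accumulation of image intensities so that distance-driven overlap integrals reduce to differences of a cumulative array. The numpy sketch below illustrates that idea in 1D under simplified assumptions (the multi-GPU CUDA implementation is not reproduced): with a prefix sum of a pixel row, the intensity integral between any two projected detector-cell boundaries is obtained from two lookups.

```python
import numpy as np

def prefix_accumulate(row):
    """Cumulative sum with a leading zero, so acc[j] = sum(row[:j])."""
    return np.concatenate(([0.0], np.cumsum(row)))

def dd_overlap_1d(row, boundaries):
    """Integral of a piecewise-constant pixel row between consecutive boundary positions
    (in pixel units), evaluated from the prefix sum: the overlap idea in one dimension."""
    acc = prefix_accumulate(row)
    n = len(row)
    def integral_to(t):                       # integral of the row from 0 to position t
        t = np.clip(t, 0.0, n)
        i = int(np.floor(t))
        frac = t - i
        return acc[i] + (row[i] * frac if i < n else 0.0)
    return np.array([integral_to(b1) - integral_to(b0)
                     for b0, b1 in zip(boundaries[:-1], boundaries[1:])])

if __name__ == "__main__":
    row = np.arange(8, dtype=float)                   # one image row (toy intensities)
    det_edges = np.array([0.0, 2.5, 5.0, 8.0])        # projected detector-cell boundaries
    print(dd_overlap_1d(row, det_edges))              # contributions to three detector cells
```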

9.
Med Phys ; 44(6): 2438-2446, 2017 Jun.
Article in English | MEDLINE | ID: mdl-28295418

ABSTRACT

PURPOSE: To evaluate and compare the theoretically achievable accuracy of two families of two-parameter photon cross-section models: the basis vector model (BVM) and the modified parametric fit model (mPFM). METHODS: The modified PFM assumes that photoelectric absorption and scattering cross-sections can be accurately represented by power functions in effective atomic number and/or energy plus the Klein-Nishina cross-section, along with empirical corrections that enforce exact prediction of elemental cross-sections. Two mPFM variants were investigated: the widely used Torikoshi model (tPFM) and a more complex "VCU" variant (vPFM). For 43 standard soft and bony tissues and phantom materials, all consisting of elements with atomic number less than 20 (except iodine), we evaluated the theoretically achievable accuracy of tPFM and vPFM for predicting linear attenuation, photoelectric absorption, and energy-absorption coefficients, and we compared it to a previously investigated separable, linear two-parameter model, BVM. RESULTS: For an idealized dual-energy computed tomography (DECT) imaging scenario, the cross-section mapping process demonstrates that BVM more accurately predicts photon cross-sections of biological mixtures than either tPFM or vPFM. Maximum linear attenuation coefficient prediction errors were 15% and 5% for tPFM and BVM, respectively. The root-mean-square (RMS) prediction errors of total linear attenuation over the 20 keV to 1000 keV energy range were 0.93% (tPFM) and 0.1% (BVM) for adipose tissue, 0.8% (tPFM) and 0.2% (BVM) for muscle tissue, and 1.6% (tPFM) and 0.2% (BVM) for cortical bone tissue. With the exception of thyroid and Teflon, the RMS errors for the photoelectric absorption and scattering coefficients were within 4% for the tPFM and 2% for the BVM. Neither model predicts the photon cross-sections of thyroid tissue accurately, exhibiting relative errors as large as 20%. For energy-absorption coefficient prediction, RMS errors for the BVM were less than 1.5%, while for the tPFM, the RMS errors were as large as 16%. CONCLUSION: Compared to modified PFMs, BVM shows superior potential to support dual-energy CT cross-section mapping. In addition, the linear, separable BVM can be more efficiently deployed by iterative model-based DECT image-reconstruction algorithms.


Subject(s)
Algorithms; Phantoms, Imaging; Tomography, X-Ray Computed; Humans; Models, Statistical; Photons
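The BVM evaluated above writes a material's attenuation as a linear combination of two basis functions, mu(E) = c1*f1(E) + c2*f2(E), with the weights fit per material. The snippet below performs that fit by linear least squares; the basis curves and the "measured" material are fabricated stand-ins for illustration, not the reference data used in the paper.

```python
import numpy as np

energies = np.linspace(20.0, 1000.0, 200)                  # keV grid

# Fabricated smooth stand-ins for two basis attenuation curves, f1(E) and f2(E) (1/cm).
f1 = 0.15 + 4.0e3 / energies**3 + 0.05 * np.exp(-energies / 300.0)
f2 = 0.25 + 9.0e3 / energies**3 + 0.08 * np.exp(-energies / 300.0)

# A synthetic "measured" material: a true mixture of the bases plus a little model error.
rng = np.random.default_rng(2)
mu_true = 0.6 * f1 + 0.3 * f2 + 1e-4 * rng.standard_normal(energies.size)

# Linear least-squares fit of the two BVM weights (the model is separable and linear).
A = np.column_stack([f1, f2])
(c1, c2), *_ = np.linalg.lstsq(A, mu_true, rcond=None)
fit = A @ np.array([c1, c2])
rms_pct = 100.0 * np.sqrt(np.mean(((fit - mu_true) / mu_true) ** 2))
print(round(c1, 3), round(c2, 3), f"RMS error {rms_pct:.2f}%")
```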
10.
J Comput Assist Tomogr ; 40(4): 589-95, 2016.
Article in English | MEDLINE | ID: mdl-27096403

ABSTRACT

OBJECTIVE: The aim of this study was to compare the performance of 2- (2D) and 3-dimensional (3D) quantitative computed tomography (CT) methods for classifying lung nodules as lung cancer, metastases, or benign. METHODS: Using semiautomated software and computerized analysis, we analyzed more than 50 quantitative CT features of 96 solid nodules in 94 patients, in 2D from a single slice and in 3D from the entire nodule volume. Multivariable logistic regression was used to classify nodule types. Model performance was assessed by the area under the receiver operating characteristic curve (AUC) using leave-one-out cross-validation. RESULTS: The AUC for distinguishing 53 primary lung cancers from 18 benign nodules and 25 metastases ranged from 0.79 to 0.83 and was not significantly different for 2D and 3D analyses (P = 0.29-0.78). Models distinguishing metastases from benign nodules were statistically significant only by 3D analysis (AUC = 0.84). CONCLUSIONS: Three-dimensional CT methods did not improve discrimination of lung cancer, but may help distinguish benign nodules from metastases.


Subject(s)
Imaging, Three-Dimensional/methods; Lung Neoplasms/diagnostic imaging; Radiographic Image Interpretation, Computer-Assisted/methods; Solitary Pulmonary Nodule/diagnostic imaging; Tomography, X-Ray Computed/methods; Aged; Diagnosis, Differential; Female; Humans; Male; Middle Aged; Radiographic Image Enhancement/methods; Reproducibility of Results; Sensitivity and Specificity; Tumor Burden
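The analysis above uses multivariable logistic regression with leave-one-out cross-validation and AUC as the performance metric. The snippet below shows that generic scikit-learn pattern with placeholder features and labels standing in for the quantitative 2D/3D nodule features; it is not the study's data or model coefficients.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut
from sklearn.metrics import roc_auc_score

# Placeholder data standing in for the quantitative 2D/3D nodule features described above.
rng = np.random.default_rng(0)
X = rng.standard_normal((96, 10))             # 96 nodules, 10 features
y = rng.integers(0, 2, 96)                    # 1 = primary lung cancer, 0 = other (toy labels)

# Leave-one-out cross-validated probabilities from a multivariable logistic model.
loo = LeaveOneOut()
scores = np.empty(len(y), dtype=float)
for train_idx, test_idx in loo.split(X):
    model = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    scores[test_idx] = model.predict_proba(X[test_idx])[:, 1]

print("LOO AUC:", round(roc_auc_score(y, scores), 3))   # ~0.5 for these random features
```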
11.
IEEE Trans Med Imaging ; 35(2): 685-98, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26469126

ABSTRACT

We propose a new algorithm, called line integral alternating minimization (LIAM), for dual-energy X-ray CT image reconstruction. Instead of obtaining component images by minimizing the discrepancy between the data and the mean estimates, LIAM allows for a tunable discrepancy between the basis material projections and the basis sinograms. A parameter is introduced that controls the size of this discrepancy, and with this parameter the new algorithm can continuously go from a two-step approach to the joint estimation approach. LIAM alternates between iteratively updating the line integrals of the component images and reconstruction of the component images using an iterative image deblurring algorithm. An edge-preserving penalty function can be incorporated in the iterative deblurring step to decrease the roughness in the component images. Images from both simulated and experimentally acquired sinograms from a clinical scanner were reconstructed by LIAM while varying the regularization parameters to identify good choices. The results from the dual-energy alternating minimization algorithm applied to the same data were used for comparison. Using a small fraction of the computation time of dual-energy alternating minimization, LIAM achieves better accuracy of the component images in the presence of Poisson noise for simulated data reconstruction and achieves the same level of accuracy for real data reconstruction.


Subject(s)
Algorithms; Radiographic Image Enhancement/methods; Tomography, X-Ray Computed/methods; Absorptiometry, Photon; Humans; Models, Biological; Phantoms, Imaging
12.
Med Phys ; 42(6): 2908-14, 2015 Jun.
Article in English | MEDLINE | ID: mdl-26127044

ABSTRACT

PURPOSE: To provide a cost-effective, easily implemented, noninvasive technique for measuring the fan-beam intensity profile of a computed tomography (CT) scanner without the need to access proprietary scanner information or service modes. METHODS: The fabrication of an inexpensive aperture is described, which is used to expose radiochromic film in a rotating CT gantry. A series of exposures is made, each of which is digitized on a personal computer document scanner, and the resulting data set is analyzed to produce a self-consistent calibration of relative radiation exposure. The bow tie profiles were analyzed to determine the precision of the process and were compared to two other measurement techniques, direct measurements from CT gantry detectors and a dynamic dosimeter. RESULTS: The radiochromic film method presented here can measure radiation exposures with a precision of ∼6% root-mean-square relative error. The intensity profiles have a maximum 25% root-mean-square relative error compared with existing techniques. CONCLUSIONS: The proposed radiochromic film method for measuring bow tie profiles is an inexpensive (∼$100 USD + film costs), noninvasive method to measure the fan beam intensity profile in CT scanners.


Subject(s)
Film Dosimetry/methods; Tomography, X-Ray Computed/instrumentation; Rotation
13.
Med Phys ; 41(10): 101915, 2014 Oct.
Article in English | MEDLINE | ID: mdl-25281967

ABSTRACT

PURPOSE: Several areas of computed tomography (CT) research require knowledge about the intensity profile of the x-ray fan beam that is introduced by a bow tie filter. This information is considered proprietary by CT manufacturers, so noninvasive measurement methods are required. One method using real-time dosimeters has been proposed in the literature. A commercially available dosimeter was used to apply that method, and analysis techniques were developed to extract fan beam profiles from measurements. METHODS: A real-time ion chamber was placed near the periphery of an empty CT gantry and the dose rate versus time waveform was recorded as the x-ray source rotated about the isocenter. In contrast to previously proposed analysis methods that assumed a pointlike detector, the finite-size ion chamber received varying amounts of coverage by the collimated x-ray beam during rotation, precluding a simple relationship between the source intensity as a function of fan beam angle and measured intensity. A two-parameter model for measurement intensity was developed that included both effective collimation width and source-to-detector distance, which then was iteratively solved to minimize the error between duplicate measurements at corresponding fan beam angles, allowing determination of the fan beam profile from measured dose-rate waveforms. Measurements were performed on five different scanner systems while varying parameters such as collimation, kVp, and bow tie filters. On one system, direct measurements of the bow tie profile were collected for comparison with the real-time dosimeter technique. RESULTS: The data analysis method for a finite-size detector was found to produce a fan beam profile estimate with a relative error between duplicate measurement intensities of <5%. It was robust over a wide range of collimation widths (e.g., 1-40 mm), producing fan beam profiles that agreed with a relative error of 1%-5%. Comparison with a direct measurement technique on one system produced agreement with a relative error of 2%-6%. Fan beam profiles were found to differ for different filter types on a given system and between different vendors. CONCLUSIONS: A commercially available real-time dosimeter probe was found to be a convenient and accurate instrument for measuring fan beam profiles. An analysis method was developed that could handle a wide range of collimation widths by explicitly considering the finite width of the ion chamber. Relative errors in the profiles were found to be less than 5%. Measurements of five different clinical scanners demonstrate the variation in bow tie designs, indicating that generic bow tie models will not be adequate for CT system research.


Subject(s)
Radiometry/instrumentation; Radiometry/methods; Tomography Scanners, X-Ray Computed; Algorithms; Models, Theoretical; Tomography, X-Ray Computed; X-Rays
14.
J Opt Soc Am A Opt Image Sci Vis ; 31(7): 1369-94, 2014 Jul 01.
Article in English | MEDLINE | ID: mdl-25121423

ABSTRACT

We investigate new sampling strategies for projection tomography, enabling one to employ fewer measurements than expected from classical sampling theory without significant loss of information. Inspired by compressed sensing, our approach is based on the understanding that many real objects are compressible in some known representation, implying that the number of degrees of freedom defining an object is often much smaller than the number of pixels/voxels. We propose a new approach based on quasi-random detector subsampling, whereas previous approaches only addressed subsampling with respect to source location (view angle). The performance of different sampling strategies is considered using object-independent figures of merit, and also based on reconstructions for specific objects, with synthetic and real data. The proposed approach can be implemented using a structured illumination of the interrogated object or the detector array by placing a coded aperture/mask at the source or detector side, respectively. Advantages of the proposed approach include (i) for structured illumination of the detector array, it leads to fewer detector pixels and allows one to integrate detectors for scattered radiation in the unused space; (ii) for structured illumination of the object, it leads to a reduced radiation dose for patients in medical scans; (iii) in the latter case, the blocking of rays reduces scattered radiation while keeping the same energy in the transmitted rays, resulting in a higher signal-to-noise ratio than that achieved by lowering exposure times or the energy of the source; (iv) compared to view-angle subsampling, it allows one to use fewer measurements for the same image quality, or leads to better image quality for the same number of measurements. The proposed approach can also be combined with view-angle subsampling.
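The strategy above subsamples detector pixels quasi-randomly rather than dropping view angles. The exact quasi-random construction is not reproduced here; the sketch below uses one simple stratified (jittered) scheme, as an assumed stand-in, to spread the kept detector columns evenly while avoiding a regular grid, and applies the resulting mask to a toy sinogram.

```python
import numpy as np

def jittered_detector_mask(n_detectors, keep_fraction, rng):
    """Keep roughly keep_fraction of detector columns by stratified (jittered) sampling:
    one detector chosen uniformly at random inside each of ceil(n*keep_fraction) equal strata."""
    n_keep = int(np.ceil(n_detectors * keep_fraction))
    edges = np.linspace(0, n_detectors, n_keep + 1)
    picks = np.floor(edges[:-1] + rng.random(n_keep) * np.diff(edges)).astype(int)
    mask = np.zeros(n_detectors, dtype=bool)
    mask[np.clip(picks, 0, n_detectors - 1)] = True
    return mask

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    sino = rng.random((180, 512))                  # toy sinogram: 180 views x 512 detectors
    mask = jittered_detector_mask(512, keep_fraction=0.25, rng=rng)
    subsampled = sino[:, mask]                     # measurements retained for reconstruction
    print(mask.sum(), subsampled.shape)
```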

15.
Med Phys ; 40(12): 121914, 2013 Dec.
Article in English | MEDLINE | ID: mdl-24320525

ABSTRACT

PURPOSE: Accurate patient-specific photon cross-section information is needed to support more accurate model-based dose calculation for low energy photon-emitting modalities in medicine such as brachytherapy and kilovoltage x-ray imaging procedures. A postprocessing dual-energy CT (pDECT) technique for noninvasive in vivo estimation of photon linear attenuation coefficients has been experimentally implemented on a commercial CT scanner and its accuracy assessed in idealized phantom geometries. METHODS: Eight test materials of known composition and density were used to compare pDECT-estimated linear attenuation coefficients to NIST reference values over an energy range from 10 keV to 1 MeV. As statistical image reconstruction (SIR) has been shown to reconstruct images with less random and systematic error than conventional filtered backprojection (FBP), the pDECT technique was implemented with both an in-house polyenergetic SIR algorithm, alternating minimization (AM), and a conventional FBP reconstruction algorithm. Improvement from increased spectral separation was also investigated by filtering the high-energy beam with an additional 0.5 mm of tin. The law of propagation of uncertainty was employed to assess the sensitivity of the pDECT process to errors in reconstructed images. RESULTS: Mean pDECT-estimated linear attenuation coefficients for the eight test materials agreed within 1% of NIST reference values for energies from 1 MeV down to 30 keV, with mean errors rising to between 3% and 6% at 10 keV, indicating that the method is unbiased when measurement and calibration phantom geometries are matched. Reconstruction with FBP and AM algorithms conferred similar mean pDECT accuracy. However, single-voxel pDECT estimates reconstructed on a 1 × 1 × 3 mm³ grid are shown to be highly sensitive to reconstructed image uncertainty; in some cases pDECT attenuation coefficient estimates exhibited standard deviations on the order of 20% around the mean. Reconstruction with the statistical AM algorithm led to standard deviations roughly 40% to 60% less than FBP reconstruction. Additional tin filtration of the high energy beam exhibits similar pDECT estimation accuracy to the unfiltered beam, even when scanning with only 25% of the dose. Using the law of propagation of uncertainty, low-Z materials are found to be more sensitive to image reconstruction errors than high-Z materials. Furthermore, it is estimated that reconstructed CT image uncertainty must be limited to less than 0.25% to achieve a target linear-attenuation coefficient estimation uncertainty of 3% at 28 keV. CONCLUSIONS: It is encouraging that pDECT supports mean linear attenuation coefficient measurement accuracies within 1% of reference values for energies greater than 30 keV. However, the sensitivity of the pDECT measurements to noise and systematic errors in reconstructed CT images warrants further investigation in more complex phantom geometries. The investigated statistical reconstruction algorithm, AM, reduced random measurement uncertainty relative to FBP owing to improved noise performance. These early results also support efforts to increase DE spectral separation, which can further reduce the pDECT sensitivity to measurement uncertainty.


Subject(s)
Algorithms; Image Processing, Computer-Assisted/methods; Photons; Tomography, X-Ray Computed/instrumentation; Calibration; Uncertainty
16.
Article in English | MEDLINE | ID: mdl-24111225

ABSTRACT

Glioblastoma multiforme is highly infiltrative, making precise delineation of the tumor margin difficult. Multimodality or multi-parametric MR imaging sequences promise an advantage over anatomic sequences such as post-contrast enhancement as methods for determining the spatial extent of tumor involvement. When considering multi-parametric imaging sequences, however, manual image segmentation and classification are time-consuming and prone to error. As a preliminary step toward integration of multi-parametric imaging into clinical assessments of primary brain tumors, we propose a machine-learning-based multi-parametric approach that uses radiologist-generated labels to train a classifier that is able to classify tissue on a voxel-wise basis and automatically generate a tumor segmentation. A random forests classifier was trained using a leave-one-out experimental paradigm. A simple linear classifier was also trained for comparison. The random forests classifier accurately predicted radiologist-generated segmentations and tumor extent.


Subject(s)
Brain Neoplasms/diagnosis; Brain Neoplasms/pathology; Glioblastoma/diagnosis; Glioblastoma/pathology; Magnetic Resonance Imaging; Algorithms; Artificial Intelligence; Contrast Media; Diagnostic Imaging; Humans; Image Processing, Computer-Assisted; Pattern Recognition, Automated; Predictive Value of Tests; Probability; ROC Curve
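The abstract above describes voxel-wise tissue classification with a random forest trained on radiologist-labeled multi-parametric MR data under a leave-one-out paradigm. The snippet below shows that general scikit-learn pattern with synthetic per-voxel features standing in for the MR channels; it is not the published model or data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy stand-in for multi-parametric MR data: per-voxel feature vectors and labels for a
# handful of "subjects". Real inputs would be co-registered MR channel values per voxel.
rng = np.random.default_rng(4)
subjects = []
for _ in range(4):
    features = rng.standard_normal((5000, 6))           # 5000 voxels, 6 MR-derived features
    labels = (features[:, 0] + 0.5 * rng.standard_normal(5000) > 0).astype(int)  # 1 = tumor
    subjects.append((features, labels))

# Leave-one-subject-out: train on the other subjects, predict each voxel of the held-out one.
for held_out in range(len(subjects)):
    X_train = np.vstack([s[0] for i, s in enumerate(subjects) if i != held_out])
    y_train = np.hstack([s[1] for i, s in enumerate(subjects) if i != held_out])
    X_test, y_test = subjects[held_out]
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
    acc = (clf.predict(X_test) == y_test).mean()
    print(f"held-out subject {held_out}: voxel-wise accuracy {acc:.3f}")
```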
17.
Phys Med ; 29(5): 500-12, 2013 Sep.
Article in English | MEDLINE | ID: mdl-23343747

ABSTRACT

PURPOSE: To present a framework for characterizing the data needed to implement a polyenergetic model-based statistical reconstruction algorithm, Alternating Minimization (AM), on a commercial fan-beam CT scanner, and a novel method for assessing the accuracy of the commissioned data model. METHODS: The X-ray spectra for three tube potentials on the Philips Brilliance CT scanner were estimated by fitting a semi-empirical X-ray spectrum model to transmission measurements. Spectral variations due to the bowtie filter were computationally modeled. Eight homogeneous cylinders of PMMA, Teflon and water with varying diameters were scanned at each energy. Central-axis scatter was measured for each cylinder using a beam-stop technique. AM reconstruction with a single-basis object-model matched to the scanned cylinder's composition allows assessment of the accuracy of the AM algorithm's polyenergetic data model. Filtered-backprojection (FBP) was also performed to compare consistency metrics such as uniformity and object-size dependence. RESULTS: The spectrum model fit measured transmission curves with residual root-mean-square error of 1.20%-1.34% for the three scanning energies. The estimated spectrum and scatter data supported polyenergetic AM reconstruction of the test cylinders to within 0.5% of expected values in the matched object-model reconstruction test. In comparison to FBP, polyenergetic AM exhibited better uniformity and less object-size dependence. CONCLUSIONS: Reconstructions using a matched object-model illustrate that the polyenergetic AM algorithm's data model was commissioned to within 0.5% of an expected ground truth. These results support ongoing and future research with polyenergetic AM reconstruction of commercial fan-beam CT data for quantitative CT applications.


Subject(s)
Algorithms; Image Processing, Computer-Assisted/methods; Statistics as Topic/methods; Tomography, X-Ray Computed/instrumentation; Radiotherapy, Image-Guided; Uncertainty
18.
Front Neurol ; 3: 76, 2012.
Article in English | MEDLINE | ID: mdl-22701446

ABSTRACT

Like many complex dynamic systems, the brain exhibits scale-free dynamics that follow power-law scaling. Broadband power spectral density (PSD) of brain electrical activity exhibits state-dependent power-law scaling with a log frequency exponent that varies across frequency ranges. Widely divergent naturally occurring neural states, awake and slow wave sleep (SWS), were used to evaluate the nature of changes in scale-free indices of brain electrical activity. We demonstrate two analytic approaches to characterizing electrocorticographic (ECoG) data obtained during awake and SWS states. A data-driven approach was used, characterizing all available frequency ranges. Using an equal error state discriminator (EESD), a single frequency range did not best characterize state across data from all six subjects, though the ability to distinguish awake and SWS ECoG data in individual subjects was excellent. Multi-segment piecewise linear fits were used to characterize scale-free slopes across the entire frequency range (0.2-200 Hz). These scale-free slopes differed between awake and SWS states across subjects, particularly at frequencies below 10 Hz and showed little difference at frequencies above 70 Hz. A multivariate maximum likelihood analysis (MMLA) method using the multi-segment slope indices successfully categorized ECoG data in most subjects, though individual variation was seen. In exploring the differences between awake and SWS ECoG data, these analytic techniques show that no change in a single frequency range best characterizes differences between these two divergent biological states. With increasing computational tractability, the use of scale-free slope values to characterize ECoG and EEG data will have practical value in clinical and research studies.
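The analysis above characterizes ECoG spectra by the slope of log power versus log frequency over defined bands. The sketch below computes one such scale-free slope with a Welch PSD and a straight-line fit in log-log coordinates; the multi-segment piecewise fits and the equal error state discriminator are not reproduced, and the test signal and sampling rate are synthetic assumptions.

```python
import numpy as np
from scipy.signal import welch

def scale_free_slope(signal, fs, f_lo, f_hi):
    """Slope of log10(PSD) vs log10(frequency) over [f_lo, f_hi]: one segment of the kind of
    piecewise-linear characterization described above."""
    f, pxx = welch(signal, fs=fs, nperseg=4 * int(fs))
    band = (f >= f_lo) & (f <= f_hi)
    slope, _ = np.polyfit(np.log10(f[band]), np.log10(pxx[band]), 1)
    return slope

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    fs = 1000.0                                    # Hz, an ECoG-like sampling rate (assumed)
    white = rng.standard_normal(int(60 * fs))      # 60 s of white noise
    brownian = np.cumsum(white)                    # integrated noise: expected slope near -2
    print(round(scale_free_slope(brownian, fs, 1.0, 100.0), 2))
```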

19.
Med Phys ; 38(3): 1444-58, 2011 Mar.
Article in English | MEDLINE | ID: mdl-21520856

ABSTRACT

PURPOSE: In comparison with conventional filtered backprojection (FBP) algorithms for x-ray computed tomography (CT) image reconstruction, statistical algorithms directly incorporate the random nature of the data and do not assume CT data are linear, noiseless functions of the attenuation line integral. Thus, it has been hypothesized that statistical image reconstruction may support a more favorable tradeoff than FBP between image noise and spatial resolution in dose-limited applications. The purpose of this study is to evaluate the noise-resolution tradeoff for the alternating minimization (AM) algorithm regularized using a nonquadratic penalty function. METHODS: Idealized monoenergetic CT projection data with Poisson noise were simulated for two phantoms with inserts of varying contrast (7%-238%) and distance from the field-of-view (FOV) center (2-6.5 cm). Images were reconstructed for the simulated projection data by the FBP algorithm and two penalty function parameter values of the penalized AM algorithm. Each algorithm was run with a range of smoothing strengths to allow quantification of the noise-resolution tradeoff curve. Image noise is quantified as the standard deviation in the water background around each contrast insert. Modulation transfer functions (MTFs) were calculated from six-parameter model fits to oversampled edge-spread functions defined by the circular contrast-insert edges as a metric of local resolution. The integral of the MTF up to 0.5 lp/mm was adopted as a single-parameter measure of local spatial resolution. RESULTS: The penalized AM algorithm noise-resolution tradeoff curve was always more favorable than that of the FBP algorithm. While resolution and noise are found to vary as a function of distance from the FOV center differently for the two algorithms, the ratio of noises when matching the resolution metric is relatively uniform over the image. The ratio of AM-to-FBP image variances, a predictor of dose-reduction potential, was strongly dependent on the shape of the AM's nonquadratic penalty function and was also strongly influenced by the contrast of the insert for which resolution is quantified. Dose-reduction potential, reported here as the fraction (%) of FBP dose necessary for AM to reconstruct an image with comparable noise and resolution, for one penalty parameter value of the AM algorithm was found to vary from 70% to 50% for low-contrast and high-contrast structures, respectively, and from 70% to 10% for the second AM penalty parameter value. However, the second penalty, AM-700, was found to suffer from poor low-contrast resolution when matching the high-contrast resolution metric with FBP. CONCLUSIONS: The results of this simulation study imply that penalized AM has the potential to reconstruct images with similar noise and resolution using a fraction (10%-70%) of the FBP dose. However, this dose-reduction potential depends strongly on the AM penalty parameter and the contrast magnitude of the structures of interest. In addition, the authors' results imply that the advantage of AM can be maximized by optimizing the nonquadratic penalty function to the specific imaging task of interest. Future work will extend the methods used here to quantify noise and resolution in images reconstructed from real CT data.


Subject(s)
Algorithms; Image Processing, Computer-Assisted/methods; Tomography, X-Ray Computed/methods; Normal Distribution; Phantoms, Imaging; Scattering, Radiation
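The resolution metric above is the integral of the MTF, derived from oversampled edge-spread functions, up to 0.5 lp/mm. The sketch below shows a simplified version of that computation on a synthetic edge: a numerical derivative of the ESF and a normalized FFT stand in for the paper's six-parameter model fit, so the numbers are illustrative only.

```python
import numpy as np

def mtf_from_esf(esf, pixel_mm):
    """MTF from a sampled edge-spread function: differentiate to the line-spread function,
    take the normalized FFT magnitude, and return (frequency in lp/mm, MTF)."""
    lsf = np.gradient(esf)
    lsf = lsf / lsf.sum()                           # normalize so MTF(0) = 1
    mtf = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(len(lsf), d=pixel_mm)   # cycles per mm (lp/mm)
    return freqs, mtf

if __name__ == "__main__":
    pixel_mm = 0.05                                 # oversampled ESF spacing (assumed)
    x = np.arange(-10.0, 10.0, pixel_mm)            # mm
    esf = 0.5 * (1.0 + np.tanh(x / 0.4))            # synthetic blurred edge
    freqs, mtf = mtf_from_esf(esf, pixel_mm)
    band = freqs <= 0.5
    metric = np.trapz(mtf[band], freqs[band])       # single-number resolution metric (lp/mm)
    print(round(metric, 3))
```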
20.
IEEE Trans Med Imaging ; 25(10): 1392-404, 2006 Oct.
Article in English | MEDLINE | ID: mdl-17024842

ABSTRACT

We address the problem of image formation in transmission tomography when metal objects of known composition and shape, but unknown pose, are present in the scan subject. Using an alternating minimization (AM) algorithm, derived from a model in which the detected data are viewed as Poisson-distributed photon counts, we seek to eliminate the streaking artifacts commonly seen in filtered back projection images containing high-contrast objects. We show that this algorithm, which minimizes the I-divergence (or equivalently, maximizes the log-likelihood) between the measured data and model-based estimates of the means of the data, converges much faster when knowledge of the high-density materials (such as brachytherapy applicators or prosthetic implants) is exploited. The algorithm incorporates a steepest descent-based method to find the position and orientation (collectively called the pose) of the known objects. This pose is then used to constrain the image pixels to their known attenuation values, or, for example, to form a mask on the "missing" projection data in the shadow of the objects. Results from two-dimensional simulations are shown in this paper. The extension of the model and methods used to three dimensions is outlined.


Subject(s)
Artifacts; Artificial Intelligence; Pattern Recognition, Automated/methods; Prostheses and Implants; Radiographic Image Enhancement/methods; Radiographic Image Interpretation, Computer-Assisted/methods; Tomography, X-Ray Computed/methods; Algorithms; Metals; Phantoms, Imaging; Reproducibility of Results; Sensitivity and Specificity; Tomography, X-Ray Computed/instrumentation