Results 1 - 20 of 98
1.
Med Phys ; 51(7): 4948-4969, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38753884

ABSTRACT

BACKGROUND: Edge-on-irradiated silicon detectors are currently being investigated for use in full-body photon-counting computed tomography (CT) applications. The low atomic number of silicon leads to a significant number of incident photons being Compton scattered in the detector, depositing a part of their energy and potentially being counted multiple times. Even though the physics of Compton scatter is well established, the effects of Compton interactions in the detector on image quality for an edge-on-irradiated silicon detector have still not been thoroughly investigated. PURPOSE: To investigate and explain effects of Compton scatter on low-frequency detective quantum efficiency (DQE) for photon-counting CT using edge-on-irradiated silicon detectors. METHODS: We extend an existing Monte Carlo model of an edge-on-irradiated silicon detector with 60 mm active absorption depth, previously used to evaluate spatial-frequency-based performance, to develop projection and image domain performance metrics for pure density and pure spectral imaging tasks with 30 and 40 cm water backgrounds. We show that the lowest energy threshold of the detector can be used as an effective discriminator of primary counts and cross-talk caused by Compton scatter. We study the developed metrics as functions of the lowest threshold energy for root-mean-square electronic noise levels of 0.8, 1.6, and 3.2 keV, where the intermediate level 1.6 keV corresponds to the noise level previously measured on a single sensor element in isolation. We also compare the performance of a modeled detector with 8, 4, and 2 optimized energy bins to a detector with 1-keV-wide bins. RESULTS: In terms of low-frequency DQE for density imaging, there is a tradeoff between using a threshold low enough to capture Compton interactions and avoiding electronic noise counts. 
For a 30 cm water phantom, 4 energy bins, and a root-mean-square electronic noise of 0.8, 1.6, and 3.2 keV, it is optimal to put the lowest energy threshold at 3, 6, and 1 keV, which gives optimal projection-domain DQEs of 0.64, 0.59, and 0.52, respectively. Low-frequency DQE for spectral imaging also benefits from measuring Compton interactions, with respective optimal thresholds of 12, 12, and 13 keV. No large dependence on background thickness was observed. For the intermediate noise level (1.6 keV), increasing the lowest threshold from 5 to 35 keV increases the variance in an iodine basis image by 60%-62% (30 cm phantom) and 67%-69% (40 cm phantom), with 8 bins. Both spectral and density DQE are adversely affected by increasing the electronic noise level. Image-domain DQE exhibits similar qualitative behavior as projection-domain DQE. CONCLUSIONS: Compton interactions contribute significantly to the density imaging performance of edge-on-irradiated silicon detectors. With the studied detector topology, the benefit of counting primary Compton interactions outweighs the penalty of multiple counting at all lowest threshold energies. Compton interactions also contribute significantly to the spectral imaging performance for measured energies above 10 keV.
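The threshold tradeoff described in this abstract can be illustrated with a toy counting model (all parameter values below are illustrative assumptions, not numbers from the paper): each photon is either photoabsorbed or Compton-scattered, a scattered photon is sometimes reabsorbed nearby (double counting), and the zero-frequency counting DQE for compound-Poisson statistics is E[k]²/E[k²], where k is the number of counts per incident photon.

```python
import numpy as np

rng = np.random.default_rng(0)

def counts_per_photon(threshold_kev, n=200_000, photo_frac=0.3,
                      e_full=60.0, p_double=0.3, noise_rms=1.6):
    """Toy model: photoabsorption deposits the full energy; Compton
    scatter deposits 0-20 keV, and with probability p_double the
    scattered photon is reabsorbed nearby (a second count). All
    parameter values are illustrative, not fitted to silicon."""
    photo = rng.random(n) < photo_frac
    primary = np.where(photo, e_full, rng.uniform(0.0, 20.0, n))
    primary = primary + rng.normal(0.0, noise_rms, n)   # electronic noise
    k = (primary > threshold_kev).astype(float)
    # Possible second count from the reabsorbed scattered photon.
    second = (~photo) & (rng.random(n) < p_double)
    remainder = e_full - np.clip(primary, 0.0, e_full)
    k += (second & (remainder > threshold_kev)).astype(float)
    return k

def dqe0(k):
    """Zero-frequency counting DQE for compound-Poisson statistics."""
    return np.mean(k) ** 2 / np.mean(k ** 2)

dqe_low = dqe0(counts_per_photon(threshold_kev=3.0))
dqe_high = dqe0(counts_per_photon(threshold_kev=25.0))
```

In this toy model a low threshold keeps the small Compton deposits (higher E[k]) at the cost of some double counting, and still wins overall, mirroring the paper's qualitative conclusion.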


Subject(s)
Monte Carlo Method , Photons , Scattering, Radiation , Silicon , Tomography, X-Ray Computed , Silicon/chemistry , Tomography, X-Ray Computed/instrumentation , Phantoms, Imaging
2.
Med Phys ; 51(1): 224-238, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37401203

ABSTRACT

BACKGROUND: Photon counting detectors (PCDs) provide higher spatial resolution, improved contrast-to-noise ratio (CNR), and energy-discriminating capabilities. However, the greatly increased amount of projection data in photon counting computed tomography (PCCT) systems becomes challenging to transmit through the slip ring, process, and store. PURPOSE: This study proposes and evaluates an empirical optimization algorithm to obtain optimal energy weights for energy bin data compression. This algorithm is universally applicable to spectral imaging tasks, including 2- and 3-material decomposition (MD) tasks and virtual monoenergetic images (VMIs). The method is simple to implement, preserves spectral information over the full range of object thicknesses, and is applicable to different PCDs, for example, silicon detectors and CdTe detectors. METHODS: We used realistic detector energy response models to simulate the spectral response of different PCDs and an empirical calibration method to fit a semi-empirical forward model for each PCD. We numerically determined the optimal energy weights by minimizing the average relative Cramér-Rao lower bound (CRLB) penalty due to the energy-weighted bin compression, for MD and VMI tasks over a range of material area densities ρ_A,m (0-40 g/cm² water, 0-2.16 g/cm² calcium). We used Monte Carlo simulation of a step wedge phantom and an anthropomorphic head phantom to evaluate the performance of this energy bin compression method in the projection domain and image domain, respectively. RESULTS: The results show that for 2 MD, the energy bin compression method can reduce PCCT data size by 75% and 60%, with an average variance penalty of less than 17% and 3% for silicon and CdTe detectors, respectively. For 3 MD tasks with a K-edge material (iodine), this method can reduce the data size by 62.5% and 40% with an average variance penalty of less than 12% and 13% for silicon and CdTe detectors, respectively.
CONCLUSIONS: We proposed an energy bin compression method that is broadly applicable to different PCCT systems and object sizes, with high data compression ratio and little loss of spectral information.
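The CRLB comparison of full energy bins versus energy-weighted sums can be sketched numerically under a simplified independent-Poisson bin model (the attenuation curves, spectrum, path lengths, and weight choice below are assumptions for illustration, not the paper's calibrated forward model):

```python
import numpy as np

energies = np.linspace(20.0, 120.0, 8)        # bin center energies (keV)
spectrum = np.full(8, 1.25e4)                 # incident counts per bin (toy)
mu_w = 0.3 * (30.0 / energies) ** 1.5         # toy water attenuation (1/cm)
mu_b = 0.9 * (30.0 / energies) ** 2.2         # toy bone attenuation (1/cm)
a = np.array([20.0, 1.0])                     # path lengths: water, bone (cm)

lam = spectrum * np.exp(-mu_w * a[0] - mu_b * a[1])   # Poisson bin means
J = -np.stack([mu_w, mu_b], axis=1) * lam[:, None]    # d(lam)/d(a), (8, 2)

# Full-bin CRLB for independent Poisson bins: inv(J^T diag(1/lam) J).
I_full = J.T @ (J / lam[:, None])
crlb_full = np.linalg.inv(I_full)

# Compress 8 bins to 2 weighted sums: total counts and energy-weighted counts.
W = np.stack([np.ones(8), energies])
I_comp = (W @ J).T @ np.linalg.solve(W @ np.diag(lam) @ W.T, W @ J)
crlb_comp = np.linalg.inv(I_comp)

penalty = crlb_comp[0, 0] / crlb_full[0, 0]   # water-variance penalty >= 1
```

The penalty ratio is the quantity the paper's optimizer would minimize over the choice of weights; the data-processing inequality guarantees it is at least 1.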


Subject(s)
Cadmium Compounds , Quantum Dots , X-Rays , Silicon , Tellurium , Photons , Phantoms, Imaging
3.
Med Phys ; 50 Suppl 1: 85-90, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36681879

ABSTRACT

Although CT imaging was introduced at Massachusetts General Hospital (MGH) quite early, with its first CT scanner installed in 1973, CT research at MGH started years earlier. The goal of this paper is to describe some of this innovative work and related accomplishments.


Subject(s)
Hospitals, General , Physics , Massachusetts , Tomography, X-Ray Computed
4.
J Med Imaging (Bellingham) ; 8(5): 052101, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34738026

ABSTRACT

Guest editors Patrick La Riviere, Rebecca Fahrig, and Norbert Pelc introduce the JMI Special Section Celebrating X-Ray Computed Tomography at 50.

5.
J Med Imaging (Bellingham) ; 8(5): 052110, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34729383

ABSTRACT

As we arrive at the 50th anniversary of the first computed tomography (CT) scan of a live patient, we take this opportunity to revisit the history of early CT development. It is not an exaggeration to say that the invention of CT may represent the greatest revolution in medical imaging since the discovery of x-rays. We cover events over a period of about two decades that started with the realization that accurate cross-sectional soft-tissue detail is possible and could be a significant advance. We describe in some detail the development of the first CT system and then the rapid technical advances during the following years that included the entry of many companies into the field and the circumstances that led many of those entrants to exit the field. Rather than focusing on the specific technical details (which can be found elsewhere), we include stories and events in the hope that broader lessons can be learned. As the first x-ray-based digital imaging modality, CT brought into common use an exceptional tool that benefits countless patients every day. It also introduced dramatic changes to biomedical imaging as a field that continues to influence progress to this day.

6.
Med Phys ; 48(8): 4523-4531, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34231224

ABSTRACT

The past decade has seen the increasing integration of magnetic resonance (MR) imaging into radiation therapy (RT). This growth can be attributed to multiple factors, including hardware and software advances that have allowed the acquisition of high-resolution volumetric data of RT patients in their treatment position (also known as MR simulation) and the development of methods to image and quantify tissue function and response to therapy. More recently, the advent of MR-guided radiation therapy (MRgRT) - achieved through the integration of MR imaging systems and linear accelerators - has further accelerated this trend. As MR imaging techniques and technologies for RT, such as MRgRT, gain regulatory approval worldwide, these systems will begin to propagate beyond tertiary care academic medical centers and into more community-based health systems and hospitals, creating new opportunities to provide advanced treatment options to a broader patient population. Accompanying these opportunities are unique challenges related to their adaptation, adoption, and use, including modification of hardware and software to meet the distinct demands of MR imaging in RT, the need for standardization of imaging techniques and protocols, education of the broader RT community (particularly with regard to MR safety), and the need to continue to support research and development in this space. In response, an ad hoc committee of the American Association of Physicists in Medicine (AAPM) was formed to identify the unmet needs, roadblocks, and opportunities within this space. The purpose of this document is to report on the major findings and recommendations identified. Importantly, the provided recommendations represent the consensus opinions of the committee's membership, which were submitted in the committee's report to the AAPM Board of Directors.
In addition, AAPM ad hoc committee reports differ from AAPM task group reports in that ad hoc committee reports are neither reviewed nor ultimately approved by the committee's parent groups, including at the council and executive committee level. Thus, the recommendations given in this summary should not be construed as being endorsed by or official recommendations from the AAPM.


Subject(s)
Magnetic Resonance Imaging , Radiotherapy, Image-Guided , Humans , Particle Accelerators , Radiotherapy Dosage , Radiotherapy Planning, Computer-Assisted , United States
7.
Med Phys ; 48(7): 3500-3510, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33877693

ABSTRACT

PURPOSE: Physicians utilize cerebral perfusion maps (e.g., cerebral blood flow, cerebral blood volume, transit time) to prescribe the plan of care for stroke patients. Variability in scanning techniques and post-processing software can result in differences between these perfusion maps. To determine which techniques are acceptable for clinical care, it is important to validate the accuracy and reproducibility of the perfusion maps. Validation using clinical data is challenging due to the lack of a gold standard to assess cerebral perfusion and the impracticality of scanning patients multiple times with different scanning techniques. In contrast, simulated data from a realistic digital phantom of the cerebral perfusion in acute stroke patients would enable studies to optimize and validate the scanning and post-processing techniques. METHODS: We describe a complete framework to simulate CT perfusion studies for stroke assessment. We begin by expanding the XCAT brain phantom to enable spatially varying contrast agent dynamics and incorporate a realistic model of the dynamics in the cerebral vasculature derived from first principles. A dynamic CT simulator utilizes the time-concentration curves to define the contrast agent concentration in the object at each time point and generates CT perfusion images compatible with commercially available post-processing software. We also generate ground truth perfusion maps to which the maps generated by post-processing software can be compared. RESULTS: We demonstrate a dynamic CT perfusion study of a simulated patient with an ischemic stroke and the resulting perfusion maps generated by post-processing software. We include a visual comparison between the computer-generated perfusion maps and the ground truth perfusion maps. 
The framework is highly tunable; users can modify the perfusion properties (e.g., occlusion location, CBF, CBV, and MTT), scanner specifications (e.g., focal spot size and detector configuration), scanning protocol (e.g., kVp and mAs), and reconstruction parameters (e.g., slice thickness and reconstruction filter). CONCLUSIONS: This framework provides realistic test data with the underlying ground truth that enables a robust assessment of CT perfusion techniques and post-processing methods for stroke assessment.
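The ground-truth perfusion maps in such a framework follow from the central volume principle (CBV = CBF × MTT); a minimal synthetic example with an assumed gamma-variate arterial input function and exponential residue function (all parameter values are illustrative, not from the framework):

```python
import numpy as np

dt = 0.5                                   # sampling interval (s)
t = np.arange(0, 60, dt)

def gamma_variate(t, t0=5.0, alpha=3.0, beta=1.5):
    """Toy arterial input function (contrast concentration vs. time)."""
    s = np.clip(t - t0, 0.0, None)
    return (s ** alpha) * np.exp(-s / beta)

cbf = 0.008                                # mL/s per mL of tissue (toy value)
mtt = 4.0                                  # mean transit time (s)
aif = gamma_variate(t)
residue = np.exp(-t / mtt)                 # exponential residue function

# Tissue curve is CBF times the convolution of the AIF with the residue.
tissue = cbf * np.convolve(aif, residue)[: t.size] * dt

# Ground truth recovered from the curves via the central volume principle:
cbv = tissue.sum() / aif.sum()             # area ratio (mL per mL)
mtt_check = cbv / cbf                      # should recover ~4 s
```

The recovered MTT differs from the nominal 4 s only by discretization error of the rectangle-rule sums, which shrinks with smaller dt.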


Subject(s)
Stroke , Tomography, X-Ray Computed , Cerebrovascular Circulation , Humans , Perfusion , Reproducibility of Results , Stroke/diagnostic imaging
8.
IEEE Trans Radiat Plasma Med Sci ; 5(4): 453-464, 2021 Jul.
Article in English | MEDLINE | ID: mdl-35419500

ABSTRACT

Photon counting x-ray detectors (PCDs) with spectral capabilities have the potential to revolutionize computed tomography (CT) for medical imaging. The ideal PCD provides accurate energy information for each incident x-ray at high spatial resolution. This information enables material-specific imaging, enhanced radiation dose efficiency, and improved spatial resolution in CT images. In practice, PCDs are affected by non-idealities, including limited energy resolution, pulse pileup, and cross talk due to charge sharing, K-fluorescence, and Compton scattering. To maximize their performance, PCDs must be carefully designed to reduce these effects, and residual effects must be accounted for in correction and post-acquisition steps. This review article examines algorithms for using PCDs in spectral CT applications, including how non-idealities impact image quality. Performance assessment metrics that account for spatial resolution and noise, such as the detective quantum efficiency (DQE), can be used to compare different PCD designs, as well as to compare PCDs with conventional energy-integrating detectors (EIDs). These methods play an important role in enhancing spectral CT images and assessing the overall performance of PCDs.

9.
Article in English | MEDLINE | ID: mdl-33226938

ABSTRACT

Transcranial magnetic resonance-guided focused ultrasound (tcMRgFUS) is gaining significant acceptance as a noninvasive treatment for motion disorders and shows promise for novel applications such as blood-brain barrier opening for tumor treatment. A typical procedure relies on CT-derived acoustic property maps to simulate the transfer of ultrasound through the skull. Accurate estimates of the acoustic attenuation in the skull are essential to accurate simulations, but there is no consensus about how attenuation should be estimated from CT images and there is interest in exploring MR as a predictor of attenuation in the skull. In this study, we measure the acoustic attenuation at 0.5, 1, and 2.25 MHz in 89 samples taken from two ex vivo human skulls. CT scans acquired with a variety of X-ray energies, reconstruction kernels, and reconstruction algorithms, and MR images acquired with ultrashort and zero echo time sequences are used to estimate the average Hounsfield unit value, MR magnitude, and T2* value in each sample. The measurements are used to develop a model of attenuation as a function of frequency and each individual imaging parameter.


Subject(s)
Magnetic Resonance Imaging , Tomography, X-Ray Computed , Acoustics , Algorithms , Humans , Skull/diagnostic imaging
10.
J Med Imaging (Bellingham) ; 7(4): 043501, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32715022

ABSTRACT

Purpose: Developing photon-counting CT detectors requires understanding the impact of parameters, such as converter material, thickness, and pixel size. We apply a linear-systems framework, incorporating spatial and energy resolution, to study realistic silicon (Si) and cadmium telluride (CdTe) detectors at a low count rate. Approach: We compared CdTe detector designs with 0.5 × 0.5 mm² and 0.225 × 0.225 mm² pixels and Si detector designs with 0.5 × 0.5 mm² pixels of 30 and 60 mm active thickness, with and without tungsten scatter blockers. Monte Carlo simulations of photon transport were used together with Gaussian charge sharing models fitted to published data. Results: For detection in a 300-mm-thick object at 120 kVp, the 0.5- and 0.225-mm pixel CdTe systems have 28% to 41% and 5% to 29% higher detective quantum efficiency (DQE), respectively, than the 60-mm Si system with tungsten, whereas the corresponding numbers for two-material decomposition are 2% lower to 11% higher DQE and 31% to 54% lower DQE compared to Si. We also show that combining these detectors with dual-spectrum acquisition is beneficial. Conclusions: In the low-count-rate regime, CdTe detector systems outperform the Si systems for detection tasks, whereas silicon outperforms one or both of the CdTe systems for material decomposition.

11.
Med Phys ; 47(7): e881-e912, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32215937

ABSTRACT

In x-ray computed tomography (CT), materials with different elemental compositions can have identical CT number values, depending on the mass density of each material and the energy of the detected x-ray beam. Differentiating and classifying different tissue types and contrast agents can thus be extremely challenging. In multienergy CT, one or more additional attenuation measurements are obtained at a second, third or more energy. This allows the differentiation of at least two materials. Commercial dual-energy CT systems (only two energy measurements) are now available either using sequential acquisitions of low- and high-tube potential scans, fast tube-potential switching, beam filtration combined with spiral scanning, dual-source, or dual-layer detector approaches. The use of energy-resolving, photon-counting detectors is now being evaluated on research systems. Irrespective of the technological approach to data acquisition, all commercial multienergy CT systems circa 2020 provide dual-energy data. Material decomposition algorithms are then used to identify specific materials according to their effective atomic number and/or to quantitate mass density. These algorithms are applied to either projection or image data. Since 2006, a number of clinical applications have been developed for commercial release, including those that automatically (a) remove the calcium signal from bony anatomy and/or calcified plaque; (b) create iodine concentration maps from contrast-enhanced CT data and/or quantify absolute iodine concentration; (c) create virtual non-contrast-enhanced images from contrast-enhanced scans; (d) identify perfused blood volume in lung parenchyma or the myocardium; and (e) characterize materials according to their elemental compositions, which can allow in vivo differentiation between uric acid and non-uric acid urinary stones or uric acid (gout) or non-uric acid (calcium pyrophosphate) deposits in articulating joints and surrounding tissues. 
In this report, the underlying physical principles of multienergy CT are reviewed and each of the current technical approaches is described. In addition, current and evolving clinical applications are introduced. Finally, the impact of multienergy CT technology on patient radiation dose is summarized.
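At its core, the material decomposition described above inverts a small linear system relating basis-material area densities to log-attenuation measurements at two effective energies; a minimal sketch with made-up coefficients (not NIST values):

```python
import numpy as np

# Toy mass-attenuation coefficients (cm^2/g) at two effective energies;
# the numbers are illustrative placeholders, not tabulated values.
#             water  iodine
M = np.array([[0.20, 2.50],     # low-kVp effective energy
              [0.17, 1.20]])    # high-kVp effective energy

# Simulated log-transformed measurements -ln(I/I0) for a ray through
# 30 g/cm^2 of water and 0.02 g/cm^2 of iodine:
true_a = np.array([30.0, 0.02])
p = M @ true_a

# Basis material decomposition: solve the 2x2 system for the densities.
a_hat = np.linalg.solve(M, p)
```

With noisy measurements, the same system is solved in a least-squares or maximum-likelihood sense rather than exactly, but the invertibility requirement (distinct energy dependence of the two basis materials) is the same.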


Subject(s)
Iodine , Tomography, X-Ray Computed , Algorithms , Humans , Phantoms, Imaging , Photons , X-Rays
12.
Med Phys ; 47(1): 27-36, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31665541

ABSTRACT

PURPOSE: Charge sharing and migration of scattered and fluorescence photons in an energy discriminating photon counting detector (PCD) degrade the detector's energy response and can cause a single incident photon to be registered as multiple events at different energies among neighboring pixels, leading to spatio-energetic correlation. Such a correlation in conventional linear, space-invariant imaging system can be usefully characterized by the frequency dependent detective quantum efficiency DQE(f). Defining and estimating DQE(f) for PCDs in a manner consistent with that of conventional detectors is complicated because the traditional definition of DQE(f) does not address spectral information. METHODS: We introduce the concept of presampling spectroscopic detective quantum efficiency, DQEs (f), and present an analysis of it for CdTe PCDs using a spatial domain method that starts from a previously described analytic computation of spatio-energetic crosstalk. DQEs (f) is estimated as the squared signal-to-noise ratio of the amplitude of a small-signal sinusoidal modulation of the object (cortical bone) thickness at frequency f estimated using data from the detector under consideration compared that obtained from the photon distribution incident on the detector. DQEs for material decomposition (spectral) and effective monoenergetic imaging tasks for different pixel pitch is studied based on the multipixel Cramér-Rao lower bound (CRLB) that accounts for inter pixel basis material correlation. Effective monoenergetic DQEs is estimated from the CRLB of a linear weighted combination of basis materials, and its energy dependence is also studied. RESULTS: Zero frequency DQEs for the spectral task was ~18%, 25%, and 34% for 250 µm, 500 µm, and 1 mm detector pixels respectively. 
Inter pixel signal correlation results in positive noise correlation between same basis material estimates of neighboring pixels, resulting in least impact on DQEs at the detector's Nyquist frequency. Effective monoenergetic DQEs (0) at the optimal energy is relatively tolerant of spectral degradation (85-91% depending on pixel size), but is highly dependent on the selected effective energy, with maximum variation (in 250 µm pixels) of 17% to 85% for effective energy between 30 to 120 keV. CONCLUSIONS: Our results show that spatio-energetic correlations degrade DQEs (f) beyond what is lost by poor spectral response in a single detector element. The positive correlation between computed single basis material values in neighboring pixels results in the penalty to DQEs (f) to be the least at the Nyquist frequency of the detector. It is desirable to reduce spectral degradation and crosstalk to minimize the impact on system performance. Larger pixels sizes have better spatio-energetic response due to lower charge sharing and escape of scatter and K-fluorescence photons, and therefore higher DQEs (0). Effective monoenergetic DQEs (0) at the optimal energy is much less affected by spectral degradation and crosstalk compared to DQEs for spectral tasks.


Subject(s)
Photons , Radiation Monitoring/methods , Models, Theoretical , Scattering, Radiation
13.
IEEE Trans Med Imaging ; 39(6): 1906-1916, 2020 06.
Article in English | MEDLINE | ID: mdl-31870981

ABSTRACT

Tools to simulate lower dose, noisy computed tomography (CT) images from existing data enable protocol optimization by quantifying the trade-off between patient dose and image quality. Many studies have developed and validated noise insertion techniques; however, most of these tools operate on proprietary projection data, which can be difficult to access, and can be time-consuming when a large number of realizations is needed. In response, this work aims to develop and validate an image domain approach to accurately insert CT noise and simulate low dose scans. In this framework, information from the image is utilized to estimate the variance map and local noise power spectra (NPS). Normally distributed noise is filtered within small patches in the image domain using the inverse Fourier transform of the square root of the estimated local NPS to generate noise with the appropriate spatial correlation. The patches are overlapped and element-wise multiplied by the standard deviation map to produce locally varying, spatially correlated noise. The resulting noise image is scaled based on the relationship between the initial and desired dose and added to the original image. The results demonstrate excellent agreement between traditional projection domain methods and the proposed method, both for simulated and real data sets. This new framework is not intended to replace projection domain methods; rather, it fills a gap in CT noise simulation tools and is an accurate alternative when projection domain methods are not practical, for example, in large scale repeatability or detectability studies.
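The core of the pipeline just described, shaping white noise with the square root of a local NPS and scaling by a standard-deviation map and the dose ratio, can be sketched for a single patch (the NPS shape and all numbers below are illustrative assumptions, not the paper's estimates):

```python
import numpy as np

rng = np.random.default_rng(1)

def correlated_noise_patch(nps, rng):
    """Filter white Gaussian noise with sqrt(NPS) in the frequency domain
    to impose the desired spatial correlation; normalized to unit std."""
    white = rng.standard_normal(nps.shape)
    shaped = np.real(np.fft.ifft2(np.fft.fft2(white) * np.sqrt(nps)))
    return shaped / shaped.std()

# Toy radially band-passed NPS for a 64x64 patch (illustrative shape).
n = 64
f = np.fft.fftfreq(n)
fx, fy = np.meshgrid(f, f)
rho = np.hypot(fx, fy)
nps = rho * np.exp(-(rho / 0.2) ** 2)
nps[0, 0] = 0.0                           # no DC noise power

std_map = np.full((n, n), 12.0)           # local noise std (HU), assumed
dose_ratio = 0.5                          # simulate a half-dose scan

# Going from dose D to dose r*D inflates variance by 1/r, so the added
# noise needs variance sigma^2 * (1/r - 1).
noise = (std_map * np.sqrt(1.0 / dose_ratio - 1.0)
         * correlated_noise_patch(nps, rng))
```

In the full method this is repeated over overlapping patches with locally estimated NPS and variance, then blended and added to the original image.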


Subject(s)
Algorithms , Tomography, X-Ray Computed , Computer Simulation , Humans , Image Processing, Computer-Assisted , Phantoms, Imaging , Radiation Dosage
14.
Med Phys ; 46(1): 127-139, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30383310

ABSTRACT

PURPOSE: A dynamic bowtie filter can modulate flux along both fan and view angles for reduced patient dose, scatter, and required photon flux, which is especially important for photon counting detectors (PCDs). Among the proposed dynamic bowtie designs, the piecewise-linear attenuator (Hsieh and Pelc, Med Phys. 2013;40:031910) offers more flexibility than conventional filters, but relies on analog positioning of a limited number of wedges. In this work, we study our previously proposed dynamic attenuator design, the fluid-filled dynamic bowtie filter (FDBF), which has digital control. Specifically, we use computer simulations to study fluence modulation, reconstructed image noise, and radiation dose, and to compare it to other attenuators. The FDBF is an array of small channels, each of which can be rapidly filled with a dense fluid or emptied, giving each channel a binary effect on the flux. The cumulative attenuation from each channel along the x-ray path contributes to the total FDBF attenuation. METHODS: An algorithm is proposed for selecting which FDBF channels should be filled. Two optimization metrics are considered: minimizing the maximum-count-rate for PCDs and minimizing peak-variance for energy-integrating detectors (EIDs) at fixed radiation dose (for optimizing dose efficiency). Using simulated chest, abdomen, and shoulder data, the performance is compared with a conventional bowtie and a piecewise-linear attenuator. For minimizing peak-variance, a perfect attenuator (a hypothetical filter capable of adjusting the fluence of each ray individually) and a flat-variance attenuator are also included in the comparison. Two possible fluids, solutions of zinc bromide and gadolinium chloride, were tested.
RESULTS: To obtain the same SNR as routine clinical protocols, the proposed FDBF reduces the maximum-count-rate (across projection data, averaged over the test objects) of PCDs to 1.2 Mcps/mm², which is 55.8 and 3.3 times lower than the max-count-rate of the conventional bowtie and the piecewise-linear bowtie, respectively. (Averaged across objects for FDBF, the max-count-rate without object and FDBF is 2063.5 Mcps/mm², and the max-count-rate with object without FDBF is 749.8 Mcps/mm².) Moreover, for the peak-variance analysis, the FDBF can reduce entrance-energy-fluence (sum of energy incident on objects, used as a surrogate for dose) to 34% of the entrance-energy-fluence from the conventional filter on average while achieving the same peak noise level. Its entrance-energy-fluence reduction performance is only 7% worse than the perfect-attenuator on average and is 13% better than the piecewise-linear filter for chest and shoulder. Furthermore, the noise map in the reconstructed image domain from the FDBF is more uniform than that of the piecewise-linear filter, with 3 times less variation across the object. For the dose reduction task, the zinc bromide solution performed slightly worse than stainless steel but was better than the gadolinium chloride solution. CONCLUSIONS: The FDBF allows finer control over flux distribution compared to piecewise-linear and conventional bowtie filters. It can reduce the required maximum-count-rate for PCDs to a level achievable by current detector designs and offers a high dose reduction factor.
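One plausible channel-selection scheme (a greedy sketch under assumed channel values, not the paper's actual algorithm) fills binary channels largest-first so the summed attenuation along a ray approaches a per-ray target without exceeding it:

```python
def select_channels(target_atten, channel_attens):
    """Greedily fill binary channels (largest first) so the summed
    attenuation approaches the target without overshooting it."""
    filled = []
    remaining = target_atten
    for i, a in sorted(enumerate(channel_attens), key=lambda x: -x[1]):
        if a <= remaining:          # filling this channel stays under target
            filled.append(i)
            remaining -= a
    return sorted(filled), target_atten - remaining

# Per-channel attenuation (in ln units) along one ray; illustrative values
# chosen as exact binary fractions so the example is float-exact.
channels = [2.0, 1.0, 1.0, 0.5, 0.25, 0.125]
filled, achieved = select_channels(3.625, channels)
```

With graded channel values like these, the worst-case quantization error is the smallest channel's attenuation; the paper's metric-driven optimization (max-count-rate or peak-variance) would instead set the per-ray targets themselves.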


Subject(s)
Tomography, X-Ray Computed/methods , Algorithms , Equipment Design , Image Processing, Computer-Assisted , Radiation Dosage , Scattering, Radiation , Tomography, X-Ray Computed/instrumentation
15.
Radiology ; 289(2): 293-312, 2018 11.
Article in English | MEDLINE | ID: mdl-30179101

ABSTRACT

Photon-counting CT is an emerging technology with the potential to dramatically change clinical CT. Photon-counting CT uses new energy-resolving x-ray detectors, with mechanisms that differ substantially from those of conventional energy-integrating detectors. Photon-counting CT detectors count the number of incoming photons and measure photon energy. This technique results in higher contrast-to-noise ratio, improved spatial resolution, and optimized spectral imaging. Photon-counting CT can reduce radiation exposure, reconstruct images at a higher resolution, correct beam-hardening artifacts, optimize the use of contrast agents, and create opportunities for quantitative imaging relative to current CT technology. In this review, the authors will explain the technical principles of photon-counting CT in nonmathematical terms for radiologists and clinicians. Following a general overview of the current status of photon-counting CT, they will explain potential clinical applications of this technology.


Subject(s)
Image Processing, Computer-Assisted/methods , Tomography, X-Ray Computed/instrumentation , Tomography, X-Ray Computed/methods , Humans , Photons
16.
Med Phys ; 45(11): 4897-4915, 2018 Nov.
Article in English | MEDLINE | ID: mdl-30191571

ABSTRACT

PURPOSE: Photon-counting, energy-resolving detectors are the subject of intense research interest, and there is a need for a general framework for performance assessment of these detectors. The linear-systems theory framework, which measures detector performance in terms of noise-equivalent quanta (NEQ) and detective quantum efficiency (DQE), is widely used for characterizing conventional x-ray detectors but does not take energy-resolving capabilities into account. The purpose of this work is to extend this framework to encompass energy-resolving photon-counting detectors and elucidate how the imperfect energy response and other imperfections in real-world detectors affect imaging performance, both for feature detection and for material quantification tasks. METHOD: We generalize NEQ and DQE to matrix-valued quantities as functions of spatial frequency, and show how these matrices can be calculated from simple Monte Carlo simulations. To demonstrate how the new metrics can be interpreted, we compute them for simplified models of fluorescence and Compton scatter in a photon-counting detector and for a Monte Carlo model of a CdTe detector with 0.5 × 0.5 mm² pixels. RESULTS: Our results show that the ideal-linear-observer performance for any detection or material quantification task can be calculated from the proposed generalized NEQ and DQE metrics. We also demonstrate that the proposed NEQ metric is closely related to a generalized version of the Cramér-Rao lower bound commonly used for assessing material quantification performance. Off-diagonal elements in the NEQ and DQE matrices are shown to be related to loss of energy information due to imperfect energy resolution. The Monte Carlo model of the CdTe detector predicts a zero-frequency dose efficiency relative to an ideal detector of 0.86 and 0.65 for detecting water and bone, respectively.
When the task instead is to quantify these materials, the corresponding values are 0.34 for water and 0.26 for bone. CONCLUSIONS: We have developed a framework for assessing the performance of photon-counting energy-resolving detectors and shown that the matrix-valued NEQ and DQE metrics contain sufficient information for calculating the dose efficiency for both detection and quantification tasks with any spatial and energy dependence. This framework will be beneficial for the development and optimization of photon-counting x-ray detectors.
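The link between such metrics and quantification performance can be sketched at zero frequency with a two-bin Poisson model: compute the Fisher information matrix for an ideal detector and for one whose bins are mixed by crosstalk, then compare per-material CRLBs (the response matrix and Jacobian below are toy assumptions, not the paper's CdTe model):

```python
import numpy as np

# Ideal detector: photons sorted perfectly into two energy bins with Poisson
# means lam_ideal; toy Jacobian w.r.t. (water, bone) thickness.
lam_ideal = np.array([4000.0, 6000.0])
J_ideal = np.array([[-800.0, -120.0],
                    [-500.0, -300.0]])

# Real detector: a column-stochastic response matrix leaks counts between
# bins (counts preserved, energy information degraded).
R = np.array([[0.8, 0.1],
              [0.2, 0.9]])
lam_real = R @ lam_ideal
J_real = R @ J_ideal

def fisher(lam, J):
    """Poisson Fisher information matrix: sum_b J_b J_b^T / lam_b."""
    return J.T @ (J / lam[:, None])

crlb_ideal = np.linalg.inv(fisher(lam_ideal, J_ideal))
crlb_real = np.linalg.inv(fisher(lam_real, J_real))

# Task-specific dose efficiency (a zero-frequency analogue of the paper's
# matrix DQE) for quantifying each basis material:
eff = np.diag(crlb_ideal) / np.diag(crlb_real)
```

Mixing the bins strictly loses energy information here, so each efficiency falls below 1, which is the behavior the off-diagonal DQE elements quantify.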


Subject(s)
Photons , Radiometry/instrumentation , Models, Theoretical , Scattering, Radiation
17.
Article in English | MEDLINE | ID: mdl-29993366

ABSTRACT

Transcranial magnetic resonance-guided focused ultrasound continues to gain traction as a noninvasive treatment option for a variety of pathologies. Focusing ultrasound through the skull can be accomplished by adding a phase correction to each element of a hemispherical transducer array. The phase corrections are determined with acoustic simulations that rely on speed of sound estimates derived from CT scans. While several studies have investigated the relationship between acoustic velocity and CT Hounsfield units (HUs), these studies have largely ignored the impact of X-ray energy, reconstruction method, and reconstruction kernel on the measured HU, and therefore the estimated velocity, and none have measured the relationship directly. In this paper, 91 ex vivo human skull fragments from two skulls are imaged by 80 CT scans with a variety of energies and reconstruction methods. The average HU from each fragment is found for each scan and correlated with the speed of sound measured using a through-transmission technique in that fragment. As measured by the R² value, the results show that CT is able to account for 23%-53% of the variation in velocity in the human skull. Both the X-ray energy and the reconstruction technique significantly alter the R² value and the linear relationship between HU and speed of sound in bone. Accounting for these variations will lead to more accurate phase corrections and more efficient transmission of acoustic energy through the skull.
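The workflow the abstract describes, fitting a linear HU-to-velocity relation and then converting per-element travel-time differences into phase corrections, can be sketched as follows. All numbers here (the calibration pairs, the 650 kHz drive frequency, the segment geometry) are invented for illustration and are not taken from the study:

```python
import numpy as np

# Hypothetical (mean HU, measured speed of sound) pairs per skull fragment,
# standing in for the through-transmission measurements described above.
hu = np.array([800.0, 1200.0, 1600.0, 2000.0])
c_meas = np.array([2400.0, 2700.0, 3000.0, 3300.0])   # m/s
a, b = np.polyfit(hu, c_meas, 1)   # least-squares line: c = a*HU + b

def phase_correction(hu_along_ray, seg_len_m, freq_hz, c_water=1500.0):
    """Phase (radians) compensating the travel-time difference caused by
    skull segments of length seg_len_m relative to water-only propagation."""
    c_bone = a * np.asarray(hu_along_ray) + b
    seg = np.asarray(seg_len_m)
    dt = np.sum(seg / c_bone - seg / c_water)   # bone is faster, so dt < 0
    return 2.0 * np.pi * freq_hz * dt

# 5 mm of HU=1500 bone in the path of one element, 650 kHz transducer.
phi = phase_correction([1500.0], [0.005], 650e3)
```

Sign conventions differ between simulation packages; here a negative phi means the wave through bone arrives early, so the element's emission should be delayed by the corresponding phase.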


Subject(s)
Skull/diagnostic imaging , Tomography, X-Ray Computed/methods , Ultrasonography/methods , Female , Humans , Male , Middle Aged , Models, Biological , Photons , Skull/physiology
18.
IEEE Trans Med Imaging ; 37(8): 1910-1919, 2018 08.
Article in English | MEDLINE | ID: mdl-29993882

ABSTRACT

Charge sharing, scatter, and fluorescence events in a photon counting detector can result in counting of a single incident photon in multiple neighboring pixels, each at a fraction of the true energy. This causes energy distortion and correlation of data across energy bins in neighboring pixels (spatio-energy correlation), with the severity depending on the detector pixel size and detector material. If a "macro-pixel" is formed by combining the counts from multiple adjacent small pixels, it will exhibit correlations across its energy bins. Understanding these effects can be crucial for detector design and for model-based imaging applications. This paper investigates the impact of these effects in basis material and effective monoenergetic estimates using the Cramér-Rao lower bound. To do so, we derive a correlation model for the multi-counting events. CdTe detectors with grids of pixels with side lengths of 250 μm, 500 μm, and 1 mm were compared, with binning of 4 × 4, 2 × 2, and 1 × 1 pixels, respectively, keeping the net 1 mm² aperture constant. The same flux was applied to each. The mean and covariance matrix of measured photon counts were derived analytically using spatio-energy response functions precomputed from Monte Carlo simulations. Our results show that a 1 mm² macro-pixel with 250 × 250 μm² sub-pixels shows 35% higher standard deviation than a single 1 mm² pixel for material-specific imaging, while the penalty for effective monoenergetic imaging is <10% compared with a single 1 mm² pixel. Potential benefits of sub-pixels (higher spatial resolution and lower pulse pile-up effects) are important but were not investigated here.
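Given the analytically derived mean and covariance of the (macro-pixel × energy-bin) counts, the Cramér-Rao lower bound follows from the Fisher information. A minimal sketch under a Gaussian noise approximation, using a made-up Jacobian and covariance rather than the paper's Monte Carlo-derived response functions:

```python
import numpy as np

def crlb(jac, cov):
    """CRLB covariance of basis-material estimates.

    jac : (K, P) Jacobian of mean counts w.r.t. the P basis thicknesses.
    cov : (K, K) covariance of the K bin counts, including spatio-energy
          correlations from events shared across sub-pixels.
    Gaussian approximation: Fisher = J^T C^-1 J, CRLB = Fisher^-1.
    """
    fisher = jac.T @ np.linalg.solve(cov, jac)
    return np.linalg.inv(fisher)

# Toy 2-bin, 2-material example; adding bin-to-bin covariance changes the
# achievable variance for both materials (here it happens to lower it).
jac = np.array([[-50.0, -20.0],
                [-30.0, -40.0]])
cov_uncorr = np.diag([100.0, 80.0])
cov_corr = cov_uncorr + np.array([[0.0, 40.0], [40.0, 0.0]])
var_uncorr = np.diag(crlb(jac, cov_uncorr))
var_corr = np.diag(crlb(jac, cov_corr))
```

The point of the toy numbers is only that off-diagonal covariance terms propagate directly into the bound, which is why modeling the spatio-energy correlation matters for macro-pixel designs.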


Subject(s)
Photons , Radiography , Radiography/instrumentation , Radiography/methods
19.
Med Phys ; 45(4): 1433-1443, 2018 Apr.
Article in English | MEDLINE | ID: mdl-29418004

ABSTRACT

PURPOSE: Photon-counting detectors using CdTe or CZT substrates are promising candidates for future CT systems but suffer from a number of nonidealities, including charge sharing and pulse pileup. By increasing the pixel size of the detector, the system can improve charge sharing characteristics at the expense of increased pileup. The purpose of this work is to describe these considerations in the optimization of the detector pixel pitch. METHODS: The transport of x-rays through the CdTe substrate was simulated in a Monte Carlo fashion using GEANT4. Deposited energy was converted into charges distributed as a Gaussian function whose size depends on interaction depth, capturing spreading from diffusion and Coulomb repulsion. The charges were then collected in a pixelated fashion. Pulse pileup was incorporated separately with Monte Carlo simulation. The Cramér-Rao lower bound (CRLB) of the measurement variance was numerically estimated for the basis material projections. Noise in these estimates was propagated into CT images. We simulated pixel pitches of 250, 350, and 450 microns and compared the results to a photon-counting detector with pileup but otherwise ideal energy response and to an ideal dual-energy system (80/140 kVp with tin filtration). The modeled CdTe thickness was 2 mm, the incident spectrum was 140 kVp and 500 mA, and the effective dead time was 67 ns. Charge summing circuitry was not modeled. We restricted our simulations to objects of uniform thickness and did not consider the potential advantage of smaller pixels at high spatial frequencies. RESULTS: At very high x-ray flux, pulse pileup dominates and small pixel sizes perform best. At low flux or for thick objects, charge sharing dominates and large pixel sizes perform best. At low flux and depending on the beam hardness, the CRLB of variance in basis material projection tasks can be 32%-55% higher with a 250 micron pixel pitch than with a 450 micron pixel pitch.
However, both are about four times worse in variance than the ideal photon-counting detector. The optimal pixel size depends on a number of factors such as x-ray technique and object size. At high technique (140 kVp/500 mA), the ratio of variance for a 450 micron pixel compared to a 250 micron pixel is 2126%, 200%, 97%, and 78% when imaging 10, 15, 20, and 25 cm of water, respectively. If 300 mg/cm² of iodine is also added to the object, the variance ratio is 117%, 91%, 74%, and 72%, respectively. Nonspectral tasks, such as equivalent monoenergetic imaging, are less sensitive to spectral distortion. CONCLUSIONS: The detector pixel size is an important design consideration in CdTe detectors. Smaller pixels allow for improved capabilities at high flux but increase charge sharing, which in turn compromises spectral performance. The optimal pixel size will depend on the specific task and on the charge shaping time.
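Numerically, the CRLB for basis material projections can be estimated from a Poisson bin-count model via the Fisher information F_ij = Σ_k (∂λ_k/∂A_i)(∂λ_k/∂A_j)/λ_k. The sketch below uses invented effective attenuation values and fluxes, not the GEANT4-derived detector response of the paper:

```python
import numpy as np

def poisson_crlb(counts_fn, a, eps=1e-4):
    """CRLB for basis coefficients a under a Poisson model:
    F_ij = sum_k (dlam_k/da_i)(dlam_k/da_j) / lam_k, CRLB = F^-1."""
    lam = counts_fn(a)
    jac = np.empty((lam.size, a.size))
    for i in range(a.size):                      # central-difference Jacobian
        da = np.zeros_like(a)
        da[i] = eps
        jac[:, i] = (counts_fn(a + da) - counts_fn(a - da)) / (2 * eps)
    fisher = (jac.T / lam) @ jac
    return np.linalg.inv(fisher)

# Toy two-bin forward model: made-up effective attenuations for
# (water thickness in cm, iodine area density in g/cm^2).
mu = np.array([[0.20, 4.0],    # low-energy bin
               [0.15, 1.5]])   # high-energy bin
n0 = np.array([1e5, 8e4])      # incident counts per bin

def counts(a):
    return n0 * np.exp(-mu @ a)

cov = poisson_crlb(counts, np.array([20.0, 0.1]))  # 20 cm water + iodine
var_water, var_iodine = np.diag(cov)
```

Sweeping the object thickness (and, in a fuller model, substituting per-pitch spectral responses for `mu`) reproduces the kind of variance-ratio comparison reported above.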


Subject(s)
Cadmium Compounds , Tellurium , Tomography, X-Ray Computed/instrumentation , Humans , Monte Carlo Method
20.
J Med Imaging (Bellingham) ; 4(2): 023503, 2017 Apr.
Article in English | MEDLINE | ID: mdl-28560242

ABSTRACT

We present a fast, noise-efficient, and accurate estimator for material separation using photon-counting x-ray detectors (PCXDs) with multiple energy bin capability. The proposed targeted least squares estimator (TLSE) improves on a previously described A-table method by incorporating dynamic weighting that brings the variance closer to the Cramér-Rao lower bound (CRLB) throughout the operating range. We explore Cartesian and average-energy segmentation of the basis material space for TLSE and show that, compared with Cartesian segmentation, the average-energy method requires fewer segments to achieve similar performance. We compare the average-energy TLSE to other proposed estimators, including the gold-standard maximum likelihood estimator (MLE) and the A-table, in terms of variance, bias, and computational efficiency. The variance and bias were simulated with Monte Carlo methods in the range of 0 to 6 cm of aluminum and 0 to 50 cm of water. The average-energy TLSE achieves an average variance within 2% of the CRLB and a mean absolute error of [Formula: see text]. Using the same protocol, the MLE showed variance within 1.9% of the CRLB and an average absolute error of [Formula: see text] but was 50 times slower in our implementations. Compared with the A-table method, TLSE gives a more homogeneously optimal variance-to-CRLB ratio over the operating region. We show that the variance in basis material estimates for TLSE is lower than that of the A-table method by as much as [Formula: see text] in the peripheral regions of the operating range (thin or thick objects). The TLSE is a computationally efficient and fast method for material separation with PCXDs, with accuracy and precision comparable to the MLE.
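The segmented-calibration idea behind such estimators can be illustrated with a noise-free, two-bin toy model: fit one affine least-squares map from log-counts to basis thicknesses per segment of an average-energy-like statistic, then select the segment at estimation time. Everything below (attenuation values, the statistic, the segment count) is invented for illustration; the authors' actual TLSE weighting and calibration differ:

```python
import numpy as np

# Made-up two-bin forward model for (aluminum, water) thicknesses in cm.
mu = np.array([[2.0, 0.30],   # low-energy bin effective attenuations
               [0.7, 0.20]])  # high-energy bin
n0 = np.array([2e4, 2e4])     # incident counts per bin

def log_counts(t):
    return np.log(n0) - mu @ t   # noise-free log measurements

# Calibration grid over the operating range (0-6 cm Al, 0-50 cm water).
t_cal = np.array([[al, w] for al in np.linspace(0, 6, 13)
                          for w in np.linspace(0, 50, 26)])
l_cal = np.array([log_counts(t) for t in t_cal])

# Segment the calibration by an average-energy-like statistic and fit one
# affine map (log-counts -> thicknesses) per segment via least squares.
stat = l_cal[:, 1] - l_cal[:, 0]
edges = np.quantile(stat, np.linspace(0, 1, 9))   # 8 segments
coefs = []
for lo, hi in zip(edges[:-1], edges[1:]):
    sel = (stat >= lo) & (stat <= hi)
    X = np.hstack([l_cal[sel], np.ones((sel.sum(), 1))])
    coefs.append(np.linalg.lstsq(X, t_cal[sel], rcond=None)[0])

def estimate(l):
    """Look up the segment for measurement l and apply its affine map."""
    k = int(np.searchsorted(edges, l[1] - l[0])) - 1
    k = min(max(k, 0), len(coefs) - 1)
    return np.append(l, 1.0) @ coefs[k]

t_hat = estimate(log_counts(np.array([3.0, 20.0])))  # recovers ~(3, 20)
```

Because each estimate is a single table lookup plus a small matrix product, this class of estimator avoids the iterative optimization that makes the MLE comparatively slow.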
