Results 1 - 20 of 29
1.
Phys Med Biol ; 69(11)2024 May 14.
Article in English | MEDLINE | ID: mdl-38640917

ABSTRACT

Purpose. Fast kV-switching (FKS) and dual-layer flat-panel detector (DL-FPD) technologies have been actively studied as promising dual-energy spectral imaging solutions for FPD-based cone-beam computed tomography (CT). However, cone-beam CT (CBCT) spectral imaging is known to face challenges in achieving accurate and robust material discrimination. This is because the energy separation provided by either FKS or DL-FPD alone is still limited, the effective low- and high-energy projections have clearly unbalanced signal levels in real applications, and the x-ray scatter in cone-beam scans makes material decomposition nearly impossible if no correction is applied. To further improve CBCT spectral imaging capability, this work promotes a source-detector joint multi-energy spectral imaging solution that takes advantage of both FKS and DL-FPD, and conducts a feasibility study on the first tabletop CBCT system developed with this joint spectral imaging capability. Methods. For CBCT, multi-energy spectral imaging can be jointly realized by using an x-ray source whose generator can alternate kilo-voltages at tens of Hertz (i.e. FKS), and a DL-FPD whose top- and bottom-layer projections correspond to different effective energy levels. Thanks to the complementary characteristics inherent in FKS and DL-FPD, the overall energy separation is significantly better than with FKS or DL-FPD alone, and the x-ray photon detection efficiency is also improved compared with FKS alone. In this work, a noise performance analysis using the Cramér-Rao lower bound (CRLB) method is conducted. The CRLB for basis materials after projection-domain material decomposition is derived, followed by a set of numerical calculations of CRLBs for the FKS, the DL-FPD, and the joint solution, respectively. To compensate for the slight angular mismatch between low- and high-energy projections in FKS, a dual-domain projection completion scheme is implemented. Afterwards, material decomposition from the complete projection data is carried out using the maximum-likelihood method, followed by reconstruction of basis material images and virtual monochromatic images (VMIs). To the best of our knowledge, the first FKS and DL-FPD jointly enabled multi-energy tabletop CBCT system has been developed in our laboratory. To evaluate its spectral imaging performance, a set of physics experiments is conducted, in which multi-energy and head phantoms are scanned using 80/105/130 kVp switching pairs and projection data are collected using a prototype DL-FPD, whose top and bottom detector layers both use 550 µm cesium iodide (CsI) scintillators with no intermediate metal filter between them. Results. The numerical simulations show that the joint spectral imaging solution leads to a significant improvement in energy separation and lower noise levels in most material decomposition cases. The physics experiments confirmed the feasibility and superiority of the joint spectral imaging: the CNRs in the selected regions of interest of the multi-energy phantom improved on average by 21.9% and 20.4% for water basis images and by 32.8% and 62.8% for iodine basis images, compared with FKS and DL-FPD, respectively. For the head phantom case, the joint spectral imaging also effectively reduces streaking artifacts, and the standard deviation in the selected regions of interest of the VMIs is reduced on average by 19.5% and 8.1% compared with FKS and DL-FPD, respectively. Conclusions. A feasibility study of the joint spectral imaging solution for CBCT utilizing both FKS and DL-FPD was conducted, and the first tabletop CBCT system with this capability was developed; as expected, it exhibits improved CNR and more effectively avoids streaking artifacts.
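As a companion to the Methods summary above, the following is a minimal numerical sketch of a CRLB calculation for projection-domain two-material decomposition. The spectra, attenuation curves, and line-integral values are illustrative assumptions rather than the paper's actual system model.

```python
import numpy as np

# Toy two-material CRLB sketch for projection-domain decomposition.
# Spectra and attenuation coefficients below are illustrative placeholders.
E = np.linspace(20, 130, 12)                      # keV grid (coarse, toy)
mu = np.stack([0.25 * (60.0 / E) ** 3 + 0.18,     # "water-like" basis (1/cm)
               8.00 * (60.0 / E) ** 3 + 0.30])    # "iodine-like" basis (1/cm)

def spectrum(kvp, scale):
    s = np.clip(kvp - E, 0, None)                 # crude bremsstrahlung shape
    return scale * s / s.sum()

# One effective spectrum per measurement (e.g. low/high kVp or top/bottom layer).
S = np.stack([spectrum(80, 2e5), spectrum(130, 2e5)])   # (n_meas, n_E)

def crlb(A, S, mu):
    """CRLB of the basis line integrals A = (A1, A2) under a Poisson model."""
    lam = S @ np.exp(-mu.T @ A)                   # expected counts per measurement
    # d(lam_i)/d(A_j) = -sum_E S_i(E) mu_j(E) exp(-sum_k mu_k(E) A_k)
    grad = -np.einsum('ie,je,e->ij', S, mu, np.exp(-mu.T @ A))  # (n_meas, 2)
    fisher = (grad.T / lam) @ grad                # 2x2 Fisher information matrix
    return np.linalg.inv(fisher)                  # diagonal = variance lower bounds

print(np.sqrt(np.diag(crlb(np.array([20.0, 0.05]), S, mu))))
```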


Subject(s)
Cone-Beam Computed Tomography , Phantoms, Imaging , Cone-Beam Computed Tomography/instrumentation , Cone-Beam Computed Tomography/methods , Time Factors , Image Processing, Computer-Assisted/methods , Humans , Feasibility Studies
2.
Med Phys ; 51(4): 2398-2412, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38477717

ABSTRACT

BACKGROUND: Cone-beam CT (CBCT) has been extensively employed in industrial and medical applications, such as image-guided radiotherapy and diagnostic imaging, with a growing demand for quantitative imaging using CBCT. However, conventional CBCT can be easily compromised by scatter and beam hardening artifacts, and the entanglement of scatter and spectral effects introduces additional complexity. PURPOSE: The intertwined scatter and spectral effects within CBCT pose significant challenges to the quantitative performance of spectral imaging. In this work, we present the first attempt to develop a stationary spectral modulator with flying focal spot (SMFFS) technology as a promising, low-cost approach to accurately solving the x-ray scattering problem and physically enabling spectral imaging in a unified framework, with no significant misalignment in the data sampling of spectral projections. METHODS: To deal with the intertwined scatter-spectral challenge, we propose a novel scatter-decoupled material decomposition (SDMD) method for SMFFS, which consists of four steps: (1) generation of spatial-resolution-preserved and noise-suppressed multi-energy "residual" projections free from scatter, based on a hypothesis of scatter similarity; (2) first-pass material decomposition from the generated multi-energy residual projections in non-penumbra regions, with a structure similarity constraint to overcome the increased noise and penumbra effect; (3) scatter estimation for the complete data; and (4) second-pass material decomposition for the complete data using a multi-material spectral correction method. Monte Carlo simulations of a pure-water cylinder phantom with different focal spot deflections are conducted to validate the scatter similarity hypothesis. Both numerical simulations using a clinical abdominal CT dataset, and physics experiments on a tabletop CBCT system using a Gammex multi-energy CT phantom and an anthropomorphic chest phantom, are carried out to demonstrate the feasibility of CBCT spectral imaging with SMFFS and our proposed SDMD method. RESULTS: Monte Carlo simulations show that focal spot deflections within a range of 2 mm share quite similar scatter distributions overall. Numerical simulations demonstrate that SMFFS with the SDMD method can achieve better material decomposition and CT number accuracy with fewer artifacts. In physics experiments, for the Gammex phantom, the average error of the mean values ($E^{\text{ROI}}_{\text{RMSE}}$) in selected regions of interest (ROIs) of the virtual monochromatic image (VMI) at 70 keV is 8 HU for the SMFFS cone-beam (CB) scan, and 19 and 210 HU for the sequential 80/120 kVp (dual kVp, DKV) CB scan with and without scatter correction, respectively. For the chest phantom, the $E^{\text{ROI}}_{\text{RMSE}}$ in selected ROIs of the VMIs is 12 HU for the SMFFS CB scan, and 15 and 438 HU for the sequential 80/140 kVp CB scan with and without scatter correction, respectively. Also, the non-uniformity among selected regions of the chest phantom is 14 HU for the SMFFS CB scan, and 59 and 184 HU for the DKV CB scan with and without a traditional scatter correction method, respectively. CONCLUSIONS: We propose an SDMD method for CBCT with SMFFS. Our preliminary results show that SMFFS can enable spectral imaging with simultaneous scatter correction for CBCT and effectively improve its quantitative imaging performance.
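The scatter-similarity hypothesis behind step (1) can be illustrated with a toy calculation: if two focal-spot positions see nearly the same scatter field, subtracting their measurements cancels most of the scatter and leaves a scatter-free "residual" projection. The array shapes, the synthetic primary signal, and the 2% scatter mismatch below are assumptions for illustration only.

```python
import numpy as np

# Illustrative sketch of the scatter-similarity hypothesis behind SDMD step (1):
# measurements at two focal-spot positions share (approximately) the same
# scatter field, so their difference is a scatter-free "residual" projection.
rng = np.random.default_rng(0)
shape = (256, 256)                                   # detector rows x columns

primary_fs1 = rng.uniform(500, 5000, shape)          # primary at focal spot 1 (toy)
primary_fs2 = np.roll(primary_fs1, 3, axis=1)        # focal-spot shift ~ small ray shift
scatter = 800 + 200 * np.sin(np.linspace(0, np.pi, shape[1]))[None, :]  # smooth scatter

meas_fs1 = primary_fs1 + scatter                     # what the detector records
meas_fs2 = primary_fs2 + scatter * 1.02              # "quite similar" scatter (2% off)

residual = meas_fs1 - meas_fs2                       # scatter (mostly) cancels
true_residual = primary_fs1 - primary_fs2
print("max |scatter leakage| in residual:",
      np.max(np.abs(residual - true_residual)))      # small vs. the primary signal
```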


Subject(s)
Spiral Cone-Beam Computed Tomography , Image Processing, Computer-Assisted/methods , Scattering, Radiation , Physical Phenomena , Phantoms, Imaging , Cone-Beam Computed Tomography/methods , Artifacts , Algorithms
3.
Med Phys ; 51(6): 4121-4132, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38452276

ABSTRACT

BACKGROUND: Ring artifacts are a common problem in computed tomography (CT) and can lead to inaccurate diagnoses and treatment plans. They can be caused by various factors such as detector imperfections, anti-scatter grids, or other nonuniform filters placed in the x-ray beam. Physics-based corrections for these x-ray source and detector non-uniformities generally cannot completely remove ring artifacts. Therefore, there is a need for a robust method that can effectively remove ring artifacts in the image domain while preserving details. PURPOSE: This study aims to develop an effective method for removing ring artifacts from reconstructed CT images. METHODS: The proposed method starts by converting the reconstructed CT image containing ring artifacts into polar coordinates, thereby transforming these artifacts into stripes. Relative Total Variation is used to extract the image's overall structural information. For the efficient restoration of intricate details, we introduce Directional Gradient Domain Optimization (DGDO) and design objective functions that make use of both the image's gradient and its overall structure. We then present an efficient analytical algorithm to minimize these objective functions. The image obtained through DGDO is transformed back into Cartesian coordinates, finalizing the ring artifact correction. RESULTS: Through a series of synthetic and real-world experiments, we demonstrate the effectiveness of our proposed method in correcting ring artifacts while preserving intricate details in reconstructed CT images. In direct comparisons, our method exhibits superior visual quality relative to several previous approaches. These results underscore the potential of our approach for enhancing the overall quality and clinical utility of CT imaging. CONCLUSIONS: The proposed method offers an analytical solution for removing ring artifacts from CT images while preserving details. As ring artifacts are a common problem in CT imaging, this method has high practical value in the medical field. The proposed method can improve image quality and reduce the difficulty of disease diagnosis, thereby contributing to better patient care.
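The coordinate-transform part of the pipeline (Cartesian to polar, stripe suppression, polar back to Cartesian) can be sketched as below. A simple smoothed mean-profile subtraction stands in for the paper's Relative Total Variation and DGDO machinery, and the toy ring image is an assumption; this is a minimal illustration of the workflow, not the published algorithm.

```python
import numpy as np
from scipy.ndimage import map_coordinates, uniform_filter1d

def to_polar(img, n_theta=720, n_r=None):
    """Resample a square image onto a (theta, r) grid around its center."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    if n_r is None:
        n_r = int(min(cy, cx))
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    tt, rr = np.meshgrid(thetas, np.arange(n_r), indexing="ij")
    ys, xs = cy + rr * np.sin(tt), cx + rr * np.cos(tt)
    return map_coordinates(img, [ys, xs], order=1), (cy, cx)

def to_cartesian(polar, shape, center):
    """Inverse mapping: sample the (theta, r) image back onto Cartesian pixels."""
    h, w = shape
    cy, cx = center
    n_theta, n_r = polar.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - cy, xx - cx)
    t_idx = np.mod(np.arctan2(yy - cy, xx - cx), 2 * np.pi) / (2 * np.pi) * n_theta
    return map_coordinates(polar, [t_idx, r], order=1, mode="nearest")

def remove_rings(img, smooth=21):
    polar, center = to_polar(img)
    # Rings become stripes at fixed radius: estimate them as the angular mean
    # minus its smoothed (low-frequency) version, then subtract.
    radial_profile = polar.mean(axis=0)
    stripes = radial_profile - uniform_filter1d(radial_profile, smooth)
    return to_cartesian(polar - stripes[None, :], img.shape, center)

# Toy demo: a smooth object plus a synthetic ring of radius 60 pixels.
yy, xx = np.mgrid[0:256, 0:256]
r = np.hypot(yy - 127.5, xx - 127.5)
img = np.exp(-r / 100) + 0.3 * (np.abs(r - 60) < 1.5)
clean = remove_rings(img)
```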


Subject(s)
Artifacts , Image Processing, Computer-Assisted , Tomography, X-Ray Computed , Tomography, X-Ray Computed/methods , Image Processing, Computer-Assisted/methods , Algorithms , Phantoms, Imaging , Humans
4.
Med Phys ; 50(11): 6762-6778, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37675888

ABSTRACT

BACKGROUND: Flat panel detector (FPD) based cone-beam computed tomography (CT) has made tremendous progress in the last two decades, with new and advanced medical and industrial applications continuing to emerge, from diagnostic imaging to image guidance for radiotherapy and interventional surgery. The current cone-beam CT (CBCT), however, is still suboptimal for head CT scans, which require a high standard of image quality. While dual-layer FPD technology is under extensive development and promises to further advance CBCT from qualitative anatomic imaging to quantitative dual-energy CT, its potential for enabling head CBCT applications has not yet been fully investigated. PURPOSE: The relatively moderate energy separation of the dual-layer FPD and the overall low signal level, especially at the bottom-layer detector, raise significant challenges in performing high-quality dual-energy material decomposition (MD). In this work, we propose a hybrid, physics- and model-guided MD algorithm that attempts to make full use of the detected x-ray signals and the prior knowledge behind head CBCT using a dual-layer FPD. METHODS: First, a regular projection-domain MD is performed, serving both as the initial result of our approach and as the conventional method for comparison. Second, based on the combined projection, a dual-layer multi-material spectral correction (dMMSC) is applied to generate beam-hardening-free images. Third, the dMMSC-corrected projections are adopted as physics-model-based guidance to generate the hybrid MD. A set of physics experiments, including fan-beam and cone-beam scans of a head phantom and a Gammex Multi-Energy CT phantom, is conducted to validate our proposed approach. RESULTS: The combined reconstruction could reduce noise by about 10% with no visible resolution degradation. The fan-beam studies on the Gammex phantom demonstrated an improved MD performance, with the averaged iodine quantification error for the 5-15 mg/ml iodine inserts reduced from about 5.6% to 3.0% by the hybrid method. For the fan-beam scan of the head phantom, our proposed hybrid MD could significantly reduce the streak artifacts, with the CT number nonuniformity (NU) in the selected regions of interest (ROIs) reduced from 23 Hounsfield Units (HU) to 4.2 HU, and the corresponding noise suppressed from 31 to 6.5 HU. For the cone-beam scan, after scatter correction (SC) and cone-beam artifact reduction (CBAR), our approach can also significantly improve image quality, with the CT number NU in the selected ROI reduced from 24.2 to 6.6 HU and the noise level suppressed from 22.1 to 8.2 HU. CONCLUSIONS: Our proposed physics- and model-guided hybrid MD for dual-layer-FPD-based head CBCT can significantly improve the robustness of MD and suppress low-signal artifacts. This preliminary feasibility study also demonstrates that the dual-layer FPD is promising for enabling head CBCT spectral imaging.
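The "regular projection-domain MD" used as the starting point can be sketched for a single detector pixel of a dual-layer FPD as a small maximum-likelihood fit. The toy spectra, attenuation curves, and basis thicknesses below are assumed placeholders, not the calibrated system model.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of a per-pixel projection-domain material decomposition for a
# dual-layer FPD, using a Poisson negative log-likelihood. Spectra and
# attenuation curves are toy placeholders, not the actual detector model.
E = np.linspace(20, 120, 11)
mu = np.stack([0.25 * (60.0 / E) ** 3 + 0.18,        # basis 1 ("water-like", 1/cm)
               4.50 * (60.0 / E) ** 3 + 0.25])       # basis 2 ("bone-like", 1/cm)
S_top = np.exp(-((E - 45) / 18) ** 2) * 1e5          # top layer: softer response (toy)
S_bot = np.exp(-((E - 75) / 20) ** 2) * 6e4          # bottom layer: harder response (toy)
S = np.stack([S_top, S_bot])

def expected_counts(A):
    return S @ np.exp(-mu.T @ A)                      # one value per detector layer

def neg_log_likelihood(A, counts):
    lam = expected_counts(A)
    return np.sum(lam - counts * np.log(lam))         # Poisson NLL (up to a constant)

true_A = np.array([10.0, 0.5])                        # cm of each basis material (toy)
counts = np.random.default_rng(1).poisson(expected_counts(true_A))
res = minimize(neg_log_likelihood, x0=np.array([5.0, 0.25]), args=(counts,),
               method="Nelder-Mead", bounds=[(0, None), (0, None)])
print("estimated basis line integrals:", res.x)       # should land near true_A
```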


Subject(s)
Iodine , Tomography, X-Ray Computed , Feasibility Studies , Cone-Beam Computed Tomography/methods , Head/diagnostic imaging , Algorithms , Phantoms, Imaging , Artifacts , Image Processing, Computer-Assisted/methods
5.
Med Phys ; 50(8): 5150-5165, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37379056

ABSTRACT

BACKGROUND: With advanced x-ray source and detector technologies being continuously developed, non-traditional CT geometries have been widely explored. The Generalized-Equiangular Geometry CT (GEGCT) architecture, in which an x-ray source may be positioned radially far from the focus of an equiangularly spaced, arced detector array, is of importance in many novel CT systems and designs. PURPOSE: GEGCT, unfortunately, has no theoretically exact and shift-invariant analytical image reconstruction algorithm in general. In this study, to obtain fast and accurate reconstruction from GEGCT and to promote its system design and optimization, an in-depth investigation of a group of approximate Filtered Back-Projection (FBP) algorithms with a variety of weighting strategies has been conducted. METHODS: The architecture of GEGCT is first presented and characterized using a normalized-radial-offset distance (NROD). Next, shift-invariant weighted FBP-type algorithms are derived in a unified framework, with pre-filtering, filtering, and post-filtering weights, for both fixed and dynamic NROD configurations. Three viable weighting strategies are then presented, including a classic one developed by Besson in the literature and two new ones derived from curvature fitting and from an empirical formula; all three weights can be expressed as functions of NROD. After that, an analysis of reconstruction accuracy is conducted over a wide range of NROD. Finally, the weighted FBP algorithm for GEGCT is extended to a three-dimensional form for the case of a cone-beam scan with a cylindrical detector array. RESULTS: Theoretical analysis and numerical study show that weights in the shift-invariant FBP algorithms can guarantee highly accurate reconstruction for GEGCT. A simulation of the Shepp-Logan phantom and a GEGCT scan of the lung mimicked using a clinical lung CT dataset both demonstrate that FBP reconstructions with the Besson and polynomial weights can achieve excellent image quality, with peak signal-to-noise ratio and structural similarity at the same level as those from the standard equiangular fan-beam CT scan. Reconstruction of a cylinder object with multiple contrasts from a simulated GEGCT scan with dynamic NROD is also highly consistent with the fixed-NROD results when using the Besson and polynomial weights, with a root mean square error of less than 7 Hounsfield units, demonstrating the robustness and flexibility of the presented FBP algorithms. In terms of resolution, the direct FBP methods for GEGCT achieve a spatial resolution of 1.35 lp/mm at the 10% modulation transfer function point, higher than that of the rebinning method, which only reaches 1.14 lp/mm. Moreover, 3D reconstructions of a disc phantom reveal that a greater value of NROD for GEGCT brings fewer cone-beam artifacts, as expected. CONCLUSIONS: We propose the concept of GEGCT and investigate the feasibility of using shift-invariant weighted FBP-type algorithms for reconstruction from GEGCT data without rebinning. A comprehensive analysis and phantom studies have been conducted to validate the effectiveness of the proposed weighting strategies over a wide range of NROD for GEGCT with fixed and dynamic NROD.
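The three-stage structure described in the Methods (pre-filtering weight, shift-invariant ramp filtering, post-filtering weight, then backprojection) can be sketched in the familiar parallel-beam setting as below. The GEGCT geometry and the Besson, curvature-fitted, and empirical NROD-dependent weights are not reproduced; the weight arrays are left as placeholders under stated assumptions.

```python
import numpy as np

def fbp_parallel(sino, thetas_deg, pre_w=None, post_w=None):
    """Minimal shift-invariant FBP skeleton (standard parallel-beam geometry).
    pre_w / post_w are optional per-detector weights, standing in for the
    NROD-dependent pre-/post-filtering weights discussed in the abstract."""
    n_views, n_det = sino.shape
    if pre_w is not None:
        sino = sino * pre_w[None, :]
    # Shift-invariant ramp filter applied in the detector-frequency domain.
    filt = np.abs(np.fft.fftfreq(n_det))
    sino_f = np.real(np.fft.ifft(np.fft.fft(sino, axis=1) * filt[None, :], axis=1))
    if post_w is not None:
        sino_f = sino_f * post_w[None, :]
    # Pixel-driven backprojection with linear interpolation on the detector.
    xs = np.arange(n_det) - (n_det - 1) / 2.0
    X, Y = np.meshgrid(xs, xs, indexing="xy")
    recon = np.zeros((n_det, n_det))
    for p, th in zip(sino_f, np.deg2rad(thetas_deg)):
        t = X * np.cos(th) + Y * np.sin(th) + (n_det - 1) / 2.0
        valid = (t >= 0) & (t <= n_det - 1)
        t0 = np.clip(np.floor(t).astype(int), 0, n_det - 2)
        w = t - t0
        recon += np.where(valid, (1 - w) * p[t0] + w * p[t0 + 1], 0.0)
    return recon * np.pi / len(thetas_deg)

# Toy check: a sinogram that is constant across views reconstructs a point.
thetas = np.linspace(0.0, 180.0, 180, endpoint=False)
sino = np.zeros((180, 129))
sino[:, 64] = 1.0
img = fbp_parallel(sino, thetas)   # bright spot near the image center
```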

6.
IEEE Trans Med Imaging ; 42(8): 2133-2145, 2023 08.
Article in English | MEDLINE | ID: mdl-37022909

ABSTRACT

CT metal artefact reduction (MAR) methods based on supervised deep learning are often troubled by the domain gap between the simulated training dataset and the real-application dataset, i.e., methods trained on simulation cannot generalize well to practical data. Unsupervised MAR methods can be trained directly on practical data, but they learn MAR with indirect metrics and often perform unsatisfactorily. To tackle the domain gap problem, we propose a novel MAR method called UDAMAR based on unsupervised domain adaptation (UDA). Specifically, we introduce a UDA regularization loss into a typical image-domain supervised MAR method, which mitigates the domain discrepancy between simulated and practical artefacts by feature-space alignment. Our adversarial-based UDA focuses on a low-level feature space where the domain difference of metal artefacts mainly lies. UDAMAR can simultaneously learn MAR from simulated data with known labels and extract critical information from unlabeled practical data. Experiments on both clinical dental and torso datasets show the superiority of UDAMAR, which outperforms its supervised backbone and two state-of-the-art unsupervised methods. We carefully analyze UDAMAR through experiments on simulated metal artefacts and various ablation studies. In simulation, its performance close to the supervised methods and its advantages over the unsupervised methods justify its efficacy. Ablation studies on the influence of the weight of the UDA regularization loss, the UDA feature layers, and the amount of practical data used for training further demonstrate the robustness of UDAMAR. UDAMAR provides a simple and clean design and is easy to implement. These advantages make it a very feasible solution for practical CT MAR.
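A generic sketch of the adversarial feature-alignment idea (a DANN-style gradient-reversal setup) is given below; it illustrates how a UDA regularization loss can be added to a supervised image-domain loss, but the network sizes, the L1 supervised loss, and the loss weight are assumptions and not the published UDAMAR architecture.

```python
import torch
import torch.nn as nn

# DANN-style sketch of adversarial feature alignment between simulated
# (labeled) and practical (unlabeled) domains; a generic illustration only.
class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None

feature_net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
mar_head = nn.Conv2d(16, 1, 3, padding=1)                 # artifact-reduced image
domain_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                            nn.Linear(16, 1))             # simulated vs. real classifier

def training_step(sim_img, sim_label, real_img, lam=0.1):
    f_sim, f_real = feature_net(sim_img), feature_net(real_img)
    sup_loss = nn.functional.l1_loss(mar_head(f_sim), sim_label)   # supervised MAR loss
    # The domain classifier sees gradient-reversed features, so the feature
    # extractor is pushed to make the two domains indistinguishable.
    d_logits = torch.cat([domain_head(GradReverse.apply(f_sim, lam)),
                          domain_head(GradReverse.apply(f_real, lam))])
    d_target = torch.cat([torch.zeros(sim_img.size(0), 1),
                          torch.ones(real_img.size(0), 1)])
    uda_loss = nn.functional.binary_cross_entropy_with_logits(d_logits, d_target)
    return sup_loss + uda_loss

loss = training_step(torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64),
                     torch.randn(2, 1, 64, 64))
loss.backward()
```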


Subject(s)
Artifacts , Deep Learning , Computer Simulation , Tomography, X-Ray Computed
7.
IEEE Trans Med Imaging ; 41(10): 2912-2924, 2022 10.
Article in English | MEDLINE | ID: mdl-35576423

ABSTRACT

Limited angle reconstruction is a typical ill-posed problem in computed tomography (CT). Given incomplete projection data, images reconstructed by conventional analytical algorithms and iterative methods suffer from severe structural distortions and artifacts. In this paper, we propose a self-augmented multi-stage deep-learning network (Sam's Net) for end-to-end reconstruction of limited angle CT. Leveraging the alternating minimization technique, Sam's Net integrates multi-stage self-constraints into cross-domain optimization to provide additional constraints on the manifold of neural networks. In practice, a sinogram completion network (SCNet) and an artifact suppression network (ASNet), together with domain transformation layers, constitute the backbone for cross-domain optimization. An online self-augmentation module is designed following the manner defined by alternating minimization, which enables a self-augmented learning procedure and multi-stage inference. In addition, a substitution operation is applied as a hard constraint on the solution space based on data fidelity, and a learnable weighting layer is constructed for data consistency refinement. Sam's Net forms a new framework for ill-posed reconstruction problems. In the training phase, the self-augmented procedure guides the optimization into a tightened solution space with enriched, diverse data distribution and enhanced data consistency. In the inference phase, multi-stage prediction can improve performance progressively. Extensive experiments with both simulated and practical projections under 90-degree and 120-degree fan-beam configurations validate that Sam's Net can significantly improve the reconstruction quality with high stability and robustness.
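The data-fidelity substitution operation can be sketched as a simple overwrite (or blend) of the network-completed sinogram with the measured views; the shapes, the 90-degree measured range, and the scalar blending weight below are illustrative assumptions, not the learnable weighting layer of Sam's Net.

```python
import numpy as np

# Sketch of a data-fidelity "substitution" hard constraint: wherever a view was
# actually measured, the network-completed sinogram is overwritten with (or
# blended toward) the measurement.
n_views_full, n_det = 360, 512
measured_views = np.arange(90)                               # e.g. a 90-degree scan

sino_meas = np.random.rand(len(measured_views), n_det)       # acquired data (toy)
sino_pred = np.random.rand(n_views_full, n_det)              # completion-network output (toy)

def substitute(sino_pred, sino_meas, measured_views, weight=1.0):
    out = sino_pred.copy()
    # weight = 1.0 -> hard substitution; 0 < weight < 1 -> soft consistency blend
    out[measured_views] = (weight * sino_meas
                           + (1.0 - weight) * sino_pred[measured_views])
    return out

sino_consistent = substitute(sino_pred, sino_meas, measured_views)
assert np.allclose(sino_consistent[measured_views], sino_meas)
```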


Subject(s)
Deep Learning , Image Processing, Computer-Assisted , Algorithms , Artifacts , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Phantoms, Imaging , Tomography, X-Ray Computed/methods
8.
Med Phys ; 48(10): 6106-6120, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34432891

ABSTRACT

PURPOSE: X-ray phase-contrast imaging (XPCI) can provide multiple contrasts (conventional attenuation, phase contrast, and dark field) with great potential for clinical and industrial applications. Grating-based imaging (GBI) and edge-illumination (EI) are two promising types of XPCI, as conventional x-ray sources can be directly utilized. For the GBI and EI systems, phase-stepping acquisition with multiple exposures at a constant fluence is usually adopted in the literature. This work, however, challenges this constant-fluence concept during the phase-stepping process and proposes a fluence adaptation mechanism for dose reduction. METHODS: Given the importance of patient radiation dose for clinical applications, numerous studies have tried to reduce patient dose in XPCI by altering imaging system designs, data acquisition, and information retrieval. Recently, analytic multi-order moment analysis has been proposed to improve the computing efficiency. In these algorithms, multiple contrasts can be calculated by summing the phase-stepping curves (PSCs) weighted by kernel functions, which suggests that the raw data at different steps contribute differently to the noise in the retrieved contrasts. Therefore, it is possible to improve the noise performance by directly adjusting the fluence distribution during the phase-stepping process. Based on analytic retrieval formulas and a Gaussian noise model for the detected signals, we derive an optimal adaptive fluence distribution, which is proportional to the absolute value of the weighting kernel functions and the square root of the original sample PSCs acquired under constant fluence. Considering that the original sample PSC might be unavailable, we propose two practical forms for the GBI and EI systems, which also reduce the contrast noise compared with the constant fluence distribution. Since the kernel functions depend on the target contrast, our proposed fluence adaptation mechanism provides a way to realize contrast-based dose optimization while keeping the same noise level. RESULTS: To validate our analyses, simulations and experiments are conducted for the GBI and EI systems. Simulated results demonstrate that the dose reduction ratio between our proposed fluence distributions and the typical constant one can be about 20% for the phase contrast, which is consistent with our theoretical predictions. Although the experimental noise reduction ratios are slightly smaller than the theoretical ones, low-dose experiments show better noise performance with our proposed method. Our simulated results also give the effective ranges of the PSC parameters, such as the visibility in GBI and the standard deviation and mean value in EI, providing guidance for using our proposed approach in practice. CONCLUSIONS: In this paper, we propose a fluence adaptation mechanism for contrast-based dose optimization in XPCI, which can be applied to the GBI and EI systems. Our proposed method explores a new direction for dose reduction, and may be further extended to other types of XPCI systems and information retrieval algorithms.
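The fluence-adaptation rule stated above (per-step fluence proportional to the absolute kernel value times the square root of the sample PSC, renormalized to the same total dose) can be sketched as follows; the cosine PSC and the first-harmonic kernel are toy assumptions.

```python
import numpy as np

# Sketch of the fluence-adaptation rule from the abstract: allocate per-step
# fluence proportional to |w_k| * sqrt(PSC_k), then renormalize so the total
# fluence (dose) matches the constant-fluence acquisition.
n_steps = 8
k = np.arange(n_steps)

psc = 1000.0 * (1.0 + 0.2 * np.cos(2 * np.pi * k / n_steps + 0.7))  # sample PSC (toy)
kernel = np.sin(2 * np.pi * k / n_steps)        # e.g. a first-harmonic retrieval kernel

def adaptive_fluence(kernel, psc, total_budget):
    shape = np.abs(kernel) * np.sqrt(psc)       # proportional allocation per step
    return total_budget * shape / shape.sum()   # same total dose as the constant case

constant = np.full(n_steps, 1.0 / n_steps)      # constant-fluence reference
adaptive = adaptive_fluence(kernel, psc, total_budget=1.0)
print(np.round(adaptive, 3), adaptive.sum())    # more photons where |kernel| is large
```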


Subject(s)
Image Processing, Computer-Assisted , Tomography, X-Ray Computed , Algorithms , Humans , Phantoms, Imaging , Radiography , X-Rays
9.
Opt Express ; 29(14): 21902-21920, 2021 Jul 05.
Article in English | MEDLINE | ID: mdl-34265967

ABSTRACT

In grating-based x-ray phase contrast imaging, Fourier component analysis (FCA) is usually recognized as the gold standard for retrieving the contrasts, including attenuation, phase, and dark field, since it is well established on wave optics and has high computational efficiency. Meanwhile, an alternative approach based on particle scattering theory, the so-called multi-order moment analysis (MMA), has been developed and can provide contrasts similar to those of FCA by calculating multi-order moments of the deconvolved small-angle x-ray scattering. Although they originate from quite different physical theories, the high consistency between the contrasts retrieved by FCA and MMA suggests that there are intrinsic connections between them, which, to the best of our knowledge, have not been fully revealed. In this work, we present a Fourier-based interpretation of MMA and conclude that the contrasts retrieved by MMA are actually weighted compositions of Fourier coefficients, which means MMA delivers physical information similar to FCA. Based on the recognized cosine model, we also provide a truncated analytic MMA method whose computational efficiency can be hundreds of times higher than that of the original deconvolution-based MMA method. Moreover, a noise analysis for our proposed truncated method is conducted to further evaluate its performance. The results of numerical simulations and physical experiments support our analyses and conclusions.
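For reference, the baseline FCA retrieval that MMA is being compared against can be sketched from synthetic phase-stepping curves: the three contrasts follow from the zero- and first-order Fourier coefficients of the sample and reference PSCs. The toy PSC parameters are assumptions.

```python
import numpy as np

# Sketch of standard Fourier component analysis (FCA) on phase-stepping curves:
# the three contrasts come from the zero- and first-order Fourier coefficients
# of the sample and flat (reference) PSCs. The synthetic PSCs are toy data.
n_steps = 8
k = 2 * np.pi * np.arange(n_steps) / n_steps

flat = 1000 * (1 + 0.25 * np.cos(k + 0.3))                 # reference PSC
samp = 600 * (1 + 0.15 * np.cos(k + 0.8))                  # sample PSC (toy)

def fca(psc):
    c = np.fft.fft(psc) / len(psc)
    a0, a1, phi = c[0].real, 2 * np.abs(c[1]), np.angle(c[1])
    return a0, a1, phi

a0_r, a1_r, phi_r = fca(flat)
a0_s, a1_s, phi_s = fca(samp)

transmission = a0_s / a0_r                                  # attenuation contrast
diff_phase = np.angle(np.exp(1j * (phi_s - phi_r)))         # differential phase (wrapped)
dark_field = (a1_s / a0_s) / (a1_r / a0_r)                  # visibility reduction
print(transmission, diff_phase, dark_field)
```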

10.
Phys Med Biol ; 66(7)2021 03 23.
Article in English | MEDLINE | ID: mdl-33657536

ABSTRACT

X-ray scatter remains a major physics challenge in volumetric computed tomography (CT); its physical and statistical behaviors have commonly been leveraged to eliminate its impact on CT image quality. In this work, we conduct an in-depth derivation of how the scatter distribution and the scatter-to-primary ratio (SPR) change during spectral correction, leading to an interesting finding on a property of scatter. This characterization of scatter's behavior provides an analytic approach to compensating for the SPR as well as approximating the change of the scatter distribution after spectral correction, even though both may be significantly distorted because the linearization mapping function in spectral correction can vary considerably from one detector pixel to another. We conduct an evaluation of SPR compensation (SPRC) on a Catphan phantom and an anthropomorphic chest phantom to validate the characteristics of scatter. In addition, this scatter property is directly adopted into CT imaging using a spectral modulator with flying focal spot technology (SMFFS) as an example to demonstrate its potential in practical applications. For cone-beam CT (CBCT) scans at both 80 and 120 kVp, CT images with accurate CT numbers can be achieved after spectral correction followed by the appropriate SPRC based on the presented scatter property. In the case of the SMFFS-based CBCT scan of the Catphan phantom at 120 kVp, after a scatter correction using an analytic algorithm derived from the scatter property, CT image quality was significantly improved, with the average root mean square error reduced from 297.9 to 6.5 Hounsfield units.


Subject(s)
Artifacts , Image Processing, Computer-Assisted , Algorithms , Cone-Beam Computed Tomography/methods , Image Processing, Computer-Assisted/methods , Phantoms, Imaging , Scattering, Radiation , Tomography, X-Ray Computed , X-Rays
11.
Med Phys ; 48(4): 1557-1570, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33420741

ABSTRACT

PURPOSE: Modulation of the x-ray source in computed tomography (CT) by a designated filter to achieve a desired distribution of photon flux has been greatly advanced in recent years. In this work, we present densely sampled spectral modulation (DSSM) as a promising low-cost solution to quantitative CT imaging in the presence of scatter. By leveraging a special stationary filter (namely a spectral modulator) and a flying focal spot, DSSM features a strong correlation between the scatter distributions across focal spot positions and sees no substantial projection sparsity or misalignment in data sampling, making it possible to simultaneously correct for scatter and spectral effects in a unified framework. METHODS: The concept of DSSM is first introduced, followed by an analysis of the design and benefits of using the stationary spectral modulator with a flying focal spot (SMFFS), which dramatically changes the data sampling and its associated data processing. Under the assumption that the scatter distributions across focal spot positions are strongly correlated, a scatter estimation and spectral correction algorithm for DSSM is then developed, where a dual-energy modulator along with two flying focal spot positions is of interest. Finally, a phantom study on a tabletop cone-beam CT system is conducted to assess the feasibility of DSSM by SMFFS, using a copper modulator and moving the x-ray tube in the X direction to mimic the flying focal spot. RESULTS: Based on our analytical analysis of DSSM by SMFFS, the misalignment of low- and high-energy projection rays can be reduced by a factor of more than 10 compared with a stationary modulator only. With respect to modulator design, metal materials such as copper, molybdenum, silver, and tin could be good candidates in terms of energy separation at a given attenuation of photon flux. Physical experiments using a Catphan phantom as well as an anthropomorphic chest phantom demonstrate the effectiveness of DSSM by SMFFS, with much better CT number accuracy and fewer image artifacts. The root mean squared error was reduced from 297.9 to 6.5 Hounsfield units (HU) for the Catphan phantom and from 409.3 to 39.2 HU for the chest phantom. CONCLUSIONS: The concept of DSSM using an SMFFS is proposed. Phantom results on its scatter estimation and spectral correction performance validate our main ideas and key assumptions, demonstrating its potential and feasibility for quantitative CT imaging.


Subject(s)
Cone-Beam Computed Tomography , Image Processing, Computer-Assisted , Algorithms , Artifacts , Feasibility Studies , Phantoms, Imaging , Scattering, Radiation , Tomography, X-Ray Computed , X-Rays
12.
IEEE Trans Med Imaging ; 39(12): 4445-4457, 2020 12.
Article in English | MEDLINE | ID: mdl-32866095

ABSTRACT

In this work, we investigate the Fourier properties of a symmetric-geometry computed tomography (SGCT) system with linearly distributed source and detector in a stationary configuration. A linkage between the 1D Fourier transform of a weighted projection from SGCT and the 2D Fourier transform of a deformed object is established in a simple mathematical form (i.e., the Fourier slice theorem for SGCT). Based on this Fourier slice theorem and the unique data sampling of SGCT in Fourier space, a Linogram-based Fourier reconstruction method is derived for SGCT. We demonstrate that the entire Linogram reconstruction process can be embedded as known operators into an end-to-end neural network. As a learning-based approach, the proposed Linogram-Net has the capability to improve CT image quality in non-ideal imaging scenarios, for instance limited-angle SGCT, by combining weight learning in the projection domain and loss minimization in the image domain. Numerical simulations and physical experiments on an SGCT prototype platform showed that our proposed Linogram-based method can achieve accurate reconstruction from a dual-SGCT scan and can greatly reduce computational complexity when compared with filtered-backprojection-type reconstruction. The Linogram-Net achieved accurate reconstruction when projection data are complete and significantly suppressed image artifacts from a limited-angle SGCT scan mimicked by using a clinical CT dataset, with the average CT number error in the selected regions of interest reduced from 67.7 Hounsfield Units (HU) to 28.7 HU, and the average normalized mean square error of the overall images reduced from 4.21e-3 to 2.65e-3.
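The SGCT-specific Fourier slice theorem relating a weighted projection to a deformed object is not reproduced here, but the classical parallel-beam version it generalizes can be checked numerically in a few lines:

```python
import numpy as np

# Numerical check of the classical (parallel-beam) Fourier slice theorem:
# the 1D FT of a projection equals a central slice of the object's 2D FT.
# This is the textbook version, not the SGCT-specific form derived in the paper.
rng = np.random.default_rng(0)
obj = rng.random((128, 128))

projection = obj.sum(axis=0)                    # projection along the y-axis
slice_1d = np.fft.fft(projection)

obj_2d_ft = np.fft.fft2(obj)
central_slice = obj_2d_ft[0, :]                 # k_y = 0 row of the 2D FT

print(np.allclose(slice_1d, central_slice))     # True, up to numerical precision
```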


Subject(s)
Image Processing, Computer-Assisted , Tomography, X-Ray Computed , Algorithms , Artifacts , Fourier Analysis , Neural Networks, Computer , Phantoms, Imaging
13.
Phys Med Biol ; 65(24): 245030, 2020 12 11.
Article in English | MEDLINE | ID: mdl-32365345

ABSTRACT

Helical CT has been widely used in clinical diagnosis. In this work, we focus on a new prototype of helical CT equipped with a sparsely spaced multi-detector and a multi-slit collimator (MSC) in the axial direction. This type of system can not only lower radiation dose and suppress scattering via the MSC, but also cut down the manufacturing cost of the detector. The major problem to overcome with such a system, however, is that of insufficient data for reconstruction. Hence, we propose a deep learning-based function optimization method for this ill-posed inverse problem. By incorporating a Radon inverse operator and disentangling each slice, we significantly reduce the complexity of our network for 3D reconstruction. The network is composed of three subnetworks. First, a convolutional neural network (CNN) in the projection domain is constructed to estimate missing projection data and to convert helical projection data to 2D fan-beam projection data. This is followed by an analytical linear operator that transfers the data from the projection domain to the image domain. Finally, an additional CNN in the image domain is added for further image refinement. These three steps work collectively and can be trained end to end. The overall network is trained on a simulated CT dataset based on eight patients from the American Association of Physicists in Medicine (AAPM) Low Dose CT Grand Challenge. We evaluate the trained network on both simulated datasets and clinical datasets. Extensive experimental studies have yielded very encouraging results, based on both visual examination and quantitative evaluation. These results demonstrate the effectiveness of our method and its potential for clinical usage. The proposed method provides a new solution to a fully 3D ill-posed problem.


Subject(s)
Deep Learning , Image Processing, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Neural Networks, Computer , Tomography, Spiral Computed/methods , Humans
14.
Med Phys ; 47(5): 2222-2236, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32009236

ABSTRACT

PURPOSE: Inverse-geometry computed tomography (IGCT) could have great potential in medical applications and security inspections, and has been actively investigated in recent years. In this work, we explore a special architecture of IGCT in a stationary configuration: symmetric-geometry computed tomography (SGCT), where the x-ray source and detector are linearly distributed in a symmetric design. A direct filtered backprojection (FBP)-type algorithm is developed to analytically reconstruct images from the SGCT projections. METHODS: In our proposed SGCT system, a large number of x-ray source points, equally distributed along a straight-line trajectory, sequentially fire in an ultra-fast manner on one side, while an equispaced detector whose total length is comparable to that of the source continuously collects data on the opposite side, as the object to be scanned moves into the imaging plane. We first present the overall design of SGCT. An FBP-type reconstruction algorithm is then derived for this unique imaging configuration. With the finite length of the x-ray source and detector arrays, projection data from a single SGCT scan segment are insufficient for exact reconstruction. As a result, in practical applications, a dual-SGCT scan, whose detector segments are placed perpendicular to each other, is of particular interest and is proposed. If carefully designed, the two SGCT segments together ensure that the rays passing through each and every point cover at least 180 degrees. In general, however, a dual-SGCT suffers from data redundancy, so a weighting strategy is developed to maximize the use of the collected projection data while avoiding image artifacts. In addition, we further extend the fan-beam SGCT to cone beam and obtain a Feldkamp-Davis-Kress (FDK)-type reconstruction algorithm. Finally, we conduct a set of experimental studies both in simulation and on a prototype SGCT system to validate our proposed methods. RESULTS: A simulation study using the Shepp-Logan head phantom confirms that CT images can be exactly reconstructed from a dual-SGCT scan and that our proposed weighting strategy handles the data redundancy properly. Compared with the rebinning-to-parallel-beam method using the forward projection of an abdominal CT dataset, our proposed method is less sensitive to data truncation. Our algorithm achieves a spatial resolution of 10.64 lp/cm at the 50% modulation transfer function point, higher than that of the rebinning method, which only reaches 9.42 lp/cm even with extremely fine interpolation. Real experiments on a cylindrical object with a prototype SGCT further prove the effectiveness and practicability of the proposed direct FBP method, with a noise performance similar to that of the rebinning algorithm. CONCLUSIONS: A new concept of SGCT with linearly distributed source and detector is investigated in this work, in which spinning of sources and detectors is no longer needed during data acquisition, simplifying system design, development, and manufacturing. A direct FBP-type algorithm is developed for analytical reconstruction from SGCT projection data. Numerical and real experiments validate our method and show that exact CT images can be reconstructed from a dual-SGCT scan, with the data redundancy problem solved by our proposed weighting function.


Subject(s)
Image Processing, Computer-Assisted/methods , Tomography, X-Ray Computed , Algorithms , Artifacts , Linear Models , Phantoms, Imaging
15.
Med Phys ; 47(3): 1189-1198, 2020 Mar.
Article in English | MEDLINE | ID: mdl-31829437

ABSTRACT

PURPOSE: Grating-based x-ray phase-contrast imaging (GPCI) is a promising technique for clinical applications, as it can provide two newly emerging imaging modalities (differential phase contrast and dark-field contrast) in addition to the conventional absorption contrast. So far, the phase-stepping strategy is the most commonly used approach in GPCI to indirectly acquire differential phase contrast and dark-field contrast. It is known that the obtained phase-stepping curves (PSCs) have the cosine property and the convolution property, leading to two types of information retrieval approaches in the literature: Fourier component analysis and multi-order moment analysis. The purpose of this paper is to derive a new property of PSCs and apply it to noise optimization for information retrieval. METHODS: Based on the cosine expression of the flat PSC without the sample and the well-established convolution relationship between the flat PSC and the sample PSC, we reveal an important integral property of PSCs: the inner product of a PSC and an arbitrary function contains only zero-order and first-order components of the Fourier series. Furthermore, we apply this property to the direct multi-order moment analysis and propose a set of generalized forms, including an optimal one in the presence of noise. RESULTS: To validate the effectiveness of our analysis, we compare simulated and real experimental results retrieved by the original direct multi-order moment analysis with those retrieved by our proposed noise-optimal form. A significant improvement in noise performance by our method is observed, and the improvement ratio in differential phase contrast is consistent with our theoretical calculation (39.2%). CONCLUSIONS: In this paper, we reveal a new integral property of the acquired PSCs with and without samples in GPCI, which can be applied to information retrieval approaches such as the direct multi-order moment analysis. We then optimize these approaches to improve the noise performance, offering great potential for dose reduction in practical applications.


Subject(s)
Image Processing, Computer-Assisted/methods , Radiography , Signal-To-Noise Ratio , Fourier Analysis
16.
Comput Math Methods Med ; 2019: 7546215, 2019.
Article in English | MEDLINE | ID: mdl-31641370

ABSTRACT

Wireless capsule endoscopy (WCE) has developed rapidly over the last several years and now enables physicians to examine the gastrointestinal tract without surgical operation. However, a large number of images must be analyzed to obtain a diagnosis. Deep convolutional neural networks (CNNs) have demonstrated impressive performance in different computer vision tasks. Thus, in this work, we aim to explore the feasibility of deep learning for ulcer recognition and optimize a CNN-based ulcer recognition architecture for WCE images. By analyzing the ulcer recognition task and the characteristics of classic deep learning networks, we propose an HAnet architecture that uses ResNet-34 as the base network and fuses hyper features from the shallow layer with deep features from deeper layers to provide final diagnostic decisions. 1,416 independent WCE videos are collected for this study. The overall test accuracy of our HAnet is 92.05%, and its sensitivity and specificity are 91.64% and 92.42%, respectively. According to our comparisons of F1, F2, and ROC-AUC, the proposed method performs better than several off-the-shelf CNN models, including VGG, DenseNet, and Inception-ResNet-v2, and classical machine learning methods with handcrafted features for WCE image classification. Overall, this study demonstrates that recognizing ulcers in WCE images via the deep CNN method is feasible and could help reduce the tedious image-reading work of physicians. Moreover, our HAnet architecture, tailored for this problem, offers a sound choice for the design of the network structure.
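A generic sketch of the shallow-plus-deep ("hyper") feature fusion idea on a ResNet-34 backbone is shown below; the choice of layers, the global-average pooling, and the two-class head are assumptions for illustration, not the published HAnet.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class HyperFusionNet(nn.Module):
    """Sketch: fuse shallow ("hyper") and deep ResNet-34 features for a 2-class decision."""
    def __init__(self, num_classes=2):
        super().__init__()
        backbone = models.resnet34(weights=None)
        self.stem = nn.Sequential(backbone.conv1, backbone.bn1,
                                  backbone.relu, backbone.maxpool)
        self.layer1, self.layer2 = backbone.layer1, backbone.layer2
        self.layer3, self.layer4 = backbone.layer3, backbone.layer4
        self.pool = nn.AdaptiveAvgPool2d(1)
        # 64 channels from the shallow stage + 512 from the deep stage, after pooling
        self.fc = nn.Linear(64 + 512, num_classes)

    def forward(self, x):
        x = self.stem(x)
        shallow = self.layer1(x)                          # low-level "hyper" features
        deep = self.layer4(self.layer3(self.layer2(shallow)))
        feat = torch.cat([self.pool(shallow).flatten(1),
                          self.pool(deep).flatten(1)], dim=1)
        return self.fc(feat)

logits = HyperFusionNet()(torch.randn(2, 3, 224, 224))   # -> shape (2, 2)
```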


Subject(s)
Capsule Endoscopy/methods , Ulcer/diagnostic imaging , Wireless Technology , Algorithms , Area Under Curve , Databases, Factual , Deep Learning , Diagnosis, Computer-Assisted , Feasibility Studies , Female , Humans , Image Processing, Computer-Assisted/methods , Machine Learning , Male , Neural Networks, Computer , ROC Curve , Sensitivity and Specificity , Video Recording
17.
Phys Med Biol ; 64(23): 235014, 2019 12 05.
Article in English | MEDLINE | ID: mdl-31645019

ABSTRACT

Compared with conventional gastroscopy, which is invasive and painful, wireless capsule endoscopy (WCE) can provide a noninvasive examination of the gastrointestinal (GI) tract. WCE video can effectively support physicians in reaching a diagnostic decision, but a huge number of images need to be analyzed (more than 50 000 frames per patient). In this paper, we propose a computer-aided diagnosis method called the second glance (secG) detection framework for automatic detection of ulcers based on deep convolutional neural networks, which provides both a classification confidence and a bounding box of the lesion area. We evaluated its performance on a large dataset that consists of 1504 patient cases (the largest WCE ulcer dataset to the best of our knowledge; 1076 cases with ulcers, 428 normal cases). We use 15 781 ulcer frames from 753 ulcer cases and 17 138 normal frames from 300 normal cases for training. The validation dataset consists of 2040 ulcer frames from 108 cases and 2319 frames from 43 normal cases. For testing, we use 4917 ulcer frames from 215 ulcer cases and 5007 frames from 85 normal cases. Test results demonstrate that the 0.9469 ROC-AUC of the proposed secG detection framework outperforms state-of-the-art detection frameworks, including Faster-RCNN (0.9014) and SSD-300 (0.8355), which indicates the effectiveness of our method. From the ulcer size analysis, we find that the detection of ulcers is highly related to their size. For ulcers larger than 1% of the full image size, the sensitivity exceeds 92.00%; for ulcers smaller than 1% of the full image size, the sensitivity is around 85.00%. The overall sensitivity, specificity and accuracy are 89.71%, 90.48% and 90.10% at a threshold value of 0.6706, which indicates the potential of the proposed method to reduce oversights and the burden on physicians.


Subject(s)
Capsule Endoscopy/methods , Diagnosis, Computer-Assisted/methods , Gastrointestinal Tract/diagnostic imaging , Neural Networks, Computer , Ulcer/diagnostic imaging , Capsule Endoscopy/standards , Diagnosis, Computer-Assisted/standards , Gastrointestinal Tract/pathology , Humans , Sensitivity and Specificity
18.
Phys Med Biol ; 64(12): 125006, 2019 06 12.
Article in English | MEDLINE | ID: mdl-30999285

ABSTRACT

X-ray computed tomography (CT) scatter correction using a primary modulator has been continuously developed over the past years, with steady progress in improving the performance of scatter correction. In this work, we further advance the primary modulator technique towards practical applications, where the spectral nonuniformity caused by the modulator continues to be a challenging problem. A physics-based spectral compensation algorithm is proposed to adaptively correct for the spectral nonuniformity and hence to reduce the resultant ring artifacts in reconstructed CT images. First, an initial spectrum of the CT system without the primary modulator is modeled using an understanding of x-ray CT physics and optimized by an expectation maximization method; then, the optimized estimate of the initial spectrum is utilized to adaptively calculate the effective modulator thickness from the measured transmissions of the primary modulator at each detector element, leading to a set of new spectra that is able to capture the nonuniform spectral distribution of the primary modulator; finally, using the modulator-modeled spectrum, a beam hardening mapping function is generated and beam hardening correction is applied to the CT projections. A CatPhan600 phantom and an anthropomorphic thorax phantom were scanned with three different primary modulators to evaluate the approach. For the Catphan phantom, the spectral compensation algorithm efficiently removes the ring (and band) artifacts that otherwise dominate the reconstructed CT image. For the three modulators with nominal copper thicknesses of 52.5, 105 and 210 µm, our method reduces the CT number nonuniformity from 147.9, 436.2 and 696.4 Hounsfield units (HU) to 14.6, 26.2 and 13.6 HU, respectively, close to that of the reference image (i.e. 7.5 HU). For the thorax phantom, the ring artifacts are also suppressed significantly in the transaxial image; in the sagittal image, the alternating black-and-white patterns are largely removed, with the CT number nonuniformity reduced from 282.0 HU to 38.5 HU.
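The "effective modulator thickness from measured transmission" step can be sketched as a one-dimensional root-finding problem per detector element, given an estimated spectrum and the copper attenuation curve; the spectrum shape and the approximate attenuation values below are assumptions.

```python
import numpy as np
from scipy.optimize import brentq

# Sketch: given an estimated system spectrum S(E) and an approximate copper
# attenuation curve, solve for the effective thickness t whose spectrum-weighted
# transmission matches the measurement at one detector element.
E = np.linspace(20, 120, 101)                                   # keV
S = np.exp(-((E - 60) / 25) ** 2)                               # estimated spectrum (toy)
mu_rho_cu = 1.6 * (60.0 / E) ** 2.7 + 0.05                      # approx. Cu mass atten. (cm^2/g)
rho_cu = 8.96                                                   # g/cm^3

def transmission(t_cm):
    return np.sum(S * np.exp(-mu_rho_cu * rho_cu * t_cm)) / np.sum(S)

def effective_thickness(T_meas):
    # Transmission is monotone in t, so a bracketing root finder suffices.
    return brentq(lambda t: transmission(t) - T_meas, 0.0, 0.2)  # 0 to 2000 um

T_meas = transmission(105e-4) * 1.001          # pretend measurement near 105 um of Cu
print("effective thickness (um):", 1e4 * effective_thickness(T_meas))
```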


Subject(s)
Algorithms , Cone-Beam Computed Tomography/methods , Image Processing, Computer-Assisted/methods , Phantoms, Imaging , Artifacts , Humans , Scattering, Radiation
19.
Phys Med Biol ; 64(12): 125010, 2019 06 12.
Article in English | MEDLINE | ID: mdl-30840945

ABSTRACT

The cosine-model analysis (CMA) method and the small-angle x-ray scattering (SAXS) method are two major types of information retrieval algorithms commonly utilized in x-ray phase-contrast imaging with a grating interferometer. However, there are significant differences between the two methods in algorithm implementation, and the existing literature has not completely revealed their intrinsic relationship. In this paper, we theoretically derive and experimentally verify the intrinsic connections between CMA and SAXS, showing that SAXS can be interpreted well under the cosine-model assumption of CMA. To validate our analysis of the scattering distribution when applying the cosine model to the convolution used in SAXS, we applied a deconvolution process within CMA before using the Fourier transform to obtain the three contrasts. Furthermore, principal component analysis (PCA) is introduced in this work, and two PCA-based retrieval algorithms are presented, either to simplify the iterative deconvolution process in SAXS or to obtain absorption and dark-field signals in place of the Fourier transform in CMA. Applying a quantitative structural similarity (SSIM) index and a profile analysis to the results of an ex vivo mammography study, it is shown that images retrieved via CMA and SAXS are consistent with each other (SSIM values of 1.0000, 0.9845 and 0.9767, respectively), that the extra deconvolution process applied within CMA performs well, and that our analytical analysis of the scattering distribution is valid when applying the cosine model to the convolution used in SAXS. In addition, it is concluded that PCA shows almost the same performance as the Fourier transform (SSIM values of 1.0000 for both absorption and dark-field images), and that the simplified SAXS-analogous method works well, with higher computational efficiency and better stability than the original SAXS, while maintaining a similar level of image quality (SSIM values of 1.0000, 0.9839 and 0.9781, respectively).
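The SSIM-based consistency check used above can be sketched in a few lines with scikit-image; the two synthetic arrays stand in for a pair of retrieved images (e.g. CMA versus SAXS absorption).

```python
import numpy as np
from skimage.metrics import structural_similarity

# Sketch of the SSIM consistency check: compare two retrievals of the same
# contrast. The arrays below are synthetic stand-ins for retrieved images.
rng = np.random.default_rng(0)
img_cma = rng.random((256, 256))
img_saxs = img_cma + 0.01 * rng.standard_normal((256, 256))   # nearly identical retrieval

ssim = structural_similarity(img_cma, img_saxs,
                             data_range=img_cma.max() - img_cma.min())
print(f"SSIM between the two retrievals: {ssim:.4f}")          # close to 1 here
```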


Subject(s)
Algorithms , Fourier Analysis , Information Storage and Retrieval/methods , Interferometry/methods , Microscopy, Phase-Contrast/methods , Scattering, Small Angle , X-Ray Diffraction , Humans
20.
Sensors (Basel) ; 19(6)2019 Mar 14.
Article in English | MEDLINE | ID: mdl-30875816

ABSTRACT

This paper focuses on designing a cost function for selecting a foothold for a physical quadruped robot walking on rough terrain. The quadruped robot is modeled with Denavit-Hartenberg (DH) parameters, and a default foothold is then defined based on the model. A Time-of-Flight (TOF) camera is used to perceive terrain information and construct a 2.5D elevation map, on which the terrain features are detected. The cost function is defined as the weighted sum of several elements, including terrain features and features of the relative pose between the default foothold and the other candidates. It is nearly impossible to hand-code the weight vector of the function, so the weights are learned using Support Vector Machine (SVM) techniques, and the training data set is generated from the 2.5D elevation map of a real terrain under the guidance of experts. Four candidate footholds around the default foothold are randomly sampled, and the expert ranks these four candidates, rotating and scaling the view to see them clearly. Lastly, the learned cost function is used to select a suitable foothold and drive the quadruped robot to walk autonomously across rough terrain with wooden steps. Compared to the approach using the original standard static gait, the proposed cost function shows better performance.
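Learning the cost-function weights with an SVM can be sketched as below: each candidate foothold is a feature vector, expert preferences provide labels, and the learned linear weights define the cost. The feature names, dimensions, and synthetic data are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC

# Sketch of learning a foothold cost function with a linear SVM. Each candidate
# is a feature vector (terrain features plus relative-pose features); labels
# mark expert-preferred vs. rejected candidates. All data here are synthetic.
rng = np.random.default_rng(0)
n_samples, n_features = 400, 6          # e.g. slope, roughness, edge dist., dx, dy, dz
X = rng.standard_normal((n_samples, n_features))
true_w = np.array([1.5, 1.0, 0.8, 0.5, 0.5, 0.2])
y = (X @ true_w + 0.3 * rng.standard_normal(n_samples) > 0).astype(int)  # expert labels

svm = LinearSVC(C=1.0).fit(X, y)
learned_w = svm.coef_.ravel()           # these become the cost-function weights

def cost(candidate_features):
    # Lower cost means a more suitable foothold (negated SVM decision value).
    return -(candidate_features @ learned_w + svm.intercept_[0])

candidates = rng.standard_normal((4, n_features))   # four sampled candidates
best = int(np.argmin(cost(candidates)))
print("selected candidate:", best)
```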
