1.
Comput Biol Med ; 177: 108591, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38788372

ABSTRACT

This paper proposes a novel hybrid tensor-ring (TR) decomposition and first-order tensor-based total variation (FOTTV) model, termed TRFOTTV, for super-resolution and noise suppression of optical coherence tomography (OCT) images. OCT imaging faces two fundamental problems that undermine correct OCT-based diagnosis: significant noise levels and the low sampling rates used to speed up acquisition. Inspired by the effectiveness of TR decomposition in analyzing complicated data structures, we suggest the TRFOTTV model for noise suppression and super-resolution of OCT images. Initially, we extract nonlocal 3D patches from the OCT data and group them to form a third-order low-rank tensor. Subsequently, using TR decomposition, we extract the correlations among all modes of the grouped OCT tensor. Finally, FOTTV is integrated into the TR model to enhance spatial smoothness in OCT images and conserve layer structures more effectively. Proximal alternating minimization and the alternating direction method of multipliers are applied to solve the resulting optimization problem. The effectiveness of the suggested method is verified on four OCT datasets, demonstrating superior visual and numerical outcomes compared to state-of-the-art procedures.
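The solver above alternates between subproblems. As a hedged illustration of the alternating-minimization pattern itself (not the TRFOTTV model), the sketch below fits a low-rank factorization by alternating exact least-squares steps; all sizes and names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy low-rank data, standing in for an unfolding of a grouped patch tensor.
m, n, r = 40, 30, 3
X = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))

# Alternating minimization: fix B and solve least squares for A, then vice versa.
A = rng.standard_normal((m, r))
B = rng.standard_normal((r, n))
for _ in range(50):
    A = np.linalg.lstsq(B.T, X.T, rcond=None)[0].T  # argmin_A ||X - A B||_F
    B = np.linalg.lstsq(A, X, rcond=None)[0]        # argmin_B ||X - A B||_F

err = np.linalg.norm(X - A @ B) / np.linalg.norm(X)
print(f"relative error: {err:.2e}")
```

Each subproblem is solved exactly, so the objective is monotonically nonincreasing; the same skeleton underlies the tensor-ring and FOTTV subproblems, with richer proximal updates.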


Subject(s)
Retina , Tomography, Optical Coherence , Tomography, Optical Coherence/methods , Humans , Eye Diseases/diagnostic imaging , Retina/diagnostic imaging , Models, Theoretical , Artifacts
2.
Entropy (Basel) ; 26(4)2024 Apr 21.
Article in English | MEDLINE | ID: mdl-38667903

ABSTRACT

The practical implementation of massive multi-user multiple-input multiple-output (MU-MIMO) downlink communication systems requires power amplifiers that are energy efficient; otherwise, the power consumption of the base station (BS) will be prohibitive. Constant envelope (CE) precoding is gaining increasing interest for its capability to utilize low-cost, high-efficiency nonlinear radio frequency amplifiers. Our work focuses on CE precoding in massive MU-MIMO systems and presents an efficient CE precoding algorithm. The algorithm uses an alternating minimization (AltMin) framework to optimize the CE precoded signal and the precoding factor, aiming to minimize the difference between the received signal and the transmitted symbol. For the optimization of the CE precoded signal, we provide a powerful approach that integrates the majorization-minimization (MM) method and the fast iterative shrinkage-thresholding algorithm (FISTA). It combines the characteristics of the massive MU-MIMO channel with a second-order Taylor expansion to construct the surrogate function in the MM method, where minimizing the surrogate corresponds to the worst case of the system. We further extend the suggested CE precoding algorithm to the discrete constant envelope (DCE) precoding case. In addition, we thoroughly examine the properties, convergence, and computational complexity of the proposed algorithm. Simulation results demonstrate that the proposed CE precoding algorithm achieves an uncoded bit error rate (BER) performance gain of roughly 1 dB over an existing CE precoding algorithm, with acceptable computational complexity. This performance advantage also holds for DCE precoding.
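The AltMin structure can be sketched with a much simpler solver than the paper's MM+FISTA machinery: alternate a closed-form update of the precoding factor with a projected-gradient step on the unit-modulus (constant-envelope) signal. Sizes, step size, and symbols below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(1)
K, N = 4, 32                      # users, BS antennas (illustrative sizes)
H = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)
s = np.exp(1j * rng.uniform(0, 2 * np.pi, K))   # unit-energy symbols

x = np.exp(1j * rng.uniform(0, 2 * np.pi, N))   # constant-envelope signal
mu = 0.1                                         # hand-tuned step size
for _ in range(500):
    # Precoding-factor update: closed-form least squares given x
    Hx = H @ x
    beta = np.vdot(Hx, s) / np.vdot(Hx, Hx)
    # Signal update: gradient step, then projection onto the unit-modulus set
    grad = np.conj(beta) * (H.conj().T @ (beta * Hx - s))
    x = x - mu * grad
    x = x / np.abs(x)                            # constant-envelope projection

residual = np.linalg.norm(s - beta * (H @ x)) / np.linalg.norm(s)
print(f"relative residual: {residual:.3f}")
```

With many more antennas than users, the unit-modulus constraint still leaves enough freedom to drive the multi-user interference residual low, which is the working principle behind CE precoding.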

3.
IEEE Control Syst Lett ; 6: 1244-1249, 2022.
Article in English | MEDLINE | ID: mdl-35754939

ABSTRACT

This letter studies a topology identification problem for an electric distribution grid using sign patterns of the inverse covariance matrix of bus voltage magnitudes and angles, while accounting for hidden buses. Assuming the grid topology is sparse and the number of hidden buses is smaller than the number of observed buses, we express the inverse covariance matrix of the observed voltages as the sum of three structured matrices: a sparse matrix, a low-rank matrix with sparse factors, and a low-rank matrix. Using the sign patterns of the first two of these matrices, we develop an algorithm to identify the topology of a distribution grid with a minimum cycle length greater than three. To estimate the structured matrices from the empirical inverse covariance matrix, we formulate a novel convex optimization problem with appropriate sparsity and structured norm constraints and solve it using an alternating minimization method. We validate the proposed algorithm's performance on a modified IEEE 33-bus system.
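A minimal sketch of the sparse-plus-low-rank splitting idea (ignoring the letter's third term, sign constraints, and grid semantics): alternate a soft-threshold update for the sparse part with a truncated-SVD update for the low-rank part. Sizes, sparsity level, and the threshold are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n, r = 30, 2
# Synthetic matrix standing in for an inverse covariance: sparse + low-rank
L_true = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
S_true = np.zeros((n, n))
mask = rng.random((n, n)) < 0.05
S_true[mask] = 5 * rng.standard_normal(mask.sum())
M = S_true + L_true

S, L = np.zeros((n, n)), np.zeros((n, n))
lam = 1.0                                   # soft-threshold level (hand-picked)
for _ in range(50):
    # S-update: soft-threshold the residual (prox of lam * ||S||_1)
    R = M - L
    S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)
    # L-update: best rank-r approximation of the remaining residual
    U, sv, Vt = np.linalg.svd(M - S, full_matrices=False)
    L = (U[:, :r] * sv[:r]) @ Vt[:r]

err = np.linalg.norm(M - S - L) / np.linalg.norm(M)
print(f"fit error: {err:.3f}")
```

The letter's actual formulation is a constrained convex program; this alternation only illustrates how the two structured components can be separated iteratively.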

4.
Sensors (Basel) ; 22(4)2022 Feb 15.
Article in English | MEDLINE | ID: mdl-35214387

ABSTRACT

Self-interference occurs when there is electromagnetic coupling between the transmission and reception paths of the same node, degrading the receiver's sensitivity to incoming signals. In this paper, we present a low-complexity technique for self-interference cancellation in multiple-carrier multiple-access systems employing whole-band direct-to-digital sampling. In this scenario, multiple users are simultaneously received and transmitted by the system at overlapping, arbitrary bandwidths and powers. Traditional algorithms for self-interference mitigation based on recursive least squares (RLS) or least mean squares (LMS) fail to provide sufficient rejection, since the incoming signal is far from spectrally flat, a property critical to their performance. The proposed algorithm mitigates the interference by modeling the incoming multi-user signal as an autoregressive (AR) process and jointly estimating the AR parameters and the self-interference. The resulting algorithm can be implemented in a low-complexity architecture comprising only two RLS modules. It further satisfies low-latency constraints and is adaptive, supporting time-varying channel conditions. We compare it with many self-interference cancellation algorithms, mostly adopted from the acoustic echo cancellation literature, and show a significant performance gain.
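The RLS building block the paper relies on can be sketched on its own: a single exponentially-weighted RLS filter cancels a known transmit reference passed through an unknown channel (this is the classical one-module case, not the paper's joint AR/self-interference estimator; channel, lengths, and powers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
T, L = 2000, 8                        # samples, channel taps (illustrative)
h = rng.standard_normal(L) * np.array([1, .5, .25, .12, .06, .03, .015, .008])
x = rng.standard_normal(T)            # known transmit (self-interference ref)
d = 0.01 * rng.standard_normal(T)     # weak desired signal plus noise
y = np.convolve(x, h)[:T] + d         # received: self-interference + desired

# Standard exponentially-weighted RLS
lam = 0.999
w = np.zeros(L)
P = np.eye(L) * 100.0                 # large initial inverse correlation
e = np.zeros(T)
for t in range(L, T):
    u = x[t - L + 1:t + 1][::-1]      # regressor, most recent sample first
    k = P @ u / (lam + u @ P @ u)     # gain vector
    e[t] = y[t] - w @ u               # a-priori error (post-cancellation)
    w = w + k * e[t]
    P = (P - np.outer(k, u @ P)) / lam

res = np.mean(e[T // 2:] ** 2)
si = np.mean((y - d)[T // 2:] ** 2)
print(f"suppression: {10 * np.log10(si / res):.1f} dB")
```

With a white reference this plain RLS converges quickly; the paper's point is precisely that a strongly colored incoming signal breaks this picture, motivating the AR-model-based two-module architecture.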


Subject(s)
Artifacts , Signal Processing, Computer-Assisted , Algorithms , Least-Squares Analysis
5.
Inf inference ; 9(4): 785-811, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33343894

ABSTRACT

Sparsity-based models and techniques have been exploited in many signal processing and imaging applications. Data-driven methods based on dictionary and sparsifying transform learning enable learning rich image features from data and can outperform analytical models. In particular, alternating optimization algorithms have been popular for learning such models. In this work, we focus on alternating minimization for a specific structured unitary sparsifying operator learning problem and provide a convergence analysis. While the algorithm converges to the critical points of the problem in general, our analysis establishes, under mild assumptions, local linear convergence of the algorithm to the underlying sparsifying model of the data. Analysis and numerical simulations show that our assumptions hold for standard probabilistic data models. In practice, the algorithm is robust to initialization.
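A common alternating scheme for unitary sparsifying operator learning (a hedged sketch under a generative model assumed here, not taken from the paper) alternates hard-thresholded sparse coding with an orthogonal Procrustes update of the operator:

```python
import numpy as np

rng = np.random.default_rng(4)
n, N, s = 16, 400, 3
# Ground truth: an orthogonal operator Q and s-sparse codes generate the data
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
Z_true = np.zeros((n, N))
for j in range(N):
    supp = rng.choice(n, s, replace=False)
    Z_true[supp, j] = rng.standard_normal(s)
X = Q.T @ Z_true                      # so that Q X = Z_true exactly

W = np.eye(n)                         # deliberately naive initialization
for _ in range(50):
    # Sparse-coding step: keep the s largest-magnitude entries per column
    WX = W @ X
    idx = np.argsort(-np.abs(WX), axis=0)[:s]
    Z = np.zeros_like(WX)
    np.put_along_axis(Z, idx, np.take_along_axis(WX, idx, axis=0), axis=0)
    # Operator step: orthogonal Procrustes, W = U V^T from svd(Z X^T)
    U, _, Vt = np.linalg.svd(Z @ X.T)
    W = U @ Vt

# Energy of W X outside the s largest entries per column (sparsification error)
tail = np.sort(np.abs(W @ X), axis=0)[:-s]
err = np.linalg.norm(tail) / np.linalg.norm(X)
print(f"sparsification error: {err:.3f}")
```

Both steps solve their subproblem exactly, so the fit objective is monotone; on data that truly follow the sparse unitary model the learned W typically sparsifies far better than the initialization, consistent with the robustness the abstract reports.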

6.
Med Phys ; 46(11): 4803-4815, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31408539

ABSTRACT

PURPOSE: In computed tomography (CT), miscalibrated or imperfect detector elements produce stripe artifacts in the sinogram. These stripe artifacts in Radon space are responsible for concentric ring artifacts in the reconstructed images. In this work, a novel optimization model is proposed to remove the ring artifacts within an iterative reconstruction procedure. METHODS: In the proposed optimization model, a novel ring total variation (RTV) regularization is developed to penalize the ring artifacts in the image domain. Moreover, to correct the sinogram, a new correcting vector is proposed to compensate for the malfunctioning of detectors in the projection domain. The optimization problem is solved using an alternating minimization scheme (AMS). In each iteration, the fidelity term along with the RTV regularization is solved using the alternating direction method of multipliers (ADMM) to find the image, and then the correcting coefficient vector is updated for the affected detectors according to the obtained image. Because the sinogram and the image are updated simultaneously, the proposed method operates in both the image and sinogram domains. RESULTS: The proposed method is evaluated using both simulated and physical phantom datasets containing different ring artifact patterns. In the simulated datasets, the Shepp-Logan phantom, a real chest scan image, and a noisy low-contrast phantom are considered for the performance evaluation of our method. We compare the quantitative root mean square error (RMSE) and structural similarity (SSIM) results of our algorithm with the wavelet-Fourier sinogram filtering method of Münch et al., the ring artifact reduction method of Brun et al., and the TV-based ring correction method of Paleo and Mirone. Our method is also evaluated using a physical phantom dataset in which strong ring artifacts are manifest due to the miscalibration of a large number of detectors. Our proposed method outperforms the competing methods in terms of both qualitative and quantitative evaluation results. CONCLUSION: The experimental results on both simulated and physical phantom datasets show that the proposed method achieves state-of-the-art ring artifact reduction performance in terms of RMSE, SSIM, and subjective visual quality.


Subject(s)
Algorithms , Artifacts , Image Processing, Computer-Assisted/methods , Tomography, X-Ray Computed , Fourier Analysis , Phantoms, Imaging
7.
Magn Reson Med ; 82(1): 485-494, 2019 07.
Article in English | MEDLINE | ID: mdl-30860286

ABSTRACT

PURPOSE: To introduce a novel framework that combines deep-learned priors with complementary image regularization penalties to reconstruct free-breathing and ungated cardiac MRI data from highly undersampled multi-channel measurements. METHODS: Image recovery is formulated as an optimization problem, where the cost function is the sum of a data-consistency term, a convolutional neural network (CNN) denoising prior, and a SmooThness regularization on manifolds (SToRM) prior that exploits the manifold structure of the images in the dataset. An iterative algorithm is introduced that alternates between denoising the image data using the CNN and SToRM priors and a conjugate gradient (CG) step that minimizes the data-consistency cost. Unrolling the iterative algorithm yields a deep network, which is trained using exemplar data. RESULTS: The experimental results demonstrate that the proposed framework offers fast recovery of free-breathing and ungated cardiac MRI data from less than 8.2 s of acquisition time per slice. The reconstructions are comparable in image quality to SToRM reconstructions from 42 s of acquisition time, offering a fivefold reduction in scan time. CONCLUSIONS: The results show the benefit of combining deep-learned CNN priors with complementary image regularization penalties. Specifically, this work demonstrates the benefit of combining the CNN prior, which exploits local and population-generalizable redundancies, with SToRM, which capitalizes on patient-specific information including cardiac and respiratory patterns. The synergistic combination is facilitated by the proposed framework.
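The denoise-then-data-consistency alternation can be sketched in a plug-and-play style, with a toy moving-average denoiser standing in for the trained CNN/SToRM priors and a direct solve standing in for the CG step; the operator, signal, and weights are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 64, 40                                     # signal length, measurements
A = rng.standard_normal((m, n)) / np.sqrt(m)      # undersampled measurement op
x_true = np.cumsum(0.1 * rng.standard_normal(n))  # smooth-ish ground truth
b = A @ x_true + 0.01 * rng.standard_normal(m)

def denoise(z):
    """Toy moving-average denoiser standing in for the learned prior."""
    return np.convolve(z, np.ones(5) / 5, mode="same")

lam = 0.5
x = A.T @ b                                       # crude initial estimate
for _ in range(20):
    z = denoise(x)                                # denoising step
    # Data-consistency step: argmin_x ||A x - b||^2 + lam ||x - z||^2
    # (solved directly here; a CG step is used in the paper's setting)
    x = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b + lam * z)

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
err0 = np.linalg.norm(A.T @ b - x_true) / np.linalg.norm(x_true)
print(f"error: {err:.3f} (initial {err0:.3f})")
```

Unrolling a fixed number of such iterations and learning the denoiser end to end is what turns this alternation into the trained deep network described above.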


Subject(s)
Cardiac Imaging Techniques/methods , Deep Learning , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Algorithms , Databases, Factual , Heart/diagnostic imaging , Humans , Respiration
8.
Article in English | MEDLINE | ID: mdl-32025078

ABSTRACT

Photon counting CT (PCCT) is an X-ray imaging technique that has undergone great development in the past decade. PCCT has the potential to improve dose efficiency and low-dose performance. In this paper, we propose a statistics-based iterative algorithm that performs direct reconstruction of material-decomposed images. Compared with the conventional sinogram-based decomposition method, whose performance degrades in low-dose scenarios, the multi-energy alternating minimization algorithm for photon counting CT (MEAM-PCCT) can generate accurate material-decomposed images with much smaller biases.

9.
Psychometrika ; 84(1): 124-146, 2019 03.
Article in English | MEDLINE | ID: mdl-30456747

ABSTRACT

Joint maximum likelihood (JML) estimation is one of the earliest approaches to fitting item response theory (IRT) models. This procedure treats both the item and person parameters as unknown but fixed model parameters and estimates them simultaneously by solving an optimization problem. However, the JML estimator is known to be asymptotically inconsistent for many IRT models when the sample size goes to infinity and the number of items remains fixed. Consequently, in the psychometrics literature, this estimator is less preferred than the marginal maximum likelihood (MML) estimator. In this paper, we re-investigate the JML estimator for high-dimensional exploratory item factor analysis from both statistical and computational perspectives. In particular, we establish a notion of statistical consistency for a constrained JML estimator, under an asymptotic setting in which both the number of items and the number of people grow to infinity and many responses may be missing. A parallel computing algorithm is proposed for this estimator that can scale to very large datasets. Via simulation studies, we show that when the dimensionality is high, the proposed estimator yields similar or even better results than the MML estimator, but can be obtained computationally much more efficiently. An illustrative real data example is provided based on the revised version of Eysenck's Personality Questionnaire (EPQ-R).
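The "treat persons and items as fixed parameters and optimize jointly" idea can be sketched on the one-dimensional Rasch model (far simpler than the paper's high-dimensional factor model), alternating gradient ascent steps over person abilities and item difficulties; sizes and step size are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)
P, J = 500, 40                        # persons, items (illustrative sizes)
theta_t = rng.standard_normal(P)      # true person abilities
b_t = rng.standard_normal(J)          # true item difficulties
prob = 1.0 / (1.0 + np.exp(-(theta_t[:, None] - b_t[None, :])))
Y = (rng.random((P, J)) < prob).astype(float)

def loglik(theta, b):
    z = theta[:, None] - b[None, :]
    return np.sum(Y * z - np.logaddexp(0.0, z))

# JML by alternating ascent over person and item parameters
theta, b = np.zeros(P), np.zeros(J)
lr = 1.0
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
    theta = theta + lr * (Y - p).mean(axis=1)   # person step
    p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
    b = b - lr * (Y - p).mean(axis=0)           # item step
    theta = theta - theta.mean()                # identifiability constraint
print(f"log-likelihood: {loglik(theta, b):.1f}")
```

With a fixed number of items the per-person estimates retain measurement error, which is the source of the classical inconsistency; the paper's regime lets both dimensions grow, where a constrained JML estimator becomes consistent.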


Subject(s)
Factor Analysis, Statistical , Likelihood Functions , Algorithms , Computer Simulation , Data Interpretation, Statistical , Female , Humans , Monte Carlo Method , Personality , Personality Tests , Psychometrics/methods , Surveys and Questionnaires
10.
Magn Reson Imaging ; 57: 165-175, 2019 04.
Article in English | MEDLINE | ID: mdl-30500348

ABSTRACT

In magnetic resonance (MR) imaging, for highly undersampled k-space data it is typically difficult to reconstruct images while preserving their original texture. High-degree total variation (HDTV) regularization handles staircase effects but still blurs textures. On the other hand, non-local TV (NLTV) regularization can preserve textures but introduces additional artifacts in highly noisy images. In this paper, we propose a reconstruction model derived from HDTV and NLTV for robust MRI reconstruction. First, an MR image is decomposed into a smooth component and a texture component. Second, for the smooth component with sharp edges, isotropic second-order TV is used to reduce staircase effects. For the texture component with a piecewise-constant background, NLTV and contourlet-based sparsity regularizations are employed to recover textures. The piecewise-constant background in the texture component helps accurately detect non-local similar image patches and avoids the artifacts introduced by NLTV. Finally, the proposed reconstruction model is solved through an alternating minimization scheme. The experimental results demonstrate that the proposed model achieves satisfactory reconstruction quality for highly undersampled k-space data.


Subject(s)
Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging , Algorithms , Arteries/diagnostic imaging , Artifacts , Brain/diagnostic imaging , Brain Mapping , Heart/diagnostic imaging , Humans , Magnetic Resonance Spectroscopy , Models, Statistical , Normal Distribution , Signal-To-Noise Ratio , Surface Properties
11.
J Xray Sci Technol ; 26(4): 603-622, 2018.
Article in English | MEDLINE | ID: mdl-29689766

ABSTRACT

Excessive radiation exposure in computed tomography (CT) scans increases the chance of developing cancer and has become a major clinical concern. Recently, statistical iterative reconstruction (SIR) with l0-norm dictionary learning regularization has been developed to reconstruct CT images from low-dose and few-view datasets in order to reduce radiation dose. Nonetheless, the sparse regularization term adopted in this approach is the l0-norm, which cannot guarantee the global convergence of the algorithm. To address this problem, in this study we introduce the l1-norm dictionary learning penalty into the SIR framework for low-dose CT image reconstruction and develop an alternating minimization algorithm to minimize the associated objective function, which transforms the CT image reconstruction problem into a sparse coding subproblem and an image updating subproblem. During the image updating process, an efficient model-function approach based on the balancing principle is applied to choose the regularization parameters. The proposed alternating minimization algorithm was evaluated first using real projection data of a sheep lung CT perfusion scan and then using numerical simulations based on a sheep lung CT image and a chest image. Both visual assessment and quantitative comparison in terms of root mean square error (RMSE) and the structural similarity (SSIM) index demonstrated that the new image reconstruction algorithm yielded performance similar to the l0-norm dictionary learning penalty and outperformed the conventional filtered backprojection (FBP) and total variation (TV) minimization algorithms.
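The l1-norm sparse coding subproblem that the alternation produces is a standard lasso, solvable by ISTA (proximal gradient with soft thresholding). A minimal sketch on a random dictionary, with all sizes and the penalty weight chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
n, k, s = 64, 128, 5
D = rng.standard_normal((n, k))
D /= np.linalg.norm(D, axis=0)        # unit-norm dictionary atoms
a_true = np.zeros(k)
a_true[rng.choice(k, s, replace=False)] = rng.standard_normal(s)
y = D @ a_true                        # patch synthesized from s atoms

# ISTA: proximal gradient for min_a 0.5 ||y - D a||^2 + lam * ||a||_1
lam = 0.05
Lc = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the gradient
a = np.zeros(k)
for _ in range(300):
    g = D.T @ (D @ a - y)             # gradient of the quadratic term
    z = a - g / Lc
    a = np.sign(z) * np.maximum(np.abs(z) - lam / Lc, 0.0)  # soft threshold

err = np.linalg.norm(D @ a - y) / np.linalg.norm(y)
print(f"residual: {err:.3f}")
```

Unlike l0-constrained coding, every step here is the prox of a convex penalty, which is what makes the convergence guarantees of the l1 formulation possible.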


Subject(s)
Algorithms , Image Processing, Computer-Assisted/methods , Supervised Machine Learning , Tomography, X-Ray Computed/methods , Animals , Humans , Lung/diagnostic imaging , Phantoms, Imaging , Sheep
12.
J Inequal Appl ; 2017(1): 232, 2017.
Article in English | MEDLINE | ID: mdl-29026279

ABSTRACT

In this paper, we study a minimization problem of the type [Formula: see text], where f and g are both nonconvex nonsmooth functions and R is a smooth function we can choose. We present a proximal alternating minimization algorithm with an inertial effect. We establish convergence by constructing a key function H that guarantees a sufficient-decrease property of the iterates. In fact, we prove that if H satisfies the Kurdyka-Lojasiewicz inequality, then every bounded sequence generated by the algorithm converges strongly to a critical point of L.
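A concrete instance of inertial proximal alternating minimization, on a simple convex stand-in (f and g as l1 terms with closed-form prox, and a quadratic coupling R); the objective, step constant, and inertia weight below are illustrative choices, not the paper's general nonconvex setting:

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding: prox of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(8)
u = rng.standard_normal(20)           # fixed data vector
mu, c, alpha = 0.1, 4.0, 0.3          # l1 weight, prox step constant, inertia

def obj(x, y):
    # L(x, y) = f(x) + R(x, y) + g(y) with f = g = mu * ||.||_1
    return (mu * np.abs(x).sum() + 0.5 * np.sum((x - y) ** 2)
            + 0.5 * np.sum((x - u) ** 2) + mu * np.abs(y).sum())

x = xm = np.zeros(20)                 # current and previous x iterates
y = ym = np.zeros(20)
for _ in range(200):
    gx = (x - y) + (x - u)            # grad_x R
    xn = soft(x + alpha * (x - xm) - gx / c, mu / c)   # inertial prox step
    xm, x = x, xn
    gy = y - x                        # grad_y R at the updated x
    yn = soft(y + alpha * (y - ym) - gy / c, mu / c)
    ym, y = y, yn
print(f"objective: {obj(x, y):.3f}")
```

The inertial terms alpha * (x - xm) and alpha * (y - ym) extrapolate along the previous step; the paper's analysis builds the auxiliary function H precisely to control this extrapolation so that a sufficient-decrease property still holds.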

13.
Psychometrika ; 81(3): 702-26, 2016 09.
Article in English | MEDLINE | ID: mdl-26608962

ABSTRACT

Given a positive definite covariance matrix [Formula: see text] of dimension n, we approximate it with a covariance of the form [Formula: see text], where H has a prescribed number [Formula: see text] of columns and [Formula: see text] is diagonal. The quality of the approximation is gauged by the I-divergence between the zero mean normal laws with covariances [Formula: see text] and [Formula: see text], respectively. To determine a pair (H, D) that minimizes the I-divergence we construct, by lifting the minimization into a larger space, an iterative alternating minimization algorithm (AML) à la Csiszár-Tusnády. As it turns out, the proper choice of the enlarged space is crucial for optimization. The convergence of the algorithm is studied, with special attention given to the case where D is singular. The theoretical properties of the AML are compared to those of the popular EM algorithm for exploratory factor analysis. Inspired by the ECME (a Newton-Raphson variation on EM), we develop a similar variant of AML, called ACML, and in a few numerical experiments, we compare the performances of the four algorithms.


Subject(s)
Algorithms , Factor Analysis, Statistical , Psychometrics , Statistics as Topic , Humans
14.
J Comput Graph Stat ; 24(4): 994-1013, 2015.
Article in English | MEDLINE | ID: mdl-27087770

ABSTRACT

Clustering is a fundamental problem in many scientific applications. Standard methods such as k-means, Gaussian mixture models, and hierarchical clustering, however, are beset by local minima, which are sometimes drastically suboptimal. Recently introduced convex relaxations of k-means and hierarchical clustering shrink cluster centroids toward one another and ensure a unique global minimizer. In this work we present two splitting methods for solving the convex clustering problem. The first is an instance of the alternating direction method of multipliers (ADMM); the second is an instance of the alternating minimization algorithm (AMA). In contrast to previously considered algorithms, our ADMM and AMA formulations provide simple and unified frameworks for solving the convex clustering problem under the previously studied norms and open the door to potentially novel norms. We demonstrate the performance of our algorithm on both simulated and real data examples. While the differences between the two algorithms appear to be minor on the surface, complexity analysis and numerical experiments show AMA to be significantly more efficient. This article has supplemental materials available online.
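A compact ADMM sketch for the convex clustering problem with uniform all-pairs weights, min over U of 0.5 * sum ||x_i - u_i||^2 + gamma * sum ||u_i - u_j||, on toy two-cluster data; gamma, rho, and the data are illustrative, and the article's AMA variant and general weighted norms are not reproduced here:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(9)
# Two well-separated groups of points in the plane
X = np.vstack([0.1 * rng.standard_normal((5, 2)),
               0.1 * rng.standard_normal((5, 2)) + 5.0])
n, d = X.shape
edges = list(combinations(range(n), 2))
E = np.zeros((len(edges), n))         # edge incidence matrix: rows give u_i - u_j
for l, (i, j) in enumerate(edges):
    E[l, i], E[l, j] = 1.0, -1.0

gamma, rho = 0.1, 1.0
U, V, Lam = X.copy(), E @ X, np.zeros((len(edges), d))
Msys = np.eye(n) + rho * E.T @ E      # normal-equation matrix, fixed across iters
for _ in range(300):
    # U-update: ridge-type least squares coupling data fit and edge consensus
    U = np.linalg.solve(Msys, X + rho * E.T @ (V + Lam / rho))
    # V-update: group soft-thresholding (prox of gamma * ||.||_2 per edge)
    G = E @ U - Lam / rho
    norms = np.linalg.norm(G, axis=1, keepdims=True)
    V = np.maximum(1.0 - (gamma / rho) / np.maximum(norms, 1e-12), 0.0) * G
    # Dual update on the consensus constraint V = E U
    Lam += rho * (V - E @ U)

gap = np.linalg.norm(U[:5].mean(axis=0) - U[5:].mean(axis=0))
print(f"between-group centroid gap: {gap:.2f}")
```

The group soft-threshold step is what fuses centroids: once an edge's difference falls below gamma / rho, the corresponding v_l collapses to zero and the two points share a centroid, which is the convex surrogate for cluster assignment.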

15.
Med Eng Phys ; 36(11): 1428-35, 2014 Nov.
Article in English | MEDLINE | ID: mdl-24998900

ABSTRACT

T2 mapping is a powerful noninvasive technique providing quantitative biological information on inherent tissue properties. However, its clinical usage is limited by the relatively long scanning time. This paper proposes a novel model-based method to address this problem. Specifically, we directly estimated the relaxation values from undersampled k-space data by exploiting the sparsity of the proton density and T2 maps in a penalized maximum likelihood formulation. An alternating minimization approach was presented to estimate the relaxation maps separately. Both a numerical phantom and an in vivo experimental dataset were used to demonstrate the performance of the proposed method. We show that the proposed method outperformed state-of-the-art techniques in terms of detail preservation and artifact suppression across various reduction factors and in both moderate and heavy noise conditions. The superior reconstruction performance validates its promising potential for fast T2 mapping applications.
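The signal model behind T2 mapping is the mono-exponential decay S(TE) = PD * exp(-TE / T2). A hedged sketch of the standard voxelwise log-linear fit (the baseline the paper improves on, not its undersampled k-space estimator; echo times and parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(10)
TE = np.array([10.0, 20.0, 40.0, 60.0, 80.0, 120.0])  # echo times (ms)
pd_true, t2_true = 1000.0, 55.0                        # proton density, T2 (ms)
S = pd_true * np.exp(-TE / t2_true)
S_noisy = S + rng.standard_normal(TE.size)             # mild additive noise

# Log-linear least squares: log S = log PD - TE / T2
A = np.vstack([np.ones_like(TE), -TE]).T
coef, *_ = np.linalg.lstsq(A, np.log(S_noisy), rcond=None)
pd_est, t2_est = np.exp(coef[0]), 1.0 / coef[1]
print(f"PD = {pd_est:.0f}, T2 = {t2_est:.1f} ms")
```

This per-voxel fit needs fully sampled echoes; the paper's contribution is to estimate PD and T2 maps directly from undersampled k-space by alternating between the two maps under sparsity penalties.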


Subject(s)
Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging , Models, Theoretical , Protons , Brain Mapping , Phantoms, Imaging , Time Factors
16.
Phys Med ; 29(5): 500-12, 2013 Sep.
Article in English | MEDLINE | ID: mdl-23343747

ABSTRACT

PURPOSE: To present a framework for characterizing the data needed to implement a polyenergetic model-based statistical reconstruction algorithm, Alternating Minimization (AM), on a commercial fan-beam CT scanner, and a novel method for assessing the accuracy of the commissioned data model. METHODS: The X-ray spectra for three tube potentials on the Philips Brilliance CT scanner were estimated by fitting a semi-empirical X-ray spectrum model to transmission measurements. Spectral variations due to the bowtie filter were computationally modeled. Eight homogeneous cylinders of PMMA, Teflon, and water with varying diameters were scanned at each energy. Central-axis scatter was measured for each cylinder using a beam-stop technique. AM reconstruction with a single-basis object model matched to the scanned cylinder's composition allows assessment of the accuracy of the AM algorithm's polyenergetic data model. Filtered backprojection (FBP) was also performed to compare consistency metrics such as uniformity and object-size dependence. RESULTS: The spectrum model fit the measured transmission curves with a residual root-mean-square error of 1.20%-1.34% for the three scanning energies. The estimated spectrum and scatter data supported polyenergetic AM reconstruction of the test cylinders to within 0.5% of expected values in the matched object-model reconstruction test. In comparison to FBP, polyenergetic AM exhibited better uniformity and less object-size dependence. CONCLUSIONS: Reconstruction using a matched object model illustrates that the polyenergetic AM algorithm's data model was commissioned to within 0.5% of an expected ground truth. These results support ongoing and future research with polyenergetic AM reconstruction of commercial fan-beam CT data for quantitative CT applications.
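Why a polyenergetic data model matters can be shown with Beer-Lambert attenuation of a two-energy toy spectrum: the effective attenuation coefficient drifts with object thickness (beam hardening), so a monoenergetic model is biased. The weights and coefficients below are illustrative values, not the commissioned Philips Brilliance data:

```python
import numpy as np

# Toy two-energy spectrum and energy-dependent attenuation of a water-like
# material (illustrative values only)
weights = np.array([0.6, 0.4])         # spectral weights at two energies
mu = np.array([0.30, 0.15])            # attenuation coefficients (1/cm)

thickness = np.linspace(0.1, 30.0, 100)             # path lengths (cm)
# Polyenergetic transmission: weighted sum of Beer-Lambert terms
T = (weights[None, :] * np.exp(-np.outer(thickness, mu))).sum(axis=1)

# Effective attenuation -ln(T)/t decreases with thickness: beam hardening
mu_eff = -np.log(T) / thickness
print(f"mu_eff: {mu_eff[0]:.3f} -> {mu_eff[-1]:.3f} 1/cm")
```

As the soft spectral component is preferentially absorbed, mu_eff falls from near the spectrum-weighted mean toward the hardest component; fitting a spectrum model to measured transmission curves, as in the framework above, is what lets the AM algorithm account for this effect.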


Subject(s)
Algorithms , Image Processing, Computer-Assisted/methods , Statistics as Topic/methods , Tomography, X-Ray Computed/instrumentation , Radiotherapy, Image-Guided , Uncertainty