Results 1 - 3 of 3
1.
Phys Rev E; 106(2-1): 024136, 2022 Aug.
Article in English | MEDLINE | ID: mdl-36109982

ABSTRACT

We consider increasingly complex models of matrix denoising and dictionary learning in the Bayes-optimal setting, in the challenging regime where the matrices to infer have a rank growing linearly with the system size. This is in contrast with most existing literature, which is concerned with the low-rank (i.e., constant-rank) regime. We first consider a class of rotationally invariant matrix denoising problems whose mutual information and minimum mean-square error are computable using techniques from random matrix theory. Next, we analyze the more challenging models of dictionary learning. To do so we introduce a combination of the replica method from statistical mechanics with random matrix theory, which we coin the spectral replica method. This allows us to derive variational formulas for the mutual information between hidden representations and the noisy data of the dictionary learning problem, as well as for the overlaps quantifying the optimal reconstruction error. The proposed method reduces the number of degrees of freedom from Θ(N²) matrix entries to Θ(N) eigenvalues (or singular values), and yields Coulomb gas representations of the mutual information which are reminiscent of matrix models in physics. The main ingredients are large deviation results for random matrices, a replica symmetric decoupling ansatz at the level of the probability distributions of eigenvalues (or singular values) of certain overlap matrices, and the use of Harish-Chandra-Itzykson-Zuber spherical integrals.
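
For concreteness, here is a minimal numerical sketch of the extensive-rank denoising setting described above; it is not the paper's computation. The observation model Y = S + sqrt(Delta)·Z, the factorized signal S = U Vᵀ / sqrt(N), and the specific values of N, the rank, and Delta are illustrative assumptions. The sketch only makes tangible the reduction from Θ(N²) matrix entries to Θ(N) singular values.

# Minimal sketch (assumptions: observation model Y = S + sqrt(Delta) * Z,
# with S an extensive-rank, rotationally invariant signal and Z Gaussian noise).
# It illustrates why Theta(N) singular values, rather than Theta(N^2) entries,
# carry the spectral information exploited in this regime.
import numpy as np

rng = np.random.default_rng(0)

N = 500                      # system size
rank = N // 4                # rank growing linearly with N (extensive rank)
Delta = 0.5                  # noise variance (illustrative value)

# Extensive-rank signal S = U V^T / sqrt(N)
U = rng.standard_normal((N, rank))
V = rng.standard_normal((N, rank))
S = U @ V.T / np.sqrt(N)

# Noisy observation Y = S + sqrt(Delta) * Z
Z = rng.standard_normal((N, N))
Y = S + np.sqrt(Delta) * Z

# Theta(N) singular values summarize the rotationally invariant content
# of the Theta(N^2) matrix entries.
sv_signal = np.linalg.svd(S, compute_uv=False)
sv_obs = np.linalg.svd(Y, compute_uv=False)

print("entries:", N * N, " singular values:", len(sv_obs))
print("top signal/observation singular values:",
      sv_signal[:3].round(2), sv_obs[:3].round(2))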

2.
Entropy (Basel); 23(7), 2021 Jun 23.
Article in English | MEDLINE | ID: mdl-34201581

ABSTRACT

We propose the first correct special-purpose quantum circuits for the preparation of Bell diagonal states (BDS), and implement them on the IBM Quantum computer, characterizing and testing complex aspects of their quantum correlations in the full parameter space. Among the circuits proposed, one involves only two quantum bits but requires adapted quantum tomography routines that handle classical bits in parallel. The entire class of Bell diagonal states is generated, and several characteristic indicators, namely entanglement of formation and concurrence, CHSH non-locality, steering and discord, are experimentally evaluated over the full parameter space and compared with theory. As a by-product of this work, we also find a remarkable general inequality between "quantum discord" and "asymmetric relative entropy of discord": the former never exceeds the latter. We further prove that the two coincide for all BDS.
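
To make the object under study concrete, the following sketch (not the paper's circuits or experiments) builds a Bell diagonal state as a probabilistic mixture of the four Bell states and evaluates one of the indicators listed above, the concurrence, via the Wootters formula; the mixing probabilities are arbitrary illustrative values.

# Minimal sketch: a Bell diagonal state as a mixture of the four Bell states,
# and its concurrence via the Wootters formula. The mixing probabilities below
# are arbitrary illustrative values, not data from the experiment.
import numpy as np

# Two-qubit Bell states |Phi+>, |Phi->, |Psi+>, |Psi->
s = 1 / np.sqrt(2)
bell = [
    s * np.array([1, 0, 0, 1]),   # |Phi+>
    s * np.array([1, 0, 0, -1]),  # |Phi->
    s * np.array([0, 1, 1, 0]),   # |Psi+>
    s * np.array([0, 1, -1, 0]),  # |Psi->
]
probs = np.array([0.7, 0.1, 0.1, 0.1])  # example mixing weights (sum to 1)

# Bell diagonal density matrix rho = sum_i p_i |bell_i><bell_i|
rho = sum(p * np.outer(b, b.conj()) for p, b in zip(probs, bell))

# Wootters concurrence: C = max(0, l1 - l2 - l3 - l4), with l_i the decreasing
# square roots of the eigenvalues of rho (sigma_y x sigma_y) rho* (sigma_y x sigma_y)
sy = np.array([[0, -1j], [1j, 0]])
YY = np.kron(sy, sy)
R = rho @ YY @ rho.conj() @ YY
lam = np.sqrt(np.clip(np.linalg.eigvals(R).real, 0, None))
lam = np.sort(lam)[::-1]
concurrence = max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

print("concurrence:", round(concurrence, 3))  # ~0.4 for these example weights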

3.
Proc Natl Acad Sci U S A; 116(12): 5451-5460, 2019 Mar 19.
Article in English | MEDLINE | ID: mdl-30824595

ABSTRACT

Generalized linear models (GLMs) are used in high-dimensional machine learning, statistics, communications, and signal processing. In this paper we analyze GLMs when the data matrix is random, as is relevant in problems such as compressed sensing, error-correcting codes, or benchmark models in neural networks. We evaluate the mutual information (or "free entropy") from which we deduce the Bayes-optimal estimation and generalization errors. Our analysis applies in the high-dimensional limit where both the number of samples and the dimension are large and their ratio is fixed. Nonrigorous predictions for the optimal errors, based on the so-called replica method from statistical physics, existed for special cases of GLMs, e.g., for the perceptron. Our present paper rigorously establishes these decades-old conjectures and brings forward their algorithmic interpretation in terms of the performance of the generalized approximate message-passing algorithm. Furthermore, for many learning problems, we tightly characterize the regions of parameters for which this algorithm achieves optimal performance and locate the associated sharp phase transitions separating learnable and nonlearnable regions. We believe that this random version of GLMs can serve as a challenging benchmark for multipurpose algorithms.
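
As an illustration of this setting (a sketch under stated assumptions, not the paper's analysis and not the generalized approximate message-passing algorithm), the snippet below generates a teacher-student GLM with an i.i.d. Gaussian random data matrix in the proportional regime where the sample-to-dimension ratio alpha = m/n is fixed; the perceptron channel y = sign(X w*/sqrt(n)), the value of alpha, and the ridge-regression baseline are illustrative choices.

# Minimal sketch of the random-data GLM (teacher-student) setting:
# i.i.d. Gaussian data matrix, labels from a perceptron channel,
# samples m and dimension n both large with fixed ratio alpha = m / n.
# The channel, alpha, and the ridge "student" below are illustrative
# assumptions; they are not the GAMP algorithm analyzed in the paper.
import numpy as np

rng = np.random.default_rng(1)

n = 1000                 # dimension
alpha = 2.0              # fixed sample-to-dimension ratio
m = int(alpha * n)       # number of samples

# Teacher weights and random data matrix
w_star = rng.standard_normal(n)
X = rng.standard_normal((m, n))

# Perceptron channel: y = sign(X w* / sqrt(n))
y = np.sign(X @ w_star / np.sqrt(n))

# A naive student estimator (ridge regression on the +/-1 labels)
lam = 1.0
w_hat = np.linalg.solve(X.T @ X / n + lam * np.eye(n), X.T @ y / np.sqrt(n))

# Overlap with the teacher, the order parameter tracked by the theory
overlap = (w_hat @ w_star) / (np.linalg.norm(w_hat) * np.linalg.norm(w_star))
print(f"alpha = {alpha}, teacher-student overlap = {overlap:.3f}")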
