Results 1 - 19 of 19
1.
Entropy (Basel) ; 23(12)2021 Dec 06.
Article in English | MEDLINE | ID: mdl-34945946

ABSTRACT

Neural networks play a growing role in many scientific disciplines, including physics. Variational autoencoders (VAEs) are neural networks that are able to represent the essential information of a high-dimensional data set in a low-dimensional latent space, which has a probabilistic interpretation. In particular, the so-called encoder network, the first part of the VAE, which maps its input onto a position in latent space, additionally provides uncertainty information in terms of a variance around this position. In this work, an extension to the autoencoder architecture is introduced, the FisherNet. In this architecture, the latent space uncertainty is not generated using an additional information channel in the encoder but is derived from the decoder by means of the Fisher information metric. This architecture has advantages from a theoretical point of view as it provides a direct uncertainty quantification derived from the model and also accounts for uncertainty cross-correlations. We show experimentally that the FisherNet produces more accurate data reconstructions than a comparable VAE and that its learning performance also appears to scale better with the number of latent space dimensions.
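The key step can be sketched in a few lines. The following is a hedged illustration, not the paper's FisherNet implementation: for a decoder with a Gaussian likelihood of fixed noise level sigma, the Fisher information metric pulled back to latent space is G(z) = J(z)^T J(z) / sigma^2, with J the decoder Jacobian, and its inverse serves as the latent covariance. The decoder architecture and the value of sigma below are arbitrary stand-ins.

```python
import torch

latent_dim, data_dim, sigma = 2, 16, 0.1

decoder = torch.nn.Sequential(              # stand-in decoder network
    torch.nn.Linear(latent_dim, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, data_dim),
)

def latent_covariance(z):
    """Latent uncertainty at z from the decoder's Fisher information metric."""
    J = torch.autograd.functional.jacobian(decoder, z)   # shape (data_dim, latent_dim)
    G = J.T @ J / sigma**2                                # Fisher metric of a Gaussian likelihood
    return torch.linalg.inv(G)                            # includes uncertainty cross-correlations

print(latent_covariance(torch.zeros(latent_dim)))
```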

2.
Entropy (Basel) ; 23(7)2021 Jul 02.
Article in English | MEDLINE | ID: mdl-34356394

ABSTRACT

Efficiently accessing the information contained in non-linear and high-dimensional probability distributions remains a core challenge in modern statistics. Traditionally, estimators that go beyond point estimates are either categorized as Variational Inference (VI) or Markov chain Monte Carlo (MCMC) techniques. While MCMC methods that utilize the geometric properties of continuous probability distributions to increase their efficiency have been proposed, VI methods rarely use the geometry. This work aims to fill this gap and proposes geometric Variational Inference (geoVI), a method based on Riemannian geometry and the Fisher information metric. It is used to construct a coordinate transformation that relates the Riemannian manifold associated with the metric to Euclidean space. The distribution, expressed in the coordinate system induced by the transformation, takes a particularly simple form that allows for an accurate variational approximation by a normal distribution. Furthermore, the algorithmic structure allows for an efficient implementation of geoVI, which is demonstrated on multiple examples, ranging from low-dimensional illustrative ones to non-linear, hierarchical Bayesian inverse problems in thousands of dimensions.
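A one-dimensional, hedged illustration of the coordinate-transformation idea (this is not the geoVI algorithm itself): the Fisher metric of a Poisson likelihood with rate lambda is 1/lambda, so the coordinate y = 2*sqrt(lambda) has unit metric, and expressed in y the distribution becomes close to a normal of unit width. The rate and sample size below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 6.0                                    # assumed Poisson rate
k = rng.poisson(lam, size=100_000)           # samples in the original coordinate
y = 2.0 * np.sqrt(k)                         # coordinate in which the Fisher metric is 1

skew = lambda a: np.mean((a - a.mean())**3) / a.std()**3
print("std of k (about sqrt(lam)):", k.std())
print("std of y (about 1):       ", y.std())
print("skewness of k vs y:       ", skew(k), skew(y))
```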

3.
Entropy (Basel) ; 23(6)2021 May 31.
Article in English | MEDLINE | ID: mdl-34073066

ABSTRACT

We showed how to use trained neural networks to perform Bayesian reasoning in order to solve tasks outside their initial scope. Deep generative models provide prior knowledge, and classification/regression networks impose constraints. The tasks at hand were formulated as Bayesian inference problems, which we solved approximately through variational or sampling techniques. The approach built on top of already trained networks, and the addressable questions grew super-exponentially with the number of available networks. In its simplest form, the approach yielded conditional generative models. However, multiple simultaneous constraints give rise to more elaborate questions. We compared the approach to specifically trained generators, showed how to solve riddles, and demonstrated its compatibility with state-of-the-art architectures.
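A hedged sketch of the simplest case described above, conditional generation by combining a generative prior with a classifier constraint. The `generator` and `classifier` below are untrained stand-in modules (in the setting of the abstract they would be networks that are already trained), and a simple maximum-a-posteriori search over the latent variable replaces the variational and sampling techniques used in the paper.

```python
import torch

latent_dim, data_dim, n_classes, target = 8, 32, 5, 3

generator = torch.nn.Sequential(torch.nn.Linear(latent_dim, 64), torch.nn.ReLU(),
                                torch.nn.Linear(64, data_dim))
classifier = torch.nn.Sequential(torch.nn.Linear(data_dim, 64), torch.nn.ReLU(),
                                 torch.nn.Linear(64, n_classes))

z = torch.zeros(latent_dim, requires_grad=True)       # latent variable to be optimized
opt = torch.optim.Adam([z], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    x = generator(z)                                  # generative model provides the prior knowledge
    log_prior = -0.5 * (z**2).sum()                   # standard normal prior on the latent variable
    log_like = torch.log_softmax(classifier(x), -1)[target]   # classifier imposes the constraint
    loss = -(log_prior + log_like)                    # negative log joint probability (up to constants)
    loss.backward()
    opt.step()

print("conditional sample (first entries):", generator(z).detach()[:5])
```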

4.
Phys Rev E ; 97(3-1): 033314, 2018 Mar.
Article in English | MEDLINE | ID: mdl-29776172

ABSTRACT

Most simulation schemes for partial differential equations (PDEs) focus on minimizing a simple error norm of a discretized version of a field. This paper takes a fundamentally different approach: the discretized field is interpreted as data providing information about a real physical field that is unknown. The scheme seeks to conserve this information as the field evolves in time. Such an information-theoretic approach to simulation was pursued before by information field dynamics (IFD). In this paper, we work out the theory of IFD for nonlinear PDEs in a noiseless Gaussian approximation. The result is an action that can be minimized to obtain an information-optimal simulation scheme. It can be brought into a closed form using field operators to calculate the Gaussian integrals that appear. The resulting simulation schemes are tested numerically in two instances for the Burgers equation. Their accuracy surpasses that of finite-difference schemes at the same resolution. The IFD scheme, however, has to be correctly informed about the subgrid correlation structure. In certain limiting cases we recover well-known simulation schemes, such as spectral Fourier-Galerkin methods. We discuss the implications of the approximations made.
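For orientation, here is a minimal finite-difference solver for the viscous Burgers equation u_t + u u_x = nu u_xx, i.e. the kind of conventional scheme the abstract compares against. It is not the IFD scheme, whose update would additionally carry a model of the subgrid correlation structure; grid size, viscosity, and initial condition are arbitrary choices.

```python
import numpy as np

N, L, nu = 128, 2 * np.pi, 0.05
dx = L / N
dt = 0.2 * min(dx, dx**2 / (2 * nu))          # conservative explicit time step
x = np.arange(N) * dx
u = np.sin(x)                                  # initial condition

for _ in range(int(1.0 / dt)):                 # evolve to t ~ 1
    dudx = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)           # central first derivative
    d2udx2 = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2    # central second derivative
    u = u + dt * (-u * dudx + nu * d2udx2)                       # explicit Euler update

print("field values after t = 1:", u[:4])
```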

5.
Sci Adv ; 3(10): e1701634, 2017 10.
Article in English | MEDLINE | ID: mdl-28983512

ABSTRACT

Galaxy clusters are the most massive constituents of the large-scale structure of the universe. Although the hot thermal gas that pervades galaxy clusters is relatively well understood through observations with x-ray satellites, our understanding of the nonthermal part of the intracluster medium (ICM) remains incomplete. With Low-Frequency Array (LOFAR) and Giant Metrewave Radio Telescope (GMRT) observations, we have identified a phenomenon that can be unveiled only at extremely low radio frequencies and that offers new insights into the nonthermal component. We propose that the interplay between radio-emitting plasma and the perturbed intracluster medium can gently reenergize relativistic particles initially injected by active galactic nuclei. Sources powered through this mechanism can maintain electrons at higher energies than radiative aging would allow. If this mechanism is common for aged plasma, a population of mildly relativistic electrons can accumulate inside galaxy clusters, providing the seed population for merger-induced reacceleration mechanisms on larger scales, such as turbulence and shock waves.

6.
Phys Rev E ; 96(4-1): 042114, 2017 Oct.
Article in English | MEDLINE | ID: mdl-29347581

ABSTRACT

We present a method for the separation of superimposed, independent, autocorrelated components from noisy multichannel measurements. The method reconstructs and separates the components simultaneously, taking all channels into account, and thereby increases the effective signal-to-noise ratio considerably, allowing separations even in the high-noise regime. Characteristics of the measurement instruments can be included, allowing for application in complex measurement situations. Independent posterior samples can be provided, permitting error estimates on all desired quantities. Because it is built on the concept of information field theory, the algorithm is not restricted to any particular dimensionality of the underlying space or discretization scheme thereof.
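A hedged numpy sketch of the underlying linear model rather than of the paper's algorithm: two independent, autocorrelated components with assumed power spectra are mixed into two noisy channels and reconstructed jointly, mode by mode, with a Wiener filter. The spectra, the mixing matrix, and the noise level are made-up illustration values.

```python
import numpy as np

rng = np.random.default_rng(1)
nmodes = 256
k = np.arange(1, nmodes + 1, dtype=float)
P1, P2 = 50.0 / k**2, 50.0 / (1.0 + (k / 20.0)**4)   # assumed component power spectra
sigma2 = 1.0                                          # per-mode noise variance in both channels
R = np.array([[1.0, 0.7],
              [0.4, 1.0]])                            # mixing of components into channels

def cnormal(var):
    """Complex Gaussian Fourier coefficients with given per-mode variance."""
    return np.sqrt(var / 2) * (rng.normal(size=var.shape) + 1j * rng.normal(size=var.shape))

s = np.vstack([cnormal(P1), cnormal(P2)])             # true component modes
d = R @ s + np.vstack([cnormal(np.full(nmodes, sigma2)) for _ in range(2)])

m = np.zeros_like(s)
Ninv = np.eye(2) / sigma2
for i in range(nmodes):                               # joint Wiener filter per Fourier mode
    Sinv = np.diag([1.0 / P1[i], 1.0 / P2[i]])
    D = np.linalg.inv(Sinv + R.T @ Ninv @ R)          # posterior covariance of this mode
    m[:, i] = D @ (R.T @ Ninv @ d[:, i])

print("residual power / signal power, component 1:",
      np.mean(np.abs(m[0] - s[0])**2) / np.mean(np.abs(s[0])**2))
```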

7.
Phys Rev E ; 96(5-1): 052104, 2017 Nov.
Article in English | MEDLINE | ID: mdl-29347698

ABSTRACT

Stochastic differential equations are of utmost importance in various scientific and industrial areas. They are the natural description of dynamical processes whose precise equations of motion are either not known or too expensive to solve, e.g., when modeling Brownian motion. In some cases, the equations governing the dynamics of a physical system on macroscopic scales turn out to be unknown, since they typically cannot be deduced from general principles. In this work, we describe how the underlying laws of a stochastic process can be approximated by the spectral density of the corresponding process. Furthermore, we show how this density can be inferred from possibly very noisy and incomplete measurements of the dynamical field. Generally, inverse problems like these can be tackled with the help of information field theory. Here, we restrict ourselves to linear and autonomous processes. To demonstrate the applicability of the approach, we apply our reconstruction algorithm to a time series and to spatiotemporal processes.
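A hedged illustration of the statement that a linear, autonomous process is characterized by its spectral density (this is not the paper's inference algorithm): an Ornstein-Uhlenbeck process is integrated, measured with additive noise, and its spectrum, estimated with Welch's method, is compared against the analytic one-sided Lorentzian 4 D / (gamma^2 + (2 pi f)^2). All parameter values are arbitrary.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(2)
gamma, D, dt, n = 1.0, 1.0, 0.01, 200_000
x = np.zeros(n)
for i in range(1, n):                                  # Euler-Maruyama integration of the OU process
    x[i] = x[i-1] - gamma * x[i-1] * dt + np.sqrt(2 * D * dt) * rng.normal()

data = x + 0.2 * rng.normal(size=n)                    # noisy, direct measurement of the field
f, S_est = welch(data, fs=1/dt, nperseg=4096)
S_true = 4 * D / (gamma**2 + (2 * np.pi * f)**2)       # one-sided analytic spectral density

print("estimated / analytic spectral density at low frequency:", S_est[1] / S_true[1])
```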

8.
Phys Rev E ; 94(5-1): 053306, 2016 Nov.
Article in English | MEDLINE | ID: mdl-27967173

ABSTRACT

Signal inference problems with non-Gaussian posteriors can be hard to tackle. Using the concept of Gibbs free energy, such posteriors can be rephrased as Gaussian posteriors at the price of computing various expectation values with respect to a Gaussian distribution. We present a way of translating these expectation values into a language of operators similar to that used in quantum mechanics. This simplifies many calculations, for instance those involving log-normal priors. The operator calculus is illustrated by deriving a self-calibrating algorithm, which is tested with mock data.
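One of the Gaussian expectation values such an operator calculus keeps manageable is the mean of a log-normal field. It has the standard closed form (a textbook Gaussian identity, quoted here for orientation, not a result specific to the paper):

```latex
\left\langle e^{s(x)} \right\rangle_{\mathcal{G}(s-m,\,D)}
  \;=\; \exp\!\Big( m(x) + \tfrac{1}{2}\, D(x,x) \Big),
```

where m is the Gaussian mean and D(x,x) the variance at position x.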

9.
Phys Rev E ; 94(1-1): 012132, 2016 Jul.
Article in English | MEDLINE | ID: mdl-27575101

ABSTRACT

Stochastic differential equations describe many physical, biological, and sociological systems well, despite the simplifications often made in their derivation. Here, the use of simple stochastic differential equations to characterize and classify complex dynamical systems is proposed within a Bayesian framework. To this end, we develop a dynamic system classifier (DSC). The DSC first abstracts training data of a system in terms of time-dependent coefficients of the descriptive stochastic differential equation. Thereby the DSC identifies unique correlation structures within the training data. For definiteness, we restrict the presentation of the DSC to oscillation processes with a time-dependent frequency ω(t) and damping factor γ(t). Although real systems might be more complex, this simple oscillator captures many of their characteristic features. The ω and γ time lines represent the abstract system characterization and permit the construction of efficient signal classifiers. Numerical experiments show that such classifiers perform well even in the low signal-to-noise regime.
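A hedged sketch of the descriptive model named above, not of the classifier itself: a stochastically driven oscillator with time-dependent frequency omega(t) and damping gamma(t), integrated with a semi-implicit Euler-Maruyama step. The omega and gamma time lines chosen below are arbitrary; in the DSC they are what is abstracted from training data.

```python
import numpy as np

rng = np.random.default_rng(3)
dt, n, noise = 1e-3, 10_000, 0.5
omega = lambda t: 2 * np.pi * (1.0 + 0.1 * t)    # assumed slowly chirping frequency
gamma = lambda t: 0.1 + 0.05 * t                 # assumed slowly growing damping

x, v = 1.0, 0.0
traj = np.empty(n)
for i in range(n):
    t = i * dt
    traj[i] = x
    v += (-gamma(t) * v - omega(t)**2 * x) * dt + noise * np.sqrt(dt) * rng.normal()
    x += v * dt                                   # semi-implicit step: use the updated velocity

print("first samples of the simulated signal:", traj[:5])
```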

10.
Article in English | MEDLINE | ID: mdl-26274302

ABSTRACT

Matrix determinants play an important role in data analysis, in particular when Gaussian processes are involved. Due to currently exploding data volumes, the linear operations (matrices) acting on the data are often not accessible directly but are only represented indirectly in the form of a computer routine. Such a routine implements the transformation a data vector undergoes under matrix multiplication. While efficient probing routines to estimate a matrix's diagonal or trace, based solely on such computationally affordable matrix-vector multiplications, are well known and frequently used in signal inference, no comparable stochastic estimator exists for the determinant. We introduce a probing method for the logarithm of the determinant of a linear operator. Our method rests upon an integral representation of the log-determinant and the transformation of the terms involved into stochastic expressions. This stochastic determinant estimation enables large-scale applications in Bayesian inference, in particular evidence calculations, model comparison, and posterior determination.
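A hedged sketch in the spirit of the abstract, not necessarily the paper's exact integral representation: for a symmetric positive definite operator A that is only available through matrix-vector products, log det A = tr log A can be written as the integral over t from 0 to 1 of tr[(A - I)((1 - t)I + tA)^{-1}], and each trace is estimated stochastically with Rademacher probes and a conjugate-gradient solve. The demo operator, probe count, and quadrature order are arbitrary.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(4)
n, n_probes = 200, 30

M = rng.normal(size=(n, n))
A = np.eye(n) + 0.1 * (M @ M.T) / n              # demo SPD matrix; only matvecs are used below
apply_A = lambda v: A @ v

nodes, weights = np.polynomial.legendre.leggauss(8)       # quadrature nodes on [-1, 1]
t_nodes, t_weights = 0.5 * (nodes + 1.0), 0.5 * weights   # mapped to [0, 1]

logdet = 0.0
for t, w in zip(t_nodes, t_weights):
    op = LinearOperator((n, n), matvec=lambda v, t=t: (1 - t) * v + t * apply_A(v))
    acc = 0.0
    for _ in range(n_probes):
        z = rng.choice([-1.0, 1.0], size=n)       # Rademacher probe vector
        y, _ = cg(op, z)                          # y = ((1 - t) I + t A)^{-1} z
        acc += z @ (apply_A(y) - y)               # probe of tr[(A - I) ((1 - t) I + t A)^{-1}]
    logdet += w * acc / n_probes

print("stochastic estimate:", logdet)
print("exact log det      :", np.linalg.slogdet(A)[1])
```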

11.
Article in English | MEDLINE | ID: mdl-25679743

ABSTRACT

The calibration of a measurement device is crucial for every scientific experiment in which a signal has to be inferred from data. We present CURE, the calibration-uncertainty renormalized estimator, which reconstructs a signal and, simultaneously, the instrument's calibration from the same data without knowing the exact calibration, only its covariance structure. The idea of the CURE method, developed in the framework of information field theory, is to start from an assumed calibration and to successively include more and more portions of calibration uncertainty into the signal inference equations, absorbing the resulting corrections into renormalized signal (and calibration) solutions. The signal inference and calibration problem thereby turns into a problem of solving a single system of ordinary differential equations, and it can be identified with common resummation techniques used in field theories. We verify the CURE method by applying it to a simplistic toy example and compare it against existing self-calibration schemes, Wiener filter solutions, and Markov chain Monte Carlo sampling. We conclude that the method matches the accuracy of the best self-calibration methods and serves as a noniterative alternative to them.

12.
Article in English | MEDLINE | ID: mdl-25375617

ABSTRACT

Response calibration is the process of inferring how strongly the measured data depend on the signal one is interested in. It is essential for any quantitative signal estimation on the basis of the data. Here, we investigate self-calibration methods for linear signal measurements and linear dependence of the response on the calibration parameters. The common practice is to augment an external calibration solution, obtained from a known reference signal, with an internal calibration on the unknown measurement signal itself. Contemporary self-calibration schemes try to find a self-consistent solution for signal and calibration by exploiting redundancies in the measurements. This can be understood in terms of maximizing the joint probability of signal and calibration. However, these schemes do not take into account the full uncertainty structure of this joint probability around its maximum. Therefore, better schemes, in the sense of minimal square error, can be designed by accounting for asymmetries in the uncertainties of signal and calibration. We argue that at least a systematic correction of the common self-calibration scheme should be applied in many measurement situations in order to properly treat the uncertainty of the signal on which one calibrates. Otherwise, the calibration solutions suffer from a systematic bias, which consequently distorts the signal reconstruction. Furthermore, we argue that nonparametric, signal-to-noise filtered calibration should provide more accurate reconstructions than the common bin averages, and we provide a new, improved self-calibration scheme. We illustrate our findings with a simplistic numerical example.
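A hedged toy version of the common self-calibration scheme discussed above (alternating maximization over signal and calibration), not of the improved scheme proposed in the paper. The model is d = (1 + gamma) s + n with a single unknown gain gamma, a Gaussian signal with an assumed covariance S, and white noise; all numbers are illustration values.

```python
import numpy as np

rng = np.random.default_rng(5)
n, sigma_n, gamma_true = 128, 0.3, 0.17
idx = np.arange(n)
S = np.exp(-np.abs(idx[:, None] - idx[None, :]) / 10.0)    # assumed signal covariance
N = sigma_n**2 * np.eye(n)

s = rng.multivariate_normal(np.zeros(n), S)                 # true signal
d = (1 + gamma_true) * s + sigma_n * rng.normal(size=n)     # data taken with the unknown gain

gamma_hat = 0.0                                             # start from the external calibration
for _ in range(10):                                         # classical selfcal: alternate two steps
    R = (1 + gamma_hat) * np.eye(n)
    m = S @ R.T @ np.linalg.solve(R @ S @ R.T + N, d)       # Wiener filter given the current gain
    gamma_hat = d @ m / (m @ m) - 1.0                       # maximum-likelihood gain given the map

print("true gain:", gamma_true, "  selfcal estimate:", gamma_hat)
```

The alternation treats the reconstructed map as if it were the true signal; the point made in the abstract is that ignoring the signal uncertainty in this step is what can bias the calibration solution.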

13.
Article in English | MEDLINE | ID: mdl-24329375

ABSTRACT

We present an error-diagnostic validation method for posterior distributions in Bayesian signal inference, an advancement of previous work. It translates deviations from the correct posterior into characteristic deviations of a purpose-built quantity from a uniform distribution. We show that this method is able to reveal and discriminate several kinds of numerical and approximation errors, as well as their impact on the posterior distribution. For this, we present four typical analytical examples of posteriors with incorrect variance, skewness, position of the maximum, or normalization. We further show how the test can be applied to multidimensional signals.
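A hedged sketch of the validation idea, carried out for a conjugate Gaussian model where the exact posterior is known (the paper's test and examples are more general): draw a signal from the prior and data from the likelihood, evaluate the posterior CDF at the true signal, and check that the resulting values are uniformly distributed.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
sigma_s, sigma_n, n_trials = 1.0, 0.5, 2000

u = np.empty(n_trials)
for i in range(n_trials):
    s = rng.normal(0.0, sigma_s)                        # signal drawn from the prior
    d = s + rng.normal(0.0, sigma_n)                    # data drawn from the likelihood
    var_post = 1.0 / (1.0 / sigma_s**2 + 1.0 / sigma_n**2)
    mean_post = var_post * d / sigma_n**2               # conjugate Gaussian posterior
    u[i] = stats.norm.cdf(s, loc=mean_post, scale=np.sqrt(var_post))

# For the correct posterior, u is uniform; a posterior with, e.g., a wrong variance or a
# shifted maximum produces a characteristic deviation from uniformity.
print("KS p-value against uniformity:", stats.kstest(u, "uniform").pvalue)
```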

14.
Article in English | MEDLINE | ID: mdl-23496560

ABSTRACT

The simulation of complex stochastic network dynamics arising, for instance, from models of coupled biomolecular processes remains computationally challenging. Often, the necessity to scan a model's dynamics over a large parameter space renders full-fledged stochastic simulations impractical, motivating approximation schemes. Here we propose an approximation scheme which improves upon the standard linear noise approximation while retaining similar computational complexity. The underlying idea is to minimize, at each time step, the Kullback-Leibler divergence between the true time-evolved probability distribution and a Gaussian approximation (entropic matching). This condition leads to ordinary differential equations for the mean and the covariance matrix of the Gaussian. For cases of weak nonlinearity, the method is more accurate than the linear noise approximation when both are compared to stochastic simulations.


Subjects
Biological Models, Statistical Models, Proteome/metabolism, Stochastic Processes, Animals, Computer Simulation, Entropy, Humans
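A hedged illustration of evolving a Gaussian approximation through ordinary differential equations for its mean and covariance, here via a simple Gaussian moment closure for the one-dimensional Langevin system dx = (x - x^3) dt + sqrt(2 D) dW. This is in the spirit of the scheme described in the entry above, not the paper's entropic-matching equations for biomolecular reaction networks; the drift, noise level, and initial condition are arbitrary.

```python
import numpy as np
from scipy.integrate import solve_ivp

D, T = 0.05, 3.0

def closure_rhs(t, y):
    m, v = y
    dm = m - (m**3 + 3 * m * v)                          # E[x - x^3] under N(m, v)
    dv = 2 * (v - (3 * m**2 * v + 3 * v**2)) + 2 * D     # 2 Cov(x, x - x^3) + 2 D
    return [dm, dv]

sol = solve_ivp(closure_rhs, (0.0, T), [1.5, 0.01])      # Gaussian mean and variance ODEs

rng = np.random.default_rng(7)                           # reference: Euler-Maruyama ensemble
dt, n_paths = 1e-3, 5000
x = np.full(n_paths, 1.5)
for _ in range(int(T / dt)):
    x += (x - x**3) * dt + np.sqrt(2 * D * dt) * rng.normal(size=n_paths)

print("closure  mean/variance:", sol.y[0, -1], sol.y[1, -1])
print("ensemble mean/variance:", x.mean(), x.var())
```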
15.
Article in English | MEDLINE | ID: mdl-23410461

ABSTRACT

Information field dynamics (IFD) is introduced here as a framework for deriving numerical schemes for the simulation of physical and other fields without assuming a particular subgrid structure, as many schemes do. IFD constructs an ensemble of nonparametric subgrid field configurations from the combination of the data in computer memory, which represent constraints on possible field configurations, and prior assumptions on the subgrid field statistics. Each of these field configurations can formally be evolved to a later moment, since any differential operator of the dynamics can act on fields living in continuous space. However, these virtually evolved fields need to be represented again by data in computer memory. The maximum entropy principle of information theory guides the construction of updated data sets via entropic matching, optimally representing these field configurations at the later time. The field dynamics thereby become represented by a finite set of evolution equations for the data that can be solved numerically. The subgrid dynamics is treated within auxiliary analytic considerations. The resulting scheme acts solely on the data space. It should provide a more accurate description of the physical field dynamics than simulation schemes constructed ad hoc, due to the more rigorous accounting of subgrid physics and of the space discretization process. Assimilation of measurement data into an IFD simulation is conceptually straightforward, since measurement and simulation data can simply be merged. The IFD approach is illustrated using the example of a coarsely discretized representation of a thermally excited classical Klein-Gordon field. This should pave the way towards the construction of schemes for more complex systems, such as turbulent hydrodynamics.


Subjects
Algorithms, Electromagnetic Fields, Theoretical Models, Computer-Assisted Numerical Analysis, Computer Simulation
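Not IFD itself, but a plain leapfrog discretization of the illustrative system named in the entry above, the 1D classical Klein-Gordon field phi_tt = phi_xx - m^2 phi, with random ("thermal") initial data on a coarse periodic grid; grid size, mass, and amplitude are arbitrary. An IFD scheme would instead evolve the data in computer memory together with a statistical model of the subgrid field.

```python
import numpy as np

rng = np.random.default_rng(8)
N, L, m = 64, 2 * np.pi, 1.0
dx = L / N
dt = 0.5 * dx                                   # safely below the CFL limit
phi = 0.1 * rng.normal(size=N)                  # thermally excited initial field
pi = 0.1 * rng.normal(size=N)                   # conjugate momentum

def laplacian(f):
    return (np.roll(f, -1) - 2 * f + np.roll(f, 1)) / dx**2

for _ in range(400):                            # leapfrog (kick-drift-kick) time stepping
    pi += 0.5 * dt * (laplacian(phi) - m**2 * phi)
    phi += dt * pi
    pi += 0.5 * dt * (laplacian(phi) - m**2 * phi)

energy = 0.5 * np.sum(pi**2 + ((np.roll(phi, -1) - phi) / dx)**2 + m**2 * phi**2) * dx
print("field energy (approximately conserved):", energy)
```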
16.
Phys Rev E Stat Nonlin Soft Matter Phys ; 85(2 Pt 1): 021134, 2012 Feb.
Article in English | MEDLINE | ID: mdl-22463179

ABSTRACT

Estimating the diagonal entries of a matrix that is not directly accessible, but only available as a linear operator in the form of a computer routine, is a common necessity in many computational applications, especially in image reconstruction and statistical inference. Here, methods of statistical inference are used to improve the accuracy, or reduce the computational cost, of matrix probing methods for estimating matrix diagonals. In particular, the generalized Wiener filter methodology, as developed within information field theory, is shown to significantly improve estimates based on only a few sampling probes in cases in which some form of continuity of the solution can be assumed. The strength, length scale, and precise functional form of the exploited autocorrelation function of the matrix diagonal are determined from the probes themselves. The developed algorithm is successfully applied to mock and real-world problems. These performance tests show that, in situations where a matrix diagonal has to be calculated from only a small number of computationally expensive probes, a speedup by a factor of 2 to 10 is possible with the proposed method.


Subjects
Algorithms, Statistical Models, Stochastic Processes, Computer Simulation
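A hedged sketch of the basic probing estimator that the method above starts from: for Rademacher probe vectors z, the elementwise expectation of z * (A z) is the diagonal of A. The final moving-average step is only a crude stand-in for the Wiener filter refinement developed in the paper; the demo operator and probe count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(9)
n, n_probes = 400, 20

U = rng.normal(size=(n, n)) / np.sqrt(n)
A = U @ U.T + np.diag(2.0 + np.sin(np.linspace(0, 4 * np.pi, n)))   # smoothly varying diagonal
apply_A = lambda v: A @ v                        # the estimator only uses matrix-vector products

est = np.zeros(n)
for _ in range(n_probes):
    z = rng.choice([-1.0, 1.0], size=n)          # Rademacher probe
    est += z * apply_A(z)                        # elementwise product probes the diagonal
est /= n_probes

w = np.ones(15)                                  # crude smoothing of the probing noise
smooth = np.convolve(est, w, mode="same") / np.convolve(np.ones(n), w, mode="same")

true_diag = np.diag(A)
print("rms error, raw probing:", np.sqrt(np.mean((est - true_diag)**2)))
print("rms error, smoothed   :", np.sqrt(np.mean((smooth - true_diag)**2)))
```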
17.
Phys Rev E Stat Nonlin Soft Matter Phys ; 84(4 Pt 1): 041118, 2011 Oct.
Article in English | MEDLINE | ID: mdl-22181098

ABSTRACT

We derive a method to reconstruct Gaussian signals from linear measurements with Gaussian noise. This new algorithm is intended for applications in astrophysics and other sciences. The starting point of our considerations is the principle of minimum Gibbs free energy, which was previously used to derive a signal reconstruction algorithm that handles uncertainties in the signal covariance. We extend this algorithm to the case of simultaneously uncertain noise and signal covariances, using the same principles in the derivation. The resulting equations are general enough to be applied in many different contexts. We demonstrate the performance of the algorithm by applying it to specific example situations and compare it to algorithms that do not allow for uncertainties in the noise covariance. The results show that the method we suggest performs very well under a variety of circumstances and is indeed qualitatively superior to the other methods in cases where uncertainty in the noise covariance is present.
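A hedged toy in the spirit of the approach above, not the paper's equations: it treats only a single unknown noise variance (rather than uncertain noise and signal covariances), alternating a Wiener filter step with an update of the noise level from the residuals plus the posterior variance, in the manner of an expectation-maximization iteration. The signal covariance and all numbers are illustration values.

```python
import numpy as np

rng = np.random.default_rng(10)
n, sigma_true = 128, 0.4
idx = np.arange(n)
S = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 8.0)**2)   # assumed signal covariance

chol = np.linalg.cholesky(S + 1e-8 * np.eye(n))
s = chol @ rng.normal(size=n)                                  # true signal
d = s + sigma_true * rng.normal(size=n)                        # data with unknown noise level

sigma2 = 1.0                                                   # initial guess for the noise variance
for _ in range(30):
    K = np.linalg.solve(S + sigma2 * np.eye(n), np.eye(n))     # (S + N)^{-1}
    m = S @ K @ d                                              # Wiener filter mean
    Dpost = S - S @ K @ S                                      # posterior covariance of the signal
    sigma2 = (np.sum((d - m)**2) + np.trace(Dpost)) / n        # noise-variance update

print("true noise std:", sigma_true, "  estimated:", np.sqrt(sigma2))
```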

18.
Phys Rev E Stat Nonlin Soft Matter Phys ; 82(5 Pt 1): 051112, 2010 Nov.
Article in English | MEDLINE | ID: mdl-21230442

ABSTRACT

Non-linear and non-Gaussian signal inference problems are difficult to tackle. Renormalization techniques permit us to construct good estimators for the posterior signal mean within information field theory (IFT), but the approximations and assumptions made are not very obvious. Here we introduce the simple concept of minimal Gibbs free energy to IFT and show that previous renormalization results emerge naturally. They can be understood as the Gaussian approximation to the full posterior probability that has maximal cross information with it. We derive optimized estimators for three applications to illustrate the usage of the framework: (i) reconstruction of a log-normal signal from Poissonian data with background counts and a point spread function, as needed for gamma-ray astronomy and for cosmography using photometric galaxy redshifts; (ii) inference of a Gaussian signal with unknown spectrum; and (iii) inference of a Poissonian log-normal signal with unknown spectrum, the combination of (i) and (ii). Finally, we explain how Gaussian knowledge states constructed by the minimal Gibbs free energy principle at different temperatures can be combined into a more accurate surrogate of the non-Gaussian posterior.
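For orientation, the Gibbs free energy of an approximating distribution Q, in the notation commonly used in information field theory (stated here as background, with the precise definitions being those of the paper), reads

```latex
G[Q] \;=\; \big\langle H(d,s) \big\rangle_{Q} \;-\; T\, \mathcal{S}[Q],
\qquad H(d,s) \equiv -\ln P(d,s),
```

where S[Q] is the entropy of Q. At temperature T = 1, minimizing G over a Gaussian family Q is equivalent to minimizing the Kullback-Leibler divergence between Q and the exact posterior P(s|d).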

19.
Science ; 314(5800): 772-3, 2006 Nov 03.
Article in English | MEDLINE | ID: mdl-17082445
...