Results 1 - 8 of 8
1.
Int J Numer Method Biomed Eng; 39(4): e3576, 2023 Apr.
Article in English | MEDLINE | ID: mdl-35099851

ABSTRACT

Computational hemodynamics has received increasing attention recently. Patient-specific simulations require questionable model assumptions, for example, for geometry, boundary conditions, and material parameters. Consequently, the credibility of these simulations is much doubted, and rightly so. Yet, the matter may be addressed by rigorous uncertainty quantification. In this contribution, we investigate the impact of blood rheological models on wall shear stress uncertainties in aortic hemodynamics obtained in numerical simulations. Based on shear-rheometric experiments, we compare the non-Newtonian Carreau model to a simple Newtonian model and a Reynolds-number-equivalent Newtonian model. Bayesian probability theory treats uncertainties consistently and allows us to include elusive assumptions such as the comparability of flow regimes. We overcome the prohibitively high computational cost of the simulations with a surrogate model and account for the uncertainties of the surrogate model itself as well. We have two main findings: (1) The Newtonian models mostly underestimate the uncertainties compared to the non-Newtonian model. (2) The wall shear stresses of specific persons cannot be distinguished due to largely overlapping uncertainty bands, implying that a more precise determination of person-specific blood rheological properties is necessary for person-specific simulations. While we refrain from a general recommendation for one rheological model, we have quantified the error of the uncertainty quantification associated with these modeling choices.
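For orientation, the Carreau model referenced in this abstract has a standard closed form. The following is a minimal Python sketch with textbook blood parameter values (illustrative assumptions, not the values fitted from the paper's rheometric experiments):

```python
import numpy as np

def carreau_viscosity(shear_rate, eta0=0.056, eta_inf=0.0035, lam=3.313, n=0.3568):
    """Carreau model: eta = eta_inf + (eta0 - eta_inf) * (1 + (lam*g)^2)^((n-1)/2).

    Defaults are commonly cited textbook values for blood (Pa*s, s),
    used here only for illustration.
    """
    return eta_inf + (eta0 - eta_inf) * (1.0 + (lam * shear_rate) ** 2) ** ((n - 1.0) / 2.0)

for g in np.logspace(-2, 3, 6):               # shear rates 0.01 ... 1000 1/s
    print(f"gamma_dot = {g:8.2f} 1/s  ->  eta = {carreau_viscosity(g)*1e3:6.2f} mPa*s")
```

At high shear rates the model approaches the Newtonian limit eta_inf, while at low shear rates the effective viscosity is markedly higher; this is the regime in which Newtonian approximations deviate most.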


Subjects
Aorta, Hemodynamics, Humans, Bayes Theorem, Uncertainty, Rheology, Stress, Mechanical, Models, Cardiovascular, Blood Flow Velocity, Computer Simulation
2.
Entropy (Basel); 23(12), 2021 Dec 10.
Article in English | MEDLINE | ID: mdl-34945967

ABSTRACT

An aortic dissection, a particular aortic pathology, occurs when blood pushes through a tear between the layers of the aorta and forms a so-called false lumen. Aortic dissection has a low incidence compared to other diseases, but a relatively high mortality that increases with disease progression. Early identification and treatment increase patients' chances of survival. State-of-the-art medical imaging techniques have several disadvantages; therefore, we propose the detection of aortic dissections through their signatures in impedance cardiography signals. These signatures arise from pathological blood flow characteristics and a blood conductivity that strongly depends on the flow field; i.e., the proposed method is, in principle, applicable to any aortic pathology that changes the blood flow characteristics. For the signal classification, we trained a convolutional neural network (CNN) with artificial impedance cardiography data based on a simulation model for a healthy virtual patient and a virtual patient with an aortic dissection. The network architecture was tailored to multi-sensor, multi-channel time-series classification with a categorical cross-entropy loss function as the training objective. The trained network typically yielded a specificity of (93.9 ± 0.1)% and a sensitivity of (97.5 ± 0.1)%. A study of the accuracy as a function of the size of the aortic dissection yielded better results for a small false lumen with larger noise, which raises the question of the feasibility of detecting aortic dissections at an early stage.
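The abstract fixes only the task (multi-channel time-series classification) and the loss (categorical cross-entropy); the actual architecture is not given. A minimal PyTorch sketch of such a classifier, with channel counts, layer sizes, and sequence length chosen purely for illustration:

```python
import torch
import torch.nn as nn

class ICGClassifier(nn.Module):
    """1D CNN for multi-sensor, multi-channel time-series classification.

    n_channels and the layer sizes are placeholders; the paper's actual
    architecture is not specified in the abstract.
    """
    def __init__(self, n_channels=8, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),          # global pooling -> length-independent
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                     # x: (batch, channels, time)
        return self.classifier(self.features(x).squeeze(-1))

model = ICGClassifier()
loss_fn = nn.CrossEntropyLoss()               # categorical cross-entropy
x = torch.randn(4, 8, 1024)                   # 4 synthetic multi-channel signals
loss = loss_fn(model(x), torch.tensor([0, 1, 0, 1]))
loss.backward()
```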

3.
Phys Rev E; 99(4-1): 043303, 2019 Apr.
Article in English | MEDLINE | ID: mdl-31108647

ABSTRACT

We present further developments of the auxiliary master equation approach (AMEA), a numerical method to simulate many-body quantum systems in as well as out of equilibrium, and apply it to the interacting resonant level model to benchmark the new developments. In particular, our results are obtained by employing the stochastic wave function method to solve the auxiliary open quantum system arising within AMEA. This development allows us to reach extremely low wall-clock times for the calculation of correlation functions compared to previous implementations of AMEA. An additional significant improvement is obtained by extrapolating a series of results, obtained for an increasing number of auxiliary bath sites N_{B}, formally to the limit N_{B}→∞. Results for the current-voltage characteristics and for equilibrium correlation functions are compared with those obtained by exact and matrix-product-state-based approaches. Furthermore, we complement this benchmark by presenting spectral functions for higher temperatures, where we find different behaviors around zero frequency depending on the hybridization strength.
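The N_{B} extrapolation can be sketched generically: compute an observable for increasing bath sizes and extrapolate a fit in 1/N_{B} to zero. The data values and the linear-in-1/N_{B} ansatz below are assumptions for illustration; the paper's actual extrapolation scheme may differ.

```python
import numpy as np

# observable A(N_B) for increasing numbers of auxiliary bath sites
# (mock values; in practice these come from the AMEA solver)
N_B = np.array([4, 6, 8, 10])
A   = np.array([0.412, 0.436, 0.448, 0.455])

# fit A(N_B) = A_inf + c / N_B and read off the N_B -> infinity limit
coeffs = np.polyfit(1.0 / N_B, A, deg=1)      # highest power first
A_inf = coeffs[-1]                            # intercept = value at 1/N_B = 0
print(f"extrapolated A(N_B -> inf) = {A_inf:.4f}")
```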

4.
Entropy (Basel); 21(1), 2019 Jan 19.
Article in English | MEDLINE | ID: mdl-33266809

ABSTRACT

This paper employs Bayesian probability theory to analyze data generated in femtosecond pump-probe photoelectron-photoion coincidence (PEPICO) experiments, which allow the investigation of ultrafast dynamical processes in photoexcited molecules. Bayesian probability theory is consistently applied to data analysis problems occurring in these types of experiments, such as background subtraction and false coincidences. We previously demonstrated that the Bayesian formalism has many advantages, among which are compensation of false coincidences, no overestimation of pump-only contributions, significantly increased signal-to-noise ratio, and applicability to any experimental situation and noise statistics. Most importantly, by accounting for false coincidences, our approach allows experiments to be run at higher ionization rates, resulting in an appreciable reduction of data acquisition times. Extending our previous work, we now include fluctuating laser intensities, whose straightforward implementation highlights yet another advantage of the Bayesian formalism. Our method is thoroughly scrutinized with challenging mock data, where we find a minor impact of laser fluctuations on false coincidences, yet a noteworthy influence on background subtraction. We apply our algorithm to experimental data and discuss the impact of laser fluctuations on the data analysis.
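How false coincidences arise can be made concrete with a toy Monte Carlo: if the number of ionization events per laser shot is Poisson-distributed, any shot with two or more events can pair an electron with an ion from a different molecule. The sketch below only illustrates this trade-off and is not the paper's Bayesian model; the mean rate mu is an assumed value.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 0.5                        # mean ionization events per laser shot (assumed)
n_shots = 1_000_000

events = rng.poisson(mu, n_shots)
# an electron-ion pair is unambiguous only for single-event shots;
# shots with k >= 2 events can produce false electron-ion pairings
single = np.sum(events == 1)
multi = np.sum(events >= 2)
print(f"single-event shots:                    {single / n_shots:.3f}")
print(f"shots at risk of false coincidences:   {multi / n_shots:.3f}")
```

Raising mu increases throughput but also the fraction of ambiguous multi-event shots, which is precisely the trade-off that accounting for false coincidences relaxes.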

5.
Entropy (Basel); 22(1), 2019 Dec 31.
Article in English | MEDLINE | ID: mdl-33285833

ABSTRACT

In 2000, Kennedy and O'Hagan proposed a model for uncertainty quantification that combines data of several levels of sophistication, fidelity, quality, or accuracy, e.g., a coarse and a fine mesh in finite-element simulations. They assumed each level to be describable by a Gaussian process, and used low-fidelity simulations to improve inference on costly high-fidelity simulations. Departing from there, we move away from the common non-Bayesian practice of optimization and marginalize the parameters instead. Thus, we avoid the awkward logical dilemma of having to choose parameters and of neglecting that choice's uncertainty. We propagate the parameter uncertainties by averaging the predictions and the prediction uncertainties over all possible parameters. This is done analytically for all but the nonlinear or inseparable kernel function parameters; what is left is a low-dimensional, feasible numerical integral depending on the choice of kernels, thus allowing for a fully Bayesian treatment. By quantifying the uncertainties of the parameters themselves, we show that "learning" or optimizing those parameters has little meaning when data are scarce, and thus justify all our mathematical efforts. The recent hype about machine learning has long spilled over into computational engineering but fails to acknowledge that machine learning is a big-data problem, whereas in computational engineering we usually face a small-data problem. We devise the fully Bayesian uncertainty quantification method in a notation following the tradition of E. T. Jaynes and find that generalization to an arbitrary number of levels of fidelity and parallelization becomes rather easy. We scrutinize the method with mock data and demonstrate its advantages in its natural application, where high-fidelity data are scarce but low-fidelity data are not. We then apply the method to quantify the uncertainties in finite-element simulations of impedance cardiography of aortic dissection. Aortic dissection is a cardiovascular disease that frequently requires immediate surgical treatment and, thus, a fast diagnosis beforehand. While traditional medical imaging techniques such as computed tomography, magnetic resonance tomography, or echocardiography certainly do the job, impedance cardiography too is a clinical standard tool and promises to allow earlier diagnoses as well as to detect patients who would otherwise go under the radar for too long.
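The marginalization described above leaves, for the nonlinear kernel parameters, a low-dimensional numerical integral. A minimal single-fidelity NumPy sketch (squared-exponential kernel, grid integration over the length scale under a flat prior; the data are illustrative assumptions, and the multi-level structure of the paper is omitted):

```python
import numpy as np

def sq_exp(x1, x2, ell):
    """Squared-exponential kernel matrix."""
    return np.exp(-0.5 * (x1[:, None] - x2[None, :]) ** 2 / ell ** 2)

def gp_predict(xs, ys, xq, ell, noise=1e-4):
    """GP posterior mean and log marginal likelihood for one length scale."""
    K = sq_exp(xs, xs, ell) + noise * np.eye(len(xs))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, ys))
    mean = sq_exp(xq, xs, ell) @ alpha
    logml = (-0.5 * ys @ alpha - np.log(np.diag(L)).sum()
             - 0.5 * len(xs) * np.log(2 * np.pi))
    return mean, logml

xs = np.array([0.0, 0.4, 1.1, 1.9, 2.5])       # mock inputs ("little data")
ys = np.sin(xs)                                 # mock observations
xq = np.linspace(0.0, 2.5, 6)                   # query points

ells = np.linspace(0.2, 3.0, 60)                # grid over the length scale
means, logmls = zip(*(gp_predict(xs, ys, xq, l) for l in ells))
logmls = np.array(logmls)
weights = np.exp(logmls - logmls.max())
weights /= weights.sum()                        # posterior weights, flat prior
posterior_mean = weights @ np.array(means)      # marginalized prediction
print(posterior_mean)
```

With so few data points, the weights typically stay spread over a broad range of length scales, which is the point of the paper: a single "optimal" hyperparameter would hide that spread.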

6.
Phys Rev Lett; 110(8): 086403, 2013 Feb 22.
Article in English | MEDLINE | ID: mdl-23473180

ABSTRACT

We introduce a versatile method to compute electronic steady-state properties of strongly correlated extended quantum systems out of equilibrium. The approach is based on dynamical mean-field theory (DMFT), in which the original system is mapped onto an auxiliary nonequilibrium impurity problem embedded in a Markovian environment. The steady-state Green's function of the auxiliary system is solved by full diagonalization of the corresponding Lindblad equation. The approach can be regarded as the nontrivial extension of exact-diagonalization-based DMFT to the nonequilibrium case. As a first application, we consider an interacting Hubbard layer attached to two metallic leads and present results for the steady-state current and the nonequilibrium density of states.
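The central computational step, obtaining the steady state of a Lindblad equation by full diagonalization, can be sketched on a toy system: vectorize the Lindbladian superoperator and pick the eigenvector with eigenvalue zero. The driven, dissipative two-level system below is a generic illustration, not the paper's auxiliary impurity problem:

```python
import numpy as np

# toy system: resonantly driven two-level atom with decay
# basis index 0 = ground, 1 = excited
omega, gamma = 1.0, 0.4                       # Rabi drive, decay rate (assumed)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sm = np.array([[0, 1], [0, 0]], dtype=complex)   # lowering operator |g><e|
H = 0.5 * omega * sx
L = np.sqrt(gamma) * sm
I = np.eye(2, dtype=complex)

def lindbladian(H, L):
    """Vectorized Lindblad superoperator (column-stacking convention)."""
    LdL = L.conj().T @ L
    return (-1j * (np.kron(I, H) - np.kron(H.T, I))
            + np.kron(L.conj(), L)
            - 0.5 * np.kron(I, LdL)
            - 0.5 * np.kron(LdL.T, I))

evals, evecs = np.linalg.eig(lindbladian(H, L))
k = np.argmin(np.abs(evals))                  # steady state: eigenvalue ~ 0
rho = evecs[:, k].reshape(2, 2, order='F')    # un-vectorize the density matrix
rho /= np.trace(rho)                          # normalize
print("excited-state population:", rho[1, 1].real)
```

Full diagonalization scales exponentially in the number of auxiliary sites, which is why this route is limited to small auxiliary systems.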

7.
J Phys Condens Matter; 24(29): 295601, 2012 Jul 25.
Article in English | MEDLINE | ID: mdl-22738846

ABSTRACT

A strong-coupling expansion based on the Kato-Bloch perturbation theory, recently proposed by Eckardt et al (2009 Phys. Rev. B 79 195131) and Teichmann et al (2009 Phys. Rev. B 79 224515), is implemented in order to study various aspects of the Bose-Hubbard and Jaynes-Cummings lattice models. The approach, which allows us to generate numerically all diagrams up to a desired order in the interaction strength, is generalized to disordered systems and to the Jaynes-Cummings lattice model. Results for the Bose-Hubbard and Jaynes-Cummings lattice models are presented and compared with results from the variational cluster approach and the density-matrix renormalization group. Our focus is on the Mott-insulator-to-superfluid transition.
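For context, the Mott-insulator-to-superfluid phase boundary of the Bose-Hubbard model already appears at the crudest level of strong-coupling perturbation theory: second order in the hopping within a single-site mean field. The sketch below implements that textbook estimate, which is much cruder than the high-order diagrammatic expansion of the paper; z denotes the lattice coordination number and energies are in units of U:

```python
import numpy as np

def zt_critical(mu, n):
    """Mean-field Mott-lobe boundary of the Bose-Hubbard model, U = 1.

    Textbook second-order perturbation theory in the hopping gives
        1/(z*t_c) = (n + 1)/(n - mu) + n/(mu - (n - 1)),  n - 1 < mu < n,
    for the lobe with integer filling n.
    """
    return 1.0 / ((n + 1) / (n - mu) + n / (mu - (n - 1)))

for n in (1, 2):                          # first two Mott lobes
    mu = np.linspace(n - 1 + 1e-3, n - 1e-3, 5)
    print(f"lobe n={n}: z*t_c =", np.round(zt_critical(mu, n), 4))
```

The boundary vanishes at the lobe edges mu = n - 1 and mu = n and is maximal near the lobe tip, reproducing the familiar lobe shape that the numerical expansions refine.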

8.
Comput Phys Commun; 182(10): 2168-2173, 2011 Oct.
Article in English | MEDLINE | ID: mdl-21969734

ABSTRACT

We present a new numerical technique to solve large-scale eigenvalue problems. It is based on the projection technique used in strongly correlated quantum many-body systems, where an effective approximate model of smaller complexity is first constructed by projecting out high-energy degrees of freedom, and the resulting model is then solved by some standard eigenvalue solver. Here we introduce a generalization of this idea in which both steps are performed numerically and which, in contrast to the standard projection technique, converges in principle to the exact eigenvalues. This approach is applicable not just to eigenvalue problems encountered in many-body systems but also to other areas of research that result in large-scale eigenvalue problems for matrices which, roughly speaking, have a pronounced dominant diagonal part. We present detailed studies of the approach guided by two many-body models.
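The underlying projection idea, downfolding onto a low-energy block with an energy-dependent effective matrix and iterating to self-consistency, can be sketched on a random diagonally dominant matrix. This standard Löwdin-type downfolding illustrates the principle only, not the paper's numerical generalization:

```python
import numpy as np

rng = np.random.default_rng(1)
N, p = 200, 10                                 # full size, kept low-energy block
W = rng.normal(0.0, 1.0, (N, N))
# dominant sorted diagonal plus weak symmetric off-diagonal coupling
A = np.diag(np.sort(rng.uniform(0.0, 100.0, N))) + 0.25 * (W + W.T)

P, Q = slice(0, p), slice(p, N)
E = A[0, 0]                                    # initial guess, lowest eigenvalue
for _ in range(50):
    # energy-dependent effective matrix on the P block (Löwdin downfolding):
    #   A_eff(E) = A_PP + A_PQ (E - A_QQ)^(-1) A_QP
    A_eff = A[P, P] + A[P, Q] @ np.linalg.solve(
        E * np.eye(N - p) - A[Q, Q], A[Q, P])
    E_new = np.linalg.eigvalsh(A_eff)[0]
    if abs(E_new - E) < 1e-12:                 # self-consistency reached
        break
    E = E_new

print("downfolded lowest eigenvalue:", E)
print("exact lowest eigenvalue:     ", np.linalg.eigvalsh(A)[0])
```

At self-consistency the downfolded eigenvalue coincides with an eigenvalue of the full matrix, which is the sense in which such a numerically iterated projection can converge to the exact result.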
