Results 1 - 3 of 3

1.
Chaos; 34(4), 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38572942

ABSTRACT

Data-driven approximations of the Koopman operator are promising for predicting the time evolution of systems characterized by complex dynamics. Among these methods, the approach known as extended dynamic mode decomposition with dictionary learning (EDMD-DL) has garnered significant attention. Here, we present a modification of EDMD-DL that concurrently determines both the dictionary of observables and the corresponding approximation of the Koopman operator. This innovation leverages automatic differentiation to facilitate gradient descent computations through the pseudoinverse. We also evaluate the performance of several alternative methodologies. We assess a "pure" Koopman approach, which involves the direct time integration of a linear, high-dimensional system governing the dynamics within the space of observables. Additionally, we explore a modified approach in which the system alternates between the spaces of states and observables at each time step; this approach no longer satisfies the linearity of the true Koopman operator representation. For further comparison, we also apply a state-space approach (neural ordinary differential equations). We consider two- and three-dimensional systems of ordinary differential equations featuring steady, oscillatory, and chaotic attractors, as well as partial differential equations exhibiting increasingly complex behavior. Our framework significantly outperforms EDMD-DL. Furthermore, the state-space approach offers superior performance compared to the "pure" Koopman approach, in which the entire time evolution occurs in the space of observables. When the Koopman approach alternates between states and observables at each time step, however, its predictions become comparable to those of the state-space approach.
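The central trick described here, differentiating through the least-squares Koopman fit so that dictionary and operator are learned together, can be illustrated with a minimal JAX sketch. Everything below (the MLP dictionary, the layer sizes, names such as psi and init_params) is an illustrative assumption, not the authors' implementation:

```python
import jax
import jax.numpy as jnp

def init_params(key, d_in=3, d_hidden=64, d_dict=32):
    k1, k2 = jax.random.split(key)
    g = lambda k, m, n: jax.random.normal(k, (m, n)) / jnp.sqrt(m)
    return {"W1": g(k1, d_in, d_hidden), "b1": jnp.zeros(d_hidden),
            "W2": g(k2, d_hidden, d_dict), "b2": jnp.zeros(d_dict)}

def psi(params, x):
    # Learned dictionary of observables: a small MLP applied to states.
    # (In practice the state itself is typically appended to the
    # dictionary to rule out the trivial all-zero solution.)
    h = jnp.tanh(x @ params["W1"] + params["b1"])
    return h @ params["W2"] + params["b2"]

def loss(params, X, Y):
    # X, Y: snapshot pairs (x_n, x_{n+1}), shape (n_samples, d_in).
    PsiX, PsiY = psi(params, X), psi(params, Y)
    # Least-squares Koopman approximation. jnp.linalg.pinv is
    # differentiable, so gradients flow through the pseudoinverse and
    # the dictionary and operator are determined concurrently.
    K = jnp.linalg.pinv(PsiX) @ PsiY
    return jnp.mean((PsiX @ K - PsiY) ** 2)

grad_fn = jax.jit(jax.grad(loss))  # gradient descent on the dictionary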

2.
Chaos; 32(7): 073110, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35907719

ABSTRACT

Dissipative partial differential equations that exhibit chaotic dynamics tend to evolve to attractors that exist on finite-dimensional manifolds. We present a data-driven reduced-order modeling method that capitalizes on this fact by finding a coordinate representation for this manifold and then a system of ordinary differential equations (ODEs) describing the dynamics in this coordinate system. The manifold coordinates are discovered using an undercomplete autoencoder, a neural network (NN) that reduces and then expands dimension. The ODE in these coordinates is then determined by an NN using the neural ODE framework. Both steps require only snapshots of data to learn a model, and the data can be widely and/or unevenly spaced; time-derivative information is not needed. We apply this framework to the Kuramoto-Sivashinsky equation for domain sizes that exhibit chaotic dynamics, with estimated manifold dimensions ranging from 8 to 28. For this system, we find that dimension reduction improves performance relative to predictions in the ambient space, where artifacts arise. With the low-dimensional model, we then vary the training data spacing and find excellent short- and long-time statistical recreation of the true dynamics for widely spaced data (spacing of ~0.7 Lyapunov times). We end by comparing performance at various degrees of dimension reduction and find a "sweet spot" in terms of performance versus dimension.
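A minimal sketch of this two-step pipeline, an undercomplete autoencoder plus a neural ODE trained only on snapshot pairs, might look as follows in JAX. The architectures, names, and the fixed-step RK4 integrator are assumptions, not the paper's code, and the autoencoder's own reconstruction loss is omitted for brevity:

```python
import jax
import jax.numpy as jnp

def init(key, d=64, dh=8, width=128):
    ks = jax.random.split(key, 6)
    g = lambda k, m, n: jax.random.normal(k, (m, n)) / jnp.sqrt(m)
    return {"We": g(ks[0], d, width), "Ue": g(ks[1], width, dh),
            "Ud": g(ks[2], dh, width), "Wd": g(ks[3], width, d),
            "A":  g(ks[4], dh, width), "B":  g(ks[5], width, dh)}

def encode(p, x):   # undercomplete autoencoder: reduce dimension...
    return jnp.tanh(x @ p["We"]) @ p["Ue"]

def decode(p, h):   # ...then expand back to the ambient space
    return jnp.tanh(h @ p["Ud"]) @ p["Wd"]

def f(p, h):        # learned vector field dh/dt = f(h) on the manifold
    return jnp.tanh(h @ p["A"]) @ p["B"]

def rk4_step(p, h, dt):
    # A fixed-step RK4 integrator stands in for the neural-ODE solver.
    k1 = f(p, h)
    k2 = f(p, h + 0.5 * dt * k1)
    k3 = f(p, h + 0.5 * dt * k2)
    k4 = f(p, h + dt * k3)
    return h + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def ode_loss(p, x0, x1, dt):
    # Only snapshot pairs (x(t), x(t+dt)) are needed: no time
    # derivatives, and dt may differ from pair to pair.
    h1 = rk4_step(p, encode(p, x0), dt)
    return jnp.mean((decode(p, h1) - x1) ** 2)

grad_fn = jax.jit(jax.grad(ode_loss))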


Subjects
Neural Networks, Computer; Nonlinear Dynamics; Artifacts; Data Collection; Time Factors
3.
Phys Rev E; 101(6-1): 062209, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32688613

ABSTRACT

A data-driven framework is developed to represent chaotic dynamics on an inertial manifold (IM) and applied to solutions of the Kuramoto-Sivashinsky equation. A hybrid method combining linear and nonlinear (neural-network) dimension reduction transforms between coordinates in the full state space and coordinates on the IM. Additional neural networks predict the time evolution on the IM. The formalism accounts for translation invariance and energy conservation, and it substantially outperforms linear dimension reduction, closely reproducing the key dynamical and statistical features of the attractor.
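The hybrid reduction can be pictured as a linear PCA projection plus learned nonlinear corrections in each direction. The JAX sketch below is an assumption of ours, not the paper's method in detail: U stands for leading PCA modes of the snapshot data, the correction networks are illustrative, and translation invariance is assumed to be handled by phase-aligning snapshots beforehand:

```python
import jax
import jax.numpy as jnp

def init(key, d=64, dh=8, width=128):
    ks = jax.random.split(key, 4)
    g = lambda k, m, n: jax.random.normal(k, (m, n)) / jnp.sqrt(m)
    return {"E1": g(ks[0], d, width), "E2": g(ks[1], width, dh),
            "D1": g(ks[2], dh, width), "D2": g(ks[3], width, d)}

def mlp(W1, W2, x):
    return jnp.tanh(x @ W1) @ W2

def to_manifold(p, x, U):
    # Linear part (projection onto PCA modes U) plus an NN correction.
    return x @ U + mlp(p["E1"], p["E2"], x)

def from_manifold(p, h, U):
    # Inverse map: linear reconstruction plus an NN correction.
    return h @ U.T + mlp(p["D1"], p["D2"], h)

def recon_loss(p, X, U):
    # X: snapshots, assumed already phase-aligned so that the
    # translation invariance of the PDE is factored out beforehand.
    H = to_manifold(p, X, U)
    return jnp.mean((from_manifold(p, H, U) - X) ** 2)
```

A separate network evolving h forward in time on the manifold (as in the neural-ODE sketch above) would complete the pipeline.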
