Results 1 - 4 of 4
1.
IEEE Trans Neural Netw Learn Syst; 34(11): 8271-8283, 2023 Nov.
Article in English | MEDLINE | ID: mdl-35180089

ABSTRACT

We propose Poisson neural networks (PNNs) to learn Poisson systems and trajectories of autonomous systems from data. Based on the Darboux-Lie theorem, the phase flow of a Poisson system can be written as the composition of: 1) a coordinate transformation; 2) an extended symplectic map; and 3) the inverse of the transformation. In this work, we extend this result to the unknotted trajectories of autonomous systems. We employ structured neural networks with physical priors to approximate the three aforementioned maps. We demonstrate through several simulations that PNNs handle several challenging tasks with high accuracy, including the motion of a particle in an electromagnetic potential, the nonlinear Schrödinger equation, and pixel observations of the two-body problem.
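The factorization described in the abstract lends itself to a compact implementation. Below is a minimal PyTorch-style sketch of the composition T^{-1} ∘ S ∘ T, assuming an additive coupling layer as the invertible coordinate transformation; the class names, layer sizes, and the placeholder symplectic map are illustrative assumptions, not the authors' code.

    import torch
    import torch.nn as nn

    class InvertibleCoordinateMap(nn.Module):
        # Assumed stand-in for the learned coordinate transformation T.
        # A single additive coupling step keeps it exactly invertible.
        def __init__(self, dim):
            super().__init__()
            half = dim // 2
            self.shift = nn.Sequential(nn.Linear(half, 32), nn.Tanh(), nn.Linear(32, half))

        def forward(self, x):
            x1, x2 = x.chunk(2, dim=-1)
            return torch.cat([x1, x2 + self.shift(x1)], dim=-1)

        def inverse(self, y):
            y1, y2 = y.chunk(2, dim=-1)
            return torch.cat([y1, y2 - self.shift(y1)], dim=-1)

    class PoissonNet(nn.Module):
        # Phase flow approximated as T^{-1} o S o T (Darboux-Lie factorization).
        def __init__(self, dim, symplectic_map):
            super().__init__()
            self.T = InvertibleCoordinateMap(dim)
            self.S = symplectic_map  # e.g. a SympNet acting in the transformed coordinates

        def forward(self, x):
            return self.T.inverse(self.S(self.T(x)))

    # Usage with an identity placeholder for S (a real model would train a symplectic map here):
    net = PoissonNet(dim=4, symplectic_map=nn.Identity())
    x0 = torch.randn(8, 4)
    x1 = net(x0)  # one learned flow step, shape (8, 4)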

2.
Neural Netw; 147: 72-80, 2022 Mar.
Article in English | MEDLINE | ID: mdl-34995951

ABSTRACT

Measure-preserving neural networks are well-developed invertible models; however, their approximation capabilities remain unexplored. This paper rigorously analyzes the approximation capabilities of existing measure-preserving neural networks, including NICE and RevNets. It is shown that for a compact U ⊂ R^D with D ≥ 2, measure-preserving neural networks are able to approximate, in the L^p-norm, any measure-preserving map ψ: U → R^D that is bounded and injective. In particular, any continuously differentiable injective map with Jacobian determinant ±1 is measure-preserving and can therefore be approximated.
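For concreteness, the additive coupling layer used in NICE (one of the architectures analyzed above) has a unit-triangular Jacobian, so its determinant is exactly 1 and the map preserves Lebesgue measure. The sketch below is a generic illustration with assumed layer sizes, not code from the paper, and checks the determinant numerically.

    import torch
    import torch.nn as nn
    from torch.autograd.functional import jacobian

    class AdditiveCoupling(nn.Module):
        # NICE-style additive coupling: y1 = x1, y2 = x2 + m(x1).
        def __init__(self, dim, hidden=32):
            super().__init__()
            half = dim // 2
            self.m = nn.Sequential(nn.Linear(half, hidden), nn.Tanh(), nn.Linear(hidden, half))

        def forward(self, x):
            x1, x2 = x.chunk(2, dim=-1)
            return torch.cat([x1, x2 + self.m(x1)], dim=-1)

        def inverse(self, y):
            y1, y2 = y.chunk(2, dim=-1)
            return torch.cat([y1, y2 - self.m(y1)], dim=-1)

    # The Jacobian is unit lower-triangular, so det J = 1 (measure preserving):
    layer = AdditiveCoupling(dim=4)
    x = torch.randn(4)
    J = jacobian(layer, x)
    print(torch.linalg.det(J))  # ~1.0; stacking such layers keeps the determinant at 1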


Subjects
Neural Networks, Computer
3.
Neural Netw; 132: 166-179, 2020 Dec.
Article in English | MEDLINE | ID: mdl-32890788

ABSTRACT

We propose new symplectic networks (SympNets) for identifying Hamiltonian systems from data, based on a composition of linear, activation, and gradient modules. In particular, we define two classes of SympNets: the LA-SympNets, composed of linear and activation modules, and the G-SympNets, composed of gradient modules. Correspondingly, we prove two new universal approximation theorems demonstrating that SympNets with appropriate activation functions can approximate arbitrary symplectic maps. We then perform several experiments, including the pendulum, double pendulum, and three-body problems, to investigate the expressivity and generalization ability of SympNets. The simulation results show that even very small SympNets can generalize well and are able to handle both separable and non-separable Hamiltonian systems with data points resulting from short or long time steps. In all the test cases, SympNets outperform the baseline models and are much faster in training and prediction. We also develop an extended version of SympNets to learn the dynamics from irregularly sampled data. This extended version of SympNets can be thought of as a universal model representing the solution of an arbitrary Hamiltonian system.
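As an illustration of the module composition described above, here is a minimal sketch of one gradient module of a G-SympNet for a state split into (p, q); the parameter shapes, width, and activation are assumptions for illustration rather than the paper's exact construction.

    import torch
    import torch.nn as nn

    class GradientModule(nn.Module):
        # One "up" gradient module: p_new = p + K^T (a * sigma(K q + b)), q_new = q.
        # The update is the gradient (in q) of a scalar potential, so the map is symplectic.
        def __init__(self, dim, width):
            super().__init__()
            self.K = nn.Parameter(0.1 * torch.randn(width, dim))
            self.a = nn.Parameter(0.1 * torch.randn(width))
            self.b = nn.Parameter(torch.zeros(width))

        def forward(self, p, q):
            update = (self.a * torch.sigmoid(q @ self.K.T + self.b)) @ self.K
            return p + update, q

    # Alternating "up"/"low" gradient modules (and linear modules) are composed into a SympNet;
    # each factor is symplectic, hence so is the composition.
    layer = GradientModule(dim=2, width=16)
    p, q = torch.randn(64, 2), torch.randn(64, 2)
    p_new, q_new = layer(p, q)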


Subjects
Computer Simulation; Deep Learning; Neural Networks, Computer; Humans
4.
Neural Netw; 130: 85-99, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32650153

ABSTRACT

The accuracy of deep learning, i.e., deep neural networks, can be characterized by dividing the total error into three main types: approximation error, optimization error, and generalization error. Whereas there are some satisfactory answers to the problems of approximation and optimization, much less is known about the theory of generalization. Most existing theoretical works on generalization fail to explain the performance of neural networks in practice. To derive a meaningful bound, we study the generalization error of neural networks for classification problems in terms of the data distribution and neural network smoothness. We introduce the cover complexity (CC) to measure the difficulty of learning a data set and the inverse of the modulus of continuity to quantify neural network smoothness. A quantitative bound for the expected accuracy/error is derived by considering both the CC and the network smoothness. Although most of the analysis is general and not specific to neural networks, we validate our theoretical assumptions and results numerically for neural networks on several image data sets. The numerical results confirm that the expected error of trained networks, scaled by the square root of the number of classes, has a linear relationship with the CC. We also observe a clear consistency between test loss and neural network smoothness during the training process. In addition, we demonstrate empirically that neural network smoothness decreases as the network size increases, whereas the smoothness is insensitive to the training dataset size.
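As a loose numerical illustration of the smoothness notion used above, the modulus of continuity of a trained network can be estimated by sampling pairs of nearby inputs and recording the largest output change; its inverse then serves as the smoothness measure. The helper function, model, and data below are hypothetical, a sketch of the idea rather than the paper's estimator.

    import torch
    import torch.nn as nn

    def empirical_modulus(f, x, delta, n_pairs=2048):
        # Rough estimate of omega(delta) = sup_{|u - v| <= delta} |f(u) - f(v)|
        # over random perturbations of the sample x; a small value at a given delta
        # (equivalently, a large inverse modulus) indicates a smoother network.
        idx = torch.randint(0, x.shape[0], (n_pairs,))
        base = x[idx]
        noise = torch.randn_like(base)
        perturbed = base + delta * noise / noise.norm(dim=-1, keepdim=True)
        with torch.no_grad():
            diff = (f(perturbed) - f(base)).norm(dim=-1)
        return diff.max().item()

    # Illustration on a small untrained classifier with synthetic inputs:
    net = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 3))
    x = torch.randn(500, 10)
    print(empirical_modulus(net, x, delta=0.1))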


Subjects
Deep Learning/standards