Neural Netw; 112: 54-72, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30753963

ABSTRACT

Gaussian Process (GP) models have been successfully applied to the problem of learning from sequential observations. In this context, the family of Recurrent Gaussian Processes (RGPs) has recently been introduced, with a structure specifically designed to handle dynamical data. However, RGPs share a limitation with most GP approaches: they become computationally infeasible on very large datasets. In the present work, with the aim of improving scalability, we modify the original variational approach used with RGPs to enable inference via stochastic mini-batch optimization, giving rise to the Stochastic Recurrent Variational Bayes (S-REVARB) framework. We review the recent related literature and position our approach within it. Moreover, we propose two learning procedures, the Local and Global S-REVARB algorithms, which prevent computational costs from scaling with the number of training samples. The global variant achieves even greater scalability by also preventing the number of variational parameters from growing with the training set, through the use of neural networks as sequential recognition models. The proposed framework is evaluated on dynamical system identification for large-scale datasets, a scenario not readily supported by standard batch inference for RGPs. The promising results indicate that S-REVARB opens up the possibility of applying powerful hierarchical recurrent GP-based models to massive sequential data.
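The two scalability devices named in the abstract, mini-batch estimation of the variational bound and a neural recognition model that keeps the number of variational parameters fixed, can be illustrated with a short sketch. The listing below is a generic illustration in PyTorch, not the paper's S-REVARB bound: the GRU recognition network, the linear decoder standing in for the GP transition and emission terms, the unit-variance Gaussian likelihood, and all dimensions are assumptions made for the example.

    # Minimal sketch (not the paper's exact S-REVARB equations):
    # amortized stochastic variational inference for a latent-state
    # sequence model. A recognition GRU maps observed subsequences to
    # Gaussian variational posteriors over latent states, so the number
    # of variational parameters stays fixed as the training set grows;
    # the bound is estimated on mini-batches and rescaled to the full
    # dataset size.
    import torch
    import torch.nn as nn

    class RecognitionModel(nn.Module):
        """Maps an observed subsequence to q(x_t) = N(mu_t, sigma_t^2)."""
        def __init__(self, obs_dim, latent_dim, hidden=64):
            super().__init__()
            self.rnn = nn.GRU(obs_dim, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 2 * latent_dim)

        def forward(self, y):                      # y: (batch, T, obs_dim)
            h, _ = self.rnn(y)
            mu, log_var = self.head(h).chunk(2, dim=-1)
            return mu, log_var

    def minibatch_elbo(recog, decoder, y, n_total):
        """Stochastic ELBO estimate, rescaled from the mini-batch to
        n_total sequences. The decoder stands in for the model's
        (approximate) GP transition/emission terms."""
        mu, log_var = recog(y)
        std = torch.exp(0.5 * log_var)
        x = mu + std * torch.randn_like(std)       # reparameterized sample
        y_hat = decoder(x)
        log_lik = -0.5 * ((y - y_hat) ** 2).sum()  # unit-variance Gaussian
        kl = 0.5 * (mu ** 2 + std ** 2 - log_var - 1.0).sum()  # KL to N(0, I)
        scale = n_total / y.shape[0]               # mini-batch rescaling
        return scale * (log_lik - kl)

    # Toy usage: random sequences standing in for a large sequential dataset.
    obs_dim, latent_dim, n_total = 3, 2, 10_000
    recog = RecognitionModel(obs_dim, latent_dim)
    decoder = nn.Linear(latent_dim, obs_dim)
    params = list(recog.parameters()) + list(decoder.parameters())
    opt = torch.optim.Adam(params, lr=1e-3)

    for step in range(100):
        y = torch.randn(32, 50, obs_dim)           # mini-batch of 32 sequences
        loss = -minibatch_elbo(recog, decoder, y, n_total)
        opt.zero_grad()
        loss.backward()
        opt.step()

Because the recognition network is shared across sequences, adding more training data changes only the number of gradient steps, not the parameter count, which is the property the abstract attributes to the Global variant.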


Subjects
Learning; Neural Networks, Computer; Stochastic Processes; Algorithms; Bayes Theorem; Normal Distribution