Results 1 - 5 of 5
1.
Neural Netw ; 170: 94-110, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37977092

ABSTRACT

Recent work has shown that machine learning (ML) models can skillfully forecast the dynamics of unknown chaotic systems. Short-term predictions of the state evolution and long-term predictions of the statistical patterns of the dynamics ("climate") can be produced by employing a feedback loop, whereby the model is trained to predict forward only one time step, then the model output is used as input for multiple time steps. In the absence of mitigating techniques, however, this feedback can result in artificially rapid error growth ("instability"). One established mitigating technique is to add noise to the ML model training input. Based on this technique, we formulate a new penalty term in the loss function for ML models with memory of past inputs that deterministically approximates the effect of many small, independent noise realizations added to the model input during training. We refer to this penalty and the resulting regularization as Linearized Multi-Noise Training (LMNT). We systematically examine the effect of LMNT, input noise, and other established regularization techniques in a case study using reservoir computing, a machine learning method using recurrent neural networks, to predict the spatiotemporal chaotic Kuramoto-Sivashinsky equation. We find that reservoir computers trained with noise or with LMNT produce climate predictions that appear to be indefinitely stable and have a climate very similar to the true system, while the short-term forecasts are substantially more accurate than those trained with other regularization techniques. Finally, we show the deterministic aspect of our LMNT regularization facilitates fast reservoir computer regularization hyperparameter tuning.


Subject(s)
Machine Learning; Neural Networks, Computer; Computers; Forecasting
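The input-noise regularization idea described in this abstract can be illustrated with a minimal echo-state reservoir trained on a toy signal (a sine wave standing in for chaotic data) and run in closed loop. This is a rough sketch under illustrative assumptions (reservoir size, noise level, ridge parameter), not the paper's LMNT implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scalar time series: a sine wave standing in for chaotic training data.
T = 500
u = np.sin(0.1 * np.arange(T + 1))

# Random fixed reservoir (echo-state style), recurrent weights rescaled to
# spectral radius 0.9.
N = 100
W_in = rng.uniform(-0.5, 0.5, size=N)
W = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

# Drive the reservoir with the training inputs, adding small noise to each
# input -- the stabilizing technique the abstract builds on.
noise_std = 0.01
r = np.zeros(N)
states = []
for ut in u[:-1]:
    r = np.tanh(W @ r + W_in * (ut + noise_std * rng.normal()))
    states.append(r.copy())
R = np.array(states)

# Ridge regression for the output weights, with one-step-ahead targets.
beta = 1e-6
W_out = np.linalg.solve(R.T @ R + beta * np.eye(N), R.T @ u[1:])

# Closed-loop ("feedback") prediction: the model output becomes the input.
x = u[-1]
preds = []
for _ in range(200):
    r = np.tanh(W @ r + W_in * x)
    x = r @ W_out
    preds.append(x)
preds = np.array(preds)
```

The closed loop is where untrained reservoirs tend to destabilize; training with input noise (or, per the abstract, its deterministic LMNT approximation) is what keeps the feedback iteration well behaved.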
2.
Chaos ; 33(7)2023 Jul 01.
Article in English | MEDLINE | ID: mdl-37408150

ABSTRACT

We propose a new approach to dynamical system forecasting called data-informed reservoir computing (DI-RC) that, while based solely on data, yields increased accuracy and reduced computational cost and mitigates tedious hyperparameter optimization of the reservoir computer (RC). Our DI-RC approach is based on the recently proposed hybrid setup in which a knowledge-based model is combined with a machine learning prediction system, but it replaces the knowledge-based component with a data-driven model discovery technique. As a result, our approach can be used when a suitable knowledge-based model is not available. We demonstrate our approach using a delay-based RC as the machine learning component in conjunction with sparse identification of nonlinear dynamical systems for the data-driven model component. We test the performance on two example systems: the Lorenz system and the Kuramoto-Sivashinsky system. Our results indicate that our proposed technique can yield an improvement in time-series forecasting capability compared with either approach applied individually, while remaining computationally cheap. The benefit of our proposed approach, compared with pure RC, is most pronounced when the reservoir parameters are not optimized, thereby reducing the need for hyperparameter optimization.
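The model-discovery component this abstract pairs with the reservoir computer can be illustrated by a minimal sequential-thresholded-least-squares (SINDy-style) fit on a toy one-dimensional system. The system, candidate library, and threshold below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

# Simulate toy data from dx/dt = -0.5 * x**3 with a simple Euler scheme.
dt = 1e-3
x = np.empty(20000)
x[0] = 1.0
for k in range(len(x) - 1):
    x[k + 1] = x[k] + dt * (-0.5 * x[k] ** 3)

dx = np.gradient(x, dt)            # estimate derivatives from the data alone

# Candidate function library: [1, x, x^2, x^3].
Theta = np.column_stack([np.ones_like(x), x, x ** 2, x ** 3])

# Sequential thresholded least squares: fit, zero out tiny coefficients,
# refit on the surviving terms, repeat.
xi = np.linalg.lstsq(Theta, dx, rcond=None)[0]
for _ in range(10):
    small = np.abs(xi) < 0.05
    xi[small] = 0.0
    big = ~small
    if big.any():
        xi[big] = np.linalg.lstsq(Theta[:, big], dx, rcond=None)[0]

# xi is now approximately [0, 0, 0, -0.5]: only the cubic term survives.
```

The discovered sparse model then plays the role the knowledge-based model plays in the earlier hybrid setup, so no prior physical model is needed.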

3.
Chaos ; 31(5): 053114, 2021 May.
Article in English | MEDLINE | ID: mdl-34240950

ABSTRACT

We consider the problem of data-assisted forecasting of chaotic dynamical systems when the available data are in the form of noisy partial measurements of the past and present state of the dynamical system. Recently, there have been several promising data-driven approaches to forecasting of chaotic dynamical systems using machine learning. Particularly promising among these are hybrid approaches that combine machine learning with a knowledge-based model, where a machine-learning technique is used to correct the imperfections in the knowledge-based model. Such imperfections may be due to incomplete understanding and/or limited resolution of the physical processes in the underlying dynamical system, e.g., the atmosphere or the ocean. Previously proposed data-driven forecasting approaches tend to require, for training, measurements of all the variables that are intended to be forecast. We describe a way to relax this assumption by combining data assimilation with machine learning. We demonstrate this technique using the Ensemble Transform Kalman Filter to assimilate synthetic data for the three-variable Lorenz 1963 system and for the Kuramoto-Sivashinsky system, simulating a model error in each case by a misspecified parameter value. We show that by using partial measurements of the state of the dynamical system, we can train a machine-learning model to improve predictions made by an imperfect knowledge-based model.
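The assimilation step this abstract combines with machine learning can be illustrated with a minimal ensemble Kalman filter on a scalar toy system. Note this is a basic stochastic (perturbed-observation) EnKF rather than the Ensemble Transform Kalman Filter the paper uses, and the dynamics and noise levels are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

steps, Ne = 200, 50
q, r_obs = 0.1, 1.0                  # model and observation noise std devs

truth = 0.0
ensemble = rng.normal(0.0, 1.0, Ne)  # initial ensemble
errs = []
for _ in range(steps):
    # Truth and ensemble forecast follow a damped random walk.
    truth = 0.95 * truth + q * rng.normal()
    ensemble = 0.95 * ensemble + q * rng.normal(size=Ne)
    y = truth + r_obs * rng.normal()            # noisy observation

    # Analysis: Kalman gain from the ensemble forecast variance.
    P = ensemble.var(ddof=1)
    K = P / (P + r_obs ** 2)
    # Perturbed-observation update of each ensemble member.
    ensemble = ensemble + K * (y + r_obs * rng.normal(size=Ne) - ensemble)
    errs.append((ensemble.mean() - truth) ** 2)

rmse = np.sqrt(np.mean(errs))        # analysis error, well below r_obs
```

In the paper's scheme, analyses like this one (from noisy partial measurements) supply the full training states that a purely data-driven method would otherwise require.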

4.
Chaos ; 30(5): 053111, 2020 May.
Article in English | MEDLINE | ID: mdl-32491877

ABSTRACT

We consider the commonly encountered situation (e.g., in weather forecasting) where the goal is to predict the time evolution of a large, spatiotemporally chaotic dynamical system when we have access to both time series data of previous system states and an imperfect model of the full system dynamics. Specifically, we attempt to utilize machine learning as the essential tool for integrating the use of past data into predictions. In order to facilitate scalability to the common scenario of interest where the spatiotemporally chaotic system is very large and complex, we propose combining two approaches: (i) a parallel machine learning prediction scheme and (ii) a hybrid technique for a composite prediction system composed of a knowledge-based component and a machine learning-based component. We demonstrate that not only can this method combining (i) and (ii) be scaled to give excellent performance for very large systems but also that the length of time series data needed to train our multiple, parallel machine learning components is dramatically less than that necessary without parallelization. Furthermore, considering cases where computational realization of the knowledge-based component does not resolve subgrid-scale processes, our scheme is able to use training data to incorporate the effect of the unresolved short-scale dynamics upon the resolved longer-scale dynamics (subgrid-scale closure).
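The parallel scheme (i) can be sketched by splitting a spatial field into patches and training one independent local one-step predictor per patch, each seeing its patch plus a small halo of neighboring grid points. Ridge-regression predictors and a traveling-wave field stand in for the paper's reservoir computers and spatiotemporally chaotic data; all sizes are illustrative assumptions:

```python
import numpy as np

G, halo, patch = 64, 2, 8            # grid size, overlap width, patch width

# Toy spatiotemporal data: a traveling wave on a periodic grid.
t = np.arange(400)
X = np.sin(2 * np.pi * (np.arange(G)[None, :] / G) - 0.05 * t[:, None])

def local_slice(i):
    # Indices of patch i plus its halo, with periodic wrapping.
    lo = i * patch - halo
    return np.arange(lo, lo + patch + 2 * halo) % G

# Train one ridge-regression one-step predictor per patch; because each
# model only sees local data, this loop could run fully in parallel.
models = []
for i in range(G // patch):
    idx = local_slice(i)
    A = X[:-1][:, idx]                         # local inputs at time k
    b = X[1:][:, i * patch:(i + 1) * patch]    # patch state at time k + 1
    W = np.linalg.solve(A.T @ A + 1e-6 * np.eye(A.shape[1]), A.T @ b)
    models.append(W)

# One-step prediction of the full field assembled from the local models.
x_now = X[-1]
pred = np.concatenate([x_now[local_slice(i)] @ W
                       for i, W in enumerate(models)])
```

The halo is what lets each local model account for information propagating in from neighboring patches between time steps; the hybrid aspect (ii) would additionally feed each local model a knowledge-based forecast.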

5.
Chaos ; 28(4): 041101, 2018 Apr.
Article in English | MEDLINE | ID: mdl-31906641

ABSTRACT

A model-based approach to forecasting chaotic dynamical systems utilizes knowledge of the mechanistic processes governing the dynamics to build an approximate mathematical model of the system. In contrast, machine learning techniques have demonstrated promising results for forecasting chaotic systems purely from past time series measurements of system state variables (training data), without prior knowledge of the system dynamics. The motivation for this paper is the potential of machine learning for filling in the gaps in our underlying mechanistic knowledge that cause widely used knowledge-based models to be inaccurate. Thus, we here propose a general method that leverages the advantages of these two approaches by combining a knowledge-based model and a machine learning technique to build a hybrid forecasting scheme. Potential applications for such an approach are numerous (e.g., improving weather forecasting). We demonstrate and test the utility of this approach using a particular illustrative version of a machine learning technique known as reservoir computing, and we apply the resulting hybrid forecaster to a low-dimensional chaotic system, as well as to a high-dimensional spatiotemporal chaotic system. These tests yield extremely promising results in that our hybrid technique is able to accurately predict for a much longer period of time than either its machine-learning component or its model-based component alone.
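The hybrid idea can be sketched on a toy map: an imperfect "knowledge-based" model (here a logistic map with a misspecified parameter) is combined with a learned corrector trained on past data from the true system. A simple polynomial regression stands in for the paper's reservoir-computing component, and the map and parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def true_step(x):
    return 3.9 * x * (1 - x)          # true dynamics: logistic map, r = 3.9

def model_step(x):
    return 3.6 * x * (1 - x)          # imperfect model: misspecified r

# Generate training data from the true system.
T = 2000
xs = np.empty(T + 1)
xs[0] = 0.4
for k in range(T):
    xs[k + 1] = true_step(xs[k])

# Hybrid: regress the true next state on features that include the
# imperfect model's prediction, so the learner only corrects its errors.
feats = np.column_stack([xs[:-1], xs[:-1] ** 2, model_step(xs[:-1])])
coef, *_ = np.linalg.lstsq(feats, xs[1:], rcond=None)

# Compare one-step errors of model alone vs. hybrid on fresh points.
test_x = rng.uniform(0.1, 0.9, 200)
err_model = np.abs(model_step(test_x) - true_step(test_x)).mean()
hybrid_pred = np.column_stack([test_x, test_x ** 2,
                               model_step(test_x)]) @ coef
err_hybrid = np.abs(hybrid_pred - true_step(test_x)).mean()
```

In this toy setting the corrector can fit the model error exactly, so the hybrid error is essentially zero while the imperfect model alone retains a substantial bias; the paper's point is that the same division of labor pays off even when neither component is adequate by itself.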
