1.
Neural Netw ; 133: 177-192, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33220642

ABSTRACT

Echo State Networks (ESNs) are efficient recurrent neural networks (RNNs) that have been successfully applied to time series modeling tasks. However, ESNs are unable to capture history information far from the current time step, since the echo state at the present step is determined mostly by the previous one. Thus, an ESN may have difficulty capturing the long-term dependencies of temporal data. In this paper, we propose an end-to-end model named Echo Memory-Augmented Network (EMAN) for time series classification. An EMAN consists of an echo memory-augmented encoder and a multi-scale convolutional learner. First, the time series is fed into the reservoir of an ESN to produce the echo states, which are collected into an echo memory matrix along the time dimension. We then design an echo memory-augmented mechanism that applies sparse learnable attention to the echo memory matrix to obtain the Echo Memory-Augmented Representations (EMARs). In this way, the input time series is encoded into the EMARs while the temporal memory of the ESN is enhanced. Multi-scale convolutions with max-over-time pooling then extract the most discriminative features from the EMARs. Finally, a fully-connected layer and a softmax layer compute the probability distribution over the categories. Experiments conducted on a wide range of time series datasets show that EMAN achieves state-of-the-art performance compared with existing time series classification methods. Visualization analysis also demonstrates the effectiveness of enhancing the temporal memory of the ESN.


Subject(s)
Deep Learning/classification; Neural Networks, Computer; Memory; Nonlinear Dynamics; Time Factors
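
To make the pipeline concrete, the following is a minimal PyTorch sketch of the EMAN architecture as the abstract describes it. All names and hyperparameters (reservoir size, kernel sizes, filter counts) are illustrative assumptions, and plain softmax attention stands in for the paper's sparse learnable attention; this is not the authors' reference implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class EMANSketch(nn.Module):
    def __init__(self, in_dim, reservoir_dim=128, n_classes=10,
                 kernel_sizes=(3, 5, 7), n_filters=64, spectral_radius=0.9):
        super().__init__()
        # Fixed (untrained) ESN reservoir weights.
        w_in = torch.empty(reservoir_dim, in_dim).uniform_(-0.1, 0.1)
        w_res = torch.randn(reservoir_dim, reservoir_dim)
        # Rescale recurrent weights toward the echo state property.
        w_res *= spectral_radius / torch.linalg.eigvals(w_res).abs().max()
        self.register_buffer("w_in", w_in)
        self.register_buffer("w_res", w_res)
        # Learnable attention over the echo memory matrix; plain softmax
        # attention stands in for the paper's sparse variant (assumption).
        self.query = nn.Linear(reservoir_dim, reservoir_dim)
        self.key = nn.Linear(reservoir_dim, reservoir_dim)
        # Multi-scale 1-D convolutions over the time axis.
        self.convs = nn.ModuleList(
            [nn.Conv1d(reservoir_dim, n_filters, k, padding=k // 2)
             for k in kernel_sizes])
        self.fc = nn.Linear(n_filters * len(kernel_sizes), n_classes)

    def forward(self, x):                         # x: (batch, T, in_dim)
        b, t, _ = x.shape
        h = x.new_zeros(b, self.w_res.shape[0])
        states = []
        for step in range(t):                     # run the reservoir
            h = torch.tanh(x[:, step] @ self.w_in.T + h @ self.w_res.T)
            states.append(h)
        m = torch.stack(states, dim=1)            # echo memory matrix (b, T, N)
        # Echo Memory-Augmented Representations via attention over m.
        scores = self.query(m) @ self.key(m).transpose(1, 2)
        attn = torch.softmax(scores / m.shape[-1] ** 0.5, dim=-1)
        emar = attn @ m                           # (b, T, N)
        # Multi-scale convolutions + max-over-time pooling.
        z = emar.transpose(1, 2)                  # (b, N, T) for Conv1d
        feats = [conv(z).amax(dim=-1) for conv in self.convs]
        return F.log_softmax(self.fc(torch.cat(feats, dim=-1)), dim=-1)

For example, EMANSketch(in_dim=9, n_classes=6) applied to a batch of shape (8, 100, 9) returns per-class log-probabilities of shape (8, 6); only the attention, convolutional, and fully-connected layers receive gradients, while the reservoir stays fixed.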
2.
IEEE Trans Cybern ; 51(3): 1613-1625, 2021 Mar.
Article in English | MEDLINE | ID: mdl-31217137

ABSTRACT

As efficient recurrent neural network (RNN) models, echo state networks (ESNs) have attracted widespread attention and been applied in many domains over the last decade. Although they have achieved great success in modeling time series, a single ESN may have difficulty capturing the multitimescale structures that naturally exist in temporal data. In this paper, we propose the convolutional multitimescale ESN (ConvMESN), a novel training-efficient model for capturing the multitimescale structures and multiscale temporal dependencies of temporal data. In particular, a multitimescale memory encoder is constructed with a multireservoir structure, in which different reservoirs have recurrent connections with different skip lengths (or time spans). By collecting all past echo states in each reservoir, this multireservoir structure encodes the history of a time series as nonlinear multitimescale echo state representations (MESRs). Our visualization analysis verifies that the MESRs provide better discriminative features for time series. Finally, the multiscale temporal dependencies of the MESRs are learned by a convolutional layer. By leveraging multitimescale reservoirs followed by a convolutional learner, the ConvMESN has not only an efficient memory encoding ability for temporal data with multitimescale structures but also a strong capacity to learn complex temporal dependencies. Furthermore, the training-free reservoirs and the single convolutional layer make the ConvMESN highly computationally efficient when modeling complex temporal data. Extensive experiments on 18 multivariate time series (MTS) benchmark datasets and 3 skeleton-based action recognition datasets demonstrate that the ConvMESN captures multitimescale dynamics and outperforms existing methods.
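For illustration, here is a minimal PyTorch sketch of the ConvMESN idea: several fixed reservoirs whose recurrent connections skip back different numbers of time steps, followed by a single trainable convolutional learner. The skip lengths, pooling, and readout below are assumptions made for the sketch, not the paper's exact configuration.

import torch
import torch.nn as nn

class ConvMESNSketch(nn.Module):
    def __init__(self, in_dim, reservoir_dim=64, skips=(1, 2, 4),
                 n_filters=64, kernel_size=3, n_classes=10,
                 spectral_radius=0.9):
        super().__init__()
        self.skips = skips
        for i, _ in enumerate(skips):
            # One fixed (training-free) reservoir per timescale.
            w_in = torch.empty(reservoir_dim, in_dim).uniform_(-0.1, 0.1)
            w_res = torch.randn(reservoir_dim, reservoir_dim)
            w_res *= spectral_radius / torch.linalg.eigvals(w_res).abs().max()
            self.register_buffer(f"w_in{i}", w_in)
            self.register_buffer(f"w_res{i}", w_res)
        # Single trainable convolutional learner over the stacked MESRs.
        self.conv = nn.Conv1d(reservoir_dim * len(skips), n_filters,
                              kernel_size, padding=kernel_size // 2)
        self.fc = nn.Linear(n_filters, n_classes)

    def forward(self, x):                          # x: (batch, T, in_dim)
        b, t, _ = x.shape
        reps = []
        for i, skip in enumerate(self.skips):
            w_in = getattr(self, f"w_in{i}")
            w_res = getattr(self, f"w_res{i}")
            states = [x.new_zeros(b, w_res.shape[0])] * skip  # zero history
            for step in range(t):
                # The recurrent connection reaches back `skip` steps.
                h = torch.tanh(x[:, step] @ w_in.T + states[-skip] @ w_res.T)
                states.append(h)
            reps.append(torch.stack(states[skip:], dim=1))    # (b, T, N)
        mesr = torch.cat(reps, dim=-1).transpose(1, 2)        # (b, N*K, T)
        # Convolution over time, then max-over-time pooling and readout
        # (the pooling and readout here are assumptions for the sketch).
        feats = torch.relu(self.conv(mesr)).amax(dim=-1)
        return self.fc(feats)

Keeping the reservoirs as fixed buffers means only the convolutional layer and the readout are trained, which mirrors the training efficiency the abstract emphasizes.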
