Results 1 - 3 of 3
1.
Neural Netw ; 133: 177-192, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33220642

ABSTRACT

Echo State Networks (ESNs) are efficient recurrent neural networks (RNNs) that have been successfully applied to time series modeling tasks. However, ESNs are unable to capture history information far from the current time step, since the echo state at the present step is mostly impacted by the previous one. Thus, ESNs may have difficulty capturing the long-term dependencies of temporal data. In this paper, we propose an end-to-end model named the Echo Memory-Augmented Network (EMAN) for time series classification. An EMAN consists of an echo memory-augmented encoder and a multi-scale convolutional learner. First, the time series is fed into the reservoir of an ESN to produce the echo states, which are collected into an echo memory matrix along the time steps. We then design an echo memory-augmented mechanism that applies sparse learnable attention to the echo memory matrix to obtain the Echo Memory-Augmented Representations (EMARs). In this way, the input time series is encoded into the EMARs while the temporal memory of the ESN is enhanced. Multi-scale convolutions with max-over-time pooling then extract the most discriminative features from the EMARs. Finally, a fully connected layer and a softmax layer compute the probability distribution over categories. Experiments conducted on extensive time series datasets show that EMAN achieves state-of-the-art performance compared with existing time series classification methods. The visualization analysis also demonstrates the effectiveness of enhancing the temporal memory of the ESN.
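The encoder described above starts by driving a fixed random reservoir with the input series and stacking the resulting echo states into an echo memory matrix. A minimal numpy sketch of that collection step is shown below; the sizes, weight scales, and spectral radius of 0.9 are illustrative choices of ours, not values from the paper, and the learnable attention stage is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (not from the paper)
T, d_in, d_res = 50, 1, 100

# Fixed random input and reservoir weights; rescale the reservoir to
# spectral radius 0.9 so the echo state property is plausible
W_in = rng.uniform(-0.1, 0.1, (d_res, d_in))
W = rng.uniform(-0.5, 0.5, (d_res, d_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

x = rng.standard_normal((T, d_in))   # a toy input series
M = np.zeros((T, d_res))             # echo memory matrix
h = np.zeros(d_res)
for t in range(T):
    h = np.tanh(W_in @ x[t] + W @ h)  # untrained reservoir update
    M[t] = h                          # collect the echo state at step t

print(M.shape)  # (50, 100)
```

Because the reservoir is untrained, producing `M` costs only one forward pass; all learning then happens on top of this matrix.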


Subject(s)
Deep Learning/classification , Neural Networks, Computer , Memory , Nonlinear Dynamics , Time Factors
2.
IEEE Trans Neural Netw Learn Syst ; 32(9): 3942-3955, 2021 Sep.
Article in English | MEDLINE | ID: mdl-32866103

ABSTRACT

Time series clustering is an essential unsupervised task when category information is unavailable, and it has a wide range of applications. However, existing time series clustering methods usually either ignore the temporal dynamics of time series or isolate feature extraction from the clustering task without considering the interaction between them. In this article, a time series clustering framework named the self-supervised time series clustering network (STCN) is proposed to optimize feature extraction and clustering simultaneously. In the feature extraction module, a recurrent neural network (RNN) conducts a one-step time series prediction that acts as a reconstruction of the input data, capturing the temporal dynamics and maintaining the local structures of the time series. The parameters of the output layer of the RNN are regarded as model-based dynamic features and then fed into a self-supervised clustering module to obtain the predicted labels. To bridge the gap between these two modules, we employ spectral analysis to constrain similar features to have the same pseudoclass labels and to align the predicted labels with the pseudolabels as well. STCN is trained by iteratively updating the model parameters and the pseudoclass labels. Experiments conducted on extensive time series data sets show that STCN achieves state-of-the-art performance, and the visualization analysis also demonstrates the effectiveness of the proposed model.
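The key idea above is that the parameters of a fitted one-step predictor characterize a series' dynamics and can serve as clustering features. The sketch below illustrates this with a plain linear autoregressive predictor standing in for the paper's RNN; the order `p=2`, ridge strength, and toy sinusoid data are our assumptions, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def ar_features(x, p=2, lam=1e-3):
    """Fit a one-step predictor x[t] ~ w . x[t-p:t] by ridge regression;
    the fitted weights act as model-based dynamic features."""
    X = np.stack([x[i:i + p] for i in range(len(x) - p)])
    y = x[p:]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Two toy "classes" of series: slow vs. fast sinusoids plus small noise
series = [np.sin(0.3 * np.arange(100)) + 0.01 * rng.standard_normal(100)
          for _ in range(5)]
series += [np.sin(1.2 * np.arange(100)) + 0.01 * rng.standard_normal(100)
           for _ in range(5)]

F = np.stack([ar_features(s) for s in series])  # one feature row per series

# Series with the same dynamics yield nearby features
d_within = np.linalg.norm(F[0] - F[1])
d_between = np.linalg.norm(F[0] - F[5])
print(d_within < d_between)  # True
```

Clustering the rows of `F` (e.g. with k-means or the paper's spectral-analysis module) then groups series by their dynamics rather than by pointwise distance.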

3.
Neural Netw ; 117: 225-239, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31176962

ABSTRACT

Echo state networks (ESNs) are randomly connected recurrent neural networks (RNNs) that can be used as a temporal kernel for modeling time series data, and they have been successfully applied to time series prediction tasks. Recently, ESNs have been applied to time series classification (TSC) tasks. However, previous ESN-based classifiers involve either training the model by predicting the next item of a sequence or predicting the class label at each time step. The former is essentially a predictive model adapted from time series prediction work, rather than a model designed specifically for the classification task. The latter approach only considers local patterns at each time step and then averages over the classifications. Hence, rather than selecting the most discriminative sections of the time series, this approach incorporates non-discriminative information into the classification, reducing accuracy. In this paper, we propose a novel end-to-end framework called the Echo Memory Network (EMN), in which the time series dynamics and multi-scale discriminative features are efficiently learned from an unrolled echo memory using multi-scale convolution and max-over-time pooling. First, the time series data are projected into the high-dimensional nonlinear space of the reservoir and the echo states are collected into the echo memory matrix, followed by a single multi-scale convolutional layer that extracts multi-scale features from the echo memory matrix. Max-over-time pooling is used to maintain temporal invariance and select the most important local patterns. Finally, a fully connected hidden layer feeds into a softmax layer for classification. This architecture is applied to both time series classification and human action recognition datasets. For the human action recognition datasets, we divide the action data into five different components of the human body and propose two spatial information fusion strategies to integrate the spatial information over them. With one training-free recurrent layer and only one layer of convolution, the EMN is a very efficient end-to-end model, and it ranks first in overall classification ability on 55 TSC benchmark datasets and four 3D skeleton-based human action recognition tasks.
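The multi-scale convolution plus max-over-time pooling stage described above can be sketched in a few lines of numpy: each kernel width slides a window over the echo memory matrix, and pooling keeps one value per filter. The random filter weights, filter count, and kernel widths (2, 3, 5) below are our illustrative assumptions; in the paper the filters are learned by backpropagation.

```python
import numpy as np

rng = np.random.default_rng(2)

T, d = 50, 8                     # time steps, echo-state size (toy values)
M = rng.standard_normal((T, d))  # stand-in for an echo memory matrix

def conv_maxpool(M, kernel, n_filters=4):
    """1-D convolution over time with `kernel`-step windows,
    followed by max-over-time pooling (one value per filter)."""
    W = 0.1 * rng.standard_normal((n_filters, kernel * M.shape[1]))
    windows = np.stack([M[t:t + kernel].ravel()
                        for t in range(M.shape[0] - kernel + 1)])
    feats = np.tanh(windows @ W.T)   # (T - kernel + 1, n_filters)
    return feats.max(axis=0)         # max over time, per filter

# Multi-scale: concatenate pooled features from several kernel widths
v = np.concatenate([conv_maxpool(M, k) for k in (2, 3, 5)])
print(v.shape)  # (12,)
```

Because pooling takes a maximum over all window positions, `v` has a fixed size regardless of `T`, which is what lets a plain fully connected softmax classifier sit on top of variable-length series.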


Subject(s)
Neural Networks, Computer , Humans , Time