1.
IEEE Trans Neural Netw Learn Syst ; 32(4): 1512-1524, 2021 Apr.
Article in English | MEDLINE | ID: mdl-32310801

ABSTRACT

Compressive learning (CL) is an emerging topic that combines signal acquisition via compressive sensing (CS) and machine learning to perform inference tasks directly on a small number of measurements. Many data modalities naturally have a multidimensional or tensorial format, with each dimension or tensor mode representing a different feature, such as the spatial and temporal information in video sequences or the spatial and spectral information in hyperspectral images. However, in existing CL frameworks, the CS component applies either a random or a learned linear projection to the vectorized signal, thus discarding the multidimensional structure of the signal. In this article, we propose multilinear CL (MCL), a framework that takes the tensorial nature of multidimensional signals into account in the acquisition step and builds the subsequent inference model on the structurally sensed measurements. Our theoretical complexity analysis shows that the proposed framework is more efficient than its vector-based counterpart in both memory and computation requirements. In extensive experiments, we also show empirically that MCL outperforms the vector-based framework in object classification and face recognition tasks and scales favorably as the dimensionality of the original signals increases, making it well suited to high-dimensional multidimensional signals.
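The memory saving at the heart of this idea can be seen in a minimal NumPy sketch of mode-wise versus vectorized sensing. The random Gaussian sensing matrices, the tensor sizes, and the mode_product helper below are our own illustrative assumptions, not the learned sensing operators of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def mode_product(tensor, matrix, mode):
    """n-mode product: project one tensor dimension with a small matrix."""
    t = np.moveaxis(tensor, mode, 0)              # bring target mode to front
    out = matrix @ t.reshape(t.shape[0], -1)      # project that mode
    return np.moveaxis(out.reshape(matrix.shape[0], *t.shape[1:]), 0, mode)

# Original signal: e.g., a small video block (height x width x frames).
I = (32, 32, 16)                                  # original size per mode
M = (8, 8, 4)                                     # compressed size per mode
X = rng.standard_normal(I)

# Multilinear sensing: one small sensing matrix per mode.
phis = [rng.standard_normal((m, i)) for m, i in zip(M, I)]
Y = X
for mode, phi in enumerate(phis):
    Y = mode_product(Y, phi, mode)
print("measurement tensor:", Y.shape)             # (8, 8, 4)

# Memory comparison with a vectorized sensing matrix at the same rate.
multilinear_params = sum(m * i for m, i in zip(M, I))
vector_params = np.prod(M) * np.prod(I)
print(multilinear_params, "vs", vector_params)    # 576 vs 4194304
```

The three small per-mode matrices replace one huge matrix acting on the vectorized signal, which is where the memory and computation advantage over the vector-based counterpart comes from.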

2.
IEEE Trans Neural Netw Learn Syst ; 31(3): 710-724, 2020 Mar.
Article in English | MEDLINE | ID: mdl-31170081

ABSTRACT

The traditional multilayer perceptron (MLP) based on the McCulloch-Pitts neuron model is inherently limited to a single type of neuronal activity: a linear weighted sum followed by a nonlinear thresholding step. The generalized operational perceptron (GOP) was previously proposed to extend the conventional perceptron model by defining a diverse set of neuronal activities that imitate a generalized model of biological neurons. Together with the GOP, a progressive operational perceptron (POP) algorithm was proposed to optimize a predefined template of multiple homogeneous layers in a layerwise manner. In this paper, we propose an efficient algorithm to learn a compact, fully heterogeneous multilayer network that allows each individual neuron, regardless of its layer, to have distinct characteristics. Based on the complexity of the problem, the proposed algorithm operates progressively at the neuronal level, searching for a compact topology not only in depth but also in width, i.e., the number of neurons in each layer. The proposed algorithm is shown to outperform related learning methods in extensive experiments on several classification problems.


Subjects
Algorithms; Databases, Factual/classification; Neural Networks, Computer
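To make the generalized neuron model concrete, here is a minimal NumPy sketch of a GOP-style neuron. The small operator libraries and all names below are illustrative assumptions on our part, not the paper's implementation:

```python
import numpy as np

# Nodal operators: applied elementwise between inputs and weights.
NODAL = {
    "multiplication": lambda x, w: x * w,
    "exponential":    lambda x, w: np.exp(x * w) - 1.0,
    "sine":           lambda x, w: np.sin(x * w),
}
# Pooling operators: reduce nodal outputs over the input dimension.
POOL = {
    "summation": lambda z: z.sum(axis=-1),
    "maximum":   lambda z: z.max(axis=-1),
    "median":    lambda z: np.median(z, axis=-1),
}
# Activation operators.
ACT = {
    "tanh":    np.tanh,
    "sigmoid": lambda a: 1.0 / (1.0 + np.exp(-a)),
    "relu":    lambda a: np.maximum(a, 0.0),
}

def gop_neuron(x, w, b, nodal, pool, act):
    """One GOP-style neuron: act(pool(nodal(x, w)) + b). With the triple
    (multiplication, summation, tanh) it reduces to a classic
    McCulloch-Pitts neuron; any other triple gives a different neuron type."""
    return ACT[act](POOL[pool](NODAL[nodal](x, w)) + b)

rng = np.random.default_rng(1)
x = rng.standard_normal((4, 10))        # batch of 4 samples, 10 inputs
w = rng.standard_normal(10) * 0.1

# Two neurons in the same layer can use entirely different operator
# triples, which is what a fully heterogeneous network exploits.
print(gop_neuron(x, w, 0.0, "multiplication", "summation", "tanh"))
print(gop_neuron(x, w, 0.0, "sine", "median", "relu"))
```

A heterogeneous network in this sense assigns each neuron its own operator triple, and a progressive search grows the topology neuron by neuron rather than optimizing a fixed template.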
3.
IEEE Trans Neural Netw Learn Syst ; 30(5): 1407-1418, 2019 May.
Article in English | MEDLINE | ID: mdl-30281493

ABSTRACT

Financial time-series forecasting has long been a challenging problem because of the inherently noisy and stochastic nature of the market. In high-frequency trading, forecasting for trading purposes is an even more challenging task, since an automated inference system must be both accurate and fast. In this paper, we propose a neural network layer architecture that incorporates the idea of bilinear projection together with an attention mechanism that enables the layer to detect and focus on crucial temporal information. The resulting network is highly interpretable, given its ability to highlight the importance and contribution of each temporal instance, thus allowing further analysis of the time instances of interest. Our experiments on a large-scale limit order book dataset show that a two-hidden-layer network using the proposed layer outperforms, by a large margin, existing state-of-the-art results obtained with much deeper architectures, while requiring far fewer computations.
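A minimal sketch of the layer's idea, under a simplified parameterization of our own (all shapes, names, and the mixing factor lam are assumptions; the paper's exact layer may differ): a bilinear projection of a (features x time) series combined with a learned softmax attention map over the temporal axis.

```python
import numpy as np

def softmax(e, axis):
    e = e - e.max(axis=axis, keepdims=True)       # numerical stability
    e = np.exp(e)
    return e / e.sum(axis=axis, keepdims=True)

def attention_bilinear(X, W1, W, W2, b, lam=0.5):
    """X: (D, T) series; W1: (D', D) feature projection; W: (T, T)
    attention weights; W2: (T, T') temporal projection; b: (D', T')."""
    Xbar = W1 @ X                                 # project features: (D', T)
    A = softmax(Xbar @ W, axis=1)                 # attention map over time
    Xatt = lam * (Xbar * A) + (1.0 - lam) * Xbar  # soft focus on key instants
    return np.tanh(Xatt @ W2 + b), A              # output: (D', T')

rng = np.random.default_rng(0)
D, T, Dp, Tp = 40, 10, 16, 1                      # e.g., 40 features, 10 lags
X = rng.standard_normal((D, T))
Y, A = attention_bilinear(
    X,
    rng.standard_normal((Dp, D)) * 0.1,
    rng.standard_normal((T, T)) * 0.1,
    rng.standard_normal((T, Tp)) * 0.1,
    np.zeros((Dp, Tp)),
)
print(Y.shape)                                    # (16, 1)
# A can be inspected directly: it shows which time instances the layer
# focuses on, which is the source of the claimed interpretability.
```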

4.
Neural Netw ; 105: 328-339, 2018 Sep.
Article in English | MEDLINE | ID: mdl-29920430

ABSTRACT

The excellent performance of deep neural networks has enabled us to solve several automation problems, opening an era of autonomous devices. However, current deep network architectures are heavy, with millions of parameters, and require billions of floating-point operations. Several works have been developed to compress a pretrained deep network to reduce its memory footprint and, possibly, its computation. Instead of compressing a pretrained network, in this work we propose a generic neural network layer structure that employs multilinear projection as the primary feature extractor. The proposed architecture requires several times less memory than a traditional convolutional neural network (CNN), while inheriting similar design principles. In addition, the proposed architecture is equipped with two computation schemes that enable computation reduction or scalability. Experimental results show the effectiveness of our compact projection, which outperforms a traditional CNN while requiring far fewer parameters.


Subjects
Neural Networks, Computer; Data Compression/methods; Machine Learning/standards
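The memory argument can be illustrated with a short NumPy sketch that applies one small projection matrix per tensor mode and compares its parameter count with a fully connected layer of the same input and output sizes. All names and sizes below are our own illustrative assumptions, and the paper's two computation schemes are not reproduced here:

```python
import numpy as np

def mode_product(tensor, matrix, mode):
    """n-mode product: project one tensor dimension with a small matrix."""
    t = np.moveaxis(tensor, mode, 0)
    out = matrix @ t.reshape(t.shape[0], -1)
    return np.moveaxis(out.reshape(matrix.shape[0], *t.shape[1:]), 0, mode)

def multilinear_layer(X, Ws):
    """Project every mode of X with its own small matrix, then apply ReLU."""
    for mode, W in enumerate(Ws):
        X = mode_product(X, W, mode)
    return np.maximum(X, 0.0)

rng = np.random.default_rng(0)
in_shape, out_shape = (3, 32, 32), (16, 16, 16)   # (channels, height, width)
X = rng.standard_normal(in_shape)
Ws = [rng.standard_normal((o, i)) * 0.1 for o, i in zip(out_shape, in_shape)]
print(multilinear_layer(X, Ws).shape)             # (16, 16, 16)

# Parameter count versus a fully connected layer with equal input/output.
ml_params = sum(o * i for o, i in zip(out_shape, in_shape))
fc_params = np.prod(out_shape) * np.prod(in_shape)
print(ml_params, "vs", fc_params)                 # 1072 vs 12582912
```

Because each mode is handled by its own small matrix, parameter count grows with the sum of the mode sizes rather than their product, which is what keeps the layer compact as inputs grow.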