ABSTRACT
We propose a sparse computation method for optimizing the inference of neural networks in reinforcement learning (RL) tasks. Motivated by the processing abilities of the brain, the method combines simple neural network pruning with a delta-network algorithm that exploits correlations in the input data. The former mimics neuroplasticity by eliminating inefficient connections; the latter updates neuron states only when their changes exceed a certain threshold. This combination significantly reduces the number of multiplications during neural network inference, enabling fast neuromorphic computing. We tested the approach on popular deep RL tasks, obtaining up to a 100-fold reduction in the number of required multiplications without substantial performance loss (in some cases, performance even improved).
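The combination described above can be illustrated with a minimal sketch (not the authors' implementation; the class name, the 90% pruning fraction, and the 0.1 threshold are illustrative assumptions). Weights below a magnitude cutoff are zeroed (pruning), and an input component is propagated only when it has drifted more than the threshold since its last propagated value (delta updates), so only a fraction of the dense multiplications are performed:

```python
import numpy as np

class SparseDeltaLayer:
    """Illustrative pruned linear layer with delta-network updates."""

    def __init__(self, weights, prune_frac=0.9, threshold=0.1):
        # Pruning: zero out the smallest-magnitude weights,
        # mimicking the elimination of inefficient connections.
        w = weights.copy()
        cutoff = np.quantile(np.abs(w), prune_frac)
        w[np.abs(w) < cutoff] = 0.0
        self.w = w
        self.threshold = threshold
        self.prev_x = np.zeros(w.shape[1])  # last propagated input state
        self.y = np.zeros(w.shape[0])       # cached pre-activations
        self.mults = 0                      # multiplications actually performed

    def forward(self, x):
        # Delta update: propagate only input components whose change
        # since the last propagated value exceeds the threshold.
        delta = x - self.prev_x
        active = np.abs(delta) > self.threshold
        for j in np.where(active)[0]:
            col = self.w[:, j]
            nz = col != 0                   # skip pruned weights
            self.y[nz] += col[nz] * delta[j]
            self.mults += int(nz.sum())
        self.prev_x[active] = x[active]
        return self.y.copy()
```

With slowly varying (correlated) inputs, as in many RL control tasks, most components stay below the threshold at each step, so `layer.mults` ends up far below the dense count `out_dim * in_dim * steps`; the output stays within `threshold * sum_j |w_ij|` of the exact pruned product per row.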
ABSTRACT
At present, it is well established that different sections of the nervous system utilize different methods of information coding. Primary afferent signals are in most cases represented in the form of spike trains using a combination of rate coding and population coding, while there is clear evidence that temporal coding is used in various regions of the cortex. In the present paper, it is shown that conversion between these two coding schemes can be performed, under certain conditions, by a homogeneous chaotic neural network. Interestingly, this effect can be achieved without network training or synaptic plasticity.