Results 1 - 3 of 3
1.
Front Neurosci; 17: 1285914, 2023.
Article in English | MEDLINE | ID: mdl-38099202

ABSTRACT

Recently, the accuracy of spiking neural networks (SNNs) has been significantly improved by deploying convolutional neural networks (CNNs) and their parameters to SNNs. Deep convolutional SNNs, however, require large amounts of computation, which is the major bottleneck in energy-efficient SNN processor design. In this paper, we present an input-dependent computation reduction approach in which relatively unimportant neurons are identified and pruned without seriously sacrificing accuracy. Specifically, we propose neuron pruning in the temporal domain, which prunes less important neurons and skips their future operations based on layer-wise pruning thresholds on the membrane voltages. To find the pruning thresholds, two threshold search algorithms are presented that efficiently trade off accuracy against computational complexity for a given computation reduction ratio. The proposed neuron pruning scheme has been implemented in a 65 nm CMOS process. The SNN processor achieves a 57% energy reduction and a 2.68× speedup, with up to 0.82% accuracy loss and 7.3% area overhead, on the CIFAR-10 dataset.
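
As a rough illustration of the temporal-domain neuron pruning described above, the following is a minimal NumPy sketch. The function and parameter names (prune_step, v_prune, active) and the exact pruning rule are assumptions made for illustration; the paper's actual design is a 65 nm hardware implementation, not software.

    import numpy as np

    def prune_step(v_mem, in_spikes, weights, v_thresh, v_prune, active):
        # One SNN timestep with temporal-domain neuron pruning (sketch).
        # v_mem:   membrane voltages, shape (n_neurons,)
        # active:  boolean mask; once False, a neuron's future operations
        #          are skipped entirely
        # v_prune: assumed layer-wise pruning threshold, found offline by a
        #          threshold search such as the ones the paper proposes
        v_mem[active] += weights[active] @ in_spikes  # integrate active neurons only
        out_spikes = active & (v_mem >= v_thresh)     # fire where threshold is crossed
        v_mem[out_spikes] = 0.0                       # reset fired neurons
        # Prune neurons that neither fired nor reached the pruning threshold;
        # they are skipped for all remaining timesteps of this input.
        active &= out_spikes | (v_mem >= v_prune)
        return out_spikes, v_mem, active

Because pruned neurons drop out of the masked matrix-vector product, the work per timestep shrinks as pruning proceeds, which is the input-dependent computation reduction the abstract refers to.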

2.
IEEE Trans Biomed Circuits Syst; 14(1): 125-137, 2020 Feb.
Article in English | MEDLINE | ID: mdl-31905147

ABSTRACT

Two main bottlenecks in implementing energy-efficient spike-timing-dependent plasticity (STDP) based sparse coding are the complex computation of the winner-take-all (WTA) operation and the repetitive neuronal operations of time-domain processing. In this article, we present an energy-efficient STDP-based sparse coding processor. The low-cost hardware rests on the following algorithmic reduction techniques. First, the complex WTA operation is simplified by predicting which neurons will emit spikes. Sparsity-based approximations in the spatial and temporal domains are also exploited to remove redundant neurons with negligible loss of algorithmic accuracy. We designed and implemented the STDP-based sparse coding hardware in a 65 nm CMOS process. By exploiting input sparsity, the proposed SNN architecture can dynamically trade algorithmic quality for computation energy (up to 74% savings) on natural-image (at most 0.01 RMSE increase) and MNIST (no accuracy loss) applications. In inference mode, the SNN hardware achieves a throughput of 374 Mpixel/s and 840.2 GSOP/s with an energy efficiency of 781.52 pJ/pixel and 0.35 pJ/SOP.
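
The winner-prediction idea can be sketched in a few lines; the candidate-set heuristic, the threshold name v_predict, and the top-k selection below are assumptions for illustration rather than the paper's exact prediction rule.

    import numpy as np

    def predicted_wta(v_mem, v_predict, k):
        # Simplified winner-take-all via spike-emission prediction (sketch).
        # Only neurons whose membrane voltage already exceeds an assumed
        # prediction threshold are treated as candidates; the exact (and
        # expensive) competition then runs over this small set only.
        candidates = np.flatnonzero(v_mem >= v_predict)
        if candidates.size == 0:
            return np.empty(0, dtype=int)      # no neuron is close to firing
        order = np.argsort(v_mem[candidates])[::-1]
        return candidates[order[:k]]           # indices of the predicted winners

Restricting the full competition to a handful of predicted candidates is what removes the WTA bottleneck, at the cost of occasionally mispredicting a winner.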


Subjects
Neurons/physiology; Synapses/physiology; Equipment Design; Humans; Lab-On-A-Chip Devices; Machine Learning; Models, Neurological; Neural Networks, Computer
3.
IEEE Trans Biomed Circuits Syst; 13(6): 1664-1677, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31603797

ABSTRACT

In this paper, we present an energy- and area-efficient spiking neural network (SNN) processor based on novel spike-count-based methods. For a low-cost SNN design, we propose hardware-friendly complexity reduction techniques for both the learning and inference modes of operation. First, for the unsupervised learning process, we propose a spike-count-based learning method that uses pre- and post-synaptic spike counts to reduce both the bit width of the synaptic weights and the number of weight updates. For energy-efficient inference, we propose an accumulation-based computing scheme in which the input spikes on each input axon are accumulated, without immediate membrane updates, until a predefined number of spikes is reached. In addition, computation-skip schemes identify meaningless computations and skip them to improve energy efficiency. Based on the proposed low-complexity design techniques, we designed and implemented the SNN processor in a 65 nm CMOS process. According to the implementation results, the SNN processor achieves 87.4% recognition accuracy on the MNIST dataset using only 230 k 1-bit synaptic weights and 400 excitatory neurons. The energy consumption is 0.26 pJ/SOP and 0.31 µJ/inference in inference mode, and 1.42 pJ/SOP and 2.63 µJ/learning in learning mode.
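
The accumulation-based scheme lends itself to a short sketch; the parameter name n_acc and the batched-update rule below are assumptions for illustration, while the actual hardware applies the idea per input axon with 1-bit weights.

    import numpy as np

    def accumulate_and_update(spike_counts, in_spikes, weights, v_mem, n_acc):
        # Accumulation-based membrane update (sketch).
        # Spikes on each axon are merely counted (cheap) until an axon
        # reaches n_acc spikes; its weight column is then applied once,
        # scaled by the count, replacing n_acc individual membrane updates.
        spike_counts += in_spikes
        ready = spike_counts >= n_acc            # axons that hit the limit
        if ready.any():
            v_mem += weights[:, ready] @ spike_counts[ready]
            spike_counts[ready] = 0
        return spike_counts, v_mem

With 1-bit weights the batched update reduces to signed additions of the accumulated counts, which is consistent with the low pJ/SOP figures the abstract reports.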


Subjects
Synapses/physiology; Animals; Humans; Models, Neurological; Neural Networks, Computer; Unsupervised Machine Learning