1.
Front Neurosci; 17: 1285914, 2023.
Article in English | MEDLINE | ID: mdl-38099202

ABSTRACT

Recently, the accuracy of spiking neural networks (SNNs) has been significantly improved by deploying convolutional neural networks (CNNs) and their parameters to SNNs. Deep convolutional SNNs, however, suffer from a large amount of computation, which is the major bottleneck for energy-efficient SNN processor design. In this paper, we present an input-dependent computation reduction approach in which relatively unimportant neurons are identified and pruned without seriously sacrificing accuracy. Specifically, a temporal-domain neuron pruning scheme is proposed that prunes less important neurons and skips their future operations based on layer-wise pruning thresholds on the membrane voltage. To find the pruning thresholds, two threshold search algorithms are presented that can efficiently trade off accuracy against computational complexity for a given computation reduction ratio. The proposed neuron pruning scheme has been implemented in a 65 nm CMOS process. The SNN processor achieves a 57% energy reduction and a 2.68× speedup, with up to 0.82% accuracy loss and 7.3% area overhead on the CIFAR-10 dataset.
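
A minimal sketch of the pruning idea described above, for a single integrate-and-fire layer in Python. The class name LIFLayerWithPruning, the soft-reset dynamics, the default threshold values, and the single pruning timestep t_prune are illustrative assumptions, not the paper's actual hardware design.

```python
import numpy as np

class LIFLayerWithPruning:
    """Integrate-and-fire layer with temporal-domain neuron pruning (sketch)."""

    def __init__(self, weights, v_thresh=1.0, v_prune=0.2, t_prune=8):
        self.w = weights              # (n_in, n_out) weights mapped from a trained CNN
        self.v_thresh = v_thresh      # firing threshold
        self.v_prune = v_prune        # layer-wise pruning threshold (assumed value)
        self.t_prune = t_prune        # timestep at which pruning is applied (assumed)
        self.v = np.zeros(weights.shape[1])            # membrane voltages
        self.active = np.ones(weights.shape[1], bool)  # pruned neurons become inactive

    def step(self, in_spikes, t):
        out = np.zeros_like(self.v)
        # Only active neurons integrate input; pruned neurons skip all future
        # operations, which is where the computation/energy saving comes from.
        self.v[self.active] += in_spikes @ self.w[:, self.active]
        fired = self.active & (self.v >= self.v_thresh)
        out[fired] = 1.0
        self.v[fired] -= self.v_thresh   # soft reset after a spike
        # At the pruning timestep, neurons whose accumulated membrane voltage is
        # still below the layer-wise threshold are judged unimportant and pruned.
        if t == self.t_prune:
            self.active &= (self.v >= self.v_prune)
        return out
```

Sweeping v_prune per layer and measuring the resulting accuracy and operation count is, in spirit, what the paper's two threshold search algorithms automate for a given computation reduction ratio.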

2.
IEEE Trans Biomed Circuits Syst; 16(3): 442-455, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35687615

ABSTRACT

In this paper, we present a novel early-termination-based training acceleration technique for temporal-coding-based spiking neural network (SNN) processor design. The proposed early termination scheme efficiently identifies non-contributing training images during the feedforward pass of training and skips the remaining processing to save training energy and time. A metric that evaluates each input image's contribution to training has been developed; it is compared with a pre-determined threshold to decide whether to skip the rest of the training process. For threshold selection, an adaptive threshold calculation method is presented that increases the computation skip ratio without sacrificing accuracy. A timestep splitting approach is also employed to allow more frequent early termination within the split timesteps, leading to further computation savings. The proposed early termination and timestep splitting techniques achieve 51.21/42.31/93.53/30.36% reductions in synaptic operations and 86.06/64.63/90.82/49.14% reductions in feedforward timesteps for training on the MNIST/Fashion-MNIST/ETH-80/EMNIST-Letters datasets, respectively. A hardware implementation of the proposed SNN processor in a 28 nm CMOS process achieves training energy savings of 61.76/31.88% and computation cycle reductions of 69.10/36.26% on the MNIST/Fashion-MNIST datasets, respectively.
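
A rough sketch of the early-termination loop with timestep splitting, in Python. The contribution metric (an output-layer margin), the EMA-based adaptive threshold, and the forward_split/backward API are assumptions for illustration only; the paper's exact metric and threshold calculation are not reproduced here.

```python
import numpy as np

def contribution_metric(out, y):
    # Example metric (assumed): margin of the strongest competitor over the
    # correct class. Confidently correct images score low, i.e. contribute little.
    correct = out[y]
    competitor = np.max(np.delete(out, y))
    return competitor - correct

def train_with_early_termination(model, images, labels, splits=4, alpha=0.9):
    threshold = 0.0   # adaptive threshold, updated as training proceeds
    skipped = 0
    for x, y in zip(images, labels):
        terminated = False
        # Timestep splitting: the feedforward timesteps run in `splits` chunks,
        # giving more frequent opportunities to terminate early.
        for split in range(splits):
            out = model.forward_split(x, split)   # partial feedforward (assumed API)
            contribution = contribution_metric(out, y)
            if contribution < threshold:
                skipped += 1                      # skip remaining timesteps and the
                terminated = True                 # weight update for this image
                break
        if not terminated:
            model.backward(y)                     # full weight update (assumed API)
        # Adaptive threshold: exponential moving average of observed metrics,
        # nudging the skip ratio up without sacrificing accuracy.
        threshold = alpha * threshold + (1 - alpha) * contribution
    return skipped
```

The energy saving comes from the break: a skipped image avoids both its remaining feedforward timesteps and the entire weight-update phase.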


Subjects
Computers; Neural Networks, Computer; Acceleration