Results 1 - 6 of 6

1.
Front Neurosci ; 14: 423, 2020.
Article in English | MEDLINE | ID: mdl-32733180

ABSTRACT

Hardware-based spiking neural networks (SNNs) inspired by the biological nervous system are regarded as an innovative computing system with very low power consumption and massively parallel operation. To train SNNs with supervision, we propose an efficient on-chip training scheme that approximates the backpropagation algorithm and is suitable for hardware implementation. We show that, by exploiting the stochastic characteristics of neurons, the accuracy of the proposed scheme for SNNs is close to that of conventional artificial neural networks (ANNs). In the hardware configuration, gated Schottky diodes (GSDs), whose output current saturates with respect to the input voltage, are used as synaptic devices. We design the SNN system around the proposed on-chip training scheme with the GSDs, which can update their conductance in parallel to speed up the overall system. The performance of the on-chip training SNN system is validated through MNIST classification as a function of network size and total number of time steps. The SNN system achieves an accuracy of 97.83% with 1 hidden layer and 98.44% with 4 hidden layers in fully connected networks. We then evaluate the effect of the non-linearity and asymmetry of the conductance response for long-term potentiation (LTP) and long-term depression (LTD) on the performance of the on-chip training SNN system. In addition, the impact of device variations on system performance is evaluated.
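
As an illustration of the kind of update the abstract describes, the sketch below shows one way a stochastic spiking layer's firing probability can stand in for an activation, so a backpropagation-style weight update can be approximated on binary spikes. This is a minimal NumPy sketch, not the authors' circuit-level scheme; the sigmoid firing probability, layer shapes, and learning rate are assumptions for illustration.

```python
# Minimal sketch (not the authors' circuit-level scheme): a stochastic
# spiking layer whose firing probability serves as a smooth surrogate for
# the non-differentiable spike during a backprop-style weight update.
import numpy as np

rng = np.random.default_rng(0)

def stochastic_spikes(v):
    """Emit binary spikes with probability given by a sigmoid of the potential."""
    p = 1.0 / (1.0 + np.exp(-v))           # firing probability
    return (rng.random(p.shape) < p).astype(float), p

def train_step(x, y, W1, W2, lr=0.05):
    # Forward pass with stochastic binary hidden activations (one time step).
    s1, p1 = stochastic_spikes(x @ W1)      # hidden spikes and their probabilities
    logits = s1 @ W2
    out = np.exp(logits - logits.max(1, keepdims=True))
    out /= out.sum(1, keepdims=True)

    # Backward pass: propagate the error through the firing probability
    # instead of the binary spike itself.
    d_logits = (out - y) / x.shape[0]
    dW2 = s1.T @ d_logits
    d_hidden = (d_logits @ W2.T) * p1 * (1.0 - p1)
    dW1 = x.T @ d_hidden

    W1 -= lr * dW1
    W2 -= lr * dW2
    return W1, W2
```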

2.
J Nanosci Nanotechnol ; 20(11): 6603-6608, 2020 11 01.
Article in English | MEDLINE | ID: mdl-32604482

ABSTRACT

Deep learning delivers state-of-the-art results in various machine learning tasks, but for applications that require real-time inference, the high computational cost of deep neural networks becomes an efficiency bottleneck. To overcome this cost, spiking neural networks (SNNs) have been proposed. Herein, we propose a hardware implementation of an SNN with gated Schottky diodes as synaptic devices. In addition, we apply L1 regularization for connection pruning of the deep spiking neural network. Because L1 regularization prunes the weights through the cost function itself, no re-training procedure is needed. The compressed hardware-based SNN is energy efficient while achieving a classification accuracy of 97.85%, comparable to the 98.13% of the software deep neural network (DNN).
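
The sketch below illustrates the pruning idea in its simplest software form: an L1 term is added to the training gradient so small weights are driven toward zero, and connections below a magnitude threshold are then removed without a re-training pass. The penalty strength and threshold values are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch of L1-based connection pruning; lambda and the threshold
# are illustrative values, not the values used in the paper.
import numpy as np

def l1_gradient(w, grad_loss, lam=1e-4):
    """Gradient of loss + lam*|w|: the L1 term pushes small weights toward zero."""
    return grad_loss + lam * np.sign(w)

def prune(w, threshold=1e-3):
    """Zero out connections whose trained magnitude fell below the threshold.
    Because L1 was part of the cost function, no re-training pass is needed."""
    mask = np.abs(w) >= threshold
    return w * mask, mask

# Example: report how sparse a (randomly initialized) weight matrix becomes.
w = np.random.default_rng(1).normal(scale=0.05, size=(784, 200))
w_pruned, mask = prune(w)
print(f"kept {mask.mean():.1%} of connections")
```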

3.
J Nanosci Nanotechnol ; 20(7): 4138-4142, 2020 07 01.
Article in English | MEDLINE | ID: mdl-31968431

ABSTRACT

NAND flash memory, a mature technology, offers high density and large storage capacity per chip because its cells are connected in series between a bit line and a source line. A NAND flash cell can therefore be used as a synaptic device, which is very useful for high-density synaptic arrays. In this paper, the effect of the word-line bias on the linearity of the multi-level conductance steps of a NAND flash cell is investigated. A 3-layer perceptron network (784×200×10) is trained with a weight-update method suited to NAND flash memory, using the MNIST data set. The linearity of the multi-level conductance steps improves as the word-line bias increases from Vth - 0.5 V to Vth + 1 V at a fixed bit-line bias of 0.2 V. As a result, the learning accuracy also improves as the word-line bias increases over this range.
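
The effect described in the abstract can be pictured with a common behavioral model of potentiation: identical program pulses produce conductance steps whose linearity is set by a single nonlinearity parameter. In the sketch below, that parameter stands in for the influence of the word-line bias (a larger bias giving more linear steps); this is an assumed generic model, not the paper's measured NAND-cell data.

```python
# Behavioral sketch of multi-level conductance steps using a common
# exponential LTP model; the nonlinearity parameter "nu" is an assumption
# standing in for the effect of word-line bias, not measured device data.
import numpy as np

def conductance_steps(g_min, g_max, n_steps, nu):
    """Return conductance after each of n_steps identical program pulses.
    nu -> 0 gives nearly linear steps; larger nu gives saturating, nonlinear steps."""
    n = np.arange(n_steps + 1)
    if abs(nu) < 1e-9:
        return g_min + (g_max - g_min) * n / n_steps
    return g_min + (g_max - g_min) * (1 - np.exp(-nu * n / n_steps)) / (1 - np.exp(-nu))

# Compare a strongly nonlinear response with a nearly linear one (32 levels).
nonlinear = conductance_steps(1e-9, 1e-6, 32, nu=5.0)
near_linear = conductance_steps(1e-9, 1e-6, 32, nu=0.5)
```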

4.
J Nanosci Nanotechnol ; 19(10): 6135-6138, 2019 10 01.
Article in English | MEDLINE | ID: mdl-31026923

ABSTRACT

A gated Schottky diode with a field-plate structure is proposed and investigated as a new low-power synaptic device in which the forward current of the Schottky diode is suppressed. In a hardware-based neural network, unwanted forward current can flow through gated Schottky diode-type synaptic devices during integration operations, possibly causing the network to malfunction and increasing power consumption. By adopting the field-plate structure, a virtual p-n junction that suppresses the forward current of the Schottky diode is formed in the poly-Si active layer. As a result, the unwanted forward current of the gated Schottky diode is successfully reduced to less than 1 pA/µm.

5.
Nanotechnology ; 30(3): 032001, 2019 Jan 18.
Article in English | MEDLINE | ID: mdl-30422812

ABSTRACT

In this paper, we review recent trends in neuromorphic computing using emerging memory technologies. Two representative learning algorithms used to implement hardware-based neural networks are described: bio-inspired learning algorithms and software-based learning, in particular back-propagation. The requirements a synaptic device must meet for each algorithm are analyzed. We then review research trends in synaptic devices for implementing artificial neural networks.

6.
Front Neurosci ; 12: 704, 2018.
Article in English | MEDLINE | ID: mdl-30356702

ABSTRACT

Hardware-based spiking neural networks (SNNs) that mimic biological neurons have been reported. However, conventional neuron circuits in SNNs have a large area and high power consumption. In this work, a split-gate floating-body positive-feedback (PF) device with charge-trapping capability is proposed as a new neuron device that imitates the integrate-and-fire function. Because of the PF characteristic, the subthreshold swing (SS) of the device is less than 0.04 mV/dec. This super-steep SS leads to a low energy consumption of ∼0.25 pJ/spike for a neuron circuit (PF neuron) built with the PF device, which is ∼100 times lower than that of a conventional neuron circuit. The charge-storage properties of the device mimic the integrate function of biological neurons without a large membrane capacitor, reducing the PF neuron area to about 1/17 that of a conventional neuron. Through simulation, we demonstrate the successful operation of a dense multi-PF-neuron system with reset and lateral inhibition using a common self-controller in a neuron layer. With the multi-PF-neuron system and a synapse array, on-line unsupervised pattern learning and recognition are successfully performed, demonstrating the feasibility of our PF device in a neural network.
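
For reference, the behavior the PF neuron is said to imitate is the standard integrate-and-fire function with a common reset and lateral inhibition across a neuron layer. The sketch below is a minimal software rendition of that behavior, assuming illustrative thresholds and weights; it is not a model of the device or the self-controller circuit.

```python
# Minimal integrate-and-fire sketch of the behavior the PF neuron mimics:
# charge accumulates per input spike, a spike is emitted at threshold, and
# lateral inhibition plus a common reset is applied across the neuron layer.
# Threshold and weights are illustrative values, not device parameters.
import numpy as np

def integrate_and_fire(input_spikes, weights, threshold=1.0):
    """input_spikes: (T, n_in) binary array; weights: (n_in, n_neurons)."""
    n_neurons = weights.shape[1]
    v = np.zeros(n_neurons)                  # stored "charge" per neuron
    out = np.zeros((input_spikes.shape[0], n_neurons))
    for t, spikes in enumerate(input_spikes):
        v += spikes @ weights                # integrate weighted input charge
        if (v >= threshold).any():
            winner = np.argmax(v)            # lateral inhibition: one winner spikes
            out[t, winner] = 1.0
            v[:] = 0.0                       # common reset across the neuron layer
    return out
```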
