Results 1 - 3 of 3
1.
IEEE Trans Neural Netw Learn Syst ; 34(10): 7350-7364, 2023 Oct.
Article in English | MEDLINE | ID: mdl-35073273

ABSTRACT

Since sparse neural networks usually contain many zero weights, these unnecessary network connections can potentially be eliminated without degrading network performance. Well-designed sparse neural networks therefore have the potential to significantly reduce the number of floating-point operations (FLOPs) and the computational resources required. In this work, we propose a new automatic pruning method, sparse connectivity learning (SCL). Specifically, a weight is reparameterized as an elementwise multiplication of a trainable weight variable and a binary mask. Network connectivity is thus fully described by the binary mask, which is modulated by a unit step function. We theoretically establish the fundamental principle for using a straight-through estimator (STE) in network pruning: the proxy gradients of the STE should be positive, ensuring that the mask variables converge to their minima. After showing that the Leaky ReLU, Softplus, and identity STEs satisfy this principle, we adopt the identity STE in SCL for discrete mask relaxation. We find that the mask gradients of different features are very unbalanced; hence, we normalize the mask gradients of each feature to optimize mask variable training. To train sparse masks automatically, we include the total number of network connections as a regularization term in our objective function. Because SCL requires no designer-defined pruning criteria or per-layer hyperparameters, the network is explored in a larger hypothesis space to achieve optimized sparse connectivity for the best performance. SCL overcomes the limitations of existing automatic pruning methods. Experimental results demonstrate that SCL can automatically learn and select important network connections for various baseline network structures. Deep learning models trained by SCL outperform state-of-the-art human-designed and automatic pruning methods in sparsity, accuracy, and FLOPs reduction.
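As a rough illustration of the reparameterization the abstract describes, the forward pass and an identity-STE backward pass for one linear layer might be sketched in NumPy as follows. The sparsity weight `lam` and the per-feature normalization are simplified stand-ins for the paper's formulation, not its actual equations:

```python
import numpy as np

def heaviside(s):
    # Binary mask: a connection is active when its mask variable is non-negative.
    return (s >= 0).astype(s.dtype)

def masked_forward(w, s, x):
    """Effective weight = trainable weight * binary mask (elementwise),
    so network connectivity is fully described by the mask variables s."""
    return (w * heaviside(s)) @ x

def identity_ste_grads(w, s, x, grad_out, lam=1e-3):
    """Backward-pass sketch. The unit step has zero gradient almost
    everywhere, so the identity STE treats d(mask)/d(s) as 1.
    A sparsity term lam * sum(mask), standing in for the connection-count
    regularizer, contributes lam through the same identity STE."""
    mask = heaviside(s)
    grad_w_eff = np.outer(grad_out, x)   # dL/d(effective weight)
    grad_w = grad_w_eff * mask           # gradient for the weight variables
    grad_s = grad_w_eff * w + lam        # identity STE proxy + sparsity term
    # Normalize mask gradients per output feature (row), since the abstract
    # reports these gradients are very unbalanced across features.
    norm = np.abs(grad_s).sum(axis=1, keepdims=True) + 1e-12
    return grad_w, grad_s / norm

w = np.array([[1.0, 2.0], [3.0, 4.0]])
s = np.array([[1.0, -1.0], [-1.0, 1.0]])   # half the connections pruned
x = np.array([1.0, 1.0])
print(masked_forward(w, s, x))             # only unmasked weights contribute
```

The mask variables would then be updated by ordinary gradient descent on `grad_s`, letting connections switch on and off during training rather than being pruned by a hand-set threshold.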

2.
IEEE Trans Neural Netw Learn Syst ; 33(10): 6021-6029, 2022 Oct.
Article in English | MEDLINE | ID: mdl-33914687

ABSTRACT

Singular value decomposition (SVD) is one of the most effective algorithms in recommender systems (RSs). Because SVD algorithms are iterative, initialization is a major challenge with a strong impact on the convergence and performance of RSs. Unfortunately, existing SVD algorithms in the literature typically initialize the user and item features randomly, so the information in the data is not fully exploited. This work addresses the challenge of developing an efficient initialization method for SVD algorithms. We propose a general neural embedding initialization framework in which a low-complexity probabilistic autoencoder network initializes the user and item features. The framework supports both explicit and implicit feedback data sets. The design details of the proposed framework are elaborated and discussed. Experimental results show that RSs based on our initialization framework outperform state-of-the-art methods in rating prediction. Moreover, in item ranking, the proposed framework improves on existing SVD algorithms and other matrix factorization methods in the literature by at least 2.20% and up to 5.74%.
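The idea of replacing random initialization with learned embeddings can be sketched with a plain linear autoencoder: compress each user's rating row to k latent features and use those codes as the starting user factors (running the same procedure on the transposed matrix yields item factors). The paper uses a probabilistic autoencoder; this linear, deterministic version only illustrates the principle, and all names and hyperparameters here are illustrative:

```python
import numpy as np

def autoencoder_init(R, k, epochs=200, lr=0.01, seed=0):
    """Sketch: train a linear autoencoder R -> H -> R_hat by gradient
    descent on squared reconstruction error, then return the latent
    codes H = R @ W_enc as initial factors for an SVD-style model."""
    rng = np.random.default_rng(seed)
    n, m = R.shape
    W_enc = rng.normal(scale=0.1, size=(m, k))  # encoder weights
    W_dec = rng.normal(scale=0.1, size=(k, m))  # decoder weights
    for _ in range(epochs):
        H = R @ W_enc                 # latent codes, shape (n, k)
        err = H @ W_dec - R           # reconstruction error
        g_dec = H.T @ err / n         # gradient w.r.t. decoder
        g_enc = R.T @ (err @ W_dec.T) / n  # gradient w.r.t. encoder
        W_dec -= lr * g_dec
        W_enc -= lr * g_enc
    return R @ W_enc                  # data-driven initial features

# Hypothetical usage: initialize both factor matrices, then run the
# usual iterative SVD/matrix-factorization updates from this start.
R = np.random.default_rng(1).uniform(0.0, 1.0, size=(6, 4))
U0 = autoencoder_init(R, 2)      # initial user features, (6, 2)
V0 = autoencoder_init(R.T, 2)    # initial item features, (4, 2)
```

The point of the initialization is purely the starting position: the downstream SVD iterations are unchanged, but they begin from features that already reflect the rating data rather than from noise.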

3.
ACS Appl Mater Interfaces ; 11(51): 48029-48038, 2019 Dec 26.
Article in English | MEDLINE | ID: mdl-31789034

ABSTRACT

The development of the information age has made resistive random access memory (RRAM) a critical nanoscale memristor device (MD). However, because of the randomness of the region where the conductive filaments (CFs) form, RRAM MDs still suffer from insufficient reliability. In this study, a memristor with an Ag/ZrO2/WS2/Pt structure is proposed for the first time: a layer of two-dimensional (2D) WS2 nanosheets is inserted into the MD to form a 2D-material/oxide double-layer MD (2DOMD) that improves the reliability of single-layer devices. The results indicate that the electrochemical metallization memory cell exhibits highly stable memristive switching with concentrated ON- and OFF-state voltage distributions, high speed (∼10 ns), and robust endurance (>10^9 cycles). This performance is superior to MDs with a single-layer ZrO2 or WS2 film because the two layers have different ion transport rates, confining the rupture/rejuvenation of CFs to the bilayer interface region and thereby greatly reducing the randomness of CFs in the MD. Moreover, we used a handwritten digit recognition dataset (the Modified National Institute of Standards and Technology (MNIST) database) for neuromorphic simulations. Furthermore, biosynaptic functions and plasticity, including spike-timing-dependent plasticity and paired-pulse facilitation, have been successfully demonstrated. By incorporating 2D materials and oxides into a double-layer MD, the practical applicability of RRAM MDs can be significantly enhanced, facilitating the development of artificial synapses for brain-inspired computing systems.
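For readers unfamiliar with the spike-timing-dependent plasticity (STDP) rule the abstract reports emulating, the standard exponential form can be sketched as follows: the synaptic weight is potentiated when the presynaptic spike precedes the postsynaptic one and depressed otherwise, with a magnitude that decays with the timing gap. The parameter values here are generic textbook choices, not measurements from this device:

```python
import numpy as np

def stdp_delta_w(dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Weight change for a pre/post spike pair separated by
    dt = t_post - t_pre (in ms). dt >= 0 (pre before post) gives
    potentiation; dt < 0 gives depression. Amplitudes a_plus/a_minus
    and time constant tau are illustrative, not from the paper."""
    dt = np.asarray(dt, dtype=float)
    return np.where(dt >= 0,
                    a_plus * np.exp(-dt / tau),     # potentiation branch
                    -a_minus * np.exp(dt / tau))    # depression branch
```

In a memristive synapse, this curve is realized physically: the relative timing of voltage pulses across the device modulates its conductance, which plays the role of the synaptic weight.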
