Results 1 - 3 of 3
1.
IEEE Trans Neural Netw Learn Syst ; 34(8): 4130-4138, 2023 Aug.
Article in English | MEDLINE | ID: mdl-34752408

ABSTRACT

The k-winners-take-all (k-WTA) problem refers to the selection of the k winners with the k largest inputs over a group of n neurons, where each neuron has an input. In existing k-WTA neural network models, the positive integer k is explicitly given in the corresponding mathematical models. In this article, we consider another case, where the number k in the k-WTA problem is implicitly specified by the initial states of the neurons. Based on a constraint conversion for a classical optimization formulation of the k-WTA problem, and by modifying traditional gradient descent, we propose an initialization-based k-WTA neural network model with only n neurons for n-dimensional inputs, whose dynamics are described by parameterized gradient descent. Theoretical results show that the state vector of the proposed k-WTA neural network model globally asymptotically converges to the theoretical k-WTA solution under mild conditions. Simulation examples demonstrate the effectiveness of the proposed model and indicate that its convergence can be accelerated by suitably setting two design parameters.
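The combinatorial problem the abstract solves with neural dynamics can be stated directly: mark the k entries of an input vector with the largest values. The sketch below is a plain NumPy selection, not the paper's initialization-based gradient-descent model; the function name `k_wta` is illustrative.

```python
import numpy as np

def k_wta(inputs, k):
    """Return a binary vector marking the k largest inputs (the winners).

    This is the static definition of the k-WTA output; the paper instead
    obtains it as the equilibrium of a neural-network dynamical system.
    """
    inputs = np.asarray(inputs, dtype=float)
    winners = np.zeros_like(inputs)
    # argpartition finds the indices of the k largest entries in O(n)
    idx = np.argpartition(inputs, -k)[-k:]
    winners[idx] = 1.0
    return winners

u = [0.3, 1.2, -0.5, 0.9, 0.1]
print(k_wta(u, 2))  # marks the entries 1.2 and 0.9
```

The neural-network formulation is preferred over direct sorting when the selection must be computed by analog circuitry or must track time-varying inputs, which is the setting the paper targets.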

2.
IEEE Trans Neural Netw Learn Syst ; 30(5): 1360-1369, 2019 May.
Article in English | MEDLINE | ID: mdl-30281486

ABSTRACT

Conjugate gradient (CG) methods are an important class of methods for solving linear equations and nonlinear optimization problems. In this paper, we propose a new stochastic CG algorithm with variance reduction, and we prove its linear convergence with the Fletcher-Reeves method for strongly convex and smooth functions. We experimentally demonstrate that the CG-with-variance-reduction algorithm converges faster than its counterparts on four learning models, which may be convex, nonconvex, or nonsmooth. In addition, its area-under-the-curve (AUC) performance on six large-scale data sets is comparable to that of the LIBLINEAR solver for the L2-regularized L2-loss, but with a significant improvement in computational efficiency. The CGVR algorithm is available on GitHub: https://github.com/xbjin/cgvr.
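For context on the baseline the paper builds on, here is the classical deterministic CG iteration for a symmetric positive-definite linear system, with the Fletcher-Reeves-style coefficient beta. This is only the textbook method, not the paper's stochastic variance-reduced variant (for that, see the GitHub link above).

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10):
    """Solve A x = b for symmetric positive-definite A by classical CG."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual, equal to the negative gradient
    d = r.copy()           # initial search direction
    for _ in range(len(b)):
        Ad = A @ d
        alpha = (r @ r) / (d @ Ad)      # exact line search step
        x = x + alpha * d
        r_new = r - alpha * Ad
        if np.linalg.norm(r_new) < tol:
            break
        beta = (r_new @ r_new) / (r @ r)  # Fletcher-Reeves-type coefficient
        d = r_new + beta * d              # conjugate direction update
        r = r_new
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_star = conjugate_gradient(A, b)  # exact solution is [0.2, 0.4]
```

The paper's contribution replaces the exact gradient with a variance-reduced stochastic estimate, which is what makes CG practical when the objective is a sum over a large data set.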

3.
IEEE Trans Neural Netw Learn Syst ; 26(9): 1979-91, 2015 Sep.
Article in English | MEDLINE | ID: mdl-25376047

ABSTRACT

Despite the great success of graph-based transductive learning methods, most of them have serious problems with scalability and robustness. In this paper, we propose an efficient and robust graph-based transductive classification method, called minimum tree cut (MTC), which is suitable for large-scale data. Motivated by the sparse representation of graphs, we approximate a graph by a spanning tree. Exploiting this simple structure, we develop a linear-time algorithm that labels the tree so as to minimize its cut size. This significantly improves on graph-based methods, which typically have polynomial time complexity. Moreover, we show both theoretically and empirically that the performance of MTC is robust to the graph construction, overcoming another major problem of traditional graph-based methods. Extensive experiments on public data sets, along with applications to web-spam detection and interactive image segmentation, demonstrate our method's advantages in terms of accuracy, speed, and robustness.
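The tree-labeling step described above can be illustrated with a much-simplified stand-in: propagate seed labels over a spanning tree by BFS, so each unlabeled node takes the label of its nearest seed. This is not the paper's dynamic-programming minimum-cut labeling, only a sketch of why labeling a tree is linear-time; the `adj` and `seeds` structures are hypothetical.

```python
from collections import deque

def label_tree(adj, seeds):
    """Propagate seed labels over a tree by BFS (nearest-seed labeling).

    Each node is visited once and each edge at most twice, so the
    running time is linear in the size of the tree.
    """
    labels = dict(seeds)
    queue = deque(seeds)
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in labels:
                labels[v] = labels[u]
                queue.append(v)
    return labels

# Path-shaped spanning tree 0-1-2-3-4 with labeled seeds at both ends.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(label_tree(adj, {0: "spam", 4: "ham"}))
```

On this path the result cuts exactly one tree edge, which is also the minimum possible cut separating the two seeds; in general, MTC's labeling is chosen to minimize that cut size directly rather than by nearest-seed distance.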
