1.
Article in English | MEDLINE | ID: mdl-37847629

ABSTRACT

In this article, we investigate the boundedness and convergence of the online gradient method with smoothing group L1/2 regularization for the sigma-pi-sigma neural network (SPSNN). This regularization enhances the sparseness of the network and improves its generalization ability. With the original group L1/2 regularization, the error function is nonconvex and nonsmooth, which can cause oscillation of the error function during training. To ameliorate this drawback, we propose a simple and effective smoothing technique that eliminates this deficiency. The group L1/2 regularization optimizes the network structure in two respects: redundant hidden nodes tend to zero, and redundant weights of the surviving hidden nodes tend to zero. This article establishes strong and weak convergence results for the proposed method and proves the boundedness of the weights. Experimental results clearly demonstrate the capability of the proposed method and the effectiveness of its redundancy control, and the simulations support the theoretical results.
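
To make the mechanism concrete, the sketch below implements a smoothed group L1/2 penalty in NumPy: the group norm is replaced near zero by a quartic polynomial, so the penalty gradient stays finite and cannot oscillate. This is a minimal illustration, not the paper's implementation; the smoothing half-width A, the strength LAM, the step size eta, and the quartic constants are all assumed values.

    import numpy as np

    A = 0.1      # smoothing half-width near the origin (assumed value)
    LAM = 1e-3   # regularization strength (assumed value)

    def smooth_norm(w):
        """C^1 surrogate for the group norm ||w||: equals ||w|| when
        ||w|| >= A, and a quartic polynomial inside (-A, A)."""
        n = np.linalg.norm(w)
        if n >= A:
            return n
        return -n**4 / (8 * A**3) + 3 * n**2 / (4 * A) + 3 * A / 8

    def smooth_norm_grad(w):
        """Gradient of smooth_norm with respect to w; finite everywhere,
        including at w = 0, where it vanishes."""
        n = np.linalg.norm(w)
        if n >= A:
            return w / n
        return (-n**2 / (2 * A**3) + 3 / (2 * A)) * w

    def penalty(groups):
        """Smoothed group L1/2 penalty:
        LAM * sum over groups of sqrt(smooth_norm(w_g))."""
        return LAM * sum(np.sqrt(smooth_norm(g)) for g in groups)

    def online_step(groups, loss_grads, eta=0.05):
        """One online gradient step on loss + penalty, per weight group."""
        return [g - eta * (dg + LAM * smooth_norm_grad(g)
                           / (2 * np.sqrt(smooth_norm(g))))
                for g, dg in zip(groups, loss_grads)]

Because smooth_norm(0) = 3A/8 > 0, the square root remains differentiable even when a whole group has been driven to zero; a bounded penalty gradient of this kind is the sort of property the convergence analysis relies on.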

2.
Nanotechnology ; 31(21): 215702, 2020 May 22.
Article in English | MEDLINE | ID: mdl-32032008

ABSTRACT

Pyramidal SnO/CeO2 nano-heterojunction photocatalysts were successfully synthesized via a facile hydrothermal method, and their structure was investigated with standard characterization tools. The SnO content affected both the morphology and the photocatalytic performance of the SnO/CeO2 nano-heterojunctions: as the SnO content increased, the morphology of the samples changed from a spherical to a pyramidal structure. The photocurrent of the SnO/CeO2 (1:6) sample was about 36 times that of pure CeO2. With SnO/CeO2 (1:6) as the photocatalyst, 99% of tetracycline (TC) was degraded within 140 min under visible light, and after five cycling tests the photocatalytic efficiency for TC remained at 98%, which suggests that the SnO/CeO2 (1:6) nano-heterojunction has high photocatalytic efficiency and stable photocatalytic performance. These results indicate that the SnO/CeO2 (1:6) nano-heterojunction holds broad prospects for industrial application.

3.
Springerplus ; 5: 295, 2016.
Article in English | MEDLINE | ID: mdl-27066332

ABSTRACT

This paper presents new theoretical results on the backpropagation algorithm with smoothing L1/2 regularization and adaptive momentum for feedforward neural networks with a single hidden layer: we show that the gradient of the error function goes to zero and that the weight sequence converges to a fixed point as the number of iteration steps n tends to infinity. Our results are also more general, since we do not require the error function to be quadratic or uniformly convex, and the conditions on the neuronal activation functions are relaxed. Moreover, compared with existing algorithms, the proposed algorithm yields a sparser network structure: it forces weights to become smaller during the training so that they can eventually be removed after the training, which simplifies the network structure and lowers operation time. Finally, two numerical experiments illustrate the main results in detail.
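
As a rough sketch of how such an update could look (reusing the quartic smoothing above; the paper's exact smoothing function and adaptive-momentum rule are not given here, so the rule below is an assumption for illustration), one training step might be:

    import numpy as np

    def train_step(w, v, grad_E, eta=0.05, lam=1e-3, a=0.1, mu_max=0.9):
        """One backprop step with an elementwise smoothed L1/2 penalty
        and a simple adaptive momentum coefficient.

        w: weight vector, v: previous update (momentum buffer),
        grad_E: gradient of the plain error function at w."""
        # Smoothed |w| near the origin, and its derivative.
        h = np.where(np.abs(w) >= a, np.abs(w),
                     -w**4 / (8 * a**3) + 3 * w**2 / (4 * a) + 3 * a / 8)
        dh = np.where(np.abs(w) >= a, np.sign(w),
                      -w**3 / (2 * a**3) + 3 * w / (2 * a))
        grad = grad_E + lam * dh / (2.0 * np.sqrt(h))
        # Adaptive momentum heuristic (an assumption): keep momentum only
        # while the buffered update still points in a descent direction.
        mu = mu_max if np.dot(v.ravel(), -grad.ravel()) > 0 else 0.0
        v = mu * v - eta * grad
        return w + v, v

Starting from v = np.zeros_like(w), repeated calls shrink redundant weights toward zero while the momentum term accelerates the surviving ones.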

4.
Neural Netw ; 50: 72-8, 2014 Feb.
Article in English | MEDLINE | ID: mdl-24291693

ABSTRACT

The aim of this paper is to develop a novel method for pruning feedforward neural networks by introducing an L1/2 regularization term into the error function. This procedure forces weights to become smaller during the training so that they can eventually be removed after the training. The usual L1/2 regularization term involves absolute values and is not differentiable at the origin, which typically causes oscillation of the gradient of the error function during the training. A key point of this paper is to modify the usual L1/2 regularization term by smoothing it at the origin. This approach offers three advantages: first, it removes the oscillation of the gradient; second, it gives better pruning, in that the final weights to be removed are smaller than those produced by the usual L1/2 regularization; third, it makes it possible to prove the convergence of the training. Supporting numerical examples are also provided.
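
In symbols, the modification amounts to replacing |w_i| inside the penalty with a smooth surrogate f before taking the square root. One common quartic choice (an assumption; the paper's exact constants may differ) reads:

    \[
      E(\mathbf{w}) = \tilde{E}(\mathbf{w}) + \lambda \sum_i \sqrt{f(w_i)},
      \qquad
      f(x) =
      \begin{cases}
        |x|, & |x| \ge a, \\
        -\dfrac{x^4}{8a^3} + \dfrac{3x^2}{4a} + \dfrac{3a}{8}, & |x| < a.
      \end{cases}
    \]

Here \tilde{E} is the plain error function. Since f is even, continuously differentiable, and equals 3a/8 > 0 at the origin, \sqrt{f} is smooth there; this removes the oscillation of the gradient and is what makes a convergence proof feasible.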


Subject(s)
Algorithms , Learning , Neural Networks, Computer , Computer Simulation , Feedback , Humans