1.
IEEE Trans Neural Netw Learn Syst; 31(11): 4500-4511, 2020 Nov.
Article in English | MEDLINE | ID: mdl-31880565

ABSTRACT

Stochastic gradient descent (SGD) is one of the core techniques behind the success of deep neural networks. The gradient provides information on the direction in which a function has the steepest rate of change. The main problem with basic SGD is that it updates all parameters with equal-sized steps, irrespective of gradient behavior. Hence, an efficient way to optimize deep networks is to use an adaptive step size for each parameter. Recently, several attempts have been made to improve gradient descent methods, such as AdaGrad, AdaDelta, RMSProp, and adaptive moment estimation (Adam). These methods rely on the square roots of exponential moving averages of squared past gradients and thus do not take advantage of the local change in gradients. In this article, a novel optimizer is proposed based on the difference between the present and the immediate past gradient (i.e., diffGrad). In the proposed diffGrad optimization technique, the step size is adjusted for each parameter such that parameters whose gradients change quickly receive a larger step size, while parameters whose gradients change slowly receive a smaller one. The convergence analysis is carried out using the regret bound approach of the online learning framework. A thorough analysis is performed on three synthetic complex nonconvex functions. Image categorization experiments are also conducted on the CIFAR10 and CIFAR100 data sets to compare diffGrad against state-of-the-art optimizers such as SGDM, AdaGrad, AdaDelta, RMSProp, AMSGrad, and Adam. A residual-unit (ResNet)-based convolutional neural network (CNN) architecture is used in the experiments. The experiments show that diffGrad outperforms the other optimizers. We also show that diffGrad performs uniformly well when training CNNs with different activation functions. The source code is made publicly available at https://github.com/shivram1987/diffGrad.
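
As a reading aid, the per-parameter idea described above can be sketched as a PyTorch optimizer: an Adam-style update whose first moment is scaled by a "friction" term sigmoid(|g_{t-1} - g_t|), so that parameters with rapidly changing gradients take larger effective steps. This is a minimal sketch inferred from the abstract only; the class name DiffGradSketch and the default hyperparameters are illustrative assumptions, and the authors' reference implementation is at the GitHub URL above.

# Minimal diffGrad-style sketch (assumed from the abstract, not the authors' reference code).
import torch
from torch.optim.optimizer import Optimizer


class DiffGradSketch(Optimizer):
    def __init__(self, params, lr=1e-3, betas=(0.9, 0.999), eps=1e-8):
        super().__init__(params, dict(lr=lr, betas=betas, eps=eps))

    @torch.no_grad()
    def step(self, closure=None):
        loss = None
        if closure is not None:
            with torch.enable_grad():
                loss = closure()
        for group in self.param_groups:
            beta1, beta2 = group["betas"]
            for p in group["params"]:
                if p.grad is None:
                    continue
                grad = p.grad
                state = self.state[p]
                if len(state) == 0:
                    state["step"] = 0
                    state["exp_avg"] = torch.zeros_like(p)     # first moment (Adam-style)
                    state["exp_avg_sq"] = torch.zeros_like(p)  # second moment (Adam-style)
                    state["prev_grad"] = torch.zeros_like(p)   # immediate past gradient
                state["step"] += 1
                t = state["step"]
                exp_avg, exp_avg_sq = state["exp_avg"], state["exp_avg_sq"]
                exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1)
                exp_avg_sq.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)
                # Friction coefficient: close to 1 when the gradient changed a lot since the
                # previous step, close to 0.5 when the gradient is nearly unchanged.
                xi = torch.sigmoid((state["prev_grad"] - grad).abs())
                state["prev_grad"].copy_(grad)
                denom = (exp_avg_sq / (1 - beta2 ** t)).sqrt().add_(group["eps"])
                step_size = group["lr"] / (1 - beta1 ** t)
                p.addcdiv_(xi * exp_avg, denom, value=-step_size)
        return loss

In use, such a class would mirror any torch.optim optimizer, e.g. opt = DiffGradSketch(model.parameters(), lr=1e-3) followed by the usual loss.backward() and opt.step() loop; setting xi to 1 everywhere recovers plain Adam.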

2.
Mol Biol Rep; 44(3): 281-287, 2017 Jul.
Article in English | MEDLINE | ID: mdl-28551733

ABSTRACT

The purpose of this study was to develop a novel reverse transcriptase loop-mediated isothermal amplification (RT-LAMP)-based assay for in vitro profiling of heat shock protein 70 (Hsp70) in a bovine peripheral blood mononuclear cell (PBMC) culture model, utilizing the absorbance level of magnesium pyrophosphate, a by-product of the LAMP reaction. A set of bovine Hsp70-specific RT-LAMP primers was designed to detect the differential absorbance of the magnesium pyrophosphate by-product, which reflects the degree of Hsp70 amplification from cDNA of thermally induced cultured cells at different recovery periods. The study revealed a significant (P < 0.05) correlation between the absorbance level and the fold change of Hsp70 transcripts at different kinetic intervals of heat stress recovery in the bovine PBMC culture model. The RT-LAMP-based absorbance assay can be used as an indicator of the level of bovine Hsp70 transcripts produced during thermal stress and can serve as an alternative to the traditional real-time PCR assay. The developed RT-LAMP assay offers a cost-effective method for profiling of the bovine HSP70 gene.


Subject(s)
Cattle/metabolism; HSP70 Heat-Shock Proteins/analysis; Leukocytes, Mononuclear/metabolism; Nucleic Acid Amplification Techniques/methods; Real-Time Polymerase Chain Reaction/methods; Animals; Cells, Cultured; Gene Expression; HSP70 Heat-Shock Proteins/genetics; Nucleic Acid Amplification Techniques/economics; RNA, Messenger