Results 1 - 8 of 8
1.
Neural Netw ; 147: 186-197, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35042156

ABSTRACT

This paper proposes an Information Bottleneck theory-based filter pruning method that uses a statistical measure called Mutual Information (MI). The MI between filters and class labels, also called Relevance, is computed using the filters' activation maps and the annotations. Filters with High Relevance (HRel) are considered more important; consequently, the least important filters, which have lower Mutual Information with the class labels, are pruned. Unlike existing MI-based pruning methods, the proposed method determines the significance of the filters purely from the relationship between their activation maps and the class labels. Architectures such as LeNet-5, VGG-16, ResNet-56, ResNet-110, and ResNet-50 are used to demonstrate the efficacy of the proposed pruning method on the MNIST, CIFAR-10, and ImageNet datasets, where it achieves state-of-the-art pruning results. In the experiments, we prune 97.98%, 84.85%, 76.89%, 76.95%, and 63.99% of the Floating Point Operations (FLOPs) from LeNet-5, VGG-16, ResNet-56, ResNet-110, and ResNet-50, respectively. The proposed HRel pruning method outperforms recent state-of-the-art filter pruning methods. Even after drastically pruning the filters in the convolutional layers of LeNet-5 (from 20 and 50 to 2 and 3, respectively), only a small accuracy drop of 0.52% is observed. Notably, for VGG-16, 94.98% of the parameters are removed with only a 0.36% drop in top-1 accuracy. ResNet-50 shows a 1.17% drop in top-5 accuracy after pruning 66.42% of the FLOPs. In addition to pruning, the Information Plane dynamics of Information Bottleneck theory are analyzed for various Convolutional Neural Network architectures under the effect of pruning. The code is available at https://github.com/sarvanichinthapalli/HRel.
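The relevance-ranking step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes each filter's activation map has already been summarized to one scalar per sample, and estimates mutual information by simple histogram binning (the function names are hypothetical).

```python
import numpy as np

def discrete_mutual_information(x, y):
    """I(X;Y) in nats between two integer-coded discrete variables."""
    joint = np.zeros((int(x.max()) + 1, int(y.max()) + 1))
    for xi, yi in zip(x, y):
        joint[xi, yi] += 1
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)   # marginal of X
    py = joint.sum(axis=0, keepdims=True)   # marginal of Y
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (px @ py)[nz])).sum())

def rank_filters_by_relevance(activations, labels, n_bins=8):
    """activations: (n_samples, n_filters) scalar summaries of each
    filter's activation map. Returns filter indices sorted by estimated
    MI with the labels, least relevant (pruned first) at the front."""
    relevance = []
    for f in range(activations.shape[1]):
        a = activations[:, f]
        edges = np.linspace(a.min(), a.max(), n_bins)[1:-1]
        relevance.append(discrete_mutual_information(np.digitize(a, edges), labels))
    return np.argsort(relevance)
```

A filter whose activations track the labels gets high MI and survives; a filter whose activations are independent of the labels gets MI near zero and is pruned first.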


Subject(s)
Neural Networks, Computer , Information Theory
2.
Neural Netw ; 133: 112-122, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33181405

ABSTRACT

Transfer learning enables solving a specific task with limited data by using deep networks pre-trained on large-scale datasets. Typically, while transferring the learned knowledge from the source task to the target task, the last few layers are fine-tuned (re-trained) on the target dataset. However, these layers were originally designed for the source task and might not be suitable for the target task. In this paper, we introduce a mechanism for automatically tuning Convolutional Neural Networks (CNNs) for improved transfer learning. The pre-trained CNN layers are tuned with knowledge from the target data using Bayesian Optimization. First, we train the final layer of the base CNN model, replacing the number of neurons in the softmax layer with the number of classes in the target task. Next, the CNN is tuned automatically by observing the classification performance on the validation data (greedy criterion). To evaluate the proposed method, experiments are conducted on three benchmark datasets: CalTech-101, CalTech-256, and Stanford Dogs. The classification results obtained with the proposed AutoTune method outperform the standard baseline transfer learning methods, achieving 95.92%, 86.54%, and 84.67% accuracy on CalTech-101, CalTech-256, and Stanford Dogs, respectively. These experimental results show that tuning the pre-trained CNN layers with knowledge from the target dataset confers better transfer learning ability. The source code is available at https://github.com/JekyllAndHyde8999/AutoTune_CNN_TransferLearning.
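The greedy tuning loop can be sketched as follows. This is a deliberately simplified stand-in, not the paper's method: it replaces Bayesian Optimization with an exhaustive search over a small hypothetical search space, scoring each candidate layer configuration with a user-supplied validation-accuracy function.

```python
from itertools import product

def autotune_layers(search_space, evaluate):
    """Greedily pick the layer configuration with the best validation
    score. `search_space` maps hyperparameter names to candidate values;
    `evaluate` returns the validation accuracy of one configuration."""
    keys = sorted(search_space)
    best_cfg, best_score = None, float("-inf")
    for values in product(*(search_space[k] for k in keys)):
        cfg = dict(zip(keys, values))
        score = evaluate(cfg)   # e.g., re-train briefly, measure val accuracy
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
```

Bayesian Optimization replaces the exhaustive loop with a surrogate model that proposes promising configurations, which matters when each evaluation means re-training a network.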


Subject(s)
Databases, Factual , Machine Learning , Neural Networks, Computer , Pattern Recognition, Automated/methods , Animals , Bayes Theorem , Dogs
3.
Indian J Pediatr ; 88(6): 562-567, 2021 06.
Article in English | MEDLINE | ID: mdl-33175364

ABSTRACT

OBJECTIVES: There is a paucity of studies evaluating blood pressure in children with sickle cell disease (SCD), and those available have shown inconsistent results. A few studies have documented lower office blood pressure (BP) in SCD patients, whereas others have shown masked hypertension and abnormal ambulatory BP monitoring (ABPM). Thus, the present study was conducted to examine 24 h ABPM parameters and renal dysfunction in children with SCD and compare them with healthy controls. METHODS: A cross-sectional study was conducted on 56 children (30 with SCD and 26 controls). ABPM and evaluation of renal function, including serum creatinine, serum urea, urinary creatinine, urinary protein, and specific gravity, were performed. RESULTS: The spot urinary protein-to-creatinine ratio was higher in patients with SCD (63.3%) than in controls (p < 0.001). Proteinuria was observed in one-fourth of the SCD patients under ten years of age. Masked hypertension was present in 2 (6.6%) patients, ambulatory hypertension in 4 (13.3%), ambulatory pre-hypertension in 1 (3.3%), and abnormal dipping in 60%. Statistically significant correlations were found between BMI-for-age Z-score and the standard deviation score (SDS/Z) of 24 h systolic BP (r = 0.56, p = 0.002), between estimated glomerular filtration rate (eGFR) and 24 h diastolic BP SDS (r = -0.52; p = 0.038), and between age and eGFR (r = 0.54; p = 0.025). CONCLUSIONS: The present study corroborates that ABPM abnormalities (ambulatory hypertension, non-dipping pattern, ambulatory pre-hypertension) and early-onset proteinuria are significant findings in patients with SCD. This underscores the importance of regular screening for proteinuria and ABPM in routine care for early detection and prevention of progressive renal damage in SCD.


Subject(s)
Anemia, Sickle Cell , Hypertension , Kidney Diseases , Anemia, Sickle Cell/complications , Blood Pressure , Blood Pressure Monitoring, Ambulatory , Child , Cross-Sectional Studies , Humans , Hypertension/diagnosis , Hypertension/etiology
4.
IEEE Trans Neural Netw Learn Syst ; 31(11): 4500-4511, 2020 11.
Article in English | MEDLINE | ID: mdl-31880565

ABSTRACT

Stochastic gradient descent (SGD) is one of the core techniques behind the success of deep neural networks. The gradient provides information on the direction in which a function has the steepest rate of change. The main problem with basic SGD is that it changes all parameters by equal-sized steps, irrespective of gradient behavior. Hence, an efficient way of optimizing deep networks is to use adaptive step sizes for each parameter. Recently, several attempts have been made to improve gradient descent, such as AdaGrad, AdaDelta, RMSProp, and adaptive moment estimation (Adam). These methods rely on the square roots of exponential moving averages of squared past gradients and thus do not take advantage of local changes in gradients. In this article, a novel optimizer is proposed based on the difference between the present and the immediate past gradient (i.e., diffGrad). In the proposed diffGrad optimization technique, the step size is adjusted for each parameter such that parameters with rapidly changing gradients take larger steps and parameters with slowly changing gradients take smaller steps. The convergence analysis is done using the regret-bound approach of the online learning framework. A thorough analysis is carried out over three synthetic, complex nonconvex functions. Image categorization experiments are also conducted on the CIFAR10 and CIFAR100 datasets to compare diffGrad with state-of-the-art optimizers such as SGDM, AdaGrad, AdaDelta, RMSProp, AMSGrad, and Adam. A residual unit (ResNet)-based convolutional neural network (CNN) architecture is used in the experiments. The experiments show that diffGrad outperforms the other optimizers and performs uniformly well when training CNNs with different activation functions. The source code is publicly available at https://github.com/shivram1987/diffGrad.
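The diffGrad update can be sketched as follows: it keeps Adam's bias-corrected moment estimates but scales the step by a per-parameter friction coefficient xi = sigmoid(|g_{t-1} - g_t|), so rapidly changing gradients yield near-full steps while slowly changing gradients are damped. The hyperparameter values and the toy quadratic objective below are illustrative choices, not from the paper.

```python
import numpy as np

def diffgrad_step(theta, grad, prev_grad, m, v, t,
                  lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One diffGrad update: Adam's moments scaled by a friction
    coefficient xi = sigmoid(|g_{t-1} - g_t|), computed per parameter."""
    m = beta1 * m + (1 - beta1) * grad          # first moment (EMA of gradients)
    v = beta2 * v + (1 - beta2) * grad ** 2     # second moment (EMA of squared grads)
    m_hat = m / (1 - beta1 ** t)                # bias correction
    v_hat = v / (1 - beta2 ** t)
    xi = 1.0 / (1.0 + np.exp(-np.abs(prev_grad - grad)))  # friction in (0.5, 1]
    theta = theta - lr * xi * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy run: minimize f(x) = x^2 starting from x = 5 (illustrative settings).
theta = np.array([5.0])
m = np.zeros_like(theta)
v = np.zeros_like(theta)
prev_grad = np.zeros_like(theta)
for t in range(1, 2001):
    grad = 2 * theta                            # gradient of x^2
    theta, m, v = diffgrad_step(theta, grad, prev_grad, m, v, t, lr=0.05)
    prev_grad = grad
```

Near the minimum the gradient barely changes between steps, so xi approaches 0.5 and the update is automatically damped relative to plain Adam.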

5.
Trop Doct ; 47(1): 60-63, 2017 Jan.
Article in English | MEDLINE | ID: mdl-27216226

ABSTRACT

Takayasu arteritis (TA) is a chronic inflammatory and obliterative disease of large vessels, mainly affecting the aorta and its major branches. TA can lead to renal failure and renovascular hypertension in 60% of patients; it is rare in children aged <10 years and, more rarely still, presents with malignant hypertension in the paediatric age group. Here we present the case of a 9-year-old boy with TA who presented with malignant hypertension and required surgical intervention to control his blood pressure. Subsequently, his medications were titrated using 24 h ambulatory blood pressure monitoring (ABPM), and he is doing well on follow-up.


Subject(s)
Hypertension, Malignant/etiology , Takayasu Arteritis/complications , Antihypertensive Agents/therapeutic use , Child , Humans , Hypertension, Malignant/diagnostic imaging , Hypertension, Malignant/drug therapy , Hypertension, Malignant/surgery , Male , Nephrectomy , Rare Diseases , Takayasu Arteritis/diagnosis
6.
IEEE Trans Image Process ; 25(9): 4018-32, 2016 09.
Article in English | MEDLINE | ID: mdl-27295674

ABSTRACT

The local binary pattern (LBP) is widely adopted for image feature description owing to its efficiency and simplicity. To describe color images, the LBPs from each channel of the image must be combined. The traditional way is to simply concatenate the LBPs from each channel, but this increases the dimensionality of the pattern. To cope with this problem, this paper proposes a novel method for image description with multichannel decoded LBPs. We introduce two schemas, based on adders and decoders, for combining the LBPs from more than one channel. Image retrieval experiments are performed to observe the effectiveness of the proposed approaches, which are compared with existing multichannel techniques. The experiments are performed over 12 benchmark natural scene and color texture image databases, such as Corel-1k, MIT-VisTex, USPTex, and Colored Brodatz. The introduced multichannel adder- and decoder-based LBPs significantly improve retrieval performance on every database and outperform the other multichannel-based approaches in terms of average retrieval precision and average retrieval rate.
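The adder and decoder schemas can be sketched as follows. This is a simplified illustration rather than the authors' code: it combines the raw 8-neighbor binary responses of N channels, with the adder producing one of N+1 levels per neighbor and the decoder one of 2^N codes (so neither grows as fast as concatenating full per-channel histograms).

```python
import numpy as np

def lbp_bits(img):
    """8-neighbor binary responses (neighbor >= center) for each
    interior pixel of a single channel; shape (H-2, W-2, 8)."""
    H, W = img.shape
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    bits = [(img[1 + dy:H - 1 + dy, 1 + dx:W - 1 + dx] >= c) for dy, dx in shifts]
    return np.stack(bits, axis=-1).astype(int)

def adder_lbp(channels):
    """Adder schema: sum the per-channel bits at each neighbor,
    giving one of N+1 levels for N channels."""
    return sum(lbp_bits(ch) for ch in channels)

def decoder_lbp(channels):
    """Decoder schema: read the N per-channel bits at each neighbor
    as an N-bit binary code, giving one of 2^N values."""
    return sum(lbp_bits(ch) << i for i, ch in enumerate(channels))
```

In practice a histogram of these combined codes over the image serves as the retrieval descriptor.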

7.
IEEE Trans Image Process ; 24(12): 5892-903, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26513789

ABSTRACT

A new image feature description based on the local wavelet pattern (LWP) is proposed in this paper to characterize medical computed tomography (CT) images for content-based CT image retrieval. In the proposed work, the LWP is derived for each pixel of the CT image by utilizing the relationship of the center pixel with its local neighborhood. In contrast to the local binary pattern, which only considers the relationship between a center pixel and its neighboring pixels, the presented approach first models the relationship among the neighboring pixels using local wavelet decomposition and then considers their relationship with the center pixel. A center pixel transformation scheme is introduced to match the range of the center value with the range of the local wavelet decomposed values. Moreover, the introduced local wavelet decomposition scheme is centrally symmetric and well suited to CT images. The novelty of this paper lies in two aspects: 1) encoding local neighboring information with local wavelet decomposition and 2) computing the LWP from the local wavelet decomposed values and the transformed center pixel values. We tested the performance of our method over three CT image databases in terms of precision and recall. We also compared the proposed LWP descriptor with other state-of-the-art local image descriptors, and the experimental results suggest that the proposed method outperforms them for CT image retrieval.
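The decomposition step can be sketched as follows. This is a rough illustration only, not the paper's exact transform or center-pixel mapping: it applies an orthonormal Haar transform to each pixel's 8-neighbor vector and uses a crude scaling of the center value as the comparison threshold.

```python
import numpy as np

def haar_matrix(n=8):
    """Orthonormal Haar transform matrix for n a power of two."""
    h = np.array([[1.0]])
    while h.shape[0] < n:
        top = np.kron(h, [1, 1])                    # averaging rows
        bot = np.kron(np.eye(h.shape[0]), [1, -1])  # differencing rows
        h = np.vstack([top, bot]) / np.sqrt(2)
    return h

def local_wavelet_pattern(img):
    """Per-pixel 8-bit code: wavelet-decompose the 8-neighbor vector,
    scale the center value toward the coefficient range, threshold."""
    H, W = img.shape
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    neigh = np.stack([img[1 + dy:H - 1 + dy, 1 + dx:W - 1 + dx].astype(float)
                      for dy, dx in shifts], axis=-1)
    coeffs = neigh @ haar_matrix(8).T               # local wavelet decomposition
    center_t = img[1:-1, 1:-1].astype(float) / np.sqrt(8)  # crude range matching
    bits = (coeffs >= center_t[..., None]).astype(int)
    return (bits * (1 << np.arange(8))).sum(axis=-1)
```

The key contrast with LBP is visible here: the bits come from wavelet coefficients of the neighborhood, which mix information across neighbors, rather than from raw neighbor-versus-center comparisons.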


Subject(s)
Image Processing, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Wavelet Analysis , Brain/diagnostic imaging , Databases, Factual , Humans , Radiography, Thoracic
8.
IEEE Trans Image Process ; 23(12): 5323-33, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25248185

ABSTRACT

Region descriptors using local intensity ordering patterns have become popular in recent years for image matching owing to their enhanced discriminative ability. However, the dimension of these descriptors grows rapidly with even a slight increase in the number of local neighbors under consideration, making them impractical for image matching under time constraints. In this paper, we significantly reduce the descriptor dimension and matching time, while maintaining comparable performance, by considering the neighboring sample points in an interleaved manner. The proposed interleaved order-based local descriptor (IOLD) treats the local neighbors of a pixel as a set of interleaved subsets, constructs a descriptor over each subset separately, and finally combines them into a single pattern. We extract the local ordering pattern in an inherently rotation-invariant manner to cope with illumination effects. The novelty lies in using multiple neighboring sets in an interleaved fashion. We also explore the local intensity order pattern in a multi-support-region scenario. Results are compared over three challenging and widely adopted image matching datasets against other prominent descriptors under various image transformations. The experimental results suggest that the proposed IOLD descriptor offers both improved matching performance and reduced matching time. We also found that the improvement is most significant under complex illumination differences, while showing greater robustness to noise.
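The interleaving idea can be sketched as follows. Ordering d neighbor samples jointly admits d! possible permutations, whereas k interleaved subsets of d/k samples admit only k·(d/k)! patterns in total, which is where the dimensionality reduction comes from. The helper below is an illustrative toy, not the authors' descriptor pipeline.

```python
def interleaved_order_patterns(neighbors, n_sets=2):
    """Split the neighbor samples into interleaved subsets and encode
    the intensity order (rank permutation) of each subset separately,
    instead of ordering all neighbors jointly."""
    patterns = []
    for s in range(n_sets):
        subset = neighbors[s::n_sets]   # every n_sets-th sample, offset s
        order = tuple(sorted(range(len(subset)), key=lambda i: subset[i]))
        patterns.append(order)
    return patterns
```

For 6 neighbors split into 2 interleaved sets, each set of 3 samples has only 3! = 6 possible orderings, versus 6! = 720 for a joint ordering.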
