1.
PLoS One ; 18(7): e0288457, 2023.
Article in English | MEDLINE | ID: mdl-37478054

ABSTRACT

Hypergraphs have gained increasing attention in the machine learning community lately due to their superiority over graphs in capturing super-dyadic interactions among entities. In this work, we propose a novel approach for the partitioning of k-uniform hypergraphs. Most of the existing methods work by reducing the hypergraph to a graph and then applying standard graph partitioning algorithms. The reduction step restricts the algorithms to capturing only some weighted pairwise interactions and hence loses essential information about the original hypergraph. We overcome this issue by utilizing a tensor-based representation of hypergraphs, which enables us to capture the actual super-dyadic interactions. We extend the notions of minimum ratio-cut and normalized-cut from graphs to hypergraphs and show that the relaxed optimization problem can be solved using eigenvalue decomposition of the Laplacian tensor. This novel formulation also enables us to remove a hyperedge completely, using our proposed "hyperedge score" metric, unlike the existing reduction approaches. We propose a hypergraph partitioning algorithm inspired by spectral graph theory and also derive a tighter upper bound on the minimum positive eigenvalue of the even-order hypergraph Laplacian tensor in terms of its conductance, which is utilized in the partitioning algorithm to approximate the normalized cut. The efficacy of the proposed method is demonstrated numerically on synthetic hypergraphs generated by a stochastic block model. We also show improvement for the min-cut solution on 2-uniform hypergraphs (graphs) over the standard spectral partitioning algorithm.
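
The final sentence compares against standard spectral partitioning on graphs. As a point of reference only, here is a minimal sketch of that 2-uniform baseline, i.e., bipartitioning by the sign of the Fiedler vector of the graph Laplacian; the tensor-based Laplacian and hyperedge-score machinery proposed in the paper are not reproduced here.

import numpy as np

def spectral_bipartition(A):
    """Split the nodes of a graph with adjacency matrix A by the sign of the Fiedler vector."""
    L = np.diag(A.sum(axis=1)) - A           # combinatorial Laplacian
    _, eigvecs = np.linalg.eigh(L)           # eigenvectors sorted by ascending eigenvalue
    fiedler = eigvecs[:, 1]                  # eigenvector of the second-smallest eigenvalue
    return fiedler >= 0                      # boolean partition indicator

# Toy example: two triangles joined by a single edge; the cut should separate them.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1
print(spectral_bipartition(A))               # expected grouping: {0, 1, 2} vs. {3, 4, 5}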


Subject(s)
Algorithms , Machine Learning
2.
Front Neural Circuits ; 17: 1092933, 2023.
Article in English | MEDLINE | ID: mdl-37416627

ABSTRACT

We present a deep network-based model of the associative memory functions of the hippocampus. The proposed network architecture has two key modules: (1) an autoencoder module which represents the forward and backward cortico-hippocampal projections and (2) a module that computes the familiarity of the stimulus and implements hill-climbing over the familiarity, which represents the dynamics of the loops within the hippocampus. The proposed network is used in two simulation studies. In the first part of the study, the network is used to simulate image pattern completion by autoassociation under normal conditions. In the second part of the study, the proposed network is extended to a heteroassociative memory and is used to simulate a picture-naming task in normal and Alzheimer's disease (AD) conditions. The network is trained on pictures and names of digits from 0 to 9. The encoder layer of the network is partly damaged to simulate AD conditions. As in the case of AD patients, under moderate damage conditions, the network recalls superordinate words ("odd" instead of "nine"). Under severe damage conditions, the network shows a null response ("I don't know"). The neurobiological plausibility of the model is extensively discussed.
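
As a rough illustration of the two-module idea (a stand-in autoencoder for the cortico-hippocampal projections, plus repeated re-encoding acting as a crude hill-climb on familiarity, taken here as negative reconstruction error), a hedged Python sketch follows; the paper's actual architecture, training regime, and hippocampal loop dynamics are not reproduced, and the dataset and layer size below are arbitrary placeholders.

import numpy as np
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPRegressor

X = load_digits().data / 16.0                          # 8x8 digit images scaled to [0, 1]
ae = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
ae.fit(X, X)                                           # train input -> input (autoencoder)

def familiarity(x):
    return -np.mean((ae.predict(x[None])[0] - x) ** 2)  # higher = more familiar

def complete(x, steps=10):
    """Pattern completion: keep re-encoding while familiarity improves."""
    for _ in range(steps):
        x_new = ae.predict(x[None])[0].clip(0.0, 1.0)
        if familiarity(x_new) <= familiarity(x):
            break
        x = x_new
    return x

corrupted = X[0].copy()
corrupted[32:] = 0.0                                   # occlude the lower half of the image
restored = complete(corrupted)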


Subject(s)
Alzheimer Disease , Humans , Recognition, Psychology/physiology , Mental Recall/physiology , Hippocampus
3.
PLoS Comput Biol ; 19(4): e1011022, 2023 04.
Article in English | MEDLINE | ID: mdl-37093889

ABSTRACT

With the evolution of multicellularity, communication among cells in different tissues and organs became pivotal to life. The molecular basis of such communication has long been studied, but genome-wide screens for genes and other biomolecules mediating tissue-tissue signaling are lacking. To systematically identify inter-tissue mediators, we present a novel computational approach, MultiCens (Multilayer/Multi-tissue network Centrality measures). Unlike single-layer network methods, MultiCens can distinguish within- vs. across-layer connectivity to quantify the "influence" of any gene in a tissue on a query set of genes of interest in another tissue. MultiCens enjoys theoretical guarantees on convergence and decomposability, and performs well on synthetic benchmarks. On human multi-tissue datasets, MultiCens predicts known and novel genes linked to hormones. MultiCens further reveals shifts in gene network architecture among four brain regions in Alzheimer's disease. MultiCens-prioritized hypotheses from these two diverse applications, and potential future ones like "Multi-tissue-expanded Gene Ontology" analysis, can enable whole-body yet molecular-level systems investigations in humans.
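
To make the "influence across layers" notion concrete, here is a hedged, generic illustration (not the MultiCens centrality measures themselves): a two-layer gene network stored as one graph, with a query-set-personalized PageRank whose scores for genes in the other tissue layer serve as a crude cross-tissue influence ranking. Node names and edges are toy placeholders.

import networkx as nx

G = nx.Graph()
# Within-tissue (intra-layer) edges, with the layer encoded in the node name.
G.add_edges_from([("tisA:g1", "tisA:g2"), ("tisA:g2", "tisA:g3"), ("tisB:g4", "tisB:g5")])
# Across-tissue (inter-layer) edges.
G.add_edges_from([("tisA:g2", "tisB:g4"), ("tisA:g3", "tisB:g5")])

query = {"tisB:g5": 1.0}                               # query gene set in tissue B
scores = nx.pagerank(G, alpha=0.85, personalization=query)
ranking = sorted(((n, s) for n, s in scores.items() if n.startswith("tisA:")),
                 key=lambda t: -t[1])
print(ranking)                                         # candidate influencers in tissue A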


Subject(s)
Alzheimer Disease , Brain , Humans , Gene Regulatory Networks/genetics , Alzheimer Disease/genetics
4.
Front Big Data ; 5: 616617, 2022.
Article in English | MEDLINE | ID: mdl-35464122

ABSTRACT

Many real-world applications deal with data that have an underlying graph structure associated with them. To perform downstream analysis on such data, it is crucial to efficiently capture relational information of nodes over their expanded neighborhood. Herein, we focus on the problem of Collective Classification (CC) for assigning labels to unlabeled nodes. Most deep learning models for CC heavily rely on differentiable variants of Weisfeiler-Lehman (WL) kernels. However, due to the limitations of current computing architectures, WL kernels and their differentiable variants can capture useful relational information only over a small expanded neighborhood of a node. To address this concern, we propose I-HOP, a framework that couples differentiable kernels with an iterative inference mechanism to scale to larger neighborhoods. I-HOP scales differentiable graph kernels to capture and summarize information from a larger neighborhood in each iteration by leveraging a historical neighborhood summary obtained in the previous iteration. This recursive nature of I-HOP provides an exponential reduction in time and space complexity over straightforward differentiable graph kernels. Additionally, we point out a limitation of WL kernels whereby a node's original information decays exponentially with an increase in neighborhood size, and provide a solution to address it. Finally, extensive evaluation across 11 datasets showcases the improved results and robustness of our proposed iterative framework, I-HOP.
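
The iterative-inference idea can be illustrated with the classic collective-classification recipe, sketched below under the assumption that labels are integers 0..C-1; this is a generic illustration, not the differentiable-kernel machinery of I-HOP. In each round, a node's features are augmented with a histogram of its neighbours' current labels and the labels are re-predicted.

import numpy as np
from sklearn.linear_model import LogisticRegression

def iterative_classification(X, adj, y, labeled_mask, n_iter=5):
    n = X.shape[0]
    n_classes = len(np.unique(y[labeled_mask]))
    label_summary = np.zeros((n, n_classes))            # neighbour-label histograms
    labels = y.copy()
    for _ in range(n_iter):
        feats = np.hstack([X, label_summary])
        clf = LogisticRegression(max_iter=1000).fit(feats[labeled_mask], y[labeled_mask])
        labels[~labeled_mask] = clf.predict(feats[~labeled_mask])
        label_summary[:] = 0
        for i in range(n):                               # refresh summaries from current labels
            for j in np.nonzero(adj[i])[0]:
                label_summary[i, labels[j]] += 1
    return labels

# Toy usage: a 4-node path graph with the two endpoints labelled.
X = np.array([[0.0], [0.1], [0.9], [1.0]])
adj = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]])
y = np.array([0, 0, 1, 1])
labeled = np.array([True, False, False, True])
print(iterative_classification(X, adj, y, labeled))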

5.
Front Artif Intell ; 5: 759255, 2022.
Article in English | MEDLINE | ID: mdl-35265829

ABSTRACT

Hand pose estimation in 3D from depth images is a highly complex task. Current state-of-the-art 3D hand pose estimators focus only on the accuracy of the model as measured by how closely it matches the ground truth hand pose, but overlook the resulting hand pose's anatomical correctness. In this paper, we present the Single Shot Corrective CNN (SSC-CNN) to tackle the problem of enforcing anatomical correctness at the architecture level. In contrast to previous works which use post-facto pose filters, SSC-CNN predicts the hand pose that conforms to the human hand's biomechanical bounds and rules in a single forward pass. The model was trained and tested on the HANDS2017 and MSRA datasets. Experiments show that our proposed model achieves zero anatomical errors along with accuracy comparable to the state-of-the-art models as measured against the ground truth pose, whereas the previous methods exhibit high anatomical errors. Surprisingly, even the ground truth provided in the existing datasets suffers from anatomical errors, and therefore Anatomical Error Free (AEF) versions of the datasets, namely AEF-HANDS2017 and AEF-MSRA, were created.
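
For concreteness, a small sketch of what a per-joint anatomical-bounds check looks like follows; the bounds below are hypothetical placeholders, and the paper enforces such biomechanical constraints inside the network's single forward pass rather than as a post-hoc test like this one.

# Hypothetical flexion bounds (degrees) for a few finger joints; placeholder values only.
JOINT_BOUNDS = {"index_MCP": (0.0, 90.0), "index_PIP": (0.0, 110.0), "index_DIP": (0.0, 80.0)}

def anatomical_errors(pred_angles):
    """Return the joints whose predicted angle falls outside its allowed range."""
    return [j for j, a in pred_angles.items()
            if not (JOINT_BOUNDS[j][0] <= a <= JOINT_BOUNDS[j][1])]

print(anatomical_errors({"index_MCP": 95.0, "index_PIP": 40.0, "index_DIP": -5.0}))
# -> ['index_MCP', 'index_DIP']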

6.
Front Genet ; 12: 722198, 2021.
Article in English | MEDLINE | ID: mdl-34630517

ABSTRACT

Essential gene prediction models built so far are heavily reliant on sequence-based features, and the scope of network-based features has been narrow. Previous work from our group demonstrated the importance of using network-based features for predicting essential genes with high accuracy. Here, we apply our approach for the prediction of essential genes to organisms from the STRING database and host the results in a standalone website. Our database, NetGenes, contains essential gene predictions for 2,700+ bacteria, computed using features derived from STRING protein-protein functional association networks. Housing a total of over 2.1 million genes, NetGenes offers essentiality scores, annotations, and feature vectors for each gene. The NetGenes database is available from https://rbc-dsai-iitm.github.io/NetGenes/.

7.
Cancers (Basel) ; 13(10)2021 May 14.
Article in English | MEDLINE | ID: mdl-34068918

ABSTRACT

Identifying cancer-causing mutations from sequenced cancer genomes holds much promise for targeted therapy and precision medicine. "Driver" mutations are primarily responsible for cancer progression, while "passengers" are functionally neutral. Although several computational approaches have been developed for distinguishing between driver and passenger mutations, very few have concentrated on using the raw nucleotide sequences surrounding a particular mutation as potential features for building predictive models. Using experimentally validated cancer mutation data in this study, we explored various string-based feature representation techniques to incorporate information on the neighborhood bases immediately 5' and 3' from each mutated position. Density estimation methods showed significant distributional differences between the neighborhood bases surrounding driver and passenger mutations. Binary classification models derived using repeated cross-validation experiments provided comparable performances across all window sizes. Integrating sequence features derived from raw nucleotide sequences with other genomic, structural, and evolutionary features resulted in the development of a pan-cancer mutation effect prediction tool, NBDriver, which was highly efficient in identifying pathogenic variants from five independent validation datasets. An ensemble predictor obtained by combining the predictions from NBDriver with three other commonly used driver prediction tools (FATHMM (cancer), CONDEL, and MutationTaster) significantly outperformed existing pan-cancer models in prioritizing a literature-curated list of driver and passenger mutations. Using the list of true positive mutation predictions derived from NBDriver, we identified a list of 138 known driver genes with functional evidence from various sources. Overall, our study underscores the efficacy of using raw nucleotide sequences as features to distinguish between driver and passenger mutations from sequenced cancer genomes.
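
The core featurisation idea, representing each mutation by the raw nucleotides in a fixed window around the mutated position and training a binary driver-vs-passenger classifier, can be sketched as below; NBDriver's actual encodings, window sizes, classifiers, and additional genomic features are not reproduced, and the sequences and labels here are toy placeholders.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy neighbourhood sequences centred on the mutated base, with toy labels.
sequences = ["ACGTTAGCA", "ACGTCAGCA", "TTGACAGTT", "TTGACGGTT"]
labels = [1, 0, 1, 0]                                  # 1 = driver, 0 = passenger

# Overlapping 3-mers of the neighbourhood serve as string-based features.
model = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(3, 3)),
    LogisticRegression(max_iter=1000),
)
model.fit(sequences, labels)
print(model.predict(["ACGTTAGCA"]))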

8.
Front Artif Intell ; 3: 3, 2020.
Article in English | MEDLINE | ID: mdl-33733123

ABSTRACT

Models often need to be constrained to a certain size for them to be considered interpretable. For example, a decision tree of depth 5 is much easier to understand than one of depth 50. Limiting model size, however, often reduces accuracy. We suggest a practical technique that minimizes this trade-off between interpretability and classification accuracy. This enables an arbitrary learning algorithm to produce highly accurate small-sized models. Our technique identifies the training data distribution to learn from that leads to the highest accuracy for a model of a given size. We represent the training distribution as a combination of sampling schemes. Each scheme is defined by a parameterized probability mass function applied to the segmentation produced by a decision tree. An Infinite Mixture Model with Beta components is used to represent a combination of such schemes. The mixture model parameters are learned using Bayesian Optimization. Under simplistic assumptions, we would need to optimize for O(d) variables for a distribution over a d-dimensional input space, which is cumbersome for most real-world data. However, we show that our technique significantly reduces this number to a fixed set of eight variables at the cost of relatively cheap preprocessing. The proposed technique is flexible: it is model-agnostic, i.e., it may be applied to the learning algorithm for any model family, and it admits a general notion of model size. We demonstrate its effectiveness using multiple real-world datasets to construct decision trees, linear probability models and gradient boosted models with different sizes. We observe significant improvements in the F1-score in most instances, exceeding an improvement of 100% in some cases.
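
A hedged sketch of the central loop follows: reweight the training data by a Beta density over a one-dimensional segmentation score (here, the rescaled leaf index of a shallow pilot tree), fit a size-limited model under those weights, and keep the weighting that maximises held-out accuracy. Plain random search stands in for the paper's Bayesian optimisation over an infinite mixture of sampling schemes, and all dataset and size choices are placeholders.

import numpy as np
from scipy.stats import beta
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

# One segmentation score per sample: the leaf index of a pilot tree, rescaled into (0, 1).
pilot = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)
score = pilot.apply(X_tr).astype(float)
score = (score - score.min() + 0.5) / (score.max() - score.min() + 1.0)

rng = np.random.default_rng(0)
best = (None, -np.inf)
for _ in range(30):                                    # random search over Beta parameters
    a, b = rng.uniform(0.5, 5.0, size=2)
    w = beta.pdf(score, a, b) + 1e-9                   # sampling weights for this scheme
    small = DecisionTreeClassifier(max_depth=3, random_state=0)
    small.fit(X_tr, y_tr, sample_weight=w)
    acc = small.score(X_val, y_val)
    if acc > best[1]:
        best = ((a, b), acc)
print("best Beta parameters and validation accuracy:", best)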

9.
Front Genet ; 10: 164, 2019.
Article in English | MEDLINE | ID: mdl-30918511

ABSTRACT

Biological networks catalog the complex web of interactions happening between different molecules, typically proteins, within a cell. These networks are known to be highly modular, with groups of proteins associated with specific biological functions. Human diseases often arise from the dysfunction of one or more proteins in such functional groups. The ability to identify and automatically extract these modules has implications for understanding the etiology of different diseases as well as the functional roles of different protein modules in disease. The recent DREAM challenge posed the problem of identifying disease modules from six heterogeneous networks of proteins/genes. Many community detection algorithms exist, but not all of them are adaptable to the biological context, as these networks are densely connected and the size of biologically relevant modules is quite small. The contribution of this study is 3-fold: first, we present a comprehensive assessment of many classic community detection algorithms for biological networks to identify non-overlapping communities, and propose heuristics to identify small and structurally well-defined communities (core modules). We evaluated our performance over 180 GWAS datasets. In comparison to traditional approaches, our proposed approach identified 50% more disease-relevant modules. Thus, we show that it is important to identify more compact modules for better performance. Next, we sought to understand the peculiar characteristics of disease-enriched modules and what causes standard community detection algorithms to detect so few of them. We performed a comprehensive analysis of the interaction patterns of known disease genes to understand the structure of disease modules and show that merely considering the known disease gene set as a module does not give good quality clusters, as measured by typical metrics such as modularity and conductance. We go on to present a methodology leveraging these known disease genes, to also include the neighboring nodes of these genes into a module, to form good quality clusters and subsequently extract a "gold-standard set" of disease modules. Lastly, we demonstrate, with justification, that "overlapping" community detection algorithms should be the preferred choice for disease module identification since several genes participate in multiple biological functions.
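
The "small and well-defined modules" heuristic can be illustrated, in hedged form, by detecting communities and keeping only those that are small and well separated (here filtered by size and conductance); the exact heuristics, networks, and GWAS-based evaluation of the study are not reproduced, and the thresholds and example graph below are placeholders.

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.karate_club_graph()                             # stand-in for a protein network
communities = greedy_modularity_communities(G)

core_modules = [set(c) for c in communities
                if 3 <= len(c) <= 20 and nx.conductance(G, c) < 0.5]
for module in core_modules:
    print(sorted(module))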

10.
J Indian Inst Sci ; 99(2): 237-246, 2019 Jun.
Article in English | MEDLINE | ID: mdl-34282354

ABSTRACT

The study of networks has been evolving because of its applications in diverse fields. Many complex systems involve multiple types of interactions, and such systems are better modeled as multilayer networks. The question "which are the most (or least) important nodes in a given network?" has gained substantial attention in the network science community. The importance of a node is known as centrality, and there are multiple ways to define it. Extending centrality measures to multilayer networks is challenging since the relative contribution of intra-layer edges vs. that of inter-layer edges to multilayer centrality is not straightforward. With the growing applications of multilayer networks, several attempts have been made in recent years to define centrality in multilayer networks. There are different ways of tuning the inter-layer couplings, which may lead to different classes of centrality measures. In this article, we provide an overview of the recent works related to centrality in multilayer networks with a focus on key use cases and implications of the type of inter-layer coupling on centrality and subsequent uses of the different centrality measures. We discuss the effect of three popular inter-layer coupling methods, viz., diagonal coupling between adjacent layers, diagonal coupling, and cross coupling. We hope the colloquial tone of this article will make it a pleasant read for understanding the theoretical as well as experimental aspects of the work.
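
One common construction consistent with this discussion (a sketch for orientation, not a formula quoted from the article) couples the layers of a two-layer multiplex through diagonal inter-layer edges of strength ω and reads a centrality off the supra-adjacency matrix, e.g. via its leading eigenvector:

\[
\mathcal{A} \;=\;
\begin{pmatrix}
A^{[1]} & \omega I \\
\omega I & A^{[2]}
\end{pmatrix},
\qquad
\mathcal{A}\,\mathbf{c} \;=\; \lambda_{\max}\,\mathbf{c},
\]

where A^{[1]} and A^{[2]} are the intra-layer adjacency matrices and the entries of c for the copies of a node in the two layers can be aggregated (e.g., summed) into a single multilayer centrality score. Tuning ω, or replacing the ωI blocks with a more general inter-layer coupling, is what gives rise to the different classes of centrality measures discussed above.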

11.
PLoS One ; 13(12): e0208722, 2018.
Article in English | MEDLINE | ID: mdl-30543651

ABSTRACT

Machine learning approaches to predict essential genes have gained a lot of traction in recent years. These approaches predominantly make use of sequence- and network-based features to predict essential genes. However, the scope of network-based features used by the existing approaches is very narrow. Further, many of these studies focus on predicting essential genes within the same organism, which cannot be readily used to predict essential genes across organisms. Therefore, there is clearly a need for a method that is able to predict essential genes across organisms by leveraging network-based features. In this study, we extract several sets of network-based features from protein-protein association networks available from the STRING database. Our network features include some common measures of centrality, and also some novel recursive measures recently proposed in the social network literature. We extract hundreds of network-based features from networks of 27 diverse organisms to predict the essentiality of 87,000+ genes. Our results show that network-based features are statistically significantly better at classifying essential genes across diverse bacterial species, compared to the current state-of-the-art methods, which use mostly sequence and a few 'conventional' network-based features. Our diverse set of network properties gave an AUROC of 0.847 and a precision of 0.320 across 27 organisms. When we augmented the complete set of network features with sequence-derived features, we achieved an improved AUROC of 0.857 and a precision of 0.335. We also constructed a reduced set of 100 sequence and network features, which gave comparable performance. Further, we show that our features are useful for predicting essential genes in new organisms by using leave-one-species-out validation. Our network features capture the local, global and neighbourhood properties of the network and are hence effective for the prediction of essential genes across diverse organisms, even in the absence of other complex biological knowledge. Our approach can be readily exploited to predict essentiality for organisms in interactome databases such as STRING, where both network and sequence data are readily available. All codes are available at https://github.com/RamanLab/nbfpeg.
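
A hedged sketch of the evaluation setup described above, with toy data in place of the STRING networks and labels: node-centrality features plus a classifier, validated species by species with leave-one-group-out splits. The full feature set, including the recursive social-network measures, is not reproduced here.

import networkx as nx
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

def centrality_features(G):
    deg = nx.degree_centrality(G)
    btw = nx.betweenness_centrality(G)
    clo = nx.closeness_centrality(G)
    return np.array([[deg[n], btw[n], clo[n]] for n in G.nodes()])

# Toy stand-in: one random graph per "organism" with random essentiality labels.
rng = np.random.default_rng(0)
X, y, groups = [], [], []
for organism in range(5):
    G = nx.erdos_renyi_graph(60, 0.08, seed=organism)
    X.append(centrality_features(G))
    y.append(rng.integers(0, 2, G.number_of_nodes()))
    groups.append(np.full(G.number_of_nodes(), organism))
X, y, groups = np.vstack(X), np.concatenate(y), np.concatenate(groups)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
aurocs = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut(), scoring="roc_auc")
print("leave-one-species-out AUROC per held-out organism:", aurocs)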


Subject(s)
Algorithms , Genes, Essential , Models, Genetic , Bacteria/genetics , Bacteria/metabolism , Databases, Protein , Proteins/genetics , Proteins/metabolism
12.
Neural Comput ; 28(2): 257-85, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26654210

ABSTRACT

Common representation learning (CRL), wherein different descriptions (or views) of the data are embedded in a common subspace, has been receiving a lot of attention recently. Two popular paradigms here are canonical correlation analysis (CCA)-based approaches and autoencoder (AE)-based approaches. CCA-based approaches learn a joint representation by maximizing correlation of the views when projected to the common subspace. AE-based methods learn a common representation by minimizing the error of reconstructing the two views. Each of these approaches has its own advantages and disadvantages. For example, while CCA-based approaches outperform AE-based approaches for the task of transfer learning, they are not as scalable as the latter. In this work, we propose an AE-based approach, correlational neural network (CorrNet), that explicitly maximizes correlation among the views when projected to the common subspace. Through a series of experiments, we demonstrate that the proposed CorrNet is better than AE and CCA with respect to its ability to learn correlated common representations. We employ CorrNet for several cross-language tasks and show that the representations learned using it perform better than the ones learned using other state-of-the-art approaches.
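
One way to write an objective consistent with this description (a sketch close in spirit to the published CorrNet loss; consult the paper for the exact formulation) is

\[
J(\theta) \;=\; \sum_{i=1}^{N}
\Big[ L\big(z_i, g(h(z_i))\big) + L\big(z_i, g(h(x_i))\big) + L\big(z_i, g(h(y_i))\big) \Big]
\;-\; \lambda\, \operatorname{corr}\!\big(h(X), h(Y)\big),
\]

where x_i and y_i are the two views, z_i denotes both views presented together, h(·) is the shared encoder, g(·) is the decoder that reconstructs both views, L is a reconstruction loss, and corr sums the per-dimension sample correlations between the projections of the two views; the -λ corr term is what explicitly pushes the common representations of the two views to be correlated.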


Subject(s)
Learning , Neural Networks, Computer , Pattern Recognition, Automated/methods , Statistics as Topic , Humans , Models, Neurological
13.
Article in English | MEDLINE | ID: mdl-26136679

ABSTRACT

There is significant evidence that in addition to reward-punishment based decision making, the Basal Ganglia (BG) contributes to risk-based decision making (Balasubramani et al., 2014). Despite this evidence, little is known about the computational principles and neural correlates of risk computation in this subcortical system. We have previously proposed a reinforcement learning (RL)-based model of the BG that simulates the interactions between dopamine (DA) and serotonin (5HT) in a diverse set of experimental studies including reward, punishment and risk based decision making (Balasubramani et al., 2014). Starting with the classical idea that the activity of mesencephalic DA represents reward prediction error, the model posits that serotonergic activity in the striatum controls risk-prediction error. Our prior model of the BG was an abstract model that did not incorporate anatomical and cellular-level data. In this work, we expand the earlier model into a detailed network model of the BG and demonstrate the joint contributions of DA-5HT in risk and reward-punishment sensitivity. At the core of the proposed network model is the following insight regarding cellular correlates of value and risk computation. Just as DA D1 receptor (D1R) expressing medium spiny neurons (MSNs) of the striatum were thought to be the neural substrates for value computation, we propose that DA D1R and D2R co-expressing MSNs are capable of computing risk. Though the existence of MSNs that co-express D1R and D2R has been reported by various experimental studies, existing computational models did not include them. Ours is the first model that accounts for the computational possibilities of these co-expressing D1R-D2R MSNs, and describes how DA and 5HT mediate activity in these classes of neurons (D1R, D2R, and D1R-D2R MSNs). Starting from the assumption that 5HT modulates all MSNs, our study predicts significant modulatory effects of 5HT on D2R and co-expressing D1R-D2R MSNs, which in turn explains the multifarious functions of 5HT in the BG. The experiments simulated in the present study relate 5HT to risk sensitivity and reward-punishment learning. Furthermore, our model is shown to capture reward-punishment and risk based decision making impairments in Parkinson's Disease (PD). The model predicts that optimizing 5HT levels along with DA medications might be essential for improving the patients' reward-punishment learning deficits.

14.
PLoS One ; 10(6): e0127542, 2015.
Article in English | MEDLINE | ID: mdl-26042675

ABSTRACT

Impulsivity, i.e., irresistibility in the execution of actions, may be prominent in Parkinson's disease (PD) patients who are treated with dopamine precursors or dopamine receptor agonists. In this study, we combine clinical investigations with computational modeling to explore whether impulsivity in PD patients on medication may arise as a result of abnormalities in risk, reward and punishment learning. In order to empirically assess learning outcomes involving risk, reward and punishment, four subject groups were examined: healthy controls, ON medication PD patients with impulse control disorder (PD-ON ICD) or without ICD (PD-ON non-ICD), and OFF medication PD patients (PD-OFF). A neural network model of the Basal Ganglia (BG) that has the capacity to predict the dysfunction of both the dopaminergic (DA) and the serotonergic (5HT) neuromodulator systems was developed and used to facilitate the interpretation of experimental results. In the model, the BG action selection dynamics were mimicked using a utility function based decision making framework, with DA controlling reward prediction and 5HT controlling punishment and risk predictions. The striatal model included three pools of Medium Spiny Neurons (MSNs): with D1 receptor (R) alone, with D2R alone, and co-expressing D1R-D2R. Empirical studies showed that reward optimality was increased in PD-ON ICD patients while punishment optimality was increased in PD-OFF patients. Empirical studies also revealed that PD-ON ICD subjects had lower reaction times (RT) compared to those of the PD-ON non-ICD patients. Computational modeling suggested that PD-OFF patients have higher punishment sensitivity, while healthy controls showed comparatively higher risk sensitivity. A significant decrease in sensitivity to punishment and risk was crucial for explaining behavioral changes observed in PD-ON ICD patients. Our results highlight the power of computational modeling for identifying the neuronal circuitry implicated in learning, and its impairment in PD. The results presented here not only show that computational modeling can be used as a valuable tool for understanding and interpreting clinical data, but also that computational modeling has the potential to become an invaluable tool to predict the onset of behavioral changes during disease progression.


Subject(s)
Biomarkers/metabolism , Impulsive Behavior , Neural Networks, Computer , Parkinson Disease/complications , Parkinson Disease/drug therapy , Analysis of Variance , Basal Ganglia , Dopamine/metabolism , Female , Humans , Male , Serotonin/metabolism , Task Performance and Analysis
15.
Article in English | MEDLINE | ID: mdl-24795614

ABSTRACT

Although empirical and neural studies show that serotonin (5HT) plays many functional roles in the brain, prior computational models mostly focus on its role in behavioral inhibition. In this study, we present a model of risk-based decision making in a modified Reinforcement Learning (RL) framework. The model depicts the roles of dopamine (DA) and serotonin (5HT) in the Basal Ganglia (BG). In this model, the DA signal is represented by the temporal difference error (δ), while the 5HT signal is represented by a parameter (α) that controls risk prediction error. This formulation, which accommodates both 5HT and DA, reconciles some of the diverse roles of 5HT, particularly in connection with the BG system. We apply the model to different experimental paradigms used to study the role of 5HT: (1) risk-sensitive decision making, where 5HT controls risk assessment, (2) temporal reward prediction, where 5HT controls the time-scale of reward prediction, and (3) reward/punishment sensitivity, in which the punishment prediction error depends on 5HT levels. Thus, the proposed integrated RL model reconciles several existing theories of 5HT and DA in the BG.
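
One plausible formalization consistent with this description (a sketch following the risk-sensitive RL formulation this line of work builds on; the paper's exact update equations are not reproduced here) is

\[
\delta_t \;=\; r_t + \gamma\, Q(s_{t+1}, a_{t+1}) - Q(s_t, a_t),
\qquad
\xi_t \;=\; \delta_t^{2} - h(s_t, a_t),
\]
\[
U(s_t, a_t) \;=\; Q(s_t, a_t) \;-\; \alpha\, \operatorname{sign}\!\big(Q(s_t, a_t)\big)\, \sqrt{h(s_t, a_t)},
\]

where δ is the DA-coded temporal difference error, h is a learned risk (reward-variance) estimate updated by the risk prediction error ξ, and the 5HT-linked parameter α weights how strongly risk discounts the action value in the utility U used for action selection.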
