Results 1 - 17 of 17
1.
Phys Rev E ; 107(3-1): 034308, 2023 Mar.
Article in English | MEDLINE | ID: mdl-37072975

ABSTRACT

Compressed sensing is a scheme that allows sparse signals to be acquired, transmitted, and stored using far fewer measurements than conventional Nyquist-rate sampling requires. Since many naturally occurring signals are sparse (in some domain), compressed sensing has rapidly gained popularity in a number of applied physics and engineering applications, particularly in designing signal and image acquisition strategies, e.g., magnetic resonance imaging, quantum state tomography, scanning tunneling microscopy, and analog-to-digital conversion technologies. Contemporaneously, causal inference has become an important tool for the analysis and understanding of processes and their interactions in many disciplines of science, especially those dealing with complex systems. Direct causal analysis of compressively sensed data is desirable because it avoids the task of reconstructing the compressed data. Moreover, for some sparse signals, such as sparse temporal data, it may be difficult to discover causal relations directly using available data-driven or model-free causality estimation techniques. In this work, we provide a mathematical proof that structured compressed sensing matrices, specifically circulant and Toeplitz, preserve causal relationships in the compressed signal domain, as measured by Granger causality (GC). We then verify this theorem on a number of bivariate and multivariate coupled sparse signal simulations which are compressed using these matrices. We also demonstrate a real-world application of network causal connectivity estimation from sparse neural spike train recordings from rat prefrontal cortex. In addition to demonstrating the effectiveness of structured matrices for GC estimation from sparse signals, we also show a computational time advantage of the proposed strategy for causal inference from compressed signals of both sparse and regular autoregressive processes as compared to standard GC estimation from the original signals.
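The preservation claim above can be probed numerically. The sketch below is illustrative only (the coupling strength, signal length, number of measurements, and AR order are arbitrary choices, not values from the paper): it compresses two coupled AR(1) signals with a partial circulant matrix and compares an order-1 Granger statistic in the original and compressed domains.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two coupled AR(1) processes: x drives y (coupling 0.8 is an
# illustrative choice, not a value from the paper).
n = 512
x, y = np.zeros(n), np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.normal()
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + rng.normal()

# Circulant measurement matrix: first m rows of the n x n circulant
# matrix generated by a random row (m < n, so this compresses).
m = 128
row = rng.normal(size=n)
Phi = np.array([np.roll(row, k) for k in range(m)])
xc, yc = Phi @ x, Phi @ y  # compressed signals

def granger_f(src, dst):
    """Order-1 Granger statistic: log ratio of residual variances of the
    restricted (dst past only) and full (dst + src past) models."""
    target = dst[1:]
    full = np.column_stack([dst[:-1], src[:-1]])
    restricted = dst[:-1, None]
    r_full = target - full @ np.linalg.lstsq(full, target, rcond=None)[0]
    r_restr = target - restricted @ np.linalg.lstsq(restricted, target,
                                                    rcond=None)[0]
    return np.log(r_restr.var() / r_full.var())

# Does the dominant direction (x -> y) survive compression?
direction_original = granger_f(x, y) > granger_f(y, x)
direction_compressed = granger_f(xc, yc) > granger_f(yc, xc)
```

The paper's theorem concerns sparse signals; a dense AR pair as above is only a quick plausibility check of the same comparison.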

2.
Med Biol Eng Comput ; 60(8): 2245-2255, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35668230

ABSTRACT

The high spread rate of the SARS-CoV-2 virus has put researchers all over the world in a demanding situation. The need of the hour is to develop novel learning algorithms that can effectively learn a general pattern by training with fewer genome sequences of coronavirus. Learning from very few training samples is necessary and important during the beginning of a disease outbreak, when sequencing data is limited, because successful detection and isolation of patients can curb the spread of the virus. However, this poses a huge challenge for machine learning and deep learning algorithms, as they require large amounts of training data to learn the pattern and distinguish it from other closely related viruses. In this paper, we propose a new paradigm - Neurochaos Learning (NL) - for the classification of coronavirus genome sequences that addresses this specific problem. NL is inspired by the empirical evidence of chaos and non-linearity at the level of neurons in biological neural networks. The average sensitivity, specificity and accuracy for NL are 0.998, 0.999 and 0.998 respectively for the multiclass classification problem (SARS-CoV-2, Coronaviridae, Metapneumovirus, Rhinovirus and Influenza) using leave-one-out cross-validation. With just one training sample per class for 1000 independent random trials of training, we report an average macro F1-score [Formula: see text] for the classification of SARS-CoV-2 from SARS-CoV-1 genome sequences. We compare the performance of NL with K-nearest neighbours (KNN), logistic regression, random forest, SVM, and naïve Bayes classifiers. We foresee promising future applications in genome classification using NL with novel combinations of chaotic feature engineering and other machine learning algorithms.


Subject(s)
COVID-19 , SARS-CoV-2 , Bayes Theorem , Genome, Viral/genetics , Humans , Machine Learning , SARS-CoV-2/genetics
3.
Front Neurol ; 13: 755094, 2022.
Article in English | MEDLINE | ID: mdl-35250803

ABSTRACT

Seizure detection algorithms are often optimized to detect seizures from the epileptogenic cortex. However, in non-localizable epilepsies, the thalamus is frequently targeted for neuromodulation. Developing a reliable seizure detection algorithm from thalamic SEEG may facilitate the translation of closed-loop neuromodulation. Deep learning algorithms promise reliable seizure detectors, but the major impediment is the lack of large curated samples of ictal thalamic SEEG needed for training classifiers. We aimed to investigate whether synthetic data generated by temporal Generative Adversarial Networks (TGAN) can inflate the sample size to improve the performance of a deep learning classifier of ictal and interictal states from limited samples of thalamic SEEG. Thalamic SEEG from 13 patients (84 seizures) was obtained during stereo EEG evaluation for epilepsy surgery. Overall, TGAN-generated synthetic data improved the performance of the bidirectional Long Short-Term Memory (BiLSTM) classifier in distinguishing thalamic ictal and baseline states. Adding synthetic data improved the accuracy of the detection model by 18.5%. Importantly, this approach can be applied to classify electrographic seizure onset patterns or to develop patient-specific seizure detectors from implanted neuromodulation devices.

4.
Entropy (Basel) ; 25(1)2022 Dec 31.
Article in English | MEDLINE | ID: mdl-36673224

ABSTRACT

Finding a vaccine or a specific antiviral treatment for a global pandemic of virus diseases (such as the ongoing COVID-19) requires rapid analysis, annotation and evaluation of metagenomic libraries to enable quick and efficient screening of nucleotide sequences. Traditional sequence alignment methods are not suitable, and there is a need for fast alignment-free techniques for sequence analysis. Information theory and data compression algorithms provide a rich set of mathematical and computational tools to capture essential patterns in biological sequences. In this study, we investigate the use of compression-complexity (Effort-to-Compress or ETC and Lempel-Ziv or LZ complexity) based distance measures for analyzing genomic sequences. The proposed distance measure is used to successfully reproduce the phylogenetic trees for a mammalian dataset consisting of eight species clusters, a set of coronaviruses belonging to group I, group II, group III and SARS-CoV-1, and a set of coronaviruses causing COVID-19 (SARS-CoV-2) and those not causing COVID-19. Having demonstrated the usefulness of these compression-complexity measures, we employ them for the automatic classification of COVID-19-causing genome sequences using machine learning techniques. Two flavors of SVM (linear and quadratic), along with linear discriminant and fine K-nearest neighbors classifiers, are used for classification. Using a dataset comprising 1001 coronavirus sequences (those causing COVID-19 and those not), a classification accuracy of 98% is achieved with a sensitivity of 95% and a specificity of 99.8%. This work could be extended further to enable medical practitioners to automatically identify and characterize coronavirus strains and their rapidly growing mutants in a fast and efficient fashion.
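A minimal sketch of the Effort-to-Compress measure named above, together with an NCD-style distance built on it. The distance formula here is an assumption for illustration; the paper's exact distance may differ.

```python
from collections import Counter

def etc(seq):
    """Effort-to-Compress: number of non-sequential recursive pair
    substitution (NSRPS) passes until the sequence becomes constant."""
    seq = list(seq)
    alphabet = {s: i for i, s in enumerate(dict.fromkeys(seq))}
    seq = [alphabet[s] for s in seq]          # work on integer symbols
    fresh, steps = len(alphabet), 0
    while len(seq) > 1 and len(set(seq)) > 1:
        best = Counter(zip(seq, seq[1:])).most_common(1)[0][0]
        out, i = [], 0
        while i < len(seq):                   # replace non-overlapping pairs
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == best:
                out.append(fresh)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq, fresh, steps = out, fresh + 1, steps + 1
    return steps

def etc_distance(a, b):
    """NCD-style distance built on ETC (an assumed form, not necessarily
    the paper's formula): small when a and b share structure."""
    cab = etc(list(a) + list(b))
    return (cab - min(etc(a), etc(b))) / max(etc(a), etc(b), 1)
```

For example, `etc("ATATAT")` is 1 (one pass replaces every "AT" pair and the sequence becomes constant), while a constant sequence already has ETC 0.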

5.
Neural Netw ; 143: 425-435, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34252737

ABSTRACT

Chaos and noise are ubiquitous in the brain. Inspired by the chaotic firing of neurons and the constructive role of noise in neuronal models, we, for the first time, connect chaos, noise and learning. In this paper, we demonstrate the Stochastic Resonance (SR) phenomenon in Neurochaos Learning (NL). SR manifests at the level of a single neuron of NL and enables efficient subthreshold signal detection. Furthermore, SR is shown to occur in single- and multiple-neuron NL architectures for classification tasks - both on simulated and real-world spoken digit datasets, and in architectures with 1D chaotic maps as well as Hindmarsh-Rose spiking neurons. Intermediate levels of noise in neurochaos learning enable peak performance in classification tasks, thus highlighting the role of SR in AI applications, especially in brain-inspired learning architectures.


Subject(s)
Models, Neurological , Neurons , Brain , Cluster Analysis , Learning , Stochastic Processes
6.
Entropy (Basel) ; 23(3)2021 Mar 10.
Article in English | MEDLINE | ID: mdl-33802138

ABSTRACT

Detection of the temporal reversibility of a given process is an interesting time series analysis scheme that enables useful characterisation of processes and offers insight into the underlying mechanisms generating the time series. Reversibility detection measures have been widely employed in the study of ecological, epidemiological and physiological time series. Further, the time reversal of given data provides a promising tool for the analysis of causality measures as well as for studying the causal properties of processes. In this work, the authors' recently proposed Compression-Complexity Causality (CCC) measure is shown to be free of the assumption that the "cause precedes the effect", making it a promising tool for causal analysis of reversible processes. CCC is a data-driven interventional measure of causality (second rung on the Ladder of Causation) that is based on Effort-to-Compress (ETC), a well-established robust method to characterize the complexity of time series for analysis and classification. For the detection of the temporal reversibility of processes, we propose a novel measure called the Compressive Potential based Asymmetry Measure. This asymmetry measure compares the probability of the occurrence of patterns at different scales between the forward-time and time-reversed process using ETC. We test the performance of the measure on a number of simulated processes and demonstrate its effectiveness in determining the asymmetry of real-world time series of sunspot numbers, digits of the transcendental number π and heart interbeat interval variability.

7.
J Biomed Inform ; 117: 103724, 2021 05.
Article in English | MEDLINE | ID: mdl-33722730

ABSTRACT

Causal inference is one of the most fundamental problems across all domains of science. We address the problem of inferring a causal direction from two observed discrete symbolic sequences X and Y. We present a framework which relies on lossless compressors for inferring context-free grammars (CFGs) from sequence pairs and quantifies the extent to which the grammar inferred from one sequence compresses the other sequence. We infer X causes Y if the grammar inferred from X better compresses Y than in the other direction. To put this notion into practice, we propose three models that use the Compression-Complexity Measures (CCMs) - Lempel-Ziv (LZ) complexity and Effort-To-Compress (ETC) - to infer CFGs and discover causal directions without demanding temporal structures. We evaluate these models on synthetic and real-world benchmarks and empirically observe performances competitive with current state-of-the-art methods. Lastly, we present two unique applications of the proposed models for causal inference directly from pairs of genome sequences belonging to the SARS-CoV-2 virus. Using numerous sequences, we show that our models capture causal information exchanged between genome sequence pairs, presenting novel opportunities for addressing key issues in sequence analysis to investigate the evolution of virulence and pathogenicity in future applications.
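The compress-across-directions idea can be sketched with a simple greedy pair-substitution grammar standing in for the paper's CFG inference. Everything below (the Re-Pair-style learner, the round limit, the decision rule) is an illustrative simplification, not one of the authors' three models.

```python
from collections import Counter

def substitute(seq, pair, sym):
    """Replace non-overlapping occurrences of pair with sym, left to right."""
    out, i = [], 0
    while i < len(seq):
        if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
            out.append(sym)
            i += 2
        else:
            out.append(seq[i])
            i += 1
    return out

def learn_grammar(seq, rounds=8):
    """Greedy Re-Pair-style grammar: repeatedly record the most frequent
    adjacent pair and replace it with a fresh symbol."""
    seq, rules, fresh = list(seq), [], 256
    for _ in range(rounds):
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        best, count = pairs.most_common(1)[0]
        if count < 2:
            break
        rules.append((best, fresh))
        seq = substitute(seq, best, fresh)
        fresh += 1
    return rules

def cross_length(rules, seq):
    """Length of seq after applying a grammar learned from another sequence."""
    seq = list(seq)
    for pair, sym in rules:
        seq = substitute(seq, pair, sym)
    return len(seq)

def causal_direction(x, y):
    """Infer 'X->Y' if X's grammar compresses Y better than vice versa."""
    gx, gy = learn_grammar(x), learn_grammar(y)
    return "X->Y" if cross_length(gx, y) < cross_length(gy, x) else "Y->X"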


Subject(s)
COVID-19 , Causality , Data Compression , Algorithms , Humans , Models, Theoretical , SARS-CoV-2
8.
Eur Phys J Spec Top ; 229(16): 2629-2738, 2020.
Article in English | MEDLINE | ID: mdl-33194093

ABSTRACT

Quantification of habitability is a complex task. Previous attempts at measuring habitability are well documented. Classification of exoplanets, on the other hand, is a different approach and depends on the quality of training data available in habitable exoplanet catalogs. Classification is the task of predicting labels of newly discovered planets based on the available class labels in the catalog. We present an analytical exploration of novel activation functions, arrived at by integrating several ideas, leading to their implementation and subsequent use in the habitability classification of exoplanets. Neural networks, although a powerful engine for supervised methods, often require expensive tuning efforts for optimized performance. Habitability classes are hard to discriminate, especially when attributes used as hard markers of separation are removed from the data set. The solution is approached from the point of view of investigating the analytical properties of the proposed activation functions. The theory of ordinary differential equations and fixed points is exploited to justify the "lack of tuning efforts" needed to achieve optimal performance compared to traditional activation functions. Additionally, the relationship between the proposed activation functions and the more popular ones is established through extensive analytical and empirical evidence. Finally, the activation functions have been implemented in a plain vanilla feed-forward neural network to classify exoplanets. The mathematical exercise supplements the grand idea of classifying exoplanets, computing habitability scores/indices and automatically grouping exoplanets, all converging at some level.

9.
Chaos ; 29(11): 113125, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31779350

ABSTRACT

Inspired by the chaotic firing of neurons in the brain, we propose ChaosNet - a novel chaos-based artificial neural network architecture for classification tasks. ChaosNet is built using layers of neurons, each of which is a 1D chaotic map known as the Generalized Luröth Series (GLS), shown in earlier works to possess very useful properties for compression, cryptography, and for computing XOR and other logical operations. In this work, we design a novel learning algorithm on ChaosNet that exploits the topological transitivity property of the chaotic GLS neurons. The proposed learning algorithm gives consistently good performance accuracy in a number of classification tasks on well-known publicly available datasets with very limited training samples. Even with as few as seven (or fewer) training samples per class (which accounts for less than 0.05% of the total available data), ChaosNet yields performance accuracies in the range of 73.89%-98.33%. We demonstrate the robustness of ChaosNet to additive parameter noise and also provide an example implementation of a two-layer ChaosNet for enhancing classification accuracy. We envisage the development of several other novel learning algorithms on ChaosNet in the near future.
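A sketch of a single GLS neuron and the transitivity-based firing-time feature: the neuron iterates a skew-tent map from a fixed initial activity until its trajectory lands near the input stimulus, and the number of iterations is the feature. The skew parameter b, initial value q, and neighborhood eps below are hypothetical values for illustration, not those used in ChaosNet.

```python
def gls_map(x, b=0.47):
    """Skew-tent GLS map on [0, 1); b is a hypothetical skew parameter."""
    return x / b if x < b else (1 - x) / (1 - b)

def firing_time(stimulus, q=0.34, b=0.47, eps=0.01, max_iter=10000):
    """Iterate the GLS neuron from initial activity q until the trajectory
    enters an eps-neighborhood of the stimulus (topological transitivity
    makes this happen for almost every stimulus); return the step count."""
    x = q
    for n in range(max_iter):
        if abs(x - stimulus) < eps:
            return n
        x = gls_map(x, b)
    return max_iter  # fallback if the neighborhood is never reached

# One firing-time feature per input value:
features = [firing_time(s) for s in (0.1, 0.5, 0.9)]
```

In a classifier built on such features, per-class mean feature vectors from the few training samples would be compared against a test sample's features, but that aggregation step is not shown here.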

10.
Chaos ; 29(9): 091103, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31575134

ABSTRACT

Synchronization of chaos arises between coupled dynamical systems and is very well understood as a temporal phenomenon, one which leads the coupled systems to converge or develop a dependence with time. In this work, we provide a complementary spatial perspective to this phenomenon by introducing the novel idea of causal stability. We then propose and prove a causal stability synchronization theorem as a necessary and sufficient condition for complete synchronization. We also provide an empirical criterion to identify synchronizing variables in coupled identical chaotic dynamical systems based on intrasystem causal influences estimated using time series data of the driving system alone. For this, a recently proposed measure, Compression-Complexity Causality (CCC), is used. The sign and magnitude of the estimated CCC value capture the nature of dynamical influences from each variable to the rest of the subsystem and are thus able to determine whether or not the variable, when used to couple another system, will drive that system to synchronization.

11.
Heliyon ; 5(2): e01181, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30828654

ABSTRACT

Measuring the complexity of brain networks in the form of integrated information is a leading approach towards building a fundamental theory of consciousness. Integrated Information Theory (IIT) has gained attention in this regard due to its theoretically strong framework. Nevertheless, it faces some limitations, such as current-state dependence, computational intractability, and inability to be applied to real brain data. On the other hand, the Perturbational Complexity Index (PCI) is a clinical measure for distinguishing different levels of consciousness. Though PCI claims to capture the functional differentiation and integration in brain networks (similar to IIT), its link to integrated information is rather weak. Inspired by these two perspectives, we propose a new complexity measure for brain networks, Φ_C, using a novel perturbation-based compression-complexity approach that serves as a bridge between the two, for the first time. Φ_C is founded on the principles of lossless data compression based complexity measures and is computed by a perturbational approach. Φ_C exhibits the following salient innovations: (i) it is mathematically well bounded; (ii) it has negligible current-state dependence, unlike Φ; (iii) network complexity is measured as compression-complexity rather than as an info-theoretic quantity; and (iv) it has lower computational complexity, since the number of atomic bipartitions scales linearly with the number of nodes of the network, thus avoiding combinatorial explosion. Our computations have revealed that Φ_C has a similar hierarchy to <Φ> for several multiple-node networks and demonstrates a rich interplay between differentiation, integration and entropy of the nodes of a network. Φ_C is a promising heuristic measure for characterizing network complexity (and hence might be useful in contributing to building a measure of consciousness), with potential applications in estimating brain complexity from neurophysiological data.

12.
PeerJ Comput Sci ; 5: e171, 2019.
Article in English | MEDLINE | ID: mdl-33816824

ABSTRACT

Error detection is a fundamental need in most computer networks and communication systems in order to combat the effect of noise. Error detection techniques have also been incorporated with lossless data compression algorithms for transmission across communication networks. In this paper, we propose to incorporate a novel error detection scheme into a Shannon-optimal lossless data compression algorithm known as Generalized Luröth Series (GLS) coding. GLS-coding is a generalization of the popular Arithmetic Coding, which is an integral part of the JPEG2000 standard for still image compression. GLS-coding encodes the input message as a symbolic sequence on an appropriate 1D chaotic map (GLS), and the compressed file is obtained as the initial value by iterating backwards on the map. However, in the presence of noise, even small errors in the compressed file lead to catastrophic decoding errors owing to sensitive dependence on initial values, the hallmark of deterministic chaos. In this paper, we first show that repetition codes, the oldest and most basic error correction and detection codes in the literature, actually lie on a Cantor set with a fractal dimension of 1/n, which is also the rate of the code. Inspired by this, we incorporate error detection capability into GLS-coding by ensuring that the compressed file (the initial value on the chaotic map) lies on a Cantor set. Even a 1-bit error in the initial value will throw it outside the Cantor set, which can be detected while decoding. The rate of the code can be adjusted via the fractal dimension of the Cantor set, thereby controlling the error detection performance.
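The Cantor-set detection idea can be illustrated with the repetition code itself (a toy rate-1/3 sketch; GLS-coding operates on real-valued initial conditions rather than explicit bit blocks). Valid codewords are exactly the binary sequences whose length-n blocks are constant; read as binary expansions they form a self-similar set of fractal dimension log 2 / log 2^n = 1/n, and any single bit flip leaves the set.

```python
def encode(bits, n=3):
    """Rate-1/n repetition code: each information bit is written n times."""
    return [b for bit in bits for b in [bit] * n]

def on_cantor_set(codeword, n=3):
    """A codeword is valid iff every length-n block is constant; a single
    flipped bit leaves this set, so the error is detectable."""
    blocks = [codeword[i:i + n] for i in range(0, len(codeword), n)]
    return all(len(set(blk)) == 1 for blk in blocks)

cw = encode([1, 0, 1])    # [1,1,1, 0,0,0, 1,1,1]
ok = on_cantor_set(cw)    # valid codeword: still on the set
cw[4] ^= 1                # a 1-bit channel error
bad = on_cantor_set(cw)   # the point has left the Cantor set: detected
```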

13.
PeerJ Comput Sci ; 5: e196, 2019.
Article in English | MEDLINE | ID: mdl-33816849

ABSTRACT

Causality testing methods are being widely used in various disciplines of science. Model-free methods for causality estimation are very useful, as the underlying model generating the data is often unknown. However, existing model-free/data-driven measures assume separability of cause and effect at the level of individual samples of measurements and, unlike model-based methods, do not perform any intervention to learn causal relationships. These measures can thus only capture causality that arises from the associational occurrence of 'cause' and 'effect' in well-separated samples. In real-world processes, 'cause' and 'effect' are often inherently inseparable or become inseparable in the acquired measurements. We propose a novel measure that uses an adaptive interventional scheme to capture causality which is not merely associational. The scheme is based on characterizing the complexities associated with the dynamical evolution of processes on short windows of measurements. The formulated measure, Compression-Complexity Causality (CCC), is rigorously tested on simulated and real datasets and its performance is compared with that of existing measures such as Granger Causality and Transfer Entropy. The proposed measure is robust to the presence of noise, long-term memory, filtering and decimation, low temporal resolution (including aliasing), non-uniform sampling, finite-length signals and the presence of common driving variables. Our measure outperforms existing state-of-the-art measures, establishing itself as an effective tool for causality testing in real-world applications.

14.
Ann Indian Acad Neurol ; 20(4): 403-407, 2017.
Article in English | MEDLINE | ID: mdl-29184345

ABSTRACT

Progressive loss of heart rate variability (HRV) and complexity are associated with increased risk of mortality in patients with cardiovascular disease and are candidate markers for patients at risk of sudden cardiac death. HRV is influenced by the cardiac autonomic nervous system (ANS), although it is unclear which arm of the ANS (sympathetic or parasympathetic) needs to be perturbed to increase the complexity of HRV. In this case-control study, we analyzed the relation between modulation of vagus nerve stimulation (VNS) and changes in the complexity of HRV as a function of states of vigilance. We hypothesized that VNS - being a preferential activator of the parasympathetic system - would decrease the heart rate (HR) and maximally increase the complexity of HRV during sleep. The electrocardiogram (EKG) obtained from a 37-year-old, right-handed male with known intractable partial epilepsy and a left therapeutic VNS implant was analyzed during wakefulness and sleep in VNS ON and OFF states. Age-matched control EKGs were obtained from five participants (three with intractable epilepsy and two without epilepsy) who had no VNS implant. The study demonstrated the following: (1) VNS increased the complexity of HRV during sleep and decreased it during wakefulness. (2) An increase in parasympathetic tone is associated with increased complexity of HRV even in the presence of decreased HR. These results need to be replicated in a larger cohort before developing patterned stimulation using VNS to stabilize cardiac dysautonomia and prevent fatal arrhythmias.

15.
PeerJ ; 4: e2755, 2016.
Article in English | MEDLINE | ID: mdl-27957395

ABSTRACT

As we age, our hearts undergo changes that result in a reduction in the complexity of physiological interactions between different control mechanisms. This increases the risk of cardiovascular diseases, which are the number one cause of death globally. Since cardiac signals are nonstationary and nonlinear in nature, complexity measures are better suited to handle such data. In this study, three complexity measures are used, namely Lempel-Ziv complexity (LZ), Sample Entropy (SampEn) and Effort-To-Compress (ETC). We determined the minimum length of RR tachogram required for characterizing the complexity of healthy young and healthy old hearts. All three measures indicated significantly lower complexity values for older subjects than for younger ones. However, the minimum length of heart-beat interval data needed differs for the three measures, with LZ and ETC needing as few as 10 samples, whereas SampEn requires at least 80 samples. Our study indicates that complexity measures such as LZ and ETC are good candidates for the analysis of cardiovascular dynamics, since they are able to work with very short RR tachograms.
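A sketch of one of the three measures, Lempel-Ziv (LZ76) complexity, applied to a binarized RR tachogram. The RR values below are made up for illustration, and binarization by successive differences is one common convention, not necessarily the paper's preprocessing.

```python
def lz_complexity(s):
    """LZ76 complexity (Kaspar-Schuster scheme): the number of phrases in
    the exhaustive-history parsing of a symbolic sequence."""
    n = len(s)
    i, k, l = 0, 1, 1       # copy start, match length, phrase start
    c, k_max = 1, 1
    while l + k <= n:
        if s[i + k - 1] == s[l + k - 1]:
            k += 1          # extend the copy (overlap allowed)
        else:
            k_max = max(k, k_max)
            i += 1
            if i == l:      # every earlier start point failed: new phrase
                c += 1
                l += k_max
                i, k, k_max = 0, 1, 1
            else:
                k = 1
    if k != 1:              # sequence ended mid-copy: count the last phrase
        c += 1
    return c

# Binarize a (hypothetical) RR tachogram in ms by successive differences,
# then measure its complexity.
rr = [812, 818, 809, 805, 821, 830, 814, 808, 811, 825]
symbols = "".join("1" if b > a else "0" for a, b in zip(rr, rr[1:]))
c = lz_complexity(symbols)
```

The classic test vector "0001101001000101" parses into 6 phrases (0 | 001 | 10 | 100 | 1000 | 101), while a constant sequence parses into just 2.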

16.
Chaos ; 19(3): 033102, 2009 Sep.
Article in English | MEDLINE | ID: mdl-19791982

ABSTRACT

Multiplexing of discrete chaotic signals in the presence of noise is investigated. The existing methods are based on chaotic synchronization, which is susceptible to noise and precision limitations and requires many iterates. Furthermore, most of these methods fail when multiplexing more than two discrete chaotic signals. We propose novel methods to multiplex multiple discrete chaotic signals based on the principle of symbolic-sequence invariance in the presence of noise and finite-precision implementation, by finding the initial condition of an arbitrarily long symbolic sequence of a chaotic map. Our methods work with single precision and with as few as 35 iterates. For two signals, our method is robust up to a 50% noise level.


Subject(s)
Algorithms , Computer Simulation , Models, Statistical , Nonlinear Dynamics , Oscillometry/methods , Signal Processing, Computer-Assisted
17.
Chaos ; 19(1): 013136, 2009 Mar.
Article in English | MEDLINE | ID: mdl-19335000

ABSTRACT

Uniquely decodable codes are central to lossless data compression in both classical and quantum communication systems. The Kraft-McMillan inequality is a basic result in information theory which gives a necessary and sufficient condition for a code to be uniquely decodable; it also has a quantum analogue. In this letter, we provide a novel dynamical systems proof of this inequality and its converse for prefix-free codes (no codeword is a prefix of another; the popular Huffman codes are an example). For constrained sources, the problem is still open.
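The inequality itself is easy to state in code: a q-ary code with codeword lengths l_i can be uniquely decodable only if the Kraft sum over q^(-l_i) is at most 1, and every prefix-free code satisfies it. This is a minimal sketch of the statement, independent of the paper's dynamical-systems proof.

```python
def kraft_sum(lengths, q=2):
    """Kraft sum: sum over i of q^(-l_i) for codeword lengths l_i."""
    return sum(q ** -l for l in lengths)

def is_prefix_free(codewords):
    """True iff no codeword is a proper prefix of another."""
    return not any(a != b and b.startswith(a)
                   for a in codewords for b in codewords)

code = ["0", "10", "110", "111"]          # a Huffman-style prefix code
ok = is_prefix_free(code) and kraft_sum(map(len, code)) <= 1

# Lengths [1, 1, 2] give 0.5 + 0.5 + 0.25 > 1: by Kraft-McMillan, no
# uniquely decodable binary code with these lengths exists.
impossible = kraft_sum([1, 1, 2]) <= 1
```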
