Results 1 - 15 of 15
1.
PLoS Comput Biol ; 19(11): e1011567, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37976328

ABSTRACT

Studies investigating neural information processing often implicitly ask two questions: which processing strategy out of several alternatives is used, and how this strategy is implemented in neural dynamics. A prime example is studies on predictive coding. These often ask whether confirmed predictions about inputs or prediction errors between internal predictions and inputs are passed on in a hierarchical neural system, while at the same time looking for the neural correlates of coding for errors and predictions. If we do not know exactly what a neural system predicts at any given moment, this results in a circular analysis, as has rightly been criticized. To circumvent such circular analysis, we propose to express information processing strategies (such as predictive coding) by local information-theoretic quantities, such that they can be estimated directly from neural data. We demonstrate our approach by investigating two opposing accounts of predictive coding-like processing strategies, where we quantify the building blocks of predictive coding, namely predictability of inputs and transfer of information, by local active information storage and local transfer entropy. We define testable hypotheses on the relationship between both quantities, allowing us to identify which of the assumed strategies was used. We demonstrate our approach on spiking data collected from the retinogeniculate synapse of the cat (N = 16). Applying our local information dynamics framework, we are able to show that the synapse codes for predictable rather than surprising input. To support our findings, we estimate quantities from the partial information decomposition framework, which allow us to differentiate whether the transferred information is primarily bottom-up sensory input or information transferred conditionally on the current state of the synapse. Supporting our local information-theoretic results, we find that the synapse preferentially transfers bottom-up information.
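
The abstract does not restate the local measures themselves; as a point of reference, the standard definitions from the local information dynamics literature that the paper builds on are given below, for a target process X with past state x_{t-1}^{(k)} and a source Y with past state y_{t-delta}^{(l)}. The embedding lengths k, l and the delay delta are left generic; this is a notational sketch, not a quotation from the paper.

    % local active information storage: predictability of x_t from its own past
    a_X(t) = \log_2 \frac{ p\left( x_t \mid x_{t-1}^{(k)} \right) }{ p(x_t) }

    % local transfer entropy: additional predictability contributed by the source's past
    t_{Y \to X}(t) = \log_2 \frac{ p\left( x_t \mid x_{t-1}^{(k)},\, y_{t-\delta}^{(l)} \right) }{ p\left( x_t \mid x_{t-1}^{(k)} \right) }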


Subject(s)
Brain , Cognition , Nerve Net , Synapses
2.
PLoS Comput Biol ; 19(1): e1010380, 2023 01.
Article in English | MEDLINE | ID: mdl-36701388

ABSTRACT

Nature relies on highly distributed computation for the processing of information in nervous systems across the entire animal kingdom. Such distributed computation can be more easily understood if decomposed into the three elementary components of information processing, i.e. storage, transfer and modification, and rigorous information-theoretic measures for these components exist. However, distributed computation is often also linked to neural dynamics exhibiting distinct rhythms. Thus, it would be beneficial to associate the above components of information processing with distinct rhythmic processes where possible. Here we focus on the storage of information in neural dynamics and introduce a novel spectrally resolved measure of active information storage (AIS). Drawing on intracortical recordings of neural activity in ferrets under anesthesia before and after loss of consciousness (LOC), we show that anesthesia-related modulation of AIS is highly specific to different frequency bands and that these frequency-specific effects differ across cortical layers and brain regions. We found that in the high/low gamma band the effects of anesthesia result in AIS modulation only in the supragranular layers, while in the alpha/beta band the strongest decrease in AIS is seen in infragranular layers. Finally, we show that the increase of spectral power at multiple frequencies, in particular at alpha and delta bands in frontal areas, that is often observed during LOC ('anteriorization') also impacts local information processing, but in a frequency-specific way: increases in isoflurane concentration induced a decrease in AIS at alpha frequencies, while they increased AIS in the delta frequency range (< 2 Hz). Thus, the analysis of spectrally resolved AIS provides valuable additional insights into changes in cortical information processing under anesthesia.


Subject(s)
Anesthesia , Isoflurane , Animals , Ferrets , Brain/physiology , Unconsciousness , Isoflurane/pharmacology , Electroencephalography
3.
Front Aging Neurosci ; 13: 631599, 2021.
Article in English | MEDLINE | ID: mdl-33897405

ABSTRACT

Aging is accompanied by unisensory decline. To compensate for this, two complementary strategies may increasingly be relied upon: first, older adults integrate more information from different sensory organs; second, according to the predictive coding (PC) model, we form "templates" (internal models or "priors") of the environment through our experiences. It is through increased life experience that older adults may rely more on these templates than younger adults. Multisensory integration and predictive coding would be effective strategies for the perception of near-threshold stimuli, which may, however, come at the cost of integrating irrelevant information. Both strategies can be studied in multisensory illusions because these require the integration of different sensory information, as well as an internal model of the world that can take precedence over sensory input. Here, we elicited a classic multisensory illusion, the sound-induced flash illusion, in younger (mean: 27 years, N = 25) and older (mean: 67 years, N = 28) adult participants while recording the magnetoencephalogram. Older adults perceived more illusions than younger adults. Older adults also had increased pre-stimulus beta-band activity compared to younger adults, as predicted by microcircuit theories of predictive coding, which suggest that priors and predictions are linked to beta-band activity. Transfer entropy analysis and dynamic causal modeling of pre-stimulus magnetoencephalography data revealed a stronger illusion-related modulation of cross-modal connectivity from auditory to visual cortices in older compared to younger adults. We interpret this as the neural correlate of an increased reliance on a cross-modal predictive template in older adults, leading to the illusory percept.

4.
PLoS One ; 16(3): e0248166, 2021.
Article in English | MEDLINE | ID: mdl-33735199

ABSTRACT

Scan pattern analysis has been discussed as a promising tool in the context of real-time gaze-based applications. In particular, information-theoretic measures of scan path predictability, such as the gaze transition entropy (GTE), have been proposed for detecting relevant changes in user state or task demand. These measures model scan patterns as first-order Markov chains, assuming that only the location of the previous fixation is predictive of the next fixation in time. However, this assumption may not be sufficient in general, as recent research has shown that scan patterns may also exhibit longer-range temporal correlations. Thus, we here evaluate active information storage (AIS) as a novel information-theoretic approach to quantifying scan path predictability in a dynamic task. In contrast to the GTE, the AIS provides means to statistically test and account for temporal correlations in scan path data beyond the last fixation. We compare AIS to GTE in a driving simulator experiment in which participants drove in a highway scenario, with trials defined by an experimental manipulation that encouraged the driver to start an overtaking maneuver. Two levels of difficulty were realized by varying the time left to complete the task. We found that individual observers indeed showed temporal correlations beyond a single past fixation and that the length of the correlation varied between observers. No effect of task difficulty on scan path predictability was observed for either AIS or GTE, but we found a significant increase in predictability during overtaking. Importantly, for participants for whom the first-order Markov chain assumption did not hold, this increase was detected only by AIS and not by GTE. We conclude that accounting for longer time horizons in scan paths in a personalized fashion is beneficial for interpreting gaze patterns in dynamic tasks.
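
A minimal sketch of the first-order GTE baseline described here, assuming fixations have already been assigned to discrete areas of interest (AOIs); the AOI count, the random example sequence, and the weighting of row entropies by empirical occupancy are illustrative assumptions, not the paper's exact implementation.

    import numpy as np

    def gaze_transition_entropy(fixations, n_aoi):
        """Occupancy-weighted entropy (bits) of the first-order fixation transition matrix."""
        counts = np.zeros((n_aoi, n_aoi))
        for a, b in zip(fixations[:-1], fixations[1:]):
            counts[a, b] += 1
        p_trans = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)  # P(next AOI | current AOI)
        p_occ = counts.sum(axis=1) / counts.sum()                            # empirical occupancy of each AOI
        with np.errstate(divide='ignore', invalid='ignore'):
            row_entropy = -np.nansum(p_trans * np.log2(p_trans), axis=1)
        return float(np.sum(p_occ * row_entropy))

    fix_seq = np.random.randint(0, 4, size=500)   # hypothetical sequence of fixated AOIs (4 AOIs)
    print(gaze_transition_entropy(fix_seq, n_aoi=4))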


Subject(s)
Attention/physiology , Automobile Driving , Eye Movement Measurements , Eye Movements/physiology , Individuality , Information Storage and Retrieval , Adult , Humans , User-Computer Interface , Young Adult
5.
Entropy (Basel) ; 23(2)2021 Jan 29.
Article in English | MEDLINE | ID: mdl-33573069

ABSTRACT

Entropy-based measures are an important tool for studying human gaze behavior under various conditions. In particular, gaze transition entropy (GTE) is a popular method to quantify the predictability of a visual scanpath as the entropy of transitions between fixations, and it has been shown to correlate with changes in task demand or changes in observer state. Measuring scanpath predictability is thus a promising approach to identifying viewers' cognitive states in behavioral experiments or gaze-based applications. However, GTE does not account for temporal dependencies beyond two consecutive fixations and may thus underestimate the actual predictability of the current fixation given past gaze behavior. Instead, we propose to quantify scanpath predictability by estimating the active information storage (AIS), which can account for dependencies spanning multiple fixations. AIS is calculated as the mutual information between a process's multivariate past state and its next value. It is thus able to measure how much information a sequence of past fixations provides about the next fixation, covering a longer temporal horizon. Applying the proposed approach, we were able to distinguish between induced observer states based on estimated AIS, providing first evidence that AIS may be used to infer user states and thereby improve human-machine interaction.
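
A minimal plug-in sketch of the AIS idea for a discretized fixation sequence: the mutual information between the k-fixation past state and the next fixation. The history length k, the toy data, and the absence of bias correction or surrogate testing (which a real analysis would need) are all simplifications.

    from collections import Counter
    import numpy as np

    def active_information_storage(seq, k):
        """Plug-in estimate (bits) of I(past k symbols ; next symbol)."""
        pairs = [(tuple(seq[i - k:i]), seq[i]) for i in range(k, len(seq))]
        n = len(pairs)
        p_joint = Counter(pairs)
        p_past = Counter(past for past, _ in pairs)
        p_next = Counter(nxt for _, nxt in pairs)
        ais = 0.0
        for (past, nxt), c in p_joint.items():
            p_xy = c / n
            ais += p_xy * np.log2(p_xy / ((p_past[past] / n) * (p_next[nxt] / n)))
        return ais

    seq = list(np.random.randint(0, 4, size=2000))   # hypothetical AOI-coded fixation sequence
    print(active_information_storage(seq, k=2))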

6.
PLoS Comput Biol ; 16(12): e1008526, 2020 12.
Article in English | MEDLINE | ID: mdl-33370259

ABSTRACT

Information transfer, measured by transfer entropy, is a key component of distributed computation. It is therefore important to understand the pattern of information transfer in order to unravel the distributed computational algorithms of a system. Since distributed computation in many natural systems is thought to rely on rhythmic processes, a frequency-resolved measure of information transfer is highly desirable. Here, we present a novel algorithm, and its efficient implementation, to separately identify the frequencies at which information is sent and received in a network. Our approach relies on the invertible maximum overlap discrete wavelet transform (MODWT) for the creation of surrogate data in the computation of transfer entropy and entirely avoids filtering of the original signals. The approach thereby sidesteps well-known problems due to phase shifts and the ineffectiveness of filtering in the information-theoretic setting. We also show that measuring frequency-resolved information transfer is a partial information decomposition problem that cannot be fully resolved to date and discuss the implications of this issue. Last, we evaluate the performance of our algorithm on simulated data and apply it to human magnetoencephalography (MEG) recordings and to local field potential recordings in the ferret. In human MEG we demonstrate top-down information flow in temporal cortex from very high frequencies (above 100 Hz) to both similarly high frequencies and to frequencies around 20 Hz, i.e. a complex spectral configuration of cortical information transmission that has not been described before. In the ferret we show that the prefrontal cortex sends information at low frequencies (4-8 Hz) to early visual cortex (V1), while V1 receives the information at high frequencies (> 125 Hz).
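
The abstract only names the ingredients (wavelet-domain surrogates instead of filtering); a rough sketch of that general idea using PyWavelets' undecimated (stationary) wavelet transform is shown below. The wavelet, decomposition level, and the circular-shift surrogate are assumptions for illustration and are not the paper's exact scheme; the transfer entropy comparison itself would be done with an external estimator.

    import numpy as np
    import pywt

    rng = np.random.default_rng(0)
    x = rng.standard_normal(1024)                 # toy source signal; length must be a multiple of 2**level

    level = 4
    coeffs = pywt.swt(x, 'db4', level=level)      # undecimated transform: list of (cA, cD) pairs per level

    scale = 2                                     # index of the scale whose temporal structure we destroy
    cA, cD = coeffs[scale]
    coeffs[scale] = (cA, np.roll(cD, rng.integers(1, len(cD))))   # circularly shift detail coefficients

    x_surrogate = pywt.iswt(coeffs, 'db4')        # reconstruct; only the chosen scale's timing is scrambled
    # TE(x -> y) would then be compared against TE(x_surrogate -> y) from an external estimator.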


Subject(s)
Systems Biology , Wavelet Analysis , Algorithms , Animals , Entropy , Ferrets , Humans , Magnetoencephalography
7.
Netw Neurosci ; 3(3): 827-847, 2019.
Article in English | MEDLINE | ID: mdl-31410382

ABSTRACT

Network inference algorithms are valuable tools for the study of large-scale neuroimaging datasets. Multivariate transfer entropy is well suited for this task, being a model-free measure that captures nonlinear and lagged dependencies between time series to infer a minimal directed network model. Greedy algorithms have been proposed to efficiently deal with high-dimensional datasets while avoiding redundant inferences and capturing synergistic effects. However, multiple statistical comparisons may inflate the false-positive rate and are computationally demanding, which has limited the size of previous validation studies. The algorithm we present, as implemented in the IDTxl open-source software, addresses these challenges by employing hierarchical statistical tests to control the family-wise error rate and to allow for efficient parallelization. The method was validated on synthetic datasets involving random networks of increasing size (up to 100 nodes), for both linear and nonlinear dynamics. Performance increased with the length of the time series, reaching consistently high precision, recall, and specificity (>98% on average) for 10,000 time samples. Varying the statistical significance threshold showed a more favorable precision-recall trade-off for longer time series. Both the network size and the sample size are one order of magnitude larger than previously demonstrated, showing feasibility for typical EEG and magnetoencephalography experiments.
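
A minimal usage sketch of the kind of analysis the IDTxl toolbox performs, modeled on the package's public examples; the settings keys, estimator name, and toy data dimensions are assumptions that may differ between IDTxl versions and from the validation setup described here.

    import numpy as np
    from idtxl.data import Data
    from idtxl.multivariate_te import MultivariateTE

    raw = np.random.randn(5, 1000, 10)               # toy data: processes x samples x replications
    data = Data(raw, dim_order='psr')

    settings = {'cmi_estimator': 'JidtKraskovCMI',   # nearest-neighbor CMI estimator (Java backend)
                'max_lag_sources': 5,
                'min_lag_sources': 1}

    results = MultivariateTE().analyse_network(settings=settings, data=data)
    adjacency = results.get_adjacency_matrix(weights='binary', fdr=True)   # inferred directed edges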

8.
J Neurosci ; 37(34): 8273-8283, 2017 08 23.
Article in English | MEDLINE | ID: mdl-28751458

ABSTRACT

Predictive coding suggests that the brain infers the causes of its sensations by combining sensory evidence with internal predictions based on available prior knowledge. However, the neurophysiological correlates of (pre)activated prior knowledge serving these predictions are still unknown. Based on the idea that such preactivated prior knowledge must be maintained until needed, we measured the amount of maintained information in neural signals via the active information storage (AIS) measure. AIS was calculated on whole-brain beamformer-reconstructed source time courses from MEG recordings of 52 human subjects during the baseline of a Mooney face/house detection task. Preactivation of prior knowledge for faces showed as α-band-related and β-band-related AIS increases in content-specific areas; these AIS increases were behaviorally relevant in the brain's fusiform face area. Further, AIS allowed decoding of the cued category on a trial-by-trial basis. Our results support accounts indicating that activated prior knowledge and the corresponding predictions are signaled in low-frequency activity (<30 Hz).
SIGNIFICANCE STATEMENT: Our perception is not only determined by the information our eyes/retina and other sensory organs receive from the outside world, but also depends strongly on information already present in our brains, such as prior knowledge about specific situations or objects. A currently popular theory in neuroscience, predictive coding theory, suggests that this prior knowledge is used by the brain to form internal predictions about upcoming sensory information. However, neurophysiological evidence for this hypothesis is scarce, mostly because this kind of evidence requires strong a priori assumptions about the specific predictions the brain makes and the brain areas involved. Using a novel, assumption-free approach, we find that face-related prior knowledge and the derived predictions are represented in low-frequency brain activity.


Subject(s)
Brain Waves/physiology , Brain/physiology , Facial Recognition/physiology , Pattern Recognition, Visual/physiology , Photic Stimulation/methods , Adult , Female , Forecasting , Humans , Magnetoencephalography/methods , Male , Young Adult
9.
PLoS Comput Biol ; 13(6): e1005511, 2017 Jun.
Article in English | MEDLINE | ID: mdl-28570661

ABSTRACT

The disruption of coupling between brain areas has been suggested as the mechanism underlying loss of consciousness in anesthesia. This hypothesis has been tested previously by measuring the information transfer between brain areas and by taking reduced information transfer as a proxy for decoupling. Yet, information transfer is a function of the amount of information available in the information source, such that transfer decreases even for unchanged coupling when less source information is available. Therefore, we reconsidered past interpretations of reduced information transfer as a sign of decoupling and asked whether impaired local information processing leads to a loss of information transfer. An important prediction of this alternative hypothesis is that changes in locally available information (signal entropy) should be at least as pronounced as changes in information transfer. We tested this prediction by recording local field potentials in two ferrets after administration of isoflurane at concentrations of 0.0%, 0.5%, and 1.0%. We found strong decreases in source entropy under isoflurane in area V1 and the prefrontal cortex (PFC), as predicted by our alternative hypothesis. The decrease in source entropy was stronger in PFC than in V1. Information transfer between V1 and PFC was reduced bidirectionally, but with a stronger decrease from PFC to V1. This links the stronger decrease in information transfer to the stronger decrease in source entropy, suggesting that reduced source entropy reduces information transfer. This conclusion fits the observation that the synaptic targets of isoflurane are located in local cortical circuits rather than on the synapses formed by interareal axonal projections. Thus, changes in information transfer under isoflurane seem to be a consequence of changes in local processing rather than of decoupling between brain areas. We suggest that changes in source entropy must be considered whenever changes in information transfer are interpreted as decoupling.
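
The abstract's argument that less source information caps measurable transfer can be made explicit with a standard bound, sketched here in generic notation for discrete (e.g., binned) signals; this is a textbook-style inequality, not a formula taken from the paper. Writing transfer entropy as a conditional mutual information,

    TE_{X \to Y} \;=\; I\!\left( Y_t ;\, X_{t-\delta}^{(l)} \,\middle|\, Y_{t-1}^{(k)} \right)
                 \;\le\; H\!\left( X_{t-\delta}^{(l)} \,\middle|\, Y_{t-1}^{(k)} \right)
                 \;\le\; H\!\left( X_{t-\delta}^{(l)} \right),

so a drop in the entropy of the source's past state necessarily limits the transfer entropy that can be measured, even if the coupling itself is unchanged.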


Subject(s)
Anesthetics, Inhalation/pharmacology , Consciousness , Isoflurane/pharmacology , Mental Processes/drug effects , Unconsciousness , Anesthesia , Animals , Consciousness/drug effects , Consciousness/physiology , Female , Ferrets , Prefrontal Cortex/drug effects , Unconsciousness/chemically induced , Unconsciousness/physiopathology
10.
PLoS One ; 10(10): e0140530, 2015.
Article in English | MEDLINE | ID: mdl-26479713

ABSTRACT

Network graphs have become a popular tool to represent complex systems composed of many interacting subunits; especially in neuroscience, network graphs are increasingly used to represent and analyze functional interactions between multiple neural sources. Interactions are often reconstructed using pairwise bivariate analyses, overlooking their multivariate nature: investigating the effect of one source on a target requires taking all other sources into account as potential nuisance variables, and combinations of sources may act jointly on a given target. Bivariate analyses therefore produce networks that may contain spurious interactions, which reduce the interpretability of the network and its graph metrics. A truly multivariate reconstruction, however, is computationally intractable because of the combinatorial explosion in the number of potential interactions. Thus, we have to resort to approximative methods to handle the intractability of multivariate interaction reconstruction and thereby enable the use of networks in neuroscience. Here, we suggest such an approximative approach in the form of an algorithm that extends fast bivariate interaction reconstruction by identifying potentially spurious interactions post hoc: the algorithm uses interaction delays reconstructed for directed bivariate interactions to tag potentially spurious edges on the basis of their timing signatures in the context of the surrounding network. Such tagged interactions may then be pruned, which produces a statistically conservative network approximation that is guaranteed to contain only non-spurious interactions. We describe the algorithm and present a reference implementation in MATLAB to test the algorithm's performance on simulated networks as well as networks derived from magnetoencephalographic data. We discuss the algorithm in relation to other approximative multivariate methods and highlight suitable application scenarios. Our approach is a tractable and data-efficient way of reconstructing approximative networks of multivariate interactions. It is preferable if available data are limited or if fully multivariate approaches are computationally infeasible.
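
The paper's reference implementation is in MATLAB; the tagging rule it describes can be sketched in a few lines of Python. The toy delays, the tolerance value, and the reduction to a single two-step path check are illustrative simplifications of the actual algorithm (which also handles statistics and larger path sets).

    delays = {('A', 'B'): 10, ('B', 'C'): 15, ('A', 'C'): 25}   # reconstructed interaction delays (ms, toy values)
    tolerance = 2                                               # ms; an assumed timing tolerance

    def tag_cascade_candidates(delays, tolerance):
        """Flag direct edges whose delay matches the summed delays of an indirect two-step path."""
        nodes = {n for edge in delays for n in edge}
        tagged = set()
        for (src, tgt), d_direct in delays.items():
            for mid in nodes - {src, tgt}:
                d1, d2 = delays.get((src, mid)), delays.get((mid, tgt))
                if d1 is not None and d2 is not None and abs(d_direct - (d1 + d2)) <= tolerance:
                    tagged.add((src, tgt))       # timing signature consistent with a cascade via `mid`
        return tagged

    print(tag_cascade_candidates(delays, tolerance))   # {('A', 'C')} -- candidate for pruning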


Subject(s)
Computer Graphics , Models, Neurological , Nerve Net/physiology , Algorithms , Brain/physiology , Humans , Magnetoencephalography
11.
Annu Int Conf IEEE Eng Med Biol Soc ; 2015: 4045-8, 2015 Aug.
Article in English | MEDLINE | ID: mdl-26737182

ABSTRACT

In anesthesia research it is an open question how general anesthetics lead to loss of consciousness (LOC). It has been proposed that LOC may be caused by the disruption of cortical information processing, preventing information integration. Therefore, recent studies investigating information processing under anesthesia have focused on changes in information transfer, measured by transfer entropy (TE). However, this complex technique has often not been applied rigorously: studies used time series in symbolic representation, compared TE differences without accounting for neural conduction delays, or did not account for signal history. Here, we used current best practice in TE estimation to investigate information transfer under anesthesia: we conducted simultaneous recordings in primary visual cortex (V1) and prefrontal cortex (PFC) of head-fixed ferrets in a dark environment under different levels of anesthesia (awake, 0.5% isoflurane, 1.0% isoflurane). To elucidate the reasons for changes in TE, we further quantified information processing within brain areas by estimating active information storage (AIS) as an estimator of predictable information, and Lempel-Ziv complexity (LZC) as an estimator of signal entropy. Under anesthesia, we found a reduction in information transfer (TE) between PFC and V1, with a stronger reduction in the feedback direction (PFC to V1), validating previous results. Furthermore, entropy (LZC) was reduced and activity became more predictable, as indicated by higher values of AIS. We conclude that higher anesthesia concentrations indeed lead to reduced inter-areal information transfer, which may be partly caused by decreases in local entropy and increases in local predictability. In revealing a possible reason for reduced TE that is potentially independent of inter-areal coupling, we demonstrate the value of directly quantifying information processing in addition to focusing on dynamic properties such as coupling strength.
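
For reference, a common way to obtain a signal-entropy estimate of the kind used here is the Lempel-Ziv complexity of a median-binarized signal; below is a sketch of the classic Kaspar-Schuster phrase-counting scheme. The binarization and normalization choices are common conventions and not necessarily those used in the paper.

    import numpy as np

    def lz76_complexity(s):
        """Number of phrases in the LZ76 parsing of the symbol sequence s (Kaspar-Schuster counting)."""
        n = len(s)
        i, k, l = 0, 1, 1
        c, k_max = 1, 1
        while True:
            if s[i + k - 1] == s[l + k - 1]:
                k += 1
                if l + k > n:        # end of sequence reached while copying from the history
                    c += 1
                    break
            else:
                k_max = max(k, k_max)
                i += 1
                if i == l:           # history exhausted: the current phrase is new
                    c += 1
                    l += k_max
                    if l + 1 > n:
                        break
                    i, k, k_max = 0, 1, 1
                else:
                    k = 1
        return c

    x = np.random.randn(5000)                               # toy signal standing in for an LFP trace
    s = (x > np.median(x)).astype(int).tolist()             # median binarization
    print(lz76_complexity(s) / (len(s) / np.log2(len(s))))  # normalized LZC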


Subject(s)
Anesthesia/methods , Prefrontal Cortex/physiology , Visual Cortex/physiology , Animals , Brain/drug effects , Brain/physiology , Entropy , Ferrets , Isoflurane/pharmacology , Prefrontal Cortex/drug effects , Unconsciousness , Visual Cortex/drug effects , Wakefulness
12.
PLoS One ; 9(7): e102833, 2014.
Article in English | MEDLINE | ID: mdl-25068489

ABSTRACT

Information theory allows us to investigate information processing in neural systems in terms of information transfer, storage and modification. Especially the measure of information transfer, transfer entropy, has seen a dramatic surge of interest in neuroscience. Estimating transfer entropy from two processes requires the observation of multiple realizations of these processes to estimate the associated probability density functions. To obtain these necessary observations, available estimators typically assume stationarity of processes to allow pooling of observations over time. This assumption, however, is a major obstacle to the application of these estimators in neuroscience, as observed processes are often non-stationary. As a solution, Gomez-Herrero and colleagues showed theoretically that the stationarity assumption may be avoided by estimating transfer entropy from an ensemble of realizations. Such an ensemble of realizations is often readily available in neuroscience experiments in the form of experimental trials. Thus, in this work we combine the ensemble method with a recently proposed transfer entropy estimator to make transfer entropy estimation applicable to non-stationary time series. We present an efficient implementation of the approach that is suitable for the increased computational demand of the ensemble method's practical application. In particular, we use a massively parallel implementation for a graphics processing unit to handle the most computationally demanding aspects of the ensemble method for transfer entropy estimation. We test the performance and robustness of our implementation on data from numerical simulations of stochastic processes. We also demonstrate the applicability of the ensemble method to magnetoencephalographic data. While we mainly evaluate the proposed method on neuroscience data, we expect it to be applicable in a variety of fields concerned with the analysis of information transfer in complex biological, social, and artificial systems.
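
The core of the ensemble idea can be sketched compactly in generic notation (embedding parameters implicit); this is a paraphrase of the cited approach, not the paper's exact estimator. At a given time point t, the densities entering the transfer entropy are estimated from the R trials, optionally including a small temporal neighborhood of t, rather than from samples pooled over time within a single realization:

    \hat{T}_{X \to Y}(t) = \frac{1}{R} \sum_{r=1}^{R}
        \log \frac{ \hat{p}\left( y_t^{(r)} \mid \mathbf{y}_{t-1}^{(r)},\, \mathbf{x}_{t-\delta}^{(r)} \right) }
                  { \hat{p}\left( y_t^{(r)} \mid \mathbf{y}_{t-1}^{(r)} \right) }

with each \hat{p} estimated from the ensemble of state vectors collected across trials at that time point, which is what makes the estimate valid for non-stationary processes.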


Subject(s)
Models, Theoretical , Algorithms
13.
Front Neuroinform ; 8: 9, 2014.
Article in English | MEDLINE | ID: mdl-24592235

ABSTRACT

Autism spectrum disorder (ASD) is a common developmental disorder characterized by communication difficulties and impaired social interaction. Recent results suggest altered brain dynamics as a potential cause of symptoms in ASD. Here, we aim to describe potential information-processing consequences of these alterations by measuring active information storage (AIS), a key quantity in the theory of distributed computation in biological networks. AIS is defined as the mutual information between the past state of a process and its next measurement. It measures the amount of stored information that is used for the computation of the next time step of a process. AIS is high for rich but predictable dynamics. We recorded magnetoencephalography (MEG) signals in 10 ASD patients and 14 matched control subjects in a visual task. After a beamformer source analysis, 12 task-relevant sources were obtained. For these sources, stationary baseline activity was analyzed using AIS. Our results showed a decrease of AIS values in the hippocampus of ASD patients in comparison with controls, meaning that brain signals in ASD were either less predictable, reduced in their dynamic richness, or both. Our study suggests that AIS is useful for detecting abnormal dynamics in ASD. The observed changes in AIS are compatible with Bayesian theories of reduced use or precision of priors in ASD.

14.
Alzheimer Dis Assoc Disord ; 27(4): 293-301, 2013.
Article in English | MEDLINE | ID: mdl-23751370

ABSTRACT

This paper (1) highlights the relevance of functional communication as an outcome parameter in Alzheimer disease (AD) clinical trials; (2) identifies studies that have reported functional communication outcome measures in AD clinical trials; (3) critically reviews the scales of functional communication used in recent AD clinical trials by summarizing the sources of information, characteristics, and available psychometric data for these scales; and (4) evaluates whether these measures actually or only partially assess functional communication. To provide direction for future research and to generate suggestions for the development of a valid and reliable functional communication scale suited to the needs of AD clinical trials, we have included not only functional communication scales but also related concepts that offer useful starting points for developing such a scale. As outcome measures for AD clinical trials, the 6 identified papers use 6 different scales, covering functional communication and related concepts. All of the scales appear to have questionable psychometric properties, but they still provide a promising basis for the creation of a functional communication scale. We conclude with concrete suggestions on how to combine the advantages of the existing scales in future research aimed at developing a valid and reliable functional communication scale for the needs of AD clinical trials.


Subject(s)
Alzheimer Disease/psychology , Alzheimer Disease/therapy , Communication , Clinical Trials as Topic/methods , Humans , Treatment Outcome
15.
Article in English | MEDLINE | ID: mdl-23366725

ABSTRACT

To understand the function of networks, we have to identify not only the structure of their interactions but also the timing of these interactions, as compromised timing may disrupt network function. We demonstrate how both questions can be addressed using a modified estimator of transfer entropy. Transfer entropy is an implementation of Wiener's principle of observational causality based on information theory and detects arbitrary linear and non-linear interactions. Using a modified estimator based on delayed states of the driving system and independently optimized delayed states of the receiving system, we show that transfer entropy values peak when the assumed delay of the driving system's state equals the true interaction delay. In addition, we show how delays reconstructed from a bivariate transfer entropy analysis of a network can be used to label spurious interactions arising from cascade effects, and we apply this approach to local field potential (LFP) and magnetoencephalography (MEG) data.
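
A compact toy illustration of the delay-scanning principle (not the paper's estimator): for a linear, Gaussian toy system, transfer entropy reduces to a log-ratio of regression residual variances, and scanning the assumed source delay recovers the true interaction delay as the location of the peak. The one-sample embedding, the coupling parameters, and the Gaussian shortcut are simplifications.

    import numpy as np

    rng = np.random.default_rng(1)
    n, true_delay = 20000, 7
    x = rng.standard_normal(n)
    y = np.zeros(n)
    for t in range(true_delay, n):                           # y driven by x with a 7-sample delay
        y[t] = 0.5 * y[t - 1] + 0.8 * x[t - true_delay] + 0.3 * rng.standard_normal()

    def gaussian_te(x, y, d):
        """TE x->y (nats) for assumed delay d, from residual variances of two linear regressions."""
        start = max(d, 1)
        yt, yp, xd = y[start:], y[start - 1:-1], x[start - d:len(x) - d]
        a_self = np.column_stack([np.ones_like(yp), yp])             # predict y_t from y_{t-1} only
        a_full = np.column_stack([np.ones_like(yp), yp, xd])         # additionally use x_{t-d}
        r_self = yt - a_self @ np.linalg.lstsq(a_self, yt, rcond=None)[0]
        r_full = yt - a_full @ np.linalg.lstsq(a_full, yt, rcond=None)[0]
        return 0.5 * np.log(np.var(r_self) / np.var(r_full))

    candidate_delays = list(range(1, 16))
    te_values = [gaussian_te(x, y, d) for d in candidate_delays]
    print(candidate_delays[int(np.argmax(te_values))])       # expected to recover true_delay = 7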


Subject(s)
Entropy , Image Processing, Computer-Assisted , Information Theory , Action Potentials/physiology , Causality , Humans , Magnetoencephalography , Multivariate Analysis , Reproducibility of Results