1.
Chaos ; 34(6), 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38865092

ABSTRACT

There has recently been an explosion of interest in how "higher-order" structures emerge in complex systems composed of many interacting elements (often called "synergistic" information). This "emergent" organization has been found in a variety of natural and artificial systems, although at present, the field lacks a unified understanding of what the consequences of higher-order synergies and redundancies are for the systems under study. Typical research treats the presence (or absence) of synergistic information as a dependent variable and reports changes in the level of synergy in response to some change in the system. Here, we attempt to flip the script: rather than treating higher-order information as a dependent variable, we use evolutionary optimization to evolve Boolean networks with significant higher-order redundancies, synergies, or statistical complexity. We then analyze these evolved populations of networks using established tools for characterizing discrete dynamics: the number of attractors, the average transient length, and the Derrida coefficient. We also assess the capacity of the systems to integrate information. We find that high-synergy systems are unstable and chaotic, but with a high capacity to integrate information. In contrast, evolved redundant systems are extremely stable, but have negligible capacity to integrate information. Finally, the complex systems that balance integration and segregation (known as Tononi-Sporns-Edelman complexity) show features of both chaoticity and stability, with a greater capacity to integrate information than the redundant systems while being more stable than the random and synergistic systems. We conclude that there may be a fundamental trade-off between the robustness of a system's dynamics and its capacity to integrate information (which inherently requires flexibility and sensitivity) and that certain kinds of complexity naturally balance this trade-off.
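
The Derrida coefficient mentioned above admits a compact estimate. A minimal sketch (not the authors' code; network size, in-degree, and trial count are arbitrary choices): one-bit perturbations are applied to random states of a random Boolean network, and the mean Hamming distance after one synchronous update is reported. Values above 1 suggest chaotic dynamics, values below 1 ordered dynamics.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_boolean_network(n, k):
    """Random Boolean network: each node reads k distinct random inputs
    through a random lookup table."""
    inputs = np.array([rng.choice(n, size=k, replace=False) for _ in range(n)])
    tables = rng.integers(0, 2, size=(n, 2 ** k))
    return inputs, tables

def step(state, inputs, tables):
    """Synchronously update every node from its lookup table."""
    k = inputs.shape[1]
    idx = (state[inputs] * 2 ** np.arange(k)).sum(axis=1)
    return tables[np.arange(len(state)), idx]

def derrida_coefficient(inputs, tables, trials=2000):
    """Mean Hamming distance after one step, starting from random
    state pairs that differ in exactly one node."""
    n = inputs.shape[0]
    d = 0.0
    for _ in range(trials):
        a = rng.integers(0, 2, size=n)
        b = a.copy()
        b[rng.integers(n)] ^= 1
        d += np.sum(step(a, inputs, tables) != step(b, inputs, tables))
    return d / trials

inputs, tables = random_boolean_network(n=20, k=2)
print("Derrida coefficient:", derrida_coefficient(inputs, tables))
```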

2.
PLoS One ; 19(2): e0297128, 2024.
Article in English | MEDLINE | ID: mdl-38315691

ABSTRACT

Since its introduction, the partial information decomposition (PID) has emerged as a powerful, information-theoretic technique useful for studying the structure of (potentially higher-order) interactions in complex systems. Despite its utility, the applicability of the PID is restricted by the need to assign elements as either "sources" or "targets", as well as the specific structure of the mutual information itself. Here, I introduce a generalized information decomposition that relaxes the source/target distinction while still satisfying the basic intuitions about information. This approach is based on the decomposition of the Kullback-Leibler divergence, and consequently allows for the analysis of any information gained when updating from an arbitrary prior to an arbitrary posterior. As a result, any information-theoretic measure that can be written as a linear combination of Kullback-Leibler divergences admits a decomposition in the style of Williams and Beer, including the total correlation, the negentropy, and the mutual information as special cases. This paper explores how the generalized information decomposition can reveal novel insights into existing measures, as well as the nature of higher-order synergies. We show that synergistic information is intimately related to the well-known Tononi-Sporns-Edelman (TSE) complexity, and that synergistic information requires a similar integration/segregation balance as a high TSE complexity. Finally, I end with a discussion of how this approach fits into other attempts to generalize the PID and the possibilities for empirical applications.
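
The central identity here, that the mutual information is an expected Kullback-Leibler divergence from prior to posterior, can be checked numerically. A minimal sketch with an assumed toy joint distribution (the decomposition into Williams-and-Beer-style atoms is beyond this snippet):

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence D(p || q) in bits."""
    mask = p > 0
    return np.sum(p[mask] * np.log2(p[mask] / q[mask]))

# A toy joint distribution p(x, y) over two binary variables.
pxy = np.array([[0.4, 0.1],
                [0.1, 0.4]])
px = pxy.sum(axis=1)
py = pxy.sum(axis=0)

# Mutual information as the expected divergence of posterior from prior:
# I(X;Y) = sum_x p(x) D( p(y|x) || p(y) ).
mi_as_kl = sum(px[x] * kl(pxy[x] / px[x], py) for x in range(2))

# Direct mutual information for comparison.
mi_direct = kl(pxy.flatten(), np.outer(px, py).flatten())
print(mi_as_kl, mi_direct)  # both about 0.278 bits
```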

3.
Proc Natl Acad Sci U S A ; 120(30): e2300888120, 2023 07 25.
Article in English | MEDLINE | ID: mdl-37467265

ABSTRACT

The standard approach to modeling the human brain as a complex system is with a network, where the basic unit of interaction is a pairwise link between two brain regions. While powerful, this approach is limited by the inability to assess higher-order interactions involving three or more elements directly. In this work, we explore a method for capturing higher-order dependencies in multivariate data: the partial entropy decomposition (PED). Our approach decomposes the joint entropy of the whole system into a set of nonnegative atoms that describe the redundant, unique, and synergistic interactions that compose the system's structure. The PED gives insight into the mathematics of functional connectivity and its limitations. When applied to resting-state fMRI data, we find robust evidence of higher-order synergies that are largely invisible to standard functional connectivity analyses. Our approach can also be localized in time, allowing a frame-by-frame analysis of how the distributions of redundancies and synergies change over the course of a recording. We find that different ensembles of regions can transiently change from being redundancy-dominated to synergy-dominated and that this switching is structured in time. These results provide strong evidence that there is a large space of unexplored structures in human brain data, largely missed by the focus on bivariate network connectivity models. This synergistic structure is dynamic in time and will likely illuminate interesting links between brain and behavior. Beyond brain-specific applications, the PED provides a very general approach for understanding higher-order structures in a variety of complex systems.
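
The full PED requires a redundancy lattice over entropy atoms, which is beyond a short example, but the time-localization idea is simple to illustrate: every frame x_t carries a local (pointwise) entropy h(x_t) = -log2 p(x_t), and averaging the local values recovers the Shannon joint entropy. A minimal sketch on synthetic binarized data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "recording": 3 binarized regions observed over 5000 frames.
data = rng.integers(0, 2, size=(5000, 3))

# Empirical probability of each joint configuration.
codes = (data * 2 ** np.arange(3)).sum(axis=1)
counts = np.bincount(codes, minlength=8)
p = counts / counts.sum()

# Local (pointwise) joint entropy of each frame: h(x_t) = -log2 p(x_t).
# The frame-by-frame trace can be inspected on its own; its mean is
# the ordinary Shannon joint entropy.
local_h = -np.log2(p[codes])
print("mean local entropy:   ", local_h.mean())
print("Shannon joint entropy:", -(p[p > 0] * np.log2(p[p > 0])).sum())
```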


Subject(s)
Brain Mapping; Brain; Humans; Entropy; Brain/diagnostic imaging; Brain Mapping/methods; Magnetic Resonance Imaging/methods; Rest
4.
Neuroimage ; 277: 120266, 2023 08 15.
Article in English | MEDLINE | ID: mdl-37414231

ABSTRACT

Dynamic models of ongoing BOLD fMRI brain dynamics and models of communication strategies have been two important approaches to understanding how brain network structure constrains function. However, dynamic models have yet to widely incorporate one of the most important insights from communication models: the brain may not use all of its connections in the same way or at the same time. Here we present a variation of a phase-delayed Kuramoto coupled-oscillator model that dynamically limits communication between nodes on each time step. An active subgraph of the empirically derived anatomical brain network is chosen in accordance with the local dynamic state on every time step, thus coupling dynamics and network structure in a novel way. We analyze this model with respect to its fit to empirical time-averaged functional connectivity, finding that, with the addition of only one parameter, it significantly outperforms standard Kuramoto models with phase delays. We also perform analyses on the novel time series of active edges it produces, demonstrating a slowly evolving topology moving through intermittent episodes of integration and segregation. We hope to demonstrate that the exploration of novel modeling mechanisms and the investigation of dynamics of networks, in addition to dynamics on networks, may advance our understanding of the relationship between brain structure and function.
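
A minimal sketch of a phase-lagged Kuramoto model with a state-dependent active subgraph. The gating rule below (keep an edge only when its endpoints are nearly in phase) is a hypothetical stand-in, as are the random network and all parameter values; the paper's actual selection rule and empirical connectome are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

n, steps, dt = 40, 2000, 0.05
K, beta = 2.0, 0.1                             # coupling and phase lag (assumed)
A = (rng.random((n, n)) < 0.2).astype(float)   # stand-in anatomical network
np.fill_diagonal(A, 0)
A = np.maximum(A, A.T)
omega = rng.normal(1.0, 0.1, size=n)           # natural frequencies
theta = rng.uniform(0, 2 * np.pi, size=n)

active_history = []
for _ in range(steps):
    diff = theta[None, :] - theta[:, None]     # diff[i, j] = theta_j - theta_i
    # Hypothetical gating: an edge is "active" this step only if its
    # endpoints are nearly in phase (one concrete choice among many).
    active = A * (np.cos(diff) > 0.5)
    theta = theta + dt * (omega + K * (active * np.sin(diff - beta)).sum(axis=1) / n)
    active_history.append(active.sum() / 2)    # undirected active-edge count

print("mean active edges per step:", np.mean(active_history))
```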


Subject(s)
Brain; Models, Neurological; Humans; Neural Pathways; Brain/diagnostic imaging; Brain Mapping/methods; Magnetic Resonance Imaging/methods; Nerve Net/diagnostic imaging
6.
Commun Biol ; 6(1): 451, 2023 04 24.
Article in English | MEDLINE | ID: mdl-37095282

ABSTRACT

One of the most well-established tools for modeling the brain is the functional connectivity network, which is constructed from pairs of interacting brain regions. While powerful, the network model is limited by the restriction that only pairwise dependencies are considered and potentially higher-order structures are missed. Here, we explore how multivariate information theory reveals higher-order dependencies in the human brain. We begin with a mathematical analysis of the O-information, showing analytically and numerically how it is related to previously established information-theoretic measures of complexity. We then apply the O-information to brain data, showing that synergistic subsystems are widespread in the human brain. Highly synergistic subsystems typically sit between canonical functional networks, and may serve an integrative role. We then use simulated annealing to find maximally synergistic subsystems, finding that such systems typically comprise ≈10 brain regions, recruited from multiple canonical brain systems. Though ubiquitous, highly synergistic subsystems are invisible when considering pairwise functional connectivity, suggesting that higher-order dependencies form a kind of shadow structure that has been unrecognized by established network-based analyses. We assert that higher-order interactions in the brain represent an under-explored space that, when made accessible with the tools of multivariate information theory, may offer novel scientific insights.
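
The O-information itself is straightforward to compute from discrete data: Ω(X) = (N - 2) H(X) + Σ_i [H(X_i) - H(X_{-i})], with positive values indicating redundancy dominance and negative values synergy dominance. A minimal sketch, verified on a synergistic (XOR) and a redundant (copy) triplet:

```python
import numpy as np

def entropy(cols):
    """Shannon entropy (bits) of the joint distribution of the columns."""
    _, counts = np.unique(cols, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def o_information(data):
    """O-information of the columns of data (rows = samples)."""
    n = data.shape[1]
    total = (n - 2) * entropy(data)
    for i in range(n):
        total += entropy(data[:, [i]]) - entropy(np.delete(data, i, axis=1))
    return total

rng = np.random.default_rng(3)
x = rng.integers(0, 2, size=(5000, 2))
xor = np.column_stack([x, x.sum(axis=1) % 2])          # synergistic triplet
copy = np.tile(rng.integers(0, 2, size=(5000, 1)), 3)  # redundant triplet
print("XOR triplet: ", o_information(xor))   # about -1 bit (synergy)
print("copy triplet:", o_information(copy))  # about +1 bit (redundancy)
```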


Subject(s)
Brain Mapping; Information Theory; Humans; Brain; Cerebral Cortex; Models, Neurological
7.
PLoS One ; 18(3): e0282950, 2023.
Article in English | MEDLINE | ID: mdl-36952508

ABSTRACT

A core feature of complex systems is that the interactions between elements in the present causally constrain their own futures, and the futures of other elements as the system evolves through time. To fully model all of these interactions (between elements, as well as ensembles of elements), it is possible to decompose the total information flowing from past to future into a set of non-overlapping temporal interactions that describe all the different modes by which information can be stored, transferred, or modified. To achieve this, I propose a novel information-theoretic measure of temporal dependency (Iτsx) based on the logic of local probability mass exclusions. This integrated information decomposition can reveal emergent and higher-order interactions within the dynamics of a system, as well as refining existing measures. To demonstrate the utility of this framework, I apply the decomposition to spontaneous spiking activity recorded from dissociated neural cultures of rat cerebral cortex to show how different modes of information processing are distributed over the system. Furthermore, being a localizable analysis, Iτsx can provide insight into the computational structure of single moments. I explore the time-resolved computational structure of neuronal avalanches and find that different types of information atoms have distinct profiles over the course of an avalanche, with the majority of non-trivial information dynamics happening before the first half of the cascade is completed. These analyses allow us to move beyond the historical focus on single measures of dependency such as information transfer or information integration, and explore a panoply of different relationships between elements (and groups of elements) in complex systems.
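
The building block of such local analyses is the pointwise mutual information, i(x; y) = log2 p(x,y)/(p(x)p(y)), which quantifies how observing y excludes probability mass inconsistent with x. A minimal sketch on an assumed toy distribution (the full Iτsx decomposition is not reproduced here); note that individual realizations can be misinformative (negative) even though the average is non-negative.

```python
import numpy as np

# Local mutual information i(x; y) = log2 p(x, y) / (p(x) p(y)).
pxy = np.array([[0.45, 0.05],
                [0.25, 0.25]])
px = pxy.sum(axis=1)
py = pxy.sum(axis=0)

for x in range(2):
    for y in range(2):
        i_local = np.log2(pxy[x, y] / (px[x] * py[y]))
        print(f"x={x}, y={y}: i = {i_local:+.3f} bits")
# Some realizations carry negative (misinformative) local values,
# even though the average, I(X;Y), is non-negative.
```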


Subject(s)
Models, Neurological; Neurons; Rats; Animals; Neurons/physiology; Cerebral Cortex/physiology; Cognition; Probability
8.
Proc Natl Acad Sci U S A ; 120(2): e2207677120, 2023 01 10.
Article in English | MEDLINE | ID: mdl-36603032

ABSTRACT

One of the essential functions of biological neural networks is the processing of information. This ranges from processing sensory information in order to perceive the environment to processing motor information in order to interact with it. Due to methodological limitations, it has been historically unclear how information processing changes during different cognitive or behavioral states and to what extent information is processed within or between the networks of neurons in different brain areas. In this study, we leverage recent advances in the calculation of information dynamics to explore neural-level processing within and between the frontoparietal areas AIP, F5, and M1 during a delayed grasping task performed by three macaque monkeys. While information processing was high within all areas during all cognitive and behavioral states of the task, interareal processing varied widely: during visuomotor transformation, AIP and F5 formed a reciprocally connected processing unit, while no processing was present between areas during the memory period. Movement execution was processed globally across all areas, with a predominance of processing in the feedback direction. Furthermore, the fine-scale network structure reconfigured at the neuron level in response to different grasping conditions, despite no differences in the overall amount of information present. These results suggest that areas dynamically form higher-order processing units according to the cognitive or behavioral demand and that the information-processing network is hierarchically organized at the neuron level, with the coarse network structure determining the behavioral state and finer changes reflecting different conditions.


Subject(s)
Motor Cortex; Animals; Motor Cortex/physiology; Macaca mulatta; Parietal Lobe/physiology; Cognition; Neural Networks, Computer; Cerebral Cortex
9.
Entropy (Basel) ; 24(7), 2022 Jul 05.
Article in English | MEDLINE | ID: mdl-35885153

ABSTRACT

The varied cognitive abilities and rich adaptive behaviors enabled by the animal nervous system are often described in terms of information processing. This framing raises the issue of how biological neural circuits actually process information, and some of the most fundamental outstanding questions in neuroscience center on understanding the mechanisms of neural information processing. Classical information theory has long been understood to be a natural framework within which information processing can be understood, and recent advances in the field of multivariate information theory offer new insights into the structure of computation in complex systems. In this review, we provide an introduction to the conceptual and practical issues associated with using multivariate information theory to analyze information processing in neural circuits, as well as discussing recent empirical work in this vein. Specifically, we provide an accessible introduction to the partial information decomposition (PID) framework. PID reveals redundant, unique, and synergistic modes by which neurons integrate information from multiple sources. We focus particularly on the synergistic mode, which quantifies the "higher-order" information carried in the patterns of multiple inputs and is not reducible to input from any single source. Recent work in a variety of model systems has revealed that synergistic dynamics are ubiquitous in neural circuitry and show reliable structure-function relationships, emerging disproportionately in neuronal rich clubs, downstream of recurrent connectivity, and in the convergence of correlated activity. We draw on the existing literature on higher-order information dynamics in neuronal networks to illustrate the insights that have been gained by taking an information decomposition perspective on neural activity. Finally, we briefly discuss future promising directions for information decomposition approaches to neuroscience, such as work on behaving animals, multi-target generalizations of PID, and time-resolved local analyses.
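
For two sources, the original Williams-and-Beer PID can be computed directly from a joint distribution using their I_min redundancy measure. A minimal sketch for a noiseless XOR gate, for which all of the target information is synergistic:

```python
import numpy as np

def specific_info(p_ts, t):
    """I(T=t ; S) = sum_s p(s|t) log2( p(t|s) / p(t) ); p_ts[t, s]."""
    p_t = p_ts.sum(axis=1)
    p_s = p_ts.sum(axis=0)
    mask = p_ts[t] > 0
    p_s_given_t = p_ts[t, mask] / p_t[t]
    p_t_given_s = p_ts[t, mask] / p_s[mask]
    return np.sum(p_s_given_t * np.log2(p_t_given_s / p_t[t]))

def mutual_info(p_xy):
    px, py = p_xy.sum(axis=1), p_xy.sum(axis=0)
    mask = p_xy > 0
    return np.sum(p_xy[mask] * np.log2(p_xy[mask] / np.outer(px, py)[mask]))

# Joint distribution over (s1, s2, t) for a noiseless XOR gate.
p = np.zeros((2, 2, 2))
for s1 in range(2):
    for s2 in range(2):
        p[s1, s2, s1 ^ s2] = 0.25

p_t_s1 = p.sum(axis=1).T      # p(t, s1)
p_t_s2 = p.sum(axis=0).T      # p(t, s2)
p_t_s12 = p.reshape(4, 2).T   # p(t, (s1, s2))
p_t = p_t_s1.sum(axis=1)

# I_min redundancy, then the unique and synergistic atoms.
redundancy = sum(
    p_t[t] * min(specific_info(p_t_s1, t), specific_info(p_t_s2, t))
    for t in range(2)
)
unique_1 = mutual_info(p_t_s1) - redundancy
unique_2 = mutual_info(p_t_s2) - redundancy
synergy = mutual_info(p_t_s12) - unique_1 - unique_2 - redundancy
print(redundancy, unique_1, unique_2, synergy)  # XOR -> 0, 0, 0, 1 bit
```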

10.
Elife ; 11, 2022 06 16.
Article in English | MEDLINE | ID: mdl-35708741

ABSTRACT

Activity-dependent self-organization plays an important role in the formation of specific and stereotyped connectivity patterns in neural circuits. By combining neuronal cultures with tools and approaches from network neuroscience and information theory, we can study how complex network topology emerges from local neuronal interactions. We constructed effective connectivity networks using a transfer entropy analysis of spike trains recorded from dissociated hippocampal neuron cultures from rat embryos between 6 and 35 days in vitro to investigate how the topology evolves during maturation. The methodology for constructing the networks considered the synapse delay and addressed the influence of firing rate and population bursts, as well as spurious effects, on the inference of connections. We found that the number of links in the networks grew over the course of development, shifting from a segregated to a more integrated architecture. As part of this progression, three significant aspects of complex network topology emerged. In agreement with previous in silico and in vitro studies, a small-world architecture was detected, largely due to strong clustering among neurons. Additionally, the networks developed a modular topology, with most modules comprising nearby neurons. Finally, highly active neurons acquired topological characteristics that made them important nodes in the network and integrators of modules. These findings provide new insights into how neuronal effective network topology relates to neuronal assembly self-organization mechanisms.
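
A bare-bones transfer entropy estimate (history length 1, binary spike trains) conveys the core of the inference step. This sketch omits the corrections the study describes (synaptic delay handling, firing-rate bias, population bursts) and uses synthetic trains in which y copies x with a one-step delay:

```python
import numpy as np

def transfer_entropy(x, y):
    """TE(X -> Y) in bits for binary trains, history length 1."""
    yt1, yt, xt = y[1:], y[:-1], x[:-1]
    te = 0.0
    for a in (0, 1):            # y_{t+1}
        for b in (0, 1):        # y_t
            for c in (0, 1):    # x_t
                p_abc = np.mean((yt1 == a) & (yt == b) & (xt == c))
                if p_abc == 0:
                    continue
                p_bc = np.mean((yt == b) & (xt == c))
                p_ab = np.mean((yt1 == a) & (yt == b))
                p_b = np.mean(yt == b)
                te += p_abc * np.log2((p_abc / p_bc) / (p_ab / p_b))
    return te

rng = np.random.default_rng(4)
x = rng.integers(0, 2, size=20000)
y = np.roll(x, 1)   # y copies x with a one-step "synaptic" delay
y[0] = 0
print("TE(x -> y):", transfer_entropy(x, y))  # about 1 bit
print("TE(y -> x):", transfer_entropy(y, x))  # about 0 bits
```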


Subject(s)
Nerve Net; Neurons; Animals; Entropy; Hippocampus; Nerve Net/physiology; Neurons/physiology; Rats; Synapses/physiology
11.
Philos Trans A Math Phys Eng Sci ; 380(2227): 20210150, 2022 Jul 11.
Article in English | MEDLINE | ID: mdl-35599561

ABSTRACT

Is reduction always a good scientific strategy? The existence of the special sciences above physics suggests not. Previous research has shown that dimensionality reduction (macroscales) can increase the dependency between elements of a system (a phenomenon called 'causal emergence'). Here, we provide an umbrella mathematical framework for emergence based on information conversion. We show evidence that coarse-graining can convert information from one 'type' to another. We demonstrate this using the well-understood mutual information measure applied to Boolean networks. Using partial information decomposition, the mutual information can be decomposed into redundant, unique and synergistic information atoms. Then by introducing a novel measure of the synergy bias of a given decomposition, we are able to show that the synergy component of a Boolean network's mutual information can increase at macroscales. This can occur even when there is no difference in the total mutual information between a macroscale and its underlying microscale, proving information conversion. We relate this broad framework to previous work, compare it to other theories, and argue it complexifies any notion of universal reduction in the sciences, since such reduction would likely lead to a loss of synergistic information in scientific models. This article is part of the theme issue 'Emergent phenomena in complex physical and socio-technical systems: from cells to societies'.
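
The basic comparison, temporal mutual information at a microscale versus a coarse-grained macroscale, can be set up in a few lines. This sketch uses an assumed 4-state toy chain and a parity coarse-graining; quantifying the synergy bias additionally requires a partial information decomposition, which is not reproduced here.

```python
import numpy as np

def temporal_mi(tpm, pi):
    """I(X_t ; X_{t+1}) in bits for a Markov chain with transition
    matrix tpm and stationary distribution pi."""
    joint = pi[:, None] * tpm
    p_next = joint.sum(axis=0)
    mask = joint > 0
    return np.sum(joint[mask] * np.log2(joint[mask] / np.outer(pi, p_next)[mask]))

# Noisy 4-state microscale chain (states = two binary nodes: 00,01,10,11).
eps = 0.2
tpm = np.full((4, 4), eps / 3)
for s in range(4):
    tpm[s, (s + 1) % 4] = 1 - eps   # a noisy cycle; doubly stochastic,
pi = np.full(4, 0.25)               # so the uniform distribution is stationary

# Parity (XOR) coarse-graining: {00, 11} -> 0, {01, 10} -> 1.
group = np.array([0, 1, 1, 0])
macro_tpm = np.zeros((2, 2))
macro_pi = np.zeros(2)
for s in range(4):
    macro_pi[group[s]] += pi[s]
    for t in range(4):
        macro_tpm[group[s], group[t]] += pi[s] * tpm[s, t]
macro_tpm /= macro_pi[:, None]

print("micro temporal MI:", temporal_mi(tpm, pi))
print("macro temporal MI:", temporal_mi(macro_tpm, macro_pi))
```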


Subject(s)
Models, Theoretical
12.
Entropy (Basel) ; 24(10), 2022 Sep 28.
Article in English | MEDLINE | ID: mdl-37420406

ABSTRACT

The theory of intersectionality proposes that an individual's experience of society has aspects that are irreducible to the sum of one's various identities considered individually, but are "greater than the sum of their parts". In recent years, this framework has become a frequent topic of discussion both in the social sciences and among popular movements for social justice. In this work, we show that the effects of intersectional identities can be statistically observed in empirical data using information theory, particularly the partial information decomposition framework. We show that, when considering the predictive relationships between identity categories (such as race and sex) and outcomes (such as income, health, and wellness), robust statistical synergies appear. These synergies show that there are joint effects of identities on outcomes that are irreducible to any identity considered individually and that only appear when specific categories are considered together (for example, there is a large, synergistic effect of race and sex considered jointly on income that is irreducible to either race or sex). Furthermore, these synergies are robust over time, remaining largely constant year to year. We then show, using synthetic data, that the most widely used method of assessing intersectionalities in data (linear regression with multiplicative interaction coefficients) fails to disambiguate between truly synergistic, greater-than-the-sum-of-their-parts interactions and redundant interactions. We explore the significance of these two distinct types of interactions in the context of making inferences about intersectional relationships in data and the importance of being able to reliably differentiate the two. Finally, we conclude that information theory, as a model-free framework sensitive to nonlinearities and synergies in data, is a natural method by which to explore the space of higher-order social dynamics.
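
The identifiability problem with multiplicative interaction terms can be seen in a deterministic toy example (an illustration of the general point, not the paper's analysis): the interaction coefficient is non-zero both for a purely synergistic XOR outcome and for a purely redundant duplicated predictor, so a non-zero coefficient alone cannot certify synergy.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 10000

def interaction_coef(x1, x2, y):
    """Least-squares fit of y ~ 1 + x1 + x2 + x1*x2 (minimum-norm if the
    design is rank-deficient); returns the interaction coefficient."""
    X = np.column_stack([np.ones(len(y)), x1, x2, x1 * x2])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[3]

x1 = rng.integers(0, 2, n).astype(float)
x2 = rng.integers(0, 2, n).astype(float)

# Purely synergistic outcome: XOR. Since x1 XOR x2 = x1 + x2 - 2*x1*x2,
# the fitted interaction coefficient is exactly -2.
print(interaction_coef(x1, x2, (x1 != x2).astype(float)))

# Purely redundant outcome: x2 is an exact duplicate of x1 and y copies
# both. The fitted interaction is still non-zero (1/3 under the
# minimum-norm solution), so a non-zero interaction coefficient cannot,
# by itself, distinguish synergy from redundancy.
print(interaction_coef(x1, x1, x1))
```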

13.
Entropy (Basel) ; 25(1), 2022 Dec 28.
Article in English | MEDLINE | ID: mdl-36673195

ABSTRACT

"Emergence", the phenomenon where a complex system displays properties, behaviours, or dynamics not trivially reducible to its constituent elements, is one of the defining properties of complex systems. Recently, there has been a concerted effort to formally define emergence using the mathematical framework of information theory, which proposes that emergence can be understood in terms of how the states of wholes and parts collectively disclose information about the system's collective future. In this paper, we show how a common, foundational component of information-theoretic approaches to emergence implies an inherent instability to emergent properties, which we call flickering emergence. A system may, on average, display a meaningful emergent property (be it an informative coarse-graining, or higher-order synergy), but for particular configurations, that emergent property falls apart and becomes misinformative. We show existence proofs that flickering emergence occurs in two different frameworks (one based on coarse-graining and another based on multivariate information decomposition) and argue that any approach based on temporal mutual information will display it. Finally, we argue that flickering emergence should not be a disqualifying property of any model of emergence, but that it should be accounted for when attempting to theorize about how emergence relates to practical models of the natural world.

14.
R Soc Open Sci ; 8(6): 201971, 2021 Jun 23.
Article in English | MEDLINE | ID: mdl-34168888

ABSTRACT

Research has found that the vividness of conscious experience is related to brain dynamics. Despite both being anaesthetics, propofol and ketamine produce different subjective states: we explore the different effects of these two anaesthetics on the structure of dynamic attractors reconstructed from electrophysiological activity recorded from the cerebral cortex of two macaques. We used two methods: the first embeds the recordings in a continuous high-dimensional manifold on which we use topological data analysis to infer the presence of higher-order dynamics. The second, an ordinal partition network embedding, allows us to create a discrete state-transition network, which is amenable to information-theoretic analysis and contains rich information about state-transition dynamics. We find that the awake condition generally had the 'richest' structure: it visited the most states, showed pronounced higher-order structures, and had the least deterministic dynamics. By contrast, the propofol condition had the most dissimilar dynamics, transitioning to a more impoverished, constrained, low-structure regime. The ketamine condition, interestingly, seemed to combine aspects of both: while it was generally less complex than the awake condition, it remained well above propofol in almost all measures. These results provide deeper and more comprehensive insights than what is typically gained by using point-measures of complexity.
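
A minimal sketch of the ordinal partition construction: windows of the signal are mapped to their rank-order patterns, and transitions between consecutive patterns define a directed, weighted state-transition network. The embedding dimension, delay, and the transition-entropy summary below are illustrative choices, not the paper's settings.

```python
import numpy as np
from collections import Counter

def ordinal_partition_network(ts, d=3, tau=1):
    """Map a time series to transitions between ordinal (rank-order)
    patterns of embedding dimension d and delay tau."""
    patterns = [
        tuple(np.argsort(ts[i : i + d * tau : tau]))
        for i in range(len(ts) - (d - 1) * tau)
    ]
    edges = Counter(zip(patterns[:-1], patterns[1:]))  # weighted digraph
    return patterns, edges

rng = np.random.default_rng(7)
t = np.arange(3000)
signal = np.sin(0.2 * t) + 0.1 * rng.normal(size=t.size)

patterns, edges = ordinal_partition_network(signal, d=3, tau=2)
print("ordinal states visited:", len(set(patterns)))
print("distinct transitions:  ", len(edges))

# Entropy of the transition weights: one crude summary of how
# constrained the reconstructed state-transition dynamics are.
w = np.array(list(edges.values()), dtype=float)
w /= w.sum()
print("transition entropy (bits):", -(w * np.log2(w)).sum())
```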

15.
Front Neurosci ; 15: 787068, 2021.
Article in English | MEDLINE | ID: mdl-35221887

ABSTRACT

In the last two decades, there has been an explosion of interest in modeling the brain as a network, where nodes correspond variously to brain regions or neurons, and edges correspond to structural or statistical dependencies between them. This kind of network construction, which preserves spatial, or structural, information while collapsing across time, has become broadly known as "network neuroscience". In this work, we provide an alternative application of network science to neuroscience, the network-based analysis of non-linear time series, and review applications of these methods to neural data. Instead of preserving spatial information and collapsing across time, network analysis of time series does the reverse: it collapses spatial information and preserves temporally extended dynamics, typically corresponding to evolution through some kind of phase/state-space. This allows researchers to infer a possibly low-dimensional "intrinsic manifold" from empirical brain data. We discuss three methods of constructing networks from non-linear time series (recurrence networks, visibility networks, and ordinal partition networks) and how to interpret them in the context of neural data. By capturing typically continuous, non-linear dynamics in the form of discrete networks, we show how techniques from network science, non-linear dynamics, and information theory can extract meaningful information distinct from what is normally accessible in standard network neuroscience approaches.
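
As one concrete example of the methods being reviewed (see also the ordinal partition sketch above), the natural visibility graph links two time points whenever the straight line between them clears every intermediate sample. A minimal O(n^2) reference sketch on a synthetic random walk:

```python
import numpy as np

def natural_visibility_graph(ts):
    """Natural visibility graph: time points i and j are linked when the
    line from (i, ts[i]) to (j, ts[j]) clears every intermediate sample."""
    n = len(ts)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            k = np.arange(i + 1, j)
            line = ts[i] + (ts[j] - ts[i]) * (k - i) / (j - i)
            if k.size == 0 or np.all(ts[k] < line):
                edges.add((i, j))
    return edges

rng = np.random.default_rng(8)
ts = np.cumsum(rng.normal(size=200))  # a random walk as a toy signal
edges = natural_visibility_graph(ts)

deg = np.zeros(len(ts), dtype=int)
for i, j in edges:
    deg[i] += 1
    deg[j] += 1
print("edges:", len(edges), "| mean degree:", deg.mean())
```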

16.
PLoS Comput Biol ; 16(12): e1008418, 2020 12.
Article in English | MEDLINE | ID: mdl-33347455

ABSTRACT

Whether the brain operates at a critical "tipping" point is a long-standing scientific question, with evidence from both cellular and systems-scale studies suggesting that the brain does sit in, or near, a critical regime. Neuroimaging studies of humans in altered states of consciousness have prompted the suggestion that maintenance of critical dynamics is necessary for the emergence of consciousness and complex cognition, and that reduced or disorganized consciousness may be associated with deviations from criticality. Unfortunately, many of the cellular-level studies reporting signs of criticality were performed in non-conscious systems (in vitro neuronal cultures) or unconscious animals (e.g. anaesthetized rats). Here we attempted to address this knowledge gap by exploring critical brain dynamics in invasive ECoG recordings from multiple sessions with a single macaque as the animal transitioned from consciousness to unconsciousness under different anaesthetics (ketamine and propofol). We used a previously validated test of criticality, avalanche dynamics, to assess the differences in brain dynamics between normal consciousness and both drug states. Propofol and ketamine were selected due to their differential effects on consciousness (ketamine, but not propofol, is known to induce an unusual state known as "dissociative anaesthesia"). Our analyses indicate that propofol dramatically restricted the size and duration of avalanches, while ketamine allowed for more awake-like dynamics to persist. In addition, propofol, but not ketamine, triggered a large reduction in the complexity of brain dynamics. All states, however, showed some signs of persistent criticality when testing for exponent relations and universal shape-collapse. These findings suggest that maintenance of critical brain dynamics may be important for the regulation and control of conscious awareness.
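
Avalanche analysis starts from a simple operational definition: an avalanche is a maximal run of time bins whose population activity exceeds a threshold, with size the total activity in the run and duration its length. A minimal sketch on synthetic binned counts (exponent fitting and shape collapse are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(9)

# Toy population activity: spike counts per time bin.
activity = rng.poisson(0.8, size=100000)

def avalanches(counts, threshold=0):
    """Avalanche = maximal run of bins with activity above threshold.
    Returns (sizes, durations)."""
    sizes, durations = [], []
    size = dur = 0
    for c in counts:
        if c > threshold:
            size += c
            dur += 1
        elif dur:
            sizes.append(size)
            durations.append(dur)
            size = dur = 0
    if dur:
        sizes.append(size)
        durations.append(dur)
    return np.array(sizes), np.array(durations)

sizes, durations = avalanches(activity)
print("n avalanches:", len(sizes))
print("mean size:", sizes.mean(), "| mean duration:", durations.mean())
# Criticality analyses would go on to fit power-law exponents to both
# distributions and test the exponent relation / shape collapse.
```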


Subject(s)
Anesthetics, Dissociative/pharmacology; Brain/drug effects; Hypnotics and Sedatives/pharmacology; Ketamine/pharmacology; Propofol/pharmacology; Animals; Brain/physiology; Consciousness/drug effects; Consciousness/physiology; Electroencephalography/methods; Haplorhini; Wakefulness/physiology
17.
Neuroimage ; 220: 117049, 2020 10 15.
Article in English | MEDLINE | ID: mdl-32619708

ABSTRACT

Psychedelic drugs, such as psilocybin and LSD, represent unique tools for researchers investigating the neural origins of consciousness. Currently, one of the most compelling theories of how psychedelics exert their effects holds that they increase the complexity of brain activity and move the system towards a critical point between order and disorder, creating more dynamic and complex patterns of neural activity. While the concept of criticality is of central importance to this theory, few of the published studies on psychedelics investigate it directly, testing instead related measures such as algorithmic complexity or Shannon entropy. We propose using the fractal dimension of functional activity in the brain as a measure of complexity, since findings from physics suggest that as a system organizes towards criticality, it tends to take on a fractal structure. We tested two different measures of fractal dimension, one spatial and one temporal, using fMRI data from volunteers under the influence of both LSD and psilocybin. The first was the fractal dimension of cortical functional connectivity networks and the second was the fractal dimension of BOLD time-series. In addition to the fractal measures, we used a well-established, non-fractal measure of signal complexity and show that the two behave similarly. We were able to show that both psychedelic drugs significantly increased the fractal dimension of functional connectivity networks, and that LSD significantly increased the fractal dimension of BOLD signals, with psilocybin showing a non-significant trend in the same direction. With both LSD and psilocybin, we were able to localize changes in the fractal dimension of BOLD signals to brain areas assigned to the dorsal attention network. These results show that psychedelic drugs increase the fractal dimension of activity in the brain, and we see this as an indicator that the changes in consciousness triggered by psychedelics are associated with evolution towards a critical zone.
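
The temporal measure, the Higuchi fractal dimension, has a compact standard implementation. A minimal sketch (kmax and the test signals are arbitrary choices, not the study's settings); white noise should come out near 2 and a random walk near 1.5:

```python
import numpy as np

def higuchi_fd(x, kmax=10):
    """Higuchi fractal dimension of a 1D time series."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    lk = []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            # Normalized curve length of the subsampled series.
            length = np.abs(np.diff(x[idx])).sum() * (n - 1) / ((len(idx) - 1) * k * k)
            lengths.append(length)
        lk.append(np.mean(lengths))
    # The dimension is the slope of log L(k) against log(1/k).
    coeffs = np.polyfit(np.log(1.0 / np.arange(1, kmax + 1)), np.log(lk), 1)
    return coeffs[0]

rng = np.random.default_rng(10)
print("white noise:", higuchi_fd(rng.normal(size=5000)))              # about 2.0
print("random walk:", higuchi_fd(np.cumsum(rng.normal(size=5000))))   # about 1.5
```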


Subject(s)
Cerebral Cortex/drug effects; Default Mode Network/drug effects; Hallucinogens/pharmacology; Lysergic Acid Diethylamide/pharmacology; Psilocybin/pharmacology; Cerebral Cortex/diagnostic imaging; Consciousness/drug effects; Default Mode Network/diagnostic imaging; Humans; Magnetic Resonance Imaging
18.
PLoS One ; 15(2): e0223812, 2020.
Article in English | MEDLINE | ID: mdl-32053587

ABSTRACT

Recent evidence suggests that the quantity and quality of conscious experience may be a function of the complexity of activity in the brain and that consciousness emerges in a critical zone between low- and high-entropy states. We propose fractal shapes as a measure of proximity to this critical point, as fractal dimension encodes information about complexity beyond simple entropy or randomness, and fractal structures are known to emerge in systems nearing a critical point. To validate this, we tested several measures of fractal dimension on brain activity from healthy volunteers and patients with disorders of consciousness of varying severity. We used a Compact Box Burning algorithm to compute the fractal dimension of cortical functional connectivity networks, and we computed the fractal dimension of the associated adjacency matrices using a 2D box-counting algorithm. To test whether brain activity is fractal in time as well as space, we used the Higuchi temporal fractal dimension on BOLD time-series. We found significant decreases in the fractal dimension between healthy volunteers (n = 15), patients in a minimally conscious state (n = 10), and patients in a vegetative state (n = 8), regardless of the mechanism of injury. We also found significant decreases in adjacency-matrix fractal dimension and Higuchi temporal fractal dimension, which correlated with decreasing level of consciousness. These results suggest that cortical functional connectivity networks display fractal character and that this is associated with level of consciousness in a clinically relevant population, with higher-fractal-dimension (i.e. more complex) networks being associated with higher levels of consciousness. This supports the hypothesis that level of consciousness and system complexity are positively associated, and is consistent with previous EEG, MEG, and fMRI studies.
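
The 2D box-counting analysis of an adjacency matrix can be sketched with dyadic boxes: count occupied boxes at each scale and fit the slope of log N(s) against log(1/s). This is the matrix box-count only; the Compact Box Burning algorithm for network box-covering is not reproduced here, and the random graph below is a stand-in for real connectivity data.

```python
import numpy as np

def box_counting_dimension(img):
    """2D box-counting dimension of a binary image (e.g. a thresholded
    adjacency matrix). Assumes a square image with power-of-two size."""
    n = img.shape[0]
    sizes, counts = [], []
    s = n
    while s >= 1:
        # Tile the image with s-by-s boxes and count the occupied ones.
        blocks = img.reshape(n // s, s, n // s, s).any(axis=(1, 3))
        sizes.append(s)
        counts.append(blocks.sum())
        s //= 2
    # The dimension is the slope of log N(s) against log(1/s).
    coeffs = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return coeffs[0]

rng = np.random.default_rng(11)
# Toy binary "adjacency matrix": a sparse random graph on 256 nodes.
a = rng.random((256, 256)) < 0.05
a = a | a.T
print("box-counting dimension:", box_counting_dimension(a))
```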


Subject(s)
Brain Injuries/physiopathology; Brain/physiopathology; Models, Neurological; Nerve Net/physiopathology; Persistent Vegetative State/physiopathology; Adult; Algorithms; Brain/diagnostic imaging; Brain Injuries/diagnosis; Consciousness/physiology; Female; Fractals; Healthy Volunteers; Humans; Magnetic Resonance Imaging; Persistent Vegetative State/diagnosis; Severity of Illness Index
19.
Sci Rep ; 10(1): 1018, 2020 01 23.
Article in English | MEDLINE | ID: mdl-31974390

ABSTRACT

The brain is possibly the most complex system known to mankind, and its complexity has been called upon to explain the emergence of consciousness. However, complexity has been defined in many ways by multiple different fields: here, we investigate measures of algorithmic and process complexity in both the temporal and topological domains, testing them on functional MRI BOLD signal data obtained from individuals undergoing various levels of sedation with the anaesthetic agent propofol, replicating our results in two separate datasets. We demonstrate that the various measures are differently able to discriminate between levels of sedation, with temporal measures showing higher sensitivity. Further, we show that all measures are strongly related to a single underlying construct explaining most of the variance, as assessed by Principal Component Analysis, which we interpret as a measure of "overall complexity" of our data. This overall complexity was also able to discriminate between levels of sedation and serum concentrations of propofol, supporting the hypothesis that consciousness is related to complexity, independent of how the latter is measured.
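
The final aggregation step, standardizing the individual complexity measures and reading off the first principal component as "overall complexity", can be sketched as follows (synthetic data with one latent factor stands in for the real measures):

```python
import numpy as np

rng = np.random.default_rng(12)

# Toy data: 6 complexity measures evaluated on 40 recordings; the
# measures share one latent factor by construction.
latent = rng.normal(size=40)
measures = latent[:, None] * rng.uniform(0.5, 1.5, 6) + 0.3 * rng.normal(size=(40, 6))

# Standardize each measure, then take the leading principal component.
z = (measures - measures.mean(axis=0)) / measures.std(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(z.T))
pc1 = z @ eigvecs[:, -1]               # scores on the leading component
explained = eigvals[-1] / eigvals.sum()

print(f"variance explained by PC1: {explained:.1%}")
print("correlation of PC1 with the latent factor:",
      np.corrcoef(pc1, latent)[0, 1])
```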


Subject(s)
Anesthesia/methods; Anesthetics, Intravenous/pharmacology; Brain/drug effects; Consciousness/drug effects; Deep Sedation/methods; Propofol/pharmacology; Anesthetics, Intravenous/blood; Brain/physiology; Consciousness/physiology; Electroencephalography; Humans; Hypnotics and Sedatives/blood; Hypnotics and Sedatives/pharmacology; Magnetic Resonance Imaging; Propofol/blood