Results 1 - 20 of 22
1.
Phys Rev E ; 109(4-1): 044305, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38755869

ABSTRACT

Humans are exposed to sequences of events in the environment, and the interevent transition probabilities in these sequences can be modeled as a graph or network. Many real-world networks are organized hierarchically, and while much is known about how humans learn basic transition graph topology, whether and to what degree humans can learn hierarchical structures in such graphs remains unknown. We probe the mental estimates of transition probabilities via the surprisal effect phenomenon: humans react more slowly to less expected transitions. Using mean-field predictions and numerical simulations, we show that surprisal effects are stronger for finer-level than coarser-level hierarchical transitions, and that surprisal effects at coarser levels are difficult to detect for limited learning times or in small samples. Using a serial response experiment with human participants (n=100), we replicate our predictions by detecting a surprisal effect at the finer level of the hierarchy but not at the coarser level of the hierarchy. We then evaluate the presence of a trade-off in learning, whereby humans who learned the finer level of the hierarchy better also tended to learn the coarser level worse, and vice versa. This study elucidates the processes by which humans learn sequential events in hierarchical contexts. More broadly, our work charts a road map for future investigation of the neural underpinnings and behavioral manifestations of graph learning.
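The surprisal measure this abstract relies on can be illustrated with a minimal sketch (not the authors' code; the transition matrix `P_hat` is a made-up example): the surprisal of a transition is the negative log of its estimated probability, so less expected transitions carry higher surprisal and, per the abstract, slower reactions.

```python
import numpy as np

# Hypothetical learner's internal estimate of transition probabilities
# (rows sum to 1); values here are illustrative only.
P_hat = np.array([[0.0, 0.7, 0.3],
                  [0.5, 0.0, 0.5],
                  [0.9, 0.1, 0.0]])

def surprisal(P, i, j):
    """Negative log-probability of observing the transition i -> j."""
    return -np.log(P[i, j])

# The rarer transition 0 -> 2 is more surprising than the common 0 -> 1.
print(surprisal(P_hat, 0, 2), surprisal(P_hat, 0, 1))
```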


Subject(s)
Learning , Humans , Male , Female , Models, Theoretical , Probability , Adult
2.
iScience ; 27(1): 108734, 2024 Jan 19.
Article in English | MEDLINE | ID: mdl-38226174

ABSTRACT

Large-scale interactions among multiple brain regions manifest as bursts of activations called neuronal avalanches, which reconfigure according to the task at hand and, hence, might constitute natural candidates to design brain-computer interfaces (BCIs). To test this hypothesis, we used source-reconstructed magneto/electroencephalography during resting state and a motor imagery task performed within a BCI protocol. To track the probability that an avalanche would spread across any two regions, we built an avalanche transition matrix (ATM) and demonstrated that the edges whose transition probabilities significantly differed between conditions hinged selectively on premotor regions in all subjects. Furthermore, we showed that the topology of the ATMs allows task-decoding above the current gold standard. Hence, our results suggest that neuronal avalanches might capture interpretable differences between tasks that can be used to inform brain-computer interfaces.
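The avalanche transition matrix (ATM) idea described above can be sketched as follows, assuming a simple binarized activity array (time x regions); the function name and counting convention are illustrative, not the paper's implementation.

```python
import numpy as np

def avalanche_transition_matrix(activity):
    """Estimate an (N, N) matrix whose (i, j) entry approximates
    P(region j active at t+1 | region i active at t), from a binary
    (T, N) activity array. Illustrative sketch only."""
    T, N = activity.shape
    counts = np.zeros((N, N))   # co-activation counts across one time step
    occ = np.zeros(N)           # how often each region served as a source
    for t in range(T - 1):
        src = np.flatnonzero(activity[t])
        tgt = np.flatnonzero(activity[t + 1])
        occ[src] += 1
        if src.size and tgt.size:
            counts[np.ix_(src, tgt)] += 1
    denom = np.where(occ > 0, occ, 1.0)  # avoid division by zero
    return counts / denom[:, None]
```

Edges whose entries differ between conditions (rest vs. motor imagery, in the abstract) would then be candidate BCI features.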

3.
ArXiv ; 2023 Sep 06.
Article in English | MEDLINE | ID: mdl-37731654

ABSTRACT

Humans are constantly exposed to sequences of events in the environment. Those sequences frequently evince statistical regularities, such as the probabilities with which one event transitions to another. Collectively, inter-event transition probabilities can be modeled as a graph or network. Many real-world networks are organized hierarchically and understanding how these networks are learned by humans is an ongoing aim of current investigations. While much is known about how humans learn basic transition graph topology, whether and to what degree humans can learn hierarchical structures in such graphs remains unknown. Here, we investigate how humans learn hierarchical graphs of the Sierpinski family using computer simulations and behavioral laboratory experiments. We probe the mental estimates of transition probabilities via the surprisal effect: a phenomenon in which humans react more slowly to less expected transitions, such as those between communities or modules in the network. Using mean-field predictions and numerical simulations, we show that surprisal effects are stronger for finer-level than coarser-level hierarchical transitions. Notably, surprisal effects at coarser levels of the hierarchy are difficult to detect for limited learning times or in small samples. Using a serial response experiment with human participants (n=100), we replicate our predictions by detecting a surprisal effect at the finer level of the hierarchy but not at the coarser level of the hierarchy. To further explain our findings, we evaluate the presence of a trade-off in learning, whereby humans who learned the finer level of the hierarchy better tended to learn the coarser level worse, and vice versa. Taken together, our computational and experimental studies elucidate the processes by which humans learn sequential events in hierarchical contexts.
More broadly, our work charts a road map for future investigation of the neural underpinnings and behavioral manifestations of graph learning.

4.
Neuron ; 111(21): 3465-3478.e7, 2023 11 01.
Article in English | MEDLINE | ID: mdl-37611585

ABSTRACT

Animals frequently make decisions based on expectations of future reward ("values"). Values are updated by ongoing experience: places and choices that result in reward are assigned greater value. Yet, the specific algorithms used by the brain for such credit assignment remain unclear. We monitored accumbens dopamine as rats foraged for rewards in a complex, changing environment. We observed brief dopamine pulses both at reward receipt (scaling with prediction error) and at novel path opportunities. Dopamine also ramped up as rats ran toward reward ports, in proportion to the value at each location. By examining the evolution of these dopamine place-value signals, we found evidence for two distinct update processes: progressive propagation of value along taken paths, as in temporal difference learning, and inference of value throughout the maze, using internal models. Our results demonstrate that within rich, naturalistic environments dopamine conveys place values that are updated via multiple, complementary learning algorithms.
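The "progressive propagation of value along taken paths, as in temporal difference learning" can be sketched as a toy TD(0) update over discrete states (an illustrative, assumption-laden sketch, not the study's model; states, reward placement, and parameters are made up).

```python
def td_update(values, path, reward, alpha=0.1, gamma=0.9):
    """One episode of a toy TD(0) rule: each visited state is nudged
    toward the discounted value of its successor, with the reward
    delivered at the final state of the path."""
    v = dict(values)
    for i, state in enumerate(path):
        if i + 1 < len(path):
            target = gamma * v.get(path[i + 1], 0.0)  # bootstrap from successor
        else:
            target = reward                           # terminal reward
        v[state] = v.get(state, 0.0) + alpha * (target - v.get(state, 0.0))
    return v

# Value propagates backward along the path over repeated episodes,
# which is the signature the abstract contrasts with model-based inference.
v = {}
for _ in range(10):
    v = td_update(v, ["start", "corridor", "port"], reward=1.0)
```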


Subject(s)
Decision Making , Dopamine , Rats , Animals , Reward , Brain
5.
Mol Psychiatry ; 28(8): 3314-3323, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37353585

ABSTRACT

Schizophrenia is marked by deficits in facial affect processing associated with abnormalities in GABAergic circuitry, deficits also found in first-degree relatives. Facial affect processing involves a distributed network of brain regions including limbic regions like amygdala and visual processing areas like fusiform cortex. Pharmacological modulation of GABAergic circuitry using benzodiazepines like alprazolam can be useful for studying this facial affect processing network and associated GABAergic abnormalities in schizophrenia. Here, we use pharmacological modulation and computational modeling to study the contribution of GABAergic abnormalities toward emotion processing deficits in schizophrenia. Specifically, we apply principles from network control theory to model persistence energy - the control energy required to maintain brain activation states - during emotion identification and recall tasks, with and without administration of alprazolam, in a sample of first-degree relatives and healthy controls. In this model, persistence energy quantifies the magnitude of theoretical external inputs during the task. We find that alprazolam increases persistence energy in relatives but not in controls during threatening face processing, suggesting a compensatory mechanism given the relative absence of behavioral abnormalities in this sample of unaffected relatives. Further, we demonstrate that regions in the fusiform and occipital cortices are important for facilitating state transitions during facial affect processing. Finally, we uncover spatial relationships between (i) regional variation in differential control energy (alprazolam versus placebo) and (ii) both serotonin and dopamine neurotransmitter systems, indicating that alprazolam may exert its effects by altering neuromodulatory systems.
Together, these findings provide a new perspective on the distributed emotion processing network and the effect of GABAergic modulation on this network, in addition to identifying an association between schizophrenia risk and abnormal GABAergic effects on persistence energy during threat processing.


Subject(s)
Schizophrenia , Humans , Schizophrenia/drug therapy , Alprazolam/pharmacology , Emotions , Brain , Amygdala , Brain Mapping , Magnetic Resonance Imaging
6.
bioRxiv ; 2023 Aug 15.
Article in English | MEDLINE | ID: mdl-36747703

ABSTRACT

Human experience is built upon sequences of discrete events. From those sequences, humans build impressively accurate models of their world. This process has been referred to as graph learning, a form of structure learning in which the mental model encodes the graph of event-to-event transition probabilities [1], [2], typically in medial temporal cortex [3]-[6]. Recent evidence suggests that some network structures are easier to learn than others [7]-[9], but the neural properties of this effect remain unknown. Here we use fMRI to show that the network structure of a temporal sequence of stimuli influences the fidelity with which those stimuli are represented in the brain. Healthy adult human participants learned a set of stimulus-motor associations following one of two graph structures. The design of our experiment allowed us to separate regional sensitivity to the structural, stimulus, and motor response components of the task. As expected, whereas the motor response could be decoded from neural representations in postcentral gyrus, the shape of the stimulus could be decoded from lateral occipital cortex. The structure of the graph impacted the nature of neural representations: when the graph was modular as opposed to lattice-like, BOLD representations in visual areas better predicted trial identity in a held-out run and displayed higher intrinsic dimensionality. Our results demonstrate that even over relatively short timescales, graph structure determines the fidelity of event representations as well as the dimensionality of the space in which those representations are encoded. More broadly, our study shows that network context influences the strength of learned neural representations, motivating future work in the design, optimization, and adaptation of network contexts for distinct types of learning over different timescales.

7.
eNeuro ; 9(2), 2022.
Article in English | MEDLINE | ID: mdl-35105662

ABSTRACT

Humans deftly parse statistics from sequences. Some theories posit that humans learn these statistics by forming cognitive maps, or underlying representations of the latent space which links items in the sequence. Here, an item in the sequence is a node, and the probability of transitioning between two items is an edge. Sequences can then be generated from walks through the latent space, with different spaces giving rise to different sequence statistics. Individual or group differences in sequence learning can be modeled by changing the time scale over which estimates of transition probabilities are built, or in other words, by changing the amount of temporal discounting. Latent space models with temporal discounting bear a resemblance to models of navigation through Euclidean spaces. However, few explicit links have been made between predictions from Euclidean spatial navigation and neural activity during human sequence learning. Here, we use a combination of behavioral modeling and intracranial encephalography (iEEG) recordings to investigate how neural activity might support the formation of space-like cognitive maps through temporal discounting during sequence learning. Specifically, we acquire human reaction times from a sequential reaction time task, to which we fit a model that formulates the amount of temporal discounting as a single free parameter. From the parameter, we calculate each individual's estimate of the latent space. We find that neural activity reflects these estimates mostly in the temporal lobe, including areas involved in spatial navigation. Similar to spatial navigation, we find that low-dimensional representations of neural activity allow for easy separation of important features, such as modules, in the latent space. Lastly, we take advantage of the high temporal resolution of iEEG data to determine the time scale on which latent spaces are learned. 
We find that learning typically happens within the first 500 trials, and is modulated by the underlying latent space and the amount of temporal discounting characteristic of each participant. Ultimately, this work provides important links between behavioral models of sequence learning and neural activity during the same behavior, and contextualizes these results within a broader framework of domain general cognitive maps.


Subject(s)
Spatial Navigation , Cognition/physiology , Humans , Learning/physiology , Reaction Time , Spatial Navigation/physiology , Temporal Lobe/physiology
8.
Nat Commun ; 11(1): 2313, 2020 05 08.
Article in English | MEDLINE | ID: mdl-32385232

ABSTRACT

Humans are adept at uncovering abstract associations in the world around them, yet the underlying mechanisms remain poorly understood. Intuitively, learning the higher-order structure of statistical relationships should involve complex mental processes. Here we propose an alternative perspective: that higher-order associations instead arise from natural errors in learning and memory. Using the free energy principle, which bridges information theory and Bayesian inference, we derive a maximum entropy model of people's internal representations of the transitions between stimuli. Importantly, our model (i) affords a concise analytic form, (ii) qualitatively explains the effects of transition network structure on human expectations, and (iii) quantitatively predicts human reaction times in probabilistic sequential motor tasks. Together, these results suggest that mental errors influence our abstract representations of the world in significant and predictable ways, with direct implications for the study and design of optimally learnable information sources.
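The "concise analytic form" alluded to can be illustrated with a discounted-powers estimate of a transition matrix: if memory errors geometrically discount how far back in time a preceding stimulus occurred, the learner's internal matrix mixes powers of the true matrix A, giving (1 - beta) * A * inv(I - beta * A). This is a hedged sketch under that assumption; the symbol `beta` and the function name are illustrative.

```python
import numpy as np

def internal_estimate(A, beta):
    """Geometrically discounted mixture of transition-matrix powers:
    (1 - beta) * sum_{k>=0} beta**k * A**(k+1), in closed form.
    Requires 0 <= beta < 1 so the geometric series converges."""
    n = A.shape[0]
    return (1.0 - beta) * A @ np.linalg.inv(np.eye(n) - beta * A)
```

With beta = 0 the estimate is the true matrix A; larger beta blurs probability across longer-range associations, which is how higher-order structure can "arise from natural errors in learning and memory."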


Subject(s)
Learning/physiology , Memory/physiology , Bayes Theorem , Humans , Reaction Time/physiology
9.
J Neural Eng ; 17(2): 026031, 2020 04 09.
Article in English | MEDLINE | ID: mdl-31968320

ABSTRACT

OBJECTIVE: Predicting how the brain can be driven to specific states by means of internal or external control requires a fundamental understanding of the relationship between neural connectivity and activity. Network control theory is a powerful tool from the physical and engineering sciences that can provide insights regarding that relationship; it formalizes the study of how the dynamics of a complex system can arise from its underlying structure of interconnected units. APPROACH: Given the recent use of network control theory in neuroscience, it is now timely to offer a practical guide to methodological considerations in the controllability of structural brain networks. Here we provide a systematic overview of the framework, examine the impact of modeling choices on frequently studied control metrics, and suggest potentially useful theoretical extensions. We ground our discussions, numerical demonstrations, and theoretical advances in a dataset of high-resolution diffusion imaging with 730 diffusion directions acquired over approximately 1 h of scanning from ten healthy young adults. MAIN RESULTS: Following a didactic introduction of the theory, we probe how a selection of modeling choices affects four common statistics: average controllability, modal controllability, minimum control energy, and optimal control energy. Next, we extend the current state-of-the-art in two ways: first, by developing an alternative measure of structural connectivity that accounts for radial propagation of activity through abutting tissue, and second, by defining a complementary metric quantifying the complexity of the energy landscape of a system. We close with specific modeling recommendations and a discussion of methodological constraints. SIGNIFICANCE: Our hope is that this accessible account will inspire the neuroimaging community to more fully exploit the potential of network control theory in tackling pressing questions in cognitive, developmental, and clinical neuroscience.
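Two of the four control statistics named above can be sketched for a discrete-time linear model x(t+1) = A x(t) + B u(t) (a simplified sketch with assumed stable dynamics and identity input matrix, not the paper's pipeline or conventions).

```python
import numpy as np

def controllability_gramian(A, B, horizon):
    """Finite-horizon controllability Gramian sum_k A^k B B^T (A^T)^k.
    Its trace is one common summary related to average controllability."""
    n = A.shape[0]
    W = np.zeros((n, n))
    Ak = np.eye(n)
    for _ in range(horizon):
        W += Ak @ B @ B.T @ Ak.T
        Ak = Ak @ A
    return W

def min_control_energy(A, B, x0, xT, horizon):
    """Minimum input energy to steer x0 to xT in `horizon` steps:
    err^T W^{-1} err, where err is the gap left by the free evolution."""
    W = controllability_gramian(A, B, horizon)
    err = xT - np.linalg.matrix_power(A, horizon) @ x0
    return float(err @ np.linalg.solve(W, err))
```

In the neuroimaging setting sketched here, A would be derived from a structural connectivity matrix and B would select control regions; both choices are exactly the kind of modeling decision the article examines.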


Subject(s)
Brain , Brain/diagnostic imaging , Humans , Young Adult
10.
Neuroimage ; 209: 116500, 2020 04 01.
Article in English | MEDLINE | ID: mdl-31927130

ABSTRACT

Brain-computer interfaces (BCIs) have been largely developed to allow communication, control, and neurofeedback in human beings. Despite their great potential, BCIs perform inconsistently across individuals and the neural processes that enable humans to achieve good control remain poorly understood. To address this question, we performed simultaneous high-density electroencephalographic (EEG) and magnetoencephalographic (MEG) recordings in a motor imagery-based BCI training involving a group of healthy subjects. After reconstructing the signals at the cortical level, we showed that the reinforcement of motor-related activity during the BCI skill acquisition is paralleled by a progressive disconnection of associative areas which were not directly targeted during the experiments. Notably, these network connectivity changes reflected growing automaticity associated with BCI performance and predicted future learning rate. Altogether, our findings provide new insights into the large-scale cortical organizational mechanisms underlying BCI learning, which have implications for the improvement of this technology in a broad range of real-life applications.


Subject(s)
Brain-Computer Interfaces , Cerebral Cortex/physiology , Connectome , Imagination/physiology , Learning/physiology , Motor Activity/physiology , Nerve Net/physiology , Reinforcement, Psychology , Adult , Electroencephalography , Female , Humans , Longitudinal Studies , Magnetoencephalography , Male , Young Adult
11.
Neuroimage ; 210: 116498, 2020 04 15.
Article in English | MEDLINE | ID: mdl-31917325

ABSTRACT

Most humans have the good fortune to live their lives embedded in richly structured social groups. Yet, it remains unclear how humans acquire knowledge about these social structures to successfully navigate social relationships. Here we address this knowledge gap with an interdisciplinary neuroimaging study drawing on recent advances in network science and statistical learning. Specifically, we collected BOLD MRI data while participants learned the community structure of both social and non-social networks, in order to examine whether the learning of these two types of networks was differentially associated with functional brain network topology. We found that participants learned the community structure of the networks, as evidenced by a slower reaction time when a trial moved between communities than when a trial moved within a community. Learning the community structure of social networks was also characterized by significantly greater functional connectivity of the hippocampus and temporoparietal junction when transitioning between communities than when transitioning within a community. Furthermore, temporoparietal regions of the default mode were more strongly connected to hippocampus, somatomotor, and visual regions for social networks than for non-social networks. Collectively, our results identify neurophysiological underpinnings of social versus non-social network learning, extending our knowledge about the impact of social context on learning processes. More broadly, this work offers an empirical approach to study the learning of social network structures, which could be fruitfully extended to other participant populations, various graph architectures, and a diversity of social contexts in future studies.


Subject(s)
Association Learning/physiology , Cerebral Cortex/physiology , Connectome , Nerve Net/physiology , Pattern Recognition, Visual/physiology , Social Cognition , Social Networking , Adult , Cerebral Cortex/diagnostic imaging , Female , Hippocampus/diagnostic imaging , Hippocampus/physiology , Humans , Magnetic Resonance Imaging , Male , Probability Learning , Young Adult
12.
Cell Rep ; 28(10): 2554-2566.e7, 2019 Sep 03.
Article in English | MEDLINE | ID: mdl-31484068

ABSTRACT

Optimizing direct electrical stimulation for the treatment of neurological disease remains difficult due to an incomplete understanding of its physical propagation through brain tissue. Here, we use network control theory to predict how stimulation spreads through white matter to influence spatially distributed dynamics. We test the theory's predictions using a unique dataset comprising diffusion weighted imaging and electrocorticography in epilepsy patients undergoing grid stimulation. We find statistically significant shared variance between the predicted activity state transitions and the observed activity state transitions. We then use an optimal control framework to posit testable hypotheses regarding which brain states and structural properties will efficiently improve memory encoding when stimulated. Our work quantifies the role that white matter architecture plays in guiding the dynamics of direct electrical stimulation and offers empirical support for the utility of network control theory in explaining the brain's response to stimulation.


Subject(s)
Models, Neurological , Neural Pathways/physiology , White Matter/physiology , Adult , Electric Stimulation , Female , Humans , Male
13.
Netw Neurosci ; 3(3): 848-877, 2019.
Article in English | MEDLINE | ID: mdl-31410383

ABSTRACT

Chronically implantable neurostimulation devices are becoming a clinically viable option for treating patients with neurological disease and psychiatric disorders. Neurostimulation offers the ability to probe and manipulate distributed networks of interacting brain areas in dysfunctional circuits. Here, we use tools from network control theory to examine the dynamic reconfiguration of functionally interacting neuronal ensembles during targeted neurostimulation of cortical and subcortical brain structures. By integrating multimodal intracranial recordings and diffusion-weighted imaging from patients with drug-resistant epilepsy, we test hypothesized structural and functional rules that predict altered patterns of synchronized local field potentials. We demonstrate the ability to predictably reconfigure functional interactions depending on stimulation strength and location. Stimulation of areas with structurally weak connections largely modulates the functional hubness of downstream areas and concurrently propels the brain towards more difficult-to-reach dynamical states. By using focal perturbations to bridge large-scale structure, function, and markers of behavior, our findings suggest that stimulation may be tuned to influence different scales of network interactions driving cognition.

14.
Nat Biomed Eng ; 3(11): 902-916, 2019 11.
Article in English | MEDLINE | ID: mdl-31133741

ABSTRACT

Electrocorticography (ECoG) data can be used to estimate brain-wide connectivity patterns. Yet, the invasiveness of ECoG, incomplete cortical coverage, and variability in electrode placement across individuals make the network analysis of ECoG data challenging. Here, we show that the architecture of whole-brain ECoG networks and the factors that shape it can be studied by analysing whole-brain, interregional and band-limited ECoG networks from a large cohort - in this case, of individuals with medication-resistant epilepsy. Using tools from network science, we characterized the basic organization of ECoG networks, including frequency-specific architecture, segregated modules and the dependence of connection weights on interregional Euclidean distance. We then used linear models to explain variabilities in the connection strengths between pairs of brain regions, and to highlight the joint role, in shaping the brain-wide organization of ECoG networks, of communication along white matter pathways, interregional Euclidean distance and correlated gene expression. Moreover, we extended these models to predict out-of-sample, single-subject data. Our predictive models may have future clinical utility; for example, by anticipating the effect of cortical resection on interregional communication.


Subject(s)
Brain/physiology , Electrocorticography/methods , Gene Expression , Human Genetics , Adolescent , Adult , Aged , Brain Mapping , Computer Simulation , Electrodes , Gene Ontology , Humans , Middle Aged , Models, Biological , Young Adult
15.
J Exp Psychol Learn Mem Cogn ; 45(2): 253-271, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30024255

ABSTRACT

How do people acquire knowledge about which individuals belong to different cliques or communities? And to what extent does this learning process differ from the process of learning higher-order information about complex associations between nonsocial bits of information? Here, the authors use a paradigm in which the order of stimulus presentation forms temporal associations between the stimuli, collectively constituting a complex network. They examined individual differences in the ability to learn community structure of networks composed of social versus nonsocial stimuli. Although participants were able to learn community structure of both social and nonsocial networks, their performance in social network learning was uncorrelated with their performance in nonsocial network learning. In addition, social traits, including social orientation and perspective-taking, uniquely predicted the learning of social community structure but not the learning of nonsocial community structure. Taken together, the results suggest that the process of learning higher-order community structure in social networks is partially distinct from the process of learning higher-order community structure in nonsocial networks. The study design provides a promising approach to identify neurophysiological drivers of social network versus nonsocial network learning, extending knowledge about the impact of individual differences on these learning processes.


Subject(s)
Individuality , Learning/physiology , Nonverbal Communication , Social Behavior , Female , Humans , Male , Photic Stimulation , Reaction Time/physiology
16.
Nat Phys ; 14: 91-98, 2018.
Article in English | MEDLINE | ID: mdl-29422941

ABSTRACT

Networked systems display complex patterns of interactions between components. In physical networks, these interactions often occur along structural connections that link components in a hard-wired connection topology, supporting a variety of system-wide dynamical behaviors such as synchronization. While descriptions of these behaviors are important, they are only a first step towards understanding and harnessing the relationship between network topology and system behavior. Here, we use linear network control theory to derive accurate closed-form expressions that relate the connectivity of a subset of structural connections (those linking driver nodes to non-driver nodes) to the minimum energy required to control networked systems. To illustrate the utility of the mathematics, we apply this approach to high-resolution connectomes recently reconstructed from Drosophila, mouse, and human brains. We use these principles to suggest an advantage of the human brain in supporting diverse network dynamics with small energetic costs while remaining robust to perturbations, and to perform clinically accessible targeted manipulation of the brain's control performance by removing single edges in the network. Generally, our results ground the expectation of a control system's behavior in its network architecture, and directly inspire new directions in network analysis and design via distributed control.

17.
Nat Hum Behav ; 2(12): 936-947, 2018 12.
Article in English | MEDLINE | ID: mdl-30988437

ABSTRACT

Human learners are adept at grasping the complex relationships underlying incoming sequential input [1]. In the present work, we formalize complex relationships as graph structures [2] derived from temporal associations [3,4] in motor sequences. Next, we explore the extent to which learners are sensitive to key variations in the topological properties [5] inherent to those graph structures. Participants performed a probabilistic motor sequence task in which the order of button presses was determined by the traversal of graphs with modular, lattice-like or random organization. Graph nodes each represented a unique button press, and edges represented a transition between button presses. The results indicate that learning, indexed here by participants' response times, was strongly mediated by the graph's mesoscale organization, with modular graphs being associated with shorter response times than random and lattice graphs. Moreover, variations in a node's number of connections (degree) and a node's role in mediating long-distance communication (betweenness centrality) impacted graph learning, even after accounting for the level of practice on that node. These results demonstrate that the graph architecture underlying temporal sequences of stimuli fundamentally constrains learning, and moreover that tools from network science provide a valuable framework for assessing how learners encode complex, temporally structured information.
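The task design described above - sequences generated by traversing a graph, so that sequence statistics are set by the graph's edges - can be sketched with a toy random walk; the adjacency structure (two triangles bridged by one edge, standing in for a modular graph) is hypothetical, not the experiment's stimulus graph.

```python
import random

def random_walk_sequence(adjacency, start, length, seed=0):
    """Generate a sequence of nodes by walking on an adjacency-list
    graph; each node would map to a unique button press."""
    rng = random.Random(seed)
    node = start
    seq = [node]
    for _ in range(length - 1):
        node = rng.choice(adjacency[node])
        seq.append(node)
    return seq

# Toy "modular" graph: two triangles joined by the single edge 2-3.
modular = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
           3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}

seq = random_walk_sequence(modular, start=0, length=30)
```

Every consecutive pair in `seq` is an edge of the graph, which is exactly the constraint that lets response times index graph learning.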


Subject(s)
Psychomotor Performance , Serial Learning , Humans , Neural Networks, Computer , Probability , Psychomotor Performance/physiology , Reaction Time , Serial Learning/physiology
18.
Nat Commun ; 8(1): 1252, 2017 11 01.
Article in English | MEDLINE | ID: mdl-29093441

ABSTRACT

As the human brain develops, it increasingly supports coordinated control of neural activity. The mechanism by which white matter evolves to support this coordination is not well understood. Here we use a network representation of diffusion imaging data from 882 youth ages 8-22 to show that white matter connectivity becomes increasingly optimized for a diverse range of predicted dynamics in development. Notably, stable controllers in subcortical areas are negatively related to cognitive performance. Investigating structural mechanisms supporting these changes, we simulate network evolution with a set of growth rules. We find that all brain networks are structured in a manner highly optimized for network control, with distinct control mechanisms predicted in child vs. older youth. We demonstrate that our results cannot be explained by changes in network modularity. This work reveals a possible mechanism of human brain development that preferentially optimizes dynamic network control over static network architecture.


Subject(s)
Brain/growth & development , Nerve Net/growth & development , White Matter/growth & development , Adolescent , Adolescent Development , Brain/diagnostic imaging , Child , Child Development , Diffusion Tensor Imaging , Female , Humans , Male , Nerve Net/diagnostic imaging , White Matter/diagnostic imaging , Young Adult
19.
Sci Rep ; 7(1): 12733, 2017 10 06.
Article in English | MEDLINE | ID: mdl-28986524

ABSTRACT

Network science has emerged as a powerful tool through which we can study the higher-order architectural properties of the world around us. How human learners exploit this information remains an essential question. Here, we focus on the temporal constraints that govern such a process. Participants viewed a continuous sequence of images generated by three distinct walks on a modular network. Walks varied along two critical dimensions: their predictability and the density with which they sampled from communities of images. Learners exposed to walks that richly sampled from each community exhibited a sharp increase in processing time upon entry into a new community. This effect was eliminated in a highly regular walk that sampled exhaustively from images in short, successive cycles (i.e., that increasingly minimized uncertainty about the nature of upcoming stimuli). These results demonstrate that temporal organization plays an essential role in learners' sensitivity to the network architecture underlying sensory input.


Subject(s)
Social Networking , Humans , Models, Biological , Reaction Time
20.
Curr Biol ; 27(11): 1561-1572.e8, 2017 Jun 05.
Article in English | MEDLINE | ID: mdl-28552358

ABSTRACT

The human brain is organized into large-scale functional modules that have been shown to evolve in childhood and adolescence. However, it remains unknown whether the underlying white matter architecture is similarly refined during development, potentially allowing for improvements in executive function. In a sample of 882 participants (ages 8-22) who underwent diffusion imaging as part of the Philadelphia Neurodevelopmental Cohort, we demonstrate that structural network modules become more segregated with age, with weaker connections between modules and stronger connections within modules. Evolving modular topology facilitates global network efficiency and is driven by age-related strengthening of hub edges present both within and between modules. Critically, both modular segregation and network efficiency are associated with enhanced executive performance and mediate the improvement of executive functioning with age. Together, results delineate a process of structural network maturation that supports executive function in youth.


Subject(s)
Connectome/methods , Executive Function/physiology , Nerve Net/physiology , White Matter/physiology , Adolescent , Age Factors , Child , Diffusion Magnetic Resonance Imaging/methods , Female , Humans , Male , White Matter/diagnostic imaging