1.
Commun Biol ; 7(1): 284, 2024 Mar 07.
Article in English | MEDLINE | ID: mdl-38454134

ABSTRACT

Language comprehension involves integrating low-level sensory inputs into a hierarchy of increasingly high-level features. Prior work studied brain representations of different levels of the language hierarchy, but has not determined whether these brain representations are shared between written and spoken language. To address this issue, we analyze fMRI BOLD data that were recorded while participants read and listened to the same narratives in each modality. Levels of the language hierarchy are operationalized as timescales, where each timescale refers to a set of spectral components of a language stimulus. Voxelwise encoding models are used to determine where different timescales are represented across the cerebral cortex, for each modality separately. These models reveal that between the two modalities timescale representations are organized similarly across the cortical surface. Our results suggest that, after low-level sensory processing, language integration proceeds similarly regardless of stimulus modality.


Subject(s)
Language , Reading , Humans , Cerebral Cortex/diagnostic imaging , Brain , Brain Mapping/methods
3.
bioRxiv ; 2023 Dec 11.
Article in English | MEDLINE | ID: mdl-37577530

ABSTRACT

Language comprehension involves integrating low-level sensory inputs into a hierarchy of increasingly high-level features. Prior work studied brain representations of different levels of the language hierarchy, but has not determined whether these brain representations are shared between written and spoken language. To address this issue, we analyzed fMRI BOLD data recorded while participants read and listened to the same narratives in each modality. Levels of the language hierarchy were operationalized as timescales, where each timescale refers to a set of spectral components of a language stimulus. Voxelwise encoding models were used to determine where different timescales are represented across the cerebral cortex, for each modality separately. These models reveal that between the two modalities timescale representations are organized similarly across the cortical surface. Our results suggest that, after low-level sensory processing, language integration proceeds similarly regardless of stimulus modality.

4.
bioRxiv ; 2023 Jul 19.
Article in English | MEDLINE | ID: mdl-37503232

ABSTRACT

Functional connectivity (FC) is the most popular method for recovering functional networks of brain areas with fMRI. However, because FC is defined as temporal correlations in brain activity, FC networks are confounded by noise and lack a precise functional role. To overcome these limitations, we developed model connectivity (MC). MC is defined as similarities in encoding model weights, which quantify reliable functional activity in terms of interpretable stimulus- or task-related features. To compare FC and MC, both methods were applied to a naturalistic story listening dataset. FC recovered spatially broad networks that are confounded by noise, and that lack a clear role during natural language comprehension. By contrast, MC recovered spatially localized networks that are robust to noise, and that represent distinct categories of semantic concepts. Thus, MC is a powerful data-driven approach for recovering and interpreting the functional networks that support complex cognitive processes.
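
The FC-versus-MC contrast described above can be sketched in a few lines of NumPy on simulated data (all variables are hypothetical; this illustrates the two definitions, not the paper's pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
n_time, n_feat, n_vox = 500, 10, 6

# Hypothetical stimulus features, encoding weights, and voxel responses.
X = rng.standard_normal((n_time, n_feat))
W = rng.standard_normal((n_feat, n_vox))
Y = X @ W + 0.5 * rng.standard_normal((n_time, n_vox))

# Functional connectivity: correlate raw voxel time courses,
# so shared noise contributes directly to the network estimate.
fc = np.corrcoef(Y.T)

# Model connectivity: fit per-voxel encoding weights first, then
# correlate the weight vectors, so only stimulus-driven structure counts.
W_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)  # shape (n_feat, n_vox)
mc = np.corrcoef(W_hat.T)
```

Both matrices are voxel-by-voxel similarity matrices, but MC is computed in feature-weight space rather than time-course space, which is what makes it interpretable in terms of the stimulus features.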

5.
Nat Commun ; 14(1): 4309, 2023 07 18.
Article in English | MEDLINE | ID: mdl-37463907

ABSTRACT

Speech processing requires extracting meaning from acoustic patterns using a set of intermediate representations based on a dynamic segmentation of the speech stream. Using whole brain mapping obtained in fMRI, we investigate the locus of cortical phonemic processing not only for single phonemes but also for short combinations made of diphones and triphones. We find that phonemic processing areas are much larger than previously described: they include not only the classical areas in the dorsal superior temporal gyrus but also a larger region in the lateral temporal cortex where diphone features are best represented. These identified phonemic regions overlap with the lexical retrieval region, but we show that short word retrieval is not sufficient to explain the observed responses to diphones. Behavioral studies have shown that phonemic processing and lexical retrieval are intertwined. Here, we also have identified candidate regions within the speech cortical network where this joint processing occurs.


Subject(s)
Speech Perception , Speech , Humans , Speech/physiology , Temporal Lobe/diagnostic imaging , Temporal Lobe/physiology , Brain/physiology , Speech Perception/physiology , Brain Mapping , Magnetic Resonance Imaging , Cerebral Cortex/diagnostic imaging
6.
J Neurosci ; 43(17): 3144-3158, 2023 04 26.
Article in English | MEDLINE | ID: mdl-36973013

ABSTRACT

The meaning of words in natural language depends crucially on context. However, most neuroimaging studies of word meaning use isolated words and isolated sentences with little context. Because the brain may process natural language differently from how it processes simplified stimuli, there is a pressing need to determine whether prior results on word meaning generalize to natural language. fMRI was used to record human brain activity while four subjects (two female) read words in four conditions that vary in context: narratives, isolated sentences, blocks of semantically similar words, and isolated words. We then compared the signal-to-noise ratio (SNR) of evoked brain responses, and we used a voxelwise encoding modeling approach to compare the representation of semantic information across the four conditions. We find four consistent effects of varying context. First, stimuli with more context evoke brain responses with higher SNR across bilateral visual, temporal, parietal, and prefrontal cortices compared with stimuli with little context. Second, increasing context increases the representation of semantic information across bilateral temporal, parietal, and prefrontal cortices at the group level. In individual subjects, only natural language stimuli consistently evoke widespread representation of semantic information. Third, context affects voxel semantic tuning. Finally, models estimated using stimuli with little context do not generalize well to natural language. These results show that context has large effects on the quality of neuroimaging data and on the representation of meaning in the brain. Thus, neuroimaging studies that use stimuli with little context may not generalize well to the natural regime. SIGNIFICANCE STATEMENT: Context is an important part of understanding the meaning of natural language, but most neuroimaging studies of meaning use isolated words and isolated sentences with little context. Here, we examined whether the results of neuroimaging studies that use out-of-context stimuli generalize to natural language. We find that increasing context improves the quality of neuroimaging data and changes where and how semantic information is represented in the brain. These results suggest that findings from studies using out-of-context stimuli may not generalize to natural language used in daily life.


Subject(s)
Comprehension , Semantics , Humans , Female , Comprehension/physiology , Brain/physiology , Language , Brain Mapping/methods , Magnetic Resonance Imaging/methods
7.
Neuroimage ; 264: 119728, 2022 12 01.
Article in English | MEDLINE | ID: mdl-36334814

ABSTRACT

Encoding models provide a powerful framework to identify the information represented in brain recordings. In this framework, a stimulus representation is expressed within a feature space and is used in a regularized linear regression to predict brain activity. To account for a potential complementarity of different feature spaces, a joint model is fit on multiple feature spaces simultaneously. To adapt regularization strength to each feature space, ridge regression is extended to banded ridge regression, which optimizes a different regularization hyperparameter per feature space. The present paper proposes a method to decompose over feature spaces the variance explained by a banded ridge regression model. It also describes how banded ridge regression performs a feature-space selection, effectively ignoring non-predictive and redundant feature spaces. This feature-space selection leads to better prediction accuracy and to better interpretability. Banded ridge regression is then mathematically linked to a number of other regression methods with similar feature-space selection mechanisms. Finally, several methods are proposed to address the computational challenge of fitting banded ridge regressions on large numbers of voxels and feature spaces. All implementations are released in an open-source Python package called Himalaya.
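
The core trick behind banded ridge regression can be illustrated with a toy NumPy implementation (a sketch of the idea only; the paper's open-source Himalaya package provides the real, scalable solvers and hyperparameter optimization):

```python
import numpy as np

def banded_ridge(X_list, y, alphas):
    """Banded ridge: one regularization strength per feature space.

    Scaling band b by 1/sqrt(alpha_b) turns the banded penalty
    sum_b alpha_b * ||w_b||^2 into an ordinary unit ridge penalty,
    so a single standard ridge solve suffices.
    """
    X_scaled = np.hstack([X / np.sqrt(a) for X, a in zip(X_list, alphas)])
    n_feat = X_scaled.shape[1]
    w_tilde = np.linalg.solve(X_scaled.T @ X_scaled + np.eye(n_feat),
                              X_scaled.T @ y)
    # Undo the scaling to recover weights in the original feature units.
    splits = np.cumsum([X.shape[1] for X in X_list])[:-1]
    return [w_b / np.sqrt(a)
            for w_b, a in zip(np.split(w_tilde, splits), alphas)]
```

With a single feature space this reduces exactly to ordinary ridge regression; with several, each band's effective regularization is set independently, which is what drives the feature-space selection the abstract describes.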


Subject(s)
Regression Analysis , Humans , Linear Models
8.
J Neurosci ; 2022 Jul 20.
Article in English | MEDLINE | ID: mdl-35863889

ABSTRACT

Object and action perception in cluttered dynamic natural scenes relies on efficient allocation of limited brain resources to prioritize the attended targets over distractors. It has been suggested that during visual search for objects, distributed semantic representation of hundreds of object categories is warped to expand the representation of targets. Yet, little is known about whether and where in the brain visual search for action categories modulates semantic representations. To address this fundamental question, we studied brain activity recorded from five subjects (1 female) via functional magnetic resonance imaging while they viewed natural movies and searched for either communication or locomotion actions. We find that attention directed to action categories elicits tuning shifts that warp semantic representations broadly across neocortex, and that these shifts interact with intrinsic selectivity of cortical voxels for target actions. These results suggest that attention serves to facilitate task performance during social interactions by dynamically shifting semantic selectivity towards target actions, and that tuning shifts are a general feature of conceptual representations in the brain. SIGNIFICANCE STATEMENT: The ability to swiftly perceive the actions and intentions of others is a crucial skill for humans, which relies on efficient allocation of limited brain resources to prioritize the attended targets over distractors. However, little is known about the nature of high-level semantic representations during natural visual search for action categories. Here we provide the first evidence showing that attention significantly warps semantic representations by inducing tuning shifts in single cortical voxels, broadly spread across occipitotemporal, parietal, prefrontal, and cingulate cortices. 
This dynamic attentional mechanism can facilitate action perception by efficiently allocating neural resources to accentuate the representation of task-relevant action categories.

9.
Nat Neurosci ; 24(11): 1628-1636, 2021 11.
Article in English | MEDLINE | ID: mdl-34711960

ABSTRACT

Semantic information in the human brain is organized into multiple networks, but the fine-grain relationships between them are poorly understood. In this study, we compared semantic maps obtained from two functional magnetic resonance imaging experiments in the same participants: one that used silent movies as stimuli and another that used narrative stories. Movies evoked activity from a network of modality-specific, semantically selective areas in visual cortex. Stories evoked activity from another network of semantically selective areas immediately anterior to visual cortex. Remarkably, the pattern of semantic selectivity in these two distinct networks corresponded along the boundary of visual cortex: for visual categories represented posterior to the boundary, the same categories were represented linguistically on the anterior side. These results suggest that these two networks are smoothly joined to form one contiguous map.


Subject(s)
Linguistics/methods , Pattern Recognition, Visual/physiology , Semantics , Visual Cortex/diagnostic imaging , Visual Cortex/physiology , Adult , Female , Humans , Magnetic Resonance Imaging/methods , Male , Photic Stimulation/methods , Young Adult
10.
Cortex ; 143: 127-147, 2021 10.
Article in English | MEDLINE | ID: mdl-34411847

ABSTRACT

Humans have an impressive ability to rapidly process global information in natural scenes to infer their category. Yet, it remains unclear whether and how scene categories observed dynamically in the natural world are represented in cerebral cortex beyond few canonical scene-selective areas. To address this question, here we examined the representation of dynamic visual scenes by recording whole-brain blood oxygenation level-dependent (BOLD) responses while subjects viewed natural movies. We fit voxelwise encoding models to estimate tuning for scene categories that reflect statistical ensembles of objects and actions in the natural world. We find that this scene-category model explains a significant portion of the response variance broadly across cerebral cortex. Cluster analysis of scene-category tuning profiles across cortex reveals nine spatially-segregated networks of brain regions consistently across subjects. These networks show heterogeneous tuning for a diverse set of dynamic scene categories related to navigation, human activity, social interaction, civilization, natural environment, non-human animals, motion-energy, and texture, suggesting that the organization of scene category representation is quite complex.


Subject(s)
Cerebral Cortex , Magnetic Resonance Imaging , Brain , Brain Mapping , Cluster Analysis , Humans , Pattern Recognition, Visual , Photic Stimulation , Visual Perception
11.
Neuron ; 109(9): 1433-1448, 2021 05 05.
Article in English | MEDLINE | ID: mdl-33689687

ABSTRACT

Over the past few decades, neuroscience experiments have become increasingly complex and naturalistic. Experimental design has in turn become more challenging, as experiments must conform to an ever-increasing diversity of design constraints. In this article, we demonstrate how this design process can be greatly assisted using an optimization tool known as mixed-integer linear programming (MILP). MILP provides a rich framework for incorporating many types of real-world design constraints into a neuroscience experiment. We introduce the mathematical foundations of MILP, compare MILP to other experimental design techniques, and provide four case studies of how MILP can be used to solve complex experimental design challenges.
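
As an illustration of the kind of design constraint MILP handles, here is a toy problem solved with SciPy's MILP interface (my own example and tool choice, not one of the paper's four case studies): assigning stimuli to runs with exact run sizes while minimizing an arbitrary per-assignment cost.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Toy design problem: assign 6 stimuli to 2 runs, 3 stimuli per run.
# Binary variable x[i, j] = 1 if stimulus i is placed in run j,
# flattened to index i * n_runs + j.
n_stim, n_runs = 6, 2
rng = np.random.default_rng(0)
cost = rng.random(n_stim * n_runs)  # arbitrary per-assignment cost

# Constraint: each stimulus is assigned to exactly one run.
A_stim = np.zeros((n_stim, n_stim * n_runs))
for i in range(n_stim):
    A_stim[i, i * n_runs:(i + 1) * n_runs] = 1

# Constraint: each run receives exactly 3 stimuli.
A_run = np.zeros((n_runs, n_stim * n_runs))
for j in range(n_runs):
    A_run[j, j::n_runs] = 1

res = milp(cost,
           constraints=[LinearConstraint(A_stim, 1, 1),
                        LinearConstraint(A_run, 3, 3)],
           integrality=np.ones(n_stim * n_runs),  # all variables integer
           bounds=Bounds(0, 1))                   # binary with integrality
```

Real experimental designs add more constraint blocks of the same form (counterbalancing, spacing, transition rules); the MILP solver handles them jointly rather than by trial-and-error shuffling.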


Subject(s)
Models, Neurological , Models, Theoretical , Neurosciences/methods , Programming, Linear , Research Design , Animals , Humans
12.
Elife ; 9, 2020 09 28.
Article in English | MEDLINE | ID: mdl-32985972

ABSTRACT

Experience influences behavior, but little is known about how experience is encoded in the brain, and how changes in neural activity are implemented at a network level to improve performance. Here we investigate how differences in experience impact brain circuitry and behavior in larval zebrafish prey capture. We find that experience of live prey compared to inert food increases capture success by boosting capture initiation. In response to live prey, animals with and without prior experience of live prey show activity in visual areas (pretectum and optic tectum) and motor areas (cerebellum and hindbrain), with similar visual area retinotopic maps of prey position. However, prey-experienced animals more readily initiate capture in response to visual area activity and have greater visually-evoked activity in two forebrain areas: the telencephalon and habenula. Consequently, disruption of habenular neurons reduces capture performance in prey-experienced fish. Together, our results suggest that experience of prey strengthens prey-associated visual drive to the forebrain, and that this lowers the threshold for prey-associated visual activity to trigger activity in motor areas, thereby improving capture performance.


Subject(s)
Learning/physiology , Predatory Behavior/physiology , Prosencephalon/physiology , Visual Pathways/physiology , Zebrafish/physiology , Animals
13.
Front Neurosci ; 14: 565976, 2020.
Article in English | MEDLINE | ID: mdl-34045937

ABSTRACT

Complex natural tasks likely recruit many different functional brain networks, but it is difficult to predict how such tasks will be represented across cortical areas and networks. Previous electrophysiology studies suggest that task variables are represented in a low-dimensional subspace within the activity space of neural populations. Here we develop a voxel-based state space modeling method for recovering task-related state spaces from human fMRI data. We apply this method to data acquired in a controlled visual attention task and a video game task. We find that each task induces distinct brain states that can be embedded in a low-dimensional state space that reflects task parameters, and that attention increases state separation in the task-related subspace. Our results demonstrate that the state space framework offers a powerful approach for modeling human brain activity elicited by complex natural tasks.

14.
J Neurosci ; 39(39): 7722-7736, 2019 09 25.
Article in English | MEDLINE | ID: mdl-31427396

ABSTRACT

An integral part of human language is the capacity to extract meaning from spoken and written words, but the precise relationship between brain representations of information perceived by listening versus reading is unclear. Prior neuroimaging studies have shown that semantic information in spoken language is represented in multiple regions in the human cerebral cortex, while amodal semantic information appears to be represented in a few broad brain regions. However, previous studies were too insensitive to determine whether semantic representations were shared at a fine level of detail rather than merely at a coarse scale. We used fMRI to record brain activity in two separate experiments while participants listened to or read several hours of the same narrative stories, and then created voxelwise encoding models to characterize semantic selectivity in each voxel and in each individual participant. We find that semantic tuning during listening and reading are highly correlated in most semantically selective regions of cortex, and models estimated using one modality accurately predict voxel responses in the other modality. These results suggest that the representation of language semantics is independent of the sensory modality through which the semantic information is received. SIGNIFICANCE STATEMENT: Humans can comprehend the meaning of words from both spoken and written language. It is therefore important to understand the relationship between the brain representations of spoken or written text. Here, we show that although the representation of semantic information in the human brain is quite complex, the semantic representations evoked by listening versus reading are almost identical. These results suggest that the representation of language semantics is independent of the sensory modality through which the semantic information is received.


Subject(s)
Auditory Perception/physiology , Cerebral Cortex/physiology , Comprehension/physiology , Models, Neurological , Visual Perception/physiology , Acoustic Stimulation , Adult , Female , Humans , Magnetic Resonance Imaging , Male , Photic Stimulation , Reading , Semantics
15.
Neuroimage ; 197: 482-492, 2019 08 15.
Article in English | MEDLINE | ID: mdl-31075394

ABSTRACT

Predictive models for neural or fMRI data are often fit using regression methods that employ priors on the model parameters. One widely used method is ridge regression, which employs a spherical multivariate normal prior that assumes equal and independent variance for all parameters. However, a spherical prior is not always optimal or appropriate. There are many cases where expert knowledge or hypotheses about the structure of the model parameters could be used to construct a better prior. In these cases, non-spherical multivariate normal priors can be employed using a generalized form of ridge known as Tikhonov regression. Yet Tikhonov regression is only rarely used in neuroscience. In this paper we discuss the theoretical basis for Tikhonov regression, demonstrate a computationally efficient method for its application, and show several examples of how Tikhonov regression can improve predictive models for fMRI data. We also show that many earlier studies have implicitly used Tikhonov regression by linearly transforming the regressors before performing ridge regression.
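
The paper's observation that Tikhonov regression reduces to ridge on linearly transformed regressors can be shown in a short NumPy sketch (assuming a square, invertible penalty matrix `Gamma`):

```python
import numpy as np

def tikhonov(X, y, Gamma, lam=1.0):
    """Tikhonov regression: minimize ||y - X w||^2 + lam * ||Gamma w||^2.

    Implemented the way the paper describes: linearly transform the
    regressors (X -> X Gamma^{-1}), run ordinary ridge on the
    transformed design, then map the weights back.
    """
    Xt = X @ np.linalg.inv(Gamma)
    n_feat = Xt.shape[1]
    v = np.linalg.solve(Xt.T @ Xt + lam * np.eye(n_feat), Xt.T @ y)
    return np.linalg.solve(Gamma, v)  # w = Gamma^{-1} v
```

With `Gamma` equal to the identity this is exactly ordinary ridge regression; a non-spherical `Gamma` encodes prior structure (e.g., smoothness across neighboring regressors) while reusing any efficient ridge solver.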


Subject(s)
Brain/physiology , Computer Simulation , Magnetic Resonance Imaging , Models, Neurological , Neurosciences/methods , Algorithms , Humans
16.
Neuron ; 101(1): 178-192.e7, 2019 01 02.
Article in English | MEDLINE | ID: mdl-30497771

ABSTRACT

It has been argued that scene-selective areas in the human brain represent both the 3D structure of the local visual environment and low-level 2D features (such as spatial frequency) that provide cues for 3D structure. To evaluate the degree to which each of these hypotheses explains variance in scene-selective areas, we develop an encoding model of 3D scene structure and test it against a model of low-level 2D features. We fit the models to fMRI data recorded while subjects viewed visual scenes. The fit models reveal that scene-selective areas represent the distance to and orientation of large surfaces, at least partly independent of low-level features. Principal component analysis of the model weights reveals that the most important dimensions of 3D structure are distance and openness. Finally, reconstructions of the stimuli based on the model weights demonstrate that our model captures unprecedented detail about the local visual environment from scene-selective areas.


Subject(s)
Brain Mapping/methods , Brain/diagnostic imaging , Brain/physiology , Image Processing, Computer-Assisted/methods , Pattern Recognition, Visual/physiology , Photic Stimulation/methods , Adult , Female , Humans , Magnetic Resonance Imaging/methods , Male
17.
J Neurosci ; 37(27): 6539-6557, 2017 07 05.
Article in English | MEDLINE | ID: mdl-28588065

ABSTRACT

Speech comprehension requires that the brain extract semantic meaning from the spectral features represented at the cochlea. To investigate this process, we performed an fMRI experiment in which five men and two women passively listened to several hours of natural narrative speech. We then used voxelwise modeling to predict BOLD responses based on three different feature spaces that represent the spectral, articulatory, and semantic properties of speech. The amount of variance explained by each feature space was then assessed using a separate validation dataset. Because some responses might be explained equally well by more than one feature space, we used a variance partitioning analysis to determine the fraction of the variance that was uniquely explained by each feature space. Consistent with previous studies, we found that speech comprehension involves hierarchical representations starting in primary auditory areas and moving laterally on the temporal lobe: spectral features are found in the core of A1, mixtures of spectral and articulatory in STG, mixtures of articulatory and semantic in STS, and semantic in STS and beyond. Our data also show that both hemispheres are equally and actively involved in speech perception and interpretation. Further, responses as early in the auditory hierarchy as in STS are more correlated with semantic than spectral representations. These results illustrate the importance of using natural speech in neurolinguistic research. 
Our methodology also provides an efficient way to simultaneously test multiple specific hypotheses about the representations of speech without using block designs and segmented or synthetic speech. SIGNIFICANCE STATEMENT: To investigate the processing steps performed by the human brain to transform natural speech sound into meaningful language, we used models based on a hierarchical set of speech features to predict BOLD responses of individual voxels recorded in an fMRI experiment while subjects listened to natural speech. Both cerebral hemispheres were actively involved in speech processing in large and equal amounts. Also, the transformation from spectral features to semantic elements occurs early in the cortical speech-processing stream. Our experimental and analytical approaches are important alternatives and complements to standard approaches that use segmented speech and block designs, which report more laterality in speech processing and attribute semantic processing to higher levels of cortex than reported here.
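
The variance partitioning step described above can be sketched as follows, using in-sample OLS R² as a stand-in for the cross-validated prediction scores an fMRI analysis would use (a simplified illustration, not the paper's code):

```python
import numpy as np

def r2(X, y):
    """In-sample explained variance of an OLS fit; a stand-in for the
    cross-validated prediction scores used with real fMRI data."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ w
    return 1.0 - resid.var() / y.var()

def unique_variance(X_a, X_b, y):
    """Variance uniquely explained by feature space A: the drop in
    explained variance when A is removed from the joint model."""
    return r2(np.hstack([X_a, X_b]), y) - r2(X_b, y)
```

Responses explained equally well by two feature spaces contribute to the joint model's score but to neither unique term, which is exactly the ambiguity the partitioning is designed to expose.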


Subject(s)
Cerebral Cortex/physiology , Models, Neurological , Nerve Net/physiology , Speech Perception/physiology , Adult , Computer Simulation , Female , Humans , Male , Neural Pathways/physiology
18.
J Vis ; 17(1): 11, 2017 01 01.
Article in English | MEDLINE | ID: mdl-28114479

ABSTRACT

During natural vision, humans make frequent eye movements but perceive a stable visual world. It is therefore likely that the human visual system contains representations of the visual world that are invariant to eye movements. Here we present an experiment designed to identify visual areas that might contain eye-movement-invariant representations. We used functional MRI to record brain activity from four human subjects who watched natural movies. In one condition subjects were required to fixate steadily, and in the other they were allowed to freely make voluntary eye movements. The movies used in each condition were identical. We reasoned that the brain activity recorded in a visual area that is invariant to eye movement should be similar under fixation and free viewing conditions. In contrast, activity in a visual area that is sensitive to eye movement should differ between fixation and free viewing. We therefore measured the similarity of brain activity across repeated presentations of the same movie within the fixation condition, and separately between the fixation and free viewing conditions. The ratio of these measures was used to determine which brain areas are most likely to contain eye movement-invariant representations. We found that voxels located in early visual areas are strongly affected by eye movements, while voxels in ventral temporal areas are only weakly affected by eye movements. These results suggest that the ventral temporal visual areas contain a stable representation of the visual world that is invariant to eye movements made during natural vision.
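
The similarity-ratio logic of this design can be sketched per voxel as follows (function and argument names are illustrative, not taken from the paper):

```python
import numpy as np

def invariance_ratio(fix_rep1, fix_rep2, free):
    """Similarity ratio for one voxel: response correlation between
    fixation and free viewing, normalized by the correlation across
    repeated fixation viewings. All arguments are 1-D time courses.
    Values near 1 suggest a representation invariant to eye movements;
    values near 0 suggest strong sensitivity to eye movements."""
    within = np.corrcoef(fix_rep1, fix_rep2)[0, 1]
    across = np.corrcoef(fix_rep1, free)[0, 1]
    return across / within
```

Normalizing by the within-condition repeat correlation controls for each voxel's intrinsic reliability, so an unreliable voxel is not mistaken for an eye-movement-sensitive one.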


Subject(s)
Brain/physiology , Eye Movements/physiology , Fixation, Ocular/physiology , Visual Perception/physiology , Adult , Female , Humans , Magnetic Resonance Imaging , Male
19.
Front Neuroinform ; 10: 49, 2016.
Article in English | MEDLINE | ID: mdl-27920675

ABSTRACT

In this article we introduce Pyrcca, an open-source Python package for performing canonical correlation analysis (CCA). CCA is a multivariate analysis method for identifying relationships between sets of variables. Pyrcca supports CCA with or without regularization, and with or without linear, polynomial, or Gaussian kernelization. We first use an abstract example to describe Pyrcca functionality. We then demonstrate how Pyrcca can be used to analyze neuroimaging data. Specifically, we use Pyrcca to implement cross-subject comparison in a natural movie functional magnetic resonance imaging (fMRI) experiment by finding a data-driven set of functional response patterns that are similar across individuals. We validate this cross-subject comparison method in Pyrcca by predicting responses to novel natural movies across subjects. Finally, we show how Pyrcca can reveal retinotopic organization in brain responses to natural movies without the need for an explicit model.
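
For readers who want the underlying math rather than the package, regularized linear CCA can be written in a few lines of NumPy (a sketch of the method Pyrcca implements in its linear mode; this is not Pyrcca's actual API):

```python
import numpy as np

def linear_cca(X, Y, n_comp=2, reg=1e-3):
    """Regularized linear CCA via a whitened cross-covariance SVD.
    Returns canonical weights for X and Y plus canonical correlations."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Cxx = X.T @ X + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y + reg * np.eye(Y.shape[1])
    Lx = np.linalg.cholesky(Cxx)  # whitening factor for X block
    Ly = np.linalg.cholesky(Cyy)  # whitening factor for Y block
    M = np.linalg.solve(Lx, X.T @ Y) @ np.linalg.inv(Ly).T
    U, s, Vt = np.linalg.svd(M)
    Wx = np.linalg.solve(Lx.T, U[:, :n_comp])
    Wy = np.linalg.solve(Ly.T, Vt[:n_comp].T)
    return Wx, Wy, s[:n_comp]
```

Projecting each dataset onto its canonical weights yields maximally correlated component time courses, which is what enables the cross-subject comparisons the abstract describes.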

20.
J Neurosci ; 36(40): 10257-10273, 2016 10 05.
Article in English | MEDLINE | ID: mdl-27707964

ABSTRACT

Functional MRI studies suggest that at least three brain regions in human visual cortex-the parahippocampal place area (PPA), retrosplenial complex (RSC), and occipital place area (OPA; often called the transverse occipital sulcus)-represent large-scale information in natural scenes. Tuning of voxels within each region is often assumed to be functionally homogeneous. To test this assumption, we recorded blood oxygenation level-dependent responses during passive viewing of complex natural movies. We then used a voxelwise modeling framework to estimate voxelwise category tuning profiles within each scene-selective region. In all three regions, cluster analysis of the voxelwise tuning profiles reveals two functional subdomains that differ primarily in their responses to animals, man-made objects, social communication, and movement. Thus, the conventional functional definitions of the PPA, RSC, and OPA appear to be too coarse. One attractive hypothesis is that this consistent functional subdivision of scene-selective regions is a reflection of an underlying anatomical organization into two separate processing streams, one selectively biased toward static stimuli and one biased toward dynamic stimuli. SIGNIFICANCE STATEMENT: Visual scene perception is a critical ability to survive in the real world. It is therefore reasonable to assume that the human brain contains neural circuitry selective for visual scenes. Here we show that responses in three scene-selective areas-identified in previous studies-carry information about many object and action categories encountered in daily life. We identify two subregions in each area: one that is selective for categories of man-made objects, and another that is selective for vehicles and locomotion-related action categories that appear in dynamic scenes. This consistent functional subdivision may reflect an anatomical organization into two processing streams, one biased toward static stimuli and one biased toward dynamic stimuli.


Subject(s)
Cerebral Cortex/physiology , Occipital Lobe/physiology , Parahippocampal Gyrus/physiology , Adult , Brain Mapping , Communication , Female , Functional Laterality , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Male , Models, Neurological , Movement , Oxygen/blood , Photic Stimulation , Young Adult