Results 1 - 11 of 11
1.
J Neurosci; 41(18): 4100-4119, 2021 May 05.
Article in English | MEDLINE | ID: mdl-33753548

ABSTRACT

Understanding how and where in the brain sentence-level meaning is constructed from words presents a major scientific challenge. Recent advances have begun to explain brain activation elicited by sentences using vector models of word meaning derived from patterns of word co-occurrence in text corpora. These studies have helped map out semantic representation across a distributed brain network spanning temporal, parietal, and frontal cortex. However, it remains unclear whether activation patterns within regions reflect unified representations of sentence-level meaning, as opposed to superpositions of context-independent component words. This is because models have typically represented sentences as "bags-of-words" that neglect sentence-level structure. To address this issue, we interrogated fMRI activation elicited as 240 sentences were read by 14 participants (9 female, 5 male), using sentences encoded by a recurrent deep artificial neural network trained on a sentence-inference task (InferSent). Recurrent connections and nonlinear filters enable InferSent to transform sequences of word vectors into unified "propositional" sentence representations suitable for evaluating intersentence entailment relations. Using voxelwise encoding modeling, we demonstrate that InferSent predicts elements of fMRI activation that cannot be predicted by bag-of-words models or by sentence models that use grammatical rules to assemble word vectors. This effect occurs throughout a distributed network, which suggests that propositional sentence-level meaning is represented within and across multiple cortical regions rather than at any single site. In follow-up analyses, we place the results in the context of other deep-network approaches (ELMo and BERT) and estimate the degree of unpredicted neural signal using an "experiential" semantic model and cross-participant encoding.

SIGNIFICANCE STATEMENT: A modern-day scientific challenge is to understand how the human brain transforms word sequences into representations of sentence meaning. A recent approach, emerging from advances in functional neuroimaging, big data, and machine learning, is to computationally model meaning and use the models to predict brain activity. Such models have helped map a cortical semantic information-processing network. However, how unified sentence-level information, as opposed to word-level units, is represented throughout this network remains unclear. This is because models have typically represented sentences as unordered "bags-of-words." Using a deep artificial neural network that recurrently and nonlinearly combines word representations into unified propositional sentence representations, we provide evidence that sentence-level information is encoded throughout a cortical network, rather than in a single region.
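To make the modeling approach concrete, here is a minimal sketch of voxelwise encoding-model evaluation, assuming sentence embeddings (e.g., InferSent vectors) and per-sentence fMRI responses are available as arrays; the array names, ridge penalty, and cross-validation scheme are illustrative, not the authors' exact pipeline.

```python
# A sketch only: illustrative names, not the authors' exact pipeline.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

def voxelwise_encoding_score(embeddings, voxels, alpha=1.0, n_splits=10):
    """Cross-validated per-voxel correlation between predicted and
    observed activation.
    embeddings : (n_sentences, n_features) sentence representations
                 (e.g., InferSent or bag-of-words vectors)
    voxels     : (n_sentences, n_voxels) fMRI responses
    """
    corrs = np.zeros(voxels.shape[1])
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train, test in kf.split(embeddings):
        model = Ridge(alpha=alpha).fit(embeddings[train], voxels[train])
        pred = model.predict(embeddings[test])
        for v in range(voxels.shape[1]):
            corrs[v] += np.corrcoef(pred[:, v], voxels[test][:, v])[0, 1]
    return corrs / n_splits

# Model comparison in the spirit of the study: score an InferSent-style
# embedding and a bag-of-words embedding on the same data, then ask where
# the former predicts voxels that the latter cannot.
```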


Subject(s)
Cerebral Cortex/diagnostic imaging; Cerebral Cortex/physiology; Comprehension/physiology; Language; Neural Networks, Computer; Semantics; Adult; Computer Simulation; Female; Humans; Magnetic Resonance Imaging; Male; Middle Aged; Reading; Young Adult
2.
Nat Commun; 11(1): 5916, 2020 Nov 20.
Article in English | MEDLINE | ID: mdl-33219210

ABSTRACT

Everyone experiences common events differently. This leads to personal memories that presumably provide neural signatures of individual identity when events are reimagined. We present initial evidence that these signatures can be read from brain activity. To do this, we progress beyond previous work that has deployed generic group-level computational semantic models to distinguish between neural representations of different events, but has not revealed interpersonal differences in event representations. We scanned 26 participants' brain activity using functional Magnetic Resonance Imaging as they vividly imagined themselves personally experiencing 20 common scenarios (e.g., dancing, shopping, a wedding). Rather than adopting a one-size-fits-all approach to generically model scenarios, we constructed personal models from participants' verbal descriptions and self-ratings of sensory/motor/cognitive/spatiotemporal and emotional characteristics of the imagined experiences. We demonstrate that participants' neural representations are better predicted by their own models than by other people's. This showcases how neuroimaging and personalized models can quantify individual differences in imagined experiences.
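A minimal sketch of the personalization test described above, assuming each participant's attribute ratings and scenario activation patterns are available as arrays; names and the similarity measures are illustrative choices rather than the authors' exact method.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def model_brain_fit(ratings, patterns):
    """Alignment between a model's and a brain's similarity structure
    over the 20 imagined scenarios (higher = better fit)."""
    return spearmanr(pdist(ratings, "correlation"),
                     pdist(patterns, "correlation")).correlation

def own_vs_other(ratings, patterns):
    """ratings[p]: (20, n_attributes) self-ratings of participant p;
    patterns[p]: (20, n_voxels) scenario activation patterns."""
    n = len(ratings)
    own = np.array([model_brain_fit(ratings[p], patterns[p])
                    for p in range(n)])
    other = np.array([np.mean([model_brain_fit(ratings[q], patterns[p])
                               for q in range(n) if q != p])
                      for p in range(n)])
    return own, other  # the claim under test: own > other on average
```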


Subject(s)
Brain Mapping; Imagination; Memory, Long-Term; Aged; Brain Mapping/methods; Brain Mapping/psychology; Female; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging/methods; Male; Nervous System Physiological Phenomena; Semantics
3.
Brain Imaging Behav; 14(6): 2488-2499, 2020 Dec.
Article in English | MEDLINE | ID: mdl-31493140

ABSTRACT

Cumulative evidence suggests the existence of common processes underlying the subjective experience of cognitive and physical fatigue. However, mechanistic understanding of the brain structural connections underlying the experience of fatigue in general, without the influence of clinical conditions, is limited. The purpose of this study was to examine the relationship between structural connectivity and perceived state fatigue in older adults. We enrolled cognitively and physically healthy older individuals (n = 52) and categorized them into three groups (low cognitive/low physical fatigue; low cognitive/high physical fatigue; high cognitive/low physical fatigue; no subjects had high cognitive/high physical fatigue) based on perceived fatigue from cognitive and physical fatigue manipulation tasks. Using diffusion tensor imaging processing techniques, we extracted connectome matrices for six different characteristics of whole-brain structural connections for each subject. Tensor network principal component analysis was used to examine group differences in these connectome matrices and to extract principal brain networks for each group. Connected surface area of principal brain networks differentiated the two high-fatigue groups from the low cognitive/physical fatigue group (high vs. low physical fatigue, p = 0.046; high vs. low cognitive fatigue, p = 0.036). Greater connected surface area within striatal-frontal-parietal networks was correlated with lower cognitive and physical fatigue, and was predictive of perceived physical and cognitive fatigue measures not used for group categorization (Pittsburgh fatigability physical subscale, R2 = 0.70, p < 0.0001; difference in self-reported fatigue before and after gambling tasks, R2 = 0.54, p < 0.0001). These findings point to structural connectomes that are potentially resilient to both cognitive and physical fatigue in older adults.
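The tensor network principal component analysis used here is specialized; as a rough, deliberately simplified stand-in for the underlying idea (extracting a principal network from subject-level connectome matrices), the sketch below applies ordinary PCA to vectorized connectomes. It is not the authors' method, and all names are illustrative.

```python
import numpy as np

def principal_network(connectomes):
    """connectomes: (n_subjects, n_regions, n_regions) symmetric matrices.
    Returns the leading principal component reshaped as a network matrix."""
    n, r, _ = connectomes.shape
    # keep the upper triangle only (symmetric, no self-connections)
    iu = np.triu_indices(r, k=1)
    X = np.stack([c[iu] for c in connectomes])
    X -= X.mean(axis=0)
    # leading right singular vector = first principal axis in edge space
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    net = np.zeros((r, r))
    net[iu] = vt[0]
    return net + net.T
```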


Subject(s)
Connectome; Aged; Brain/diagnostic imaging; Cognition; Diffusion Tensor Imaging; Female; Humans; Magnetic Resonance Imaging; Male
4.
J Neurosci; 39(45): 8969-8987, 2019 Nov 06.
Article in English | MEDLINE | ID: mdl-31570538

ABSTRACT

The brain is thought to combine linguistic knowledge of words and nonlinguistic knowledge of their referents to encode sentence meaning. However, functional neuroimaging studies aiming at decoding language meaning from neural activity have mostly relied on distributional models of word semantics, which are based on patterns of word co-occurrence in text corpora. Here, we present initial evidence that modeling nonlinguistic "experiential" knowledge contributes to decoding neural representations of sentence meaning. We model attributes of people's sensory, motor, social, emotional, and cognitive experiences with words using behavioral ratings. We demonstrate that fMRI activation elicited during sentence reading is more accurately decoded when this experiential attribute model is integrated with a text-based model than when either model is applied in isolation (participants were 5 males and 9 females). Our decoding approach exploits a representation-similarity-based framework, which benefits from being parameter-free, while performing at accuracy levels comparable with those from parameter-fitting approaches, such as ridge regression. We find that the text-based model contributes particularly to the decoding of sentences containing linguistically oriented "abstract" words, and reveal tentative evidence that the experiential model improves decoding of more concrete sentences. Finally, we introduce a cross-participant decoding method to estimate an upper bound on model-based decoding accuracy. We demonstrate that a substantial fraction of neural signal remains unexplained, and leverage this gap to pinpoint characteristics of weakly decoded sentences and hence identify model weaknesses to guide future model development.

SIGNIFICANCE STATEMENT: Language gives humans the unique ability to communicate about historical events, theoretical concepts, and fiction. Although words are learned through language and defined by their relations to other words in dictionaries, our understanding of word meaning presumably draws heavily on our nonlinguistic sensory, motor, interoceptive, and emotional experiences with words and their referents. Behavioral experiments lend support to the intuition that word meaning integrates aspects of linguistic and nonlinguistic "experiential" knowledge. However, behavioral measures do not provide a window on how meaning is represented in the brain and tend to necessitate artificial experimental paradigms. We present a model-based approach that reveals early evidence that experiential and linguistically acquired knowledge can be detected in brain activity elicited in reading natural sentences.
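A minimal sketch of a parameter-free, representation-similarity-based decoding test of the kind described above: each held-out sentence is characterized by its profile of similarities to training sentences, computed separately in model space and brain space, and the correct pairing is compared against the swapped one; no weights are fitted. Names and the pairwise formulation are illustrative assumptions.

```python
import numpy as np

def simvec(item, reference):
    """Similarity (Pearson r) of one item to each reference item."""
    return np.array([np.corrcoef(item, ref)[0, 1] for ref in reference])

def pairwise_decode(brain_test, model_test, brain_train, model_train):
    """Match two held-out sentences via their similarity profiles to the
    training set. Returns True if the correct pairing beats the swap."""
    b = [simvec(x, brain_train) for x in brain_test]
    m = [simvec(x, model_train) for x in model_test]
    correct = np.corrcoef(b[0], m[0])[0, 1] + np.corrcoef(b[1], m[1])[0, 1]
    swapped = np.corrcoef(b[0], m[1])[0, 1] + np.corrcoef(b[1], m[0])[0, 1]
    return correct > swapped
```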


Subject(s)
Comprehension; Models, Neurological; Reading; Adult; Brain/physiology; Female; Humans; Knowledge; Learning; Male; Semantics
5.
Neuroimage Clin; 22: 101788, 2019.
Article in English | MEDLINE | ID: mdl-30991624

ABSTRACT

Alzheimer's disease (AD) is associated with a loss of semantic knowledge reflecting brain pathophysiology that begins years before dementia. Identifying early signs of pathophysiology-induced dysfunction in the neural systems that access and process word meaning could therefore help forecast dementia. This article reviews pioneering studies demonstrating that abnormal functional Magnetic Resonance Imaging (fMRI) response patterns elicited in semantic tasks reflect both AD pathophysiology and the hereditary risk of AD, and can also help forecast cognitive decline. However, to bring current semantic task-based fMRI research up to date with new AD research guidelines, the relationship with different types of AD pathophysiology needs to be more thoroughly examined. We shall argue that new analytic techniques and experimental paradigms will be critical for this. Previous work has relied on specialized tests of specific components of semantic knowledge/processing (e.g., famous-name recognition) to reveal coarse AD-related changes in activation across broad brain regions. Recent computational advances now enable more detailed tests of the semantic information that is represented within brain regions during more natural language comprehension. These new methods stand to more directly index how pathophysiology alters neural information processing, whilst using language comprehension as the basis for a more comprehensive examination of semantic brain function. Here we connect the semantic pattern-information analysis literature with AD research to raise awareness of potential cross-disciplinary research opportunities.


Subject(s)
Alzheimer Disease/diagnostic imaging; Alzheimer Disease/physiopathology; Comprehension/physiology; Functional Neuroimaging; Nerve Net/diagnostic imaging; Nerve Net/physiopathology; Neuropsychological Tests; Semantics; Humans
6.
Cereb Cortex; 29(6): 2396-2411, 2019 Jun 01.
Article in English | MEDLINE | ID: mdl-29771323

ABSTRACT

Deciphering how sentence meaning is represented in the brain remains a major challenge to science. Semantically related neural activity has recently been shown to arise concurrently in distributed brain regions as successive words in a sentence are read. However, what semantic content is represented by different regions, what is common across them, and how this relates to words in different grammatical positions of sentences remains poorly understood. To address these questions, we apply a semantic model of word meaning to interpret brain activation patterns elicited in sentence reading. The model is based on human ratings of 65 sensory/motor/emotional and cognitive features of experience with words (and their referents). By mapping functional Magnetic Resonance Imaging activation back into model space, we test: which brain regions semantically encode content words in different grammatical positions (e.g., subject/verb/object); and what semantic features are encoded by different regions. In left temporal, inferior parietal, and inferior/superior frontal regions we detect the semantic encoding of words in all grammatical positions tested and reveal multiple common components of semantic representation. This suggests that sentence comprehension involves a common core representation of multiple words' meaning being encoded in a network of regions distributed across the brain.
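A minimal sketch of mapping activation back into model space, under illustrative assumptions: a cross-validated linear map decodes the 65 feature ratings from word-level activation patterns, and decoding is judged by whether each decoded vector matches its own word's ratings best.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import LeaveOneOut

def decode_attributes(patterns, ratings):
    """patterns: (n_words, n_voxels); ratings: (n_words, 65).
    Returns leave-one-out decoded attribute vectors, one per word."""
    decoded = np.zeros(ratings.shape)
    for train, test in LeaveOneOut().split(patterns):
        model = Ridge(alpha=1.0).fit(patterns[train], ratings[train])
        decoded[test] = model.predict(patterns[test])
    return decoded

def rank_accuracy(decoded, ratings):
    """Fraction of words whose decoded vector correlates best with
    their own ratings rather than any other word's."""
    hits = 0
    for i, d in enumerate(decoded):
        sims = [np.corrcoef(d, r)[0, 1] for r in ratings]
        hits += int(np.argmax(sims) == i)
    return hits / len(decoded)
```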


Subject(s)
Brain/physiology; Comprehension/physiology; Models, Neurological; Semantics; Speech Perception/physiology; Brain Mapping/methods; Humans; Language; Magnetic Resonance Imaging/methods
7.
Cereb Cortex; 27(9): 4379-4395, 2017 Sep 01.
Article in English | MEDLINE | ID: mdl-27522069

ABSTRACT

We introduce an approach that predicts neural representations of word meanings contained in sentences and then superposes these to predict neural representations of new sentences. A neurobiological semantic model based on sensory, motor, social, emotional, and cognitive attributes was used as a foundation to define semantic content. Previous studies have predominantly predicted neural patterns for isolated words, using models that lack neurobiological interpretation. Fourteen participants read 240 sentences describing everyday situations while undergoing fMRI. To connect sentence-level fMRI activation patterns to the word-level semantic model, we devised methods to decompose the fMRI data into individual words. Activation patterns associated with each attribute in the model were then estimated using multiple regression. This enabled synthesis of activation patterns for trained and new words, which were subsequently averaged to predict new sentences. Region-of-interest analyses revealed that prediction accuracy was highest using voxels in the left temporal and inferior parietal cortex, although a broad range of regions returned statistically significant results, showing that semantic information is widely distributed across the brain. The results show how a neurobiologically motivated semantic model can decompose sentence-level fMRI data into activation features for component words, which can be recombined to predict activation patterns for new sentences.
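A minimal sketch of the synthesis logic described above, with illustrative names: per-attribute activation maps are estimated by least squares from word-level patterns and attribute ratings, a new word's pattern is synthesized from its ratings, and word-level predictions are averaged to predict a sentence.

```python
import numpy as np

def fit_attribute_maps(word_ratings, word_patterns):
    """Least-squares A (n_attributes, n_voxels) solving
    word_ratings @ A ~ word_patterns."""
    A, *_ = np.linalg.lstsq(word_ratings, word_patterns, rcond=None)
    return A

def synthesize_word(ratings_vec, A):
    """Predicted activation pattern for one word from its attribute ratings."""
    return ratings_vec @ A

def predict_sentence(content_word_ratings, A):
    """Average the synthesized patterns of a sentence's content words."""
    return (content_word_ratings @ A).mean(axis=0)
```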


Subject(s)
Brain/physiology; Motivation/physiology; Reading; Semantics; Adult; Brain Mapping; Female; Humans; Magnetic Resonance Imaging/methods; Male; Middle Aged; Multivariate Analysis; Photic Stimulation/methods; Young Adult
8.
Neuroimage; 128: 44-53, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26732404

ABSTRACT

Patterns of neural activity are systematically elicited as the brain experiences categorical stimuli, and a major challenge is to understand what these patterns represent. Two influential approaches, hitherto treated as separate analyses, have targeted this problem by using model representations of stimuli to interpret the corresponding neural activity patterns. Stimulus-model-based encoding synthesizes neural activity patterns by first training weights to map between stimulus-model features and voxels. This allows novel model stimuli to be mapped into voxel space, and hence the strength of the model to be assessed by comparing predicted against observed neural activity. Representational Similarity Analysis (RSA) assesses models by testing how well the grand structure of pattern similarities measured between all pairs of model stimuli aligns with the same structure computed from neural activity patterns. RSA does not require model fitting, but it also does not allow synthesis of neural activity patterns, thereby limiting its applicability. We introduce a new approach, representational similarity encoding, that builds on the strengths of RSA and robustly enables stimulus-model-based neural encoding without model fitting. The approach therefore sidesteps problems associated with overfitting that notoriously confront any approach requiring parameter estimation (and is consequently computationally cheap), and importantly enables encoding analyses to be incorporated within the wider Representational Similarity Analysis framework. We illustrate this new approach by using it to synthesize and decode fMRI patterns representing the meanings of words, and discuss its potential biological relevance to encoding in semantic memory. Our new similarity-based encoding approach unites the two previously disparate methods of encoding models and RSA, capturing the strengths of both, and enabling similarity-based synthesis of predicted fMRI patterns.
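A minimal sketch of similarity-based encoding as characterized above: the pattern for a novel stimulus is synthesized as a similarity-weighted combination of training patterns, with weights computed entirely in model space, so nothing is fitted. The specific weighting scheme below is one simple illustrative choice, not necessarily the paper's.

```python
import numpy as np

def similarity_encode(novel_vec, train_vecs, train_patterns):
    """novel_vec: (n_features,) model vector of the novel stimulus;
    train_vecs: (n_train, n_features); train_patterns: (n_train, n_voxels).
    Returns a synthesized (n_voxels,) activation pattern."""
    # model-space similarity of the novel stimulus to each training stimulus
    sims = np.array([np.corrcoef(novel_vec, v)[0, 1] for v in train_vecs])
    w = np.clip(sims, 0, None)   # illustrative: positive similarities only
    w = w / w.sum()
    return w @ train_patterns    # weighted average of training patterns
```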


Subject(s)
Brain Mapping/methods; Brain/physiology; Image Processing, Computer-Assisted/methods; Models, Neurological; Models, Theoretical; Humans; Magnetic Resonance Imaging; Memory/physiology
9.
Neuroimage; 120: 309-322, 2015 Oct 15.
Article in English | MEDLINE | ID: mdl-26188260

ABSTRACT

Embodiment theory predicts that mental imagery of object words recruits neural circuits involved in object perception. The degree of visual imagery present in routine thought and how it is encoded in the brain are largely unknown. We test whether fMRI activity patterns elicited by participants reading objects' names include embodied visual-object representations, and whether we can decode the representations using novel computational image-based semantic models. We first apply the image models in conjunction with text-based semantic models to test predictions of the visual specificity of semantic representations in different brain regions. Representational similarity analysis confirms that fMRI structure within ventral-temporal and lateral-occipital regions correlates most strongly with the image models, whereas the text models correlate better with posterior-parietal/lateral-temporal/inferior-frontal regions. We use an unsupervised decoding algorithm that exploits commonalities in representational similarity structure found within both the image model and brain data sets to classify embodied visual representations with high accuracy (8/10), and then extend it to exploit model combinations to robustly decode different brain regions in parallel. By capturing latent visual-semantic structure, our models provide a route into analyzing neural representations derived from past perceptual experience rather than stimulus-driven brain activity. Our results also verify the benefit of combining multimodal data to model human-like semantic representations.
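A minimal sketch of the unsupervised decoding idea: with no learned brain-to-model mapping, labels are assigned by searching for the permutation of brain items whose similarity structure best aligns with the model's. The brute-force search is factorial in the number of items, so it is feasible only for small sets such as the 10 items decoded here; names are illustrative.

```python
import itertools
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

def unsupervised_match(model_vecs, brain_patterns):
    """Permutation of brain items best aligning the two similarity
    structures (O(n!) search: small item sets only)."""
    M = squareform(pdist(model_vecs, "correlation"))
    B = squareform(pdist(brain_patterns, "correlation"))
    n = len(M)
    iu = np.triu_indices(n, k=1)
    best_perm, best_r = None, -np.inf
    for perm in itertools.permutations(range(n)):
        r = spearmanr(M[iu], B[np.ix_(perm, perm)][iu]).correlation
        if r > best_r:
            best_perm, best_r = perm, r
    return best_perm  # best_perm[i] = brain item decoded as model item i
```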


Subject(s)
Brain Mapping/methods; Cerebral Cortex/physiology; Imagination/physiology; Models, Theoretical; Pattern Recognition, Visual/physiology; Reading; Concept Formation; Female; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Male; Semantics
10.
Proc Biol Sci; 270 Suppl 1: S18-20, 2003 Aug 07.
Article in English | MEDLINE | ID: mdl-12952625

ABSTRACT

Motion camouflage is a stealth strategy that allows a predator to conceal its apparent motion as it approaches a moving prey. Although male hoverflies have been observed to move in a manner consistent with motion camouflage to track females, the successful application of the technique has not previously been demonstrated. This article describes the implementation and results of a psychophysical experiment suggesting that humans are susceptible to motion camouflage. The experiment masqueraded as a computer-game competition. The basis of the competition was a game designed to test the comparative success of different predatory-approach strategies. The experiment showed that predators using motion camouflage were able to approach closer to their prey (the player of the game) before being detected than predators using the other approach strategies tested. For an autonomous predator, the calculation of a motion-camouflage approach is a non-trivial problem. It was, therefore, of particular interest that the players were deceived by motion-camouflage predators controlled by artificial neural systems operating on realistic levels of input information. These results should be of special interest to biologists, visual psychophysicists, military engineers and computer-game designers.


Subject(s)
Predatory Behavior/physiology; Adult; Animals; Computer Simulation; Escape Reaction/physiology; Game Theory; Humans; Learning; Models, Biological; Movement/physiology
11.
Proc Biol Sci; 270(1514): 489-495, 2003 Mar 07.
Article in English | MEDLINE | ID: mdl-12641903

ABSTRACT

A computational model of a stealth strategy inspired by the apparent mating tactics of male hoverflies is presented. The stealth strategy (motion camouflage) paradoxically allows a predator to approach a moving prey in such a way that the predator appears to the prey to be a stationary object. In the model, the predators are controlled by neural sensorimotor systems that base their decisions on realistic levels of input information. They are shown to be able to employ motion camouflage to approach prey that move along both real hoverfly flight paths and artificially generated flight paths. The camouflaged approaches demonstrate that the control systems have an ability to predict future prey movements. This is illustrated using two- and three-dimensional simulations.
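The geometric constraint at the heart of motion camouflage is simple enough to sketch (this illustrates the constraint only, not the paper's neural sensorimotor controllers): the predator advances along the line joining a fixed focal point to the prey's current position, so its bearing as seen from the prey never changes and it resembles a stationary distant object.

```python
import numpy as np

def camouflage_step(predator, prey, focus, speed):
    """One update of a motion-camouflaged pursuit in 2-D or 3-D.
    predator, prey, focus: position vectors; speed: step length.
    The predator stays on the focus->prey line, so from the prey's
    viewpoint it sits on a constant bearing, like a fixed landmark."""
    line = prey - focus
    line = line / np.linalg.norm(line)
    # progress along the line, advanced by one step, capped at the prey
    s = np.dot(predator - focus, line) + speed
    s = min(s, np.linalg.norm(prey - focus))
    return focus + s * line
```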


Subject(s)
Computer Simulation; Models, Biological; Predatory Behavior; Animals; Diptera/physiology; Male; Movement; Sexual Behavior, Animal