Results 1 - 6 of 6
1.
bioRxiv; 2024 Jun 21.
Article in English | MEDLINE | ID: mdl-38948870

ABSTRACT

Human language comprehension is remarkably robust to ill-formed inputs (e.g., word transpositions). This robustness has led some to argue that syntactic parsing is largely an illusion, and that incremental comprehension is more heuristic, shallow, and semantics-based than is often assumed. However, the available data are also consistent with the possibility that humans always perform rule-like symbolic parsing and simply deploy error-correction mechanisms to reconstruct ill-formed inputs when needed. We put these hypotheses to a new, stringent test by examining brain responses to a) stimuli that should pose a challenge for syntactic reconstruction but allow for complex meanings to be built within local contexts through associative/shallow processing (sentences presented in a backward word order), and b) grammatically well-formed but semantically implausible sentences that should impede semantics-based heuristic processing. Using a novel behavioral syntactic reconstruction paradigm, we demonstrate that backward-presented sentences indeed impede the recovery of grammatical structure during incremental comprehension. Critically, these backward-presented stimuli elicit a relatively low response in the language areas, as measured with fMRI. In contrast, semantically implausible but grammatically well-formed sentences elicit a response in the language areas similar in magnitude to that of naturalistic (plausible) sentences. In other words, the ability to build syntactic structures during incremental language processing is both necessary and sufficient to fully engage the language network. Taken together, these results provide the strongest support to date for a generalized reliance of human language comprehension on syntactic parsing.
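
The backward word-order manipulation itself is simple to illustrate. Below is a minimal sketch, assuming plain whitespace tokenization; the study's actual stimulus preparation (punctuation handling, presentation rate, etc.) is not specified in this abstract.

```python
# Minimal sketch of constructing a backward-presented stimulus: reverse the
# word order while leaving each word intact. Whitespace tokenization is an
# assumption; the study's exact stimulus preparation is not described here.
def reverse_word_order(sentence: str) -> str:
    return " ".join(reversed(sentence.split()))

print(reverse_word_order("the chef cooked a delicious meal"))
# -> "meal delicious a cooked chef the"
```

Note that adjacent content words in the reversed string (e.g., "meal delicious") can still be composed into local meanings via associative/shallow processing, even though the global grammatical structure is hard to recover; this is why the manipulation selectively burdens syntactic reconstruction.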

2.
Cereb Cortex; 34(3), 2024 Mar 1.
Article in English | MEDLINE | ID: mdl-38466812

ABSTRACT

How do polyglots (individuals who speak five or more languages) process their languages, and what can this population tell us about the language system? Using fMRI, we identified the language network in each of 34 polyglots (including 16 hyperpolyglots with knowledge of 10+ languages) and examined its response to the native language, non-native languages of varying proficiency, and unfamiliar languages. All language conditions engaged all areas of the language network relative to a control condition. Languages that participants rated as higher in proficiency elicited stronger responses, except for the native language, which elicited a similar or lower response than a non-native language of similar proficiency. Furthermore, unfamiliar languages that were typologically related to the participants' high-to-moderate-proficiency languages elicited a stronger response than unfamiliar unrelated languages. The results suggest that the language network's response magnitude scales with the degree of engagement of linguistic computations (e.g., related to lexical access and syntactic-structure building). We also replicated a prior finding of weaker responses to the native language in polyglots than in non-polyglot bilinguals. These results contribute to our understanding of how multiple languages coexist within a single brain and provide new evidence that the language network responds more strongly to stimuli that more fully engage linguistic computations.


Subject(s)
Multilingualism, Humans, Magnetic Resonance Imaging, Language, Brain/diagnostic imaging, Brain/physiology, Brain Mapping
3.
Nat Hum Behav; 8(3): 544-561, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38172630

ABSTRACT

Transformer models such as GPT generate human-like language and are predictive of human brain responses to language. Here, using functional-MRI-measured brain responses to 1,000 diverse sentences, we first show that a GPT-based encoding model can predict the magnitude of the brain response associated with each sentence. We then use the model to identify new sentences that are predicted to drive or suppress responses in the human language network. We show that these model-selected novel sentences indeed strongly drive and suppress the activity of human language areas in new individuals. A systematic analysis of the model-selected sentences reveals that surprisal and well-formedness of linguistic input are key determinants of response strength in the language network. These results establish the ability of neural network models to not only mimic human language but also non-invasively control neural activity in higher-level cortical areas, such as the language network.


Subject(s)
Comprehension, Language, Humans, Comprehension/physiology, Brain/diagnostic imaging, Brain/physiology, Linguistics/methods, Brain Mapping/methods
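
The encoding-model-then-search logic described in this abstract can be sketched compactly. The following is a minimal illustration, not the authors' actual pipeline: it assumes GPT-2 hidden states mean-pooled over tokens as sentence features, ridge regression as the mapping to a per-sentence response magnitude, and toy placeholder sentences and response values in place of the study's 1,000 sentences and fMRI data (dependencies: transformers, torch, scikit-learn, numpy).

```python
# Minimal sketch of a GPT-based encoding model (toy data, assumed design):
# 1) embed each sentence with GPT-2, 2) fit a ridge regression mapping the
# embedding to a per-sentence fMRI response magnitude, 3) rank novel
# candidate sentences by predicted response to pick "drive"/"suppress" stimuli.
import numpy as np
import torch
from sklearn.linear_model import RidgeCV
from transformers import GPT2Model, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2").eval()

def embed(sentence: str) -> np.ndarray:
    """Mean-pool GPT-2's final hidden layer over tokens: one vector per sentence."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # shape (1, n_tokens, 768)
    return hidden.mean(dim=1).squeeze(0).numpy()

# Toy stand-ins for the study's 1,000 sentences and measured responses.
train_sentences = [
    "The dog chased the ball across the yard.",
    "She quietly closed the door behind her.",
    "The committee approved the new budget.",
    "He forgot where he parked the car.",
    "Colorless green ideas sleep furiously.",
    "Quick the over bridge runned was them.",
]
rng = np.random.default_rng(0)
bold = rng.normal(size=len(train_sentences))  # placeholder response magnitudes

X = np.stack([embed(s) for s in train_sentences])
enc = RidgeCV(alphas=np.logspace(-2, 4, 13)).fit(X, bold)

# Rank novel candidates by predicted language-network response.
candidates = [
    "The lawyer argued the case before the judge.",
    "Blue the under ran river silently stone.",
]
preds = enc.predict(np.stack([embed(s) for s in candidates]))
order = np.argsort(preds)
print("predicted suppress:", candidates[order[0]])
print("predicted drive:   ", candidates[order[-1]])
```

In the study, the highest- and lowest-ranked novel sentences were then presented to new participants to confirm that they indeed drive and suppress language-network activity; the sketch above covers only the model-fitting and ranking steps.
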
4.
bioRxiv; 2024 Jan 30.
Article in English | MEDLINE | ID: mdl-36711949

ABSTRACT

How do polyglots (individuals who speak five or more languages) process their languages, and what can this population tell us about the language system? Using fMRI, we identified the language network in each of 34 polyglots (including 16 hyperpolyglots with knowledge of 10+ languages) and examined its response to the native language, non-native languages of varying proficiency, and unfamiliar languages. All language conditions engaged all areas of the language network relative to a control condition. Languages that participants rated as higher-proficiency elicited stronger responses, except for the native language, which elicited a similar or lower response than a non-native language of similar proficiency. Furthermore, unfamiliar languages that were typologically related to the participants' high-to-moderate-proficiency languages elicited a stronger response than unfamiliar unrelated languages. The results suggest that the language network's response magnitude scales with the degree of engagement of linguistic computations (e.g., related to lexical access and syntactic-structure building). We also replicated a prior finding of weaker responses to the native language in polyglots than in non-polyglot bilinguals. These results contribute to our understanding of how multiple languages co-exist within a single brain and provide new evidence that the language network responds more strongly to stimuli that more fully engage linguistic computations.

5.
bioRxiv; 2023 Jul 28.
Article in English | MEDLINE | ID: mdl-37546901

ABSTRACT

What constitutes a language? Natural languages share some features with other domains, from math to music to gesture. However, the brain mechanisms that process linguistic input are highly specialized, showing little or no response to diverse non-linguistic tasks. Here, we examine constructed languages (conlangs) to ask whether they draw on the same neural mechanisms as natural languages, or whether they instead pattern with domains like math and logic. Using individual-subject fMRI analyses, we show that understanding conlangs recruits the same brain areas as natural language comprehension. This result holds both for Esperanto (n=19 speakers), which was created to resemble natural languages, and for fictional conlangs (Klingon (n=10), Na'vi (n=9), High Valyrian (n=3), and Dothraki (n=3)), which were created to differ from natural languages. It suggests that conlangs and natural languages share critical features, and that the notable differences between conlangs and natural languages are not consequential for the cognitive and neural mechanisms that they engage.

6.
bioRxiv; 2023 Oct 30.
Article in English | MEDLINE | ID: mdl-37090673

ABSTRACT

Transformer models such as GPT generate human-like language and are highly predictive of human brain responses to language. Here, using fMRI-measured brain responses to 1,000 diverse sentences, we first show that a GPT-based encoding model can predict the magnitude of the brain response associated with each sentence. Then, we use the model to identify new sentences that are predicted to drive or suppress responses in the human language network. We show that these model-selected novel sentences indeed strongly drive and suppress the activity of human language areas in new individuals. A systematic analysis of the model-selected sentences reveals that surprisal and well-formedness of linguistic input are key determinants of response strength in the language network. These results establish the ability of neural network models to not only mimic human language but also noninvasively control neural activity in higher-level cortical areas, like the language network.
