1.
Neurobiol Lang (Camb) ; 5(1): 201-224, 2024.
Article in English | MEDLINE | ID: mdl-38645619

ABSTRACT

In computational neurolinguistics, hierarchical models such as recurrent neural network grammars (RNNGs), which jointly generate word sequences and their syntactic structures via syntactic composition, have been shown to explain human brain activity better than sequential models such as long short-term memory networks (LSTMs). However, the vanilla RNNG employs a top-down parsing strategy, which the psycholinguistics literature has identified as suboptimal, especially for head-final/left-branching languages; the left-corner parsing strategy has instead been proposed as psychologically plausible. In this article, building on this line of inquiry, we investigate not only whether hierarchical models like RNNGs explain human brain activity better than sequential models like LSTMs, but also which parsing strategy is more neurobiologically plausible, by developing a novel fMRI corpus in which participants read newspaper articles in a head-final/left-branching language, namely Japanese, in a naturalistic fMRI experiment. The results revealed that left-corner RNNGs outperformed both LSTMs and top-down RNNGs in the left inferior frontal and temporal-parietal regions, suggesting that certain brain regions localize syntactic composition performed with the left-corner parsing strategy.
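To make the contrast between the two parsing strategies concrete, the following sketch derives the generative action sequences (NT, GEN, REDUCE) that a top-down versus a left-corner RNNG-style model would emit for a toy tree. This is an illustrative simplification, not the paper's RNNG implementation; the example sentence and tree (a Japanese-like head-final structure, roughly "Taro read a book") are invented for illustration. Note how the top-down order must open nonterminals before seeing any word, while the left-corner order builds each constituent only after consuming its leftmost word, which keeps the stack shallow for left-branching structures.

```python
def top_down_actions(tree):
    """Top-down traversal: open the nonterminal first, then its children."""
    if isinstance(tree, str):            # leaf = word
        return [f"GEN({tree})"]
    label, children = tree
    actions = [f"NT({label})"]           # predict parent before any word
    for child in children:
        actions += top_down_actions(child)
    actions.append("REDUCE")             # close the constituent
    return actions

def left_corner_actions(tree):
    """Left-corner traversal: consume the leftmost child, then project the parent."""
    if isinstance(tree, str):
        return [f"GEN({tree})"]
    label, children = tree
    actions = left_corner_actions(children[0])  # left corner first
    actions.append(f"NT({label})")              # project parent after its left corner
    for child in children[1:]:
        actions += left_corner_actions(child)
    actions.append("REDUCE")
    return actions

# Toy head-final tree: (S (NP taro-ga) (VP (NP hon-o) yonda))
tree = ("S", [("NP", ["taro-ga"]), ("VP", [("NP", ["hon-o"]), "yonda"])])
print(top_down_actions(tree))
print(left_corner_actions(tree))
```

The top-down sequence begins NT(S), NT(NP), GEN(taro-ga), whereas the left-corner sequence begins GEN(taro-ga), NT(NP): the left-corner parser postpones structural commitments until lexical evidence arrives, which is the property the psycholinguistics literature cites in its favor for head-final languages.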

2.
Front Psychol ; 11: 513740, 2020.
Article in English | MEDLINE | ID: mdl-33281652

ABSTRACT

One of the central debates in the cognitive science of language has revolved around the nature of human linguistic competence. Whether syntactic competence should be characterized by abstract hierarchical structures or reduced to surface linear strings has been actively debated, but the nature of morphological competence has been insufficiently appreciated despite the parallel question in the cognitive science literature. In this paper, in order to investigate whether morphological competence should be characterized by abstract hierarchical structures, we conducted a crowdsourced acceptability judgment experiment on morphologically complex words and evaluated five computational models of morphological competence against human acceptability judgments: Character Markov Models (Character), Syllable Markov Models (Syllable), Morpheme Markov Models (Morpheme), Hidden Markov Models (HMM), and Probabilistic Context-Free Grammars (PCFG). Our psycholinguistic experimentation and computational modeling demonstrated that "morphous" computational models with morpheme units outperformed "amorphous" computational models without morpheme units and, importantly, PCFG with hierarchical structures most accurately explained human acceptability judgments on several evaluation metrics, especially for morphologically complex words with nested morphological structures. Those results strongly suggest that human morphological competence should be characterized by abstract hierarchical structures internally generated by the grammar, not reduced to surface linear strings externally attested in large corpora.
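As a rough sketch of the "morphous" sequential baselines described above (not the authors' actual models or data), a morpheme-level bigram Markov model can be trained on segmented words and used to score candidate words, with higher log-probability standing in for higher predicted acceptability. The training words and vocabulary size below are invented for illustration, and add-one smoothing is one simple choice among many.

```python
import math
from collections import defaultdict

def train_bigram(segmented_words):
    """Count morpheme bigrams, with <s>/</s> as word boundary symbols."""
    counts = defaultdict(lambda: defaultdict(int))
    for morphemes in segmented_words:
        seq = ["<s>"] + morphemes + ["</s>"]
        for prev, nxt in zip(seq, seq[1:]):
            counts[prev][nxt] += 1
    return counts

def word_logprob(counts, morphemes, vocab_size):
    """Score one segmented word under the bigram model (add-one smoothing)."""
    seq = ["<s>"] + morphemes + ["</s>"]
    logprob = 0.0
    for prev, nxt in zip(seq, seq[1:]):
        total = sum(counts[prev].values())
        logprob += math.log((counts[prev][nxt] + 1) / (total + vocab_size))
    return logprob

# Toy segmented training data (hypothetical).
model = train_bigram([["un", "lock", "able"], ["un", "do"], ["lock", "er"]])

# A morphotactically attested order scores higher than a reversed one.
print(word_logprob(model, ["un", "lock"], vocab_size=10))
print(word_logprob(model, ["lock", "un"], vocab_size=10))
```

A PCFG model of the kind the paper favors would replace the flat bigram chain with probabilistic rewrite rules over nested morphological constituents, which is exactly what lets it capture hierarchically structured complex words that linear n-gram models cannot.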

3.
Cortex ; 106: 213-236, 2018 09.
Article in English | MEDLINE | ID: mdl-30007863

ABSTRACT

A central part of knowing a language is the ability to combine basic linguistic units to form complex representations. While our neurobiological understanding of how words combine into larger structures has significantly advanced in recent years, the combinatory operations that build words themselves remain unknown. Are complex words such as tombstone and starlet built with the same mechanisms that construct phrases from words, such as grey stone or bright star? Here we addressed this with two magnetoencephalography (MEG) experiments, which simultaneously varied demands associated with phrasal composition, and the processing of morphological complexity in compound and suffixed nouns. Replicating previous findings, we show that portions of the left anterior temporal lobe (LATL) are engaged in the combination of modifiers and monomorphemic nouns in phrases (e.g., brown rabbit). As regards compounding, we show that semantically transparent compounds (e.g., tombstone) also engage left anterior temporal cortex, though the spatiotemporal details of this effect differed from phrasal composition. Further, when a phrase was constructed from a modifier and a transparent compound (e.g., granite tombstone), the typical LATL phrasal composition response appeared at a delayed latency, which follows if an initial within-word operation (tomb + stone) must take place before the combination of the compound with the preceding modifier (granite + tombstone). In contrast to compounding, suffixation (i.e., star + let) did not engage the LATL in any consistent way, suggesting a distinct processing route. Finally, our results suggest an intriguing generalization that morpho-orthographic complexity that does not recruit the LATL may block the engagement of the LATL in subsequent phrase building. In sum, our findings offer a detailed spatiotemporal characterization of the lowest level combinatory operations that ultimately feed the composition of full sentences.


Subject(s)
Comprehension/physiology, Language, Semantics, Temporal Lobe/physiology, Brain Mapping, Female, Humans, Magnetoencephalography/methods, Male, Reading, Temporal Lobe/pathology