Results 1 - 20 of 250
1.
Proc Natl Acad Sci U S A ; 121(30): e2315438121, 2024 Jul 23.
Article in English | MEDLINE | ID: mdl-39028693

ABSTRACT

There is evidence from both behavior and brain activity that the way information is structured, through the use of focus, can upregulate processing of focused constituents, likely to give prominence to the relevant aspects of the input. This is hypothesized to be universal, regardless of the different ways in which languages encode focus. To test this universalist hypothesis, we need to go beyond the more familiar linguistic strategies for marking focus, such as intonation or specific syntactic structures (e.g., it-clefts). Therefore, in this study, we examine Makhuwa-Enahara, a Bantu language spoken in northern Mozambique, which uniquely marks focus through verbal conjugation. Participants were presented with sentences containing a constituent that was either semantically anomalous or semantically nonanomalous; moreover, focus on this particular constituent could be either present or absent. We observed a consistent pattern: focused information generated a more negative N400 response than the same information in nonfocus position. This demonstrates that regardless of how focus is marked, its consequence seems to be an upregulation of the processing of information that is in focus.


Subject(s)
Language , Humans , Female , Male , Adult , Mozambique , Electroencephalography , Semantics , Brain/physiology , Young Adult , Linguistics , Evoked Potentials/physiology
2.
Neurobiol Lang (Camb) ; 5(1): 225-247, 2024.
Article in English | MEDLINE | ID: mdl-38645618

ABSTRACT

The language faculty is physically realized in the neurobiological infrastructure of the human brain. Despite significant efforts, an integrated understanding of this system remains a formidable challenge. What is missing from most theoretical accounts is a specification of the neural mechanisms that implement language function. Computational models that have been put forward generally lack an explicit neurobiological foundation. We propose a neurobiologically informed causal modeling approach that offers a framework for bridging this gap. A neurobiological causal model is a mechanistic description of language processing that is grounded in, and constrained by, the characteristics of the neurobiological substrate. It aims to model the generators of language behavior at the level of implementational causality. We describe key features and neurobiological component parts from which causal models can be built and provide guidelines on how to implement them in model simulations. We then outline how this approach can shed new light on the core computational machinery for language, the long-term storage of words in the mental lexicon, and combinatorial processing in sentence comprehension. In contrast to cognitive theories of behavior, causal models are formulated in the "machine language" of neurobiology, which is universal to human cognition. We argue that neurobiological causal modeling should be pursued in addition to existing approaches. Eventually, this approach will allow us to develop an explicit computational neurobiology of language.

3.
Brain Struct Funct ; 2024 Apr 16.
Article in English | MEDLINE | ID: mdl-38625556

ABSTRACT

BACKGROUND: Despite a pervasive cortico-centric view in cognitive neuroscience, subcortical structures, including the thalamus, have increasingly been shown to be involved in higher cognitive functions. Previous structural and functional imaging studies demonstrated cortico-thalamo-cortical loops that may support various cognitive functions, including language. However, large-scale functional connectivity of the thalamus during language tasks has not been examined before. METHODS: The present study employed meta-analytic connectivity modeling to identify language-related coactivation patterns of the left and right thalami. The left and right thalami were used as regions of interest to search the BrainMap functional database for neuroimaging experiments with healthy participants reporting language-related activations in each region of interest. Activation likelihood estimation analyses were then carried out on the foci extracted from the identified studies to estimate functional convergence for each thalamus. A functional decoding analysis based on the same database was conducted to characterize thalamic contributions to different language functions. RESULTS: The results revealed bilateral frontotemporal and bilateral subcortical (basal ganglia) coactivation patterns for both the left and right thalami, and also right cerebellar coactivations for the left thalamus, during language processing. In light of previous empirical studies and theoretical frameworks, the present connectivity and functional decoding findings suggest that cortico-subcortical-cerebellar-cortical loops modulate and fine-tune information transfer within the bilateral frontotemporal cortices during language processing, especially during production and semantic operations, but also during other language operations (e.g., syntax, phonology) and cognitive operations (e.g., attention, cognitive control). CONCLUSION: The current findings show that the language-relevant network extends beyond the classical left perisylvian cortices and spans bilateral cortical, bilateral subcortical (bilateral thalamus, bilateral basal ganglia) and right cerebellar regions.

4.
Proc Natl Acad Sci U S A ; 121(11): e2310766121, 2024 Mar 12.
Article in English | MEDLINE | ID: mdl-38442171

ABSTRACT

The neural correlates of sentence production are typically studied using task paradigms that differ considerably from the experience of speaking outside of an experimental setting. In this fMRI study, we aimed to gain a better understanding of syntactic processing in spontaneous production versus naturalistic comprehension in three regions of interest (BA44, BA45, and left posterior middle temporal gyrus). A group of participants (n = 16) was asked to speak about the events of an episode of a TV series in the scanner. Another group of participants (n = 36) listened to the spoken recall of a participant from the first group. To model syntactic processing, we extracted word-by-word metrics of phrase-structure building with a top-down and a bottom-up parser that make different hypotheses about the timing of structure building. While the top-down parser anticipates syntactic structure, sometimes before it is obvious to the listener, the bottom-up parser builds syntactic structure in an integratory way after all of the evidence has been presented. In comprehension, neural activity was found to be better modeled by the bottom-up parser, while in production, it was better modeled by the top-down parser. We additionally modeled structure building in production with two strategies that were developed here to make different predictions about the incrementality of structure building during speaking. We found evidence for highly incremental and anticipatory structure building in production, which was confirmed by a converging analysis of the pausing patterns in speech. Overall, this study shows the feasibility of studying the neural dynamics of spontaneous language production.
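To make the contrast between the two parsing strategies concrete, here is a minimal sketch (not the study's actual implementation, which may count nodes differently) of how word-by-word structure-building metrics can be read off a bracketed parse: the top-down count tallies nodes opened at each word, and the bottom-up count tallies nodes closed after it.

```python
import re

def word_by_word_counts(bracketed):
    """Count phrase-structure nodes opened before each word (top-down)
    and closed after each word (bottom-up) in a bracketed parse."""
    # Tokens are labeled opening brackets, closing brackets, or words.
    tokens = re.findall(r"\(\w+|\)|[^\s()]+", bracketed)
    counts, opened = [], 0
    for tok in tokens:
        if tok.startswith("("):
            opened += 1                      # node predicted top-down
        elif tok == ")":
            if counts:
                counts[-1][2] += 1           # node completed bottom-up
        else:
            counts.append([tok, opened, 0])  # word with its top-down count
            opened = 0
    return [tuple(entry) for entry in counts]

# Example: per-word metrics for a toy parse.
for word, topdown, bottomup in word_by_word_counts("(S (NP the dog) (VP barked))"):
    print(word, topdown, bottomup)
```

On this toy tree, "the" carries the top-down load (S and NP are opened before it), while "barked" carries the bottom-up load (VP and S close after it), illustrating why the two metrics make different timing predictions.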


Subject(s)
Benchmarking , Mental Recall , Humans , Language , Software , Speech
5.
Top Cogn Sci ; 2024 Mar 17.
Article in English | MEDLINE | ID: mdl-38493475

ABSTRACT

Language is inherently multimodal. In spoken languages, combined spoken and visual signals (e.g., co-speech gestures) are an integral part of linguistic structure and language representation. This requires an extension of the parallel architecture, which needs to include the visual signals concomitant to speech. We present the evidence for the multimodality of language. In addition, we propose that distributional semantics might provide a format for integrating speech and co-speech gestures in a common semantic representation.
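As a toy illustration of the proposed format (purely hypothetical vectors, not the authors' model), a spoken word and a co-speech gesture could each be mapped into a shared distributional-semantic space and integrated by vector combination:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical embeddings: in a distributional-semantic space, a spoken
# word and a co-speech gesture would each be vectors learned from
# co-occurrence statistics; here they are random stand-ins.
speech_vec = rng.normal(size=300)    # e.g., embedding of the verb "drink"
gesture_vec = rng.normal(size=300)   # e.g., embedding of a drinking gesture

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# One simple integration scheme: the multimodal meaning as a weighted sum.
integrated = 0.5 * speech_vec + 0.5 * gesture_vec
print(cosine(integrated, speech_vec), cosine(integrated, gesture_vec))
```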

6.
J Neurosci ; 44(10)2024 Mar 06.
Article in English | MEDLINE | ID: mdl-38199864

ABSTRACT

During communication in real-life settings, our brain often needs to integrate auditory and visual information and at the same time actively focus on the relevant sources of information, while ignoring interference from irrelevant events. The interaction between integration and attention processes remains poorly understood. Here, we use rapid invisible frequency tagging and magnetoencephalography to investigate how attention affects auditory and visual information processing and integration, during multimodal communication. We presented human participants (male and female) with videos of an actress uttering action verbs (auditory; tagged at 58 Hz) accompanied by two movie clips of hand gestures on both sides of fixation (attended stimulus tagged at 65 Hz; unattended stimulus tagged at 63 Hz). Integration difficulty was manipulated by a lower-order auditory factor (clear/degraded speech) and a higher-order visual semantic factor (matching/mismatching gesture). We observed an enhanced neural response to the attended visual information during degraded speech compared to clear speech. For the unattended information, the neural response to mismatching gestures was enhanced compared to matching gestures. Furthermore, signal power at the intermodulation frequencies of the frequency tags, indexing nonlinear signal interactions, was enhanced in the left frontotemporal and frontal regions. Focusing on the left inferior frontal gyrus, this enhancement was specific for the attended information, for those trials that benefitted from integration with a matching gesture. Together, our results suggest that attention modulates audiovisual processing and interaction, depending on the congruence and quality of the sensory input.
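The logic of intermodulation frequencies can be illustrated with a small simulation (a sketch, not the study's analysis pipeline): a nonlinear interaction between signals tagged at 58 Hz and 65 Hz produces spectral energy at their difference and sum frequencies.

```python
import numpy as np

fs, dur = 1000, 10.0                    # sampling rate (Hz), duration (s)
t = np.arange(0, dur, 1 / fs)
f_aud, f_vis = 58.0, 65.0               # tagging frequencies from the study

aud = np.sin(2 * np.pi * f_aud * t)     # auditory signal tagged at 58 Hz
vis = np.sin(2 * np.pi * f_vis * t)     # attended visual signal at 65 Hz

# A nonlinear (multiplicative) interaction between the two inputs creates
# energy at the intermodulation frequencies f_vis - f_aud and f_vis + f_aud.
response = aud + vis + 0.3 * aud * vis

spectrum = np.abs(np.fft.rfft(response)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)
for f in (f_aud, f_vis, f_vis - f_aud, f_vis + f_aud):
    idx = np.argmin(np.abs(freqs - f))
    print(f"{f:6.1f} Hz: power {spectrum[idx]:.3f}")
```

Peaks at 7 Hz and 123 Hz appear only because of the multiplicative term, which is why power at these frequencies can index signal integration rather than mere co-occurrence.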


Subject(s)
Brain , Speech Perception , Humans , Male , Female , Brain/physiology , Visual Perception/physiology , Magnetoencephalography , Speech/physiology , Attention/physiology , Speech Perception/physiology , Acoustic Stimulation , Photic Stimulation
7.
Behav Res Methods ; 56(3): 2675-2691, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37382814

ABSTRACT

When perceiving the world around us, we are constantly integrating pieces of information. The integrated experience consists of more than just the sum of its parts. For example, visual scenes are defined by a collection of objects as well as the spatial relations amongst them, and sentence meaning is computed based on individual word semantics as well as syntactic configuration. Having quantitative models of such integrated representations can help evaluate cognitive models of both language and scene perception. Here, we focus on language and use a behavioral measure of perceived similarity as an approximation of integrated meaning representations. We collected similarity judgments from 200 subjects rating nouns or transitive sentences in an online multiple arrangement task. We find that perceived similarity between sentences is most strongly modulated by the semantic action category of the main verb. In addition, we show how non-negative matrix factorization of similarity judgment data can reveal multiple underlying dimensions reflecting both semantic and relational role information. Finally, we provide an example of how similarity judgments on sentence stimuli can serve as a point of comparison for artificial neural network models (ANNs) by comparing our behavioral data against sentence similarity extracted from three state-of-the-art ANNs. Overall, our method, combining the multiple arrangement task on sentence stimuli with matrix factorization, can capture relational information emerging from the integration of multiple words in a sentence, even in the presence of a strong focus on the verb.
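For illustration, here is a minimal sketch of the factorization step on simulated data (the study used ratings from the multiple arrangement task, and its exact factorization variant may differ from this generic NMF):

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)

# Stand-in data: pairwise similarity (0-1) among 20 sentence stimuli,
# averaged over raters; in the study these came from the multiple
# arrangement task rather than being simulated.
latent = rng.random((20, 3))                 # 3 hidden stimulus dimensions
similarity = latent @ latent.T
similarity /= similarity.max()               # keep entries nonnegative, <= 1

# Factorize: similarity ~ W @ H, with W giving each stimulus a loading on a
# small number of potentially interpretable dimensions (e.g., verb action
# category, relational roles).
model = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(similarity)
print(W.shape, model.reconstruction_err_)
```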


Subject(s)
Language , Semantics , Humans , Judgment
8.
J Cogn Neurosci ; 36(2): 225-238, 2024 Feb 01.
Article in English | MEDLINE | ID: mdl-37944125

ABSTRACT

Words are not processed in isolation; instead, they are commonly embedded in phrases and sentences. The sentential context influences the perception and processing of a word. However, how this is achieved by brain processes, and whether predictive mechanisms underlie it, remains a debated topic. Here, we employed an experimental paradigm in which we orthogonalized sentence context constraints and predictive validity, defined as the ratio of congruent to incongruent sentence endings within the experiment. While recording electroencephalography, participants read sentences with three levels of sentential context constraint (high, medium, and low). Participants were also separated into two groups that differed in their ratio of valid congruent to incongruent target words that could be predicted from the sentential context. For both groups, we investigated modulations of alpha power before, and N400 amplitude modulations after, target word onset. The results reveal that the N400 amplitude gradually decreased with higher context constraints and cloze probability. In contrast, alpha power was not significantly affected by context constraint. Neither the N400 nor alpha power was significantly affected by changes in predictive validity.


Subject(s)
Electroencephalography , Evoked Potentials , Humans , Electroencephalography/methods , Language , Reading , Semantics
9.
Front Robot AI ; 10: 1271610, 2023.
Article in English | MEDLINE | ID: mdl-38106543

ABSTRACT

Affective behaviors enable social robots not only to establish better connections with humans but also to express their internal states. It is well established that emotions are important for signaling understanding in Human-Robot Interaction (HRI). This work harnesses the power of Large Language Models (LLMs) and proposes an approach to control the affective behavior of robots. By interpreting emotion appraisal as an Emotion Recognition in Conversation (ERC) task, we used GPT-3.5 to predict the emotion of a robot's turn in real time, using the dialogue history of the ongoing conversation. The robot signaled the predicted emotion using facial expressions. The model was evaluated in a within-subjects user study (N = 47) in which model-driven emotion generation was compared against conditions where the robot did not display any emotions and where it displayed incongruent emotions. The participants interacted with the robot by playing a card sorting game specifically designed to evoke emotions. The results indicated that the emotions were reliably generated by the LLM and that participants were able to perceive the robot's emotions. The robot expressing congruent, model-driven facial emotion expressions was perceived as significantly more human-like and emotionally appropriate, and elicited a more positive impression. Participants also scored significantly better in the card sorting game when the robot displayed congruent facial expressions. From a technical perspective, the study shows that LLMs can be used to control the affective behavior of robots reliably in real time. Additionally, our results could inform the design of novel human-robot interactions, making robots more effective in roles where emotional interaction is important, such as therapy, companionship, or customer service.
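A minimal sketch of the ERC framing, assuming the OpenAI chat-completions API and a hypothetical emotion label set (the study's actual prompt, labels, and integration with the robot are not specified here):

```python
from openai import OpenAI  # assumes the OpenAI Python SDK and an API key

client = OpenAI()

# Hypothetical label set; the study's categories may differ.
EMOTIONS = ["joy", "sadness", "anger", "surprise", "fear", "neutral"]

def predict_robot_emotion(dialogue_history: list[str]) -> str:
    """Frame emotion appraisal as Emotion Recognition in Conversation:
    ask the model which emotion fits the robot's upcoming turn."""
    prompt = (
        "Given this human-robot dialogue, answer with one word, the emotion "
        f"({', '.join(EMOTIONS)}) the robot should express next:\n"
        + "\n".join(dialogue_history)
    )
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    label = reply.choices[0].message.content.strip().lower()
    return label if label in EMOTIONS else "neutral"  # fall back safely

# The predicted label would then drive the robot's facial expression.
print(predict_robot_emotion(["Human: I just lost the game again!",
                             "Robot: Oh no, let's try once more."]))
```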

10.
Neuropsychologia ; 191: 108730, 2023 Dec 15.
Article in English | MEDLINE | ID: mdl-37939871

ABSTRACT

EEG and eye-tracking provide complementary information when investigating language comprehension. Evidence that speech processing may be facilitated by speech prediction comes from the observation that a listener's eye gaze moves towards a referent before it is mentioned if the remainder of the spoken sentence is predictable. However, changes to the trajectory of anticipatory fixations could result from a change in prediction or from an attention shift. In contrast, N400 amplitudes and concurrent spectral power provide information about the ease of word processing at the moment the word is perceived. In a proof-of-principle investigation, we combined EEG and eye-tracking to study linguistic prediction in naturalistic, virtual environments. We observed increased processing, reflected in theta band power, either during verb processing, when the verb was predictive of the noun, or during noun processing, when the verb was not predictive of the noun. Alpha power was higher in response to the predictive verb and to unpredictable nouns. We replicated typical effects of noun congruence, but not predictability, on the N400 in response to the noun. Thus, relative to previous reports, the rich visual context that accompanied speech in virtual reality influenced language processing; this visual context may have facilitated the processing of unpredictable nouns. Finally, anticipatory fixations were predictive of spectral power during noun processing, and the length of time spent fixating the target could be predicted from spectral power at verb onset, conditional on the object having been fixated. Overall, we show that combining EEG and eye-tracking provides a promising new method for answering novel research questions about the prediction of upcoming linguistic input, for example, regarding the role of extralinguistic cues in prediction during language comprehension.


Subject(s)
Electroencephalography , Speech Perception , Humans , Male , Female , Speech , Eye-Tracking Technology , Comprehension/physiology , Evoked Potentials , Speech Perception/physiology
11.
Open Mind (Camb) ; 7: 757-783, 2023.
Article in English | MEDLINE | ID: mdl-37840763

ABSTRACT

In a typical text, readers look much longer at some words than at others, even skipping many altogether. Historically, researchers explained this variation via low-level visual or oculomotor factors, but today it is primarily explained via factors determining a word's lexical processing ease, such as how well word identity can be predicted from context or discerned from parafoveal preview. While the existence of these effects is well established in controlled experiments, the relative importance of prediction, preview, and low-level factors in natural reading remains unclear. Here, we address this question in three large naturalistic reading corpora (n = 104, 1.5 million words), using deep neural networks and Bayesian ideal observers to model linguistic prediction and parafoveal preview from moment to moment in natural reading. Strikingly, neither prediction nor preview was important for explaining word skipping: the vast majority of explained variation was accounted for by a simple oculomotor model using just fixation position and word length. For reading times, by contrast, we found strong but independent contributions of prediction and preview, with effect sizes matching those from controlled experiments. Together, these results challenge dominant models of eye movements in reading and instead support alternative models that describe skipping (but not reading times) as largely autonomous from word identification and mostly determined by low-level oculomotor information.
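The flavor of such a low-level skipping model can be conveyed with a small sketch on simulated data (not the study's corpora or model): a logistic model predicting skipping from word length and launch distance alone, with no role for word identity.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 5000

# Stand-in predictors: word length (characters) and the distance from the
# current fixation position to the word (launch distance, in characters).
word_len = rng.integers(1, 12, size=n)
launch_dist = rng.uniform(1, 15, size=n)

# Simulate the oculomotor regularity the paper reports: short words close
# to fixation are skipped most often.
logit = 2.5 - 0.45 * word_len - 0.15 * launch_dist
skipped = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([word_len, launch_dist])
model = LogisticRegression().fit(X, skipped)
print(model.coef_, model.score(X, skipped))
```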

12.
Neuroimage ; 272: 120040, 2023 05 15.
Article in English | MEDLINE | ID: mdl-36935084

ABSTRACT

During listening, brain activity tracks the rhythmic structure of speech signals. Here, we directly dissociated the contribution of speech acoustic cues to neural envelope tracking from that related to linguistic processing. We examined the neural changes associated with the comprehension of noise-vocoded (NV) speech using magnetoencephalography (MEG). Participants listened to NV sentences in a three-phase training paradigm: (1) pre-training, where NV stimuli were barely comprehended; (2) training, with exposure to the original clear version of each speech stimulus; and (3) post-training, where the same stimuli gained intelligibility from the training phase. Using this paradigm, we tested whether the neural response to a speech signal was modulated by its intelligibility without any change in its acoustic structure. To test the influence of spectral degradation on neural envelope tracking independently of training, participants listened to two types of NV sentences (4-band and 2-band NV speech) but were only trained to understand 4-band NV speech. Significant changes in neural tracking were observed in the delta range in relation to the acoustic degradation of speech. However, we failed to find a direct effect of intelligibility on the neural tracking of the speech envelope in both the theta and delta ranges, in both auditory region-of-interest and whole-brain sensor-space analyses. This suggests that acoustics greatly influence the neural tracking response to the speech envelope, and that caution is needed when choosing control signals for speech-brain tracking analyses, considering that a slight change in acoustic parameters can have strong effects on the neural tracking response.
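A minimal sketch of an envelope-tracking analysis on simulated signals (the study's MEG pipeline is more elaborate): extract the speech envelope, band-pass it to the delta range, and correlate it with the neural signal.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 200                                    # analysis sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(3)

# Stand-in speech: noise carrier modulated by a slow (delta-range) envelope.
true_env = 1 + np.sin(2 * np.pi * 2.0 * t)  # 2 Hz modulation
speech = true_env * rng.normal(size=t.size)

# Recover the broadband envelope and restrict it to the delta band (1-4 Hz).
env = np.abs(hilbert(speech))
b, a = butter(3, [1.0, 4.0], btype="bandpass", fs=fs)
delta_env = filtfilt(b, a, env)

# Stand-in "MEG" signal that tracks the envelope with added noise; tracking
# is quantified here as a simple lag-zero correlation.
meg = delta_env + rng.normal(scale=2.0, size=t.size)
r = np.corrcoef(delta_env, meg)[0, 1]
print(f"delta-band tracking r = {r:.2f}")
```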


Subject(s)
Speech Perception , Speech , Humans , Speech/physiology , Acoustic Stimulation , Speech Perception/physiology , Magnetoencephalography , Noise , Speech Intelligibility
13.
Cognition ; 230: 105252, 2023 01.
Article in English | MEDLINE | ID: mdl-36115201

ABSTRACT

According to the language marker hypothesis, language has provided Homo sapiens with a rich symbolic system that plays a central role in interpreting signals delivered by our sensory apparatus, in shaping action goals, and in creating a powerful tool for reasoning and inference. This view provides an important correction to embodied accounts of language that reduce language to action, perception, emotion, and mental simulation. The presence of a language system, however, also has important consequences for perception, action, emotion, and memory. Language stamps signals from perceptual, action, and emotional systems with rich cognitive markers that transform the role of these signals in the overall cognitive architecture of the human mind. This view does not deny that language is implemented by means of universal principles of neural organization. However, language creates the possibility of generating rich internal models of the world that are shaped and made accessible by the characteristics of a language system. This makes us less dependent on direct action-perception couplings and might sometimes even come at the expense of the veridicality of perception. In cognitive (neuro)science, the pendulum has swung from language as the key to understanding the organization of the human mind to the perspective that it is a byproduct of perception and action. It is time for it to partly swing back again.


Subject(s)
Emotions , Language , Humans , Computer Simulation
14.
J Physiol ; 601(15): 3265-3295, 2023 08.
Article in English | MEDLINE | ID: mdl-36168736

ABSTRACT

Neuron models with explicit dendritic dynamics have shed light on mechanisms for coincidence detection, pathway selection and temporal filtering. However, it is still unclear which morphological and physiological features are required to capture these phenomena. In this work, we introduce the Tripod neuron model and propose a minimal structural reduction of the dendritic tree that is able to reproduce these computations. The Tripod is a three-compartment model consisting of two segregated passive dendrites and a somatic compartment modelled as an adaptive, exponential integrate-and-fire neuron. It incorporates dendritic geometry, membrane physiology and receptor dynamics as measured in human pyramidal cells. We characterize the response of the Tripod to glutamatergic and GABAergic inputs and identify parameters that support supra-linear integration, coincidence-detection and pathway-specific gating through shunting inhibition. Following NMDA spikes, the Tripod neuron generates plateau potentials whose duration depends on the dendritic length and the strength of synaptic input. When fitted with distal compartments, the Tripod encodes previous activity into a dendritic depolarized state. This dendritic memory allows the neuron to perform temporal binding, and we show that it solves transition and sequence detection tasks on which a single-compartment model fails. Thus, the Tripod can account for dendritic computations previously explained only with more detailed neuron models or neural networks. Due to its simplicity, the Tripod neuron can be used efficiently in simulations of larger cortical circuits. KEY POINTS: We present a neuron model, called the Tripod, with two segregated dendritic branches that are connected to an axosomatic compartment. Each branch implements inhibitory GABAergic and excitatory glutamatergic synaptic transmission, including voltage-gated NMDA receptors. Dendrites are modelled on relevant geometric and physiological parameters measured in human pyramidal cells. The neuron reproduces classical dendritic computations, such as coincidence detection and pathway selection via shunting inhibition, that are beyond the scope of point-neuron models. Under some conditions, dendritic NMDA spikes cause plateau potentials, and we show that they provide a form of short-term memory which is useful for sequence recognition. The dendritic structure of the Tripod neuron is sufficiently simple to be integrated into efficient network simulations and studied in a broad functional context.
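The dynamics of the somatic compartment can be sketched with the standard adaptive exponential integrate-and-fire (AdEx) equations; the parameters below are the common Brette-Gerstner reference values, not necessarily those used for the Tripod, and the two dendritic compartments are omitted.

```python
import numpy as np

# AdEx soma: C dV/dt = -gL(V-EL) + gL*DT*exp((V-VT)/DT) - w + I,
#            tau_w dw/dt = a(V-EL) - w; on spike: V -> Vr, w -> w + b.
C, gL, EL = 281.0, 30.0, -70.6          # pF, nS, mV
VT, DT, Vr = -50.4, 2.0, -70.6          # threshold, slope factor, reset (mV)
tau_w, a, b = 144.0, 4.0, 80.5          # adaptation: ms, nS, pA

dt, T = 0.1, 500.0                      # time step and duration (ms)
V, w, I = EL, 0.0, 800.0                # constant input current (pA)
spikes = []

for step in range(int(T / dt)):
    dV = (-gL * (V - EL) + gL * DT * np.exp((V - VT) / DT) - w + I) / C
    dw = (a * (V - EL) - w) / tau_w
    V += dt * dV
    w += dt * dw
    if V > 0.0:                         # spike detected
        spikes.append(step * dt)
        V, w = Vr, w + b                # reset and spike-triggered adaptation

print(f"{len(spikes)} spikes, first at {spikes[0]:.1f} ms")
```

The spike-triggered increment b and the subthreshold coupling a produce the spike-frequency adaptation visible as lengthening inter-spike intervals; the Tripod adds two passive dendritic compartments with NMDA receptor dynamics on top of this soma.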


Subject(s)
Dendrites , N-Methylaspartate , Humans , Dendrites/physiology , Synapses/physiology , Models, Neurological , Neurons/physiology , Pyramidal Cells/physiology , Action Potentials/physiology
15.
Curr Res Neurobiol ; 3: 100043, 2022.
Article in English | MEDLINE | ID: mdl-36518343

ABSTRACT

Listening to speech is difficult in noisy environments, and it is even harder when the interfering noise consists of intelligible speech as compared to unintelligible sounds. This suggests that competing linguistic information interferes with the neural processing of target speech. Interference could arise either from a degradation of the neural representation of the target speech or from an increased representation of distracting speech that enters into competition with the target speech. We tested these alternative hypotheses using magnetoencephalography (MEG) while participants listened to clear target speech in the presence of distracting noise-vocoded speech. Crucially, the distractors were initially unintelligible but became more intelligible after a short training session. Results showed that comprehension of the target speech was poorer after training than before training. The neural tracking of target speech in the delta range (1-4 Hz) was reduced in strength in the presence of a more intelligible distractor. In contrast, the neural tracking of distracting signals was not significantly modulated by intelligibility. These results suggest that the presence of distracting speech signals degrades the linguistic representation of target speech carried by delta oscillations.

16.
Ann N Y Acad Sci ; 1517(1): 125-142, 2022 11.
Article in English | MEDLINE | ID: mdl-36069117

ABSTRACT

Vocal learning, the ability to produce modified vocalizations via learning from acoustic signals, is a key trait in the evolution of speech. While extensively studied in songbirds, mammalian models for vocal learning are rare. Bats present a promising study system given their gregarious nature, small size, and the ability of some species to be maintained in captive colonies. We utilize the pale spear-nosed bat (Phyllostomus discolor) and report advances in establishing this species as a tractable model for understanding vocal learning. We have taken an interdisciplinary approach, aiming to provide an integrated understanding across genomics (Part I), neurobiology (Part II), and transgenics (Part III). In Part I, we generated new, high-quality genome annotations of coding genes and noncoding microRNAs to facilitate functional and evolutionary studies. In Part II, we traced connections between auditory-related brain regions, report neuroimaging that explores the structure of the brain, and describe gene expression patterns that highlight brain regions. In Part III, we created the first successful transgenic bats by manipulating the expression of FoxP2, a speech-related gene. These interdisciplinary approaches are facilitating a mechanistic and evolutionary understanding of mammalian vocal learning and can also contribute to other areas of investigation that utilize P. discolor or bats as study species.


Subject(s)
Chiroptera , Animals , Chiroptera/genetics , Vocalization, Animal , Acoustics , Speech , Brain
17.
Proc Natl Acad Sci U S A ; 119(32): e2201968119, 2022 08 09.
Article in English | MEDLINE | ID: mdl-35921434

ABSTRACT

Understanding spoken language requires transforming ambiguous acoustic streams into a hierarchy of representations, from phonemes to meaning. It has been suggested that the brain uses prediction to guide the interpretation of incoming input. However, the role of prediction in language processing remains disputed, with disagreement about both the ubiquity and representational nature of predictions. Here, we address both issues by analyzing brain recordings of participants listening to audiobooks, and using a deep neural network (GPT-2) to precisely quantify contextual predictions. First, we establish that brain responses to words are modulated by ubiquitous predictions. Next, we disentangle model-based predictions into distinct dimensions, revealing dissociable neural signatures of predictions about syntactic category (parts of speech), phonemes, and semantics. Finally, we show that high-level (word) predictions inform low-level (phoneme) predictions, supporting hierarchical predictive processing. Together, these results underscore the ubiquity of prediction in language processing, showing that the brain spontaneously predicts upcoming language at multiple levels of abstraction.
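A minimal sketch of how GPT-2 yields the kind of contextual prediction measure (surprisal) used here, assuming the Hugging Face transformers library; the study's exact model configuration and alignment of tokens to words may differ.

```python
import math

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

ids = tok("The children went outside to play", return_tensors="pt").input_ids

with torch.no_grad():
    # Log-probabilities over the vocabulary at every position.
    log_probs = torch.log_softmax(model(ids).logits, dim=-1)

for pos in range(1, ids.shape[1]):
    # Surprisal of token t is -log2 P(token t | preceding tokens).
    lp = log_probs[0, pos - 1, ids[0, pos]].item()
    print(f"{tok.decode(ids[0, pos].item()):>10}  {-lp / math.log(2):5.2f} bits")
```

Highly predictable continuations receive low surprisal and unexpected ones high surprisal, which is what makes such word-by-word values usable as regressors for brain responses.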


Subject(s)
Brain , Comprehension , Language , Speech Perception , Brain/physiology , Comprehension/physiology , Humans , Linguistics , Neural Networks, Computer , Semantics , Speech Perception/physiology
19.
J Neurosci ; 42(15): 3216-3227, 2022 04 13.
Article in English | MEDLINE | ID: mdl-35232761

ABSTRACT

The ability to comprehend phrases is an essential integrative property of the brain. Here, we evaluate the neural processes that enable the transition from single-word processing to a minimal compositional scheme. Previous research has reported conflicting timing effects of composition, and disagreement persists with respect to inferior frontal and posterior temporal contributions. To address these issues, 19 patients (10 male, 9 female) implanted with penetrating depth or surface subdural intracranial electrodes heard auditory recordings of adjective-noun, pseudoword-noun, and adjective-pseudoword phrases and judged whether the phrase matched a picture. Stimulus-dependent alterations in broadband gamma activity, low-frequency power, and phase-locking values across the language-dominant left hemisphere were derived. This revealed a mosaic located on the lower bank of the posterior superior temporal sulcus (pSTS), in which closely neighboring cortical sites displayed exclusive sensitivity to either lexicality or phrase structure, but not both. Distinct timings were found for effects of phrase composition (210-300 ms) and pseudoword processing (∼300-700 ms), and these were localized to neighboring electrodes in pSTS. The pars triangularis and temporal pole encoded anticipation of composition in broadband low frequencies, and both regions exhibited greater functional connectivity with pSTS during phrase composition. Our results suggest that the pSTS is a highly specialized region composed of sparsely interwoven heterogeneous constituents that encodes both lower- and higher-level linguistic features. This hub in pSTS for minimal phrase processing may form the neural basis for the human-specific computational capacity for forming hierarchically organized linguistic structures. SIGNIFICANCE STATEMENT: Linguists have claimed that the integration of multiple words into a phrase demands a computational procedure distinct from single-word processing. Here, we provide intracranial recordings from a large patient cohort, with high spatiotemporal resolution, to track the cortical dynamics of phrase composition. Epileptic patients volunteered to participate in a task in which they listened to phrases (e.g., red boat) and word-pseudoword or pseudoword-word pairs (e.g., red fulg). At the onset of the second word in phrases, greater broadband high gamma activity was found in the posterior superior temporal sulcus in electrodes that exclusively indexed phrasal meaning and not lexical meaning. These results provide direct, high-resolution signatures of minimal phrase composition in humans, a potentially species-specific computational capacity.


Subject(s)
Broca Area , Language , Brain , Brain Mapping , Female , Humans , Linguistics , Male , Semantics
20.
Neurobiol Lang (Camb) ; 3(1): 149-179, 2022.
Article in English | MEDLINE | ID: mdl-37215333

ABSTRACT

Typical adults read remarkably quickly. Such fast reading is facilitated by brain processes that are sensitive to both word frequency and contextual constraints. It is debated whether these attributes have additive or interactive effects on language processing in the brain. We investigated this issue by analysing existing magnetoencephalography data from 99 participants reading intact and scrambled sentences. Using a cross-validated model comparison scheme, we found that lexical frequency predicted the word-by-word elicited MEG signal in a widespread cortical network, irrespective of sentential context. In contrast, index (ordinal word position) was more strongly encoded in sentence words, in left frontotemporal areas. This confirms that frequency influences word processing independently of predictability, and that contextual constraints affect word-by-word brain responses. With a conservative multiple comparisons correction, only the interaction between lexical frequency and surprisal survived, in anterior temporal and frontal cortex; the interactions between lexical frequency and entropy and between lexical frequency and index did not. Interestingly, however, the uncorrected index × frequency interaction revealed an effect in left frontal and temporal cortex that reversed in time and space for intact compared to scrambled sentences. Finally, we provide evidence to suggest that, in sentences, lexical frequency and predictability may independently influence early (<150 ms) and late stages of word processing, but may also interact during late stages of word processing (>150-250 ms), thus helping to reconcile previously contradictory eye-tracking and electrophysiological findings. Current neurocognitive models of reading would benefit from accounting for these differing effects of lexical frequency and predictability on different stages of word processing.
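The logic of a cross-validated model comparison between additive and interactive accounts can be sketched on simulated word-level data (ridge regression as a stand-in for the study's encoding models; all regressors and the "MEG" response here are simulated):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_words = 2000

# Stand-in word-level regressors: log lexical frequency, surprisal, and
# ordinal position (index); the response is simulated with an interaction.
freq = rng.normal(size=n_words)
surprisal = rng.normal(size=n_words)
index = rng.normal(size=n_words)
meg = (0.6 * freq + 0.3 * surprisal + 0.2 * freq * surprisal
       + rng.normal(scale=1.0, size=n_words))

def cv_r2(X):
    """Cross-validated R2 of a ridge encoding model."""
    return cross_val_score(Ridge(alpha=1.0), X, meg, cv=5, scoring="r2").mean()

additive = np.column_stack([freq, surprisal, index])
interactive = np.column_stack([freq, surprisal, index, freq * surprisal])

# If the interaction model generalizes better to held-out words, frequency
# and predictability do not combine purely additively.
print(f"additive    R2 = {cv_r2(additive):.3f}")
print(f"interactive R2 = {cv_r2(interactive):.3f}")
```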
