Results 1 - 20 of 27
1.
PLoS One ; 18(11): e0287507, 2023.
Article in English | MEDLINE | ID: mdl-37976324

ABSTRACT

The social robots market will grow considerably in the coming years. What the arrival of this new kind of social agent means for society, however, is largely unknown. Existing cases of robot abuse point to the risks of introducing such artificial social agents (ASAs) without consideration of the consequences (risks for the robots and for the human witnesses to the abuse). We believe that humans react aggressively towards ASAs when they are enticed into establishing dominance hierarchies. This happens when there is a basis for skill comparison. We therefore presented pairs of robots in which we varied similarity and the degree to which their mechanisms/functions could be simulated with the human body (walking, jumping = simulatable; rolling, floating = non-simulatable). We asked which robot (i) more closely resembled a human and (ii) possessed more "essentialized human qualities" (e.g., creativity). To estimate social acceptability, participants also had to (iii) predict the outcome of a situation in which a robot approached a group of humans. For robots with simulatable functions, ratings of essentialized human qualities decreased as human resemblance decreased (jumper < walker). For robots with non-simulatable functions, the reverse relation was seen: the robot that least resembled humans (floater) scored highest in these qualities. Critically, the robots' acceptability followed ratings of essentialized human qualities. Humans respond socially to certain morphological (physical) and behavioral cues. Therefore, unless ASAs perfectly mimic humans, it is safer to provide them with mechanisms/functions that cannot be simulated with the human body.


Subject(s)
Robotics , Humans , Social Interaction , Social Dominance , Walking , Cues
2.
J Exp Psychol Gen ; 151(9): 2173-2194, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35157482

ABSTRACT

It is well established that the processing of hand-, mouth-, and foot-related action terms can activate areas of the motor cortex that are involved in the planning and execution of the described actions. In the present study, the sensitivity of these motor structures to language processes was exploited to test linguistic theories of information layering. Human languages possess a variety of linguistic devices, so-called presupposition triggers, that allow us to convey background information without asserting it. A statement such as "Marie stopped smoking" presupposes, without asserting it, that Marie used to smoke. How such presupposed information is represented in the brain is not yet understood. Using a grip-force sensor that allows us to capture motor brain activity during language processing, we investigated effects of information layering by comparing asserted information that is known to trigger motor activity ("In the living room, Peter irons his shirt") with information embedded under a presuppositional factive verb construction ("Louis knows that Peter irons his shirt"; Experiment 1) and a nonfactive verb construction ("Louis believes that Peter irons his shirt"; Experiment 2). Furthermore, we examined whether the projection behavior of a factive verb construction modulates grip force under negation ("Louis does not know that Peter irons his shirt"; Experiment 3). The data show that only the presupposed action verb in affirmative contexts (Experiment 1) triggers an increase in grip force comparable to that of asserted action verbs, whereas the nonfactive complement and projection structure show a weaker response (Experiments 2 and 3). While the first two experiments seem to confirm the sensitivity of the grip-force response to the construction of a plausible situation or event model, in which the motor action is represented as taking place, the third raises the question of how robust this hypothesis is and how it can take the specificity of projection into account. (PsycInfo Database Record (c) 2022 APA, all rights reserved).


Subject(s)
Brain , Language , Brain/physiology , Gemifloxacin , Hand , Hand Strength , Humans
3.
Brain Cogn ; 145: 105628, 2020 11.
Article in English | MEDLINE | ID: mdl-33007685

ABSTRACT

Our study was designed to test a recent proposal by Cayol and Nazir (2020), according to which language processing takes advantage of motor-system "emulators". An emulator is a brain mechanism that learns the causal relationship between an action and its sensory consequences. Emulators predict the outcome of a motor command in terms of its sensory reafference and serve to monitor ongoing movements. For the purpose of motor planning/learning, emulators can "run offline", decoupled from sensory input and motor output. Such offline simulations are equivalent to mental imagery (Grush, 2004). If language processing can profit from the associative-memory network of emulators, mental-imagery aptitude should predict language skills. However, this should hold only for language content that is imageable. We tested this assumption in typically developing adolescents using two motor-imagery paradigms: one that measured participants' error in estimating their own motor ability, and another that measured the time needed to perform a mental simulation. When the time to perform a mental simulation is taken as the measure, mental-imagery aptitude does indeed selectively predict word-definition performance for highly imageable words. These results offer an alternative perspective on the question of why language processes recruit modality-specific brain regions and support the often-hypothesized link between language and motor skills.


Subject(s)
Aptitude , Language , Memory , Adolescent , Brain , Humans , Imagination , Motor Skills
4.
J Cogn ; 3(1): 35, 2020 Sep 30.
Article in English | MEDLINE | ID: mdl-33043245

ABSTRACT

Whether language comprehension requires the participation of brain structures that evolved for perception and action has been a subject of intense debate. While brain-imaging evidence for the involvement of such modality-specific regions has grown, the fact that lesions to these structures do not necessarily erase word knowledge has invited the conclusion that language-induced activity in these structures might not be essential for word recognition. Why language processing recruits these structures remains unanswered, however. Here, we examine the original findings from a slightly different perspective. We first consider the 'original' function of structures in modality-specific brain regions that are recruited by language activity. We propose that these structures help elaborate 'internal forward models' in motor control (cf. emulators). Emulators are brain systems that capture the relationship between an action and its sensory consequences. During language processing, emulators could thus allow access to associative memories. We further postulate the existence of a linguistic system that exploits, in a rule-based manner, emulators and other nonlinguistic brain systems to gain complementary (and redundant) information during language processing. Emulators are therefore just one of several sources of information. We emphasize that whether a given word-form triggers activity in modality-specific brain regions depends on the linguistic context and not on the word-form as such. The role of modality-specific systems in language processing is thus not to help understand words but to model the verbally depicted situation by supplying memorized context information. We present a model derived from these assumptions and provide predictions and perspectives for future research.

5.
Behav Res Methods ; 49(1): 61-73, 2017 02.
Article in English | MEDLINE | ID: mdl-26705116

ABSTRACT

Research in cognitive neuroscience has shown that brain structures serving perceptual, emotional, and motor processes are also recruited during the understanding of language that refers to emotion, perception, and action. However, the exact linguistic and extralinguistic conditions under which such language-induced activity in modality-specific cortex is triggered are not yet well understood. The purpose of this study is to introduce a simple experimental technique that allows for the online measurement of language-induced activity in motor structures of the brain. The technique uses a grip force sensor that captures subtle grip force variations while participants listen to words and sentences. Since grip force reflects activity in motor brain structures, the continuous monitoring of force fluctuations provides a fine-grained estimation of motor activity over time. In other words, this method allows both localization of the source of language-induced activity to motor brain structures and high temporal resolution of the recorded data. To facilitate comparison with data to be collected with this tool, we present two experiments that describe in detail the technical setup, the nature of the recorded data, and the analyses that we applied (including justification of the data filtering and artifact rejection). We also discuss how the tool could be used in other domains of behavioral research.
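
To make the signal-processing steps described above concrete, the following Python sketch (using NumPy and SciPy) low-pass filters a continuous grip-force trace, cuts baseline-corrected epochs time-locked to word onsets, and rejects noisy epochs. It is a minimal illustration only: the sampling rate, cutoff frequency, epoch window and rejection threshold are assumed values, not the parameters of the published experiments.

# Minimal grip-force processing sketch (all parameters are illustrative).
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000            # assumed sampling rate of the force sensor, in Hz
LOWPASS_HZ = 15      # assumed low-pass cutoff to remove high-frequency noise
EPOCH = (-0.2, 1.0)  # assumed window around word onset, in seconds

def lowpass(force, fs=FS, cutoff=LOWPASS_HZ, order=4):
    # Zero-phase low-pass filtering of the continuous grip-force signal.
    b, a = butter(order, cutoff / (fs / 2), btype="lowpass")
    return filtfilt(b, a, force)

def make_epochs(force, onsets_s, fs=FS, window=EPOCH):
    # Cut baseline-corrected epochs time-locked to word onsets (in seconds).
    start, stop = int(window[0] * fs), int(window[1] * fs)
    segments = []
    for t in onsets_s:
        i = int(t * fs)
        seg = force[i + start:i + stop].astype(float)
        seg -= seg[:-start].mean()          # baseline: pre-onset samples
        segments.append(seg)
    return np.array(segments)

def reject_artifacts(epochs, max_range=200.0):
    # Drop epochs whose peak-to-peak amplitude exceeds an assumed threshold.
    return epochs[np.ptp(epochs, axis=1) < max_range]

# Example on simulated data: 60 s of noisy force around 2000 mN.
rng = np.random.default_rng(0)
raw = 2000 + rng.normal(0, 5, 60 * FS)
onsets = np.arange(2.0, 56.0, 3.0)          # one word every 3 s
clean = reject_artifacts(make_epochs(lowpass(raw), onsets))
print(clean.shape)                          # (n_kept_epochs, n_samples)

Zero-phase filtering (filtfilt) is chosen here so that filtering does not shift the latency of force fluctuations relative to word onset, which matters when the time course of the response is the quantity of interest.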


Subject(s)
Data Collection/instrumentation , Hand Strength/physiology , Language , Adult , Auditory Perception , Female , Humans , Male , Young Adult
6.
Front Psychol ; 7: 2016, 2016.
Article in English | MEDLINE | ID: mdl-28082939

ABSTRACT

Non-verbal social interaction between humans requires accurate understanding of others' actions. The cognitivist approach suggests that successful interaction depends on the creation of a shared representation of the task, where the pairing of the perceptive and motor systems of the partners allows inclusion of the other's goal in the overarching representation. Activity of the Mirror Neuron System (MNS) is thought to be a crucial mechanism linking two individuals during a joint action through action observation. The construction of a shared representation of an interaction (i.e., a joint action) depends upon sensorimotor cognitive processes that modulate the ability to adapt in time and space. Using a repetitive joint action, we attempted to detect behavioral/kinematic changes that result in a global improvement in performance for both subjects once a common representation of the action has been built. We asked pairs of subjects to carry out a simple task in which one puts a base in the middle of a table and the other places a parallelepiped that fits into the base, the crucial manipulation being that participants switched roles during the experiment. We aimed to show that full comprehension of a joint action is not an automatic process. We found that, before switching roles, the participant initially placing the base oriented it in a way that led to an uncomfortable action for the participant placing the parallelepiped. After switching roles, however, the kinematics of the participant placing the base changed so as to facilitate the action of the other. More precisely, our data show a significant modulation of the base angle that eased completion of the joint action, highlighting that shared knowledge of the complete action facilitates the generation of a common representation. This evidence suggests that the ability to establish an efficient shared representation of a joint action benefits from physically taking our partner's perspective, because simply observing the actions of others may not be enough.

7.
J Cogn Neurosci ; 26(11): 2552-63, 2014 Nov.
Article in English | MEDLINE | ID: mdl-24893746

ABSTRACT

Growing evidence suggests that semantic knowledge is represented in distributed neural networks that include modality-specific structures. Here, we examined the processes underlying the acquisition of words from different semantic categories to determine whether the emergence of visual- and action-based categories could be traced back to their acquisition. For this, we applied correspondence analysis (CA) to ERPs recorded at various moments during acquisition. CA is a multivariate statistical technique typically used to reveal distance relationships between the words of a corpus. Applied to ERPs, it allows us to isolate the factors that best explain variations in the data across time and electrodes. Participants were asked to learn new action and visual words by associating novel pseudowords with the execution of hand movements or the observation of visual images. Words were probed before and after training on two consecutive days. To capture processes that unfold during lexical access, CA was applied to the 100-400 msec post-word-onset interval. CA isolated two factors that organized the data as a function of test session and word category. Conventional ERP analyses further revealed a category-specific increase in the negativity of the ERPs to action and visual words at frontal and occipital electrodes, respectively. The distinct neural processes underlying action and visual words can thus be traced back to the acquisition of word-referent relationships and may have their origin in association learning. Given current evidence for the flexibility of language-induced sensory-motor activity, we argue that these associative links may serve functions beyond word understanding, namely the elaboration of situation models.
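
As a rough illustration of the technique named above, the sketch below implements plain correspondence analysis on a small nonnegative table (rows might stand for electrodes or time windows, columns for conditions) via a singular value decomposition of the standardized residuals. This is the generic textbook formulation applied to toy data, not the authors' pipeline; applying CA to ERP amplitudes, which can be negative, requires additional preprocessing that is not shown here.

# Generic correspondence analysis (CA) via SVD of standardized residuals.
import numpy as np

def correspondence_analysis(table):
    # Returns row/column principal coordinates and the inertia (variance)
    # explained by each factor.
    N = np.asarray(table, dtype=float)
    P = N / N.sum()                             # correspondence matrix
    r = P.sum(axis=1)                           # row masses
    c = P.sum(axis=0)                           # column masses
    # Standardized residuals: departure from row/column independence.
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    U, sv, Vt = np.linalg.svd(S, full_matrices=False)
    rows = (U * sv) / np.sqrt(r)[:, None]       # row principal coordinates
    cols = (Vt.T * sv) / np.sqrt(c)[:, None]    # column principal coordinates
    inertia = sv**2 / (sv**2).sum()             # share of inertia per factor
    return rows, cols, inertia

# Toy table: three electrodes (rows) x three conditions (columns).
table = [[30, 12,  8],
         [10, 25, 15],
         [ 5, 14, 31]]
rows, cols, inertia = correspondence_analysis(table)
print(np.round(inertia, 3))                     # contribution of each factor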


Subject(s)
Association Learning/physiology , Brain/physiology , Motion Perception/physiology , Semantics , Speech Perception/physiology , Acoustic Stimulation , Electroencephalography , Evoked Potentials , Female , Humans , Male , Neuropsychological Tests , Photic Stimulation , Psycholinguistics , Young Adult
8.
Front Hum Neurosci ; 8: 163, 2014.
Article in English | MEDLINE | ID: mdl-24744714

ABSTRACT

Many neurocognitive studies on the role of motor structures in action-language processing have implicitly adopted a "dictionary-like" framework within which lexical meaning is constructed on the basis of an invariant set of semantic features. The debate has thus centered on the question of whether motor activation is an integral part of lexical semantics (embodied theories) or the result of a post-lexical construction of a situation model (disembodied theories). However, research in psycholinguistics shows that lexical semantic processing and context-dependent meaning construction are tightly integrated. An understanding of the role of motor structures in action-language processing might thus be better achieved by focusing on the linguistic contexts under which such structures are recruited. Here, we therefore analyzed online modulations of grip force while subjects listened to target words embedded in different linguistic contexts. When the target word was a hand action verb and the sentence focused on that action ("John signs the contract"), an early increase in grip force was observed. No comparable increase was detected when the same word occurred in a context that shifted the focus toward the agent's mental state ("John wants to sign the contract"). The mere presence of an action word is thus not sufficient to trigger motor activation. Moreover, when the linguistic context set up a strong expectation for a hand action, a grip force increase was observed even when the tested word was a pseudo-verb. The presence of a known action word is thus not required to trigger motor activation. Importantly, however, the same linguistic contexts that sufficed to trigger motor activation with pseudo-verbs failed to do so when the target words were verbs with no motor action reference. Context is thus not by itself sufficient to supersede an "incompatible" word meaning. We argue that motor structure activation is part of a dynamic process that integrates the lexical meaning potential of a term and the context in the online construction of a situation model, a process crucial for fluent and efficient online language comprehension.

9.
Neuropsychologia ; 55: 85-97, 2014 Mar.
Article in English | MEDLINE | ID: mdl-24157538

ABSTRACT

Successful non-verbal social interaction between human beings requires dynamic and efficient encoding of others' gestures. Our study aimed at identifying neural markers of social interaction and goal variations in a non-verbal task. For this, we simultaneously recorded the electroencephalogram from two participants (dual-EEG), an actor and an observer, together with their arm/hand kinematics, in a real face-to-face paradigm. The observer watched "biological actions" performed by the human actor and "non-biological actions" performed by a robot. All actions occurred within an interactive or non-interactive context depending on whether the observer had to perform a complementary action or not (e.g., the actor presents a saucer and the observer either places the corresponding cup or does nothing). We analysed the EEG signals of both participants, i.e., beta (~20 Hz) oscillations as an index of cortical motor activity and motor-related potentials (MRPs). We identified markers of social interaction by synchronising the EEG to the onset of the actor's movement. Movement kinematics did not differ between the two context conditions, and the MRPs of the actor were similar in the two conditions. For the observer, however, an observation-related MRP was measured in all conditions but was more negative in the interactive context over fronto-central electrodes. Moreover, this feature was specific to biological actions. Concurrently, a suppression of beta oscillations was observed in the actor's and the observer's EEG rapidly after the onset of the actor's movement. Critically, this suppression was stronger in the interactive than in the non-interactive context, even though movement kinematics did not differ between contexts. For the observer, this modulation was observed independently of whether the actor was a human or a robot. Our results suggest that acting in a social context induced analogous modulations of motor and sensorimotor regions in observer and actor. Sharing a common goal during an interaction thus seems to evoke a common representation of the global action that includes both the actor's and the observer's movements.


Subject(s)
Brain/physiology , Gestures , Interpersonal Relations , Motor Activity/physiology , Visual Perception/physiology , Adolescent , Adult , Arm/physiology , Beta Rhythm , Biomechanical Phenomena , Electroencephalography , Evoked Potentials, Motor , Female , Hand/physiology , Humans , Male , Neural Pathways/physiology , Robotics , Task Performance and Analysis , Young Adult
10.
Front Hum Neurosci ; 7: 646, 2013.
Article in English | MEDLINE | ID: mdl-24133437

ABSTRACT

Action observation, simulation, and execution share neural mechanisms that allow for a common motor representation. It is known that when these overlapping mechanisms are simultaneously activated by action observation and execution, motor performance is influenced by observation, and vice versa. To understand the neural dynamics underlying this influence and to measure how variations in brain activity impact the precise kinematics of motor behavior, we coupled kinematic and electrophysiological recordings of participants while they performed and observed congruent or non-congruent actions, or during action execution alone. We found that movement velocities and trajectory deviations of the executed actions increased during the observation of congruent actions compared with the observation of non-congruent actions or action execution alone. This facilitation was also discernible in the participants' motor-related potentials, which were transiently more negative in the congruent condition around the onset of the executed movement (which occurred 300 ms after the onset of the observed movement). This facilitation seemed to depend not only on spatial congruency but also on an optimal temporal relationship between the observation and execution events.
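
The two kinematic measures reported above, movement velocity and trajectory deviation, can be computed from a recorded 3D marker trajectory roughly as in the Python sketch below. The trajectory format, the sampling rate and the definition of deviation as the largest distance from the straight start-to-end line are illustrative assumptions, not necessarily the exact measures used in the study.

# Sketch of two kinematic measures: peak tangential velocity and maximum
# deviation of the movement path from the straight start-to-end line.
import numpy as np

def peak_velocity(xyz, fs):
    # Peak tangential velocity (same spatial unit as xyz, per second).
    v = np.gradient(xyz, 1.0 / fs, axis=0)       # per-axis velocity
    return np.linalg.norm(v, axis=1).max()

def max_path_deviation(xyz):
    # Largest distance of the path from the straight start-to-end line.
    start, end = xyz[0], xyz[-1]
    d = (end - start) / np.linalg.norm(end - start)
    rel = xyz - start
    lateral = rel - np.outer(rel @ d, d)         # remove along-line component
    return np.linalg.norm(lateral, axis=1).max()

# Toy reach: 1 s at 200 Hz from (0, 0, 0) to (0.3, 0, 0) m with a small arc in z.
fs = 200
t = np.arange(fs) / fs
xyz = np.c_[0.3 * t, np.zeros_like(t), 0.02 * np.sin(np.pi * t)]
print(round(peak_velocity(xyz, fs), 3), round(max_path_deviation(xyz), 3))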

11.
Exp Brain Res ; 227(3): 407-19, 2013 Jun.
Article in English | MEDLINE | ID: mdl-23615976

ABSTRACT

Action observation and execution share overlapping neural resonating mechanisms. In the present study, we sought to examine the effect of the activation of this system during concurrent movement observation and execution in a prehension task, when no a priori information about the requirements of the grasping action was available. Although it is known that simultaneous activation by observation and execution influences motor performance, the importance of the relative timing of these two events and the specific effect of movement observation itself (as opposed to the prediction of the to-be-observed movement) on action performance are poorly understood. Fine-grained kinematic analysis of both the transport and grasp components of the movement should shed light on the influence of movement observation on the precision and performance of the executed movement. The experiment involved two real participants who were asked to grasp different sides of a single object composed of a large and a small part. In the first experiment, we measured how the transport component and the grasp component were affected by movement observation. We tested whether this influence was greater if the observed movement occurred just before the onset of movement (200 ms) or well before the onset of movement (1 s). In a second experiment, to replicate the first and to verify the specificity of the grasping movements, we also included a condition consisting of pointing towards the object. Both experiments yielded two main results. A general facilitation of the transport component was found when a simultaneous action was observed, independent of its congruency. Moreover, a specific facilitation of the grasp component was present during the observation of a congruent action when movement execution and observation were nearly synchronised. While the general facilitation may arise from competition between the two participants as they reached for the object, the specific facilitation of the grasp component seems to be directly related to mirror neuron system activity induced by action observation itself. Moreover, the time course of the events appears to be an essential factor in this modulation, implying transitory activation of the mirror neuron system.


Subject(s)
Arm/physiology , Hand Strength/physiology , Movement/physiology , Psychomotor Performance/physiology , Adult , Biomechanical Phenomena/physiology , Female , Humans , Male , Photic Stimulation , Reaction Time/physiology
12.
PLoS One ; 7(1): e30663, 2012.
Article in English | MEDLINE | ID: mdl-22292014

ABSTRACT

Evidence for cross-talk between motor and language brain structures has accumulated over the past several years. However, while a significant amount of research has focused on the interaction between language perception and action, little attention has been paid to the potential impact of language production on overt motor behaviour. The aim of the present study was to test whether verbalizing during a grasp-to-displace action would affect motor behaviour and, if so, whether this effect would depend on the semantic content of the pronounced word (Experiment I). Furthermore, we sought to test the stability of such effects in a different group of participants and to investigate at which stage of the motor act language intervenes (Experiment II). For this, participants were asked to reach, grasp and displace an object while overtly pronouncing verbal descriptions of the action ("grasp" and "put down") or unrelated words (e.g. "butterfly" and "pigeon"). Fine-grained analyses of several kinematic parameters, such as velocity peaks, revealed that when participants produced action-related words their movements were faster than in conditions in which they did not verbalize or produced words unrelated to the action. These effects likely result from the functional interaction between semantic retrieval of the words and the planning and programming of the action. The links between (action) language and motor structures are therefore strong enough that language can refine overt motor behaviour.


Subject(s)
Language , Movement/physiology , Psychomotor Performance/physiology , Semantics , Verbal Behavior/physiology , Adolescent , Adult , Affect/physiology , Biomechanical Phenomena/physiology , Female , France , Hand Strength/physiology , Humans , Male , Models, Biological , Motivation/physiology , Motor Activity/physiology , Voice/physiology , Young Adult
13.
Cortex ; 48(7): 888-99, 2012 Jul.
Article in English | MEDLINE | ID: mdl-21864836

ABSTRACT

Action words referring to face, arm or leg actions activate areas along the motor strip that also control the planning and execution of the actions specified by the words. This electroencephalogram (EEG) study aimed to test the learning profile of this language-induced motor activity. Participants were trained to associate novel verbal stimuli with videos of object-oriented hand and arm movements or with animated visual images on two consecutive days. Each training session was preceded and followed by a test session with isolated videos and verbal stimuli. We measured motor-related brain activity (reflected by a desynchronization in the µ frequency band, 8-12 Hz) localized at centro-parietal and fronto-central electrodes. We compared activity elicited by viewing the videos with activity resulting from processing the language stimuli alone. At centro-parietal electrodes, stable action-related µ suppression was observed during viewing of the videos in each test session of the two days. For verbal stimuli associated with motor actions, a similar pattern of activity was evident only in the second test session of Day 1. Over fronto-central regions, µ suppression was observed in the second test session of Day 2 for the videos and in the second test session of Day 1 for the verbal stimuli. Whereas the centro-parietal µ suppression can be attributed to motor events actually experienced during training, the fronto-central µ suppression seems to reflect a convergence zone that mediates underspecified motor information. Consequently, the sensory-motor reactivations through which concepts are comprehended seem to differ in their neural dynamics from those implicated in their acquisition.
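
µ suppression of the kind measured here is conventionally quantified as event-related desynchronization (ERD%): band power in the 8-12 Hz range expressed as a percentage change from a pre-stimulus baseline. The Python sketch below shows that classic computation on simulated single-channel epochs; the sampling rate, filter settings, window lengths and toy data are assumptions for illustration, not the parameters of this study.

# Sketch of an event-related desynchronization (ERD%) computation for the
# mu band (8-12 Hz): band-pass, square, average over trials, then express
# power change relative to a pre-stimulus baseline.
import numpy as np
from scipy.signal import butter, filtfilt

def mu_erd(epochs, fs, band=(8.0, 12.0), baseline_s=(-0.5, 0.0), pre_s=0.5):
    # epochs: array (n_trials, n_samples), each epoch starting pre_s seconds
    # before stimulus onset. Negative ERD% values indicate mu suppression.
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="bandpass")
    power = filtfilt(b, a, epochs, axis=1) ** 2   # instantaneous band power
    avg = power.mean(axis=0)                      # average over trials
    times = np.arange(epochs.shape[1]) / fs - pre_s
    base = avg[(times >= baseline_s[0]) & (times < baseline_s[1])].mean()
    return times, 100.0 * (avg - base) / base

# Toy data: 40 trials of 2 s at 250 Hz with a 10 Hz rhythm that weakens
# 200 ms after stimulus onset.
fs, n_trials, n_samples = 250, 40, 500
rng = np.random.default_rng(1)
t = np.arange(n_samples) / fs - 0.5               # -0.5 .. 1.5 s
amp = np.where(t < 0.2, 1.0, 0.4)
epochs = amp * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 0.3, (n_trials, n_samples))
times, erd = mu_erd(epochs, fs)
print(round(erd[times > 0.5].mean(), 1))          # clearly negative: suppression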


Subject(s)
Association Learning/physiology , Brain Waves/physiology , Brain/physiology , Movement/physiology , Psychomotor Performance/physiology , Acoustic Stimulation , Adult , Brain Mapping , Cognition/physiology , Electroencephalography , Female , Humans , Male , Photic Stimulation
14.
Cereb Cortex ; 20(5): 1153-63, 2010 May.
Article in English | MEDLINE | ID: mdl-19684250

ABSTRACT

The sensitivity of the left ventral occipito-temporal (vOT) cortex to visual word processing has triggered a considerable debate about the role of this region in reading. One popular view is that the left vOT underlies the perceptual expertise needed for rapid skilled reading. Because skilled reading breaks down when words are presented in a visually unfamiliar format, we tested this hypothesis by analyzing vOT responses to horizontally presented words (familiar format) and vertically presented words (unfamiliar format). In addition, we compared the activity in participants with left and right cerebral dominance for language generation. Our results revealed 1) that the vOT activity during reading is lateralized to the same side as the inferior frontal activity during word generation, 2) that vertically and horizontally presented words triggered the same amount of activity in the vOT of the dominant hemisphere, but 3) that there was significantly more activity for vertically presented words in the vOT of the nondominant hemisphere. We suggest that the reading-related activity in vOT reflects the integration of general perceptual processes with language processing in the anterior brain regions and is not limited to skilled reading in the familiar horizontal format.


Subject(s)
Functional Laterality/physiology , Language , Occipital Lobe/physiology , Recognition, Psychology/physiology , Temporal Lobe/physiology , Vocabulary , Adult , Analysis of Variance , Brain Mapping , Decision Making/physiology , Electroencephalography/methods , Female , Humans , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Male , Occipital Lobe/blood supply , Oxygen/blood , Photic Stimulation/methods , Reading , Temporal Lobe/blood supply , Young Adult
15.
Brain Res ; 1272: 32-44, 2009 May 26.
Article in English | MEDLINE | ID: mdl-19332032

ABSTRACT

Visual expertise underlying reading is attributed to processes involving the left ventral visual pathway. However, converging evidence suggests that the dorsal visual pathway is also involved in early levels of visual word processing, especially when words are presented in unfamiliar visual formats. In the present study, event-related potentials (ERPs) were used to investigate the time course of the early engagement of the ventral and dorsal pathways during the processing of orthographic stimuli (high- and low-frequency words, pseudowords and consonant strings) by manipulating visual format (familiar horizontal vs. unfamiliar vertical). While the early ERP components (P1 and N1) already distinguished between formats, the effect of stimulus type emerged at the latency of the N2 component (225-275 ms). The N2 scalp topography and sLORETA source localisation for this differentiation showed an occipito-temporal negativity for the horizontal format and a negativity that extended towards dorsal regions for the vertical format. In a later time window (350-425 ms), ERPs elicited by vertically displayed stimuli distinguished words from pseudowords in the ventral area, as confirmed by source localisation. The sustained contribution of occipito-temporal processes for vertical stimuli suggests that the ventral pathway is essential for lexical access. Parietal regions appear to be involved when a serial mechanism of visual attention is required to shift attention from one letter to another. The two pathways cooperate during visual word recognition, and processing in these pathways should be considered not as alternative but as complementary elements of reading.
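
For readers unfamiliar with ERP component measures, the Python sketch below illustrates one common way such effects are quantified: the mean amplitude of an averaged waveform within a fixed latency window (here the 225-275 ms window of the N2 mentioned above), relative to a pre-stimulus baseline. The epoch layout, sampling rate, baseline length and toy waveforms are illustrative assumptions only.

# Mean-amplitude measurement of an ERP component in a fixed latency window.
import numpy as np

def component_amplitude(erp, fs, window_ms, pre_ms=100.0):
    # erp: 1-D averaged waveform whose first pre_ms milliseconds are the
    # pre-stimulus baseline. Returns the baseline-corrected mean amplitude
    # inside window_ms (in ms relative to stimulus onset).
    times_ms = np.arange(erp.size) / fs * 1000.0 - pre_ms
    baseline = erp[times_ms < 0].mean()
    sel = (times_ms >= window_ms[0]) & (times_ms <= window_ms[1])
    return (erp[sel] - baseline).mean()

# Toy averaged ERPs (one per format), 600 ms epochs at 500 Hz, 100 ms baseline.
fs = 500
t = np.arange(int(0.6 * fs)) / fs * 1000.0 - 100.0        # -100 .. 498 ms
horizontal = -2.0 * np.exp(-((t - 250.0) ** 2) / (2 * 30.0 ** 2))
vertical = -3.5 * np.exp(-((t - 250.0) ** 2) / (2 * 30.0 ** 2))
for name, erp in (("horizontal", horizontal), ("vertical", vertical)):
    print(name, round(component_amplitude(erp, fs, (225.0, 275.0)), 2), "microvolts")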


Subject(s)
Brain Mapping , Evoked Potentials, Visual/physiology , Recognition, Psychology/physiology , Visual Pathways/physiology , Visual Perception/physiology , Adolescent , Adult , Electroencephalography/methods , Female , Humans , Male , Photic Stimulation/methods , Reaction Time/physiology , Reading , Vocabulary , Young Adult
16.
Psychol Res ; 72(6): 657-65, 2008 Nov.
Article in English | MEDLINE | ID: mdl-18841389

ABSTRACT

A perceptual frequency variant of the orthographic cue (OC) hypothesis (Peressotti, Cubelli, & Job, 2003) was tested in two perceptual identification experiments using the variable viewing position technique: German nouns and non-nouns that are most frequently perceived with or without initial letter capitalization, respectively, were tachistoscopically presented in upper-case, lower-case, or with initial capitalization. The results indicated that words were best recognized in the form they are most frequently perceived in, which suggests that during reading acquisition abstract as well as case- and item-specific OCs may be learned and used for recognition.


Subject(s)
Attention , Language , Reading , Semantics , Size Perception , Writing , Cues , Discrimination Learning , Humans , Linguistics , Reaction Time
17.
J Physiol Paris ; 102(1-3): 130-6, 2008.
Article in English | MEDLINE | ID: mdl-18485678

ABSTRACT

Recent evidence has shown that processing action-related language and performing motor actions share common neural representations, to the point that the two processes can interfere with each other when performed concurrently. To support the assumption that language-induced motor activity contributes to action word understanding, the present study aimed to rule out the possibility that this activity results from mental imagery of the movements depicted by the words. For this purpose, we examined cross-talk between action word processing and an arm reaching movement, using words that were presented too fast to be consciously perceived (subliminally). The electroencephalogram (EEG) and movement kinematics were recorded. EEG recordings of the "readiness potential" (RP, an indicator of motor preparation) revealed that subliminal displays of action verbs during movement preparation reduced the RP and affected the subsequent reaching movement. The finding that motor processes were modulated by language processes even though the words were not consciously perceived suggests that cortical structures that serve the preparation and execution of motor actions are indeed part of the (action) language processing network.


Subject(s)
Electroencephalography , Language , Mental Processes/physiology , Movement/physiology , Psychomotor Performance/physiology , Adult , Biomechanical Phenomena , Contingent Negative Variation , Female , Humans , Male , Reaction Time/physiology , Semantics , Time Factors
19.
Q J Exp Psychol (Hove) ; 61(6): 933-43, 2008 Jun.
Article in English | MEDLINE | ID: mdl-18470823

ABSTRACT

In a recent study, Boulenger et al. (2006) found that processing action verbs assisted a reaching movement when the word was processed prior to movement onset and interfered with the movement when the word was processed at movement onset. The present study aimed to further corroborate the existence of such cross-talk between language processes and overt motor behaviour by demonstrating that the reaching movement can be disturbed by action words even when the words are presented after movement onset (at delays of 50 ms and 200 ms). The results are compared with studies showing language-motor interaction in conditions where the word is presented prior to movement onset and are discussed within the context of embodied theories of language comprehension.


Subject(s)
Attention , Imagination , Psychomotor Performance , Reaction Time , Reading , Semantics , Biomechanical Phenomena , Distance Perception , Functional Laterality , Hand Strength , Humans , Orientation , Size Perception
20.
J Cogn Neurosci ; 20(4): 672-81, 2008 Apr.
Article in English | MEDLINE | ID: mdl-18052778

ABSTRACT

The brain areas involved in visual word processing rapidly become lateralized to the left cerebral hemisphere. It is often assumed that this is because, in the vast majority of people, cortical structures underlying language production are lateralized to the left hemisphere. An alternative hypothesis, however, might be that the early stages of visual word processing are lateralized to the left hemisphere because of intrinsic hemispheric differences in processing the low-level visual information required for distinguishing fine-grained visual forms such as letters. If this alternative hypothesis were correct, we would expect posterior occipito-temporal processing stages to remain lateralized to the left hemisphere in participants with right-hemisphere dominance for the frontal-lobe processes involved in language production. By analyzing event-related potentials of native readers of French with either left-hemisphere or right-hemisphere dominance for language production (determined using a verb generation task), we were able to show that the posterior occipito-temporal areas involved in visual word processing are lateralized to the same hemisphere as language production. This finding could suggest top-down influences in the development of posterior visual word processing areas.


Subject(s)
Brain Mapping , Cerebral Cortex/physiology , Comprehension/physiology , Functional Laterality/physiology , Imagination/physiology , Reading , Adult , Analysis of Variance , Evoked Potentials/physiology , Frontal Lobe/physiology , Humans , Language , Occipital Lobe/physiology , Reference Values , Temporal Lobe/physiology , Verbal Behavior/physiology