Results 1 - 20 of 48
2.
Neuropsychologia ; 114: 243-250, 2018 06.
Article in English | MEDLINE | ID: mdl-29729959

ABSTRACT

BACKGROUND: Strong embodiment theories claim that action language representation is grounded in the sensorimotor system, which would be crucial to semantic understanding. However, there is large disagreement in the literature about the neural mechanisms involved in abstract (symbolic) language comprehension. OBJECTIVE: In the present study, we investigated the role of motor context in the semantic processing of abstract language. We hypothesized that motor cortex excitability during abstract word comprehension could be modulated by the previous presentation of a stimulus that associated a congruent motor content (i.e., a semantically related gesture) with the word. METHODS AND RESULTS: We administered a semantic priming paradigm in which postures of gestures (primes) were followed by semantically congruent verbal stimuli (targets, meaningful or meaningless words). Transcranial Magnetic Stimulation was delivered to the left motor cortex 100, 250, and 500 ms after the presentation of each target. Results showed that motor evoked potentials of a hand muscle significantly increased in response to meaningful compared to meaningless words, but only in the earlier phase of semantic processing (100 and 250 ms from target onset). CONCLUSION: The results suggest that the gestural motor representation was integrated with the corresponding word meaning in order to accomplish (and facilitate) the lexical task. We conclude that motor context proved crucial in highlighting motor system involvement during semantic processing of abstract language.


Subject(s)
Comprehension , Evoked Potentials, Motor/physiology , Motor Cortex/physiology , Semantics , Transcranial Magnetic Stimulation/methods , Visual Perception/physiology , Adult , Analysis of Variance , Female , Gestures , Humans , Male , Photic Stimulation , Psychomotor Performance , Reaction Time/physiology , Time Factors , Young Adult
3.
Cortex ; 100: 95-110, 2018 03.
Article in English | MEDLINE | ID: mdl-29079343

ABSTRACT

Sensorimotor and affective brain systems are known to be involved in language processing. However, to date it is still debated whether this involvement is a crucial step of semantic processing or whether, on the contrary, it depends on the specific context or strategy adopted to solve the task at hand. The present electroencephalographic (EEG) study aimed to investigate which brain circuits are engaged when processing written verbs. By aligning event-related potentials (ERPs) both to verb onset and to the motor response indexing the accomplishment of a semantic categorization task, we were able to dissociate the stimulus-related and response-related cognitive components at play. EEG source reconstruction showed that while the recruitment of sensorimotor fronto-parietal circuits was time-locked to action verb onset, a left temporal-parietal circuit was time-locked to task accomplishment. Crucially, comparison of the time courses of these bottom-up and top-down cognitive components showed that the frontal motor involvement precedes the task-related temporal-parietal activity. The present findings suggest that the recruitment of fronto-parietal sensorimotor circuits is independent of the specific strategy adopted to solve a semantic task and, given its temporal precedence, may provide crucial information to the brain circuits involved in the categorization task. Finally, we discuss how the present results may contribute to the clinical literature on patients affected by disorders that specifically impair the motor system.


Subject(s)
Brain/physiology , Psychomotor Performance/physiology , Speech Perception/physiology , Verbal Behavior/physiology , Adolescent , Adult , Evoked Potentials/physiology , Female , Humans , Male , Movement/physiology , Reaction Time , Semantics , Young Adult
4.
Front Hum Neurosci ; 11: 565, 2017.
Article in English | MEDLINE | ID: mdl-29204114

ABSTRACT

During social interaction, actions and words may be expressed in different ways, for example, gently or rudely. A handshake can be gentle or vigorous and, similarly, a tone of voice can be pleasant or rude. These aspects of social communication were named vitality forms by Daniel Stern. Vitality forms represent how an action is performed and characterize all human interactions. In spite of their importance in social life, to date it is not clear whether the vitality forms expressed by an agent can influence the execution of a subsequent action performed by the receiver. To shed light on this matter, in the present study we carried out a kinematic experiment to assess whether and how the visual and auditory properties of vitality forms expressed by others influenced the motor responses of participants. In particular, participants were presented with video-clips showing a male and a female actor performing a "giving request" (give me) or a "taking request" (take it) in visual, auditory, and mixed (visual and auditory) modalities. Most importantly, the requests were expressed with rude or gentle vitality forms. After the actor's request, participants performed a subsequent action. Results showed that the vitality forms expressed by the actors influenced the kinematic parameters of the participants' actions regardless of the modality in which they were conveyed.

5.
Front Psychol ; 8: 2339, 2017.
Article in English | MEDLINE | ID: mdl-29403408

ABSTRACT

It is well established that the observation of emotional facial expressions induces facial mimicry responses in observers. However, how the interaction between the emotional and motor components of facial expressions modulates the motor behavior of the perceiver is still unknown. We developed a kinematic experiment to evaluate the effect of different oro-facial expressions on the perceiver's face movements. Participants were asked to perform two movements, i.e., lip stretching and lip protrusion, in response to the observation of four meaningful (i.e., smile, angry-mouth, kiss, and spit) and two meaningless mouth gestures. All stimuli were characterized by different motor patterns (mouth aperture or mouth closure). Response times and kinematic parameters of the movements (amplitude, duration, and mean velocity) were recorded and analyzed. The results evidenced dissociated effects on reaction times and movement kinematics. We found shorter reaction times when a mouth movement was preceded by the observation of a meaningful and motorically congruent oro-facial gesture, in line with the facial mimicry effect. On the contrary, during execution, the perception of a smile was associated with facilitation, in terms of shorter duration and higher velocity, of the incongruent movement, i.e., lip protrusion. The same effect was found in response to kiss and spit, which significantly facilitated the execution of lip stretching. We called this phenomenon the facial mimicry reversal effect, intended as the overturning of the effect normally observed during facial mimicry. In general, the findings show that both the motor features and the type of emotional oro-facial gesture (conveying positive or negative valence) affect the kinematics of subsequent mouth movements at different levels: while congruent motor features facilitate a general motor response, motor execution can be speeded by gestures that are motorically incongruent with the observed one. Moreover, the valence effect depends on the specific movement required. The results are discussed in relation to Basic Emotion Theory and the embodied cognition framework.

6.
Front Psychol ; 7: 672, 2016.
Article in English | MEDLINE | ID: mdl-27242586

ABSTRACT

AIM: Do the emotional content and meaning of sentences affect the kinematics of subsequent motor sequences? MATERIAL AND METHODS: Participants observed video-clips of an actor pronouncing sentences expressing positive or negative emotions and meanings (related to happiness or anger in Experiment 1, and to food admiration or food disgust in Experiment 2). Then, they reached-to-grasp and placed a sugar lump on the actor's mouth. Participants acted in response to sentences whose content could convey (1) emotion (i.e., facial expression and prosody) and meaning, (2) meaning alone, or (3) emotion alone. Within each condition, the kinematic effects of sentences expressing positive and negative emotions were compared. RESULTS: In Experiment 1, the kinematics did not vary between positive and negative sentences either when the content was expressed by both emotion and meaning or by meaning alone. In contrast, in the case of emotion alone, sentences with positive valence made the approach to the conspecific faster. In Experiment 2, the valence of emotions (positive for food admiration and negative for food disgust) affected the kinematics of both grasp and reach, independently of the modality. DISCUSSION: The lack of an effect of meaning in Experiment 1 could be due to the weak relevance of sentence meaning to the goal of the motor sequence (feeding). Experiment 2 demonstrated that this was indeed the case: when the meaning and the consequent emotion were related to the sequence goal, they affected the kinematics. In contrast, emotion alone activated approach toward or avoidance of the actor according to its positive or negative valence. The data suggest a behavioral dissociation between the effects of emotion and meaning.

7.
Front Psychol ; 6: 1648, 2015.
Article in English | MEDLINE | ID: mdl-26579031

ABSTRACT

AIM: This study delineated how observing sports scenes of cooperation or competition modulated an action of interaction in expert athletes, depending on their specific sport attitude. METHOD: In a kinematic study, athletes were divided into two groups depending on their attitude toward teammates (cooperative or competitive). Participants observed sport scenes of cooperation and competition (basketball, soccer, water polo, volleyball, and rugby) and then reached for, picked up, and placed an object on the hand of a conspecific (a giving action). Mixed-design ANOVAs were carried out on the mean values of the grasping-reaching parameters. RESULTS: The data showed that the type of scene observed, as well as the athletes' attitude, affected the reach-to-grasp actions to give. In particular, the cooperative athletes were faster when they observed scenes of cooperation than when they observed scenes of competition. DISCUSSION: Participants were faster when executing a giving action after observing actions of cooperation, but only when they had a cooperative attitude. A match between attitude and intended action seems to be a necessary prerequisite for the observed type of scene to affect the performed action. It is possible that the observation of scenes of competition activated motor strategies that interfered with the strategies adopted by the cooperative participants to execute a cooperative (giving) sequence.

8.
Neuroimage ; 117: 375-85, 2015 Aug 15.
Article in English | MEDLINE | ID: mdl-26044859

ABSTRACT

The present study aimed at determining whether the elaboration of communicative signals (symbolic gestures and words) is always accompanied by integration of one signal with the other and, if so, whether this integration supports the existence of a single control mechanism. Experiment 1 aimed at determining whether and how gesture is integrated with word. Participants were administered a semantic priming paradigm with a lexical decision task: they pronounced a target word, which was preceded by a meaningful or meaningless prime gesture. When meaningful, the gesture could be either congruent or incongruent with the word meaning. The duration of prime presentation (100, 250, 400 ms) varied randomly. Voice spectra, lip kinematics, and time to response were recorded and analyzed. Formant 1 of the voice spectra and mean velocity of the lip kinematics increased when the prime was meaningful and congruent with the word, as compared to a meaningless gesture. In other words, voice and movement parameters were magnified by congruence, but this occurred only when prime duration was 250 ms. Time to response to meaningful gestures was shorter in the congruent than in the incongruent condition. Experiment 2 aimed at determining whether the mechanism integrating a prime word with a target word is similar to that integrating a prime gesture with a target word. Formant 1 of the target word increased when the prime word was meaningful and congruent, as compared to a meaningless congruent prime. The increase, however, was present for every prime duration. Experiment 3 aimed at determining whether symbolic prime gesture comprehension makes use of motor simulation. Transcranial Magnetic Stimulation was delivered to the left primary motor cortex 100, 250, or 500 ms after prime gesture presentation. The Motor Evoked Potential of the First Dorsal Interosseus increased when stimulation occurred 100 ms post-stimulus. Thus, the gesture was understood within 100 ms and integrated with the target word within 250 ms. Experiment 4 excluded any hand motor simulation in the comprehension of the prime word. Thus, the same type of integration with a word was present for both prime gestures and prime words. It probably followed the understanding of the signal, which relied on motor simulation for gestures and on direct access to semantics for words.


Subject(s)
Evoked Potentials, Motor/physiology , Gestures , Motor Cortex/physiology , Speech Production Measurement/methods , Transcranial Magnetic Stimulation/methods , Verbal Behavior/physiology , Adult , Female , Humans , Male , Psychomotor Performance/physiology , Repetition Priming/physiology , Semantics , Time Factors , Young Adult
9.
Brain Topogr ; 28(4): 591-605, 2015 Jul.
Article in English | MEDLINE | ID: mdl-25124860

ABSTRACT

What happens if you see a person pronouncing the word "go" after having gestured "stop"? Unlike iconic gestures, which must necessarily be accompanied by verbal language in order to be unambiguously understood, symbolic gestures are so conventionalized that they can be effortlessly understood in the absence of speech. Previous studies proposed that gesture and speech belong to a unique communication system. From an electrophysiological perspective, N400 modulation is considered the main variable indexing the interplay between two stimuli. However, while many studies have tested this effect between iconic gestures and speech, little is known about the capability of an emblem to modulate the neural response to subsequently presented words. Using high-density EEG, the present study aimed at evaluating the presence of an N400 effect, and its spatiotemporal dynamics in terms of cortical activations, when emblems primed the observation of words. Participants were presented with symbolic gestures followed by a semantically congruent or incongruent verb. An N400 modulation was detected, with larger negativity when gesture and word were incongruent. Source localization in the N400 time window evidenced the activation of different portions of the temporal cortex according to gesture-word congruence. Our data provide further evidence of how the observation of an emblem influences verbal language perception, and of how this interplay is mainly instantiated by different portions of the temporal cortex.


Subject(s)
Cerebral Cortex/physiology , Comprehension/physiology , Gestures , Semantics , Adult , Electroencephalography , Evoked Potentials , Female , Humans , Male , Temporal Lobe/physiology , Visual Perception/physiology , Young Adult
10.
Neuropsychologia ; 61: 163-74, 2014 Aug.
Article in English | MEDLINE | ID: mdl-24956569

ABSTRACT

Different accounts have been proposed to explain the nature of concept representations. Embodied accounts claim a key involvement of sensorimotor systems during semantic processing, while more traditional accounts posit that concepts are abstract mental entities independent of perceptual and motor brain systems. While the involvement of sensorimotor areas in concrete language processing is supported by a large number of studies, this involvement is far from established for abstract language. The present study addressed abstract and concrete verb processing by investigating the spatiotemporal dynamics of evoked responses by means of high-density EEG while participants performed a semantic decision task. In addition, RTs to the same set of stimuli were collected. In both early and late time intervals, the ERP scalp topography differed significantly according to word category. Concrete verbs showed involvement of parieto-frontal networks for action, according to the implied bodily effector. In contrast, abstract verbs mostly recruited frontal regions outside the motor system, suggesting non-motor semantic processing for this category. In addition, differently from what has been reported during action observation, the parietal recruitment related to concrete verb presentation followed the frontal one. The present findings suggest that action word semantics are grounded in sensorimotor systems, provided a bodily effector is specified, while the representation of abstract concepts cannot be easily explained by motor embodiment.


Subject(s)
Brain/physiology , Reading , Semantics , Adult , Decision Making/physiology , Electroencephalography , Evoked Potentials , Female , Humans , Male , Neuropsychological Tests , Pattern Recognition, Visual/physiology , Photic Stimulation , Psycholinguistics , Reaction Time , Signal Processing, Computer-Assisted
11.
Exp Brain Res ; 232(7): 2431-8, 2014 Jul.
Article in English | MEDLINE | ID: mdl-24748482

ABSTRACT

The present experiment aimed at verifying whether the spatial alignment effect modifies the kinematic parameters of pantomimed reaching-grasping of cups located at reachable and unreachable distances. The cup's handle could be oriented either to the right or to the left, thus inducing a grasp movement that could be either congruent or incongruent with the pantomime. The incongruence/congruence induced an increase/decrease in maximal finger aperture, which was observed when the cup was located near, but not far from, the body. This effect probably depended on the influence of the size of the cup body on pantomime control when, in the incongruent condition, the cup body was closer to the grasping hand than the handle. Cup distance (near vs. far) influenced the pantomime even though the pantomime was actually executed in the same peripersonal space. Specifically, arm and hand temporal parameters, as well as movement amplitudes, were affected by actual cup distance. The results indicate that, when executing a reach-to-grasp pantomime, the affordance related to the use of the object was instantiated (and, in particular, the spatial alignment effect became effective), but only when the object could actually be reached. Cup distance (an extrinsic object property) influenced affordance independently of the possibility of actually reaching the target.


Subject(s)
Hand Strength , Psychomotor Performance/physiology , Space Perception/physiology , Adult , Analysis of Variance , Biomechanical Phenomena , Female , Fingers/innervation , Humans , Male , Movement/physiology , Photic Stimulation , Young Adult
12.
Cogn Process ; 15(1): 85-92, 2014 Feb.
Article in English | MEDLINE | ID: mdl-24113915

ABSTRACT

Does the comprehension of both action-related and abstract verbs rely on motor simulation? In a behavioral experiment using a semantic task, response times to hand-action-related verbs were shorter than those to abstract verbs, and both decreased with repeated presentation. In a transcranial magnetic stimulation (TMS) experiment, single-pulse stimulation was randomly delivered over the hand motor area of the left primary motor cortex to measure corticospinal excitability 300 or 500 ms after verb presentation. Two blocks of trials were run, and in each block the same verbs were randomly presented. In the first block, stimulation induced an increase in motor evoked potentials only when TMS was applied 300 ms after action-related verb presentation. In the second block, no modulation of the motor cortex was found according to verb type or stimulation delay. These results confirm that motor simulation is used to understand action verbs rather than abstract verbs. Moreover, they suggest that, with repetition, the semantic processing of action verbs no longer requires activation of the primary motor cortex.


Subject(s)
Comprehension/physiology , Evoked Potentials, Motor/physiology , Reaction Time/physiology , Semantics , Transcranial Magnetic Stimulation , Acoustic Stimulation , Adult , Analysis of Variance , Electromyography , Female , Humans , Male , Time Factors , Young Adult
13.
Behav Brain Res ; 259: 297-301, 2014 Feb 01.
Article in English | MEDLINE | ID: mdl-24275380

ABSTRACT

The present study aimed at determining whether or not the comprehension of symbolic gestures, and of words corresponding to them in meaning, makes use of the cortical circuits involved in movement execution control. Participants were presented with videos of an actress producing meaningful or meaningless gestures, or pronouncing corresponding-in-meaning words or pseudo-words; they were required to judge whether the signal was meaningful or meaningless. Single-pulse TMS was applied to the forearm area of the primary motor cortex 150-200 ms after the point at which the stimulus meaning could be understood. MEPs were significantly greater when processing meaningless signals than in a baseline condition presenting a still and silent actress. In contrast, this was not the case for meaningful signals, whose motor activation did not differ from that for the baseline stimulus. MEPs were significantly greater for meaningless than for meaningful signals, and no significant difference was found between gesture and speech. On the basis of these results, we hypothesized that observing/listening to meaningless signals recruits motor areas, whereas this does not occur when the signals are meaningful. Overall, the data suggest that the processes related to the comprehension of symbolic gestures and communicative words do not involve the primary motor area and probably rely on brain areas involved in semantics.


Subject(s)
Comprehension/physiology , Evoked Potentials, Motor/physiology , Gestures , Semantics , Vocabulary , Adult , Analysis of Variance , Electromyography , Female , Humans , Male , Reaction Time , Transcranial Magnetic Stimulation , Verbal Behavior , Young Adult
14.
Eur J Neurosci ; 39(5): 841-51, 2014 Mar.
Article in English | MEDLINE | ID: mdl-24289090

ABSTRACT

Request and emblematic gestures, despite both being communicative gestures, differ in terms of social valence. Indeed, only the former are used to initiate, maintain, or terminate an actual interaction. If such a difference holds, a relevant social cue, i.e., eye contact, should have a different impact on the neuronal underpinnings of the two types of gesture. We measured blood oxygen level-dependent signals, using functional magnetic resonance imaging, while participants watched videos of an actor, either blindfolded or not, performing emblems, request gestures, or meaningless control movements. A left-lateralized network was more activated by both types of communicative gestures than by meaningless movements, regardless of the accessibility of the actor's eyes. Strikingly, when eye contact was taken into account as a factor, a right-lateralized network was more strongly activated by emblematic gestures performed by the non-blindfolded actor than by those performed by the blindfolded actor. This modulation possibly reflects the integration of information conveyed by the eyes with the representation of emblems. Conversely, a wider right-lateralized network was more strongly activated by request gestures performed by the blindfolded actor than by those performed by the non-blindfolded actor. This probably reflects the effect of the conflict between the observed action and its associated contextual information, in which relevant social cues are missing.


Subject(s)
Brain Mapping , Brain/physiology , Cues , Gestures , Interpersonal Relations , Adult , Female , Functional Laterality/physiology , Hand , Humans , Image Interpretation, Computer-Assisted , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Male , Young Adult
15.
PLoS One ; 8(11): e81197, 2013.
Article in English | MEDLINE | ID: mdl-24278395

ABSTRACT

The present study aimed at determining how actions executed by two conspecifics can be coordinated with each other or, more specifically, how the observation of different phases of a reaching-grasping action is temporally related to the execution of a movement by the observer. Participants observed postures of initial finger opening, maximal finger aperture, and final finger closing of a grasp, after observation of an initial hand posture. Then, they opened or closed their right thumb and index finger (Experiments 1, 2, and 3). Response times decreased, whereas the acceleration and velocity of the actual finger movements increased, when participants observed the two late phases of grasp. In addition, the results ruled out the possibility that this effect was due to the salience of the visual stimulus when the hand was close to the target, and confirmed an effect of static hand postures in addition to the apparent hand motion arising from the succession of the initial hand posture and the grasp phase. In Experiments 4 and 5, the observation of grasp phases also modulated foot movements and the pronunciation of syllables. Finally, in Experiment 6, transcranial magnetic stimulation applied to the primary motor cortex 300 ms post-stimulus induced an increase in hand motor evoked potentials of the opponens pollicis muscle when participants observed the two late phases of grasp. These data suggest that the observation of grasp phases induced a simulation that was stronger during observation of finger closing. This produced shorter response times and greater acceleration and velocity of the subsequent movement. In general, our data suggest the best concatenation between two movements (one observed and the other executed) when the observed (and simulated) movement was about to be accomplished. The mechanism joining the observation of a conspecific's action with one's own movement may be a precursor of social functions. It may be at the basis of interactions between conspecifics and related to communication between individuals.


Subject(s)
Hand Strength/physiology , Psychomotor Performance/physiology , Adult , Analysis of Variance , Behavior , Biomechanical Phenomena , Female , Humans , Male , Reaction Time , Transcranial Magnetic Stimulation , Young Adult
16.
Front Hum Neurosci ; 7: 542, 2013.
Article in English | MEDLINE | ID: mdl-24046742

ABSTRACT

The present kinematic study aimed at determining whether the observation of arm/hand gestures performed by conspecifics affected an action apparently unrelated to the gesture (i.e., reaching-grasping). In three experiments we examined the influence of different gestures on action kinematics. We also analyzed the effects, on the same action, of words corresponding in meaning to the gestures. In Experiment 1, the investigated variables were type of gesture, valence, and the actor's gaze. Participants executed the action of reaching-grasping after discriminating whether the gestures produced by a conspecific were meaningful or not. The meaningful gestures were request or symbolic gestures, and their valence was positive or negative. They were presented by the conspecific either blindfolded or not. In control Experiment 2, we searched for effects of gaze alone and, in Experiment 3, for the effects of the same characteristics of words corresponding in meaning to the gestures and visually presented by the conspecific. Type of gesture, valence, and gaze influenced the kinematics of the actual action; these effects were similar to, but not the same as, those induced by words. We propose that the signal activated a response that made the actual action faster for gestures of negative valence, whereas for request signals and available gaze the response interfered with the actual action more than for symbolic signals and unavailable gaze. Finally, we propose the existence of a common circuit involved in the comprehension of gestures and words and in the activation of the consequent responses to them.

17.
PLoS One ; 7(5): e36390, 2012.
Article in English | MEDLINE | ID: mdl-22693550

ABSTRACT

One of the most important faculties of humans is understanding the behaviour of other conspecifics. The present study aimed at determining whether, in a social context, an individual's request gesture and gaze direction are enough to infer his/her intention to communicate, by searching for their effects on the kinematics of another individual's arm action. In four experiments, participants reached, grasped, and lifted a bottle filled with orange juice in the presence of an empty glass. In Experiment 1, the additional presence of a conspecific producing no request with hand or gaze did not modify the kinematics of the sequence. Conversely, Experiments 2 and 3 showed that the presence of a conspecific producing only a request for pouring, by holding the glass with his/her right hand, or only a request for communication, by using his/her gaze, affected the lifting and grasping phases of the sequence, respectively. Experiment 4 showed that hand gesture and eye contact produced simultaneously affected the entire sequence. The results suggest that the presence of both a request gesture and direct gaze produced by one individual changes the control of a motor sequence executed by another individual. We propose that a social request activates a social affordance that interferes with the control of any sequence, and that the gaze of the potential receiver who held the glass with her hand modulates the effectiveness of the manual gesture. This paradigm, if applied to individuals with autism spectrum disorder, could give new insight into the nature of their impairments in social interaction and communication.


Subject(s)
Eye , Gestures , Interpersonal Relations , Social Behavior , Adult , Biomechanical Phenomena , Female , Humans , Male , Young Adult
18.
Behav Brain Res ; 233(1): 130-40, 2012 Jul 15.
Article in English | MEDLINE | ID: mdl-22561125

ABSTRACT

We tested whether a system coupling hand postures related to gestures with the control of internal mouth articulators during vowel production exists, and whether it can be a precursor of a system relating hand/arm gestures to words. Participants produced unimanual and bimanual representational gestures expressing the meaning of LARGE or SMALL. Once the gesture was produced, they pronounced the vowels "A" or "I" in Experiment 1, the word "GRÀNDE" (large) or "PÌCCOLO" (small) in Experiment 2, and the pseudo-words "SCRÀNTA" or "SBÌCCARA" in Experiment 3. Mouth kinematics, hand kinematics, and voice spectra were recorded and analyzed. Unimanual gestures affected the voice spectra of the two vowels pronounced alone (Experiment 1). Bimanual gestures affected the voice spectra of /a/ and /i/ included in the words (Experiment 2), and both unimanual and bimanual gestures affected them in the pseudo-words (Experiment 3). The results support the hypothesis that a system coupling hand gestures to vowel production exists. Moreover, they suggest the existence of a more general system relating gestures to words.


Subject(s)
Gestures , Hand/innervation , Mouth/innervation , Posture/physiology , Psychomotor Performance/physiology , Vocabulary , Adult , Analysis of Variance , Biomechanical Phenomena , Female , Humans , Male , Movement/physiology , Spectrum Analysis , Verbal Behavior , Young Adult
19.
Exp Brain Res ; 218(4): 539-49, 2012 May.
Article in English | MEDLINE | ID: mdl-22411580

ABSTRACT

The present study aimed at determining whether the observation of two functionally compatible artefacts, that is, artefacts which potentially concur in achieving a specific function, automatically activates a motor programme of interaction between the two objects. To this purpose, an interference paradigm was used in which an artefact (a bottle filled with orange juice), the target of a reaching-grasping and lifting sequence, was presented alone or with a non-target object (distractor) of the same or a different semantic category, functionally compatible with the target or not. In experiment 1, the bottle was presented alone, or with an artefact (a sphere) or a natural distractor (an apple). In experiment 2, the bottle was presented with either the apple or a glass (an artefact) filled with orange juice, whereas in experiment 3, either an empty or a filled glass was presented. In the control experiment 4, we compared the kinematics of reaching-grasping and pouring with those of reaching-grasping and lifting. The kinematics of reach, grasp and lift were affected by distractor presentation. However, no difference was observed between the two distractors belonging to different semantic categories. In contrast, the presence of the empty rather than the filled glass affected the kinematics of the actual grasp. This suggests that an actual functional compatibility between target (the bottle) and distractor (the empty glass) was necessary to automatically activate a programme of interaction (i.e. pouring) between the two artefacts. This programme affected the programme actually executed (i.e. lifting). The results of the present study indicate that, in addition to affordances related to intrinsic object properties, "working affordances" related to a specific use of an artefact with another object can be activated on the basis of functional compatibility.


Subject(s)
Executive Function/physiology , Hand Strength/physiology , Hand , Movement/physiology , Psychomotor Performance/physiology , Adult , Analysis of Variance , Attention/physiology , Biomechanical Phenomena , Decision Making/physiology , Female , Humans , Male , Photic Stimulation , Reaction Time/physiology , Young Adult
20.
Behav Brain Res ; 225(1): 201-8, 2011 Nov 20.
Article in English | MEDLINE | ID: mdl-21802449

ABSTRACT

Are human kinematics automatically imitated when an individual observes transitive actions (i.e. actions directed upon an object) executed with different effectors and then executes either the same or a different action? In three experiments, participants executed reaching-grasping actions after observing reaching-grasping, bringing-to-the-mouth, foot-touching, and arrow-touching. Both power and precision interactions were presented. The kinematics of all movements were those typical of humans. The observed variations in velocity due to the type of interaction were imitated when an interacting biological effector (i.e. arm, mouth or foot) was presented. In contrast, no imitation was observed when a non-biological effector (i.e. the arrow) was presented. The results of the present study suggest that there exists a kinematic representation of the rules governing the types of interaction of biological effectors with objects. It is automatically activated by action observation and affects kinematic landmarks of the successive action, i.e. the peak velocities and accelerations. This representation is common to different biological effectors, probably because it encodes the aim shared by those actions.
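The kinematic landmarks mentioned above (peak velocity and peak acceleration) are conventionally extracted by numerically differentiating sampled position data from motion capture. As a minimal illustrative sketch only (not the authors' analysis pipeline; NumPy and a synthetic minimum-jerk reach profile are assumed here):

```python
import numpy as np

def kinematic_landmarks(position, dt):
    """Estimate peak velocity and peak acceleration (absolute values)
    from 1-D position samples taken at a fixed interval dt."""
    velocity = np.gradient(position, dt)          # first derivative
    acceleration = np.gradient(velocity, dt)      # second derivative
    return np.max(np.abs(velocity)), np.max(np.abs(acceleration))

# Synthetic reach: 0.3 m amplitude over 1 s, minimum-jerk position profile
t = np.linspace(0.0, 1.0, 200)
pos = 0.3 * (10 * t**3 - 15 * t**4 + 6 * t**5)
peak_v, peak_a = kinematic_landmarks(pos, t[1] - t[0])
```

For a minimum-jerk movement of amplitude D and duration T, peak velocity is analytically 1.875·D/T (here 0.5625 m/s), which the numerical estimate closely reproduces.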


Subject(s)
Imitative Behavior/physiology , Movement/physiology , Psychomotor Performance/physiology , Adult , Analysis of Variance , Biomechanical Phenomena , Female , Fingers/innervation , Hand Strength/physiology , Humans , Male , Observation , Photic Stimulation/methods , Wrist/innervation , Young Adult