Results 1 - 20 of 48
1.
Hum Brain Mapp; 45(11): e26797, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39041175

ABSTRACT

Speech comprehension is crucial for human social interaction, relying on the integration of auditory and visual cues across various levels of representation. While research has extensively studied multisensory integration (MSI) using idealised, well-controlled stimuli, there is a need to understand this process in response to the complex, naturalistic stimuli encountered in everyday life. This study investigated behavioural and neural MSI in neurotypical adults experiencing audio-visual speech within a naturalistic, social context. Our novel paradigm incorporated a broader social situational context, complete words, and speech-supporting iconic gestures, allowing for context-based pragmatics and semantic priors. We investigated MSI in the presence of unimodal (auditory or visual) or complementary, bimodal speech signals. During audio-visual speech trials, compared to unimodal trials, participants recognised spoken words more accurately and showed a more pronounced suppression of alpha power, an indicator of heightened integration load. Importantly, on the neural level, these effects surpassed the mere summation of unimodal responses, suggesting non-linear MSI mechanisms. Overall, our findings demonstrate that typically developing adults integrate audio-visual speech and gesture information to facilitate speech comprehension in noisy environments, highlighting the importance of studying MSI in ecologically valid contexts.
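The statement that neural effects surpassed "mere summation of unimodal responses" refers to the standard additive-model test for multisensory integration. As a sketch of that criterion (the abstract does not report the exact contrast the authors computed):

\[ R_{AV} \neq R_{A} + R_{V} \]

where \(R_{AV}\), \(R_{A}\), and \(R_{V}\) denote the neural responses (here, alpha-power changes) in the audio-visual, auditory-only, and visual-only conditions; a reliable super- or sub-additive deviation indicates non-linear integration rather than a passive co-occurrence of unimodal processes.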


Subject(s)
Gestures; Speech Perception; Humans; Female; Male; Speech Perception/physiology; Young Adult; Adult; Visual Perception/physiology; Electroencephalography; Comprehension/physiology; Acoustic Stimulation; Speech/physiology; Brain/physiology; Photic Stimulation/methods
2.
J Acoust Soc Am; 156(1): 638-654, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39051718

ABSTRACT

This experimental study investigated whether infants use iconicity in speech and gesture cues to interpret word meanings. Specifically, we tested infants' sensitivity to size sound symbolism and iconic gesture cues and asked whether combining these cues in a multimodal fashion would enhance infants' sensitivity in a superadditive manner. Thirty-six 14- to 17-month-old infants participated in a preferential looking task in which they heard a spoken nonword (e.g., "zudzud") while observing a small and a large object (e.g., a small and a large square). All infants were presented with an iconic cue for object size (small or large): (1) in the pitch of the spoken nonword (high vs. low), (2) in gesture (small or large), or (3) congruently in pitch and gesture (e.g., a high pitch and a small gesture indicating a small square). Infants did not show a preference for congruently sized objects in any iconic cue condition. Bayes factor analyses showed moderate to strong support for the null hypotheses. In conclusion, 14- to 17-month-old infants did not use iconic pitch cues, iconic gesture cues, or iconic multimodal cues (pitch and gesture) to associate speech sounds with their referents. These findings challenge theories that emphasize the role of iconicity in early language development.
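For readers gauging the strength of this null result: a Bayes factor compares the evidence the data provide for two hypotheses. The abstract does not state the exact parameterization used, but under the common convention

\[ \mathrm{BF}_{01} = \frac{p(D \mid H_0)}{p(D \mid H_1)} \]

values between 3 and 10 are conventionally read as moderate evidence for the null hypothesis \(H_0\) and values above 10 as strong evidence, which matches the "moderate to strong" wording here.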


Subject(s)
Cues; Gestures; Speech Perception; Humans; Infant; Male; Female; Acoustic Stimulation; Bayes Theorem; Symbolism; Pitch Perception; Comprehension; Size Perception
3.
Dev Sci; 26(3): e13315, 2023 May.
Article in English | MEDLINE | ID: mdl-36059145

ABSTRACT

Previous research has shown a strong positive association between right-handed gesturing and vocabulary development. However, the causal nature of this relationship remains unclear. In the current study, we tested whether gesturing with the right hand enhances linguistic processing in the left hemisphere, which is contralateral to the right hand. We manipulated the gesture hand children used in pointing tasks to test whether it would affect their performance. In either a linguistic task (verb learning) or a non-linguistic control task (memory), 131 typically developing right-handed 3-year-olds were encouraged to use either their right hand or their left hand to respond. While encouraging children to use a specific hand to indicate their responses had no effect on memory performance, encouraging children to use the right hand, compared to the left hand, significantly improved their verb learning performance. This study is the first to show that manipulating the hand with which children are encouraged to gesture gives them a linguistic advantage. Language lateralization in healthy right-handed children typically involves a dominant left hemisphere. Producing right-handed gestures may therefore lead to increased activation in the left hemisphere, which may, in turn, facilitate forming and accessing lexical representations. It is important to note that this study manipulated gesture handedness among right-handers and therefore does not support the practice of encouraging children to become right-handed in manual activities. RESEARCH HIGHLIGHTS: Right-handed 3-year-olds were instructed to point to indicate their answers exclusively with their right or left hand in either a memory or verb learning task. Right-handed pointing was associated with improved verb generalization performance, but not improved memory performance. Thus, gesturing with the right hand, compared to the left hand, gives right-handed 3-year-olds an advantage in a linguistic but not a non-linguistic task. Right-handed pointing might lead to increased activation in the left hemisphere and facilitate forming and accessing lexical representations.


Subject(s)
Functional Laterality; Language; Child; Humans; Child, Preschool; Functional Laterality/physiology; Vocabulary; Hand/physiology; Gestures
4.
Cogn Sci; 46(6): e13160, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35665955

ABSTRACT

A considerable body of research has documented the emergence of what appears to be instrumental helping behavior in early childhood. The current study tested the hypothesis that one basic psychological mechanism motivating this behavior is a preference for completing unfinished actions. To test this, a paradigm was implemented in which 2-year-olds (n = 34, 16 females/18 males, mostly White middle-class children) could continue an adult's action when the adult no longer wanted to complete the action. The results showed that children continued the adult's actions more often when the goal had been abandoned than when it had been reached (OR = 2.37). This supports the hypothesis that apparent helping behavior in 2-year-olds is motivated by a preference for completing unfinished actions.
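The effect size is reported as an odds ratio. As a reminder of what OR = 2.37 means here (the abstract does not reproduce the underlying model, presumably a logistic regression over children's continuation responses):

\[ \mathrm{OR} = \frac{p_{\mathrm{abandoned}} / (1 - p_{\mathrm{abandoned}})}{p_{\mathrm{reached}} / (1 - p_{\mathrm{reached}})} = 2.37 \]

That is, the odds of a child continuing the adult's action were roughly 2.4 times higher when the goal had been abandoned than when it had been reached.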


Subject(s)
Helping Behavior; Motivation; Adult; Child; Child, Preschool; Female; Humans; Male
5.
J Exp Psychol Gen; 151(1): 262, 2022 Jan.
Article in English | MEDLINE | ID: mdl-35025581

ABSTRACT

Reports an error in "Prior experience with unlabeled actions promotes 3-year-old children's verb learning" by Suzanne Aussems, Katherine H. Mumford and Sotaro Kita (Journal of Experimental Psychology: General, Advanced Online Publication, Jul 15, 2021, np). In the original article, acknowledgment of and formatting for Economic and Social Research Council funding was omitted. The author note and copyright line now reflect the standard acknowledgment of and formatting for the funding received for this article. All versions of this article have been corrected. (The following abstract of the original article appeared in record 2021-63321-001). This study investigated what type of prior experience with unlabeled actions promotes 3-year-old children's verb learning. We designed a novel verb learning task in which we manipulated prior experience with unlabeled actions and the gesture type children saw with this prior experience. Experiment 1 showed that children (N = 96) successfully generalized more novel verbs when they had prior experience with unlabeled exemplars of the referent actions ("relevant exemplars"), but only if the referent actions were highlighted with iconic gestures during prior experience. Experiment 2 showed that children (N = 48) successfully generalized more novel verbs when they had prior experience with one relevant exemplar and an iconic gesture than with two relevant exemplars (i.e., the same referent action performed by different actors) shown simultaneously. However, children also successfully generalized verbs above chance in the two-relevant-exemplars condition (without the help of iconic gesture). Overall, these findings suggest that prior experience with unlabeled actions is an important first step in children's verb learning process, provided that children get a cue for focusing on the relevant information (i.e., actions) during prior experience so that they can create stable memory representations of the actions. Such stable action memory representations promote verb learning because they make the actions stand out when children later encounter labeled exemplars of the same actions. Adults can provide top-down cues (e.g., iconic gestures) and bottom-up cues (e.g., simultaneous exemplars) to focus children's attention on actions; however, iconic gesture is more beneficial for successful verb learning than simultaneous exemplars. (PsycInfo Database Record (c) 2022 APA, all rights reserved).


Subject(s)
Gestures; Verbal Learning; Adult; Child, Preschool; Humans
6.
J Exp Psychol Gen; 151(1): 246-262, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34264715

ABSTRACT

[Correction Notice: An Erratum for this article was reported online in Journal of Experimental Psychology: General on Jan 6 2022 (see record 2022-20753-001). In the original article, acknowledgment of and formatting for Economic and Social Research Council funding was omitted. The author note and copyright line now reflect the standard acknowledgment of and formatting for the funding received for this article. All versions of this article have been corrected.] This study investigated what type of prior experience with unlabeled actions promotes 3-year-old children's verb learning. We designed a novel verb learning task in which we manipulated prior experience with unlabeled actions and the gesture type children saw with this prior experience. Experiment 1 showed that children (N = 96) successfully generalized more novel verbs when they had prior experience with unlabeled exemplars of the referent actions ("relevant exemplars"), but only if the referent actions were highlighted with iconic gestures during prior experience. Experiment 2 showed that children (N = 48) successfully generalized more novel verbs when they had prior experience with one relevant exemplar and an iconic gesture than with two relevant exemplars (i.e., the same referent action performed by different actors) shown simultaneously. However, children also successfully generalized verbs above chance in the two-relevant-exemplars condition (without the help of iconic gesture). Overall, these findings suggest that prior experience with unlabeled actions is an important first step in children's verb learning process, provided that children get a cue for focusing on the relevant information (i.e., actions) during prior experience so that they can create stable memory representations of the actions. Such stable action memory representations promote verb learning because they make the actions stand out when children later encounter labeled exemplars of the same actions. Adults can provide top-down cues (e.g., iconic gestures) and bottom-up cues (e.g., simultaneous exemplars) to focus children's attention on actions; however, iconic gesture is more beneficial for successful verb learning than simultaneous exemplars. (PsycInfo Database Record (c) 2022 APA, all rights reserved).


Subject(s)
Gestures; Verbal Learning; Child, Preschool; Cues; Humans
7.
Psychol Sci; 32(7): 1073-1085, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34111370

ABSTRACT

Two-year-olds typically extend labels of novel objects by the objects' shape (shape bias), whereas adults do so by the objects' function. Is this because shape is conceptually easier to comprehend than function? To test whether the conceptual complexity of function prevents infants from developing a function bias, we trained twelve 17-month-olds (function-training group) to focus on objects' functions when labeling the objects over a period of 7 weeks. Our training was similar to previously used methods in which 17-month-olds were successfully taught to focus on the shape of objects, resulting in a precocious shape bias. We exposed another 12 infants (control group) to the same objects over 7 weeks but without labeling the items or demonstrating their functions. Only the infants in the function-training group developed a function bias. Thus, the conceptual complexity of function was not a barrier for developing a function bias, which suggests that the shape bias emerges naturally because shape is perceptually more accessible than function.


Subject(s)
Language Development; Bias; Female; Humans; Infant; Male
8.
J Exp Child Psychol; 209: 105171, 2021 Sep.
Article in English | MEDLINE | ID: mdl-33962107

ABSTRACT

Previous research has established that goal tracking emerges early in the first year of life and rapidly becomes increasingly sophisticated. However, it has not yet been shown whether young children continue to update their representations of others' goals over time. The current study investigated this by probing young children's (24- to 30-month-olds; N = 24) ability to differentiate between goal-directed actions that have been halted because the goal was interrupted and those that have been halted because the goal was abandoned. To test whether children are sensitive to this distinction, we manipulated the experimenter's reason for not completing a goal-directed action; his initial goal was either interrupted by an obstacle or abandoned in favor of an alternative. We measured whether children's helping behavior was sensitive to the experimenter's reason for not completing his goal-directed action by recording whether children completed the experimenter's initial goal or the alternative goal. The results showed that children helped to complete the experimenter's initial goal significantly more often after this goal had been interrupted than after it had been abandoned. These results support the hypothesis that children continue to update their representations of others' goals over time by 2 years of age and specifically that they differentiate between abandoned and interrupted goals.


Subject(s)
Goals; Motivation; Child; Child, Preschool; Humans
9.
Child Dev; 92(1): 124-141, 2021 Jan.
Article in English | MEDLINE | ID: mdl-32666515

ABSTRACT

This study investigated whether seeing iconic gestures depicting verb referents promotes two types of generalization. We taught 3- to 4-year-olds novel locomotion verbs. Children who saw iconic manner gestures during training generalized more verbs to novel events (first-order generalization) than children who saw interactive gestures (Experiment 1, N = 48; Experiment 2, N = 48) and path-tracing gestures (Experiment 3, N = 48). Furthermore, immediately (Experiments 1 and 3) and after 1 week (Experiment 2), the iconic manner gesture group outperformed the control groups in subsequent generalization trials with different novel verbs (second-order generalization), although all groups saw interactive gestures. Thus, seeing iconic gestures that depict verb referents helps children (a) generalize individual verb meanings to novel events and (b) learn more verbs from the same subcategory.


Subject(s)
Child Development/physiology; Comprehension; Gestures; Language Development; Learning/physiology; Speech Perception/physiology; Child, Preschool; Generalization, Psychological; Habits; Humans; Male
10.
PLoS One; 14(7): e0218707, 2019.
Article in English | MEDLINE | ID: mdl-31291274

ABSTRACT

This paper demonstrates a new quantitative approach for examining cross-linguistically shared and language-specific sound symbolism. Unlike most previous studies, which take a hypothesis-testing approach, we employed a data mining approach to uncover unknown sound-symbolic correspondences in the domain of locomotion, without limiting ourselves to pre-determined sound-meaning correspondences. In the experiment, we presented 70 locomotion videos to Japanese and English speakers and asked them to create a sound-symbolically matching word for each action. Participants also rated each action on five meaning variables. Multivariate analyses revealed cross-linguistically shared and language-specific sound-meaning correspondences within a single semantic domain. The present research also established that a substantial number of sound-symbolic links emerge from conventionalized form-meaning mappings in the speakers' native languages.


Subject(s)
Phonetics; Semantics; Symbolism; Vocabulary; Adult; Female; Humans; Japan; Locomotion/physiology; Male; Multivariate Analysis; Pattern Recognition, Physiological/physiology; Pattern Recognition, Visual/physiology; Running/physiology; Sound; Speech/physiology; United Kingdom; Video Recording; Walking/physiology
11.
Child Dev; 90(4): 1123-1137, 2019 Jul.
Article in English | MEDLINE | ID: mdl-29115673

ABSTRACT

An experiment with 72 three-year-olds investigated whether encoding events while seeing iconic gestures boosts children's memory representation of these events. The events, shown in videos of actors moving in an unusual manner, were presented with either iconic gestures depicting how the actors performed these actions, interactive gestures, or no gesture. In a recognition memory task, children in the iconic gesture condition remembered actors and actions better than children in the control conditions. Iconic gestures were categorized based on how much of the actors was represented by the hands (feet, legs, or body). Only iconic hand-as-body gestures boosted actor memory. Thus, seeing iconic gestures while encoding events facilitates children's memory of those aspects of events that are schematically highlighted by gesture.


Subject(s)
Gestures; Mental Recall; Child; Female; Hand; Humans; Male; Memory Consolidation
12.
Behav Res Methods; 50(3): 1270-1284, 2018 Jun.
Article in English | MEDLINE | ID: mdl-28916988

ABSTRACT

Human locomotion is a fundamental class of events, and manners of locomotion (e.g., how the limbs are used to achieve a change of location) are commonly encoded in language and gesture. To our knowledge, there is no openly accessible database containing normed human locomotion stimuli. Therefore, we introduce the GestuRe and ACtion Exemplar (GRACE) video database, which contains 676 videos of actors performing novel manners of human locomotion (i.e., moving from one location to another in an unusual manner) and videos of a female actor producing iconic gestures that represent these actions. The usefulness of the database was demonstrated across four norming experiments. First, the database contains clear matches and mismatches between iconic gesture videos and action videos. Second, the male and female actors whose action videos best matched the gestures perform the same actions in very similar manners and different actions in highly distinct manners. Third, all the actions in the database are distinct from each other. Fourth, adult native English speakers were unable to describe the 26 different actions concisely, indicating that the actions are unusual. This normed stimulus set is useful for experimental psychologists working in language, gesture, visual perception, categorization, memory, and other related domains.


Subject(s)
Gestures; Locomotion; Nonverbal Communication; Photic Stimulation/methods; Sign Language; Visual Perception; Adult; Databases, Factual; Female; Humans; Male; Video Recording
13.
Acta Psychol (Amst); 179: 89-95, 2017 Sep.
Article in English | MEDLINE | ID: mdl-28750209

ABSTRACT

This study examined spatial story representations created by a speaker's cohesive gestures. Participants were presented with a three-sentence discourse featuring two protagonists. In the first and second sentences, gestures consistently located the two protagonists in the gesture space: one to the right and the other to the left. The third sentence (without gestures) referred to one of the protagonists, and participants pressed one of two keys to indicate the relevant protagonist. The response keys were either spatially congruent or incongruent with the gesturally established locations of the two protagonists. Although the cohesive gestures provided no clue to the correct response, they influenced performance: reaction times were faster in the congruent condition than in the incongruent condition. Thus, cohesive gestures automatically establish spatial story representations, and these representations remain activated in a subsequent sentence without any accompanying gesture.


Subject(s)
Comprehension/physiology; Gestures; Spatial Memory/physiology; Speech Perception/physiology; Adolescent; Adult; Auditory Perception; Female; Humans; Male; Reaction Time; Speech/physiology; Young Adult
14.
Psychol Rev; 124(3): 245-266, 2017 Apr.
Article in English | MEDLINE | ID: mdl-28240923

ABSTRACT

People spontaneously produce gestures during speaking and thinking. The authors focus here on gestures that depict or indicate information related to the contents of concurrent speech or thought (i.e., representational gestures). Previous research indicates that such gestures have not only communicative functions but also self-oriented cognitive functions. In this article, the authors propose a new theoretical framework, the gesture-for-conceptualization hypothesis, which explains the self-oriented functions of representational gestures. According to this framework, representational gestures affect cognitive processes in four main ways: gestures activate, manipulate, package, and explore spatio-motoric information for speaking and thinking. These four functions are shaped by gesture's ability to schematize information, that is, to focus on a small subset of available information that is potentially relevant to the task at hand. The framework is based on the assumption that gestures are generated from the same system that generates practical actions, such as object manipulation; however, gestures are distinct from practical actions in that they represent information. The framework provides a novel, parsimonious, and comprehensive account of the self-oriented functions of gestures. The authors discuss how the framework accounts for gestures that depict abstract or metaphoric content, and they consider implications for the relations between the self-oriented and communicative functions of gestures. (PsycINFO Database Record (c) 2017 APA, all rights reserved)


Subject(s)
Cognition; Gestures; Speech; Thinking; Concept Formation; Humans
15.
J Exp Psychol Learn Mem Cogn; 43(6): 874-886, 2017 Jun.
Article in English | MEDLINE | ID: mdl-28080121

ABSTRACT

Research suggests that speech-accompanying gestures influence cognitive processes, but it is not clear whether the gestural benefit is specific to the gesturing hand. Two experiments tested the "(right/left) hand-specificity" hypothesis for self-oriented functions of gestures: gestures with a particular hand enhance cognitive processes involving the hemisphere contralateral to the gesturing hand. Specifically, we tested whether left-hand gestures enhance metaphor explanation, which involves right-hemispheric processing. In Experiment 1, right-handers explained metaphorical phrases (e.g., "to spill the beans," where beans represent pieces of information). Participants kept one hand (right or left) still while they were allowed to gesture spontaneously (or not) with their other, free hand. When the left hand was free, metaphor explanations were better when participants chose to gesture than when they did not; an analogous effect of gesturing was not found when the right hand was free. In Experiment 2, different right-handers performed the same metaphor explanation task, but, unlike in Experiment 1, they were encouraged to gesture with their left or right hand or not to gesture at all. Metaphor explanations were better when participants gestured with their left hand than when they did not gesture, whereas the right-hand gesture condition did not differ significantly from the no-gesture condition. Furthermore, we measured participants' mouth asymmetry during additional verbal tasks to determine individual differences in the degree of right-hemispheric involvement in speech production. Left-over-right mouth dominance, indicating stronger right-hemispheric involvement, positively correlated with the left-over-right-hand gestural benefit in metaphor explanation. These converging findings supported the "hand-specificity" hypothesis. (PsycINFO Database Record (c) 2017 APA, all rights reserved)


Subject(s)
Functional Laterality; Gestures; Hand; Metaphor; Speech; Hand/physiology; Humans; Linear Models; Male; Psycholinguistics; Psychomotor Performance; Reproducibility of Results; Speech/physiology; Young Adult
16.
Child Dev; 88(3): 964-978, 2017 May.
Article in English | MEDLINE | ID: mdl-27966800

ABSTRACT

Previous research has shown that children aged 4-5 years, but not 2-3 years, show adult-like interference from a partner when performing a joint task (Milward, Kita, & Apperly, 2014). This raises questions about the cognitive skills involved in the development of such "corepresentation" (CR) of a partner (Sebanz, Knoblich, & Prinz, 2003). Here, individual differences data from one hundred and thirteen 4- to 5-year-olds showed theory of mind (ToM) and inhibitory control (IC) to be predictors of the ability to avoid CR interference, suggesting that children with better ToM abilities are more likely to succeed in decoupling self and other representations in a joint task, while better IC likely helps children avoid interference from a partner's response when selecting their own response on the task.


Subject(s)
Child Behavior/physiology; Cooperative Behavior; Executive Function/physiology; Individuality; Inhibition, Psychological; Social Perception; Theory of Mind/physiology; Child, Preschool; Ego; Female; Humans; Male
17.
J Exp Psychol Learn Mem Cogn; 42(2): 257-270, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26237615

ABSTRACT

People spontaneously gesture when they speak (co-speech gestures) and when they solve problems silently (co-thought gestures). In this study, we first explored the relationship between these 2 types of gestures and found that individuals who produced co-thought gestures more frequently also produced co-speech gestures more frequently (Experiments 1 and 2). This suggests that the 2 types of gestures are generated from the same process. We then investigated whether both types of gestures can be generated from the representational use of the action generation process that also generates purposeful actions that have a direct physical impact on the world, such as manipulating an object or locomotion (the action generation hypothesis). To this end, we examined the effect of object affordances on the production of both types of gestures (Experiments 3 and 4). We found that individuals produced co-thought and co-speech gestures more often when the stimulus objects afforded action (objects with a smooth surface) than when they did not (objects with a spiky surface). These results support the action generation hypothesis for representational gestures. However, our findings are incompatible with the hypothesis that co-speech representational gestures are solely generated from the speech production process (the speech production hypothesis).


Subject(s)
Gestures; Psychomotor Performance; Speech; Adolescent; Adult; Female; Humans; Imagination; Male; Motion Perception; Photic Stimulation; Psycholinguistics; Psychological Tests; Young Adult
18.
Cogn Sci; 39(8): 1855-1880, 2015 Nov.
Article in English | MEDLINE | ID: mdl-25779093

ABSTRACT

We examined whether children's ability to integrate speech and gesture follows the pattern of a broader developmental shift between 3- and 5-year-old children (Ramscar & Gitcho, 2007) regarding the ability to process two pieces of information simultaneously. In Experiment 1, 3-year-olds, 5-year-olds, and adults were presented with an iconic gesture, a spoken sentence, or a combination of the two on a computer screen, and they were instructed to select the photograph that best matched the message. The 3-year-olds did not integrate information in speech and gesture, but 5-year-olds and adults did. In Experiment 2, 3-year-old children were presented with the same speech and gesture as in Experiment 1, but produced live by an experimenter. When presented live, 3-year-olds could integrate speech and gesture. We conclude that the development of the integration ability is part of the broader developmental shift; however, live presentation facilitates the nascent integration ability in 3-year-olds.


Subject(s)
Comprehension; Gestures; Speech Perception; Adult; Child, Preschool; Female; Humans; Male; Photic Stimulation; Semantics; Young Adult
19.
PLoS One; 10(2): e0116494, 2015.
Article in English | MEDLINE | ID: mdl-25695741

ABSTRACT

Sound symbolism, or the nonarbitrary link between linguistic sound and meaning, has often been discussed in connection with language evolution, where the oral imitation of external events links phonetic forms with their referents (e.g., Ramachandran & Hubbard, 2001). In this research, we explore whether sound symbolism may also facilitate synchronic language learning in human infants. Sound symbolism may be a useful cue particularly at the earliest developmental stages of word learning, because it potentially provides a way of bootstrapping word meaning from perceptual information. Using an associative word learning paradigm, we demonstrated that 14-month-old infants could detect Köhler-type (1947) shape-sound symbolism, and could use this sensitivity in their effort to establish a word-referent association.


Subject(s)
Sound; Symbolism; Female; Humans; Infant; Language Development; Male; Verbal Learning/physiology
20.
Cortex; 63: 196-205, 2015 Feb.
Article in English | MEDLINE | ID: mdl-25282057

ABSTRACT

A fundamental question in language development is how infants start to assign meaning to words. Here, using three electroencephalogram (EEG)-based measures of brain activity, we establish that preverbal 11-month-old infants are sensitive to the non-arbitrary correspondences between language sounds and concepts, that is, to sound symbolism. In each trial, infant participants were presented with a visual stimulus (e.g., a round shape) followed by a novel spoken word that either sound-symbolically matched ("moma") or mismatched ("kipi") the shape. An amplitude increase in the gamma band showed perceptual integration of visual and auditory stimuli in the match condition within 300 msec of word onset. Furthermore, phase synchronization between electrodes at around 400 msec revealed intensified large-scale, left-hemispheric communication between brain regions in the mismatch condition as compared to the match condition, indicating heightened processing effort when integration was more demanding. Finally, event-related brain potentials showed an increased adult-like N400 response, an index of semantic integration difficulty, in the mismatch as compared to the match condition. Together, these findings suggest that 11-month-old infants spontaneously map auditory language onto visual experience by recruiting a cross-modal perceptual processing system and a nascent semantic network within the first year of life.
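The "phase synchronization between electrodes" measure is typically quantified as a phase-locking value (PLV); the abstract does not name the exact metric used, but a standard formulation is

\[ \mathrm{PLV} = \left| \frac{1}{N} \sum_{n=1}^{N} e^{\,i\,(\phi_{1}(n) - \phi_{2}(n))} \right| \]

where \(\phi_{1}\) and \(\phi_{2}\) are the instantaneous phases at two electrodes across \(N\) trials; PLV ranges from 0 (no consistent phase relation) to 1 (perfect phase locking).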


Subject(s)
Brain/physiology; Evoked Potentials, Auditory/physiology; Language Development; Speech Perception/physiology; Acoustic Stimulation; Electroencephalography; Female; Humans; Infant; Language; Male; Symbolism