2.
Nat Commun ; 14(1): 4309, 2023 07 18.
Article in English | MEDLINE | ID: mdl-37463907

ABSTRACT

Speech processing requires extracting meaning from acoustic patterns using a set of intermediate representations based on a dynamic segmentation of the speech stream. Using whole-brain mapping obtained with fMRI, we investigate the locus of cortical phonemic processing not only for single phonemes but also for short phoneme combinations: diphones and triphones. We find that phonemic processing areas are much larger than previously described: they include not only the classical areas in the dorsal superior temporal gyrus but also a larger region in the lateral temporal cortex where diphone features are best represented. These identified phonemic regions overlap with the lexical retrieval region, but we show that short word retrieval is not sufficient to explain the observed responses to diphones. Behavioral studies have shown that phonemic processing and lexical retrieval are intertwined. Here, we have also identified candidate regions within the speech cortical network where this joint processing occurs.


Subject(s)
Speech Perception , Speech , Humans , Speech/physiology , Temporal Lobe/diagnostic imaging , Temporal Lobe/physiology , Brain/physiology , Speech Perception/physiology , Brain Mapping , Magnetic Resonance Imaging , Cerebral Cortex/diagnostic imaging
3.
J Neurosci ; 43(14): 2579-2596, 2023 04 05.
Article in English | MEDLINE | ID: mdl-36859308

ABSTRACT

Many social animals can recognize other individuals by their vocalizations. This requires a memory system capable of mapping incoming acoustic signals to one of many known individuals. Using the zebra finch, a social songbird that uses songs and distance calls to communicate individual identity (Elie and Theunissen, 2018), we tested the role of two cortical-like brain regions in a vocal recognition task. We found that the rostral region of the caudomedial nidopallium (NCM), a secondary auditory region of the avian pallium, was necessary for maintaining auditory memories for conspecific vocalizations in both male and female birds, whereas HVC (used as a proper name), a premotor area that gates auditory input into the vocal motor and song learning pathways in male birds (Roberts and Mooney, 2013), was not. Both NCM and HVC have previously been implicated in processing the tutor song in the context of song learning (Sakata and Yazaki-Sugiyama, 2020). Our results suggest that NCM might store not only songs as templates for future vocal imitation but also songs and calls for perceptual discrimination of vocalizers in both male and female birds. NCM could therefore operate as a site for auditory memories for vocalizations used in various facets of communication. We also observed that new auditory memories could be acquired without intact HVC or NCM, but that for these new memories NCM lesions caused deficits in either memory capacity or auditory discrimination. These results suggest that the high-capacity memory functions of the avian pallial auditory system depend on NCM.

SIGNIFICANCE STATEMENT: Many aspects of vocal communication require the formation of auditory memories. Voice recognition, for example, requires a memory of the acoustical features that identify individual vocalizers. In both birds and primates, the locus and neural correlates of these high-level memories remain poorly described. Previous work suggests that this memory formation is mediated by high-level sensory areas, not traditional memory areas such as the hippocampus. Using lesion experiments, we show that a secondary auditory brain region in songbirds that had previously been implicated in storing song memories for vocal imitation is also implicated in storing vocal memories for individual recognition. The role of the neural circuits in this region in interpreting the meaning of communication calls should be investigated in the future.


Subject(s)
Finches , Vocalization, Animal , Animals , Male , Female , Acoustic Stimulation , Learning , Brain , Auditory Perception
4.
Nat Commun ; 11(1): 2914, 2020 06 04.
Article in English | MEDLINE | ID: mdl-32499545

ABSTRACT

An amendment to this paper has been published and can be accessed via a link at the top of the paper.

5.
Sci Rep ; 10(1): 3561, 2020 Feb 21.
Article in English | MEDLINE | ID: mdl-32081889

ABSTRACT

An amendment to this paper has been published and can be accessed via a link at the top of the paper.

6.
PLoS Comput Biol ; 15(9): e1006698, 2019 09.
Article in English | MEDLINE | ID: mdl-31557151

ABSTRACT

Although information-theoretic approaches have been used extensively in the analysis of the neural code, they have yet to be used to describe how information is accumulated over time while sensory systems are categorizing dynamic sensory stimuli such as speech sounds or visual objects. Here, we present a novel method to estimate the cumulative information for stimuli or categories. We further define a time-varying categorical information index that, by comparing the information obtained for stimuli versus categories of these same stimuli, quantifies invariant neural representations. We use these methods to investigate the dynamic properties of avian cortical auditory neurons recorded in zebra finches that were listening to a large set of call stimuli sampled from the complete vocal repertoire of this species. We found that the time-varying rates carry 5 times more information than the mean firing rates, even in the first 100 ms. We also found that cumulative information has slow time constants (100-600 ms) relative to the typical integration time of single neurons, reflecting the fact that the behaviorally informative features of auditory objects are time-varying sound patterns. When we correlated firing rates and information values, we found that average information correlates with average firing rate, but that the higher rates found in the onset response yielded similar information values to the lower rates found in the sustained response: the onset and sustained responses of avian cortical auditory neurons provide similar levels of independent information about call identity and call type. Finally, our information measures allowed us to rigorously define categorical neurons; these categorical neurons show a high degree of invariance for vocalizations within a call type. Peak invariance is found around 150 ms after stimulus onset. Surprisingly, call-type invariant neurons were found in both primary and secondary avian auditory areas.
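
A minimal sketch of the cumulative-information idea (not the authors' code; the stimulus set, firing rates, and window sizes are invented for illustration): estimate the mutual information between stimulus identity and a response summary computed over windows of increasing duration. A plug-in estimator over total spike counts keeps the sketch short; the paper's method works on full spike patterns and corrects for estimation bias.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical setup: 8 "call" stimuli, 50 trials each, 600 ms of spiking
    # at 1 ms resolution, with stimulus-dependent time-varying rates.
    n_stim, n_trials, n_ms = 8, 50, 600
    rates = rng.uniform(0.005, 0.05, size=(n_stim, n_ms))        # spikes/ms
    spikes = rng.random((n_stim, n_trials, n_ms)) < rates[:, None, :]

    def mutual_information(x, y):
        """Plug-in MI (bits) between two discrete label arrays."""
        joint = np.zeros((x.max() + 1, y.max() + 1))
        for xi, yi in zip(x, y):
            joint[xi, yi] += 1
        joint /= joint.sum()
        px, py = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
        nz = joint > 0
        return np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz]))

    stim_labels = np.repeat(np.arange(n_stim), n_trials)
    for t_end in (100, 200, 400, 600):
        # Summarize each trial's response up to t_end by its total spike count
        # (the real method uses full spike patterns; counts keep this simple).
        counts = spikes[:, :, :t_end].sum(-1).ravel().astype(int)
        print(f"cumulative info at {t_end} ms: "
              f"{mutual_information(stim_labels, counts):.2f} bits")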


Subject(s)
Auditory Cortex , Models, Neurological , Neurons/physiology , Vocalization, Animal/physiology , Acoustic Stimulation , Animals , Auditory Cortex/cytology , Auditory Cortex/physiology , Computational Biology , Female , Finches/physiology , Male
7.
Nat Commun ; 9(1): 4026, 2018 10 02.
Article in English | MEDLINE | ID: mdl-30279497

ABSTRACT

Individual recognition is critical in social animal communication, but it has not been demonstrated for a complete vocal repertoire. Deciphering the nature of individual signatures across call types is necessary to understand how animals solve the problem of combining, in the same signal, information about identity and behavioral state. We show that distinct signatures differentiate zebra finch individuals for each call type. The distinctiveness of these signatures varies: contact calls bear strong individual signatures, while calls used during aggressive encounters are less individualized. We propose that the costly solution of using multiple signatures evolved because the passive filtering properties of the birds' vocal organ are too limited to generate sufficiently individualized features on their own. Thus, individual recognition requires the memorization of multiple signatures for the entire repertoire of conspecifics of interest. We show that zebra finches excel at these tasks.


Subject(s)
Auditory Perception , Discrimination, Psychological , Finches , Recognition, Psychology , Vocalization, Animal , Animals , Female , Male
8.
Sci Rep ; 8(1): 13826, 2018 09 14.
Article in English | MEDLINE | ID: mdl-30218053

ABSTRACT

Timbre, the unique quality of a sound that points to its source, allows us to quickly identify a loved one's voice in a crowd and distinguish a buzzy, bright trumpet from a warm cello. Despite its importance for perceiving the richness of auditory objects, timbre is a relatively poorly understood feature of sounds. Here we demonstrate for the first time that listeners adapt to the timbre of a wide variety of natural sounds. For each of several sound classes, participants were repeatedly exposed to two sounds (e.g., clarinet and oboe, male and female voice) that formed the endpoints of a morphed continuum. Adaptation to timbre resulted in consistent perceptual aftereffects, such that hearing sound A significantly altered perception of a neutral morph between A and B, making it sound more like B. Furthermore, these aftereffects were robust to moderate pitch changes, suggesting that adaptation to timbral features used for object identification drives these effects, analogous to face adaptation in vision.
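
Aftereffects along a morph continuum of this kind are commonly quantified by fitting a psychometric function to listeners' responses and measuring the shift of its midpoint (the point of subjective equality). A hedged sketch on simulated choice data, not the study's actual analysis:

    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(7)

    def psychometric(x, midpoint, slope):
        """Probability of reporting 'B' at morph level x (0 = A, 1 = B)."""
        return 1.0 / (1.0 + np.exp(-(x - midpoint) / slope))

    morph = np.linspace(0, 1, 9)
    n_rep = 40
    # Simulated choices: adapting to sound A shifts the midpoint toward A,
    # so the neutral morph is reported as "B" more often (the aftereffect).
    choices_base = rng.binomial(n_rep, psychometric(morph, 0.50, 0.08)) / n_rep
    choices_adapt = rng.binomial(n_rep, psychometric(morph, 0.42, 0.08)) / n_rep

    for name, data in [("baseline", choices_base), ("after A", choices_adapt)]:
        (mid, slope), _ = curve_fit(psychometric, morph, data, p0=[0.5, 0.1])
        print(f"{name}: point of subjective equality = {mid:.2f}")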


Subject(s)
Auditory Perception/physiology , Hearing/physiology , Pitch Perception/physiology , Acoustic Stimulation/methods , Adolescent , Adult , Female , Humans , Male , Music , Pitch Discrimination , Psychoacoustics , Sound , Sound Spectrography/methods , Voice , Young Adult
9.
Front Syst Neurosci ; 11: 61, 2017.
Article in English | MEDLINE | ID: mdl-29018336

ABSTRACT

Cognitive neuroscience has seen rapid growth in the size and complexity of data recorded from the human brain as well as in the computational tools available to analyze this data. This data explosion has resulted in an increased use of multivariate, model-based methods for asking neuroscience questions, allowing scientists to investigate multiple hypotheses with a single dataset, to use complex, time-varying stimuli, and to study the human brain under more naturalistic conditions. These tools come in the form of "Encoding" models, in which stimulus features are used to model brain activity, and "Decoding" models, in which neural features are used to generate a stimulus output. Here we review the current state of encoding and decoding models in cognitive electrophysiology and provide a practical guide toward conducting experiments and analyses in this emerging field. Our examples focus on using linear models in the study of human language and audition. We show how to calculate auditory receptive fields from natural sounds as well as how to decode neural recordings to predict speech. The paper aims to be a useful tutorial to these approaches, and a practical introduction to using machine learning and applied statistics to build models of neural activity. The data analytic approaches we discuss may also be applied to other sensory modalities, motor systems, and cognitive systems, and we cover some examples in these areas. In addition, a collection of Jupyter notebooks is publicly available as a complement to the material covered in this paper, providing code examples and tutorials for predictive modeling in Python. The aim is to provide a practical understanding of predictive modeling of human brain data and to propose best practices in conducting these analyses.
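
The notebooks the authors reference cover these models in depth; as a self-contained taste of the encoding-model workflow, here is a minimal ridge-regression sketch on simulated data (all names and dimensions are illustrative, not taken from the paper):

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Hypothetical data: 2000 time points of a 32-band "spectrogram" and a
    # response that depends linearly on recent stimulus history (5 lags).
    n_t, n_bands, n_lags = 2000, 32, 5
    spec = rng.standard_normal((n_t, n_bands))

    # Build a lagged (delayed) design matrix, the standard encoding-model
    # construction (wrap-around at the edges is ignored for brevity).
    X = np.hstack([np.roll(spec, lag, axis=0) for lag in range(n_lags)])
    true_w = rng.standard_normal(X.shape[1])
    y = X @ true_w + rng.standard_normal(n_t) * 5.0  # noisy neural response

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                              shuffle=False)
    model = Ridge(alpha=10.0).fit(X_tr, y_tr)        # regularized fit
    print("held-out R^2:", model.score(X_te, y_te))

    # The fitted weights, reshaped to (lags, bands), approximate the
    # neuron's spectrotemporal receptive field.
    strf = model.coef_.reshape(n_lags, n_bands)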

10.
Front Comput Neurosci ; 11: 68, 2017.
Article in English | MEDLINE | ID: mdl-28824408

ABSTRACT

The signal transformations that take place in high-level sensory regions of the brain remain enigmatic because of the many nonlinear transformations that separate the responses of these neurons from the input stimuli. One would like to have dimensionality reduction methods that can describe the responses of such neurons in terms of operations on a large but still manageable set of relevant input features. A number of methods have been developed for this purpose, but often these methods rely on an expansion of the input space to capture as many relevant stimulus components as statistically possible. This expansion leads to lower effective sampling, thereby reducing the accuracy of the estimated components. Alternatively, so-called low-rank methods explicitly search for a small number of components in the hope of achieving higher estimation accuracy. Even with these methods, however, noise in the neural responses can force the models to estimate more components than necessary, again reducing the methods' accuracy. Here we describe how a flexible regularization procedure, together with an explicit rank constraint, can strongly improve the estimation accuracy compared to previous methods suitable for characterizing neural responses to natural stimuli. Applying the proposed low-rank method to responses of auditory neurons in the songbird brain, we find multiple relevant components making up the receptive field for each neuron and characterize their computations in terms of logical OR and AND operations. The results highlight potential differences in how invariances are constructed in visual and auditory systems.
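
One simple way to see the benefit of a rank constraint, sketched on simulated data (this is not the paper's algorithm, which optimizes under the constraint with flexible regularization; here the rank is imposed post hoc by truncating an SVD of a full-rank ridge estimate):

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical rank-1 ground-truth STRF: a temporal profile times a
    # spectral profile.
    n_lags, n_bands = 20, 30
    strf_true = np.outer(rng.standard_normal(n_lags),
                         rng.standard_normal(n_bands))

    # Simulate responses to white-noise stimuli; fit a full-rank ridge STRF.
    n_t = 5000
    X = rng.standard_normal((n_t, n_lags * n_bands))
    y = X @ strf_true.ravel() + rng.standard_normal(n_t) * 10.0
    lam = 100.0
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

    # Impose the rank constraint after the fact by truncating the SVD.
    U, s, Vt = np.linalg.svd(w.reshape(n_lags, n_bands))
    strf_rank1 = s[0] * np.outer(U[:, 0], Vt[0])
    err_full = np.linalg.norm(w.reshape(n_lags, n_bands) - strf_true)
    err_low = np.linalg.norm(strf_rank1 - strf_true)
    print(f"full-rank error {err_full:.1f} vs rank-1 error {err_low:.1f}")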

11.
J Neurosci ; 37(27): 6539-6557, 2017 07 05.
Article in English | MEDLINE | ID: mdl-28588065

ABSTRACT

Speech comprehension requires that the brain extract semantic meaning from the spectral features represented at the cochlea. To investigate this process, we performed an fMRI experiment in which five men and two women passively listened to several hours of natural narrative speech. We then used voxelwise modeling to predict BOLD responses based on three different feature spaces that represent the spectral, articulatory, and semantic properties of speech. The amount of variance explained by each feature space was then assessed using a separate validation dataset. Because some responses might be explained equally well by more than one feature space, we used a variance partitioning analysis to determine the fraction of the variance that was uniquely explained by each feature space. Consistent with previous studies, we found that speech comprehension involves hierarchical representations starting in primary auditory areas and moving laterally on the temporal lobe: spectral features are found in the core of A1, mixtures of spectral and articulatory features in the STG, mixtures of articulatory and semantic features in the STS, and semantic features in the STS and beyond. Our data also show that both hemispheres are equally and actively involved in speech perception and interpretation. Further, responses as early in the auditory hierarchy as the STS are more correlated with semantic than with spectral representations. These results illustrate the importance of using natural speech in neurolinguistic research. Our methodology also provides an efficient way to simultaneously test multiple specific hypotheses about the representations of speech without using block designs and segmented or synthetic speech.

SIGNIFICANCE STATEMENT: To investigate the processing steps performed by the human brain to transform natural speech sound into meaningful language, we used models based on a hierarchical set of speech features to predict BOLD responses of individual voxels recorded in an fMRI experiment while subjects listened to natural speech. Both cerebral hemispheres were actively involved in speech processing in large and equal amounts. Also, the transformation from spectral features to semantic elements occurs early in the cortical speech-processing stream. Our experimental and analytical approaches are important alternatives and complements to standard approaches that use segmented speech and block designs, which report more lateralized speech processing and place semantic processing at higher levels of cortex than reported here.
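
A hedged sketch of the variance-partitioning logic on simulated data: fit each feature space alone and jointly, then apply inclusion-exclusion to the held-out R^2 values. The feature spaces and dimensions are invented; the study used cross-validated ridge regression per voxel.

    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(2)

    # Hypothetical voxel driven by two partially correlated feature spaces.
    n_t = 3000
    shared = rng.standard_normal((n_t, 5))
    A = np.hstack([shared, rng.standard_normal((n_t, 20))])  # "articulatory"
    B = np.hstack([shared, rng.standard_normal((n_t, 20))])  # "semantic"
    y = A @ rng.standard_normal(A.shape[1]) + rng.standard_normal(n_t)

    def heldout_r2(X, y, split=2000):
        m = Ridge(alpha=1.0).fit(X[:split], y[:split])
        return m.score(X[split:], y[split:])

    r2_a, r2_b = heldout_r2(A, y), heldout_r2(B, y)
    r2_ab = heldout_r2(np.hstack([A, B]), y)

    # Inclusion-exclusion: variance unique to each space, and shared by both.
    unique_a = r2_ab - r2_b
    unique_b = r2_ab - r2_a
    shared_ab = r2_a + r2_b - r2_ab
    print(f"unique A {unique_a:.2f}, unique B {unique_b:.2f}, "
          f"shared {shared_ab:.2f}")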


Subject(s)
Cerebral Cortex/physiology , Models, Neurological , Nerve Net/physiology , Speech Perception/physiology , Adult , Computer Simulation , Female , Humans , Male , Neural Pathways/physiology
12.
J Neurosci ; 37(13): 3491-3510, 2017 03 29.
Article in English | MEDLINE | ID: mdl-28235893

ABSTRACT

One of the most complex tasks performed by sensory systems is "scene analysis": the interpretation of complex signals as behaviorally relevant objects. The study of this problem, universal across species and sensory modalities, is particularly challenging in audition, where sounds from various sources and locations, degraded by propagation through the environment, sum to form a single acoustical signal. Here we investigated, in a songbird model (the zebra finch), the neural substrate for ranging and identifying a single source. We relied on ecologically and behaviorally relevant stimuli, contact calls, to investigate the neural discrimination of individual vocal signature as well as sound source distance when calls have been degraded through propagation in a natural environment. Performing electrophysiological recordings in anesthetized birds, we found neurons in the auditory forebrain that discriminate individual vocal signatures despite long-range degradation, as well as neurons discriminating propagation distance, with varying degrees of multiplexing between both information types. Moreover, the neural discrimination performance for individual identity was not affected by propagation-induced degradation beyond what was induced by the decreased intensity. For the first time, neurons with distance-invariant identity discrimination properties as well as distance-discriminant neurons are revealed in the avian auditory cortex. Because these neurons were recorded in animals that had prior experience neither with the vocalizers of the stimuli nor with long-range propagation of calls, we suggest that this neural population is part of a general-purpose system for vocalizer discrimination and ranging.

SIGNIFICANCE STATEMENT: Understanding how the brain makes sense of the multitude of stimuli that it continually receives in natural conditions is a challenge for scientists. Here we provide a new understanding of how the auditory system extracts behaviorally relevant information, the vocalizer identity and its distance to the listener, from acoustic signals that have been degraded by long-range propagation in natural conditions. We show, for the first time, that single neurons in the auditory cortex of zebra finches are capable of discriminating the individual identity and sound source distance in conspecific communication calls. The discrimination of identity in propagated calls relies on a neural coding that is robust to changes in intensity, signal quality, and signal-to-noise ratio.


Subject(s)
Action Potentials/physiology , Animal Communication , Auditory Cortex/physiology , Finches/physiology , Sensory Receptor Cells/physiology , Social Identification , Acoustic Stimulation/methods , Animals , Female , Male , Socialization
13.
Nat Commun ; 7: 13654, 2016 12 20.
Article in English | MEDLINE | ID: mdl-27996965

ABSTRACT

Experience shapes our perception of the world on a moment-to-moment basis. This robust perceptual effect of experience parallels a change in the neural representation of stimulus features, though the nature of this representation and its plasticity are not well understood. Spectrotemporal receptive field (STRF) mapping describes the neural response to acoustic features, and has been used to study contextual effects on auditory receptive fields in animal models. We performed an STRF plasticity analysis on electrophysiological data recorded directly from the human auditory cortex. Here, we report rapid, automatic plasticity of the spectrotemporal response of recorded neural ensembles, driven by previous experience with acoustic and linguistic information, and with a neurophysiological effect in the sub-second range. This plasticity reflects increased sensitivity to spectrotemporal features, enhancing the extraction of more speech-like features from a degraded stimulus and providing the physiological basis for the observed 'perceptual enhancement' in understanding speech.


Subject(s)
Auditory Cortex/physiology , Speech Intelligibility/physiology , Acoustic Stimulation , Animals , Auditory Cortex/anatomy & histology , Auditory Perception/physiology , Brain Mapping , Electrocorticography , Evoked Potentials, Auditory , Humans , Neuronal Plasticity/physiology , Phonetics
14.
Nature ; 532(7600): 453-8, 2016 Apr 28.
Article in English | MEDLINE | ID: mdl-27121839

ABSTRACT

The meaning of language is represented in regions of the cerebral cortex collectively known as the 'semantic system'. However, little of the semantic system has been mapped comprehensively, and the semantic selectivity of most regions is unknown. Here we systematically map semantic selectivity across the cortex using voxel-wise modelling of functional MRI (fMRI) data collected while subjects listened to hours of narrative stories. We show that the semantic system is organized into intricate patterns that seem to be consistent across individuals. We then use a novel generative model to create a detailed semantic atlas. Our results suggest that most areas within the semantic system represent information about specific semantic domains, or groups of related concepts, and our atlas shows which domains are represented in each area. This study demonstrates that data-driven methods, commonplace in studies of human neuroanatomy and functional connectivity, provide a powerful and efficient means for mapping functional representations in the brain.


Subject(s)
Brain Mapping , Cerebral Cortex/anatomy & histology , Cerebral Cortex/physiology , Semantics , Speech , Adult , Auditory Perception , Female , Humans , Magnetic Resonance Imaging , Male , Narration , Principal Component Analysis , Reproducibility of Results
15.
Anim Cogn ; 19(2): 285-315, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26581377

ABSTRACT

Although a universal code for the acoustic features of animal vocal communication calls may not exist, the thorough analysis of the distinctive acoustical features of vocalization categories is important not only to decipher the acoustical code for a specific species but also to understand the evolution of communication signals and the mechanisms used to produce and understand them. Here, we recorded more than 8000 examples of almost all the vocalizations of the domesticated zebra finch, Taeniopygia guttata: vocalizations produced to establish contact, to form and maintain pair bonds, to sound an alarm, to communicate distress, or to advertise hunger or aggressive intent. We characterized each vocalization type using complete representations that avoided any a priori assumptions about the acoustic code, as well as classical bioacoustics measures that could provide more intuitive interpretations. We then used these acoustical features to rigorously determine the potential information-bearing acoustical features for each vocalization type using both a novel regularized classifier and an unsupervised clustering algorithm. Vocalization categories are discriminated by the shape of their frequency spectrum and by their pitch saliency (noisy to tonal vocalizations) but not particularly by their fundamental frequency. Notably, the spectral shape of zebra finch vocalizations contains peaks, or formants, that vary systematically across categories and that would be generated by active control of both the vocal organ (source) and the upper vocal tract (filter).
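
A minimal sketch of the two analysis styles named above, on simulated features (the feature definitions and category structure are invented; the paper's regularized classifier and clustering algorithm are more elaborate):

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.cluster import KMeans
    from sklearn.metrics import adjusted_rand_score
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)

    # Hypothetical acoustic features (e.g. spectral-shape and pitch-saliency
    # measures) for 9 call categories, 100 renditions each.
    n_cat, n_per, n_feat = 9, 100, 12
    centers = rng.standard_normal((n_cat, n_feat)) * 2
    X = np.vstack([c + rng.standard_normal((n_per, n_feat)) for c in centers])
    labels = np.repeat(np.arange(n_cat), n_per)

    # Supervised: a shrinkage-regularized discriminant classifier,
    # cross-validated to estimate how well the features separate categories.
    clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
    print("classifier accuracy:", cross_val_score(clf, X, labels, cv=5).mean())

    # Unsupervised: do clusters recover the categories without the labels?
    pred = KMeans(n_clusters=n_cat, n_init=10, random_state=0).fit_predict(X)
    print("cluster/category agreement (ARI):",
          adjusted_rand_score(labels, pred))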


Subject(s)
Algorithms , Finches/physiology , Vocalization, Animal , Animals , Female , Male , Social Behavior , Sound Spectrography
16.
Horm Behav ; 75: 130-41, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26407661

ABSTRACT

Physiological resonance, where the physiological state of a subject generates the same state in a perceiver, has been proposed as a proximate mechanism facilitating pro-social behaviours. While mainly described in mammals, state matching in physiology and behaviour could be a phylogenetically shared trait among social vertebrates. Birds show complex social lives and cognitive abilities, and their monogamous pair bond is a highly coordinated partnership; we therefore hypothesised that birds express state matching between mates. We show that calls of male zebra finches Taeniopygia guttata produced during corticosterone treatment (after oral administration of exogenous corticosterone and during visual separation from the partner) provoke both an increase in corticosterone concentrations and behavioural changes in their female partner compared to control calls (regular calls emitted by the same male during visual separation from the partner only), whereas calls produced during corticosterone treatment by unfamiliar males have no such effect. Irrespective of the caller's status (mate/non-mate), the calls' acoustic properties were predictive of female corticosterone concentration after playback, but the identity of mate calls was necessary to fully explain female responses. Female responses were unlikely to be due to a failure of the call-based mate recognition system: in a discrimination task, females perceived calls produced during corticosterone treatment as more similar to the control calls of the same male than to control calls of other males, even after taking acoustical differences into account. These results constitute the first evidence of physiological resonance based solely on acoustic cues in birds, and support the presence of empathic processes.


Subject(s)
Empathy/physiology , Finches/physiology , Pair Bond , Vocalization, Animal/physiology , Acoustic Stimulation/veterinary , Animals , Corticosterone/blood , Cues , Female , Finches/blood , Male
17.
Eur J Neurosci ; 41(5): 546-67, 2015 Mar.
Article in English | MEDLINE | ID: mdl-25728175

ABSTRACT

Understanding how the brain extracts the behavioral meaning carried by specific vocalization types that can be emitted by various vocalizers and in different conditions is a central question in auditory research. This semantic categorization is a fundamental process required for acoustic communication, and presupposes discriminative and invariance properties of the auditory system for conspecific vocalizations. Songbirds have been used extensively to study vocal learning, but the communicative function of all their vocalizations and its neural representation have yet to be examined. In this study, we first generated a library containing almost the entire zebra finch vocal repertoire, and organised communication calls into nine different categories according to their behavioral meaning. We then investigated the neural representations of these semantic categories in the primary and secondary auditory areas of six anesthetised zebra finches. To analyse how single units encode these call categories, we described neural responses in terms of their discrimination, selectivity and invariance properties. Quantitative measures for these neural properties were obtained with an optimal decoder using both spike counts and spike patterns. Information theoretic metrics show that almost half of the single units encode semantic information. Neurons achieve higher discrimination of these semantic categories by being more selective and more invariant. These results demonstrate that computations necessary for semantic categorization of meaningful vocalizations are already present in the auditory cortex, and emphasise the value of a neuro-ethological approach to understanding vocal communication.
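
A minimal sketch of decoding call category from spike counts and scoring the result in bits (the Poisson decoder and all response statistics are invented for illustration; the paper's optimal decoder also used full spike patterns):

    import numpy as np
    from scipy.stats import poisson

    rng = np.random.default_rng(4)

    # Hypothetical responses: 9 call categories with category-specific mean
    # spike counts, Poisson trial-to-trial variability, 200 trials each.
    n_cat, n_trials = 9, 200
    means = rng.uniform(2, 20, size=n_cat)
    counts = rng.poisson(means[:, None], size=(n_cat, n_trials))

    # Decoder: maximum-likelihood category under the (assumed) Poisson model.
    loglik = poisson.logpmf(counts[..., None], means)  # (cat, trial, candidate)
    decoded = loglik.argmax(-1)

    # Confusion matrix, accuracy, and plug-in mutual information in bits.
    conf = np.zeros((n_cat, n_cat))
    for true_cat in range(n_cat):
        for d in decoded[true_cat]:
            conf[true_cat, d] += 1
    p = conf / conf.sum()
    px, py = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
    nz = p > 0
    info = np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz]))
    print(f"decoder accuracy {np.trace(p):.2f}, information {info:.2f} bits")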


Subject(s)
Auditory Cortex/physiology , Brain Mapping , Animals , Auditory Cortex/cytology , Female , Finches , Male , Neurons/physiology , Vocalization, Animal
18.
PLoS One ; 9(7): e102842, 2014.
Article in English | MEDLINE | ID: mdl-25061795

ABSTRACT

BACKGROUND: Assessing the active space of the various types of information encoded in songbirds' vocalizations is important for addressing questions related to species ecology (e.g. spacing of individuals) as well as social behavior (e.g. territorial and/or mating strategies). Most previous studies have investigated the degradation of species-specific information (species identity), and little is known about how finer-grained information (e.g. individual identity) is transmitted through the environment. Here we studied how the individual signature coded in the zebra finch long-distance contact call degrades with propagation. METHODOLOGY: We performed sound transmission experiments with zebra finches' distance calls at various propagation distances. The propagated calls were analyzed using discriminant function analyses on a set of analytical parameters describing separately the spectral and temporal envelopes, as well as on a complete spectrographic representation of the signals. RESULTS/CONCLUSION: We found that the individual signature is remarkably resistant to propagation, as caller identity can be recovered even at distances greater than a hundred meters. Male calls show stronger discriminability at long distances than female calls, and this difference can be explained by the more pronounced frequency modulation found in male calls. In both sexes, individual information is carried redundantly in multiple acoustical features. Interestingly, the features providing the highest discrimination at short distances are not the same ones that provide the highest discrimination at long distances.
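
The train-at-close-range, test-after-propagation logic can be sketched in a few lines (a toy degradation model with invented features and distances, not the study's analysis):

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(5)

    # Hypothetical: 10 callers, each with a fixed "signature" in 8 acoustic
    # features; propagation attenuates the signature and adds noise.
    n_birds, n_calls, n_feat = 10, 40, 8
    sig = rng.standard_normal((n_birds, n_feat)) * 3
    labels = np.repeat(np.arange(n_birds), n_calls)

    def calls_at(distance_m):
        atten = np.exp(-distance_m / 100.0)  # toy degradation model
        clean = np.repeat(sig, n_calls, axis=0) * atten
        return clean + rng.standard_normal(clean.shape)

    # Train a discriminant classifier on close-range calls, then test on the
    # same callers' calls after increasing amounts of simulated propagation.
    clf = LinearDiscriminantAnalysis().fit(calls_at(2), labels)
    for d in (2, 50, 100, 250):
        print(f"{d:4d} m: caller ID accuracy "
              f"{clf.score(calls_at(d), labels):.2f}")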


Subject(s)
Animal Communication , Auditory Perception/physiology , Finches/physiology , Vocalization, Animal/physiology , Acoustic Stimulation , Animals , Female , Male , Social Behavior , Species Specificity
19.
J Exp Biol ; 217(Pt 17): 3169-77, 2014 Sep 01.
Article in English | MEDLINE | ID: mdl-24948627

ABSTRACT

Reliable transmission of acoustic information about individual identity is of critical importance for pair bond maintenance in numerous monogamous songbirds. However, information transfer can be impaired by environmental constraints such as external noise or propagation-induced degradation. Birds have been shown to use several adaptive strategies to deal with difficult signal transmission contexts. Specifically, a number of studies have suggested that vocal plasticity at the emitter's level allows birds to counteract the deleterious effects of sound degradation. Although the communication process involves both the emitter and the receiver, perceptual plasticity at the receiver's level has received little attention. Here, we explored the reliability of individual recognition by female zebra finches (Taeniopygia guttata), testing whether perceptual training can improve discrimination of degraded individual vocal signatures. We found that female zebra finches are proficient in discriminating between calls of individual males at long distances, and even more so when they can train themselves with increasingly degraded signals over time. In this latter context, females succeed in discriminating between males at distances of up to 250 m. This result emphasizes that adaptation to adverse communication conditions may involve not only the emitter's vocal plasticity but also the receiver's decoding process through ongoing learning.


Subject(s)
Auditory Perception/physiology , Finches/physiology , Learning , Recognition, Psychology/physiology , Vocalization, Animal , Acoustics , Animal Communication , Animals , Female , Male , Pair Bond
20.
Emotion ; 14(4): 651-65, 2014 Aug.
Article in English | MEDLINE | ID: mdl-24866527

ABSTRACT

Emotional vocal signals are an important way of communicating norms to young infants. The second year of life is a period of increase in various forms of child transgression, but also a period when infants have limited linguistic abilities. Two studies investigated the hypothesis that mothers respond with different vocal emotional tones to three types of child transgressions: moral (harming others), prudential (harming oneself), and pragmatic (creating inconvenience, e.g., by spilling) transgressions. We used a combination of naturalistic observation (Study 1) and experimental manipulation (Study 2) to record, code, and analyze maternal vocal responses to child transgressions. Both studies showed that mothers were more likely to use intense, angry vocalizations in response to moral transgressions, fearful vocalizations in response to prudential transgressions, comforting vocalizations in response to pragmatic and prudential transgressions, and (in Study 2) playful vocalizations in response to pragmatic transgressions. Study 1 showed that this differential use of vocal tone is employed systematically in everyday life. Study 2 allowed us to standardize the context of the maternal intervention and perform additional acoustical analyses. A combination of principal component analysis and linear discriminant analysis applied to pitch and intensity data provided quantitative measures of the differences in vocal responses. These differentiated vocal responses are likely contributors to children's acquisition of norms from early in life.
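
A hedged sketch of the PCA-then-LDA pipeline described above, on simulated pitch and intensity contours (contour shapes, dimensions, and counts are all invented):

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(6)

    # Hypothetical pitch and intensity contours (20 samples each, stacked
    # into 40 features) for vocal responses to 3 transgression types,
    # 60 recordings per type.
    n_types, n_per = 3, 60
    t = np.linspace(0, 1, 20)
    protos = [np.concatenate([np.sin(2 * np.pi * (k + 1) * t), t * (k + 1)])
              for k in range(n_types)]
    X = np.vstack([p + rng.standard_normal((n_per, 40)) * 0.5 for p in protos])
    y = np.repeat(np.arange(n_types), n_per)

    # PCA reduces the contour dimensionality; LDA then separates the types,
    # mirroring the PCA + linear discriminant analysis named in the abstract.
    pipe = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
    print("cross-validated accuracy:", cross_val_score(pipe, X, y, cv=5).mean())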


Subject(s)
Infant Behavior/psychology , Mother-Child Relations/psychology , Mothers/psychology , Voice , Adult , Anger , Fear , Female , Humans , Infant , Male , Morals , Mothers/statistics & numerical data , Videotape Recording