1.
Folia Phoniatr Logop ; : 1-11, 2024 May 29.
Article in English | MEDLINE | ID: mdl-38810611

ABSTRACT

INTRODUCTION: This paper aimed to assess the impact of dysphonic voice on children's reception of a linguistic message by measuring their reaction times (RTs) to instructions given by functionally dysphonic and control female schoolteachers (STs).

METHODS: French minimal pairs such as /muʃ/ ("mouche", fly) versus /buʃ/ ("bouche", mouth), embedded in the carrier sentence "click on the drawing of…", were produced by two groups of 10 dysphonic and 10 control female STs, matched in age and years of experience. The phonemic contrasts tested were voicing, nasality, consonantal place of articulation, vowel roundedness, and vowel place of articulation. The experiment was presented as a computer game to children aged 7 to 10 years. Two images illustrating the target words were displayed, accompanied by the oral instructions recorded by the STs. Using a two-button box built for the experiment, children had to select as quickly as possible the image corresponding to the instruction.

RESULTS: Our results show that the RTs of all children, regardless of age, were affected by the STs' dysphonia, and that children had significantly longer RTs when discriminating minimal pairs contrasting in voicing if the instruction was given by a dysphonic speaker rather than by a control speaker.

CONCLUSION: These observations could be explained by the fact that functional dysphonia is associated with improper use of the vocal folds and thus an alteration of voicing.

2.
J Acoust Soc Am ; 150(6): 4429, 2021 12.
Article in English | MEDLINE | ID: mdl-34972287

ABSTRACT

Nursery rhymes, lullabies, and traditional stories are pieces of oral tradition that constitute an integral part of communication between caregivers and preverbal infants. Caregivers use a distinct acoustic style when singing or narrating to their infants. Unlike spontaneous infant-directed (ID) interactions, codified interactions benefit from highly stable acoustics due to their repetitive character. The aim of the study was to determine whether specific combinations of acoustic traits (i.e., vowel pitch, duration, spectral structure, and their variability) form characteristic "signatures" of different communicative dimensions during codified interactions, such as vocalization type, interactive stimulation, and infant-directedness. Bayesian analysis, applied to over 14 000 vowels from codified live interactions between mothers and their 6-month-old infants, showed that a few acoustic traits prominently characterize arousing vs. calm interactions and sung vs. spoken interactions. While pitch, duration, and their variation played a prominent role in constituting these signatures, more linguistic aspects such as vowel clarity showed small or no effects. Infant-directedness was identifiable in a larger set of acoustic cues than the other dimensions. These findings provide insights into the functions of acoustic variation in ID communication and into the potential role of codified interactions for infants' learning about communicative intent and the expressive forms typical of language and music.


Subject(s)
Mother-Child Relations, Speech Acoustics, Acoustics, Bayes Theorem, Communication, Humans, Infant
3.
Cogn Emot ; 26(4): 710-9, 2012.
Article in English | MEDLINE | ID: mdl-21851327

ABSTRACT

We examined what determines the typicality, or graded structure, of vocal emotion expressions. Separate groups of judges rated acted and spontaneous expressions of anger, fear, and joy with regard to their typicality and three main determinants of the graded structure of categories: category members' similarity to the central tendency of their category (CT); category members' frequency of instantiation, i.e., how often they are encountered as category members (FI); and category members' similarity to ideals associated with the goals served by their category, i.e., their suitability to express particular emotions. Partial correlations and multiple regression analysis revealed that similarity to ideals, rather than CT or FI, explained most of the variance in judged typicality. The results thus suggest that vocal emotion expressions constitute ideal-based, goal-derived categories rather than taxonomic categories based on CT and FI. This could explain how prototypical expressions can be acoustically distinct and highly recognisable yet occur relatively rarely in everyday speech.


Subject(s)
Emotions, Judgment, Speech, Acoustic Stimulation/methods, Acoustic Stimulation/psychology, Adult, Auditory Perception, Female, Goals, Humans, Male