Results 1-17 of 17
1.
J Voice ; 37(2): 293.e7-293.e23, 2023 Mar.
Article in English | MEDLINE | ID: mdl-33495033

ABSTRACT

OBJECTIVES: This study examines the effects of including acoustic research-based elements of the vocal expression of emotions in the singing lessons of acting students during a seven-week teaching period. This information may be useful in improving the training of interpretation in singing. STUDY DESIGN: Experimental comparative study. METHODS: Six acting students participated in seven weeks of extra training concerning voice quality in the expression of emotions in singing. Song samples were recorded before and after the training. A control group of six acting students was recorded twice within a seven-week period, during which they participated in ordinary training. All participants sang on the vowel [a:] and on a longer phrase expressing anger, sadness, joy, tenderness, and neutral states. The vowel and phrase samples were evaluated by 34 listeners for the perceived emotion. Additionally, the vowel samples were analyzed for formant frequencies (F1-F4), sound pressure level (SPL), spectral structure (alpha ratio = SPL(1500-5000 Hz) - SPL(50-1500 Hz)), harmonic-to-noise ratio (HNR), and perturbation (jitter, shimmer). RESULTS: The number of correctly perceived expressions improved in the test group's vowel samples, while no significant change was observed in the control group. The overall recognition was higher for the phrases than for the vowel samples. Of the acoustic parameters, F1 and SPL significantly differentiated emotions in both groups, and HNR specifically differentiated emotions in the test group. The alpha ratio differentiated emotion expression statistically significantly after training. CONCLUSIONS: The expression of emotion in the singing voice improved after seven weeks of voice quality training. F1, SPL, the alpha ratio, and HNR differentiated emotional expression, and the variation in these acoustic parameters became wider after training. Similar changes were not observed after seven weeks of ordinary voice training.
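The alpha ratio defined above lends itself to a direct computation. Below is a minimal sketch in Python, assuming a mono WAV recording of a sustained vowel; the band edges follow the abstract's definition, while the file name, the Welch-based spectrum estimate, and the window length are illustrative choices, not the study's actual procedure.

```python
# Alpha ratio sketch: SPL(1500-5000 Hz) - SPL(50-1500 Hz), in dB.
# "vowel_a.wav" is a hypothetical mono recording of a sustained vowel.
import numpy as np
from scipy.io import wavfile
from scipy.signal import welch

def band_level_db(freqs, psd, lo, hi):
    """Level (dB) of the spectral energy between lo and hi Hz."""
    band = (freqs >= lo) & (freqs < hi)
    return 10.0 * np.log10(np.sum(psd[band]) + 1e-12)

fs, x = wavfile.read("vowel_a.wav")
x = x.astype(np.float64)
freqs, psd = welch(x, fs=fs, nperseg=4096)

alpha = band_level_db(freqs, psd, 1500, 5000) - band_level_db(freqs, psd, 50, 1500)
print(f"alpha ratio: {alpha:.1f} dB")
```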


Subject(s)
Singing , Voice , Humans , Acoustics , Students , Emotions
2.
J Voice ; 2022 Jan 03.
Article in English | MEDLINE | ID: mdl-34991936

ABSTRACT

Everyday observations indicate that creaky voice has become common in Finland in recent years. Previous studies suggest that this trend is also occurring in other countries. This cross-sectional study investigates the use of creaky voice among Finnish university students from the 1990s to the 2010s. Material was obtained from a sound archive and consisted of 200 samples from normophonic speakers (95 males, 105 females; mean age 23.7 years, SD 3.3 years, range 19-35 years). Normophonia was checked by two speech therapists in a preliminary perceptual analysis. Thereafter, two voice specialists rated the amount of creak and strain on a scale of 0-4 (0 = none, 4 = a lot). The inter- and intrarater reliabilities of the listening evaluations were satisfactory (for creaky phonation, rho = 0.611, P < 0.001 for interrater and rho = 0.540, P < 0.001 for intrarater reliability; for strain, rho = 0.463, P < 0.001 and rho = 0.697, P < 0.001, respectively). The ratings revealed a significant increase in the amount of perceived creak in females (from 1.04, SD 0.69 to 1.55, SD 1.06; P < 0.05, Mann-Whitney U test). In males, no significant change was found; however, the frequency of creaky voice use increased in both genders. No male speakers from the 1990s were rated as using "a lot" of creaky voice, but 2.3% of male speakers from the 2010s received this rating. Male speakers rated "quite a lot" increased from 5.9% in the 1990s to 18.1% in the 2010s. Female speakers rated "a lot" increased from 0% to 6%, and female speakers rated "quite a lot" increased from 7% to 25.8% over the studied time periods. Creaky phonation and strain correlated slightly in males (rho = 0.24, P < 0.05) and moderately in females (rho = 0.55, P < 0.001). Age did not correlate with the amount of creaky phonation (rho = 0.005, P > 0.10 for males; rho = -0.011, P > 0.10 for females). It can be concluded that the prevalence of creaky voice has increased among young Finnish speakers, particularly females.
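The reliability figures above are Spearman correlations. As a minimal sketch of how such inter- and intrarater checks can be computed with scipy, using made-up ratings on the 0-4 scale (the study's actual rating data are not reproduced here):

```python
# Inter- and intrarater reliability via Spearman's rho.
# The rating arrays below are invented for illustration only.
from scipy.stats import spearmanr

rater1 = [1, 2, 0, 3, 1, 2, 4, 0, 1, 2]         # rater 1, creak ratings (0-4)
rater2 = [1, 1, 0, 3, 2, 2, 3, 0, 1, 3]         # rater 2, same samples
rater1_retest = [1, 2, 1, 3, 1, 2, 4, 0, 2, 2]  # rater 1, second listening

rho_inter, p_inter = spearmanr(rater1, rater2)
rho_intra, p_intra = spearmanr(rater1, rater1_retest)
print(f"interrater: rho = {rho_inter:.3f}, P = {p_inter:.3f}")
print(f"intrarater: rho = {rho_intra:.3f}, P = {p_intra:.3f}")
```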

3.
J Voice ; 35(4): 570-580, 2021 Jul.
Article in English | MEDLINE | ID: mdl-31708368

ABSTRACT

OBJECTIVE: This study examines the acoustic correlates of the vocal expression of emotions in contemporary commercial music (CCM) and classical styles of singing. This information may be useful in improving the training of interpretation in singing. STUDY DESIGN: This is an experimental comparative study. METHODS: Eleven female singers with a minimum of 3 years of professional-level singing training in CCM, classical, or both styles participated. They sang the vowel [ɑ:] at three pitches (A3 220 Hz, E4 330 Hz, and A4 440 Hz) expressing anger, sadness, joy, tenderness, and a neutral voice. Vowel samples were analyzed for fundamental frequency (fo), formant frequencies (F1-F5), sound pressure level (SPL), spectral structure (alpha ratio = SPL(1500-5000 Hz) - SPL(50-1500 Hz)), harmonics-to-noise ratio (HNR), perturbation (jitter, shimmer), onset and offset duration, sustain time, rate and extent of fo variation in vibrato, and rate and extent of amplitude vibrato. RESULTS: The parameters that were statistically significantly (RM-ANOVA, P ≤ 0.05) related to emotion expression in both genres were SPL, alpha ratio, F1, and HNR. Additionally, for CCM, significance was found in sustain time, jitter, shimmer, F2, and F4. When fo and SPL were set as covariates in the variance analysis, jitter, HNR, and F4 did not show pure dependence on expression. The alpha ratio, F1, F2, shimmer apq5, amplitude vibrato rate, and sustain time of vocalizations showed emotion-related variation independent of fo and SPL in the CCM style, while these parameters were related to fo and SPL in the classical style. CONCLUSIONS: The results differed somewhat for the CCM and classical styles. The alpha ratio showed less variation in the classical style, most likely reflecting the demand for a more stable voice source quality. The alpha ratio, F1, F2, shimmer, amplitude vibrato rate, and sustain time of the vocalizations were related to fo and SPL control in the classical style. The only sound parameter indicating emotional expression independently in both styles was SPL. The CCM style offers more freedom for expression-related changes in voice quality.
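The perturbation and noise measures named here (jitter, shimmer, HNR) are commonly extracted with Praat's algorithms. A sketch using the praat-parselmouth Python package with Praat's default analysis parameters; the file name is hypothetical, and this is not necessarily the pipeline used in the study:

```python
# Sketch: jitter, shimmer, and HNR for a sung vowel via praat-parselmouth.
# Parameter values are Praat's defaults; "vowel_a.wav" is hypothetical.
import parselmouth
from parselmouth.praat import call

snd = parselmouth.Sound("vowel_a.wav")
point_process = call(snd, "To PointProcess (periodic, cc)", 75, 500)

jitter_local = call(point_process, "Get jitter (local)",
                    0, 0, 0.0001, 0.02, 1.3)
shimmer_local = call([snd, point_process], "Get shimmer (local)",
                     0, 0, 0.0001, 0.02, 1.3, 1.6)
hnr = call(snd.to_harmonicity(), "Get mean", 0, 0)

print(f"jitter (local): {100 * jitter_local:.2f} %")
print(f"shimmer (local): {100 * shimmer_local:.2f} %")
print(f"HNR: {hnr:.1f} dB")
```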


Subject(s)
Singing , Voice , Emotions , Female , Humans , Speech Acoustics , Voice Quality
4.
J Voice ; 35(2): 326.e21-326.e28, 2021 Mar.
Article in English | MEDLINE | ID: mdl-31597605

ABSTRACT

OBJECTIVES: The present study aimed to investigate whether there are differences between Arabic-speaking and Finnish-speaking listeners in the impressions of a speaker's personality evoked by various intentional voice qualities. STUDY DESIGN: This is an experimental study. METHODS: Samples (N = 55) were gathered from native Finnish-speaking males (N = 4) and females (N = 5), who read a text passage of 43 words using eight different voice qualities: (1) habitual speaking voice, speaking with (2) a forward or (3) backward placement of the tongue, or with a (4) breathy, (5) tense, (6) creaky, (7) nasalized, or (8) denasalized voice. Native Arabic-speaking participants (34 males, 12 females; N = 46) were asked to evaluate the speech samples on a seven-point polarized scale by choosing 1-5 from a total of 18 contrasting pairs of personality traits. The listening tests were presented via Windows Media Player and a Genelec Biamp loudspeaker. Traits with evaluations of 30% or more were selected for the final analysis. The results were compared to those of native Finnish-speaking listeners (12 males, 38 females; N = 50). Statistical analyses were carried out using IBM SPSS Statistics 25. RESULTS: On the whole, both listener groups perceived the speakers' voice qualities similarly, although the Finnish-speaking listeners linked many voice qualities, especially nasal and denasal voices, with unpleasant and other negative personality traits. Moreover, somewhat opposing evaluations were given by the two language groups for voices with forward and backward placements of the tongue and for breathy and tense voices. In many cases, the evaluations by the Arabic-speaking listeners were more scattered. The speakers' sex also seemed to affect perceptions of personality. CONCLUSIONS: There seem to be similar stereotypical tendencies to relate certain voice qualities to certain personality traits, which is explainable by how the voice types are produced and used to express emotions. Some opposing trends between the two language groups may be related to language and cultural differences. A further study with larger listener groups, also including samples from native Arabic speakers, is needed to confirm the results of the present study.


Subject(s)
Speech Perception , Voice , Female , Finland , Humans , Language , Male , Personality , Voice Quality
5.
J Voice ; 33(4): 501-509, 2019 Jul.
Article in English | MEDLINE | ID: mdl-29478708

ABSTRACT

OBJECTIVES: This study examines the recognition of emotion in contemporary commercial music (CCM) and classical styles of singing. This information may be useful in improving the training of interpretation in singing. STUDY DESIGN: This is an experimental comparative study. METHODS: Thirteen singers (11 female, 2 male) with a minimum of 3 years' professional-level singing studies (in CCM or classical technique or both) participated. They sang at three pitches (females: a, e1, a1, males: one octave lower) expressing anger, sadness, joy, tenderness, and a neutral state. Twenty-nine listeners listened to 312 short (0.63- to 4.8-second) voice samples, 135 of which were sung using a classical singing technique and 165 of which were sung in a CCM style. The listeners were asked which emotion they heard. Activity and valence were derived from the chosen emotions. RESULTS: The percentage of correct recognitions out of all the answers in the listening test (N = 9048) was 30.2%. The recognition percentage for the CCM-style singing technique was higher (34.5%) than for the classical-style technique (24.5%). Valence and activation were better perceived than the emotions themselves, and activity was better recognized than valence. A higher pitch was more likely to be perceived as joy or anger, and a lower pitch as sorrow. Both valence and activation were better recognized in the female CCM samples than in the other samples. CONCLUSIONS: There are statistically significant differences in the recognition of emotions between classical and CCM styles of singing. Furthermore, in the singing voice, pitch affects the perception of emotions, and valence and activity are more easily recognized than emotions.


Subject(s)
Auditory Perception , Emotions , Recognition, Psychology , Singing , Voice Quality , Adult , Cues , Female , Humans , Male , Pitch Perception , Young Adult
6.
J Speech Lang Hear Res ; 61(4): 973-985, 2018 04 17.
Article in English | MEDLINE | ID: mdl-29587304

ABSTRACT

Purpose: Listening tests for emotion identification were conducted with 8- to 17-year-old children with hearing impairment (HI; N = 25) using cochlear implants and their 12-year-old peers with normal hearing (N = 18). The study examined the impact of musical interests and the acoustics of the stimuli on correct emotion identification. Method: The children completed a questionnaire covering their background information and musical interests. They then listened to vocal stimuli produced by actors (N = 5), consisting of nonsense sentences and prolonged vowels ([a:], [i:], and [u:]; N = 32) expressing excitement, anger, contentment, and fear. The children's task was to identify the emotions they heard in the samples by choosing from the provided options. The acoustics of the samples were studied using Praat software, and statistics were examined using SPSS 24 software. Results: The children with HI identified the emotions with 57% accuracy and the children with normal hearing with 75% accuracy. Female listeners were more accurate than male listeners in both groups. Those implanted before the age of 3 years identified emotions more accurately than the others (p < .05). No connection between the child's audiogram and correct identification was observed. Musical interests and voice quality parameters were found to be related to correct identification. Conclusions: Implantation age, musical interests, and voice quality tended to have an impact on correct emotion identification. Thus, in developing cochlear implants, it may be worth paying attention to the acoustic structures of vocal emotional expressions, especially the third formant frequency (F3). Supporting the musical interests of children with HI may help their emotional development and improve their social lives.


Subject(s)
Cochlear Implants , Emotions , Hearing Loss/rehabilitation , Music , Speech Perception , Voice Quality , Adolescent , Child , Female , Hearing Loss/psychology , Humans , Male , Sex Factors , Social Perception , Speech Acoustics , Time-to-Treatment
7.
J Voice ; 31(4): 508.e11-508.e16, 2017 Jul.
Article in English | MEDLINE | ID: mdl-27856093

ABSTRACT

OBJECTIVE: This study aimed to assess teachers' voice symptoms and noise in schools in Upper Egypt and to study possible differences between teachers in public and private schools. STUDY DESIGN: A cross-sectional analysis via questionnaire was carried out. METHODS: Four schools were chosen randomly to represent primary and preparatory schools as well as public and private ones. In these schools, a total of 140 teachers participated in the study. They answered a questionnaire on vocal and throat symptoms and their effects on working and social activities, as well as on the levels and effects of experienced noise. RESULTS: Of all teachers, 47.9% reported moderate or severe dysphonia within the last 6 months, and 21.4% reported daily dysphonia. All teachers reported frequently experiencing noise, with 82.2% experiencing it sometimes or always during the working day, resulting in a need to raise their voices. Teachers in public schools experienced more noise from nearby classes. CONCLUSION: The working conditions and vocal health of teachers in Upper Egypt, especially in public schools, are alarming.


Subject(s)
Noise , Occupational Diseases/epidemiology , Occupational Exposure/statistics & numerical data , Schools/statistics & numerical data , Voice Disorders/epidemiology , Adult , Cross-Sectional Studies , Egypt/epidemiology , Female , Humans , Loudness Perception , Male , Middle Aged , Young Adult
8.
Logoped Phoniatr Vocol ; 42(4): 160-166, 2017 Dec.
Article in English | MEDLINE | ID: mdl-27869518

ABSTRACT

The present study focused on gender differences in emotion identification from auditory and visual stimuli produced by two male and two female actors. Differences in emotion identification from nonsense samples, language samples, and prolonged vowels were investigated. It was also studied whether auditory stimuli alone can convey the emotional content of speech without visual stimuli, and vice versa. The aim was to gain better knowledge of vocal attributes and a more holistic understanding of the nonverbal communication of emotion. Females tended to be more accurate in emotion identification than males. Voice quality parameters played a role in emotion identification in both genders. The emotional content of the samples was conveyed best by nonsense sentences, better than by prolonged vowels or by a shared native language of the speakers and participants. Thus, vocal nonverbal communication tends to affect the interpretation of emotion even in the absence of language. Emotions were recognized better from visual stimuli than from auditory stimuli by both genders. Visual information about speech may not be connected to the language; instead, it may be based on the human ability to understand the kinetic movements of speech production more readily than the characteristics of the acoustic cues.


Subject(s)
Emotions , Pitch Perception , Recognition, Psychology , Speech Perception , Visual Perception , Acoustic Stimulation , Adult , Female , Humans , Male , Nonverbal Communication , Photic Stimulation , Sex Factors , Voice Quality
9.
Logoped Phoniatr Vocol ; 40(3): 129-35, 2015 Oct.
Article in English | MEDLINE | ID: mdl-24861103

ABSTRACT

The present study focused on the identification of emotions in cross-cultural conditions on different continents and among subjects with divergent language backgrounds. The aim was to investigate whether the perception of the basic emotions from nonsense vocal samples was universal or dependent on voice quality, musicality, and/or gender. Listening tests with 350 participants were conducted on location in a variety of cultures: China, Egypt, Estonia, Finland, Russia, Sweden, and the USA. The results suggested that voice quality parameters played a role in the identification of emotions in the absence of linguistic content. Cultural background may affect the interpretation of emotions more than the presumed universality would imply. Musical interest tended to facilitate emotion identification. No gender differences were found.


Subject(s)
Auditory Perception , Emotions , Voice Quality , Acoustic Stimulation , Acoustics , Adult , Audiometry, Speech , China , Cross-Cultural Comparison , Cues , Cultural Characteristics , Egypt , Europe , Female , Humans , Language , Male , Middle Aged , Music , Pitch Perception , Recognition, Psychology , Sex Factors , Surveys and Questionnaires , United States , Young Adult
10.
Logoped Phoniatr Vocol ; 40(4): 156-70, 2015 Dec.
Article in English | MEDLINE | ID: mdl-24998780

ABSTRACT

Vocal emotions are expressed either by speech or by singing. The difference is that in singing the pitch is predetermined, while in speech it may vary freely. It was therefore of interest to study whether there were voice quality differences between freely varying and mono-pitched vowels expressed by professional actors, who, given their profession, have to be able to express emotions both in speech and in singing. Electroglottographic and acoustic analyses were carried out on emotional utterances embedded in expressions with freely varying vowels [a:], [i:], [u:] (96 samples) and on mono-pitched protracted vowels (96 samples). The contact quotient (CQEGG) was calculated using 35%, 55%, and 80% threshold levels; three different levels were used in order to evaluate their effects on the emotion results. Genders were studied separately. The results suggested significant gender differences at the CQEGG 80% threshold level. SPL, CQEGG, and F4 were used to convey emotions, but to a lesser degree when F0 was predetermined. Moreover, females showed fewer significant variations than males. Both genders used a more hypofunctional phonation type in mono-pitched utterances than in the expressions with freely varying pitch. The present material warrants further study of the interplay between CQEGG threshold levels and formant frequencies, as well as listening tests to investigate the perceptual value of the mono-pitched vowels in the communication of emotions.
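The threshold-based contact quotient used here has a simple form: within one EGG cycle, CQEGG is the fraction of the period during which the signal exceeds a criterion level set between the cycle's minimum and maximum. A minimal numpy sketch, with a toy raised-cosine cycle standing in for real EGG data:

```python
# Threshold-based contact quotient (CQEGG) for one EGG cycle.
# level: threshold as a fraction of the cycle's peak-to-peak amplitude
# (the study used 35%, 55%, and 80%).
import numpy as np

def contact_quotient(cycle, level):
    """Fraction of the period during which the EGG exceeds the threshold."""
    cycle = np.asarray(cycle, dtype=float)
    threshold = cycle.min() + level * (cycle.max() - cycle.min())
    return float(np.mean(cycle > threshold))

# Toy EGG-like cycle (raised cosine), just to exercise the function.
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
egg_cycle = 0.5 * (1.0 - np.cos(2.0 * np.pi * t))
for level in (0.35, 0.55, 0.80):
    print(f"CQ at {int(level * 100)}%: {contact_quotient(egg_cycle, level):.2f}")
```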


Subject(s)
Acoustics , Electrodiagnosis , Emotions , Speech Acoustics , Speech Perception , Voice Quality , Auditory Threshold , Female , Humans , Male , Phonetics , Sex Factors , Sound Spectrography , Speech Production Measurement
11.
Front Psychol ; 4: 344, 2013.
Article in English | MEDLINE | ID: mdl-23801972

ABSTRACT

The present study focused on voice quality and the perception of the basic emotions from speech samples in cross-cultural conditions. It was examined whether voice quality, cultural or language background, age, or gender were related to the identification of the emotions. Professional actors (n = 2) and actresses (n = 2) produced nonsense sentences (n = 32) and protracted vowels (n = 8) expressing the six basic emotions, interest, and a neutral emotional state. The impact of musical interests on the ability to distinguish emotions or valence (on a positivity-neutrality-negativity axis) from the voice samples was also studied. Listening tests were conducted on location in five countries: Estonia, Finland, Russia, Sweden, and the USA, with 50 randomly chosen participants (25 males and 25 females) in each country. The participants (total N = 250) completed a questionnaire eliciting their background information and musical interests. The responses in the listening test and the questionnaires were statistically analyzed. Voice quality parameters and the share of emotions and valence identified correlated significantly with each other for both genders. The percentage of emotions and valence identified was clearly above chance level in each of the five countries studied; however, the countries differed significantly from each other in the emotions identified and with respect to the gender of the speaker. The samples produced by females were identified significantly better than those produced by males. Listener age was a significant variable, whereas only minor listener gender differences were found in identification. Perceptual confusion between emotions in the listening test seemed to depend on their similar voice production types. Musical interests tended to have a positive effect on the identification of emotions. The results also suggest that identifying emotions from speech samples may be easier for listeners who share a similar language or cultural background with the speaker.

12.
Logoped Phoniatr Vocol ; 38(1): 11-8, 2013 Apr.
Article in English | MEDLINE | ID: mdl-22587654

ABSTRACT

The aim of the present study was to investigate whether the glottal and filter variables of emotional expressions vary by the emotion and valence expressed. Prolonged emotional vowels (n = 96) were produced by professional actors and actresses (n = 4) expressing joy, surprise, interest, sadness, fear, anger, disgust, and a neutral emotional state. Acoustic parameters and the contact quotient from the electroglottographic signal (CQEGG) were calculated and statistically analyzed. Vocal fold contact time differed significantly between the emotional expressions, reflecting differences in phonation types. It was concluded that CQEGG may vary simultaneously and inversely with F3 and F4 in expressions of positive emotions. Changes in the lower pharynx and larynx may affect the higher formant frequencies.


Subject(s)
Acoustics , Electrodiagnosis , Emotions , Glottis/physiology , Phonation , Phonetics , Speech Acoustics , Voice Quality , Analysis of Variance , Biomechanical Phenomena , Female , Humans , Male , Sound Spectrography , Speech Production Measurement , Time Factors , Vocal Cords/physiology
13.
J Voice ; 24(1): 30-8, 2010 Jan.
Article in English | MEDLINE | ID: mdl-19111438

ABSTRACT

This study aimed to investigate the role of the voice source and formant frequencies in the perception of emotional valence and psychophysiological activity level from short vowel samples (approximately 150 milliseconds). Nine professional actors (five males and four females) read a prose passage simulating joy, tenderness, sadness, anger, and a neutral emotional state. The stress-carrying vowel [a:] was extracted from continuous speech during the Finnish word [ta:k:ahan] and analyzed for duration, fundamental frequency (F0), equivalent sound level (Leq), alpha ratio, and formant frequencies F1-F4. The alpha ratio was calculated by subtracting the Leq (dB) in the range 50 Hz-1 kHz from the Leq in the range 1-5 kHz. The samples were inverse filtered with Iterative Adaptive Inverse Filtering, and the resulting glottal flow estimates were parameterized with the normalized amplitude quotient (NAQ = fAC/(dpeakT)). Fifty listeners (mean age 28.5 years) identified the emotional valences from the randomized samples. Multinomial logistic regression analysis was used to study the interrelations of the parameters for perception. It appeared to be possible to identify valences from vowel samples of this short duration. NAQ tended to differentiate between the valences and activity levels perceived in both genders. The voice source may not only reflect variations in F0 and Leq but may also have an independent role in expression, reflecting phonation types. To some extent, formant frequencies appeared to be related to valence perception, but no clear patterns could be identified. The coding of valence tends to be a complicated multiparameter phenomenon with wide individual variation.
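The NAQ formula quoted above (AC flow amplitude divided by the product of the peak negative flow derivative and the period) can be sketched directly, assuming one period of an already inverse-filtered glottal flow estimate; the toy pulse below only exercises the function:

```python
# Normalized amplitude quotient: NAQ = f_AC / (d_peak * T),
# computed from one period of a glottal flow estimate.
import numpy as np

def naq(flow_period, fs):
    """flow_period: one period of glottal flow; fs: sampling rate in Hz."""
    flow = np.asarray(flow_period, dtype=float)
    T = len(flow) / fs                     # period duration (s)
    f_ac = flow.max() - flow.min()         # AC flow amplitude
    d_peak = -np.min(np.diff(flow) * fs)   # peak negative derivative magnitude
    return f_ac / (d_peak * T)

# Toy flow pulse (raised cosine) at roughly 200 Hz, for illustration only.
fs = 44100
n = int(fs * 0.005)                        # ~5 ms period
t = np.linspace(0.0, 1.0, n, endpoint=False)
flow = 0.5 * (1.0 - np.cos(2.0 * np.pi * t))
print(f"NAQ: {naq(flow, fs):.3f}")         # ~0.32 (1/pi) for this sinusoid
```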


Subject(s)
Emotions , Phonetics , Speech Perception , Speech , Adult , Female , Glottis/physiology , Humans , Language , Logistic Models , Male , Middle Aged , Psychoacoustics , Psycholinguistics , Sex Characteristics , Speech/physiology , Speech Acoustics , Speech Production Measurement , Time Factors
14.
Folia Phoniatr Logop ; 60(5): 249-55, 2008.
Article in English | MEDLINE | ID: mdl-18765945

ABSTRACT

Fundamental frequency (F0) and intensity are known to be important variables in the communication of emotions in speech. In singing, however, pitch is predetermined, and yet the voice should convey emotions; hence, other vocal parameters are needed to express them. This study investigated the role of voice source characteristics and formant frequencies in the communication of emotions in monopitched vowel samples [a:], [i:], and [u:]. Student actors (5 males, 8 females) produced the emotional samples simulating joy, tenderness, sadness, anger, and a neutral emotional state. Equivalent sound level (Leq), alpha ratio [SPL(1-5 kHz) - SPL(50 Hz-1 kHz)], and formant frequencies F1-F4 were measured. The [a:] samples were inverse filtered, and the estimated glottal flows were parameterized with the normalized amplitude quotient [NAQ = fAC/(dpeakT)]. Interrelations of the acoustic variables were studied by ANCOVA, considering the valence and psychophysiological activity of the expressions. Forty participants listened to the randomized samples (n = 210) to identify the emotions. The capacity of monopitched vowels for conveying emotions differed. Leq and NAQ differentiated activity levels, and NAQ also varied independently of Leq. In [a:], the filter (formant frequencies F1-F4) was related to valence. The interplay between the voice source and F1-F4 warrants a synthesis study.


Subject(s)
Emotions , Perception/physiology , Phonation/physiology , Pitch Perception/physiology , Voice Quality/physiology , Voice/physiology , Auditory Perception/physiology , Communication , Drama , Female , Humans , Language , Male , Pitch Discrimination
15.
Logoped Phoniatr Vocol ; 31(4): 153-6, 2006.
Article in English | MEDLINE | ID: mdl-17114127

ABSTRACT

The present study investigates the role of F3 in the perception of the valence of emotional expressions, using a vowel [a:] with different F3 values: the original, versions with F3 lowered or raised by 30% in frequency, and one with F3 removed. The vowel [a:] was extracted from the simulated emotions, inverse filtered, and manipulated. The resulting 12 synthesized samples were randomized and presented to 30 listeners, who evaluated the valence (positiveness/negativeness) of the expressions. The vowel with raised F3 was perceived as positive more often than the samples with the original (p = 0.063), lowered (p = 0.006), or removed F3 (p = 0.066). F3 may affect the perception of valence if the signal has sufficient energy in the high frequency range.


Subject(s)
Affect , Phonation/physiology , Voice Quality , Adult , Female , Humans , Male , Phonetics , Speech Perception
16.
Logoped Phoniatr Vocol ; 31(1): 43-8, 2006.
Article in English | MEDLINE | ID: mdl-16517522

ABSTRACT

The aim of this investigation was to study how well voice quality conveys emotional content that can be discriminated by human listeners and by computer. The speech data were produced by nine professional actors (four women, five men). The speakers simulated the following basic emotions in a unit consisting of a vowel extracted from running Finnish speech: neutral, sadness, joy, anger, and tenderness. The automatic discrimination was clearly more successful than human emotion recognition. Human listeners thus apparently need speech samples longer than vowel-length units for reliable emotion discrimination, whereas the machine utilizes quantitative parameters effectively even for short samples.
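The abstract does not specify the automatic discrimination method, so the following is only a hypothetical sketch of the general approach: train a classifier on quantitative acoustic parameters (e.g., F0, Leq, alpha ratio, jitter) extracted from vowel-length units. The features and labels below are random placeholders, and scikit-learn is an assumed tool, not the study's software:

```python
# Hypothetical sketch of automatic emotion discrimination from short vowel
# samples. The feature matrix and labels are random placeholders; the
# original study's classifier and feature set are not described here.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))        # 100 samples x 4 acoustic features
y = rng.integers(0, 5, size=100)     # 5 emotion classes

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```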


Subject(s)
Emotions , Speech Acoustics , Speech Perception/physiology , Adult , Female , Humans , Male , Middle Aged , Psycholinguistics , Recognition, Psychology , Voice
17.
Logoped Phoniatr Vocol ; 30(3-4): 181-4, 2005.
Article in English | MEDLINE | ID: mdl-16287660

ABSTRACT

Authentic Finnish-English speech data were collected as part of an English conversation class in a Finnish college. Intonation was coded using the framework involving 'tone', 'key', and 'termination'. A categorization of voice quality was chosen (e.g., 'modal voice', 'creak', 'breathy', and 'tense'), and the tempo of speech was transcribed with descriptors such as 'fast' and 'slow'. The majority of dispreferred turns in the data represented mitigated disagreement, marked by structural complexity (involving, e.g., wordiness). The p tone was predominant, and a low/mid key often accompanied mitigated disagreement. The r and r+ tones were virtually absent in the mitigated dispreferred turns. Instead, the speakers often used a very lax/breathy voice quality and a slow/decelerating tempo, often resulting in creak near a transition-relevance place.


Subject(s)
Language , Multilingualism , Speech Perception , Verbal Behavior , Voice Quality , Affect , Analysis of Variance , Humans , Speech Acoustics , Tape Recording , Time Factors