Results 1 - 14 of 14
1.
Phonetica; 78(2): 141-168, 2021 Apr 27.
Article in English | MEDLINE | ID: mdl-33892529

ABSTRACT

The existence of word stress in Indonesian languages has been controversial. Recent acoustic analyses of Papuan Malay suggest that this language has word stress, counter to other studies and unlike closely related languages. The current study further investigates Papuan Malay by means of lexical (non-acoustic) analyses of two different aspects of word stress. In particular, this paper reports two distribution analyses of a word corpus, 1) investigating the extent to which stress patterns may help word recognition and 2) exploring the phonological factors that predict the distribution of stress patterns. The facilitating role of stress patterns in word recognition was investigated in a lexical analysis of word embeddings. The results show that Papuan Malay word stress (potentially) helps to disambiguate words. As for stress predictors, a random forest analysis investigated the effect of multiple morpho-phonological factors on stress placement. It was found that the mid vowels /ɛ/ and /ɔ/ play a central role in stress placement, refining the conclusions of previous work that mainly focused on /ɛ/. The current study confirms that non-acoustic research on stress can complement acoustic research in important ways. Crucially, the combined findings on stress in Papuan Malay so far give rise to an integrated perspective on word stress, in which phonetic, phonological and cognitive factors are considered.
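The abstract mentions a random forest analysis of morpho-phonological predictors of stress placement. The sketch below shows, in broad strokes, how such a variable-importance analysis could be set up with scikit-learn; the toy corpus, feature names and labels are invented, and the original study may well have used a different implementation (e.g. conditional inference forests in R).

```python
# Minimal sketch of a random forest analysis of stress placement.
# Data and feature names are hypothetical, not the authors' corpus.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Toy word corpus: each row is a word with morpho-phonological predictors.
words = pd.DataFrame({
    "penult_vowel": ["a", "ɛ", "i", "ɔ", "a", "ɛ", "u", "ɔ"],
    "final_vowel":  ["i", "a", "ɛ", "a", "u", "i", "a", "ɛ"],
    "n_syllables":  [2, 3, 2, 3, 2, 2, 3, 2],
    "suffixed":     [0, 1, 0, 1, 0, 0, 1, 0],
    "stress":       ["penult", "final", "penult", "penult",
                     "penult", "final", "final", "penult"],
})

X = pd.get_dummies(words.drop(columns="stress"))   # one-hot encode the vowel columns
y = words["stress"]

forest = RandomForestClassifier(n_estimators=500, random_state=1).fit(X, y)

# Variable importance: which predictors drive stress placement?
for name, importance in sorted(zip(X.columns, forest.feature_importances_),
                               key=lambda p: -p[1]):
    print(f"{name:20s} {importance:.3f}")
```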


Subjects
Language, Phonetics, Acoustics, Humans, Malaysia
2.
J Child Lang; 46(1): 111-141, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30334510

ABSTRACT

The perception and production of emotional and linguistic (focus) prosody were compared in children with cochlear implants (CI) and normally hearing (NH) peers. Thirteen CI and thirteen hearing-age-matched school-aged NH children were tested, as baseline, on non-verbal emotion understanding, non-word repetition, and stimulus identification and naming. Main tests were verbal emotion discrimination, verbal focus position discrimination, acted emotion production, and focus production. Productions were evaluated by NH adult Dutch listeners. Scores were comparable between groups, except for a lower score for the CI group on non-word repetition. Emotional prosody perception and production scores correlated weakly for CI children but were uncorrelated for NH children. In general, hearing age weakly predicted emotion production but not perception. Non-verbal emotional (but not linguistic) understanding predicted CI children's (but not controls') emotion perception and production. In conclusion, increasing time in sound might facilitate vocal emotional expression, possibly requiring independently maturing emotion perception skills.


Subjects
Cochlear Implantation, Deafness/rehabilitation, Speech Perception, Adolescent, Adult, Auditory Perception, Case-Control Studies, Child, Cochlear Implants, Female, Humans, Linguistics, Male
3.
J Speech Lang Hear Res; 61(12): 3075-3094, 2018 Dec 10.
Article in English | MEDLINE | ID: mdl-30515513

ABSTRACT

Purpose: Relative to normally hearing (NH) peers, the speech of children with cochlear implants (CIs) has been found to have deviations such as a high fundamental frequency, elevated jitter and shimmer, and inadequate intonation. However, two important dimensions of prosody (temporal and spectral) have not been systematically investigated. Given that, in general, the resolution in CI hearing is best for the temporal dimension and worst for the spectral dimension, we expected this hierarchy to be reflected in the amount of CI speech's deviation from NH speech. Deviations, however, were expected to diminish with increasing device experience. Method: Spontaneous speech of 9 Dutch early- and late-implanted children (divided at an implantation age of 2 years) and 12 hearing-age-matched NH controls was recorded at 18, 24, and 30 months after implantation (CI) or birth (NH). Six spectral and temporal outcome measures were compared between groups, sessions, and genders. Results: On most measures, interactions of Group and/or Gender with Session were significant. For CI recipients as compared with controls, performance on temporal measures was not in general more deviant than performance on spectral measures, although differences were found for individual measures. The late-implanted group tended to be closer to the NH group than the early-implanted group. Groups converged over time. Conclusions: Results did not support the phonetic dimension hierarchy hypothesis, suggesting that the appropriateness of the production of basic prosodic measures does not depend on auditory resolution. Rather, it seems to depend on the amount of control necessary for speech production.


Subjects
Age Factors, Cochlear Implants/psychology, Deafness/physiopathology, Speech Production Measurement/statistics & numerical data, Speech/physiology, Case-Control Studies, Child, Child Language, Child, Preschool, Cochlear Implantation, Deafness/psychology, Female, Humans, Infant, Male, Phonetics, Postoperative Period
4.
J Acoust Soc Am; 141(5): 3349, 2017 May.
Article in English | MEDLINE | ID: mdl-28599540

ABSTRACT

This study aimed to find the optimal filter slope for cochlear implant simulations (vocoding) by testing the effect of a wide range of slopes on the discrimination of emotional and linguistic (focus) prosody, with varying availability of F0 and duration cues. Forty normally hearing participants judged whether (non-)vocoded sentences were pronounced with happy or sad emotion, or with adjectival or nominal focus. Sentences were recorded as natural stimuli and manipulated to contain only emotion- or focus-relevant segmental duration or F0 information or both, and then noise-vocoded with 5, 20, 80, 120, and 160 dB/octave filter slopes. Performance increased with steeper slopes, but only up to 120 dB/octave, with bigger effects for emotion than for focus perception. For emotion, results with both cues most closely resembled results with F0, while for focus, results with both cues most closely resembled those with duration, showing that emotion perception relies primarily on F0 and focus perception on duration. This suggests that filter slopes affect focus perception less than emotion perception because for emotion, F0 is both more informative and more affected. The increase in performance up to extreme filter-slope values suggests that much improvement in prosody perception is still to be gained for CI users.
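The abstract describes noise vocoding with channel filter slopes from 5 to 160 dB/octave. As a rough illustration of the general technique, not the authors' implementation, the sketch below band-pass filters a signal, extracts each band's slow amplitude envelope, and uses it to modulate band-limited noise; the filter slope is approximated by choosing a Butterworth order of roughly slope/6 dB per octave. Function names, band edges and parameter values are hypothetical.

```python
# Minimal noise-vocoder sketch: band-pass analysis, envelope extraction,
# and noise-carrier modulation. Illustrative only.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def vocode(signal, fs, band_edges_hz, slope_db_per_oct=20, env_cutoff_hz=30):
    order = max(1, int(round(slope_db_per_oct / 6)))  # rough slope -> Butterworth order
    rng = np.random.default_rng(0)
    out = np.zeros_like(signal, dtype=float)
    env_sos = butter(2, env_cutoff_hz / (fs / 2), btype="low", output="sos")
    for lo, hi in zip(band_edges_hz[:-1], band_edges_hz[1:]):
        sos = butter(order, [lo / (fs / 2), hi / (fs / 2)],
                     btype="band", output="sos")
        band = sosfiltfilt(sos, signal)                  # analysis band
        envelope = sosfiltfilt(env_sos, np.abs(band))    # slow amplitude envelope
        carrier = sosfiltfilt(sos, rng.standard_normal(len(signal)))
        out += np.clip(envelope, 0, None) * carrier      # envelope-modulated noise band
    return out / (np.max(np.abs(out)) + 1e-12)

# Example: 5-band vocoder on one second of a synthetic "speech-like" signal.
fs = 16000
t = np.arange(fs) / fs
speech_like = np.sin(2 * np.pi * 150 * t) * (1 + 0.5 * np.sin(2 * np.pi * 3 * t))
edges = np.geomspace(100, 7000, 6)          # 5 logarithmically spaced bands
vocoded = vocode(speech_like, fs, edges, slope_db_per_oct=120)
```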


Subjects
Cochlear Implantation/instrumentation, Cochlear Implants, Cues (Psychology), Emotions, Phonetics, Speech Acoustics, Speech Perception, Voice Quality, Acoustic Stimulation, Adolescent, Adult, Audiometry, Pure-Tone, Audiometry, Speech, Auditory Threshold, Discrimination, Psychological, Electric Stimulation, Female, Humans, Male, Young Adult
5.
Iperception; 6(6): 2041669515613661, 2015 Dec.
Article in English | MEDLINE | ID: mdl-27551352

ABSTRACT

Two hypotheses have been advanced in the recent literature with respect to the so-called Interlanguage Speech Intelligibility Benefit (ISIB): a nonnative speaker will be better understood by another nonnative listener than a native speaker of the target language will be (a) only when the nonnatives share the same native language (matched interlanguage) or (b) even when the nonnatives have different mother tongues (non-matched interlanguage). Based on a survey of published experimental materials, the present article will demonstrate that both the restricted (a) and the generalized (b) hypotheses are false when the ISIB effect is evaluated in terms of absolute intelligibility scores. We will then propose a simple way to compute a relative measure for the ISIB (R-ISIB), which we claim is a more insightful way of evaluating the interlanguage benefit, and test the hypotheses in relative (R-ISIB) terms on the same literature data. We then find that our R-ISIB measure only supports the more restricted hypothesis (a) while rejecting the more general hypothesis (b). This finding shows that the native language shared by the interactants biases the listener toward interpreting sounds in terms of the phonology of the shared mother tongue.
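The abstract contrasts absolute intelligibility scores with a relative ISIB measure but does not give the formula. Purely to illustrate the distinction, the sketch below computes an absolute benefit and one hypothetical relative variant that normalizes each listener group's score on nonnative speech by its score on native speech; this is not necessarily the authors' R-ISIB definition.

```python
# Illustrative contrast between an absolute and a relative ISIB measure.
# The relative version is a hypothetical normalization, NOT the authors' R-ISIB.
def absolute_isib(nonnative_listener_score, native_listener_score):
    """Absolute benefit: nonnative vs. native listeners on the same nonnative speech."""
    return nonnative_listener_score - native_listener_score

def relative_isib(nonnative_on_nonnative, nonnative_on_native,
                  native_on_nonnative, native_on_native):
    """Hypothetical relative benefit: each listener group's score on nonnative
    speech is first normalized by that group's score on native speech."""
    return (nonnative_on_nonnative / nonnative_on_native
            - native_on_nonnative / native_on_native)

# Example with made-up intelligibility proportions:
print(absolute_isib(0.62, 0.70))               # negative: no absolute benefit
print(relative_isib(0.62, 0.75, 0.70, 0.95))   # positive: a relative benefit appears
```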

6.
Cochlear Implants Int; 16(2): 77-87, 2015 Mar.
Article in English | MEDLINE | ID: mdl-25001247

ABSTRACT

OBJECTIVES: Performance of cochlear implant (CI) users on linguistic intonation recognition is poorer than that of normal-hearing listeners, due to the limited spectral detail provided by the implant. A higher spectral resolution is provided by narrow rather than by broad filter slopes. The corresponding effect of the filter slope on the identification of linguistic intonation conveyed by pitch movements alone was tested using vocoder simulations. METHODS: Re-synthesized intonation variants of naturally produced phrases were processed by a 15-channel noise vocoder using a narrow (40 dB/octave) and a broad (20 dB/octave) filter slope. There were three different intonation patterns (rise/fall/rise-fall), differentiated purely by pitch and each associated with a different meaning. In both slope conditions as well as a condition with unprocessed stimuli, 24 normally hearing Dutch adults listened to a phrase, indicating which of two meanings was associated with it (i.e. a counterbalanced selection of two of the three contours). RESULTS: As expected, performance for the unprocessed stimuli was better than for the vocoded stimuli. No overall difference between the filter conditions was found. DISCUSSION AND CONCLUSIONS: These results are taken to indicate that neither the broad (20 dB/octave) nor the narrow (40 dB/octave) slope provides enough spectral detail to identify pure F0 intonation contours. For users of a certain class of CIs, results could imply that their intonation perception would not benefit from steeper slopes. For them, perception of pitch movements in language requires more extreme filter slopes, more electrodes, and/or additional (phonetic/contextual) cues.


Subjects
Audiometry, Pure-Tone/methods, Cochlear Implants, Pitch Discrimination, Speech Perception, Adolescent, Adult, Female, Humans, Male, Netherlands, Speech Acoustics, Speech Discrimination Tests, Young Adult
7.
Cogn Affect Behav Neurosci; 14(3): 1104-14, 2014 Sep.
Article in English | MEDLINE | ID: mdl-24515864

ABSTRACT

Recent evidence suggests a relative right-hemispheric specialization for emotional prosody perception, whereas linguistic prosody perception is under bilateral control. It is still unknown, however, how the hemispheric specialization for prosody perception might arise. Two main hypotheses have been put forward. Cue-dependent hypotheses, on the one hand, propose that hemispheric specialization is driven by specialization for the non-prosody-specific processing of acoustic cues. The functional lateralization hypothesis, on the other hand, proposes that hemispheric specialization is dependent on the communicative function of prosody, with emotional and linguistic prosody processing being lateralized to the right and left hemispheres, respectively. In the present study, the functional lateralization hypothesis of prosody perception was systematically tested by instructing one group of participants to evaluate the emotional prosody, and another group the linguistic prosody dimension of bidimensional prosodic stimuli in a dichotic-listening paradigm, while event-related potentials were recorded. The results showed that the right-ear advantage was associated with decreased latencies for an early negativity in the contralateral hemisphere. No evidence was found for functional lateralization. These findings suggest that functional lateralization effects for prosody perception are small and support the structural model of dichotic listening.


Subjects
Dominance, Ocular/physiology, Evoked Potentials, Auditory/physiology, Perception/physiology, Acoustic Stimulation, Adolescent, Adult, Analysis of Variance, Brain Mapping, Dichotic Listening Tests, Electroencephalography, Emotions/physiology, Female, Humans, Linguistics, Male, Reaction Time, Vocabulary, Young Adult
8.
Soc Cogn Affect Neurosci; 9(8): 1108-17, 2014 Aug.
Article in English | MEDLINE | ID: mdl-23681887

ABSTRACT

How we perceive emotional signals from our environment depends on our personality. Alexithymia, a personality trait characterized by difficulties in emotion regulation, has been linked to aberrant brain activity for visual emotional processing. Whether alexithymia also affects the brain's perception of emotional speech prosody is currently unknown. We used functional magnetic resonance imaging to investigate the impact of alexithymia on hemodynamic activity of three a priori regions of the prosody network: the superior temporal gyrus (STG), the inferior frontal gyrus and the amygdala. Twenty-two subjects performed an explicit task (emotional prosody categorization) and an implicit task (metrical stress evaluation) on the same prosodic stimuli. Irrespective of task, alexithymia was associated with a blunted response of the right STG and the bilateral amygdalae to angry, surprised and neutral prosody. Individuals with difficulty describing feelings deactivated the left STG and the bilateral amygdalae to a lesser extent in response to angry compared with neutral prosody, suggesting that they perceived angry prosody as relatively more salient than neutral prosody. In conclusion, alexithymia may be associated with a generally blunted neural response to speech prosody. Such restricted prosodic processing may contribute to problems in social communication associated with this personality trait.


Subjects
Affective Symptoms/physiopathology, Brain/physiopathology, Speech Perception/physiology, Brain Mapping, Emotions/physiology, Female, Humans, Judgment/physiology, Magnetic Resonance Imaging, Male, Psychiatric Status Rating Scales, Speech Acoustics, Young Adult
9.
Neuropsychologia; 50(12): 2752-2763, 2012 Oct.
Article in English | MEDLINE | ID: mdl-22841991

ABSTRACT

With the advent of neuroimaging, considerable progress has been made in uncovering the neural network involved in the perception of emotional prosody. However, the exact neuroanatomical underpinnings of the emotional prosody perception process remain unclear. Furthermore, the intrahemispheric basis of the relative right-hemispheric specialization for emotional prosody perception found previously in the lesion literature is unknown. In an attempt to shed light on these issues, quantitative meta-analyses of the neuroimaging literature were performed to investigate which brain areas are robustly associated with stimulus-driven and task-dependent perception of emotional prosody. Also, lateralization analyses were performed to investigate whether statistically reliable hemispheric specialization across studies can be found in these networks. A bilateral temporofrontal network was found to be implicated in emotional prosody perception, generally supporting previously proposed models of emotional prosody perception. Right-lateralized convergence across studies was found in (early) auditory processing areas, suggesting that the right hemispheric specialization for emotional prosody perception reported previously in the lesion literature might be driven by hemispheric specialization for non-prosody-specific fundamental acoustic dimensions of the speech signal.


Subjects
Brain/physiology, Emotions/physiology, Speech Perception/physiology, Brain Mapping, Female, Functional Laterality, Functional Neuroimaging, Humans, Likelihood Functions, Magnetic Resonance Imaging, Male
10.
J Cogn Neurosci; 24(8): 1725-41, 2012 Aug.
Article in English | MEDLINE | ID: mdl-22360592

ABSTRACT

The phenomenon of affective priming has caught scientific interest for over 30 years, yet the nature of the affective priming effect remains elusive. This study investigated the underlying mechanism of cross-modal affective priming and the influence of affective incongruence in music and speech on negativities in the N400 time-window. In Experiment 1, participants judged the valence of affective targets (affective categorization). We found that music and speech targets were evaluated faster when preceded by affectively congruent visual word primes, and vice versa. This affective priming effect was accompanied by a significantly larger N400-like effect following incongruent targets. In this experiment, both spreading of activation and response competition could underlie the affective priming effect. In Experiment 2, participants categorized the same affective targets based on nonaffective characteristics. However, as prime valence was irrelevant to the response dimension, affective priming effects could no longer be attributable to response competition. In Experiment 2, affective priming effects were observed neither at the behavioral nor electrophysiological level. The results of this study indicate that both affective music and speech prosody can prime the processing of visual words with emotional connotations, and vice versa. Affective incongruence seems to be associated with N400-like effects during evaluative categorization. The present data further suggest a role of response competition during the affective categorization of music, prosody, and words with emotional connotations.


Subjects
Affect/physiology, Electroencephalography/methods, Evoked Potentials/physiology, Music/psychology, Speech/physiology, Adult, Electroencephalography/instrumentation, Female, Humans, Male, Neuropsychological Tests, Young Adult
11.
Phonetica; 68(3): 120-32, 2011.
Article in English | MEDLINE | ID: mdl-22143147

ABSTRACT

Although much has been written on the relative importance of acoustic correlates of linguistic stress for the listener, the role of spectral expansion/reduction has been much understudied. The present article is the first to study the role of spectral expansion/reduction in a two-parameter study together with temporal structure, exploiting systematic variation of both parameters in a 7 × 7 stimulus space. We used a single minimal stress pair in Dutch, a language in which all classic acoustic correlates of stress were shown earlier to be relevant in single-parameter studies, i.e. pitch movement, intensity (loudness), temporal organization and spectral expansion/reduction. The results of our study reconfirmed that temporal organization is a strong cue to stress perception when target words are presented out of focus (i.e. without a pitch accent on the target). Spectral expansion/reduction was a very weak stress cue; its effect was noticeable only when temporal structure was ambiguous between initial and final stress. These results suggest that spectral expansion/reduction is indeed the weakest of the four cues traditionally identified in the literature, at least in stress-accent languages such as English and Dutch.
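The study above fully crosses temporal structure with spectral expansion/reduction in a 7 × 7 stimulus space. A minimal sketch of building such a factorial stimulus grid, with hypothetical step values standing in for the two manipulated parameters:

```python
# Full factorial 7 x 7 two-parameter stimulus grid; step values are invented.
from itertools import product

temporal_steps = [round(0.5 + 0.5 * i / 6, 3) for i in range(7)]  # e.g. duration ratios
spectral_steps = [round(-1 + 2 * i / 6, 3) for i in range(7)]     # e.g. expansion factors

stimuli = [{"stimulus_id": n, "temporal": t, "spectral": s}
           for n, (t, s) in enumerate(product(temporal_steps, spectral_steps), 1)]

assert len(stimuli) == 49   # every combination of the two parameters
```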


Subjects
Cues (Psychology), Phonetics, Speech Acoustics, Speech Perception, Adolescent, Adult, Female, Humans, Language, Male, Middle Aged, Models, Statistical, Netherlands, Time Factors, Young Adult
12.
Neuropsychologia; 49(13): 3722-38, 2011 Nov.
Article in English | MEDLINE | ID: mdl-21964199

ABSTRACT

It is unclear whether there is hemispheric specialization for prosodic perception and, if so, what the nature of this hemispheric asymmetry is. Using the lesion approach, many studies have attempted to test whether there is hemispheric specialization for emotional and linguistic prosodic perception by examining the impact of left vs. right hemispheric damage on prosodic perception task performance. However, so far no consensus has been reached. In an attempt to find a consistent pattern of lateralization for prosodic perception, a meta-analysis was performed on 38 lesion studies (including 450 left hemisphere damaged patients, 534 right hemisphere damaged patients and 491 controls) of prosodic perception. It was found that both left and right hemispheric damage compromise emotional and linguistic prosodic perception task performance. Furthermore, right hemispheric damage degraded emotional prosodic perception more than left hemispheric damage (trimmed g = -0.37, 95% CI [-0.66; -0.09], N = 620 patients). It is concluded that prosodic perception is under bihemispheric control with relative specialization of the right hemisphere for emotional prosodic perception.
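The meta-analysis above reports a trimmed Hedges' g with a 95% confidence interval. The sketch below shows how a per-study Hedges' g and an approximate confidence interval can be computed from group means, standard deviations and sample sizes (standard Borenstein-style formulas); it does not reproduce the trimming or pooling used in the paper, and the example numbers are invented.

```python
# Per-study Hedges' g for a patients-vs-controls comparison, the kind of
# standardized mean difference a lesion meta-analysis pools across studies.
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Bias-corrected standardized mean difference (group 1 minus group 2)
    with an approximate 95% confidence interval."""
    s_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / s_pooled
    j = 1 - 3 / (4 * (n1 + n2) - 9)                 # small-sample correction
    g = j * d
    var_d = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
    se_g = j * math.sqrt(var_d)
    return g, (g - 1.96 * se_g, g + 1.96 * se_g)

# Example: right-hemisphere-damaged patients score lower than controls on an
# emotional prosody task (hypothetical means, SDs and group sizes).
g, ci = hedges_g(m1=14.2, sd1=3.1, n1=20, m2=17.0, sd2=2.8, n2=22)
print(f"g = {g:.2f}, 95% CI [{ci[0]:.2f}; {ci[1]:.2f}]")
```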


Subjects
Brain Injuries/physiopathology, Dominance, Cerebral/physiology, Emotions/physiology, Perception/physiology, Speech Perception/physiology, Databases, Factual/statistics & numerical data, Humans, Linguistics
13.
Phonetica; 63(2-3): 149-74, 2006.
Article in English | MEDLINE | ID: mdl-17028460

ABSTRACT

The aim of this study is to find experimental support for impressionistic claims that there are prosodic differences between the dialects of Orkney and Shetland. It was found that native listeners had no difficulty in discriminating between Orkney and Shetland dialects when presented with speech fragments containing only melodic information. The results of a subsequent acoustic investigation revealed that there is a striking difference in pitch-peak location, which can be characterised as a shift in the location of the entire rise, i.e. both the onset and the peak. Shetland has early alignment, whereas the accent-lending rise in Orkney occurs late, so that in disyllabic words with initial stress the pitch peak does not coincide with the stressed syllable, but is delayed until the post-stress syllable. Finally, the perceptual relevance of the prosodic parameters identified in the acoustic study was investigated.


Subjects
Language, Phonation, Phonetics, Sound Spectrography, Speech Acoustics, Adult, Female, Humans, Male, Middle Aged, Scotland, Speech Perception
14.
Brain Lang; 91(3): 282-93, 2004 Dec.
Article in English | MEDLINE | ID: mdl-15533554

ABSTRACT

We present an acoustic study of segmental and prosodic properties of words produced by a female speaker of Chinese with left-hemisphere brain damage. We measured the location of the point vowels /a, e, [Symbol: see text], i, y, o, u/ and determined their separation in the vowel plane, and their perceptual distinctiveness. Similarly, the acoustic properties of the four lexical tones were measured in the F0 x time space. The data for our brain-damaged speaker were compared with those of a healthy control speaker. Results show that the patient's vowels hardly suffered from her lesion (relative to the vowel dispersion in the healthy control speaker), but that the identifiability of the four lexical tones was greatly compromised. These findings show that the tonal errors in aphasic speech behave independently of the segmental errors, even though both serve to maintain lexical contrasts in Chinese, and are therefore part of the lexical specification of Chinese words. The present study suggests that the specification of segmental and tonal aspects of lexical entries in Chinese, and in tone languages in general, is located or processed separately in the brain.
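The vowel analysis above quantifies how well separated the point vowels are in the vowel plane. A common way to express this is vowel-space dispersion: the mean Euclidean distance of each vowel from the F1 x F2 centroid. The sketch below illustrates that measure with invented formant values; it is not the authors' procedure or data.

```python
# Vowel-space dispersion: mean Euclidean distance of each point vowel
# from the speaker's F1 x F2 centroid. Formant values below are made up.
import math

def vowel_dispersion(formants):
    """formants: dict mapping vowel label -> (F1, F2) in Hz (or Bark)."""
    f1c = sum(f1 for f1, _ in formants.values()) / len(formants)
    f2c = sum(f2 for _, f2 in formants.values()) / len(formants)
    return sum(math.hypot(f1 - f1c, f2 - f2c)
               for f1, f2 in formants.values()) / len(formants)

patient = {"a": (850, 1400), "i": (310, 2500), "u": (330, 800),
           "y": (300, 2000), "o": (480, 900), "e": (450, 2200)}
control = {"a": (900, 1450), "i": (280, 2700), "u": (300, 750),
           "y": (270, 2100), "o": (450, 850), "e": (420, 2350)}

# A smaller dispersion would indicate a more reduced (less distinct) vowel space.
print(vowel_dispersion(patient), vowel_dispersion(control))
```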


Subjects
Asian People, Brain Diseases/physiopathology, Language Disorders/physiopathology, Language, Speech Perception, Adult, Female, Functional Laterality, Humans, Male, Speech Acoustics