Results 1 - 5 of 5
1.
Psychon Bull Rev ; 2024 Jul 19.
Article in English | MEDLINE | ID: mdl-39028394

ABSTRACT

The perception of rhythm has been studied across a range of auditory signals, with speech presenting a particularly challenging case to capture and explain. Here, we asked whether rhythm perception in speech is guided by perceptual biases arising from native-language structures, whether it is shaped by the cognitive ability to perceive a regular beat, or whether both play a role. Listeners of two prosodically distinct languages - English and French - heard sentences (spoken in their native and the foreign language, respectively) and compared the rhythm of each sentence to its drummed version (presented at inter-syllabic, inter-vocalic, or isochronous intervals). While English listeners tended to map sentence rhythm onto both inter-vocalic and inter-syllabic intervals in this task, French listeners showed a perceptual preference for inter-vocalic intervals only. The native-language tendency was equally apparent in the listeners' foreign language and was enhanced by individual beat perception ability. These findings suggest that rhythm perception in speech is shaped primarily by listeners' native-language experience, with a lesser influence of innate cognitive traits.

2.
J Acoust Soc Am ; 155(4): 2698-2706, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38639561

ABSTRACT

The notion of the "perceptual center" or "P-center" has been put forward to account for the repeated finding that acoustic and perceived syllable onsets do not necessarily coincide, at least in the perception of simple monosyllables or disyllables. The magnitude of the discrepancy between acoustics and perception - the location of the P-center in the speech signal - has proven difficult to estimate, though acoustic models of the effect do exist. The present study asks whether the P-center effect can be documented in natural connected speech of English and Japanese, and examines whether an acoustic model that defines the P-center as the moment of the fastest energy change in a syllabic amplitude envelope adequately reflects the P-center in the two languages. A sensorimotor synchronization paradigm was deployed to address these research questions. The results provide evidence for the existence of the P-center effect in the speech of both languages, while the acoustic P-center model is found to be less applicable to Japanese. Sensorimotor synchronization patterns further suggest that the P-center may reflect perceptual anticipation of a vowel onset.


Subject(s)
Speech Acoustics , Speech Perception , Humans , Phonetics , Speech , Language
3.
PLoS One ; 18(9): e0291642, 2023.
Article in English | MEDLINE | ID: mdl-37729156

ABSTRACT

We provide evidence that the roughness of chords - a psychoacoustic property resulting from unresolved frequency components - is associated with perceived musical stability (operationalized as finishedness) in participants with differing levels and types of exposure to Western or Western-like music. Three groups of participants were tested in a remote cloud forest region of Papua New Guinea (PNG), and two groups in Sydney, Australia (musicians and non-musicians). Unlike prominent prior studies of consonance/dissonance across cultures, we framed the concept of consonance as stability rather than as pleasantness. We find a negative relationship between roughness and musical stability in every group, including the PNG community with minimal experience of musical harmony. The effect of roughness is stronger for the Sydney participants, particularly the musicians. We find an effect of harmonicity - a psychoacoustic property resulting from chords having a spectral structure resembling a single pitched tone (such as that produced by human vowel sounds) - only in the Sydney musician group, which indicates that this feature's effect is mediated by a culture-dependent mechanism. In sum, these results underline the importance of both universal and cultural mechanisms in music cognition, and they carry powerful implications for understanding the origin of pitch structures in Western tonal music, as well as for possibilities for new musical forms that align with humans' perceptual and cognitive biases. They also highlight the importance of how consonance/dissonance is operationalized and explained to participants - particularly those with minimal prior exposure to musical harmony.


Subject(s)
Drama , Music , Humans , Australia , Cognition , Niacinamide
4.
Brain Sci ; 12(12)2022 Nov 25.
Article in English | MEDLINE | ID: mdl-36552078

ABSTRACT

Adults commonly struggle with perceiving and recognizing the sounds and words of a second language (L2), especially when the L2 sounds have no counterpart in the learner's first language (L1). We examined how L1 Mandarin, L2 English speakers learned pseudo English words within a cross-situational word learning (CSWL) task previously presented to monolingual English and bilingual Mandarin-English speakers. CSWL is ambiguous because participants are not provided with direct mappings between words and object referents. Rather, learners discern word-object correspondences by tracking multiple co-occurrences across learning trials. The monolinguals and bilinguals tested in previous studies showed lower performance for pseudo words that formed vowel minimal pairs (e.g., /dit/-/dɪt/) than for pseudo words that formed consonant minimal pairs (e.g., /bɔn/-/pɔn/) or non-minimal pairs differing in all segments (e.g., /bɔn/-/dit/). In contrast, the L1 Mandarin, L2 English listeners struggled to learn all word pairs. We explain this seemingly contradictory finding by considering the multiplicity of acoustic cues in the stimuli presented to all participant groups. Stimuli were produced in infant-directed speech (IDS) in order to compare performance by children and adults, and because previous research had shown that IDS enhances L1 and L2 acquisition. We propose that the suprasegmental pitch variation in the vowels typical of IDS stimuli might be perceived as lexical tone distinctions by tonal-language speakers who cannot fully inhibit their L1 activation, resulting in high lexical competition and diminished learning during an ambiguous word learning task. Our results are in line with the Second Language Linguistic Perception (L2LP) model, which proposes that fine-grained acoustic information from multiple sources and the ability to switch between language modes affect non-native phonetic and lexical development.

5.
Front Psychol ; 13: 801263, 2022.
Article in English | MEDLINE | ID: mdl-35401340

ABSTRACT

Perception of music and speech is based on similar auditory skills, and it is often suggested that those with enhanced music perception skills may perceive and learn novel words more easily. The current study tested whether music perception abilities are associated with novel word learning in an ambiguous learning scenario. Using a cross-situational word learning (CSWL) task, non-musician adults were exposed to word-object pairings between eight novel words and visual referents. Novel words were either non-minimal pairs differing in all sounds or minimal pairs differing in their initial consonant or vowel. To be successful in this task, learners need to correctly encode the phonological details of the novel words and have sufficient auditory working memory to remember the correct word-object pairings. Using the Mistuning Perception Test (MPT) and the Melodic Discrimination Test (MDT), we measured learners' pitch perception and auditory working memory. We predicted that those with higher MPT and MDT scores would perform better in the CSWL task, in particular for novel words with high phonological overlap (i.e., minimal pairs). We found that higher music perception skills led to higher accuracy for non-minimal pairs and for minimal pairs differing in their initial consonant. Interestingly, this was not the case for vowel minimal pairs. We discuss the results in relation to theories of second language word learning, such as the Second Language Linguistic Perception (L2LP) model.
