Results 1 - 3 of 3
1.
Cognition; 224: 105051, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35219954

ABSTRACT

This study investigates the dynamics of speech envelope tracking during speech production, listening, and self-listening. We use a paradigm in which participants listen to natural speech (Listening), produce natural speech (Speech Production), and listen to the playback of their own speech (Self-Listening), all while their neural activity is recorded with EEG. After time-locking EEG data collection and auditory recording and playback, we used a Gaussian copula mutual information measure to estimate the relationship between information content in the EEG and auditory signals. In the 2-10 Hz frequency range, we identified different latencies for maximal speech envelope tracking during speech production and speech perception. Maximal speech tracking takes place approximately 110 ms after auditory presentation during perception and 25 ms before vocalisation during speech production. These results describe a specific timeline for speech tracking in speakers and listeners, in line with the idea of a speech chain and, hence, with delays in communication.
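The Gaussian copula mutual information (GCMI) estimator the abstract refers to can be sketched in a few lines: each signal is rank-transformed to uniform margins, mapped through the standard normal inverse CDF, and the mutual information is then computed in closed form from the correlation of the Gaussianised signals. This is a minimal illustration of the general technique, not the authors' analysis code; the function names are hypothetical, and a real analysis would operate on band-filtered EEG and speech-envelope time series at each candidate lag.

```python
import numpy as np
from statistics import NormalDist


def copula_normalise(x):
    # Rank-transform to the open interval (0, 1), then map through the
    # standard normal inverse CDF. The result has Gaussian margins but
    # preserves the rank (copula) dependence structure of x.
    ranks = np.argsort(np.argsort(x)) + 1
    u = ranks / (len(x) + 1)
    nd = NormalDist()
    return np.array([nd.inv_cdf(p) for p in u])


def gcmi(x, y):
    # Gaussian-copula mutual information (in bits) between two 1-D
    # signals, via the closed-form Gaussian MI: -0.5 * log2(1 - r^2).
    gx, gy = copula_normalise(x), copula_normalise(y)
    r = float(np.corrcoef(gx, gy)[0, 1])
    return -0.5 * float(np.log2(1.0 - r * r))
```

In practice, `gcmi` would be evaluated over a range of EEG-to-envelope lags, with the lag of maximal information taken as the tracking latency (about 110 ms in perception, about -25 ms in production, per the abstract).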


Subjects
Speech Perception, Speech, Auditory Perception, Brain, Electroencephalography, Humans
2.
J Cogn Neurosci; 34(4): 618-638, 2022 Mar 05.
Article in English | MEDLINE | ID: mdl-35061026

ABSTRACT

Spoken word recognition models and phonological theory propose that abstract features play a central role in speech processing. It remains unknown, however, whether auditory cortex encodes linguistic features in a manner beyond the phonetic properties of the speech sounds themselves. We took advantage of the fact that English phonology functionally codes stops and fricatives as voiced or voiceless with two distinct phonetic cues: Fricatives use a spectral cue, whereas stops use a temporal cue. Evidence that these cues can be grouped together would indicate the disjunctive coding of distinct phonetic cues into a functionally defined abstract phonological feature. In English, the voicing feature, which distinguishes the consonants [s] and [t] from [z] and [d], respectively, is hypothesized to be specified only for voiceless consonants (e.g., [s t]). Here, participants listened to syllables in a many-to-one oddball design, while their EEG was recorded. In one block, both voiceless stops and fricatives were the standards. In the other block, both voiced stops and fricatives were the standards. A critical design element was the presence of inter-category variation within the standards. Therefore, a many-to-one relationship, which is necessary to elicit an MMN, existed only if the stop and fricative standards were grouped together. In addition to the ERPs, event-related spectral power was also analyzed. Results showed an MMN effect in the voiceless standards block (an asymmetric MMN) in a time window consistent with processing in auditory cortex, as well as increased prestimulus beta-band oscillatory power to voiceless standards. These findings suggest that (i) there is an auditory memory trace of the standards based on the shared [voiceless] feature, which is only functionally defined; (ii) voiced consonants are underspecified; and (iii) features can serve as a basis for predictive processing. Taken together, these results point toward auditory cortex's ability to functionally code distinct phonetic cues together and suggest that abstract features can be used to parse the continuous acoustic signal.
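The many-to-one oddball logic described above boils down to a deviant-minus-standard difference wave, with the MMN quantified as the mean amplitude in a latency window. A minimal numpy sketch, assuming single-electrode epoched data; the array shapes and function names are illustrative, not taken from the paper:

```python
import numpy as np


def mmn_difference_wave(dev_epochs, std_epochs):
    # Deviant-minus-standard ERP difference wave at one electrode.
    # dev_epochs, std_epochs: (n_trials, n_times) arrays of epoched EEG.
    return np.mean(dev_epochs, axis=0) - np.mean(std_epochs, axis=0)


def mean_amplitude(wave, times, t_start, t_end):
    # Mean amplitude of the difference wave (e.g. in microvolts)
    # within the latency window [t_start, t_end], in seconds.
    mask = (times >= t_start) & (times <= t_end)
    return float(wave[mask].mean())
```

An MMN appears as a reliably negative `mean_amplitude` in the post-stimulus window; the asymmetry reported here means that negativity emerges only in the block where the standards share the functionally defined [voiceless] feature.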


Subjects
Speech Perception, Voice, Cues (Psychology), Humans, Phonetics, Speech, Speech Acoustics, Speech Perception/physiology
3.
Front Hum Neurosci; 15: 609898, 2021.
Article in English | MEDLINE | ID: mdl-33841113

ABSTRACT

How speech sounds are represented in the brain is not fully understood. The mismatch negativity (MMN) has proven to be a powerful tool in this regard. The MMN event-related potential is elicited by a deviant stimulus embedded within a series of repeating standard stimuli. Listeners construct auditory memory representations of these standards despite acoustic variability. In most designs that test speech sounds, however, this variation is typically intra-category: All standards belong to the same phonetic category. In the current paper, inter-category variation is presented in the standards. These standards vary in manner of articulation but share a common phonetic feature. In the retroflex standard experimental block, Mandarin-Chinese-speaking participants are presented with a series of "standard" consonants that share the feature [retroflex], interrupted by infrequent non-retroflex deviants. In the non-retroflex standard experimental block, non-retroflex standards are interrupted by infrequent retroflex deviants. The within-block MMN was calculated, as was the identity MMN (iMMN) to account for intrinsic differences in responses to the stimuli. We only observed a within-block MMN to the non-retroflex deviant embedded in the retroflex standard block. This suggests that listeners extract [retroflex] despite significant inter-category variation. In the non-retroflex standard block, because there is little on which to base a coherent auditory memory representation, no within-block MMN was observed. The iMMN to the retroflex was observed in a late time window at centro-parieto-occipital electrode sites instead of fronto-central electrodes, where the MMN is typically observed, potentially reflecting the increased difficulty posed by the added variation in the standards. In short, participants can construct auditory memory representations despite significant acoustic and inter-category phonological variation, so long as a shared phonetic feature binds them together.
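The within-block MMN and the identity MMN (iMMN) differ only in where the standard ERP comes from, and that difference is easy to make concrete. A schematic numpy sketch (names and shapes are hypothetical, not from the paper): the iMMN compares the same physical stimulus in its deviant role (one block) against its standard role (the other block), so intrinsic response differences between stimuli cancel out.

```python
import numpy as np


def within_block_mmn(dev_epochs, std_epochs):
    # Classic within-block MMN: deviant and standard ERPs come from
    # the SAME block, so the comparison confounds deviance detection
    # with intrinsic acoustic differences between the stimuli.
    return np.mean(dev_epochs, axis=0) - np.mean(std_epochs, axis=0)


def identity_mmn(epochs_as_deviant, epochs_as_standard):
    # Identity MMN (iMMN): the SAME physical stimulus serves as the
    # deviant in one block and as a standard in the other, so any
    # intrinsic (role-independent) response component cancels out.
    return np.mean(epochs_as_deviant, axis=0) - np.mean(epochs_as_standard, axis=0)
```

Under this logic, a nonzero iMMN isolates the contribution of the stimulus's role (deviant vs. standard) from its acoustics, which is why the paper uses it to control for intrinsic response differences.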
