Results 1 - 8 of 8
1.
Cogn Affect Behav Neurosci; 23(2): 340-353, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36823247

ABSTRACT

In face-to-face discourse, listeners exploit cues in the input to generate predictions about upcoming words. Moreover, in addition to speech, speakers produce a multitude of visual signals, such as iconic gestures, which listeners readily integrate with incoming words. Previous studies have shown that processing of target words is facilitated when these are embedded in predictable compared to non-predictable discourses and when accompanied by iconic compared to meaningless gestures. In the present study, we investigated the interaction of both factors. We recorded the electroencephalogram (EEG) from 60 Dutch adults while they watched videos of an actress producing short discourses. The stimuli consisted of an introductory and a target sentence; the latter contained a target noun. Depending on the preceding discourse, the target noun was either predictable or not. Each target noun was paired with an iconic gesture and a gesture that did not convey meaning. In both conditions, gesture presentation in the video was timed such that the gesture stroke preceded the onset of the spoken target by 130 ms. Our ERP analyses revealed independent facilitatory effects of predictable discourses and iconic gestures. However, the interaction of both factors showed that target processing (i.e., gesture-speech integration) was facilitated most when targets were part of predictable discourses and accompanied by an iconic gesture. Our results thus suggest a strong intertwinement of linguistic predictability and non-verbal gesture processing, whereby listeners exploit predictive discourse cues to pre-activate verbal and non-verbal representations of upcoming target words.


Subjects
Speech Perception, Speech, Adult, Humans, Speech/physiology, Gestures, Comprehension/physiology, Electroencephalography, Linguistics, Speech Perception/physiology
2.
Cortex; 151: 70-88, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35397380

ABSTRACT

Successful spoken-word recognition relies on an interplay between lexical and sublexical processing. Previous research demonstrated that listeners readily shift between more lexically-biased and more sublexically-biased modes of processing in response to the situational context in which language comprehension takes place. Recognizing words in the presence of background noise reduces the perceptual evidence for the speech signal and, compared to clear speech, results in greater uncertainty. It has been proposed that, when dealing with greater uncertainty, listeners rely more strongly on sublexical processing. The present study tested this proposal using behavioral and electroencephalography (EEG) measures. We reasoned that such an adjustment would be reflected in changes in the effects of variables predicting recognition performance, with loci at lexical and sublexical levels, respectively. We presented native speakers of Dutch with words featuring substantial variability in (1) word frequency (locus at the lexical level), (2) phonological neighborhood density (loci at lexical and sublexical levels), and (3) phonotactic probability (locus at the sublexical level). Each participant heard each word in noise (presented at one of three signal-to-noise ratios) and in the clear, and performed a two-stage lexical decision and transcription task while EEG was recorded. Using linear mixed-effects analyses, we observed behavioral evidence that listeners relied more strongly on sublexical processing when speech quality decreased. Mixed-effects modelling of the EEG signal in the clear condition showed that sublexical effects were reflected in early modulations of ERP components (e.g., within the first 300 msec post word onset). In noise, EEG effects occurred later and involved multiple regions activated in parallel. Taken together, we found evidence, especially in the behavioral data, supporting previous accounts that the presence of background noise induces a stronger reliance on sublexical processing.
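The regression logic of the abstract above can be sketched in a few lines. This is a hedged illustration only: the study fitted linear mixed-effects models with random effects for participants and items, whereas the simulation below uses invented data, invented effect sizes, and a plain fixed-effects least-squares fit.

```python
import numpy as np

# Illustrative sketch, NOT the study's analysis: simulate item-level predictors
# (names and effect sizes invented) and recover their coefficients by least squares.
rng = np.random.default_rng(0)
n_items = 200
word_frequency = rng.normal(0, 1, n_items)           # lexical-level predictor
neighborhood_density = rng.normal(0, 1, n_items)     # lexical + sublexical loci
phonotactic_probability = rng.normal(0, 1, n_items)  # sublexical-level predictor
residual = rng.normal(0, 0.5, n_items)

# Hypothetical recognition scores: a stronger reliance on sublexical processing
# would surface as a larger phonotactic-probability coefficient in noise.
recognition = (0.6 * word_frequency + 0.2 * neighborhood_density
               + 0.8 * phonotactic_probability + residual)

X = np.column_stack([np.ones(n_items), word_frequency,
                     neighborhood_density, phonotactic_probability])
beta, *_ = np.linalg.lstsq(X, recognition, rcond=None)
print(np.round(beta[1:], 2))  # estimates close to the simulated [0.6, 0.2, 0.8]
```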


Subjects
Speech Perception, Humans, Language, Linguistics, Phonetics, Recognition (Psychology)/physiology, Speech, Speech Perception/physiology
3.
J Deaf Stud Deaf Educ; 24(3): 223-233, 2019 Jul 01.
Article in English | MEDLINE | ID: mdl-30809665

ABSTRACT

Speech perception in noise remains challenging for Deaf/Hard of Hearing (D/HH) people, even when fitted with hearing aids or cochlear implants. The perception of sentences in noise by 20 implanted or aided D/HH subjects mastering Cued Speech (CS), a system of hand gestures complementing lip movements, was compared with that of 15 typically hearing (TH) controls in three conditions: audio only, audiovisual, and audiovisual + CS. Similar audiovisual scores were obtained in D/HH participants at signal-to-noise ratios (SNRs) 11 dB higher than in TH participants. Adding CS information enabled D/HH participants to reach a mean score of 83% in the audiovisual + CS condition at a mean SNR of 0 dB, similar to the usual audio score for TH participants at this SNR. This confirms that the combination of lipreading and the Cued Speech system remains extremely important for persons with hearing loss, particularly in adverse hearing conditions.
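The SNR figures above can be made concrete with a short sketch (illustrative only; the waveforms and values below are invented, not from the study). An 11 dB SNR gap corresponds to roughly a 12.6-fold difference in signal power relative to the noise, since 10**(11/10) ≈ 12.59.

```python
import numpy as np

def snr_db(signal: np.ndarray, noise: np.ndarray) -> float:
    """Signal-to-noise ratio in dB from the mean power of each waveform."""
    return 10 * np.log10(np.mean(signal**2) / np.mean(noise**2))

fs = 16000
t = np.arange(fs) / fs                       # 1 s of samples
noise = np.sin(2 * np.pi * 1000 * t)         # unit-amplitude masker tone
signal = 2.0 * np.sin(2 * np.pi * 440 * t)   # twice the amplitude = 4x the power
print(round(snr_db(signal, noise), 1))       # 6.0 (i.e., 10*log10(4) dB)
```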


Subjects
Deafness/psychology, Noise, Persons With Hearing Impairments/psychology, Speech Perception/physiology, Speech/physiology, Acoustic Stimulation, Adolescent, Adult, Child, Cues, Female, Humans, Lipreading, Male, Perceptual Masking/physiology, Photic Stimulation, Young Adult
4.
J Neurosci; 35(7): 3256-62, 2015 Feb 18.
Article in English | MEDLINE | ID: mdl-25698760

ABSTRACT

Psychophysical target detection has been shown to be modulated by slow oscillatory brain phase. However, thus far, only low-level sensory stimuli have been used as targets. The current human electroencephalography (EEG) study examined the influence of neural oscillatory phase on a lexical-decision task performed for stimuli embedded in noise. Neural phase angles were compared for correct versus incorrect lexical decisions using a phase bifurcation index (BI), which quantifies differences in mean phase angles and phase concentrations between correct and incorrect trials. Neural phase angles in the alpha frequency range (8-12 Hz) over right anterior sensors were approximately antiphase in a prestimulus time window, and thus successfully distinguished between correct and incorrect lexical decisions. Moreover, alpha-band oscillations were again approximately antiphase across participants for correct versus incorrect trials during a later peristimulus time window (∼500 ms) at left-central electrodes. Strikingly, lexical decision accuracy was not predicted by either event-related potentials (ERPs) or oscillatory power measures. We suggest that correct lexical decisions depend both on successful sensory processing, which is made possible by the alignment of stimulus onset with an optimal alpha phase, as well as integration and weighting of decisional information, which is coupled to alpha phase immediately following the critical manipulation that differentiated words from pseudowords. The current study constitutes a first step toward characterizing the role of dynamic oscillatory brain states for higher cognitive functions, such as spoken word recognition.
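The phase bifurcation index described above can be sketched as follows. This is an illustrative implementation assuming the common formulation BI = (ITC_correct - ITC_all) * (ITC_incorrect - ITC_all), where ITC is inter-trial coherence; the simulated phase angles are invented, and the paper's exact computation may differ.

```python
import numpy as np

def itc(phases: np.ndarray) -> float:
    """Inter-trial coherence: length of the mean resultant vector of phase angles."""
    return float(np.abs(np.mean(np.exp(1j * phases))))

def bifurcation_index(phases_correct: np.ndarray, phases_incorrect: np.ndarray) -> float:
    """Phase bifurcation index: positive when both trial groups are phase-locked
    but at different (e.g., opposite) angles, as in the antiphase result above."""
    itc_all = itc(np.concatenate([phases_correct, phases_incorrect]))
    return (itc(phases_correct) - itc_all) * (itc(phases_incorrect) - itc_all)

# Simulated antiphase case: correct trials cluster near 0 rad, incorrect near pi.
rng = np.random.default_rng(1)
correct = rng.normal(0.0, 0.3, 200)
incorrect = rng.normal(np.pi, 0.3, 200)
bi = bifurcation_index(correct, incorrect)
print(bi > 0)  # True: both groups phase-locked, at opposite angles
```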


Subjects
Alpha Rhythm/physiology, Brain/physiology, Decision Making/physiology, Noise, Semantics, Acoustic Stimulation, Adult, Attention/physiology, Electroencephalography, Female, Humans, Male, Principal Component Analysis, Psychophysics, Theta Rhythm/physiology, Vocabulary
5.
Front Hum Neurosci; 8: 350, 2014.
Article in English | MEDLINE | ID: mdl-24904385

ABSTRACT

Listening to speech is often demanding because of signal degradations and the presence of distracting sounds (i.e., "noise"). The question of how the brain extracts only relevant information from the mixture of sounds reaching the ear (i.e., the "cocktail party problem") remains open. In analogy to recent findings in vision, we propose cortical alpha (~10 Hz) oscillations, measurable using M/EEG, as a pivotal mechanism for selectively inhibiting the processing of noise to improve auditory selective attention to task-relevant signals. We review initial evidence of enhanced alpha activity in selective listening tasks, suggesting a significant role for alpha-modulated noise suppression in speech processing. We discuss the importance of dissociating between noise interference in the auditory periphery (i.e., energetic masking) and noise interference with more central cognitive aspects of speech processing (i.e., informational masking). Finally, we point out the adverse effects of age-related hearing loss and/or cognitive decline on auditory selective inhibition. With this perspective article, we set the stage for future studies on the inhibitory role of alpha oscillations in speech processing under challenging listening situations.

6.
Neuroimage; 97: 387-95, 2014 Aug 15.
Article in English | MEDLINE | ID: mdl-24747736

ABSTRACT

Slow neural oscillations (~1-15 Hz) are thought to orchestrate the neural processes of spoken language comprehension. However, functional subdivisions within this broad range of frequencies are disputed, with most studies hypothesizing only about single frequency bands. The present study uses an established paradigm of spoken word recognition (lexical decision) to test the hypothesis that within the slow neural oscillatory frequency range, distinct functional signatures and cortical networks can be identified, at least for theta (~3-7 Hz) and alpha frequencies (~8-12 Hz). Listeners performed an auditory lexical decision task on a set of items that formed a word-pseudoword continuum: ranging from (1) real words through (2) ambiguous pseudowords (deviating from real words in only one vowel; comparable to natural mispronunciations in speech) to (3) pseudowords (clearly deviating from real words by randomized syllables). By means of time-frequency analysis and spatial filtering, we observed a dissociation into distinct but simultaneous patterns of alpha power suppression and theta power enhancement. Alpha exhibited a parametric suppression as items increasingly matched real words, in line with lowered functional inhibition in a left-dominant lexical processing network for more word-like input. Simultaneously, theta power in a bilateral fronto-temporal network was selectively enhanced for ambiguous pseudowords only. Thus, enhanced alpha power can neurally 'gate' lexical integration, while enhanced theta power might index functionally more specific ambiguity-resolution processes. In sum, a joint analysis of both frequency bands provides neural evidence for parallel processes in achieving spoken word recognition.
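The alpha/theta band-power contrast can be illustrated with a minimal periodogram sketch. This is a simplified stand-in: the study used time-frequency analysis and spatial filtering on real EEG, whereas the signal, sampling rate, and amplitudes below are simulated.

```python
import numpy as np

def band_power(x: np.ndarray, fs: float, fmin: float, fmax: float) -> float:
    """Mean spectral power of x within [fmin, fmax] Hz, via an rFFT periodogram."""
    freqs = np.fft.rfftfreq(len(x), d=1/fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    band = (freqs >= fmin) & (freqs <= fmax)
    return float(psd[band].mean())

fs = 250  # Hz, a common EEG sampling rate
t = np.arange(0, 2, 1/fs)
# Simulated trial: strong 10 Hz (alpha) plus weaker 5 Hz (theta) component.
eeg = 2.0 * np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 5 * t)
alpha = band_power(eeg, fs, 8, 12)
theta = band_power(eeg, fs, 3, 7)
print(alpha > theta)  # True: the 10 Hz component dominates the alpha band
```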


Subjects
Alpha Rhythm/physiology, Recognition (Psychology)/physiology, Speech Perception/physiology, Theta Rhythm/physiology, Acoustic Stimulation, Adult, Decision Making/physiology, Electroencephalography, Evoked Potentials/physiology, Female, Humans, Male, Psychomotor Performance/physiology, Young Adult
7.
Cortex; 53: 9-26, 2014 Apr.
Article in English | MEDLINE | ID: mdl-24561233

ABSTRACT

Speech signals are often compromised by disruptions originating from external (e.g., masking noise) or internal (e.g., inaccurate articulation) sources. Speech comprehension thus entails detecting and replacing missing information based on predictive and restorative neural mechanisms. The present study targets predictive mechanisms by investigating the influence of a speech segment's predictability on early, modality-specific electrophysiological responses to this segment's omission. Predictability was manipulated in simple physical terms in a single-word framework (Experiment 1) or in more complex semantic terms in a sentence framework (Experiment 2). In both experiments, final consonants of the German words Lachs ([laks], salmon) or Latz ([lats], bib) were occasionally omitted, resulting in the syllable La ([la], no semantic meaning), while brain responses were measured with multi-channel electroencephalography (EEG). In both experiments, the occasional presentation of the fragment La elicited a larger omission response when the final speech segment had been predictable. The omission response occurred ∼125-165 msec after the expected onset of the final segment and showed characteristics of the omission mismatch negativity (MMN), with generators in auditory cortical areas. Suggestive of a general auditory predictive mechanism at work, this main observation was robust against varying source of predictive information or attentional allocation, differing between the two experiments. Source localization further suggested the omission response enhancement by predictability to emerge from left superior temporal gyrus and left angular gyrus in both experiments, with additional experiment-specific contributions. These results are consistent with the existence of predictive coding mechanisms in the central auditory system, and suggestive of the general predictive properties of the auditory system to support spoken word recognition.
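The omission-response logic above can be sketched as a difference between trial-averaged ERPs. Everything below is simulated for illustration (the `mmn_gain` parameter, noise level, and waveform shapes are invented); the study analyzed multi-channel EEG with source localization.

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 500                                # Hz
t = np.arange(-0.1, 0.4, 1/fs)          # time relative to expected segment onset
n_trials = 80

def simulate_trials(mmn_gain: float) -> np.ndarray:
    """Trials with a negative deflection ~125-165 ms after the expected onset,
    whose size (mmn_gain, an invented parameter) scales with predictability."""
    mmn = -mmn_gain * np.exp(-((t - 0.145) ** 2) / (2 * 0.02 ** 2))
    return mmn + rng.normal(0, 2.0, (n_trials, t.size))

erp_predictable = simulate_trials(mmn_gain=3.0).mean(axis=0)
erp_unpredictable = simulate_trials(mmn_gain=1.0).mean(axis=0)
difference_wave = erp_predictable - erp_unpredictable

window = (t >= 0.125) & (t <= 0.165)
# Enhanced (more negative) omission response when the segment was predictable:
print(difference_wave[window].mean() < 0)  # True
```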


Subjects
Brain/physiology, Comprehension/physiology, Speech Perception/physiology, Acoustic Stimulation, Adult, Attention/physiology, Electroencephalography, Evoked Potentials/physiology, Female, Humans, Male, Psychomotor Performance/physiology, Semantics, Young Adult
8.
J Cogn Neurosci; 25(8): 1383-95, 2013 Aug.
Article in English | MEDLINE | ID: mdl-23489145

ABSTRACT

Under adverse listening conditions, speech comprehension profits from the expectancies that listeners derive from the semantic context. However, the neurocognitive mechanisms of this semantic benefit are unclear: How are expectancies formed from context and adjusted as a sentence unfolds over time under various degrees of acoustic degradation? In an EEG study, we modified auditory signal degradation by applying noise-vocoding (severely degraded: four-band, moderately degraded: eight-band, and clear speech). Orthogonal to that, we manipulated the extent of expectancy: strong or weak semantic context (±con) and context-based typicality of the sentence-final word (high or low: ±typ). This allowed calculation of two distinct effects of expectancy on the N400 component of the evoked potential. The sentence-final N400 effect was taken as an index of the neural effort of automatic word-into-context integration; it varied in peak amplitude and latency with signal degradation and was not reliably observed in response to severely degraded speech. Under clear speech conditions in a strong context, typical and untypical sentence completions seemed to fulfill the neural prediction, as indicated by N400 reductions. In response to moderately degraded signal quality, however, the formed expectancies appeared more specific: Only typical (+con +typ), but not the less typical (+con -typ) context-word combinations led to a decrease in the N400 amplitude. The results show that adverse listening "narrows," rather than broadens, the expectancies about the perceived speech signal: limiting the perceptual evidence forces the neural system to rely on signal-driven expectancies, rather than more abstract expectancies, while a sentence unfolds over time.
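Noise-vocoding itself is a concrete algorithm: split the signal into frequency bands, extract each band's amplitude envelope, and use the envelope to modulate band-limited noise. A minimal sketch with invented parameters (band edges, envelope smoothing, and the test signal are not the study's exact vocoder settings):

```python
import numpy as np

def noise_vocode(x: np.ndarray, fs: float, n_bands: int,
                 fmin: float = 100.0, fmax: float = 7000.0, seed: int = 0) -> np.ndarray:
    """Simple FFT-based noise vocoder: per log-spaced band, extract the speech
    envelope and impose it on band-limited noise, then sum the bands."""
    rng = np.random.default_rng(seed)
    edges = np.geomspace(fmin, fmax, n_bands + 1)   # logarithmic band edges
    freqs = np.fft.rfftfreq(len(x), d=1/fs)
    X = np.fft.rfft(x)
    N = np.fft.rfft(rng.standard_normal(len(x)))
    out = np.zeros_like(x)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = (freqs >= lo) & (freqs < hi)
        xb = np.fft.irfft(np.where(band, X, 0), n=len(x))  # band-limited speech
        nb = np.fft.irfft(np.where(band, N, 0), n=len(x))  # band-limited noise
        # Crude envelope: rectify, then moving-average smooth (~30 Hz cutoff).
        win = max(1, int(fs / 30))
        env = np.convolve(np.abs(xb), np.ones(win) / win, mode="same")
        out += env * nb
    return out

fs = 16000
t = np.arange(0, 0.5, 1/fs)
# Toy "speech": 300 Hz carrier with a 4 Hz syllable-rate amplitude envelope.
speech_like = np.sin(2 * np.pi * 300 * t) * (1 + np.sin(2 * np.pi * 4 * t))
vocoded = noise_vocode(speech_like, fs, n_bands=4)  # cf. the four-band condition
print(vocoded.shape == speech_like.shape)  # True
```

Fewer bands preserve less spectral detail, which is why the four-band condition counts as severely degraded and the eight-band condition as moderately degraded.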


Subjects
Comprehension/physiology, Evoked Potentials/physiology, Speech Perception/physiology, Speech/physiology, Acoustic Stimulation, Adult, Electroencephalography, Female, Humans, Male, Reaction Time/physiology, Semantics, Vocabulary, Young Adult