Results 1 - 4 of 4
1.
J Cogn Neurosci; 32(11): 2145-2158, 2020 Nov 1.
Article in English | MEDLINE | ID: mdl-32662723

ABSTRACT

When speech perception is difficult, one way listeners adjust is by reconfiguring phoneme category boundaries, drawing on contextual information. Both lexical knowledge and lipreading cues are used in this way, but it remains unknown whether these two differing forms of perceptual learning are similar at a neural level. This study compared phoneme boundary adjustments driven by lexical or audiovisual cues, using ultra-high-field 7-T fMRI. During imaging, participants heard exposure stimuli and test stimuli. Exposure stimuli for lexical retuning were audio recordings of words, and those for audiovisual recalibration were audio-video recordings of lip movements during utterances of pseudowords. Test stimuli were ambiguous phonetic strings presented without context, and listeners reported what phoneme they heard. Reports reflected phoneme biases in preceding exposure blocks (e.g., more reported /p/ after /p/-biased exposure). Analysis of corresponding brain responses indicated that both forms of cue use were associated with a network of activity across the temporal cortex, plus parietal, insula, and motor areas. Audiovisual recalibration also elicited significant occipital cortex activity despite the lack of visual stimuli. Activity levels in several ROIs also covaried with strength of audiovisual recalibration, with greater activity accompanying larger recalibration shifts. Similar activation patterns appeared for lexical retuning, but here, no significant ROIs were identified. Audiovisual and lexical forms of perceptual learning thus induce largely similar brain response patterns. However, audiovisual recalibration involves additional visual cortex contributions, suggesting that previously acquired visual information (on lip movements) is retrieved and deployed to disambiguate auditory perception.


Subject(s)
Phonetics, Speech Perception, Auditory Perception/physiology, Humans, Learning, Lipreading, Speech Perception/physiology
2.
Psychon Bull Rev; 27(4): 707-715, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32319002

ABSTRACT

When listeners experience difficulty in understanding a speaker, lexical and audiovisual (or lipreading) information can be a helpful source of guidance. These two types of information embedded in speech can also guide perceptual adjustment, known as recalibration or perceptual retuning: listeners can use these contextual cues to temporarily or permanently reconfigure internal representations of phoneme categories, making it easier to adjust to and understand novel interlocutors. These two types of perceptual learning, previously investigated largely in isolation, are highly similar in allowing listeners to use speech-external information to adjust phoneme boundaries. This study explored whether the two sources may work in conjunction to induce adaptation, thus emulating real life, in which listeners are indeed likely to encounter both types of cue together. Listeners who received combined audiovisual and lexical cues showed perceptual learning effects similar to listeners who received only audiovisual cues, while listeners who received only lexical cues showed weaker effects than the two other groups. The combination of cues did not lead to additive retuning or recalibration effects, suggesting that lexical and audiovisual cues operate differently with regard to how listeners use them for reshaping perceptual categories. Reaction times did not significantly differ across the three conditions, indicating that none of the forms of adjustment was aided or hindered by processing-time differences. Mechanisms underlying these forms of perceptual learning may diverge in numerous ways despite similarities in experimental applications.


Subject(s)
Adaptation, Psychological, Cues, Lipreading, Phonetics, Speech Perception, Visual Perception, Vocabulary, Adult, Comprehension, Female, Humans, Learning, Male, Reaction Time, Young Adult
3.
Atten Percept Psychophys; 82(4): 2018-2026, 2020 May.
Article in English | MEDLINE | ID: mdl-31970708

ABSTRACT

To adapt to situations in which speech perception is difficult, listeners can adjust boundaries between phoneme categories using perceptual learning. Such adjustments can draw on lexical information in surrounding speech, or on visual cues via speech-reading. In the present study, listeners proved able to flexibly adjust the boundary between two plosive/stop consonants, /p/-/t/, using both lexical and speech-reading information, given the same experimental design for both cue types. Videos of a speaker pronouncing pseudowords and audio recordings of Dutch words were presented in alternating blocks of either stimulus type. Listeners were able to switch between cues to adjust phoneme boundaries, and the resulting effects were comparable to those from listeners receiving only a single source of information. Overall, audiovisual cues (i.e., the videos) produced the stronger effects, commensurate with their applicability for adapting to noisy environments. Lexical cues induced effects with fewer exposure stimuli and a changing phoneme bias, in a design unlike most prior studies of lexical retuning. While lexical retuning effects were weaker than audiovisual recalibration, this discrepancy could reflect how lexical retuning may be more suitable for adapting to speakers than to environments. Nonetheless, the presence of lexical retuning effects suggests that retuning may be invoked at a faster rate than previously seen. In general, this technique has further illuminated the robustness of adaptability in speech perception, and offers the potential to enable further comparisons across differing forms of perceptual learning.


Subject(s)
Auditory Perception, Phonetics, Speech Perception, Humans, Language, Lipreading, Speech
4.
Cereb Cortex; 24(10): 2796-2806, 2014 Oct.
Article in English | MEDLINE | ID: mdl-23709644

ABSTRACT

Williams syndrome (WS) is a neurodevelopmental condition caused by a hemizygous deletion of ∼26-28 genes on chromosome 7q11.23. WS is associated with a distinctive pattern of social cognition. Accordingly, neuroimaging studies show that WS is associated with structural alterations of key brain regions involved in social cognition during adulthood. However, very little is currently known regarding the neuroanatomical structure of social cognitive brain networks during childhood in WS. This study used diffusion tensor imaging to investigate the structural integrity of a specific set of white matter pathways (inferior fronto-occipital fasciculus [IFOF] and uncinate fasciculus [UF]) and associated brain regions (fusiform gyrus [FG], amygdala, hippocampus, medial orbitofrontal gyrus [MOG]) known to be involved in social cognition, in children with WS and a typically developing (TD) control group. Compared with controls, children with WS exhibited higher fractional anisotropy (FA) and axial diffusivity values and lower radial diffusivity and apparent diffusion coefficient (ADC) values within the IFOF and UF; higher FA values within the FG, amygdala, and hippocampus; and lower ADC values within the FG and MOG. These findings provide evidence that the WS genetic deletion affects the development of key white matter pathways and brain regions important for social cognition.


Subject(s)
Brain/pathology, Nerve Net/pathology, White Matter/pathology, Williams Syndrome/pathology, Adolescent, Child, Cognition Disorders/pathology, Diffusion Tensor Imaging, Emotions, Female, Humans, Male, Social Behavior