Results 1 - 4 of 4
1.
Psychophysiology; 58(6): e13811, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33723870

ABSTRACT

Visual symbols or events may provide predictive information about upcoming sound events. When the perceived sound does not confirm the visual prediction, the incongruency response (IR), a prediction error signal in the event-related brain potential (ERP), is elicited. It is unclear whether predictions are derived from lower-level local contingencies (e.g., recent events or repetitions) or from higher-level global rules applied top-down. In a recent study, sound pitch was predicted by a preceding note symbol. IR elicitation was confined to the condition in which one of two sounds was presented more frequently; no IR was observed when both sounds were equally probable. These findings suggest that local repetitions support predictive cross-modal processing. On the other hand, the IR has also been observed with equal stimulus probabilities when visual patterns predicted the upcoming sound sequence, which suggests the application of global rules. Here, we investigated the influence of stimulus repetition on the elicitation of the IR by presenting identical trial trains of a particular visual note symbol cueing a particular sound, resulting in either a congruent or an incongruent pair. Trains of four different lengths (1, 2, 4, or 7 trials) were presented. The IR was already observed after a single presentation of a congruent visual-cue-sound combination and did not change in amplitude as trial train length increased. We conclude that higher-level associations applied in a top-down manner are involved in eliciting the prediction error signal reflected by the IR, independently of local contingencies.
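To make the train design concrete, here is a minimal sketch of how such trial trains could be generated. It is purely illustrative: the pitch labels, the 50/50 congruent/incongruent split across trains, and all function names are assumptions, not the authors' stimulus code.

```python
# Hypothetical sketch of the trial-train design: repetitions of one
# congruent visual-cue/sound pair, ending in a congruent or incongruent
# final pair. Labels and the train mix are illustrative assumptions.
import random

TRAIN_LENGTHS = [1, 2, 4, 7]
CUES = {"high_note": "high_tone", "low_note": "low_tone"}

def make_train(length: int, congruent: bool) -> list[tuple[str, str]]:
    """Build one train: repeated presentations of the same congruent
    cue-sound pair, with the final trial congruent or incongruent."""
    cue, sound = random.choice(list(CUES.items()))
    train = [(cue, sound)] * (length - 1)
    if congruent:
        train.append((cue, sound))
    else:
        other = [s for s in CUES.values() if s != sound][0]
        train.append((cue, other))
    return train

# Example sequence: 100 trains of random length, half ending incongruently
sequence = [make_train(random.choice(TRAIN_LENGTHS), random.random() < 0.5)
            for _ in range(100)]
```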


Subject(s)
Auditory Perception/physiology; Evoked Potentials, Auditory/physiology; Evoked Potentials/physiology; Sound; Visual Perception/physiology; Adult; Cues; Electroencephalography; Female; Humans; Male; Neuropsychology; Surveys and Questionnaires; Young Adult
2.
Atten Percept Psychophys; 83(4): 1538-1551, 2021 May.
Article in English | MEDLINE | ID: mdl-33506354

ABSTRACT

What happens if a visual cue misleads auditory expectations? Previous studies revealed an early visuo-auditory incongruency effect, the so-called incongruency response (IR) of the auditory event-related brain potential (ERP), occurring about 100 ms after the onset of a sound that is incongruent with the preceding visual cue. So far, this effect has been taken to reflect the mismatch between the auditory sensory expectation activated by visual predictive information and the actual sensory input. On this account, an IR should be confined to asynchronous presentation of visual cue and sound. Alternatively, one could argue that frequently presented congruent visual-cue-sound combinations are integrated into a bimodal representation, in which case a violation of the visual-auditory relationship results in a bimodal feature mismatch and the IR should be obtained with both asynchronous and synchronous presentation. In an asynchronous condition, either a high-pitched or a low-pitched sound was preceded by a visual note symbol presented above or below a fixation cross (90% congruent; 10% incongruent), while in a synchronous condition, both were presented simultaneously. High-pitched and low-pitched sounds were presented with different probabilities (83% vs. 17%) to form a strong association between the bimodal stimuli. In both conditions, tones whose pitch was incongruent with the location of the note symbol elicited incongruency effects in the N2 and P3 ERP components; however, the IR was elicited only in the asynchronous condition. This finding supports the sensory prediction error hypothesis, which states that the amplitude of the auditory ERP 100 ms after sound onset is enhanced in response to unexpected compared with expected but otherwise identical sounds.
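For readers unfamiliar with how such an IR is typically quantified, the following sketch computes the mean ERP amplitude in a window around 100 ms after sound onset and takes the incongruent-minus-congruent difference. The sampling rate, baseline length, analysis window, and array names are assumed values for illustration, not parameters taken from the study.

```python
# Minimal, hypothetical sketch of IR quantification: difference wave
# (incongruent minus congruent ERP) around 100 ms post sound onset.
import numpy as np

FS = 500            # sampling rate in Hz (assumed)
T0 = 100            # samples of pre-stimulus baseline, i.e. 200 ms (assumed)
WIN = (0.08, 0.12)  # 80-120 ms analysis window around the IR peak (assumed)

def mean_amplitude(epochs: np.ndarray) -> float:
    """epochs: (n_trials, n_samples) at one electrode; returns the mean
    amplitude of the trial-averaged ERP within the analysis window."""
    erp = epochs.mean(axis=0)
    start = T0 + int(WIN[0] * FS)
    stop = T0 + int(WIN[1] * FS)
    return float(erp[start:stop].mean())

# IR = response to incongruent minus physically identical congruent sounds:
# ir = mean_amplitude(incongruent_epochs) - mean_amplitude(congruent_epochs)
```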


Subject(s)
Auditory Perception; Electroencephalography; Acoustic Stimulation; Evoked Potentials; Humans; Photic Stimulation; Visual Perception
3.
J Cogn Neurosci; 31(8): 1110-1125, 2019 Aug.
Article in English | MEDLINE | ID: mdl-30912727

ABSTRACT

Predictions about forthcoming auditory events can be established on the basis of preceding visual information. Sounds that are incongruent with predictive visual information have been found to elicit an enhanced negative ERP in the latency range of the auditory N1 compared with physically identical sounds preceded by congruent visual information. This so-called incongruency response (IR) is interpreted as reflecting a reduced prediction error for predicted sounds at the sensory level. The main purpose of this study was to examine the impact of probability manipulations on the IR. We manipulated the probability with which particular congruent visual-auditory pairs were presented (83/17 vs. 50/50 condition), yielding two conditions with different strengths of association between visual and auditory information. A visual cue was presented either above or below a fixation cross and was followed by either a high- or a low-pitched sound. In 90% of trials, the visual cue correctly predicted the subsequent sound. In one condition, one of the sounds was presented more frequently (83% of trials), whereas in the other condition both sounds were presented with equal probability (50% of trials). Therefore, in the 83/17 condition, one congruent combination of visual cue and corresponding sound was presented more frequently than the other combinations, presumably leading to a stronger visual-auditory association. A significant IR for unpredicted compared with predicted but otherwise identical sounds was observed only in the 83/17 condition, not in the 50/50 condition, where both congruent visual-cue-sound combinations were presented with equal probability. We also tested whether the processing of the prediction violation depends on the task relevance of the visual information by contrasting a visual-auditory matching task with a pitch discrimination task. The task affected only behavioral performance, not the prediction error signals. These results suggest that the generation of visual-to-auditory sensory predictions is facilitated by a strong association between the visual cue and the predicted sound (83/17 condition) but is not influenced by the task relevance of the visual information.
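The 83/17 vs. 50/50 manipulation can be illustrated with a short simulation of the trial statistics. The sketch below is a hypothetical reconstruction: the probabilities match the abstract, but the trial counts and all identifiers are assumptions.

```python
# Hypothetical sketch of the probability manipulation: the cue predicts
# the sound correctly on 90% of trials, while sound identity is drawn
# with unequal (83/17) or equal (50/50) probability across conditions.
import random

def make_trial(p_high: float, p_congruent: float = 0.9) -> dict:
    """Draw one trial: sound identity, then a cue that matches the sound
    on p_congruent of trials and mismatches it otherwise."""
    sound = "high" if random.random() < p_high else "low"
    congruent = random.random() < p_congruent
    cue = sound if congruent else ("low" if sound == "high" else "high")
    return {"cue": cue, "sound": sound, "congruent": congruent}

biased = [make_trial(p_high=0.83) for _ in range(600)]   # 83/17 condition
uniform = [make_trial(p_high=0.50) for _ in range(600)]  # 50/50 condition
```

In the 83/17 condition this yields one congruent cue-sound pairing on roughly 75% of all trials (0.83 x 0.9), which is the asymmetry presumed to build the stronger visual-auditory association.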


Subject(s)
Anticipation, Psychological/physiology; Auditory Perception/physiology; Pitch Discrimination/physiology; Psychomotor Performance/physiology; Visual Perception/physiology; Adult; Cues; Female; Humans; Male; Probability; Young Adult
4.
J Speech Lang Hear Res; 62(1): 177-189, 2019 Jan 30.
Article in English | MEDLINE | ID: mdl-30534994

ABSTRACT

Purpose: Listening to 1 voice surrounded by other voices is more challenging for elderly listeners than for young listeners. This could be caused by a reduced ability to use acoustic cues, such as slight differences in onset time, for the segregation of concurrent speech signals. Here, we study whether the ability to benefit from onset asynchrony differs between young (18-33 years) and elderly (55-74 years) listeners.
Method: We investigated young (normal hearing, N = 20) and elderly (mildly hearing impaired, N = 26) listeners' ability to segregate 2 vowels with onset asynchronies ranging from 20 to 100 ms. Behavioral measures were complemented by a specific event-related brain potential component, the object-related negativity (ORN), which indicates the perception of 2 distinct auditory objects.
Results: Elderly listeners' behavioral performance (identification accuracy of the 2 vowels) was considerably poorer than young listeners'. However, both age groups showed the same amount of improvement with increasing onset asynchrony. ORN amplitude also increased similarly in both age groups.
Conclusion: Both age groups benefit to a similar extent from onset asynchrony as a cue for concurrent speech segregation, during both active (behavioral measurement) and passive (electroencephalographic measurement) listening.
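As a rough illustration of the onset-asynchrony manipulation, the sketch below mixes two periodic signals whose onsets are offset by 20-100 ms. It only stands in for the real vowel stimuli: the fundamental frequencies, duration, and sampling rate are assumed, and actual vowel synthesis would add formant structure rather than use raw sine tones.

```python
# Hypothetical onset-asynchrony stimulus: two concurrent periodic
# "vowels", the second starting 20-100 ms after the first.
import numpy as np

FS = 44100     # sampling rate in Hz (assumed)
DUR = 0.4      # per-vowel duration in seconds (assumed)

def tone(f0: float, dur: float) -> np.ndarray:
    """Sine-tone stand-in for a vowel with fundamental frequency f0."""
    t = np.arange(int(dur * FS)) / FS
    return np.sin(2 * np.pi * f0 * t)

def vowel_pair(asynchrony_ms: float) -> np.ndarray:
    """Mix two 'vowels'; the second onset lags by asynchrony_ms."""
    lag = int(asynchrony_ms / 1000 * FS)
    mix = np.zeros(lag + int(DUR * FS))
    mix[:int(DUR * FS)] += tone(150.0, DUR)  # first vowel (f0 = 150 Hz, assumed)
    mix[lag:] += tone(190.0, DUR)            # second vowel (f0 = 190 Hz, assumed)
    return mix / np.abs(mix).max()           # normalize to avoid clipping

# One stimulus per onset asynchrony used in the study
stimuli = {soa: vowel_pair(soa) for soa in (20, 40, 60, 80, 100)}
```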


Subject(s)
Speech Acoustics; Speech Perception/physiology; Adult; Age Factors; Aged; Analysis of Variance; Audiometry; Auditory Threshold; Cues; Electroencephalography; Female; Humans; Male; Middle Aged; Young Adult