Results 1 - 4 of 4
1.
Psychophysiology; 58(3): e13747, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33314262

ABSTRACT

People with normal hearing can usually follow one of several concurrent speakers. Speech tempo affects both the separation of concurrent speech streams and the extraction of information from them. The current study varied the tempo of two concurrent speech streams to investigate these processes in a multi-talker situation. Listeners performed a target-detection and a content-tracking task, while target-related ERPs and functional brain networks sensitive to speech tempo were extracted from the EEG signal. At slower-than-normal speech tempo, building the two streams required longer processing times and possibly the use of higher-order cues, such as syntactic and semantic information. The observed longer reaction times and higher connectivity strength in a theta-band network associated with frontal control over auditory/speech processing are compatible with this notion. With increasing tempo, target-detection performance decreased while the N2b and P3b amplitudes increased. These data suggest an increased need to strictly allocate target-detection-related resources at higher tempos. This was also reflected in the observed increase in the strength of gamma-band networks within and between frontal, temporal, and cingulate areas. At the fastest tested speech tempo, recognition memory performance dropped sharply, while target detection improved compared to the normal speech tempo. This was accompanied by a significant increase in the strength of a low-alpha network associated with the suppression of task-irrelevant speech. These results suggest that participants prioritized the immediate target-detection task over continuous content tracking, likely because some capacity limit was reached at the fastest speech tempo.
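The abstract does not detail the analysis pipeline, so the following is only a minimal sketch of how target-locked ERP component amplitudes such as N2b and P3b are commonly quantified: average the target-locked epochs, baseline-correct, then take the mean voltage in a component-specific time window. The sampling rate, time windows, and synthetic data below are illustrative assumptions, not values from the study.

```python
import numpy as np

def mean_window_amplitude(epochs, times, t_start, t_end, baseline=(-0.2, 0.0)):
    """epochs: (n_trials, n_times) single-channel EEG in volts;
    times: (n_times,) in seconds, time-locked to target onset."""
    erp = epochs.mean(axis=0)                           # average across trials
    base = (times >= baseline[0]) & (times < baseline[1])
    erp = erp - erp[base].mean()                        # baseline correction
    win = (times >= t_start) & (times < t_end)
    return erp[win].mean()                              # mean amplitude in window

fs = 500.0                                              # assumed sampling rate
times = np.arange(-0.2, 0.8, 1.0 / fs)
epochs = 1e-6 * np.random.randn(60, times.size)         # 60 synthetic trials
n2b = mean_window_amplitude(epochs, times, 0.20, 0.30)  # assumed N2b window
p3b = mean_window_amplitude(epochs, times, 0.30, 0.55)  # assumed P3b window
print(f"N2b: {n2b * 1e6:.2f} uV, P3b: {p3b * 1e6:.2f} uV")
```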


Subject(s)
Attention/physiology; Brain Waves/physiology; Cerebral Cortex/physiology; Connectome; Evoked Potentials/physiology; Nerve Net/physiology; Psychomotor Performance/physiology; Speech Perception/physiology; Time Perception/physiology; Adult; Event-Related Potentials, P300/physiology; Evoked Potentials, Auditory/physiology; Female; Humans; Male; Young Adult
2.
PLoS One; 14(2): e0212754, 2019.
Article in English | MEDLINE | ID: mdl-30818389

ABSTRACT

Human listeners can focus on one speech stream out of several concurrent ones. The present study aimed to assess the whole-brain functional networks underlying (a) the process of focusing attention on a single speech stream vs. dividing attention between two streams and (b) speech processing at different time scales and depths. Two spoken narratives were presented simultaneously while listeners were instructed to (a) track and memorize the contents of a speech stream and (b) detect the presence of numerals or syntactic violations in the same ("focused attended condition") or in the parallel stream ("divided attended condition"). Speech content tracking was found to be associated with stronger connectivity in lower frequency bands (delta band, 0.5-4 Hz), whereas the detection tasks were linked with networks operating in the faster alpha (8-10 Hz) and beta (13-30 Hz) bands. These results suggest that the oscillation frequencies of the dominant brain networks during speech processing may be related to the duration of the time window within which information is integrated. We also found that focusing attention on a single speaker, compared to dividing attention between two concurrent speakers, was predominantly associated with connections involving the frontal cortices in the delta (0.5-4 Hz), alpha (8-10 Hz), and beta (13-30 Hz) bands, whereas dividing attention between two parallel speech streams was linked with stronger connectivity involving the parietal cortices in the delta and beta frequency bands. Overall, connections strengthened by focused attention may reflect control over information selection, whereas connections strengthened by divided attention may reflect the need to maintain two streams in parallel and the related control processes necessary for performing the tasks.
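The abstract does not name the connectivity measure used, so the sketch below illustrates one common way of building band-limited functional networks from EEG: band-pass filter each channel into a frequency band, extract instantaneous phase via the Hilbert transform, and take pairwise phase-locking values (PLV) as connectivity strength. Channel count, recording length, and filter settings are assumptions for illustration only.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def band_plv(data, fs, band):
    """data: (n_channels, n_samples); returns an (n_channels, n_channels) PLV matrix."""
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    phase = np.angle(hilbert(sosfiltfilt(sos, data, axis=1), axis=1))
    n_ch = data.shape[0]
    plv = np.ones((n_ch, n_ch))
    for i in range(n_ch):
        for j in range(i + 1, n_ch):
            # PLV: magnitude of the mean phase-difference vector over time
            plv[i, j] = plv[j, i] = np.abs(np.mean(np.exp(1j * (phase[i] - phase[j]))))
    return plv

fs = 250.0
eeg = np.random.randn(32, int(60 * fs))      # 32 channels, 60 s of synthetic EEG
delta_net = band_plv(eeg, fs, (0.5, 4.0))    # delta-band network, as in the study
beta_net = band_plv(eeg, fs, (13.0, 30.0))   # beta-band network
print(delta_net.shape, float(delta_net[np.triu_indices(32, 1)].mean()))
```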


Subject(s)
Attention/physiology; Multitasking Behavior/physiology; Nerve Net/physiology; Speech Perception/physiology; Acoustic Stimulation; Auditory Cortex/physiology; Electroencephalography; Female; Frontal Lobe/physiology; Healthy Volunteers; Humans; Male; Parietal Lobe/physiology; Young Adult
3.
Cogn Affect Behav Neurosci; 18(5): 932-948, 2018 Oct.
Article in English | MEDLINE | ID: mdl-29949114

ABSTRACT

The notion of automatic syntactic analysis has received support from some event-related potential (ERP) studies. However, none of these studies tested syntax processing in the presence of a concurrent speech stream. Here we presented two concurrent continuous speech streams, manipulating two variables that potentially affect speech processing in a fully crossed design: attention (focused vs. divided) and task (lexical: detecting numerals vs. syntactic: detecting syntactic violations). ERPs elicited by syntactic violations and numerals as targets were compared with those elicited by distractors (task-relevant events in the unattended speech stream) and by attended and unattended task-irrelevant events. As expected, only target numerals elicited the N2b and P3 components. The amplitudes of these components did not significantly differ between focused and divided attention. Both task-relevant and task-irrelevant syntactic violations elicited the N400 ERP component in the attended but not in the unattended speech stream. The P600 was only elicited by target syntactic violations. These results provide no support for the notion of automatic syntactic analysis. Rather, it appears that task relevance is a prerequisite of P600 elicitation, implying that in-depth syntactic analysis occurs only for attended speech in everyday listening situations.
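As a hedged illustration of the reported null result (component amplitudes not differing between focused and divided attention), the sketch below runs a within-subject comparison of subject-level amplitudes across the two attention conditions. The sample size and amplitude values are synthetic, and the authors' actual statistical model (e.g., a factorial ANOVA) may differ.

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
n_subjects = 24                                 # assumed sample size
p3_focused = rng.normal(5.0, 1.5, n_subjects)   # synthetic P3 amplitudes (uV)
p3_divided = p3_focused + rng.normal(0.0, 1.0, n_subjects)

# Paired test: same participants measured in both attention conditions
t_stat, p_val = ttest_rel(p3_focused, p3_divided)
print(f"P3 focused vs. divided: t({n_subjects - 1}) = {t_stat:.2f}, p = {p_val:.3f}")
```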


Subject(s)
Attention/physiology; Brain/physiology; Linguistics; Speech Perception/physiology; Electroencephalography; Evoked Potentials; Female; Humans; Male; Multitasking Behavior/physiology; Young Adult
4.
Front Neurosci; 8: 64, 2014.
Article in English | MEDLINE | ID: mdl-24778604

ABSTRACT

An audio-visual experiment using moving sound sources was designed to investigate whether the analysis of auditory scenes is modulated by the synchronous presentation of visual information. Listeners were presented with an alternating sequence of two pure tones delivered by two separate sound sources. In different conditions, the two sound sources were either stationary or moved along random trajectories around the listener. Both the sounds and the movement trajectories were derived from recordings in which two humans moved about with loudspeakers attached to their heads. Movement trajectories visualized by a computer animation were presented together with the sounds. In the main experiment, behavioral reports on sound organization were collected from young healthy volunteers. The proportion and stability of the different sound organizations were compared between conditions in which the visualized trajectories matched the movement of the sound sources and conditions in which the two were independent of each other. The results corroborate earlier findings that spatial separation of sound sources promotes segregation. However, no additional effect of auditory movement per se on the perceptual organization of sounds was found. Surprisingly, the presentation of movement-congruent visual cues did not strengthen the effects of spatial separation on the segregation of auditory streams. Our findings are consistent with the view that bistability in the auditory modality can occur independently of other modalities.
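The exact stimulus parameters are not given in the abstract, so this is only a minimal sketch of the classic alternating two-tone sequence underlying such streaming paradigms: two pure tones presented in alternation, which listeners may perceive either as one integrated stream or as two segregated streams. All frequencies and durations below are illustrative assumptions, not the study's values.

```python
import numpy as np

fs = 44100                                    # audio sampling rate (assumed)
tone_dur, gap_dur = 0.1, 0.05                 # tone and gap durations in s (assumed)
f_a, f_b = 400.0, 600.0                       # the two tone frequencies in Hz (assumed)

def tone(freq, dur, ramp=0.01):
    """Pure tone with raised-cosine onset/offset ramps to avoid clicks."""
    t = np.arange(int(dur * fs)) / fs
    y = np.sin(2 * np.pi * freq * t)
    n_ramp = int(ramp * fs)
    env = np.ones_like(y)
    env[:n_ramp] = 0.5 * (1 - np.cos(np.pi * np.arange(n_ramp) / n_ramp))
    env[-n_ramp:] = env[:n_ramp][::-1]
    return y * env

gap = np.zeros(int(gap_dur * fs))
cycle = np.concatenate([tone(f_a, tone_dur), gap, tone(f_b, tone_dur), gap])
sequence = np.tile(cycle, 20)                 # 20 alternating A-B cycles
# The study's spatial manipulation would additionally require rendering each
# tone from a separate (stationary or moving) simulated source position.
```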
