Results 1 - 20 of 134
1.
Front Hum Neurosci ; 18: 1386207, 2024.
Article in English | MEDLINE | ID: mdl-38938291

ABSTRACT

During the first year of life, infants start to learn the lexicon of their native language. Word learning includes the establishment of longer-term representations for the phonological form and the meaning of the word in the brain, as well as the link between them. However, it is not known how the brain processes word forms immediately after they have been learned. We familiarized 12-month-old infants (N = 52) with two pseudowords and studied their neural signatures. Specifically, we determined whether a newly learned word form elicits neural signatures similar to those observed when a known word is recognized (i.e., when a well-established word representation is activated, eliciting enhanced mismatch responses) or whether the processing of a newly learned word form shows suppression of the neural response, in line with the principles of predictive coding of a learned rule (i.e., the order of the syllables of the new word form). The pattern of results obtained in the current study suggests that recognized word forms elicit a mismatch response of negative polarity, similarly for newly learned and previously known words with an established representation in long-term memory. In contrast, prediction errors caused by acoustic novelty or by deviation from the expected order in a sequence of (pseudo)words elicit responses of positive polarity. This suggests that electrical brain activity is not fully explained by the predictive coding framework.

2.
Hum Brain Mapp ; 45(8): e26747, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38825981

ABSTRACT

Electroencephalography (EEG) functional connectivity (FC) estimates are confounded by the volume conduction problem. This effect can be greatly reduced by applying FC measures insensitive to instantaneous, zero-lag dependencies (corrected measures). However, numerous studies showed that FC measures sensitive to volume conduction (uncorrected measures) exhibit higher reliability and higher subject-level identifiability. We tested how source reconstruction contributes to this reliability difference on a large (n = 201) resting-state data set, comparing eight FC measures (including corrected and uncorrected ones). We showed that the high reliability of uncorrected FC measures in resting state partly stems from source reconstruction: idiosyncratic noise patterns define a baseline resting-state functional network that explains a significant portion of the reliability of uncorrected FC measures. This effect remained valid for template head-model-based as well as individual head-model-based source reconstruction. Based on our findings, we offer suggestions on how best to use spatial-leakage-corrected and uncorrected FC measures, depending on the main goals of the study.
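As an illustration of the corrected vs. uncorrected distinction above, the sketch below contrasts the phase-locking value (a measure sensitive to zero-lag coupling) with the weighted phase-lag index (which discards zero-lag coupling). The abstract does not name the eight measures studied, so these two, and all signal parameters, are assumptions made for illustration only.

```python
# Minimal sketch: PLV ("uncorrected", sensitive to zero-lag coupling) vs.
# wPLI ("corrected", insensitive to zero-lag coupling). Toy data only.
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(0)
n_epochs, n_samples = 100, 500
# Two toy channels sharing an instantaneous (zero-lag) component,
# mimicking volume conduction.
common = rng.standard_normal((n_epochs, n_samples))
x = common + 0.5 * rng.standard_normal((n_epochs, n_samples))
y = common + 0.5 * rng.standard_normal((n_epochs, n_samples))

# Analytic signals and their cross-term
zx, zy = hilbert(x, axis=1), hilbert(y, axis=1)
cross = zx * np.conj(zy)

# Phase-locking value: uses the full phase difference, including zero lag
plv = np.abs(np.mean(np.exp(1j * np.angle(cross))))

# Weighted phase-lag index: uses only the imaginary part of the cross-term
imag = np.imag(cross)
wpli = np.abs(np.mean(imag)) / np.mean(np.abs(imag))

# The shared zero-lag component inflates PLV but leaves wPLI near zero.
print(f"PLV (uncorrected): {plv:.2f}, wPLI (corrected): {wpli:.2f}")
```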


Subject(s)
Connectome , Electroencephalography , Nerve Net , Humans , Electroencephalography/methods , Electroencephalography/standards , Adult , Connectome/standards , Connectome/methods , Female , Male , Reproducibility of Results , Nerve Net/diagnostic imaging , Nerve Net/physiology , Young Adult , Magnetic Resonance Imaging/standards , Brain/diagnostic imaging , Brain/physiology
3.
iScience ; 27(4): 109295, 2024 Apr 19.
Article in English | MEDLINE | ID: mdl-38558934

ABSTRACT

The study investigates age-related decline in listening abilities, particularly in noisy environments, where the challenge lies in extracting meaningful information from variable sensory input (figure-ground segregation). The research focuses on peripheral and central factors contributing to this decline, using a tone-cloud-based figure detection task. Results based on behavioral measures and event-related brain potentials (ERPs) indicate that, despite delayed perceptual processes and some deterioration in attention and executive functions with aging, the ability to detect sound sources in noise remains relatively intact. However, even mild hearing impairment significantly hampers the segregation of individual sound sources within a complex auditory scene. The severity of the hearing deficit correlates with an increased susceptibility to masking noise. The study underscores the impact of hearing impairment on auditory scene analysis and highlights the need for personalized interventions based on individual abilities.

4.
Cortex ; 172: 114-124, 2024 03.
Article in English | MEDLINE | ID: mdl-38295554

ABSTRACT

Event-related potentials (ERPs) acquired during task-free passive listening can be used to study how sensitivity to common pattern repetitions and rare deviations changes over time. These changes are purported to represent the formation and accumulation of precision in internal models that anticipate future states based on probabilistic and/or statistical learning. This study features an unexpected finding: a strong order-dependence in the speed with which deviant responses are elicited that anchors to first learning. Participants heard four repetitions of a sequence in which an equal number of short (30 msec) and long (60 msec) pure tones were arranged into four blocks in which one was common (the standard, p = .875) and the other rare (the deviant, p = .125), with probabilities alternating across blocks. Some participants always heard the sequences commencing with the 30 msec deviant block, and others always with the 60 msec deviant block first. A deviance-detection component known as mismatch negativity (MMN) was extracted from the responses, and the point in time at which the MMN reached maximum amplitude was used as the dependent variable. The results show that if participants heard sequences commencing with the 60 msec deviant block first, the MMN to the 60 msec and 30 msec deviants peaked at an equivalent latency. However, if participants heard sequences commencing with the 30 msec deviant first, the MMN peaked earlier to the 60 msec deviant. Furthermore, while the 30 msec MMN latency did not differ as a function of sequence composition, the 60 msec MMN latency did, and was earlier when the sequences began with a 30 msec deviant first. Examining MMN latency effects as a function of age and hearing level showed that the differentiation between the 30 msec and 60 msec MMN latencies expands with older age and elevated hearing thresholds, due to a prolongation of the time taken for the 30 msec MMN to peak. The observations are discussed with reference to how the initial sound composition may tune the auditory system to be more sensitive to different cues (i.e., offset responses vs. perceived loudness). The order effect demonstrates a remarkably powerful anchoring to first learning that might reflect initial tuning to the most valuable discriminating feature within a given listening environment, an effect that defies explanation based on statistical information alone.
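The dependent variable described above, MMN peak latency, is the time at which the deviant-minus-standard difference wave reaches its most negative value within a search window. The sketch below illustrates that step only; the sampling rate, the search window, and the toy waveforms are assumptions, not values from the study.

```python
# Sketch: peak latency of the MMN difference wave (deviant minus standard).
# All numeric values are illustrative assumptions.
import numpy as np

sfreq = 500.0                                  # sampling rate in Hz (assumed)
times = np.arange(-0.1, 0.4, 1 / sfreq)        # epoch from -100 to 400 ms

def mmn_peak_latency(erp_standard, erp_deviant, times, window=(0.1, 0.25)):
    """Latency (s) of the most negative point of the difference wave in `window`,
    for a single channel (e.g., a fronto-central site)."""
    diff = erp_deviant - erp_standard          # MMN difference wave
    mask = (times >= window[0]) & (times <= window[1])
    return times[mask][np.argmin(diff[mask])]  # most negative sample in window

# Toy ERPs: the deviant carries an extra negativity peaking at about 180 ms
standard = np.zeros_like(times)
deviant = -2e-6 * np.exp(-((times - 0.18) ** 2) / (2 * 0.02 ** 2))
print(f"MMN peak latency: {mmn_peak_latency(standard, deviant, times) * 1000:.0f} ms")
```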


Subject(s)
Electroencephalography , Evoked Potentials, Auditory , Humans , Evoked Potentials, Auditory/physiology , Acoustic Stimulation/methods , Electroencephalography/methods , Reaction Time/physiology , Evoked Potentials/physiology
5.
Cognition ; 243: 105670, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38016227

ABSTRACT

Newborn infants have been shown to extract temporal regularities from sound sequences, both by learning regular sequential properties and by extracting periodicity in the input, commonly referred to as a regular pulse or the 'beat'. However, these two types of regularities are often indistinguishable in isochronous sequences, as both statistical learning and beat perception can be elicited by the regular alternation of accented and unaccented sounds. Here, we manipulated the isochrony of sound sequences in order to disentangle statistical learning from beat perception in sleeping newborn infants in an EEG experiment, as previously done in adults and macaque monkeys. We used a binary accented sequence that induces a beat when presented with isochronous timing, but not when presented with randomly jittered timing. We compared mismatch responses to infrequent deviants falling on either accented or unaccented (i.e., odd and even) positions. Results showed a clear difference between metrical positions in the isochronous sequence, but not in the equivalent jittered sequence. This suggests that beat processing is present in newborns. Despite previous evidence for statistical learning in newborns, the effects of this ability were not detected in the jittered condition. These results show that statistical learning by itself does not fully explain beat processing in newborn infants.
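The design logic described above can be summarized in a short sketch: a binary accented/unaccented alternation with infrequent deviants, presented with either isochronous or randomly jittered onsets. All timing values and probabilities below are illustrative assumptions, not the study's parameters.

```python
# Sketch of the sequence logic: binary accenting, rare deviants, and
# isochronous vs. jittered onsets. Values are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
n_tones = 200
ioi = 0.25                                      # inter-onset interval in s (assumed)
p_deviant = 0.1                                 # deviant probability (assumed)

accented = np.arange(n_tones) % 2 == 0          # every other position is accented
is_deviant = rng.random(n_tones) < p_deviant    # infrequent deviants

# Isochronous onsets induce a beat; jittered onsets destroy the periodicity
onsets_iso = np.arange(n_tones) * ioi
onsets_jit = onsets_iso + rng.uniform(-0.08, 0.08, n_tones)  # +/- 80 ms (assumed)

# Deviants are then sorted by metrical position for the mismatch-response contrast
dev_accented = is_deviant & accented
dev_unaccented = is_deviant & ~accented
print(dev_accented.sum(), "deviants on accented positions,",
      dev_unaccented.sum(), "on unaccented positions")
```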


Subject(s)
Auditory Perception , Music , Humans , Infant, Newborn , Acoustic Stimulation/methods , Auditory Perception/physiology , Periodicity
7.
Neuroimage ; 281: 120384, 2023 11 01.
Article in English | MEDLINE | ID: mdl-37739198

ABSTRACT

The seemingly effortless ability of our auditory system to rapidly detect new events in a dynamic environment is crucial for survival. Whether the underlying brain processes are innate is unknown. To answer this question, electroencephalography was recorded while regularly patterned (REG) versus random (RAND) tone sequences were presented to sleeping neonates. Regular relative to random sequences elicited differential neural responses after only a single repetition of the pattern, indicating the existence of an innate capacity of the auditory system to detect auditory sequential regularities. We show that the newborn auditory system accumulates evidence only somewhat longer than the minimum amount determined by the ideal Bayesian observer model (the prediction from a variable-order Markov chain model) before detecting a repeating pattern. Thus, newborns can quickly form representations for regular features of the sound input, preparing the way for learning the contingencies of the environment.
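The ideal-observer comparison above can be illustrated with a deliberately simplified sketch: a first-order Markov observer with add-one smoothing that scores each incoming tone by its surprisal, which drops quickly once a repeating pattern appears. This is an assumption-laden toy version, not the variable-order Markov chain model used in the study.

```python
# Toy first-order Markov "observer": surprisal of each tone under a transition
# model updated online with add-one (Dirichlet) smoothing. Illustrative only;
# the study used a variable-order Markov chain model.
import numpy as np

def surprisal_trace(sequence, n_symbols):
    counts = np.ones((n_symbols, n_symbols))          # Dirichlet(1) prior
    trace = []
    prev = sequence[0]
    for tone in sequence[1:]:
        p = counts[prev, tone] / counts[prev].sum()   # predictive probability
        trace.append(-np.log2(p))                     # surprisal in bits
        counts[prev, tone] += 1                       # update the model
        prev = tone
    return np.array(trace)

rng = np.random.default_rng(2)
pattern = [0, 1, 2, 3, 4]
regular = pattern * 8                                 # REG: repeating pattern
random_seq = list(rng.integers(0, 5, size=40))        # RAND: matched length

print("mean surprisal over the last 20 tones:",
      surprisal_trace(regular, 5)[-20:].mean().round(2), "bits (REG) vs",
      surprisal_trace(random_seq, 5)[-20:].mean().round(2), "bits (RAND)")
```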


Subject(s)
Auditory Perception , Evoked Potentials, Auditory , Humans , Infant, Newborn , Acoustic Stimulation , Evoked Potentials, Auditory/physiology , Auditory Perception/physiology , Bayes Theorem , Brain/physiology , Electroencephalography
8.
Biol Psychol ; 182: 108651, 2023 09.
Article in English | MEDLINE | ID: mdl-37517603

ABSTRACT

Following a speaker in multi-talker environments requires the listener to separate the speakers' voices and continuously focus attention on one speech stream. While the dissimilarity of voices may make speaker separation easier, it may also affect maintaining the focus of attention. To assess these effects, electrophysiological (EEG) and behavioral data were collected from healthy young adults while they listened to two concurrent speech streams and performed an online lexical detection task and an offline recognition memory task. Perceptual speaker similarity was manipulated on four levels: identical, similar, dissimilar, and opposite-gender speakers. Behavioral and electrophysiological data suggested that, while speaker similarity hinders auditory stream segregation, dissimilarity hinders maintaining the focus of attention by making the to-be-ignored speech stream more distracting. Thus, resolving the cocktail party situation poses different problems at different levels of perceived speaker similarity, resulting in different listening strategies.


Subject(s)
Speech Perception , Young Adult , Humans , Speech Perception/physiology , Acoustic Stimulation/methods , Evoked Potentials , Attention/physiology
9.
Trends Neurosci ; 46(9): 726-737, 2023 09.
Article in English | MEDLINE | ID: mdl-37344237

ABSTRACT

Learning to decode and produce speech is one of the most demanding tasks faced by infants. Nevertheless, infants typically utter their first words within a year, and phrases soon follow. Here we review cognitive abilities of newborn infants that promote language acquisition, focusing primarily on studies tapping neural activity. The results of these studies indicate that infants possess core adult auditory abilities already at birth, including statistical learning and rule extraction from variable speech input. Thus, the neonatal brain is ready to categorize sounds, detect word boundaries, learn words, and separate speech streams: in short, to acquire language quickly and efficiently from everyday linguistic input.


Subject(s)
Speech Perception , Infant , Infant, Newborn , Adult , Humans , Language Development , Language , Learning , Brain , Speech
10.
Sci Rep ; 13(1): 10287, 2023 06 24.
Article in English | MEDLINE | ID: mdl-37355709

ABSTRACT

The ability to process sound duration is crucial from a very early age, as it lays the foundation for the main functions of auditory perception, such as object perception and music and language acquisition. With the availability of age-appropriate structural anatomical templates, we can reconstruct EEG source activity with much-improved reliability. The current study capitalized on this possibility by reconstructing the sources of event-related potential (ERP) waveforms sensitive to sound duration in 4- and 9-month-old infants. Infants were presented with short (200 ms) and long (300 ms) sounds, delivered equiprobably in random order. Two temporally separate ERP waveforms were found to be modulated by sound duration. Generators of these waveforms were mainly located in the primary and secondary auditory areas and other language-related regions. The results show marked developmental changes between 4 and 9 months, partly reflected by scalp-recorded ERPs, but appearing in the underlying generators in a far more nuanced way. The results also confirm the feasibility of applying anatomical templates in developmental populations.
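A template-based EEG source reconstruction of the kind referred to above can be sketched with MNE-Python. Note the assumptions: the study used age-appropriate infant templates, whereas the sketch uses the adult fsaverage template bundled with MNE, an ad hoc noise covariance, and simulated evoked data in place of the infants' duration-sensitive ERPs.

```python
# Hedged sketch of template-based EEG source reconstruction (MNE-Python).
# The adult fsaverage template and the simulated data are placeholders; the
# study used infant-appropriate anatomical templates.
import os.path as op
import numpy as np
import mne
from mne.datasets import fetch_fsaverage

fs_dir = fetch_fsaverage(verbose=False)
src = op.join(fs_dir, "bem", "fsaverage-ico-5-src.fif")
bem = op.join(fs_dir, "bem", "fsaverage-5120-5120-5120-bem-sol.fif")

# Placeholder EEG montage and evoked response (simulated data)
montage = mne.channels.make_standard_montage("standard_1020")
info = mne.create_info(montage.ch_names, sfreq=250.0, ch_types="eeg")
info.set_montage(montage)
data = 1e-6 * np.random.default_rng(4).standard_normal((len(montage.ch_names), 100))
evoked = mne.EvokedArray(data, info, tmin=-0.1)
evoked.set_eeg_reference("average", projection=True)

# Forward model from the template, then a distributed inverse solution
fwd = mne.make_forward_solution(evoked.info, trans="fsaverage",
                                src=src, bem=bem, eeg=True, meg=False)
cov = mne.make_ad_hoc_cov(evoked.info)             # placeholder noise covariance
inv = mne.minimum_norm.make_inverse_operator(evoked.info, fwd, cov)
stc = mne.minimum_norm.apply_inverse(evoked, inv, lambda2=1.0 / 9.0, method="dSPM")
print(stc)                                         # source estimate on the template brain
```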


Subject(s)
Auditory Cortex , Brain , Reproducibility of Results , Evoked Potentials , Auditory Perception , Electroencephalography/methods , Evoked Potentials, Auditory , Acoustic Stimulation
11.
Brain Res ; 1805: 148246, 2023 04 15.
Article in English | MEDLINE | ID: mdl-36657631

ABSTRACT

To process speech in a multi-talker environment, listeners need to segregate the mixture of incoming speech streams and focus their attention on one of them. Potentially, speech prosody could aid the segregation of different speakers, the selection of the desired speech stream, and the detection of targets within the attended stream. To test these possibilities, we recorded behavioral responses and extracted event-related potentials and functional brain networks from electroencephalographic signals recorded while participants listened to two concurrent speech streams, performing a lexical detection and a recognition memory task in parallel. Prosody manipulation was applied to the attended speech stream in one group of participants and to the ignored speech stream in another group. Naturally recorded speech stimuli were either intact, synthetically F0-flattened, or prosodically suppressed by the speaker. Results show that prosody - especially the parsing cues mediated by speech rate - facilitates stream selection, while playing a smaller role in auditory stream segmentation and target detection.


Subject(s)
Speech Perception , Humans , Speech Perception/physiology , Speech , Acoustic Stimulation/methods , Auditory Perception/physiology , Electroencephalography/methods
12.
Front Hum Neurosci ; 16: 952557, 2022.
Article in English | MEDLINE | ID: mdl-36393982

ABSTRACT

In the cocktail party situation, people with normal hearing usually follow a single speaker among multiple concurrent ones. However, there is no agreement in the literature as to whether the background is segregated into multiple streams/speakers. The current study varied the number of concurrent speech streams and investigated target detection and memory for the contents of a target stream, as well as the processing of distractors. A male-voiced target stream was presented either alone (single-speech), together with one male-voiced distractor (one-distractor), or together with a male- and a female-voiced distractor (two-distractor). Behavioral measures of target detection and content-tracking performance, as well as event-related brain potentials (ERPs) related to target and distractor detection, were assessed. We found that the N2 amplitude decreased whereas the P3 amplitude increased from the single-speech to the concurrent speech stream conditions. Importantly, the behavioral effect of distractors differed between the conditions with one vs. two distractor speech streams. Moreover, the non-zero voltages in the N2 time window for distractor numerals and in the P3 time window for syntactic violations appearing in the non-target speech stream differed significantly between the one- and two-distractor conditions for the same (male) speaker. These results support the notion that the two background speech streams are segregated, as they show that distractors and syntactic violations appearing in the non-target streams are processed even when two non-target speech streams are delivered together with the target stream.

14.
Dev Cogn Neurosci ; 55: 101113, 2022 06.
Article in English | MEDLINE | ID: mdl-35605476

ABSTRACT

Infants are able to extract words from speech early in life. Here we show that the quality of forming longer-term representations for word forms at birth predicts expressive language ability at the age of two years. Seventy-five neonates were familiarized with two spoken disyllabic pseudowords. We then tested whether the neonate brain predicts the second syllable from the first one by presenting a familiarized pseudoword frequently and occasionally violating the learned syllable combination with different rare pseudowords. Distinct brain responses were elicited by predicted and unpredicted word endings, suggesting that the neonates had learned the familiarized pseudowords. The difference between responses to predicted and unpredicted pseudowords, indexing the quality of word-form learning during familiarization, significantly correlated with expressive language scores (the mean length of utterance) at 24 months in the same infants. These findings suggest that 1) neonates can memorize disyllabic words so that a learned first syllable generates predictions for the word ending, and 2) early individual differences in the quality of word-form learning correlate with later language skills. This relationship may aid the early identification of infants at risk for language impairment.
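The key statistic described above is a simple brain-behavior correlation: a per-infant ERP difference (predicted minus unpredicted word endings) related to the mean length of utterance at 24 months. The sketch below illustrates that computation on simulated values; the variable names and numbers are assumptions, not the study's data.

```python
# Sketch: correlating a per-infant ERP difference score with later language
# outcome (mean length of utterance, MLU). Simulated values only.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
n_infants = 75
erp_difference = rng.normal(0.0, 1.0, n_infants)   # per-infant ERP difference (simulated, a.u.)
mlu_24m = 1.5 + 0.3 * erp_difference + rng.normal(0.0, 0.5, n_infants)  # simulated MLU

r, p = pearsonr(erp_difference, mlu_24m)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")
```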


Subject(s)
Speech Perception , Child, Preschool , Humans , Infant , Infant, Newborn , Language , Learning/physiology , Speech , Speech Perception/physiology , Verbal Learning/physiology
15.
Sci Rep ; 12(1): 5905, 2022 04 07.
Article in English | MEDLINE | ID: mdl-35393525

ABSTRACT

Hearing is one of the earliest senses to develop and is quite mature by birth. Contemporary theories assume that regularities in sound are exploited by the brain to create internal models of the environment. Through statistical learning, internal models extrapolate from patterns to predictions about subsequent experience. In adults, altered brain responses to sound enable us to infer the existence and properties of these models. In this study, brain potentials were used to determine whether newborns exhibit context-dependent modulations of a brain response that can be used to infer the existence and properties of internal models. Results indicate significant context-dependence in the responsivity to sound in newborns. When common and rare sounds continue in stable probabilities over a very long period, neonates respond to all sounds equivalently (no differentiation). However, when the same common and rare sounds at the same probabilities alternate over time, the neonate responses show clear differentiations. The context-dependence is consistent with the possibility that the neonate brain produces more precise internal models that discriminate between contexts when there is an emergent structure to be discovered, but appears to adopt broader models when discrimination delivers little or no additional information about the environment.


Subject(s)
Auditory Perception , Learning , Acoustic Stimulation/methods , Adult , Auditory Perception/physiology , Hearing , Humans , Infant, Newborn , Learning/physiology , Sound
16.
Cereb Cortex ; 32(11): 2412-2423, 2022 05 30.
Article in English | MEDLINE | ID: mdl-34564713

ABSTRACT

Many aspects of cognitive ability and brain function that change as we age look like deficits on account of measurable differences in comparison to younger adult groups. One such difference occurs in auditory sensory responses that index perceptual learning. Meta-analytic findings show reliable age-related differences in auditory responses to repetitive patterns of sound and to rare violations of those patterns, variously attributed to deficits in auditory sensory memory and inhibition. Here, we determine whether the proposed deficits would render older adults less prone to primacy effects, robustly observed in young adults, which present as a tendency for first learning to have a disproportionate influence over later perceptual inference. The results confirm this reduced sensitivity to primacy effects but do not support impairment in auditory sensory memory as the origin of this difference. Instead, the aging brain produces data consistent with shorter timescales of contextual reference. In conclusion, age-related differences observed previously for perceptual inference appear highly context-specific, necessitating reconsideration of whether and to what function the notion of deficit should be attributed, and even whether the notion of deficit is appropriate at all.


Subject(s)
Aging , Evoked Potentials, Auditory , Acoustic Stimulation/methods , Aged , Aging/physiology , Auditory Perception/physiology , Cognition/physiology , Evoked Potentials, Auditory/physiology , Humans , Memory/physiology , Memory Disorders , Young Adult
17.
Brain Lang ; 218: 104964, 2021 07.
Article in English | MEDLINE | ID: mdl-33964668

ABSTRACT

The effects of lexical meaning and lexical familiarity on auditory deviance detection were investigated by presenting oddball sequences of words while participants ignored the stimuli. Stimulus sequences were composed of words that varied in word class (nouns vs. function words) and frequency of language use (high vs. low frequency) in a factorial design, with the roles of frequently presented stimuli (Standards) and infrequently presented ones (Deviants) fully crossed. Deviants elicited the Mismatch Negativity component of the event-related brain potential. Modulating effects of lexical meaning were obtained, revealing processing advantages for denotationally meaningful items. However, no effect of word frequency was observed. These results demonstrate that an apparently low-level function, such as auditory deviance detection, utilizes information from the mental lexicon even for task-irrelevant stimuli.


Subject(s)
Evoked Potentials, Auditory , Language , Acoustic Stimulation , Brain , Electroencephalography , Evoked Potentials , Humans , Recognition, Psychology
18.
Psychophysiology ; 58(3): e13747, 2021 03.
Article in English | MEDLINE | ID: mdl-33314262

ABSTRACT

People with normal hearing can usually follow one of several concurrent speakers. Speech tempo affects both the separation of concurrent speech streams and information extraction from them. The current study varied the tempo of two concurrent speech streams to investigate these processes in a multi-talker situation. Listeners performed a target-detection and a content-tracking task, while target-related ERPs and functional brain networks sensitive to speech tempo were extracted from the EEG signal. At slower than normal speech tempo, building the two streams required longer processing times and possibly the utilization of higher-order (e.g., syntactic and semantic) cues. The observed longer reaction times and higher connectivity strength in a theta-band network associated with frontal control over auditory/speech processing are compatible with this notion. With increasing tempo, target detection performance decreased and the N2b and P3b amplitudes increased. These data suggest an increased need for strictly allocating target-detection-related resources at higher tempo. This was also reflected by the observed increase in the strength of gamma-band networks within and between frontal, temporal, and cingular areas. At the fastest tested speech tempo, there was a sharp drop in recognition memory performance, while target detection performance increased compared to the normal speech tempo. This was accompanied by a significant increase in the strength of a low-alpha network associated with the suppression of task-irrelevant speech. These results suggest that participants prioritized the immediate target detection task over continuous content tracking, likely due to a capacity limit reached at the fastest speech tempo.


Subject(s)
Attention/physiology , Brain Waves/physiology , Cerebral Cortex/physiology , Connectome , Evoked Potentials/physiology , Nerve Net/physiology , Psychomotor Performance/physiology , Speech Perception/physiology , Time Perception/physiology , Adult , Event-Related Potentials, P300/physiology , Evoked Potentials, Auditory/physiology , Female , Humans , Male , Young Adult
19.
Clin EEG Neurosci ; 52(1): 3-28, 2021 Jan.
Article in English | MEDLINE | ID: mdl-32975150

ABSTRACT

INTRODUCTION: The global COVID-19 pandemic has affected the economy, daily life, and mental/physical health. The latter includes the use of electroencephalography (EEG) in clinical practice and research. We report a survey of the impact of COVID-19 on the use of clinical EEG in practice and research in several countries, and the recommendations of an international panel of experts for the safe application of EEG during and after this pandemic. METHODS: Fifteen clinicians from 8 different countries and 25 researchers from 13 different countries reported the impact of COVID-19 on their EEG activities, the procedures implemented in response to the COVID-19 pandemic, and precautions planned or already implemented during the reopening of EEG activities. RESULTS: Of the 15 clinical centers responding, 11 reported a total stoppage of all EEG activities, while 4 reduced the number of tests per day. In research settings, all 25 laboratories reported a complete stoppage of activity, with 7 laboratories reopening to some extent since initial closure. In both settings, recommended precautions for restarting or continuing EEG recording included strict hygiene rules, social distancing, and assessment for infection symptoms among staff and patients/participants. CONCLUSIONS: The COVID-19 pandemic interfered with the use of EEG recordings in clinical practice and even more so in clinical research. We suggest updated best practices to allow safe EEG recordings in both research and clinical settings. The continued use of EEG is important for those with psychiatric diseases, particularly in times of social alarm such as the COVID-19 pandemic.


Subject(s)
COVID-19/virology , Consensus , Electroencephalography , SARS-CoV-2/pathogenicity , Brain/physiopathology , Brain Mapping/methods , COVID-19/physiopathology , Electroencephalography/adverse effects , Electroencephalography/methods , Humans , Mental Disorders/physiopathology
20.
Cortex ; 130: 387-400, 2020 09.
Article in English | MEDLINE | ID: mdl-32750602

ABSTRACT

Speech unfolds at different time scales. Therefore, neuronal mechanisms involved in speech processing should likewise operate at different (corresponding) time scales. The present study aimed to identify the speech units relevant for selecting speech streams in a multi-talker situation. Functional connectivity was extracted from the continuous EEG while young adults detected targets within one stream in the presence of a different, task-irrelevant stream. In two separate groups, either the attended or the ignored stream was manipulated so that it contained intact, word-wise scrambled, syllable-wise scrambled, or spectrally scrambled speech. We found functional brain networks that were sensitive to the difference between situations in which speech was meaningful at the sentence vs. the word level, but not between situations in which speech was meaningful at the word level vs. valid only at the syllable level, irrespective of whether the speech units were manipulated in the attended or the ignored stream. These functional brain networks operated in the delta and theta bands, corresponding to the integration of information over longer time windows. Further, the networks that could be linked with suppressing information from the to-be-ignored stream included brain areas associated with high-level processing of speech. These results are compatible with late filtering models of auditory attention, as they suggest that the length of intact speech units in the to-be-ignored stream affects processes of attentional selection. However, we found no evidence for differences in speech-to-brain coupling as a function of the intact unit of speech in either stream. Thus, although the current results do not rule out that early stages of speech processing affect stream selection in a cocktail party situation, neither do they provide support for it.


Subject(s)
Auditory Cortex , Speech Perception , Acoustic Stimulation , Attention , Brain , Electroencephalography , Humans , Speech , Young Adult