Results 1 - 20 of 37
1.
J Am Acad Child Adolesc Psychiatry ; 63(2): 114-116, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37402465

ABSTRACT

Computer-based cognitive tasks aimed at assessing attention and executive function are used regularly, for both clinical and research purposes, with the belief that they provide an objective assessment of symptoms associated with attention-deficit/hyperactivity disorder (ADHD). As rates of ADHD diagnosis appear to be exploding, particularly since the onset of COVID-19,1 there is no doubt as to the need for reliable and valid diagnostic tools for ADHD. One of the most common types of such cognitive tests is the continuous performance task (CPT), which putatively not only helps in diagnosing ADHD but can also differentiate between ADHD subtypes. We urge diagnosticians to take a more cautious approach toward this practice and to reconsider how CPTs are used, given new evidence.


Subject(s)
Attention Deficit Disorder with Hyperactivity , Humans , Attention Deficit Disorder with Hyperactivity/psychology , Neuropsychological Tests , Executive Function , Emotions
2.
Cereb Cortex ; 34(1)2024 01 14.
Article in English | MEDLINE | ID: mdl-38142293

ABSTRACT

Selective attention to one speaker in multi-talker environments can be affected by the acoustic and semantic properties of speech. One highly ecological feature of speech that has the potential to assist in selective attention is voice familiarity. Here, we tested how voice familiarity interacts with selective attention by measuring the neural speech-tracking response to both target and non-target speech in a dichotic listening "Cocktail Party" paradigm. We recorded magnetoencephalography (MEG) from n = 33 participants, presented with concurrent narratives in two different voices, and instructed to pay attention to one ear ("target") and ignore the other ("non-target"). Participants were familiarized with one of the voices during the week prior to the experiment, rendering this voice familiar to them. Using multivariate speech-tracking analysis we estimated the neural responses to both stimuli and replicated their well-established modulation by selective attention. Importantly, speech-tracking was also affected by voice familiarity, showing an enhanced response for target speech and a reduced response for non-target speech in the contralateral hemisphere, when these were in a familiar vs. an unfamiliar voice. These findings offer valuable insight into how voice familiarity, and by extension auditory semantics, interact with goal-driven attention, and facilitate perceptual organization and speech processing in noisy environments.
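The "multivariate speech-tracking analysis" mentioned above is commonly implemented as a temporal response function (TRF): a regularized regression from time-lagged copies of the speech envelope to the neural signal. A minimal single-channel sketch of that general idea on synthetic data (this is not the authors' pipeline; the sampling rate, lag range, and variable names are illustrative):

import numpy as np

def lagged_design(envelope, n_lags):
    # Design matrix of time-lagged copies of the stimulus envelope
    n = len(envelope)
    X = np.zeros((n, n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = envelope[:n - lag]
    return X

def fit_trf(envelope, neural, n_lags=60, ridge=1.0):
    # Temporal response function estimated by ridge regression
    X = lagged_design(envelope, n_lags)
    return np.linalg.solve(X.T @ X + ridge * np.eye(n_lags), X.T @ neural)

# Synthetic check: a "neural" channel that follows the envelope with a 100 ms lag
fs = 100                                      # illustrative sampling rate (Hz)
rng = np.random.default_rng(0)
env = np.abs(rng.standard_normal(fs * 60))    # fake 60 s speech envelope
neural = np.roll(env, 10) + 0.5 * rng.standard_normal(env.size)
trf = fit_trf(env, neural)
print("peak lag (ms):", trf.argmax() * 1000 / fs)   # ~100 ms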


Subject(s)
Speech Perception , Voice , Humans , Speech , Speech Perception/physiology , Recognition, Psychology/physiology , Semantics
3.
J Neurosci ; 43(27): 5045-5056, 2023 07 05.
Article in English | MEDLINE | ID: mdl-37336758

ABSTRACT

The well-known "cocktail party effect" refers to incidental detection of salient words, such as one's own-name, in supposedly unattended speech. However, empirical investigation of the prevalence of this phenomenon and the underlying mechanisms has been limited to extremely artificial contexts and has yielded conflicting results. We introduce a novel empirical approach for revisiting this effect under highly ecological conditions, by immersing participants in a multisensory Virtual Café and using realistic stimuli and tasks. Participants (32 female, 18 male) listened to conversational speech from a character at their table, while a barista in the back of the café called out food orders. Unbeknownst to them, the barista sometimes called orders containing either their own-name or words that created semantic violations. We assessed the neurophysiological response-profile to these two probes in the task-irrelevant barista stream by measuring participants' brain activity (EEG), galvanic skin response and overt gaze-shifts.

SIGNIFICANCE STATEMENT: We found distinct neural and physiological responses to participants' own-name and semantic violations, indicating their incidental semantic processing despite being task-irrelevant. Interestingly, these responses were covert in nature and gaze-patterns were not associated with word-detection responses. This study emphasizes the nonexclusive nature of attention in multimodal ecological environments and demonstrates the brain's capacity to extract linguistic information from additional sources outside the primary focus of attention.


Subject(s)
Semantics , Speech Perception , Humans , Male , Female , Speech , Auditory Perception , Attention/physiology , Linguistics , Speech Perception/physiology
4.
Neuroimage ; 270: 119984, 2023 04 15.
Article in English | MEDLINE | ID: mdl-36854352

ABSTRACT

Speech comprehension is severely compromised when several people talk at once, due to limited perceptual and cognitive resources. In such circumstances, top-down attention mechanisms can actively prioritize processing of task-relevant speech. However, behavioral and neural evidence suggest that this selection is not exclusive, and the system may have sufficient capacity to process additional speech input as well. Here we used a data-driven approach to contrast two opposing hypotheses regarding the system's capacity to co-represent competing speech: Can the brain represent two speakers equally, or is the system fundamentally limited, resulting in tradeoffs between them? Neural activity was measured using magnetoencephalography (MEG) as human participants heard concurrent speech narratives and engaged in two tasks: Selective Attention, where only one speaker was task-relevant, and Distributed Attention, where both speakers were equally relevant. Analysis of neural speech-tracking revealed that both tasks engaged a similar network of brain regions involved in auditory processing, attentional control and speech processing. Interestingly, during both Selective and Distributed Attention the neural representation of competing speech showed a bias towards one speaker. This is in line with proposed 'bottlenecks' for co-representation of concurrent speech and suggests that good performance on distributed attention tasks may be achieved by toggling attention between speakers over time.
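The bias toward one speaker reported here can be summarized with a simple index over per-trial tracking scores for the two speakers (e.g., envelope-reconstruction correlations). A minimal sketch under that assumption; the function name and example values are purely illustrative, not the paper's actual measure:

import numpy as np

def tracking_bias(r_speaker_a, r_speaker_b):
    # +1 = only speaker A represented, -1 = only speaker B, 0 = equal co-representation
    a, b = np.asarray(r_speaker_a, float), np.asarray(r_speaker_b, float)
    return (a - b) / (np.abs(a) + np.abs(b) + 1e-12)

# Hypothetical per-trial neural tracking correlations for two concurrent speakers
r_a = np.array([0.12, 0.15, 0.10, 0.18])
r_b = np.array([0.04, 0.06, 0.03, 0.05])
print("mean bias:", tracking_bias(r_a, r_b).mean().round(2))   # > 0: biased toward speaker A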


Subject(s)
Speech Perception , Humans , Speech , Auditory Perception , Hearing , Magnetoencephalography , Acoustic Stimulation
5.
Cereb Cortex ; 33(9): 5361-5374, 2023 04 25.
Article in English | MEDLINE | ID: mdl-36331339

ABSTRACT

Many situations require focusing attention on one speaker, while monitoring the environment for potentially important information. Some have proposed that dividing attention among 2 speakers involves behavioral trade-offs, due to limited cognitive resources. However, the severity of these trade-offs, particularly under ecologically valid circumstances, is not well understood. We investigated the capacity to process simultaneous speech using a dual-task paradigm simulating task-demands and stimuli encountered in real life. Participants listened to conversational narratives (Narrative Stream) and monitored a stream of announcements (Barista Stream), to detect when their order was called. We measured participants' performance, neural activity, and skin conductance as they engaged in this dual-task. Participants achieved extremely high dual-task accuracy, with no apparent behavioral trade-offs. Moreover, robust neural and physiological responses were observed for target-stimuli in the Barista Stream, alongside significant neural speech-tracking of the Narrative Stream. These results suggest that humans have substantial capacity to process simultaneous speech and do not suffer from insufficient processing resources, at least for this highly ecological task-combination and level of perceptual load. Results also confirmed the ecological validity of the advantage for detecting one's own name at the behavioral, neural, and physiological level, highlighting the contribution of personal relevance when processing simultaneous speech.


Subject(s)
Speech Perception , Speech , Humans , Speech/physiology , Speech Perception/physiology , Auditory Perception , Attention/physiology
6.
Cognition ; 231: 105313, 2023 02.
Article in English | MEDLINE | ID: mdl-36344304

ABSTRACT

For seventy years, auditory selective attention research has focused on studying the cognitive mechanisms of prioritizing the processing of a 'main' task-relevant stimulus in the presence of 'other' stimuli. However, a closer look at this body of literature reveals deep empirical inconsistencies and theoretical confusion regarding the extent to which this 'other' stimulus is processed. We argue that many key debates regarding attention arise, at least in part, from inappropriate terminological choices for experimental variables that may not accurately map onto the cognitive constructs they are meant to describe. Here we critically review the more common or disruptive terminological ambiguities, differentiate between methodology-based and theory-derived terms, and unpack the theoretical assumptions underlying different terminological choices. Particularly, we offer an in-depth analysis of the terms 'unattended' and 'distractor' and demonstrate how their use can lead to conflicting theoretical inferences. We also offer a framework for thinking about terminology in a more productive and precise way, in the hope of fostering more constructive debates and promoting more nuanced and accurate cognitive models of selective attention.


Subject(s)
Attention , Humans , Acoustic Stimulation
7.
J Speech Lang Hear Res ; 65(3): 923-939, 2022 03 08.
Article in English | MEDLINE | ID: mdl-35133867

ABSTRACT

PURPOSE: Humans have a near-automatic tendency to entrain their motor actions to rhythms in the environment. Entrainment has been hypothesized to play an important role in processing naturalistic stimuli, such as speech and music, which have intrinsically rhythmic properties. Here, we studied two facets of entraining one's rhythmic motor actions to an external stimulus: (a) synchronized finger tapping to auditory rhythmic stimuli and (b) memory-paced reproduction of a previously heard rhythm. METHOD: Using modifications of the Synchronization-Continuation tapping paradigm, we studied how these two rhythmic behaviors were affected by different stimulus and task features. We tested synchronization and memory-paced tapping across a broad range of rates, with stimulus onset asynchronies ranging from subsecond to suprasecond intervals, both for strictly isochronous tone sequences and for rhythmic speech stimuli (counting from 1 to 10), which are more ecological yet less isochronous. We also asked what role motor engagement plays in forming a stable internal representation for rhythms and guiding memory-paced tapping. RESULTS AND CONCLUSIONS: Our results show that individuals can flexibly synchronize their motor actions to a very broad range of rhythms. However, this flexibility does not extend to memory-paced tapping, which is accurate only in a narrower range of rates, around 1.5 Hz. This pattern suggests that intrinsic rhythmic defaults in the auditory and/or motor system influence the internal representation of rhythms, in the absence of an external pacemaker. Interestingly, memory-paced tapping for speech rhythms and simple tone sequences shared similar "optimal rates," although with reduced accuracy, suggesting that internal constraints on rhythmic entrainment generalize to more ecological stimuli. Last, we found that actively synchronizing to tones versus passively listening to them led to more accurate memory-paced tapping performance, which emphasizes the importance of action-perception interactions in forming stable entrainment to external rhythms.
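The two behaviors studied here reduce to two standard measures computed from tap times: tap-stimulus asynchronies during synchronization, and inter-tap intervals relative to the target interval during memory-paced continuation. A minimal sketch of both, on made-up data (the 1.5 Hz target and all values are illustrative, not the study's stimuli):

import numpy as np

def synchronization_error(tap_times, stim_times):
    # Mean absolute asynchrony (s): each tap paired with the nearest stimulus onset
    async_ = [tap - stim_times[np.argmin(np.abs(stim_times - tap))] for tap in tap_times]
    return float(np.mean(np.abs(async_)))

def continuation_deviation(tap_times, target_interval):
    # Relative deviation of produced inter-tap intervals from the memorized interval
    itis = np.diff(tap_times)
    return float(np.mean(np.abs(itis - target_interval)) / target_interval)

stim = np.arange(0, 10, 0.667)                            # 1.5 Hz pacing tones
rng = np.random.default_rng(1)
taps_sync = stim + rng.normal(0.02, 0.03, stim.size)      # synchronized tapping
taps_cont = np.cumsum(np.full(15, 0.70))                  # slightly slow continuation tapping
print("sync error (s):", round(synchronization_error(taps_sync, stim), 3))
print("continuation deviation:", round(continuation_deviation(taps_cont, 0.667), 3))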


Subject(s)
Music , Speech , Auditory Perception , Humans
8.
Neurobiol Lang (Camb) ; 3(2): 214-234, 2022.
Article in English | MEDLINE | ID: mdl-37215560

ABSTRACT

Statistical learning (SL) is hypothesized to play an important role in language development. However, the measures typically used to assess SL, particularly at the level of individual participants, are largely indirect and have low sensitivity. Recently, a neural metric based on frequency-tagging has been proposed as an alternative measure for studying SL. We tested the sensitivity of frequency-tagging measures for studying SL in individual participants in an artificial language paradigm, using non-invasive electroencephalographic (EEG) recordings of neural activity in humans. Importantly, we used carefully constructed controls to address potential acoustic confounds of the frequency-tagging approach, and compared the sensitivity of EEG-based metrics to both explicit and implicit behavioral tests of SL. Group-level results confirm that frequency-tagging can provide a robust indication of SL for an artificial language, above and beyond potential acoustic confounds. However, this metric had very low sensitivity at the level of individual participants, with significant effects found in only 30% of participants. Comparison of the neural metric to previously established behavioral measures for assessing SL showed a significant yet weak correspondence with performance on an implicit task, which was above chance in 70% of participants, but no correspondence with the more common explicit 2-alternative forced-choice task, where performance did not exceed chance level. Given the proposed ubiquitous nature of SL, our results highlight some of the operational and methodological challenges of obtaining robust metrics for assessing SL, as well as the potential confounds that should be taken into account when using the frequency-tagging approach in EEG studies.
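The frequency-tagging metric described here boils down to asking whether the EEG spectrum shows a peak at the word (triplet) rate relative to neighboring frequency bins. A minimal sketch of such a spectral signal-to-noise computation on a synthetic signal (the 4 Hz syllable and 4/3 Hz word rates, and all other values, are illustrative rather than the study's parameters):

import numpy as np

def spectral_snr(signal, fs, target_freq, n_neighbors=5):
    # Power at the target frequency divided by the mean power of neighboring bins
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    idx = int(np.argmin(np.abs(freqs - target_freq)))
    neighbors = np.r_[power[idx - n_neighbors:idx], power[idx + 1:idx + 1 + n_neighbors]]
    return power[idx] / neighbors.mean()

fs, dur = 250, 120                                # 2 min of synthetic "EEG" at 250 Hz
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(2)
eeg = np.sin(2 * np.pi * 4 * t) + 0.6 * np.sin(2 * np.pi * (4 / 3) * t) + rng.standard_normal(t.size)
print("SNR at the word rate:", round(spectral_snr(eeg, fs, 4 / 3), 1))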

9.
Cereb Cortex ; 31(12): 5560-5569, 2021 10 22.
Article in English | MEDLINE | ID: mdl-34185837

ABSTRACT

Sensory perception is a product of interactions between the internal state of an organism and the physical attributes of a stimulus. It has been shown across the animal kingdom that perception and sensory-evoked physiological responses are modulated depending on whether or not the stimulus is the consequence of voluntary actions. These phenomena are often attributed to motor signals sent to relevant sensory regions that convey information about upcoming sensory consequences. However, the neurophysiological signature of action-locked modulations in sensory cortex, and their relationship with perception, is still unclear. In the current study, we recorded neurophysiological (using Magnetoencephalography) and behavioral responses from 16 healthy subjects performing an auditory detection task of faint tones. Tones were either generated by subjects' voluntary button presses or occurred predictably following a visual cue. By introducing a constant temporal delay between button press/cue and tone delivery, and applying source-level analysis, we decoupled action-locked and auditory-locked activity in auditory cortex. We show action-locked evoked-responses in auditory cortex following sound-triggering actions and preceding sound onset. Such evoked-responses were not found for button-presses that were not coupled with sounds, or sounds delivered following a predictive visual cue. Our results provide evidence for efferent signals in human auditory cortex that are locked to voluntary actions coupled with future auditory consequences.


Subject(s)
Auditory Cortex , Animals , Auditory Cortex/physiology , Auditory Perception/physiology , Humans , Magnetoencephalography/methods , Sound
10.
Elife ; 10, 2021 05 04.
Article in English | MEDLINE | ID: mdl-33942722

ABSTRACT

Paying attention to one speaker in a noisy place can be extremely difficult, because to-be-attended and task-irrelevant speech compete for processing resources. We tested whether this competition is restricted to acoustic-phonetic interference or if it extends to competition for linguistic processing as well. Neural activity was recorded using Magnetoencephalography as human participants were instructed to attend to natural speech presented to one ear, and task-irrelevant stimuli were presented to the other. Task-irrelevant stimuli consisted either of random sequences of syllables, or syllables structured to form coherent sentences, using hierarchical frequency-tagging. We find that the phrasal structure of structured task-irrelevant stimuli was represented in the neural response in left inferior frontal and posterior parietal regions, indicating that selective attention does not fully eliminate linguistic processing of task-irrelevant speech. Additionally, neural tracking of to-be-attended speech in left inferior frontal regions was enhanced when competing with structured task-irrelevant stimuli, suggesting inherent competition between them for linguistic processing.


We are all familiar with the difficulty of trying to pay attention to a person speaking in a noisy environment, something often known as the 'cocktail party problem'. This can be especially challenging when the background noise we are trying to filter out is another conversation that we can understand. In order to avoid being distracted in these kinds of situations, we need selective attention, the cognitive process that allows us to attend to one stimulus and to ignore other irrelevant sensory information. How the brain processes the sounds in our environment and prioritizes them is still not clear. One of the central questions is whether we can take in information from several speakers at the same time or whether we can only understand speech from one speaker at a time. Neuroimaging techniques can shed light on this matter by measuring brain activity while participants listen to competing speech stimuli, helping researchers understand how this information is processed by the brain. Now, Har-Shai Yahav and Zion Golumbic measured the brain activity of 30 participants as they listened to two speech streams in their native language, Hebrew. They heard each stream in a different ear and tried to focus their attention on only one of the speakers. Participants always had to attend to natural speech, while the sound they had to ignore could be either natural speech or unintelligible syllable sequences. The activity of the brain was recorded using magnetoencephalography, a non-invasive technique that measures the magnetic fields generated by the electrical activity of neurons in the brain. The results showed that unattended speech activated brain areas related to both hearing and language. Thus, unattended speech was processed not only at the acoustic level (as any other type of sound would be), but also at the linguistic level. In addition, the brain response to the attended speech in brain regions related to language was stronger when the competing sound was natural speech compared to random syllables. This suggests that the two speech inputs compete for the same processing resources, which may explain why we find it difficult to stay focused in a conversation when there are other people talking in the background. This study contributes to our understanding of how the brain processes multiple auditory inputs at once. In addition, it highlights the fact that selective attention is a dynamic process of balancing the cognitive resources allocated to competing information rather than an all-or-none process. A potential application of these findings could be the design of smart devices to help individuals focus their attention in noisy environments.


Subject(s)
Frontal Lobe/physiology , Language , Parietal Lobe/physiology , Speech , Adolescent , Adult , Attention , Female , Humans , Linguistics , Magnetoencephalography/methods , Male , Task Performance and Analysis , Young Adult
11.
Mind Brain Educ ; 15(4): 354-370, 2021 Nov.
Article in English | MEDLINE | ID: mdl-35875415

ABSTRACT

As the field of educational neuroscience continues to grow, questions have emerged regarding the ecological validity and applicability of this research to educational practice. Recent advances in mobile neuroimaging technologies have made it possible to conduct neuroscientific studies directly in naturalistic learning environments. We propose that embedding mobile neuroimaging research in a cycle (Matusz, Dikker, Huth, & Perrodin, 2019), involving lab-based, seminaturalistic, and fully naturalistic experiments, is well suited for addressing educational questions. With this review, we take a cautious approach, by discussing the valuable insights that can be gained from mobile neuroimaging technology, including electroencephalography and functional near-infrared spectroscopy, as well as the challenges posed by bringing neuroscientific methods into the classroom. Research paradigms used alongside mobile neuroimaging technology vary considerably. To illustrate this point, studies are discussed with increasingly naturalistic designs. We conclude with several ethical considerations that should be taken into account in this unique area of research.

12.
J Neurosci ; 40(44): 8530-8542, 2020 10 28.
Article in English | MEDLINE | ID: mdl-33023923

ABSTRACT

Natural conversation is multisensory: when we can see the speaker's face, visual speech cues improve our comprehension. The neuronal mechanisms underlying this phenomenon remain unclear. The two main alternatives are visually mediated phase modulation of neuronal oscillations (excitability fluctuations) in auditory neurons and visual input-evoked responses in auditory neurons. Investigating this question using naturalistic audiovisual speech with intracranial recordings in humans of both sexes, we find evidence for both mechanisms. Remarkably, auditory cortical neurons track the temporal dynamics of purely visual speech using the phase of their slow oscillations and phase-related modulations in broadband high-frequency activity. Consistent with known perceptual enhancement effects, the visual phase reset amplifies the cortical representation of concomitant auditory speech. In contrast to this, and in line with earlier reports, visual input reduces the amplitude of evoked responses to concomitant auditory input. We interpret the combination of improved phase tracking and reduced response amplitude as evidence for more efficient and reliable stimulus processing in the presence of congruent auditory and visual speech inputs.

SIGNIFICANCE STATEMENT: Watching the speaker can facilitate our understanding of what is being said. The mechanisms responsible for this influence of visual cues on the processing of speech remain incompletely understood. We studied these mechanisms by recording the electrical activity of the human brain through electrodes implanted surgically inside the brain. We found that visual inputs can operate by directly activating auditory cortical areas, and also indirectly by modulating the strength of cortical responses to auditory input. Our results help to understand the mechanisms by which the brain merges auditory and visual speech into a unitary perception.
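The two candidate mechanisms contrasted here correspond to two standard single-trial measures: inter-trial phase coherence (phase modulation/reset of ongoing oscillations) and the trial-averaged evoked response (additive input-evoked activity). A minimal sketch of both computations on synthetic narrow-band epochs (array shapes and parameters are illustrative; this is not the paper's analysis code):

import numpy as np
from scipy.signal import hilbert

def inter_trial_coherence(epochs):
    # |mean unit phase vector| across trials, per time point; epochs: (n_trials, n_times)
    phase = np.angle(hilbert(epochs, axis=1))
    return np.abs(np.exp(1j * phase).mean(axis=0))

def evoked_response(epochs):
    # Trial-averaged signal: the additive evoked component
    return epochs.mean(axis=0)

fs, n_trials, n_times = 200, 50, 400
t = np.arange(n_times) / fs
rng = np.random.default_rng(3)
epochs = np.sin(2 * np.pi * 5 * t) + 0.8 * rng.standard_normal((n_trials, n_times))  # phase-aligned 5 Hz + noise
print("peak ITC:", round(float(inter_trial_coherence(epochs).max()), 2))
print("peak |evoked|:", round(float(np.abs(evoked_response(epochs)).max()), 2))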


Subject(s)
Auditory Cortex/physiology , Evoked Potentials/physiology , Nonverbal Communication/physiology , Adult , Drug Resistant Epilepsy/surgery , Electrocorticography , Evoked Potentials, Auditory/physiology , Evoked Potentials, Visual/physiology , Female , Humans , Middle Aged , Neurons/physiology , Nonverbal Communication/psychology , Photic Stimulation , Young Adult
13.
Data Brief ; 32: 106044, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32775563

ABSTRACT

Several studies have found that the motor rhythms that individuals produce spontaneously, for example during finger tapping, clapping or walking, are also rated perceptually as 'very comfortable' to listen to. This motivated the proposal of the Preferred Period Hypothesis, suggesting that individuals have a characteristic preferred rhythm that generalizes across perception and production. However, some of the experimental procedures used previously raise two methodological concerns: First, in many of these studies, the rhythms used for assessment of participants' Perceptual Preferred Tempo (PPT) were tailored specifically around each participant's personal Spontaneous Motor Tempo (SMT). This may have biased results toward the central rhythm used, artificially increasing the similarity between spontaneous motor and auditory perceptual preferences. Second, a key prediction of the Preferred Period Hypothesis is that the same default rhythms are repeatedly found within-subject. However, measures of consistency are seldom reported, and increased within-subject variability has sometimes been used to exclude participants. The current study was an attempt to replicate reports of a correspondence between motor and perceptual rhythms, and closely followed previous experimental protocols by conducting three tasks: SMT was evaluated by instructing participants to tap 'at their most comfortable rate'; PPT was assessed by asking participants to rate 10 different rhythms according to how 'comfortable' they were; and motor-replication of rhythms was assessed using a Synchronization-Continuation task, over a wide range of rhythms. However, in contrast to previous studies, for all participants we used the same 10 perceptual rhythms in both the PPT and Synchronization-Continuation tasks, irrespective of their SMT. Moreover, we assessed and report measures of within- and between-trial consistency, in order to evaluate whether participants gave similar ratings and produced similar motor rhythms across multiple sessions throughout the experiment. The data presented here fail to show any correlation between motor and perceptual preferences, nor do they support improved synchronization-continuation performance near an individual's so-called SMT or PPT. Rather, they demonstrate substantial within-subject variability in the spontaneous motor rhythms produced across repeated sessions, as well as in participants' subjective ratings of perceived rhythms. This report accompanies our article "Spontaneous and Stimulus-Driven Rhythmic Behaviors in ADHD Adults and Controls" [1], and provides the motivation and insight behind modifying the procedures used for SMT and PPT evaluation, and their interpretation.
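Within-subject consistency of the kind reported here can be summarized very simply, for example as the coefficient of variation of a participant's spontaneous tapping rate across repeated sessions. A minimal sketch under that assumption (the function name and all numbers are hypothetical):

import numpy as np

def smt_consistency(session_rates_hz):
    # Coefficient of variation (SD / mean) across sessions; lower = more consistent SMT
    rates = np.asarray(session_rates_hz, float)
    return float(rates.std(ddof=1) / rates.mean())

print(smt_consistency([1.6, 1.7, 1.6, 1.7]))   # stable personal tempo
print(smt_consistency([1.1, 2.3, 1.6, 3.0]))   # high within-subject variability, no clear default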

14.
Atten Percept Psychophys ; 82(7): 3594-3605, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32676806

ABSTRACT

Managing attention in multispeaker environments is a challenging feat that is critical for human performance. However, why some people are better than others at allocating attention appropriately remains poorly understood. Here, we investigated the contribution of two factors, working memory capacity (WMC) and professional experience, to performance on two different types of attention task: selective attention to one speaker and distributed attention among multiple concurrent speakers. We compared performance across three groups: individuals with low (n = 20) and high (n = 25) WMC, and aircraft pilots (n = 24), whose profession poses extremely high demands for both selective and distributed attention to speech. Results suggest that selective attention is highly effective, with good performance maintained under increasingly adverse conditions, whereas performance decreases substantially with the requirement to distribute attention among a larger number of speakers. Importantly, both types of attention benefit from higher WMC, suggesting reliance on some common capacity-limited resources. However, only selective attention was further improved in the pilots, pointing to its flexible and trainable nature, whereas distributed attention seems to suffer from more fixed and severe processing bottlenecks.


Subject(s)
Memory, Short-Term , Speech Perception , Attention , Humans , Speech
15.
Cereb Cortex ; 30(11): 5792-5805, 2020 10 01.
Article in English | MEDLINE | ID: mdl-32518942

ABSTRACT

Dynamic attending theory suggests that predicting the timing of upcoming sounds can assist in focusing attention toward them. However, whether similar predictive processes are also applied to background noises and assist in guiding attention "away" from potential distractors, remains an open question. Here we address this question by manipulating the temporal predictability of distractor sounds in a dichotic listening selective attention task. We tested the influence of distractors' temporal predictability on performance and on the neural encoding of sounds, by comparing the effects of Rhythmic versus Nonrhythmic distractors. Using magnetoencephalography we found that, indeed, the neural responses to both attended and distractor sounds were affected by distractors' rhythmicity. Baseline activity preceding the onset of Rhythmic distractor sounds was enhanced relative to Nonrhythmic distractor sounds, and the sensory response to them was suppressed. Moreover, detection of nonmasked targets improved when distractors were Rhythmic, an effect accompanied by stronger lateralization of the neural responses to attended sounds to contralateral auditory cortex. These combined behavioral and neural results suggest that not only are temporal predictions formed for task-irrelevant sounds, but that these predictions bear functional significance for promoting selective attention and reducing distractibility.


Subject(s)
Attention/physiology , Auditory Cortex/physiology , Auditory Perception/physiology , Acoustic Stimulation , Adult , Female , Humans , Magnetoencephalography , Male , Periodicity
16.
Neuropsychologia ; 146: 107544, 2020 09.
Article in English | MEDLINE | ID: mdl-32598965

ABSTRACT

Many aspects of human behavior are inherently rhythmic, requiring production of rhythmic motor actions as well as synchronizing to rhythms in the environment. It is well-established that individuals with ADHD exhibit deficits in temporal estimation and timing functions, which may impact their ability to accurately produce and interact with rhythmic stimuli. In the current study we seek to understand the specific aspects of rhythmic behavior that are implicated in ADHD. We specifically ask whether these deficits are attributable to imprecision in the internal generation of rhythms or to reduced acuity in rhythm perception. We also test key predictions of the Preferred Period Hypothesis, which suggests that both perceptual and motor rhythmic behaviors are biased towards a specific personal 'default' tempo. To this end, we tested several aspects of rhythmic behavior and the correspondence between them, including spontaneous motor tempo (SMT), preferred auditory perceptual tempo (PPT) and synchronization-continuation tapping across a broad range of rhythms, from sub-second to supra-second intervals. Moreover, we evaluate the intra-subject consistency of rhythmic preferences, as a means for testing the reality and reliability of personal 'default rhythms'. We used a modified operational definition for assessing SMT and PPT, instructing participants to tap or calibrate the rhythms most comfortable for them to count along with, to avoid subjective interpretations of the task. Our results shed new light on the specific aspects of rhythmic deficits implicated in ADHD adults. We find that individuals with ADHD are primarily challenged in producing and maintaining isochronous self-generated motor rhythms, during both spontaneous and memory-paced tapping. However, they nonetheless exhibit good flexibility for synchronizing to a broad range of external rhythms, suggesting that auditory-motor entrainment for simple rhythms is preserved in ADHD, and that the presence of an external pacer allows overcoming their inherent difficulty in self-generating isochronous motor rhythms. In addition, both groups showed optimal memory-paced tapping for rhythms near their 'counting-based' SMT and PPT, which were slightly faster in the ADHD group. This is in line with the predictions of the Preferred Period Hypothesis, indicating that at least for this well-defined rhythmic behavior (i.e., counting), individuals tend to prefer similar time-scales in both motor production and perceptual evaluation.


Subject(s)
Attention Deficit Disorder with Hyperactivity/physiopathology , Attention Deficit Disorder with Hyperactivity/psychology , Periodicity , Psychomotor Performance , Adult , Case-Control Studies , Female , Humans , Male , Reproducibility of Results , Young Adult
17.
Front Hum Neurosci ; 13: 386, 2019.
Article in English | MEDLINE | ID: mdl-31780911

ABSTRACT

Focusing attention on one speaker against the background of other irrelevant speech can be a challenging feat. A longstanding question in attention research is whether and how frequently individuals shift their attention towards task-irrelevant speech, arguably leading to occasional detection of words in a so-called unattended message. However, this has been difficult to gauge empirically, particularly when participants attend to continuous natural speech, due to the lack of appropriate metrics for detecting shifts in internal attention. Here we introduce a new experimental platform for studying the dynamic deployment of attention among concurrent speakers, utilizing a unique combination of Virtual Reality (VR) and Eye-Tracking technology. We created a Virtual Café in which participants sit across from and attend to the narrative of a target speaker. We manipulated the number and location of distractor speakers by placing additional characters throughout the Virtual Café. By monitoring participants' eye-gaze dynamics, we studied the patterns of overt attention-shifts among concurrent speakers as well as the consequences of these shifts on speech comprehension. Our results reveal important individual differences in the gaze-patterns displayed during selective attention to speech. While some participants stayed fixated on the target speaker throughout the entire experiment, approximately 30% of participants frequently shifted their gaze toward distractor speakers or other locations in the environment, regardless of the severity of audiovisual distraction. Critically, performing frequent gaze-shifts negatively impacted the comprehension of target speech, and participants made more mistakes when looking away from the target speaker. We also found that gaze-shifts occurred primarily during gaps in the acoustic input, suggesting that momentary reductions in acoustic masking prompt attention-shifts between competing speakers, in line with "glimpsing" theories of processing speech in noise. These results open a new window into understanding the dynamics of attention as they wax and wane over time, and the different listening patterns employed for dealing with the influx of sensory input in multisensory environments. Moreover, the novel approach developed here for tracking the locus of momentary attention in a naturalistic virtual-reality environment holds high promise for extending the study of human behavior and cognition and bridging the gap between the laboratory and real life.

18.
Neurosci Biobehav Rev ; 86: 150-165, 2018 03.
Article in English | MEDLINE | ID: mdl-29223770

ABSTRACT

Here we review the role of brain oscillations in sensory processing. We examine the idea that neural entrainment of intrinsic oscillations underlies the processing of rhythmic stimuli in the context of simple isochronous rhythms as well as in music and speech. This has been a topic of growing interest over recent years; however, many issues remain highly controversial: how do fluctuations of intrinsic neural oscillations-both spontaneous and entrained to external stimuli-affect perception, and does this occur automatically or can it be actively controlled by top-down factors? Some of the controversy in the literature stems from confounding use of terminology. Moreover, it is not straightforward how theories and findings regarding isochronous rhythms generalize to more complex, naturalistic stimuli, such as speech and music. Here we aim to clarify terminology, and distinguish between different phenomena that are often lumped together as reflecting "neural entrainment" but may actually vary in their mechanistic underpinnings. Furthermore, we discuss specific caveats and confounds related to making inferences about oscillatory mechanisms from human electrophysiological data.


Subject(s)
Brain/physiology , Periodicity , Humans , Music , Speech/physiology , Terminology as Topic
19.
J Neurosci ; 37(32): 7772-7781, 2017 08 09.
Article in English | MEDLINE | ID: mdl-28626013

ABSTRACT

The extent to which the sleeping brain processes sensory information remains unclear. This is particularly true for continuous and complex stimuli such as speech, in which information is organized into hierarchically embedded structures. Recently, novel metrics for assessing the neural representation of continuous speech have been developed using noninvasive brain recordings that have thus far only been tested during wakefulness. Here we investigated, for the first time, the sleeping brain's capacity to process continuous speech at different hierarchical levels using a newly developed Concurrent Hierarchical Tracking (CHT) approach that allows monitoring the neural representation and processing-depth of continuous speech online. Speech sequences were compiled with syllables, words, phrases, and sentences occurring at fixed time intervals such that different linguistic levels correspond to distinct frequencies. This enabled us to distinguish their neural signatures in brain activity. We compared the neural tracking of intelligible versus unintelligible (scrambled and foreign) speech across states of wakefulness and sleep using high-density EEG in humans. We found that neural tracking of stimulus acoustics was comparable across wakefulness and sleep and similar across all conditions regardless of speech intelligibility. In contrast, neural tracking of higher-order linguistic constructs (words, phrases, and sentences) was only observed for intelligible speech during wakefulness and could not be detected at all during nonrapid eye movement or rapid eye movement sleep. These results suggest that, whereas low-level auditory processing is relatively preserved during sleep, higher-level hierarchical linguistic parsing is severely disrupted, thereby revealing the capacity and limits of language processing during sleep.

SIGNIFICANCE STATEMENT: Despite the persistence of some sensory processing during sleep, it is unclear whether high-level cognitive processes such as speech parsing are also preserved. We used a novel approach for studying the depth of speech processing across wakefulness and sleep while tracking neuronal activity with EEG. We found that responses to the auditory sound stream remained intact; however, the sleeping brain did not show signs of hierarchical parsing of the continuous stream of syllables into words, phrases, and sentences. The results suggest that sleep imposes a functional barrier between basic sensory processing and high-level cognitive processing. This paradigm also holds promise for studying residual cognitive abilities in a wide array of unresponsive states.


Subject(s)
Acoustic Stimulation/methods , Auditory Cortex/physiology , Evoked Potentials, Auditory/physiology , Sleep Stages/physiology , Speech Perception/physiology , Wakefulness/physiology , Adult , Electroencephalography/methods , Female , Humans , Male , Sleep/physiology , Speech Intelligibility/physiology , Young Adult
20.
J Neurosci ; 37(26): 6331-6341, 2017 06 28.
Article in English | MEDLINE | ID: mdl-28559379

ABSTRACT

Most humans have a near-automatic inclination to tap, clap, or move to the beat of music. The capacity to extract a periodic beat from a complex musical segment is remarkable, as it requires abstraction from the temporal structure of the stimulus. It has been suggested that nonlinear interactions in neural networks result in cortical oscillations at the beat frequency, and that such entrained oscillations give rise to the percept of a beat or a pulse. Here we tested this neural resonance theory using MEG recordings as female and male individuals listened to 30 s sequences of complex syncopated drumbeats designed so that they contain no net energy at the pulse frequency when measured using linear analysis. We analyzed the spectrum of the neural activity while listening and compared it to the modulation spectrum of the stimuli. We found enhanced neural response in the auditory cortex at the pulse frequency. We also showed phase locking at the times of the missing pulse, even though the pulse was absent from the stimulus itself. Moreover, the strength of this pulse response correlated with individuals' speed in finding the pulse of these stimuli, as tested in a follow-up session. These findings demonstrate that neural activity at the pulse frequency in the auditory cortex is internally generated rather than stimulus-driven. The current results are consistent both with neural resonance theory and with models based on nonlinear response of the brain to rhythmic stimuli. The results thus help narrow the search for valid models of beat perception.

SIGNIFICANCE STATEMENT: Humans perceive music as having a regular pulse marking equally spaced points in time, within which musical notes are temporally organized. Neural resonance theory (NRT) provides a theoretical model explaining how an internal periodic representation of a pulse may emerge through nonlinear coupling between oscillating neural systems. After testing key falsifiable predictions of NRT using MEG recordings, we demonstrate the emergence of neural oscillations at the pulse frequency, which can be related to pulse perception. These findings rule out alternative explanations for neural entrainment and provide evidence linking neural synchronization to the perception of pulse, a widely debated topic in recent years.
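The central comparison here, stimulus modulation spectrum versus neural spectrum at the pulse frequency, can be illustrated by measuring spectral power at a single target frequency in both signals. A minimal sketch with a toy stimulus envelope that contains no energy at a 2 Hz pulse, while the simulated "neural" signal does (all rates and amplitudes are illustrative, not the study's stimuli):

import numpy as np

def power_at(signal, fs, freq):
    # Spectral power at the frequency bin closest to `freq`
    power = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    return float(power[np.argmin(np.abs(freqs - freq))])

fs, dur, pulse_hz = 100, 30, 2.0
t = np.arange(0, dur, 1 / fs)
envelope = np.sin(2 * np.pi * 3 * t) ** 2                     # toy stimulus: energy at 6 Hz only
neural = envelope + 0.5 * np.sin(2 * np.pi * pulse_hz * t)    # pulse-frequency component added "internally"
print("stimulus power at the pulse:", round(power_at(envelope, fs, pulse_hz), 2))
print("neural power at the pulse:", round(power_at(neural, fs, pulse_hz), 2))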


Subject(s)
Auditory Cortex/physiology , Auditory Perception/physiology , Biological Clocks/physiology , Cortical Synchronization/physiology , Evoked Potentials, Auditory/physiology , Periodicity , Acoustic Stimulation/methods , Action Potentials/physiology , Adult , Cues , Feedback, Physiological , Female , Humans , Male , Models, Neurological , Music