2.
Hear Res ; 410: 108350, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34534892

ABSTRACT

Subtracting the sum of the left and right monaural auditory brainstem responses (ABRs) from the corresponding binaural ABR isolates the binaural interaction component (ABR-BIC). In a previous investigation (Ikeda, 2015), tone-pips elicited a significant amplitude difference between the summed monaural and binaural ABRs during auditory but not visual tasks, whereas with click stimulation this amplitude difference was task-independent. The purpose of this self-critical reanalysis was to establish whether a difference waveform (i.e., the ABR-BIC DN1) reflected an auditory selective attention effect that could be isolated from stimulus factors. Regardless of whether the stimuli were tone-pips or clicks, effect sizes of the DN1 peak amplitudes relative to zero were larger during auditory tasks than during visual tasks. Auditory selective attention effects on the monaural and binaural ABR wave-V amplitudes were specific to tone-pips, so those wave-V effects could not explain the stimulus-general effect of auditory selective attention on DN1 detectability, which was therefore entirely binaural. Multiple mediation analyses further indicated that, independently of auditory selective attention, higher right monaural wave-V amplitudes mediated individual differences in how clicks, relative to tone-pips, augmented DN1 amplitudes. These findings have implications for advancing ABR-BIC measurement.
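The subtraction that defines the ABR-BIC is simple enough to state in code. The sketch below is illustrative only: the array contents, sampling rate, and DN1 search window are assumptions, not values taken from the study.

```python
import numpy as np

def binaural_interaction_component(left_abr, right_abr, binaural_abr):
    """ABR-BIC: the binaural response minus the sum of the two monaural responses."""
    return (np.asarray(binaural_abr, dtype=float)
            - (np.asarray(left_abr, dtype=float) + np.asarray(right_abr, dtype=float)))

# Placeholder data standing in for sweep-averaged ABR waveforms on a shared
# post-stimulus time base (real recordings would be used in practice).
fs = 16_000                                  # sampling rate in Hz (assumed)
t = np.arange(0, 0.012, 1 / fs)              # 0-12 ms post-stimulus
rng = np.random.default_rng(0)
left, right, binaural = (rng.normal(scale=0.1, size=t.size) for _ in range(3))

bic = binaural_interaction_component(left, right, binaural)

# DN1 is commonly taken as the largest negativity of the BIC shortly after wave V;
# the 5-9 ms search window below is an assumption, not a value from the paper.
window = (t >= 0.005) & (t <= 0.009)
dn1_amplitude = bic[window].min()
print(f"DN1 amplitude (placeholder data): {dn1_amplitude:.3f}")
```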


Subject(s)
Evoked Potentials, Auditory, Brain Stem; Acoustic Stimulation; Humans; Individuality
3.
Atten Percept Psychophys ; 82(1): 350-362, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31290133

ABSTRACT

Classically, attentional selectivity has been conceptualized as a passive by-product of capacity limits on stimulus processing. Here, we examine the role of more active cognitive control processes in attentional selectivity, focusing on how distraction from task-irrelevant sound is modulated by the level of task engagement in a visually presented short-term memory task. Task engagement was varied by manipulating the load involved in encoding the (visually presented) to-be-remembered items. Using a list of Navon letters (in which a large letter is composed of smaller letters of a different identity), participants were oriented to attend to and serially recall the list of large letters (low encoding load) or the list of small letters (high encoding load). Attentional capture by a single deviant noise burst within a task-irrelevant tone sequence (the deviation effect) was eliminated under high encoding load (Experiment 1). However, distraction from a continuously changing sequence of tones (the changing-state effect) was immune to the influence of load (Experiment 2). This dissociation in the amenability of the deviation effect and the changing-state effect to cognitive control supports a duplex-mechanism over a unitary-mechanism account of auditory distraction, in which the deviation effect is due to attentional capture whereas the changing-state effect reflects direct interference between the processing of the sound and the processes involved in the focal task. That the changing-state effect survives high encoding load also argues against an alternative explanation of the attenuation of the deviation effect under high load, namely that the load depletes a limited perceptual resource and thereby diminishes auditory processing.


Subject(s)
Attention; Auditory Perception; Perceptual Masking/physiology; Task Performance and Analysis; Visual Perception/physiology; Adolescent; Adult; Female; Humans; Male; Memory, Short-Term; Mental Recall; Noise; Sound; Young Adult
6.
IEEE Trans Neural Syst Rehabil Eng ; 25(11): 2169-2179, 2017 Nov.
Article in English | MEDLINE | ID: mdl-28475062

ABSTRACT

Research into brain-computer interfaces (BCIs) that spell words using brain signals has revealed that a desktop version of such a speller, the edges paradigm, offers several advantages: it outperforms the benchmark row-column paradigm in terms of accuracy, bitrate, and user experience. It remained unknown whether these advantages would persist in a new version of the edges paradigm designed for a mobile device. This paper evaluated, in a rolling wheelchair, a mobile BCI that implemented the edges paradigm on a small display, a setting in which visual crowding tends to occur. How the mobile edges paradigm outperforms the mobile row-column paradigm has implications for understanding how principles of visual neurocognition affect BCI speller use in a mobile context. The investigation revealed that all the advantages of the edges paradigm over the row-column paradigm persisted in this setting. However, unlike in previous work, the reduction in adjacent errors for the edges paradigm was limited to horizontally adjacent errors. The interpretation offered is that the dimensional constraints of visual interface design on a smartphone affected the neurocognitive processes underlying crowding.
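For readers unfamiliar with how "bitrate" is typically quantified in BCI speller studies, one widely used definition is Wolpaw's information transfer rate. The sketch below shows that calculation; the matrix size, accuracy, and selection time are illustrative assumptions, not values taken from the paper, and the paper's own metric may differ.

```python
import math

def wolpaw_bits_per_selection(n_targets: int, accuracy: float) -> float:
    """Wolpaw information transfer in bits per selection for an N-target speller:
    B = log2(N) + P*log2(P) + (1 - P)*log2((1 - P)/(N - 1)), assuming 0 < P <= 1."""
    n, p = n_targets, accuracy
    if p >= 1.0:  # perfect accuracy: the error-entropy terms vanish
        return math.log2(n)
    return math.log2(n) + p * math.log2(p) + (1.0 - p) * math.log2((1.0 - p) / (n - 1))

def bits_per_minute(n_targets: int, accuracy: float, seconds_per_selection: float) -> float:
    """Convert bits per selection into a bitrate given the duration of one selection."""
    return wolpaw_bits_per_selection(n_targets, accuracy) * 60.0 / seconds_per_selection

# Illustrative numbers only: a 36-character matrix, 85% accuracy, 10 s per selection.
print(round(bits_per_minute(36, 0.85, 10.0), 2))
```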


Subject(s)
Brain-Computer Interfaces; Communication Aids for Disabled; Wheelchairs; Adult; Algorithms; Cognition; Event-Related Potentials, P300; Female; Healthy Volunteers; Humans; Male; Photic Stimulation; Psychomotor Performance/physiology; Smartphone; Software; Visual Perception; Young Adult
7.
Front Psychol ; 8: 390, 2017.
Article in English | MEDLINE | ID: mdl-28356906

ABSTRACT

[This corrects the article on p. 548 in vol. 6, PMID: 26052289.]

8.
Front Neurosci ; 10: 136, 2016.
Article in English | MEDLINE | ID: mdl-27242396

ABSTRACT

The rostral brainstem receives both "bottom-up" input from the ascending auditory system and "top-down" descending corticofugal connections. Speech information passing through the inferior colliculus of elderly listeners reflects the periodicity envelope of a speech syllable. This information arguably also reflects a composite of temporal-fine-structure (TFS) information from the higher-frequency vowel harmonics of that repeated syllable. The amplitude of those higher-frequency harmonics, bearing even higher-frequency TFS information, correlates positively with the word recognition ability of elderly listeners under reverberant conditions. Also relevant is that working memory capacity (WMC), which is subject to age-related decline, constrains the processing of sounds at the level of the brainstem. Turning to the effects of a visually presented sensory or memory load on auditory processing, there is a load-dependent reduction of that processing, as manifested in the auditory brainstem response (ABR) evoked by to-be-ignored clicks: wave V decreases in amplitude as the visually presented memory load increases. A visually presented sensory load also produces a load-dependent reduction of a slightly different sort: the sensory load of visually presented information limits the disruptive effects of background sound upon working memory performance. A new early filter model is thus advanced whereby systems within the frontal lobe (affected by sensory or memory load) cholinergically influence top-down corticofugal connections, which in turn constrain the processing of complex sounds such as speech at the level of the brainstem. Selective attention thereby limits the distracting effects of background sound entering the higher auditory system via the inferior colliculus. Processing of TFS in the brainstem relates to the perception of speech under adverse conditions, and attentional selectivity is crucial when the heard signal is degraded or masked, for example, speech in noise or speech in reverberant environments. The assumptions of the new early filter model are consistent with these findings: a subcortical early filter, with a predictive selectivity based on acoustical (linguistic) context and foreknowledge, is under cholinergic top-down control, and a prefrontal capacity limitation constrains this top-down control, which is guided by the cholinergic processing of contextual information in working memory.

9.
Front Psychol ; 6: 548, 2015.
Article in English | MEDLINE | ID: mdl-26052289

ABSTRACT

A dynamic interplay is known to exist between auditory processing and human cognition. For example, prior investigations of speech-in-noise have revealed that there is more to learning than just listening: even if all the words within a spoken list are correctly heard in noise, later memory for those words is typically impoverished. These investigations supported the view that there is a "gap" between the intelligibility of speech and memory for that speech. The notion here is that this gap between speech intelligibility and memorability is a function of the extent to which the spoken message seizes limited immediate memory resources (e.g., Kjellberg et al., 2008). Accordingly, the more difficult the processing of the spoken message, the fewer resources are available for elaboration, storage, and recall of that spoken material. However, it was not previously known how increasing that difficulty affects the memory processing of semantically rich spoken material. This investigation showed that noise impairs higher levels of cognitive analysis. A variant of the Deese-Roediger-McDermott procedure that encourages semantic elaborative processing was deployed. On each trial, participants listened to a 36-item list comprising 12 words for each of 3 different themes, blocked by theme. Each of those 12 words (e.g., bed, tired, snore…) was associated with a "critical" lure theme word that was not presented (e.g., sleep). Word lists were presented either without noise or at an A-weighted signal-to-noise ratio of 5 decibels. Noise reduced false recall of the critical words and decreased the semantic clustering of recall. Theoretical and practical implications are discussed.
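As a rough illustration of how a 5 dB signal-to-noise ratio can be set, the sketch below scales a noise track relative to a speech track using plain RMS levels; it omits the A-weighting the study applied, and all signals here are placeholders rather than the study's materials.

```python
import numpy as np

def rms(x):
    """Root-mean-square level of a signal."""
    return np.sqrt(np.mean(np.square(x)))

def scale_noise_to_snr(speech, noise, target_snr_db):
    """Scale noise so that 20*log10(rms(speech) / rms(scaled_noise)) equals target_snr_db."""
    gain = rms(speech) / (rms(noise) * 10 ** (target_snr_db / 20.0))
    return noise * gain

# Placeholder signals standing in for a spoken word and background noise
# (no A-weighting filter is applied here, unlike in the study).
fs = 16_000
t = np.arange(0, 1.0, 1 / fs)
speech = 0.1 * np.sin(2 * np.pi * 220 * t)              # toy "speech" signal
noise = np.random.default_rng(1).normal(size=t.size)

mix = speech + scale_noise_to_snr(speech, noise, target_snr_db=5.0)
print(f"Achieved SNR: {20 * np.log10(rms(speech) / rms(mix - speech)):.1f} dB")
```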
