1.
J Neurosci ; 44(10)2024 Mar 06.
Article in English | MEDLINE | ID: mdl-38267259

ABSTRACT

Sound texture perception takes advantage of a hierarchy of time-averaged statistical features of acoustic stimuli, but much remains unclear about how these statistical features are processed along the auditory pathway. Here, we compared the neural representation of sound textures in the inferior colliculus (IC) and auditory cortex (AC) of anesthetized female rats. We recorded responses to texture morph stimuli that gradually add statistical features of increasingly higher complexity. For each texture, several different exemplars were synthesized using different random seeds. An analysis of transient and ongoing multiunit responses showed that the IC units were sensitive to every type of statistical feature, albeit to a varying extent. In contrast, only a small proportion of AC units were overtly sensitive to any statistical features. Differences in texture types explained more of the variance of IC neural responses than did differences in exemplars, indicating a degree of "texture type tuning" in the IC, but the same was, perhaps surprisingly, not the case for AC responses. We also evaluated the accuracy of texture type classification from single-trial population activity and found that IC responses became more informative as more summary statistics were included in the texture morphs, while for AC population responses, classification performance remained consistently very low. These results argue against the idea that AC neurons encode sound type via an overt sensitivity in neural firing rate to fine-grained spectral and temporal statistical features.


Subject(s)
Auditory Cortex , Inferior Colliculi , Female , Rats , Animals , Auditory Pathways/physiology , Inferior Colliculi/physiology , Mesencephalon/physiology , Sound , Auditory Cortex/physiology , Acoustic Stimulation/methods , Auditory Perception/physiology
2.
Front Psychol ; 14: 1106562, 2023.
Article in English | MEDLINE | ID: mdl-37705948

ABSTRACT

The unity assumption hypothesis contends that higher-level factors, such as a perceiver's beliefs and prior experience, modulate multisensory integration. The McGurk illusion exemplifies such integration. When a visual velar consonant /ga/ is dubbed with an auditory bilabial /ba/, listeners unify the discrepant signals with the knowledge that open lips cannot produce /ba/, and a fusion percept /da/ is perceived. Previous research claimed to have falsified the unity assumption hypothesis by demonstrating that the McGurk effect occurs even when a face is dubbed with a voice of the opposite sex, thus violating expectations from prior experience. Perhaps, however, stronger counter-evidence than a mere apparent incongruence between unfamiliar faces and voices is needed to prevent perceptual unity. Here we investigated whether the McGurk illusion with male/female incongruent stimuli can be disrupted by familiarization and priming with an appropriate pairing of face and voice. In an online experiment, the susceptibility of participants to the McGurk illusion was tested with stimuli containing either a male or female face paired with a voice of the incongruent gender. The number of times participants experienced a McGurk illusion was measured before and after a familiarization block, which familiarized them with the true pairings of face and voice. After familiarization and priming, susceptibility to the McGurk effect decreased significantly on average. The findings support the notion that unity assumptions modulate intersensory bias, and confirm and extend previous studies using male/female incongruent McGurk stimuli.

3.
Hear Res ; 438: 108857, 2023 10.
Article in English | MEDLINE | ID: mdl-37639922

ABSTRACT

Perception is sensitive to statistical regularities in the environment, including temporal characteristics of sensory inputs. Interestingly, implicit learning of temporal patterns in one modality can also improve their processing in another modality. However, it is unclear how cross-modal learning transfer affects neural responses to sensory stimuli. Here, we recorded neural activity of human volunteers using electroencephalography (EEG), while participants were exposed to brief sequences of randomly timed auditory or visual pulses. Some trials consisted of a repetition of the temporal pattern within the sequence, and subjects were tasked with detecting these trials. Unknown to the participants, some trials reappeared throughout the experiment across both modalities (Transfer) or only within a modality (Control), enabling implicit learning in one modality and its transfer. Using a novel method of analysis of single-trial EEG responses, we showed that learning temporal structures within and across modalities is reflected in neural learning curves. These putative neural correlates of learning transfer were similar both when temporal information learned in audition was transferred to visual stimuli and vice versa. The modality-specific mechanisms for learning of temporal information and general mechanisms which mediate learning transfer across modalities had distinct physiological signatures: temporal learning within modalities relied on modality-specific brain regions while learning transfer affected beta-band activity in frontal regions.


Subject(s)
Auditory Perception , Learning , Humans , Electroencephalography , Frontal Lobe , Healthy Volunteers
4.
J Neurosci ; 43(25): 4697-4708, 2023 06 21.
Article in English | MEDLINE | ID: mdl-37221094

ABSTRACT

Previous work has demonstrated that performance in an auditory selective attention task can be enhanced or impaired, depending on whether a task-irrelevant visual stimulus is temporally coherent with a target auditory stream or with a competing distractor. However, it remains unclear how audiovisual (AV) temporal coherence and auditory selective attention interact at the neurophysiological level. Here, we measured neural activity using EEG while human participants (men and women) performed an auditory selective attention task, detecting deviants in a target audio stream. The amplitude envelope of the two competing auditory streams changed independently, while the radius of a visual disk was manipulated to control the AV coherence. Analysis of the neural responses to the sound envelope demonstrated that auditory responses were enhanced largely independently of the attentional condition: both target and masker stream responses were enhanced when temporally coherent with the visual stimulus. In contrast, attention enhanced the event-related response evoked by the transient deviants, largely independently of AV coherence. These results provide evidence for dissociable neural signatures of bottom-up (coherence) and top-down (attention) effects in AV object formation.

SIGNIFICANCE STATEMENT

Temporal coherence between auditory stimuli and task-irrelevant visual stimuli can enhance behavioral performance in auditory selective attention tasks. However, how audiovisual temporal coherence and attention interact at the neural level has not been established. Here, we measured EEG during a behavioral task designed to independently manipulate audiovisual coherence and auditory selective attention. While some auditory features (sound envelope) could be coherent with visual stimuli, other features (timbre) were independent of visual stimuli.
We find that audiovisual integration can be observed independently of attention for sound envelopes temporally coherent with visual stimuli, while the neural responses to unexpected timbre changes are most strongly modulated by attention. Our results provide evidence for dissociable neural mechanisms of bottom-up (coherence) and top-down (attention) effects on audiovisual object formation.


Subject(s)
Auditory Perception , Evoked Potentials , Male , Humans , Female , Evoked Potentials/physiology , Auditory Perception/physiology , Attention/physiology , Sound , Acoustic Stimulation , Visual Perception/physiology , Photic Stimulation
5.
Sci Rep ; 13(1): 3785, 2023 03 07.
Article in English | MEDLINE | ID: mdl-36882473

ABSTRACT

Spatial hearing remains one of the major challenges for bilateral cochlear implant (biCI) users, and early deaf patients in particular are often completely insensitive to interaural time differences (ITDs) delivered through biCIs. One popular hypothesis is that this may be due to a lack of early binaural experience. However, we have recently shown that neonatally deafened rats fitted with biCIs in adulthood quickly learn to discriminate ITDs as well as their normal hearing littermates, and perform an order of magnitude better than human biCI users. Our unique behaving biCI rat model allows us to investigate other possible limiting factors of prosthetic binaural hearing, such as the effect of stimulus pulse rate and envelope shape. Previous work has indicated that ITD sensitivity may decline substantially at the high pulse rates often used in clinical practice. We therefore measured behavioral ITD thresholds in neonatally deafened, adult implanted biCI rats to pulse trains of 50, 300, 900 and 1800 pulses per second (pps), with either rectangular or Hanning window envelopes. Our rats exhibited very high sensitivity to ITDs at pulse rates up to 900 pps for both envelope shapes, similar to those in common clinical use. However, ITD sensitivity declined to near zero at 1800 pps, for both Hanning and rectangular windowed pulse trains. Current clinical cochlear implant (CI) processors are often set to pulse rates ≥ 900 pps, but ITD sensitivity in human CI listeners has been reported to decline sharply above ~ 300 pps. Our results suggest that the relatively poor ITD sensitivity seen at > 300 pps in human CI users may not reflect the hard upper limit of biCI ITD performance in the mammalian auditory pathway. Perhaps, with training or better CI strategies, good binaural hearing may be achievable at pulse rates high enough to allow good sampling of speech envelopes while delivering usable ITDs.
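The stimulus construction described above can be sketched in a few lines. This is an illustrative Python fragment, not the authors' code; the function `binaural_pulse_train` and its parameters are invented for the example, which builds per-pulse times and amplitudes for each ear, applies the ITD as a delay to one ear, and optionally shapes the pulse amplitudes with a Hanning (raised-cosine) window.

```python
import math

def binaural_pulse_train(n_pulses, pps, itd_s, envelope="rect"):
    """Pulse times (s) and per-pulse amplitudes for left and right ears.
    A positive itd_s delays the right ear. 'hanning' shapes the pulse
    amplitudes with a raised-cosine window; 'rect' leaves them flat."""
    times = [i / pps for i in range(n_pulses)]
    if envelope == "hanning":
        amps = [0.5 - 0.5 * math.cos(2 * math.pi * i / (n_pulses - 1))
                for i in range(n_pulses)]
    else:
        amps = [1.0] * n_pulses
    left = list(zip(times, amps))
    right = [(t + itd_s, a) for t, a in left]
    return left, right

# 9 pulses at 900 pps (10 ms) with a 100 microsecond ITD, Hanning windowed.
left, right = binaural_pulse_train(9, 900, 100e-6, envelope="hanning")
```

The Hanning window tapers the first and last pulses to zero amplitude, which is why such envelopes were hypothesized to weaken onset ITD cues relative to rectangular windows.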


Subject(s)
Cochlear Implantation , Cochlear Implants , Adult , Humans , Animals , Rats , Heart Rate , Tachycardia , Auditory Pathways , Mammals
6.
bioRxiv ; 2023 Jan 20.
Article in English | MEDLINE | ID: mdl-36711896

ABSTRACT

Detecting patterns, and noticing unexpected pattern changes, in the environment is a vital aspect of sensory processing. Adaptation and prediction error responses are two components of neural processing related to these tasks, and previous studies in the auditory system in rodents show that these two components are partially dissociable in terms of the topography and latency of neural responses to sensory deviants. However, many previous studies have focused on repetitions of single stimuli, such as pure tones, which have limited ecological validity. In this study, we tested whether auditory cortical activity shows adaptation to repetition of more complex sound patterns (bisyllabic pairs). Specifically, we compared neural responses to violations of sequences based on single stimulus probability only, against responses to more complex violations based on stimulus order. We employed an auditory oddball paradigm and monitored the auditory cortex (ACtx) activity of awake mice (N=8) using wide-field calcium imaging. We found that cortical responses were sensitive both to single stimulus probabilities and to more global stimulus patterns, as mismatch signals were elicited following both substitution deviants and transposition deviants. Notably, the secondary area A2 exhibited larger mismatch signals to these deviants than primary ACtx (A1), which suggests a hierarchical gradient of prediction error signaling in the auditory cortex. Such a hierarchical gradient was observed for late but not early peaks of calcium transients to deviants, suggesting that the late part of the deviant response may reflect prediction error signaling in response to more complex sensory pattern violations.

7.
Front Psychol ; 13: 1026116, 2022.
Article in English | MEDLINE | ID: mdl-36324794

ABSTRACT

Despite pitch being considered the primary cue for discriminating lexical tones, there are secondary cues such as loudness contour and duration, which may allow some cochlear implant (CI) tone discrimination even with severely degraded pitch cues. To isolate pitch cues from other cues, we developed a new disyllabic word stimulus set (Di) whose primary (pitch) and secondary (loudness) cues varied independently. This Di set consists of 270 disyllabic words, each having a distinct meaning depending on the perceived tone. Thus, listeners who hear the primary pitch cue clearly may hear a different meaning from listeners who struggle with the pitch cue and must rely on the secondary loudness contour. A lexical tone recognition experiment was conducted, which compared Di with a monosyllabic set of natural recordings. Seventeen CI users and eight normal-hearing (NH) listeners took part in the experiment. Results showed that CI users had poorer pitch cue encoding, and their tone recognition performance was significantly influenced by the "missing" or "confusing" secondary cues with the Di corpus. Pitch-contour-based tone recognition is still far from satisfactory for CI users compared to NH listeners, even if some appear to integrate multiple cues to achieve high scores. This disyllabic corpus could be used to examine the pitch recognition performance of CI users and the effectiveness of Mandarin tone enhancement strategies based on pitch cue enhancement. The Di corpus is freely available online: https://github.com/BetterCI/DiTone.

8.
BMC Biol ; 20(1): 48, 2022 02 16.
Article in English | MEDLINE | ID: mdl-35172815

ABSTRACT

BACKGROUND: To localize sound sources accurately in a reverberant environment, human binaural hearing strongly favors analyzing the initial wave front of sounds. Behavioral studies of this "precedence effect" have so far largely been confined to human subjects, limiting the scope of complementary physiological approaches. Similarly, physiological studies have mostly looked at neural responses in the inferior colliculus, the main relay point between the inner ear and the auditory cortex, or used modeling of cochlear auditory transduction in an attempt to identify likely underlying mechanisms. Studies capable of providing a direct comparison of neural coding and behavioral measures of sound localization under the precedence effect are lacking. RESULTS: We adapted a "temporal weighting function" paradigm previously developed to quantify the precedence effect in humans for use in laboratory rats. The animals learned to lateralize click trains in which each click in the train had a different interaural time difference. Computing the "perceptual weight" of each click in the train revealed a strong onset bias, very similar to that reported for humans. Follow-on electrocorticographic recording experiments revealed that onset weighting of interaural time differences is a robust feature of the cortical population response, but interestingly, it often fails to manifest at individual cortical recording sites. CONCLUSION: While previous studies suggested that the precedence effect may be caused by early processing mechanisms in the cochlea or inhibitory circuitry in the brainstem and midbrain, our results indicate that the precedence effect is not fully developed at the level of individual recording sites in the auditory cortex; robust and consistent precedence effects are observable only at the level of cortical population responses.
This indicates that the precedence effect emerges at later cortical processing stages and is a significantly "higher order" feature than has hitherto been assumed.
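The "perceptual weight" idea can be illustrated with a deliberately crude sketch. The study fits a regression model to trial-by-trial responses; here, purely for illustration, each click's weight is just the difference in the proportion of "right" judgments between trials where that click led right versus left. All names and data are invented.

```python
def perceptual_weights(itds_per_trial, responses):
    """Crude temporal weighting function: for each click position k, the
    difference in the proportion of 'right' responses (1 = right) between
    trials where click k led right (ITD > 0) and trials where it led left."""
    n_clicks = len(itds_per_trial[0])
    weights = []
    for k in range(n_clicks):
        right_led = [r for itds, r in zip(itds_per_trial, responses) if itds[k] > 0]
        left_led = [r for itds, r in zip(itds_per_trial, responses) if itds[k] < 0]
        weights.append(sum(right_led) / len(right_led)
                       - sum(left_led) / len(left_led))
    return weights

# Simulated listener with a pure onset bias: judgments follow click 1 only.
itds = [[1, 1], [1, -1], [-1, 1], [-1, -1]]   # ITD sign of each of 2 clicks
responses = [1, 1, 0, 0]                       # 1 = judged 'right'
weights = perceptual_weights(itds, responses)
```

For this toy listener the first click carries all the weight and the second none, which is the qualitative pattern ("strong onset bias") the paradigm is designed to reveal.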


Subject(s)
Auditory Cortex , Inferior Colliculi , Sound Localization , Acoustic Stimulation/methods , Animals , Auditory Cortex/physiology , Hearing , Humans , Inferior Colliculi/physiology , Sound Localization/physiology
9.
Hear Res ; 412: 108357, 2021 12.
Article in English | MEDLINE | ID: mdl-34739889

ABSTRACT

Previous psychophysical studies have identified a hierarchy of time-averaged statistics which determine the identity of natural sound textures. However, it is unclear whether neurons in the inferior colliculus (IC) are sensitive to each of these statistical features in natural sound textures. We used 13 representative sound textures spanning the space of 3 statistics extracted from over 200 natural textures. The synthetic textures were generated by incorporating the statistical features in a step-by-step manner, in which a particular statistical feature was changed while the other statistical features remained unchanged. The extracellular activity in response to the synthetic texture stimuli was recorded in the IC of anesthetized rats. Analysis of the transient and sustained multiunit activity after each transition of statistical feature showed that the IC units were sensitive to changes in all types of statistics, although to a varying extent. For example, we found that more neurons were sensitive to changes in variance than to changes in the modulation correlations. Our results suggest that sensitivity to these statistical features at subcortical levels contributes to the identification and discrimination of natural sound textures.


Subject(s)
Inferior Colliculi , Acoustic Stimulation , Animals , Inferior Colliculi/physiology , Neurons/physiology , Rats , Sound
10.
Hear Res ; 409: 108331, 2021 09 15.
Article in English | MEDLINE | ID: mdl-34416492

ABSTRACT

While a large body of literature has examined the encoding of binaural spatial cues in the auditory midbrain, studies that ask how quantitative measures of spatial tuning in midbrain neurons compare with an animal's psychoacoustic performance remain rare. Researchers have tried to explain deficits in spatial hearing in certain patient groups, such as binaural cochlear implant users, in terms of apparent reductions in the spatial tuning of midbrain neurons in animal models. However, the quality of spatial tuning can be quantified in many different ways, and in the absence of evidence that a given neural tuning measure correlates with psychoacoustic performance, the interpretation of such findings remains very tentative. Here, we characterize ITD tuning in the rat inferior colliculus (IC) to acoustic pulse train stimuli with varying envelopes and at varying rates, and explore whether the quality of tuning correlates with behavioral performance. We quantified both mutual information (MI) and neural d' as measures of ITD sensitivity. Neural d' values paralleled behavioral ones, declining with increasing click rates or when envelopes changed from rectangular to Hanning windows, and they correlated much better with behavioral performance than MI. Meanwhile, MI values were larger in an older, more experienced cohort of animals than in naive animals, but neural d' did not differ between cohorts. However, the results obtained with neural d' and MI were highly correlated when ITD values were coded simply as left or right ear leading, rather than as specific ITD values. Thus, neural measures of lateralization ability (e.g. d' or left/right MI) appear to be highly predictive of psychoacoustic performance in a two-alternative forced choice task.
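As a rough illustration of the neural d' measure (not the authors' analysis code), spike counts collected over repeated presentations of left- versus right-leading ITDs can be compared as a mean difference normalized by the pooled standard deviation. The data below are invented.

```python
import statistics

def neural_d_prime(counts_left, counts_right):
    """d' between two spike-count distributions: difference of means divided
    by the square root of the average of the two sample variances."""
    mu_l = statistics.mean(counts_left)
    mu_r = statistics.mean(counts_right)
    pooled_var = 0.5 * (statistics.variance(counts_left)
                        + statistics.variance(counts_right))
    return (mu_l - mu_r) / pooled_var ** 0.5

# Toy spike counts from repeated trials of each ITD sign.
left = [12, 14, 13, 15, 11]
right = [8, 9, 7, 10, 8]
dp = neural_d_prime(left, right)
```

A |d'| well above 1 means the two ITD signs are easily discriminated from single-trial spike counts, which is the sense in which neural d' can be compared directly with behavioral thresholds from a two-alternative forced choice task.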


Subject(s)
Cochlear Implantation , Cochlear Implants , Inferior Colliculi , Acoustic Stimulation , Animals , Hearing , Rats , Sound Localization
11.
PLoS One ; 16(6): e0238960, 2021.
Article in English | MEDLINE | ID: mdl-34161323

ABSTRACT

Sounds like "running water" and "buzzing bees" are classes of sounds which are a collective result of many similar acoustic events and are known as "sound textures". A recent psychoacoustic study using sound textures has reported that natural sounding textures can be synthesized from white noise by imposing statistical features such as marginals and correlations computed from the outputs of cochlear models responding to the textures; these outputs are the envelopes of bandpass filter responses, the 'cochlear envelopes'. This suggests that the perceptual qualities of many natural sounds derive directly from such statistical features, and raises the question of how these statistical features are distributed in the acoustic environment. To address this question, we collected a corpus of 200 sound textures from public online sources and analyzed the distributions of the textures' marginal statistics (mean, variance, skew, and kurtosis), cross-frequency correlations and modulation power statistics. A principal component analysis of these parameters revealed a great deal of redundancy in the texture parameters. For example, just two marginal principal components, which can be thought of as measuring the sparseness or burstiness of a texture, capture as much as 64% of the variance of the 128 dimensional marginal parameter space, while the first two principal components of cochlear correlations capture as much as 88% of the variance in the 496 correlation parameters. Knowledge of the statistical distributions documented here may help guide the choice of acoustic stimuli with high ecological validity in future research.
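The four marginal statistics named above can be computed per cochlear band roughly as follows. This is an illustrative sketch with invented names, using the conventional population moment definitions; a bursty, sparse envelope shows positive skewness and elevated kurtosis, which is the intuition behind the "sparseness or burstiness" principal components.

```python
def marginal_statistics(envelope):
    """Mean, variance, skewness, and kurtosis (population moments) of one
    cochlear-band envelope: the per-band marginal statistics of a texture."""
    n = len(envelope)
    mean = sum(envelope) / n
    dev = [x - mean for x in envelope]
    var = sum(d * d for d in dev) / n
    sd = var ** 0.5
    skew = sum(d ** 3 for d in dev) / (n * sd ** 3)
    kurt = sum(d ** 4 for d in dev) / (n * var ** 2)
    return mean, var, skew, kurt

# A sparse, bursty envelope: mostly near-silence plus one large burst.
env = [0.1, 0.1, 0.1, 0.1, 2.0]
mean, var, skew, kurt = marginal_statistics(env)
```

Collecting these four numbers for every cochlear band (e.g. 32 bands × 4 moments = 128 values) yields the marginal parameter space whose dimensionality the principal component analysis then reduces.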


Subject(s)
Auditory Perception/physiology , Sound , Acoustic Stimulation/methods , Acoustics , Cochlea/physiology , Databases, Factual , Humans , Models, Statistical , Noise , Principal Component Analysis/methods , Psychoacoustics
12.
iScience ; 24(6): 102527, 2021 Jun 25.
Article in English | MEDLINE | ID: mdl-34142039

ABSTRACT

An interdisciplinary approach to sensory information combination shows a correspondence between perceptual and neural measures of nonlinear multisensory integration. In psychophysics, sensory information combinations are often characterized by the Minkowski formula, but the neural substrates of many psychophysical multisensory interactions are unknown. We show that audiovisual interactions - for both psychophysical detection threshold data and cortical bimodal neurons - obey similar vector-like Minkowski models, suggesting that cortical bimodal neurons could underlie multisensory perceptual sensitivity. An alternative Bayesian model is not a good predictor of cortical bimodal responses. In contrast to cortex, audiovisual data from the superior colliculus resemble the 'City-Block' combination rule used in perceptual similarity metrics. Previous work found that a simple power-law amplification rule is followed for perceptual appearance measures and by cortical subthreshold multisensory neurons. The two most studied neural cell classes in cortical multisensory interactions may provide neural substrates for two important perceptual modes: appearance-based and performance-based perception.
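A minimal sketch of the Minkowski combination rule referred to above, assuming unimodal sensitivities combine with a single exponent m (an assumption for illustration; the paper fits such models to data). The exponent m = 1 gives the 'City-Block' rule (a simple sum), while m = 2 gives a vector-like, Euclidean combination.

```python
def minkowski_combine(d_a, d_v, m):
    """Predicted audiovisual sensitivity from auditory and visual unimodal
    sensitivities under a Minkowski combination rule with exponent m."""
    return (d_a ** m + d_v ** m) ** (1.0 / m)

# m = 1: 'City-Block' (simple sum); m = 2: vector-like combination.
city_block = minkowski_combine(3.0, 4.0, 1)   # 3 + 4 = 7
vector_like = minkowski_combine(3.0, 4.0, 2)  # sqrt(9 + 16) = 5
```

Larger exponents predict progressively less multisensory benefit, so the fitted m distinguishes qualitatively different integration rules across brain areas.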

13.
Front Neurosci ; 15: 610978, 2021.
Article in English | MEDLINE | ID: mdl-33790730

ABSTRACT

Learning of new auditory stimuli often requires repetitive exposure to the stimulus. Fast and implicit learning of sounds presented at random times enables efficient auditory perception. However, it is unclear how such sensory encoding is processed on a neural level. We investigated neural responses that develop from passive, repetitive exposure to a specific sound in the auditory cortex of anesthetized rats, using electrocorticography. We presented a series of random sequences that were generated afresh each time, except for a specific reference sequence that remained constant and re-appeared at random times across trials. We compared induced activity amplitudes between reference and fresh sequences. Neural responses from both primary and non-primary auditory cortical regions showed significantly decreased induced activity amplitudes for reference sequences compared to fresh sequences, especially in the beta band. This is the first study showing that neural correlates of auditory pattern learning can be evoked even in anesthetized, passively listening animal models.

14.
Front Hum Neurosci ; 15: 613903, 2021.
Article in English | MEDLINE | ID: mdl-33597853

ABSTRACT

Mismatch negativity (MMN) is the electroencephalographic (EEG) waveform obtained by subtracting event-related potential (ERP) responses evoked by expected standard stimuli from responses evoked by unexpected deviant stimuli. While the MMN is thought to reflect an unexpected change in an ongoing, predictable stimulus, it is unknown whether MMN responses evoked by changes in different stimulus features have different magnitudes, latencies, and topographies. The present study aimed to investigate whether MMN responses differ depending on whether the sudden stimulus change occurs in pitch, duration, location, or vowel identity. To calculate ERPs to standard and deviant stimuli, EEG signals were recorded in normal-hearing participants (N = 20; 13 males, 7 females) who listened to roving oddball sequences of artificial syllables. In the roving paradigm, any given stimulus is repeated several times to form a standard, and then suddenly replaced with a deviant stimulus which differs from the standard. Here, deviants differed from preceding standards along one of four features (pitch, duration, vowel or interaural level difference). The feature levels were individually chosen to match behavioral discrimination performance. We identified neural activity evoked by unexpected violations along all four acoustic dimensions. Evoked responses to deviant stimuli increased in amplitude relative to the responses to standard stimuli. A univariate (channel-by-channel) analysis yielded no significant differences between MMN responses following violations of different features. However, in a multivariate analysis (pooling information from multiple EEG channels), acoustic features could be decoded from the topography of mismatch responses, although at later latencies than those typical for MMN. These results support the notion that deviant feature detection may be subserved by a different process than general mismatch detection.
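The subtraction that defines the MMN can be sketched as follows. The names and toy data are invented for illustration: `standards` and `deviants` are hypothetical single-trial ERP waveforms (one list of voltage samples per trial), and the MMN is the deviant-minus-standard difference of the trial-averaged responses.

```python
def average_erp(trials):
    """Trial-averaged event-related potential (samplewise mean)."""
    n = len(trials)
    return [sum(samples) / n for samples in zip(*trials)]

def mismatch_negativity(standard_trials, deviant_trials):
    """Difference wave: deviant ERP minus standard ERP, sample by sample."""
    std_erp = average_erp(standard_trials)
    dev_erp = average_erp(deviant_trials)
    return [d - s for d, s in zip(dev_erp, std_erp)]

# Toy data: two standard and two deviant trials, three samples each.
standards = [[0.0, 1.0, 0.5], [0.0, 1.2, 0.7]]
deviants = [[0.0, 0.4, 0.2], [0.0, 0.2, 0.0]]
mmn = mismatch_negativity(standards, deviants)
```

In this toy example the difference wave is zero at the first sample and negative thereafter, mirroring the negative deflection that gives the MMN its name.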

15.
Hear Res ; 399: 107894, 2021 01.
Article in English | MEDLINE | ID: mdl-31987647

ABSTRACT

Predictive coding is an influential theory of neural processing underlying perceptual inference. However, it is unknown to what extent prediction violations of different sensory features are mediated in different regions in auditory cortex, with different dynamics, and by different mechanisms. This study investigates the neural responses to synthesized acoustic syllables, which could be expected or unexpected, along several features. By using electrocorticography (ECoG) in rat auditory cortex (subjects: adult female Wistar rats with normal hearing), we aimed at mapping regional differences in mismatch responses to different stimulus features. Continuous streams of morphed syllables formed roving oddball sequences in which each stimulus was repeated several times (thereby forming a standard) and subsequently replaced with a deviant stimulus which differed from the standard along one of several acoustic features: duration, pitch, interaural level differences (ILD), or consonant identity. Each of these features could assume one of several different levels, and the resulting change from standard to deviant could be larger or smaller. The deviant stimuli were then repeated to form new standards. We analyzed responses to the first repetition of a new stimulus (deviant) and its last repetition in a stimulus train (standard). For the ECoG recording, we implanted urethane-anaesthetized rats with 8 × 8 surface electrode arrays covering a 3 × 3 mm cortical patch encompassing primary and higher-order auditory cortex. We identified the response topographies and latencies of population activity evoked by acoustic stimuli in the rat auditory regions, and mapped their sensitivity to expectation violations along different acoustic features. For all features, the responses to deviant stimuli increased in amplitude relative to responses to standard stimuli. Deviance magnitude did not further modulate these mismatch responses. 
Mismatch responses to different feature violations showed a heterogeneous distribution across cortical areas, with no evidence for systematic topographic gradients for any of the tested features. However, within rats, the spatial distribution of mismatch responses varied more between features than the spatial distribution of tone-evoked responses. This result supports the notion that prediction error signaling along different stimulus features is subserved by different cortical populations, albeit with substantial heterogeneity across individuals.


Subject(s)
Acoustics , Evoked Potentials, Auditory , Acoustic Stimulation , Animals , Auditory Cortex , Electroencephalography , Female , Rats , Rats, Wistar
16.
Curr Res Neurobiol ; 2: 100019, 2021.
Article in English | MEDLINE | ID: mdl-36246502

ABSTRACT

Continuous acoustic streams, such as speech signals, can be chunked into segments containing reoccurring patterns (e.g., words). Noninvasive recordings of neural activity in humans suggest that chunking is underpinned by low-frequency cortical entrainment to the segment presentation rate, and modulated by prior segment experience (e.g., words belonging to a familiar language). Interestingly, previous studies suggest that primates and rodents may also be able to chunk acoustic streams. Here, we test whether neural activity in the rat auditory cortex is modulated by previous segment experience. We recorded subdural responses using electrocorticography (ECoG) from the auditory cortex of 11 anesthetized rats. Prior to recording, four rats were trained to detect familiar triplets of acoustic stimuli (artificial syllables), three were passively exposed to the triplets, while another four rats had no training experience. While low-frequency neural activity peaks were observed at the syllable level, no triplet-rate peaks were observed. Notably, in trained rats (but not in passively exposed and naïve rats), familiar triplets could be decoded more accurately than unfamiliar triplets based on neural activity in the auditory cortex. These results suggest that rats process acoustic sequences, and that their cortical activity is modulated by the training experience even under subsequent anesthesia.

17.
Front Neurosci ; 14: 709, 2020.
Article in English | MEDLINE | ID: mdl-32765212

ABSTRACT

Neural implants that deliver multi-site electrical stimulation to the nervous system are no longer the last resort but routine treatment options for various neurological disorders. Multi-site electrical stimulation is also widely used to study nervous system function and neural circuit transformations. These technologies increasingly demand dynamic electrical stimulation and closed-loop feedback control for real-time assessment of neural function, which is technically challenging since stimulus-evoked artifacts overwhelm the small neural signals of interest. We report a novel and versatile artifact removal method that can be applied in a variety of settings, from single- to multi-site stimulation and recording, and for current waveforms of arbitrary shape and size. The method capitalizes on linear electrical coupling between stimulating currents and recording artifacts, which allows us to estimate a multi-channel linear Wiener filter to predict and subsequently remove artifacts via subtraction. We confirm and verify the linearity assumption and demonstrate feasibility in a variety of recording modalities, including in vitro sciatic nerve stimulation, bilateral cochlear implant stimulation, and multi-channel stimulation and recording between the auditory midbrain and cortex. We demonstrate a vast enhancement in the recording quality, with a typical artifact reduction of 25-40 dB. The method is efficient and can be scaled to an arbitrary number of stimulus and recording sites, making it ideal for applications in large-scale arrays, closed-loop implants, and high-resolution multi-channel brain-machine interfaces.
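The predict-and-subtract principle behind the method can be reduced, for illustration only, to a single channel with a single zero-lag coupling coefficient (the paper estimates a full multi-channel, multi-tap Wiener filter). The least-squares gain from the stimulation current to the recording is estimated, the predicted artifact is subtracted, and what remains approximates the neural signal. All names and data below are invented.

```python
def estimate_coupling_gain(stim, recording):
    """Least-squares estimate of the linear coupling between the stimulation
    current and the recorded artifact (single channel, zero lag, one tap)."""
    num = sum(s * r for s, r in zip(stim, recording))
    den = sum(s * s for s in stim)
    return num / den

def remove_artifact(stim, recording):
    """Predict the artifact from the stimulus waveform and subtract it."""
    g = estimate_coupling_gain(stim, recording)
    return [r - g * s for s, r in zip(stim, recording)]

# Toy example: recording = 2.5 x stimulus artifact + a small neural signal.
stim = [0.0, 1.0, -1.0, 2.0, 0.0]
neural = [0.1, 0.0, 0.05, 0.0, -0.1]
recording = [2.5 * s + n for s, n in zip(stim, neural)]
cleaned = remove_artifact(stim, recording)
```

Because the estimate is least-squares, a small amount of neural signal correlated with the stimulus leaks into the gain estimate; the multi-channel filter in the paper generalizes this same subtraction across many stimulation and recording sites and arbitrary current waveforms.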

18.
Neuropsychologia ; 144: 107498, 2020 07.
Article in English | MEDLINE | ID: mdl-32442445

ABSTRACT

Contemporary schemas of brain organization now include multisensory processes both in low-level cortices as well as at early stages of stimulus processing. Evidence has also accumulated showing that unisensory stimulus processing can result in cross-modal effects. For example, task-irrelevant and lateralised sounds can activate visual cortices; a phenomenon referred to as the auditory-evoked contralateral occipital positivity (ACOP). Some claim this is an example of automatic attentional capture in visual cortices. Other results, however, indicate that context may play a determining role. Here, we investigated whether selective attention to spatial features of sounds is a determining factor in eliciting the ACOP. We recorded high-density auditory evoked potentials (AEPs) while participants selectively attended and discriminated sounds according to four possible stimulus attributes: location, pitch, speaker identity or syllable. Sound acoustics were held constant, and their location was always equiprobable (50% left, 50% right). The only manipulation was the sound dimension to which participants attended. We analysed the AEP data from healthy participants within an electrical neuroimaging framework. The presence of sound-elicited activations of visual cortices depended on the to-be-discriminated, goal-based dimension. The ACOP was elicited only when participants were required to discriminate sound location, but not when they attended to any of the non-spatial features. These results provide a further indication that the ACOP is not automatic. Moreover, our findings showcase the interplay between task-relevance and spatial (un)predictability in determining the presence of the cross-modal activation of visual cortices.


Subject(s)
Attention/physiology , Auditory Perception/physiology , Evoked Potentials, Auditory/physiology , Sound , Visual Cortex/physiology , Acoustic Stimulation , Acoustics , Adult , Attentional Bias , Electroencephalography , Female , Humans , Male , Middle Aged , Young Adult
19.
R Soc Open Sci ; 7(3): 191194, 2020 Mar.
Article in English | MEDLINE | ID: mdl-32269783

ABSTRACT

Previous research has shown that musical beat perception is a surprisingly complex phenomenon involving widespread neural coordination across higher-order sensory, motor and cognitive areas. However, how low-level auditory processing necessarily shapes these dynamics, and therefore perception, is not well understood. Here, we present evidence that the auditory cortical representation of music, even in the absence of motor or top-down activations, already favours the beat that will be perceived. Extracellular firing rates in the rat auditory cortex were recorded in response to 20 musical excerpts diverse in tempo and genre, for which musical beat perception had been characterized by the tapping behaviour of 40 human listeners. We found that firing rates in the rat auditory cortex were on average higher on the beat than off the beat. This 'neural emphasis' distinguished the beat that was perceived from other possible interpretations of the beat, was predictive of the degree of tapping consensus across human listeners, and was accounted for by a spectrotemporal receptive field model. These findings strongly suggest that the 'bottom-up' processing of music performed by the auditory system predisposes the timing and clarity of the perceived musical beat.
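The 'neural emphasis' comparison above amounts to contrasting the firing rate within short windows around beat times against the rate everywhere else. A minimal sketch, assuming a ±50 ms on-beat window and toy spike and beat times (illustrative assumptions, not the authors' data):

```python
import numpy as np

def beat_emphasis(spike_times, beat_times, duration, window=0.05):
    """Mean firing rate within +/-window s of each beat ('on the beat')
    versus the rate everywhere else ('off the beat')."""
    spike_times = np.asarray(spike_times)
    on = np.zeros(spike_times.size, dtype=bool)
    for b in beat_times:
        on |= np.abs(spike_times - b) <= window
    on_time = len(beat_times) * 2 * window
    off_time = duration - on_time
    return on.sum() / on_time, (~on).sum() / off_time

# Toy data: beats every 0.5 s for 10 s; ~10 Hz background spikes plus
# extra spikes clustered around each beat time.
rng = np.random.default_rng(2)
duration = 10.0
beats = np.arange(0.25, duration, 0.5)
background = rng.uniform(0.0, duration, 100)
locked = np.concatenate([b + rng.uniform(-0.03, 0.03, 2) for b in beats])
spikes = np.sort(np.concatenate([background, locked]))

rate_on, rate_off = beat_emphasis(spikes, beats, duration)
```

For a unit whose firing carries such emphasis, rate_on exceeds rate_off; comparing this difference across candidate beat interpretations is one simple way to ask which interpretation the neural response favours.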

20.
J Neurophysiol ; 123(4): 1536-1551, 2020 04 01.
Article in English | MEDLINE | ID: mdl-32186432

ABSTRACT

Contrast gain control is the systematic adjustment of neuronal gain in response to the contrast of sensory input. It is widely observed in sensory cortical areas and has been proposed to be a canonical neuronal computation. Here, we investigated whether shunting inhibition from parvalbumin-positive interneurons, a mechanism involved in gain control in visual cortex, also underlies contrast gain control in auditory cortex. First, we performed extracellular recordings in the auditory cortex of anesthetized male mice and optogenetically manipulated the activity of parvalbumin-positive interneurons while varying the contrast of the sensory input. We found that both activation and suppression of parvalbumin interneuron activity altered the overall gain of cortical neurons. However, despite these changes in overall gain, we found that manipulating parvalbumin interneuron activity did not alter the strength of contrast gain control in auditory cortex. Furthermore, parvalbumin-positive interneurons did not show increases in activity in response to high-contrast stimulation, which would be expected if they drive contrast gain control. Finally, we performed in vivo whole-cell recordings in auditory cortical neurons during high- and low-contrast stimulation and observed no increase in membrane conductance during high-contrast stimulation. Taken together, these findings indicate that while parvalbumin-positive interneuron activity modulates the overall gain of auditory cortical responses, other mechanisms are primarily responsible for contrast gain control in this cortical area.

NEW & NOTEWORTHY We investigated whether contrast gain control is mediated by shunting inhibition from parvalbumin-positive interneurons in auditory cortex. We performed extracellular and intracellular recordings in mouse auditory cortex while presenting sensory stimuli with varying contrasts and manipulated parvalbumin-positive interneuron activity using optogenetics. We show that while parvalbumin-positive interneuron activity modulates the gain of cortical responses, this activity is not the primary mechanism for contrast gain control in auditory cortex.
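Contrast gain control, as defined in the abstract's opening sentence, is typically quantified as a change in the slope (gain) of a neuron's rate-vs-level function across contrast conditions. A toy sketch of a divisively normalised model neuron; the divisive form and its parameters are illustrative assumptions, not the paper's data.

```python
import numpy as np

def response_gain(levels, rates):
    """Gain = slope of a linear fit to the rate-vs-level function."""
    slope, _intercept = np.polyfit(levels, rates, 1)
    return slope

# Toy divisively normalised neuron: responses are scaled down by a
# factor that grows with stimulus contrast.
levels = np.linspace(0.0, 1.0, 50)        # normalised sound level
rate_low = 40.0 * levels / (1.0 + 0.5)    # low-contrast condition
rate_high = 40.0 * levels / (1.0 + 2.0)   # high-contrast condition

gain_low = response_gain(levels, rate_low)
gain_high = response_gain(levels, rate_high)
```

In this model the gain drops at high contrast (the signature of contrast gain control); the study's question was whether parvalbumin-interneuron shunting supplies the divisive factor, and its answer was that it does not appear to.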


Subject(s)
Auditory Cortex/physiology , Interneurons/physiology , Neural Inhibition/physiology , Parvalbumins , Animals , Male , Mice , Optogenetics , Parvalbumins/metabolism , Patch-Clamp Techniques