Results 1 - 16 of 16
1.
Open Mind (Camb) ; 8: 333-365, 2024.
Article in English | MEDLINE | ID: mdl-38571530

ABSTRACT

Theories of auditory and visual scene analysis suggest the perception of scenes relies on the identification and segregation of objects within them, resembling a detail-oriented processing style. However, a more global process may occur while analyzing scenes, which has been evidenced in the visual domain. To our knowledge, a similar line of research has not been explored in the auditory domain; therefore, we evaluated the contributions of high-level global and low-level acoustic information to auditory scene perception. An additional aim was to increase the field's ecological validity by using and making available a new collection of high-quality auditory scenes. Participants rated scenes on 8 global properties (e.g., open vs. enclosed) and an acoustic analysis evaluated which low-level features predicted the ratings. We submitted the acoustic measures and average ratings of the global properties to separate exploratory factor analyses (EFAs). The EFA of the acoustic measures revealed a seven-factor structure explaining 57% of the variance in the data, while the EFA of the global property measures revealed a two-factor structure explaining 64% of the variance in the data. Regression analyses revealed that each global property was predicted by at least one acoustic variable (R2 = 0.33-0.87). These findings were extended using deep neural network models, in which we examined correlations between human ratings of global properties and deep embeddings of two computational models: an object-based model and a scene-based model. The results indicate that participants' ratings are more strongly explained by a global analysis of the scene setting, though the relationship between scene perception and auditory perception is multifaceted, with differing correlation patterns evident between the two models. Taken together, our results provide evidence for the ability to perceive auditory scenes from a global perspective. Some of the acoustic measures predicted ratings of global scene perception, suggesting that representations of auditory objects may be transformed through many stages of processing in the ventral auditory stream, similar to what has been proposed in the ventral visual stream. These findings and the open availability of our scene collection will make future studies on perception, attention, and memory for natural auditory scenes possible.
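As a rough illustration of the analysis pipeline described in this abstract, the sketch below runs an exploratory factor analysis over a table of acoustic measures and then regresses each global-property rating onto those measures. The file names, column layout, and the use of the factor_analyzer and scikit-learn packages are assumptions made for illustration, not the authors' materials or code.

```python
# Minimal sketch, assuming hypothetical per-scene tables of acoustic measures
# and mean global-property ratings.
import pandas as pd
from factor_analyzer import FactorAnalyzer
from sklearn.linear_model import LinearRegression

acoustic = pd.read_csv("acoustic_measures.csv")        # low-level features (placeholder file)
ratings = pd.read_csv("global_property_ratings.csv")   # mean ratings for 8 properties (placeholder)

# EFA of the acoustic measures (the abstract reports a seven-factor structure).
efa = FactorAnalyzer(n_factors=7, rotation="varimax")
efa.fit(acoustic.values)
cumulative = efa.get_factor_variance()[2][-1]          # cumulative variance explained
print(f"Acoustic EFA explains {cumulative:.0%} of the variance")

# Regress each global property onto the acoustic variables and report R^2.
for prop in ratings.columns:
    model = LinearRegression().fit(acoustic.values, ratings[prop].values)
    r2 = model.score(acoustic.values, ratings[prop].values)
    print(f"{prop}: R^2 = {r2:.2f}")
```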

2.
Front Psychol ; 14: 1183303, 2023.
Article in English | MEDLINE | ID: mdl-37448716

ABSTRACT

Head position at any point in time plays a fundamental role in shaping the auditory information that reaches a listener, information that continuously changes as the head moves and reorients to different listening situations. The connection between hearing science and the kinesthetics of head movement has gained interest due to technological advances that have made it more feasible to provide behavioral and biological feedback to assistive listening devices capable of interpreting movement patterns that reflect listening intent. Increasing evidence also shows that the negative impact of hearing deficits on mobility, gait, and balance may be mitigated by prosthetic hearing device intervention. A better understanding of the relationships between head movement, full-body kinetics, and hearing health should lead to improved signal processing strategies across a range of assistive and augmented hearing devices. The purpose of this review is to introduce the wider hearing community to the kinesiology of head movement and to place it in the context of hearing and communication, with the goal of expanding the field of ecologically specific listener behavior.

3.
Neurosci Conscious ; 2023(1): niac019, 2023.
Article in English | MEDLINE | ID: mdl-36751309

ABSTRACT

Current theories of perception emphasize the role of neural adaptation, inhibitory competition, and noise as key components that lead to switches in perception. Supporting evidence comes from neurophysiological findings of specific neural signatures in modality-specific and supramodal brain areas that appear to be critical to switches in perception. We used functional magnetic resonance imaging to study brain activity around the time of switches in perception while participants listened to a bistable auditory stream segregation stimulus, which can be heard as one integrated stream of tones or two segregated streams of tones. The auditory thalamus showed more activity around the time of a switch from segregated to integrated compared with periods of stable integrated perception; in contrast, the rostral anterior cingulate cortex and the inferior parietal lobule showed more activity around the time of a switch from integrated to segregated compared with periods of stable segregated perception, consistent with prior findings of asymmetries in brain activity depending on the switch direction. In sound-responsive areas in the auditory cortex, neural activity increased in strength preceding switches in perception and declined in strength over time following switches in perception. Such dynamics in the auditory cortex are consistent with the role of adaptation proposed by computational models of visual and auditory bistable switching, whereby the strength of neural activity decreases following a switch in perception, which eventually destabilizes the current percept enough to lead to a switch to an alternative percept.

5.
JASA Express Lett ; 2(12): 124402, 2022 12.
Article in English | MEDLINE | ID: mdl-36586966

ABSTRACT

The classic spatial release from masking (SRM) task measures speech recognition thresholds for discrete separation angles between a target and masker. In contrast, this study used a modified SRM task that adaptively measured the spatial-separation angle needed between a continuous male target stream (speech with digits) and two female masker streams to achieve a specific SRM. On average, 20 young normal-hearing listeners needed less spatial separation for a 6 dB release than for a 9 dB release, and the presence of background babble reduced across-listener variability on the paradigm. Future work is needed to better understand the psychometric properties of this adaptive procedure.
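The abstract describes adaptively measuring the separation angle needed to reach a criterion SRM. The sketch below shows one generic way such an adaptive track could work; the step rule, starting angle, and simulated listener are hypothetical placeholders, not the procedure used in the study.

```python
# Illustrative adaptive track: shrink the target-masker separation after a
# correct response, widen it after an error, and estimate the separation
# threshold from the final reversals. All parameters are assumptions.
import random

def trial_correct(separation_deg: float, needed_deg: float) -> bool:
    """Stand-in for a listener: more likely correct with larger separation."""
    p = min(0.95, 0.5 + 0.4 * (separation_deg / max(needed_deg, 1e-6)))
    return random.random() < p

def adaptive_separation(needed_deg=20.0, start_deg=60.0, step_deg=4.0, n_trials=60):
    sep, reversals, last_direction = start_deg, [], None
    for _ in range(n_trials):
        direction = -1 if trial_correct(sep, needed_deg) else +1
        sep = max(0.0, sep + direction * step_deg)
        if last_direction is not None and direction != last_direction:
            reversals.append(sep)                     # track reversal points
        last_direction = direction
    tail = reversals[-6:]                             # average the last few reversals
    return sum(tail) / max(len(tail), 1)

print(f"Estimated separation threshold: {adaptive_separation():.1f} deg")
```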


Subject(s)
Perceptual Masking , Speech Perception , Male , Humans , Female , Speech , Auditory Threshold , Speech Reception Threshold Test
6.
J Neurophysiol ; 127(3): 660-672, 2022 03 01.
Article in English | MEDLINE | ID: mdl-35108112

ABSTRACT

Correlated sounds presented to the two ears are perceived as compact and centrally lateralized, whereas decorrelation between the ears leads to intracranial image widening. Though most listeners have fine resolution for perceptual changes in interaural correlation (IAC), some investigators have reported large variability in IAC thresholds, and some normal-hearing listeners even exhibit seemingly debilitating IAC thresholds. It is unknown whether this variability across individuals and these outlier cases are a product of task difficulty, poor training, or a neural deficit in the binaural auditory system. The purpose of this study was first to identify listeners with normal and abnormal IAC resolution, second to evaluate the neural responses elicited by IAC changes, and third to use a well-established model of binaural processing to determine a potential explanation for the observed individual variability. Nineteen subjects were enrolled in the study, eight of whom were identified as poor performers in the IAC-threshold task. Global scalp responses (N1 and P2 amplitudes of an auditory change complex) in the individuals with poor IAC behavioral thresholds were significantly smaller than those of listeners with better IAC resolution. Source-localized evoked responses confirmed this group effect in multiple subdivisions of the auditory cortex, including Heschl's gyrus, planum temporale, and the temporal sulcus. In combination with binaural modeling results, this study provides objective electrophysiological evidence of a binaural processing deficit, linked to internal noise, that corresponds to very poor IAC thresholds in listeners who otherwise have normal audiometric profiles and lack spatial hearing complaints.

NEW & NOTEWORTHY: Group differences in the perception of interaural correlation (IAC) were observed in human adults with normal audiometric sensitivity. These differences were reflected in cortical evoked activity measured via electroencephalography (EEG). For some participants, weak representation of the binaural cue in preattentive N1-P2 cortical responses may be indicative of a processing deficit. Such a deficit may be related to a poorly understood condition known as hidden hearing loss.
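For readers unfamiliar with interaural correlation, the sketch below shows one standard way to generate binaural noise with a target IAC (mixing a common and an independent noise) and to verify the resulting correlation. The values are illustrative and are not the study's stimulus parameters.

```python
# Sketch: left/right noise pair with a specified expected interaural correlation.
import numpy as np

def binaural_noise(iac: float, n_samples: int = 44100, rng=None):
    """Return a (left, right) noise pair whose expected correlation is `iac`."""
    rng = rng or np.random.default_rng(0)
    common = rng.standard_normal(n_samples)
    independent = rng.standard_normal(n_samples)
    left = common
    right = iac * common + np.sqrt(1.0 - iac**2) * independent
    return left, right

left, right = binaural_noise(iac=0.8)
measured = np.corrcoef(left, right)[0, 1]
print(f"Measured IAC ~ {measured:.2f}")
```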


Subject(s)
Auditory Cortex , Deafness , Hearing Loss , Acoustic Stimulation , Adult , Auditory Cortex/physiology , Auditory Perception/physiology , Auditory Threshold , Evoked Potentials, Auditory/physiology , Hearing Tests , Humans , Noise
7.
Front Psychol ; 12: 720131, 2021.
Article in English | MEDLINE | ID: mdl-34621219

ABSTRACT

In the presence of a continually changing sensory environment, maintaining stable but flexible awareness is paramount and requires continual organization of information. Determining which stimulus features belong together and which are separate is therefore one of the primary tasks of the sensory systems. It is unknown whether a global or a sensory-specific mechanism regulates the final perceptual outcome of this streaming process. To test the extent of modality independence in perceptual control, an auditory streaming experiment and a visual moving-plaid experiment were performed. Both were designed to evoke alternating perception of an integrated or a segregated percept. In both experiments, transient auditory and visual distractor stimuli were presented in separate blocks, such that the distractors did not overlap in frequency or space with the streaming or plaid stimuli, respectively, thus preventing peripheral interference. When a distractor was presented in the opposite modality from the bistable stimulus (visual distractors during auditory streaming or auditory distractors during visual streaming), the probability of percept switching was not significantly different from when no distractor was presented. Conversely, significant differences in switch probability were observed following within-modality distractors, but only when the pre-distractor percept was segregated. Given the modality specificity of the distractor-induced resetting, the results suggest that conscious perception is at least partially controlled by modality-specific processing. The fact that the distractors did not have peripheral overlap with the bistable stimuli indicates that the perceptual reset is due to interference at a locus in which stimuli of different frequencies and spatial locations are integrated.

8.
Proc Natl Acad Sci U S A ; 114(36): E7602-E7611, 2017 09 05.
Article in English | MEDLINE | ID: mdl-28827357

ABSTRACT

Few auditory functions are as important or as universal as the capacity for auditory spatial awareness (e.g., sound localization). That ability relies on sensitivity to acoustical cues, particularly interaural time and level differences (ITD and ILD), that correlate with sound-source locations. Under nonspatial listening conditions, cortical sensitivity to ITD and ILD takes the form of broad contralaterally dominated response functions. It is unknown, however, whether that sensitivity reflects representations of the specific physical cues or a higher-order representation of auditory space (i.e., integrated cue processing), nor is it known whether responses to spatial cues are modulated by active spatial listening. To investigate, sensitivity to parametrically varied ITD or ILD cues was measured using fMRI during spatial and nonspatial listening tasks. Task type varied across blocks where targets were presented in one of three dimensions: auditory location, pitch, or visual brightness. Task effects were localized primarily to lateral posterior superior temporal gyrus (pSTG) and modulated binaural-cue response functions differently in the two hemispheres. Active spatial listening (location tasks) enhanced both contralateral and ipsilateral responses in the right hemisphere but maintained or enhanced contralateral dominance in the left hemisphere. Two observations suggest integrated processing of ITD and ILD. First, overlapping regions in medial pSTG exhibited significant sensitivity to both cues. Second, successful classification of multivoxel patterns was observed for both cue types and, critically, for cross-cue classification. Together, these results suggest a higher-order representation of auditory space in the human auditory cortex that at least partly integrates the specific underlying cues.
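The cross-cue classification result can be illustrated with a minimal multivoxel-pattern sketch: train a classifier on patterns evoked by one cue (ITD-defined left vs. right) and test it on patterns evoked by the other cue (ILD). The arrays below are random placeholders standing in for fMRI data, and scikit-learn is an assumed tool, not the authors' analysis code.

```python
# Illustrative cross-cue decoding: fit on ITD patterns, evaluate on ILD patterns.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n_trials, n_voxels = 80, 200
X_itd = rng.standard_normal((n_trials, n_voxels))   # placeholder patterns from ITD runs
X_ild = rng.standard_normal((n_trials, n_voxels))   # placeholder patterns from ILD runs
y_itd = rng.integers(0, 2, n_trials)                # 0 = left, 1 = right
y_ild = rng.integers(0, 2, n_trials)

clf = LinearSVC().fit(X_itd, y_itd)                 # train on one cue
cross_cue_acc = accuracy_score(y_ild, clf.predict(X_ild))  # test on the other cue
print(f"Cross-cue (ITD -> ILD) accuracy: {cross_cue_acc:.2f}")
```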


Subject(s)
Auditory Cortex/physiology , Auditory Perception/physiology , Hearing/physiology , Sound Localization/physiology , Acoustic Stimulation/methods , Adult , Cues , Evoked Potentials, Auditory/physiology , Female , Humans , Male , Sound , Young Adult
9.
Front Syst Neurosci ; 11: 35, 2017.
Article in English | MEDLINE | ID: mdl-28588457

ABSTRACT

Human listeners place greater weight on the beginning of a sound compared to the middle or end when determining sound location, creating an auditory illusion known as the Franssen effect. Here, we exploited that effect to test whether human auditory cortex (AC) represents the physical vs. perceived spatial features of a sound. We used functional magnetic resonance imaging (fMRI) to measure AC responses to sounds that varied in perceived location due to interaural level differences (ILD) applied to sound onsets or to the full sound duration. Analysis of hemodynamic responses in AC revealed sensitivity to ILD in both full-cue (veridical) and onset-only (illusory) lateralized stimuli. Classification analysis revealed regional differences in the sensitivity to onset-only ILDs, where better classification was observed in posterior compared to primary AC. That is, restricting the ILD to sound onset (which alters the physical but not the perceptual nature of the spatial cue) did not eliminate cortical sensitivity to that cue. These results suggest that perceptual representations of auditory space emerge or are refined in higher-order AC regions, supporting the stable perception of auditory space in noisy or reverberant environments and forming the basis of illusions such as the Franssen effect.
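A minimal sketch of the onset-only versus full-cue ILD manipulation is given below; the tone frequency, durations, and ILD value are illustrative assumptions rather than the study's parameters.

```python
# Sketch: apply an ILD either over the full tone or only to its onset segment.
import numpy as np

fs = 44100
t = np.arange(int(0.5 * fs)) / fs
tone = np.sin(2 * np.pi * 500 * t)            # 500 Hz tone, 500 ms (illustrative)

def apply_ild(signal, ild_db, onset_only=False, onset_ms=50):
    gain = 10 ** (ild_db / 20)
    left, right = signal.copy(), signal.copy()
    if onset_only:
        n = int(onset_ms / 1000 * fs)
        left[:n] *= gain                      # ILD restricted to the onset
    else:
        left *= gain                          # ILD over the full duration
    return np.stack([left, right])

full_cue = apply_ild(tone, ild_db=10)
onset_cue = apply_ild(tone, ild_db=10, onset_only=True)
print(full_cue.shape, onset_cue.shape)        # (2, n_samples) stereo pairs
```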

10.
J Assoc Res Otolaryngol ; 17(1): 37-53, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26466943

ABSTRACT

Interaural level and time differences (ILD and ITD), the primary binaural cues for sound localization in azimuth, are known to modulate the tuned responses of neurons in mammalian auditory cortex (AC). The majority of these neurons respond best to cue values that favor the contralateral ear, such that contralateral bias is evident in the overall population response and thereby expected in population-level functional imaging data. Human neuroimaging studies, however, have not consistently found contralaterally biased binaural response patterns. Here, we used functional magnetic resonance imaging (fMRI) to parametrically measure ILD and ITD tuning in human AC. For ILD, contralateral tuning was observed, using both univariate and multivoxel analyses, in posterior superior temporal gyrus (pSTG) in both hemispheres. Response-ILD functions were U-shaped, revealing responsiveness to both contralateral and, to a lesser degree, ipsilateral ILD values, consistent with rate coding by unequal populations of contralaterally and ipsilaterally tuned neurons. In contrast, for ITD, univariate analyses showed modest contralateral tuning only in left pSTG, characterized by a monotonic response-ITD function. A multivoxel classifier, however, revealed ITD coding in both hemispheres. Although sensitivity to ILD and ITD was distributed in similar AC regions, the differently shaped response functions and different response patterns across hemispheres suggest that basic ILD and ITD processes are not fully integrated in human AC. The results support opponent-channel theories of ILD but not necessarily ITD coding, the latter of which may involve multiple types of representation that differ across hemispheres.
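The U-shaped response-ILD functions and the opponent-channel interpretation can be illustrated with a toy model in which a region's response is the weighted sum of a contralaterally tuned channel and a smaller ipsilaterally tuned channel. The channel shapes and weights below are arbitrary illustrative choices, not fitted to the study's data.

```python
# Toy opponent-channel sketch: two sigmoidal channels with unequal weights
# produce a U-shaped, contralaterally biased response-ILD function.
import numpy as np

ild = np.linspace(-30, 30, 61)                 # dB; positive = contralateral (assumed convention)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

contra = sigmoid(0.3 * (ild - 10))             # channel driven by contralateral ILDs
ipsi = sigmoid(0.3 * (-ild - 10))              # channel driven by ipsilateral ILDs
response = 1.0 * contra + 0.4 * ipsi           # unequal population weights

trough = ild[np.argmin(response)]
print(f"U-shaped response-ILD function with its trough near {trough:.0f} dB")
```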


Subject(s)
Auditory Cortex/physiology , Cues , Adolescent , Adult , Female , Humans , Magnetic Resonance Imaging , Male , Sound Localization/physiology
11.
Neuroimage ; 120: 456-66, 2015 Oct 15.
Article in English | MEDLINE | ID: mdl-26163805

ABSTRACT

Whole-brain functional magnetic resonance imaging was used to measure blood-oxygenation-level-dependent (BOLD) responses in human auditory cortex (AC) to sounds with intensity varying independently in the left and right ears. Echo-planar images were acquired at 3 Tesla with sparse image acquisition once per 12-second block of sound stimulation. Combinations of binaural intensity and stimulus presentation rate were varied between blocks, and selected to allow measurement of response-intensity functions in three configurations: monaural 55-85 dB SPL, binaural 55-85 dB SPL with intensity equal in both ears, and binaural with an average binaural level of 70 dB SPL and interaural level differences (ILD) ranging over ±30 dB (i.e., favoring the left or right ear). Comparison of response functions equated for contralateral intensity revealed that BOLD-response magnitudes (1) generally increased with contralateral intensity, consistent with positive drive of the BOLD response by the contralateral ear, (2) were larger for contralateral monaural stimulation than for binaural stimulation, consistent with negative effects (e.g., inhibition) of ipsilateral input, which were strongest in the left hemisphere, and (3) also increased with ipsilateral intensity when contralateral input was weak, consistent with additional, positive, effects of ipsilateral stimulation. Hemispheric asymmetries in the spatial extent and overall magnitude of BOLD responses were generally consistent with previous studies demonstrating greater bilaterality of responses in the right hemisphere and stricter contralaterality in the left hemisphere. Finally, comparison of responses to fast (40/s) and slow (5/s) stimulus presentation rates revealed significant rate-dependent adaptation of the BOLD response that varied across ILD values.
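The binaural configuration described above (a fixed average binaural level with ILD varied around it) implies a simple mapping from average binaural level (ABL) and ILD to per-ear levels, sketched below. The sign convention (positive ILD favoring the right ear) is an assumption for illustration.

```python
# Sketch: derive per-ear levels from an average binaural level and an ILD.
def ear_levels(abl_db: float, ild_db: float):
    """Return (left, right) levels in dB SPL for a given ABL and ILD."""
    left = abl_db - ild_db / 2.0
    right = abl_db + ild_db / 2.0
    return left, right

# Example configuration from the abstract: ABL fixed at 70 dB SPL, ILD swept over +/-30 dB.
for ild in range(-30, 31, 10):
    l, r = ear_levels(70.0, ild)
    print(f"ILD {ild:+3d} dB -> left {l:.0f} dB SPL, right {r:.0f} dB SPL")
```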


Subject(s)
Auditory Cortex/physiology , Auditory Perception/physiology , Functional Laterality/physiology , Functional Neuroimaging/methods , Acoustic Stimulation , Adolescent , Adult , Echo-Planar Imaging , Female , Humans , Male , Middle Aged , Young Adult
12.
J Neurophysiol ; 112(6): 1566-83, 2014 Sep 15.
Article in English | MEDLINE | ID: mdl-24920021

ABSTRACT

Our understanding of the large-scale population dynamics of neural activity is limited, in part, by our inability to record simultaneously from large regions of the cortex. Here, we validated the use of a large-scale active microelectrode array that simultaneously records 196 multiplexed micro-electrocorticographic (µECoG) signals from the cortical surface at a very high density (1,600 electrodes/cm²). We compared µECoG measurements in auditory cortex using a custom "active" electrode array to those recorded using a conventional "passive" µECoG array. Responses from both arrays were also compared with data recorded via intrinsic optical imaging, a standard methodology for recording sound-evoked cortical activity. Custom active µECoG arrays generated more veridical representations of the tonotopic organization of the auditory cortex than current commercially available passive µECoG arrays. Furthermore, the cortical representation could be measured efficiently with the active arrays, requiring as little as 13.5 s of neural data acquisition. Next, we generated spectrotemporal receptive fields from the neural activity recorded on the active µECoG array and identified functional organizational principles comparable to those observed using intrinsic metabolic imaging and single-neuron recordings. This new electrode array technology has the potential for large-scale, temporally precise monitoring and mapping of the cortex without the use of invasive penetrating electrodes.
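As a rough sketch of how spectrotemporal receptive fields (STRFs) can be estimated from a recorded channel, the example below regresses a placeholder response onto time-lagged frames of a stimulus spectrogram using ridge regression. This is a generic STRF recipe under assumed data shapes, not the authors' method, and the data are random stand-ins.

```python
# Generic STRF estimation sketch: ridge regression of one channel's response
# onto time-lagged spectrogram frames. Data here are random placeholders.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
n_time, n_freq, n_lags = 2000, 32, 20
spectrogram = rng.standard_normal((n_time, n_freq))   # stimulus (time x frequency)
response = rng.standard_normal(n_time)                # one recorded channel

# Design matrix of time-lagged spectrogram frames.
X = np.zeros((n_time, n_freq * n_lags))
for lag in range(n_lags):
    X[lag:, lag * n_freq:(lag + 1) * n_freq] = spectrogram[: n_time - lag]

strf = Ridge(alpha=1.0).fit(X, response).coef_.reshape(n_lags, n_freq)
print("STRF shape (lags x frequencies):", strf.shape)
```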


Subject(s)
Auditory Cortex/physiology , Brain Mapping/instrumentation , Electroencephalography/instrumentation , Animals , Brain Mapping/methods , Electroencephalography/methods , Evoked Potentials, Auditory , Male , Microelectrodes , Optical Imaging/methods , Rats
13.
J Neurosci ; 32(45): 15759-68, 2012 Nov 07.
Article in English | MEDLINE | ID: mdl-23136415

ABSTRACT

A conserved feature of sound processing across species is the presence of multiple auditory cortical fields with topographically organized responses to sound frequency. Current organizational schemes propose that the ventral division of the medial geniculate body (MGBv) is a single functionally homogeneous structure that provides the primary source of input to all neighboring frequency-organized cortical fields. These schemes fail to account for the contribution of the MGBv to functional diversity between frequency-organized cortical fields. Here, we report response property differences for two auditory fields in the rat and find that they have nonoverlapping sources of thalamic input from the MGBv that are distinguished by gene expression for the type 1 vesicular glutamate transporter. These data challenge widely accepted organizational schemes and demonstrate a genetic plurality in the ascending glutamatergic pathways to frequency-organized auditory cortex.


Subject(s)
Auditory Cortex/metabolism , Auditory Pathways/metabolism , Auditory Perception/physiology , Glutamic Acid/metabolism , Vesicular Glutamate Transport Protein 2/metabolism , Acoustic Stimulation , Animals , Evoked Potentials, Auditory/physiology , Gene Expression , Male , Neurons/metabolism , Rats , Thalamus/metabolism
14.
J Comp Neurol ; 519(2): 177-93, 2011 Feb 01.
Article in English | MEDLINE | ID: mdl-21165970

ABSTRACT

Core auditory cortices are organized in parallel pathways that process incoming sensory information differently. In the rat, sound filtering properties of the primary (A1) and ventral (VAF) auditory fields are markedly different, yet both are core regions that by definition receive most of their thalamic input from the ventral nucleus (MGBv) of the medial geniculate body (MGB). For example, spike rate responses to sound intensity and frequency are more narrowly resolved in VAF vs. A1. Here we question whether there are anatomic correlates of the marked differences in response properties in these two core auditory fields. Combined Fourier optical imaging and multiunit recording methods were used to map tone frequency responses with high spatial resolution in A1, VAF, and neighboring cortices. The cortical distance representing a given octave was similar, yet response frequency resolution was about twice as large in VAF as in A1. Retrograde tracers were injected into low- and high-isofrequency contours of both regions to compare MGBv label patterns. The distance between clusters of MGBv neurons projecting to low- and high-isofrequency contours in the cortex was twice as large in caudal as in rostral MGB. This suggests that differences in A1 and VAF frequency resolution are related to the anatomic spatial resolution of frequency laminae in the thalamus, supporting a growing consensus that antecedents of cortical specialization can be attributed in part to the structural and functional characteristics of thalamocortical inputs.


Subject(s)
Auditory Cortex/physiology , Auditory Pathways/physiology , Auditory Perception/physiology , Sound , Thalamus/physiology , Acoustic Stimulation , Animals , Auditory Cortex/anatomy & histology , Auditory Pathways/anatomy & histology , Electrophysiology , Neurons/cytology , Neurons/physiology , Rats , Rats, Wistar , Thalamus/anatomy & histology
15.
J Neurosci ; 30(43): 14522-32, 2010 Oct 27.
Article in English | MEDLINE | ID: mdl-20980610

ABSTRACT

Accurate orientation to sound under challenging conditions requires auditory cortex, but it is unclear how spatial attributes of the auditory scene are represented at this level. Current organization schemes follow a functional division whereby dorsal and ventral auditory cortices specialize in encoding spatial and object features of sound sources, respectively. However, few studies have examined spatial cue sensitivities in ventral cortices to support or reject such schemes. Here, Fourier optical imaging was used to quantify best frequency responses and corresponding gradient organization in primary (A1), anterior, posterior, ventral (VAF), and suprarhinal (SRAF) auditory fields of the rat. Spike rate sensitivities to binaural interaural level difference (ILD) and average binaural level cues were probed in A1 and two ventral cortices, VAF and SRAF. Continuous distributions of best ILDs and ILD tuning metrics were observed in all cortices, suggesting that this horizontal-position cue is well covered. VAF and caudal SRAF in the right cerebral hemisphere responded maximally to midline horizontal position cues, whereas A1 and rostral SRAF responded maximally to ILD cues favoring more eccentric positions in the contralateral sound hemifield. SRAF had the highest incidence of binaural facilitation for ILD cues corresponding to midline positions, supporting current theories that auditory cortices have specialized and hierarchical functional organization.


Subject(s)
Auditory Cortex/physiology , Sound Localization/physiology , Acoustic Stimulation , Algorithms , Animals , Brain Mapping , Cues , Data Interpretation, Statistical , Fourier Analysis , Functional Laterality/physiology , Male , Rats , Rats, Inbred BN , Rats, Wistar
16.
J Comp Neurol ; 518(10): 1630-46, 2010 May 15.
Article in English | MEDLINE | ID: mdl-20232478

ABSTRACT

A hierarchical scheme proposed by Kaas and colleagues suggests that primate auditory cortex can be divided into core and belt regions based on anatomic connections with the thalamus and distinctions among response properties. According to their model, core auditory cortex receives predominantly unimodal sensory input from the ventral nucleus of the medial geniculate body (MGBv), whereas belt cortex receives predominantly cross-modal sensory input from nuclei outside the MGBv. We previously characterized distinct response properties in rat primary (A1) versus ventral auditory field (VAF) cortex; however, it has been unclear whether VAF should be categorized as a core or belt auditory cortex. The current study employed high-resolution functional imaging to map intrinsic metabolic responses to tones and to guide retrograde tracer injections into A1 and VAF. The size and density of retrogradely labeled somas in the medial geniculate body (MGB) were examined as a function of their position along the caudal-to-rostral axis, subdivision of origin, and cortical projection target. A1- and VAF-projecting neurons were found in the same subdivisions of the MGB but in rostral and caudal parts, respectively. Fewer than 3% of the cells projected to both regions. VAF-projecting neurons were smaller than A1-projecting neurons located in the dorsal (MGBd) and suprageniculate (SG) nuclei. Thus, soma size varied with both caudal-rostral position and cortical target. Finally, the majority (>70%) of A1- and VAF-projecting neurons were located in the MGBv. These MGB connection profiles suggest that rat auditory cortex, like primate auditory cortex, is made up of multiple distinct core regions.


Subject(s)
Auditory Cortex , Thalamus , Acoustic Stimulation , Animals , Auditory Cortex/anatomy & histology , Auditory Cortex/physiology , Auditory Pathways/anatomy & histology , Auditory Pathways/physiology , Brain Mapping , Geniculate Bodies/anatomy & histology , Geniculate Bodies/physiology , Neurons/cytology , Neurons/metabolism , Rats , Rats, Wistar , Staining and Labeling , Thalamus/anatomy & histology , Thalamus/physiology