Results 1 - 17 of 17
1.
Curr Biol ; 27(23): 3643-3649.e3, 2017 Dec 04.
Article in English | MEDLINE | ID: mdl-29153327

ABSTRACT

Many behavioral measures of visual perception fluctuate continually in a rhythmic manner, reflecting the influence of endogenous brain oscillations, particularly theta (∼4-7 Hz) and alpha (∼8-12 Hz) rhythms [1-3]. However, it is unclear whether these oscillations are unique to vision or whether auditory performance also oscillates [4, 5]. Several studies report no oscillatory modulation in audition [6, 7], while those with positive findings suffer from confounds relating to neural entrainment [8-10]. Here, we used a bilateral pitch-identification task to investigate rhythmic fluctuations in auditory performance separately for the two ears and applied signal detection theory (SDT) to test for oscillations of both sensitivity and criterion (changes in decision boundary) [11, 12]. Using uncorrelated dichotic white noise to induce a phase reset of oscillations, we demonstrate that, as with vision, both auditory sensitivity and criterion showed strong oscillations over time, at different frequencies: ∼6 Hz (theta range) for sensitivity and ∼8 Hz (low alpha range) for criterion, implying distinct underlying sampling mechanisms [13]. The modulation in sensitivity in the left and right ears was in antiphase, suggestive of attention-like mechanisms sampling alternately from the two ears.
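For illustration, the SDT quantities referred to above (sensitivity d′ and criterion c) can be computed per time bin along these lines. This is a generic sketch, not the authors' analysis pipeline; the log-linear correction, trial counts, and function name are assumptions.

```python
# Minimal SDT sketch: d-prime and criterion from trial counts (illustrative only).
from scipy.stats import norm

def dprime_criterion(hits, misses, false_alarms, correct_rejections):
    """Return (d', c) from trial counts, using a standard log-linear correction."""
    h = (hits + 0.5) / (hits + misses + 1.0)                              # corrected hit rate
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)  # corrected FA rate
    z_h, z_f = norm.ppf(h), norm.ppf(f)
    d_prime = z_h - z_f                # sensitivity
    criterion = -0.5 * (z_h + z_f)     # decision boundary
    return d_prime, criterion

# Example: one hypothetical time bin after the white-noise phase reset
print(dprime_criterion(hits=30, misses=10, false_alarms=12, correct_rejections=28))
```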


Subject(s)
Auditory Perception/physiology , Biological Clocks , Brain/physiology , Adolescent , Female , Humans , Male , Young Adult
2.
J Neurosci ; 37(16): 4381-4390, 2017 04 19.
Article in English | MEDLINE | ID: mdl-28330878

ABSTRACT

Recent work from several groups has shown that perception of various visual attributes in human observers at a given moment is biased toward what was recently seen. This positive serial dependency is a kind of temporal averaging that exploits short-term correlations in visual scenes to reduce noise and stabilize perception. To date, this stabilizing "continuity field" has been demonstrated for stable visual attributes such as orientation and face identity, yet it would be counterproductive to apply it to dynamic attributes for which change sensitivity is needed. Here, we tested this using motion direction discrimination and predicted a negative perceptual dependency: a contrastive relationship that enhances sensitivity to change. Surprisingly, our data showed a cubic-like pattern of dependencies with positive and negative components. By interleaving various stimulus combinations, we separated the components and isolated a positive perceptual dependency for motion and a negative dependency for orientation. A weighted linear sum of the separate dependencies described the original cubic pattern well. The positive dependency for motion shows an integrative perceptual effect and was unexpected, although it is consistent with work on motion priming. These findings suggest that a perception-stabilizing continuity field operates pervasively, occurring even when it obscures sensitivity to dynamic stimuli. SIGNIFICANCE STATEMENT: Recent studies show that visual perception at a given moment is not entirely veridical, but rather biased toward recently seen stimuli: a positive serial dependency. This temporal smoothing process helps perceptual continuity by preserving stable aspects of the visual scene over time, yet, for dynamic stimuli, temporal smoothing would blur dynamics and reduce sensitivity to change. We tested whether this process is selective for stable attributes by examining dependencies in motion perception. We found a clear positive dependency for motion, suggesting that positive perceptual dependencies are pervasive. We also found a concurrent negative (contrastive) dependency for orientation. Both dependencies combined linearly to determine perception, showing that the brain can calculate contrastive and integrative dependencies simultaneously from recent stimulus history when making perceptual decisions.
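The weighted-linear-sum idea above can be sketched as follows. This is an illustrative toy model, not the authors' fitted function: the derivative-of-Gaussian form, amplitudes, widths, and unit weights are assumptions chosen only to show how a positive and a negative component can sum to a cubic-like dependence on the previous-trial difference.

```python
# Illustrative sketch: a weighted linear sum of a positive (integrative) and a
# negative (contrastive) serial-dependence component yields a cubic-like curve.
import numpy as np

def dog(delta, amplitude, width):
    """Derivative-of-Gaussian bias as a function of previous-minus-current stimulus difference."""
    return amplitude * delta * np.exp(-(delta ** 2) / (2 * width ** 2))

delta = np.linspace(-90, 90, 181)                             # previous - current stimulus (deg)
motion_bias = dog(delta, amplitude=0.08, width=25.0)          # positive component (assumed)
orientation_bias = dog(delta, amplitude=-0.05, width=45.0)    # negative component (assumed)
combined = 1.0 * motion_bias + 1.0 * orientation_bias         # weighted linear sum (unit weights assumed)
```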


Subject(s)
Motion Perception/physiology , Space Perception/physiology , Adult , Discrimination, Psychological , Female , Humans , Male , Orientation, Spatial , Repetition Priming
3.
Sci Rep ; 6: 29296, 2016 07 05.
Article in English | MEDLINE | ID: mdl-27377759

ABSTRACT

We tested whether fast flicker can capture attention using eight flicker frequencies from 20-96 Hz, including several too high to be perceived (>50 Hz). Using a 480 Hz visual display rate, we presented smoothly sampled sinusoidal temporal modulations at: 20, 30, 40, 48, 60, 69, 80, and 96 Hz. We first established flicker detection rates for each frequency. Performance was at or near ceiling until 48 Hz and dropped sharply to chance level at 60 Hz and above. We then presented the same flickering stimuli as pre-cues in a visual search task containing five elements. Flicker location varied randomly and was therefore congruent with target location on 20% of trials. Comparing congruent and incongruent trials revealed a very strong congruency effect (faster search for cued targets) for all detectable frequencies (20-48 Hz) but no effect for faster flicker rates that were detected at chance. This pattern of results (obtained with brief flicker cues: 58 ms) was replicated for long flicker cues (1000 ms) intended to allow for entrainment to the flicker frequency. These results indicate that only visible flicker serves as an exogenous attentional cue and that flicker rates too high to be perceived are completely ineffective.
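The "smoothly sampled sinusoidal temporal modulations" can be illustrated with a short sketch of how each 480 Hz frame's luminance follows a sine at the nominal flicker frequency. This is a generic illustration, not the authors' display code; the cue duration, mean luminance, and contrast are assumptions.

```python
# Generic sketch: sampling a sinusoidal flicker at a 480 Hz display rate (illustrative only).
import numpy as np

display_rate = 480.0                        # frames per second
duration = 0.058                            # ~58 ms brief cue, as described above
t = np.arange(0, duration, 1.0 / display_rate)

def flicker_frames(freq_hz, mean_lum=0.5, contrast=0.5):
    """Per-frame luminance values for a sinusoidal modulation at freq_hz."""
    return mean_lum + contrast * mean_lum * np.sin(2 * np.pi * freq_hz * t)

frames_48hz = flicker_frames(48.0)          # visible flicker
frames_96hz = flicker_frames(96.0)          # above the perceptual limit reported above
```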

4.
Sci Rep ; 6: 27725, 2016 06 13.
Article in English | MEDLINE | ID: mdl-27291488

ABSTRACT

A natural auditory scene often contains sound moving at varying velocities. Using a velocity contrast paradigm, we compared sensitivity to velocity changes between continuous and discontinuous trajectories. Subjects compared the velocities of two stimulus intervals that moved along a single trajectory, with and without a 1-second inter-stimulus interval (ISI). We found that thresholds were threefold larger for velocity increases in the instantaneous velocity change condition than for instantaneous velocity decreases or for the delayed velocity transition condition. This result cannot be explained by the current static "snapshot" model of auditory motion perception and suggests a continuous process in which the percept of velocity is influenced by the previous history of stimulation.


Subject(s)
Auditory Threshold/physiology , Sound Localization/physiology , Acoustic Stimulation , Adult , Female , Humans , Male , Motion Perception , Sound , Young Adult
5.
J Assoc Res Otolaryngol ; 17(3): 209-21, 2016 06.
Article in English | MEDLINE | ID: mdl-27033087

ABSTRACT

The location of a sound is derived computationally from acoustical cues rather than being inherent in the topography of the input signal, as in vision. Since Lord Rayleigh, the descriptions of that representation have swung between "labeled line" and "opponent process" models. Employing a simple variant of a two-point separation judgment using concurrent speech sounds, we found that spatial discrimination thresholds changed nonmonotonically as a function of the overall separation. Rather than increasing with separation, spatial discrimination thresholds first declined as two-point separation increased before reaching a turning point and increasing thereafter with further separation. This "dipper" function, with a minimum at 6° of separation, was seen for regions around the midline as well as for more lateral regions (30° and 45°). The discrimination thresholds for the binaural localization cues were linear over the same range, so these cannot explain the shape of these functions. These data and a simple computational model indicate that the perception of auditory space involves a local code or multichannel mapping emerging subsequent to the binaural cue coding.


Subject(s)
Auditory Perception , Sound Localization , Adult , Auditory Threshold , Cues , Female , Humans , Male
6.
Trends Hear ; 20, 2016 Apr 19.
Article in English | MEDLINE | ID: mdl-27094029

ABSTRACT

The growing availability of efficient and relatively inexpensive virtual auditory display technology has provided new research platforms to explore the perception of auditory motion. At the same time, deployment of these technologies in command and control as well as in entertainment roles is generating an increasing need to better understand the complex processes underlying auditory motion perception. This is a particularly challenging processing feat because it involves the rapid deconvolution of the relative change in the locations of sound sources produced by rotations and translations of the head in space (self-motion) to enable the perception of actual source motion. The fact that we perceive our auditory world to be stable despite almost continual movement of the head demonstrates the efficiency and effectiveness of this process. This review examines the acoustical basis of auditory motion perception and a wide range of psychophysical, electrophysiological, and cortical imaging studies that have probed the limits and possible mechanisms underlying this perception.


Subject(s)
Acoustics , Motion Perception , Sound Localization , Sound , Animals , Cues , Electroencephalography , Head Movements , Humans , Motion , Psychoacoustics , Signal Processing, Computer-Assisted , Space Perception , Time Factors
7.
J Renin Angiotensin Aldosterone Syst ; 16(4): 1193-201, 2015 Dec.
Article in English | MEDLINE | ID: mdl-25628311

ABSTRACT

PURPOSE: An ex vivo organotypic retinal explant model was developed to examine retinal survival mechanisms relevant to glaucoma mediated by the renin angiotensin system in the rodent eye. METHODS: Eyes from adult Sprague Dawley rats were enucleated immediately post-mortem and used to make four retinal explants per eye. Explants were treated either with irbesartan (10 µM), vehicle or angiotensin II (2 µM) for four days. Retinal ganglion cell density was estimated by βIII-tubulin immunohistochemistry. Live imaging of superoxide formation with dihydroethidium (DHE) was performed. Protein expression was determined by Western blotting, and mRNA expression was determined by RT-PCR. RESULTS: Irbesartan (10 µM) almost doubled ganglion cell survival after four days. Angiotensin II (2 µM) reduced cell survival by 40%. Sholl analysis suggested that irbesartan improved ganglion cell dendritic arborisation compared with control, whereas angiotensin II reduced it. Angiotensin-treated explants showed an intense DHE fluorescence not seen in irbesartan-treated explants. Analysis of protein and mRNA expression indicated that the angiotensin II type 1 receptor (AT1R) was implicated in modulation of the NADPH-dependent pathway of superoxide generation. CONCLUSION: Angiotensin II blockers protect retinal ganglion cells in this model and may be worth further investigation as a neuroprotective treatment in models of eye disease.


Subject(s)
Angiotensin II Type 1 Receptor Blockers/pharmacology , Models, Biological , Neuroprotection/drug effects , Retinal Ganglion Cells/cytology , Angiotensin II/pharmacology , Animals , Biphenyl Compounds/pharmacology , Blotting, Western , Cell Count , Dendrites/drug effects , Imaging, Three-Dimensional , Irbesartan , Male , Membrane Glycoproteins/metabolism , NADP/metabolism , NADPH Oxidase 2 , NADPH Oxidases/metabolism , Protein Subunits/metabolism , RNA, Messenger/genetics , RNA, Messenger/metabolism , Rats, Sprague-Dawley , Reactive Oxygen Species/metabolism , Receptor, Angiotensin, Type 2/genetics , Receptor, Angiotensin, Type 2/metabolism , Staining and Labeling , Tetrazoles/pharmacology
8.
Front Neurosci ; 9: 493, 2015.
Article in English | MEDLINE | ID: mdl-26778952

ABSTRACT

The ability to actively follow a moving auditory target with our heads remains unexplored, even though it is a common behavioral response. Previous studies of auditory motion perception have focused on conditions where the subjects are passive. The current study examined head-tracking behavior for a moving auditory target along a horizontal 100° arc in the frontal hemisphere, with velocities ranging from 20 to 110°/s. By integrating high-fidelity virtual auditory space with a high-speed visual presentation, we compared tracking responses to auditory targets against visual-only and audio-visual "bisensory" stimuli. Three metrics were measured: onset, RMS, and gain error. The results showed that tracking accuracy (RMS error) varied linearly with target velocity, with a significantly higher rate in audition. Also, when the target moved faster than 80°/s, onset and RMS errors were significantly worse in audition than in the other modalities, while responses in the visual and bisensory conditions were statistically identical for all metrics measured. Lastly, audio-visual facilitation was not observed when tracking bisensory targets.
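The three tracking metrics named above can be computed roughly as follows. This is a schematic sketch with assumed definitions (onset as a fixed displacement threshold, gain as the regression slope of head on target position); the study's exact definitions may differ, and the synthetic trial is hypothetical.

```python
# Schematic head-tracking error metrics (assumed definitions, for illustration only).
import numpy as np

def tracking_metrics(target_deg, head_deg, t, onset_threshold_deg=2.0):
    """Onset latency, RMS error, and gain error for one tracking trial."""
    moved = np.abs(head_deg - head_deg[0]) > onset_threshold_deg
    onset = t[np.argmax(moved)] if moved.any() else np.nan     # first time head exceeds threshold
    rms_error = np.sqrt(np.mean((head_deg - target_deg) ** 2)) # overall positional error
    gain = np.polyfit(target_deg, head_deg, 1)[0]              # slope of head vs. target position
    return onset, rms_error, 1.0 - gain                        # gain error as deviation from unity

# Hypothetical trial: head lags a 60 deg/s target by 100 ms
t = np.linspace(0, 1.5, 300)
target = 60.0 * t - 50.0
head = np.interp(t - 0.1, t, target, left=target[0])
print(tracking_metrics(target, head, t))
```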

9.
PLoS One ; 9(9): e108437, 2014.
Article in English | MEDLINE | ID: mdl-25269061

ABSTRACT

The present study examined the effects of spatial sound-source density and reverberation on the spatiotemporal window for audio-visual motion coherence. Three different acoustic stimuli were generated in Virtual Auditory Space: two acoustically "dry" stimuli via the measurement of anechoic head-related impulse responses recorded at either 1° or 5° spatial intervals (Experiment 1), and a reverberant stimulus rendered from binaural room impulse responses recorded at 5° intervals in situ in order to capture reverberant acoustics in addition to head-related cues (Experiment 2). A moving visual stimulus with invariant localization cues was generated by sequentially activating LEDs along the same radial path as the virtual auditory motion. Stimuli were presented at 25°/s, 50°/s and 100°/s with a random spatial offset between audition and vision. In a 2AFC task, subjects judged the leading modality (auditory or visual). No significant differences were observed in the spatial threshold based on the point of subjective equivalence (PSE) or the slope of the psychometric functions (β) across the three acoustic conditions. Additionally, neither the PSE nor β differed significantly across velocity, suggesting a fixed spatial window of audio-visual separation. The findings suggest that there was no loss of spatial information accompanying the reduction in spatial cues and the reverberation levels tested, and they establish a perceptual measure for assessing the veracity of motion generated from discrete locations and in echoic environments.
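The PSE and slope measures above come from psychometric functions fitted to the 2AFC "which modality led" judgments. The sketch below is a generic fit, not the study's analysis: the cumulative-Gaussian form, the fitting routine, and the response proportions are assumptions for illustration.

```python
# Generic psychometric-function fit for a 2AFC leading-modality judgment (illustrative only).
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(offset_deg, pse, beta):
    """P(report 'vision leads') as a cumulative Gaussian of the audio-visual offset."""
    return norm.cdf((offset_deg - pse) / beta)

offsets = np.array([-20, -10, -5, 0, 5, 10, 20], dtype=float)           # A-V spatial offsets (deg)
p_vision_leads = np.array([0.05, 0.20, 0.35, 0.55, 0.70, 0.85, 0.97])   # hypothetical proportions

params, _ = curve_fit(psychometric, offsets, p_vision_leads, p0=[0.0, 5.0])
pse_hat, beta_hat = params
print(f"PSE = {pse_hat:.2f} deg, slope parameter = {beta_hat:.2f} deg")
```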


Subject(s)
Auditory Threshold/physiology , Pattern Recognition, Physiological/physiology , Pattern Recognition, Visual/physiology , Sound Localization/physiology , Acoustic Stimulation , Adult , Cues , Female , Hearing/physiology , Humans , Male , Motion , Photic Stimulation , Sound , Space Perception/physiology , Spatio-Temporal Analysis , Vision, Ocular/physiology
10.
J Speech Lang Hear Res ; 57(6): 2308-21, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25198800

ABSTRACT

PURPOSE: The aim of this study was to examine attention, memory, and auditory processing in children with reported listening difficulty in noise (LDN) despite having clinically normal hearing. METHOD: Twenty-one children with LDN and 15 children with no listening concerns (controls) participated. The clinically normed auditory processing tests included the Frequency/Pitch Pattern Test (FPT; Musiek, 2002), the Dichotic Digits Test (Musiek, 1983), the Listening in Spatialized Noise-Sentences (LiSN-S) test (Dillon, Cameron, Glyde, Wilson, & Tomlin, 2012), gap detection in noise (Baker, Jayewardene, Sayle, & Saeed, 2008), and masking level difference (MLD; Wilson, Moncrieff, Townsend, & Pillion, 2003). Also included were research-based psychoacoustic tasks, such as auditory stream segregation, localization, sinusoidal amplitude modulation (SAM), and fine structure perception. All were also evaluated on attention and memory test batteries. RESULTS: The LDN group was significantly slower switching their auditory attention and had poorer inhibitory control. Additionally, the group mean results showed significantly poorer performance on FPT, MLD, 4-Hz SAM, and memory tests. Close inspection of the individual data revealed that only 5 participants (out of 21) in the LDN group showed significantly poor performance on FPT compared with clinical norms. Further testing revealed the frequency discrimination of these 5 children to be significantly impaired. CONCLUSION: Thus, the LDN group showed deficits in attention switching and inhibitory control, whereas only a subset of these participants demonstrated an additional frequency resolution deficit.


Subject(s)
Attention/physiology , Auditory Perception/physiology , Auditory Perceptual Disorders/physiopathology , Memory/physiology , Psychoacoustics , Acoustic Stimulation/methods , Adolescent , Auditory Perceptual Disorders/psychology , Case-Control Studies , Child , Dichotic Listening Tests , Female , Hearing Tests/methods , Humans , Male , Noise , Perceptual Masking , Signal-To-Noise Ratio
11.
PLoS One ; 9(7): e102864, 2014.
Article in English | MEDLINE | ID: mdl-25076211

ABSTRACT

Evidence that the auditory system contains specialised motion detectors is mixed. Many psychophysical studies confound speed cues with distance and duration cues and present sound sources that do not appear to move in external space. Here we use the 'discrimination contours' technique to probe the probabilistic combination of speed, distance and duration for stimuli moving in a horizontal arc around the listener in virtual auditory space. The technique produces a set of motion discrimination thresholds that define a contour in the distance-duration plane for different combinations of the three cues, based on a 3-interval oddity task. The orientation of the contour (typically elliptical in shape) reveals which cue or combination of cues dominates. If the auditory system contains specialised motion detectors, stimuli moving over different distances and durations but defining the same speed should be more difficult to discriminate. The resulting discrimination contours should therefore be oriented obliquely along iso-speed lines within the distance-duration plane. However, we found that over a wide range of speeds, distances and durations, the ellipses aligned with the distance-duration axes and were stretched vertically, suggesting that listeners were most sensitive to duration. A second experiment showed that listeners were able to make speed judgements when distance and duration cues were degraded by noise, but that performance was worse. Our results therefore suggest that speed is not a primary cue to motion in the auditory system, but that listeners are able to use speed to make discrimination judgements when distance and duration cues are unreliable.
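The iso-speed reasoning above follows from speed = distance / duration: stimuli sharing a speed lie on a straight line through the origin of the distance-duration plane. A tiny sketch (the example speed and distances are assumptions, for illustration only):

```python
# Illustrative only: an iso-speed contour in the distance-duration plane is the line
# duration = distance / speed, so speed-limited discrimination would predict
# discrimination contours elongated along these oblique lines.
import numpy as np

distances = np.linspace(5.0, 60.0, 12)      # arc length travelled (deg), assumed values
speed = 50.0                                # deg/s, an assumed example speed
durations = distances / speed               # points along one iso-speed line
print(list(zip(distances.round(1), durations.round(3))))
```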


Subject(s)
Cues , Discrimination, Psychological , Sound Localization , Speech Perception , Adult , Female , Humans , Male , Time Factors
12.
J Acoust Soc Am ; 136(1): EL20-5, 2014 Jul.
Article in English | MEDLINE | ID: mdl-24993233

ABSTRACT

"Representational Momentum" (RM) is a mislocalization of the endpoint of a moving target in the direction of motion. In vision, RM has been shown to increase with target velocity. In audition, however, the effect of target velocity is unclear. Using a perceptual paradigm with moving broadband noise targets in Virtual Auditory Space resulted in a linear increase in RM from 0.9° to 2.3° for an increase in target velocity from 25°/s to 100°/s. Accounting for the effect of eye position also reduced variance. These results suggest that RM may be the result of similar underlying mechanisms in both modalities.

13.
Multisens Res ; 26(4): 333-45, 2013.
Article in English | MEDLINE | ID: mdl-24319927

ABSTRACT

Information about the world is captured by our separate senses, and must be integrated to yield a unified representation. This raises the issue of which signals should be integrated and which should remain separate, as inappropriate integration will lead to misrepresentation and distortions. One strong cue suggesting that separate signals arise from a single source is coincidence, in space and in time. We measured increment thresholds for discriminating spatial intervals defined by pairs of simultaneously presented targets, one flash and one auditory sound, for various separations. We report a 'dipper function', in which thresholds follow a 'U-shaped' curve, with thresholds initially decreasing with spatial interval, and then increasing for larger separations. The presence of a dip in the audiovisual increment-discrimination function is evidence that the auditory and visual signals both input to a common mechanism encoding spatial separation, and a simple filter model with a sigmoidal transduction function simulated the results well. The function of an audiovisual spatial filter may be to detect coincidence, a fundamental cue guiding whether to integrate or segregate.
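A minimal version of the filter-plus-sigmoidal-transducer idea mentioned above is sketched below: an accelerating-then-compressive transducer implies increment thresholds that first fall and then rise with pedestal separation, i.e. a dipper. The Naka-Rushton-style form, parameter values, and fixed response criterion are illustrative assumptions, not the authors' fitted model.

```python
# Minimal dipper sketch: increment thresholds derived from a sigmoidal transducer
# first decrease, then increase, as the pedestal (baseline separation) grows.
import numpy as np

def transducer(x, rmax=1.0, p=2.0, q=0.4, z=3.0):
    """Sigmoidal response to spatial-interval signal strength x (arbitrary units)."""
    return rmax * x ** (p + q) / (x ** p + z ** p)

def increment_threshold(pedestal, delta_r=0.05, probe=np.linspace(0.01, 40, 4000)):
    """Smallest increment on a pedestal that raises the response by a fixed amount."""
    base = transducer(pedestal)
    above = probe[transducer(pedestal + probe) >= base + delta_r]
    return above[0] if above.size else np.nan

pedestals = np.array([0.0, 1.0, 2.0, 4.0, 8.0, 16.0])
print([round(increment_threshold(p), 2) for p in pedestals])   # falls, then rises: a dipper
```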


Subject(s)
Auditory Perception/physiology , Cues , Reaction Time/physiology , Visual Perception/physiology , Acoustic Stimulation/methods , Female , Humans , Male , Photic Stimulation/methods
14.
Sci Rep ; 3: 1297, 2013.
Article in English | MEDLINE | ID: mdl-23416613

ABSTRACT

The aim of this research was to evaluate the ability to switch attention and selectively attend to relevant information in children (10-15 years) with persistent listening difficulties in noisy environments. A wide battery of clinical tests indicated that children with complaints of listening difficulties had otherwise normal hearing sensitivity and auditory processing skills. Here we show that these children are markedly slower to switch their attention compared to their age-matched peers. The results suggest poor attention switching, lack of response inhibition and/or poor listening effort consistent with a predominantly top-down (central) information processing deficit. A deficit in the ability to switch attention across talkers would provide the basis for this otherwise hidden listening disability, especially in noisy environments involving multiple talkers such as classrooms.

15.
J Acoust Soc Am ; 127(5): 3060-72, 2010 May.
Article in English | MEDLINE | ID: mdl-21117755

ABSTRACT

Free-field source localization experiments with 30 source locations, symmetrically distributed in azimuth, elevation, and front-back location, were performed with periodic tones having different phase relationships among their components. Although the amplitude spectra were the same for these different kinds of stimuli, the tones with certain phase relationships were successfully localized while the tones with other phases led to large elevation errors and front-back reversals, normally growing with stimulus level. The results show that it is not enough to have a smooth, broadband, long-term signal spectrum for successful sagittal-plane localization. Instead, temporal factors are important. A model calculation investigates the idea that the tonotopic details that mediate localization need to be simultaneously, or almost simultaneously, accessible in the auditory system in order to achieve normal elevation perception. A qualitative model based on lateral inhibition seems capable in principle of accounting for both the phase effects and level effects.


Subject(s)
Auditory Pathways/physiology , Pitch Perception , Signal Detection, Psychological , Sound Localization , Acoustic Stimulation , Adult , Audiometry, Pure-Tone , Auditory Threshold , Female , Humans , Male , Middle Aged , Noise/adverse effects , Perceptual Masking , Psychoacoustics , Time Factors , Young Adult
16.
Perception ; 38(7): 966-87, 2009.
Article in English | MEDLINE | ID: mdl-19764300

ABSTRACT

We investigated audiovisual speed perception to test the maximum-likelihood-estimation (MLE) model of multisensory integration. According to MLE, audiovisual speed perception will be based on a weighted average of visual and auditory speed estimates, with each component weighted by its inverse variance, a statistically optimal combination that produces a fused estimate with minimised variance and thereby affords maximal discrimination. We used virtual auditory space to create ecologically valid auditory motion, together with visual apparent motion around an array of 63 LEDs. To degrade the usual dominance of vision over audition, we added positional jitter to the motion sequences and also measured peripheral trajectories. Both factors degraded visual speed discrimination, while auditory speed perception was unaffected by trajectory location. In the bimodal conditions, a speed conflict was introduced (48°/s versus 60°/s) and two measures were taken: perceived audiovisual speed, and the precision (variability) of audiovisual speed discrimination. These measures showed only a weak tendency to follow MLE predictions. However, splitting the data into two groups based on whether the unimodal component weights were similar or disparate revealed interesting findings: similarly weighted components were integrated in a manner closely matching MLE predictions, while dissimilarly weighted components (greater than a 3:1 difference) were integrated according to probability-summation predictions. These results suggest that different multisensory integration strategies may be implemented depending on relative component reliabilities, with MLE integration vetoed when component weights are highly disparate.
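The MLE predictions described above follow the standard inverse-variance weighting scheme. The sketch below is the generic formulation, not the authors' code; the unimodal standard deviations in the example are hypothetical, while the 48 versus 60 deg/s conflict values come from the abstract.

```python
# Standard MLE cue-combination predictions (generic formulation, for illustration):
# the bimodal estimate is an inverse-variance weighted average of the unimodal estimates,
# and the predicted bimodal variance is smaller than either unimodal variance.
def mle_prediction(speed_v, sigma_v, speed_a, sigma_a):
    """Predicted audiovisual speed and discrimination SD from unimodal estimates."""
    w_v = (1.0 / sigma_v**2) / (1.0 / sigma_v**2 + 1.0 / sigma_a**2)
    w_a = 1.0 - w_v
    speed_av = w_v * speed_v + w_a * speed_a                                  # weighted average
    sigma_av = (sigma_v**2 * sigma_a**2 / (sigma_v**2 + sigma_a**2)) ** 0.5   # reduced variance
    return speed_av, sigma_av

# Hypothetical conflict trial: visual 60 deg/s, auditory 48 deg/s, similar reliabilities
print(mle_prediction(speed_v=60.0, sigma_v=6.0, speed_a=48.0, sigma_a=7.0))
```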


Subject(s)
Acoustic Stimulation/methods , Auditory Perception/physiology , Motion Perception/physiology , Photic Stimulation/methods , Acoustic Stimulation/psychology , Humans , Likelihood Functions , Statistics as Topic
17.
Proc Natl Acad Sci U S A ; 105(17): 6492-7, 2008 Apr 29.
Article in English | MEDLINE | ID: mdl-18427118

ABSTRACT

Studies of spatial perception during visual saccades have demonstrated compressions of visual space around the saccade target. Here we psychophysically investigated perception of auditory space during rapid head turns, focusing on the "perisaccadic" interval. Using separate perceptual and behavioral response measures we show that spatial compression also occurs for rapid head movements, with the auditory spatial representation compressing by up to 50%. Similar to observations in the visual system, this occurred only when spatial locations were measured by using a perceptual response; it was absent for the behavioral measure involving a nose-pointing task. These findings parallel those observed in vision during saccades and suggest that a common neural mechanism may subserve these distortions of space in each modality.


Subject(s)
Auditory Perception/physiology , Head Movements/physiology , Acoustic Stimulation , Adult , Female , Humans , Male , Time Factors