Results 1 - 20 of 33
1.
Trends Neurosci ; 46(1): 5-7, 2023 01.
Article in English | MEDLINE | ID: mdl-36280458

ABSTRACT

Bats are the only mammals capable of powered flight, and echolocating bats rely on active sensing to find food and steer around obstacles in 3D environments. These natural behaviors depend on neural circuits that support 3D auditory localization, audio-motor integration, navigation, and flight control, which are modulated by spatial attention and action selection.


Subject(s)
Chiroptera , Echolocation , Sound Localization , Humans , Animals
2.
Proc Biol Sci ; 289(1984): 20220768, 2022 10 12.
Article in English | MEDLINE | ID: mdl-36196538

ABSTRACT

Early visual deprivation typically results in spatial impairments in other sensory modalities. It has been suggested that, since vision provides the most accurate spatial information, it is used for calibrating space in the other senses. Here we investigated whether sight restoration after prolonged early-onset visual impairment can lead to the development of more accurate auditory space perception. We tested participants who were surgically treated for congenital dense bilateral cataracts several years after birth. In Experiment 1, we assessed participants' ability to understand spatial relationships among sounds by asking them to spatially bisect three consecutive, laterally separated sounds. Participants tested after surgery performed better than participants tested before surgery, although they still performed worse than sighted controls. In Experiment 2, we demonstrated that single-sound localization in the two-dimensional frontal plane improves quickly after surgery, approaching the performance levels of sighted controls. This recovery seems to be mediated by visual acuity, as participants gaining higher post-surgical visual acuity performed better in both experiments. These findings provide strong support for the hypothesis that vision calibrates auditory space perception. Importantly, they also demonstrate that this process can occur even when vision is restored after years of visual deprivation.


Subject(s)
Cataract , Sound Localization , Auditory Perception , Blindness , Calibration , Humans , Space Perception , Vision, Ocular
3.
Indian J Otolaryngol Head Neck Surg ; 74(Suppl 3): 3631-3637, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36742871

ABSTRACT

The advent of cochlear implants (CI) has brought with it the goal of spoken-language performance in severe-to-profound sensorineural hearing loss (SNHL) on par with that of normal-hearing listeners. The aim of this study was to evaluate the outcomes of paediatric cochlear implantation on auditory and speech skills in children below the age of 5 years. The study included 50 children. Auditory skills were assessed in terms of audiometric thresholds and localization abilities. Speech-language skills were measured using the Categories of Auditory Performance (CAP), the Meaningful Use of Speech Scale (MUSS), the Meaningful Auditory Integration Scale (MAIS), and the Speech Intelligibility Rating (SIR). Hearing thresholds obtained from all subjects pre-implant and at 3, 6, 9, and 12 months post-implant showed a highly significant (p < 0.001) improvement across all test frequencies (500, 1000, 2000, and 4000 Hz). There was also a statistically significant difference across successive measurements of auditory and speech skills, as determined by ANOVA (F(4, 245) = 151.33, p < 0.001 for CAP; F(4, 245) = 89.636, p < 0.001 for SIR; F(4, 245) = 812.282, p < 0.001 for MAIS; and F(4, 245) = 435.677, p < 0.001 for MUSS). Auditory localization abilities also improved considerably over the one-year period. The study adds evidence to the literature that cochlear implants significantly improve the hearing ability of children with severe-to-profound hearing loss, and demonstrates that implanted children become better able to make use of the auditory information perceived through the implant.

4.
Curr Biol ; 31(22): 5093-5101.e5, 2021 11 22.
Article in English | MEDLINE | ID: mdl-34555348

ABSTRACT

Congenitally blind infants are deprived not only of visual input but also of visual influences on the intact senses. The important role that vision plays in the early development of multisensory spatial perception [1-7] (e.g., in crossmodal calibration [8-10] and in the formation of multisensory spatial representations of the body and the world [1,2]) raises the possibility that impairments in spatial perception are at the heart of the wide range of difficulties that visually impaired infants show across spatial [8-12], motor [13-17], and social domains [8,18,19]. But investigations of early development are needed to clarify how visually impaired infants' spatial hearing and touch support their emerging ability to make sense of their body and the outside world. We compared sighted (S) and severely visually impaired (SVI) infants' responses to auditory and tactile stimuli presented on their hands. No statistically reliable differences in the direction or latency of responses to auditory stimuli emerged, but significant group differences emerged in responses to tactile and audiotactile stimuli. The visually impaired infants showed attenuated audiotactile spatial integration and interference, weighted tactile cues more heavily than auditory cues when the two were presented in conflict, and showed a more limited influence of representations of the external layout of the body on tactile spatial perception [20]. These findings uncover a distinct phenotype of multisensory spatial perception in early postnatal visual deprivation. Importantly, evidence of audiotactile spatial integration in visually impaired infants, albeit to a lesser degree than in sighted infants, signals the potential of multisensory rehabilitation methods in early development. VIDEO ABSTRACT.


Subject(s)
Space Perception , Touch Perception , Auditory Perception/physiology , Blindness , Hand/physiology , Humans , Space Perception/physiology , Touch/physiology , Touch Perception/physiology , Visual Perception
5.
Proc Natl Acad Sci U S A ; 117(46): 29229-29238, 2020 11 17.
Article in English | MEDLINE | ID: mdl-33139550

ABSTRACT

Unlike other predators that use vision as their primary sensory system, bats compute the three-dimensional (3D) position of flying insects from discrete echo snapshots, which raises questions about the strategies they employ to track and intercept erratically moving prey from interrupted sensory information. Here, we devised an ethologically inspired behavioral paradigm to directly test the hypothesis that echolocating bats build internal prediction models from dynamic acoustic stimuli to anticipate the future location of moving auditory targets. We quantified the direction of the bat's head/sonar beam aim and echolocation call rate as it tracked a target that moved across its sonar field and applied mathematical models to differentiate between nonpredictive and predictive tracking behaviors. We discovered that big brown bats accumulate information across echo sequences to anticipate an auditory target's future position. Further, when a moving target is hidden from view by an occluder during a portion of its trajectory, the bat continues to track its position using an internal model of the target's motion path. Our findings also reveal that the bat increases sonar call rate when its prediction of target trajectory is violated by a sudden change in target velocity. This shows that the bat rapidly adapts its sonar behavior to update internal models of auditory target trajectories, which would enable tracking of evasive prey. Collectively, these results demonstrate that the echolocating big brown bat integrates acoustic snapshots over time to build prediction models of a moving auditory target's trajectory and enable prey capture under conditions of uncertainty.
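The internal prediction model described above can be sketched computationally. Below is a minimal illustration, assuming a constant-velocity internal model implemented as a Kalman filter; the snapshot interval, noise parameters, and measurement values are hypothetical and not taken from the study. Coasting through `None` measurements mirrors the bat's continued tracking of an occluded target, and the innovation term corresponds to the prediction violation that triggered increased call rates.

```python
import numpy as np

# Minimal sketch of predictive target tracking from discrete "echo snapshots",
# assuming a constant-velocity internal model (illustrative; not the paper's model).
dt = 0.1                             # assumed interval between echo snapshots (s)
F = np.array([[1, dt], [0, 1]])      # state transition: position, velocity
H = np.array([[1.0, 0.0]])           # only position (e.g., target angle) is observed
Q = np.eye(2) * 1e-3                 # assumed process noise
R = np.array([[0.05]])               # assumed measurement noise

x = np.array([[0.0], [0.0]])         # initial state estimate
P = np.eye(2)

def kalman_step(x, P, z):
    """One predict/update cycle; prediction carries the track through occlusion."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    if z is None:                     # occluded: no echo, coast on the internal model
        return x_pred, P_pred
    y = z - H @ x_pred                # innovation: how far the prediction was violated
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    return x_pred + K @ y, (np.eye(2) - K @ H) @ P_pred

# Hypothetical target positions, hidden (None) for part of the trajectory.
for t, z in enumerate([0.1, 0.21, 0.29, None, None, 0.62]):
    meas = None if z is None else np.array([[z]])
    x, P = kalman_step(x, P, meas)
    print(f"snapshot {t}: estimated position {x[0,0]:.2f}, velocity {x[1,0]:.2f}")
```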


Subject(s)
Acoustics , Auditory Perception/physiology , Chiroptera/physiology , Echolocation/physiology , Acoustic Stimulation , Animals , Auditory Pathways/physiology , Biosensing Techniques , Brain/physiology , Female , Head , Insecta , Male , Orientation/physiology , Predatory Behavior/physiology , Sound , Sound Localization
6.
Psychol Sci ; 31(9): 1129-1139, 2020 09.
Article in English | MEDLINE | ID: mdl-32846109

ABSTRACT

Vision is thought to support the development of spatial abilities in the other senses. If this is true, how does spatial hearing develop in people lacking visual experience? We comprehensively addressed this question by investigating auditory-localization abilities in 17 congenitally blind and 17 sighted individuals using a psychophysical minimum-audible-angle task that lacked sensorimotor confounds. Participants were asked to compare the relative position of two sound sources located in central and peripheral, horizontal and vertical, or frontal and rear spaces. We observed unequivocal enhancement of spatial-hearing abilities in congenitally blind people, irrespective of the field of space that was assessed. Our results conclusively demonstrate that visual experience is not a prerequisite for developing optimal spatial-hearing abilities and that, in striking contrast, the lack of vision leads to a general enhancement of auditory-spatial skills.
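For readers unfamiliar with the minimum-audible-angle measure, the sketch below shows one standard way such a threshold is extracted: fit a cumulative-Gaussian psychometric function to left/right judgments and read off the separation supporting 75% correct. The data and the 75% convention are illustrative assumptions, not the study's procedure or results.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical judgment data: proportion "second source to the right"
# as a function of angular separation (deg). Values are made up for illustration.
separations = np.array([-8, -4, -2, -1, 1, 2, 4, 8], dtype=float)
p_right = np.array([0.05, 0.15, 0.30, 0.45, 0.55, 0.70, 0.85, 0.95])

def psychometric(x, mu, sigma):
    """Cumulative-Gaussian psychometric function."""
    return norm.cdf(x, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(psychometric, separations, p_right, p0=(0.0, 3.0))

# One common convention: MAA = separation lifting performance from 50% to 75%,
# i.e. about 0.674 standard deviations of the fitted function.
maa = sigma * norm.ppf(0.75)
print(f"bias = {mu:.2f} deg, MAA = {maa:.2f} deg")
```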


Subject(s)
Sound Localization , Visually Impaired Persons , Blindness , Hearing , Humans , Space Perception , Vision, Ocular
7.
Brain Commun ; 2(2): fcaa195, 2020.
Article in English | MEDLINE | ID: mdl-33426527

ABSTRACT

Auditory localization (i.e. turning the head and/or the eyes towards an auditory stimulus) is often part of the clinical evaluation of patients recovering from coma. The objective of this study was to determine whether auditory localization could be considered a new sign of the minimally conscious state, using a multimodal approach. The presence of auditory localization and the clinical outcome at 2 years of follow-up were evaluated in 186 patients with severe brain injury, including 64 with unresponsive wakefulness syndrome, 28 in minimally conscious state minus, 71 in minimally conscious state plus and 23 who emerged from the minimally conscious state. Brain metabolism, functional connectivity and graph theory measures were investigated by means of 18F-fluorodeoxyglucose positron emission tomography, functional MRI and high-density electroencephalography in two subgroups of unresponsive patients, with and without auditory localization. These two subgroups were also compared to a subgroup of patients in minimally conscious state minus. Auditory localization was observed in 13% of unresponsive patients, 46% of patients in minimally conscious state minus, 62% of patients in minimally conscious state plus and 78% of patients who emerged from the minimally conscious state. The probability of observing auditory localization increased with the level of consciousness, and its presence could predict the level of consciousness. Patients with auditory localization had higher survival rates (at 2-year follow-up) than those without localization. Differences in brain function were found between unresponsive patients with and without auditory localization: higher connectivity was measured in unresponsive patients with auditory localization between the fronto-parietal network and secondary visual areas, and in the alpha-band electroencephalography network. Moreover, patients in minimally conscious state minus significantly differed from unresponsive patients without auditory localization in terms of brain metabolism and alpha-network centrality, whereas no difference was found with unresponsive patients who presented auditory localization. Our multimodal findings reveal differences in brain function between unresponsive patients with and without auditory localization, supporting our hypothesis that auditory localization should be considered a new sign of the minimally conscious state. Unresponsive patients showing auditory localization should therefore no longer be considered unresponsive but minimally conscious. This would have crucial consequences for these patients' lives, as it would directly impact the therapeutic orientation and end-of-life decisions usually taken based on the diagnosis.
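As a rough illustration of the reported trend, the counts implied by the abstract's group sizes and percentages can be tabulated and tested for independence. This is an illustrative reanalysis from rounded summary figures, not the authors' statistics.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Counts reconstructed from the abstract's percentages and group sizes
# (rounded; illustrative only).
groups = ["UWS", "MCS-", "MCS+", "EMCS"]
n      = np.array([64, 28, 71, 23])
p_loc  = np.array([0.13, 0.46, 0.62, 0.78])
with_loc = np.round(n * p_loc).astype(int)     # patients showing localization
table = np.vstack([with_loc, n - with_loc])    # 2 x 4 contingency table

chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.1f}, p = {p:.2g}")
for g, k, total in zip(groups, with_loc, n):
    print(f"{g}: {k}/{total} localized")
```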

8.
Hear Res ; 377: 282-291, 2019 06.
Article in English | MEDLINE | ID: mdl-31029039

ABSTRACT

The present study investigated spatial hearing in children aged 6-12 years diagnosed with Auditory Processing Disorder (APD) and compared their results to those of a group of age-matched control children. Sound-source localization accuracy was quantified using an absolute localization task, and sound-source discrimination by measuring the minimum audible angle. Low- and high-frequency noise bursts were presented from eight loudspeaker positions in the left and right hemifields (0°, 30°, 60°, and 90° azimuth). Median absolute localization accuracy did not differ between children with APD and control children; however, the intra-individual variability of pointing behavior was higher for children with APD. In contrast, children with APD had significantly higher minimum audible angle thresholds than control children. These findings show that APD impairs sound-source discrimination but does not affect the median relationship between actual and judged sound-source locations.
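Minimum audible angles of the kind measured here are commonly estimated with an adaptive staircase. The sketch below shows a generic 2-down-1-up procedure run against a simulated listener; the step size, stopping rule, and observer model are assumptions for illustration and may differ from the study's protocol.

```python
import random

# Generic 2-down-1-up staircase for a minimum-audible-angle task; converges
# on the separation yielding ~70.7% correct discrimination.

def simulated_listener(separation, true_maa=4.0):
    """Hypothetical observer: larger separations are easier to discriminate."""
    p_correct = 0.5 + 0.5 * min(separation / (2 * true_maa), 1.0)
    return random.random() < p_correct

angle, step = 16.0, 2.0            # starting separation and step size (deg), assumed
consecutive_correct, reversals, last_dir = 0, [], 0
while len(reversals) < 8:
    if simulated_listener(angle):
        consecutive_correct += 1
        if consecutive_correct == 2:          # two correct in a row -> harder
            consecutive_correct = 0
            if last_dir == +1:
                reversals.append(angle)       # direction change: record reversal
            angle, last_dir = max(angle - step, 0.5), -1
    else:                                      # one error -> easier
        consecutive_correct = 0
        if last_dir == -1:
            reversals.append(angle)
        angle, last_dir = angle + step, +1

maa_estimate = sum(reversals[2:]) / len(reversals[2:])  # average late reversals
print(f"estimated MAA = {maa_estimate:.1f} deg")
```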


Subject(s)
Auditory Perceptual Disorders/psychology , Child Behavior , Discrimination, Psychological , Sound Localization , Acoustic Stimulation , Age Factors , Auditory Pathways/physiopathology , Auditory Perceptual Disorders/diagnosis , Auditory Perceptual Disorders/physiopathology , Case-Control Studies , Child , Female , Humans , Judgment , Male
9.
Phys Life Rev ; 31: 206-213, 2019 Dec.
Article in English | MEDLINE | ID: mdl-30744951

ABSTRACT

The focus of this study is auditory localization. Based on wave mechanics and structural mechanics, we analyze the sound field encircling the external ear and the function of the soft tissues in the middle ear, respectively. Then, with the help of two rules, some details of the generation of spatial hearing are revealed. For auditory direction perception, the three semicircular canals work as the reference coordinate system, and the cues are the direction of the concentrated force acting on the tympanic membrane and the synchronous stress state. For distance perception, because the distance information of the sound source is converted into a set of comparisons, accuracy is closely related to the number of usable sound sources; poor accuracy of distance perception is therefore inevitable in most cases. When necessary, cues based on experience are utilized to enhance the accuracy of distance perception, which brings diversity to auditory localization. Additionally, our results can also be used to explain some well-known acoustic phenomena, such as auditory localization with one ear and the cocktail party effect.


Subject(s)
Auditory Perception/physiology , Models, Biological , Acoustics , Ear/physiology , Humans
10.
Hear Res ; 370: 155-167, 2018 12.
Article in English | MEDLINE | ID: mdl-30388573

ABSTRACT

Binaural integration of interaural temporal information is essential for sound source localization and segregation. Current models of binaural interaction have shown that accurate sound localization in the horizontal plane depends on the resolution of phase ambiguous information by across-frequency integration. However, as such models are mostly static, it is not clear how proximate in time binaural events in different frequency channels should occur to form an auditory object with a unique lateral position. The present study examined the spectrotemporal window required for effective integration of binaural cues across frequency to form the perception of a stationary position. In Experiment 1, listeners judged whether dichotic frequency-modulated (FM) sweeps with a constant large nominal interaural delay (1500 µs), whose perceived laterality was ambiguous depending on the sweep rate (1500, 3000, 6000, and 12,000 Hz/s), produced a percept of continuous motion or a stationary image. Motion detection performance, indexed by d-prime (d') values, showed a clear effect of sweep rate, with auditory motion effects most pronounced for low sweep rates, and a punctate stationary image at high rates. Experiment 2 examined the effect of modulation rate (0.5, 3, 20, and 50 Hz) on lateralizing sinusoidally frequency-modulated (SFM) tones to confirm the effect of sweep rate on motion detection, independent of signal duration. Lateralization accuracy increased with increasing modulation rate up to 20 Hz and saturated at 50 Hz, with poorest performance occurring below 3 Hz depending on modulator phase. Using the transition point where percepts changed from motion to stationary images, we estimated a spectrotemporal integration window of approximately 150 ms per octave required for effective integration of interaural temporal cues across frequency channels. A Monte Carlo simulation based on a cross-correlation model of binaural interaction predicted 90% of the variance on perceptual motion detection performance as a function of FM sweep rate. Findings suggest that the rate of frequency channel convergence of binaural cues is essential to binaural lateralization.
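The cross-correlation model invoked in the final analysis can be illustrated compactly: the interaural delay is recovered as the lag that maximizes the correlation between the two ear signals. The sketch below uses the study's 1500 µs delay but an otherwise arbitrary noise stimulus and sample rate.

```python
import numpy as np

# Cross-correlation readout of an interaural time difference (ITD).
fs = 48_000                         # sample rate (Hz), assumed
itd = 1500e-6                       # interaural delay from the study (s)
t = np.arange(0, 0.05, 1 / fs)
rng = np.random.default_rng(0)
source = rng.standard_normal(t.size)

delay_samples = int(round(itd * fs))          # ~72 samples at 48 kHz
left = source
right = np.roll(source, delay_samples)        # right ear lags the left

max_lag = int(0.002 * fs)                     # search +/- 2 ms of lag
lags = np.arange(-max_lag, max_lag + 1)
corr = [np.dot(left, np.roll(right, -k)) for k in lags]
best = lags[int(np.argmax(corr))]
print(f"estimated ITD = {best / fs * 1e6:.0f} us (true {itd * 1e6:.0f} us)")
```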


Subject(s)
Auditory Pathways/physiology , Cues , Pitch Perception , Sound Localization , Acoustic Stimulation , Adult , Computer Simulation , Dichotic Listening Tests , Female , Humans , Judgment , Male , Models, Theoretical , Monte Carlo Method , Perceptual Masking , Time Factors , Young Adult
11.
J Exp Biol ; 221(Pt 18)2018 09 17.
Article in English | MEDLINE | ID: mdl-29997156

ABSTRACT

Echolocating bats dynamically adapt the features of their sonar calls as they approach obstacles and track targets. As insectivorous bats forage, they increase sonar call rate with decreasing prey distance, and clusters of sonar sounds, termed sonar sound groups (SSGs), are often embedded in their insect-approach sequences. The bat's production of SSGs has been observed in both field and laboratory conditions and is hypothesized to sharpen spatiotemporal sonar resolution. When insectivorous bats hunt, they may encounter erratically moving prey, which increases the demands on the bat's sonar imaging system. Here, we studied the bat's adaptive vocal behavior in an experimentally controlled insect-tracking task, allowing us to manipulate the predictability of target trajectories and measure the prevalence of SSGs. With this system, we trained bats to remain stationary on a platform and track a moving prey item whose trajectory was programmed either to approach the bat directly or to move back and forth before arriving at the bat. We manipulated target motion predictability by varying the order in which different target trajectories were presented to the bats. During all trials, we recorded the bat's sonar calls and later analysed the incidence of SSG production under the different target-tracking conditions. Our results demonstrate that bats increase the production of SSGs when target unpredictability increases and decrease it when target motion predictability increases. Furthermore, bats produce the same number of sonar vocalizations irrespective of target motion predictability, indicating that the animal's temporal clustering of sonar call sequences into SSGs is purposeful and therefore involves sensorimotor planning.
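One simple way to operationalize SSG detection from recorded call times is to flag runs of short, stable inter-pulse intervals. The criterion below (intervals shorter than the median by a fixed factor) is an assumption for illustration, not the paper's published definition.

```python
import numpy as np

# Flag sonar sound groups (SSGs): runs of calls whose inter-pulse intervals
# (IPIs) are short relative to the surrounding calling rate.

def find_ssgs(call_times, stability=1.2):
    ipis = np.diff(call_times)
    groups, current = [], [0]
    for i in range(1, len(call_times)):
        baseline = np.median(ipis)            # crude estimate of baseline IPI
        if ipis[i - 1] < baseline / stability:  # short interval: same group
            current.append(i)
        else:
            if len(current) >= 2:
                groups.append(current)
            current = [i]
    if len(current) >= 2:
        groups.append(current)
    return groups

# Hypothetical call sequence (s): two tight clusters amid slower calling.
calls = np.array([0.00, 0.10, 0.12, 0.14, 0.30, 0.50, 0.52, 0.54, 0.80])
for g in find_ssgs(calls):
    print("SSG at calls", g, "times", calls[g])
```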


Subject(s)
Chiroptera/physiology , Echolocation , Predatory Behavior , Animals , Sound
12.
Vision (Basel) ; 2(1)2018 Mar 01.
Article in English | MEDLINE | ID: mdl-31735877

ABSTRACT

Understanding how the eyes work together to determine the direction of objects provided the impetus for examining the integration of signals from the ears to locate sounds. However, the advantages of having two eyes were recorded long before those of two ears were appreciated. In part, this reflects the marked differences in how we can compare perception with one or two organs: it is easier to close one eye and examine monocular vision than to "close" one ear and study monaural hearing. Moreover, we can move our eyes either in the same or in opposite directions, but humans have no equivalent means of moving the ears in unison. Studies of binocular single vision can be traced back over two thousand years, and they were implicitly concerned with visual directions from each eye. The location of any point in visual or auditory space can be described by specifying its direction and distance from the vantage point of an observer. From the late 18th century, experiments indicated that binocular direction involved an eye-movement component, and experimental studies of binaural direction commenced slightly later. However, these early binocular and binaural experiments were not incorporated into theoretical accounts until almost a century later. The early history of research on visual direction with two eyes is contrasted with that on auditory direction with two ears.

13.
Front Neurosci ; 11: 268, 2017.
Article in English | MEDLINE | ID: mdl-28491022

ABSTRACT

[This corrects the article on p. 76 in vol. 11, PMID: 28261053.].

14.
Front Neurosci ; 11: 76, 2017.
Article in English | MEDLINE | ID: mdl-28261053

ABSTRACT

Blind individuals show impairments in auditory spatial skills that require complex spatial representation of the environment. We suggest that this is partially due to the egocentric frame of reference used by blind individuals. Here we investigate the possibility of reducing these auditory spatial impairments with audio-motor training. Our hypothesis is that the association between a motor command and the corresponding movement's sensory feedback can provide an allocentric frame of reference and consequently help blind individuals understand complex spatial relationships. Subjects were required to localize the end point of a moving sound before and after either 2 min of audio-motor training or a complete rest. During the training, subjects were asked to move their hand, and consequently the sound source, to freely explore the space around the setup and the body. Both congenitally blind individuals (N = 20) and blindfolded healthy controls (N = 28) participated in the study. Results suggest that the audio-motor training was effective in improving the space perception of blind individuals; the improvement was not observed in subjects who did not perform the training. This study demonstrates that it is possible to recalibrate auditory spatial representation in congenitally blind individuals with short audio-motor training, and it provides new insights for rehabilitation protocols for blind people.

15.
Audiol Neurootol ; 22(6): 326-342, 2017.
Article in English | MEDLINE | ID: mdl-29495018

ABSTRACT

The present study investigated two measures of spatial acoustic perception in children and adolescents with sensorineural hearing loss (SNHL), tested without their hearing aids, and compared the results to those of age-matched controls. Auditory localization was quantified by means of a sound-source identification task, and auditory spatial discrimination acuity by measuring minimum audible angles (MAA). Both low- and high-frequency noise bursts were employed in the tests to separately address spatial auditory processing based on interaural time and intensity differences. In SNHL children, localization (hit accuracy) was significantly reduced compared to normal-hearing children, and intraindividual variability (dispersion) was considerably increased. Despite these impairments, performance based on interaural time differences (low frequencies) was still better than that based on intensity differences (high frequencies). For MAA, age-matched comparisons yielded not only increased MAA values in SNHL children, but also no decrease with increasing age, in contrast to normal-hearing children. Deficits in MAA were most apparent at frontal azimuths; thus, children with SNHL do not seem to benefit from frontal positions of the sound sources as normal-hearing children do. The results indicate that the processing of spatial cues in SNHL children is restricted, which could also imply problems with speech understanding in challenging hearing situations.
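The low-/high-frequency stimulus split reflects duplex theory: interaural time differences (ITDs) dominate localization at low frequencies and interaural level differences at high frequencies. As background, the sketch below computes ITDs with the standard Woodworth spherical-head approximation; the head radius is an assumed value.

```python
import numpy as np

# Woodworth spherical-head approximation for ITD versus source azimuth.
c = 343.0            # speed of sound (m/s)
r = 0.0875           # assumed head radius (m)

def woodworth_itd(azimuth_deg):
    """ITD for a distant source at the given azimuth (0 deg = straight ahead)."""
    theta = np.radians(azimuth_deg)
    return (r / c) * (theta + np.sin(theta))

for az in (0, 30, 60, 90):
    print(f"azimuth {az:2d} deg: ITD = {woodworth_itd(az) * 1e6:6.0f} us")
```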

16.
Eur J Neurosci ; 45(2): 278-289, 2017 01.
Article in English | MEDLINE | ID: mdl-27740711

ABSTRACT

Enhanced detection and discrimination, along with faster reaction times, are the most typical behavioural manifestations of the brain's capacity to integrate multisensory signals arising from the same object. In this study, we examined whether multisensory behavioural gains are observable across different components of the localization response that are potentially under the command of distinct brain regions. We measured the ability of ferrets to localize unisensory (auditory or visual) and spatiotemporally coincident auditory-visual stimuli of different durations that were presented from one of seven locations spanning the frontal hemifield. During the localization task, we recorded the head movements made following stimulus presentation, as a metric for assessing the initial orienting response of the ferrets, as well as the subsequent choice of which target location to approach to receive a reward. Head-orienting responses to auditory-visual stimuli were more accurate and faster than those made to visual but not auditory targets, suggesting that these movements were guided principally by sound alone. In contrast, approach-to-target localization responses were more accurate and faster to spatially congruent auditory-visual stimuli throughout the frontal hemifield than to either visual or auditory stimuli alone. Race model inequality analysis of head-orienting reaction times and approach-to-target response times indicates that different processes, probability summation and neural integration, respectively, are likely to be responsible for the effects of multisensory stimulation on these two measures of localization behaviour.
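The race model inequality analysis mentioned in the final sentence tests whether bimodal reaction times exceed the bound achievable by probability summation, F_AV(t) <= F_A(t) + F_V(t). A minimal sketch follows; the reaction times are fabricated solely to show the computation.

```python
import numpy as np

# Race model inequality (Miller, 1982): if audiovisual RTs beat the bound
# F_A(t) + F_V(t), probability summation cannot explain the gain and
# neural integration is implicated.
rng = np.random.default_rng(1)
rt_a  = rng.normal(320, 40, 200)    # auditory-only RTs (ms), hypothetical
rt_v  = rng.normal(360, 45, 200)    # visual-only RTs, hypothetical
rt_av = rng.normal(280, 35, 200)    # audiovisual RTs, hypothetical

def ecdf(sample, t):
    """Empirical cumulative distribution evaluated at each point in t."""
    return np.mean(sample[:, None] <= t, axis=0)

t_grid = np.linspace(150, 500, 50)
bound = np.minimum(ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid), 1.0)
violation = ecdf(rt_av, t_grid) - bound
print(f"max violation of the race model bound: {violation.max():.3f}")
print("race model violated" if violation.max() > 0 else "no violation")
```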


Subject(s)
Auditory Perception/physiology , Behavior, Animal/physiology , Brain/physiology , Orientation/physiology , Visual Perception/physiology , Acoustic Stimulation/methods , Animals , Female , Ferrets , Male , Photic Stimulation/methods , Reaction Time/physiology
17.
Biol Cybern ; 110(6): 455-471, 2016 12.
Article in English | MEDLINE | ID: mdl-27815630

ABSTRACT

Vision typically has better spatial accuracy and precision than audition and as a result often captures auditory spatial perception when visual and auditory cues are presented together. One determinant of visual capture is the amount of spatial disparity between auditory and visual cues: when disparity is small, visual capture is likely to occur, and when disparity is large, visual capture is unlikely. Previous experiments have used two methods to probe how visual capture varies with spatial disparity. First, congruence judgment assesses perceived unity between cues by having subjects report whether or not auditory and visual targets came from the same location. Second, auditory localization assesses the graded influence of vision on auditory spatial perception by having subjects point to the remembered location of an auditory target presented with a visual target. Previous research has shown that when both tasks are performed concurrently they produce similar measures of visual capture, but this may not hold when tasks are performed independently. Here, subjects alternated between tasks independently across three sessions. A Bayesian inference model of visual capture was used to estimate perceptual parameters for each session, which were compared across tasks. Results demonstrated that the range of audiovisual disparities over which visual capture was likely to occur was narrower in auditory localization than in congruence judgment, which the model indicates was caused by subjects adjusting their prior expectation that targets originated from the same location in a task-dependent manner.
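The Bayesian inference model of visual capture can be sketched as causal inference over a common versus independent source. In the toy version below, the posterior probability of a common cause falls with audiovisual disparity, and the prior p_common is the task-dependent quantity the study found subjects adjusted; all parameter values are illustrative.

```python
import numpy as np

# Toy causal-inference account of visual capture.
sigma_v, sigma_a = 2.0, 8.0          # visual and auditory cue SDs (deg), assumed

def p_common_cause(x_v, x_a, p_common=0.5):
    """Posterior probability that both cues came from one source."""
    var_sum = sigma_v**2 + sigma_a**2
    like_c1 = np.exp(-(x_v - x_a)**2 / (2 * var_sum)) / np.sqrt(2 * np.pi * var_sum)
    # Under independent causes the disparity is uninformative; a flat
    # likelihood over +/- 90 deg is assumed here for simplicity.
    like_c2 = 1.0 / 180.0
    return like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))

for disparity in (0, 5, 10, 20, 40):
    print(f"disparity {disparity:2d} deg -> P(common) = "
          f"{p_common_cause(0.0, disparity):.2f}")
```

With these parameters, P(common) stays high for small disparities (capture likely) and collapses for large ones, reproducing the qualitative pattern the model was fit to.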


Subject(s)
Auditory Perception , Models, Biological , Animals , Bayes Theorem , Humans , Judgment , Sound Localization , Space Perception , Visual Perception
18.
Neuropsychologia ; 91: 120-140, 2016 Oct.
Article in English | MEDLINE | ID: mdl-27424274

ABSTRACT

Hemianopic patients retain some abilities to integrate audiovisual stimuli in the blind hemifield, showing both modulation of visual perception by auditory stimuli and modulation of auditory perception by visual stimuli. Indeed, conscious detection of a visual target in the blind hemifield can be improved by a spatially coincident auditory stimulus (auditory enhancement of visual detection), while a visual stimulus in the blind hemifield can improve localization of a spatially coincident auditory stimulus (visual enhancement of auditory localization). To gain more insight into the neural mechanisms underlying these two perceptual phenomena, we propose a neural network model including areas of neurons representing the retina, primary visual cortex (V1), extrastriate visual cortex, auditory cortex and the Superior Colliculus (SC). The visual and auditory modalities in the network interact via both direct cortical-cortical connections and subcortical-cortical connections involving the SC; the latter, in particular, integrates visual and auditory information and projects back to the cortices. Hemianopic patients were simulated by unilaterally lesioning V1, and preserving spared islands of V1 tissue within the lesion, to analyze the role of residual V1 neurons in mediating audiovisual integration. The network is able to reproduce the audiovisual phenomena in hemianopic patients, linking perceptions to neural activations, and disentangles the individual contribution of specific neural circuits and areas via sensitivity analyses. The study suggests i) a common key role of SC-cortical connections in mediating the two audiovisual phenomena; ii) a different role of visual cortices in the two phenomena: auditory enhancement of conscious visual detection being conditional on surviving V1 islands, while visual enhancement of auditory localization persisting even after complete V1 damage. The present study may contribute to advance understanding of the audiovisual dialogue between cortical and subcortical structures in healthy and unisensory deficit conditions.


Subject(s)
Auditory Perception/physiology , Cerebral Cortex/physiopathology , Hemianopsia/physiopathology , Neural Networks, Computer , Superior Colliculi/physiopathology , Visual Perception/physiology , Cerebral Cortex/physiology , Computer Simulation , Humans , Models, Neurological , Neural Pathways/physiology , Neural Pathways/physiopathology , Neurons/physiology , Superior Colliculi/physiology , Synapses/physiology
19.
J Assoc Res Otolaryngol ; 17(3): 209-21, 2016 06.
Article in English | MEDLINE | ID: mdl-27033087

ABSTRACT

The location of a sound is derived computationally from acoustical cues rather than being inherent in the topography of the input signal, as in vision. Since Lord Rayleigh, the descriptions of that representation have swung between "labeled line" and "opponent process" models. Employing a simple variant of a two-point separation judgment using concurrent speech sounds, we found that spatial discrimination thresholds changed nonmonotonically as a function of the overall separation. Rather than increasing with separation, spatial discrimination thresholds first declined as two-point separation increased before reaching a turning point and increasing thereafter with further separation. This "dipper" function, with a minimum at 6° of separation, was seen for regions around the midline as well as for more lateral regions (30° and 45°). The discrimination thresholds for the binaural localization cues were linear over the same range, so these cannot explain the shape of these functions. These data and a simple computational model indicate that the perception of auditory space involves a local code or multichannel mapping emerging subsequent to the binaural cue coding.


Subject(s)
Auditory Perception , Sound Localization , Adult , Auditory Threshold , Cues , Female , Humans , Male
20.
J Autism Dev Disord ; 46(7): 2539-47, 2016 Jul.
Article in English | MEDLINE | ID: mdl-27011323

ABSTRACT

Convergent research suggests that people with ASD have difficulties localizing sounds in space. These difficulties have implications for communication, the development of social behavior, and quality of life. Recently, a theory has emerged which treats perceptual symptoms in ASD as the product of impairments in implicit Bayesian inference; as suboptimalities in the integration of sensory evidence with prior perceptual knowledge. We present the results of an experiment that applies this new theory to understanding difficulties in auditory localization, and we find that adults with ASD integrate prior information less optimally when making perceptual judgments about the spatial sources of sounds. We discuss these results in terms of their implications for formal models of symptoms in ASD.
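The "suboptimal integration of prior knowledge" finding can be sketched as a Gaussian prior whose precision is down-weighted before being combined with the likelihood. In the toy model below, w = 1 is the Bayes-optimal observer and w < 1 corresponds to reduced prior reliance; all parameters are illustrative.

```python
import numpy as np

# Gaussian prior-likelihood combination with a prior weight w.
mu_prior, sigma_prior = 0.0, 10.0     # prior: sounds tend to come from ahead
sigma_like = 6.0                      # sensory noise on a single sound, assumed

def estimate(x_obs, w=1.0):
    """Posterior-mean location estimate with a (possibly down-weighted) prior."""
    precision_like = 1 / sigma_like**2
    precision_prior = w / sigma_prior**2          # w < 1 shrinks prior influence
    return (x_obs * precision_like + mu_prior * precision_prior) / (
        precision_like + precision_prior)

x = 20.0  # observed (noisy) azimuth in degrees
for w in (1.0, 0.5, 0.0):
    print(f"prior weight {w:.1f}: estimate = {estimate(x, w):.1f} deg")
```

As w shrinks, estimates drift from the prior-attracted optimum toward the raw observation, which is the qualitative signature reported for the ASD group.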


Subject(s)
Acoustic Stimulation/methods , Acoustic Stimulation/psychology , Autism Spectrum Disorder/diagnosis , Autism Spectrum Disorder/psychology , Sound Localization/physiology , Adult , Bayes Theorem , Female , Humans , Judgment/physiology , Male , Psychomotor Performance/physiology , Quality of Life/psychology , Reaction Time/physiology