1.
J Speech Lang Hear Res ; 66(9): 3665-3676, 2023 09 13.
Article in English | MEDLINE | ID: mdl-37556819

ABSTRACT

PURPOSE: In voice perception, two voice cues, the fundamental frequency (fo) and the vocal tract length (VTL), contribute substantially to the identification of voices and speaker characteristics. The acoustic content related to these cues is altered in cochlear implant transmitted speech, making voice perception difficult for implant users. In everyday listening, top-down compensatory mechanisms, such as the use of linguistic content, may provide some facilitation. Recently, we have shown a lexical content benefit on just-noticeable differences (JNDs) in VTL perception, which was not affected by vocoding. This study investigated whether that benefit relates to lexicality or to phonemic content, and whether additional sentence information can also affect voice cue perception. METHOD: This study examined the lexical benefit on VTL perception by comparing words, time-reversed words, and nonwords, separating the contributions of lexical (words vs. nonwords) and phonetic (nonwords vs. reversed words) information. In addition, we investigated the effect of the amount of speech (auditory) information on fo and VTL voice cue perception by comparing words to sentences. In both experiments, nonvocoded and vocoded auditory stimuli were presented. RESULTS: The outcomes replicated the detrimental effect of reversed words on VTL perception: JNDs were smaller for stimuli containing lexical and/or phonemic information. Experiment 2 showed a benefit of full sentences over single words in both fo and VTL perception. In both experiments there was an effect of vocoding, which interacted with sentence information only for fo. CONCLUSIONS: In addition to previous findings suggesting a lexical benefit, the current results show, more specifically, that lexical and phonemic information improves VTL perception, and that both fo and VTL perception benefit from sentence-length material compared to single words. These results indicate that cochlear implant users may be able to partially compensate for voice cue perception difficulties by relying on the linguistic content and rich acoustic cues of everyday speech. SUPPLEMENTAL MATERIAL: https://doi.org/10.23641/asha.23796405.
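
JNDs for fo and VTL in this literature are commonly expressed in semitones. As a minimal sketch of how such values translate into stimulus scale factors (the 6- and 4-semitone example sizes are arbitrary, not values from the study):

```python
# Hypothetical illustration (not the authors' code): semitone differences
# map to multiplicative frequency scale factors.

def semitones_to_ratio(st: float) -> float:
    """A difference of `st` semitones corresponds to this frequency ratio."""
    return 2.0 ** (st / 12.0)

# fo: shifting a voice up by, e.g., 6 semitones multiplies F0 by ~1.41.
f0_scale = semitones_to_ratio(6.0)

# VTL: a VTL increase expressed in semitones divides all formant frequencies
# by the corresponding ratio (a longer tract has lower resonances), i.e.,
# the spectral envelope is compressed in frequency.
vtl_scale = 1.0 / semitones_to_ratio(4.0)

print(f"F0 scale: {f0_scale:.2f}, formant scale: {vtl_scale:.2f}")
```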


Subject(s)
Cochlear Implants , Speech Perception , Voice , Humans , Cues , Speech Acoustics
2.
Ear Hear ; 44(4): 900-916, 2023.
Article in English | MEDLINE | ID: mdl-36695603

ABSTRACT

OBJECTIVES: Understanding speech in real life, such as in multiple-talker listening conditions, can be challenging and effortful. Fundamental frequency (fo) and vocal tract length (VTL) voice cues can help listeners segregate talkers, enhancing speech perception in adverse listening conditions. Previous research showed lower sensitivity to fo and VTL voice cues when the speech signal was degraded, as in cochlear implant hearing and vocoder listening compared to normal hearing, likely contributing to difficulties in understanding speech in adverse conditions. Nevertheless, when multiple talkers are present, familiarity with a talker's voice, acquired via training or exposure, can provide a speech intelligibility benefit. The objective of this study was to assess how implicit short-term voice training affects perceptual discrimination of voice cues (fo+VTL), measured as sensitivity and listening effort, with and without vocoder degradations. DESIGN: Voice training was provided by listening to a recording of a book segment for approximately 30 min and answering text-related questions, to ensure engagement. Just-noticeable differences (JNDs) for fo+VTL were measured with an odd-one-out task implemented as a 3-alternative forced-choice adaptive paradigm, while pupil data were collected simultaneously. The reference voice was either the trained voice or an untrained voice. Effects of voice training (trained vs. untrained voice), vocoding (nonvocoded vs. vocoded), and item variability (consonant-vowel triplets fixed or varied across the three intervals) on voice cue sensitivity (fo+VTL JNDs) and listening effort (pupillometry measurements) were analyzed. RESULTS: Voice training did not have a significant effect on voice cue discrimination. As expected, fo+VTL JNDs were significantly larger for vocoded than for nonvocoded conditions and for variable than for fixed item presentations. Generalized additive mixed model analysis of pupil dilation over the time course of stimulus presentation showed that pupil dilation was significantly larger during fo+VTL discrimination for untrained than for trained voices, but only for vocoder-degraded speech. Peak pupil dilation was significantly larger for vocoded than for nonvocoded conditions, and variable items increased the pupil baseline relative to fixed items, which could suggest higher anticipated task difficulty. CONCLUSIONS: Even though short voice training did not improve sensitivity to small fo+VTL voice cue differences at the discrimination threshold level, it did reduce listening effort for discrimination among vocoded voice cues.
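
The JND procedure described here is a 3-alternative forced-choice odd-one-out task with adaptive tracking. The sketch below shows a generic 2-down/1-up staircase of the kind used for such tasks; the step size, start value, stopping rule, and toy listener are illustrative assumptions, not the published procedure.

```python
import math
import random

def run_staircase(present_trial, start=12.0, step=2.0, n_reversals=8):
    """present_trial(delta) runs one 3-AFC trial with a voice cue difference
    of `delta` semitones and returns True if the odd interval was chosen."""
    delta, n_correct, last_move = start, 0, None
    reversals = []
    while len(reversals) < n_reversals:
        if present_trial(delta):
            n_correct += 1
            if n_correct < 2:
                continue                # need two correct before stepping down
            n_correct, move = 0, -1
        else:
            n_correct, move = 0, +1
        if last_move is not None and move != last_move:
            reversals.append(delta)     # a reversal is a change of direction
        last_move = move
        delta = max(delta + move * step, 0.1)
    # 2-down/1-up converges on ~70.7% correct; average the last reversals.
    return sum(reversals[-6:]) / len(reversals[-6:])

# Toy listener with a "true" threshold near 4 semitones, for demonstration.
jnd = run_staircase(lambda d: random.random() < 1/3 + (2/3) / (1 + math.exp(-(d - 4.0))))
print(f"estimated JND: {jnd:.1f} semitones")
```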


Subject(s)
Cochlear Implants , Speech Perception , Humans , Cues , Listening Effort , Voice Training , Auditory Perception , Speech Intelligibility
3.
J Acoust Soc Am ; 151(5): 3116, 2022 05.
Article in English | MEDLINE | ID: mdl-35649891

ABSTRACT

Acoustics research involving human participants typically takes place in specialized laboratory settings. Listening studies, for example, may present controlled sounds using calibrated transducers in sound-attenuating or anechoic chambers. In contrast, remote testing takes place outside the laboratory in everyday settings (e.g., participants' homes). Remote testing could provide greater access to participants, larger sample sizes, and opportunities to characterize performance in typical listening environments, at the cost of reduced control over environmental conditions, less precise calibration, and inconsistency in attentional state and/or response behaviors arising from relatively small sample sizes and unintuitive experimental tasks. The Acoustical Society of America Technical Committee on Psychological and Physiological Acoustics launched the Task Force on Remote Testing (https://tcppasa.org/remotetesting/) in May 2020 with the goals of surveying approaches and platforms available to support remote testing and identifying challenges and considerations for prospective investigators. The results of this task force survey were made available online as a set of Wiki pages and are summarized in this report. Based on the Wiki and a literature search of papers published in this area since 2020, this report outlines the state of the art of remote testing in auditory-related research as of August 2021 and provides three case studies to demonstrate feasibility in practice.


Subject(s)
Acoustics , Auditory Perception , Attention/physiology , Humans , Prospective Studies , Sound
4.
J Acoust Soc Am ; 150(3): 1620, 2021 09.
Article in English | MEDLINE | ID: mdl-34598602

ABSTRACT

Perceptual differences in voice cues, such as fundamental frequency (F0) and vocal tract length (VTL), can facilitate speech understanding in challenging conditions. Yet, we hypothesized that in the presence of spectrotemporal signal degradations, as imposed by cochlear implants (CIs) and vocoders, acoustic cues that overlap for voice perception and phonemic categorization could be mistaken for one another, leading to a strong interaction between linguistic and indexical (talker-specific) content. Fifteen normal-hearing participants performed an odd-one-out adaptive task measuring just-noticeable differences (JNDs) in F0 and VTL. Items used were words (lexical content) or time-reversed words (no lexical content). The use of lexical content was either promoted (by using variable items across comparison intervals) or not (fixed item). Finally, stimuli were presented without or with vocoding. Results showed that JNDs for both F0 and VTL were significantly smaller (better) for non-vocoded compared with vocoded speech and for fixed compared with variable items. Lexical content (forward vs reversed) affected VTL JNDs in the variable item condition, but F0 JNDs only in the non-vocoded, fixed condition. In conclusion, lexical content had a positive top-down effect on VTL perception when acoustic and linguistic variability was present but not on F0 perception. Lexical advantage persisted in the most degraded conditions and vocoding even enhanced the effect of item variability, suggesting that linguistic content could support compensation for poor voice perception in CI users.
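
Several of these studies compare nonvocoded with vocoded stimuli. As a rough illustration of the general technique, the following is a minimal noise-band vocoder sketch; the channel count, band edges, filter orders, and 300-Hz envelope cutoff are illustrative choices (assuming fs >= 16 kHz), not the parameters of the cited experiments.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocode(x, fs, n_channels=12, f_lo=150.0, f_hi=7000.0, env_cutoff=300.0):
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)   # log-spaced band edges
    env_sos = butter(2, env_cutoff, btype="lowpass", fs=fs, output="sos")
    rng = np.random.default_rng(0)
    y = np.zeros_like(x, dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(band_sos, x)                # analysis band
        env = np.maximum(sosfiltfilt(env_sos, np.abs(band)), 0.0)  # envelope
        carrier = sosfiltfilt(band_sos, rng.standard_normal(len(x)))
        y += env * carrier                             # re-modulated noise band
    return y / (np.max(np.abs(y)) + 1e-12)             # normalize
```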


Subject(s)
Cochlear Implants , Speech Perception , Acoustic Stimulation , Acoustics , Cues , Humans , Linguistics
5.
Hear Res ; 406: 108255, 2021 07.
Article in English | MEDLINE | ID: mdl-33964552

ABSTRACT

Recently, we showed that higher reward results in increased pupil dilation during listening (listening effort). Remarkably, this effect was not accompanied by improved speech reception. Still, increased listening effort may reflect more in-depth processing, potentially resulting in a better memory representation of speech. Here, we investigated this hypothesis by also testing the effect of monetary reward on recognition memory performance. Twenty-four young adults performed speech reception threshold (SRT) tests, either hard or easy, in which they repeated sentences uttered by a female talker masked by a male talker. We recorded the pupil dilation response during listening. Participants could earn a high or low reward, and the four conditions were presented in a blocked fashion. After each SRT block, participants performed a visual sentence recognition task, in which the sentences presented in the preceding SRT task were shown visually in random order, intermixed with unfamiliar sentences, and participants had to indicate whether they had previously heard each sentence. The SRT and sentence recognition were affected by task difficulty but not by reward. Contrary to our previous results, peak pupil dilation did not reflect effects of reward. However, post hoc time-course analysis with generalized additive mixed models (GAMMs) revealed that in the hard SRT task, the pupil response was larger for high than for low reward. We did not observe an effect of reward on visual sentence recognition. Hence, the current results provide no conclusive evidence that the effect of monetary reward on the pupil response relates to the memory encoding of speech.
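
Peak pupil dilation, the summary measure used throughout these studies, is conventionally computed per trial as the maximum of the baseline-corrected trace. A minimal sketch, with illustrative window boundaries rather than the study's actual settings:

```python
import numpy as np

def peak_pupil_dilation(trace, fs, baseline_s=1.0, window_s=3.5):
    """`trace` is a cleaned pupil-diameter time series whose first
    `baseline_s` seconds precede stimulus onset."""
    onset = int(baseline_s * fs)
    baseline = np.nanmean(trace[:onset])        # mean of pre-stimulus window
    corrected = trace - baseline                # baseline-corrected dilation
    end = onset + int(window_s * fs)
    return float(np.nanmax(corrected[onset:end]))
```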


Subject(s)
Listening Effort , Speech Intelligibility , Speech Perception , Female , Humans , Male , Noise/adverse effects , Pupil , Reward , Young Adult
6.
Trends Hear ; 22: 2331216518811444, 2018.
Article in English | MEDLINE | ID: mdl-30482105

ABSTRACT

Previous research has shown effects of task demands on pupil responses in both normal-hearing (NH) and hearing-impaired (HI) adults. One consistent finding is that HI listeners have smaller pupil dilations at low levels of speech recognition performance (≤50%). This study examined the pupil dilation response of adults with a normal pure-tone audiogram who experience serious difficulties when processing speech in noise. Twenty adults, aged 26 to 62 years, with traumatic brain injury (TBI) or cerebrovascular accident (CVA) but with a normal audiogram participated. Their pupil size was recorded while they listened to sentences masked by fluctuating noise or interfering speech at 50% and 84% intelligibility. In each condition, participants rated their perceived performance, effort, and task persistence. In addition, participants performed the text reception threshold task, a visual sentence-completion task that measures language-related processing. Data were compared with those of age-matched NH and HI participants with no neurological problems, obtained in earlier studies using the same setup and design. The TBI group had the same pure-tone audiogram and text reception threshold scores as the NH listeners, yet their speech reception thresholds were significantly worse. Although the pupil dilation responses on average did not differ between groups, self-rated effort scores were highest in the TBI group. A correlation analysis showed that TBI participants with worse speech reception thresholds had a smaller pupil response. We speculate that increased distractibility or fatigue affected the ability of TBI participants to allocate effort during speech perception in noise.


Subject(s)
Brain Injuries, Traumatic/psychology , Cognition , Noise/adverse effects , Perceptual Masking , Pupil , Speech Perception , Acoustic Stimulation , Adult , Audiometry, Pure-Tone , Brain Injuries, Traumatic/diagnosis , Brain Injuries, Traumatic/physiopathology , Case-Control Studies , Humans , Male , Middle Aged , Persons With Hearing Impairments/psychology , Photic Stimulation , Recognition, Psychology , Speech Intelligibility , Speech Reception Threshold Test
7.
Trends Hear ; 22: 2331216518777174, 2018.
Article in English | MEDLINE | ID: mdl-30249172

ABSTRACT

The measurement of cognitive resource allocation during listening, or listening effort, provides valuable insight into the factors influencing auditory processing. In recent years, many studies inside and outside the field of hearing science have measured the pupil response evoked by auditory stimuli. The aim of the current review was to provide an exhaustive overview of these studies. The 146 studies included in this review originated from multiple domains, including hearing science and linguistics, but the review also covers research into motivation, memory, and emotion. The present review provides a unique overview of these studies and is organized according to the components of the Framework for Understanding Effortful Listening. A summary table presents the sample characteristics, an outline of the study design, the stimuli, the pupil parameters analyzed, and the main findings of each study. The results indicate that the pupil response is sensitive to various task manipulations as well as to interindividual differences. Many of the findings have been replicated. Frequent interactions between the independent factors affecting the pupil response have been reported, which points to complex processes underlying cognitive resource allocation. This complexity should be taken into account in future studies, which should focus more on interindividual differences, including those of older participants. This review facilitates the careful design of new studies by indicating the factors that should be controlled for. In conclusion, measuring the pupil dilation response to auditory stimuli has been demonstrated to be a sensitive method applicable to numerous research questions, and the sensitivity of the measure calls for carefully designed stimuli.


Subject(s)
Acoustic Stimulation/methods , Attention/physiology , Hearing/physiology , Pupil/drug effects , Reaction Time , Auditory Perception/physiology , Auditory Threshold , Dilatation/methods , Female , Humans , Male , Mydriasis , Sensitivity and Specificity , Speech Perception/physiology
8.
Trends Hear ; 22: 2331216518799437, 2018.
Article in English | MEDLINE | ID: mdl-30208763

ABSTRACT

In recent years, the fields of audiology and the cognitive sciences have seen a burgeoning of research focusing on the assessment of the effort required during listening. Among approaches to this question, the pupil dilation response has been shown to be an informative, nonvolitional indicator of cognitive processing during listening. Currently, pupillometry is applied in laboratories throughout the world to assess how listening effort is influenced by various relevant factors, such as hearing loss, signal-processing algorithms, cochlear implant rehabilitation, cognitive abilities, language competency, and daily-life hearing disability. The aim of this special issue is to provide an overview of the state of the art in research applying pupillometry, to offer guidance for those considering embarking on pupillometry studies, and to illustrate the diverse ways in which the method can be used to answer, and raise, pertinent research questions.


Subject(s)
Attention/physiology , Audiometry/methods , Auditory Perception/physiology , Hearing Loss/diagnosis , Pupil/physiology , Speech Perception/physiology , Cognition/physiology , Female , Humans , Male , Mydriasis , Reaction Time , Signal-To-Noise Ratio , Speech Intelligibility/physiology
9.
Trends Hear ; 22: 2331216518800869, 2018.
Article in English | MEDLINE | ID: mdl-30261825

ABSTRACT

Within the field of hearing science, pupillometry is a widely used method for quantifying listening effort. Its use in research is growing exponentially, and many labs are applying, or considering applying, pupillometry for the first time. Hence, there is a growing need for a methods paper on pupillometry covering topics spanning from experiment logistics and timing to data cleaning and the choice of parameters to analyze. This article contains the basic information and considerations needed to plan, set up, and interpret a pupillometry experiment, as well as commentary on how to interpret the response. Included are practicalities such as minimal system requirements for recording a pupil response, specifications for peripheral equipment, experiment logistics and constraints, and different kinds of data processing. Additional details include participant inclusion and exclusion criteria and some methodological considerations that might not be necessary in other auditory experiments. We discuss what data should be recorded and how to monitor data quality during recording in order to minimize artifacts. Data processing and analysis are considered as well. Finally, we share insights from the collective experience of the authors and discuss some of the challenges that still lie ahead.
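
One data-cleaning step such guidelines typically cover is blink handling. The sketch below marks samples where the tracker lost the pupil (zeros or NaNs), pads the gaps to cover partial occlusion at blink edges, and linearly interpolates across them; the 50-ms padding is an illustrative choice, not a recommendation from the paper.

```python
import numpy as np

def interpolate_blinks(trace, fs, pad_s=0.05):
    x = np.asarray(trace, dtype=float).copy()
    bad = ~np.isfinite(x) | (x <= 0)            # tracker-loss samples
    pad = int(pad_s * fs)
    for i in np.flatnonzero(bad):               # dilate the mask around gaps
        bad[max(0, i - pad):i + pad + 1] = True
    good = np.flatnonzero(~bad)
    if good.size:                               # linear interpolation across gaps
        x[np.flatnonzero(bad)] = np.interp(np.flatnonzero(bad), good, x[good])
    return x, bad
```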


Subject(s)
Attention/physiology , Dilatation/methods , Hearing/physiology , Practice Guidelines as Topic , Pupil/physiology , Speech Perception/physiology , Audiometry, Pure-Tone/methods , Female , Humans , Male , Ophthalmology/methods , Reaction Time , Sensitivity and Specificity
10.
Hear Res ; 367: 106-112, 2018 09.
Article in English | MEDLINE | ID: mdl-30096490

ABSTRACT

Listening to speech in noise can be effortful, but when motivated, people seem to be more persevering. Previous research showed effects of monetary reward on autonomic responses, such as cardiovascular reactivity and pupil dilation, while participants processed auditory information. The current study examined the effects of monetary reward on the processing of speech in noise and the related listening effort as reflected by the pupil dilation response. Twenty-four participants (median age 21 years) performed two speech reception threshold (SRT) tasks, one tracking 50% correct (hard) and one tracking 85% correct (easy), in both of which they listened to and repeated sentences uttered by a female talker. The sentences were presented with a single male talker or, in a control condition, in quiet. Participants were told that they could earn a high (5 euros) or low (0.20 euro) reward if they repeated 70% or more of the sentences correctly. Conditions were presented in a blocked fashion, and pupil diameter was recorded during each trial. At the end of each block, participants rated the effort they had experienced, their performance, and their tendency to quit listening. Additionally, participants performed a working memory capacity task and filled in a need-for-recovery questionnaire, as both tap into factors that influence the pupil dilation response. The results showed no effect of reward on speech perception performance as reflected by the SRT. The peak pupil dilation was significantly larger for high than for low reward in the easy and hard conditions, but not in the control condition. Higher need for recovery was associated with a higher subjective tendency to quit listening. Consistent with the Framework for Understanding Effortful Listening, we conclude that listening effort as reflected by the peak pupil dilation is sensitive to the amount of monetary reward.
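
SRT tasks of this kind adaptively track the SNR at a target intelligibility. As a generic sketch (not the published procedure): a 1-up/1-down rule with whole-sentence scoring converges on 50%, and Kaernbach-style weighted steps (up-step = down-step * p/(1-p)) track other targets such as 85%. Step size, trial count, and the averaging rule are assumptions.

```python
def track_srt(score_sentence, target=0.5, n_trials=20, start_snr=0.0, down=2.0):
    """score_sentence(snr) presents one sentence at `snr` dB and returns
    True if the participant repeated it correctly."""
    up = down * target / (1.0 - target)   # equal up/down steps when target = 0.5
    snr, history = start_snr, []
    for _ in range(n_trials):
        history.append(snr)
        snr += -down if score_sentence(snr) else up   # harder after a correct trial
    return sum(history[4:]) / len(history[4:])        # SRT: mean of later trials
```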


Subject(s)
Attention , Noise/adverse effects , Perceptual Masking , Pupil/physiology , Reflex , Speech Perception , Token Economy , Acoustic Stimulation , Adolescent , Adult , Auditory Threshold , Comprehension , Female , Humans , Male , Memory, Short-Term , Middle Aged , Reaction Time , Speech Intelligibility , Speech Reception Threshold Test , Time Factors , Young Adult
11.
Hear Res ; 369: 67-78, 2018 11.
Article in English | MEDLINE | ID: mdl-29858121

ABSTRACT

Difficulties arising in everyday speech communication often result from the acoustical environment, which may contain interfering background noise or competing speakers; listening to and understanding speech in noise can therefore be exhausting. The current study presents two experiments that further explored the impact of masker type and signal-to-noise ratio (SNR) on listening effort by means of pupillometry. In both experiments, the pupillary responses of participants were measured while they performed the Danish Hearing in Noise Test (HINT; Nielsen and Dau, 2011). The first experiment aimed to replicate and extend earlier observed effects of noise type and semantic interference on listening effort (Koelewijn et al., 2012): the impact of three masker types (fluctuating noise, a 1-talker masker, and a 4-talker masker) on listening effort was examined at a fixed speech intelligibility. In the second experiment, effects of SNR on listening effort were examined by presenting the HINT sentences across a broad range of fixed SNRs corresponding to intelligibility scores ranging from 100% to 0% correct. The peak pupil dilation (PPD) was calculated, and a growth curve analysis (GCA) was performed to examine the listening effort involved in speech recognition as a function of SNR. The results of the two experiments showed that the pupil dilation response is strongly affected by both masker type and SNR during the HINT. The PPD was highest, suggesting the highest level of effort, for speech recognition in the presence of the 1-talker masker, compared to the 4-talker babble and the fluctuating noise masker. However, the disrupting effect of one competing talker disappeared for intelligibility levels around 50%. Furthermore, the pupillary response varied strongly as a function of SNR: listening effort was highest for intermediate SNRs, with performance accuracies ranging between 30% and 70% correct. The GCA revealed time-dependent effects of SNR on the pupillary response that were not reflected in the PPD.
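
At its core, a growth curve analysis models the pupil time course with orthogonal polynomial time terms. The sketch below shows only that fixed-effect shape fit; a real GCA adds mixed-effects structure over subjects and conditions (typically in R with lme4), and the cubic order and toy trace here are illustrative assumptions.

```python
import numpy as np
from numpy.polynomial import legendre

def gca_coefficients(t, pupil, order=3):
    """Returns fitted coefficients: intercept ~ overall dilation,
    linear ~ ramp slope, quadratic ~ peakedness, and so on."""
    t_unit = 2.0 * (t - t.min()) / (t.max() - t.min()) - 1.0  # map time to [-1, 1]
    basis = legendre.legvander(t_unit, order)                 # orthogonal polynomial basis
    coef, *_ = np.linalg.lstsq(basis, pupil, rcond=None)
    return coef

t = np.linspace(0.0, 3.5, 200)
toy_trace = 0.2 * np.exp(-((t - 1.8) ** 2))   # toy pupil response, not real data
print(gca_coefficients(t, toy_trace).round(3))
```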


Subject(s)
Attention , Audiometry, Speech/methods , Noise/adverse effects , Perceptual Masking , Pupil/physiology , Speech Perception , Adolescent , Adult , Comprehension , Female , Humans , Male , Middle Aged , Speech Intelligibility , Young Adult
12.
Hear Res ; 354: 56-63, 2017 10.
Article in English | MEDLINE | ID: mdl-28869841

ABSTRACT

For people with hearing difficulties, following a conversation in a noisy environment requires substantial cognitive processing, which is often perceived as effortful. Recent studies with normal hearing (NH) listeners showed that the pupil dilation response, a measure of cognitive processing load, is affected by 'attention related' processes. How these processes affect the pupil dilation response for hearing impaired (HI) listeners remains unknown. Therefore, the current study investigated the effect of auditory attention on various pupil response parameters for 15 NH adults (median age 51 yrs.) and 15 adults with mild to moderate sensorineural hearing loss (median age 52 yrs.). Both groups listened to two different sentences presented simultaneously, one to each ear and partially masked by stationary noise. Participants had to repeat either both sentences or only one, for which they had to divide or focus attention, respectively. When repeating one sentence, the target sentence location (left or right) was either randomized or blocked across trials, which in the latter case allowed for a better spatial focus of attention. The speech-to-noise ratio was adjusted to yield about 50% sentences correct for each task and condition. NH participants had lower ('better') speech reception thresholds (SRT) than HI participants. The pupil measures showed no between-group effects, with the exception of a shorter peak latency for HI participants, which indicated a shorter processing time. Both groups showed higher SRTs and a larger pupil dilation response when two sentences were processed instead of one. Additionally, SRTs were higher and dilation responses were larger for both groups when the target location was randomized instead of fixed. We conclude that although HI participants could cope with less noise than the NH group, their ability to focus attention on a single talker, thereby improving SRTs and lowering cognitive processing load, was preserved. Shorter peak latencies could indicate that HI listeners adapt their listening strategy by not processing some information, which reduces processing time and thereby listening effort.


Subject(s)
Attention , Eye Movements , Hearing Loss, Sensorineural/physiopathology , Hearing Loss, Sensorineural/psychology , Persons With Hearing Impairments/psychology , Pupil , Sound Localization , Speech Perception , Speech Reception Threshold Test , Acoustic Stimulation , Adult , Aged , Auditory Threshold , Case-Control Studies , Cognition , Female , Hearing Loss, Sensorineural/diagnosis , Humans , Male , Middle Aged , Noise/adverse effects , Perceptual Masking , Reaction Time , Time Factors
13.
Hear Res ; 323: 81-90, 2015 May.
Article in English | MEDLINE | ID: mdl-25732724

ABSTRACT

Recent studies have shown that prior knowledge about where, when, and who is going to talk improves speech intelligibility. How related attentional processes affect cognitive processing load has not been investigated yet. In the current study, three experiments investigated how the pupil dilation response is affected by prior knowledge of target speech location, target speech onset, and who is going to talk. A total of 56 young adults with normal hearing participated. They had to reproduce a target sentence presented to one ear while ignoring a distracting sentence simultaneously presented to the other ear. The two sentences were independently masked by fluctuating noise. Target location (left or right ear), speech onset, and talker variability were manipulated in separate experiments by keeping these features either fixed during an entire block or randomized over trials. Pupil responses were recorded during listening and performance was scored after recall. The results showed an improvement in performance when the location of the target speech was fixed instead of randomized. Additionally, location uncertainty increased the pupil dilation response, which suggests that prior knowledge of location reduces cognitive load. Interestingly, the observed pupil responses for each condition were consistent with subjective reports of listening effort. We conclude that communicating in a dynamic environment like a cocktail party (where participants in competing conversations move unpredictably) requires substantial listening effort because of the demands placed on attentional processes.


Subject(s)
Attention , Noise/adverse effects , Perceptual Masking , Pupil/physiology , Sound Localization , Speech Intelligibility , Speech Perception , Acoustic Stimulation , Adolescent , Adult , Audiometry, Speech , Blinking , Cognition , Cues , Eye Movements , Female , Humans , Male , Mental Recall , Miosis , Mydriasis , Reflex, Pupillary , Time Factors , Uncertainty , Young Adult
14.
Hear Res ; 312: 114-20, 2014 Jun.
Article in English | MEDLINE | ID: mdl-24709275

ABSTRACT

Dividing attention over two streams of speech strongly decreases performance compared to focusing on only one. How divided attention affects cognitive processing load, as indexed with pupillometry during speech recognition, has so far not been investigated. In 12 young adults, the pupil response was recorded while they focused on either one or both of two sentences that were presented dichotically, masked by fluctuating noise, across a range of signal-to-noise ratios. In line with previous studies, performance decreased when processing two target sentences instead of one. Additionally, dividing attention to process two sentences caused larger pupil dilation and later peak pupil latency than processing only one, suggesting an effect of attention on cognitive processing load (pupil dilation) during speech processing in noise.


Subject(s)
Attention/physiology , Pupil/physiology , Reflex, Pupillary/physiology , Sound Localization/physiology , Speech Perception/physiology , Acoustic Stimulation/methods , Adult , Female , Functional Laterality/physiology , Humans , Male , Noise , Signal-To-Noise Ratio , Young Adult
15.
J Acoust Soc Am ; 135(3): 1596-606, 2014 Mar.
Article in English | MEDLINE | ID: mdl-24606294

ABSTRACT

A recent pupillometry study on adults with normal hearing indicates that the pupil response during speech perception (cognitive processing load) is strongly affected by the type of speech masker. The current study extends these results by recording the pupil response of 32 participants with hearing impairment (mean age 59 years) while they listened to sentences masked by fluctuating noise or a single talker. Efforts were made to improve the audibility of all sounds by means of spectral shaping. Additionally, participants performed tests measuring verbal working memory capacity, inhibition of interfering information in working memory, and linguistic closure. The results showed worse speech reception thresholds for speech masked by single-talker speech than for speech masked by fluctuating noise. In line with previous results for participants with normal hearing, the pupil response was larger when listening to speech masked by a single talker than by fluctuating noise. Regression analysis revealed that larger working memory capacity and better inhibition of interfering information were related to better speech reception thresholds, but these variables did not account for interindividual differences in the pupil response. In conclusion, people with hearing impairment show a higher cognitive load during speech processing when there is interfering speech rather than fluctuating noise.


Subject(s)
Cognition , Hearing Loss, Sensorineural/psychology , Noise/adverse effects , Perceptual Masking , Persons With Hearing Impairments/psychology , Reflex, Pupillary , Speech Perception , Acoustic Stimulation , Adult , Aged , Auditory Threshold , Eye Movements , Female , Hearing Loss, Sensorineural/physiopathology , Humans , Male , Memory , Middle Aged , Neuropsychological Tests , Speech Intelligibility , Speech Reception Threshold Test
16.
Trends Amplif ; 17(2): 75-93, 2013 Jun.
Article in English | MEDLINE | ID: mdl-23945955

ABSTRACT

The ability to recognize masked speech, commonly measured with a speech reception threshold (SRT) test, is associated with cognitive processing abilities. Two cognitive factors frequently assessed in speech recognition research are the capacity of working memory (WM), measured by means of a reading span (Rspan) or listening span (Lspan) test, and the ability to read masked text (linguistic closure), measured by the text reception threshold (TRT). The current article provides a review of recent hearing research that examined the relationship of TRT and WM span to SRTs in various maskers. Furthermore, modality differences in WM capacity assessed with the Rspan compared to the Lspan test were examined and related to speech recognition abilities in an experimental study with young adults with normal hearing (NH). Span scores were strongly associated with each other, but were higher in the auditory modality. The results of the reviewed studies suggest that TRT and WM span are related to each other, but differ in their relationships with SRT performance. In NH adults of middle age or older, both TRT and Rspan were associated with SRTs in speech maskers, whereas TRT better predicted speech recognition in fluctuating nonspeech maskers. The associations with SRTs in steady-state noise were inconclusive for both measures. WM span was positively related to benefit from contextual information in speech recognition, but better TRTs related to less interference from unrelated cues. Data for individuals with impaired hearing are limited, but larger WM span seems to give a general advantage in various listening situations.


Subject(s)
Linguistics , Memory, Short-Term , Noise/adverse effects , Perceptual Masking , Recognition, Psychology , Speech Intelligibility , Speech Perception , Verbal Behavior , Cognition , Cues , Humans , Reading , Speech Reception Threshold Test
17.
Int J Otolaryngol ; 2012: 865731, 2012.
Article in English | MEDLINE | ID: mdl-23091495

ABSTRACT

It is often assumed that the benefit of hearing aids is not primarily reflected in better speech performance, but rather in less effortful listening in the aided than in the unaided condition. Before such a hearing aid benefit can be assessed, it must be known how processing load while listening to masked speech relates to interindividual differences in cognitive abilities relevant for language processing, which the present study examined. Pupil dilation was measured in thirty-two normal-hearing participants while they listened to sentences masked by fluctuating noise or interfering speech at either 50% or 84% intelligibility. Additionally, working memory capacity, inhibition of irrelevant information, and written text reception were tested. Pupil responses were larger during interfering speech than during fluctuating noise, independent of intelligibility level. Regression analysis revealed that high working memory capacity, better inhibition, and better text reception were related to better speech reception thresholds. Apart from their positive relation to speech recognition, better inhibition and better text reception were also positively related to larger pupil dilation in the single-talker masker conditions. We conclude that better cognitive abilities not only relate to better speech perception, but also partly explain the higher processing load in complex listening conditions.

18.
Ear Hear ; 33(2): 291-300, 2012.
Article in English | MEDLINE | ID: mdl-21921797

ABSTRACT

OBJECTIVES: Recent research has demonstrated that pupil dilation, a measure of mental effort (cognitive processing load), is sensitive to differences in speech intelligibility. The present study extends this outcome by examining the effects of masker type and age on the speech reception threshold (SRT) and mental effort. DESIGN: In young and middle-aged adults, pupil dilation was measured while they performed an SRT task in which spoken sentences were presented in stationary noise, in fluctuating noise, or together with a single-talker masker. The masker levels were adjusted to achieve 50% or 84% sentence intelligibility. RESULTS: The results show better SRTs for fluctuating noise and for a single-talker masker than for stationary noise, replicating results of previous studies. The peak pupil dilation, reflecting mental effort, was larger in the single interfering speaker condition than in the other masker conditions. Remarkably, in contrast to the thresholds, no differences in peak dilation were observed between fluctuating noise and stationary noise. This effect was independent of intelligibility level and age. CONCLUSIONS: To maintain similar intelligibility levels, participants needed more mental effort for speech perception in the presence of a single-talker masker than with the two other types of maskers, suggesting an additive interfering effect of the speech information in the single-talker masker. The dissociation between these performance and mental effort measures underlines the importance of including measurements of pupil dilation as an independent index of mental effort during speech processing in different types of noisy environments and at different intelligibility levels.


Subject(s)
Hearing/physiology , Mental Processes/physiology , Perceptual Masking/physiology , Reflex, Pupillary/physiology , Speech Perception/physiology , Acoustic Stimulation/methods , Adolescent , Adult , Audiometry, Pure-Tone , Auditory Threshold/physiology , Female , Humans , Male , Middle Aged , Noise , Psychoacoustics , Speech Intelligibility , Young Adult
19.
Acta Psychol (Amst) ; 134(3): 372-84, 2010 Jul.
Article in English | MEDLINE | ID: mdl-20427031

ABSTRACT

Multisensory integration and crossmodal attention have a large impact on how we perceive the world, so it is important to know under what circumstances these processes take place and how they affect our performance. So far, no consensus has been reached on whether multisensory integration and crossmodal attention operate independently and whether they represent truly automatic processes. This review describes the constraints under which multisensory integration and crossmodal attention occur and the brain areas in which these processes take place. Some studies suggest that multisensory integration and crossmodal attention take place in higher heteromodal brain areas, while others show the involvement of early sensory-specific areas. Additionally, the current literature suggests that multisensory integration and attention interact depending on the processing level at which integration takes place. To shed light on this issue, different frameworks regarding the level at which multisensory interactions take place are discussed. Finally, this review considers whether audiovisual interactions, and crossmodal attention in particular, are automatic processes; recent studies suggest that this is not always the case. Overall, this review provides evidence for a parallel processing framework in which both multisensory integration and attentional processes take place, and can interact, at multiple stages in the brain.


Subject(s)
Attention/physiology , Auditory Perception/physiology , Brain/physiology , Visual Perception/physiology , Acoustic Stimulation , Humans , Photic Stimulation
20.
J Exp Psychol Hum Percept Perform ; 35(5): 1303-15, 2009 Oct.
Article in English | MEDLINE | ID: mdl-19803638

ABSTRACT

It is well known that auditory and visual onsets presented at a particular location can capture a person's visual attention. However, the question of whether such attentional capture disappears when attention is focused endogenously beforehand has not yet been answered. Moreover, previous studies have not differentiated between capture by onsets presented at a nontarget (invalid) location and possible performance benefits occurring when the target location is (validly) cued. In this study, the authors modulated the degree of attentional focus by presenting endogenous cues with varying reliability and by displaying placeholders indicating the precise areas where the target stimuli could occur. By using not only valid and invalid exogenous cues but also neutral cues that provide temporal but no spatial information, they found performance benefits as well as costs when attention is not strongly focused. The benefits disappear when the attentional focus is increased. These results indicate that there is bottom-up capture of visual attention by irrelevant auditory and visual stimuli that cannot be suppressed by top-down attentional control.


Subject(s)
Attention , Cues , Field Dependence-Independence , Inhibition, Psychological , Visual Perception , Acoustic Stimulation , Adolescent , Adult , Analysis of Variance , Auditory Perception , Awareness , Female , Humans , Male , Orientation , Photic Stimulation , Reference Values , Young Adult