Results 1 - 20 of 77
1.
Ear Hear ; 45(4): 1045-1058, 2024.
Article in English | MEDLINE | ID: mdl-38523125

ABSTRACT

OBJECTIVES: Despite performing well in standard clinical assessments of speech perception, many cochlear implant (CI) users report experiencing significant difficulties when listening in real-world environments. We hypothesize that this disconnect may be related, in part, to the limited ecological validity of tests that are currently used clinically and in research laboratories. The challenges that arise from degraded auditory information provided by a CI, combined with the listener's finite cognitive resources, may lead to difficulties when processing speech material that is more demanding than the single words or single sentences that are used in clinical tests. DESIGN: Here, we investigate whether speech identification performance and processing effort (indexed by pupil dilation measures) are affected when CI users or normal-hearing control subjects are asked to repeat two sentences presented sequentially instead of just one sentence. RESULTS: Response accuracy was minimally affected in normal-hearing listeners, but CI users showed a wide range of outcomes, from no change to decrements of up to 45 percentage points. The amount of decrement was not predictable from the CI users' performance in standard clinical tests. Pupillometry measures tracked closely with task difficulty in both the CI group and the normal-hearing group, even though the latter had speech perception scores near ceiling levels for all conditions. CONCLUSIONS: Speech identification performance is significantly degraded in many (but not all) CI users in response to input that is only slightly more challenging than standard clinical tests; specifically, when two sentences are presented sequentially before requesting a response, instead of presenting just a single sentence at a time. 
This potential "2-sentence problem" represents one of the simplest possible scenarios that go beyond presentation of the single words or sentences used in most clinical tests of speech perception, and it raises the possibility that even good performers in single-sentence tests may be seriously impaired by other ecologically relevant manipulations. The present findings also raise the possibility that a clinical version of a 2-sentence test may provide actionable information for counseling and rehabilitating CI users, and for people who interact with them closely.


Subject(s)
Cochlear Implants; Speech Perception; Humans; Male; Female; Adult; Middle Aged; Aged; Case-Control Studies; Pupil/physiology; Young Adult; Cochlear Implantation
2.
Front Psychol ; 14: 1225752, 2023.
Article in English | MEDLINE | ID: mdl-38054180

ABSTRACT

Introduction: In spite of its apparent ease, comprehension of spoken discourse represents a complex linguistic and cognitive operation. The difficulty of such an operation can increase when the speech is degraded, as is the case with cochlear implant users. However, the additional challenges imposed by degraded speech may be mitigated to some extent by the linguistic context and pace of presentation. Methods: An experiment is reported in which young adults with age-normal hearing recalled discourse passages heard with clear speech or with noise-band vocoding used to simulate the sound of speech produced by a cochlear implant. Passages were varied in inter-word predictability and presented either without interruption or in a self-pacing format that allowed the listener to control the rate at which the information was delivered. Results: Results showed that discourse heard with clear speech was better recalled than discourse heard with vocoded speech, discourse with a higher average inter-word predictability was better recalled than discourse with a lower average inter-word predictability, and self-paced passages were recalled better than those heard without interruption. Of special interest was the semantic hierarchy effect: the tendency for listeners to show better recall for main ideas than mid-level information or detail from a passage as an index of listeners' ability to understand the meaning of a passage. The data revealed a significant effect of inter-word predictability, in that passages with lower predictability had an attenuated semantic hierarchy effect relative to higher-predictability passages. Discussion: Results are discussed in terms of broadening cochlear implant outcome measures beyond current clinical measures that focus on single-word and sentence repetition.
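The noise-band vocoding used above to simulate CI-processed speech can be sketched in a few lines. This is an illustrative simulation only, not the study's implementation: the channel count, log-spaced channel edges, filter orders, and 50 Hz envelope cutoff are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def noise_vocode(signal, fs, n_channels=6):
    """Crude noise-band vocoder: split the signal into frequency bands,
    extract each band's amplitude envelope, and use it to modulate
    band-limited noise. Parameters are illustrative assumptions."""
    # Log-spaced channel edges across a typical speech range (assumption).
    edges = np.logspace(np.log10(100), np.log10(8000), n_channels + 1)
    out = np.zeros(len(signal))
    rng = np.random.default_rng(0)
    for lo, hi in zip(edges[:-1], edges[1:]):
        b, a = butter(3, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        band = filtfilt(b, a, signal)
        # Envelope: rectify, then low-pass at 50 Hz (a common choice).
        be, ae = butter(2, 50 / (fs / 2))
        env = filtfilt(be, ae, np.abs(band))
        # Modulate band-limited noise with the extracted envelope.
        noise = filtfilt(b, a, rng.standard_normal(len(signal)))
        out += env * noise
    return out
```

The result preserves the temporal envelope per channel while discarding fine spectral structure, which is the degradation CI listeners contend with.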

3.
Exp Aging Res ; : 1-24, 2023 Dec 07.
Article in English | MEDLINE | ID: mdl-38061985

ABSTRACT

BACKGROUND: In spite of declines in working memory and other processes, older adults generally maintain good ability to understand and remember spoken sentences. In part this is due to preserved knowledge of linguistic rules and their implementation. Largely overlooked, however, is the support older adults may gain from the presence of sentence prosody (pitch contour, lexical stress, intra- and inter-word timing) as an aid to detecting the structure of a heard sentence. METHODS: Twenty-four young and 24 older adults recalled recorded sentences in which the sentence prosody corresponded to the clausal structure of the sentence, when the prosody was in conflict with this structure, or when there was reduced prosody uninformative with regard to the clausal structure. Pupil size was concurrently recorded as a measure of processing effort. RESULTS: Both young and older adults' recall accuracy was superior for sentences heard with supportive prosody than for sentences with uninformative prosody or for sentences in which the prosodic marking and clausal structure were in conflict. The measurement of pupil dilation suggested that the task was generally more effortful for the older adults, but with both groups showing a similar pattern of effort-reducing effects of supportive prosody. CONCLUSIONS: Results demonstrate the influence of prosody on young and older adults' ability to accurately recall multi-clause sentences, and the significant role supportive prosody may play in reducing processing effort.

4.
Trends Hear ; 27: 23312165231203514, 2023.
Article in English | MEDLINE | ID: mdl-37941344

ABSTRACT

Speech that has been artificially accelerated through time compression produces a notable deficit in recall of the speech content. This is especially so for adults with cochlear implants (CI). At the perceptual level, this deficit may be due to the sharply degraded CI signal, combined with the reduced richness of compressed speech. At the cognitive level, the rapidity of time-compressed speech can deprive the listener of the ordinarily available processing time present when speech is delivered at a normal speech rate. Two experiments are reported. Experiment 1 was conducted with 27 normal-hearing young adults as a proof-of-concept demonstration that restoring lost processing time by inserting silent pauses at linguistically salient points within a time-compressed narrative ("time-restoration") returns recall accuracy to a level approximating that for a normal speech rate. Noise vocoder conditions with 10 and 6 channels reduced the effectiveness of time-restoration. Pupil dilation indicated that additional effort was expended by participants while attempting to process the time-compressed narratives, with the effortful demand on resources reduced with time restoration. In Experiment 2, 15 adult CI users tested with the same (unvocoded) materials showed a similar pattern of behavioral and pupillary responses, but with the notable exception that meaningful recovery of recall accuracy with time-restoration was limited to a subgroup of CI users identified by better working memory spans, and better word and sentence recognition scores. Results are discussed in terms of sensory-cognitive interactions in data-limited and resource-limited processes among adult users of cochlear implants.


Subject(s)
Cochlear Implantation; Cochlear Implants; Speech Perception; Young Adult; Humans; Speech; Speech Perception/physiology; Noise
5.
Aging Brain ; 2: 100051, 2022.
Article in English | MEDLINE | ID: mdl-36908889

ABSTRACT

We investigated how the aging brain copes with acoustic and syntactic challenges during spoken language comprehension. Thirty-eight healthy adults aged 54-80 years (M = 66 years) participated in an fMRI experiment wherein listeners indicated the gender of an agent in short spoken sentences that varied in syntactic complexity (object-relative vs subject-relative center-embedded clause structures) and acoustic richness (high vs low spectral detail, but all intelligible). We found widespread activity throughout a bilateral frontotemporal network during successful sentence comprehension. Consistent with prior reports, bilateral inferior frontal gyrus and left posterior superior temporal gyrus were more active in response to object-relative sentences than to subject-relative sentences. Moreover, several regions were significantly correlated with individual differences in task performance: Activity in right frontoparietal cortex and left cerebellum (Crus I & II) showed a negative correlation with overall comprehension. By contrast, left frontotemporal areas and right cerebellum (Lobule VII) showed a negative correlation with accuracy specifically for syntactically complex sentences. In addition, laterality analyses confirmed a lack of hemispheric lateralization in activity evoked by sentence stimuli in older adults. Importantly, we found different hemispheric roles, with a left-lateralized core language network supporting syntactic operations, and right-hemisphere regions coming into play to aid in general cognitive demands during spoken sentence processing. Together our findings support the view that high levels of language comprehension in older adults are maintained by a close interplay between a core left hemisphere language network and additional neural resources in the contralateral hemisphere.

6.
Front Psychol ; 12: 629464, 2021.
Article in English | MEDLINE | ID: mdl-33796047

ABSTRACT

There is considerable evidence that listeners' understanding of a spoken sentence need not always follow from a full analysis of the words and syntax of the utterance. Rather, listeners may instead conduct a superficial analysis, sampling some words and using presumed plausibility to arrive at an understanding of the sentence meaning. Because this latter strategy occurs more often for sentences with complex syntax that place a heavier processing burden on the listener than sentences with simpler syntax, shallow processing may represent a resource-conserving strategy reflected in reduced processing effort. This factor may be even more important for older adults, who as a group are known to have more limited working memory resources. In the present experiment, 40 older adults (M age = 75.5 years) and 20 younger adults (M age = 20.7 years) were tested for comprehension of plausible and implausible sentences with a simpler subject-relative embedded clause structure or a more complex object-relative embedded clause structure. Dilation of the pupil of the eye was recorded as an index of processing effort. Results confirmed greater comprehension accuracy for plausible than implausible sentences, and for sentences with simpler than more complex syntax, with both effects amplified for the older adults. Analysis of peak pupil dilations for implausible sentences revealed a complex three-way interaction between age, syntactic complexity, and plausibility. Results are discussed in terms of models of sentence comprehension, and pupillometry as an index of intentional task engagement.

7.
J Speech Lang Hear Res ; 64(2): 315-327, 2021 02 17.
Article in English | MEDLINE | ID: mdl-33561353

ABSTRACT

Purpose: The study examined age-related differences in the use of semantic context and in the effect of semantic competition in spoken sentence processing. We used offline (response latency) and online (eye gaze) measures, using the "visual world" eye-tracking paradigm. Method: Thirty younger and 30 older adults heard sentences related to one of four images presented on a computer monitor. They were asked to touch the image corresponding to the final word of the sentence (target word). Three conditions were used: a nonpredictive sentence, a predictive sentence suggesting one of the four images on the screen (semantic context), and a predictive sentence suggesting two possible images (semantic competition). Results: Online eye gaze data showed no age-related differences with nonpredictive sentences, but revealed slowed processing for older adults when context was presented. With the addition of semantic competition to context, older adults were slower to look at the target word after it had been heard. In contrast, offline latency analysis did not show age-related differences in the effects of context and competition. As expected, older adults were generally slower to touch the image than younger adults. Conclusions: Traditional offline measures were not able to reveal the complex effect of aging on spoken semantic context processing. Online eye gaze measures suggest that older adults were slower than younger adults to predict an indicated object based on semantic context. Semantic competition affected online processing for older adults more than for younger adults, with no accompanying age-related differences in latency. This supports an early age-related inhibition deficit, interfering with processing, and not necessarily with response execution.


Subject(s)
Fixation, Ocular; Semantics; Aged; Aging; Humans; Language; Reaction Time
8.
J Acoust Soc Am ; 150(6): 4315, 2021 12.
Article in English | MEDLINE | ID: mdl-34972310

ABSTRACT

Speech prosody, including pitch contour, word stress, pauses, and vowel lengthening, can aid the detection of the clausal structure of a multi-clause sentence and this, in turn, can help listeners determine the meaning. However, for cochlear implant (CI) users, the reduced acoustic richness of the signal raises the question of whether CI users may have difficulty using sentence prosody to detect syntactic clause boundaries within sentences, or whether this ability is rescued by the redundancy of the prosodic features that normally co-occur at clause boundaries. Twenty-two CI users, ranging in age from 19 to 77 years, recalled three types of sentences: sentences in which the prosodic pattern was appropriate to the location of a clause boundary within the sentence (congruent prosody), sentences with reduced prosodic information, or sentences in which the location of the clause boundary and the prosodic marking of a clause boundary were placed in conflict. The results showed the presence of congruent prosody to be associated with superior sentence recall and reduced processing effort as indexed by pupil dilation. Individual differences in a standard test of word recognition (consonant-nucleus-consonant score) were related to recall accuracy as well as processing effort. The outcomes are discussed in terms of the redundancy of the prosodic features that normally accompany a clause boundary, and in terms of processing effort.


Subject(s)
Cochlear Implants; Speech Perception; Language; Mental Recall; Speech
9.
Front Hum Neurosci ; 14: 132, 2020.
Article in English | MEDLINE | ID: mdl-32327987

ABSTRACT

Studies of spoken word recognition have reliably shown that both younger and older adults' recognition of acoustically degraded words is facilitated by the presence of a linguistic context. Against this benefit, older adults' word recognition can be differentially hampered by interference from other words that could also fit the context. These prior studies have primarily used off-line response measures such as the signal-to-noise ratio needed for a target word to be correctly identified. Less clear is the locus of these effects: whether facilitation and interference have their influence primarily during response selection, or whether their effects begin to operate even before a sentence-final target word has been uttered. This question was addressed by tracking 20 younger and 20 older adults' eye fixations on a visually presented target word that corresponded to the final word of a contextually constraining or neutral sentence, accompanied by a second word on the computer screen that in some cases could also fit the sentence context. Growth curve analysis of the time-course of eye-gaze on a target word showed that facilitation and inhibition effects begin to appear even as a spoken sentence is unfolding in time. Consistent with an age-related inhibition deficit, older adults' word recognition was slowed by the presence of a semantic competitor to a degree not observed for younger adults, with this effect operating early in the recognition process.

10.
Am J Audiol ; 28(2): 369-375, 2019 Jun 10.
Article in English | MEDLINE | ID: mdl-31091111

ABSTRACT

Purpose: Many young adults with a mild hearing loss can appear unaware or unconcerned about their loss or its potential effects. A question that has not been raised in prior research is whether slight variability, even within the range of clinically normal hearing, may have a detrimental effect on comprehension of spoken sentences, especially when attempting to understand the meaning of sentences that offer an additional cognitive challenge. The purpose of this study was to address this question. Method: An exploratory analysis was conducted on data from 3 published studies that included young adults, ages 18 to 29 years, with audiometrically normal hearing acuity (pure-tone average < 15 dB HL) tested for comprehension of sentences that conveyed the sentence meaning with simpler or more complex linguistic structures. A product-moment correlation was conducted between individuals' hearing acuity and their comprehension accuracy. Results: A significant correlation appeared between hearing acuity and comprehension accuracy for syntactically complex sentences, but not for sentences with a simpler syntactic structure. Partial correlations confirmed this relationship to hold independent of participant age within this relatively narrow age range. Conclusion: These findings suggest that slight elevations in hearing thresholds, even among young adults who pass a screen for normal hearing, can affect comprehension accuracy for spoken sentences when combined with cognitive demands imposed by sentences that convey their meaning with a complex linguistic structure. These findings support limited resource models of attentional allocation and argue for routine baseline hearing evaluations of young adults with current age-normal hearing acuity.


Subject(s)
Speech Perception/physiology; Adolescent; Adult; Audiometry, Pure-Tone; Auditory Perception/physiology; Auditory Threshold; Female; Healthy Volunteers; Hearing Loss/diagnosis; Humans; Male; Mass Screening; Young Adult
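The analysis above rests on a product-moment correlation between hearing acuity and comprehension accuracy, with participant age partialed out. A minimal sketch of both statistics (illustrative, not the study's code):

```python
import numpy as np

def pearson_r(x, y):
    """Product-moment (Pearson) correlation between two variables."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

def partial_r(x, y, z):
    """Correlation of x and y with a third variable z (e.g., age)
    partialed out, using the standard first-order partial formula."""
    rxy, rxz, ryz = pearson_r(x, y), pearson_r(x, z), pearson_r(y, z)
    return (rxy - rxz * ryz) / np.sqrt((1 - rxz**2) * (1 - ryz**2))
```

If the partial correlation remains significant, the acuity-comprehension relationship cannot be attributed to age differences within the sample, which is the logic the abstract describes.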
11.
Trends Hear ; 23: 2331216519839624, 2019.
Article in English | MEDLINE | ID: mdl-31010398

ABSTRACT

Individual differences in working memory capacity have been gaining recognition as playing an important role in speech comprehension, especially in noisy environments. Using the visual world eye-tracking paradigm, a recent study by Hadar and coworkers found that online spoken word recognition was slowed when listeners were required to retain in memory a list of four spoken digits (high load) compared with only one (low load). In the current study, we recognized that the influence of a digit preload might be greater for individuals who have a more limited memory span. We compared participants with higher and lower memory spans on the time course for spoken word recognition by testing eye-fixations on a named object, relative to fixations on an object whose name shared phonology with the named object. Results show that when a low load was imposed, differences in memory span had no effect on the time course of preferential fixations. However, with a high load, listeners with lower span were delayed by ∼550 ms in discriminating target from sound-sharing competitors, relative to higher span listeners. This follows an assumption that the interference effect of a memory preload is not a fixed value, but rather, its effect is greater for individuals with a smaller memory span. Interestingly, span differences affected the timeline for spoken word recognition in noise, but not offline accuracy. This highlights the significance of using eye-tracking as a measure for online speech processing. Results further emphasize the importance of considering differences in cognitive capacity, even when testing normal hearing young adults.


Subject(s)
Eye Movements/physiology; Memory, Short-Term/physiology; Speech Perception/physiology; Adult; Female; Humans; Individuality; Male; Noise; Speech; Young Adult
12.
Front Psychol ; 10: 2947, 2019.
Article in English | MEDLINE | ID: mdl-31998196

ABSTRACT

Task-evoked changes in pupil dilation have long been used as a physiological index of cognitive effort. Unlike this response, which is measured during or after an experimental trial, the baseline pupil dilation (BPD) is a measure taken prior to an experimental trial. As such, it is considered to reflect an individual's arousal level in anticipation of an experimental trial. We report data for 68 participants, ages 18 to 89, whose hearing acuity ranged from normal hearing to a moderate hearing loss, tested over a series of 160 trials on an auditory sentence comprehension task. Results showed that BPDs progressively declined over the course of the experimental trials, with participants with poorer pure tone detection thresholds showing a steeper rate of decline than those with better thresholds. Data showed this slope difference to be due to participants with poorer hearing having larger BPDs than those with better hearing at the start of the experiment, but with their BPDs approaching those of the better-hearing participants by the end of the 160 trials. A finding of increasing response accuracy over trials was seen as inconsistent with a fatigue or reduced task engagement account of the diminishing BPDs. Rather, the present results imply that BPD reflects a heightened arousal level in poorer-hearing participants in anticipation of a task that demands accurate speech perception, a concern that dissipates over trials with task success. These data, taken with others, suggest that the baseline pupillary response may not reflect a single construct.
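The per-participant decline in baseline pupil dilation described above amounts to fitting a line to baseline values across trials and comparing slopes between hearing groups. A minimal sketch, assuming trials are simply indexed 0..n-1:

```python
import numpy as np

def bpd_slope(baselines):
    """Least-squares slope of baseline pupil diameter across trials.

    Sketch only: fit a line to one participant's per-trial baseline
    values. A negative slope indicates a decline in anticipatory
    arousal over the session.
    """
    trials = np.arange(len(baselines))
    slope, _intercept = np.polyfit(trials, baselines, 1)
    return float(slope)
```

Steeper (more negative) slopes for poorer-hearing participants, starting from larger initial baselines, would reproduce the pattern the abstract reports.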

13.
J Acoust Soc Am ; 144(4): 2088, 2018 10.
Article in English | MEDLINE | ID: mdl-30404494

ABSTRACT

The rhythms of speech and the time scales of linguistic units (e.g., syllables) correspond remarkably to cortical oscillations. Previous research has demonstrated that in young adults, the intelligibility of time-compressed speech can be rescued by "repackaging" the speech signal through the regular insertion of silent gaps to restore correspondence to the theta oscillator. This experiment tested whether this same phenomenon can be demonstrated in older adults, who show age-related changes in cortical oscillations. The results demonstrated a similar phenomenon for older adults, but that the "rescue point" of repackaging is shifted, consistent with a slowing of theta oscillations.


Subject(s)
Aging/physiology; Brain/physiology; Speech Perception; Adolescent; Adult; Aged; Aged, 80 and over; Brain/growth & development; Female; Humans; Male; Middle Aged; Theta Rhythm
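The "repackaging" manipulation described above, inserting regular silent gaps into time-compressed speech so the information rate returns toward the theta range, can be sketched as follows. The packet and gap durations are illustrative assumptions, not the study's values:

```python
import numpy as np

def repackage(signal, fs, packet_ms=62.5, gap_ms=37.5):
    """Interleave silent gaps into a (time-compressed) speech waveform.

    Sketch: chop the waveform into short packets and insert silence
    after each one, restoring some of the processing time removed by
    time compression.
    """
    packet = int(fs * packet_ms / 1000)
    gap = np.zeros(int(fs * gap_ms / 1000))
    chunks = []
    for start in range(0, len(signal), packet):
        chunks.append(signal[start:start + packet])
        chunks.append(gap)
    return np.concatenate(chunks)
```

Varying `gap_ms` shifts the effective packet rate, which is the knob the abstract suggests interacts with the (age-slowed) theta oscillator.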
14.
Trends Hear ; 22: 2331216518790907, 2018.
Article in English | MEDLINE | ID: mdl-30235973

ABSTRACT

In recent years, there has been a growing interest in the relationship between effort and performance. Early formulations implied that, as the challenge of a task increases, individuals will exert more effort, with resultant maintenance of stable performance. We report an experiment in which normal-hearing young adults, normal-hearing older adults, and older adults with age-related mild-to-moderate hearing loss were tested for comprehension of recorded sentences that varied the comprehension challenge in two ways. First, sentences were constructed that expressed their meaning either with a simpler subject-relative syntactic structure or a more computationally demanding object-relative structure. Second, for each sentence type, an adjectival phrase was inserted that created either a short or long gap in the sentence between the agent performing an action and the action being performed. The measurement of pupil dilation as an index of processing effort showed effort to increase with task difficulty until a difficulty tipping point was reached. Beyond this point, the measurement of pupil size revealed a commitment of effort by the two groups of older adults who failed to keep pace with task demands as evidenced by reduced comprehension accuracy. We take these pupillometry data as revealing a complex relationship between task difficulty, effort, and performance that might not otherwise appear from task performance alone.


Subject(s)
Aging/physiology; Auditory Perception/physiology; Comprehension/physiology; Hearing Loss/diagnosis; Hearing Tests/methods; Speech Perception/physiology; Adult; Age Factors; Aged; Audiometry/methods; Female; Hearing Loss/epidemiology; Humans; Logistic Models; Male; Middle Aged; Predictive Value of Tests; Reference Values; Risk Assessment; Speech Reception Threshold Test; Task Performance and Analysis
15.
eNeuro ; 5(3)2018.
Article in English | MEDLINE | ID: mdl-29911176

ABSTRACT

In this paper, we investigate how subtle differences in hearing acuity affect the neural systems supporting speech processing in young adults. Auditory sentence comprehension requires perceiving a complex acoustic signal and performing linguistic operations to extract the correct meaning. We used functional MRI to monitor human brain activity while adults aged 18-41 years listened to spoken sentences. The sentences varied in their level of syntactic processing demands, containing either a subject-relative or object-relative center-embedded clause. All participants self-reported normal hearing, confirmed by audiometric testing, with some variation within a clinically normal range. We found that participants showed activity related to sentence processing in a left-lateralized frontotemporal network. Although accuracy was generally high, participants still made some errors, which were associated with increased activity in bilateral cingulo-opercular and frontoparietal attention networks. A whole-brain regression analysis revealed that activity in a right anterior middle frontal gyrus (aMFG) component of the frontoparietal attention network was related to individual differences in hearing acuity, such that listeners with poorer hearing showed greater recruitment of this region when successfully understanding a sentence. The activity in the right aMFG for listeners with poorer hearing did not differ as a function of sentence type, suggesting a general mechanism that is independent of linguistic processing demands. Our results suggest that even modest variations in hearing ability impact the systems supporting auditory speech comprehension, and that auditory sentence comprehension entails the coordination of a left perisylvian network that is sensitive to linguistic variation with an executive attention network that responds to acoustic challenge.


Subject(s)
Brain/physiology; Comprehension; Hearing; Speech Perception/physiology; Speech; Acoustic Stimulation; Adolescent; Adult; Attention/physiology; Brain Mapping; Female; Functional Laterality; Humans; Magnetic Resonance Imaging; Male; Neural Pathways/physiology; Young Adult
16.
Psychol Aging ; 33(2): 246-258, 2018 03.
Article in English | MEDLINE | ID: mdl-29658746

ABSTRACT

Auditory attention is critical for selectively listening to speech from a single talker in a multitalker environment (e.g., Cherry, 1953). Listening in such situations is notoriously more difficult and more poorly encoded to long-term memory in older than in young adults (Tun, O'Kane, & Wingfield, 2002). Recent work by Payne, Rogers, Wingfield, and Sekuler (2017) in young adults demonstrated a neural correlate of auditory attention in the directed dichotic listening task (DDLT), where listeners attend to one ear while ignoring the other. Measured using electroencephalography, differences in alpha band power (8-14 Hz) between left and right hemisphere parietal regions mark the direction to which auditory attention is focused. Little prior research has been conducted on alpha power modulations in older adults, particularly with regard to auditory attention directed toward speech stimuli. In the current study, an older adult sample was administered the DDLT and delayed recognition procedures used by Payne et al. (2017). Compared to young adults, older adults showed reduced selective attention in the DDLT, evidenced by a higher rate of intrusions from the unattended ear. Moreover, older adults did not exhibit the attention-related alpha modulation evidenced by young adults, nor did their event-related potentials (ERPs) to recognition probes differentiate between attended or unattended probes. Older adults' delayed recognition did not reveal the pattern of suppression of unattended items evidenced by young adults. These results serve as evidence for an age-related decline in selective auditory attention, potentially mediated by age-related decline in the ability to modulate alpha oscillations.


Subject(s)
Dichotic Listening Tests/methods; Electroencephalography/methods; Speech Perception/physiology; Aged; Aging; Female; Humans; Male
17.
Ear Hear ; 39(1): 101-109, 2018.
Article in English | MEDLINE | ID: mdl-28700448

ABSTRACT

OBJECTIVES: The increasing numbers of older adults now receiving cochlear implants raises the question of how the novel signal produced by cochlear implants may interact with cognitive aging in the recognition of words heard spoken within a linguistic context. The objective of this study was to pit the facilitative effects of a constraining linguistic context against a potential age-sensitive negative effect of response competition on effectiveness of word recognition. DESIGN: Younger (n = 8; mean age = 22.5 years) and older (n = 8; mean age = 67.5 years) adult implant recipients heard 20 target words as the final words in sentences that manipulated the target word's probability of occurrence within the sentence context. Data from published norms were also used to measure response entropy, calculated as the total number of different responses and the probability distribution of the responses suggested by the sentence context. Sentence-final words were presented to participants using a word-onset gating paradigm, in which a target word was presented with increasing amounts of its onset duration in 50 msec increments until the word was correctly identified. RESULTS: Results showed that for both younger and older adult implant users, the amount of word-onset information needed for correct recognition of sentence-final words was inversely proportional to their likelihood of occurrence within the sentence context, with older adults gaining differential advantage from the contextual constraints offered by a sentence context. On the negative side, older adults' word recognition was differentially hampered by high response entropy, with this effect being driven primarily by the number of competing responses that might also fit the sentence context. CONCLUSIONS: Consistent with previous research with normal-hearing younger and older adults, the present results showed older adult implant users' recognition of spoken words to be highly sensitive to linguistic context. 
This sensitivity, however, also resulted in a greater degree of interference from other words that might also be activated by the context, with negative effects on ease of word recognition. These results are consistent with an age-related inhibition deficit extending to the domain of semantic constraints on word recognition.
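The abstract describes response entropy as derived from the number of different norming responses and their probability distribution, but does not give the exact formula used in the published norms. A minimal sketch, assuming the standard Shannon-entropy operationalization over the response distribution (the example sentence and response counts are illustrative, not from the study):

```python
import math
from collections import Counter

def response_entropy(responses):
    """Shannon entropy (in bits) of the distribution of sentence-completion
    responses, plus the number of distinct competing responses."""
    counts = Counter(responses)
    total = sum(counts.values())
    probs = [n / total for n in counts.values()]
    entropy = -sum(p * math.log2(p) for p in probs)
    return entropy, len(counts)

# Hypothetical norming data for "The boat sailed across the ___":
# 7 participants said "bay", 2 said "lake", 1 said "ocean".
responses = ["bay"] * 7 + ["lake"] * 2 + ["ocean"]
h, n_competitors = response_entropy(responses)  # h ≈ 1.16 bits, 3 competitors
```

A context that elicits many equiprobable completions yields high entropy (many viable competitors), which is the condition the study found differentially hampered older adults' word recognition.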


Subject(s)
Cochlear Implants , Speech Perception , Acoustic Stimulation , Adolescent , Adult , Age Factors , Aged , Auditory Threshold , Deafness/physiopathology , Deafness/rehabilitation , Female , Humans , Male , Middle Aged , Semantics , Young Adult
18.
Psychophysiology ; 54(4): 528-535, 2017 04.
Article in English | MEDLINE | ID: mdl-28039860

ABSTRACT

Auditory selective attention makes it possible to pick out one speech stream that is embedded in a multispeaker environment. We adapted a cued dichotic listening task to examine suppression of a speech stream lateralized to the nonattended ear, and to evaluate the effects of attention on the right ear's well-known advantage in the perception of linguistic stimuli. After being cued to attend to input from either their left or right ear, participants heard two different four-word streams presented simultaneously to the separate ears. Following each dichotic presentation, participants judged whether a spoken probe word had been in the attended ear's stream. We used EEG signals to track participants' spatial lateralization of auditory attention, which is marked by interhemispheric differences in EEG alpha (8-14 Hz) power. A right-ear advantage (REA) was evident in faster response times and greater sensitivity in distinguishing attended from unattended words. Consistent with the REA, we found the strongest parietal and right frontotemporal alpha modulation during the attend-right condition. These findings provide evidence for a link between selective attention and the REA during directed dichotic listening.
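The abstract identifies interhemispheric differences in alpha (8-14 Hz) power as the marker of spatial attention, but does not specify how that difference was quantified. A minimal sketch under the assumption of a common normalized lateralization index, using periodogram band power on synthetic single-channel signals (channel names and parameters are illustrative):

```python
import numpy as np

def alpha_power(signal, fs, band=(8.0, 14.0)):
    """Mean periodogram power of `signal` within the alpha band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def alpha_lateralization(left_chan, right_chan, fs):
    """Normalized interhemispheric alpha difference in [-1, 1]:
    positive values indicate more alpha power over the right hemisphere."""
    r = alpha_power(right_chan, fs)
    l = alpha_power(left_chan, fs)
    return (r - l) / (r + l)

# Synthetic demo: a 10 Hz rhythm that is stronger on the right channel.
fs = 250  # Hz
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(0)
left = 0.5 * np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
right = 2.0 * np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
li = alpha_lateralization(left, right, fs)  # positive: right-lateralized alpha
```

In practice such an index would be computed per trial from scalp electrodes over each hemisphere and compared between attend-left and attend-right cue conditions.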


Subject(s)
Alpha Rhythm , Attention/physiology , Cerebral Cortex/physiology , Functional Laterality , Speech Perception/physiology , Acoustic Stimulation , Adolescent , Adult , Cues , Dichotic Listening Tests , Electroencephalography , Female , Frontal Lobe/physiology , Humans , Male , Parietal Lobe/physiology , Reaction Time , Temporal Lobe/physiology , Young Adult
19.
Ear Hear ; 37 Suppl 1: 35S-43S, 2016.
Article in English | MEDLINE | ID: mdl-27355768

ABSTRACT

The goal of this article is to trace the evolution of models of working memory and cognitive resources from the early 20th century to today. Linear flow models of information processing common in the 1960s and 1970s centered on the transfer of verbal information from a limited-capacity short-term memory store to long-term memory through rehearsal. Current conceptions see working memory as a dynamic system that includes both maintaining and manipulating information through a series of interactive components that include executive control and attentional resources. These models also reflect the evolution from an almost exclusive concentration on working memory for verbal materials to inclusion of a visual working memory component. Although differing in postulated mechanisms and emphasis, these evolving viewpoints all share the recognition that human information processing is a limited-capacity system with limits on the amount of information that can be attended to, remain active in memory, and be utilized at one time. These limitations take on special importance in spoken language comprehension, especially when the stimuli have complex linguistic structures or listening effort is increased by poor acoustic quality or reduced hearing acuity.


Subject(s)
Attention , Cognition , Memory, Short-Term , Executive Function , Humans , Linear Models , Models, Psychological , Speech Perception
20.
Ear Hear ; 37 Suppl 1: 5S-27S, 2016.
Article in English | MEDLINE | ID: mdl-27355771

ABSTRACT

The Fifth Eriksholm Workshop on "Hearing Impairment and Cognitive Energy" was convened to develop a consensus among interdisciplinary experts about what is known on the topic, gaps in knowledge, the use of terminology, priorities for future research, and implications for practice. The general term cognitive energy was chosen to facilitate the broadest possible discussion of the topic. It goes back to early accounts of the effects of attention on perception, which used the term psychic energy for the notion that limited mental resources can be flexibly allocated among perceptual and mental activities. The workshop focused on three main areas: (1) theories, models, concepts, definitions, and frameworks; (2) methods and measures; and (3) knowledge translation. We defined effort as the deliberate allocation of mental resources to overcome obstacles in goal pursuit when carrying out a task, with listening effort applying more specifically when tasks involve listening. We adapted Kahneman's seminal Capacity Model of Attention (1973) to listening and proposed a heuristically useful Framework for Understanding Effortful Listening (FUEL). Our FUEL incorporates the well-known relationship between cognitive demand and the supply of cognitive capacity that is the foundation of cognitive theories of attention. Our FUEL also incorporates a motivation dimension based on complementary theories of motivational intensity, adaptive gain control, and optimal performance, fatigue, and pleasure. Using a three-dimensional illustration, we highlight how listening effort depends not only on hearing difficulties and task demands but also on the listener's motivation to expend mental effort in the challenging situations of everyday life.


Subject(s)
Attention , Cognition , Hearing Loss/psychology , Speech Perception , Auditory Perception , Comprehension , Humans