Results 1 - 9 of 9
1.
J Am Acad Audiol ; 12(8): 390-6, 2001 Sep.
Article in English | MEDLINE | ID: mdl-11599873

ABSTRACT

Interlist equivalency and short-term practice effects were evaluated for the recorded stimuli of the Computer-Assisted Speech Perception Assessment (CASPA) Test. Twenty lists, each consisting of 10 consonant-vowel-consonant words, were administered to 20 adults with normal hearing. The lists were presented at 50 dB SPL (Leq) in the presence of spectrally matched steady-state noise (55 dB SPL Leq). Phoneme recognition scores for the first list presented were significantly lower than for the second through the twentieth list presented, indicating a small practice effect. Phoneme scores for 4 of the lists (3, 6, 7, and 16) were significantly higher than scores for the remaining 16 lists by approximately 10 percentage points. Eliminating the effects of interlist differences reduced the 95 percent confidence interval of a test score based on a single list from 18.4 to 16.1 percentage points. Although interlist differences have only a small effect on confidence limits, some clinicians may wish to eliminate them by excluding lists 3, 6, 7, and 16 from the test. The practice effect observed here can be eliminated by administering one 10-word practice list before beginning the test.
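
The confidence-interval figures above can be related to the spread of single-list scores. Below is a minimal Python sketch, not drawn from the article, of one way to estimate the 95 percent confidence interval of a single-list phoneme score from repeated-list data; the scores, the treatment of interlist differences as random error, and the interpretation of the interval as a full width are assumptions made for illustration only.

```python
import statistics

# Hypothetical phoneme-recognition scores (percent correct) for one listener
# across repeated 10-word lists; values are illustrative only.
list_scores = [62, 70, 58, 66, 74, 60, 68, 64, 72, 66]

# Treating interlist differences as random measurement error, the spread of
# single-list scores estimates the error of a score based on one list.
sd = statistics.stdev(list_scores)
ci_width = 2 * 1.96 * sd  # full width of a 95% confidence interval, in percentage points

print(f"SD across lists: {sd:.1f} points; 95% CI width: {ci_width:.1f} points")
```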


Subject(s)
Speech Discrimination Tests/instrumentation; Speech Perception/physiology; Adult; Analysis of Variance; Audiometry, Pure-Tone; Auditory Threshold/physiology; Computers; Humans; Phonetics; Random Allocation
2.
J Speech Lang Hear Res ; 44(1): 19-28, 2001 Feb.
Article in English | MEDLINE | ID: mdl-11218102

ABSTRACT

The purpose of this study was to determine the role of frequency selectivity and sequential stream segregation in the perception of simultaneous sentences by listeners with sensorineural hearing loss. Simultaneous sentence perception was tested in listeners with normal hearing and with sensorineural hearing loss using sentence pairs consisting of one sentence spoken by a male talker and one sentence spoken by a female talker. Listeners were asked to repeat both sentences and were scored on the number of words repeated correctly in each sentence. Separate scores were obtained for the first and second sentences repeated. Frequency selectivity was assessed using a notched-noise method in which thresholds for a 1,000 Hz pure-tone signal were measured in noise with spectral notch bandwidths of 0, 300, and 600 Hz. Sequential stream segregation was measured using tone sequences consisting of a fixed-frequency tone (A) and a variable-frequency tone (B). Tone sequences were presented in an ABA_ABA_... pattern, with the B tone starting at a frequency either below or above that of the fixed 1,000 Hz A tone. The frequency difference was initially large and was gradually decreased until listeners indicated that they could no longer perceptually separate the two tones (fusion threshold). Scores for the first sentence repeated decreased significantly with increasing age. There was a strong relationship between fusion threshold and simultaneous sentence perception, which remained even after partialling out the effects of age. Smaller frequency differences at fusion thresholds were associated with higher sentence scores. There was no relationship between frequency selectivity and simultaneous sentence perception. Results suggest that the abilities to perceptually separate pitch patterns and to separate sentences spoken simultaneously by different talkers are mediated by the same underlying perceptual and/or cognitive factors.
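
For readers unfamiliar with the ABA_ABA_... paradigm, the sketch below generates such a tone sequence with a B frequency that steps toward the fixed A frequency. It is an illustration only, not the adaptive procedure used in the study; the tone duration, step size, starting frequency, and sample rate are assumed values.

```python
import numpy as np

def tone(freq_hz, dur_s, fs=44100):
    """Generate a sine tone with 10-ms raised-cosine onset/offset ramps."""
    t = np.arange(int(dur_s * fs)) / fs
    x = np.sin(2 * np.pi * freq_hz * t)
    ramp = int(0.010 * fs)
    env = np.ones_like(x)
    env[:ramp] = 0.5 * (1 - np.cos(np.pi * np.arange(ramp) / ramp))
    env[-ramp:] = env[:ramp][::-1]
    return x * env

def aba_sequence(a_hz=1000.0, b_start_hz=1500.0, n_triplets=20,
                 step_ratio=0.97, tone_dur=0.1, fs=44100):
    """Build an ABA_ABA_... sequence in which B steps toward A on each triplet."""
    silence = np.zeros(int(tone_dur * fs))  # the "_" gap after each triplet
    b_hz = b_start_hz
    chunks = []
    for _ in range(n_triplets):
        chunks += [tone(a_hz, tone_dur, fs), tone(b_hz, tone_dur, fs),
                   tone(a_hz, tone_dur, fs), silence]
        b_hz = a_hz + (b_hz - a_hz) * step_ratio  # shrink the A-B frequency difference
    return np.concatenate(chunks)

seq = aba_sequence()
print(f"{len(seq) / 44100:.1f} s of audio generated")
```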


Subject(s)
Hearing Loss, Sensorineural/diagnosis; Speech Perception/physiology; Adult; Age Factors; Aged; Aged, 80 and over; Audiometry, Pure-Tone; Auditory Threshold; Female; Humans; Male; Middle Aged; Noise; Psychoacoustics; Severity of Illness Index; Time Factors
3.
J Speech Lang Hear Res ; 43(3): 675-82, 2000 Jun.
Article in English | MEDLINE | ID: mdl-10877437

ABSTRACT

The purpose of the study was to determine if a divided-attention, sentence-recall task was more sensitive to distortion of the speech signal than a conventional focused-attention task. The divided-attention task required listeners to repeat both of two sentences delivered simultaneously to the same ear. The focused-attention task required listeners to repeat a single sentence presented to one ear in quiet or in amplitude-modulated noise (0 dB signal-to-noise ratio). Distortion was introduced by peak clipping. Eighteen listeners with normal hearing were tested under three levels of peak clipping: 0 dB, 11 dB, and 29 dB (re: waveform peak). The effects of clipping were similar, on average, for simultaneous sentences and single sentences in noise. When data were separated by sentence length, however, the effects of clipping were found to be greater for the simultaneous-sentence task, but only for the short sentences (6 words or fewer). The simultaneous-sentence test, in its present form, is not more sensitive to the effects of clipping than is a single-sentence test in noise. Modification of the simultaneous-sentence test to include only short sentences, however, may provide greater test sensitivity than more conventional tests using single sentences in noise.
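
Peak clipping specified in decibels re the waveform peak has a simple definition: the clip level is the original peak scaled by 10^(-clipping dB / 20). The sketch below illustrates this, using an arbitrary test signal rather than the sentence materials of the study; the function and variable names are invented for the example.

```python
import numpy as np

def peak_clip(signal, clip_db_re_peak):
    """Clip a waveform at a level clip_db_re_peak below its absolute peak.

    0 dB of clipping leaves the signal unchanged; larger values clip harder
    (e.g., 29 dB re peak clips at roughly 3.5% of the original peak).
    """
    peak = np.max(np.abs(signal))
    clip_level = peak * 10 ** (-clip_db_re_peak / 20)
    return np.clip(signal, -clip_level, clip_level)

# Illustrative test signal (not the speech materials used in the study).
fs = 16000
t = np.arange(fs) / fs
speechlike = np.sin(2 * np.pi * 200 * t) * np.sin(2 * np.pi * 3 * t)
for clip_db in (0, 11, 29):
    clipped = peak_clip(speechlike, clip_db)
    print(clip_db, "dB clipping -> new peak:", round(np.max(np.abs(clipped)), 3))
```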


Subject(s)
Speech Perception/physiology; Adult; Audiometry, Pure-Tone/methods; Auditory Threshold/physiology; Hearing Aids; Humans; Noise/adverse effects; Random Allocation; Reaction Time; Sensitivity and Specificity
4.
Ear Hear ; 20(6): 515-20, 1999 Dec.
Article in English | MEDLINE | ID: mdl-10613389

ABSTRACT

OBJECTIVE: The purpose of this study was to assess list equivalency and time-order effects of word recognition scores and response time measures obtained using a digital recording of the Modified Rhyme Test (MRT) with a response time monitoring task (Mackersie, Neuman, & Levitt, 1999). DESIGN: Response times and percent correct measures were obtained from listeners with normal hearing using the MRT materials presented at a signal to noise ratio of +3 dB. Listeners were tested using a word-monitoring task in which six alternatives were presented in series and listeners pushed a button when they heard the target word (as displayed on the computer monitor). Listeners were tested in two sessions. During each session each of the six MRT lists was administered once. Time-order effects were examined both between and within test sessions. RESULTS: All lists were equivalent for both speech recognition accuracy and response time except List 1, which showed slightly higher percent correct scores than the other lists. Varied patterns of systematic change over time were observed in 75% of the listeners for the response time measures and for 33% of the listeners for the percent correct measures. CONCLUSIONS: Lists 2 through 6 of this version of the MRT are equivalent, with List 1 producing slightly higher word recognition scores. Systematic changes over time in response time data for the majority of listeners suggest the need for careful implementation of the test to avoid time-order effects.
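
As a concrete illustration of the per-list summary that underlies a list-equivalency comparison, the sketch below tallies percent correct and mean response time by list from trial-level records. The data layout, field names, and values are hypothetical and not taken from the study.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical trial records: (list_number, correct 0/1, response_time_ms)
trials = [(1, 1, 640), (1, 0, 810), (2, 1, 590), (2, 1, 605), (3, 1, 700)]

by_list = defaultdict(list)
for list_no, correct, rt_ms in trials:
    by_list[list_no].append((correct, rt_ms))

for list_no in sorted(by_list):
    correct, rts = zip(*by_list[list_no])
    pct_correct = 100 * mean(correct)
    print(f"List {list_no}: {pct_correct:.0f}% correct, mean RT {mean(rts):.0f} ms")
```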


Subject(s)
Speech Perception/physiology; Adult; Humans; Reaction Time; Time Factors
5.
Ear Hear ; 20(2): 140-8, 1999 Apr.
Article in English | MEDLINE | ID: mdl-10229515

ABSTRACT

OBJECTIVES: The primary purpose of this study was to investigate the possibility of improving speech recognition testing sensitivity by incorporating response time measures as a metric. Two different techniques for obtaining response time were compared: a word-monitoring task and a closed-set identification task. DESIGN: Recordings of the Modified Rhyme Test were used to test 12 listeners with normal hearing. Data were collected using a word-monitoring and a closed-set identification task. Response times and percent correct scores were obtained for each task using signal to noise ratios (SNRs) of -3, 0, +3, +6, +9, and +12 dB. RESULTS: Both response time and percent correct measures were sensitive to changes in SNR, but greater sensitivity was found with the percent correct measures. Individual subject data showed that combining response time measures with percent correct scores improved test sensitivity for the monitoring task, but not for the closed-set identification task. CONCLUSIONS: The best test sensitivity was obtained by combining percent correct and response time measures for the monitoring task. Such an approach may hold promise for future clinical applications.
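
The abstract does not state how response time and percent correct were combined, so the sketch below shows only one plausible composite: z-score both measures and average them, with response time negated so that faster responding counts as better performance. Treat it as an assumption-laden illustration, not the authors' method; the per-SNR values are invented.

```python
import statistics

def zscores(values):
    m, sd = statistics.mean(values), statistics.stdev(values)
    return [(v - m) / sd for v in values]

# Hypothetical per-SNR results (percent correct, mean response time in ms)
# for SNRs of -3 to +12 dB in 3-dB steps.
pct_correct = [55, 64, 72, 80, 88, 93]
resp_time_ms = [910, 860, 800, 750, 700, 660]

# One plausible composite: average the z-scored measures, with response time
# negated so that faster responses count as better performance.
composite = [(zp - zr) / 2
             for zp, zr in zip(zscores(pct_correct), zscores(resp_time_ms))]
print([round(c, 2) for c in composite])
```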


Subject(s)
Audiometry, Speech/methods; Speech Perception/physiology; Adult; Humans; Reaction Time; Sensitivity and Specificity
6.
J Acoust Soc Am ; 103(5 Pt 1): 2273-81, 1998 May.
Article in English | MEDLINE | ID: mdl-9604341

ABSTRACT

Two experiments were carried out to determine how manipulating the compression ratio and release time of a single-band wide dynamic range hearing aid affects sound quality. In experiment I, compression ratio was varied over the range from linear to 10:1 (low compression threshold, attack time = 5 ms, release time = 200 ms). In experiment II, compression ratios of 1.5, 2, and 3:1 were combined with release times of 60, 200, and 1000 ms (attack time = 5 ms). Twenty listeners with sensorineural hearing loss rated the clarity, pleasantness, background noise, loudness, and overall impression of speech in noise (ventilation, apartment, and cafeteria noise) processed through a compression hearing aid. Results revealed that increasing the compression ratio caused decreases in ratings on all scales. Increasing the release time caused ratings of pleasantness to increase and ratings of background noise and loudness to decrease. At the 3:1 compression ratio, increasing the release time caused increases in ratings of clarity, pleasantness, and overall impression, and a decrease in background noise. Significant correlations were found between scales. Regression analysis revealed that the contributions of the clarity, pleasantness, background noise, and loudness scales to the prediction of overall impression differed as a function of the competing noise condition.
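
A single-band wide dynamic range compressor of the kind described can be sketched as an envelope follower with separate attack and release time constants feeding a static gain rule set by the compression ratio. The Python sketch below is a simplified illustration under assumed parameter values (threshold, sample rate, smoothing scheme), not the simulation used in the experiments.

```python
import numpy as np

def compress(x, fs, ratio=3.0, threshold_db=-40.0, attack_ms=5.0, release_ms=200.0):
    """Single-band compressor: gain reduction above threshold at the given ratio,
    with a one-pole level estimator whose time constant depends on whether the
    level is rising (attack) or falling (release)."""
    atk = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env_db = -100.0  # running level estimate, dB re full scale
    out = np.empty_like(x)
    for n, sample in enumerate(x):
        level_db = 20 * np.log10(max(abs(sample), 1e-6))
        coeff = atk if level_db > env_db else rel
        env_db = coeff * env_db + (1 - coeff) * level_db
        over_db = max(env_db - threshold_db, 0.0)   # dB above threshold
        gain_db = over_db * (1.0 / ratio - 1.0)     # static compression curve
        out[n] = sample * 10 ** (gain_db / 20)
    return out

fs = 16000
t = np.arange(fs) / fs
x = 0.5 * np.sin(2 * np.pi * 440 * t)   # illustrative input, not speech in noise
y = compress(x, fs, ratio=3.0, release_ms=200.0)
print("peak in:", round(np.max(np.abs(x)), 3), "peak out:", round(np.max(np.abs(y)), 3))
```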


Subject(s)
Sound; Speech Perception/physiology; Acoustic Stimulation; Analysis of Variance; Hearing Loss, Sensorineural; Humans; Middle Aged; Noise; Time Factors
7.
J Acoust Soc Am ; 98(6): 3182-7, 1995 Dec.
Article in English | MEDLINE | ID: mdl-8550942

ABSTRACT

Paired-comparison judgments of quality were obtained from 20 hearing-impaired listeners for speech processed through simulated compression hearing aids varying in release time (60, 200, 1000 ms) at three different compression ratios (1.5, 2, 3:1) and for three different background noises (ventilation, apartment, cafeteria). Analysis revealed no significant main effect of release time on perceived quality, but the interaction between release time and noise type was significant. While no significant difference in preference among release times was evident for the ventilation noise, the longer release times (200 and 1000 ms) were preferred for the higher-level noises (apartment noise, cafeteria noise). Post hoc testing revealed that the mean preference scores for the 200- and 1000-ms release times were significantly greater than the score for the 60-ms release time with the competing cafeteria noise (p < 0.05). Analysis of individual subject data revealed statistically significant preferences that differed from the group mean, suggesting that individualized fitting of this parameter of a compression hearing aid might be warranted.
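
Paired-comparison preference scores of this kind are typically summarized as the proportion of comparisons in which each alternative was chosen. The sketch below shows that tally on hypothetical judgment data; it is not the analysis reported in the article, and the data layout is invented.

```python
from collections import Counter

# Hypothetical paired-comparison outcomes: each tuple is (preferred, rejected)
# release time in ms for one trial of one listener.
judgments = [(200, 60), (1000, 60), (200, 60), (1000, 200), (60, 200), (1000, 60)]

wins = Counter(preferred for preferred, _ in judgments)
appearances = Counter()
for preferred, rejected in judgments:
    appearances[preferred] += 1
    appearances[rejected] += 1

for rt in sorted(appearances):
    print(f"{rt} ms release: preferred in {wins[rt]}/{appearances[rt]} comparisons")
```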


Subject(s)
Hearing Aids; Hearing Loss, Sensorineural/rehabilitation; Adolescent; Adult; Aged; Auditory Threshold; Computer Simulation; Female; Humans; Male; Middle Aged; Noise; Speech Perception; Speech Reception Threshold Test
8.
Am J Audiol ; 3(2): 52-8, 1994 Jul 01.
Article in English | MEDLINE | ID: mdl-26661607

ABSTRACT

Wave I latencies were used to predict the magnitude of conductive components in 80 infants and young children (122 ears) with normal hearing, conductive hearing loss due to otitis media or aural atresia, sensorineural hearing loss, and mixed hearing loss. Two prediction methods were used. The first method based predictions on a 0.03-ms wave I latency delay for each decibel of conductive hearing loss. The second method was based on a regression analysis of wave I latency delays and the magnitude of conductive component for the subjects in this study with normal cochlear status. On average, these prediction methods resulted in prediction errors of 15 dB or greater in over one-third of the ears with hearing loss. Therefore, the clinical use of wave I latencies to predict the presence or magnitude of conductive impairment is not recommended for infants and young children. Instead, bone-conduction ABR testing is recommended as a direct measure of cochlear status when behavioral evaluation is not possible.
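
The first prediction rule reduces to a one-line formula: predicted conductive component (dB) = wave I latency delay (ms) / 0.03. A minimal worked example follows, with the latency delay a hypothetical value rather than a figure from the study.

```python
def predicted_conductive_loss_db(wave_i_delay_ms, ms_per_db=0.03):
    """First prediction method: each 0.03 ms of wave I latency delay is taken
    to reflect 1 dB of conductive hearing loss."""
    return wave_i_delay_ms / ms_per_db

# Hypothetical example: a wave I latency 0.6 ms later than normative values
# would predict a conductive component of about 20 dB.
print(round(predicted_conductive_loss_db(0.6)), "dB predicted conductive component")
```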

9.
Hear Res ; 65(1-2): 61-8, 1993 Feb.
Article in English | MEDLINE | ID: mdl-8458760

ABSTRACT

Several studies have compared the frequency selectivity of waves I and V of the auditory brainstem response (ABR) in humans; however, little is known about the frequency selectivity of the middle latency response (MLR). Simultaneous recordings of ABRs and MLRs to 60 dB peSPL 2000-Hz probe tones were obtained in the presence of 0.5, 1.0, 1.41, 2.0, 2.83, and 4.0 kHz maskers presented at 40, 60, and 80 dB SPL. ABR/MLR iso-intensity masking profiles, showing the percentage of the unmasked amplitude as a function of masker frequency, were constructed for ABR peak V-Vn and MLR peaks Na-Pa and Nb-Pb at each masker intensity. No significant differences were found between the frequency selectivity of the ABR and MLR, and the effects of masking on the amplitudes of these responses were similar. These results are consistent with the suggestion that frequency tuning is similar up to the level of the primary auditory cortex.
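
An iso-intensity masking profile of the kind described expresses each masked response amplitude as a percentage of the unmasked amplitude at the corresponding masker frequency. The sketch below shows that calculation; the amplitude values are hypothetical and illustrative only.

```python
# Hypothetical peak-to-peak amplitudes (microvolts) for one masker intensity.
unmasked_amplitude_uv = 0.40  # e.g., ABR wave V-Vn with no masker present
masked_amplitudes_uv = {      # masker frequency (kHz) -> masked amplitude
    0.5: 0.38, 1.0: 0.33, 1.41: 0.25, 2.0: 0.12, 2.83: 0.22, 4.0: 0.34,
}

profile = {freq: 100 * amp / unmasked_amplitude_uv
           for freq, amp in masked_amplitudes_uv.items()}
for freq in sorted(profile):
    print(f"{freq} kHz masker: {profile[freq]:.0f}% of unmasked amplitude")
```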


Subject(s)
Evoked Potentials, Auditory, Brain Stem; Evoked Potentials, Auditory; Acoustic Stimulation; Adult; Audiometry, Pure-Tone; Auditory Pathways; Auditory Threshold; Female; Humans; Male; Perceptual Masking