1.
Biol Sex Differ ; 13(1): 49, 2022 09 16.
Article in English | MEDLINE | ID: mdl-36114557

ABSTRACT

BACKGROUND: For more than 150 years, research studies have documented greater variability across males than across females ("greater male variability," GMV) over a broad range of behavioral and morphological measures. In placental mammals, an ancient difference between males and females that may make an important contribution to GMV is the different pattern of activation of X chromosomes across cells in females (mosaic inactivation of one of the two X chromosomes across cells) vs. males (consistent activation of a single X chromosome in all cells). In the current study, variability in hearing thresholds was examined for human listeners with thresholds within the normal range. Initial analyses compared variability in thresholds across males vs. across females. If greater across-male than across-female variability was present, and if these differences in variability were related to the different patterns of X-chromosome activation in males vs. females, it was expected that correlations between related measures within a given subject (e.g., hearing thresholds at a given frequency in the two ears) would be greater in males than in females. METHODS: Hearing thresholds at audiometric test frequencies (500-6000 or 500-8000 Hz) were extracted from two datasets representing more than 8500 listeners with normal hearing (4590 males, 4376 females). Separate data analyses were carried out on each dataset to compare: (1) relative variability in hearing thresholds across males vs. across females at each test frequency; (2) correlations between both across-ear and within-ear hearing thresholds within males vs. within females; and (3) mean thresholds for females vs. males at each frequency. RESULTS: A consistent pattern of GMV in hearing thresholds was seen across frequencies in both datasets. In addition, both across-ear and within-ear correlations between thresholds were consistently greater in males than in females. Previous studies have frequently reported lower mean thresholds for females than males for listeners with normal hearing. One of the datasets replicated this result, showing a clear and consistent pattern of lower mean thresholds for females. The second dataset did not show clear evidence of this female advantage. CONCLUSIONS: Hearing thresholds showed clear evidence of greater variability across males than across females and higher correlations across related threshold measures within males than within females. The results support a link between the observed GMV and the mosaic pattern of X-activation in females that is not present in males.
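The two quantitative comparisons described above, a male/female variance ratio at each test frequency and within-sex correlations between related threshold measures, can be illustrated with a minimal sketch. The column names (sex, freq_hz, thr_left, thr_right) are hypothetical placeholders, not the study's actual variables.

```python
# Minimal sketch (not the study's code): per-frequency male/female variance
# ratios and within-sex across-ear correlations from a hypothetical table of
# per-listener thresholds.
import numpy as np
import pandas as pd

def variance_ratio_by_freq(df: pd.DataFrame) -> dict:
    """Male/female variance ratio of thresholds at each test frequency."""
    out = {}
    for freq, grp in df.groupby("freq_hz"):
        var_m = grp.loc[grp["sex"] == "M", "thr_left"].var(ddof=1)
        var_f = grp.loc[grp["sex"] == "F", "thr_left"].var(ddof=1)
        out[freq] = var_m / var_f  # ratio > 1 indicates greater male variability
    return out

def across_ear_correlation(df: pd.DataFrame, sex: str) -> float:
    """Pearson correlation between left- and right-ear thresholds within one sex."""
    grp = df[df["sex"] == sex]
    return float(np.corrcoef(grp["thr_left"], grp["thr_right"])[0, 1])
```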


Subjects
Placenta, Sex Characteristics, Auditory Threshold/physiology, Female, Hearing, Humans, Male, Pregnancy, X Chromosome
3.
J Am Acad Audiol ; 24(4): 274-92, 2013 Apr.
Article in English | MEDLINE | ID: mdl-23636209

ABSTRACT

BACKGROUND: It is widely believed that suprathreshold distortions in auditory processing contribute to the speech recognition deficits experienced by hearing-impaired (HI) listeners in noise. Damage to outer hair cells and attendant reductions in peripheral compression and frequency selectivity may contribute to these deficits. In addition, reduced access to temporal fine structure (TFS) information in the speech waveform may play a role. PURPOSE: To examine how measures of peripheral compression, frequency selectivity, and TFS sensitivity relate to speech recognition performance by HI listeners. To determine whether distortions in processing reflected by these psychoacoustic measures are more closely associated with speech deficits in steady-state or modulated noise. RESEARCH DESIGN: Normal-hearing (NH) and HI listeners were tested on tasks examining frequency selectivity (notched-noise task), peripheral compression (temporal masking curve task), and sensitivity to TFS information (frequency modulation [FM] detection task) in the presence of random amplitude modulation. Performance was tested at 500, 1000, 2000, and 4000 Hz at several presentation levels. The same listeners were tested on sentence recognition in steady-state and modulated noise at several signal-to-noise ratios. STUDY SAMPLE: Ten NH and 18 HI listeners were tested. NH listeners ranged in age from 36 to 80 yr (M = 57.6). For HI listeners, ages ranged from 58 to 87 yr (M = 71.8). RESULTS: Scores on the FM detection task at 1 and 2 kHz were significantly correlated with speech scores in both noise conditions. Frequency selectivity and compression measures were not as clearly associated with speech performance. Speech Intelligibility Index (SII) analyses indicated only small differences in speech audibility across subjects for each signal-to-noise ratio (SNR) condition that would predict differences in speech scores no greater than 10% at a given SNR. Actual speech scores varied by as much as 80% across subjects. CONCLUSIONS: The results suggest that distorted processing of audible speech cues was a primary factor accounting for differences in speech scores across subjects and that reduced ability to use TFS cues may be an important component of this distortion. The influence of TFS cues on speech scores was comparable in steady-state and modulated noise. Speech recognition was not related to audibility, represented by the SII, once high-frequency sensitivity differences across subjects (beginning at 5 kHz) were removed statistically. This might indicate that high-frequency hearing loss is associated with distortions in processing in lower-frequency regions.
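As a rough illustration of the correlational analysis reported above, the sketch below relates per-listener psychoacoustic measures to speech scores. Variable names are hypothetical and the statistic is a generic Pearson correlation, not the study's full analysis.

```python
# Minimal sketch: correlate each psychoacoustic measure (e.g., FM detection
# thresholds, frequency selectivity, compression estimates) with per-listener
# speech-recognition scores. Not the study's analysis code.
from scipy.stats import pearsonr

def correlate_with_speech(measures: dict, speech_scores: list) -> dict:
    """measures maps a measure name to per-listener values, aligned with speech_scores."""
    return {name: pearsonr(values, speech_scores) for name, values in measures.items()}

# Hypothetical usage:
# results = correlate_with_speech({"fm_detection_2k": fm_2k}, speech_in_noise)
# r, p = results["fm_detection_2k"]
```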


Subjects
Auditory Threshold/physiology, Hearing Loss, Sensorineural/physiopathology, Hearing/physiology, Persons With Hearing Impairments, Speech Perception/physiology, Adult, Aged, Aged, 80 and over, Audiometry, Female, Hearing Loss, Sensorineural/diagnosis, Humans, Male, Middle Aged, Noise, Perceptual Masking/physiology, Psychoacoustics, Speech Discrimination Tests
4.
J Am Acad Audiol ; 24(4): 307-28, 2013 Apr.
Article in English | MEDLINE | ID: mdl-23636211

ABSTRACT

BACKGROUND: Hearing-impaired (HI) individuals with similar ages and audiograms often demonstrate substantial differences in speech-reception performance in noise. Traditional models of speech intelligibility focus primarily on average performance for a given audiogram, failing to account for differences between listeners with similar audiograms. Improved prediction accuracy might be achieved by simulating differences in the distortion that speech may undergo when processed through an impaired ear. Although some attempts to model particular suprathreshold distortions can explain general speech-reception deficits not accounted for by audibility limitations, little has been done to model suprathreshold distortion and predict speech-reception performance for individual HI listeners. Auditory-processing models incorporating individualized measures of auditory distortion, along with audiometric thresholds, could provide a more complete understanding of speech-reception deficits by HI individuals. A computational model capable of predicting individual differences in speech-recognition performance would be a valuable tool in the development and evaluation of hearing-aid signal-processing algorithms for enhancing speech intelligibility. PURPOSE: This study investigated whether biologically inspired models simulating peripheral auditory processing for individual HI listeners produce more accurate predictions of speech-recognition performance than audiogram-based models. RESEARCH DESIGN: Psychophysical data on spectral and temporal acuity were incorporated into individualized auditory-processing models consisting of three stages: a peripheral stage, customized to reflect individual audiograms and spectral and temporal acuity; a cortical stage, which extracts spectral and temporal modulations relevant to speech; and an evaluation stage, which predicts speech-recognition performance by comparing the modulation content of clean and noisy speech. To investigate the impact of different aspects of peripheral processing on speech predictions, individualized details (absolute thresholds, frequency selectivity, spectrotemporal modulation [STM] sensitivity, compression) were incorporated progressively, culminating in a model simulating level-dependent spectral resolution and dynamic-range compression. STUDY SAMPLE: Psychophysical and speech-reception data from 11 HI and six normal-hearing listeners were used to develop the models. DATA COLLECTION AND ANALYSIS: Eleven individualized HI models were constructed and validated against psychophysical measures of threshold, frequency resolution, compression, and STM sensitivity. Speech-intelligibility predictions were compared with measured performance in stationary speech-shaped noise at signal-to-noise ratios (SNRs) of -6, -3, 0, and 3 dB. Prediction accuracy for the individualized HI models was compared to the traditional audibility-based Speech Intelligibility Index (SII). RESULTS: Models incorporating individualized measures of STM sensitivity yielded significantly more accurate within-SNR predictions than the SII. Additional individualized characteristics (frequency selectivity, compression) improved the predictions only marginally. A nonlinear model including individualized level-dependent cochlear-filter bandwidths, dynamic-range compression, and STM sensitivity predicted performance more accurately than the SII but was no more accurate than a simpler linear model. Predictions of speech-recognition performance simultaneously across SNRs and individuals were also significantly better for some of the auditory-processing models than for the SII. CONCLUSIONS: A computational model simulating individualized suprathreshold auditory-processing abilities produced more accurate speech-intelligibility predictions than the audibility-based SII. Most of this advantage was realized by a linear model incorporating audiometric and STM-sensitivity information. Although more consistent with known physiological aspects of auditory processing, modeling level-dependent changes in frequency selectivity and gain did not result in more accurate predictions of speech-reception performance.
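A rough sketch of the evaluation-stage idea, predicting intelligibility from the similarity between internal representations of clean and noisy speech, is shown below. A plain log spectrogram stands in for the model's cortical spectrotemporal-modulation stage, and the analysis parameters are illustrative only.

```python
# Minimal sketch of the evaluation-stage idea: compare representations of clean
# and noisy speech and use their similarity as an intelligibility predictor.
# A generic log spectrogram stands in for the model's cortical stage.
import numpy as np
from scipy.signal import spectrogram

def representation_similarity(clean: np.ndarray, noisy: np.ndarray, fs: int) -> float:
    _, _, s_clean = spectrogram(clean, fs=fs, nperseg=512, noverlap=256)
    _, _, s_noisy = spectrogram(noisy, fs=fs, nperseg=512, noverlap=256)
    a = np.log(s_clean + 1e-10).ravel()
    b = np.log(s_noisy + 1e-10).ravel()
    a -= a.mean()
    b -= b.mean()
    # Normalized correlation in [-1, 1]; higher values predict better recognition.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```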


Subjects
Auditory Threshold/physiology, Hearing Loss, Sensorineural/physiopathology, Hearing/physiology, Models, Biological, Perceptual Distortion/physiology, Speech Perception/physiology, Algorithms, Audiometry, Auditory Cortex/physiology, Cochlea/physiology, Cognition/physiology, Female, Hearing Loss, Sensorineural/diagnosis, Humans, Linear Models, Male, Noise, Psychoacoustics, Speech Discrimination Tests
6.
J Acoust Soc Am ; 132(4): 2676-89, 2012 Oct.
Article in English | MEDLINE | ID: mdl-23039460

ABSTRACT

Adaptive signal-to-noise ratio (SNR) tracking is often used to measure speech reception in noise. Because SNR varies with performance using this method, data interpretation can be confounded when measuring an SNR-dependent effect such as the fluctuating-masker benefit (FMB) (the intelligibility improvement afforded by brief dips in the masker level). One way to overcome this confound, and allow FMB comparisons across listener groups with different stationary-noise performance, is to adjust the response set size to equalize performance across groups at a fixed SNR. However, this technique is only valid under the assumption that changes in set size have the same effect on percentage-correct performance for different masker types. This assumption was tested by measuring nonsense-syllable identification for normal-hearing listeners as a function of SNR, set size and masker (stationary noise, 4- and 32-Hz modulated noise and an interfering talker). Set-size adjustment had the same impact on performance scores for all maskers, confirming the independence of FMB (at matched SNRs) and set size. These results, along with those of a second experiment evaluating an adaptive set-size algorithm to adjust performance levels, establish set size as an efficient and effective tool to adjust baseline performance when comparing effects of masker fluctuations between listener groups.
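The fluctuating-masker benefit (FMB) defined above is simply the difference in percent correct between modulated and stationary maskers at the same SNR. A tiny sketch, with hypothetical numbers:

```python
# Minimal sketch: FMB = percent correct with a fluctuating masker minus percent
# correct with stationary noise, compared at the same SNR. Numbers are hypothetical.
def fluctuating_masker_benefit(scores: dict, snr_db: float) -> float:
    """scores is keyed by (masker_type, snr_db) -> percent correct."""
    return scores[("modulated", snr_db)] - scores[("stationary", snr_db)]

scores = {("stationary", -6.0): 45.0, ("modulated", -6.0): 62.0}
print(fluctuating_masker_benefit(scores, -6.0))  # 17.0 percentage points
```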


Subjects
Noise/adverse effects, Perceptual Masking, Speech Perception, Speech Reception Threshold Test, Acoustic Stimulation, Adult, Algorithms, Analysis of Variance, Auditory Threshold, Feasibility Studies, Female, Humans, Male, Speech Acoustics, Speech Intelligibility, Young Adult
7.
J Am Acad Audiol ; 20(10): 607-20, 2009.
Article in English | MEDLINE | ID: mdl-20503799

ABSTRACT

BACKGROUND: Although the benefits of amplification for persons with impaired hearing are well established, many potential candidates do not obtain and use hearing aids. In some cases, this is because the individual is not convinced that amplification will be of sufficient benefit in those everyday listening situations where he or she is experiencing difficulties. PURPOSE: To describe the development of a naturalistic approach to assessing hearing aid candidacy and motivating hearing aid use based on patient preferences for unamplified and amplified sound samples typical of those encountered in everyday living and to assess the validity of these preference ratings to predict hearing aid candidacy. RESEARCH DESIGN: Prospective experimental study comparing preference ratings for unamplified and amplified sound samples of patients with a clinical recommendation for hearing aid use and patients for whom amplification was not prescribed. STUDY SAMPLE: Forty-eight adults self-referred to the Army Audiology and Speech Center for a hearing evaluation. DATA COLLECTION AND ANALYSIS: Unamplified and amplified sound samples were presented to potential hearing aid candidates using a three-alternative forced-choice paradigm. Participants were free to switch at will among the three processing options (no gain, mild gain, moderate gain) until the preferred option was determined. Following this task, each participant was seen for a diagnostic hearing evaluation by one of eight staff audiologists with no knowledge of the preference data. Patient preferences for the three processing options were used to predict the attending audiologists' recommendations for amplification based on traditional audiometric measures. RESULTS: Hearing aid candidacy was predicted with moderate accuracy from the patients' preferences for amplified sounds typical of those encountered in everyday living, although the predictive validity of the various sound samples varied widely. CONCLUSIONS: Preferences for amplified sounds were generally predictive of hearing aid candidacy. However, the predictive validity of the preference ratings was not sufficient to replace traditional clinical determinations of hearing aid candidacy in individual patients. Because the sound samples are common to patients' everyday listening experiences, they provide a quick and intuitive method of demonstrating the potential benefit of amplification to patients who might otherwise not accept a prescription for hearing aids.


Subjects
Directive Counseling/methods, Hearing Aids, Hearing Loss/psychology, Hearing Loss/rehabilitation, Motivation, Patient Acceptance of Health Care, Activities of Daily Living, Adult, Child, Choice Behavior, Female, Humans, Male, Patient Selection, Prospective Studies, Reproducibility of Results, Task Performance and Analysis
8.
Ear Hear ; 29(2): 199-213, 2008 Apr.
Article in English | MEDLINE | ID: mdl-18595186

ABSTRACT

OBJECTIVES: Studies have shown that listener preferences for omnidirectional (OMNI) or directional (DIR) processing in hearing aids depend largely on the characteristics of the listening environment, including the relative locations of the listener, signal sources, and noise sources, and whether reverberation is present. Many modern hearing aids incorporate algorithms to switch automatically between microphone modes based on an analysis of the acoustic environment. Little work has been done, however, to evaluate these devices with respect to user preferences, or to compare the outputs of different signal processing algorithms directly to make informed choices between the different microphone modes. This study describes a strategy for automatically switching between DIR and OMNI microphone modes based on a direct comparison between acoustic speech signals processed by DIR and OMNI algorithms in the same listening environment. In addition, data are shown regarding how a decision to choose one microphone mode over another might change as a function of speech-to-noise ratio (SNR) and spatial orientation of the listener. DESIGN: Speech and noise signals were presented at a variety of SNRs and in different spatial orientations relative to a listener's head. Monaural recordings, made in both OMNI and DIR microphone processing modes, were analyzed using a model of auditory processing that highlights the spectral and temporal dynamics of speech. Differences between OMNI and DIR processing were expressed in terms of a modified spectrotemporal modulation index (mSTMI) developed specifically for this hearing aid application. Differences in mSTMI values were compared with intelligibility measures and user preference judgments made under the same listening conditions. RESULTS: A comparison between the results of the mSTMI analyses and behavioral data (intelligibility and preference judgments) showed excellent agreement, especially in stationary noise backgrounds. In addition, the mSTMI was found to be sensitive to changes in SNR as well as spatial orientation of the listener relative to signal and noise sources. Subsequent mSTMI analyses on hearing aid recordings obtained from real-life environments with more than one talker and modulated noise backgrounds also showed promise for predicting the preferred microphone setting in varied and complex listening environments.
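The direct-comparison strategy can be sketched as follows: score OMNI- and DIR-processed recordings of the same environment against a clean-speech reference with some modulation-domain similarity metric, and predict the mode with the higher score. The similarity function below is an abstract placeholder, not an implementation of the modified STMI itself.

```python
# Minimal sketch of the direct-comparison strategy. `similarity` is a placeholder
# for a modulation-domain metric (the study uses a modified STMI); this is not
# that metric's implementation.
def compare_microphone_modes(omni_rec, dir_rec, clean_ref, fs, similarity):
    score_omni = similarity(omni_rec, clean_ref, fs)
    score_dir = similarity(dir_rec, clean_ref, fs)
    predicted = "DIR" if score_dir > score_omni else "OMNI"
    return {"omni": score_omni, "dir": score_dir, "predicted_preference": predicted}
```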


Assuntos
Auxiliares de Audição , Transtornos da Audição/terapia , Estimulação Acústica/instrumentação , Algoritmos , Audiometria de Tons Puros , Limiar Auditivo/fisiologia , Meio Ambiente , Estudos de Viabilidade , Humanos , Ruído , Desenho de Prótese , Índice de Gravidade de Doença , Acústica da Fala , Percepção da Fala
9.
J Am Acad Audiol ; 19(9): 708-20, 2008 Oct.
Article in English | MEDLINE | ID: mdl-19418710

ABSTRACT

BACKGROUND: Hearing aids today often provide both directional (DIR) and omnidirectional (OMNI) processing options with the currently active mode selected automatically by the device. The most common approach to automatic switching involves "acoustic scene analysis" where estimates of various acoustic properties of the listening environment (e.g., signal-to-noise ratio [SNR], overall sound level) are used as a basis for switching decisions. PURPOSE: The current study was carried out to evaluate an alternative, "direct-comparison" approach to automatic switching that does not involve assumptions about how the listening environment may relate to microphone preferences. Predictions of microphone preference were based on whether DIR- or OMNI-processing of a given listening environment produced a closer match to a reference template representing the spectral and temporal modulations present in clean speech. RESEARCH DESIGN: A descriptive and correlational study. Predictions of OMNI/DIR preferences were determined based on degree of similarity between spectral and temporal modulations contained in a reference, clean-speech template, and in OMNI- and DIR-processed recordings of various listening environments. These predictions were compared with actual preference judgments (both real-world judgments and laboratory responses to the recordings). DATA COLLECTION AND ANALYSIS: Predictions of microphone preference were based on whether DIR- or OMNI-processing of a given listening environment produced a closer match to a reference template representing clean speech. The template is the output of an auditory processing model that characterizes the spectral and temporal modulations associated with a given input signal (clean speech in this case). A modified version of the spectro-temporal modulation index (mSTMI) was used to compare the template to both DIR- and OMNI-processed versions of a given listening environment, as processed through the same auditory model. These analyses were carried out on recordings (originally collected by Walden et al, 2007) of OMNI- and DIR-processed speech produced in a range of everyday listening situations. Walden et al reported OMNI/DIR preference judgments made by raters at the same time the field recordings were made and judgments based on laboratory presentations of these recordings to hearing-impaired and normal-hearing listeners. Preference predictions based on the mSTMI analyses were compared with both sets of preference judgments. RESULTS: The mSTMI analyses showed better than 92% accuracy in predicting the field preferences and 82-85% accuracy in predicting the laboratory preference judgments. OMNI processing tended to be favored over DIR processing in cases where the analysis indicated fairly similar mSTMI scores across the two processing modes. This is consistent with the common clinical assignment of OMNI mode as the default setting, most likely to be preferred in cases where neither mode produces a substantial improvement in SNR. Listeners experienced with switchable OMNI/DIR hearing aids were more likely than other listeners to favor the DIR mode in instances where mSTMI scores only slightly favored DIR processing. CONCLUSIONS: A direct-comparison approach to OMNI/DIR mode selection was generally successful in predicting user preferences in a range of listening environments. Future modifications to the approach to further improve predictive accuracy are discussed.
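Consistent with the finding that OMNI tends to be preferred when the two mSTMI scores are close, a switching rule might require DIR to beat OMNI by a margin before leaving the OMNI default. The margin value below is illustrative, not taken from the study.

```python
# Minimal sketch of a margin-based switching rule: stay in the OMNI default
# unless the DIR score exceeds the OMNI score by a clear margin. The margin
# value is illustrative only.
def select_microphone_mode(score_omni: float, score_dir: float, margin: float = 0.05) -> str:
    return "DIR" if (score_dir - score_omni) > margin else "OMNI"
```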


Assuntos
Estimulação Acústica/instrumentação , Auxiliares de Audição , Perda Auditiva/terapia , Processamento de Sinais Assistido por Computador , Localização de Som , Meio Ambiente , Desenho de Equipamento , Humanos , Modelos Biológicos , Satisfação do Paciente , Acústica da Fala , Percepção da Fala
10.
J Am Acad Audiol ; 18(5): 358-79, 2007 May.
Article in English | MEDLINE | ID: mdl-17715647

ABSTRACT

Automatic directionality algorithms currently implemented in hearing aids assume that hearing-impaired persons with similar hearing losses will prefer the same microphone processing mode in a specific everyday listening environment. The purpose of this study was to evaluate the robustness of microphone preferences in everyday listening. Two hearing-impaired persons made microphone preference judgments (omnidirectional preferred, directional preferred, no preference) in a variety of everyday listening situations. Simultaneously, these acoustic environments were recorded through the omnidirectional and directional microphone processing modes. The acoustic recordings were later presented in a laboratory setting for microphone preference judgments to the original two listeners and to other listeners who differed in hearing ability and experience with directional microphone processing. The original two listeners were able to replicate their live microphone preferences in the laboratory with a high degree of accuracy. This suggests that the bases of the original live microphone preferences were largely represented in the acoustic recordings. Other hearing-impaired and normal-hearing participants who listened to the environmental recordings also accurately replicated the original live omnidirectional preferences; however, directional preferences were not as robust across the listeners. When the laboratory rating did not replicate the live directional microphone preference, listeners almost always expressed no preference for either microphone mode. Hence, a preference for omnidirectional processing was rarely expressed by any of the participants at recorded sites where directional processing had been preferred as a live judgment, and vice versa. These results are interpreted to provide little basis for customizing automatic directionality algorithms for individual patients. The implications of these findings for hearing aid design are discussed.


Assuntos
Meio Ambiente , Auxiliares de Audição , Perda Auditiva/terapia , Satisfação do Paciente , Percepção da Fala , Estimulação Acústica , Idoso , Idoso de 80 Anos ou mais , Algoritmos , Estudos de Coortes , Feminino , Testes Auditivos , Humanos , Masculino , Pessoa de Meia-Idade , Inquéritos e Questionários
11.
J Acoust Soc Am ; 122(2): 1130-7, 2007 Aug.
Article in English | MEDLINE | ID: mdl-17672659

ABSTRACT

These experiments examined how high presentation levels influence speech recognition for high- and low-frequency stimuli in noise. Normally hearing (NH) and hearing-impaired (HI) listeners were tested. In Experiment 1, high- and low-frequency bandwidths yielding 70%-correct word recognition in quiet were determined at levels associated with broadband speech at 75 dB SPL. In Experiment 2, broadband and band-limited sentences (based on passbands measured in Experiment 1) were presented at this level in speech-shaped noise filtered to the same frequency bandwidths as targets. Noise levels were adjusted to produce approximately 30%-correct word recognition. Frequency bandwidths and signal-to-noise ratios supporting criterion performance in Experiment 2 were tested at 75, 87.5, and 100 dB SPL in Experiment 3. Performance tended to decrease as levels increased. For NH listeners, this "rollover" effect was greater for high-frequency and broadband materials than for low-frequency stimuli. For HI listeners, the 75- to 87.5-dB increase improved signal audibility for high-frequency stimuli and rollover was not observed. However, the 87.5- to 100-dB increase produced qualitatively similar results for both groups: scores decreased most for high-frequency stimuli and least for low-frequency materials. Predictions of speech intelligibility by quantitative methods such as the Speech Intelligibility Index may be improved if rollover effects are modeled as frequency dependent.
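One way the closing suggestion could be realized is an SII-style band-importance sum in which a level-dependent "rollover" penalty is steeper in high-frequency bands. The sketch below is a hypothetical illustration, with made-up knee point and slopes, not a proposed revision of the SII or part of the study.

```python
# Hypothetical sketch of a frequency-dependent rollover term in an SII-style
# band-importance sum. The 75-dB knee point and the per-band slopes are
# illustrative values, not parameters from the study or the SII standard.
def sii_like_score(band_audibility, band_importance, band_level_db, band_cf_hz,
                   rollover_knee_db=75.0):
    score = 0.0
    for aud, imp, level, cf in zip(band_audibility, band_importance,
                                   band_level_db, band_cf_hz):
        excess = max(0.0, level - rollover_knee_db)
        slope = 0.02 if cf >= 2000.0 else 0.01   # steeper rollover at high frequencies
        penalty = min(1.0, excess * slope)
        score += imp * aud * (1.0 - penalty)
    return score
```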


Assuntos
Perda Auditiva/fisiopatologia , Audição/fisiologia , Ruído , Inteligibilidade da Fala , Percepção da Fala , Fala , Idoso , Limiar Auditivo , Feminino , Perda Auditiva de Alta Frequência/fisiopatologia , Perda Auditiva Neurossensorial/fisiopatologia , Humanos , Idioma , Pessoa de Meia-Idade , Interface para o Reconhecimento da Fala
12.
J Speech Lang Hear Res ; 47(2): 245-56, 2004 Apr.
Article in English | MEDLINE | ID: mdl-15157127

ABSTRACT

Listeners with normal-hearing sensitivity recognize speech more accurately in the presence of fluctuating background sounds, such as a single competing voice, than in unmodulated noise at the same overall level. These performance differences are greatly reduced in listeners with hearing impairment, who generally receive little benefit from fluctuations in masker envelopes. If this lack of benefit is entirely due to elevated quiet thresholds and the resulting inaudibility of low-amplitude portions of signal + masker, then listeners with hearing impairment should derive increasing benefit from masker fluctuations as presentation levels increase. Listeners with normal-hearing (NH) sensitivity and listeners with hearing impairment (HI) were tested for sentence recognition at moderate and high presentation levels in competing speech-shaped noise, in competing speech by a single talker, and in competing time-reversed speech by the same talker. NH listeners showed more accurate recognition at moderate than at high presentation levels and better performance in fluctuating maskers than in unmodulated noise. For these listeners, modulated versus unmodulated performance differences tended to decrease at high presentation levels. Listeners with HI, as a group, showed performance that was more similar across maskers and presentation levels. Considered individually, only 2 out of 6 listeners with HI showed better overall performance and increasing benefit from masker fluctuations as presentation level increased. These results suggest that audibility alone does not completely account for the group differences in performance with fluctuating maskers; suprathreshold processing differences between groups also appear to play an important role. Competing speech frequently provided more effective masking than time-reversed speech containing temporal fluctuations of equal magnitude. This finding is consistent with "informational" masking resulting from competitive processing of words and phrases within the speech masker that would not occur for time-reversed sentences.


Assuntos
Perda Auditiva/fisiopatologia , Ruído/efeitos adversos , Mascaramento Perceptivo , Percepção da Fala , Teste do Limiar de Recepção da Fala , Estimulação Acústica , Adulto , Idoso , Estudos de Casos e Controles , Humanos , Pessoa de Meia-Idade , Teste do Limiar de Recepção da Fala/métodos
14.
J Acoust Soc Am ; 114(1): 294-306, 2003 Jul.
Article in English | MEDLINE | ID: mdl-12880042

ABSTRACT

Harmonic complexes composed of the same spectral components in either positive-Schroeder (+Schr) or negative-Schroeder (-Schr) phase [see Schroeder, IEEE Trans. Inf. Theory 16, 85-89 (1970)] have identical long-term spectra and similar waveform envelopes. However, localized patterns of basilar-membrane (BM) excitation can be quite different in response to these two stimuli. Measurements in chinchillas showed more modulated (peakier) BM excitation for +Schr than -Schr complexes [Recio and Rhode, J. Acoust. Soc. Am. 108, 2281-2298 (2000)]. In the current study, laser velocimetry was used to examine BM responses at a location tuned to approximately 17 kHz in the basal turn of the guinea-pig cochlea, for +Schr and -Schr complexes with a 203-Hz fundamental frequency and including 101 equal-amplitude components from 2031 to 22,344 Hz. At 35 dB SPL, +Schr response waveforms showed greater amplitude modulation than -Schr responses. With increasing stimulation level, internal modulation decreased for both complexes. To understand the observed phenomena quantitatively, responses were predicted on the basis of a linearized model of the cochlea. Prediction was based on an "indirect impulse response" measured in the same animal. Response waveforms for Schroeder-phase signals were accurately predicted, provided that the level of the indirect impulse used in prediction closely matched the level of the Schroeder-phase stimulus. This result confirms that the underlying model, which originally was developed for noise stimuli, is valid for stimuli that produce completely different response waveforms. Moreover, it justifies explanation of cochlear filtering (i.e., differential treatment of different frequencies) in terms of a linear system.
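A minimal synthesis sketch for Schroeder-phase complexes of the kind described above is given below. The component phases use the form phi_n = ±πn(n+1)/N commonly applied in this literature (after Schroeder, 1970); the fundamental and harmonic range roughly match the stimulus description, while the duration and sample rate are illustrative choices.

```python
# Sketch of Schroeder-phase harmonic-complex synthesis (not the study's stimulus
# code). Phases follow phi_n = sign * pi * n * (n + 1) / N; harmonics 10-110 of
# ~203 Hz approximate the described 2031-22,344 Hz component range.
import numpy as np

def schroeder_complex(f0=203.0, n_low=10, n_high=110, sign=+1, dur=0.5, fs=96000):
    """Equal-amplitude harmonic complex; sign=+1 gives +Schr, sign=-1 gives -Schr."""
    t = np.arange(int(dur * fs)) / fs
    n_components = n_high - n_low + 1
    x = np.zeros_like(t)
    for n in range(n_low, n_high + 1):
        phi = sign * np.pi * n * (n + 1) / n_components
        x += np.cos(2 * np.pi * n * f0 * t + phi)
    return x / np.max(np.abs(x))   # normalize peak amplitude
```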


Assuntos
Membrana Basilar/fisiologia , Percepção Sonora/fisiologia , Percepção da Altura Sonora/fisiologia , Animais , Cobaias , Modelos Lineares , Psicoacústica , Reologia , Espectrografia do Som
15.
Ear Hear ; 24(2): 133-42, 2003 Apr.
Article in English | MEDLINE | ID: mdl-12677110

ABSTRACT

OBJECTIVE: Recent studies indicate that high-frequency amplification may provide little benefit for listeners with moderate-to-severe high-frequency hearing loss, and may even reduce speech recognition. Moore and colleagues have proposed a direct link between this lack of benefit and the presence of regions of nonfunctioning inner hair cells (dead regions) in the basal cochlea and have suggested that psychophysical tuning curves (PTCs) and tone detection thresholds in threshold-equalizing noise (TEN) are psychoacoustic measures that allow detection of dead regions (Moore, Huss, Vickers, Glasberg, & Alcántara, 2000; Vickers, Moore, & Baer, 2001). The experiments reported here examine the consistency of TEN and PTC tasks in identifying dead regions in listeners with high-frequency hearing loss. DESIGN: Seventeen listeners (18 ears) with steeply sloping moderate-to-severe high-frequency hearing loss were tested in PTC and TEN tasks intended to identify ears with high-frequency dead regions. In the PTC task, pure-tone signals of fixed level were masked by narrowband noise that slowly increased in center frequency. For a range of signal frequencies, noise levels at masked threshold were determined as a function of masker frequency. In the TEN task, masked thresholds for pure-tone signals were determined for a fixed-level, 70 dB/ERB TEN masker (for some listeners, 85 or 90 dB/ERB TEN was also tested at selected probe frequencies). RESULTS: TEN and PTC results agreed on the presence or absence of dead regions at all tested frequencies in 10 of 18 cases (approximately 56% agreement rate). Six ears showed results consistent with either mid- or high-frequency dead regions in both tasks, and four ears did not show evidence of dead regions in either task. In eight ears, the TEN and PTC tasks produced conflicting results at one or more frequencies. In instances where the TEN and PTC results disagreed, the TEN results suggested the presence of dead regions whereas the PTC results did not. CONCLUSIONS: The 56% agreement rate between the TEN and PTC tasks indicates that at least one of these tasks was only partially reliable as a diagnostic tool. Factors unrelated to the presence of dead regions may contribute to excess masking in TEN without producing tip shifts in PTCs. Thus, it may be appropriate to view tuning curve results as more reliable in cases where TEN and PTC results disagree. The current results do not provide support for the TEN task as a reliable diagnostic tool for identification of dead regions.
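The 56% figure reported above is a per-ear agreement rate between the two tasks' conclusions. A tiny sketch of that computation, with a hypothetical data layout:

```python
# Minimal sketch: per-ear agreement rate between TEN and PTC conclusions about
# a dead region (True = dead region indicated). Data layout is hypothetical.
def agreement_rate(ten_flags, ptc_flags):
    agree = sum(t == p for t, p in zip(ten_flags, ptc_flags))
    return agree / len(ten_flags)

# e.g., 10 agreements out of 18 ears -> approximately 0.56
```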


Assuntos
Percepção Auditiva/fisiologia , Limiar Auditivo/fisiologia , Cóclea/fisiopatologia , Perda Auditiva de Alta Frequência/diagnóstico , Perda Auditiva de Alta Frequência/fisiopatologia , Psicofísica , Idoso , Audiometria de Tons Puros , Condução Óssea/fisiologia , Feminino , Perda Auditiva de Alta Frequência/epidemiologia , Humanos , Masculino , Pessoa de Meia-Idade , Variações Dependentes do Observador