1.
Brain Sci ; 13(7), 2023 Jun 29.
Article in English | MEDLINE | ID: mdl-37508940

ABSTRACT

Traditionally, speech perception training paradigms have not adequately taken into account the possibility that there may be modality-specific requirements for perceptual learning with auditory-only (AO) versus visual-only (VO) speech stimuli. The study reported here investigated the hypothesis that there are modality-specific differences in how prior information is used by normal-hearing participants during vocoded versus VO speech training. Two different experiments, one with vocoded AO speech (Experiment 1) and one with VO, lipread, speech (Experiment 2), investigated the effects of giving different types of prior information to trainees on each trial during training. The training was for four ~20 min sessions, during which participants learned to label novel visual images using novel spoken words. Participants were assigned to different types of prior information during training: Word Group trainees saw a printed version of each training word (e.g., "tethon"), and Consonant Group trainees saw only its consonants (e.g., "t_th_n"). Additional groups received no prior information (i.e., Experiment 1, AO Group; Experiment 2, VO Group) or a spoken version of the stimulus in a different modality from the training stimuli (Experiment 1, Lipread Group; Experiment 2, Vocoder Group). That is, in each experiment, there was a group that received prior information in the modality of the training stimuli from the other experiment. In both experiments, the Word Groups had difficulty retaining the novel words they attempted to learn during training. However, when the training stimuli were vocoded, the Word Group improved their phoneme identification. When the training stimuli were visual speech, the Consonant Group improved their phoneme identification and their open-set sentence lipreading. The results are considered in light of theoretical accounts of perceptual learning in relationship to perceptual modality.

2.
J Neurosci ; 43(27): 4984-4996, 2023 07 05.
Article in English | MEDLINE | ID: mdl-37197979

ABSTRACT

It has been postulated that the brain is organized by "metamodal," sensory-independent cortical modules capable of performing tasks (e.g., word recognition) in both "standard" and novel sensory modalities. Still, this theory has primarily been tested in sensory-deprived individuals, with mixed evidence in neurotypical subjects, thereby limiting its support as a general principle of brain organization. Critically, current theories of metamodal processing do not specify requirements for successful metamodal processing at the level of neural representations. Specification at this level may be particularly important in neurotypical individuals, where novel sensory modalities must interface with existing representations for the standard sense. Here we hypothesized that effective metamodal engagement of a cortical area requires congruence between stimulus representations in the standard and novel sensory modalities in that region. To test this, we first used fMRI to identify bilateral auditory speech representations. We then trained 20 human participants (12 female) to recognize vibrotactile versions of auditory words using one of two auditory-to-vibrotactile algorithms. The vocoded algorithm attempted to match the encoding scheme of auditory speech while the token-based algorithm did not. Crucially, using fMRI, we found that only in the vocoded group did trained-vibrotactile stimuli recruit speech representations in the superior temporal gyrus and lead to increased coupling between them and somatosensory areas. Our results advance our understanding of brain organization by providing new insight into unlocking the metamodal potential of the brain, thereby benefitting the design of novel sensory substitution devices that aim to tap into existing processing streams in the brain. SIGNIFICANCE STATEMENT: It has been proposed that the brain is organized by "metamodal," sensory-independent modules specialized for performing certain tasks. This idea has inspired therapeutic applications, such as sensory substitution devices, for example, enabling blind individuals "to see" by transforming visual input into soundscapes. Yet, other studies have failed to demonstrate metamodal engagement. Here, we tested the hypothesis that metamodal engagement in neurotypical individuals requires matching the encoding schemes between stimuli from the novel and standard sensory modalities. We trained two groups of subjects to recognize words generated by one of two auditory-to-vibrotactile transformations. Critically, only vibrotactile stimuli that were matched to the neural encoding of auditory speech engaged auditory speech areas after training. This suggests that matching encoding schemes is critical to unlocking the brain's metamodal potential.
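To make the notion of "matching the encoding scheme of auditory speech" concrete, the sketch below shows a generic channel-vocoder-style decomposition (band splitting followed by envelope extraction). It is only an illustration of the general approach; the study's actual vocoded and token-based auditory-to-vibrotactile algorithms are not specified here, and the function name, band edges, and cutoff are assumptions.

```python
# Minimal sketch of a channel-vocoder-style transform (illustrative only; not the
# paper's auditory-to-vibrotactile algorithms). Assumes a mono waveform `signal`
# sampled at `fs` Hz (fs high enough for the top band edge, e.g. 16 kHz or more).
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def vocode_envelopes(signal, fs, band_edges=(100, 400, 1000, 2500, 6000), env_cutoff=50.0):
    """Split the signal into bands and return each band's smoothed amplitude envelope."""
    envelopes = []
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)
        env = np.abs(hilbert(band))                      # instantaneous amplitude
        sos_env = butter(2, env_cutoff, btype="lowpass", fs=fs, output="sos")
        envelopes.append(sosfiltfilt(sos_env, env))      # low-pass envelope per band
    return np.stack(envelopes)                           # shape: (n_bands, n_samples)

# Each envelope row could then amplitude-modulate a fixed carrier (a noise band, a
# tone, or a vibrotactile actuator channel) to re-present the band-envelope structure.
```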


Subject(s)
Auditory Cortex , Speech Perception , Humans , Female , Speech , Auditory Perception , Brain , Temporal Lobe , Magnetic Resonance Imaging/methods , Acoustic Stimulation/methods
3.
Am J Audiol ; 31(2): 453-469, 2022 Jun 02.
Article in English | MEDLINE | ID: mdl-35316072

ABSTRACT

PURPOSE: The goal of this review article is to reinvigorate interest in lipreading and lipreading training for adults with acquired hearing loss. Most adults benefit from being able to see the talker when speech is degraded; however, the effect size is related to their lipreading ability, which is typically poor in adults who have experienced normal hearing through most of their lives. Lipreading training has been viewed as a possible avenue for rehabilitation of adults with an acquired hearing loss, but most training approaches have not been particularly successful. Here, we describe lipreading and theoretically motivated approaches to its training, as well as examples of successful training paradigms. We discuss some extensions to auditory-only (AO) and audiovisual (AV) speech recognition. METHOD: Visual speech perception and word recognition are described. Traditional and contemporary views of training and perceptual learning are outlined. We focus on the roles of external and internal feedback and the training task in perceptual learning, and we describe results of lipreading training experiments. RESULTS: Lipreading is commonly characterized as limited to viseme perception. However, evidence demonstrates subvisemic perception of visual phonetic information. Lipreading words also relies on lexical constraints, not unlike auditory spoken word recognition. Lipreading has been shown to be difficult to improve through training, but under specific feedback and task conditions, training can be successful, and learning can generalize to untrained materials, including AV sentence stimuli in noise. The results on lipreading have implications for AO and AV training and for use of acoustically processed speech in face-to-face communication. CONCLUSION: Given its importance for speech recognition with a hearing loss, we suggest that the research and clinical communities integrate lipreading in their efforts to improve speech recognition in adults with acquired hearing loss.


Subject(s)
Deafness , Hearing Loss , Speech Perception , Adult , Humans , Lipreading , Speech
4.
Am J Audiol ; 31(1): 57-77, 2022 Mar 03.
Article in English | MEDLINE | ID: mdl-34965362

ABSTRACT

PURPOSE: This study investigated the effects of external feedback on perceptual learning of visual speech during lipreading training with sentence stimuli. The goal was to improve visual-only (VO) speech recognition and increase accuracy of audiovisual (AV) speech recognition in noise. The rationale was that spoken word recognition depends on the accuracy of sublexical (phonemic/phonetic) speech perception; effective feedback during training must support sublexical perceptual learning. METHOD: Normal-hearing (NH) adults were assigned to one of three types of feedback: Sentence feedback was the entire sentence printed after responding to the stimulus. Word feedback was the correct response words and perceptually near but incorrect response words. Consonant feedback was correct response words and consonants in incorrect but perceptually near response words. Six training sessions were given. Pre- and posttraining testing included an untrained control group. Test stimuli were disyllabic nonsense words for forced-choice consonant identification, and isolated words and sentences for open-set identification. Words and sentences were VO, AV, and audio-only (AO) with the audio in speech-shaped noise. RESULTS: Lipreading accuracy increased during training. Pre- and posttraining tests of consonant identification showed no improvement beyond test-retest increases obtained by untrained controls. Isolated word recognition with a talker not seen during training showed that the control group improved more than the sentence group. Tests of untrained sentences showed that the consonant group significantly improved in all of the stimulus conditions (VO, AO, and AV). Its mean words correct scores increased by 9.2 percentage points for VO, 3.4 percentage points for AO, and 9.8 percentage points for AV stimuli. CONCLUSIONS: Consonant feedback during training with sentence stimuli significantly increased perceptual learning. The training generalized to untrained VO, AO, and AV sentence stimuli. Lipreading training has the potential to significantly improve adults' face-to-face communication in noisy settings in which the talker can be seen.


Subject(s)
Lipreading , Speech Perception , Adult , Feedback , Humans , Noise , Speech , Speech Perception/physiology
5.
Ear Hear ; 42(3): 673-690, 2021.
Article in English | MEDLINE | ID: mdl-33928926

ABSTRACT

OBJECTIVES: The ability to recognize words in connected speech under noisy listening conditions is critical to everyday communication. Many processing levels contribute to the individual listener's ability to recognize words correctly against background speech, and there is a clinical need for measures of individual differences at different levels. Typical listening tests of speech recognition in noise require a list of items to obtain a single threshold score. Diverse ability measures could be obtained by mining the various open-set recognition errors made during multi-item tests. This study sought to demonstrate that an error mining approach using open-set responses from a clinical sentence-in-babble-noise test can be used to characterize abilities beyond the signal-to-noise ratio (SNR) threshold. A stimulus-response phoneme-to-phoneme sequence alignment software system was used to achieve automatic, accurate quantitative error scores. The method was applied to a database of responses from normal-hearing (NH) adults. Relationships between two types of response errors and words correct scores were evaluated using mixed models regression. DESIGN: Two hundred thirty-three NH adults completed three lists of the Quick Speech in Noise test. Their individual open-set speech recognition responses were automatically phonemically transcribed and submitted to a phoneme-to-phoneme stimulus-response sequence alignment system. The computed alignments were mined for a measure of acoustic phonetic perception, a measure of response text that could not be attributed to the stimulus, and a count of words correct. The mined data were statistically analyzed to determine whether the response errors were significant factors, beyond stimulus SNR, in accounting for the number of words correct per response from each participant. This study addressed two hypotheses: (1) Individuals whose perceptual errors are less severe recognize more words correctly under difficult listening conditions due to babble masking, and (2) listeners who are better able to exclude incorrect speech information, such as intrusions from the background babble or filled-in words, recognize more stimulus words correctly. RESULTS: Statistical analyses showed that acoustic phonetic accuracy and exclusion of babble background were significant factors, beyond the stimulus sentence SNR, in accounting for the number of words a participant recognized. There was also evidence that poorer acoustic phonetic accuracy could occur along with higher words correct scores. This paradoxical result came from a subset of listeners who had also performed subjective accuracy judgments. Their results suggested that they recognized more words while also misallocating acoustic cues from the background into the stimulus, without realizing their errors. Because the Quick Speech in Noise test stimuli are locked to their own babble sample, misallocations of whole words from babble into the responses could be investigated in detail. The high rate of common misallocation errors for some sentences supported the view that the functional stimulus was the combination of the target sentence and its babble. CONCLUSIONS: Individual differences among NH listeners arise both in terms of words accurately identified and errors committed during open-set recognition of sentences in babble maskers. Error mining to characterize individual listeners can be done automatically at the levels of acoustic phonetic perception and the misallocation of background babble words into open-set responses. Error mining can increase test information and the efficiency and accuracy of characterizing individual listeners.
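The phoneme-to-phoneme stimulus-response alignment described above can be illustrated with a generic dynamic-programming (Levenshtein-style) alignment. This is a hedged sketch, not the authors' software; the unit costs are assumptions.

```python
# Minimal sketch of stimulus-response phoneme sequence alignment by dynamic
# programming. Phonemes are given as lists of symbols, e.g. ["k", "ae", "t"].

def align(stimulus, response, sub_cost=1, indel_cost=1):
    """Return the minimum edit cost and a back-traced alignment of two phoneme lists."""
    n, m = len(stimulus), len(response)
    # d[i][j] = cost of aligning stimulus[:i] with response[:j]
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = i * indel_cost
    for j in range(1, m + 1):
        d[0][j] = j * indel_cost
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = d[i - 1][j - 1] + (0 if stimulus[i - 1] == response[j - 1] else sub_cost)
            d[i][j] = min(match, d[i - 1][j] + indel_cost, d[i][j - 1] + indel_cost)
    # Backtrace to recover aligned pairs (None marks an insertion or deletion).
    pairs, i, j = [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and d[i][j] == d[i - 1][j - 1] + (0 if stimulus[i - 1] == response[j - 1] else sub_cost):
            pairs.append((stimulus[i - 1], response[j - 1]))   # match or substitution
            i, j = i - 1, j - 1
        elif i > 0 and d[i][j] == d[i - 1][j] + indel_cost:
            pairs.append((stimulus[i - 1], None))              # stimulus phoneme missing from response
            i -= 1
        else:
            pairs.append((None, response[j - 1]))              # response phoneme not in stimulus
            j -= 1
    return d[n][m], list(reversed(pairs))

# Counting matched pairs gives a phoneme-level accuracy score; response phonemes aligned
# to None are one way to quantify material that cannot be attributed to the stimulus.
```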


Subject(s)
Speech Perception , Speech , Acoustics , Adult , Hearing , Humans , Individuality , Phonetics
6.
Front Hum Neurosci ; 8: 829, 2014.
Article in English | MEDLINE | ID: mdl-25400566

ABSTRACT

In a series of studies we have been investigating how multisensory training affects unisensory perceptual learning with speech stimuli. Previously, we reported that audiovisual (AV) training with speech stimuli can promote auditory-only (AO) perceptual learning in normal-hearing adults but can impede learning in congenitally deaf adults with late-acquired cochlear implants. Here, impeder and promoter effects were sought in normal-hearing adults who participated in lipreading training. In Experiment 1, visual-only (VO) training on paired associations between CVCVC nonsense word videos and nonsense pictures demonstrated that VO words could be learned to a high level of accuracy even by poor lipreaders. In Experiment 2, visual-auditory (VA) training in the same paradigm but with the addition of synchronous vocoded acoustic speech impeded VO learning of the stimuli in the paired-associates paradigm. In Experiment 3, the vocoded AO stimuli were shown to be less informative than the VO speech. Experiment 4 combined vibrotactile speech stimuli with the visual stimuli during training. Vibrotactile stimuli were shown to promote visual perceptual learning. In Experiment 5, no-training controls were used to show that training with visual speech carried over to consonant identification of untrained CVCVC stimuli but not to lipreading words in sentences. Across this and previous studies, multisensory training effects depended on the functional relationship between pathways engaged during training. Two principles are proposed to account for stimulus effects: (1) Stimuli presented to the trainee's primary perceptual pathway will impede learning by a lower-rank pathway. (2) Stimuli presented to the trainee's lower rank perceptual pathway will promote learning by a higher-rank pathway. The mechanisms supporting these principles are discussed in light of multisensory reverse hierarchy theory (RHT).

7.
Front Psychol ; 5: 934, 2014.
Article in English | MEDLINE | ID: mdl-25206344

ABSTRACT

Training with audiovisual (AV) speech has been shown to promote auditory perceptual learning of vocoded acoustic speech by adults with normal hearing. In Experiment 1, we investigated whether AV speech promotes auditory-only (AO) perceptual learning in prelingually deafened adults with late-acquired cochlear implants. Participants were assigned to learn associations between spoken disyllabic CVCVC (C = consonant, V = vowel) nonsense words and nonsense pictures (fribbles), under AV then AO (AV-AO) or the counterbalanced AO then AV (AO-AV) training conditions, during Periods 1 and 2. After training on each list of paired-associates (PA), testing was carried out AO. Across all training, AO PA test scores improved (7.2 percentage points) as did identification of consonants in new untrained CVCVC stimuli (3.5 percentage points). However, there was evidence that AV training impeded immediate AO perceptual learning: During Period 1, training scores across AV and AO conditions were not different, but AO test scores were dramatically lower in the AV-trained participants. During Period 2 AO training, the AV-AO participants obtained significantly higher AO test scores, demonstrating their ability to learn the auditory speech. Across both orders of training, whenever training was AV, AO test scores were significantly lower than training scores. Experiment 2 repeated the procedures with vocoded speech and 43 normal-hearing adults. Following AV training, their AO test scores were as high as or higher than following AO training. Also, their CVCVC identification scores patterned differently than those of the cochlear implant users: In Experiment 1, initial consonants were most accurate, and in Experiment 2, medial consonants were most accurate. We suggest that our results are consistent with a multisensory reverse hierarchy theory, which predicts that, whenever possible, perceivers carry out perceptual tasks immediately based on the experience and biases they bring to the task. We point out that while AV training could be an impediment to immediate unisensory perceptual learning in cochlear implant patients, it was also associated with higher scores during training.

8.
Front Hum Neurosci ; 7: 371, 2013.
Article in English | MEDLINE | ID: mdl-23882205

ABSTRACT

The visual mismatch negativity (vMMN), deriving from the brain's response to stimulus deviance, is thought to be generated by the cortex that represents the stimulus. The vMMN response to visual speech stimuli was used in a study of the lateralization of visual speech processing. Previous research suggested that the right posterior temporal cortex has specialization for processing simple non-speech face gestures, and the left posterior temporal cortex has specialization for processing visual speech gestures. Here, visual speech consonant-vowel (CV) stimuli with controlled perceptual dissimilarities were presented in an electroencephalography (EEG) vMMN paradigm. The vMMNs were obtained using the comparison of event-related potentials (ERPs) for separate CVs in their roles as deviant vs. their roles as standard. Four separate vMMN contrasts were tested, two with the perceptually far deviants (i.e., "zha" or "fa") and two with the near deviants (i.e., "zha" or "ta"). Only far deviants evoked the vMMN response over the left posterior temporal cortex. All four deviants evoked vMMNs over the right posterior temporal cortex. The results are interpreted as evidence that the left posterior temporal cortex represents speech contrasts that are perceived as different consonants, and the right posterior temporal cortex represents face gestures that may not be perceived as different CVs.
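As an illustration of the contrast described above, a vMMN difference wave can be computed by averaging a CV's trials in its deviant role and in its standard role and subtracting. The sketch below assumes generic epoch arrays and is not the study's analysis pipeline.

```python
# Minimal sketch of the identity-MMN style contrast: the same CV's trials are
# averaged in its deviant role and in its standard role, then subtracted.
# Assumes epoch arrays shaped (n_trials, n_channels, n_times); names are illustrative.
import numpy as np

def vmmn(deviant_epochs, standard_epochs):
    """Difference wave: ERP(CV as deviant) - ERP(same CV as standard)."""
    erp_deviant = deviant_epochs.mean(axis=0)    # (n_channels, n_times)
    erp_standard = standard_epochs.mean(axis=0)
    return erp_deviant - erp_standard            # vMMN difference wave
```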

9.
Front Neurosci ; 7: 34, 2013.
Article in English | MEDLINE | ID: mdl-23515520

ABSTRACT

Speech perception under audiovisual (AV) conditions is well known to confer benefits to perception such as increased speed and accuracy. Here, we investigated how AV training might benefit or impede auditory perceptual learning of speech degraded by vocoding. In Experiments 1 and 3, participants learned paired associations between vocoded spoken nonsense words and nonsense pictures. In Experiment 1, paired-associates (PA) AV training of one group of participants was compared with audio-only (AO) training of another group. When tested under AO conditions, the AV-trained group was significantly more accurate than the AO-trained group. In addition, pre- and post-training AO forced-choice consonant identification with untrained nonsense words showed that AV-trained participants had learned significantly more than AO participants. The pattern of results pointed to their having learned at the level of the auditory phonetic features of the vocoded stimuli. Experiment 2, a no-training control with testing and re-testing on the AO consonant identification, showed that the controls were as accurate as the AO-trained participants in Experiment 1 but less accurate than the AV-trained participants. In Experiment 3, PA training alternated AV and AO conditions on a list-by-list basis within participants, and training was to criterion (92% correct). PA training with AO stimuli was reliably more effective than training with AV stimuli. We explain these discrepant results in terms of the so-called "reverse hierarchy theory" of perceptual learning and in terms of the diverse multisensory and unisensory processing resources available to speech perception. We propose that early AV speech integration can potentially impede auditory perceptual learning; but visual top-down access to relevant auditory features can promote auditory perceptual learning.

10.
Brain Res ; 1348: 63-70, 2010 Aug 12.
Article in English | MEDLINE | ID: mdl-20550944

ABSTRACT

A new pneumatic tactile stimulator, called the TAC-Cell, was developed in our laboratory to non-invasively deliver patterned cutaneous stimulation to the face and hand in order to study the neuromagnetic response adaptation patterns within the primary somatosensory cortex (S1) in young adult humans. Individual TAC-Cells were positioned on the glabrous surface of the right hand, and midline of the upper and lower lip vermilion. A 151-channel magnetoencephalography (MEG) scanner was used to record the cortical response to a novel tactile stimulus which consisted of a repeating 6-pulse train delivered at three different frequencies through the active membrane surface of the TAC-Cell. The evoked activity in S1 (contralateral for hand stimulation, and bilateral for lip stimulation) was characterized from the best-fit dipoles of the earliest prominent response component. The S1 responses manifested significant modulation and adaptation as a function of the frequency of the punctate pneumatic stimulus trains and stimulus site (glabrous lip versus glabrous hand).


Subject(s)
Adaptation, Physiological/physiology , Hand/innervation , Lip/innervation , Physical Stimulation/instrumentation , Somatosensory Cortex/physiology , Touch/physiology , Adult , Analysis of Variance , Evoked Potentials, Somatosensory/physiology , Female , Functional Laterality/physiology , Humans , Magnetic Resonance Imaging/methods , Magnetoencephalography , Physical Stimulation/methods , Reaction Time/physiology , Time Factors , Young Adult
11.
Neuroimage ; 52(4): 1477-86, 2010 Oct 01.
Article in English | MEDLINE | ID: mdl-20561996

ABSTRACT

Neuromagnetic evoked fields were recorded to compare the adaptation of the primary somatosensory cortex (SI) response to tactile stimuli delivered to the glabrous skin at the fingertips of the first three digits (condition 1) and between the midline upper and lower lips (condition 2). The stimulation paradigm allowed characterization of the response adaptation in the presence of functional integration of tactile stimuli from adjacent skin areas in each condition. At each stimulation site, cutaneous stimuli (50 ms duration) were delivered in three runs, using trains of 6 pulses with regular stimulus onset asynchrony (SOA). The pulses were separated by SOAs of 500 ms, 250 ms, or 125 ms in each run, respectively, while the inter-train interval was fixed (5 s) across runs. The evoked activity in SI (contralateral to the stimulated hand, and bilateral for lip stimulation) was characterized from the best-fit dipoles of the response component peaking around 70 ms for hand stimulation, and 8 ms earlier (on average) for lip stimulation. The SOA-dependent long-term adaptation effects were assessed from the change in the amplitude of the responses to the first stimulus in each train. The short-term adaptation was characterized by the lifetime of an exponentially saturating model function fitted to the set of suppression ratios of the second relative to the first SI response in each train. Our results indicate: 1) the presence of a rate-dependent long-term adaptation effect induced only by tactile stimulation of the digits; and 2) shorter recovery lifetimes for the digits compared with lip stimulation.
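The "lifetime" of the short-term adaptation can be illustrated by fitting an exponentially saturating recovery function to the suppression ratios at each SOA. The functional form and the example ratios below are assumptions for illustration only, not the study's data or exact model.

```python
# Minimal sketch of fitting an exponentially saturating recovery function to
# suppression ratios (second response / first response) at each SOA; tau plays
# the role of the recovery "lifetime".
import numpy as np
from scipy.optimize import curve_fit

def saturating_recovery(soa, r_max, tau):
    """Suppression ratio recovers toward r_max with time constant tau (seconds)."""
    return r_max * (1.0 - np.exp(-soa / tau))

soa = np.array([0.125, 0.250, 0.500])            # SOAs used in the paradigm (s)
ratio = np.array([0.35, 0.55, 0.80])             # illustrative ratios, not study data
(r_max, tau), _ = curve_fit(saturating_recovery, soa, ratio, p0=(1.0, 0.3))
print(f"estimated recovery lifetime tau = {tau * 1000:.0f} ms")
```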


Subject(s)
Evoked Potentials, Somatosensory/physiology , Fingers/physiology , Lip/physiology , Magnetoencephalography , Skin Physiological Phenomena , Somatosensory Cortex/physiology , Touch/physiology , Adaptation, Physiological , Adult , Humans , Lip/innervation , Male , Physical Stimulation/methods , Skin/innervation
12.
J Am Acad Audiol ; 21(3): 163-8, 2010 Mar.
Article in English | MEDLINE | ID: mdl-20211120

ABSTRACT

BACKGROUND: The visual speech signal can provide sufficient information to support successful communication. However, individual differences in the ability to appreciate that information are large, and relatively little is known about their sources. PURPOSE: Here a body of research is reviewed regarding the development of a theoretical framework in which to study speechreading and individual differences in that ability. Based on the hypothesis that visual speech is processed via the same perceptual-cognitive machinery as auditory speech, a theoretical framework was developed by adapting a theoretical framework originally developed for auditory spoken word recognition. CONCLUSION: The evidence to date is consistent with the conclusion that visual spoken word recognition is achieved via a process similar to auditory word recognition provided differences in perceptual similarity are taken into account. Words perceptually similar to many other words and that occur infrequently in the input stream are at a distinct disadvantage within this process. The results to date are also consistent with the conclusion that deaf individuals, regardless of speechreading ability, recognize spoken words via a process similar to individuals with hearing.


Subject(s)
Aptitude , Deafness/psychology , Deafness/rehabilitation , Lipreading , Deafness/etiology , Humans , Pattern Recognition, Physiological/physiology , Visual Perception/physiology
13.
Scand J Psychol ; 50(5): 419-25, 2009 Oct.
Article in English | MEDLINE | ID: mdl-19778389

ABSTRACT

Spoken word recognition is thought to be achieved via competition in the mental lexicon between perceptually similar word forms. A review of the development and initial behavioral validations of computational models of visual spoken word recognition is presented, followed by a report of new empirical evidence. Specifically, a replication and extension of Mattys, Bernstein & Auer's (2002) study was conducted with 20 deaf participants who varied widely in speechreading ability. Participants visually identified isolated spoken words. Accuracy of visual spoken word recognition was influenced by the number of visually similar words in the lexicon and by the frequency of occurrence of the stimulus words. The results are consistent with the common view held within auditory word recognition that this task is accomplished via a process of activation and competition in which frequently occurring units are favored. Finally, future directions for visual spoken word recognition are discussed.
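The activation-competition account sketched above can be illustrated with a toy frequency-weighted competition score computed over visually similar word forms. The phoneme-to-viseme mapping and the miniature lexicon below are hypothetical and are not the computational models discussed in the review.

```python
# Minimal sketch of a frequency-weighted lexical competition score in the spirit of
# activation-competition accounts of visual spoken word recognition. The viseme
# mapping and tiny lexicon are illustrative assumptions only.
from collections import defaultdict

# Hypothetical mapping of phonemes onto visually similar classes (visemes).
VISEME = {"p": "B", "b": "B", "m": "B", "t": "T", "d": "T", "n": "T",
          "f": "F", "v": "F", "a": "A", "e": "E", "i": "I"}

def visual_form(word_phonemes):
    return tuple(VISEME.get(p, p) for p in word_phonemes)

def competition_score(target, lexicon_freqs):
    """Share of frequency-weighted activation captured by the target among words
    that look identical on the lips (its visual equivalence class)."""
    classes = defaultdict(list)
    for word, freq in lexicon_freqs.items():
        classes[visual_form(word)].append((word, freq))
    competitors = classes[visual_form(target)]
    total = sum(freq for _, freq in competitors)
    return lexicon_freqs[target] / total if total else 0.0

lexicon = {("b", "a", "t"): 120.0, ("p", "a", "t"): 40.0, ("m", "a", "d"): 15.0}
print(competition_score(("p", "a", "t"), lexicon))   # low share -> strong competition
```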


Subject(s)
Deafness/physiopathology , Lipreading , Pattern Recognition, Visual/physiology , Speech/physiology , Adolescent , Adult , Analysis of Variance , Humans , Photic Stimulation , Speech Intelligibility/physiology , Speech Perception/physiology , Vocabulary
14.
Brain Topogr ; 21(3-4): 207-15, 2009 May.
Article in English | MEDLINE | ID: mdl-19404730

ABSTRACT

The functional organization of cortical speech processing is thought to be hierarchical, increasing in complexity and proceeding from primary sensory areas centrifugally. The current study used the mismatch negativity (MMN) obtained with electrophysiology (EEG) to investigate the early latency period of visual speech processing under both visual-only (VO) and audiovisual (AV) conditions. Current density reconstruction (CDR) methods were used to model the cortical MMN generator locations. MMNs were obtained with VO and AV speech stimuli at early latencies (approximately 82-87 ms peak in time waveforms relative to the acoustic onset) and in regions of the right lateral temporal and parietal cortices. Latencies were consistent with bottom-up processing of the visible stimuli. We suggest that a visual pathway extracts phonetic cues from visible speech, and that previously reported effects of AV speech in classical early auditory areas, given later reported latencies, could be attributable to modulatory feedback from visual phonetic processing.


Subject(s)
Cerebral Cortex/physiology , Reaction Time/physiology , Speech Perception/physiology , Visual Perception/physiology , Acoustic Stimulation , Adult , Auditory Cortex/anatomy & histology , Auditory Cortex/physiology , Auditory Pathways/anatomy & histology , Auditory Pathways/physiology , Brain Mapping , Electroencephalography , Female , Humans , Male , Neuropsychological Tests , Parietal Lobe/anatomy & histology , Parietal Lobe/physiology , Photic Stimulation , Temporal Lobe/anatomy & histology , Temporal Lobe/physiology , Time Factors , Visual Pathways/anatomy & histology , Visual Pathways/physiology , Young Adult
15.
J Speech Lang Hear Res ; 51(3): 750-8, 2008 Jun.
Article in English | MEDLINE | ID: mdl-18506048

ABSTRACT

PURPOSE: Sensitivity of subjective estimates of age of acquisition (AOA) and acquisition channel (AC; printed, spoken, signed) to differences in word exposure within and between populations that differ dramatically in perceptual experience was examined. METHODS: Fifty participants with early-onset deafness and 50 participants with normal hearing rated 175 words in terms of subjective AOA and AC. Additional data were collected using a standardized test of reading and vocabulary. RESULTS: Participants with early-onset deafness rated words as learned later (M = 10 years) than did participants with normal hearing (M = 8.5 years), F(1, 99) = 28.59, p < .01. Group-averaged item ratings of AOA were highly correlated across the groups (r = .971) and with normative order of acquisition (deaf: r = .950, hearing: r = .946). The groups differed in their ratings of AC (hearing: printed = 30%, spoken = 70%, signed = 0%; deaf: printed = 45%, spoken = 38%, signed = 17%). CONCLUSIONS: Subjective AOA and AC measures are sensitive to between- and within-group differences in word experience. The results demonstrate that these subjective measures can be applied as proxies for direct measures of lexical development in studies of lexical knowledge in adults with prelingual onset deafness.


Subject(s)
Child Language , Concept Formation , Semantics , Verbal Learning , Vocabulary , Adolescent , Adult , Age Factors , Child , Female , Humans , Language Tests , Male , Middle Aged , Surveys and Questionnaires
16.
Neuroimage ; 39(1): 423-35, 2008 Jan 01.
Article in English | MEDLINE | ID: mdl-17920933

ABSTRACT

The cortical processing of auditory-alone, visual-alone, and audiovisual speech information is temporally and spatially distributed, and functional magnetic resonance imaging (fMRI) cannot adequately resolve its temporal dynamics. In order to investigate a hypothesized spatiotemporal organization for audiovisual speech processing circuits, event-related potentials (ERPs) were recorded using electroencephalography (EEG). Stimuli were congruent audiovisual /ba/, incongruent auditory /ba/ synchronized with visual /ga/, auditory-only /ba/, and visual-only /ba/ and /ga/. Current density reconstructions (CDRs) of the ERP data were computed across the latency interval of 50-250 ms. The CDRs demonstrated complex spatiotemporal activation patterns that differed across stimulus conditions. The hypothesized circuit that was investigated here comprised initial integration of audiovisual speech by the middle superior temporal sulcus (STS), followed by recruitment of the intraparietal sulcus (IPS), followed by activation of Broca's area [Miller, L.M., d'Esposito, M., 2005. Perceptual fusion and stimulus coincidence in the cross-modal integration of speech. Journal of Neuroscience 25, 5884-5893]. The importance of spatiotemporally sensitive measures in evaluating processing pathways was demonstrated. Results showed, strikingly, early (<100 ms) and simultaneous activations in areas of the supramarginal and angular gyrus (SMG/AG), the IPS, the inferior frontal gyrus, and the dorsolateral prefrontal cortex. Also, emergent left hemisphere SMG/AG activation, not predicted based on the unisensory stimulus conditions, was observed at approximately 160 to 220 ms. The STS was neither the earliest nor the most prominent activation site, although it is frequently considered the sine qua non of audiovisual speech integration. As discussed here, the relatively late activity of the SMG/AG solely under audiovisual conditions is a possible candidate audiovisual speech integration response.


Subject(s)
Brain Mapping , Brain/physiology , Evoked Potentials, Auditory/physiology , Evoked Potentials, Visual/physiology , Language , Speech Perception/physiology , Visual Perception/physiology , Adult , Female , Humans , Male
17.
Percept Psychophys ; 69(7): 1070-83, 2007 Oct.
Article in English | MEDLINE | ID: mdl-18038946

ABSTRACT

A complete understanding of visual phonetic perception (lipreading) requires linking perceptual effects to physical stimulus properties. However, the talking face is a highly complex stimulus, affording innumerable possible physical measurements. In the search for isomorphism between stimulus properties and phonetic effects, second-order isomorphism was examined between the perceptual similarities of video-recorded perceptually identified speech syllables and the physical similarities among the stimuli. Four talkers produced the stimulus syllables comprising 23 initial consonants followed by one of three vowels. Six normal-hearing participants identified the syllables in a visual-only condition. Perceptual stimulus dissimilarity was quantified using the Euclidean distances between stimuli in perceptual spaces obtained via multidimensional scaling. Physical stimulus dissimilarity was quantified using face points recorded in three dimensions by an optical motion capture system. The variance accounted for in the relationship between the perceptual and the physical dissimilarities was evaluated using both the raw dissimilarities and the weighted dissimilarities. With weighting and the full set of 3-D optical data, the variance accounted for ranged between 46% and 66% across talkers and between 49% and 64% across vowels. The robust second-order relationship between the sparse 3-D point representation of visible speech and the perceptual effects suggests that the 3-D point representation is a viable basis for controlled studies of first-order relationships between visual phonetic perception and physical stimulus attributes.
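A second-order isomorphism analysis of this kind can be sketched as follows: recover a perceptual space from identification confusions by multidimensional scaling, compute pairwise Euclidean distances in that space, and ask how much variance the physical (3-D optical) distances account for. The array shapes, symmetrization step, and simple correlation below are assumptions for illustration, not the study's exact procedure (which also used weighted dissimilarities).

```python
# Minimal sketch of a second-order isomorphism analysis: how well do physical
# distances between stimuli predict perceptual distances recovered by MDS?
import numpy as np
from sklearn.manifold import MDS
from scipy.spatial.distance import pdist
from scipy.stats import pearsonr

# confusion: (n_syllables, n_syllables) identification proportions (rows = stimuli).
# features:  (n_syllables, n_face_points * 3) flattened 3-D optical measurements.
def second_order_fit(confusion, features, n_dims=3):
    dissim = 1.0 - (confusion + confusion.T) / 2.0           # symmetrized dissimilarity
    np.fill_diagonal(dissim, 0.0)
    mds = MDS(n_components=n_dims, dissimilarity="precomputed", random_state=0)
    perceptual_space = mds.fit_transform(dissim)
    perceptual_d = pdist(perceptual_space)                   # pairwise Euclidean distances
    physical_d = pdist(features)
    r, _ = pearsonr(physical_d, perceptual_d)
    return r ** 2                                            # variance accounted for
```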


Subject(s)
Phonetics , Speech Perception , Visual Perception , Adolescent , Adult , Female , Humans , Male , Models, Psychological , Reaction Time
18.
J Speech Lang Hear Res ; 50(5): 1157-65, 2007 Oct.
Article in English | MEDLINE | ID: mdl-17905902

ABSTRACT

PURPOSE: L. E. Bernstein, M. E. Demorest, and P. E. Tucker (2000) demonstrated enhanced speechreading accuracy in participants with early-onset hearing loss compared with hearing participants. Here, the authors test the generalization of Bernstein et al.'s (2000) result by testing 2 new large samples of participants. The authors also investigated correlates of speechreading ability within the early-onset hearing loss group and gender differences in speechreading ability within both participant groups. METHOD: One hundred twelve individuals with early-onset hearing loss and 220 individuals with normal hearing identified 30 prerecorded sentences presented 1 at a time from visible speech information alone. RESULTS: The speechreading accuracy of the participants with early-onset hearing loss (M=43.55% words correct; SD=17.48) significantly exceeded that of the participants with normal hearing (M=18.57% words correct; SD=13.18), t(330)=14.576, p<.01. Within the early-onset hearing loss participants, speechreading ability was correlated with several subjective measures of spoken communication. Effects of gender were not reliably observed. CONCLUSION: The present results are consistent with the results of Bernstein et al. (2000). The need to rely on visual speech throughout life, and particularly for the acquisition of spoken language by individuals with early-onset hearing loss, can lead to enhanced speechreading ability.


Subject(s)
Hearing Loss/physiopathology , Lipreading , Adult , Female , Humans , Individuality , Male , Sex Factors
19.
Neuroreport ; 18(7): 645-8, 2007 May 07.
Article in English | MEDLINE | ID: mdl-17426591

ABSTRACT

Neuroplastic changes in auditory cortex as a result of lifelong perceptual experience were investigated. Adults with early-onset deafness and long-term hearing aid experience were hypothesized to have undergone auditory cortex plasticity due to somatosensory stimulation. Vibrations were presented on the hand of deaf and normal-hearing participants during functional MRI. Vibration stimuli were derived from speech or were a fixed frequency. Higher, more widespread activity was observed within auditory cortical regions of the deaf participants for both stimulus types. Life-long somatosensory stimulation due to hearing aid use could explain the greater activity observed with deaf participants.


Subject(s)
Auditory Cortex/physiology , Brain Mapping , Deafness , Hearing Aids , Neuronal Plasticity/physiology , Vibration , Adult , Female , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Male
20.
Otol Neurotol ; 26(4): 649-54, 2005 Jul.
Article in English | MEDLINE | ID: mdl-16015162

ABSTRACT

OBJECTIVE: To determine whether congenitally deafened adults achieve improved speech perception when auditory and visual speech information is available after cochlear implantation. STUDY DESIGN: Repeated-measures single subject analysis of speech perception in visual-alone, auditory-alone, and audiovisual conditions. SETTING: Neurotologic private practice and research institute. SUBJECTS: Eight subjects with profound congenital bilateral hearing loss who underwent cochlear implantation as adults (aged 18-55 years) between 1995 and 2002 and had at least 1 year of experience with the implant. MAIN OUTCOME MEASURES: Auditory, visual, and audiovisual speech perception. RESULTS: The median speech perception scores were as follows: visual-alone, 25.9% (range, 12.7-58.1%); auditory-alone, 5.2% (range, 0-49.4%); and audiovisual, 50.7% (range, 16.5-90.8%). Seven of eight subjects did as well or better in the audiovisual condition than in either the auditory-alone or the visual-alone condition. Three subjects had audiovisual scores greater than would be expected from a simple additive effect of the information from the auditory-alone and visual-alone conditions, suggesting a superadditive effect of combining auditory and visual information. Three subjects showed a simple additive effect of speech perception in the audiovisual condition. CONCLUSION: Some congenitally deafened subjects who undergo implantation as adults have significant gains in speech perception when auditory information from a cochlear implant and visual information from lipreading are available. This study shows that some congenitally deafened adults are able to integrate auditory information provided by the cochlear implant (despite the lack of auditory speech experience before implantation) with visual speech information.
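For illustration, the comparison between an observed audiovisual score and a "simple additive" prediction can be sketched as below. Treating the prediction as the capped sum of the unimodal percent-correct scores is an assumption (a probability-summation benchmark is included as an alternative), and the example call simply reuses the median scores quoted above.

```python
# Minimal sketch of checking an audiovisual (AV) score against an additive
# prediction built from the auditory-alone and visual-alone scores.
def classify_av_gain(a_pct, v_pct, av_pct):
    additive = min(a_pct + v_pct, 100.0)                     # capped sum of unimodal scores
    prob_sum = 100.0 * (1.0 - (1.0 - a_pct / 100.0) * (1.0 - v_pct / 100.0))
    return {
        "additive_prediction": additive,
        "probability_summation": prob_sum,
        "superadditive": av_pct > additive,                  # exceeds the additive benchmark
    }

# Example using the median scores quoted in the abstract above:
print(classify_av_gain(a_pct=5.2, v_pct=25.9, av_pct=50.7))
```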


Subject(s)
Cochlear Implants , Deafness/congenital , Deafness/surgery , Speech Perception , Adult , Deafness/physiopathology , Hearing , Humans , Lipreading , Middle Aged , Treatment Outcome