1.
Hum Factors ; : 187208231216835, 2023 Nov 29.
Article in English | MEDLINE | ID: mdl-38029305

ABSTRACT

OBJECTIVE: This study investigated drivers' move-over behavior when receiving an Emergency Vehicle Approaching (EVA) warning. Furthermore, the possible effects of false alarms, driver experience, and modality on move-over behavior were explored. BACKGROUND: EVA warnings are one solution to encourage drivers to move over for emergency vehicles in a safe and timely manner. EVA warnings are distributed based on the predicted path of the emergency vehicle, which creates a risk of false alarms. Previous EVA studies have suggested a difference between inexperienced and experienced drivers' move-over behavior. METHOD: A driving simulator study was conducted with 110 participants, of whom 54 were inexperienced and 56 were experienced drivers. They were approached by an emergency vehicle three times. A control group received no EVA warnings, whereas the experimental groups received either true or false warnings, auditory or visual, 15 seconds before the emergency vehicle overtook them. RESULTS: Drivers who received EVA warnings moved over more quickly for the emergency vehicle compared to the control group. Drivers moved over more quickly with each successive emergency vehicle interaction. False alarms impaired move-over behavior. No difference in driver behavior based on driver experience or modality was observed. CONCLUSION: EVA warnings positively affect drivers' move-over behavior. However, false alarms can decrease drivers' future willingness to comply with the warning. APPLICATION: The findings regarding measurements of delay can be used to optimize the design of future EVA systems. Moreover, this research should be used to further understand the effect of false alarms in in-car warnings.

2.
J Occup Environ Med ; 65(9): 775-782, 2023 09 01.
Article in English | MEDLINE | ID: mdl-37311076

ABSTRACT

OBJECTIVES: This study investigated which work-related stressors are rated highest by train drivers and which are most strongly correlated with consideration of changing profession. METHODS: In a questionnaire, a total of 251 Swedish train drivers rated 17 work-related stressors, the extent to which they had considered quitting their profession, and whether they had experienced a PUT (person under train) accident. RESULTS: PUTs (when experienced) and irregular work hours are the main stressors, but the strongest predictors of consideration to change profession are those that are encountered often and last over time (eg, irregular work hours, r = 0.61, and major organizational changes, r = 0.51). CONCLUSIONS: For effective reduction of stress and improved job satisfaction, focus should be on aspects that affect everyday life for drivers, such as better working shifts, fewer delays, and an improved social climate.


Subject(s)
Occupational Stress , Stress, Psychological , Humans , Job Satisfaction , Depression , Occupations , Surveys and Questionnaires
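Correlations like those reported above (e.g., r = 0.61 for irregular work hours) are plain Pearson coefficients between stressor ratings and quit consideration. A minimal sketch; the ratings below are invented for illustration, not the study's data:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two rating vectors."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float(np.sum(xm * ym) / np.sqrt(np.sum(xm**2) * np.sum(ym**2)))

# Hypothetical per-driver ratings: stressor score vs. quit consideration
irregular_hours = [4, 5, 3, 5, 2, 4, 5, 1, 3, 4]
quit_consideration = [3, 5, 2, 4, 1, 4, 5, 1, 2, 3]
print(round(pearson_r(irregular_hours, quit_consideration), 2))  # 0.94
```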
3.
Front Psychol ; 14: 1294965, 2023.
Article in English | MEDLINE | ID: mdl-38259535

ABSTRACT

Background: Driving requires a series of cognitive abilities, many of which are affected by age and medical conditions. The psychosocial importance of continued driving underscores the need for valid measurements in fitness-to-drive assessments. A driving simulator test could prove useful in these assessments, having greater face validity than other off-road tests and being more cost-effective and safer than ordinary on-road testing. The aim of this study was to validate a driving simulator test for assessment of cognitive ability in fitness-to-drive assessments. Methods: The study included 67 healthy participants. Internal consistency of the simulator subtests was estimated. A correlation analysis between results on the simulator and the cognitive tests Trail Making Test (TMT) A and B and the Useful Field of View test (UFOV), as well as a multiple regression analysis, was conducted. Finally, results were compared between age groups (>65 years vs. <65 years). Results: Results showed good internal consistency. Significant and moderate correlations with UFOV 3 were found for reaction times in all of the simulator's subtests, and with TMT A for all but two. Lane positioning in the simulator showed significant and low to moderate correlations with UFOV 3 in all subtests. Reaction time and Double reaction time on subtest 3 were significantly correlated with UFOV 2 and UFOV 3 and TMT A, respectively. Centerline position in subtest 3, as the dependent variable, was significantly correlated with UFOV 3. Significant mean differences and large effect sizes between the age groups were found for all reaction time and lane positioning tests. Conclusion: The findings of concurrent validity, especially with TMT A and UFOV 3, and the test's sensitivity to age-related differences indicate potential for the simulator to be used as a complement in fitness-to-drive assessments. However, a clinical study is necessary to further examine its usefulness for patients with cognitive deficits.
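Internal consistency of a set of subtests, as estimated above, is conventionally summarized with Cronbach's alpha; the abstract does not state which coefficient was used, so alpha is an assumption here, and the subtest scores below are hypothetical:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects, n_items) score matrix."""
    X = np.asarray(items, float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = X.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical reaction-time scores on three simulator subtests
scores = [[0.52, 0.55, 0.50],
          [0.61, 0.66, 0.63],
          [0.47, 0.49, 0.46],
          [0.70, 0.68, 0.72],
          [0.58, 0.60, 0.57]]
print(round(cronbach_alpha(scores), 2))  # 0.98
```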

4.
Ear Hear ; 40(2): 312-327, 2019.
Article in English | MEDLINE | ID: mdl-29870521

ABSTRACT

OBJECTIVE: We have previously shown that the gain provided by prior audiovisual (AV) speech exposure for subsequent auditory (A) sentence identification in noise is relatively larger than that provided by prior A speech exposure. We have called this effect "perceptual doping." Specifically, prior AV speech processing dopes (recalibrates) the phonological and lexical maps in the mental lexicon, which facilitates subsequent phonological and lexical access in the A modality, separately from other learning and priming effects. In this article, we use data from the n200 study and aim to replicate and extend the perceptual doping effect using two different A and two different AV speech tasks and a larger sample than in our previous studies. DESIGN: The participants were 200 hearing aid users with bilateral, symmetrical, mild-to-severe sensorineural hearing loss. There were four speech tasks in the n200 study that were presented in both A and AV modalities (gated consonants, gated vowels, vowel duration discrimination, and sentence identification in noise tasks). The modality order of speech presentation was counterbalanced across participants: half of the participants completed the A modality first and the AV modality second (A1-AV2), and the other half completed the AV modality and then the A modality (AV1-A2). Based on the perceptual doping hypothesis, which assumes that the gain of prior AV exposure will be larger than that of prior A exposure for subsequent processing of speech stimuli, we predicted that the mean A scores in the AV1-A2 modality order would be better than the mean A scores in the A1-AV2 modality order. We therefore expected a significant difference in terms of the identification of A speech stimuli between the two modality orders (A1 versus A2). 
As prior A exposure provides a smaller gain than AV exposure, we also predicted that the difference in AV speech scores between the two modality orders (AV1 versus AV2) may not be statistically significantly different. RESULTS: In the gated consonant and vowel tasks and the vowel duration discrimination task, there were significant differences in A performance of speech stimuli between the two modality orders. The participants' mean A performance was better in the AV1-A2 than in the A1-AV2 modality order (i.e., after AV processing). In terms of mean AV performance, no significant difference was observed between the two orders. In the sentence identification in noise task, a significant difference in the A identification of speech stimuli between the two orders was observed (A1 versus A2). In addition, a significant difference in the AV identification of speech stimuli between the two orders was also observed (AV1 versus AV2). This finding was most likely because of a procedural learning effect due to the greater complexity of the sentence materials or a combination of procedural learning and perceptual learning due to the presentation of sentential materials in noisy conditions. CONCLUSIONS: The findings of the present study support the perceptual doping hypothesis, as prior AV relative to A speech exposure resulted in a larger gain for the subsequent processing of speech stimuli. For complex speech stimuli that were presented in degraded listening conditions, a procedural learning effect (or a combination of procedural learning and perceptual learning effects) also facilitated the identification of speech stimuli, irrespective of whether the prior modality was A or AV.


Subject(s)
Hearing Loss, Sensorineural/physiopathology , Speech Perception/physiology , Visual Perception/physiology , Adult , Aged , Aged, 80 and over , Female , Hearing Aids , Hearing Loss, Sensorineural/rehabilitation , Humans , Male , Middle Aged , Severity of Illness Index
5.
J Speech Lang Hear Res ; 60(9): 2687-2703, 2017 09 18.
Article in English | MEDLINE | ID: mdl-28651255

ABSTRACT

Purpose: We sought to examine the contribution of visual cues in audiovisual identification of consonants and vowels, in terms of isolation points (the shortest time required for correct identification of a speech stimulus), accuracy, and cognitive demands, in listeners with hearing impairment using hearing aids. Method: The study comprised 199 participants with hearing impairment (mean age = 61.1 years) with bilateral, symmetrical, mild-to-severe sensorineural hearing loss. Gated Swedish consonants and vowels were presented aurally and audiovisually to participants. Linear amplification was adjusted for each participant to assure audibility. The reading span test was used to measure participants' working memory capacity. Results: Audiovisual presentation resulted in shortened isolation points and improved accuracy for consonants and vowels relative to auditory-only presentation. This benefit was more evident for consonants than vowels. In addition, correlations and subsequent analyses revealed that listeners with higher scores on the reading span test identified both consonants and vowels earlier in auditory-only presentation, but only vowels (not consonants) in audiovisual presentation. Conclusion: Consonants and vowels differed in terms of the benefits afforded by their associated visual cues, as indicated by the degree of audiovisual benefit and the reduction in cognitive demands linked to the identification of consonants and vowels presented audiovisually.


Subject(s)
Hearing Aids , Hearing Loss, Sensorineural/rehabilitation , Phonetics , Recognition, Psychology , Speech Perception , Visual Perception , Adult , Aged , Aged, 80 and over , Auditory Threshold , Cognition , Female , Hearing Loss, Sensorineural/psychology , Humans , Male , Memory, Short-Term , Middle Aged , Pattern Recognition, Physiological , Persons With Hearing Impairments/psychology , Psychological Tests , Severity of Illness Index
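An isolation point, as defined in the abstract above, can be computed from one gated trial as the shortest gate duration from which the listener's responses are, and remain, correct. A minimal sketch; the gate durations and responses below are invented for illustration:

```python
def isolation_point(gate_ms, responses, target):
    """Shortest gate duration (ms) from which identification is correct
    and stays correct at every longer gate; None if never reached."""
    ip = None
    for dur, resp in sorted(zip(gate_ms, responses)):
        if resp == target:
            if ip is None:
                ip = dur            # first correct gate of a correct run
        else:
            ip = None               # a later error resets the candidate IP
    return ip

# Hypothetical gated trial: gates grow in 40 ms steps
gates = [40, 80, 120, 160, 200, 240]
answers = ["b", "b", "d", "d", "d", "d"]
print(isolation_point(gates, answers, "d"))  # 120
```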
6.
Front Psychol ; 8: 368, 2017.
Article in English | MEDLINE | ID: mdl-28348542

ABSTRACT

This study aimed to examine the efficacy and maintenance of short-term (one-session) gated audiovisual speech training for improving auditory sentence identification in noise in experienced elderly hearing-aid users. Twenty-five hearing aid users (16 men and 9 women), with an average age of 70.8 years, were randomly divided into an experimental (audiovisual training, n = 14) and a control (auditory training, n = 11) group. Participants underwent gated speech identification tasks comprising Swedish consonants and words presented at 65 dB sound pressure level with a 0 dB signal-to-noise ratio (steady-state broadband noise), in audiovisual or auditory-only training conditions. The Hearing-in-Noise Test was employed to measure participants' auditory sentence identification in noise before the training (pre-test), promptly after training (post-test), and 1 month after training (one-month follow-up). The results showed that audiovisual training improved auditory sentence identification in noise promptly after the training (post-test vs. pre-test scores); furthermore, this improvement was maintained 1 month after the training (one-month follow-up vs. pre-test scores). Such improvement was not observed in the control group, either promptly after the training or at the one-month follow-up. However, neither a significant between-groups difference nor a group × session interaction was observed. CONCLUSION: Audiovisual training may be considered in the aural rehabilitation of hearing aid users to improve listening capabilities in noisy conditions. However, the lack of a significant between-groups effect (audiovisual vs. auditory) or an interaction between group and session calls for further research.

7.
Int J Audiol ; 55(11): 623-42, 2016 11.
Article in English | MEDLINE | ID: mdl-27589015

ABSTRACT

OBJECTIVE: The aims of the current n200 study were to assess the structural relations between three classes of test variables (i.e. HEARING, COGNITION and aided speech-in-noise OUTCOMES) and to describe the theoretical implications of these relations for the Ease of Language Understanding (ELU) model. STUDY SAMPLE: Participants were 200 hard-of-hearing hearing-aid users, with a mean age of 60.8 years. Forty-three percent were females and the mean hearing threshold in the better ear was 37.4 dB HL. DESIGN: LEVEL 1 factor analyses extracted one factor per test and/or cognitive function based on a priori conceptualizations. The more abstract LEVEL 2 factor analyses were performed separately for the three classes of test variables. RESULTS: The HEARING test variables resulted in two LEVEL 2 factors, which we labelled SENSITIVITY and TEMPORAL FINE STRUCTURE; the COGNITIVE variables in one COGNITION factor only; and OUTCOMES in two factors, NO CONTEXT and CONTEXT. COGNITION predicted the NO CONTEXT factor to a stronger extent than the CONTEXT outcome factor. TEMPORAL FINE STRUCTURE and SENSITIVITY were associated with COGNITION, and all three contributed significantly and independently, especially to the NO CONTEXT outcome scores (R² = 0.40). CONCLUSIONS: All LEVEL 2 factors are important theoretically as well as for clinical assessment.


Subject(s)
Cognition , Correction of Hearing Impairment/instrumentation , Correction of Hearing Impairment/psychology , Hearing Aids , Hearing Disorders/psychology , Hearing Disorders/therapy , Persons With Hearing Impairments/psychology , Persons With Hearing Impairments/rehabilitation , Speech Intelligibility , Speech Perception , Acoustic Stimulation , Adult , Aged , Aged, 80 and over , Audiometry, Pure-Tone , Auditory Threshold , Comprehension , Executive Function , Female , Hearing , Hearing Disorders/diagnosis , Hearing Disorders/physiopathology , Humans , Male , Memory, Short-Term , Middle Aged , Neuropsychological Tests , Noise/adverse effects , Perceptual Masking
8.
Trends Hear ; 20, 2016 06 17.
Article in English | MEDLINE | ID: mdl-27317667

ABSTRACT

The present study compared elderly hearing aid (EHA) users (n = 20) with elderly normal-hearing (ENH) listeners (n = 20) in terms of isolation points (IPs, the shortest time required for correct identification of a speech stimulus) and accuracy for audiovisual gated speech stimuli (consonants, words, and final words in highly and less predictable sentences) presented in silence. In addition, we compared the IPs of audiovisual speech stimuli from the present study with auditory ones extracted from a previous study, to determine the impact of the addition of visual cues. Both participant groups achieved ceiling levels in terms of accuracy in the audiovisual identification of gated speech stimuli; however, the EHA group needed longer IPs for the audiovisual identification of consonants and words. The benefit of adding visual cues to auditory speech stimuli was more evident in the EHA group, as audiovisual presentation significantly shortened the IPs for consonants, words, and final words in less predictable sentences; in the ENH group, audiovisual presentation only shortened the IPs for consonants and words. In conclusion, although the audiovisual benefit was greater for the EHA group, this group had inferior performance compared with the ENH group in terms of IPs when supportive semantic context was lacking. Consequently, EHA users needed the initial part of the audiovisual speech signal to be longer than did their counterparts with normal hearing to reach the same level of accuracy in the absence of a semantic context.


Subject(s)
Cues , Hearing Aids , Speech Perception , Aged , Hearing Tests , Humans , Speech
9.
Front Psychol ; 6: 326, 2015.
Article in English | MEDLINE | ID: mdl-25859232

ABSTRACT

OBJECTIVE: To investigate working memory (WM), phonological skills, lexical skills, and reading comprehension in adults with Usher syndrome type 2 (USH2). DESIGN: The participants performed tests of phonological processing, lexical access, WM, and reading comprehension. The design of the test situation and tests was specifically considered for use with persons with low vision in combination with hearing impairment. The performance of the group with USH2 on the different cognitive measures was compared to that of a matched control group with normal hearing and vision (NVH). STUDY SAMPLE: Thirteen participants with USH2 aged 21-60 years and a control group of 10 individuals with NVH, matched on age and level of education. RESULTS: The group with USH2 displayed significantly lower performance on tests of phonological processing, and on measures requiring both fast visual judgment and phonological processing. There was a larger variation in performance among the individuals with USH2 than in the matched control group. CONCLUSION: The performance of the group with USH2 indicated similar problems with phonological processing skills and phonological WM as in individuals with long-term hearing loss. The group with USH2 also had significantly longer reaction times, indicating that processing of visual stimuli is difficult due to the visual impairment. These findings point toward the difficulties in accessing information that persons with USH2 experience, and could be part of the explanation of why individuals with USH2 report high levels of fatigue and feelings of stress (Wahlqvist et al., 2013).

10.
Trends Hear ; 18, 2014 Jul 31.
Article in English | MEDLINE | ID: mdl-25085610

ABSTRACT

This study compared elderly hearing aid (EHA) users and elderly normal-hearing (ENH) individuals on identification of auditory speech stimuli (consonants, words, and final word in sentences) that differed in their linguistic properties. We measured the accuracy with which the target speech stimuli were identified, as well as the isolation points (IPs: the shortest duration, from onset, required to correctly identify the speech target). The relationships between working memory capacity, the IPs, and speech accuracy were also measured. Twenty-four EHA users (with mild to moderate hearing impairment) and 24 ENH individuals participated in the present study. Despite the use of their regular hearing aids, the EHA users had delayed IPs and were less accurate in identifying consonants and words compared with the ENH individuals. The EHA users also had delayed IPs for final word identification in sentences with lower predictability; however, no significant between-group difference in accuracy was observed. Finally, there were no significant between-group differences in terms of IPs or accuracy for final word identification in highly predictable sentences. Our results also showed that, among EHA users, greater working memory capacity was associated with earlier IPs and improved accuracy in consonant and word identification. Together, our findings demonstrate that the gated speech perception ability of EHA users was not at the level of ENH individuals, in terms of IPs and accuracy. In addition, gated speech perception was more cognitively demanding for EHA users than for ENH individuals in the absence of semantic context.


Subject(s)
Aging/psychology , Cognition , Correction of Hearing Impairment/instrumentation , Hearing Aids , Hearing Loss, Bilateral/rehabilitation , Hearing Loss, Sensorineural/rehabilitation , Persons With Hearing Impairments/rehabilitation , Speech Perception , Acoustic Stimulation , Age Factors , Aged , Audiometry, Pure-Tone , Audiometry, Speech , Auditory Threshold , Female , Hearing Loss, Bilateral/diagnosis , Hearing Loss, Bilateral/psychology , Hearing Loss, Sensorineural/diagnosis , Hearing Loss, Sensorineural/psychology , Humans , Male , Memory , Middle Aged , Persons With Hearing Impairments/psychology , Recognition, Psychology , Speech Acoustics
11.
J Acoust Soc Am ; 136(2): EL142-7, 2014 Aug.
Article in English | MEDLINE | ID: mdl-25096138

ABSTRACT

The effects of audiovisual versus auditory training for speech-in-noise identification were examined in 60 young participants. The training conditions were audiovisual training, auditory-only training, and no training (n = 20 each). In the training groups, gated consonants and words were presented at 0 dB signal-to-noise ratio; stimuli were either audiovisual or auditory-only. The no-training group watched a movie clip without performing a speech identification task. Speech-in-noise identification was measured before and after the training (or control activity). Results showed that only audiovisual training improved speech-in-noise identification, demonstrating superiority over auditory-only training.


Subject(s)
Noise , Perceptual Masking , Speech Perception , Visual Perception , Acoustic Stimulation , Adult , Audiometry, Speech , Cues , Female , Humans , Male , Phonetics , Photic Stimulation , Random Allocation , Signal Detection, Psychological , Sound Spectrography , Task Performance and Analysis , Time Factors , Young Adult
12.
Front Psychol ; 5: 639, 2014.
Article in English | MEDLINE | ID: mdl-25009520

ABSTRACT

The effects of two types of auditory distracters (steady-state noise vs. four-talker babble) on visual-only speechreading accuracy were tested against a baseline (silence) in 23 participants with above-average speechreading ability. Their task was to speechread high frequency Swedish words. They were asked to rate their own performance and effort, and report how distracting each type of auditory distracter was. Only four-talker babble impeded speechreading accuracy. This suggests competition for phonological processing, since the four-talker babble demands phonological processing, which is also required for the speechreading task. Better accuracy was associated with lower self-rated effort in silence; no other correlations were found.

13.
Front Psychol ; 5: 531, 2014.
Article in English | MEDLINE | ID: mdl-24926274

ABSTRACT

This study aimed to measure the initial portion of signal required for the correct identification of auditory speech stimuli (or isolation points, IPs) in silence and noise, and to investigate the relationships between auditory and cognitive functions in silence and noise. Twenty-one university students were presented with auditory stimuli in a gating paradigm for the identification of consonants, words, and final words in highly predictable and low predictable sentences. The Hearing in Noise Test (HINT), the reading span test, and the Paced Auditory Serial Attention Test were also administered to measure speech-in-noise ability, working memory and attentional capacities of the participants, respectively. The results showed that noise delayed the identification of consonants, words, and final words in highly predictable and low predictable sentences. HINT performance correlated with working memory and attentional capacities. In the noise condition, there were correlations between HINT performance, cognitive task performance, and the IPs of consonants and words. In the silent condition, there were no correlations between auditory and cognitive tasks. In conclusion, a combination of hearing-in-noise ability, working memory capacity, and attention capacity is needed for the early identification of consonants and words in noise.

14.
Behav Res Methods ; 46(2): 499-516, 2014 Jun.
Article in English | MEDLINE | ID: mdl-24197711

ABSTRACT

A method for creating and presenting video-recorded synchronized audiovisual stimuli at a high frame rate, which is highly useful for psychophysical studies on, for example, just-noticeable differences and gating, is presented. Methods for accomplishing this include recording audio and video separately using an exact synchronization signal, editing the recordings and finding exact synchronization points, and presenting the synchronized audiovisual stimuli at a desired frame rate on a cathode ray tube display using MATLAB and Psychophysics Toolbox 3. The methods from an empirical gating study (Moradi, Lidestam, & Rönnberg, Frontiers in Psychology 4:359, 2013) are presented as an example of the implementation of playback at 120 fps.


Subject(s)
Audiovisual Aids , Data Display , Psychophysics/methods , Video Recording , Motion , Motion Pictures , Movement , Psychophysics/instrumentation , Software , Software Design , Time Factors , Video Recording/instrumentation
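The original work used MATLAB and Psychophysics Toolbox 3; purely to illustrate the idea of finding an exact synchronization point, here is a sketch in Python that locates a known sync pulse in an audio track by cross-correlation and maps it to the nearest 120 fps video frame. The sample rate, pulse shape, and offset are all assumptions, not values from the paper:

```python
import numpy as np

def find_sync_sample(audio, pulse):
    """Index where the known sync pulse best aligns with the audio,
    via cross-correlation (np.correlate, mode='valid')."""
    corr = np.correlate(np.asarray(audio, float),
                        np.asarray(pulse, float), mode="valid")
    return int(np.argmax(corr))

fs = 48_000                                # assumed audio sample rate (Hz)
fps = 120                                  # video frame rate
pulse = np.r_[np.ones(48), -np.ones(48)]   # 2 ms biphasic click (assumed)
audio = np.zeros(fs)                       # 1 s of silence...
offset = 12_345
audio[offset:offset + pulse.size] = pulse  # ...with the click embedded
n = find_sync_sample(audio, pulse)
frame = round(n * fps / fs)                # nearest video frame to the click
print(n, frame)  # 12345 31
```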
15.
Front Psychol ; 4: 359, 2013.
Article in English | MEDLINE | ID: mdl-23801980

ABSTRACT

This study investigated the degree to which audiovisual presentation (compared to auditory-only presentation) affected isolation point (IPs, the amount of time required for the correct identification of speech stimuli using a gating paradigm) in silence and noise conditions. The study expanded on the findings of Moradi et al. (under revision), using the same stimuli, but presented in an audiovisual instead of an auditory-only manner. The results showed that noise impeded the identification of consonants and words (i.e., delayed IPs and lowered accuracy), but not the identification of final words in sentences. In comparison with the previous study by Moradi et al., it can be concluded that the provision of visual cues expedited IPs and increased the accuracy of speech stimuli identification in both silence and noise. The implication of the results is discussed in terms of models for speech understanding.

16.
Int J Pediatr Otorhinolaryngol ; 76(10): 1449-57, 2012 Oct.
Article in English | MEDLINE | ID: mdl-22795738

ABSTRACT

INTRODUCTION: Usher syndrome is a genetic condition causing deaf-blindness and is one of the most common causes of syndromic deafness. Individuals with USH1 in Sweden born during the last 15 years have typically received cochlear implants (CI) as treatment for their congenital, profound hearing loss. Recent genetic research indicates that the cause of deafness in individuals with Usher type 1 (USH1) could be favorable for outcomes with cochlear implants. This population has not previously been the focus of cognitive research. OBJECTIVE: The present study aims to examine the phonological and lexical skills and working memory capacity (WMC) of children with USH1 and CI, and to compare their performance with that of children with normal hearing (NH), children with hearing impairment using hearing aids, and children with non-USH1 deafness using CI. METHODS: The participants were 7 children aged 7-16 years with USH1 and CI. They performed 10 sets of tasks measuring phonological and lexical skills and working memory capacity. CONCLUSIONS: The results indicate that children with USH1 and CI as a group generally perform at a similar level on the cognitive tasks as children with hearing impairment using hearing aids. However, the group with USH1 and CI shows a different performance profile than children with non-USH1 deafness using CI on tasks of phonological working memory and phonological skill.


Subject(s)
Cochlear Implants , Cognition/physiology , Usher Syndromes/surgery , Adolescent , Case-Control Studies , Child , Humans , Memory, Short-Term/physiology , Psychological Tests , Reading , Speech/physiology , Usher Syndromes/physiopathology
17.
Scand J Psychol ; 50(5): 427-35, 2009 Oct.
Article in English | MEDLINE | ID: mdl-19778390

ABSTRACT

UNLABELLED: Discrimination of vowel duration was explored with regard to discrimination threshold, error bias, and effects of modality and consonant context. A total of 122 normal-hearing participants were presented with disyllabic-like items such as /lal-lal/ or /mam-mam/ in which the lengths of the vowels were systematically varied, and were asked to judge whether the first or second vowel was longer. Presentation was either visual, auditory, or audiovisual. Vowel duration differences varied in 24 steps: 12 with a longer first /a/ and 12 with a longer last /a/ (range: ±33-400 ms). RESULTS: 50% JNDs were smaller than the lowest tested step size (33 ms); 75% JNDs were in the 33-66 ms range for all conditions but V /lal/, with a 75% JND at 66-100 ms. Errors were greatest for visual presentation and for /lal-lal/ tokens. There was an error bias towards reporting the first vowel as longer, and this was strongest for /mam-mam/ and when both vowels were short, possibly reflecting a sublinguistic processing strategy.


Subject(s)
Discrimination, Psychological/physiology , Language , Speech Perception/physiology , Acoustic Stimulation , Adult , Analysis of Variance , Attention/physiology , Female , Humans , Male , Reaction Time/physiology , Speech Intelligibility/physiology
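A 75% JND of the kind reported above can be read off a psychometric function by linear interpolation between tested step sizes, which is one common way such thresholds are obtained. A sketch with invented proportion-correct data, not the study's results:

```python
def jnd(step_ms, prop_correct, criterion=0.75):
    """Duration difference (ms) at which proportion correct first reaches
    the criterion, by linear interpolation between tested step sizes."""
    pairs = list(zip(step_ms, prop_correct))
    for (x0, p0), (x1, p1) in zip(pairs, pairs[1:]):
        if p0 < criterion <= p1:
            # linear interpolation between the bracketing step sizes
            return x0 + (criterion - p0) * (x1 - x0) / (p1 - p0)
    # criterion already met at the smallest tested step, or never met
    return step_ms[0] if prop_correct[0] >= criterion else None

# Hypothetical psychometric data: step size (ms) vs. proportion correct
steps = [33, 66, 100, 133]
p = [0.60, 0.72, 0.81, 0.90]
print(round(jnd(steps, p), 1))  # 77.3
```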
18.
J Speech Lang Hear Res ; 49(4): 835-47, 2006 Aug.
Article in English | MEDLINE | ID: mdl-16908878

ABSTRACT

PURPOSE: To study the role of visual perception of phonemes in visual perception of sentences and words among normal-hearing individuals. METHOD: Twenty-four normal-hearing adults identified consonants, words, and sentences, spoken by either a human or a synthetic talker. The synthetic talker was programmed with identical parameters within phoneme groups, hypothetically resulting in simplified articulation. Proportions of correctly identified phonemes per participant, condition, and task, as well as sensitivity to single consonants and clusters of consonants, were measured. Groups of mutually exclusive consonants were used for sensitivity analyses and hierarchical cluster analyses. RESULTS: Consonant identification performance did not differ as a function of talker, nor did average sensitivity to single consonants. The bilabial and labiodental clusters were most readily identified and cohesive for both talkers. Word and sentence identification was better for the human talker than the synthetic talker. The participants were more sensitive to the clusters of the least visible consonants with the human talker than with the synthetic talker. CONCLUSIONS: It is suggested that ability to distinguish between clusters of the least visually distinct phonemes is important in speechreading. Specifically, it reduces the number of candidates, and thereby facilitates lexical identification.


Subject(s)
Lipreading , Phonetics , Speech Perception/physiology , Visual Perception/physiology , Acoustic Stimulation , Adult , Analysis of Variance , Cluster Analysis , Female , Humans , Male , Speech Discrimination Tests , Speech Intelligibility
19.
Scand J Psychol ; 47(2): 93-101, 2006 Apr.
Article in English | MEDLINE | ID: mdl-16542351

ABSTRACT

Normal-hearing students (n = 72) performed sentence, consonant, and word identification in either A (auditory), V (visual), or AV (audiovisual) modality. The auditory signal had difficult speech-to-noise ratios. Talker (human vs. synthetic), topic (no cue vs. cue-words), and emotion (no cue vs. facially displayed vs. cue-words) were varied within groups. After the first block, effects of modality, face, topic, and emotion on initial appraisal and motivation were assessed. After the entire session, effects of modality on longer-term appraisal and motivation were assessed. The results from both assessments showed that V identification was more positively appraised than A identification. Correlations were tentatively interpreted to suggest that evaluation of self-rated performance may depend on a subjective standard and be reflected in motivation (if below the subjective standard; AV group) or in appraisal (if above the subjective standard; A group). Suggestions for further research are presented.


Subject(s)
Motivation , Speech Intelligibility , Speech Perception , Adult , Cognition , Female , Humans , Lipreading , Male
20.
Ear Hear ; 26(2): 214-24, 2005 Apr.
Article in English | MEDLINE | ID: mdl-15809546

ABSTRACT

OBJECTIVE: This case study tested the threshold hypothesis (Rönnberg et al., 1998), which states that superior speechreading skill is possible only if high-order cognitive functions, such as capacious verbal working memory, enable efficient strategies. DESIGN: In a case study, a speechreading expert (AA) was tested on a number of speechreading and cognitive tasks and compared with control groups (z scores). Sentence-based speechreading tests, a word-decoding test, and a phoneme identification task were used to assess speechreading skill at different analytical levels. The cognitive test battery used included tasks of working memory (e.g., reading span), inference-making, phonological processing (e.g., rhyme-judgment), and central-executive functions (verbal fluency, Stroop task). RESULTS: Contrary to previous cases of extreme speechreading skill, AA excels on both low-order (phoneme identification: z = +2.83) and high-order (sentence-based: z = +8.12 and word-decoding: z = +4.21) speechreading tasks. AA does not display superior verbal inference-making ability (sentence-completion task: z = -0.36). Neither does he possess a superior working memory (reading span: z = +0.80). However, AA outperforms the controls on two measures of executive retrieval functions, the semantic (z = +3.77) and phonological verbal fluency tasks (z = +3.55). CONCLUSIONS: The performance profile is inconsistent with the threshold hypothesis. Extreme speechreading accuracy can be obtained in ways other than via well-developed high-order cognitive functions. It is suggested that AA's extreme speechreading skill, which capitalizes on low-order functions in combination with efficient central executive functions, is due to early onset of hearing impairment.


Subject(s)
Lipreading , Speech Perception , Adult , Attention , Discrimination, Psychological , Humans , Male , Memory , Middle Aged , Phonetics , Semantics , Time Factors , Verbal Behavior