1.
Med 2.0; 2(2): e8, 2013.
Article in English | MEDLINE | ID: mdl-25075243

ABSTRACT

BACKGROUND: eHealth services can contribute to individuals' self-management, that is, performing lifestyle-related activities and decision making to maintain good health or to mitigate the effect of a (chronic) illness on their health. But how effective are these services? Conducting a randomized controlled trial (RCT) is the gold standard for answering such a question, but takes extensive time and effort. The eHealth Analysis and Steering Instrument (eASI) offers a quick, but not dirty, alternative. The eASI surveys how eHealth services score on 3 dimensions (ie, utility, usability, and content) and 12 underlying categories (ie, insight into health condition, self-management decision making, performance of self-management, involving the social environment, interaction, personalization, persuasion, description of health issue, factors of influence, goal of eHealth service, implementation, and evidence). However, there are no data on its validity and reliability. OBJECTIVE: The objective of our study was to assess the construct and predictive validity and the interrater reliability of the eASI. METHODS: We identified 16 eHealth services supporting self-management that were published in the literature, whose effectiveness had been evaluated in an RCT, and whose service was available for rating. Participants (N=16) rated these services with the eASI. We analyzed the correlation of eASI items with the three underlying dimensions (construct validity), the correlation between the eASI score and the eHealth services' effect sizes observed in the RCTs (predictive validity), and the interrater agreement. RESULTS: Three items did not fit with the other items and dimensions and were removed from the eASI; 4 items were moved from the utility dimension to the content dimension. The interrater reliabilities of the dimensions and the total score were moderate (total, κ=.53, and content, κ=.55) to substantial (utility, κ=.69, and usability, κ=.63). The adjusted eASI explained variance in the eHealth services' effect sizes (R²=.31, P<.001), as did the dimensions utility (R²=.49, P<.001) and usability (R²=.18, P=.021). Usability also explained variance in the effect size on health outcomes (R²=.13, P=.028). CONCLUSIONS: After removing 3 items and moving 4 items to another dimension, the eASI (3 dimensions, 11 categories, and 32 items) has good construct and predictive validity. The eASI scales are moderately to highly reliable. Accordingly, the eASI can predict how effective an eHealth service is at supporting self-management. Because the pool of available eHealth services was small, it is advisable to reevaluate the eASI in the future with more services.
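For readers who want to see the shape of the predictive-validity analysis described above, the following is a minimal Python sketch: it regresses RCT effect sizes on eASI scores and reports the explained variance (R²). All values are invented placeholders and the computation is plain ordinary least squares, not a reproduction of the authors' analysis.

```python
# Minimal sketch of a predictive-validity check: regress RCT effect sizes on
# eASI scores and report R^2. All numbers are invented placeholders, not data
# from the study described above.
import numpy as np

easi_score = np.array([18.0, 22.5, 25.0, 27.5, 30.0, 31.5, 33.0, 35.5])   # hypothetical eASI totals
effect_size = np.array([0.10, 0.15, 0.22, 0.30, 0.28, 0.41, 0.45, 0.52])  # hypothetical RCT effect sizes

# Ordinary least-squares fit of effect size on eASI score.
slope, intercept = np.polyfit(easi_score, effect_size, deg=1)
predicted = slope * easi_score + intercept

# R^2: proportion of variance in the effect sizes explained by the eASI score.
ss_res = np.sum((effect_size - predicted) ** 2)
ss_tot = np.sum((effect_size - effect_size.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
print(f"R^2 = {r_squared:.2f}")
```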

2.
Ear Hear; 30(2): 262-72, 2009 Apr.
Article in English | MEDLINE | ID: mdl-19194286

ABSTRACT

OBJECTIVE: The aim of the current study was to examine whether partly incorrect subtitles, automatically generated by an automatic speech recognition (ASR) system, improve speech comprehension by listeners with hearing impairment. In an earlier study (Zekveld et al. 2008), we showed that speech comprehension in noise by young listeners with normal hearing improves when partly incorrect, automatically generated subtitles are presented. The current study focused on the effects of age, hearing loss, visual working memory capacity, and linguistic skills on the benefit obtained from automatically generated subtitles during listening to speech in noise. DESIGN: To investigate the effects of age and hearing loss, three groups of participants were included: 22 young persons with normal hearing (YNH, mean age = 21 years), 22 middle-aged adults with normal hearing (MA-NH, mean age = 55 years), and 30 middle-aged adults with hearing impairment (MA-HI, mean age = 57 years). The benefit from automatic subtitling was measured by Speech Reception Threshold (SRT) tests (Plomp & Mimpen, 1979). Both unimodal auditory and bimodal audiovisual SRT tests were performed. In the audiovisual tests, the subtitles were presented simultaneously with the speech, whereas in the auditory test, only speech was presented. The difference between the auditory and audiovisual SRT was defined as the audiovisual benefit. Participants additionally rated listening effort. We examined the influence of ASR accuracy level and text delay on the audiovisual benefit and the listening effort using a repeated-measures general linear model analysis. In a correlation analysis, we evaluated the relationships between age, auditory SRT, visual working memory capacity, and the audiovisual benefit and listening effort. RESULTS: The automatically generated subtitles improved speech comprehension in noise for all ASR accuracies and delays covered by the current study. Higher ASR accuracy levels resulted in more benefit obtained from the subtitles. Speech comprehension improved even for relatively low ASR accuracy levels; for example, participants obtained about 2 dB SNR of audiovisual benefit for ASR accuracies around 74%. Delaying the presentation of the text reduced the benefit and increased the listening effort. Participants with relatively poor unimodal speech comprehension obtained greater benefit from the subtitles than participants with better unimodal speech comprehension. We observed an age-related decline in the working memory capacity of the listeners with normal hearing. Higher age and lower working memory capacity were associated with increased effort required to use the subtitles to improve speech comprehension. CONCLUSIONS: Participants were able to use partly incorrect and delayed subtitles to increase their comprehension of speech in noise, regardless of age and hearing loss. This supports the further development and evaluation of an assistive listening system that displays automatically recognized speech to aid speech comprehension by listeners with hearing impairment.
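As an illustration of the audiovisual-benefit measure defined above (auditory SRT minus audiovisual SRT, in dB SNR), here is a minimal Python sketch using invented values; the correlation at the end only illustrates the kind of relationship reported between unimodal SRT and benefit, and is not the study's analysis.

```python
# Sketch of the audiovisual-benefit computation: benefit = auditory-only SRT
# minus audiovisual SRT (dB SNR); positive values mean the subtitles allowed
# comprehension at a poorer signal-to-noise ratio. All values are invented.
import numpy as np

srt_auditory = np.array([-2.0, 0.5, 3.0, 1.5, 4.0])       # unimodal SRTs (dB SNR), invented
srt_audiovisual = np.array([-4.5, -1.5, 0.0, -0.5, 1.0])  # SRTs with subtitles shown, invented

av_benefit = srt_auditory - srt_audiovisual
print("audiovisual benefit per listener (dB):", av_benefit)

# Illustrative Pearson correlation between unimodal SRT and benefit
# (the abstract reports that poorer unimodal comprehension went with more benefit).
r = np.corrcoef(srt_auditory, av_benefit)[0, 1]
print(f"r = {r:.2f}")
```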


Subjects
Deafness/rehabilitation , Hearing Aids , Hearing , Memory, Short-Term , Speech Perception , Speech Recognition Software , Adolescent , Adult , Age Factors , Aged , Auditory Threshold , Communication Aids for Disabled Persons , Female , Humans , Linguistics , Male , Middle Aged , Noise , Reading , Young Adult
3.
Trends Amplif; 13(1): 44-68, 2009 Mar.
Article in English | MEDLINE | ID: mdl-19126551

ABSTRACT

This study examined the subjective benefit obtained from automatically generated captions during telephone-speech comprehension in the presence of babble noise. Short stories were presented by telephone, either with or without captions that were generated offline by an automatic speech recognition (ASR) system. To simulate online ASR, the word accuracy (WA) level of the captions was 60% or 70%, and the text was presented with a delay relative to the speech. After each test, the hearing-impaired participants (n = 20) completed the NASA Task Load Index and several rating scales evaluating the support provided by the captions. Participants indicated that using the erroneous text for speech comprehension was difficult, and the reported task load did not differ between the audio + text and audio-only conditions. In a follow-up experiment (n = 10), the perceived benefit of presenting captions increased when the WA level was raised to 80% or 90% and the text delay was eliminated. However, in general, the task load did not decrease when captions were presented. These results suggest that the extra effort required to process the text may have been offset by the reduced effort required to comprehend the speech. Future research should aim at reducing the complexity of the task to increase the willingness of hearing-impaired persons to use an assistive communication system that automatically provides captions. The current results underline the need for obtaining both objective and subjective measures of benefit when evaluating assistive communication systems.
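To make the task-load comparison concrete, below is a minimal Python sketch that scores an unweighted ("raw") NASA-TLX per participant and runs a paired t-test between the audio + text and audio-only conditions. The ratings are invented, and the raw-TLX scoring and paired t-test are simplifying assumptions, not the analysis used in the study.

```python
# Sketch of a raw (unweighted) NASA-TLX comparison between listening conditions.
# Rows = participants, columns = the six TLX subscales (rated 0-100). Invented data.
import numpy as np
from scipy import stats

tlx_audio_text = np.array([[60, 55, 40, 50, 65, 45],
                           [70, 60, 35, 55, 70, 50],
                           [55, 50, 30, 45, 60, 40]])
tlx_audio_only = np.array([[58, 57, 38, 52, 66, 47],
                           [72, 58, 33, 56, 71, 49],
                           [54, 52, 29, 44, 61, 42]])

# Raw TLX = mean of the six subscale ratings per participant.
raw_text = tlx_audio_text.mean(axis=1)
raw_only = tlx_audio_only.mean(axis=1)

# Paired t-test: does reported task load differ between audio + text and audio only?
t, p = stats.ttest_rel(raw_text, raw_only)
print(f"paired t = {t:.2f}, p = {p:.3f}")
```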


Subjects
Communication Aids for Disabled Persons , Correction of Hearing Impairment , Hearing Loss, Mixed Conductive-Sensorineural/rehabilitation , Hearing Loss, Sensorineural/rehabilitation , Speech Perception , Speech Recognition Software , Telephone , Visual Perception , Adult , Aged , Aged, 80 and over , Cognition , Comprehension , Computer Systems , Female , Humans , Male , Memory , Middle Aged , Noise/adverse effects , Perceptual Masking , Speech Reception Threshold Test , Surveys and Questionnaires , Time Factors
4.
Ear Hear; 29(6): 838-52, 2008 Dec.
Article in English | MEDLINE | ID: mdl-18633325

ABSTRACT

OBJECTIVES: The aim of this study was to evaluate the benefit that listeners obtain from visually presented output of an automatic speech recognition (ASR) system while listening to speech in noise. DESIGN: Auditory-alone and audiovisual speech reception thresholds (SRTs) were measured. The SRT is defined as the speech-to-noise ratio at which 50% of the test sentences are reproduced correctly. In the auditory-alone SRT tests, the test sentences were presented only auditorily; in the audiovisual SRT test, the ASR output for each test sentence was also presented as text. The ASR system was used in two recognition modes: recognition of spoken words (word output) or recognition of speech sounds, or phones (phone output). The benefit obtained from the ASR output was defined as the difference between the auditory-alone and the audiovisual SRT. We also examined the readability of unimodally displayed ASR output (i.e., the percentage of sentences in which ASR errors were identified and accurately corrected). In experiment 1, the readability and benefit obtained from ASR word output (n = 14) were compared with those obtained from ASR phone output (n = 10). In experiment 2, the effect of presenting an indication of the ASR confidence level was examined (n = 14). The effect of delaying the presentation of the text relative to the speech (up to 6 sec) was examined in experiment 3 (n = 24). The ASR accuracy level was varied systematically in each experiment. RESULTS: Mean readability scores ranged from 0 to 46%, depending on ASR accuracy. Speech comprehension improved when the ASR output was displayed. For example, when the ASR output corresponded to readability scores of only about 20% correct, the text improved the SRT by about 3 dB SNR in the audiovisual SRT test. This improvement corresponds to an increase in speech comprehension of about 35% in critical conditions. Equally readable phone and word output provided similar benefit in speech comprehension. For equal ASR accuracies, both the readability of and the benefit from the word output generally exceeded those of the phone output. Presenting information about the ASR confidence level did not influence either the readability of or the benefit obtained from the word output. Delaying the text relative to the speech moderately decreased the benefit. CONCLUSIONS: The present study indicates that speech comprehension improves considerably when textual ASR output of moderate accuracy is displayed, and shows that this improvement depends on the readability of the ASR output. Word output has better accuracy and readability than phone output; listeners are therefore better able to use ASR word output than phone output to improve speech comprehension. The ability of older listeners and listeners with hearing impairment to use ASR output in speech comprehension requires further study.
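The SRT definition above (the speech-to-noise ratio at which 50% of sentences are reproduced correctly) can be illustrated with a small Python sketch; the psychometric-function values are invented, and the linear interpolation is only one simple way to locate the 50% point, not the adaptive procedure used in the study.

```python
# Sketch of the SRT definition: the SNR at which 50% of test sentences are
# reproduced correctly, located here by linear interpolation. Invented data.
import numpy as np

snr_db = np.array([-8.0, -6.0, -4.0, -2.0, 0.0, 2.0])          # presented speech-to-noise ratios (dB)
prop_correct = np.array([0.05, 0.15, 0.35, 0.60, 0.80, 0.95])  # invented proportions of sentences correct

srt = np.interp(0.5, prop_correct, snr_db)   # SNR at the 50%-correct point
print(f"SRT ~ {srt:.1f} dB SNR")

# The benefit from ASR output is then the auditory-alone SRT minus the
# audiovisual SRT: a lower SRT with text displayed indicates a benefit.
```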


Assuntos
Auxiliares de Comunicação para Pessoas com Deficiência , Ruído , Percepção da Fala , Interface para o Reconhecimento da Fala , Fala , Estimulação Acústica , Adolescente , Adulto , Surdez/reabilitação , Feminino , Humanos , Masculino , Pessoa de Meia-Idade , Fonética , Estimulação Luminosa , Leitura , Teste do Limiar de Recepção da Fala , Adulto Jovem