Results 1 - 4 of 4
1.
J Acoust Soc Am; 136(3): 1295, 2014 Sep.
Article in English | MEDLINE | ID: mdl-25190402

ABSTRACT

A miniature accelerometer and microphone can be used to obtain Horii Oral-Nasal Coupling (HONC) scores to objectively measure nasalization of speech. While this instrumentation compares favorably in terms of size and cost relative to other objective measures of nasality, the metric has not been well characterized in children. Furthermore, the measure is known to be affected by vowel loading, as speech loaded with "high" vowels is consistently scored as more nasal than speech loaded with "low" vowels. Filtering the signals used in computation of the HONC score to better isolate the correlates of nasalization has been shown to reduce vowel-related effects on the metric, but the efficacy of filtering has thus far only been explored in adults. Here, HONC scores for running speech and the vowel portions of consonant-vowel-consonant tokens were calculated for the speech of 26 children, aged 4-9 years. Scores were computed using the broadband accelerometer and speech signals, as well as using filtered, low-frequency versions of these signals. Both the broadband and filtered signals yielded well-separated HONC scores for nasal and non-nasal speech. HONC scores computed using filtered signals were found to exhibit less within-participant variability.
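
The abstract does not give the exact formula or filter settings used, but as a rough illustration, the sketch below shows a HONC-style computation in Python under common assumptions: the score is taken as the dB ratio of accelerometer (nasal vibration) amplitude to microphone (speech) amplitude, normalized by the same ratio measured on a sustained nasal such as /m/, and the "filtered" variant simply low-passes both signals before the ratio is computed. The function names, cutoff frequency, and normalization are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def band_rms(x, fs, cutoff_hz=None):
    """RMS amplitude of a signal, optionally low-pass filtered first."""
    if cutoff_hz is not None:
        sos = butter(4, cutoff_hz, btype="low", fs=fs, output="sos")
        x = sosfiltfilt(sos, x)
    return np.sqrt(np.mean(np.square(x)))

def honc_score(accel, mic, accel_ref, mic_ref, fs, cutoff_hz=None):
    """HONC-style nasalization score in dB (illustrative sketch).

    accel, mic         -- accelerometer and microphone signals for the test speech
    accel_ref, mic_ref -- the same two signals recorded during a sustained /m/,
                          used to normalize for sensor placement and gain
    cutoff_hz=None     -- broadband score; a low cutoff (e.g. a few hundred Hz)
                          approximates the filtered, low-frequency variant
    """
    ratio_test = band_rms(accel, fs, cutoff_hz) / band_rms(mic, fs, cutoff_hz)
    ratio_ref = band_rms(accel_ref, fs, cutoff_hz) / band_rms(mic_ref, fs, cutoff_hz)
    return 20.0 * np.log10(ratio_test / ratio_ref)
```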


Subjects
Acoustics, Mouth/physiology, Nose/physiology, Speech Acoustics, Voice Quality, Acoustics/instrumentation, Age Factors, Child, Child, Preschool, Female, Humans, Male, Signal Processing, Computer-Assisted, Sound Spectrography, Speech Production Measurement, Time Factors, Transducers
2.
Article in English | MEDLINE | ID: mdl-25570188

ABSTRACT

Many proposed EEG-based brain-computer interfaces (BCIs) make use of visual stimuli to elicit steady-state visual evoked potentials (SSVEP), the frequency of which can be mapped to a computer input. However, such a control scheme can be ineffective if a user has no motor control over their eyes and cannot direct their gaze towards a flashing stimulus to generate the required signal. Tactile methods based on somatosensory steady-state evoked potentials (SSSEP) are a potentially attractive alternative in these scenarios. Here, we compare the neural signals elicited by tactile stimulation (SSSEP) to those elicited by visual stimulation (SSVEP) in naïve BCI users, toward evaluating the feasibility of SSSEP-based control of an EEG BCI.
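
The abstract does not describe the analysis pipeline, but a common way to quantify the strength of a steady-state response (SSVEP or SSSEP) is the narrow-band power at the stimulation frequency relative to the power at neighboring frequencies. The sketch below is a minimal, hypothetical Python example of that idea; the bandwidths and segment length are placeholders, not values from the paper.

```python
import numpy as np
from scipy.signal import welch

def steady_state_snr(eeg, fs, stim_hz, signal_bw=0.5, noise_bw=2.0):
    """Crude SNR of a steady-state response for one EEG channel.

    Power in a narrow band around the stimulation frequency is divided by
    the mean power in the surrounding band (excluding the signal band).
    Values well above 1 suggest a detectable SSVEP/SSSEP at stim_hz.
    """
    freqs, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))
    signal_bins = np.abs(freqs - stim_hz) <= signal_bw
    noise_bins = (np.abs(freqs - stim_hz) <= noise_bw) & ~signal_bins
    return psd[signal_bins].mean() / psd[noise_bins].mean()

# Example usage (hypothetical sampling rate and stimulation frequency):
# snr_visual = steady_state_snr(trial_eeg, fs=256, stim_hz=15.0)
```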


Subjects
Brain-Computer Interfaces, Evoked Potentials, Somatosensory/physiology, Evoked Potentials, Visual/physiology, Electroencephalography, Female, Humans, Photic Stimulation, Signal-To-Noise Ratio, Young Adult
3.
Front Hum Neurosci; 7: 115, 2013.
Article in English | MEDLINE | ID: mdl-23576968

ABSTRACT

Selective auditory attention is essential for human listeners to be able to communicate in multi-source environments. Selective attention is known to modulate the neural representation of the auditory scene, boosting the representation of a target sound relative to the background, but the strength of this modulation, and the mechanisms contributing to it, are not well understood. Here, listeners performed a behavioral experiment demanding sustained, focused spatial auditory attention while we measured cortical responses using electroencephalography (EEG). We presented three concurrent melodic streams; listeners were asked to attend to and analyze the melodic contour of one of the streams, randomly selected from trial to trial. In a control task, listeners heard the same sound mixtures, but performed the contour judgment task on a series of visual arrows, ignoring all auditory streams. We found that the cortical responses could be fit as a weighted sum of event-related potentials evoked by the stimulus onsets in the competing streams. The weight on a given stream was roughly 10 dB higher when that stream was attended than when another auditory stream was attended; during the visual task, the auditory gains were intermediate. We then used a template-matching classification scheme to classify single-trial EEG results. In all subjects, we could determine which stream the subject was attending significantly better than chance. By directly quantifying the effect of selective attention on auditory cortical responses, these results reveal that focused auditory attention both suppresses the response to an unattended stream and enhances the response to an attended stream. The single-trial classification results add to the growing body of literature suggesting that auditory attentional modulation is sufficiently robust that it could be used as a control mechanism in brain-computer interfaces (BCIs).
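
The abstract summarizes two analysis steps, the weighted-sum ERP model and the template-matching classifier, without implementation detail. The Python sketch below shows one plausible form of each step under stated assumptions: the EEG trace is regressed onto per-stream regressors built by convolving each stream's onset impulse train with an ERP kernel, and single trials are classified by correlating them with per-stream templates. The array shapes, ERP kernel, and template construction are assumptions for illustration, not the authors' code.

```python
import numpy as np

def fit_stream_weights(eeg, erp_kernel, onset_trains):
    """Least-squares fit of per-stream gains.

    Models the EEG trace (n_samples,) as a weighted sum of one regressor per
    stream, where each regressor is that stream's 0/1 onset impulse train
    (n_streams, n_samples) convolved with a common ERP kernel (k,).
    Comparing the fitted gains across attention conditions gives the kind of
    dB differences described in the abstract.
    """
    regressors = np.stack(
        [np.convolve(train, erp_kernel)[: eeg.shape[0]] for train in onset_trains],
        axis=1,
    )
    weights, *_ = np.linalg.lstsq(regressors, eeg, rcond=None)
    return weights

def classify_attended_stream(trial_eeg, templates):
    """Template matching: label the trial with the stream whose template
    (e.g. the average response when that stream was attended) has the
    highest correlation with the single-trial EEG."""
    scores = [np.corrcoef(trial_eeg, template)[0, 1] for template in templates]
    return int(np.argmax(scores))
```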

4.
J Assoc Res Otolaryngol; 13(3): 359-68, 2012 Jun.
Article in English | MEDLINE | ID: mdl-22327619

ABSTRACT

Visual cues are known to aid auditory processing when they provide direct information about signal content, as in lip reading. However, some studies hint that visual cues also aid auditory perception by guiding attention to the target in a mixture of similar sounds. The current study directly tests this idea for complex, nonspeech auditory signals, using a visual cue providing only timing information about the target. Listeners were asked to identify a target zebra finch bird song played at a random time within a longer, competing masker. Two different maskers were used: noise and a chorus of competing bird songs. On half of all trials, a visual cue indicated the timing of the target within the masker. For the noise masker, the visual cue did not affect performance when target and masker were from the same location, but improved performance when target and masker were in different locations. In contrast, for the chorus masker, visual cues improved performance only when target and masker were perceived as coming from the same direction. These results suggest that simple visual cues for when to listen improve target identification by enhancing sounds near the threshold of audibility when the target is energetically masked and by enhancing segregation when it is difficult to direct selective attention to the target. Visual cues help little when target and masker already differ in attributes that enable listeners to engage selective auditory attention effectively, including differences in spectrotemporal structure and in perceived location.


Subjects
Attention, Auditory Perception, Cues, Adolescent, Animals, Female, Finches, Humans, Male, Noise, Visual Perception, Young Adult