Results 1 - 5 of 5
1.
Sci Rep ; 12(1): 3206, 2022 02 25.
Article in English | MEDLINE | ID: mdl-35217676

ABSTRACT

Understanding speech in background noise is challenging. Face masks, made ubiquitous by the COVID-19 pandemic, make it even harder. We developed a multisensory setup, including a sensory substitution device (SSD), that delivers speech simultaneously through audition and as vibrations on the fingertips. The vibrations correspond to low frequencies extracted from the speech input. We trained two groups of non-native English speakers to understand distorted speech in noise. After a short session (30-45 min) of repeating sentences, with or without concurrent matching vibrations, both groups showed a comparable mean improvement of 14-16 dB in Speech Reception Threshold (SRT) in two test conditions: when participants repeated sentences from hearing alone, and when matching vibrations on the fingertips were also present. This is a very strong effect, considering that a 10 dB difference corresponds to a doubling of perceived loudness. The number of sentence repetitions needed to complete either type of training was comparable. Meanwhile, the mean group SNR for the audio-tactile training (14.7 ± 8.7) was significantly lower (i.e., harder) than for the auditory training (23.9 ± 11.8), indicating a potential facilitating effect of the added vibrations. In addition, both before and after training, most participants (70-80%) understood speech in noise better (by 4-6 dB on average) when the audio sentences were accompanied by matching vibrations. This is the same magnitude of multisensory benefit that we reported, with no training at all, in our previous study using the same experimental procedures. After training, performance in this test condition was also best in both groups (SRT ~ 2 dB). Both types of training had the least effect in the third test condition, in which participants repeated sentences accompanied by non-matching tactile vibrations; performance in this condition was also poorest after training. The results indicate that both types of training may remove some of the difficulty of sound perception, which might enable more effective use of speech inputs delivered via vibrotactile stimulation. We discuss the implications of these novel findings for basic science. In particular, we show that even in adulthood, i.e., long after the classical "critical periods" of development have passed, a new pairing between a given computation (here, speech processing) and an atypical sensory modality (here, touch) can be established and trained, and that this process can be rapid and intuitive. We further present possible applications of our training program and the SSD for auditory rehabilitation in patients with hearing (and sight) deficits, as well as in healthy individuals in suboptimal acoustic situations.
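The loudness rule of thumb cited in the abstract (a 10 dB increase roughly doubles perceived loudness) follows the standard sone approximation, loudness ratio = 2^(ΔL/10). A minimal sketch applying it to the reported 14-16 dB SRT improvement; the helper name is illustrative, not from the paper:

```python
def loudness_ratio(delta_db):
    # Rule of thumb behind the sone scale: perceived loudness
    # doubles for every +10 dB in sound level.
    return 2 ** (delta_db / 10)

print(loudness_ratio(10))                 # 2.0: the doubling cited above
print(round(loudness_ratio(14), 2))       # lower end of the reported gain
print(round(loudness_ratio(16), 2))       # upper end of the reported gain
```

By this approximation, the 14-16 dB improvement corresponds to a roughly 2.6- to 3-fold change in perceived loudness.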


Subjects
COVID-19, Speech Perception, Adult, Humans, Noise, Speech, Speech Perception/physiology, Touch
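The abstract above says the vibrations correspond to "low frequencies extracted from the speech input" but does not specify the extraction method. A minimal sketch of one plausible approach, a first-order low-pass filter; the filter type, ~300 Hz cutoff, and sampling rate are illustrative assumptions, not the authors' implementation:

```python
import math

def lowpass(signal, fs, cutoff_hz):
    # First-order IIR low-pass (exponential smoothing): a simple
    # stand-in for whatever filter derives the vibrotactile signal.
    dt = 1.0 / fs
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    alpha = dt / (rc + dt)
    out, y = [], 0.0
    for x in signal:
        y += alpha * (x - y)
        out.append(y)
    return out

def rms(xs):
    return math.sqrt(sum(x * x for x in xs) / len(xs))

fs = 8000                                             # assumed sampling rate
t = [i / fs for i in range(fs)]                       # 1 second of samples
low = [math.sin(2 * math.pi * 100 * x) for x in t]    # 100 Hz: mostly kept
high = [math.sin(2 * math.pi * 2000 * x) for x in t]  # 2 kHz: attenuated
mixed = [a + b for a, b in zip(low, high)]

filtered = lowpass(mixed, fs, 300)  # keep content below ~300 Hz
print(rms(filtered) / rms(mixed))   # well below 1: high band removed
```

The filtered signal retains the low-frequency envelope that a fingertip vibrator could reproduce, while the high-frequency content is strongly attenuated.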
2.
PLoS One ; 11(3): e0151593, 2016.
Article in English | MEDLINE | ID: mdl-27007812

ABSTRACT

Virtual environments are becoming ubiquitous and are used in a variety of contexts, from entertainment to training and rehabilitation. Recently, technology that uses sound to represent visual information has been developed to make them more accessible to blind or visually impaired users. The ability of older individuals to interpret these cues has not yet been studied. In this experiment, we studied the effects of age and sensory modality (visual or auditory) on navigation through a virtual maze. We added a layer of complexity by conducting the experiment in a rotating room, in order to test the effect of the spatial bias induced by the rotation on performance. Results from 29 participants showed that with auditory cues, participants took longer to complete the mazes, traveled a longer path through them, paused more, and collided with the walls more often than with visual cues. The older group likewise took longer to complete the mazes, paused more, and had more collisions with the walls than the younger group. Room rotation had no effect on performance, nor were there any significant interactions among age, feedback modality, and room rotation. We conclude that performance declines with age, and that while navigation with auditory cues is possible even at an older age, it presents more challenges than visual navigation.


Subjects
Aging, User-Computer Interface, Adult, Female, Hearing, Humans, Male, Vision, Ocular, Young Adult
3.
Restor Neurol Neurosci ; 30(4): 313-23, 2012.
Article in English | MEDLINE | ID: mdl-22596353

ABSTRACT

PURPOSE: Visual sensory substitution devices (SSDs) use sound or touch to convey information that is normally perceived by vision. Prior research using SSDs focused primarily on the perceptual components of learning to use them and on their neural correlates. However, sensorimotor integration is critical to making SSDs relevant for everyday tasks, such as grabbing a cup of coffee efficiently. The purpose of this study was to test the use of a novel visual-to-auditory SSD to guide a fast reaching movement. METHODS: Using sound, the SSD relays location, shape, and color information. Participants were asked to make fast reaching movements to targets presented by the SSD. RESULTS: After only a short practice session, blindfolded sighted participants performed fast and accurate movements to the presented targets, which did not differ significantly from movements performed with visual feedback in terms of movement time, peak speed, and path length. A small but significant difference was found between the endpoint accuracy of movements under the two feedback conditions; remarkably, in both cases the average error was smaller than 0.5 cm. CONCLUSIONS: Our findings combine with previous brain-imaging studies to support a theory of a modality-independent representation of spatial information. Task-specificity, rather than modality-specificity, of brain functions is crucially important for the rehabilitative use of SSDs in the blind and the visually impaired. We present the first direct comparison between movement trajectories performed with an SSD and those performed under visual guidance. The accuracy level reached in this study demonstrates the potential applicability of the visual-to-auditory SSD to daily tasks that require fast, accurate reaching movements, and indicates a potential for rehabilitative use of the device.


Subjects
Acoustic Stimulation/instrumentation, Movement/physiology, Psychomotor Performance/physiology, Vision, Ocular/physiology, Adult, Feedback, Female, Humans, Male, Perception/physiology, Time Factors, Visually Impaired Persons, Young Adult
4.
Exp Brain Res ; 166(3-4): 559-71, 2005 Oct.
Article in English | MEDLINE | ID: mdl-16028028

ABSTRACT

The perception of objects is a cognitive function of prime importance. In everyday life, object perception benefits from the coordinated interplay of vision, audition, and touch. The different sensory modalities provide both complementary and redundant information about objects, which may improve recognition speed and accuracy in many circumstances. We review crossmodal studies of object recognition in humans that mainly employed functional magnetic resonance imaging (fMRI). These studies show that visual, tactile, and auditory information about objects can activate cortical association areas that were once believed to be modality-specific. Processing converges either in multisensory zones or via direct crossmodal interaction of modality-specific cortices without relay through multisensory regions. We integrate these findings with existing theories about semantic processing and propose a general mechanism for crossmodal object recognition: the recruitment and location of multisensory convergence zones vary depending on the information content and the dominant modality.


Subjects
Form Perception/physiology, Magnetic Resonance Imaging, Recognition, Psychology/physiology, Acoustic Stimulation, Animals, Cerebral Cortex/physiology, Humans, Image Processing, Computer-Assisted, Photic Stimulation, Physical Stimulation, Touch
5.
Nat Neurosci ; 4(3): 324-30, 2001 Mar.
Article in English | MEDLINE | ID: mdl-11224551

ABSTRACT

The ventral pathway is involved in primate visual object recognition. In humans, a central stage in this pathway is an occipito-temporal region termed the lateral occipital complex (LOC), which is preferentially activated by visual objects compared to scrambled images or textures. However, objects have characteristic attributes (such as three-dimensional shape) that can be perceived both visually and haptically. Therefore, object-related brain areas may hold a representation of objects in both modalities. Using fMRI to map object-related brain regions, we found robust and consistent somatosensory activation in the occipito-temporal cortex. This region showed clear preference for objects compared to textures in both modalities. Most somatosensory object-selective voxels overlapped a part of the visual object-related region LOC. Thus, we suggest that neuronal populations in the occipito-temporal cortex may constitute a multimodal object-related network.


Subjects
Brain/metabolism, Pattern Recognition, Visual/physiology, Stereognosis/physiology, Visual Pathways/anatomy & histology, Visual Pathways/metabolism, Adult, Brain/anatomy & histology, Brain/physiology, Brain Mapping, Female, Functional Laterality/physiology, Humans, Magnetic Resonance Imaging, Male, Middle Aged, Photic Stimulation, Physical Stimulation, Psychomotor Performance/physiology