Results 1 - 2 of 2
1.
Annu Int Conf IEEE Eng Med Biol Soc; 2022: 5136-5139, 2022 Jul.
Article in English | MEDLINE | ID: mdl-36086298

ABSTRACT

Visual prostheses can improve vision for people with severe vision loss, but low image resolution and lack of peripheral vision limit their effectiveness. To address both problems, we developed a prototype advanced video processing system with a headworn depth camera and feature detection capabilities. We used computer vision algorithms to detect landmarks representing a goal and plan a path towards the goal, while removing unnecessary distractors from the video. If the landmark fell outside the visual prosthesis's field-of-view (20 degrees central vision) but within the camera's field-of-view (70 degrees), we provided vibrational cues to the left or right temple to guide the user in pointing the camera. We evaluated an Argus II retinal prosthesis participant with significant vision loss who could not complete the task (finding a door in a large room) with either his remaining vision or his retinal prosthesis. His success rate improved to 57%, 37.5%, and 100% while requiring 52.3, 83.0, and 58.8 seconds to reach the door using only vibration feedback, retinal prosthesis with modified video, and retinal prosthesis with modified video and vibration feedback, respectively. This case study demonstrates a possible means of augmenting artificial vision. Clinical Relevance: Retinal prostheses can be enhanced by adding computer vision and non-visual cues.
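The cueing rule described in this abstract (no cue while the landmark sits inside the prosthesis's 20-degree field of view, a left/right temple vibration while it lies between the 20-degree prosthesis FOV and the 70-degree camera FOV) can be sketched as follows. This is a hypothetical illustration, not the authors' code; the function name, the azimuth sign convention, and the symmetric half-angle test are all assumptions:

```python
def vibration_cue(landmark_azimuth_deg,
                  prosthesis_fov_deg=20.0,
                  camera_fov_deg=70.0):
    """Return which temple to vibrate ('left'/'right') or None.

    Hypothetical sketch of the guidance rule in the abstract:
    `landmark_azimuth_deg` is the landmark's horizontal angle from the
    camera's optical axis (negative = left of center, positive = right).
    """
    half_prosthesis = prosthesis_fov_deg / 2.0
    half_camera = camera_fov_deg / 2.0
    offset = abs(landmark_azimuth_deg)
    if offset <= half_prosthesis:
        return None  # landmark visible through the prosthesis: no cue
    if offset <= half_camera:
        # landmark on camera but outside the prosthesis FOV: cue the
        # temple on the landmark's side so the user turns toward it
        return "left" if landmark_azimuth_deg < 0 else "right"
    return None  # landmark off-camera: its direction is unknown
```

For example, a landmark 25 degrees to the right of the camera axis is outside the 10-degree half-FOV of the prosthesis but within the 35-degree half-FOV of the camera, so the right temple would vibrate.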


Subject(s)
Cues, Visual Prosthesis, Algorithms, Humans, Vision Disorders, Visual Fields, Visual Perception
2.
J Acoust Soc Am; 150(4): 2526, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34717521

ABSTRACT

The practical efficacy of deep learning based speaker separation and/or dereverberation hinges on its ability to generalize to conditions not employed during neural network training. The current study was designed to assess the ability to generalize across extremely different training versus test environments. Training and testing were performed using different languages having no known common ancestry and correspondingly large linguistic differences: English for training and Mandarin for testing. Additional generalizations included untrained speech corpus/recording channel, target-to-interferer energy ratios, reverberation room impulse responses, and test talkers. A deep computational auditory scene analysis algorithm, employing complex time-frequency masking to estimate both magnitude and phase, was used to segregate two concurrent talkers and simultaneously remove large amounts of room reverberation to increase the intelligibility of a target talker. Significant intelligibility improvements were observed for the normal-hearing listeners in every condition. Benefit averaged 43.5 percentage points across conditions and was comparable to that obtained when both training and testing were performed in English. Benefit is projected to be considerably larger for individuals with hearing impairment. It is concluded that a properly designed and trained deep speaker separation/dereverberation network can be capable of generalization across vastly different acoustic environments that include different languages.
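The key mechanism named in this abstract, complex time-frequency masking, differs from conventional magnitude masking in that each mask value is a complex number: applying it scales *and* phase-rotates every time-frequency bin of the mixture STFT, which is what lets the network correct the phase distortions introduced by reverberation. A minimal sketch of the mask-application step is below; the mask values here are toy inputs standing in for network output, and the function name is an assumption, not the study's API:

```python
import numpy as np

def apply_complex_mask(mixture_stft, complex_mask):
    """Apply a complex time-frequency mask to a mixture STFT.

    Each bin of `complex_mask` multiplies the corresponding bin of
    `mixture_stft`, adjusting both magnitude (|mask|) and phase
    (angle of mask). A real-valued mask could only rescale magnitude.
    In the study, the mask is predicted by a trained deep network;
    here it is simply an input array (toy values, for illustration).
    """
    return mixture_stft * complex_mask  # element-wise complex product

# Toy example: a mask of 0.5j halves each bin's magnitude and rotates
# its phase by +90 degrees.
mix = np.array([[1.0 + 1.0j, 2.0 + 0.0j]])
mask = np.full_like(mix, 0.5j)
estimate = apply_complex_mask(mix, mask)
```

The estimated target STFT would then be inverted (e.g. with an inverse STFT) to produce the enhanced waveform presented to listeners.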


Subject(s)
Deep Learning, Hearing Loss, Speech Perception, Humans, Language, Perceptual Masking, Speech Intelligibility