Results 1 - 5 of 5

1.
Article in English | MEDLINE | ID: mdl-29104712

ABSTRACT

People who are blind or low vision may find it harder to participate in exercise because of inaccessibility or a lack of encouragement. To address this, we developed Eyes-Free Yoga, an exergame built on the Microsoft Kinect that acts as a yoga instructor and gives personalized auditory feedback based on skeletal tracking. We conducted two studies on two versions of Eyes-Free Yoga: (1) a controlled laboratory study with 16 people who are blind or low vision, evaluating the feasibility of a proof of concept, and (2) an 8-week in-home deployment study with 4 people who are blind or low vision, using a fully functioning exergame containing four full workouts and motivational techniques. Participants in the laboratory study preferred the personalized feedback for yoga postures, so that feedback was used to build the core components of the system deployed in the second study and was included in both of its conditions. In the deployment study, participants practiced yoga consistently throughout the 8-week period (average hours = 17; average days of practice = 24), almost reaching the American Heart Association's recommended exercise guidelines. On average, the motivational techniques improved participants' user experience and increased their exercise frequency and time. These findings have implications for eyes-free exergame design, including engaging domain experts, piloting with inexperienced users, using musical metaphors, and designing for in-home use cases.
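The abstract does not describe the feedback mechanism in detail. As a rough sketch of how skeletal-tracking feedback of this kind can work, the Python snippet below compares a tracked joint angle against a target posture and picks a spoken cue. The joint choice, target angle, tolerance, and cue wording are all illustrative assumptions, not details taken from Eyes-Free Yoga.

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by 3-D points a-b-c."""
    ab = [a[i] - b[i] for i in range(3)]
    cb = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(ab, cb))
    norm = math.dist(a, b) * math.dist(c, b)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

# Hypothetical target: front knee bent to ~90 degrees, as in Warrior II.
TARGET_KNEE, TOLERANCE = 90.0, 15.0

def feedback(hip, knee, ankle):
    angle = joint_angle(hip, knee, ankle)
    if angle > TARGET_KNEE + TOLERANCE:
        return "Bend your front knee a little more."
    if angle < TARGET_KNEE - TOLERANCE:
        return "Straighten your front knee slightly."
    return "Good, hold this pose."

# Example with made-up Kinect-style skeleton coordinates (metres):
print(feedback((0.0, 1.0, 2.0), (0.1, 0.5, 2.0), (0.4, 0.1, 2.0)))
```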

2.
ASSETS ; 2014: 281-282, 2014.
Article in English | MEDLINE | ID: mdl-25531011

ABSTRACT

People who are blind or visually impaired face difficulties accessing a growing array of everyday appliances, needed to perform a variety of daily activities, because those appliances are equipped with electronic displays. To address this problem, we are developing a "Display Reader" smartphone app that uses computer vision to help a user acquire a usable image of a display. The current prototype analyzes video from the smartphone's camera and provides real-time feedback to guide the user, based on automatic estimates of image blur and glare, until a satisfactory image is acquired. Formative studies were conducted with several blind and visually impaired participants, whose feedback is guiding the development of the user interface. The prototype software has been released as a Free and Open Source Software (FOSS) project.
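The abstract does not say how blur and glare are estimated. One common heuristic, sketched below in Python with OpenCV, scores sharpness by the variance of the Laplacian and glare by the fraction of near-saturated pixels; the thresholds and function names here are assumptions for illustration, not Display Reader's actual criteria.

```python
import cv2  # OpenCV; pip install opencv-python

# Illustrative thresholds -- the real app's criteria are not given in the abstract.
BLUR_THRESHOLD = 100.0   # Laplacian variance below this => likely blurry
GLARE_THRESHOLD = 0.02   # more than 2% near-white pixels => likely glare

def frame_quality(frame_bgr):
    """Return (is_sharp, is_glare_free) for one camera frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    glare_fraction = (gray >= 250).mean()
    return sharpness >= BLUR_THRESHOLD, glare_fraction <= GLARE_THRESHOLD

# A guidance loop would speak cues until both checks pass, e.g.:
#   sharp, glare_free = frame_quality(frame)
#   if not sharp:        say("Hold the phone still")
#   elif not glare_free: say("Tilt the phone to reduce glare")
#   else:                capture(frame)
```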

3.
Disabil Rehabil Assist Technol ; 4(4): 288-99, 2009 Jul.
Article in English | MEDLINE | ID: mdl-19565389

ABSTRACT

Efficient web access remains elusive for blind computer users. Previous efforts to improve web accessibility have focused on developer awareness, automated improvement, and legislation, but each of these approaches leaves concerns unaddressed. First, while many tools can help produce accessible content, most are difficult to integrate into existing developer workflows and rarely offer specific suggestions that developers can implement. Second, tools that automatically improve web content for users generally solve specific problems and are difficult to combine and to use across the diversity of existing assistive technology. Finally, although blind web users have proven adept at overcoming the shortcomings of the web and of existing tools, they have been only marginally involved in improving the accessibility of their own web experience. As a step toward addressing these concerns, we have developed Accessmonkey, a common scripting framework that web users, web developers, and web researchers can use to collaboratively improve accessibility. This framework advances the idea that JavaScript and dynamic web content can be used to improve inaccessible content instead of being a cause of it. Using Accessmonkey, web users and developers on different platforms and with potentially different goals can collaboratively make the web more accessible. In this article, we first present the design of the Accessmonkey framework and offer several example scripts that demonstrate the utility of our approach. We conclude by discussing possible future extensions that would provide easy access to scripts as users browse the web and would enable non-technical blind users to independently create and share improvements.
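Accessmonkey scripts themselves are written in JavaScript; to keep this listing's examples in one language, the Python sketch below merely illustrates the kind of transformation such a script performs, giving images that lack alt text an empty alt attribute so screen readers skip them rather than announcing file names. The regex-based approach and function name are illustrative only and are not part of Accessmonkey.

```python
import re

def add_missing_alt(html):
    """Give every <img> tag lacking an alt attribute an empty alt,
    so screen readers skip it instead of reading the file name."""
    def fix(match):
        tag = match.group(0)
        if re.search(r'\balt\s*=', tag, re.IGNORECASE):
            return tag                      # already has alt text
        return tag[:-1] + ' alt="">'        # append placeholder alt
    return re.sub(r'<img\b[^>]*>', fix, html, flags=re.IGNORECASE)

print(add_missing_alt('<p><img src="logo.png"><img src="x.png" alt="X"></p>'))
# -> <p><img src="logo.png" alt=""><img src="x.png" alt="X"></p>
```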


Subject(s)
Access to Information, Disabled Persons, Internet, Software, User-Computer Interface, Blindness, Humans
4.
Disabil Rehabil Assist Technol ; 3(1): 93-105, 2008 Jan.
Article in English | MEDLINE | ID: mdl-18416521

ABSTRACT

For Deaf people, access to the mobile telephone network in the United States is currently limited to text messaging, forcing communication in English rather than American Sign Language (ASL), the preferred language. Because ASL is a visual language, mobile video phones have the potential to give Deaf people access to real-time mobile communication in their preferred language. However, even today's best video compression techniques cannot yield intelligible ASL at the limited bandwidths of cell phone networks. Motivated by this constraint, we conducted one focus group and two user studies with members of the Deaf community to determine the intelligibility effects of video compression techniques that exploit the visual nature of sign language. Inspired by eye-tracking results showing that high-resolution foveal vision is maintained around the face, we studied region-of-interest encodings (where the face is encoded at higher quality) as well as reduced frame rates (where fewer, higher-quality frames are displayed each second). At all bit rates studied here, participants preferred moderate quality increases in the face region, sacrificing quality in other regions. They also preferred slightly lower frame rates, because these yield better-quality frames for a fixed bit rate. The limited processing power of cell phones is a serious concern, because a real-time video encoder and decoder will be needed. Choosing less complex settings for the encoder can reduce encoding time but will affect video quality. We studied the intelligibility effects of this tradeoff and found that encoding time can be reduced significantly without severely affecting intelligibility. These results show promise for real-time access to the current low-bandwidth cell phone network through sign-language-specific encoding techniques.
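The preference for lower frame rates follows from simple arithmetic: at a fixed channel bit rate, every frame dropped leaves more bits for the frames that remain. The short Python sketch below makes the tradeoff concrete; the 30 kbps figure is an assumed example, not a rate taken from the study.

```python
# At a fixed channel bit rate, each frame's bit budget is rate / fps,
# so dropping the frame rate leaves more bits (hence better quality)
# per frame.
BIT_RATE = 30_000  # bits per second (illustrative, not from the study)

for fps in (15, 10, 5):
    print(f"{fps:2d} fps -> {BIT_RATE // fps:5d} bits per frame")
# 15 fps ->  2000 bits per frame
# 10 fps ->  3000 bits per frame
#  5 fps ->  6000 bits per frame
```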


Subject(s)
Cell Phone/instrumentation, Comprehension, Data Compression/methods, Deafness, Eye Movements/physiology, Self-Help Devices, Sign Language, Video Recording/instrumentation, Computer Peripherals, Disabled Persons, Focus Groups, Humans, Pilot Projects, Surveys and Questionnaires
5.
IEEE Trans Image Process ; 11(8): 901-11, 2002.
Article in English | MEDLINE | ID: mdl-18244684

ABSTRACT

This paper presents Group Testing for Wavelets (GTW), a novel embedded image compression algorithm that applies the concept of group testing to wavelet-transformed images. We explain how group testing generalizes the zerotree coding technique for wavelet-transformed images, and we show that Golomb coding is equivalent to Hwang's group testing algorithm. GTW is similar to SPIHT but replaces SPIHT's significance pass with a new method based on group testing. Although GTW implements no arithmetic coding, it performs competitively with SPIHT's arithmetic-coded variant in terms of rate-distortion performance.
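As a concrete illustration of the Golomb coding mentioned above, the Python sketch below implements the standard construction: a unary-coded quotient followed by a truncated-binary remainder. The parameter choice is an arbitrary example; this is the textbook code, not GTW's encoder.

```python
import math

def golomb_encode(n, m):
    """Golomb code for non-negative integer n with parameter m:
    unary quotient, then truncated-binary remainder."""
    q, r = divmod(n, m)
    bits = "1" * q + "0"                 # unary part: q ones, terminating zero
    b = max(1, math.ceil(math.log2(m)))  # remainder width in bits
    cutoff = (1 << b) - m                # remainders below this use b-1 bits
    if r < cutoff:
        bits += format(r, f"0{b - 1}b") if b > 1 else ""
    else:
        bits += format(r + cutoff, f"0{b}b")
    return bits

# m = 4 (a power of two, so this reduces to a Rice code):
for n in range(6):
    print(n, golomb_encode(n, 4))
# 0 000 | 1 001 | 2 010 | 3 011 | 4 1000 | 5 1001
```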
