Results 1 - 4 of 4
1.
Article in English | MEDLINE | ID: mdl-35237772

ABSTRACT

Back-of-device interaction is a promising approach to smartphone input. In this paper, we present BackSwipe, a back-of-device command and text input technique that lets a user hold a smartphone in one hand and use the index finger of the same hand to draw a word-gesture anywhere on the back of the device to enter commands and text. To support BackSwipe, we propose a back-of-device word-gesture decoding algorithm that infers the keyboard location from back-of-device gestures and adjusts the keyboard size to match the gesture's scale; the inferred keyboard is then fed back into the system for decoding. Our user study shows that BackSwipe is a feasible and promising input method, especially for command input in the one-handed holding posture: users entered commands with an average accuracy of 92% at a speed of 5.32 seconds per command. Text entry performance varied across users: the average speed was 9.58 WPM, with some users reaching 18.83 WPM, and the average word error rate was 11.04%, with some users as low as 2.85%. Overall, BackSwipe complements existing smartphone interaction by leveraging the back of the device as a gestural input surface.
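To make the decoding idea concrete, the following is a minimal Python sketch of scale- and location-invariant word-gesture matching, standing in for the paper's inference of keyboard position and size from the gesture. The QWERTY geometry, the resampling count, and the mean point-to-point shape distance are illustrative assumptions, not the published BackSwipe algorithm.

```python
import math

# Unit-key QWERTY geometry; row offsets approximate the physical stagger.
QWERTY_ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]
ROW_OFFSETS = [0.0, 0.25, 0.75]

# Letter -> key-center coordinate (x right, y down, in key widths).
CENTERS = {
    ch: (off + col + 0.5, row + 0.5)
    for row, (letters, off) in enumerate(zip(QWERTY_ROWS, ROW_OFFSETS))
    for col, ch in enumerate(letters)
}

def word_path(word):
    """Ideal gesture for a word: the polyline through its key centers."""
    return [CENTERS[ch] for ch in word.lower()]

def resample(points, n=32):
    """Resample a polyline to n points evenly spaced by arc length."""
    pts = list(points)
    total = sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))
    interval = (total or 1e-9) / (n - 1)
    out, acc, prev, i = [pts[0]], 0.0, pts[0], 1
    while i < len(pts):
        d = math.dist(prev, pts[i])
        if acc + d >= interval and d > 0:
            t = (interval - acc) / d
            prev = (prev[0] + t * (pts[i][0] - prev[0]),
                    prev[1] + t * (pts[i][1] - prev[1]))
            out.append(prev)
            acc = 0.0
        else:
            acc += d
            prev = pts[i]
            i += 1
    while len(out) < n:  # pad against floating-point shortfall
        out.append(pts[-1])
    return out[:n]

def normalize(points):
    """Translate to the bounding-box origin and scale its longer side
    to 1. This is the sketch's stand-in for inferring the keyboard's
    location and size from the gesture itself."""
    xs, ys = [p[0] for p in points], [p[1] for p in points]
    s = max(max(xs) - min(xs), max(ys) - min(ys)) or 1e-9
    return [((x - min(xs)) / s, (y - min(ys)) / s) for x, y in points]

def decode(gesture, vocabulary, n=32):
    """Rank candidate words by mean point-to-point distance between the
    normalized gesture and each word's normalized ideal path."""
    g = normalize(resample(gesture, n))
    def dist_to(word):
        t = normalize(resample(word_path(word), n))
        return sum(math.dist(p, q) for p, q in zip(g, t)) / n
    return sorted(vocabulary, key=dist_to)

# Toy trace drawn anywhere on the back of the device; normalization
# removes its absolute position and scale before matching.
print(decode([(120, 210), (150, 190), (180, 205)], ["the", "go", "at"]))
```

Normalizing both the gesture and each word template to a unit bounding box is what lets the trace be drawn anywhere on the back of the device at any size, which is the property the abstract's keyboard-inference step provides.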

2.
PLoS One; 14(1): e0210630, 2019.
Article in English | MEDLINE | ID: mdl-30650159

ABSTRACT

People typically rely heavily on visual information when finding their way to unfamiliar locations. For individuals with reduced vision, there are a variety of navigational tools available to assist with this task if needed. However, for wayfinding in unfamiliar indoor environments the applicability of existing tools is limited. One potential approach to assist with this task is to enhance visual information about the location and content of existing signage in the environment. With this aim, we developed a prototype software application, which runs on a consumer head-mounted augmented reality (AR) device, to assist visually impaired users with sign-reading. The sign-reading assistant identifies real-world text (e.g., signs and room numbers) on command, highlights the text location, converts it to high contrast AR lettering, and optionally reads the content aloud via text-to-speech. We assessed the usability of this application in a behavioral experiment. Participants with simulated visual impairment were asked to locate a particular office within a hallway, either with or without AR assistance (referred to as the AR group and control group, respectively). Subjective assessments indicated that participants in the AR group found the application helpful for this task, and an analysis of walking paths indicated that these participants took more direct routes compared to the control group. However, participants in the AR group also walked more slowly and took more time to complete the task than the control group. The results point to several specific future goals for usability and system performance in AR-based assistive tools.
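As a rough illustration of the pipeline described above (recognize text on command, highlight its location, re-render it as high-contrast AR lettering, and optionally speak it), here is a hedged Python sketch. The `ocr`, `draw_label`, and `speak` functions are hypothetical stand-ins for the device's scene-text recognition, AR rendering, and text-to-speech services, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    text: str
    box: tuple  # (x, y, width, height) in camera-frame coordinates

def ocr(frame):
    """Placeholder for scene-text recognition; a real build would call
    an OCR engine on the camera frame. Returns canned output here."""
    return [Detection("ROOM 214", (320, 180, 90, 30))]

def draw_label(det):
    """Stand-in for AR rendering: a real build would draw det.text as
    large, high-contrast lettering anchored at det.box so it stays
    registered to the physical sign."""
    print(f"[overlay] '{det.text}' at {det.box}")

def speak(text):
    """Stand-in for the device's text-to-speech engine."""
    print(f"[tts] {text}")

def assist(frame, read_aloud=True, min_len=2):
    """One on-command pass: find text, highlight it, optionally speak it."""
    for det in ocr(frame):
        if len(det.text.strip()) < min_len:
            continue  # skip one-character OCR noise
        draw_label(det)
        if read_aloud:
            speak(det.text)

assist(frame=None)  # demo call exercising the canned detection
```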


Subject(s)
Virtual Reality , Visually Impaired Persons , Female , Humans , Male , User-Computer Interface , Young Adult
3.
Optom Vis Sci; 95(9): 727-737, 2018 Sep.
Article in English | MEDLINE | ID: mdl-29877901

ABSTRACT

SIGNIFICANCE: For people with limited vision, wearable displays hold the potential to digitally enhance visual function. As these display technologies advance, it is important to understand their promise and limitations as vision aids. PURPOSE: The aim of this study was to test the potential of a consumer augmented reality (AR) device for improving the functional vision of people with near-complete vision loss. METHODS: An AR application that translates spatial information into high-contrast visual patterns was developed. Two experiments assessed the efficacy of the application to improve vision: an exploratory study with four visually impaired participants and a main controlled study with participants with simulated vision loss (n = 48). In both studies, performance was tested on a range of visual tasks (identifying the location, pose and gesture of a person, identifying objects, and moving around in an unfamiliar space). Participants' accuracy and confidence were compared on these tasks with and without augmented vision, as well as their subjective responses about ease of mobility. RESULTS: In the main study, the AR application was associated with substantially improved accuracy and confidence in object recognition (all P < .001) and to a lesser degree in gesture recognition (P < .05). There was no significant change in performance on identifying body poses or in subjective assessments of mobility, as compared with a control group. CONCLUSIONS: Consumer AR devices may soon be able to support applications that improve the functional vision of users for some tasks. In our study, both artificially impaired participants and participants with near-complete vision loss performed tasks that they could not do without the AR system. Current limitations in system performance and form factor, as well as the risk of overconfidence, will need to be overcome.
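The abstract does not specify how spatial information is rendered, but one plausible reading is to quantize a depth map into a few widely separated brightness bands so that nearby structure stays visible to residual vision. The Python sketch below illustrates that idea; the band count, depth range, and gray levels are purely illustrative assumptions, not the study's rendering parameters.

```python
import numpy as np

def depth_to_contrast(depth, d_min=0.5, d_max=5.0, bands=4):
    """Quantize a metric depth map (meters) into a few gray bands,
    brightest for the nearest surfaces, black beyond d_max."""
    d = np.clip(depth, d_min, d_max)
    idx = ((d - d_min) / (d_max - d_min) * (bands - 1)).round().astype(int)
    levels = np.linspace(255, 64, bands).astype(np.uint8)  # near = bright
    out = levels[idx]
    out[depth > d_max] = 0  # beyond range: leave unrendered (black)
    return out

# Example: a 2x3 depth map; nearer pixels come back brighter.
demo = np.array([[0.6, 1.2, 2.0],
                 [3.5, 4.8, 9.0]])
print(depth_to_contrast(demo))
```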


Subject(s)
Blindness/rehabilitation , Self-Help Devices , Sensory Aids , Virtual Reality Exposure Therapy , Vision, Low/rehabilitation , Visual Perception/physiology , Aged , Blindness/physiopathology , Equipment Design , Female , Humans , Male , Middle Aged , Vision, Low/physiopathology
4.
Zhonghua Wai Ke Za Zhi; 41(8): 620-2, 2003 Aug.
Article in Chinese | MEDLINE | ID: mdl-14505541

ABSTRACT

OBJECTIVE: To explore whether early fracture fixation can alleviate the gut barrier dysfunction caused by multiple firearm injuries in pigs. METHODS: Twelve healthy pigs sustained a tangential parietal bone fracture and comminuted fractures of both femora (ISS ≥ 16) from 5.8 mm bullet wounds and were randomly divided into two groups. The control group (n = 6) received no fracture treatment; the fracture fixation group (Group F, n = 6) underwent immediate fixation of both femora with intramedullary nails. Portal vein plasma concentrations of D-lactate, diamine oxidase (DAO), and endotoxin were measured at intervals before and after trauma. Portal vein blood was cultured and the rate of positive cultures was calculated. Small-bowel DAO concentration was also measured 72 hours after trauma. RESULTS: In the control group, plasma concentrations of D-lactate, DAO, and endotoxin rose early and remained elevated through 72 hours after trauma; the rate of positive blood cultures was 63.3%. In Group F, plasma D-lactate, DAO, and endotoxin also rose early (6 - 12 h) but declined significantly from 24 h or 48 h after trauma compared with the control group (P < 0.05), and the rate of positive blood cultures was lower (30.0%, P < 0.05). Small-bowel DAO concentrations decreased in both groups, but to a lesser extent in Group F. CONCLUSION: Bacterial and endotoxin translocation accompanied increased gut permeability after multiple firearm injuries. Early fracture fixation after trauma could alleviate gut barrier dysfunction and reduce the chance of enterogenous infection.


Subject(s)
Fracture Fixation, Internal , Multiple Trauma/surgery , Wounds, Gunshot/surgery , Animals , Disease Models, Animal , Intestinal Mucosa/metabolism , Multiple Trauma/physiopathology , Permeability , Random Allocation , Swine , Wounds, Gunshot/physiopathology