Results 1 - 5 of 5
1.
Sci Adv ; 8(12): eabj9220, 2022 Mar 25.
Article in English | MEDLINE | ID: mdl-35333568

ABSTRACT

Accurate transmission of biosignals without interference from surrounding noise is a key factor in realizing human-machine interfaces (HMIs). We propose frequency-selective acoustic and haptic sensors for dual-mode HMIs based on triboelectric sensors with a hierarchical macrodome/micropore/nanoparticle structure of ferroelectric composites. Our sensor shows high sensitivity and linearity over a wide range of dynamic pressures and resonance frequencies, which enables high acoustic frequency selectivity across a wide frequency range (145 to 9000 Hz) and thus makes noise-independent voice recognition possible. Our frequency-selective multichannel acoustic sensor array combined with an artificial neural network demonstrates over 95% accurate voice recognition under noises of different frequencies ranging from 100 to 8000 Hz. We demonstrate that our dual-mode sensor, with its linear response and frequency selectivity over a wide range of dynamic pressures, facilitates the differentiation of surface textures and the control of an avatar robot using both acoustic and mechanical inputs without interference from surrounding noise.
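The recognition pipeline described here, a frequency-selective channel array feeding an artificial neural network, can be sketched in software. In the minimal illustration below, each simulated channel contributes the band-limited energy near an assumed resonance frequency, and a small scikit-learn network classifies the resulting channel-energy vector. The channel frequencies, bandwidths, and synthetic signals are all placeholders, not the paper's hardware or data.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical resonance frequencies (Hz); the paper's array spans
# roughly 145-9000 Hz, but these exact values are assumptions.
CHANNELS_HZ = [145, 300, 600, 1200, 2400, 4800, 9000]
FS = 16000  # sampling rate (assumed)

def channel_energies(signal, fs=FS):
    """Band-limited energy near each channel's resonance via FFT binning."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    feats = []
    for fc in CHANNELS_HZ:
        band = (freqs > fc / 1.3) & (freqs < fc * 1.3)  # ~half-octave band
        feats.append(spectrum[band].sum())
    return np.log1p(feats)  # log-compress the per-channel energies

# Synthetic stand-in data: two "commands" dominated by different bands,
# plus a high-frequency tone playing the role of narrowband noise that
# the frequency-selective split confines to one channel.
rng = np.random.default_rng(0)
def make_example(label):
    t = np.arange(FS // 10) / FS
    tone = np.sin(2 * np.pi * (620 if label == 0 else 2500) * t)
    noise = 0.5 * np.sin(2 * np.pi * 7000 * t + rng.uniform(0, 6.28))
    return channel_energies(tone + noise + 0.1 * rng.standard_normal(t.size))

X = np.array([make_example(i % 2) for i in range(200)])
y = np.array([i % 2 for i in range(200)])

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
clf.fit(X, y)
print("train accuracy:", clf.score(X, y))
```

Because the interfering tone lands in a single channel, the classifier can weight it down while the command-bearing channels carry the decision, which is the intuition behind the noise-independence claim.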

2.
Proc ACM Int Conf Inf Knowl Manag ; 2021: 58-67, 2021 Oct.
Article in English | MEDLINE | ID: mdl-35173995

ABSTRACT

Video accessibility is crucial for blind screen-reader users, as online videos play an increasingly essential role in education, employment, and entertainment. While quite a few techniques and guidelines focus on creating accessible videos, there is a dearth of research that attempts to characterize the accessibility of existing videos. Therefore, in this paper, we define and investigate a diverse set of video- and audio-based accessibility features in an effort to characterize accessible and inaccessible videos. As ground truth for our investigation, we built a custom dataset of 600 videos, in which each video was assigned an accessibility score based on its number of wins in a Swiss-system tournament, where human annotators performed pairwise accessibility comparisons of videos. In contrast to existing accessibility research, where assessments are typically done by blind users, we recruited sighted users for our effort, since videos are a special case where sight may be required to judge whether any particular scene in a video is accessible. Subsequently, by examining the extent of association between the accessibility features and the accessibility scores, we could determine the features that significantly (positively or negatively) impact video accessibility and therefore serve as good indicators for assessing the accessibility of videos. Using the custom dataset, we also trained machine learning models that leverage our handcrafted features to either classify an arbitrary video as accessible/inaccessible or predict an accessibility score for it. Evaluation of our models yielded an F1 score of 0.675 for binary classification and a mean absolute error of 0.53 for score prediction, demonstrating their potential in video accessibility assessment while also illuminating their current limitations and the need for further research in this area.
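The scoring scheme, counting a video's wins across rounds of a Swiss-system tournament of pairwise judgments, is straightforward to sketch. In the toy version below, judge stands in for a human annotator's pairwise accessibility comparison; the pairing rule (sort by current wins, pair neighbors) and the round count are assumptions about a generic Swiss system, not the paper's exact protocol.

```python
import random

def swiss_scores(video_ids, judge, rounds=5, seed=0):
    """Score items by wins in a Swiss-system tournament.

    judge(a, b) -> id of the winner; in the paper's setting this would
    wrap a human pairwise accessibility comparison.
    """
    rng = random.Random(seed)
    wins = {v: 0 for v in video_ids}
    for _ in range(rounds):
        # Swiss pairing: sort by current score (ties broken randomly)
        # and pair adjacent items, so comparably-scored videos meet.
        order = sorted(video_ids, key=lambda v: (wins[v], rng.random()))
        for a, b in zip(order[::2], order[1::2]):
            wins[judge(a, b)] += 1
    return wins

# Toy usage: pretend each video has a latent accessibility level and
# the "annotator" always prefers the higher one.
latent = {f"vid{i}": i for i in range(8)}
scores = swiss_scores(list(latent), judge=lambda a, b: max(a, b, key=latent.get))
print(scores)  # win counts track the latent ordering
```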

3.
MobileHCI ; 2021, 2021 Sep.
Article in English | MEDLINE | ID: mdl-37547542

ABSTRACT

Gliding a finger on a touchscreen to reach a target, that is, touch exploration, is a common selection method among blind screen-reader users. This paper investigates their gliding behavior and presents a model of their motor performance. We discovered that the gliding trajectories of blind people mix two strategies: 1) ballistic movements with iterative corrections relying on non-visual feedback, and 2) multiple sub-movements separated by stops and concatenated until the target is reached. Based on this finding, we propose the mixture pointing model, which relates movement time to the distance and width of the target. The model outperforms extant models, improving R² from 0.65 for Fitts' law to 0.76, and is superior in cross-validation and by information criteria. The model advances the understanding of gliding-based target selection and serves as a tool for designing interface layouts for screen-reader-based touch exploration.
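The abstract does not give the mixture model's functional form, but the baseline it is compared against, Fitts' law fitted by least squares and scored with R² and an information criterion, can be sketched as follows. The synthetic distances, widths, and movement times are placeholders; real use would substitute logged touch-exploration trials.

```python
import numpy as np

def fitts_id(D, W):
    """Shannon formulation of the index of difficulty (bits)."""
    return np.log2(D / W + 1)

def fit_linear(X, y):
    """Least-squares fit of y = a + b*X; returns (a, b), R^2, and AIC."""
    A = np.column_stack([np.ones_like(X), X])
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    pred = a + b * X
    ss_res = np.sum((y - pred) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    r2 = 1 - ss_res / ss_tot
    n, k = len(y), 2
    aic = n * np.log(ss_res / n) + 2 * k  # Gaussian-likelihood AIC
    return (a, b), r2, aic

# Synthetic trial data (distances and widths in px, times in s).
rng = np.random.default_rng(1)
D = rng.uniform(100, 800, 60)
W = rng.uniform(20, 120, 60)
MT = 0.3 + 0.25 * fitts_id(D, W) + rng.normal(0, 0.05, 60)

(params, r2, aic) = fit_linear(fitts_id(D, W), MT)
print(f"Fitts fit: a={params[0]:.3f}, b={params[1]:.3f}, R2={r2:.3f}, AIC={aic:.1f}")
```

A candidate mixture model would be fitted the same way and preferred when it raises R² and lowers the AIC on the same trials, which is the comparison the abstract reports.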

4.
Article in English | MEDLINE | ID: mdl-34327519

ABSTRACT

Modeling touch pointing is essential to touchscreen interface development and research, as pointing is one of the most basic and common touch actions users perform on touchscreen devices. The Finger-Fitts law [4] revised the conventional Fitts' law into a 1D (one-dimensional) pointing model for finger touch by explicitly accounting for the fat-finger ambiguity (absolute error) problem, which the original Fitts' law leaves unaccounted for. We generalize the Finger-Fitts law to 2D touch pointing by solving two critical problems. First, we extend two of the most successful 2D Fitts' law forms to accommodate finger ambiguity. Second, we discovered that using nominal target width and height is a conceptually simple yet effective approach to defining amplitude and directional constraints for 2D touch pointing across different movement directions. The evaluation shows that our derived 2D Finger-Fitts law models are both principled and powerful. Specifically, they outperformed the existing 2D Fitts' laws as measured by the regression coefficient and by model selection information criteria (e.g., the Akaike Information Criterion), which account for the number of parameters. Finally, the 2D Finger-Fitts laws also advance our understanding of touch pointing and thereby serve as a basis for touch interface designs.
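For orientation, one published 1D form of the Finger-Fitts idea discounts the variance of the finger's absolute touch error from the observed endpoint spread before computing the index of difficulty. The sketch below implements that 1D form only, since the paper's 2D extensions (amplitude and directional constraints from nominal width and height) are not reproduced in the abstract; the numeric values in the usage lines are invented.

```python
import numpy as np

TWO_PI_E = 2 * np.pi * np.e

def fitts_id_effective(A, sigma):
    """Conventional Fitts ID with effective width W_e = sqrt(2*pi*e)*sigma."""
    return np.log2(A / (np.sqrt(TWO_PI_E) * sigma) + 1)

def finger_fitts_id(A, sigma, sigma_a):
    """1D Finger-Fitts ID: subtract the variance of the finger's absolute
    touch error (sigma_a) from the observed endpoint spread (sigma)
    before forming the effective width."""
    return np.log2(A / np.sqrt(TWO_PI_E * (sigma**2 - sigma_a**2)) + 1)

# Example: amplitude 300 px, endpoint spread 12 px, and an assumed
# absolute finger error of 8 px. Discounting the absolute error
# yields a higher, finger-corrected index of difficulty.
print(f"conventional ID:  {fitts_id_effective(300.0, 12.0):.2f} bits")
print(f"Finger-Fitts ID:  {finger_fitts_id(300.0, 12.0, 8.0):.2f} bits")
```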

5.
Article in English | MEDLINE | ID: mdl-33585840

ABSTRACT

Gesture typing, entering a word by gliding the finger sequentially from letter to letter, has been widely supported on smartphones for sighted users. However, this input paradigm is currently inaccessible to blind users: it is difficult to draw shape gestures on a virtual keyboard without access to key visuals. This paper describes the design of accessible gesture typing, bringing this input paradigm to blind users. To help blind users locate keys, the design incorporates the familiar screen-reader-supported touch exploration, which narrates the keys as the user drags a finger across the keyboard. The design allows users to seamlessly switch between exploration and gesture-typing modes by simply lifting the finger. Continuous, touch-exploration-like audio feedback during word-shape construction helps the user glide toward the key locations constituting the word. Exploration mode resumes once the word shape is completed. Distinct earcons distinguish gesture-typing mode from touch-exploration mode, avoiding unintended mix-ups. A user study with 14 blind people shows a 35% increase in typing speed, indicative of the promise of gesture typing for non-visual text entry.
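The interaction logic, narrating keys on drag in exploration mode, switching modes on finger lift, playing distinct earcons, and resuming exploration once a word shape completes, maps naturally onto a small state machine. The sketch below illustrates that logic only; the speech, earcon, and shape-decoding stubs are hypothetical stand-ins, not the paper's implementation.

```python
from enum import Enum, auto

# Stubs standing in for the screen reader and audio engine (assumptions).
def speak(text): print(f"[speech] {text}")
def play_earcon(name): print(f"[earcon] {name}")
def decode_word_shape(trace): return "<decoded-word>"  # shape-decoder stub

class Mode(Enum):
    EXPLORE = auto()   # dragging narrates the key under the finger
    GESTURE = auto()   # dragging traces a word shape

class AccessibleGestureKeyboard:
    def __init__(self, key_at):
        self.key_at = key_at        # (x, y) -> key label
        self.mode = Mode.EXPLORE
        self.trace = []

    def on_touch_move(self, x, y):
        key = self.key_at(x, y)
        if self.mode is Mode.EXPLORE:
            speak(key)              # touch exploration: narrate keys
        else:
            self.trace.append((x, y))
            speak(key)              # exploration-like feedback mid-gesture

    def on_touch_up(self):
        # Lifting the finger switches modes; a distinct earcon marks the
        # switch, and exploration resumes once a word shape is completed.
        if self.mode is Mode.EXPLORE:
            self.mode = Mode.GESTURE
            play_earcon("gesture-mode")
        else:
            speak(decode_word_shape(self.trace))
            self.trace.clear()
            self.mode = Mode.EXPLORE
            play_earcon("explore-mode")

# Toy usage over a one-row "keyboard" of five keys, 10 px each.
kb = AccessibleGestureKeyboard(lambda x, y: "qwert"[min(x // 10, 4)])
kb.on_touch_move(3, 0); kb.on_touch_up()       # explore, then enter gesture mode
kb.on_touch_move(3, 0); kb.on_touch_move(42, 0)
kb.on_touch_up()                               # word committed, back to explore
```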
