Results 1 - 2 of 2
1.
Sensors (Basel); 14(6): 9522-45, 2014 May 28.
Article in English | MEDLINE | ID: mdl-24878593

ABSTRACT

One of the main issues in the field of social robotics is to endow robots with the ability to direct attention to the people with whom they are interacting. Different approaches follow bio-inspired mechanisms, merging audio and visual cues to localize a person using multiple sensors. However, most of these fusion mechanisms have been used in fixed setups, such as video-conference rooms, and may therefore run into difficulties when constrained to the sensors with which a robot can realistically be equipped. Moreover, within the scope of interactive autonomous robots, the benefits of audio-visual attention mechanisms over audio-only or visual-only approaches have rarely been evaluated in real scenarios: most tests have been conducted in controlled environments, at short distances and/or with off-line performance measurements. To demonstrate the benefit of fusing sensory information with Bayesian inference for interactive robotics, this paper presents a system that localizes a person by processing visual and audio data. The performance of this system is then evaluated and compared against unimodal systems, taking their technical limitations into account. The experiments show the promise of the proposed approach for the proactive detection and tracking of speakers in a human-robot interaction framework.
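As a rough illustration of the kind of Bayesian audio-visual fusion the abstract describes, the following Python sketch fuses a noisy audio direction-of-arrival estimate with a more precise visual one over a discretized azimuth grid. The grid, the Gaussian likelihoods, and the noise parameters (sigma_audio, sigma_visual) are illustrative assumptions, not values or models taken from the paper.

```python
import numpy as np

# Discretized azimuth grid (degrees) over which the speaker's
# direction is estimated.
azimuths = np.linspace(-90.0, 90.0, 181)

def gaussian_likelihood(grid, measurement, sigma):
    """Likelihood of each candidate azimuth given one sensor reading."""
    return np.exp(-0.5 * ((grid - measurement) / sigma) ** 2)

def fuse(prior, audio_doa, visual_doa, sigma_audio=15.0, sigma_visual=5.0):
    """One Bayesian update: posterior is proportional to
    prior x audio likelihood x visual likelihood."""
    posterior = prior.copy()
    if audio_doa is not None:    # audio cue may be absent (silence)
        posterior *= gaussian_likelihood(azimuths, audio_doa, sigma_audio)
    if visual_doa is not None:   # visual cue may be absent (out of view)
        posterior *= gaussian_likelihood(azimuths, visual_doa, sigma_visual)
    return posterior / posterior.sum()  # normalize to a distribution

# Example: noisy audio estimate at 20 deg, face detection at 14 deg.
prior = np.full_like(azimuths, 1.0 / azimuths.size)  # uniform prior
posterior = fuse(prior, audio_doa=20.0, visual_doa=14.0)
print(f"MAP speaker azimuth: {azimuths[posterior.argmax()]:.1f} deg")
```

Because the visual likelihood is narrower (smaller sigma) than the audio one, the fused estimate is pulled toward the visual cue while the audio cue still disambiguates it, which is the qualitative behavior such fusion schemes aim for.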


Subject(s)
Auditory Perception; Head/physiology; Models, Biological; Robotics/instrumentation; Robotics/methods; Visual Perception; Algorithms; Humans; Image Processing, Computer-Assisted; Man-Machine Systems; Signal Processing, Computer-Assisted
2.
Cogn Process; 14(1): 13-8, 2013 Mar.
Article in English | MEDLINE | ID: mdl-23328946

ABSTRACT

In biological vision systems, attention mechanisms select the relevant information from the sensed field of view so that the complete scene can be analyzed through a sequence of rapid eye saccades. In recent years, efforts have been made to imitate this attention behavior in artificial vision systems, because it allows computational resources to be optimized by focusing them on a set of selected regions. In the framework of mobile robot navigation, this work proposes an artificial model in which attention is deployed at the level of objects (visual landmarks) and new processes estimate bottom-up and top-down (target-based) saliency maps. Bottom-up attention is implemented as a hierarchical process whose final result is the perceptual grouping of the image content. The hierarchical grouping uses a Combinatorial Pyramid that represents each level of the hierarchy by a combinatorial map, taking into account both image regions (faces in the map) and edges (arcs in the map). Top-down attention searches for previously detected landmarks, enabling their re-detection when the robot presumes it is revisiting a known location. Each landmark is described by a combinatorial submap, so this search is conducted through an error-tolerant submap isomorphism procedure.
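To make the idea of error-tolerant submap matching concrete, here is a deliberately tiny Python sketch that matches a landmark "submap" (modeled as a plain adjacency graph, not the paper's combinatorial maps) against a scene graph by brute force, accepting mappings that miss at most a fixed number of adjacencies. All graph data, names, and the brute-force search are illustrative assumptions; the paper's actual procedure operates on combinatorial maps and is far more efficient.

```python
import itertools

# Toy landmark "submap": nodes are region ids, edges are adjacencies.
landmark_nodes = [0, 1, 2]
landmark_edges = {(0, 1), (1, 2), (0, 2)}   # a triangle of regions

# Toy scene graph observed by the robot.
scene_nodes = [0, 1, 2, 3, 4]
scene_edges = {(0, 1), (1, 2), (2, 3), (3, 4), (0, 2)}

def edit_cost(mapping, pattern_edges, target_edges):
    """Count pattern adjacencies missing in the target under `mapping`."""
    cost = 0
    for a, b in pattern_edges:
        u, v = mapping[a], mapping[b]
        if (u, v) not in target_edges and (v, u) not in target_edges:
            cost += 1
    return cost

def best_match(pattern_nodes, pattern_edges, target_nodes, target_edges,
               max_errors=1):
    """Error-tolerant matching: try every injective node mapping, keep
    the lowest-cost one with at most `max_errors` missing adjacencies."""
    best = None
    for perm in itertools.permutations(target_nodes, len(pattern_nodes)):
        mapping = dict(zip(pattern_nodes, perm))
        cost = edit_cost(mapping, pattern_edges, target_edges)
        if cost <= max_errors and (best is None or cost < best[1]):
            best = (mapping, cost)
    return best

match = best_match(landmark_nodes, landmark_edges, scene_nodes, scene_edges)
print(match)  # e.g. ({0: 0, 1: 1, 2: 2}, 0) -> landmark re-detected
```

Allowing a nonzero edit cost is what makes the matching "error-tolerant": a landmark can still be re-detected when occlusion or segmentation noise removes one of its expected region adjacencies.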


Subject(s)
Attention/physiology; Robotics/instrumentation; Robotics/methods; Visual Perception/physiology; Computer Simulation; Models, Theoretical; Space Perception/physiology