Results 1 - 3 of 3
1.
J Acoust Soc Am ; 135(2): 742-53, 2014 Feb.
Article in English | MEDLINE | ID: mdl-25234883

ABSTRACT

Sound source localization using a two-microphone array is an active area of research, with considerable potential for use in video conferencing, mobile devices, and robotics. Based on the observed time-differences of arrival between sound signals, a probability distribution over the source locations is formed to estimate the actual source positions. However, such algorithms typically assume that the number of sound sources is known in advance. This paper extends the solution presented in Escolano et al. [J. Acoust. Soc. Am. 132(3), 1257-1260 (2012)], in which nested sampling explores a probability distribution of the source position using a Laplacian mixture model, allowing both the number and the positions of speech sources to be inferred. Different experimental setups and scenarios demonstrate the viability of the proposed method, which is compared with some of the most popular sampling methods, showing that nested sampling is an accurate tool for speech localization.
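The time-difference of arrival (TDOA) observation this abstract builds on can be sketched as follows. This is only the cross-correlation front end that such methods start from, not the paper's nested-sampling inference over a Laplacian mixture; the sample rate and delay are illustrative values, not taken from the paper.

```python
import numpy as np

fs = 16000                       # assumed sample rate (Hz), for illustration
rng = np.random.default_rng(0)

# Synthetic source signal; mic 2 hears a copy delayed by 5 samples.
delay = 5
s = rng.standard_normal(2048)
mic1 = s
mic2 = np.concatenate([np.zeros(delay), s[:-delay]])

# Full cross-correlation between the two channels; the lag of its
# peak is the TDOA estimate in samples.
corr = np.correlate(mic2, mic1, mode="full")
lags = np.arange(-len(s) + 1, len(s))
tdoa_samples = lags[np.argmax(corr)]
tdoa_seconds = tdoa_samples / fs

print(tdoa_samples)   # 5
```

A Bayesian method like the one described would treat a set of such TDOA observations as data and place a mixture prior over candidate source positions.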

2.
Sensors (Basel) ; 14(6): 9522-45, 2014 May 28.
Article in English | MEDLINE | ID: mdl-24878593

ABSTRACT

One of the main issues within the field of social robotics is to endow robots with the ability to direct attention to the people with whom they are interacting. Different approaches follow bio-inspired mechanisms, merging audio and visual cues to localize a person using multiple sensors. However, most of these fusion mechanisms have been used in fixed systems, such as those in video-conference rooms, and thus may run into difficulties when constrained to the sensors with which a robot can be equipped. Moreover, for interactive autonomous robots, the benefits of audio-visual attention mechanisms over audio-only or visual-only approaches have rarely been evaluated in real scenarios: most tests have been conducted in controlled environments, at short distances, and/or with off-line performance measurements. With the goal of demonstrating the benefit of fusing sensory information with Bayesian inference for interactive robotics, this paper presents a system for localizing a person by processing visual and audio data. The performance of this system is evaluated against the technical limitations of the unimodal systems. The experiments show the promise of the proposed approach for the proactive detection and tracking of speakers in a human-robot interactive framework.
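The core idea behind fusing an audio cue and a visual cue with Bayesian inference can be sketched with a one-dimensional bearing estimate. Assuming Gaussian likelihoods, the fused posterior weights each modality's estimate by its inverse variance; the function name and the numbers below are illustrative assumptions, not details from the paper.

```python
def fuse_bearings(mu_audio, var_audio, mu_visual, var_visual):
    """Fuse two noisy bearing estimates (degrees) by the product of
    two Gaussian likelihoods: inverse-variance weighted mean."""
    w_a = 1.0 / var_audio
    w_v = 1.0 / var_visual
    mu = (w_a * mu_audio + w_v * mu_visual) / (w_a + w_v)
    var = 1.0 / (w_a + w_v)
    return mu, var

# Audio localization is noisy (reverberation, diffuse noise);
# vision is sharper but limited to the camera's field of view.
mu, var = fuse_bearings(mu_audio=12.0, var_audio=16.0,
                        mu_visual=8.0, var_visual=4.0)
print(mu, var)   # fused estimate leans toward the more certain visual cue
```

The fused variance is always smaller than either input variance, which is the quantitative sense in which the bimodal system outperforms its unimodal parts when both cues are available.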


Subject(s)
Auditory Perception , Head/physiology , Models, Biological , Robotics/instrumentation , Robotics/methods , Visual Perception , Algorithms , Humans , Image Processing, Computer-Assisted , Man-Machine Systems , Signal Processing, Computer-Assisted
3.
J Acoust Soc Am ; 132(3): 1257-60, 2012 Sep.
Article in English | MEDLINE | ID: mdl-22978853

ABSTRACT

The localization of active speakers with microphone arrays is an active line of research of considerable interest in many areas of acoustics. Many source localization algorithms are based on computing the Generalized Cross-Correlation function between microphone pairs with phase transform weighting. Unfortunately, the performance of these methods is severely reduced when wall reflections and multiple sound sources are present in the acoustic environment. As a result, estimating the number of active sound sources and their actual directions becomes a challenging task. To tackle this problem effectively, a Bayesian inference framework is proposed. Based on a nested sampling algorithm, a mixture model and its parameters are estimated, indicating both the number of sources (model selection) and their angles of arrival (parameter estimation). A set of measured data demonstrates the accuracy of the proposed model.
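The Generalized Cross-Correlation with phase transform (GCC-PHAT) weighting named in this abstract can be sketched as follows. This is the standard textbook formulation of the front-end computation, not the paper's Bayesian mixture layer, and the sample rate and delay are illustrative assumptions.

```python
import numpy as np

def gcc_phat(x1, x2, fs):
    """Estimate the delay of x1 relative to x2 (seconds) via GCC-PHAT:
    the cross-power spectrum is whitened by its magnitude, keeping only
    phase, which sharpens the correlation peak under reverberation."""
    n = len(x1) + len(x2)                 # zero-pad to avoid circular wrap
    X1 = np.fft.rfft(x1, n=n)
    X2 = np.fft.rfft(x2, n=n)
    R = X1 * np.conj(X2)
    R /= np.abs(R) + 1e-12                # PHAT weighting
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2
    cc = np.concatenate([cc[-max_shift:], cc[:max_shift]])
    delay = np.argmax(np.abs(cc)) - max_shift
    return delay / fs

fs = 16000
rng = np.random.default_rng(1)
s = rng.standard_normal(4096)
delay = 7
x1 = np.concatenate([np.zeros(delay), s])[:len(s)]  # mic 1 hears later
x2 = s
print(gcc_phat(x1, x2, fs) * fs)    # peak near a lag of 7 samples
```

With a known microphone spacing, each such delay maps to an angle of arrival; the paper's contribution is inferring how many such peaks correspond to real sources, which plain peak-picking cannot decide under reflections.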


Subject(s)
Acoustics/instrumentation , Bayes Theorem , Models, Theoretical , Signal Processing, Computer-Assisted , Speech , Algorithms , Humans , Motion , Signal-To-Noise Ratio , Sound , Time Factors , Transducers