Results 1 - 3 of 3
1.
Sensors (Basel) ; 22(21), 2022 Oct 29.
Article in English | MEDLINE | ID: mdl-36366016

ABSTRACT

To produce believable, human-like conversational responses, an artificial entity, i.e., an embodied conversational agent, must express correlated speech (verbal) and gesture (non-verbal) responses in spoken social interaction. Most existing frameworks focus on intent planning and behavior planning, while realization is left to a limited set of static 3D representations of conversational expressions. Beyond functional and semantic synchrony between verbal and non-verbal signals, the final believability of the displayed expression is shaped by the physical realization of the non-verbal expressions. A major challenge for most conversational systems capable of reproducing gestures is the diversity of expressiveness. In this paper, we propose a method for capturing gestures automatically from videos and transforming them into 3D representations stored in the conversational agent's repository of motor skills. The main advantage of the proposed method is that it preserves the naturalness of the embodied conversational agent's gestures, which results in higher-quality human-computer interaction. The method is based on a Kanade-Lucas-Tomasi tracker, a Savitzky-Golay filter, a Denavit-Hartenberg-based kinematic model and the EVA framework. Furthermore, we designed an objective evaluation method based on cosine similarity, in place of a subjective evaluation of the synthesized movement; the proposed method achieved a 96% similarity.
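
As a rough illustration of the pipeline this abstract describes, the sketch below chains a Kanade-Lucas-Tomasi tracker (via OpenCV), Savitzky-Golay smoothing (via SciPy) and a cosine-similarity score between two motion trajectories. Function names, parameter values and the choice of Shi-Tomasi corners as initial features are assumptions for illustration, not the paper's actual implementation.

# Hedged sketch: KLT tracking -> Savitzky-Golay smoothing -> cosine similarity.
# All names and parameters below are illustrative, not from the paper's code.
import cv2
import numpy as np
from scipy.signal import savgol_filter

def track_keypoints(video_path):
    """Track KLT feature points across a video; returns (frames, points, 2)."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    if not ok:
        raise IOError("cannot read " + video_path)
    prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Shi-Tomasi corners as the initial KLT features (assumed choice).
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50,
                                  qualityLevel=0.01, minDistance=10)
    trajectory = [pts.reshape(-1, 2)]
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Lost points (status == 0) are kept for brevity; filter them in practice.
        pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
        trajectory.append(pts.reshape(-1, 2))
        prev_gray = gray
    cap.release()
    return np.stack(trajectory)

def smooth_trajectory(traj, window=11, order=3):
    """Savitzky-Golay filtering along the time axis removes tracking jitter."""
    return savgol_filter(traj, window_length=window, polyorder=order, axis=0)

def motion_similarity(a, b):
    """Cosine similarity between two equally shaped, flattened trajectories."""
    a, b = a.ravel(), b.ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

Mapping the smoothed 2D trajectories onto a Denavit-Hartenberg joint model, and into the EVA framework's motor-skill repository, would follow this step; that stage is specific to the paper and is not sketched here.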


Subject(s)
Gestures , Speech , Humans , Biomechanical Phenomena , Speech/physiology , Semantics , Motor Skills
2.
ISA Trans ; 58: 380-8, 2015 Sep.
Article in English | MEDLINE | ID: mdl-25956569

ABSTRACT

This paper presents the problems of implementing and adjusting (calibrating) the metrology engine embedded in NXP's EM773 series microcontroller. The metrology engine is used in a smart metering application to collect data about energy utilization and is controlled through metrology engine adjustment (calibration) parameters. The aim of this research is to develop a method that enables operators to find and verify the optimal parameters ensuring the best possible accuracy. Properly adjusted (calibrated) metrology engines can then serve as a basis for a variety of products used in smart and intelligent environments. This paper focuses on the problems encountered in the development, partial automation, implementation and verification of this method.
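
The core of the described calibration task, stripped of the EM773 specifics, is a parameter search: find the adjustment value that brings the metrology engine's readings closest to a trusted reference, then verify the resulting accuracy. The sketch below assumes a simple linear gain model and made-up readings; the real engine's registers and parameter set are not modeled here.

# Hedged sketch of gain calibration against a reference instrument.
# The linear gain model and the sample readings are assumptions.
import numpy as np

def calibrate_gain(raw, reference):
    """Least-squares gain so that gain * raw ~= reference."""
    return float(np.dot(raw, reference) / np.dot(raw, raw))

def verify_accuracy(raw, reference, gain, tolerance=0.005):
    """Check every calibrated reading against a +/-0.5% tolerance band."""
    rel_err = np.abs(gain * raw - reference) / reference
    return bool(np.all(rel_err <= tolerance)), float(rel_err.max())

raw = np.array([99.1, 201.8, 304.2, 398.9])   # uncalibrated meter readings
ref = np.array([101.0, 205.5, 310.0, 406.0])  # reference instrument readings
gain = calibrate_gain(raw, ref)
ok, worst = verify_accuracy(raw, ref, gain)
print("gain=%.4f within_tolerance=%s worst_error=%.2f%%" % (gain, ok, worst * 100))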

3.
J Acoust Soc Am ; 119(5 Pt 1): 3109-20, 2006 May.
Article in English | MEDLINE | ID: mdl-16708965

ABSTRACT

This paper presents a rule-based method for determining emotion-dependent features, which are defined from high-level features derived from statistical measurements of the prosodic parameters of speech. Emotion-dependent features are selected from the high-level features using extraction rules. The ratio of emotional-expression similarity between two speakers is defined by the number and values of the emotion-dependent features shared by the two speakers being compared. Emotional speech from the Interface databases was used to evaluate the proposed method, which analyzed speech from five male and four female speakers to find similarities and differences among individual speakers. The speakers are actors who interpreted six emotions in four different languages. The results show that all speakers share some universal signs with respect to certain emotion-dependent features of emotional expression. Further analysis revealed that almost all speakers used unique sets of emotion-dependent features, and each speaker used unique values for the defined emotion-dependent features. The comparison among speakers shows that expressed emotions can be analyzed according to two criteria: the defined set of emotion-dependent features, and the emotion-dependent feature values.
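
A minimal sketch of the rule-based idea follows: compute high-level statistics over prosodic contours, mark a feature as emotion-dependent when it deviates sufficiently from the same speaker's neutral speech, and score two speakers by the share of emotion-dependent features they have in common. The feature set, the 20% deviation rule and the ratio definition are illustrative assumptions, not the paper's exact extraction rules.

# Hedged sketch: high-level prosodic features -> rule-based selection ->
# similarity ratio between two speakers. Thresholds and names are assumptions.
import numpy as np

def high_level_features(f0, energy):
    """Statistical measurements over prosodic parameter contours."""
    return {
        "f0_mean": float(np.mean(f0)), "f0_std": float(np.std(f0)),
        "f0_range": float(np.ptp(f0)),
        "energy_mean": float(np.mean(energy)), "energy_std": float(np.std(energy)),
    }

def emotion_dependent(emotional, neutral, threshold=0.2):
    """Rule: keep features whose relative change from neutral exceeds threshold."""
    return {k: emotional[k] / neutral[k]  # value kept as relative change
            for k in emotional
            if abs(emotional[k] - neutral[k]) / abs(neutral[k]) > threshold}

def similarity_ratio(speaker_a, speaker_b):
    """Share of emotion-dependent features common to both speakers."""
    shared = set(speaker_a) & set(speaker_b)
    union = set(speaker_a) | set(speaker_b)
    return len(shared) / len(union) if union else 0.0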


Subject(s)
Emotions , Speech Perception , Verbal Behavior , Adult , Female , Humans , Male , Principal Component Analysis , Psychoacoustics , Speech Acoustics , Speech Intelligibility , Speech Production Measurement