Results 1 - 6 of 6
1.
Front Psychol ; 13: 911000, 2022.
Article in English | MEDLINE | ID: mdl-36248472

ABSTRACT

Eating is a fundamental part of human life and is, more than anything, a social activity. A new field, known as Computational Commensality, has been created to computationally address various social aspects of food and eating. This paper illustrates a study on remote dining that we conducted online in May 2021. To better understand this phenomenon, known as Digital Commensality, we recorded 11 pairs of friends sharing a meal online through a videoconferencing app. In the videos, participants consume a plate of pasta while chatting with a friend or a family member. After the remote dinner, participants were asked to fill in the Digital Commensality questionnaire, a validated questionnaire assessing the effects of remote commensal experiences, and to provide their opinions on the shortcomings of currently available technologies. Besides presenting the study, the paper introduces the first Digital Commensality Data-set, containing videos, facial landmarks, and quantitative and qualitative responses. After surveying multimodal data-sets and corpora that we could exploit to understand commensal behavior, we comment on the feasibility of using remote meals as a source for building data-sets to investigate commensal behavior. Finally, we explore possible future research directions emerging from our results.
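As a companion to this abstract, the following is a minimal sketch of how one might pair per-participant facial-landmark files with questionnaire responses from such a data-set. The directory layout, file names, and column names (pair, participant) are illustrative assumptions, not the released Digital Commensality Data-set's actual structure.

```python
# Minimal sketch (hypothetical file layout, NOT the data-set's real structure):
# pairing per-participant facial-landmark files with questionnaire responses.
from pathlib import Path
import pandas as pd

root = Path("digital_commensality_dataset")      # assumed directory name

# Assumed layout: landmarks/<pair>_<participant>.csv and a responses.csv
# containing the Digital Commensality questionnaire answers.
responses = pd.read_csv(root / "responses.csv")

records = []
for lm_file in sorted((root / "landmarks").glob("*.csv")):
    pair_id, participant = lm_file.stem.split("_")
    landmarks = pd.read_csv(lm_file)              # one row per video frame
    records.append({"pair": pair_id,
                    "participant": participant,
                    "n_frames": len(landmarks)})

# Join landmark summaries with questionnaire data (assumed key columns).
summary = pd.DataFrame(records).merge(
    responses, on=["pair", "participant"], how="left")
print(summary.head())
```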

2.
IEEE Trans Haptics ; PP, 2022 Dec 19.
Article in English | MEDLINE | ID: mdl-37015607

ABSTRACT

We investigate the recognition of the affective states of a person performing an action with an object, by processing the object-sensed data. We focus on sequences of basic actions such as grasping and rotating, which are constituents of daily-life interactions. iCube, a 5 cm cube, was used to collect tactile and kinematics data consisting of tactile maps (without information on the pressure applied to the surface) and rotations. We conduct two studies: classification of (i) emotions and (ii) vitality forms. In both, the participants perform a semi-structured task composed of basic actions. For emotion recognition, 237 trials by 11 participants associated with anger, sadness, excitement, and gratitude were used to train models using 10 hand-crafted features. The classifier accuracy reaches up to 82.7%. Interestingly, the same classifier, when trained exclusively on the tactile data, performs on par with its counterpart modeled with all 10 features. For the second study, 1135 trials by 10 participants were used to classify two vitality forms. The best-performing model differentiated gentle actions from rude ones with an accuracy of 84.85%. The results also confirm that people touch objects differently when performing these basic actions with different affective states and attitudes.
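The classification setup described above (per-trial hand-crafted features mapped to discrete emotion labels) can be illustrated with a short sketch. The feature values, the choice of a random-forest model, and the cross-validation protocol below are assumptions for illustration only; the paper's 10 features and actual classifier are not reproduced here.

```python
# Minimal sketch (assumptions, not the authors' pipeline): training a
# classifier on hand-crafted features extracted from object-sensed
# tactile/kinematic data, one feature vector per trial.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Placeholder data: 237 trials, 10 hand-crafted features each,
# labeled with the four emotions used in the study.
X = rng.normal(size=(237, 10))
y = rng.choice(["anger", "sadness", "excitement", "gratitude"], size=237)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)         # 5-fold cross-validation
print(f"mean accuracy: {scores.mean():.3f}")
```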

3.
Front Psychol ; 11: 1111, 2020.
Article in English | MEDLINE | ID: mdl-32760305

ABSTRACT

Emotion, mood, and stress recognition (EMSR) has been studied in laboratory settings for decades. In particular, physiological signals are widely used to detect and classify affective states in lab conditions. However, physiological reactions to emotional stimuli have been found to differ between laboratory and natural settings. Thanks to recent technological progress (e.g., in wearables), creating EMSR systems for large numbers of consumers during their everyday activities is increasingly possible. Therefore, datasets created in the wild are needed to ensure the validity and the exploitability of EMSR models for real-life applications. In this paper, we initially present common techniques used in laboratory settings to induce emotions for the purpose of physiological dataset creation. Next, advantages and challenges of data collection in the wild are discussed. To assess the applicability of existing datasets to real-life applications, we propose a set of categories to guide and compare, at a glance, the different methodologies used by researchers to collect such data. For this purpose, we also introduce a visual tool called Graphical Assessment of Real-life Application-Focused Emotional Dataset (GARAFED). In the last part of the paper, we apply the proposed tool to compare existing physiological datasets for EMSR in the wild and to show possible improvements and future directions of research. We wish for this paper and GARAFED to be used as guidelines for researchers and developers who aim to collect affect-related data for real-life EMSR-based applications.

4.
Front Robot AI ; 6: 119, 2019.
Article in English | MEDLINE | ID: mdl-33501134

ABSTRACT

Food and eating are inherently social activities taking place, for example, around the dining table at home, in restaurants, or in public spaces. Enjoying eating with others, often referred to as "commensality," positively affects mealtime in terms of, among other factors, food intake, food choice, and food satisfaction. In this paper we discuss the concept of "Computational Commensality," that is, technology which computationally addresses various social aspects of food and eating. In the past few years, Human-Computer Interaction has started to address how interactive technologies can improve mealtimes. However, the main focus so far has been on improving the individual's experience, rather than considering the inherently social nature of food consumption. In this survey, we first present research from the field of social psychology on the social relevance of Food- and Eating-related Activities (F&EA). Then, we review existing computational models and technologies that can contribute, in the near future, to achieving Computational Commensality. We also discuss the related research challenges and indicate future applications of such new technology that can potentially improve F&EA from the commensality perspective.

5.
Front Hum Neurosci ; 8: 928, 2014.
Article in English | MEDLINE | ID: mdl-25477803

ABSTRACT

This study investigated which features of AVATAR laughter are perceived as threatening by individuals with a fear of being laughed at (gelotophobia) and by individuals without gelotophobia. Laughter samples were systematically varied (e.g., intensity, pitch, and energy of the voice; intensity of facial actions) in three modalities: animated facial expressions, synthesized auditory laughter vocalizations, and motion-capture-generated puppets displaying laughter body movements. In the online study, 123 adults completed the GELOPH<15> (Ruch and Proyer, 2008a,b) and rated randomly presented videos of the three modalities for how malicious, how friendly, and how real the laughter was (0 = not at all to 8 = extremely). Additionally, an open question asked which markers led to the perception of friendliness/maliciousness. The study identified features in all modalities of the laughter stimuli that were perceived as malicious in general, and some that were gelotophobia-specific. For facial expressions of AVATARS, medium-intensity laughs triggered the highest maliciousness ratings in gelotophobes. In the auditory stimuli, fundamental frequency modulations and variation in intensity were indicative of maliciousness. In the body, backward and forward movements and rocking vs. jerking movements distinguished the most malicious from the least malicious laugh. In the open answers, the shape and appearance of the curling lips made the expression seem malicious to non-gelotophobes, whereas movement around the eyes made the face appear friendly; this pattern was reversed for gelotophobes. Gelotophobia-savvy AVATARS should therefore display high-intensity laughter containing lip and eye movements, with fast, non-repetitive, voiced vocalizations that are variable and of short duration. They should not contain any features indicating down-regulation in the voice or body, or voluntary/cognitive modulation.

6.
Cogn Process ; 13 Suppl 2: 519-32, 2012 Oct.
Article in English | MEDLINE | ID: mdl-21989611

ABSTRACT

A smile may communicate different communicative intentions depending on subtle characteristics of the facial expression. In this article, we propose an algorithm to determine the morphological and dynamic characteristics of a virtual agent's smiles of amusement, politeness, and embarrassment. The algorithm was defined based on a corpus of virtual agent smiles constructed by users and analyzed with a decision-tree classification technique. An evaluation of the resulting smiles in different contexts enabled us to validate the proposed algorithm.
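The decision-tree analysis mentioned above can be illustrated with a small sketch. The feature names (lip-corner amplitude, duration, symmetry, etc.) and the synthetic corpus are assumptions for illustration; they are not the features or corpus from the paper.

```python
# Minimal sketch (assumptions, not the paper's algorithm): a decision tree
# mapping morphological/dynamic smile parameters to a smile type, with the
# learned rules printed in human-readable form.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
feature_names = ["lip_corner_amplitude", "mouth_opening", "cheek_raising",
                 "onset_duration", "total_duration", "symmetry"]

# Placeholder corpus: 300 user-constructed smiles labeled by intended type.
X = rng.uniform(size=(300, len(feature_names)))
y = rng.choice(["amusement", "politeness", "embarrassment"], size=300)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=feature_names))  # readable decision rules
```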


Subject(s)
Facial Expression, Smiling/psychology, Social Perception, User-Computer Interface, Adult, Algorithms, Communication, Female, Humans, Intention, Male