Semantic learning from keyframe demonstration using object attribute constraints.
Sen, Busra; Elfring, Jos; Torta, Elena; van de Molengraft, René.
Affiliation
  • Sen B; Department of Mechanical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands.
  • Elfring J; Department of Mechanical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands.
  • Torta E; Department of Mechanical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands.
  • van de Molengraft R; Department of Mechanical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands.
Front Robot AI; 11: 1340334, 2024.
Article in En | MEDLINE | ID: mdl-39092214
ABSTRACT
Learning from demonstration is an approach that allows users to personalize a robot's tasks. While demonstrations often focus on conveying the robot's motion or task plans, they can also communicate user intentions through object attributes in manipulation tasks. For instance, users might want to teach a robot to sort fruits and vegetables into separate boxes, or to place cups next to plates of matching colors. This paper introduces a novel method that enables robots to learn the semantics of user demonstrations, with a particular emphasis on the relationships between object attributes. In our approach, users demonstrate essential task steps by manually guiding the robot through the necessary sequence of poses. We reduce the amount of data by utilizing only robot poses instead of trajectories, allowing us to focus on the task's goals, specifically the objects related to these goals. At each step, known as a keyframe, we record the end-effector pose, the object poses, and the object attributes. However, the number of keyframes saved per demonstration can vary due to the user's decisions. This variability can make the significance of keyframes inconsistent across demonstrations, complicating keyframe alignment and thus the generalization of the robot's motion and the user's intention. Our method addresses this issue by teaching the higher-level goals of the task using only the required keyframes and the relevant objects. It aims to capture the rationale behind object selection for a task and to generalize this reasoning to environments containing previously unseen objects. We validate the proposed method on three manipulation tasks, each targeting a different object attribute constraint. In the reproduction phase, we demonstrate that even when the robot encounters previously unseen objects, it can generalize the user's intention and execute the task.
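To make the abstract's core idea concrete, the sketch below shows one plausible way to represent a keyframe (end-effector pose, object poses, object attributes) and to extract an attribute constraint shared across demonstrations, e.g. "the cup's color matches the plate's color." The data structures, names, and the intersection-based learner are illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class ObjectState:
    pose: tuple       # simplified (x, y, z) object position; assumption, not the paper's representation
    attributes: dict  # e.g. {"color": "red", "category": "cup"}

@dataclass
class Keyframe:
    ee_pose: tuple    # end-effector pose recorded at this demonstrated step
    objects: dict     # object name -> ObjectState at this keyframe

def learn_shared_attribute(demos, picked, target):
    """Hypothetical learner: keep only attribute keys whose values match
    between the picked object and the target object in EVERY keyframe,
    so the surviving keys encode the demonstrated constraint."""
    shared = None
    for kf in demos:
        a = kf.objects[picked].attributes
        b = kf.objects[target].attributes
        matches = {k for k in a if k in b and a[k] == b[k]}
        shared = matches if shared is None else shared & matches
    return shared or set()
```

Given two demonstrations where a red cup is placed by a red plate and a blue cup by a blue plate, the intersection leaves only `color`, which a robot could then apply to previously unseen (e.g. green) objects.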
Full text: 1 Collection: 01-internacional Database: MEDLINE Language: En Journal: Front Robot AI Year: 2024 Document type: Article Affiliation country: Netherlands Country of publication: Switzerland