Results 1 - 20 of 33
1.
Front Robot AI ; 11: 1340334, 2024.
Article in English | MEDLINE | ID: mdl-39092214

ABSTRACT

Learning from demonstration is an approach that allows users to personalize a robot's tasks. While demonstrations often focus on conveying the robot's motion or task plans, they can also communicate user intentions through object attributes in manipulation tasks. For instance, users might want to teach a robot to sort fruits and vegetables into separate boxes or to place cups next to plates of matching colors. This paper introduces a novel method that enables robots to learn the semantics of user demonstrations, with a particular emphasis on the relationships between object attributes. In our approach, users demonstrate essential task steps by manually guiding the robot through the necessary sequence of poses. We reduce the amount of data by utilizing only robot poses instead of trajectories, allowing us to focus on the task's goals, specifically the objects related to these goals. At each step, known as a keyframe, we record the end-effector pose, object poses, and object attributes. However, the number of keyframes saved in each demonstration can vary due to the user's decisions. This variability in each demonstration can lead to inconsistencies in the significance of keyframes, complicating keyframe alignment to generalize the robot's motion and the user's intention. Our method addresses this issue by focusing on teaching the higher-level goals of the task using only the required keyframes and relevant objects. It aims to teach the rationale behind object selection for a task and generalize this reasoning to environments with previously unseen objects. We validate our proposed method by conducting three manipulation tasks aiming at different object attribute constraints. In the reproduction phase, we demonstrate that even when the robot encounters previously unseen objects, it can generalize the user's intention and execute the task.
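
For illustration, a minimal sketch (not the authors' implementation; all names, fields, and the pose convention are hypothetical) of how a keyframe could record end-effector pose, object poses, and object attributes, and how shared attributes between a picked object and its goal object expose the kind of relation the user intends to teach:

```python
from dataclasses import dataclass, field

@dataclass
class ObjectState:
    name: str
    pose: tuple        # (x, y, z, qx, qy, qz, qw), assumed convention
    attributes: dict   # e.g. {"color": "red", "category": "fruit"}

@dataclass
class Keyframe:
    ee_pose: tuple     # end-effector pose at this demonstration step
    objects: list = field(default_factory=list)

def shared_attributes(kf: Keyframe, picked: str, target: str) -> dict:
    """Return the attribute keys/values shared by the picked and target objects.

    A hypothetical stand-in for learning 'why' an object was chosen: attributes
    shared by the manipulated object and its goal object are candidate
    constraints that can later be matched against previously unseen objects.
    """
    objs = {o.name: o for o in kf.objects}
    a, b = objs[picked].attributes, objs[target].attributes
    return {k: v for k, v in a.items() if b.get(k) == v}

# Usage: a cup should go next to the plate of matching color.
kf = Keyframe(
    ee_pose=(0.4, 0.1, 0.2, 0, 0, 0, 1),
    objects=[
        ObjectState("cup_1", (0.4, 0.1, 0.0, 0, 0, 0, 1), {"color": "blue", "category": "cup"}),
        ObjectState("plate_2", (0.6, 0.1, 0.0, 0, 0, 0, 1), {"color": "blue", "category": "plate"}),
    ],
)
print(shared_attributes(kf, "cup_1", "plate_2"))   # {'color': 'blue'}
```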

2.
Sensors (Basel) ; 24(2)2024 Jan 08.
Article in English | MEDLINE | ID: mdl-38257473

ABSTRACT

Dexterous manipulation concerns the control of a robot hand to manipulate an object in a desired manner. While classical dexterous manipulation strategies are based on stable grasping (or force closure), many human-like manipulation tasks do not maintain grasp stability and often utilize the dynamics of the object rather than a closed-form kinematic relation between the object and the robotic hand. Such manipulation strategies are referred to as nonprehensile or dynamic dexterous manipulation in the literature. Nonprehensile manipulation often involves fast and agile movements such as throwing and flipping. Due to the complexity of such motions and the uncertainties associated with them, it has been challenging to realize nonprehensile manipulation tasks in a reliable way. In this paper, we propose a new control strategy to realize practical nonprehensile manipulation. First, we make explicit use of multiple modalities of sensory data for the design of the control law. Specifically, force data are employed for feedforward control, while position data are used for feedback control. Second, control signals (both feedback and feedforward) are obtained through multisensory learning from demonstration (LfD) experiments designed and performed for the specific nonprehensile manipulation tasks of concern. To prove the concept of the proposed control strategy, experimental tests were conducted for a dynamic spinning task using a sensory-rich, two-finger robotic hand. The control performance (i.e., the speed and accuracy of the spinning task) was also compared with that of classical dexterous manipulation based on force closure and finger gaiting.
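
As a hedged sketch of the general idea (toy 1-D signals, made-up gains and profiles; not the paper's controller), a demonstrated force profile can serve as a feedforward term while demonstrated positions drive PD feedback:

```python
import numpy as np

def control(t, x, x_dot, ff_force, pos_ref, vel_ref, kp=50.0, kd=5.0):
    """Combine a demonstrated feedforward force with PD position feedback.

    ff_force(t) : force profile extracted from LfD force data (feedforward)
    pos_ref(t)  : reference position from LfD position data (feedback target)
    All quantities are toy 1-D signals for illustration only.
    """
    u_ff = ff_force(t)                                          # open-loop term from force demos
    u_fb = kp * (pos_ref(t) - x) + kd * (vel_ref(t) - x_dot)    # closed-loop term from position demos
    return u_ff + u_fb

# Toy usage with made-up demonstration profiles.
ff = lambda t: 0.5 * np.sin(4 * t)      # demonstrated contact-force pattern
xr = lambda t: 0.1 * t                  # demonstrated fingertip position
vr = lambda t: 0.1
print(control(t=1.0, x=0.08, x_dot=0.0, ff_force=ff, pos_ref=xr, vel_ref=vr))
```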

3.
Sensors (Basel) ; 24(2)2024 Jan 18.
Article in English | MEDLINE | ID: mdl-38257710

ABSTRACT

Robot grasping constitutes an essential capability in fulfilling the complexities of advanced industrial operations. This field has been extensively investigated to address a range of practical applications. However, the generation of a stable grasp remains challenging, principally due to the constraints imposed by object geometries and the diverse objectives of the tasks. In this work, we propose a novel learning from demonstration-based grasp-planning framework. This framework is designed to extract crucial human grasp skills, namely the contact region and approach direction, from a single demonstration. Then, it formulates an optimization problem that integrates the extracted skills to generate a stable grasp. Distinct from conventional methods that rely on learning implicit synergies through human demonstration or on mapping the dissimilar kinematics between human hands and robot grippers, our approach focuses on learning the intuitive human intent that involves the potential contact regions and the grasping approach direction. Furthermore, our optimization formulation is capable of identifying the optimal grasp by minimizing the surface fitting error between the demonstrated contact regions on the object and the gripper finger surface and imposing a penalty for any misalignment between the demonstrated and the gripper's approach directions. A series of experiments is conducted to verify the effectiveness of the proposed algorithm through both simulations and real-world scenarios.
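
For illustration only, a toy version of such a cost (assuming point correspondences are already established; weights, sampling, and the naive random search are made up, not the paper's optimization) combines a surface-fitting error with an approach-direction misalignment penalty:

```python
import numpy as np

def grasp_cost(gripper_pts, contact_pts, approach_dir, demo_dir, penalty=5.0):
    """Toy grasp-quality cost: surface-fitting error plus approach-direction penalty.

    gripper_pts : (N, 3) sampled points on the gripper finger surface (candidate pose)
    contact_pts : (N, 3) demonstrated contact-region points on the object,
                  assumed already put into correspondence with gripper_pts
    approach_dir, demo_dir : unit 3-vectors (candidate vs demonstrated approach)
    """
    fit_err = np.mean(np.linalg.norm(gripper_pts - contact_pts, axis=1))
    misalign = 1.0 - float(np.dot(approach_dir, demo_dir))   # 0 when aligned
    return fit_err + penalty * misalign

# Pick the best of a few random candidate approach directions (illustration only).
rng = np.random.default_rng(0)
obj = rng.normal(size=(50, 3))
demo_dir = np.array([0.0, 0.0, -1.0])
candidates = [v / np.linalg.norm(v) for v in rng.normal(size=(20, 3))]
best = min(candidates,
           key=lambda d: grasp_cost(obj + 0.01 * rng.normal(size=obj.shape), obj, d, demo_dir))
print("best approach direction:", np.round(best, 2))
```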

4.
Sensors (Basel) ; 23(24)2023 Dec 12.
Article in English | MEDLINE | ID: mdl-38139627

ABSTRACT

Human-robot interaction is of the utmost importance as it enables seamless collaboration and communication between humans and robots, leading to enhanced productivity and efficiency. It involves gathering data from humans, transmitting the data to a robot for execution, and providing feedback to the human. To perform complex tasks, such as robotic grasping and manipulation, which require both human intelligence and robotic capabilities, effective interaction modes are required. To address this issue, we use a wearable glove to collect relevant data from a human demonstrator for improved human-robot interaction. Accelerometer, pressure, and flex sensors are embedded in the wearable glove to measure motion and force information for handling objects of different sizes, materials, and conditions. A machine learning algorithm is proposed to recognize grasp orientation and position, based on a multi-sensor fusion method.


Subject(s)
Robotics , Wearable Electronic Devices , Humans , Robotics/methods , Algorithms , Hand Strength , Machine Learning
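
A minimal sketch of feature-level sensor fusion for grasp classification (synthetic data, made-up feature counts and class labels, and a generic classifier rather than the authors' model):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400

# Hypothetical per-grasp feature vectors: accelerometer statistics,
# 5 pressure sensors, 5 flex sensors, simply concatenated (feature-level fusion).
accel = rng.normal(size=(n, 6))
pressure = rng.uniform(0, 1, size=(n, 5))
flex = rng.uniform(0, 1, size=(n, 5))
X = np.hstack([accel, pressure, flex])
y = rng.integers(0, 4, size=n)          # 4 made-up grasp orientation classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy on synthetic data:", clf.score(X_te, y_te))
```
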
5.
Sensors (Basel) ; 23(21)2023 Oct 25.
Article in English | MEDLINE | ID: mdl-37960421

ABSTRACT

In modern logistics, the box-in-box insertion task is representative of a wide range of packaging applications, and automating compliant object insertion is difficult due to challenges in modelling the object deformation during insertion. Learning from Demonstration (LfD) paradigms, which are frequently used in robotics to facilitate skill transfer from humans to robots, offer one solution for complex tasks that are difficult to model mathematically. In order to automate the box-in-box insertion task for packaging applications, this study makes use of LfD techniques. The proposed framework has three phases. In the first phase, a master-slave teleoperated robot system is used to haptically demonstrate the insertion task. Then, the learning phase involves identifying trends in the demonstrated trajectories using probabilistic methods, in this case, Gaussian Mixture Regression. In the third phase, the insertion task is generalised, and the robot adjusts to any object position using barycentric interpolation. This method is novel because it tackles tight insertion by taking advantage of the boxes' natural compliance, making it possible to complete the task even with a position-controlled robot. To determine whether the strategy is generalisable and repeatable, experimental validation was carried out.
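
For the learning phase, a minimal sketch of Gaussian Mixture Regression (a generic GMM fit on time-stamped positions, then conditioning on time; synthetic 1-D data, not the study's trajectories):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmr(gmm, t_query):
    """Gaussian Mixture Regression: condition a GMM fit on [t, x] on the time value t."""
    means, covs, weights = gmm.means_, gmm.covariances_, gmm.weights_
    preds, resp = [], []
    for k in range(gmm.n_components):
        mu_t, mu_x = means[k, 0], means[k, 1:]
        s_tt, s_xt = covs[k, 0, 0], covs[k, 1:, 0]
        preds.append(mu_x + s_xt / s_tt * (t_query - mu_t))      # conditional mean of component k
        # responsibility of component k for this time value
        resp.append(weights[k] * np.exp(-0.5 * (t_query - mu_t) ** 2 / s_tt) / np.sqrt(s_tt))
    resp = np.array(resp) / np.sum(resp)
    return np.sum(np.array(preds) * resp[:, None], axis=0)

# Synthetic "demonstrations" of a 1-D insertion depth over time.
rng = np.random.default_rng(0)
t = np.tile(np.linspace(0, 1, 100), 5)
x = np.sin(np.pi * t) + 0.02 * rng.normal(size=t.shape)
gmm = GaussianMixture(n_components=5, covariance_type="full", random_state=0).fit(
    np.column_stack([t, x]))

print(gmr(gmm, 0.5))   # expected insertion depth at mid-trajectory
```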

6.
Front Robot AI ; 10: 1193388, 2023.
Article in English | MEDLINE | ID: mdl-37779578

ABSTRACT

Introduction: Handwriting is a complex task that requires coordination of motor, sensory, cognitive, memory, and linguistic skills to master. The extent to which these processes are involved depends on the complexity of the handwriting task. Evaluating the difficulty of a handwriting task is a challenging problem since it relies on the subjective judgment of experts. Methods: In this paper, we propose a machine learning approach for evaluating the difficulty level of handwriting tasks. We propose two convolutional neural network (CNN) models for single- and multilabel classification, where the single-label model is trained on the mean of the experts' evaluations, while the multilabel model predicts the distribution of the experts' assessments. The models are trained with a dataset containing 117 spatio-temporal features from the stylus and hand kinematics, which are recorded for all letters of the Arabic alphabet. Results: While the single- and multilabel classification models achieve decent accuracy (96% and 88%, respectively) using all features, the hand kinematics features do not significantly influence the performance of the models. Discussion: The proposed models are capable of extracting meaningful features from the handwriting samples and predicting their difficulty levels accurately. The proposed approach has the potential to be used to personalize handwriting learning tools and provide automatic evaluation of the quality of handwriting.
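
A hedged sketch of the two-head idea (the input shape, layer sizes, and number of difficulty levels are assumptions; the abstract does not give the actual architectures):

```python
import torch
import torch.nn as nn

class DifficultyCNN(nn.Module):
    """Toy 1-D CNN over stylus/hand kinematic channels with two heads:
    a single-label head (mean expert rating, trained with cross-entropy) and
    a multilabel head (per-level probabilities). Shapes are assumptions."""

    def __init__(self, n_features=117, n_levels=5):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv1d(n_features, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.single_head = nn.Linear(64, n_levels)
        self.multi_head = nn.Linear(64, n_levels)

    def forward(self, x):                 # x: (batch, n_features, time)
        z = self.backbone(x)
        return self.single_head(z), torch.sigmoid(self.multi_head(z))

model = DifficultyCNN()
logits, level_probs = model(torch.randn(8, 117, 200))
print(logits.shape, level_probs.shape)   # torch.Size([8, 5]) torch.Size([8, 5])
```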

7.
Front Robot AI ; 10: 1152595, 2023.
Article in English | MEDLINE | ID: mdl-37501742

ABSTRACT

Introduction: In Interactive Task Learning (ITL), an agent learns a new task through natural interaction with a human instructor. Behavior Trees (BTs) offer a reactive, modular, and interpretable way of encoding task descriptions but have so far seen limited use in robotic ITL settings. Most existing approaches that learn a BT from human demonstrations require the user to specify each action step-by-step or do not allow a learned BT to be adapted without repeating the entire teaching process from scratch. Method: We propose a new framework to directly learn a BT from only a few human task demonstrations recorded as RGB-D video streams. We automatically extract continuous pre- and post-conditions for BT action nodes from visual features and use a Backchaining approach to build a reactive BT. In a user study on how non-experts provide and vary demonstrations, we identify three common failure cases of a BT learned from potentially imperfect initial human demonstrations. We offer a way to interactively resolve these failure cases by refining the existing BT through interaction with a user over a web interface. Specifically, failure cases or unknown states are detected automatically during the execution of a learned BT, and the initial BT is adjusted or extended according to the provided user input. Evaluation and results: We evaluate our approach on a robotic trash disposal task with 20 human participants and demonstrate that our method is capable of learning reactive BTs from only a few human demonstrations and interactively resolving possible failure cases at runtime.
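
To make the Backchaining idea concrete, here is a minimal sketch of BT nodes and one backchaining step (a toy world model and node names invented for illustration; not the paper's implementation):

```python
class Condition:
    def __init__(self, name, check):
        self.name, self.check = name, check
    def tick(self):
        return "SUCCESS" if self.check() else "FAILURE"

class Action:
    def __init__(self, name, run):
        self.name, self.run = name, run
    def tick(self):
        return self.run()                 # "SUCCESS", "FAILURE", or "RUNNING"

class Fallback:                           # tries children until one does not fail
    def __init__(self, children): self.children = children
    def tick(self):
        for c in self.children:
            s = c.tick()
            if s != "FAILURE":
                return s
        return "FAILURE"

class Sequence:                           # runs children until one does not succeed
    def __init__(self, children): self.children = children
    def tick(self):
        for c in self.children:
            s = c.tick()
            if s != "SUCCESS":
                return s
        return "SUCCESS"

def backchain(goal_condition, action, preconditions):
    """One backchaining step: either the goal already holds, or we satisfy the
    action's preconditions and then execute the action that achieves the goal."""
    return Fallback([goal_condition, Sequence(preconditions + [action])])

# Toy trash-disposal fragment: "trash in bin" or (holding trash -> drop it).
world = {"trash_in_bin": False, "holding_trash": True}
def drop():
    world["trash_in_bin"] = True
    return "SUCCESS"

tree = backchain(Condition("trash_in_bin", lambda: world["trash_in_bin"]),
                 Action("drop_trash", drop),
                 [Condition("holding_trash", lambda: world["holding_trash"])])
print(tree.tick(), world)
```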

8.
Biomimetics (Basel) ; 8(2)2023 Jun 10.
Article in English | MEDLINE | ID: mdl-37366843

ABSTRACT

Fish are capable of learning complex relations found in their surroundings, and harnessing their knowledge may help to improve the autonomy and adaptability of robots. Here, we propose a novel learning from demonstration framework to generate fish-inspired robot control programs with as little human intervention as possible. The framework consists of six core modules: (1) task demonstration, (2) fish tracking, (3) analysis of fish trajectories, (4) acquisition of robot training data, (5) generating a perception-action controller, and (6) performance evaluation. We first describe these modules and highlight the key challenges pertaining to each one. We then present an artificial neural network for automatic fish tracking. The network detected fish successfully in 85% of the frames, and in these frames, its average pose estimation error was less than 0.04 body lengths. We finally demonstrate how the framework works through a case study focusing on a cue-based navigation task. Two low-level perception-action controllers were generated through the framework. Their performance was measured using two-dimensional particle simulations and compared against two benchmark controllers, which were programmed manually by a researcher. The fish-inspired controllers had excellent performance when the robot was started from the initial conditions used in fish demonstrations (>96% success rate), outperforming the benchmark controllers by at least 3%. One of them also had an excellent generalisation performance when the robot was started from random initial conditions covering a wider range of starting positions and heading angles (>98% success rate), again outperforming the benchmark controllers by 12%. The positive results highlight the utility of the framework as a research tool to form biological hypotheses on how fish navigate in complex environments and design better robot controllers on the basis of biological findings.

9.
Int J Comput Assist Radiol Surg ; 18(5): 865-870, 2023 May.
Article in English | MEDLINE | ID: mdl-36484978

ABSTRACT

PURPOSE: The adjustment of medical devices in the operating room is currently done by the circulating nurses. Since digital interfaces for these devices are not foreseeable in the near future, and since legacy devices must also be supported, the robotic operation of medical devices remains an open topic. METHODS: We propose a teleoperated learning from demonstration process to acquire the high-level device functionality with given motion primitives. The proposed system is validated using an insufflator as an exemplary medical device. RESULTS: At the beginning of the proposed learning period, the teacher annotates the user interface to obtain the outline of the medical device. During the demonstrated interactions, the system observes the state changes of the device to generalize logical rules describing its internal functionality. The combination of the internal logic with the interface annotations enables the robotic system to adjust the medical device autonomously. To interact with the device, a robotic manipulator with a finger-like end-effector is used while relying on haptic feedback from torque sensors. CONCLUSION: The proposed approach is a first step towards teaching a robotic system to operate medical devices. We aim to validate the system in an extensive user study with clinical personnel. The logical rule generalization and the logical rule inference based on computer vision methods will be the focus of future work.


Subject(s)
Robotic Surgical Procedures , Robotics , Surgery, Computer-Assisted , Humans , Robotic Surgical Procedures/methods , Surgery, Computer-Assisted/methods , Feedback , Motion
10.
Sensors (Basel) ; 24(1)2023 Dec 19.
Article in English | MEDLINE | ID: mdl-38202883

ABSTRACT

A robot screwing skill learning framework based on teaching and learning is proposed to improve the generalization ability of robots across different scenarios and objects by drawing on the experience of a human operator. The framework comprises task-based teaching, learning, and summarization. A human teaches the robot the twisting operation and the operation's trajectories are gathered; obstacles are defined with potential functions; and the twisting skill is learned and reproduced using a dynamic movement primitive (DMP) and Gaussian mixture model-Gaussian mixture regression (GMM-GMR). The hole-finding and screwing stages of the process are modeled. To verify the effectiveness of the tightening skill learning model and its adaptability to different tightening scenarios, obstacle avoidance and tightening experiments were conducted on a robot tightening platform with bolts, plastic bottle caps, and faucets. The robot successfully avoided obstacles and completed the twisting tasks.
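
For orientation, a minimal 1-D discrete DMP sketch (a generic textbook formulation fitted to a synthetic "twist angle" profile; the paper's formulation additionally combines GMM-GMR and potential-field obstacle terms, which are omitted here):

```python
import numpy as np

class SimpleDMP:
    """Minimal 1-D discrete dynamic movement primitive (illustrative sketch only)."""

    def __init__(self, n_basis=20, alpha=25.0, beta=6.25, alpha_s=4.0):
        self.n, self.alpha, self.beta, self.alpha_s = n_basis, alpha, beta, alpha_s
        self.c = np.exp(-alpha_s * np.linspace(0, 1, n_basis))           # basis centres in s
        self.h = 1.0 / (np.abs(np.diff(self.c, append=self.c[-1] / 2.0)) ** 2 + 1e-6)
        self.w = np.zeros(n_basis)

    def _forcing(self, s):
        psi = np.exp(-self.h * (s - self.c) ** 2)
        return s * psi @ self.w / (psi.sum() + 1e-10)

    def fit(self, y, dt):
        """Fit forcing-term weights to a single demonstrated trajectory y(t)."""
        self.y0, self.g, self.T = y[0], y[-1], len(y) * dt
        yd, ydd = np.gradient(y, dt), np.gradient(np.gradient(y, dt), dt)
        s = np.exp(-self.alpha_s * np.arange(len(y)) * dt / self.T)
        f_target = ydd - self.alpha * (self.beta * (self.g - y) - yd)
        psi = np.exp(-self.h * (s[:, None] - self.c) ** 2)               # (T, n_basis)
        self.w = (psi * s[:, None] * f_target[:, None]).sum(0) / \
                 ((psi * s[:, None] ** 2).sum(0) + 1e-10)
        return self

    def rollout(self, dt):
        """Reproduce the motion by Euler-integrating the transformation system."""
        y, yd, s, out = self.y0, 0.0, 1.0, []
        for _ in range(int(self.T / dt)):
            ydd = self.alpha * (self.beta * (self.g - y) - yd) + self._forcing(s)
            yd += ydd * dt
            y += yd * dt
            s += -self.alpha_s * s / self.T * dt
            out.append(y)
        return np.array(out)

# Learn a toy "twist angle over time" profile and reproduce it.
dt = 0.01
demo = np.sin(np.linspace(0, np.pi / 2, 200)) * 1.5        # 0 -> 1.5 rad
dmp = SimpleDMP().fit(demo, dt)
print("final angle:", dmp.rollout(dt)[-1])                 # close to 1.5
```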

11.
Front Neurorobot ; 16: 932652, 2022.
Article in English | MEDLINE | ID: mdl-36262461

ABSTRACT

Generalizing prior experiences to complete new tasks is a challenging and unsolved problem in robotics. In this work, we explore a novel framework for control of complex systems called Primitive Imitation for Control (PICO). The approach combines ideas from imitation learning, task decomposition, and novel task sequencing to generalize from demonstrations to new behaviors. Demonstrations are automatically decomposed into existing or missing sub-behaviors which allows the framework to identify novel behaviors while not duplicating existing behaviors. Generalization to new tasks is achieved through dynamic blending of behavior primitives. We evaluated the approach using demonstrations from two different robotic platforms. The experimental results show that PICO is able to detect the presence of a novel behavior primitive and build the missing control policy.

12.
Front Robot AI ; 9: 1001955, 2022.
Article in English | MEDLINE | ID: mdl-36274910

ABSTRACT

Industrial robots and cobots are widely deployed in most industrial sectors. However, robotic programming still requires considerable time and effort for small batch sizes, and it demands specific expertise and special training, especially when various robotic platforms are involved. Existing low-code or no-code robotic programming solutions are expensive and limited. This work proposes a novel approach for no-code robotic programming for end-users with or without expertise in industrial robotics. The proposed method enables intuitive and fast robotic programming by utilizing a finite state machine with three layers of natural interaction based on hand gesture, finger gesture, and voice recognition. The implemented system combines intelligent computer vision and voice control capabilities. Using a vision system, the human can transfer spatial information such as 3D points, lines, and trajectories with hand and finger gestures. The voice recognition system assists the user in setting robot parameters and interacting with the robot's state machine. Furthermore, the proposed method is validated and compared with state-of-the-art "Hand-Guiding" cobot devices in real-world experiments. The results obtained are promising and indicate the suitability of this novel approach for real-world deployment in an industrial context.
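
As a hedged illustration of the finite-state-machine layer (state names, event names, and handlers are invented; not the implemented system), recognized gestures and voice commands can be dispatched as events that drive a simple transition table:

```python
# Hypothetical three-layer interaction: hand gestures switch modes, finger
# gestures add geometry, voice commands set parameters and trigger execution.
class ProgrammingFSM:
    def __init__(self):
        self.state = "IDLE"
        self.program = []          # accumulated robot instructions
        # (state, event) -> (next_state, handler)
        self.transitions = {
            ("IDLE", "hand_open"):            ("RECORD_POINT", None),
            ("RECORD_POINT", "finger_point"): ("RECORD_POINT", self.add_point),
            ("RECORD_POINT", "voice_speed"):  ("RECORD_POINT", self.set_speed),
            ("RECORD_POINT", "hand_fist"):    ("IDLE", None),
            ("IDLE", "voice_run"):            ("EXECUTE", self.run),
            ("EXECUTE", "done"):              ("IDLE", None),
        }

    def dispatch(self, event, payload=None):
        next_state, handler = self.transitions.get((self.state, event), (self.state, None))
        if handler:
            handler(payload)
        self.state = next_state

    def add_point(self, xyz):   self.program.append(("move_to", xyz))
    def set_speed(self, value): self.program.append(("set_speed", value))
    def run(self, _):           print("executing", self.program)

fsm = ProgrammingFSM()
fsm.dispatch("hand_open")
fsm.dispatch("finger_point", (0.5, 0.2, 0.3))   # 3D point from the vision system
fsm.dispatch("voice_speed", 0.25)
fsm.dispatch("hand_fist")
fsm.dispatch("voice_run")
```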

13.
Front Robot AI ; 9: 779194, 2022.
Article in English | MEDLINE | ID: mdl-35783024

ABSTRACT

We developed a novel framework for deep reinforcement learning (DRL) algorithms in task-constrained path generation problems of robotic manipulators leveraging human-demonstrated trajectories. The main contribution of this article is to design a reward function that can be used with generic reinforcement learning algorithms by utilizing Koopman operator theory to build a human intent model from the human-demonstrated trajectories. To ensure that the developed reward function produces the correct reward, the demonstrated trajectories are further used to create a trust domain within which the Koopman operator-based human intent prediction is considered. Otherwise, the proposed algorithm asks for human feedback to receive rewards. The designed reward function is incorporated into the deep Q-network (DQN) framework, which results in a modified DQN algorithm. The effectiveness of the proposed learning algorithm is demonstrated using a simulated robotic arm that learns paths for constrained end-effector motion while considering the safety of humans in the robot's surroundings.
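
A hedged, toy sketch of the two ingredients (a least-squares, EDMD-style Koopman estimate from demonstrations and a distance-based trust domain; the lifting functions, trust radius, and reward form are assumptions, not the paper's design):

```python
import numpy as np

def lift(x):
    """Hypothetical observable functions for the Koopman lifting (polynomial basis)."""
    x = np.atleast_2d(x)
    return np.hstack([x, x ** 2, np.ones((x.shape[0], 1))])

def fit_koopman(states, next_states):
    """Least-squares (EDMD-style) estimate of the Koopman matrix from demonstrations."""
    Phi, Phi_next = lift(states), lift(next_states)
    return np.linalg.lstsq(Phi, Phi_next, rcond=None)[0]     # Phi @ K ~= Phi_next

def reward(K, demo_states, s, s_next, trust_radius=0.2):
    """Negative prediction error of the human-intent model, valid only inside the
    trust domain around the demonstrations; outside it we defer to human feedback."""
    if np.min(np.linalg.norm(demo_states - s, axis=1)) > trust_radius:
        return None                        # out of trust domain: ask the human
    pred = lift(s) @ K
    return -float(np.linalg.norm(pred - lift(s_next)))

# Toy 2-D demonstration: a straight-line end-effector path.
demo = np.linspace([0, 0], [1, 1], 50)
K = fit_koopman(demo[:-1], demo[1:])
print(reward(K, demo[:-1], np.array([0.5, 0.5]), np.array([0.52, 0.52])))
print(reward(K, demo[:-1], np.array([3.0, 0.0]), np.array([3.1, 0.0])))   # None -> human feedback
```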

14.
Front Robot AI ; 9: 838059, 2022.
Article in English | MEDLINE | ID: mdl-35712549

ABSTRACT

One of the key challenges in implementing reinforcement learning methods for real-world robotic applications is the design of a suitable reward function. In field robotics, the absence of abundant datasets, limited training time, and high variation of environmental conditions complicate the task further. In this paper, we review reward learning techniques together with visual representations commonly used in current state-of-the-art works in robotics. We investigate a practical approach proposed in prior work to associate the reward with the stage of the progress in task completion based on visual observation. This approach was demonstrated in controlled laboratory conditions. We study its potential for a real-scale field application, autonomous pile loading, tested outdoors in three seasons: summer, autumn, and winter. In our framework, the cumulative reward combines the predictions about the process stage and the task completion (terminal stage). We use supervised classification methods to train prediction models and investigate the most common state-of-the-art visual representations. We use task-specific contrastive features for terminal stage prediction.
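
For illustration, a toy version of such a cumulative reward (stage names, thresholds, and the bonus value are made up; the classifiers themselves are outside this sketch):

```python
def staged_reward(stage_probs, terminal_prob, terminal_bonus=10.0):
    """Toy cumulative reward from two classifiers run on the camera image:
    the predicted stage of task progress plus a bonus when the terminal
    (task-completed) classifier fires. Thresholds and weights are made up."""
    num_stages = len(stage_probs)
    stage = max(range(num_stages), key=lambda i: stage_probs[i])   # most likely stage
    reward = stage / (num_stages - 1)                              # progress in [0, 1]
    if terminal_prob > 0.9:
        reward += terminal_bonus
    return reward

# E.g. four hypothetical stages of a pile-loading cycle: approach, fill, lift, dump.
print(staged_reward([0.1, 0.7, 0.15, 0.05], terminal_prob=0.2))   # 0.333...
print(staged_reward([0.0, 0.0, 0.05, 0.95], terminal_prob=0.97))  # 1.0 + bonus
```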

15.
Sensors (Basel) ; 22(7)2022 Mar 31.
Article in English | MEDLINE | ID: mdl-35408292

ABSTRACT

Robotic arms have been widely used in various industries and offer cost savings, high productivity, and efficiency. Although robotic arms are good at increasing efficiency in repetitive tasks, they still need to be re-programmed and optimized when new tasks are deployed, resulting in detrimental downtime and high cost. The objective of this paper is therefore to present a learning from demonstration (LfD) robotic system that provides a more intuitive way for robots to perform tasks efficiently, built on two major components: understanding through human demonstration and reproduction by the robot arm. To understand human demonstration, we propose a vision-based spatial-temporal action detection method that detects human actions, focusing on fine hand movements in real time to establish an action base. An object trajectory inductive method is then proposed to obtain a key path for objects manipulated by the human through multiple demonstrations. In robot reproduction, we integrate the sequence of actions in the action base and the key path derived by the object trajectory inductive method for motion planning to reproduce the task demonstrated by the human user. Because of this capability of learning from demonstration, the robot can reproduce the tasks that the human demonstrated, with the help of vision sensors, in unseen contexts.


Subject(s)
Robotics , Humans , Motion , Movement , Upper Extremity , Vision, Ocular
16.
Sensors (Basel) ; 22(8)2022 Apr 08.
Article in English | MEDLINE | ID: mdl-35458847

ABSTRACT

This study focuses on the feasibility of collaborative robot implementation in a medical microbiology laboratory by demonstrating fine tasks using kinesthetic teaching. Fine tasks require sub-millimetre positioning accuracy. Bacterial colony picking and identification was used as a case study. Colonies were picked from Petri dishes and identified using matrix-assisted laser desorption/ionization (MALDI) time-of-flight (TOF) mass spectrometry. We picked and identified 56 colonies (36 colonies of Gram-negative Acinetobacter baumannii and 20 colonies of Gram-positive Staphylococcus epidermidis). The overall identification error rate was around 11%, although it was significantly lower for Gram-positive bacteria (5%) than Gram-negative bacteria (13.9%). Based on the identification scores, it was concluded that the system works similarly well as a manual operator. It was determined that tasks were successfully demonstrated using kinesthetic teaching and generalized using dynamic movement primitives (DMP). Further improvement of the identification error rate is possible by choosing a different deposited sample treatment method (e.g., semi-extraction, wet deposition).


Subject(s)
Robotics , Bacteria/chemistry , Gram-Negative Bacteria , Gram-Positive Bacteria , Spectrometry, Mass, Matrix-Assisted Laser Desorption-Ionization/methods
17.
Front Robot AI ; 9: 772228, 2022.
Article in English | MEDLINE | ID: mdl-35368435

ABSTRACT

In this paper, we present a novel means of control design for probabilistic movement primitives (ProMPs). Our proposed approach makes use of control barrier functions and control Lyapunov functions defined by a ProMP distribution. Thus, a robot may move along a trajectory within the distribution while guaranteeing that the system state never leaves more than a desired distance from the distribution mean. The control employs feedback linearization to handle nonlinearities in the system dynamics and real-time quadratic programming to ensure a solution exists that satisfies all safety constraints while minimizing control effort. Furthermore, we highlight how the proposed method may allow a designer to emphasize certain safety objectives that are more important than the others. A series of simulations and experiments demonstrate the efficacy of our approach and show it can run in real time.
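
As a hedged, simplified illustration of the safety idea (a single control barrier function around the distribution mean with single-integrator dynamics, solved in closed form rather than with the paper's feedback linearization and quadratic program; all gains are made up):

```python
import numpy as np

def safety_filter(x, u_nom, mu, mu_dot, d_max=0.1, alpha=5.0):
    """Minimal CBF-style safety filter for single-integrator dynamics x_dot = u.

    Barrier h(x) = d_max^2 - ||x - mu||^2 keeps the state within d_max of the
    ProMP mean mu(t). With one affine constraint a.u >= b, the QP
    min ||u - u_nom||^2 has the closed-form projection used below.
    This is a toy stand-in, not the paper's controller.
    """
    e = x - mu
    h = d_max ** 2 - e @ e
    a = -2.0 * e                           # dh/dx
    b = a @ mu_dot - alpha * h             # require dh/dt + alpha*h >= 0
    if a @ u_nom >= b:                     # nominal input already safe
        return u_nom
    return u_nom + (b - a @ u_nom) / (a @ a) * a   # project onto the constraint boundary

x = np.array([0.09, 0.0])                  # near the edge of the allowed tube
u = safety_filter(x, u_nom=np.array([1.0, 0.0]),
                  mu=np.zeros(2), mu_dot=np.zeros(2))
print(u)                                   # nominal motion scaled back to stay near the mean
```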

18.
Front Neurorobot ; 16: 840240, 2022.
Article in English | MEDLINE | ID: mdl-35250529

ABSTRACT

In this article, an impedance control-based framework for human-robot composite layup skill transfer was developed, and the human-in-the-loop mechanism was investigated to achieve human-robot skill transfer. Although there are some works on human-robot skill transfer, it is still difficult to transfer manipulation skills to robots through teleoperation efficiently and intuitively. We developed an impedance-based control architecture for telemanipulation in task space for human-robot skill transfer through teleoperation. This framework not only achieves human-robot skill transfer but also provides a solution to human-robot collaboration through teleoperation. The variable impedance control system enables compliant interaction between the robot and the environment and smooth transitions between different stages. Dynamic movement primitive (DMP)-based learning from demonstration (LfD) is employed to model the human manipulation skills, and the learned skill can be generalized to different tasks and environments, such as different component shapes and orientations. The performance of the proposed approach is evaluated on a 7-DoF Franka Panda through robot-assisted composite layup on components of different shapes and orientations.
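
A minimal sketch of a variable (stage-dependent) impedance law in Cartesian space (toy gains, invented stage names, and a DMP-provided reference assumed; not the article's controller):

```python
import numpy as np

def impedance_force(x, xd, x_ref, xd_ref, stage):
    """Toy Cartesian impedance law with stage-dependent (variable) stiffness:
    compliant while pressing the composite ply, stiffer during free motion.
    Gains and stage names are illustrative only."""
    gains = {"free_motion": (800.0, 60.0), "layup_contact": (150.0, 25.0)}  # (K, D)
    k, d = gains[stage]
    return k * (x_ref - x) + d * (xd_ref - xd)

f = impedance_force(x=np.array([0.40, 0.00, 0.105]),
                    xd=np.zeros(3),
                    x_ref=np.array([0.40, 0.00, 0.10]),   # reference, e.g. from the learned DMP
                    xd_ref=np.zeros(3),
                    stage="layup_contact")
print(f)   # small restoring force normal to the surface
```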

19.
Front Robot AI ; 8: 726463, 2021.
Article in English | MEDLINE | ID: mdl-34970599

ABSTRACT

Many real-world applications require robots to use tools. However, robots lack the skills necessary to learn and perform many essential tool-use tasks. To this end, we present the TRansferrIng Skilled Tool Use Acquired Rapidly (TRI-STAR) framework for task-general robot tool use. TRI-STAR has three primary components: 1) the ability to learn and apply tool-use skills to a wide variety of tasks from a minimal number of training demonstrations, 2) the ability to generalize learned skills to other tools and manipulated objects, and 3) the ability to transfer learned skills to other robots. These capabilities are enabled by TRI-STAR's task-oriented approach, which identifies and leverages structural task knowledge through the use of our goal-based task taxonomy. We demonstrate this framework with seven tasks that impose distinct requirements on the usages of the tools, six of which were each performed on three physical robots with varying kinematic configurations. Our results demonstrate that TRI-STAR can learn effective tool-use skills from only 20 training demonstrations. In addition, our framework generalizes tool-use skills to morphologically distinct objects and transfers them to new platforms, with minor performance degradation.

20.
Front Robot AI ; 8: 767878, 2021.
Article in English | MEDLINE | ID: mdl-34805294

ABSTRACT

This paper presents a framework for programming in-contact tasks using learning by demonstration. The framework is demonstrated on an industrial gluing task, showing that a high-quality robot behavior can be programmed from a single demonstration. A unified controller structure is proposed for the demonstration and execution of in-contact tasks that eases the transition from an admittance controller for demonstration to parallel force/position control for execution. The proposed controller is adapted according to the geometry of the task constraints, which is estimated online during the demonstration. In addition, the controller gains are adapted to the human behavior during demonstration to improve the quality of the demonstration. The considered gluing task requires the robot to alternate between free motion and in-contact motion; hence, an approach for minimizing contact forces during the switching between the two situations is presented. We evaluate our proposed system in a series of experiments, where we show that we are able to estimate the geometry of a curved surface, that our adaptive controller for demonstration allows users to achieve higher accuracy in a shorter demonstration duration when compared to an off-the-shelf controller for teaching implemented on a collaborative robot, and that our execution controller is able to reduce impact forces and apply a constant process force while adapting to the surface geometry.
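
For intuition, a toy parallel force/position update (position control in the surface-tangent directions, PI force control along the surface normal; all gains, the fixed normal, and the set-point update scheme are assumptions, not the paper's controller):

```python
import numpy as np

def parallel_control(x, x_ref, f_meas, f_ref, n, f_err_int,
                     kp=400.0, kf=0.002, ki=0.02, dt=0.002):
    """Toy parallel force/position law: position control in the surface-tangent
    directions, force control (PI on the force error) along the surface normal n.
    The normal would come from the online-estimated task geometry; here it is given."""
    n = n / np.linalg.norm(n)
    P_t = np.eye(3) - np.outer(n, n)                 # tangential projector
    f_err = f_ref - f_meas @ n                       # normal force error (scalar)
    f_err_int += f_err * dt
    dx_pos = P_t @ (kp * (x_ref - x)) * dt           # position correction (tangential)
    dx_force = (kf * f_err + ki * f_err_int) * n     # force correction (normal)
    return x + dx_pos + dx_force, f_err_int          # next position set-point, updated integral

x_cmd, integ = parallel_control(
    x=np.array([0.5, 0.0, 0.20]), x_ref=np.array([0.5, 0.05, 0.20]),
    f_meas=np.array([0.0, 0.0, -2.0]), f_ref=5.0,
    n=np.array([0.0, 0.0, -1.0]), f_err_int=0.0)
print(x_cmd)   # moves along the glue path while pressing toward the target force
```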
