1.
Sensors (Basel) ; 24(18)2024 Sep 11.
Article in English | MEDLINE | ID: mdl-39338650

ABSTRACT

With the rapid advancement of intelligent manufacturing technologies, the operating environments of modern robotic arms are becoming increasingly complex. In addition to the diversity of objects, there is often a high degree of similarity between the foreground and the background. Although traditional RGB-based object-detection models have achieved remarkable success in many fields, they still face the challenge of effectively detecting targets with textures similar to the background. To address this issue, we introduce the WoodenCube dataset, which contains over 5000 images of 10 different types of blocks. All images are densely annotated with object-level categories, bounding boxes, and rotation angles. Additionally, a new evaluation metric, Cube-mAP, is proposed to more accurately assess the detection performance of cube-like objects. We have also developed a simple yet effective framework for WoodenCube, termed CS-SKNet, which captures strong texture features in the scene by enlarging the network's receptive field. The experimental results indicate that CS-SKNet achieves the best performance on the WoodenCube dataset, as evaluated by the Cube-mAP metric. We further evaluate CS-SKNet on the challenging DOTAv1.0 dataset, where its consistent improvement demonstrates strong generalization capability.
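
The abstract does not define Cube-mAP, but mAP variants for rotated detections are built on the intersection-over-union of oriented boxes rather than axis-aligned ones. A minimal sketch of that building block, assuming (cx, cy, w, h, angle) box parameters:

```python
import numpy as np
from shapely.geometry import Polygon

def rotated_box_polygon(cx, cy, w, h, angle_rad):
    """Corner polygon of a box centered at (cx, cy), rotated by angle_rad."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    corners = np.array([[-w / 2, -h / 2], [w / 2, -h / 2],
                        [w / 2, h / 2], [-w / 2, h / 2]])
    rotated = corners @ np.array([[c, -s], [s, c]]).T + [cx, cy]
    return Polygon(rotated)

def rotated_iou(box_a, box_b):
    """IoU of two (cx, cy, w, h, angle) oriented boxes."""
    pa, pb = rotated_box_polygon(*box_a), rotated_box_polygon(*box_b)
    inter = pa.intersection(pb).area
    return inter / (pa.area + pb.area - inter)
```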

2.
Sensors (Basel) ; 24(15)2024 Jul 26.
Article in English | MEDLINE | ID: mdl-39123908

ABSTRACT

In recent years, the integration of deep learning into robotic grasping algorithms has led to significant advancements in the field. However, one challenge faced by many existing deep learning-based grasping algorithms is their reliance on extensive training data, which makes them less effective when encountering unknown objects not present in the training dataset. This paper presents a simple and effective grasping algorithm that addresses this challenge by using a deep learning-based object detector for oriented detection of key features shared by most objects, namely straight edges and corners. By integrating these features with information obtained through image segmentation, the proposed algorithm can logically deduce a grasping pose without being limited by the size of the training dataset. Experimental results from over 400 trials of actual robotic grasping of unknown objects show that the proposed method achieves a grasp success rate of 98.25%, higher than existing methods.
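
The abstract only outlines the deduction step; as one illustration of the idea, a parallel-jaw grasp can be placed at the midpoint of a detected straight edge with the jaws closing perpendicular to it (function and argument names are hypothetical):

```python
import numpy as np

def grasp_from_edge(p1, p2):
    """Parallel-jaw grasp at the midpoint of a detected edge, with the
    jaws closing perpendicular to the edge direction."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    center = (p1 + p2) / 2.0
    edge_dir = p2 - p1
    edge_angle = np.arctan2(edge_dir[1], edge_dir[0])
    grasp_angle = edge_angle + np.pi / 2.0   # jaws close across the edge
    return center, grasp_angle

center, angle = grasp_from_edge((120, 80), (200, 95))
```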

3.
Sensors (Basel) ; 24(15)2024 Aug 05.
Article in English | MEDLINE | ID: mdl-39124127

ABSTRACT

Robots execute diverse load operations, including carrying, lifting, tilting, and moving objects, involving load changes or transfers. This dynamic process can shift interactive operations from stability to instability. In this paper, we respond to these dynamic changes by utilizing tactile images captured from tactile sensors during interactions, conducting a study on dynamic stability and instability in operations, and proposing a real-time dynamic state sensing network that integrates convolutional neural networks (CNNs) for spatial feature extraction with long short-term memory (LSTM) networks for capturing temporal information. We collect a dataset capturing the entire transition from stable to unstable states during interaction. Employing a sliding window, we sample consecutive frames from the collected dataset and feed them into the network to predict the robot's state changes. The network achieves both real-time temporal sequence prediction, at 31.84 ms per inference step, and an average classification accuracy of 98.90%. Our experiments demonstrate the network's robustness, maintaining high accuracy even with previously unseen objects.
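
A minimal sketch of such a CNN-LSTM state classifier over a sliding window of tactile frames; the layer sizes, window length, and two-class output are assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class TactileCNNLSTM(nn.Module):
    """Per-frame CNN features fed to an LSTM over a sliding window of
    tactile images, classifying the interaction as stable or unstable."""
    def __init__(self, n_classes=2, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())        # -> (B*T, 32)
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                  # x: (B, T, 1, H, W) frame window
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).view(b, t, -1)  # spatial features
        out, _ = self.lstm(feats)                         # temporal context
        return self.head(out[:, -1])       # predict state from the last step

logits = TactileCNNLSTM()(torch.randn(4, 8, 1, 32, 32))   # window of 8 frames
```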

4.
Micromachines (Basel) ; 15(5)2024 May 07.
Article in English | MEDLINE | ID: mdl-38793201

ABSTRACT

Intelligent robotics is currently supplanting traditional industrial applications and extending into business, service and care industries, and other fields. Stable robot grasping is a necessary prerequisite for all kinds of complex application scenarios. Herein, we propose a method for preparing an elastic porous material with adjustable conductivity, hardness, and elastic modulus. Based on this, we design a soft robot tactile fingertip that is gentle, highly sensitive, and has an adjustable range. It has excellent sensitivity (~1.089 kPa⁻¹), a fast response time (~35 ms), detects forces as small as 0.02 N, and remains stable over 500 cycles. The baseline capacitance of a sensor of the same size can be increased by a factor of 5-6, and the graphene adheres better to the polyurethane sponge and provides good shock absorption. In addition, we demonstrate the application of the tactile fingertip on a two-finger manipulator to achieve stable grasping. In this paper, we demonstrate the great potential of the soft robot tactile finger in the field of adaptive grasping for intelligent robots.
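
Capacitive tactile sensors of this kind are usually characterized by the relative capacitance change per unit pressure, S = (ΔC/C0)/P. A sketch of the inverse mapping using the reported ~1.089 kPa⁻¹ figure, assuming the response is linear over the range:

```python
SENSITIVITY_KPA = 1.089  # reported sensitivity S, in kPa^-1

def pressure_from_capacitance(c_measured, c_baseline):
    """Estimate pressure (kPa) from a capacitance reading, assuming the
    linear model delta_C / C0 = S * P holds across the sensing range."""
    delta_ratio = (c_measured - c_baseline) / c_baseline
    return delta_ratio / SENSITIVITY_KPA

# Example: a 10 pF baseline rising to 12.6 pF under load.
p_kpa = pressure_from_capacitance(c_measured=12.6e-12, c_baseline=10.0e-12)
```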

5.
Math Biosci Eng ; 21(2): 3448-3472, 2024 Feb 05.
Article in English | MEDLINE | ID: mdl-38454735

ABSTRACT

Dexterous grasping is essential for the fine manipulation tasks of intelligent robots; however, its application in stacking scenarios remains a challenge. In this study, we propose a two-phase approach to grasp detection for sequential robotic grasping, specifically for application in stacking scenarios. In the initial phase, a rotated-YOLOv3 (R-YOLOv3) model was designed to efficiently detect the category and position of the top-layer object, facilitating the detection of stacked objects. A stacked-scenario dataset with only the top-level objects annotated was then built for training and testing the R-YOLOv3 network. In the next phase, a G-ResNet50 model was developed to enhance grasping accuracy by finding the most suitable pose for grasping the uppermost object in various stacking scenarios. Ultimately, a robot was directed to successfully execute the task of sequentially grasping the stacked objects. The proposed methodology achieved an average grasp prediction success rate of 96.60% on the Cornell grasping dataset. In 280 real-world grasping experiments conducted in stacked scenarios, the robot achieved a maximum grasping success rate of 95.00% and an average grasping success rate of 83.93%. These findings demonstrate the efficacy and competitiveness of the proposed approach in executing grasping tasks within complex multi-object stacked environments.
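
The two-phase approach reduces to a simple loop: detect the top-layer object, predict a grasp pose for it, execute, and repeat until the stack is cleared. A schematic sketch in which the detector, pose network, and robot interfaces are hypothetical stand-ins for R-YOLOv3, G-ResNet50, and the robot controller:

```python
def grasp_stack(detector, pose_net, robot, max_objects=10):
    """Sequentially grasp stacked objects, topmost first."""
    for _ in range(max_objects):
        image = robot.capture_image()
        detections = detector.detect_top_layer(image)   # R-YOLOv3 stage
        if not detections:
            return                                      # stack cleared
        target = max(detections, key=lambda d: d["score"])
        pose = pose_net.predict_grasp(image, target["box"])  # G-ResNet50 stage
        robot.execute_grasp(pose)
```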

6.
Neural Netw ; 171: 332-342, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38113718

ABSTRACT

The 6-Degree-of-Freedom (6-DoF) robotic grasping is a fundamental task in robot manipulation, aimed at detecting graspable points and corresponding parameters in a 3D space, i.e., affordance learning; a robot then executes grasp actions with the detected affordances. Existing research works on affordance learning predominantly focus on learning local features directly for each grid in a voxel scene or each point in a point cloud scene, subsequently filtering the most promising candidate for execution. In contrast, cognitive models of grasping highlight the significance of global descriptors, such as size, shape, and orientation, in grasping. These global descriptors indicate a grasp path closely tied to actions. Inspired by this, we propose a novel bio-inspired neural network that explicitly incorporates global feature encoding. In particular, our method utilizes a Truncated Signed Distance Function (TSDF) as input, and employs the recently proposed Transformer model to encode the global features of a scene directly. With the effective global representation, we then use deconvolution modules to decode multiple local features to generate graspable candidates. In addition, to integrate global and local features, we propose using a skip-connection module to merge lower-layer global features with higher-layer local features. Our approach, when tested on a recently proposed pile and packed grasping dataset for a decluttering task, surpassed state-of-the-art local feature learning methods by approximately 5% in terms of success and declutter rates. We also evaluated its running time and generalization ability, further demonstrating its superiority. We deployed our model on a Franka Panda robot arm, with real-world results aligning well with simulation data. This underscores our approach's effectiveness for generalization and real-world applications.
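
A minimal sketch of the described encode-decode pattern: a 3D patch embedding of the TSDF volume, a Transformer encoder for the global scene features, transposed-convolution decoding of local features, and a skip connection merging the global encoding into the local decoder. All sizes are illustrative rather than the paper's:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalGraspNet(nn.Module):
    """TSDF volume -> Transformer global encoding -> deconv local decoding,
    with a skip connection merging global features into the local head."""
    def __init__(self, dim=64, grid=8):
        super().__init__()
        self.grid = grid
        self.embed = nn.Conv3d(1, dim, kernel_size=4, stride=4)  # 32^3 -> 8^3 tokens
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.up = nn.ConvTranspose3d(dim, dim // 2, kernel_size=4, stride=4)
        self.merge = nn.Conv3d(dim, dim // 2, kernel_size=1)     # global -> local skip
        self.head = nn.Conv3d(dim // 2, 1, kernel_size=1)        # per-voxel grasp quality

    def forward(self, tsdf):                        # tsdf: (B, 1, 32, 32, 32)
        g = self.embed(tsdf)                        # (B, dim, 8, 8, 8)
        b, d = g.shape[:2]
        tokens = self.encoder(g.flatten(2).transpose(1, 2))      # (B, 512, dim)
        g = tokens.transpose(1, 2).reshape(b, d, self.grid, self.grid, self.grid)
        local = self.up(g)                          # (B, dim/2, 32, 32, 32)
        skip = F.interpolate(self.merge(g), size=local.shape[2:])  # lower-layer global
        return self.head(local + skip)              # graspability per voxel

quality = GlobalGraspNet()(torch.randn(2, 1, 32, 32, 32))
```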


Subject(s)
Robotic Surgical Procedures , Robotics , Learning , Generalization, Psychological , Computer Simulation
7.
Front Comput Neurosci ; 17: 1268116, 2023.
Article in English | MEDLINE | ID: mdl-38077751

ABSTRACT

This paper proposes a neural network model that estimates the rotation angle of unknown objects from RGB images using an approach inspired by biological neural circuits. The proposed model embeds the understanding of rotational transformations into its architecture, in a way inspired by how rotation is represented in the ellipsoid body of Drosophila. To effectively capture the cyclic nature of rotation, the network's latent space is structured in a circular manner. The rotation operator acts as a shift in the circular latent space's units, establishing a direct correspondence between shifts in the latent space and angular rotations of the object in the world space. Our model accurately estimates the difference in rotation between two views of an object, even for categories of objects that it has never seen before. In addition, our model outperforms three state-of-the-art convolutional networks commonly used as the backbone for vision-based models in robotics.
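
The core idea, a world-space rotation corresponding to a circular shift of latent units, fits in a few lines; the number of latent units below is arbitrary:

```python
import numpy as np

N_UNITS = 36                      # circular latent units, 10 degrees per unit

def rotate_latent(z, angle_deg):
    """Apply a rotation as a circular shift of the latent units."""
    shift = int(round(angle_deg / (360 / N_UNITS)))
    return np.roll(z, shift)

def estimated_rotation(z_a, z_b):
    """Recover the angle between two views via circular cross-correlation."""
    spectrum = np.fft.fft(z_a) * np.conj(np.fft.fft(z_b))
    shift = np.argmax(np.fft.ifft(spectrum).real)
    return (shift * 360 / N_UNITS) % 360

z = np.random.rand(N_UNITS)
print(estimated_rotation(rotate_latent(z, 90), z))   # ~90
```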

8.
Sensors (Basel) ; 23(24)2023 Dec 12.
Article in English | MEDLINE | ID: mdl-38139627

ABSTRACT

Human-robot interaction is of the utmost importance as it enables seamless collaboration and communication between humans and robots, leading to enhanced productivity and efficiency. It involves gathering data from humans, transmitting the data to a robot for execution, and providing feedback to the human. To perform complex tasks, such as robotic grasping and manipulation, which require both human intelligence and robotic capabilities, effective interaction modes are required. To address this issue, we use a wearable glove to collect relevant data from a human demonstrator for improved human-robot interaction. Accelerometer, pressure, and flex sensors were embedded in the wearable glove to measure motion and force information for handling objects of different sizes, materials, and conditions. A machine learning algorithm is proposed to recognize grasp orientation and position based on multi-sensor fusion.
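
The abstract does not name the classifier; one plausible sketch of the fusion step concatenates summary statistics from the three sensor streams and trains an off-the-shelf classifier (the feature layout and class labels are hypothetical):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def fuse_features(accel, pressure, flex):
    """Concatenate simple statistics from each sensor stream into one vector."""
    stats = lambda s: [s.mean(), s.std(), s.min(), s.max()]
    return np.concatenate([stats(accel), stats(pressure), stats(flex)])

# Hypothetical training data: one fused vector per demonstrated grasp.
X = np.stack([fuse_features(np.random.randn(100, 3).ravel(),
                            np.random.rand(100, 5).ravel(),
                            np.random.rand(100, 10).ravel())
              for _ in range(200)])
y = np.random.randint(0, 4, size=200)          # grasp orientation classes

clf = RandomForestClassifier(n_estimators=100).fit(X, y)
```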


Subject(s)
Robotics , Wearable Electronic Devices , Humans , Robotics/methods , Algorithms , Hand Strength , Machine Learning
9.
Front Robot AI ; 10: 1176492, 2023.
Article in English | MEDLINE | ID: mdl-37830110

ABSTRACT

6D pose recognition has been a crucial factor in the success of robotic grasping, and recent deep learning based approaches have achieved remarkable results on benchmarks. However, their generalization capabilities in real-world applications remain unclear. To close this gap, we introduce 6IMPOSE, a novel framework for sim-to-real data generation and 6D pose estimation. 6IMPOSE consists of four modules: first, a data generation pipeline that employs the 3D software suite Blender to create synthetic RGBD image datasets with 6D pose annotations; second, an annotated RGBD dataset of five household objects generated using the proposed pipeline; third, a real-time two-stage 6D pose estimation approach that integrates the object detector YOLO-V4 with a streamlined, real-time version of the 6D pose estimation algorithm PVN3D, optimized for time-sensitive robotics applications; and fourth, a codebase designed to facilitate the integration of the vision system into a robotic grasping experiment. Our approach demonstrates the efficient generation of large amounts of photo-realistic RGBD images and the successful transfer of the trained inference model to robotic grasping experiments, achieving an overall success rate of 87% in grasping five different household objects from cluttered backgrounds under varying lighting conditions. This is made possible by fine-tuning the data generation and domain randomization techniques and optimizing the inference pipeline, overcoming the generalization and performance shortcomings of the original PVN3D algorithm. Finally, we make the code, synthetic dataset, and all the pre-trained models available on GitHub.

10.
Front Neurorobot ; 17: 1136882, 2023.
Article in English | MEDLINE | ID: mdl-37383402

ABSTRACT

Accurately estimating the 6DoF pose of objects during robot grasping is a common problem in robotics. However, the accuracy of the estimated pose can be compromised during or after grasping, when the gripper collides with other parts or occludes the view. Many approaches to improving pose estimation involve multi-view methods that capture RGB images from multiple cameras and fuse the data. While effective, these methods can be complex and costly to implement. In this paper, we present a Single-Camera Multi-View (SCMV) method that utilizes just one fixed monocular camera and the active motion of the robotic manipulator to capture multi-view RGB image sequences, achieving more accurate 6DoF pose estimation. We further create a new T-LESS-GRASP-MV dataset specifically for validating the robustness of our approach. Experiments show that the proposed approach outperforms many other public algorithms by a large margin. Quantitative experiments on a real robot manipulator demonstrate the high pose estimation accuracy of our method. Finally, the robustness of the proposed approach is demonstrated by successfully completing an assembly task on a real robot platform, achieving an assembly success rate of 80%.
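
One standard way to fuse the per-view estimates such a method collects is to average the translations and average the rotations via the eigenvector method of Markley et al., which is invariant to quaternion sign flips; whether SCMV fuses poses this way is not stated, so the sketch below is generic:

```python
import numpy as np

def average_poses(translations, quaternions):
    """Fuse multi-view pose estimates: mean translation plus the
    eigenvector-based quaternion average (robust to q vs. -q sign flips)."""
    t_mean = np.mean(translations, axis=0)
    M = np.zeros((4, 4))
    for q in quaternions:
        q = np.asarray(q, float) / np.linalg.norm(q)
        M += np.outer(q, q)               # sign-invariant accumulation
    eigvals, eigvecs = np.linalg.eigh(M)  # ascending eigenvalues
    q_mean = eigvecs[:, -1]               # eigenvector of the largest one
    return t_mean, q_mean / np.linalg.norm(q_mean)
```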

11.
Neural Netw ; 164: 419-427, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37187108

ABSTRACT

Although reinforcement learning (RL) has made numerous breakthroughs in recent years, reward-sparse environments remain challenging and require further exploration. Many studies improve agent performance by introducing state-action pairs experienced by an expert. However, such strategies depend heavily on the quality of the expert demonstration, which is rarely optimal in real-world environments, and they struggle to learn from sub-optimal demonstrations. In this paper, a self-imitation learning algorithm based on task space division is proposed to acquire efficient, high-quality demonstrations during training. To determine trajectory quality, well-designed criteria are defined in the task space for finding better demonstrations. The results show that the proposed algorithm improves the success rate of robot control and achieves a high mean Q value per step. The proposed framework shows great potential for learning from demonstrations generated by the agent's own policy in sparse-reward environments, and can be used wherever the task space can be divided.
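
The mechanism described, retaining the agent's own best trajectory per task-space region and imitating it, can be sketched as a small buffer; the region hashing and quality score below are placeholders for the paper's task-space criteria:

```python
class RegionalSelfImitationBuffer:
    """Keep the highest-quality self-generated trajectory per task-space region."""
    def __init__(self, region_of):
        self.region_of = region_of    # maps a trajectory to a region id
        self.best = {}                # region id -> (quality, trajectory)

    def add(self, trajectory, quality):
        region = self.region_of(trajectory)
        if region not in self.best or quality > self.best[region][0]:
            self.best[region] = (quality, trajectory)

    def demonstrations(self):
        """Trajectories the agent imitates alongside its RL updates."""
        return [traj for _, traj in self.best.values()]

# Placeholder criterion: bucket trajectories by the final state's coarse position.
buf = RegionalSelfImitationBuffer(
    region_of=lambda traj: tuple(int(x * 10) for x in traj[-1]["state"][:2]))
buf.add([{"state": [0.42, 0.13], "action": 0, "reward": 1.0}], quality=1.0)
```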


Subject(s)
Algorithms , Artificial Intelligence , Reinforcement, Psychology , Reward
12.
Neural Netw ; 159: 125-136, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36565690

ABSTRACT

Artificial neural networks (ANNs) have been widely adopted as general computational tools in computer science and many other engineering fields. Stochastic gradient descent (SGD) and adaptive methods such as Adam are popular robust optimization algorithms used to train ANNs. However, the effectiveness of these algorithms is limited because they calculate a search direction based on a first-order gradient. Although higher-order gradient methods such as Newton's method have been proposed, they require the Hessian matrix to be positive definite, and its inversion incurs a high computational cost. Therefore, in this paper, we propose a variable three-term conjugate gradient (VTTCG) method that approximates the Hessian matrix to enhance the search direction and uses a variable step size to achieve improved convergence stability. To evaluate the performance of the VTTCG method, we train different ANNs on benchmark image classification and generation datasets. We also conduct a similar experiment in which a grasp generation and selection convolutional neural network (GGS-CNN) is trained to perform intelligent robotic grasping. After considering a simulated environment, we also test the GGS-CNN with a physical grasping robot. The experimental results show that the performance of the VTTCG method is superior to that of four conventional methods, including SGD, Adam, AMSGrad, and AdaBelief.
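
The abstract does not give the VTTCG coefficients; for background, a generic three-term conjugate-gradient direction has the form d(k+1) = -g(k+1) + beta_k * d_k + theta_k * y_k with y_k = g(k+1) - g(k). A sketch on a toy quadratic, with Hestenes-Stiefel-style coefficients chosen as an assumption so that every direction is a descent direction:

```python
import numpy as np

def three_term_cg(f, grad, x, steps=100):
    """Generic three-term conjugate gradient: each direction combines the
    negative gradient, the previous direction, and the gradient difference
    y = g_new - g; a backtracking line search picks the step size."""
    g, d = grad(x), -grad(x)
    for _ in range(steps):
        alpha = 1.0
        while f(x + alpha * d) > f(x) + 1e-4 * alpha * (g @ d):
            alpha *= 0.5                      # Armijo backtracking
        x = x + alpha * d
        g_new = grad(x)
        y = g_new - g                         # gradient difference term
        denom = d @ y + 1e-12
        beta = (g_new @ y) / denom            # Hestenes-Stiefel-style beta
        theta = (g_new @ d) / denom
        d = -g_new + beta * d - theta * y     # ensures g.d = -|g|^2 (descent)
        g = g_new
    return x

A = np.diag([1.0, 10.0])
x_star = three_term_cg(lambda x: 0.5 * x @ A @ x, lambda x: A @ x,
                       x=np.array([5.0, 5.0]))   # converges toward the origin
```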


Subject(s)
Neural Networks, Computer , Robotics , Algorithms , Benchmarking
13.
Proc Natl Acad Sci U S A ; 119(42): e2209819119, 2022 10 18.
Article in English | MEDLINE | ID: mdl-36215466

ABSTRACT

Grasping, in both biological and engineered mechanisms, can be highly sensitive to the gripper and object morphology, as well as perception and motion planning. Here, we circumvent the need for feedback or precise planning by using an array of fluidically actuated slender hollow elastomeric filaments to actively entangle with objects that vary in geometric and topological complexity. The resulting stochastic interactions enable a unique soft and conformable grasping strategy across a range of target objects that vary in size, weight, and shape. We experimentally evaluate the grasping performance of our strategy and use a computational framework for the collective mechanics of flexible filaments in contact with complex objects to explain our findings. Overall, our study highlights how active collective entanglement of a filament array via an uncontrolled, spatially distributed scheme provides options for soft, adaptable grasping.


Subject(s)
Robotics , Hand Strength , Robotics/methods
14.
Front Robot AI ; 9: 873558, 2022.
Article in English | MEDLINE | ID: mdl-35712551

ABSTRACT

Grasping and dexterous manipulation remain fundamental challenges in robotics, above all when performed with multifingered robotic hands. Having simulation tools to design and test grasp and manipulation control strategies is paramount to obtaining functional robotic manipulation systems. In this paper, we present a framework for modeling and simulating grasps in the Simulink environment by connecting SynGrasp, a well-established MATLAB toolbox for grasp simulation and analysis, and Simscape Multibody, a Simulink library that allows the simulation of physical systems. The proposed approach can be used to simulate grasp dynamics in Simscape and then analyse the obtained grasps in SynGrasp. The devised functions and blocks can be easily customized to simulate different hands and objects.

15.
Front Neurorobot ; 16: 1082550, 2022.
Article in English | MEDLINE | ID: mdl-36704717

ABSTRACT

As robots begin to collaborate with humans in their daily work spaces, they need a deeper understanding of tasks that involve tools. In response to the problem of tool use in human-robot collaboration, we propose a modular system based on collaborative tasks. The first part of the system finds task-related operating areas: a multi-layer instance segmentation network locates the tools needed for the task and classifies each object according to the robot's state in the collaborative task. We thus generate state semantic regions with "leader-assistant" states. In the second part, to predict the optimal grasp and handover configuration, we propose a multi-scale grasping network (MGR-Net) based on the mask of the state semantic region; it adapts better to the change in receptive field caused by the state semantic region. Compared with the traditional method, our method has higher accuracy. The whole system also achieves good results on an untrained real-world tool dataset that we constructed. To further verify the effectiveness of our generated grasp representations, a robot platform based on Sawyer is used to demonstrate the high performance of our system.

16.
Sensors (Basel) ; 21(24)2021 Dec 14.
Article in English | MEDLINE | ID: mdl-34960434

ABSTRACT

A robot's ability to grasp moving objects depends on the availability of real-time sensor data in both the far-field and near-field of the gripper. This research investigates the potential contribution of tactile sensing to the task of grasping an object in motion. It was hypothesised that combining tactile sensor data with a reactive grasping strategy could improve its robustness to prediction errors, leading to better, more adaptive performance. Using a two-finger gripper, we evaluated the performance of two algorithms in grasping a ball rolling on a horizontal plane at a range of speeds and gripper contact points. The first approach used an adaptive grasping strategy initiated by tactile sensors in the fingers. The second initiated the grasp based on a prediction of the position of the object relative to the gripper, acting as a proxy for a vision-based object tracking system. It was found that integrating tactile sensor feedback resulted in higher observed grasp robustness, especially when the gripper-ball contact point was displaced from the centre of the gripper. These findings demonstrate the performance gains that can be attained by incorporating near-field sensor data into the grasp strategy, and motivate further research on how this strategy might be extended to different manipulator designs and more complex grasp scenarios.
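
The tactile-triggered strategy amounts to a simple reactive rule: close as soon as any finger registers contact rather than at a predicted interception time. A schematic loop, with the sensor and gripper interfaces as hypothetical placeholders:

```python
CONTACT_THRESHOLD = 0.05   # normalized tactile reading treated as contact

def reactive_grasp(gripper, tactile, timeout_s=2.0, dt=0.001):
    """Trigger the grasp from near-field tactile contact instead of a
    far-field position prediction (a real loop would sleep for dt)."""
    t = 0.0
    while t < timeout_s:
        readings = tactile.read()            # one value per finger
        if max(readings) > CONTACT_THRESHOLD:
            gripper.close()                  # object touched a finger: grasp now
            return True
        t += dt                              # poll at 1 kHz
    return False                             # ball never reached the fingers
```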


Subject(s)
Robotics , Touch Perception , Fingers , Hand Strength , Touch
17.
Front Robot AI ; 8: 652681, 2021.
Article in English | MEDLINE | ID: mdl-34222349

ABSTRACT

The increased complexity of the tasks that on-orbit robots have to undertake has led to an increased need for manipulation dexterity. Space robots can become more dexterous by adopting grasping and manipulation methodologies and algorithms from terrestrial robots. In this paper, we present a novel methodology for evaluating the stability of a robotic grasp that captures a piece of space debris, a spent rocket stage. We calculate the Intrinsic Stiffness Matrix of a 2-fingered grasp on the surface of an Apogee Kick Motor nozzle and create a stability metric that is a function of the local contact curvature, material properties, applied force, and target mass. We evaluate the efficacy of the stability metric in a simulation and two real robot experiments. The subject of all experiments is a chasing robot that needs to capture a target AKM and pull it back towards the chaser body. In the V-REP simulator, we evaluate four grasping points on three AKM models, over three pulling profiles, using three physics engines. We also use a real robotic testbed with the capability of emulating an approaching robot and a weightless AKM target to evaluate our method over 11 grasps and three pulling profiles. Finally, we perform a sensitivity analysis to demonstrate how a variation on the grasping parameters affects grasp stability. The results of all experiments suggest that the grasp can be stable under slow pulling profiles, with successful pulling for all targets. The presented work offers an alternative way of capturing orbital targets and a novel example of how terrestrial robotic grasping methodologies could be extended to orbital activities.

18.
Front Robot AI ; 8: 787187, 2021.
Article in English | MEDLINE | ID: mdl-35004865

ABSTRACT

Bio-inspirations from soft-bodied animals provide a rich design source for soft robots, yet limited literature has explored potential enhancements from rigid-bodied ones. This paper draws inspiration from the tooth profiles of the rigid claws of the Boston Lobster, aiming at an enhanced soft finger surface for underwater grasping through an iterative design process. Lobsters distinguish themselves from other marine animals with a pair of claws capable of dexterous object manipulation both on land and underwater. We propose a 3-stage design iteration process involving raw imitation, design parametric exploration, and bionic parametric exploitation of the original tooth profiles on the claws of the Boston Lobster. Eventually, 7 finger surface designs were generated and fabricated in soft silicone. We validated each design stage through vision-based robotic grasping attempts against selected objects from the Evolved Grasping Analysis Dataset (EGAD). Over 14,000 grasp attempts were accumulated on land (71.4%) and underwater (28.6%), and we selected the optimal design through an on-land experiment and further tested its capability underwater. As a result, we observed up to an 18.2% improvement in grasping success rate from the resultant bionic finger surface design, compared with fingers without the surface, and up to a 10.4% improvement compared with the validation design from the previous literature. These results are consistent with biological research dating back to 1911, showing the value of bionics. They indicate the capability and competence of the optimal bionic finger surface design in amphibious environments, which can contribute to future research on enhanced underwater grasping using soft robots.

19.
Sensors (Basel) ; 20(21)2020 Oct 31.
Article in English | MEDLINE | ID: mdl-33142905

ABSTRACT

To address the difficult problem of robot recognition and grasping of disorderly stacked wooden planks, a recognition and positioning method based on local image features and point pair geometric features is proposed here, and we define a local patch point pair feature. First, we used self-developed scanning equipment to collect images of wood boards, and a robot drove an RGB-D camera to collect images of disorderly stacked wooden planks. Image patches cut from these images were input to a convolutional autoencoder to train a local texture feature descriptor that is robust to changes in perspective. Then, small image patches around point pairs of the plank model are extracted and input into the trained encoder to obtain feature vectors, which are combined with the point pair geometric features to form a feature description code expressing the characteristics of the plank. After that, the robot drives the RGB-D camera to collect local image patches of point pairs in the area to be grasped in the stacked-plank scene, obtaining the feature description code of the planks to be grasped. Finally, through point pair feature matching, pose voting, and clustering, the pose of the plank to be grasped is determined. Robot grasping experiments show that both the recognition rate and the grasping success rate for planks are high, reaching 95.3% and 93.8%, respectively. Compared with the traditional point pair feature (PPF) method and other methods, the method presented here has obvious advantages and can be applied to stacked wooden plank grasping environments.
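
The geometric half of such a descriptor typically follows the classic point pair feature: for oriented points (p1, n1) and (p2, n2) with d = p2 - p1, F = (||d||, angle(n1, d), angle(n2, d), angle(n1, n2)). A sketch of that component, to which the learned patch descriptors would be concatenated:

```python
import numpy as np

def angle_between(a, b):
    """Angle in [0, pi] between two vectors."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def point_pair_feature(p1, n1, p2, n2):
    """Classic PPF: pair distance plus three angles against the normals."""
    d = np.asarray(p2, float) - np.asarray(p1, float)
    return np.array([np.linalg.norm(d),
                     angle_between(n1, d),
                     angle_between(n2, d),
                     angle_between(n1, n2)])
```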

20.
Sensors (Basel) ; 20(13)2020 Jul 02.
Article in English | MEDLINE | ID: mdl-32630755

ABSTRACT

As intelligent robots find more applications, the objects they must handle become more varied. However, handling unfamiliar objects remains a challenge for robots. We review recent work on feature sensing and robotic grasping of objects with uncertain information. In particular, we focus on how a robot perceives the features of an object to reduce its uncertainty, and how a robot completes object grasping through learning-based approaches when traditional approaches fail. Uncertain information is classified into geometric information and physical information, and based on the type of uncertain information, objects are further classified into three categories: geometric-uncertain objects, physical-uncertain objects, and unknown objects. Approaches to feature sensing and robotic grasping of these objects are presented according to the characteristics of each type. Finally, we summarize the reviewed approaches for uncertain objects and highlight some interesting issues for future investigation. We find that object features such as material and compactness are difficult to sense, and that grasping approaches based on learning networks play a more important role as the degree of uncertainty about the task object increases.
