2.
Front Robot AI ; 10: 1028411, 2023.
Article in English | MEDLINE | ID: mdl-37090892

ABSTRACT

Human-robot collaboration with traditional industrial robots is a cardinal step towards agile manufacturing and re-manufacturing processes. These processes require constant human presence, which lowers operational efficiency under current industrial collision avoidance systems. This work proposes a novel local and global sensing framework built on a flexible sensor concept comprising a single 2D or 3D LiDAR, and explicitly models the occlusion caused by the robot body. It then extends this local-global sensing methodology with local (co-moving) 3D sensors mounted on the robot body: a local 3D camera faces the occlusion area that the robot body creates in front of the single global 3D LiDAR. Beyond the sensor concept, the work also proposes an efficient method to estimate the sensitivity and reactivity of the sensing and control sub-systems. The proposed methodologies are tested on a heavy-duty industrial robot equipped with a 3D LiDAR and a camera. The integrated local-global sensing methods permit high robot speeds, improving process efficiency while ensuring human safety and sensor flexibility.
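
The abstract does not detail the fusion logic, but a minimal sketch of the underlying idea is straightforward: merge the global LiDAR cloud with the local camera cloud that covers the LiDAR's occlusion zone, then scale robot speed by the minimum human-robot separation. All names and thresholds below are assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch: fuse global + local point clouds (same reference frame assumed)
# and derive a speed scale from the minimum separation distance.
import numpy as np

def min_separation(global_points: np.ndarray,
                   local_points: np.ndarray,
                   robot_position: np.ndarray) -> float:
    """Smallest distance (m) from the robot to any observed point."""
    merged = np.vstack([global_points, local_points])  # (N, 3), common frame
    return float(np.linalg.norm(merged - robot_position, axis=1).min())

def speed_scale(distance: float, d_stop: float = 0.5, d_full: float = 2.0) -> float:
    """Linear speed-and-separation scaling: 0 below d_stop, 1 above d_full.
    The 0.5 m / 2.0 m thresholds are placeholder values, not from the paper."""
    return float(np.clip((distance - d_stop) / (d_full - d_stop), 0.0, 1.0))
```

The local camera's points fill exactly the region the global LiDAR cannot see, so the merged cloud restores full workspace coverage before the distance check.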

3.
Front Robot AI ; 10: 1028329, 2023.
Article in English | MEDLINE | ID: mdl-36873582

ABSTRACT

Manual annotation for human action recognition with content semantics using 3D Point Clouds (3D-PC) in industrial environments consumes a lot of time and resources. This work aims to recognize, analyze, and model human actions in order to develop a framework for automatically extracting content semantics. The main contributions of this work are: 1. the design of a multi-layer structure of DNN classifiers to detect and extract humans and dynamic objects from 3D-PC data precisely; 2. empirical experiments with over 10 subjects to collect datasets of human actions and activities in an industrial setting; 3. the development of an intuitive GUI to verify human actions and their interaction activities with the environment; 4. the design and implementation of a methodology for automatic sequence matching of human actions in 3D-PC. All these procedures are merged in the proposed framework and evaluated in one industrial use case with flexible patch sizes. Compared with standard methods, the new approach accelerates the annotation process by a factor of 5.2 through automation.
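
The abstract does not specify the sequence-matching algorithm of contribution 4. A common, minimal stand-in is an edit-distance-style alignment between the per-frame action labels emitted by the classifiers and a reference action sequence; the sketch below assumes hypothetical string labels per frame and is not the published method.

```python
# Hedged sketch: align a predicted per-frame action-label sequence against a
# reference sequence with edit-distance dynamic programming.
def alignment_cost(pred: list[str], ref: list[str]) -> int:
    n, m = len(pred), len(ref)
    cost = [[float("inf")] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            step = 0 if pred[i - 1] == ref[j - 1] else 1
            cost[i][j] = step + min(cost[i - 1][j],      # drop a predicted frame
                                    cost[i][j - 1],      # drop a reference frame
                                    cost[i - 1][j - 1])  # match the two frames
    return int(cost[n][m])

# e.g. alignment_cost(["walk", "walk", "grasp"], ["walk", "grasp"]) -> 1
```

A low alignment cost lets the framework accept the automatic annotation; high-cost segments would be the ones flagged for verification in the GUI.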

4.
Front Robot AI ; 9: 1001955, 2022.
Article in English | MEDLINE | ID: mdl-36274910

ABSTRACT

Industrial robots and cobots are widely deployed in most industrial sectors. However, robotic programming still demands considerable time and effort for small batch sizes, as well as specific expertise and special training, especially when several robotic platforms are involved. Existing low-code or no-code robotic programming solutions are expensive and scarce. This work proposes a novel approach to no-code robotic programming for end-users with little or no expertise in industrial robotics. The proposed method enables intuitive and fast robot programming through a finite state machine with three layers of natural interaction based on hand gestures, finger gestures, and voice recognition. The implemented system combines intelligent computer vision with voice control. Using the vision system, the user can transfer the spatial information of 3D points, lines, and trajectories via hand and finger gestures, while the voice recognition system assists the user in parametrizing robot parameters and interacting with the robot's state machine. The proposed method is validated against state-of-the-art "hand-guiding" cobot devices in real-world experiments. The results obtained are promising and indicate that this novel approach is suitable for real-world deployment in an industrial context.
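
The paper's state machine is not reproduced in the abstract; the sketch below is an assumed, simplified version showing how hand-gesture, finger-gesture, and voice events could drive a teaching workflow. All state and event names are hypothetical.

```python
# Hedged sketch: a finite state machine routing three channels of natural
# interaction (hand, finger, voice) through a no-code teaching workflow.
TRANSITIONS = {
    ("idle",          ("voice",  "start_teaching")): "teaching",
    ("teaching",      ("hand",   "point")):          "capture_point",
    ("capture_point", ("finger", "confirm")):        "teaching",
    ("teaching",      ("voice",  "set_speed")):      "parametrize",
    ("parametrize",   ("voice",  "done")):           "teaching",
    ("teaching",      ("voice",  "run_program")):    "executing",
    ("executing",     ("voice",  "stop")):           "idle",
}

def step(state: str, channel: str, event: str) -> str:
    """Return the next state; unrecognized events leave the state unchanged."""
    return TRANSITIONS.get((state, (channel, event)), state)

state = "idle"
for channel, event in [("voice", "start_teaching"), ("hand", "point"),
                       ("finger", "confirm"), ("voice", "run_program")]:
    state = step(state, channel, event)
print(state)  # -> "executing"
```

Keeping the transition table as data rather than code is a common design choice here: new gestures or voice commands extend the dictionary without touching the control logic.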

5.
Front Robot AI ; 9: 1002226, 2022.
Article in English | MEDLINE | ID: mdl-36263251

ABSTRACT

In the era of Industry 4.0 and agile manufacturing, conventional methodologies for risk assessment, risk reduction, and safety procedures may not fulfill end-user requirements, especially for SMEs with their product diversity and frequently changing production lines and processes. This work proposes a novel approach for planning and implementing safe and flexible Human-Robot Interaction (HRI) workspaces using multilayer HRI operation modes. The collaborative operation modes are grouped into clusters and systematically categorized at various levels. In addition, this work proposes a safety-related finite-state machine that describes the transitions between these modes dynamically and correctly. The proposed approach is integrated into a new dynamic risk assessment tool as a promising step toward a new safety horizon in line with Industry 4.0.
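
The abstract names neither the modes nor the permitted transitions; as a rough illustration only, a safety-related finite-state machine over collaborative operation modes can be expressed as a whitelist where any unapproved request falls back to a protective stop. The mode names below are assumptions.

```python
# Hedged sketch: whitelist-style transitions between HRI operation modes;
# anything not explicitly approved degrades to a protective stop.
ALLOWED = {
    "full_speed":       {"speed_separation", "protective_stop"},
    "speed_separation": {"full_speed", "hand_guiding", "protective_stop"},
    "hand_guiding":     {"speed_separation", "protective_stop"},
    "protective_stop":  {"speed_separation"},
}

def transition(current: str, requested: str) -> str:
    """Grant the requested mode only if the transition is safety-approved."""
    return requested if requested in ALLOWED.get(current, set()) else "protective_stop"

# e.g. transition("full_speed", "hand_guiding") -> "protective_stop"
```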

6.
Front Robot AI ; 9: 1030668, 2022.
Article in English | MEDLINE | ID: mdl-36714803

ABSTRACT

Most motion planners generate trajectories as low-level control inputs, such as joint torques or interpolated joint angles, which cannot be deployed directly on most industrial robot control systems. Some industrial robot systems provide interfaces that execute planned trajectories through an additional control loop with low-level control inputs. However, there is a geometric and temporal deviation between the executed and the planned motions, caused by inaccurate estimates of the inaccessible robot dynamics and controller parameters during the planning phase. This deviation can lead to collisions or dangerous situations, especially in heavy-duty industrial robot applications where high-speed, long-distance motions are common. When deploying a planned robot motion, the actual motion must therefore be iteratively checked and adjusted to avoid collisions caused by this deviation, a process that costs significant time and engineering effort. State-of-the-art methods thus no longer meet the needs of today's agile manufacturing, where robotic systems should rapidly plan and deploy new motions for different tasks. We present a data-driven motion planning approach that uses a neural network to simultaneously learn high-level motion commands and robot dynamics from acquired realistic collision-free trajectories. The trained network generates trajectories in the form of high-level commands, such as Point-to-Point (PTP) and Linear (LIN) motion commands, which the robot control system can execute directly. Results from various experimental scenarios show that the proposed approach significantly reduces the geometric and temporal deviation between the executed and the planned motions, even without access to the "black box" parameters of the robot. Furthermore, the proposed approach can generate new collision-free trajectories up to 10 times faster than benchmark motion planners.
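
The network architecture is not given in the abstract; as a minimal sketch of the idea of emitting high-level commands rather than low-level setpoints, a policy could map the current joint state and Cartesian goal to a command type (PTP or LIN) plus a target pose. The architecture, dimensions, and heads below are assumptions, not the paper's model.

```python
# Hedged sketch: a policy network whose output is a directly executable
# high-level command (type + target pose) instead of low-level setpoints.
import torch
import torch.nn as nn

class CommandPolicy(nn.Module):
    def __init__(self, joint_dim: int = 6, pose_dim: int = 6, hidden: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(joint_dim + pose_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.cmd_head = nn.Linear(hidden, 2)          # logits: 0 = PTP, 1 = LIN
        self.pose_head = nn.Linear(hidden, pose_dim)  # target pose of the command

    def forward(self, joints: torch.Tensor, goal: torch.Tensor):
        h = self.backbone(torch.cat([joints, goal], dim=-1))
        return self.cmd_head(h), self.pose_head(h)

policy = CommandPolicy()
cmd_logits, target = policy(torch.zeros(1, 6), torch.zeros(1, 6))
command = "PTP" if cmd_logits.argmax(dim=-1).item() == 0 else "LIN"
```

Because the controller interprets PTP/LIN commands itself, its internal dynamics are absorbed into the training data, which is how the approach sidesteps the inaccessible "black box" parameters.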
