Results 1 - 3 of 3
1.
Front Robot AI ; 10: 1168694, 2023.
Article in English | MEDLINE | ID: mdl-37860633

ABSTRACT

Robotics applications that must execute complex tasks in real-world scenarios still face many challenges related to highly unstructured and dynamic environments. In domains such as emergency response and search and rescue, robots have to operate for prolonged periods, trading off computational performance against power autonomy and vice versa. There is thus a crucial need for robots capable of adapting to such settings while at the same time providing robustness and extended power autonomy. One approach to reconciling the conflicting demands of a computationally performant system and long power autonomy is cloud robotics, which can boost the computational capabilities of the robot while reducing its energy consumption by offloading computation to the cloud. Nevertheless, the communication constraints typical of field robotics, namely limited bandwidth, latency, and intermittent connectivity, make cloud-enabled robotics solutions challenging to deploy in real-world applications. In this context, we designed and realized the XBot2D software architecture, which provides a hybrid cloud manager capable of dynamically and seamlessly allocating robotics skills for distributed computation based on the current network conditions, the required latency, and the computational and energy resources of the robot in use. The proposed framework leverages the two dimensions, i.e., 2D (local and cloud), transparently to the user, providing support for Real-Time (RT) skill execution on the local robot as well as machine learning and AI resources in the cloud, with the possibility of automatically relocating skills based on the required performance and communication quality.
XBot2D implementation and its functionalities are presented and validated in realistic tasks involving the CENTAURO robot and the Amazon Web Service Elastic Computing Cloud (AWS EC2) infrastructure with different network conditions.
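The allocation logic described above can be illustrated with a small sketch. This is not the XBot2D API; all names (`Skill`, `Link`, `place_skill`) and the thresholds are invented for illustration, assuming the manager keeps real-time skills on board and offloads the rest only when the link meets the skill's latency and bandwidth needs.

```python
from dataclasses import dataclass

@dataclass
class Skill:
    name: str
    realtime: bool          # hard real-time skills must stay on the robot
    max_latency_ms: float   # latency budget the skill can tolerate

@dataclass
class Link:
    rtt_ms: float           # measured round-trip time to the cloud
    bandwidth_mbps: float   # available uplink bandwidth

def place_skill(skill: Skill, link: Link, min_bandwidth_mbps: float = 5.0) -> str:
    """Return 'local' or 'cloud' for a single skill."""
    if skill.realtime:
        return "local"      # RT execution always stays on the robot
    if link.rtt_ms > skill.max_latency_ms or link.bandwidth_mbps < min_bandwidth_mbps:
        return "local"      # link too slow: keep the computation on board
    return "cloud"          # offload to save on-board energy

# A vision skill with a 100 ms budget over a good link is offloaded;
# a joint-control loop is pinned to the robot.
print(place_skill(Skill("object_detection", False, 100.0), Link(30.0, 50.0)))  # cloud
print(place_skill(Skill("joint_control", True, 1.0), Link(30.0, 50.0)))        # local
```

Re-evaluating this decision whenever the measured link quality changes gives the dynamic relocation behavior the abstract describes.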

2.
Sensors (Basel) ; 23(18)2023 Sep 07.
Article in English | MEDLINE | ID: mdl-37765791

ABSTRACT

This manuscript introduces a mobile cobot equipped with a custom-designed high-payload arm called RELAX, combined with a novel unified multimodal interface that facilitates Human-Robot Collaboration (HRC) tasks requiring high interaction forces at a real-world scale. The proposed multimodal framework combines physical interaction, Ultra-Wide-Band (UWB) radio sensing, a Graphical User Interface (GUI), verbal control, and gesture interfaces, merging the benefits of these different modalities and allowing humans to accurately and efficiently command the RELAX mobile cobot and collaborate with it. The effectiveness of the multimodal interface is evaluated in scenarios where the operator guides RELAX to designated locations in the environment while avoiding obstacles and performing high-payload transportation tasks, again in a collaborative fashion. The results demonstrate that a human co-worker can productively complete complex missions and command the RELAX mobile cobot using the proposed multimodal interaction framework.


Subject(s)
Robotics, Humans, Culture, Gestures, Transportation
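A unified multimodal interface needs an arbitration rule when several modalities issue commands at once. The sketch below is purely illustrative (the paper does not specify its arbitration scheme): it assumes a fixed priority ordering in which physical interaction wins, since it implies the operator is in contact with the robot.

```python
# Assumed priority ordering, highest first; not taken from the paper.
PRIORITY = ["physical", "gesture", "verbal", "gui", "uwb"]

def arbitrate(commands: dict) -> tuple:
    """commands maps modality name -> command string; return the winning pair."""
    for modality in PRIORITY:
        if modality in commands:
            return modality, commands[modality]
    raise ValueError("no command received")

# With both a GUI goal and a spoken command pending, the verbal channel wins.
print(arbitrate({"gui": "goto A", "verbal": "stop"}))  # ('verbal', 'stop')
```

A real system would likely also fuse complementary inputs (e.g., a gesture disambiguating a verbal command) rather than purely prioritizing them.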
3.
Front Robot AI ; 8: 721001, 2021.
Article in English | MEDLINE | ID: mdl-34869611

ABSTRACT

The development of autonomous legged/wheeled robots able to navigate and execute tasks in unstructured environments is a well-known research challenge. In this work we introduce a methodology that permits a hybrid legged/wheeled platform to realize terrain-traversing functionalities that are adaptable and extendable, and that can be autonomously selected and regulated based on the geometry of the perceived ground and associated obstacles. The proposed methodology makes use of a set of terrain-traversing primitive behaviors, used to perform driving and stepping on, down, and over obstacles, which can be adapted based on the ground and obstacle geometry and dimensions. The terrain's geometrical properties are first obtained by a perception module, which uses point-cloud data from the LiDAR sensor to segment the terrain in front of the robot, identifying possible gaps or obstacles on the ground. Using these parameters, the most appropriate traversing behavior is selected and adapted in an autonomous manner. Traversing behaviors can also be serialized in different orders to synthesize more complex terrain-crossing plans over paths of diverse geometry. Furthermore, the proposed methodology is easily extendable: by incorporating additional primitive traversing behaviors into the robot mobility framework, more complex terrain-negotiation capabilities can be realized in an add-on fashion. The pipeline of the above methodology was initially implemented and validated in a Gazebo simulation environment. It was then ported to and verified on the CENTAURO robot, enabling the robot to successfully negotiate terrains of diverse geometry and size using the terrain-traversing primitives.
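The selection step above, mapping perceived geometry to a traversal primitive, can be sketched as a simple rule table. This is a hedged illustration, not the paper's selector: the function name and the numeric thresholds are invented, and in practice they would derive from the robot's kinematic limits.

```python
def select_primitive(obstacle_height_m: float, gap_width_m: float,
                     max_step_m: float = 0.25, max_gap_m: float = 0.40) -> str:
    """Map terrain geometry from the perception module to a primitive behavior."""
    if obstacle_height_m <= 0.02 and gap_width_m <= 0.02:
        return "drive"          # effectively flat ground: wheeled driving
    if 0.0 < gap_width_m <= max_gap_m:
        return "step_over"      # gap narrow enough to step across
    if 0.0 < obstacle_height_m <= max_step_m:
        return "step_on"        # obstacle low enough to climb
    return "replan"             # geometry exceeds all primitive limits

print(select_primitive(0.0, 0.0))    # drive
print(select_primitive(0.15, 0.0))   # step_on
print(select_primitive(0.0, 0.30))   # step_over
```

Serializing several such decisions along a path yields the more complex terrain-crossing plans the abstract mentions, and adding a new primitive only extends the rule table.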
