Results 1 - 4 of 4
1.
Sensors (Basel); 21(12), 2021 Jun 16.
Article in English | MEDLINE | ID: mdl-34208723

ABSTRACT

This paper presents a crawling mechanism based on a soft-tentacle gripper integrated into an unmanned aerial vehicle for pipe inspection in industrial environments. The objective is to allow the aerial robot to perch on and crawl along a pipe, minimizing energy consumption and enabling contact inspection. The paper introduces the design of the gripper's soft limbs and the internal mechanism that produces movement along the pipe. Several tests were carried out to verify the grasping capability on the pipe and the performance and reliability of the developed system. The paper details the complete development of the system using additive manufacturing techniques and reports the results of experiments performed in realistic environments.


Subject(s)
Robotics, Equipment Design, Hand Strength, Manufacturing and Industrial Facilities, Reproducibility of Results
2.
Sensors (Basel); 19(2), 2019 Jan 14.
Article in English | MEDLINE | ID: mdl-30646535

ABSTRACT

This paper presents a robotic system based on Unmanned Aerial Vehicles (UAVs) for bridge-inspection tasks that require physical contact between the aerial platform and the bridge surface, such as beam-deflection analysis or measuring crack depth with an ultrasonic sensor. The proposed system exploits the aerodynamic ceiling effect that arises when the multirotor approaches the bridge surface. The paper also describes how a UAV can serve as a flying sensor that touches the bridge to take measurements during contact inspection. A practical application is presented in which a bridge's beam deflection is measured with a laser tracking station. To validate the system, experiments measuring beam deflection on two different bridges are reported.
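
As a rough, hypothetical illustration of the deflection measurement this abstract describes (not the authors' code), the sketch below compares two height profiles of a beam, sampled at matching stations by a position-tracking instrument, and reports the maximum sag. All names and the sample values are invented for illustration.

```python
# Hedged sketch, not the authors' code: beam deflection under load, from two
# height profiles sampled at matching stations along the beam (e.g., positions
# logged by a laser tracking station). All names and values are invented.
import numpy as np

def max_deflection(z_unloaded: np.ndarray, z_loaded: np.ndarray) -> float:
    """Maximum sag between the unloaded and loaded profiles (input units)."""
    assert z_unloaded.shape == z_loaded.shape, "profiles must align station-by-station"
    return float(np.max(z_unloaded - z_loaded))

# Illustrative values only (millimetres), not measured data:
z0 = np.array([100.0, 100.1, 100.0, 99.9, 100.0])  # beam height before loading
z1 = np.array([100.0, 99.6, 99.2, 99.5, 99.9])     # beam height under load
print(f"max deflection: {max_deflection(z0, z1):.2f} mm")
```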

3.
Sensors (Basel); 17(1), 2017 Jan 07.
Article in English | MEDLINE | ID: mdl-28067851

ABSTRACT

This article presents a vision system for the real-time autonomous grasping of objects with Unmanned Aerial Vehicles (UAVs). Giving UAVs the capability to manipulate objects vastly extends their applications, as they can access places that are difficult or impossible for humans to reach. This work focuses on the grasping of known objects based on feature models. The system runs on an on-board computer on a UAV equipped with a stereo camera and a robotic arm. The algorithm learns a feature-based model in an offline stage; the model is then used online to detect the targeted object and estimate its position. This feature-based model proved robust to both occlusions and the presence of outliers. The use of stereo cameras improves the learning stage by providing 3D information and helps to filter features in the online stage. An experimental system combining a rotary-wing UAV and a small manipulator was built as a proof of concept. The robotic arm has three degrees of freedom and is lightweight because of the UAV's payload limitations. The system has been validated with different objects, both indoors and outdoors.
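
The offline/online split this abstract describes (learn a feature model once, then match it against live frames while rejecting outliers) can be sketched with standard tools. The snippet below is an assumed illustration using OpenCV's ORB features and a RANSAC-fitted homography, not the paper's actual implementation; all function names and thresholds are placeholders.

```python
# Hedged sketch, not the paper's implementation: offline feature-model
# learning and online detection with outlier rejection, using ORB features
# as a stand-in for whatever descriptor the authors actually used.
import cv2
import numpy as np

def learn_model(template_bgr):
    """Offline stage: extract keypoints and descriptors from a template image."""
    gray = cv2.cvtColor(template_bgr, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=1000)
    return orb.detectAndCompute(gray, None)  # (keypoints, descriptors)

def detect(model, frame_bgr, min_inliers=15):
    """Online stage: match the model against a camera frame. A RANSAC-fitted
    homography rejects outlier matches; returns the homography or None."""
    kp_m, des_m = model
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    kp_f, des_f = cv2.ORB_create(nfeatures=1000).detectAndCompute(gray, None)
    if des_m is None or des_f is None:
        return None
    pairs = cv2.BFMatcher(cv2.NORM_HAMMING).knnMatch(des_m, des_f, k=2)
    # Lowe's ratio test filters ambiguous matches before RANSAC.
    good = [p[0] for p in pairs if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    if len(good) < min_inliers:
        return None
    src = np.float32([kp_m[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_f[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H if mask is not None and int(mask.sum()) >= min_inliers else None
```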

4.
Sensors (Basel); 16(5), 2016 May 14.
Article in English | MEDLINE | ID: mdl-27187413

ABSTRACT

Giving unmanned aerial vehicles (UAVs) the ability to manipulate objects vastly extends their range of possible applications. This applies to rotary-wing UAVs in particular, whose hovering capability provides a suitable position for in-flight manipulation. Their manipulation skills must suit the primarily natural, partially known environments in which UAVs mostly operate. We have developed an on-board object-extraction method that computes the information necessary for autonomous grasping without requiring a model of the object's shape. A local map of the work zone is generated from depth information, and object candidates are extracted by detecting areas that differ from our floor model. Their image projections are then evaluated with a support vector machine (SVM) classifier to recognize specific objects or reject bad candidates. Our method builds a sparse point-cloud representation of each object and computes the object's centroid and dominant axis, which are then passed to a grasping module. The method assumes that objects are static and not clustered, that they have visual features, and that the floor shape of the work-zone area is known. We used low-cost cameras to create depth information, which produces noisy point clouds, but our method proved robust enough to process these data and return accurate results.
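
For the two quantities this pipeline hands to the grasping module, the centroid and dominant axis of a sparse object cloud, a minimal sketch follows. It assumes a flat floor at a known height (a simplification of the paper's floor model) and uses PCA for the dominant axis; all names and thresholds are hypothetical.

```python
# Hedged sketch, not the authors' code: centroid and dominant axis of an
# object's sparse point cloud after removing floor points. Assumes a flat
# floor at a known height, a simplification of the paper's floor model.
import numpy as np

def centroid_and_axis(points: np.ndarray, floor_z: float = 0.0, tol: float = 0.02):
    """points: (N, 3) array in metres. Returns (centroid, dominant_axis) or None."""
    obj = points[points[:, 2] > floor_z + tol]  # keep points above the floor
    if len(obj) < 10:                           # too few points to trust PCA
        return None
    centroid = obj.mean(axis=0)
    # Dominant axis = eigenvector of the covariance with the largest eigenvalue.
    eigvals, eigvecs = np.linalg.eigh(np.cov((obj - centroid).T))
    return centroid, eigvecs[:, np.argmax(eigvals)]
```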
