Results 1 - 13 of 13
1.
Sensors (Basel) ; 24(10)2024 May 09.
Article in English | MEDLINE | ID: mdl-38793868

ABSTRACT

This paper focuses on mission planning and cooperative navigation algorithms for multi-drone systems aimed at LiDAR-based mapping. It demonstrates how multi-UAV cooperation can be used to meet LiDAR data georeferencing accuracy requirements, as well as to improve data collection capabilities, e.g., increasing coverage per unit time and point cloud density. These goals are achieved by exploiting the CDGNSS/Vision paradigm and properly defining the formation geometry and the UAV trajectories. The paper provides analytical tools to estimate point density for different types of scanning LiDAR and to define attitude/pointing requirements. These tools are then used to support centralized cooperation-aware mission planning aimed at complete coverage of different target geometries. The validity of the proposed framework is demonstrated through numerical simulations of a formation of three vehicles tasked with a powerline inspection mission. The results show that cooperative navigation reduces angular and positioning estimation uncertainties, which results in a georeferencing error reduction of an order of magnitude, equal to 16.7 cm in the considered case.
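As a rough illustration of the kind of analytical point-density estimate the abstract refers to, a first-order formula for a nadir-looking, linearly moving scanner is sketched below (an assumption for illustration; the paper's tools additionally account for scan pattern, range, and attitude):

```python
def lidar_point_density(pulse_rate_hz: float, speed_ms: float, swath_width_m: float) -> float:
    """First-order density estimate for a linearly moving scanning LiDAR.

    Points per square metre = pulses emitted per second / ground area swept
    per second. Assumes uniform coverage of the swath (an idealization).
    """
    return pulse_rate_hz / (speed_ms * swath_width_m)
```

For instance, a 100 kHz scanner flown at 5 m/s over a 40 m swath yields 500 points/m²; flying several UAVs over adjacent swaths multiplies coverage per unit time at the same density.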

2.
Sensors (Basel) ; 21(11)2021 May 21.
Article in English | MEDLINE | ID: mdl-34064082

ABSTRACT

This paper describes a calibration technique aimed at combined estimation of onboard and external magnetic disturbances for small Unmanned Aerial Systems (UAS). In particular, the objective is to estimate the onboard horizontal bias components and the external magnetic declination, thus improving heading estimation accuracy. This result is important to support flight autonomy, even in environments characterized by significant magnetic disturbances. Moreover, more accurate attitude estimates generally benefit georeferencing and mapping applications. The approach exploits cooperation with one or more "deputy" UAVs and combines drone-to-drone carrier-phase differential GNSS and visual measurements to attain magnetic-independent attitude information. Specifically, visual and GNSS information is acquired at different heading angles, and bias estimation is modelled as a non-linear least squares problem solved by means of the Levenberg-Marquardt method. An analytical error budget is derived to predict the achievable accuracy. The method is then demonstrated in flight using two customized quadrotors. A pointing analysis based on ground and airborne control points shows that the calibrated heading estimate achieves an angular error below 1°, a substantial improvement over either the non-calibrated magnetic heading or the multi-sensor solution of the DJI onboard navigation filter, both of which yield angular errors of the order of several degrees.
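A minimal sketch of the bias-estimation step, assuming a unit-norm horizontal field model and reference headings supplied by the cooperative CDGNSS/Vision solution (the measurement model and variable names here are simplifications, not the paper's):

```python
import numpy as np
from scipy.optimize import least_squares

def heading_from_mag(mx, my, bx, by):
    # Heading implied by bias-corrected horizontal magnetometer components
    # (unit-norm field model, NED-style convention assumed).
    return np.arctan2(-(my - by), mx - bx)

def residuals(params, mx, my, psi_ref):
    bx, by, decl = params
    err = heading_from_mag(mx, my, bx, by) - decl - psi_ref
    return np.arctan2(np.sin(err), np.cos(err))  # wrap to (-pi, pi]

def calibrate(mx, my, psi_ref):
    # Levenberg-Marquardt solution of the non-linear least-squares problem:
    # estimate (bx, by, declination) from magnetometer samples taken at
    # several reference headings psi_ref.
    return least_squares(residuals, np.zeros(3), args=(mx, my, psi_ref),
                         method="lm").x
```

Acquiring samples over a wide spread of heading angles, as the flight procedure above prescribes, is what makes the biases and the declination separately observable.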

3.
Sensors (Basel) ; 21(10)2021 May 14.
Article in English | MEDLINE | ID: mdl-34069288

ABSTRACT

This paper discusses the exploitation of a cooperative navigation strategy for improved in-flight estimation of inertial sensor biases on board unmanned aerial vehicles. The proposed multi-vehicle technique is conceived for a "chief" Unmanned Aerial Vehicle (UAV) and relies on one or more deputy aircraft equipped with Global Navigation Satellite System (GNSS) antennas for differential positioning, which also act as features for visual tracking. By combining carrier-phase differential GNSS and visual estimates, it is possible to retrieve accurate inertial-independent attitude information, thus potentially enabling improved bias estimation. Camera and carrier-phase differential GNSS measurements are integrated within a 15-state extended Kalman filter. Exploiting a purpose-built numerical environment, the paper analyzes the performance of the cooperative approach for inertial bias estimation as a function of the number of deputies, formation geometry and distances, and absolute and relative dynamics. It is shown that two deputies allow bias estimation to be improved, while a single deputy can be effective if changes of relative geometry and dynamics are also considered. Experimental proofs of concept based on two multi-rotors flying in formation are presented and discussed. The proposed framework is applicable beyond the domain of small UAVs.
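The fusion step can be illustrated with a generic EKF measurement update (the paper's filter has 15 states and specific camera/CDGNSS measurement models; this sketch only shows the standard update equations such a filter builds on):

```python
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """One EKF measurement update: state x, covariance P, measurement z,
    nonlinear measurement model h(x), its Jacobian H, noise covariance R."""
    y = z - h(x)                      # innovation
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x = x + K @ y                     # state correction
    P = (np.eye(len(x)) - K @ H) @ P  # covariance correction
    return x, P
```

In the cooperative scheme, the vision/CDGNSS-derived attitude acts as an additional measurement z, which is what makes the bias states observable without magnetometer aid.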

4.
Sensors (Basel) ; 19(19)2019 Oct 07.
Article in English | MEDLINE | ID: mdl-31591368

ABSTRACT

The performance achievable by using Unmanned Aerial Vehicles (UAVs) for a large variety of civil and military applications, as well as the extent of applicable mission scenarios, can significantly benefit from the exploitation of formations of vehicles able to fly in a coordinated manner (swarms). In this respect, visual cameras represent a key instrument to enable coordination by giving each UAV the capability to visually monitor the other members of the formation. Hence, a related technological challenge is the development of robust solutions to detect and track cooperative targets through a sequence of frames. In this framework, this paper proposes an innovative approach to carry out this task based on deep learning. Specifically, the You Only Look Once (YOLO) object detection system is integrated within an original processing architecture in which the machine-vision algorithms are aided by navigation hints available thanks to the cooperative nature of the formation. An experimental flight test campaign, involving formations of two multirotor UAVs, is conducted to collect a database of images suitable to assess the performance of the proposed approach. Results demonstrate high accuracy and robustness against challenging conditions in terms of illumination, background, and target-range variability.
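One simple way navigation hints can aid the detector is gating: the cooperative relative-position estimate is projected to a predicted pixel, and only detections near it are considered. The sketch below illustrates this idea under assumed detection tuples and gate radius (not the paper's actual architecture):

```python
import math

def gate_detections(detections, predicted_px, gate_radius):
    """Keep the highest-confidence detection inside the navigation-predicted
    gate. detections: list of (cx, cy, score) box centres from the detector;
    predicted_px: pixel predicted from the cooperative navigation solution."""
    in_gate = [d for d in detections
               if math.dist(d[:2], predicted_px) <= gate_radius]
    return max(in_gate, key=lambda d: d[2], default=None)
```

Restricting attention to the gate suppresses false alarms from background clutter and cuts the search effort, which is the computational benefit the cooperative hints provide.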

5.
Sensors (Basel) ; 18(12)2018 Nov 29.
Article in English | MEDLINE | ID: mdl-30501114

ABSTRACT

This paper presents an algorithm for multi-UAV path planning in scenarios with heterogeneous Global Navigation Satellite System (GNSS) coverage. In these environments, cooperative strategies can be effectively exploited when flying in GNSS-challenging conditions, e.g., natural/urban canyons, while the different UAVs can fly as independent systems in the absence of navigation issues (i.e., open-sky conditions). These different flight environments are taken into account at the path planning level, yielding a distributed multi-UAV system that autonomously reconfigures itself based on mission needs. Path planning, formulated as a vehicle routing problem, aims at defining smooth and flyable polynomial trajectories, whose time of flight is estimated to guarantee coexistence of different UAVs in the same challenging area. The algorithm is tested in a simulation environment directly derived from a real-world 3D scenario, for a variable number of UAVs and waypoints. Its solution and computational cost are compared with optimal planning methods. Results show that the computational burden is almost unaffected by the number of UAVs and is compatible with near-real-time implementation even for a relatively large number of waypoints. The provided solution takes full advantage of the available flight resources, reducing mission time for a given set of waypoints as the number of UAVs increases.
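The routing layer can be approximated by a simple greedy heuristic, sketched below. This is a drastic simplification of the vehicle-routing formulation described above (no polynomial trajectories, flight-time estimation, or GNSS-coverage constraints), intended only to show the assignment idea:

```python
import math

def greedy_routes(starts, waypoints):
    """Nearest-neighbour heuristic for multi-UAV waypoint assignment:
    repeatedly dispatch the UAV whose current position is closest to any
    unvisited waypoint. Returns one ordered waypoint-index route per UAV."""
    pos = list(starts)                      # current position of each UAV
    routes = [[] for _ in starts]
    remaining = set(range(len(waypoints)))
    while remaining:
        # cheapest (distance, uav, waypoint) move over all pairs
        _, u, w = min((math.dist(pos[u], waypoints[w]), u, w)
                      for u in range(len(pos)) for w in remaining)
        routes[u].append(w)
        pos[u] = waypoints[w]
        remaining.remove(w)
    return routes
```

Because each UAV keeps flying from its last assigned waypoint, adding vehicles naturally shortens the longest route, consistent with the mission-time reduction reported above.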

6.
Sensors (Basel) ; 18(10)2018 Oct 10.
Article in English | MEDLINE | ID: mdl-30309035

ABSTRACT

This paper presents a visual-based approach that allows an Unmanned Aerial Vehicle (UAV) to detect and track a cooperative flying vehicle autonomously using a monocular camera. The algorithms are based on template matching and morphological filtering, thus being able to operate within a wide range of relative distances (i.e., from a few meters up to several tens of meters), while ensuring robustness against variations of illumination conditions, target scale and background. Furthermore, the image processing chain takes full advantage of navigation hints (i.e., relative positioning and own-ship attitude estimates) to improve the computational efficiency and optimize the trade-off between correct detections, false alarms and missed detections. Clearly, the required exchange of information is enabled by the cooperative nature of the formation through a reliable inter-vehicle data-link. Performance assessment is carried out by exploiting flight data collected during an ad hoc experimental campaign. The proposed approach is a key building block of cooperative architectures designed to improve UAV navigation performance either under nominal GNSS coverage or in GNSS-challenging environments.
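A brute-force version of the template-matching core might look like the following normalized cross-correlation over all shifts (an illustrative stand-in; the actual system adds morphological filtering, scale handling, and navigation-aided search windows):

```python
import numpy as np

def ncc_match(image, template):
    """Exhaustive normalized cross-correlation template matching.
    Returns the (x, y) top-left corner of the best-matching window."""
    th, tw = template.shape
    t = template - template.mean()
    best, best_xy = -np.inf, (0, 0)
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            w = image[y:y + th, x:x + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz ** 2).sum() * (t ** 2).sum())
            score = (wz * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_xy = score, (x, y)
    return best_xy
```

Restricting the two nested loops to a navigation-predicted search window is precisely where the relative-positioning hints mentioned above pay off computationally.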

7.
Sensors (Basel) ; 17(10)2017 Sep 24.
Article in English | MEDLINE | ID: mdl-28946651

ABSTRACT

In this paper an original, easy-to-reproduce, semi-analytic calibration approach is developed for hardware-in-the-loop performance assessment of pose determination algorithms processing point cloud data collected by imaging a non-cooperative target with LIDARs. The laboratory setup includes a scanning LIDAR, a monocular camera, a scaled replica of a satellite-like target, and a set of calibration tools. The point clouds are processed by model-based algorithms for uncooperative targets to estimate the target's relative position and attitude with respect to the LIDAR. Target images, acquired by a monocular camera operated simultaneously with the LIDAR, are processed applying standard solutions to the Perspective-n-Point problem to get high-accuracy pose estimates, which are used as a benchmark to evaluate the accuracy attained by the LIDAR-based techniques. To this aim, precise knowledge of the extrinsic relative calibration between the camera and the LIDAR is essential; it is obtained by implementing an original calibration approach that does not need ad hoc homologous targets (e.g., retro-reflectors) easily recognizable by the two sensors. The pose determination techniques investigated in this work are of interest to space applications involving close-proximity maneuvers between non-cooperative platforms, e.g., on-orbit servicing and active debris removal.

8.
Sensors (Basel) ; 16(12)2016 Dec 17.
Article in English | MEDLINE | ID: mdl-27999318

ABSTRACT

Autonomous navigation of micro-UAVs is typically based on the integration of low-cost Global Navigation Satellite System (GNSS) receivers and Micro-Electro-Mechanical Systems (MEMS)-based inertial and magnetic sensors to stabilize and control the flight. The resulting navigation performance in terms of position and attitude accuracy may not suffice for other mission needs, such as those relevant to fine sensor pointing. In this framework, this paper presents a cooperative UAV navigation algorithm that allows a chief vehicle, equipped with inertial and magnetic sensors, a Global Positioning System (GPS) receiver, and a vision system, to improve its navigation performance (in real time or in post-processing) by exploiting formation-flying deputy vehicles equipped with GPS receivers. The focus is on outdoor environments, and the key concept is to exploit differential GPS among vehicles and vision-based tracking (DGPS/Vision) to build a virtual additional navigation sensor whose information is then integrated in a sensor fusion algorithm based on an Extended Kalman Filter. The developed concept and processing architecture are described, with a focus on the DGPS/Vision attitude determination algorithm. Performance assessment is carried out on the basis of both numerical simulations and flight tests. In the latter, navigation estimates derived from the DGPS/Vision approach are compared with those provided by the onboard autopilot system of a customized quadrotor. The analysis shows the potential of the developed approach, which derives mainly from the availability of accurate attitude information independent of magnetic and inertial sensors.
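The core geometric idea can be sketched for the simplest case, a level platform and a single deputy: the chief-to-deputy baseline is known in local NED coordinates via DGPS and in the body frame via visual tracking, and their azimuth difference gives the heading (a simplified illustration; the full algorithm handles general attitude and fuses the result in the EKF):

```python
import numpy as np

def heading_from_baseline(b_ned, b_body):
    """Heading of a level platform from one chief-to-deputy baseline vector
    expressed in NED (via DGPS) and in the body frame (via vision):
    psi = azimuth(NED baseline) - azimuth(body-frame baseline)."""
    az_ned = np.arctan2(b_ned[1], b_ned[0])
    az_body = np.arctan2(b_body[1], b_body[0])
    psi = az_ned - az_body
    return np.arctan2(np.sin(psi), np.cos(psi))  # wrap to (-pi, pi]
```

The result depends on neither magnetometers nor gyros, which is what makes the DGPS/Vision "virtual sensor" valuable for the fusion filter.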

9.
Sensors (Basel) ; 15(3): 6360-82, 2015 Mar 16.
Article in English | MEDLINE | ID: mdl-25785309

ABSTRACT

This paper presents a customized three-dimensional template matching technique for autonomous pose determination of uncooperative targets. This topic is relevant to advanced space applications, like active debris removal and on-orbit servicing. The proposed technique is model-based and produces estimates of the target pose without any prior pose information, by processing three-dimensional point clouds provided by a LIDAR. These estimates are then used to initialize a pose tracking algorithm. Distinctive features of the proposed approach are the use of a reduced number of templates and the idea of building the database of templates on-line, thus significantly reducing the amount of on-board stored data with respect to traditional techniques. An algorithm variant is also introduced aimed at further accelerating the pose acquisition time and reducing the computational cost. Technique performance is investigated within a realistic numerical simulation environment comprising a target model, LIDAR operation, and various target-chaser relative dynamics scenarios relevant to close-proximity flight operations. Specifically, the capability of the proposed techniques to provide a pose solution suitable to initialize the tracking algorithm is demonstrated, as well as their robustness against highly variable pose conditions determined by the relative dynamics. Finally, a criterion for autonomous failure detection of the proposed techniques is introduced.
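The on-line template idea can be sketched as follows: candidate templates are generated on the fly for each attitude hypothesis and scored against the measured cloud by mean nearest-neighbour distance. This is a drastic simplification (position is ignored, `make_template` is an assumed user-supplied model renderer, and the real algorithm needs far fewer comparisons):

```python
import numpy as np

def coarse_pose_acquisition(cloud, make_template, attitude_grid):
    """Return the candidate attitude whose on-line-generated template best
    matches the measured point cloud. cloud: Mx3 array of LIDAR points;
    make_template(att) -> Nx3 template cloud for attitude hypothesis att."""
    def score(template):
        # mean distance from each measured point to its nearest template point
        d = np.linalg.norm(cloud[:, None, :] - template[None, :, :], axis=2)
        return d.min(axis=1).mean()
    return min(attitude_grid, key=lambda att: score(make_template(att)))
```

Generating templates on demand, rather than storing a dense pre-computed database, is what yields the on-board memory saving claimed above.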

10.
ScientificWorldJournal ; 2014: 280478, 2014.
Article in English | MEDLINE | ID: mdl-25105154

ABSTRACT

Obstacle detection and tracking is a key function for UAS sense-and-avoid applications. Obstacles in the flight path must be detected and tracked accurately and in a timely manner so that a collision avoidance maneuver can be executed in case of a collision threat. The most important parameter for the assessment of collision risk is the Distance at Closest Point of Approach, that is, the predicted minimum distance between the own aircraft and the intruder given their current positions and velocities. Since established methodologies can lose accuracy due to nonlinearities, advanced filtering methodologies, such as particle filters, can provide more accurate estimates of the target state in nonlinear problems, thus improving system performance in terms of collision risk estimation. The paper focuses on algorithm development and performance evaluation for an obstacle tracking system based on a particle filter. The particle filter algorithm was tested in off-line simulations based on data gathered during flight tests. In particular, radar-based tracking was considered in order to evaluate the impact of particle filtering in a single-sensor framework. The analysis shows accuracy improvements in the estimation of the Distance at Closest Point of Approach, thus reducing the delay in collision detection.
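The quantity being estimated can be written down directly for constant-velocity relative motion (standard closest-point-of-approach geometry, independent of the filtering method used to estimate the relative state):

```python
import numpy as np

def closest_point_of_approach(r_rel, v_rel):
    """Distance at Closest Point of Approach for relative position r_rel and
    relative velocity v_rel, assuming both stay constant-velocity.
    t* = -(r.v)/|v|^2, clamped to the future; DCPA = |r + v t*|."""
    v2 = np.dot(v_rel, v_rel)
    t_star = max(0.0, -np.dot(r_rel, v_rel) / v2) if v2 > 0 else 0.0
    dcpa = np.linalg.norm(r_rel + v_rel * t_star)
    return dcpa, t_star
```

Because DCPA is a nonlinear function of the relative state, errors in the tracked position and velocity propagate nonlinearly into it, which is the motivation given above for particle filtering.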

11.
Sensors (Basel) ; 13(10): 12771-93, 2013 Sep 25.
Article in English | MEDLINE | ID: mdl-24072023

ABSTRACT

An optical-flow-based technique is proposed to estimate spacecraft angular velocity from sequences of star-field images. It does not require star identification and can thus be used to deliver angular rate information even when attitude determination is not possible, as during platform de-tumbling or slewing. Region-based optical flow calculation is carried out on successive star images preprocessed to remove background. Sensor calibration parameters, the Poisson equation, and a least-squares method are then used to estimate the angular velocity vector components in the rotating sensor frame. A theoretical error budget is developed to estimate the expected angular rate accuracy as a function of camera parameters and star distribution in the field of view. The effectiveness of the proposed technique is tested using star-field scenes generated by a hardware-in-the-loop testing facility and acquired by a commercial off-the-shelf camera sensor. Simulated cases comprise rotations at different rates. Experimental results are presented which are consistent with the theoretical estimates. In particular, very accurate angular velocity estimates are obtained at lower slew rates, while in all cases the achievable accuracy for the angular velocity component along the boresight is about one order of magnitude worse than for the other two components.
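The least-squares step can be illustrated as follows: for a star direction u fixed in inertial space, its derivative in the rotating sensor frame is u̇ = u × ω, which stacks into a linear system for ω (a simplified sketch; the optical-flow and calibration stages that produce the direction rates are skipped):

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix such that skew(v) @ w == np.cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def angular_rate_from_flow(dirs, dir_rates):
    """Least-squares angular velocity from star unit directions (dirs) and
    their measured sensor-frame time derivatives (dir_rates), using
    u_dot = u x omega, i.e. u_dot = skew(u) @ omega for each star."""
    A = np.vstack([skew(u) for u in dirs])
    b = np.hstack(dir_rates)
    omega, *_ = np.linalg.lstsq(A, b, rcond=None)
    return omega
```

For a narrow field of view all the u vectors cluster around the boresight, so rotation about the boresight produces small image motion; this is the geometric reason for the weaker boresight-axis accuracy noted above.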


Subject(s)
Algorithms , Geographic Information Systems , Imaging, Three-Dimensional/methods , Spacecraft , Stars, Celestial , Telescopes , Image Interpretation, Computer-Assisted
12.
Sensors (Basel) ; 12(1): 863-77, 2012.
Article in English | MEDLINE | ID: mdl-22368499

ABSTRACT

This paper describes the target detection algorithm for the image processor of a vision-based system installed onboard an unmanned helicopter. It has been developed in the framework of a project of the French national aerospace research center, Office National d'Etudes et de Recherches Aérospatiales (ONERA), which aims at developing an air-to-ground target tracking mission in an unknown urban environment. In particular, the image processor must detect targets and estimate ground motion in the proximity of the detected target position. Concerning the target detection function, the analysis has dealt with designing a corner detection algorithm and selecting the best choices in terms of edge detection methods, filtering size and type, and the most suitable criterion for detecting points of interest, in order to obtain a very fast algorithm that fulfills the computational load requirements. The compared criteria are the Harris-Stephens and Shi-Tomasi ones, which are the most widely used intensity-based criteria in the literature. Experimental results are discussed which illustrate the performance of the developed algorithm and demonstrate that the detection time is fully compliant with the requirements of the real-time system.
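The two compared cornerness criteria differ only in how they score the 2×2 structure tensor M = [[Ixx, Ixy], [Ixy, Iyy]] (windowed sums of image-gradient products), as the sketch below shows:

```python
import numpy as np

def corner_scores(Ixx, Iyy, Ixy, k=0.04):
    """Cornerness by the two criteria compared above, given the structure
    tensor entries. k is the usual Harris sensitivity constant (0.04-0.06)."""
    det = Ixx * Iyy - Ixy ** 2
    trace = Ixx + Iyy
    harris = det - k * trace ** 2                        # Harris-Stephens
    shi_tomasi = trace / 2 - np.sqrt((trace / 2) ** 2 - det)  # min eigenvalue of M
    return harris, shi_tomasi
```

Harris-Stephens avoids the square root, trading a slight accuracy difference for speed, which matters for the real-time budget discussed above; Shi-Tomasi thresholds the smaller eigenvalue directly.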


Subject(s)
Algorithms , Computer Systems , Micro-Electrical-Mechanical Systems/instrumentation , Miniaturization/instrumentation , Optical Devices , Robotics/instrumentation , Image Processing, Computer-Assisted , Normal Distribution , Time Factors
13.
Sensors (Basel) ; 10(1): 639-54, 2010.
Article in English | MEDLINE | ID: mdl-22315559

ABSTRACT

This paper presents an innovative method for estimating the attitude of airborne electro-optical cameras with respect to the onboard autonomous navigation unit. The procedure is based on the use of attitude measurements under static conditions taken by an inertial unit and carrier-phase differential Global Positioning System to obtain accurate camera position estimates in the aircraft body reference frame, while image analysis allows line-of-sight unit vectors in the camera based reference frame to be computed. The method has been applied to the alignment of the visible and infrared cameras installed onboard the experimental aircraft of the Italian Aerospace Research Center and adopted for in-flight obstacle detection and collision avoidance. Results show an angular uncertainty on the order of 0.1° (rms).
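One standard way to solve the resulting alignment problem, given matched unit vectors in the camera and body frames, is the SVD-based (Kabsch/Wahba) solution sketched below (assumed here for illustration; the abstract does not specify the solver used):

```python
import numpy as np

def align_frames(v_cam, v_body):
    """Least-squares rotation R such that R @ v_cam[i] ~= v_body[i],
    i.e. the camera-to-body alignment, from matched unit line-of-sight
    vectors. Classic SVD solution of Wahba's problem."""
    B = sum(np.outer(b, c) for b, c in zip(v_body, v_cam))
    U, _, Vt = np.linalg.svd(B)
    d = np.linalg.det(U @ Vt)                # guard against reflections
    return U @ np.diag([1.0, 1.0, d]) @ Vt
```

With line-of-sight vectors from image analysis on one side and CDGPS/inertial-derived directions on the other, the residuals of this fit directly yield the ~0.1° (rms) alignment uncertainty quoted above.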


Subject(s)
Acceleration , Aircraft/instrumentation , Aircraft/standards , Geographic Information Systems/instrumentation , Geographic Information Systems/standards , Robotics/instrumentation , Transducers/standards , Calibration/standards , Italy , Robotics/standards