Results 1 - 2 of 2
1.
Sensors (Basel); 22(5), 2022 Feb 27.
Article in English | MEDLINE | ID: mdl-35271018

ABSTRACT

The Unmanned Aerial Vehicle (UAV) is one of the most remarkable inventions of the last 100 years, and much research has been invested in the development of this flying robot. The landing system is one of the more challenging aspects of its development. Artificial Intelligence (AI), including reinforcement learning, has become the preferred technique for landing-system development; current research, however, focuses more on system development based on image processing and advanced geometry. A novel calibration based on our previous research was used to improve the accuracy of the AprilTag pose estimation. With the help of advanced geometry applied to camera and range-sensor data, a process we call Inverse Homography Range Camera Fusion (IHRCF), a pose estimation that outperforms our previous work is now possible. The range sensor used here is a Time of Flight (ToF) sensor, but the algorithm can be used with any range sensor. First, images are captured by the image acquisition device, a monocular camera. Next, the corners of the landing landmark are detected by the AprilTag detection algorithm (ATDA). The pixel correspondence between the image and the range sensor is then calculated from the calibration data. In the succeeding phase, the planar homography between the real-world locations of the sensor data and their corresponding pixel coordinates is calculated. Next, the pixel coordinates of the four AprilTag-detected corners are transformed by the inverse planar homography from pixel coordinates to world coordinates in the camera frame. Finally, since the world-frame corner points of the AprilTag are known, a rigid-body transformation can be used to recover the pose. The IHRCF algorithm was evaluated in a CoppeliaSim simulation environment, and the test was implemented in real-time Software-in-the-Loop (SIL). IHRCF significantly outperformed the AprilTag-only detection approach in both translational and rotational terms. In conclusion, a conventional landmark detection algorithm can be improved by incorporating sensor fusion for cameras with lower radial distortion.
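The pipeline described in the abstract lends itself to a compact illustration. The sketch below (Python with NumPy and OpenCV) is not the paper's implementation: the calibration inputs, the use of RANSAC for the homography fit, and all names are illustrative assumptions. It fits the planar homography from matched range-sensor world points and pixels, maps the detected tag corners back onto the landing plane with the inverse homography, and recovers the pose with a Kabsch rigid-body fit.

```python
# Minimal sketch of an inverse-homography pose pipeline (assumptions:
# corner pixels come from an AprilTag detector; the range-sensor/camera
# calibration already yields matched (world point, pixel) pairs on the
# landing plane; names are illustrative, not from the paper).
import numpy as np
import cv2

def pose_from_inverse_homography(sensor_world_xy, sensor_pixels,
                                 tag_corner_pixels, tag_corners_model):
    """Estimate the tag pose via an inverse planar homography.

    sensor_world_xy   : (N, 2) range-sensor points on the landing plane (m)
    sensor_pixels     : (N, 2) their corresponding image pixels
    tag_corner_pixels : (4, 2) AprilTag corner pixels from the detector
    tag_corners_model : (4, 3) known tag corner coordinates in the tag frame
    """
    # Planar homography: landing-plane world coordinates -> image pixels.
    H, _ = cv2.findHomography(np.asarray(sensor_world_xy, np.float64),
                              np.asarray(sensor_pixels, np.float64),
                              cv2.RANSAC)

    # Inverse homography maps the detected corner pixels back onto the plane.
    corners_xy = cv2.perspectiveTransform(
        np.asarray(tag_corner_pixels, np.float64).reshape(-1, 1, 2),
        np.linalg.inv(H)).reshape(-1, 2)
    corners_world = np.hstack([corners_xy, np.zeros((4, 1))])  # z = 0 plane

    # Rigid-body (Kabsch) fit: corners_world ~ R @ tag_corners_model + t.
    # Both point sets are planar, so the reflection guard D is essential.
    mu_m = tag_corners_model.mean(axis=0)
    mu_w = corners_world.mean(axis=0)
    C = (tag_corners_model - mu_m).T @ (corners_world - mu_w)
    U, _, Vt = np.linalg.svd(C)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T   # rotation: tag frame -> landing-plane frame
    t = mu_w - R @ mu_m  # translation
    return R, t
```

Using a rigid-body fit over all four corners, rather than solving from a single corner, averages out per-corner noise in the recovered plane points; this mirrors the abstract's final step of applying a rigid-body transformation to the known tag corners.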

2.
Sensors (Basel); 21(11), 2021 May 22.
Article in English | MEDLINE | ID: mdl-34067380

ABSTRACT

The advancement of indoor Inertial Navigation Systems (INS) based on low-cost Inertial Measurement Units (IMUs) has long been studied in the field of pedestrian localization. These systems have various sources of error that lead to unstable and unreliable positioning results, especially over long-term operation. The inaccuracies are usually caused by imperfect system modeling, inappropriate sensor-fusion models, heading drift, IMU biases, and calibration methods. This article addresses the unreliability of low-cost Micro-Electro-Mechanical System (MEMS)-based pedestrian INS. We designed a novel multi-sensor fusion method based on a Time of Flight (ToF) distance sensor and dual chest- and foot-mounted IMUs, aided by an online calibration technique. An Extended Kalman Filter (EKF) estimates the attitude, position, and velocity errors, as well as the IMU biases. A fusion architecture is derived to provide a consistent velocity measurement through the combined contribution of the ToF distance sensor and the foot-mounted IMU. In this method, the measurements of the ToF distance sensor are used for the time steps in which Zero Velocity Update (ZUPT) measurements are not active. In parallel, the chest-mounted IMU estimates the attitude of the pedestrian's chest. In addition, a novel corridor-detection filter restricts the heading drift along each straight corridor segment. Compared to the common INS method, the developed system delivers promising and resilient results in two-dimensional corridor spaces for durations of up to 11 min. Finally, our experiments showed a position RMS error of less than 3 m and a final-point error of less than 5 m.
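The fusion rule described in the abstract, a ZUPT pseudo-measurement when the foot is stationary and the ToF-derived velocity otherwise, can be sketched as a single EKF velocity update. The snippet below is a hedged illustration, not the paper's implementation: the 9-state error vector, the noise values, and all names are assumptions.

```python
# Minimal sketch of the ZUPT/ToF velocity-measurement selection in an
# error-state EKF. Assumptions (not from the paper): error state is
# [attitude error, velocity error, position error] (9 states), with the
# error defined as INS estimate minus truth; noise values are placeholders.
import numpy as np

H_VEL = np.hstack([np.zeros((3, 3)), np.eye(3), np.zeros((3, 3))])  # observes velocity error

def velocity_update(x_err, P, v_ins, zupt_active, v_tof,
                    r_zupt=1e-4, r_tof=1e-2):
    """One EKF velocity update: ZUPT when the foot is stationary,
    otherwise the ToF-derived velocity, per the fusion rule above."""
    if zupt_active:
        z = v_ins           # observed velocity error: true velocity is zero
        R = r_zupt * np.eye(3)
    else:
        z = v_ins - v_tof   # ToF distance sensor supplies the velocity reference
        R = r_tof * np.eye(3)

    S = H_VEL @ P @ H_VEL.T + R                 # innovation covariance
    K = P @ H_VEL.T @ np.linalg.inv(S)          # Kalman gain (9 x 3)
    x_err = x_err + K @ (z - H_VEL @ x_err)     # state correction
    P = (np.eye(9) - K @ H_VEL) @ P             # covariance update
    return x_err, P
```

Keeping both measurement types in one update function makes the handover seamless: the filter always receives some velocity reference, which is what keeps the velocity (and hence position) error bounded between stance phases.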
