Results 1 - 5 of 5
1.
Sensors (Basel) ; 22(21)2022 Oct 22.
Article in English | MEDLINE | ID: mdl-36365800

ABSTRACT

Feasible local motion planning for autonomous mobile robots in dynamic environments requires predicting how the scene evolves. Conventional navigation stacks rely on a local map to represent how a dynamic scene changes over time. However, these navigation stacks depend heavily on the accuracy of the environmental map and the number of obstacles. This study uses semantic segmentation-based drivable area estimation as an alternative representation to assist local motion planning. Notably, a realistic 3D simulator built on Unreal Engine was created to generate a synthetic dataset under different weather conditions. A transfer learning technique was used to train the encoder-decoder model to segment free space from the occupied sidewalk environment. The local planner uses a nonlinear model predictive control (NMPC) scheme that takes the estimated drivable space, the state of the robot, and a global plan as inputs and produces safe velocity commands that minimize the tracking cost and actuator effort while avoiding collisions with dynamic and static obstacles. The proposed approach achieves zero-shot transfer from simulation to real-world environments never experienced during training. Several intensive experiments were conducted and compared with the dynamic window approach (DWA) to demonstrate the effectiveness of our system in dynamic sidewalk environments.
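The abstract does not give the NMPC formulation in detail; the following is a minimal, hypothetical Python sketch of the general idea it describes: optimize a short horizon of (v, w) velocity commands that track a goal while penalizing actuator effort and leaving an estimated drivable-area grid. All names, weights, the horizon length, and the grid-based drivable check are illustrative assumptions, not the authors' implementation.

```python
# Minimal NMPC-style sketch (illustrative assumptions throughout).
import numpy as np
from scipy.optimize import minimize

DT, HORIZON = 0.2, 8          # step size [s] and prediction horizon (assumed)

def rollout(state, u):
    """Unicycle rollout: state = (x, y, theta), u = flat [v0, w0, v1, w1, ...]."""
    x, y, th = state
    traj = []
    for v, w in u.reshape(HORIZON, 2):
        x += v * np.cos(th) * DT
        y += v * np.sin(th) * DT
        th += w * DT
        traj.append((x, y))
    return np.array(traj)

def cost(u, state, goal, drivable):
    traj = rollout(state, u)
    track = np.sum((traj - goal) ** 2)            # tracking cost
    effort = 0.1 * np.sum(u ** 2)                 # actuator effort
    # Soft collision penalty: 1 where a visited cell is not drivable (0.5 m grid).
    idx = np.clip((traj / 0.5).astype(int), 0, drivable.shape[0] - 1)
    collision = 100.0 * np.sum(1.0 - drivable[idx[:, 0], idx[:, 1]])
    return track + effort + collision

state, goal = np.array([0.0, 0.0, 0.0]), np.array([3.0, 1.0])
drivable = np.ones((20, 20))                      # stand-in for the segmentation output
u0 = np.zeros(HORIZON * 2)
bounds = [(0.0, 1.0), (-1.5, 1.5)] * HORIZON      # assumed v and w limits
res = minimize(cost, u0, args=(state, goal, drivable), bounds=bounds)
v_cmd, w_cmd = res.x[:2]                          # apply the first command, then replan
print(f"v = {v_cmd:.2f} m/s, w = {w_cmd:.2f} rad/s")
```

In a receding-horizon loop, only the first command of the optimized sequence is executed and the problem is re-solved at the next step with a fresh drivable-area estimate.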


Subjects
Deep Learning, Robotics, Nonlinear Dynamics, Robotics/methods, Algorithms, Motion (Physics)
2.
Sensors (Basel) ; 21(12)2021 Jun 08.
Article in English | MEDLINE | ID: mdl-34201390

ABSTRACT

Three-dimensional object detection utilizing LiDAR point cloud data is an indispensable part of autonomous driving perception systems. Point cloud-based 3D object detection achieves higher accuracy than cameras at night, making it the better choice for nighttime perception. However, most LiDAR-based 3D object detection methods work in a supervised manner, which means their state-of-the-art performance relies heavily on large-scale, well-labeled datasets, and such annotated datasets can be expensive to obtain and are accessible only in limited scenarios. Transfer learning is a promising approach to reduce the requirement for large-scale training datasets, but existing transfer learning object detectors primarily target 2D object detection rather than 3D. In this work, we utilize 3D point cloud data more effectively by representing the scene in a bird's-eye view (BEV) and propose a transfer-learning-based point cloud semantic segmentation method for 3D object detection. The proposed model minimizes the need for large-scale training datasets and consequently reduces the training time. First, a preprocessing stage filters the raw point cloud data into a BEV map within a specific field of view. Second, the transfer learning stage uses knowledge from a previously learned classification task (with more data for training) and generalizes it to the semantic segmentation-based 2D object detection task. Finally, the 2D detection results from the BEV image are back-projected into 3D in the postprocessing stage. We verify the results on two datasets, the KITTI 3D object detection dataset and the Ouster LiDAR-64 dataset, demonstrating that the proposed method is highly competitive in terms of mean average precision (mAP up to 70%) while still running at more than 30 frames per second (FPS).
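As a rough illustration of the preprocessing stage described above (not the authors' code), the sketch below crops a LiDAR point cloud to a field of view and rasterizes it into a BEV map with max-height and intensity channels. The ranges, resolution, and channel layout are assumptions made for the example.

```python
# Hypothetical point-cloud-to-BEV rasterization (ranges and channels assumed).
import numpy as np

def pointcloud_to_bev(points, x_range=(0, 40), y_range=(-20, 20), res=0.1):
    """points: (N, 4) array of x, y, z, intensity. Returns an (H, W, 2) BEV map."""
    x, y, z, intensity = points.T
    mask = (x >= x_range[0]) & (x < x_range[1]) & \
           (y >= y_range[0]) & (y < y_range[1])     # crop to the field of view
    x, y, z, intensity = x[mask], y[mask], z[mask], intensity[mask]

    h = int((x_range[1] - x_range[0]) / res)
    w = int((y_range[1] - y_range[0]) / res)
    rows = ((x - x_range[0]) / res).astype(int)
    cols = ((y - y_range[0]) / res).astype(int)

    bev = np.zeros((h, w, 2), dtype=np.float32)
    np.maximum.at(bev[..., 0], (rows, cols), z)          # max-height channel
    np.maximum.at(bev[..., 1], (rows, cols), intensity)  # max-intensity channel
    return bev

# Synthetic cloud: x in [0, 40), y in [-20, 20), z in [0, 3), intensity in [0, 1).
cloud = np.random.rand(1000, 4) * [40, 40, 3, 1] - [0, 20, 0, 0]
print(pointcloud_to_bev(cloud).shape)   # (400, 400, 2)
```

The resulting 2D raster is what lets an image-domain segmentation network, pretrained on a classification task, be reused for the detection problem.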


Subjects
Automobile Driving, Semantics, Machine Learning
3.
Sensors (Basel) ; 21(7)2021 Apr 04.
Article in English | MEDLINE | ID: mdl-33916624

ABSTRACT

Autonomous navigation and collision avoidance missions represent a significant challenge for robotic systems, as they generally operate in dynamic environments that require a high level of autonomy and flexible decision-making capabilities. This challenge is even greater for micro aerial vehicles (MAVs) due to their limited size and computational power. This paper presents a novel approach for enabling a micro aerial vehicle equipped with a laser range finder to autonomously navigate among obstacles and reach a user-specified goal location in a GPS-denied environment, without the need for mapping or path planning. The proposed system uses an actor-critic-based reinforcement learning technique to train the aerial robot in a Gazebo simulator to perform a point-goal navigation task by directly mapping the MAV's noisy state and laser scan measurements to continuous motion control. The obtained policy can perform collision-free flight in the real world despite being trained entirely in a 3D simulator. Intensive simulations and real-time experiments were conducted and compared with a nonlinear model predictive control technique to show the generalization capabilities to new, unseen environments and the robustness against localization noise. The obtained results demonstrate our system's effectiveness in flying safely and reaching the desired points by planning smooth forward linear velocities and heading rates.
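The paper's network architecture is not specified in the abstract; the PyTorch sketch below only illustrates the general structure such a policy might take: an actor that maps a laser scan plus a low-dimensional MAV state directly to a Gaussian over continuous (velocity, heading-rate) commands. The layer sizes, input dimensions, and the Gaussian-policy parameterization are all assumptions.

```python
# Illustrative actor network for continuous point-goal navigation (assumed sizes).
import torch
import torch.nn as nn

class Actor(nn.Module):
    def __init__(self, scan_dim=360, state_dim=5, act_dim=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(scan_dim + state_dim, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
        )
        self.mu = nn.Linear(128, act_dim)            # mean action
        self.log_std = nn.Parameter(torch.zeros(act_dim))

    def forward(self, scan, state):
        h = self.net(torch.cat([scan, state], dim=-1))
        mu = torch.tanh(self.mu(h))                  # bounded (v, yaw-rate)
        return torch.distributions.Normal(mu, self.log_std.exp())

actor = Actor()
scan = torch.rand(1, 360)        # normalized laser ranges (synthetic)
state = torch.rand(1, 5)         # e.g. goal distance/bearing, current velocities
action = actor(scan, state).sample()   # continuous motion command
print(action)
```

During training a critic network scores these actions to drive the policy update; at deployment only the actor runs onboard, which is what keeps the mapping from sensing to control lightweight.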

4.
Sensors (Basel) ; 19(15)2019 Jul 31.
Article in English | MEDLINE | ID: mdl-31370336

ABSTRACT

In recent years, demand has been increasing for target detection and tracking from aerial imagery via drones using onboard-powered sensors and devices. We propose a very effective method for this application based on a deep learning framework. A state-of-the-art embedded hardware system empowers small flying robots to carry out the real-time onboard computation necessary for object tracking. Two types of embedded modules were developed: one designed around a Jetson TX or AGX Xavier, and the other based on an Intel Neural Compute Stick. Both are suitable for providing real-time onboard computing on small flying drones with limited space. A comparative analysis of current state-of-the-art deep learning-based multi-object detection algorithms was carried out on the designated GPU-based embedded computing modules to obtain detailed metric data on frame rates and computational power. We also introduce an effective approach for tracking moving targets. The tracking algorithm extends simple online and real-time tracking (SORT) with a deep learning-based association metric (Deep SORT), combining a hypothesis tracking methodology with Kalman filtering and appearance-based association. In addition, a guidance system that tracks the target position using a GPU-based algorithm is introduced. Finally, we demonstrate the effectiveness of the proposed algorithms through real-time experiments with a small multi-rotor drone.
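As an illustration of the association step at the heart of (Deep) SORT, the sketch below builds a cosine-distance cost matrix between track and detection appearance embeddings and solves the assignment with the Hungarian algorithm. The distance metric, the threshold value, and the omission of the Kalman-filter motion gate are simplifying assumptions for this example.

```python
# Appearance-based track-detection association in the spirit of Deep SORT
# (simplified: no Kalman motion gating).
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(track_feats, det_feats, max_cost=0.4):
    """track_feats: (T, D), det_feats: (N, D) L2-normalized embeddings."""
    cost = 1.0 - track_feats @ det_feats.T          # cosine distance matrix
    rows, cols = linear_sum_assignment(cost)        # Hungarian assignment
    # Keep only matches whose appearance distance is below the threshold.
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_cost]

tracks = np.eye(3)[:2]          # 2 existing tracks (toy unit embeddings)
dets = np.eye(3)                # 3 new detections
print(associate(tracks, dets))  # [(0, 0), (1, 1)]; detection 2 starts a new track
```

In the full algorithm, matched detections update each track's Kalman state, unmatched detections spawn tentative tracks, and tracks unmatched for too long are deleted.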

5.
Sensors (Basel) ; 18(10)2018 Oct 22.
Article in English | MEDLINE | ID: mdl-30360397

ABSTRACT

In recent years, machine learning (and, as a result, artificial intelligence) has experienced considerable progress. As a result, robots of different shapes and purposes have found their way into our everyday life. These robots, developed with the goal of human companionship, are here to help us in our everyday and routine life. They differ from the previous family of robots, which were used in factories and static environments: these new robots are social robots that need to adapt to our environment by themselves and learn from their own experiences. In this paper, we contribute to the creation of robots with a high degree of autonomy, which is a must for social robots. We develop an algorithm capable of autonomous exploration in, and adaptation to, unknown environments and implement it in a simulated robot. We then go beyond simulation and implement our algorithm in a real robot, in which our sensor fusion method is able to overcome real-world noise and perform robust exploration.
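The abstract does not describe the fusion method itself; purely as a generic illustration of combining noisy sensor readings, the sketch below fuses two range measurements by inverse-variance weighting, one of the simplest fusion schemes. All values are synthetic and the scheme is not claimed to be the authors' method.

```python
# Generic inverse-variance sensor fusion (illustrative only).
def fuse(z1, var1, z2, var2):
    """Fuse two noisy measurements of the same quantity; the less noisy
    sensor (smaller variance) gets the larger weight."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    z = (w1 * z1 + w2 * z2) / (w1 + w2)
    var = 1.0 / (w1 + w2)          # fused variance is below either input's
    return z, var

# Two range sensors measuring the same wall distance (meters).
z, var = fuse(2.05, 0.04, 1.95, 0.01)
print(f"fused = {z:.3f} m, variance = {var:.4f}")   # fused = 1.970 m, variance = 0.0080
```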
