Results 1 - 5 of 5
1.
Sensors (Basel); 24(3), 2024 Jan 26.
Article in English | MEDLINE | ID: mdl-38339539

ABSTRACT

Recently, new semantic segmentation and object detection methods have been proposed for the direct processing of three-dimensional (3D) LiDAR sensor point clouds. LiDAR can produce highly accurate and detailed 3D maps of natural and man-made environments and is used for sensing in many contexts because it captures more information than an RGB camera, is more robust to dynamic changes in the environment, and has become increasingly affordable in recent years, an important factor for many application scenarios. The challenge with high-resolution 3D LiDAR sensors is that they can output large amounts of data, up to a few million points per second, which is difficult to process in real time when applying complex algorithms and models for efficient semantic segmentation. Most existing approaches are either only suitable for relatively small point clouds or rely on computationally intensive sampling techniques to reduce their size. As a result, most of these methods do not run in real time in realistic field robotics scenarios, making them unsuitable for practical applications. Systematic point selection is a possible solution for reducing the amount of data to be processed. Although such selection is memory- and computationally efficient, it retains only a small subset of points, so important features may be missed. To address this problem, our proposed systematic sampling method, SyS3DS (Systematic Sampling for 3D Semantic Segmentation), incorporates a technique in which the local neighbours of each point are retained to preserve geometric details. SyS3DS is based on a graph colouring algorithm and ensures that the selected points are non-adjacent, yielding a subset of points that is representative of the 3D points in the scene. To take advantage of ensemble learning, we pass a different subset of points in each epoch. This leverages a new technique called auto-ensemble, in which ensemble learning is realized as a collection of different learning models rather than by tuning hyperparameters individually during training and validation. SyS3DS has been shown to process up to 1 million points in a single pass. It outperforms the state of the art in efficient semantic segmentation on large datasets such as Semantic3D. We also present a preliminary study of the performance achievable with LiDAR-only data, i.e., intensity values from LiDAR sensors without RGB values, for semi-autonomous robot perception.
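
The abstract names graph colouring as the mechanism that guarantees non-adjacent samples; the following is a minimal Python sketch of that idea under stated assumptions, not the authors' implementation. The k-NN graph construction, the value of k, and the choice of colour class are all illustrative assumptions.

import numpy as np
from scipy.spatial import cKDTree

def systematic_sample(points, k=8):
    # Build a symmetric k-nearest-neighbour graph over the point cloud.
    tree = cKDTree(points)
    _, nbrs = tree.query(points, k=k + 1)      # column 0 is the point itself
    adj = [set() for _ in range(len(points))]
    for i, row in enumerate(nbrs[:, 1:]):
        for j in row:
            adj[i].add(j)
            adj[j].add(i)

    # Greedy graph colouring: each point takes the smallest colour not
    # already used by one of its neighbours.
    colours = np.full(len(points), -1, dtype=int)
    for i in range(len(points)):
        taken = {colours[j] for j in adj[i] if colours[j] >= 0}
        c = 0
        while c in taken:
            c += 1
        colours[i] = c

    # Points sharing a colour class are pairwise non-adjacent; keep each
    # selected point's neighbours as well to preserve local geometry.
    selected = np.flatnonzero(colours == 0)
    return selected, nbrs[selected, 1:]

Feeding a different colour class (or a different non-adjacent subset) to the network in each epoch would mirror the auto-ensemble idea of training a collection of models on varied subsets.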

2.
IEEE Trans Biomed Eng; 71(6): 1950-1957, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38252565

ABSTRACT

This work proposes a new formulation for common spatial patterns (CSP), often used as a powerful feature extraction technique in brain-computer interfacing (BCI) and other neurological studies. In this approach, applied to data from multiple subjects and named hyperCSP, the individual covariance matrices and the mutual correlation matrices between the electroencephalograms of multiple simultaneously recorded subjects are exploited in the CSP formulation. The method aims to effectively isolate the common motor task across multiple heads and to alleviate the effects of other spurious or undesired tasks performed, inherently or intentionally, by the subjects. The technique provides satisfactory classification performance with a small data size and low computational complexity. Using the proposed hyperCSP followed by a support vector machine classifier, we obtained a classification accuracy of 81.82% over 8 trials in the presence of strong undesired tasks. We hope that this method can reduce the training error in multi-task BCI scenarios. The recorded motor-related hyperscanning dataset is publicly available to promote research in this area.
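
The abstract does not give the hyperCSP equations, so the sketch below shows classical two-class CSP, the baseline that hyperCSP generalizes to multiple subjects; the cross-subject correlation terms that define hyperCSP are the paper's contribution and are not reproduced here. Function names and the band-pass-filtered trial format are assumptions.

import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=3):
    # trials_* : lists of (channels, samples) EEG trials, one list per class.
    def mean_cov(trials):
        covs = [t @ t.T / np.trace(t @ t.T) for t in trials]
        return np.mean(covs, axis=0)

    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized eigenvalue problem: Ca w = lambda (Ca + Cb) w.
    vals, vecs = eigh(Ca, Ca + Cb)
    order = np.argsort(vals)
    # Filters from both ends of the spectrum are the most discriminative.
    pick = np.r_[order[:n_pairs], order[-n_pairs:]]
    return vecs[:, pick].T                      # (2*n_pairs, channels)

def log_var_features(trial, filters):
    # Standard CSP features: normalized log-variance of filtered signals.
    z = filters @ trial
    var = z.var(axis=1)
    return np.log(var / var.sum())

These features would then be fed to a support vector machine classifier, as in the reported 81.82% accuracy result.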


Subject(s)
Brain-Computer Interfaces; Electroencephalography; Signal Processing, Computer-Assisted; Support Vector Machine; Humans; Electroencephalography/methods; Algorithms; Adult; Male; Female; Brain/physiology
3.
Sensors (Basel); 23(15), 2023 Jul 26.
Article in English | MEDLINE | ID: mdl-37571461

ABSTRACT

Forestry operations have become of great importance for a sustainable environment in the past few decades due to the increasing toll taken by rural abandonment and climate change. Robotics presents a promising solution to this problem; however, gathering the data necessary for developing and testing algorithms can be challenging. This work proposes a portable multi-sensor apparatus to collect relevant data from several onboard sensors. The system incorporates a Light Detection and Ranging (LiDAR) sensor, two stereo depth cameras, and a dedicated inertial measurement unit (IMU) to obtain environmental data, coupled with an Android app that extracts Global Navigation Satellite System (GNSS) information from a cell phone. The acquired data can then be used for a wide range of perception-based applications, such as localization and mapping, flammable material identification, traversability analysis, path planning, and/or semantic segmentation toward (semi-)automated forestry actuation. The proposed modular architecture is built on the Robot Operating System (ROS) and Docker to facilitate data collection and the upgradability of the system. We validate the effectiveness of the apparatus in collecting datasets, and its flexibility, by carrying out a case study on Simultaneous Localization and Mapping (SLAM) in a challenging woodland environment, which allows us to compare fundamentally different methods with the proposed multimodal system.
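
As a rough illustration of what a ROS-based acquisition node for such an apparatus might look like, here is a hypothetical rospy sketch; every topic name and the choice of message types are assumptions, not taken from the paper.

import rospy
from sensor_msgs.msg import PointCloud2, Imu, NavSatFix

def make_logger(name):
    # Log the arrival of each message; a real node would write to a rosbag.
    def cb(msg):
        rospy.loginfo("%s message at %s", name, msg.header.stamp)
    return cb

rospy.init_node("forestry_data_logger")
rospy.Subscriber("/lidar/points", PointCloud2, make_logger("LiDAR"))
rospy.Subscriber("/imu/data", Imu, make_logger("IMU"))
rospy.Subscriber("/phone/gnss", NavSatFix, make_logger("GNSS"))
rospy.spin()

In practice, recording would more likely be done with the standard `rosbag record` tool, with Docker containers isolating the drivers for each sensor, consistent with the modular architecture described.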

4.
IEEE Trans Cybern; 43(2): 699-711, 2013 Apr.
Article in English | MEDLINE | ID: mdl-23014760

ABSTRACT

In this paper, we present a Bayesian framework for the active multimodal perception of 3-D structure and motion. The design of this framework is inspired by the role of the dorsal perceptual pathway of the human brain. Its component models build upon a common egocentric spatial configuration that is naturally suited to the integration of readings from multiple sensors using a Bayesian approach. In the process, we contribute efficient and robust probabilistic solutions for cyclopean-geometry-based stereovision and for auditory perception based only on binaural cues, modeled using a consistent formalization that allows their hierarchical use as building blocks for the multimodal sensor fusion framework. We explicitly or implicitly address the most important challenges of sensor fusion within this framework for vision, audition, and vestibular sensing. Moreover, interaction and navigation require maximal awareness of spatial surroundings, which, in turn, is obtained through active attentional and behavioral exploration of the environment. The computational models described in this paper support the construction of a simultaneously flexible and powerful robotic implementation of multimodal active perception for use in real-world applications, such as human-machine interaction or mobile robot navigation.
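
The core Bayesian fusion idea can be illustrated with the textbook product-of-Gaussians update for two independent measurements of one egocentric quantity (e.g., a target's bearing); this generic sketch is not the paper's model, and the numbers are invented for illustration.

def fuse_gaussian(mu1, var1, mu2, var2):
    # Posterior of x given two independent Gaussian measurements:
    # precisions add, and the mean is precision-weighted.
    var = 1.0 / (1.0 / var1 + 1.0 / var2)
    mu = var * (mu1 / var1 + mu2 / var2)
    return mu, var

# Vision is precise in azimuth, audition less so; the fused estimate
# leans on vision but is still nudged by the binaural cue.
mu, var = fuse_gaussian(10.0, 1.0, 14.0, 9.0)   # degrees, deg^2
print(mu, var)                                  # ~10.4, 0.9

Because the fused result is again Gaussian, such updates can be chained hierarchically, which is the sense in which the unimodal models serve as building blocks for the multimodal framework.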

5.
Cogn Process; 13 Suppl 1: S155-9, 2012 Aug.
Article in English | MEDLINE | ID: mdl-22806665

ABSTRACT

In this work, we contribute a behaviour learning process for a hierarchical Bayesian framework for multimodal active perception, devised to be emergent, scalable, and adaptive. The framework is composed of models built upon a common spatial configuration for encoding perception and action that is naturally suited to the integration of readings from multiple sensors, using a Bayesian approach devised in previous work. The proposed learning process is shown to reproduce goal-dependent, human-like active perception behaviours by learning model parameters (referred to as "attentional sets") for different free-viewing and active search tasks. Learning was performed by presenting several 3D audiovisual virtual scenarios through a head-mounted display while logging the spatial distribution of the subject's fixations (in 2D, on the left and right images, and in 3D space); these data are subsequently used as the training set for the framework. The hierarchical Bayesian framework thus implements high-level behaviour arising from the low-level interaction of simpler building blocks by using the attentional sets learned for each task, and it is able to change these attentional sets "on the fly," enabling goal-dependent behaviours (i.e., top-down influences).
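
The abstract does not specify how attentional sets are parameterized; purely for illustration, the sketch below treats an attentional set as a normalized spatial histogram of the logged 3D fixations for one task. The bin count, spatial extent, and smoothing constant are all assumptions.

import numpy as np

def attentional_set(fixations, bins=16, extent=(-2.0, 2.0)):
    # fixations : (N, 3) array of 3D fixation points logged for one task.
    # Returns a (bins, bins, bins) probability map over egocentric space.
    hist, _ = np.histogramdd(fixations, bins=bins, range=[extent] * 3)
    hist += 1e-6                 # avoid zero-probability cells
    return hist / hist.sum()

# One map per task (e.g., free viewing vs. active search); at run time
# the framework could swap maps to switch behaviour "on the fly".
free_viewing_set = attentional_set(np.random.randn(500, 3))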


Subject(s)
Attention/physiology; Bayes Theorem; Learning/physiology; Perception/physiology; Robotics; Acoustic Stimulation; Computer Simulation; Humans; Models, Psychological; Photic Stimulation