Results 1 - 13 of 13
1.
Sensors (Basel); 20(16), 2020 Aug 07.
Article in English | MEDLINE | ID: mdl-32784776

ABSTRACT

The use of UAVs for remote sensing is increasing. In this paper, we demonstrate a method for evaluating and selecting suitable hardware for deploying UAV-based remote sensing algorithms under size, weight, power, and computational constraints. These constraints hinder the deployment of rapidly evolving computer vision and robotics algorithms on UAVs, because effective implementation requires intricate knowledge of the system and architecture. We propose integrating a computational monitoring technique (profiling) with an industry standard for software quality (ISO 25000) and fusing both in a decision-making model (the analytic hierarchy process) to provide an informed basis for deploying embedded systems in the context of UAV-based remote sensing. One software package is combined into three software-hardware alternatives, which are profiled in hardware-in-the-loop simulations. Three objectives serve as inputs to the decision-making process, and a Monte Carlo simulation provides insight into which decision-making parameters lead to which preferred alternative. Results indicate that the local weights significantly influence which alternative is preferred. The approach makes it possible to relate complex parameters, leading to informed decisions about which hardware is suitable for deployment in which case.
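
As an illustration of the decision step this abstract describes, the sketch below scores three hypothetical hardware alternatives against three profiling-derived criteria using the analytic hierarchy process's weighted-sum step, then sweeps the local weights in a Monte Carlo simulation. The alternative names, scores, and weight distribution are assumptions for illustration, not values from the paper.

```python
# Minimal sketch of AHP scoring with a Monte Carlo sweep over local weights.
import numpy as np

rng = np.random.default_rng(42)

# Normalized per-criterion scores for three hypothetical hardware alternatives,
# e.g. derived from profiling (CPU load, memory, power). Rows: alternatives.
scores = np.array([
    [0.50, 0.30, 0.20],   # alternative A
    [0.25, 0.45, 0.30],   # alternative B
    [0.25, 0.25, 0.50],   # alternative C
])

def ahp_rank(local_weights: np.ndarray) -> int:
    """Return the index of the preferred alternative for one weight vector."""
    return int(np.argmax(scores @ local_weights))

# Monte Carlo: sample random weight vectors on the simplex and count how often
# each alternative wins, showing how sensitive the preference is to the weights.
wins = np.zeros(scores.shape[0])
for _ in range(10_000):
    w = rng.dirichlet(np.ones(scores.shape[1]))  # random local weights
    wins[ahp_rank(w)] += 1

print("preference frequency per alternative:", wins / wins.sum())
```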

2.
Biol Cybern; 114(2): 209-229, 2020 Apr.
Article in English | MEDLINE | ID: mdl-32322978

ABSTRACT

We reveal how implementing the homogeneous, multi-scale mapping frameworks observed in the mammalian brain's mapping systems radically improves the performance of a range of current robotic localization techniques. Roboticists have developed a range of predominantly single- or dual-scale heterogeneous mapping approaches (typically locally metric and globally topological) that starkly contrast with neural encoding of space in mammalian brains: a multi-scale map underpinned by spatially responsive cells such as the grid cells found in the rodent entorhinal cortex. Yet the full benefits of a homogeneous multi-scale mapping framework remain unknown in both robotics and biology: in robotics because of the focus on single- or two-scale systems and limits in the scalability and open-field nature of current test environments and benchmark datasets; in biology because of technical limitations when recording from rodents during movement over large areas. New global spatial databases with visual information varying over several orders of magnitude in scale enable us to investigate this question for the first time in real-world environments. In particular, we investigate and answer the following questions: why have multi-scale representations, how many scales should there be, what should the size ratio between consecutive scales be, and how does the absolute scale size affect performance? We answer these questions by developing and evaluating a homogeneous, multi-scale mapping framework that mimics aspects of the rodent multi-scale map but uses current robotic place recognition techniques at each scale. Results in large-scale real-world environments demonstrate multi-faceted and significant benefits for mapping and localization performance, and identify the key factors that determine performance.
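
The core multi-scale idea can be sketched in a few lines: run place recognition over progressively coarser copies of one map and fuse the per-scale match scores. The toy below uses a 1-D route, max-pooled coarse scales, and a fixed ratio between consecutive scales; all parameters and data are illustrative assumptions, not the paper's method in detail.

```python
# Toy homogeneous multi-scale place recognition on a 1-D route.
import numpy as np

def fuse_scales(fine_scores: np.ndarray, n_scales: int = 4, ratio: int = 2) -> np.ndarray:
    """Fuse match scores from progressively coarser copies of one map."""
    fused = np.zeros_like(fine_scores, dtype=float)
    for s in range(n_scales):
        bin_size = ratio ** s                      # scale s bins this many fine places
        n_bins = int(np.ceil(len(fine_scores) / bin_size))
        coarse = np.array([fine_scores[i*bin_size:(i+1)*bin_size].max()
                           for i in range(n_bins)])   # coarse-scale match scores
        fused += np.repeat(coarse, bin_size)[:len(fine_scores)]  # broadcast back down
    return fused / n_scales

# Noisy single-scale scores with a true match around place 37: the coarse
# scales suppress isolated false positives that the finest scale alone keeps.
rng = np.random.default_rng(0)
scores = rng.random(100) * 0.4
scores[35:40] += 0.5                               # spatially consistent true match
print("best place:", int(np.argmax(fuse_scales(scores))))
```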


Subject(s)
Brain Mapping; Robotics/methods; Spatial Navigation; Algorithms; Animals; Computer Simulation; Datasets as Topic; Entorhinal Cortex/physiology; Movement; Place Cells/physiology; Recognition, Psychology; Rodentia
3.
Biol Cybern; 113(5-6): 515-545, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31571007

ABSTRACT

Roboticists have long drawn inspiration from nature to develop navigation and simultaneous localization and mapping (SLAM) systems such as RatSLAM. Animals such as birds and bats possess superlative navigation capabilities, robustly navigating over large, three-dimensional environments by leveraging an internal neural representation of space combined with external sensory cues and self-motion cues. This paper presents a novel neuro-inspired 4DoF (degrees of freedom) SLAM system named NeuroSLAM, based upon computational models of 3D grid cells and multilayered head direction cells, integrated with a vision system that provides external visual cues and self-motion cues. NeuroSLAM's neural network activity drives the creation of a multilayered graphical experience map in real time, enabling relocalization and loop closure through sequences of familiar local visual cues. A multilayered experience map relaxation algorithm corrects cumulative errors in path integration after loop closure. Using both synthetic and real-world datasets comprising complex, multilayered indoor and outdoor environments, we demonstrate that NeuroSLAM consistently produces topologically correct three-dimensional maps.
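
The experience-map relaxation step can be sketched as a simple iterative graph update in which each experience is nudged toward the positions implied by its odometric links, in the style of RatSLAM's map correction; the 2-D poses, link data, and correction rate below are invented for illustration.

```python
# Toy experience-map relaxation after a loop closure.
import numpy as np

# poses[i] is the (x, y) of experience i; links are (i, j, dx, dy) odometry edges.
poses = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.1], [3.0, 0.3]])
links = [(0, 1, 1.0, 0.0), (1, 2, 1.0, 0.0), (2, 3, 1.0, 0.0),
         (3, 0, -3.0, 0.0)]  # loop-closure edge back to the start

alpha = 0.5  # correction rate (a small constant in RatSLAM-style systems)
for _ in range(200):
    delta = np.zeros_like(poses)
    for i, j, dx, dy in links:
        err = (poses[j] - poses[i]) - np.array([dx, dy])  # residual on this edge
        delta[i] += 0.5 * alpha * err                     # split the correction
        delta[j] -= 0.5 * alpha * err
    poses += delta

print(np.round(poses, 3))  # accumulated drift is spread around the loop
```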


Subject(s)
Brain/physiology; Computer Simulation; Models, Neurological; Neural Networks, Computer; Spatial Navigation/physiology; Animals; Brain Mapping/methods; Humans; Robotics/methods
4.
Ecol Evol; 8(12): 6005-6015, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29988453

ABSTRACT

This study develops an approach to automating vegetation cover estimation using computer vision and pattern recognition algorithms. Visual cover estimation is a key tool for many ecological studies, yet quadrat-based analyses are known to suffer from issues of consistency between people as well as across sites (spatially) and time (temporally). Previous efforts to estimate cover from photographs require considerable manual work. We demonstrate that an automated system can estimate vegetation cover, and the type of vegetation cover present, using top-down photographs of 1 m by 1 m quadrats. Vegetation cover is estimated by modeling the distribution of color using a multivariate Gaussian. The type of vegetation cover is then classified, using illumination-robust local binary pattern features, into two broad groups: graminoids (grasses) and forbs. This system is evaluated on two datasets from the globally distributed experiment, the Nutrient Network (NutNet). These NutNet sites were selected for analysis because repeat photographs were taken over time and the sites represent very different grassland ecosystems: a low-stature subalpine grassland in an alpine region of Australia and a taller, more productive lowland grassland in the Pacific Northwest of the USA. We find that estimates of treatment effects on grass and forb cover did not differ between field and automated estimates for eight of nine experimental treatments. Conclusions about total vegetation cover corresponded less strongly, particularly at the more productive site. A limitation of the automated system is that total vegetation cover is reported as the percentage of pixels considered to contain vegetation, whereas ecologists can distinguish species with overlapping coverage and may therefore estimate total coverage exceeding 100%. Automated approaches such as this offer techniques for estimating vegetation cover that are repeatable, cheaper to use, and likely more reliable for quantifying vegetation change over the long term. They would also enable ecologists to increase the spatial and temporal depth of their coverage estimates with methods that allow rapid vegetation sampling over large spatial scales.
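
A minimal sketch of the cover-estimation step, assuming a Mahalanobis-distance threshold on a multivariate Gaussian fitted to labelled vegetation pixels; the training pixels, threshold, and synthetic quadrat image are illustrative, not the study's data.

```python
# Pixel-wise vegetation cover from a multivariate Gaussian colour model.
import numpy as np

def fit_gaussian(pixels: np.ndarray):
    """Fit mean and inverse covariance to an (N, 3) array of RGB training pixels."""
    mu = pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False)
    return mu, np.linalg.inv(cov)

def cover_fraction(image: np.ndarray, mu, cov_inv, thresh: float = 3.0) -> float:
    """Fraction of pixels within `thresh` Mahalanobis units of the colour model."""
    flat = image.reshape(-1, 3).astype(float) - mu
    d2 = np.einsum("ij,jk,ik->i", flat, cov_inv, flat)  # squared distances
    return float((d2 < thresh ** 2).mean())

rng = np.random.default_rng(1)
veg = rng.normal([60, 120, 50], 10, size=(500, 3))          # green-ish training pixels
mu, cov_inv = fit_gaussian(veg)

quadrat = rng.normal([120, 110, 90], 30, size=(64, 64, 3))  # mostly soil-coloured
quadrat[:32, :32] = rng.normal([60, 120, 50], 10, size=(32, 32, 3))  # a grassy patch
print(f"estimated cover: {cover_fraction(quadrat, mu, cov_inv):.0%}")
```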

5.
Biol Cybern; 112(3): 209-225, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29353330

ABSTRACT

Most robot navigation systems perform place recognition using a single sensor modality and one or, at most, two heterogeneous map scales. In contrast, mammals navigate by combining sensing from a wide variety of modalities, including vision, hearing, olfaction and touch, with a multi-scale, homogeneous neural map of the environment. In this paper, we develop a multi-scale, multi-sensor system for mapping and place recognition that combines spatial localization hypotheses at different spatial scales from multiple different sensors to calculate an overall place recognition estimate. We evaluate the system's performance over three repeated 1.5-km day and night journeys across a university campus spanning outdoor and multi-level indoor environments, incorporating camera, WiFi and barometric sensory information. The system outperforms a conventional camera-only localization system; the results demonstrate not only how combining multiple sensing modalities improves performance, but also how combining these modalities over multiple scales improves performance further relative to a single-scale approach. The multi-scale mapping framework enables us to analyze the naturally varying spatial acuity of different sensing modalities, revealing how the multi-scale approach captures each modality at its optimal operating point where a single-scale approach does not. It also enables us to weight sensor contributions at different scales according to their utility for place recognition at each scale.
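
The fusion rule implied here can be sketched as a weighted sum in which each sensor's place-match hypothesis at each scale is weighted by its assumed utility at that scale (e.g. WiFi coarse, camera fine); the sensors, weights, and scores below are invented for illustration.

```python
# Toy multi-sensor, multi-scale place recognition fusion.
import numpy as np

n_places = 50
scales = ["coarse", "fine"]
sensors = ["camera", "wifi", "barometer"]

# utility[sensor][scale]: assumed per-scale weighting, normalised per scale below.
utility = {"camera":    {"coarse": 0.2, "fine": 0.7},
           "wifi":      {"coarse": 0.6, "fine": 0.2},
           "barometer": {"coarse": 0.2, "fine": 0.1}}

rng = np.random.default_rng(7)
# hypotheses[sensor][scale] is a normalised match-score vector over places.
hypotheses = {s: {sc: rng.dirichlet(np.ones(n_places)) for sc in scales}
              for s in sensors}

fused = np.zeros(n_places)
for sc in scales:
    total = sum(utility[s][sc] for s in sensors)
    for s in sensors:
        fused += (utility[s][sc] / total) * hypotheses[s][sc]

print("recognised place:", int(np.argmax(fused)))
```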


Subject(s)
Environmental Monitoring; Neural Networks, Computer; Pattern Recognition, Automated; Recognition, Psychology; Remote Sensing Technology; Algorithms; Equipment Design; Humans; Motion; Robotics
6.
J Physiol; 594(22): 6559-6567, 2016 Nov 15.
Article in English | MEDLINE | ID: mdl-26844804

ABSTRACT

Complex brains evolved to comprehend and interact with complex real-world environments. Despite significant progress in our understanding of perceptual representations in the brain, our understanding of how the brain carries out higher-level processing remains largely superficial. This disconnect is understandable: the direct mapping of sensory inputs to perceptual states is readily observed, while mappings between (unknown) stages of processing and intermediate neural states are not. We argue that testing theories of higher-level neural processing on robots in the real world offers a clear path forward, since (1) the complexity of the neural robotic controllers can be staged as necessary, avoiding the almost intractable complexity apparent in even the simplest living nervous systems; (2) robotic controller states are fully observable, avoiding the enormous technical challenge of recording from complete intact brains; and (3) unlike in computational modelling, the real world can stand for itself when using robots, avoiding the computational intractability of simulating the world at an arbitrary level of detail. We suggest that embracing the complex and often unpredictable closed-loop interactions between robotic neuro-controllers and the physical world will bring about a deeper understanding of the role of complex brain function in the high-level processing of information and the control of behaviour.


Subject(s)
Brain/physiology; Nerve Net/physiology; Neurons/physiology; Animals; Brain Mapping/methods; Computer Simulation; Humans; Robotics/methods
7.
Neural Netw; 72: 48-61, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26576467

ABSTRACT

Robotic mapping and localization systems typically operate at either one fixed spatial scale, or over two, combining a local metric map and a global topological map. In contrast, recent high-profile discoveries in neuroscience indicate that animals such as rodents navigate the world using multiple parallel maps, with each map encoding the world at a specific spatial scale. While a number of purely theoretical investigations have hypothesized several possible benefits of such a multi-scale mapping system, no one has comprehensively investigated the potential mapping and place recognition performance benefits for navigating robots in large real-world environments, especially using more than two homogeneous map scales. In this paper we present a biologically inspired multi-scale mapping system mimicking the rodent multi-scale map. Unlike hybrid metric-topological multi-scale robot mapping systems, this new system is homogeneous, distinguishable only by scale, like rodent neural maps. We present methods for training each network to learn and recognize places at a specific spatial scale, and techniques for combining the outputs of these parallel networks. This approach differs from traditional probabilistic robotic methods, where place recognition spatial specificity is passively driven by models of sensor uncertainty; instead, we intentionally create parallel learning systems that learn associations between sensory input and the environment at different spatial scales. We also conduct a systematic series of experiments and parameter studies that determine the effect on performance of different neural map scaling ratios and different numbers of discrete map scales, as sketched below. The results demonstrate that a multi-scale approach universally improves place recognition performance, outperforming existing state-of-the-art robotic navigation algorithms. We analyze the results and discuss the implications with respect to several recent discoveries and theories regarding how multi-scale neural maps are learnt and used in the mammalian brain.
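
The abstract's parameter study can be mimicked in miniature: sweep the number of map scales and the ratio between consecutive scales, and measure place recognition accuracy on synthetic match scores. Everything below, from the score model to the accuracy metric, is an illustrative assumption rather than the paper's protocol.

```python
# Miniature sweep over number of scales and scale ratio.
import numpy as np

rng = np.random.default_rng(13)

def recognition_rate(n_scales: int, ratio: int, trials: int = 200) -> float:
    """Fraction of trials where fused scores peak within 2 places of the truth."""
    hits = 0
    for _ in range(trials):
        scores = rng.random(128) * 0.5             # noisy single-scale match scores
        true = int(rng.integers(2, 126))
        scores[true - 2:true + 3] += 0.4           # spatially consistent true match
        fused = np.zeros(128)
        for s in range(n_scales):                  # max-pool to each coarser scale
            b = ratio ** s
            coarse = np.array([scores[i*b:(i+1)*b].max()
                               for i in range(int(np.ceil(128 / b)))])
            fused += np.repeat(coarse, b)[:128]
        hits += abs(int(np.argmax(fused)) - true) <= 2
    return hits / trials

for n_scales in (1, 2, 4):
    for ratio in (2, 3):
        print(f"scales={n_scales} ratio={ratio}: "
              f"{recognition_rate(n_scales, ratio):.2f}")
```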


Subject(s)
Learning; Neural Networks, Computer; Recognition, Psychology; Robotics/methods; Algorithms; Biomimetics
8.
Neurobiol Learn Mem; 117: 109-121, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25079451

ABSTRACT

We have developed a Hierarchical Look-Ahead Trajectory Model (HiLAM) that incorporates the firing pattern of medial entorhinal grid cells in a planning circuit that includes interactions with the hippocampus and prefrontal cortex. We show the model's flexibility in representing large real-world environments using odometry information obtained from challenging video sequences. We acquire the visual data from a camera mounted on a small tele-operated vehicle. The camera has a panoramic field of view with its focal point approximately 5 cm above ground level, similar to a rat's point of view. Using established algorithms for calculating perceptual speed from the apparent rate of visual change over time, we generate raw dead-reckoning information, which loses spatial fidelity over time due to error accumulation. We rectify this loss of fidelity by exploiting the loop-closure detection ability of a biologically inspired robot navigation model termed RatSLAM. The rectified motion information serves as a velocity input to HiLAM to encode the environment in the form of grid cell and place cell maps. Finally, we show goal-directed path planning results of HiLAM in two different environments: an indoor square maze used in rodent experiments and an outdoor arena more than two orders of magnitude larger than the indoor maze. Together these results bridge for the first time the gap between higher-fidelity bio-inspired navigation models (HiLAM) and more abstracted but highly functional bio-inspired robotic mapping systems (RatSLAM), and move from simulated environments into real-world studies in rodent-sized arenas and beyond.
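
The perceptual-speed step can be sketched by taking the mean absolute intensity change between consecutive panoramic frames as a speed proxy and integrating it into a (drifting) dead-reckoned distance; the synthetic frames and speed gain below are stand-ins, not the paper's calibrated algorithm.

```python
# Speed proxy from the apparent rate of visual change between frames.
import numpy as np

def perceptual_speed(prev: np.ndarray, curr: np.ndarray, gain: float = 1.0) -> float:
    """Approximate speed as the mean absolute intensity change between frames."""
    return gain * float(np.abs(curr.astype(float) - prev.astype(float)).mean())

rng = np.random.default_rng(3)
frames = [rng.integers(0, 256, (32, 128)) for _ in range(10)]  # fake panoramas

distance = 0.0
for prev, curr in zip(frames, frames[1:]):
    distance += perceptual_speed(prev, curr) * 0.1  # 0.1 s between frames
print(f"dead-reckoned distance (arbitrary units): {distance:.1f}")
```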


Subject(s)
Goals; Models, Neurological; Neural Networks, Computer; Robotics/methods; Spatial Navigation; Algorithms; Animals; Entorhinal Cortex/physiology; Environment; Hippocampus/physiology; Humans; Motion Perception/physiology; Neurons/physiology; Prefrontal Cortex/physiology; Recognition, Psychology; Visual Perception/physiology
9.
Philos Trans R Soc Lond B Biol Sci; 369(1655), 2014 Nov 05.
Article in English | MEDLINE | ID: mdl-25267826

ABSTRACT

Mobile robots and animals alike must effectively navigate their environments in order to achieve their goals. For animals, goal-directed navigation facilitates finding food, seeking shelter, or migrating; similarly, robots perform goal-directed navigation to find a charging station, get out of the rain, or guide a person to a destination. This similarity in tasks extends to the environment as well: increasingly, mobile robots operate in the same underwater, ground and aerial environments that animals do. Yet despite these similarities, goal-directed navigation research in robotics and biology has proceeded largely in parallel, linked only by a small amount of interdisciplinary research spanning both areas. Most state-of-the-art robotic navigation systems employ a range of sensors, world representations and navigation algorithms that seem far removed from what we know of how animals navigate; these robotic systems are shaped by key principles of navigation in 'real-world' environments, including dealing with uncertainty in sensing, landmark observation and world modelling. By contrast, biomimetic animal navigation models produce plausible animal navigation behaviour in a range of laboratory experimental navigation paradigms, typically without addressing many of these robotic navigation principles. In this paper, we attempt to link robotics and biology by reviewing the current state of the art in conventional and biomimetic goal-directed navigation models, focusing on the key principles of goal-oriented robotic navigation and the extent to which these principles have been adopted by biomimetic navigation models, and why.


Subject(s)
Biomimetics/methods; Goals; Locomotion; Robotics/methods; Animals
10.
PLoS Comput Biol; 8(8): e1002651, 2012.
Article in English | MEDLINE | ID: mdl-22916006

ABSTRACT

Spatial navigation requires the processing of complex, disparate and often ambiguous sensory data. The neurocomputations underpinning this vital ability remain poorly understood. Controversy remains as to whether multimodal sensory information must be combined into a unified representation, consistent with Tolman's "cognitive map", or whether differential activation of independent navigation modules suffices to explain observed navigation behaviour. Here we demonstrate that key neural correlates of spatial navigation in darkness cannot be explained if the path integration system acts independently of boundary (landmark) information. In vivo recordings demonstrate that the rodent head direction (HD) system becomes unstable within three minutes without vision. In contrast, rodents maintain stable place fields and grid fields for over half an hour without vision. Using a simple HD error model, we show analytically that idiothetic path integration (iPI) alone cannot maintain a stable place representation beyond two to three minutes. We then use a measure of place stability based on information-theoretic principles to prove that featureless boundaries alone cannot improve localization above chance level. Having shown that neither iPI nor boundaries alone are sufficient, we address whether their combination is sufficient and, we conjecture, necessary to maintain place stability for prolonged periods without vision. We addressed this question in simulations and robot experiments using a navigation model comprising a particle filter and a boundary map. The model replicates published experimental results on place field and grid field stability without vision, and makes testable predictions, including place field splitting and grid field rescaling if the true arena geometry differs from the acquired boundary map. We discuss our findings in light of current theories of animal navigation and neuronal computation, and elaborate on their implications and significance for the design, analysis and interpretation of experiments.
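
A minimal sketch of the model class used in the robot experiments, assuming a 1-D arena: a particle filter whose motion step is noisy idiothetic path integration and whose measurement step weights particles by agreement between a sensed boundary distance and the map. Noise levels and the sensor model are illustrative.

```python
# Particle filter combining noisy path integration with featureless boundaries.
import numpy as np

rng = np.random.default_rng(11)
arena = 1.0                      # 1-D arena with walls at 0 and 1
true_x, n = 0.30, 500
particles = rng.uniform(0, arena, n)

for _ in range(50):
    v = 0.01                                      # commanded movement
    true_x = min(true_x + v, arena)
    particles += v + rng.normal(0, 0.005, n)      # iPI: motion plus accumulating noise
    particles = np.clip(particles, 0, arena)

    sensed = min(true_x, arena - true_x)          # distance to nearest featureless wall
    expected = np.minimum(particles, arena - particles)
    w = np.exp(-((expected - sensed) ** 2) / (2 * 0.02 ** 2)) + 1e-12
    particles = particles[rng.choice(n, n, p=w / w.sum())]  # resample by boundary fit

# The mirror-image ambiguity of a featureless wall is resolved because path
# integration moves all particles coherently, starving the wrong mode.
print(f"true position {true_x:.3f}, estimate {particles.mean():.3f}")
```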


Subject(s)
Cognition; Darkness; Animals; Rats; Vision, Ocular
11.
PLoS One; 6(10): e25687, 2011.
Article in English | MEDLINE | ID: mdl-21991332

ABSTRACT

The head direction (HD) system in mammals contains neurons that fire to represent the direction the animal is facing in its environment. The ability of these cells to reliably track head direction even after the removal of external sensory cues implies that the HD system is calibrated to function effectively using just internal (proprioceptive and vestibular) inputs. Rat pups and other infant mammals display stereotypical warm-up movements prior to locomotion in novel environments, and similar warm-up movements are seen in adult mammals with certain brain lesion-induced motor impairments. In this study we propose that synaptic learning mechanisms, in conjunction with appropriate movement strategies based on warm-up movements, can calibrate the HD system so that it functions effectively even in darkness. To examine the link between physical embodiment and neural control, and to determine whether the system is robust to real-world phenomena, we implemented the synaptic mechanisms in a spiking neural network and tested it on a mobile robot platform. Results show that the combination of the synaptic learning mechanisms and warm-up movements reliably calibrates the HD system so that it accurately tracks real-world head direction, and that calibration breaks down in systematic ways if certain movements are omitted. This work confirms that targeted, embodied behaviour can be used to calibrate neural systems, demonstrates that 'grounding' modelled biological processes in the real world can reveal underlying functional principles (supporting the importance of robotics to biology), and proposes a functional role for the stereotypical behaviours seen in infant mammals and in animals with certain motor deficits. We conjecture that these calibration principles may extend to other neural systems involved in motion tracking and the representation of space, such as grid cells in the entorhinal cortex.
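
The calibration principle can be sketched as a delta rule that, during vision-guided warm-up rotations, tunes the gain converting vestibular input into head-direction updates; the scalar HD state, gains, and learning rate below are illustrative assumptions, not the paper's spiking network.

```python
# Delta-rule calibration of a vestibular-to-HD gain during warm-up movements.
import numpy as np

rng = np.random.default_rng(5)
true_gain, learned_gain = 1.0, 0.6   # HD update gain starts miscalibrated
lr = 0.5                             # learning rate for the toy delta rule

for _ in range(300):                 # warm-up: repeated head turns with vision
    vestibular = rng.normal(0.2, 0.05)            # angular-velocity signal
    internal_turn = learned_gain * vestibular     # HD-ring update this step
    visual_turn = true_gain * vestibular          # rotation reported by vision
    error = visual_turn - internal_turn           # visual teaching signal
    learned_gain += lr * error * vestibular       # delta-rule synaptic update

# Once calibrated, the HD system can track heading in darkness using
# vestibular input alone; omitting the warm-up turns leaves the gain wrong.
print(f"learned gain after warm-up: {learned_gain:.3f} (target {true_gain})")
```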


Subject(s)
Action Potentials/physiology; Movement/physiology; Neurons/physiology; Robotics; Animals; Calibration; Computer Simulation; Darkness; Head/physiology; Rats
12.
Hippocampus; 21(6): 647-660, 2011 Jun.
Article in English | MEDLINE | ID: mdl-20232384

ABSTRACT

The CA3 region of the hippocampus has long been proposed as an autoassociative network performing pattern completion on known inputs. The dentate gyrus (DG) region is often proposed as a network performing the complementary function of pattern separation. Neural models of pattern completion and separation generally designate explicit learning phases to encode new information and assume an ideal fixed threshold at which to stop learning new patterns and begin recalling known patterns. Memory systems are significantly more complex in practice, with the degree of memory recall depending on context-specific goals. Here, we present our spike-timing separation and completion (STSC) model of the entorhinal cortex (EC), DG, and CA3 network, ascribing to each region a role similar to that in existing models but adding a temporal dimension by using a spiking neural network. Simulation results demonstrate that (a) spike-timing dependent plasticity in the EC-CA3 synapses provides a pattern completion ability without recurrent CA3 connections, (b) the race between activation of CA3 cells via EC-CA3 synapses and activation of the same cells via DG-CA3 synapses distinguishes novel from known inputs, and (c) modulation of the EC-CA3 synapses adjusts the learned versus test input similarity required to evoke a direct CA3 response prior to any DG activity, thereby adjusting the pattern completion threshold. These mechanisms suggest that spike timing can arbitrate between learning and recall based on the novelty of each individual input, ensuring control of the learn-recall decision resides in the same subsystem as the learned memories themselves. The proposed modulatory signal does not override this decision but biases the system toward either learning or recall. The model provides an explanation for empirical observations that a reduction in novelty produces a corresponding reduction in the latency of responses in CA3 and CA1.
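
The spike-timing race can be caricatured in a few lines: a CA3 unit responds early via the direct EC pathway when the input is similar enough to a stored pattern (recall), and only later via the slower DG pathway when it is not (encoding). The patterns, threshold, and latencies below are toy assumptions, not the STSC model's dynamics.

```python
# Toy race between the fast EC->CA3 route and the slower EC->DG->CA3 route.
import numpy as np

stored = np.array([1, 1, 0, 0, 1, 0, 1, 0], dtype=float)   # a learned EC pattern

def ca3_response(ec_input: np.ndarray, threshold: float = 0.75):
    """Early direct-pathway spike for familiar input, late DG-driven spike otherwise."""
    similarity = float(ec_input @ stored) / stored.sum()    # direct EC->CA3 drive
    if similarity >= threshold:
        return "recall", 1        # direct pathway wins the race: treat as known
    return "encode", 3            # DG pathway fires later: treat as novel

rng = np.random.default_rng(2)
for flips in (0, 1, 4):
    probe = stored.copy()
    idx = rng.choice(len(probe), flips, replace=False)      # corrupt some bits
    probe[idx] = 1 - probe[idx]
    mode, latency = ca3_response(probe)
    print(f"{flips} flipped bits -> {mode} (latency {latency} time steps)")
```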


Subject(s)
CA3 Region, Hippocampal/physiology; Computer Simulation; Mental Recall/physiology; Pattern Recognition, Visual/physiology; Acetylcholine/physiology; Algorithms; CA3 Region, Hippocampal/cytology; Dopamine/physiology; Entorhinal Cortex/physiology; Learning/physiology; Models, Neurological; Neural Pathways/cytology; Neural Pathways/physiology; Synapses/physiology; Time Factors
13.
PLoS Comput Biol; 6(11): e1000995, 2010 Nov 11.
Article in English | MEDLINE | ID: mdl-21085643

ABSTRACT

To successfully navigate their habitats, many mammals use a combination of two mechanisms, path integration and calibration using landmarks, which together enable them to estimate their location and orientation, or pose. In large natural environments, both mechanisms are characterized by uncertainty: the path integration process is subject to the accumulation of error, while landmark calibration is limited by perceptual ambiguity. It remains unclear how animals form coherent spatial representations in the presence of such uncertainty. Navigation research using robots has determined that uncertainty can be effectively addressed by maintaining multiple probabilistic estimates of a robot's pose. Here we show how conjunctive grid cells in the dorsocaudal medial entorhinal cortex (dMEC) may maintain multiple estimates of pose using a brain-based robot navigation system known as RatSLAM. Based on both rodent spatially responsive cells and functional engineering principles, the cells at the core of the RatSLAM computational model have characteristics similar to rodent grid cells, which we demonstrate by replicating the seminal Moser experiments. We apply the RatSLAM model to a new experimental paradigm designed to examine the responses of a robot or animal in the presence of perceptual ambiguity. Our computational approach enables us to observe short-term population coding of multiple location hypotheses, a phenomenon that would not be easily observable in rodent recordings. We present behavioral and neural evidence demonstrating that the conjunctive grid cells maintain and propagate multiple estimates of pose, enabling the correct pose estimate to be resolved over time even without uniquely identifying cues. While recent research has focused on the grid-like firing characteristics, accuracy and representational capacity of grid cells, our results identify a possible critical and unique role for conjunctive grid cells in filtering sensory uncertainty. We anticipate that our study will be a starting point for animal experiments that test navigation in perceptually ambiguous environments.
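
Multi-hypothesis pose maintenance can be sketched with a population code over a ring of poses, here approximated by a plain histogram filter standing in for RatSLAM's attractor dynamics: an ambiguous landmark leaves two activity packets alive, motion propagates both, and a later unique cue resolves them. All data are invented.

```python
# Histogram-filter caricature of multiple coexisting pose hypotheses.
import numpy as np

n = 36                                   # discrete poses on a ring
belief = np.ones(n) / n

def move(belief: np.ndarray, step: int = 1) -> np.ndarray:
    return np.roll(belief, step)         # path integration shifts the packets

def observe(belief: np.ndarray, likelihood: np.ndarray) -> np.ndarray:
    post = belief * likelihood
    return post / post.sum()

ambiguous = np.full(n, 0.01); ambiguous[[5, 23]] = 1.0   # two identical landmarks
unique = np.full(n, 0.01); unique[14] = 1.0              # later, a unique cue

belief = observe(belief, ambiguous)      # two activity packets coexist
for _ in range(9):
    belief = move(belief)                # both packets propagate with motion
belief = observe(belief, unique)         # the packet that reaches 14 survives
print("resolved pose:", int(np.argmax(belief)), "(started at 5)")
```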


Subject(s)
Models, Neurological; Neural Networks, Computer; Robotics/methods; Visual Fields/physiology; Animals; Entorhinal Cortex/cytology; Entorhinal Cortex/physiology; Homing Behavior/physiology; Rats