Results 1 - 5 of 5
1.
Philos Trans A Math Phys Eng Sci ; 379(2194): 20200091, 2021 Apr 05.
Article in English | MEDLINE | ID: mdl-33583264

ABSTRACT

The most mature aspect of applying artificial intelligence (AI)/machine learning (ML) to problems in the atmospheric sciences is likely post-processing of model output. This article provides some history and current state of the science of post-processing with AI for weather and climate models. Deriving from the discussion at the 2019 Oxford workshop on Machine Learning for Weather and Climate, this paper also presents thoughts on medium-term goals to advance such use of AI, which include assuring that algorithms are trustworthy and interpretable, adherence to FAIR data practices to promote usability, and development of techniques that leverage our physical knowledge of the atmosphere. The coauthors propose several actionable items and have initiated one of those: a repository for datasets from various real weather and climate problems that can be addressed using AI. Five such datasets are presented and permanently archived, together with Jupyter notebooks to process them and assess the results in comparison with a baseline technique. The coauthors invite the readers to test their own algorithms in comparison with the baseline and to archive their results. This article is part of the theme issue 'Machine learning for weather and climate modelling'.
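The statistical post-processing described above can be illustrated with a minimal sketch: a linear bias correction (the simplest form of model output statistics) fitted to paired forecast/observation data. The variable names and toy data below are hypothetical, purely for illustration; they are not taken from the archived benchmark datasets.

```python
import numpy as np

# Toy paired data: raw model forecasts and matching observations
# (hypothetical values for illustration only).
forecasts = np.array([21.0, 18.5, 25.2, 30.1, 15.4, 22.8])
observations = np.array([19.5, 17.0, 24.0, 28.0, 14.2, 21.5])

# Fit a simple linear post-processing model: obs ~ a * forecast + b.
a, b = np.polyfit(forecasts, observations, deg=1)

def post_process(raw_forecast):
    """Apply the learned bias correction to a raw model forecast."""
    return a * raw_forecast + b

# In-sample, the least-squares correction cannot increase the RMSE,
# since the identity map is itself one of the candidate affine fits.
corrected = post_process(forecasts)
raw_rmse = np.sqrt(np.mean((forecasts - observations) ** 2))
corrected_rmse = np.sqrt(np.mean((corrected - observations) ** 2))
```

More capable ML post-processors (random forests, neural networks) replace the affine map with a learned nonlinear one, but are evaluated against exactly this kind of simple baseline.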

2.
IEEE Trans Neural Netw Learn Syst ; 29(12): 6132-6144, 2018 Dec.
Article in English | MEDLINE | ID: mdl-29994007

ABSTRACT

We present here a learning system using the iCub humanoid robot and the SpiNNaker neuromorphic chip to solve the real-world task of object-specific attention. Integrating spiking neural networks with robots introduces considerable complexity for questionable benefit if the objective is simply task performance. But, we suggest, in a cognitive robotics context, where the goal is understanding how to compute, such an approach may yield useful insights into neural architecture as well as learned behavior, especially if dedicated neural hardware is available. Recent advances in cognitive robotics and neuromorphic processing now make such systems possible. Using a scalable, structured, modular approach, we build a spiking neural network where the effects and impact of learning can be predicted and tested, and the network can be scaled or extended to new tasks automatically. We introduce several enhancements to a basic network and show how they can be used to direct performance toward behaviorally relevant goals. Results show that, using a simple classical spike-timing-dependent plasticity (STDP) rule on selected connections, we can get the robot (and network) to progress from poor task-specific performance to good performance. Behaviorally relevant STDP appears to contribute strongly to positive learning ("do this") but less to negative learning ("don't do that"). In addition, we observe that the effect of structural enhancements tends to be cumulative. The overall system suggests that it is by being able to exploit combinations of effects, rather than any one effect or property in isolation, that spiking networks can achieve compelling, task-relevant behavior.


Subject(s)
Cognition , Learning , Models, Neurological , Neural Networks, Computer , Neurons/physiology , Robotics , Action Potentials/physiology , Attention , Humans , Motivation , Photic Stimulation , Robotics/instrumentation
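The classical pair-based STDP rule referred to in the abstract above can be sketched as follows. The amplitudes and time constants here are common illustrative defaults, not the values used in the paper's SpiNNaker implementation.

```python
import math

# Illustrative STDP parameters (hypothetical, not taken from the paper).
A_PLUS, A_MINUS = 0.01, 0.012     # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # time constants in ms

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair (times in ms).

    Pre fires before post (dt > 0): potentiation, i.e. "do this".
    Post fires before pre (dt < 0): depression, i.e. "don't do that".
    The magnitude decays exponentially with the spike-time difference.
    """
    dt = t_post - t_pre
    if dt > 0:
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    if dt < 0:
        return -A_MINUS * math.exp(dt / TAU_MINUS)
    return 0.0
```

Applying this rule only on selected connections, as the paper does, confines learning to the pathways where causal pre/post timing is behaviorally meaningful.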
3.
Sci Rep ; 5: 12553, 2015 Jul 31.
Article in English | MEDLINE | ID: mdl-26228922

ABSTRACT

The mammalian visual system has been extensively studied since Hubel and Wiesel's work on cortical feature maps in the 1960s. Feature maps representing the cortical neurons' ocular dominance, orientation and direction preferences have been well explored experimentally and computationally. The predominant view has been that direction selectivity (DS), in particular, is a feature entirely dependent upon visual experience and as such does not exist prior to eye opening (EO). However, recent experimental work has shown that there is in fact a DS bias already present at EO. In the current work we use a computational model to reproduce the main results of this experimental work and show that the DS bias present at EO could arise purely from the cortical architecture, without any explicit coding for DS and prior to any self-organising process facilitated by spontaneous activity or training. We explore how this latent DS (and its corresponding cortical map) is refined by training and show that the time-course of development exhibits similar features to those seen in the experimental study. In particular, we show that a specific cortical connectivity, or 'proto-architecture', is required for DS to mature rapidly and correctly with visual experience.


Subject(s)
Models, Neurological , Visual Cortex/physiology , Visual Perception/physiology , Humans , Long-Term Potentiation , Models, Biological , Neuronal Plasticity , Orientation , Visual Fields/physiology
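Direction selectivity of the kind studied above, whether in a model unit or a recorded neuron, is conventionally quantified with a direction selectivity index, DSI = (R_pref - R_null) / (R_pref + R_null), where the null direction is 180 degrees opposite the preferred one. This sketch assumes a simple vector of firing rates over tested motion directions; the function name and toy responses are my own.

```python
import numpy as np

def direction_selectivity_index(rates, directions_deg):
    """DSI = (R_pref - R_null) / (R_pref + R_null).

    The preferred direction is the one evoking the largest response;
    the null direction is the tested direction closest to 180 degrees
    opposite the preferred one.
    """
    rates = np.asarray(rates, dtype=float)
    directions = np.asarray(directions_deg, dtype=float) % 360
    i_pref = int(np.argmax(rates))
    null_dir = (directions[i_pref] + 180) % 360
    # Angular distance to the null direction, wrapped into [-180, 180).
    ang_dist = np.abs((directions - null_dir + 180) % 360 - 180)
    i_null = int(np.argmin(ang_dist))
    r_pref, r_null = rates[i_pref], rates[i_null]
    return (r_pref - r_null) / (r_pref + r_null)

# A unit responding strongly at 90 deg and weakly at 270 deg is
# direction selective (DSI close to 1); a flat response gives DSI = 0.
dsi = direction_selectivity_index([2, 5, 30, 5, 2, 1, 3, 2],
                                  [0, 45, 90, 135, 180, 225, 270, 315])
```

A DSI near 0 indicates no direction preference; values approaching 1 indicate strong selectivity, which is how the maturation of latent DS with visual experience can be tracked over training.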
4.
PLoS One ; 9(7): e102908, 2014.
Article in English | MEDLINE | ID: mdl-25054209

ABSTRACT

Self-organising artificial neural networks are a popular tool for studying visual system development, in particular the cortical feature maps present in real systems that represent properties such as ocular dominance (OD), orientation selectivity (OR) and direction selectivity (DS). They are also potentially useful in artificial systems, for example in robotics, where the ability to extract and learn features from the environment in an unsupervised way is important. In this computational study we explore a DS map that is already latent in a simple artificial network. This latent selectivity arises purely from the cortical architecture, without any explicit coding for DS and prior to any self-organising process facilitated by spontaneous activity or training. We find DS maps with local patchy regions that exhibit features similar to maps derived experimentally and from previous modelling studies. We explore the consequences of changes to the afferent and lateral connectivity to establish the key features of this proto-architecture that support DS.


Subject(s)
Models, Neurological , Neural Networks, Computer , Neurons/physiology , Visual Cortex/physiology , Action Potentials/physiology , Computer Simulation , Dominance, Ocular/physiology , Humans , Orientation/physiology
5.
Neural Netw ; 44: 6-21, 2013 Aug.
Article in English | MEDLINE | ID: mdl-23545539

ABSTRACT

This work investigates self-organising cortical feature maps (SOFMs) based upon the Kohonen Self-Organising Map (SOM) but implemented with spiking neural networks. In future work, the feature maps are intended as the basis for a sensorimotor controller for an autonomous humanoid robot. Traditional SOM methods require some modifications to be useful for autonomous robotic applications. Ideally, the map training process should be self-regulating and should not require predefined training files or the usual SOM parameter-reduction schedules. It would also be desirable if the organised map had some flexibility to accommodate new information whilst preserving previously learnt patterns. Here, methods are described which have been used to develop a cortical motor map training system that goes some way towards addressing these issues. The work is presented under the general term 'Adaptive Plasticity', and the main contribution is the development of a 'plasticity resource' (PR), modelled as a global parameter which expresses the rate of map development and is related directly to learning on the afferent (input) connections. The PR is used to control map training in place of a traditional learning-rate parameter. In conjunction with the PR, random generation of inputs from a set of exemplar patterns is used rather than predefined datasets, enabling maps to be trained without deciding in advance how much data is required. An added benefit of the PR is that, unlike a traditional learning rate, it can increase as well as decrease in response to the demands of the input, and so allows the map to accommodate new information when the inputs are changed during training.


Subject(s)
Adaptation, Physiological , Cerebral Cortex , Neural Networks, Computer , Psychomotor Performance , Robotics/methods , Neurons
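The 'plasticity resource' idea above, a global quantity tied to afferent weight change that gates learning in place of a fixed learning-rate schedule, can be sketched on a rate-based SOM. The update rules and constants below are simplified assumptions of mine, not the paper's spiking implementation; the point is only that the step size is driven by recent map change rather than by a predefined decay schedule.

```python
import numpy as np

rng = np.random.default_rng(0)
MAP_SIZE, DIM = 10, 3
weights = rng.random((MAP_SIZE, MAP_SIZE, DIM))
pr = 1.0  # plasticity resource: scales learning instead of a fixed schedule

def train_step(x, sigma=1.5):
    """One SOM update whose step size is gated by the plasticity resource."""
    global pr, weights
    # Best-matching unit for input x.
    dists = np.linalg.norm(weights - x, axis=2)
    bi, bj = np.unravel_index(np.argmin(dists), dists.shape)
    # Gaussian neighbourhood around the winner.
    ii, jj = np.meshgrid(np.arange(MAP_SIZE), np.arange(MAP_SIZE),
                         indexing="ij")
    h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma ** 2))
    delta = pr * h[..., None] * (x - weights)
    weights += delta
    # PR is depleted by afferent change but slowly replenished, so it can
    # rise again when novel inputs cause the map to reorganise.
    change = float(np.abs(delta).mean())
    pr = float(np.clip(pr - 5.0 * change + 0.001, 0.0, 1.0))
    return change

for _ in range(200):
    train_step(rng.random(DIM))
```

Because the PR falls as the map settles and recovers when inputs change, training self-terminates on a stable input set yet re-opens plasticity for new patterns, which is the behaviour the abstract attributes to the PR.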