Results 1 - 11 of 11
1.
Front Neurorobot ; 9: 7, 2015.
Article in English | MEDLINE | ID: mdl-26257640

ABSTRACT

Collaborative robots, or co-robots, are semi-autonomous robotic agents designed to work alongside humans in shared workspaces. To be effective, co-robots require the ability to respond and adapt to dynamic scenarios encountered in natural environments. One way to achieve this is through exploratory learning, or "learning by doing," an unsupervised method in which co-robots are able to build an internal model for motor planning and coordination based on real-time sensory inputs. In this paper, we present an adaptive neural network-based system for co-robot control that employs exploratory learning to achieve the coordinated motor planning needed to navigate toward, reach for, and grasp distant objects. To validate this system, we used the 11-degree-of-freedom RoPro Calliope mobile robot. Through motor babbling of its wheels and arm, the Calliope learned how to relate visual and proprioceptive information to achieve hand-eye-body coordination. By continually evaluating sensory inputs and externally provided goal directives, the Calliope was then able to autonomously select the appropriate wheel and joint velocities needed to perform its assigned task, such as following a moving target or retrieving an indicated object.
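To make the "learning by doing" idea concrete, here is a minimal sketch of motor babbling in which a hypothetical two-joint planar arm stands in for the Calliope's 11 degrees of freedom: random commands are issued, the resulting sensory outcomes are remembered, and the stored command-outcome pairs are inverted by nearest-neighbor lookup to reach a requested target. The arm geometry, sample count, and lookup scheme are illustrative assumptions, not the paper's neural controller.

```python
# Minimal motor-babbling sketch: a toy 2-joint planar arm stands in for the
# Calliope's 11 degrees of freedom; the nearest-neighbor inverse model is an
# illustrative assumption, not the paper's adaptive neural network.
import numpy as np

rng = np.random.default_rng(0)
L1, L2 = 0.5, 0.4                       # link lengths (m), arbitrary toy values

def forward(q):
    """Hand position for joint angles q = (shoulder, elbow)."""
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

# Babbling phase: issue random commands and remember their sensory outcomes.
commands = rng.uniform(-np.pi, np.pi, size=(2000, 2))
outcomes = np.array([forward(q) for q in commands])

def reach(target):
    """Exploratory inverse model: reuse the command whose outcome was closest."""
    return commands[np.argmin(np.linalg.norm(outcomes - target, axis=1))]

target = np.array([0.6, 0.3])
q = reach(target)
print("command", np.round(q, 2), "reaches", np.round(forward(q), 2), "for target", target)
```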

2.
Front Neurosci ; 9: 501, 2015.
Article in English | MEDLINE | ID: mdl-26834535

ABSTRACT

Freely behaving organisms need to rapidly calibrate their perceptual, cognitive, and motor decisions based on continuously changing environmental conditions. These plastic changes include sharpening or broadening of cognitive and motor attention and learning to match the behavioral demands that are imposed by changing environmental statistics. This article proposes that a shared circuit design for such flexible decision-making is used in specific cognitive and motor circuits, and that both types of circuits use acetylcholine to modulate choice selectivity. Such task-sensitive control is proposed to regulate the thalamocortical choice of the critical features that are cognitively attended and that are incorporated through learning into prototypes of visual recognition categories. A cholinergically-modulated process of vigilance control determines whether a recognition category and its attended features are abstract (low vigilance) or concrete (high vigilance). Homologous neural mechanisms of cholinergic modulation are proposed to focus attention and learn a multimodal map within the deeper layers of the superior colliculus. This map enables visual, auditory, and planned movement commands to compete for attention, leading to selection of a winning position that controls where the next saccadic eye movement will go. Such map learning may be viewed as a kind of attentive motor category learning. The article hereby explicates a link between attention, learning, and cholinergic modulation during decision-making within both cognitive and motor systems. Homologs between the mammalian superior colliculus and the avian optic tectum lead to predictions about how multimodal map learning may occur in the mammalian and avian brain and how such learning may be modulated by acetylcholine.
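As a toy illustration of the proposed competition on a multimodal map, the sketch below sums hypothetical visual, auditory, and planned-movement drives over a shared position axis and lets a single gain term, standing in for cholinergic modulation of choice selectivity, sharpen or broaden the selection of the next saccade target. The maps, input values, and softmax-style competition are illustrative assumptions, not the article's circuit model.

```python
# Hedged sketch: visual, auditory, and planned-movement drives compete on a
# shared position map; a gain term stands in for cholinergic modulation of
# choice selectivity. All maps and values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
positions = np.linspace(-40, 40, 81)                          # horizontal positions (deg)
visual   = 1.0 * np.exp(-0.5 * ((positions - 10) / 3) ** 2)   # salient visual target
auditory = 0.6 * np.exp(-0.5 * ((positions + 20) / 5) ** 2)   # weaker auditory source
planned  = 0.4 * np.exp(-0.5 * ((positions - 10) / 8) ** 2)   # movement plan

def choose_saccade(gain, n_trials=5):
    """Sample saccade targets; higher gain -> sharper, more reliable selection."""
    drive = visual + auditory + planned
    p = np.exp(gain * drive)
    p /= p.sum()
    return rng.choice(positions, size=n_trials, p=p)

print("low selectivity :", np.round(choose_saccade(gain=1.0), 1))
print("high selectivity:", np.round(choose_saccade(gain=12.0), 1))
```

With the low gain the sampled targets scatter across the map; with the high gain they cluster tightly on the position where the summed multimodal drive peaks.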

3.
Article in English | MEDLINE | ID: mdl-22754524

ABSTRACT

Many cortical networks contain recurrent architectures that transform input patterns before storing them in short-term memory (STM). Theorems from the 1970s showed how feedback signal functions in rate-based recurrent on-center off-surround networks control this process. A sigmoid signal function induces a quenching threshold below which inputs are suppressed as noise and above which they are contrast-enhanced before pattern storage. This article describes how changes in feedback signaling, neuromodulation, and recurrent connectivity may alter pattern processing in recurrent on-center off-surround networks of spiking neurons. In spiking neurons, fast, medium, and slow after-hyperpolarization (AHP) currents control sigmoid signal threshold and slope. Modulation of AHP currents by acetylcholine (ACh) can change sigmoid shape and, with it, network dynamics. For example, decreasing signal function threshold and increasing slope can lengthen the persistence of a partially contrast-enhanced pattern, increase the number of active cells stored in STM, or, if connectivity is distance-dependent, cause cell activities to cluster. These results clarify how cholinergic modulation by the basal forebrain may alter the vigilance of category learning circuits, and thus their sensitivity to predictive mismatches, thereby controlling whether learned categories code concrete or abstract features, as predicted by Adaptive Resonance Theory. The analysis includes global, distance-dependent, and interneuron-mediated circuits. With an appropriate degree of recurrent excitation and inhibition, spiking networks maintain a partially contrast-enhanced pattern for 800 ms or longer after stimulus offset, then resolve either to no stored pattern or to winner-take-all (WTA) stored patterns with one or multiple winners. Strengthening inhibition prolongs a partially contrast-enhanced pattern by slowing the transition to stability, while strengthening excitation causes more winners when the network stabilizes.
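For readers who want the rate-based version of these dynamics in runnable form, the following sketch simulates a small recurrent on-center off-surround shunting network with a sigmoid signal function: inputs below the quenching threshold are suppressed as noise, while larger inputs are contrast-enhanced and can persist after input offset. Parameters are illustrative choices, not values from the spiking model analyzed in the article.

```python
# Rate-based sketch of a recurrent on-center off-surround shunting network
# with a sigmoid signal function; parameters are illustrative, not fitted.
import numpy as np

A, B = 1.0, 1.0                          # passive decay rate and upper bound

def f(x, theta=0.2, n=4):
    """Sigmoid signal function with threshold `theta` and steepness `n`."""
    x = np.maximum(x, 0.0)
    return x**n / (theta**n + x**n)

def run(I, steps=4000, dt=0.005, input_off=1000):
    x = np.zeros_like(I)
    for t in range(steps):
        inp = I if t < input_off else np.zeros_like(I)   # input is then removed
        s = f(x)
        # On-center: self-excitation; off-surround: inhibition from all other cells.
        dx = -A * x + (B - x) * (inp + s) - x * (s.sum() - s)
        x = x + dt * dx
    return x

I = np.array([0.05, 0.10, 0.60, 0.70, 0.65])   # two noise-level and three strong inputs
# Noise-level inputs fall below the quenching threshold and decay to zero; the
# stronger inputs are contrast-enhanced (toward one or a few winners) and their
# trace persists in short-term memory after the input is removed.
print(np.round(run(I), 3))
```

Raising the sigmoid steepness or strengthening the off-surround term in this sketch changes how many winners survive, which is the qualitative behavior the abstract attributes to AHP modulation and connectivity changes.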

4.
IEEE Pulse ; 3(1): 47-50, 2012 Jan.
Article in English | MEDLINE | ID: mdl-22344952

ABSTRACT

Researchers at the Boston University (BU) Neuromorphics Laboratory, part of the National Science Foundation (NSF)-sponsored Center of Excellence for Learning in Education, Science, and Technology (CELEST), are working in collaboration with engineers and scientists at Hewlett-Packard (HP) to implement neural models of intelligent processes for the next generation of dense, low-power computer hardware, which will use memristive technology to bring data closer to the processor where computation occurs. The HP and BU teams are jointly designing an optimal infrastructure, simulation, and software platform to build an artificial brain. The resulting Cog Ex Machina (Cog) software platform has been successfully used to implement a large-scale, multicomponent brain system that simulates some key rat behavioral results in a virtual environment and has been applied to control robotic platforms as they learn to interact with their environment.


Subject(s)
Brain/physiology , Models, Neurological , Nerve Net/physiology , Neural Networks, Computer , Software , Animals , Humans
5.
J Comput Neurosci ; 32(2): 253-80, 2012 Apr.
Article in English | MEDLINE | ID: mdl-21779754

ABSTRACT

Recurrent networks are ubiquitous in the brain, where they enable a diverse set of transformations during perception, cognition, emotion, and action. It has been known since the 1970s how, in rate-based recurrent on-center off-surround networks, the choice of feedback signal function can control the transformation of input patterns into activity patterns that are stored in short-term memory. A sigmoid signal function may, in particular, control a quenching threshold below which inputs are suppressed as noise and above which they may be contrast-enhanced before the resulting activity pattern is stored. The threshold and slope of the sigmoid signal function determine the degree of noise suppression and of contrast enhancement. This article analyzes how sigmoid signal functions and their shape may be determined in biophysically realistic spiking neurons. Combinations of fast, medium, and slow after-hyperpolarization (AHP) currents, and their modulation by acetylcholine (ACh), can control sigmoid signal threshold and slope. Instead of the simple gain in excitability previously attributed to ACh, cholinergic modulation may cause a translation of the sigmoid threshold. This property clarifies how activation of ACh by basal forebrain circuits, notably the nucleus basalis of Meynert, may alter the vigilance of category learning circuits, and thus their sensitivity to predictive mismatches, thereby controlling whether learned categories code concrete or abstract information, as predicted by Adaptive Resonance Theory.


Subject(s)
Acetylcholine/metabolism , Action Potentials/physiology , Cerebral Cortex/cytology , Models, Neurological , Neurons/physiology , Acetylcholine/pharmacology , Action Potentials/drug effects , Animals , Feedback, Physiological/physiology , Humans , Nerve Net/drug effects , Nerve Net/physiology , Neurons/drug effects
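The core claim of entry 5, that cholinergic modulation translates the threshold of the sigmoid signal function rather than simply scaling excitability, can be pictured with a short sketch. The logistic form, the particular threshold and slope values, and the size of the ACh-induced shift below are illustrative assumptions, not the biophysical AHP model analyzed in the paper.

```python
# Sketch of a sigmoid signal function whose threshold and slope stand in for
# the net effect of fast/medium/slow AHP currents; the ACh effect is modeled
# here simply as a leftward translation of the threshold, as an assumption
# consistent with the abstract rather than a fit to the paper's model.
import numpy as np

def sigmoid_signal(x, theta, slope):
    """Firing-rate signal with quenching threshold `theta` and steepness `slope`."""
    return 1.0 / (1.0 + np.exp(-slope * (x - theta)))

x = np.linspace(0.0, 1.0, 6)
baseline = sigmoid_signal(x, theta=0.5, slope=12.0)
with_ach = sigmoid_signal(x, theta=0.3, slope=12.0)   # ACh: threshold shifted left
for xi, b, a in zip(x, baseline, with_ach):
    print(f"input {xi:.1f}: baseline {b:.2f}, with ACh {a:.2f}")
```

Inputs that sit between the two thresholds are quenched under the baseline curve but amplified once the threshold is translated, which is how a threshold shift can masquerade as a change in vigilance.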
6.
Atten Percept Psychophys ; 73(4): 1147-70, 2011 May.
Article in English | MEDLINE | ID: mdl-21336518

ABSTRACT

How do spatially disjoint and ambiguous local motion signals in multiple directions generate coherent and unambiguous representations of object motion? Various motion percepts, starting with those of Duncker (Induced motion, 1929/1938) and Johansson (Configurations in event perception, 1950), obey a rule of vector decomposition, in which global motion appears to be subtracted from the true motion path of localized stimulus components, so that objects and their parts are seen as moving relative to a common reference frame. A neural model predicts how vector decomposition results from multiple-scale and multiple-depth interactions within and between the form- and motion-processing streams in V1-V2 and V1-MST, which include form grouping, form-to-motion capture, figure-ground separation, and object motion capture mechanisms. Particular advantages of the model are that these mechanisms solve the aperture problem, group spatially disjoint moving objects via illusory contours, capture object motion direction signals on real and illusory contours, and use interdepth directional inhibition to cause a vector decomposition, whereby the motion directions of a moving frame at a nearer depth suppress those directions at a farther depth, and thereby cause a peak shift in the perceived directions of object parts moving with respect to the frame.


Subject(s)
Attention/physiology , Discrimination, Psychological/physiology , Motion Perception/physiology , Nerve Net/physiology , Optical Illusions/physiology , Orientation/physiology , Pattern Recognition, Visual/physiology , Visual Cortex/physiology , Association , Depth Perception/physiology , Field Dependence-Independence , Habituation, Psychophysiologic/physiology , Humans , Inhibition, Psychological , Interneurons/physiology , Neural Networks, Computer , Neurons/physiology , Psychophysics
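A worked example of the vector-decomposition rule itself (not of the neural model in entry 6) is the classic rolling wheel: a point on the rim traces a cycloid in world coordinates, yet it is seen as rotating about the hub because the hub's common translation is subtracted out. The short sketch below computes this decomposition explicitly; the wheel radius and sampling are arbitrary.

```python
# Worked example of vector decomposition (Duncker's rolling wheel): the rim
# point follows a cycloid in world coordinates, but subtracting the common
# frame motion (the hub's translation) leaves pure rotation about the hub.
import numpy as np

r = 1.0                                           # wheel radius (arbitrary units)
t = np.linspace(0.0, 2.0 * np.pi, 200)            # one revolution
hub = np.stack([r * t, np.full_like(t, r)], axis=1)              # common (frame) motion
rim = hub + np.stack([-r * np.sin(t), -r * np.cos(t)], axis=1)   # cycloid path of a rim point

relative = rim - hub                              # decomposition: subtract frame motion
radii = np.linalg.norm(relative, axis=1)
print(f"residual motion is a circle: radius stays within [{radii.min():.3f}, {radii.max():.3f}]")
```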
7.
Int J Neural Syst ; 20(4): 249-65, 2010 Aug.
Article in English | MEDLINE | ID: mdl-20726037

ABSTRACT

How do organisms select and organize relevant sensory input in working memory (WM) in order to deal with constantly changing environmental cues? Once information has been stored in WM, how is it protected from, and altered by, the continuous stream of sensory input and internally generated planning? The present study proposes a novel role for dopamine (DA) in the maintenance of WM in prefrontal cortex (Pfc) neurons that begins to address these issues. In particular, DA mediates the alternation of the Pfc network between input-driven and internally-driven states, which in turn drives WM updates and storage. A biologically inspired neural network model of Pfc is formulated to provide a link between the mechanisms of state switching and the biophysical properties of Pfc neurons. This model belongs to the recurrent competitive fields class of dynamical systems (33), which have been extensively characterized mathematically and which exhibit the two functional states of interest: input-driven and internally-driven. This hypothesis was tested with two working memory tasks of increasing difficulty: a simple working memory task and a delayed alternation task. The results suggest that optimal WM storage in spite of noise is achieved with a phasic DA input followed by lower sustained DA activity. Hypo- and hyper-dopaminergic activity that alters this ideal pattern leads, respectively, to increased distractibility by non-relevant patterns and to prolonged perseveration on presented patterns.


Subject(s)
Dopamine/metabolism , Memory, Short-Term/physiology , Models, Neurological , Nerve Net/physiology , Neurons/physiology , Prefrontal Cortex/cytology , Prefrontal Cortex/physiology , Animals , Cues , Humans , Neurons/cytology
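A minimal sketch of the state switching proposed in entry 7, assuming a toy three-cell competitive field and a single scalar gate standing in for dopamine: a high (phasic) gate lets the current stimulus be written into working memory, while a low sustained gate lets the internally driven attractor hold the stored item against a distractor. The gate variable, the soft winner-take-all update, and all parameters are illustrative assumptions, not the biophysical Pfc model.

```python
# Hedged sketch of DA-mediated switching between an input-driven state (WM
# update) and an internally-driven state (WM maintenance). The gate variable
# standing in for dopamine and the simple competitive update are assumptions.
import numpy as np

def wta(x):
    """Soft winner-take-all: recurrent competition normalizes and sharpens."""
    y = np.maximum(x, 0.0) ** 2
    return y / (y.sum() + 1e-9)

def update(wm, stimulus, da_gate, steps=20):
    """da_gate ~ 1: input-driven (store stimulus); da_gate ~ 0: internally driven."""
    for _ in range(steps):
        wm = wta((1.0 - da_gate) * wm + da_gate * stimulus)
    return wm

wm = update(np.zeros(3), np.array([1.0, 0.2, 0.2]), da_gate=0.9)   # phasic DA: load item A
print("stored     :", np.round(wm, 2))

distractor = np.array([0.2, 1.0, 0.2])
held        = update(wm, distractor, da_gate=0.05)   # low sustained DA: distractor ignored
overwritten = update(wm, distractor, da_gate=0.9)    # another phasic burst: WM updated
print("held       :", np.round(held, 2))
print("overwritten:", np.round(overwritten, 2))
```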
8.
J Comput Neurosci ; 28(2): 323-46, 2010 Apr.
Article in English | MEDLINE | ID: mdl-20111896

ABSTRACT

How spiking neurons cooperate to control behavioral processes is a fundamental problem in computational neuroscience. Such cooperative dynamics are required during visual perception when spatially distributed image fragments are grouped into emergent boundary contours. Perceptual grouping is a challenge for spiking cells because its properties of collinear facilitation and analog sensitivity occur in response to binary spikes with irregular timing across many interacting cells. Some models have demonstrated spiking dynamics in recurrent laminar neocortical circuits, but not how perceptual grouping occurs. Other models have analyzed the fast speed of certain percepts in terms of a single feedforward sweep of activity, but cannot explain other percepts, such as illusory contours, wherein perceptual ambiguity can take hundreds of milliseconds to resolve by integrating multiple spikes over time. The current model reconciles fast feedforward with slower feedback processing, and binary spikes with analog network-level properties, in a laminar cortical network of spiking cells whose emergent properties quantitatively simulate parametric data from neurophysiological experiments, including the formation of illusory contours; the structure of non-classical visual receptive fields; and self-synchronizing gamma oscillations. These laminar dynamics shed new light on how the brain resolves local informational ambiguities through the use of properly designed nonlinear feedback spiking networks which run as fast as they can, given the amount of uncertainty in the data that they process.


Subject(s)
Contrast Sensitivity/physiology , Models, Neurological , Nerve Net/physiology , Neurons/physiology , Visual Cortex/physiology , Computer Simulation , Form Perception/physiology , Optical Illusions/physiology , Pattern Recognition, Visual/physiology , Visual Pathways/physiology
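One ingredient of the reconciliation described in entry 8, that analog network-level quantities can be carried by binary spikes with irregular timing, can be illustrated with a small sketch: a noisy leaky integrate-and-fire population, pooled over cells and time, yields a graded firing rate that tracks an analog input. The population size, noise level, and parameters are illustrative assumptions and do not reproduce the laminar grouping model.

```python
# Hedged sketch: binary spikes with irregular timing still carry analog,
# population-level information once pooled across cells and time.
import numpy as np

rng = np.random.default_rng(1)

def population_rate(drive, n_cells=200, T=0.5, dt=0.001,
                    tau=0.02, v_thresh=1.0, noise=1.0):
    """Mean firing rate (Hz) of a noisy leaky integrate-and-fire population."""
    v = np.zeros(n_cells)
    spikes = 0
    for _ in range(int(T / dt)):
        v += dt / tau * (drive - v) + noise * np.sqrt(dt) * rng.standard_normal(n_cells)
        fired = v >= v_thresh
        spikes += int(fired.sum())
        v[fired] = 0.0                   # emit a binary spike, then reset
    return spikes / (n_cells * T)

for drive in (1.1, 1.3, 1.6):            # graded analog input strengths
    print(f"drive {drive}: population rate ~ {population_rate(drive):.1f} Hz")
```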
9.
Neuroinformatics ; 6(4): 291-309, 2008.
Article in English | MEDLINE | ID: mdl-18695948

ABSTRACT

Making use of very detailed neurophysiological, anatomical, and behavioral data to build biologically realistic computational models of animal behavior is often a difficult task. Many software packages have tried to resolve this mismatch in granularity between data and model with different approaches. This paper presents KInNeSS, the KDE Integrated NeuroSimulation Software environment, as an alternative solution to bridge the gap between data and model behavior. This open-source neural simulation software package provides an expandable framework incorporating features such as ease of use, scalability, an XML-based schema, and multiple levels of granularity within a modern object-oriented programming design. KInNeSS is best suited to simulate networks of hundreds to thousands of branched multi-compartmental neurons with biophysical properties such as membrane potential, voltage-gated and ligand-gated channels, gap junctions, ionic diffusion, neuromodulatory channel gating, habituative or depressive synapses, axonal delays, and synaptic plasticity. KInNeSS outputs include compartment membrane voltage, spikes, local field potentials, and current source densities, as well as visualization of the behavior of a simulated agent. An explanation of the modeling philosophy and plug-in development is also presented. Further development of KInNeSS is ongoing, with the ultimate goal of creating a modular framework that will help researchers across different disciplines collaborate effectively using a modern neural simulation platform.


Subject(s)
Central Nervous System/physiology , Computational Biology/methods , Computer Simulation , Neurophysiology/methods , Neurosciences/methods , Software , Action Potentials/physiology , Algorithms , Animals , Interdisciplinary Communication , Ion Channels/physiology , Neurons/physiology , Programming Languages , Synaptic Potentials/physiology
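To give a flavor of the kind of computation a simulator in this class performs at its finest granularity, here is a generic forward-Euler update of a single compartment with a leak current and one ligand-gated synaptic conductance. This is deliberately not KInNeSS code and does not use its API or XML schema; all names, units, and constants are illustrative.

```python
# Generic sketch of a single-compartment membrane update with a leak current
# and one ligand-gated synaptic conductance. NOT KInNeSS code or its API.
import numpy as np

C_m = 1.0                      # membrane capacitance (uF/cm^2)
g_L, E_L = 0.1, -65.0          # leak conductance (mS/cm^2) and reversal (mV)
g_syn, E_syn = 0.05, 0.0       # ligand-gated (excitatory) conductance and reversal
tau_syn = 5.0                  # synaptic decay time constant (ms)

def simulate(spike_times_ms, T=100.0, dt=0.1):
    """Membrane potential of one compartment receiving excitatory input spikes."""
    v, s = E_L, 0.0                          # voltage (mV) and synaptic gating variable
    trace = np.empty(int(T / dt))
    spike_steps = {int(t / dt) for t in spike_times_ms}
    for i in range(len(trace)):
        if i in spike_steps:
            s += 1.0                         # a presynaptic spike opens ligand-gated channels
        s -= dt * s / tau_syn                # channels close exponentially
        I_ion = g_L * (v - E_L) + g_syn * s * (v - E_syn)
        v += dt * (-I_ion) / C_m             # forward-Euler membrane update
        trace[i] = v
    return trace

v = simulate([20.0, 25.0, 60.0])
print(f"rest {v[0]:.1f} mV, peak depolarization {v.max():.1f} mV")
```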
10.
Brain Res ; 1218: 278-312, 2008 Jul 07.
Article in English | MEDLINE | ID: mdl-18533136

ABSTRACT

This article develops the Synchronous Matching Adaptive Resonance Theory (SMART) neural model to explain how the brain may coordinate multiple levels of thalamocortical and corticocortical processing to rapidly learn, and stably remember, important information about a changing world. The model clarifies how bottom-up and top-down processes work together to realize this goal, notably how processes of learning, expectation, attention, resonance, and synchrony are coordinated. The model hereby clarifies, for the first time, how the following levels of brain organization coexist to realize cognitive processing properties that regulate fast learning and stable memory of brain representations: single-cell properties, such as spiking dynamics, spike-timing-dependent plasticity (STDP), and acetylcholine modulation; detailed laminar thalamic and cortical circuit designs and their interactions; aggregate cell recordings, such as current source densities and local field potentials; and single-cell and large-scale inter-areal oscillations in the gamma and beta frequency domains. In particular, the model predicts how laminar circuits of multiple cortical areas interact with primary and higher-order specific thalamic nuclei and nonspecific thalamic nuclei to carry out attentive visual learning and information processing. The model simulates how synchronization of neuronal spiking occurs within and across brain regions, and triggers STDP. Matches between bottom-up adaptively filtered input patterns and learned top-down expectations cause gamma oscillations that support attention, resonance, learning, and consciousness. Mismatches inhibit learning while causing beta oscillations during reset and hypothesis testing operations that are initiated in the deeper cortical layers. The generality of learned recognition codes is controlled by a vigilance process mediated by acetylcholine.


Subject(s)
Action Potentials/physiology , Attention/physiology , Cerebral Cortex/cytology , Cortical Synchronization , Learning/physiology , Neurons/physiology , Thalamus/cytology , Animals , Cerebral Cortex/physiology , Computer Simulation , Models, Neurological , Nerve Net/physiology , Neural Networks, Computer , Thalamus/physiology
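The match/mismatch logic at the heart of the SMART model can be summarized in a few lines. The fuzzy-ART-style match ratio and vigilance test below are a simple stand-in for the laminar spiking circuitry; the gamma/beta labels merely annotate which dynamical regime the model associates with each outcome, and the patterns and vigilance values are illustrative.

```python
# Hedged sketch of the match/mismatch logic that SMART attributes to gamma vs.
# beta dynamics: a fuzzy-ART-style vigilance test stands in for the laminar
# spiking circuitry; it is not the SMART model itself.
import numpy as np

def match_ratio(bottom_up, expectation):
    """Fraction of the bottom-up pattern confirmed by the top-down expectation."""
    return np.minimum(bottom_up, expectation).sum() / (bottom_up.sum() + 1e-9)

def resonate_or_reset(bottom_up, expectation, vigilance):
    m = match_ratio(bottom_up, expectation)
    if m >= vigilance:
        return f"match {m:.2f} >= vigilance {vigilance}: resonance (gamma regime), learn"
    return f"match {m:.2f} <  vigilance {vigilance}: mismatch reset (beta regime), search"

input_pattern = np.array([1.0, 0.9, 0.1, 0.0])
expectation   = np.array([1.0, 0.8, 0.0, 0.0])
for rho in (0.7, 0.95):          # low vs. high vigilance (ACh-modulated in the model)
    print(resonate_or_reset(input_pattern, expectation, rho))
```

Raising the vigilance parameter forces finer, more concrete categories, which is the sense in which the abstract says acetylcholine controls the generality of learned recognition codes.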
11.
Neural Netw ; 18(5-6): 458-66, 2005.
Article in English | MEDLINE | ID: mdl-16095878

ABSTRACT

Temporal relationships between neuronal firing and plasticity have received significant attention in recent decades. Neurophysiological studies have demonstrated the phenomenon of spike-timing-dependent plasticity (STDP), and various models have been suggested to implement an STDP-like learning rule in artificial networks based on spiking neuronal representations. The rule presented here was developed under three constraints. First, it depends only on the information that is available at the synapse at the time of synaptic modification. Second, it follows naturally from neurophysiological and psychological research starting with Hebb's postulate [D. Hebb (1949). The Organization of Behavior. Wiley, New York]. Third, it is simple and computationally cheap, and its parameters are straightforward to determine. The rule is further extended by adding four types of gating derived from the gated-decay terms conventionally used in learning rules for continuous firing-rate neural networks. The results show that the advantages of these gating types carry over to the new rule without sacrificing its dependence on spike timing.


Subject(s)
Models, Neurological , Neural Networks, Computer , Neuronal Plasticity/physiology , Algorithms , Artificial Intelligence , Electrophysiology , Excitatory Postsynaptic Potentials , Ion Channel Gating/physiology , Models, Statistical , Synapses/physiology
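For orientation, here is a generic pair-based STDP rule that uses only quantities local to the synapse (exponentially decaying pre- and postsynaptic spike traces), with a single multiplicative gate added to hint at how gating can scale plasticity. It is a textbook-style sketch under those assumptions, not the specific rule or the four gating types proposed in the paper.

```python
# Generic pair-based STDP sketch using only synapse-local spike traces, plus a
# single multiplicative gate; an illustration, not the paper's rule or gatings.
A_plus, A_minus = 0.010, 0.012        # potentiation / depression amplitudes
tau_plus, tau_minus = 20.0, 20.0      # trace time constants (ms)

def run_stdp(pre_spikes, post_spikes, T=200, dt=1.0, w=0.5, gate=1.0):
    """Update weight w from spike times (ms); `gate` scales all plasticity."""
    x_pre, x_post = 0.0, 0.0          # eligibility traces stored at the synapse
    for t in range(int(T / dt)):
        pre, post = t in pre_spikes, t in post_spikes
        x_pre  = x_pre  * (1.0 - dt / tau_plus)  + (1.0 if pre  else 0.0)
        x_post = x_post * (1.0 - dt / tau_minus) + (1.0 if post else 0.0)
        if post:                      # post after pre -> potentiate
            w += gate * A_plus * x_pre
        if pre:                       # pre after post -> depress
            w -= gate * A_minus * x_post
    return w

pre  = {10, 60, 110, 160}             # pre fires 10 ms before each post spike
post = {20, 70, 120, 170}
print("pre-before-post, gate 1.0:", round(run_stdp(pre, post), 3))            # potentiation
print("pre-before-post, gate 0.2:", round(run_stdp(pre, post, gate=0.2), 3))  # scaled change
print("post-before-pre, gate 1.0:", round(run_stdp(post, pre), 3))            # depression
```

The gate here is a single scalar; the paper instead derives four distinct gating forms from gated-decay learning laws, so this sketch only shows where such a factor would enter the update.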