Results 1 - 11 of 11
1.
Front Neurorobot ; 13: 52, 2019.
Article in English | MEDLINE | ID: mdl-31354468

ABSTRACT

It is well established that human decision making and instrumental control use multiple systems, some of which rely on habitual action selection and some of which require deliberate planning. Deliberate planning systems predict action outcomes using an internal model of the agent's environment, while habitual action selection systems learn to automate behavior by repeating previously rewarded actions. Habitual control is computationally efficient but not very flexible in changing environments; conversely, deliberate planning may be computationally expensive but flexible in dynamic environments. This paper proposes a general architecture comprising both control paradigms by introducing an arbitrator that determines which subsystem is used at any time. The system is implemented for a target-reaching task with a simulated two-joint robotic arm and combines a supervised internal model with deep reinforcement learning. Through permutation of target-reaching conditions, we demonstrate that the proposed architecture is capable of rapidly learning the kinematics of the system without a priori knowledge and is robust to (A) changing environmental reward and kinematics and (B) occluded vision. The arbitrator model is compared to instances of the model using exclusively deliberate planning with the internal model and exclusively habitual control. The results show how such a model can harness the benefits of both systems, using fast decisions in reliable circumstances while optimizing performance in changing environments. In addition, the proposed model learns very quickly. Finally, the system that includes internal models is able to reach the target under visual occlusion, whereas the purely habitual system is unable to operate adequately under such conditions.
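A minimal sketch of the arbitration idea (the structure, names, and parameter values below are illustrative assumptions, not the paper's architecture): each controller is tracked by a running reliability estimate derived from its recent prediction errors, and the arbitrator hands control to whichever subsystem is currently more reliable, so control shifts toward deliberate planning when the environment changes.

```python
import numpy as np

# Hypothetical arbitrator sketch: reliability-weighted choice between a
# habitual (model-free) controller and a deliberate (model-based) planner.
class Arbitrator:
    def __init__(self, decay=0.9):
        self.reliability = {"habit": 0.5, "plan": 0.5}   # running reliability per subsystem
        self.decay = decay

    def choose(self):
        # Hand control to the currently more reliable subsystem.
        return max(self.reliability, key=self.reliability.get)

    def update(self, name, prediction_error):
        # Reliability rises when a subsystem's recent prediction errors are small.
        r = self.reliability[name]
        self.reliability[name] = self.decay * r + (1 - self.decay) * np.exp(-abs(prediction_error))

# Toy usage: habitual prediction errors spike after a change in the task,
# so the arbitrator switches from habit to planning.
arb = Arbitrator()
for habit_error in [0.1, 0.1, 0.1, 2.0, 2.0, 2.0]:
    print(arb.choose(), {k: round(v, 2) for k, v in arb.reliability.items()})
    arb.update("habit", habit_error)
    arb.update("plan", 0.3)   # the planner stays moderately accurate throughout
```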

2.
Int J Psychophysiol ; 127: 62-72, 2018 05.
Article in English | MEDLINE | ID: mdl-29551656

ABSTRACT

The visual environment is filled with complex, multi-dimensional objects that vary in their value to an observer's current goals. When faced with multi-dimensional stimuli, humans may rely on biases to learn to select those objects that are most valuable to the task at hand. Here, we show that decision making in a complex task is guided by the sparsity bias: the focusing of attention on a subset of available features. Participants completed a gambling task in which they selected complex stimuli that varied randomly along three dimensions: shape, color, and texture. Each dimension comprised three features (e.g., color: red, green, yellow). Only one dimension was relevant in each block (e.g., color), and a randomly-chosen value ranking determined outcome probabilities (e.g., green > yellow > red). Participants were faster to respond to infrequent probe stimuli that appeared unexpectedly within stimuli that possessed a more valuable feature than to probes appearing within stimuli possessing a less valuable feature. Event-related brain potentials recorded during the task provided a neurophysiological explanation for sparsity as a learning-dependent increase in optimal attentional performance (as measured by the N2pc component of the human event-related potential) and a concomitant learning-dependent decrease in prediction errors (as measured by the feedback-elicited reward positivity). Together, our results suggest that the sparsity bias guides human reinforcement learning in complex environments.
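For concreteness, here is an illustrative sketch of the task structure described above (the dimensions and features follow the abstract, but the reward probabilities and block logic are assumptions, not the published design):

```python
import random

# Hypothetical sketch of the gambling-task structure: stimuli vary on three
# dimensions, only one dimension is relevant per block, and a random value
# ranking over that dimension's features sets the outcome probability.
DIMENSIONS = {
    "shape":   ["circle", "square", "triangle"],
    "color":   ["red", "green", "yellow"],
    "texture": ["dots", "stripes", "checks"],
}

def make_block(relevant="color"):
    ranking = random.sample(DIMENSIONS[relevant], 3)    # e.g. green > yellow > red
    reward_p = dict(zip(ranking, [0.8, 0.5, 0.2]))      # assumed probabilities
    return ranking, reward_p

def sample_stimulus():
    return {dim: random.choice(feats) for dim, feats in DIMENSIONS.items()}

def outcome(stimulus, relevant, reward_p):
    # Only the relevant dimension's feature determines the reward probability.
    return random.random() < reward_p[stimulus[relevant]]

ranking, reward_p = make_block("color")
stim = sample_stimulus()
print(ranking, stim, outcome(stim, "color", reward_p))
```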


Subject(s)
Attention/physiology; Bias; Evoked Potentials/physiology; Learning/physiology; Visual Perception/physiology; Adolescent; Brain Mapping; Decision Making/physiology; Electroencephalography; Female; Humans; Male; Photic Stimulation; Reaction Time/physiology; Reward; Young Adult
3.
Neural Netw ; 72: 13-30, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26559472

ABSTRACT

Humans can point fairly accurately to memorized states with their eyes closed, despite slow or even missing sensory feedback. It is also common for arm dynamics to change during development or as a result of injury. We propose a biologically motivated implementation of an arm controller that includes an adaptive observer. Our implementation is based on the neural field framework, and we show how a path integration mechanism can be trained from a few examples. Our results illustrate successful generalization of path integration with a dynamic neural field, by which the robotic arm can move in arbitrary directions and at arbitrary velocities. Also, by adapting the strength of the motor effect, the observer implicitly learns to compensate for an image-acquisition delay in the sensory system. Our dynamic implementation of an observer successfully guides the arm toward the target in the dark, and the model produces movements with a bell-shaped velocity profile, consistent with human behavioral data.
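A deliberately simplified, scalar stand-in for the adaptive-observer idea (all parameters are assumptions; the paper's implementation uses dynamic neural fields rather than the update rule below): the observer integrates the motor command to predict hand position, adapts a motor gain from delayed visual feedback, and then guides a reach to a remembered target with vision removed.

```python
# Hypothetical scalar observer sketch, not the neural-field model.
dt, delay, lr = 0.01, 10, 0.05      # step size, visual delay in steps, learning rate (assumed)
true_gain, gain = 1.0, 0.5          # real arm gain vs. the observer's initial estimate

# Learning phase: constant forward command under vision; the motor gain is
# adapted toward the visually observed velocity per unit command, which is
# unbiased even though the measurement arrives with a delay.
true_pos, buffer, prev_seen = 0.0, [0.0] * delay, 0.0
for _ in range(400):
    u = 1.0
    true_pos += true_gain * u * dt
    buffer.append(true_pos)
    seen = buffer.pop(0)                             # delayed visual sample
    gain += lr * ((seen - prev_seen) / (u * dt) - gain)
    prev_seen = seen

# Test phase: reach a remembered target "in the dark", driven by the internal
# estimate (path integration) alone, with no visual feedback at all.
true_pos, est_pos, target = 0.0, 0.0, 0.3
while est_pos < target:
    true_pos += true_gain * dt
    est_pos += gain * dt
print("learned gain:", round(gain, 2), " dark reach ends near:", round(true_pos, 2))
```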


Subject(s)
Brain/physiology; Models, Neurological; Movement/physiology; Psychomotor Performance/physiology; Arm; Humans; Learning; Robotics
4.
J Neurosci Methods ; 245: 64-72, 2015 Apr 30.
Article in English | MEDLINE | ID: mdl-25701685

ABSTRACT

BACKGROUND: Event-related potentials (ERPs) may provide a non-invasive index of brain function for a range of clinical applications. However, as a lab-based technique, ERPs are limited by technical challenges that prevent full integration into clinical settings. NEW METHOD: To translate ERP capabilities from the lab to clinical applications, we have developed methods such as the Halifax Consciousness Scanner (HCS). The HCS is essentially a rapid, automated ERP evaluation of brain functional status. The present study describes the ERP components evoked by auditory tones and speech stimuli. ERP results were obtained using a 5-min test in 100 healthy individuals. The HCS sequence was designed to evoke the N100, the mismatch negativity (MMN), the P300, the early negative enhancement (ENE), and the N400. These components reflect sensation, perception, attention, memory, and language perception, respectively. Component detection was examined at the group and individual levels, and evaluated across both statistical and classification approaches. RESULTS: All ERP components were robustly detected at the group level. At the individual level, nonparametric statistical analyses showed reduced accuracy relative to support vector machine (SVM) classification, particularly for speech-based ERPs. Optimized SVM results were MMN: 95.6%; P300: 99.0%; ENE: 91.8%; and N400: 92.3%. CONCLUSIONS: A spectrum of individual-level ERPs can be obtained in a very short time. Machine learning classification improved detection accuracy across a large healthy control sample. Translating ERPs into clinical applications is increasingly possible at the individual level.
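A minimal sketch of individual-level ERP component detection with an SVM, using synthetic waveforms (the electrode count, time points, component latency, and classifier settings are illustrative assumptions, not the HCS pipeline): each sample is an averaged waveform, and the classifier separates waveforms that contain a component-like deflection from those that do not.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_per_class, n_chan, n_time = 100, 8, 150     # assumed dimensions

def synth_average(has_component):
    # Synthetic averaged waveform: noise plus an optional small deflection
    # standing in for an ERP component.
    x = rng.normal(0, 1.0, (n_chan, n_time))
    if has_component:
        x[:, 60:80] += 1.0
    return x.ravel()

X = np.array([synth_average(i % 2 == 0) for i in range(2 * n_per_class)])
y = np.array([i % 2 == 0 for i in range(2 * n_per_class)], dtype=int)

clf = SVC(kernel="linear", C=1.0)
print("cross-validated detection accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```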


Subject(s)
Brain/physiology; Consciousness/physiology; Evoked Potentials/physiology; Point-of-Care Systems; Acoustic Stimulation; Adult; Aged; Analysis of Variance; Electroencephalography; Female; Humans; Language; Male; Middle Aged; Reaction Time/physiology; Young Adult
5.
Brain Inform ; 2(1): 1-12, 2015 Mar.
Article in English | MEDLINE | ID: mdl-27747499

ABSTRACT

Event-related potentials (ERPs) are tiny electrical brain responses in the human electroencephalogram that are typically not detectable until they are isolated by a process of signal averaging. Owing to the extremely small size of ERP components (ranging from less than 1 µV to tens of µV) compared to background brain rhythms, statistical analyses of ERPs are predominantly carried out in groups of subjects. This limitation is a barrier to the translation of ERP-based neuroscience to applications such as medical diagnostics. We show here that support vector machines (SVMs) are a useful method for detecting the mismatch negativity (MMN) ERP component in individual subjects with a small set of electrodes and a small number of trials. Such a reduced experimental setup is important for clinical applications. One hundred healthy individuals were presented with an auditory pattern containing pattern-violating deviants to evoke the MMN. Two-class SVMs were then trained to classify averaged ERP waveforms in response to standard stimuli (tones that match the pattern) and deviant stimuli (tones that violate the pattern). The influence of kernel type, number of epochs, electrode selection, and temporal window size in the averaged waveform was explored. When using all electrodes, averages of all available epochs, and a temporal window from 0 to 900 ms post-stimulus, a linear SVM achieved 94.5% accuracy. Further analyses using SVMs trained with narrower, sliding temporal windows confirmed the sensitivity of the SVM to data in the latency range associated with the MMN.
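A sketch of the sliding temporal-window analysis on synthetic standard/deviant averages (the window size, step, sampling rate, and deviant-effect latency are assumed values, not the study's parameters): a linear SVM is trained on each temporal window, so classification accuracy peaks in the latency range carrying the discriminative, MMN-like signal.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_per_class, n_time, fs = 100, 225, 250        # 225 samples at an assumed 250 Hz = 900 ms

def synth_epoch(deviant):
    # Synthetic averaged waveform; deviants carry a negativity ~150-250 ms post-stimulus.
    x = rng.normal(0, 1.0, n_time)
    if deviant:
        x[38:63] -= 1.2
    return x

X = np.array([synth_epoch(i % 2 == 1) for i in range(2 * n_per_class)])
y = np.arange(2 * n_per_class) % 2

win, step = 25, 12                             # ~100 ms windows, ~50 ms steps (assumed)
for start in range(0, n_time - win + 1, step):
    acc = cross_val_score(SVC(kernel="linear"), X[:, start:start + win], y, cv=5).mean()
    print(f"{1000 * start / fs:5.0f} ms  accuracy={acc:.2f}")
```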

7.
Learn Behav ; 42(1): 22-38, 2014 Mar.
Article in English | MEDLINE | ID: mdl-23813103

ABSTRACT

When retrospective revaluation phenomena (e.g., unovershadowing: AB+, then A-, then test B) were discovered, simple elemental models were at a disadvantage because they could not explain such phenomena. Extensions of these models and novel models appealed to within-compound associations to accommodate these new data. Here, we present an elemental, neural network model of conditioning that explains retrospective revaluation apart from within-compound associations. In the model, previously paired stimuli (say, A and B, after AB+) come to activate similar ensembles of neurons, so that revaluation of one stimulus (A-) has the opposite effect on the other stimulus (B) through changes (decreases) in the strength of the inhibitory connections between neurons activated by B. The ventral striatum is discussed as a possible home for the structure and function of the present model.
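A toy illustration of the proposed mechanism (illustrative numbers and a drastically reduced network, not the paper's model): after AB+ training, A and B activate overlapping neuron ensembles; extinguishing A alone (A-) weakens inhibitory connections among A's ensemble, and because part of that ensemble is shared with B, the response evoked by B at test increases (unovershadowing) without any explicit A-B within-compound association being stored.

```python
import numpy as np

n = 20
ens_A = np.zeros(n, dtype=bool); ens_A[:12] = True    # neurons driven by A
ens_B = np.zeros(n, dtype=bool); ens_B[6:18] = True   # neurons driven by B (overlap: units 6-11)
inhibition = np.full((n, n), 0.05)                    # uniform inhibitory weights after AB+ training

def response(ensemble, W):
    # Response = feedforward excitation minus recurrent inhibition within the ensemble.
    drive = ensemble.astype(float)
    return float(drive.sum() - drive @ W @ drive)

before = response(ens_B, inhibition)
inhibition[np.ix_(ens_A, ens_A)] *= 0.5               # A- training weakens inhibition among A's ensemble
after = response(ens_B, inhibition)
print("response to B before A-:", before, " after A-:", after)   # after > before
```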


Subject(s)
Action Potentials/physiology; Conditioning, Classical/physiology; Models, Psychological; Neural Networks, Computer; Neurons/physiology; Animals; Cues; Inhibition, Psychological
8.
J Cogn Neurosci ; 24(2): 315-36, 2012 Feb.
Article in English | MEDLINE | ID: mdl-21942761

ABSTRACT

During natural vision, eye movements are dynamically controlled by combinations of goal-related top-down (TD) and stimulus-related bottom-up (BU) neural signals that map onto objects or locations of interest in the visual world. In primates, both BU and TD signals converge in many areas of the brain, including the intermediate layers of the superior colliculus (SCi), a midbrain structure that contains a retinotopically coded map for saccades. How TD and BU signals combine or interact within the SCi map to influence saccades remains poorly understood and actively debated. It has been proposed that winner-take-all competition between these signals occurs dynamically within this map to determine the next location for gaze. Here, we examine how TD and BU signals interact spatially within an artificial two-dimensional dynamic winner-take-all neural field model of the SCi to influence saccadic reaction time (SRT). We measured point images (spatially organized population activity on the SC map) physiologically to inform the TD and BU model parameters. In this model, TD and BU signals interacted nonlinearly within the SCi map to influence SRT via changes to (1) the spatial size or extent of individual signals, (2) the peak magnitude of individual signals, (3) the total number of competing signals, and (4) the total spatial separation between signals in the visual field. This model reproduced previous behavioral studies of TD and BU influences on SRT and accounted for multiple inconsistencies between them, by demonstrating how, under different experimental conditions, the spatial interactions of TD and BU signals can lead to either increases or decreases in SRT. Our results suggest that dynamic winner-take-all modeling with local excitation and distal inhibition in two dimensions accurately reflects both the physiological activity within the SCi map and the behavioral changes in SRT that result from BU and TD manipulations.
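A minimal two-dimensional winner-take-all field sketch (the kernel widths, gains, and threshold are illustrative assumptions, not the fitted SCi model): TD and BU point images are Gaussian inputs on a retinotopic grid, lateral interaction is local excitation plus broader inhibition, and the iteration count at which activity crosses a fixed threshold stands in for SRT, so spatially aligned signals reach threshold sooner than widely separated ones.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

size, dt, thresh = 41, 0.1, 1.5
yy, xx = np.mgrid[0:size, 0:size]

def point_image(cx, cy, amp, sigma=2.5):
    # Gaussian "point image" on the retinotopic grid.
    return amp * np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * sigma ** 2))

def srt(drive, max_iter=500):
    u = np.zeros_like(drive)
    for t in range(1, max_iter + 1):
        r = np.maximum(u, 0.0)                                    # rectified firing rate
        lateral = 2.0 * gaussian_filter(r, 2.0) - 1.6 * gaussian_filter(r, 8.0)
        u += dt * (-u + drive + lateral)                           # field dynamics
        if u.max() > thresh:
            return t                                              # proxy for saccadic RT
    return max_iter

aligned   = point_image(20, 20, 1.0) + point_image(20, 20, 0.8)   # TD and BU at the same site
separated = point_image(10, 20, 1.0) + point_image(30, 20, 0.8)   # TD and BU far apart
print("aligned SRT:", srt(aligned), " separated SRT:", srt(separated))
```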


Subject(s)
Models, Neurological; Neurons/physiology; Saccades/physiology; Superior Colliculi/physiology; Animals; Macaca mulatta; Male; Photic Stimulation; Reaction Time/physiology; Visual Fields/physiology
9.
Neural Netw ; 21(10): 1476-92, 2008 Dec.
Article in English | MEDLINE | ID: mdl-18980830

ABSTRACT

Centre-Surround Neural Field (CSNF) models were used to explain a possible mechanism by which information from different sources may be integrated into target likelihood maps that are then used to direct eye saccades. The CSNF model is a dynamic model in which each region of the network excites nearby locations and inhibits distant locations, thereby modeling competition for eye movements (saccades). The CSNF model was tested in a number of conditions analogous to a naturalistic search task in which the target was either (1) present at the expected location, (2) present at an unexpected location, or (3) absent. Simulations showed that the model predicted a pattern of accuracy results similar to those obtained from human participants by [Eckstein, M. P., Drescher, B. A., & Shimozaki, S. S. (2006). Attentional cues in real scenes, saccadic targeting, and Bayesian priors. Psychological Science, 17(11), 973-980]. However, the model predicts different saccadic latencies between conditions where Eckstein, Drescher, and Shimozaki (2006) found no significant differences. These discrepancies between model predictions and behavioural results are discussed. Additional simulations indicated that these models can also capture the qualitative flavor of eye movements in conditions with multiple targets, as compared to [Findlay, J. M. (1997). Saccade target selection during visual search. Vision Research, 37(5), 617-631].
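A compact sketch of the integration idea (a simplified stand-in for the CSNF dynamics; the prior and evidence amplitudes and the noise level are assumptions): a contextual prior over the expected location is summed with noisy bottom-up evidence into a likelihood map, the first saccade goes to the map's peak, and saccadic accuracy is accordingly higher when the target appears at the expected location.

```python
import numpy as np

rng = np.random.default_rng(3)
size = 61
yy, xx = np.mgrid[0:size, 0:size]

def bump(cx, cy, amp, sigma=3.0):
    return amp * np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * sigma ** 2))

expected, unexpected = (30, 30), (10, 50)
prior = bump(*expected, amp=0.6)                   # top-down expectation about the target site

def first_saccade(target_at):
    evidence = rng.normal(0, 0.25, (size, size))   # bottom-up noise
    evidence += bump(*target_at, amp=0.8)          # bottom-up target signal
    likelihood = prior + evidence                  # integration of the two sources
    return np.unravel_index(np.argmax(likelihood), likelihood.shape)[::-1]

for cond, loc in [("expected", expected), ("unexpected", unexpected)]:
    hits = sum(np.hypot(*np.subtract(first_saccade(loc), loc)) < 5 for _ in range(200))
    print(cond, "- first saccades landing on the target:", hits, "/ 200")
```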


Subject(s)
Computer Simulation; Eye Movements; Neural Networks, Computer; Attention; Cues; Humans; Likelihood Functions; Psychomotor Performance; Software; Time Factors; Visual Perception
10.
Neural Netw ; 18(5-6): 620-7, 2005.
Article in English | MEDLINE | ID: mdl-16087317

ABSTRACT

Experimental evidence on the distribution of visual attention supports the idea of a spatial saliency map, whereby bottom-up and top-down influences on attention are integrated by a winner-take-all mechanism. We implement this map with a continuous attractor neural network, and test the ability of our model to explain experimental evidence on the distribution of spatial attention. The majority of evidence supports the view that attention is unitary, but recent experiments provide evidence for split attentional foci. We simulate two such experiments. Our results suggest that the ability to divide attention depends on sustained endogenous signals from short-term memory to the saliency map, stressing the interplay between working memory mechanisms and attention.
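A compact sketch of the winner-take-all saliency-map idea and of how a sustained endogenous signal can preserve two attentional foci (an illustrative recurrence with assumed parameters, not the published attractor-network equations): two cued locations receive bottom-up input; without sustained memory-driven support, competition leaves a single focus, while a sustained input at both sites keeps two foci alive.

```python
import numpy as np

n = 200
x = np.arange(n)
bottom_up = np.exp(-(x - 60) ** 2 / 50.0) + 0.9 * np.exp(-(x - 140) ** 2 / 50.0)

def settle(sustained, iters=30, thresh=0.2):
    u = bottom_up.copy()
    for _ in range(iters):
        u = u ** 2                      # competitive sharpening (winner-take-all tendency)
        u = u / u.max()                 # normalization by the strongest location
        u = np.maximum(u, sustained)    # endogenous (memory-driven) input refreshes its sites
    active = u > thresh
    # Count separated above-threshold regions, i.e. attentional foci.
    return int(np.sum(np.diff(active.astype(int)) == 1) + active[0])

print("foci without sustained input:", settle(sustained=np.zeros(n)))
print("foci with sustained input:   ", settle(sustained=0.5 * bottom_up))
```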


Subject(s)
Attention/physiology; Models, Neurological; Neural Networks, Computer; Visual Perception/physiology; Algorithms; Computer Simulation
11.
Proc Biol Sci ; 269(1496): 1087-93, 2002 Jun 07.
Article in English | MEDLINE | ID: mdl-12061949

ABSTRACT

Medial temporal lobe structures including the hippocampus are implicated by separate investigations in both episodic memory and spatial function. We show that a single recurrent attractor network can store both the discrete memories that characterize episodic memory and the continuous representations that characterize physical space. Combining both types of representation in a single network is in fact necessary if both objects and their locations in space are to be stored. We thus show that episodic memory and spatial theories of medial temporal lobe function can be combined in a unified model.
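A toy sketch of how one recurrent weight matrix can hold both kinds of attractor (illustrative sizes and kernels, not the published model): discrete Hopfield-style patterns stand in for episodes and are stored by outer products, while a translation-invariant ring kernel stands in for continuous space, so a corrupted episodic cue is cleaned up and a bump of spatial activity stays centred where it started.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 200
theta = 2 * np.pi * np.arange(N) / N

episodes = rng.choice([-1.0, 1.0], size=(2, N))            # two discrete "episodic" memories
W_discrete = episodes.T @ episodes / N                      # outer-product (Hopfield-style) storage
W_space = np.cos(theta[:, None] - theta[None, :]) / N       # ring (continuous) kernel
W = W_discrete + W_space                                    # one combined weight matrix

# 1) Episodic recall: a corrupted cue is cleaned up by the discrete attractor.
cue = episodes[0].copy()
flip = rng.choice(N, size=40, replace=False)
cue[flip] *= -1
recalled = np.sign(W @ cue)
print("overlap with stored episode:", float(recalled @ episodes[0]) / N)

# 2) Spatial bump: the ring kernel keeps a bump of activity centred where it started.
bump = np.maximum(np.cos(theta - 1.0), 0.0)                 # bump centred at 1.0 rad
out = np.maximum(W @ bump, 0.0)
centre = np.angle(np.sum(out * np.exp(1j * theta)))
print("bump centre after one update (rad):", round(float(centre), 2))
```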


Subject(s)
Computer Simulation; Memory/physiology; Models, Neurological; Animals; Cues; Mental Recall/physiology; N-Methylaspartate/physiology