1.
Sensors (Basel) ; 23(21)2023 Nov 01.
Article in English | MEDLINE | ID: mdl-37960579

ABSTRACT

Robots are becoming increasingly sophisticated in the execution of complex tasks. However, the ability to act in dynamically changing environments still requires development. To advance this, research has turned towards understanding the human brain and applying that understanding to robotics. The present study used electroencephalogram (EEG) data recorded from 54 human participants whilst they performed a two-choice task. A build-up of motor activity starting around 400 ms before response onset, known as the lateralized readiness potential (LRP), was observed. This indicates that actions are not simply binary processes; rather, response preparation is gradual and occurs in a temporal window that can interact with the environment. In parallel, a robot arm executing a pick-and-place task was developed. The insights from the EEG data and the robot arm were integrated into the final system, which included cell assemblies (CAs), a simulated spiking neural network, to inform the robot whether to place the object left or right. Results showed that the neural data from the robot simulation were largely consistent with the human data. This neurorobotics study provides an example of how to integrate human brain recordings with simulated neural networks in order to drive a robot.
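The LRP described in this abstract is conventionally computed as a double subtraction over the motor electrodes C3 and C4, averaging contralateral-minus-ipsilateral activity across left- and right-hand trials. A minimal sketch with synthetic data (the electrode names are standard EEG conventions; all signal parameters here are invented for illustration, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic EEG: (trials, samples) for motor electrodes C3 (left hemisphere)
# and C4 (right hemisphere). Assumed setup: 54 trials, 1000 ms epochs at
# 1 kHz, response onset at sample 700.
n_trials, n_samples = 54, 1000
t = np.arange(n_samples)

def synth(contra_gain):
    """Noise plus a negative ramp starting ~400 ms before the response."""
    ramp = np.clip((t - 300) / 400.0, 0, 1) * -contra_gain
    return ramp + rng.normal(0, 0.5, (n_trials, n_samples))

# The electrode contralateral to the responding hand carries the build-up.
c3_right, c4_right = synth(2.0), synth(0.2)   # right-hand trials
c4_left, c3_left = synth(2.0), synth(0.2)     # left-hand trials

# Double subtraction removes lateralized activity unrelated to the response.
lrp = 0.5 * ((c3_right - c4_right).mean(axis=0)
             + (c4_left - c3_left).mean(axis=0))
```

The resulting `lrp` trace is near zero early in the epoch and goes clearly negative in the window just before the response, which is the gradual response-preparation signature the abstract describes.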


Subject(s)
Robotics , Humans , Robotics/methods , Neural Networks, Computer , Brain/physiology , Electroencephalography , Computer Simulation
2.
J Comput Neurosci ; 48(3): 299-316, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32715350

ABSTRACT

Networks of spiking neurons can have persistently firing stable bump attractors that represent continuous spaces (such as temperature). This can be done with a topology of local excitatory synapses and local surround inhibitory synapses. Activating large ranges in the attractor can lead to multiple bumps that show repeller and attractor dynamics; however, these bumps can be merged by overcoming the repeller dynamics. A simple associative memory can include these bump attractors, allowing the use of continuous variables in these memories, and these associations can be learned by Hebbian rules. These simulations are related to biological networks, showing that this is a step toward a more complete neural cognitive associative memory.


Subject(s)
Action Potentials/physiology , Association , Memory/physiology , Models, Neurological , Neurons/physiology , Computer Simulation , Humans
3.
Front Neurorobot ; 12: 79, 2018.
Article in English | MEDLINE | ID: mdl-30534068

ABSTRACT

The best way to develop an AI that passes the Turing test is to follow the human model: an embodied agent that functions over a wide range of domains, is a human cognitive model, follows human neural functioning, and learns. These properties will endow the agent with the deep semantics required to pass the test. An embodied agent functioning over a wide range of domains is needed so that it can be exposed to, and learn the semantics of, those domains. Following human cognitive and neural functioning simplifies the search for sufficiently sophisticated mechanisms by reusing mechanisms that are already known to be sufficient. This is a difficult task, but initial steps have been taken, including the development of CABots, neural agents embodied in virtual environments. Several different CABots run in response to natural language commands, performing a cognitive mapping task. These initial agents are quite some distance from passing the test, and developing an agent that passes will require broad collaboration. Several next steps are proposed; these could be integrated using, for instance, the Platforms of the Human Brain Project as a foundation for this collaboration.

4.
Behav Brain Sci ; 39: e78, 2016 Jan.
Article in English | MEDLINE | ID: mdl-27561969

ABSTRACT

Humans process language with their neurons. Memory in neurons is supported by neural firing and by short- and long-term synaptic weight change; the emergent behaviour of neurons, synchronous firing, and cell assembly dynamics is also a form of memory. As the language signal moves to later stages, it is processed with different mechanisms that are slower but more persistent.


Subject(s)
Language , Neurons , Humans , Memory , Models, Neurological
5.
Cogn Neurodyn ; 8(4): 299-311, 2014 Aug.
Article in English | MEDLINE | ID: mdl-25009672

ABSTRACT

A system with some degree of biological plausibility is developed to categorise items from a widely used machine learning benchmark. The system uses fatiguing leaky integrate-and-fire neurons, a relatively coarse point model that roughly duplicates biological spiking properties; this allows spontaneous firing based on hypo-fatigue, so that neurons not directly stimulated by the environment may be included in the circuit. A novel compensatory Hebbian learning algorithm is used that considers the total synaptic weight coming into a neuron. The network is unsupervised and entirely self-organising. It is relatively effective as a machine learning algorithm, categorising with just neurons, and its performance is comparable with a Kohonen map. However, the learning algorithm is not stable, and behaviour decays as the length of training increases. Variables including learning rate, inhibition and topology are explored, leading to stable systems driven by the environment. The model is thus a reasonable next step toward a full neural memory model.
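A fatiguing leaky integrate-and-fire neuron extends the standard LIF point model with an activity-dependent threshold: each spike raises fatigue, each quiet cycle lowers it, and allowing fatigue to dip below zero (hypo-fatigue) lets weakly driven neurons fire spontaneously. A minimal sketch; all constants are illustrative, not the paper's:

```python
class FLIFNeuron:
    """Minimal fatiguing leaky integrate-and-fire neuron (illustrative)."""

    def __init__(self, decay=0.8, threshold=1.0,
                 fatigue_up=0.3, fatigue_down=0.05, fatigue_floor=-0.5):
        self.decay = decay                  # membrane leak per cycle
        self.threshold = threshold          # base firing threshold
        self.fatigue_up = fatigue_up        # fatigue added per spike
        self.fatigue_down = fatigue_down    # recovery per quiet cycle
        self.fatigue_floor = fatigue_floor  # hypo-fatigue lower bound
        self.v = 0.0
        self.fatigue = 0.0

    def step(self, current):
        self.v = self.decay * self.v + current
        if self.v >= self.threshold + self.fatigue:
            self.v = 0.0                    # reset after spiking
            self.fatigue += self.fatigue_up # firing raises the threshold
            return True
        # Quiet cycle: fatigue recovers, and may dip below zero
        # (hypo-fatigue), lowering the effective threshold.
        self.fatigue = max(self.fatigue - self.fatigue_down,
                           self.fatigue_floor)
        return False

# Constant drive: fatigue throttles firing into an intermittent pattern.
n = FLIFNeuron()
spikes = [n.step(0.6) for _ in range(50)]

# Hypo-fatigue: after a long quiet period, a drive too weak to reach the
# base threshold can still elicit spontaneous spikes.
n2 = FLIFNeuron()
for _ in range(20):
    n2.step(0.0)
weak = [n2.step(0.18) for _ in range(20)]
```

The second demonstration is the hypo-fatigue mechanism the abstract mentions: a neuron that would stay silent at its base threshold fires once its fatigue has fallen below zero.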

6.
Biol Cybern ; 107(3): 263-88, 2013 Jun.
Article in English | MEDLINE | ID: mdl-23559034

ABSTRACT

Since the cell assembly (CA) was hypothesised, it has gained substantial support and is believed to be the neural basis of psychological concepts. A CA is a relatively small set of connected neurons that, through neural firing, can sustain activation without stimulus from outside the CA, and is formed by learning. Extensive evidence from multiple single-unit recording and other techniques supports the existence of CAs with these properties, and shows that their neurons also spike with some degree of synchrony. Since the evidence is so broad and deep, the review concludes that CAs are all but certain. A model of CAs is introduced that is informal, but broad enough to include, e.g., synfire chains, without including, e.g., holographic reduced representation. CAs are found in most cortical areas and in some sub-cortical areas; they are involved in psychological tasks including categorisation, short-term memory and long-term memory, and are central to other tasks including working memory. There is currently insufficient evidence to conclude that CAs are the neural basis of all concepts. A range of models have been used to simulate CA behaviour, including associative memory and more process-oriented tasks such as natural language parsing. Questions involving CAs, e.g. memory persistence and CAs' complex interactions with brain waves and learning, remain unanswered. CA research involves a wide range of disciplines including biology and psychology, and this paper reviews literature directly related to the CA, providing a basis of discussion for this interdisciplinary community on this important topic. Hopefully, this discussion will lead to more formal and accurate models of CAs that are better linked to neuropsychological data.


Subject(s)
Association Learning/physiology , Memory/physiology , Models, Neurological , Neurons/physiology , Animals , Humans
7.
Neural Comput ; 24(7): 1906-25, 2012 Jul.
Article in English | MEDLINE | ID: mdl-22428590

ABSTRACT

A neurocomputational model based on emergent massively overlapping neural cell assemblies (CAs) for resolving prepositional phrase (PP) attachment ambiguity is described. PP attachment ambiguity is a well-studied task in natural language processing and is a case where semantics is used to determine the syntactic structure. A large network of biologically plausible fatiguing leaky integrate-and-fire neurons is trained with semantic hierarchies (obtained from WordNet) on sentences with PP attachment ambiguity extracted from the Penn Treebank corpus. During training, overlapping CAs representing semantic similarities between the component words of the ambiguous sentences emerge and then act as categorizers for novel input. The resulting average resolution accuracy of 84.56% is on par with known machine learning algorithms.
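The role of the overlapping CAs can be caricatured with sets of neuron indices: semantically similar words share neurons, and a novel word is routed to whichever trained attachment class its assembly overlaps most. A toy sketch with invented vocabulary and overlaps, not the paper's trained network:

```python
# Each word's cell assembly is a set of neuron indices; semantically
# related words share neurons (e.g. the instruments share 0-9 and the
# time words share 40-49). All assignments here are invented.
ca = {
    "spoon":  set(range(0, 15)),
    "fork":   set(range(0, 10)) | set(range(20, 25)),
    "monday": set(range(40, 55)),
    "friday": set(range(40, 50)) | set(range(60, 65)),
}

# Toy training outcome: instrument-headed PPs ("ate ... with a spoon")
# attached to the verb, time-headed PPs attached elsewhere.
verb_attach = ca["spoon"]
noun_attach = ca["monday"]

def attach(word):
    """Attach a novel PP head by cell-assembly overlap."""
    a = ca[word]
    overlap_v = len(a & verb_attach)
    overlap_n = len(a & noun_attach)
    return "verb" if overlap_v > overlap_n else "noun"
```

With these assignments, `attach("fork")` resolves to verb attachment and `attach("friday")` to noun attachment, because each novel word's assembly shares neurons only with its own semantic class.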


Subject(s)
Algorithms , Models, Neurological , Natural Language Processing , Neural Networks, Computer , Neurons/physiology , Brain/physiology , Computer Simulation , Humans , Learning/physiology , Semantics
8.
Cogn Neurodyn ; 3(4): 317-30, 2009 Dec.
Article in English | MEDLINE | ID: mdl-19301147

ABSTRACT

A natural language parser implemented entirely in simulated neurons is described. It produces a semantic representation based on frames. It parses solely using simulated fatiguing Leaky Integrate and Fire neurons, which form a relatively accurate biological model that can be simulated efficiently. The model works in discrete cycles that each simulate 10 ms of biological time, so the parser has a simple mapping to psychological parsing time. Comparisons to human parsing studies show that the parser closely approximates these data. The parser makes use of Cell Assemblies, and the semantics of lexical items is represented by overlapping hierarchical Cell Assemblies, so that semantically related items share neurons. This semantic encoding is used to resolve prepositional phrase attachment ambiguities encountered during parsing. Consequently, the parser provides a neurally based cognitive model of parsing.
