1.
IEEE Trans Neural Netw Learn Syst ; 33(6): 2654-2663, 2022 06.
Article in English | MEDLINE | ID: mdl-34570710

ABSTRACT

In this article, we propose a novel architecture called the hierarchical-task reservoir (HTR), suitable for real-time applications in which different levels of abstraction are available. We apply it to semantic role labeling (SRL) based on continuous speech recognition. Taking inspiration from the brain, which exhibits hierarchies of representations from perceptual to integrative areas, we consider a hierarchy of four subtasks with increasing levels of abstraction (phone, word, part-of-speech (POS), and semantic role tags). These tasks are progressively learned by the layers of the HTR architecture. Interestingly, quantitative and qualitative results show that the hierarchical-task approach improves prediction. In particular, the qualitative results show that neither a shallow nor a hierarchical reservoir, considered as baselines, produces estimates as good as the HTR model does. Moreover, we show that the accuracy of the model can be further improved by designing skip connections and by considering word embeddings (WE) in the internal representations. Overall, the HTR outperformed the other state-of-the-art reservoir-based approaches and proved extremely efficient compared with typical recurrent neural networks (RNNs) in deep learning (DL), e.g., long short-term memory (LSTM) networks. The HTR architecture is proposed as a step toward modeling the online and hierarchical processes at work in the brain during language comprehension.
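The layered training scheme described in the abstract can be sketched with a toy stack of echo state networks, where each layer's trained readout feeds (together with the raw input, as a skip connection) into the next layer. All sizes, targets, and the ridge-regression readout here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n_in, n_res, spectral_radius=0.9):
    """Random fixed input and recurrent weights (never trained)."""
    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
    W = rng.normal(0, 1, (n_res, n_res))
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
    return W_in, W

def run_reservoir(W_in, W, inputs, leak=0.3):
    """Leaky-integrator state update over an input sequence."""
    states = np.zeros((len(inputs), W.shape[0]))
    x = np.zeros(W.shape[0])
    for t, u in enumerate(inputs):
        x = (1 - leak) * x + leak * np.tanh(W_in @ u + W @ x)
        states[t] = x
    return states

def train_readout(states, targets, ridge=1e-6):
    """Ridge regression: the only trained weights in reservoir computing."""
    return np.linalg.solve(states.T @ states + ridge * np.eye(states.shape[1]),
                           states.T @ targets)

# Toy hierarchy: layer k is trained on task k, and its prediction is fed
# (with the raw input) to layer k+1, mirroring the phone -> word -> POS ->
# semantic-role progression. Targets are random placeholders here.
T, n_in = 200, 8
u = rng.normal(0, 1, (T, n_in))
layer_targets = [rng.normal(0, 1, (T, d)) for d in (4, 5, 3, 2)]

feed = u
for targets in layer_targets:
    W_in, W = make_reservoir(feed.shape[1], 100)
    states = run_reservoir(W_in, W, feed)
    W_out = train_readout(states, targets)
    pred = states @ W_out
    feed = np.concatenate([u, pred], axis=1)  # skip connection from the input
```

The skip connection in the last line is what lets deeper layers see the raw input as well as the previous layer's estimate, one of the design choices the abstract reports as improving accuracy.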


Subject(s)
Semantics , Speech , Brain , Neural Networks, Computer
3.
Neural Comput ; 32(1): 153-181, 2020 01.
Article in English | MEDLINE | ID: mdl-31703171

ABSTRACT

Gated working memory is defined as the capacity to hold arbitrary information at any time so that it can be used at a later time. Based on electrophysiological recordings, several computational models have tackled the problem using dedicated and explicit mechanisms. We propose instead to consider an implicit mechanism based on a random recurrent neural network. We introduce a robust yet simple reservoir model of gated working memory with instantaneous updates. The model is able to store an arbitrary real value at a random time over an extended period. The dynamics of the model form a line attractor that learns to exploit reentry and a nonlinearity during the training phase using only a few representative values. A deeper study of the model shows that there is actually a large range of hyperparameters (e.g., number of neurons, sparsity, global weight scaling) for which the results hold, such that any large enough population mixing excitatory and inhibitory neurons can quickly learn to realize such gated working memory. In a nutshell, with a minimal set of hypotheses, we show that we can have a robust model of working memory. This suggests that gated working memory could be an implicit property of any random population, acquired through learning. Furthermore, considering working memory to be a physically open but functionally closed system, we account for some counterintuitive electrophysiological recordings.
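The task setup can be sketched as follows: the reservoir receives a value channel and a gate channel, and the readout must output the value seen at the most recent gate. This is a minimal sketch assuming a standard echo state network with a ridge-regression readout and teacher-forced output feedback; the reservoir size, weight scales, and gate probability are illustrative, not the published model's parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

# Gated working-memory task: two input channels (value, gate).
# The target output is the value seen at the most recent gate = 1.
T = 500
value = rng.uniform(-1, 1, T)
gate = (rng.uniform(0, 1, T) < 0.05).astype(float)
gate[0] = 1.0  # ensure the target is defined from t = 0
target = np.empty(T)
held = value[0]
for t in range(T):
    if gate[t]:
        held = value[t]
    target[t] = held

# Random fixed reservoir driven by [value, gate, feedback].
n = 300
W_in = rng.uniform(-1, 1, (n, 3))
W = rng.normal(0, 1.0 / np.sqrt(n), (n, n))

# Collect states with teacher forcing: the true output of the previous
# step is fed back into the reservoir, standing in for the reentrant
# output loop that the trained model uses at test time.
states = np.zeros((T, n))
x = np.zeros(n)
fb = 0.0
for t in range(T):
    u = np.array([value[t], gate[t], fb])
    x = np.tanh(W_in @ u + W @ x)
    states[t] = x
    fb = target[t]  # teacher forcing during training

# Linear readout trained by ridge regression.
ridge = 1e-4
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(n), states.T @ target)
pred = states @ W_out
```

At test time the model's own prediction would replace `fb`, closing the loop; it is this closed feedback path that realizes the line attractor described in the abstract.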

4.
PeerJ Comput Sci ; 3: e142, 2017.
Article in English | MEDLINE | ID: mdl-34722870

ABSTRACT

Computer science offers a large set of tools for prototyping, writing, running, testing, validating, sharing, and reproducing results; however, computational science lags behind. In the best case, authors may provide their source code as a compressed archive and feel confident that their research is reproducible. But this is not exactly true. James Buckheit and David Donoho proposed more than two decades ago that an article about computational results is advertising, not scholarship: the actual scholarship is the full software environment, code, and data that produced the result. This implies new workflows, in particular in peer review. Existing journals have been slow to adapt: source code is rarely requested and is hardly ever actually executed to check that it produces the results advertised in the article. ReScience is a peer-reviewed journal that targets computational research and encourages the explicit replication of already published research, promoting new and open-source implementations in order to ensure that the original research can be replicated from its description. To achieve this goal, the whole publishing chain is radically different from that of other traditional scientific journals. ReScience resides on GitHub, where each new implementation of a computational study is made available together with comments, explanations, and software tests.

5.
Brain Lang ; 150: 54-68, 2015 Nov.
Article in English | MEDLINE | ID: mdl-26335997

ABSTRACT

Language production requires selection of the appropriate sentence structure to accommodate the communication goal of the speaker: the transmission of a particular meaning. Here we consider event meanings, in terms of predicates and thematic roles, and we address the problem that a given event can be described from multiple perspectives, which poses a problem of response selection. We present a model of response selection in sentence production that is inspired by the primate corticostriatal system. The model is implemented in the context of reservoir computing, where the reservoir, a recurrent neural network with fixed connections, corresponds to cortex, and the readout corresponds to the striatum. We demonstrate robust learning and generalization properties of the model, and show its cross-linguistic capabilities in English and Japanese. The results contribute to the argument that the corticostriatal system plays a role in response selection in language production, and to the stance that reservoir computing is a valid potential model of corticostriatal processing.


Subject(s)
Cerebral Cortex/physiology , Corpus Striatum/physiology , Language , Models, Neurological , Neural Networks, Computer , Animals , Humans , Learning/physiology , Linguistics , Models, Psychological , Primates/physiology
6.
Front Neurorobot ; 8: 16, 2014.
Article in English | MEDLINE | ID: mdl-24834050

ABSTRACT

One of the principal functions of human language is to allow people to coordinate joint action. This includes the description of events, requests for action, and their organization in time. A crucial component of language acquisition is learning the grammatical structures that allow the expression of such complex meaning related to physical events. The current research investigates the learning of grammatical constructions and their temporal organization in the context of human-robot physical interaction with the embodied sensorimotor humanoid platform, the iCub. We demonstrate three noteworthy phenomena. First, a recurrent network model is used in conjunction with this robotic platform to learn the mappings between grammatical forms and predicate-argument representations of meanings related to events, and the robot's execution of these events in time. Second, this learning mechanism functions in the inverse sense, i.e., in a language production mode, where rather than executing commanded actions, the robot describes the results of human-generated actions. Finally, we collect data from naïve subjects who interact with the robot via spoken language, and demonstrate significant learning and generalization results. This allows us to conclude that such a neural language learning system not only helps to characterize and understand some aspects of human language acquisition, but also can be useful in adaptive human-robot interaction.

7.
PLoS One ; 8(2): e52946, 2013.
Article in English | MEDLINE | ID: mdl-23383296

ABSTRACT

Sentence processing takes place in real time. Previous words in the sentence can influence the processing of the current word on the timescale of hundreds of milliseconds. Recent neurophysiological studies in humans suggest that the fronto-striatal system (frontal cortex and striatum, the major input locus of the basal ganglia) plays a crucial role in this process. The current research provides a possible explanation of how certain aspects of this real-time processing can occur, based on the dynamics of recurrent cortical networks and plasticity in the cortico-striatal system. We simulate prefrontal area BA47 as a recurrent network that receives online input about word categories during sentence processing, with plastic connections between cortex and striatum. We exploit the homology between the cortico-striatal system and reservoir computing, where recurrent frontal cortical networks are the reservoir and plastic cortico-striatal synapses are the readout. The system is trained on sentence-meaning pairs, where meaning is coded as activation in the striatum corresponding to the roles that different nouns and verbs play in the sentences. The model learns an extended set of grammatical constructions and demonstrates the ability to generalize to novel constructions. It demonstrates how, early in the sentence, parallel predictions about the meaning are made, which are then confirmed or updated as processing of the input sentence proceeds. It demonstrates how online responses to words are influenced by previous words in the sentence and by previous sentences in the discourse, providing new insight into the neurophysiology of the P600 ERP scalp response to grammatical complexity. This demonstrates that a recurrent neural network can decode grammatical structure from sentences in real time in order to generate a predictive representation of their meaning, which can provide insight into the underlying mechanisms of human cortico-striatal function in sentence processing.
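The cortex-as-reservoir, striatum-as-readout mapping with word-by-word prediction can be sketched in miniature. The word categories, the two toy constructions, and the two-dimensional "striatal" role coding below are illustrative assumptions, not the paper's corpus or coding scheme; the point is only that a readout trained at every time step yields predictions that update online as words arrive:

```python
import numpy as np

rng = np.random.default_rng(2)

# Word-category inputs arrive one per time step; the "striatal" readout
# is trained to output the sentence's role coding at every step, so the
# prediction can be tracked word by word.
categories = ["N", "V", "by"]  # illustrative category set
cat_index = {c: i for i, c in enumerate(categories)}

def one_hot(word):
    u = np.zeros(len(categories))
    u[cat_index[word]] = 1.0
    return u

# Two toy constructions with different role assignments
# (active vs. passive), coded as a 2-dim target.
sentences = [(["N", "V", "N"], np.array([1.0, 0.0])),        # agent-first
             (["N", "V", "by", "N"], np.array([0.0, 1.0]))]  # patient-first

# Random fixed "cortical" reservoir.
n = 200
W_in = rng.uniform(-1, 1, (n, len(categories)))
W = rng.normal(0, 1.0 / np.sqrt(n), (n, n))

def states_for(words):
    """Reservoir states for a sentence, one state per word."""
    x = np.zeros(n)
    out = []
    for w in words:
        x = np.tanh(W_in @ one_hot(w) + W @ x)
        out.append(x.copy())
    return np.array(out)

# Train the "corticostriatal" readout on all time steps of both sentences.
S = np.vstack([states_for(w) for w, _ in sentences])
Y = np.vstack([np.tile(m, (len(w), 1)) for w, m in sentences])
W_out = np.linalg.solve(S.T @ S + 1e-4 * np.eye(n), S.T @ Y)

# Online predictions for the passive sentence, updating with each word.
preds = states_for(["N", "V", "by", "N"]) @ W_out
```

Because the two constructions share their first two words, the early predictions are ambiguous between the two role codings; the prediction commits once the disambiguating "by" arrives, which is the kind of word-by-word confirmation or revision the abstract describes.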


Subject(s)
Corpus Striatum/physiology , Frontal Lobe/physiology , Neural Networks, Computer , Semantics , Speech Perception/physiology , Artificial Intelligence , Computer Simulation , Humans
8.
J Physiol Paris ; 105(1-3): 16-24, 2011.
Article in English | MEDLINE | ID: mdl-21939760

ABSTRACT

Categorical encoding is crucial for mastering large bodies of related sensory-motor experiences, but what is its neural substrate? In an effort to answer this question, recent single-unit recording studies in the macaque lateral prefrontal cortex (LPFC) have demonstrated two characteristic forms of neural encoding of the sequential structure of the animal's sensory-motor experience. One population of neurons encodes specific behavioral sequences. A second population encodes the sequence category (e.g. ABAB, AABB or AAAA) and does not differentiate sequences within the category (Shima, K., Isoda, M., Mushiake, H., Tanji, J., 2007. Categorization of behavioural sequences in the prefrontal cortex. Nature 445, 315-318). Interestingly, these neurons are intermingled in the lateral prefrontal cortex, not topographically segregated. Thus, LPFC may provide a neurophysiological basis for sensorimotor categorization. Here we report on a neural network simulation study that reproduces and explains these results. We model a cortical circuit composed of three layers (infragranular, granular, and supragranular) of 5 x 5 leaky integrator neurons with a sigmoidal output function, and we examine 1000 such circuits running in parallel. Crucially, the three layers are interconnected with recurrent connections, producing a dynamical system that is inherently sensitive to the spatiotemporal structure of the sequential inputs. The model is presented with 11 four-element sequences following Shima et al. We isolated one subpopulation of neurons whose activity predicts individual sequences, and a second population that predicts category independent of the specific sequence. We argue that a richly interconnected cortical circuit is capable of internally generating a neural representation of category membership, thus significantly extending the scope of recurrent network computation. In order to demonstrate that these representations can be used to create an explicit categorization capability, we introduced an additional neural structure corresponding to the striatum. We showed that, via cortico-striatal plasticity, neurons in the striatum could produce an explicit representation of both the identity of each sequence and its category membership.
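The leaky-integrator dynamics of such a circuit can be sketched as follows. The layer sizes match the abstract (three layers of 5 x 5 units), but the time constant, weight scales, and two-channel input coding are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# One recurrent circuit: three interconnected layers of 5 x 5
# leaky-integrator neurons with sigmoid output, driven by a
# four-element input sequence.
n_per_layer, n_layers = 25, 3
n = n_per_layer * n_layers
tau, dt = 10.0, 1.0
W = rng.normal(0, 0.1, (n, n))     # recurrent connections across all layers
W_in = rng.uniform(-1, 1, (n, 2))  # input projects to every unit (simplification)

def run(sequence):
    """Leaky-integrator dynamics: tau * dv/dt = -v + W @ sigmoid(v) + input."""
    v = np.zeros(n)
    outputs = []
    for u in sequence:
        v += (dt / tau) * (-v + W @ sigmoid(v) + W_in @ u)
        outputs.append(sigmoid(v))
    return np.array(outputs)

# Two sequences from different categories (ABAB vs. AABB), with the
# elements A and B coded as one-hot inputs.
A, B = np.array([1.0, 0.0]), np.array([0.0, 1.0])
r_abab = run([A, B, A, B])
r_aabb = run([A, A, B, B])
```

Because the recurrent state integrates the whole input history, the circuit's final activity differs between the two orderings even though both sequences contain the same elements; this history sensitivity is what lets downstream readouts separate sequence identity from category.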


Subject(s)
Learning/physiology , Models, Neurological , Neurons/physiology , Prefrontal Cortex/physiology , Animals , Macaca , Memory/physiology