Results 1 - 20 of 36
1.
Neural Netw ; 146: 200-219, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34894482

ABSTRACT

Despite significant improvements in contemporary machine learning, symbolic methods currently outperform artificial neural networks on tasks that involve compositional reasoning, such as goal-directed planning and logical inference. This illustrates a computational explanatory gap between cognitive and neurocomputational algorithms that obscures the neurobiological mechanisms underlying cognition and impedes progress toward human-level artificial intelligence. Because of the strong relationship between cognition and working memory control, we suggest that the cognitive abilities of contemporary neural networks are limited by biologically-implausible working memory systems that rely on persistent activity maintenance and/or temporal nonlocality. Here we present NeuroLISP, an attractor neural network that can represent and execute programs written in the LISP programming language. Unlike previous approaches to high-level programming with neural networks, NeuroLISP features a temporally-local working memory based on itinerant attractor dynamics, top-down gating, and fast associative learning, and implements several high-level programming constructs such as compositional data structures, scoped variable binding, and the ability to manipulate and execute programmatic expressions in working memory (i.e., programs can be treated as data). Our computational experiments demonstrate the correctness of the NeuroLISP interpreter, and show that it can learn non-trivial programs that manipulate complex derived data structures (multiway trees), perform compositional string manipulation operations (PCFG SET task), and implement high-level symbolic AI algorithms (first-order unification). We conclude that NeuroLISP is an effective neurocognitive controller that can replace the symbolic components of hybrid models, and serves as a proof of concept for further development of high-level symbolic programming in neural networks.
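The abstract's point that "programs can be treated as data" is a core LISP property, easiest to see outside the neural setting. Below is a hypothetical, minimal s-expression evaluator in plain Python, not NeuroLISP's neural implementation; the `quote`, `add`, and `let` operators are illustrative assumptions, not its actual instruction set.

```python
# A hypothetical, minimal s-expression evaluator. Programs are nested Python
# lists; `quote`, `add`, and `let` are illustrative operators, not NeuroLISP's
# actual instruction set.
def evaluate(expr, env):
    if isinstance(expr, str):                    # variable reference
        return env[expr]
    if not isinstance(expr, list):               # literal (e.g. a number)
        return expr
    op, *args = expr
    if op == 'quote':                            # code returned unevaluated: data
        return args[0]
    if op == 'add':
        return sum(evaluate(a, env) for a in args)
    if op == 'let':                              # scoped variable binding
        name, value, body = args
        return evaluate(body, {**env, name: evaluate(value, env)})
    raise ValueError(f'unknown operator: {op}')

program = ['let', 'x', 2, ['add', 'x', 3]]          # a program built as data
assert evaluate(program, {}) == 5                   # ...executed
assert evaluate(['quote', program], {}) == program  # ...or inspected as data
```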


Subject(s)
Artificial Intelligence; Neural Networks, Computer; Algorithms; Humans; Machine Learning; Memory, Short-Term
2.
Front Neurorobot ; 15: 744031, 2021.
Article in English | MEDLINE | ID: mdl-34970133

ABSTRACT

We present a neurocomputational controller for robotic manipulation based on the recently developed "neural virtual machine" (NVM). The NVM is a purely neural recurrent architecture that emulates a Turing-complete, purely symbolic virtual machine. We program the NVM with a symbolic algorithm that solves blocks-world restacking problems, and execute it in a robotic simulation environment. Our results show that the NVM-based controller can faithfully replicate the execution traces and performance levels of a traditional non-neural program executing the same restacking procedure. Moreover, after programming the NVM, the neurocomputational encodings of symbolic block stacking knowledge can be fine-tuned to further improve performance, by applying reinforcement learning to the underlying neural architecture.
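For reference, a conventional non-neural restacking procedure of the kind the paper benchmarks against can be sketched as a two-phase strategy (a hypothetical baseline, not the exact procedure programmed into the NVM): unstack every block onto the table, then assemble each goal stack bottom-up.

```python
# A conventional two-phase restacking strategy (hypothetical; not the exact
# procedure programmed into the NVM): unstack every block onto the table,
# then assemble each goal stack bottom-up.
def restack(initial, goal):
    """initial/goal: lists of stacks, each a bottom-to-top list of block names."""
    moves = []
    for stack in initial:                     # phase 1: clear to the table
        for block in reversed(stack[1:]):
            moves.append((block, 'table'))
    for stack in goal:                        # phase 2: rebuild bottom-up
        for below, block in zip(stack, stack[1:]):
            moves.append((block, below))
    return moves

moves = restack(initial=[['a', 'b', 'c']], goal=[['c', 'b', 'a']])
assert moves == [('c', 'table'), ('b', 'table'), ('b', 'c'), ('a', 'b')]
```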

3.
Neural Netw ; 138: 78-97, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33631609

ABSTRACT

Compositionality refers to the ability of an intelligent system to construct models out of reusable parts. This is critical for the productivity and generalization of human reasoning, and is considered a necessary ingredient for human-level artificial intelligence. While traditional symbolic methods have proven effective for modeling compositionality, artificial neural networks struggle to learn systematic rules for encoding generalizable structured models. We suggest that this is due in part to short-term memory that is based on persistent maintenance of activity patterns without fast weight changes. We present a recurrent neural network that encodes structured representations as systems of contextually-gated dynamical attractors called attractor graphs. This network implements a functionally compositional working memory that is manipulated using top-down gating and fast local learning. We evaluate this approach with empirical experiments on storage and retrieval of graph-based data structures, as well as an automated hierarchical planning task. Our results demonstrate that compositional structures can be stored in and retrieved from neural working memory without persistent maintenance of multiple activity patterns. Further, memory capacity is improved by the use of a fast store-erase learning rule that permits controlled erasure and mutation of previously learned associations. We conclude that the combination of top-down gating and fast associative learning provides recurrent neural networks with a robust functional mechanism for compositional working memory.
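The store-erase idea can be illustrated outside the attractor-network setting. The sketch below is a hypothetical one-shot linear associative memory, not the paper's gated recurrent implementation; it only shows how an outer-product write creates an association in a single step and how subtracting the same update erases it for controlled rebinding.

```python
import numpy as np

# Hypothetical one-shot linear associative memory (not the paper's gated
# attractor network): an outer-product update writes a key -> value
# association instantly, and subtracting the same update erases it so the
# key can be rebound to a new value.
rng = np.random.default_rng(5)
n = 64
key = rng.choice([-1.0, 1.0], size=n)
val_old = rng.choice([-1.0, 1.0], size=n)
val_new = rng.choice([-1.0, 1.0], size=n)

W = np.outer(val_old, key) / n          # fast store: one-shot Hebbian write
assert np.allclose(W @ key, val_old)    # the key retrieves the stored value

W -= np.outer(val_old, key) / n         # controlled erasure of the association
W += np.outer(val_new, key) / n         # mutation: rebind the key
assert np.allclose(W @ key, val_new)
```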


Subject(s)
Machine Learning; Humans; Memory, Short-Term; Models, Neurological
4.
Neural Netw ; 119: 10-30, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31376635

ABSTRACT

We present a neural architecture that uses a novel local learning rule to represent and execute arbitrary, symbolic programs written in a conventional assembly-like language. This Neural Virtual Machine (NVM) is purely neurocomputational but supports all of the key functionality of a traditional computer architecture. Unlike other programmable neural networks, the NVM uses principles such as fast non-iterative local learning, distributed representation of information, program-independent circuitry, itinerant attractor dynamics, and multiplicative gating for both activity and plasticity. We present the NVM in detail, theoretically analyze its properties, and conduct empirical computer experiments that quantify its performance and demonstrate that it works effectively.


Subject(s)
Machine Learning; Neural Networks, Computer; Computers; Humans; Learning
5.
Hum Mov Sci ; 65, 2019 Jun.
Article in English | MEDLINE | ID: mdl-30219273

ABSTRACT

Although the human mirror neuron system (MNS) is critical for action observation and imitation, most MNS investigations overlook the visuospatial transformation processes that allow individuals to interpret and imitate actions observed from differing perspectives. This problem is not trivial since accurately reaching for and grasping an object requires a visuospatial transformation mechanism capable of precisely remapping fine motor skills where the observer's and imitator's arms and hands may have quite different orientations and sizes. Accordingly, here we describe a novel neural model to investigate the dynamics between the fronto-parietal MNS and visuospatial processes during observation and imitation of a reaching and grasping action. Our model encompasses i) the inferior frontal gyrus (IFG) and inferior parietal lobule (IPL), regions that are postulated to produce neural drive and sensory predictions, respectively; ii) the middle temporal (MT) and middle superior temporal (MST) regions that are postulated to process visual motion of a particular action; and iii) the superior parietal lobule (SPL) and intra-parietal sulcus (IPS) that are hypothesized to encode the visuospatial transformations enabling action observation/imitation based on different visuospatial viewpoints. The results reveal that when a demonstrator executes an action, an imitator can reproduce it with similar kinematics, independently of differences in anthropometry, distance, and viewpoint. As with prior empirical findings, similar model synaptic activity was observed during both action observation and execution along with the existence of both view-independent and view-dependent neural populations in the frontal MNS. Importantly, this work generates testable behavioral and neurophysiological predictions. 
Namely, the model predicts that i) during observation/imitation the response time increases linearly as the rotation angle of the observed action increases but remains similar for clockwise and counterclockwise rotations, and ii) the IPL embeds essentially view-independent neurons while the SPL/IPS includes both view-independent and view-dependent neurons. Overall, this work suggests that MT/MST visuomotion processes combined with the SPL/IPS allow the MNS to observe and imitate actions independently of demonstrator-imitator spatial relationships.


Subject(s)
Imitative Behavior/physiology; Mirror Neurons/physiology; Models, Neurological; Space Perception/physiology; Brain Mapping/methods; Hand/physiology; Humans; Learning/physiology; Motor Skills/physiology; Nerve Net/physiology; Parietal Lobe/physiology; Psychomotor Performance/physiology; Reaction Time/physiology; Temporal Lobe/physiology
6.
IEEE Trans Neural Netw Learn Syst ; 29(8): 3636-3646, 2018 08.
Article in English | MEDLINE | ID: mdl-28858815

ABSTRACT

We introduce mathematical objects that we call "directional fibers," and show how they enable a new strategy for systematically locating fixed points in recurrent neural networks. We analyze this approach mathematically and use computer experiments to show that it consistently locates many fixed points in many networks with arbitrary sizes and unconstrained connection weights. Comparison with a traditional method shows that our strategy is competitive and complementary, often finding larger and distinct sets of fixed points. We provide theoretical groundwork for further analysis and suggest next steps for developing the method into a more powerful solver.
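Directional-fiber traversal itself is not reproduced here, but the problem it addresses can be made concrete. This sketch applies Newton's method to the fixed-point residual of a small recurrent network, the kind of traditional local solver the paper compares against; the mild weight gain chosen here makes the origin the unique fixed point, whereas directional fibers target the harder regime with many fixed points.

```python
import numpy as np

# Newton's method on the fixed-point residual f(v) = tanh(W v) - v. The mild
# weight gain makes the update map a contraction, so the unique fixed point is
# the origin; directional fibers target the regime with many fixed points.
rng = np.random.default_rng(0)
N = 5
W = rng.normal(scale=0.3 / np.sqrt(N), size=(N, N))

def residual(v):
    return np.tanh(W @ v) - v

def jacobian(v):
    d = 1.0 - np.tanh(W @ v) ** 2        # elementwise tanh' at pre-activations
    return d[:, None] * W - np.eye(N)    # J = diag(d) W - I

v = rng.normal(size=N)
for _ in range(50):
    v = v - np.linalg.solve(jacobian(v), residual(v))

assert np.linalg.norm(residual(v)) < 1e-8   # converged to a fixed point
```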

7.
Front Robot AI ; 5: 1, 2018.
Article in English | MEDLINE | ID: mdl-33500888

ABSTRACT

While the concept of a conscious machine is intriguing, producing such a machine remains controversial and challenging. Here, we describe how our work on creating a humanoid cognitive robot that learns to perform tasks via imitation learning relates to this issue. Our discussion is divided into three parts. First, we summarize our previous framework for advancing the understanding of the nature of phenomenal consciousness. This framework is based on identifying computational correlates of consciousness. Second, we describe a cognitive robotic system that we recently developed that learns to perform tasks by imitating human-provided demonstrations. This humanoid robot uses cause-effect reasoning to infer a demonstrator's intentions in performing a task, rather than just imitating the observed actions verbatim. In particular, its cognitive components center on top-down control of a working memory that retains the explanatory interpretations that the robot constructs during learning. Finally, we describe our ongoing work that is focused on converting our robot's imitation learning cognitive system into purely neurocomputational form, including both its low-level cognitive neuromotor components, its use of working memory, and its causal reasoning mechanisms. Based on our initial results, we argue that the top-down cognitive control of working memory, and in particular its gating mechanisms, is an important potential computational correlate of consciousness in humanoid robots. We conclude that developing high-level neurocognitive control systems for cognitive robots and using them to search for computational correlates of consciousness provides an important approach to advancing our understanding of consciousness, and that it provides a credible and achievable route to ultimately developing a phenomenally conscious machine.

8.
Neural Netw ; 85: 165-181, 2017 Jan.
Article in English | MEDLINE | ID: mdl-27855307

ABSTRACT

Inspired by the oscillatory nature of cerebral cortex activity, we recently proposed and studied self-organizing maps (SOMs) based on limit cycle neural activity in an attempt to improve the information efficiency and robustness of conventional single-node, single-pattern representations. Here we explore for the first time the use of limit cycle SOMs to build a neural architecture that controls a robotic arm by solving inverse kinematics in reach-and-hold tasks. This multi-map architecture integrates open-loop and closed-loop controls that learn to self-organize oscillatory neural representations and to harness non-fixed-point neural activity even for fixed-point arm reaching tasks. We show through computer simulations that our architecture generalizes well, achieves accurate, fast, and smooth arm movements, and is robust in the face of arm perturbations, map damage, and variations of internal timing parameters controlling the flow of activity. A robotic implementation is evaluated successfully without further training, demonstrating for the first time that limit cycle maps can control a physical robot arm. We conclude that architectures based on limit cycle maps can be organized to function effectively as neural controllers.


Subject(s)
Arm/physiology; Neural Networks, Computer; Robotics/methods; Algorithms; Biomechanical Phenomena; Computer Simulation; Humans; Machine Learning
9.
Bioinspir Biomim ; 11(3): 036013, 2016 May 19.
Article in English | MEDLINE | ID: mdl-27194213

ABSTRACT

The human hand's versatility allows for robust and flexible grasping. To obtain such efficiency, many robotic hands include human biomechanical features such as fingers having their two last joints mechanically coupled. Although such coupling enables human-like grasping, controlling the inverse kinematics of such mechanical systems is challenging. Here we propose a cortical model for fine motor control of a humanoid finger, having its two last joints coupled, that learns the inverse kinematics of the effector. This neural model functionally mimics the population vector coding as well as sensorimotor prediction processes of the brain's motor/premotor and parietal regions, respectively. After learning, this neural architecture could both overtly (actual execution) and covertly (mental execution or motor imagery) perform accurate, robust and flexible finger movements while reproducing the main human finger kinematic states. This work contributes to developing neuro-mimetic controllers for dexterous humanoid robotic/prosthetic upper-extremities, and has the potential to promote human-robot interactions.


Subject(s)
Biomimetics/instrumentation; Finger Joint/physiology; Fingers/physiology; Motor Cortex/physiology; Nerve Net/physiology; Robotics/instrumentation; Animals; Biomechanical Phenomena; Biomimetics/methods; Computer Simulation; Computer-Aided Design; Equipment Design; Equipment Failure Analysis; Feedback, Physiological/physiology; Finger Joint/innervation; Fingers/innervation; Hand Strength/physiology; Humans; Models, Neurological; Neural Networks, Computer; Robotics/methods
10.
Comput Intell Neurosci ; 2015: 642429, 2015.
Article in English | MEDLINE | ID: mdl-26346488

ABSTRACT

Optimizing a neural network's topology is a difficult problem for at least two reasons: the topology space is discrete, and the quality of any given topology must be assessed by assigning many different sets of weights to its connections. These two characteristics tend to cause very "rough" objective functions. Here we demonstrate how self-assembly (SA) and particle swarm optimization (PSO) can be integrated to provide a novel and effective means of concurrently optimizing a neural network's weights and topology. Combining SA and PSO addresses two key challenges. First, it creates a more integrated representation of neural network weights and topology so that we have just a single, continuous search domain that permits "smoother" objective functions. Second, it extends the traditional focus of self-assembly, from the growth of predefined target structures, to functional self-assembly, in which growth is driven by optimality criteria defined in terms of the performance of emerging structures on predefined computational problems. Our model incorporates a new way of viewing PSO that involves a population of growing, interacting networks, as opposed to particles. The effectiveness of our method for optimizing echo state network weights and topologies is demonstrated through its performance on a number of challenging benchmark problems.
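The PSO ingredient on its own can be sketched briefly. Below is a standard global-best PSO minimizing a toy quadratic that stands in for network error; the paper's actual contribution, integrating this with self-assembly over network topologies, is not reproduced.

```python
import numpy as np

# Standard global-best PSO on a toy objective. The sphere function stands in
# for network error; the paper's integration with self-assembly over
# topologies is not reproduced here.
rng = np.random.default_rng(1)

def sphere(x):
    return float(np.sum(x ** 2))

n_particles, dim = 20, 4
pos = rng.uniform(-5, 5, size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()                              # per-particle best positions
pbest_val = np.array([sphere(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()      # swarm-wide best position

w, c1, c2 = 0.7, 1.5, 1.5                       # inertia, cognitive, social
for _ in range(200):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([sphere(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved] = pos[improved]
    pbest_val[improved] = vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

assert sphere(gbest) < 1e-2
```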


Subject(s)
Algorithms; Models, Theoretical; Neural Networks, Computer; Pattern Recognition, Automated/methods; Computer Simulation
11.
Neural Netw ; 63: 208-22, 2015 Mar.
Article in English | MEDLINE | ID: mdl-25562568

ABSTRACT

Recent efforts to develop large-scale brain and neurocognitive architectures have paid relatively little attention to the use of self-organizing maps (SOMs). Part of the reason for this is that most conventional SOMs use a static encoding representation: each input pattern or sequence is effectively represented as a fixed point activation pattern in the map layer, something that is inconsistent with the rhythmic oscillatory activity observed in the brain. Here we develop and study an alternative encoding scheme that instead uses sparsely-coded limit cycles to represent external input patterns/sequences. We establish conditions under which learned limit cycle representations arise reliably and dominate the dynamics in a SOM. These limit cycles tend to be relatively unique for different inputs, robust to perturbations, and fairly insensitive to timing. In spite of the continually changing activity in the map layer when a limit cycle representation is used, map formation continues to occur reliably. In a two-SOM architecture where each SOM represents a different sensory modality, we also show that after learning, limit cycles in one SOM can correctly evoke corresponding limit cycles in the other, and thus there is the potential for multi-SOM systems using limit cycles to work effectively as hetero-associative memories. While the results presented here are only first steps, they establish the viability of SOM models based on limit cycle activity patterns, and suggest that such models merit further study.
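For contrast with the limit-cycle scheme, the conventional static encoding can be sketched in a few lines. This is a bare winner-take-all map update with the SOM's neighborhood cooperation omitted for brevity, so each input settles on a single fixed best-matching unit, exactly the single-node, fixed-point representation the abstract argues against.

```python
import numpy as np

# Static-encoding baseline: winner-take-all map learning (SOM neighborhood
# cooperation omitted for brevity). Each input is represented by a single
# fixed best-matching unit rather than by a limit cycle of activity.
rng = np.random.default_rng(6)
n_units, dim = 10, 2
weights = rng.uniform(size=(n_units, dim))

def train_step(x, lr=0.1):
    bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))  # best-matching unit
    weights[bmu] += lr * (x - weights[bmu])
    return bmu

data = rng.uniform(size=(500, dim))
for x in data:
    train_step(x)

# After training, inputs sit close to their winning units on average.
errs = [np.min(np.linalg.norm(weights - x, axis=1)) for x in data]
assert np.mean(errs) < 0.5
```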


Subject(s)
Algorithms; Models, Neurological; Neural Networks, Computer; Brain/physiology
12.
Artif Life ; 21(1): 55-71, 2015.
Article in English | MEDLINE | ID: mdl-25514434

ABSTRACT

The idea that there is an edge of chaos, a region in the space of dynamical systems having special meaning for complex living entities, has a long history in artificial life. The significance of this region was first emphasized in cellular automata models when a single simple measure, λCA, identified it as a transitional region between order and chaos. Here we introduce a parameter λNN that is inspired by λCA but is defined for recurrent neural networks. We show through a series of systematic computational experiments that λNN generally orders the dynamical behaviors of randomly connected/weighted recurrent neural networks in the same way that λCA does for cellular automata. By extending this ordering to larger values of λNN than has typically been done with λCA and cellular automata, we find that a second edge-of-chaos region exists on the opposite side of the chaotic region. These basic results are found to hold under different assumptions about network connectivity, but vary substantially in their details. The results show that the basic concept underlying the lambda parameter can usefully be extended to other types of complex dynamical systems than just cellular automata.
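The precise definition of λNN is not reproduced here, but the order/chaos distinction such a parameter indexes can be probed generically: follow two nearby trajectories of a random tanh network and measure whether their separation shrinks or grows. The gains 0.5 and 3.0 below are illustrative choices on either side of the transition.

```python
import numpy as np

# Generic perturbation-growth probe (not the paper's lambda_NN itself): at low
# weight gain nearby trajectories of a random tanh network converge (order);
# at high gain they diverge (chaos).
def divergence(gain, seed, N=100, T=200):
    rng = np.random.default_rng(seed)
    W = gain * rng.normal(scale=1.0 / np.sqrt(N), size=(N, N))
    v = rng.normal(size=N)
    u = v + 1e-8 * rng.normal(size=N)        # a nearby starting state
    for _ in range(T):
        v, u = np.tanh(W @ v), np.tanh(W @ u)
    return np.linalg.norm(u - v)

assert divergence(0.5, seed=0) < 1e-8        # ordered: perturbation dies out
assert divergence(3.0, seed=0) > 1e-3        # chaotic: perturbation amplified
```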

13.
Article in English | MEDLINE | ID: mdl-25570507

ABSTRACT

Dexterous arm reaching movements are a critical feature that allow human interactions with tools, the environment, and socially with others. Thus the development of a neural architecture providing unified mechanisms for actual, mental, observed and imitated actions could enhance robot performance, enhance human-robot social interactions, and inform specific human brain processes. Here we present a model, including a fronto-parietal network that implements sensorimotor transformations (inverse kinematics, workspace visuo-spatial rotations), for self-intended and imitation performance. Our findings revealed that this neural model can perform accurate and robust 3D actual/mental arm reaching while reproducing human-like kinematics. Also, using visuo-spatial remapping, the neural model can imitate arm reaching independently of a demonstrator-imitator viewpoint. This work is a first step towards providing the basis of a future neural architecture for combining cognitive and sensorimotor processing levels that will allow for multi-level mental simulation when executing actual, mental, observed, and imitated actions for dexterous arm movements.


Subject(s)
Arm/physiology; Biomechanical Phenomena/physiology; Brain/physiology; Models, Neurological; Humans
14.
Evol Psychol ; 11(3): 470-92, 2013 Jul 18.
Article in English | MEDLINE | ID: mdl-23864291

ABSTRACT

Comparative studies of language are difficult because few language precursors are recognized. In this paper we propose a framework for designing experiments that test for structural and semantic patterns indicative of simple or complex grammars as originally described by Chomsky. We argue that a key issue is whether animals can recognize full recursion, which is the hallmark of context-free grammar. We discuss limitations of recent experiments that have attempted to address this issue, and point out that experiments aimed at detecting patterns that follow a Fibonacci series have advantages over other artificial context-free grammars. We also argue that experiments using complex sequences of behaviors could, in principle, provide evidence for fully recursive thought. Some of these ideas could also be approached using artificial life simulations, which have the potential to reveal the types of evolutionary transitions that could occur over time. Because the framework we propose has specific memory and computational requirements, future experiments could target candidate genes with the goal of revealing the genetic underpinnings of complex cognition.
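Fibonacci-patterned stimuli of the kind described above can be illustrated with the classic Fibonacci word rewriting system (A → AB, B → A); whether this standard construction matches the paper's exact proposed stimuli is an assumption. The lengths of the strings after successive rewrites follow the Fibonacci sequence.

```python
# Fibonacci word rewriting system: A -> AB, B -> A. Whether this standard
# construction matches the paper's exact stimuli is an assumption; it shows
# the kind of pattern whose lengths follow a Fibonacci series.
def rewrite(s):
    return ''.join('AB' if ch == 'A' else 'A' for ch in s)

s = 'A'
lengths = [len(s)]
for _ in range(8):
    s = rewrite(s)
    lengths.append(len(s))

assert lengths == [1, 2, 3, 5, 8, 13, 21, 34, 55]   # Fibonacci numbers
```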


Subject(s)
Language; Linguistics; Animals; Cognition; Humans
15.
Neural Netw ; 44: 112-31, 2013 Aug.
Article in English | MEDLINE | ID: mdl-23597599

ABSTRACT

Efforts to create computational models of consciousness have accelerated over the last two decades, creating a field that has become known as artificial consciousness. There have been two main motivations for this controversial work: to develop a better scientific understanding of the nature of human/animal consciousness and to produce machines that genuinely exhibit conscious awareness. This review begins by briefly explaining some of the concepts and terminology used by investigators working on machine consciousness, and summarizes key neurobiological correlates of human consciousness that are particularly relevant to past computational studies. Models of consciousness developed over the last twenty years are then surveyed. These models are largely found to fall into five categories based on the fundamental issue that their developers have selected as being most central to consciousness: a global workspace, information integration, an internal self-model, higher-level representations, or attention mechanisms. For each of these five categories, an overview of past work is given, a representative example is presented in some detail to illustrate the approach, and comments are provided on the contributions and limitations of the methodology. Three conclusions are offered about the state of the field based on this review: (1) computational modeling has become an effective and accepted methodology for the scientific study of consciousness, (2) existing computational models have successfully captured a number of neurobiological, cognitive, and behavioral correlates of conscious information processing as machine simulations, and (3) no existing approach to artificial consciousness has presented a compelling demonstration of phenomenal machine consciousness, or even clear evidence that artificial phenomenal consciousness will eventually be possible. 
The paper concludes by discussing the importance of continuing work in this area, considering the ethical issues it raises, and making predictions concerning future developments.


Subject(s)
Artificial Intelligence; Computer Simulation; Consciousness
16.
IEEE Trans Neural Netw Learn Syst ; 24(12): 1932-43, 2013 Dec.
Article in English | MEDLINE | ID: mdl-24805213

ABSTRACT

With the recent surge in availability of data sets containing not only individual attributes but also relationships, classification techniques that take advantage of predictive relationship information have gained in popularity. The most popular existing collective classification techniques have a number of limitations: some of them generate arbitrary and potentially lossy summaries of the relationship data, whereas others ignore directionality and strength of relationships. Popular existing techniques make use of only direct neighbor relationships when classifying a given entity, ignoring potentially useful information contained in expanded neighborhoods of radius greater than one. We present a new technique that we call recurrent neural collective classification (RNCC), which avoids arbitrary summarization, uses information about relationship directionality and strength, and through recursive encoding, learns to leverage larger relational neighborhoods around each entity. Experiments with synthetic data sets show that RNCC can make effective use of relationship data for both direct and expanded neighborhoods. Further experiments demonstrate that our technique outperforms previously published results of several collective classification methods on a number of real-world data sets.


Subject(s)
Algorithms; Feedback; Neural Networks, Computer; Pattern Recognition, Automated/methods
17.
Article in English | MEDLINE | ID: mdl-23366445

ABSTRACT

It has been suggested that the human mirror neuron system (MNS) plays a critical role in action observation and imitation. However, the transformation of perspective between the observed (allocentric) and the imitated (egocentric) actions has received little attention. We expand a previously proposed biologically plausible MNS model by incorporating general spatial transformation capabilities that are assumed to be encoded by the intraparietal sulcus (IPS) and the superior parietal lobule (SPL) as well as investigating their interactions with the inferior frontal gyrus and the inferior parietal lobule. The results reveal that the IPS/SPL could process the frame of reference and the viewpoint transformations, and provide invariant visual representations for the temporo-parieto-frontal circuit. This allows the imitator to imitate the action performed by a demonstrator under various perspectives while replicating results from the literatures. Our results confirm and extend the importance of perspective transformation processing during action observation and imitation.


Subject(s)
Frontal Lobe/physiology; Parietal Lobe/physiology; Brain Mapping; Humans; Imitative Behavior/physiology; Mirror Neurons/physiology; Psychomotor Performance
18.
Article in English | MEDLINE | ID: mdl-23366569

ABSTRACT

In order to approach human hand performance levels, artificial anthropomorphic hands/fingers have increasingly incorporated human biomechanical features. However, the performance of finger reaching movements to visual targets involving the complex kinematics of multi-jointed, anthropomorphic actuators is a difficult problem. This is because the relationship between sensory and motor coordinates is highly nonlinear, and also often includes mechanical coupling of the two last joints. Recently, we developed a cortical model that learns the inverse kinematics of a simulated anthropomorphic finger. Here, we expand this previous work by assessing if this cortical model is able to learn the inverse kinematics for an actual anthropomorphic humanoid finger having its two last joints coupled and controlled by pneumatic muscles. The findings revealed that single 3D reaching movements, as well as more complex patterns of motion of the humanoid finger, were accurately and robustly performed by this cortical model while producing kinematics comparable to those of humans. This work contributes to the development of a bioinspired controller providing adaptive, robust and flexible control of dexterous robotic and prosthetic hands.


Subject(s)
Biomechanical Phenomena; Fingers/physiology; Robotics; Algorithms; Feedback, Sensory; Humans; Models, Neurological; Movement/physiology
19.
Neural Netw ; 25(1): 70-83, 2012 Jan.
Article in English | MEDLINE | ID: mdl-21803542

ABSTRACT

The long short-term memory (LSTM) is a second-order recurrent neural network architecture that excels at storing sequential short-term memories and retrieving them many time-steps later. LSTM's original training algorithm provides the important properties of spatial and temporal locality, which are missing from other training approaches, at the cost of limiting its applicability to a small set of network architectures. Here we introduce the generalized long short-term memory (LSTM-g) training algorithm, which provides LSTM-like locality while being applicable without modification to a much wider range of second-order network architectures. With LSTM-g, all units have an identical set of operating instructions for both activation and learning, subject only to the configuration of their local environment in the network; this is in contrast to the original LSTM training algorithm, where each type of unit has its own activation and training instructions. When applied to LSTM architectures with peephole connections, LSTM-g takes advantage of an additional source of back-propagated error which can enable better performance than the original algorithm. Enabled by the broad architectural applicability of LSTM-g, we demonstrate that training recurrent networks engineered for specific tasks can produce better results than single-layer networks. We conclude that LSTM-g has the potential to both improve the performance and broaden the applicability of spatially and temporally local gradient-based training algorithms for recurrent neural networks.
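As background for LSTM-g's generalization, the standard LSTM cell it extends can be sketched as a single forward step (without peephole connections or learning; LSTM-g's unified per-unit activation and learning rules are not reproduced here).

```python
import numpy as np

# One forward step of a standard LSTM cell: sigmoid gates, a tanh-squashed
# candidate, and multiplicative gating of the cell state. Peephole connections
# and LSTM-g's unified per-unit rules are omitted.
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(3)
n_in, n_hid = 4, 3
# One weight matrix per gate/candidate, acting on [input; previous hidden].
Wf, Wi, Wo, Wc = [rng.normal(scale=0.5, size=(n_hid, n_in + n_hid))
                  for _ in range(4)]

def lstm_step(x, h, c):
    z = np.concatenate([x, h])
    f = sigmoid(Wf @ z)                  # forget gate
    i = sigmoid(Wi @ z)                  # input gate
    o = sigmoid(Wo @ z)                  # output gate
    c = f * c + i * np.tanh(Wc @ z)      # gated cell-state update
    return o * np.tanh(c), c             # new hidden state, new cell state

h, c = np.zeros(n_hid), np.zeros(n_hid)
for _ in range(5):
    h, c = lstm_step(rng.normal(size=n_in), h, c)

assert h.shape == (n_hid,) and np.all(np.abs(h) < 1.0)
```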


Subject(s)
Algorithms; Learning; Memory, Long-Term; Memory, Short-Term; Neural Networks, Computer; Learning/physiology; Memory, Long-Term/physiology; Memory, Short-Term/physiology; Time Factors
20.
IEEE Trans Neural Netw Learn Syst ; 23(10): 1649-58, 2012 Oct.
Article in English | MEDLINE | ID: mdl-24808009

ABSTRACT

Simple recurrent error backpropagation networks have been widely used to learn temporal sequence data, including regular and context-free languages. However, the production of relatively large and opaque weight matrices during learning has inspired substantial research on how to extract symbolic human-readable interpretations from trained networks. Unlike feedforward networks, where research has focused mainly on rule extraction, most past work with recurrent networks has viewed them as dynamical systems that can be approximated symbolically by finite-state machines (FSMs). With this approach, the network's hidden layer activation space is typically divided into a finite number of regions. Past research has mainly focused on better techniques for dividing up this activation space. In contrast, very little work has tried to influence the network training process to produce a better representation in hidden layer activation space, and that which has been done has had only limited success. Here we propose a powerful general technique to bias the error backpropagation training process so that it learns an activation space representation from which it is easier to extract FSMs. Using four publicly available data sets that are based on regular and context-free languages, we show via computational experiments that the modified learning method helps to extract FSMs with substantially fewer states and less variance than unmodified backpropagation learning, without decreasing the neural networks' accuracy. We conclude that modifying error backpropagation so that it more effectively separates learned pattern encodings in the hidden layer is an effective way to improve contemporary FSM extraction methods.
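The extraction recipe this line of work builds on can be sketched generically: run a recurrent network over input strings, partition its hidden activation space into regions, and read off a finite-state transition table. The untrained random network and crude sign-pattern partition below are stand-ins for a trained network and a proper clustering.

```python
import numpy as np

# Sketch of FSM extraction from a recurrent network. The untrained random
# network and the crude sign-pattern partition are stand-ins for a trained
# network and a proper clustering of its hidden activation space.
rng = np.random.default_rng(4)
n_hid = 4
W_h = rng.normal(scale=0.5, size=(n_hid, n_hid))
W_x = rng.normal(scale=0.5, size=(n_hid, 2))     # two input symbols, one-hot

def step(h, sym):
    return np.tanh(W_h @ h + W_x @ np.eye(2)[sym])

def quantize(h):
    return tuple(h > 0)                          # one FSM state per sign pattern

transitions = {}
h = np.zeros(n_hid)
for sym in rng.integers(0, 2, size=500):
    state = quantize(h)
    h = step(h, int(sym))
    # Conflicting observations simply overwrite each other here; real
    # extraction methods resolve such nondeterminism more carefully.
    transitions[(state, int(sym))] = quantize(h)

n_states = len({s for s, _ in transitions} | set(transitions.values()))
assert n_states <= 2 ** n_hid                    # at most one state per region
```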


Subject(s)
Algorithms; Feedback; Models, Statistical; Neural Networks, Computer; Pattern Recognition, Automated/methods; Symbolism; Computer Simulation