2.
Cognit Comput ; 15(4): 1190-1210, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37663748

ABSTRACT

Hippocampal area CA3 performs the critical auto-associative function underlying pattern completion in episodic memory. Without external inputs, the electrical activity of this neural circuit reflects the spontaneous spiking interplay among glutamatergic pyramidal neurons and GABAergic interneurons. However, the network mechanisms underlying these resting-state firing patterns are poorly understood. Leveraging the Hippocampome.org knowledge base, we developed a data-driven, large-scale spiking neural network (SNN) model of mouse CA3 with 8 neuron types, 90,000 neurons, 51 neuron-type specific connections, and 250,000,000 synapses. We instantiated the SNN in the CARLsim4 multi-GPU simulation environment using the Izhikevich and Tsodyks-Markram formalisms for neuronal and synaptic dynamics, respectively. We analyzed the resultant population activity upon transient activation. The SNN settled into stable oscillations with a biologically plausible grand-average firing frequency, which was robust across a wide range of transient activations. The diverse firing patterns of individual neuron types were consistent with existing knowledge of cell type-specific activity in vivo. Altered network structures that lacked neuron- or connection-type specificity were neither stable nor robust, highlighting the importance of neuron type circuitry. Additionally, external inputs reflecting dentate mossy fibers shifted the observed rhythms to the gamma band. We freely released the CARLsim4-Hippocampome framework on GitHub to test hippocampal hypotheses. Our SNN may be useful to investigate the circuit mechanisms underlying the computational functions of CA3. Moreover, our approach can be scaled to the whole hippocampal formation, which may contribute to elucidating how the unique neuronal architecture of this system subserves its crucial cognitive roles.
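The Izhikevich formalism named in this abstract is a standard two-variable neuron model. As a rough illustration only (not the authors' CARLsim4 implementation, and with generic regular-spiking parameters and an assumed constant input current rather than the model's actual CA3 parameterization), a single Izhikevich neuron can be simulated with forward Euler integration:

```python
# Minimal sketch of the Izhikevich (2003) neuron model with
# regular-spiking parameters: dv/dt = 0.04*v^2 + 5*v + 140 - u + I,
# du/dt = a*(b*v - u), with reset v -> c, u -> u + d when v >= 30 mV.
# Parameters and input current here are illustrative, not from the paper.

def simulate_izhikevich(I=10.0, T=1000.0, dt=0.25,
                        a=0.02, b=0.2, c=-65.0, d=8.0):
    """Simulate one neuron under constant input I; return spike times (ms)."""
    v, u = c, b * c          # membrane potential (mV) and recovery variable
    spikes = []
    t = 0.0
    while t < T:
        dv = 0.04 * v * v + 5.0 * v + 140.0 - u + I
        du = a * (b * v - u)
        v += dt * dv
        u += dt * du
        if v >= 30.0:        # spike: record time, reset v, bump recovery u
            spikes.append(t)
            v, u = c, u + d
        t += dt
    return spikes

spike_times = simulate_izhikevich()
```

With a sufficiently strong constant current the model fires tonically, which is the kind of single-cell dynamic that, composed across 90,000 neurons and type-specific synapses, produces the population oscillations described above.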

3.
Int J Technol Knowl Soc ; 19(1): 21-52, 2023.
Article in English | MEDLINE | ID: mdl-37273904

ABSTRACT

Tele-operated social robots (telerobots) offer an innovative means of allowing children who are medically restricted to their homes (MRH) to return to their local schools and physical communities. Most commercially available telerobots have three foundational features that facilitate child-robot interaction: remote mobility, synchronous two-way vision capabilities, and synchronous two-way audio capabilities. We conducted a comparative analysis between the Toyota Human Support Robot (HSR) and commercially available telerobots, focusing on these foundational features. Children who used these robots and these features on a daily basis to attend school were asked to pilot the HSR in a simulated classroom for learning activities. The HSR also has three additional features that are not available on commercial telerobots: (1) a pan-tilt camera, (2) mapping and autonomous navigation, and (3) a robot arm and gripper that allows children to "reach" into remote environments; participants were therefore also asked to evaluate the use of these features for learning experiences. To expand on earlier work on the use of telerobots by remote children, this study provides novel empirical findings on (1) the capabilities of the Toyota HSR for robot-mediated learning similar to commercially available telerobots and (2) the efficacy of novel HSR features (i.e., pan-tilt camera, autonomous navigation, robot arm/hand hardware) for future learning experiences. We found that among our participants, autonomous navigation and arm/gripper hardware were rated as highly valuable for social and learning activities.

4.
IEEE Trans Neural Netw Learn Syst ; 32(6): 2521-2534, 2021 Jun.
Article in English | MEDLINE | ID: mdl-32687472

ABSTRACT

Disentangling the sources of visual motion in a dynamic scene during self-movement or ego motion is important for autonomous navigation and tracking. In the dynamic image segments of a video frame containing independently moving objects, optic flow relative to the next frame is the sum of the motion fields generated by camera motion and object motion. The traditional ego-motion estimation methods assume the scene to be static, and the recent deep learning-based methods do not separate pixel velocities into object- and ego-motion components. We propose a learning-based approach to predict both ego-motion parameters and object-motion field (OMF) from image sequences using a convolutional autoencoder while being robust to variations due to the unconstrained scene depth. This is achieved by: 1) training with continuous ego-motion constraints that allow solving for ego-motion parameters independently of depth and 2) learning a sparsely activated overcomplete ego-motion field (EMF) basis set, which eliminates the irrelevant components in both static and dynamic segments for the task of ego-motion estimation. In order to learn the EMF basis set, we propose a new differentiable sparsity penalty function that approximates the number of nonzero activations in the bottleneck layer of the autoencoder and enforces sparsity more effectively than L1- and L2-norm-based penalties. Unlike the existing direct ego-motion estimation methods, the predicted global EMF can be used to extract OMF directly by comparing it against the optic flow. Compared with the state-of-the-art baselines, the proposed model performs favorably on pixelwise object- and ego-motion estimation tasks when evaluated on real and synthetic data sets of dynamic scenes.
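The abstract does not give the authors' exact penalty function, but the general idea of a differentiable surrogate for counting nonzero activations (the L0 "norm") can be sketched generically. In the illustrative form below, each term z_i^2 / (z_i^2 + eps) is near 0 for activations near zero and near 1 otherwise, so the sum approximates the activation count while remaining differentiable; the smoothing constant `eps` is an assumed hyperparameter, not one taken from the paper:

```python
import numpy as np

def smooth_l0(z, eps=1e-2):
    """Generic smooth surrogate for the L0 count of nonzero activations.

    Each term z_i**2 / (z_i**2 + eps) saturates toward 1 for |z_i| >> sqrt(eps)
    and vanishes as z_i -> 0, so the sum approximates how many entries of z
    are 'active'. This is an illustrative stand-in, not the paper's function.
    """
    z = np.asarray(z, dtype=float)
    return float(np.sum(z * z / (z * z + eps)))
```

Unlike an L1 penalty, whose gradient magnitude is the same for large and small activations, a saturating surrogate like this stops penalizing an activation once it is clearly nonzero, which is why such penalties can drive a bottleneck layer toward genuinely sparse (few-active-units) codes.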

5.
Front Neurorobot ; 14: 570308, 2020.
Article in English | MEDLINE | ID: mdl-33192435

ABSTRACT

Understanding why deep neural networks and machine learning algorithms act as they do is a difficult endeavor. Neuroscientists are faced with similar problems. One way biologists address this issue is by closely observing behavior while recording neurons or manipulating brain circuits. This has been called neuroethology. In a similar way, neurorobotics can be used to explain how neural network activity leads to behavior. In real world settings, neurorobots have been shown to perform behaviors analogous to animals. Moreover, a neuroroboticist has total control over the network, and by analyzing different neural groups or studying the effect of network perturbations (e.g., simulated lesions), they may be able to explain how the robot's behavior arises from artificial brain activity. In this paper, we review neurorobot experiments by focusing on how the robot's behavior leads to a qualitative and quantitative explanation of neural activity, and vice versa, that is, how neural activity leads to behavior. We suggest that using neurorobots as a form of computational neuroethology can be a powerful methodology for understanding neuroscience, as well as for artificial intelligence and machine learning.
