Results 1 - 13 of 13
1.
Comput Psychiatr ; 7(1): 14-29, 2023.
Article in English | MEDLINE | ID: mdl-38774640

ABSTRACT

Functional connectivity (FC) and neural excitability may interact to affect symptoms of autism spectrum disorder (ASD). We tested this hypothesis with neural network simulations and applied it to functional magnetic resonance imaging (fMRI) data. A hierarchical recurrent neural network embodying predictive processing theory was subjected to a facial emotion recognition task. Neural network simulations examined the effects of FC and neural excitability on changes in neural representations through developmental learning, and eventually on ASD-like performance. Next, by mapping each neural network condition to subject subgroups on the basis of fMRI parameters, we examined the association between ASD-like performance in the simulation and ASD diagnosis in the corresponding subject subgroup. In the neural network simulation, the more homogeneous the neural excitability of the lower-level network, the more ASD-like the performance (reduced generalization and emotion recognition capability). In addition, in homogeneous networks, the higher the FC, the more ASD-like the performance, while in heterogeneous networks, the higher the FC, the less ASD-like the performance, demonstrating that FC and neural excitability interact. As an underlying mechanism, neural excitability determines the generalization capability of top-down prediction, and FC determines whether the model's information processing is top-down prediction-dependent or bottom-up sensory-input-dependent. In the fMRI datasets, ASD was indeed more prevalent in subject subgroups corresponding to the network condition showing ASD-like performance. The current study suggests an interaction between FC and neural excitability, and presents a novel framework for computational modeling and biological application of the developmental learning process underlying cognitive alterations in ASD.

2.
Front Robot AI ; 8: 748716, 2021.
Article in English | MEDLINE | ID: mdl-34651020

ABSTRACT

We propose a tool-use model that enables a robot to act toward a provided goal. It is important to consider the features of four factors (tools, objects, actions, and effects) at the same time, because they are related to each other and one factor can influence the others. The tool-use model is constructed with deep neural networks (DNNs) using multimodal sensorimotor data: image, force, and joint angle information. To allow the robot to learn tool use, we collect training data by controlling the robot to perform various object operations, using several tools with multiple actions that lead to different effects. The tool-use model is thereby trained, learns sensorimotor coordination, and acquires the relationships among tools, objects, actions, and effects in its latent space. We can give the robot a task goal by providing an image showing the target placement and orientation of the object. Using the goal image with the tool-use model, the robot detects the features of tools and objects and automatically determines how to act to reproduce the target effects. The robot then generates actions adjusted to real-time situations, even when the tools and objects are unknown and more complicated than the trained ones.
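The multimodal fusion idea can be sketched as follows. This is not the paper's DNN: the layer sizes, weight shapes, and function names are all illustrative assumptions; the untrained random weights merely stand in for learned parameters.

```python
import numpy as np

# Minimal sketch of multimodal sensorimotor fusion: image, force, and
# joint-angle features are embedded, concatenated into one latent vector,
# and mapped to the next joint-angle command. All sizes are assumptions.
rng = np.random.default_rng(1)

def embed(x, w):
    return np.tanh(w @ x)

# Random (untrained) weights as stand-ins for learned parameters.
w_img = rng.normal(size=(16, 64))   # 64-dim image features -> 16-dim code
w_frc = rng.normal(size=(8, 6))     # 6-axis force -> 8-dim code
w_jnt = rng.normal(size=(8, 7))     # 7 joint angles -> 8-dim code
w_out = rng.normal(size=(7, 32)) * 0.1

def predict_next_joints(image_feat, force, joints):
    latent = np.concatenate([embed(image_feat, w_img),
                             embed(force, w_frc),
                             embed(joints, w_jnt)])   # shared latent space
    return np.tanh(w_out @ latent)                    # next joint command

cmd = predict_next_joints(rng.normal(size=64),
                          rng.normal(size=6),
                          rng.normal(size=7))
print(cmd.shape)  # (7,)
```

The design point is the shared latent vector: because all modalities land in one space, relationships among tools, objects, actions, and effects can be represented jointly rather than per modality.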

4.
Sci Rep ; 11(1): 14684, 2021 07 26.
Article in English | MEDLINE | ID: mdl-34312400

ABSTRACT

The mechanism underlying the emergence of emotional categories from visual facial expression information during the developmental process is largely unknown. Therefore, this study proposes a system-level explanation for understanding the facial emotion recognition process and its alteration in autism spectrum disorder (ASD) from the perspective of predictive processing theory. Predictive processing for facial emotion recognition was implemented as a hierarchical recurrent neural network (RNN). The RNNs were trained to predict the dynamic changes of facial expression movies for six basic emotions without explicit emotion labels, as a developmental learning process, and were evaluated in the test phase by their performance in recognizing unseen facial expressions. In addition, the causal relationship between the network characteristics assumed in ASD and ASD-like cognition was investigated. After the developmental learning process, emotional clusters emerged in the natural course of self-organization in higher-level neurons, even though emotional labels were not explicitly instructed. In addition, the network successfully recognized unseen test facial sequences by adjusting higher-level activity through the process of minimizing precision-weighted prediction error. In contrast, the network simulating altered intrinsic neural excitability demonstrated reduced generalization capability and impaired emotional clustering in higher-level neurons. Consistent with previous findings from human behavioral studies, an excessive precision estimate of noisy details underlies this ASD-like cognition. These results support the idea that impaired facial emotion recognition in ASD can be explained by altered predictive processing, and provide possible insights for investigating the neurophysiological basis of affective contact.
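The recognition-by-error-minimization step can be sketched in miniature: a latent state z is adjusted by gradient descent to minimize a precision-weighted prediction error. The generative mapping g, the scalar dimensions, and the learning rate are illustrative assumptions, not the paper's network.

```python
import numpy as np

# Sketch of inference by minimizing precision-weighted prediction error:
#   E(z) = precision * (x - g(z))**2 / 2
# The latent state z is updated by gradient descent on E.
def g(z):
    return np.tanh(z)  # toy top-down prediction of the observation

def infer(x, z0, precision, lr=0.1, steps=200):
    z = z0
    for _ in range(steps):
        err = x - g(z)                                   # prediction error
        grad = -precision * err * (1 - np.tanh(z) ** 2)  # dE/dz
        z -= lr * grad                                   # reduce weighted error
    return z

x = 0.5                       # "unseen" observation
z = infer(x, z0=0.0, precision=1.0)
print(abs(g(z) - x) < 1e-3)   # True: prediction converges to the observation
```

Note that the weight `precision` scales the effective step size: an inflated precision on noisy inputs makes inference chase every detail, which is the mechanism the abstract invokes for ASD-like cognition.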


Subject(s)
Autism Spectrum Disorder/physiopathology; Facial Recognition; Datasets as Topic; Humans; Models, Psychological; Neural Networks, Computer; Neurons
5.
Neural Netw ; 138: 150-163, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33652371

ABSTRACT

Neurodevelopmental disorders are characterized by the heterogeneous and non-specific nature of their clinical symptoms. In particular, hyper- and hypo-reactivity to sensory stimuli are diagnostic features of autism spectrum disorder and are reported across many neurodevelopmental disorders. However, the computational mechanisms underlying these paradoxical behaviors remain unclear. In this study, using a robot controlled by a hierarchical recurrent neural network model with a predictive processing and learning mechanism, we simulated how functional disconnection altered the learning process and subsequent behavioral reactivity to environmental change. The results show that, through the learning process, long-range functional disconnection between distinct network levels could simultaneously lower the precision of both sensory information and higher-level prediction. This alteration caused the robot to exhibit sensory-dominated and sensory-ignoring behaviors, ascribed to sensory hyper- and hypo-reactivity, respectively. As long-range functional disconnection became more severe, a frequency shift from hyporeactivity to hyperreactivity was observed, paralleling an early sign of autism spectrum disorder. Furthermore, local functional disconnection at the level of sensory processing similarly induced hyporeactivity due to low sensory precision. These findings suggest a computational explanation for paradoxical sensory behaviors in neurodevelopmental disorders, such as coexisting hyper- and hypo-reactivity to sensory stimuli. A neurorobotics approach may be useful for bridging various levels of understanding in neurodevelopmental disorders and providing insights into the mechanisms underlying complex clinical symptoms.
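The precision trade-off behind hyper- and hypo-reactivity can be reduced to one line: perception as a precision-weighted average of the top-down prediction and the bottom-up input. The specific precision values below are invented for illustration.

```python
# Minimal sketch of precision-weighted perception: the percept is a weighted
# average of top-down prediction and bottom-up sensory input, with weights
# given by their relative precisions (inverse variances).
def percept(prediction, sensory, prec_pred, prec_sens):
    w = prec_sens / (prec_sens + prec_pred)   # weight given to the senses
    return (1 - w) * prediction + w * sensory

prediction, sensory = 0.0, 1.0   # the senses report a change; the model expects none

# Low sensory precision -> percept dominated by prediction (sensory-ignoring,
# i.e., hypo-reactivity).
hypo = percept(prediction, sensory, prec_pred=10.0, prec_sens=0.1)
# High sensory precision -> percept dominated by input (sensory-dominated,
# i.e., hyper-reactivity).
hyper = percept(prediction, sensory, prec_pred=0.1, prec_sens=10.0)
print(hypo, hyper)
```

The same environmental change thus produces almost no response in the first regime and a near-total response in the second, which is the coexistence of hypo- and hyper-reactivity the abstract describes.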


Subject(s)
Autism Spectrum Disorder/physiopathology; Machine Learning; Models, Neurological; Robotics/methods; Sensation; Humans
6.
Front Psychiatry ; 11: 762, 2020.
Article in English | MEDLINE | ID: mdl-32903328

ABSTRACT

Neurodevelopmental disorders, including autism spectrum disorder, have been intensively investigated at the neural, cognitive, and behavioral levels, but the accumulated knowledge remains fragmented. In particular, the developmental learning aspects of symptoms and interactions with the physical environment remain largely unexplored in computational modeling studies, although a leading computational theory has posited associations between psychiatric symptoms and an unusual estimation of information uncertainty (precision), which is an essential aspect of the real world and is estimated through learning processes. Here, we propose a mechanistic explanation that unifies the disparate observations via a hierarchical predictive coding and developmental learning framework, which is demonstrated in experiments using a neural network-controlled robot. The results show that, through the developmental learning process, homogeneous intrinsic neuronal excitability at the neural level induced, via self-organization, changes at the information-processing level, such as excessive sensory precision and overfitting to sensory noise. These changes led to multifaceted alterations at the behavioral level, such as inflexibility, reduced generalization, and motor clumsiness. In addition, these behavioral alterations were accompanied by fluctuating neural activity and excessive development of synaptic connections. These findings may bridge various levels of understanding in autism spectrum and other neurodevelopmental disorders and provide insights into the disease processes underlying the observed behaviors and brain activities of individual patients. This study shows the potential of neurorobotics frameworks for modeling how psychiatric disorders arise from dynamic interactions among the brain, body, and uncertain environments.

7.
AAPS PharmSciTech ; 21(5): 150, 2020 May 20.
Article in English | MEDLINE | ID: mdl-32435858

ABSTRACT

Emulsions for oral delivery are not suitable for sustained drug absorption because such preparations diffuse rapidly in the gastrointestinal (GI) tract after oral administration. To achieve sustained drug absorption and increase oral bioavailability, various polymers were added to a morin (MO) nanoemulsion to improve retention in the GI tract and alter the surface properties of the oil droplets in the nanoemulsion. The influence of these polymers on the formulation properties was investigated. The area under the blood concentration-time curve (AUC) and the mean residence time (MRT) after oral administration of the nanoemulsions were measured, and the influence of the polymers on bioavailability was investigated. The MO nanoemulsion with added chitosan (Chi) (MO-Chi nanoemulsion) showed the highest AUC and MRT. The MO-Chi nanoemulsion increased retention in the GI tract because of its relatively high viscosity and the high affinity between mucin and the Chi covering the oil droplets. Furthermore, the MO-Chi nanoemulsion retained the drug in the oil droplets by suppressing drug release through the polymer hydration layer, and the resulting sustained drug release achieved continuous drug absorption. Nanoemulsions with sodium carboxymethylcellulose and poly-γ-glutamic acid potassium salt showed the next highest AUC and MRT after the MO-Chi nanoemulsion. These results suggest that increased nanoemulsion viscosity, high affinity between the added polymer and mucin, and sustained drug release are useful for enhancing the bioavailability of polymer-containing nanoemulsions.
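The two pharmacokinetic summary measures used above can be computed from concentration-time data with the linear trapezoidal rule, with MRT = AUMC / AUC. The sample time points and concentrations below are invented for illustration, not values from the study.

```python
import numpy as np

# AUC by the linear trapezoidal rule; MRT as the ratio of the area under the
# first-moment curve (concentration * time) to the AUC.
def auc_trapezoid(t, c):
    return float(np.sum((c[1:] + c[:-1]) * np.diff(t)) / 2.0)

def mrt(t, c):
    aumc = auc_trapezoid(t, c * t)   # area under the first-moment curve
    return aumc / auc_trapezoid(t, c)

# Hypothetical blood concentration-time profile after oral dosing.
t = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])   # time, h
c = np.array([0.0, 1.2, 1.8, 1.5, 0.9, 0.3])   # concentration, ug/mL

auc = auc_trapezoid(t, c)
mrt_val = mrt(t, c)
print(round(auc, 2), round(mrt_val, 2))  # 7.5 2.9
```

A formulation that slows release shifts the profile rightward, which raises the first-moment area faster than the AUC and therefore increases the MRT, matching the sustained-absorption reading above.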


Subject(s)
Delayed-Action Preparations; Flavonoids/chemistry; Polymers/chemistry; Animals; Biological Availability; Drug Compounding; Drug Liberation; Emulsions; Flavonoids/pharmacokinetics; Male; Mice; Mice, Inbred ICR; Viscosity
8.
Front Neurorobot ; 12: 46, 2018.
Article in English | MEDLINE | ID: mdl-30087605

ABSTRACT

We propose an imitative learning model that allows a robot to acquire the positional relations between a demonstrator and the robot, and to transform observed actions into robotic actions. Providing robots with imitative capabilities allows us to teach them novel actions without resorting to trial-and-error approaches. Existing methods for imitative robotic learning require mathematical formulations or conversion modules to translate positional relations between demonstrators and robots. The proposed model uses two neural networks: a convolutional autoencoder (CAE) and a multiple timescale recurrent neural network (MTRNN). The CAE is trained to extract visual features from raw images captured by a camera. The MTRNN is trained to integrate sensory-motor information and to predict next states. We implemented this model on a robot and conducted sequence-to-sequence learning that allows the robot to transform demonstrator actions into robot actions. Through training of the proposed model, representations of actions, manipulated objects, and positional relations are formed in the hierarchical structure of the MTRNN. After training, we confirmed the model's capability to generate unlearned imitative patterns.
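The hierarchy in an MTRNN comes from its multiple-timescale state update, which can be sketched directly. The weights, sizes, and time constants below are illustrative assumptions, not the paper's trained model.

```python
import numpy as np

# Sketch of the multiple-timescale update that gives the MTRNN its hierarchy:
#   h_t = (1 - 1/tau) * h_{t-1} + (1/tau) * tanh(W @ h_{t-1} + x_t)
# Small-tau units track inputs quickly; large-tau units change slowly and
# hold longer-range context.
rng = np.random.default_rng(2)
n = 6
tau = np.array([2.0, 2.0, 2.0, 16.0, 16.0, 16.0])  # three fast, three slow units
W = rng.normal(scale=0.3, size=(n, n))             # illustrative weights

def step(h, x):
    return (1 - 1 / tau) * h + (1 / tau) * np.tanh(W @ h + x)

h = np.zeros(n)
x = np.ones(n)
first = step(h, x)   # one update from the zero state

fast_change = np.abs(first[:3]).mean()   # tanh(1)/2  ~= 0.38
slow_change = np.abs(first[3:]).mean()   # tanh(1)/16 ~= 0.048
print(fast_change > slow_change)  # True: fast units respond far more quickly
```

This separation of timescales is what lets the lower (fast) part of the network carry moment-to-moment sensorimotor detail while the upper (slow) part carries representations of whole actions and positional relations.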

9.
Comput Psychiatr ; 2: 164-182, 2018 Dec.
Article in English | MEDLINE | ID: mdl-30627669

ABSTRACT

Recently, applying computational models developed in cognitive science to psychiatric disorders has been recognized as an essential approach for understanding the cognitive mechanisms underlying psychiatric symptoms. Autism spectrum disorder is a neurodevelopmental disorder that is hypothesized to affect information processes in the brain involving the estimation of sensory precision (uncertainty), but the mechanism by which observed symptoms are generated from such abnormalities has not been thoroughly investigated. Using a humanoid robot controlled by a neural network with a precision-weighted prediction error minimization mechanism, we suggest that both increased and decreased sensory precision can induce the behavioral rigidity, characterized by resistance to change, that is characteristic of autistic behavior. Specifically, decreased sensory precision caused any error signals to be disregarded, leading to invariability of the robot's intention, while increased sensory precision caused an excessive response to error signals, leading to fluctuation and subsequent fixation of intention. The results may provide a system-level explanation of the mechanisms underlying different types of behavioral rigidity in autism spectrum and other psychiatric disorders. In addition, our findings suggest that symptoms caused by decreased and increased sensory precision could be distinguished by examining the internal experience of patients and the neural activity coding prediction error signals in the biological brain.

10.
Front Neurorobot ; 11: 70, 2017.
Article in English | MEDLINE | ID: mdl-29311891

ABSTRACT

An important characteristic of human language is compositionality. We can efficiently express a wide variety of real-world situations, events, and behaviors by compositionally constructing the meaning of a complex expression from a finite number of elements. Previous studies have analyzed how machine-learning models, particularly neural networks, can learn from experience to represent compositional relationships between language and robot actions, with the aim of understanding the symbol grounding structure and achieving intelligent communicative agents. Such studies have mainly dealt with words (nouns, adjectives, and verbs) that directly refer to real-world matters. In addition to these words, the current study simultaneously deals with logic words, such as "not," "and," and "or." These words do not refer directly to the real world but are logical operators that contribute to the construction of meaning in sentences. In human-robot communication, these words may often be used. The current study builds a recurrent neural network model with long short-term memory units and trains it to translate sentences including logic words into robot actions. We investigate what kind of compositional representations, which mediate sentences and robot actions, emerge as the network's internal states via the learning process. Analysis after learning shows that referential words are merged with visual information and the robot's own current state, and the logic words are represented by the model in accordance with their functions as logical operators. Words such as "true," "false," and "not" work as non-linear transformations that encode orthogonal phrases into the same area in a memory cell state space. The word "and," which required the robot to lift both its hands, worked as if it were a universal quantifier. The word "or," which required action generation that looked apparently random, was represented as an unstable space of the network's dynamical system.

11.
IEEE Trans Neural Netw Learn Syst ; 28(4): 830-848, 2017 04.
Article in English | MEDLINE | ID: mdl-26595928

ABSTRACT

We suggest that different behavior generation schemes, such as sensory reflex behavior and intentional proactive behavior, can be developed by a newly proposed dynamic neural network model, named the stochastic multiple timescale recurrent neural network (S-MTRNN). The model learns to predict subsequent sensory inputs, generating both their means and their uncertainty levels in terms of variance (or inverse precision) by utilizing its multiple timescale property. This model was employed in robotic learning experiments in which one robot controlled by the S-MTRNN was required to interact with another robot under conditions of uncertainty about the other's behavior. The experimental results show that self-organized sensory reflex behavior, based on probabilistic prediction, emerges when learning proceeds without a precise specification of initial conditions. In contrast, intentional proactive behavior with deterministic predictions emerges when precise initial conditions are available. The results also show that, in situations where unanticipated behavior of the other robot was perceived, the behavioral context was revised adequately by adapting the internal neural dynamics to respond to sensory inputs during sensory reflex behavior generation. On the other hand, during intentional proactive behavior generation, an error regression scheme, by which the internal neural activity was modified in the direction of minimizing prediction errors, was needed to adequately revise the behavioral context. These results indicate that two different ways of treating uncertainty about perceptual events in learning, namely probabilistic modeling and deterministic modeling, contribute to the development of the different dynamic neuronal structures governing the two types of behavior generation schemes.
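A network that outputs both a mean and a variance per prediction is typically trained with the Gaussian negative log-likelihood, which can be sketched directly. The values below are illustrative; this is the standard loss for mean-variance prediction, not code from the study.

```python
import numpy as np

# Gaussian negative log-likelihood for a prediction (mu, var) of target x:
#   NLL = 0.5 * (log(2*pi*var) + (x - mu)**2 / var)
# Minimizing it trains the network to output both accurate means and
# calibrated uncertainty (variance, i.e., inverse precision) estimates.
def gaussian_nll(x, mu, var):
    return 0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

x = 1.0  # observed sensory input

# An overconfident prediction (tiny variance) is punished heavily when wrong;
# admitting uncertainty (larger variance) lowers the loss for the same error.
overconfident = gaussian_nll(x, mu=0.0, var=0.01)
calibrated = gaussian_nll(x, mu=0.0, var=1.0)
print(overconfident > calibrated)  # True
```

The log-variance term keeps the trade-off honest: the network cannot simply inflate the variance everywhere, because that raises the loss on predictions it actually gets right.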

12.
Methods Mol Biol ; 1461: 279-87, 2016.
Article in English | MEDLINE | ID: mdl-27424913

ABSTRACT

A bioluminescence-based assay system was fabricated for efficient determination of the activities of air pollutants. The following four components were integrated into this assay system: (1) an 8-channel assay platform uniquely designed for simultaneously sensing multiple optical samples, (2) single-chain probes illuminating toxic chemicals or heavy metal cations from air pollutants, (3) a microfluidic system circulating medium that mimics the human body, and (4) software manipulating the above system. In this protocol, we briefly introduce how to integrate the components into the system and its application to illuminating the metal cation activities in air pollutants.


Subject(s)
Aerosols/chemistry; Air Pollutants/analysis; Cations/analysis; Environmental Monitoring/methods; Luminescent Measurements/methods; Metals/analysis; Animals; COS Cells; Chlorocebus aethiops; Environmental Monitoring/instrumentation; Humans; Luminescent Measurements/instrumentation; Particulate Matter/analysis
13.
Front Neurorobot ; 10: 5, 2016.
Article in English | MEDLINE | ID: mdl-27471463

ABSTRACT

To work cooperatively with humans by using language, robots must not only acquire a mapping between language and their behavior but also autonomously utilize that mapping in appropriate contexts of interactive tasks online. To this end, we propose a novel learning method that links language to robot behavior by means of a recurrent neural network. In this method, the network learns from correct examples of the imposed task that are given not as explicitly separated sets of language and behavior but as sequential data constructed from the actual temporal flow of the task. By doing this, the internal dynamics of the network model both language-behavior relationships and the temporal patterns of interaction. Here, "internal dynamics" refers to the time development of the system defined on the fixed-dimensional space of the internal states of the context layer. Thus, in the execution phase, by constantly representing where in the interaction context it is as its current state, the network autonomously switches between recognition and generation phases without any explicit signs and utilizes the acquired mapping in appropriate contexts. To evaluate our method, we conducted an experiment in which a robot generates appropriate behavior in response to a human's linguistic instruction. After learning, the network actually formed an attractor structure representing both the language-behavior relationships and the task's temporal pattern in its internal dynamics. In these dynamics, language-behavior mapping was achieved by a branching structure, repetition of the human's instruction and the robot's behavioral response was represented as a cyclic structure, and waiting for a subsequent instruction was represented as a fixed-point attractor. Thanks to this structure, the robot was able to interact online with a human concerning the given task by autonomously switching phases.
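The fixed-point attractor mentioned above (the "waiting" state) can be demonstrated in a toy recurrent update: with weak recurrent weights, the map h → tanh(W h + b) is a contraction, so trajectories from different initial states converge to a single stable point. The weights and sizes here are illustrative assumptions, not the trained network.

```python
import numpy as np

# Sketch of a fixed-point attractor in recurrent dynamics: iterate the state
# update h -> tanh(W @ h + b) from two different initial states. With weak
# weights (spectral norm well below 1), the map is a contraction and both
# trajectories settle into the same stable "waiting" state.
rng = np.random.default_rng(3)
W = rng.normal(scale=0.1, size=(4, 4))   # weak recurrence -> contraction
b = rng.normal(scale=0.5, size=4)        # bias selects the fixed point

def settle(h, steps=200):
    for _ in range(steps):
        h = np.tanh(W @ h + b)
    return h

h_a = settle(rng.normal(size=4))
h_b = settle(rng.normal(size=4))
print(np.allclose(h_a, h_b))  # True: both trajectories reach the same attractor
```

In the full model, such a stable state lets the robot idle robustly until an instruction arrives, while branching and cyclic structures in the same dynamics handle mapping and turn-taking.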
