Results 1 - 4 of 4
1.
Neural Netw ; 145: 356-373, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34808587

ABSTRACT

Graph neural networks (GNNs) have been widely used to learn vector representations of graph-structured data and have achieved better task performance than conventional methods. The foundation of GNNs is the message passing procedure, which propagates the information in a node to its neighbors. Since this procedure proceeds one step per layer, the range of information propagation among nodes is small in the lower layers and expands toward the higher layers. Therefore, a GNN model has to be deep enough to capture the global structural information in a graph. On the other hand, deep GNN models are known to suffer from performance degradation because they lose nodes' local information, which would be essential for good model performance, through many message passing steps. In this study, we propose multi-level attention pooling (MLAP) for graph-level classification tasks, which can adapt to both local and global structural information in a graph. It has an attention pooling layer for each message passing step and computes the final graph representation by unifying the layer-wise graph representations. The MLAP architecture allows models to utilize the structural information of graphs at multiple levels of locality because it preserves layer-wise information before it is lost to oversmoothing. Our experiments show that the MLAP architecture improves graph classification performance compared with the baseline architectures. In addition, analyses of the layer-wise graph representations suggest that aggregating information from multiple levels of locality indeed has the potential to improve the discriminability of learned graph representations.


Subject(s)
Attention , Neural Networks, Computer , Learning
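
The layer-wise pooling described in the abstract above can be illustrated with a small, self-contained sketch. This is not the authors' implementation: the linear message-passing step, the dense row-normalized adjacency matrix, and the sum-based unification of layer-wise representations are simplifying assumptions chosen for brevity.

```python
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    """Pools node features into one graph vector with learned attention weights."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, h):                        # h: (num_nodes, dim)
        w = torch.softmax(self.score(h), dim=0)  # one attention weight per node
        return (w * h).sum(dim=0)                # (dim,) graph representation

class MLAPSketch(nn.Module):
    def __init__(self, dim, num_layers, num_classes):
        super().__init__()
        self.msg_layers = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_layers)])
        self.pools = nn.ModuleList([AttentionPool(dim) for _ in range(num_layers)])
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, x, adj):                   # x: (N, dim), adj: (N, N), row-normalized
        h, layer_reps = x, []
        for msg, pool in zip(self.msg_layers, self.pools):
            h = torch.relu(msg(adj @ h))         # one message-passing step
            layer_reps.append(pool(h))           # pool before deeper layers oversmooth
        g = torch.stack(layer_reps).sum(dim=0)   # unify layer-wise representations
        return self.classifier(g)

# Tiny usage example on a random five-node graph.
model = MLAPSketch(dim=16, num_layers=3, num_classes=2)
x = torch.randn(5, 16)
adj = torch.eye(5) + torch.rand(5, 5).round()    # crude adjacency with self-loops
adj = adj / adj.sum(dim=1, keepdim=True)         # row-normalize for propagation
print(model(x, adj).shape)                       # torch.Size([2])
```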
2.
Neural Comput ; 34(2): 360-377, 2022 Jan 14.
Article in English | MEDLINE | ID: mdl-34915580

ABSTRACT

Model-based control has great potential for use in real robots because of its high sampling efficiency. Nevertheless, handling physical contact and generating accurate motions are unavoidable requirements in practical robot control tasks such as precise manipulation. For a real-time, model-based approach, the difficulty of contact-rich tasks that require precise movement lies in the fact that the model must accurately predict forthcoming contact events within a limited length of time rather than detect them afterward with sensors. Therefore, in this study, we investigate whether and how neural network models can learn a task-related model useful enough for model-based control, that is, a model that predicts future states, including contact events. To this end, we propose a structured neural network model predictive control (SNN-MPC) method, whose neural network architecture is designed with an explicit inertia matrix representation. To train the proposed network, we develop a two-stage modeling procedure for contact-rich dynamics from a limited number of samples. As a contact-rich task, we consider a trackball manipulation task using a physical 3-DoF finger robot. The results showed that SNN-MPC outperformed MPC with a conventional fully connected network model on the manipulation task.


Subject(s)
Robotics , Learning , Motion , Neural Networks, Computer , Robotics/methods
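
As a rough illustration of what an explicit inertia matrix representation can look like, the sketch below parameterizes a positive-definite inertia matrix through a Cholesky factor predicted by one network, with the remaining torque terms predicted by another. The Cholesky parameterization, layer sizes, and the forward-dynamics form ddq = M(q)^{-1}(tau - h(q, dq)) are illustrative assumptions, not the authors' SNN-MPC architecture.

```python
import torch
import torch.nn as nn

class StructuredDynamicsSketch(nn.Module):
    """Forward-dynamics model with an explicit, positive-definite inertia matrix."""
    def __init__(self, dof=3, hidden=64):
        super().__init__()
        self.dof = dof
        self.n_off = dof * (dof - 1) // 2        # strictly lower-triangular entries
        # Predicts the Cholesky factor of the inertia matrix from joint positions.
        self.chol_net = nn.Sequential(nn.Linear(dof, hidden), nn.Tanh(),
                                      nn.Linear(hidden, self.n_off + dof))
        # Predicts the remaining torque terms (Coriolis, gravity, contact effects).
        self.bias_net = nn.Sequential(nn.Linear(2 * dof, hidden), nn.Tanh(),
                                      nn.Linear(hidden, dof))
        self.off_idx = torch.tril_indices(dof, dof, offset=-1)

    def inertia(self, q):                        # q: (dof,)
        entries = self.chol_net(q)
        off, diag = entries[:self.n_off], entries[self.n_off:]
        L = torch.zeros(self.dof, self.dof)
        L[self.off_idx[0], self.off_idx[1]] = off
        L = L + torch.diag(nn.functional.softplus(diag) + 1e-3)  # positive diagonal
        return L @ L.T                           # symmetric positive-definite M(q)

    def forward(self, q, dq, tau):               # predict joint acceleration
        M = self.inertia(q)
        h = self.bias_net(torch.cat([q, dq]))
        return torch.linalg.solve(M, tau - h)    # ddq = M(q)^{-1} (tau - h(q, dq))

# Tiny usage example for a 3-DoF finger-like arm.
model = StructuredDynamicsSketch(dof=3)
q, dq, tau = torch.zeros(3), torch.zeros(3), torch.ones(3)
print(model(q, dq, tau))                         # predicted acceleration, shape (3,)
```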
3.
SLAS Technol ; 26(6): 650-659, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34167357

ABSTRACT

In automated laboratories consisting of multiple different types of instruments, scheduling algorithms are useful for determining the optimal allocation of instruments to minimize the time required to complete experimental procedures. However, previous studies on scheduling algorithms for laboratory automation have not emphasized the time constraints by mutual boundaries (TCMBs) among operations, which are important in procedures involving live cells or unstable biomolecules. Here, we define the "scheduling for laboratory automation in biology" (S-LAB) problem as a scheduling problem for automated laboratories in which operations with TCMBs are performed by multiple different instruments. We formulate the S-LAB problem as a mixed-integer programming (MIP) problem and propose a scheduling method based on the branch-and-bound algorithm. Simulations show that our method finds optimal schedules for S-LAB problems that minimize overall execution time while satisfying the TCMBs. Furthermore, we propose the use of our scheduling method for the simulation-based design of job definitions and laboratory configurations.


Subject(s)
Automation, Laboratory , Biological Science Disciplines , Algorithms , Computer Simulation , Laboratories
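
A toy version of a MIP formulation with a TCMB can be written in a few lines. The sketch below schedules an incubation followed by a measurement on one of two hypothetical readers, with an assumed 10-minute bound on the gap between the two operations. The instance, the durations, and the use of the PuLP modeling library are illustrative assumptions, not the paper's benchmark problems, and the off-the-shelf CBC solver stands in for the authors' branch-and-bound method.

```python
import pulp

prob = pulp.LpProblem("slab_toy", pulp.LpMinimize)

# One incubation followed by one measurement; the measurement may run on
# either of two readers with different processing times (minutes).
inc_dur = 30
read_dur = {"reader_A": 5, "reader_B": 3}

start_inc = pulp.LpVariable("start_incubate", lowBound=0)
start_meas = pulp.LpVariable("start_measure", lowBound=0)
assign = pulp.LpVariable.dicts("assign", list(read_dur), cat="Binary")
makespan = pulp.LpVariable("makespan", lowBound=0)

# The measurement is assigned to exactly one instrument.
prob += pulp.lpSum(assign.values()) == 1
# Effective measurement duration depends on the chosen instrument.
meas_dur = pulp.lpSum(assign[r] * d for r, d in read_dur.items())
# Precedence: measurement starts after incubation ends.
prob += start_meas >= start_inc + inc_dur
# TCMB: at most 10 minutes may elapse between the end of incubation and the
# start of measurement (e.g., live cells degrade while waiting).
prob += start_meas - (start_inc + inc_dur) <= 10
# Makespan covers the completion of both operations.
prob += makespan >= start_inc + inc_dur
prob += makespan >= start_meas + meas_dur
prob += makespan                                  # objective: minimize overall time

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({r: int(pulp.value(v)) for r, v in assign.items()},
      "start_measure =", pulp.value(start_meas),
      "makespan =", pulp.value(makespan))
```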
4.
Sci Rep ; 10(1): 5280, 2020 Mar 24.
Article in English | MEDLINE | ID: mdl-32210297

ABSTRACT

Moving objects are often occluded behind larger, stationary objects, but we can easily predict when and where they will reappear. Here, we show that the prediction of object reappearance is subject to adaptive learning. When monkeys generated predictive saccades to the location of target reappearance, systematic changes in the location or timing of target reappearance independently altered the endpoint or latency of the saccades. Furthermore, spatial adaptation of predictive saccades did not alter visually triggered reactive saccades, whereas adaptation of reactive saccades altered the metrics of predictive saccades. Our results suggest that the extrapolation of motion trajectory may be subject to spatial and temporal recalibration mechanisms located upstream from the site of reactive saccade adaptation. Repetitive exposure to visual error thus induces qualitatively different forms of saccade adaptation, which might be attributable to different regions in the cerebellum that regulate the learning of trajectory prediction and of saccades.


Subject(s)
Motion , Saccades/physiology , Adaptation, Physiological/physiology , Animals , Anticipation, Psychological , Cerebellum/physiology , Female , Fixation, Ocular , Learning , Macaca fuscata , Parietal Lobe/physiology