Results 1 - 9 of 9
1.
Sci Rep ; 13(1): 13221, 2023 Aug 14.
Article in English | MEDLINE | ID: mdl-37580464

ABSTRACT

We analyzed the X-ray photoelectron spectra (XPS) of carbon 1s states in graphene and oxygen-intercalated graphene grown on SiC(0001) using Bayesian spectroscopy. To realize highly accurate spectral decomposition of the XPS spectra, we proposed a framework for discovering physical constraints in the absence of prior quantified physical knowledge, in which the prior probabilities are designed from the discovered constraints and from physically required conditions. This suppresses the exchange of peak components during replica exchange Monte Carlo iterations and makes it possible to decompose XPS spectra even when no reliable structural model or plausible number of components is known. As a result, we successfully decomposed the XPS of one-monolayer (1ML), two-monolayer (2ML), and quasi-freestanding 2ML (qfs-2ML) graphene samples grown on SiC substrates with meV-order precision in the binding energy: the posterior probability distributions of the binding energies were clearly resolved for the different buffer-layer components, even though these components appear only as overlapping hump and shoulder structures in the spectra.
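
The decomposition step described in this abstract can be illustrated with a minimal sketch: a sum-of-Gaussians model of the C 1s region whose log-posterior enforces an ordering constraint on the peak positions, the kind of prior that suppresses the exchange of peak components during sampling. The peak count, parameter layout and the single-temperature Metropolis sampler below are illustrative assumptions, not the authors' implementation, which uses full replica exchange Monte Carlo.

    import numpy as np

    def model(E, params, n_peaks=3):
        """Sum of Gaussian peaks; params = [amplitude, center, width] * n_peaks."""
        y = np.zeros_like(E, dtype=float)
        for k in range(n_peaks):
            a, c, w = params[3 * k: 3 * k + 3]
            y += a * np.exp(-0.5 * ((E - c) / w) ** 2)
        return y

    def log_posterior(params, E, spectrum, noise_sd, n_peaks=3):
        """Gaussian likelihood plus a prior that keeps peak centers ordered and
        widths/amplitudes physical, suppressing peak exchange during sampling."""
        amps, centers, widths = params[0::3], params[1::3], params[2::3]
        if np.any(np.diff(centers) <= 0) or np.any(widths <= 0) or np.any(amps < 0):
            return -np.inf
        resid = spectrum - model(E, params, n_peaks)
        return -0.5 * np.sum((resid / noise_sd) ** 2)

    def metropolis(E, spectrum, init, noise_sd, steps=20000, scale=0.01, seed=0):
        """Single-temperature random-walk Metropolis (one replica of an RXMC scheme)."""
        rng = np.random.default_rng(seed)
        x = np.array(init, dtype=float)
        lp = log_posterior(x, E, spectrum, noise_sd)
        samples = []
        for _ in range(steps):
            prop = x + scale * rng.standard_normal(x.size)
            lp_prop = log_posterior(prop, E, spectrum, noise_sd)
            if np.log(rng.uniform()) < lp_prop - lp:
                x, lp = prop, lp_prop
            samples.append(x.copy())
        return np.array(samples)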

2.
Phys Rev E ; 103(3-1): 033303, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33862698

ABSTRACT

Understanding complex systems through reduced models is one of the central tasks of scientific activity. Although physics has developed largely through the insight of physicists, it is sometimes challenging to build a reduced model of a complex system on the basis of insight alone. We propose a framework that infers hidden conservation laws of a complex system from deep neural networks (DNNs) trained on physical data of the system. The purpose of the proposed framework is not to analyze physical data with deep learning but to extract interpretable physical information from trained DNNs. Using Noether's theorem and an efficient sampling method, the framework infers conservation laws by extracting the symmetries of the dynamics from the trained DNNs. The framework is developed by deriving the relationship between the manifold structure of a time-series data set and the necessary conditions for Noether's theorem. Its feasibility has been verified in several primitive cases in which the conservation law is well known. We also apply the framework to conservation-law estimation in a more practical case, a large-scale collective-motion system in a metastable state, and obtain a result consistent with that of a previous study.
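
As a rough operational illustration of what a "hidden conservation law" means here, the sketch below rolls out a trained one-step dynamics model and measures how well a candidate quantity is conserved along the learned trajectory. This covers only the verification side of the idea; the extraction of the symmetry itself from the trained DNN via Noether's theorem and efficient sampling is not reproduced, and learned_step and candidate_H are hypothetical placeholders.

    import numpy as np

    def rollout(learned_step, x0, n_steps):
        """Iterate a trained one-step model x_{t+1} = learned_step(x_t)."""
        traj = [np.asarray(x0, dtype=float)]
        for _ in range(n_steps):
            traj.append(np.asarray(learned_step(traj[-1]), dtype=float))
        return np.array(traj)

    def conservation_error(candidate_H, traj):
        """Relative drift of a candidate conserved quantity H along the trajectory;
        small values suggest H is (approximately) conserved by the learned dynamics."""
        H = np.array([candidate_H(x) for x in traj])
        return np.max(np.abs(H - H[0])) / (np.abs(H[0]) + 1e-12)

    # Primitive case with a known conservation law: a harmonic oscillator whose
    # energy should stay constant.  The stepper stands in for a trained DNN.
    def oscillator_step(x, dt=1e-3):
        q, p = x
        p_new = p - dt * q          # symplectic Euler kick
        q_new = q + dt * p_new      # drift
        return np.array([q_new, p_new])

    energy = lambda x: 0.5 * (x[0] ** 2 + x[1] ** 2)
    traj = rollout(oscillator_step, [1.0, 0.0], 10000)
    print(conservation_error(energy, traj))   # stays near zero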

3.
J Comput Neurosci ; 49(3): 251-257, 2021 08.
Article in English | MEDLINE | ID: mdl-33595764

ABSTRACT

Feed-forward deep neural networks show better performance in object-categorization tasks than other models of computer vision. To understand the relationship between feed-forward deep networks and the primate brain, we investigated representations of upright and inverted faces in a convolutional deep neural network model and compared them with representations by neurons in the monkey anterior inferior-temporal cortex, area TE. We applied principal component analysis to the feature vectors in each model layer to visualize the relationship between the vectors of upright and inverted faces. The vectors of upright and inverted monkey faces became increasingly separated through the convolutional layers. In the fully connected layers, the separation among human individuals was larger for upright faces than for inverted faces. The Spearman correlation between each model layer and the TE neurons reached a maximum at the fully connected layers. These results indicate that the processing of faces in the fully connected layers may resemble the asymmetric representation of upright and inverted faces by the TE neurons. The separation of upright and inverted faces might take place through feed-forward processing in the visual cortex, and the separations among human individuals for upright faces, which were larger than those for inverted faces, might arise in area TE.
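
The layer-wise analysis described above can be sketched in a few lines: principal component analysis of the feature vectors in a given layer, and a Spearman correlation between the representational dissimilarity structures of that layer and of the TE population. The random feature matrices, the use of correlation-distance dissimilarity matrices, and the specific library calls below are assumptions for illustration, not the authors' exact pipeline.

    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr
    from sklearn.decomposition import PCA

    # Placeholder matrices (stimuli x units); in the study these would be CNN-layer
    # activations and TE-neuron responses to the same upright/inverted face images.
    rng = np.random.default_rng(0)
    layer_features = rng.standard_normal((80, 4096))    # hypothetical layer activations
    te_responses = rng.standard_normal((80, 120))        # hypothetical TE responses

    # Visualize the layer representation in two principal components.
    pcs = PCA(n_components=2).fit_transform(layer_features)

    # Compare representational geometries: Spearman correlation between the
    # stimulus-by-stimulus dissimilarity structures of the layer and of TE.
    layer_rdm = pdist(layer_features, metric="correlation")
    te_rdm = pdist(te_responses, metric="correlation")
    rho, p = spearmanr(layer_rdm, te_rdm)
    print(rho, p)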


Subject(s)
Face; Models, Neurological; Animals; Neural Networks, Computer; Neurons; Photic Stimulation; Primates; Temporal Lobe
4.
Entropy (Basel) ; 22(7)2020 Jun 30.
Article in English | MEDLINE | ID: mdl-33286497

ABSTRACT

Integrated information theory (IIT) was originally proposed to describe human consciousness in terms of intrinsic causal brain-network structures. In particular, IIT 3.0 characterizes a system's cause-effect structure at a given spatio-temporal grain and quantifies the system's irreducibility. In a previous study, we applied IIT 3.0 to actual collective behaviour in Plecoglossus altivelis and found a qualitative discontinuity between schools of three and four fish in the distributions of the Φ value; other measures did not show similar characteristics. In this study, we follow up on these findings and introduce two new factors. First, we define global parameter settings to determine a different kind of group integrity. Second, we use several timescales (from Δt = 5/120 s to Δt = 120/120 s). The results show that we can classify fish schools according to their group size and degree of group integrity around the reaction timescale of the fish, despite the small group sizes. Compared with short timescales, the interaction heterogeneity observed at long timescales seems to diminish. Finally, we discuss one of the longstanding paradoxes in collective behaviour, the heap paradox, for which our IIT 3.0 analysis suggests two tentative answers.
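
A concrete prerequisite of an IIT 3.0 analysis like the one described above is a state-by-node transition probability matrix estimated from the binarized fish time series at a chosen timescale. The sketch below builds such a TPM with plain NumPy; the binarization rule and the random stand-in data are assumptions, and Φ itself would be computed with an IIT 3.0 implementation (e.g. the pyphi package), which is not shown here.

    import numpy as np

    def binarize(series, dt):
        """Subsample a (time x fish) series at lag dt (frames) and threshold each
        fish's values at its median, giving a binary state array."""
        sub = series[::dt]
        return (sub > np.median(sub, axis=0)).astype(int)

    def empirical_tpm(states):
        """State-by-node transition probabilities P(node_i = 1 at t+dt | joint state at t),
        the kind of TPM that an IIT 3.0 calculation takes as input."""
        n_t, n = states.shape
        idx = states @ (1 << np.arange(n))       # encode each joint state as an integer
        tpm = np.zeros((2 ** n, n))
        counts = np.zeros(2 ** n)
        for t in range(n_t - 1):
            tpm[idx[t]] += states[t + 1]
            counts[idx[t]] += 1
        return tpm / np.maximum(counts[:, None], 1)

    rng = np.random.default_rng(1)
    data = rng.standard_normal((6000, 4))        # stand-in for 4 fish at 120 fps
    tpm = empirical_tpm(binarize(data, dt=5))    # dt = 5 frames, i.e. 5/120 s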

5.
Sci Technol Adv Mater ; 21(1): 402-419, 2020 Jul 02.
Article in English | MEDLINE | ID: mdl-32939165

ABSTRACT

We develop an automatic peak-fitting algorithm that uses the Bayesian information criterion (BIC) fitting method with confidence-interval estimation for spectral decomposition. First, spectral decomposition is carried out with the Bayesian exchange Monte Carlo method for various artificial spectral data, and the confidence intervals of the fitting parameters are evaluated. From these results, an approximate model formula is derived that relates the confidence interval of the parameters to the peak-to-peak distance and the signal-to-noise ratio. Next, for real spectral data, we compare the confidence interval of each peak parameter obtained with the Bayesian exchange Monte Carlo method against the confidence interval obtained from BIC fitting with the model-selection function and the proposed approximate formula. We confirm that the parameter confidence intervals obtained by the two methods agree well. It is therefore possible not only to estimate the appropriate number of peaks by BIC fitting but also to obtain the confidence intervals of the fitting parameters.
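
The model-selection half of the procedure can be sketched as follows: fit 1..K Gaussian peaks by least squares and keep the peak count that minimizes the BIC. The Gaussian line shape, the initialization heuristic and the least-squares fit are illustrative assumptions; the confidence-interval half of the method, which relies on the Bayesian exchange Monte Carlo posterior, is not reproduced here.

    import numpy as np
    from scipy.optimize import curve_fit

    def peaks(x, *p):
        """Sum of Gaussian peaks; p = (amplitude, center, width) repeated."""
        y = np.zeros_like(x, dtype=float)
        for a, c, w in zip(p[0::3], p[1::3], p[2::3]):
            y += a * np.exp(-0.5 * ((x - c) / w) ** 2)
        return y

    def bic_select(x, y, max_peaks=5):
        """Fit 1..max_peaks Gaussians and pick the count minimizing
        BIC = n*log(RSS/n) + k*log(n), with k = 3 parameters per peak."""
        n = len(x)
        best = None
        for m in range(1, max_peaks + 1):
            p0 = []
            for c in np.linspace(x.min(), x.max(), m + 2)[1:-1]:
                p0 += [y.max(), c, (x.max() - x.min()) / (4 * m)]   # crude initial guess
            try:
                popt, _ = curve_fit(peaks, x, y, p0=p0, maxfev=20000)
            except RuntimeError:
                continue                           # fit failed for this peak count
            rss = np.sum((y - peaks(x, *popt)) ** 2)
            bic = n * np.log(rss / n) + 3 * m * np.log(n)
            if best is None or bic < best[0]:
                best = (bic, m, popt)
        return best   # (bic, number_of_peaks, fitted_parameters)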

6.
Sci Rep ; 10(1): 10437, 2020 Jun 26.
Article in English | MEDLINE | ID: mdl-32591546

ABSTRACT

Evaluating the creep deformation process of heat-resistant steels is important for improving the energy efficiency of power plants by raising the operating temperature. An existing analysis framework estimates the rupture time by regressing the strain-time relationship of the creep process with a regression model called a creep constitutive equation. Because many creep constitutive equations have been proposed, it is important to construct a framework that determines which equation is best for the creep processes of different steel types at various temperatures and stresses. A Bayesian model selection framework is one of the best tools for evaluating constitutive equations. In previous studies, approximation methods such as the Laplace approximation were used to develop Bayesian model selection frameworks for creep; such frameworks are not applicable to creep constitutive equations or data that violate the assumptions of the approximation. In this study, we propose a universal Bayesian model selection framework for creep that can evaluate various types of creep constitutive equations. Using the replica exchange Monte Carlo method, we develop a Bayesian model selection framework for creep that requires no approximation. To assess its effectiveness, we applied the proposed framework to a creep constitutive equation called the Kimura model, which is difficult to evaluate with existing frameworks. Through a model evaluation using creep measurement data for Grade 91 steel, we confirmed that our framework gives a more reasonable evaluation of the Kimura model than existing frameworks. By investigating the posterior distribution obtained with the proposed framework, we also found a model candidate that could improve the Kimura model.
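
The quantity at the heart of this framework, the Bayesian free energy F = -log Z, can be illustrated with a deliberately naive estimator: sample constitutive-equation parameters from the prior and average the likelihood. This is statistically inefficient, which is exactly why the paper uses replica exchange Monte Carlo instead, and the toy constitutive equation and prior below are placeholders rather than the Kimura model.

    import numpy as np
    from scipy.special import logsumexp

    def log_likelihood(theta, t, strain, model, noise_sd):
        """Gaussian likelihood of an observed creep curve under a constitutive equation."""
        resid = strain - model(t, theta)
        return (-0.5 * np.sum((resid / noise_sd) ** 2)
                - len(t) * np.log(noise_sd * np.sqrt(2.0 * np.pi)))

    def free_energy(t, strain, model, sample_prior, noise_sd, n=100000, seed=0):
        """Bayesian free energy F = -log Z via naive prior sampling, Z = E_prior[L(theta)].
        Simple but high-variance; replica exchange Monte Carlo avoids this inefficiency."""
        rng = np.random.default_rng(seed)
        logL = np.array([log_likelihood(sample_prior(rng), t, strain, model, noise_sd)
                         for _ in range(n)])
        return -(logsumexp(logL) - np.log(n))

    # Hypothetical stand-in for a creep constitutive equation (not the Kimura model):
    # a primary power-law term plus a tertiary exponential term.
    def toy_creep(t, theta):
        a, m, b, r = theta
        return a * t ** m + b * (np.exp(r * t) - 1.0)

    def sample_prior(rng):
        return rng.uniform([0.0, 0.0, 0.0, 0.0], [1e-2, 1.0, 1e-3, 1e-2])

    # The constitutive equation with the lowest free energy on the measured creep
    # curve is the one selected, which is the comparison described in the abstract.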

7.
Sci Technol Adv Mater ; 21(1): 219-228, 2020.
Article in English | MEDLINE | ID: mdl-32489481

ABSTRACT

There are two types of creep constitutive equation: one with a steady-state term (steady-state type) and one without (non-steady-state type). We applied a Bayesian inference framework to examine which type is supported by experimental creep curves for a Grade 91 (Gr.91) steel. The Bayesian free energy was significantly lower for the steady-state type under all test conditions in the ranges 50-90 MPa at 923 K, 90-160 MPa at 873 K and 170-240 MPa at 823 K, leading to the conclusion that its posterior probability was virtually 1.0. These findings mean that the experimental data support the steady-state-type equation. The dependence of the evaluated steady-state creep rate on the applied stress indicates a transition in the mechanism governing creep deformation around 120 MPa.
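
The comparison reported here reduces to computing the Bayesian free energy of each candidate equation and converting the difference into posterior model probabilities. The two functional forms below are illustrative placeholders, not the exact equations used in the paper; only the steady-state type contains the linear (steady-state creep rate) term.

    import numpy as np

    # Illustrative candidate creep curves (placeholders for the paper's equations).
    def creep_non_steady(t, theta):
        a, m, b, r = theta
        return a * t ** m + b * (np.exp(r * t) - 1.0)

    def creep_steady(t, theta):
        a, m, eps_ss, b, r = theta
        return a * t ** m + eps_ss * t + b * (np.exp(r * t) - 1.0)   # steady-state term

    def posterior_probabilities(free_energies):
        """Posterior model probabilities from Bayesian free energies F_k = -log Z_k,
        assuming equal prior model probabilities: P(M_k|D) is proportional to exp(-F_k)."""
        f = np.asarray(free_energies, dtype=float)
        w = np.exp(-(f - f.min()))     # shift by the minimum for numerical stability
        return w / w.sum()

    # A much lower free energy for the steady-state type gives
    # posterior_probabilities([F_steady, F_non_steady]) close to [1.0, 0.0],
    # the "posterior probability virtually 1.0" stated in the abstract.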

8.
PLoS One ; 15(2): e0229573, 2020.
Article in English | MEDLINE | ID: mdl-32107495

ABSTRACT

Collective behaviours are known to result from diverse dynamics and are sometimes likened to living systems. Although many studies have revealed the dynamics of various collective behaviours, their main focus has been on the information processing performed by the collective, not on interactions within the collective. For example, the qualitative difference between three and four elements in a system has rarely been investigated. Tononi et al. proposed integrated information theory (IIT) to measure the degree of consciousness, Φ. IIT postulates that the amount of information lost under the minimum information partition is equivalent to the degree of information integration in the system. This measure is not only useful for estimating the degree of consciousness but can also be applied to more general network systems. Here, we obtained two main results from applying IIT (in particular, IIT 3.0) to the analysis of real fish schools (Plecoglossus altivelis). First, we observed that a discontinuity in the 〈Φ(N)〉 distributions emerges for schools of four or more fish. This transition was not observed when measuring mutual information or the sum of transfer entropy. We also applied IIT to Boids simulations with different coupling strengths; however, the results of the Boids model were quite different from those of the real fish. Second, we found a correlation between this discontinuity and the emergence of leadership. We distinguish leadership in this paper from its traditional meaning (e.g. as defined by transfer entropy) because IIT-induced leadership refers not to group behaviour, as in other methods, but to the degree of autonomy (i.e. group integrity). These results suggest that integrated information Φ can reveal the emergence of a new type of leadership that cannot be observed using other measures.
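
One of the baseline measures the abstract contrasts with Φ is transfer entropy. The sketch below computes pairwise transfer entropy for binarized time series with plain NumPy; the binarization of the fish data and the pairwise-sum aggregation are assumptions, and Φ itself requires a full IIT 3.0 computation that is not reproduced here.

    import numpy as np

    def transfer_entropy(x, y):
        """Transfer entropy T_{Y->X} in bits for two binary time series:
        T = sum p(x1, x0, y0) * log2[ p(x1 | x0, y0) / p(x1 | x0) ]."""
        x, y = np.asarray(x, dtype=int), np.asarray(y, dtype=int)
        x1, x0, y0 = x[1:], x[:-1], y[:-1]
        joint = np.zeros((2, 2, 2))              # counts over (x1, x0, y0)
        for a, b, c in zip(x1, x0, y0):
            joint[a, b, c] += 1
        joint /= joint.sum()
        p_x0y0 = joint.sum(axis=0)               # p(x0, y0)
        p_x1x0 = joint.sum(axis=2)               # p(x1, x0)
        p_x0 = joint.sum(axis=(0, 2))            # p(x0)
        te = 0.0
        for a in (0, 1):
            for b in (0, 1):
                for c in (0, 1):
                    if joint[a, b, c] > 0:
                        cond_full = joint[a, b, c] / p_x0y0[b, c]
                        cond_marg = p_x1x0[a, b] / p_x0[b]
                        te += joint[a, b, c] * np.log2(cond_full / cond_marg)
        return te

    # Summing transfer_entropy over all ordered pairs of fish gives the
    # "sum of the transfer entropy" that, unlike Φ, shows no 3-vs-4 discontinuity.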


Subject(s)
Information Theory; Osmeriformes/physiology; Animals; Behavior, Animal/physiology; Brain/physiology; Cognition/physiology; Computer Simulation; Consciousness; Models, Neurological; Neural Networks, Computer; Systems Theory
9.
Philos Trans A Math Phys Eng Sci ; 375(2109)2017 Dec 28.
Article in English | MEDLINE | ID: mdl-29133449

ABSTRACT

A large group with a special structure can become the mother of emergence. We discuss this hypothesis in relation to large-scale boid simulations and web data. In the boid swarm simulations, the nucleation, organization and collapse dynamics were found to be more diverse in larger flocks than in smaller flocks. In the second analysis, large-scale web data, consisting of shared photos with descriptive tags, tended to group together users with similar tendencies, allowing the network to develop a core-periphery structure. We show that the generation rate of novel tags and their usage frequencies are high in the higher-order cliques. In this case, novelty is not considered to arise randomly; rather, it is generated as a result of a large and structured network. We contextualize these results in terms of adjacent-possible theory and as a new way to understand collective intelligence, and argue that excessive information and material flow can become a source of innovation. This article is part of the themed issue 'Reconceptualizing the origins of life'.
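
The boid simulations referred to above follow the standard separation/alignment/cohesion update; a minimal version is sketched below so the flock-size comparison is concrete. The parameter values and flock sizes are arbitrary illustrative choices, not those of the paper's large-scale runs.

    import numpy as np

    def boids_step(pos, vel, r_sep=1.0, r_nbr=5.0, w_sep=0.05, w_ali=0.05,
                   w_coh=0.005, v_max=1.0, dt=1.0):
        """One update of a minimal boids model (separation, alignment, cohesion)."""
        new_vel = vel.copy()
        for i in range(len(pos)):
            d = pos - pos[i]
            dist = np.linalg.norm(d, axis=1)
            nbr = (dist < r_nbr) & (dist > 0)
            if nbr.any():
                new_vel[i] += w_coh * d[nbr].mean(axis=0)                # cohesion
                new_vel[i] += w_ali * (vel[nbr].mean(axis=0) - vel[i])   # alignment
            close = (dist < r_sep) & (dist > 0)
            if close.any():
                new_vel[i] -= w_sep * d[close].mean(axis=0)              # separation
            speed = np.linalg.norm(new_vel[i])
            if speed > v_max:
                new_vel[i] *= v_max / speed
        return pos + dt * new_vel, new_vel

    # Run the same rules for small and large flocks and compare, e.g., how often
    # the flock fragments and re-nucleates over time.
    rng = np.random.default_rng(0)
    pos = rng.uniform(0.0, 50.0, size=(512, 2))
    vel = 0.1 * rng.standard_normal((512, 2))
    for _ in range(100):
        pos, vel = boids_step(pos, vel)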
