1.
Front Neurosci ; 18: 1346805, 2024.
Article in English | MEDLINE | ID: mdl-38419664

ABSTRACT

Time-To-First-Spike (TTFS) coding in Spiking Neural Networks (SNNs) offers significant advantages in terms of energy efficiency, closely mimicking the behavior of biological neurons. In this work, we delve into the role of skip connections, a widely used concept in Artificial Neural Networks (ANNs), within the domain of SNNs with TTFS coding. Our focus is on two distinct types of skip connection architectures: (1) addition-based skip connections and (2) concatenation-based skip connections. We find that addition-based skip connections introduce an additional delay in terms of spike timing. Concatenation-based skip connections, on the other hand, circumvent this delay but produce time gaps between the post-convolution and skip connection paths, thereby restricting the effective mixing of information from the two paths. To mitigate these issues, we propose a novel approach involving a learnable delay for skip connections in the concatenation-based architecture. This approach successfully bridges the time gap between the convolutional and skip branches, facilitating improved information mixing. We conduct experiments on public datasets, including MNIST and Fashion-MNIST, illustrating the advantage of skip connections in TTFS coding architectures. Additionally, we demonstrate the applicability of TTFS coding beyond image recognition by extending it to scientific machine-learning tasks, broadening the potential uses of SNNs.
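Below is a minimal PyTorch-style sketch of the idea, assuming activations carry first-spike times as ordinary tensors; the module name, the single scalar delay, and channel-wise concatenation are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class DelayedConcatSkip(nn.Module):
    """Concatenation-based skip connection with a learnable delay on the skip
    branch (hypothetical sketch, not the paper's code). Tensors are assumed to
    hold first-spike times (TTFS coding), so adding a scalar shifts every spike
    on the skip path later in time."""

    def __init__(self, init_delay: float = 0.0):
        super().__init__()
        # One learnable scalar delay; per-channel delays would be a simple extension.
        self.delay = nn.Parameter(torch.tensor(init_delay))

    def forward(self, conv_out: torch.Tensor, skip_in: torch.Tensor) -> torch.Tensor:
        # Shift the skip branch's spike times toward the (slower) convolutional
        # branch, then concatenate along the channel dimension.
        delayed_skip = skip_in + self.delay
        return torch.cat([conv_out, delayed_skip], dim=1)
```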

2.
Sci Rep ; 13(1): 1739, 2023 Jan 31.
Article in English | MEDLINE | ID: mdl-36720936

ABSTRACT

Spectral methods are an important part of scientific computing's arsenal for solving partial differential equations (PDEs). However, their applicability and effectiveness depend crucially on the choice of basis functions used to expand the solution of a PDE. The last decade has seen the emergence of deep learning as a strong contender in providing efficient representations of complex functions. In the current work, we present an approach for combining deep neural networks with spectral methods to solve PDEs. In particular, we use a deep learning technique known as the Deep Operator Network (DeepONet) to identify candidate functions on which to expand the solution of PDEs. We have devised an approach that uses the candidate functions provided by the DeepONet as a starting point to construct a set of functions that have the following properties: (1) they constitute a basis, (2) they are orthonormal, and (3) they are hierarchical, i.e., akin to Fourier series or orthogonal polynomials. We have exploited the favorable properties of our custom-made basis functions to both study their approximation capability and use them to expand the solution of linear and nonlinear time-dependent PDEs. The proposed approach advances the state of the art and versatility of spectral methods and, more generally, promotes the synergy between traditional scientific computing and machine learning.
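A minimal NumPy sketch of the basis-construction step, under the assumption that candidate functions (for instance, trunk-net outputs of a trained DeepONet) are available sampled on a grid; the trigonometric candidates below are placeholders, and QR factorization stands in for the orthonormalization, which keeps the basis hierarchical because the first j orthonormal vectors span the first j candidates.

```python
import numpy as np

# Placeholder candidate functions sampled on a grid (stand-ins for DeepONet
# trunk-net outputs); in the paper these would come from a trained network.
x = np.linspace(0.0, 1.0, 201)
dx = x[1] - x[0]
candidates = np.column_stack(
    [np.sin((k + 1) * np.pi * x) + 0.1 * x**k for k in range(8)]
)

# Orthonormalize the candidates in order via QR, then rescale so the columns
# are orthonormal with respect to the discrete L2 inner product.
Q, _ = np.linalg.qr(candidates)
basis = Q / np.sqrt(dx)

# Expand a target function in the custom basis and check the truncation error.
f = np.exp(-10.0 * (x - 0.5) ** 2)
coeffs = basis.T @ f * dx
f_hat = basis @ coeffs
print("L2 error:", np.sqrt(np.sum((f - f_hat) ** 2) * dx))
```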

3.
J Chem Phys ; 157(14): 144104, 2022 Oct 14.
Article in English | MEDLINE | ID: mdl-36243526

ABSTRACT

A Generalized Morse Potential (GMP) is an extension of the Morse Potential (MP) with an additional exponential term and an additional parameter that compensate for the MP's erroneous behavior in the long-range part of the interaction potential. Because of the additional term and parameter, the vibrational levels of the GMP cannot be solved analytically, unlike the case for the MP. We present several numerical approaches for solving the vibrational problem of the GMP based on Galerkin methods, namely the Laguerre Polynomial Method (LPM), the Symmetrized LPM, and the Polynomial Expansion Method (PEM), and apply them to the vibrational levels of the homonuclear diatomic molecules B2, O2, and F2, for which high-level theoretical near-full configuration interaction (CI) electronic ground state potential energy surfaces and experimentally measured vibrational levels have been reported. Overall, the LPM produces vibrational states for the GMP that are converged to within spectroscopic accuracy of 0.01 cm⁻¹ between one and two orders of magnitude faster and with far fewer basis functions/grid points than the Colbert-Miller Discrete Variable Representation (CM-DVR) method for the three homonuclear diatomic molecules examined in this study. A Python library that fits and solves the GMP and similar potentials can be downloaded from https://gitlab.com/gds001uw/generalized-morse-solver.


Subjects
Algorithms, Vibration, Spectral Analysis
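For illustration, here is a NumPy sketch of the kind of matrix eigenvalue problem involved, using the standard Morse potential and the Colbert-Miller sinc-DVR (the reference method mentioned in the abstract above) rather than the GMP and the Laguerre-based Galerkin methods; the units and parameters are made up and are not fitted to B2, O2, or F2.

```python
import numpy as np

# Bound vibrational levels of the *standard* Morse potential via the
# Colbert-Miller sinc-DVR, compared with the analytic Morse spectrum.
# Hypothetical parameters in arbitrary units; not the paper's GMP.
hbar, m = 1.0, 1.0
D, a, re = 10.0, 1.0, 2.0                     # well depth, range parameter, equilibrium distance

def morse(r):
    return D * (1.0 - np.exp(-a * (r - re))) ** 2

# Evenly spaced radial grid and the Colbert-Miller kinetic-energy matrix.
n = 400
r = np.linspace(0.5, 12.0, n)
dr = r[1] - r[0]
idx = np.arange(n)
diff = idx[:, None] - idx[None, :]
safe = np.where(diff == 0, 1, diff)           # dummy value: avoids 0-division on the diagonal
T = (hbar**2 / (2.0 * m * dr**2)) * (-1.0) ** diff * np.where(
    diff == 0, np.pi**2 / 3.0, 2.0 / safe**2
)

# Diagonalize H = T + V and keep the bound states (four for these parameters).
H = T + np.diag(morse(r))
numeric = np.linalg.eigvalsh(H)[:4]

# Analytic Morse levels: E_n = w(n + 1/2) - [w(n + 1/2)]^2 / (4D), w = a*sqrt(2D/m).
w = a * np.sqrt(2.0 * D / m)
nv = np.arange(4)
exact = w * (nv + 0.5) - (w * (nv + 0.5)) ** 2 / (4.0 * D)
print("DVR:     ", np.round(numeric, 4))
print("analytic:", np.round(exact, 4))
```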
4.
Proc Natl Acad Sci U S A ; 118(37)2021 09 14.
Article in English | MEDLINE | ID: mdl-34497124

ABSTRACT

While model order reduction is a promising approach in dealing with multiscale time-dependent systems that are too large or too expensive to simulate for long times, the resulting reduced order models can suffer from instabilities. We have recently developed a time-dependent renormalization approach to stabilize such reduced models. In the current work, we extend this framework by introducing a parameter that controls the time decay of the memory of such models and optimally select this parameter based on limited fully resolved simulations. First, we demonstrate our framework on the inviscid Burgers equation whose solution develops a finite-time singularity. Our renormalized reduced order models are stable and accurate for long times while using for their calibration only data from a full order simulation before the occurrence of the singularity. Furthermore, we apply this framework to the three-dimensional (3D) Euler equations of incompressible fluid flow, where the problem of finite-time singularity formation is still open and where brute force simulation is only feasible for short times. Our approach allows us to obtain a perturbatively renormalizable model which is stable for long times and includes all the complex effects present in the 3D Euler dynamics. We find that, in each application, the renormalization coefficients display algebraic decay with increasing resolution and that the parameter which controls the time decay of the memory is problem-dependent.
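For orientation, a short NumPy sketch of the Fourier-Galerkin truncation of the inviscid Burgers equation that such reduced models start from; the paper's time-dependent renormalization coefficients and memory terms are not reproduced, and the resolution, initial condition, and forward-Euler stepping below are arbitrary choices for illustration.

```python
import numpy as np

# Resolved-variable right-hand side of inviscid Burgers, u_t + (u^2/2)_x = 0,
# Galerkin-truncated to |k| <= N Fourier modes. The renormalized memory terms
# discussed in the abstract would be added on top of this truncation.
N = 32                                  # number of resolved modes (arbitrary)
M = 4 * N                               # fine grid, wide enough to avoid aliasing
x = 2.0 * np.pi * np.arange(M) / M
ik = 1j * np.fft.fftfreq(M, d=1.0 / M)  # ik multipliers for d/dx
mask = np.abs(np.fft.fftfreq(M, d=1.0 / M)) <= N

def rhs_resolved(u_hat):
    """du_hat/dt for the resolved modes of the truncated inviscid Burgers equation."""
    u = np.fft.ifft(u_hat).real
    flux_hat = np.fft.fft(0.5 * u * u)
    return np.where(mask, -ik * flux_hat, 0.0)

# Illustration only: evolve u(x, 0) = sin(x) a short time with forward Euler.
u_hat = np.where(mask, np.fft.fft(np.sin(x)), 0.0)
dt = 1e-3
for _ in range(100):
    u_hat = u_hat + dt * rhs_resolved(u_hat)
```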

5.
Front Psychol ; 9: 1185, 2018.
Article in English | MEDLINE | ID: mdl-30050485

ABSTRACT

As deep neural networks grow in size, from thousands to millions to billions of weights, the performance of those networks becomes limited by our ability to accurately train them. A common naive question arises: if we have a system with billions of degrees of freedom, don't we also need billions of samples to train it? Of course, the success of deep learning indicates that reliable models can be learned with reasonable amounts of data. Similar questions arise in protein folding, spin glasses and biological neural networks. With effectively infinite potential folding/spin/wiring configurations, how does the system find the precise arrangement that leads to useful and robust results? Simple sampling of the possible configurations until an optimal one is reached is not a viable option even if one waited for the age of the universe. On the contrary, there appears to be a mechanism in the above phenomena that forces them to achieve configurations that live on a low-dimensional manifold, avoiding the curse of dimensionality. In the current work we use the concept of mutual information between successive layers of a deep neural network to elucidate this mechanism and suggest possible ways of exploiting it to accelerate training. We show that adding structure to the neural network leads to higher mutual information between layers. High mutual information between layers implies that the effective number of free parameters is exponentially smaller than the raw number of tunable weights, providing insight into why neural networks with far more weights than training points can be reliably trained.
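A minimal sketch of the quantity being discussed, the mutual information I(T_l; T_{l+1}) between successive layers: a simple binning estimate applied to scalar summaries of two layers' activations, on toy data rather than the paper's networks or estimator.

```python
import numpy as np

def mutual_information(a, b, bins=30):
    """Mutual information (in nats) between two 1-D samples via a joint histogram."""
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    p_ab = joint / joint.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)
    p_b = p_ab.sum(axis=0, keepdims=True)
    nz = p_ab > 0
    return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])))

# Toy "layers": layer2 is a noisy nonlinear function of layer1, so their mutual
# information is well above that of two independent signals.
rng = np.random.default_rng(0)
layer1 = rng.normal(size=10_000)
layer2 = np.tanh(2.0 * layer1) + 0.1 * rng.normal(size=10_000)
print("I(layer1; layer2):", mutual_information(layer1, layer2))
print("I(layer1; noise): ", mutual_information(layer1, rng.normal(size=10_000)))
```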

6.
Proc Math Phys Eng Sci ; 471(2176): 20140446, 2015 Apr 08.
Article in English | MEDLINE | ID: mdl-27547070

ABSTRACT

Model reduction for complex systems is a rather active area of research. For many real-world systems, constructing an accurate reduced model is prohibitively expensive. The main difficulty stems from the tremendous range of spatial and temporal scales present in the solution of such systems. This leads to the need to develop reduced models where, inevitably, the resolved variables do not exhibit (spatial and/or temporal) scale separation from the unresolved ones. We present a brief survey of recent results on the construction of Mori-Zwanzig-reduced models for such systems. The construction is inspired by the concepts of scale dependence and renormalization which first appeared in the context of high-energy and statistical physics.
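For context, the Mori-Zwanzig identity underlying such constructions can be stated as follows (standard form, in notation chosen here rather than taken from the paper): for a system du/dt = R(u) with Liouvillian L, a projection P onto functions of the resolved variables, and Q = I - P, the evolution of a resolved component u_{0k} splits into Markovian, noise, and memory contributions.

```latex
% Standard Mori-Zwanzig decomposition (generalized Langevin equation);
% notation is generic, not the paper's.
\frac{\partial}{\partial t}\, e^{tL} u_{0k}
  = \underbrace{e^{tL} P L\, u_{0k}}_{\text{Markovian}}
  + \underbrace{e^{tQL} Q L\, u_{0k}}_{\text{noise}}
  + \underbrace{\int_0^t e^{(t-s)L} P L\, e^{sQL} Q L\, u_{0k}\, ds}_{\text{memory}}
```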
