1.
Epidemiol Infect ; 152: e65, 2024 Feb 29.
Article in English | MEDLINE | ID: mdl-38418421

ABSTRACT

Panel data on the incidence of pulmonary tuberculosis (PTB) at the provincial level in China from 2004 to 2021 were combined with a geographically and temporally weighted regression (GTWR) model to explore the effect of various factors on PTB incidence from the perspective of spatial heterogeneity. Principal component analysis (PCA) was used to extract the main information from twenty-two indexes under six macro-factors. The main influencing factors were determined by Spearman correlation and multi-collinearity tests. After fitting different models, the GTWR model was used to analyse the distribution changes of the regression coefficients. All six macro-factors were correlated with the incidence of PTB, and there was no collinearity between the variables. The fitting effect of the GTWR model was better than that of the ordinary least-squares (OLS) and geographically weighted regression (GWR) models. The incidence of PTB in China was mainly affected by six macro-factors: medicine and health, transportation, environment, economy, disease, and educational quality. The degree of influence was unevenly distributed in space and time.
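The core idea of GTWR described above is that regression coefficients are re-estimated at every space-time location, with observations weighted by their spatial and temporal distance from that location. A minimal NumPy sketch of one such local fit is below; the function name, the Gaussian kernel, and the bandwidth parameters `h_s` and `h_t` are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def gtwr_fit_at(target_xyt, coords_xyt, X, y, h_s=1.0, h_t=1.0):
    """Fit a locally weighted linear regression at one space-time point.

    Weights decay with a Gaussian kernel over spatial distance and
    temporal distance -- the core mechanism of GTWR.  `coords_xyt` holds
    one (x, y, t) row per observation; `target_xyt` is the fit location.
    """
    d_s = np.linalg.norm(coords_xyt[:, :2] - target_xyt[:2], axis=1)
    d_t = np.abs(coords_xyt[:, 2] - target_xyt[2])
    w = np.exp(-(d_s / h_s) ** 2) * np.exp(-(d_t / h_t) ** 2)
    Xd = np.column_stack([np.ones(len(X)), X])      # add intercept column
    W = np.diag(w)
    # Weighted least squares: (X'WX) beta = X'Wy
    beta = np.linalg.solve(Xd.T @ W @ Xd, Xd.T @ W @ y)
    return beta  # local coefficients at this space-time point
```

Calling this once per province and year yields the spatially and temporally varying coefficient surfaces whose distribution changes the abstract analyses.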


Subject(s)
Tuberculosis, Pulmonary , Humans , China/epidemiology , Incidence , Models, Statistical , Principal Component Analysis , Risk Factors , Spatio-Temporal Analysis , Tuberculosis, Pulmonary/epidemiology
2.
Front Neurosci ; 16: 1018006, 2022.
Article in English | MEDLINE | ID: mdl-36518534

ABSTRACT

Introduction: In recent years, the application of deep learning models at the edge has gained attention. Typically, artificial neural networks (ANNs) are trained on graphics processing units (GPUs) and optimized for efficient execution on edge devices. Training ANNs directly at the edge is the next step, with many applications, such as adapting models to specific situations (e.g., changes in environmental settings) or to individuals (e.g., speaker adaptation for speech processing). Local training can also preserve privacy. Over the last few years, many algorithms have been developed to reduce memory footprint and computation.

Methods: A specific challenge in training recurrent neural networks (RNNs) on sequential data is that the Back Propagation Through Time (BPTT) algorithm must store the network state of all time steps. This limitation is resolved by the biologically inspired E-prop approach for training Spiking Recurrent Neural Networks (SRNNs). We implement the E-prop algorithm on a prototype of the SpiNNaker 2 neuromorphic system. A parallelization strategy is developed to split and train networks on the ARM cores of SpiNNaker 2 to make efficient use of both memory and compute resources. We trained an SRNN from scratch on SpiNNaker 2 in real time on the Google Speech Commands dataset for keyword spotting.

Results: We achieved an accuracy of 91.12% while requiring only 680 KB of memory to train the network with 25 K weights. Compared to other spiking neural networks with equal or better accuracy, our work is significantly more memory-efficient.

Discussion: In addition, we performed memory and time profiling of the E-prop algorithm. This is used, on the one hand, to discuss whether E-prop or BPTT is better suited for training a model at the edge and, on the other hand, to explore architecture modifications to SpiNNaker 2 to speed up online learning. Finally, energy estimations predict that the SRNN can be trained on SpiNNaker 2 with 12 times less energy than using an NVIDIA V100 GPU.
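The memory advantage of E-prop over BPTT described above comes from replacing stored per-time-step network states with a local, online eligibility trace per synapse. A simplified, non-spiking sketch of one such update is below; the function name, the decay constant `alpha`, and the shape conventions are illustrative assumptions, not the paper's SpiNNaker 2 implementation.

```python
import numpy as np

def eprop_step(w, trace, pre, post_pseudo, learn_signal, alpha=0.9, lr=1e-2):
    """One online E-prop update (simplified sketch).

    Instead of backpropagating through all time steps, each synapse keeps
    a local eligibility trace: a leaky accumulation of the product of
    presynaptic activity and a pseudo-derivative of postsynaptic activity.
    The trace is combined with a broadcast learning signal at every step,
    so no per-time-step network state has to be stored.
    """
    trace = alpha * trace + np.outer(post_pseudo, pre)   # local, online
    w = w - lr * learn_signal[:, None] * trace           # weight update now
    return w, trace
```

Because the update uses only quantities available at the current time step, memory stays constant in sequence length, which is what makes training a 25 K-weight network in 680 KB feasible.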

3.
IEEE Trans Biomed Circuits Syst ; 13(3): 579-591, 2019 06.
Article in English | MEDLINE | ID: mdl-30932847

ABSTRACT

Advances in neuroscience uncover the mechanisms employed by the brain to efficiently solve complex learning tasks with very limited resources. However, this efficiency is often lost when porting these findings to a silicon substrate, since brain-inspired algorithms often make extensive use of complex functions, such as random number generators, that are expensive to compute on standard general-purpose hardware. The prototype chip of the second-generation SpiNNaker system is designed to overcome this problem. Low-power advanced RISC machine (ARM) processors equipped with a random number generator and an exponential function accelerator enable the efficient execution of brain-inspired algorithms. We implement the recently introduced reward-based synaptic sampling model, which employs structural plasticity to learn a function or task. The numerical simulation of the model requires updating the synapse variables at each time step, including an explorative random term. To the best of our knowledge, this is the most complex synapse model implemented so far on the SpiNNaker system. By making efficient use of the hardware accelerators and numerical optimizations, the computation time of one plasticity update is reduced by a factor of 2. This, combined with fitting the model into the local static random access memory (SRAM), leads to a 62% energy reduction compared to the case without accelerators and with the use of external dynamic random access memory (DRAM). The model implementation is integrated into the SpiNNaker software framework, allowing for scalability onto larger systems. The hardware-software system presented in this paper paves the way for power-efficient mobile and biomedical applications with biologically plausible brain-inspired algorithms.
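The plasticity update described above is exactly the kind of workload the hardware accelerators target: every synapse, every time step, needs a fresh random number for the explorative term and an exponential to map the sampled parameter to a weight. A simplified Euler-step sketch of such a reward-based sampling update is below; the function name, the `temperature` parameter, and the exponential parameter-to-weight mapping are illustrative assumptions, not the paper's exact model equations.

```python
import numpy as np

def synaptic_sampling_step(theta, reward_grad, dt=1e-3, temperature=0.1,
                           rng=None):
    """One Euler step of a reward-based synaptic sampling update (sketch).

    Each synapse parameter drifts toward higher expected reward and is
    perturbed by an explorative diffusion term -- the per-synapse random
    draw and exponential below are the operations that SpiNNaker 2's
    hardware RNG and exp accelerator are designed to speed up.
    """
    if rng is None:
        rng = np.random.default_rng()
    drift = reward_grad * dt
    noise = np.sqrt(2.0 * temperature * dt) * rng.standard_normal(theta.shape)
    theta = theta + drift + noise
    w = np.exp(theta)  # map sampled parameter to a positive synaptic weight
    return theta, w
```

With `temperature` set to zero the update reduces to plain gradient drift; the stochastic term is what lets the sampler explore alternative network structures.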


Subject(s)
Brain/physiology , Machine Learning , Models, Neurological , Neural Networks, Computer , Software , Synapses/physiology , Humans