Results 1 - 3 of 3
1.
Neural Netw ; 177: 106368, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38761415

ABSTRACT

The circuitry and pathways in the brains of humans and other species have long inspired researchers and system designers to develop accurate and efficient systems capable of solving real-world problems and responding in real time. We propose Syllable-Specific Temporal Encoding (SSTE) to learn vocal sequences in a reservoir of Izhikevich neurons by forming associations between exclusive input activities and their corresponding syllables in the sequence. Our model converts audio signals to cochleograms using the CAR-FAC model to simulate a brain-like auditory learning and memorization process. The reservoir is trained using a hardware-friendly approach to FORCE learning. Reservoir computing can yield associative memory dynamics with far less computational complexity than conventional RNNs. SSTE-based learning achieves competent accuracy and stable recall of spatiotemporal sequences with fewer reservoir inputs than existing encodings proposed for a similar purpose, offering resource savings. The encoding marks syllable onsets and allows recall to start from any desired point in the sequence, making it particularly suitable for recalling subsets of long vocal sequences. SSTE can learn new signals without forgetting previously memorized sequences and is robust to occasional noise, a characteristic of real-world scenarios. The components of the model are configured to reduce resource consumption and computational intensity, addressing some of the cost-efficiency issues that might arise in future implementations aiming for compactness and real-time, low-power operation. Overall, this model proposes a brain-inspired pattern-generation network for vocal sequences that can be extended with other bio-inspired computations to explore their potential for brain-like auditory perception. Future designs could draw on this model to implement embedded devices that learn vocal sequences and recall them as needed in real time. Such systems could acquire language and speech, operate as artificial assistants, and convert text to speech in the presence of natural noise and corruption in the audio data.
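For readers unfamiliar with the neuron model named above, the sketch below iterates the standard Izhikevich update equations that a reservoir of this kind would run for every unit. The parameter values and the constant drive current are textbook regular-spiking defaults, not the paper's configuration, and the SSTE encoding, CAR-FAC front end, and FORCE training are omitted.

```python
# Minimal sketch of the Izhikevich neuron update used by such reservoirs.
# Parameters a, b, c, d are textbook regular-spiking values (assumed here),
# not the configuration reported in the paper.
import numpy as np

def izhikevich_step(v, u, I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=1.0):
    """Advance membrane potential v and recovery variable u by one step (ms)."""
    fired = v >= 30.0                       # spike threshold (mV)
    v = np.where(fired, c, v)               # reset neurons that fired last step
    u = np.where(fired, u + d, u)
    dv = 0.04 * v**2 + 5.0 * v + 140.0 - u + I
    du = a * (b * v - u)
    return v + dt * dv, u + dt * du, fired

# Drive a small pool of 100 uncoupled neurons with a constant input current.
v = np.full(100, -65.0)
u = 0.2 * v
spike_count = 0
for t in range(1000):                       # 1 s at 1 ms resolution
    v, u, fired = izhikevich_step(v, u, I=10.0)
    spike_count += int(fired.sum())
print("total spikes:", spike_count)
```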


Subject(s)
Memory, Neural Networks, Computer, Humans, Memory/physiology, Auditory Perception/physiology, Neurons/physiology, Learning/physiology, Models, Neurological
2.
Clin Auton Res ; 33(2): 165-189, 2023 Apr.
Article in English | MEDLINE | ID: mdl-37119426

ABSTRACT

PURPOSE: This systematic review aimed to evaluate the effect of transcutaneous auricular vagus nerve stimulation on heart rate variability and baroreflex sensitivity in healthy populations. METHODS: PubMed, Scopus, the Cochrane Library, Embase, and Web of Science were systematically searched for controlled trials that examined the effects of transcutaneous auricular vagus nerve stimulation on heart rate variability parameters and baroreflex sensitivity in apparently healthy individuals. Two independent researchers screened the search results, extracted the data, and evaluated the quality of the included studies. RESULTS: Of 2458 screened studies, 21 were included. Compared with baseline measures or the comparison group, significant changes in the standard deviation of NN intervals, the root mean square of successive RR-interval differences, the proportion of consecutive RR intervals that differ by more than 50 ms, high-frequency power, the low-frequency to high-frequency ratio, and low-frequency power were found in 86%, 75%, 69%, 47%, 36%, and 25% of the studies evaluating the effects of transcutaneous auricular vagus nerve stimulation on these indices, respectively. Baroreflex sensitivity was evaluated in six studies, of which only one detected a significant change. Some studies suggest that the worse the baseline autonomic function, the better the response to transcutaneous auricular vagus nerve stimulation. CONCLUSION: The results were mixed, which may be mainly attributable to heterogeneity of the study designs and of the stimulation dosages delivered. Thus, future studies with comparable designs are required to determine the optimal stimulation parameters and clarify the significance of autonomic indices as reliable markers of neuromodulation responsiveness.
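As a reference for the time-domain indices the review tallies, the sketch below computes SDNN, RMSSD, and pNN50 from a list of RR intervals using their standard definitions. The example RR values are made up, and the frequency-domain measures (LF power, HF power, LF/HF ratio) would additionally require spectral estimation, which is not shown.

```python
# Time-domain heart rate variability indices from RR intervals (ms).
import numpy as np

def hrv_time_domain(rr_ms):
    """Return SDNN, RMSSD, and pNN50 for a sequence of RR intervals in ms."""
    rr = np.asarray(rr_ms, dtype=float)
    diffs = np.diff(rr)                              # successive RR-interval differences
    sdnn = rr.std(ddof=1)                            # standard deviation of NN intervals
    rmssd = np.sqrt(np.mean(diffs**2))               # root mean square of successive differences
    pnn50 = 100.0 * np.mean(np.abs(diffs) > 50.0)    # % of successive differences > 50 ms
    return sdnn, rmssd, pnn50

# Illustrative RR series around 800 ms (~75 bpm); values are invented.
rr = [812, 790, 845, 801, 778, 830, 795, 860, 805, 782]
print(hrv_time_domain(rr))
```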


Subject(s)
Vagus Nerve Stimulation, Humans, Vagus Nerve Stimulation/methods, Heart Rate/physiology, Baroreflex/physiology, Healthy Volunteers, Vagus Nerve/physiology
3.
Front Neurosci ; 12: 698, 2018.
Article in English | MEDLINE | ID: mdl-30356803

ABSTRACT

Human intelligence relies on a vast number of neurons and their interconnections, which form a parallel computing engine. If we aim to design a brain-like machine, we have no choice but to employ many spiking neurons, each with a large number of synapses. Such a neuronal network is not only compute-intensive but also memory-intensive. The performance and configurability of modern FPGAs make them suitable hardware solutions for these challenges. This paper presents a scalable architecture to simulate a randomly connected network of Hodgkin-Huxley neurons. To demonstrate that our architecture eliminates the need for a high-end device, we employ the XC7A200T, a member of the mid-range Xilinx Artix®-7 family, as our target device. A set of techniques is proposed to reduce memory usage and computational requirements. We introduce a multi-core architecture in which each core updates the states of a group of neurons stored in its corresponding memory bank. The proposed system uses a novel method to generate the connectivity vectors on the fly instead of storing them in a large memory. The technique is based on a cyclic permutation of a single prestored connectivity vector per core. Moreover, to further reduce both resource usage and computational latency, a novel approximate two-level counter is introduced to count the spikes arriving at the synapses of the sparse network. The first level is a low-cost saturating counter, implemented in FPGA lookup tables, that reduces the number of inputs to the second-level exact adder tree, and therefore greatly lowers the hardware cost of the counter circuit. These techniques, along with pipelining, make it possible to build a high-performance, scalable architecture that can be configured either for real-time simulation of up to 5120 neurons or for large-scale simulation of up to 65536 neurons in reasonable execution time on a cost-optimized FPGA.
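A software sketch of two of the ideas described above: generating each connectivity row on the fly as a cyclic permutation of a single prestored vector, and counting incoming spikes with an approximate two-level counter (saturating first level, exact second level). The group size, saturation limit, connection probability, and firing probability are illustrative assumptions, not the values used in the paper's FPGA design.

```python
import numpy as np

rng = np.random.default_rng(0)

# One prestored connectivity vector per core; row i of the (virtual)
# connectivity matrix is produced on demand as a cyclic rotation of it,
# so the full matrix never has to be stored.
base_conn = rng.random(1024) < 0.05            # sparse base connectivity (assumed density)

def connectivity_row(i):
    return np.roll(base_conn, i)               # cyclic permutation, generated on the fly

def approx_two_level_count(spike_bits, group_size=8, sat_max=3):
    """First level: per-group saturating counters (cheap, LUT-like).
    Second level: exact sum over the group counts (small adder tree).
    For sparse activity saturation rarely clips, so the count is usually
    exact; group_size and sat_max are illustrative values only."""
    bits = np.asarray(spike_bits, dtype=int)
    pad = (-len(bits)) % group_size
    groups = np.pad(bits, (0, pad)).reshape(-1, group_size)
    return int(np.minimum(groups.sum(axis=1), sat_max).sum())

# Count presynaptic spikes reaching neuron i in one time step.
fired = rng.random(1024) < 0.2                 # which neurons spiked this step (assumed rate)
i = 42
incoming = fired & connectivity_row(i)         # spikes arriving at neuron i's synapses
print(approx_two_level_count(incoming), int(incoming.sum()))
```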
