Results 1 - 11 of 11
1.
Front Neuroinform ; 16: 883700, 2022.
Article in English | MEDLINE | ID: mdl-36387586

ABSTRACT

Graphics processing units (GPUs) are widely available and have been used with great success to accelerate scientific computing over the last decade. These advances, however, are often not available to researchers interested in simulating spiking neural networks who lack the technical knowledge to write the necessary low-level code. Writing low-level code is not necessary when using the popular Brian simulator, which provides a framework to generate efficient CPU code from high-level model definitions in Python. Here, we present Brian2CUDA, an open-source package that extends the Brian simulator with a GPU backend. Our implementation generates efficient code for the numerical integration of neuronal states and for the propagation of synaptic events on GPUs, making use of their massively parallel arithmetic capabilities. We benchmark the performance improvements of our software for several model types and find that it can accelerate simulations by up to three orders of magnitude compared to Brian's CPU backend. Currently, Brian2CUDA is the only package that supports Brian's full feature set on GPUs, including arbitrary neuron and synapse models, plasticity rules, and heterogeneous delays. When comparing its performance with Brian2GeNN, another GPU-based backend for the Brian simulator with fewer features, we find that Brian2CUDA gives comparable speedups, typically slower for small networks and faster for large ones. By combining the flexibility of the Brian simulator with the simulation speed of GPUs, Brian2CUDA enables researchers to efficiently simulate spiking neural networks with minimal effort, thereby making the advancements of GPU computing available to a larger audience of neuroscientists.
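A minimal sketch of what switching an existing Brian 2 script to this backend looks like, following the device-selection pattern the abstract describes (device name as in the Brian2CUDA documentation; the model and network size are arbitrary placeholders):

```python
from brian2 import *
import brian2cuda  # registers the "cuda_standalone" device with Brian 2

set_device("cuda_standalone")  # all generated code now targets the GPU

# The model definition itself is ordinary, unchanged Brian 2 code:
G = NeuronGroup(100_000, "dv/dt = -v / (10*ms) : 1",
                threshold="v > 1", reset="v = 0", method="exact")
run(1*second)
```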

2.
Sci Rep ; 10(1): 410, 2020 01 15.
Article in English | MEDLINE | ID: mdl-31941893

ABSTRACT

"Brian" is a popular Python-based simulator for spiking neural networks, commonly used in computational neuroscience. GeNN is a C++-based meta-compiler for accelerating spiking neural network simulations using consumer or high performance grade graphics processing units (GPUs). Here we introduce a new software package, Brian2GeNN, that connects the two systems so that users can make use of GeNN GPU acceleration when developing their models in Brian, without requiring any technical knowledge about GPUs, C++ or GeNN. The new Brian2GeNN software uses a pipeline of code generation to translate Brian scripts into C++ code that can be used as input to GeNN, and subsequently can be run on suitable NVIDIA GPU accelerators. From the user's perspective, the entire pipeline is invoked by adding two simple lines to their Brian scripts. We have shown that using Brian2GeNN, two non-trivial models from the literature can run tens to hundreds of times faster than on CPU.

3.
F1000Res ; 9: 1174, 2020.
Article in English | MEDLINE | ID: mdl-33564396

ABSTRACT

In theory, neurons modelled as single-layer perceptrons can implement all linearly separable computations. In practice, however, these computations may require arbitrarily precise synaptic weights. This is a strong constraint, since both biological neurons and their artificial counterparts have to cope with limited precision. Here, we explore how non-linear processing in dendrites helps overcome this constraint. We start by identifying a class of computations that requires increasing precision with the number of inputs in a perceptron, and show that it can be implemented without this constraint in a neuron with sub-linear dendritic subunits. We then complement this analytical study with a simulation of a biophysical neuron model with two passive dendrites and a soma, and show that it can implement this computation. This work demonstrates a new role for dendrites in neural computation: by distributing the computation across independent subunits, the same computation can be performed more efficiently with less precise tuning of the synaptic weights. It not only offers new insight into the importance of dendrites for biological neurons, but also paves the way for new, more efficient architectures of artificial neuromorphic chips.
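To make the core intuition concrete, here is a toy sketch (not the paper's construction; the square-root saturation, unit weights, and threshold are illustrative assumptions) of how a sub-linear dendritic nonlinearity can discriminate dispersed from clustered input with no fine weight tuning at all:

```python
import numpy as np

def sublinear_neuron(x, subunits, theta):
    # each dendritic subunit saturates sub-linearly (here: square root)
    return sum(np.sqrt(x[idx].sum()) for idx in subunits) >= theta

x = np.array([1.0, 1.0])                     # two equally weighted active inputs
dispersed = [np.array([0]), np.array([1])]   # one input per subunit
clustered = [np.array([0, 1]), np.array([], dtype=int)]  # both on one subunit

# sqrt(1) + sqrt(1) = 2.0 versus sqrt(2) ~ 1.41: with theta = 1.7 the neuron
# fires only for dispersed input, using identical, coarse synaptic weights.
print(sublinear_neuron(x, dispersed, 1.7))   # True
print(sublinear_neuron(x, clustered, 1.7))   # False
```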


Subject(s)
Dendrites , Neurons , Computer Simulation , Neural Networks, Computer
4.
Elife ; 8, 2019 08 20.
Article in English | MEDLINE | ID: mdl-31429824

ABSTRACT

Brian 2 allows scientists to simply and efficiently simulate spiking neural network models. These models can feature novel dynamical equations, interactions with the environment, and experimental protocols. To preserve high performance when defining new models, most simulators offer two options: low-level programming or description languages. The first option requires expertise, is prone to errors, and is problematic for reproducibility. The second option cannot describe all aspects of a computational experiment, such as the potentially complex logic of a stimulation protocol. Brian addresses these issues using runtime code generation. Scientists write simple and concise high-level model descriptions, and Brian transforms them into efficient low-level code that runs interleaved with their own code. We illustrate this with several challenging examples: a plastic model of the pyloric network, a closed-loop sensorimotor model, a programmatic exploration of a neuron model, and an auditory model with real-time input.
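For readers unfamiliar with Brian, a minimal self-contained script in standard Brian 2 syntax (parameter values are arbitrary illustrations) shows the high-level style the abstract refers to:

```python
from brian2 import *

tau = 10*ms
G = NeuronGroup(100, "dv/dt = (1 - v) / tau : 1",
                threshold="v > 0.8", reset="v = 0", method="exact")
G.v = "rand()"                 # random initial conditions, set symbolically
M = SpikeMonitor(G)

run(100*ms)                    # Brian generates and runs low-level code here
print(f"{M.num_spikes} spikes recorded")
```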


Subject(s)
Computer Simulation , Models, Neurological , Nerve Net , Neurons/physiology , Software
5.
Front Neuroinform ; 12: 68, 2018.
Article in English | MEDLINE | ID: mdl-30455637

ABSTRACT

Advances in experimental techniques and computational power, which allow researchers to gather anatomical and electrophysiological data at unprecedented levels of detail, have fostered the development of increasingly complex models in computational neuroscience. Large-scale, biophysically detailed cell models pose a particular set of computational challenges, and this has led to the development of a number of domain-specific simulators. At the other end of the spectrum, the ever-growing variety of point neuron models raises the implementation barrier even for those based on the relatively simple integrate-and-fire neuron model. Independently of model complexity, all modeling methods crucially depend on an efficient and accurate transformation of mathematical model descriptions into efficient executable code. Neuroscientists usually publish model descriptions in terms of their underlying mathematical equations. However, actually simulating them requires that they be translated into code. This can cause problems: errors may be introduced if the translation is carried out by hand, and code written by neuroscientists may not be computationally efficient. Furthermore, the translated code might be generated for different hardware platforms or operating system variants, or even written in different languages, and thus cannot easily be combined or even compared. Two main approaches to addressing these issues have been followed. The first is to limit users to a fixed set of optimized models, which restricts flexibility. The second is to allow model definitions in a high-level interpreted language, although this may limit performance. Recently, a third approach has become increasingly popular: using code generation to automatically translate high-level descriptions into efficient low-level code, combining the best of the previous approaches. This approach also greatly enriches efforts to standardize simulator-independent model description languages. In the past few years, a number of code generation pipelines have been developed in the computational neuroscience community, which differ considerably in aim, scope and functionality. This article provides an overview of the pipelines currently used within the community and contrasts their capabilities and the technologies and concepts behind them.
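As a toy illustration of the code-generation principle this review surveys (real pipelines such as Brian's or GeNN's are far more sophisticated; this only shows the idea), one can turn a textual ODE description into an executable Euler update:

```python
def generate_euler_step(rhs, state_var, dt):
    """Generate an update function from the textual right-hand side of
    d<state_var>/dt = <rhs>, using a forward Euler step of size dt."""
    src = (f"def step({state_var}):\n"
           f"    return {state_var} + ({rhs}) * {dt}")
    namespace = {}
    exec(src, {}, namespace)   # compile the generated source code
    return namespace["step"]

step = generate_euler_step("-(v - 1.0) / 0.01", "v", dt=0.001)
v = 0.0
for _ in range(100):
    v = step(v)
print(f"v after 100 steps: {v:.4f}")   # relaxes toward the fixed point 1.0
```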

6.
Elife ; 7, 2018 03 20.
Article in English | MEDLINE | ID: mdl-29557782

ABSTRACT

In recent years, multielectrode arrays and large silicon probes have been developed to record simultaneously from hundreds to thousands of densely packed electrodes. However, they require novel methods to extract the spiking activity of large ensembles of neurons. Here, we developed a new toolbox to sort spikes from these large-scale extracellular data. To validate our method, we performed simultaneous extracellular and loose-patch recordings in rodents to obtain 'ground truth' data, where the solution to the sorting problem is known for one cell. The performance of our algorithm was always close to the best expected performance over a broad range of signal-to-noise ratios, in vitro and in vivo. The algorithm is entirely parallelized and has been successfully tested on recordings with up to 4225 electrodes. Our toolbox thus offers a generic solution to accurately sort spikes from recordings with up to thousands of electrodes.
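For orientation, a minimal single-channel detection sketch is shown below; this covers only the first step of any sorter, whereas the toolbox described above additionally clusters and template-matches across thousands of channels in parallel. The filter band and threshold are illustrative assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def detect_spikes(trace, fs, threshold_sd=5.0):
    # band-pass filter to isolate spike-band activity (300 Hz to 3 kHz)
    b, a = butter(3, [300 / (fs / 2), 3000 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, trace)
    # robust noise estimate via the median absolute deviation
    noise_sd = np.median(np.abs(filtered)) / 0.6745
    crossings = np.flatnonzero(filtered < -threshold_sd * noise_sd)
    if crossings.size == 0:
        return crossings
    # keep only the first sample of each contiguous threshold crossing
    return crossings[np.insert(np.diff(crossings) > 1, 0, True)]
```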


Subject(s)
Action Potentials/physiology , Electrodes , Electrophysiology/instrumentation , Retinal Neurons/physiology , Algorithms , Animals , Computer Simulation , Electrophysiology/methods , Male , Mice , Models, Neurological , Rats, Long-Evans , Signal Processing, Computer-Assisted
7.
PeerJ Comput Sci ; 3: e142, 2017.
Article in English | MEDLINE | ID: mdl-34722870

ABSTRACT

Computer science offers a large set of tools for prototyping, writing, running, testing, validating, sharing and reproducing results; computational science, however, lags behind. In the best case, authors may provide their source code as a compressed archive and feel confident that their research is reproducible. But this is not exactly true. Jonathan Buckheit and David Donoho proposed more than two decades ago that an article about computational results is advertising, not scholarship; the actual scholarship is the full software environment, code, and data that produced the result. This implies new workflows, in particular for peer review. Existing journals have been slow to adapt: source code is rarely requested and hardly ever actually executed to check that it produces the results advertised in the article. ReScience is a peer-reviewed journal that targets computational research and encourages the explicit replication of already published research, promoting new and open-source implementations in order to ensure that the original research can be replicated from its description. To achieve this goal, the whole publishing chain is radically different from that of traditional scientific journals: ReScience resides on GitHub, where each new implementation of a computational study is made available together with comments, explanations, and software tests.

8.
J Neurosci ; 35(39): 13351-62, 2015 Sep 30.
Article in English | MEDLINE | ID: mdl-26424883

ABSTRACT

New sensory stimuli can be learned with a single or a few presentations. Similarly, the responses of cortical neurons to a stimulus have been shown to increase reliably after just a few repetitions. Long-term memory is thought to be mediated by synaptic plasticity, but in vitro experiments in cortical cells typically show very small changes in synaptic strength after a pair of presynaptic and postsynaptic spikes. Thus, it is traditionally thought that fast learning requires stronger synaptic changes, possibly because of neuromodulation. Here we show theoretically that weak synaptic plasticity can, in fact, support fast learning, because of the large number of synapses N onto a cortical neuron. In the fluctuation-driven regime characteristic of cortical neurons in vivo, the size of membrane potential fluctuations grows only as √N, whereas a single output spike leads to potentiation of a number of synapses proportional to N. Therefore, the relative effect of a single spike on synaptic potentiation grows as √N. This leverage effect requires precise spike timing. Thus, the large number of synapses onto cortical neurons allows fast learning with very small synaptic changes. Significance statement: Long-term memory is thought to rely on the strengthening of coactive synapses. This physiological mechanism is generally considered to be very gradual, and yet new sensory stimuli can be learned with just a few presentations. Here we show theoretically that this apparent paradox can be solved when there is a tight balance between excitatory and inhibitory input. In this case, small synaptic modifications applied to the many synapses onto a given neuron disrupt that balance and produce a large effect even for modifications induced by a single stimulus. This effect makes fast learning possible with small synaptic changes and reconciles physiological and behavioral observations.
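The scaling argument is easy to check numerically; a back-of-the-envelope sketch (the synapse counts are arbitrary illustrations):

```python
import numpy as np

for N in [100, 1_000, 10_000]:
    fluctuation_scale = np.sqrt(N)   # balanced-regime membrane fluctuations
    potentiation_scale = N           # synapses potentiated by a single spike
    leverage = potentiation_scale / fluctuation_scale
    print(f"N = {N:6d}: relative effect of one spike ~ {leverage:5.0f} (= sqrt(N))")
```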


Subject(s)
Learning/physiology , Neuronal Plasticity/physiology , Synapses/physiology , Algorithms , Cerebral Cortex/physiology , Computer Simulation , Electrophysiological Phenomena/physiology , Excitatory Postsynaptic Potentials/physiology , Feedback, Sensory , Humans , Memory/physiology , Models, Neurological , Neural Pathways/physiology , Neurons/physiology
9.
Front Neuroinform ; 8: 6, 2014.
Article in English | MEDLINE | ID: mdl-24550820

ABSTRACT

Simulating biological neuronal networks is a core method of research in computational neuroscience. A full specification of such a network model includes a description of the dynamics and state changes of neurons and synapses, as well as the synaptic connectivity patterns and the initial values of all parameters. A standard approach in neuronal modeling software is to build network models based on a library of pre-defined components and mechanisms; if a model component does not yet exist, it has to be defined in a special-purpose or general low-level language and potentially be compiled and linked with the simulator. Here we propose an alternative approach that allows flexible definition of models by writing textual descriptions based on mathematical notation. We demonstrate that this approach allows the definition of a wide range of models with minimal syntax. Furthermore, such explicit model descriptions allow the generation of executable code for various target languages and devices, since the description is not tied to an implementation. Finally, this approach also has advantages for readability and reproducibility, because the model description is fully explicit, and because it can be automatically parsed and transformed into formatted descriptions. The presented approach has been implemented in the Brian 2 simulator.
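A sketch of this equation-oriented style in Brian 2 syntax (constants are illustrative): the neuron and synapse models are plain textual descriptions in mathematical notation, not references into a library of fixed components.

```python
from brian2 import *

g_L = 10*nS; E_L = -70*mV; C = 200*pF   # illustrative physical constants

eqs = """
dv/dt = (g_L * (E_L - v) + I_syn) / C : volt
I_syn : amp
"""
G = NeuronGroup(10, eqs, threshold="v > -50*mV", reset="v = E_L",
                method="euler")
# the synapse model is a textual description as well:
S = Synapses(G, G, model="w : amp", on_pre="I_syn_post += w")
```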

10.
Cereb Cortex ; 19(9): 2166-80, 2009 Sep.
Article in English | MEDLINE | ID: mdl-19221143

ABSTRACT

In V1, local circuitry depends on position in the orientation map: close to pinwheel centers, recurrent inputs show variable orientation preferences, whereas within iso-orientation domains, inputs are relatively uniformly tuned. Physiological properties such as cells' membrane potentials, spike outputs, and temporal characteristics change systematically with map location. Using a firing-rate and a Hodgkin-Huxley network model, we investigate what constraints these tuning characteristics of V1 neurons impose on the cortical operating regime. Systematically varying the strength of both recurrent excitation and inhibition, we test a wide range of model classes and identify the models most likely to account for the experimental observations. We show that recent intracellular and extracellular recordings from cat V1 provide the strongest evidence for a regime in which excitatory and inhibitory recurrent inputs are balanced and dominate the feed-forward input. Our results are robust against changes in model assumptions such as the spatial extent and strength of lateral inhibition. Intriguingly, the most likely recurrent regime lies in a region of parameter space where small changes have large effects on the network dynamics, close to a regime of "runaway excitation" in which the network shows strong self-sustained activity. This could make the cortical response particularly sensitive to modulation.
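A minimal rectified-linear rate sketch (not the paper's models; the gain function, parameter values, and the instantaneous-inhibition shortcut are illustrative assumptions) of how sweeping recurrent excitation and inhibition separates a weakly recurrent regime, a balanced recurrence-dominated regime, and runaway excitation:

```python
def steady_rate(w_ee, w_ei, ff=1.0, steps=10_000, dt=0.01):
    """Relax a rectified-linear rate unit with net recurrent gain
    w_ee - w_ei (inhibition treated as instantaneous)."""
    r, w_net = 0.0, w_ee - w_ei
    for _ in range(steps):
        r += dt * (-r + max(ff + w_net * r, 0.0))
        if r > 1e6:
            return float("inf")          # runaway excitation
    return r

# weak recurrence / strong but balanced recurrence / unbalanced (runaway):
for w_ee, w_ei in [(0.5, 0.2), (4.0, 3.5), (4.0, 2.8)]:
    print(f"w_ee={w_ee}, w_ei={w_ei}: rate = {steady_rate(w_ee, w_ei):.2f}")
```

In the (4.0, 3.5) case the recurrent excitatory input far exceeds the feed-forward drive yet the net rate stays finite, mirroring the balanced, recurrence-dominated regime the abstract identifies; nudging inhibition down to 2.8 tips the same network into runaway.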


Subject(s)
Action Potentials/physiology , Cognition/physiology , Evoked Potentials, Visual/physiology , Models, Neurological , Nerve Net/physiology , Visual Cortex/physiology , Visual Perception/physiology , Computer Simulation , Humans
11.
Front Neurosci ; 1(1): 145-59, 2007 Nov.
Article in English | MEDLINE | ID: mdl-18982125

ABSTRACT

Analysis of the timecourse of the orientation tuning of responses in primary visual cortex (V1) can provide insight into the circuitry underlying tuning. Several studies have examined the temporal evolution of orientation selectivity in V1 neurons, but there is no consensus regarding the stability of orientation tuning properties over the timecourse of the response. We have used reverse-correlation analysis of the responses to dynamic grating stimuli to re-examine this issue in cat V1 neurons. We find that the preferred orientation and tuning curve shape are stable in the majority of neurons; however, more than forty percent of cells show a significant change in either preferred orientation or tuning width between early and late portions of the response. To examine the influence of the local cortical circuit connectivity, we analyzed the timecourse of responses as a function of receptive field type, laminar position, and orientation map position. Simple cells are more selective, and reach peak selectivity earlier, than complex cells. There are pronounced laminar differences in the timing of responses: middle layer cells respond faster, deep layer cells have prolonged response decay, and superficial cells are intermediate in timing. The average timing of neurons near and far from pinwheel centers is similar, but there is more variability in the timecourse of responses near pinwheel centers. This result was reproduced in an established network model of V1 operating in a regime of balanced excitatory and inhibitory recurrent connections, confirming previous results. Thus, response dynamics of cortical neurons reflect circuitry based on both vertical and horizontal location within cortical networks.
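As an illustration of the reverse-correlation logic (a synthetic stimulus and "neuron", not the paper's data or analysis pipeline): estimate time-resolved orientation tuning by counting which grating orientation preceded each spike at each delay.

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames, n_orient, max_delay = 20_000, 12, 8
stim = rng.integers(0, n_orient, n_frames)     # grating orientation per frame

# synthetic cell: fires preferentially 3 frames after orientation index 4
p_spike = 0.02 + 0.2 * np.roll(stim == 4, 3)
spike_times = np.flatnonzero(rng.random(n_frames) < p_spike)

# spike-triggered orientation histogram at each delay tau
tuning = np.zeros((max_delay, n_orient))
for tau in range(max_delay):
    valid = spike_times[spike_times >= tau]
    counts = np.bincount(stim[valid - tau], minlength=n_orient)
    tuning[tau] = counts / counts.sum()

print(tuning.argmax(axis=1))   # preferred orientation as a function of delay
```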
