Results 1 - 4 of 4
1.
Neuroscience; 489: 262-274, 2022 May 1.
Article in English | MEDLINE | ID: mdl-34364955

ABSTRACT

Computations on the dendritic trees of neurons are subject to important constraints. Voltage-dependent conductances in dendrites do not behave like arbitrary direct-current generators: they are the basis for dendritic nonlinearities, and they do not allow positive currents to be converted into negative currents. While it has been speculated that the dendritic tree of a neuron can be seen as a multi-layer neural network, and such an architecture has been shown to be computationally strong, we do not know whether that computational strength is preserved under these biological constraints. Here we simulate models of dendritic computation with and without these constraints. We find that dendritic model performance on interesting machine learning tasks is not hurt by these constraints and may even benefit from them. Our results suggest that single real dendritic trees may be able to learn a surprisingly broad range of tasks.
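To make the constraint concrete, here is a minimal sketch, not the authors' simulation code, of a dendritic tree treated as a two-layer network in which each branch applies a thresholded-linear nonlinearity and the branch-to-soma weights are clamped nonnegative, mirroring the point that dendritic conductances cannot turn positive currents into negative ones. All names, shapes, and the choice of nonlinearity are illustrative assumptions.

```python
# Toy two-layer "dendritic" neuron (illustrative sketch, not the paper's model).
# Biological constraint modeled: branch -> soma weights are nonnegative,
# so a branch's positive output current cannot become a negative somatic current.
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_subunits = 64, 8                       # synapses, dendritic branches
W_syn = rng.normal(size=(n_subunits, n_inputs))    # synaptic weights (may be +/-)
w_soma = np.abs(rng.normal(size=n_subunits))       # nonnegative branch -> soma weights

def subunit_nonlinearity(v):
    """Thresholded-linear (ReLU-like) dendritic nonlinearity."""
    return np.maximum(v, 0.0)

def dendritic_neuron(x):
    """Each branch nonlinearly transforms its input; the soma sums branch outputs."""
    branch_out = subunit_nonlinearity(W_syn @ x)
    return float(w_soma @ branch_out)              # somatic potential (pre-spike)

x = rng.normal(size=n_inputs)                      # one synaptic input pattern
print(dendritic_neuron(x))
```

If such a model were trained by gradient descent, the constraint could be maintained by projecting w_soma back onto the nonnegative orthant after each update.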


Subject(s)
Dendrites; Models, Neurological; Action Potentials/physiology; Dendrites/physiology; Neural Networks, Computer; Neurons/physiology; Synapses/physiology
2.
Neural Comput; 33(6): 1554-1571, 2021 May 13.
Article in English | MEDLINE | ID: mdl-34496390

ABSTRACT

Physiological experiments have highlighted how the dendrites of biological neurons can nonlinearly process distributed synaptic inputs. However, it is unclear how aspects of a dendritic tree, such as its branched morphology or its repetition of presynaptic inputs, determine neural computation beyond this apparent nonlinearity. Here we use a simple model in which the dendrite is implemented as a sequence of thresholded linear units. We manipulate the architecture of this model to investigate the impact of binary branching constraints and of the repetition of synaptic inputs on neural computation. We find that models with such manipulations can perform well on machine learning tasks such as Fashion-MNIST and Extended MNIST. Model performance on these tasks is limited by binary tree branching and dendritic asymmetry, and it is improved by repeating synaptic inputs across different dendritic branches. These computational experiments advance neuroscience theory on how different dendritic properties might determine neural computation of clearly defined tasks.
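A hedged sketch of the general idea, not the authors' code: a binary tree of thresholded linear units, where leaves read (possibly repeated) synaptic inputs and each internal unit combines exactly two children, the binary branching constraint. The `repeats` parameter, the random untrained weights, and all names here are illustrative assumptions.

```python
# Binary tree of thresholded linear units (illustrative, untrained sketch).
import numpy as np

rng = np.random.default_rng(1)

def relu(v):
    return np.maximum(v, 0.0)

def binary_tree_dendrite(x, depth=3, repeats=2):
    """Evaluate a random binary tree of thresholded linear units on input x."""
    n_leaves = 2 ** depth
    x_rep = np.tile(x, repeats)                    # repetition of synaptic inputs
    chunks = np.array_split(x_rep, n_leaves)       # one input slice per leaf branch
    layer = [relu(rng.normal(size=c.size) @ c) for c in chunks]   # leaf units
    while len(layer) > 1:                          # merge pairs up toward the soma
        layer = [relu(rng.normal() * a + rng.normal() * b)        # 2-input units
                 for a, b in zip(layer[0::2], layer[1::2])]
    return layer[0]                                # root (somatic) output

x = rng.normal(size=28 * 28)                       # e.g. a flattened Fashion-MNIST image
print(binary_tree_dendrite(x))
```

Setting repeats above 1 lets the same synaptic input reach several leaves, which is the architectural manipulation the abstract associates with improved task performance.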


Subject(s)
Dendrites; Models, Neurological; Machine Learning; Neurons; Synapses
3.
Behav Brain Sci; 42: e233, 2019 Nov 28.
Article in English | MEDLINE | ID: mdl-31775921

ABSTRACT

Many systems neuroscientists want to understand neurons in terms of mediation: we want to understand how neurons are involved in the causal chain from stimulus to behavior. Unfortunately, most of our tools are ill-suited to establishing mediation, even as our language takes it for granted. Here we discuss the contrast between our conceptual drive toward mediation and the difficulty of obtaining meaningful evidence for it.


Subject(s)
Metaphor; Neurons; Brain; Language
4.
Neural Comput; 31(11): 2075-2137, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31525312

ABSTRACT

Any function can be constructed as a hierarchy of simpler functions through composition. Such a hierarchy can be characterized by a binary rooted tree: each node of the tree is associated with a function that takes two numbers from its children as inputs and produces one output. As thinking about functions in terms of computation graphs becomes more popular, we may want to know which functions can be implemented on a given tree. Here we describe a set of necessary constraints, in the form of a system of nonlinear partial differential equations, that must be satisfied. Moreover, we prove that these conditions are sufficient in the contexts of analytic and bit-valued functions. In the latter case, we explicitly enumerate the discrete functions and observe that there are relatively few. Our point of view allows us to compare different neural network architectures with regard to their function spaces. Our work connects the structure of computation graphs with the functions they can implement and has potential applications to neuroscience and computer science.
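A small self-contained illustration of the "relatively few" observation for bit-valued functions, not the paper's proof or enumeration: on the tiny tree f(g(x1, x2), x3), compose every pair of the 16 two-input Boolean gates and count how many distinct three-input functions result, out of the 2**(2**3) = 256 functions of three bits. The tree shape and gate encoding here are my own assumptions for the demo.

```python
# Enumerate bit-valued functions realizable on the tree f(g(x1, x2), x3).
from itertools import product

# A two-input Boolean gate is a truth table: a tuple of 4 output bits,
# indexed by (a, b) -> 2*a + b. There are exactly 16 such gates.
gates = list(product((0, 1), repeat=4))

def apply_gate(gate, a, b):
    return gate[2 * a + b]

realized = set()
for f, g in product(gates, repeat=2):          # f at the root, g at the inner node
    table = tuple(apply_gate(f, apply_gate(g, x1, x2), x3)
                  for x1, x2, x3 in product((0, 1), repeat=3))
    realized.add(table)

print(f"{len(realized)} of 256 three-bit functions are realizable on this tree")
```

The count comes out well below 256 because the output can depend on x1 and x2 only through the single bit g(x1, x2); for instance, three-input majority is not realizable on this tree for exactly that reason.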


Subject(s)
Computer Simulation; Neural Networks, Computer