Results 1 - 20 of 20
1.
Sci Rep ; 12(1): 16003, 2022 09 29.
Article in English | MEDLINE | ID: mdl-36175466

ABSTRACT

Real-time sequence identification is a core use-case of artificial neural networks (ANNs), ranging from recognizing temporal events to identifying verification codes. Existing methods apply recurrent neural networks, which suffer from training difficulties; however, performing this function without feedback loops remains a challenge. Here, we present an experimental neuronal long-term plasticity mechanism for high-precision feedforward sequence identification networks (ID-nets) without feedback loops, wherein input objects have a given order and timing. This mechanism temporarily silences neurons following their recent spiking activity. Therefore, transitory objects act on different dynamically created feedforward sub-networks. ID-nets are demonstrated to reliably identify 10 handwritten digit sequences, and are generalized to deep convolutional ANNs with continuous activation nodes trained on image sequences. Counterintuitively, their classification performance, even with a limited number of training examples, is high for sequences but low for individual objects. ID-nets are also implemented for writer-dependent recognition, and suggested as a cryptographic tool for encrypted authentication. The presented mechanism opens new horizons for advanced ANN algorithms.
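The silencing mechanism described above can be sketched in a few lines: a unit that spiked recently is temporarily muted, so identical inputs arriving at different points in a sequence meet different effective sub-networks. This is a minimal illustration with hypothetical parameters, not the paper's exact rule.

```python
import numpy as np

def idnet_forward(inputs, W, silence_steps=2, threshold=1.0):
    """Feedforward pass over an input sequence in which a unit is
    silenced for `silence_steps` steps after it spikes (illustrative
    sketch; `silence_steps` and `threshold` are assumed values)."""
    refractory = np.zeros(W.shape[0], dtype=int)  # steps left silenced
    outputs = []
    for x in inputs:
        drive = W @ x
        drive[refractory > 0] = 0.0               # silenced units cannot fire
        spikes = (drive >= threshold).astype(float)
        refractory = np.maximum(refractory - 1, 0)
        refractory[spikes > 0] = silence_steps    # silence after spiking
        outputs.append(spikes)
    return np.array(outputs)
```

Feeding the same object twice in a row yields different hidden activity, which is what lets the network discriminate sequences rather than individual objects.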


Subjects
Brain, Neurons, Neural Networks (Computer), Neuronal Plasticity, Receptor Protein-Tyrosine Kinases, Recognition (Psychology)
2.
Sci Rep ; 12(1): 6571, 2022 04 28.
Article in English | MEDLINE | ID: mdl-35484180

ABSTRACT

Synaptic plasticity is a long-lasting core hypothesis of brain learning that suggests local adaptation between two connecting neurons and forms the foundation of machine learning. The main complexity of synaptic plasticity is that synapses and dendrites connect neurons in series, and existing experiments cannot pinpoint the significant imprinted adaptation location. We showed efficient backpropagation and Hebbian learning on dendritic trees, inspired by experimental evidence, for sub-dendritic adaptation and its nonlinear amplification. It achieves success rates approaching unity for handwritten digit recognition, indicating realization of deep learning even by a single dendrite or neuron. Additionally, dendritic amplification practically generates an exponential number of input crosses, higher-order interactions, with the number of inputs, which enhance success rates. However, direct implementation of such a large number of cross weights, and their independent exhaustive manipulation, is beyond existing and anticipated computational power. Hence, a new type of nonlinear adaptive dendritic hardware for imitating dendritic learning and estimating the computational capability of the brain must be built.
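The exponential growth of input crosses mentioned above can be made concrete: one multiplicative feature per nonempty subset of inputs gives 2**n - 1 features for n inputs. A minimal sketch (the enumeration is illustrative, not the paper's construction):

```python
from itertools import combinations
from math import prod

def cross_features(x):
    """All multiplicative input crosses: one product per nonempty
    subset of the inputs, so their count is 2**n - 1 and grows
    exponentially with the number of inputs n."""
    n = len(x)
    return [prod(x[i] for i in idx)
            for k in range(1, n + 1)
            for idx in combinations(range(n), k)]
```

Already at n = 30 this would be over a billion cross weights, which is why the abstract argues that exhaustive independent manipulation is infeasible.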


Assuntos
Dendritos , Plasticidade Neuronal , Dendritos/fisiologia , Aprendizado de Máquina , Plasticidade Neuronal/fisiologia , Neurônios/fisiologia , Sinapses/fisiologia
3.
Phys Rev E ; 105(1-1): 014401, 2022 Jan.
Article in English | MEDLINE | ID: mdl-35193251

ABSTRACT

Refractoriness is a fundamental property of excitable elements, such as neurons, indicating the probability of re-excitation within a given time lag, and is typically linked to the neuronal hyperpolarization following an evoked spike. Here we measured the refractory periods (RPs) in neuronal cultures and observed that the average anisotropic absolute RP can exceed 10 ms, with a tail reaching 20 ms, independent of a large stimulation frequency range. This is an order of magnitude longer than anticipated and comparable with the decaying membrane potential timescale. It is followed by a sharp rise-time (relative RP) of merely ∼1 ms to complete responsiveness. Extracellular stimulations result in longer absolute RPs than solely intracellular ones, and a pair of extracellular stimulations from two different routes exhibits distinct absolute RPs, depending on their order. Our results indicate that a neuron is an accurate excitable element, where the diverse RPs cannot be attributed solely to the soma, and imply fast mutual interactions between different stimulation routes and dendrites. Further elucidation of neuronal computational capabilities and their interplay with adaptation mechanisms is warranted.
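The reported RP shape — zero responsiveness over a long absolute RP followed by a ~1 ms rise — can be sketched as a toy response-probability curve. The numbers are the orders of magnitude quoted above, used illustratively, not fitted data:

```python
def response_probability(lag_ms, absolute_rp_ms=10.0, relative_rp_ms=1.0):
    """Toy curve with the shape described above: no response within
    the absolute RP, then a sharp ~1 ms linear rise (relative RP) to
    full responsiveness. Parameter values are illustrative."""
    if lag_ms <= absolute_rp_ms:
        return 0.0
    if lag_ms >= absolute_rp_ms + relative_rp_ms:
        return 1.0
    return (lag_ms - absolute_rp_ms) / relative_rp_ms
```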

4.
Sci Rep ; 10(1): 19628, 2020 11 12.
Article in English | MEDLINE | ID: mdl-33184422

ABSTRACT

Power-law scaling, a central concept in critical phenomena, is found to be useful in deep learning, where optimized test errors on handwritten digit examples converge as a power law to zero with database size. For rapid decision making with one training epoch, where each example is presented only once to the trained network, the power-law exponent increased with the number of hidden layers. For the largest dataset, the obtained test error was estimated to be in the proximity of state-of-the-art algorithms for large epoch numbers. Power-law scaling assists with key challenges found in current artificial intelligence applications and facilitates an a priori dataset size estimation to achieve a desired test accuracy. It establishes a benchmark for measuring training complexity and a quantitative hierarchy of machine learning tasks and algorithms.
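The a priori dataset-size estimation mentioned above follows directly from the power law: fitting error = a * N**(-b) in log-log space and inverting it for a target error. A minimal sketch (the fitting procedure is a standard least-squares fit, assumed here rather than taken from the paper):

```python
import numpy as np

def fit_power_law(sizes, errors):
    """Fit error = a * N**(-b) by linear regression in log-log space."""
    slope, log_a = np.polyfit(np.log(sizes), np.log(errors), 1)
    return np.exp(log_a), -slope

def required_size(a, b, target_error):
    """A priori dataset-size estimate: N = (a / target_error)**(1/b)."""
    return (a / target_error) ** (1.0 / b)
```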

5.
Sci Rep ; 10(1): 9356, 2020 Jun 04.
Article in English | MEDLINE | ID: mdl-32493994

ABSTRACT

An amendment to this paper has been published and can be accessed via a link at the top of the paper.

6.
Sci Rep ; 10(1): 6923, 2020 04 23.
Article in English | MEDLINE | ID: mdl-32327697

ABSTRACT

Attempting to imitate the brain's functionalities, researchers have bridged between neuroscience and artificial intelligence for decades; however, experimental neuroscience has not directly advanced the field of machine learning (ML). Here, using neuronal cultures, we demonstrate that increased training frequency accelerates the neuronal adaptation processes. This mechanism was implemented on artificial neural networks, where a local learning step-size increases for coherent consecutive learning steps, and tested on a simple dataset of handwritten digits, MNIST. In our online learning experiments with a few handwriting examples, the brain-inspired algorithms' success rates substantially outperform those of commonly used ML algorithms. We speculate this emerging bridge from slow brain function to ML will promote ultrafast decision making under limited examples, which is the reality in many aspects of human activity, robotic control, and network optimization.
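The step-size rule described above — a local learning rate that grows across coherent consecutive learning steps — can be sketched as follows. The growth factor and reset behavior are assumptions for illustration, not the paper's exact schedule:

```python
import numpy as np

def coherent_step_sizes(grads, eta0=0.1, boost=1.5):
    """Local step-size schedule sketch: the learning rate grows by
    `boost` for every consecutive gradient of the same sign and
    resets to eta0 on a sign flip (hypothetical constants)."""
    eta, last_sign, etas = eta0, 0, []
    for g in grads:
        sign = int(np.sign(g))
        eta = eta * boost if sign != 0 and sign == last_sign else eta0
        etas.append(eta)
        last_sign = sign
    return etas
```

Runs of same-sign gradients thus accelerate learning, mirroring the accelerated neuronal adaptation under increased training frequency.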


Assuntos
Adaptação Fisiológica , Algoritmos , Inteligência Artificial , Encéfalo/fisiologia , Simulação por Computador , Humanos , Aprendizado de Máquina
7.
Sci Rep ; 9(1): 11558, 2019 08 09.
Article in English | MEDLINE | ID: mdl-31399614

ABSTRACT

Recently, deep learning algorithms have outperformed human experts in various tasks across several domains; however, their characteristics are distant from current knowledge of neuroscience. The simulation results of biological learning algorithms presented herein outperform state-of-the-art optimal learning curves in supervised learning of feedforward networks. The biological learning algorithms comprise asynchronous input signals with decaying input summation, weight adaptation, and multiple outputs per input signal. In particular, the generalization error for such biological perceptrons decreases rapidly with an increasing number of examples and is independent of the size of the input. This is achieved either through synaptic learning or solely through dendritic adaptation with a mechanism of swinging between reflecting boundaries, without learning steps. The proposed biological learning algorithms outperform the optimal scaling of the learning curve in a traditional perceptron. They also exhibit considerable robustness to disparity between the weights of two networks with very similar outputs in biological supervised learning scenarios. The simulation results indicate the potency of neurobiological mechanisms and open opportunities for developing a superior class of deep learning algorithms.
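The decaying input summation named above can be sketched as an exponentially leaky accumulation of asynchronous inputs. The time constant is an illustrative assumption, not a value from the paper:

```python
import math

def decayed_membrane(times_ms, amplitudes, tau_ms=5.0):
    """Decaying-summation sketch: each asynchronous input contributes
    amplitude * exp(-dt / tau) at the time of the latest input
    (tau_ms is an assumed, illustrative membrane time constant)."""
    t_now = max(times_ms)
    return sum(a * math.exp(-(t_now - t) / tau_ms)
               for t, a in zip(times_ms, amplitudes))
```

Closely spaced inputs therefore sum almost fully, while widely spaced ones barely interact, which is the core difference from a standard perceptron's instantaneous sum.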

8.
Sci Rep ; 8(1): 13091, 2018 08 30.
Article in English | MEDLINE | ID: mdl-30166579

ABSTRACT

Experimental evidence recently indicated that neural networks can learn in a different manner than was previously assumed, using adaptive nodes instead of adaptive links. Consequently, links to a node undergo the same adaptation, resulting in cooperative nonlinear dynamics with oscillating effective link weights. Here we show that the biological reality of a stationary log-normal distribution of effective link weights in neural networks is a result of such adaptive nodes, although each effective link weight varies significantly in time. The underlying mechanism is a stochastic restoring force emerging from a spontaneous temporal ordering of spike pairs, generated by a strong effective link preceded by a weak one. In addition, for feedforward adaptive node networks the number of dynamical attractors can scale exponentially with the number of links. These results are expected to advance deep learning capabilities and to open horizons to an interplay between adaptive node rules and the distribution of network link weights.

9.
Front Neurosci ; 12: 358, 2018.
Article in English | MEDLINE | ID: mdl-29910706

ABSTRACT

Introduction: rTMS has been proven effective in the treatment of neuropsychiatric conditions, with class A (definite efficacy) evidence for the treatment of depression and pain (Lefaucheur et al., 2014). The efficacy of stimulation protocols is, however, quite heterogeneous. Saturation of neuronal firing by HFrTMS without allowing time for recovery may lead to neuronal response failures (NRFs) that compromise the efficacy of stimulation at higher frequencies. Objectives: To examine the efficacy of different rTMS temporal stimulation patterns, focusing on a possible upper stimulation limit related to response failures. Protocol patterns were derived from published clinical studies on therapeutic rTMS for depression and pain and compared with conduction failures in cell cultures. Methodology: From 57 papers using protocols rated class A for depression and pain (Lefaucheur et al., 2014), we extracted the inter-train interval (ITI), average frequency, total duration, and total number of pulses, and plotted them against the percent improvement on the outcome scale. Specifically, we compared 10 Hz trains with ITIs of 8 s (protocol A) and 26 s (protocol B) in vitro on cultured cortical neurons. Results: In the in vitro experiments, protocol A with 8-s ITIs resulted in more frequent response failures, while practically no response failures occurred with protocol B (26-s intervals). The HFrTMS protocol analysis exhibited no significant effect of ITIs on protocol efficiency. Discussion: In the neuronal culture, longer ITIs appeared to allow the neuronal response to recover. In the available human dataset on both depression and chronic pain, the data concerning shorter ITIs do not allow a significant conclusion. Significance: NRFs may interfere with the efficacy of rTMS stimulation protocols when the average stimulation frequency is too high, suggesting ITIs as a variable in rTMS protocol efficacy. Clinical trials are necessary to examine the effect of shorter ITIs on the clinical outcome in a controlled setting.
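The "average stimulation frequency" at stake above is simply the pulses per train-plus-ITI cycle divided by the cycle duration. A quick worked sketch, assuming a 4 s train at 10 Hz (the train duration is an assumption for illustration, not taken from the protocols):

```python
def average_frequency(train_hz, train_s, iti_s):
    """Average stimulation frequency over one train + ITI cycle:
    pulses delivered per cycle divided by cycle duration."""
    return (train_hz * train_s) / (train_s + iti_s)
```

With these assumed numbers, protocol A (8 s ITI) averages about 3.3 Hz and protocol B (26 s ITI) about 1.3 Hz, illustrating why longer ITIs lower the average frequency and leave more time for neuronal recovery.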

10.
ACS Chem Neurosci ; 9(6): 1230-1232, 2018 06 20.
Article in English | MEDLINE | ID: mdl-29727167

ABSTRACT

Experimental and theoretical results reveal a new underlying mechanism for a fast brain learning process, dendritic learning, as opposed to the misdirected research in neuroscience over decades, which was based solely on slow synaptic plasticity. The presented paradigm indicates that learning occurs closer to the neuron, the computational unit; that dendritic strengths are self-oscillating; and that weak synapses, which comprise the majority of our brain and were previously assumed to be insignificant, play a key role in plasticity. The new learning sites of the brain call for a reevaluation of current treatments for disordered brain functionality and for a better understanding of the appropriate chemical drugs and biological mechanisms to maintain, control, and enhance learning.


Assuntos
Encéfalo/fisiologia , Dendritos/fisiologia , Aprendizagem/fisiologia , Plasticidade Neuronal/fisiologia , Animais , Humanos , Neurônios/fisiologia , Sinapses/fisiologia
11.
Sci Rep ; 8(1): 5100, 2018 03 23.
Article in English | MEDLINE | ID: mdl-29572466

ABSTRACT

Physical models typically assume time-independent interactions, whereas neural networks and machine learning incorporate interactions that function as adjustable parameters. Here we demonstrate a new type of abundant cooperative nonlinear dynamics where learning is attributed solely to the nodes, instead of the network links, whose number is significantly larger. The nodal, neuronal, fast adaptation follows its relative anisotropic (dendritic) input timings, as indicated experimentally, similarly to the slow learning mechanism currently attributed to the links, synapses. It represents a non-local learning rule, where many incoming links to a node effectively undergo the same adaptation concurrently. The network dynamics are now counterintuitively governed by the weak links, which were previously assumed to be insignificant. This cooperative nonlinear dynamic adaptation presents a self-controlled mechanism to prevent divergence or vanishing of the learning parameters, as opposed to learning by links, and also supports self-oscillations of the effective learning parameters. It hints at a hierarchical computational complexity of nodes, following their number of anisotropic inputs, and opens new horizons for advanced deep learning algorithms and artificial-intelligence-based applications, as well as a new mechanism for enhanced and fast learning by neural networks.
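The non-local rule described above — all incoming links of a node undergoing the same adaptation concurrently — contrasts with per-link updates and can be sketched in one line of matrix algebra. The multiplicative form is an illustrative assumption, not the paper's exact dynamics:

```python
import numpy as np

def nodal_adaptation(W, nodal_factors):
    """Non-local nodal rule sketch: every incoming link of node i
    (row i of W) is scaled by the same factor nodal_factors[i],
    in contrast to independent per-link updates."""
    return W * np.asarray(nodal_factors)[:, None]
```

A node's entire input row moves together, so the number of free learning parameters is the number of nodes, far fewer than the number of links.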

12.
Sci Rep ; 7(1): 18036, 2017 12 21.
Article in English | MEDLINE | ID: mdl-29269849

ABSTRACT

Neurons are the computational elements that compose the brain, and their fundamental principles of activity have been known for decades. According to the long-lasting computational scheme, each neuron sums the incoming electrical signals via its dendrites, and when the membrane potential reaches a certain threshold the neuron typically generates a spike to its axon. Here we present three types of experiments, using neuronal cultures, indicating that each neuron functions as a collection of independent threshold units. The neuron is anisotropically activated following the origin of the arriving signals to the membrane, via its dendritic trees. The first type of experiment demonstrates that a single neuron's spike waveform typically varies as a function of the stimulation location. The second type reveals that spatial summation is absent for extracellular stimulations from different directions. The third type indicates that spatial summation and subtraction are not achieved when combining intra- and extracellular stimulations, as well as for nonlocal time interference, where the precise timings of the stimulations are irrelevant. These results call for re-examining neuronal functionalities beyond the traditional framework, and for exploring the advanced computational capabilities and dynamical properties of such complex systems.


Assuntos
Potenciais de Ação/fisiologia , Axônios/fisiologia , Encéfalo/fisiologia , Dendritos/fisiologia , Modelos Neurológicos , Neurônios/fisiologia , Animais
13.
Sci Rep ; 7(1): 2700, 2017 06 02.
Article in English | MEDLINE | ID: mdl-28578398

ABSTRACT

We present an analytical framework that allows the quantitative study of statistical dynamic properties of networks with adaptive nodes that have memory, and use it to examine the emergence of oscillations in networks with response failures. The frequency of the oscillations was quantitatively found to increase with the excitability of the nodes and with the average degree of the network, and to decrease with delays between nodes. For networks of networks, diverse cluster oscillation modes were found as a function of the topology. Analytical results are in agreement with large-scale simulations and open the horizon for understanding network dynamics composed of finite-memory nodes as well as their different phases of activity.

14.
Sci Rep ; 6: 36228, 2016 11 08.
Article in English | MEDLINE | ID: mdl-27824075

ABSTRACT

The increasing number of recording electrodes enhances the capability of capturing the network's cooperative activity; however, using too many monitors might alter the properties of the measured neural network and induce noise. Using a technique that merges simultaneous multi-patch-clamp and multi-electrode array recordings of neural networks in vitro, we show that the membrane potential of a single neuron is a reliable and super-sensitive probe for monitoring such cooperative activities and their detailed rhythms. Specifically, the membrane potential and the spiking activity of a single neuron are either highly correlated or highly anti-correlated with the time-dependent macroscopic activity of the entire network. This surprising observation also sheds light on the cooperative origin of neuronal bursts in cultured networks. Our findings present a flexible alternative to the approach of massively tiling networks with large-scale arrays of electrodes to monitor their activity.
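The correlation claim above reduces to a standard Pearson coefficient between the single-neuron trace and the macroscopic network activity; values near +1 or -1 would mark the neuron as a reliable probe. A minimal sketch of that check:

```python
import numpy as np

def probe_correlation(membrane_v, network_activity):
    """Pearson correlation between one neuron's membrane-potential
    trace and the macroscopic network activity. Magnitudes near 1
    indicate the neuron tracks (or anti-tracks) the whole network."""
    return np.corrcoef(membrane_v, network_activity)[0, 1]
```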


Assuntos
Neurônios/fisiologia , Técnicas de Patch-Clamp/métodos , Análise de Célula Única/instrumentação , Potenciais de Ação , Animais , Células Cultivadas , Potenciais da Membrana , Neurônios/citologia , Ratos
15.
Sci Rep ; 6: 31674, 2016 08 17.
Article in English | MEDLINE | ID: mdl-27530974

ABSTRACT

Catastrophic failures are complete and sudden collapses in the activity of large networks such as economies, electrical power grids, and computer networks, which typically require a manual recovery process. Here we experimentally show that excitatory neural networks are governed by a non-Poissonian reoccurrence of catastrophic failures, where their repetition time follows a multimodal distribution characterized by timescales of a few tenths of a second and of tens of seconds. The mechanism underlying the termination and reappearance of network activity is quantitatively shown here to be associated with nodal time-dependent features, neuronal plasticity, where hyperactive nodes damage the response capability of their neighbors. This presents a complementary mechanism for the emergence of Poissonian catastrophic failures from damage conductivity. The degradation of neighbors by hyperactive nodes represents a type of local competition, which is a common feature in the dynamics of real-world complex networks, whereas their spontaneous recoveries represent a vitality which enhances reliable functionality.

16.
Article in English | MEDLINE | ID: mdl-26578893

ABSTRACT

Broadband spontaneous macroscopic neural oscillations are rhythmic cortical firing patterns that were extensively examined during the last century; however, their possible origin is still controversial. In this work we show how macroscopic oscillations emerge in solely excitatory random networks without topological constraints. We experimentally and theoretically show that these oscillations stem from a counterintuitive underlying mechanism: intrinsic stochastic neuronal response failures (NRFs). These NRFs, which are characterized by short-term memory, lead to cooperation among neurons, resulting in sub-hertz or several-hertz macroscopic oscillations which coexist with high-frequency gamma oscillations. A quantitative interplay between the statistical network properties and the emerging oscillations is supported by simulations of large networks based on single-neuron in-vitro experiments and a Langevin equation describing the network dynamics. The results call for the examination of these oscillations in the presence of inhibition and external drives.


Assuntos
Córtex Cerebral/fisiologia , Fenômenos Eletrofisiológicos/fisiologia , Rede Nervosa/fisiologia , Plasticidade Neuronal/fisiologia , Neurônios/fisiologia , Animais , Animais Recém-Nascidos , Redes Neurais de Computação , Ratos , Ratos Sprague-Dawley
17.
Article in English | MEDLINE | ID: mdl-26124707

ABSTRACT

Realizations of low firing rates in neural networks usually require globally balanced distributions among excitatory and inhibitory links, while the feasibility of temporal coding is limited by neuronal millisecond precision. We show that cooperation, governing global network features, emerges through nodal properties, as opposed to link distributions. Using in vitro and in vivo experiments, we demonstrate microsecond precision of neuronal response timings under low stimulation frequencies, whereas moderate frequencies result in a chaotic neuronal phase characterized by degraded precision. Above a critical stimulation frequency, which varies among neurons, response failures were found to emerge stochastically, such that the neuron functions as a low-pass filter, saturating the average inter-spike interval. This intrinsic neuronal response impedance mechanism leads to cooperation on the network level, such that firing rates are suppressed toward the lowest neuronal critical frequency simultaneously with neuronal microsecond precision. Our findings open up opportunities for controlling global features of network dynamics through a few nodes with extreme properties.
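The low-pass behavior above can be sketched with a simple failure model: assume responses fail with probability 1 - critical_hz / stim_hz once the stimulation rate exceeds the neuron's critical frequency, so the mean output rate, and hence the mean inter-spike interval, saturates. The failure law and the critical frequency value are assumptions for illustration:

```python
def mean_isi_s(stim_hz, critical_hz=20.0):
    """Low-pass sketch: below critical_hz every stimulation evokes a
    spike; above it, stochastic response failures cap the mean output
    rate at critical_hz, so the mean inter-spike interval saturates
    at 1 / critical_hz (critical_hz is an illustrative value)."""
    p_response = min(1.0, critical_hz / stim_hz)
    return 1.0 / (stim_hz * p_response)
```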


Assuntos
Potenciais de Ação/fisiologia , Rede Nervosa/fisiologia , Neurônios/fisiologia , Neurotransmissores/farmacologia , Animais , Animais Recém-Nascidos , Células Cultivadas , Córtex Cerebral/citologia , Simulação por Computador , Estimulação Elétrica , Modelos Neurológicos , Ratos , Ratos Sprague-Dawley , Tempo de Reação/efeitos dos fármacos
18.
Front Neurosci ; 9: 508, 2015.
Article in English | MEDLINE | ID: mdl-26834538

ABSTRACT

The experimental study of neural networks requires simultaneous measurements of a massive number of neurons while monitoring properties of the connectivity, synaptic strengths, and delays. Current technological barriers make such a mission unachievable. In addition, as a result of the enormous number of required measurements, the estimated network parameters would differ from the original ones. Here we present a versatile experimental technique which enables the study of recurrent neural network activity while being capable of dictating the network connectivity and synaptic strengths. This method is based on the observation that the response of neurons depends solely on their recent stimulations, a short-term memory. It allows a long-term scheme of stimulating and recording a single neuron to mimic simultaneous activity measurements of neurons in a recurrent network. Utilization of this technique demonstrates the spontaneous emergence of cooperative synchronous oscillations, in particular the coexistence of fast γ and slow δ oscillations, and opens the horizon for the experimental study of other cooperative phenomena within large-scale neural networks.

19.
Article in English | MEDLINE | ID: mdl-24808856

ABSTRACT

In 1943 McCulloch and Pitts suggested that the brain is composed of reliable logic gates, similar to the logic at the core of today's computers. This framework had a limited impact on neuroscience, since neurons exhibit far richer dynamics. Here we propose a new, experimentally corroborated paradigm in which the truth tables of the brain's logic gates are time dependent, i.e., dynamic logic gates (DLGs). The truth tables of the DLGs depend on the history of their activity and the stimulation frequencies of their input neurons. Our experimental results are based on a procedure where conditioned stimulations were enforced on circuits of neurons embedded within a large-scale network of cortical cells in vitro. We demonstrate that the underlying biological mechanism is the unavoidable increase of neuronal response latencies to ongoing stimulations, which imposes a non-uniform gradual stretching of network delays. The limited experimental results are confirmed and extended by simulations and theoretical arguments based on identical neurons with a fixed increase of the neuronal response latency per evoked spike. We anticipate that our results will lead to a better understanding of the suitability of this computational paradigm for accounting for the brain's functionalities, and that they will require the development of new systematic mathematical methods beyond those developed for traditional Boolean algebra.
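The theoretical model named above — identical neurons with a fixed latency increase per evoked spike — can be sketched directly; this history-dependent latency is what stretches network delays and makes the gates' truth tables time dependent. The base latency and increment are illustrative numbers:

```python
class LatencyNeuron:
    """Latency-stretching sketch: the response latency grows by a
    fixed increment per evoked spike, as in the identical-neuron
    model described above (parameter values are illustrative)."""

    def __init__(self, base_latency_ms=2.0, increment_ms=0.1):
        self.latency_ms = base_latency_ms
        self.increment_ms = increment_ms

    def evoke(self):
        """Return the current response latency, then stretch it."""
        latency = self.latency_ms
        self.latency_ms += self.increment_ms
        return latency
```

Two such neurons stimulated at different frequencies accumulate different latencies, so which input arrives first at a downstream gate, and hence the gate's effective truth table, changes with history.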

20.
Front Neural Circuits ; 7: 176, 2013.
Article in English | MEDLINE | ID: mdl-24198764

ABSTRACT

A classical view of neural coding relies on temporal firing synchrony among functional groups of neurons; however, the underlying mechanism remains an enigma. Here we experimentally demonstrate a mechanism where time lags among neuronal spiking leap from several tens of milliseconds to nearly zero-lag synchrony. It also allows sudden leaps out of synchrony, hence forming short epochs of synchrony. Our results are based on an experimental procedure where conditioned stimulations were enforced on circuits of neurons embedded within a large-scale network of cortical cells in vitro, and are corroborated by simulations of neuronal populations. The underlying biological mechanisms are the unavoidable increase of the neuronal response latency to ongoing stimulations and the temporal or spatial summation required to generate evoked spikes. These sudden leaps in and out of synchrony may be accompanied by multiplications of the neuronal firing frequency, hence offering reliable information-bearing indicators which may bridge between the two principal neuronal coding paradigms.


Assuntos
Potenciais de Ação/fisiologia , Neurônios/fisiologia , Animais , Simulação por Computador , Modelos Neurológicos , Ratos , Ratos Sprague-Dawley