Results 1 - 20 of 31
1.
J Chem Phys ; 159(21)2023 Dec 07.
Article in English | MEDLINE | ID: mdl-38047510

ABSTRACT

Systems with many stable configurations abound in nature, both in living and inanimate matter, encoding a rich variety of behaviors. In equilibrium, a multistable system is more likely to be found in configurations with lower energy, but the presence of an external drive can alter the relative stability of different configurations in unexpected ways. Living systems are examples par excellence of metastable nonequilibrium attractors whose structure and stability are highly dependent on the specific form and pattern of the energy flow sustaining them. Taking this distinctively lifelike behavior as inspiration, we sought to investigate the more general physical phenomenon of drive-specific selection in nonequilibrium dynamics. To do so, we numerically studied driven disordered mechanical networks of bistable springs possessing a vast number of stable configurations arising from the two stable rest lengths of each spring, thereby capturing the essential physical properties of a broad class of multistable systems. We found that there exists a range of forcing amplitudes for which the attractor states of driven disordered multistable mechanical networks are fine-tuned with respect to the pattern of external forcing to have low energy absorption from it. Additionally, we found that these drive-specific attractor states are further stabilized by precise matching between the multidimensional shape of their orbit and that of the potential energy well they inhabit. Lastly, we showed evidence of drive-specific selection in an experimental system and proposed a general method to estimate the range of drive amplitudes for drive-specific selection.
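The key ingredient, a spring with two stable rest lengths, is easy to sketch numerically. The following is a minimal illustration using a hypothetical quartic double-well energy with illustrative parameters, not the driven networks studied in the paper:

```python
# Sketch of a single bistable spring's double-well energy, assuming a
# quartic form E(l) = k (l - l1)^2 (l - l2)^2 with two stable rest
# lengths l1, l2 (hypothetical parameters).
def energy(l, k=1.0, l1=1.0, l2=2.0):
    return k * (l - l1) ** 2 * (l - l2) ** 2

# Scan for local minima on a grid: both rest lengths are stable.
grid = [i / 1000 for i in range(500, 2501)]
E = [energy(l) for l in grid]
minima = [grid[i] for i in range(1, len(grid) - 1)
          if E[i] < E[i - 1] and E[i] < E[i + 1]]
print(minima)  # two minima, at l1 = 1.0 and l2 = 2.0
```

A network of N such springs then has up to 2^N mechanically stable configurations, which is the combinatorial origin of the multistability described above.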

2.
PLoS Comput Biol ; 18(12): e1010776, 2022 12.
Article in English | MEDLINE | ID: mdl-36574424

ABSTRACT

Working memory has long been thought to arise from sustained spiking/attractor dynamics. However, recent work has suggested that short-term synaptic plasticity (STSP) may help maintain attractor states over gaps in time with little or no spiking. To determine if STSP endows additional functional advantages, we trained artificial recurrent neural networks (RNNs) with and without STSP to perform an object working memory task. We found that RNNs with and without STSP were able to maintain memories despite distractors presented in the middle of the memory delay. However, RNNs with STSP showed activity that was similar to that seen in the cortex of a non-human primate (NHP) performing the same task. By contrast, RNNs without STSP showed activity that was less brain-like. Further, RNNs with STSP were more robust to network degradation than RNNs without STSP. These results show that STSP can not only help maintain working memories but also make neural networks more robust and brain-like.
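The STSP mechanism itself can be sketched in a few lines. Below is a hedged toy model in the spirit of calcium-mediated facilitation (u) and resource depletion (x); the time constants and the burst input are illustrative, not those of the trained RNNs:

```python
# Minimal short-term synaptic plasticity sketch: facilitation variable u
# and depression variable x, with illustrative parameters. After a burst
# of presynaptic spikes, the elevated u*x efficacy persists without any
# further spiking -- a "hidden" memory trace.
def stsp(spike_times, T=2.0, dt=0.001, U=0.2, tau_f=1.5, tau_d=0.2):
    u, x = U, 1.0
    trace = []
    spikes = set(round(t / dt) for t in spike_times)
    for step in range(int(T / dt)):
        u += dt * (U - u) / tau_f      # facilitation decays slowly
        x += dt * (1.0 - x) / tau_d    # resources recover quickly
        if step in spikes:             # presynaptic spike arrives
            u += U * (1.0 - u)         # facilitation jumps up
            x -= u * x                 # resources are depleted
        trace.append(u * x)            # effective synaptic efficacy
    return trace

trace = stsp([0.1, 0.15, 0.2, 0.25])   # a brief burst, then silence
# well after the burst, efficacy remains above its pre-burst baseline
print(trace[500] > trace[50])
```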


Subject(s)
Brain , Memory, Short-Term , Animals , Neural Networks, Computer , Primates , Neuronal Plasticity
3.
Neural Comput ; 33(3): 590-673, 2021 03.
Article in English | MEDLINE | ID: mdl-33513321

ABSTRACT

Stable concurrent learning and control of dynamical systems is the subject of adaptive control. Despite being an established field with many practical applications and a rich theory, much of the development in adaptive control for nonlinear systems revolves around a few key algorithms. By exploiting strong connections between classical adaptive nonlinear control techniques and recent progress in optimization and machine learning, we show that there exists considerable untapped potential in algorithm development for both adaptive nonlinear control and adaptive dynamics prediction. We begin by introducing first-order adaptation laws inspired by natural gradient descent and mirror descent. We prove that when there are multiple dynamics consistent with the data, these non-Euclidean adaptation laws implicitly regularize the learned model. Local geometry imposed during learning thus may be used to select parameter vectors, out of the many that will achieve perfect tracking or prediction, for desired properties such as sparsity. We apply this result to regularized dynamics predictor and observer design, and as concrete examples, we consider Hamiltonian systems, Lagrangian systems, and recurrent neural networks. We subsequently develop a variational formalism based on the Bregman Lagrangian. We show that its Euler-Lagrange equations lead to natural gradient and mirror descent-like adaptation laws with momentum, and we recover their first-order analogues in the infinite friction limit. We illustrate our analyses with simulations demonstrating our theoretical results.
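The implicit-regularization point can be illustrated on a toy underdetermined problem. In this hedged sketch (illustrative problem, step sizes, and initialization, not the paper's adaptation laws), plain gradient descent and an exponentiated-gradient law, i.e. mirror descent with an entropy-like potential, both reach zero error but select different parameter vectors:

```python
import math

# Both algorithms fit the single constraint 2*w1 + w2 = 1, which has
# infinitely many solutions. GD converges to the minimum-l2-norm
# solution; the multiplicative EG law concentrates on one coordinate.
def residual(w):
    return 2 * w[0] + w[1] - 1

def gd(steps=20000, lr=0.01):
    w = [0.0, 0.0]
    for _ in range(steps):
        r = residual(w)
        w[0] -= lr * 2 * r
        w[1] -= lr * 1 * r
    return w

def eg(steps=20000, lr=0.01, w0=1e-3):
    w = [w0, w0]
    for _ in range(steps):
        r = residual(w)
        w[0] *= math.exp(-lr * 2 * r)   # multiplicative (mirror) update
        w[1] *= math.exp(-lr * 1 * r)
    return w

w_gd, w_eg = gd(), eg()
print([round(v, 3) for v in w_gd])            # [0.4, 0.2]: dense, min-norm
print(w_eg[1] / w_eg[0] < w_gd[1] / w_gd[0])  # True: EG is sparser
```

Both runs achieve "perfect prediction" (zero residual); only the geometry imposed by the update selects which interpolating parameter vector is learned.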

4.
PLoS Comput Biol ; 16(8): e1007659, 2020 08.
Article in English | MEDLINE | ID: mdl-32764745

ABSTRACT

The brain consists of many interconnected networks with time-varying, partially autonomous activity. There are multiple sources of noise and variation, yet activity has to eventually converge to a stable, reproducible state (or sequence of states) for its computations to make sense. We approached this problem from a control-theory perspective by applying contraction analysis to recurrent neural networks. This allowed us to find mechanisms for achieving stability in multiple connected networks with biologically realistic dynamics, including synaptic plasticity and time-varying inputs. These mechanisms included inhibitory Hebbian plasticity, excitatory anti-Hebbian plasticity, synaptic sparsity and excitatory-inhibitory balance. Our findings shed light on how stable computations might be achieved despite biological complexity. Crucially, our analysis is not limited to the stability of fixed geometric objects in state space (e.g., points, lines, planes), but rather covers the stability of state trajectories which may be complex and time-varying.
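The basic contraction argument is easy to demonstrate: if the Jacobian of the network dynamics has a uniformly negative matrix measure, all trajectories converge toward each other regardless of initial conditions. A minimal sketch with an illustrative random-weight rate network (weights kept small enough that the row sums of |W| stay below 1, a sufficient condition since tanh has slope at most 1):

```python
import math, random

# Rate RNN x' = -x + W*tanh(x) + u(t). With max row sum of |W| <= 0.75,
# the infinity-norm matrix measure of the Jacobian is <= -0.25, so the
# network is contracting. Weights and input are illustrative.
random.seed(0)
n = 5
W = [[0.3 * (random.random() - 0.5) for _ in range(n)] for _ in range(n)]

def step(x, t, dt=0.01):
    u = math.sin(t)                       # shared time-varying input
    return [x[i] + dt * (-x[i]
            + sum(W[i][j] * math.tanh(x[j]) for j in range(n)) + u)
            for i in range(n)]

xa = [random.random() for _ in range(n)]  # two different initial states
xb = [random.random() for _ in range(n)]
d0 = math.dist(xa, xb)
t = 0.0
for _ in range(3000):                     # integrate for 30 time units
    xa, xb, t = step(xa, t), step(xb, t), t + 0.01
print(math.dist(xa, xb) < 1e-2 * d0)      # trajectories have converged
```

Note that what converges here is the (time-varying) trajectory, not a fixed point: the input keeps the state moving, exactly the distinction drawn in the last sentence above.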


Subject(s)
Models, Neurological , Nerve Net/physiology , Neuronal Plasticity/physiology , Algorithms , Animals , Brain/physiology , Computational Biology , Computer Simulation , Humans
5.
PLoS One ; 15(8): e0236661, 2020.
Article in English | MEDLINE | ID: mdl-32750097

ABSTRACT

This paper considers the analysis of continuous time gradient-based optimization algorithms through the lens of nonlinear contraction theory. It demonstrates that in the case of a time-invariant objective, most elementary results on gradient descent based on convexity can be replaced by much more general results based on contraction. In particular, gradient descent converges to a unique equilibrium if its dynamics are contracting in any metric, with convexity of the cost corresponding to the special case of contraction in the identity metric. More broadly, contraction analysis provides new insights for the case of geodesically convex optimization, wherein non-convex problems in Euclidean space can be transformed to convex ones posed over a Riemannian manifold. In this case, natural gradient descent converges to a unique equilibrium if it is contracting in any metric, with geodesic convexity of the cost corresponding to contraction in the natural metric. New results using semi-contraction provide additional insights into the topology of the set of optimizers in the case when multiple optima exist. Furthermore, they show how semi-contraction may be combined with specific additional information to reach broad conclusions about a dynamical system. The contraction perspective also easily extends to time-varying optimization settings and allows one to recursively build large optimization structures out of simpler elements. Extensions to natural primal-dual optimization and game-theoretic contexts further illustrate the potential reach of these new perspectives.
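The central claim for the identity metric can be checked directly on a small strongly convex example; the objective and horizon below are illustrative:

```python
import math

# For f(x, y) = x^2 + x*y + y^2 the Hessian has eigenvalues 1 and 3, so
# gradient flow is contracting in the identity metric with rate at least
# 1: any two trajectories approach each other at least as fast as e^{-t}.
def grad(p):
    x, y = p
    return (2 * x + y, x + 2 * y)

def flow(p, T=5.0, dt=0.001):
    for _ in range(int(T / dt)):
        g = grad(p)
        p = (p[0] - dt * g[0], p[1] - dt * g[1])  # Euler step of x' = -grad f
    return p

a, b = (3.0, -1.0), (-2.0, 4.0)
d0 = math.dist(a, b)
dT = math.dist(flow(a), flow(b))
print(dT <= math.exp(-5.0) * d0 * 1.01)  # contraction at rate >= 1
```

The convergence of the distance between arbitrary trajectory pairs, rather than the decrease of f along one trajectory, is precisely what distinguishes the contraction viewpoint from the usual convexity argument.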

6.
Neural Comput ; 32(1): 36-96, 2020 01.
Article in English | MEDLINE | ID: mdl-31703177

ABSTRACT

We analyze the effect of synchronization on distributed stochastic gradient algorithms. By exploiting an analogy with dynamical models of biological quorum sensing, where synchronization between agents is induced through communication with a common signal, we quantify how synchronization can significantly reduce the magnitude of the noise felt by the individual distributed agents and their spatial mean. This noise reduction is in turn associated with a reduction in the smoothing of the loss function imposed by the stochastic gradient approximation. Through simulations on model nonconvex objectives, we demonstrate that coupling can stabilize higher noise levels and improve convergence. We provide a convergence analysis for strongly convex functions by deriving a bound on the expected deviation of the spatial mean of the agents from the global minimizer for an algorithm based on quorum sensing, the same algorithm with momentum, and the elastic averaging SGD (EASGD) algorithm. We discuss extensions to new algorithms that allow each agent to broadcast its current measure of success and shape the collective computation accordingly. We supplement our theoretical analysis with numerical experiments on convolutional neural networks trained on the CIFAR-10 data set, where we note a surprising regularizing property of EASGD even when applied to the non-distributed case. This observation suggests alternative second-order in time algorithms for nondistributed optimization that are competitive with momentum methods.
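The elastic-coupling idea can be sketched as follows; the quadratic objective, gains, and noise level are illustrative, and the update is a simplified EASGD-style scheme rather than the exact algorithms analyzed above:

```python
import random

# Each agent follows a noisy gradient of f(x) = x^2 / 2 and is
# elastically pulled toward a shared center variable, which plays the
# role of the common quorum signal. All constants are illustrative.
random.seed(1)

def run(n_agents=10, steps=5000, lr=0.05, k=0.1, noise=1.0):
    xs = [random.uniform(-5, 5) for _ in range(n_agents)]
    center = 0.0
    for _ in range(steps):
        center += lr * k * sum(x - center for x in xs)
        xs = [x - lr * (x + random.gauss(0, noise))  # noisy grad of x^2/2
                - lr * k * (x - center)              # elastic coupling
              for x in xs]
    return xs, center

xs, center = run()
spread = max(xs) - min(xs)
print(abs(center) < 0.5, spread < 2.0)  # center near minimizer, agents clustered
```

The center variable averages over the agents' independent noise realizations, which is the mechanism behind the noise reduction quantified in the paper.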

7.
Neural Comput ; 30(5): 1359-1393, 2018 05.
Article in English | MEDLINE | ID: mdl-29566357

ABSTRACT

Finding actions that satisfy the constraints imposed by both external inputs and internal representations is central to decision making. We demonstrate that some important classes of constraint satisfaction problems (CSPs) can be solved by networks composed of homogeneous cooperative-competitive modules that have connectivity similar to motifs observed in the superficial layers of neocortex. The winner-take-all modules are sparsely coupled by programming neurons that embed the constraints onto the otherwise homogeneous modular computational substrate. We show rules that embed any instance of the CSPs planar four-color graph coloring, maximum independent set, and Sudoku on this substrate and provide mathematical proofs that guarantee these graph coloring problems will converge to a solution. The network is composed of nonsaturating linear threshold neurons. Their lack of right saturation allows the overall network to explore the problem space driven through the unstable dynamics generated by recurrent excitation. The direction of exploration is steered by the constraint neurons. While many problems can be solved using only linear inhibitory constraints, network performance on hard problems benefits significantly when these negative constraints are implemented by nonlinear multiplicative inhibition. Overall, our results demonstrate the importance of instability rather than stability in network computation and offer insight into the computational role of dual inhibitory mechanisms in neural circuits.
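A single cooperative-competitive module of the kind described here can be sketched with nonsaturating linear threshold units; the gains below are illustrative, not those used in the paper:

```python
# Minimal winner-take-all module: recurrent self-excitation (alpha > 1,
# the source of the unstable exploratory dynamics) plus a shared
# inhibitory neuron that stabilizes a single winner. Illustrative gains.
def relu(v):
    return max(0.0, v)

def wta(inputs, steps=2000, dt=0.01, alpha=1.2, beta=2.0):
    n = len(inputs)
    x = [0.0] * n           # excitatory rates
    inh = 0.0               # shared inhibitory neuron
    for _ in range(steps):
        total = sum(x)
        x = [xi + dt * (-xi + relu(alpha * xi - beta * inh + I))
             for xi, I in zip(x, inputs)]
        inh += dt * (-inh + total)
    return x

rates = wta([1.0, 1.4, 0.9])
winner = max(range(3), key=lambda i: rates[i])
print(winner)  # the unit with the largest input wins
```

In the paper's CSP networks, many such modules are sparsely interconnected by constraint ("programming") neurons that bias which unit each module selects.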


Subject(s)
Models, Neurological , Neocortex/cytology , Nerve Net/physiology , Neurons/physiology , Problem Solving/physiology , Animals , Computer Simulation , Humans , Neural Networks, Computer , Nonlinear Dynamics
8.
J R Soc Interface ; 14(135)2017 10.
Article in English | MEDLINE | ID: mdl-28978747

ABSTRACT

Albatrosses can travel a thousand kilometres daily over the oceans. They extract their propulsive energy from horizontal wind shears with a flight strategy called dynamic soaring. While thermal soaring, exploited by birds of prey and sports gliders, consists of simply remaining in updrafts, extracting energy from horizontal winds necessitates redistributing momentum across the wind shear layer, by means of an intricate and dynamic flight manoeuvre. Dynamic soaring has been described as a sequence of half-turns connecting upwind climbs and downwind dives through the surface shear layer. Here, we investigate the optimal (minimum-wind) flight trajectory, with a combined numerical and analytic methodology. We show that contrary to current thinking, but consistent with GPS recordings of albatrosses, when the shear layer is thin, the optimal trajectory is composed of small-angle, large-radius arcs. Essentially, the albatross is a flying sailboat, sequentially acting as sail and keel, and is most efficient when remaining crosswind at all times. Our analysis constitutes a general framework for dynamic soaring and more broadly energy extraction in complex winds. It is geared to improve the characterization of pelagic birds' flight dynamics and habitat, and could enable the development of a robotic albatross that could travel with a virtually infinite range.


Subject(s)
Birds/physiology , Flight, Animal/physiology , Models, Biological , Animals
9.
Proc Biol Sci ; 283(1843)2016 11 30.
Article in English | MEDLINE | ID: mdl-27903878

ABSTRACT

We present a spiking neuron model of the motor cortices and cerebellum of the motor control system. The model consists of anatomically organized spiking neurons encompassing premotor, primary motor, and cerebellar cortices. The model proposes novel neural computations within these areas to control a nonlinear three-link arm model that can adapt to unknown changes in arm dynamics and kinematic structure. We demonstrate the mathematical stability of both forms of adaptation, suggesting that this is a robust approach for common biological problems of changing body size (e.g. during growth), and unexpected dynamic perturbations (e.g. when moving through different media, such as water or mud). To demonstrate the plausibility of the proposed neural mechanisms, we show that the model accounts for data across 19 studies of the motor control system. These data include a mix of behavioural and neural spiking activity, across subjects performing adaptive and static tasks. Given this proposed characterization of the biological processes involved in motor control of the arm, we provide several experimentally testable predictions that distinguish our model from previous work.


Subject(s)
Arm/physiology , Cerebellum/physiology , Models, Neurological , Motor Cortex/physiology , Humans , Neurons/physiology , Nonlinear Dynamics
10.
Sci Rep ; 5: 8422, 2015 Feb 12.
Article in English | MEDLINE | ID: mdl-25672476

ABSTRACT

Controlling complex networked systems to desired states is a key research goal in contemporary science. Despite recent advances in studying the impact of network topology on controllability, a comprehensive understanding of the synergistic effect of network topology and individual dynamics on controllability is still lacking. Here we offer a theoretical study with particular interest in the diversity of dynamic units characterized by different types of individual dynamics. Interestingly, we find a global symmetry accounting for the invariance of controllability with respect to exchanging the densities of any two different types of dynamic units, irrespective of the network topology. The highest controllability arises at the global symmetry point, at which different types of dynamic units are of the same density. The lowest controllability occurs when all self-loops are either completely absent or present with identical weights. These findings further improve our understanding of network controllability and have implications for devising the optimal control of complex networked systems in a wide range of fields.
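The underlying notion of controllability can be illustrated with the classical Kalman rank test on a toy three-node chain with self-loops; the matrices are illustrative, chosen only to show the mechanics of the test:

```python
# Kalman rank test for controllability of x' = A*x + B*u: the system is
# controllable iff [B, AB, A^2*B] has full rank. Toy 3-node chain,
# driven at node 1, with distinct self-loop weights on the diagonal.
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def rank(M, eps=1e-9):
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if abs(M[i][c]) > eps), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        M[r] = [v / M[r][c] for v in M[r]]
        for i in range(len(M)):
            if i != r:
                M[i] = [v - M[i][c] * w for v, w in zip(M[i], M[r])]
        r += 1
    return r

A = [[-1.0, 0.0, 0.0],    # chain 1 -> 2 -> 3 with distinct self-loops
     [1.0, -2.0, 0.0],
     [0.0, 1.0, -3.0]]
B = [[1.0], [0.0], [0.0]]         # drive node 1 only
ctrl = [row[:] for row in B]      # controllability matrix [B, AB, A^2B]
P = B
for _ in range(2):
    P = matmul(A, P)
    ctrl = [r1 + r2 for r1, r2 in zip(ctrl, P)]
print(rank(ctrl))  # 3: fully controllable from a single driver node
```

Making the self-loops identical (e.g. all equal to -1) collapses the rank, which is the intuition behind the low-controllability case described above.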

11.
PLoS Comput Biol ; 11(1): e1004039, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25617645

ABSTRACT

Previous explanations of computations performed by recurrent networks have focused on symmetrically connected saturating neurons and their convergence toward attractors. Here we analyze the behavior of asymmetrically connected networks of linear threshold neurons, whose positive response is unbounded. We show that, for a wide range of parameters, this asymmetry brings interesting and computationally useful dynamical properties. When driven by input, the network explores potential solutions through highly unstable 'expansion' dynamics. This expansion is steered and constrained by negative divergence of the dynamics, which ensures that the dimensionality of the solution space continues to reduce until an acceptable solution manifold is reached. Then the system contracts stably on this manifold towards its final solution trajectory. The unstable positive feedback and cross inhibition that underlie expansion and divergence are common motifs in molecular and neuronal networks. Therefore we propose that very simple organizational constraints that combine these motifs can lead to spontaneous computation and so to the spontaneous modification of entropy that is characteristic of living systems.


Subject(s)
Computer Simulation , Models, Neurological , Models, Statistical , Computational Biology , Feedback, Physiological/physiology , Nerve Net/physiology , Neurons/physiology
12.
Nat Commun ; 4: 2002, 2013.
Article in English | MEDLINE | ID: mdl-23774965

ABSTRACT

Our ability to control complex systems is a fundamental challenge of contemporary science. Recently introduced tools to identify the driver nodes, nodes through which we can achieve full control, predict the existence of multiple control configurations, prompting us to classify each node in a network based on its role in control. Accordingly, a node is critical, intermittent or redundant if it acts as a driver node in all, some or none of the control configurations. Here we develop an analytical framework to identify the category of each node, leading to the discovery of two distinct control modes in complex systems: centralized versus distributed control. We predict the control mode for an arbitrary network and show that one can alter it through small structural perturbations. The uncovered bimodality has implications from network security to organizational research and offers new insights into the dynamics and control of complex systems.

13.
C R Biol ; 336(1): 13-6, 2013 Jan.
Article in English | MEDLINE | ID: mdl-23537765

ABSTRACT

Quorum sensing is a decision-making process used by decentralized groups such as colonies of bacteria to trigger a coordinated behavior. The existence of decentralized coordinated behavior has also been suggested in the immune system. In this paper, we explore the possibility of quorum sensing mechanisms in the immune response. Cytokines are good candidates as inducers of quorum-sensing effects on the migration, proliferation and differentiation of immune cells. The existence of a quorum sensing mechanism should be explored experimentally. It may provide new perspectives on immune responses and could lead to new therapeutic strategies.


Subject(s)
Bacteria/immunology , Immunity, Cellular/physiology , Quorum Sensing/immunology , Quorum Sensing/physiology , Animals , Cell Differentiation/immunology , Cell Differentiation/physiology , Cell Movement/immunology , Cell Movement/physiology , Cell Proliferation , Cytokines/physiology , Humans
14.
Proc Natl Acad Sci U S A ; 110(7): 2460-5, 2013 Feb 12.
Article in English | MEDLINE | ID: mdl-23359701

ABSTRACT

A quantitative description of a complex system is inherently limited by our ability to estimate the system's internal state from experimentally accessible outputs. Although the simultaneous measurement of all internal variables, like all metabolite concentrations in a cell, offers a complete description of a system's state, in practice experimental access is limited to only a subset of variables, or sensors. A system is called observable if we can reconstruct the system's complete internal state from its outputs. Here, we adopt a graphical approach derived from the dynamical laws that govern a system to determine the sensors that are necessary to reconstruct the full internal state of a complex system. We apply this approach to biochemical reaction systems, finding that the identified sensors are not only necessary but also sufficient for observability. The developed approach can also identify the optimal sensors for target or partial observability, helping us reconstruct selected state variables from appropriately chosen outputs, a prerequisite for optimal biomarker design. Given the fundamental role observability plays in complex systems, these results offer avenues to systematically explore the dynamics of a wide range of natural, technological and socioeconomic systems.


Subject(s)
Biochemical Phenomena/physiology , Models, Biological , Systems Analysis , Systems Biology/methods , Systems Theory
15.
Sci Rep ; 3: 1067, 2013.
Article in English | MEDLINE | ID: mdl-23323210

ABSTRACT

A dynamical system is controllable if by imposing appropriate external signals on a subset of its nodes, it can be driven from any initial state to any desired state in finite time. Here we study the impact of various network characteristics on the minimal number of driver nodes required to control a network. We find that clustering and modularity have no discernible impact, but the symmetries of the underlying matching problem can produce linear, quadratic or no dependence on degree correlation coefficients, depending on the nature of the underlying correlations. The results are supported by numerical simulations and help narrow the observed gap between the predicted and the observed number of driver nodes in real networks.
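The minimal number of driver nodes referred to here comes from the structural-controllability minimum inputs theorem: it equals N minus the size of a maximum matching in the bipartite (out-to-in) representation of the directed network. A minimal sketch on toy graphs (the graphs themselves are illustrative):

```python
# Minimum driver nodes via maximum bipartite matching (Kuhn's
# augmenting-path algorithm). Each directed edge (u, v) is an out->in
# candidate match; unmatched "in" nodes must be driven directly.
def max_matching(n, edges):
    adj = {u: [] for u in range(n)}
    for u, v in edges:
        adj[u].append(v)
    match = {}                        # in-node -> out-node

    def augment(u, seen):
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            if v not in match or augment(match[v], seen):
                match[v] = u
                return True
        return False

    return sum(augment(u, set()) for u in range(n))

# Directed star: node 0 points at nodes 1..4. Only one edge can be
# matched, so 5 - 1 = 4 driver nodes are needed.
n, edges = 5, [(0, 1), (0, 2), (0, 3), (0, 4)]
n_drivers = n - max_matching(n, edges)
print(n_drivers)  # 4
```

A directed chain of the same five nodes, by contrast, has a matching of size 4 and needs only one driver node, illustrating how strongly the count depends on structure rather than size.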

16.
Phys Rev E Stat Nonlin Soft Matter Phys ; 86(4 Pt 1): 041914, 2012 Oct.
Article in English | MEDLINE | ID: mdl-23214622

ABSTRACT

Understanding synchronous and traveling-wave oscillations, particularly as they relate to transitions between different types of behavior, is a central problem in modeling biological systems. Here, we address this problem in the context of central pattern generators (CPGs). We use contraction theory to establish the global stability of a traveling-wave or synchronous oscillation, determined by the type of coupling. This opens the door to better design of coupling architectures to create the desired type of stable oscillations. We then use coupling that is both amplitude and phase dependent to create either globally stable synchronous or traveling-wave solutions. Using the CPG motor neuron network of a leech as an example, we show that while both traveling and synchronous oscillations can be achieved by several types of coupling, the transition between different types of behavior is dictated by a specific coupling architecture. In particular, it is only the "repulsive" but not the commonly used phase or rotational coupling that can explain the transition to high-frequency synchronous oscillations that have been observed in the heartbeat pattern generator of a leech. This shows that the overall dynamics of a CPG can be highly sensitive to the type of coupling used, even for coupling architectures that are widely believed to produce the same qualitative behavior.


Subject(s)
Biophysics/methods , Motor Neurons/physiology , Oscillometry , Algorithms , Animals , Computer Simulation , Diffusion , Interneurons/physiology , Lampreys , Leeches , Models, Neurological , Models, Statistical , Nerve Net , Oscillometry/methods , Periodicity , Urodela
17.
PLoS One ; 7(9): e44459, 2012.
Article in English | MEDLINE | ID: mdl-23028542

ABSTRACT

We introduce the concept of control centrality to quantify the ability of a single node to control a directed weighted network. We calculate the distribution of control centrality for several real networks and find that it is mainly determined by the network's degree distribution. We show that in a directed network without loops the control centrality of a node is uniquely determined by its layer index or topological position in the underlying hierarchical structure of the network. Inspired by the deep relation between control centrality and hierarchical structure in a general directed network, we design an efficient attack strategy against the controllability of malicious networks.


Subject(s)
Computer Simulation , Algorithms , Models, Theoretical
18.
Neural Comput ; 24(8): 2033-52, 2012 Aug.
Article in English | MEDLINE | ID: mdl-22509969

ABSTRACT

Models of cortical neuronal circuits commonly depend on inhibitory feedback to control gain, provide signal normalization, and selectively amplify signals using winner-take-all (WTA) dynamics. Such models generally assume that excitatory and inhibitory neurons are able to interact easily because their axons and dendrites are colocalized in the same small volume. However, quantitative neuroanatomical studies of the dimensions of axonal and dendritic trees of neurons in the neocortex show that this colocalization assumption is not valid. In this letter, we describe a simple modification to the WTA circuit design that permits the effects of distributed inhibitory neurons to be coupled through synchronization, and so allows a single WTA to be distributed widely in cortical space, well beyond the arborization of any single inhibitory neuron and even across different cortical areas. We prove by nonlinear contraction analysis and demonstrate by simulation that distributed WTA subsystems combined by such inhibitory synchrony are inherently stable. We show analytically that synchronization is substantially faster than winner selection. This circuit mechanism allows networks of independent WTAs to fully or partially compete with each other.


Subject(s)
Models, Neurological , Nerve Net/physiology , Neural Inhibition/physiology , Neural Networks, Computer , Neurons/physiology , Computer Simulation , Feedback , Nonlinear Dynamics
19.
Phys Rev E Stat Nonlin Soft Matter Phys ; 84(4 Pt 1): 041929, 2011 Oct.
Article in English | MEDLINE | ID: mdl-22181197

ABSTRACT

This paper discusses the interplay of symmetries and stability in the analysis and control of nonlinear dynamical systems and networks. Specifically, it combines standard results on symmetries and equivariance with recent convergence analysis tools based on nonlinear contraction theory and virtual dynamical systems. This synergy between structural properties (symmetries) and convergence properties (contraction) is illustrated in the contexts of network motifs arising, for example, in genetic networks, from invariance to environmental symmetries, and from imposing different patterns of synchrony in a network.


Subject(s)
Feedback, Physiological/physiology , Gene Expression Regulation/physiology , Models, Biological , Nonlinear Dynamics , Signal Transduction/physiology , Animals , Computer Simulation , Humans
20.
Neural Comput ; 23(11): 2915-41, 2011 Nov.
Article in English | MEDLINE | ID: mdl-21732858

ABSTRACT

Learning and decision making in the brain are key processes critical to survival, and yet are processes implemented by nonideal biological building blocks that can impose significant error. We explore quantitatively how the brain might cope with this inherent source of error by taking advantage of two ubiquitous mechanisms, redundancy and synchronization. In particular we consider a neural process whose goal is to learn a decision function by implementing a nonlinear gradient dynamics. The dynamics, however, are assumed to be corrupted by perturbations modeling the error, which might be incurred due to limitations of the biology, intrinsic neuronal noise, and imperfect measurements. We show that error, and the associated uncertainty surrounding a learned solution, can be controlled in large part by trading off synchronization strength among multiple redundant neural systems against the noise amplitude. The impact of the coupling between such redundant systems is quantified by the spectrum of the network Laplacian, and we discuss the role of network topology in synchronization and in reducing the effect of noise. We discuss a range of situations in which the mechanisms we model arise in brain science and draw attention to experimental evidence suggesting that cortical circuits capable of implementing the computations of interest here can be found on several scales. Finally, simulations comparing theoretical bounds to the relevant empirical quantities show that the theoretical estimates we derive can be tight.
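The redundancy mechanism can be illustrated with a toy experiment: N noisy gradient learners whose deterministic parts are identical, with the fluctuation of their spatial mean shrinking roughly like 1/sqrt(N). Constants are illustrative, and the noise here is independent across copies rather than shaped by the Laplacian coupling analyzed above:

```python
import random, statistics

# N noisy copies of gradient descent on f(x) = x^2 / 2. The spatial
# mean of the copies fluctuates around the true minimizer (x = 0) with
# a standard deviation roughly 1/sqrt(N) of the single-copy case.
random.seed(2)

def mean_fluctuation(N, steps=20000, lr=0.05, noise=1.0):
    xs = [0.0] * N
    samples = []
    for t in range(steps):
        xs = [x - lr * (x + random.gauss(0, noise)) for x in xs]
        if t > steps // 2:                 # discard the transient
            samples.append(sum(xs) / N)    # track the spatial mean
    return statistics.stdev(samples)

s1, s100 = mean_fluctuation(1), mean_fluctuation(100)
print(s100 < 0.25 * s1)  # roughly 10x noise reduction for N = 100
```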


Subject(s)
Algorithms , Brain/physiology , Decision Making/physiology , Learning/physiology , Models, Neurological , Animals , Computer Simulation , Humans , Nonlinear Dynamics