Results 1 - 12 of 12
1.
Nat Commun ; 14(1): 4911, 2023 Aug 16.
Article in English | MEDLINE | ID: mdl-37587135

ABSTRACT

Approaching limitations of digital computing technologies have spurred research in neuromorphic and other unconventional approaches to computing. Here we argue that if we want to engineer unconventional computing systems in a systematic way, we need guidance from a formal theory that is different from the classical symbolic-algorithmic Turing machine theory. We propose a general strategy for developing such a theory, and within that general view, a specific approach that we call fluent computing. In contrast to Turing, who modeled computing processes from a top-down perspective as symbolic reasoning, we adopt the scientific paradigm of physics and model physical computing systems bottom-up by formalizing what can ultimately be measured in a physical computing system. This leads to an understanding of computing as the structuring of processes, while classical models of computing systems describe the processing of structures.

2.
R Soc Open Sci ; 9(1): 211243, 2022 Jan.
Article in English | MEDLINE | ID: mdl-35070344

ABSTRACT

Nanostructured ZnO has been widely investigated as a gas sensing material. Antimony is an important dopant for ZnO that catalyses its surface reactivity and thus strengthens its gas sensing capability. However, the gas sensing behaviour of individual antimony-doped ZnO wires remains under-studied. We fabricated and characterized ZnO/ZnO:Sb core-shell micro-wires and demonstrated that individual wires are sensitive to oxygen gas flow. Temperature and light illumination strongly affect the oxygen gas sensitivity and stability of these individual wires. We found that these micro- and nano-wire oxygen sensors give their highest response to oxygen at 200°C, where the effects of light and temperature variations become vanishingly small. The underlying physics and the interplay between these effects are discussed in terms of surface-adsorbed oxygen, oxygen vacancies and hydrogen doping.

3.
Nature ; 538(7626): 467-468, 2016 Oct 27.
Article in English | MEDLINE | ID: mdl-27732576
4.
Neural Netw ; 56: 10-21, 2014 Aug.
Article in English | MEDLINE | ID: mdl-24815743

ABSTRACT

A method is provided for designing and training noise-driven recurrent neural networks as models of stochastic processes. The method unifies and generalizes two known separate modeling approaches, Echo State Networks (ESN) and Linear Inverse Modeling (LIM), under the common principle of relative entropy minimization. The power of the new method is demonstrated on a stochastic approximation of the El Niño phenomenon studied in climate research.


Subject(s)
Entropy , Neural Networks, Computer , Nonlinear Dynamics , Stochastic Processes , Algorithms , Computer Simulation , El Nino-Southern Oscillation , Linear Models , Time
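
To ground the LIM side of this unification, the following is a minimal sketch of the standard linear inverse modeling step: estimate the least-squares propagator of a multivariate time series from its lag-covariance matrices. This is textbook LIM, not the paper's relative-entropy method; the toy system and all names are illustrative.

```python
import numpy as np

def lim_propagator(X, lag=1):
    # Least-squares propagator G with x(t+lag) ~= G @ x(t) + noise,
    # estimated from lag-covariance matrices (the standard LIM step).
    X = X - X.mean(axis=0)            # work with anomalies
    X0, X1 = X[:-lag], X[lag:]
    C0 = X0.T @ X0 / len(X0)          # zero-lag covariance
    Ct = X1.T @ X0 / len(X0)          # lag covariance
    return Ct @ np.linalg.inv(C0)

# Toy usage: a noise-driven, damped 2-d rotation (an ENSO-like
# stochastic oscillation in miniature).
rng = np.random.default_rng(0)
A = np.array([[-0.05, -0.5],
              [ 0.50, -0.05]])
x, xs = np.array([1.0, 0.0]), []
for _ in range(20000):
    x = x + 0.1 * (A @ x) + 0.05 * rng.standard_normal(2)
    xs.append(x)
G = lim_propagator(np.array(xs))
print(G)          # close to the one-step propagator I + 0.1 * A
```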
5.
Biol Cybern ; 108(2): 145-57, 2014 Apr.
Article in English | MEDLINE | ID: mdl-24515094

ABSTRACT

Dynamical systems which generate periodic signals are of interest as models of biological central pattern generators and in a number of robotic applications. A basic functionality that is required in both biological modelling and robotics is frequency modulation. This leads to the question of whether there are generic mechanisms to control the frequency of neural oscillators. Here we describe why this objective is of a different nature, and more difficult to achieve, than modulating other oscillation characteristics (like amplitude, offset, signal shape). We propose a generic way to solve this task using a simple linear controller. It rests on the insight that there is a bidirectional dependency between the frequency of an oscillation and geometric properties of the neural oscillator's phase portrait. By controlling the geometry of the neural state orbits, the frequency can be controlled, provided the state space can be shaped such that the system can be pushed easily to any desired frequency.


Subject(s)
Models, Neurological , Nerve Net/physiology , Neural Networks, Computer , Robotics , Animals , Artificial Intelligence , Computer Simulation , Cybernetics
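
The abstract's control objective can be made concrete with a toy that is deliberately simpler than the paper's geometric mechanism: a Hopf normal-form oscillator whose frequency parameter is steered by a simple linear feedback law based on measured cycle times. This only illustrates the task (steering an oscillator to a target frequency); it does not reproduce the paper's phase-portrait-shaping controller, and all gains are illustrative.

```python
import numpy as np

# Toy: Hopf normal form  r' = r(1 - r^2), phi' = omega,  integrated in
# Cartesian coordinates. A linear feedback law nudges omega so that the
# measured cycle frequency approaches a target.
dt, omega, target_hz, gain = 0.001, 2 * np.pi * 1.0, 2.5, 0.5
x, y, last_cross = 1.0, 0.0, None
for step in range(200000):
    r2 = x * x + y * y
    x_new = x + dt * (x * (1 - r2) - omega * y)
    y_new = y + dt * (y * (1 - r2) + omega * x)
    # one upward zero crossing of y (at x > 0) per full cycle
    if y <= 0 < y_new and x_new > 0:
        t = step * dt
        if last_cross is not None:
            measured_hz = 1.0 / (t - last_cross)
            omega += gain * 2 * np.pi * (target_hz - measured_hz)
        last_cross = t
    x, y = x_new, y_new
print(f"reached {omega / (2 * np.pi):.3f} Hz (target {target_hz} Hz)")
```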
6.
Neural Netw ; 35: 1-9, 2012 Nov.
Article in English | MEDLINE | ID: mdl-22885243

ABSTRACT

An echo state network (ESN) consists of a large, randomly connected neural network, the reservoir, which is driven by an input signal and projects to output units. During training, only the connections from the reservoir to these output units are learned. A key requisite for output-only training is the echo state property (ESP), which means that the effect of initial conditions should vanish as time passes. In this paper, we use analytical examples to show that a widely used criterion for the ESP, namely that the spectral radius of the weight matrix be smaller than unity, is not sufficient to guarantee the echo state property. We obtain these examples by investigating local bifurcation properties of standard ESNs. Moreover, we provide new sufficient conditions for the echo state property of standard sigmoid and leaky integrator ESNs. We furthermore suggest an improved technical definition of the echo state property, and discuss what practitioners should (and should not) observe when they optimize their reservoirs for specific tasks.


Subject(s)
Neural Networks, Computer , Neurons/physiology , Nonlinear Dynamics , Algorithms , Computer Simulation , Learning , Models, Neurological , Time Factors
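
The state-forgetting behaviour that defines the echo state property can be probed empirically: drive one reservoir from two different initial states with the same input and watch the trajectories converge. As the abstract stresses, a spectral radius below unity does not guarantee the ESP, so a test like the sketch below is a sanity check for a given input, not a proof. All parameter choices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
W = rng.standard_normal((n, n))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))    # spectral radius 0.9
W_in = rng.uniform(-0.5, 0.5, n)

def run(x0, u):
    # Drive the reservoir from initial state x0 with input sequence u.
    x, traj = x0.copy(), []
    for u_t in u:
        x = np.tanh(W @ x + W_in * u_t)
        traj.append(x)
    return np.array(traj)

u = rng.uniform(-1, 1, 500)
xa = run(rng.standard_normal(n), u)          # two different initial
xb = run(rng.standard_normal(n), u)          # states, same input
d = np.linalg.norm(xa - xb, axis=1)
print(d[[0, 50, 499]])                       # should shrink toward 0
```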
7.
Neural Netw ; 24(2): 199-207, 2011 Mar.
Article in English | MEDLINE | ID: mdl-21036537

ABSTRACT

Neurodynamical models of working memory (WM) should provide mechanisms for storing, maintaining, retrieving, and deleting information. Many models address only a subset of these aspects. Here we present a rather simple WM model in which all of these performance modes are trained into a recurrent neural network (RNN) of the echo state network (ESN) type. The model is demonstrated on a bracket level parsing task with a stream of rich and noisy graphical script input. In terms of nonlinear dynamics, memory states correspond, intuitively, to attractors in an input-driven system. As a supplementary contribution, the article proposes a rigorous formal framework to describe such attractors, generalizing from the standard definition of attractors in autonomous (input-free) dynamical systems.


Subject(s)
Memory, Short-Term , Models, Neurological , Neural Networks, Computer , Memory, Short-Term/physiology , Neurons/physiology , Random Allocation
8.
Neural Comput ; 22(7): 1927-59, 2010 Jul.
Article in English | MEDLINE | ID: mdl-20141473

ABSTRACT

Hidden Markov models (HMMs) are one of the most popular and successful statistical models for time series. Observable operator models (OOMs) are generalizations of HMMs that offer several attractive advantages. In particular, a variety of highly efficient, constructive, and asymptotically correct learning algorithms are available for OOMs. However, the OOM theory suffers from the negative probability problem (NPP): a given, learned OOM may sometimes predict negative probabilities for certain events. It was recently shown that it is undecidable whether a given OOM will eventually produce such negative values. We propose a novel variant of OOMs, called norm-observable operator models (NOOMs), which avoid the NPP by design. Like OOMs, NOOMs use a set of linear operators to update system states. But unlike OOMs, they represent probabilities by the squared norm of system states, thus precluding negative probability values. While being free of the NPP, NOOMs retain most advantages of OOMs. For example, NOOMs also capture (some) processes that cannot be modeled by HMMs. More importantly, NOOMs can in principle be learned from data in a constructive way, and the learned models are asymptotically correct. We also prove that NOOMs capture all processes describable by Markov chains (MCs). This letter presents the mathematical foundations of NOOMs, discusses the expressiveness of the model class, and explains how a NOOM can be estimated from data constructively.


Subject(s)
Markov Chains , Models, Neurological , Models, Statistical , Neural Networks, Computer , Algorithms , Artificial Intelligence , Linear Models
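
The squared-norm representation of probabilities can be sketched in a few lines. The normalization condition used below (the symbol operators stacked into an isometry, so that the symbol probabilities sum to the squared state norm) is my reading of the construction as described in the abstract; consult the paper for the exact definition.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 3                                     # state dimension (toy choice)

# Two operators T["a"], T["b"] stacked from a matrix with orthonormal
# columns, so that T_a^T T_a + T_b^T T_b = I. For a unit-norm state x
# this makes ||T_a x||^2 + ||T_b x||^2 = ||x||^2 = 1: squared norms
# behave as probabilities and can never be negative.
Q, _ = np.linalg.qr(rng.standard_normal((2 * d, d)))
T = {"a": Q[:d], "b": Q[d:]}

x = np.zeros(d)
x[0] = 1.0                                # unit-norm start state
probs = {s: float(np.linalg.norm(T[s] @ x) ** 2) for s in T}
print(probs, sum(probs.values()))         # the two values sum to 1

# Conditioning on an observed symbol: apply its operator, renormalize.
x = T["a"] @ x
x /= np.linalg.norm(x)
```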
9.
Neural Comput ; 21(12): 3460-86, 2009 Dec.
Article in English | MEDLINE | ID: mdl-19686070

ABSTRACT

Observable operator models (OOMs) are a class of models for stochastic processes that properly subsumes the class that can be modeled by finite-dimensional hidden Markov models (HMMs). One of the main advantages of OOMs over HMMs is that they admit asymptotically correct learning algorithms. A series of learning algorithms has been developed, with increasing computational and statistical efficiency, whose recent culmination was the error-controlling (EC) algorithm developed by the first author. The EC algorithm is an iterative, asymptotically correct algorithm that yields (and minimizes) an assured upper bound on the modeling error. It runs at least one order of magnitude faster than EM-based HMM learning algorithms and yields significantly more accurate models. Here we present a significant improvement of the EC algorithm: the constructive error-controlling (CEC) algorithm. CEC inherits from EC the main idea of minimizing an upper bound on the modeling error but is constructive where EC needs iterations. As a consequence, we obtain further gains in learning speed without loss of modeling accuracy.


Subject(s)
Algorithms , Neural Networks, Computer , Humans , Markov Chains , Models, Statistical , Symbolism
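
Both this entry and the next build on the basic constructive OOM learning equation: estimate the probabilities of indicative sequences followed by characteristic sequences, then recover each observable operator by a (pseudo-)inverse. The sketch below shows only this common core on a toy Markov chain; the EC and CEC algorithms additionally derive and optimize the auxiliary matrices the abstracts discuss, which is omitted here, and the sequence choices are ad hoc.

```python
import numpy as np

def prob(s, pat):
    # relative frequency of a substring pattern in a long sample path
    hits = sum(s[i:i + len(pat)] == pat
               for i in range(len(s) - len(pat) + 1))
    return hits / (len(s) - len(pat) + 1)

# Toy data: a 2-state Markov chain over the symbols "a" and "b".
rng = np.random.default_rng(3)
stay = {"a": 0.8, "b": 0.7}
sym, out = "a", []
for _ in range(100000):
    out.append(sym)
    if rng.random() > stay[sym]:
        sym = "b" if sym == "a" else "a"
s = "".join(out)

# Learning equation with single-symbol indicative sequences b_j and
# characteristic sequences c_i:
#   V[i, j] ~ P(b_j c_i),  W_a[i, j] ~ P(b_j a c_i),  tau_a = W_a V^+.
# Asymptotically, tau_a is a similarity transform of the true operator.
indic = chars = ["a", "b"]
V = np.array([[prob(s, b + c) for b in indic] for c in chars])
tau = {}
for a in "ab":
    W_a = np.array([[prob(s, b + a + c) for b in indic] for c in chars])
    tau[a] = W_a @ np.linalg.pinv(V)
print(tau["a"], tau["b"], sep="\n")
```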
10.
Neural Comput ; 21(9): 2687-712, 2009 Sep.
Article in English | MEDLINE | ID: mdl-19548805

ABSTRACT

Observable operator models (OOMs) generalize hidden Markov models (HMMs) and can be represented in a structurally similar matrix formalism. The mathematical theory of OOMs gives rise to a family of constructive, fast, and asymptotically correct learning algorithms, whose statistical efficiency, however, depends crucially on the optimization of two auxiliary transformation matrices. This optimization task is nontrivial; indeed, even formulating computationally accessible optimality criteria is not easy. Here we derive how a bound on the modeling error of an OOM can be expressed in terms of these auxiliary matrices, which in turn yields an optimization procedure for them and finally affords a complete learning algorithm: the error-controlling algorithm. Models learned by this algorithm have an assured error bound on their parameters. The performance of this algorithm is illuminated by comparisons with two types of HMMs trained by the expectation-maximization algorithm, with the efficiency-sharpening algorithm, another recently developed learning algorithm for OOMs, and with predictive state representations (Littman & Sutton, 2001) trained by methods representing the state of the art in that field.


Subject(s)
Algorithms , Association Learning/physiology , Models, Statistical , Neural Networks, Computer , Humans , Markov Chains
11.
Neural Netw ; 20(3): 335-52, 2007 Apr.
Article in English | MEDLINE | ID: mdl-17517495

ABSTRACT

Standard echo state networks (ESNs) are built from simple additive units with a sigmoid activation function. Here we investigate ESNs whose reservoir units are leaky integrator units. Units of this type have individual state dynamics, which can be exploited in various ways to adapt the network to the temporal characteristics of a learning task. We present stability conditions, introduce and investigate a stochastic gradient descent method for the optimization of the global learning parameters (input and output feedback scalings, leaking rate, spectral radius), and demonstrate the usefulness of leaky-integrator ESNs for (i) learning very slow dynamic systems and replaying the learned system at different speeds, (ii) classifying relatively slow and noisy time series (the Japanese Vowel dataset, on which we obtain a zero test error rate), and (iii) recognizing strongly time-warped dynamic patterns.


Subject(s)
Feedback , Models, Neurological , Nerve Net/physiology , Neural Networks, Computer , Neurons/physiology , Animals , Humans , Learning/physiology , Nonlinear Dynamics , Recognition, Psychology , Stochastic Processes
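
The leaky-integrator reservoir unit studied here admits a compact statement; the parameterization below is one common form (papers differ in how gain, leaking rate, and time constant are factored), so treat the details as an assumption rather than the paper's exact equations.

```python
import numpy as np

rng = np.random.default_rng(4)
n, a = 200, 0.2            # reservoir size; leaking rate (slow units)
W = rng.standard_normal((n, n))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))     # spectral radius 0.9
W_in = rng.uniform(-1, 1, n)

def step(x, u):
    # Leaky-integrator update in one common form:
    #   x(n+1) = (1 - a) x(n) + a tanh(W x(n) + W_in u(n+1));
    # a = 1 recovers the standard sigmoid-unit ESN.
    return (1 - a) * x + a * np.tanh(W @ x + W_in * u)

x = np.zeros(n)
for u in np.sin(np.linspace(0, 20, 1000)):    # a slow input signal
    x = step(x, u)
```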
12.
Science ; 304(5667): 78-80, 2004 Apr 02.
Article in English | MEDLINE | ID: mdl-15064413

ABSTRACT

We present a method for learning nonlinear systems, echo state networks (ESNs). ESNs employ artificial recurrent neural networks in a way that has recently been proposed independently as a learning mechanism in biological brains. The learning method is computationally efficient and easy to use. On a benchmark task of predicting a chaotic time series, accuracy is improved by a factor of 2400 over previous techniques. The potential for engineering applications is illustrated by equalizing a communication channel, where the signal error rate is improved by two orders of magnitude.
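
The core recipe of the paper, a fixed random recurrent network with only a trained linear readout, fits in a short script. The sketch below uses ridge regression for the readout and a toy one-step prediction task; the benchmark tasks in the paper (chaotic time series prediction, channel equalization) are set up analogously. Reservoir size, scalings, and the regularization constant are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
n, rho, reg = 300, 0.9, 1e-6
W = rng.standard_normal((n, n))
W *= rho / max(abs(np.linalg.eigvals(W)))   # fix the spectral radius
W_in = rng.uniform(-0.5, 0.5, n)

t = np.arange(4000)
u = np.sin(0.2 * t) + 0.05 * rng.standard_normal(len(t))

x, states = np.zeros(n), []
for u_t in u[:-1]:                 # drive with u(t), predict u(t+1)
    x = np.tanh(W @ x + W_in * u_t)
    states.append(x)
X = np.array(states[200:])         # discard initial washout
y = u[201:]                        # teacher: the next input value

# Ridge-regression readout: the only trained parameters of the ESN.
W_out = np.linalg.solve(X.T @ X + reg * np.eye(n), X.T @ y)
pred = X @ W_out
print("NRMSE:", np.sqrt(np.mean((pred - y) ** 2)) / np.std(y))
```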
