Results 1 - 20 of 25
1.
Eur Heart J Digit Health ; 5(3): 384-388, 2024 May.
Article in English | MEDLINE | ID: mdl-38774363

ABSTRACT

Aims: European and American clinical guidelines for implantable cardioverter defibrillators are insufficiently accurate for ventricular arrhythmia (VA) risk stratification, leading to significant morbidity and mortality. Artificial intelligence offers a novel risk stratification lens through which VA capability can be determined from the electrocardiogram (ECG) in normal cardiac rhythm. The aim of this study was to develop and test a deep neural network for VA risk stratification using routinely collected ambulatory ECGs. Methods and results: A multicentre case-control study was undertaken to assess VA-ResNet-50, our open-source ResNet-50-based deep neural network. VA-ResNet-50 was designed to read pyramid samples of three-lead 24 h ambulatory ECGs to decide whether a heart is capable of VA based on the ECG alone. Consecutive adults with VA from the East Midlands, UK, who had ambulatory ECGs as part of their NHS care between 2014 and 2022 were recruited and compared with all-comer ambulatory ECGs without VA. Of 270 patients, 159 heterogeneous patients had a composite VA outcome. The mean time difference between the ECG and VA was 1.6 years (one-third of ambulatory ECGs were recorded before the VA). The deep neural network classified ECGs for VA capability with an accuracy of 0.76 (95% confidence interval 0.66-0.87), F1 score of 0.79 (0.67-0.90), area under the receiver operating characteristic curve of 0.8 (0.67-0.91), and relative risk of 2.87 (1.41-5.81). Conclusion: Ambulatory ECGs confer risk signals for VA risk stratification when analysed using VA-ResNet-50. Pyramid sampling from the ambulatory ECGs is hypothesized to capture autonomic activity. We encourage groups to build on this open-source model. Question: Can artificial intelligence (AI) be used to predict whether a person is at risk of a lethal heart rhythm, based solely on an electrocardiogram (an electrical heart tracing)? Findings: In a study of 270 adults (of whom 159 had lethal arrhythmias), the AI was correct in 4 out of every 5 cases. If the AI said a person was at risk, the risk of a lethal event was three times higher than in normal adults. Meaning: In this study, the AI performed better than current medical guidelines. The AI was able to accurately determine the risk of lethal arrhythmia from standard heart tracings in 80% of cases more than a year in advance, a conceptual shift in what an AI model can see and predict. This method shows promise for better allocating the implantable "shock box" pacemakers (implantable cardioverter defibrillators) that save lives.
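A minimal sketch of the kind of model the abstract describes, written in PyTorch: a small 1D residual network that classifies multi-lead ambulatory ECG excerpts for VA capability. This is not the published VA-ResNet-50; the class names, layer sizes, segment length, and sampling rate below are illustrative assumptions.

# Small 1D residual classifier for 3-lead ambulatory ECG excerpts (illustrative only;
# not the published VA-ResNet-50 architecture or its training set-up).
import torch
import torch.nn as nn

class ResBlock1d(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv1d(channels, channels, kernel_size=7, padding=3)
        self.bn1 = nn.BatchNorm1d(channels)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size=7, padding=3)
        self.bn2 = nn.BatchNorm1d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        y = self.relu(self.bn1(self.conv1(x)))
        y = self.bn2(self.conv2(y))
        return self.relu(x + y)                    # identity shortcut

class ECGVAClassifier(nn.Module):
    def __init__(self, leads=3, channels=64, blocks=4):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv1d(leads, channels, kernel_size=15, stride=2, padding=7),
            nn.BatchNorm1d(channels), nn.ReLU())
        self.body = nn.Sequential(*[ResBlock1d(channels) for _ in range(blocks)])
        self.head = nn.Sequential(nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                                  nn.Linear(channels, 1))    # logit: VA-capable vs. not

    def forward(self, x):                          # x: (batch, leads, samples)
        return self.head(self.body(self.stem(x)))

model = ECGVAClassifier()
segment = torch.randn(8, 3, 5000)                  # e.g. 10-second 3-lead excerpts at 500 Hz
logits = model(segment)                            # train with nn.BCEWithLogitsLoss on VA labels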

2.
Article in English | MEDLINE | ID: mdl-38048242

ABSTRACT

Mammalian brains operate in very special surroundings: to survive, they have to react quickly and effectively to the pool of stimulus patterns previously recognized as dangerous. Many learning tasks encountered by living organisms involve a specific set-up centered around a relatively small set of patterns presented in a particular environment. For example, at a party, people recognize friends immediately, without deep analysis, just by seeing a fragment of their clothes. This set-up with reduced "ontology" is referred to as a "situation." Situations are usually local in space and time. In this work, we propose that neuron-astrocyte networks provide a network topology that is effectively adapted to accommodate situation-based memory. To illustrate this, we numerically simulate and analyze a well-established model of a neuron-astrocyte network subjected to stimuli conforming to the situation-driven environment. Three pools of stimulus patterns are considered: external patterns, patterns from the situation associative pool regularly presented to and learned by the network, and patterns already learned and remembered by astrocytes. Patterns from the external world are added to and removed from the associative pool. We then show that astrocytes are structurally necessary for effective function in such a learning and testing set-up. To demonstrate this, we present a novel neuromorphic computational model for short-term memory implemented by a two-net spiking neural-astrocytic network. Our results show that such a system, tested on synthesized data, with selective astrocyte-induced modulation of neuronal activity provides an enhancement of retrieval quality in comparison to standard spiking neural networks trained via Hebbian plasticity only. We argue that the proposed set-up may offer a new way to analyze, model, and understand neuromorphic artificial intelligence systems.
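A toy illustration of situation-based memory with astrocyte-like modulation, in NumPy. It is not the paper's two-net spiking neuron-astrocyte model: the Hebbian recall network, the pattern sizes, and the rule that astrocytes strengthen couplings supporting the current situation pool are all invented for illustration.

# Toy sketch: Hebbian pattern recall with and without an astrocyte-like strengthening of
# couplings that support the current "situation" pool. Compare the two printed overlaps.
import numpy as np

rng = np.random.default_rng(0)
N, P = 300, 45                                      # neurons, stored patterns (high memory load)
patterns = rng.choice([-1, 1], size=(P, N)).astype(float)
W = (patterns.T @ patterns) / N                     # Hebbian couplings
np.fill_diagonal(W, 0.0)

# "Situation": astrocytes transiently strengthen couplings supporting a small pattern pool.
situation = patterns[:3]
W_situation = W + sum(np.outer(p, p) for p in situation) / N
np.fill_diagonal(W_situation, 0.0)

def recall(weights, cue, steps=30):
    s = cue.copy()
    for _ in range(steps):
        s = np.where(weights @ s >= 0, 1.0, -1.0)
    return s

target = patterns[0]                                # a pattern belonging to the situation pool
cue = target.copy()
flip = rng.choice(N, size=75, replace=False)        # 25% of bits corrupted in the cue
cue[flip] *= -1

print("overlap, plain Hebbian network      :", recall(W, cue) @ target / N)
print("overlap, astrocyte-modulated network:", recall(W_situation, cue) @ target / N)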

3.
Entropy (Basel) ; 25(3)2023 Feb 28.
Article in English | MEDLINE | ID: mdl-36981320

ABSTRACT

Myocardial infarction (MI) occurs when an artery supplying blood to the heart is abruptly occluded. The "gold standard" method for imaging MI is cardiovascular magnetic resonance imaging (MRI) with intravenously administered gadolinium-based contrast (with damaged areas apparent as late gadolinium enhancement [LGE]). However, no "gold standard" fully automated method for the quantification of MI exists. In this work, we propose an end-to-end fully automatic system (MyI-Net) for the detection and quantification of MI in MRI images. It has the potential to reduce uncertainty due to technical variability across labs and the inherent problems of data and labels. Our system consists of four processing stages designed to maintain the flow of information across scales. First, features from raw MRI images are generated using feature extractors built on ResNet and MobileNet architectures. This is followed by atrous spatial pyramid pooling (ASPP) to produce spatial information at different scales and preserve more image context. High-level features from ASPP and initial low-level features are concatenated at the third stage and then passed to the fourth stage, where spatial information is recovered via up-sampling to produce the final segmentation into four classes: (i) background, (ii) heart muscle, (iii) blood, and (iv) LGE areas. Our experiments show that the model named MI-ResNet50-AC provides the best global accuracy (97.38%), mean accuracy (86.01%), weighted intersection over union (IoU) of 96.47%, and boundary F1 (BF) score of 64.46% for global segmentation. However, in detecting only LGE tissue, a smaller model, MI-ResNet18-AC, exhibited higher accuracy (74.41%) than MI-ResNet50-AC (64.29%). The new models were compared with state-of-the-art models and manual quantification. Our models demonstrated favorable performance in global segmentation and LGE detection relative to the state of the art, including a four-fold better performance in matching LGE pixels to contours produced by clinicians.
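A sketch of an atrous spatial pyramid pooling block of the kind the pipeline uses, written in PyTorch; the channel counts and dilation rates follow common DeepLab-style defaults and are not taken from MyI-Net.

# ASPP block sketch: parallel atrous convolutions at several rates plus a global-context
# branch, concatenated and projected. Channel sizes and rates are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ASPP(nn.Module):
    def __init__(self, in_ch, out_ch=256, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch,
                          kernel_size=1 if r == 1 else 3,
                          padding=0 if r == 1 else r,
                          dilation=r, bias=False),
                nn.BatchNorm2d(out_ch), nn.ReLU())
            for r in rates])
        self.image_pool = nn.Sequential(            # global-context branch
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(in_ch, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch), nn.ReLU())
        self.project = nn.Sequential(
            nn.Conv2d(out_ch * (len(rates) + 1), out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch), nn.ReLU())

    def forward(self, x):
        feats = [branch(x) for branch in self.branches]
        g = self.image_pool(x)
        g = F.interpolate(g, size=x.shape[-2:], mode="bilinear", align_corners=False)
        return self.project(torch.cat(feats + [g], dim=1))

# Encoder features (e.g. from a ResNet or MobileNet backbone) pass through ASPP, are fused
# with low-level features, and are up-sampled to a 4-class map: background, heart muscle,
# blood, and LGE. eval() is used here only so BatchNorm accepts a single example.
aspp = ASPP(in_ch=2048).eval()
with torch.no_grad():
    out = aspp(torch.randn(1, 2048, 16, 16))        # -> torch.Size([1, 256, 16, 16])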

4.
Europace ; 24(11): 1777-1787, 2022 11 22.
Article in English | MEDLINE | ID: mdl-36201237

ABSTRACT

AIMS: Most patients who receive implantable cardioverter defibrillators (ICDs) for primary prevention do not receive therapy during the lifespan of the ICD, whilst up to 50% of sudden cardiac deaths (SCD) occur in individuals who are considered low risk by conventional criteria. Machine learning offers a novel approach to risk stratification for ICD assignment. METHODS AND RESULTS: A systematic search was performed in MEDLINE, Embase, Emcare, CINAHL, Cochrane Library, OpenGrey, MedrXiv, arXiv, Scopus, and Web of Science. Studies modelling SCD risk prediction within days to years using machine learning were eligible for inclusion. Transparency and quality of reporting (TRIPOD) and risk of bias (PROBAST) were assessed. A total of 4356 studies were screened, with 11 meeting the inclusion criteria; heterogeneous populations, methods, and outcome measures prevented meta-analysis. Study size ranged from 122 to 124 097 participants. Input data sources included demographic, clinical, electrocardiogram, electrophysiological, imaging, and genetic data, ranging from 4 to 72 variables per model. The most common outcome metric reported was the area under the receiver operating characteristic curve (n = 7), ranging between 0.71 and 0.96. In six studies comparing machine learning models and regression, machine learning improved performance in five. No studies adhered to a reporting standard. Five of the papers were at high risk of bias. CONCLUSION: Machine learning for SCD prediction has been under-applied and incorrectly implemented but is ripe for future investigation. It may have some incremental utility in predicting SCD over traditional models. The development of reporting standards for machine learning is required to improve the quality of evidence reporting in the field.


Subject(s)
Death, Sudden, Cardiac , Defibrillators, Implantable , Humans , Death, Sudden, Cardiac/epidemiology , Death, Sudden, Cardiac/etiology , Death, Sudden, Cardiac/prevention & control , Electrocardiography , Machine Learning
5.
Entropy (Basel) ; 23(10)2021 Oct 19.
Article in English | MEDLINE | ID: mdl-34682092

ABSTRACT

Dealing with uncertainty in applications of machine learning to real-life data critically depends on the knowledge of intrinsic dimensionality (ID). A number of methods have been suggested for the purpose of estimating ID, but no standard package to easily apply them one by one or all at once has been implemented in Python. This technical note introduces scikit-dimension, an open-source Python package for intrinsic dimension estimation. The scikit-dimension package provides a uniform implementation of most of the known ID estimators based on the scikit-learn application programming interface to evaluate the global and local intrinsic dimension, as well as generators of synthetic toy and benchmark datasets widespread in the literature. The package is developed with tools assessing the code quality, coverage, unit testing and continuous integration. We briefly describe the package and demonstrate its use in a large-scale (more than 500 datasets) benchmarking of methods for ID estimation for real-life and synthetic data.
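A short usage sketch for the package, assuming the estimator class names and the fitted dimension_ attribute follow the scikit-learn-style API described above (skdim.id.TwoNN and skdim.id.lPCA are used as examples).

# Usage sketch for scikit-dimension (pip install scikit-dimension); estimator names and the
# .dimension_ attribute are assumed to match the released, scikit-learn-style API.
import numpy as np
import skdim

rng = np.random.default_rng(0)
# 1000 points lying on a 5-dimensional linear subspace embedded in 50 ambient dimensions
X = rng.normal(size=(1000, 5)) @ rng.normal(size=(5, 50))

id_twonn = skdim.id.TwoNN().fit(X).dimension_       # global ID, two-nearest-neighbour method
id_pca = skdim.id.lPCA().fit(X).dimension_          # global ID, local-PCA-based method
print("TwoNN estimate:", id_twonn, "  lPCA estimate:", id_pca)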

6.
Entropy (Basel) ; 23(9)2021 Aug 31.
Article in English | MEDLINE | ID: mdl-34573765

ABSTRACT

In this article, we consider a version of the challenging problem of learning from datasets whose size is too limited to allow generalisation beyond the training set. To address the challenge, we propose a transfer learning approach whereby the model is first trained on a synthetic dataset replicating features of the original objects. In this study, the objects were smartphone photographs of near-complete Roman terra sigillata pottery vessels from the collection of the Museum of London. Taking the replicated features from published profile drawings of pottery forms allowed the integration of expert knowledge into the process through our synthetic data generator. After this initial training, the model was fine-tuned with data from photographs of real vessels. We show, through exhaustive experiments across several popular deep learning architectures, different test priors, and considering the impact of the photograph viewpoint and excessive damage to the vessels, that the proposed hybrid approach enables the creation of classifiers with appropriate generalisation performance. This performance is significantly better than that of classifiers trained exclusively on the original data, which shows the promise of the approach for alleviating the fundamental issue of learning from small datasets.
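A sketch of the two-stage recipe in PyTorch: pre-train a standard CNN on synthetic images, then fine-tune on the small real set. The data loaders, class count, and hyper-parameters are placeholders, not those of the study.

# Two-stage training sketch: pre-train on synthetic renderings, fine-tune on real photos.
# All data, class counts and hyper-parameters below are placeholders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision import models

n_forms = 10                                        # hypothetical number of vessel forms

def dummy_loader(n_images):                         # stand-in for real image DataLoaders
    x = torch.randn(n_images, 3, 224, 224)
    y = torch.randint(0, n_forms, (n_images,))
    return DataLoader(TensorDataset(x, y), batch_size=8)

synthetic_loader, real_loader = dummy_loader(64), dummy_loader(32)

def train(model, loader, epochs, lr):
    opt = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in loader:
            opt.zero_grad()
            loss_fn(model(images), labels).backward()
            opt.step()

model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, n_forms)

# Stage 1: learn shape features from synthetic images generated from profile drawings.
train(model, synthetic_loader, epochs=2, lr=1e-3)

# Stage 2: fine-tune on the few real photographs; here only the classifier head is updated
# (full fine-tuning at a lower learning rate is another common choice).
for p in model.parameters():
    p.requires_grad = False
for p in model.fc.parameters():
    p.requires_grad = True
train(model, real_loader, epochs=2, lr=1e-4)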

7.
Entropy (Basel) ; 23(8)2021 Aug 22.
Article in English | MEDLINE | ID: mdl-34441230

ABSTRACT

This work is driven by a practical question: how to correct Artificial Intelligence (AI) errors. These corrections should be quick and non-iterative. To solve this problem without modification of a legacy AI system, we propose special 'external' devices, correctors. Elementary correctors consist of two parts: a classifier that separates situations with a high risk of error from situations in which the legacy AI system works well, and a new decision that should be recommended for situations with potential errors. Input signals for the correctors can be the inputs of the legacy AI system, its internal signals, and its outputs. If the intrinsic dimensionality of the data is high enough, then the classifiers for correction of a small number of errors can be very simple. According to the blessing of dimensionality effects, even simple and robust Fisher's discriminants can be used for one-shot learning of AI correctors. Stochastic separation theorems provide the mathematical basis for this one-shot learning. However, as the number of correctors needed grows, the cluster structure of the data becomes important and a new family of stochastic separation theorems is required. We reject the classical hypothesis of the regularity of the data distribution and assume that the data can have a rich fine-grained structure with many clusters and corresponding peaks in the probability density. New stochastic separation theorems for data with fine-grained structure are formulated and proved. On the basis of these theorems, multi-correctors for granular data are proposed. The advantages of the multi-corrector technology were demonstrated by examples of correcting errors and learning new classes of objects by a deep convolutional neural network on the CIFAR-10 dataset. The key problems of non-classical high-dimensional data analysis are reviewed together with the basic preprocessing steps, including the correlation transformation, supervised Principal Component Analysis (PCA), semi-supervised PCA, transfer component analysis, and new domain adaptation PCA.
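A minimal NumPy sketch of an elementary corrector of this kind, assuming access to the legacy system's feature vectors; the synthetic features, threshold rule, and substitute decision are illustrative assumptions, not the paper's construction.

# Minimal 'elementary corrector' sketch: a linear (Fisher-type) discriminant flags inputs
# resembling a small cluster of known errors and overrides the legacy decision for them.
import numpy as np

rng = np.random.default_rng(1)
d = 200                                             # dimension of the legacy system's features
ok_feats = rng.normal(size=(2000, d))               # features where the legacy system works well
err_feats = rng.normal(size=(10, d)) + 0.5          # features of a small cluster of known errors

mean_ok = ok_feats.mean(axis=0)
cov_ok = np.cov(ok_feats, rowvar=False) + 1e-3 * np.eye(d)
w = np.linalg.solve(cov_ok, err_feats.mean(axis=0) - mean_ok)   # Fisher-type direction
theta = 0.5 * (w @ (err_feats.mean(axis=0) - mean_ok))          # halfway threshold

def corrected_decision(x, legacy_label, substitute_label):
    """Return the legacy label unless the corrector flags x as a likely error."""
    return substitute_label if w @ (x - mean_ok) > theta else legacy_label

flags_on_ok = np.mean((ok_feats - mean_ok) @ w > theta)
flags_on_err = np.mean((err_feats - mean_ok) @ w > theta)
print("fraction of normal inputs flagged:", flags_on_ok)
print("fraction of known errors flagged :", flags_on_err)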

8.
Neural Netw ; 138: 33-56, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33621897

ABSTRACT

The phenomenon of stochastic separability was revealed and used in machine learning to correct errors of Artificial Intelligence (AI) systems and to analyze AI instabilities. In high-dimensional datasets, under broad assumptions, each point can be separated from the rest of the set by a simple and robust Fisher discriminant (i.e., it is Fisher separable). Errors or clusters of errors can be separated from the rest of the data. The ability to correct an AI system also opens up the possibility of an attack on it, and the high dimensionality induces vulnerabilities caused by the same stochastic separability that holds the keys to understanding the fundamentals of robustness and adaptivity in high-dimensional data-driven AI. To manage errors and analyze vulnerabilities, stochastic separation theorems should evaluate the probability that the dataset will be Fisher separable for a given dimensionality and a given class of distributions. Explicit and optimal estimates of these separation probabilities are required, and this problem is solved in the present work. General stochastic separation theorems with optimal probability estimates are obtained for important classes of distributions: log-concave distributions, their convex combinations, and product distributions. The standard i.i.d. assumption was significantly relaxed. These theorems and estimates can be used both for correction of high-dimensional data-driven AI systems and for analysis of their vulnerabilities. A third area of application is the emergence of memories in ensembles of neurons, the phenomena of grandmother cells and sparse coding in the brain, and the explanation of the unexpected effectiveness of small neural ensembles in the high-dimensional brain.
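A Monte Carlo sketch, in NumPy, of the quantity the theorems address: the probability that every point of an i.i.d. sample is Fisher-separable from all the others, here with the criterion ⟨x, y⟩ < α⟨x, x⟩ and α = 1/2. The paper derives analytic estimates; this empirical check is only illustrative.

# Empirical frequency with which an i.i.d. standard-normal sample is fully Fisher-separable,
# as a function of dimension. Illustrative only; not the paper's analytic bounds.
import numpy as np

rng = np.random.default_rng(0)
alpha, n_points, n_trials = 0.5, 1000, 20

def fraction_fully_separable(dim):
    hits = 0
    for _ in range(n_trials):
        X = rng.normal(size=(n_points, dim))        # i.i.d. sample
        X = X - X.mean(axis=0)                      # centre the data
        G = X @ X.T                                 # Gram matrix of inner products
        diag = np.diag(G)
        off = G - np.diag(diag)                     # zero out the <x, x> terms
        separable = np.all(off < alpha * diag[:, None], axis=1)
        hits += np.all(separable)                   # every point separable from the rest?
    return hits / n_trials

for dim in (5, 20, 50, 100, 200):
    print(dim, fraction_fully_separable(dim))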


Subject(s)
Machine Learning , Stochastic Processes
9.
Entropy (Basel) ; 22(1)2020 Jan 09.
Article in English | MEDLINE | ID: mdl-33285855

ABSTRACT

High-dimensional data and high-dimensional representations of reality are inherent features of modern Artificial Intelligence systems and applications of machine learning. The well-known phenomenon of the "curse of dimensionality" states that many problems become exponentially difficult in high dimensions. Recently, the other side of the coin, the "blessing of dimensionality", has attracted much attention. It turns out that generic high-dimensional datasets exhibit fairly simple geometric properties. Thus, there is a fundamental tradeoff between complexity and simplicity in high-dimensional spaces. Here we present a brief explanatory review of recent ideas, results, and hypotheses about the blessing of dimensionality and related simplifying effects relevant to machine learning and neuroscience.
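One of these simple geometric properties can be seen in a few lines of NumPy: pairwise distances between random points concentrate around a typical value as the dimension grows, so their relative spread shrinks (an illustration of the review's theme, not a result from it).

# Distance concentration: the relative spread of pairwise distances shrinks with dimension.
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
for dim in (2, 10, 100, 1000):
    X = rng.uniform(size=(500, dim))                # 500 random points in the unit cube
    d = pdist(X)                                    # all pairwise Euclidean distances
    print(f"dim={dim:5d}  relative spread of distances = {d.std() / d.mean():.3f}")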

10.
Sci Rep ; 10(1): 16783, 2020 Oct 08.
Article in English | MEDLINE | ID: mdl-33033334

ABSTRACT

We report a novel state of active matter, the swirlonic state. It is composed of swirlons, formed by groups of active particles orbiting their common center of mass. These quasi-particles demonstrate surprising behavior: in response to an external load, they move with a constant velocity proportional to the applied force, just as objects do in viscous media. The swirlons attract each other and coalesce, forming a larger, joint swirlon. The coalescence is an extremely slow, decelerating process, resulting in a rarefied state of immobile quasi-particles. In addition to the swirlonic state, we observe gaseous, liquid, and solid states, depending on the inter-particle and self-driving forces. Interestingly, in contrast to molecular systems, the liquid and gaseous states of active matter do not coexist. We explain this unusual phenomenon by the lack of fast particles in active matter. We perform extensive numerical simulations and theoretical analysis. The predictions of the theory agree qualitatively and quantitatively with the simulation results.
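A toy sketch, in NumPy, of the general class of simulation involved: overdamped self-propelled particles with pairwise attraction in two dimensions. The force law and parameters are generic placeholders, not the paper's model, so the sketch need not reproduce the swirlonic state.

# Overdamped self-propelled particles with pairwise attraction (generic toy, not the paper's
# force laws); headings undergo rotational diffusion, positions follow drive plus attraction.
import numpy as np

rng = np.random.default_rng(0)
n, dt, steps = 100, 0.01, 2000
pos = rng.uniform(-5, 5, size=(n, 2))
theta = rng.uniform(0, 2 * np.pi, n)                # headings of the self-driving force
v0, k_attr, noise = 1.0, 0.05, 0.5

for _ in range(steps):
    diff = pos[None, :, :] - pos[:, None, :]        # displacement vectors between particles
    dist = np.linalg.norm(diff, axis=-1) + np.eye(n)
    attract = k_attr * (diff / dist[..., None]).sum(axis=1) / n
    drive = v0 * np.stack([np.cos(theta), np.sin(theta)], axis=1)
    pos += dt * (drive + attract)
    theta += np.sqrt(dt) * noise * rng.normal(size=n)   # rotational diffusion of headings

# Crude diagnostic: mean distance from the centre of mass (compact clustering -> small value)
print("mean radius about centre of mass:",
      np.linalg.norm(pos - pos.mean(axis=0), axis=1).mean())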

11.
Sci Rep ; 10(1): 7889, 2020 05 12.
Article in English | MEDLINE | ID: mdl-32398873

ABSTRACT

The widespread consensus holds that the emergence of abstract concepts in the human brain, such as a "table", requires complex, perfectly orchestrated interaction of myriads of neurons. However, this is not what converging experimental evidence suggests. Single neurons, the so-called concept cells (CCs), may be responsible for complex tasks performed by humans. This finding, with deep implications for neuroscience and the theory of neural networks, has had no solid theoretical grounds so far. Our recent advances in the stochastic separability of high-dimensional data have provided the basis to validate the existence of CCs. Here, starting from a few first principles, we lay out biophysical foundations showing that CCs are not only possible but highly likely in brain structures such as the hippocampus. Three fundamental conditions, fulfilled by the human brain, ensure high cognitive functionality of single cells: a hierarchical feedforward organization of large laminar neuronal strata, a suprathreshold number of synaptic entries to principal neurons in the strata, and a magnitude of synaptic plasticity adequate for each neuronal stratum. We illustrate the approach with a simple example of acquiring "musical memory" and show how the concept of musical notes can emerge.


Subject(s)
Algorithms , Models, Neurological , Neuronal Plasticity/physiology , Neurons/physiology , Animals , Brain/cytology , Brain/physiology , Hippocampus/cytology , Hippocampus/physiology , Humans , Memory/physiology , Neurosciences/methods , Neurosciences/trends
13.
PLoS One ; 14(6): e0218304, 2019.
Article in English | MEDLINE | ID: mdl-31246978

ABSTRACT

Living neuronal networks in dissociated neuronal cultures are widely known for their ability to generate highly robust spatiotemporal activity patterns in various experimental conditions. Such patterns are often treated as neuronal avalanches that satisfy the power scaling law and thereby exemplify self-organized criticality in living systems. A crucial question is how these patterns can be explained and modeled in a way that is biologically meaningful, mathematically tractable, and yet broad enough to account for neuronal heterogeneity and complexity. Here we derive and analyse a simple network model that may constitute a response to this question. Our derivations are based on a few basic phenomenological observations concerning the input-output behavior of an isolated neuron. A distinctive feature of the model is that, at the simplest level of description, it comprises only two variables, the network activity variable and an exogenous variable corresponding to the energy needed to sustain the activity, and a few parameters such as network connectivity and efficacy of signal transmission. The efficacy of signal transmission is modulated by the phenomenological energy variable. Strikingly, this simple model is already capable of explaining the emergence of network spikes and bursts in developing neuronal cultures. The model behavior and predictions are consistent with published experimental evidence on cultured neurons. At the larger, cellular automata scale, introduction of the energy-dependent regulatory mechanism results in overall model behavior that can be characterized as balancing on the edge of the network percolation transition. Network activity in this state shows population bursts satisfying the scaling avalanche conditions. This network state is self-sustainable and represents an energetic balance between global network-wide processes and the spontaneous activity of individual elements.


Subject(s)
Models, Neurological , Nerve Net/physiology , Neurons/physiology , Action Potentials/physiology , Cells, Cultured , Computer Simulation
14.
Bull Math Biol ; 81(11): 4856-4888, 2019 11.
Article in English | MEDLINE | ID: mdl-29556797

ABSTRACT

Codifying memories is one of the fundamental problems of modern neuroscience. The functional mechanisms behind this phenomenon remain largely unknown. Experimental evidence suggests that some memory functions are performed by stratified brain structures such as the hippocampus. In this particular case, single neurons in the CA1 region receive a highly multidimensional input from the CA3 area, which is a hub for information processing. We thus assess the implications of the abundance of neuronal signalling routes converging onto single cells for information processing. We show that single neurons can selectively detect and learn arbitrary information items, given that they operate in high dimensions. The argument is based on stochastic separation theorems and the concentration of measure phenomena. We demonstrate that a simple enough functional neuronal model is capable of explaining: (i) the extreme selectivity of single neurons to the information content, (ii) simultaneous separation of several uncorrelated stimuli or informational items from a large set, and (iii) dynamic learning of new items by associating them with already "known" ones. These results constitute a basis for the organization of complex memories in ensembles of single neurons. Moreover, they show that no a priori assumptions on the structural organization of neuronal ensembles are necessary to explain basic concepts of static and dynamic memories.


Subject(s)
Brain/cytology , Brain/physiology , Learning/physiology , Memory/physiology , Models, Neurological , Neurons/physiology , Animals , Association Learning/physiology , CA1 Region, Hippocampal/cytology , CA1 Region, Hippocampal/physiology , CA3 Region, Hippocampal/cytology , CA3 Region, Hippocampal/physiology , Computer Simulation , Humans , Machine Learning , Mathematical Concepts , Neural Networks, Computer , Neuronal Plasticity/physiology , Photic Stimulation , Pyramidal Cells/cytology , Pyramidal Cells/physiology , Stochastic Processes
15.
Phys Life Rev ; 29: 55-88, 2019 07.
Article in English | MEDLINE | ID: mdl-30366739

ABSTRACT

Complexity is an indisputable, well-known, and broadly accepted feature of the brain. Despite the apparently obvious and widespread consensus on brain complexity, sprouts of the single-neuron revolution emerged in neuroscience in the 1970s. They brought many unexpected discoveries, including grandmother or concept cells and sparse coding of information in the brain. In machine learning, the famous curse of dimensionality long seemed to be an unsolvable problem. Nevertheless, the idea of the blessing of dimensionality has gradually become more and more popular. Ensembles of non-interacting or weakly interacting simple units prove to be an effective tool for solving essentially multidimensional and apparently incomprehensible problems. This approach is especially useful for one-shot (non-iterative) correction of errors in large legacy artificial intelligence systems, when complete re-training is impossible or too expensive. These simplicity revolutions in the era of complexity have deep fundamental reasons grounded in the geometry of multidimensional data spaces. To explore and understand these reasons, we revisit the background ideas of statistical physics. In the course of the 20th century, these were developed into the concentration of measure theory. The Gibbs equivalence of ensembles, with further generalizations, shows that data in high-dimensional spaces are concentrated near shells of smaller dimension. New stochastic separation theorems reveal the fine structure of the data clouds. We review and analyse biological, physical, and mathematical problems at the core of the fundamental question: how can the high-dimensional brain organise reliable and fast learning in a high-dimensional world of data using simple tools? To meet this challenge, we outline and set up a framework based on the statistical physics of data. Two critical applications are reviewed to exemplify the approach: one-shot correction of errors in intellectual systems and the emergence of static and associative memories in ensembles of single neurons. Error correctors should be simple; they should not damage the existing skills of the system; and they should allow fast non-iterative learning and correction of new mistakes without destroying the previous fixes. All these demands can be satisfied by new tools based on the concentration of measure phenomena and stochastic separation theory. We show how a simple enough functional neuronal model is capable of explaining: (i) the extreme selectivity of single neurons to the information content of high-dimensional data, (ii) simultaneous separation of several uncorrelated informational items from a large set of stimuli, and (iii) dynamic learning of new items by associating them with already "known" ones. These results constitute a basis for the organisation of complex memories in ensembles of single neurons.


Subject(s)
Brain/physiology , Models, Biological , Neurons/physiology , Algorithms , Artificial Intelligence , Computer Simulation , Humans , Machine Learning , Memory
16.
Front Neurorobot ; 12: 49, 2018.
Article in English | MEDLINE | ID: mdl-30150929

ABSTRACT

We consider the fundamental question: how a legacy "student" Artificial Intelligence (AI) system could learn from a legacy "teacher" AI system or a human expert without re-training and, most importantly, without requiring significant computational resources. Here, "learning" is broadly understood as the ability of one system to mimic the responses of the other to incoming stimulation, and vice versa. We call such learning Artificial Intelligence knowledge transfer. We show that if the internal variables of the "student" AI system have the structure of an n-dimensional topological vector space and n is sufficiently high then, with probability close to one, the required knowledge transfer can be implemented by simple cascades of linear functionals. In particular, for n sufficiently large, with probability close to one, the "student" system can successfully and non-iteratively learn k ≪ n new examples from the "teacher" (or correct the same number of mistakes) at the cost of two additional inner products. The concept is illustrated with an example of knowledge transfer from one pre-trained convolutional neural network to another.
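A NumPy sketch of the idea under simplified assumptions: the student's internal representation is a high-dimensional vector, and a single added linear functional (one inner product plus a threshold) reroutes the k teacher-labelled examples without retraining. The synthetic representations and the thresholding rule are illustrative, not the paper's construction.

# Non-iterative knowledge transfer sketch: a linear functional on the student's internal
# representation decides when to defer to the teacher's response. Data are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(2)
n_dim, k = 512, 5                                   # internal dimension, examples to transfer
background = rng.normal(size=(5000, n_dim))         # representations the student already handles
transfer = rng.normal(size=(k, n_dim)) + 0.4        # representations of the teacher's k examples

centroid = transfer.mean(axis=0) - background.mean(axis=0)
w = centroid / np.linalg.norm(centroid)             # direction of the added functional
# Threshold chosen so that no previously handled input is disturbed (existing skills preserved).
threshold = 0.5 * (transfer @ w).min() + 0.5 * (background @ w).max()

def student_with_transfer(rep, student_output, teacher_output):
    """One extra inner product decides whether to defer to the teacher's response."""
    return teacher_output if w @ rep > threshold else student_output

rerouted = np.mean(transfer @ w > threshold)        # fraction of transferred examples rerouted
disturbed = np.mean(background @ w > threshold)     # fraction of old inputs disturbed
print("transferred examples rerouted:", rerouted, "  background disturbed:", disturbed)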

17.
Phys Rev E ; 97(5-1): 052308, 2018 May.
Article in English | MEDLINE | ID: mdl-29906958

ABSTRACT

Social learning is widely observed in many species. Less experienced agents copy successful behaviors exhibited by more experienced individuals. Nevertheless, the dynamical mechanisms behind this process remain largely unknown. Here we assume that a complex behavior can be decomposed into a sequence of n motor motifs. A neural network capable of activating motor motifs in a given sequence can then drive an agent. To account for the (n-1)! possible sequences of motifs in a neural network, we employ the winnerless competition approach. We then consider a teacher-learner situation: one agent exhibits a complex movement, while another aims at mimicking the teacher's behavior. Despite the huge variety of possible motif sequences, we show that the learner, equipped with the proposed learning model, can rewire its synaptic couplings "on the fly" in no more than (n-1) learning cycles and converge exponentially to the durations of the teacher's motifs. We validate the learning model on mobile robots. Experimental results show that the learner is indeed capable of copying the teacher's behavior, composed of six motor motifs, within a few learning cycles. The reported mechanism of learning is general and can be used for replicating different functions, including, for example, sound patterns or speech.
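A minimal simulation of winnerless-competition dynamics of the generalized Lotka-Volterra (May-Leonard) type, which activate competing units in a fixed cyclic sequence, the mechanism invoked above for chaining motor motifs. The parameters are textbook values, not those of the paper's network.

# Winnerless competition: three units with cyclic asymmetric inhibition dominate one after
# another; each dominance interval plays the role of one motif's duration. Generic parameters.
import numpy as np
from scipy.integrate import solve_ivp

# Each unit is inhibited strongly (1.5) by one neighbour and weakly (0.8) by the other,
# which yields sequential switching (May-Leonard-type heteroclinic dynamics).
rho = np.array([[1.0, 1.5, 0.8],
                [0.8, 1.0, 1.5],
                [1.5, 0.8, 1.0]])

def glv(t, x):
    return x * (1.0 - rho @ x)

x0 = np.array([0.6, 0.3, 0.1])
sol = solve_ivp(glv, (0, 200), x0, max_step=0.1)

dominant = np.argmax(sol.y, axis=0)                 # index of the currently dominant unit
switches = dominant[np.where(np.diff(dominant) != 0)[0] + 1]
print("order of dominant units:", switches[:12])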


Subject(s)
Interpersonal Relations , Machine Learning , Movement , Neural Networks, Computer , Time Factors
18.
Sci Rep ; 7(1): 13158, 2017 10 13.
Article in English | MEDLINE | ID: mdl-29030608

ABSTRACT

Complex networks emerging in natural and human-made systems tend to assume small-world structure. Is there a common mechanism underlying their self-organisation? Our computational simulations show that network diffusion (traffic flow or information transfer) steers network evolution towards the emergence of complex network structures. The emergence is effectuated through adaptive rewiring: progressive adaptation of structure to use, creating short-cuts where network diffusion is intensive while annihilating underused connections. With adaptive rewiring as the engine of universal small-worldness, the overall diffusion rate tunes the system's adaptation, biasing local or global connectivity patterns. Whereas the former leads to modularity, the latter provides a preferential attachment regime. As the latter sets in, the resulting small-world structures undergo a critical shift from modular (decentralised) to centralised ones. At the transition point, network structure is hierarchical, balancing modularity and centrality, a characteristic feature found in, for instance, the human brain.
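A sketch of diffusion-driven adaptive rewiring with NetworkX and SciPy: repeatedly add a shortcut between the non-adjacent pair with the strongest heat-kernel coupling and prune the existing edge with the weakest, then inspect clustering and path length. The rewiring criterion and parameters are an illustrative variant, not the paper's exact rule.

# Adaptive rewiring sketch: heat-kernel diffusion decides where to create short-cuts and
# which underused connections to prune. Parameters and the exact criterion are illustrative.
import numpy as np
import networkx as nx
from scipy.linalg import expm

n, tau, n_rewirings = 60, 2.0, 200
G = nx.gnm_random_graph(n, 180, seed=0)

for _ in range(n_rewirings):
    L = nx.laplacian_matrix(G).toarray().astype(float)
    H = expm(-tau * L)                              # heat kernel: diffusion between node pairs
    A = nx.to_numpy_array(G)
    np.fill_diagonal(A, 1.0)
    # strongest diffusion between currently unconnected nodes -> create a shortcut
    cand = np.where(A == 0, H, -np.inf)
    i, j = np.unravel_index(np.argmax(cand), cand.shape)
    # weakest diffusion across an existing edge -> prune it as an underused connection
    used = np.where((A == 1) & (np.eye(n) == 0), H, np.inf)
    k, l = np.unravel_index(np.argmin(used), used.shape)
    G.add_edge(i, j)
    G.remove_edge(k, l)

if nx.is_connected(G):
    print("clustering:", nx.average_clustering(G),
          " path length:", nx.average_shortest_path_length(G))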

19.
Cogn Neurodyn ; 8(6): 479-97, 2014 Dec.
Article in English | MEDLINE | ID: mdl-26396647

ABSTRACT

A modular small-world topology in functional and anatomical networks of the cortex is eminently suitable as an information processing architecture. This structure was shown in model studies to arise adaptively; it emerges through rewiring of network connections according to patterns of synchrony in ongoing oscillatory neural activity. However, to improve the applicability of such models to the cortex, spatial characteristics of cortical connectivity, which were previously neglected, need to be respected. For this purpose, we consider networks endowed with a metric by embedding them into a physical space. We provide an adaptive rewiring model with a spatial distance function and a corresponding spatially local rewiring bias. The spatially constrained adaptive rewiring principle is able to steer the evolving network topology to small-world status, even more consistently so than without spatial constraints. Locally biased adaptive rewiring results in a spatial layout of the connectivity structure in which topologically segregated modules correspond to spatially segregated regions, and these regions are linked by long-range connections. The principle of locally biased adaptive rewiring may thus explain both the topological connectivity structure and the spatial distribution of connections between neuronal units in a large-scale cortical architecture.

20.
Psychol Rev ; 120(4): 798-816, 2013 Oct.
Article in English | MEDLINE | ID: mdl-24219849

ABSTRACT

Individually, visual neurons are each selective for several aspects of stimulation, such as stimulus location, frequency content, and speed. Collectively, the neurons implement the visual system's preferential sensitivity to some stimuli over others, manifested in behavioral sensitivity functions. We ask how the individual neurons are coordinated to optimize visual sensitivity. We model synaptic plasticity in a generic neural circuit and find that stochastic changes in strengths of synaptic connections entail fluctuations in parameters of neural receptive fields. The fluctuations correlate with uncertainty of sensory measurement in individual neurons: The higher the uncertainty the larger the amplitude of fluctuation. We show that this simple relationship is sufficient for the stochastic fluctuations to steer sensitivities of neurons toward a characteristic distribution, from which follows a sensitivity function observed in human psychophysics and which is predicted by a theory of optimal allocation of receptive fields. The optimal allocation arises in our simulations without supervision or feedback about system performance and independently of coupling between neurons, making the system highly adaptive and sensitive to prevailing stimulation.


Subject(s)
Neuronal Plasticity/physiology , Neurons/physiology , Stochastic Processes , Synapses/physiology , Vision, Ocular/physiology , Animals , Humans , Models, Theoretical