2.
Sci Rep ; 12(1): 22561, 2022 Dec 29.
Article in English | MEDLINE | ID: mdl-36581654

ABSTRACT

Single-molecule localization microscopy resolves objects below the diffraction limit of light via sparse, stochastic detection of target molecules. Single molecules appear as clustered detection events after image reconstruction. However, identification of clusters of localizations is often complicated by the spatial proximity of target molecules and by background noise. Clustering results of existing algorithms often depend on user-generated training data or user-selected parameters, which can lead to unintentional clustering errors. Here we propose an unbiased algorithm (FINDER) based on adaptive global parameter selection and demonstrate that it is robust to noise inclusion and target molecule density. We benchmarked FINDER against the most common density-based clustering algorithms in test scenarios based on experimental datasets. We show that FINDER keeps the number of false positive inclusions low while also maintaining a low number of false negative detections in densely populated regions.
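As a hedged illustration of the parameter sensitivity this abstract describes, the sketch below runs scikit-learn's DBSCAN (one of the common density-based algorithms FINDER is benchmarked against, not FINDER itself) on synthetic localizations. The data and the eps/min_samples values are assumptions for demonstration only.

```python
# Minimal sketch: DBSCAN on synthetic SMLM-like localizations.
# This is NOT the FINDER algorithm; it shows the kind of density-based
# clustering FINDER is compared to, with illustrative parameter values.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# Two molecule clusters plus uniform background noise (arbitrary units).
cluster_a = rng.normal(loc=(0.0, 0.0), scale=0.02, size=(150, 2))
cluster_b = rng.normal(loc=(0.3, 0.1), scale=0.02, size=(150, 2))
background = rng.uniform(low=-0.5, high=0.5, size=(100, 2))
points = np.vstack([cluster_a, cluster_b, background])

# eps and min_samples are exactly the user-selected parameters the
# abstract warns about: small changes can flip points between cluster
# membership (false negatives) and noise (false positives).
labels = DBSCAN(eps=0.05, min_samples=10).fit_predict(points)
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
print("clusters found:", n_clusters)
print("points labeled as noise:", int(np.sum(labels == -1)))
```

Rerunning the sketch with a slightly larger eps typically merges the two clusters, which is the kind of unintentional clustering error an adaptive global parameter selection aims to avoid.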


Subject(s)
Microscopy; Single Molecule Imaging; Microscopy/methods; Single Molecule Imaging/methods; Algorithms; Cluster Analysis; Nanotechnology
3.
IEEE Trans Neural Netw Learn Syst ; 33(9): 4598-4609, 2022 Sep.
Article in English | MEDLINE | ID: mdl-33651697

ABSTRACT

Reservoir computing is a popular approach to designing recurrent neural networks, owing to its training simplicity and approximation performance. The recurrent part of these networks is not trained (e.g., via gradient descent), making them appealing for analytical study by a large community of researchers with backgrounds spanning dynamical systems to neuroscience. However, even in the simple linear case, the working principle of these networks is not fully understood, and their design is usually driven by heuristics. A novel analysis of the dynamics of such networks is proposed, which allows the investigator to express the state evolution through the controllability matrix. This matrix encodes salient characteristics of the network dynamics; in particular, its rank provides an input-independent measure of the memory capacity of the network. Using the proposed approach, it is possible to compare different reservoir architectures and explain why a cyclic topology achieves the favorable results reported by practitioners.
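A minimal sketch of the controllability-matrix idea for a linear reservoir x[t+1] = A x[t] + b u[t]: build K = [b, Ab, ..., A^(n-1)b] and compare its numerical rank across architectures. The sizes, weights, and spectral radius below are illustrative assumptions, not the paper's experimental setup.

```python
# Sketch: numerical rank of the controllability matrix as an
# input-independent proxy for the memory capacity of a linear reservoir.
import numpy as np

def controllability_rank(A, b):
    """Rank of K = [b, Ab, A^2 b, ..., A^(n-1) b]."""
    n = A.shape[0]
    cols = [b]
    for _ in range(n - 1):
        cols.append(A @ cols[-1])
    K = np.column_stack(cols)
    return np.linalg.matrix_rank(K)

n = 50
rng = np.random.default_rng(1)
b = rng.normal(size=n)  # input weight vector (assumed dense)

# Cyclic (ring) topology with uniform weight, the architecture the
# abstract singles out as achieving favorable results.
A_cycle = 0.9 * np.roll(np.eye(n), 1, axis=0)
# Dense random reservoir, rescaled to the same spectral radius.
A_rand = rng.normal(size=(n, n))
A_rand *= 0.9 / max(abs(np.linalg.eigvals(A_rand)))

print("cyclic reservoir rank:", controllability_rank(A_cycle, b))
print("random reservoir rank:", controllability_rank(A_rand, b))
```

The cyclic reservoir typically attains full numerical rank, while a dense random reservoir can lose rank numerically as powers of A decay, which is one way to make the architecture comparison concrete.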


Subject(s)
Neural Networks, Computer
4.
Chaos ; 31(8): 083119, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34470256

ABSTRACT

In recent years, the artificial intelligence community has shown continued interest in research on the dynamical aspects of both training procedures and machine learning models. Among recurrent neural networks, the Reservoir Computing (RC) paradigm stands out for its conceptual simplicity and fast training scheme. Yet the guiding principles under which RC operates are only partially understood. In this work, we analyze the role played by Generalized Synchronization (GS) when training an RC to solve a generic task. In particular, we show how GS allows the reservoir to correctly encode into its dynamics the system generating the input signal. We also discuss necessary and sufficient conditions for learning to be feasible in this approach. Moreover, we explore the role that ergodicity plays in this process, showing how its presence allows the learning outcome to apply to multiple input trajectories. Finally, we show that satisfaction of GS can be measured with the mutual false nearest neighbors index, which makes the theoretical derivations directly usable by practitioners.
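The mutual false nearest neighbors index is not reproduced here; instead, the sketch below uses the simpler auxiliary-system test for generalized synchronization: drive two copies of one reservoir from different initial states with the same input and check that their states converge. All network parameters are illustrative assumptions.

```python
# Sketch: auxiliary-system test for generalized synchronization.
# If two identically driven copies of the reservoir converge from
# different initial states, the reservoir state is a function of the
# input history, which is the property GS guarantees.
import numpy as np

rng = np.random.default_rng(2)
n = 100
W = rng.normal(size=(n, n))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # spectral radius below 1
w_in = rng.normal(size=n)

def step(x, u):
    """One reservoir update with tanh nonlinearity."""
    return np.tanh(W @ x + w_in * u)

u = np.sin(0.1 * np.arange(2000))  # common driving signal
x1 = rng.normal(size=n)            # two different initial states
x2 = rng.normal(size=n)
for t in range(len(u)):
    x1, x2 = step(x1, u[t]), step(x2, u[t])

# A near-zero distance indicates generalized synchronization.
print("state distance after driving:", np.linalg.norm(x1 - x2))
```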


Subject(s)
Artificial Intelligence; Neural Networks, Computer; Machine Learning
5.
Sci Rep ; 9(1): 13887, 2019 Sep 25.
Article in English | MEDLINE | ID: mdl-31554855

ABSTRACT

Among the various architectures of Recurrent Neural Networks, Echo State Networks (ESNs) emerged thanks to their simplified and inexpensive training procedure. These networks are known to be sensitive to the setting of hyper-parameters, which critically affect their behavior. Results show that their performance is usually maximized in a narrow region of hyper-parameter space called the edge of criticality. Finding such a region requires searching hyper-parameter space carefully: configurations marginally outside it may yield networks exhibiting fully developed chaos, hence producing unreliable computations. The performance gain from optimizing hyper-parameters can be studied through the memory-nonlinearity trade-off, i.e., the fact that increasing the nonlinear behavior of the network degrades its ability to remember past inputs, and vice versa. In this paper, we propose a model of ESNs that eliminates the critical dependence on hyper-parameters, yielding networks that provably cannot enter a chaotic regime and, at the same time, exhibit nonlinear behavior in phase space together with a large memory of past inputs, comparable to that of linear networks. Our contribution is supported by experiments corroborating our theoretical findings, showing that the proposed model displays dynamics rich enough to approximate many common nonlinear systems used for benchmarking.
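The following sketch, which is not the model proposed in the paper, illustrates the hyper-parameter sensitivity the abstract describes: it scans the spectral radius of a conventional ESN and measures the divergence of two nearby trajectories, which grows sharply once the network crosses into a chaotic regime. Sizes and values are assumptions for demonstration.

```python
# Sketch: sensitivity of a conventional ESN to the spectral-radius
# hyper-parameter. Divergence of two nearby trajectories is a rough
# indicator of the transition toward chaos near the edge of criticality.
import numpy as np

rng = np.random.default_rng(3)
n = 200
W0 = rng.normal(size=(n, n)) / np.sqrt(n)
W0 /= max(abs(np.linalg.eigvals(W0)))  # normalize to spectral radius 1
w_in = rng.normal(size=n)
u = rng.normal(size=1000)              # random driving input

for rho in (0.5, 0.9, 1.1, 1.5):
    W = rho * W0
    x = np.zeros(n)
    x_pert = x + 1e-8 * rng.normal(size=n)  # tiny perturbation
    for t in range(len(u)):
        x = np.tanh(W @ x + w_in * u[t])
        x_pert = np.tanh(W @ x_pert + w_in * u[t])
    sep = np.linalg.norm(x - x_pert)
    print(f"spectral radius {rho}: final separation {sep:.2e}")
```

Below the critical value the perturbation is washed out; above it, the separation saturates at order one, which is exactly the unreliable regime the paper's proposed model provably avoids.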

6.
Biosystems ; 184: 104014, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31401080

ABSTRACT

Despite progress in statistical and machine learning techniques, simultaneous recordings from many neurons still conceal important information, and the connections characterizing the network generally remain undiscovered. Discerning the presence of direct links between neurons from data is still an open problem. We propose the use of copulas to enlarge the set of tools for detecting network structure, pursuing a research direction we started in Sacerdote et al. (2012). Here, our aim is to distinguish different types of connections on a very simple network. Our proposal consists of choosing suitable random intervals in pairs of spike trains, which determine the shapes of their copulas. We show that this approach makes it possible to detect different types of dependencies. We illustrate the features of the proposed method on synthetic data from suitably connected networks of two or three formal neurons, either directly connected or influenced by the surrounding network. We show how a smart choice of pairs of random times, together with the use of empirical copulas, allows us to discern between direct and indirect interactions.
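A hedged sketch of the general idea, not the paper's exact construction: compare rank-transformed (empirical-copula) spike counts over random windows for an independent pair of spike trains versus a pair sharing a common driving train. The rates, window lengths, and summary statistic are illustrative assumptions.

```python
# Sketch: empirical-copula pseudo-observations from spike counts in
# random windows, for independent vs. commonly driven spike trains.
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(4)

def poisson_train(rate, T):
    """Homogeneous Poisson spike train on [0, T]."""
    n = rng.poisson(rate * T)
    return np.sort(rng.uniform(0, T, size=n))

def window_counts(train, windows):
    return np.array([np.sum((train >= a) & (train < b)) for a, b in windows])

T = 1000.0
windows = [(t, t + 5.0) for t in rng.uniform(0, T - 5.0, size=500)]

# Independent pair versus a pair sharing a common (driving) train.
a, b = poisson_train(5.0, T), poisson_train(5.0, T)
common = poisson_train(3.0, T)
c = np.sort(np.concatenate([poisson_train(2.0, T), common]))
d = np.sort(np.concatenate([poisson_train(2.0, T), common]))

for name, (x, y) in {"independent": (a, b), "coupled": (c, d)}.items():
    cx, cy = window_counts(x, windows), window_counts(y, windows)
    # Pseudo-observations from ranks: samples from the empirical copula.
    u = rankdata(cx) / (len(cx) + 1)
    v = rankdata(cy) / (len(cy) + 1)
    # Rank correlation summarizes the dependence the copula shape reveals.
    print(name, "rank correlation:", np.corrcoef(u, v)[0, 1].round(3))
```

The shared driving train produces a clearly positive rank correlation while the independent pair stays near zero, mirroring how the shape of the empirical copula separates dependence structures.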


Subject(s)
Action Potentials/physiology; Algorithms; Models, Neurological; Nerve Net/physiology; Neurons/physiology; Animals; Brain/cytology; Brain/physiology; Humans; Nerve Net/cytology