1.
IEEE Trans Neural Netw ; 14(2): 282-95, 2003.
Article in English | MEDLINE | ID: mdl-18238012

ABSTRACT

Recent advances in the biophysics of computation and neurocomputing models have brought to the foreground the importance of dendritic structures in a single neuron cell. Dendritic structures are now viewed as the primary autonomous computational units capable of realizing logical operations. By replacing the classic simplified model of a single neuron with a more realistic one that incorporates the dendritic processes, a novel paradigm in artificial neural networks is being established. In this work, we introduce and develop a mathematical model of dendrite computation in a morphological neuron based on lattice algebra. The computational capabilities of this enriched neuron model are demonstrated by means of several illustrative examples and by proving that any single layer morphological perceptron endowed with dendrites and their corresponding input and output synaptic processes is able to approximate any compact region in higher dimensional Euclidean space to within any desired degree of accuracy. Based on this result, we describe a training algorithm for single layer morphological perceptrons and apply it to some well-known nonlinear problems in order to exhibit its performance.
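The approximation result rests on dendrites that recognize axis-parallel hyperboxes with nothing but lattice operations. The sketch below is an illustrative simplification, not the paper's exact dendritic model; the names `dendrite_response` and `fires` are ours.

```python
# Illustrative sketch: a single "dendrite" recognizing the hyperbox
# [a, b] in R^n using only additions and minima (lattice operations).
# Hypothetical simplification of the morphological-perceptron idea.

def dendrite_response(x, a, b):
    """min over coordinates of min(x_i - a_i, b_i - x_i).

    Nonnegative exactly when a_i <= x_i <= b_i for every i.
    """
    return min(min(xi - ai, bi - xi) for xi, ai, bi in zip(x, a, b))

def fires(x, a, b):
    """Hard-limit the dendrite response: 1 inside the box, 0 outside."""
    return 1 if dendrite_response(x, a, b) >= 0 else 0

# A compact region can then be approximated by a union of such boxes,
# one dendrite per box, combined with a max over dendrite outputs.
box_a, box_b = (0.0, 0.0), (1.0, 2.0)
print(fires((0.5, 1.0), box_a, box_b))  # inside the box  -> 1
print(fires((1.5, 1.0), box_a, box_b))  # outside the box -> 0
```

Because a compact set can be covered arbitrarily well by finitely many such boxes, one dendrite per box gives the approximation property the abstract states.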

2.
IEEE Trans Image Process ; 9(8): 1420-30, 2000.
Article in English | MEDLINE | ID: mdl-18262978

ABSTRACT

Methods for matrix decomposition have found numerous applications in image processing, in particular for the problem of template decomposition. Since existing matrix decomposition techniques are mainly concerned with the linear domain, we consider it timely to investigate matrix decomposition techniques in the nonlinear domain with applications in image processing. The mathematical basis for these investigations is the new theory of rank within minimax algebra. Thus far, only minimax decompositions of rank 1 and rank 2 matrices into outer product expansions are known to the image processing community. We derive a heuristic algorithm for the decomposition of matrices having arbitrary rank.
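In minimax (max-plus) algebra, a rank-1 matrix is an outer "product" in which multiplication is replaced by addition, a_ij = u_i + v_j, and applying such a template factors into two cheap passes. A pure-Python sketch (function names ours, for illustration only):

```python
# Illustrative sketch of a rank-1 matrix in minimax (max-plus) algebra:
# a_ij = u_i + v_j, the additive analogue of an outer product.

def maxplus_outer(u, v):
    """Rank-1 minimax matrix with entries u_i + v_j."""
    return [[ui + vj for vj in v] for ui in u]

def maxplus_apply(A, x):
    """Max-plus matrix-vector product: (A x)_i = max_j (a_ij + x_j)."""
    return [max(aij + xj for aij, xj in zip(row, x)) for row in A]

u, v, x = [0, 2, 1], [3, 0], [1, 4]
A = maxplus_outer(u, v)

# Separability: applying the rank-1 template A equals one max-plus
# "inner product" with v followed by adding u -- one pass per factor.
direct = maxplus_apply(A, x)
s = max(vj + xj for vj, xj in zip(v, x))
factored = [ui + s for ui in u]
assert direct == factored
print(direct)  # [4, 6, 5]
```

This separability is what makes rank-based template decomposition attractive: a rank-r template splits into r such one-dimensional passes.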

3.
Neural Netw ; 12(6): 851-867, 1999 Jul.
Article in English | MEDLINE | ID: mdl-12662661

ABSTRACT

The theory of artificial neural networks has been successfully applied to a wide variety of pattern recognition problems. In this theory, the first step in computing the next state of a neuron or in performing the next layer neural network computation involves the linear operation of multiplying neural values by their synaptic strengths and adding the results. Thresholding usually follows the linear operation in order to provide for nonlinearity of the network. In this paper we discuss a novel class of artificial neural networks, called morphological neural networks, in which the operations of multiplication and addition are replaced by addition and maximum (or minimum), respectively. By taking the maximum (or minimum) of sums instead of the sum of products, morphological network computation is nonlinear before thresholding. As a consequence, the properties of morphological neural networks are drastically different from those of traditional neural network models. The main emphasis of the research presented here is on morphological bidirectional associative memories (MBAMs). In particular, we establish a mathematical theory for MBAMs and provide conditions that guarantee perfect bidirectional recall for corrupted patterns. Some examples that illustrate performance differences between the morphological model and the traditional semilinear model are also given.
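The operation swap the abstract describes, max of sums in place of sum of products, fits in a few lines (illustrative sketch; the function names are ours):

```python
# The two "neuron" accumulation rules side by side (illustrative sketch).

def linear_neuron(weights, x):
    """Classical rule: sum of products, linear before thresholding."""
    return sum(w * xi for w, xi in zip(weights, x))

def morphological_neuron(weights, x):
    """Morphological rule: max of sums, nonlinear before thresholding."""
    return max(w + xi for w, xi in zip(weights, x))

w, x = [1.0, -2.0, 0.5], [2.0, 1.0, 4.0]
print(linear_neuron(w, x))         # 1*2 + (-2)*1 + 0.5*4 = 2.0
print(morphological_neuron(w, x))  # max(3.0, -1.0, 4.5)  = 4.5
```

The dual rule takes the minimum of sums instead; either way the computation is already nonlinear before any threshold is applied, which is the source of the drastically different network properties.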

4.
IEEE Trans Neural Netw ; 9(2): 281-93, 1998.
Article in English | MEDLINE | ID: mdl-18252452

ABSTRACT

The theory of artificial neural networks has been successfully applied to a wide variety of pattern recognition problems. In this theory, the first step in computing the next state of a neuron or in performing the next layer neural network computation involves the linear operation of multiplying neural values by their synaptic strengths and adding the results. A nonlinear activation function usually follows the linear operation in order to provide for nonlinearity of the network and set the next state of the neuron. In this paper we introduce a novel class of artificial neural networks, called morphological neural networks, in which the operations of multiplication and addition are replaced by addition and maximum (or minimum), respectively. By taking the maximum (or minimum) of sums instead of the sum of products, morphological network computation is nonlinear before possible application of a nonlinear activation function. As a consequence, the properties of morphological neural networks are drastically different from those of traditional neural network models. The main emphasis of the research presented here is on morphological associative memories. We examine the computing and storage capabilities of morphological associative memories and discuss differences between morphological models and traditional semilinear models such as the Hopfield net.
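A minimal sketch of one such memory, assuming the standard min-of-differences construction for the autoassociative case (variable names ours): the memory entry is w_ij = min over stored patterns of (x_i − x_j), and recall is the max-plus product. Since w_ii = 0, every stored pattern is recalled exactly.

```python
# Sketch of a morphological autoassociative memory (an assumption:
# the min-of-differences construction, with max-plus recall).

def build_memory(patterns):
    """w_ij = min over stored patterns p of (p_i - p_j)."""
    n = len(patterns[0])
    return [[min(p[i] - p[j] for p in patterns) for j in range(n)]
            for i in range(n)]

def recall(W, x):
    """Max-plus product: (W x)_i = max_j (w_ij + x_j)."""
    return [max(wij + xj for wij, xj in zip(row, x)) for row in W]

patterns = [[1, 0, 3], [2, 2, 0], [0, 1, 1]]
W = build_memory(patterns)

# Every stored pattern is recalled perfectly: each recall entry is
# bounded above by x_i, and the diagonal term w_ii + x_i attains it.
for p in patterns:
    assert recall(W, p) == p
print("perfect recall of all stored patterns")
```

Note the contrast with the Hopfield net, whose storage capacity is a small fraction of the pattern dimension: here any number of patterns is stored with perfect recall of undistorted inputs, though robustness to corruption is one-sided (this min-based memory tolerates one noise direction, its max-based dual the other).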

5.
IEEE Trans Image Process ; 4(2): 224-6, 1995.
Article in English | MEDLINE | ID: mdl-18289974

ABSTRACT

We present a new parallel binary image shrinking algorithm that improves on Levialdi's (1972) parallel shrinking algorithm and can serve as a basic operation in many image labeling algorithms, reducing local memory requirements and speeding up the labeling process. The new algorithm shrinks an n×n binary image to an image with no black pixels in O(n) parallel steps with a multiplicative constant of 1.5, preserving 8-connectivity throughout the shrinking process.
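For context, one common statement of the Levialdi baseline (a sketch of the 1972 rule the paper improves on, not the paper's own algorithm; the coordinate convention below is our assumption): in each parallel step a black pixel is deleted when its west, south, and south-west neighbours are all white, and a white pixel becomes black when both its west and south neighbours are black, so components shrink without losing 8-connectivity.

```python
# Sketch of one parallel step of Levialdi-style shrinking (the 1972
# baseline, not the improved algorithm of the paper). Convention (an
# assumption): row index grows downward, so "south" is row+1 and
# "west" is col-1; pixels outside the image count as white (0).

def levialdi_step(img):
    n, m = len(img), len(img[0])

    def p(r, c):
        return img[r][c] if 0 <= r < n and 0 <= c < m else 0

    out = [row[:] for row in img]      # all pixels update in parallel
    for r in range(n):
        for c in range(m):
            w, s, sw = p(r, c - 1), p(r + 1, c), p(r + 1, c - 1)
            if img[r][c] == 1 and w == 0 and s == 0 and sw == 0:
                out[r][c] = 0          # no support to the south-west: delete
            elif img[r][c] == 0 and w == 1 and s == 1:
                out[r][c] = 1          # fill to keep the component 8-connected
    return out

# A 2x2 component vanishes in three parallel steps.
img = [[0, 0, 0, 0],
       [0, 1, 1, 0],
       [0, 1, 1, 0],
       [0, 0, 0, 0]]
steps = 0
while any(any(row) for row in img):
    img = levialdi_step(img)
    steps += 1
print(steps)  # 3
```

The improved algorithm of the paper lowers the multiplicative constant of this O(n) process to 1.5 while keeping the same connectivity guarantee.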
