Results 1 - 8 of 8
1.
IEEE Trans Neural Netw ; 13(6): 1432-49, 2002.
Article in English | MEDLINE | ID: mdl-18244539

ABSTRACT

In this paper, we explore some aspects of the problem of online unsupervised learning of a switching time series, i.e., a time series which is generated by a combination of several alternately activated sources. This learning problem can be solved by a two-stage approach: 1) separating and assigning each incoming datum to a specific dataset (one dataset corresponding to each source) and 2) developing one model per dataset (i.e., one model per source). We introduce a general data allocation (DA) methodology, which combines the two steps into an iterative scheme: existing models compete for the incoming data; data assigned to each model are used to refine the model. We distinguish between two modes of DA: in parallel DA, every incoming datablock is allocated to the model with lowest prediction error; in serial DA, the incoming datablock is allocated to the first model with prediction error below a prespecified threshold. We present sufficient conditions for asymptotically correct allocation of the data. We also present numerical experiments to support our theoretical analysis.
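The two DA modes can be sketched as follows. This is a minimal illustration, not the paper's method: the per-model prediction errors are hypothetical inputs, and the predictive models that would produce them are not reproduced.

```python
# Minimal sketch of the two data-allocation (DA) modes described above.
# The prediction errors are hypothetical values, not outputs of the
# paper's models.

def parallel_da(errors):
    """Allocate the incoming datablock to the model with lowest error."""
    return min(range(len(errors)), key=errors.__getitem__)

def serial_da(errors, threshold):
    """Allocate the datablock to the first model whose prediction error
    falls below the prespecified threshold; None means no model qualifies
    (a new model could then be spawned)."""
    for i, err in enumerate(errors):
        if err < threshold:
            return i
    return None

errors = [0.9, 0.2, 0.5]             # hypothetical per-model errors
assert parallel_da(errors) == 1      # parallel DA: lowest error wins
assert serial_da(errors, 0.95) == 0  # serial DA: first model below threshold
```

The example shows how the two modes can disagree: with a loose threshold, serial DA accepts the first adequate model even when a later model has lower error.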

2.
Neural Netw ; 13(10): 1145-69, 2000 Dec.
Article in English | MEDLINE | ID: mdl-11156192

ABSTRACT

In this work it is shown how fuzzy lattice neurocomputing (FLN) emerges as a connectionist paradigm in the framework of fuzzy lattices (FL-framework), whose advantages include the capacity to deal rigorously with disparate types of data such as numeric and linguistic data, intervals of values, and 'missing' and 'don't care' data. A novel notation for the FL-framework is introduced here in order to simplify mathematical expressions without losing content. Two concrete FLN models are presented, namely 'sigma-FLN' for competitive clustering and 'FLN with tightest fits (FLNtf)' for supervised clustering. Learning by the sigma-FLN is rapid, as it requires a single pass through the data, whereas learning by the FLNtf is incremental, data-order independent, of polynomial complexity Θ(n³), and guarantees maximization of the degree of inclusion of an input in a learned class, as explained in the text. Convenient geometric interpretations are provided. The sigma-FLN is presented here as an extension of fuzzy-ART in the FL-framework: sigma-FLN widens fuzzy-ART's domain of application to (mathematical) lattices by augmenting the scope of both fuzzy-ART's choice (Weber) and match functions, and by enhancing fuzzy-ART's complement-coding technique. The FLNtf neural model is applied to four benchmark data sets of various sizes for pattern recognition and rule extraction. The benchmark data sets in question involve jointly numeric and nominal data with 'missing' and/or 'don't care' attribute values, while the lattices involved include the unit hypercube, a probability space, and a Boolean algebra. The potential of the FL-framework in computing is also delineated.
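For reference, the standard fuzzy-ART machinery that sigma-FLN generalizes can be sketched on the unit hypercube. The weight vector and alpha value below are made-up illustrations, not values from the paper.

```python
# Illustrative sketch of the fuzzy-ART choice (Weber) and match functions
# and of complement coding, which sigma-FLN extends to general lattices.
# The input, weight vector, and alpha are invented for illustration.

def fuzzy_min(x, w):
    """Componentwise fuzzy intersection x ^ w."""
    return [min(a, b) for a, b in zip(x, w)]

def choice(x, w, alpha=0.001):
    """Weber choice function: |x ^ w| / (alpha + |w|), with |.| the L1 norm."""
    return sum(fuzzy_min(x, w)) / (alpha + sum(w))

def match(x, w):
    """Match function: |x ^ w| / |x|."""
    return sum(fuzzy_min(x, w)) / sum(x)

def complement_code(x):
    """Complement coding maps x to (x, 1 - x); the coded L1 norm is dim(x)."""
    return x + [1.0 - a for a in x]

x = complement_code([0.2, 0.7])      # -> [0.2, 0.7, 0.8, 0.3]
w = [0.1, 0.5, 0.6, 0.3]
assert abs(sum(x) - 2.0) < 1e-9      # complement-coding norm invariant
assert abs(match(x, w) - 0.75) < 1e-9
```

The norm invariant of complement coding is what prevents category proliferation in fuzzy-ART; the FL-framework replaces these hypercube operations with lattice meets and an inclusion measure.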


Subject(s)
Fuzzy Logic , Neural Networks, Computer
3.
IEEE Trans Inf Technol Biomed ; 3(4): 268-77, 1999 Dec.
Article in English | MEDLINE | ID: mdl-10719477

ABSTRACT

Stapedotomy is a surgical procedure aimed at the treatment of hearing impairment due to otosclerosis. The treatment consists of drilling a hole through the stapes bone in the inner ear in order to insert a prosthesis. Safety precautions require knowledge of the nonmeasurable stapes thickness. The technical goal herein has been the design of high-level controls for an intelligent mechatronics drilling tool in order to enable the estimation of stapes thickness from measurable drilling data. The goal has been met by learning a map between drilling features, hence no model of the physical system has been necessary. Learning has been achieved, as explained in this paper, by the d-sigma fuzzy lattice neurocomputing (d-sigma FLN) scheme for classification, within the framework of fuzzy lattices. The successful application of the d-sigma FLN scheme is demonstrated by estimating the thickness of a stapes bone "on-line" using drilling data obtained experimentally in the laboratory.


Subject(s)
Deafness/surgery , Surgical Procedures, Operative/methods , Fuzzy Logic , Learning
4.
IEEE Trans Neural Netw ; 9(5): 862-76, 1998.
Article in English | MEDLINE | ID: mdl-18255772

ABSTRACT

We introduce a hybrid neural-genetic multimodel parameter estimation algorithm. The algorithm is applied to structured system identification of nonlinear dynamical systems. The main components of the algorithm are 1) a recurrent incremental credit assignment (ICRA) neural network, which computes a credit function for each member of a generation of models, and 2) a genetic algorithm, which uses the credit functions as selection probabilities for producing new generations of models. The neural network and genetic algorithm combination is applied to the task of finding the parameter values which minimize the total squared output error: the credit function reflects the closeness of each model's output to the true system output, and the genetic algorithm searches the parameter space by a divide-and-conquer technique. The algorithm is evaluated by numerical simulations of parameter estimation for a planar robotic manipulator and a wastewater treatment plant.
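Step 2 above, using credit values as selection probabilities, amounts to roulette-wheel selection and can be sketched as follows. The credit values are hypothetical; the ICRA network that would compute them is not modeled here.

```python
import random

# Sketch of credit-proportional (roulette-wheel) selection: each model is
# drawn with probability proportional to its credit. The model names and
# credit values are invented for illustration.

def select(population, credits, rng):
    """Draw one parent with probability proportional to its credit."""
    total = sum(credits)
    weights = [c / total for c in credits]   # credits -> selection probabilities
    return rng.choices(population, weights=weights, k=1)[0]

rng = random.Random(0)                       # seeded for reproducibility
models = ["model_a", "model_b", "model_c"]
credits = [0.1, 0.8, 0.1]                    # hypothetical credit values
picks = [select(models, credits, rng) for _ in range(1000)]
assert picks.count("model_b") > 600          # high-credit model dominates
```

Over many draws the high-credit model is selected roughly in proportion to its credit share, which is what drives the genetic search toward models whose outputs track the true system.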

5.
IEEE Trans Neural Netw ; 9(5): 877-90, 1998.
Article in English | MEDLINE | ID: mdl-18255773

ABSTRACT

This paper proposes two hierarchical schemes for learning, one for clustering and the other for classification problems. Both schemes can be implemented on a fuzzy lattice neural network (FLNN) architecture, to be introduced herein. The corresponding two learning models draw on adaptive resonance theory (ART) and min-max neurocomputing principles, but their application domain is a mathematical lattice; therefore they can handle more general types of data in addition to N-dimensional vectors. The FLNN neural model stems from a cross-fertilization of lattice theory and fuzzy set theory. Hence a novel theoretical foundation is introduced in this paper, namely the framework of fuzzy lattices, or FL-framework, based on the concepts of fuzzy lattice and inclusion measure. Sufficient conditions for the existence of an inclusion measure in a mathematical lattice are shown. The performance of the two FLNN schemes, for clustering and for classification, compares quite well with that of other methods, as demonstrated by examples on various data sets including several benchmark data sets.

6.
Article in English | MEDLINE | ID: mdl-18255983

ABSTRACT

We present a specific varying fitness function technique for genetic algorithm (GA) constrained optimization. This technique incorporates the problem's constraints into the fitness function in a dynamic way, by forming a fitness function with varying penalty terms. The resulting varying fitness function facilitates the GA search. The performance of the technique is tested on two optimization problems: the cutting-stock problem and the unit-commitment problem. New domain-specific operators are also introduced. Solutions obtained by means of the varying and the conventional (nonvarying) fitness function techniques are compared, and the results show the superiority of the proposed technique.
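The idea of a penalty term that varies over generations can be sketched as below. The toy objective, constraint, and penalty schedule are invented for illustration and are not the paper's formulations for the cutting-stock or unit-commitment problems.

```python
# Minimal sketch of a varying fitness function: the constraint penalty
# grows with the generation index, so early generations may explore
# infeasible solutions while late generations are forced toward
# feasibility. Objective, constraint, and schedule are illustrative.

def fitness(x, generation, total_generations):
    objective = -(x - 3.0) ** 2             # toy objective: best unconstrained x is 3
    violation = max(0.0, x - 2.0)           # toy constraint: require x <= 2
    weight = 10.0 * generation / total_generations  # penalty grows over time
    return objective - weight * violation

# Early on, the infeasible optimum x = 3 is tolerated...
assert fitness(3.0, 0, 100) > fitness(2.0, 0, 100)
# ...but by the final generation the feasible point x = 2 scores higher.
assert fitness(2.0, 100, 100) > fitness(3.0, 100, 100)
```

This is the mechanism the abstract describes: the same candidate is ranked differently at different stages of the search, easing the GA into the feasible region instead of rejecting infeasible candidates outright.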

7.
IEEE Trans Neural Netw ; 7(1): 73-86, 1996.
Article in English | MEDLINE | ID: mdl-18255559

ABSTRACT

We apply the partition algorithm to the problem of time-series classification. We assume that the source that generates the time series belongs to a finite set of candidate sources. Classification is based on the computation of posterior probabilities. Prediction error is used to adaptively update the posterior probability of each source. The algorithm is implemented by a hierarchical, modular, recurrent network. The bottom (partition) level of the network consists of neural modules, each one trained to predict the output of one candidate source. The top (decision) level consists of a decision module, which computes posterior probabilities and classifies the time series to the source of maximum posterior probability. The classifier network is formed from the composition of the partition and decision levels. This method applies to deterministic as well as probabilistic time series. Source switching can also be accommodated. We give some examples of application to problems of signal detection, phoneme classification, and enzyme classification. In conclusion, the algorithm presented here gives a systematic method for the design of modular classification networks. The method can be extended by various choices of the partition and decision components.
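The adaptive posterior update driven by prediction error can be sketched as follows. The Gaussian error likelihood with unit variance is an illustrative assumption, not necessarily the paper's likelihood model, and the per-source errors are hypothetical.

```python
import math

# Sketch of the decision-level update: each candidate source's posterior
# is reweighted by the likelihood of its module's prediction error (here
# assumed Gaussian, an illustrative choice) and renormalized.

def update_posteriors(posteriors, errors, sigma=1.0):
    """One recursive update of per-source posteriors from prediction errors."""
    weighted = [p * math.exp(-e * e / (2.0 * sigma * sigma))
                for p, e in zip(posteriors, errors)]
    total = sum(weighted)
    return [w / total for w in weighted]

p = [1 / 3, 1 / 3, 1 / 3]                     # uniform prior over three sources
for errors in [[0.9, 0.1, 0.8], [1.0, 0.2, 0.7]]:
    p = update_posteriors(p, errors)          # hypothetical per-source errors
assert max(range(3), key=p.__getitem__) == 1  # low-error source accumulates mass
assert abs(sum(p) - 1.0) < 1e-9               # posteriors stay normalized
```

Classifying to the source of maximum posterior probability then reduces to an argmax over the final posterior vector, and running the recursion on a sliding window is one way switching sources could be tracked.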

8.
IEEE Trans Neural Netw ; 6(6): 1536-41, 1995.
Article in English | MEDLINE | ID: mdl-18263447

ABSTRACT

This paper investigates the properties of the so-called feedforward method, a very simple training law suitable for on-chip learning. Its merit is conceptual and implementational simplicity: signals do not propagate in both directions, and the method works for various types of activation function, which makes it particularly effective when the activation function is unmodeled. Extensive simulation has shown that this method is usually faster than backpropagation.
