Results 1 - 8 of 8
1.
IEEE Trans Cybern ; 53(3): 1790-1801, 2023 Mar.
Article in English | MEDLINE | ID: mdl-34936563

ABSTRACT

Designing effective and efficient classifiers is a challenging task given that data may exhibit different geometric structures and that complex relationships may exist within the data. As a fundamental component of granular computing, information granules play a key role in human cognition. Therefore, it is of great interest to develop classifiers based on information granules such that highly interpretable, human-centric models with higher accuracy can be constructed. In this study, we elaborate on a novel design methodology for granular classifiers in which information granules play a fundamental role. First, information granules are formed on the basis of labeled patterns following the principle of justifiable granularity. The diversity of samples embraced by each information granule is quantified and controlled in terms of an entropy criterion. This design implies that the information granules constructed in this way form sound, homogeneous descriptors characterizing the structure and the diversity of the available experimental data. Next, granular classifiers are built in the presence of the formed information granules. The classification result for any input instance is determined by summing the contents of the related information granules weighted by membership degrees. Experiments on both synthetic data and publicly available datasets demonstrate that the proposed models exhibit better prediction abilities than some commonly encountered classifiers (namely, linear regression, support vector machines, naïve Bayes, decision trees, and neural networks) and come with enhanced interpretability.
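The principle of justifiable granularity invoked in this abstract balances how much evidence a granule covers against how specific it stays. A minimal sketch of one common reading of the principle (our own illustration, not the authors' classifier; the brute-force scan and the function name are assumptions):

```python
import numpy as np

def justifiable_interval(data):
    """Build an interval granule [a, b] around the median of `data` by
    maximizing coverage * specificity (brute-force over observed values)."""
    data = np.asarray(data, dtype=float)
    med = np.median(data)
    rng = data.max() - data.min()
    best, best_q = (med, med), -1.0
    for a in data[data <= med]:
        for b in data[data >= med]:
            cov = np.mean((data >= a) & (data <= b))   # fraction covered
            spec = 1.0 - (b - a) / rng                 # narrowness in [0, 1]
            if cov * spec > best_q:
                best_q, best = cov * spec, (float(a), float(b))
    return best, best_q
```

On `[1, 2, 3, 4, 5, 100]` the outlier is excluded: the granule `[1, 5]` wins because stretching to 100 destroys specificity.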

2.
IEEE Trans Cybern ; PP, 2022 Jun 21.
Article in English | MEDLINE | ID: mdl-35727789

ABSTRACT

In this study, we establish a new design methodology for granular models realized by augmenting existing numeric models through analyzing and modeling their associated prediction error. Several novel approaches to the construction of granular architectures that augment existing numeric models by incorporating modeling errors are proposed in order to improve and quantify the numeric models' prediction abilities. The resulting construct arises as a granular model that produces granular outcomes, generated by aggregating the outputs of the numeric model (or its granular counterpart) with the corresponding error terms. Three different architectural developments are formulated and analyzed. In comparison with numeric models, which strive to achieve the highest accuracy, granular models are developed so that they produce comprehensive prediction outcomes realized as information granules. By virtue of the granular nature of the results, the coverage and specificity of the constructed information granules express the quality of the prediction results in a more descriptive and comprehensive manner. The performance of the granular constructs is evaluated using the criteria of coverage and specificity, which are pertinent to the granular outputs produced by the granular models.
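One simple way to realize the "numeric model plus error model" aggregation described above is to wrap each numeric prediction in an interval drawn from the empirical distribution of the model's residuals. A hedged sketch (the quantile-based interval is our assumption; the paper's three architectures are more elaborate):

```python
import numpy as np

def granular_augment(y_pred, residuals, alpha=0.9):
    """Turn numeric predictions into interval (granular) outputs by adding
    the central alpha-mass of the model's residual distribution."""
    lo_q, hi_q = np.quantile(residuals, [(1 - alpha) / 2, (1 + alpha) / 2])
    return y_pred + lo_q, y_pred + hi_q
```

Wider residual spread yields wider (less specific) but better-covering granular outputs, which is exactly the coverage/specificity trade-off the abstract evaluates.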

3.
IEEE Trans Cybern ; 52(6): 4126-4135, 2022 Jun.
Article in English | MEDLINE | ID: mdl-33119518

ABSTRACT

Information granulation and degranulation play a fundamental role in granular computing (GrC). Given a collection of information granules (referred to as reference information granules), the essence of the granulation process (encoding) is to represent each data item (either numeric or granular) in terms of these reference information granules. The degranulation process (decoding), which realizes the reconstruction of the original data, is associated with a certain level of reconstruction error. An important issue is how to reduce the reconstruction error so that the data can be reconstructed more accurately. In this study, the granulation process is realized by means of fuzzy clustering. A novel neural network is leveraged in the subsequent degranulation process, which helps significantly reduce the reconstruction error. We show that the proposed degranulation architecture exhibits improved capabilities in reconstructing the original data in comparison with other methods. A series of experiments using synthetic data and publicly available datasets from the machine-learning repository demonstrates the superiority of the proposed method over some existing alternatives.


Subjects
Algorithms; Pattern Recognition, Automated; Cluster Analysis; Neural Networks, Computer; Pattern Recognition, Automated/methods
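The encoding/decoding mechanism discussed in the abstract above can be illustrated with the standard Fuzzy C-Means membership and reconstruction formulas (a plain-FCM sketch; the paper's neural decoder, which is the actual contribution, is not reproduced here):

```python
import numpy as np

def fcm_encode(x, prototypes, m=2.0):
    """Granulation: membership degrees of x with respect to the reference
    prototypes (standard FCM membership formula)."""
    d = np.linalg.norm(prototypes - x, axis=1)
    if np.any(d == 0):                    # x coincides with a prototype
        u = (d == 0).astype(float)
        return u / u.sum()
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum()

def fcm_decode(u, prototypes, m=2.0):
    """Degranulation: reconstruct x as the membership-weighted mean of the
    prototypes; the gap to the original x is the reconstruction error."""
    w = u ** m
    return (w[:, None] * prototypes).sum(axis=0) / w.sum()
```

A prototype itself reconstructs exactly; points between prototypes generally do not, and that residual gap is precisely the reconstruction error the paper's neural decoder is designed to reduce.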
4.
IEEE Trans Cybern ; 52(7): 7029-7038, 2022 Jul.
Article in English | MEDLINE | ID: mdl-33151886

ABSTRACT

Rule-based fuzzy models play a dominant role in fuzzy modeling and come with extensive applications in the system modeling area. Due to the presence of system modeling error, it is impossible to construct a model that fits the experimental evidence exactly and, at the same time, exhibits high generalization capabilities. To alleviate these problems, in this study we elaborate on a realization of granular outputs for rule-based fuzzy models with the aim of effectively quantifying the associated modeling errors. By analyzing the characteristics of the modeling errors, an error model is constructed to characterize the deviations between the estimated outputs and the expected ones. The resulting granular model comes into play as an aggregation of the regression model and the error model. Information granularity plays a central role in the construction of the granular outputs (intervals). The quality of the produced interval estimates is quantified in terms of the coverage and specificity criteria. The optimal allocation of information granularity is determined through a combined index involving these two criteria, pertinent to the evaluation of interval outputs. A series of experimental studies is provided to demonstrate the effectiveness of the proposed approach and show its superiority over the traditional statistics-based method.


Subjects
Algorithms; Fuzzy Logic
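The coverage and specificity criteria used throughout these papers admit a compact definition for interval outputs. A minimal sketch (the normalization of interval width by the output range is our assumption):

```python
import numpy as np

def coverage(y, lower, upper):
    """Fraction of targets that fall inside their interval outputs."""
    y, lower, upper = map(np.asarray, (y, lower, upper))
    return float(np.mean((y >= lower) & (y <= upper)))

def specificity(lower, upper, y_range):
    """Mean narrowness of the intervals: 1 for degenerate (numeric)
    outputs, 0 for intervals as wide as the whole output range."""
    lower, upper = np.asarray(lower), np.asarray(upper)
    return float(np.mean(np.clip(1.0 - (upper - lower) / y_range, 0.0, 1.0)))
```

The two criteria conflict by construction: widening every interval raises coverage toward 1 while driving specificity toward 0, which is why the papers optimize a combined index.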
5.
IEEE Trans Cybern ; 52(4): 2214-2224, 2022 Apr.
Article in English | MEDLINE | ID: mdl-32721903

ABSTRACT

In this article, we are concerned with the formation of type-2 information granules in a two-stage approach. We present a comprehensive algorithmic framework that gives rise to information granules of a higher type (type-2, to be specific) such that the key structure of the local granular data, their topologies, and their diversities become fully reflected and quantified. In contrast to traditional collaborative clustering, where local structures (information granules) are obtained by running algorithms on the local datasets and communicating findings across sites, we propose a way of characterizing granular data by forming a suite of higher-type information granules that reveal the overall structure of a collection of locally available datasets. Information granules built at the lower level on the basis of local sources of data are weighted by the number of data points they represent, while the information granules formed at the higher level of the hierarchy are more abstract and general, thus facilitating the formation of a hierarchical description of data realized at different levels of detail. The construction of information granules is completed by resorting to fuzzy clustering algorithms (more specifically, the well-known Fuzzy C-Means). In the formation of information granules, we follow the fundamental principle of granular computing, viz., the principle of justifiable granularity. Experimental studies concerning selected publicly available machine-learning datasets are reported.


Subjects
Algorithms; Pattern Recognition, Automated; Cluster Analysis
6.
IEEE Trans Cybern ; 51(3): 1639-1650, 2021 Mar.
Article in English | MEDLINE | ID: mdl-30892261

ABSTRACT

In this paper, we elaborate on a new design approach to the development and analysis of granular input spaces and the ensuing granular modeling. Given a numeric model (no matter what specific design methodology has been used to construct it and what architecture has been adopted), we form a granular input space by allocating a certain level of information granularity across the input variables. The formation of the granular input space helps us gain better insight into the ranking of input variables with respect to their required precision (the variables with a lower level of information granularity need to be specified precisely when estimating the inputs). As a consequence, for granular inputs, the outputs of the granular model are also information granules (say, intervals, fuzzy sets, rough sets, etc.). It is shown that the process of forming the granular input space can be cast as an optimization of the allocation of information granularity across the input variables, so that the specificity of the corresponding granular outputs of the granular model becomes the highest while the coverage of data is maximized. The construction of the granular input space dwells upon two fundamental principles of granular computing: the principle of justifiable granularity and the optimal allocation of information granularity. The quality of the granular input space is quantified in terms of two conflicting criteria, namely the specificity of the results produced by the granular model and the coverage of the experimental data delivered by this model. In the ensuing optimization problem, one maximizes the product of specificity and coverage. Differential evolution is engaged for this optimization task. The experimental studies involve both a synthetic dataset and data coming from the machine-learning repository.
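The allocation problem described above, distributing a granularity budget across input variables so that the product of coverage and specificity of the resulting output intervals is maximal, can be illustrated for a linear model, where interval propagation is exact. Random search stands in for the paper's differential evolution, and all names are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def interval_output(X, w, eps):
    """Propagate granular inputs [x - eps, x + eps] through y = X @ w:
    each output interval is centered at X @ w with radius sum(|w| * eps)."""
    c = X @ w
    r = np.abs(w) @ eps
    return c - r, c + r

def allocation_quality(X, y, w, eps, y_range):
    """Combined index: coverage of targets times mean interval specificity."""
    lo, hi = interval_output(X, w, eps)
    cov = np.mean((y >= lo) & (y <= hi))
    spec = np.mean(np.clip(1.0 - (hi - lo) / y_range, 0.0, 1.0))
    return cov * spec

def optimize_allocation(X, y, w, budget, iters=200):
    """Random-search stand-in for differential evolution: try random splits
    of the granularity budget, keep the one maximizing the combined index."""
    y_range = y.max() - y.min()
    best_eps, best_q = None, -1.0
    for _ in range(iters):
        eps = budget * rng.dirichlet(np.ones(X.shape[1]))
        q = allocation_quality(X, y, w, eps, y_range)
        if q > best_q:
            best_q, best_eps = q, eps
    return best_eps, best_q
```

On noise-free data every allocation covers perfectly, so the search concentrates granularity on the variables whose coefficients inflate the output interval least, mirroring the precision ranking of inputs mentioned in the abstract.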

7.
IEEE Trans Neural Netw Learn Syst ; 31(9): 3606-3619, 2020 Sep.
Article in English | MEDLINE | ID: mdl-31722490

ABSTRACT

In this article, we propose a design and evaluation framework for granular neural networks realized in the presence of information granules. Neural networks realized in this manner are able to process both nonnumerical data, such as information granules, and numerical data. Information granules are meaningful and semantically sound entities formed by organizing existing knowledge and available experimental data. The directional nature of the mapping between the input and output data needs to be considered when building information granules. The development of the neural networks advocated in this article is realized as a two-phase process. First, a collection of information granules is formed through granulation of the numeric data in the input and output spaces. Second, neural networks are constructed on the basis of the information granules rather than the original (numeric) data. The proposed method leads to the construction of neural networks in a completely new way. In comparison with traditional (numeric) neural networks, the networks developed in the presence of granular data require shorter learning time. They also produce results (outputs) that are information granules rather than numeric entities. The quality of the granular outputs generated by our neural networks is evaluated in terms of the coverage and specificity criteria that are pertinent to the characterization of information granules.

8.
IEEE Trans Cybern ; 47(12): 4475-4484, 2017 Dec.
Article in English | MEDLINE | ID: mdl-28113415

ABSTRACT

Granular computing (GrC) has emerged as a unified conceptual and processing framework. Information granules are fundamental constructs that permeate the concepts and models of GrC. This paper is concerned with the design of a collection of meaningful, easily interpretable ellipsoidal information granules with the use of the principle of justifiable granularity, taking into consideration the reconstruction abilities of the designed information granules. The principle of justifiable granularity supports the design of information granules based on numeric or granular evidence and aims to achieve a compromise between the justifiability and specificity of the information granules to be constructed. A two-stage development strategy behind the construction of justifiable information granules is considered. First, a collection of numeric prototypes is determined with the use of fuzzy clustering. Second, the lengths of the semi-axes of the ellipsoidal information granules to be formed around these prototypes are optimized. Two optimization criteria are introduced and studied. Experimental studies involving a synthetic dataset and datasets coming from the machine-learning repository are reported.
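The coverage side of evaluating an axis-aligned ellipsoidal granule built around a prototype reduces to a membership test (a simplified sketch; the paper's two semi-axis optimization criteria are not reproduced here):

```python
import numpy as np

def ellipsoid_coverage(data, center, semi_axes):
    """Fraction of points inside the axis-aligned ellipsoid
    sum(((x_i - c_i) / a_i)^2) <= 1 formed around a prototype."""
    z = (np.asarray(data, dtype=float) - center) / semi_axes
    return float(np.mean((z ** 2).sum(axis=1) <= 1.0))
```

Optimizing the semi-axes then amounts to trading this coverage against the ellipsoid's volume (its specificity), in the spirit of justifiable granularity.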
