Results 1 - 7 of 7
1.
IEEE Trans Neural Netw Learn Syst; 28(1): 149-163, 2017 Jan.
Article in English | MEDLINE | ID: mdl-26685272

ABSTRACT

In this paper, we present a multiclass data classifier, the optimal conformal transformation kernel (OCTK) classifier, which is based on learning a specific kernel model, the conformal transformation kernel (CTK), and apply it to two image recognition tasks: face recognition and object categorization. We show that the learned CTK induces a desirable change in spatial geometry when mapping data from the input space to the feature space: the local geometry of heterogeneous regions is magnified to allow finer discrimination, while that of homogeneous regions is compressed to suppress intraclass variations. This property of the learned CTK is of great benefit in image recognition, where the images to be classified typically exhibit large intraclass diversity together with high interclass similarity. Experiments on face recognition and object categorization show that the proposed OCTK classifier achieves the best or second-best recognition result among state-of-the-art classifiers, regardless of the feature or feature representation used. In terms of computational efficiency, the OCTK classifier runs significantly faster than the linear support vector machine classifier (linear LIBSVM).
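
The core idea of a conformal transformation kernel is to rescale a base kernel by a positive factor c(x), so that k~(x, y) = c(x) c(y) k(x, y) magnifies some regions of the feature space and compresses others. The sketch below illustrates this construction with an RBF base kernel and a conformal factor built from class-centroid anchors; the anchor choice, the weights, and the form of the factor are illustrative assumptions, not the optimized CTK of the paper.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    """Base RBF kernel matrix between rows of X and rows of Y."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def conformal_factor(X, anchors, alphas, gamma=0.5):
    """c(x) = sum_i alpha_i * exp(-gamma ||x - a_i||^2): one common conformal form."""
    return rbf_kernel(X, anchors, gamma) @ alphas

def conformal_kernel(X, Y, anchors, alphas, gamma=0.5):
    """k~(x, y) = c(x) c(y) k(x, y): conformally transformed kernel matrix."""
    cX = conformal_factor(X, anchors, alphas, gamma)
    cY = conformal_factor(Y, anchors, alphas, gamma)
    return np.outer(cX, cY) * rbf_kernel(X, Y, gamma)

# Toy usage: anchors at class means, uniform weights (hypothetical choices).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 5)), rng.normal(3, 1, (20, 5))])
y = np.array([0] * 20 + [1] * 20)
anchors = np.vstack([X[y == 0].mean(0), X[y == 1].mean(0)])
alphas = np.ones(len(anchors))
K = conformal_kernel(X, X, anchors, alphas)
print(K.shape)  # (40, 40)
```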

2.
IEEE Trans Neural Netw; 21(10): 1685-90, 2010 Oct.
Article in English | MEDLINE | ID: mdl-20729164

ABSTRACT

Gaussian process (GP) approaches to classification combine Bayesian methods with kernel techniques and are well suited to small-sample analysis. Here we propose a GP model and investigate it for facial expression recognition on the Japanese female facial expression dataset. Under leave-one-out cross-validation, the accuracy of the GP classifiers reaches 93.43% without any feature selection or extraction. Even when tested on all expressions of a particular expressor, the GP classifier trained on the remaining samples significantly outperforms several commonly used classifiers. To assess the robustness of the method, randomized 10-fold cross-validation is repeated many times to obtain an overview of the recognition rates. The experimental results demonstrate the promising performance of this application.
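
As a concrete illustration of the evaluation protocol (a GP classifier scored with leave-one-out cross-validation), the sketch below uses scikit-learn's stock GaussianProcessClassifier; it is not the authors' GP model, and sklearn's digits set replaces the facial-expression images, which are not bundled with any library.

```python
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.datasets import load_digits

# Stand-in data: X would otherwise hold extracted facial features.
X, y = load_digits(n_class=4, return_X_y=True)
X, y = X[:80], y[:80]  # keep it small so leave-one-out stays cheap

# Fixed RBF kernel; optimizer=None skips hyperparameter refitting per fold.
gp = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=10.0),
                               optimizer=None, random_state=0)

# Leave-one-out accuracy, mirroring the protocol described in the abstract.
acc = cross_val_score(gp, X, y, cv=LeaveOneOut(), scoring="accuracy")
print(f"LOO accuracy: {acc.mean():.3f}")
```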


Subjects
Algorithms; Biometric Identification/methods; Models, Statistical; Pattern Recognition, Automated/methods; Artificial Intelligence; Female; Humans; Japan; Normal Distribution
3.
Int J Data Min Bioinform; 4(2): 142-57, 2010.
Article in English | MEDLINE | ID: mdl-20423017

ABSTRACT

Small sample size is one of the biggest challenges in microarray data analysis. As microarray data accumulate rapidly, integrating data from related studies is a natural way to increase sample size so that more reliable statistical analysis can be performed. In this paper, we present a simple and effective integration scheme, called the Normalised Linear Transform (NLT), to combine data from different microarray platforms. The NLT scheme is compared with three other integration schemes on two tasks: classification analysis and gene marker selection. Our experiments demonstrate that the NLT scheme performs best in terms of classification accuracy and leads to more biologically significant marker genes.
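
The abstract names the NLT scheme but does not spell out its transform, so the sketch below uses a generic per-platform linear rescaling (each gene to zero mean and unit variance within its platform) purely to illustrate how separately normalized platforms can be pooled into one larger sample; it is a hypothetical stand-in, not the published NLT formula.

```python
import numpy as np

def linear_normalise(expr):
    """Linearly rescale each gene (row) to zero mean and unit variance within
    one platform -- a stand-in for a per-platform linear transform; the actual
    NLT formula is not given in the abstract."""
    mu = expr.mean(axis=1, keepdims=True)
    sd = expr.std(axis=1, keepdims=True) + 1e-12
    return (expr - mu) / sd

# Two hypothetical platforms measuring the same genes on different scales.
rng = np.random.default_rng(1)
platform_a = rng.normal(8.0, 2.0, size=(100, 30))     # genes x samples
platform_b = rng.normal(500.0, 150.0, size=(100, 25))

# Normalize each platform separately, then pool samples column-wise.
pooled = np.hstack([linear_normalise(platform_a), linear_normalise(platform_b)])
print(pooled.shape)  # (100, 55) -- larger sample size for downstream analysis
```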


Subjects
Oligonucleotide Array Sequence Analysis/methods; Genetic Markers; Sample Size; Statistics as Topic
4.
Article in English | MEDLINE | ID: mdl-17975270

ABSTRACT

One important application of gene expression analysis is to classify tissue samples according to their gene expression levels. Gene expression data are typically characterized by high dimensionality and small sample size, which makes the classification task quite challenging. In this paper, we present a data-dependent kernel for microarray data classification. The kernel function is constructed so that the class separability of the training data is maximized. A bootstrapping-based resampling scheme is introduced to reduce possible training bias. The effectiveness of this adaptive kernel for microarray data classification is illustrated with a k-nearest-neighbor (KNN) classifier. Our experimental study shows that the data-dependent kernel leads to a significant improvement in the accuracy of KNN classifiers. Furthermore, this kernel-based KNN scheme is shown to be competitive with, if not better than, more sophisticated classifiers such as support vector machines (SVMs) and uncorrelated linear discriminant analysis (ULDA) for classifying gene expression data.
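
A data-dependent kernel is commonly written as k(x, y) = q(x) q(y) k0(x, y), where the factor q is parameterized from the training data. The sketch below shows that form together with a bootstrap loop that scores a simple class-separability surrogate on resampled training sets; the factor parameterization, the anchors, and the separability score are assumptions made for illustration rather than the paper's exact objective.

```python
import numpy as np

def base_kernel(X, Y, gamma=0.1):
    """Base RBF kernel k0."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def q_factor(X, anchors, alpha, gamma=0.1):
    """q(x) = alpha_0 + sum_i alpha_i k1(x, a_i): one common parametric form
    for a data-dependent factor (an assumption, not quoted from the paper)."""
    return alpha[0] + base_kernel(X, anchors, gamma) @ alpha[1:]

def data_dependent_kernel(X, Y, anchors, alpha, gamma=0.1):
    """k(x, y) = q(x) q(y) k0(x, y)."""
    return np.outer(q_factor(X, anchors, alpha, gamma),
                    q_factor(Y, anchors, alpha, gamma)) * base_kernel(X, Y, gamma)

def separability(K, y):
    """Within-class minus between-class average similarity: a crude surrogate
    for the class-separability objective."""
    same = (y[:, None] == y[None, :])
    return K[same].mean() - K[~same].mean()

# Bootstrap loop: score the kernel on resampled training sets to damp the
# training bias the abstract mentions.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (15, 50)), rng.normal(1, 1, (15, 50))])
y = np.array([0] * 15 + [1] * 15)
anchors, alpha = X[::6], np.ones(1 + len(X[::6]))
scores = []
for _ in range(20):
    idx = rng.integers(0, len(X), len(X))
    K = data_dependent_kernel(X[idx], X[idx], anchors, alpha)
    scores.append(separability(K, y[idx]))
print(f"bootstrap separability: {np.mean(scores):.3f} +/- {np.std(scores):.3f}")
```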


Subjects
Computational Biology/methods; Gene Expression Profiling; Gene Expression Regulation, Neoplastic; Neoplasms/diagnosis; Oligonucleotide Array Sequence Analysis; Algorithms; Cell Line, Tumor; Female; Humans; Male; Models, Statistical; Neoplasms/pathology; Pattern Recognition, Automated; Reproducibility of Results; Software
5.
BMC Bioinformatics; 7: 299, 2006 Jun 14.
Article in English | MEDLINE | ID: mdl-16774678

ABSTRACT

BACKGROUND: The most fundamental task in the use of gene expression data in clinical oncology is to classify tissue samples according to their gene expression levels. Compared with traditional pattern classification, gene expression-based classification is typically characterized by high dimensionality and small sample size, which make the task quite challenging. RESULTS: In this paper, we present a modified k-nearest-neighbor (KNN) scheme, based on learning an adaptive distance metric in the data space, for cancer classification using microarray data. The distance metric, derived from a data-dependent kernel optimization procedure, can substantially increase the class separability of the data and, consequently, leads to a significant improvement in the performance of the KNN classifier. Extensive experiments show that the performance of the proposed kernel-based KNN scheme is competitive with that of sophisticated classifiers such as support vector machines (SVMs) and uncorrelated linear discriminant analysis (ULDA) in classifying gene expression data. CONCLUSION: A novel distance metric is developed and incorporated into the KNN scheme for cancer classification. This metric can substantially increase the class separability of the data in the feature space and, hence, lead to a significant improvement in the performance of the KNN classifier.
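
Any kernel induces a feature-space distance d^2(x, y) = k(x, x) - 2 k(x, y) + k(y, y), which is how a kernel-derived metric can be plugged into a standard KNN classifier. The sketch below demonstrates this with scikit-learn's precomputed-metric KNN; a plain RBF kernel and synthetic data stand in for the optimized data-dependent kernel and the real expression profiles.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics.pairwise import rbf_kernel

def kernel_distance(K_xx_diag, K_yy_diag, K_xy):
    """Squared feature-space distance induced by a kernel:
    d^2(x, y) = k(x, x) - 2 k(x, y) + k(y, y)."""
    return K_xx_diag[:, None] - 2.0 * K_xy + K_yy_diag[None, :]

# Synthetic profiles standing in for expression data (an optimized,
# data-dependent kernel would replace the plain RBF kernel used here).
rng = np.random.default_rng(3)
X_train = np.vstack([rng.normal(0, 1, (20, 100)), rng.normal(0.8, 1, (20, 100))])
y_train = np.array([0] * 20 + [1] * 20)
X_test = np.vstack([rng.normal(0, 1, (5, 100)), rng.normal(0.8, 1, (5, 100))])

K_tr = rbf_kernel(X_train, X_train, gamma=0.01)
K_te = rbf_kernel(X_test, X_train, gamma=0.01)
d_tr = kernel_distance(np.diag(K_tr), np.diag(K_tr), K_tr)
d_te = kernel_distance(rbf_kernel(X_test, X_test, gamma=0.01).diagonal(),
                       np.diag(K_tr), K_te)

knn = KNeighborsClassifier(n_neighbors=3, metric="precomputed")
knn.fit(d_tr, y_train)
print(knn.predict(d_te))
```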


Subjects
Artificial Intelligence; Biomarkers, Tumor/metabolism; Neoplasm Proteins/metabolism; Neoplasms/metabolism; Oligonucleotide Array Sequence Analysis/methods; Pattern Recognition, Automated/methods; Tissue Array Analysis/methods; Databases, Protein; Diagnosis, Computer-Assisted/methods; Discriminant Analysis; Gene Expression Profiling/methods; Humans; Neoplasms/diagnosis; Neoplasms/genetics
6.
IEEE Trans Neural Netw; 16(2): 460-74, 2005 Mar.
Article in English | MEDLINE | ID: mdl-15787152

ABSTRACT

In this paper, we present a method of kernel optimization that maximizes a measure of class separability in the empirical feature space, a Euclidean space in which the training data are embedded in such a way that the geometrical structure of the data in the feature space is preserved. Employing a data-dependent kernel, we derive an effective kernel optimization algorithm that maximizes the class separability of the data in the empirical feature space. It is shown that there is a close relationship between the class separability measure introduced here and the alignment measure recently defined by Cristianini. Extensive simulations show that the optimized kernel adapts better to the input data and leads to a substantial, and sometimes significant, improvement in the performance of various data classification algorithms.
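
The empirical feature space can be realized explicitly: eigendecomposing the training kernel matrix K = V D V^T and embedding the i-th point as the i-th row of Z = V D^(1/2) reproduces the kernel geometry exactly (Z Z^T = K), so a Fisher-style separability measure tr(S_b)/tr(S_w) can be computed there directly. The sketch below does this and uses a crude grid search over an RBF width as a stand-in for the paper's optimization of a data-dependent kernel; the specific criterion and search are illustrative assumptions.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def empirical_embedding(K, tol=1e-10):
    """Embed training points into the empirical feature space: rows of
    Z = V * sqrt(D) reproduce the kernel, i.e. Z @ Z.T == K."""
    lam, V = np.linalg.eigh(K)
    keep = lam > tol
    return V[:, keep] * np.sqrt(lam[keep])

def class_separability(K, y):
    """tr(S_b) / tr(S_w) computed in the empirical feature space: a
    Fisher-style separability measure of the kind the abstract describes."""
    Z = empirical_embedding(K)
    mu = Z.mean(axis=0)
    sb = sw = 0.0
    for c in np.unique(y):
        Zc = Z[y == c]
        mc = Zc.mean(axis=0)
        sb += len(Zc) * np.sum((mc - mu) ** 2)   # between-class scatter
        sw += np.sum((Zc - mc) ** 2)             # within-class scatter
    return sb / sw

# Pick the RBF width that maximizes separability -- a crude stand-in for
# the gradient-based optimization of a data-dependent kernel.
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 1, (25, 10)), rng.normal(1.5, 1, (25, 10))])
y = np.array([0] * 25 + [1] * 25)
best = max((class_separability(rbf_kernel(X, X, gamma=g), y), g)
           for g in [0.001, 0.01, 0.1, 1.0])
print(f"best separability {best[0]:.3f} at gamma={best[1]}")
```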


Subjects
Empirical Research; Neural Networks, Computer
7.
IEEE Trans Neural Netw; 15(2): 417-29, 2004 Mar.
Article in English | MEDLINE | ID: mdl-15384534

ABSTRACT

This paper presents a new self-creating neural network model in which a branching mechanism is incorporated into competitive learning. Unlike other self-creating models, the proposed scheme, called branching competitive learning (BCL), adopts a special node-splitting criterion based mainly on geometrical measurements of the movement of the synaptic vectors in the weight space. Compared with other self-creating and non-self-creating competitive learning models, the BCL network is more efficient at capturing the spatial distribution of the input data and therefore tends to give better clustering or quantization results. We demonstrate the ability of the BCL model to appropriately estimate the number of clusters in a data distribution, show its adaptability to nonstationary data inputs, and present a scheme leading to multiresolution data clustering. Extensive experiments on vector quantization for image compression illustrate the effectiveness of the BCL algorithm.
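
To make the branching idea concrete, the sketch below runs plain competitive learning with a single starting node and spawns a new node whenever the winner's successive update directions roughly reverse while the winning distance stays large, a sign that one node is straddling two clusters. This split test is a hypothetical proxy for the geometric movement-based criterion the abstract describes, not the published BCL rule.

```python
import numpy as np

def bcl_cluster(X, epochs=20, lr=0.05, split_dist=1.0, rng=None):
    """Competitive learning with a simple branching rule (hypothetical proxy
    for the BCL splitting criterion): if the winning node's successive steps
    roughly reverse direction while the winning distance stays large, the node
    is assumed to straddle two clusters and a new node is spawned."""
    if rng is None:
        rng = np.random.default_rng(0)
    W = X[rng.integers(0, len(X), 1)].copy()         # start with one node
    last = np.zeros_like(W)                           # previous step per node
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            j = int(np.argmin(((W - x) ** 2).sum(1)))  # winning node
            step = lr * (x - W[j])
            cos = (step @ last[j]) / (np.linalg.norm(step)
                                      * np.linalg.norm(last[j]) + 1e-12)
            if cos < -0.5 and np.linalg.norm(x - W[j]) > split_dist:
                W = np.vstack([W, x[None, :]])         # branch: new node at x
                last = np.vstack([last, np.zeros((1, X.shape[1]))])
            else:
                W[j] += step
                last[j] = step
    return W

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(c, 0.2, (60, 2)) for c in ([0, 0], [3, 0], [0, 3])])
print(len(bcl_cluster(X, rng=rng)), "nodes found for 3 underlying clusters")
```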


Subjects
Neural Networks, Computer; Artificial Intelligence