1.
IEEE Trans Image Process; 27(2): 949-963, 2018 Feb.
Article in English | MEDLINE | ID: mdl-29757738

ABSTRACT

Recent years have witnessed the emergence of vector quantization (VQ) techniques for efficient similarity search. VQ partitions the feature space into a set of codewords and encodes data points as integer indices over those codewords; the distance between data points can then be efficiently approximated by simple memory-lookup operations. Through this compact quantization, the storage cost and search complexity are significantly reduced, facilitating efficient large-scale similarity search. However, the performance of several celebrated VQ approaches degrades significantly on noisy data. In addition, such approaches support only a narrow range of applications because their distortion measure is limited to the ℓ2 norm. To address the shortcomings of the squared Euclidean (ℓ2,2-norm) loss function employed by VQ approaches, in this paper we propose a novel robust and general VQ framework, named RGVQ, which enhances both the robustness and the generality of VQ approaches. Specifically, an ℓp,q-norm loss function is proposed to conduct ℓp-norm similarity search, rather than ℓ2-norm search, and the q-th-order loss is used to enhance robustness. Although changing the loss function to the ℓp,q norm makes VQ approaches more robust and generic, it poses a challenge: a non-smooth, non-convex, orthogonality-constrained ℓp,q-norm function must be minimized. To solve this problem, we propose a novel and efficient optimization scheme, specialize it to VQ approaches, and theoretically prove its convergence. Extensive experiments on benchmark data sets demonstrate that the proposed RGVQ outperforms the original VQ for several approaches, especially when searching for similar items in noisy data.
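As background to the abstract above, the basic VQ pipeline it builds on (learn a codebook, encode points as integer indices, approximate query distances by a lookup table) can be sketched in NumPy. This is a minimal illustration of plain vector quantization with a k-means codebook, not the proposed RGVQ; all names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_codebook(data, k, iters=20):
    # Plain k-means: learn k codewords as a minimal stand-in for VQ training.
    centers = data[rng.choice(len(data), k, replace=False)]
    for _ in range(iters):
        d = ((data[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(1)
        for j in range(k):
            pts = data[assign == j]
            if len(pts):
                centers[j] = pts.mean(0)
    return centers

def encode(data, centers):
    # Represent each point by the index of its nearest codeword.
    d = ((data[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return d.argmin(1)

data = rng.normal(size=(500, 8))
centers = train_codebook(data, k=16)
codes = encode(data, centers)

# Asymmetric distance: compare the query to each codeword once,
# then approximate every point's distance by one table lookup.
query = rng.normal(size=8)
lut = ((centers - query) ** 2).sum(1)  # length-16 lookup table
approx = lut[codes]                    # approximate squared distances
print(codes.shape, approx.shape)
```

The lookup step is why quantization makes large-scale search cheap: one distance computation per codeword replaces one per data point.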

2.
IEEE Trans Image Process; 26(11): 5257-5269, 2017 Nov.
Article in English | MEDLINE | ID: mdl-28767370

ABSTRACT

The task of feature selection is to find the most representative features in the original high-dimensional data. Because class-label information is absent, selecting appropriate features in unsupervised scenarios is much harder than in supervised ones. In this paper, we investigate the potential of locally linear embedding (LLE), a popular manifold learning method, for the feature selection task. It is straightforward to apply the idea of LLE within the graph-preserving feature selection framework. However, we find that this straightforward application suffers from several problems: it fails when all elements of a feature are equal, it is not scaling-invariant, and it cannot capture changes in the graph efficiently. To solve these problems, we propose a new filter-based feature selection method based on LLE, named the LLE score. The proposed criterion measures the difference between the local structure of each feature and that of the original data. Our classification experiments on two face image data sets, an object image data set, and a handwritten digits data set show that the LLE score outperforms state-of-the-art methods, including data variance, the Laplacian score, and the sparsity score.
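For context on the baselines this abstract compares against, the Laplacian score can be sketched in a few lines of NumPy. This is the standard baseline criterion (smaller score = better feature), not the proposed LLE score; the graph construction (kNN adjacency with heat-kernel weights) is one common choice, and the data here is synthetic and illustrative.

```python
import numpy as np

def laplacian_score(X, k=5, t=1.0):
    """Laplacian score of each feature; smaller means the feature
    better respects the local geometry of the data graph.
    X: (n_samples, n_features)."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    # kNN graph (excluding self) with heat-kernel weights, symmetrized.
    idx = np.argsort(d2, axis=1)[:, 1:k + 1]
    rows = np.repeat(np.arange(n), k)
    W = np.zeros((n, n))
    W[rows, idx.ravel()] = np.exp(-d2[rows, idx.ravel()] / t)
    W = np.maximum(W, W.T)
    D = W.sum(1)
    scores = []
    for r in range(X.shape[1]):
        f = X[:, r].astype(float)
        f = f - (f @ D) / D.sum()       # remove the trivial constant component
        num = f @ (np.diag(D) - W) @ f  # f^T L f, graph-smoothness of feature r
        den = f @ (D * f)               # f^T D f, degree-weighted variance
        scores.append(num / den if den > 0 else np.inf)
    return np.array(scores)

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 6))
ls = laplacian_score(X)
print(ls)
```

A filter method like this scores each feature independently of any downstream classifier, which is what makes it usable in the unsupervised setting the abstract describes.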
