Results 1 - 3 of 3
1.
IEEE Trans Image Process; 27(1): 236-248, 2018 Jan.
Article in English | MEDLINE | ID: mdl-28945594

ABSTRACT

Identifying different types of data outliers with abnormal behaviors in the multi-view setting is challenging because of the complicated data distributions across views. Conventional approaches address this by learning a new latent feature representation with a pairwise constraint on data from different views. In this paper, we argue that existing methods are expensive to generalize from two-view data to three or more views, both in the number of introduced variables and in detection performance. To address this, we propose a novel multi-view outlier detection method with consensus regularization on the latent representations. Specifically, we explicitly characterize each kind of outlier by the intrinsic cluster-assignment labels and sample-specific errors. Moreover, we provide a thorough discussion of the proposed consensus regularization and the conventional pairwise regularization. Correspondingly, an optimization procedure based on the augmented Lagrangian multiplier method is proposed and derived in detail. In the experiments, we evaluate our method on five well-known machine learning data sets with different outlier settings. Further, to show its effectiveness in real-world computer vision scenarios, we tailor the proposed model to saliency detection and face reconstruction applications. Extensive results on both the standard multi-view outlier detection task and the extended computer vision tasks demonstrate the effectiveness of the proposed method.
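The consensus idea can be illustrated with a minimal Python sketch (a simplification assumed for illustration, not the paper's exact formulation): a per-view factorization yields soft cluster assignments, and an outlier score combines each sample's disagreement with the consensus assignment and its sample-specific reconstruction error. The function multiview_outlier_scores and all parameter choices below are hypothetical.

import numpy as np
from sklearn.decomposition import NMF

def multiview_outlier_scores(views, n_clusters=3, seed=0):
    """views: list of (n_samples, d_v) non-negative arrays, one per view."""
    assignments, residuals = [], []
    for X in views:
        model = NMF(n_components=n_clusters, init="nndsvda", random_state=seed, max_iter=500)
        W = model.fit_transform(X)                       # per-view factor (soft assignments)
        R = X - W @ model.components_                    # sample-specific reconstruction error
        H = W / (W.sum(axis=1, keepdims=True) + 1e-12)   # normalized cluster assignment
        assignments.append(H)
        residuals.append(np.linalg.norm(R, axis=1))
    consensus = np.mean(assignments, axis=0)             # consensus assignment across views
    disagreement = sum(np.linalg.norm(H - consensus, axis=1) for H in assignments)
    return disagreement + sum(residuals)                 # higher score = more outlying

# Usage: two random views of the same 100 samples
rng = np.random.default_rng(0)
views = [rng.random((100, 20)), rng.random((100, 30))]
print(multiview_outlier_scores(views).argsort()[-5:])    # indices of the 5 most outlying samples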

2.
IEEE Trans Image Process; 27(1): 304-313, 2018 Jan.
Article in English | MEDLINE | ID: mdl-28976316

ABSTRACT

Domain adaptation is attracting increasing interest in the pattern recognition and computer vision fields, since it is an appealing technique for handling weakly labeled or even totally unlabeled target data by leveraging knowledge from external, well-learned sources. Conventional domain adaptation assumes that target data are accessible in the training stage. In practice, however, we often confront cases where the target data are completely unseen during training. This is extremely challenging because we have no prior knowledge of the target. In this paper, we develop a deep domain generalization framework with a structured low-rank constraint that facilitates evaluation on unseen target domains by capturing consistent knowledge across multiple related source domains. Specifically, multiple domain-specific deep neural networks are built to capture the rich information within the multiple sources. Meanwhile, a domain-invariant deep neural network is jointly designed to uncover the most consistent, common knowledge across the sources, so that it can generalize to unseen target domains in the test stage. Moreover, the structured low-rank constraint is exploited to align the multiple domain-specific networks with the domain-invariant one, in order to better transfer knowledge from the sources and boost learning in unseen target domains. Extensive experiments on several cross-domain benchmarks show the superiority of our algorithm over state-of-the-art domain generalization approaches.
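A minimal architectural sketch of the described setup, written in PyTorch under stated assumptions: each source domain gets its own feature network, a shared domain-invariant network is trained jointly, and a simple squared-distance alignment penalty stands in for the paper's structured low-rank constraint. The names DomainGeneralizer and make_net and all hyperparameters are hypothetical.

import torch
import torch.nn as nn

def make_net(d_in, d_feat=64):
    # Small feature extractor; the paper's actual architectures are deeper.
    return nn.Sequential(nn.Linear(d_in, 128), nn.ReLU(), nn.Linear(128, d_feat))

class DomainGeneralizer(nn.Module):
    def __init__(self, d_in, n_domains, d_feat=64, n_classes=10):
        super().__init__()
        self.specific = nn.ModuleList(make_net(d_in, d_feat) for _ in range(n_domains))
        self.invariant = make_net(d_in, d_feat)          # used alone on unseen targets
        self.classifier = nn.Linear(d_feat, n_classes)

    def forward(self, x, domain=None):
        f_inv = self.invariant(x)
        if domain is None:                               # test stage: unseen target domain
            return self.classifier(f_inv), None
        f_spec = self.specific[domain](x)
        align = ((f_spec - f_inv) ** 2).mean()           # simplified stand-in for low-rank alignment
        return self.classifier(f_spec), align

# One training step on a batch from source domain 1 (random data for illustration)
model = DomainGeneralizer(d_in=32, n_domains=3)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(16, 32), torch.randint(0, 10, (16,))
logits, align = model(x, domain=1)
loss = nn.functional.cross_entropy(logits, y) + 0.1 * align
opt.zero_grad()
loss.backward()
opt.step()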

3.
IEEE Trans Image Process; 26(6): 3028-3037, 2017 Jun.
Article in English | MEDLINE | ID: mdl-28436876

ABSTRACT

Classifying human actions from varied views is challenging due to the large data variations across views. The key to this problem is to learn discriminative view-invariant features that are robust to view variations. In this paper, we address the problem by learning view-specific and view-shared features using novel deep models. View-specific features capture the unique dynamics of each view, while view-shared features encode common patterns across views. A novel sample-affinity matrix is introduced in learning the shared features; it accurately balances information transfer within the same samples across multiple views and limits transfer across different samples. This allows us to learn shared features that are more discriminative and robust to view variations. In addition, incoherence between the two types of features is encouraged to reduce information redundancy and exploit the discriminative information in each of them separately. The discriminative power of the learned features is further improved by encouraging features of the same category to be geometrically closer. Robust view-invariant features are finally obtained by stacking several layers of features. Experimental results on three multi-view data sets show that our approaches outperform state-of-the-art approaches.


Subjects
Deep Learning; Human Activities/classification; Image Processing, Computer-Assisted/methods; Pattern Recognition, Automated/methods; Algorithms; Databases, Factual; Humans; Video Recording
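The sample-affinity idea from the abstract above can be sketched in a few lines of Python (an illustration under my own assumptions, not the authors' model): an affinity matrix gives full weight to the same sample seen in two views, a small weight to different samples of the same class, and zero otherwise, and then weights a cross-view feature-matching penalty. The names sample_affinity and shared_feature_transfer_loss are hypothetical.

import numpy as np

def sample_affinity(labels, same_sample=1.0, same_class=0.1):
    # S[i, j]: strong for the same sample (i == j), weak for same-class pairs, zero otherwise.
    S = np.where(labels[:, None] == labels[None, :], same_class, 0.0)
    np.fill_diagonal(S, same_sample)
    return S

def shared_feature_transfer_loss(feats_view_a, feats_view_b, S):
    # Penalize cross-view feature distances, weighted by the affinity matrix.
    diff = feats_view_a[:, None, :] - feats_view_b[None, :, :]   # (n, n, d) pairwise differences
    return float((S * (diff ** 2).sum(axis=2)).mean())

labels = np.array([0, 0, 1, 1])
fa, fb = np.random.randn(4, 8), np.random.randn(4, 8)            # features of 4 samples in two views
print(shared_feature_transfer_loss(fa, fb, sample_affinity(labels)))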