Results 1 - 2 of 2
1.
IEEE Trans Neural Netw Learn Syst; 29(5): 1525-1538, May 2018.
Article in English | MEDLINE | ID: mdl-28320678

ABSTRACT

High-dimensional data encountered in the real world are often corrupted by noise and gross outliers. Principal component analysis (PCA) fails to learn the true low-dimensional subspace in such cases, which is why robust variants of PCA, which penalize arbitrarily large outlying entries, are preferred for dimension reduction. In this paper, we argue that outliers must be studied not only in the observed data matrix but also in the orthogonal complement of the authentic principal subspace, since outliers of the latter kind can seriously skew the estimation of the principal components. We design a reinforced robustification of principal component pursuit that detects both types of outliers and eliminates their influence on the final subspace estimate. Simulation results under a range of design conditions clearly show the superiority of the proposed method over other popular implementations of robust PCA. The paper also showcases applications of the method to challenging face recognition and video background subtraction scenarios. Along with recovering a usable low-dimensional subspace from real-world data sets, the technique can capture semantically meaningful outliers.
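
The reinforced robustification itself is not described in this abstract. As background, here is a minimal sketch of standard principal component pursuit, the baseline such methods build on, which splits an observed matrix into a low-rank subspace part and a sparse outlier part by alternating singular-value and entrywise soft-thresholding (a simplified inexact augmented-Lagrangian scheme). The function names pcp, shrink, and svd_shrink, and the step-size heuristic, are illustrative assumptions, not the paper's algorithm.

```python
# Sketch of standard principal component pursuit (PCP), not the
# paper's reinforced variant: decompose M into low-rank L plus
# sparse outliers S by solving min ||L||_* + lam*||S||_1 s.t. L+S=M.
import numpy as np

def shrink(X, tau):
    """Entrywise soft-thresholding operator."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_shrink(X, tau):
    """Singular-value soft-thresholding operator."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def pcp(M, max_iter=500, tol=1e-7):
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))          # standard PCP weight
    mu = m * n / (4.0 * np.abs(M).sum())    # common step-size heuristic
    Y = np.zeros_like(M)                    # dual (Lagrange multiplier)
    S = np.zeros_like(M)
    for _ in range(max_iter):
        L = svd_shrink(M - S + Y / mu, 1.0 / mu)   # low-rank update
        S = shrink(M - L + Y / mu, lam / mu)       # sparse-outlier update
        residual = M - L - S
        Y += mu * residual
        if np.linalg.norm(residual) <= tol * np.linalg.norm(M):
            break
    return L, S

# Toy usage: rank-5 data with a few gross outlying entries.
rng = np.random.default_rng(0)
L_true = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 80))
S_true = np.zeros_like(L_true)
idx = rng.choice(L_true.size, size=200, replace=False)
S_true.flat[idx] = rng.uniform(-10, 10, size=200)
L_hat, S_hat = pcp(L_true + S_true)
print(np.linalg.norm(L_hat - L_true) / np.linalg.norm(L_true))  # small
```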

2.
IEEE Trans Neural Netw Learn Syst; 27(10): 1997-2008, Oct 2016.
Article in English | MEDLINE | ID: mdl-26672049

ABSTRACT

Deep hierarchical representations of data have been found to provide more informative features for several machine learning applications. Moreover, multilayer neural networks surprisingly tend to achieve better performance when subjected to unsupervised pretraining. The boom in deep learning motivates researchers to identify the factors that contribute to its success. One candidate explanation is the flattening of manifold-shaped data in the higher layers of neural networks. However, it is unclear how to measure the flattening of such manifold-shaped data and how much flattening a deep neural network can achieve. This paper provides the first quantitative evidence for the flattening hypothesis. To this end, we propose several quantities for measuring manifold entanglement under certain assumptions and conduct experiments with both synthetic and real-world data. Our experimental results validate the proposition and lead to new insights into deep learning.
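
The abstract does not define the proposed entanglement quantities, so the following is only an illustrative probe of manifold flattening under simple assumptions: for each point, fit a local PCA to its k nearest neighbors and record the fraction of neighborhood variance captured by the top d principal directions, so that flatter (less entangled) manifolds score closer to 1. The function local_flatness and its parameters are hypothetical, not the measures from the paper.

```python
# Illustrative local-linearity probe of manifold flattening: a flat
# d-dimensional manifold has neighborhoods whose variance lies almost
# entirely in d principal directions; curvature/entanglement leaks
# variance into extra directions and lowers the score.
import numpy as np

def local_flatness(X, k=10, d=1):
    """Mean fraction of local variance in the top-d principal directions.

    X : (n_points, n_features) array, e.g. representations at one layer.
    """
    n = X.shape[0]
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    scores = []
    for i in range(n):
        nbrs = np.argsort(dists[i])[1:k + 1]      # skip the point itself
        patch = X[nbrs] - X[nbrs].mean(axis=0)    # center the neighborhood
        var = np.linalg.svd(patch, compute_uv=False) ** 2
        scores.append(var[:d].sum() / var.sum())
    return float(np.mean(scores))

# Toy usage: a curved 1-D manifold (circular arc) vs. a flattened one.
t = np.linspace(0, np.pi, 200)
curved = np.stack([np.cos(t), np.sin(t)], axis=1)
flat = np.stack([t, np.zeros_like(t)], axis=1)
print(local_flatness(curved, k=10, d=1))  # < 1: neighborhoods are curved
print(local_flatness(flat, k=10, d=1))    # == 1: perfectly flat
```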
