1.
ISA Trans ; 114: 1-14, 2021 Aug.
Article in English | MEDLINE | ID: mdl-33388145

ABSTRACT

Because data are often contaminated by measurement noise and outliers, it is difficult for traditional subspace discriminant analysis to extract optimal diagnostic information. To alleviate this problem, a robust principal subspace discriminant analysis algorithm for fault diagnosis is designed, which computes optimal latent features while reducing the impact of redundant information. Specifically, dual constraints, a weighted principal subspace center and an l2,1-norm penalty, are introduced into the objective function to suppress outliers and noise. Moreover, because the current behavior of a dynamic process depends on past observations, analyzing only the current data may lead to an incorrect interpretation of the mechanism model, especially when similar variable data arise under two different conditions. Therefore, building on the robust principal subspace discriminant analysis, we further develop a dynamic enhanced version. The dynamic enhanced method uses a dynamic augmented matrix to fold the latent features of historical data into the current shifted features, thereby enlarging the difference between similar modes. Finally, experiments on the Tennessee Eastman process and a commercial multiphase flow process demonstrate that the proposed method achieves superior diagnostic performance and satisfactory convergence speed.
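The l2,1-norm penalty mentioned in the abstract is the sum of the Euclidean norms of a matrix's rows; the function below is a minimal illustrative sketch of that quantity, not the authors' implementation (the full algorithm's objective and optimization are not given in the abstract).

```python
import numpy as np

def l21_norm(X):
    """l2,1-norm of a matrix: the sum of the Euclidean norms of its rows.

    A row with a large norm (a likely outlier) contributes linearly rather
    than quadratically, which is why the l2,1-norm is less sensitive to
    outliers than the squared Frobenius norm.
    """
    return float(np.sum(np.linalg.norm(X, axis=1)))

X = np.array([[3.0, 4.0],
              [0.0, 0.0],
              [1.0, 0.0]])
print(l21_norm(X))  # row norms are 5, 0, 1, so the l2,1-norm is 6.0
```

Compare this with the squared Frobenius norm of the same matrix (26.0), where the outlier-like first row dominates far more strongly.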

2.
ISA Trans ; 108: 106-120, 2021 Feb.
Article in English | MEDLINE | ID: mdl-32854955

ABSTRACT

Adopting an efficient process monitoring technique is crucial to ensuring safe process operation and improving product quality. Toward this end, a modified canonical variate analysis based on dynamic kernel decomposition (DKDCVA) is proposed for quality monitoring of dynamic nonlinear processes. Unlike traditional canonical variate analysis and its kernel extensions, the chief aim of the proposed method is to establish a partial-correlation nonlinear model between input dynamic kernel latent variables and output variables while maximizing the extracted feature information. More specifically, the dynamic nonlinear model is orthogonally decomposed by singular value decomposition into a quality-related subspace and an independent subspace. From the perspective of quality monitoring, Hankel matrices of the past and future vectors of the quality-related subspace are derived in detail, and the corresponding statistical metrics are constructed. Furthermore, because process variables may be non-Gaussian, kernel density estimation is used to evaluate the upper control limit in place of traditional control limits. Finally, experiments on a simple numerical example, the Tennessee Eastman process, and the hot strip mill process indicate that DKDCVA is preferable for monitoring abnormal operation in dynamic nonlinear processes.
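Canonical variate analysis of the kind described above starts by stacking lagged observations into past and future Hankel matrices. The sketch below shows that stacking step only, under assumed conventions (most-recent-first ordering for the past block); it is an illustration, not the paper's DKDCVA implementation.

```python
import numpy as np

def hankel_past_future(y, p, f):
    """Stack lagged observations into past and future Hankel matrices.

    y : (N, m) array of m-dimensional observations over N time steps.
    p, f : number of past and future lags.
    Returns Yp of shape (p*m, K) and Yf of shape (f*m, K), where
    K = N - p - f + 1 is the number of valid time windows. Column k of
    Yp stacks y[t], y[t-1], ..., y[t-p+1] (most recent first) for
    t = p - 1 + k, and column k of Yf stacks y[t+1], ..., y[t+f].
    """
    N, m = y.shape
    K = N - p - f + 1
    Yp = np.vstack([y[p - 1 - i : p - 1 - i + K].T for i in range(p)])
    Yf = np.vstack([y[p + i : p + i + K].T for i in range(f)])
    return Yp, Yf
```

In a CVA-style method, one would then compute the covariances of `Yp` and `Yf` and extract canonical variates via a singular value decomposition of the scaled cross-covariance.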

3.
IEEE Trans Neural Netw Learn Syst ; 29(6): 2278-2293, 2018 06.
Article in English | MEDLINE | ID: mdl-28436895

ABSTRACT

This paper proposes a new online learning algorithm based on adaptive control (AC) theory, which we therefore call the AC algorithm. In contrast to the gradient descent (GD) and exponential gradient (EG) algorithms previously applied to online prediction, we adapt AC theory to online prediction problems and investigate two key questions: how to obtain a new update law with a tighter upper bound on the error than the square loss, and how the upper bounds on accumulated loss compare across the three algorithms. We derive a new update law that fully exploits model reference AC theory. Moreover, we present an upper bound on the worst-case expected loss of the AC algorithm and compare it with previously known bounds for the GD and EG algorithms. The loss bound obtained here is a time-varying function that provides increasingly accurate estimates of the upper bound. The AC algorithm incurs a much smaller loss only when the number of samples satisfies certain conditions given in the paper. Experiments on both simple artificial and real data sets show that our update law is feasible and our upper bound is quite tight. The main contributions of this paper are twofold: first, we develop the new online AC algorithm, and second, we obtain improved bounds (Theorems 2-4).
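For context on the baselines the abstract compares against, the snippet below sketches the standard online gradient descent protocol for linear prediction under square loss: predict, incur a loss, then update the weights. This is the GD baseline only, not the paper's AC update law, and the learning rate is an assumed illustrative choice.

```python
import numpy as np

def gd_online(X, y, eta=0.1):
    """Online gradient descent for linear prediction with square loss.

    At each round t the learner predicts yhat = w . x_t, incurs the
    loss (yhat - y_t)**2, and updates w <- w - eta * (yhat - y_t) * x_t.
    Returns the final weights and the cumulative (accumulated) loss,
    the quantity that online-learning regret bounds control.
    """
    w = np.zeros(X.shape[1])
    total_loss = 0.0
    for x_t, y_t in zip(X, y):
        yhat = w @ x_t          # predict before seeing the label
        err = yhat - y_t
        total_loss += err ** 2  # incur the square loss
        w -= eta * err * x_t    # gradient step on the revealed example
    return w, total_loss
```

On a stationary, realizable stream (e.g., constant inputs and labels) the per-round loss shrinks geometrically, so the accumulated loss stays bounded; comparing such accumulated-loss bounds across GD, EG, and the proposed AC update is exactly the kind of analysis the abstract describes.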
