Model-driven deep unrolling: Towards interpretable deep learning against noise attacks for intelligent fault diagnosis.
ISA Trans; 129(Pt B): 644-662, 2022 Oct.
Article in En | MEDLINE | ID: mdl-35249725
Intelligent fault diagnosis (IFD) has experienced tremendous progress, owing a great deal to deep learning (DL)-based methods developed over the past decade. However, the "black box" nature of DL-based methods still seriously hinders their wide application in industry, especially in aero-engine IFD, and how to interpret the learned features remains a challenging problem. Furthermore, IFD based on vibration signals is often affected by heavy noise, leading to a substantial drop in accuracy. To address these two problems, we develop a model-driven deep unrolling method to achieve ante-hoc interpretability: its core is to unroll an optimization algorithm for a predefined model into a neural network, which is naturally interpretable and robust to noise attacks. Motivated by the recent multi-layer sparse coding (ML-SC) model, we propose to solve a general sparse coding (GSC) problem across different layers and derive the corresponding layered GSC (LGSC) algorithm. Following the idea of deep unrolling, the proposed algorithm is unfolded into LGSC-Net, whose relationship with the convolutional neural network (CNN) is also discussed in depth. The effectiveness of the proposed model is verified on an aero-engine bevel gear fault experiment and a helical gear fault experiment under three kinds of adversarial noise attacks. Interpretability is also discussed from the perspective of the core of model-driven deep unrolling and its inductive reconstruction property.
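As a rough illustration of what "unrolling a sparse coding algorithm into a network" means, the sketch below unrolls a few ISTA-like iterations into learnable layers (a LISTA-style construction). It is a minimal sketch of the general technique, not the paper's LGSC-Net: the class and function names, layer count, dimensions, and the soft-threshold activation are all illustrative assumptions.

```python
# Minimal LISTA-style sketch of deep unrolling for sparse coding.
# Each unrolled iteration x <- soft(W y + S x, theta) becomes one
# network layer with its own learnable W, S, theta. Hypothetical
# names and sizes; not the LGSC-Net architecture from the paper.
import torch
import torch.nn as nn

def soft_threshold(x, theta):
    # Proximal operator of the l1 norm: shrinks entries toward zero.
    return torch.sign(x) * torch.relu(torch.abs(x) - theta)

class UnrolledSparseCodingNet(nn.Module):
    def __init__(self, signal_dim, code_dim, num_layers=3):
        super().__init__()
        self.W = nn.ParameterList(
            [nn.Parameter(0.01 * torch.randn(code_dim, signal_dim))
             for _ in range(num_layers)])
        self.S = nn.ParameterList(
            [nn.Parameter(0.01 * torch.randn(code_dim, code_dim))
             for _ in range(num_layers)])
        self.theta = nn.ParameterList(
            [nn.Parameter(0.1 * torch.ones(code_dim))
             for _ in range(num_layers)])

    def forward(self, y):
        # y: (batch, signal_dim) segment of a (possibly noisy) signal.
        x = torch.zeros(y.shape[0], self.W[0].shape[0], device=y.device)
        for W, S, theta in zip(self.W, self.S, self.theta):
            # One unrolled iteration = one layer; the soft threshold
            # plays the role of a ReLU-like activation, which is where
            # the analogy to CNN layers comes from.
            x = soft_threshold(y @ W.T + x @ S.T, theta)
        return x  # sparse code, interpretable as model coefficients

# Usage: codes = UnrolledSparseCodingNet(1024, 256)(torch.randn(8, 1024))
```

Because every layer corresponds to one step of a known optimization algorithm, the learned weights retain a model-based meaning, which is the source of the ante-hoc interpretability claimed in the abstract.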
Collection: 01-internacional
Database: MEDLINE
Main subject: Deep Learning
Type of study: Diagnostic_studies
Language: En
Journal: ISA Trans
Year: 2022
Document type: Article
Country of publication: United States