Remaining useful life predictions for turbofan engine degradation using semi-supervised deep architecture

Times Cited: 374
Authors
Ellefsen, Andre Listou [1 ]
Bjorlykhaug, Emil [1 ]
Aesoy, Vilmar [1 ]
Ushakov, Sergey [2 ]
Zhang, Houxiang [1 ]
Affiliations
[1] Norwegian Univ Sci & Technol, Dept Ocean Operat & Civil Engn, N-6009 Alesund, Norway
[2] Norwegian Univ Sci & Technol, Dept Marine Technol, N-7491 Trondheim, Norway
Keywords
C-MAPSS; Deep learning; Genetic algorithm; Prognostics and health management; Remaining useful life; Semi-supervised learning; ALGORITHM; NETWORKS;
DOI
10.1016/j.ress.2018.11.027
Chinese Library Classification: T [Industrial Technology]
Discipline Code: 120111 [Industrial Engineering]
Abstract
In recent years, research has proposed several deep learning (DL) approaches to provide reliable remaining useful life (RUL) predictions in Prognostics and Health Management (PHM) applications. Although supervised DL techniques, such as the Convolutional Neural Network and Long Short-Term Memory, have outperformed traditional prognosis algorithms, they still depend on large labeled training datasets. In real-life PHM applications, high-quality labeled training data can be both challenging and time-consuming to acquire. Alternatively, unsupervised DL techniques introduce an initial pre-training stage that automatically extracts degradation-related features from raw unlabeled training data. Thus, the combination of unsupervised and supervised (semi-supervised) learning has the potential to provide high RUL prediction accuracy even with reduced amounts of labeled training data. This paper investigates the effect of unsupervised pre-training on RUL predictions in a semi-supervised setup. Additionally, a Genetic Algorithm (GA) approach is applied to tune the numerous hyper-parameters in the training procedure. The advantages of the proposed semi-supervised setup have been verified on the popular C-MAPSS dataset. The experimental study compares this approach to purely supervised training, both when the training data is completely labeled and when the labeled training data is reduced, as well as to the most robust results in the literature. The results suggest that unsupervised pre-training is a promising feature for RUL predictions subject to multiple operating conditions and fault modes.
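The two-stage semi-supervised scheme described in the abstract (unsupervised pre-training on plentiful unlabeled sensor data, then supervised fine-tuning on a small RUL-labeled subset) can be sketched as below. This is a minimal illustration only: it uses synthetic stand-in data and a tied-weight linear autoencoder rather than the paper's deep, GA-tuned architecture, and every shape, name, and hyper-parameter here is a hypothetical choice for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for C-MAPSS-style data (hypothetical shapes):
# many unlabeled sensor snapshots, few labeled ones with RUL targets.
X_unlab = rng.normal(size=(500, 14))                  # 14 sensor channels
X_lab = rng.normal(size=(50, 14))
rul = X_lab @ rng.normal(size=14) + 0.1 * rng.normal(size=50)  # toy RUL labels

# --- Stage 1: unsupervised pre-training ---------------------------------
# Tied-weight linear autoencoder trained by gradient descent on the
# reconstruction error of the unlabeled data.
n_hidden = 5
W = rng.normal(scale=0.1, size=(14, n_hidden))
lr = 1e-2
for _ in range(200):
    H = X_unlab @ W                    # encode
    X_hat = H @ W.T                    # decode with tied weights
    err = X_hat - X_unlab              # reconstruction error
    # Gradient of ||X W W^T - X||^2 with respect to W (up to a constant).
    grad = X_unlab.T @ err @ W + err.T @ X_unlab @ W
    W -= lr * grad / len(X_unlab)

# --- Stage 2: supervised fine-tuning ------------------------------------
# Keep the pre-trained encoder and fit a regression head on the encoded
# labeled samples (closed-form least squares stands in for backprop).
H_lab = X_lab @ W
head, *_ = np.linalg.lstsq(np.c_[H_lab, np.ones(len(H_lab))], rul, rcond=None)

def predict_rul(x):
    """Encode raw sensor snapshots, then apply the regression head."""
    h = x @ W
    return np.c_[h, np.ones(len(h))] @ head
```

In the paper's setup the encoder would be a deep network whose weights initialize the supervised model; the sketch keeps the same pre-train/fine-tune split while staying small enough to run with only NumPy.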
Pages: 240-251
Page count: 12