
Artificial Neural Networks for prognostics: Exercises

f(x, y) = sin(x) + cos(y),   x ∈ [-2π, 2π],   y ∈ [-2π, 2π]

Train an ANN to reproduce the analytic function f(x, y). Show the capability of the net to estimate the values of a surface with a large number of peaks and valleys (local maxima and minima).

Exercise 1: function regression

[Figure: surface plot of f(x, y) = sin(x) + cos(y) over [-2π, 2π] × [-2π, 2π]]

• The first step is to create samples of the function. Each pattern is composed of three signals:

        input1   input2   output
ex1:    x1       y1       f1
ex2:    x2       y2       f2
…       …        …        …
exn:    xn       yn       fn

The number of patterns has to be large enough to guarantee:

1) Training the ANN with good coverage of the whole input space. In this case, since f has many peaks and valleys, we need a large number of training patterns (e.g. n_train = 800)

2) Testing the performance of the developed ANN: we need a test set of n_test patterns (e.g. n_test = 800)

We simulate a dataset of n = n_train + n_test = 1600 patterns.

%% Step 1: create a set of n patterns from the function f
%% n = n_train + n_test
n = 1600;   % user-defined number of simulated patterns
% simulate the patterns
x = -2*pi + 4*pi*rand([1,n]);
y = -2*pi + 4*pi*rand([1,n]);
f = sin(x) + cos(y);
%% Plot of the training patterns
n_train = n/2;
plot3(x(1:n_train), y(1:n_train), f(1:n_train), 'o')

[Figure: 3D scatter plot of the n_train = 800 training patterns (x, y, f)]

%% Step 2: data normalization in [0,1]
x_norm = (x - min(x(1:n_train))) ./ (max(x(1:n_train)) - min(x(1:n_train)));
y_norm = (y - min(y(1:n_train))) ./ (max(y(1:n_train)) - min(y(1:n_train)));
f_norm = (f - min(f(1:n_train))) ./ (max(f(1:n_train)) - min(f(1:n_train)));
%% Step 3: Create the training set
P = [x_norm(1:n_train); y_norm(1:n_train)];   % training set input
T = f_norm(1:n_train);                        % training set output (target)

%% Step 4: Create a feed-forward backpropagation network
%% Let's start with a single hidden layer with 12 nodes
net = newff(P, T, 12);

%% Step 5: Train the network for 500 epochs
net.trainParam.epochs = 500;
net = train(net, P, T);

MATLAB Neural Network Toolbox

%% Finally, we can verify the performance of our network on the test set made of 800 test patterns, not yet used for the training.

%% Step 6: Build the test set
P_test = [x_norm(n_train+1:end); y_norm(n_train+1:end)];   % test set input

%% Step 7: Compute the network output
output_ANN_n = sim(net, P_test);

%% Step 8: Denormalize the output
output_ANN = min(f(1:n_train)) + output_ANN_n*(max(f(1:n_train)) - min(f(1:n_train)));

%% Step 9: Provide a measure of the ANN error on the test set
n_test = n - n_train;
MSE = sum((output_ANN - f(n_train+1:end)).^2) / n_test;

%% Step 10: Plot the results in the denormalized domain
figure
plot3(x(n_train+1:end), y(n_train+1:end), f(n_train+1:end), 'o')
axis([-2*pi 2*pi -2*pi 2*pi -2 2])
hold on
plot3(x(n_train+1:end), y(n_train+1:end), output_ANN, 'r*')
grid on

[Figure: test patterns (o) and ANN outputs (*) plotted in the denormalized domain]

Comments

The training of the ANN is based on available data representing the non-linear input/output relation.

The training phase automatically sets the ANN parameters (weights); the objective is to perform the best interpolation of the input/output data.

Without physical/mathematical modeling efforts, the ANN outputs approximate the true values.

Exercise 1: Sensitivity to the number of training patterns

Run the code assuming that only 100 patterns are available: use 50 of them to train the ANN and the remaining 50 to test its performance.

What is the minimum number of training patterns necessary to obtain a satisfactory ANN?


The number of patterns available is divided into:
1) A training set, used to compute the synaptic weights
2) A validation set, used to find the optimal ANN architecture (the optimal number of neurons in the hidden layer)
3) A test set, used to verify the final model performance

n_train = 800 patterns, n_val = 400 patterns, n_test = 400 patterns

Exercise 1: How to set the optimal number of neurons in the hidden layer?
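The selection loop itself can be sketched as follows. This is a Python/NumPy illustration, not the MATLAB toolbox used above: the "trainer" is a simplified stand-in (a random tanh hidden layer whose output weights are fitted by least squares, rather than backpropagated `newff` training), and the candidate hidden-layer sizes are illustrative assumptions. The idea is the same: train one model per candidate size and keep the size with the lowest validation MSE.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate patterns of f(x, y) = sin(x) + cos(y), as in Exercise 1
n_train, n_val = 800, 400
X = rng.uniform(-2*np.pi, 2*np.pi, size=(n_train + n_val, 2))
t = np.sin(X[:, 0]) + np.cos(X[:, 1])
Xtr, ttr = X[:n_train], t[:n_train]
Xva, tva = X[n_train:], t[n_train:]

def train(h, Xtr, ttr, seed=1):
    """Stand-in 'trainer': random tanh hidden layer with h neurons,
    output weights fitted by least squares (not backpropagation)."""
    r = np.random.default_rng(seed)
    W = r.normal(size=(2, h))
    b = r.normal(size=h)
    w, *_ = np.linalg.lstsq(np.tanh(Xtr @ W + b), ttr, rcond=None)
    return lambda X: np.tanh(X @ W + b) @ w

# Model selection: keep the hidden-layer size with the lowest validation MSE
candidates = [2, 4, 8, 12, 20, 40]
val_mse = {}
for h in candidates:
    model = train(h, Xtr, ttr)
    val_mse[h] = float(np.mean((model(Xva) - tva) ** 2))
best_h = min(val_mse, key=val_mse.get)
```

The test set is deliberately not used in this loop: it is kept aside to measure the performance of the final selected model only.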

Exercise 2: Prediction of the RUL of an Aircraft Engine

Methodological steps

Data Preprocessing • Normalization

[Figure: test transient number 47 — real RUL vs predicted RUL]

Construction of degradation indicators

Detection of the degradation onset

RUL Prediction (ANN)

C-MAPPS Dataset

C-MAPPS Dataset: Aircraft Turbofan Engine in Variable Operating Conditions

C-MAPPS: Variable Operating Conditions

• 260 run-to-failure trajectories
• 1 failure mode
• 21 measured signals
• 6 signals represent the operating conditions

Clustering of the Operating Conditions

[Figure: Signal 11 vs time in all six operating conditions (Op C 1–6); the operating conditions cluster in the (Altitude (kft), Mach Number, TRA) space]
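Since the six operating conditions form well-separated clusters in the (Altitude, Mach, TRA) space, a basic clustering algorithm is enough to label each cycle with its operating condition. The slides do not specify the algorithm, so the sketch below uses hand-rolled k-means on synthetic data; the centre coordinates are illustrative values, not the actual C-MAPPS settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical, well-separated operating-condition centres
# (Altitude [kft], Mach number, TRA) -- illustrative values only,
# not the actual C-MAPPS settings.
centres = np.array([[ 0., 0.00, 100.], [10., 0.25, 100.], [20., 0.70, 100.],
                    [25., 0.62,  60.], [35., 0.84, 100.], [42., 0.84, 100.]])
X = np.vstack([c + rng.normal(scale=[0.3, 0.01, 0.5], size=(50, 3))
               for c in centres])

def kmeans(X, k, iters=20, seed=1):
    """Basic Lloyd's k-means. In practice, rescale the three features
    first, otherwise altitude dominates the Euclidean distance."""
    r = np.random.default_rng(seed)
    C = X[r.choice(len(X), k, replace=False)]       # init from data points
    for _ in range(iters):
        # assign each cycle to its nearest centre
        labels = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1).argmin(1)
        # recompute centres (keep the old one if a cluster is empty)
        C = np.array([X[labels == j].mean(0) if np.any(labels == j) else C[j]
                      for j in range(k)])
    return labels, C

labels, C = kmeans(X, k=6)
```

Each measured cycle then carries a condition label, which is what the per-condition normalization of the next step requires.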

Data Preprocessing • Normalization

Operating Conditions Influence

Signal Normalization Procedure (for each operating condition of each signal)

• Computation of mean µi and standard deviation σi of the i-th signal

• Normalization of the signal:

Sig_i^norm(t) = ( Sig_i(t) − µ_i ) / σ_i

• The normalized signal is now independent of the current operating condition
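The per-condition z-score above can be sketched in a few lines of Python/NumPy (the function name and the toy data are illustrative; in the exercise, `cond` would come from the operating-condition clustering):

```python
import numpy as np

def normalize_per_condition(sig, cond):
    """Z-score each sample of `sig` with the mean/std of its own
    operating condition: Sig_norm(t) = (Sig(t) - mu_c) / sigma_c."""
    sig = np.asarray(sig, float)
    out = np.empty_like(sig)
    for c in np.unique(cond):
        m = (cond == c)
        out[m] = (sig[m] - sig[m].mean()) / sig[m].std()
    return out

# Toy example: one signal measured under two alternating conditions,
# both drifting upward as the component degrades
cond = np.array([0, 1, 0, 1, 0, 1, 0, 1])
sig  = np.array([10.0, 50.0, 11.0, 52.0, 12.0, 54.0, 13.0, 56.0])
sig_n = normalize_per_condition(sig, cond)
```

After normalization both conditions have zero mean and unit variance, so the condition-induced jumps disappear and the common degradation trend becomes visible.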

[Figure: Signal 11 before (left) and after (right) normalization — signal values vs time (cycles); the normalized signal reveals the degradation trend to be identified]

Data Preprocessing: Signal Normalization

Feature Selection • Prognostic Metrics Computation

21 measured signals

Method: performance of an HI (Health Indicator)

Monotonicity, Trendability, Prognosability

[Table: candidate features scored OK/KO on each of the three metrics]

M_i = no. of (d/dt > 0) / (n_i − 1) − no. of (d/dt < 0) / (n_i − 1)

Monotonicity = (1/N) · Σ_{i=1}^{N} M_i

Trendability = min_{i,j} ( corrcoef_ij )

Prognosability = exp( − std(HI_fail) / mean|HI_start − HI_fail| )

Feature Selection: Prognostic Metrics (I)

Computation of the Prognostic Metrics for each considered feature and each signal.

Prognosability = exp( − | σ_fail / (µ_fail − µ_Healthy) | )

Monotonicity = mean( #pos(d/dx) / (n − 1) − #neg(d/dx) / (n − 1) )

Trendability = min( corrcoeff_ij )

Fitness_Function = w_m · Monotonicity + w_p · Prognosability + w_t · Trendability

with w_m = 0.05, w_p = 0.8, w_t = 0.1

Feature Selection: Prognostic Metrics (II)
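The three metrics and the weighted fitness can be coded directly from their definitions. The sketch below is a Python/NumPy interpretation applied to toy trajectories: the resampling used in `trendability` to make trajectories of different lengths comparable is an assumption (the slides do not specify how corrcoef_ij is computed for unequal lengths), and the weights follow the fitness function above.

```python
import numpy as np

def monotonicity(trajs):
    """Mean over trajectories of (#(dx>0) - #(dx<0)) / (n - 1)."""
    scores = [((np.diff(x) > 0).sum() - (np.diff(x) < 0).sum()) / (len(x) - 1)
              for x in trajs]
    return float(np.mean(scores))

def trendability(trajs):
    """Minimum pairwise correlation between trajectories, after
    resampling to a common length (resampling is an assumption here)."""
    L = min(len(x) for x in trajs)
    R = np.array([np.interp(np.linspace(0, 1, L),
                            np.linspace(0, 1, len(x)), x) for x in trajs])
    return float(np.corrcoef(R).min())

def prognosability(trajs):
    """exp(-std(HI_fail) / mean|HI_start - HI_fail|)."""
    start = np.array([x[0] for x in trajs])
    fail = np.array([x[-1] for x in trajs])
    return float(np.exp(-fail.std() / np.abs(start - fail).mean()))

def fitness(trajs, wm=0.05, wp=0.8, wt=0.1):
    return (wm * monotonicity(trajs) + wp * prognosability(trajs)
            + wt * trendability(trajs))

# Toy run-to-failure health indicators: noisy, increasing, ending near 1
rng = np.random.default_rng(0)
trajs = [np.linspace(0, 1, n) + rng.normal(scale=0.01, size=n)
         for n in (100, 120, 140)]
```

Ranking all 21 signals by this fitness is what produces the best set of signals reported on the next slide.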

• Best set of signals identified: [2 3 4 11 15 17]

[Figure: fitness measure of each of the 21 signals for different HI construction options (EXP 0.0875, EXP 0.0875 Cutted, EXP 0.05, EXP 0.2, Lin 20, Gau 20)]

Fault Detection • Z-test for Elbow Point Identification

[Figure: artificial case — signal value vs time, with slope correction of the average]

Fault Detection New Approach – Artificial case (Slope Correction of the average)
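One way to read the z-test idea is: take the healthy portion of the signal as a baseline, then declare the degradation onset (elbow point) at the first window whose mean deviates significantly from the baseline. The sketch below is a hypothetical implementation under that reading — the baseline length `n0`, window `w`, and threshold `z_thr` are illustrative assumptions, and no slope correction is included.

```python
import numpy as np

def detect_onset(sig, n0=50, w=10, z_thr=4.0):
    """Hypothetical z-test onset detector: the first n0 cycles define a
    healthy baseline (mu0, s0); onset is declared at the first window of
    w cycles whose mean deviates from mu0 by more than z_thr baseline
    standard errors."""
    mu0 = np.mean(sig[:n0])
    s0 = np.std(sig[:n0], ddof=1)
    se = s0 / np.sqrt(w)
    for t in range(n0, len(sig) - w + 1):
        z = (np.mean(sig[t:t + w]) - mu0) / se
        if abs(z) > z_thr:
            return t
    return None   # no significant deviation found

# Artificial case: flat healthy signal, then a linear degradation slope
rng = np.random.default_rng(0)
sig = np.concatenate([rng.normal(0, 1, 200),
                      rng.normal(0, 1, 100) + np.linspace(0, 5, 100)])
onset = detect_onset(sig)
```

On this artificial signal the detector fires shortly after the slope begins at cycle 200; the threshold trades detection delay against false alarms on the healthy portion.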

RUL Prediction • ANN

Information available

• A MATLAB structure "trans", which contains the 260 degradation trajectories
• Trans{i} = matrix of size (duration of the engine, number of signals = 6)

[Figure: plot of column 2 of Trans{i} — Signal 2 vs time]

Exercise 2

• Start considering a model which receives as input only the current signal values and has 6 nodes in the hidden layer.

• Use the first 130 run-to-failure trajectories for training an ANN model for RUL prediction

• Verify the performance of the trained model using the remaining 130 run-to-failure trajectories
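Building the training patterns amounts to pairing, for every cycle of every training trajectory, the current signal values with the remaining useful life at that cycle. A Python/NumPy sketch of this construction, on stand-in trajectories (four random matrices instead of the 260 real ones, split half/half for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the 'trans' structure: a list of run-to-failure
# trajectories, each a (duration, 6) matrix of the selected signals
trans = [rng.normal(size=(d, 6)) for d in (120, 150, 130, 140)]

def build_patterns(trajectories):
    """One pattern per cycle: input = current signal values,
    target = RUL = number of cycles remaining until failure."""
    X, rul = [], []
    for traj in trajectories:
        T = len(traj)
        for t in range(T):
            X.append(traj[t])
            rul.append(T - 1 - t)        # RUL = 0 at the last cycle
    return np.array(X), np.array(rul)

# Trajectory-level split (never split one engine across the two sets):
# first half of the engines for training, the rest for testing
X_train, rul_train = build_patterns(trans[:2])
X_test, rul_test = build_patterns(trans[2:])
```

Note that the split is by trajectory, not by pattern: cycles from the same engine must not appear in both training and test sets, otherwise the performance estimate is optimistic.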

Exercise 3

• Start considering a time window of m = 4 and 6 nodes in the hidden layer. Then, investigate the sensitivity of the results to these 2 parameters.
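With a time window, the input at cycle t is no longer the current signal values alone but the values over the last m cycles, flattened into one vector. A Python/NumPy sketch of the window construction for a single trajectory (the trajectory here is random stand-in data):

```python
import numpy as np

def window_patterns(traj, m=4):
    """Input = signal values over the last m cycles (flattened into a
    vector of m * n_signals features), target = RUL at the current cycle.
    Patterns exist only from cycle m-1 onward."""
    T, s = traj.shape
    X = np.array([traj[t - m + 1:t + 1].ravel() for t in range(m - 1, T)])
    rul = np.array([T - 1 - t for t in range(m - 1, T)])
    return X, rul

rng = np.random.default_rng(0)
traj = rng.normal(size=(100, 6))     # one run-to-failure trajectory
X, rul = window_patterns(traj, m=4)
```

For a trajectory of 100 cycles and m = 4, this yields 97 patterns of 24 features each; the first m − 1 cycles produce no pattern, which slightly reduces the training set as the window grows.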

[Diagram: signal values over the time window fed to the ANN, which outputs the RUL]