
Artificial Neural Networks 2

Tim de Bruin, Robert Babuška

[email protected]

Knowledge-Based Control Systems (SC42050), Cognitive Robotics

3mE, Delft University of Technology, The Netherlands

07-03-2018

Recap artificial neural networks part 1

[Figure: a feedforward network with inputs x1, x2, ..., xp, weights w1, w2, ..., wp, intermediate activations z and v, outputs y1 and y2, and plots of the nonlinearity σ(z).]

Forward pass: y = f(x; w). The input x is propagated through the network structure, where the weights w and the nonlinearities determine the output y.

2 / 49
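As an illustration of the forward pass y = f(x; w), a minimal sketch of a one-hidden-layer network in Python/numpy; the layer sizes, the tanh nonlinearity and the weight names are assumptions for the example, not the exact network on the slide.

import numpy as np

def forward(x, W1, b1, W2, b2):
    """Forward pass of a small fully connected network: y = f(x; w)."""
    z = np.tanh(W1 @ x + b1)   # hidden activations through a sigmoid-like nonlinearity
    y = W2 @ z + b2            # linear output layer
    return y

rng = np.random.default_rng(0)
p, n_hidden, n_out = 3, 4, 2                  # input, hidden and output sizes (example values)
W1, b1 = rng.normal(size=(n_hidden, p)), np.zeros(n_hidden)
W2, b2 = rng.normal(size=(n_out, n_hidden)), np.zeros(n_out)

x = rng.normal(size=p)                        # one input sample
print(forward(x, W1, b1, W2, b2))             # network output y = f(x; w)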

Recap artificial neural networks part 1

Backward pass: calculate ∇W J and use it in an optimization algorithm to iteratively update the weights of the network to minimize the loss J.

[Figure: the loss function J(y, t) compares the network output y with the target output t; the partial derivatives ∂J/∂y1, ∂J/∂y2, ∂J/∂v, ∂J/∂z, ∂J/∂w flow backwards through the network.]

3 / 49
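A minimal sketch of the backward-pass idea for a single linear neuron with a squared-error loss, assuming plain gradient descent with a fixed learning rate; the analytic gradient is specific to this toy model, not to the general network above.

import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))            # inputs
w_true = np.array([1.0, -2.0, 0.5])
t = X @ w_true                           # targets from a known linear function

w = np.zeros(3)                          # initial weights
lr = 0.1                                 # learning rate
for _ in range(200):
    y = X @ w                            # forward pass
    grad = 2.0 / len(X) * X.T @ (y - t)  # gradient of J = mean (y - t)^2 w.r.t. w
    w -= lr * grad                       # gradient descent update
print(w)                                 # approaches w_true as J is minimized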

Outline

Last lecture:

1 Introduction to artificial neural networks

2 Simple networks & approximation properties

3 Deep Learning

4 Optimization

This lecture:

1 Regularization & Validation

2 Specialized network architectures

3 Beyond supervised learning

4 Examples

4 / 49


Outline

1 Regularization & Validation

2 Specialized structures

3 (Semi) Unsupervised Learning & Reinforcement Learning

4 Examples

5 / 49

Approximation error vs. number of parameters

[Plot: approximation error J versus the number of parameters; the error on the training data decreases as the number of parameters grows.]

6 / 49

Approximation error vs. number of parameters

[Plot: approximation error J versus the number of parameters; the training-data error keeps decreasing, but the error on new data reaches a minimum and then increases again.]

7 / 49

Underfitting

[Plot: y versus x, an underfitted model.]

8 / 49


Good fit

[Plot: y versus x, a well-fitted model.]

9 / 49

Overfitting

[Plot: y versus x, an overfitted model.]

10 / 49

Validation

System: y = f(x) or y(k + 1) = f(x(k), u(k))
Model: y = F(x; θ) or y(k + 1) = F(x(k), u(k); θ)

True criterion:

I = ∫_X ‖f(x) − F(x)‖ dx    (1)

This usually cannot be computed, as f(x) is not available; use the available data to numerically approximate (1):

- use a validation set
- cross-validation (randomize)

11 / 49

Validation Data Set

[Plot: y versus x, with separate training data and validation data points.]

12 / 49
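A minimal sketch of splitting the available data into a training and a validation set; the split ratio, the random shuffling and the toy sine data are assumptions for illustration.

import numpy as np

def train_val_split(X, Y, val_fraction=0.3, seed=0):
    """Randomly split the data; train on one part, validate on the other."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_val = int(val_fraction * len(X))
    val, train = idx[:n_val], idx[n_val:]
    return X[train], Y[train], X[val], Y[val]

X = np.linspace(0, 1, 50).reshape(-1, 1)
Y = np.sin(2 * np.pi * X)
X_tr, Y_tr, X_val, Y_val = train_val_split(X, Y)
print(len(X_tr), len(X_val))   # e.g. 35 training and 15 validation samples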


Cross-Validation

- Regularity criterion (for two data sets A and B):

  RC = (1/2) [ (1/NA) ∑_{i=1}^{NA} (yA(i) − ŷAB(i))² + (1/NB) ∑_{i=1}^{NB} (yB(i) − ŷBA(i))² ]

  where ŷAB(i) is the prediction for sample i of set A by the model trained on set B (and vice versa for ŷBA).

- v-fold cross-validation

13 / 49
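A sketch of the regularity criterion above for two data sets A and B; the arrays of measured outputs and cross-predictions are made-up example values, and the model predictions are assumed to come from models trained on the opposite set.

import numpy as np

def regularity_criterion(y_A, yhat_A_from_B, y_B, yhat_B_from_A):
    """RC = 1/2 [ MSE on set A of the model trained on B
               + MSE on set B of the model trained on A ]."""
    mse_A = np.mean((y_A - yhat_A_from_B) ** 2)
    mse_B = np.mean((y_B - yhat_B_from_A) ** 2)
    return 0.5 * (mse_A + mse_B)

# Hypothetical example values: measured outputs and cross-predictions.
y_A = np.array([1.0, 2.0, 3.0]);  yhat_A_from_B = np.array([1.1, 1.9, 3.2])
y_B = np.array([0.5, 1.5]);       yhat_B_from_A = np.array([0.4, 1.7])
print(regularity_criterion(y_A, yhat_A_from_B, y_B, yhat_B_from_A))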

Some Common Criteria

- Mean squared error (and its square root, the root mean squared error):

  MSE = (1/N) ∑_{i=1}^{N} (y(i) − ŷ(i))²

- Variance accounted for (VAF):

  VAF = 100% ⋅ [1 − var(y − ŷ) / var(y)]

- Check the correlation of the residual y − ŷ with u, with y, and with itself.

14 / 49
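A sketch of the criteria above; the example arrays are made up.

import numpy as np

def mse(y, y_hat):
    """Mean squared error between measured y and predicted y_hat."""
    return np.mean((y - y_hat) ** 2)

def rmse(y, y_hat):
    """Root mean squared error."""
    return np.sqrt(mse(y, y_hat))

def vaf(y, y_hat):
    """Variance accounted for, in percent."""
    return 100.0 * (1.0 - np.var(y - y_hat) / np.var(y))

y     = np.array([0.0, 1.0, 2.0, 3.0])
y_hat = np.array([0.1, 0.9, 2.2, 2.8])
print(mse(y, y_hat), rmse(y, y_hat), vaf(y, y_hat))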

Test set

The validation set is used to select the right hyper-parameters.

- Structure of the network
- Cost function
- Optimization parameters
- ...

What might go wrong?

Use a separate test set to verify that the hyper-parameters have not been over-fitted to the validation set.

15 / 49

Regularization

Regularization: any strategy that attempts to improve the test performance, but not the training performance.

- Limit model capacity (smaller network)
- Early stopping of the optimization algorithm
- Penalizing large weights (L1 or L2 norm)
- Ensembles (dropout)
- ...

16 / 49


Weight penalties

Cost function: Jr(y, t, w) = J*(y, t) + λ‖w‖_p^p

- p = 1 (L1): leads to 0-weights (sparsity, feature selection)
- p = 2 (L2): leads to small weights

Demo - Overfitting
Demo - L1 regularization
Demo - L2 regularization

[Figure: contours of the L1 and L2 penalties in the (w1, w2) plane around the unregularized optimum w*; the L1 penalty drives one of the weights to exactly 0.]

17 / 49
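A sketch of adding a weight penalty to the cost function, together with the gradient term used during training; λ, p and the base loss value are placeholders.

import numpy as np

def regularized_loss(w, base_loss, lam=0.01, p=2):
    """J_r = J* + lam * ||w||_p^p, for p = 1 (sparse weights) or p = 2 (small weights)."""
    penalty = np.sum(np.abs(w) ** p)
    return base_loss + lam * penalty

def penalty_gradient(w, lam=0.01, p=2):
    """Gradient of the penalty term, added to the gradient of J* during training."""
    if p == 1:
        return lam * np.sign(w)          # constant pull towards exactly zero
    return 2.0 * lam * w                 # pull proportional to the weight size

w = np.array([0.5, -2.0, 0.0])
print(regularized_loss(w, base_loss=1.0, lam=0.1, p=1))
print(penalty_gradient(w, lam=0.1, p=2))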

Model ensembles

What if we train multiple models instead of one?

For k models whose errors εi are zero mean, normally distributed, with variance v = E[εi²] and covariance c = E[εi εj], the variance of the ensemble average is:

E[ ((1/k) ∑i εi)² ] = (1/k²) E[ ∑i (εi² + ∑_{j≠i} εi εj) ] = (1/k) v + ((k−1)/k) c

When the errors are not fully correlated (c < v), the variance of the ensemble is smaller than that of a single model.

18 / 49
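A small simulation of the ensemble variance formula (1/k) v + ((k−1)/k) c; the values of k, v and c are arbitrary choices for the example.

import numpy as np

k, v, c = 5, 1.0, 0.3                            # number of models, error variance, covariance
cov = np.full((k, k), c) + (v - c) * np.eye(k)   # covariance matrix of the model errors

rng = np.random.default_rng(0)
errors = rng.multivariate_normal(np.zeros(k), cov, size=200_000)
ensemble_error = errors.mean(axis=1)             # average the k model errors

print(ensemble_error.var())                      # empirical variance of the ensemble
print(v / k + (k - 1) / k * c)                   # predicted: (1/k) v + ((k-1)/k) c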

Dropout

A practical approximation of an automatic ensemble method. During training, drop out units (neurons) with probability p. During testing, use all units and multiply the weights by (1 − p).

[Figure: randomly dropping units during each training update creates a new network (with shared parameters) every time; to use the network, include all units but scale the weights.]

19 / 49
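A sketch of dropout on one hidden layer, following the slide's convention: units are dropped with probability p during training, and the activations (equivalently, the outgoing weights) are scaled by (1 − p) at test time. The layer size, nonlinearity and weight values are made up.

import numpy as np

rng = np.random.default_rng(0)
p = 0.5                                      # probability of dropping a unit

def hidden_layer(x, W, b, train=True):
    """One hidden layer with dropout on its output units."""
    h = np.tanh(W @ x + b)
    if train:
        mask = rng.random(h.shape) >= p      # drop each unit with probability p
        return h * mask                      # a different thinned network every update
    return (1 - p) * h                       # test time: all units, activations scaled by (1 - p),
                                             # equivalent to multiplying the outgoing weights by (1 - p)

W, b = rng.normal(size=(4, 3)), np.zeros(4)
x = rng.normal(size=3)
print(hidden_layer(x, W, b, train=True))     # some outputs zeroed out
print(hidden_layer(x, W, b, train=False))    # all outputs, scaled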

More data

The best regularization strategy is more real data

Spend time on getting a dataset and think about the biases it contains.

20 / 49


Data augmentation

Sometimes existing data can be transformed to get more data. Noise can be added to the inputs, the weights, or the outputs (what do these do, respectively?). Make the noise realistic.

[Figure: the word "Overfitting" shown three times with different transformations applied.]

21 / 49
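A minimal sketch of augmenting a dataset by adding noise to the inputs; the noise level, the number of copies and the toy sine data are arbitrary choices and the noise should in practice match realistic measurement noise.

import numpy as np

def augment_with_input_noise(X, Y, copies=3, noise_std=0.05, seed=0):
    """Create extra training samples by perturbing the inputs; the targets stay the same."""
    rng = np.random.default_rng(seed)
    X_aug = [X] + [X + rng.normal(scale=noise_std, size=X.shape) for _ in range(copies)]
    Y_aug = [Y] * (copies + 1)
    return np.concatenate(X_aug), np.concatenate(Y_aug)

X = np.linspace(0, 1, 20).reshape(-1, 1)
Y = np.sin(2 * np.pi * X)
X_big, Y_big = augment_with_input_noise(X, Y)
print(X_big.shape, Y_big.shape)   # 4x as many samples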

Outline

1 Regularization & Validation

2 Specialized structures
  Recurrent Neural Networks
  Convolutional Neural Networks

3 (Semi) Unsupervised Learning & Reinforcement Learning

4 Examples

22 / 49

Prior knowledge for simplification

Use prior knowledge to limit the model search space

Sacrifice some potential accuracy to gain a lot of simplicity

Example from control theory

Reality: y(t) = f(x, u, t),  ẋ = g(x, u, t)

Usual LTI approximation: y = Cx + Du,  ẋ = Ax + Bu

23 / 49

Neural network analog

Predict yt given yt−n, ..., yt−1, ut−n, ..., ut

Strategy so far:

[Figure: a feedforward network that takes yt−4, ..., yt−1 and ut−4, ..., ut as inputs and outputs the prediction yt.]

24 / 49


Neural network analog

Let's assume y(t) = f(x(t), t) and x(t) = g(x(t − 1), u(t), t):

[Figure: a chain of states xt−4, ..., xt driven by the inputs ut−4, ..., ut and producing the outputs yt−4, ..., yt, with separate input weights w1i, ..., w5i, hidden (state-transition) weights w1h, ..., w4h, and output weights w1o, ..., w5o at every time step.]

25 / 49

Weight sharing: temporal invariance

Let's add temporal invariance: y(t) = f(x(t)) and x(t) = g(x(t − 1), u(t)), with w1 = w2 = w3 = w4 = w5 = w.

[Figure: the same unrolled chain, but now every time step shares the same input weights wi, hidden weights wh, and output weights wo; folding the chain back onto itself gives a Recurrent Neural Network (RNN).]

Significant reduction in the number of parameters w

26 / 49
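A sketch of the shared-weight recurrence x(t) = g(x(t−1), u(t)), y(t) = f(x(t)), using a tanh state update and a linear output; the dimensions, nonlinearity and initialization are assumptions.

import numpy as np

def rnn_forward(u_seq, x0, Wi, Wh, Wo):
    """Apply the same weights Wi, Wh, Wo at every time step (temporal invariance)."""
    x, ys = x0, []
    for u in u_seq:                      # unroll over the input sequence
        x = np.tanh(Wi @ u + Wh @ x)     # x(t) = g(x(t-1), u(t))
        ys.append(Wo @ x)                # y(t) = f(x(t))
    return np.array(ys)

rng = np.random.default_rng(0)
n_u, n_x, n_y = 1, 4, 1
Wi = rng.normal(scale=0.5, size=(n_x, n_u))
Wh = rng.normal(scale=0.5, size=(n_x, n_x))
Wo = rng.normal(scale=0.5, size=(n_y, n_x))

u_seq = rng.normal(size=(10, n_u))       # input sequence
print(rnn_forward(u_seq, np.zeros(n_x), Wi, Wh, Wo))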

RNN training: Back Propagation Through Time (BPTT)

1 Make n copies of the network and calculate y1, . . . , yn
2 Start at time step n and propagate the loss backwards through the unrolled networks
3 Update the weights based on the average gradient over the network copies: ∇w J = (1/n) ∑_{i=1}^{n} ∇wi J

[Figure: the unrolled network, with the loss gradients ∇yt J, ∇yt−1 J, ... flowing backwards through the shared weights wi, wh, wo.]

27 / 49

The exploding / vanishing gradients problem

Scalar case with no input: xn = w^n ⋅ x0

For w < 1, xn → 0; for w > 1, xn → ∞. This makes it hard to learn long-term dependencies.

[Figure: input, hidden and output units unrolled over time steps 1–5; can we give the RNN some memory control?]

28 / 49
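A tiny numerical illustration of xn = w^n ⋅ x0 for the scalar case; the values of w and n are arbitrary.

import numpy as np

x0, n = 1.0, 50
for w in (0.9, 1.0, 1.1):
    xn = w ** n * x0
    print(f"w = {w}: x_{n} = {xn:.3e}")
# w = 0.9 -> ~5e-03 (vanishes), w = 1.1 -> ~1e+02 (explodes)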


Gating

One more network component: element-wise multiplication of activations ⊗

Example: LSTM memory cell

[Figure: an LSTM memory cell M, whose input gate, forget gate and output gate control what is written to, kept in, and read out of the cell.]

29 / 49
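A sketch of one LSTM memory-cell update using the commonly used gate equations; the weight shapes and initialization are placeholders, and this is the standard formulation rather than necessarily the exact variant pictured on the slide.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(u, h, c, W, U, b):
    """One LSTM step: gates decide what to write to, forget from, and read out of the cell c."""
    i = sigmoid(W["i"] @ u + U["i"] @ h + b["i"])   # input gate
    f = sigmoid(W["f"] @ u + U["f"] @ h + b["f"])   # forget gate
    o = sigmoid(W["o"] @ u + U["o"] @ h + b["o"])   # output gate
    g = np.tanh(W["g"] @ u + U["g"] @ h + b["g"])   # candidate cell content
    c = f * c + i * g                               # element-wise gating of the memory cell
    h = o * np.tanh(c)                              # gated read-out
    return h, c

rng = np.random.default_rng(0)
n_u, n_h = 2, 3
W = {k: rng.normal(scale=0.5, size=(n_h, n_u)) for k in "ifog"}
U = {k: rng.normal(scale=0.5, size=(n_h, n_h)) for k in "ifog"}
b = {k: np.zeros(n_h) for k in "ifog"}

h, c = np.zeros(n_h), np.zeros(n_h)
for u in rng.normal(size=(5, n_u)):                 # short input sequence
    h, c = lstm_step(u, h, c, W, U, b)
print(h, c)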

Weight sharing: spatial equivariance

How to process grid-like information (e.g. images)? So far:

[Figure: the same digit "6" at two different positions in the input is treated as entirely different by a fully connected network.]

30 / 49

Weight sharing: spatial equivariance

We want spatial invariance / equivariance.

- Share pieces of the network (e.g. our "6" feature detector).
- Copy that part of the network across the input space and enforce that the weights remain equal.

[Figure: copies of the network part with weights w1, w2, w3, w4 applied across the input, with w1 = w2 = w3 = w4 = w.]

31 / 49

Convolution

- Instead of thinking of copying parts of the network over the input, we can think of the same operation as sliding a network part over the input.
- Step 1: Convolution:
  S(i, j) = (I ∗ K)(i, j) = ∑m ∑n I(m, n) K(i − m, j − n)

[Figure: the kernel K slides over the input I to produce the feature map S.]

32 / 49


Convolutional layer

- Step 1: Convolution:
  S(i, j) = (I ∗ K)(i, j) = ∑m ∑n I(m, n) K(i − m, j − n)
- Step 2: Detector stage: nonlinearities on top of the feature map

What if we want invariance?

[Figure: input I, kernel K, feature map S, followed by nonlinearities.]

33 / 49

Pooling

- Step 1: Convolution:
  S(i, j) = (I ∗ K)(i, j) = ∑m ∑n I(m, n) K(i − m, j − n)
- Step 2: Detector stage: nonlinearities on top of the feature map
- Step 3 (optional): Pooling: take some function (e.g. max) of an area

[Figure: input I, kernel K, feature map S, nonlinearities, pooling.]

34 / 49
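A sketch of the three stages above in numpy: convolution with a kernel K, a nonlinearity, and max pooling. The kernel values, the ReLU choice and the 2×2 pooling are assumptions; note that, like most deep learning libraries, the sliding-window loop below computes cross-correlation rather than the flipped-kernel convolution in the formula.

import numpy as np

def conv2d(I, K):
    """Valid cross-correlation: slide the kernel K over the input I to build the feature map S."""
    kh, kw = K.shape
    H, W = I.shape
    S = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(S.shape[0]):
        for j in range(S.shape[1]):
            S[i, j] = np.sum(I[i:i + kh, j:j + kw] * K)
    return S

def max_pool(S, size=2):
    """Take the max over non-overlapping size x size areas (gives local invariance)."""
    H, W = S.shape
    H2, W2 = H // size, W // size
    return S[:H2 * size, :W2 * size].reshape(H2, size, W2, size).max(axis=(1, 3))

I = np.arange(36, dtype=float).reshape(6, 6)        # toy 6x6 "image"
K = np.array([[1.0, 0.0], [0.0, -1.0]])             # a small edge-like kernel (example values)
S = conv2d(I, K)                                    # step 1: convolution -> feature map
A = np.maximum(S, 0.0)                              # step 2: detector stage (ReLU nonlinearity)
P = max_pool(A, size=2)                             # step 3: pooling
print(S.shape, A.shape, P.shape)                    # (5, 5) (5, 5) (2, 2)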

Outline

1 Regularization & Validation

2 Specialized structures

3 (Semi) Unsupervised Learning & Reinforcement Learning

4 Examples

35 / 49

NN training: so far, we have seen supervised learning

[Figure: a spectrum from supervised learning (more informative feedback), through reinforcement learning, to unsupervised learning (less informative feedback).]

36 / 49


From SL to RL

So far: get a database of inputs x and target outputs t, and minimize some loss between the network predictions y(x, θ) and the targets t by adapting the network parameters θ:

[Figure: a network mapping an n-dimensional input x to an m-dimensional output y.]

37 / 49

RL with function approximation

Didn’t we do this last week?

Global function approximation makes things trickier but potentially more useful, especially for high-dimensional state-spaces.

38 / 49

From SL to RL

DQN example: get a database of inputs x and target outputs t, and minimize some loss between the network predictions Q(x, θ) and the targets t by adapting the network parameters θ:

- Data {x, u, x′, r} is collected on-line by following the exploration policy and stored in a buffer.
- t(x, a) = r + γ max_a′ Q(x′, a′; θ−): a target network with parameters θ− that slowly track θ is used for stability.

[Figure: a network mapping the n-dimensional state x to a Q-value for each action a.]

39 / 49
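A sketch of computing the DQN targets t = r + γ max_a′ Q(x′, a′; θ−) for a batch sampled from the buffer; the q_target function is a placeholder standing in for the target network, and the transitions are toy values.

import numpy as np

gamma = 0.99

def q_target(x_next):
    """Placeholder for the target network Q(x', .; theta-): returns one value per action."""
    return np.array([0.2, 0.5, -0.1])        # made-up Q-values for 3 actions

def dqn_targets(batch):
    """t = r + gamma * max_a' Q(x', a'; theta-) for each transition (x, u, x', r) from the buffer."""
    return np.array([r + gamma * np.max(q_target(x_next)) for (x, u, x_next, r) in batch])

batch = [(None, 0, None, 1.0), (None, 2, None, 0.0)]   # toy transitions (x, u, x', r)
print(dqn_targets(batch))                              # [1.0 + 0.99*0.5, 0.0 + 0.99*0.5]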

Additional training criteria

Inputs x are often much easier to obtain than targets t.

- For deep networks, many of the earlier layers perform very general functions (e.g. edge detection).
- These layers can be trained on different tasks for which there is data.

[Figure: a network classifying face images as HAPPY or SAD; the earlier layers are more general, the later layers more task-specific.]

40 / 49


Additional training criteria

Previous lecture: data is clustered around one (or some) low-dimensional manifold(s) embedded in the high-dimensional input space.

[Figure: the manifold of face images inside the space of all images.]

Can we learn a mapping to this manifold with only input data x? [1]

[1] D. P. Kingma and M. Welling (2013). "Auto-encoding variational bayes". In: arXiv preprint arXiv:1312.6114

41 / 49

Additional training criteria - auto encoders

- Unsupervised Learning (UL): find some structure in the input data without extra information (e.g. clustering).
- Auto Encoders (AE) do this by reconstructing their input (t = x).

[Figure: the n-dimensional input x is mapped to an m-dimensional compressed representation and back to a reconstruction x̂ of dimension n.]

42 / 49
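A sketch of the auto-encoder idea: an encoder compresses x to an m-dimensional code and a decoder reconstructs x̂, trained with the target t = x. Using one linear-ish layer for each part, random untrained weights and no training loop are simplifications for illustration.

import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 3                                   # input dimension and compressed dimension (m < n)
W_enc = rng.normal(scale=0.3, size=(m, n))    # encoder weights
W_dec = rng.normal(scale=0.3, size=(n, m))    # decoder weights

def encode(x):
    return np.tanh(W_enc @ x)                 # compressed representation

def decode(code):
    return W_dec @ code                       # reconstruction x_hat

x = rng.normal(size=n)
x_hat = decode(encode(x))
reconstruction_loss = np.mean((x - x_hat) ** 2)   # trained by minimizing this, with target t = x
print(reconstruction_loss)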

Additional training criteria: regularization and optimization

Auxiliary training objectives can be added:

- Because they are easier and allow the optimization to make faster initial progress.
- To force the network to keep more generic features, as a regularization technique.

[Figure: a network with one output head for HAPPY / SAD and an auxiliary head for MALE / FEMALE.]

43 / 49

Generative models

Auto-Encoders consist of two parts:

- Encoder: compresses the input; a useful feature hierarchy for later supervised tasks.
- Decoder: decompresses the input; can be used as a generative model.

[Figure: the encoder maps x to the compressed representation, the decoder maps it back to the reconstruction x̂.]

44 / 49


Outline

1 Regularization & Validation

2 Specialized structures

3 (Semi) Unsupervised Learning & Reinforcement Learning

4 Examples

45 / 49

Applications of neural nets

- Black-box modeling of systems from input-output data.
- Reconstruction (estimation) – soft sensors.
- Classification.
- Neurocomputing.
- Neurocontrol.

46 / 49

Example: object recognition

[Figure: object recognition example, winner 2016.]

Demo - movie

47 / 49

Example: control from images

[Figure: end-to-end control from images.] [2]

[2] S. Levine, C. Finn, T. Darrell, and P. Abbeel (2016). "End-to-end training of deep visuomotor policies". In: Journal of Machine Learning Research 17.39, pp. 1–40

48 / 49


Summary

(Over-)fitting the training data can be easy; we want to generalize to new data.

- Use separate validation and test data sets to measure generalization performance.
- Use regularization strategies to prevent over-fitting.
- Use prior knowledge to make specific network structures that limit the model search space and the number of weights needed (e.g. RNN, CNN).
- Be aware of the biases and accidental regularities contained in the dataset.

49 / 49