
MIT 9.520/6.860, Fall 2017, Statistical Learning Theory and Applications

Class 20: Dictionary Learning


What is data representation?

Let X be a data space.

[Figure: a subset M of the data space X, its representation Φ(M) in F, and the reconstruction Ψ ◦ Φ(M) back in X.]

A data representation is a map

Φ : X → F ,

from the data space to a representation space F .

A data reconstruction is a map

Ψ : F → X .


Road map

Last class:

I Prologue: Learning theory and data representation

I Part I: Data representations by design

This class:

I Part II: Data representations by unsupervised learning

– Dictionary Learning
– PCA
– Sparse coding
– K-means, K-flats

Next class:

I Part III: Deep data representations


Notation

X : data space

I X = Rd or X = Cd (also more general later).

I x ∈ X

Data representation: Φ : X → F .

∀x ∈ X , ∃z ∈ F : Φ(x) = z

F : representation space

I F = Rp or F = Cp

I z ∈ F

Data reconstruction: Ψ : F → X .

∀z ∈ F ,∃x ∈ X : Ψ(z) = x


Why learning?

Ideally: automatic, autonomous learning

I with as little prior information as possible,

but also . . .

I . . . with as little human supervision as possible.

f (x) = 〈w ,Φ(x)〉F , ∀x ∈ X

Two-step learning scheme:

I supervised or unsupervised learning of Φ : X → F
I supervised learning of w in F


Unsupervised representation learning

Samples from a distribution ρ on input space X

S = {x1, . . . , xn} ∼ ρn

Training set S from ρ (supported on Xρ).

Goal: find Φ(x) which is “good” not only for S but for other x ∼ ρ.

Principles for unsupervised learning of “good” representations?


Unsupervised representation learning principles

Two main concepts:

1. Similarity preservation: it holds that

Φ(x) ∼ Φ(x′) ⇔ x ∼ x′, ∀x , x′ ∈ X

2. Reconstruction: there exists a map Ψ : F → X such that

Ψ ◦ Φ(x) ∼ x , ∀x ∈ X


Plan

We will first introduce a reconstruction-based framework for learning data representations, and then discuss several examples in some detail.

We will mostly consider X = Rd and F = Rp

I Representation: Φ : X → F .

I Reconstruction: Ψ : F → X .

If the maps are linear:

I Representation: Φ(x) = Cx (coding)

I Reconstruction: Ψ(z) = Dz (decoding)


Reconstruction based data representation

Basic idea: the quality of a representation Φ is measured by the reconstruction error provided by an associated reconstruction Ψ,

‖x − Ψ ◦ Φ(x)‖ ,

where Ψ ◦ Φ denotes the composition of Φ and Ψ (first code, then decode).


Empirical data and population

Given S = {x1, . . . , xn}, minimize the empirical reconstruction error

Ê(Φ,Ψ) = (1/n) ∑_{i=1}^n ‖xi − Ψ ◦ Φ(xi )‖² ,

as a proxy to the expected reconstruction error

E(Φ,Ψ) = ∫_X dρ(x) ‖x − Ψ ◦ Φ(x)‖² ,

where ρ is the data distribution (fixed but unknown).
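To make the objective concrete, here is a minimal numpy sketch of the empirical reconstruction error for a generic coding/decoding pair; the linear maps C and D below are illustrative placeholders, not part of the slides.

```python
import numpy as np

def empirical_reconstruction_error(X, encode, decode):
    """(1/n) * sum_i ||x_i - Psi(Phi(x_i))||^2 for samples stored as rows of X."""
    residuals = np.array([x - decode(encode(x)) for x in X])
    return np.mean(np.sum(residuals ** 2, axis=1))

# Toy linear coding/decoding: Phi(x) = C x, Psi(z) = D z (illustrative choices).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                     # n = 100 samples in R^5
D = np.linalg.qr(rng.normal(size=(5, 2)))[0]      # 5 x 2 dictionary, orthonormal columns
C = D.T                                           # coding map, here the adjoint of D
print(empirical_reconstruction_error(X, lambda x: C @ x, lambda z: D @ z))
```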


Empirical data and population

min_{Φ,Ψ} E(Φ,Ψ), E(Φ,Ψ) = ∫_X dρ(x) ‖x − Ψ ◦ Φ(x)‖² .

Caveat
Reconstruction alone is not enough . . . copying the data, i.e. Ψ ◦ Φ = I , gives zero reconstruction error!


Parsimonious reconstruction

Reconstruction is meaningful only with constraints!

I constraints implement some form of parsimonious reconstruction,

I identified with a form of regularization,

I choice of the constraints corresponds to different algorithms.

Fundamental difference with supervised learning: the problem is not well defined!


Parsimonious reconstruction

[Figure: a subset M of the data space X, its representation Φ(M) in F, and the reconstruction Ψ ◦ Φ(M) back in X.]

Dictionary learning

‖x −Ψ ◦ Φ(x)‖

Let X = Rd , F = Rp.

1. Linear reconstruction,

Ψ(z) = Dz , D ∈ D,

with D a subset of the space of linear maps from F to X .

2. Nearest neighbor representation,

Φ(x) = ΦΨ(x) = arg min_{z∈Fλ} ‖x − Dz‖², D ∈ D, Fλ ⊂ F .


Linear reconstruction and dictionaries

Each reconstruction D ∈ D can be identified with a d × p dictionary matrix with columns

a1, . . . , ap ∈ Rd .

Reconstruction of x ∈ X corresponds to a suitable linear expansion on the dictionary D with coefficients βk = zk , z ∈ Fλ,

x = Dz = ∑_{k=1}^p ak zk = ∑_{k=1}^p ak βk , β1, . . . , βp ∈ R.


Nearest neighbor representation

Φ(x) = ΦΨ(x) = arg min_{z∈Fλ} ‖x − Dz‖², D ∈ D, Fλ ⊂ F .

Nearest neighbor (NN) representation since, for D ∈ D and letting Xλ = DFλ,

Φ(x) provides the closest point to x in Xλ,

d(x ,Xλ) = min_{x′∈Xλ} ‖x − x′‖² = min_{z′∈Fλ} ‖x − Dz′‖².


Nearest neighbor representation (cont.)

NN representations are defined by a constrained inverse problem,

min_{z∈Fλ} ‖x − Dz‖².

Alternatively, let Fλ = F and add a regularization term R : F → R,

min_{z∈F} { ‖x − Dz‖² + λR(z) }.

Note: the formulations coincide for R(z) = 1_{Fλ}(z), the indicator function of Fλ, z ∈ F .


Dictionary learning

Empirical reconstruction error minimization

min_{Φ,Ψ} Ê(Φ,Ψ) = min_{Φ,Ψ} (1/n) ∑_{i=1}^n ‖xi − Ψ ◦ Φ(xi )‖²

for joint dictionary and representation learning:

min_{D∈D} (1/n) ∑_{i=1}^n min_{zi∈Fλ} ‖xi − Dzi‖² ,

where the outer minimization over D is dictionary learning and the inner minimizations over the zi are representation learning.

Dictionary learning

I learning a regularized representation on a dictionary,

I while simultaneously learning the dictionary itself.


Examples

The DL framework encompasses a number of approaches.

I PCA (& kernel PCA)

I K-SVD

I Sparse coding

I K-means

I K-flats

I . . .


Principal Component Analysis (PCA)

Let Fλ = Fk = Rk , k ≤ min{n, d}, and

D = {D : F → X , linear | D∗D = I}.

I D is a d × k matrix with orthogonal, unit norm columns

I Reconstruction:

Dz = ∑_{j=1}^k aj zj , z ∈ F

I Representation:

D∗ : X → F , D∗x = (〈a1, x〉 , . . . , 〈ak , x〉), x ∈ X


PCA and subset selection

DD∗ : X → X , DD∗x = ∑_{j=1}^k aj 〈aj , x〉 , x ∈ X .

P = DD∗ is a projection (P² = P, idempotent) onto the subspace of Rd spanned by a1, . . . , ak .


Rewriting PCA

min_{D∈D} (1/n) ∑_{i=1}^n min_{zi∈Fk} ‖xi − Dzi‖²   (inner minima: representation learning).

Note that:

Φ(x) = D∗x = arg min_{z∈Fk} ‖x − Dz‖², ∀x ∈ X ,

so we can rewrite the minimization (setting zi = D∗xi ) as

min_{D∈D} (1/n) ∑_{i=1}^n ‖xi − DD∗xi‖².

Subspace learning
Finding the k-dimensional orthogonal projection P = DD∗ with the best (empirical) reconstruction.


Learning a linear representation with PCA

Subspace learning: finding the k-dimensional orthogonal projection with the best reconstruction.


PCA computation

Recall the solution for k = 1.

For all x ∈ X , DD∗x = 〈a, x〉 a, and

‖x − 〈a, x〉 a‖² = ‖x‖² − |〈a, x〉|² ,

with a ∈ Rd such that ‖a‖ = 1.

Then, equivalently:

min_{D∈D} (1/n) ∑_{i=1}^n ‖xi − DD∗xi‖²  ⇔  max_{a∈Rd, ‖a‖=1} (1/n) ∑_{i=1}^n |〈a, xi〉|².


PCA computation (cont.)

Let X be the n × d data matrix and V = (1/n) XᵀX . Then

(1/n) ∑_{i=1}^n |〈a, xi〉|² = (1/n) ∑_{i=1}^n 〈a, xi〉 〈a, xi〉 = 〈a, (1/n) ∑_{i=1}^n 〈a, xi〉 xi〉 = 〈a,Va〉 .

Then, equivalently:

max_{a∈Rd, ‖a‖=1} (1/n) ∑_{i=1}^n |〈a, xi〉|²  ⇔  max_{a∈Rd, ‖a‖=1} 〈a,Va〉


PCA is an eigenproblem

max_{a∈Rd, ‖a‖=1} 〈a,Va〉

I Solutions are the stationary points of the Lagrangian

L(a, λ) = 〈a,Va〉 − λ(‖a‖² − 1).

I Setting ∂L/∂a = 0 gives

Va = λa, 〈a,Va〉 = λ.

The optimization problem is solved by the eigenvector of V associated with the largest eigenvalue.

Note: the reasoning extends to k > 1 – the solution is given by the first k eigenvectors of V .
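As a concrete illustration, here is a minimal numpy sketch of PCA as dictionary learning via this eigenproblem (uncentered, as on this slide); names and sizes are illustrative.

```python
import numpy as np

def pca_dictionary(X, k):
    """Top-k eigenvectors of V = (1/n) X^T X as the columns of a d x k dictionary D."""
    V = (X.T @ X) / X.shape[0]
    eigvals, eigvecs = np.linalg.eigh(V)     # eigenvalues in ascending order
    return eigvecs[:, ::-1][:, :k]           # top-k eigenvectors as columns

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))               # n = 200 samples in R^10
D = pca_dictionary(X, k=3)
Z = X @ D                                    # representations z_i = D^* x_i (rows)
X_rec = Z @ D.T                              # reconstructions D D^* x_i
print(np.mean(np.sum((X - X_rec) ** 2, axis=1)))   # empirical reconstruction error
```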


PCA model

Assumes the support of the data distribution is well approximated by a low-dimensional linear subspace.


Can we consider an affine representation?

Can we consider non-linear representations using PCA?


PCA and affine dictionaries

Consider the problem, with D as in PCA:

min_{D∈D, b∈Rd} (1/n) ∑_{i=1}^n min_{zi∈Fk} ‖xi − Dzi − b‖².

The above problem is equivalent to

min_{D∈D} (1/n) ∑_{i=1}^n ‖x̄i − DD∗ x̄i‖²   (with P = DD∗),

with x̄i = xi − m, i = 1, . . . , n, where m is the empirical mean (see the proof below).

Note: computations are unchanged, but one needs to consider centered data.


PCA and affine dictionaries (cont.)

min_{D∈D, b∈Rd} (1/n) ∑_{i=1}^n min_{zi∈Fk} ‖xi − Dzi − b‖²  ⇔  min_{D∈D} (1/n) ∑_{i=1}^n ‖x̄i − DD∗ x̄i‖²

Proof.
I Note that Φ(x) = D∗(x − b) (by optimality for z), so that

min_{D∈D, b∈Rd} (1/n) ∑_{i=1}^n ‖xi − b − P(xi − b)‖² = min_{D∈D, b∈Rd} (1/n) ∑_{i=1}^n ‖Q(xi − b)‖² ,

with P = DD∗ and Q = I − P.

I Solving with respect to b,

Qb = Qm, m = (1/n) ∑_{i=1}^n xi ,

so that Φ(x) = D∗(x − m).
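In code, the affine case just amounts to centering before computing the eigenvectors; a minimal self-contained numpy sketch with illustrative names and sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10)) + 3.0        # data with a non-zero mean
m = X.mean(axis=0)                          # empirical mean m = (1/n) sum_i x_i
Xc = X - m                                  # centered data x_bar_i = x_i - m

V = (Xc.T @ Xc) / Xc.shape[0]               # V = (1/n) Xc^T Xc
eigvals, eigvecs = np.linalg.eigh(V)
D = eigvecs[:, ::-1][:, :3]                 # top-3 principal directions (k = 3)

Phi = lambda x: D.T @ (x - m)               # affine representation Phi(x) = D^*(x - m)
Psi = lambda z: D @ z + m                   # affine reconstruction Psi(z) = D z + m
x = X[0]
print(np.linalg.norm(x - Psi(Phi(x))))      # reconstruction error for one sample
```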


Projective coordinates

We can rewrite Dz + b = D′z′,

if we let

I D ′: matrix obtained by adding to D a column equal to b

I z ′: vector obtained by adding to z a coordinate equal to 1.


PCA beyond linearity

[Figures (three slides): data in X motivating non-linear extensions of PCA.]

Kernel PCA

Consider a feature map and associated (reproducing) kernel.

Φ̃ : X → F , and K (x , x′) = 〈Φ̃(x), Φ̃(x′)〉_F

Empirical reconstruction error in the feature space:

min_{D∈D} (1/n) ∑_{i=1}^n min_{zi∈Fk} ‖Φ̃(xi ) − Dzi‖²_F .


Kernel PCA (cont.)

Similar to (linear) PCA (for k = 1),

max_{a∈F, ‖a‖_F=1} 〈a,Va〉_F

where

Va = (1/n) ∑_{i=1}^n 〈Φ̃(xi ), a〉_F Φ̃(xi ).

The representation is given by

Φ(x) = 〈v , Φ̃(x)〉_F , ∀x ∈ X ,

with v the eigenvector of V with the largest eigenvalue.

This can be computed for an arbitrary feature map/kernel.


A representer theorem for kernel PCA

Φ(x) = 〈Φ̃(x), v〉_F = (1/(nσ)) ∑_{i=1}^n K (xi , x) ui .

Proof. Linear case: K (x , x′) = 〈x , x′〉, for all x , x′ ∈ X .

I Let (1/n)K = (1/n) XXᵀ and V = (1/n) XᵀX .

I V and (1/n)K have the same (non-zero) eigenvalues.

I If u is an eigenvector of (1/n)K with eigenvalue σ, i.e. (1/n)Ku = σu, then

v = (1/(nσ)) Xᵀu = (1/(nσ)) ∑_{i=1}^n xi ui

is an eigenvector of V , also with eigenvalue σ.

Then, for all x ∈ X ,

Φ(x) = 〈x , v〉 = (1/(nσ)) ∑_{i=1}^n 〈xi , x〉 ui .

This extends to any kernel: x ↦ Φ̃(x), 〈Φ̃(x), Φ̃(x′)〉_F = K (x , x′).


Comments on PCA, KPCA

I PCA allows one to find a good representation for data distributions supported close to a linear/affine subspace.

I Non-linear extension using kernels.

Note:

I Connection between KPCA and manifold learning, e.g.Laplacian/Diffusion maps.

I Off-set/re-centering not needed if kernel is rich enough.


Sparse coding

One of the first and most famous dictionary learning techniques.

It corresponds to

I F = Rp,

I p ≥ d , Fλ = {z ∈ F : ‖z‖1 ≤ λ}, λ > 0,

I D = {D : F → X , linear | ‖Dej‖ ≤ 1, j = 1, . . . , p}.

Hence,

min_{D∈D} (1/n) ∑_{i=1}^n min_{zi∈Fλ} ‖xi − Dzi‖²

(outer minimum: dictionary learning; inner minima: sparse representation).


Computations for sparse coding

min_{D∈D} (1/n) ∑_{i=1}^n min_{zi∈Rp, ‖zi‖1≤λ} ‖xi − Dzi‖²

I not convex jointly in (D, {zi})...

I separately convex in the {zi} and D.

I Alternating minimization is natural (both steps are sketched below):

– Fix D, compute {zi}.
– Fix {zi}, compute D.

I (Other approaches are possible; see e.g. [Schnass ’15, Elad et al. ’06].)


Representation computation

1. Given the dictionary D, solve

min_{zi∈Rp, ‖zi‖1≤λ} ‖xi − Dzi‖², i = 1, . . . , n.

The problems are convex and correspond to sparse estimation; they can be solved using convex optimization techniques.

Splitting/proximal methods (here for the penalized form with parameter λ):

z(0) given, z(t+1) = S_{γtλ}(z(t) + γt D∗(xi − Dz(t))), t = 0, . . . , tmax,

with Sτ the soft-thresholding operator, applied componentwise,

Sτ(u) = max{|u| − τ, 0} u/|u| , u ∈ R.
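A minimal numpy sketch of this iteration (ISTA), assuming the penalized form ½‖x − Dz‖² + λ‖z‖1 and a constant step size; names and sizes are illustrative.

```python
import numpy as np

def soft_threshold(u, t):
    """Componentwise S_t(u) = max(|u| - t, 0) * sign(u)."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def sparse_code(x, D, lam, n_iter=200):
    """ISTA for min_z 0.5*||x - D z||^2 + lam*||z||_1; step size 1/||D||_op^2."""
    gamma = 1.0 / np.linalg.norm(D, 2) ** 2
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = soft_threshold(z + gamma * D.T @ (x - D @ z), gamma * lam)
    return z

rng = np.random.default_rng(0)
D = rng.normal(size=(20, 50)); D /= np.linalg.norm(D, axis=0)    # unit-norm atoms
x = D[:, :3] @ np.array([1.0, -2.0, 0.5])                        # a 3-sparse signal
print(np.flatnonzero(np.abs(sparse_code(x, D, lam=0.1)) > 1e-3)) # recovered support
```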


Dictionary computation

2. Given the representations Φ(xi ) = zi , i = 1, . . . , n, solve

min_{D∈D} (1/n) ∑_{i=1}^n ‖xi − DΦ(xi )‖² = min_{D∈D} (1/n) ‖X − ZDᵀ‖²_F ,

where X is the n × d data matrix, Z is the n × p matrix with rows zi , and ‖·‖_F is the Frobenius norm.

The problem is convex and solvable using convex optimization techniques.

Splitting/proximal methods (projected gradient):

D(0) given, D(t+1) = P(D(t) + γt (Xᵀ − D(t)Zᵀ)Z ), t = 0, . . . , tmax,

with P the proximal operator (column-wise projection) given by the constraint ‖Dej‖ ≤ 1:

P(D^j) = D^j/‖D^j‖ if ‖D^j‖ > 1, P(D^j) = D^j if ‖D^j‖ ≤ 1,

for each column D^j = Dej of D.
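A minimal projected-gradient sketch of this dictionary update in numpy (the step-size choice and iteration count are illustrative); in the full alternating scheme it would be interleaved with the sparse-coding step above.

```python
import numpy as np

def dictionary_update(X, Z, D, n_iter=100):
    """Projected gradient for min_D 0.5*||X - Z D^T||_F^2 with columns of norm <= 1.
    X: (n, d) data, Z: (n, p) codes, D: (d, p) dictionary."""
    gamma = 1.0 / (np.linalg.norm(Z, 2) ** 2 + 1e-12)      # step from ||Z||_op^2
    for _ in range(n_iter):
        D = D + gamma * (X - Z @ D.T).T @ Z                # gradient step
        norms = np.maximum(np.linalg.norm(D, axis=0), 1.0)
        D = D / norms                                      # project columns onto the unit ball
    return D
```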


Sparse coding model

I Assumes the support of the data distribution to be a union of (p choose s) subspaces, i.e. all possible s-dimensional subspaces in Rp, where s is the sparsity level (image credit: Elhamifar, Eldar, 2013).

I More general penalties, more general geometric assumptions.


K-means & vector quantization

Typically seen as a clustering algorithm in machine learning . . . but it is also a classical vector quantization (VQ) approach.

We revisit this point of view from a data representation perspective.


K-means & vector quantization (cont.)

K-means corresponds to

I Fλ = Fk = {e1, . . . , ek}, the canonical basis in Rk , k ≤ n

I D = {D : F → X | linear}.

Empirical reconstruction error:

min_{D∈D} (1/n) ∑_{i=1}^n min_{zi∈{e1,...,ek}} ‖xi − Dzi‖²

The problem is not convex in (D, {zi}). Approximate solutions are obtained through alternating minimization (AM).


K-means solution

Alternating minimization (Lloyd’s algorithm)

Initialize dictionary D.

1. Let Φ(xi ) = zi , i = 1, . . . , n, be the solutions of the problems

min_{zi∈{e1,...,ek}} ‖xi − Dzi‖², i = 1, . . . , n.

Assignment: Vj = {x ∈ S | Φ(x) = ej}.

(Multiple points have the same representation since k ≤ n.)

2. Update: let aj = Dej (a single dictionary atom) and solve

min_{D∈D} (1/n) ∑_{i=1}^n ‖xi − DΦ(xi )‖² = min_{a1,...,ak∈Rd} (1/n) ∑_{j=1}^k ∑_{x∈Vj} ‖x − aj‖².


Step 1: assignment

Solving the discrete problem:

min_{zi∈{e1,...,ek}} ‖xi − Dzi‖², i = 1, . . . , n.

[Figure: K-means illustrated, centroids c1, c2, c3 and the induced partition of the point cloud.]

Voronoi sets (data clusters):

Vj = {x ∈ S | z = Φ(x) = ej}, j = 1, . . . , k


Step 2: dictionary update

min_{D∈D} (1/n) ∑_{i=1}^n ‖xi − DΦ(xi )‖² = min_{a1,...,ak∈Rd} (1/n) ∑_{j=1}^k ∑_{x∈Vj} ‖x − aj‖²,

where Φ(xi ) = zi , aj = Dej .

The minimization with respect to each column aj of D is independent of the others.

Centroid computation

cj = arg min_{aj∈Rd} ∑_{x∈Vj} ‖x − aj‖² = (1/|Vj |) ∑_{x∈Vj} x , j = 1, . . . , k .

The minimum for each column is the centroid of the corresponding Voronoi set.
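Putting the two steps together, a minimal numpy sketch of Lloyd's alternating minimization (initialization scheme and iteration count are illustrative choices):

```python
import numpy as np

def kmeans_lloyd(X, k, n_iter=50, seed=0):
    """Alternate assignment to the nearest centroid (step 1) and centroid update (step 2)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)  # (n, k)
        labels = dists.argmin(axis=1)                      # step 1: assignment to V_j
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0) # step 2: centroid of V_j
    return centroids, labels

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.1, size=(30, 2)) for c in ([0, 0], [2, 2], [0, 3])])
centroids, labels = kmeans_lloyd(X, k=3)
print(np.round(centroids, 2))
```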


K-means convergence

The algorithm for solving K-means is known as Lloyd’s algorithm.

I Alternating minimization approach:
=⇒ the value of the objective function is non-increasing with the iterations.

I Only a finite number of possible partitions into k clusters:
=⇒ the algorithm converges to a local minimum in a finite number of steps.


K-means initialization

Convergence to a global minimum can be ensured (with high probability), provided a suitable initialization.

Intuition: spreading out the initial k centroids.

K-means++ [Arthur, Vassilvitskii ’07]

1. Choose a centroid uniformly at random from the data.

2. Compute the distance of each data point to the nearest centroid already chosen,

D(x , {cj}) = min_{cj} ‖x − cj‖² , ∀x ∈ S , j < k.

3. Choose a new centroid from the data using probabilities proportionalto such distances.

4. Repeat steps 2 and 3 until k centers have been chosen.
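A minimal numpy sketch of this seeding procedure (D²-weighted sampling); names are illustrative.

```python
import numpy as np

def kmeans_pp_init(X, k, seed=0):
    """Pick the first centroid uniformly, then each new one with probability
    proportional to its squared distance to the nearest chosen centroid."""
    rng = np.random.default_rng(seed)
    centroids = [X[rng.integers(len(X))]]                                    # step 1
    for _ in range(k - 1):
        d2 = np.min([((X - c) ** 2).sum(axis=1) for c in centroids], axis=0) # step 2
        centroids.append(X[rng.choice(len(X), p=d2 / d2.sum())])             # step 3
    return np.array(centroids)
```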


K-means model

[Figure: M = supp{ρ} covered by the Voronoi cells of centroids c1, c2, c3 (piecewise constant approximation); e.g. x ≈ [c1 c2 c3](0, 1, 0)ᵀ = c2, i.e. x is represented by its nearest centroid.]

I representation: extreme sparse representation, only one non-zero coefficient (vector quantization).

I reconstruction: piecewise constant approximation of the data; each point is reconstructed by the nearest mean.

Extensions considering higher order approximation, e.g. piecewise linear.


K-flats & piecewise linear representation

[Figure: K-flats illustrated, M = supp{ρ} approximated by three flats (piecewise linear approximation); e.g. x ≈ [Ψ1 Ψ2 Ψ3](0, c2, 0)ᵀ, i.e. x is reconstructed via its coefficients on the nearest flat.]

I k-flats representation: structured sparse representation; the coefficients are the projection onto a flat.

I k-flats reconstruction: piecewise linear approximation of the data; each point is reconstructed by projection onto the nearest flat.


Remarks on K-flats


I Principled way to enrich the k-means representation (cf. softmax).

I Generalized VQ.

I Geometric structured dictionary learning.

I Non-local approximations.


K-flats computations

Alternating minimization

1. Initialize flats Ψ1, . . . ,Ψk .

2. Assign each point to the nearest flat,

Vj = {x ∈ S | ‖x − ΨjΨj∗x‖ ≤ ‖x − ΨtΨt∗x‖ , t ≠ j}.

3. Update flats by computing (local) PCA in each cell Vj , j = 1, . . . , k.
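A minimal numpy sketch of this alternating scheme with linear flats, starting from a random partition (flat dimension and iteration count are illustrative):

```python
import numpy as np

def kflats(X, k, dim, n_iter=20, seed=0):
    """Alternate (i) refitting each flat by a truncated SVD (local PCA) on its cell
    and (ii) assigning each point to the nearest flat."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(k, size=len(X))                  # random initial partition
    for _ in range(n_iter):
        bases = []
        for j in range(k):
            Xj = X[labels == j] if np.any(labels == j) else X
            _, _, Vt = np.linalg.svd(Xj, full_matrices=False)
            bases.append(Vt[:dim].T)                       # d x dim orthonormal basis Psi_j
        dists = np.empty((len(X), k))
        for j in range(k):
            proj = X @ bases[j] @ bases[j].T               # Psi_j Psi_j^* x for every x
            dists[:, j] = ((X - proj) ** 2).sum(axis=1)
        labels = dists.argmin(axis=1)                      # assign to the nearest flat
    return bases, labels
```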


Kernel K-means & K-flats

It is easy to extend K-means & K-flats using kernels.

Φ̃ : X → H, and K (x , x′) = 〈Φ̃(x), Φ̃(x′)〉_H .

Consider the empirical reconstruction problem in the feature space,

min_{D∈D} (1/n) ∑_{i=1}^n min_{zi∈{e1,...,ek}} ‖Φ̃(xi ) − Dzi‖²_H .

Note: Computation can be performed in closed form

I Kernel K-means: distance computation.

I Kernel K-flats: distance computation + local KPCA.
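For kernel K-means, the distance of Φ̃(xi) to each cluster mean in feature space expands entirely in terms of the Gram matrix; a minimal sketch (random initialization is an illustrative choice):

```python
import numpy as np

def kernel_kmeans(K, k, n_iter=30, seed=0):
    """||Phi(x_i) - mean_j||^2 = K_ii - (2/|V_j|) sum_{l in V_j} K_il
                                 + (1/|V_j|^2) sum_{l,m in V_j} K_lm."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(k, size=K.shape[0])              # random initial assignment
    for _ in range(n_iter):
        dists = np.empty((K.shape[0], k))
        for j in range(k):
            idx = np.flatnonzero(labels == j)
            if len(idx) == 0:
                dists[:, j] = np.inf
                continue
            dists[:, j] = (np.diag(K)
                           - 2.0 * K[:, idx].mean(axis=1)
                           + K[np.ix_(idx, idx)].mean())
        labels = dists.argmin(axis=1)
    return labels
```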


Wrap up

Parsimonious reconstruction: algorithms, computations & models.

We have not talked about:

I Statistics/stability:

P( | min_D (1/n) ∑_{i=1}^n min_{zi∈Fk} ‖xi − Dzi‖² − min_D ∫ dρ(x) min_{z∈Fk} ‖x − Dz‖² | > ε )

I Geometry/quantization:

min_D ∫ dρ(x) min_{z∈Fk} ‖x − Dz‖² → 0 as k → ∞

I Computations: non-convex optimization? Algorithmic guarantees?


Road map

This class:

I Part II: Data representations by unsupervised learning

– Dictionary Learning
– PCA
– Sparse coding
– K-means, K-flats

Next class:

I Part III: Deep data representations (unsupervised, supervised)

– Neural Networks basics
– Autoencoders
– ConvNets
