Transcript of Lecture 13 - Principal Component Analysis

Lecture 13: Principal Component Analysis

Brett Bernstein

CDS at NYU

April 25, 2017


Lecture 13 Initial Question

Intro Question

Question

Let S ∈ Rn×n be symmetric.

1 How does trace S relate to the spectral decomposition S = WΛWᵀ, where W is orthogonal and Λ is diagonal?

2 How do you solve w∗ = arg max_{‖w‖₂=1} wᵀSw? What is w∗ᵀSw∗?


Lecture 13 Initial Question

Intro Solution

Solution

1 We use the following useful property of traces: trace(AB) = trace(BA) for any matrices A, B where the dimensions allow. Thus we have

trace S = trace(W(ΛWᵀ)) = trace((ΛWᵀ)W) = trace Λ,

so the trace of S is the sum of its eigenvalues.

2 w∗ is a unit eigenvector of S corresponding to the largest eigenvalue, and w∗ᵀSw∗ is that largest eigenvalue. (Both facts are checked numerically in the sketch below.)
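A minimal numerical sketch of both facts using NumPy (the random symmetric matrix and variable names are my own illustration, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
S = (A + A.T) / 2                      # a symmetric test matrix

evals, evecs = np.linalg.eigh(S)       # eigh: for symmetric matrices, eigenvalues in ascending order
print(np.isclose(np.trace(S), evals.sum()))         # trace S equals the sum of the eigenvalues

w_star = evecs[:, -1]                  # unit eigenvector for the largest eigenvalue
print(np.isclose(w_star @ S @ w_star, evals[-1]))   # w*^T S w* equals the largest eigenvalue
```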


Lecture 13 Principal Component Analysis (PCA)

Unsupervised Learning

1 Where did the y's go?

2 Try to find intrinsic structure in unlabeled data.

3 With PCA, we are looking for a low-dimensional affine subspace that approximates our data well.


Lecture 13 Definition of Principal Components

Centered Data

1 Throughout this lecture we will work with centered data.

2 Suppose X ∈ Rn×d is our data matrix. Define

x̄ = (1/n) ∑_{i=1}^n xᵢ.

3 Let X̄ ∈ Rn×d be the matrix with x̄ in every row.

4 Define the centered data (computed in the short sketch below):

X̃ = X − X̄,    x̃ᵢ = xᵢ − x̄.
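A minimal NumPy sketch of the centering step (the toy matrix is purely illustrative):

```python
import numpy as np

X = np.array([[60.0, 152.0],
              [65.0, 165.0],
              [70.0, 178.0]])              # toy data matrix X, one row per observation

x_bar = X.mean(axis=0)                     # sample mean of the rows
X_tilde = X - x_bar                        # centered data: broadcasting subtracts the mean from every row
print(np.allclose(X_tilde.mean(axis=0), 0))   # the centered columns average to zero
```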


Lecture 13 Definition of Principal Components

Variance Along A Direction

Definition

Let x̃₁, …, x̃ₙ ∈ Rd be the centered data. Fix a direction w ∈ Rd with ‖w‖₂ = 1. The sample variance along w is given by

(1/(n−1)) ∑_{i=1}^n (x̃ᵢᵀw)².

This is the sample variance of the components x̃₁ᵀw, …, x̃ₙᵀw.

1 This is also the sample variance of x₁ᵀw, …, xₙᵀw, computed using the uncentered data (checked numerically in the sketch below).
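A minimal sketch of the sample variance along a direction, reusing the toy matrix from above (the direction w is an arbitrary illustrative choice):

```python
import numpy as np

X = np.array([[60.0, 152.0],
              [65.0, 165.0],
              [70.0, 178.0]])
X_tilde = X - X.mean(axis=0)               # centered data

w = np.array([1.0, 2.0])
w = w / np.linalg.norm(w)                  # unit-norm direction

proj = X_tilde @ w                         # the components x̃ᵢᵀw
var_along_w = (proj ** 2).sum() / (len(X) - 1)          # (1/(n−1)) Σ (x̃ᵢᵀw)²
print(np.isclose(var_along_w, np.var(X @ w, ddof=1)))   # equals the sample variance of the uncentered xᵢᵀw
```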


Lecture 13 Definition of Principal Components

Variance Along A Direction

[Figure: the centered data points x̃₁, …, x̃₇ plotted in the (x₁, x₂) plane, together with a unit direction w.]


Lecture 13 Definition of Principal Components

Variance Along A Direction

[Figure: the same centered points x̃₁, …, x̃₇ with the direction w; the projections of the points onto w mark the wᵀx̃ᵢ-values.]


Lecture 13 Definition of Principal Components

First Principal Component

1 Define the first loading vector w(1) to be the direction giving the highest variance:

w(1) = arg max_{‖w‖₂=1} (1/(n−1)) ∑_{i=1}^n (x̃ᵢᵀw)².

2 The maximizer is not unique, so we choose one.

Definition

The first principal component of x̃ᵢ is x̃ᵢᵀw(1).


Lecture 13 Definition of Principal Components

Principal Components

1 Define the kth loading vector w(k) to be the direction giving the highest variance that is orthogonal to the first k − 1 loading vectors:

w(k) = arg max_{‖w‖₂=1, w ⊥ w(1),…,w(k−1)} (1/(n−1)) ∑_{i=1}^n (x̃ᵢᵀw)².

2 The complete set of loading vectors w(1), …, w(d) forms an orthonormal basis for Rd.

Definition

The kth principal component of x̃ᵢ is x̃ᵢᵀw(k).


Lecture 13 Definition of Principal Components

Principal Components

1 Let W denote the matrix with the kth loading vector w(k) as its kth column.

2 Then Wᵀx̃ᵢ gives the principal components of x̃ᵢ as a column vector.

3 X̃W gives a new data matrix in terms of principal components.

4 If we compute the singular value decomposition (SVD) of X̃ we get

X̃ = VDWᵀ,

where D ∈ Rn×d is diagonal with non-negative entries, and V ∈ Rn×n, W ∈ Rd×d are orthogonal.

5 Then X̃ᵀX̃ = WDᵀDWᵀ. Thus we can use the SVD of our data matrix to obtain the loading vectors W and the eigenvalues Λ = (1/(n−1)) DᵀD of the sample covariance matrix (a sketch of this computation follows below).
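A minimal NumPy sketch of this computation on toy data (the data and the variable names are my own illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3)) * np.array([3.0, 1.0, 0.2])   # toy data with unequal spread
n, d = X.shape
X_tilde = X - X.mean(axis=0)                                    # center the data

V, s, Wt = np.linalg.svd(X_tilde, full_matrices=False)          # thin SVD: X̃ = V diag(s) Wᵀ
W = Wt.T                                                        # columns are the loading vectors w(1), ..., w(d)
scores = X_tilde @ W                                            # principal components of every data point
eigvals = s**2 / (n - 1)                                        # eigenvalues Λ, in decreasing order

S = X_tilde.T @ X_tilde / (n - 1)                               # sample covariance matrix
print(np.allclose(np.sort(np.linalg.eigvalsh(S))[::-1], eigvals))   # same eigenvalues as S
```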


Lecture 13 Computing Principal Components

Some Linear Algebra

Recall that w(1) is defined by

w(1) = arg max_{‖w‖₂=1} (1/(n−1)) ∑_{i=1}^n (x̃ᵢᵀw)².

We now perform some algebra to simplify this expression. Note that

∑_{i=1}^n (x̃ᵢᵀw)² = ∑_{i=1}^n (x̃ᵢᵀw)(x̃ᵢᵀw)
               = ∑_{i=1}^n (wᵀx̃ᵢ)(x̃ᵢᵀw)
               = wᵀ [ ∑_{i=1}^n x̃ᵢx̃ᵢᵀ ] w
               = wᵀX̃ᵀX̃w.


Lecture 13 Computing Principal Components

Some Linear Algebra

1 This shows

w(1) = arg max_{‖w‖₂=1} (1/(n−1)) wᵀX̃ᵀX̃w = arg max_{‖w‖₂=1} wᵀSw,

where S = (1/(n−1)) X̃ᵀX̃ is the sample covariance matrix.

2 By the introductory problem this implies w(1) is the eigenvector corresponding to the largest eigenvalue of S.

3 We also learn that the variance along w(1) is λ₁, the largest eigenvalue of S.

4 With a bit more work we can see that w(k) is the eigenvector corresponding to the kth largest eigenvalue, with λₖ giving the associated variance. (This is checked numerically in the sketch below.)
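A minimal check that the eigendecomposition of S reproduces the SVD-based loading vectors and variances (toy data as before; sign flips of individual eigenvectors are expected and harmless):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3)) * np.array([3.0, 1.0, 0.2])   # toy data
X_tilde = X - X.mean(axis=0)
n = X.shape[0]

S = X_tilde.T @ X_tilde / (n - 1)             # sample covariance matrix
evals, evecs = np.linalg.eigh(S)              # ascending order
evals, evecs = evals[::-1], evecs[:, ::-1]    # reorder so that λ₁ ≥ λ₂ ≥ ...

_, s, Wt = np.linalg.svd(X_tilde, full_matrices=False)
print(np.allclose(evals, s**2 / (n - 1)))               # same variances along the loading vectors
print(np.allclose(np.abs(evecs), np.abs(Wt.T)))         # same loading vectors, up to sign
```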


Lecture 13 Computing Principal Components

PCA Example

Example

A collection of people come to a testing site to have their heights measured twice. The two testers use different measuring devices, each of which introduces errors into the measurement process. Below we depict some of the measurements computed (already centered).


Lecture 13 Computing Principal Components

PCA Example

[Figure: scatter plot of the centered measurements, with Tester 1 on the horizontal axis and Tester 2 on the vertical axis.]

1 Describe (vaguely) what you expect the sample covariance matrix to look like.

2 What do you think w(1) and w(2) are?


Lecture 13 Computing Principal Components

PCA Example: Solutions

1 We expect tester 2 to have a larger variance than tester 1, and the two testers' measurements to be nearly perfectly correlated. The sample covariance matrix is

S = ( 40.5154   93.5069
      93.5069  232.8653 ).

2 We have

S = WΛWᵀ, with

W = ( 0.3762  −0.9265
      0.9265   0.3762 ),

Λ = ( 270.8290   0
      0          2.5518 ).

Note that trace Λ = trace S.

Since λ₂ is small, it shows that w(2) is almost in the null space of S. This suggests −0.9265 x̃₁ + 0.3762 x̃₂ ≈ 0 for data points (x̃₁, x̃₂). In other words, x̃₂ ≈ 2.46 x̃₁. Maybe tester 2 used centimeters and tester 1 used inches. (These numbers are checked in the sketch below.)
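A minimal sketch verifying the reported decomposition (the entries of S are taken from the slide; small rounding differences are expected):

```python
import numpy as np

S = np.array([[40.5154,  93.5069],
              [93.5069, 232.8653]])          # sample covariance matrix from the slide

evals, evecs = np.linalg.eigh(S)             # ascending order
evals, evecs = evals[::-1], evecs[:, ::-1]

print(evals)                                  # approximately [270.83, 2.55]
print(evecs)                                  # columns ≈ ±(0.3762, 0.9265) and ±(−0.9265, 0.3762)
print(np.isclose(np.trace(S), evals.sum()))   # trace Λ = trace S
print(0.9265 / 0.3762)                        # ≈ 2.46, the slope relating the two testers
```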


Lecture 13 Computing Principal Components

PCA Example: Plot In Terms of Principal Components

[Figure: the same scatter plot of Tester 1 vs. Tester 2, alongside the data re-plotted in the (w(1), w(2)) coordinates; almost all of the spread lies along w(1), while the w(2)-values stay in a narrow range.]


Lecture 13 Computing Principal Components

Uses of PCA: Dimensionality Reduction

1 In our height example above, we can replace our two features with only a single feature, the first principal component.

2 This can be used as a preprocessing step in a supervised learning algorithm.

3 When performing dimensionality reduction, one must choose how many principal components to use. This is often done using a scree plot: a plot of the eigenvalues of S in descending order.

4 Often people look for an “elbow” in the scree plot: a point where the plot becomes much less steep. (A plotting sketch follows below.)
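A minimal matplotlib sketch of a scree plot (assuming matplotlib is available; the toy data is illustrative):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 6)) * np.array([5.0, 3.0, 2.0, 0.5, 0.3, 0.1])   # toy data
X_tilde = X - X.mean(axis=0)

_, s, _ = np.linalg.svd(X_tilde, full_matrices=False)
eigvals = s**2 / (X.shape[0] - 1)             # eigenvalues of S, already in descending order

plt.plot(np.arange(1, len(eigvals) + 1), eigvals, marker="o")
plt.xlabel("component number")
plt.ylabel("eigenvalue of S")
plt.title("Scree plot")
plt.show()
```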


Lecture 13 Computing Principal Components

Scree Plot

[Scree plot figure from Jolliffe, Principal Component Analysis.]

Lecture 13 Computing Principal Components

Uses of PCA: Visualization

1 Visualization: If we have high dimensional data, it can be hard to plot it effectively. Sometimes plotting the first two principal components can reveal interesting geometric structure in the data.

[Example figure: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2735096/]

Lecture 13 Computing Principal Components

Uses of PCA: Principal Component Regression

1 Want to build a linear model with a dataset

D = {(x₁, y₁), …, (xₙ, yₙ)}.

2 We can choose some k and replace each x̃ᵢ with its first k principal components. Afterward we perform linear regression.

3 This is called principal component regression, and can be thought of as a discrete variant of ridge regression (see HTF 3.4.1).

4 Correlated features may be grouped together into a single principal component that averages their values (like with ridge regression). Think about the two-tester example from before. (A sketch of the pipeline follows below.)
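A minimal sketch of principal component regression using scikit-learn (assuming scikit-learn is installed; the synthetic data and the choice k = 2 are illustrative):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
X[:, 3] = X[:, 0] + 0.01 * rng.standard_normal(200)     # a feature highly correlated with feature 0
y = X @ np.array([1.0, -2.0, 0.5, 1.0, 0.0]) + 0.1 * rng.standard_normal(200)

# PCA centers the data and keeps the first k principal components; we then regress y on them.
pcr = make_pipeline(PCA(n_components=2), LinearRegression())
pcr.fit(X, y)
print(pcr.score(X, y))                                  # in-sample R² of the reduced model
```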


Lecture 13 Other Comments About PCA

Standardization

1 What happens if you scale one of the features by a huge factor?

2 It will have a huge variance and become a dominant part of the first principal component.

3 To add scale-invariance to the process, people often standardize their data (center and normalize) before running PCA.

4 This is the same as using the correlation matrix in place of the covariance matrix (checked numerically in the sketch below).
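A minimal NumPy check that PCA on standardized data corresponds to the correlation matrix (toy data; names are my own):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3)) * np.array([1.0, 100.0, 5.0])   # one feature on a huge scale

Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)   # standardize: center and divide by the sample std
S_z = Z.T @ Z / (X.shape[0] - 1)                   # sample covariance of the standardized data
R = np.corrcoef(X, rowvar=False)                   # correlation matrix of the original data
print(np.allclose(S_z, R))                         # identical, so both give the same loading vectors
```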



Lecture 13 Other Comments About PCA

Dispersion Of The Data

1 One measure of the dispersion of our data is the following:

Δ = (1/(n−1)) ∑_{i=1}^n ‖xᵢ − x̄‖₂² = (1/(n−1)) ∑_{i=1}^n ‖x̃ᵢ‖₂².

2 A little algebra shows this is trace S, where S is the sample covariance matrix.

3 If we project onto the first k principal components, the resulting data has dispersion λ₁ + · · · + λₖ.

4 We can choose k to account for a desired percentage of Δ.

5 The subspace spanned by the first k loading vectors maximizes the resulting dispersion over all possible k-dimensional subspaces. (A numerical sketch follows below.)
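A minimal numerical sketch of the dispersion identity and of choosing k to cover a given fraction of Δ (toy data; the 95% threshold is an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 5)) * np.array([4.0, 2.0, 1.0, 0.5, 0.1])   # toy data
X_tilde = X - X.mean(axis=0)
n = X.shape[0]

delta = (np.linalg.norm(X_tilde, axis=1) ** 2).sum() / (n - 1)   # dispersion Δ
S = X_tilde.T @ X_tilde / (n - 1)
print(np.isclose(delta, np.trace(S)))                            # Δ = trace S

eigvals = np.sort(np.linalg.eigvalsh(S))[::-1]                   # λ₁ ≥ ... ≥ λ_d
frac = np.cumsum(eigvals) / eigvals.sum()                        # fraction of Δ covered by the first k
k = int(np.searchsorted(frac, 0.95)) + 1                         # smallest k covering 95% of Δ
print(k, np.round(frac, 3))
```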


Lecture 13 Other Comments About PCA

Other Comments

1 The k-dimensional subspace V spanned by w(1), …, w(k) best fits the centered data in the least-squares sense. More precisely, it minimizes

∑_{i=1}^n ‖x̃ᵢ − P_V(x̃ᵢ)‖₂²

over all k-dimensional subspaces, where P_V orthogonally projects onto V. (An illustrative check is sketched after this list.)

2 Converting your data into principal components can sometimes hurt interpretability since the new features are linear combinations (i.e., blends or baskets) of your old features.

3 The loading vectors corresponding to the smallest eigenvalues are nearly in the null space of X̃, and thus can reveal linear dependencies in the centered data.
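An illustrative check on toy data, comparing the squared reconstruction error of the span of the first k loading vectors against a few random k-dimensional subspaces (a sanity check, not a proof):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5)) * np.array([4.0, 2.0, 1.0, 0.5, 0.1])   # toy data
X_tilde = X - X.mean(axis=0)
k = 2

def reconstruction_error(B):
    """Sum of squared distances from the rows of X_tilde to the column space of B."""
    Q, _ = np.linalg.qr(B)                 # orthonormal basis of the subspace
    proj = X_tilde @ Q @ Q.T               # orthogonal projection of each centered point
    return ((X_tilde - proj) ** 2).sum()

_, _, Wt = np.linalg.svd(X_tilde, full_matrices=False)
pca_err = reconstruction_error(Wt[:k].T)   # subspace spanned by the first k loading vectors

random_errs = [reconstruction_error(rng.standard_normal((5, k))) for _ in range(100)]
print(pca_err <= min(random_errs))         # the PCA subspace fits at least as well
```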


Lecture 13 Other Comments About PCA

Principal Components Are Linear

Suppose we have the following labeled data.

[Figure: two colored clusters of labeled points in the (x₁, x₂) plane that no single linear direction can separate.]

How can we apply PCA and obtain a single principal component that distinguishes the colored clusters?


Lecture 13 Other Comments About PCA

Principal Components Are Linear

1 In general, we can deal with non-linear structure by adding features or using kernels.

2 Using kernels results in the technique called Kernel PCA.

3 Below we added the feature ‖x̃ᵢ‖₂ and took the first principal component.

[Figure: the first loading vector w(1) after adding the norm feature; its principal component distinguishes the two clusters.]
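A minimal sketch of this idea on synthetic data: a tight inner blob and a sparser outer ring (my own illustrative geometry, chosen so that the added norm feature carries the largest variance; this is not the lecture's dataset). With this choice the first principal component of the augmented data separates the two clusters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic labeled data: a tight inner blob (270 points) and a sparser outer ring (30 points).
inner = 0.3 * rng.standard_normal((270, 2))
theta = np.linspace(0.0, 2.0 * np.pi, 30, endpoint=False)
outer = 5.0 * np.column_stack([np.cos(theta), np.sin(theta)]) + 0.2 * rng.standard_normal((30, 2))
X = np.vstack([inner, outer])
labels = np.array([0] * 270 + [1] * 30)

# Add the norm of each centered point as a third feature, then re-center and run PCA.
X_tilde = X - X.mean(axis=0)
X_aug = np.column_stack([X_tilde, np.linalg.norm(X_tilde, axis=1)])
X_aug = X_aug - X_aug.mean(axis=0)

_, _, Wt = np.linalg.svd(X_aug, full_matrices=False)
pc1 = X_aug @ Wt[0]                        # first principal component of the augmented data

# With this geometry the two clusters occupy disjoint ranges of the first principal component.
print(pc1[labels == 0].min(), pc1[labels == 0].max())
print(pc1[labels == 1].min(), pc1[labels == 1].max())
```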
