
Transcript
Page 1: Machine Learning - cs.cmu.edu

Machine Learning

Advanced topics in Max-Margin Learning

10-701/15-781, Fall 2011

Eric Xing

Lecture 20, November 21, 2011


Recap: the SVM problem

We solve the following constrained optimization problem:

$$\max_{\alpha}\; J(\alpha) = \sum_{i=1}^{m}\alpha_i - \frac{1}{2}\sum_{i,j=1}^{m}\alpha_i\alpha_j y_i y_j\, \mathbf{x}_i^T\mathbf{x}_j$$

$$\text{s.t.}\quad \alpha_i \ge 0,\; i=1,\ldots,m, \qquad \sum_{i=1}^{m}\alpha_i y_i = 0.$$

This is a quadratic programming problem; a global maximum of the αi can always be found.

The solution: $\mathbf{w} = \sum_{i=1}^{m}\alpha_i y_i \mathbf{x}_i$

How to predict: $y = \operatorname{sign}(\mathbf{w}^T\mathbf{x} + b)$
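To make the dual concrete, here is a minimal sketch, not from the lecture, that hands the problem above to a generic constrained solver; the toy data, the choice of scipy.optimize.minimize with SLSQP, and all variable names are my own illustrative assumptions.

```python
# Hedged sketch: solving the (hard-margin) SVM dual above with a generic solver.
import numpy as np
from scipy.optimize import minimize

# toy linearly separable data: x_i in R^2, y_i in {-1, +1}
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
m = len(y)

K = (y[:, None] * X) @ (y[:, None] * X).T      # K_ij = y_i y_j x_i^T x_j

def neg_J(alpha):                              # we minimize -J(alpha)
    return -(alpha.sum() - 0.5 * alpha @ K @ alpha)

cons = [{"type": "eq", "fun": lambda a: a @ y}]   # sum_i alpha_i y_i = 0
bnds = [(0.0, None)] * m                          # alpha_i >= 0

res = minimize(neg_J, np.zeros(m), bounds=bnds, constraints=cons, method="SLSQP")
alpha = res.x
w = (alpha * y) @ X                               # w = sum_i alpha_i y_i x_i
sv = alpha > 1e-6                                 # support vectors
b = np.mean(y[sv] - X[sv] @ w)                    # b from the support vectors
print("alpha:", np.round(alpha, 3), "w:", w, "b:", round(b, 3))
```

A dedicated QP solver would be used in practice; the point is only that the dual is a standard quadratic program.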

Page 2: Machine Learning - cs.cmu.edu

Non-linearly Separable Problems

[Figure: two overlapping classes (Class 1, Class 2) with slack variables ξi]

We allow an "error" ξi in classification; it is based on the output of the discriminant function wᵀx + b. ξi approximates the number of misclassified samples.

Soft Margin Hyperplane

Now we have a slightly different optimization problem:

$$\min_{\mathbf{w},b}\; \frac{1}{2}\mathbf{w}^T\mathbf{w} + C\sum_{i=1}^{m}\xi_i$$

$$\text{s.t.}\quad y_i(\mathbf{w}^T\mathbf{x}_i + b) \ge 1 - \xi_i,\; \forall i, \qquad \xi_i \ge 0,\; \forall i.$$

ξi are "slack variables" in the optimization. Note that ξi = 0 if there is no error for xi, and Σi ξi gives an upper bound on the number of training errors. C is the tradeoff parameter between error and margin.
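One way to see the role of the slack variables: at the optimum ξi = max(0, 1 − yi(wᵀxi + b)), so the soft-margin problem can equivalently be minimized as an unconstrained hinge loss. The sketch below is my own illustration of that view (the toy data, step size, and iteration count are arbitrary assumptions), using plain subgradient descent.

```python
# Hedged sketch: minimize 1/2 ||w||^2 + C * sum_i max(0, 1 - y_i (w^T x_i + b))
import numpy as np

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(1.5, 1.0, size=(20, 2)),     # two overlapping classes
               rng.normal(-1.5, 1.0, size=(20, 2))])
y = np.hstack([np.ones(20), -np.ones(20)])

C, lr, T = 1.0, 0.01, 2000
w, b = np.zeros(2), 0.0
for _ in range(T):
    margins = y * (X @ w + b)
    viol = margins < 1                                  # points with xi_i > 0
    # subgradient of the regularized hinge loss
    grad_w = w - C * (y[viol, None] * X[viol]).sum(axis=0)
    grad_b = -C * y[viol].sum()
    w, b = w - lr * grad_w, b - lr * grad_b

xi = np.maximum(0.0, 1 - y * (X @ w + b))               # recovered slack variables
print("w:", w, "b:", round(b, 3), "nonzero slacks:", int((xi > 1e-8).sum()))
```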

Page 3: Machine Learning - cs.cmu.edu

Lagrangian Duality, cont.

Recall the primal problem:

$$p^* = \min_{w}\ \max_{\alpha,\beta:\,\alpha_i \ge 0} \mathcal{L}(w,\alpha,\beta)$$

The dual problem:

$$d^* = \max_{\alpha,\beta:\,\alpha_i \ge 0}\ \min_{w} \mathcal{L}(w,\alpha,\beta)$$

Theorem (weak duality):

$$d^* = \max_{\alpha,\beta:\,\alpha_i \ge 0}\ \min_{w} \mathcal{L}(w,\alpha,\beta)\ \le\ \min_{w}\ \max_{\alpha,\beta:\,\alpha_i \ge 0} \mathcal{L}(w,\alpha,\beta) = p^*$$

Theorem (strong duality): iff there exists a saddle point of $\mathcal{L}(w,\alpha,\beta)$, we have $d^* = p^*$.

A sketch of strong and weak duality

Now, ignoring h(x) for simplicity, let's look at what's happening graphically in the duality theorems:

$$d^* = \max_{\alpha \ge 0}\ \min_{w}\ \big[f(w) + \alpha^T g(w)\big]\ \le\ \min_{w}\ \max_{\alpha \ge 0}\ \big[f(w) + \alpha^T g(w)\big] = p^*$$

[Figure: curves of f(w) and g(w) illustrating the duality gap]

Page 4: Machine Learning - cs.cmu.edu


The KKT conditions

If there exists some saddle point of $\mathcal{L}$, then the saddle point satisfies the following "Karush-Kuhn-Tucker" (KKT) conditions:

$$\frac{\partial}{\partial w_i}\mathcal{L}(w,\alpha,\beta) = 0, \quad i = 1,\ldots,k$$

$$\frac{\partial}{\partial \beta_i}\mathcal{L}(w,\alpha,\beta) = 0, \quad i = 1,\ldots,l$$

$$\alpha_i\, g_i(w) = 0, \quad i = 1,\ldots,m$$

$$g_i(w) \le 0, \quad i = 1,\ldots,m$$

$$\alpha_i \ge 0, \quad i = 1,\ldots,m$$

Theorem: if w*, α* and β* satisfy the KKT conditions, then they are also a solution to the primal and the dual problems.

Page 5: Machine Learning - cs.cmu.edu

The Optimization Problem

The dual of this new constrained optimization problem is

$$\max_{\alpha}\; J(\alpha) = \sum_{i=1}^{m}\alpha_i - \frac{1}{2}\sum_{i,j=1}^{m}\alpha_i\alpha_j y_i y_j\, \mathbf{x}_i^T\mathbf{x}_j$$

$$\text{s.t.}\quad 0 \le \alpha_i \le C,\; i=1,\ldots,m, \qquad \sum_{i=1}^{m}\alpha_i y_i = 0.$$

This is very similar to the optimization problem in the linearly separable case, except that there is now an upper bound C on αi. Once again, a QP solver can be used to find the αi.

The SMO algorithm

Consider solving the unconstrained optimization problem:

We've already seen three optimization algorithms: coordinate ascent, gradient ascent, Newton-Raphson.

Coordinate ascent:
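Since the slides only name coordinate ascent, here is a minimal sketch of the idea on a toy concave quadratic of my own choosing: each step maximizes the objective over a single coordinate while holding the others fixed.

```python
# Hedged sketch: coordinate ascent on J(a) = c^T a - 1/2 a^T Q a (my own toy objective).
import numpy as np

Q = np.array([[2.0, 0.5], [0.5, 1.0]])   # positive definite, so J is concave
c = np.array([1.0, 1.0])

def coordinate_ascent(a, n_sweeps=50):
    for _ in range(n_sweeps):
        for i in range(len(a)):
            # dJ/da_i = c_i - Q[i] @ a = 0  =>  a_i = (c_i - sum_{j!=i} Q_ij a_j) / Q_ii
            a[i] = (c[i] - Q[i] @ a + Q[i, i] * a[i]) / Q[i, i]
    return a

a = coordinate_ascent(np.zeros(2))
print("coordinate-ascent solution:", a, "exact:", np.linalg.solve(Q, c))
```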

Page 6: Machine Learning - cs.cmu.edu

Coordinate ascent

Sequential minimal optimization

Constrained optimization:

$$\max_{\alpha}\; J(\alpha) = \sum_{i=1}^{m}\alpha_i - \frac{1}{2}\sum_{i,j=1}^{m}\alpha_i\alpha_j y_i y_j\, \mathbf{x}_i^T\mathbf{x}_j$$

$$\text{s.t.}\quad 0 \le \alpha_i \le C,\; i=1,\ldots,m, \qquad \sum_{i=1}^{m}\alpha_i y_i = 0.$$

Question: can we do coordinate ascent along one direction at a time (i.e., hold all α[-i] fixed, and update αi)?

Page 7: Machine Learning - cs.cmu.edu

The SMO algorithm

Repeat till convergence:

1. Select some pair αi and αj to update next (using a heuristic that tries to pick the two that will allow us to make the biggest progress towards the global maximum).

2. Re-optimize J(α) with respect to αi and αj, while holding all the other αk's (k ≠ i, j) fixed.

Will this procedure converge?

Convergence of SMO

$$\max_{\alpha}\; J(\alpha) = \sum_{i=1}^{m}\alpha_i - \frac{1}{2}\sum_{i,j=1}^{m}\alpha_i\alpha_j y_i y_j\, \mathbf{x}_i^T\mathbf{x}_j$$

$$\text{s.t.}\quad 0 \le \alpha_i \le C,\; i=1,\ldots,m, \qquad \sum_{i=1}^{m}\alpha_i y_i = 0.$$

KKT:

$$\alpha_i = 0 \Rightarrow y_i(\mathbf{w}^T\mathbf{x}_i + b) \ge 1, \qquad 0 < \alpha_i < C \Rightarrow y_i(\mathbf{w}^T\mathbf{x}_i + b) = 1, \qquad \alpha_i = C \Rightarrow y_i(\mathbf{w}^T\mathbf{x}_i + b) \le 1.$$

Let's hold α3, …, αm fixed and re-optimize J w.r.t. α1 and α2.
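Putting the two previous slides together, here is a hedged, simplified sketch of an SMO-style pair update (in the spirit of Platt's simplified SMO): pick a pair (i, j), re-optimize J over that pair analytically, clip back to the box [0, C], and update b from the KKT conditions. Pair selection is random here rather than heuristic, and the data set, kernel, and tolerances are my own assumptions.

```python
# Hedged sketch of a simplified SMO loop (linear kernel, random second index).
import numpy as np

def simplified_smo(X, y, C=1.0, tol=1e-4, max_passes=20):
    m = len(y)
    alpha, b = np.zeros(m), 0.0
    K = X @ X.T                                    # linear kernel
    f = lambda k: (alpha * y) @ K[:, k] + b        # current decision value on x_k

    passes = 0
    while passes < max_passes:
        changed = 0
        for i in range(m):
            E_i = f(i) - y[i]
            if (y[i] * E_i < -tol and alpha[i] < C) or (y[i] * E_i > tol and alpha[i] > 0):
                j = np.random.choice([k for k in range(m) if k != i])
                E_j = f(j) - y[j]
                ai_old, aj_old = alpha[i], alpha[j]
                # feasible segment for alpha_j given sum_k alpha_k y_k = 0
                if y[i] != y[j]:
                    L, H = max(0, aj_old - ai_old), min(C, C + aj_old - ai_old)
                else:
                    L, H = max(0, ai_old + aj_old - C), min(C, ai_old + aj_old)
                eta = 2 * K[i, j] - K[i, i] - K[j, j]
                if L == H or eta >= 0:
                    continue
                alpha[j] = np.clip(aj_old - y[j] * (E_i - E_j) / eta, L, H)
                if abs(alpha[j] - aj_old) < 1e-5:
                    continue
                alpha[i] = ai_old + y[i] * y[j] * (aj_old - alpha[j])
                # update b so the KKT conditions hold for the updated pair
                b1 = b - E_i - y[i] * (alpha[i] - ai_old) * K[i, i] - y[j] * (alpha[j] - aj_old) * K[i, j]
                b2 = b - E_j - y[i] * (alpha[i] - ai_old) * K[i, j] - y[j] * (alpha[j] - aj_old) * K[j, j]
                b = b1 if 0 < alpha[i] < C else (b2 if 0 < alpha[j] < C else (b1 + b2) / 2)
                changed += 1
        passes = passes + 1 if changed == 0 else 0
    return alpha, b

X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
alpha, b = simplified_smo(X, y)
print("alpha:", np.round(alpha, 3), "b:", round(b, 3))
```

Prediction then uses w = Σi αi yi xi for the linear kernel, or the kernelized form shown later.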

Page 8: Machine Learning - cs.cmu.edu

Convergence of SMO

The constraints: with α3, …, αm fixed, the equality constraint forces α1y1 + α2y2 = ζ for a constant ζ, with 0 ≤ α1, α2 ≤ C, so the pair (α1, α2) lives on a line segment.

The objective: substituting α1 = (ζ − α2y2)y1 turns J into a one-dimensional quadratic in α2.

Constrained opt: maximize that quadratic and clip α2 back to the feasible segment.

Cross-validation error of SVM

The leave-one-out cross-validation error does not depend on the dimensionality of the feature space but only on the number of support vectors!

$$\text{Leave-one-out CV error} = \frac{\#\ \text{support vectors}}{\#\ \text{training examples}}$$

For example, with 10 support vectors among 1,000 training examples this gives 10/1000 = 1%.

Page 9: Machine Learning - cs.cmu.edu

Advanced topics in Max-Margin Learning

$$\max_{\alpha}\; J(\alpha) = \sum_{i=1}^{m}\alpha_i - \frac{1}{2}\sum_{i,j=1}^{m}\alpha_i\alpha_j y_i y_j\, \mathbf{x}_i^T\mathbf{x}_j$$

Kernel (the inner product xiᵀxj)

Point rule or average rule

Can we predict vec(y)?

Outline

The Kernel trick

Maximum entropy discrimination

Structured SVM, aka, Maximum Margin Markov Networks


Page 10: Machine Learning - cs.cmu.edu

(1) Non-linear Decision Boundary

So far, we have only considered large-margin classifiers with a linear decision boundary. How can we generalize them to become nonlinear? Key idea: transform xi to a higher-dimensional space to "make life easier".

Input space: the space where the points xi are located.
Feature space: the space of φ(xi) after transformation.

Why transform? A linear operation in the feature space is equivalent to a non-linear operation in the input space. Classification can become easier with a proper transformation: in the XOR problem, for example, adding a new feature x1x2 makes the problem linearly separable (homework).

The Kernel Trick

Is this data linearly separable? How about a quadratic mapping φ(xi)?

Page 11: Machine Learning - cs.cmu.edu

The Kernel Trick

Recall the SVM optimization problem:

$$\max_{\alpha}\; J(\alpha) = \sum_{i=1}^{m}\alpha_i - \frac{1}{2}\sum_{i,j=1}^{m}\alpha_i\alpha_j y_i y_j\, \mathbf{x}_i^T\mathbf{x}_j$$

$$\text{s.t.}\quad 0 \le \alpha_i \le C,\; i=1,\ldots,m, \qquad \sum_{i=1}^{m}\alpha_i y_i = 0.$$

The data points only appear as inner products. As long as we can calculate the inner product in the feature space, we do not need the mapping explicitly. Many common geometric operations (angles, distances) can be expressed by inner products. Define the kernel function K by

$$K(\mathbf{x}_i,\mathbf{x}_j) = \phi(\mathbf{x}_i)^T\phi(\mathbf{x}_j).$$

II. The Kernel Trick

Computation depends on the feature space; this is bad if its dimension is much larger than that of the input space.

$$\max_{\alpha}\; \sum_{i=1}^{m}\alpha_i - \frac{1}{2}\sum_{i,j=1}^{m}\alpha_i\alpha_j y_i y_j\, K(\mathbf{x}_i,\mathbf{x}_j)$$

$$\text{s.t.}\quad \alpha_i \ge 0,\; i=1,\ldots,m, \qquad \sum_{i=1}^{m}\alpha_i y_i = 0,$$

where $K(\mathbf{x}_i,\mathbf{x}_j) = \phi(\mathbf{x}_i)^T\phi(\mathbf{x}_j)$ and the prediction on a new point z is

$$y^* = \operatorname{sign}\Big(\sum_{i \in SV}\alpha_i y_i\, K(\mathbf{x}_i,\mathbf{z}) + b\Big).$$
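A minimal sketch of this kernelized predictor, assuming the αi and b come from some dual solver (for instance the QP or SMO sketches above); the RBF kernel choice and every name here are my own illustration rather than the lecture's.

```python
# Hedged sketch: y*(z) = sign(sum_{i in SV} alpha_i y_i K(x_i, z) + b).
import numpy as np

def rbf_kernel(a, b, gamma=0.5):
    return np.exp(-gamma * np.sum((a - b) ** 2))

def svm_predict(z, X_sv, y_sv, alpha_sv, b, kernel=rbf_kernel):
    # sum over support vectors only: non-support vectors have alpha_i = 0
    score = sum(a_i * y_i * kernel(x_i, z)
                for a_i, y_i, x_i in zip(alpha_sv, y_sv, X_sv)) + b
    return np.sign(score)
```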

Page 12: Machine Learning - cs.cmu.edu

Transforming the Data

[Figure: points mapped by φ(·) from the input space to the feature space]

Note: the feature space is of higher dimension than the input space in practice.

Computation in the feature space can be costly because it is high dimensional; the feature space is typically infinite-dimensional! The kernel trick comes to the rescue.

An Example for feature mapping and kernels

Consider an input x = [x1, x2]. Suppose φ(·) is given as follows:

$$\phi\!\left(\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}\right) = \left[1,\; \sqrt{2}\,x_1,\; \sqrt{2}\,x_2,\; x_1^2,\; x_2^2,\; \sqrt{2}\,x_1 x_2\right]$$

An inner product in the feature space is

$$\left\langle \phi\!\left(\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}\right),\; \phi\!\left(\begin{bmatrix} x_1' \\ x_2' \end{bmatrix}\right)\right\rangle = \left(1 + \mathbf{x}^T\mathbf{x}'\right)^2$$

So, if we define the kernel function as follows, there is no need to carry out φ(·) explicitly:

$$K(\mathbf{x},\mathbf{x}') = \left(1 + \mathbf{x}^T\mathbf{x}'\right)^2$$
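A quick numeric check of the claim above, with test vectors of my own choosing: the explicit quadratic feature map φ and the kernel K(x, x') = (1 + xᵀx')² give the same inner product.

```python
# Hedged numeric check: phi(x) . phi(x') == (1 + x^T x')^2 for the map above.
import numpy as np

def phi(x):
    x1, x2 = x
    return np.array([1.0, np.sqrt(2) * x1, np.sqrt(2) * x2,
                     x1 ** 2, x2 ** 2, np.sqrt(2) * x1 * x2])

def K(x, xp):
    return (1.0 + x @ xp) ** 2

x, xp = np.array([1.0, 2.0]), np.array([3.0, -1.0])
print(phi(x) @ phi(xp), K(x, xp))   # both print 4.0
```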

Page 13: Machine Learning - cs.cmu.edu

More examples of kernel functions

Linear kernel (we've seen it):

$$K(\mathbf{x},\mathbf{x}') = \mathbf{x}^T\mathbf{x}'$$

Polynomial kernel (we just saw an example):

$$K(\mathbf{x},\mathbf{x}') = \left(1 + \mathbf{x}^T\mathbf{x}'\right)^p$$

where p = 2, 3, …. To get the feature vectors we concatenate all pth-order polynomial terms of the components of x (weighted appropriately).

Radial basis kernel:

$$K(\mathbf{x},\mathbf{x}') = \exp\!\left(-\tfrac{1}{2}\,\|\mathbf{x}-\mathbf{x}'\|^2\right)$$

In this case the feature space consists of functions and results in a non-parametric classifier.
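The three kernels above written out as plain functions, together with a helper that builds the m×m Gram matrix the dual needs; the names and demo data are my own illustration.

```python
# Hedged sketch: linear, polynomial, and RBF kernels plus a Gram-matrix helper.
import numpy as np

def linear_kernel(x, xp):
    return x @ xp

def polynomial_kernel(x, xp, p=2):
    return (1.0 + x @ xp) ** p

def rbf_kernel(x, xp):
    return np.exp(-0.5 * np.sum((x - xp) ** 2))

def gram_matrix(X, kernel):
    m = len(X)
    return np.array([[kernel(X[i], X[j]) for j in range(m)] for i in range(m)])

X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
print(gram_matrix(X, rbf_kernel))
```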

The essence of kernel

Feature mapping, but "without paying a cost". E.g., the polynomial kernel: how many dimensions have we got in the new space? How many operations does it take to compute K(·)?

Kernel design: any principle? K(x, z) can be thought of as a similarity function between x and z. This intuition can be well reflected in the following "Gaussian" function (similarly, one can easily come up with other K(·) in the same spirit).

Does this necessarily lead to a "legal" kernel? (In the above particular case, K(·) is a legal one; do you know how many dimensions φ(x) has?)

Page 14: Machine Learning - cs.cmu.edu

Kernel matrix

Suppose for now that K is indeed a valid kernel corresponding to some feature mapping φ; then for x1, …, xm we can compute an m×m matrix K = {Kij}, where Kij = φ(xi)ᵀφ(xj). This is called a kernel matrix!

Now, if a kernel function is indeed a valid kernel, so that its elements are dot products in the transformed feature space, the kernel matrix must satisfy:

Symmetry: K = Kᵀ (proof)

Positive semidefiniteness (proof?)
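A small numerical sanity check (not a proof) of the two properties above, using the RBF kernel on random data of my own choosing: the Gram matrix should come out symmetric with non-negative eigenvalues.

```python
# Hedged sketch: check symmetry and positive semidefiniteness of an RBF Gram matrix.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))

# Gram matrix K_ij = exp(-1/2 ||x_i - x_j||^2)
sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
K = np.exp(-0.5 * sq_dists)

print("symmetric:", np.allclose(K, K.T))
print("PSD (min eigenvalue >= 0):", np.linalg.eigvalsh(K).min() >= -1e-10)
```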

Mercer kernel

Mercer's condition: K is a valid kernel, i.e., corresponds to some feature mapping φ, if and only if the kernel matrix it induces on any finite set of points {x1, …, xm} is symmetric and positive semidefinite.

Page 15: Machine Learning - cs.cmu.edu


SVM examples


(2) Model averaging

Inputs x, class y = +1, −1
Data D = { (x1, y1), …, (xm, ym) }

Point rule:
learn fopt(x), a discriminant function, from F = { f }, a family of discriminants
classify y = sign fopt(x)
E.g., SVM

Page 16: Machine Learning - cs.cmu.edu

Model averaging

There exist many f with near-optimal performance.

Instead of choosing fopt, average over all f in F:

Q(f) = weight of f

How to specify F = { f }, the family of discriminant functions?

How to learn Q(f), the distribution over F?

Bayesian learning:

Recall Bayesian Inference

Bayes Predictor (model averaging):

Bayes Learner

What p0?

Recall in SVM:


Page 17: Machine Learning - cs.cmu.edu

How to score distributions?

Entropy H(X) of a random variable X:

H(X) is the expected number of bits needed to encode a randomly drawn value of X (under the most efficient code). Why?

Information theory: the most efficient code assigns −log2 P(X = i) bits to encode the message X = i. So, the expected number of bits to code one random X is:

$$H(X) = -\sum_i P(X=i)\,\log_2 P(X=i)$$
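A tiny worked example of this definition; the distributions below are my own.

```python
# Hedged sketch: H(X) = -sum_i P(X=i) log2 P(X=i) for small discrete distributions.
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # 0 * log 0 is taken to be 0
    return float(-(p * np.log2(p)).sum())

print(entropy([0.5, 0.5]))            # 1.0 bit  (fair coin)
print(entropy([0.25] * 4))            # 2.0 bits (uniform over 4 values)
print(entropy([0.9, 0.1]))            # ~0.469 bits (skewed coin)
```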

Maximum Entropy Discrimination

Given a data set D, find Q such that:

The solution QME correctly classifies D.
Among all admissible Q, QME has maximum entropy.
Maximum entropy means "minimum assumption" about f.

Page 18: Machine Learning - cs.cmu.edu

Introducing Priors

Prior Q0(f)

Minimum Relative Entropy Discrimination

Convex problem: QMRE is the unique solution. Minimum relative entropy means "minimum additional assumption" over Q0 about f.

Solution: QME as a projection

Convex problem: QME is unique.

[Figure: QME as the projection of the prior Q0 (uniform, α = 0) onto the set of admissible Q, reached at αME]

αi ≥ 0 are Lagrange multipliers.

Finding QME: start with αi = 0 and follow the gradient of unsatisfied constraints.

Page 19: Machine Learning - cs.cmu.edu

Solution to MED

Theorem (Solution to MED):
– Posterior distribution:
– Dual optimization problem:

Algorithm: to compute αt, t = 1, …, T:
start with αt = 0 (uniform distribution)
iterative ascent on J(α) until convergence

Examples: SVMs

Theorem: for f(x) = wᵀx + b, Q0(w) = Normal(0, I), and Q0(b) a non-informative prior, the Lagrange multipliers α are obtained by maximizing J(α) subject to 0 ≤ αt ≤ C and Σt αt yt = 0, where

Separable D: the SVM is recovered exactly.
Inseparable D: the SVM is recovered with a different misclassification penalty.

Page 20: Machine Learning - cs.cmu.edu

SVM extensions

Example: Leptograpsus crabs (5 inputs, Ttrain = 80, Ttest = 120)

[Figure panels: Linear SVM; Max Likelihood Gaussian; MRE Gaussian]

(3) Structured Prediction

Unstructured prediction

Structured prediction:
Part-of-speech tagging: "Do you want sugar in it?" ⇒ <verb pron verb noun prep pron>
Image segmentation

Page 21: Machine Learning - cs.cmu.edu

OCR example

[Figure: a handwritten word image x is mapped to the label sequence y = "brace"]

Sequential structure:

[Figure: chain-structured model with one output variable yk ∈ {a–z} per character, connected to the input x]

Classical Classification Models

Inputs: a set of training samples D = {(xi, yi)}, where xi ∈ X and yi ∈ Y

Outputs: a predictive function h(x):

Examples:
SVM:
Logistic Regression:

Page 22: Machine Learning - cs.cmu.edu

Structured Models

$$h(\mathbf{x}) = \arg\max_{\mathbf{y} \in \mathcal{Y}(\mathbf{x})} F(\mathbf{x}, \mathbf{y}; \mathbf{w})$$

where Y(x) is the space of feasible outputs and F is the discriminant function.

Assumptions:

Linear combination of features: F(x, y; w) = wᵀf(x, y)

Sum of partial scores: index p represents a part in the structure, so F(x, y; w) = Σp wᵀfp(x, yp) (see the sketch below)

Random fields or Markov network features:
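A toy numeric sketch of the "sum of partial scores" assumption for a chain (as in the OCR example): per-position scores plus transition scores between adjacent labels, with a brute-force argmax over all label sequences. The alphabet, scores, and sizes are my own assumptions, and brute force is only viable because the example is tiny.

```python
# Hedged sketch: F(x, y; w) as a sum of partial scores over a chain, with brute-force argmax.
import itertools
import numpy as np

labels = ["a", "b", "c"]                       # tiny alphabet instead of a-z
L, T = len(labels), 4                          # label count and sequence length

rng = np.random.default_rng(0)
emission = rng.normal(size=(T, L))             # score of label l at position t (depends on x)
transition = rng.normal(size=(L, L))           # score of label pair on adjacent positions

def score(y):                                  # F(x, y; w) as a sum of partial scores
    s = sum(emission[t, y[t]] for t in range(T))
    s += sum(transition[y[t], y[t + 1]] for t in range(T - 1))
    return s

best = max(itertools.product(range(L), repeat=T), key=score)   # argmax_y F(x, y; w)
print("best sequence:", "".join(labels[i] for i in best), "score:", round(score(best), 3))
```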

Discriminative Learning Strategies

Max conditional likelihood:

We predict based on:

$$\mathbf{y}^* \mid \mathbf{x} = \arg\max_{\mathbf{y}}\, p_{\mathbf{w}}(\mathbf{y}\mid\mathbf{x}) = \arg\max_{\mathbf{y}} \frac{1}{Z(\mathbf{w},\mathbf{x})}\exp\Big\{\sum_c w_c f_c(\mathbf{x},\mathbf{y})\Big\}$$

And we learn based on:

$$\mathbf{w}^* \mid \{\mathbf{x}_i,\mathbf{y}_i\} = \arg\max_{\mathbf{w}} \prod_i p_{\mathbf{w}}(\mathbf{y}_i\mid\mathbf{x}_i) = \arg\max_{\mathbf{w}} \prod_i \frac{1}{Z(\mathbf{w},\mathbf{x}_i)}\exp\Big\{\sum_c w_c f_c(\mathbf{x}_i,\mathbf{y}_i)\Big\}$$

Max margin:

We predict based on:

$$\mathbf{y}^* \mid \mathbf{x} = \arg\max_{\mathbf{y}} \sum_c w_c f_c(\mathbf{x},\mathbf{y}) = \arg\max_{\mathbf{y}} \mathbf{w}^T \mathbf{f}(\mathbf{x},\mathbf{y})$$

And we learn based on:

$$\mathbf{w}^* \mid \{\mathbf{x}_i,\mathbf{y}_i\} = \arg\max_{\mathbf{w}} \min_i \min_{\mathbf{y}\ne\mathbf{y}_i}\big(\mathbf{w}^T\mathbf{f}(\mathbf{x}_i,\mathbf{y}_i) - \mathbf{w}^T\mathbf{f}(\mathbf{x}_i,\mathbf{y})\big)$$

Page 23: Machine Learning - cs.cmu.edu


E.g. Max-Margin Markov Networks

Convex Optimization Problem:

Feasible subspace of weights:

Predictive Function:


OCR Example

We want: argmaxword wᵀf(x, word) = "brace" (here x is the image of the handwritten word)

Equivalently:
wᵀf(x, "brace") > wᵀf(x, "aaaaa")
wᵀf(x, "brace") > wᵀf(x, "aaaab")
… a lot! …
wᵀf(x, "brace") > wᵀf(x, "zzzzz")

Page 24: Machine Learning - cs.cmu.edu

Min-max Formulation

Brute-force enumeration of constraints: the constraints are exponential in the size of the structure.

Alternative: min-max formulation; add only the most violated constraint.

Handles more general loss functions.
Only a polynomial number of constraints is needed.
Several algorithms exist … (a sketch of the most-violated-constraint step follows below)
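A hedged sketch of the "most violated constraint" step for a fixed current w: search a small, enumerable output space for the y maximizing wᵀf(x, y) + loss(y, yi) and add it to the working set if it violates the margin. The feature map, loss, data, and tolerance are my own toy choices; a real cutting-plane solver would re-solve for w after each addition.

```python
# Hedged sketch: finding the most violated constraint for one training example.
import itertools
import numpy as np

labels, T = range(3), 3                        # tiny structured output space: 3^3 sequences
y_true = (0, 1, 2)

def f(x, y):                                   # toy joint feature map f(x, y)
    feat = np.zeros((T, len(labels)))
    for t, l in enumerate(y):
        feat[t, l] = x[t]
    return feat.ravel()

def hamming_loss(y, y_i):
    return sum(a != b for a, b in zip(y, y_i))

def most_violated(w, x, y_i):
    return max(itertools.product(labels, repeat=T),
               key=lambda y: w @ f(x, y) + hamming_loss(y, y_i))

x = np.array([1.0, 2.0, 0.5])
w = np.zeros(len(f(x, y_true)))                # e.g. current iterate of a cutting-plane solver
working_set = []

y_hat = most_violated(w, x, y_true)
violation = w @ f(x, y_hat) + hamming_loss(y_hat, y_true) - w @ f(x, y_true)
if violation > 1e-6:                           # constraint violated: add it, then re-solve for w
    working_set.append((x, y_true, y_hat))
print("most violated output:", y_hat, "violation:", round(violation, 3))
```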

Results: Handwriting Recognition

Data: word length ~8 characters; each letter is a 16x8 pixel image; 10-fold train/test split with 5000/50000 letters and 600/6000 words.

Models: multiclass SVMs* (Crammer & Singer 01) and M³ nets.

[Figure: bar chart of test error (average per-character) for raw pixels, quadratic kernel, and cubic kernel, comparing MC-SVMs and M³ nets; lower is better]

33% error reduction over multiclass SVMs.

Page 25: Machine Learning - cs.cmu.edu

Discriminative Learning Paradigms

[Figure: diagram relating SVM (single output) and M³N (structured output "b r a c e") on the max-margin side to MED on the model-averaging side; the remaining corner "?" is MED-MN = SMED + Bayesian M³N]

See [Zhu and Xing, 2008].

Summary

Maximum margin nonlinear separator:
Kernel trick: project into a linearly separable space (possibly high- or infinite-dimensional); no need to know the explicit projection function.

Max-entropy discrimination: average rule for prediction, with the average taken over a posterior distribution of w, which defines the separating hyperplane; P(w) is obtained by the max-entropy or min-KL principle, subject to expected margin constraints on the training examples.

Max-margin Markov network: multivariate, rather than univariate, output Y; variables in the output are not independent of each other (structured input/output); margin constraints over every possible configuration of Y (exponentially many!).