PAC-learning, VC Dimension


Transcript of PAC-learning, VC Dimension


Machine Learning – 10701/15781, Carlos Guestrin, Carnegie Mellon University

October 24th, 2007


A simple setting…

Classification with m data points
Finite number of possible hypotheses (e.g., decision trees of depth d)

A learner finds a hypothesis h that is consistent with the training data

Gets zero error in training – error_train(h) = 0

What is the probability that h has more than ε true error?

error_true(h) ≥ ε


How likely is a bad hypothesis to get m data points right?

Hypothesis h that is consistent with training data → got m i.i.d. points right

h “bad” if it gets all this data right, but has high true error

Prob. that an h with error_true(h) ≥ ε gets one data point right: at most 1 − ε

Prob. that an h with error_true(h) ≥ ε gets all m i.i.d. data points right: at most (1 − ε)^m
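To sanity-check this step numerically, here is a small Python sketch (not from the lecture; eps, m, and the number of runs are arbitrary illustrative values) that simulates a hypothesis whose true error is exactly ε and estimates how often it labels all m i.i.d. points correctly:

```python
import random

# Illustrative values: a hypothesis with true error exactly eps, m i.i.d. points.
eps, m, runs = 0.1, 30, 100_000
all_right = sum(all(random.random() > eps for _ in range(m)) for _ in range(runs))
print("empirical P(h gets all m points right):", all_right / runs)
print("(1 - eps)^m                           :", (1 - eps) ** m)
```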


But there are many possible hypotheses that are consistent with the training data


How likely is the learner to pick a bad hypothesis?

Prob. that an h with error_true(h) ≥ ε gets m data points right: at most (1 − ε)^m

There are k hypotheses consistent with the data. How likely is the learner to pick a bad one?


Union bound

P(A or B or C or D or …) ≤ P(A) + P(B) + P(C) + P(D) + …


How likely is the learner to pick a bad hypothesis?

Prob. that an h with error_true(h) ≥ ε gets m data points right: at most (1 − ε)^m

There are k hypotheses consistent with the data. How likely is the learner to pick a bad one? By the union bound over all hypotheses with error_true(h) ≥ ε, the probability that any of them is consistent with the m points (and could therefore be picked) is at most |H| (1 − ε)^m.


Review: Generalization error in finite hypothesis spaces [Haussler ’88]

Theorem: Hypothesis space H finite, dataset D with m i.i.d. samples, 0 < ε < 1: for any learned hypothesis h that is consistent on the training data,

P(error_true(h) > ε) ≤ |H| e^(−mε)
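To make the theorem concrete, here is a small sketch (my own helper, with arbitrary illustrative numbers) that evaluates both the union-bound expression |H|(1 − ε)^m and its relaxation |H| e^(−mε):

```python
import math

def haussler_bound(H_size, m, eps):
    """Upper bounds on P(some consistent h has error_true(h) > eps)."""
    exact = H_size * (1 - eps) ** m        # union bound over the "bad" hypotheses
    relaxed = H_size * math.exp(-m * eps)  # uses (1 - eps) <= e^(-eps)
    return exact, relaxed

# Illustrative numbers: |H| = 2^20 hypotheses, eps = 0.1
for m in (100, 200, 500, 1000):
    exact, relaxed = haussler_bound(2 ** 20, m, 0.1)
    print(f"m={m:5d}  |H|(1-eps)^m = {exact:.3e}   |H|e^(-m eps) = {relaxed:.3e}")
```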


Using a PAC bound

Typically, 2 use cases:
1: Pick ε and δ, give you m
2: Pick m and δ, give you ε
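As a rough illustration of these two use cases (a sketch under the assumption that the bound has the form |H| e^(−mε) ≤ δ; function names are mine), the bound can be inverted either for m or for ε:

```python
import math

def sample_complexity(H_size, eps, delta):
    # Use case 1: given eps and delta, how many samples suffice?
    # From |H| e^(-m eps) <= delta:  m >= (ln|H| + ln(1/delta)) / eps
    return math.ceil((math.log(H_size) + math.log(1 / delta)) / eps)

def error_guarantee(H_size, m, delta):
    # Use case 2: given m and delta, which eps is guaranteed?
    return (math.log(H_size) + math.log(1 / delta)) / m

print(sample_complexity(2 ** 20, eps=0.1, delta=0.05))   # samples needed
print(error_guarantee(2 ** 20, m=1000, delta=0.05))      # guaranteed eps
```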


Review: Generalization error in finite hypothesis spaces [Haussler ’88]

Theorem: Hypothesis space H finite, dataset D with m i.i.d. samples, 0 < ε < 1: for any learned hypothesis h that is consistent on the training data,

P(error_true(h) > ε) ≤ |H| e^(−mε)

Even if h makes zero errors on the training data, it may make errors at test time


Limitations of Haussler ‘88 bound

Consistent classifier – the bound only applies when error_train(h) = 0

Size of hypothesis space – the bound depends on ln |H|, which can be huge (or infinite for continuous hypothesis spaces)


What if our classifier does not have zero error on the training data?

A learner with zero training error may still make mistakes on the test set. What about a learner with error_train(h) > 0 on the training set?


Simpler question: What's the expected error of a hypothesis?
The error of a hypothesis is like estimating the parameter of a coin!

Chernoff bound: for m i.i.d. coin flips, x_1, …, x_m, where x_i ∈ {0,1} and P(x_i = 1) = θ, and for 0 < ε < 1:

P(θ − (1/m) Σ_i x_i > ε) ≤ e^(−2mε²)
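A quick empirical check of the bound (a sketch with arbitrary θ, m, ε, and run count; not part of the lecture):

```python
import math
import random

theta, m, eps, runs = 0.3, 200, 0.05, 20_000   # arbitrary illustrative values
deviations = 0
for _ in range(runs):
    heads = sum(random.random() < theta for _ in range(m))
    if theta - heads / m > eps:                # one-sided deviation, as in the bound
        deviations += 1

print("empirical P(theta - sample mean > eps):", deviations / runs)
print("bound e^(-2 m eps^2)                  :", math.exp(-2 * m * eps ** 2))
```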


Using Chernoff bound to estimate error of a single hypothesis

Treat each prediction as a coin flip (1 if h is wrong, 0 if right); then error_true(h) plays the role of θ and error_train(h) of the empirical mean:

P(error_true(h) − error_train(h) > ε) ≤ e^(−2mε²)


But we are comparing many hypotheses: Union bound

For each hypothesis h_i: P(error_true(h_i) − error_train(h_i) > ε) ≤ e^(−2mε²)

What if I am comparing two hypotheses, h_1 and h_2?


Generalization bound for |H| hypotheses

Theorem: Hypothesis space H finite, dataset D with m i.i.d. samples, 0 < ε < 1: for any learned hypothesis h,

P(error_true(h) − error_train(h) > ε) ≤ |H| e^(−2mε²)


PAC bound and Bias-Variance tradeoff

Important: PAC bound holds for all h, but doesn’t guarantee that algorithm finds best h!!!

or, after moving some terms around, with probability at least 1 − δ:

error_true(h) ≤ error_train(h) + √( (ln |H| + ln(1/δ)) / (2m) )
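A minimal sketch of the tradeoff (illustrative m, δ, and hypothesis-space sizes; the helper name is mine): a richer hypothesis space can drive error_train(h) down but inflates the second term of the bound.

```python
import math

def complexity_penalty(H_size, m, delta):
    # The second term of the bound: sqrt((ln|H| + ln(1/delta)) / (2m))
    return math.sqrt((math.log(H_size) + math.log(1 / delta)) / (2 * m))

m, delta = 1000, 0.05
for log2_H in (10, 20, 40, 80):                 # hypothetical hypothesis-space sizes
    penalty = complexity_penalty(2 ** log2_H, m, delta)
    print(f"|H| = 2^{log2_H:2d}  penalty = {penalty:.3f}")
```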


What about the size of the hypothesis space?

How large is the hypothesis space?


Boolean formulas with n binary features: |H| = 2^(2^n) (one hypothesis for each Boolean function of the n features)


Number of decision trees of depth k

Recursive solution: given n attributes,
H_k = number of decision trees of depth k
H_0 = 2
H_{k+1} = (# choices of root attribute) × (# possible left subtrees) × (# possible right subtrees) = n · H_k · H_k

Write L_k = log2 H_k
L_0 = 1
L_{k+1} = log2 n + 2 L_k
So L_k = (2^k − 1)(1 + log2 n) + 1
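A short sketch that checks the recursion against the closed form numerically, working in log2 space to avoid huge integers (n is an arbitrary number of binary attributes):

```python
import math

def L_recursive(k, n):
    # L_0 = 1, L_{k+1} = log2 n + 2 L_k
    L = 1.0
    for _ in range(k):
        L = math.log2(n) + 2 * L
    return L

def L_closed_form(k, n):
    # L_k = (2^k - 1)(1 + log2 n) + 1
    return (2 ** k - 1) * (1 + math.log2(n)) + 1

n = 10  # illustrative number of attributes
for k in range(6):
    print(k, round(L_recursive(k, n), 3), round(L_closed_form(k, n), 3))
```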


PAC bound for decision trees of depth k

Bad!!! Number of points is exponential in depth: plugging ln H_k = L_k ln 2 into m ≥ (ln H_k + ln(1/δ))/ε makes the required m grow as 2^k (see the sketch below)

But, for m data points, a decision tree can't get too big…

Number of leaves is never more than the number of data points
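To see the blow-up concretely, here is a sketch (illustrative n, ε, δ; the helper name is mine) that plugs ln H_k = L_k · ln 2 into the use-case-1 sample-complexity bound m ≥ (ln H_k + ln(1/δ)) / ε:

```python
import math

def tree_depth_sample_complexity(k, n, eps, delta):
    # ln|H_k| = L_k * ln 2, with L_k = (2^k - 1)(1 + log2 n) + 1
    L_k = (2 ** k - 1) * (1 + math.log2(n)) + 1
    return math.ceil((L_k * math.log(2) + math.log(1 / delta)) / eps)

n, eps, delta = 10, 0.1, 0.05   # illustrative values
for k in range(1, 8):
    print(f"depth k={k}: m >= {tree_depth_sample_complexity(k, n, eps, delta)}")
```

The required m roughly doubles with each extra level of depth, which is why the next slides measure tree size by the number of leaves (never more than m) instead.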


Number of decision trees with k leaves

H_k = number of decision trees with k leaves
H_0 = 2

Loose bound:

Reminder:


PAC bound for decision trees with k leaves – Bias-Variance revisited


Announcements

Midterm: Thursday Oct. 25th, 5-6:30pm, MM A14

All content up to and including SVMs and Kernels; not learning theory

You may use any book, class notes, your printouts of class materials that are on the class website, including my annotated slides and relevant readings, and Andrew Moore's tutorials. You cannot use materials brought by other students. Calculators are not necessary. No laptops, PDAs or cellphones.


What did we learn from decision trees?

Bias-Variance tradeoff formalized

Moral of the story: the complexity of learning is not measured in terms of the size of the hypothesis space, but in terms of the maximum number of points that allows consistent classification

Complexity m – no bias, lots of variance
Lower than m – some bias, less variance


What about continuous hypothesis spaces?

Continuous hypothesis space: |H| = ∞
Infinite variance???

As with decision trees, only care about the maximum number of points that can be classified exactly!


How many points can a linear boundary classify exactly? (1-D) – 2 points: any labeling of 2 distinct points can be matched by a threshold, but the labeling +, −, + of 3 points cannot.


How many points can a linear boundary classify exactly? (2-D) – 3 points (not all on a line): every labeling of 3 such points is linearly separable, but 4 points cannot always be labeled arbitrarily (e.g., the XOR labeling).


How many points can a linear boundary classify exactly? (d-D) – d + 1 points.


PAC bound using VC dimension

Number of training points that can be classified exactly is VC dimension!!!

Measures relevant size of hypothesis space, as with decision trees with k leaves


Shattering a set of points

A set of points is shattered by hypothesis space H if, for every possible labeling of those points, there is some h ∈ H consistent with that labeling.


VC dimension

The VC dimension of H, VC(H), is the size of the largest set of points that can be shattered by H.
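As an illustration of these two definitions, here is a brute-force sketch (my own helpers, using 1-D threshold classifiers as the hypothesis class) that checks whether a point set is shattered by enumerating all labelings:

```python
from itertools import product

def threshold_classifiers(points):
    # 1-D "linear boundaries": sign(x - t) and sign(t - x) for thresholds t
    # placed below, between, and above the sorted points.
    xs = sorted(points)
    thresholds = [xs[0] - 1] + [(a + b) / 2 for a, b in zip(xs, xs[1:])] + [xs[-1] + 1]
    for t in thresholds:
        yield lambda x, t=t: 1 if x > t else -1
        yield lambda x, t=t: 1 if x < t else -1

def shattered(points):
    # Shattered: every labeling is realized by some classifier in the class.
    for labeling in product([-1, 1], repeat=len(points)):
        if not any(all(h(x) == y for x, y in zip(points, labeling))
                   for h in threshold_classifiers(points)):
            return False
    return True

print(shattered([0.0, 1.0]))        # True: 2 points can be shattered
print(shattered([0.0, 1.0, 2.0]))   # False: the labeling (+, -, +) is impossible
```

For 1-D thresholds this reports that 2 points can be shattered but 3 cannot, matching VC(H) = d + 1 = 2 for linear boundaries in 1-D.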


PAC bound using VC dimension

Number of training points that can be classified exactly is VC dimension!!!

Measures relevant size of hypothesis space, as with decision trees with k leaves
Bound for infinite-dimensional hypothesis spaces:

error_true(h) ≤ error_train(h) + √( ( VC(H) · (ln(2m / VC(H)) + 1) + ln(4/δ) ) / m )
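A small sketch that evaluates the bound's penalty term for a few hypothetical VC dimensions (illustrative m and δ; the function name is mine):

```python
import math

def vc_penalty(vc_dim, m, delta):
    # sqrt( (VC(H) * (ln(2m / VC(H)) + 1) + ln(4/delta)) / m )
    return math.sqrt((vc_dim * (math.log(2 * m / vc_dim) + 1)
                      + math.log(4 / delta)) / m)

m, delta = 10_000, 0.05
for d in (3, 10, 100, 1000):    # hypothetical VC dimensions
    print(f"VC(H) = {d:4d}   penalty = {vc_penalty(d, m, delta):.3f}")
```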


Examples of VC dimension

Linear classifiers: VC(H) = d+1, for d features plus constant term b

Neural networks: VC(H) = # parameters
Local minima mean NNs will probably not find the best parameters

1-Nearest neighbor? VC(H) = ∞ (1-NN can realize any labeling of any set of distinct points)


Another VC dim. example - What can we shatter?

What’s the VC dim. of decision stumps in 2d?


Another VC dim. example -What can’t we shatter?

What’s the VC dim. of decision stumps in 2d?


What you need to know

Finite hypothesis space
Derive results
Counting the number of hypotheses
Mistakes on training data

Complexity of the classifier depends on the number of points that can be classified exactly

Finite case – decision trees
Infinite case – VC dimension

Bias-Variance tradeoff in learning theory
Remember: will your algorithm find the best classifier?


Big Picture
Machine Learning – 10701/15781, Carlos Guestrin, Carnegie Mellon University

October 24th, 2007


What you have learned thus far
Learning is function approximation
Point estimation
Regression
Naïve Bayes
Logistic regression
Bias-Variance tradeoff
Neural nets
Decision trees
Cross validation
Boosting
Instance-based learning
SVMs
Kernel trick
PAC learning
VC dimension
Margin bounds
Mistake bounds


Review material in terms of…

Types of learning problems

Hypothesis spaces

Loss functions

Optimization algorithms


BIG PICTURE (a few points of comparison)

Algorithms compared: Naïve Bayes, logistic regression, neural nets, boosting, SVMs, instance-based learning, SVM regression, kernel regression, linear regression, decision trees.

Each is characterized by its learning task and loss function, abbreviated as: density estimation (DE), classification (Cl), regression (Reg), log-loss/MLE (LL), margin-based (Mrg), squared error (RMS), and exp-loss (boosting).

[Chart from the slide: each algorithm annotated with its task(s) and loss(es), e.g., DE with LL; Cl with Mrg or exp-loss; Reg with RMS or Mrg.]

This is a very incomplete view!!!