CHAPTER 15



Supervised Classification

CLASSIFICATION

A. Dermanis

mi = (1/ni) Σ_{x∈Si} x          (class mean vector)

Ci = (1/ni) Σ_{x∈Si} (x – mi)(x – mi)T          (class covariance matrix)

Supervised classification methods:

Parallelepiped
Euclidean distance (minimization)
Mahalanobis distance (minimization)
Maximum likelihood
Bayesian (maximum a posteriori probability density)


The known pixels in each of the predefined classes ω1, ω2, ..., ωK form corresponding “sample sets” S1, S2, ..., SK, with n1, n2, ..., nK pixels respectively.


Estimates from each sample set Si (i = 1, 2, …, K): the class mean vectors mi and the class covariance matrices Ci.
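The two estimates can be sketched in Python with NumPy. The sample sets S1, S2 below are invented training pixels in B = 2 bands (one row per pixel); only the formulas come from the slides:

```python
import numpy as np

# Sample-set estimates of the class statistics. S1, S2 are hypothetical
# training pixels, one row per pixel, one column per band.
S1 = np.array([[1.0, 2.0], [2.0, 3.0], [3.0, 4.0]])
S2 = np.array([[8.0, 8.0], [9.0, 10.0], [10.0, 9.0]])

def class_statistics(S):
    """mi = (1/ni) sum x  and  Ci = (1/ni) sum (x - mi)(x - mi)^T over
    x in S. Note the 1/ni normalization used in the slides, not the
    unbiased 1/(ni - 1)."""
    n = S.shape[0]
    m = S.mean(axis=0)
    D = S - m                 # deviations from the class mean
    return m, (D.T @ D) / n

m1, C1 = class_statistics(S1)
m2, C2 = class_statistics(S2)
```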


|| x – mi || = min_k || x – mk ||  ⇒  x ∈ ωi

dE(x, x′) = || x – x′ || = √[(x1 – x1′)² + (x2 – x2′)² + … + (xB – xB′)²]

(a) Simple

Assign each pixel to the class of the closest center (class mean)

Boundaries between class regions = perpendicular bisectors of the segments joining the class centers.

Classification with Euclidean distance


|| x – mi || = min_k || x – mk || and || x – mi || ≤ T  ⇒  x ∈ ωi

|| x – mi || > T, ∀i  ⇒  x ∈ ω0

(b) With threshold T

Assign each pixel to the class of the closest center (class mean) if the distance is smaller than the threshold.

Leave the pixel unclassified (class ω0) if all class centers are at distances larger than the threshold.


The role of statistics (dispersion) in classification

[Figure: the same data classified two ways, labeled RIGHT and WRONG, illustrating the effect of class dispersion]
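Rules (a) and (b) can be sketched as a minimum-distance classifier in Python. The class means below are illustrative placeholders, not values from the slides:

```python
import numpy as np

# Minimum-Euclidean-distance classifier with the optional threshold T
# of rule (b). The class means m_i are hypothetical.
means = np.array([[0.0, 0.0],     # m_1
                  [10.0, 10.0]])  # m_2

def classify_euclidean(x, means, T=None):
    """Return the 1-based class index i minimizing ||x - m_i||,
    or 0 (class omega_0, unclassified) if that minimum exceeds T."""
    d = np.linalg.norm(means - x, axis=1)
    i = int(np.argmin(d))
    if T is not None and d[i] > T:
        return 0                  # every center farther than the threshold
    return i + 1
```

With `T=None` the function implements rule (a); passing a threshold activates rule (b).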


σij = √[(Ci)jj] ,  j = 1, 2, …, B
(standard deviations for each band)

Classification with the parallelepiped method

Parallelepipeds Pi:

x = [x1 … xj … xB]T ∈ Pi  ⇔  mij – k σij ≤ xj ≤ mij + k σij ,  ∀j = 1, 2, …, B

Classification:

x ∈ Pi and x ∉ Pj , ∀j ≠ i  ⇒  x ∈ ωi
x ∉ Pi , ∀i  ⇒  x ∈ ω0
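A minimal sketch of the parallelepiped rule, with invented means and per-band standard deviations; treating a pixel that falls in more than one box as unclassified is one common convention, consistent with the rule above:

```python
import numpy as np

# Parallelepiped classifier: P_i is the box m_ij +/- k*sigma_ij in
# every band j. Means and sigmas are illustrative values.
means  = np.array([[2.0, 3.0], [9.0, 9.0]])
sigmas = np.array([[1.0, 1.0], [1.0, 1.0]])

def classify_parallelepiped(x, means, sigmas, k=2.0):
    """Assign x to omega_i if it falls inside P_i and no other box;
    return 0 (omega_0) if it lies in no box or in overlapping boxes."""
    inside = np.all((x >= means - k * sigmas) &
                    (x <= means + k * sigmas), axis=1)
    hits = np.flatnonzero(inside)
    return int(hits[0]) + 1 if hits.size == 1 else 0
```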


Classification with the Mahalanobis distance

Total covariance matrix:

C = (1/N) Σi Σ_{x∈Si} (x – mi)(x – mi)T = (1/N) Σi ni Ci

Mahalanobis distance:

dM(x, x′) = √[(x – x′)T C–1 (x – x′)]

Classification (simple):

dM(x, mi) < dM(x, mk), ∀k ≠ i  ⇒  x ∈ ωi

Classification with threshold:

dM(x, mi) < dM(x, mk), ∀k ≠ i, and dM(x, mi) ≤ T  ⇒  x ∈ ωi
dM(x, mi) > T, ∀i  ⇒  x ∈ ω0
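The Mahalanobis rule can be sketched with one common covariance matrix C shared by all classes; C and the means below are invented for illustration:

```python
import numpy as np

# Mahalanobis-distance classifier using a single (total) covariance
# matrix C for all classes. Values are hypothetical.
means = np.array([[0.0, 0.0], [5.0, 5.0]])
C = np.array([[2.0, 0.5],
              [0.5, 1.0]])
C_inv = np.linalg.inv(C)

def d_mahalanobis(x, m, C_inv):
    """d_M(x, m) = sqrt((x - m)^T C^-1 (x - m))."""
    d = x - m
    return float(np.sqrt(d @ C_inv @ d))

def classify_mahalanobis(x, means, C_inv, T=None):
    """Nearest class mean in Mahalanobis distance; 0 (omega_0) beyond T."""
    dists = [d_mahalanobis(x, m, C_inv) for m in means]
    i = int(np.argmin(dists))
    if T is not None and dists[i] > T:
        return 0
    return i + 1
```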


Classification with the maximum likelihood method

Probability distribution density function or likelihood function of class ωi:

li(x) = [1 / ((2π)B/2 |Ci|1/2)] exp[ –(1/2) (x – mi)T Ci–1 (x – mi) ]

Classification:

li(x) > lk(x), ∀k ≠ i  ⇒  x ∈ ωi

Equivalent use of decision function:

di(x) = 2 ln[li(x)] + B ln(2π) = – ln|Ci| – (x – mi)T Ci–1 (x – mi)

di(x) > dk(x), ∀k ≠ i  ⇒  x ∈ ωi
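The decision-function form can be sketched directly, since it ranks classes exactly like the Gaussian likelihoods. Means and per-class covariances below are invented:

```python
import numpy as np

# Maximum-likelihood classifier via the equivalent decision function
# d_i(x) = -ln|C_i| - (x - m_i)^T C_i^-1 (x - m_i).
means = [np.array([0.0, 0.0]), np.array([4.0, 4.0])]
covs  = [np.eye(2), 2.0 * np.eye(2)]   # hypothetical class covariances

def decision(x, m, C):
    d = x - m
    _, logdet = np.linalg.slogdet(C)   # ln|C|, numerically stable
    return -logdet - d @ np.linalg.inv(C) @ d

def classify_ml(x, means, covs):
    """Assign x to the class whose decision function d_i(x) is largest."""
    scores = [decision(x, m, C) for m, C in zip(means, covs)]
    return int(np.argmax(scores)) + 1
```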


Classification using the Bayesian approach

N : total number of pixels in the image (i.e. in each band)
B : number of bands
ω1, ω2, …, ωK : the K classes present in the image
Ni : number of image pixels belonging to the class ωi (i = 1, 2, …, K)
nx : number of pixels with value x (= vector of values in all bands)
nxi : number of pixels with value x which also belong to the class ωi

Basic identity:

nxi / N = (nxi / Ni)(Ni / N) = (nxi / nx)(nx / N)


p(ωi) = Ni / N : probability of a pixel to belong to the class ωi

p(x) = nx / N : probability of a pixel to have the value x

p(x | ωi) = nxi / Ni : probability of a pixel belonging to the class ωi to have value x (conditional probability)

p(x, ωi) = nxi / N : probability of a pixel to have the value x and to simultaneously belong to ωi (joint probability)

p(ωi | x) = nxi / nx : probability of a pixel having value x to belong to the class ωi (conditional probability)
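With hypothetical counts, these frequency definitions and the basic identity can be checked exactly (the numbers are invented for illustration):

```python
from fractions import Fraction

# Hypothetical pixel counts (not from the slides), chosen so that
# n_xi <= n_x <= N and n_xi <= N_i <= N.
N, N_i, n_x, n_xi = 100, 40, 10, 6

p_omega_i   = Fraction(N_i, N)     # p(omega_i)     = N_i / N
p_x         = Fraction(n_x, N)     # p(x)           = n_x / N
p_x_given_i = Fraction(n_xi, N_i)  # p(x | omega_i) = n_xi / N_i
p_i_given_x = Fraction(n_xi, n_x)  # p(omega_i | x) = n_xi / n_x
p_joint     = Fraction(n_xi, N)    # p(x, omega_i)  = n_xi / N

# Basic identity: both factorizations reproduce the joint probability,
# which is the formula of Bayes written in count form.
assert p_x_given_i * p_omega_i == p_joint == p_i_given_x * p_x
```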


⇒ formula of Bayes:

p(ωi | x) = p(x | ωi) p(ωi) / p(x)

The Bayes theorem:

Pr(A | B) = Pr(AB) / Pr(B)
Pr(A | B) Pr(B) = Pr(AB) = Pr(B | A) Pr(A)
⇒ Pr(B | A) = Pr(A | B) Pr(B) / Pr(A)

Here, event A = occurrence of the value x, event B = occurrence of the class ωi.

Classification:

p(ωi | x) > p(ωk | x), ∀k ≠ i  ⇒  x ∈ ωi

or equivalently, since p(x) is a common constant factor (not necessary):

p(x | ωi) p(ωi) > p(x | ωk) p(ωk), ∀k ≠ i  ⇒  x ∈ ωi


Classification:

p(x | ωi) p(ωi) = max_k [ p(x | ωk) p(ωk) ]  ⇒  x ∈ ωi

Instead of p(x | ωi) p(ωi) = max, equivalent:

ln[p(x | ωi) p(ωi)] = ln[p(x | ωi)] + ln[p(ωi)] = max

For Gaussian distribution:

p(x | ωi) = li(x) = [1 / ((2π)B/2 |Ci|1/2)] exp{ –(1/2) (x – mi)T Ci–1 (x – mi) }

so the rule becomes:

–(1/2) (x – mi)T Ci–1 (x – mi) – (1/2) ln|Ci| + ln[p(ωi)] = max

or finally:

(x – mi)T Ci–1 (x – mi) + ln|Ci| – 2 ln[p(ωi)] = min
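A minimal sketch of the resulting Bayesian (MAP) rule for Gaussian classes, minimizing the cost (x – mi)T Ci–1 (x – mi) + ln|Ci| – 2 ln[p(ωi)]; the means, covariances, and priors are invented for illustration:

```python
import numpy as np

# Bayesian (MAP) classifier for Gaussian classes: choose the class
# minimizing (x - m_i)^T C_i^-1 (x - m_i) + ln|C_i| - 2 ln p(omega_i).
means  = [np.array([0.0, 0.0]), np.array([3.0, 3.0])]
covs   = [np.eye(2), np.eye(2)]
priors = [0.9, 0.1]                 # hypothetical prior probabilities

def classify_bayes(x, means, covs, priors):
    costs = []
    for m, C, p in zip(means, covs, priors):
        d = x - m
        _, logdet = np.linalg.slogdet(C)
        costs.append(d @ np.linalg.inv(C) @ d + logdet - 2.0 * np.log(p))
    return int(np.argmin(costs)) + 1
```

At the midpoint of the two means the larger prior decides, which is exactly where the MAP rule departs from maximum likelihood.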


Bayesian Classification for Gaussian distribution:

(x – mi)T Ci–1 (x – mi) + ln|Ci| – 2 ln[p(ωi)] = min

SPECIAL CASES:

p(ω1) = p(ω2) = … = p(ωK):
(x – mi)T Ci–1 (x – mi) + ln|Ci| = min  →  Maximum Likelihood !

p(ω1) = p(ω2) = … = p(ωK) and C1 = C2 = … = CK = C:
(x – mi)T C–1 (x – mi) = min  →  Mahalanobis distance !

p(ω1) = p(ω2) = … = p(ωK) and C1 = C2 = … = CK = I:
(x – mi)T (x – mi) = min  →  Euclidean distance !
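The last special case can be spot-checked numerically: with equal priors and Ci = I, the Bayesian cost differs from the squared Euclidean distance only by a constant, so both rules always agree (illustrative means, random test points):

```python
import numpy as np

# Spot-check: equal priors and C_i = I reduce the Bayesian cost to the
# squared Euclidean distance plus a constant. Means are hypothetical.
means = np.array([[0.0, 0.0], [4.0, 1.0], [1.0, 5.0]])
priors = np.full(len(means), 1.0 / len(means))

def bayes_class(x):
    # (x - m_i)^T (x - m_i) + ln|I| - 2 ln p(omega_i); ln|I| = 0 and
    # the prior term is the same constant for every class here.
    costs = [(x - m) @ (x - m) - 2.0 * np.log(p)
             for m, p in zip(means, priors)]
    return int(np.argmin(costs))

def euclid_class(x):
    return int(np.argmin(np.linalg.norm(means - x, axis=1)))

rng = np.random.default_rng(0)
for x in rng.uniform(-2.0, 6.0, size=(20, 2)):
    assert bayes_class(x) == euclid_class(x)
```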


Want to learn more?

A. Dermanis, L. Biagi: Telerilevamento (Remote Sensing). Casa Editrice Ambrosiana.