Stat 312: Lecture 04
Moment Matching

Moo K. Chung, [email protected]

September 14, 2004

1. Given a random sample X_1, ..., X_n, a linear estimator of the parameter θ is an estimator of the form

   θ̂ = ∑_{i=1}^n c_i X_i.

   Then it can be shown that X̄ is the MVUE for the population mean among all linear unbiased estimators.

   Proof. The case n = 2 will be proved; the general statement follows inductively. Consider linear estimators

   μ̂ = c_1 X_1 + c_2 X_2.

   To be unbiased, c_1 + c_2 = 1. To be the most efficient among all unbiased linear estimators, the variance has to be minimized. The variance is

   V μ̂ = c_1^2 V X_1 + c_2^2 V X_2 = [c_1^2 + (1 − c_1)^2] σ^2.

   The quadratic term in the bracket, 2c_1^2 − 2c_1 + 1, is minimized when c_1 = 1/2 (its derivative 4c_1 − 2 vanishes there), giving μ̂ = (X_1 + X_2)/2 = X̄.
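   The minimization above can also be checked numerically. The sketch below is an illustration not in the original notes; the population mean, standard deviation, and simulation count are arbitrary choices. It simulates unbiased linear estimators c_1 X_1 + (1 − c_1) X_2 and shows the variance is smallest at c_1 = 1/2.

   ```python
   import random

   random.seed(0)
   MU = 5.0          # population mean (arbitrary choice)
   SIGMA = 2.0       # population standard deviation (arbitrary choice)
   N_REPS = 200_000  # number of simulated samples

   def estimator_variance(c1):
       """Monte Carlo variance of the unbiased linear estimator c1*X1 + (1-c1)*X2."""
       estimates = [
           c1 * random.gauss(MU, SIGMA) + (1 - c1) * random.gauss(MU, SIGMA)
           for _ in range(N_REPS)
       ]
       mean = sum(estimates) / N_REPS
       return sum((e - mean) ** 2 for e in estimates) / N_REPS

   for c1 in (0.1, 0.3, 0.5, 0.7, 0.9):
       # theoretical variance from the proof: [c1^2 + (1-c1)^2] * sigma^2
       theory = (c1**2 + (1 - c1) ** 2) * SIGMA**2
       print(f"c1={c1:.1f}  simulated={estimator_variance(c1):6.3f}  theory={theory:6.3f}")
   ```

   Both columns bottom out at c_1 = 0.5, where the variance equals σ^2/2, matching the quadratic in the proof.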

2. Given a random sample X_1, ..., X_n, the k-th sample moment is

   M_k = ∑_{j=1}^n X_j^k / n.

   The moment estimators of population parameters are obtained by matching the sample moments to the corresponding population moments and solving the resulting equations simultaneously.
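   As a small illustration (not part of the original notes; the data values are arbitrary), the k-th sample moment can be computed directly from its definition:

   ```python
   def sample_moment(xs, k):
       """k-th sample moment M_k = (1/n) * sum of x_j^k."""
       return sum(x**k for x in xs) / len(xs)

   data = [1.0, 2.0, 3.0, 4.0]   # toy sample (arbitrary)
   m1 = sample_moment(data, 1)   # the sample mean, matched to E X
   m2 = sample_moment(data, 2)   # matched to E X^2
   print(m1, m2)                 # 2.5 7.5
   ```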

3. Exponential distribution. X has an exponential distribution with parameter λ if the density function is f(x) = λ e^{−λx} for x ≥ 0. It can be shown that EX = 1/λ and VX = 1/λ^2.
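   Combining items 2 and 3: matching the first sample moment M_1 = X̄ to the first population moment EX = 1/λ gives the moment estimator λ̂ = 1/X̄. A minimal simulation sketch (the true λ and sample size are arbitrary choices, not from the notes):

   ```python
   import random

   random.seed(1)
   TRUE_LAMBDA = 2.0   # true parameter (arbitrary choice)
   n = 50_000

   # random.expovariate(lambd) draws from the density lambda * exp(-lambda * x)
   xs = [random.expovariate(TRUE_LAMBDA) for _ in range(n)]

   xbar = sum(xs) / n        # first sample moment M_1
   lambda_hat = 1 / xbar     # moment estimator: solve M_1 = 1/lambda
   print(lambda_hat)         # close to 2.0 for large n
   ```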

4. Given a random sample X_1, ..., X_n, the likelihood function is given as the product of the probability or density functions, i.e.

   L(θ) = f(x_1; θ) f(x_2; θ) ··· f(x_n; θ).

   The maximum likelihood estimate of θ is an estimate that maximizes L(θ). If we denote by θ̂ = θ̂(x_1, ..., x_n) the maximum likelihood estimate, then the maximum likelihood estimator (MLE) of θ is θ̂ = θ̂(X_1, ..., X_n).
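   To make this concrete (an added sketch, not part of the original notes; the toy data are arbitrary): for the exponential density of item 3, L(λ) = λ^n e^{−λ ∑ x_i}, and setting the derivative of log L(λ) to zero gives the closed-form MLE λ̂ = 1/x̄. The code below checks numerically that no λ on a surrounding grid attains a higher log-likelihood.

   ```python
   import math

   def exp_log_likelihood(lam, xs):
       """log L(lambda) = n*log(lambda) - lambda*sum(xs), for density lambda*exp(-lambda*x)."""
       return len(xs) * math.log(lam) - lam * sum(xs)

   xs = [0.2, 0.5, 0.9, 1.4, 2.0]   # toy data (arbitrary); sample mean is 1.0
   mle = 1 / (sum(xs) / len(xs))    # closed-form MLE: 1 / sample mean

   # the closed-form MLE beats every lambda on a fine grid around it
   grid = [0.01 * i for i in range(1, 301)]
   best_on_grid = max(grid, key=lambda lam: exp_log_likelihood(lam, xs))
   print(mle, best_on_grid)
   ```

   Note that the MLE here coincides with the moment estimator 1/X̄ from item 3; the two methods do not agree in general.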

Review Problems. Example 6.12. Example 6.16. Example 6.17.