Source: math.newhaven.edu/mhm/courses/estat/sum.pdf

Confidence Intervals, Testing and ANOVA Summary

1 One–Sample Tests

1.1 One Sample z–test: Mean (σ known)

Let $X_1, \dots, X_n$ be a random sample from $N(\mu, \sigma)$, or let $n > 30$. Let

$$H_0 : \mu = \mu_0.$$

The test statistic is

$$z = \frac{\bar{x} - \mu_0}{\sigma/\sqrt{n}} \sim N(0, 1)$$

for $H_0$.

A $(1-\alpha)100\%$ confidence interval for $\mu$ is $\bar{x} \pm z^\star \frac{\sigma}{\sqrt{n}}$. The sample size needed for a margin of error $m$ is

$$n = \left(\frac{z^\star \sigma}{m}\right)^2.$$
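As a quick numerical illustration of the formulas above, the following sketch computes the $z$ statistic, the confidence interval, and the required sample size; the summary numbers are made up for the example.

```python
from math import sqrt, ceil
from statistics import NormalDist

# Hypothetical summary data: n = 36, sample mean 52.3, known sigma = 6, H0: mu = 50.
n, xbar, sigma, mu0 = 36, 52.3, 6.0, 50.0

z = (xbar - mu0) / (sigma / sqrt(n))          # test statistic ~ N(0,1) under H0

alpha = 0.05
z_star = NormalDist().inv_cdf(1 - alpha / 2)  # critical value z* ~ 1.96
ci = (xbar - z_star * sigma / sqrt(n), xbar + z_star * sigma / sqrt(n))

p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value

# Sample size for a desired margin of error m (round up to the next integer):
m = 1.5
n_needed = ceil((z_star * sigma / m) ** 2)

print(round(z, 3), [round(c, 2) for c in ci], n_needed)
```

Rounding the sample size up, rather than to the nearest integer, guarantees the margin of error is at most $m$.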


1.2 One Sample t–test: Mean (σ unknown)

Let $X_1, \dots, X_n$ be a random sample and assume that either

- the population is normal, or
- $15 \le n < 40$ and there are no outliers or strong skewness, or
- $n \ge 40$.

Let $H_0 : \mu = \mu_0$. The test statistic is

$$t = \frac{\bar{x} - \mu_0}{s/\sqrt{n}} \sim t(n-1)$$

for $H_0$.

A $(1-\alpha)100\%$ confidence interval for $\mu$ is $\bar{x} \pm t^\star(n-1)\frac{s}{\sqrt{n}}$.
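The $t$ statistic and matching interval can be checked with `scipy.stats`; the sample below is invented for illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical sample; H0: mu = 10.
x = np.array([9.8, 10.4, 10.1, 9.5, 10.9, 10.2, 9.9, 10.6])
mu0 = 10.0

# scipy computes t = (xbar - mu0) / (s / sqrt(n)) with df = n - 1.
t_stat, p_value = stats.ttest_1samp(x, popmean=mu0)

# The matching 95% confidence interval by hand:
n, xbar, s = len(x), x.mean(), x.std(ddof=1)
t_star = stats.t.ppf(0.975, df=n - 1)          # critical value t*(n-1)
ci = (xbar - t_star * s / np.sqrt(n), xbar + t_star * s / np.sqrt(n))

print(round(t_stat, 3), round(p_value, 3), ci)
```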

1.3 Matched Pairs

Let $(X_1, Y_1), \dots, (X_n, Y_n)$ be a random sample and define $D_j = X_j - Y_j$. Assume $n > 30$, or that the $D_j$'s are normal (or approximately so). Let

$$H_0 : \mu_D = d.$$

The test statistic is

$$t = \frac{\bar{d} - d}{s_D/\sqrt{n}} \sim t(n-1)$$

for $H_0$. A $(1-\alpha)100\%$ CI for $\mu_D$ is

$$\bar{d} \pm t^\star(n-1)\frac{s_D}{\sqrt{n}}.$$
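A paired test is just a one-sample $t$-test on the differences, which `scipy.stats.ttest_rel` carries out directly; the before/after values below are made up.

```python
import numpy as np
from scipy import stats

# Hypothetical before/after measurements on the same n = 6 subjects.
before = np.array([142.0, 135.0, 150.0, 128.0, 147.0, 139.0])
after  = np.array([138.0, 136.0, 144.0, 126.0, 141.0, 137.0])

# Paired t-test of H0: mu_D = 0 on D_j = X_j - Y_j, df = n - 1.
t_stat, p_value = stats.ttest_rel(before, after)

# Same statistic by hand from the differences:
d = before - after
t_by_hand = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))

print(round(t_stat, 3), round(p_value, 3))
```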


2 Two Sample Tests

2.1 Two Sample z–test: Mean (σX and σY both known)

Let $X_1, \dots, X_{n_X}$ and $Y_1, \dots, Y_{n_Y}$ be independent random samples. Assume $n_X > 30$ and $n_Y > 30$, or that both samples are normal. Let

$$H_0 : \mu_X = \mu_Y.$$

The test statistic is

$$z = \frac{\bar{x} - \bar{y}}{\sqrt{\frac{\sigma_X^2}{n_X} + \frac{\sigma_Y^2}{n_Y}}} \sim N(0, 1)$$

for $H_0$. A $(1-\alpha)100\%$ confidence interval for $\mu_X - \mu_Y$ is

$$\bar{x} - \bar{y} \pm z^\star \sqrt{\frac{\sigma_X^2}{n_X} + \frac{\sigma_Y^2}{n_Y}}.$$
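With both standard deviations known, the statistic and interval are simple arithmetic; the summary numbers below are hypothetical.

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical summary statistics with both sigmas known.
nx, xbar, sigma_x = 40, 74.2, 8.0
ny, ybar, sigma_y = 50, 71.0, 9.0

se = sqrt(sigma_x**2 / nx + sigma_y**2 / ny)
z = (xbar - ybar) / se                          # ~ N(0,1) under H0: mu_X = mu_Y
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

z_star = NormalDist().inv_cdf(0.975)            # 95% critical value
ci = (xbar - ybar - z_star * se, xbar - ybar + z_star * se)

print(round(z, 3), round(p_value, 4), [round(c, 2) for c in ci])
```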


2.2 Two Sample t–test: Mean (σX and σY both unknown)

Let $X_1, \dots, X_{n_X}$ and $Y_1, \dots, Y_{n_Y}$ be independent random samples. Assume $n_X > 30$ and $n_Y > 30$, or that both samples are normal. Let

$$H_0 : \mu_X = \mu_Y.$$

The test statistic is

$$t = \frac{\bar{x} - \bar{y}}{\sqrt{\frac{s_X^2}{n_X} + \frac{s_Y^2}{n_Y}}} \sim t(\mathrm{df})$$

for $H_0$. Welch's t Test lets

$$\mathrm{df} = \frac{\left(\frac{s_X^2}{n_X} + \frac{s_Y^2}{n_Y}\right)^2}{\frac{1}{n_X-1}\left(\frac{s_X^2}{n_X}\right)^2 + \frac{1}{n_Y-1}\left(\frac{s_Y^2}{n_Y}\right)^2}.$$

The conservative Welch's t Test is to let df be the largest integer that is less than or equal to the df of Welch's Test. An even more conservative test is to let $\mathrm{df} = \min(n_X - 1, n_Y - 1)$.

A $(1-\alpha)100\%$ confidence interval for $\mu_X - \mu_Y$ is

$$\bar{x} - \bar{y} \pm t^\star(\mathrm{df}) \sqrt{\frac{s_X^2}{n_X} + \frac{s_Y^2}{n_Y}}.$$
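`scipy.stats.ttest_ind` with `equal_var=False` performs exactly this Welch test; the samples below are invented, and the df formula is verified by hand.

```python
import numpy as np
from scipy import stats

# Hypothetical independent samples with possibly unequal variances.
x = np.array([20.1, 22.4, 19.8, 23.0, 21.7, 20.9, 22.2])
y = np.array([18.2, 17.5, 19.9, 16.8, 18.7, 17.9, 19.1, 18.4])

# equal_var=False selects Welch's t-test with the df formula above.
t_stat, p_value = stats.ttest_ind(x, y, equal_var=False)

# Welch's df by hand:
vx, vy = x.var(ddof=1) / len(x), y.var(ddof=1) / len(y)
df = (vx + vy) ** 2 / (vx**2 / (len(x) - 1) + vy**2 / (len(y) - 1))

print(round(t_stat, 3), round(df, 2), round(p_value, 5))
```

Welch's df always lands between the conservative $\min(n_X-1, n_Y-1)$ and the pooled $n_X + n_Y - 2$.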


2.3 Two Sample t–test: Mean (σX = σY, unknown)

Let $X_1, \dots, X_{n_X}$ and $Y_1, \dots, Y_{n_Y}$ be independent random samples. Assume $n_X > 30$ and $n_Y > 30$, or that both samples are normal. Let

$$H_0 : \mu_X = \mu_Y.$$

Define the pooled estimate of $\sigma_X^2 = \sigma_Y^2$ to be

$$s_p^2 = \frac{(n_X - 1)s_X^2 + (n_Y - 1)s_Y^2}{n_X + n_Y - 2}.$$

The test statistic is

$$t = \frac{\bar{x} - \bar{y}}{s_p\sqrt{\frac{1}{n_X} + \frac{1}{n_Y}}} \sim t(n_X + n_Y - 2)$$

for $H_0$. A $(1-\alpha)100\%$ CI for $\mu_X - \mu_Y$ is

$$(\bar{x} - \bar{y}) \pm t^\star(n_X + n_Y - 2)\, s_p \sqrt{\frac{1}{n_X} + \frac{1}{n_Y}}.$$

Note: it is generally difficult to verify that the two variances are equal, so it is safer not to make this assumption unless one is confident that the variances are equal.
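The pooled test corresponds to `equal_var=True` in `scipy.stats.ttest_ind`; the samples below are made up, and the pooled statistic is recomputed by hand for comparison.

```python
import numpy as np
from scipy import stats

# Hypothetical samples assumed to share a common variance.
x = np.array([5.1, 4.8, 5.6, 5.0, 4.9, 5.3])
y = np.array([4.4, 4.7, 4.2, 4.9, 4.5])

# equal_var=True pools the variances; df = nX + nY - 2.
t_stat, p_value = stats.ttest_ind(x, y, equal_var=True)

nx, ny = len(x), len(y)
sp2 = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
t_by_hand = (x.mean() - y.mean()) / np.sqrt(sp2 * (1 / nx + 1 / ny))

print(round(t_stat, 3), round(p_value, 4))
```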


2.4 Two Sample f–test: Standard Deviation

Let $X_1, \dots, X_{n_X}$ and $Y_1, \dots, Y_{n_Y}$ be independent normal random samples, where the first sample is the one with the larger sample variance. Let

$$H_0 : \sigma_X = \sigma_Y.$$

The test statistic is

$$f = \frac{s_X^2}{s_Y^2} \sim F(n_X - 1, n_Y - 1)$$

for $H_0$. Use the right-hand tail for critical values, $f^\star$, for a two-sided test.

Warning: the above f–test is not robust with respect to the normality assumption.
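A sketch of this $F$ ratio with `scipy.stats.f`, on invented data; the sorting step enforces the convention that the larger sample variance goes in the numerator, and the two-sided p-value doubles the right-tail area.

```python
import numpy as np
from scipy import stats

# Hypothetical normal samples; put the larger sample variance on top.
x = np.array([12.1, 14.3, 11.8, 15.0, 13.2, 12.9, 14.6])
y = np.array([13.0, 13.4, 12.8, 13.1, 13.3, 12.9])

pairs = sorted([(x.var(ddof=1), len(x)), (y.var(ddof=1), len(y))], reverse=True)
(v1, n1), (v2, n2) = pairs                      # v1 is the larger variance

f = v1 / v2                                     # ~ F(n1 - 1, n2 - 1) under H0
p_value = 2 * stats.f.sf(f, n1 - 1, n2 - 1)     # two-sided via right tail

print(round(f, 3), round(p_value, 4))
```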


3 Proportion Tests

3.1 One Sample Large Sample Population Proportion z–test

Let $X_1, \dots, X_n$ be a random sample with $X_j \sim \mathrm{BIN}(1, p)$. Let

$$H_0 : p = p_0$$

and assume

$$np_0 \ge 10 \quad\text{and}\quad n(1 - p_0) \ge 10$$

(some books use 5 instead of 10 here). Then let $\hat{p} = \frac{\#\text{ heads}}{n}$ and let the test statistic be

$$z = \frac{\hat{p} - p_0}{\sqrt{p_0(1 - p_0)/n}} \sim N(0, 1)$$

($\bar{X} = \hat{p}$ is assumed to be normal by the CLT) for $H_0$.

When # heads and # tails are both $\ge 15$ and $\alpha \le 0.1$, a $(1-\alpha)100\%$ confidence interval for $p$ is

$$\hat{p} \pm z^\star \sqrt{\frac{\hat{p}(1 - \hat{p})}{n}}.$$

The sample size for a margin of error $m$ is

$$n = \begin{cases} \dfrac{(z^\star)^2\, \hat{p}(1 - \hat{p})}{m^2} & \hat{p} \text{ known} \\[2ex] \dfrac{(z^\star)^2}{4m^2} & \hat{p} \text{ unknown.} \end{cases}$$

A plus four $(1-\alpha)100\%$ confidence interval for $p$ is obtained by using the above procedure, but first adding two heads and two tails to the random sample (increasing the sample size to $n + 4$). Use when the sample size is $\ge 10$ and $\alpha \le 0.1$.
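The one-proportion test, interval, and conservative sample-size formula can be sketched with the stdlib `statistics` module; the counts below are hypothetical.

```python
from math import sqrt, ceil
from statistics import NormalDist

# Hypothetical data: 58 heads in n = 100 tosses; H0: p = 0.5.
n, heads, p0 = 100, 58, 0.5
p_hat = heads / n

# Check the large-sample conditions before using the normal approximation.
assert n * p0 >= 10 and n * (1 - p0) >= 10

z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)      # note: SE uses p0, not p_hat
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

z_star = NormalDist().inv_cdf(0.975)
ci = (p_hat - z_star * sqrt(p_hat * (1 - p_hat) / n),
      p_hat + z_star * sqrt(p_hat * (1 - p_hat) / n))

# Conservative sample size for margin of error m when p is unknown:
m = 0.03
n_needed = ceil(z_star**2 / (4 * m**2))

print(round(z, 3), [round(c, 3) for c in ci], n_needed)
```

Note that the test's standard error uses $p_0$ (the null value) while the interval's uses $\hat{p}$.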


3.2 Two Sample Proportions z–test

Let $X_1, \dots, X_{n_X}$ and $Y_1, \dots, Y_{n_Y}$ be independent random samples where $X_j \sim \mathrm{BIN}(1, p_X)$ and $Y_k \sim \mathrm{BIN}(1, p_Y)$. Let

$$H_0 : p_X = p_Y = p$$

where $p$ is unknown. Let $\hat{p} = \frac{\#\text{ heads}}{\#\text{ tosses}}$, and assume the number of heads and tails in each sample is at least 5. Define the pooled estimate of $p_X$ and $p_Y$ to be

$$\hat{p} = \frac{n_X \hat{p}_X + n_Y \hat{p}_Y}{n_X + n_Y}$$

and let the test statistic be

$$z = \frac{\hat{p}_X - \hat{p}_Y}{\sqrt{\frac{\hat{p}(1 - \hat{p})}{n_X} + \frac{\hat{p}(1 - \hat{p})}{n_Y}}} \sim N(0, 1)$$

for $H_0$.

A $(1-\alpha)100\%$ CI for $p_X - p_Y$, valid when the number of heads and tails is at least 10 in each sample, is

$$(\hat{p}_X - \hat{p}_Y) \pm z^\star \sqrt{\frac{\hat{p}_X(1 - \hat{p}_X)}{n_X} + \frac{\hat{p}_Y(1 - \hat{p}_Y)}{n_Y}}.$$

A plus four $(1-\alpha)100\%$ confidence interval for $p_X - p_Y$ is obtained by using the above procedure, but first adding one head and one tail to each of the random samples (increasing each sample size by 2). Use when $\alpha \le 0.1$.
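A sketch of the two-proportion test on invented counts; note that the test pools the proportions (because $H_0$ says they are equal) while the interval does not.

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical counts: 45/120 successes vs 30/110 successes.
nx, heads_x = 120, 45
ny, heads_y = 110, 30
px, py = heads_x / nx, heads_y / ny

# Pooled estimate under H0: pX = pY.
p_pool = (heads_x + heads_y) / (nx + ny)
se = sqrt(p_pool * (1 - p_pool) * (1 / nx + 1 / ny))
z = (px - py) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

# The CI uses the unpooled standard error:
z_star = NormalDist().inv_cdf(0.975)
se_ci = sqrt(px * (1 - px) / nx + py * (1 - py) / ny)
ci = (px - py - z_star * se_ci, px - py + z_star * se_ci)

print(round(z, 3), round(p_value, 4), [round(c, 3) for c in ci])
```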


4 Correlation

The linear correlation coefficient for $(x_1, y_1), \dots, (x_n, y_n)$ is

$$r = \frac{n\sum_{j=1}^n x_j y_j - \left(\sum_{j=1}^n x_j\right)\left(\sum_{j=1}^n y_j\right)}{\sqrt{n\sum_{j=1}^n x_j^2 - \left(\sum_{j=1}^n x_j\right)^2}\,\sqrt{n\sum_{j=1}^n y_j^2 - \left(\sum_{j=1}^n y_j\right)^2}}.$$

The test statistic for

$$H_0 : \rho = 0$$

is

$$r\sqrt{\frac{n-2}{1-r^2}} \sim t(n-2)$$

for $H_0$.
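`scipy.stats.pearsonr` returns both $r$ and the p-value of this test; the data below are made up, and the $t$ statistic is also computed by hand.

```python
import numpy as np
from scipy import stats

# Hypothetical bivariate data.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 2.9, 3.2, 4.8, 5.1, 5.9])

r, p_value = stats.pearsonr(x, y)   # p-value is the two-sided test of rho = 0

# The equivalent t statistic by hand:
n = len(x)
t = r * np.sqrt((n - 2) / (1 - r**2))

print(round(r, 4), round(t, 3), round(p_value, 5))
```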


5 Chi–Squared Tests

5.1 Goodness of Fit

Let $X_1, \dots, X_n$ be a categorical random sample with a total of $k$ categories and $P(X = j\text{th category}) = p_j$. Let

$$H_0 : p_1 = a_1, \dots, p_k = a_k$$

where the $a_j$'s are given. Define

$$o_j \overset{\text{def}}{=} \text{number of observations in the } j\text{th category}$$

$$e_j \overset{\text{def}}{=} na_j = \text{expected count in the } j\text{th category under } H_0$$

and assume that $e_j \ge 1$ for all $j$ and that no more than a fifth of the expected counts are $< 5$. In this case, the test statistic is

$$\sum_{j=1}^k \frac{(o_j - e_j)^2}{e_j} \sim \chi^2(k-1)$$

under $H_0$, and one rejects $H_0$ for large $\chi^2$ values.
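A goodness-of-fit sketch with `scipy.stats.chisquare`, using invented die-roll counts and the uniform null $a_j = 1/6$.

```python
import numpy as np
from scipy import stats

# Hypothetical die-fairness test: H0: p1 = ... = p6 = 1/6 over n = 120 rolls.
observed = np.array([18, 22, 16, 25, 19, 20])
n, k = observed.sum(), len(observed)
expected = n * np.full(k, 1 / 6)                # e_j = n * a_j = 20 each

# Rule of thumb: every e_j >= 1 and at most a fifth of them < 5 (holds here).
chi2_stat, p_value = stats.chisquare(observed, f_exp=expected)

print(round(chi2_stat, 3), round(p_value, 4))   # df = k - 1 = 5
```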


5.2 Chi–Squared Test for Independence

Given a two-way table, $o_{ij}$, of observed outcomes, with $r$ possible row outcomes and $c$ possible column outcomes, let

$$H_0 : \text{there is no relationship between the column and row variables.}$$

Define

$$o_{ij} \overset{\text{def}}{=} \text{cell } ij \text{ total}$$

$$e_{ij} \overset{\text{def}}{=} \frac{(i\text{th row total})(j\text{th column total})}{\text{table total}} = \text{expected count in cell } ij \text{ under } H_0$$

and assume that $e_{ij} \ge 1$ for all cells and that no more than a fifth of the expected counts are $< 5$. In this case, the test statistic is

$$\sum_{i=1}^r \sum_{j=1}^c \frac{(o_{ij} - e_{ij})^2}{e_{ij}} \sim \chi^2((r-1)(c-1))$$

under $H_0$, and one rejects $H_0$ for large $\chi^2$ values.
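`scipy.stats.chi2_contingency` computes the expected counts, degrees of freedom, and p-value from the observed table alone; the 2x3 table below is invented.

```python
import numpy as np
from scipy import stats

# Hypothetical 2x3 two-way table of observed counts.
table = np.array([[30, 45, 25],
                  [20, 35, 45]])

chi2_stat, p_value, df, expected = stats.chi2_contingency(table)

# df = (r - 1)(c - 1); expected counts come from row/column totals.
print(round(chi2_stat, 3), round(p_value, 4), df)
```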

6 Simple Regression

Given the bivariate random sample $(x_1, y_1), \dots, (x_n, y_n)$:

Statistical Model of Simple Linear Regression: Given a predictor, $x$, the response, $y$, is

$$y = \beta_0 + \beta_1 x + \varepsilon_x$$

where $\beta_0 + \beta_1 x$ is the mean response for $x$. The noise terms, the $\varepsilon_x$'s, are assumed to be independent of each other and to be randomly sampled from $N(0, \sigma)$.


Estimating $\beta_0$, $\beta_1$ and $\sigma$: The least-squares regression line, $\hat{y} = b_0 + b_1 x$, is obtained by letting

$$b_1 = r\left(\frac{s_y}{s_x}\right) = \frac{n\left(\sum_{j=1}^n x_j y_j\right) - \left(\sum_{j=1}^n x_j\right)\left(\sum_{j=1}^n y_j\right)}{n\sum_{j=1}^n x_j^2 - \left(\sum_{j=1}^n x_j\right)^2} \quad\text{and}\quad b_0 = \bar{y} - b_1\bar{x},$$

where $b_0$ is an unbiased estimator of $\beta_0$ and $b_1$ is an unbiased estimator of $\beta_1$.

The variance of the observed $y_j$'s about the predicted $\hat{y}_j$'s is

$$s^2 \overset{\text{def}}{=} \frac{\sum (y_j - \hat{y}_j)^2}{n-2} = \frac{\sum y_j^2 - b_0\sum y_j - b_1\sum x_j y_j}{n-2},$$

which is an unbiased estimator of $\sigma^2$. The standard error of estimate (also called the residual standard error) is $s$, an estimator of $\sigma$.

Hypothesis Tests and Confidence Intervals for $\beta_0$ and $\beta_1$: Let

$$SE_{b_1} \overset{\text{def}}{=} \frac{s}{\sqrt{\sum_{j=1}^n (x_j - \bar{x})^2}} \quad\text{and}\quad SE_{b_0} \overset{\text{def}}{=} s\sqrt{\frac{1}{n} + \frac{\bar{x}^2}{\sum_{j=1}^n (x_j - \bar{x})^2}}.$$

$SE_{b_0}$ and $SE_{b_1}$ are the standard errors of the intercept, $b_0$, and the slope, $b_1$, of the least-squares regression line.

To test the hypothesis $H_0 : \beta_1 = 0$, use the test statistic $t = \frac{b_1}{SE_{b_1}} \sim t(n-2)$. A level $(1-\alpha)100\%$ confidence interval for the slope $\beta_1$ is $b_1 \pm t^\star(n-2) \times SE_{b_1}$.

To test the hypothesis $H_0 : \beta_0 = b$, use the test statistic $t = \frac{b_0 - b}{SE_{b_0}} \sim t(n-2)$. A level $(1-\alpha)100\%$ confidence interval for the intercept $\beta_0$ is $b_0 \pm t^\star(n-2) \times SE_{b_0}$.

Accepting $H_0 : \beta_1 = 0$ is equivalent to accepting $H_0 : \rho = 0$.
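`scipy.stats.linregress` returns the fitted slope, intercept, and the slope's standard error, from which the $t$ test and interval above follow; the data are made up.

```python
import numpy as np
from scipy import stats

# Hypothetical bivariate sample.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
y = np.array([2.3, 2.8, 3.9, 4.1, 5.2, 5.8, 6.9])

res = stats.linregress(x, y)       # least-squares fit y-hat = b0 + b1 * x
b0, b1 = res.intercept, res.slope

t_slope = b1 / res.stderr          # t = b1 / SE_b1 ~ t(n - 2) under H0: beta1 = 0

n = len(x)
t_star = stats.t.ppf(0.975, df=n - 2)
ci_slope = (b1 - t_star * res.stderr, b1 + t_star * res.stderr)

print(round(b1, 4), round(t_slope, 3), [round(c, 3) for c in ci_slope])
```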


$(1-\alpha)100\%$ Confidence Interval for a mean response, $\mu_y$: A $(1-\alpha)100\%$ confidence interval for the mean response, $\mu_y$, when $x$ takes on the value $x^\star$ is $\hat{\mu}_y \pm m$, where the margin of error is

$$m = t_{\alpha/2}(n-2)\, \underbrace{s\sqrt{\frac{1}{n} + \frac{(x^\star - \bar{x})^2}{\sum_{j=1}^n (x_j - \bar{x})^2}}}_{SE_{\hat{\mu}}}.$$

The standard error of the mean response is $SE_{\hat{\mu}}$.

$(1-\alpha)100\%$ Prediction Interval for a future observation $y$ given $x = x^\star$: A $(1-\alpha)100\%$ prediction interval for $y$ given $x = x^\star$ is $\hat{y} \pm m$, where $\hat{y} = b_0 + b_1 x^\star$ and the margin of error is

$$m = t_{\alpha/2}(n-2)\, \underbrace{s\sqrt{1 + \frac{1}{n} + \frac{(x^\star - \bar{x})^2}{\sum_{j=1}^n (x_j - \bar{x})^2}}}_{SE_{\hat{y}}}.$$

Test for Correlation: Consider the hypotheses

$$H_0 : \rho = 0 \quad\text{vs}\quad H_A : \rho \ne 0.$$

The test statistic is

$$t = \frac{r}{\sqrt{\frac{1-r^2}{n-2}}} \sim t(n-2)$$

for $H_0$.


The following holds for sums of squares:

$$\underbrace{\sum_{j=1}^n (y_j - \bar{y})^2}_{SS_{TOT}} = \underbrace{\sum_{j=1}^n (\hat{y}_j - \bar{y})^2}_{SSA} + \underbrace{\sum_{j=1}^n (y_j - \hat{y}_j)^2}_{SSE}.$$

The mean squares equal the sums of squares divided by their corresponding degrees of freedom:

$$MSA \overset{\text{def}}{=} \text{Mean Square of Model} = \frac{SSA}{1} \quad\text{and}\quad MSE \overset{\text{def}}{=} \text{Mean Square of Error} = s^2 = \frac{SSE}{n-2}.$$

The coefficient of determination is the portion of the variation in $y$ explained by the regression equation:

$$r^2 \overset{\text{def}}{=} \frac{SSA}{SS_{TOT}} = \frac{\sum_{j=1}^n (\hat{y}_j - \bar{y})^2}{\sum_{j=1}^n (y_j - \bar{y})^2}.$$

ANOVA F Test for Simple Linear Regression: Consider

$$H_0 : \beta_1 = 0 \quad\text{versus}\quad H_A : \beta_1 \ne 0.$$

If $H_0$ holds, $f \overset{\text{def}}{=} \frac{MSA}{MSE}$ is from $F(1, n-2)$ and one uses a right-sided test.

The following is an ANOVA Table for Simple Linear Regression:

Source   SS       df     MS     ANOVA F Statistic   p-value
Model    SSA      1      MSA    f = MSA/MSE         P(F(1, n-2) >= f)
Error    SSE      n-2    MSE
Total    SSTOT    n-1


7 Multivariate Regression

Given the multivariate random sample

$$(x_1^{(1)}, x_2^{(1)}, \dots, x_k^{(1)}, y_1),\ (x_1^{(2)}, x_2^{(2)}, \dots, x_k^{(2)}, y_2),\ \dots,\ (x_1^{(n)}, x_2^{(n)}, \dots, x_k^{(n)}, y_n).$$

Statistical Model of Multivariate Linear Regression: Given a $k$-dimensional multivariate predictor, $(x_1^{(i)}, x_2^{(i)}, \dots, x_k^{(i)})$, the response, $y_i$, is

$$y_i = \beta_0 + \beta_1 x_1^{(i)} + \dots + \beta_k x_k^{(i)} + \varepsilon_i$$

where $\beta_0 + \beta_1 x_1^{(i)} + \dots + \beta_k x_k^{(i)}$ is the mean response. The noise terms, the $\varepsilon_i$'s, are assumed to be independent of each other and to be randomly sampled from $N(0, \sigma)$.

Given a multivariate normal sample, $(x_1^{(1)}, \dots, x_k^{(1)}, y_1), \dots, (x_1^{(n)}, \dots, x_k^{(n)}, y_n)$, the least-squares multiple regression equation, $\hat{y} = b_0 + b_1 x_1 + \dots + b_k x_k$, is the linear equation that minimizes $\sum_{j=1}^n (y_j - \hat{y}_j)^2$, where $\hat{y}_j \overset{\text{def}}{=} b_0 + b_1 x_1^{(j)} + \dots + b_k x_k^{(j)}$. There must be at least $k + 2$ data points to obtain the estimators $b_0$, the $b_j$'s, and $s^2 \overset{\text{def}}{=} \frac{\sum_{j=1}^n (y_j - \hat{y}_j)^2}{n-k-1}$ of $\beta_0$, the $\beta_j$'s, and $\sigma^2$, where

- $b_0$, the $y$-intercept, is the unbiased least-squares estimator of $\beta_0$;
- $b_j$, the coefficient of $x_j$, is the unbiased least-squares estimator of $\beta_j$;
- $s^2$ is an unbiased estimator of $\sigma^2$, and $s$ is an estimator of $\sigma$.

Due to computational intensity, computers are used to obtain $b_0$, the $b_j$'s, and $s^2$.


Hypothesis Tests and Confidence Intervals for the $\beta_j$'s: To test the hypothesis $H_0 : \beta_j = 0$, use the test statistic

$$t = \frac{b_j}{SE_{b_j}} \sim t(n-k-1)$$

for $H_0$. A level $(1-\alpha)100\%$ confidence interval for $\beta_j$ is $b_j \pm t^\star(n-k-1) SE_{b_j}$.

$SE_{b_j}$ is the standard error of $b_j$ (obtained from computer calculations).

Accepting $H_0 : \beta_j = 0$ is accepting that there is no linear association between $X_j$ and $Y$, i.e., that the correlation between $X_j$ and $Y$ is zero.
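The least-squares coefficients and $s^2$ can be sketched with `numpy.linalg.lstsq` on a design matrix with a leading column of ones; the sample below (with $k = 2$ predictors) is invented.

```python
import numpy as np

# Hypothetical sample with k = 2 predictors and n = 8 observations.
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0],
              [5.0, 6.0], [6.0, 5.0], [7.0, 8.0], [8.0, 7.0]])
y = np.array([3.1, 3.9, 7.2, 8.1, 11.0, 11.8, 15.2, 15.9])

n, k = X.shape
A = np.column_stack([np.ones(n), X])            # design matrix [1, x1, x2]

# Least squares: choose b = (b0, b1, ..., bk) minimizing sum (y_j - y-hat_j)^2.
b, *_ = np.linalg.lstsq(A, y, rcond=None)

y_hat = A @ b
s2 = ((y - y_hat) ** 2).sum() / (n - k - 1)     # unbiased estimate of sigma^2

print(np.round(b, 3), round(s2, 5))
```

The residuals of a least-squares fit are orthogonal to every column of the design matrix, which gives a handy sanity check on the solution.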


ANOVA Tables for Multivariate Regression:

The following holds for sums of squares:

$$\underbrace{\sum_{j=1}^n (y_j - \bar{y})^2}_{SS_{TOT}} = \underbrace{\sum_{j=1}^n (\hat{y}_j - \bar{y})^2}_{SSA} + \underbrace{\sum_{j=1}^n (y_j - \hat{y}_j)^2}_{SSE}.$$

The mean squares equal the sums of squares divided by their corresponding degrees of freedom:

$$MSA \overset{\text{def}}{=} \text{Mean Square of Model} = \frac{SSA}{k} \quad\text{and}\quad MSE \overset{\text{def}}{=} \text{Mean Square of Error} = s^2 = \frac{SSE}{n-k-1}.$$

ANOVA F Test for Multivariate Regression: The test statistic for

$$H_0 : \beta_1 = \beta_2 = \dots = \beta_k = 0 \quad\text{versus}\quad H_A : \text{not } H_0$$

is $f = \frac{MSA}{MSE}$. The p-value of the above test is $P(F \ge f)$ where $F \sim F(k, n-k-1)$.

ANOVA Table:

Source   df         Sum of Squares   Mean Square   F            p-value
Model    k          SSA              MSA           MSA/MSE      P(F(k, n-k-1) >= f)
Error    n-k-1      SSE              MSE
Total    n-1        SSTOT


Multiple Correlation Coefficient:

The squared multiple correlation, $R^2 \overset{\text{def}}{=} \frac{SSA}{SS_{TOT}}$, measures the portion of the total variation that is explained by the model. The multiple correlation coefficient is just $R = \sqrt{R^2}$. The

$$\text{adjusted coefficient of determination} = R^2_{adj} \overset{\text{def}}{=} 1 - \frac{n-1}{n-k-1}(1 - R^2)$$

is a more accurate $R^2$ for large $k$.


8 One–Way ANOVA

$$\begin{aligned}
k &= \text{number of levels} \\
n_j &= \text{sample size from the } j\text{th level population} \\
N &= \sum_{j=1}^k n_j = \text{total number of observations} \\
\bar{x}_j &= \text{sample mean from the } j\text{th level population} \\
s_j^2 &= \text{sample variance from the } j\text{th level population} \\
\bar{\bar{x}} &= \text{sample mean from all level populations (grand mean)} \\
SS_{TOT} &= \sum_{i=1}^k \sum_{j=1}^{n_i} (x_{ij} - \bar{\bar{x}})^2 = \text{Sum of Squares total} \\
SSA &= \sum_{j=1}^k n_j (\bar{x}_j - \bar{\bar{x}})^2 = \text{SS between levels of treatment A} \\
SSE &= \sum_{j=1}^k (n_j - 1) s_j^2 = \text{SS within levels of treatment A} \\
MS_{TOT} &= \frac{SS_{TOT}}{N-1} = \text{Mean Squares Total} \\
MSA &= \frac{SSA}{k-1} = \text{Mean Squares Treatment} \\
MSE &= \frac{SSE}{N-k} = \text{Mean Squares Error} \\
f &= \frac{MSA}{MSE} \\
SS_{TOT} &= SSA + SSE.
\end{aligned}$$

Source    df     SS      MS    F          p
Between   k-1    SSA     MSA   MSA/MSE    P(F(k-1, N-k) >= f)
Within    N-k    SSE     MSE
Total     N-1    SSTOT
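`scipy.stats.f_oneway` runs this test directly; the three invented groups below are also decomposed into $SSA$ and $SSE$ by hand to match the formulas above.

```python
import numpy as np
from scipy import stats

# Hypothetical data for k = 3 treatment levels.
g1 = np.array([6.2, 5.8, 6.5, 6.0])
g2 = np.array([7.1, 6.9, 7.4, 7.0, 7.2])
g3 = np.array([5.4, 5.9, 5.6, 5.7])

f_stat, p_value = stats.f_oneway(g1, g2, g3)

# The same f via the sums of squares above:
groups = [g1, g2, g3]
all_x = np.concatenate(groups)
N, k = len(all_x), len(groups)
grand = all_x.mean()
ssa = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
sse = sum((len(g) - 1) * g.var(ddof=1) for g in groups)
f_by_hand = (ssa / (k - 1)) / (sse / (N - k))

print(round(f_stat, 3), round(p_value, 6))
```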


9 Two–Way ANOVA (2 treatments)

$$\begin{aligned}
I &\overset{\text{def}}{=} \text{number of levels for Treatment A} \\
J &\overset{\text{def}}{=} \text{number of levels for Treatment B} \\
SSA &\overset{\text{def}}{=} \text{Sum of Squares for Treatment A} \\
MSA &\overset{\text{def}}{=} \frac{SSA}{I-1} = \text{Mean Squares of Treatment A} \\
SSB &\overset{\text{def}}{=} \text{Sum of Squares for Treatment B} \\
MSB &\overset{\text{def}}{=} \frac{SSB}{J-1} = \text{Mean Squares of Treatment B} \\
SSAB &\overset{\text{def}}{=} \text{Sum of Squares of the non-additive (interaction) part} \\
MSAB &\overset{\text{def}}{=} \frac{SSAB}{(I-1)(J-1)} = \text{Mean Squares of the non-additive part} \\
SSE &\overset{\text{def}}{=} \text{Sum of Squares within treatments} \\
MSE &\overset{\text{def}}{=} \frac{SSE}{N-IJ} = \text{Mean Squares within treatments} \\
SS_{TOT} &\overset{\text{def}}{=} \text{Total Sum of Squares} \\
SS_{TOT} &= SSA + SSB + SSAB + SSE.
\end{aligned}$$

Source        df            SS      MS     F           p
Treatment A   I-1           SSA     MSA    MSA/MSE     P(F(I-1, N-IJ) >= observed F)
Treatment B   J-1           SSB     MSB    MSB/MSE     P(F(J-1, N-IJ) >= observed F)
Interaction   (I-1)(J-1)    SSAB    MSAB   MSAB/MSE    P(F((I-1)(J-1), N-IJ) >= observed F)
Error         N-IJ          SSE     MSE
Total         N-1           SSTOT
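For a balanced design (equal cell sizes), the sums of squares can be sketched directly from cell, row, column, and grand means; the small $2 \times 3$ dataset with two replicates per cell is invented, and the exact decomposition $SS_{TOT} = SSA + SSB + SSAB + SSE$ holds because the design is balanced.

```python
import numpy as np

# Hypothetical balanced design: I = 2 levels of A, J = 3 levels of B,
# K = 2 replicates per cell; data[i, j, :] holds the cell's observations.
data = np.array([[[4.1, 4.5], [5.0, 5.4], [6.2, 5.8]],
                 [[4.9, 5.1], [6.1, 5.7], [7.0, 7.4]]])
I, J, K = data.shape
N = I * J * K

grand = data.mean()
mean_a = data.mean(axis=(1, 2))                 # Treatment A (row) means
mean_b = data.mean(axis=(0, 2))                 # Treatment B (column) means
mean_cell = data.mean(axis=2)                   # cell means

ssa = J * K * ((mean_a - grand) ** 2).sum()
ssb = I * K * ((mean_b - grand) ** 2).sum()
ssab = K * ((mean_cell - mean_a[:, None] - mean_b[None, :] + grand) ** 2).sum()
sse = ((data - mean_cell[:, :, None]) ** 2).sum()
sstot = ((data - grand) ** 2).sum()

msa, msb = ssa / (I - 1), ssb / (J - 1)
msab, mse = ssab / ((I - 1) * (J - 1)), sse / (N - I * J)
f_a, f_b, f_ab = msa / mse, msb / mse, msab / mse

print(round(f_a, 3), round(f_b, 3), round(f_ab, 3))
```

The corresponding p-values come from $F(I-1, N-IJ)$, $F(J-1, N-IJ)$, and $F((I-1)(J-1), N-IJ)$ as in the table above.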


10 Addendum

The rules for the minimum sample size needed to use a test are human convention and differ somewhat from statistician to statistician and book to book.
