Hypothesis test flow chart

START HERE: What is the measurement scale of your data?

Frequency data → number of variables?
    1: basic χ² test (19.5), Table I
    2: χ² test for independence (19.9), Table I

Correlation (r) → number of correlations?
    1: Test H0: ρ = 0 (17.2), Table G
    2: Test H0: ρ1 = ρ2 (17.4), Tables H and A

Means → number of means?
    1: Do you know σ? Yes → z-test (13.1), Table A. No → t-test (13.14), Table D
    2: Independent samples? Yes → Test H0: μ1 = μ2 (15.6), Table D. No → Test H0: D = 0 (16.4), Table D
    More than 2: number of factors? 1 → one-way ANOVA, Ch 20, Table E. 2 → two-way ANOVA, Ch 21, Table E

Chapter 18: Testing for differences among three or more groups: One-way Analysis of Variance (ANOVA)

Suppose you wanted to compare the results of three tests (A, B, and C) to see if there was any difference in difficulty. To test this, you randomly sample ten scores from each of the three populations of test scores:

       A    B    C
      87   61   75
      81   68   67
      76   87   67
      79   69   84
      68   81   74
      92   71   80
      76   74   79
      93   62   83
      78   62   79
      84   85   62
Means:  81   72   75
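If you want to follow the arithmetic along in code, here is a minimal Python/NumPy sketch. The scores are transcribed from the table above; the variable names are mine, and because the slide reports rounded summary values, recomputed figures may differ slightly.

```python
import numpy as np

# Scores from the three tests (transcribed from the table above)
A = np.array([87, 81, 76, 79, 68, 92, 76, 93, 78, 84])
B = np.array([61, 68, 87, 69, 81, 71, 74, 62, 62, 85])
C = np.array([75, 67, 67, 84, 74, 80, 79, 83, 79, 62])

# Mean of each sample of 10 scores (roughly 81, 72, and 75)
for name, scores in [("A", A), ("B", B), ("C", C)]:
    print(name, scores.mean())
```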

How would you test to see if there was any difference across the mean scores for these three tests?

The first thing is obvious: calculate the mean for each of the three samples of 10 scores.

But then what? You could run a two-sample t-test on each of the pairs (A vs. B, A vs. C, and B vs. C). (Note: we'll be skipping sections 18.10, 18.12, 18.13, 18.14, 18.15, 18.16, 18.17, 18.18, and 18.19 from the book.)

There are two problems with this:

The three tests wouldn't be truly independent of each other, since they contain common values, and

We run into the problem of making multiple comparisons: if we use an α value of .05, the probability of obtaining at least one significant comparison by chance is 1 - (1 - .05)³, or about .14.

So how do we test the null hypothesis H0: μA = μB = μC?

In the 1920s, Sir Ronald Fisher developed a method called Analysis of Variance, or ANOVA, to test hypotheses like this.
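As a quick sanity check of that multiple-comparisons arithmetic (a throwaway snippet; independence of the three comparisons is assumed only for the sake of the calculation):

```python
alpha, k = 0.05, 3                      # per-comparison alpha, number of pairwise tests
p_at_least_one = 1 - (1 - alpha) ** k   # chance of at least one "significant" result under H0
print(round(p_at_least_one, 3))         # 0.143
```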

The trick is to look at the amount of variability between the means.

So far in this class, we've usually talked about variability in terms of standard deviations. ANOVA focuses on variances instead, which (of course) are just the squares of the standard deviations. The intuition is the same.

The variance of these three mean scores (81, 72, and 75) is 22.5.

Intuitively, you can see that if the variance of the mean scores is large, then we should reject H0.

But what do we compare this number, 22.5, to?

How large is 22.5?

Suppose we knew the standard deviation of the population of scores (σ).

If the null hypothesis is true, then all scores across all three columns are drawn from a population with standard deviation σ.

It follows that the mean of n scores should be drawn from a population with standard deviation σ/√n (the standard error of the mean).

This means that multiplying the variance of the means by n gives us an estimate of the variance of the population.

With a little algebra: if the standard deviation of a mean of n scores is σ/√n, then its variance is σ²/n, and therefore σ² = n × (variance of the means).

Multiplying the variance of the means by n gives us an estimate of the variance of the population.

For our example, n × (variance of the means) = 10 × 22.5 = 225.
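Written out as a single chain (the same algebra as above, using σ_X̄ for the standard deviation of a sample mean):

\[
\sigma_{\bar{X}} = \frac{\sigma}{\sqrt{n}}
\quad\Rightarrow\quad
\sigma_{\bar{X}}^{2} = \frac{\sigma^{2}}{n}
\quad\Rightarrow\quad
\sigma^{2} = n\,\sigma_{\bar{X}}^{2}
\approx n \times s^{2}_{\text{means}} = 10 \times 22.5 = 225.
\]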

We typically don't know what σ² is. But as we do for t-tests, we can use the variance within our samples to estimate it. The variances of the 10 numbers in each column (61, 94, and 55) should each provide an estimate of σ².

We can combine these three estimates of σ² by taking their average, which is 70.

Variances: 61, 94, 55. Mean of variances: 70. n × variance of means: 225.

If H0: μA = μB = μC is true, we now have two separate estimates of the variance of the population (σ²).

One is n times the variance of the means of each column. The other is the mean of the variances of each column.

If H0 is true, then these two numbers should be, on average, the same, since they're both estimates of the same thing (σ²).

For our example, these two numbers (225 and 70) seem quite different.

Remember our intuition that a large variance of the means should be evidence against H0. Now we have something to compare it to: 225 seems large compared to 70.

When conducting an ANOVA, we compute the ratio of these two estimates of σ². This ratio is called the F statistic. For our example, F = 225/70 = 3.23.

If H0 is true, then the value of F should be around 1. If H0 is not true, then F should be significantly greater than 1.
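Here is a minimal sketch of that "two estimates of σ²" computation for equal-sized groups, reusing the A, B, and C arrays from the earlier snippet (the function name is mine, and the printed result will differ slightly from the slide's rounded 22.5, 70, and 3.23):

```python
import numpy as np

def f_from_equal_groups(*groups):
    """F statistic for equal-sized groups:
    n * variance of the group means, divided by the mean of the group variances."""
    n = len(groups[0])                                  # scores per group (assumed equal)
    means = [g.mean() for g in groups]
    between = n * np.var(means, ddof=1)                 # n x variance of the means
    within = np.mean([g.var(ddof=1) for g in groups])   # mean of the within-group variances
    return between / within

print(f_from_equal_groups(A, B, C))   # compare with the slide's F of 3.23
```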

We determine how large F should be for rejecting H0 by looking up Fcrit in Table E. F distributions depend on two separate degrees of freedom: one for the numerator and one for the denominator.

df for the numerator is k - 1, where k is the number of columns or treatments. For our example, df is 3 - 1 = 2.

df for the denominator is N - k, where N is the total number of scores. In our case, df is 30 - 3 = 27.

Fcrit for α = .05 and dfs of 2 and 27 is 3.35.

Since Fobs = 3.23 is less than Fcrit = 3.35, we fail to reject H0. We cannot conclude that the exam scores come from populations with different means.

Instead of finding Fcrit in Table E, we could have calculated the p-value using our F-calculator. Reporting p-values is standard.

Our p-value for F = 3.23 with 2 and 27 degrees of freedom is p = .0552.
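If you prefer software to the printed tables, a short scipy.stats sketch reproduces both the critical value and the p-value quoted above (within rounding):

```python
from scipy import stats

F_obs, df_num, df_den = 3.23, 2, 27

F_crit = stats.f.ppf(0.95, df_num, df_den)    # critical value for alpha = .05 (about 3.35)
p_value = stats.f.sf(F_obs, df_num, df_den)   # P(F >= 3.23) under H0 (about .055)

print(F_crit, p_value)
```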

Since our p-value is greater than .05, we fail to reject H0.

Example: Consider the following n = 12 samples drawn from each of k = 5 groups. Use an ANOVA to test the hypothesis that the means of the populations that these 5 groups were drawn from are different.

Answer: The 5 means and variances are calculated below, along with n × variance of means and the mean of variances.

Our resulting F statistic is 15.32.

Our two dfs are k - 1 = 4 (numerator) and 60 - 5 = 55 (denominator). Table E shows that Fcrit for 4 and 55 is 2.54.

Fobs > Fcrit, so we reject H0.

       A    B    C    D    E
      87   87   99   78   91
      81   69  105   70   51
      76   81  104   75   64
      79   71  108   91   86
      68   74  104   83   81
      92   62   87   78   72
      76   62  104   66   79
      93   85  111   74   65
      78   75  107   76   65
      84   67   76   69   78
      61   67   97   72   90
      68   84   97   79   82
Means:      78   74  100   76   75
Variances:  96   78   97   46  149

Mean of variances: 93. n × variance of means: 1429. Ratio (F): 15.32.

What does the probability distribution F(df_between, df_within) look like? [Figure: a grid of F-distribution density curves for numerator df of 2, 10, and 50 and denominator df of 5, 10, 50, and 100.]

For a typical ANOVA, the number of samples in each group may be different, but the intuition is the same: compute F, the ratio of the variance of the means over the mean of the variances.
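In practice you would usually let library code do the bookkeeping. A minimal sketch with scipy.stats.f_oneway (the three groups here are made-up numbers, chosen only to show that unequal sample sizes are handled):

```python
from scipy import stats

# Hypothetical groups with unequal sample sizes, for illustration only
g1 = [81, 72, 90, 68, 77]
g2 = [64, 71, 69, 80]
g3 = [88, 92, 79, 85, 91, 83]

F, p = stats.f_oneway(g1, g2, g3)   # one-way ANOVA
print(F, p)                         # F statistic and its p-value
```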

Formally, the variance is divided up in the following way:

Given a table of k groups, each containing n_i scores (i = 1, 2, ..., k), we can represent the deviation of a given score X from the mean of all scores (called the grand mean) as:

(X - grand mean) = (X - group mean) + (group mean - grand mean)

that is, the deviation of X from the grand mean equals the deviation of X from the mean of its group plus the deviation of the mean of the group from the grand mean.

Total sum of squares: SStotal
Within-groups sum of squares: SSwithin
Between-groups sum of squares: SSbetween

The total sum of squares can be partitioned into the other two numbers: SStotal = SSbetween + SSwithin.

SSbetween (or SSbet) is a measure of the variability between groups. It is used as the numerator in our F-tests.

The variance between groups is calculated by dividing SSbet by its degrees of freedom, dfbet = k - 1.

s²bet = SSbet/dfbet is another estimate of σ² if H0 is true. This is essentially n times the variance of the means.

If H0 is not true, then s²bet is an estimate of σ² plus any treatment effect that would add to the differences between the means.

SSwithin (or SSw) is a measure of the variability within each group. It is used as the denominator in all F-tests.

The variance within each group is calculated by dividing SSwithin by its degrees of freedom, dfw = ntotal - k.

s²w = SSw/dfw is an estimate of σ². This is essentially the mean of the variances within each group. (It is exactly the mean of the variances if our sample sizes are all the same.)

SStotal = SSwithin + SSbetween
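A small sketch of that partition, reusing the A, B, and C arrays defined earlier (the function name is mine):

```python
import numpy as np

def sums_of_squares(groups):
    """Split the total sum of squares into between-group and within-group pieces."""
    all_scores = np.concatenate(groups)
    grand_mean = all_scores.mean()
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    ss_total = ((all_scores - grand_mean) ** 2).sum()
    return ss_between, ss_within, ss_total

ss_b, ss_w, ss_t = sums_of_squares([A, B, C])
print(np.isclose(ss_t, ss_b + ss_w))   # True: SStotal = SSbetween + SSwithin
```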

dftotal = ntotal - 1
dfwithin = ntotal - k

dfbetween = k - 1

dftotal = dfwithin + dfbetween

s²between = SSbetween/dfbetween
s²within = SSwithin/dfwithin

The F ratio is calculated by dividing up the sums of squares and df into between and within, then computing each variance by dividing its SS by its df; F is the ratio of the between variance to the within variance.

Finally, the F ratio is the ratio of s²bet to s²w. We can write all these calculated values in a summary table like this:

Source    SS        df           s²                     F
Between   SSbet     k - 1        s²bet = SSbet/dfbet    s²bet/s²w
Within    SSw       ntotal - k   s²w = SSw/dfw
Total     SStotal   ntotal - 1
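Putting the pieces together, here is a rough sketch that prints such a summary table; it reuses the sums_of_squares function and the A, B, C arrays from the earlier snippets, and the exact numbers will differ a little from the slide's rounded values:

```python
import numpy as np

def anova_table(groups):
    """Print a one-way ANOVA summary table: Source, SS, df, variance estimate (s2), F."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    ss_b, ss_w, _ = sums_of_squares(groups)      # from the earlier sketch
    df_b, df_w = k - 1, n_total - k
    s2_b, s2_w = ss_b / df_b, ss_w / df_w
    F = s2_b / s2_w
    print("Source      SS       df    s2        F")
    print(f"Between  {ss_b:9.1f}  {df_b:4d}  {s2_b:8.2f}  {F:.2f}")
    print(f"Within   {ss_w:9.1f}  {df_w:4d}  {s2_w:8.2f}")
    print(f"Total    {ss_b + ss_w:9.1f}  {df_b + df_w:4d}")

anova_table([A, B, C])   # for the three-test example; compare with the slide's F of 3.23
```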