EFFECT SIZE: More to life than statistical significance. Reporting effect size

  • Slide 1
  • EFFECT SIZE: More to life than statistical significance. Reporting effect size
  • Slide 2
  • OUTLINE: NHST problems; context and types of effect size; comparing samples (the d family of effect sizes); interpretation; understanding the individual (case-level effect sizes); the r family of effect sizes (R² and η²); interval estimates of effect size; summary
  • Slide 3
  • STATISTICAL SIGNIFICANCE: As we have discussed, it turns out a lot of researchers do not know what precisely p < .05 means
  • OTHER CASE-LEVEL EFFECT SIZES: Tail ratios (Feingold, 1995): the relative proportion of scores from two different groups that fall in the upper extreme (i.e., either the left or right tail) of the combined frequency distribution. "Extreme" is usually defined relatively, in terms of the number of standard deviations away from the grand mean. A tail ratio > 1.0 indicates one group has relatively more extreme scores. Here, tail ratio = p2 / p1
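A minimal sketch of the idea in R, assuming two normal groups; all means, SDs, and the cutoff choice below are invented for illustration:

    # Tail ratio sketch: proportion of each group beyond a common cutoff
    m1 <- 0;   s1 <- 1        # group 1 (assumed)
    m2 <- 0.5; s2 <- 1        # group 2 (assumed)
    cutoff <- (m1 + m2) / 2 + 2   # 2 SDs above the grand mean (SD taken as 1)
    p1 <- pnorm(cutoff, m1, s1, lower.tail = FALSE)
    p2 <- pnorm(cutoff, m2, s2, lower.tail = FALSE)
    p2 / p1   # > 1 means group 2 is overrepresented in the right tail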
  • Slide 21
  • OTHER CASE-LEVEL EFFECT SIZES: Common language effect size (McGraw & Wong, 1992) is the predicted probability that a random score from the upper group exceeds a random score from the lower group. Convert the mean difference to z = (M1 − M2)/√(s1² + s2²) and find the corresponding normal-curve area (the probability that the upper-minus-lower difference is positive). Range: .50 to 1.0
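A sketch in R, assuming normality; the group means and SDs are made up:

    # Common language effect size: P(random upper-group score > lower-group score)
    m1 <- 105; s1 <- 15     # upper group (assumed values)
    m2 <- 100; s2 <- 15     # lower group (assumed values)
    z  <- (m1 - m2) / sqrt(s1^2 + s2^2)
    pnorm(z)                # normal-curve probability; ranges from .50 to 1.0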
  • Slide 22
  • ASSOCIATION: A measure of association describes the amount of covariation between the independent and dependent variables. It is expressed in an unsquared standardized metric or its squared value; the former is usually a correlation, the latter a variance-accounted-for effect size. A squared multiple correlation (R²) calculated in ANOVA is called the correlation ratio, or estimated eta-squared (η̂²)
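For instance, a self-contained R sketch with toy data (the three group means are invented for illustration):

    # Estimated eta-squared from a one-way ANOVA: SS_between / SS_total
    set.seed(1)
    y <- c(rnorm(20, 0), rnorm(20, 0.5), rnorm(20, 1))
    g <- factor(rep(1:3, each = 20))
    ss <- summary(aov(y ~ g))[[1]][["Sum Sq"]]
    ss[1] / sum(ss)   # eta-squared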
  • Slide 23
  • ANOTHER MEASURE OF EFFECT SIZE: The point-biserial correlation, r_pb, is the Pearson correlation between membership in one of two groups and a continuous outcome variable. It is calculated exactly as we would the Pearson r, just given a different name to communicate the types of variables involved. As mentioned, r_pb has a direct relationship to t and d. When squared, it is a special case of η² in ANOVA. In a one-way ANOVA with a two-group factor: η² = R² from a regression approach = r²_pb
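A quick R sketch of these equivalences on toy data (the 0.6 SD group difference is invented):

    # Point-biserial r is just Pearson r with a 0/1 group code
    set.seed(2)
    y   <- c(rnorm(30, 0), rnorm(30, 0.6))
    grp <- rep(0:1, each = 30)
    r_pb <- cor(grp, y)
    # The same value is recovered from the pooled two-sample t statistic
    tt <- t.test(y ~ grp, var.equal = TRUE)
    sqrt(tt$statistic^2 / (tt$statistic^2 + tt$parameter))   # equals |r_pb|
    r_pb^2   # equals eta-squared for this two-group ANOVA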
  • Slide 24
  • ETA-SQUARED: A measure of the degree to which variability among observations can be attributed to conditions. Example: η² = .50 means 50% of the variability seen in the scores is due to the independent variable. Relationship to t in the two-group setting: η² = t²/(t² + df)
  • Slide 25
  • OMEGA-SQUARED: Another effect size measure that is less biased than, and interpreted in the same way as, eta-squared. Think adjusted R². Estimated as ω̂² = (SS_between − (k − 1)MS_within) / (SS_total + MS_within), where k is the number of groups
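A self-contained R sketch applying the estimator above to the same kind of toy one-way design (data invented):

    # Omega-squared from a one-way ANOVA
    set.seed(1)
    y <- c(rnorm(20, 0), rnorm(20, 0.5), rnorm(20, 1))
    g <- factor(rep(1:3, each = 20))
    ss <- summary(aov(y ~ g))[[1]][["Sum Sq"]]
    k <- nlevels(g); ms_w <- ss[2] / (length(y) - k)
    (ss[1] - (k - 1) * ms_w) / (sum(ss) + ms_w)   # omega-squared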
  • Slide 26
  • CONFIDENCE INTERVALS FOR EFFECT SIZE: Effect size statistics such as Hedges' g and η² have complex distributions. Traditional methods of interval estimation rely on approximate standard errors assuming large sample sizes. The general form for d follows the interval form we've seen throughout: d ± z_crit × SE_d
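As a sketch, using the common large-sample approximation for the standard error of d (an assumption on our part; the exact approach comes next):

    # Approximate CI for d, with SE_d = sqrt((n1+n2)/(n1*n2) + d^2/(2*(n1+n2)))
    d <- 0.35; n1 <- 72; n2 <- 60
    se_d <- sqrt((n1 + n2) / (n1 * n2) + d^2 / (2 * (n1 + n2)))
    d + c(-1, 1) * qnorm(.95) * se_d   # approximate 90% interval

With these inputs the approximation happens to land very close to the exact MBESS interval shown a couple of slides later; with smaller samples the two can diverge more.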
  • Slide 27
  • PROBLEM: However, CIs formulated in this manner are only approximate, and are based on the central (t) distribution centered on zero. The true (exact) CI depends on a noncentral distribution and an additional parameter, the noncentrality parameter: what the alternative-hypothesis distribution is centered on (the further from zero, the less belief in the null). d is a function of this parameter, such that if ncp = 0 (i.e., the distribution is centered on the null-hypothesis value), then d = 0 (i.e., no effect)
  • Slide 28
  • CONFIDENCE INTERVALS FOR EFFECT SIZE: A similar situation holds for the r and η² effect size measures. Gist: we'll need a computer program to help us find the correct noncentrality parameters to use in calculating exact confidence intervals for effect sizes. The MBESS library in R provides these easily (but notice how wide the interval is):

        ci.smd(smd = .35, n.1 = 72, n.2 = 60, conf.level = .90)
        $Lower.Conf.Limit.smd
        [1] 0.060
        $Upper.Conf.Limit.smd
        [1] 0.639

    Bootstrapped intervals are to be preferred in cases where assumptions do not hold, if not in general; here the bootstrapped interval was (0.076, 0.638)
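A sketch of how such a bootstrapped interval might be obtained with the boot package; the raw scores below are simulated stand-ins, since the slide's original data are not shown:

    # Bootstrapped percentile CI for a standardized mean difference
    library(boot)
    set.seed(3)
    dat <- data.frame(y = c(rnorm(72, 0.35), rnorm(60, 0)),
                      g = rep(1:2, c(72, 60)))
    smd <- function(d, i) {
      d <- d[i, ]
      m <- tapply(d$y, d$g, mean); v <- tapply(d$y, d$g, var)
      n <- tapply(d$y, d$g, length)
      sp <- sqrt(((n[1] - 1) * v[1] + (n[2] - 1) * v[2]) / (sum(n) - 2))
      (m[1] - m[2]) / sp   # pooled-SD standardized difference
    }
    b <- boot(dat, smd, R = 2000)
    boot.ci(b, conf = .90, type = "perc")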
  • Slide 29
  • LIMITATIONS OF EFFECT SIZE MEASURES: Standardized mean differences: heterogeneity of within-conditions variances across studies can limit their usefulness; the unstandardized contrast may be better in this case. Measures of association: correlations can be affected by sample variances and by whether the samples are independent or not, the design is balanced or not, or the factors are fixed or not. They are also affected by artifacts such as missing observations, range restriction, categorization of continuous variables, and measurement error (see Hunter & Schmidt, 1994, for various corrections). Variance-accounted-for indexes can make some effects look smaller than they really are in terms of their substantive significance
  • Slide 30
  • L IMITATIONS OF EFFECT SIZE MEASURES How to fool yourself with effect size estimation: 1. Examine effect size only at the group level 2. Apply generic definitions of effect size magnitude without first looking to the literature in your area 3. Believe that an effect size judged as large according to generic definitions must be an important result and that a small effect is unimportant (see Prentice & Miller, 1992) 4. Ignore the question of how theoretical or practical significance should be gauged in your research area 5. Estimate effect size only for statistically significant results
  • Slide 31
  • L IMITATIONS OF EFFECT SIZE MEASURES 6. Believe that finding large effects somehow lessens the need for replication 7. Forget that effect sizes are subject to sampling error 8. Forget that effect sizes for fixed factors are specific to the particular levels selected for study 9. Forget that standardized effect sizes encapsulate other quantities such as the unstandardized effect size, error variance, and experimental design 10. As a journal editor or reviewer, substitute effect size magnitude for statistical significance as a criterion for whether a work is published 11. Think that effect size = cause size