The Jens Förster saga continues. The results are too good to be true, but how did this happen?
A previous blog post examined how and why Dr. Förster's data showed incredibly improbable linearity.
The main hypothesis was that two experimental manipulations have opposite effects on a dependent variable.
Assuming that the average effect size of a single manipulation is similar to typical effect sizes in social psychology, a single manipulation is expected to have an effect size of d = .5 (a change of half a standard deviation). As the two manipulations are expected to have opposite effects, the mean difference between the two experimental groups should be one standard deviation (0.5 + 0.5 = 1). With N = 40 and d = 1, a study has 87% power to produce a significant effect (p < .05, two-tailed). With power of this magnitude, it would not be surprising to get significant results in 12 comparisons (Table 1).
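The power figure above can be checked with a short calculation. The sketch below is mine, not from the original post: it uses the normal approximation to the noncentral t distribution (so it returns roughly .88 rather than the exact noncentral-t value of about .87), and the function names are hypothetical.

```python
from math import erf, sqrt

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def approx_power(d, n_per_group, z_crit=1.959964):
    """Approximate power of a two-sided independent-samples t-test
    (alpha = .05), using the normal approximation to the noncentral t."""
    ncp = d * sqrt(n_per_group / 2.0)  # noncentrality parameter
    # Ignore the negligible probability mass in the opposite tail.
    return normal_cdf(ncp - z_crit)

# N = 40 total, i.e., 20 participants per group, with d = 1.
power = approx_power(d=1.0, n_per_group=20)
print(round(power, 3))  # ~ .88; exact noncentral-t computation gives ~ .87

# With 87% power per study, the chance that all 12 independent
# comparisons reach significance is .87^12.
print(round(0.87 ** 12, 3))
```

Note that even with 87% power per comparison, the joint probability that all 12 comparisons are significant is only about .19, which is why a perfect success rate across many studies is itself informative.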
The R-Index for the comparison of the two experimental groups in Table 1 is…