This blog post introduces a simple Excel spreadsheet that simulates the effect of excluding non-significant results from an unbiased set of studies.
The leftmost column shows the results for an unbiased set of 100 studies (N = 100, dropped = 0). The power value is used to compute the observed power of the 100 studies, based on a normal distribution centered on the non-centrality parameter that corresponds to the specified power (e.g., power = .50, ncp = 1.96).
For an unbiased set of studies, median observed power is equivalent to the success rate (the percentage of significant results). For example, with 50% power, the median observed ncp is 1.96, which matches the true ncp of 1.96 that corresponds to 50% power, and the success rate is 50%. Because the success rate equals median observed power, there is no inflation and the inflation rate is 0. As a result, the R-Index equals both median observed power and the success rate: R-Index = Median Observed Power - Inflation Rate; .50 = .50 - 0.
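The unbiased case can be sketched in code rather than Excel. This is a minimal Monte Carlo sketch, assuming a z-test with a two-tailed alpha of .05 (critical value 1.96) and observed power computed as Φ(z − 1.96); the variable names are mine, not the spreadsheet's.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
ncp = 1.96                                  # non-centrality parameter for 50% true power
z = rng.normal(ncp, 1.0, 100_000)           # observed z-scores of the simulated studies

observed_power = norm.cdf(z - 1.96)         # post-hoc power of each study
mop = float(np.median(observed_power))      # median observed power
success_rate = float(np.mean(z > 1.96))     # proportion of significant results
inflation = success_rate - mop
r_index = mop - inflation

print(f"MOP={mop:.2f}  success={success_rate:.2f}  R-Index={r_index:.2f}")
# all three values come out close to 0.50, as described above
```

With no selection, median observed power, success rate, and R-Index all converge on the true power of 50%.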
Moving to the right, studies with the lowest observed ncp values (equivalent to the highest p-values) are dropped in sets of 5 studies. However, you can make changes to the way results are excluded or altered to simulate questionable research practices. When non-significant studies are dropped, median observed power and success rate increase. Eventually, the success rate increases faster than median observed power, leading to a positive inflation rate. As the inflation rate is subtracted from median observed power, the R-Index starts to correct for publication bias. For example, in the example with 50% true power, median observed power is inflated to 63% by dropping 25 non-significant results. The success rate is 67%, the inflation rate is 4% and the R-Index is 59%. Thus, the R-Index still overestimates true power by 9%, but it provides a better estimate of true power than median observed power without a correction (63%).
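The worked example above (100 studies at 50% true power, 25 non-significant results dropped) can be reproduced with a Monte Carlo sketch. This is an illustration under the same assumptions as before (z-test, critical value 1.96), not the spreadsheet's own code; averaged over many simulated sets, the results come out close to the 63%, 67%, and 59% reported above, up to rounding and sampling error.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
mops, succs = [], []
for _ in range(10_000):                      # 10,000 simulated sets of studies
    z = np.sort(rng.normal(1.96, 1.0, 100))  # 100 studies at 50% true power
    z = z[25:]                               # drop the 25 lowest z-scores (highest p-values)
    mops.append(np.median(norm.cdf(z - 1.96)))
    succs.append(np.mean(z > 1.96))

mop = float(np.mean(mops))                   # median observed power, averaged over sets
success_rate = float(np.mean(succs))
inflation = success_rate - mop
r_index = mop - inflation
print(f"MOP={mop:.2f}  success={success_rate:.2f}  R-Index={r_index:.2f}")
```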
An important special case is the scenario where all non-significant results are dropped. This scenario is automatically highlighted with orange cells for the number of studies and the success rate. With 50% true power, this occurs when 50% of the studies are dropped. In this scenario, median observed power is 76%, the success rate is 100%, the inflation rate is 24%, and the R-Index is 51%. These values differ slightly from more exact simulations, which show a median observed power of 75%, an inflation rate of 25%, and an R-Index of 50%.
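The more exact values for this special case can be checked with a larger simulation. This sketch again assumes a z-test with critical value 1.96; with only significant results retained, the success rate is 100% by construction.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
z = rng.normal(1.96, 1.0, 1_000_000)        # one million studies at 50% true power
z_sig = z[z > 1.96]                         # drop every non-significant result
mop = float(np.median(norm.cdf(z_sig - 1.96)))
inflation = 1.0 - mop                       # success rate is 100% by construction
r_index = mop - inflation                   # = 2 * mop - 1
print(f"MOP={mop:.2f}  inflation={inflation:.2f}  R-Index={r_index:.2f}")
# close to the exact values: MOP 0.75, inflation 0.25, R-Index 0.50
```

The retained z-scores are the upper half of a normal distribution centered at 1.96, so their median is the 75th percentile of that distribution, which is why median observed power converges on exactly 75%.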
The table below lists the results for different levels of true power when all non-significant results are dropped. The scenario with 5% power implies that the null-hypothesis is true, but that 5% of significant results are obtained due to sampling error alone.
True Power   MOP   IR   R-Index
 5%          66    34   32
30%          70    30   40
50%          75    25   50
65%          80    20   60
80%          87    13   73
95%          96     4   91
Success rate is fixed at 100%; MOP = median observed power; IR = inflation rate (100% - MOP); R-Index = MOP - IR. All values are percentages.
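Under the simplifying assumption that selection keeps only results with z > 1.96 and observed power is Φ(z − 1.96), the table has a closed form: the median surviving z-score is the (1 − power/2) quantile of N(ncp, 1), where ncp = 1.96 + Φ⁻¹(power). The sketch below reproduces most rows within about one percentage point; the 5% row deviates most (about 62% rather than 66%), presumably reflecting how the spreadsheet models the null case.

```python
from scipy.stats import norm

rows = {}
for p in (0.05, 0.30, 0.50, 0.65, 0.80, 0.95):
    # median observed power after all non-significant results are dropped
    mop = norm.cdf(norm.ppf(p) + norm.ppf(1 - p / 2))
    rows[p] = (mop, 1 - mop, 2 * mop - 1)   # (MOP, IR, R-Index)
    print(f"{p:.0%}: MOP={mop:.0%}  IR={1 - mop:.0%}  R-Index={2 * mop - 1:.0%}")
```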
The results show that the R-Index tracks observed power, but it is not an unbiased estimate of true power. In real data, the process that produces bias is unknown, and it is impossible to obtain an unbiased estimate of true power from a biased set of studies. This is why it is important to eliminate biases in publications as much as possible. Nevertheless, the R-Index provides useful information about true power and replicability in a biased set of studies.
Simulation R-Index [click on link to download spreadsheet]