In 2000, the APA declared the following decade the Decade of Behavior. The current decade may well be remembered as the decade of replicability, or rather the lack thereof. The replicability crisis started with the publication of Bem's (2011) infamous "Feeling the Future" article. In response, psychologists have begun the painful process of self-examination.
Preregistered replication reports and systematic studies of reproducibility have demonstrated that many published findings are difficult to replicate and that, when they can be replicated, actual effect sizes are about 50% smaller than the effect sizes reported in the original articles (Open Science Collaboration, Science, 2015).
To examine which studies in psychology produced replicable results, I created ReplicabilityReports. Replicability reports use statistical tools that can detect publication bias and questionable research practices to examine the replicability of research findings in a particular research area. The first replicability report examined the large literature of ego-depletion studies and found that only about a dozen studies may have produced replicable results.
This replicability report focuses on a smaller literature that used mating primes (images of potential romantic partners / imagining a romantic scenario) to test evolutionary theories of human behavior. Most studies use the typical priming design, where participants are randomly assigned to one or more mating prime conditions or a control condition. After the priming manipulation the effect of activating mating-related motives and thoughts on a variety of measures is examined. Typically, an interaction with gender is predicted with the hypothesis that mating primes have stronger effects on male participants. Priming manipulations vary from subliminal presentations to instructions to think about romantic scenarios for several minutes; sometimes with the help of visual stimuli. Dependent variables range from attitudes towards risk-taking to purchasing decisions.
Shanks et al. (2015) conducted a meta-analysis of a subset of mating-priming studies that focus on consumption and risk-taking. A funnel plot showed clear evidence of bias in the published literature. The authors also conducted several replication studies, none of which produced significant results. Although this outcome might be due to low power to detect small effects, a meta-analysis of all replication studies also produced no evidence for reliable priming effects (average d = .00, 95% CI [-.12, .11]).
This replicability report aims to replicate and extend Shanks et al.'s findings in three ways. First, I expanded the database by including all articles that mentioned the term "mating primes" in a full-text search of social psychology journals. This expanded the set of articles from 15 to 36 and the set of studies from 42 to 92. Second, I used a novel and superior bias test. Shanks et al. used funnel plots and Egger's regression of effect sizes on sampling error to examine bias. The problem with this approach is that heterogeneity in effect sizes can produce a negative correlation between effect sizes and sample sizes even in the absence of bias. Power-based bias tests do not suffer from this problem (Schimmack, 2014). A set of studies with an average power of 60% cannot produce more than 60% significant results in the long run (Sterling et al., 1995). Thus, a discrepancy between observed power and the reported success rate provides clear evidence of selection bias. Powergraphs also make it possible to estimate the actual power of studies after correcting for publication bias and questionable research practices. Finally, replicability reports use bias tests that can be applied to small sets of studies. This makes it possible to identify studies with replicable results even if most studies have low replicability.
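The logic of power-based bias tests can be illustrated with a small simulation. If studies have 60% power, the true mean of the z-score distribution is about 2.21 (since Φ(2.21 − 1.96) ≈ .60), and the long-run rate of significant results cannot exceed 60%. This is a minimal sketch of that logic, not part of the original analysis:

```python
import random

random.seed(1)

# True mean z-score chosen so that power is 60%:
# P(Z > 1.96) with Z ~ N(2.21, 1) is about .60.
mu = 2.21
n_studies = 100_000

# Simulate one z-score per study and count significant results (z > 1.96).
significant = sum(random.gauss(mu, 1) > 1.96 for _ in range(n_studies))
success_rate = significant / n_studies  # converges to about .60
```

A literature with 100% reported successes but only 60% estimated power therefore cannot be a complete record of the studies that were run.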
The dataset consists of 36 articles and 92 studies. The median sample size of a study was N = 103 and the total number of participants was N = 11,570. The success rate including marginally significant results, z > 1.65, was 100%. The success rate excluding marginally significant results, z > 1.96, was 90%. Median observed power for all 92 studies was 66%. This discrepancy shows that the published results are biased towards significance. When bias is present, median observed power overestimates actual power. To correct for this bias, the R-Index subtracts the inflation rate from median observed power. The R-Index is 66 – 34 = 32. An R-Index below 50% implies that most studies will not replicate a significant result in an exact replication study with the same sample size and power as the original studies. The R-Index for the 15 studies included in Shanks et al. was 34% and the R-Index for the additional studies was 36%. This shows that convergent results were obtained for two independent samples based on different sampling procedures and that Shanks et al.’s limited sample was representative of the wider literature.
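The R-Index computation described above can be sketched in a few lines. This is a minimal illustration of the method (observed power is computed from the absolute z-score against the two-sided criterion, ignoring the negligible opposite tail); the actual implementation may differ in details:

```python
from math import erf, sqrt
from statistics import median

def phi(x):
    # Standard normal cumulative distribution function.
    return 0.5 * (1 + erf(x / sqrt(2)))

def observed_power(z, crit=1.96):
    # Two-sided observed power; the opposite tail is negligible and ignored.
    return phi(abs(z) - crit)

def r_index(z_scores, crit=1.96):
    mop = median(observed_power(z, crit) for z in z_scores)  # median observed power
    success_rate = sum(abs(z) > crit for z in z_scores) / len(z_scores)
    inflation = success_rate - mop
    return mop - inflation
```

With median observed power of .66 and a 100% success rate, this yields an R-Index of .66 − .34 = .32, matching the value reported above.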
For each study, a focal hypothesis test was identified and the result of the statistical test was converted into an absolute z-score. These absolute z-scores can vary as a function of random sampling error or differences in power and should follow a mixture of normal distributions. Powergraphs find the mixture model that minimizes the discrepancy between the observed and predicted distribution of z-scores.
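The conversion works by translating each reported test statistic into its two-tailed p-value and then into the z-score with the same p-value on the standard normal distribution. A sketch for F-tests (scipy is used here for the distribution functions; this is an assumption about tooling, not the original code):

```python
from scipy.stats import f as f_dist, norm

def f_to_z(f_value, df1, df2):
    # p-value of the F-test, treated as a two-tailed p-value for the effect.
    p = f_dist.sf(f_value, df1, df2)
    # Absolute z-score with the same two-tailed p-value.
    return norm.isf(p / 2)

# For example, F(1, 111) = 10.40 converts to z of about 3.15.
```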
The histogram of z-scores shows clear evidence of selection bias. The steep cliff on the left side of the criterion for significance (z = 1.96) shows a lack of non-significant results. The few non-significant results are all in the range of marginal significance and were reported as evidence for an effect.
The histogram also shows evidence of the use of questionable research practices. Selection bias would only produce a cliff to the left of the significance criterion, but a mixture-normal distribution on the right side of the significance criterion. However, the graph also shows a second cliff around z = 2.8. This cliff can be explained by questionable research practices that inflate effect sizes to produce significant results. These questionable research practices are much more likely to produce z-scores in the range between 2 and 3 than z-scores greater than 3.
The large number of z-scores in the range between 1.96 and 2.8 makes it impossible to distinguish between real effects with modest power and questionable effects with much lower power that will not replicate. To obtain a robust estimate of power, power is estimated only for z-scores greater than 2.8 (k = 17). The resulting power estimate is 73%. This estimate suggests that some studies may have reported real effects that can be replicated.
The grey curve shows the predicted distribution for a set of studies with 73% power. As can be seen, there are too many observed z-scores in the range between 1.96 and 2.8 and too few z-scores in the range between 0 and 1.96 compared to the predicted distribution based on z-scores greater than 2.8.
The powergraph analysis confirms and extends Shanks et al.'s (2015) findings. First, the analysis provides strong evidence that selection bias and questionable research practices contribute to the high success rate in the mating-prime literature. Second, the analysis suggests that a small proportion of studies may actually have reported true effects that can be replicated.
REPLICABILITY OF INDIVIDUAL ARTICLES
The replicability of results published in individual articles was examined with the Test of Insufficient Variance (TIVA) and the Replicability-Index (R-Index). TIVA tests bias by comparing the variance of observed z-scores against the variance expected on the basis of sampling error alone. As the sampling error of z-scores is 1, observed z-scores should have a variance of at least 1. If there is heterogeneity, the variance can be even greater, but it cannot be systematically smaller than 1. TIVA uses the chi-square test for variances to compute the probability that a variance less than 1 occurred simply by chance. A p-value less than .10 is used to flag an article as questionable.
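TIVA can be sketched in a few lines, assuming (as the description implies) a left-tailed chi-square test on the sample variance of the z-scores. Scipy is used for the chi-square distribution; this is a sketch of the method, not the original code:

```python
from statistics import variance
from scipy.stats import chi2

def tiva(z_scores):
    # Under unbiased reporting the variance of z-scores should be at least 1.
    k = len(z_scores)
    var = variance(z_scores)  # sample variance with k - 1 degrees of freedom
    # Probability of observing a variance this small if the true variance is 1.
    p = chi2.cdf((k - 1) * var, df=k - 1)
    return var, p
```

Applied to a set of implausibly similar z-scores, the test returns a variance far below 1 and a p-value that flags the set as questionable.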
The Replicability-Index (R-Index) uses observed power to test for bias. Z-scores are converted into a measure of observed power, and median observed power is used as an estimate of actual power. The success rate (percentage of significant results) should match median observed power; a success rate that exceeds it indicates inflation. The R-Index subtracts this inflation from median observed power. A value of 50% is used as the minimum criterion for replicability.
Articles that pass both tests are examined in more detail to identify studies with high replicability. Only three articles passed this test.
1. Greitemeyer, Kastenmüller, and Fischer (2013) [R-Index = .80]
The article with the highest R-Index reported four studies. The high R-Index for this article is due to Studies 2 to 4. Studies 3 and 4 used a 2 x 3 between-subjects design with gender and three priming conditions. Both studies produced strong evidence for an interaction effect, Study 3: F(2,111) = 12.31, z = 4.33; Study 4: F(2,94) = 7.46, z = 3.30. The pattern of the interaction is very similar in the two studies. For women, the means are very similar and not significantly different from each other. For men, the two mating-prime conditions are very similar to each other and significantly different from the control condition. The standardized effect sizes for the difference between the combined mating-prime conditions and the control condition are large, Study 3: t(110) = 6.09, p < .001, z = 5.64, d = 1.63; Study 4: t(94) = 5.12, d = 1.30.
Taken at face value, these results are highly replicable, but there are some concerns about the reported results. The means in conditions that are not predicted to differ from each other are very similar. I tested the probability of this occurring by chance using TIVA, comparing the means of the two mating-prime conditions for men and women in the two studies. The four z-scores were z = 0.53, 0.08, 0.09, and -0.40. The variance should be 1, but the observed variance is only Var(z) = 0.14. The probability of such a reduction in variance occurring by chance is p = .056. Thus, even though the overall R-Index for this article is high and the reported effect sizes are very large, it is likely that an actual replication study will produce weaker effects and may not replicate the original findings.
Study 2 also produced strong evidence for a priming x gender interaction, F(1,81) = 11.23, z = 3.23. In contrast to Studies 3 and 4, this was a cross-over interaction with opposite effects of primes for males and females. However, there is some concern about the reliability of this interaction because the post-hoc tests for males and females were both only just significant, males: t(40) = 2.61, d = .82; females: t(41) = 2.10, d = .63. As these post-hoc tests are essentially two independent studies, it is possible to use TIVA to test whether these results are too similar, Var(z) = 0.11, p = .25. The R-Index for this pair of tests is low, R-Index = .24 (median observed power = .62). Thus, a replication study may replicate the interaction effect, but the chance of replicating significant results for males or females separately is lower.
Importantly, Shanks et al. (2015) conducted two close replications of Greitemeyer et al.'s studies with risky driving, gambling, and sexual risk-taking as dependent variables. Study 5 examined the effect of short-term mating primes on risky driving. Although the sample size was small, the large effect size in the original study implies that this study had high power to replicate the effect, but it did not, t(77) = -0.85, p = .40, z = -.85. The negative sign indicates that the pattern of means was reversed, though not significantly so. Study 6 failed to replicate the interaction effect for sexual risk-taking reported by Greitemeyer et al., F(1, 93) = 1.15, p = .29. The means for male participants were in the opposite direction, showing a decrease in risk-taking after mating priming. The study also failed to replicate the significant decrease in risk-taking for female participants. Study 6 also produced non-significant results for gambling and substance-related risk-taking. These failed replications raise further concerns about the replicability of the original results with their extremely large effect sizes.
2. Jon K. Maner, Matthew T. Gailliot, D. Aaron Rouby, and Saul L. Miller (JPSP, 2007) [R-Index = .62]
This article passed TIVA only because of the low power of TIVA for a set of three studies, TIVA: Var(z) = 0.15, p = .14. In Study 1, male and female participants were randomly assigned to a sexual-arousal priming condition or a happiness control condition. Participants also completed a measure of socio-sexual orientation (i.e., interest in casual and risky sex) and were classified into groups of unrestricted and restricted participants. The dependent variable was performance on a dot-probe task. In a dot-probe task, participants have to respond to a dot that appears in the location of one of two stimuli that compete for visual attention. In theory, participants are faster to respond to the dot if it appears in the location of a stimulus that attracts more attention. Stimuli were pictures of very attractive or less attractive members of the same or opposite sex. The time between the presentation of the pictures and the dot was also manipulated. The authors reported that they predicted a three-way interaction between priming condition, target picture, and stimulus-onset time. The authors did not predict an interaction with gender. The ANOVA showed a significant three-way interaction, F(1,111) = 10.40, p = .002, z = 3.15. A follow-up two-way ANOVA showed an interaction between priming condition and target for unrestricted participants, F(1,111) = 7.69, p = .006, z = 2.72.
Study 2 replicated Study 1 with a sentence-unscrambling task, which is used as a subtler priming manipulation. The study closely replicated the results of Study 1. The three-way interaction was significant, F(1,153) = 9.11, and the follow-up two-way interaction for unrestricted participants was also significant, F(1,153) = 8.22, z = 2.75.
Study 3 changed the primes to jealousy or anxiety/frustration. Jealousy is a mating related negative emotion and was predicted to influence participants like mating primes. In this study, participants were classified into groups with high or low sexual vigilance based on a jealousy scale. The predicted three-way interaction was significant, F(1,153) = 5.74, p = .018, z = 2.37. The follow-up two-way interaction only for participants high in sexual vigilance was also significant, F(1,153) = 8.13, p = .005, z = 2.81.
A positive feature of this set of studies is that the manipulation of targets within subjects reduces within-cell variability and increases the power to produce significant results. However, a problem is that the authors also report analyses for specific targets without using reaction times to the other targets as a covariate. These analyses have low power due to the high variability in reaction times across participants. Surprisingly, each study nevertheless produced the predicted significant result.
Study 1: “Planned analyses clarified the specific pattern of hypothesized effects. Multiple regression evaluated the hypothesis that priming would interact with participants’ sociosexual orientation to increase attentional adhesion to attractive opposite-sex targets. Attention to those targets was regressed on experimental condition, SOI, participant sex, and their centered interactions (nonsignificant interactions were dropped). Results confirmed the hypothesized interaction between priming condition and SOI, beta = .19, p < .05 (see Figure 1).”
I used r = .19 and N = 113 and obtained t(111) = 2.04, p = .043, z = 2.02.
Study 2: “Planned analyses clarified the specific pattern of hypothesized effects. Regression evaluated the hypothesis that the mate-search prime would interact with sociosexual orientation to increase attentional adhesion to attractive opposite-sex targets. Attention to these targets was regressed on experimental condition, SOI score, participant sex, and their centered interactions (nonsignificant interactions were dropped). As in Study 1, results revealed the predicted interaction between priming condition and sociosexual orientation, beta = .15, p = .04, one-tailed (see Figure 2)”
I used r = .15 and N = 155 and obtained t(153) = 1.88, p = .06 (two-tailed!), z = 1.86.
Study 3: “We also observed a significant main effect of intrasexual vigilance, beta = .25, p < .001, partial r = .26, and, more important, the hypothesized two-way interaction between priming condition and level of intrasexual vigilance, beta = .15, p < .05, partial r = .16 (see Figure 3).”
I used r = .16 and N = 155 and obtained t(153) = 2.00, p = .047, z = 1.99.
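The three conversions above use the standard formula for the t-statistic of a correlation or standardized regression coefficient, t = r√(N − 2)/√(1 − r²), followed by the p-to-z conversion. A sketch (scipy is used for the distribution functions; this illustrates the conversion, it is not the original code):

```python
from math import sqrt
from scipy.stats import t as t_dist, norm

def r_to_z(r, n):
    # t-statistic for a correlation (or standardized slope) with n - 2 df.
    t = r * sqrt(n - 2) / sqrt(1 - r ** 2)
    p = 2 * t_dist.sf(abs(t), df=n - 2)  # two-tailed p-value
    z = norm.isf(p / 2)                  # corresponding absolute z-score
    return t, p, z

# For example, r = .19 with N = 113 gives t of about 2.04 and z of about 2.02.
```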
The problem is that the results of these three independent analyses are too similar, z = 2.02, 1.86, 1.99; Var(z) = .007, p = .007.
In conclusion, there are some concerns about the replicability of these results, and even if the results replicate, they do not provide support for the hypothesis that mating primes have a hard-wired effect on males. Only one of the three studies produced a significant two-way interaction between priming and target (F-value not reported), and none of the three studies produced a significant three-way interaction between priming, target, and gender. Thus, the results are inconsistent with other studies that found either main effects of mating primes or mating-prime-by-gender interactions.
3. Bram Van den Bergh and Siegfried Dewitte (Proc. R. Soc. B, 2006) [R-index = .58]
This article reports three studies that examined the influence of mating primes on behavior in the ultimatum game.
Study 1 had a small sample of 40 male participants who were randomly assigned to viewing pictures of non-nude female models or landscapes. The study produced a significant main effect, F(1,40) = 4.75, p = .035, z = 2.11, and a significant interaction with finger-digit ratio, F(1,40) = 4.70, p = .036, z = 2.10. I used the main effect for the analysis because it is theoretically more important than the interaction effect, but the results are so similar that it does not matter which effect is used.
Study 2 used ratings of women's t-shirts or bras as the priming manipulation. The study produced strong evidence that mating primes (rating bras) lead to lower minimum acceptance rates in the ultimatum game than the control condition (rating t-shirts), F(1,33) = 8.88, p = .005, z = 2.78. Once more, the study also produced a significant interaction with finger-digit ratio, F(1,33) = 8.76, p = .006, z = 2.77.
Study 3 had three experimental conditions: non-sexual pictures of older women, non-sexual pictures of young women, and pictures of young non-nude female models. The study produced a significant effect of condition, F(2,87) = 5.49, p = .006, z = 2.77. Once more, the interaction with finger-digit ratio was also significant, F(2,87) = 5.42.
This article barely passed the Test of Insufficient Variance in the primary analysis, which uses one focal test per study, Var(z) = 0.15, p = .14. However, the main effects and the interaction effects are statistically independent, and it is possible to increase the power of TIVA by using the z-scores for the three main effects and the three interactions. This test produces significant evidence of bias, Var(z) = 0.12, p = .01.
In conclusion, it is unlikely that the results reported in this article will replicate.
The replicability crisis in psychology has created doubt about the credibility of published results. Numerous famous priming studies have failed to replicate in large replication projects. Shanks et al. (2015) reported problems with the specific literature on romantic and mating priming. This replicability report provides further evidence that the mating-prime literature is not credible. Using an expanded set of 92 studies, analyses with powergraphs, the Test of Insufficient Variance, and the Replicability-Index showed that many significant results were obtained with the help of questionable research practices that inflate observed effect sizes and provide misleading evidence about the strength and replicability of the published results. Only three articles passed both the TIVA and R-Index tests, and detailed examination of these articles also revealed statistical problems with their evidence. Thus, this replicability analysis of 36 articles failed to identify a single credible article. The lack of credible evidence is consistent with Shanks et al.'s failure to produce significant results in 15 independent replication studies.
Of course, these results do not imply that evolutionary theory is wrong or that sexual stimuli have no influence on human behavior. For example, in my own research I have demonstrated that sexually arousing opposite-sex pictures capture men's and women's attention (Schimmack, 2005). However, these responses occurred in response to specific stimuli and not as carry-over effects of a priming manipulation. Thus, the problem with mating-prime studies is probably that priming effects are weak and may have no notable influence on unrelated behaviors such as consumer choices or risk-taking in investments. Given the replication problems with other priming studies, it seems necessary to revisit the theoretical assumptions underlying this paradigm. For example, Shanks et al. (2015) pointed out that behavioral priming effects are theoretically implausible because they contradict well-established theories according to which behavior is guided by the cognitive appraisal of the situation at hand rather than by unconscious residual information from previous situations. This makes evolutionary sense because behavior has to respond to the adaptive problem at hand to ensure survival and reproduction.
I recommend that textbook writers, journalists, and aspiring social psychologists treat claims about human behavior based on mating priming studies with a healthy dose of skepticism. The results reported in these articles may reveal more about the motives of researchers than their participants.