It is 2019 and everybody is familiar with fake news. Liberals accuse Fox News of spreading fake news and conservatives return the favor by accusing the liberal media of fake news. Each side has experts and facts to support their biased and ideological construction of reality.
Innocent consumers of scientific information, like many undergraduate students who pay money for an education, may assume that universities are a safe space where professors are given a good salary and job security to search for the truth.
While I cannot speak for all sciences, I can say that this naive assumption about scientists does not describe the behavior of eminent social psychologists.
The problem for social psychologists is that many of their findings were obtained with questionable research practices (QRPs; John et al., 2012; Schimmack, 2015). As a result, many of the published results in social psychology do not replicate. Since 2011, over 100 social psychology experiments have been replicated, and fewer than 25% produced a significant result (Open Science Collaboration, 2015; Curate Science, 2018). The replication crisis in social psychology suggests that many results presented in social psychology textbooks may not replicate and could be false positives.
An unflinching, scientific response to replication failures in social psychology would require a thorough revision of social psychology textbooks. However, there is no evidence that textbook writers are able or willing to tell undergraduate students about the replication crisis in their field.
For example, Gilovich, Keltner, Chen, and Nisbett (2019) claim that replication failures are best explained by problems with the replication studies and point to more successful replications in other behavioral sciences, such as economics, to suggest that students should trust social psychology (Schimmack, 2018a). However, it can be shown that the replication failures in social psychology are mostly caused by the use of questionable research practices that inflate effect sizes and the risk of false discoveries (Schimmack, 2018b), rather than by incompetent replication attempts.
On page 57 of their textbook, the authors pledge to their readers that they “tried to be scrupulous about noting when the evidence about a given point is mixed.” Students could simply trust the authors, but it is always better to check whether they deserve students’ trust.
There is probably no better example of mixed evidence than social priming. As Daniel Kahneman wrote in an open letter to social priming researchers:
your field is now the poster child for doubts about the integrity of psychological research… people have now attached a question mark to the field, and it is your responsibility to remove it… all I have personally at stake is that I recently wrote a book that emphasizes priming research as a new approach to the study of associative memory…Count me as a general believer… My reason for writing this letter is that I see a train wreck looming.
Several years later, the train wreck has materialized. First, social priming researchers have not responded to Kahneman’s plea to replicate their findings in their own laboratories to demonstrate that replication failures were caused by improper replication studies. Second, other researchers have published numerous replication studies that failed to reproduce the original findings (Curate Science, 2018). Finally, statistical analyses of social priming studies show evidence that questionable research practices were used to produce the textbook findings on social priming (Schimmack, 2017a; Schimmack, 2017b).
How does the 5th edition (2019) of the textbook differ from previous editions in its presentation of priming research?
Several priming studies that were included in the 3rd edition (2013) are no longer mentioned in the 5th edition (2019):
“activating the concept ‘professor’ actually makes students do better on a trivia test” (Dijksterhuis & van Knippenberg, 1998).
“More remarkably still, Dijksterhuis, van Knippenberg, and their colleagues demonstrated that activating the stereotype of professor or supermodel led participants to perform in a manner consistent with the stereotype, but activating a specific (extreme) example of the stereotyped group (for example, Albert Einstein or Claudia Schiffer) led participants to perform in a manner inconsistent with the stereotype” (p. 128).
“just mentioning words that call to mind the elderly (cane, Florida) causes college students to walk down a hall more slowly” (Bargh, Chen, & Burrows, 1996).
The following sentence has not been changed, but the reference to Bargh, Raymond, Pryor, & Strack, 1995, has been removed.
“Dispositionally high-powered individuals or individuals primed with feelings of power are more likely to touch others and approach them closely, to have sexual ideas running through their minds, to feel attraction to a stranger, to overestimate other people’s sexual interest in them, and to flirt in an overly forward fashion (Bargh, Raymond, Pryor, & Strack, 1995; Kuntsman & Maner, 2011; Rudman & Borgida, 1995).”
While it is interesting to see that the textbook authors lost confidence in some priming studies, the quiet removal of these studies contradicts their earlier pledge to inform students when evidence is mixed. More importantly, other priming studies are cited without any mention of the widespread doubt about social priming among psychologists.
In Chapter 1, “An Invitation to Social Psychology” (which might be better called an indoctrination into social psychology), the authors cite Bargh and Pietromonaco (1982) to claim that “often we can’t even identify some of the crucial factors that affect our beliefs and behavior” (p. 16). The cited article reports the results of a single subliminal priming study with a just-significant result, F(1, 128) = 4.15, p = .044. It is doubtful that this finding could be replicated today in a preregistered replication study.
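How close this result sits to the significance threshold can be checked by converting the reported two-sided p-value into a z-score. A minimal sketch using Python's standard library (the cutoff z = 1.96 corresponds to p = .05, two-sided):

```python
from statistics import NormalDist

def p_to_z(p: float) -> float:
    """Convert a two-sided p-value into the corresponding absolute z-score."""
    return NormalDist().inv_cdf(1 - p / 2)

z = p_to_z(0.044)          # reported result: F(1, 128) = 4.15, p = .044
threshold = p_to_z(0.05)   # conventional significance cutoff
print(round(z, 2), round(threshold, 2))  # prints: 2.01 1.96
```

The observed z-score barely clears the threshold, which is exactly the pattern expected when questionable research practices are used to push results just past p = .05.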
In Chapter 4 the authors make strong claims about priming effects.
“We’ve seen how schemas influence our attention, memory, and construal. Can they also influence behavior? Absolutely. Studies have shown that certain types of behavior are elicited automatically when people are exposed to stimuli in the environment that bring to mind a particular action or schema (Loersch & Payne, 2011; Weingarten et al., 2016). Such exposure is called priming a concept or schema.”
The 2011 reference is from before the replication crisis, and the Weingarten et al. (2016) article reports a meta-analysis of studies that used questionable research practices (QRPs). Without taking QRPs into account, conclusions based on such a meta-analysis are as doubtful as conclusions drawn from the original studies that used QRPs.
To examine the credibility of the priming evidence in this textbook, I z-curved the test statistics in the cited priming articles (see DATA for the list of studies). Most of the articles are cited in Chapter 4 on pages 120-121. I coded 36 empirical articles that contained 89 studies with useful information. Of these, 84 tests were significant at p < .05; the other 5 studies reported a marginally significant result as support for a priming effect.
The figure shows the z-curve based on the 84 significant test statistics converted into z-scores. The blue line shows the density distribution of the observed test statistics. The grey line shows the fit of the model to the observed data; it is also projected into the range of non-significant z-scores. The area under this part of the curve shows the estimated size of the file drawer if dropping non-significant studies were the only questionable practice used to produce significant priming results. The file-drawer ratio suggests that for every published significant result, about 12 non-significant results remained unpublished.
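The inputs to such an analysis are the z-scores corresponding to the reported two-sided p-values, and the file-drawer ratio follows directly from the estimated pre-selection discovery rate. A sketch of both steps, using illustrative p-values rather than the actual coded data:

```python
from statistics import NormalDist

def p_to_z(p: float) -> float:
    """Convert a two-sided p-value to the corresponding absolute z-score."""
    return NormalDist().inv_cdf(1 - p / 2)

# Illustrative p-values, not the coded textbook data.
p_values = [0.049, 0.03, 0.012, 0.004, 0.0005]
z_scores = [p_to_z(p) for p in p_values]

# If the estimated discovery rate before selection for significance is d,
# the expected file-drawer ratio (unpublished non-significant results per
# published significant one) is (1 - d) / d. A ratio of about 12:1 thus
# implies a discovery rate of roughly 1/13, i.e. about 8%.
discovery_rate = 1 / 13
file_drawer_ratio = (1 - discovery_rate) / discovery_rate
print([round(z, 2) for z in z_scores], round(file_drawer_ratio))
```

Smaller p-values map to larger z-scores, and the z-curve model is fitted to the distribution of these z-scores above the 1.96 significance cutoff.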
More important is the estimate of replicability for the 84 studies with significant results. The estimated success rate is 33%, with a 95% CI ranging from 11% to 41%. This estimate is consistent with other analyses of priming studies and with the low success rate in actual replication studies (Curate Science, 2018). It suggests that at best one-third of the cited studies could be successfully replicated, even if it were possible to redo each study exactly. As exact replication studies are often impossible, the actual success rate is likely to be lower.
The maximum false discovery rate is also very high, at 75%. This means that a model with the false discovery rate fixed at 75% fits the data nearly as well as the unconstrained model. Thus, most of the cited results could be false positives for which the actual effect size is close to zero. Effect sizes matter because social psychologists claim that stimuli outside our awareness can have a strong influence on behavior.
Finally, the numbers below the x-axis show mean power for different ranges of z-scores. For just-significant results with p-values greater than .01, mean power is only 15%. These results are unlikely to replicate even with much larger sample sizes. A z-score greater than 3 (p < .001) is needed for a greater than 50% chance of a successful replication. Interested readers can examine the data file to see which studies fall into this category.
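The link between a z-score and replication power can be illustrated with the standard power formula for a two-sided z-test: if the true effect corresponded to z_true, the chance of again reaching z > 1.96 is approximately 1 - Phi(1.96 - z_true). This naive calculation treats the observed z-score as the true value and ignores the selection bias that z-curve corrects for, so it overstates replicability; it is shown only to illustrate the relationship:

```python
from statistics import NormalDist

def naive_power(z_true: float, alpha_z: float = 1.96) -> float:
    """Power of a two-sided z-test when the true effect equals z_true.
    Ignores the negligible probability of significance in the wrong
    direction and the selection bias that z-curve adjusts for."""
    return 1 - NormalDist().cdf(alpha_z - z_true)

# If an observed z-score equaled the true value, power would be:
for z in (1.96, 2.5, 3.0):
    print(z, round(naive_power(z), 2))
```

Even under this optimistic assumption, a study that just reached significance (z = 1.96) has only a 50% chance of replicating; after correcting for selection, the realistic estimates are lower still.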
In conclusion, the textbook authors do not provide an honest, balanced, and self-critical introduction to social priming studies. Contrary to their claim to be “scrupulous about noting when the evidence about a given point is mixed,” they present social priming as a well-established phenomenon that generalizes across modalities and behaviors, while they are well aware that some studies failed to replicate. Rather than informing readers about these failures, some spectacular replication failures like professor priming were simply removed.
The Power of the Situation
Social psychologists emphasize situational influences on behavior. The textbook authors’ behavior illustrates how powerful situational influences can be. While stereotypes of professors suggest that they have meta-human strength to overcome human weaknesses, they are just as susceptible to situational influences like peer-pressure and monetary incentives.
The textbook authors probably believe that they tried to be objective, and the removal of some priming studies can be seen as evidence of that effort. However, their position as eminent social psychologists who earn money from publishing a textbook ensured that they would be unable to present the replication failures of social priming studies. The desire to invite (initiate) a new generation of students into social psychology was too strong to allow them to talk openly about the replication crisis.
Another psychologist once described the power of situational influences in science as being “a prisoner of a paradigm.” A paradigm is a scientific belief system that guides researchers’ beliefs, values, and actions. To accept the replication crisis in social psychology as a fact would force a paradigm shift. Paradigm shifts occur only when new researchers, outsiders, see the problems that insiders cannot see. Older researchers are unable to change their beliefs because doing so would trigger an existential crisis and probably a depressive episode. As Freud pointed out, humans use powerful defense mechanisms to protect the self from threatening thoughts.
For instructors who are looking for a scientific textbook about social psychology, and for students who want to learn about social psychology, I would recommend a different textbook; at this point, however, I cannot recommend one that is less biased than this one. For now, it is best to read about social psychology with a healthy dose of skepticism unless a particular finding has been successfully replicated in credible, preregistered replication studies.