Humanistic psychologists have a positive image of human nature: given the right environment, humans will act in the interest of the greater good. Similarly, academia was founded on the ideal of a shared understanding of the world based on empirical facts. Prominent representatives of psychological science still present this naive image of science.
“Our field has always encouraged — required, really — peer critiques.” (Susan T. Fiske, 2016)
The notion of peer criticism is naive because scientific peers are both active players and referees. If you don’t think that the World Cup final could be refereed by the players, you should also not believe that scientists can be objective when they have to critique their own science. This is not news to social psychologists, who teach about motivated biases in their classes, but these rules of human behavior are suddenly assumed not to apply to social psychologists, as if they were meta-humans.
Should active researchers write introductory textbooks?
It can be difficult to be objective in the absence of strong empirical evidence. Thus, disagreements among scientists are part of the normal scientific process of searching for a scientific answer to an important question. However, textbooks are supposed to introduce a new generation of students to fundamental facts that serve as the foundation for further discoveries. There is no excuse for self-serving biases in introductory textbooks.
Some textbooks are written by professional textbook writers. However, other textbooks are written by active and often eminent researchers. Everything we know about human behavior predicts that they will be unable to present criticism of their field objectively. And the discussion of the replication crisis in social psychology in Gilovich, Keltner, Chen, and Nisbett (2019) confirms this prediction.
The Replication Crisis in Social Psychology in a Social Psychology Textbook
During the past decade, social psychology has been rocked by scandals ranging from outright fraud to replication failures of some of its most celebrated textbook findings, such as unconscious priming of social behavior (Bargh) or ego depletion (Baumeister), and by the finding that a representative sample of replication studies failed to reproduce 75% of published results in social psychology (OSC, 2015).
The forthcoming 5th edition of this social psychology textbook does mention the influential OSC reproducibility project. However, the presentation is severely biased and fails to inform students that many findings in social psychology were obtained with questionable research practices and may not replicate.
How to Whitewash Replication Failures in Social Psychology
The textbook starts with the observation that replication failures generate controversy, but ends with the optimistic conclusion that scientists then reach a consensus about the reasons why a replication failure occurred.
“These debates usually result in a consensus about whether a particular finding should be accepted or not. In this way, science is self-correcting”
This rosy picture of science is contradicted by the authors’ own response to the replication failures in the Open Science Reproducibility Project. There is no consensus about the outcome of the reproducibility project, and social psychologists’ views are very different from outsiders’ interpretations of these results.
“In 2015, Brian Nosek and dozens of other psychologists published an article in the journal Science reporting on attempts to replicate [attempts to replicate!!!] 100 psychological studies (Open Science Collaboration, 2015). They found that depending on the criterion used, 36-47 percent of the original studies were successfully replicated.”
They leave out that the article also reported separate success rates for social psychology, the focus of the textbook, and for cognitive psychology. The success rate for social psychology was only 25%, and even this figure included some personality studies. The success rate for the classic between-subjects experiments in social psychology was a mere 4%! This information is missing, although (or because?) it would make undergraduate students wonder about the robustness of the empirical studies in their textbook.
Next, students are informed that they should not trust the results of this study.
“The findings received heavy criticism from some quarters (Gilbert, King, Pettigrew, & Wilson, 2016).”
No mention is made of who these critics are, or that Wilson is a student of textbook author Nisbett.
“The most prominent concern was that many of the attempted replications utilized procedures that differed substantially from the original studies and thus weren’t replications at all.”
What counts as “many,” and what counts as a “substantial” difference? Students are effectively told that the replication project was carried out incompetently (the replication studies weren’t replications at all) and that the editors of the most prominent journal across all sciences didn’t notice. This is how social psychologists often create their facts: with a stroke of a pen and without empirical evidence to back it up.
Students are then told that other studies have produced much higher estimates of replicability, reassuring them that textbook findings are credible.
“Other systematic efforts to reproduce the results of findings reported in behavioral science journals have yielded higher replication rates, on the order of 75-85 percent (Camerer et al., 2016; Klein et al., 2014). “
I have been following the replication crisis since its beginning, and I have never seen success rates of this magnitude. Thus, I fact-checked these estimates, which are presented to undergraduate students as the “real” replication rates of psychology, presumably including social psychology.
The Camerer et al. (2016) article is titled “Evaluating replicability of laboratory experiments in economics.” Economics! Even if the success rate in that article were 75%, it would have no relevance for the majority of studies reported in a social psychology textbook. Perhaps telling students that replicability in economics is much better than in psychology would make some students switch to economics.
The Klein et al. (2014) article did report on the replicability of studies in psychology. However, it replicated only 13 studies, and these were not a representative sample, which makes it impossible to generalize the success rate to a population of studies like those in a social psychology textbook.
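The limits of a 13-study sample can be quantified. A minimal sketch in Python (standard library only) computes a 95% Wilson confidence interval for a hypothetical 10 successes out of 13 — an illustrative count chosen only because it falls in the 75-85 percent range the textbook quotes, not the actual Klein et al. tally:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - margin, center + margin

# Hypothetical illustration: 10 of 13 replications succeed (~77%,
# within the 75-85% range the textbook quotes).
lo, hi = wilson_ci(10, 13)
print(f"95% CI: {lo:.2f} to {hi:.2f}")  # roughly 0.50 to 0.92
```

Even a success rate near 77 percent in 13 studies is statistically compatible with a true rate of about 50 percent; the interval is over 40 percentage points wide, so the headline number cannot be generalized to the population of textbook findings.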
We all know the saying: there are lies, damned lies, and statistics. The 75-85% success rate in “good” replication studies is a damned lie with statistics. It misrepresents the extent of the replication crisis in social psychology. An analysis of a representative set of hundreds of original results leads to the conclusion that no more than 50% of exact replication studies would reproduce a significant result, even if each study could be replicated exactly (How replicable is psychological science). Telling students otherwise is misleading.
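The logic behind such an estimate can be illustrated with a small Monte Carlo sketch (the proportions and effect size below are assumptions for illustration, not the actual analysis): when the published literature is selected for significance, false positives and lucky results from underpowered studies get in, so even exact replications succeed at a rate well below 100%.

```python
import random

random.seed(1)

def significant(z):
    """Two-sided p < .05 criterion on a z-score."""
    return abs(z) > 1.96

# Assumed illustration: 60% of tested hypotheses are null, 40% are
# real effects studied with low power (true mean z-score of 1.0).
N = 200_000
published = 0
replicated = 0
for _ in range(N):
    mu = 0.0 if random.random() < 0.6 else 1.0
    original = random.gauss(mu, 1)
    if not significant(original):
        continue  # publication bias: only significant results appear
    published += 1
    # exact replication: same true effect, same sample size
    if significant(random.gauss(mu, 1)):
        replicated += 1

rate = replicated / published
print(f"replication rate among published results: {rate:.2f}")
```

Under these assumed parameters the expected replication rate lands well under 50%, even though every replication is exact; selection for significance alone is enough to produce widespread replication failure.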
The textbook authors do acknowledge that failed replication studies can sometimes reveal shoddy work by original researchers.
“In those cases, investigators who report failed attempts to replicate do a great service to everyone for setting the record straight.”
They also note that social psychologists are slowly changing research practices to reduce the number of significant results that are obtained with “shoddy practices” that do not replicate.
“Foremost among these changes has been an increase in the sample sizes generally used in research.”
One wonders why these changes are needed if success rates are already 75% or higher.
The discussion of the replication crisis ends with the reassurance that most of the results reported in the textbook are probably credible and that the evidence is presented objectively.
“In this textbook we have tried to be scrupulous about noting when the evidence about a given point is mixed.”
How credible is this claim when the authors misrepresent the OSC (2015) article as a collection of amateur studies that can be ignored and then cite a study of economics to claim that social psychology is replicable?
Moreover, the authors have a conflict of interest because they have a monetary incentive to present social psychology in the most positive light so that students take social psychology courses and buy social psychology textbooks.
A more rigorous audit of this and other social psychology textbooks by independent investigators is needed because we cannot trust social psychologists to be objective in the assessment of their field. After all, they are human.