4.2 Reasons for Non-Replication

The case of Brian Wansink is just one example of the replication crisis currently facing psychology. There are several other reasons why studies fail to replicate, which we discuss below.

Questionable Research Practices

Some have suggested that the low replicability of many studies is evidence of the widespread use of questionable research practices by psychological researchers. These may include:

  1. The selective deletion of outliers to influence (usually by artificially inflating) statistical relationships among the measured variables.
  2. The selective reporting of results, that is, cherry-picking only those findings that support one’s hypotheses.
  3. Mining the data without an a priori hypothesis, only to claim that a statistically significant result had been originally predicted, a practice referred to as “HARKing” or hypothesising after the results are known (Kerr, 1998).[1]
  4. A practice colloquially known as “p-hacking”, in which a researcher performs inferential statistical calculations to see whether a result is significant before deciding whether to recruit additional participants and collect more data (Head et al., 2015).[2] As you will learn later on, the probability of finding a statistically significant result is influenced by the number of participants in the study; the simulation sketch after this list shows how this kind of “peeking” can inflate the rate of false-positive findings.
  5. Outright fabrication of data (as was the case with Brian Wansink’s studies), although this is better described as fraud than as a “research practice”.
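
To see why the “peeking” described in point 4 is a problem, consider the following simulation. This is a minimal sketch, not taken from any of the studies cited here: it assumes two groups drawn from the same population (so there is no true effect to find), an interim look after 20 participants per group, and a final sample of 40 per group. All of these numbers are arbitrary, illustrative choices.

```python
# A minimal simulation of "optional stopping", one form of p-hacking.
# Both groups are drawn from the same population, so every
# "significant" result below is a false positive.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
n_sims = 10_000
false_pos_fixed = 0   # honest: test once at n = 40 per group
false_pos_peek = 0    # p-hacked: peek at n = 20, continue if needed

for _ in range(n_sims):
    a = rng.normal(size=40)
    b = rng.normal(size=40)

    # Honest approach: a single test at the planned sample size.
    if ttest_ind(a, b).pvalue < 0.05:
        false_pos_fixed += 1

    # Peeking: test after the first 20 participants and stop if
    # significant; otherwise collect 20 more and test again.
    if ttest_ind(a[:20], b[:20]).pvalue < 0.05:
        false_pos_peek += 1
    elif ttest_ind(a, b).pvalue < 0.05:
        false_pos_peek += 1

print(f"False-positive rate, fixed n:  {false_pos_fixed / n_sims:.3f}")
print(f"False-positive rate, peeking:  {false_pos_peek / n_sims:.3f}")
```

Because the peeking researcher gets two chances to cross the .05 threshold, the long-run false-positive rate climbs noticeably above the nominal 5%, even though there is never a real effect to detect.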

Small Sample Sizes

Another reason for non-replication is that, in studies with small sample sizes, statistically significant results are often the product of chance. For example, if you ask five people whether they believe that aliens from other planets visit Earth and regularly abduct humans, you may get three people who agree with this notion simply by chance, and their answers may not be at all representative of the larger population. On the other hand, if you survey one thousand people, their responses are far more likely to reflect the actual attitudes of society. Now consider this scenario in the context of replication: if you try to replicate the first study, the one in which you interviewed only five people, there is only a small chance that you will randomly draw five new people with the same (or similar) attitudes. You are far more likely to replicate the findings using another large sample, because a large sample is more likely to have produced accurate findings in the first place. The simulation sketch below illustrates this difference.
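
The following short simulation is one way to make this concrete. It is a sketch under stated assumptions: the figure of 30% for the “true” proportion of believers in the population is hypothetical, as are the sample sizes.

```python
# A minimal sketch of sampling variability. We assume, hypothetically,
# that 30% of the population believes in alien abductions, then draw
# many samples of 5 people and of 1,000 people.
import numpy as np

rng = np.random.default_rng(0)
true_rate = 0.30   # hypothetical population proportion of believers
n_sims = 10_000

for n in (5, 1000):
    # Number of believers in each simulated sample of size n,
    # converted to a proportion.
    props = rng.binomial(n, true_rate, size=n_sims) / n
    # How often does a sample wrongly suggest a majority believes?
    majority = (props > 0.5).mean()
    print(f"n = {n:4d}: sample proportions span "
          f"{props.min():.2f} to {props.max():.2f}; "
          f"a 'majority believes' in {majority:.1%} of samples")
```

With five respondents, roughly one sample in six will, by chance alone, suggest that a majority of people believe in alien abductions even though only 30% do; with one thousand respondents, that essentially never happens.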

Results May Be True for Some People, in Some Circumstances

Another reason for non-replication is that, while the findings in an original study may be true, they may only be true for some people in some circumstances, and not necessarily universal or enduring. Imagine that a survey conducted in the 1950s found that a strong majority of respondents trusted government officials. Now imagine the same survey administered today, with vastly different results. This example of non-replication does not invalidate the original results. Rather, it suggests that attitudes have shifted over time.

Systemic Issues

Others have interpreted the low replicability of published findings as evidence of systemic problems with conventional scholarship in psychology, including a publication bias that favours the discovery and publication of counter-intuitive but statistically significant findings over the duller (but incredibly vital) process of replicating previous findings to test their robustness (Aschwanden, 2015; Pashler & Harris, 2012).[3][4]


Chapter attribution

This chapter contains material taken and adapted from Research methods in psychology by Rajiv S. Jhangiani, I-Chant A. Chiang, Carrie Cuttler and Dana C. Leighton, used under a CC BY-NC-SA 4.0 licence.


  1. Kerr, N. L. (1998). HARKing: Hypothesizing after the results are known. Personality and Social Psychology Review, 2(3), 196-217. https://doi.org/10.1207/s15327957pspr0203_4
  2. Head, M. L., Holman, L., Lanfear, R., Kahn, A. T., & Jennions, M. D. (2015). The extent and consequences of p-hacking in science. PLoS Biology, 13(3), e1002106. https://doi.org/10.1371/journal.pbio.1002106
  3. Aschwanden, C. (2015, August 19). Science isn't broken: It's just a hell of a lot harder than we give it credit for. FiveThirtyEight. http://fivethirtyeight.com/features/science-isnt-broken/
  4. Pashler, H., & Harris, C. R. (2012). Is the replicability crisis overblown? Three arguments examined. Perspectives on Psychological Science, 7(6), 531-536. https://doi.org/10.1177/1745691612463401

License


4.2 Reasons for Non-Replication Copyright © 2023 by Klaire Somoray is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.