4.1 How We Think Science Should Work

It is difficult to get a man to understand something, when his salary depends on his not understanding it.
– Upton Sinclair

Proponents of the scientific method hold that truth can be discovered objectively, separate from the subjective biases of the person seeking it. The scientific method — a systematic approach of forming a hypothesis, conducting experiments, and analysing and refining results — was introduced as a way to reach this objective truth.

How We Think Science Should Work

Let’s say that we are interested in a research project on how children choose what to eat. This is a question that was asked in a study by the well-known eating researcher Brian Wansink and his colleagues in 2012.[1] The standard (and, as we will see, somewhat naive) view goes something like this:

  • You start with a hypothesis
    • Branding with popular characters should cause children to choose “healthy” food more often
  • You collect some data
    • Offer children the choice between a cookie and an apple with either an Elmo-branded sticker or a control sticker, and record what they choose
  • You do statistics to test the null hypothesis — the hypothesis that there is no effect (we will learn more about this later)
    • “The preplanned comparison shows Elmo-branded apples were associated with an increase in a child’s selection of an apple over a cookie, from 20.7% to 33.8%”.
  • You make a conclusion based on the data
    • “This study suggests that the use of branding or appealing branded characters may benefit healthier foods more than they benefit indulgent, more highly processed foods. Just as attractive names have been shown to increase the selection of healthier foods in school lunchrooms, brands and cartoon characters could do the same with young children.”
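The "do statistics" step above can be sketched as a two-proportion z-test, one common way to test a null hypothesis of no difference between two groups. The counts below are hypothetical, chosen only so the proportions match the reported 20.7% and 33.8%; the study's actual sample sizes and analysis differ.

```python
import math

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided z-test for the difference between two proportions.

    x1/n1 and x2/n2 are the observed counts and group sizes.
    Returns the z statistic and the two-sided p-value.
    """
    p1, p2 = x1 / n1, x2 / n2
    # Under the null hypothesis the two groups share one proportion,
    # so we pool the counts to estimate it.
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: 27/80 ≈ 33.8% (Elmo sticker) vs 17/82 ≈ 20.7% (control).
z, p = two_proportion_z_test(27, 80, 17, 82)
print(f"z = {z:.2f}, p = {p:.3f}")
```

If the resulting p-value falls below a pre-chosen threshold (conventionally 0.05), the researcher rejects the null hypothesis of no effect — a convention that matters for the story that follows.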

However, it has since been recognised that the scientific method's apparent objectivity is something of an illusion: it can lend a false sense of confidence in one's ability to uncover the truth, and the results of scientific research are far from certain. This was exemplified by the replication crisis[2], which revealed the extent to which subjective biases and self-interest can influence scientific findings.

How Science (Sometimes) Actually Works

Let’s look at what happened with Brian Wansink’s studies:

Examples

Brian Wansink is well known for his books on “Mindless Eating”, and his fee for corporate speaking engagements is in the tens of thousands of dollars. In 2017, a group of researchers began to scrutinise some of his published research, starting with a set of papers about how much pizza people ate at a buffet. When Wansink refused their request to share the data from these studies, they combed through his published papers and found a large number of inconsistencies and statistical problems. The publicity around this analysis led a number of others to dig into Wansink’s past, including obtaining emails between Wansink and his collaborators. As reported by Stephanie Lee at Buzzfeed, these emails showed just how far Wansink’s actual research practices were from the naive model:

…back in September 2008, when Payne was looking over the data soon after it had been collected, he found no strong apples-and-Elmo link — at least not yet. … “I have attached some initial results of the kid study to this message for your report,” Payne wrote to his collaborators. “Do not despair. It looks like stickers on fruit may work (with a bit more wizardry).” … Wansink also acknowledged the paper was weak as he was preparing to submit it to journals. The p-value was 0.06, just shy of the gold standard cutoff of 0.05. It was a “sticking point,” as he put it in a Jan. 7, 2012, email. … “It seems to me it should be lower,” he wrote, attaching a draft. “Do you want to take a look at it and see what you think. If you can get the data, and it needs some tweeking, it would be good to get that one value below .05.” … Later in 2012, the study appeared in the prestigious JAMA Pediatrics, the 0.06 p-value intact. But in September 2017, it was retracted and replaced with a version that listed a p-value of 0.02. And a month later, it was retracted yet again for an entirely different reason: Wansink admitted that the experiment had not been done on 8- to 11-year-olds, as he’d originally claimed, but on preschoolers (Lee, 2017).[3]

This kind of behaviour finally caught up with Wansink; fifteen of his research studies have been retracted and in 2018 he resigned from his faculty position at Cornell University.


Chapter attribution

This chapter contains material taken and adapted from Statistical thinking for the 21st Century by Russell A. Poldrack, used under a CC BY-NC 4.0 licence.


  1. Wansink, B., Just, D. R., & Payne, C. R. (2012). Can branding improve school lunches? Archives of Pediatrics & Adolescent Medicine, 166(10), 967–968. https://doi.org/10.1001/archpediatrics.2012.999
  2. https://en.wikipedia.org/wiki/Replication_crisis
  3. Lee, S. (2017, September 25). How A Star Cornell Food Scientist Wowed Prestigious Journals With His "Artful Pizzazz". BuzzFeed News. https://www.buzzfeednews.com/article/stephaniemlee/brian-wansink-cornell-p-hacking

License


A Contemporary Approach to Research and Statistics in Psychology Copyright © 2023 by Klaire Somoray is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, except where otherwise noted.