Ep 245: The Results of the Reproducibility Project: Incentives Out of Whack

Have you heard that researchers tried to replicate about 100 psychology studies and only about a third of the replications confirmed the original findings? Why did this happen? One reason has to do with incentives that are out of whack: the “real world” of scientific research is far from the lone researcher searching for the truth. The other reason has to do with, well, you and the internet. You see, you like to click on things that are surprising or weird (I admit I do too), and that behavior encourages bad research. Let’s find out how these things are all connected in this episode of The Psych Files.


Incentives Out of Whack

  • Researchers (usually college professors) live under “publish or perish”: they need to run short-term studies that produce statistically significant results, or they risk losing their jobs (see the simulation sketch after this list).
  • Journals like to publish interesting articles to maintain subscribers and get media attention
  • Websites “dumb down” and exaggerate research findings in order to draw you in and get you to click on advertisements
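
To make that first incentive concrete, here is a minimal simulation sketch in Python. The numbers (effect size, sample size, and the share of hypotheses that are actually true) are invented for illustration, not taken from the Reproducibility Project; the point is what happens when journals accept only statistically significant results and someone later tries to replicate whatever got published:

```python
# Toy model: "publish only if p < .05", then try to replicate what got published.
# All numbers below are illustrative assumptions, not estimates from the project.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_studies = 5000      # original studies attempted
n_per_group = 30      # modest sample sizes, common in this literature
true_effect = 0.5     # standardized effect when a real effect exists
p_true = 0.3          # fraction of tested hypotheses that are actually true


def run_study(effect):
    """Simulate a two-group experiment and return the t-test p-value."""
    control = rng.normal(0.0, 1.0, n_per_group)
    treatment = rng.normal(effect, 1.0, n_per_group)
    return stats.ttest_ind(treatment, control).pvalue


published = []  # records whether each "published" finding was a real effect
for _ in range(n_studies):
    is_true = rng.random() < p_true
    effect = true_effect if is_true else 0.0
    if run_study(effect) < 0.05:   # the journal accepts only significant results
        published.append(is_true)

# Independent replication of every published finding, same design and size.
replicated = [run_study(true_effect if is_true else 0.0) < 0.05
              for is_true in published]

print(f"published: {len(published)} of {n_studies} studies")
print(f"replication rate: {np.mean(replicated):.0%}")
```

With these invented inputs the replication rate lands well below 100 percent, and changing the assumed effect size, sample size, or base rate of true hypotheses moves it around quite a bit. That is exactly the point: the selection step alone, no fraud required, is enough to produce a published literature that often fails to replicate.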

References

Previous Episodes Related to This Topic

“Scientists aim to contribute reliable knowledge, but also need to produce results that help them keep their job as a researcher,” said Brian Nosek, who led the project. “To thrive in science, researchers need to earn publications, and some kinds of results are easier to publish than others, particularly ones that are novel and show unexpected or exciting new directions.”

As a consequence, according to Nosek and his co-authors, many scientists pursue innovative research in the interest of their careers, even at the cost of reproducibility of the findings. The authors say that research with new, surprising findings is more likely to be published than research examining when, why or how existing findings can be reproduced.

Of course psychology is a science. But psychologists deal with a fantastically complex system, and it is extremely difficult to design perfectly controlled experiments. The same is true of other complex systems, like ecology. Does that mean ecology is not a science because ecological experiments are harder to replicate than experiments in (for example) molecular genetics? Of course not. “Noise” in the system, generating a range of results, increases with the complexity of the system.
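
One way to put a rough number on that “noise” argument: the probability that an identical replication comes out statistically significant (its power) falls quickly as the variability around a fixed effect grows. Here is a small sketch, again in Python with invented numbers:

```python
# How growing "noise" around a fixed raw effect erodes the chance that an
# identical replication reaches significance. Illustrative numbers only.
import numpy as np
from scipy import stats


def replication_power(raw_effect, noise_sd, n_per_group, alpha=0.05):
    """Power of a two-sided, two-sample t-test for a fixed raw effect."""
    df = 2 * n_per_group - 2
    ncp = raw_effect / (noise_sd * np.sqrt(2.0 / n_per_group))  # noncentrality
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    return stats.nct.sf(t_crit, df, ncp) + stats.nct.cdf(-t_crit, df, ncp)


for sd in (1.0, 2.0, 4.0):
    print(f"noise SD = {sd}: chance of a significant replication = "
          f"{replication_power(0.5, sd, 30):.0%}")
```

Under these assumed inputs, doubling the noise cuts the odds of a significant replication roughly from one in two to one in six; the exact figures matter less than the direction.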

…it is discouraging to see the “NYT Picks” with so many comments that disparage social sciences, with some claiming that social science is nothing more than speculation. I have an engineering background and am familiar with the difficulties of measuring physical and chemical phenomena, which is difficult enough, although enormously more straightforward than measuring how people and even lab animals respond. This is particularly true when trying to understand how their brains work (or don’t). The complexities involved with how people and animals behave are amazingly difficult to reduce to a handful of metrics that respond in a consistent manner to a given set of conditions. It’s not that social sciences are worthless, but more that they are dealing with systems that we do not adequately understand, with limited (although expanding) means of measurement. There is certainly much to be done to improve the field. But promoting the perception that social sciences are merely uninformed speculation contributes to the broader misunderstanding of scientific endeavors. In the absence of efforts to make progress in these fields, even if those efforts are far from perfect, we are left with no more than legend, myth, and superstition upon which to base our “understanding” of how people and societies behave. -98_6 California

…studies published in “top journals” are the most likely to be inaccurate. This is initially surprising, but it is to be expected as the “top journals” select studies that are new and sexy rather than reliable. A series published in The Lancet in 2014 has shown that 85 per cent of medical research is wasted because of poor methods, bias and poor quality control. A study in Nature showed that more than 85 per cent of preclinical studies could not be replicated… – The peer review drugs don’t work

Michael

