
Publication Biases in Science

A guide to articles about publication bias

Published on Feb 13, 2023

This post provides a quick overview of claim articles in New Things Under the Sun related to publication bias in science.

One Question, Many Answers

  • Multiple-analyst studies ask many researchers or research teams to independently answer the same question with the same data

  • These studies find very large dispersion in the teams’ conclusions: different groups analyzing the same data can arrive at very different answers

  • The dispersion is not well explained by differences in expertise or pre-existing beliefs

  • Instead, it seems there are many decision points in the research process, and different but defensible analytic choices compound to produce different answers (a toy simulation of this compounding follows this list)

  • But signals do seem to add up, and there is evidence multi-analyst teams can converge in their beliefs after discussion. Synthesis of many studies can also help us arrive at true beliefs (or so I hope!).

  • (go to article)
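
To make that compounding intuition concrete, here is a minimal simulation sketch. It is not from the article, and every parameter is made up: each team faces a series of equally defensible analytic forks, each fork nudges the estimate slightly, and the nudges accumulate into a wide spread of final answers, some with the wrong sign.

```python
import random

random.seed(0)

TRUE_EFFECT = 0.10   # the answer every team is chasing (hypothetical)
N_TEAMS = 50         # number of independent analyst teams
N_CHOICES = 12       # defensible analytic forks each team must navigate

# Each fork (covariate set, outlier rule, estimator, ...) shifts the
# estimate by a fixed amount in one direction or the other; both
# branches are assumed equally defensible.
fork_effects = [random.gauss(0, 0.05) for _ in range(N_CHOICES)]

estimates = []
for _ in range(N_TEAMS):
    estimate = TRUE_EFFECT
    for shift in fork_effects:
        # Each team independently picks one branch of each fork.
        estimate += shift if random.random() < 0.5 else -shift
    estimates.append(estimate)

estimates.sort()
print(f"min estimate:    {estimates[0]:+.3f}")
print(f"median estimate: {estimates[N_TEAMS // 2]:+.3f}")
print(f"max estimate:    {estimates[-1]:+.3f}")
print(f"share with wrong sign: {sum(e < 0 for e in estimates) / N_TEAMS:.0%}")
```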

Publication Bias is Real

  • Publication bias occurs when publication of a study is contingent on its results

  • If we only see studies that deliver “surprising” results, our impression of the results of studies will be skewed.

  • Four different approaches suggest this is a real problem:

    • Experiments show reviewers are more likely to recommend publication when shown research that obtains positive results

    • Complete inventories of research projects and their results let us check whether results predict what gets published.

    • Systematic replication efforts give us an estimate of what result distributions should look like, absent publication bias.

    • The distribution of precisely estimated results can help us infer what the distribution of less precisely estimated results should look like, absent selection (a toy simulation of this logic follows this list).

  • (go to article)
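
To illustrate the logic of that last approach, here is a toy simulation of my own, with entirely hypothetical numbers: studies of the same true effect are run at two levels of precision, and significant results are always published while insignificant ones appear only sometimes. The precise studies come out nearly unbiased, the published imprecise studies are inflated, and that gap is the footprint of publication bias.

```python
import random
import statistics

random.seed(1)

TRUE_EFFECT = 0.20    # common true effect across studies (hypothetical)
PUB_PROB_NULL = 0.10  # chance an insignificant result still gets published

def run_study(se):
    """Simulate one unbiased study with standard error `se`."""
    estimate = random.gauss(TRUE_EFFECT, se)
    significant = abs(estimate / se) > 1.96
    # Selective publication: significant results always appear;
    # insignificant ones only occasionally.
    published = significant or random.random() < PUB_PROB_NULL
    return estimate, published

for se in (0.05, 0.30):
    results = [est for est, pub in (run_study(se) for _ in range(20_000)) if pub]
    print(f"SE = {se:.2f}: mean published estimate = "
          f"{statistics.mean(results):+.3f} (truth is {TRUE_EFFECT:+.2f})")
```

Real methods in this vein compare full distributions rather than just means, but the core idea is the same: use the precise results as a benchmark for what the imprecise ones should look like.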

Why is publication bias worse in some disciplines than others?

  • We have some evidence that publication bias is worse in some fields than in others

  • Publication bias might differ across fields for a few reasons:

    • Surprise theory: easier to publish surprising work, because reviewers prioritize evidence that can shift views. Maybe fields vary in what kind of evidence is surprising?

    • Skepticism theory: easier to publish positive results than null results, because researchers believe null results are more likely to be methodologically unsound. Maybe fields vary in the reliability of empirical methods?

  • Experimental studies mostly (not entirely) seem to suggest the skepticism theory is dominant.

  • In economics too, subfields where theory predicts a narrower range of results (and hence more results are “surprising”) show more publication bias

  • (go to article)

Publication bias without editors? The case of preprint servers

  • One proposal for addressing publication bias is to create a class of journals that publish articles whose unsurprising results would normally make them challenging to publish.

  • This article suggests preprint servers may perform a similar function, and checks whether articles on preprint servers exhibit less publication bias.

  • There is some evidence of less publication bias among papers on preprint servers than among those published in journals, but the effect is small.

  • This might be because even articles that remain on a preprint server forever were initially targeted at journals, where authors might have inferred that some kinds of results are difficult to publish.

  • We also have some evidence that many non-significant results are never written up at all, and that statistical evidence of p-hacking is roughly the same on preprints and in journals (a sketch of a caliper test, one common way to detect p-hacking, follows this list).

  • While preprint servers don’t seem too different from published papers, there are some significant differences in apparent rates of p-hacking across methodologies.

  • (go to article)
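
A caliper test compares how many reported test statistics fall just below versus just above the significance threshold: absent p-hacking, the two counts should be roughly equal, while a surplus just above the threshold suggests results were nudged over the line. Here is a minimal sketch using fabricated z-values purely for illustration.

```python
import math

# Fabricated z-statistics standing in for a corpus of reported results.
reported_z = [1.82, 1.97, 2.01, 2.03, 1.99, 2.10, 1.91, 2.05, 1.98, 2.30,
              2.02, 1.88, 2.07, 1.96, 2.12, 2.00, 1.94, 2.04, 2.09, 1.85]

CALIPER = 0.20  # window width on each side of the z = 1.96 threshold
below = sum(1.96 - CALIPER <= z < 1.96 for z in reported_z)
above = sum(1.96 <= z < 1.96 + CALIPER for z in reported_z)

# Under the no-p-hacking null, a result landing in the window is roughly
# equally likely to fall on either side; a surplus just above the
# threshold is the red flag. A one-sided binomial test formalizes this.
n = below + above
p_value = sum(math.comb(n, k) for k in range(above, n + 1)) / 2 ** n
print(f"just below 1.96: {below}, just above: {above}, "
      f"one-sided binomial p = {p_value:.3f}")
```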

Publish-or-perish and the quality of science

  • If science rewards publishing many positive results, simulations show we should expect the proliferation of low-effort research using methods prone to false positives (a toy version of such a simulation follows this list).

  • Evidence from structural biology and from “twin” discoveries in science and industry suggests scientists working outside academic incentives may face less pressure to rush results.

  • It’s harder to precisely measure the impact of pressure to publish at broader levels, but attempts to do so find only mixed evidence that publishing pressure results in a higher volume of low-quality work.

  • Bottom line: I think pressure to publish has negative effects on the quality of science, but the effects are not very large.

  • (go to article)
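
For a sense of how such simulations work, here is a stripped-down toy model in the spirit of “natural selection of bad science” arguments (all parameters are hypothetical): labs using less rigorous methods generate more publishable positive results, the most-published labs are copied into the next generation, and average rigor erodes even though nobody intends it.

```python
import random

random.seed(2)

N_LABS = 100
GENERATIONS = 50
BASE_TRUE_RATE = 0.1  # share of tested hypotheses that are actually true

def publications(rigor):
    """Expected positive (publishable) results for a lab of given rigor.

    High rigor detects true effects but rarely yields false positives;
    low rigor floods the record with false positives.
    """
    power = 0.8
    false_pos_rate = 0.05 + 0.5 * (1 - rigor)
    return BASE_TRUE_RATE * power + (1 - BASE_TRUE_RATE) * false_pos_rate

labs = [random.random() for _ in range(N_LABS)]  # rigor levels in [0, 1]

for _ in range(GENERATIONS):
    # Labs spawn successors in proportion to their publication counts,
    # with a little mutation in rigor, clamped to [0, 1].
    weights = [publications(r) for r in labs]
    labs = [min(1.0, max(0.0, random.choices(labs, weights)[0]
                         + random.gauss(0, 0.02)))
            for _ in range(N_LABS)]

print(f"mean rigor after {GENERATIONS} generations: "
      f"{sum(labs) / N_LABS:.2f} (started near 0.50)")
```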
