A guide to articles about publication bias
This post provides a quick overview of claim articles in New Things Under the Sun related to publication bias in science. Click one of the following links to jump to an article overview, or simply scroll down. Click on the title of any given overview to jump to the associated claim article on New Things Under the Sun.
Why is publication bias worse in some disciplines than others?
Publication bias without editors? The case of preprint servers
Multiple-analyst studies ask many researchers or research teams to independently answer the same question with the same data
These studies find very large dispersion in teams’ conclusions: different groups can come up with very different answers
The dispersion is not well explained by differences in expertise or pre-existing beliefs
Instead, it seems to be the case that there are many decisions in the research process, and different but defensible research choices add up over time to deliver different answers
Still, signals do seem to add up, and there is evidence multi-analyst teams can converge in their beliefs after discussion. Synthesis of many studies can also help us arrive at true beliefs (or so I hope!).
Publication bias occurs when publication of a study is contingent on its results
If we only see studies that deliver “surprising” results, our impression of the results of studies will be skewed.
Four different approaches suggest this is a real problem:
Experiments show reviewers are more likely to recommend publication when shown research that obtains positive results
When we have a complete inventory of research projects and their results, we can check whether the results predict which projects get published.
Systematic replication efforts give us an estimate of what result distributions should look like, absent publication bias.
The distribution of results with precise standard errors can help us estimate what the distribution should be for results with less precise standard errors.
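As a minimal illustration of the underlying mechanism (a toy sketch of my own, not any of the specific methods used in the articles above), the following Python simulation runs many studies of a true effect of zero, but only “publishes” the statistically significant ones. The published record then badly overstates the typical effect size:

```python
import random
import statistics

random.seed(0)

def run_study(true_effect=0.0, n=50):
    """Simulate one study: draw n noisy observations and return the
    estimated effect and its standard error."""
    sample = [random.gauss(true_effect, 1.0) for _ in range(n)]
    est = statistics.mean(sample)
    se = statistics.stdev(sample) / n ** 0.5
    return est, se

# Run many studies of a true effect of zero, but "publish" only those
# whose estimate is statistically significant (|z| > 1.96).
all_estimates, published = [], []
for _ in range(5000):
    est, se = run_study()
    all_estimates.append(est)
    if abs(est / se) > 1.96:
        published.append(est)

print(round(statistics.mean(all_estimates), 3))         # close to the true effect of zero
print(round(statistics.mean(map(abs, published)), 3))   # much larger in magnitude
```

Roughly 5% of studies clear the significance bar by chance alone, and those are exactly the ones with the largest estimates, so a reader who sees only the published studies comes away with a skewed impression.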
We have some evidence that publication bias is worse in some fields than in others
Publication bias might differ across fields for a few reasons:
Surprise theory: easier to publish surprising work, because reviewers prioritize evidence that can shift views. Maybe fields vary in what kind of evidence is surprising?
Skepticism theory: easier to publish positive results than null results, because researchers believe null results are more likely to be methodologically unsound. Maybe fields vary in the reliability of empirical methods?
Experimental studies mostly (not entirely) seem to suggest the skepticism theory is dominant.
In economics too, subfields where theory predicts a narrower range of results (and hence more results are “surprising”) show more publication bias
One proposal for addressing bias in peer review is to create a class of journals dedicated to articles whose unsurprising results would normally make them challenging to publish.
This article suggests preprint servers may perform a similar function, and looks to see if the articles on preprint servers exhibit less publication bias.
There is some evidence of less publication bias for papers on preprint servers, compared to those published in journals. However, the effect is small.
This might be because even articles that end up on a preprint server forever were initially targeted at journals, where the authors might have inferred that some kinds of results are difficult to publish.
We also have some evidence that some non-significant results are never written up at all, and that statistical evidence of p-hacking is roughly the same on preprints and in journals.
While preprints don’t seem too different from published papers, there are some significant differences in apparent rates of p-hacking across methodologies.
If science rewards the publication of many positive results, simulations show we should expect the proliferation of low effort research using methods prone to false positives.
Evidence from both structural biology and “twin” discoveries in science and industry suggests scientists working outside academic incentives may face less pressure to rush results.
It’s harder to precisely measure the impact of pressure-to-publish at broad levels, but attempts to do so find only mixed evidence that publishing pressure results in a higher volume of low-quality work.
Bottom line: I think pressure to publish has negative effects on the quality of science, but the effects are not very large.
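The selection dynamic behind that simulation claim can be sketched in a few lines of Python. This is a hedged toy model of my own devising (the “rigor” variable, false-positive rates, and payoff rule are all illustrative assumptions, not the model from the article): labs that use false-positive-prone methods publish more, and labs that publish more get imitated, so average rigor erodes over time.

```python
import random

random.seed(1)

# Toy model: each lab has a "rigor" level in [0, 1]. Lower rigor means
# methods that yield positive (publishable) results more often, whether
# or not the effect is real. Labs that publish more get imitated.
def step(labs, n_studies=20):
    payoffs = []
    for rigor in labs:
        # Sloppier methods produce more false positives (rates are illustrative)
        false_pos_rate = 0.05 + 0.4 * (1 - rigor)
        positives = sum(random.random() < false_pos_rate for _ in range(n_studies))
        payoffs.append(positives)
    # Next generation: copy labs in proportion to their publication count
    return random.choices(labs, weights=[p + 1e-9 for p in payoffs], k=len(labs))

labs = [random.random() for _ in range(200)]
start_rigor = sum(labs) / len(labs)
for _ in range(50):
    labs = step(labs)
end_rigor = sum(labs) / len(labs)

print(round(start_rigor, 2), round(end_rigor, 2))  # average rigor declines over generations
```

Note that nothing here requires any individual lab to behave cynically: the drift toward low-effort methods falls out of the reward structure alone, which is the point the simulations in the literature make.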