
Science Responds Quickly to Retraction

Some evidence that science does self-correct

Published on Jun 08, 2021

What happens when research doesn’t pan out? Do scientists keep barreling down the same dead-end street or do they change directions?

One thing that makes this a tough question to answer is that even identifying when research “doesn’t pan out” can be tricky. But there is one domain where it’s pretty unambiguous: retraction. To assess how science responds to an information shock about the quality of prior research, several papers take the same approach: match papers “tainted” by retraction to control papers, then compare the citations the two groups receive.

For example, Furman, Jensen, and Murray (2012) compare retracted biomedical papers to (un-retracted) controls published immediately before and after them in the same journal. The retraction penalty relative to neighbor articles is swift and severe.

From Furman, Jensen, and Murray (2012)

Within a year, retracted articles receive only about 45% as many citations as their controls, a share that steadily falls to roughly 20%.
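To make the design concrete, here’s a minimal sketch of the neighbor-control idea in Python. The data, column names, and numbers are all made up for illustration; this is not the authors’ code or dataset, just the logic of pairing each retracted paper with the un-retracted papers published immediately before and after it in the same journal, then comparing average yearly citations.

```python
import pandas as pd

# Hypothetical bibliometric data; all values are invented for illustration.
papers = pd.DataFrame({
    "paper_id":  [101, 102, 103, 201, 202, 203],
    "journal":   ["J1", "J1", "J1", "J2", "J2", "J2"],
    "pub_order": [5, 6, 7, 11, 12, 13],   # position in the journal's publication sequence
    "retracted": [False, True, False, False, True, False],
})

# Yearly citation counts, measured in years since publication
cites = pd.DataFrame({
    "paper_id": [101, 102, 103, 201, 202, 203] * 2,
    "year":     [1] * 6 + [3] * 6,
    "n_cites":  [10, 9, 11, 6, 7, 5, 12, 4, 13, 8, 2, 7],
})

# Controls: the un-retracted papers published immediately before and after
# each retracted paper in the same journal
controls = []
for _, row in papers[papers.retracted].iterrows():
    same_journal = papers[(papers.journal == row.journal) & (~papers.retracted)]
    neighbors = same_journal[same_journal.pub_order.isin([row.pub_order - 1, row.pub_order + 1])]
    controls.append(neighbors.assign(matched_to=row.paper_id))
control_ids = set(pd.concat(controls).paper_id)

# Average citations of retracted papers relative to their neighbor controls, by year
merged = cites.merge(papers[["paper_id", "retracted"]], on="paper_id")
merged = merged[merged.retracted | merged.paper_id.isin(control_ids)]
by_year = merged.groupby(["year", "retracted"]).n_cites.mean().unstack()
print(by_year[True] / by_year[False])  # ratio of retracted-to-control citation rates
```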

Note, though, that retracted articles still receive some citations. To see what’s going on, Furman, Jensen, and Murray delve into the post-retraction citations received by 20 retracted papers. The majority of these citations either come from authors who seem aware the paper was retracted, or do not cite it to build on its findings (citing it, for example, merely to argue the topic is interesting). So it looks like scientists do shun retracted work.1

Lu, Jin, Uzzi, and Jones (2013) verify these results and go further. They match retracted papers in the Web of Science to papers in the same field with similar pre-retraction citation trajectories. They also document a sharp citation penalty for retracted articles relative to controls:

From Lu, Jin, Uzzi, and Jones (2013)

But Lu, Jin, Uzzi, and Jones also look at the impact of retraction on an author’s work published prior to the retraction event and not itself retracted. They use the same approach to identify control articles with similar citation trajectories prior to the retraction event. There’s no detectable effect on an author’s other work if the author was the one who reported the problem that led to retraction (~22% of retractions). It seems scientists give people the benefit of the doubt in that case.

But if someone else discovered and reported the problem, there’s a 10% citation penalty for the author’s un-retracted work after a few years. Retraction breeds suspicion.

From Lu, Jin, Uzzi, and Jones (2013)
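Here’s a rough sketch of the matching logic, again with invented numbers rather than anything from the paper: pair each affected paper with the same-field control whose pre-retraction citation trajectory is closest (below, closeness is a simple Euclidean distance between yearly citation vectors, just one plausible choice), then compare citations after the retraction event.

```python
import numpy as np

# Hypothetical citation counts; rows are papers, columns are the 3 years
# before the retraction event.
treated_pre  = np.array([[4, 6, 9],
                         [2, 3, 5]])
controls_pre = np.array([[5, 6, 8],
                         [1, 2, 2],
                         [3, 3, 6]])

# Citations accumulated after the event (e.g., summed over the following years)
treated_post  = np.array([14, 7])
controls_post = np.array([22, 4, 13])

# Match each treated paper to the control with the most similar pre-event trajectory
dists = np.linalg.norm(treated_pre[:, None, :] - controls_pre[None, :, :], axis=2)
match_idx = dists.argmin(axis=1)

# Post-event citations relative to the matched control; negative values mean
# the treated paper is now cited less than its look-alike control
penalty = treated_post / controls_post[match_idx] - 1.0
print(match_idx, penalty)
```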

The same team returned to this question in 2019, this time focusing on how “blame” is allocated among authors when a retracted paper has multiple coauthors. Jin, Jones, Lu, and Uzzi (2019) look at the citation penalty suffered by different members of a team tainted by retraction. Again, they’re looking at the impact of retraction on authors’ un-retracted work.

From Jin, Jones, Lu, and Uzzi (2019)

It looks like the author with less reputation gets the blame for retraction events. Jin, Jones, Lu, and Uzzi measure the “eminence” of authors variously by number of publications, citations, and h-index (all computed for the year prior to retraction). In the figure above, authors in the top 10% of the eminence measure don’t really see any citation impact on their other work, while those in the bottom 90% experience significantly fewer citations to their other work.
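For concreteness, here’s a tiny sketch of those three eminence measures computed from one author’s per-paper citation counts. The numbers are invented and the code only illustrates the definitions; it is not the paper’s implementation.

```python
# Hypothetical per-paper citation counts for one author, as of the year before retraction
author_cites = [42, 18, 11, 9, 7, 3, 1, 0]

def h_index(citations):
    """Largest h such that the author has h papers with at least h citations each."""
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

n_pubs  = len(author_cites)   # publication count
n_cites = sum(author_cites)   # total citations
print(n_pubs, n_cites, h_index(author_cites))  # -> 8 91 5
```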

What about work merely in the same field as retracted articles?

Azoulay, Furman, Krieger, and Murray (2015) identify PubMed articles that are similar to retracted ones based on the overlap of MeSH keywords, but which share no common coauthors. These are papers on topics similar to the retracted papers, but for which there is no other (observable) reason to be suspicious of them. They compare the citations of these articles to controls published immediately before and after them in the same journal.

They find that papers similar to the retracted article also suffer significant citation penalties! The penalty is strongest when the retraction calls into question the validity of the retracted article’s findings (not, for example, when the paper is basically right but was published without permission from the data vendor).

From Azoulay, Furman, Krieger, and Murray (2015)
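Here’s a simple sketch of that idea. A plain Jaccard overlap of MeSH keyword sets stands in for whatever similarity measure the authors actually use, and the papers below are invented: an article counts as related to a retracted paper when its keywords overlap heavily but the author lists are disjoint.

```python
# Invented papers: MeSH keyword sets and author lists
retracted_paper = {
    "mesh":    {"Neoplasms", "Apoptosis", "Signal Transduction", "Mice"},
    "authors": {"A. Smith", "B. Jones"},
}

candidates = {
    "paper_X": {"mesh": {"Neoplasms", "Apoptosis", "Signal Transduction", "Humans"},
                "authors": {"C. Lee"}},
    "paper_Y": {"mesh": {"Neoplasms", "Apoptosis", "Signal Transduction", "Mice"},
                "authors": {"B. Jones", "D. Kim"}},   # shares a coauthor, so excluded
    "paper_Z": {"mesh": {"Diabetes Mellitus", "Insulin"},
                "authors": {"E. Park"}},
}

def mesh_overlap(a, b):
    """Jaccard overlap between two MeSH keyword sets."""
    return len(a & b) / len(a | b)

related = {
    name: round(mesh_overlap(retracted_paper["mesh"], info["mesh"]), 2)
    for name, info in candidates.items()
    if not (retracted_paper["authors"] & info["authors"])   # no common coauthors
    and mesh_overlap(retracted_paper["mesh"], info["mesh"]) >= 0.5
}
print(related)  # -> {'paper_X': 0.6}
```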

So scientists are pretty responsive to news that research is flawed. They rapidly stop citing retracted work and they look more skeptically at work by the same people, especially when they have less reason to trust the author (either because they didn’t self-report or they have a less prestigious track record). They even exercise more caution in citing work in similar fields.

Can we draw any broader lessons about how science self-corrects? Before extrapolating from these findings to science more generally, a few special features of retractions are worth pointing out:

  1. Retractions tend to occur rapidly or not at all (Furman, Jensen, and Murray find most retractions happen within 2 years of publication), whereas broader failures in a research program may come much later, possibly after an active research program has emerged.

  2. Whereas retractions are rare events (1.4 in 10,000 papers for biology and medicine, fewer in other fields, according to Lu, Jin, Uzzi, and Jones), the rate of other kinds of “failure,” such as failure to replicate, is disturbingly high.

  3. Retractions are relatively unambiguous, even though they can come in many flavors. In contrast, other failures may be open to much more dispute and debate.



Cited Above

Do Academic Citations Measure the Impact of Ideas?



Articles Cited:

Furman, Jeffrey L., Kyle Jensen, and Fiona Murray. 2012. Governing knowledge in the scientific community: Exploring the role of retractions in biomedicine. Research Policy 41(2): 276-290. https://doi.org/10.1016/j.respol.2011.11.001

Lu, Susan Feng, Ginger Zhe Jin, Brian Uzzi, and Benjamin Jones. 2013. The Retraction Penalty: Evidence from the Web of Science. Scientific Reports 3: 3146. https://doi.org/10.1038/srep03146

Jin, Ginger Zhe, Benjamin Jones, Susan Feng Lu, and Brian Uzzi. 2019. The Reverse Matthew Effect: Consequence of Retraction in Scientific Teams. The Review of Economics and Statistics 101(3): 492-506. https://doi.org/10.1162/rest_a_00780

Azoulay, Pierre, Jeffrey L. Furman, Joshua L. Krieger, and Fiona Murray. 2015. Retractions. The Review of Economics and Statistics 97(5): 1118-1136. https://doi.org/10.1162/REST_a_00469
