December 26, 2025 · 8 min read

The Biggest Retraction Stories of 2025 (And What They Mean for Your Research)

Roughly 10,000 papers are retracted every year now, and the number is growing 22% annually. Here's what that means for literature reviews.

If you're conducting literature reviews in 2025, here's a number you need to know: 10,000.

That's roughly how many papers get retracted every year now. The growth rate is 22% annually - far outpacing the 6% growth rate of overall publications.

Last week's Friday Lab covered the biggest retraction stories of 2025 and the broader patterns they reveal. If you missed it, here's what you need to know.

Growth Patterns

Retraction rates vary significantly by field. The highest rates are in electrical engineering and computer science (33 per 10K papers), followed by clinical and life sciences (14 per 10K), and engineering and materials science (8 per 10K).

One key finding from recent research: rapid publication growth in a field correlates with non-linear increases in retractions. In other words, fields experiencing publication surges see disproportionately higher retraction rates.

Main Causes

Based on recent data through 2024-2025, here's what's driving retractions:

  • ~6,400 retractions due to faulty or fake peer review processes
  • ~2,300 involved paper mills - operations churning out fake publications with fabricated author emails
  • ~2,100 involved AI-generated content, like tortured phrases or obvious artifacts
  • ~1,800 involved citation manipulation, including irrelevant citations or misrepresented findings

Notable Cases from 2025

Dana-Farber: $15M Settlement

Dana-Farber agreed to a $15 million settlement after allegations that researchers reused or misrepresented images in NIH grant applications. Dozens of papers required retractions or corrections.

The case highlighted systemic oversight failures: flawed data passed peer review, supported federal funding, and persisted for years despite being publicly detectable.

The whistleblower received $2.63M under the False Claims Act, a U.S. law allowing private citizens to sue on behalf of the government when federal funds are obtained through fraud.

The Arsenic Life Paper: 15 Years Later

"A Bacterium That Can Grow by Using Arsenic Instead of Phosphorus" was finally retracted in July 2025, fifteen years after publication.

The retraction focused on whether the study's results actually supported its conclusions. The bacteria weren't using arsenic in place of phosphorus - the arsenate in the growth medium was contaminated with small amounts of phosphate.

This reflects a widening of what triggers retractions: beyond fraud and misconduct, inadequate methodology is now grounds for retraction as well.

Climate Change Paper: 168 Citations

"The economic commitment of climate change" was retracted in December 2025 following criticism of its data and methodology.

The paper had been:

  • Accessed over 300K times
  • Cited 168 times
  • Covered by Forbes, the San Diego Union-Tribune, and other major outlets
  • Published in Nature just 20 months before the retraction

The paper claimed climate change would reduce GDP by 62% by 2030 - roughly three times larger than prior estimates. A single problematic data source was found to have inflated that estimate.

Frontiers Network: 4,000+ Papers Implicated

Frontiers retracted 122 articles after discovering a coordinated network of authors engaged in systematic misconduct. The investigation identified over 4,000 additional articles linked to the same network across 7 different publishers.

The key fraud mechanisms were citation manipulation (artificially boosting citation counts) and reviewing papers without disclosing conflicts of interest.

AI-Generated Artifacts

Multiple papers were retracted in 2025 for obvious AI artifacts:

  • One autism diagnosis paper was retracted after an investigation found AI-generated figures "containing nonsense," vague framework descriptions, and no source code
  • A spermatogonial stem cells paper was retracted just days after publication due to AI-generated images with typos
  • An optical solutions paper contained the phrase "Regenerate Response" - the label of a button in the ChatGPT interface, copied in along with the pasted output

The Retrieval Cost Problem

AI has reduced the marginal cost of adding citations to near zero. Researchers can scan summaries and understand a paper's directional finding without reading the full text.

This means bad papers - including those eventually retracted - propagate more quickly through citation networks. The lower barrier to citation increases the risk of citing flawed or fraudulent work.

Formal screening processes, similar to those used in systematic reviews, help mitigate this risk.

Tools That Help

The Problematic Paper Screener (PPS), created by Guillaume Cabanac, scans literature at scale for:

  • Tortured phrases (awkward machine-paraphrased or AI-generated text)
  • "Regenerate response" and similar artifacts
  • Papers over-reliant on retracted work
  • Citation inconsistencies

Other useful tools include the Retraction Watch Database (the authoritative source for retraction tracking), LibKey (which flags retracted papers automatically and is integrated into moara.io), and PubPeer for community-driven post-publication peer review.

What You Can Do

A few practical steps based on what we've learned from 2025's retractions:

  • Check your citations against multiple databases like Retraction Watch, LibKey, PubMed, and Crossref (see the sketch after this list)
  • Review any unresolved concerns on PubPeer for data inconsistencies or methodological questions
  • Screen especially rigorously in fast-growing fields, where publication rate spikes tend to be followed by higher retraction rates
  • Document your validation process and make citation vetting part of your methodology
  • Use modern screening tools - automated checks can catch issues that peer review missed
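
As a concrete illustration of the first step, here is a minimal Python sketch that asks Crossref whether any record (such as a retraction notice) has been registered as an update to a given DOI. It assumes Crossref's public REST API and its "updates" filter behave as documented, uses a placeholder DOI that you would replace with entries from your own citation list, and is not the workflow moara.io itself uses.

  # Illustrative sketch only: query the public Crossref REST API for works that
  # declare themselves updates (e.g. retraction notices) to a given DOI.
  # Assumes the "updates" filter and "update-to" field behave as documented.
  import requests

  def find_retraction_notices(doi):
      """Return Crossref records that update the given DOI with type 'retraction'."""
      resp = requests.get(
          "https://api.crossref.org/works",
          params={"filter": f"updates:{doi}", "rows": 20},
          timeout=30,
      )
      resp.raise_for_status()
      items = resp.json()["message"]["items"]
      notices = []
      for item in items:
          # Each updating work lists what it updates and how (retraction, correction, ...).
          for update in item.get("update-to", []):
              if update.get("type") == "retraction":
                  notices.append(item)
                  break
      return notices

  if __name__ == "__main__":
      doi = "10.xxxx/placeholder"  # replace with a DOI from your citation list
      notices = find_retraction_notices(doi)
      if notices:
          print(f"{doi} appears to have a retraction notice:")
          for n in notices:
              print(" -", n.get("DOI"), "|", " ".join(n.get("title", [])))
      else:
          print(f"No retraction notice found for {doi} via Crossref.")

In practice you would batch a check like this over your whole reference list and cross-check against the Retraction Watch data and PubPeer rather than relying on any single source.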

How moara.io Helps

We integrate with LibKey to automatically flag retracted papers during screening. In the Friday Lab demo, we uploaded a retracted arsenic paper (the one mentioned above), enriched it to find the DOI, and watched the retraction notice appear automatically in the screening interface.

No manual checking required. The system flags retracted papers as part of the normal screening workflow.

The Bigger Picture

Literature reviews accumulate outdated information over time. As retractions increase, the need for versioning and updating becomes more apparent.

The challenge is developing workflows that can incorporate new information - including retractions - into existing reviews without requiring complete reconstruction.

Try moara.io free: https://moara.io

Upcoming Friday Labs:

  • What Other Fields Can Learn from Systematic Reviews (1/2)
  • Institutional AI-Teaching Policies (1/9)
  • Lessons from AI's Impact on Software Development (1/16)

Sign up for Friday Labs