January 2, 2026 · 9 min read

What All Fields Can Learn from Systematic Reviews

The average systematic review takes 67 weeks to complete. Yet 80% happen in just one field. Here's what everyone else is missing.

The average systematic review takes 67 weeks to complete. That's over a year of rigorous, structured work to comprehensively review the literature on a single topic.

And yet, roughly 80% of systematic reviews happen in just one field: health sciences.

Last week's Friday Lab explored what researchers in other disciplines can learn from systematic review methodology. If you missed it, here's the breakdown.

What Makes Systematic Reviews Different

If you're trying to understand how a systematic review differs from a typical literature review, the key word is formal.

You have a formal search strategy that's written down on paper. You have formal screening with literal yes/no inclusion/exclusion criteria. There's formal data extraction - a documented process for what you'll pull from each paper. And crucially, there's formal process documentation.

This is the type of thing you might find in the methodology section of an econometrics paper, but you certainly wouldn't find it accompanying a standard literature review section.
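To make this concrete, here's a minimal sketch in Python of what "formal" can look like when written down. The schema is illustrative - the field names are mine, not any standard - but the point is that the protocol exists as a fixed artifact before any searching or screening happens.

    from dataclasses import dataclass

    # Illustrative only: the review protocol as a data structure you commit
    # to before searching. Field names are my own, not a standard schema.
    @dataclass
    class ReviewProtocol:
        research_question: str
        databases: list[str]           # where you will search
        queries: list[str]             # the exact query strings
        inclusion_criteria: list[str]  # yes/no rules applied to every record
        exclusion_criteria: list[str]
        extraction_fields: list[str]   # what you pull from each included paper

    protocol = ReviewProtocol(
        research_question="What is the price elasticity of demand for consumer staples?",
        databases=["EconLit", "Scopus"],
        queries=['"price elasticity" AND (staples OR groceries)'],
        inclusion_criteria=["reports an empirical elasticity estimate",
                            "published 2000 or later"],
        exclusion_criteria=["theory-only paper", "non-English"],
        extraction_fields=["elasticity estimate", "standard error",
                           "product category"],
    )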

The Eight Stages

According to a 2011 study by Lindsay Uman, systematic reviews follow eight core stages:

  1. Document your research question - This itself is an important task
  2. Define inclusion/exclusion criteria - Maybe you're only studying populations over 18, for example
  3. Develop search strategy - e.g., the five exact queries you'll run across the four databases you've chosen
  4. Select the studies - Often means downloading literally every citation in PubMed associated with that topic
  5. Extract the data - Everything from screening to full-text data extraction
  6. Assess study quality - Often called risk of bias (ROB) assessments
  7. Analyze and interpret results
  8. Disseminate findings - Get published

These stages are fairly timeless, even though the study is from 2011.

The Economics of It

I'm an economist, so I think about this in terms of microeconomics. Systematic reviews have high fixed costs in terms of setup and gathering citations. But that upfront investment is designed to make the process of screening and reviewing papers more scalable - both in efficiency and quality.

Higher fixed cost upfront to reduce selection bias and information asymmetry downstream. You take time to build infrastructure, then you can rip through a huge number of papers very effectively.
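Here's a back-of-the-envelope sketch of that tradeoff. Every number below is invented purely for illustration - what matters is the crossover point, not the values.

    # Toy numbers, purely illustrative -- the point is the crossover.
    adhoc_setup_hours = 2        # informal review: almost no setup
    adhoc_hours_per_paper = 1.0  # but every paper is screened ad hoc

    sr_setup_hours = 80          # systematic review: protocol, queries, piloting
    sr_hours_per_paper = 0.2     # then rule-based screening is fast

    def total_hours(setup, per_paper, n):
        return setup + per_paper * n

    for n in (50, 100, 500):
        print(n, total_hours(adhoc_setup_hours, adhoc_hours_per_paper, n),
              total_hours(sr_setup_hours, sr_hours_per_paper, n))
    # 50  -> 52.0 vs 90.0:  informal wins for small literatures
    # 100 -> 102.0 vs 100.0: break-even near ~98 papers
    # 500 -> 502.0 vs 180.0: the fixed cost pays for itself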

Why They're Awesome (And Underused)

Even before discovering this methodology, I've always felt that literature reviews are underappreciated as a data source. If I'm studying a new topic, I'm often more interested in the prior literature than in some fancy new cutting-edge data source.

In fields like econometrics, where there are dozens or hundreds of papers on any given topic - say, price elasticity of demand in consumer products - you often get a nice three to four paragraph literature review section. And it's just kind of a checkbox. Here's what other studies have said.

I think there's too little synthesis across those studies. The way the papers are selected is too informal, and it's not reproducible enough. Give five different authors the same econometrics research question and you'd get five different sets of studies at every stage of the process.

The concept of a systematic review makes literature reviews into a science. That's what I really like about it.

Systematic vs Narrative Reviews

After doing research, I wanted to boil it down simply:

Systematic reviews are best for empirical work and evidence synthesis. Narrative reviews are best for theory, history, and conceptual work.

If you're working on theory-based insights and conclusions, you have to be creative and judgmental about what insights you pull from other papers. There needs to be an inspirational aspect to narrative literature reviews.

But if you're studying an empirical question - something that can be turned into regression models - that's where systematic reviews really come into play.

Why 80% Happen in Health Sciences

By my estimation (not from a study, just observation), 80%+ of systematic reviews happen in health sciences. I think there are two major reasons:

First, that's where they were invented. Health sciences developed excellent search syntax, keyword vocabularies, and classifications (think MeSH terms in PubMed) that allow for systematic reviews. You don't have the same infrastructure in the social sciences or materials science.

Second, what's at stake. Health sciences has more review publications that literally just catalog prior studies. More investment is required - and mandated - to get a tighter distribution of statistical results on a particular topic. In many cases, it's life or death. Systematic reviews are expected by regulators, and the scrutiny is much higher.

The Growth Trend

According to a 2024 study, systematic reviews per year have skyrocketed:

  • 1990s: 50 systematic reviews per year
  • 2010: 6,000 systematic reviews per year
  • 2022: 36,000 systematic reviews per year

That growth rate far exceeds the growth rate of publications overall. This isn't just explained by publication growth or population growth.

Barriers to Entry

There are very large barriers to entry for conducting systematic reviews. If you don't have significant training, education, and experience, it takes a lot to get a handle on the protocols.

Even in health sciences, many researchers aren't educated on the process. So if you're in economics and you want to do a systematic review and check that box, you need to understand numerous frameworks. You need field-specific reporting checklists. You often need dual independent reviewers - two or three people screen papers, then there's conflict resolution logic. There are grading frameworks and risk of bias assessments.

It's not as simple as formalizing all your procedures. There's quite a bit of regulation involved.
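To give a flavor of one of those requirements, here's a minimal sketch of dual independent screening with conflict resolution. The paper IDs and decisions are hypothetical.

    # Hypothetical decisions from two independent reviewers: True = include.
    reviewer_a = {"smith2019": True, "jones2021": False, "lee2020": True}
    reviewer_b = {"smith2019": True, "jones2021": True,  "lee2020": True}

    included, excluded, conflicts = [], [], []
    for paper_id in reviewer_a:
        a, b = reviewer_a[paper_id], reviewer_b[paper_id]
        if a and b:
            included.append(paper_id)
        elif not a and not b:
            excluded.append(paper_id)
        else:
            conflicts.append(paper_id)  # routed to a third reviewer to resolve

    print(included)   # ['smith2019', 'lee2020']
    print(conflicts)  # ['jones2021'] -- the two reviewers disagreed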

What Other Fields Can Learn

My perspective is that the lessons non-health science researchers can take from systematic reviews are not "here's how you do a PRISMA checklist" or "here's how to stay compliant with Cochrane guidelines."

Instead, new technologies like moara.io and many other products are helping to make certain systematic review protocols economical and accessible for a broader audience.

The most actionable takeaway for non-health science and non-systematic review researchers: start broad and screen with a plan.

This is especially important given that newer AI literature search engines base their summaries and evidence synthesis heavily on their own search results. It's still on researchers to make sure they're selecting the papers and citations they think are relevant. You don't want to be dependent on one particular search engine.
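One concrete way to avoid that dependence, sketched below with hypothetical records and sources: pull candidates from several engines, deduplicate, and only then apply your screening plan.

    # Hypothetical exports from two different search engines.
    econlit = [{"doi": "10.1/abc", "title": "Elasticity of staples"}]
    scholar = [{"doi": "10.1/abc", "title": "Elasticity of staples"},
               {"doi": None,      "title": "Demand estimation: a survey"}]

    def merge_records(*sources):
        seen, merged = set(), []
        for records in sources:
            for rec in records:
                # Prefer the DOI as the identity key; fall back to the title.
                key = rec["doi"] or rec["title"].lower().strip()
                if key not in seen:
                    seen.add(key)
                    merged.append(rec)
        return merged

    pool = merge_records(econlit, scholar)
    print(len(pool))  # 2 -- the duplicate record was collapsed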

As fields like economics produce more and more econometric studies relative to theory-based papers, researchers need to get better at doing comprehensive reviews. You can't just pick off five studies when there are a hundred that tackle the same research question.

The Retraction Problem

I want to revisit a point from last week's retraction discussion. AI is dramatically reducing the cost of retrieval. The marginal cost of adding a citation to your library, your reference list, or your publication is lower than ever.

You can look at a quick AI summary and figure out a paper's directional finding without reading it. The result: bad papers - even papers that eventually get retracted - propagate faster than ever, infecting every downstream paper that cites them.

For a number of reasons, formal screening of the kind used in systematic reviews is increasingly necessary. This is especially true for highly empirical questions - questions that can be tackled with regression models.

The Future of Systematic Reviews

I hope systematic reviews (or some form of them) will become more common across disciplines. Here's what I see happening:

  • More living and continuously updated reviews - like open-source software rather than point-in-time publications
  • Less compliance-heavy, more principle-driven approaches
  • More structured data using tables
  • Definitely more automation in search, screening, and updates

It's still an incredibly human process and very iterative. But a lot of the tedious aspects - like determining whether an abstract addresses your include/exclude criteria - should be automated.
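As a toy illustration of what automating that first pass could look like: the rules and abstracts below are invented, and a real system would put a language model behind classify(), but the shape of the pipeline - documented criteria in, include/exclude/maybe out - is the point.

    EXCLUDE_TERMS = ("animal model", "in vitro")  # illustrative rules only
    INCLUDE_TERMS = ("randomized", "cohort")

    def classify(abstract: str) -> str:
        text = abstract.lower()
        if any(term in text for term in EXCLUDE_TERMS):
            return "exclude"
        if any(term in text for term in INCLUDE_TERMS):
            return "include"
        return "maybe"  # ambiguous records go to a human reviewer

    print(classify("A randomized trial of ..."))          # include
    print(classify("An in vitro study of ..."))           # exclude
    print(classify("We discuss theoretical models ..."))  # maybe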

Based on a 2025 study, AI can create 50%+ time savings in systematic reviews, especially in two areas:

  1. Title and abstract screening - This is the most obvious application
  2. Data extraction from full-text articles - This one's more nuanced

I think researchers should still pore over full-text articles themselves, particularly in fields where the empirical question isn't crystal clear. The less clear-cut your empirical question, the more you should be reading through prior authors' work yourself.

But if you have the clearest empirical question in the world, maybe you could screen a thousand full-text articles instead of 20 using AI.

A Real Example

Here's a practical example of what systematic reviews look like in practice. A recent systematic review on characterizing long COVID started with 6,500 studies and ended up covering 39.

That's incredible comprehensiveness. And the alternative in narrative reviews is basically allowing search engines to do that screening process for you - which is sometimes helpful, sometimes not comprehensive enough. Then you're using heuristics-based mental models to screen papers.

For highly theory-based work where you need inspiration and can't be formulaic with include/exclude criteria, that's fine. But for empirical questions, there really should be something more scientific and formulaic going on.

How moara.io Helps

We're embedding formal workflows into both systematic and narrative review processes. Our platform includes:

  • AI-assisted search strategy development - Based on your research question, suggesting keyword combinations and Boolean operators
  • Include/exclude criteria development - AI suggestions for defining what you'll screen for
  • Automated record keeping for PRISMA workflows - Running in the background, no manual tracking needed
  • Evidence synthesis - Recognizing themes between papers, organizing them chronologically, building narrative stories

The goal is to help users who haven't spent their careers doing systematic reviews actually perform one easily.

Semi-Systematic Reviews Should Be Normal

This is a hill I'm willing to die on: semi-systematic reviews should be far more prevalent across every discipline.

Set aside the Cochrane guidelines, set aside PRISMA workflows, set aside risk of bias assessments. Something that looks and feels more like a systematic review - with formal search strategies and formal structured screening processes - should be standard.

When push comes to shove, here's how most economics literature reviews actually happen: the screening is extremely judgmental and takes place at the point of search engine results. You have folks typing something into Google Scholar or EconLit and judgmentally selecting whichever results look interesting.

Very often, there are no documented inclusion criteria and no prioritization method. It's just "hey, these 10-15 papers look cool, that abstract is easy to understand, okay, that's my bibliography."

There's no reason that needs to be the case, particularly given every field is becoming more quantitative.

The Bigger Picture

Literature reviews accrue technical debt over time. Reviews need versioning, patching, and maintenance - like software.
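Here's a sketch of what that versioning might look like: keep the included set under version control and, on each update, diff a fresh search run against it. The file format and the retrieval step are stand-ins for whatever your documented search strategy actually produces.

    import json

    def diff_library(saved_path, fresh_records):
        # saved_path: JSON list of previously included records (with DOIs).
        with open(saved_path) as f:
            saved = {rec["doi"] for rec in json.load(f)}
        fresh = {rec["doi"] for rec in fresh_records}
        return {
            "new": sorted(fresh - saved),      # candidates to screen and patch in
            "missing": sorted(saved - fresh),  # flag: possibly retracted/withdrawn
        }

    # Usage: changes = diff_library("review_v3.json", rerun_search_strategy())
    # where rerun_search_strategy() is your own retrieval step.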

The end game? More dynamic libraries that can be updated as new information emerges.

We're not there yet. But as systematic reviews grow and AI makes them more accessible, we're heading in that direction.

Try moara.io free: https://moara.io

Upcoming Friday Labs:

  • Institutional AI-Teaching Policies (1/9)
  • Lessons from AI's Impact on Software Development (1/16)

Sign up for Friday Labs