UCL Library Services

Formulating a research question



Systematic reviews address clear and answerable research questions, rather than a general topic or problem of interest. Clarifying the review question leads to specifying what types of studies can best address that question and to setting out criteria for including such studies in the review. These are often called inclusion criteria or eligibility criteria. The criteria could relate to the review topic, the research methods of the studies, specific populations, settings, date limits, geographical areas, types of interventions, or something else.

Six types of question are listed below, with examples of different questions that a review might address on the topic of influenza vaccination. Structuring questions in this way aids thinking about the different types of research that could address each type of question, and mnemonics can help in thinking about the criteria that research must fulfil to address the question.

Examples of review questions

  • Needs - What do people want? Example: What are the information needs of healthcare workers regarding vaccination for seasonal influenza?
  • Impact or effectiveness - What is the balance of benefit and harm of a given intervention? Example: What is the effectiveness of strategies to increase vaccination coverage among healthcare workers? What is the cost-effectiveness of interventions that increase immunisation coverage?
  • Process or explanation - Why does it work (or not work)? How does it work (or not work)?  Example: What factors are associated with uptake of vaccinations by healthcare workers?  What factors are associated with inequities in vaccination among healthcare workers?
  • Correlation - What relationships are seen between phenomena? Example: How does influenza vaccination of healthcare workers vary with morbidity and mortality among patients? (Note: correlation does not in itself indicate causation).
  • Views / perspectives - What are people's experiences? Example: What are the views and experiences of healthcare workers regarding vaccination for seasonal influenza?
  • Service implementation - What is happening? Example: What is known about the implementation and context of interventions to promote vaccination for seasonal influenza among healthcare workers?

Examples in practice: Seasonal influenza vaccination of health care workers: evidence synthesis / Lorenc et al., 2017

Example of eligibility criteria

Research question: What are the views and experiences of UK healthcare workers regarding vaccination for seasonal influenza?

  • Population: healthcare workers, any type, including those without direct contact with patients.
  • Context: seasonal influenza vaccination for healthcare workers.
  • Study design: qualitative data including interviews, focus groups, ethnographic data.
  • Date of publication: all.
  • Country: all UK regions.
  • Excluded: studies focused on influenza vaccination for the general population, and pandemic influenza vaccination.
  • Excluded: studies using survey data with only closed questions, and studies that only report quantitative data.
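In practice these criteria are applied to each candidate study during screening, normally by two reviewers working independently rather than by software, but writing them out as explicit checks can expose ambiguities early. The following Python sketch is a simplified, hypothetical illustration of the example criteria above; the field names and string-matching rules are assumptions made for illustration, not part of any screening tool.

    from dataclasses import dataclass

    @dataclass
    class Study:
        """Minimal, hypothetical record of a candidate study."""
        population: str    # e.g. "healthcare workers"
        topic: str         # e.g. "seasonal influenza vaccination"
        country: str       # e.g. "England"
        qualitative: bool  # interviews, focus groups, ethnographic data

    UK_REGIONS = {"England", "Scotland", "Wales", "Northern Ireland"}

    def meets_criteria(study: Study) -> bool:
        """Check the example inclusion and exclusion criteria above."""
        included = (
            "healthcare worker" in study.population.lower()
            and "seasonal influenza" in study.topic.lower()
            and study.country in UK_REGIONS
            and study.qualitative                 # qualitative data only
        )
        excluded = (
            "general population" in study.population.lower()
            or "pandemic" in study.topic.lower()
        )
        return included and not excluded

    example = Study("hospital healthcare workers", "seasonal influenza vaccination",
                    "England", qualitative=True)
    print(meets_criteria(example))  # True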

Consider the research boundaries

It is important to consider the reasons that the research question is being asked. Any research question has ideological and theoretical assumptions around the meanings and processes it is focused on. A systematic review should either specify definitions and boundaries around these elements at the outset, or be clear about which elements are undefined. 

For example, if we are interested in the topic of homework, there are likely to be pre-conceived ideas about what is meant by 'homework'. If we want to know the impact of homework on educational attainment, we need to set boundaries, such as the age range of children or how educational attainment is measured. There may also be particular settings or contexts to consider: type of school, country, gender, the timeframe of the literature, or the study designs of the research.

Research question: What is the impact of homework on children's educational attainment?

  • Scope: Homework - tasks set by school teachers for students to complete out of school time, in any format or setting.
  • Population: children aged 5-11 years.
  • Outcomes: measures of literacy or numeracy from tests administered by researchers, school or other authorities.
  • Study design: Studies with a comparison control group.
  • Context: OECD countries, all settings within mainstream education.
  • Date Limit: 2007 onwards.
  • Excluded: any context not in mainstream primary schools.
  • Excluded: non-English language studies.

Mnemonics for structuring questions

Some mnemonics can help to formulate research questions, set the boundaries of the question, and inform a search strategy.

Intervention effects

PICO: Population – Intervention – Comparison – Outcome

Variations: add 'T' for time, 'C' for context, or 'S' for study type.

Policy and management issues

ECLIPSE: Expectation – Client group – Location – Impact – Professionals involved – Service

Expectation encourages reflection on what the information is needed for, i.e. improvement, innovation or information. Impact looks at what you would like to achieve, e.g. improved team communication.

  • How CLIP became ECLIPSE: a mnemonic to assist in searching for health policy/management information / Wildridge & Bell, 2002

Analysis tool for management and organisational strategy

PESTLE: Political – Economic – Social – Technological – Legal – Environmental

An analysis tool that can be used by organizations for identifying external factors which may influence their strategic development, marketing strategies, new technologies or organisational change.

  • PESTLE analysis / CIPD, 2010

Service evaluations with qualitative study designs

SPICE: Setting (context) – Perspective – Intervention – Comparison – Evaluation

Perspective relates to users or potential users. Evaluation is how you plan to measure the success of the intervention.

  • Clear and present questions: formulating questions for evidence based practice / Booth, 2006

Read more about some of the frameworks for constructing review questions:

  • Formulating the Evidence Based Practice Question: A Review of the Frameworks / Davis, 2011
  • URL: https://library-guides.ucl.ac.uk/systematic-reviews

University of Tasmania, Australia

Systematic reviews for health: 1. formulate the research question.


Step 1. Formulate the Research Question

A systematic review is based on a pre-defined, specific research question (Cochrane Handbook, 1.1). The first step in a systematic review is to determine its focus: you should clearly frame the question(s) the review seeks to answer (Cochrane Handbook, 2.1). It may take you a while to develop a good review question, but it is an important step in your review. Well-formulated questions will guide many aspects of the review process, including determining eligibility criteria, searching for studies, collecting data from included studies, and presenting findings (Cochrane Handbook, 2.1).

The research question should be clear and focused - neither too vague, too narrow, nor too broad.

You may like to consider some of the techniques mentioned below to help you with this process. They can be useful but are not necessary for a good search strategy.

PICO - to search for quantitative review questions

Richardson, WS, Wilson, MC, Nishikawa, J & Hayward, RS 1995, 'The well-built clinical question: A key to evidence-based decisions', ACP Journal Club , vol. 123, no. 3, pp. A12-A12 .

We do not have access to this article at UTAS.

A variant of PICO is PICOS, where S stands for study design. It establishes which study designs are appropriate for answering the question, e.g. a randomised controlled trial (RCT). There are also PICOC (C for context) and PICOT (T for timeframe).

You may find this document on PICO / PIO / PEO useful:

  • Framing a PICO / PIO / PEO question Developed by Teesside University

SPIDER - to search for qualitative and mixed methods research studies

Cooke, A, Smith, D & Booth, A 2012, 'Beyond pico the spider tool for qualitative evidence synthesis', Qualitative Health Research , vol. 22, no. 10, pp. 1435-1443.

This article is only accessible for UTAS staff and students.

SPICE - to search for qualitative evidence

Cleyle, S & Booth, A 2006, 'Clear and present questions: Formulating questions for evidence based practice', Library hi tech , vol. 24, no. 3, pp. 355-368.

ECLIPSE - to search for health policy/management information

Wildridge, V & Bell, L 2002, 'How clip became eclipse: A mnemonic to assist in searching for health policy/management information', Health Information & Libraries Journal , vol. 19, no. 2, pp. 113-115.

There are many more techniques available. See the below guide from the CQUniversity Library for an extensive list:

  • Question frameworks overview from Framing your research question guide, developed by CQUniversity Library

This is the specific research question used in the example:

"Is animal-assisted therapy more effective than music therapy in managing aggressive behaviour in elderly people with dementia?"

Within this question are the four PICO concepts:

  • P (Population) - elderly people with dementia
  • I (Intervention) - animal-assisted therapy
  • C (Comparison) - music therapy
  • O (Outcome) - management of aggressive behaviour

S - Study design

This is a therapy question. The best study design to answer a therapy question is a randomised controlled trial (RCT). You may decide to only include studies in the systematic review that used an RCT; see Step 8.


Need More Help? Book a consultation with a  Learning and Research Librarian  or contact  [email protected] .

  • URL: https://utas.libguides.com/SystematicReviews


Duke University Medical Center Library
2. Develop a Research Question


A well-developed and answerable question is the foundation for any systematic review. Points to consider when developing your question:

  • Systematic review questions typically follow the PICO format (patient or population, intervention, comparison, and outcome).
  • Using the PICO framework can help team members clarify and refine the scope of their question. For example, if the population is breast cancer patients, is it all breast cancer patients or just a segment of them?
  • When formulating your research question, you should also consider how it could be answered. If it is not possible to answer your question (the research would be unethical, for example), you'll need to reconsider what you're asking.
  • Typically, systematic review protocols include a list of studies that will be included in the review. These studies, known as exemplars, guide the search development and also serve as proof of concept that your question is answerable. If you are unable to find studies to include, you may need to reconsider your question.

Other Question Frameworks

PICO is a helpful framework for clinical research questions, but it may not be the best fit for other types of research questions. Did you know there are at least 25 other question frameworks besides variations of PICO? Frameworks like PEO, SPIDER, SPICE, and ECLIPSE can help you formulate a focused research question. The descriptions and examples below were created by the Medical University of South Carolina (MUSC) Libraries.

The PEO question framework is useful for qualitative research topics. PEO questions identify three concepts: population, exposure, and outcome.

Research question: What are the daily living experiences of mothers with postnatal depression?

The SPIDER question framework is useful for qualitative or mixed methods research topics focused on "samples" rather than populations. SPIDER questions identify five concepts: sample, phenomenon of interest, design, evaluation, and research type.

Research question: What are the experiences of young parents in attendance at antenatal education classes?

The SPICE question framework is useful for qualitative research topics evaluating the outcomes of a service, project, or intervention. SPICE questions identify five concepts: setting, perspective, intervention/exposure/interest, comparison, and evaluation.

Research question: For teenagers in South Carolina, what is the effect of provision of Quit Kits to support smoking cessation on number of successful attempts to give up smoking compared to no support ("cold turkey")?

The ECLIPSE framework is useful for qualitative research topics investigating the outcomes of a policy or service. ECLIPSE questions identify six concepts: expectation, client group, location, impact, professionals, and service.

Research question: How can I increase access to wireless internet for hospital patients?

  • URL: https://guides.mclibrary.duke.edu/sysreview

Systematic Reviews: Formulating Your Research Question


Types of Questions

Research questions should be answerable and should also fill important gaps in knowledge. Developing a good question takes time, and the question may not fit neatly into a traditional framework. Questions can be broad or narrow, and there are advantages and disadvantages to each type.

Questions can be about interventions, diagnosis, screening, measuring, patient/student/customer experiences, or even management strategies. They can also be about policies. As the field of systematic reviews grows, more and more people in the humanities and social sciences are embracing systematic reviews and creating questions that fit within their fields of practice.

More information can be found here:

Thomas J, Kneale D, McKenzie JE, Brennan SE, Bhaumik S. Chapter 2: Determining the scope of the review and the questions it will address. In: Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA (editors).  Cochrane Handbook for Systematic Reviews of Interventions  version 6.0 (updated July 2019). Cochrane, 2019. Available from  www.training.cochrane.org/handbook .

Frameworks are used to develop the question being asked. The particular framework matters less than the question itself.

Think of these frameworks as you would the frame of a house or building: it is there to provide support and to act as a scaffold for the rest of the structure. In the same way, a research question framework can help structure your evidence synthesis question.

Organizing Your Question

  • Formulating non-PICO questions: Although the PICO formulation should apply easily to the majority of effectiveness questions, and a great number besides, you may encounter questions that are not easily accommodated within this particular framework. Below you will find a number of acceptable alternatives.
  • Using The PICOS Model To Design And Conduct A Systematic Search: A Speech Pathology Case Study
  • 7 STEPS TO THE PERFECT PICO SEARCH: Searching for high-quality clinical research evidence can be a daunting task, yet it is an integral part of the evidence-based practice process. One way to streamline and improve the research process for nurses and researchers of all backgrounds is to utilize the PICO search strategy. PICO is a format for developing a good clinical research question prior to starting one's research. It is a mnemonic used to describe the four elements of a sound clinical foreground question (Yale University's Cushing/Whitney Medical Library).

PICO - to search for quantitative review questions

P: Patient or Population

I: Intervention (or Exposure)

C: Comparison (or Control)

O: Outcome

Variations Include:

S: Study Design

T: Timeframe

SPICE - to search for qualitative evidence

S: Setting (where?)

P: Perspective (for whom?)

I: Intervention (what?)

C: Comparison (compared with what?)

E: Evaluation (with what result?)

SPIDER - to search for qualitative and mixed methods research studies

S: Sample

PI: Phenomenon of Interest

D: Design

E: Evaluation

R: Research type

ECLIPSE - to search for health policy/management information

E: Expectation (improvement or information or innovation)

C: Client group (at whom the service is aimed)

L: Location (where is the service located?)

I: Impact (outcomes)

P: Professionals (who is involved in providing/improving the service)

SE: Service (for which service are you looking for information?)

PICO Template Questions

Try words from your topic in these templates.  Your PICO should fit only one type of question in the list.

For an intervention/therapy:

In _______(P), what is the effect of _______(I) on ______(O) compared with _______(C) within ________ (T)?

For etiology:

Are ____ (P) who have _______ (I) at ___ (increased/decreased) risk for/of _______ (O) compared with ______ (P) with/without ______ (C) over _____ (T)?

Diagnosis or diagnostic test:

Are (is) _________ (I) more accurate in diagnosing ________ (P) compared with ______ (C) for _______ (O)?

Prevention:

For ________ (P) does the use of ______ (I) reduce the future risk of ________ (O) compared with _________ (C)?

Prognosis/Predictions:

In__________ (P) how does ________ (I) compared to _______(C) influence _______ (O) over ______ (T)?

How do ________ (P) diagnosed with _______ (I) perceive ______ (O) during _____ (T)?

Templates taken from Southern Illinois University Edwardsville.
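As a simple illustration of how these templates are used, the short Python sketch below fills the intervention/therapy template with the values from the first example question in the next section; the snippet is purely illustrative.

    # Illustrative only: filling the intervention/therapy PICO(T) template.
    template = ("In {P}, what is the effect of {I} on {O} "
                "compared with {C} within {T}?")

    question = template.format(
        P="school-age children",
        I="a school-based physical activity program",
        O="a reduction in the incidence of childhood obesity",
        C="no intervention",
        T="a 1 year period",
    )
    print(question)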

Example PICO Questions

Intervention/Therapy:

In school-age children (P), what is the effect of a school-based physical activity program (I) on a reduction in the incidence of childhood obesity (O) compared with no intervention (C) within a 1 year period (T)?

In high school children (P), what is the effect of a nurse-led presentation on bullying (I) on a reduction in reported incidences of bullying (O) compared with no intervention (C) within a 6 month time frame (T)?

Etiology:

Are males 50 years of age and older (P) who have a history of 1 year of smoking or less (I) at an increased risk of developing esophageal cancer (O) compared with males age 50 and older (P) who have no smoking history (C)?

Are women ages 25-40 (P) who take oral contraceptives (I) at greater risk for developing blood clots (O) compared with women ages 25-40 (P) who use IUDs for contraception (C) over a 5 year time frame (T)?

Diagnosis/Diagnostic Test:

Is a yearly mammogram (I) more effective in detecting breast cancer (O) compared with a mammogram every 3 years (C) in women under age 50 (P)?

Is a colonoscopy combined with fecal occult blood testing (I) more accurate in detecting colon cancer (O) compared with a colonoscopy alone (C) in adults over age 50 (P)?

Prevention:

For women under age 60 (P), does the daily use of 81mg low-dose Aspirin (I) reduce the future risk of stroke (O) compared with no usage of low-dose Aspirin (C)?

For adults over age 65 (P) does a daily 30 minute exercise regimen (I) reduce the future risk of heart attack (O) compared with no exercise regimen (C)?

Prognosis/Predictions:

Does daily home blood pressure monitoring (I) influence compliance with medication regimens for hypertension (O) in adults over age 60 who have hypertension (P) during the first year after being diagnosed with the condition (T)?

Does monitoring blood glucose 4 times a day (I) improve blood glucose control (O) in people with Type 1 diabetes (P) during the first six months after being diagnosed with the condition (T)?

How do teenagers (P) diagnosed with cancer (I) perceive chemotherapy and radiation treatments (O) during the first 6 months after diagnosis (T)?

How do first-time mothers (P) of premature babies in the NICU (I) perceive bonding with their infant (O) during the first month after birth (T)?

  • URL: https://guides.lib.lsu.edu/Systematic_Reviews


Systematic and systematic-like review toolkit: Step 1: Formulating the research question


The first stage in a review is formulating the research question. The research question accurately and succinctly sums up the review's line of inquiry. This page outlines approaches to developing a research question that can be used as the basis for a review.

Research question frameworks

It can be useful to use a framework to aid in the development of a research question. Frameworks can help you identify the searchable parts of a question and focus your search on relevant results.

A technique often used in research for formulating a clinical research question is the PICO model. Slightly different versions of this concept are used to search for quantitative and qualitative reviews.

The PICO/PECO framework is an adaptable approach to help you focus your research question and guide you in developing search terms. The framework prompts you to consider your question in terms of these four elements:

P: Patient/Population/Problem

I/E: Intervention/Indicator/Exposure/Event

C: Comparison/Control

O: Outcome

For more detail, there are also the PICOT and PICOS additions:

PICOT - adds Time

PICOS - adds Study design

PICO example

Consider this scenario:

Current guidelines indicate that nicotine replacement therapies (NRTs) should not be used as an intervention in young smokers.  Counselling is generally the recommended best practice for young smokers, however youth who are at high risk for smoking often live in regional or remote communities with limited access to counselling services.  You have been funded to review the evidence for the effectiveness of NRTs for smoking cessation in Australian youths to update the guidelines.

The research question stemming from this scenario could be phrased in this way:

In (P) adolescent smokers, how does (I) nicotine replacement therapy compared with (C) counselling affect (O) smoking cessation rates?
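Once the PICO elements are identified, each one typically becomes a block of synonyms in the database search: synonyms are combined with OR inside a block, and the blocks are combined with AND. The Python sketch below illustrates this assembly for the example question above; the synonym lists are illustrative assumptions rather than a validated search strategy, and in practice the comparison and outcome blocks are often omitted from the search so that relevant studies are not missed.

    # Illustrative sketch: turning PICO concept blocks into one boolean
    # search string (OR within a concept, AND between concepts).
    # The synonyms are examples only, not a validated strategy.
    pico_blocks = {
        "Population":   ["adolescent*", "teenager*", "young smoker*"],
        "Intervention": ["nicotine replacement therap*", "NRT", "nicotine patch*"],
        "Comparison":   ["counselling", "counseling", "behavioural support"],
        "Outcome":      ["smoking cessation", "quit rate*", "abstinence"],
    }

    def build_search(blocks: dict) -> str:
        """AND together one parenthesised OR-group per PICO concept."""
        groups = []
        for concept, terms in blocks.items():
            quoted = [f'"{t}"' if " " in t else t for t in terms]
            groups.append("(" + " OR ".join(quoted) + ")")
        return " AND ".join(groups)

    print(build_search(pico_blocks))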

Alternative frameworks

PICO is one of the most frequently used frameworks, but there are several other frameworks available to use, depending on your question.


Structuring qualitative questions?

Try PIC or SPIDER:

  • PIC: Population, Phenomena of Interest, Context
  • SPIDER: Sample, Phenomenon of Interest, Design, Evaluation, Research type

Cooke, A., Smith, D., & Booth, A. (2012). Beyond PICO: the SPIDER tool for qualitative evidence synthesis . Qualitative health research, 22(10), 1435-1443.

Question about aetiology or risk?

Try PEO:

  • Population, Exposure, Outcomes

Moola, S., Munn, Z., Sears, K., Sfetcu, R., Currie, M., Lisy, K., Tufanaru, C., Qureshi, R., Mattis, P., & Mu, P. (2015). Conducting systematic reviews of association (etiology). International Journal of Evidence-Based Healthcare, 13(3), 163-169.

Evaluating an intervention, policy or service? 

Try SPICE :

  • Setting, Population or Perspective, Intervention, Comparison, Evaluation

Booth, A. (2006), "Clear and present questions: formulating questions for evidence based practice", Library Hi Tech, Vol. 24 No. 3, pp. 355-368. https://doi.org/10.1108/07378830610692127

Investigating the outcome of a service or policy? 

Try ECLIPSE :

  • Expectation, Client group, Location, Impact, Professionals, SErvice

Wildridge, V., & Bell, L. (2002). How CLIP became ECLIPSE: a mnemonic to assist in searching for health policy/management information . Health Information & Libraries Journal, 19(2), 113-115.

Working out prevalence or incidence? 

Try CoCoPop :

  • Condition, Context, Population

Munn, Z., Moola, S., Lisy, K., Riitano, D., & Tufanaru, C. (2015). Methodological guidance for systematic reviews of observational epidemiological studies reporting prevalence and cumulative incidence data . International journal of evidence-based healthcare, 13(3), 147-153.

Determining prognosis?

  • Population, Prognostic Factors, Outcome

Conducting an economic evaluation? 

Try PICOC :

  • Population, Intervention, Comparator(s), Outcomes, Context

Petticrew, M., & Roberts, H. (2006). Systematic reviews in the social sciences: a practical guide . Blackwell Pub.


JBI recommends the PCC (Population (or Participants), Concept, and Context) search framework to develop the research question of a scoping review. In some instances, just the concept and context are used in the search.

The University of Notre Dame Australia provides information on some different frameworks available to help structure the research question.

Further Readings

Booth A, Noyes J, Flemming K, et al. Formulating questions to explore complex interventions within qualitative evidence synthesis. BMJ Global Health 2019;4:e001107. This paper explores the importance of focused, relevant questions in qualitative evidence syntheses to address complexity and context in interventions.

Kim, K. W., Lee, J., Choi, S. H., Huh, J., & Park, S. H. (2015). Systematic review and meta-analysis of studies evaluating diagnostic test accuracy: a practical review for clinical researchers-part I. General guidance and tips . Korean journal of radiology, 16(6), 1175-1187. As the use of systematic reviews and meta-analyses is increasing in the field of diagnostic test accuracy (DTA), this first of a two-part article provides a practical guide on how to conduct, report, and critically appraise studies of DTA. 

Methley, A. M., Campbell, S., Chew-Graham, C., McNally, R., & Cheraghi-Sohi, S. (2014). PICO, PICOS and SPIDER: A comparison study of specificity and sensitivity in three search tools for qualitative systematic reviews . BMC Health Services Research, 14(1), 579. In this article the ‘SPIDER’ search framework, developed for more effective searching of qualitative research, was evaluated against PICO and PICOD. 

Munn, Z., Stern, C., Aromataris, E., Lockwood, C., & Jordan, Z. (2018). What kind of systematic review should I conduct? A proposed typology and guidance for systematic reviewers in the medical and health sciences . BMC medical research methodology, 18(1), 5. https://doi.org/10.1186/s12874-017-0468-4 This article aligns review types to question development frameworks.

Search for existing reviews

Before you start searching, find out whether any systematic reviews have been conducted recently on your topic. Similar systematic reviews can help you identify search terms and information relevant to your topic, and knowing that a review already exists may mean you need to change your question.

The Cochrane Library and Joanna Briggs Institute publish systematic reviews, and you can search for the term "systematic review" in any of the subject databases. You can also search PROSPERO, an international register of systematic reviews, to see if there are any related reviews underway but not yet published; additional review registers are detailed below.
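Alongside searching these sources manually, you can get a quick indicative count of existing systematic reviews on a topic from PubMed's public E-utilities API. The sketch below is a minimal example; the topic string is illustrative, and a count of zero does not prove that no relevant review exists, since not all reviews are indexed in PubMed.

    # Minimal sketch: count PubMed records tagged as systematic reviews
    # matching an illustrative topic, via the NCBI E-utilities esearch API.
    import requests

    ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

    def count_systematic_reviews(topic: str) -> int:
        term = f"({topic}) AND systematic review[pt]"
        params = {"db": "pubmed", "term": term, "retmode": "json", "retmax": 0}
        response = requests.get(ESEARCH, params=params, timeout=30)
        response.raise_for_status()
        return int(response.json()["esearchresult"]["count"])

    print(count_systematic_reviews('"nicotine replacement therapy" AND adolescents'))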


Protocols and Guidelines for reviews

It is recommended that authors consult relevant guidelines and create a protocol for their review.  

Protocols provide a clear plan for how the review will be conducted, including what will and will not be included in the final review. Protocols are widely recommended for any systematic review and are increasingly a requirement for publication of a completed systematic review.

Guidelines provide specific information on how to perform a review in your field of study. A completed review may be evaluated against the relevant guidelines by peer reviewers or readers, so it makes sense to follow the guidelines as best you can.

The sections below explain the importance of protocols and guidelines in more detail.

how to formulate a research question systematic review

Your protocol (or plan for conducting your review) should include the rationale, objectives, hypothesis, and planned methods used in searching, screening and analysing identified studies used in the review. The rationale should clearly state what will be included and excluded from the review. The aim is to minimise any bias by having pre-defined eligibility criteria.

Base the protocol on the relevant guidelines for the review that you are conducting. PRISMA-P was developed for the reporting and development of systematic review protocols. Its Explanation and Elaboration paper includes examples of what to write in your protocol. York's CRD has also created a document on how to submit a protocol to PROSPERO.

There are several registers of protocols, often associated with the organisation publishing the review. Cochrane and Joanna Briggs Institute both have their own protocol registries, and PROSPERO is a wide-reaching registry covering protocols for Cochrane, non-Cochrane and non-JBI reviews on a range of health, social care, education, justice, and international development topics.

Before beginning your protocol, search within protocol registries such as those listed above, or Open Science Framework or Research Registry , or journals such as Systematic Reviews and BMJ Open . This is a useful step to see if a protocol has already been submitted on your review topic and to find examples of protocols in similar areas of research.    

While a protocol will contain details of the intended search strategy, it should be registered before the search strategy is finalised and run, so that you can show that your intention for the review has remained true and to limit duplication of in-progress reviews.

A protocol should typically address points that define the kind of studies to be included and the kind of data required to ensure the systematic review is focused on the appropriate studies for the topic. Some points to think about are:

  • What study types are you looking for? For example, randomised controlled trials, cohort studies, qualitative studies
  • What sample size is acceptable in each study (power of the study)? 
  • What population are you focusing on? Consider age ranges, gender, disease severity, geography of patients.
  • What type of intervention are you focusing on?
  • What outcomes are of importance to the review, including how those outcomes are measured?
  • What context should you be looking for in a study? A lab, acute care, school, community...
  • How will you appraise the studies? What methodology will you use?
  • Does the study differentiate between the target population and other groups in the data? How will you handle it if it does not?
  • Is the data available to access if the article does not specify the details you need? If not, what will you do?
  • What languages are you able to review? Do you have funding to translate articles from languages other than English?  

Further reading

PLoS Medicine Editors. (2011). Best practice in systematic reviews: the importance of protocols and registration . PLoS medicine, 8(2), e1001009.

Systematic Review guidelines

The Cochrane Handbook for Systematic Reviews of Interventions is a world-renowned resource for information on designing systematic reviews of interventions.

Many other guidelines have been developed from this extensive resource.

General systematic reviews

  • The  PRISMA Statement  includes the well-used Checklist and Flow Diagram.
  • Systematic Reviews: CRD's guidance on undertaking reviews in health care . One of the founding institutions that developed systematic review procedure. CRD's guide gives detailed clearly written explanations for different fields in Health.
  • Institute of Medicine. Finding What Works in Health Care: Standards for Systematic Reviews. Chapter 3: Standards for Finding and Assessing Individual Studies. National Academies Press (US); 2011. Provides guidance on searching, screening, data collection, and appraisal of individual studies for a systematic review.

Meta-analyses

  • An alternative to PRISMA is the Meta‐analysis Of Observational Studies in Epidemiology (MOOSE) for observational studies. It is a 35‐item checklist. It pays more attention to certain aspects of the search strategy, in particular the inclusion of unpublished and non‐English‐language studies.

Surgical systematic reviews

  • Systematic reviews in surgery-recommendations from the Study Center of the German Society of Surgery . Provides recommendations for systematic reviews in surgery with or without meta-analysis, for each step of the process with specific recommendations important to surgical reviews.

Nursing/Allied Health systematic reviews

The Joanna Briggs Institute Manual for Evidence Synthesis is a comprehensive guide to conducting JBI systematic and similar reviews.

Nutrition systematic reviews

  • Academy of Nutrition and Dietetics Evidence Analysis Manual  is designed to guide expert workgroup members and evidence analysts to understand and carry out the process of conducting a systematic review.

Occupational therapy

  • American Occupational Therapy Association: Guidelines for Systematic reviews . The American Journal of Occupational Therapy (AJOT) provides guidance for authors conducting systematic reviews.

Education/Law/ Sociology systematic reviews

  • Campbell Collaboration, Cochrane's sister organisation, provides guidelines for systematic reviews in the social sciences: MECIR
  • Systematic Reviews in Educational Research: Methodology, Perspectives and Application

Cochrane Handbook for Systematic Reviews of Diagnostic Test Accuracy

COSMIN Guideline for Systematic Reviews of Outcome Measurement Instruments – This was developed for patient reported outcomes (PROMs) but has since been adapted for use with other types of outcome measurements in systematic reviews.

Prinsen, C.A.C., Mokkink, L.B., Bouter, L.M. et al. COSMIN guideline for systematic reviews of patient-reported outcome measures . Qual Life Res 27, 1147–1157 (2018). https://doi.org/10.1007/s11136-018-1798-3

HuGENet™ Handbook of systematic reviews – particularly useful for describing population-based data and human genetic variants.

AHRQ: Methods Guide for Effectiveness and Comparative Effectiveness Reviews - from the US Department of Health and Human Services, guidelines on conducting systematic reviews of existing research on the effectiveness, comparative effectiveness, and harms of different health care interventions.

Mariano, D. C., Leite, C., Santos, L. H., Rocha, R. E., & de Melo-Minardi, R. C. (2017). A guide to performing systematic literature reviews in bioinformatics . arXiv preprint arXiv:1707.05813.

Integrative Review guidelines

how to formulate a research question systematic review

Integrative reviews may incorporate experimental and non-experimental data, as well as theoretical information.  They differ from systematic reviews in the diversity of the study methodologies included.

Guidelines:

  • Whittemore, R. and Knafl, K. (2005), The integrative review: updated methodology. Journal of Advanced Nursing, 52: 546–553. doi:10.1111/j.1365-2648.2005.03621.x
  • A step-by-step guide to conducting an Integrative Review (2020), edited by C.E. Toronto & Ruth Remington, Springer Books

Rapid Review guidelines

how to formulate a research question systematic review

Rapid reviews differ from systematic reviews in the shorter timeframe taken and reduced comprehensiveness of the search.

Cochrane has a methods group to inform the conduct of rapid reviews with a bibliography of relevant publications .

A modified approach to systematic review guidelines can be used for rapid reviews, but guidelines are beginning to appear:

Crawford C, Boyd C, Jain S, Khorsan R and Jonas W (2015), Rapid Evidence Assessment of the Literature (REAL©): streamlining the systematic review process and creating utility for evidence-based health care . BMC Res Notes 8:631 DOI 10.1186/s13104-015-1604-z

Philip Moons, Eva Goossens, David R. Thompson, Rapid reviews: the pros and cons of an accelerated review process , European Journal of Cardiovascular Nursing, Volume 20, Issue 5, June 2021, Pages 515–519, https://doi.org/10.1093/eurjcn/zvab041

Rapid Review Guidebook: Steps for conducting a rapid review National Collaborating Centre for Methods and Tools (McMaster University and Public Health Agency Canada) 2017

Tricco AC, Langlois EV, Straus SE, editors (2017) Rapid reviews to strengthen health policy and systems: a practical guide (World Health Organization). This guide is particularly aimed towards developing rapid reviews to inform health policy. 

Scoping Review guidelines

how to formulate a research question systematic review

Scoping reviews can be used to map an area or to determine the need for a subsequent systematic review. They tend to have a broader focus than many other types of review; however, they still require a focused question.

  • Peters MDJ, Godfrey C, McInerney P, Munn Z, Tricco AC, Khalil, H. Chapter 11: Scoping Reviews (2020 version). In: Aromataris E, Munn Z (Editors). Joanna Briggs Institute Reviewer's Manual, JBI, 2020. 
  • Statement / Explanatory paper

Scoping reviews: what they are and how you can do them - Series of Cochrane Training videos presented by Dr. Andrea C. Tricco and Kafayat Oboirien

Martin, G. P., Jenkins, D. A., Bull, L., Sisk, R., Lin, L., Hulme, W., ... & Group, P. H. A. (2020). Toward a framework for the design, implementation, and reporting of methodology scoping reviews . Journal of Clinical Epidemiology, 127, 191-197.

Khalil, H., McInerney, P., Pollock, D., Alexander, L., Munn, Z., Tricco, A. C., ... & Peters, M. D. (2021). Practical guide to undertaking scoping reviews for pharmacy clinicians, researchers and policymakers . Journal of clinical pharmacy and therapeutics.

Colquhoun, H (2016) Current best practices for the conduct of scoping reviews (presentation)

Arksey H & O'Malley L (2005) Scoping studies: towards a methodological framework , International Journal of Social Research Methodology, 8:1, 19-32, DOI: 10.1080/1364557032000119616

Umbrella reviews

  • Pollock M, Fernandes RM, Becker LA, Pieper D, Hartling L. Chapter V: Overviews of Reviews . In: Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA (editors). Cochrane Handbook for Systematic Reviews of Interventions version 6.2 (updated February 2021). Cochrane, 2021. Available from www.training.cochrane.org/handbook .  
  • Aromataris E, Fernandez R, Godfrey C, Holly C, Khalil H, Tungpunkom P. Chapter 10: Umbrella Reviews . In: Aromataris E, Munn Z (Editors). JBI Manual for Evidence Synthesis. JBI, 2020. Available from https://jbi-global-wiki.refined.site/space/MANUAL/4687363 .
  • Aromataris, Edoardo; Fernandez, Ritin; Godfrey, Christina M.; Holly, Cheryl; Khalil, Hanan; Tungpunkom, Patraporn. Summarizing systematic reviews: methodological development, conduct and reporting of an umbrella review approach , International Journal of Evidence-Based Healthcare: September 2015 - Volume 13 - Issue 3 - p 132-140.

Meta-syntheses

Noyes, J., Booth, A., Cargo, M., Flemming, K., Garside, R., Hannes, K., ... & Thomas, J. (2018). Cochrane Qualitative and Implementation Methods Group guidance series—paper 1: introduction . Journal of clinical epidemiology, 97, 35-38.

Harris, J. L., Booth, A., Cargo, M., Hannes, K., Harden, A., Flemming, K., ... & Noyes, J. (2018). Cochrane Qualitative and Implementation Methods Group guidance series—paper 2: methods for question formulation, searching, and protocol development for qualitative evidence synthesis . Journal of clinical epidemiology, 97, 39-48.

Noyes, J., Booth, A., Flemming, K., Garside, R., Harden, A., Lewin, S., ... & Thomas, J. (2018). Cochrane Qualitative and Implementation Methods Group guidance series—paper 3: methods for assessing methodological limitations, data extraction and synthesis, and confidence in synthesized qualitative findings . Journal of clinical epidemiology, 97, 49-58.

Cargo, M., Harris, J., Pantoja, T., Booth, A., Harden, A., Hannes, K., ... & Noyes, J. (2018). Cochrane Qualitative and Implementation Methods Group guidance series—paper 4: methods for assessing evidence on intervention implementation . Journal of clinical epidemiology, 97, 59-69.

Harden, A., Thomas, J., Cargo, M., Harris, J., Pantoja, T., Flemming, K., ... & Noyes, J. (2018). Cochrane Qualitative and Implementation Methods Group guidance series—paper 5: methods for integrating qualitative and implementation evidence within intervention effectiveness reviews . Journal of clinical epidemiology, 97, 70-78.

Flemming, K., Booth, A., Hannes, K., Cargo, M., & Noyes, J. (2018). Cochrane Qualitative and Implementation Methods Group guidance series—Paper 6: Reporting guidelines for qualitative, implementation, and process evaluation evidence syntheses . Journal of Clinical Epidemiology, 97, 79-85.

Walsh, D. and Downe, S. (2005), Meta-synthesis method for qualitative research: a literature review . Journal of Advanced Nursing, 50: 204–211. doi:10.1111/j.1365-2648.2005.03380.x

Living reviews

  • Akl, E.A., Meerpohl, J.J., Elliott, J., Kahale, L.A., Schünemann, H.J., Agoritsas, T., Hilton, J., Perron, C., Akl, E., Hodder, R. and Pestridge, C., 2017. Living systematic reviews: 4. Living guideline recommendations . Journal of clinical epidemiology, 91, pp.47-53.

Qualitative systematic reviews

  • Dixon-Woods, M., Bonas, S., Booth, A., Jones, D. R., Miller, T., Sutton, A. J., . . . Young, B. (2006). How can systematic reviews incorporate qualitative research? A critical perspective . Qualitative Research,6(1), 27–44.
  • Thomas, J., & Harden, A. (2008). Methods for the thematic synthesis of qualitative research in systematic reviews . BMC Medical Research Methodology,8, 45–45.

Mixed methods systematic review

  • Lizarondo L, Stern C, Carrier J, Godfrey C, Rieger K, Salmond S, Apostolo J, Kirkpatrick P, Loveday H. Chapter 8: Mixed methods systematic reviews . In: Aromataris E, Munn Z (Editors). JBI Manual for Evidence Synthesis. JBI, 2020. Available from https://synthesismanual.jbi.global. https://doi.org/10.46658/JBIMES-20-09
  • Pearson, A, White, H, Bath-Hextall, F, Salmond, S, Apostolo, J, & Kirkpatrick, P 2015, ' A mixed-methods approach to systematic reviews ', International Journal of Evidence-Based Healthcare, vol. 13, no. 3, p. 121-131. Available from: 10.1097/XEB.0000000000000052
  • Dixon-Woods, M., Agarwal, S., Jones, D., Young, B., & Sutton, A. (2005). Synthesising qualitative and quantitative evidence: A review of possible methods . Journal of Health Services Research &Policy,10(1), 45–53.

Realist reviews

The RAMESES Projects - Includes information on publication, quality, and reporting standards, as well as training materials for realist reviews, meta-narrative reviews, and realist evaluation.

Rycroft-Malone, J., McCormack, B., Hutchinson, A. M., DeCorby, K., Bucknall, T. K., Kent, B., ... & Wilson, V. (2012). Realist synthesis: illustrating the method for implementation research . Implementation Science, 7(1), 1-10.

Wong, G., Westhorp, G., Manzano, A. et al. RAMESES II reporting standards for realist evaluations. BMC Med 14, 96 (2016). https://doi.org/10.1186/s12916-016-0643-1

Wong, G., Greenhalgh, T., Westhorp, G., Buckingham, J., & Pawson, R. (2013). RAMESES publication standards: realist syntheses. BMC medicine, 11, 21. https://doi.org/10.1186/1741-7015-11-21


Social sciences

  • Chapman, K. (2021). Characteristics of systematic reviews in the social sciences . The Journal of Academic Librarianship, 47(5), 102396.
  • Crisp, B. R. (2015). Systematic reviews: A social work perspective . Australian Social Work, 68(3), 284-295.  

Further Reading

Uttley, L., Montgomery, P. The influence of the team in conducting a systematic review . Syst Rev 6, 149 (2017). https://doi.org/10.1186/s13643-017-0548-x

  • URL: https://deakin.libguides.com/systematicreview


Systematic reviews: Formulate your question


Defining the question

Defining the research question and developing a protocol are the essential first steps in your systematic review.  The success of your systematic review depends on a clear and focused question, so take the time to get it right.

  • A framework may help you to identify the key concepts in your research question and to organise your search terms in one of the Library's databases.
  • Several frameworks or models exist to help researchers structure a research question and three of these are outlined on this page: PICO, SPICE and SPIDER.
  • It is advisable to conduct some scoping searches in a database to look for any reviews on your research topic and establish whether your topic is an original one.
  • You will need to identify the relevant database(s) to search; your choice will depend on your topic and the research question you need to answer.
  • By scanning the titles, abstracts and references retrieved in a scoping search, you will reveal the terms used by authors to describe the concepts in your research question, including the synonyms or abbreviations that you may wish to add to a database search.
  • The Library can help you to search for existing reviews: make an appointment with your Subject Librarian to learn more.

The PICO framework

PICO may be the most well-known model framework: it has its origins in epidemiology and is now widely used for evidence-based practice and systematic reviews.

PICO normally stands for Population (or Patient or Problem)  - Intervention - Comparator - Outcome.


The SPICE framework

SPICE is used mostly in social science and healthcare research.  It stands for Setting - Population (or Perspective) - Intervention - Comparator - Evaluation.  It is similar to PICO and was devised by Booth (2004).  

The examples in the SPICE table are based on the following research question:  Can mortality rates for older people be reduced if a greater proportion are examined initially by allied health staff in A&E? Source: Booth, A (2004) Formulating answerable questions. In Booth, A & Brice, A (Eds) Evidence Based Practice for Information Professionals: A handbook. (pp. 61-70) London: Facet Publishing.

The SPIDER framework

SPIDER was adapted from the PICO framework in order to include searches for qualitative and mixed-methods research. It was developed by Cooke, Smith and Booth (2012).

Source: Cooke, A., Smith, D. & Booth, A. (2012). Beyond PICO: the SPIDER tool for qualitative evidence synthesis. Qualitative Health Research, 22(10), 1435-1443. http://doi.org/10.1177/1049732312452938.

More advice about formulating a research question

Module 1 in Cochrane Interactive Learning explains the importance of the research question, some types of review question, and the PICO framework. The Library subscribes to Cochrane Interactive Learning.

Log in to Module 1:  Cochrane Interactive Learning

  • URL: https://library.bath.ac.uk/systematic-reviews
Mayo Clinic Libraries

Systematic reviews: Develop & Refine Your Research Question


A clear, well-defined, and answerable research question is essential for any systematic review, meta-analysis, or other form of evidence synthesis. Above all, the question must be answerable, so spend time refining it.

  • PICO Worksheet

PICO Framework

Focused question frameworks.

The PICO mnemonic is frequently used for framing quantitative clinical research questions. 1

The PEO acronym is appropriate for studies of diagnostic accuracy. 2

The SPICE framework is effective “for formulating questions about qualitative or improvement research.” 3

The SPIDER search strategy was designed for framing questions best answered by qualitative and mixed-methods research. 4

References & Recommended Reading

1. Anastasiadis E, Rajan P, Winchester CL. Framing a research question: The first and most vital step in planning research. Journal of Clinical Urology. 2015;8(6):409-411.

2. Speckman RA, Friedly JL. Asking Structured, Answerable Clinical Questions Using the Population, Intervention/Comparator, Outcome (PICO) Framework. PM&R. 2019;11(5):548-553.

3. Knowledge Into Action Toolkit. NHS Scotland. http://www.knowledge.scot.nhs.uk/k2atoolkit/source/identify-what-you-need-to-know/spice.aspx. Accessed April 23, 2021.

4. Cooke A, Smith D, Booth A. Beyond PICO: the SPIDER tool for qualitative evidence synthesis. Qualitative Health Research. 2012;22(10):1435-1443.

  • URL: https://libraryguides.mayo.edu/systematicreviewprocess


Systematic Reviews: Formulate your question and protocol


This video illustrates how to use the PICO framework to formulate an effective research question, and it also shows how to search a database using the search terms identified. The database used in this video is CINAHL but the process is very similar in databases from other companies as well.

Recommended Reading

  • BMJ Best Practice Advice on using the PICO framework.

A longer video on the important pre-planning and protocol development stages of systematic reviews, including tips for success and pitfalls to avoid.

You can start watching this video from around the 9-minute mark.

Formulate Your Question

Having a focused and specific research question is especially important when undertaking a systematic review. If your search question is too broad you will retrieve too many search results and you will be unable to work with them all. If your question is too narrow, you may miss relevant papers. Taking the time to break down your question into separate, focused concepts will also help you search the databases effectively.

Deciding on your inclusion and exclusion criteria early on in the research process can also help you when it comes to focusing your research question and your search strategy.

A literature searching planning template can help to break your search question down into concepts and to record alternative search terms. Frameworks such as PICO and PEO can also help guide your search. A planning template is available to download below, and there is also information on PICO and other frameworks ( Adapted from: https://libguides.kcl.ac.uk/systematicreview/define).

Looking at published systematic reviews can give you ideas of how to construct a focused research question and an effective search strategy.

Example of an unfocused research question: How can deep vein thrombosis be prevented?

Example of a focused research question: What are the effects of wearing compression stockings versus not wearing them for preventing DVT in people travelling on flights lasting at least four hours?

In this Cochrane systematic review by Clarke et al. (2021), publications on randomised trials of compression stockings versus no stockings in passengers on flights lasting at least four hours were gathered. The appendix of the published review contains the comprehensive search strategy used. This research question focuses on a particular method (wearing compression stockings) in a particular setting (flights of at least four hours) and includes only specific studies (randomised trials). An additional way of focusing a question could be to look at a particular section of the population.

Clarke, M. J., Broderick, C., Hopewell, S., Juszczak, E. and Eisinga, A., 2021. Compression stockings for preventing deep vein thrombosis in airline passengers. Cochrane Database of Systematic Reviews 2021, Issue 4. Art. No.: CD004002 [Accessed 30th April 2021]. Available from: https://doi.org/10.1002/14651858.CD004002.pub4

There are many different frameworks that you can use to structure your research question with clear parameters. The most commonly used framework is PICO:

  • Population: This could be the general population, or a specific group defined by age, socioeconomic status, location and so on.
  • Intervention: This is the therapy/test/strategy to be investigated and can include medication, exercise, environmental factors, and counselling, for example. It may help to think of this as 'the thing that will make a difference'.
  • Comparator: This is a measure that you will use to compare results against. This can be patients who received no treatment or a placebo, or people who received an alternative treatment/exposure, for instance.
  • Outcome: What outcome is significant to your population or issue? This may be different from the outcome measures used in the studies.

Adapted from:  https://libguides.reading.ac.uk/systematic-review/protocol

  • Developing an efficient search strategy using PICO A tool created by Health Evidence to help construct a search strategy using PICO
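To make the idea of combining concepts and alternative terms concrete, here is a minimal Python sketch based on the compression stockings example above. The synonyms are illustrative placeholders rather than a validated strategy, and real database syntax (field tags, subject headings, truncation) will differ.

```python
# A minimal sketch, not a validated strategy: combine synonyms for each PICO
# concept with OR, then combine the concept blocks with AND. The concepts and
# synonyms below are illustrative placeholders only.
pico_terms = {
    "Population": ["airline passengers", "air travel", "long-haul flights"],
    "Intervention": ["compression stockings", "graduated compression hosiery"],
    "Comparison": [],  # often not searched; "no stockings" is implied
    "Outcome": ["deep vein thrombosis", "DVT", "venous thromboembolism"],
}

def build_search(concepts: dict) -> str:
    """OR together synonyms within a concept, AND the concept blocks together."""
    blocks = []
    for terms in concepts.values():
        if terms:  # skip concepts with no searchable terms
            blocks.append("(" + " OR ".join(f'"{t}"' for t in terms) + ")")
    return " AND ".join(blocks)

print(build_search(pico_terms))
# ("airline passengers" OR ...) AND ("compression stockings" OR ...) AND ("deep vein thrombosis" OR ...)
```

Synonyms within a concept are combined with OR, and the concept blocks are then combined with AND, which mirrors how structured searches are usually entered in bibliographic databases.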

Other Frameworks: alternatives to PICO

As well as PICO, there are other frameworks available, for instance:

  • PICOT: Population, Intervention, Comparison, Outcome, Time
  • PEO: Population and/or Problem, Exposures, Outcome
  • SPICE: Setting, Population or Perspective, Intervention, Comparison, Evaluation
  • ECLIPSE: Expectations, Client Group, Location, Impact, Professionals Involved, Service
  • SPIDER: Sample, Phenomenon of interest, Design, Evaluation, Research type

This page from City, University of London, contains useful information on several frameworks, including the ones listed above.

Develop Your Protocol

After you have created your research question, the next step is to develop a protocol which outlines the study methodology. You need to include the following:

  • Research question and aims
  • Criteria for inclusion and exclusion
  • Search strategy
  • Selecting studies for inclusion
  • Quality assessment
  • Data extraction and analysis
  • Synthesis of results
  • Dissemination

To find out how much has been published on a particular topic, you can perform scoping searches in relevant databases. This can help you decide on the time limits of your study.

  • Systematic review protocol template This template from the University of Reading can help you plan your protocol.
  • Protocol Guidance This document from the University of York describes what each element of your protocol should cover.

Register Your Protocol

It is good practice to register your protocol and often this is a requirement for future publication of the review.

You can register your protocol here:

  • PROSPERO: international prospective register of systematic reviews
  • Cochrane Collaboration, Getting Involved
  • Campbell Collaboration, Co-ordinating Groups

Adapted from:   https://libguides.bodleian.ox.ac.uk/systematic-reviews/methodology

  • Last Updated: Sep 12, 2023 5:29 PM
  • URL: https://libguides.qmu.ac.uk/systematic-reviews

NYU Health Sciences Library


Systematic Reviews

  • Types of Reviews
  • 1) Formulating a Research Question
  • 2) Developing a Protocol
  • 3) Searching for Studies
  • 4) Screening
  • 5) Data Extraction
  • 6) Critical Appraisal
  • 7) Synthesis and Summary
  • 8) Reporting the Review Process
  • Tools and Resources
  • Library Support

Question Formats

A well-formulated and focused question is essential to the conduct of the review. The research question defines the scope of the project and informs the sources to search, the search syntax, and the eligibility criteria.

Here is a list of commonly used frameworks to help you articulate a clearly defined research question: 

  • Last Updated: May 23, 2024 8:40 PM
  • URL: https://hslguides.med.nyu.edu/systematicreviews


RMIT University

Teaching and Research guides

Systematic reviews.

  • Starting the review
  • About systematic reviews

Develop your research question

Types of questions, PICO framework, SPICE, SPIDER and ECLIPSE.

  • Plan your search
  • Sources to search
  • Search example
  • Screen and analyse
  • Guides and software
  • Further help

A systematic review is an in-depth attempt to answer a specific, focused question in a methodical way.

Start with a clearly defined, researchable question that accurately and succinctly sums up the review's line of inquiry.

A well formulated review question will help determine your inclusion and exclusion criteria, the creation of your search strategy, the collection of data and the presentation of your findings.

It is important to ensure the question:

  • relates to what you really need to know about your topic
  • is answerable, specific and focused
  • strikes a suitable balance between being too broad and too narrow in scope
  • has been formulated with care so as to avoid missing relevant studies or collecting a potentially biased result set

Is the research question justified?

  • Do healthcare providers, consumers, researchers, and policy makers need this evidence for their healthcare decisions?
  • Is there a gap in the current literature? The question should be worthy of an answer.
  • Has a similar review been done before?

Question types

To help focus the question and determine the most appropriate type of evidence, consider the type of question. Is there a study design (e.g. randomized controlled trial, meta-analysis) that would provide the best answer?

Will your research question focus on:

  • Diagnosis : How to select and interpret diagnostic tests
  • Intervention/Therapy : How to select treatments to offer patients that do more good than harm and that are worth the efforts and costs of using them
  • Prediction/Prognosis : How to estimate the patient’s likely clinical course over time and anticipate likely complications of disease
  • Exploration/Etiology : How to identify causes for disease, including genetics

If appropriate, use a  framework  to help in the development of your research question. A framework will assist in identifying the important concepts in your question.

A good question will combine several concepts. Identifying the relevant concepts is crucial to successful development and execution of your systematic search. Your research question should provide you with a checklist for the main concepts to be included in your search strategy.

Using a framework to aid in the development of a research question can be useful. The more you understand your question the more likely you are to obtain relevant results for your review. There are a number of different frameworks available.

A technique often used in research for formulating a  clinical research question  is the PICO   model. PICO is explored in more detail in this guide. Slightly different versions of this concept are used to search for quantitative and qualitative reviews.

For quantitative reviews:

PICO = Population, Intervention, Comparison, Outcome

For qualitative reviews, adaptations of this approach such as SPIDER (Sample, Phenomenon of interest, Design, Evaluation, Research type) are commonly used; see the references below.

  • Booth, A. (2006). Clear and present questions: Formulating questions for evidence based practice. Library hi tech, 24(3), 355-368.
  • Cooke, A., Smith, D., & Booth, A. (2012). Beyond PICO: The SPIDER tool for qualitative evidence synthesis. Qualitative Health Research, 22(10), 1435-1443.
  • Wildridge, V., & Bell, L. (2002). How CLIP became ECLIPSE: A mnemonic to assist in searching for health policy/management information. Health Information & Libraries Journal, 19(2), 113-115.

Creative Commons license: CC-BY-NC.

  • Last Updated: Jun 4, 2024 9:31 AM
  • URL: https://rmit.libguides.com/systematicreviews

Duke University Libraries

Systematic Reviews for Non-Health Sciences

  • 1. Formulating the research question
  • Getting started
  • Types of reviews
  • 0. Planning the systematic review

Formulating a research question

Purpose of a framework, selecting a framework.

  • 2. Developing the protocol
  • 3. Searching, screening, and selection of articles
  • 4. Critical appraisal
  • 5. Writing and publishing
  • Software and tools
  • Software tutorials
  • Resources by discipline
  • Duke Med Center Library: Systematic reviews
  • Overwhelmed? General literature review guidance


Formulating a question.

Formulating a strong research question for a systematic review can be a lengthy process. While you may have an idea about the topic you want to explore, your specific research question is what will drive your review and requires some consideration. 

You will want to conduct preliminary  or  exploratory searches  of the literature as you refine your question. In these searches you will want to:

  • Determine if a systematic review has already been conducted on your topic and if so, how yours might be different, or how you might shift or narrow your anticipated focus
  • Scope the literature to determine if there is enough literature on your topic to conduct a systematic review
  • Identify key concepts and terminology
  • Identify seminal or landmark studies
  • Identify key studies that you can test your research strategy against (more on that later)
  • Begin to identify databases that might be useful to your search question

Systematic review vs. other reviews

Systematic reviews require a narrow and specific research question. The goal of a systematic review is to provide an evidence synthesis of ALL research performed on one particular topic. So, your research question should be clearly answerable from the data you gather from the studies included in your review.

Ask yourself if your question even warrants a systematic review (has it been answered before?). If your question is broader in scope, or you aren't sure whether it has been answered, you might look into performing a systematic map or scoping review instead.

Learn more about systematic reviews versus scoping reviews:

  • CEE. (2022). Section 2:Identifying the need for evidence, determining the evidence synthesis type, and establishing a Review Team. Collaboration for Environmental Evidence.  https://environmentalevidence.org/information-for-authors/2-need-for-evidence-synthesis-type-and-review-team-2/
  • DistillerSR. (2022). The difference between systematic reviews and scoping reviews. DistillerSR.  https://www.distillersr.com/resources/systematic-literature-reviews/the-difference-between-systematic-reviews-and-scoping-reviews
  • Nalen, CZ. (2022). What is a scoping review? AJE.  https://www.aje.com/arc/what-is-a-scoping-review/

Using a framework to structure your question can help you:

  • Frame your entire research process
  • Determine the scope of your review
  • Provide a focus for your searches
  • Help you identify key concepts
  • Guide the selection of your papers

There are different frameworks you can use to help structure a question.



The PICO or PECO framework is typically used in clinical and health sciences-related research, but it can also be adapted for other quantitative research.

P — Patient / Problem / Population

I / E — Intervention / Indicator / phenomenon of Interest / Exposure / Event 

C  — Comparison / Context / Control

O — Outcome

Example topic : Health impact of hazardous waste exposure

Fazzo, L., Minichilli, F., Santoro, M., Ceccarini, A., Della Seta, M., Bianchi, F., Comba, P., & Martuzzi, M. (2017). Hazardous waste and health impact: A systematic review of the scientific literature.  Environmental Health ,  16 (1), 107.  https://doi.org/10.1186/s12940-017-0311-8

The SPICE framework is useful for both qualitative and mixed-method research.

S — Setting (where?)

P — Perspective (for whom?)

I — Intervention / Exposure (what?)

C — Comparison (compared with what?)

E — Evaluation (with what result?)

Learn more : Booth, A. (2006). Clear and present questions: Formulating questions for evidence based practice.  Library Hi Tech ,  24 (3), 355-368.  https://doi.org/10.1108/07378830610692127

The SPIDER framework is useful for both qualitative and mixed-method research.

S — Sample

PI — Phenomenon of Interest

D — Design

E — Evaluation

R — Research type

Learn more : Cooke, A., Smith, D., & Booth, A. (2012). Beyond PICO: The SPIDER tool for qualitative evidence synthesis.  Qualitative Health Research, 22 (10), 1435-1443.  https://doi.org/10.1177/1049732312452938

An exhaustive list of research question frameworks is available from University of Maryland Libraries.

You might find that your topic does not always fall into one of the models listed on this page. You can always modify a model to make it work for your topic, and either remove or incorporate additional elements. Be sure to document in your review the established framework that yours is based on and how it has been modified.

  • Last Updated: May 8, 2024 8:11 AM
  • URL: https://guides.library.duke.edu/systematicreviews


City, University of London

Library Services

  • Library Services Home

Advanced literature search and systematic reviews

  • Introduction

Formulate your question

Using frameworks to structure your question, selecting a framework, inclusion and exclusion criteria, the scoping search.

  • Videos and Support
  • Step 2 - Develop a search strategy
  • Step 3 - Selecting databases
  • Step 4 - Develop your protocol
  • Step 5 - Perform your search
  • Step 6 - Searching grey literature
  • Step 7 - Manage your results
  • Step 8 - Analyse and understand your results
  • Step 9 - Write your methodology
  • Videos and support

Formulating a clear, well-defined, relevant and answerable research question is essential to finding the best evidence for your topic. On this page we outline the approaches to developing a research question that can be used as the basis for a review. 

Frameworks have been designed to help you structure research questions and identify the main concepts you want to focus on. Your topic may not fit perfectly into one of the frameworks listed on this page, but just using part of a framework can be sufficient.

The framework you should use depends on the type of question you will be researching.

PICO is a framework used for formulating a clinical research question, i.e. questions covering the effectiveness of an intervention, treatment, etc. 

Extensions to PICO

If your topic has additional concepts, there are extensions to the PICO framework that you can use: 

PICOS - S stands for study design.  Use this framework if you are only interested in examining specific designs of study. 

PICOT - T  stands for timeframe.  Use this framework if your outcomes need to be measured in a certain amount of time, e.g. 24 hours after surgery. 

PICOC - C stands for context.  Use this framework if you are focussing on a particular organisation or circumstances or scenario. 

Other frameworks described on this page include those designed for:

  • questions relating to prognosis
  • questions relating to the prevalence / incidence of a condition
  • questions relating to cost effectiveness, economic evaluations and service improvements
  • qualitative (and mixed quantitative and qualitative) questions evaluating experiences and meaningfulness

When you formulate a research question you also need to consider your inclusion and exclusion criteria. These are pre-defined characteristics that the literature must have in order to be included in the review. Different factors can be used as inclusion or exclusion criteria.

The most common inclusion / exclusion criteria are: 

Geographic location

Limit the review to studies from particular geographical areas.

Date

How far back do you wish to search for information? (For systematic reviews you need to give a reason if you choose to restrict your search by date.)

Publication type

Commonly excluded publication types are reviews and editorials.

Participants

Adults, child studies, certain age groups?

Language

Limit the review to studies published in particular languages.

Peer review

Does the study have to be reviewed by accredited professionals in the field?

Study design

Randomised controlled trials, cohort studies?

Setting

Primary care, hospitals, general practice, schools?
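As an illustration only, the Python sketch below shows one way criteria like those above could be turned into explicit yes/no checks applied to each record during screening; the particular criteria, field names, and thresholds are invented for the example and are not recommendations.

```python
# Illustration only: inclusion/exclusion criteria expressed as explicit yes/no
# checks applied to each record during screening. The criteria, field names,
# and thresholds here are invented for the example, not recommendations.
criteria = {
    "study design": lambda r: r["design"] in {"randomised controlled trial", "cohort study"},
    "participants": lambda r: r["population"] == "adults",
    "language":     lambda r: r["language"] == "English",
    "date":         lambda r: r["year"] >= 2010,  # justify any date restriction
    "peer review":  lambda r: r["peer_reviewed"],
}

def screen(record: dict) -> tuple:
    """Return (include?, list of criteria the record failed)."""
    failed = [name for name, check in criteria.items() if not check(record)]
    return (not failed, failed)

record = {"design": "cohort study", "population": "adults",
          "language": "English", "year": 2015, "peer_reviewed": True}
print(screen(record))  # (True, [])
```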

Once you have a clear research question, you need to conduct a scoping search to identify:

  • The search terms you should use to retrieve information on your topic.
  • The body of the literature that has already been written on your topic.
  • Whether a systematic review covering the question you are considering has already been published, or has been registered and is in the process of being completed. If that is the case, you need to modify your research question. If the systematic review was completed over five years ago, you can perform an update of the same question. 

Search the following resources to find systematic reviews, either completed or in progress. Check the Supporting videos and online tutorials page on this guide for a demonstration of how to do a scoping search. 

  • PROSPERO To search for systematic reviews that are "in progress" and those that have already been published. Accessibility information for PROSPERO
  • TRIP Pro A clinical search engine providing access to research evidence in the form of primary research articles, clinical trials, systematic reviews and evidence summaries. Grey literature is also available in the form of clinical guidelines, ongoing trials, blogs, videos and patient information leaflets. TripPRO is the advanced version of Trip, providing more full-text access and more systematic reviews than the basic version.
  • The Cochrane Library A collection of databases that contain different types of high-quality, independent evidence to inform healthcare decision-making.
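Scoping searches are normally run directly in the resources listed above, but as a rough, hypothetical illustration of gauging the volume of literature on a topic, the Python sketch below asks PubMed's public E-utilities service for a hit count; the query string is a placeholder to be replaced with your own draft terms.

```python
import requests

# A rough, hypothetical illustration of gauging the volume of literature on a
# topic by asking PubMed's public E-utilities service for a hit count.
# The query string is a placeholder; replace it with your own draft terms.
def pubmed_hit_count(query: str) -> int:
    response = requests.get(
        "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
        params={"db": "pubmed", "term": query, "retmode": "json", "retmax": 0},
        timeout=30,
    )
    response.raise_for_status()
    return int(response.json()["esearchresult"]["count"])

print(pubmed_hit_count('"compression stockings" AND "deep vein thrombosis"'))
```

Comparing hit counts for a few candidate combinations of terms can give an early sense of whether a question is too broad or too narrow before the full search strategy is developed.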

To find primary research related to your topic you can search databases available via: 

  • EBSCOhost A platform providing access to databases covering a variety of subjects including business, economics, education, environment, food science, health, politics and sociology. Accessibility information for EBSCO
  • Ovid Online A platform providing access to a number of health databases covering general health topics as well as allied health; complementary medicine, health management, international health, maternity care, nursing and social policy. Accessibility information for Ovid Online
  • Last Updated: Apr 25, 2024 9:13 AM
  • URL: https://libguides.city.ac.uk/systematic-reviews


Systematic Reviews

  • Formulating a Research Question
  • Documentary Research
  • Registering the Protocol
  • Processing Data

Question Frameworks


When doing a systematic review, having a research question framework can help you to identify key concepts of your research and facilitate the process of article selection for inclusion in the systematic review.

Framework for quantitative studies

- PICO is commonly used to frame quantitative systematic review questions and contains the following elements:

  • P – Patient, Problem or Population: demographic factors of your patient/population such as age, gender, ethnicity, socioeconomic status, etc.
  • I – Intervention: which main intervention, prognostic factor, or exposure are you considering?
  • C – Comparison or Control: what is the main alternative to compare with the intervention?
  • O – Outcome: what do you want to accomplish, measure, improve or affect? (ex: reduce mortality, improve water access, etc.). Outcomes should be measurable by indicators (e.g. quality of life, etc.)

Example: What are the socioeconomic and environmental effects ( outcome ) of the program "Payment for Environmental Services" ( intervention ) in low and middle income countries ( population )? In this case, one can compare groups that receive the intervention with groups without intervention ( comparison ).

Frameworks for qualitative research

- PICO for Qualitative Studies

As the PICO tool does not always accommodate terms relating to qualitative research or specific qualitative designs, it has often been modified in practice to “PICOS”, where the “S” refers to the Study design, thus limiting the number of irrelevant articles.

If we use the previous example (the effects of the program "payment for environmental services"), where mixed research methods are employed, the study design (focus groups, interviews, observations, etc.) can be useful to find studies.

The SPIDER question format was adapted from the PICO tool to search for qualitative and mixed-methods research. Questions based on this format identify the following concepts:

  • Sample
  • Phenomenon of interest
  • Design
  • Evaluation
  • Research type

Example: What are the experiences ( evaluation ) of young parents ( sample ) of attending antenatal education ( Phenomenon of Interest )? Design : Interviews, surveys. Research Type : the type of qualitative research (phenomenology, ethnography, grounded theory, case study).

PICO, PICOS or SPIDER?

According to a comparison study between PICO, PICOS and SPIDER “the recommendations for practice are to use the PICO tool for a fully comprehensive search but the PICOS tool where time and resources are limited. The SPIDER tool would not be recommended due to the risk of not identifying relevant papers, but has potential due to its greater specificity”.

  • Last Updated: Jul 21, 2023 12:12 PM
  • URL: https://libguides.graduateinstitute.ch/systematic_reviews


Guidance to best tools and practices for systematic reviews

Kat Kolaski

1 Departments of Orthopaedic Surgery, Pediatrics, and Neurology, Wake Forest School of Medicine, Winston-Salem, NC USA

Lynne Romeiser Logan

2 Department of Physical Medicine and Rehabilitation, SUNY Upstate Medical University, Syracuse, NY USA

John P. A. Ioannidis

3 Departments of Medicine, of Epidemiology and Population Health, of Biomedical Data Science, and of Statistics, and Meta-Research Innovation Center at Stanford (METRICS), Stanford University School of Medicine, Stanford, CA USA

Abstract

Data continue to accumulate indicating that many systematic reviews are methodologically flawed, biased, redundant, or uninformative. Some improvements have occurred in recent years based on empirical methods research and standardization of appraisal tools; however, many authors do not routinely or consistently apply these updated methods. In addition, guideline developers, peer reviewers, and journal editors often disregard current methodological standards. Although extensively acknowledged and explored in the methodological literature, most clinicians seem unaware of these issues and may automatically accept evidence syntheses (and clinical practice guidelines based on their conclusions) as trustworthy.

A plethora of methods and tools are recommended for the development and evaluation of evidence syntheses. It is important to understand what these are intended to do (and cannot do) and how they can be utilized. Our objective is to distill this sprawling information into a format that is understandable and readily accessible to authors, peer reviewers, and editors. In doing so, we aim to promote appreciation and understanding of the demanding science of evidence synthesis among stakeholders. We focus on well-documented deficiencies in key components of evidence syntheses to elucidate the rationale for current standards. The constructs underlying the tools developed to assess reporting, risk of bias, and methodological quality of evidence syntheses are distinguished from those involved in determining overall certainty of a body of evidence. Another important distinction is made between those tools used by authors to develop their syntheses as opposed to those used to ultimately judge their work.

Exemplar methods and research practices are described, complemented by novel pragmatic strategies to improve evidence syntheses. The latter include preferred terminology and a scheme to characterize types of research evidence. We organize best practice resources in a Concise Guide that can be widely adopted and adapted for routine implementation by authors and journals. Appropriate, informed use of these is encouraged, but we caution against their superficial application and emphasize their endorsement does not substitute for in-depth methodological training. By highlighting best practices with their rationale, we hope this guidance will inspire further evolution of methods and tools that can advance the field.

Supplementary Information

The online version contains supplementary material available at 10.1186/s13643-023-02255-9.

Part 1. The state of evidence synthesis

Evidence syntheses are commonly regarded as the foundation of evidence-based medicine (EBM). They are widely accredited for providing reliable evidence and, as such, they have significantly influenced medical research and clinical practice. Despite their uptake throughout health care and ubiquity in contemporary medical literature, some important aspects of evidence syntheses are generally overlooked or not well recognized. Evidence syntheses are mostly retrospective exercises, they often depend on weak or irreparably flawed data, and they may use tools that have acknowledged or yet unrecognized limitations. They are complicated and time-consuming undertakings prone to bias and errors. Production of a good evidence synthesis requires careful preparation and high levels of organization in order to limit potential pitfalls [ 1 ]. Many authors do not recognize the complexity of such an endeavor and the many methodological challenges they may encounter. Failure to do so is likely to result in research and resource waste.

Given their potential impact on people’s lives, it is crucial for evidence syntheses to correctly report on the current knowledge base. In order to be perceived as trustworthy, reliable demonstration of the accuracy of evidence syntheses is equally imperative [ 2 ]. Concerns about the trustworthiness of evidence syntheses are not recent developments. From the early years when EBM first began to gain traction until recent times when thousands of systematic reviews are published monthly [ 3 ] the rigor of evidence syntheses has always varied. Many systematic reviews and meta-analyses had obvious deficiencies because original methods and processes had gaps, lacked precision, and/or were not widely known. The situation has improved with empirical research concerning which methods to use and standardization of appraisal tools. However, given the geometrical increase in the number of evidence syntheses being published, a relatively larger pool of unreliable evidence syntheses is being published today.

Publication of methodological studies that critically appraise the methods used in evidence syntheses is increasing at a fast pace. This reflects the availability of tools specifically developed for this purpose [ 4 – 6 ]. Yet many clinical specialties report that alarming numbers of evidence syntheses fail on these assessments. The syntheses identified report on a broad range of common conditions including, but not limited to, cancer, [ 7 ] chronic obstructive pulmonary disease, [ 8 ] osteoporosis, [ 9 ] stroke, [ 10 ] cerebral palsy, [ 11 ] chronic low back pain, [ 12 ] refractive error, [ 13 ] major depression, [ 14 ] pain, [ 15 ] and obesity [ 16 , 17 ]. The situation is even more concerning with regard to evidence syntheses included in clinical practice guidelines (CPGs) [ 18 – 20 ]. Astonishingly, in a sample of CPGs published in 2017–18, more than half did not apply even basic systematic methods in the evidence syntheses used to inform their recommendations [ 21 ].

These reports, while not widely acknowledged, suggest there are pervasive problems not limited to evidence syntheses that evaluate specific kinds of interventions or include primary research of a particular study design (eg, randomized versus non-randomized) [ 22 ]. Similar concerns about the reliability of evidence syntheses have been expressed by proponents of EBM in highly circulated medical journals [ 23 – 26 ]. These publications have also raised awareness about redundancy, inadequate input of statistical expertise, and deficient reporting. These issues plague primary research as well; however, there is heightened concern for the impact of these deficiencies given the critical role of evidence syntheses in policy and clinical decision-making.

Methods and guidance to produce a reliable evidence synthesis

Several international consortiums of EBM experts and national health care organizations currently provide detailed guidance (Table 1). They draw criteria from the reporting and methodological standards of currently recommended appraisal tools, and regularly review and update their methods to reflect new information and changing needs. In addition, they endorse the Grading of Recommendations Assessment, Development and Evaluation (GRADE) system for rating the overall quality of a body of evidence [ 27 ]. These groups typically certify or commission systematic reviews that are published in exclusive databases (eg, Cochrane, JBI) or are used to develop government or agency sponsored guidelines or health technology assessments (eg, National Institute for Health and Care Excellence [NICE], Scottish Intercollegiate Guidelines Network [SIGN], Agency for Healthcare Research and Quality [AHRQ]). They offer developers of evidence syntheses various levels of methodological advice, technical and administrative support, and editorial assistance. Use of specific protocols and checklists is required for development teams within these groups, but their online methodological resources are accessible to any potential author.

Guidance for development of evidence syntheses

Notably, Cochrane is the largest single producer of evidence syntheses in biomedical research; however, these only account for 15% of the total [ 28 ]. The World Health Organization requires Cochrane standards be used to develop evidence syntheses that inform their CPGs [ 29 ]. Authors investigating questions of intervention effectiveness in syntheses developed for Cochrane follow the Methodological Expectations of Cochrane Intervention Reviews [ 30 ] and undergo multi-tiered peer review [ 31 , 32 ]. Several empirical evaluations have shown that Cochrane systematic reviews are of higher methodological quality compared with non-Cochrane reviews [ 4 , 7 , 9 , 11 , 14 , 32 – 35 ]. However, some of these assessments have biases: they may be conducted by Cochrane-affiliated authors, and they sometimes use scales and tools developed and used in the Cochrane environment and by its partners. In addition, evidence syntheses published in the Cochrane database are not subject to space or word restrictions, while non-Cochrane syntheses are often limited. As a result, information that may be relevant to the critical appraisal of non-Cochrane reviews is often removed or is relegated to online-only supplements that may not be readily or fully accessible [ 28 ].

Influences on the state of evidence synthesis

Many authors are familiar with the evidence syntheses produced by the leading EBM organizations but can be intimidated by the time and effort necessary to apply their standards. Instead of following their guidance, authors may employ methods that are discouraged or outdated [ 28 ]. Suboptimal methods described in the literature may then be taken up by others. For example, the Newcastle–Ottawa Scale (NOS) is a commonly used tool for appraising non-randomized studies [ 36 ]. Many authors justify their selection of this tool with reference to a publication that describes the unreliability of the NOS and recommends against its use [ 37 ]. Obviously, the authors who cite this report for that purpose have not read it. Authors and peer reviewers have a responsibility to use reliable and accurate methods and not copycat previous citations or substandard work [ 38 , 39 ]. Similar cautions may potentially extend to automation tools. These have concentrated on evidence searching [ 40 ] and selection given how demanding it is for humans to maintain truly up-to-date evidence [ 2 , 41 ]. Cochrane has deployed machine learning to identify randomized controlled trials (RCTs) and studies related to COVID-19, [ 2 , 42 ] but such tools are not yet commonly used [ 43 ]. The routine integration of automation tools in the development of future evidence syntheses should not displace the interpretive part of the process.

Editorials about unreliable or misleading systematic reviews highlight several of the intertwining factors that may contribute to continued publication of unreliable evidence syntheses: shortcomings and inconsistencies of the peer review process, lack of endorsement of current standards on the part of journal editors, the incentive structure of academia, industry influences, publication bias, and the lure of “predatory” journals [ 44 – 48 ]. At this juncture, clarification of the extent to which each of these factors contribute remains speculative, but their impact is likely to be synergistic.

Over time, the generalized acceptance of the conclusions of systematic reviews as incontrovertible has affected trends in the dissemination and uptake of evidence. Reporting of the results of evidence syntheses and recommendations of CPGs has shifted beyond medical journals to press releases and news headlines and, more recently, to the realm of social media and influencers. The lay public and policy makers may depend on these outlets for interpreting evidence syntheses and CPGs. Unfortunately, communication to the general public often reflects intentional or non-intentional misrepresentation or “spin” of the research findings [ 49 – 52 ]. News and social media outlets also tend to reduce conclusions on a body of evidence and recommendations for treatment to binary choices (eg, “do it” versus “don’t do it”) that may be assigned an actionable symbol (eg, red/green traffic lights, smiley/frowning face emoji).

Strategies for improvement

Many authors and peer reviewers are volunteer health care professionals or trainees who lack formal training in evidence synthesis [ 46 , 53 ]. Informing them about research methodology could increase the likelihood they will apply rigorous methods [ 25 , 33 , 45 ]. We tackle this challenge, from both a theoretical and a practical perspective, by offering guidance applicable to any specialty. It is based on recent methodological research that is extensively referenced to promote self-study. However, the information presented is not intended to be a substitute for committed training in evidence synthesis methodology; instead, we hope to inspire our target audience to seek such training. We also hope to inform a broader audience of clinicians and guideline developers influenced by evidence syntheses. Notably, these communities often include the same members who serve in different capacities.

In the following sections, we highlight methodological concepts and practices that may be unfamiliar, problematic, confusing, or controversial. In Part 2, we consider various types of evidence syntheses and the types of research evidence summarized by them. In Part 3, we examine some widely used (and misused) tools for the critical appraisal of systematic reviews and reporting guidelines for evidence syntheses. In Part 4, we discuss how to meet methodological conduct standards applicable to key components of systematic reviews. In Part 5, we describe the merits and caveats of rating the overall certainty of a body of evidence. Finally, in Part 6, we summarize suggested terminology, methods, and tools for development and evaluation of evidence syntheses that reflect current best practices.

Part 2. Types of syntheses and research evidence

A good foundation for the development of evidence syntheses requires an appreciation of their various methodologies and the ability to correctly identify the types of research potentially available for inclusion in the synthesis.

Types of evidence syntheses

Systematic reviews have historically focused on the benefits and harms of interventions; over time, various types of systematic reviews have emerged to address the diverse information needs of clinicians, patients, and policy makers [ 54 ]. Systematic reviews with traditional components have become defined by the different topics they assess (Table 2.1). In addition, other distinctive types of evidence syntheses have evolved, including overviews or umbrella reviews, scoping reviews, rapid reviews, and living reviews. The popularity of these has been increasing in recent years [ 55 – 58 ]. A summary of the development, methods, available guidance, and indications for these unique types of evidence syntheses is available in Additional File 2A.

Types of traditional systematic reviews

Both Cochrane [ 30 , 59 ] and JBI [ 60 ] provide methodologies for many types of evidence syntheses; they describe these with different terminology, but there is obvious overlap (Table 2.2 ). The majority of evidence syntheses published by Cochrane (96%) and JBI (62%) are categorized as intervention reviews. This reflects the earlier development and dissemination of their intervention review methodologies; these remain well-established [ 30 , 59 , 61 ] as both organizations continue to focus on topics related to treatment efficacy and harms. In contrast, intervention reviews represent only about half of the total published in the general medical literature, and several non-intervention review types contribute to a significant proportion of the other half.

Evidence syntheses published by Cochrane and JBI

a Data from https://www.cochranelibrary.com/cdsr/reviews . Accessed 17 Sep 2022

b Data obtained via personal email communication on 18 Sep 2022 with Emilie Francis, editorial assistant, JBI Evidence Synthesis

c Includes the following categories: prevalence, scoping, mixed methods, and realist reviews

d This methodology is not supported in the current version of the JBI Manual for Evidence Synthesis

Types of research evidence

There is consensus on the importance of using multiple study designs in evidence syntheses; at the same time, there is a lack of agreement on methods to identify included study designs. Authors of evidence syntheses may use various taxonomies and associated algorithms to guide selection and/or classification of study designs. These tools differentiate categories of research and apply labels to individual study designs (eg, RCT, cross-sectional). A familiar example is the Design Tree endorsed by the Centre for Evidence-Based Medicine [ 70 ]. Such tools may not be helpful to authors of evidence syntheses for multiple reasons.

Suboptimal levels of agreement and accuracy even among trained methodologists reflect challenges with the application of such tools [ 71 , 72 ]. Problematic distinctions or decision points (eg, experimental or observational, controlled or uncontrolled, prospective or retrospective) and design labels (eg, cohort, case control, uncontrolled trial) have been reported [ 71 ]. The variable application of ambiguous study design labels to non-randomized studies is common, making them especially prone to misclassification [ 73 ]. In addition, study labels do not denote the unique design features that make different types of non-randomized studies susceptible to different biases, including those related to how the data are obtained (eg, clinical trials, disease registries, wearable devices). Given this limitation, it is important to be aware that design labels preclude the accurate assignment of non-randomized studies to a “level of evidence” in traditional hierarchies [ 74 ].

These concerns suggest that available tools and nomenclature used to distinguish types of research evidence may not uniformly apply to biomedical research and non-health fields that utilize evidence syntheses (eg, education, economics) [ 75 , 76 ]. Moreover, primary research reports often do not describe study design or do so incompletely or inaccurately; thus, indexing in PubMed and other databases does not address the potential for misclassification [ 77 ]. Yet proper identification of research evidence has implications for several key components of evidence syntheses. For example, search strategies limited by index terms using design labels or study selection based on labels applied by the authors of primary studies may cause inconsistent or unjustified study inclusions and/or exclusions [ 77 ]. In addition, because risk of bias (RoB) tools consider attributes specific to certain types of studies and study design features, results of these assessments may be invalidated if an inappropriate tool is used. Appropriate classification of studies is also relevant for the selection of a suitable method of synthesis and interpretation of those results.

An alternative to these tools and nomenclature involves application of a few fundamental distinctions that encompass a wide range of research designs and contexts. While these distinctions are not novel, we integrate them into a practical scheme (see Fig. 1) designed to guide authors of evidence syntheses in the basic identification of research evidence. The initial distinction is between primary and secondary studies. Primary studies are then further distinguished by: 1) the type of data reported (qualitative or quantitative); and 2) two defining design features (group or single-case and randomized or non-randomized). The different types of studies and study designs represented in the scheme are described in detail in Additional File 2B. It is important to conceptualize their methods as complementary as opposed to contrasting or hierarchical [ 78 ]; each offers advantages and disadvantages that determine their appropriateness for answering different kinds of research questions in an evidence synthesis.

Fig. 1 Distinguishing types of research evidence
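As a purely illustrative aid, and not part of the published scheme, the Python sketch below shows how the basic distinctions in Fig. 1 might be recorded for each included study during data extraction instead of relying on ambiguous design labels; the field names and example values are assumptions made for the example.

```python
from dataclasses import dataclass
from typing import Literal, Optional

# Illustrative only (not part of the published scheme): recording the basic
# distinctions in Fig. 1 for each included study during data extraction,
# rather than relying on ambiguous design labels. Field names are assumptions.
@dataclass
class StudyClassification:
    level: Literal["primary", "secondary"]
    data_type: Optional[Literal["quantitative", "qualitative"]] = None  # primary studies only
    unit: Optional[Literal["group", "single-case"]] = None
    allocation: Optional[Literal["randomized", "non-randomized"]] = None

rct = StudyClassification(level="primary", data_type="quantitative",
                          unit="group", allocation="randomized")
print(rct)
```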

Application of these basic distinctions may avoid some of the potential difficulties associated with study design labels and taxonomies. Nevertheless, debatable methodological issues are raised when certain types of research identified in this scheme are included in an evidence synthesis. We briefly highlight those associated with inclusion of non-randomized studies, case reports and series, and a combination of primary and secondary studies.

Non-randomized studies

When investigating an intervention’s effectiveness, it is important for authors to recognize the uncertainty of observed effects reported by studies with high RoB. Results of statistical analyses that include such studies need to be interpreted with caution in order to avoid misleading conclusions [ 74 ]. Review authors may consider excluding randomized studies with high RoB from meta-analyses. Non-randomized studies of intervention (NRSI) are affected by a greater potential range of biases and thus vary more than RCTs in their ability to estimate a causal effect [ 79 ]. If data from NRSI are synthesized in meta-analyses, it is helpful to separately report their summary estimates [ 6 , 74 ].

Nonetheless, certain design features of NRSI (eg, which parts of the study were prospectively designed) may help to distinguish stronger from weaker ones. Cochrane recommends that authors of a review including NRSI focus on relevant study design features when determining eligibility criteria instead of relying on non-informative study design labels [ 79 , 80 ]. This process is facilitated by a study design feature checklist; guidance on using the checklist is included with developers’ description of the tool [ 73 , 74 ]. Authors collect information about these design features during data extraction and then consider it when making final study selection decisions and when performing RoB assessments of the included NRSI.

Case reports and case series

Correctly identified case reports and case series can contribute evidence not well captured by other designs [ 81 ]; in addition, some topics may be limited to a body of evidence that consists primarily of uncontrolled clinical observations. Murad and colleagues offer a framework for how to include case reports and series in an evidence synthesis [ 82 ]. Distinguishing between cohort studies and case series in these syntheses is important, especially for those that rely on evidence from NRSI. Additional data obtained from studies misclassified as case series can potentially increase the confidence in effect estimates. Mathes and Pieper provide authors of evidence syntheses with specific guidance on distinguishing between cohort studies and case series, but emphasize the increased workload involved [ 77 ].

Primary and secondary studies

Synthesis of combined evidence from primary and secondary studies may provide a broad perspective on the entirety of available literature on a topic. This is, in fact, the recommended strategy for scoping reviews that may include a variety of sources of evidence (eg, CPGs, popular media). However, except for scoping reviews, the synthesis of data from primary and secondary studies is discouraged unless there are strong reasons to justify doing so.

Combining primary and secondary sources of evidence is challenging for authors of other types of evidence syntheses for several reasons [ 83 ]. Assessments of RoB for primary and secondary studies are derived from conceptually different tools, thus obfuscating the ability to make an overall RoB assessment of a combination of these study types. In addition, authors who include primary and secondary studies must devise non-standardized methods for synthesis. Note this contrasts with well-established methods available for updating existing evidence syntheses with additional data from new primary studies [ 84 – 86 ]. However, a new review that synthesizes data from primary and secondary studies raises questions of validity and may unintentionally support a biased conclusion because no existing methodological guidance is currently available [ 87 ].

Recommendations

We suggest that journal editors require authors to identify which type of evidence synthesis they are submitting and reference the specific methodology used for its development. This will clarify the research question and methods for peer reviewers and potentially simplify the editorial process. Editors should announce this practice and include it in the instructions to authors. To decrease bias and apply correct methods, authors must also accurately identify the types of research evidence included in their syntheses.

Part 3. Conduct and reporting

The need to develop criteria to assess the rigor of systematic reviews was recognized soon after the EBM movement began to gain international traction [ 88 , 89 ]. Systematic reviews rapidly became popular, but many were very poorly conceived, conducted, and reported. These problems remain highly prevalent [ 23 ] despite development of guidelines and tools to standardize and improve the performance and reporting of evidence syntheses [ 22 , 28 ]. Table 3.1  provides some historical perspective on the evolution of tools developed specifically for the evaluation of systematic reviews, with or without meta-analysis.

Tools specifying standards for systematic reviews with and without meta-analysis

a Currently recommended

b Validated tool for systematic reviews of interventions developed for use by authors of overviews or umbrella reviews

These tools are often interchangeably invoked when referring to the “quality” of an evidence synthesis. However, quality is a vague term that is frequently misused and misunderstood; more precisely, these tools specify different standards for evidence syntheses. Methodological standards address how well a systematic review was designed and performed [ 5 ]. RoB assessments refer to systematic flaws or limitations in the design, conduct, or analysis of research that distort the findings of the review [ 4 ]. Reporting standards help systematic review authors describe the methodology they used and the results of their synthesis in sufficient detail [ 92 ]. It is essential to distinguish between these evaluations: a systematic review may be biased, it may fail to report sufficient information on essential features, or it may exhibit both problems; a thoroughly reported evidence synthesis may still be biased and flawed, while an otherwise unbiased one may suffer from deficient documentation.

We direct attention to the currently recommended tools listed in Table 3.1 but concentrate on AMSTAR-2 (update of AMSTAR [A Measurement Tool to Assess Systematic Reviews]) and ROBIS (Risk of Bias in Systematic Reviews), which evaluate methodological quality and RoB, respectively. For comparison and completeness, we include PRISMA 2020 (update of the 2009 Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement), which offers guidance on reporting standards. The exclusive focus on these three tools is by design; it addresses concerns related to the considerable variability in tools used for the evaluation of systematic reviews [ 28 , 88 , 96 , 97 ]. We highlight the underlying constructs these tools were designed to assess, then describe their components and applications. Their known (or potential) uptake and impact and limitations are also discussed.

Evaluation of conduct

Development.

AMSTAR [ 5 ] was in use for a decade prior to the 2017 publication of AMSTAR-2; both provide a broad evaluation of methodological quality of intervention systematic reviews, including flaws arising through poor conduct of the review [ 6 ]. ROBIS, published in 2016, was developed to specifically assess RoB introduced by the conduct of the review; it is applicable to systematic reviews of interventions and several other types of reviews [ 4 ]. Both tools reflect a shift to a domain-based approach as opposed to generic quality checklists. There are a few items unique to each tool; however, similarities between items have been demonstrated [ 98 , 99 ]. AMSTAR-2 and ROBIS are recommended for use by: 1) authors of overviews or umbrella reviews and CPGs to evaluate systematic reviews considered as evidence; 2) authors of methodological research studies to appraise included systematic reviews; and 3) peer reviewers for appraisal of submitted systematic review manuscripts. For authors, these tools may function as teaching aids and inform conduct of their review during its development.

Description

Systematic reviews that include randomized and/or non-randomized studies as evidence can be appraised with AMSTAR-2 and ROBIS. Other characteristics of AMSTAR-2 and ROBIS are summarized in Table 3.2 . Both tools define categories for an overall rating; however, neither tool is intended to generate a total score by simply calculating the number of responses satisfying criteria for individual items [ 4 , 6 ]. AMSTAR-2 focuses on the rigor of a review’s methods irrespective of the specific subject matter. ROBIS places emphasis on a review’s results section— this suggests it may be optimally applied by appraisers with some knowledge of the review’s topic as they may be better equipped to determine if certain procedures (or lack thereof) would impact the validity of a review’s findings [ 98 , 100 ]. Reliability studies show AMSTAR-2 overall confidence ratings strongly correlate with the overall RoB ratings in ROBIS [ 100 , 101 ].

Comparison of AMSTAR-2 and ROBIS

a ROBIS includes an optional first phase to assess the applicability of the review to the research question of interest. The tool may be applicable to other review types in addition to the four specified, although modification of this initial phase will be needed (Personal Communication via email, Penny Whiting, 28 Jan 2022)

b AMSTAR-2 item #9 and #11 require separate responses for RCTs and NRSI

Interrater reliability has been shown to be acceptable for AMSTAR-2 [ 6 , 11 , 102 ] and ROBIS [ 4 , 98 , 103 ] but neither tool has been shown to be superior in this regard [ 100 , 101 , 104 , 105 ]. Overall, variability in reliability for both tools has been reported across items, between pairs of raters, and between centers [ 6 , 100 , 101 , 104 ]. The effects of appraiser experience on the results of AMSTAR-2 and ROBIS require further evaluation [ 101 , 105 ]. Updates to both tools should address items shown to be prone to individual appraisers’ subjective biases and opinions [ 11 , 100 ]; this may involve modifications of the current domains and signaling questions as well as incorporation of methods to make an appraiser’s judgments more explicit. Future revisions of these tools may also consider the addition of standards for aspects of systematic review development currently lacking (eg, rating overall certainty of evidence, [ 99 ] methods for synthesis without meta-analysis [ 105 ]) and removal of items that assess aspects of reporting that are thoroughly evaluated by PRISMA 2020.

Application

A good understanding of what is required to satisfy the standards of AMSTAR-2 and ROBIS involves study of the accompanying guidance documents written by the tools’ developers; these contain detailed descriptions of each item’s standards. In addition, accurate appraisal of a systematic review with either tool requires training. Most experts recommend independent assessment by at least two appraisers with a process for resolving discrepancies as well as procedures to establish interrater reliability, such as pilot testing, a calibration phase or exercise, and development of predefined decision rules [ 35 , 99 – 101 , 103 , 104 , 106 ]. These methods may, to some extent, address the challenges associated with the diversity in methodological training, subject matter expertise, and experience using the tools that are likely to exist among appraisers.

The standards of AMSTAR, AMSTAR-2, and ROBIS have been used in many methodological studies and epidemiological investigations. However, the increased publication of overviews or umbrella reviews and CPGs has likely been a greater influence on the widening acceptance of these tools. Critical appraisal of the secondary studies considered as evidence is essential to the trustworthiness of both the recommendations of CPGs and the conclusions of overviews. Currently both Cochrane [ 55 ] and JBI [ 107 ] recommend AMSTAR-2 and ROBIS in their guidance for authors of overviews or umbrella reviews. However, ROBIS and AMSTAR-2 were released in 2016 and 2017, respectively; thus, to date, limited data have been reported about the uptake of these tools or which of the two may be preferred [ 21 , 106 ]. Currently, in relation to CPGs, AMSTAR-2 appears to be overwhelmingly popular compared to ROBIS. A Google Scholar search of this topic (search terms “AMSTAR 2 AND clinical practice guidelines” and “ROBIS AND clinical practice guidelines”; 13 May 2022) found 12,700 hits for AMSTAR-2 and 1,280 for ROBIS. The apparent greater appeal of AMSTAR-2 may relate to its longer track record, given the original version of the tool was in use for 10 years prior to its update in 2017.

Barriers to the uptake of AMSTAR-2 and ROBIS include the real or perceived time and resources necessary to complete the items they include and appraisers’ confidence in their own ratings [ 104 ]. Reports from comparative studies available to date indicate that appraisers find AMSTAR-2 questions, responses, and guidance to be clearer and simpler compared with ROBIS [ 11 , 101 , 104 , 105 ]. This suggests that for appraisal of intervention systematic reviews, AMSTAR-2 may be a more practical tool than ROBIS, especially for novice appraisers [ 101 , 103 – 105 ]. The unique characteristics of each tool, as well as their potential advantages and disadvantages, should be taken into consideration when deciding which tool should be used for an appraisal of a systematic review. In addition, the choice of one or the other may depend on how the results of an appraisal will be used; for example, a peer reviewer’s appraisal of a single manuscript versus an appraisal of multiple systematic reviews in an overview or umbrella review, CPG, or systematic methodological study.

Authors of overviews and CPGs report results of AMSTAR-2 and ROBIS appraisals for each of the systematic reviews they include as evidence. Ideally, an independent judgment of their appraisals can be made by the end users of overviews and CPGs; however, most stakeholders, including clinicians, are unlikely to have a sophisticated understanding of these tools. Nevertheless, they should at least be aware that AMSTAR-2 and ROBIS ratings reported in overviews and CPGs may be inaccurate if the tools are not applied as intended by their developers. This can result from inadequate training of the overview or CPG authors who perform the appraisals, or from modifications of the appraisal tools imposed by them. The potential variability in overall confidence and RoB ratings highlights why appraisers applying these tools need to support their judgments with explicit documentation; this allows readers to judge for themselves whether they agree with the criteria used by appraisers [ 4 , 108 ]. When these judgments are explicit, the underlying rationale used when applying these tools can be assessed [ 109 ].

Theoretically, we would expect use of AMSTAR-2 to be associated with improved methodological rigor, and use of ROBIS with lower RoB, in recent systematic reviews compared to those published before 2017. To our knowledge, this has not yet been demonstrated; however, like reports about the actual uptake of these tools, time will tell. Additional data on user experience are also needed to further elucidate the practical challenges and methodological nuances encountered with the application of these tools. This information could potentially inform the creation of unifying criteria to guide and standardize the appraisal of evidence syntheses [ 109 ].

Evaluation of reporting

Complete reporting is essential for users to establish the trustworthiness and applicability of a systematic review’s findings. Efforts to standardize and improve the reporting of systematic reviews resulted in the 2009 publication of the PRISMA statement [ 92 ] with its accompanying explanation and elaboration document [ 110 ]. This guideline was designed to help authors prepare a complete and transparent report of their systematic review. In addition, adherence to PRISMA is often used to evaluate the thoroughness of reporting of published systematic reviews [ 111 ]. The updated version, PRISMA 2020 [ 93 ], and its guidance document [ 112 ] were published in 2021. Items on the original and updated versions of PRISMA are organized by the six basic review components they address (title, abstract, introduction, methods, results, discussion). The PRISMA 2020 update is a considerably expanded version of the original; it includes standards and examples for the 27 original and 13 additional reporting items that capture methodological advances and may enhance the replicability of reviews [ 113 ].

The original PRISMA statement fostered the development of various PRISMA extensions (Table 3.3 ). These include reporting guidance for scoping reviews and reviews of diagnostic test accuracy and for intervention reviews that report on the following: harms outcomes, equity issues, the effects of acupuncture, the results of network meta-analyses and analyses of individual participant data. Detailed reporting guidance for specific systematic review components (abstracts, protocols, literature searches) is also available.

PRISMA extensions

PRISMA, Preferred Reporting Items for Systematic Reviews and Meta-Analyses

a Note the abstract reporting checklist is now incorporated into PRISMA 2020 [ 93 ]

Uptake and impact

The 2009 PRISMA standards [ 92 ] for reporting have been widely endorsed by authors, journals, and EBM-related organizations. We anticipate the same for PRISMA 2020 [ 93 ] given its co-publication in multiple high-impact journals. However, to date, there is a lack of strong evidence for an association between improved systematic review reporting and endorsement of PRISMA 2009 standards [ 43 , 111 ]. Most journals require that a PRISMA checklist accompany submissions of systematic review manuscripts. However, the accuracy of information presented on these self-reported checklists is not necessarily verified. It remains unclear which strategies (eg, authors’ self-report of checklists, peer reviewer checks) might improve adherence to the PRISMA reporting standards; in addition, the feasibility of any potentially effective strategies must be taken into consideration given the structure and limitations of current research and publication practices [ 124 ].

Pitfalls and limitations of PRISMA, AMSTAR-2, and ROBIS

Misunderstanding of the roles of these tools and their misapplication may be widespread problems. PRISMA 2020 is a reporting guideline that is most beneficial if consulted when developing a review, as opposed to merely completing a checklist when submitting to a journal; by that point, the review is finished and its methodological choices, good or bad, are fixed. PRISMA checklists evaluate how completely an element of review conduct was reported, but they do not evaluate the caliber of the conduct or performance of a review. Thus, review authors and readers should not think that a rigorous systematic review can be produced by simply following the PRISMA 2020 guidelines. Similarly, it is important to recognize that AMSTAR-2 and ROBIS are tools to evaluate the conduct of a review but do not substitute for conceptual methodological guidance. In addition, they are not intended to be simple checklists. In fact, they have the potential for misuse or abuse if applied as such; for example, by calculating a total score to make a judgment about a review’s overall confidence or RoB. Proper selection of a response for the individual items on AMSTAR-2 and ROBIS requires training or at least reference to their accompanying guidance documents.

Not surprisingly, it has been shown that compliance with the PRISMA checklist is not necessarily associated with satisfying the standards of ROBIS [ 125 ]. AMSTAR-2 and ROBIS were not available when PRISMA 2009 was developed; however, they were considered in the development of PRISMA 2020 [ 113 ]. Therefore, future studies may show a positive relationship between fulfillment of PRISMA 2020 standards for reporting and meeting the standards of tools evaluating methodological quality and RoB.

Choice of an appropriate tool for the evaluation of a systematic review first involves identification of the underlying construct to be assessed. For systematic reviews of interventions, recommended tools include AMSTAR-2 and ROBIS for appraisal of conduct and PRISMA 2020 for completeness of reporting. All three tools were developed rigorously and provide easily accessible and detailed user guidance, which is necessary for their proper application and interpretation. When considering a manuscript for publication, training in these tools can sensitize peer reviewers and editors to major issues that may affect the review’s trustworthiness and completeness of reporting. Judgment of the overall certainty of a body of evidence and formulation of recommendations rely, in part, on AMSTAR-2 or ROBIS appraisals of systematic reviews. Therefore, training on the application of these tools is essential for authors of overviews and developers of CPGs. Peer reviewers and editors considering an overview or CPG for publication must hold their authors to a high standard of transparency regarding both the conduct and reporting of these appraisals.

Part 4. Meeting conduct standards

Many authors, peer reviewers, and editors erroneously equate fulfillment of the items on the PRISMA checklist with superior methodological rigor. For direction on methodology, we refer them to available resources that provide comprehensive conceptual guidance [ 59 , 60 ] as well as primers with basic step-by-step instructions [ 1 , 126 , 127 ]. This section is intended to complement study of such resources by facilitating use of AMSTAR-2 and ROBIS, tools specifically developed to evaluate methodological rigor of systematic reviews. These tools are widely accepted by methodologists; however, in the general medical literature, they are not uniformly selected for the critical appraisal of systematic reviews [ 88 , 96 ].

To enable their uptake, Table 4.1  links review components to the corresponding appraisal tool items. Expectations of AMSTAR-2 and ROBIS are concisely stated, and reasoning provided.

Systematic review components linked to appraisal with AMSTAR-2 and ROBIS a

CoI conflict of interest, MA meta-analysis, NA not addressed, PICO participant, intervention, comparison, outcome, PRISMA-P Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols, RoB risk of bias

a Components shown in bold are chosen for elaboration in Part 4 for one (or both) of two reasons: 1) the component has been identified as potentially problematic for systematic review authors; and/or 2) the component is evaluated by standards of an AMSTAR-2 “critical” domain

b Critical domains of AMSTAR-2 are indicated by *

Issues involved in meeting the standards for seven review components (identified in bold in Table 4.1 ) are addressed in detail. These were chosen for elaboration for one (or both) of two reasons: 1) the component has been identified as potentially problematic for systematic review authors based on consistent reports of their frequent AMSTAR-2 or ROBIS deficiencies [ 9 , 11 , 15 , 88 , 128 , 129 ]; and/or 2) the review component is judged by standards of an AMSTAR-2 “critical” domain. These have the greatest implications for how a systematic review will be appraised: if standards for any one of these critical domains are not met, the review is rated as having “critically low confidence.”

Research question

Specific and unambiguous research questions may have more value for reviews that deal with hypothesis testing. Mnemonics for the various elements of research questions are suggested by JBI and Cochrane (Table 2.1). These prompt authors to consider the specialized methods involved for developing different types of systematic reviews; however, while inclusion of the suggested elements makes a review compliant with the methods for that type of review, it does not necessarily make the research question appropriate. Table 4.2 lists acronyms that may aid in developing the research question. They include overlapping concepts of importance in this time of proliferating reviews of uncertain value [ 130 ]. If these issues are not prospectively contemplated, systematic review authors may establish an overly broad scope, or allow runaway scope that strays from predefined choices relating to key comparisons and outcomes.

Research question development

a Cummings SR, Browner WS, Hulley SB. Conceiving the research question and developing the study plan. In: Hulley SB, Cummings SR, Browner WS, editors. Designing clinical research: an epidemiological approach. 4th ed. Lippincott Williams & Wilkins; 2007. p. 14–22.

b Doran GT. There’s a S.M.A.R.T. way to write management’s goals and objectives. Manage Rev. 1981;70:35–6.

c Johnson BT, Hennessy EA. Systematic reviews and meta-analyses in the health sciences: best practice methods for research syntheses. Soc Sci Med. 2019;233:237–51.

Once a research question is established, searching on registry sites and databases for existing systematic reviews addressing the same or a similar topic is necessary in order to avoid contributing to research waste [ 131 ]. Repeating an existing systematic review must be justified, for example, if previous reviews are out of date or methodologically flawed. A full discussion on replication of intervention systematic reviews, including a consensus checklist, can be found in the work of Tugwell and colleagues [ 84 ].

Protocol development is considered a core component of systematic reviews [ 125 , 126 , 132 ]. Review protocols may allow researchers to plan and anticipate potential issues, assess validity of methods, prevent arbitrary decision-making, and minimize bias that can be introduced by the conduct of the review. Registration of a protocol that allows public access promotes transparency of the systematic review’s methods and processes and reduces the potential for duplication [ 132 ]. Thinking early and carefully about all the steps of a systematic review is pragmatic and logical and may mitigate the influence of the authors’ prior knowledge of the evidence [ 133 ]. In addition, the protocol stage is when the scope of the review can be carefully considered by authors, reviewers, and editors; this may help to avoid production of overly ambitious reviews that include excessive numbers of comparisons and outcomes or are undisciplined in their study selection.

Systematic reviews with published prospective protocols have been reported to better meet AMSTAR standards [ 134 ]. However, completeness of reporting does not seem to be different in reviews with a protocol compared to those without one [ 135 ]. PRISMA-P [ 116 ] and its accompanying elaboration and explanation document [ 136 ] can be used to guide and assess the reporting of protocols. A final version of the review should fully describe any protocol deviations. Peer reviewers may compare the submitted manuscript with any available pre-registered protocol; this is required if AMSTAR-2 or ROBIS are used for critical appraisal.

There are multiple options for the recording of protocols (Table 4.3 ). Some journals will peer review and publish protocols. In addition, many online sites offer date-stamped and publicly accessible protocol registration. Some of these are exclusively for protocols of evidence syntheses; others are less restrictive and offer researchers the capacity for data storage, sharing, and other workflow features. These sites document protocol details to varying extents and have different requirements [ 137 ]. For example, the most popular site for systematic reviews, the International Prospective Register of Systematic Reviews (PROSPERO), only registers reviews that report on an outcome with direct relevance to human health. The PROSPERO record documents protocols for all types of reviews except literature and scoping reviews. Of note, PROSPERO requires that authors register their review protocols prior to any data extraction [ 133 , 138 ]. The electronic records of most of these registry sites allow authors to update their protocols and facilitate transparent tracking of protocol changes, which are not unexpected during the progress of the review [ 139 ].

Options for protocol registration of evidence syntheses

a Authors are advised to contact their target journal regarding submission of systematic review protocols

b Registration is restricted to approved review projects

c The JBI registry lists review projects currently underway by JBI-affiliated entities. These records include a review’s title, primary author, research question, and PICO elements. JBI recommends that authors register eligible protocols with PROSPERO

d See Pieper and Rombey [ 137 ] for detailed characteristics of these five registries

e See Pieper and Rombey [ 137 ] for other systematic review data repository options

Study design inclusion

For most systematic reviews, broad inclusion of study designs is recommended [ 126 ]. This may allow comparison of results between contrasting study design types [ 126 ]. Certain study designs may be considered preferable depending on the type of review and nature of the research question. However, prevailing stereotypes about what each study design does best may not be accurate. For example, in systematic reviews of interventions, randomized designs are typically thought to answer highly specific questions while non-randomized designs often are expected to reveal greater information about harms or real-world evidence [ 126 , 140 , 141 ]. This may be a false distinction; randomized trials may be pragmatic [ 142 ], they may offer important (and more unbiased) information on harms [ 143 ], and data from non-randomized studies may not necessarily be more real-world-oriented [ 144 ].

Moreover, there may not be any available evidence reported by RCTs for certain research questions; in some cases, there may not be any RCTs or NRSI. When the available evidence is limited to case reports and case series, it is not possible to test hypotheses or to provide descriptive estimates or associations; however, a systematic review of these studies can still offer important insights [ 81 , 145 ]. When authors anticipate that limited evidence of any kind may be available to inform their research questions, a scoping review can be considered. Alternatively, decisions regarding inclusion of indirect as opposed to direct evidence can be addressed during protocol development [ 146 ]. Including indirect evidence at an early stage of intervention systematic review development allows authors to decide if such studies offer any additional and/or different understanding of treatment effects for their population or comparison of interest. Issues of indirectness of included studies are accounted for later in the process, during determination of the overall certainty of evidence (see Part 5 for details).

Evidence search

Both AMSTAR-2 and ROBIS require systematic and comprehensive searches for evidence. This is essential for any systematic review. Both tools discourage search restrictions based on language and publication source. Given increasing globalism in health care, the practice of including English-only literature should be avoided [ 126 ]. There are many examples in which language bias (different results in studies published in different languages) has been documented [ 147 , 148 ]. This does not mean that all literature, in all languages, is equally trustworthy [ 148 ]; however, the only way to formally probe for the potential of such biases is to consider all languages in the initial search. The gray literature and trial registries may also reveal important details about topics that would otherwise be missed [ 149 – 151 ]. Again, inclusiveness will allow review authors to investigate whether results differ in the gray literature and trial registries [ 41 , 151 – 153 ].

Authors should make every attempt to complete their review within one year, as that is the likely viable life of a search. If that is not possible, the search should be updated close to the time of completion [ 154 ]. Some topics may warrant an even shorter interval; in rapidly changing fields (as in the case of the COVID-19 pandemic), even one month may radically change the available evidence.

Excluded studies

AMSTAR-2 requires authors to provide references for any studies excluded at the full text phase of study selection along with reasons for exclusion; this allows readers to feel confident that all relevant literature has been considered for inclusion and that exclusions are defensible.

Risk of bias assessment of included studies

The design of the studies included in a systematic review (eg, RCT, cohort, case series) should not be equated with appraisal of their RoB. To meet AMSTAR-2 and ROBIS standards, systematic review authors must examine RoB issues specific to the design of each primary study they include as evidence. It is unlikely that a single RoB appraisal tool will be suitable for all research designs. In addition to tools for randomized and non-randomized studies, specific tools are available for evaluation of RoB in case reports and case series [ 82 ] and single-case experimental designs [ 155 , 156 ]. Note that the RoB tools selected must meet the standards of the appraisal tool used to judge the conduct of the review. For example, AMSTAR-2 identifies four sources of bias specific to RCTs and NRSI that must be addressed by the RoB tool(s) chosen by the review authors. The Cochrane RoB 2 tool [ 157 ] for RCTs and ROBINS-I [ 158 ] for NRSI meet the AMSTAR-2 standards. Appraisers on the review team should not modify any RoB tool without complete transparency and acknowledgment that they have invalidated the interpretation of the tool as intended by its developers [ 159 ]. Conduct of RoB assessments is not addressed by AMSTAR-2; to meet ROBIS standards, two independent reviewers should complete RoB assessments of included primary studies.

Implications of the RoB assessments must be explicitly discussed and considered in the conclusions of the review. Discussion of the overall RoB of included studies may consider the weight of the studies at high RoB, the importance of the sources of bias in the studies being summarized, and if their importance differs in relationship to the outcomes reported. If a meta-analysis is performed, serious concerns for RoB of individual studies should be accounted for in these results as well. If the results of the meta-analysis for a specific outcome change when studies at high RoB are excluded, readers will have a more accurate understanding of this body of evidence. However, while investigating the potential impact of specific biases is a useful exercise, it is important to avoid over-interpretation, especially when there are sparse data.

Synthesis methods for quantitative data

Syntheses of quantitative data reported by primary studies are broadly categorized as one of two types: meta-analysis, and synthesis without meta-analysis (Table 4.4 ). Before deciding on one of these methods, authors should seek methodological advice about whether reported data can be transformed or used in other ways to provide a consistent effect measure across studies [ 160 , 161 ].

Common methods for quantitative synthesis

CI confidence interval (or credible interval, if analysis is done in Bayesian framework)

a See text for descriptions of the types of data combined in each of these approaches

b See Additional File 4  for guidance on the structure and presentation of forest plots

c General approach is similar to aggregate data meta-analysis but there are substantial differences relating to data collection and checking and analysis [ 162 ]. This approach to syntheses is applicable to intervention, diagnostic, and prognostic systematic reviews [ 163 ]

d Examples include meta-regression, hierarchical and multivariate approaches [ 164 ]

e In-depth guidance and illustrations of these methods are provided in Chapter 12 of the Cochrane Handbook [ 160 ]

Meta-analysis

Systematic reviews that employ meta-analysis should not be referred to simply as “meta-analyses.” The term meta-analysis strictly refers to a specific statistical technique used when study effect estimates and their variances are available, yielding a quantitative summary of results. In general, methods for meta-analysis involve use of a weighted average of effect estimates from two or more studies. When applied appropriately, meta-analysis increases the precision of the estimated magnitude of effect and can offer useful insights about heterogeneity and estimates of effects. We refer to standard references for a thorough introduction and formal training [ 165 – 167 ].
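
To make the phrase “weighted average” concrete, the display below sketches the standard fixed-effect, inverse-variance formulation; this is a generic illustration rather than a formula drawn from the cited references, and random-effects models modify the weights to incorporate an estimate of between-study variance.

```latex
\[
\hat{\theta} = \frac{\sum_{i=1}^{k} w_i \, \hat{\theta}_i}{\sum_{i=1}^{k} w_i},
\qquad
w_i = \frac{1}{\operatorname{SE}(\hat{\theta}_i)^{2}},
\qquad
\operatorname{SE}(\hat{\theta}) = \frac{1}{\sqrt{\sum_{i=1}^{k} w_i}}
\]
% \hat{\theta}_i is the effect estimate reported by study i of the k included studies;
% studies with smaller standard errors receive proportionally larger weights.
```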

There are three common approaches to meta-analysis in current health care–related systematic reviews (Table 4.4 ). Aggregate data meta-analysis is the most familiar to authors of evidence syntheses and their end users. This standard meta-analysis combines data on effect estimates reported by studies that investigate similar research questions involving direct comparisons of an intervention and comparator. Results of these analyses provide a single summary intervention effect estimate. If the included studies in a systematic review measure an outcome differently, their reported results may be transformed to make them comparable [ 161 ]. Forest plots visually present essential information about the individual studies and the overall pooled analysis (see Additional File 4  for details).
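
For readers who want to see the mechanics, the sketch below pools a few entirely hypothetical study estimates using the fixed-effect, inverse-variance formula shown above; real analyses are typically performed with dedicated software, consider random-effects models, and are accompanied by an assessment of heterogeneity.

```python
# Minimal sketch of a fixed-effect, inverse-variance pooled estimate.
# The effect estimates (e.g., log odds ratios) and standard errors are hypothetical.
from math import sqrt

studies = [
    ("Study A", -0.35, 0.20),  # (label, effect estimate, standard error)
    ("Study B", -0.10, 0.15),
    ("Study C", -0.50, 0.30),
]

weights = [1 / se**2 for _, _, se in studies]
pooled = sum(w * est for (_, est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = 1 / sqrt(sum(weights))

# 95% confidence interval for the pooled estimate (normal approximation)
low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se

for (label, est, se), w in zip(studies, weights):
    print(f"{label}: estimate {est:+.2f} (SE {se:.2f}), weight {100 * w / sum(weights):.1f}%")
print(f"Pooled estimate {pooled:+.2f} (95% CI {low:+.2f} to {high:+.2f})")
```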

Less familiar and more challenging meta-analytical approaches used in secondary research include individual participant data (IPD) and network meta-analyses (NMA); PRISMA extensions provide reporting guidelines for both [ 117 , 118 ]. In IPD, the raw data on each participant from each eligible study are re-analyzed as opposed to the study-level data analyzed in aggregate data meta-analyses [ 168 ]. This may offer advantages, including the potential for limiting concerns about bias and allowing more robust analyses [ 163 ]. As suggested by the description in Table 4.4 , NMA is a complex statistical approach. It combines aggregate data [ 169 ] or IPD [ 170 ] for effect estimates from direct and indirect comparisons reported in two or more studies of three or more interventions. This makes it a potentially powerful statistical tool; while multiple interventions are typically available to treat a condition, few have been evaluated in head-to-head trials [ 171 ]. Both IPD and NMA facilitate a broader scope, and potentially provide more reliable and/or detailed results; however, compared with standard aggregate data meta-analyses, their methods are more complicated, time-consuming, and resource-intensive, and they have their own biases, so one needs sufficient funding, technical expertise, and preparation to employ them successfully [ 41 , 172 , 173 ].

Several items in AMSTAR-2 and ROBIS address meta-analysis; thus, understanding the strengths, weaknesses, assumptions, and limitations of methods for meta-analyses is important. According to the standards of both tools, plans for a meta-analysis must be addressed in the review protocol, including reasoning, description of the type of quantitative data to be synthesized, and the methods planned for combining the data. This should not consist of stock statements describing conventional meta-analysis techniques; rather, authors are expected to anticipate issues specific to their research questions. Concern for the lack of training in meta-analysis methods among systematic review authors cannot be overstated. For those with training, the use of popular software (eg, RevMan [ 174 ], MetaXL [ 175 ], JBI SUMARI [ 176 ]) may facilitate exploration of these methods; however, such programs cannot substitute for the accurate interpretation of the results of meta-analyses, especially for more complex meta-analytical approaches.

Synthesis without meta-analysis

There are varied reasons a meta-analysis may not be appropriate or desirable [ 160 , 161 ]. Syntheses that informally use statistical methods other than meta-analysis are variably referred to as descriptive, narrative, or qualitative syntheses or summaries; these terms are also applied to syntheses that make no attempt to statistically combine data from individual studies. However, use of such imprecise terminology is discouraged; in order to fully explore the results of any type of synthesis, some narration or description is needed to supplement the data visually presented in tabular or graphic forms [ 63 , 177 ]. In addition, the term “qualitative synthesis” is easily confused with a synthesis of qualitative data in a qualitative or mixed methods review. “Synthesis without meta-analysis” is currently the preferred description of other ways to combine quantitative data from two or more studies. Use of this specific terminology when referring to these types of syntheses also implies the application of formal methods (Table 4.4 ).

Methods for synthesis without meta-analysis involve structured presentation of the data in tables and plots. In comparison to narrative descriptions of each study, these are designed to more effectively and transparently show patterns and convey detailed information about the data; they also allow informal exploration of heterogeneity [ 178 ]. In addition, acceptable quantitative statistical methods (Table 4.4 ) are formally applied; however, it is important to recognize these methods have significant limitations for the interpretation of the effectiveness of an intervention [ 160 ]. Nevertheless, when meta-analysis is not possible, the application of these methods is less prone to bias compared with an unstructured narrative description of included studies [ 178 , 179 ].

Vote counting is commonly used in systematic reviews and involves a tally of studies reporting results that meet some threshold of importance applied by review authors. Until recently, it has not typically been identified as a method for synthesis without meta-analysis. Guidance on an acceptable vote counting method based on direction of effect is currently available [ 160 ] and should be used instead of narrative descriptions of such results (eg, “more than half the studies showed improvement”; “only a few studies reported adverse effects”; “7 out of 10 studies favored the intervention”). Unacceptable methods include vote counting by statistical significance or magnitude of effect or some subjective rule applied by the authors.
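
As a minimal illustration of vote counting based on direction of effect (with hypothetical data, in the spirit of the Cochrane guidance cited above rather than a reproduction of it), the sketch below tallies the direction of effect across studies and applies an exact sign test:

```python
# Vote counting by direction of effect with an exact two-sided sign test.
# Directions are hypothetical: +1 favors the intervention, -1 favors the comparator;
# studies with no estimable direction are excluded from the tally.
from math import comb

directions = [+1, +1, -1, +1, +1, +1, -1, +1]

n_favor = sum(1 for d in directions if d > 0)
n_total = sum(1 for d in directions if d != 0)

# Probability of a split at least this extreme if either direction were equally likely
k = max(n_favor, n_total - n_favor)
p_value = min(1.0, 2 * sum(comb(n_total, i) for i in range(k, n_total + 1)) / 2**n_total)

print(f"{n_favor}/{n_total} studies favor the intervention (sign test p = {p_value:.3f})")
```

Note that this approach conveys only the direction of effects, not their magnitude or precision.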

AMSTAR-2 and ROBIS standards do not explicitly address conduct of syntheses without meta-analysis, although AMSTAR-2 items 13 and 14 might be considered relevant. Guidance for the complete reporting of syntheses without meta-analysis for systematic reviews of interventions is available in the Synthesis without Meta-analysis (SWiM) guideline [ 180 ] and methodological guidance is available in the Cochrane Handbook [ 160 , 181 ].

Familiarity with AMSTAR-2 and ROBIS makes sense for authors of systematic reviews as these appraisal tools will be used to judge their work; however, training is necessary for authors to truly appreciate and apply methodological rigor. Moreover, judgment of the potential contribution of a systematic review to the current knowledge base goes beyond meeting the standards of AMSTAR-2 and ROBIS. These tools do not explicitly address some crucial concepts involved in the development of a systematic review; this further emphasizes the need for author training.

We recommend that systematic review authors incorporate specific practices or exercises when formulating a research question at the protocol stage. These should be designed to raise the review team’s awareness of how to prevent research and resource waste [ 84 , 130 ] and to stimulate careful contemplation of the scope of the review [ 30 ]. Authors’ training should also focus on justifiably choosing a formal method for the synthesis of quantitative and/or qualitative data from primary research; both types of data require specific expertise. For typical reviews that involve syntheses of quantitative data, statistical expertise is necessary, initially for decisions about appropriate methods [ 160 , 161 ] and then to inform any meta-analyses [ 167 ] or other statistical methods applied [ 160 ].

Part 5. Rating overall certainty of evidence

Report of an overall certainty of evidence assessment in a systematic review is an important new reporting standard of the updated PRISMA 2020 guidelines [ 93 ]. Systematic review authors are well acquainted with assessing RoB in individual primary studies, but much less familiar with assessment of overall certainty across an entire body of evidence. Yet a reliable way to evaluate this broader concept is now recognized as a vital part of interpreting the evidence.

Historical systems for rating evidence are based on study design and usually involve hierarchical levels or classes of evidence that use numbers and/or letters to designate the level/class. These systems were endorsed by various EBM-related organizations. Professional societies and regulatory groups then widely adopted them, often with modifications for application to the available primary research base in specific clinical areas. In 2002, a report issued by the AHRQ identified 40 systems to rate quality of a body of evidence [ 182 ]. A critical appraisal of systems used by prominent health care organizations published in 2004 revealed limitations in sensibility, reproducibility, applicability to different questions, and usability to different end users [ 183 ]. Persistent use of hierarchical rating schemes to describe overall quality continues to complicate the interpretation of evidence. This is indicated by recent reports of poor interpretability of systematic review results by readers [ 184 – 186 ] and misleading interpretations of the evidence related to the “spin” systematic review authors may put on their conclusions [ 50 , 187 ].

Recognition of the shortcomings of hierarchical rating systems raised concerns that misleading clinical recommendations could result even if based on a rigorous systematic review. In addition, the number and variability of these systems were considered obstacles to quick and accurate interpretations of the evidence by clinicians, patients, and policymakers [ 183 ]. These issues contributed to the development of the GRADE approach. An international working group, which continues to actively evaluate and refine it, first introduced GRADE in 2004 [ 188 ]. Currently more than 110 organizations from 19 countries around the world have endorsed or are using GRADE [ 189 ].

GRADE approach to rating overall certainty

GRADE offers a consistent and sensible approach for two separate processes: rating the overall certainty of a body of evidence and the strength of recommendations. The former is the expected conclusion of a systematic review, while the latter is pertinent to the development of CPGs. As such, GRADE provides a mechanism to bridge the gap from evidence synthesis to application of the evidence for informed clinical decision-making [ 27 , 190 ]. We briefly examine the GRADE approach but only as it applies to rating overall certainty of evidence in systematic reviews.

In GRADE, use of “certainty” of a body of evidence is preferred over the term “quality.” [ 191 ] Certainty refers to the level of confidence systematic review authors have that, for each outcome, an effect estimate represents the true effect. The GRADE approach to rating confidence in estimates begins with identifying the study type (RCT or NRSI) and then systematically considers criteria to rate the certainty of evidence up or down (Table 5.1 ).

GRADE criteria for rating certainty of evidence

a Applies to randomized studies

b Applies to non-randomized studies

This process results in assignment of one of the four GRADE certainty ratings to each outcome; these are clearly conveyed with the use of basic interpretation symbols (Table 5.2 ) [ 192 ]. Notably, when multiple outcomes are reported in a systematic review, each outcome is assigned a unique certainty rating; thus different levels of certainty may exist in the body of evidence being examined.

GRADE certainty ratings and their interpretation symbols a

a From the GRADE Handbook [ 192 ]

GRADE’s developers acknowledge some subjectivity is involved in this process [ 193 ]. In addition, they emphasize that both the criteria for rating evidence up and down (Table 5.1 ) as well as the four overall certainty ratings (Table 5.2 ) reflect a continuum as opposed to discrete categories [ 194 ]. Consequently, deciding whether a study falls above or below the threshold for rating up or down may not be straightforward, and preliminary overall certainty ratings may be intermediate (eg, between low and moderate). Thus, the proper application of GRADE requires systematic review authors to take an overall view of the body of evidence and explicitly describe the rationale for their final ratings.
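
The sketch below is a purely schematic illustration (not an official GRADE algorithm) of how a starting level determined by study type is moved down or up to one of the four certainty categories; as emphasized above, real GRADE ratings involve judgments along a continuum that must be explicitly justified, and rating up is mainly relevant to non-randomized evidence.

```python
# Schematic sketch of GRADE starting points and rating adjustments for one outcome.
# Not an official algorithm: actual ratings require explicit, reasoned judgments.
LEVELS = ["very low", "low", "moderate", "high"]

def schematic_grade(randomized: bool, downgrades: int, upgrades: int) -> str:
    """Bodies of evidence from RCTs start at 'high' and those from NRSI at 'low';
    the rating is then moved down (e.g., for risk of bias, inconsistency, indirectness,
    imprecision, publication bias) or up (e.g., for large effect, dose-response),
    bounded by the four GRADE categories."""
    start = LEVELS.index("high") if randomized else LEVELS.index("low")
    final = min(len(LEVELS) - 1, max(0, start - downgrades + upgrades))
    return LEVELS[final]

# Hypothetical example: RCT evidence downgraded one level for imprecision.
print(schematic_grade(randomized=True, downgrades=1, upgrades=0))  # -> "moderate"
```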

Advantages of GRADE

Outcomes important to the individuals who experience the problem of interest maintain a prominent role throughout the GRADE process [ 191 ]. These outcomes must inform the research questions (eg, PICO [population, intervention, comparator, outcome]) that are specified a priori in a systematic review protocol. Evidence for these outcomes is then investigated and each critical or important outcome is ultimately assigned a certainty of evidence rating as the end point of the review. Notably, limitations of the included studies have an impact at the outcome level. Ultimately, the certainty ratings for each outcome reported in a systematic review are considered by guideline panels. They use a different process to formulate recommendations that involves assessment of the evidence across outcomes [ 201 ]. It is beyond our scope to describe the GRADE process for formulating recommendations; however, it is critical to understand how these two outcome-centric applications of certainty of evidence within the GRADE framework are related and distinguished. An in-depth illustration using examples from recently published evidence syntheses and CPGs is provided in Additional File 5 A (Table AF5A-1).

The GRADE approach is applicable irrespective of whether the certainty of the primary research evidence is high or very low; in some circumstances, indirect evidence of higher certainty may be considered if direct evidence is unavailable or of low certainty [ 27 ]. In fact, most interventions and outcomes in medicine have low or very low certainty of evidence based on GRADE and there seems to be no major improvement over time [ 202 , 203 ]. This is still a very important (even if sobering) realization for calibrating our understanding of medical evidence. A major appeal of the GRADE approach is that it offers a common framework that enables authors of evidence syntheses to make complex judgments about evidence certainty and to convey these with unambiguous terminology. This prevents some common mistakes made by review authors, including overstating results (or under-reporting harms) [ 187 ] and making recommendations for treatment. This is illustrated in Table AF5A-2 (Additional File 5 A), which compares the concluding statements made about overall certainty in a systematic review with and without application of the GRADE approach.

Theoretically, application of GRADE should improve consistency of judgments about certainty of evidence, both between authors and across systematic reviews. In one empirical evaluation conducted by the GRADE Working Group, interrater reliability of two individual raters assessing certainty of the evidence for a specific outcome increased from ~ 0.3 without using GRADE to ~ 0.7 by using GRADE [ 204 ]. However, others report variable agreement among those experienced in GRADE assessments of evidence certainty [ 190 ]. Like any other tool, GRADE requires training in order to be properly applied. The intricacies of the GRADE approach and the necessary subjectivity involved suggest that improving agreement may require strict rules for its application; alternatively, use of general guidance and consensus among review authors may result in less consistency but provide important information for the end user [ 190 ].
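
For readers unfamiliar with agreement statistics of the kind quoted above, the sketch below computes an unweighted Cohen's kappa for two raters' certainty ratings on a handful of hypothetical outcomes; the statistic and data used in the cited evaluations are not reproduced here, and a weighted kappa or other measure may be more appropriate for ordinal GRADE categories.

```python
# Unweighted Cohen's kappa for two raters' certainty ratings (hypothetical data).
from collections import Counter

rater_1 = ["high", "moderate", "low", "low", "very low", "moderate", "low", "high"]
rater_2 = ["high", "low", "low", "low", "very low", "moderate", "moderate", "high"]

n = len(rater_1)
observed = sum(a == b for a, b in zip(rater_1, rater_2)) / n

# Chance agreement expected from each rater's marginal category frequencies
counts_1, counts_2 = Counter(rater_1), Counter(rater_2)
expected = sum(counts_1[c] * counts_2[c] for c in set(rater_1) | set(rater_2)) / n**2

kappa = (observed - expected) / (1 - expected)
print(f"Observed agreement {observed:.2f}, chance agreement {expected:.2f}, kappa {kappa:.2f}")
```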

GRADE caveats

Simply invoking “the GRADE approach” does not automatically ensure GRADE methods were employed by authors of a systematic review (or developers of a CPG). Table 5.3 lists the criteria the GRADE working group has established for this purpose. These criteria highlight the specific terminology and methods that apply to rating the certainty of evidence for outcomes reported in a systematic review [ 191 ], which is different from rating overall certainty across outcomes considered in the formulation of recommendations [ 205 ]. Modifications of standard GRADE methods and terminology are discouraged as these may detract from GRADE’s objectives to minimize conceptual confusion and maximize clear communication [ 206 ].

Criteria for using GRADE in a systematic review a

a Adapted from the GRADE working group [ 206 ]; this list does not contain the additional criteria that apply to the development of a clinical practice guideline

Nevertheless, GRADE is prone to misapplications [ 207 , 208 ], which can distort a systematic review’s conclusions about the certainty of evidence. Systematic review authors without proper GRADE training are likely to misinterpret the terms “quality” and “grade” and to misunderstand the constructs assessed by GRADE versus other appraisal tools. For example, review authors may reference the standard GRADE certainty ratings (Table 5.2 ) to describe evidence for their outcome(s) of interest. However, these ratings are invalidated if authors omit or inadequately perform RoB evaluations of each included primary study. Such deficiencies in RoB assessments are unacceptable but not uncommon, as reported in methodological studies of systematic reviews and overviews [ 104 , 186 , 209 , 210 ]. GRADE ratings are also invalidated if review authors do not formally address and report on the other criteria (Table 5.1 ) necessary for a GRADE certainty rating.

Other caveats pertain to application of a GRADE certainty of evidence rating in various types of evidence syntheses. Current adaptations of GRADE are described in Additional File 5 B and included on Table 6.3 , which is introduced in the next section.

Concise Guide to best practices for evidence syntheses, version 1.0 a

AMSTAR A MeaSurement Tool to Assess Systematic Reviews, CASP Critical Appraisal Skills Programme, CERQual Confidence in the Evidence from Reviews of Qualitative research, ConQual Establishing Confidence in the output of Qualitative research synthesis, COSMIN COnsensus-based Standards for the selection of health Measurement Instruments, DTA diagnostic test accuracy, eMERGe meta-ethnography reporting guidance, ENTREQ enhancing transparency in reporting the synthesis of qualitative research, GRADE Grading of Recommendations Assessment, Development and Evaluation, MA meta-analysis, NRSI non-randomized studies of interventions, P protocol, PRIOR Preferred Reporting Items for Overviews of Reviews, PRISMA Preferred Reporting Items for Systematic Reviews and Meta-Analyses, PROBAST Prediction model Risk Of Bias ASsessment Tool, QUADAS quality assessment of studies of diagnostic accuracy included in systematic reviews, QUIPS Quality In Prognosis Studies, RCT randomized controlled trial, RoB risk of bias, ROBINS-I Risk Of Bias In Non-randomised Studies of Interventions, ROBIS Risk of Bias in Systematic Reviews, ScR scoping review, SWiM synthesis without meta-analysis

a Superscript numbers represent citations provided in the main reference list. Additional File 6 lists links to available online resources for the methods and tools included in the Concise Guide

b The MECIR manual [ 30 ] provides Cochrane’s specific standards for both reporting and conduct of intervention systematic reviews and protocols

c Editorial and peer reviewers can evaluate completeness of reporting in submitted manuscripts using these tools. Authors may be required to submit a self-reported checklist for the applicable tools

d The decision flowchart described by Flemming and colleagues [ 223 ] is recommended for guidance on how to choose the best approach to reporting for qualitative reviews

e SWiM was developed for intervention studies reporting quantitative data. However, if there is not a more directly relevant reporting guideline, SWiM may prompt reviewers to consider the important details to report. (Personal Communication via email, Mhairi Campbell, 14 Dec 2022)

f JBI recommends their own tools for the critical appraisal of various quantitative primary study designs included in systematic reviews of intervention effectiveness, prevalence and incidence, and etiology and risk as well as for the critical appraisal of systematic reviews included in umbrella reviews. However, except for the JBI Checklists for studies reporting prevalence data and qualitative research, the development, validity, and reliability of these tools are not well documented

g Studies that are not RCTs or NRSI require tools developed specifically to evaluate their design features. Examples include single case experimental design [ 155 , 156 ] and case reports and series [ 82 ]

h The evaluation of methodological quality of studies included in a synthesis of qualitative research is debatable [ 224 ]. Authors may select a tool appropriate for the type of qualitative synthesis methodology employed. The CASP Qualitative Checklist [ 218 ] is an example of a published, commonly used tool that focuses on assessment of the methodological strengths and limitations of qualitative studies. The JBI Critical Appraisal Checklist for Qualitative Research [ 219 ] is recommended for reviews using a meta-aggregative approach

i Consider including risk of bias assessment of included studies if this information is relevant to the research question; however, scoping reviews do not include an assessment of the overall certainty of a body of evidence

j Guidance available from the GRADE working group [ 225 , 226 ]; also recommend consultation with the Cochrane diagnostic methods group

k Guidance available from the GRADE working group [ 227 ]; also recommend consultation with Cochrane prognostic methods group

l Used for syntheses in reviews with a meta-aggregative approach [ 224 ]

m Chapter 5 in the JBI Manual offers guidance on how to adapt GRADE to prevalence and incidence reviews [ 69 ]

n Janiaud and colleagues suggest criteria for evaluating evidence certainty for meta-analyses of non-randomized studies evaluating risk factors [ 228 ]

o The COSMIN user manual provides details on how to apply GRADE in systematic reviews of measurement properties [ 229 ]

The expected culmination of a systematic review should be a rating of overall certainty of a body of evidence for each outcome reported. The GRADE approach is recommended for making these judgments for outcomes reported in systematic reviews of interventions and can be adapted for other types of reviews. This represents the initial step in the process of making recommendations based on evidence syntheses. Peer reviewers should ensure authors meet the minimal criteria for supporting the GRADE approach when reviewing any evidence synthesis that reports certainty ratings derived using GRADE. Authors and peer reviewers of evidence syntheses unfamiliar with GRADE are encouraged to seek formal training and take advantage of the resources available on the GRADE website [ 211 , 212 ].

Part 6. Concise Guide to best practices

Accumulating data in recent years suggest that many evidence syntheses (with or without meta-analysis) are not reliable. This relates in part to the fact that their authors, who are often clinicians, can be overwhelmed by the plethora of ways to evaluate evidence. They tend to resort to familiar but often inadequate, inappropriate, or obsolete methods and tools and, as a result, produce unreliable reviews. These manuscripts may not be recognized as such by peer reviewers and journal editors who may disregard current standards. When such a systematic review is published or included in a CPG, clinicians and stakeholders tend to believe that it is trustworthy. This supports a vicious cycle in which inadequate methodology is rewarded and potentially misleading conclusions are accepted. There is no quick or easy way to break this cycle; however, increasing awareness of best practices among all these stakeholder groups, who often have minimal (if any) training in methodology, may begin to mitigate it. This is the rationale for inclusion of Parts 2 through 5 in this guidance document. These sections present core concepts and important methodological developments that inform current standards and recommendations. We conclude by taking a direct and practical approach.

Inconsistent and imprecise terminology used in the context of development and evaluation of evidence syntheses is problematic for authors, peer reviewers and editors, and may lead to the application of inappropriate methods and tools. In response, we endorse use of the basic terms (Table 6.1 ) defined in the PRISMA 2020 statement [ 93 ]. In addition, we have identified several problematic expressions and nomenclature. In Table 6.2 , we compile suggestions for preferred terms less likely to be misinterpreted.

Terms relevant to the reporting of health care–related evidence syntheses a

a Reproduced from Page and colleagues [ 93 ]

Terminology suggestions for health care–related evidence syntheses

a For example, meta-aggregation, meta-ethnography, critical interpretative synthesis, realist synthesis

b This term may best apply to the synthesis in a mixed methods systematic review in which data from different types of evidence (eg, qualitative, quantitative, economic) are summarized [ 64 ]

We also propose a Concise Guide (Table 6.3 ) that summarizes the methods and tools recommended for the development and evaluation of nine types of evidence syntheses. Suggestions for specific tools are based on the rigor of their development as well as the availability of detailed guidance from their developers to ensure their proper application. The formatting of the Concise Guide addresses a well-known source of confusion by clearly distinguishing the underlying methodological constructs that these tools were designed to assess. Important clarifications and explanations follow in the guide’s footnotes; associated websites, if available, are listed in Additional File 6 .

To encourage uptake of best practices, journal editors may consider adopting or adapting the Concise Guide in their instructions to authors and peer reviewers of evidence syntheses. Given the evolving nature of evidence synthesis methodology, the suggested methods and tools are likely to require regular updates. Authors of evidence syntheses should monitor the literature to ensure they are employing current methods and tools. Some types of evidence syntheses (eg, rapid, economic, methodological) are not included in the Concise Guide; for these, authors are advised to obtain recommendations for acceptable methods by consulting with their target journal.

We encourage the appropriate and informed use of the methods and tools discussed throughout this commentary and summarized in the Concise Guide (Table 6.3 ). However, we caution against their application in a perfunctory or superficial fashion. This is a common pitfall among authors of evidence syntheses, especially as the standards of such tools become associated with acceptance of a manuscript by a journal. Consequently, published evidence syntheses may show improved adherence to the requirements of these tools without necessarily making genuine improvements in their performance.

In line with our main objective, the suggested tools in the Concise Guide address the reliability of evidence syntheses; however, we recognize that the utility of systematic reviews is an equally important concern. An unbiased and thoroughly reported evidence synthesis may still not be highly informative if the evidence itself that is summarized is sparse, weak and/or biased [ 24 ]. Many intervention systematic reviews, including those developed by Cochrane [ 203 ] and those applying GRADE [ 202 ], ultimately find no evidence, or find the evidence to be inconclusive (eg, “weak,” “mixed,” or of “low certainty”). This often reflects the primary research base; however, it is important to know what is known (or not known) about a topic when considering an intervention for patients and discussing treatment options with them.

Alternatively, the frequency of “empty” and inconclusive reviews published in the medical literature may relate to limitations of conventional methods that focus on hypothesis testing; these have emphasized the importance of statistical significance in primary research and effect sizes from aggregate meta-analyses [ 183 ]. It is becoming increasingly apparent that this approach may not be appropriate for all topics [ 130 ]. Development of the GRADE approach has facilitated a better understanding of significant factors (beyond effect size) that contribute to the overall certainty of evidence. Other notable responses include the development of integrative synthesis methods for the evaluation of complex interventions [ 230 , 231 ], the incorporation of crowdsourcing and machine learning into systematic review workflows (eg, the Cochrane Evidence Pipeline) [ 2 ], the paradigm shift to living systematic review and NMA platforms [ 232 , 233 ], and the proposal of a new evidence ecosystem that fosters bidirectional collaborations and interactions among a global network of evidence synthesis stakeholders [ 234 ]. These evolutions in data sources and methods may ultimately make evidence syntheses more streamlined, less duplicative, and, more importantly, more useful for timely policy and clinical decision-making; however, that will only be the case if they are rigorously reported and conducted.

We look forward to others’ ideas and proposals for the advancement of methods for evidence syntheses. For now, we encourage dissemination and uptake of the currently accepted best tools and practices for their development and evaluation; at the same time, we stress that uptake of appraisal tools, checklists, and software programs cannot substitute for proper education in the methodology of evidence syntheses and meta-analysis. Authors, peer reviewers, and editors must strive to make accurate and reliable contributions to the present evidence knowledge base; online alerts, upcoming technology, and accessible education may make this more feasible than ever before. Our intention is to improve the trustworthiness of evidence syntheses across disciplines, topics, and types of evidence syntheses. All of us must continue to study, teach, and act cooperatively for that to happen.

Acknowledgements

Michelle Oakman Hayes for her assistance with the graphics, Mike Clarke for his willingness to answer our seemingly arbitrary questions, and Bernard Dan for his encouragement of this project.

Authors’ contributions

All authors participated in the development of the ideas, writing, and review of this manuscript. The author(s) read and approved the final manuscript.

Funding

The work of John Ioannidis has been supported by an unrestricted gift from Sue and Bob O’Donnell to Stanford University.

Declarations

The authors declare no competing interests.

This article has been published simultaneously in BMC Systematic Reviews, Acta Anaesthesiologica Scandinavica, BMC Infectious Diseases, British Journal of Pharmacology, JBI Evidence Synthesis, the Journal of Bone and Joint Surgery Reviews, and the Journal of Pediatric Rehabilitation Medicine.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Systematic Reviews

  • Developing a Research Question
  • Developing a Protocol
  • Literature Searching
  • Screening References
  • Data Extraction
  • Quality Assessment
  • Reporting Results
  • Related Guides
  • Getting Help

Developing A Research Question

There are several different methods researchers might use in developing a research question. The best method to use depends on the discipline and nature of the research you hope to review. Consider the following example question templates.

Variations to PICO

Using PICO can help you define and narrow your research question so that it is specific.

  • P  - Patient, population, or problem
  • I   - Intervention
  • C - Comparison or Control
  • O - Outcome

Think about whether your question is relevant to practitioners, and whether the answer will help people (doctors, patients, nurses) make better informed health care decisions.

You can find out more about properly formulated questions by reviewing the YouTube video below.

The PICO method is used frequently, though there are some variations that exist to add other specifications to studies collected. Some variations include PICOSS, PICOT, and PICOC.

  • PICOSS - In addition to the fundamental components of PICO, additional criteria are added for study design (S) and setting (S).
  • PICOT - (T), in this instance, represents timeframe. This method could be used to narrow down the length of treatment or intervention in health research.
  • PICOC - In research where there may not be a comparison, Co instead denotes the context of the population and intervention being studied.

Using SPIDER can help you define and narrow your research question so that it is specific. This is typically used in qualitative research (Cooke, Smith, & Booth, 2012).

  • S - Sample
  • PI - Phenomenon of Interest
  • D - Design
  • E - Evaluation
  • R - Research type

Yet another search measure relating to Evidence-Based Practice (EBP) is SPICE. This framework builds on PICO by considering two additional axes: perspective and setting (Booth, 2006).

  • S - Setting
  • P - Perspective
  • I - Intervention
  • C - Comparison
  • E - Evaluation

Inclusion and Exclusion Criteria

Setting inclusion and exclusion criteria is a critical step in the systematic review process.

  • Inclusion criteria determine what characteristics are needed for a study to be included in a systematic review.
  • Exclusion criteria denote what attributes disqualify a study from consideration in a systematic review.
  • Knowing what to exclude or include helps speed up the review process.

These criteria will be used at different parts of the review process, including in search statements and the screening process.

Has this review been done?

After developing the research question, it is necessary to confirm that the review has not previously been conducted (or is currently in progress).

Make sure to check for both published reviews and registered protocols (to see if the review is in progress). Do a thorough search of appropriate databases; if additional help is needed,  consult a librarian  for suggestions.

  • URL: https://guides.library.duq.edu/systematicreviews

University of Bristol

Systematic Reviews

  • Introduction
  • Formulating a focused question
  • Creating a protocol
  • Creating a search strategy
  • Grey literature
  • Managing your search results
  • Analysing your results
  • Writing up your review

Formulating a focused question

It is essential that you have a focused research question before you begin searching the literature as part of your Systematic Review. Broad, unfocused questions can result in being overwhelmed with unmanageable numbers of papers, many of which may prove to be irrelevant.

There are several different frameworks (see below for some examples) that you might want to use in order to identify the key concepts within your research question. These frameworks can also help as you start to consider what keywords you will need to use in your database search strategy. 

Clinical questions include questions about treatment, diagnosis, prognosis and the causes of health conditions. PICO can help you to think about your question and what might be included in your search.

P - Patient, Population, Problem  (Who is/are your patient or patients? Think about all the factors which might be important: the condition, the social setting etc)

I - Intervention (Think about therapy: drugs, surgery, physiotherapy, etc.)

C - Comparison  (Do you want to make a comparison? Think about, for example: - a control group: no intervention; treatment as usual - conventional treatment vs new treatment  - placebo)

O - Outcome (Think about, for example: - alleviation of symptoms - mortality rates - adverse effects - quality of life: better/worse? - cost effectiveness - practicality)

This is helpful for formulating your question for a review based on qualitative research.

S - Sample

PI - Phenomenon of Interest

D - Designs

E - Evaluation

R - Research type

This framework is also good for a review based on qualitative research.

S - Setting

P - Population

I - Intervention

C - Comparator

E - Evaluation

Next step: Creating a protocol

  • URL: https://bristol.libguides.com/systematic-reviews

UCI Libraries

Systematic reviews & evidence synthesis methods.

  • Schedule a Consultation / Meet our Team
  • What is Evidence Synthesis?
  • Types of Evidence Synthesis
  • Evidence Synthesis Across Disciplines
  • Finding and Appraising Existing Systematic Reviews
  • 0. Preliminary Searching
  • 1. Develop a Protocol
  • 2. Draft your Research Question
  • 3. Select Databases
  • 4. Select Grey Literature Sources
  • 5. Write a Search Strategy
  • 6. Register a Protocol
  • 7. Translate Search Strategies
  • 8. Citation Management
  • 9. Article Screening
  • 10. Risk of Bias Assessment
  • 11. Data Extraction
  • 12. Synthesize, Map, or Describe the Results
  • Open Access Evidence Synthesis Resources

Developing a Research Question

Developing your research question is one of the most important steps in the evidence synthesis process. At this stage in the process, you and your team have identified a knowledge gap in your field and are aiming to answer a specific question:

  • If X is prescribed, then what will happen to patients (Y)?

OR assess an intervention:

  • How does X affect Y?

OR synthesize the existing evidence:

  • What is the nature of X?

Whatever your aim, formulating a clear, well-defined research question of appropriate scope is key to a successful evidence synthesis. The research question will be the foundation of your synthesis, and from it your research team will identify 2-5 possible search concepts. These search concepts will later be used in step 5 to build your search strategy.

Research Question Frameworks

Formulating a research question takes time, and your team may go through several versions until settling on the right one. To help, some research question frameworks are listed below. There are dozens of these frameworks; for a comprehensive but concise overview of almost 40 of them, see this review from BMJ Global Health: Rapid review of existing question formulation frameworks.

Think of these frameworks as you would the frame of a house or building: a framework is there to provide support and act as a scaffold for the rest of the structure. In the same way, a research question framework can help structure your evidence synthesis question. Probably the most common framework is PICO:

PICO for Quantitative Studies

  • P       Population/Problem
  • I        Intervention/Exposure
  • C       Comparison
  • O      Outcome

Example: Is gabapentin (intervention), compared to placebo (comparison), effective in decreasing pain symptoms (outcome) in middle aged male amputees suffering phantom limb pain (population)?
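To see how such a question feeds the later search steps, the population and intervention concepts from this example could be translated into draft search lines. The sketch below uses Ovid MEDLINE syntax; the subject headings and keywords shown are illustrative assumptions to be verified against the database thesaurus, and the comparison and outcome concepts are deliberately left out, as they often are, to keep the search sensitive:

    1. exp Phantom Limb/ or (phantom adj2 (limb* or pain*)).ti,ab.
    2. Gabapentin/ or gabapentin.ti,ab.
    3. 1 and 2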

While PICO is a helpful framework for clinical research questions, it may not be the best choice for other types of research questions, especially outside the health sciences. Here are a few others:

PICo for Qualitative Studies

  • P       Population/Problem
  • I         Phenomenon of Interest 
  • Co    Context

Example: What are the experiences (phenomenon of interest) of caregivers providing home based care to patients with Alzheimer's disease (population) in Australia (context)?

SPICE

  • S   Setting
  • P   Perspective (for whom)
  • I    Intervention/Exposure
  • C   Comparison
  • E   Evaluation

Example: What are the benefits (evaluation) of a doula (intervention) for low income mothers (perspective) in the developed world (setting) compared to no support (comparison)?

SPIDER

  • S     Sample
  • PI    Phenomenon of Interest
  • D     Design
  • E     Evaluation
  • R     Study Type

Example: What are the experiences (evaluation) of women (sample) undergoing IVF treatment (phenomenon of interest), as assessed by the designs and study types below?

Design: questionnaire or survey or interview

Study Type: qualitative or mixed method

Inclusion/Exclusion Criteria

Inclusion and exclusion criteria are developed after a research question is finalized but before a search is carried out. They determine the limits for the evidence synthesis and are typically reported in the methods section of the publication. For unfamiliar or unclear concepts, a definition may be necessary to adequately describe the criterion for readers.

An image describing various inclusion and exclusion criteria for systematic reviews.

Image from The University of Melbourne Library

Other inclusion/exclusion criteria can include the sample size, method of sampling or availability of a relevant comparison group in the study. Where a single study is reported across multiple papers the findings from the papers may be merged or only the latest data may be included.

How a Librarian Can Help

Librarians can help you learn how to search for existing information on your topic. Finding existing reviews on your topic will inform the development of your research question, identify gaps, and confirm that you are not duplicating the efforts of previous reviews. Contact the Evidence Synthesis Library Team  to learn more about developing a research question.

Video: Formulating a research question (4:43 minutes)

  • URL: https://guides.lib.uci.edu/evidence-synthesis


Systematic reviews

  • Introduction to systematic reviews
  • Steps in a systematic review

Formulating a clear and concise question

PICO framework, other search frameworks.

  • Create a protocol (plan)
  • Sources to search
  • Conduct a thorough search
  • Post search phase
  • Select studies (screening)
  • Appraise the quality of the studies
  • Extract data, synthesise and analyse
  • Interpret results and write
  • Guides and manuals
  • Training and support

General principles

"A good systematic review is based on a well formulated, answerable question. The question guides the review by defining which studies will be included, what the search strategy to identify the relevant primary studies should be, and which data need to be extracted from each study."

A systematic review question needs to be well formulated, focused, and answerable.

You may find it helpful to use a search framework, such as those listed below, to help you to refine your research question, but it is not mandatory. Similarly, you may not always need to use every aspect of the framework in order to build a workable research question.

Counsell C. Formulating questions and locating primary studies for inclusion in systematic reviews. Ann Intern Med. 1997;127(5):380–387.

The PICO tool was created to help formulate a focused research question. PICO is a mnemonic for Population, Intervention, Comparison, and Outcome. These elements help define the core concepts of the question, which will then be used in the literature search.

The elements of PICO

Population:

Who or what is the topic of interest. In the health sciences this may be a disease or a condition; in the social sciences it may be a social group with a particular need.

Intervention:

The intervention is the effect or change applied to the population in question. In the health sciences, this could be a treatment, such as a drug, a procedure, or a preventative activity. Depending on the discipline, the intervention could be a social policy, education, a ban, or legislation.

Comparison:

The comparison is what the intervention is measured against: if the intervention were a drug, it might be a similar drug whose effectiveness is compared. Sometimes the comparator is a placebo, or there is no comparison.

Outcome:

The outcomes in PICO represent the outcomes of interest for the research question. The outcome measures will vary according to the question but provide the data against which the effectiveness of the intervention is measured.

  • Examples of using the PICO framework (PDF, 173KB) This document contains worked examples of how to use the PICO search framework as well as other frameworks based on PICO.

Not all systematic review questions are well served by the PICO mnemonic, and a number of other models have been created. These include ECLIPSE (Wildridge & Bell, 2002), SPICE (Booth, 2006), and SPIDER (Cooke, Smith, & Booth, 2012).

Wildridge V, Bell L. How CLIP became ECLIPSE: a mnemonic to assist in searching for health policy/management information. Health Info Libr J. 2002;19(2):113–115.

Booth A. Clear and present questions: formulating questions for evidence based practice. Library Hi Tech. 2006;24(3):355–368.

Cooke A, Smith D, Booth A. Beyond PICO: the SPIDER tool for qualitative evidence synthesis. Qual Health Res. 2012;22(10):1435–1443.

Remember: you do not have to use a search framework but it can help you to focus your research question and identify the key concepts and terms that you can use in your search. Similarly, you may not need to use all of the elements in your chosen framework, only the ones that are useful for your individual research question.

  • Using the SPIDER search framework (PDF, 134 KB) This document shows how you can use the SPIDER framework to guide your search.
  • Using the SPICE search framework (PDF, 134 KB) This document shows how you can use the SPICE framework to guide your search.
  • Using the ECLIPSE search framework (PDF, 145 KB) This document shows how you can use the ECLIPSE framework to guide your search.
  • URL: https://guides.library.uq.edu.au/research-techniques/systematic-reviews


Systematic review

  • How we can help
  • What is a systematic review?
  • Should I conduct a systematic review or a scoping review?
  • How do I develop a research question for systematic review?
  • Checking for existing systematic reviews
  • Do I need to register a protocol?
  • What sources should I search?
  • How do I develop a search strategy?
  • How can I evaluate the quality of my search strategy?
  • How can I record my search strategy?
  • How can I manage the research?
  • How can I find and connect to full-text articles?
  • Recommended reading

How do I develop a research question for a systematic review?

Once you have identified a topic to investigate you need to define a clear and answerable question for the systematic review. Some initial scoping searches will be needed to check the question is:

  • Manageable - not too broad or too narrow
  • Answerable - relevant literature is available
  • Original - a similar review has not already been done recently

You may need to explore a few different versions of the question depending on what you find during these initial searches. The question must be clearly defined and it may be useful to use a research question framework such as PICO (population, intervention, comparison, outcome) or SPICE (setting, perspective, intervention, comparison, evaluation) to help structure both the question and the search terms. For more information and examples of research question frameworks visit our guide to research question frameworks. 

  • Research question frameworks
  • URL: https://gcu.libguides.com/systematicreview

Systematic & scoping reviews

Why use PICO?

Systematic reviews require focused clinical questions. PICO is a useful tool for formulating such questions. For information on PICO and other frameworks please see our tutorial below.


Systematic Reviews: Formulating the Research Question [PDF, 191kB]

This PowerPoint covers:

  • Formulating the research question
  • The PICO framework
  • Types of questions
  • Types of studies
  • Qualitative questions

PICO example for quantitative studies

The PICO (Patient, Intervention, Comparison, Outcome) framework is commonly used to develop focused clinical questions for quantitative systematic reviews.

Sample topic:

In middle aged women suffering migraines, is Botulinum toxin type A, compared to placebo, effective at decreasing migraine frequency?

P - Middle aged women suffering migraines

I - Botulinum toxin type A

C - Placebo

O - Decreased migraine frequency

Use the following worksheet to complete a search strategy:

PICO SR worksheet [DOCX, 17kB]

Completed PICO SR worksheet [PDF, 114kB]

PICo, SPICE or SPIDER example for qualitative studies

A modified version of PICO, known as PICo, can be used for qualitative questions.

What are caregivers’ experiences with providing home-based care to patients with HIV/AIDS in Africa?

P - Caregivers providing home-based care to persons with HIV/AIDS

I - Experiences

Co - Africa

Use the following worksheet to create a search strategy:

PICo qualitative worksheet [DOCX, 18kB]

Completed PICo Qualitative worksheet [PDF, 115kB]

The SPIDER framework is an alternative search strategy tool (based on PICo) for qualitative/mixed methods research.

What are the experiences of women undergoing IVF treatment?

S - Women

PI - IVF treatment

D - Questionnaire or survey or interview

E - Experiences or views or attitudes or feelings

R - Qualitative or mixed method

Cooke, A., Smith, D., & Booth, A. (2012). Beyond PICO: The SPIDER tool for qualitative evidence synthesis

Methley, A. M., Campbell, S., Chew-Graham, C., McNally, R., & Cheraghi-Sohi, S. (2014). PICO, PICOS and SPIDER: A comparison study of specificity and sensitivity in three search tools for qualitative systematic reviews

SPICE can be used for both qualitative and quantitative studies. SPICE stands for Setting (where?), Perspective (for whom?), Intervention (what?), Comparison (compared with what?) and Evaluation (with what result?).

What are the coping skills of parents of children with autism undergoing behavioural therapy in schools?

S - Schools

P - Parents of children with autism

I - Behavioural therapy

E - Coping skills

Booth, A. (2006). Clear and present questions: Formulating questions for evidence based practice


  • McGill Library

Systematic Reviews, Scoping Reviews, and other Knowledge Syntheses

  • Identifying the research question
  • Types of knowledge syntheses
  • Process of conducting a knowledge synthesis

Constructing a good research question

Inclusion/exclusion criteria, has your review already been done?, where to find other reviews or syntheses, references on question formulation frameworks.

  • Developing the protocol
  • Database-specific operators and fields
  • Search filters and tools
  • Exporting search results
  • Deduplicating
  • Grey literature and other supplementary search methods
  • Documenting the search methods
  • Updating the database searches
  • Resources for screening, appraisal, and synthesis
  • Writing the review
  • Additional training resources


Formulating a well-constructed research question is essential for a successful review. You should have a draft research question before you choose the type of knowledge synthesis that you will conduct, as the type of answers you are looking for will help guide your choice of knowledge synthesis.

Examples of systematic review and scoping review questions

  • Process of formulating a question

Developing a good research question is not a straightforward process and requires engaging with the literature as you refine and rework your idea.


Some questions that might be useful to ask yourself as you are drafting your question:

  • Does the question fit into the PICO question format?
  • What age group?
  • What type or types of conditions?
  • What intervention? How else might it be described?
  • What outcomes? How else might they be described?
  • What is the relationship between the different elements of your question?
  • Do you have several questions lumped into one? If so, should you split them into more than one review? Alternatively, do you have many questions that could be lumped into one review?

A good knowledge synthesis question will have the following qualities:

  • Be focused on a specific question with a meaningful answer
  • Retrieve a manageable number of results for the research team (is the number of results on your topic feasible for you to finish the review? Your initial literature searches should give you an idea, and a librarian can help you gauge the size of your question).

Considering the inclusion and exclusion criteria

It is important to think about which studies will be included in your review when you are writing your research question. The Cochrane Handbook chapter (linked below) offers guidance on this aspect.

McKenzie, J. E., Brennan, S. E., Ryan, R. E., Thomson, H. J., Johnston, R. V., & Thomas, J. (2021). Chapter 3: Defining the criteria for including studies and how they will be grouped for the synthesis. Retrieved from https://training.cochrane.org/handbook/current/chapter-03

Once you have a reasonably well defined research question, it is important to make sure your project has not already been recently and successfully undertaken. This means it is important to find out if there are other knowledge syntheses that have been published or that are in the process of being published on your topic.

If you are submitting your review or study for funding, for example, you may want to make a good case that your review or study is needed and not duplicating work that has already been successfully and recently completed—or that is in the process of being completed. It is also important to note that what is considered “recent” will depend on your discipline and the topic.

In the context of conducting a review, even if you do find one on your topic, it may be sufficiently out of date, or you may find other defensible reasons to undertake a new or updated one. In addition, looking at other knowledge syntheses published around your topic may help you refocus your question or redirect your research toward other gaps in the literature.

  • PROSPERO Search PROSPERO is an international, searchable database that allows free registration of systematic reviews, rapid reviews, and umbrella reviews with a health-related outcome in health & social care, welfare, public health, education, crime, justice, and international development. Note: PROSPERO does not accept scoping review protocols.


The Cochrane Library (including systematic reviews of interventions, diagnostic studies, prognostic studies, and more) is an excellent place to start, even if Cochrane reviews are also indexed in MEDLINE/PubMed.

By default, the Cochrane Library will display “Cochrane Reviews” (the Cochrane Database of Systematic Reviews, aka CDSR). When looking for systematic reviews, you can ignore the results that show up in the Trials tab: they are records of controlled trials.

The example shows the number of Cochrane Reviews with hiv AND circumcision in the title, abstract, or keywords.

Image showing results tabs in the Cochrane Library
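As an illustrative sketch only (not necessarily the exact strategy behind the example above), a search like the following in the Cochrane Library's advanced search would combine the two concepts in the title, abstract, or keywords, with the count of interest then read from the Cochrane Reviews tab:

    #1 (hiv):ti,ab,kw
    #2 (circumcision):ti,ab,kw
    #3 #1 AND #2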

  • Google Scholar

Subject-specific databases you can search to find existing or in-process reviews

Alternatively, you can use a search hedge/filter; for example, the filter used by  BMJ Best Practice  to find systematic reviews in Embase (can be copied and pasted into the Embase search box then combined with the concepts of your research question):

(exp review/ or (literature adj3 review$).ti,ab. or exp meta analysis/ or exp "Systematic Review"/) and ((medline or medlars or embase or pubmed or cinahl or amed or psychlit or psyclit or psychinfo or psycinfo or scisearch or cochrane).ti,ab. or RETRACTED ARTICLE/) or (systematic$ adj2 (review$ or overview)).ti,ab. or (meta?anal$ or meta anal$ or meta-anal$ or metaanal$ or metanal$).ti,ab.
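In practice, the filter is combined with your own topic concepts by line number. A minimal sketch in Ovid syntax, using purely illustrative free-text topic lines, might look like this:

    1. (hiv or human immunodeficiency virus).ti,ab.
    2. circumcis*.ti,ab.
    3. 1 and 2
    4. [the systematic review filter above, pasted as a single search line]
    5. 3 and 4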

Alternative interface to PubMed: You can also search MEDLINE on the Ovid platform, which we recommend for systematic searching. Perform a sufficiently developed search strategy (be as broad in your search as is reasonably possible) and then, from Additional Limits, select the publication type Systematic Reviews, or select the subject subset Systematic Reviews Pre 2019 for more sensitive/less precise results.

The subject subset for Systematic Reviews is based on the filter version used in PubMed.

For databases that offer a Methodology limit, perform a sufficiently developed search strategy (be as broad in your search as is reasonably possible) and then, from Additional Limits, select 0830 Systematic Review under Methodology.

See Systematic Reviews Search Strategy Applied in PubMed for details.
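If you prefer to check directly in PubMed, the equivalent filter has historically been available as the systematic[sb] subject subset, which can be appended to a topic search. An illustrative sketch (verify the current filter name in PubMed's documentation):

    (hiv AND circumcision) AND systematic[sb]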

  • BEME: Best Evidence Medical and Health Professional Education Lists published and in-progress reviews in health professional and medical education.
  • healthevidence.org Database of thousands of "quality-rated reviews on the effectiveness of public health interventions"
  • See also: Evidence-informed resources for Public Health

Munn Z, Stern C, Aromataris E, Lockwood C, Jordan Z. What kind of systematic review should I conduct? A proposed typology and guidance for systematic reviewers in the medical and health sciences. BMC Med Res Methodol. 2018;18(1):5. doi: 10.1186/s12874-017-0468-4

Scoping reviews: Developing the title and question. In: Aromataris E, Munn Z (Editors). JBI Manual for Evidence Synthesis. JBI; 2020. https://doi.org/10.46658/JBIMES-20-01


Find a librarian in your subject area to help you with your knowledge synthesis project.

Or contact the librarians at the Schulich Library of Physical Sciences, Life Sciences, and Engineering: [email protected]


Online training resources.

  • The Art and Science of Searching in Systematic Reviews Self-paced course on search strategies, information sources, project management, and reporting (National University of Singapore)
  • CERTaIN: Knowledge Synthesis: Systematic Reviews and Clinical Decision Making "Learn how to interpret and report systematic review and meta-analysis results, and define strategies for searching and critically appraising scientific literature" (MDAndersonX)
  • Cochrane Interactive Learning Online modules that walk you through the process of working on a Cochrane intervention review. Module 1 is free (login to access) but otherwise payment is required to complete the online training
  • Introduction to Systematic Review and Meta-Analysis Free coursera MOOC offered by Johns Hopkins University; covers the whole process of conducting a systematic review; week 3 focuses on searching and assessing bias
  • Online Methods Course in Systematic Review and Systematic Mapping "This step-by-step course takes time to explain the theory behind each part of the review process, and provides guidance, tips and advice for those wanting to undertake a full systematic review or map." Developed using an environmental framework (Collaboration for Environmental Evidence, Stockholm Environment Institute)
  • Scoping Review Methods for Producing Research Syntheses Two-part, online workshop sponsored by the Center on Knowledge Translation for Disability and Rehabilitation Research (KTDRR)
  • Systematic Reviews and Meta-Analysis Online overview of the steps involved in systematic reviews of quantitative studies, with options to practice. Courtesy of the Campbell Collaboration and the Open Learning Initiative (Carnegie Mellon University). Free pilot
  • Systematic Searches Developed by the Harvey Cushing/John Hay Whitney Medical Library (Yale University)
  • SYRCLE Course on Systematic Reviews of Animal Studies "This e-learning is an introduction to systematic reviews of animal studies. You will practice some of the steps of a systematic review and learn about the advantages as well as the limitations of this methodology. " Free to enroll as of October 26, 2023.
  • URL: https://libraryguides.mcgill.ca/knowledge-syntheses


University of Texas Libraries

Scoping Reviews

  • Formulate Question
  • Find Existing Reviews
  • Searching Systematically
  • Saved Searches and Alerts
  • Organizing & Exporting Results
  • Supplementary Searching
  • Screening & Sorting Results
  • Tools & Guides
  • Librarian Support

Types of Research Questions for Scoping Reviews

Scoping Reviews are broad by nature. As the name suggests, their purpose is to identify the scope of the literature on a topic. Therefore, the research questions that a Scoping Review can answer are also broad. Questions appropriate for Scoping Review methodology include:

  • What has been done?
  • What populations have been included?
  • What progress has been made in the research?
  • Does enough literature exist to conduct a systematic review?

Example: What is known from the literature about the use of animal-assisted therapies in people with mood disorders?

Developing your Research Question

Formulating a research question (RQ) may require some initial searching on your topic, especially if it is one you haven't already researched. There are three primary elements of a Scoping Review RQ. However, not all RQs need to include all 3:

  • Intervention

As you develop your research question, it is helpful to define your key concepts. This will help with the development of your inclusion criteria as well as your search strategy.

For example, what do you mean by adolescent? What age range are you including?

If you would like further help formulating your RQ, there are frameworks that can help as well as provide the foundational elements for your search strategy. Most of these frameworks were developed for the more specific RQs involved in Systematic Reviews, but they can also be helpful in thinking through your Scoping Review RQ.

  • Find frameworks
  • URL: https://guides.lib.utexas.edu/scopingreviews



The Literature Review

  • Publications: A World of Information
  • Primary, Secondary and Tertiary Sources
  • Types of Reviews and Their Differences

Steps 1 - 2: Getting Started

Steps 3 - 4: Searching and Organizing

Steps 5 - 7: Survey, Critique and Synthesize

When Can I Stop?

  • Information Sources: Where to Find Them
  • Webinar Recording (20 Minutes, Slides and Quiz)
  • Webinar Recording (50 Minutes, Slides and Quiz)

Related Guides and Sources

  • Nursing: Literature and Systematic Reviews A guide to literature and systematic reviews with a focus on nursing and health-related subjects. Information about PRISMA and other reporting guidelines are included.
  • The EBM Literature Review A guide for literature searches and literature reviews using evidence-based medicine (EBM) for medical and health sciences.


1.  Explore, select, then focus on a topic.

a. This is the beginning of your question formation, research question, or hypothesis.
b. Look at “recommendations for further research” in the conclusions of articles or other items.
c. Use this to formulate your goal or objective of the review.

2.  Prepare for your search.

a. Identify information sources for your topic and field: library and information resources, organizations, special collections or archives, etc.
b. Consider other fields that also study your topic. Some topics may be studied by multiple disciplines (e.g., aging can be studied in the fields of medicine, psychology and social work, each from its own frame of reference).
c. Familiarize yourself with your organization’s library or information services, including interlibrary loan or document delivery.
d. Choose keywords and a search strategy: terminology, synonyms, and combining terms (Boolean operators AND, OR, NOT); see the example after this list.
e. Read other literature reviews of your topics if available.
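For example, synonyms for each concept are grouped with OR and the concept groups are then joined with AND. The terms below are purely illustrative, and exact truncation and phrase syntax vary by database:

    (caregiver* OR carer*) AND ("home-based care" OR "home care") AND (Alzheimer* OR dementia)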

2(i).  (For Systematic Reviews or Meta-Analyses)  Select your inclusion / pre-selection criteria to identify the types of studies that will be most relevant to the review.

a.  Decide on the following to create your inclusion criteria:

  • Patient, population, or people who were studied.
  • Methodology:  type of study design or method.
  • Data and Statistics:  the collected data and statistics used to analyze them.
  • Time range of when a study was done or published.

b. Some disciplines, especially in the health or human services, may use PICO(T) or something similar to identify their inclusion criteria.


3. Start your search.

a. Keep track of your search strategies and results.
b. Skim, scan, read, or annotate what you find.
c. Try chain or citation searching to find additional documents. This is also known as pearl mining/growing, citation analysis, mining, or reference searching.
d. Manual or hand searching: visit the stacks or your journal’s online version. Also, browse, flip or skim through publications or journals on your topic.
e. Search alerts: create a personal account in library databases, search engines and journal packages to get notifications.

  • Saved searches:  many indexes and databases have features that will send alerts when new publications are available on your saved searches.
  • Table of content (TOC) alerts:  most journals and other publications will send the table of contents for their upcoming issues, which is good for locating the most current information or scholarly works.
  • Citation alerts:  when a work is cited, an alert can be sent that shows it has been used, which also can provide current or new information on a topic.

3(i).  (For Systematic Reviews or Meta-Analyses) Use a guideline and document your searches and protocol.

a.  Refer to a systematic review or meta-analysis guidelines such as PRISMA or one that applies to your discipline.

b.  Many published systematic reviews will document some or all of their searches.  This will include the search terms used, the index or database fields utilized in the search, and the number of results by each search.

c.  These types of reviews will often utilize a flowchart to demonstrate how many studies were included or excluded based on their inclusion criteria and further review of their content, and lead to a final number of selected studies.

d.  Select a repository to submit your systematic review protocol.  Some authors will register theirs in PROSPERO or similar ones.

4.  Organize your documents, data, and information.


5. Survey and review what is found.

6. Analyze and critique the literature

Remember, the literature review is an iterative process.  You may need to revisit parts of this search, find new or additional information, or update your research question based on what you find.

7.  Provide a synthesis and overview of the literature; this can be organized by themes or chronologically.


Time and Rigor.  

There typically isn't a set amount of time for searching that determines when to stop less rigorous literature reviews such as scoping, state-of-the-art or state-of-the-science, or narrative reviews. Reviews with higher levels of rigor, such as systematic reviews and meta-analyses, may take anywhere from 8 to 18 months or more to complete.

Points to Consider.  

The number of publications located usually won't indicate when to stop unless your review or assignment requirements specify this. Drawing on our conversations with professors and graduate students, and on our own experience with literature reviews, we suggest considering these points to decide when to finish your search:

  • Repetition of results with various searches.   If your search results become repetitive or continue to give the same publications after using various strategies, keywords, and search engines, you may have exhausted your search.
  • Sources Used for Search. Did you search the standard information sources in your subject area (e.g., PsycInfo for psychology) as well as general library sources (FAU Libraries' OneSearch)?
  • Amount of time spent and strategy used in your searches. Did you use both keywords (also known as natural language searching, or using everyday words) and controlled vocabulary in your search strategy? Using both approaches helps ensure you've done a thorough search.
  • Your search includes current or recent publications. Has your search included newer studies or recently published or created works?
  • Amount and quality of articles/evidence. Have you located strong or well-designed studies in your area? Are the results of the studies valid or reliable?
  • Being able to identify seminal works and authorities on topic.  Have you found important or highly cited articles on your topic?  Can you identify experts on your topic and their publications?  Do you know which institutions or organizations specialize on your topic? 
  • Feedback from your advisor, colleagues, etc.  Let them know your search strategy and what you are finding, and then ask for suggestions to your search.
  • URL: https://libguides.fau.edu/literature-review



  • Open access
  • Published: 01 June 2024

Biomarkers for personalised prevention of chronic diseases: a common protocol for three rapid scoping reviews

  • E Plans-Beriso   ORCID: orcid.org/0000-0002-9388-8744 1 , 2   na1 ,
  • C Babb-de-Villiers 3   na1 ,
  • D Petrova 2 , 4 , 5 ,
  • C Barahona-López 1 , 2 ,
  • P Diez-Echave 1 , 2 ,
  • O R Hernández 1 , 2 ,
  • N F Fernández-Martínez 2 , 4 , 5 ,
  • H Turner 3 ,
  • E García-Ovejero 1 ,
  • O Craciun 1 ,
  • P Fernández-Navarro 1 , 2 ,
  • N Fernández-Larrea 1 , 2 ,
  • E García-Esquinas 1 , 2 ,
  • V Jiménez-Planet 7 ,
  • V Moreno 2 , 8 , 9 ,
  • F Rodríguez-Artalejo 2 , 10 , 11 ,
  • M J Sánchez 2 , 4 , 5 ,
  • M Pollan-Santamaria 1 , 2 ,
  • L Blackburn 3 ,
  • M Kroese 3   na2 &
  • B Pérez-Gómez 1 , 2   na2  

Systematic Reviews, volume 13, Article number: 147 (2024)


Introduction

Personalised prevention aims to delay or avoid the occurrence, progression, and recurrence of disease through the adoption of targeted interventions that consider the individual's biological characteristics (including genetic data), environment and behaviour, as well as the socio-cultural context. This protocol summarises the main features of a rapid scoping review to map the research landscape on biomarkers, or combinations of biomarkers, that may help to better identify subgroups of individuals with different risks of developing specific diseases, in whom specific preventive strategies could have an impact on clinical outcomes.

This review is part of the “Personalised Prevention Roadmap for the future HEalThcare” (PROPHET) project, which seeks to highlight the gaps in current personalised preventive approaches, in order to develop a Strategic Research and Innovation Agenda for the European Union.

To systematically map and review the evidence on biomarkers, available or under development, that are or could be used for personalised prevention of cancer, cardiovascular and neurodegenerative diseases in the general population, in clinical or public health settings.

Three rapid scoping reviews are being conducted in parallel (February–June 2023), based on a common framework with some adjustments to suit each specific condition (cancer, cardiovascular or neurodegenerative diseases). MEDLINE and Embase will be searched to identify publications between 2020 and 2023. To shorten the time frames, only 10% of the papers will undergo dual screening by two reviewers and only English-language papers will be considered. The following information will be extracted by two reviewers from all the publications selected for inclusion: source type, citation details, country, inclusion/exclusion criteria (population, concept, context, type of evidence source), study methods, and key findings relevant to the review question(s). The selection criteria and the extraction sheet will be pre-tested. Relevant biomarkers for risk prediction and stratification will be recorded. Results will be presented graphically using an evidence map.

Inclusion criteria

Population: general adult populations or adults from specific pre-defined high-risk subgroups; concept: all studies focusing on molecular, cellular, physiological, or imaging biomarkers used for individualised primary or secondary prevention of the diseases of interest; context: clinical or public health settings.

Systematic review registration

https://doi.org/10.17605/OSF.IO/7JRWD (OSF registration DOI).


In recent years, innovative health research has moved quickly towards a new paradigm. The ability to analyse and process previously unseen sources and amounts of data, e.g. environmental, clinical, socio-demographic, epidemiological, and ‘omics-derived, has created opportunities in the understanding and prevention of chronic diseases, and in the development of targeted therapies that can cure them. This paradigm has come to be known as “personalised medicine”. According to the European Council Conclusion on personalised medicine for patients (2015/C 421/03), this term defines a medical model which involves characterisation of individuals’ genotypes, phenotypes and lifestyle and environmental exposures (e.g. molecular profiling, medical imaging, lifestyle and environmental data) for tailoring the right therapeutic strategy for the right person at the right time, and/or to determine the predisposition to disease and/or to deliver timely and targeted prevention [ 1 , 2 ]. In many cases, these personalised health strategies have been based on advances in fields such as molecular biology, genetic engineering, bioinformatics, diagnostic imaging and new ‘omics technologies, which have made it possible to identify biomarkers that have been used to design and adapt therapies to specific patients or groups of patients [ 2 ]. A biomarker is defined as a substance, structure, characteristic, or process that can be objectively quantified as an indicator of typical biological functions, disease processes, or biological reactions to exposure [ 3 , 4 ].

Adopting a public health perspective within this framework, one of the most relevant areas that would benefit from these new opportunities is the personalisation of disease prevention. Personalised prevention aims to delay or avoid the occurrence, progression and recurrence of disease by adopting targeted interventions that take into account biological information, environmental and behavioural characteristics, and the socio-economic and cultural context of individuals. These interventions should be timely, effective and equitable in order to maintain the best possible balance in lifetime health trajectory [ 5 ].

Among the main diseases that merit specific attention are chronic noncommunicable diseases, due to their incidence, their mortality or disability-adjusted life years [ 6 , 7 , 8 , 9 ]. Within the European Union (EU), in 2021, one-third of adults reported suffering from a chronic condition [ 10 ]. In addition, in 2019, the leading causes of mortality were cardiovascular disease (CVD) (35%), cancer (26%), respiratory disease (8%), and Alzheimer's disease (5%) [ 11 ]. For all of the above, in 2019, the PRECeDI consortium recommended the identification of biomarkers that could be used for the prevention of chronic diseases to integrate personalised medicine in the field of chronicity. This will support the goal of stratifying populations by indicating an individual's risk or resistance to disease and their potential response to drugs, guiding primary, secondary and tertiary preventive interventions [ 12 ]; here, primary prevention is understood as measures taken to prevent a disease before it occurs, secondary prevention as actions aimed at early detection, and tertiary prevention as interventions to prevent complications and improve quality of life in individuals already affected by a disease [ 4 ].

The “Personalised Prevention roadmap for the future HEalThcare” (PROPHET) project, funded by the European Union’s Horizon Europe research and innovation program and linked to ICPerMed, seeks to assess the effectiveness, clinical utility, and existing gaps in current personalised preventive approaches, as well as their potential to be implemented in healthcare settings. It also aims to develop a Strategic Research and Innovation Agenda (SRIA) for the European Union. This protocol corresponds to one of the first steps in the PROPHET project, namely a review that aims to map the evidence and highlight the evidence gaps in research on, and use of, biomarkers in personalised prevention in the general adult population, as well as their integration with digital technologies, including wearable devices, accelerometers, and other appliances utilised for measuring physical and physiological functions. These biomarkers may be already available or currently under development in the fields of cancer, CVD, and neurodegenerative diseases.

There is already a significant body of knowledge about primary and secondary prevention strategies for these diseases. For example, hypercholesterolemia or dyslipidaemia, hypertension, smoking, diabetes mellitus and obesity or levels of physical activity are known risk factors for CVD [ 6 , 13 ] and neurodegenerative diseases [ 14 , 15 , 16 ]; for cancer, a summary of lifestyle preventive actions with good evidence is included in the European code against cancer [ 17 ]. The question is whether there is any biomarker or combination of biomarkers that can help to better identify subgroups of individuals with different risks of developing a particular disease, in which specific preventive strategies could have an impact on clinical outcomes. Our aim in this context is to show the available research in this field.

Given the context and time constraints, the rapid scoping review design is the most appropriate method for providing landscape knowledge [ 18 ] and summary maps, such as Campbell evidence and gap maps [ 19 ]. Here, we present the protocol that will be used to elaborate three rapid scoping reviews and evidence maps of research on biomarkers investigated in relation to primary or secondary prevention of cancer, cardiovascular and neurodegenerative diseases, respectively. The results of these three rapid scoping reviews will help inform the development of the PROPHET SRIA, which will guide future policy for research in this field in the EU.

Review question

What biomarkers are being investigated in the context of personalised primary and secondary prevention of cancer, CVD and neurodegenerative diseases in the general adult population in clinical or public health settings?

Three rapid scoping reviews are being conducted between February and June 2023, in parallel, one for each disease group included (cancer, CVD and neurodegenerative diseases), using a common framework and specifying the adaptations to each disease group in search terms, data extraction and representation of results.

This research protocol, designed according to the Joanna Briggs Institute (JBI) guidance and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) checklist [ 20 , 21 , 22 ], was uploaded to the Open Science Framework for public consultation [ 23 ], with registration DOI https://doi.org/10.17605/OSF.IO/7JRWD . The protocol was also reviewed by experts in the field, after which modifications were incorporated.

Eligibility criteria

Following the PCC (population, concept and context) model [ 21 , 22 ], the included studies will meet the following eligibility criteria (Table  1 ):

Rationale for performing a rapid scoping review

As explained above, these scoping reviews are intended to be among the first materials produced in the PROPHET project, so that they can inform the first draft of the SRIA. Therefore, according to the planned timetable, the reviews should be completed in only 4 months. Thus, following recommendations from the Cochrane Rapid Review Methods Group [ 24 ], and given the large number of records expected according to the preliminary searches, specific restrictions were defined for the search: it was limited to a 3-year period (2020–2023), to English-language publications only, and to MEDLINE and EMBASE as sources. In addition, the title-abstract and full-text screening phases will be carried out by a single reviewer, after an initial training phase in which 10% of the records are assessed by two reviewers to ensure concordance between team members. This percentage could be increased if necessary.
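In Ovid, restrictions of this kind are typically applied as limits on the final combined line of the topic search. The following is only a minimal sketch of the syntax, with illustrative line numbers, not the project's actual strategy:

    3. 1 and 2
    4. limit 3 to (english language and yr="2020 - 2023")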

Rationale for population selection

These rapid scoping reviews are focused on the general adult population. In addition, they give attention to studies conducted among populations that present specific risk factors relevant to the selected diseases or that include these factors among those considered in the study.

For cancer, these risk (or preventive) factors include smoking [ 25 ], obesity [ 26 ], diabetes [ 27 , 28 , 29 ], Helicobacter pylori infection/colonisation [ 30 ], human papillomavirus (HPV) infection [ 30 ], human immunodeficiency virus (HIV) infection [ 30 ], alcohol consumption [ 31 ], liver cirrhosis and viral (HVB, HVC, HVD) hepatitis [ 32 ].

For CVD, we include hypercholesterolemia or dyslipidaemia, arterial hypertension, smoking, diabetes mellitus, chronic kidney disease, hyperglycaemia and obesity [ 6 , 13 ].

Risk groups for neurodegenerative diseases were defined based on the following risk factors: obesity [ 15 , 33 ], arterial hypertension [ 15 , 33 , 34 , 35 ], diabetes mellitus [ 15 , 33 , 34 , 35 ], dyslipidaemia [ 33 ], alcohol consumption [ 36 , 37 ] and smoking [ 15 , 16 , 33 , 34 ].

After the general search, only relevant and/or disease-specific subpopulations will be used for each specific disease. On the other hand, pregnancy is an exclusion criterion, as the very specific characteristics of this population group would require a specific review.

Rationale for disease selection

The search is limited to diseases with high morbidity and mortality within each of the three disease groups:

Cancer type

Due to time constraints, we only evaluate those malignant neoplasms with the greatest mortality and incidence rates in Europe, which according to the European Cancer Information System [ 38 ] are breast, prostate, colorectum, lung, bladder, pancreas, liver, stomach, kidney, and corpus uteri. Additionally, cervix uteri and liver cancers will also be included due to their preventable nature and/or the existence of public health screening programs [ 30 , 31 ].

Cardiovascular diseases

For CVD, we evaluate the following main causes of death: ischemic heart disease (49.2% of all CVD deaths), stroke (35.2%) (this includes ischemic stroke, intracerebral haemorrhage and subarachnoid haemorrhage), hypertensive heart disease (6.2%), cardiomyopathy and myocarditis (1.8%), atrial fibrillation and flutter (1.7%), rheumatic heart disease (1.6%), non-rheumatic valvular heart disease (0.9%), aortic aneurysm (0.9%), peripheral artery disease (0.4%) and endocarditis (0.4%) [ 6 ].

In this scoping review, specifically in the context of CVD, rheumatic heart disease and endocarditis are not considered because of their infectious aetiology. Arterial hypertension is a risk factor for many cardiovascular diseases and for the purposes of this review is considered as an intermediary disease that leads to CVD.

  • Neurodegenerative diseases

The leading noncommunicable neurodegenerative causes of death are Alzheimer’s disease or dementia (20%), Parkinson’s disease (2.5%), motor neuron diseases (0.4%) and multiple sclerosis (0.2%) [ 8 ]. Alzheimer’s disease, vascular dementia, frontotemporal dementia and Lewy body disease will be specifically searched, following the pattern of European dementia prevalence studies [ 39 ]. Additionally, because amyotrophic lateral sclerosis is the most common motor neuron disease, it is also included in the search [ 8 , 40 , 41 ].

Rationale for context

Public health and clinical settings from any geographical location are being considered. The searches will only consider the period between January 2020 and mid-February 2023 due to time constraints.

Rationale for type of evidence

Qualitative studies are not considered since they cannot answer the research question. Editorials and opinion pieces, protocols, and conference abstracts will also be excluded. Clinical practice guidelines are not included since the information they contain should be in the original studies and in reviews on which they are based.

Pilot study

We conducted a pilot study to test and refine the search strategies, selection criteria and data extraction sheet, as well as to familiarise the team with the software (Covidence [ 42 ]). The pilot study consisted of selecting, from the results of the preliminary search matrix, the 100 papers that best fit the topic and 100 papers at random. The team comprised 15 individual reviewers (in both the pilot and final reviews) who met daily to revise, enhance, and reach consensus on the search matrices, criteria, and data extraction sheets.

Regarding the selected databases and the platforms used, we conducted various tests, including PubMed/MEDLINE and Ovid/MEDLINE, as well as Ovid/Embase and Elsevier/Embase. Ultimately, we chose Ovid as the platform for accessing both MEDLINE and Embase, using the MeSH and Emtree thesauri. We manually translated terms between these thesauri to ensure consistency. Given that the review team was spread across the UK and Spain, we centralised the search results under the UK team's Ovid licence to ensure consistency. Additionally, using Ovid exclusively for accessing both MEDLINE and Embase streamlined the process and allowed easier access to preprints, which represent the latest research in this rapidly evolving field.

Identification of research

The searches are being conducted in MEDLINE via Ovid, Embase via Ovid and Embase preprints via Ovid. We also explored the feasibility of searching the CDC-Authored Genomics and Precision Health Publications Databases [ 43 ]. However, the lack of advanced tools to refine the search, as well as the unavailability of bulk downloading, prevented the inclusion of this data source. Nevertheless, a test search of 15 records for each disease group showed full overlap with MEDLINE and/or Embase.

Search strategy definition

An initial limited search of MEDLINE via PubMed and Ovid was undertaken to identify relevant papers on the topic. In this step, we identified key text words in their titles and abstracts, as well as thesaurus terms. The SR-Accelerator, Citationchaser, and Yale MeSH Analyzer tools were used to assist in the construction of the search matrix. With all this information, we developed a full search strategy adapted for each included database and information source, optimised by research librarians.

Study evidence selection

The complete search strategies are shown in Additional file 3. The three searches are being conducted in parallel. When performing the search, no limits to the type of study or setting are being applied.

Following each search, all identified citations will be collated and uploaded into Covidence (Veritas Health Innovation, Melbourne, Australia, available at www.covidence.org ) with the citation details, and duplicates will be removed.

In the title-abstract and full-text screening phases, the first 10% of the papers will be evaluated by two independent reviewers (200 or more papers in absolute numbers in the title-abstract phase). A meeting will then be held to discuss discrepancies, adjust the inclusion and exclusion criteria, and ensure consistency between reviewers' decisions. After that, the full screening of the search results will be performed by a single reviewer. Disagreements that arise between reviewers at each stage of the selection process will be resolved through discussion or with additional reviewers. We maintain an active forum to facilitate permanent contact among reviewers.

The results of the searches and the study inclusion processes will be reported and presented in a flow diagram following the PRISMA-ScR recommendations [ 22 ].

Expert consultation

The protocol has been refined after consultation with experts in each field (cancer, CVD, and neurodegenerative diseases) who gave input on the scope of the reviews regarding the diverse biomarkers, risk factors, outcomes, and types of prevention relevant to their fields of expertise. In addition, the search strategies have been peer-reviewed by a network of librarians (PRESS-forum in pressforum.pbworks.com) who kindly provided useful feedback.

Data extraction

We have developed a draft data extraction sheet, which is included as Additional file 4, based on the JBI recommendations [ 21 ]. Data extraction will include citation details, study design, population type, biomarker information (name, type, subtype, clinical utility, use of AI technology), disease (group, specific disease), prevention (primary or secondary, lifestyle if primary prevention), and subjective reviewer observations. The data extraction for all papers will be performed by two reviewers to ensure consistency in the classification of data.

Data analysis and presentation

The descriptive information about the studies collected in the previous phase will be coded according to predefined categories to allow the elaboration of visual summary maps that give readers and researchers a quick overview of the main results. As in the previous phases, this process will be carried out with the aid of Covidence.

Therefore, a summary of the extracted data will be presented in tables as well as in static maps and, especially, in interactive evidence gap maps (EGMs) created using EPPI-Mapper [ 44 ], an open-access web application developed in 2018 by the Evidence for Policy and Practice Information and Coordinating Centre (EPPI-Centre) and Digital Solution Foundry, in partnership with the Campbell Collaboration, which has become the standard software for producing visual evidence gap maps.

Tables and static maps will be produced using RStudio, which will also be used to clean and prepare the database for use in EPPI-Mapper by generating two Excel files: one containing the EGM structure (i.e. what will be the columns and rows of the visual table) and coding sets, and another containing the bibliographic references and the codes that reviewers have added. Finally, we will use a Python script to produce a file in JSON format, making it ready for importation into EPPI-Mapper.
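As a rough illustration of this conversion step (a minimal sketch only; the file names, column layouts and JSON structure below are hypothetical and do not reproduce the actual EPPI-Mapper import schema), such a script could look something like this:

```python
# Illustrative sketch: combining the two prepared Excel files into one JSON file.
# File names, columns and the output structure are hypothetical placeholders;
# the real EPPI-Mapper import format is defined by the tool's own documentation.
import json
import pandas as pd

structure = pd.read_excel("egm_structure.xlsx")       # map rows/columns and coding sets
references = pd.read_excel("coded_references.xlsx")   # bibliographic records plus reviewer codes

payload = {
    "structure": structure.to_dict(orient="records"),
    "references": references.to_dict(orient="records"),
}

with open("egm_input.json", "w", encoding="utf-8") as fh:
    json.dump(payload, fh, ensure_ascii=False, indent=2)
```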

The maps are matrices, with biomarker categories/subcategories defining the rows and diseases serving as the columns. Their intersections define cells containing small squares, each representing one included paper. A colour code will reflect the study design. Depending on the map, there will also be a second sublevel in the columns. Thus, for each group of diseases, we will produce three interactive EGMs: two for primary prevention and one for secondary prevention. For primary prevention, the first map will stratify the data to show whether, and which, lifestyle has been considered in each paper in combination with the studied biomarker. The second map for primary prevention and the map for secondary prevention will include, as a second sublevel, the disease-specific subpopulations in which the biomarker has been used or evaluated (e.g. cirrhosis for hepatic cancer). The maps will also include filters that allow users to select records based on additional features, such as the use of artificial intelligence in the content of the papers. Furthermore, the EGMs, which will be freely available online, will enable users to view and export selected bibliographic references and their abstracts. An example of these interactive maps with dummy data is provided in Additional file 5.

Finally, we will produce two scientific reports for PROPHET. The main report, which will follow the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) recommendations, will summarise the results of the three scoping reviews, provide a general and global interpretation of the results, comment on their implications for the SRIA, and discuss the limitations of the process. The second report will present the specific methodology for the dynamic maps.

This protocol summarises the procedure to carry out three parallel rapid scoping reviews to provide an overview of the available research and gaps in the literature on biomarkers for personalised primary and secondary prevention for the three most common chronic disease groups: cancer, CVD and neurodegenerative diseases. The result will be a common report for the three scoping reviews and the online publication of interactive evidence gap maps to facilitate data visualisation.

This work will be complemented, in a further step of the PROPHET project, by a subsequent mapping report on the scientific evidence for the clinical utility of biomarkers. Both reports are part of an overall mapping effort to characterise the current knowledge and environment around personalised preventive medicine. In this context, PROPHET will also map personalised prevention research programs, as well as bottlenecks and challenges in the adoption of personalised preventive approaches or in the involvement of citizens, patients, health professionals and policy-makers in personalised prevention. The overall results will contribute to the development of the SRIA concept paper, which will help define future priorities for personalised prevention research in the European Union.

In regard to this protocol, one of its strengths is that the same approach is applied in the three scoping reviews. This will improve the consistency and comparability of the results between them, allowing for better leveraging of efforts; it will also facilitate coordination among the staff conducting the different reviews and allow the reviews to be discussed together, providing the more global perspective needed for the SRIA. In addition, the collaboration of researchers with different backgrounds, the inclusion of librarians in the research team, and the specific software tools used have helped us to guarantee the quality of the work and have shortened the time invested in defining the final version of this protocol. Another strength is that we have conducted a pilot study to test and refine the search strategy, selection criteria and data extraction sheet. In addition, the platform used to access the bibliographic databases was selected after a prior evaluation process (Ovid/MEDLINE versus PubMed/MEDLINE, Ovid/Embase versus Elsevier/Embase, etc.).

Only 10% of the papers will undergo screening by two reviewers; if time permits, we will compute kappa statistics to assess reviewer agreement during the screening phases. Additionally, ongoing communication and the exchange and discussion of uncertainties will ensure a high level of consensus in the review process.
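If computed, agreement on the double-screened subset could be estimated with Cohen's kappa; a minimal sketch with invented screening decisions (1 = include, 0 = exclude), not data from this review:

```python
# Minimal sketch: Cohen's kappa for agreement between two reviewers on the
# double-screened subset. The decision vectors below are invented examples.
from sklearn.metrics import cohen_kappa_score

reviewer_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]  # reviewer A: include/exclude decisions
reviewer_b = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]  # reviewer B: decisions on the same records

kappa = cohen_kappa_score(reviewer_a, reviewer_b)
print(f"Cohen's kappa: {kappa:.2f}")  # values near 1 indicate strong agreement
```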

The main limitation of this work is the very broad field it covers: personalised prevention in all chronic diseases. To keep it manageable, we have limited it to the chronic diseases with the greatest impact on the population and to the last 3 years, conducting a rapid scoping review due to time constraints and following recommendations from the Cochrane Rapid Review Methods Group [ 24 ]. However, as our aim is to identify gaps in the literature in an area of growing interest (personalisation and prevention), we believe that the records retrieved will provide a solid foundation for evaluating the available literature. Additionally, systematic reviews, which may encompass studies predating 2020, have the potential to provide valuable insights beyond the temporal constraints of our search.

Thus, this protocol reflects the decisions set by PROPHET's timetable, without losing the quality and rigour of the work. In addition, the data extraction phase will be performed by two reviewers for 100% of the papers to ensure the consistency of the extracted data. Lastly, extending beyond these three scoping reviews, the primary challenge resides in amalgamating their findings with those from numerous other reviews within the project, ultimately producing a cohesive concept paper in the Strategic Research and Innovation Agenda (SRIA) for the European Union, firmly rooted in evidence-based conclusions.

Council of the European Union. Council conclusions on personalised medicine for patients (2015/C 421/03). Brussels: European Union; 2015 Dec. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52015XG1217(01)&from=FR .

Goetz LH, Schork NJ. Personalized medicine: motivation, challenges, and progress. Fertil Steril. 2018;109(6):952–63.

FDA-NIH Biomarker Working Group. BEST (Biomarkers, EndpointS, and other Tools) Resource. Silver Spring (MD): Food and Drug Administration (US); 2016 [accessed 3 February 2023]. Available at: http://www.ncbi.nlm.nih.gov/books/NBK326791/ .

Porta M, Greenland S, Hernán M, dos Silva IS, Last JM, International Epidemiological Association, editors. A dictionary of epidemiology. 6th ed. Oxford: Oxford Univ. Press; 2014. p. 343.

PROPHET. Project kick-off meeting. Rome. 2022.

Roth GA, Mensah GA, Johnson CO, Addolorato G, Ammirati E, Baddour LM, et al. Global burden of cardiovascular diseases and risk factors, 1990–2019. J Am College Cardiol. 2020;76(25):2982–3021.

GBD 2019 Cancer Collaboration, Kocarnik JM, Compton K, Dean FE, Fu W, Gaw BL, et al. Cancer incidence, mortality, years of life lost, years lived with disability, and disability-adjusted life years for 29 cancer groups from 2010 to 2019: a systematic analysis for the global burden of disease study 2019. JAMA Oncol. 2022;8(3):420.

Feigin VL, Vos T, Nichols E, Owolabi MO, Carroll WM, Dichgans M, et al. The global burden of neurological disorders: translating evidence into policy. The Lancet Neurology. 2020;19(3):255–65.

GBD 2019 Collaborators, Nichols E, Abd‐Allah F, Abdoli A, Abosetugn AE, Abrha WA, et al. Global mortality from dementia: application of a new method and results from the Global Burden of Disease Study 2019. A&D Transl Res & Clin Interv. 2021;7(1). Available at: https://onlinelibrary.wiley.com/doi/10.1002/trc2.12200 . [accessed 7 February 2023].

Eurostat. Self-perceived health statistics. European health interview survey (EHIS). 2022. Available at: https://ec.europa.eu/eurostat/statistics-explained/index.php?title=Self-perceived_health_statistics . [accessed 7 February 2023].

OECD/European Union. Health at a Glance: Europe 2022: State of Health in the EU Cycle. Paris: OECD Publishing; 2022. Available at: https://www.oecd-ilibrary.org/social-issues-migration-health/health-at-a-glance-europe-2022_507433b0-en .

Boccia S, Pastorino R, Ricciardi W, Ádány R, Barnhoorn F, Boffetta P, et al. How to integrate personalized medicine into prevention? Recommendations from the Personalized Prevention of Chronic Diseases (PRECeDI) Consortium. Public Health Genomics. 2019;22(5–6):208–14.

Visseren FLJ, Mach F, Smulders YM, Carballo D, Koskinas KC, Bäck M, et al. 2021 ESC Guidelines on cardiovascular disease prevention in clinical practice. Eur Heart J. 2021;42(34):3227–337.

World Health Organization. Global action plan on the public health response to dementia 2017–2025. Geneva: WHO Document Production Services; 2017. p. 27.

Norton S, Matthews FE, Barnes DE, Yaffe K, Brayne C. Potential for primary prevention of Alzheimer’s disease: an analysis of population-based data. Lancet Neurol. 2014;13(8):788–94.

Mentis AFA, Dardiotis E, Efthymiou V, Chrousos GP. Non-genetic risk and protective factors and biomarkers for neurological disorders: a meta-umbrella systematic review of umbrella reviews. BMC Med. 2021;19(1):6.

Schüz J, Espina C, Villain P, Herrero R, Leon ME, Minozzi S, et al. European Code against Cancer 4th Edition: 12 ways to reduce your cancer risk. Cancer Epidemiol. 2015;39:S1-10.

Tricco AC, Langlois EV, Straus SE, Alliance for Health Policy and Systems Research, World Health Organization. Rapid reviews to strengthen health policy and systems: a practical guide. Geneva: World Health Organization; 2017. Available at: https://apps.who.int/iris/handle/10665/258698 . [accessed 3 February 2023].

White H, Albers B, Gaarder M, Kornør H, Littell J, Marshall Z, et al. Guidance for producing a Campbell evidence and gap map. Campbell Systematic Reviews. 2020;16(4). Available at: https://onlinelibrary.wiley.com/doi/10.1002/cl2.1125 . [accessed 3 February 2023].

Aromataris E, Munn Z, editors. JBI Manual for Evidence Synthesis. JBI; 2020.

Peters MDJ, Marnie C, Tricco AC, Pollock D, Munn Z, Alexander L, et al. Updated methodological guidance for the conduct of scoping reviews. JBI Evid Synth. 2020;18(10):2119–26.

Tricco AC, Lillie E, Zarin W, O’Brien KK, Colquhoun H, Levac D, et al. PRISMA Extension for Scoping Reviews (PRISMA-ScR): Checklist and Explanation. Ann Intern Med. 2018;169(7):467–73.

OSF. Open Science Framework webpage. Available at: https://osf.io/ . [accessed 8 February 2023].

Garritty C, Gartlehner G, Nussbaumer-Streit B, King VJ, Hamel C, Kamel C, et al. Cochrane Rapid Reviews Methods Group offers evidence-informed guidance to conduct rapid reviews. Journal Clin Epidemiol. 2021;130:13–22.

Leon ME, Peruga A, McNeill A, Kralikova E, Guha N, Minozzi S, et al. European code against cancer, 4th edition: tobacco and cancer. Cancer Epidemiology. 2015;39:S20-33.

Anderson AS, Key TJ, Norat T, Scoccianti C, Cecchini M, Berrino F, et al. European code against cancer 4th edition: obesity, body fatness and cancer. Cancer Epidemiology. 2015;39:S34-45.

Barone BB, Yeh HC, Snyder CF, Peairs KS, Stein KB, Derr RL, et al. Long-term all-cause mortality in cancer patients with preexisting diabetes mellitus: a systematic review and meta-analysis. JAMA. 2008;300(23):2754–64.

Barone BB, Yeh HC, Snyder CF, Peairs KS, Stein KB, Derr RL, et al. Postoperative mortality in cancer patients with preexisting diabetes: systematic review and meta-analysis. Diabetes Care. 2010;33(4):931–9.

Noto H, Tsujimoto T, Sasazuki T, Noda M. Significantly increased risk of cancer in patients with diabetes mellitus: a systematic review and meta-analysis. Endocr Pract. 2011;17(4):616–28.

Villain P, Gonzalez P, Almonte M, Franceschi S, Dillner J, Anttila A, et al. European code against cancer 4th edition: infections and cancer. Cancer Epidemiology. 2015;39:S120-38.

Scoccianti C, Cecchini M, Anderson AS, Berrino F, Boutron-Ruault MC, Espina C, et al. European Code against Cancer 4th Edition: Alcohol drinking and cancer. Cancer Epidemiology. 2016;45:181–8.

El-Serag HB. Epidemiology of viral hepatitis and hepatocellular carcinoma. Gastroenterology. 2012;142(6):1264-1273.e1.

Li XY, Zhang M, Xu W, Li JQ, Cao XP, Yu JT, et al. Midlife modifiable risk factors for dementia: a systematic review and meta-analysis of 34 prospective cohort studies. CAR. 2020;16(14):1254–68.

Ford E, Greenslade N, Paudyal P, Bremner S, Smith HE, Banerjee S, et al. Predicting dementia from primary care records: a systematic review and meta-analysis Forloni G, editor. PLoS ONE. 2018;13(3):e0194735.

Xu W, Tan L, Wang HF, Jiang T, Tan MS, Tan L, et al. Meta-analysis of modifiable risk factors for Alzheimer’s disease. J Neurol Neurosurg Psychiatry. 2015;86(12):1299–306.

Guo Y, Xu W, Liu FT, Li JQ, Cao XP, Tan L, et al. Modifiable risk factors for cognitive impairment in Parkinson’s disease: A systematic review and meta-analysis of prospective cohort studies. Mov Disord. 2019;34(6):876–83.

Jiménez-Jiménez FJ, Alonso-Navarro H, García-Martín E, Agúndez JAG. Alcohol consumption and risk for Parkinson’s disease: a systematic review and meta-analysis. J Neurol. 2019;266(8):1821–34.

ECIS European Cancer Information System. Data explorer | ECIS. Estimates of cancer incidence and mortality in 2020 for all cancer sites. 2023. Available at: https://ecis.jrc.ec.europa.eu/explorer.php?$0-0$1-AE27$2-All$4-2$3-All$6-0,85$5-2020,2020$7-7,8$CEstByCancer$X0_8-3$CEstRelativeCanc$X1_8-3$X1_9-AE27$CEstBySexByCancer$X2_8-3$X2_-1-1 . [accessed 22 February 2023].

Bacigalupo I, Mayer F, Lacorte E, Di Pucchio A, Marzolini F, Canevelli M, et al. A systematic review and meta-analysis on the prevalence of dementia in Europe: estimates from the highest-quality studies adopting the DSM IV diagnostic criteria Bruni AC, editor. JAD. 2018;66(4):1471–81.

Barceló MA, Povedano M, Vázquez-Costa JF, Franquet Á, Solans M, Saez M. Estimation of the prevalence and incidence of motor neuron diseases in two Spanish regions: Catalonia and Valencia. Sci Rep. 2021;11(1):6207.

Ng L, Khan F, Young CA, Galea M. Symptomatic treatments for amyotrophic lateral sclerosis/motor neuron disease. Cochrane Neuromuscular Group, editor. Cochrane Database of Systematic Reviews. 2017;2017(1). Available at: http://doi.wiley.com/10.1002/14651858.CD011776.pub2 . [accessed 13 February 2023].

Covidence systematic review software. Melbourne, Australia: Veritas Health Innovation; 2023. Available at: https://www.covidence.org .

Centers for Disease Control and Prevention. Public Health Genomics and Precision Health Knowledge Base (v8.4). 2023. Available at: https://phgkb.cdc.gov/PHGKB/specificPHGKB.action?action=about .

Digital Solution Foundry and EPPI Centre. EPPI Centre. UCL Social Research Institute: University College London; 2022.

Acknowledgements

We are grateful for the library support received from Teresa Carretero (Instituto de Salud Carlos III, ISCIII) and from Concepción Campos-Asensio (Hospital Universitario de Getafe, Comité ejecutivo BiblioMadSalud) for the seminar on the scoping review methodology and for their continuous teaching through their social networks.

Also, we would like to thank Dr. Héctor Bueno (Centro Nacional de Investigaciones Cardiovasculares (CNIC), Hospital Universitario 12 de Octubre) and Dr. Pascual Sánchez (Fundación Centro de Investigación de Enfermedades Neurológicas (CIEN)) for their advice in their fields of expertise.

The PROPHET project has received funding from the European Union’s Horizon Europe research and innovation program under grant agreement no. 101057721. UK participation in Horizon Europe Project PROPHET is supported by UKRI grant number 10040946 (Foundation for Genomics & Population Health).

Author information

Plans-Beriso E and Babb-de-Villiers C contributed equally to this work.

Kroese M and Pérez-Gómez B contributed equally to this work.

Authors and Affiliations

Department of Epidemiology of Chronic Diseases, National Centre for Epidemiology, Instituto de Salud Carlos III, Madrid, Spain

E Plans-Beriso, C Barahona-López, P Diez-Echave, O R Hernández, E García-Ovejero, O Craciun, P Fernández-Navarro, N Fernández-Larrea, E García-Esquinas, M Pollan-Santamaria & B Pérez-Gómez

CIBER of Epidemiology and Public Health (CIBERESP), Madrid, Spain

E Plans-Beriso, D Petrova, C Barahona-López, P Diez-Echave, O R Hernández, N F Fernández-Martínez, P Fernández-Navarro, N Fernández-Larrea, E García-Esquinas, V Moreno, F Rodríguez-Artalejo, M J Sánchez, M Pollan-Santamaria & B Pérez-Gómez

PHG Foundation, University of Cambridge, Cambridge, UK

C Babb-de-Villiers, H Turner, L Blackburn & M Kroese

Instituto de Investigación Biosanitaria Ibs. GRANADA, Granada, Spain

D Petrova, N F Fernández-Martínez & M J Sánchez

Escuela Andaluza de Salud Pública (EASP), Granada, Spain

Cambridge University Medical Library, Cambridge, UK

National Library of Health Sciences, Instituto de Salud Carlos III, Madrid, Spain

V Jiménez-Planet

Oncology Data Analytics Program, Catalan Institute of Oncology (ICO), L’Hospitalet de Llobregat, Barcelona, 08908, Spain

Colorectal Cancer Group, ONCOBELL Program, Institut de Recerca Biomedica de Bellvitge (IDIBELL), L’Hospitalet de Llobregat, Barcelona, 08908, Spain

Department of Preventive Medicine and Public Health, Universidad Autónoma de Madrid, Madrid, Spain

F Rodríguez-Artalejo

IMDEA-Food Institute, CEI UAM+CSIC, Madrid, Spain

Contributions

BPG and MK supervised and directed the project. EPB and CBV coordinated and managed the development of the project. CBL, PDE, ORH, CBV and EPB developed the search strategy. All authors reviewed the content, commented on the methods, provided feedback, contributed to drafts and approved the final manuscript.

Corresponding author

Correspondence to E Plans-Beriso .

Ethics declarations

Competing interests.

There are no conflicts of interest in this project.


Supplementary Information

Additional file 1: Glossary.

Additional file 2: Glossary of biomarkers that may define high risk groups.

Additional file 3: Search strategy.

Additional file 4: Data extraction sheet.

Additional file 5: Example of interactive maps in cancer and primary prevention.


About this article

Cite this article.

Plans-Beriso, E., Babb-de-Villiers, C., Petrova, D. et al. Biomarkers for personalised prevention of chronic diseases: a common protocol for three rapid scoping reviews. Syst Rev 13 , 147 (2024). https://doi.org/10.1186/s13643-024-02554-9

Received : 19 October 2023

Accepted : 03 May 2024

Published : 01 June 2024

DOI : https://doi.org/10.1186/s13643-024-02554-9


  • Personalised prevention
  • Precision Medicine
  • Precision prevention
  • Cardiovascular diseases
  • Chronic diseases


  • Systematic Review
  • Open access
  • Published: 24 May 2024

Turnover intention and its associated factors among nurses in Ethiopia: a systematic review and meta-analysis

  • Eshetu Elfios 1 ,
  • Israel Asale 1 ,
  • Merid Merkine 1 ,
  • Temesgen Geta 1 ,
  • Kidist Ashager 1 ,
  • Getachew Nigussie 1 ,
  • Ayele Agena 1 ,
  • Bizuayehu Atinafu 1 ,
  • Eskindir Israel 2 &
  • Teketel Tesfaye 3  

BMC Health Services Research volume 24, Article number: 662 (2024)


Nurses' turnover intention, representing the extent to which nurses express a desire to leave their current positions, is a critical global public health challenge. This issue significantly affects the healthcare workforce, contributing to disruptions in healthcare delivery and organizational stability. In Ethiopia, a country facing its own unique set of healthcare challenges, understanding and mitigating nursing turnover are of paramount importance. Hence, the objectives of this systematic review and meta-analysis were to determine the pooled proportion of turnover intention among nurses in Ethiopia and to identify the factors associated with it.

A comprehensive search was carried out for full-text studies written in English through an electronic web-based search strategy in databases including PubMed, CINAHL, Cochrane Library, Embase, Google Scholar and the Ethiopian University Repository online. The Joanna Briggs Institute (JBI) checklist was used to assess the quality of the studies. STATA version 17 software was used for statistical analyses. Meta-analysis was done using a random-effects method. Heterogeneity between the primary studies was assessed by Cochran's Q and I² tests. Subgroup and sensitivity analyses were carried out to clarify the source of heterogeneity.

This systematic review and meta-analysis incorporated 8 articles, involving 3033 nurses, in the analysis. The pooled proportion of turnover intention among nurses in Ethiopia was 53.35% (95% CI: 41.64, 65.05%), with significant heterogeneity between studies (I² = 97.9%, P = 0.001). Turnover intention among nurses was significantly associated with autonomous decision-making (OR: 0.28, 95% CI: 0.14, 0.70) and promotion/development (OR: 0.67, 95% CI: 0.46, 0.89).

Conclusion and recommendation

Our meta-analysis on turnover intention among Ethiopian nurses highlights a significant challenge, with a pooled proportion of 53.35%. Regional variations, such as the highest turnover in Addis Ababa and the lowest in Sidama, underscore the need for tailored interventions. The findings reveal a strong link between turnover intention and factors like autonomous decision-making and promotion/development. Recommendations for stakeholders and concerned bodies involve formulating targeted retention strategies, addressing regional variations, collaborating to advocate for nurse welfare, prioritizing career advancement, and reviewing policies to improve nurse retention.

Turnover intention pertaining to employment, often referred to as the intention to leave, is characterized by an employee’s contemplation of voluntarily transitioning to a different job or company [ 1 ]. Nurse turnover intention, representing the extent to which nurses express a desire to leave their current positions, is a critical global public health challenge. This issue significantly affects the healthcare workforce, contributing to disruptions in healthcare delivery and organizational stability [ 2 ].

The global shortage of healthcare professionals, including nurses, is an ongoing challenge that significantly impacts the capacity of healthcare systems to provide quality services [ 3 ]. Nurses, as frontline healthcare providers, play a central role in patient care, making their retention crucial for maintaining the functionality and effectiveness of healthcare delivery. However, the phenomenon of turnover intention, reflecting a nurse’s contemplation of leaving their profession, poses a serious threat to workforce stability [ 4 ].

Studies conducted globally show high turnover rates among nurses in several regions, with notable figures reported in Alexandria (68%), China (63.88%), and Jordan (60.9%) [ 5 , 6 , 7 ]. In contrast, Israel has a remarkably low turnover rate of 9% [ 8 ], while Brazil reports 21.1% [ 9 ], and Saudi hospitals 26% [ 10 ]. These diverse turnover rates highlight the global nature of the nurse turnover phenomenon, indicating varying degrees of workforce mobility in different regions.

The magnitude and severity of turnover intention among nurses worldwide underscore the urgency of addressing this issue. High turnover rates not only disrupt healthcare services but also result in a loss of valuable skills and expertise within the nursing workforce. This, in turn, compromises the continuity and quality of patient care, with potential implications for patient outcomes and overall health service delivery [ 11 ]. Extensive research conducted worldwide has identified a range of factors contributing to turnover intention among nurses [ 11 , 12 , 13 , 14 , 15 , 16 , 17 ]. These factors encompass both individual and organizational aspects, such as high workload, inadequate support, limited career advancement opportunities, job satisfaction, conflict, payment or reward, burnout, and sense of belongingness to the work environment. The complex interplay of these factors makes addressing turnover intention a multifaceted challenge that requires targeted interventions.

In Ethiopia, a country facing its own unique set of healthcare challenges, understanding and mitigating nursing turnover are of paramount importance. The healthcare system in Ethiopia grapples with issues like resource constraints, infrastructural limitations, and disparities in healthcare access [ 18 ]. Consequently, the factors influencing nursing turnover in Ethiopia may differ from those in other regions. Previous studies conducted in the Ethiopian context have started to unravel some of these factors, emphasizing the need for a more comprehensive examination [ 18 , 19 ].

Although many cross-sectional studies have been conducted on turnover intention among nurses in Ethiopia, the results exhibit variations. The reported turnover intention rates range from a minimum of 30.6% to a maximum of 80.6%. In light of these disparities, this systematic review and meta-analysis was undertaken to ascertain the aggregated prevalence of turnover intention among nurses in Ethiopia. By systematically analyzing findings from various studies, we aimed to provide a nuanced understanding of the factors influencing turnover intention specific to the Ethiopian healthcare context. Therefore, this systematic review and meta-analysis aimed to answer the following research questions.

What is the pooled prevalence of turnover intention among nurses in Ethiopia?

What are the factors associated with turnover intention among nurses in Ethiopia?

The primary objective of this review was to assess the pooled proportion of turnover intention among nurses in Ethiopia. The secondary objective was to identify the factors associated with turnover intention among nurses in Ethiopia.

Study design and search strategy

A comprehensive systematic review and meta-analysis was conducted, examining observational studies on turnover intention among nurses in Ethiopia. The procedure for this systematic review and meta-analysis was developed in accordance with the Preferred Reporting Items for Systematic review and Meta-analysis Protocols (PRISMA-P) statement [ 20 ]. PRISMA-2015 statement was used to report the findings [ 21 , 22 ]. This systematic review and meta-analysis were registered on PROSPERO with the registration number of CRD42024499119.

We conducted a systematic and extensive search across multiple databases, including PubMed, CINAHL, Cochrane Library, Embase, Google Scholar and the Ethiopian University Repository online, to identify studies reporting turnover intention among nurses in Ethiopia. We reviewed the database available at http://www.library.ucsf.edu and the Cochrane Library to ensure that the intended task had not been previously undertaken, preventing any duplication. Furthermore, we screened the reference lists to retrieve relevant articles. The process involved utilizing EndNote (version X8) software for downloading, organizing, reviewing, and citing articles. Additionally, a manual search for cross-references was performed to discover any relevant studies not captured through the initial database search. The search employed a comprehensive set of the following search terms: “prevalence”, “turnover intention”, “intention to leave”, “attrition”, “employee attrition”, “nursing staff turnover”, “Ethiopian nurses”, “nurses”, and “Ethiopia”. These terms were combined using Boolean operators (AND, OR) to conduct a thorough and systematic search across the specified databases.

Eligibility criteria

Inclusion criteria.

The following inclusion criteria were established to guide the selection of articles for this systematic review and meta-analysis.

Population: Nurses working in Ethiopia.

Study period: studies conducted or published until 23 November 2023.

Study design: All observational study designs, such as cross-sectional, longitudinal, and cohort studies, were considered.

Setting: Only studies conducted in Ethiopia were included.

Outcome: turnover intention.

Study: All studies, whether published or unpublished, in the form of journal articles, master’s theses, and dissertations, were included up to the final date of data analysis.

Language: This study exclusively considered studies in the English language.

Exclusion criteria

Studies lacking full text or with a Newcastle–Ottawa Quality Assessment Scale (NOS) score of 6 or less were excluded. Studies failing to provide information on turnover intention among nurses, or studies for which the necessary details could not be obtained, were also excluded. Three authors (E.E., T.G., K.A.) independently assessed the eligibility of retrieved studies, and two other authors (E.I. and M.M.) provided input to reach consensus on potential inclusion or exclusion.

Quality assessment and data extraction

Three authors (E.E., A.A., G.N.) independently conducted a critical appraisal of the included studies. The Joanna Briggs Institute (JBI) checklist for prevalence studies was used to assess the quality of the studies. Studies with a Newcastle–Ottawa Quality Assessment Scale (NOS) score of seven or more were considered acceptable [ 23 ]. The tool has nine parameters, which have yes, no, unclear, and not applicable options [ 24 ]. Two reviewers (I.A., B.A.) were involved when necessary during the critical appraisal process. Accordingly, all studies were included in our review (Table 1). The questions used to evaluate the methodological quality of studies on turnover intention among nurses and its associated factors in Ethiopia are the following:

Q1. Was the sample frame appropriate to address the target population?

Q2. Were study participants sampled appropriately?

Q3. Was the sample size adequate?

Q4. Were the study subjects and the setting described in detail?

Q5. Was the data analysis conducted with sufficient coverage of the identified sample?

Q6. Were valid methods used for the identification of the condition?

Q7. Was the condition measured in a standard, reliable way for all participants?

Q8. Was there appropriate statistical analysis?

Q9. Was the response rate adequate, and if not, was the low response rate managed appropriately?

Data were extracted and recorded in a Microsoft Excel sheet, guided by the Joanna Briggs Institute (JBI) data extraction form for observational studies. Three authors (E.E., M.G., T.T.) independently conducted data extraction. Recorded data included the first author's last name, publication year, study setting or country, region, study design, study period, sample size, response rate, population, type of management, proportion of turnover intention, and associated factors. Discrepancies in data extraction were resolved through discussion between extractors.

Data processing and analysis

Data analysis procedures involved importing the extracted data into STATA 14 statistical software for computing the pooled proportion of turnover intention among nurses. To evaluate potential publication bias and small study effects, both funnel plots and Egger's test were employed [ 25 , 26 ]. We used the I² statistic to quantify heterogeneity and explore potential sources of variability. Additionally, subgroup analyses were conducted to investigate the impact of specific study characteristics on the overall results. I² values of 0%, 25%, 50%, and 75% were interpreted as indicating no, low, medium, and high heterogeneity, respectively [ 27 ].

To assess publication bias, we employed several methods, including funnel plots and Egger’s test. These techniques allowed us to visually inspect asymmetry in the distribution of study results and statistically evaluate the presence of publication bias. Furthermore, we conducted sensitivity analyses to assess the robustness of our findings to potential publication bias and other sources of bias.
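As an illustration of the regression underlying Egger's test (a minimal sketch with invented effect sizes and standard errors, not the data of this review), the intercept of a regression of the standardized effect on precision is tested for deviation from zero:

```python
# Minimal sketch of Egger's regression test for funnel-plot asymmetry.
# Effect sizes (study proportions) and standard errors below are invented.
import numpy as np
import statsmodels.api as sm

effects = np.array([0.41, 0.66, 0.31, 0.45, 0.37, 0.52, 0.33, 0.78])      # hypothetical proportions
se = np.array([0.028, 0.026, 0.026, 0.024, 0.025, 0.027, 0.029, 0.023])   # hypothetical standard errors

y = effects / se                 # standardized effect sizes
X = sm.add_constant(1.0 / se)    # precision, plus an intercept term
fit = sm.OLS(y, X).fit()

print(f"Egger's intercept p-value: {fit.pvalues[0]:.3f}")  # p > 0.05 suggests no detected asymmetry
```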

Utilizing a random-effects method, a meta-analysis was performed to assess turnover intention among nurses, employing this method to account for observed variability [ 28 ]. Subgroup analyses were conducted to compare the pooled magnitude of turnover intention among nurses and associated factors across different regions. The results of the pooled prevalence were visually presented in a forest plot format with a 95% confidence interval.
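To make the pooling step concrete, the sketch below shows a DerSimonian-Laird random-effects pooled proportion with Cochran's Q and I²; the event counts and sample sizes are invented for illustration and are not the eight included studies:

```python
# Minimal sketch: DerSimonian-Laird random-effects pooling of proportions,
# with Cochran's Q and I². All counts below are invented illustration data.
import math

events  = [120, 210, 95, 160, 140, 180, 75, 130]    # nurses intending to leave (hypothetical)
samples = [300, 320, 310, 400, 380, 350, 245, 340]  # study sample sizes (hypothetical)

p = [e / n for e, n in zip(events, samples)]
v = [pi * (1 - pi) / n for pi, n in zip(p, samples)]       # within-study variances
w = [1 / vi for vi in v]                                   # fixed-effect weights

p_fixed = sum(wi * pi for wi, pi in zip(w, p)) / sum(w)
q = sum(wi * (pi - p_fixed) ** 2 for wi, pi in zip(w, p))  # Cochran's Q
df = len(p) - 1
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)                              # between-study variance
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0        # I² heterogeneity

w_star = [1 / (vi + tau2) for vi in v]                     # random-effects weights
pooled = sum(wi * pi for wi, pi in zip(w_star, p)) / sum(w_star)
se = math.sqrt(1 / sum(w_star))
print(f"pooled proportion {pooled:.3f} "
      f"(95% CI {pooled - 1.96 * se:.3f} to {pooled + 1.96 * se:.3f}), I² = {i2:.1f}%")
```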

Study selection

After conducting the initial comprehensive search concerning turnover intention among nurses through MEDLINE, Cochrane Library, Web of Science, Embase, AJOL, Google Scholar, and other sources, a total of 1343 articles were retrieved, of which 575 were removed due to duplication. Five hundred ninety-three articles were removed from the remaining 768 articles by title and abstract screening. Following this, 44 articles that could not be retrieved were removed. Finally, from the remaining 131 articles, 8 articles with a total of 3033 nurses were included in the systematic review and meta-analysis (Fig. 1).

Fig. 1 PRISMA flow diagram of the selection process of studies on turnover intention among nurses in Ethiopia, 2024

Study characteristics

All 8 included studies had a cross-sectional design; of these, 2 were from the Tigray region, 2 were from Addis Ababa (the capital), 1 from the South region, 1 from the Amhara region, 1 from the Sidama region, and 1 was multiregional and nationwide. The prevalence of turnover intention among nurses ranged from 30.6 to 80.6% (Table 2).

Pooled prevalence of turnover intention among nurses in Ethiopia

Our comprehensive meta-analysis revealed a notable turnover intention rate of 53.35% (95% CI: 41.64, 65.05%) among Ethiopian nurses, accompanied by substantial heterogeneity between studies (I² = 97.9%, P < 0.001), as depicted in Fig. 2. Given the observed variability, we employed a random-effects model to analyze the data, ensuring a robust adjustment for the significant heterogeneity across the included studies.

Fig. 2 Forest plot showing the pooled proportion of turnover intention among nurses in Ethiopia, 2024

Subgroup analysis of turnover intention among nurses in Ethiopia

To address the observed heterogeneity, we conducted a subgroup analysis by region. The results highlighted considerable variation, with the highest level of turnover intention identified in Addis Ababa at 69.10% (95% CI: 46.47, 91.74%), with substantial heterogeneity (I² = 98.1%). Conversely, the Sidama region exhibited the lowest level of turnover intention among nurses at 30.6% (95% CI: 25.18, 36.02%), accompanied by considerable heterogeneity (I² = 100.0%) (Fig. 3).

Fig. 3 Subgroup analysis of the systematic review and meta-analysis by region of turnover intention among nurses in Ethiopia, 2024

Publication bias of turnover intention among nurses in Ethiopia

Egger's test result (p = 0.64) was not statistically significant, indicating no evidence of publication bias in the meta-analysis (Table 3). Additionally, the symmetrical distribution of the included studies in the funnel plot (Fig. 4) supports the absence of publication bias across studies.

Fig. 4 Funnel plot of the systematic review and meta-analysis on turnover intention among nurses in Ethiopia, 2024

Sensitivity analysis

The leave-one-out sensitivity analysis evaluated the influence of individual studies on the pooled prevalence of turnover intention among Ethiopian nurses. Each study was excluded from the analysis one at a time. The exclusion of any particular study did not lead to a statistically significant change in the overall pooled estimate. The findings are presented in Fig. 5, illustrating the stability and robustness of the overall pooled estimate even with specific studies removed from the analysis.
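A leave-one-out loop of this kind can be sketched as follows; for brevity the helper below uses simple inverse-variance pooling rather than the full random-effects model, and all counts are invented rather than taken from the included studies:

```python
# Minimal sketch: leave-one-out sensitivity analysis of a pooled proportion.
# Counts are invented; in practice the random-effects routine would be reused.
def pooled_proportion(events, samples):
    p = [e / n for e, n in zip(events, samples)]
    w = [n / (pi * (1 - pi)) for pi, n in zip(p, samples)]  # inverse-variance weights
    return sum(wi * pi for wi, pi in zip(w, p)) / sum(w)

events = [120, 210, 95, 160, 140, 180, 75, 130]
samples = [300, 320, 310, 400, 380, 350, 245, 340]

print(f"all studies: {pooled_proportion(events, samples):.3f}")
for i in range(len(events)):
    est = pooled_proportion(events[:i] + events[i + 1:], samples[:i] + samples[i + 1:])
    print(f"omitting study {i + 1}: pooled proportion = {est:.3f}")
```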

Fig. 5 Sensitivity analysis of the pooled prevalence with each study removed one at a time, for the systematic review and meta-analysis of turnover intention among nurses in Ethiopia

Factors associated with turnover intention among nurses in Ethiopia

In our meta-analysis, we comprehensively reviewed and meta-analysed the determinants of turnover intention among nurses in Ethiopia by examining eight relevant studies [ 6 , 29 , 30 , 31 , 32 , 33 , 34 , 35 ]. We identified a significant association between turnover intention and autonomous decision-making (OR: 0.28, 95% CI: 0.14, 0.70) (Fig. 6) and promotion/development (OR: 0.67, 95% CI: 0.46, 0.89) (Fig. 7). In both instances, the odds ratios indicate a negative association, signifying that higher levels of autonomous decision-making and promotion/development were linked to reduced odds of turnover intention.
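For such factor-level analyses, study odds ratios are typically pooled on the log scale and then back-transformed; a minimal sketch with invented study ORs and 95% confidence intervals (not the values reported by the included studies):

```python
# Minimal sketch: inverse-variance pooling of odds ratios on the log scale.
# The study-level ORs and 95% CIs below are invented illustration values.
import math

studies = [(0.25, 0.10, 0.62), (0.35, 0.15, 0.80), (0.22, 0.08, 0.60)]  # (OR, CI low, CI high)

log_or = [math.log(o) for o, _, _ in studies]
se = [(math.log(u) - math.log(l)) / (2 * 1.96) for _, l, u in studies]  # SE recovered from CI width
w = [1 / s ** 2 for s in se]

pooled = sum(wi * lo for wi, lo in zip(w, log_or)) / sum(w)
pooled_se = math.sqrt(1 / sum(w))
low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled OR {math.exp(pooled):.2f} (95% CI {math.exp(low):.2f}-{math.exp(high):.2f})")
```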

Fig. 6 Forest plot of the association between autonomous decision-making and turnover intention among nurses in Ethiopia, 2024

Fig. 7 Forest plot of the association between promotion/development and turnover intention among nurses in Ethiopia, 2024

In our comprehensive meta-analysis exploring turnover intention among nurses in Ethiopia, our findings revealed a pooled proportion of turnover intention of 53.35%. This significant proportion warrants a comparative analysis with turnover rates reported in other global regions. Distinct variations emerge when compared with turnover rates in Alexandria (68%), China (63.88%), and Jordan (60.9%) [ 5 , 6 , 7 ]. This comparison highlights the multifaceted nature of turnover intention, influenced by diverse contextual, cultural, and organizational factors. Conversely, Ethiopia's turnover rate among nurses contrasts with the substantially lower figures reported in Israel (9%) [ 8 ], Brazil (21.1%) [ 9 ], and Saudi hospitals (26%) [ 10 ]. Challenges such as work overload, economic constraints, limited promotional opportunities, lack of recognition, and low job rewards are more prevalent among nurses in Ethiopia, contributing to higher turnover intention compared to their counterparts [ 7 , 29 , 36 ].

The highest turnover intention was observed in Addis Ababa, while the Sidama region displayed the lowest turnover intention among nurses. These differences highlight the complexity of turnover intention among Ethiopian nurses and the importance of region-specific interventions to address unique factors and improve nurse retention.

Our systematic review and meta-analysis in the Ethiopian nursing context revealed a significant inverse association between turnover intention and autonomous decision-making. The odds of turnover intention are reduced by approximately 72% among nurses with autonomous decision-making compared to those without it. This finding is supported by similar studies conducted in South Africa, Tanzania, Kenya, and Turkey [ 37 , 38 , 39 , 40 ].
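The 72% figure follows directly from the pooled odds ratio; note that it describes a reduction in odds, not in absolute risk:

\[ \text{reduction in odds} = (1 - \mathrm{OR}) \times 100\% = (1 - 0.28) \times 100\% = 72\% \]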

The significant association of turnover intention with promotion/development in our study underscores the crucial role of career advancement opportunities in alleviating turnover intention among nurses. Specifically, our analysis revealed that individuals with promotion/development opportunities had approximately 33% lower odds of turnover intention compared to those without such opportunities. These results emphasize the pivotal influence of organizational support in shaping the professional environment for nurses, providing substantive insights for the formulation of evidence-based strategies aimed at enhancing workforce retention. This finding is in line with previous research conducted in Taiwan, the Philippines and Italy [ 41 , 42 , 43 ].

Our meta-analysis on turnover intention among Ethiopian nurses reveals a considerable challenge, with a pooled proportion of 53.35%. Regional variations highlight the necessity for region-specific strategies, with Addis Ababa displaying the highest turnover intention and the Sidama region the lowest. A significant inverse association was found between turnover intention and both autonomous decision-making and promotion/development. These insights support the formulation of evidence-based strategies and policies to enhance nurse retention, contributing to the overall stability of the Ethiopian healthcare system.

Recommendations

Federal Ministry of Health (FMoH)

The FMoH should consider the regional variations in turnover intention and formulate targeted retention strategies. Investment in professional development opportunities and initiatives to enhance autonomy can be integral components of these strategies.

Ethiopian Nurses Association (ENA)

ENA plays a pivotal role in advocating for the welfare of nurses. The association is encouraged to collaborate with healthcare institutions to promote autonomy, create mentorship programs, and advocate for improved working conditions to mitigate turnover intention.

Healthcare institutions

Hospitals and healthcare facilities should prioritize the provision of career advancement opportunities and recognize the value of professional autonomy in retaining nursing staff. Tailored interventions based on regional variations should be considered.

Policy makers

Policymakers should review existing healthcare policies to identify areas for improvement in nurse retention. Policy changes that address challenges such as work overload, limited promotional opportunities, and economic constraints can positively impact turnover rates.

Future research initiatives

Further research exploring the specific factors contributing to turnover intention in different regions of Ethiopia is recommended. Understanding the nuanced challenges faced by nurses in various settings will inform the development of more targeted interventions.

Strength and limitations

Our systematic review and meta-analysis on nurse turnover intention in Ethiopia present several strengths. The comprehensive inclusion of diverse studies provides a holistic view of the issue, enhancing the generalizability of our findings. The use of a random-effects model accounts for potential heterogeneity, ensuring a more robust and reliable synthesis of data.

However, limitations should be acknowledged. The heterogeneity observed across studies, despite the use of a random-effects model, may impact the precision of the pooled estimate. These considerations should be taken into account when interpreting and applying the results of our analysis.

Data availability

The dataset used in this analysis is available from the corresponding author upon reasonable request.

Abbreviations

ENA: Ethiopian Nurses Association

FMoH: Federal Ministry of Health

JBI: Joanna Briggs Institute

PRISMA-P: Preferred Reporting Items for Systematic review and Meta-analysis Protocols

Kanchana L, Jayathilaka R. Factors impacting employee turnover intentions among professionals in Sri Lankan startups. PLoS ONE. 2023;18(2):e0281729.

Boateng AB, et al. Factors influencing turnover intention among nurses and midwives in Ghana. Nurs Res Pract. 2022;2022:4299702.

World Health Organization. WHO guideline on health workforce development, attraction, recruitment and retention in rural and remote areas. 2021. p. 1–104.

Hayes LJ, et al. Nurse turnover: a literature review. Int J Nurs Stud. 2006;43(2):237–63.

Yang H, et al. Validation of work pressure and associated factors influencing hospital nurse turnover: a cross-sectional investigation in Shaanxi Province, China. BMC Health Serv Res. 2017;17:1–11.

Ayalew E, et al. Nurses’ intention to leave their job in sub-Saharan Africa: a systematic review and meta-analysis. Heliyon. 2021;7(6).

Al Momani M. Factors influencing public hospital nurses’ intentions to leave their current employment in Jordan. Int J Community Med Public Health. 2017;4(6):1847–53.

DeKeyser Ganz F, Toren O. Israeli nurse practice environment characteristics, retention, and job satisfaction. Isr J Health Policy Res. 2014;3(1):1–8.

de Oliveira DR, et al. Intention to leave profession, psychosocial environment and self-rated health among registered nurses from large hospitals in Brazil: a cross-sectional study. BMC Health Serv Res. 2017;17(1):21.

Dall’Ora C, et al. Association of 12 h shifts and nurses’ job satisfaction, burnout and intention to leave: findings from a cross-sectional study of 12 European countries. BMJ Open. 2015;5(9):e008331.

Lu H, Zhao Y, While A. Job satisfaction among hospital nurses: a literature review. Int J Nurs Stud. 2019;94:21–31.

Ramoo V, Abdullah KL, Piaw CY. The relationship between job satisfaction and intention to leave current employment among registered nurses in a teaching hospital. J Clin Nurs. 2013;22(21–22):3141–52.

Al Sabei SD, et al. Nursing work environment, turnover intention, Job Burnout, and Quality of Care: the moderating role of job satisfaction. J Nurs Scholarsh. 2020;52(1):95–104.

Wang H, Chen H, Chen J. Correlation study on payment satisfaction, psychological reward satisfaction and turnover intention of nurses. Chin Hosp Manag. 2018;38(03):64–6.

Loes CN, Tobin MB. Interpersonal conflict and organizational commitment among licensed practical nurses. Health Care Manag (Frederick). 2018;37(2):175–82.

Wei H, et al. The state of the science of nurse work environments in the United States: a systematic review. Int J Nurs Sci. 2018;5(3):287–300.

Nantsupawat A, et al. Effects of nurse work environment on job dissatisfaction, burnout, intention to leave. Int Nurs Rev. 2017;64(1):91–8.

Ayalew F, et al. Factors affecting turnover intention among nurses in Ethiopia. World Health Popul. 2015;16(2):62–74.

Debie A, Khatri RB, Assefa Y. Contributions and challenges of healthcare financing towards universal health coverage in Ethiopia: a narrative evidence synthesis. BMC Health Serv Res. 2022;22(1):866.

Moher D, et al. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Syst Reviews. 2015;4(1):1–9.

Moher D, et al. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Ann Intern Med. 2009;151(4):264–9.

Moher D, et al.; PRISMA-P Group. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. 2015.

Joanna Briggs Institute. Checklist for prevalence studies [Internet]. 2016;7.

Sakonidou S, et al. Interventions to improve quantitative measures of parent satisfaction in neonatal care: a systematic review. BMJ Paediatr Open. 2020;4(1):e000613.

Egger M, Smith GD. Meta-analysis: potentials and promise. BMJ. 1997;315(7119):1371.

Tura G, Fantahun M, Worku A. The effect of health facility delivery on neonatal mortality: systematic review and meta-analysis. BMC Pregnancy Childbirth. 2013;13:18.

Lin L. Comparison of four heterogeneity measures for meta-analysis. J Eval Clin Pract. 2020;26(1):376–84.

McFarland LV. Meta-analysis of probiotics for the prevention of antibiotic associated diarrhea and the treatment of Clostridium difficile disease. Am J Gastroenterol. 2006;101(4):812–22.

Asegid A, Belachew T, Yimam E. Factors influencing job satisfaction and anticipated turnover among nurses in Sidama zone public health facilities, South Ethiopia. Nurs Res Pract. 2014;2014.

Wubetie A, Taye B, Girma B. Magnitude of turnover intention and associated factors among nurses working in emergency departments of governmental hospitals in Addis Ababa, Ethiopia: a cross-sectional institutional based study. BMC Nurs. 2020;19:97.

Getie GA, Betre ET, Hareri HA. Assessment of factors affecting turnover intention among nurses working at governmental health care institutions in east Gojjam, Amhara region, Ethiopia, 2013. Am J Nurs Sci. 2015;4(3):107–12.

Gebregziabher D, et al. The relationship between job satisfaction and turnover intention among nurses in Axum comprehensive and specialized hospital Tigray, Ethiopia. BMC Nurs. 2020;19(1):79.

Negarandeh R, et al. Magnitude of nurses’ intention to leave their jobs and its associated factors among nurses working in Tigray regional state, north Ethiopia: a cross-sectional study. 2020.

Nigussie Bolado G, et al. The magnitude of turnover intention and associated factors among nurses working at governmental hospitals in Southern Ethiopia: a mixed-method study. Nursing: Research and Reviews; 2023. p. 13–29.

Woldekiros AN, Getye E, Abdo ZA. Magnitude of job satisfaction and intention to leave their present job among nurses in selected federal hospitals in Addis Ababa, Ethiopia. PLoS ONE. 2022;17(6):e0269540.

Rhoades L, Eisenberger R. Perceived organizational support: a review of the literature. J Appl Psychol. 2002;87(4):698.

Lewis M. Causal factors that influence turnover intent in a manufacturing organisation. University of Pretoria (South Africa); 2008.

Kuria S, Alice O, Wanderi PM. Assessment of causes of labour turnover in three and five star-rated hotels in Kenya International journal of business and social science, 2012. 3(15).

Blaauw D, et al. Comparing the job satisfaction and intention to leave of different categories of health workers in Tanzania, Malawi, and South Africa. Global Health Action. 2013;6(1):19287.

Masum AKM, et al. Job satisfaction and intention to quit: an empirical analysis of nurses in Turkey. PeerJ. 2016;4:e1896.

Song L. A study of factors influencing turnover intention of King Power Group at Downtown Area in Bangkok, Thailand. Volume 2. International Review of Research in Emerging Markets & the Global Economy; 2016. 3.

Karanikola MN, et al. Moral distress, autonomy and nurse-physician collaboration among intensive care unit nurses in Italy. J Nurs Manag. 2014;22(4):472–84.

Labrague LJ, McEnroe-Petitte DM, Tsaras K. Predictors and outcomes of nurse professional autonomy: a cross-sectional study. Int J Nurs Pract. 2019;25(1):e12711.

Download references

Funding

No funding was received.

Author information

Authors and Affiliations

School of Nursing, College of Health Science and Medicine, Wolaita Sodo University, Wolaita Sodo, Ethiopia

Eshetu Elfios, Israel Asale, Merid Merkine, Temesgen Geta, Kidist Ashager, Getachew Nigussie, Ayele Agena & Bizuayehu Atinafu

Department of Midwifery, College of Health Science and Medicine, Wolaita Sodo University, Wolaita Sodo, Ethiopia

Eskindir Israel

Department of Midwifery, College of Health Science and Medicine, Wachamo University, Hossana, Ethiopia

Teketel Tesfaye


Contributions

E.E. conceptualized the study, designed the research, performed the statistical analysis, and led the manuscript writing. I.A., T.G., and M.M. contributed to the study design and provided critical revisions. K.A., G.N., B.A., E.I., and T.T. participated in data extraction and quality assessment. M.M., T.G., K.A., and G.N. contributed to the literature review. I.A., A.A., and B.A. assisted in data interpretation. E.I. and T.T. provided critical revisions to the manuscript. All authors read and approved the final version.

Corresponding author

Correspondence to Eshetu Elfios.

Ethics declarations

Ethical approval

Ethical approval and informed consent are not required, as this study is a systematic review and meta-analysis that only involved the use of previously published data.

Ethical guidelines

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Elfios, E., Asale, I., Merkine, M. et al. Turnover intention and its associated factors among nurses in Ethiopia: a systematic review and meta-analysis. BMC Health Serv Res 24, 662 (2024). https://doi.org/10.1186/s12913-024-11122-9


Received: 20 January 2024

Accepted: 20 May 2024

Published: 24 May 2024

DOI: https://doi.org/10.1186/s12913-024-11122-9


Keywords

  • Turnover intention
  • Systematic review
  • Meta-analysis



5-Day Virtual Training on How to Plan and Write Systematic Literature Review (SLR) and Bibliometric Analysis Research Paper by LekSha Research Centre: Register by June 9

  • Courses and Workshops Opportunities
  • June 3, 2024


About LekSha Research Centre

The LekSha Research Centre is a research, training, and consulting service provider for students, researchers, teachers, and international development organizations. It offers services across several disciplines, including the Humanities, Social Sciences, and Management. The Centre aims to train and assist faculty members, research scholars, and postgraduate and undergraduate students pursuing interdisciplinary and socially relevant research, and it designs short online and in-class courses to build professional competence.

About the Training

LekSha Research Centre is organising a training on “How to Plan and Write Systematic Literature Review (SLR) and Bibliometric Analysis Research Paper”.

Eligibility

This workshop is open to researchers, academicians, research scholars, librarians, and professionals involved in research evaluation, publication, and decision-making processes.

Dates

  • 10-11 June 2024: Writing Systematic Literature Review (SLR) Papers
  • 12 June 2024: AI Tools for Systematic Literature Review (SLR)
  • 13-14 June 2024: Bibliometric Analyses using Open Source Software

Detailed Agenda

10-11 June 2024:

  • Understanding review papers
  • Need for understanding Systematic Literature Review (SLR) frameworks
  • Preparation of SLR Protocol – Step-by-step
  • Hands-on Systematic Literature Review (SLR)

12 June 2024:

  • AI Tools for Systematic Literature Review (SLR)

13-14 June 2024:

  • Introduction to bibliometric analyses
  • Review of sample papers published using bibliometric analyses
  • Navigating SCOPUS/Open databases and downloading datasets
  • Bibliometric Analysis using Open-source software

Workshop Outline:

  • Understanding the need for Systematic Literature Review (SLR) frameworks
  • Hands-on SLR and research paper discussion
  • Writing an SLR paper using AI tools
  • Review of sample papers published using bibliometric analyses
  • Navigating SCOPUS / Web of Science and downloading data sets
  • Bibliometric Analysis using VOSviewer and Biblioshiny software (see the illustrative sketch after this list)
  • PPTs, data sets, and e-materials will be shared
  • Lifetime access to recorded videos
  • e-Certificate of participation
  • Networking and research collaboration among participants
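For the bibliometric analysis component referenced above, the kind of descriptive summary such a workshop typically starts from can be prototyped in a few lines of Python before moving to dedicated tools such as VOSviewer or Biblioshiny. The sketch below is illustrative only and is not part of the workshop materials: it assumes a hypothetical CSV export named scopus_export.csv with "Year" and "Author Keywords" columns (the column names in a real export may differ), and it simply counts publications per year and the most frequent author keywords.

```python
# Minimal bibliometric sketch (illustrative; assumes a hypothetical Scopus-style
# CSV export with "Year" and "Author Keywords" columns -- adjust the file name
# and column names to match your actual export).
from collections import Counter

import pandas as pd

df = pd.read_csv("scopus_export.csv")  # hypothetical file name

# Publications per year, in chronological order.
per_year = df["Year"].value_counts().sort_index()
print("Publications per year:")
print(per_year)

# Top 10 author keywords; Scopus-style exports typically separate keywords with ";".
keywords = Counter()
for cell in df["Author Keywords"].dropna():
    for kw in cell.split(";"):
        kw = kw.strip().lower()
        if kw:
            keywords[kw] += 1

print("\nTop 10 author keywords:")
for kw, count in keywords.most_common(10):
    print(f"{count:4d}  {kw}")
```

Network-style outputs such as keyword co-occurrence or co-citation maps are what VOSviewer and Biblioshiny (the web interface of the bibliometrix package) are designed for; a summary like the one above is only a quick sanity check on the downloaded dataset.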

Registration Fee

  • Indian participants – INR 650
  • International participants – USD 20

Every 4th registration is free (i.e., no fee for the 4th registration).

Registration Procedure

Click here to register.

Note: Starting February 20, 2024, LawBhoomi will only provide help for courses registered through the links on our platform. Registrations made through other links are not eligible for support.



Further reading: formulating a research question for a systematic review

  1. Formulating a research question

    A systematic review should either specify definitions and boundaries around these elements at the outset, or be clear about which elements are undefined. ... Some mnemonics that sometimes help to formulate research questions, set the boundaries of question and inform a search strategy. Intervention effects. PICO Population ...

  2. 1. Formulate the Research Question

    Step 1. Formulate the Research Question. A systematic review is based on a pre-defined specific research question (Cochrane Handbook, 1.1). The first step in a systematic review is to determine its focus - you should clearly frame the question(s) the review seeks to answer (Cochrane Handbook, 2.1). It may take you a while to develop a good review question - it is an important step in your review.

  3. Formulate Question

    A narrow and specific research question is required in order to conduct a systematic review. The goal of a systematic review is to provide an evidence synthesis of ALL research performed on one particular topic. Your research question should be clearly answerable from the studies included in your review. Another consideration is whether the ...

  4. LibGuides: Systematic Reviews: 2. Develop a Research Question

    Systematic Reviews. 2. Develop a Research Question. A well-developed and answerable question is the foundation for any systematic review. This process involves: Using the PICO framework can help team members clarify and refine the scope of their question. For example, if the population is breast cancer patients, is it all breast cancer patients ...

  5. Systematic Reviews: Formulating Your Research Question

    evidence-based practice process. One way to streamline and improve the research process for nurses and researchers of all backgrounds is to utilize the PICO search strategy. PICO is a format for developing a good clinical research question prior to starting one's research. It is a mnemonic used to describe the four elements

  6. Systematic and systematic-like review toolkit

    The first stage in a review is formulating the research question. The research question accurately and succinctly sums up the review's line of inquiry. This page outlines approaches to developing a research question that can be used as the basis for a review. ... A modified approach to systematic review guidelines can be used for rapid reviews ...

  7. Systematic reviews: Formulate your question

    Defining the question. Defining the research question and developing a protocol are the essential first steps in your systematic review. The success of your systematic review depends on a clear and focused question, so take the time to get it right. A framework may help you to identify the key concepts in your research question and to organise ...

  8. Systematic Reviews: Develop & Refine Your Research Question

    Develop & Refine Your Research Question. A clear, well-defined, and answerable research question is essential for any systematic review, meta-analysis, or other form of evidence synthesis. The question must be answerable. Spend time refining your research question. PICO Worksheet.

  9. Systematic Reviews: Formulate your question and protocol

    This video illustrates how to use the PICO framework to formulate an effective research question, and it also shows how to search a database using the search terms identified. ... Having a focused and specific research question is especially important when undertaking a systematic review. If your search question is too broad you will retrieve ...

  10. 1) Formulating a Research Question

    A well-formulated and focused question is essential to the conduct of the review. The research question binds the scope of the project and informs the sources to search, the search syntax, the eligibility criteria. Here is a list of commonly used frameworks to help you articulate a clearly defined research question:

  11. Research question

    Develop your research question. A systematic review is an in-depth attempt to answer a specific, focused question in a methodical way. Start with a clearly defined, researchable question, that should accurately and succinctly sum up the review's line of inquiry. A well formulated review question will help determine your inclusion and exclusion ...

  12. 1. Formulating the research question

    Systematic review vs. other reviews. Systematic reviews require a narrow and specific research question. The goal of a systematic review is to provide an evidence synthesis of ALL research performed on one particular topic. So, your research question should be clearly answerable from the data you gather from the studies included in your review.

  13. Step 1

    If a systematic review, covering the question you are considering, has already been published or has been registered and it is in the process of being completed. If that is the case, you need to modify your research question. If the systematic review was completed over five years ago, you can perform an update of the same question.

  14. LibGuides: Systematic Reviews: Formulating a Research Question

    Source: Pixabay (CCO Creative Commons) When doing a systematic review, having a research question framework can help you to identify key concepts of your research and facilitate the process of article selection for inclusion in the systematic review.. Framework for quantitative studies - PICO is commonly used to frame quantitative systematic review questions and contains the following elements:

  15. Guidance to best tools and practices for systematic reviews

    We recommend that systematic review authors incorporate specific practices or exercises when formulating a research question at the protocol stage. These should be designed to raise the review team's awareness of how to prevent research and resource waste [84, 130] and to stimulate careful contemplation of the scope of the review. Authors ...

  16. Developing a Research Question

    After developing the research question, it is necessary to confirm that the review has not previously been conducted (or is currently in progress). Make sure to check for both published reviews and registered protocols (to see if the review is in progress). Do a thorough search of appropriate databases; if additional help is needed, consult a ...

  17. LibGuides: Systematic Reviews: Formulating a focused question

    Formulating a focused question. It is essential that you have a focused research question before you begin searching the literature as part of your Systematic Review. Broad, unfocused questions can result in being overwhelmed with unmanageable numbers of papers, many of which may prove to be irrelevant.

  18. 2. Draft your Research Question

    Formulating a research question takes time and your team may go through different versions until settling on the right research question. To help formulate your research question, some research question frameworks are listed below (there are dozens of different types of these frameworks; for a comprehensive but concise overview of the almost 40 different types of research question frameworks ...

  19. Library Guides: Systematic reviews: Formulate the question

    General principles. "A good systematic review is based on a well formulated, answerable question. The question guides the review by defining which studies will be included, what the search strategy to identify the relevant primary studies should be, and which data need to be extracted from each study." A systematic review question needs to be.

  20. How do I develop a research question for systematic review

    The question must be clearly defined and it may be useful to use a research question framework such as PICO (population, intervention, comparison, outcome) or SPICE (setting, perspective, intervention, comparison, evaluation) to help structure both the question and the search terms. (A minimal illustrative sketch of turning framework elements into a draft search string appears after this list.)

  21. Formulate a specific question

    Systematic reviews require focused clinical questions. PICO is a useful tool for formulating such questions. For information on PICO and other frameworks please see our tutorial below. The PICO (Patient, Intervention, Comparison, Outcome) framework is commonly used to develop focused clinical questions for quantitative systematic reviews.

  22. Identifying the research question

    Formulating a well-constructed research question is essential for a successful review. You should have a draft research question before you choose the type of knowledge synthesis that you will conduct, as the type of answers you are looking for will help guide your choice of knowledge synthesis. Examples of systematic review and scoping review ...

  23. Formulate Question

    There are three primary elements of a Scoping Review RQ; however, not all RQs need to include all three: Population, Intervention, Outcome. As you develop your research question, it is helpful to define your key concepts. This will help with the development of your inclusion criteria as well as your search strategy.

  24. Beginning Steps and Finishing a Review

    This is the beginning of your question formation, research question, or hypothesis. ... Look at "recommendations for further research" in the conclusions of articles or other items. c. Use this to formulate your goal or objective of the review. 2. Prepare for your search. ... Refer to a systematic review or meta-analysis guidelines such as ...

  25. What are the steps to write a systematic literature review

    For writing a systematic literature review, follow this structure: Title: Create a concise and informative title that reflects the focus of your review. Abstract: Write a structured abstract that ...

  26. Biomarkers for personalised prevention of chronic diseases: a common

    Introduction: Personalised prevention aims to delay or avoid disease occurrence, progression, and recurrence through the adoption of targeted interventions that consider the individual's biological (including genetic) data, environmental and behavioural characteristics, as well as the socio-cultural context. This protocol summarises the main features of a rapid scoping review to show ...

  27. Turnover intention and its associated factors among nurses in Ethiopia

    By systematically analyzing findings from various studies, we aimed to provide a nuanced understanding of the factors influencing turnover intention specific to the Ethiopian healthcare context. Therefore, this systematic review and meta-analysis aimed to answer the following research questions. 1.

  28. 5-Day Virtual Training on How to Plan and Write Systematic Literature

    LekSha Research Centre is organising a training on "How to Plan and Write Systematic Literature Review (SLR) and Bibliometric Analysis Research Paper". Eligibility This workshop is open to researchers, academicians, research scholars, librarians, and professionals involved in research evaluation, publication, and decision- making processes.

  29. A realist review of health passports for Autistic adults.

    Inaccessible healthcare may contribute to this. Autism Health Passports (AHPs) are paper-based or digital tools which can be used to describe healthcare accessibility needs; they are recommended in UK clinical guidance. However, questions remained as to the theoretical underpinnings and effectiveness of AHPs.

  30. AI-Enhanced RAIN Protocol: A Systematic Approach to Optimize ...

    Background: Rectal cancers, or rectal neoplasms, are tumors that develop from the lining of the rectum, the concluding part of the large intestine ending at the anus. These tumors often start as benign polyps and may evolve into malignancies over several years. The causes of rectal cancer are diverse, with genetic mutations being a key factor. These mutations lead to uncontrolled cell growth ...
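Several of the excerpts above (items 20 and 21 in particular) describe using a framework such as PICO to structure both the question and the search terms. As a minimal, hypothetical illustration of that idea, and not a method taken from any of the guides excerpted above, the Python sketch below combines invented synonym lists for each framework element with OR and joins the elements with AND to produce a draft Boolean search string. The example concepts (nurses, work environment, turnover intention) are placeholders drawn loosely from the turnover-intention review cited in item 27.

```python
# Minimal, hypothetical sketch: turning question-framework elements into a draft
# Boolean search string. The concepts and synonyms below are invented placeholders;
# a real strategy needs controlled vocabulary (e.g., MeSH) and database-specific syntax.

def build_search_string(elements: dict[str, list[str]]) -> str:
    """OR together the synonyms within each element, then AND the elements."""
    blocks = []
    for _name, terms in elements.items():
        if terms:  # skip elements left open (e.g., no comparator specified)
            blocks.append("(" + " OR ".join(f'"{t}"' for t in terms) + ")")
    return " AND ".join(blocks)


if __name__ == "__main__":
    pico = {
        "Population": ["nurses", "nursing staff"],
        "Intervention/Exposure": ["work environment", "job satisfaction"],
        "Comparison": [],  # often left open in a review search
        "Outcome": ["turnover intention", "intention to leave"],
    }
    print(build_search_string(pico))
    # ("nurses" OR "nursing staff") AND ("work environment" OR "job satisfaction")
    # AND ("turnover intention" OR "intention to leave")
```

Each database (for example PubMed, Scopus, or CINAHL) has its own field tags and thesaurus, so a string like this is only a first draft to be adapted and documented in the review protocol.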