Source: Foster, M. (2018). Systematic reviews service: Introduction to systematic reviews. Retrieved September 18, 2018.
JEPS Bulletin
The Official Blog of the Journal of European Psychology Students
Investigating concepts in psychology requires an enormous amount of reading, so good literature reviews are an indispensable part of providing modern scientists with a broad spectrum of knowledge. To help, this blog post will introduce you to the basics of literature reviews and explain a specific methodological approach to writing one, known as the systematic literature review.
'Literature review' refers to the process of collecting, checking, and (re)analysing data from the existing literature with a particular research question in mind.
A literature review (a) defines a specific issue, concept, theory, phenomena; (b) compiles published literature on a topic; (c) summarises critical points of current knowledge about the problem and (d) suggests next steps in addressing it.
Literature reviews can draw on all sorts of information found in scientific journals, books, academic dissertations, electronic bibliographic databases, and the rest of the Internet. Electronic databases such as PsycINFO , PubMed , and Web of Science are a good starting point. Some of them, like EBSCOhost , ScienceDirect , SciELO , and ProQuest , provide full-text information, while others provide mostly abstracts. Besides scientific literature, literature reviews often include the so-called gray literature : material that is either unpublished or published in non-commercial form (e.g., theses, dissertations, government reports, fact sheets, pre-prints of articles). Excluding it completely from a literature review is inappropriate, because the search should always be as complete as possible in order to reduce the risk of publication bias. However, when reviewing material on, for example, Google Scholar , Science.gov , Social Science Research Network , or PsycEXTRA , keep in mind that such search engines also display material that has not been peer-reviewed and is therefore less credible.
When performing literature reviews, appropriately selected terminology is essential, since it allows researchers to communicate much more clearly. In psychology, without commonly agreed lists of terms, we would all get lost in the variety of concepts and vocabularies that could be applied. A typical recommendation for where to look for such index terms is the ‘ Thesaurus of Psychological Index Terms (2007) ’, which includes nearly 9,000 of the most commonly cross-referenced terms in psychology. In addition, the electronic databases mentioned before support the so-called Boolean operators : simple words such as AND, OR, NOT, or AND NOT. These are used for combining and/or excluding specific terms and often yield more focused and productive search results. Other tools that make a search strategy more comprehensive and focused are truncation, for finding terms that share the same root (e.g., anxiety and anxious), and wildcards, for words with spelling variations (e.g., man and men). Note that databases differ slightly in how they label index terms and implement these search tools in their systems.
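As a rough illustration of how Boolean operators and truncation combine, the sketch below assembles a search string from groups of synonyms (terms within a concept joined with OR, concepts joined with AND). The syntax shown is generic and illustrative; each database (PsycINFO, PubMed, etc.) has its own variant, and the example terms are invented.

```python
# Sketch: composing a database search string with Boolean operators
# and truncation (*). Syntax is illustrative; real databases differ.

def build_query(concept_groups):
    """OR the terms within each concept group, then AND the groups."""
    groups = ["(" + " OR ".join(terms) + ")" for terms in concept_groups]
    return " AND ".join(groups)

# anxi* would match both "anxiety" and "anxious"
query = build_query([
    ["anxi*"],
    ["child*", "adolescen*"],
    ['"hospital setting*"'],
])
print(query)  # (anxi*) AND (child* OR adolescen*) AND ("hospital setting*")
```

A string like this can then be adapted to each database's field labels and operator conventions.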
Among authors there is little consensus on the types of literature reviews, but most recognize at least two: traditional and systematic. The main difference between them lies in the process of collecting and selecting the data and material for the review. The systematic literature review, as the name implies, is the more structured of the two and is considered more credible, whereas the traditional review is thought to depend heavily on the researcher’s decisions about data selection and, consequently, evaluation and results. The systematic protocol of the systematic literature review can therefore be understood as a way of controlling for the incomplete and possibly biased reporting of traditional reviews.
The systematic literature review is a method/process/protocol in which a body of literature is aggregated, reviewed, and assessed using pre-specified, standardized techniques. In other words, to reduce bias, the rationale, the hypothesis, and the methods of data collection are prepared before the review and serve as a guide for performing the process. As with traditional literature reviews, the goal is to identify, critically appraise, and summarize the existing evidence concerning a clearly defined problem.
Systematic literature reviews allow us to examine conflicting and/or convergent findings, as well as to identify themes that require further investigation. Furthermore, they make it possible to evaluate the consistency and generalizability of the evidence regarding specific scientific questions and are, therefore, also of great practical value within the psychological field. The method is particularly useful for integrating the information of a group of studies investigating the same phenomenon, and it typically focuses on a very specific empirical question, such as ‘Does the Rational Emotive Therapy intervention benefit the well-being of patients diagnosed with depression?’.
Systematic literature reviews include all (or most) of the following characteristics:
The process of performing a systematic literature review consists of several stages and can be reported in the form of an original research article of the same name (i.e., a systematic literature review):
1: Start by clearly defining the objective of the review or form a structured research question.
Place in the research article: Title, Abstract, Introduction.
Example of the objective: The objective of this review is to systematically review and analyse the current research on the effects of music on the anxiety levels of children in hospital settings.
Example of a structured research question: What are the most important factors associated with the development of PTSD in soldiers?
Tip: In the title, identify that the report is a systematic literature review.
2: Clearly specify the methodology of the review and define eligibility criteria (i.e., study selection criteria that the published material must meet in order to be included or excluded from the study). The search should be extensive.
Place in the research article: Methods.
Examples of inclusion criteria: Publication was an academic and peer-reviewed study. Publication was a study that examined the effects of regular physical exercise intervention on depression and included a control group.
Examples of exclusion criteria: Studies involving male adults. Studies that also examined non-physical activities as interventions. Studies published only in a language other than English.
Tip: Eligibility criteria are often conveniently presented in a table.
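To see how pre-specified criteria make screening decisions reproducible, here is a hypothetical sketch that encodes inclusion and exclusion criteria as predicates over a record. The field names (`peer_reviewed`, `has_control_group`, `language`) are invented for illustration; in practice the judgments are made by human reviewers against the written criteria.

```python
# Hypothetical sketch: eligibility criteria as predicates applied to
# candidate records. Field names are invented for illustration.

inclusion = [
    lambda r: r["peer_reviewed"],        # academic, peer-reviewed study
    lambda r: r["has_control_group"],    # included a control group
]
exclusion = [
    lambda r: r["language"] != "English",  # published only in another language
]

def eligible(record):
    """A record is eligible if it meets every inclusion criterion
    and triggers no exclusion criterion."""
    return all(c(record) for c in inclusion) and not any(c(record) for c in exclusion)

record = {"peer_reviewed": True, "has_control_group": True, "language": "English"}
print(eligible(record))  # True
```

The point is not automation but precision: each criterion is stated unambiguously before screening begins.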
3: Retrieve eligible literature and thoroughly report your search strategy throughout the process. (Ideally, the selection process is performed by at least two independent investigators.)
Example: The EBSCOhost and PsycINFO electronic databases were searched from 2010 to 2017. These were chosen because of their psychological focus, which encompasses the psychosocial effects of emotional abuse in childhood. Search terms were ‘emotional abuse’, ‘childhood’, ‘psychosocial effects’, and ‘psychosocial consequences’. EBSCOhost produced 200 results from the search criteria, while PsycINFO produced 467, for a total of 667 articles. […] Articles were rejected if it was determined from the title and the abstract that the study failed to meet the inclusion criteria. Any ambiguities regarding the application of the selection criteria were resolved through discussions between all the researchers involved.
Tip: It is often helpful to present the selection process graphically, in the form of a decision tree or flow diagram (check PRISMA ).
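The numbers in a PRISMA-style flow diagram follow simple arithmetic from the search totals. Using the database totals from the example above (200 + 467 = 667), the sketch below walks through the counts; the duplicate and exclusion figures here are invented for illustration.

```python
# Sketch of PRISMA-style flow-diagram counts. Database totals come from
# the example above; duplicate and exclusion counts are hypothetical.

identified = {"EBSCOhost": 200, "PsycINFO": 467}
total = sum(identified.values())              # 667 records identified

duplicates = 80                               # hypothetical
screened = total - duplicates                 # records screened on title/abstract

excluded_on_title_abstract = 540              # hypothetical
full_text_assessed = screened - excluded_on_title_abstract

print(total, screened, full_text_assessed)    # 667 587 47
```

Each box in the diagram should be reproducible from the boxes above it in exactly this way.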
4: Assess the methodological quality of the selected literature whenever possible and exclude the articles with low methodological quality. Keep in mind that the quality of the systematic review depends on the validity and the quality of the studies included in the review.
Examples of instruments available for evaluating the quality of studies: the PEDro scale, the Jadad scale, the Delphi list, OTseeker, and the Maastricht criteria.
Tip: Present the excluded articles as a part of the selection process mentioned in step 3.
5: Proceed with the so-called characterization of the studies. Decide which data to look for in all the selected studies and present it in a summarized way. If the information is missing in some specific paper, always record this in your report. (Ideally, the characterization of the studies is performed by at least two independent investigators.)
Place in the research article: Results.
Examples of the information that should and/or could be collected for characterization of the literature: authors, year, sample size, study design, aims and objectives, findings/results, limitations.
Tip: Results can often be presented nicely in the form of a table depicting the main characteristics.
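A characterization table is essentially a fixed set of fields extracted from every study, with missing entries flagged rather than silently dropped. This sketch builds such a table as CSV; the two study entries are invented for illustration.

```python
# Sketch: a characterization table with a fixed field list; fields a
# study does not report are filled with "not reported". Study entries
# below are hypothetical.

import csv
import io

FIELDS = ["authors", "year", "sample_size", "design", "findings"]

studies = [
    {"authors": "Smith et al.", "year": 2015, "sample_size": 120,
     "design": "RCT", "findings": "Reduced anxiety"},
    {"authors": "Lee & Park", "year": 2018, "design": "cohort",
     "findings": "No effect"},   # sample size not reported
]

out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=FIELDS, restval="not reported")
writer.writeheader()
writer.writerows(studies)
print(out.getvalue())
```

The `restval` argument ensures that every gap in the extracted data is explicitly registered, as the step above recommends.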
6: Write a synthesis of the results – integrate the results of different studies and interpret them in a narrative form.
Place in the research article: Interpretation, Conclusions.
Patterns discovered in the results should be summarized in a qualitative, narrative form. Formulate one (or more) general arguments for organizing the review. One trick to help you do this is to choose two or three main information sources (e.g., articles, books, other literature reviews) and explain the results of the other studies through a similar organization. Connect the information reported by different sources rather than just summarizing the results. Identify patterns in the results of different studies, address the theoretical and/or methodological conflicts, and try to interpret them. Summarize the principal conclusions and evaluate the current state of the subject by pointing out possible further directions.
The results emerging from the data included in such retrospective studies lend a certain level of credibility to their conclusions. Indeed, systematic literature reviews are thought to be one of our best methods for summarizing and synthesizing evidence about a specific research question and often serve as the basis for practice guidelines in many health care disciplines. It is therefore no wonder that systematic reviews are gaining popularity among researchers and that journals are moving in this direction as well. This also shows in the development of ever more specific guidelines and checklists for writing systematic literature reviews (see, for example, PRISMA or the Cochrane Handbook for Systematic Reviews of Interventions ). For examples of systematic literature review articles, you can check the Cochrane Database of Systematic Reviews , BioMed Central’s Systematic Reviews journal , and PROSPERO . If you are familiar with the concept of ‘registered reports’, it is worth mentioning that submitting to PROSPERO also gives you the option of publishing these. I suggest that you go through the list of useful resources provided below; hopefully, it will answer anything related that remained unanswered. Now, I encourage you to be a little more systematic whenever researching a topic, to try writing a systematic literature review yourself, and maybe even to consider submitting it to JEPS .
EBSCOhost : https://search.ebscohost.com/
Google Scholar : https://scholar.google.com/
PRISMA : http://www.prisma-statement.org/
PROSPERO : https://www.crd.york.ac.uk/prospero/
ProQuest : http://www.proquest.com/
PsycEXTRA : http://www.apa.org/pubs/databases/psycextra/index.aspx
PsycINFO : http://www.apa.org/pubs/databases/psycinfo/index.aspx
PubMed : https://www.ncbi.nlm.nih.gov/pubmed/
SciELO : http://www.scielo.org/php/index.php?lang=en
Science.gov : https://www.science.gov/
ScienceDirect : http://www.sciencedirect.com/
Scopus : http://www.scopus.com/freelookup/form/author.uri
Social Science Research Network : https://www.ssrn.com/en/
Systematic Reviews Journal (BIOMED) : https://systematicreviewsjournal.biomedcentral.com/
Web of Science : https://webofknowledge.com/
Eva Štrukelj is currently studying Clinical and Health Psychology at the University of Algarve in Portugal. Her main areas of interest are social psychology and health psychology. Regarding research, she is particularly curious about stigma and related topics.
Created by health science librarians.
A systematic review is a literature review that gathers all of the available evidence matching pre-specified eligibility criteria to answer a specific research question. It uses explicit, systematic methods, documented in a protocol, to minimize bias, provide reliable findings, and inform decision-making. ¹
Before beginning a systematic review, consider whether it is the best type of review for your question, goals, and resources. The table below compares a few different types of reviews to help you decide which is best for you.
| Systematic Review | Scoping Review | Systematized Review |
| --- | --- | --- |
| Conducted for Publication | Conducted for Publication | Conducted for Assignment, Thesis, or (Possibly) Publication |
| Protocol Required | Protocol Required | No Protocol Required |
| Focused Research Question | Broad Research Question | Either |
| Focused Inclusion & Exclusion Criteria | Broad Inclusion & Exclusion Criteria | Either |
| Requires Large Team | Requires Small Team | Usually 1-2 People |
Systematic reviews follow established guidelines and best practices to produce high-quality research. Librarian involvement in systematic reviews is based on two levels. In Tier 1, your research team can consult with the librarian as needed. The librarian will answer questions and give you recommendations for tools to use. In Tier 2, the librarian will be an active member of your research team and co-author on your review. Roles and expectations of librarians vary based on the level of involvement desired. Examples of these differences are outlined in the table below.
| Tasks | Tier 1: Consultative | Tier 2: Research Partner / Co-author |
| --- | --- | --- |
| Guidance on process and steps | Yes | Yes |
| Background searching for past and upcoming reviews | Yes | Yes |
| Development and/or refinement of review topic | Yes | Yes |
| Assistance with refinement of PICO (population, intervention(s), comparator(s), outcome(s)) and key questions | Yes | Yes |
| Guidance on study types to include | Yes | Yes |
| Guidance on protocol registration | Yes | Yes |
| Identification of databases for searches | Yes | Yes |
| Instruction in search techniques and methods | Yes | Yes |
| Training in citation management software use for managing and sharing results | Yes | Yes |
| Development and execution of searches | No | Yes |
| Downloading search results to citation management software and removing duplicates | No | Yes |
| Documentation of search strategies | No | Yes |
| Management of search results | No | Yes |
| Guidance on methods | Yes | Yes |
| Guidance on data extraction and management techniques and software | Yes | Yes |
| Suggestions of journals to target for publication | Yes | Yes |
| Drafting of literature search description in "Methods" section | No | Yes |
| Creation of PRISMA diagram | No | Yes |
| Drafting of literature search appendix | No | Yes |
| Review of other manuscript sections and final draft | No | Yes |
| Librarian contributions warrant co-authorship | No | Yes |
The following are systematic and scoping reviews co-authored by HSL librarians.
Researchers conduct systematic reviews in a variety of disciplines. If your focus is on a topic outside of the health sciences, you may want to also consult the resources below to learn how systematic reviews may vary in your field. You can also contact a librarian for your discipline with questions.
- Environmental topics
- Social sciences
- Social work
- Software engineering
- Sport, exercise, & nutrition
- Updating reviews
Aims and scope
Systematic Reviews encompasses all aspects of the design, conduct and reporting of systematic reviews. The journal publishes high quality systematic review products including systematic review protocols, systematic reviews related to a very broad definition of human health, rapid reviews, updates of already completed systematic reviews, and methods research related to the science of systematic reviews, such as decision modelling. At this time Systematic Reviews does not accept reviews of in vitro studies. The journal also aims to ensure that the results of all well-conducted systematic reviews are published, regardless of their outcome.
Systematic Reviews is published continuously online-only.
- Thematic series: The role of systematic reviews in evidence-based research. Edited by Professor Dawid Pieper and Professor Hans Lund
- Thematic series: Canadian Task Force on Preventive Health Care Evidence Reviews. Edited by Assoc Prof Craig Lockwood
- Thematic series: Automation in the systematic review process. Edited by Prof Joseph Lau
- Thematic series: Five years of Systematic Reviews
2022 Citation Impact: 3.7 (2-year Impact Factor); 3.8 (5-year Impact Factor); 1.561 (SNIP, Source Normalized Impact per Paper); 1.269 (SJR, SCImago Journal Rank)
2023 Speed: 88 days from submission to first editorial decision for all manuscripts (median); 296 days from submission to acceptance (median)
2023 Usage: 3,531,065 downloads; 3,533 Altmetric mentions
ISSN: 2046-4053
Prabhakar Veginadu
1 Department of Rural Clinical Sciences, La Trobe Rural Health School, La Trobe University, Bendigo Victoria, Australia
2 Lincoln International Institute for Rural Health, University of Lincoln, Brayford Pool, Lincoln UK
3 Department of Orthodontics, Saveetha Dental College, Chennai Tamil Nadu, India
Associated data.
APPENDIX B: List of excluded studies with detailed reasons for exclusion
APPENDIX C: Quality assessment of included reviews using AMSTAR 2
The aim of this overview is to identify and collate evidence from existing published systematic review (SR) articles evaluating various methodological approaches used at each stage of an SR.
The search was conducted in five electronic databases from inception to November 2020 and updated in February 2022: MEDLINE, Embase, Web of Science Core Collection, Cochrane Database of Systematic Reviews, and APA PsycINFO. Title and abstract screening were performed in two stages by one reviewer, supported by a second reviewer. Full‐text screening, data extraction, and quality appraisal were performed by two reviewers independently. The quality of the included SRs was assessed using the AMSTAR 2 checklist.
The search retrieved 41,556 unique citations, of which 9 SRs were deemed eligible for inclusion in final synthesis. Included SRs evaluated 24 unique methodological approaches used for defining the review scope and eligibility, literature search, screening, data extraction, and quality appraisal in the SR process. Limited evidence supports the following (a) searching multiple resources (electronic databases, handsearching, and reference lists) to identify relevant literature; (b) excluding non‐English, gray, and unpublished literature, and (c) use of text‐mining approaches during title and abstract screening.
The overview identified limited SR‐level evidence on various methodological approaches currently employed during five of the seven fundamental steps in the SR process, as well as some methodological modifications currently used in expedited SRs. Overall, findings of this overview highlight the dearth of published SRs focused on SR methodologies and this warrants future work in this area.
Evidence synthesis is a prerequisite for knowledge translation. 1 A well conducted systematic review (SR), often in conjunction with meta‐analyses (MA) when appropriate, is considered the “gold standard” of methods for synthesizing evidence related to a topic of interest. 2 The central strength of an SR is the transparency of the methods used to systematically search, appraise, and synthesize the available evidence. 3 Several guidelines, developed by various organizations, are available for the conduct of an SR; 4 , 5 , 6 , 7 among these, Cochrane is considered a pioneer in developing rigorous and highly structured methodology for the conduct of SRs. 8 The guidelines developed by these organizations outline seven fundamental steps required in SR process: defining the scope of the review and eligibility criteria, literature searching and retrieval, selecting eligible studies, extracting relevant data, assessing risk of bias (RoB) in included studies, synthesizing results, and assessing certainty of evidence (CoE) and presenting findings. 4 , 5 , 6 , 7
The methodological rigor involved in an SR can require a significant amount of time and resources, which may not always be available. 9 As a result, there has been a proliferation of modifications to the traditional SR process, such as refining, shortening, bypassing, or omitting one or more steps, 10 , 11 for example, limiting the number and type of databases searched, limiting the publication date, language, and types of studies included, and using one reviewer for screening and selection of studies rather than two or more. 10 , 11 These methodological modifications are made to accommodate the needs and resource constraints of the reviewers and stakeholders (e.g., organizations, policymakers, health care professionals, and other knowledge users). While such modifications are considered time- and resource-efficient, they may introduce bias into the review process, reducing its usefulness. 5
Substantial research has been conducted examining various approaches used in the standardized SR methodology and their impact on the validity of SR results. There are a number of published reviews examining the approaches or modifications corresponding to single 12 , 13 or multiple steps 14 involved in an SR. However, there is yet to be a comprehensive summary of the SR‐level evidence for all the seven fundamental steps in an SR. Such a holistic evidence synthesis will provide an empirical basis to confirm the validity of current accepted practices in the conduct of SRs. Furthermore, sometimes there is a balance that needs to be achieved between the resource availability and the need to synthesize the evidence in the best way possible, given the constraints. This evidence base will also inform the choice of modifications to be made to the SR methods, as well as the potential impact of these modifications on the SR results. An overview is considered the choice of approach for summarizing existing evidence on a broad topic, directing the reader to evidence, or highlighting the gaps in evidence, where the evidence is derived exclusively from SRs. 15 Therefore, for this review, an overview approach was used to (a) identify and collate evidence from existing published SR articles evaluating various methodological approaches employed in each of the seven fundamental steps of an SR and (b) highlight both the gaps in the current research and the potential areas for future research on the methods employed in SRs.
An a priori protocol was developed for this overview but was not registered with the International Prospective Register of Systematic Reviews (PROSPERO), as the review was primarily methodological in nature and did not meet PROSPERO eligibility criteria for registration. The protocol is available from the corresponding author upon reasonable request. This overview was conducted based on the guidelines for the conduct of overviews as outlined in The Cochrane Handbook. 15 Reporting followed the Preferred Reporting Items for Systematic reviews and Meta‐analyses (PRISMA) statement. 3
Only published SRs, with or without associated MA, were included in this overview. We adopted the defining characteristics of SRs from The Cochrane Handbook. 5 According to The Cochrane Handbook, a review was considered systematic if it satisfied the following criteria: (a) clearly states the objectives and eligibility criteria for study inclusion; (b) provides reproducible methodology; (c) includes a systematic search to identify all eligible studies; (d) reports assessment of validity of findings of included studies (e.g., RoB assessment of the included studies); (e) systematically presents all the characteristics or findings of the included studies. 5 Reviews that did not meet all of the above criteria were not considered SRs for this study and were excluded. MA‐only articles were included if it was mentioned that the MA was based on an SR.
SRs and/or MA of primary studies evaluating methodological approaches used in defining review scope and study eligibility, literature search, study selection, data extraction, RoB assessment, data synthesis, and CoE assessment and reporting were included. The methodological approaches examined in these SRs and/or MA can also relate to the substeps or elements of these steps; for example, applying limits on date or type of publication is an element of the literature search. Included SRs examined or compared various aspects of a method or methods, and the associated factors, including but not limited to: precision or effectiveness; accuracy or reliability; impact on the SR and/or MA results; reproducibility of SR steps or bias introduced; and time and/or resource efficiency. SRs assessing the methodological quality of SRs (e.g., adherence to reporting guidelines), evaluating techniques for building search strategies or the use of specific database filters (e.g., use of Boolean operators or search filters for randomized controlled trials), examining various tools used for RoB or CoE assessment (e.g., ROBINS vs. Cochrane RoB tool), or evaluating statistical techniques used in meta‐analyses were excluded. 14
The search for published SRs was performed on the following scientific databases, initially from inception to the third week of November 2020 and updated in the last week of February 2022: MEDLINE (via Ovid), Embase (via Ovid), Web of Science Core Collection, Cochrane Database of Systematic Reviews, and American Psychological Association (APA) PsycINFO. The search was restricted to English-language publications. In line with the objectives of this study, study design filters within databases were used to restrict the search to SRs and MA, where available. The reference lists of included SRs were also searched for potentially relevant publications.
The search terms included keywords, truncations, and subject headings for the key concepts in the review question: SRs and/or MA, methods, and evaluation. Some of the terms were adopted from the search strategy used in a previous review by Robson et al., which reviewed primary studies on methodological approaches used in study selection, data extraction, and quality appraisal steps of SR process. 14 Individual search strategies were developed for respective databases by combining the search terms using appropriate proximity and Boolean operators, along with the related subject headings in order to identify SRs and/or MA. 16 , 17 A senior librarian was consulted in the design of the search terms and strategy. Appendix A presents the detailed search strategies for all five databases.
Title and abstract screening of references were performed in three steps. First, one reviewer (PV) screened all the titles and excluded obviously irrelevant citations, for example, articles on topics not related to SRs, non‐SR publications (such as randomized controlled trials, observational studies, scoping reviews, etc.). Next, from the remaining citations, a random sample of 200 titles and abstracts were screened against the predefined eligibility criteria by two reviewers (PV and MM), independently, in duplicate. Discrepancies were discussed and resolved by consensus. This step ensured that the responses of the two reviewers were calibrated for consistency in the application of the eligibility criteria in the screening process. Finally, all the remaining titles and abstracts were reviewed by a single “calibrated” reviewer (PV) to identify potential full‐text records. Full‐text screening was performed by at least two authors independently (PV screened all the records, and duplicate assessment was conducted by MM, HC, or MG), with discrepancies resolved via discussions or by consulting a third reviewer.
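The calibration step described above checks that two independent screeners apply the eligibility criteria consistently before one of them continues alone. A common way to quantify this (not reported in the article; the decisions below are invented) is simple percent agreement over the calibration sample:

```python
# Sketch of a reviewer-calibration check: the proportion of calibration
# records on which two independent screeners made the same decision.
# The decision lists below are invented; the article itself resolved
# discrepancies by consensus rather than reporting a statistic.

def percent_agreement(decisions_a, decisions_b):
    """Fraction of records where both screeners agree."""
    matches = sum(a == b for a, b in zip(decisions_a, decisions_b))
    return matches / len(decisions_a)

a = ["include", "exclude", "exclude", "include"]
b = ["include", "exclude", "include", "include"]
print(percent_agreement(a, b))  # 0.75
```

In practice a chance-corrected statistic such as Cohen's kappa is often preferred, since raw agreement can be inflated when most records are obvious exclusions.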
Data related to review characteristics, results, key findings, and conclusions were extracted by at least two reviewers independently (PV performed data extraction for all the reviews and duplicate extraction was performed by AP, HC, or MG).
The quality assessment of the included SRs was performed using the AMSTAR 2 (A MeaSurement Tool to Assess systematic Reviews). The tool consists of a 16‐item checklist addressing critical and noncritical domains. 18 For the purpose of this study, the domain related to MA was reclassified from critical to noncritical, as SRs with and without MA were included. The other six critical domains were used according to the tool guidelines. 18 Two reviewers (PV and AP) independently responded to each of the 16 items in the checklist with either “yes,” “partial yes,” or “no.” Based on the interpretations of the critical and noncritical domains, the overall quality of the review was rated as high, moderate, low, or critically low. 18 Disagreements were resolved through discussion or by consulting a third reviewer.
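The mapping from domain-level responses to an overall rating follows the published AMSTAR 2 guidance (high: no or one noncritical weakness; moderate: more than one noncritical weakness; low: one critical flaw; critically low: more than one critical flaw). The following sketch paraphrases that scheme, assuming flaws have already been counted; it is not the authors' procedure verbatim:

```python
def amstar2_overall(critical_flaws, noncritical_weaknesses):
    """Overall confidence rating per the AMSTAR 2 scheme (sketch):
    >1 critical flaw -> critically low; 1 critical flaw -> low;
    no critical flaws -> high if <=1 noncritical weakness, else moderate."""
    if critical_flaws > 1:
        return "critically low"
    if critical_flaws == 1:
        return "low"
    return "high" if noncritical_weaknesses <= 1 else "moderate"
```

Note that in this overview the MA-related domain was reclassified as noncritical, so the count of critical domains feeding this logic differs from the default tool configuration.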
To provide an understandable summary of existing evidence syntheses, characteristics of the methods evaluated in the included SRs were examined and key findings were categorized and presented based on the corresponding step in the SR process. The categories of key elements within each step were discussed and agreed by the authors. Results of the included reviews were tabulated and summarized descriptively, along with a discussion on any overlap in the primary studies. 15 No quantitative analyses of the data were performed.
From 41,556 unique citations identified through the literature search, 50 full‐text records were reviewed, and nine systematic reviews 14, 19, 20, 21, 22, 23, 24, 25, 26 were deemed eligible for inclusion. The flow of studies through the screening process is presented in Figure 1. A list of excluded studies with reasons can be found in Appendix B.
Study selection flowchart
Table 1 summarizes the characteristics of the included SRs. The majority of the included reviews (six of nine) were published after 2010. 14, 22, 23, 24, 25, 26 Four of the nine included SRs were Cochrane reviews. 20, 21, 22, 23 The number of databases searched in the reviews ranged from 2 to 14; two reviews searched gray literature sources, 24, 25 and seven reviews included a supplementary search strategy to identify relevant literature. 14, 19, 20, 21, 22, 23, 26 Three of the included SRs (all Cochrane reviews) included an integrated MA. 20, 21, 23
Characteristics of included studies
Author, year | Search strategy (year last searched; no. databases; supplementary searches) | SR design (type of review; no. of studies included) | Topic; subject area | SR objectives | SR authors’ comments on study quality |
---|---|---|---|---|---|
Crumley, 2005 | 2004; seven databases; four journals handsearched, reference lists checked, and authors contacted | SR; n = 64 | RCTs and CCTs; not specified | To identify and quantitatively review studies comparing two or more different resources (e.g., databases, Internet, handsearching) used to identify RCTs and CCTs for systematic reviews. | Most of the studies adequately described reproducible search methods and the expected search yield. Poor quality in studies was mainly due to lack of rigor in reporting selection methodology. The majority of the studies did not indicate the number of people involved in independently screening the searches or applying eligibility criteria to identify potentially relevant studies. |
Hopewell, 2007 | 2002; eight databases; selected journals and published abstracts handsearched, and authors contacted | SR and MA; n = 34 (34 in quantitative analysis) | RCTs; health care | To systematically review empirical studies that compared the results of handsearching with the results of searching one or more electronic databases to identify reports of randomized trials. | The electronic search was designed and carried out appropriately in the majority of the studies, while the appropriateness of handsearching was unclear in half the studies because of limited information. The study screening methods used in both groups were comparable in most of the studies. |
Hopewell, 2007 | 2005; two databases; selected journals and published abstracts handsearched, reference lists and citations checked, and authors contacted | SR and MA; n = 5 (5 in quantitative analysis) | RCTs; health care | To systematically review research studies that investigated the impact of gray literature in meta‐analyses of randomized trials of health care interventions. | In the majority of the studies, electronic searches were designed and conducted appropriately, and the selection of studies for eligibility was similar for handsearching and database searching. Insufficient data in most studies to assess the appropriateness of handsearching and investigator agreement on the eligibility of the trial reports. |
Horsley, 2011 | 2008; three databases; reference lists and citations checked, and authors contacted | SR; n = 12 | Any topic or study area | To investigate the effectiveness of checking reference lists for the identification of additional, relevant studies for systematic reviews. Effectiveness is defined as the proportion of relevant studies identified by review authors solely by checking reference lists. | Interpretability and generalizability of the included studies were limited. Extensive heterogeneity among the studies in the number and type of databases used. Lack of control in the majority of the studies over the quality and comprehensiveness of searching. |
Morrison, 2012 | 2011; six databases and gray literature | SR; n = 5 | RCTs; conventional medicine | To examine the impact of English language restriction on systematic review‐based meta‐analyses. | The included studies were assessed to have good reporting quality and validity of results. Methodological issues were mainly noted in the areas of sample power calculation and distribution of confounders. |
Robson, 2019 | 2016; three databases; reference lists checked and authors contacted | SR; n = 37 | N/R | To identify and summarize studies assessing methodologies for study selection, data abstraction, or quality appraisal in systematic reviews. | The quality of the included studies was generally low. Only one study was assessed as having low RoB across all four domains. The majority of the studies were assessed as having unclear RoB across one or more domains. |
Schmucker, 2017 | 2016; four databases; reference lists | SR; n = 10 | Study data; medicine | To assess whether the inclusion of data that were not published at all and/or published only in the gray literature influences pooled effect estimates in meta‐analyses and leads to different interpretation. | The majority of the included studies could not be judged on the adequacy of matching or adjusting for confounders of the gray/unpublished data in comparison to published data. Also, generalizability of results was low or unclear in four research projects. |
Morissette, 2011 | 2009; five databases; reference lists checked and authors contacted | SR and MA; n = 6 (5 included in quantitative analysis) | N/R | To determine whether blinded versus unblinded assessments of risk of bias result in similar or systematically different assessments in studies included in a systematic review. | Four studies had unclear risk of bias, while two studies had high risk of bias. |
O'Mara‐Eves, 2015 | 2013; 14 databases and gray literature | SR; n = 44 | N/R | To gather and present the available research evidence on existing methods for text mining related to the title and abstract screening stage in a systematic review, including the performance metrics used to evaluate these technologies. | Quality was appraised based on two criteria: sampling of test cases and adequacy of methods description for replication. No study was excluded based on quality (author contact). |
SR = systematic review; MA = meta‐analysis; RCT = randomized controlled trial; CCT = controlled clinical trial; N/R = not reported.
The included SRs evaluated 24 unique methodological approaches (26 in total) used across five steps in the SR process; eight SRs evaluated six approaches, 19, 20, 21, 22, 23, 24, 25, 26 while one review evaluated 18 approaches. 14 Exclusion of gray or unpublished literature 21, 26 and blinding of reviewers for RoB assessment 14, 23 were evaluated in two reviews each. The included SRs evaluated methods used in five different steps of the SR process: defining the scope of the review (n = 3), literature search (n = 3), study selection (n = 2), data extraction (n = 1), and RoB assessment (n = 2) (Table 2).
Summary of findings from reviews evaluating systematic review methods
Key elements | Author, year | Method assessed | Evaluations/outcomes (P—primary; S—secondary) | Summary of SR authors’ conclusions | Quality of review |
---|---|---|---|---|---|
Excluding study data based on publication status | Hopewell, 2007 | Gray vs. published literature | Pooled effect estimate | Published trials are usually larger and show an overall greater treatment effect than gray trials. Excluding trials reported in gray literature from SRs and MAs may exaggerate the results. | Moderate |
| Schmucker, 2017 | Gray and/or unpublished vs. published literature | P: Pooled effect estimate; S: Impact on interpretation of MA | Excluding unpublished trials had no or only a small effect on the pooled estimates of treatment effects. Insufficient evidence to conclude the impact of including unpublished or gray study data on MA conclusions. | Moderate |
Excluding study data based on language of publication | Morrison, 2012 | English language vs. non‐English language publications | P: Bias in summary treatment effects; S: number of included studies and patients, methodological quality, and statistical heterogeneity | No evidence of a systematic bias from the use of English language restrictions in systematic review‐based meta‐analyses in conventional medicine. Conflicting results on the methodological and reporting quality of English and non‐English language RCTs. Further research required. | Low |
Resources searching | Crumley, 2005 | Two or more resources searching vs. resource‐specific searching | Recall and precision | Multiple‐source comprehensive searches are necessary to identify all RCTs for a systematic review. For electronic databases, using the Cochrane HSS or complex search strategy in consultation with a librarian is recommended. | Critically low |
Supplementary searching | Hopewell, 2007 | Handsearching only vs. one or more electronic database(s) searching | Number of identified randomized trials | Handsearching is important for identifying trial reports for inclusion in systematic reviews of health care interventions published in nonindexed journals. Where time and resources are limited, majority of the full English‐language trial reports can be identified using a complex search or the Cochrane HSS. | Moderate |
| Horsley, 2011 | Checking reference lists (no comparison) | P: additional yield of checking reference lists; S: additional yield by publication type, study design, or both, and data pertaining to costs | There is some evidence to support the use of checking reference lists to complement literature search in systematic reviews. | Low |
Reviewer characteristics | Robson, 2019 | Single vs. double reviewer screening | P: Accuracy, reliability, or efficiency of a method; S: factors affecting accuracy or reliability of a method | Using two reviewers for screening is recommended. If resources are limited, one reviewer can screen, and the other reviewer can verify the list of excluded studies. | Low |
| | Experienced vs. inexperienced reviewers for screening | | Screening must be performed by experienced reviewers. | |
| | Screening by blinded vs. unblinded reviewers | | Authors do not recommend blinding of reviewers during screening, as the blinding process was time‐consuming and had little impact on the results of MA. | |
Use of technology for study selection | Robson, 2019 | Use of dual computer monitors vs. nonuse of dual monitors for screening | P: Accuracy, reliability, or efficiency of a method; S: factors affecting accuracy or reliability of a method | There are no significant differences in the time spent on abstract or full‐text screening with the use and nonuse of dual monitors. | Low |
| | Use of Google Translate to translate non‐English citations to facilitate screening | | Use of Google Translate to screen German language citations. | |
| O'Mara‐Eves, 2015 | Use of text mining for title and abstract screening | Any evaluation concerning workload reduction | Text mining approaches can be used to reduce the number of studies to be screened, increase the rate of screening, improve the workflow with screening prioritization, and replace the second reviewer. The evaluated approaches reported saving a workload of between 30% and 70%. | Critically low |
Order of screening | Robson, 2019 | Title‐first screening vs. title‐and‐abstract simultaneous screening | P: Accuracy, reliability, or efficiency of a method; S: factors affecting accuracy or reliability of a method | Title‐first screening showed no substantial gain in time when compared to simultaneous title and abstract screening. | Low |
Reviewer characteristics | Robson, 2019 | Single vs. double reviewer data extraction | P: Accuracy, reliability, or efficiency of a method; S: factors affecting accuracy or reliability of a method | Use two reviewers for data extraction. Single reviewer data extraction followed by verification of outcome data by a second reviewer (where statistical analysis is planned), if resources preclude. | Low |
| | Experienced vs. inexperienced reviewers for data extraction | | Experienced reviewers must be used for extracting continuous outcomes data. | |
| | Data extraction by blinded vs. unblinded reviewers | | Authors do not recommend blinding of reviewers during data extraction, as it had no impact on the results of MA. | |
Use of technology for data extraction | | Use of dual computer monitors vs. nonuse of dual monitors for data extraction | | Using two computer monitors may improve the efficiency of data extraction. | |
| | Data extraction by two English reviewers using Google Translate vs. data extraction by two reviewers fluent in the respective languages | | Google Translate provides limited accuracy for data extraction. | |
| | Computer‐assisted vs. double reviewer extraction of graphical data | | Use of computer‐assisted programs to extract graphical data. | |
Obtaining additional data | | Contacting study authors for additional data | | Recommend contacting authors for obtaining additional relevant data. | |
Reviewer characteristics | Robson, 2019 | Quality appraisal by blinded vs. unblinded reviewers | P: Accuracy, reliability, or efficiency of a method; S: factors affecting accuracy or reliability of a method | Inconsistent results on RoB assessments performed by blinded and unblinded reviewers. Blinding reviewers for quality appraisal not recommended. | Low |
| Morissette, 2011 | Risk of bias (RoB) assessment by blinded vs. unblinded reviewers | P: Mean difference and 95% confidence interval between RoB assessment scores; S: qualitative level of agreement, mean RoB scores and measures of variance for the results of the RoB assessments, and inter‐rater reliability between blinded and unblinded reviewers | Findings related to the difference between blinded and unblinded RoB assessments are inconsistent across the studies. Pooled effects show no differences between RoB assessments completed in a blinded or unblinded manner. | Moderate |
| Robson, 2019 | Experienced vs. inexperienced reviewers for quality appraisal | P: Accuracy, reliability, or efficiency of a method; S: factors affecting accuracy or reliability of a method | Reviewers performing quality appraisal must be trained. The quality assessment tool must be pilot tested. | Low |
| | Use of additional guidance vs. nonuse of additional guidance for quality appraisal | | Providing guidance and decision rules for quality appraisal improved the inter‐rater reliability in RoB assessments. | |
Obtaining additional data | | Contacting study authors for obtaining additional information/use of supplementary information available in the published trials vs. no additional information for quality appraisal | | Additional data related to study quality obtained by contacting study authors improved the quality assessment. | |
RoB assessment of qualitative studies | | Structured vs. unstructured appraisal of qualitative research studies | | Use of a structured tool if qualitative and quantitative study designs are included in the review. For qualitative reviews, either a structured or unstructured quality appraisal tool can be used. | |
There was some overlap in the primary studies evaluated by the included SRs on the same topics: Schmucker et al. 26 and Hopewell et al. 21 (n = 4), Hopewell et al. 20 and Crumley et al. 19 (n = 30), and Robson et al. 14 and Morissette et al. 23 (n = 4). There were no conflicting results between any of the identified SRs on the same topic.
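The overlap counts reported above are simply the sizes of the pairwise intersections of each review's included-study set; a minimal illustration with hypothetical study identifiers (not the actual study lists):

```python
# Hypothetical primary-study IDs for two reviews on the same topic.
# The overlap between the reviews is the size of the set intersection.
review_a = {"s1", "s2", "s3", "s4", "s5"}
review_b = {"s2", "s3", "s4", "s5", "s6"}
overlap = len(review_a & review_b)  # number of shared primary studies
```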
Overall, the quality of the included reviews was assessed as moderate at best (Table 2). The most common critical weakness was failure to provide justification for excluding individual studies (four reviews). The detailed quality assessment is provided in Appendix C.
3.3.1. Methods for defining review scope and eligibility
Two SRs investigated the effect of excluding data obtained from gray or unpublished sources on the pooled effect estimates of MA. 21, 26 Hopewell et al. 21 reviewed five studies that compared the impact of gray literature on the results of a cohort of MA of RCTs of health care interventions. Gray literature was defined as information published in “print or electronic sources not controlled by commercial or academic publishers.” Findings showed an overall greater treatment effect for published trials than for trials reported in the gray literature. In a more recent review, Schmucker et al. 26 addressed similar objectives by investigating gray and unpublished data in medicine. In addition to gray literature, defined similarly to the previous review by Hopewell et al., the authors also evaluated unpublished data, defined as “supplemental unpublished data related to published trials, data obtained from the Food and Drug Administration or other regulatory websites or postmarketing analyses hidden from the public.” The review found that in the majority of the MA, excluding gray literature had little or no effect on the pooled effect estimates. The evidence was insufficient to conclude whether data from gray and unpublished literature had an impact on the conclusions of MA. 26
Morrison et al. 24 examined five studies measuring the effect of excluding non‐English language RCTs on the summary treatment effects of SR‐based MA in various fields of conventional medicine. Although none of the included studies reported a major difference in treatment effect estimates between English‐only and non‐English‐inclusive MA, the review found inconsistent evidence regarding the methodological and reporting quality of English and non‐English trials. 24 As such, there might be a risk of introducing “language bias” when excluding non‐English language RCTs. The authors also noted that the number of non‐English trials varies across medical specialties, as does the impact of these trials on MA results. Based on these findings, Morrison et al. 24 conclude that literature searches should include non‐English studies, when resources and time allow, to minimize the risk of introducing “language bias.”
Crumley et al. 19 analyzed recall (also referred to as “sensitivity”; defined as the “percentage of relevant studies identified by the search”) and precision (defined as the “percentage of studies identified by the search that were relevant”) when searching a single resource to identify randomized controlled trials and controlled clinical trials, as opposed to searching multiple resources. The studies included in their review frequently compared a MEDLINE‐only search with a search involving a combination of other resources. The review found low median recall estimates (median values between 24% and 92%) and very low median precision (median values between 0% and 49%) for most of the electronic databases when searched individually. 19 A between‐database comparison, based on the type of search strategy used, showed better recall and precision for complex and Cochrane Highly Sensitive Search Strategies (CHSSS). In conclusion, the authors emphasize that literature searches for trials in SRs must include multiple sources. 19
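Under the definitions used by Crumley et al., recall and precision reduce to simple set arithmetic over the retrieved and relevant study sets. The following sketch uses hypothetical record sets, not data from the review:

```python
def recall_precision(retrieved, relevant):
    """Recall: share of all relevant studies that the search found.
    Precision: share of the search results that were relevant."""
    hits = retrieved & relevant
    return len(hits) / len(relevant), len(hits) / len(retrieved)

# Hypothetical: a database search returning 8 records, with 10 truly
# relevant studies known from a reference ("gold standard") set.
retrieved = set(range(1, 9))   # records 1..8 found by the search
relevant = set(range(5, 15))   # studies 5..14 are actually relevant
recall, precision = recall_precision(retrieved, relevant)
# recall = 4/10 = 0.4, precision = 4/8 = 0.5
```

The trade-off the review describes follows directly: adding databases enlarges the retrieved set, which tends to raise recall while lowering precision.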
In an SR comparing handsearching and electronic database searching, Hopewell et al. 20 found that handsearching retrieved more relevant RCTs (retrieval rate of 92%–100%) than searching in a single electronic database (retrieval rates of 67% for PsycINFO/PsycLIT, 55% for MEDLINE, and 49% for Embase). The retrieval rates varied depending on the quality of handsearching, the type of electronic search strategy used (e.g., simple, complex, or CHSSS), and the type of trial reports searched (e.g., full reports, conference abstracts, etc.). The authors concluded that handsearching was particularly important in identifying full trials published in nonindexed journals and in languages other than English, as well as those published as abstracts and letters. 20
The effectiveness of checking reference lists to retrieve additional relevant studies for an SR was investigated by Horsley et al. 22 The review reported that checking reference lists yielded 2.5%–40% additional studies, depending on the quality and comprehensiveness of the electronic search used. The authors conclude that there is some evidence, although from poor‐quality studies, to support checking reference lists to supplement database searching. 22
Three approaches relevant to reviewer characteristics (the number, experience, and blinding of reviewers involved in the screening process) were highlighted in an SR by Robson et al. 14 Based on the retrieved evidence, the authors recommended that two independent, experienced, and unblinded reviewers be involved in study selection. 14 The review authors also suggested a modified approach in which, when resources are limited, one reviewer screens and the other verifies the list of excluded studies. It should be noted, however, that this suggestion is likely based on the authors' opinion, as there was no evidence related to it from the studies included in the review.
Robson et al. 14 also reported two methods involving the use of technology for screening studies: using Google Translate to translate non‐English articles (for example, German‐language articles into English) to facilitate screening was considered a viable method, while using two computer monitors for screening did not increase screening efficiency. Title‐first screening showed no substantial gain in time compared with simultaneous screening of titles and abstracts. Therefore, considering that search results are routinely exported as titles and abstracts, Robson et al. 14 recommend screening titles and abstracts simultaneously. However, the authors note that these conclusions were based on a very limited number of low‐quality studies (in most instances, one study per method). 14
Robson et al. 14 examined three approaches for data extraction relevant to reviewer characteristics: the number, experience, and blinding of reviewers (similar to the study selection step). Although based on limited evidence from a small number of studies, the authors recommended the use of two experienced and unblinded reviewers for data extraction. The experience of the reviewers was suggested to be especially important when extracting continuous outcomes (or quantitative) data. However, when resources are limited, data extraction by one reviewer with verification of the outcome data by a second reviewer was recommended.
As for methods involving the use of technology, Robson et al. 14 identified limited evidence that using two monitors improves data extraction efficiency and that computer‐assisted programs aid graphical data extraction. However, the use of Google Translate for data extraction from non‐English articles was not considered viable. 14 In the same review, Robson et al. 14 identified evidence supporting contacting authors to obtain additional relevant data.
Two SRs examined the impact of blinding of reviewers on RoB assessments. 14, 23 Morissette et al. 23 investigated the mean differences between blinded and unblinded RoB assessment scores and found inconsistent differences among the included studies, precluding definitive conclusions. Similar conclusions were drawn in a more recent review by Robson et al., 14 which included four studies on reviewer blinding for RoB assessment that completely overlapped with Morissette et al. 23
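The primary outcome in Morissette et al. was the mean difference (with 95% CI) between blinded and unblinded RoB scores. As a generic illustration of that computation, using hypothetical scores and a normal-approximation CI rather than the review's actual data or analysis:

```python
import statistics as st

def mean_diff_ci(a, b):
    """Mean difference between two independent samples, with a
    normal-approximation 95% CI (illustrative sketch only)."""
    diff = st.mean(a) - st.mean(b)
    se = (st.variance(a) / len(a) + st.variance(b) / len(b)) ** 0.5
    return diff, (diff - 1.96 * se, diff + 1.96 * se)

# Hypothetical RoB scores from blinded vs. unblinded assessments.
blinded = [3, 4, 2, 5, 3, 4]
unblinded = [4, 4, 3, 5, 4, 4]
diff, (lo, hi) = mean_diff_ci(blinded, unblinded)
# A CI spanning zero, as here, is consistent with "no difference"
# between blinded and unblinded assessments.
```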
The use of experienced reviewers and the provision of additional guidance for RoB assessment were examined by Robson et al. 14 The review concluded that providing reviewers with intensive training and guidance on assessing studies that report insufficient data improves RoB assessments. 14 Obtaining additional data related to quality assessment by contacting study authors was also found to help RoB assessments, although this was based on limited evidence. For qualitative or mixed‐method reviews, Robson et al. 14 recommend the use of a structured RoB tool as opposed to an unstructured tool. No SRs were identified on the data synthesis and CoE assessment and reporting steps.
4.1. Summary of findings
Nine SRs examining 24 unique methods used across five steps in the SR process were identified in this overview. The collective evidence supports some current traditional and modified SR practices, while challenging other approaches. However, the quality of the included reviews was assessed to be moderate at best, and in the majority of the included SRs, evidence related to the evaluated methods was obtained from a very limited number of primary studies. As such, interpretations from these SRs should be made cautiously.
The evidence gathered from the included SRs corroborates a few current SR approaches. 5 For example, it is important to search multiple resources to identify relevant trials (RCTs and/or CCTs). The resources must include a combination of electronic database searching, handsearching, and checking the reference lists of retrieved articles. 5 However, no SRs were identified that evaluated the impact of the number of electronic databases searched. A recent study by Halladay et al. 27 found that articles on therapeutic interventions retrieved by searching databases other than PubMed (including Embase) contributed only a small amount of information to the MA and had minimal impact on the MA results. The authors concluded that when resources are limited and a large number of studies is expected to be retrieved for the SR or MA, a PubMed‐only search can yield reliable results. 27
Findings from the included SRs also reiterate some methodological modifications currently employed to “expedite” the SR process. 10, 11 For example, excluding non‐English language trials and gray/unpublished trials from MA has been shown to have minimal or no impact on the results of MA. 24, 26 However, the efficiency of these SR methods, in terms of time and resources used, has not been evaluated in the included SRs. 24, 26 Of the included SRs, only two focused on the aspect of efficiency 14, 25 ; O'Mara‐Eves et al. 25 report some evidence to support the use of text‐mining approaches for title and abstract screening in order to increase the rate of screening. Moreover, only one included SR 14 considered primary studies that evaluated the reliability (inter‐ or intra‐reviewer consistency) and accuracy (validity when compared against a “gold standard” method) of SR methods. This can be attributed to the limited number of primary studies that evaluated these outcomes. 14 The lack of outcome measures related to reliability, accuracy, and efficiency precludes definitive recommendations on the use of these methods and modifications. Future research studies must focus on these outcomes.
Some evaluated methods may be relevant to multiple steps; for example, exclusions based on publication status (gray/unpublished literature) and language of publication (non‐English language studies) can be outlined in the a priori eligibility criteria or can be incorporated as search limits in the search strategy. The SRs included in this overview focused on the effect of study exclusions on pooled treatment effect estimates or MA conclusions. Excluding studies from the results of a comprehensive search on the basis of different eligibility criteria may yield different results than limiting the search itself. 28 Further studies are required to examine this aspect.
Although we acknowledge the lack of standardized quality assessment tools for methodological study designs, we adhered to the Cochrane criteria for identifying SRs in this overview. This was done to ensure consistency in the quality of the included evidence. As a result, we excluded three reviews that did not provide any form of discussion of the quality of the included studies. The methods investigated in these reviews concern supplementary searching, 29 data extraction, 12 and screening. 13 However, the methods reported in two of these three reviews, by Mathes et al. 12 and Waffenschmidt et al., 13 were also examined in the SR by Robson et al., 14 which was included in this overview; in most instances (with the exception of one study each in Mathes et al. 12 and Waffenschmidt et al. 13 ), the studies examined in these excluded reviews overlapped with those in the SR by Robson et al. 14
One of the key gaps in knowledge observed in this overview was the dearth of SRs on methods used in the data synthesis component of SRs. Narrative and quantitative syntheses are the two most commonly used approaches for synthesizing data in evidence synthesis. 5 There are some published studies on the proposed indications and implications of these two approaches. 30, 31 These studies found that both data synthesis methods produced comparable results and have their own advantages, suggesting that the choice of method must be based on the purpose of the review. 31 With an increasing number of “expedited” SR approaches (so‐called “rapid reviews”) avoiding MA, 10, 11 further research is warranted in this area to determine the impact of the type of data synthesis on the results of the SR.
The findings of this overview highlight several areas of paucity in primary research and evidence synthesis on SR methods. First, no SRs were identified on methods used in two important components of the SR process: data synthesis and CoE assessment and reporting. As for the included SRs, a limited number of evaluation studies were identified for several methods. This indicates that further research is required to corroborate many of the methods recommended in current SR guidelines. 4, 5, 6, 7 Second, some SRs evaluated the impact of methods only on the results of quantitative synthesis and MA conclusions. Future research studies must also focus on the interpretations of SR results. 28, 32 Finally, most of the included SRs were conducted on specific topics related to the field of health care, limiting the generalizability of the findings to other areas. It is important that future research studies evaluating evidence syntheses broaden their objectives and include studies on different topics within the field of health care.
To our knowledge, this is the first overview summarizing current evidence from SRs and MA on the different methodological approaches used in several fundamental steps of SR conduct. The overview methodology followed well‐established guidelines and strict criteria defined for the inclusion of SRs.
There are several limitations related to the nature of the included reviews. Evidence for most of the methods investigated in the included reviews was derived from a limited number of primary studies. In addition, the majority of the included SRs may be considered outdated, as they were published (or last updated) more than 5 years ago;33 only three of the nine SRs were published in the last 5 years.14,25,26 Therefore, important recent evidence related to these topics may not have been included. A substantial number of the included SRs were conducted in the field of health care, which may limit the generalizability of the findings. Some method evaluations in the included SRs focused on quantitative analysis components and MA conclusions only; as such, the applicability of these findings to SRs more broadly remains unclear.28 Given the methodological nature of our overview, limiting the inclusion of SRs according to the Cochrane criteria might have resulted in missing relevant evidence from reviews without a quality assessment component.12,13,29 Although the included SRs performed some form of quality appraisal of the included studies, most did not use a standardized RoB tool, which may reduce confidence in their conclusions. Owing to the type of outcome measures used for the method evaluations in the primary studies and the included SRs, some of the identified methods have not been validated against a reference standard.
Some limitations in the overview process must also be noted. Although our literature search was comprehensive, covering five bibliographic databases and a supplementary search of reference lists, no gray literature sources or other evidence resources were searched. The search was also conducted primarily in health databases, which might have missed SRs published in other fields. Moreover, only English-language SRs were included, for feasibility. Because the literature search retrieved a large number of citations (41,556), and owing to time and resource limitations, title and abstract screening was performed by a single reviewer, who was calibrated for consistency in the screening process by a second reviewer. This might have resulted in some errors when retrieving and selecting relevant SRs. The SR methods were grouped based on key elements of each recommended SR step, as agreed by the authors. This categorization pertains to the identified set of methods and should be considered subjective.
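Reviewer calibration of the kind described above is often quantified with an inter-rater agreement statistic such as Cohen's kappa. The sketch below is a hypothetical illustration only: the choice of kappa, the labels, and the example screening decisions are assumptions for demonstration, not details reported in this overview.

```python
# Cohen's kappa for two reviewers' include/exclude decisions on a shared
# calibration set of abstracts. Standard library only.

def cohens_kappa(reviewer_a, reviewer_b):
    """Chance-corrected agreement between two raters over the same items."""
    assert len(reviewer_a) == len(reviewer_b)
    n = len(reviewer_a)
    # Observed proportion of agreement.
    p_observed = sum(a == b for a, b in zip(reviewer_a, reviewer_b)) / n
    # Agreement expected by chance, from each rater's marginal label frequencies.
    labels = set(reviewer_a) | set(reviewer_b)
    p_expected = sum(
        (reviewer_a.count(lab) / n) * (reviewer_b.count(lab) / n)
        for lab in labels
    )
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical example: 8 abstracts screened independently by both reviewers.
a = ["include", "include", "exclude", "exclude",
     "exclude", "include", "exclude", "exclude"]
b = ["include", "exclude", "exclude", "exclude",
     "exclude", "include", "exclude", "exclude"]
kappa = cohens_kappa(a, b)
print(f"kappa = {kappa:.3f}")  # 7/8 observed agreement, corrected for chance
```

A kappa near 1 indicates the single screener and the calibrating reviewer apply the eligibility criteria consistently; lower values would prompt another calibration round before full screening.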
This overview identified limited SR-level evidence on the various methodological approaches currently employed during five of the seven fundamental steps in the SR process. Limited evidence was also identified on some methodological modifications currently used to expedite the SR process. Overall, the findings highlight the dearth of SRs on SR methodology, warranting further work to confirm several current recommendations on conventional and expedited SR processes.
The authors declare no conflicts of interest.
APPENDIX A: Detailed search strategies
The first author is supported by a La Trobe University Full Fee Research Scholarship and a Graduate Research Scholarship.
Open Access Funding provided by La Trobe University.
Veginadu P, Calache H, Gussy M, Pandian A, Masood M. An overview of methodological approaches in systematic reviews. J Evid Based Med. 2022;15:39–54. doi:10.1111/jebm.12468
Internet finance has permeated myriad households, bringing lifestyle convenience alongside potential risks. Internet finance enterprises are increasingly adopting machine learning and other artificial intelligence methods for risk alerting. What is the current status of the application of various machine learning models and algorithms across different institutions? Is there an optimal machine learning algorithm suited to the majority of internet finance platforms and application scenarios? Scholars have embarked on a series of studies addressing these questions; however, the focus predominantly lies in comparing different algorithms within specific platforms and contexts, and a comprehensive discussion and summary of the use of machine learning in this domain is lacking. Thus, drawing on data from the Web of Science and Scopus databases, this paper conducts a systematic literature review of machine learning in internet finance risk in recent years, covering publication trends, geographical distribution, literature focus, machine learning models and algorithms, and evaluations. The research reveals that machine learning, as a nascent technology, whether through basic algorithms or intricate algorithmic combinations, has made significant strides compared with traditional credit scoring methods in prediction accuracy, time efficiency, and robustness in internet finance risk management. Nonetheless, there are noticeable disparities among algorithms, and factors such as model structure, sample data, and parameter settings also influence prediction accuracy, although updated algorithms generally tend to achieve higher accuracy.
Consequently, there is no one-size-fits-all approach applicable to all platforms; each platform should refine its machine learning models and algorithms based on its unique characteristics, its data, and the development of AI technology, guided by key evaluation indicators, to mitigate internet finance risks.
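The kind of algorithm comparison this review surveys can be illustrated with a small sketch. Everything below is an assumption for demonstration: the dataset is synthetic, the two models (a logistic-regression baseline versus gradient boosting) and the metrics (accuracy and AUC) are common choices in the credit-scoring literature rather than details from any particular study, and scikit-learn is assumed to be available.

```python
# Minimal sketch: compare two classifiers on a synthetic "default" dataset,
# scoring them on two common evaluation indicators (accuracy and AUC).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score, roc_auc_score

# Synthetic "borrower" data: 20 numeric features, imbalanced binary label
# (roughly 15% defaults), mimicking a typical credit-risk setup.
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.85, 0.15], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

results = {}
for name, model in [("logistic", LogisticRegression(max_iter=1000)),
                    ("boosting", GradientBoostingClassifier(random_state=0))]:
    model.fit(X_tr, y_tr)
    proba = model.predict_proba(X_te)[:, 1]
    results[name] = {"accuracy": accuracy_score(y_te, model.predict(X_te)),
                     "auc": roc_auc_score(y_te, proba)}

for name, scores in results.items():
    print(f"{name}: accuracy={scores['accuracy']:.3f} auc={scores['auc']:.3f}")
```

On real platform data, the same loop would be extended with the platform's own feature set, cross-validation, and the evaluation indicators it prioritizes, which is exactly the platform-specific tuning the paragraph above argues for.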
More than 100 reference examples and their corresponding in-text citations are presented in the seventh edition Publication Manual. Examples of the most common works that writers cite are provided on this page; additional examples are available in the Publication Manual.
To find the reference example you need, first select a category (e.g., periodicals), then choose the appropriate type of work (e.g., journal article) and follow the relevant example.
When selecting a category, use the webpages and websites category only when a work does not fit better within another category. For example, a report from a government website would use the reports category, whereas a page on a government website that is not a report or other work would use the webpages and websites category.
Also note that print and electronic references are largely the same. For example, to cite both print books and ebooks, use the books and reference works category, then choose the appropriate type of work (i.e., book) and follow the relevant example (e.g., whole authored book).
Examples on these pages illustrate the details of reference formats. We make every attempt to show examples that are in keeping with APA Style's guiding principles of inclusivity and bias-free language. These examples are presented out of context only to demonstrate formatting issues (e.g., which elements to italicize, where punctuation is needed, placement of parentheses). References, including these examples, are not inherently endorsements of the ideas or content of the works themselves. An author may cite a work to support a statement or an idea, to critique that work, or for many other reasons. For more examples, see our sample papers.
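As a toy illustration of the journal-article pattern these pages describe, the hypothetical helper below assembles a plain-text reference from its parts. It is an assumption-laden sketch, not an APA tool: italics are omitted because plain strings cannot carry them, the field order follows the common author–date–title–source template, and the `issue` parameter is optional because the citation earlier in this document lists no issue number.

```python
def apa_journal_reference(authors, year, title, journal, volume,
                          pages, doi, issue=None):
    """Assemble a plain-text approximation of a journal-article reference."""
    # Issue number, when present, is appended to the volume in parentheses.
    volume_part = f"{volume}({issue})" if issue is not None else str(volume)
    return (f"{authors} ({year}). {title}. {journal}, "
            f"{volume_part}, {pages}. https://doi.org/{doi}")

# Using the reference cited earlier in this document:
ref = apa_journal_reference(
    authors="Veginadu, P., Calache, H., Gussy, M., Pandian, A., & Masood, M.",
    year=2022,
    title="An overview of methodological approaches in systematic reviews",
    journal="Journal of Evidence-Based Medicine",
    volume=15,
    pages="39–54",
    doi="10.1111/jebm.12468",
)
print(ref)
```

For real manuscripts, the Publication Manual's examples (and a reference manager) remain the authority; a formatter like this only captures the element order, not the italics and edge cases the manual covers.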
Reference examples are covered in the seventh edition APA Style manuals in Chapter 10 of the Publication Manual and Chapter 10 of the Concise Guide.
Textual works are covered in Sections 10.1–10.8 of the Publication Manual. The most common categories and examples are presented here. For the reviews of other works category, see Section 10.7.
Data sets are covered in Section 10.9 of the Publication Manual. For the software and tests categories, see Sections 10.10 and 10.11.
Audiovisual media are covered in Sections 10.12–10.14 of the Publication Manual. The most common examples are presented together here. In the manual, these examples and more are separated into categories for audiovisual, audio, and visual media.
Online media are covered in Sections 10.15 and 10.16 of the Publication Manual. Please note that blog posts are part of the periodicals category.