
5 software tools to support your systematic review processes

By Dr. Mina Kalantar on 19-Jan-2021 13:01:01


Systematic reviews are a structured re-evaluation of the scholarly literature to support decision making. This methodical approach to re-evaluating evidence was first applied in healthcare, to set policies, create guidelines and answer medical questions.

Systematic reviews are large, complex projects and, depending on the purpose, they can be quite expensive to conduct. A team of researchers, data analysts and subject experts may collaborate to review and examine very large numbers of research articles for evidence synthesis. Depending on their scope, systematic reviews often take at least 6 months, and sometimes upwards of 18 months, to complete.

The main principles of transparency and reproducibility require a pragmatic approach in the organisation of the required research activities and detailed documentation of the outcomes. As a result, many software tools have been developed to help researchers with some of the tedious tasks required as part of the systematic review process.


The first generation of these software tools was built to manage collaboration, but the tools gradually developed to help with screening literature and reporting outcomes. Some packages were initially designed for medical and healthcare studies and have specific protocols and customised steps integrated for various types of systematic review. Others are designed for general processing and, as the systematic review approach has been extended to other fields, they are increasingly adopted in software engineering, health-related nutrition, agriculture, environmental science, the social sciences and education.

Software tools

There are various free and subscription-based tools to help with conducting a systematic review. Many are designed to assist with key stages of the process, including title and abstract screening, data synthesis, and critical appraisal. Others facilitate the entire review process, from protocol development through to reporting of the outcomes, helping teams complete projects faster.

Over time, more functions have been integrated into these tools. Technological advances have allowed for more sophisticated and user-friendly features, including visual graphics for pattern recognition and for linking multiple concepts. The idea is to digitise the cumbersome parts of the process and so increase efficiency, allowing researchers to focus their time and effort on assessing the rigour and robustness of the research articles.

This article introduces commonly used systematic review tools that are relevant to food research and related disciplines, where they can be applied in much the same way as in healthcare.

These reviews are based on IFIS' internal research; IFIS is not affiliated with any of the companies mentioned.


Covidence

This online platform is a core component of the Cochrane toolkit, supporting parts of the systematic review process, including title/abstract and full-text screening, documentation, and reporting.

The Covidence platform enables collaboration of the entire systematic reviews team and is suitable for researchers and students at all levels of experience.

From a user perspective, the interface is intuitive, and the citation screening is directed step-by-step through a well-defined workflow. Imports and exports are straightforward, with easy export options to Excel and CSV.

Access is free for Cochrane authors (a single reviewer), and Cochrane provides a free trial to other researchers in healthcare. Universities can also subscribe on an institutional basis.

Rayyan

Rayyan is a free, open access, web-based platform funded by the Qatar Foundation, a non-profit organisation supporting education and community development initiatives. Rayyan is used to screen and code literature through the systematic review process.

Unlike Covidence, Rayyan does not follow a standard SR workflow and simply helps with citation screening. It is accessible through a mobile application with compatibility for offline screening. The web-based platform is known for its accessible user interface, with easy and clear export options.

Function comparison of 5 software tools to support the systematic review process

EPPI-Reviewer

EPPI-Reviewer is a web-based software programme developed by the Evidence for Policy and Practice Information and Co-ordinating Centre (EPPI-Centre) at the UCL Institute of Education, London.

It provides comprehensive functionality for coding and screening. Users can create different levels of coding in a code-set tool for clustering, screening, and administration of documents. EPPI-Reviewer allows direct search of and import from PubMed, and search results from other databases can be imported in a range of formats. It stores references, and identifies and removes duplicates automatically. EPPI-Reviewer also supports full-text screening, text mining, meta-analysis and the export of data into different types of reports.
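
To give a concrete sense of what direct PubMed search and import involve, here is a minimal, illustrative Python sketch against NCBI's public E-utilities API (using the requests library); the query term and result limit are invented for the example, and this is not EPPI-Reviewer's internal code:

```python
# Minimal sketch of a PubMed search via NCBI E-utilities -- the kind of
# call a "direct search" feature wraps. Query and limits are illustrative.
import requests

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def search_pubmed(term: str, retmax: int = 20) -> list[str]:
    """Return a list of PubMed IDs (PMIDs) matching the search term."""
    resp = requests.get(
        f"{EUTILS}/esearch.fcgi",
        params={"db": "pubmed", "term": term, "retmax": retmax, "retmode": "json"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["esearchresult"]["idlist"]

def fetch_summaries(pmids: list[str]) -> dict:
    """Fetch article summaries (title, journal, date) for the given PMIDs."""
    resp = requests.get(
        f"{EUTILS}/esummary.fcgi",
        params={"db": "pubmed", "id": ",".join(pmids), "retmode": "json"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["result"]

if __name__ == "__main__":
    ids = search_pubmed("food allergy systematic review", retmax=5)
    for uid, record in fetch_summaries(ids).items():
        if uid != "uids":  # esummary includes a bookkeeping key
            print(uid, record["title"])
```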

There is no limit on concurrent users or on the number of articles under review. Cochrane reviewers can access EPPI-Reviewer using their Cochrane subscription details.

EPPI-Centre has other tools for facilitating the systematic review process, including coding guidelines and data management tools.

CADIMA

CADIMA is a free, open access, online review management tool, developed to facilitate research synthesis and structure the documentation of outcomes.

The Julius Kühn Institute and the Collaboration for Environmental Evidence established the software programme to support and guide users through the entire systematic review process, including protocol development, literature searching, study selection, critical appraisal, and documentation of the outcomes. The flexibility in choosing the steps also makes CADIMA suitable for conducting systematic mapping and rapid reviews.

CADIMA was initially developed for research questions in agriculture and environmental science, but it is not limited to these and can be used for managing review processes in other disciplines. It enables users to export files and work offline.

The software allows for statistical analysis of the collated data using the R statistical software. Unlike EPPI-Reviewer, CADIMA does not have a built-in search engine to allow for searching in literature databases like PubMed.
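
For readers unfamiliar with what this statistical step typically computes, the sketch below shows a fixed-effect (inverse-variance) pooled estimate, one of the basic calculations in a meta-analysis. CADIMA delegates such analyses to R; this Python version and its effect sizes are purely illustrative:

```python
# Illustrative fixed-effect (inverse-variance) meta-analysis -- the basic
# calculation CADIMA delegates to R. Effect sizes and SEs are invented.
import math

def fixed_effect_pool(effects, std_errors):
    """Pool study effect sizes with inverse-variance weights."""
    weights = [1.0 / se**2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
    return pooled, pooled_se, ci

# Three hypothetical studies: log risk ratios and their standard errors.
pooled, se, (lo, hi) = fixed_effect_pool([0.10, 0.25, 0.18], [0.08, 0.12, 0.10])
print(f"pooled effect = {pooled:.3f} (95% CI {lo:.3f} to {hi:.3f})")
```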

DistillerSR

DistillerSR is an online software platform maintained by the Canadian company Evidence Partners, which specialises in literature review automation. DistillerSR provides a collaborative platform for every stage of literature review management. The framework is flexible and can accommodate literature reviews of different sizes, and it is configurable to different data curation procedures, workflows and reporting standards. The platform integrates the necessary features for screening, quality assessment, data extraction and reporting.

The software uses artificial intelligence (AI)-enabled technologies for priority screening, which shortens the screening process by reranking the most relevant references towards the top of the queue. AI can also act as a second reviewer, performing quality control checks on studies screened by human reviewers. DistillerSR is used to manage systematic reviews in various medical disciplines, surveillance, pharmacovigilance and public health reviews, including food and nutrition topics. The software does not support statistical analyses, but it provides configurable forms in standard formats for data extraction.
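
As a rough sketch of how priority screening of this kind generally works (not DistillerSR's actual model), a classifier can be trained on the title/abstract decisions made so far and used to rerank the unscreened queue; the records below are invented, and scikit-learn stands in for the proprietary AI:

```python
# Sketch of AI priority screening: learn from the titles/abstracts already
# screened, then rerank unscreened references so the likeliest includes
# surface first. Illustrates the general technique only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

screened_texts = [
    "randomised trial of vitamin D supplementation in adults",
    "case report of a rare metabolic disorder",
    "cohort study of dietary fibre and cardiovascular outcomes",
    "editorial commentary on nutrition policy",
]
screened_labels = [1, 0, 1, 0]  # 1 = included, 0 = excluded

unscreened_texts = [
    "meta-analysis of omega-3 intake and blood pressure",
    "letter to the editor about conference logistics",
]

vectoriser = TfidfVectorizer()
X_train = vectoriser.fit_transform(screened_texts)
model = LogisticRegression().fit(X_train, screened_labels)

# Probability of inclusion for each unscreened record, highest first.
scores = model.predict_proba(vectoriser.transform(unscreened_texts))[:, 1]
for score, text in sorted(zip(scores, unscreened_texts), reverse=True):
    print(f"{score:.2f}  {text}")
```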

DistillerSR allows direct search and import of references from PubMed. It also provides an add-on feature called LitConnect, which can be set to automatically import newly published references from data providers, keeping reviews up to date as they progress.

The Systematic Review Toolbox is a web-based catalogue of tools, including software packages, which can assist with single or multiple tasks within the evidence synthesis process. Researchers can run a quick search or tailor a more sophisticated one by choosing their approach, budget, discipline, and preferred support features to find the right tools for their research.

If you enjoyed this blog post, you may also be interested in our recently published blog post addressing the difference between a systematic review and a systematic literature review.



Rayyan

COLLABORATE ON YOUR REVIEWS WITH ANYONE, ANYWHERE, ANYTIME

Rayyan for students

Save precious time and maximize your productivity with a Rayyan membership. Receive training, priority support, and access features to complete your systematic reviews efficiently.

Rayyan for Librarians

Rayyan Teams+ makes your job easier. It includes VIP Support, AI-powered in-app help, and powerful tools to create, share and organize systematic reviews, review teams, searches, and full-texts.

Rayyan for Researchers


Rayyan makes collaborative systematic reviews faster, easier, and more convenient. Training, VIP support, and access to new features maximize your productivity. Get started now!

Over 1 billion reference articles reviewed by research teams, and counting...

Intelligent, scalable and intuitive.

Rayyan understands language, learns from your decisions and helps you work quickly through even your largest systematic literature reviews.


Solutions for Organizations and Businesses


Rayyan Enterprise and Rayyan Teams+ make it faster, easier and more convenient for you to manage your research process across your organization.

  • Accelerate your research across your team or organization and save valuable researcher time.
  • Build and preserve institutional assets, including literature searches, systematic reviews, and full-text articles.
  • Onboard team members quickly with access to group trainings for beginners and experts.
  • Receive priority support to stay productive when questions arise.

RAYYAN SYSTEMATIC LITERATURE REVIEW OVERVIEW


LEARN ABOUT RAYYAN’S PICO HIGHLIGHTS AND FILTERS


Join now to learn why Rayyan is already trusted by more than 500,000 researchers

Individual plans and team plans.

For early career researchers just getting started with research.

Free forever

  • 3 Active Reviews
  • Invite Unlimited Reviewers
  • Import Directly from Mendeley
  • Industry Leading De-Duplication
  • 5-Star Relevance Ranking
  • Advanced Filtration Facets
  • Mobile App Access
  • 100 Decisions on Mobile App
  • Standard Support
  • Revoke Reviewer
  • Online Training
  • PICO Highlights & Filters
  • PRISMA (Beta)
  • Auto-Resolver 
  • Multiple Teams & Management Roles
  • Monitor & Manage Users, Searches, Reviews, Full Texts
  • Onboarding and Regular Training

Professional

For researchers who want more tools for research acceleration.

Per month billed annually

  • Unlimited Active Reviews
  • Unlimited Decisions on Mobile App
  • Priority Support
  • Auto-Resolver

For currently enrolled students with valid student ID.

Per month billed annually

Billed monthly

For a team that wants professional licenses for all members.

Per-user, per month, billed annually

  • Single Team
  • High Priority Support

For teams that want support and advanced tools for members.

  • Multiple Teams
  • Management Roles

For organizations who want access to all of their members.

Annual Subscription

Contact Sales

  • Organizational Ownership
  • For an organization or a company
  • Access to all the premium features such as PICO Filters, Auto-Resolver, PRISMA and Mobile App
  • Store and Reuse Searches and Full Texts
  • A management console to view, organize and manage users, teams, review projects, searches and full texts
  • Highest tier of support – Support via email, chat and AI-powered in-app help
  • GDPR Compliant
  • Single Sign-On
  • API Integration
  • Training for Experts
  • Training Sessions for Students Each Semester
  • More options for secure access control

ANNUAL ONLY

Per-user, billed monthly

Rayyan Subscription

Membership starts with 2 users; you can select the number of additional members you'd like to add to your membership.


Great usability and functionality. Rayyan has saved me countless hours. I even received timely feedback from staff when I did not understand the capabilities of the system, and was pleasantly surprised with the time they dedicated to my problem. Thanks again!

This is a great piece of software. It has made the independent viewing process so much quicker. The whole thing is very intuitive.

Rayyan makes ordering articles and extracting data very easy. A great tool for undertaking literature and systematic reviews!

Excellent interface to do title and abstract screening. Also helps to keep track of the reasons for exclusion from the review, in a blinded manner.

Rayyan is a fantastic tool to save time and improve systematic reviews!!! It has changed my life as a researcher!!! thanks

Easy to use, friendly, has everything you need for cooperative work on the systematic review.

Rayyan makes life easy in every way when conducting a systematic review and it is easy to use.


The World's #1 Systematic Review Tool

Covidence

See your systematic reviews like never before

Faster reviews.

An average 35% reduction in time spent per review, saving an average of 71 hours per review.

Expert, online support

Easy to learn and use, with 24/7 support from product experts who are also seasoned reviewers!

Seamless collaboration

Enable the whole review team to collaborate from anywhere.

Suits all levels of experience and sectors

Suitable for reviewers in a variety of sectors including health, education, social science and many others.

Supporting the world's largest systematic review community

See how it works.

Step inside Covidence to see a more intuitive, streamlined way to manage systematic reviews.

Unlimited use for every organization

With no restrictions on reviews and users, Covidence gets out of the way so you can bring the best evidence to the world, more quickly.

Covidence is used by world-leading evidence organizations

Whether you’re an academic institution, a hospital or society, Covidence is working for organizations like yours right now.

See a list of organizations already using Covidence →


How Covidence has enabled living guidelines for Australians impacted by stroke

Clinical guidelines took 7 years to update prior to moving to a living evidence approach. Learn how Covidence streamlined workflows and created real time savings for the guidelines team.

University of Ottawa Drives Systematic Review Excellence Across Many Academic Disciplines



Top Ranked U.S. Teaching Hospital Delivers Effective Systematic Review Management




Better systematic review management


Find out why over 350 of the world’s leading institutions are seeing a surge in publications since using Covidence!

Request a consultation with one of our team members and start empowering your researchers.


Cochrane RevMan

RevMan: a systematic review and meta-analysis tool for researchers worldwide.

Pricing & subscription

Intuitive and easy to use

High-quality support and guidance; promotes high-quality research; supports collaboration on the same project.

Systematic Reviews and Meta-Analysis

  • Getting Started
  • Guides and Standards
  • Review Protocols
  • Databases and Sources
  • Randomized Controlled Trials
  • Controlled Clinical Trials
  • Observational Designs
  • Tests of Diagnostic Accuracy

Software and Tools

  • Where do I get all those articles?
  • Collaborations
  • EPI 233/528
  • Countway Mediated Search
  • Risk of Bias (RoB)

Covidence is a web-based tool for managing the review workflow. Tools for screening records, managing full-text articles, and extracting data make the process much less burdensome. Covidence is currently available to Harvard investigators with an hms.harvard.edu, hsdm.harvard.edu, or hsph.harvard.edu email address. To make use of Harvard's institutional account:

  • If you don't already have a Covidence account, sign up for one at https://www.covidence.org/signups/new. Make sure you use your hms, hsph, or hsdm Harvard email address.
  • Then associate your account with Harvard's institutional access at https://www.covidence.org/organizations/58RXa/signup. Use the same address you used in step 1 and follow the instructions in the resulting email.
  • To set up a project, go to your account dashboard page and click the 'Start a new review' button. Make sure you choose "Harvard University Libraries" under "which account" on the project creation page.

Rayyan is an alternative review manager that has a free option. It has ranking and sorting options lacking in Covidence but takes more time to learn. We do not provide support for Rayyan.

Other Review Software Systems

There are a number of tools available to help a team manage the systematic review process. Notable examples include EPPI-Reviewer, DistillerSR, and PICO Portal. These are subscription-based services, but in some cases they offer a trial project. Use the Systematic Review Toolbox to explore more options.

Citation Managers

Citation managers like EndNote or Zotero can be used to collect, manage and de-duplicate bibliographic records and full-text documents, but they are considerably more painful to use than specialized systematic review applications. Of course, they are handy for writing up your report.
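
As a rough illustration of what de-duplication involves (not the actual matching logic of EndNote, Zotero, or any review platform), bibliographic records can be keyed on a normalised DOI or title:

```python
# Rough sketch of bibliographic de-duplication: key each record on its DOI
# when present, else on a normalised title. Real tools use richer matching;
# the records here are invented examples.
import re

def dedupe_key(record: dict) -> str:
    if record.get("doi"):
        return record["doi"].strip().lower()
    # Normalise the title: lowercase, drop punctuation, collapse whitespace.
    title = re.sub(r"[^a-z0-9 ]", "", record["title"].lower())
    return re.sub(r"\s+", " ", title).strip()

def deduplicate(records: list[dict]) -> list[dict]:
    seen, unique = set(), []
    for record in records:
        key = dedupe_key(record)
        if key not in seen:
            seen.add(key)
            unique.append(record)
    return unique

records = [
    {"title": "Sleep and Diet: A Review", "doi": "10.1000/xyz123"},
    {"title": "Sleep and diet: a review.", "doi": "10.1000/XYZ123"},  # same DOI
    {"title": "An Unrelated Study", "doi": None},
]
print(len(deduplicate(records)), "unique records")  # -> 2
```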

Need more, or looking for alternatives? See the SR Toolbox, a searchable database of tools to support systematic reviews and meta-analysis.



Accelerate your research with the best systematic literature review tools

The ideal literature review tool helps you make sense of the most important insights in your research field. ATLAS.ti empowers researchers to perform powerful and collaborative analysis using the leading software for literature review.


Finalize your literature review faster and with ease

ATLAS.ti makes it easy to manage, organize, and analyze articles, PDFs, excerpts, and more for your projects. Conduct a deep systematic literature review and get the insights you need with a comprehensive toolset built specifically for your research projects.


Figure out the "why" behind your participants' motivations

Understand the behaviors and emotions that are driving your focus group participants. With ATLAS.ti, you can transform your raw data and turn it into qualitative insights you can learn from. Easily determine user intent in the same spot you're deciphering your overall focus group data.


Visualize your research findings like never before

We make it simple to present your analysis results with meaningful charts, networks, and diagrams. Instead of figuring out how to communicate the insights you just unlocked, we enable you to leverage easy-to-use visualizations that support your goals.


Everything you need to elevate your literature review

Import and organize literature data

Import and analyze any type of text content – ATLAS.ti supports all standard text and transcription files such as Word and PDF.

Analyze with ease and speed

Utilize easy-to-learn workflows that save valuable time, such as auto coding, sentiment analysis, team collaboration, and more.

Leverage AI-driven tools

Make efficiency a priority and let ATLAS.ti do your work with AI-powered research tools and features for faster results.

Visualize and present findings

With just a few clicks, you can create meaningful visualizations like charts, word clouds, tables, and networks for your literature data.

The faster way to make sense of your literature review. Try it for free, today.

A literature review analyzes the most current research within a research area. It consists of published studies from many sources:

  • Peer-reviewed academic publications
  • Full-length books
  • University bulletins
  • Conference proceedings
  • Dissertations and theses

Literature reviews allow researchers to:

  • Summarize the state of the research
  • Identify unexplored research inquiries
  • Recommend practical applications
  • Critique currently published research

Literature reviews are either standalone publications or part of a paper as background for an original research project. A literature review, as a section of a more extensive research article, summarizes the current state of the research to justify the primary research described in the paper.

For example, a researcher may have reviewed the literature on a new supplement's health benefits and concluded that more research needs to be conducted on those with a particular condition. This research gap warrants a study examining how this understudied population reacted to the supplement. Researchers need to establish this research gap through a literature review to persuade journal editors and reviewers of the value of their research.

Consider a literature review as being like a typical research publication presenting a study, its results, and the salient points scholars can infer from the study. The only significant difference is that a literature review treats existing literature as the research data to collect and analyze. From that analysis, a literature review can suggest new inquiries to pursue.

Identify a focus

Similar to a typical study, a literature review should have a research question or questions that analysis can answer. This sort of inquiry typically targets a particular phenomenon, population, or even research method to examine how different studies have looked at the same thing differently. A literature review, then, should center the literature collection around that focus.

Collect and analyze the literature

With a focus in mind, a researcher can collect studies that provide relevant information for that focus. They can then analyze the collected studies by finding and identifying patterns or themes that occur frequently. This analysis allows the researcher to point out what the field has frequently explored or, on the other hand, overlooked.

Suggest implications

The literature review allows the researcher to argue a particular point through the evidence provided by the analysis. For example, suppose the analysis makes it apparent that the published research on people's sleep patterns has not adequately explored the connection between sleep and a particular factor (e.g., television-watching habits, indoor air quality). In that case, the researcher can argue that further study can address this research gap.

External requirements aside (e.g., many academic journals have a word limit of 6,000-8,000 words), a literature review as a standalone publication is as long as necessary to allow readers to understand the current state of the field. Even if it is just a section in a larger paper, a literature review is long enough to allow the researcher to justify the study that is the paper's focus.

Note that a literature review needs only to incorporate a representative number of studies relevant to the research inquiry. For term papers in university courses, 10 to 20 references might be appropriate for demonstrating analytical skills. Published literature reviews in peer-reviewed journals might have 40 to 50 references. One of the essential goals of a literature review is to persuade readers that you have analyzed a representative segment of the research you are reviewing.

Researchers can find published research from various online sources:

  • Journal websites
  • Research databases
  • Search engines (Google Scholar, Semantic Scholar)
  • Research repositories
  • Social networking sites (Academia, ResearchGate)

Many journals make articles freely available under the term "open access," meaning that there are no restrictions to viewing and downloading such articles. Otherwise, collecting research articles from restricted journals usually requires access from an institution such as a university or a library.

Evidence of a rigorous literature review is more important than the word count or the number of articles that undergo data analysis. Especially when writing for a peer-reviewed journal, it is essential to consider how to demonstrate research rigor in your literature review to persuade reviewers of its scholarly value.

Select field-specific journals

The most significant research relevant to your field tends to be concentrated in a narrow set of journals with similar aims and scope. Consider who the most prominent scholars in your field are and determine which journals publish their research or have them as editors or reviewers. Journals tend to look favorably on systematic reviews that include articles they have published.

Incorporate recent research

Recently published studies have greater value in determining the gaps in the current state of research. Older research is likely to have encountered challenges and critiques that may render its findings outdated or refuted. What counts as recent differs by field; start by looking for research published within the last three years and gradually expand to older research when you need to collect more articles for your review.

Consider the quality of the research

Literature reviews are only as strong as the quality of the studies that the researcher collects. You can judge any particular study by many factors, including:

  • the quality of the article's journal
  • the article's research rigor
  • the timeliness of the research

The critical point here is that you should consider more than just a study's findings or research outputs when including research in your literature review.

Narrow your research focus

Ideally, the articles you collect for your literature review have something in common, such as a research method or research context. For example, if you are conducting a literature review about teaching practices in high school contexts, it is best to narrow your literature search to studies focusing on high school. You should consider expanding your search to junior high school and university contexts only when there are not enough studies that match your focus.

You can create a project in ATLAS.ti for keeping track of your collected literature. ATLAS.ti allows you to view and analyze full text articles and PDF files in a single project. Within projects, you can use document groups to separate studies into different categories for easier and faster analysis.

For example, a researcher with a literature review that examines studies across different countries can create document groups labeled "United Kingdom," "Germany," and "United States," among others. A researcher can also use ATLAS.ti's global filters to narrow analysis to a particular set of studies and gain insights about a smaller set of literature.

ATLAS.ti allows you to search, code, and analyze text documents and PDF files. You can treat a set of research articles like other forms of qualitative data. The codes you apply to your literature collection allow for analysis through many powerful tools in ATLAS.ti (a minimal sketch of the co-occurrence idea follows this list):

  • Code Co-Occurrence Explorer
  • Code Co-Occurrence Table
  • Code-Document Table
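
Here is that sketch of the co-occurrence idea behind the first two tools, using invented codes and documents; ATLAS.ti computes this (and per-segment variants) for you:

```python
# Minimal sketch of a code co-occurrence count: how often two codes are
# applied to the same document. Codes and documents are invented examples.
from collections import Counter
from itertools import combinations

# Document -> set of codes applied to it.
coded_documents = {
    "smith_2020.pdf": {"sleep quality", "screen time", "adolescents"},
    "lee_2021.pdf": {"sleep quality", "screen time"},
    "park_2019.pdf": {"sleep quality", "air quality"},
}

cooccurrence = Counter()
for codes in coded_documents.values():
    for pair in combinations(sorted(codes), 2):
        cooccurrence[pair] += 1

for (code_a, code_b), count in cooccurrence.most_common():
    print(f"{code_a} x {code_b}: {count}")
```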

Other tools in ATLAS.ti employ machine learning to facilitate parts of the coding process for you. Some of our software tools that are effective for analyzing literature include:

  • Named Entity Recognition
  • Opinion Mining
  • Sentiment Analysis

As long as your documents are text documents or text-enabled PDF files, ATLAS.ti's automated tools can provide essential assistance in the data analysis process.

JMIR Med Inform. 2022 May; 10(5)

Web-Based Software Tools for Systematic Literature Review in Medicine: Systematic Search and Feature Analysis

Kathryn Cowie, Asad Rahmatullah, Nicole Hardy, Kevin Kallmes

Nested Knowledge, Saint Paul, MN, United States

Associated Data

Supplementary Table 1: Screening Decisions for SR (systematic review) Tools Reviewed in Full.

Supplementary Table 2: Inter-observer Agreement across (1) Systematic Review (SR) Tools and (2) Features Assessed.

Background

Systematic reviews (SRs) are central to evaluating therapies but have high costs in terms of both time and money. Many software tools exist to assist with SRs, but most tools do not support the full process, and transparency and replicability of SR depend on performing and presenting evidence according to established best practices.

Objective

This study aims to provide a basis for comparing and selecting between web-based software tools that support SR, by conducting a feature-by-feature comparison of SR tools.

Methods

We searched for SR tools by reviewing any such tool listed in the SR Toolbox, previous reviews of SR tools, and qualitative Google searching. We included all SR tools that were currently functional and required no coding, and excluded reference managers, desktop applications, and statistical software. The list of features to assess was populated by combining all features assessed in 4 previous reviews of SR tools; we also added 5 features (manual addition, screening automation, dual extraction, living review, and public outputs) that were independently noted as best practices or enhancements of transparency and replicability. Then, 2 reviewers assigned binary present or absent assessments to all SR tools with respect to all features, and a third reviewer adjudicated all disagreements.

Results

Of the 53 SR tools found, 55% (29/53) were excluded, leaving 45% (24/53) for assessment. In total, 30 features were assessed across 6 classes, and the interobserver agreement was 86.46%. Giotto Compliance (27/30, 90%), DistillerSR (26/30, 87%), and Nested Knowledge (26/30, 87%) support the most features, followed by EPPI-Reviewer Web (25/30, 83%), LitStream (23/30, 77%), JBI SUMARI (21/30, 70%), and SRDB.PRO (VTS Software) (21/30, 70%). Fewer than half of all the features assessed are supported by 7 tools: RobotAnalyst (National Centre for Text Mining), SRDR (Agency for Healthcare Research and Quality), SyRF (Systematic Review Facility), Data Abstraction Assistant (Center for Evidence Synthesis in Health), SR Accelerator (Institute for Evidence-Based Healthcare), RobotReviewer (RobotReviewer), and COVID-NMA (COVID-NMA). Notably, of the 24 tools, only 10 (42%) support direct search, only 7 (29%) offer dual extraction, and only 13 (54%) offer living/updatable reviews.

Conclusions

DistillerSR, Nested Knowledge, and EPPI-Reviewer Web each offer a high density of SR-focused web-based tools. By transparent comparison and discussion regarding SR tool functionality, the medical community can both choose among existing software offerings and note the areas of growth needed, most notably in the support of living reviews.

Introduction

Systematic Review Costs and Gaps

According to the Centre for Evidence-Based Medicine, systematic reviews (SRs) of high-quality primary studies represent the highest level of evidence for evaluating therapeutic performance [ 1 ]. However, although vital to evidence-based medical practice, SRs are time-intensive, taking an average of 67.3 weeks to complete [ 2 ] and costing leading research institutions over US $141,000 in labor per published review [ 3 ]. Owing to the high costs in researcher time and complexity, up-to-date reviews cover only 10% to 17% of primary evidence in a representative analysis of the lung cancer literature [ 4 ]. Although many qualitative and noncomprehensive publications provide some level of summative evidence, SRs—defined as reviews of “evidence on a clearly formulated question that use systematic and explicit methods to identify, select and critically appraise relevant primary research, and to extract and analyze data from the studies that are included” [ 5 ]—are distinguished by both their structured approach to finding, filtering, and extracting from underlying articles and the resulting comprehensiveness in answering a concrete medical question.

Software Tools for Systematic Review

Software tools that assist with central SR activities—retrieval (searching or importing records), appraisal (screening of records), synthesis (content extraction from underlying studies), and documentation/output (presentation of SR outputs)—have shown promise in reducing the amount of effort needed in a given review [ 6 ]. Because of the time savings of web-based software tools, institutions and individual researchers engaged in evidence synthesis may benefit from using these tools in the review process [ 7 ].

Existing Studies of Software Tools

However, choosing among the existing software tools presents a further challenge to researchers; in the SR Toolbox [ 8 ], there are >240 tools indexed, of which 224 support health care reviews. Vitally, few of these tools can be used for each of the steps of SR, so comparing the features available through each tool can assist researchers in selecting an SR tool to use. This selection can be informed by feature analysis; for example, a previously published feature analysis compared 15 SR tools [ 9 ] across 21 subfeatures of interest and found that DistillerSR (Evidence Partners), EPPI-Reviewer (EPPI-Centre), SWIFT-Active Screener (Sciome), and Covidence (Cochrane) support the greatest number of features as of 2019. Harrison et al [ 10 ], Marshall et al [ 11 ], and Kohl et al [ 12 ] have completed similar analyses, but each feature assessment selected a different set of features and used different qualitative feature assessment methods, and none covered all SR tools currently available.

The SR tool landscape continues to evolve; as existing tools are updated, new software is made available to researchers, and new feature classes are developed. For instance, despite the growth of calls for living SRs, that is, reviews where the outputs are updated as new primary evidence becomes available, no feature analysis has yet covered this novel capability. Furthermore, the leading feature analyses [ 9 - 12 ] have focused on the screening phase of review, meaning that no comparison of data extraction capabilities has yet been published.

Feature Analysis of Systematic Review Tools

The authors, who are also the developers of the Nested Knowledge platform for SR and meta-analysis (Nested Knowledge, Inc) [ 13 ], have noted the lack of SR feature comparison among new tools and across all feature classes (retrieval, appraisal, synthesis, documentation/output, administration of reviews, and access/support features). To provide an updated feature analysis comparing SR software tools, we performed a feature analysis covering the full life cycle of SR across software tools.

Search Strategy

We searched for SR tools to assess in 3 ways: first, we identified any SR tool that was published in existing reviews of SR tools (Table S1 in Multimedia Appendix 1). Second, we reviewed the SR Toolbox [ 8 ], a repository of indexed software tools that support the SR process. Third, we performed a Google search for systematic review software and identified any software tool that was among the first 5 pages of results. Furthermore, for any library resource pages that were among the search results, we included any SR tools mentioned by the library resource page that met our inclusion criteria. The search was completed between June and August 2021. Four additional tools, namely SRDR+ (Agency for Healthcare Research and Quality), Systematic Review Assistant-Deduplication Module (Institute for Evidence-Based Healthcare), Giotto Compliance, and Robotsearch (Robotsearch), were assessed in December 2021 following reviewer feedback.

Selection of Software Tools

The inclusion and exclusion criteria were determined by 3 authors (KK, KH, and KC). Among our search results, we queued up all software tools that had descriptions meeting our inclusion criteria for full examination of the software in a second round of review. We included any that were functioning web-based tools that require no coding by the user to install or operate, so long as they were used to support the SR process and can be used to review clinical or preclinical literature. The no coding requirement was established because the target audience of this review is medical researchers who are selecting a review software to use; thus, we aim to review only tools that this broad audience is likely to be able to adopt. We also excluded desktop applications, statistical packages, and tools built for reviewing software engineering and social sciences literature, as well as reference managers, to avoid unfairly casting these tools as incomplete review tools (as they would each score quite low in features that are not related to reference management). All software tools were screened by one reviewer (KC), and inclusion decisions were reviewed by a second (KK).

Selection of Features of Interest

We built on the previous comparisons of SR tools published by Van der Mierden et al [ 9 ], Harrison et al [ 10 ], Marshall et al [ 11 ], and Kohl et al [ 12 ], which assign features a level of importance and evaluate each feature in reference screening tools. As the studies by Van der Mierden et al [ 9 ] and Harrison et al [ 10 ] focus on reference screening, we supplemented the features with features identified in related reviews of SR tools (Table S1 in Multimedia Appendix 1 ). From a study by Kohl et al [ 12 ], we added database search, risk of bias assessment (critical appraisal), and data visualization. From Marshall et al [ 11 ], we added report writing.

We added 6 more features based on their importance to software-based SR: manual addition of records, automated full-text retrieval, dual extraction of studies, risk of bias (critical appraisal), living SR, and public outputs. Each addition represents either a best practice in SR [ 14 ] or a key feature for the accuracy, replicability, and transparency of SR. Thus, in total, we assessed the presence or absence of 30 features across 6 categories: retrieval, appraisal, synthesis, documentation/output, administration/project management, and access/support.

We adopted each feature unless it was outside of the SR process, it was required for inclusion in the present review, it duplicated another feature, it was not a discrete step for comparison, it was not necessary for English language reviews, it was not necessary for a web-based software, or it related to reference management (as we excluded reference managers from the present review). Table 1 shows all features not assessed, with rationale.

Features from systematic reviews not assessed in this review, with rationale.

Feature Assessment

To minimize bias concerning the subjective assessment of the necessity or desirability of features or of the relative performance of features, we used a binary assessment where each SR tool was scored 0 if a given feature was not present or 1 if a feature was present. Tools were assessed between June and August 2021. We assessed 30 features, divided into 6 feature classes. Of the 30 features, 77% (23/30) were identified in existing literature, and 23% (7/30) were added by the authors ( Table 2 ).

The criteria for each selected feature, as well as the rationale.

a API: application programming interface.

b Rationale only provided for features added in this review; all other features were drawn from existing feature analyses of Systematic Review Software Tools.

c RIS: Research Information System.

d PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses.

e AI: artificial intelligence.

Evaluation of Tools

For tools with free versions available, each of the researchers created an account and tested the program to determine feature presence. We also referred to user guides, publications, and training tutorials. For proprietary software, we gathered information on feature offerings from marketing webpages, training materials, and video tutorials. We also contacted all proprietary software providers to give them the opportunity to comment on feature offerings that may have been left out of those materials. Of the 8 proprietary software providers contacted, 38% (3/8) did not respond, 50% (4/8) provided feedback on feature offerings, and 13% (1/8) declined to comment. When providers gave feedback, we re-reviewed the features in question and altered the assessment as appropriate. One provider gave feedback after initial publication, prompting the issuance of a correction.

Feature assessment was completed independently by 2 reviewers (KC and AR), and all disagreements were adjudicated by a third (KK). Interobserver agreement was calculated using standard methods [ 19 ] as applied to binary assessments. First, the 2 independent assessments were compared, and the number of disagreements was counted per feature, per software. For each feature, the total number of disagreements was counted and divided by the number of software tools assessed. This provided a per-feature variability percentage; these percentages were averaged across all features to provide a cumulative interobserver agreement percentage.
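
A minimal sketch of that calculation, assuming each reviewer's assessments are stored as a tool-by-feature matrix of 0s and 1s (the values below are invented toy data, not the study's assessments):

```python
# Sketch of the interobserver agreement calculation described above:
# count disagreements per feature across tools, convert to a per-feature
# agreement percentage, then average over features.
# Rows = tools, columns = features, values = 0/1 (invented toy data).
reviewer_a = [
    [1, 0, 1],
    [1, 1, 0],
    [0, 1, 1],
]
reviewer_b = [
    [1, 0, 0],
    [1, 1, 0],
    [0, 0, 1],
]

n_tools = len(reviewer_a)
n_features = len(reviewer_a[0])

per_feature_agreement = []
for f in range(n_features):
    disagreements = sum(
        reviewer_a[t][f] != reviewer_b[t][f] for t in range(n_tools)
    )
    per_feature_agreement.append(100.0 * (1 - disagreements / n_tools))

overall = sum(per_feature_agreement) / n_features
print("per-feature agreement:", [round(p, 1) for p in per_feature_agreement])
print(f"cumulative interobserver agreement: {overall:.2f}%")
```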

Identification of SR Tools

We reviewed all 240 software tools offered on the SR Toolbox and sent forward all tools that, based on the software descriptions, could meet our inclusion criteria; we then added in all software tools found through the Google search. This strategy yielded 53 software tools that were reviewed in full (Figure 1 shows the PRISMA [Preferred Reporting Items for Systematic Reviews and Meta-Analyses]-based chart). Of these 53 software tools, 55% (29/53) were excluded. Of the 29 excluded tools, 17% (5/29) were built to review software engineering literature, 10% (3/29) were not functional as of August 2021, 7% (2/29) were citation managers, and 7% (2/29) were statistical packages. Other excluded tools included tools not designed for SRs (6/29, 21%), desktop applications (4/29, 14%), tools requiring users to code (3/29, 10%), a search engine (1/29, 3%), and a social science literature review tool (1/29, 3%). One tool, Research Screener [ 20 ], was excluded owing to insufficient information available on supported features. Another tool, the Health Assessment Workspace Collaborative, was excluded because it is designed to assess chemical hazards.


PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses)-based chart showing the sources of all tools considered for inclusion, including 2-phase screening and reasons for all exclusions made at the full software review stage. SR: systematic review.

Overview of SR Tools

We assessed the presence of features in 24 software tools, of which 71% (17/24) are designed for health care or biomedical sciences. In addition, 63% (15/24) of the analyzed tools support the full SR process, meaning they enable search, screening, extraction, and export, as these are the basic capabilities necessary to complete a review in a single software tool. Furthermore, 21% (5/24) of the tools support only the screening stage ( Table 3 ).

Breakdown of software tools for systematic review by process type (full process, screening, extraction, or visualization; n=24).

Data Gathering

Interobserver agreement between the 2 reviewers gathering data features was 86.46%, meaning that across all feature assessments, the 2 reviewers disagreed on <15% of the applications. Final assessments are summarized in Table 4 , and Table S2 in Multimedia Appendix 2 shows the interobserver agreement on a per–SR tool and per-feature basis. Interobserver agreement was ≥70% for every feature assessed and for all SR tools except 3: LitStream (ICF; 53.3%), RevMan Web (Cochrane; 50%), and SR Accelerator (Institute for Evidence-Based Healthcare; 53.3%); on investigation, these low rates of agreement were found to be due to name changes and versioning (LitStream and RevMan Web) and due to the modular nature of the subsidiary offerings (SR Accelerator). An interactive, updatable visualization of the features offered by each tool is available in the Systematic Review Methodologies Qualitative Synthesis.

Feature assessment scores by feature class for each systematic review tool analyzed. The total number of features across all feature classes is presented in descending order.

Giotto Compliance (27/30, 90%), DistillerSR (26/30, 87%), and Nested Knowledge (26/30, 87%) support the most features, followed by EPPI-Reviewer Web (25/30, 83%), LitStream (23/30, 77%), JBI SUMARI (21/30, 70%), and SRDB.PRO (VTS Software) (21/30, 70%).

The top 16 software tools are ranked by percent of features from highest to lowest in Figure 2 . Fewer than half of all features are supported by 7 tools: RobotAnalyst (National Centre for Text Mining), SRDR (Agency for Healthcare Research and Quality), SyRF (Systematic Review Facility), Data Abstraction Assistant (Center for Evidence Synthesis in Health, Institute for Evidence-Based Healthcare), SR-Accelerator, RobotReviewer (RobotReviewer), and COVID-NMA (COVID-NMA; Table 3 ).


Stacked bar chart comparing the percentage of supported features, broken down by their feature class (retrieval, appraisal, extraction, output, admin, and access), among all analyzed software tools.

Feature Assessment: Breakout by Feature Class

Of all 6 feature classes, administrative features are the most supported, and output and extraction features are the least supported ( Figure 3 ). Only 3 tools, Covidence (Cochrane), EPPI-Reviewer, and Giotto Compliance, offer all 4 extraction features ( Table 4 ). DistillerSR and Giotto support all 5 retrieval features, while Nested Knowledge supports all 5 documentation/output features. Colandr, DistillerSR, EPPI-Reviewer, Giotto Compliance, and PICOPortal support all 6 appraisal features.


Heat map of features observed in 24 analyzed software tools. Dark blue indicates that a feature is present, and light blue indicates that a feature is not present.

Feature Class 1: Retrieval

The ability to search directly within the SR tool was only present for 42% (10/24) of the software tools, meaning that for all other SR tools, the user is required to search externally and import records. The only SR tool that did not enable importing of records was COVID-NMA, which supplies studies directly from the providers of the tool but does not enable the user to do so.

Feature Class 2: Appraisal

Among the 19 tools that have title/abstract screening, all tools except for RobotAnalyst and SRDR+ enable dual screening and adjudication. Reference deduplication is less widespread, with 58% (14/24) of the tools supporting it. A form of machine learning/automation during the screening stage is present in 54% (13/24) of the tools.

Feature Class 3: Extraction

Although 75% (18/24) of the tools offer data extraction, only 29% (7/24) offer dual data extraction (Giotto Compliance, DistillerSR, SRDR+, Cadima [Cadima], Covidence, EPPI-Reviewer, and PICOPortal [PICOPortal]). A total of 54% (13/24) of the tools enable risk of bias assessments.

Feature Class 4: Output

Exporting references or collected data is available in 71% (17/24) of the tools. Of the 24 tools, 54% (13/24) generate figures or tables, 42% (10/24) generate PRISMA flow diagrams, 33% (8/24) have report writing, and only 13% (3/24) have in-text citations.

Feature Class 5: Admin

Protocols, customer support, and training materials are available in 71% (17/24), 79% (19/24), and 83% (20/24) of the tools, respectively. Of all administrative features, the least well developed are progress/activity monitoring, which is offered by 67% (16/24) of the tools, and comments, which are available in 58% (14/24) of the tools.

Feature Class 6: Access

Access features cover collaboration during the review, cost, and availability of outputs. Of the 24 software tools, 83% (20/24) permit collaboration by allowing multiple users to work on a project. COVID-NMA, RobotAnalyst, RobotReviewer, and SR-Accelerator do not allow multiple users. In addition, of the 24 tools, 71% (17/24) offer a free subscription, whereas 29% (7/24) require paid subscriptions or licenses (Covidence, DistillerSR, EPPI-Reviewer Web, Giotto Compliance, JBI Sumari, SRDB.PRO, and SWIFT-Active Screener). Only 54% (13/24) of the software tools support living, updatable reviews.

Principal Findings

Our review found a wide range of options in the SR software space; however, among these tools, many lacked features that are either crucial to the completion of a review or recommended as best practices. Only 63% (15/24) of the SR tools covered the full process from search/import through to extraction and export. Among these 15 tools, only 67% (10/15) had a search functionality directly built in, and only 47% (7/15) offered dual data extraction (which is the gold standard in quality control). Notable strengths across the field include collaborative mechanisms (offered by 20/24, 83% of tools) and easy, free access (17/24, 71% of tools are free). Indeed, the top 4 software tools in terms of number of features offered (Giotto Compliance, DistillerSR, Nested Knowledge, and EPPI-Reviewer) all offered between 83% and 90% of the features assessed. However, major remaining gaps include a lack of automation of any step other than screening (automated screening is offered by 13/24, 54% of tools) and underprovision of living, updatable outputs.

Major Gaps in the Provision of SR Tools

Marshall et al [ 11 ] have previously noted that “the user should be able to perform an automated search from within the tool which should identify duplicate papers and handle them accordingly” [ 11 ]. Less than a third of tools (7/24, 29%) support search, reference import, and manual reference addition.

Study Selection

Screening of references is the most commonly offered feature and has the strongest offerings across features. All software tools that offer screening also support dual screening (with the exception of RobotAnalyst and SRDR+). This demonstrates adherence to SR best practices during the screening stage.

Automation and Machine Learning

Automation in medical SR screening has been growing. Some form of machine learning or other automation for screening literature is present in over half (13/24, 54%) of all the tools analyzed. Machine learning/screening includes reordering references, topic modeling, and predicting inclusion rates.

Data Extraction

In contrast to screening, extraction is underdeveloped. Although extraction is offered by 75% (18/24) of tools, few tools adhere to the SR best practice of dual extraction. This is a deep problem in the methods of review, as the error rate for manual extraction without dual extraction is highly variable and has even reached 50% in independent tests [ 16 ].

Although single extraction continues to be the only commonly offered method, the scientific community has noted that automating extraction would have value in both time savings and improved accuracy, but the field is as yet underdeveloped. To quote a recent review on the subject of automated extraction, "[automation] techniques have not been fully utilized to fully or even partially automate the data extraction step of systematic review" [ 21 ]. The technologies to automate extraction have not achieved even partial extraction at a sufficiently high accuracy level to be adopted; therefore, dual extraction is a pressing software requirement that is unlikely to be superseded in the near future.

Project Management

Administrative features are well supported by SR software. However, there is a need for improved monitoring of review progress. Project monitoring is offered by 67% (16/24) of the tools, which is among the lowest of all admin features and likely the feature most closely associated with the quality of the outputs. As collaborative access is common and highly prized, SR software providers should recognize the barriers to collaboration in medical research; lack of mutual awareness, inertia in communication, and time management and capacity constraints are among the leading reasons for failure in interinstitutional research [ 22 ]. Project monitoring tools could assist with each of these pain points and improve the transparency and accountability within the research team.

Living Reviews

The scientific community has made consistent demands for SR processes to be rendered updatable, with the goal of improving the quality of evidence available to clinicians, health policymakers, and the medical public [ 23 , 24 ]. Despite these ongoing calls for change, living, updatable reviews are not yet standard in SR software tools. Only 54% (13/24) of the tools support living reviews, largely because living review depends on providing updatability at each step up through to outputs. However, until greater provision of living review tools is achieved, reviews will continue to fall out of date and out of sync with clinical practice [ 24 ].

Study Limitations

In our study design, we elected to use a binary assessment, which limited the bias induced by the subjective appeal of any given tool. Therefore, these assessments did not include any comparison of quality or usability among the SR tools. This also meant that we did not use the Desmet [ 25 ] method, which ranks features by level of importance. We also excluded certain assessments that may impact user choices such as language translation features or translated training documentation, which is supported by some technologies, including DistillerSR. We completed the review in August 2021 but added several software tools following reviewer feedback; by adding expert additions without repeating the entire search strategy, we may have missed SR tools that launched between August and December 2021. Finally, the authors of this study are the designers of one of the leading SR tools, Nested Knowledge, which may have led to tacit bias toward this tool as part of the comparison.

By assessing features offered by web-based SR applications, we have identified gaps in current technologies and areas in need of development. Feature count does not equate to value or usability: it fails to capture the benefits of simpler platforms, such as ease of use, an effective user interface, alignment with established workflows, or relative cost. The authors make no claim about the superiority of any software based on feature prevalence.

Future Directions

We invite and encourage independent researchers to assess the landscape of SR tools and build on this review. We expect the list of assessed features to evolve as research practice changes. For example, this review did not include features such as the ability to search included studies, reuse of extracted data, and application programming interface calls to read data, all of which may grow in importance. Furthermore, this review assessed the presence of automation at a high level without evaluating details. A future direction might be to characterize the specific types of automation models used in screening, and in other stages, by software applications that support SRs of biomedical research.

The highest-performing SR tools were DistillerSR, EPPI-Reviewer Web, and Nested Knowledge, each of which offers >80% of the assessed features. The most commonly offered and robust feature class was screening, whereas extraction (especially quality-controlled dual extraction) was underprovided. Living reviews, although strongly advocated for by the scientific community, were similarly underprovided by the SR tools reviewed here. This review enables the medical community to make a transparent and comprehensive comparison of SR tools and may also be used to identify gaps in technology for further development by the providers of these or novel SR tools.

This review of web-based SR software tools represents our best attempt to capture information from software providers’ websites, free trials, peer-reviewed publications, training materials, and software tutorials. The review is based primarily on publicly available information and may not accurately reflect feature offerings, as relevant information was not always available or easy to interpret. This evaluation does not represent the views or opinions of any of the software developers or service providers, other than the authors. The review was completed in August 2021; readers should refer to the respective software providers’ websites for updated information on feature offerings.

Acknowledgments

The authors acknowledge the Nested Knowledge software development team (Stephen Mead, Jeffrey Johnson, and Darian Lehmann-Plantenberg) for their input in designing Nested Knowledge. The authors thank the independent software providers who provided feedback on our feature assessment, which improved the quality and accuracy of the results.


Authors' Contributions: All authors participated in the conception, drafting, and editing of the manuscript.

Conflicts of Interest: KC, NH, and KH work for and hold equity in Nested Knowledge, which provides a software application included in this assessment. AR worked for Nested Knowledge. KL works for and holds equity in Nested Knowledge, Inc, and holds equity in Superior Medical Experts, Inc. KK works for and holds equity in Nested Knowledge, and holds equity in Superior Medical Experts.


Join the movement towards fast, open, and transparent systematic reviews

ASReview LAB v1.5 is out!


ASReview uses state-of-the-art active learning techniques to solve one of the most interesting challenges in systematically screening large amounts of text: there’s not enough time to read everything!

The project has grown into a vibrant worldwide community of researchers, users, and developers. ASReview is coordinated at Utrecht University and is part of the university’s official AI labs.


Free, Open and Transparent

The software is installed on your device locally. This ensures that nobody else has access to your data, except when you share it with others. Nice, isn’t it?

  • Free and open source
  • Local or server installation
  • Full control over your data
  • Follows the Reproducibility and Data Storage Checklist for AI-Aided Systematic Reviews

In 2 minutes up and running

With the smart project setup features, you can start a new project in minutes. Ready, set, start screening!

  • Create as many projects as you want
  • Choose your own or an existing dataset
  • Select prior knowledge
  • Select your favorite active learning algorithm


Three modes to choose from

ASReview LAB can be used for:

  • Screening with the Oracle Mode, including advanced options
  • Teaching using the Exploration Mode
  • Validating algorithms using the Simulation Mode

We also offer an open-source research infrastructure to run large-scale simulation studies for validating newly developed AI algorithms.
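
For a sense of what such a simulation measures, here is a toy sketch: given the order in which a model would present records and the ground-truth labels, it traces recall as screening proceeds and computes one common formulation of WSS@95 (work saved over sampling at 95% recall). The ranking and labels are invented for illustration.

```python
# Toy screening simulation: recall after each screened record, plus WSS@95
# (one common formulation: screening effort saved at 95% recall).
def recall_curve(order, relevant):
    found, curve = 0, []
    for record in order:
        found += record in relevant
        curve.append(found / len(relevant))
    return curve

order = ["r3", "r1", "r7", "r2", "r5", "r4", "r6"]  # model's ranking (toy)
relevant = {"r1", "r3", "r5"}                       # ground truth (toy)

curve = recall_curve(order, relevant)
k95 = next(i for i, r in enumerate(curve, start=1) if r >= 0.95)
wss95 = (1 - k95 / len(order)) - 0.05
print([round(r, 2) for r in curve], f"WSS@95 = {wss95:.2f}")
```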

Follow the development

Open-source means:

  • All annotated source code is available 
  • You can see the developers at work in open Pull Requests
  • Open pull requests show the direction in which the project is developing
  • Anyone can contribute!

Give the GitHub repo a star if you like our work.


Join the community

A community-driven project means:

  • The project is a joint endeavor
  • Your contribution matters!

Join the movement towards transparent AI-aided reviewing

Beginner -> User -> Developer -> Maintainer


Join the ASReview Development Fund

Many users donate their time to continue the development of the different software tools that are part of the ASReview universe. Also, donations and research grants make innovations possible!

From the ASReview blog:

  • Navigating the Maze of Models in ASReview
  • ASReview LAB Class 101
  • Introducing the Noisy Label Filter (NLF) procedure in systematic reviews
  • Seven ways to integrate ASReview in your systematic review workflow
  • Active Learning Explained
  • The Zen of Elas
  • Five ways to get involved in ASReview
  • Connecting RIS import to export functionalities (v0.19)
  • Meet the new ASReview Maintainer: Yongchao Ma
  • UPDATED: ASReview Hackathon for Follow the Money
  • What’s new in release 0.18?
  • Simulation Mode Class 101

MSU Libraries


Systematic & Advanced Evidence Synthesis Reviews


Online Toolkits & Workbooks

The guide covers: search strategies and citation chaining; citation management, deduplication, bibliography creation, and cite-while-you-write; screening results; creating PRISMA-compliant flow charts; data analysis and abstraction; total-workflow SR products; and writing a manuscript.


This page lists commonly used software for systematic reviews (SRs) and other advanced evidence synthesis reviews; it should not be taken as MSU Libraries endorsing one program over another. The sections of the guide list fee-based as well as free and open-source software for different aspects of the review workflow. All-inclusive workflow products are listed in this section.

  • Wanner, Amanda. 2019. Getting started with your systematic or scoping review: Workbook & Endnote Instructions. Open Science Framework. A librarian-created workbook on OSF that walks you through all the steps and stages of creating a systematic or scoping review.
  • What review is right for you? This tool is designed to provide guidance and supporting material to reviewers on methods for the conduct and reporting of knowledge synthesis. As a pilot project, the current version of the tool only identifies methods for knowledge synthesis of quantitative studies. A future iteration will be developed for qualitative evidence synthesis.
  • Systematic Review Toolbox The Systematic Review Toolbox is a web-based catalogue of tools that support various tasks within the systematic review and wider evidence synthesis process. The toolbox aims to help researchers and reviewers find software tools, quality assessment/critical appraisal checklists, reporting standards, and guidelines.

It is highly recommended that researchers partner with the academic librarian for their specialty to create search strategies for systematic and advanced reviews. Many guidance organizations recognize the invaluable contributions of information professionals to creating search strategies - the bedrock of synthesis reviews.

  • Visualising systematic review search strategies to assist information specialists
  • Gusenbauer, M., & Haddaway, N. R. (2019). Which Academic Search Systems are Suitable for Systematic Reviews or Meta‐Analyses? Evaluating Retrieval Qualities of Google Scholar, PubMed and 26 other Resources. Research Synthesis Methods.
  • Citation Chaser Forward citation chasing looks for all records citing one or more articles of known relevance; backward citation chasing looks for all records referenced in one or more articles. This tool automates the process using the Lens.org API: an input article list can be used to return all referenced records and/or all citing records in the Lens.org database (which draws on PubMed, PubMed Central, CrossRef, Microsoft Academic Graph, and CORE). A rough sketch of the underlying API call follows this list.
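
As a rough illustration of what citationchaser automates, the sketch below requests the reference list of one known article from the Lens.org scholarly API. The endpoint, payload shape, and response field names are assumptions based on Lens.org’s public documentation, and the DOI is a placeholder; check the current API docs (and obtain an API token) before relying on any of it.

```python
# Hedged sketch of backward citation chasing via the Lens.org API.
# Endpoint, payload, and field names are assumptions; consult the docs.
import requests

API_KEY = "YOUR_LENS_TOKEN"  # placeholder; requires a Lens.org account
payload = {
    "query": {"match": {"doi": "10.1000/example"}},  # placeholder DOI
    "include": ["references"],                       # assumed field name
}

resp = requests.post(
    "https://api.lens.org/scholarly/search",  # assumed endpoint
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
resp.raise_for_status()
for record in resp.json().get("data", []):
    print(record.get("references", []))
```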

How do you track your integration and resourcing for projects that require systematic searching, like systematic or scoping reviews? What, where, and how should you be tracking? A tool like Airtable can help you stay organized.

  • Airtable for Systematic Search Tracking Talk from Whitney Townsend at the University of Michigan - April 6, 2022

A software program that can store citations from databases, deduplicate your results, and automate the creation and formatting of citations and a bibliography with a cite-while-you-write plugin will save a lot of time in any literature review. The software listed below performs all of these functions, which are not found in the fee-based total systematic review workflow products.

You could also do most components of an SR, including screening, in these programs.

  • Endnote Guide Endnote Online is free and has basic functionality like importing citations and cite-while-you-write for Microsoft Word. The desktop version of Endnote is a separate purchase and is more robust than the online version, particularly for organizing citations and handling large citation libraries.
  • Mendeley Guide Mendeley has all the standard features of a citation manager with the addition of a social community of scholars. Mendeley can be sluggish with libraries of many thousands of citations, and the free version has limited collaborative features.


Screening the titles, abstracts, and full text of your results is one of the most time-consuming parts of any review. There are easy-to-use free tools for this process, but they won’t automatically create the flow charts or the inter-rater reliability kappa coefficient that you need to report in your methodology section; you will have to produce these yourself. (The kappa calculation is straightforward to script; see the sketch below.)
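
A minimal sketch of the kappa calculation with invented votes, using scikit-learn:

```python
# Inter-rater agreement for title/abstract screening: Cohen's kappa from two
# reviewers' include/exclude votes (toy data for illustration).
from sklearn.metrics import cohen_kappa_score

reviewer_a = [1, 1, 0, 0, 1, 0, 0, 1]  # 1 = include, 0 = exclude
reviewer_b = [1, 0, 0, 0, 1, 0, 1, 1]

print(f"Cohen's kappa: {cohen_kappa_score(reviewer_a, reviewer_b):.2f}")
```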

Deduplicate your results in a citation management program like Endnote, Mendeley, or Zotero before importing them into one of these tools for screening. (The sketch below shows the kind of fuzzy title matching such programs perform.)
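
Citation managers typically flag duplicates by comparing normalized titles (and, in practice, DOIs, years, and author lists). A rough sketch of the title-matching idea, with invented records:

```python
# Rough duplicate detection on normalized titles; illustrative only.
import re
from difflib import SequenceMatcher

def normalize(title):
    return re.sub(r"[^a-z0-9 ]", "", title.lower()).strip()

records = [
    "Effects of Vitamin D on Bone Health: A Review",
    "Effects of vitamin D on bone health - a review.",
    "Machine learning in food science",
]

unique = []
for title in records:
    if any(SequenceMatcher(None, normalize(title), normalize(kept)).ratio() > 0.9
           for kept in unique):
        print("Duplicate dropped:", title)
    else:
        unique.append(title)
```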

  • Abstrackr Created by Brown University, Abstrackr is one of the best-known and easiest-to-use free tools for screening results.
  • Colandr Colandr is an open source screening tool. Like Abstrackr, deduplication is best done in a citation manager and then results imported into Colandr. There is a learning curve to this tool but a vibrant user community does exist for troubleshooting.
  • Rayyan Built by the Qatar Foundation, Rayyan is a free web tool (Beta) designed to help researchers working on systematic reviews and other knowledge synthesis projects. It has a simple interface and a mobile app for screening on the go.
  • PRISMA Diagram Generator Using the PRISMA Diagram Generator you can easily produce a diagram in any of 10 different formats; the official PRISMA website only offers .docx and .pdf templates. The generator produces the diagram using the open-source dot program (part of Graphviz) and provides the source for your diagram if you wish to tweak it further (see the sketch after this list).
  • PRISMA 2020: R Package and ShinyApp This free, online tool makes use of the DiagrammeR R package to develop a customisable flow diagram that conforms to PRISMA 2020 standards. It allows the user to specify whether previous and other study arms should be included, and allows interactivity to be embedded through the use of mouseover tooltips and hyperlinks on box clicks.
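
Because these generators build their diagrams with Graphviz dot, you can also script a bare-bones flow chart yourself. The sketch below uses the Python graphviz package (which requires the Graphviz binaries to be installed); the box counts are placeholders, and the official PRISMA 2020 templates define the full set of boxes.

```python
# Bare-bones PRISMA-style flow chart via the Python graphviz package.
# Counts are placeholders; see the official PRISMA 2020 templates.
from graphviz import Digraph

dot = Digraph("prisma", node_attr={"shape": "box"})
dot.node("id", "Records identified (n = 500)")
dot.node("sc", "Records screened (n = 420)")
dot.node("ft", "Reports assessed for eligibility (n = 60)")
dot.node("inc", "Studies included (n = 25)")
dot.edges([("id", "sc"), ("sc", "ft"), ("ft", "inc")])
dot.render("prisma_flow", format="png", cleanup=True)  # writes prisma_flow.png
```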

Tools for data analysis can help you categorize results, such as study outcomes, and perform meta-analyses. The SRDR tool may be the easiest to use and receives frequent functionality updates.

  • OpenMeta[Analyst] Developed by Brown University using an AHRQ grant, OpenMeta[Analyst] is a no-frills approach to data analysis.
  • SRDR Developed by the AHRQ, The Systematic Review Data Repository (SRDR) is a powerful and easy-to-use tool for the extraction and management of data for systematic review or meta-analysis. It is also an open and searchable archive of systematic reviews and their data.

Data abstraction commonly refers to the extraction, synthesis, and structured visualization of evidence characteristics. Evidence tables (also called table shells or reading grids) are the core way article extraction analyses are displayed: an evidence table lists all the included data sources and their characteristics according to your inclusion/exclusion criteria. Tools like Covidence have modules to create your own data extraction form and export a table when finished.

  • OpenAcademics: Reading Grid Template
  • The National Academies Press: Sample Literature Table Shells
  • Campbell Collaboration: Data Extraction Tips

There are several fee-based products that are a one-stop shop for systematic reviews. They handle all the steps, from importing citations and deduplicating results to screening and bibliography management, and some even perform meta-analyses. These are best used by teams with grant or departmental funding because they can be rather expensive.

None of these tools offers a robust bibliography creation function or cite-while-you-write option. You will still need a separate citation manager for these aspects of review writing. We list commonly used citation management tools on this page.

  • EPPI-Reviewer 4 EPPI-Reviewer 4 is a web-based program for managing and analysing data in literature reviews. It has been developed for all types of systematic review (meta-analysis, framework synthesis, thematic synthesis, etc.) but also has features useful in any literature review. It manages references, stores PDF files, and facilitates qualitative and quantitative analyses such as meta-analysis, empirical synthesis, and qualitative thematic synthesis. It also contains ‘text mining’ technology that promises to make systematic reviewing more efficient. It does not have a bibliographic manager or cite-while-you-write feature.
  • JBI-SUMARI Currently, this tool can only accept Endnote XML files for citation import. So you would need to download citations to Endnote, import them into SUMARI, and when screening is complete use Endnote as your bibliographic manager for any writing. SUMARI supports 10 review types, including reviews of effectiveness, qualitative research, economic evaluations, prevalence/incidence, aetiology/risk, mixed methods, umbrella/overviews, text/opinion, diagnostic test accuracy and scoping reviews. It facilitates the entire review process, from protocol development, team management, study selection, critical appraisal, data extraction, data synthesis and writing your systematic review report.

Using Excel

Some teams may choose to use Excel for their systematic review. This is not recommended, as it can be extremely time consuming and more error prone. However, there is a good-quality, freely available Excel template that walks you through the entire SR workflow (excluding bibliography creation and citation management).

  • PIECES Workbook This link opens an Excel workbook designed to help conduct, document, and manage a systematic review. Made by Margaret J. Foster, MS, MPH, AHIP, Systematic Reviews Coordinator and Associate Professor, Medical Sciences Library, Texas A&M University.
  • Systematic Review Accelerator: Methods Wizard A tool to help you write consistent, reproducible methods sections according to common reporting structures.
  • PRISMA Extensions Each PRISMA reporting extension has a manuscript checklist that lays out exactly how to write/report your review and what information to include.


Tools & Resources

Software Tools

There are a variety of fee-based and open-source (i.e., free) tools available for conducting the various steps of your scoping or systematic review. 

The NIH Library currently provides free access for NIH customers to Covidence. At least one user must be from NIH in order to request access and use Covidence. Please contact the NIH Library's Systematic Review Service to request access.

You can use Covidence to import citations from any citation management tool and then screen them, first at the title and abstract level and then at the full-text level. Covidence keeps track of who voted and manages the flow of citations to ensure the correct number of screeners reviews each citation; it supports single or dual screening. In the full-text screening step, you can upload PDFs into Covidence, and it will track your excluded citations and the reasons for exclusion. Later, export this information to help you complete the PRISMA flow diagram. If you choose, you can also complete your data extraction and risk of bias assessments in Covidence by creating templates based on your needs and type of risk of bias tool. Finally, export all of your results for data management purposes, or export your data into another analysis tool for further work.

Other tools available for conducting scoping or systematic reviews are:

  • DistillerSR (fee-based)
  • EPPI-Reviewer (fee-based)
  • JBI SUMARI (from the Joanna Briggs Institute for their reviews) (fee-based)
  • LitStream (from ICF International)
  • Systematic Review Data Repository (SRDR+)  (from AHRQ) (free)
  • Abstrackr (open source)
  • Colandr  (open source)
  • Google Sheets and Forms
  • HAWC (Health Assessment Workplace Collaborative) (free)
  • Rayyan (free)

And check out the Systematic Review Toolbox for additional software suggestions for conducting your review.

Quality Assessment Tools (i.e., risk of bias, critical appraisal)

  • 2022 Repository of Quality Assessment and Risk of Bias Tools - A comprehensive resource for finding and selecting a risk of bias or quality assessment tool for evidence synthesis projects. Continually updated.
  • AMSTAR 2 - AMSTAR 2 (A MeaSurement Tool to Assess systematic Reviews). Use for critically appraising ONLY systematic reviews of healthcare interventions, including randomised controlled clinical trials.
  • JADAD Scale for Reporting Randomized Controlled Trials  - The Jadad scale, sometimes known as Jadad scoring or the Oxford quality scoring system, is a procedure to independently assess the methodological quality of a clinical trial. Jadad et al. published a three-point questionnaire that formed the basis for a Jadad score. 
  • Joanna Briggs Institute Critical Appraisal Tools  - includes 13 checklists to appraise a variety of different studies and publication types including qualitative studies.
  • RoB 2.0: Cochrane Risk of Bias Tool for Randomized Trials - Version 2 of the Cochrane risk-of-bias tool can be used to assess the risk of bias in randomized trials.
  • CASP (Critical Appraisal Skills Programme) - a number of checklists are available to appraise systematic reviews, randomised controlled trials, cohort studies, case control studies, economic evaluations, diagnostic studies, qualitative studies, and clinical prediction rules.
  • Newcastle-Ottawa Scale (NOS) for Assessing the Quality of Nonrandomised Studies in Meta-Analyses - Nonrandomised studies, including case-control and cohort studies, can be challenging to implement and conduct. Assessment of the quality of such studies is essential for a proper understanding of nonrandomised studies. The Newcastle-Ottawa Scale (NOS) is an ongoing collaboration between the Universities of Newcastle, Australia and Ottawa, Canada. It was developed to assess the quality of nonrandomised studies with its design, content and ease of use directed to the task of incorporating the quality assessments in the interpretation of meta-analytic results.

Background information on this important step of systematic reviews can be found at the following resources:

  • Cochrane Handbook for Systematic Reviews of Interventions (version 6.2) 2021 - see Chapter 7: Considering bias and conflicts of interest among the included studies, Chapter 8: Assessing risk of bias in a randomized trial, Chapter 13: Assessing risk of bias due to missing results in a synthesis, and Chapter 25: Assessing risk of bias in a non-randomized study
  • Aromataris E, Munn Z (Editors). JBI Manual for Evidence Synthesis. JBI, 2020. https://doi.org/10.46658/JBIMES-20-01 - see the appropriate chapter for your type of review and the section on risk of bias.
  • Chapter:  Assessing the Risk of Bias of Individual Studies in Systematic Reviews of Healthcare Interventions  from AHRQ. 2017 December.  Methods Guide for Effectiveness and Comparative Effectiveness Reviews . Rockville, MD, AHRQ.
  • Chapter 3: Standards for Finding and Assessing Individual Studies in Institute of Medicine. 2011.  Finding What Works in Health Care: Standards for Systematic Reviews . Washington, DC: The National Academies Press.

Grading Tools

GRADE Working Group

The working group has developed a common, sensible and transparent approach to grading quality of evidence and strength of recommendations.

  • Oxford Centre for Evidence Based Medicine - Levels of Evidence and Grades of Recommendations

Reporting Standards for Systematic Reviews

The Appraisal of Guidelines for Research and Evaluation (AGREE) Instrument evaluates the process of practice guideline development and the quality of reporting.

Collects guidance documents on reporting systematic reviews and other types of health research

PRISMA 2020

Preferred Reporting Items for Systematic Reviews and Meta-Analyses. PRISMA 2020 was published in 2021 with a revised checklist, flow diagram, and a new elaboration and explanation paper.

The Methodological Expectations of Cochrane Intervention Reviews (MECIR) are methodological standards to which all Cochrane Protocols, Reviews, and Updates are expected to adhere.

  • RAMESES publication standards: meta-narrative reviews

Online Videos on Systematic Reviews

The Campbell Collaboration

A collection of introductory and advanced videos on systematic reviews

Cochrane Introduction to Systematic Reviews

This module provides an overview to Cochrane systematic reviews, and will take approximately 45 minutes to complete.

​ Systematic Review and Evidence-Based Medicine

Dr. Aaron Carroll's (The Incidental Economist) take on evidence-based practice and systematic reviews

How to Critically Appraise Evidence

A collection of videos on evidence-based practice, common statistical methods in medicine, and systematic reviews

Introduction to Meta-Analysis

Dr. Michael Borenstein's short introduction to meta-analysis

Literature Review Tips & Tools


Organizational Tools

Tools for Systematic Reviews

  • Bubbl.us Free online brainstorming/mindmapping tool that also has a free iPad app.
  • Coggle Another free online mindmapping tool.
  • Organization & Structure tips from Purdue University Online Writing Lab
  • Literature Reviews from The Writing Center at University of North Carolina at Chapel Hill Gives several suggestions and descriptions of ways to organize your lit review.
  • Cochrane Handbook for Systematic Reviews of Interventions "The Cochrane Handbook for Systematic Reviews of Interventions is the official guide that describes in detail the process of preparing and maintaining Cochrane systematic reviews on the effects of healthcare interventions. "
  • Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) website "PRISMA is an evidence-based minimum set of items for reporting in systematic reviews and meta-analyses. PRISMA focuses on the reporting of reviews evaluating randomized trials, but can also be used as a basis for reporting systematic reviews of other types of research, particularly evaluations of interventions."
  • PRISMA Flow Diagram Generator Free tool that will generate a PRISMA flow diagram from a CSV file (sample CSV template provided). Please cite as: Haddaway, N. R., Page, M. J., Pritchard, C. C., & McGuinness, L. A. (2022). PRISMA2020: An R package and Shiny app for producing PRISMA 2020-compliant flow diagrams, with interactivity for optimised digital transparency and Open Synthesis. Campbell Systematic Reviews, 18, e1230. https://doi.org/10.1002/cl2.1230
  • Rayyan "Rayyan is a 100% FREE web application to help systematic review authors perform their job in a quick, easy and enjoyable fashion. Authors create systematic reviews, collaborate on them, maintain them over time and get suggestions for article inclusion."
  • Covidence Covidence is a tool to help manage systematic reviews (and create PRISMA flow diagrams). UMass Amherst doesn't subscribe, but Covidence offers a free trial for one review of no more than 500 records; it is also set up for researchers to pay per review.
  • PROSPERO - Systematic Review Protocol Registry "PROSPERO accepts registrations for systematic reviews, rapid reviews and umbrella reviews. PROSPERO does not accept scoping reviews or literature scans. Sibling PROSPERO sites register systematic reviews of human studies and systematic reviews of animal studies."
  • Critical Appraisal Tools from JBI Joanna Briggs Institute at the University of Adelaide provides these checklists to help evaluate different types of publications that could be included in a review.
  • Systematic Review Toolbox "The Systematic Review Toolbox is a community-driven, searchable, web-based catalogue of tools that support the systematic review process across multiple domains. The resource aims to help reviewers find appropriate tools based on how they provide support for the systematic review process. Users can perform a simple keyword search (i.e. Quick Search) to locate tools, a more detailed search (i.e. Advanced Search) allowing users to select various criteria to find specific types of tools and submit new tools to the database. Although the focus of the Toolbox is on identifying software tools to support systematic reviews, other tools or support mechanisms (such as checklists, guidelines and reporting standards) can also be found."
  • Abstrackr Free, open-source tool that "helps you upload and organize the results of a literature search for a systematic review. It also makes it possible for your team to screen, organize, and manipulate all of your abstracts in one place." -From Center for Evidence Synthesis in Health
  • SRDR Plus (Systematic Review Data Repository: Plus) An open-source tool for extracting, managing, and archiving data, developed by the Center for Evidence Synthesis in Health at Brown University
  • RoB 2 Tool (Risk of Bias for Randomized Trials) A revised Cochrane risk of bias tool for randomized trials


DistillerSR: Literature Review Software

Smarter reviews: trusted evidence.

Securely automate every stage of your literature review to produce evidence-based research faster, more accurately, and more transparently at scale.

Software Built for Every Stage of a Literature Review

DistillerSR automates the management of literature collection, screening, and assessment using AI and intelligent workflows. From a systematic literature review to a rapid review to a living review, DistillerSR makes any project simpler to manage and configure to produce transparent, audit-ready, and compliant results.


Broader, Automated Literature Searches

Search more efficiently with DistillerSR’s integrations with data providers, such as PubMed, automatic review updates, and AI-powered duplicate detection and removal.


PubMed Integration

Automatic Review Updates

Automatically import newly published references, always keeping literature reviews up-to-date with DistillerSR LitConnect .
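
The underlying pattern here is a scheduled database query for records added since the last run. Below is a minimal, vendor-independent sketch against PubMed's public E-utilities (the esearch endpoint and its parameters are NCBI's documented API; the search term and dates are placeholders).

```python
# Poll PubMed E-utilities for records indexed since the last update run.
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
params = {
    "db": "pubmed",
    "term": "food allergy AND systematic[sb]",  # placeholder query
    "datetype": "edat",          # filter on Entrez date added
    "mindate": "2024/01/01",     # date of the previous run (placeholder)
    "maxdate": "2024/02/01",
    "retmode": "json",
    "retmax": 100,
}

ids = requests.get(ESEARCH, params=params, timeout=30).json()["esearchresult"]["idlist"]
print(len(ids), "new PMIDs to screen:", ids[:5])
```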

Duplicate Detection

Detect and remove duplicate citations, preventing the skew and bias caused by studies being included more than once.

Faster, More Effective Reference Screening

Reduce your screening burden by 60% with DistillerSR. Start working on later stages of your review sooner by finding relevant references faster and addressing conflicts more easily.


AI-Powered Screening

Conflict Resolution

Automatically identifies conflicts and disagreements between literature reviewers for easy resolution.

AI Quality Check

Increase the thoroughness of your literature review by having AI double-check your exclusion decisions and validate your categorization of records with the DistillerSR AI Classifiers software module.

Cost-Effective Access to Full-Text Documents

Ensure your literature review is always up-to-date with DistillerSR’s direct connections to full-text data sources, all the while lowering overall subscription costs.


Open Access Integrations

Automatically search for and upload full-text documents from PMC , and link directly to source material through DOI.org .

Copyright Compliant Bulk Search

Retrieve full-text articles for the lowest possible cost through Article Galaxy .

Ad-Hoc Document Retrieval

Leverage existing RightFind and Article Galaxy subscriptions, the open access Unpaywall plugin, and internal libraries to access copyright compliant documents.

Simple Yet Powerful Data Extraction

Simplify data extraction through templates and configurable forms. Extract data easily with in-form validations and calculations, and easily capture repeating, complex data sets.
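
To illustrate what in-form validation buys you, here is a generic sketch of an extraction record that rejects implausible values at entry time. It is not DistillerSR's form engine; the field names and ranges are invented examples.

```python
# Generic in-form validation for data extraction: reject bad values at entry.
from dataclasses import dataclass

@dataclass
class ExtractionRecord:
    study_id: str
    sample_size: int
    mean_age: float

    def __post_init__(self):
        if self.sample_size <= 0:
            raise ValueError(f"{self.study_id}: sample_size must be positive")
        if not 0 <= self.mean_age <= 120:
            raise ValueError(f"{self.study_id}: implausible mean_age")

print(ExtractionRecord(study_id="Smith2021", sample_size=148, mean_age=54.2))
```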


Cross-Review, Data Reuse

Prevent duplication of effort across your organization and reduce data extraction times with DistillerSR CuratorCR by easily reusing data across literature reviews.

Capturing Complex Output

Easily capture complex data, such as a variable number of time points across multiple studies in an easy-to-understand and ready-to-analyze way.

Smart Forms

Cut down on literature review data cleaning, data conversions, and effect measure calculations with input validation and built-in form calculations.

Automatic and Configurable Reporting


Customizable Reporting Engine

Build reports and schedule automated email updates to stakeholders. Integrate your data with third-party reporting applications and databases with DistillerSR API .

Auto-Generated Reports

Comprehensive Audit Trail

Automatically keeps track of every entry and decision, providing transparency and reproducibility in your literature review.

Easy-to-use Literature Review Project Management

Facilitate project management throughout the literature review process with real-time user and project metric monitoring, reusable configurations, and granular user permissions.


Real-Time User and Project Metrics

Monitor teams and literature review progress in real-time, improving management and quality oversight into projects.

Repeatable, Configurable Processes

Secure Literature Reviews

Single sign-on (SSO) and fully configurable user roles and permissions simplify the literature reviewer experience while also ensuring data integrity and security.

I can’t think of a way to do reviews faster than with DistillerSR. Being able to monitor progress and collaborate with team members, no matter where they are located makes my life a lot easier.

DistillerSR Case Studies


Abbott Diagnostics


Maple Health Group


Congress of Neurological Surgeons

DistillerSR Frequently Asked Questions

What types of reviews can be done with DistillerSR? Systematic reviews, living reviews, rapid reviews, or clinical evaluation report (CER) literature reviews?

Literature reviews can be a very simple or highly complex process, and they can use a variety of methods for finding, assessing, and presenting evidence. We describe DistillerSR as literature review software because it supports all types of reviews, from systematic reviews to rapid reviews, and from living reviews to CER literature reviews.

DistillerSR software is used by over 300 customers in many different industries to support their evidence generation initiatives, from guideline development to HEOR analysis to CERs to post-market surveillance (PMS) and pharmacovigilance.

What are some of DistillerSR’s capabilities that support conducting systematic reviews?

Systematic reviews are the gold standard of literature reviews that aim to identify and screen all evidence relating to a specific research question. DistillerSR facilitates systematic reviews through a configurable, transparent, reproducible process that makes it easy to view the provenance of every cell of data.

DistillerSR was originally designed to support systematic reviews. The software handles dual reviewer screening, conflict resolution, capturing exclusion reasons while you work, risk of bias assessments, duplicate detection, multiple database searches, and reporting templates such as PRISMA. DistillerSR can readily scale for systematic reviews of all sizes, supporting more than 700,000 references per project through a robust enterprise-grade technical architecture. Using software like DistillerSR makes systematic reviews easier to manage and configure, producing transparent evidence-based research faster and more accurately.

How does DistillerSR support clinical evaluation reports (CERs) and performance evaluation reports (PERs) program management?

The new European Union Medical Device Regulation (EU-MDR) and In-Vitro Device Regulation (EU-IVDR) require medical device manufacturers to increase the frequency, traceability, and overall documentation for CERs in the MDR program or PERs in the IVDR counterpart. Literature review software is an ideal tool to help you comply with these regulations.

DistillerSR automates literature reviews to make the process more transparent, repeatable, and auditable, enabling manufacturers to create and implement a standard framework for literature reviews. This framework can then be incorporated into all CER and PER program management plans consistently across every product, division, and research group.

How can DistillerSR help rapid reviews?

DistillerSR AI is ideal to speed up the rapid review process without compromising on quality. The AI-powered screening enables you to find references faster by continuously reordering relevant references, resulting in accelerated screening. The AI can also double-check your exclusion decisions to ensure relevant references are not left out of the rapid review.

DistillerSR title screening functionality enables you to quickly perform title screening on large numbers of references.

Does DistillerSR support living reviews?

The short answer is yes. DistillerSR has multiple capabilities that automate living systematic reviews, such as automatically importing newly published references into your projects and notifying reviewers that there’s screening to do. You can also put reports on an automated schedule so you’re never caught off guard when important new data is collected. These capabilities help ensure the latest research is included in your living systematic review and that your review is up-to-date.

How can DistillerSR help ensure the accuracy of Literature and Systematic reviews?

The quality of systematic reviews is foundational to evidence-based research. However, quality may be compromised because systematic reviews, by their very nature, are often tedious, repetitive, and prone to human error. Tracking all review activity in systematic review software like DistillerSR, and making it easy to trace the provenance of every cell of data, delivers total transparency and auditability in the systematic review process. DistillerSR enables reviewers to work on the same project simultaneously without the risk of duplicating work or overwriting each other’s results. Configurable workflow filters ensure that the right references are automatically assigned to the right reviewers, and DistillerSR’s cross-project dashboard allows reviewers to monitor to-do lists for all projects from one place.

Why should I add DistillerSR to my Literature and Systematic Review Toolbox and retire my current spreadsheet solution?

It’s estimated that 90% of spreadsheets contain formula errors and approximately 50% have material defects. These errors, coupled with the time and resources necessary to fix them, adversely impact the management of the systematic review process. DistillerSR software was specifically designed to address the challenges faced by systematic review authors, namely the ever-increasing volume of research to screen and extract, review bottlenecks, regulatory requirements for auditability and transparency, and the need to manage a remote, global workforce. Efficiency, consistency, better collaboration, and quality control are just a few of the benefits you’ll get when you choose DistillerSR’s systematic review process over a manual spreadsheet tool for your reviews.

What is the role of AI in your systematic review process?

DistillerSR AI automates the logistics-heavy tasks involved in conducting a systematic literature review, such as finding references faster by using AI to continuously reorder references based on relevance. Continuous AI Reprioritization uses machine learning to learn from the references you are including and excluding and automatically reorders the ones you have left to screen, putting the most pertinent references in front of you first. This means you find included references much more quickly during screening. DistillerSR also uses classifiers, which use NLP to classify and process information in the systematic review. DistillerSR can also increase the thoroughness of your systematic review by having AI double-check your exclusion decisions.

What about the security and scalability of systematic literature reviews done on DistillerSR?

DistillerSR builds security, scalability, and availability into everything we do, so you can focus on producing evidence-based research faster, more accurately, and more securely with our  systematic review software. We undergo an annual independent third-party audit and certify our products using the American Institute of Certified Public Accountants SOC 2 framework. In terms of scalability, systematic review projects in DistillerSR can easily handle a large number of references; some of our customers have over 700,000 references in their projects.

Do you offer any commitments on the frequency of new product and capability launches?

We pride ourselves on listening to and working with our customers to regularly introduce new capabilities that improve DistillerSR and the systematic review process. We plan on offering two major releases a year in addition to two minor feature enhancements. We notify customers in advance about upcoming releases, host webinars, develop tools and training to introduce the new capabilities, and provide extensive release notes for our reviewers.

I have a unique literature review protocol. Is your software configurable with my literature review data and process?

Configurability is one of the key foundations of DistillerSR software. In fact, with over 300 customers in many different industries, we have yet to see a literature review protocol that our software couldn’t handle. DistillerSR is a professional B2B SaaS company with an exceptional customer success team that will work with you to understand your unique requirements and systematic review process to get you started quickly. Our global support team is available 24/7 to help you.

Still unsure if DistillerSR will meet your systematic literature review requirements?

Adopting new software is about more than just money. It is also about commitment and trusting that the new platform will match your systematic review and scalability needs. We have resources to help you in your analysis and decision: check out the systematic review software checklist or the literature review software checklist.




Systematic Review


Tools for systematic reviews

There are many tools you can use when conducting a systematic review. These tools are designed to assist with the key stages of the process, including title and abstract screening, data synthesis, and critical appraisal.

Registering your review is recommended best practice and options are explored in the Register your review section of this guide.

Covidence is a web-based screening and data extraction tool for authors conducting systematic and scoping reviews. Covidence includes functions to support uploading search results, screening abstracts, conducting risk of bias assessments and more to make your review production more efficient. 

How to join University of Wollongong’s Covidence account 

To request access to UOW’s Covidence account, you must have an active @uow.edu.au or @uowmail.edu.au email address.  

  • Go to the  Covidence sign-up page  
  • Enter your first name and UOW email address, then click “Request Invitation” 
  • You will receive an invitation email sent by covidence.org. Open the email and click “Accept Invitation” 
  • Follow the prompts to create your personal Covidence account using the same UOW email address. 

Covidence support 

The Covidence  Knowledge Base  and  Getting Started with Covidence  videos provide comprehensive support. 

Already signed up? 

Sign in  with your personal account and access Covidence.

Critical appraisal tools

Critical appraisal skills enable you to systematically assess the trustworthiness, relevance and results of published papers.  The Centre for Evidence Based Medicine  defines critical appraisal as the systematic evaluation of clinical research papers in order to establish: 

  • Does this study address a clearly focused question? 
  • Did the study use valid methods to address this question? 
  • Are the valid results of this study important? 
  • Are these valid, important results applicable to my patient or population? 

A comprehensive set of critical appraisal tools can be found on the  University of South Australia’s library guide .

JBI SUMARI facilitates the entire review process, from protocol development, team management, study selection, critical appraisal, data extraction, data synthesis and writing your systematic review. This tool is developed by the  Joanna Briggs Institute (JBI) .

To set up a personal OVID account and access SUMARI as UOW staff or student,  follow these instructions .

Risk of bias tools

The  NHMRC states  that risks of bias are the likelihood that features of the study design or conduct of the study will give misleading results. This can result in wasted resources, lost opportunities for effective interventions or harm to consumers. 

See  riskofbias.info  for details of tools you can use to asses risk of bias, including: 

  • RoB 2.0: Cochrane's risk of bias tool for randomised controlled trials 
  • ROBINS-I: evaluates the risk of bias in the studies that compare the health effects of two or more interventions 
  • ROBINS-E: provides a structured approach to assessing the risk of bias in observational epidemiological studies 
  • ROB ME: a tool for assessing risk of bias due to missing evidence in a synthesis 
  • Robvis: a web app designed for visualizing risk-of-bias assessments performed as part of a systematic review.

Systematic Reviewlution

Systematic Reviewlution is a living review compiling evidence of where published systematic reviews are not being done well. Awareness of these problems will enable researchers, publishers, and decision makers to conduct better systematic reviews in the future.

The review includes a framework of common problems with systematic reviews that should be considered as you develop your own review protocols.

Register your review

It is good practice to register your systematic review with PROSPERO or the International Database of Education Systematic Reviews. Scoping and rapid reviews can be registered with Figshare or Open Science Framework (OSF).

PROSPERO is an international register for prospective systematic literature reviews.

It includes protocol details for systematic reviews relevant to:

  • health and public health
  • social care and welfare
  • crime and justice
  • international development

Protocols can include any type of study design where there is a health-related outcome.

Search PROSPERO

International Database of Education Systematic Reviews (IDESR)

IDESR is a database of published systematic reviews in Education and a clearinghouse for protocol registration of ongoing and planned systematic reviews. IDESR accepts registrations of protocols for systematic reviews in all fields of education.

  • Search and register with IDESR .

Figshare is an open repository where you can make your review protocol citable, shareable and discoverable.

  • Search and register with Figshare.

Open Science Framework (OSF)

Recommended by PRISMA and PRISMA-ScR, the Open Science Framework is a free, open platform to support users' research and enable collaboration.

  • Search and register with OSF.



SCI Journal

10 Best Literature Review Tools for Researchers


Boost your research game with these Best Literature Review Tools for Researchers! Uncover hidden gems, organize your findings, and ace your next research paper!

Conducting literature reviews poses challenges for researchers due to the overwhelming volume of information available and the lack of efficient methods to manage and analyze it.

Researchers struggle to identify key sources, extract relevant information, and maintain accuracy while manually conducting literature reviews. This leads to inefficiency, errors, and difficulty in identifying gaps or trends in existing literature.

Advancements in technology have resulted in a variety of literature review tools. These tools streamline the process, offering features like automated searching, filtering, citation management, and research data extraction. They save time, improve accuracy, and provide valuable insights for researchers. 

In this article, we present a curated list of the 10 best literature review tools, empowering researchers to make informed choices and revolutionize their systematic literature review process.


Top 10 Literature Review Tools for Researchers: In A Nutshell (2023)

#1. Semantic Scholar – A free, AI-powered research tool for scientific literature


Semantic Scholar is a cutting-edge literature review tool that researchers rely on for its comprehensive access to academic publications. With its advanced AI algorithms and extensive database, it simplifies the discovery of relevant research papers. 

By employing semantic analysis, users can explore scholarly articles based on context and meaning, making it a go-to resource for scholars across disciplines. 

Additionally, Semantic Scholar offers personalized recommendations and alerts, ensuring researchers stay updated with the latest developments. However, users should be cautious of potential limitations. 

Not all scholarly content may be indexed, and occasional false positives or inaccurate associations can occur. Furthermore, the tool primarily focuses on computer science and related fields, potentially limiting coverage in other disciplines. 

Researchers should be mindful of these considerations and supplement Semantic Scholar with other reputable resources for a comprehensive literature review. Despite these caveats, Semantic Scholar remains a valuable tool for streamlining research and staying informed.

#2. Elicit – Research assistant using language models like GPT-3


Elicit is a game-changing literature review tool that has gained popularity among researchers worldwide. With its user-friendly interface and extensive database of scholarly articles, it streamlines the research process, saving time and effort. 

The tool employs advanced algorithms to provide personalized recommendations, ensuring researchers discover the most relevant studies for their field. Elicit also promotes collaboration by enabling users to create shared folders and annotate articles.

However, users should be cautious when using Elicit. It is important to verify the credibility and accuracy of the sources found through the tool, as the database encompasses a wide range of publications. 

Additionally, occasional glitches in the search function have been reported, leading to incomplete or inaccurate results. While Elicit offers tremendous benefits, researchers should remain vigilant and cross-reference information to ensure a comprehensive literature review.

#3. Scite.Ai – Your personal research assistant


Scite.Ai is a popular literature review tool that revolutionizes the research process for scholars. With its innovative citation analysis feature, researchers can evaluate the credibility and impact of scientific articles, making informed decisions about their inclusion in their own work. 

By assessing the context in which citations are used, Scite.Ai ensures that the sources selected are reliable and of high quality, enabling researchers to establish a strong foundation for their research.

However, while Scite.Ai offers numerous advantages, there are a few aspects to be cautious about. As with any data-driven tool, occasional errors or inaccuracies may arise, necessitating researchers to cross-reference and verify results with other reputable sources. 

Moreover, Scite.Ai’s coverage may be limited in certain subject areas and languages, with a possibility of missing relevant studies, especially in niche fields or non-English publications. 

Therefore, researchers should supplement the use of Scite.Ai with additional resources to ensure comprehensive literature coverage and avoid any potential gaps in their research.

Scite.Ai offers the following paid plans:

  • Monthly Plan: $20
  • Yearly Plan: $12

#4. DistillerSR – Literature Review Software


DistillerSR is a powerful literature review tool trusted by researchers for its user-friendly interface and robust features. With its advanced search capabilities, researchers can quickly find relevant studies from multiple databases, saving time and effort. 

The tool offers comprehensive screening and data extraction functionalities, streamlining the review process and improving the reliability of findings. Real-time collaboration features also facilitate seamless teamwork among researchers.

While DistillerSR offers numerous advantages, there are a few considerations. Users should invest time in understanding the tool’s features and functionalities to maximize its potential. Additionally, the pricing structure may be a factor for individual researchers or small teams with limited budgets.

Despite occasional technical glitches reported by some users, the developers actively address these issues through updates and improvements, ensuring a better user experience. 

Overall, DistillerSR empowers researchers to navigate the vast sea of information, enhancing the quality and efficiency of literature reviews while fostering collaboration among research teams.

#5. Rayyan – AI Powered Tool for Systematic Literature Reviews


Rayyan is a powerful literature review tool that simplifies the research process for scholars and academics. With its user-friendly interface and efficient management features, Rayyan is highly regarded by researchers worldwide. 

It allows users to import and organize large volumes of scholarly articles, making it easier to identify relevant studies for their research projects. The tool also facilitates seamless collaboration among team members, enhancing productivity and streamlining the research workflow. 

However, it’s important to be aware of a few aspects. The free version of Rayyan has limitations, and upgrading to a premium subscription may be necessary for additional functionalities. 

Users should also be mindful of occasional technical glitches and compatibility issues, promptly reporting any problems. Despite these considerations, Rayyan remains a valuable asset for researchers, providing an effective solution for literature review tasks.

Rayyan offers both free and paid plans:

  • Professional: $8.25/month
  • Student: $4/month
  • Pro Team: $8.25/month
  • Team+: $24.99/month


#6. Consensus – Use AI to find answers in scientific research


Consensus is a cutting-edge literature review tool that has become a go-to choice for researchers worldwide. Its intuitive interface and powerful capabilities make it a preferred tool for navigating and analyzing scholarly articles. 

With Consensus, researchers can save significant time by efficiently organizing and accessing relevant research material. Researchers choose Consensus for several reasons.

Its advanced search algorithms and filters help researchers sift through vast amounts of information, ensuring they focus on the most relevant articles. By streamlining the literature review process, Consensus allows researchers to extract valuable insights and accelerate their research progress.

However, there are a few factors to watch out for when using Consensus. As with any automated tool, researchers should exercise caution and independently verify the accuracy and relevance of the generated results. Complex or niche topics may present challenges, resulting in limited search results. Researchers should also supplement Consensus with manual searches to ensure comprehensive coverage of the literature.

Overall, Consensus is a valuable resource for researchers seeking to optimize their literature review process. By leveraging its features alongside critical thinking and manual searches, researchers can enhance the efficiency and effectiveness of their work, advancing their research endeavors to new heights.

Consensus offers both free and paid plans:

  • Premium: $9.99/month
  • Enterprise: Custom


#7. RAx – AI-powered reading assistant


RAx is a revolutionary literature review tool that has transformed the research process for scholars worldwide. With its user-friendly interface and advanced features, it offers a vast database of academic publications across various disciplines, providing access to relevant and up-to-date literature.

Using advanced algorithms and machine learning, RAx delivers personalized recommendations, saving researchers time and effort in their literature search.

However, researchers should be cautious of potential biases in the recommendation system and supplement their search with manual verification to ensure a comprehensive review.

Additionally, occasional inaccuracies in metadata have been reported, making it essential for users to cross-reference information with reliable sources. Despite these considerations, RAx remains an invaluable tool for enhancing the efficiency and quality of literature reviews.

RAx offers both free and paid plans, with a 50% discount in effect as of July 2023:

  • Premium: $6/month, discounted to $3/month
  • Premium with Copilot: $8/month, discounted to $4/month


#8. Lateral – Advance your research with AI


“Lateral” is a revolutionary literature review tool trusted by researchers worldwide. With its user-friendly interface and powerful search capabilities, it simplifies the process of gathering and analyzing scholarly articles. 

By leveraging advanced algorithms and machine learning, Lateral saves researchers precious time by retrieving relevant articles and uncovering new connections between them, fostering interdisciplinary exploration.

While Lateral provides numerous benefits, users should exercise caution. It is advisable to cross-reference its findings with other sources to ensure a comprehensive review. 

Additionally, researchers must be mindful of potential biases introduced by the tool’s algorithms and should critically evaluate and interpret the results. 

Despite these considerations, Lateral remains an indispensable resource, empowering researchers to delve deeper into their fields of study and make valuable contributions to the academic community.

Lateral offers both free and paid plans:

  • Premium: $10.98
  • Pro: $27.46


#9. Iris AI – Introducing the researcher workspace


Iris AI is an innovative literature review tool that has transformed the research process for academics and scholars. With its advanced artificial intelligence capabilities, Iris AI offers a seamless and efficient way to navigate through a vast array of academic papers and publications. 

Researchers are drawn to this tool because it saves valuable time by automating the tedious task of literature review and provides comprehensive coverage across multiple disciplines. 

Its intelligent recommendation system suggests related articles, enabling researchers to discover hidden connections and broaden their knowledge base. However, caution should be exercised while using Iris AI. 

While the tool excels at surfacing relevant papers, researchers should independently evaluate the quality and validity of the sources to ensure the reliability of their work. 

It’s important to note that Iris AI may occasionally miss niche or lesser-known publications, necessitating a supplementary search using traditional methods. 

Additionally, being an algorithm-based tool, there is a possibility of false positives or missed relevant articles due to the inherent limitations of automated text analysis. Nevertheless, Iris AI remains an invaluable asset for researchers, enhancing the quality and efficiency of their research endeavors.

Iris AI offers different pricing plans to cater to various user needs:

  • Basic: Free
  • Premium: Monthly ($82.41), Quarterly ($222.49), and Annual ($791.07)


#10. Scholarcy – Summarize your literature through AI


Scholarcy is a powerful literature review tool that helps researchers streamline their work. By employing advanced algorithms and natural language processing, it efficiently analyzes and summarizes academic papers, saving researchers valuable time. 

Scholarcy’s ability to extract key information and generate concise summaries makes it an attractive option for scholars looking to quickly grasp the main concepts and findings of multiple papers.

However, it is important to exercise caution when relying solely on Scholarcy. While it provides a useful starting point, engaging with the original research papers is crucial to ensure a comprehensive understanding. 

Scholarcy’s automated summarization may not capture the nuanced interpretations or contextual information presented in the full text. 

Researchers should also be aware that certain types of documents, particularly those with heavy mathematical or technical content, may pose challenges for the tool. 

Despite these considerations, Scholarcy remains a valuable resource for researchers seeking to enhance their literature review process and improve overall efficiency.

Scholarcy offers the following pricing plans:

  • Browser Extension and Flashcards: Free 
  • Personal Library: $9.99
  • Academic Institution License: $8K+


Final Thoughts

In conclusion, conducting a comprehensive literature review is a crucial aspect of any research project, and the availability of reliable and efficient tools can greatly facilitate this process for researchers. This article has explored the top 10 literature review tools that have gained popularity among researchers.

Moreover, the rise of AI-powered tools like Iris.ai and Scite.Ai promises to revolutionize the literature review process by automating various tasks and enhancing research efficiency.

Ultimately, the choice of literature review tool depends on individual preferences and research needs, but the tools presented in this article serve as valuable resources to enhance the quality and productivity of research endeavors. 

Researchers are encouraged to explore and utilize these tools to stay at the forefront of knowledge in their respective fields and contribute to the advancement of science and academia.

Q1. What are literature review tools for researchers?

Literature review tools for researchers are software or online platforms designed to assist researchers in efficiently conducting literature reviews. These tools help researchers find, organize, analyze, and synthesize relevant academic papers and other sources of information.

Q2. What criteria should researchers consider when choosing literature review tools?

When choosing literature review tools, researchers should consider factors such as the tool’s search capabilities, database coverage, user interface, collaboration features, citation management, annotation and highlighting options, integration with reference management software, and data extraction capabilities. 

It’s also essential to consider the tool’s accessibility, cost, and technical support.

Q3. Are there any literature review tools specifically designed for systematic reviews or meta-analyses?

Yes, there are literature review tools that cater specifically to systematic reviews and meta-analyses, which involve a rigorous and structured approach to reviewing existing literature. These tools often provide features tailored to the specific needs of these methodologies, such as:

Screening and eligibility assessment: Systematic review tools typically offer functionalities for screening and assessing the eligibility of studies based on predefined inclusion and exclusion criteria. This streamlines the process of selecting relevant studies for analysis.

Data extraction and quality assessment: These tools often include templates and forms to facilitate data extraction from selected studies. Additionally, they may provide features for assessing the quality and risk of bias in individual studies.

Meta-analysis support: Some literature review tools include statistical analysis features that assist in conducting meta-analyses. These features can help calculate effect sizes, perform statistical tests, and generate forest plots or other visual representations of the meta-analytic results (a minimal sketch of the core pooled-effect computation appears after this list).

Reporting assistance: Many tools provide templates or frameworks for generating systematic review reports, ensuring compliance with established guidelines such as PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses).
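
To make the meta-analysis support described above concrete, here is a minimal sketch of the core computation such tools automate: an inverse-variance (fixed-effect) pooled effect size with a 95% confidence interval. The effect sizes and standard errors below are invented illustration data, not output from any tool reviewed here.

```python
import numpy as np

# Hypothetical per-study effect sizes (e.g., log odds ratios) and standard errors.
effects = np.array([0.30, 0.12, 0.45, 0.21])
std_errs = np.array([0.15, 0.10, 0.20, 0.12])

# Inverse-variance (fixed-effect) weights: more precise studies count more.
weights = 1.0 / std_errs**2
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

# 95% confidence interval for the pooled effect; forest-plot tools draw
# exactly these numbers as the summary diamond at the bottom of the plot.
ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled effect: {pooled:.3f} (95% CI {ci_low:.3f} to {ci_high:.3f})")
```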

Q4. Can literature review tools help with organizing and annotating collected references?

Yes, literature review tools often come equipped with features to help researchers organize and annotate collected references. Some common functionalities include:

Reference management: These tools enable researchers to import references from various sources, such as databases or PDF files, and store them in a central library. They typically allow you to create folders or tags to organize references based on themes or categories.

Annotation capabilities: Many tools provide options for adding annotations, comments, or tags to individual references or specific sections of research articles. This helps researchers keep track of important information, highlight key findings, or note potential connections between different sources.

Full-text search: Literature review tools often offer full-text search functionality, allowing you to search within the content of imported articles or documents. This can be particularly useful when you need to locate specific information or keywords across multiple references.

Integration with citation managers: Some literature review tools integrate with popular citation managers like Zotero, Mendeley, or EndNote, allowing seamless transfer of references and annotations between platforms.

By leveraging these features, researchers can streamline the organization and annotation of their collected references, making it easier to retrieve relevant information during the literature review process.


Maritime shipping ports performance: a systematic literature review

  • Open access
  • Published: 04 June 2024
  • Volume 5, article number 108 (2024)


  • L. Kishore (1)
  • Yogesh P. Pai (2)
  • Bidyut Kumar Ghosh (1)
  • Sheeba Pakkan (3)

Abstract

The maritime sector has evolved as a crucial link in countries' economic development. Given that most trade across regions takes place through maritime transportation, the performance of seaports has been one of the focus areas of research. As the publication volume has grown significantly in the recent past, this study critically examines publications related to the performance of ports, exploring their evolution, identifying article trends, and analyzing citations across publications retrieved with relevant keywords from the Scopus database for the period 1975 to April 2024. Bibliometric and scientometric analysis was done using the R, Python, and VOSviewer software tools. Results indicate the core subject areas as "port efficiency", "data envelopment analysis" (DEA), "port competitiveness", "simulation", "port governance", and "sustainability", with "sustainability" as the most discussed and highly relevant theme to have evolved in the last five years. Bibliometric data analysis of subject areas, yearly trends, top journals, citations and authors, impact, country-wise publication, and thematic clusters is also performed to outline future research directions. The analysis indicates an exponential rise in publications in recent years, with sustainability-related studies gaining importance, and points to a demand for future empirical research on sustainability and smart-port performance. The study's findings are helpful for researchers, academicians, policymakers, and industry practitioners working towards a sustainable maritime port industry.


1 Introduction

Maritime trade and seaports have evolved as an integral part of global economic development, with trade by sea comprising more than 80 percent of the volume of international merchandise trade [ 1 ] and connecting developing countries with developed ones, as well as linking the various modes of global logistics and transportation [ 2 , 3 , 4 , 5 ]. Given the critical role of maritime seaports in the worldwide supply chain, there has been an exponential rise in maritime seaport-related research covering diverse topics and themes. With the burgeoning volume of publications, as recommended by Moral-Muñoz et al. [ 6 ], bibliometric and systematic studies are helpful in scientifically tracking the growth trend of publications and in evaluating the essential characteristics and attributes of research studies, supported by various contemporary statistical analysis software tools. Junquera et al. [ 7 ] highlighted the benefit of bibliometric data analysis in exploring the characteristics and attributes of a study area, such as publication trends, leading authors, ongoing research themes, and country-specific details, all of which are essential to understanding and enhancing the body of knowledge on a topic of interest.

Numerous bibliometric and systematic review studies by multiple authors discuss the synthesis of reviews on port management, port governance, port economics, digitization and new-age automation technology adoption in ports, and port choice selection. In their novel bibliometric study, Pallis et al. [ 8 ] identified significant emerging themes under various categories of port-related research. A large number of bibliometric and systematic review studies were published in the recent decade [ 9 , 10 , 11 , 12 , 13 , 14 , 15 , 16 , 17 , 18 , 19 , 20 , 21 , 22 ] that covered many of these themes and categories, including "green port", "container port terminal", "seaport competitiveness", "port sustainability", "dry port", "port management", "digitization of port operations", and "smart port".

However, a holistic bibliometric data analysis on the "port performance" topic could not be traced in the extant publications. For shipping ports, which act as the backbone of the maritime transportation ecosystem [ 23 , 24 ], competitiveness and performance are considered among the most critical elements [ 25 ]. For the growth and sustenance of global maritime trade, the performance of ports plays a crucial role. Numerous studies [ 5 , 24 , 25 , 26 , 27 ] have demonstrated the positive impact of port performance on the economic development of a country and shown how poorly performing ports result in lower trade volumes, especially in developing or less-developed countries. Given ports' vital role in economic development, through boosted production and consumption in the value chain and increasing global trade, along with the interest of academicians, researchers, and policymakers in the field, the literature on port performance has been growing [ 5 ]. Bibliometric and systematic analysis can give an overview of the studies on port performance and provide a broad understanding of ongoing research work and themes since the first publication, benefiting researchers and practitioners in port management.

Therefore, this study aims to explore the bibliometric data of research articles related to the performance of ports and to identify the trends and ongoing themes of research through bibliometric data analysis. The study also attempts to analyze the evolution of scholarly publications and derive critical insights on port performance-related fields regarding themes and topics, subject areas, leading journals, citations, country-wise contributions, and collaboration, and to outline future research directions. This novel study explores the bibliometric data on "port performance" studies published in the Scopus database and analyzes the data through visualizations to identify trends, establish ongoing research themes, and outline future research.

The following sections provide an overview of the literature on port-related bibliometric studies to trace ongoing research, identify the gap, and frame the research questions; describe the methodology adopted in the current bibliometric survey; present the results and discussion; and draw conclusions along with contributions, implications, and future research directions.

2 Review of literature

The literature review of the extant body of knowledge on port-related bibliometric analysis studies identified many significant contributions in the Scopus database. The keyword search using the combination “TITLE-ABS-KEY ("port" OR "seaport" OR "shipping port" OR "maritime port" OR "maritime shipping port" OR "container port" AND bibliometric OR scientometric) AND (LIMIT-TO (LANGUAGE, "English"))” identified 48 articles. After screening, 25 bibliometric data analyses published since 2010 were shortlisted and reviewed in detail. Among those, eight were published in 2023 and seven in just the first quarter of 2024, indicating the pace at which research is burgeoning in port-related fields. Elsevier is the leading publisher, with about nine publications covering around 30% of the total. Springer and Routledge share the second spot with four publications each. "Maritime Policy and Management" and "Sustainability" were the leading sources, with four and two bibliometric articles published, respectively. Table 1 summarises the literature reviewed, along with sources and citations.

The bibliometric studies on port-related topics commenced with the review article of Pallis et al. [ 8 ], who conducted a bibliometric analysis of port economics and management policy-related topics to unravel the emerging research field based on papers published between 1997 and 2008 in multiple scholarly databases such as ScienceDirect, JSTOR, Google Scholar, and Econlit. They concluded that research on port-related studies is rapidly emerging, that international collaboration is rising, and that the majority of studies were on container port terminals. We could also identify a recent trend of port-related literature focusing on container terminals, discussing innovation and digital automation of container terminal operations and the application of new-age big-data technologies, Artificial Intelligence (AI), Machine Learning (ML) techniques, and Internet-of-Things platforms for productivity improvement and real-time port operations management.

Along the lines of technology development and integration in port management, the study of Li et al. [ 34 ] focused on novel technology-integrated ports: the concept of smart ports incorporating intelligent digital technology and infrastructure comprising cloud computing, big data analytics, Internet of Things (IoT), and AI-based applications for capacity and resource optimization, as a new-age solution to cope with the challenges faced in the dynamic port industry. The most recent publication on the maritime port sector is the bibliometric analysis by Diniz et al. [ 22 ] on the United Nations' Sustainable Development Goals (SDG), wherein they used the IRaMuTeQ and VOSviewer software tools to evaluate trends through a systematic literature review. The years 2023 and 2024 (up to the 20th of April) each saw six articles published, the highest number of bibliometric-related publications since 2010. The highest citation count of 177 was received by the study of Davarzani et al. [ 9 ] on "green ports," followed by 122 citations for the study of Pallis et al. [ 8 ] on port economics and management.

Many studies [ 42 , 43 , 44 , 45 ] have pointed out the dynamic nature of the maritime business. Amidst this dynamism in the port sector, as highlighted by Mantry and Ghatak [ 46 ], a country's economic development is impacted by poor port performance. As per the United Nations Conference on Trade and Development (UNCTAD) [ 1 , 47 ], more than 80% of international trade volume is handled through maritime transportation. Studies [ 23 , 48 ] have emphasized the significance of ports in the economic growth of a country. Given ports' vital contributions to economic development and global trade, along with the increasing interest of academicians, researchers, and policymakers, the literature on port performance has grown exponentially, especially in the last decade.

OConnor et al. [ 5 ] systematically reviewed port performance-related studies to identify performance dimensions and discussed port performance as a multi-dimensional construct. However, the study did not address the other characteristics and attributes that drive and affect the performance of ports. Notably, Wang et al. [ 35 ] were the first to use the WoS database exclusively for collecting bibliometric data, covering the period 2000 to 2020, and analyzed the data using the CiteSpace software tool. Their study focused on visual mapping of bibliometric data to uncover insights into publication and author trends, along with affiliations and countries, and on keyword analysis to derive the most frequently discussed topics and themes. Though future research directions were indicated and many themes were highlighted, more is needed on the performance of ports and the variables that enhance it. The scientometric analysis and computational text analysis by Sung-Woo et al. [ 49 ] were specific to port performance-related bibliometrics; however, they focused mainly on port and shipping, along with supply chain logistics-related, high-quality publications between 2000 and 2018 in journals listed in the Science Citation Index (SCI), Science Citation Index Expanded (SCIE), and Social Science Citation Index (SSCI) available in the Scopus and WoS scholarly databases only. Since 1,947 articles were retrieved, they adopted topic modeling using a text-mining technique called "Latent Dirichlet Allocation" (LDA) to uncover significant research topics.

The qualitative study by Somensi et al. [ 50 ] analyzed the bibliographical characteristics of port performance evaluation studies published during 2000–2016 and discussed management practices and organizational performance aspects. Bibliographical data comprising 3112 articles was collected from popular scholarly databases, and a series of keywords was used to search for performance-, evaluation-, and management-specific articles. Bibliographical portfolio selection and analysis were done using the Knowledge Development Process-Constructivist (ProKnow-C) tool developed at the Federal University of Santa Catarina. At the end of the portfolio selection procedure, they selected 37 articles for further author, journal, topic, and country analysis. As future research directions, they suggested extending the analysis period and conducting a more in-depth systematic analysis.

To address the identified gap, bibliometric data analysis can be adopted to explore the hidden characteristics and attributes of the study area, such as publication trends, leading authors, ongoing research themes, and country-specific details, offering deep insights into the continuing trend and the characteristics associated with different themes. Therefore, a holistic, bibliometric-data-based exploratory study on "port performance" can give an overview of all the studies on port performance to date and demonstrate a broad understanding of ongoing research work and themes since the first publication. Further, previous studies have not discussed co-occurrence or co-citation in articles published on port performance.

Against this backdrop, and taking a cue from the shortcomings identified through the literature review, this study focuses on the following research questions:

RQ1. What is the trend and evolution of research publications in maritime port performance?

RQ2. What are the dynamics of journals publishing articles, and of citations of articles, related to port performance?

RQ3. Which countries have given the most importance to port performance-related studies?

RQ4. How are citations, authorship, and collaborations shaping up?

RQ5. What are the new and emerging topics and themes related to port performance studies?

3 Methodology

Akbari et al. [ 51 ] discuss how bibliometric analysis has recently received greater importance and favour bibliometric methods over traditional methods due to the benefits associated with conducting bibliometric analysis. The authors adopted an exploratory research approach, analyzing bibliometric data downloaded from the popular scholarly database Scopus to assess the trend and existing scenario of port performance-related studies, and then analyzing and interpreting the data visualized in various plots and diagrams using relevant software tools. Scopus is one of the leading scholarly databases, with increasing citable articles and multi-disciplinary publications, providing quick and authoritative access to high-quality, comprehensive, and reliable content across multi-disciplinary fields [ 52 , 53 , 54 ].

In the first phase of the bibliometric study, we started with a search for scholarly articles in the Scopus database using an initial set of keywords and Boolean operator combinations to retrieve all potentially relevant publications. After multiple trials, the keyword combination was identified as "port performance" OR "performance of port" OR "performance of the port." With the identified keywords and Boolean operators, the search in the Scopus database was conducted using the combination “ALL (("maritime port" OR "sea-port" OR "sea? port" OR "seaport" OR "shipping port" OR "container port" OR "container terminal port") AND ("port performance" OR "performance of port" OR "performance of the port" OR "performance of the shipping port" OR "performance of the maritime port" OR "performance of the seaport" OR "performance of shipping port" OR "performance of seaport")) AND (EXCLUDE (SUBJAREA, "PHAR") OR EXCLUDE (SUBJAREA, "NURS") OR EXCLUDE (SUBJAREA, "VETE") OR EXCLUDE (SUBJAREA, "NEUR") OR EXCLUDE (SUBJAREA, "MEDI") OR EXCLUDE (SUBJAREA, "CHEM") OR EXCLUDE (SUBJAREA, "BIOC") OR EXCLUDE (SUBJAREA, "PHYS")) AND (EXCLUDE (DOCTYPE, "no") OR EXCLUDE (DOCTYPE, "er") OR EXCLUDE (DOCTYPE, "tb") OR EXCLUDE (DOCTYPE, "ed")) AND (LIMIT-TO (LANGUAGE, "English")).” The scope of the study was limited to research articles, reviews, books, and conference publications available in English. The search was conducted on the 20th of April, 2024.
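
As an illustration only, such a query can also be issued programmatically. The sketch below uses the pybliometrics package (an assumption: the authors do not state how they ran the search) with a heavily abbreviated stand-in for the full Boolean string above; it requires a Scopus API key configured on first run.

```python
from pybliometrics.scopus import ScopusSearch  # prompts for a Scopus API key on first use

# Abbreviated stand-in for the full Boolean query quoted in the text.
query = (
    'TITLE-ABS-KEY(("seaport" OR "container port" OR "maritime port") '
    'AND ("port performance" OR "performance of port")) '
    'AND LANGUAGE(english)'
)

search = ScopusSearch(query)
print("Documents retrieved:", len(search.results or []))

# Each result is a namedtuple with fields such as title, coverDate, and doi.
for doc in (search.results or [])[:5]:
    print(doc.coverDate, doc.title)
```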

In the second phase, the filtered documents were downloaded from the Scopus database in CSV and Bib file formats for further bibliometric data analysis. The Scopus database also provides a quick, ready-to-use results analysis option with basic diagrams representing documents per year, by source, author, affiliation, country, type, subject area, and funding agency; these results are also available for download in CSV format for customized data visualization and extended analysis. In the third phase, the downloaded datasets were analyzed and interpreted, leading to the discussion and conclusions in the fourth phase. Further, scientometric analysis was performed based on co-authorship and co-occurrence using the VOSviewer software tool [ 55 ]. VOSviewer is acknowledged as a scientific tool for data visualization, used to perform exploratory data analysis on various aspects of publication, such as keywords, countries of research activity, and their density [ 56 ]. Bibliometric data related to subject area, yearly trend, journal, author, citation, and country-wise publication was visualized and analyzed using the open-source R and Python software tools with relevant libraries and packages. The "bibliometrix" package in R was the primary tool for importing the raw bibliometric data and developing visualizations, supported by Python for data cleaning before further analysis.
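
The Python data-cleaning step mentioned above might look like the following pandas sketch. The column labels follow Scopus's standard CSV export ("Year", "Cited by", "Author Keywords"); the file name is a placeholder.

```python
import pandas as pd

# Placeholder file name for the Scopus CSV export described in the text.
df = pd.read_csv("scopus_export.csv")

# Drop records without a publication year and coerce numeric columns.
df = df.dropna(subset=["Year"])
df["Year"] = df["Year"].astype(int)
df["Cited by"] = df["Cited by"].fillna(0).astype(int)

# Scopus separates author keywords with "; "; normalize them to lowercase lists.
df["Author Keywords"] = df["Author Keywords"].fillna("").str.lower().str.split("; ")

# Quick descriptive checks in the spirit of the later Table 2.
print("Documents:", len(df))
print("Mean citations per document:", round(df["Cited by"].mean(), 2))
print(df.groupby("Year").size().tail())  # recent publication trend
```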

The themes developing over time were analyzed with three cutting points in 2008, 2014, and 2020, giving an equal distribution from 2008 to April 2024. The first cutting point of 2008 was fixed because the number of publications saw an upswing after 2007 in the preliminary analysis, making it the milestone from which to start further analysis. Following the benchmark values adopted by Cobo et al. [ 57 ] and Wang et al. [ 58 ] in their bibliometric analyses, the word count was left at the default of 200, with a minimum cluster frequency of 5 per thousand documents, 4 labels per cluster for optimal mapping, and a minimum weight index of 0.1; thematic analysis used the Louvain clustering algorithm, since past studies [ 59 , 60 ] have proven the Louvain algorithm's consistent performance and better modularity results compared with other clustering approaches. Informative trends and patterns identified through the analysis were discussed, and conclusions were outlined, leading to future research directions and highlighting emerging focus fields in port performance-related studies. A co-occurrence analysis by country was performed to identify the density of research activities in different countries. In the co-authorship analysis by country, the minimum number of documents was set to 75 to obtain the overlay visualization of the top 20 countries, and the country-specific citation minimum threshold was zero, considering the score of the average number of publications per year. Further, the co-occurrence of keywords was analyzed to create the network using the Louvain algorithm, limiting the number of nodes to 30 and the minimum number of edges to 0.
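
A minimal sketch of the keyword co-occurrence clustering described here, using the Louvain implementation built into networkx (version 2.8 or later). The three tiny keyword lists are placeholders for the per-document keyword lists extracted from the Scopus export.

```python
from itertools import combinations
import networkx as nx

# Placeholder input: one list of author keywords per document.
keyword_lists = [
    ["port performance", "dea", "efficiency"],
    ["port performance", "sustainability"],
    ["dea", "efficiency", "container terminal"],
]

# Build a weighted co-occurrence graph: one edge per keyword pair that
# appears in the same document, weighted by how often the pair co-occurs.
G = nx.Graph()
for kws in keyword_lists:
    for a, b in combinations(sorted(set(kws)), 2):
        weight = G[a][b]["weight"] + 1 if G.has_edge(a, b) else 1
        G.add_edge(a, b, weight=weight)

# Louvain community detection, the algorithm named in the text.
communities = nx.community.louvain_communities(G, weight="weight", seed=42)
for i, cluster in enumerate(communities, 1):
    print(f"Cluster {i}: {sorted(cluster)}")
```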

4 Results and analysis

In this section, the visualizations of bibliometric data based on citation metrics, co-citation, and co-occurrences are discussed, along with bibliometric data analysis comprising the publication trend, subject-area highlights, country of research, author analysis, collaboration, and the journals publishing the relevant articles, to derive meaningful insights.

4.1 Descriptive analysis

The keyword search in the database identified 2245 articles published collectively in 691 sources of scientific publications from 1979 till April 20th, 2024. Of the 4189 authors who contributed to publications in the port performance field, close to 29% had international co-authorship, and 274 had single-authored publications. The annual growth rate was 10.62%, and the average citation count was 20.24 per document. The descriptive summary of the bibliometric data is given in Table 2.
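
The 10.62% figure is a compound annual growth rate of the kind bibliometrix reports. The sketch below shows the arithmetic; the publication counts are illustrative values chosen so the formula reproduces the reported rate, not the study's actual data.

```python
# Compound annual growth rate of publications between the first and last year.
first_year, last_year = 1979, 2023
n_first, n_last = 1, 85  # illustrative publication counts, not the study's data

years = last_year - first_year
cagr = (n_last / n_first) ** (1 / years) - 1
print(f"Annual growth rate: {cagr:.2%}")  # ~10.62%, matching the reported value
```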

4.2 Trend of publications

Descriptive analysis of the bibliometric data shows a phenomenal annual growth rate of 10.62% in research publications related to port performance. The trend of published articles, along with the mean total citations per year from the first article published in 1979 till 20th April 2024, is shown in Fig.  1 . There has been a spike in the number of publications since 2007, as indicated in the figure, and the number of publications has exponentially increased after that, suggesting that port performance is one of the most focused research areas in the recent decade.

Figure 1: Publication trend and citations from 1975 till 2023

Ahrens' [ 61 ] novel research on the engineering performance of ports outlined the importance of management training through audio-visual techniques for improving port performance in developing countries. The trend of core engineering-related performance studies continued until Thomas [ 62 ] discussed the strategic management of ports and their development. Roll et al. [ 63 ] introduced the application of the DEA methodology to port performance comparison with a sample of 20 selected ports. Later, a noticeable surge in port performance studies started after Lin et al. [ 64 ] applied the DEA approach to evaluate the operational performance of major container ports in the Asia-Pacific region based on their operating efficiency.
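
Since DEA recurs throughout this literature as the dominant method for port efficiency, a minimal sketch of the input-oriented CCR model it rests on may be useful. This is an illustrative formulation solved with scipy, not code from any of the cited studies, and the port input/output figures are invented.

```python
import numpy as np
from scipy.optimize import linprog

# Invented illustration data: rows are ports (DMUs). Inputs might be numbers
# of berths and cranes; the single output, container throughput.
X = np.array([[4.0, 6.0], [6.0, 8.0], [5.0, 5.0], [8.0, 10.0]])  # inputs
Y = np.array([[100.0], [120.0], [110.0], [130.0]])               # outputs

def ccr_efficiency(o, X, Y):
    """Input-oriented CCR efficiency of DMU o: minimize theta subject to
    sum_j lam_j * x_j <= theta * x_o  and  sum_j lam_j * y_j >= y_o."""
    n = X.shape[0]
    c = np.r_[1.0, np.zeros(n)]                # decision vars: [theta, lam_1..lam_n]
    A_in = np.c_[-X[o], X.T]                   # inputs:  X.T @ lam - theta * x_o <= 0
    A_out = np.c_[np.zeros(Y.shape[1]), -Y.T]  # outputs: -Y.T @ lam <= -y_o
    A_ub = np.r_[A_in, A_out]
    b_ub = np.r_[np.zeros(X.shape[1]), -Y[o]]
    bounds = [(None, None)] + [(0, None)] * n  # theta free, lambdas nonnegative
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.fun  # efficiency score in (0, 1]; 1 means on the frontier

for o in range(X.shape[0]):
    print(f"Port {o + 1}: efficiency = {ccr_efficiency(o, X, Y):.3f}")
```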

4.3 Subject area of publication analysis

The percentage share of the articles published in different subject areas of research is shown in Fig. 2. "Social Sciences," "Engineering," and "Business, Management, and Accounting" together contribute more than 50% of the overall publications, followed by "Environmental Science," "Decision Sciences," "Economics, Econometrics and Finance," "Computer Science," and so on; "Business, Management, and Accounting" alone accounts for around 12%. Other areas include "Earth and Planetary Sciences," "Energy," "Mathematics," "Agricultural and Biological Sciences," "Arts and Humanities," "Materials Science," "Multidisciplinary," "Chemical Engineering," and "Psychology."

Figure 2: Subject-wise publication share

4.4 Journal of publication analysis

The distribution of articles across journals that have published more than 30 articles is shown in Fig. 3. The "Maritime Policy and Management" journal is the leading source, with about 156 publications, followed by the "Maritime Economics and Logistics" and "Sustainability" journals, which together contribute 5% of the total publications to date. "Research in Transportation Business and Management," "Asian Journal of Shipping and Logistics," and "International Journal of Shipping and Transport Logistics" are closely competing, each with only one-third of the publications of "Maritime Policy and Management."

Figure 3: Top publishers with more than 30 publications

To better understand the growth of sources, source dynamics were analyzed using a trend line, as shown in Fig. 4. Accordingly, "Maritime Policy and Management", "Maritime Economics and Logistics", "Sustainability", "Research in Transportation Business and Management", "Asian Journal of Shipping and Logistics", "International Journal of Shipping and Logistics", "Ocean and Coastal Management", and "Transport Policy" are the leaders in total publications, in that order. Phenomenal growth was achieved by the "Sustainability" journal, which was at the bottom in 2007 and has shown exponential growth since then, reaching third position in annual publication growth and overtaking the "Research in Transportation Business and Management" journal.

Figure 4: The trend of annual publications in top sources

4.5 Author publication and citation analysis

The leading authors, ranked by number of publications and citations, are shown in Fig. 5. Lam JSL occupies the top position, with 27 publications, commencing with a first publication in 2006. Meanwhile, the author with the highest citations is Cullinane K, whose first publication appeared in 2002 and who has contributed 21 publications in the last 20 years. Six of his publications in 2006 alone have received 822 citations so far.

Figure 5: Number of Publications and Citation to Publication ratio for top authors

The authors' collaboration network diagram is shown in Fig. 6. Some top authors, especially Cullinane, Pallis, Lam, Chen J, Ducruet, and Song, collaborate extensively, leading to higher-quality publications with increasing citations.

Figure 6: Author's network diagram

4.6 Country of research analysis

The distribution of articles published across the top 15 countries, based on publications and on citations, is shown in Figs. 7 and 8, respectively. China has the highest contribution, close to 24%, with 571 publications, followed by the US and UK, with 205 (8.6%) and 110 (4.6%) publications, respectively. Somensi et al. [ 50 ] also highlighted China as the highest contributor of port performance-related studies. India-centric publications are merely 3.47%, about 15% of China's count. China is again the leader in citations with 8116, followed by the US and UK with 5189 and 4819 citations, respectively. However, Spain, with 1843 citations from 93 articles, overtook Italy, which has 1628 citations from 94 publications.

Figure 7: Country-wise publication

Figure 8: Country-wise citation

The scatter plot in Fig. 9 shows China, the USA, and Korea leading mainly with single-country publications, compared with Singapore and the UK, which have more multi-country collaborative publications. Among the top 10 countries in collaboration terms, India has a higher share of single-country publications and only a few multi-country collaborative ones.

Figure 9: Scatter plot of single and multi-country publications

4.7 Co-authorship and country-collaboration analysis

We considered publications where the minimum number of documents per country was two and the maximum number of countries per publication was 25. Among the 105 countries with publications, 77 meet this threshold. When calculating the total strength of the co-authorship links with other countries, only the countries with the greatest total link strength were selected. The visualization of the country-wise co-authorship and publication network in Fig. 10 shows that China has the highest density compared to other countries, indicating intense research on ports and port performance.

Figure 10: Country-wise overlay of co-authorship
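
As a side note on the metric used here: VOSviewer's "total link strength" for a country is simply the sum of the weights of its co-authorship links. A small sketch with made-up country sets per paper illustrates the bookkeeping.

```python
from collections import Counter
from itertools import combinations

# Made-up input: the set of author countries for each publication.
papers = [
    {"China", "USA"},
    {"China", "UK", "Singapore"},
    {"UK", "Singapore"},
    {"China", "USA"},
]

# Link weight = number of papers a pair of countries co-authors together.
links = Counter()
for countries in papers:
    for a, b in combinations(sorted(countries), 2):
        links[(a, b)] += 1

# Total link strength per country = sum of the weights of all its links.
strength = Counter()
for (a, b), w in links.items():
    strength[a] += w
    strength[b] += w

for country, s in strength.most_common():
    print(country, s)
```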

4.8 Impact metrics analysis

The “Research Metrics Guidebook” provides a comprehensive list of metrics to assess the research impact at various levels, including journal, article, author, and affiliated institutional level productivity, citation, and collaboration based on scholarly content in the Scopus database [ 65 ]. Table 3 shows the citation impact metrics since 2018.

The "Field-Weighted Citation Impact" (FWCI) is a comparative metric that relates the citations received by a document to the citations expected for similar documents; it is a normalized bibliometric indicator that factors in the type of document, subject area, and publication period [ 66 ]. The FWCI has been fluctuating; overall, it stands at 1.12, indicating an impact 12 percent above the global average. A further break-up analysis of authorship impact, shown in Table 4, suggests that internationally collaborated publications have an impact more than 50 percent above the worldwide average. Industry-institute collaboration has significantly increased in 2024. "Outputs in Top Citation Percentiles" shows that 11.5 percent of the publications are in the top 10 percent. International collaboration has remained close to 30 percent over the years. The top fifteen country impact metrics, shown in Table 5, indicate China leading with the highest number of views and citations, along with an FWCI of 1.85, i.e., 85 percent above the global average. Spain, India, and Indonesia are 15, 18, and 36 percent below the global average, respectively.
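
The FWCI reading above reduces to a simple ratio. A sketch of the arithmetic follows; the citation numbers are invented so that the result matches the reported 1.12.

```python
def fwci(citations_received: float, expected_citations: float) -> float:
    """Field-Weighted Citation Impact: citations received divided by the
    citations expected for documents of the same field, document type,
    and publication year. A value of 1.0 is the global average."""
    return citations_received / expected_citations

# Invented numbers: 28 citations against an expected 25 gives 1.12,
# i.e., 12 percent above the global average, as reported in the text.
print(fwci(28, 25))  # -> 1.12
```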

4.9 Co-occurrence analysis

The co-occurrence of keywords was analyzed for keywords with a minimum of 40 occurrences to create a cluster-based density visualization weighted by occurrences, as shown in Fig. 11. The core subject areas with the highest occurrences in port performance-related studies are "Data Envelopment Analysis", "efficiency", "simulation", "container terminal", "port competitiveness", "port governance", "port management", and "sustainability". DEA and efficiency are the most weighted labels in performance-related studies, with counts of 55 and 53, respectively. They were followed by simulation of the performance of seaports and container terminals, and then by the constructs related to performance, such as competitiveness, governance, management, and sustainability practices. In the computational text analysis of Sung-Woo et al. [ 49 ], the LDA output likewise indicated DEA methodology as the most weighted term.

Figure 11: Density visualization of co-occurrences using VOSviewer

4.10 Keyword analysis

The scatter plot in Fig. 12, with size indicating the frequency of the top trending words, shows that the words with the highest frequency in the last ten years are "port operations", followed by "container terminal", "Data Envelopment Analysis", "efficiency", and "sustainability", with counts of 341, 168, 155, 136, and 92, respectively. "Automation" has been the trending word of the most recent years, preceded by the COVID-19 keyword, followed by performance, port automation, and economic development.

Figure 12: Top Trending words

4.11 Thematic evolution analysis

Thematic evolution using a longitudinal map (alluvial graph) divides the timespan of the research field into slices prescribed based on developments in the field. It illustrates the continuation and discontinuation of identified themes, thus explaining the conceptual structure of the field of interest [ 67 , 68 ]. The thematic evolution shown in Fig. 13 demonstrates the evolution of themes with three cutting points in 2008, 2014, and 2020. The year 2008 was set as the first cutting point because the publication trend showed an exponential increase after 2007; the remaining cutting points were then set at equal intervals to assess the thematic evolution. The word count was left at the default of 200, with a minimum cluster frequency of 5, 4 labels per cluster, a minimum weight index of 0.1, and thematic analysis using the Louvain algorithm.

Figure 13: Thematic evolution since 1979 using the R bibliometrix package

Callon et al. [ 69 ] developed the co-word analysis technique based on the centrality and density matrix to analyze and explain word interactions in a research field over time. According to Cobo et al. [ 57 ], the thematic map comprises four quadrants on which themes are placed based on their centrality and density over the years. Centrality demonstrates a theme's importance or relevance within the given study area, whereas density represents the development of the theme over the selected timespan. The bubbles in the graph indicate the size of the occurrence within each cluster, comprising interacting words that demonstrate the co-occurrence network. Each quadrant has its own characteristics based on the degree of centrality and density. Motor themes are of high importance, with strong development happening in the field. Niche themes are by and large isolated: highly developed but of negligible, low, or limited importance. Emerging or declining themes are of low significance and low density and have yet to be vigorously developed. Basic themes are characterized by high importance and relevance but low density; they are crucial for research since those topics still need to be fully developed and are therefore promising avenues for future research [ 57 , 69 , 70 , 71 ]. The most recent, fourth stage of the thematic map, covering 2019 to 2024, is shown in Fig. 14.

Figure 14: Thematic map of the 4th stage from 2018 till date using the R bibliometrix package
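
The quadrant logic described above can be made concrete with a short sketch: classify each theme by whether its centrality and density fall above or below the medians. The theme scores below are invented for illustration.

```python
from statistics import median

# Invented (centrality, density) scores for a few themes from the text.
themes = {
    "sustainability": (0.9, 0.8),
    "blockchain": (0.2, 0.9),
    "smart port": (0.7, 0.3),
    "covid-19": (0.3, 0.2),
}

c_med = median(c for c, _ in themes.values())
d_med = median(d for _, d in themes.values())

def quadrant(centrality: float, density: float) -> str:
    if centrality >= c_med and density >= d_med:
        return "motor theme (important and well developed)"
    if density >= d_med:
        return "niche theme (developed but isolated)"
    if centrality >= c_med:
        return "basic theme (important but underdeveloped)"
    return "emerging or declining theme"

for name, (c, d) in themes.items():
    print(f"{name}: {quadrant(c, d)}")
```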

The thematic map resulted in nine clusters in the 4th stage, as tabulated in Table  6 , summarising the themes and the related topics associated with each cluster.

Among the topics mapped, the theme with the highest centrality, complemented by high density, is "sustainability". To gain more insight into the identified "sustainability" theme, the trend of the sustainability keyword over the previous two decades was visualized, as shown in Fig. 15. The literature review also determined that the surge in the use of sustainability terms in research started after the pioneering work of Yap et al. [ 72 ], who initiated focused discussions on sustainability-related topics, and usage has grown exponentially ever since.

Figure 15: Frequency Trend of Sustainability keyword

5 Discussions

This study focused on a bibliometric analysis of port performance-related studies based on the bibliometric data available in the Scopus database. This article critically examined bibliometric data of studies related to the performance of ports to explore the evolution and trends of articles published from 1975 till April 20th, 2024, the leading authors, top journals, impact metrics, and leading countries in terms of publications, and thereby highlight research directions for port performance studies. The publication trend makes evident a significant spike in the number of publications after 2007, followed by an exponential increase, in concurrence with the findings of Pallis et al. [ 8 ], indicating that port performance is one of the most focused research areas of recent times, with an annual growth rate above 10 percent. OConnor et al. [ 5 ] also highlighted the growing desire of policymakers and stakeholders for port performance evaluation and policy development that keeps the interests of the public in mind. The average citation count was over 20 per document; however, citations fluctuated with irregular peaking and flattening patterns. The timespan from 2000 to 2007 saw the highest number of citations; after that, citation rates have been moderate but greater than the rate of publications, except for the last two years, where citations are yet to pick up for the recently published articles. A review of the publications shows that the articles are predominantly port efficiency-related studies, with many articles focusing on applying the DEA methodology to port efficiency and port performance evaluation. Other studies [ 49 , 50 ] also found that DEA-based studies have the highest number of publications and citations.

Among the various pre-defined subject areas of port performance-related publications in Scopus, "Business, Management, and Accounting" contributes close to 12 percent, about half of the contribution of the "Social Sciences" subject area, indicating "Business, Management, and Accounting" as a highly promising subject area for focused contributions to the port performance field. Somensi et al. [ 50 ] also highlighted the need to enhance research on business management. Among the sources of publications, "Maritime Policy & Management" leads in publications, with close to 7 percent of the total. Our findings concur with the observations of Somensi et al. [ 50 ], who found similar results in their systematic literature review on port performance. In their content analysis study, Notteboom et al. [ 49 ] highlighted 267 articles published in "Maritime Policy & Management" and the journal's leading, continuous contribution to port-related studies. Therefore, "Maritime Policy & Management" should be one of the primary journals researchers subscribe to for notifications and regularly track for updates on port research. The publications in "Maritime Policy & Management" roughly equal the combined publications in the "Maritime Economics and Logistics" and "Sustainability" journals. In the source dynamics analysis, "Maritime Economics and Logistics" and "Sustainability" were identified as the sources with the highest growth rates for publications related to port performance. These two journals were at the bottom in 2000 and have since shown exponential growth, especially "Maritime Economics and Logistics", which has reached second place in annual publication growth, closely followed by the exponentially growing "Sustainability" journal, which has been gaining momentum since 2015 and is growing steadily compared to the other journals trailing "Maritime Policy & Management". The findings of Zhou et al. [ 73 ] also confirm that the "Sustainability" and "Maritime Policy & Management" journals are the leading journals in port-related studies.

Among the contributing authors, Lam JSL, Notteboom, Song DW, Pallis, Ng AKY, Yang Z, and Ducruet C are a few of the critical leading authors with the highest contributions and co-citations in port performance-related studies. Zhou et al. [ 73 ] offer a fascinating insight into the changing pattern of research hot spots in port-related studies and their associated dynamics. The study of Wang et al. [ 35 ] also identified JSL Lam as the most productive contributor, with the highest number of publications. The collaboration network shows collaboration concentrated in pockets within the US, the UK, parts of Europe, and South Korea, taking international collaboration to a 30% share. Although the US is ahead of the UK in publications, the normalized FWCI for the UK, at 1.75, is 30 percent higher than that of the US. It is worth highlighting that, in addition to multi-country author collaboration, industry-institute collaboration is also improving and lifting the impact further. Analyzing the countries of publication, with about 20% of contributions, China is the only developing economy among the leading countries in publications and citations, followed by the US, UK, Korea, Spain, and Italy. Regarding citations, however, the UK has dominated other countries with the highest citations, followed by China and the US. This finding confirms past conclusions [ 35 , 73 ], where China was identified as the leading country in number of publications, followed by the US and South Korea.

Density visualization of co-occurrences categorized the keywords into three clusters centered on the port operations, container terminal, and efficiency topics. The port operations-centered cluster had related keywords: performance assessment, competitiveness, sustainability, sustainable port development, decision-making, and policy. The container terminal-centered cluster covered container cargo handling and computer simulation aspects. Lastly, the port efficiency-centered cluster covered DEA, benchmarking, and productivity aspects. Among the top ten labels by occurrence frequency, DEA and efficiency are the most weighted, which aligns with past conclusions [ 49 , 50 ]. An overview of the existing literature on port performance research also shows that the studies were predominantly based on applying the DEA methodology to compute port efficiency and on simulation modeling, followed by critical dimensions such as port competitiveness, port performance, and sustainability, along with port governance and strategic management.

Remarkably, the thematic evolution shows the absence of the DEA methodology theme after the cutting point in 2018, where it peaked before being taken over by the sustainability theme. The sustainability theme started to evolve in 2013, far below DEA, and attained the top position from 2019 to 2024. The DEA theme, which had evolved since 2008, has been superseded by the port performance theme since 2019. The thematic analysis also sheds light on themes revolving around the port hinterland, which evolved through the DEA methodology and, since 2017, into sustainability-related themes along with port performance. Container terminal and port governance are themes that have persisted since 2008. The "COVID-19 pandemic" and "automatic identification systems" (AIS) are the latest themes to have explicitly evolved. As the entire world faced the impact of the global pandemic, the port industry was not spared, and many studies [ 74 , 75 , 76 , 77 , 78 ] have evaluated the impact of COVID-19 on the port sector. Alongside this, most industries adopted automation technologies to overcome the challenges and effects of the pandemic; this phenomenon is confirmed by the top trending words "automation" and "technology adoption" in 2023 and 2024. The application of robotics and other AIS in port operations became prominent, leading to many studies [ 79 , 80 , 81 , 82 ] exploring innovative applications and opportunities for automation and digital technology adoption. The keyword analysis likewise indicates that technology adoption and automation have been highly discussed topics in recent times.

Yang et al. [81] also highlighted the increasing popularity of AIS in their review of AIS and big data in maritime research. Ashrafi et al. [83] discussed the design of games to address various container terminal problems, proposing the use of virtual and augmented reality and Global Positioning System (GPS) technologies through simulation games to train and develop the professionals who handle port planning, operations, and management in the dynamic port industry. Meanwhile, Lee et al. [84] underlined the crucial role of AI and computer vision technology in responding to industry dynamics, focusing on intelligent traffic management and the optimization of parking space and container operations in maritime ports. Applications of AIS and IoT through the “Smart Port” concept were detailed by Rajabi et al. [82] as a way to overcome the operational challenges posed by the dynamic environment within which ports operate. In parallel with the Industry 4.0 framework, new-age automation and robotics applications in seaport operations were conceptualized under the Shipping 4.0 framework in the study of Muhammad et al. [79].

“Sustainability” was identified as the most trending word, with the highest frequency over the last five years, echoing Sung-Woo et al. [49], who highlighted the term as a core focus area in port-related research since 2010. Their review, which applied computational text analysis to port-related articles in both the Scopus and WoS databases published in international journals indexed in the Science Citation Index and the Korea Citation Index, also highlighted the need for sustainable port development and a stronger focus on environmental sustainability alongside the development of port competitiveness. A similar finding was underscored by Wagner (2019) in a bibliometric study of port cities. Sustainability is the new theme that has taken center stage, with a high density of publications, high importance, and greater centrality, indicating the relevance of such studies in the current context. Most recent studies have identified sustainability in the maritime industry as a topic of focused interest, as pointed out by Lee et al. [85], ever since the term was used at the first Earth Summit in 1992. It is emphasized as the need of the hour, reinforced by the SDGs of the UN's 2030 Agenda on emission reduction and sustainable maritime operations, which have put significant pressure on maritime seaports by demanding regulatory compliance and sustainability reporting. Sustainability and intelligent ports belong to the motor theme cluster, indicating themes of high importance undergoing active development in the field. AIS and ML fall into the motor-niche cross-over themes, indicating that they are well developed but niche in nature. Similarly, the blockchain technology keyword sits in the niche theme: a highly developed concept, but still isolated from port applications and in its development and growth stage.

We identified some of the major theoretical foundations adopted in port-related studies, such as business model innovation theory, resilience theory, resource dependence theory, and stakeholder theory. Ashrafi et al. [83] adopted stakeholder theory in their systematic review to synthesize the drivers of sustainability in maritime ports, discussing sustainability strategies grouped into clusters based on multi-stakeholder perspectives for integration into port planning and operations in response to changing industry dynamics. Denktas-Sakar et al. [86] adopted resource dependence theory to conceptualize a framework linking supply chain and port stakeholder relationships to the sustainability of ports. Giudice et al. [13] drew on business model innovation theory and resilience theory to position innovative technologies and the digitization of port operations as a solution for the economic, environmental, and social sustainability of ports, in line with Elkington [87], who coined the “Triple Bottom Line” foundations of sustainability. No definition of sustainability has been universally accepted, even though many have attempted one [88, 89, 90, 91]; however, there is a common understanding across schools of thought [88, 89] that sustainability encompasses three frequently cited dimensions, the so-called three pillars, each with its own practices: economic, environmental, and social sustainability practices, which together facilitate and lead towards sustainable development. Recently, Jugović et al. [32] highlighted the emerging concept of a green port governance model for adopting sustainability practices in ports. Many studies [92, 93, 94, 95] define sustainability practices as those that help organizations develop opportunities while managing the three dimensions of organizational processes, economic, environmental, and social, in creating value over the long term.

Furthermore, Bjerkan et al. [96] highlighted the need for more port sustainability-related studies and empirical research on port sustainability. Adding to that, Lim et al. [97] emphasized the importance of sustainable port performance in their systematic review of port sustainability and performance-related studies, flagging that extant studies focus mainly on environmental sustainability and that more weight should be placed on social and economic sustainability in research. Multiple studies [98, 99, 100] have pointed out the uncertainty and lack of clarity among industry professionals, research-oriented consultants, and academicians on how to excel in sustainable performance and on whether sustainability yields significant positive results for performance. This considerable gap must be addressed and indicates the dire need for research incorporating sustainability concepts within frameworks related to port performance. Many studies [14, 49, 101, 102] also acknowledge sustainability as one of the primary factors contributing to port competitiveness and performance enhancement. The report by UNCTAD [1] highlights the expectation that ports consider sustainability alongside port performance through strategic and operational steps, as it has become a priority in overall maritime shipment; the report also suggests that ports operating more sustainably have greater chances of attracting investment and increased support from various port stakeholders. A similar emphasis on the ecological sustainability of green and sustainable ports appears in other studies [103, 104], which likewise call for incorporating sustainability's economic and social dimensions in future research. Lee et al. [85] also outlined the need to explore the methodologies adopted in sustainability-related studies in their proposed future research directions.

Even though Sung-Woo et al. [49] highlighted quality and sustainability as focus areas of port-related research since 2010, the authors of [99] regarded sustainability as an emerging concept that should not be overlooked, and questioned whether practitioners and researchers have sufficient clarity on whether the sustainability concept can yield positive results. Broccardo et al. [100] likewise highlight the crucial gap of missing clarity among academicians and researchers on how excellence in sustainability and performance can be achieved. Further, in their review of tools and technologies, Bjerkan et al. [96] called for empirical, data-based research on port sustainability, given the scarcity of studies on implementation experiences and the associated challenges in port operations. More importantly, empirical data-driven research on sustainability-related topics and port performance will be critical to the growing body of knowledge.

Summarising the above discussions and findings, the insight drawn indicates that “sustainability” is the most highlighted and fastest-evolving theme in recent years in port performance-related studies. The authors of [105] also pointed out the increased focus on and evolution of sustainability in the context of society, industry, and regulatory bodies, in line with Broccardo et al. [100], who highlighted the gap of lacking clarity among academicians and researchers on how excellence in sustainability and performance can be achieved and emphasized addressing this crucial gap. Further, although companies are becoming increasingly involved in sustainability [106], academic researchers have yet to make clear how to excel in sustainability and performance [98], again highlighting a gap that must be addressed. This has resulted in a surge of publications on sustainability-related topics, as highlighted in [107].

6 Conclusion and future research directions

To the best of the researchers' knowledge, this study is novel in its holistic coverage of the span, growth, and thematic evolution of publications in maritime port performance-related studies. A bibliometric exploratory data analysis of articles published from 1979 to April 2024 was conducted to review trends, explore the existing characteristics of port performance-related studies, and identify opportunities for future research. The increasing number of publications related to port performance, especially from 2008 onwards, underscores the strong focus on the performance of ports and related topic areas.

The study contributes in the following ways. Firstly, it adds to the overall understanding of the introduction and growth of port performance-related studies worldwide. Secondly, it provides exploratory data analysis of key characteristics such as keyword occurrence, research subject areas, top publishing journals, and country-wise research output. Lastly, the findings suggest possible future research directions and opportunities. It is also a pioneering study in demonstrating the use of Python and relevant packages for creating advanced visualizations from bibliometric data, alongside the Bibliometrix package of the R programming language.
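
As a flavour of the Python-based visualization mentioned above, the following minimal sketch plots an annual publication trend from a Scopus CSV export; the file name and the "Year" column are assumptions based on Scopus's standard export format, not artifacts from this study.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Assumes a standard Scopus CSV export with one row per document.
df = pd.read_csv("scopus_export.csv")
counts = df["Year"].value_counts().sort_index()

ax = counts.plot(kind="bar")
ax.set_xlabel("Year")
ax.set_ylabel("Publications")
ax.set_title("Port performance publications per year")
plt.tight_layout()
plt.show()
```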

The study and its discussion are bound by limitations, as in most research, and future work can address these shortcomings. Primarily, this study was limited to articles indexed in the Scopus database alone. Even institutional ranking agencies such as Quacquarelli Symonds (QS) and Times Higher Education (THE) adopt indexing metrics from Scopus owing to its popularity and the reliability of its peer-reviewed publications in reputed journals; nevertheless, future research could integrate articles from other databases such as WoS, ProQuest, IEEE, and Google Scholar for a more holistic view of the research available in other leading scholarly databases. An extended scoping review could be conducted to better understand the underlying themes and the antecedents of port performance variables. Studies should also focus on port management, competitiveness, and sustainability constructs to keep pace with the growing number of studies on these important and relevant labels related to sustainable port performance management. As recommended by Jeevan et al. [14], topic modeling, for instance Latent Dirichlet Allocation (LDA), could be explored to uncover specific themes in port performance for further thematic research and for comparing studies between countries (see the sketch below). Further, the digital and technology revolution has given way to innovative technologies and automation systems that aid resource optimization in various port operations and management tasks. The extent of AI and ML applications, supported by big data and blockchain concepts, could also be explored for technology-aided sustainable development.
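
To make the topic-modeling suggestion concrete, here is a minimal LDA sketch using scikit-learn; the toy corpus and the number of topics are placeholders, and a real analysis would use the full set of abstracts with careful preprocessing.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Placeholder abstracts; a real run would use the downloaded bibliometric corpus.
abstracts = [
    "data envelopment analysis of container port efficiency and benchmarking",
    "green port governance and sustainability practices in seaports",
    "automation and digital technology adoption in smart port operations",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(abstracts)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top_terms = [terms[j] for j in topic.argsort()[-5:][::-1]]
    print(f"Topic {i}: {', '.join(top_terms)}")
```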

Despite the limitations mentioned above, the study contributes to the body of knowledge on the evolution and trends of ongoing port performance research, the leading journals, publication citations, the most prolific authors, the co-authorship and co-occurrence networks, the most frequently used labels and topics, and the thematic evolution and subject areas of study. These findings will serve as a significant review and reference for researchers, academicians, and industry practitioners, giving future directions for research on port performance and an increased focus on the sustainability theme.

Data availability

The analysis in this study was based on bibliometric data downloaded from the scholarly database Scopus, limited to research and review articles published in English up to March 2024. The datasets generated and analyzed during the current study are not publicly available, as the bibliometric search data are accessible only to subscribed users, but they are available from the corresponding author on reasonable request.

UNCTAD. International maritime trade and port traffic; 2019. pp. 1–25. https://doi.org/10.18356/bdd3e686-en

Wang Y, Wang N. The role of the port industry in China’s national economy: an input–output analysis. Transp Policy (Oxf). 2019;78:1–7. https://doi.org/10.1016/j.tranpol.2019.03.007 .


Kuo KC, Lu WM, Le MH. Exploring the performance and competitiveness of Vietnam port industry using DEA. Asian J Shipp Logist. 2020. https://doi.org/10.1016/j.ajsl.2020.01.002 .

Thai VV, Yeo GT, Pak JY. Comparative analysis of port competency requirements in Vietnam and Korea. Marit Policy Manag. 2016;43(5):614–29. https://doi.org/10.1080/03088839.2015.1106017 .

O'Connor E, Evers N, Vega A. Port performance from a policy perspective—a systematic review of the literature. J Ocean Coast Econ. 2019. https://doi.org/10.15351/2373-8456.1093 .

Moral-Muñoz JA, Herrera-Viedma E, Santisteban-Espejo A, Cobo MJ. Software tools for conducting bibliometric analysis in science: an up-to-date review. Prof Inf. 2020;29(1):2. https://doi.org/10.3145/epi.2020.ene.03

Junquera B, Mitre M. Value of bibliometric analysis for research policy: a case study of Spanish research into innovation and technology management. Scientometrics. 2007;71(3):443–54. https://doi.org/10.1007/s11192-007-1689-9 .

Pallis AA, Vitsounis TK, de Langen PW. Port Economics, policy and management: review of an emerging research field. Transp Rev. 2010;30(1):115–61. https://doi.org/10.1080/01441640902843208 .

Davarzani H, Fahimnia B, Bell M, Sarkis J. Greening ports and maritime logistics: a review. Transp Res D Transp Environ. 2016;48:473–87. https://doi.org/10.1016/j.trd.2015.07.007 .

Lau Y, Ducruet C, Ng AKY, Fu X. Across the waves: a bibliometric analysis of container shipping research since the 1960s. Marit Policy Manag. 2017;44(6):667–84. https://doi.org/10.1080/03088839.2017.1311425 .

Munim ZH, Saeed N. Seaport competitiveness research: the past, present and future. Int J Shipp Transp Logist. 2019;11(6):533–57. https://doi.org/10.1504/IJSTL.2019.103877 .

Miraj P, Berawi MA, Zagloel TY, Sari M, Saroji G. Research trend of dry port studies: a two-decade systematic review. Marit Policy Manag. 2021;48(4):563–82. https://doi.org/10.1080/03088839.2020.1798031 .

Del Giudice M, Di Vaio A, Hassan R, Palladino R. Digitalization and new technologies for sustainable business models at the ship–port interface: a bibliometric analysis. Marit Policy Manag. 2022;49(3):410–46. https://doi.org/10.1080/03088839.2021.1903600 .

Jeevan J, Selvaduray M, Mohd Salleh NH, Ngah AH, Zailani S. Evolution of Industrial Revolution 4.0 in seaport system: an interpretation from a bibliometric analysis. Austr J Marit Ocean Aff. 2022;14(4):229–50. https://doi.org/10.1080/18366503.2021.1962068 .

Weerasinghe BA, Perera HN, Bai X. Optimizing container terminal operations: a systematic review of operations research applications. Marit Econ Logist. 2023. https://doi.org/10.1057/s41278-023-00254-0 .

Pham TY. A smart port development: systematic literature and bibliometric analysis. Asian J Shipp Logist. 2023;39(3):57–62. https://doi.org/10.1016/j.ajsl.2023.06.005 .

Pallis AA, Kladaki P, Notteboom T. Port economics, management and policy studies (2009–2020): a bibliometric analysis. WMU J Marit Aff. 2023. https://doi.org/10.1007/s13437-023-00325-2 .

Megawati AP, Wayan-Nurjaya I, Machfud, Suseno SH. Bibliometric mapping of research developments on the topic of fishing port management using VOSviewer. IOP Conf Ser Earth Environ Sci. 2023. https://doi.org/10.1088/1755-1315/1266/1/012019 .

Zhang Z, et al. Digitalization and innovation in green ports: a review of current issues, contributions and the way forward in promoting sustainable ports and maritime logistics. Sci Total Environ. 2024;912:169075. https://doi.org/10.1016/j.scitotenv.2023.169075 .


Dragović B, Zrnić N, Dragović A, Tzannatos E, Dulebenets MA. A comprehensive bibliometric analysis and assessment of high-impact research on the berth allocation problem. Ocean Eng. 2024;300:117163. https://doi.org/10.1016/j.oceaneng.2024.117163 .

Beyene ZT, Nadeem SP, Jaleta ME, Kreie A. Research trends in dry port sustainability: a bibliometric analysis. Sustainability (Switzerland). 2024;16(1):263. https://doi.org/10.3390/su16010263 .

Diniz NV, Cunha DR, de Santana Porte M, Oliveira CBM, de Freitas Fernandes F. A bibliometric analysis of sustainable development goals in the maritime industry and port sector. Reg Stud Mar Sci. 2024;69:103319. https://doi.org/10.1016/j.rsma.2023.103319 .

Bottasso A, Conti M, Ferrari C, Merk O, Tei A. The impact of port throughput on local employment: Evidence from a panel of European regions. Transp Policy (Oxf). 2013;27:32–8. https://doi.org/10.1016/j.tranpol.2012.12.001 .

Clark X, Dollar D, Micco A. Port efficiency, maritime transport costs, and bilateral trade. J Dev Econ. 2004;75(2):417–50. https://doi.org/10.1016/j.jdeveco.2004.06.005 .

Tongzon J. Efficiency measurement of selected Australian and other international ports using data envelopment analysis. Transp Res Part A Policy Pract. 2001;35(2):107–22. https://doi.org/10.1016/S0965-8564(99)00049-X .

Jung BM. Economic contribution of ports to the local economies in Korea. Asian J Shipp Logist. 2011;27:1–30. https://doi.org/10.1016/S2092-5212(11)80001-5 .

Munim ZH, Schramm H-J. The impacts of port infrastructure and logistics performance on economic growth: the mediating role of seaborne trade. J Shipp Trade. 2018;3(1):1–19. https://doi.org/10.1186/s41072-018-0027-0 .

Chen L, Xu X, Zhang P, Zhang X. Analysis on port and maritime transport system researches. J Adv Transp. 2018;2018:1–20. https://doi.org/10.1155/2018/6471625 .

Wagner N. Sustainability in port cities—A bibliometric approach. Transp Res Proc. 2019;39:587–96. https://doi.org/10.1016/j.trpro.2019.06.060 .

Xu X, Wang H, Wu Y, Yi W. Bibliometric analysis on port and shipping researches in scope of management science. Asia-Pac J Oper Res. 2021;38(3):21400273. https://doi.org/10.1142/S0217595921400273 .

Jović M, Tijan E, Brčić D, Pucihar A. Digitalization in maritime transport and seaports: bibliometric, content and thematic analysis. J Mar Sci Eng. 2022;10(4):486. https://doi.org/10.3390/jmse10040486 .

Jugović A, Sirotić M, Poletan Jugović T. Identification of pivotal factors influencing the establishment of green port governance models: a bibliometric analysis, content analysis, and DPSIR framework. J Mar Sci Eng. 2022;10(11):1701. https://doi.org/10.3390/jmse10111701 .

Lin C-Y, Dai G-L, Wang S, Fu X-M. The evolution of green port research: a knowledge mapping analysis. Sustainability (Switzerland). 2022;14(19):11857. https://doi.org/10.3390/su141911857 .

Li KX, Li M, Zhu Y, Yuen KF, Tong H, Zhou H. Smart port: a bibliometric review and future research directions. Transp Res E Logist Transp Rev. 2023;174:103098. https://doi.org/10.1016/j.tre.2023.103098 .

Wang S-B, Peng X-H. Knowledge mapping of port logistics in the recent 20 Years: a bibliometric analysis via CiteSpace. Marit Policy Manag. 2023;50(3):335–50. https://doi.org/10.1080/03088839.2021.1990429 .

Adarrab A, Mamad M, Houssaini A, Behlouli M. Systematic review of port choice criteria for evaluating port attractiveness determinants (PART i): Bibliometric and content analyses. Pomorstvo. 2023;37(1):86–105. https://doi.org/10.31217/p.37.1.8 .

Chen S, Ding Q, Liang K. Research on green port based on LDA model and CiteSpace bibliometric analysis. In: Proceedings of SPIE - The International Society for Optical Engineering; 2023. https://doi.org/10.1117/12.2679115

Kuakoski HS, Lermen FH, Graciano P, Lam JSL, Mazzuchetti RN. Marketing, entrepreneurship, and innovation in port management: trends, barriers, and research agenda. Marit Policy Manag. 2023. https://doi.org/10.1080/03088839.2023.2180548 .

Gerrero-Molina M, Vasquez-Suarez Y, Valdes-Mosquera D. Smart, green, and sustainable: unveiling technological trajectories in maritime port operations. IEEE Access. 2024. https://doi.org/10.1109/ACCESS.2024.3376431 .

du Plessis F, Goedhals-Gerber L, van Eeden J. The impacts of climate change on marine cargo insurance of cold chains: a systematic literature review and bibliometric analysis. Transp Res Interdiscip Perspect. 2024;23:101018. https://doi.org/10.1016/j.trip.2024.101018 .

Mojica Herazo JC, Piñeres Castillo AP, Cabello Eras JJ, Salais Fierro TE, Araújo JFC, Gatica G. Bibliometric analysis of energy management and efficiency in the maritime industry and port terminals: Trends. Proc Comput Sci. 2024;231:514–9. https://doi.org/10.1016/j.procs.2023.12.243 .

Pallis AA. Chapter 11 whither port strategy? Theory and practice in conflict. Res Transp Econ. 2007;21:343–82. https://doi.org/10.1016/S0739-8859(07)21011-X .

Le PT, Nguyen H-O. Influence of policy, operational and market conditions on seaport efficiency in newly emerging economies: the case of Vietnam. Appl Econ. 2020;52(43):4698–710. https://doi.org/10.1080/00036846.2020.1740159 .

Li W, Bai X, Yang D, Hou Y. Maritime connectivity, transport infrastructure expansion and economic growth: a global perspective. Transp Res Part A Policy Pract. 2023;170:103609. https://doi.org/10.1016/J.TRA.2023.103609 .

Geng X, Wen Y, Zhou C, Xiao C. Establishment of the sustainable ecosystem for the regional shipping industry based on system dynamics. Sustainability. 2017;9(5):742. https://doi.org/10.3390/su9050742 .

Mantry S, Ghatak RR. Comparing and contrasting competitiveness of major indian and select international ports. Int J Res Finance Market. 2017;7(5):1–19.


UNCTAD. Reflecting on the past, exploring the future. In: 50 Years of Review of Maritime Transport, 1968–2018: reflecting on the past, exploring the future, no. 10; 2018.

Xiu G, Zhao Z. Sustainable development of port economy based on intelligent system dynamics. IEEE Access. 2021;9:14070–7. https://doi.org/10.1109/ACCESS.2021.3051065 .

Sung-Woo L, Sung-Ho S. A review of port research using computational text analysis: a comparison of Korean and International Journals. Asian J Shipp Logist. 2019;35(3):138–46. https://doi.org/10.1016/j.ajsl.2019.09.002 .

Somensi K, Ensslin S, Dutra A, Ensslin L, Ripoll-Feliu VM, Dezem V. Knowledge construction about port performance: evaluation: an international literature analysis. Intang Cap. 2017;13(4):720–44. https://doi.org/10.3926/ic.956 .

Akbari M, Khodayari M, Danesh M, Davari A, Padash H. A bibliometric study of sustainable technology research. Cogent Bus Manag. 2020;7(1):1751906. https://doi.org/10.1080/23311975.2020.1751906 .

Scopus: abstract and citation database. Elsevier. Accessed 04 April 2024. [Online]. https://www.elsevier.com/products/scopus

Bartol T, Budimir G, Dekleva-Smrekar D, Pusnik M, Juznic P. Assessment of research fields in Scopus and Web of Science in the view of national research evaluation in Slovenia. Scientometrics. 2014;98(2):1491–504. https://doi.org/10.1007/s11192-013-1148-8 .

Lasda Bergman EM. Finding citations to social work literature: the relative benefits of using web of science, Scopus, or Google Scholar. J Acad Librariansh. 2012;38(6):370–9. https://doi.org/10.1016/j.acalib.2012.08.002 .

Van Eck NJ, Waltman L. VOSviewer: a computer program for bibliometric mapping. In: 12th International Conference on Scientometrics and Informetrics, ISSI 2009; 2009.

Castillo-Vergara M, Alvarez-Marin A, Placencio-Hidalgo D. A bibliometric analysis of creativity in the field of business economics. J Bus Res. 2018;85:1–9. https://doi.org/10.1016/j.jbusres.2017.12.011 .

Cobo MJ, López-Herrera AG, Herrera-Viedma E, Herrera F. An approach for detecting, quantifying, and visualizing the evolution of a research field: a practical application to the Fuzzy Sets Theory field. J Informetr. 2011;5(1):146–66. https://doi.org/10.1016/j.joi.2010.10.002 .

Wang C, Lv T, Cai R, Xu J, Wang L. Bibliometric analysis of multi-level perspective on sustainability transition research. Sustainability (Switzerland). 2022;14(7):4145. https://doi.org/10.3390/su14074145 .

Singh D, Garg R. Comparative analysis of sequential community detection algorithms based on internal and external quality measure. J Stat Manag Syst. 2020;23(7):1129–46. https://doi.org/10.1080/09720510.2020.1800189 .

Blondel VD, Guillaume JL, Lambiotte R, Lefebvre E. Fast unfolding of communities in large networks. J Stat Mech: Theory Exp. 2008;10:2008. https://doi.org/10.1088/1742-5468/2008/10/P10008 .

Ahrens JP. Irregular wave runup. In: Coastal structures 79, speciality conference on the design construction, maintenance and performance of port and coastal structure, vol. 2; 1979. pp. 998–1041.

Thomas BJ. Port management development—a strategy for the provision of a training capability in developing countries. Marit Policy Manag. 1981;8(3):179–90. https://doi.org/10.1080/03088838100000043 .

Roll Y, Hayuth Y. Port performance comparison applying data envelopment analysis (DEA). Marit Policy Manag. 1993;20(2):153–61. https://doi.org/10.1080/03088839300000025 .

Lin LC, Tseng CC. Operational performance evaluation of major container ports in the Asia-Pacific region. Marit Policy Manag. 2007;34(6):535–51. https://doi.org/10.1080/03088830701695248 .

Elsevier Research Intelligence. Research Metrics Guidebook; 2019.

Purkayastha A, Palmaro E, Falk-Krzesinski HJ, Baas J. Comparison of two article-level, field-independent citation metrics: Field-Weighted Citation Impact (FWCI) and Relative Citation Ratio (RCR). J Informetr. 2019;13(2):635–42. https://doi.org/10.1016/j.joi.2019.03.012 .

Rosvall M, Bergstrom CT. Mapping change in large networks. PLoS ONE. 2010;5(1):e8694. https://doi.org/10.1371/journal.pone.0008694 .

Khare A, Jain R. Mapping the conceptual and intellectual structure of the consumer vulnerability field: a bibliometric analysis. J Bus Res. 2022;150:567–84. https://doi.org/10.1016/j.jbusres.2022.06.039 .

Callon M, Courtial JP, Laville F. Co-word analysis as a tool for describing the network of interactions between basic and technological research: the case of polymer chemistry. Scientometrics. 1991;22(1):155–205. https://doi.org/10.1007/BF02019280 .

Madsen DØ, Berg T, Di Nardo M. Bibliometric Trends in industry 5.0 research: an updated overview. Applied System Innovation. 2023;6(4):63. https://doi.org/10.3390/asi6040063 .

Della Corte V, Del Gaudio G, Sepe F, Sciarelli F. Sustainable tourism in the open innovation realm: a bibliometric analysis. Sustainability (Switzerland). 2019;11(21):6114. https://doi.org/10.3390/su11216114 .

Yap WY, Lam JSL. 80 million-twenty-foot-equivalent-unit container port? Sustainability issues in port and coastal development. Ocean Coast Manag. 2013;71:13–25. https://doi.org/10.1016/j.ocecoaman.2012.10.011 .

Zhou F, Yu K, Xie W, Lyu J, Zheng Z, Zhou S. Digital twin-enabled smart maritime logistics management in the context of industry 5.0. IEEE Access. 2024;12:10920–31. https://doi.org/10.1109/ACCESS.2024.3354838 .

Zhou X, Jing D, Dai L, Wang Y, Guo S, Hu H. Evaluating the economic impacts of COVID-19 pandemic on shipping and port industry: a case study of the port of Shanghai. Ocean Coast Manag. 2022;230:106339. https://doi.org/10.1016/J.OCECOAMAN.2022.106339 .

Michail NA, Melas KD. Shipping markets in turmoil: an analysis of the Covid-19 outbreak and its implications. Transp Res Interdiscip Perspect. 2020;7:100178. https://doi.org/10.1016/J.TRIP.2020.100178 .

Notteboom T, Pallis T, Rodrigue JP. Disruptions and resilience in global container shipping and ports: the COVID-19 pandemic versus the 2008–2009 financial crisis. Maritime Econ Logist. 2021;23(2):179–210. https://doi.org/10.1057/S41278-020-00180-5/FIGURES/14 .

Cullinane K, Haralambides H. Global trends in maritime and port economics: the COVID-19 pandemic and beyond. Maritime Econ Logist. 2021;23(3):369–80. https://doi.org/10.1057/S41278-021-00196-5/FIGURES/1 .

Notteboom TE, Pallis AA, De Langen PW, Papachristou A. Advances in port studies: the contribution of 40 years Maritime Policy & Management. Marit Policy Manag. 2013;40(7):636–53. https://doi.org/10.1080/03088839.2013.851455 .

Muhammad B, Kumar A, Cianca E, Lindgren P. Improving port operations through the application of robotics and automation within the framework of shipping 4.0. In: International Symposium on Wireless Personal Multimedia Communications, WPMC, vol. 2018-November; 2018. pp. 387–392. https://doi.org/10.1109/WPMC.2018.8712998 .

Feng M, Shaw SL, Peng G, Fang Z. Time efficiency assessment of ship movements in maritime ports: A case study of two ports based on AIS data. J Transp Geogr. 2020;86:102741. https://doi.org/10.1016/J.JTRANGEO.2020.102741 .

Yang D, Wu L, Wang S, Jia H, Li KX. How big data enriches maritime research – a critical review of Automatic Identification System (AIS) data applications. Transp Rev. 2019;39(6):755–73. https://doi.org/10.1080/01441647.2019.1649315 .

Rajabi A, Khodadad Saryazdi A, Belfkih A, Duvallet C. Towards smart port: an application of AIS data. In: Proceedings - 20th International Conference on High Performance Computing and Communications, 16th International Conference on Smart City and 4th International Conference on Data Science and Systems, HPCC/SmartCity/DSS 2018; 2019. pp. 1414–1421, https://doi.org/10.1109/HPCC/SMARTCITY/DSS.2018.00234 .

Ashrafi M, Walker TR, Magnan GM, Adams M, Acciaro M. A review of corporate sustainability drivers in maritime ports: a multi-stakeholder perspective. Marit Policy Manag. 2020;47(8):1027–44. https://doi.org/10.1080/03088839.2020.1736354 .

Lee H, Chatterjee I, Cho G. A systematic review of computer vision and AI in parking space allocation in a seaport. Applied Sciences (Switzerland). 2023;13(18):10254. https://doi.org/10.3390/app131810254 .

Lee PTW, Kwon OK, Ruan X. Sustainability challenges in maritime transport and logistics industry and its way ahead. Sustainability (Switzerland). 2019;11(5):1331. https://doi.org/10.3390/SU11051331 .

Denktas-Sakar G, Karatas-Cetin C. Port sustainability and stakeholder management in supply chains: a framework on resource dependence theory. Asian J Shipp Logist. 2012;28(3):301–19. https://doi.org/10.1016/J.AJSL.2013.01.002 .

Elkington J. Triple bottom line. In: Cannibals with Forks; 1997.

Ruggerio CA. Sustainability and sustainable development: A review of principles and definitions. Sci Total Environ. 2021;786:147481. https://doi.org/10.1016/J.SCITOTENV.2021.147481 .

Moore JE, Mascarenhas A, Bain J, Straus SE. Developing a comprehensive definition of sustainability. Implement Sci. 2017;12(1):1–8. https://doi.org/10.1186/S13012-017-0637-1/TABLES/3 .

Amui LBL, Jabbour CJC, de Sousa Jabbour ABL, Kannan D. Sustainability as a dynamic organizational capability: a systematic review and a future agenda toward a sustainable transition. J Clean Prod. 2017;142:308–22. https://doi.org/10.1016/j.jclepro.2016.07.103 .

Montiel I, Delgado-Ceballos J. Defining and measuring corporate sustainability. Organ Environ. 2014;27(2):113–39. https://doi.org/10.1177/1086026614526413 .

Seuring S, Müller M. From a literature review to a conceptual framework for sustainable supply chain management. J Clean Prod. 2008;16(15):1699–710. https://doi.org/10.1016/J.JCLEPRO.2008.04.020 .

Gladwin TN, Kennelly JJ, Krause T-S. Shifting paradigms for sustainable development: implications for management theory and research. Acad Manag Rev. 1995;20(4):874–907. https://doi.org/10.5465/AMR.1995.9512280024 .

Ameer R, Othman R. Sustainability practices and corporate financial performance: a study based on the top global corporations. J Bus Ethics. 2012;108(1):61–79. https://doi.org/10.1007/S10551-011-1063-Y/TABLES/7 .

Chakrabarty S, Wang L. The long-term sustenance of sustainability practices in MNCs: a dynamic capabilities perspective of the role of R&D and internationalization. J Bus Ethics. 2012;110(2):205–17. https://doi.org/10.1007/S10551-012-1422-3 .

Bjerkan KY, Seter H. Reviewing tools and technologies for sustainable ports: Does research enable decision making in ports? Transp Res D Transp Environ. 2019;72:243–60. https://doi.org/10.1016/j.trd.2019.05.003 .

Lim S, Pettit S, Abouarghoub W, Beresford A. Port sustainability and performance: a systematic literature review. Transp Res D Transp Environ. 2019;72:47–64. https://doi.org/10.1016/J.TRD.2019.04.009 .

Lee MT, Raschke RL. Innovative sustainability and stakeholders’ shared understanding: the secret sauce to ‘performance with a purpose.’ J Bus Res. 2020;108:20–8. https://doi.org/10.1016/j.jbusres.2019.10.020 .

Ducruet C, Panahi R, Ng AKY, Jiang C, Afenyo M. Between geography and transport: a scientometric analysis of port studies in Journal of Transport Geography. J Transp Geogr. 2019;81:102527. https://doi.org/10.1016/j.jtrangeo.2019.102527 .

Broccardo L, Truant E, Dana L-P. The interlink between digitalization, sustainability, and performance: an Italian context. J Bus Res. 2023;158:113621. https://doi.org/10.1016/j.jbusres.2022.113621 .

Parola F, Risitano M, Ferretti M, Panetti E. The drivers of port competitiveness: a critical review. Transp Rev. 2017;37(1):116–38. https://doi.org/10.1080/01441647.2016.1231232 .

Woo SH, Pettit SJ, Kwak DW, Beresford AKC. Seaport research: a structured literature review on methodological issues since the 1980s. Transp Res Part A Policy Pract. 2011;45(7):667–85. https://doi.org/10.1016/j.tra.2011.04.014 .

Di Vaio A, Varriale L. Management innovation for environmental sustainability in seaports: managerial accounting instruments and training for competitive green ports beyond the regulations. Sustainability (Switzerland). 2018;10(3):783. https://doi.org/10.3390/su10030783 .

Balić K, Žgaljić D, Ukić Boljat H, Slišković M. The port system in addressing sustainability issues—a systematic review of research. J Mar Sci Eng. 2022;10(8):1048. https://doi.org/10.3390/jmse10081048 .

Oh H, Lee S-W, Seo Y-J. The evaluation of seaport sustainability: the case of South Korea. Ocean Coast Manag. 2018;161:50–6. https://doi.org/10.1016/j.ocecoaman.2018.04.028 .

Busco C, Fiori G, Frigo ML, Riccaboni A. Sustainable development goals: integrating sustainability initiatives with long-term value creation. Strategic Finance. 2017;99(3):28–37.

Wu Q, He Q, Duan Y. Explicating dynamic capabilities for corporate sustainability. EuroMed J Bus. 2013;8(3):255–72. https://doi.org/10.1108/EMJB-05-2013-0025 .


Funding

Open access funding provided by Manipal Academy of Higher Education, Manipal. This study has not received any funding from institutions or agencies.

Author information

Authors and affiliations

Department of Commerce, Manipal Academy of Higher Education, Manipal, 576104, India

L. Kishore & Bidyut Kumar Ghosh

Department of Humanities & Management, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, 576104, India

Yogesh P. Pai

Department of Library and Information Science, JSS Academy of Higher Education and Research, Mysore, Karnataka, India

Sheeba Pakkan


Contributions

Kishore L conceptualized the manuscript, collected the data, performed the analysis, and authored the manuscript. Dr. Yogesh Pai P conducted an in-depth literature review of the bibliometric studies available in the Scopus database, co-authored the manuscript, and contributed to the results and discussion chapter along with justifications. Dr. Bidyut Kumar Ghosh co-authored the analysis and discussion chapter of the manuscript. Dr. Sheeba Pakkan contributed to the co-occurrence and co-authorship network analysis, citation impact-related data collection, analysis, and discussion.

Corresponding author

Correspondence to L. Kishore .

Ethics declarations

Competing interests

On behalf of all authors, the corresponding author states that there is no conflict of interest arising from financial or personal relationships with any third party whose interests could be positively or negatively influenced by the article's content.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Kishore, L., Pai, Y.P., Ghosh, B.K. et al. Maritime shipping ports performance: a systematic literature review. Discov Sustain 5, 108 (2024). https://doi.org/10.1007/s43621-024-00299-y

Received : 07 December 2023

Accepted : 29 May 2024

Published : 04 June 2024

DOI : https://doi.org/10.1007/s43621-024-00299-y


Keywords

  • Maritime shipping port
  • Port performance
  • Sustainability of ports
  • Sustainability practices
  • Bibliometric analysis
  • Data visualization
  • Thematic analysis



Open Access

Peer-reviewed

Research Article

Frameworks for procurement, integration, monitoring, and evaluation of artificial intelligence tools in clinical settings: A systematic review

Contributed equally to this work with: Sarim Dawar Khan, Zahra Hoodbhoy

Affiliations: CITRIC Health Data Science Centre, Department of Medicine, Aga Khan University, Karachi, Pakistan; Department of Paediatrics and Child Health, Aga Khan University, Karachi, Pakistan; Department of Medicine, Aga Khan University, Karachi, Pakistan; Duke Institute for Health Innovation, Duke University School of Medicine, Durham, North Carolina, United States; Population Health Science Institute, Newcastle University, Newcastle upon Tyne, United Kingdom; Newcastle upon Tyne Hospitals NHS Foundation Trust, Newcastle upon Tyne, United Kingdom; Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom; Duke Clinical Research Institute, Duke University School of Medicine, Durham, North Carolina, United States; Division of Cardiology, Duke University School of Medicine, Durham, North Carolina, United States

¶ ‡ ZS and MPS also contributed equally to this work.

* E-mail: [email protected]

  • Sarim Dawar Khan, 
  • Zahra Hoodbhoy, 
  • Mohummad Hassan Raza Raja, 
  • Jee Young Kim, 
  • Henry David Jeffry Hogg, 
  • Afshan Anwar Ali Manji, 
  • Freya Gulamali, 
  • Alifia Hasan, 
  • Asim Shaikh, 


  • Published: May 29, 2024
  • https://doi.org/10.1371/journal.pdig.0000514


Research on the applications of artificial intelligence (AI) tools in medicine has increased exponentially over the last few years, but its implementation in clinical practice has not seen a commensurate increase, with a lack of consensus on implementing and maintaining such tools. This systematic review aims to summarize frameworks focusing on procuring, implementing, monitoring, and evaluating AI tools in clinical practice. A comprehensive literature search, following PRISMA guidelines, was performed on MEDLINE, Wiley Cochrane, Scopus, and EBSCO databases to identify and include articles recommending practices, frameworks or guidelines for AI procurement, integration, monitoring, and evaluation. From the included articles, data regarding study aim, use of a framework, rationale of the framework, and details regarding AI implementation involving procurement, integration, monitoring, and evaluation were extracted. The extracted details were then mapped onto the Donabedian Plan, Do, Study, Act cycle domains. The search yielded 17,537 unique articles, out of which 47 were evaluated for inclusion based on their full texts and 25 articles were included in the review. Common themes extracted included transparency, feasibility of operation within existing workflows, integration into existing workflows, validation of the tool using predefined performance indicators, and improving the algorithm and/or adjusting the tool to improve performance. Among the four domains (Plan, Do, Study, Act), the most common domain was Plan (84%, n = 21), followed by Study (60%, n = 15), Do (52%, n = 13), and Act (24%, n = 6). Among 172 authors, only 1 (0.6%) was from a low-income country (LIC) and 2 (1.2%) were from lower-middle-income countries (LMICs). Healthcare professionals cite the implementation of AI tools within clinical settings as challenging, owing to low levels of evidence focusing on integration in the Do and Act domains. The current healthcare AI landscape calls for increased data sharing and knowledge translation to facilitate common goals and reap maximum clinical benefit.

Author summary

The use of artificial intelligence (AI) tools has seen exponential growth in multiple industries over the past few years. Despite this, the implementation of these tools in healthcare settings is lagging, with fewer than 600 AI tools approved by the United States Food and Drug Administration and fewer AI-related job postings in healthcare compared to other industries. In this systematic review, we organized and synthesized data and themes from published literature regarding key aspects of AI tool implementation, namely procurement, integration, monitoring and evaluation, and mapped the extracted themes onto the Plan-Do-Study-Act framework. We found that the majority of current literature on AI implementation in healthcare settings focuses on the “Plan” and “Study” domains, with considerably less emphasis on the “Do” and “Act” domains. This is perhaps the reason why experts currently cite the implementation of AI tools in healthcare settings as challenging. Furthermore, the current AI healthcare landscape has poor representation from low- and lower-middle-income countries. To ensure the healthcare industry can implement AI tools into clinical workflows across a variety of settings globally, we call for diverse and inclusive collaborations, coupled with further research targeting the under-investigated stages of AI implementation.

Citation: Khan SD, Hoodbhoy Z, Raja MHR, Kim JY, Hogg HDJ, Manji AAA, et al. (2024) Frameworks for procurement, integration, monitoring, and evaluation of artificial intelligence tools in clinical settings: A systematic review. PLOS Digit Health 3(5): e0000514. https://doi.org/10.1371/journal.pdig.0000514

Editor: Zhao Ni, Yale University, UNITED STATES

Received: September 4, 2023; Accepted: April 18, 2024; Published: May 29, 2024

Copyright: © 2024 Khan et al. This is an open access article distributed under the terms of the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.

Funding: This work was supported by the Patrick J. McGovern Foundation (Grant ID 383000239 to SDK, ZH, MHR, JYK, AAAM, FG, AH, AS, ST, NSK, MRP, SB, ZS, MPS). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing interests: MPS is a co-inventor of intellectual property licensed by Duke University to Clinetic, Inc., KelaHealth, Inc, and Cohere-Med, Inc. MPS holds equity in Clinetic, Inc. MPS has received honorarium for a conference presentation from Roche. MPS is a board member of Machine Learning for Health Care, a non-profit that convenes an annual research conference. SB is a co-inventor of intellectual property licensed by Duke University to Clinetic, Inc. and Cohere-Med, Inc. SB holds equity in Clinetic, Inc.

Introduction

The use of Artificial Intelligence (AI) tools has been growing exponentially, with several applications in the healthcare industry and tremendous potential to improve health outcomes. While there has been a rapid increase in literature on the use of AI in healthcare, the implementation of AI tools lags in both high-income and low-income settings compared to other industries, with fewer than 600 Food and Drug Administration-approved AI algorithms, and even fewer presently used in clinical settings [1–4]. The development-implementation gap has been further assessed by Goldfarb et al., who used job advertisements as a surrogate marker of technology diffusion patterns and found that, among skilled healthcare job postings between 2015 and 2018, 1 in 1,250 postings required AI skills, comparatively lower than other skilled sectors (information technology, management, finance and insurance, manufacturing, etc.) [5].

Implementation of AI tools is a multi-phase process that involves procurement, integration, monitoring, and evaluation [6, 7]. Procurement involves the scouting process before integrating an AI tool, including the decision of whether to build or buy the tool. Integration involves deploying an AI tool and incorporating it into existing clinical workflows. Monitoring and evaluation occur post-integration and entail keeping track of tool performance metrics, determining the impact of integrating the tool, and modifying it as needed to ensure it keeps functioning at its originally intended level of performance. A key barrier to AI implementation in healthcare highlighted by healthcare leaders across the globe is the lack of a systematic approach to AI procurement, implementation, monitoring and evaluation, since the majority of research on AI in healthcare does not comprehensively explore the multiple, complex steps involved in ensuring optimal implementation [8–11].

This systematic review aims to summarize themes arising from frameworks focusing on procuring, integrating, monitoring, and evaluating AI tools in clinical practice.

Methods

This systematic review followed the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guidelines (S1 Checklist) [12]. This review is registered on PROSPERO (ID: CRD42022336899).

Information sources and search strategy

We searched electronic databases (MEDLINE, Wiley Cochrane, Scopus, EBSCO) until June 2022. The search string contained terms describing technology, setting, framework, and implementation phase, including AI tool procurement, integration, monitoring, and evaluation, using standard MeSH terms where available. Terms that were not standard MeSH terms, such as “clinical setting”, were added following iterative discussions. To capture papers that were methodical guidelines for AI implementation, as opposed to experiential papers, and recognizing the heterogeneous nature of “frameworks”, which range from commentaries to complex, extensively researched models, multiple terms such as “framework”, “model”, and “guidelines” were used in the search strategy without explicit definitions, with the understanding that these encompassing terms would capture all relevant literature, which would later be refined as per the inclusion and exclusion criteria. The following search string was employed on MEDLINE: ("Artificial Intelligence"[Mesh] OR "Artificial Intelligence" OR "Machine Learning") AND ("clinical setting*"[tiab] OR clinic*[tiab] OR "Hospital" OR "Ambulatory Care"[Mesh] OR "Ambulatory Care Facilities"[Mesh]) AND (framework OR model OR guidelines) AND (monitoring OR evaluation OR procurement OR integration OR maintenance), without any restrictions. The search strategies used for the other databases are described in the appendix (S1 Appendix). All search strings were designed and adapted to each database by the lead librarian (KM) at The Aga Khan University.
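
For illustration, the MEDLINE string above could be executed programmatically against PubMed via Biopython's Entrez interface, as in the sketch below; the e-mail address is a placeholder required by NCBI, and the authors' searches were not necessarily run this way.

```python
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # placeholder; NCBI requires a contact address

query = (
    '("Artificial Intelligence"[Mesh] OR "Artificial Intelligence" OR "Machine Learning") '
    'AND ("clinical setting*"[tiab] OR clinic*[tiab] OR "Hospital" '
    'OR "Ambulatory Care"[Mesh] OR "Ambulatory Care Facilities"[Mesh]) '
    'AND (framework OR model OR guidelines) '
    'AND (monitoring OR evaluation OR procurement OR integration OR maintenance)'
)

handle = Entrez.esearch(db="pubmed", term=query, retmax=100)
record = Entrez.read(handle)
handle.close()
print(record["Count"], "total matches;", len(record["IdList"]), "IDs retrieved")
```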

Eligibility criteria

Inclusion criteria.

All studies focused on implementing AI tools in a clinical setting were included. AI implementation was broadly conceptualized to consist of procurement, integration, monitoring, and evaluation. There was no restriction on the types of articles included.

Exclusion criteria.

Studies published in any language besides English were excluded. Studies describing a single step of implementation (e.g., procurement) for a single AI tool without presenting an implementation framework were also excluded, along with studies that discussed consumers' experiences of using an AI tool rather than AI frameworks.

Study selection

Retrieved articles from the systematic search were imported into EndNote Reference Manager (Version X9; Clarivate Analytics, Philadelphia, Pennsylvania) and duplicate articles were removed. All articles were screened in duplicate by two independent pairs of reviewers (AM and JH, FG and SDK). Full texts of articles were then comprehensively reviewed for inclusion based on the predetermined criteria. Due to the heterogeneous nature of the articles curated (including opinion pieces), a risk of bias assessment was not conducted, as an appropriate, validated tool does not exist for this purpose.
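
Although the authors report paired screening with third-reviewer adjudication rather than a formal agreement statistic, inter-reviewer agreement at this stage is often quantified with Cohen's kappa; the sketch below shows how that could be computed for hypothetical include/exclude decisions.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical screening decisions for the same ten abstracts (1 = include).
reviewer_a = [1, 0, 0, 1, 1, 0, 0, 0, 1, 0]
reviewer_b = [1, 0, 1, 1, 1, 0, 0, 0, 0, 0]

print(f"Cohen's kappa: {cohen_kappa_score(reviewer_a, reviewer_b):.2f}")
```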

Data extraction

Three pairs of reviewers (SK and SG, SDK and FG, HDJH and AA) independently extracted data from the selected studies using a spreadsheet. Pairs attempted to resolve disagreements first, followed by adjudication by a third external reviewer (ZH) if needed. Data extracted comprised the following items: name of authors, year of publication, journal of publication, country of origin, World Bank income group (high-income, middle-income, low-income) for the corresponding author, study aim(s), rationale, methodology, framework novelty, and framework components. Framework component categories included procurement, integration, and post-implementation monitoring and evaluation [6, 7].

Data analysis

The qualitative data were extracted and delineated into themes based on the concepts presented in each individual study. Due to the lack of a risk of bias assessment, a sensitivity analysis was not conducted. Once extracted, the themes (which encompassed the four stages of implementation: procurement, integration, evaluation, and monitoring) were clustered into different categories through iterative discussion and agreement within the investigator team. The study team felt that while a holistic framework for AI implementation does not yet exist, there are analogous structures that are widely used in healthcare quality improvement. One of the best-established structures for iterative quality improvement is the plan-do-study-act (PDSA) method (S1 Fig) [13]. PDSA is commonly used for a variety of healthcare improvement efforts [14], including patient feedback systems [15] and adherence to guideline-based practices [16]. This method has four stages: plan, do, study, and act. The ‘plan’ stage identifies a change to be improved; the ‘do’ stage tests the change; the ‘study’ stage examines the success of the change; and the ‘act’ stage identifies adaptations and next steps to inform a new cycle [13]. PDSA is well suited to serve as a foundation for implementing AI because it is well understood by healthcare leaders around the globe and offers a high level of abstraction to accommodate the great breadth of relevant use cases and implementation contexts. Hence, the PDSA framework was deductively chosen, and the themes extracted from the articles (irrespective of whether the original article(s) used the PDSA framework) were mapped onto its four domains, with the ‘plan’ domain representing the steps required for procurement, the ‘do’ domain representing clinical integration, the ‘study’ domain covering the monitoring and evaluation processes, and the ‘act’ domain representing the actions taken after monitoring and evaluation to improve the functioning of the tool. This mapping is displayed in S1 Table.
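
The counting behind the domain percentages reported later (e.g., Plan 84%, Study 60%) follows directly from this mapping: an article contributes to a domain if at least one of its themes maps there. The sketch below shows that logic with hypothetical articles and a trimmed theme-to-domain table; only the mapping idea, not the data, comes from the paper.

```python
# Trimmed, hypothetical version of the theme-to-PDSA-domain mapping.
THEME_TO_DOMAIN = {
    "transparency": "Plan",
    "feasibility": "Plan",
    "workflow integration": "Do",
    "validation against performance indicators": "Study",
    "algorithm improvement": "Act",
}

articles = {
    "article_1": ["transparency", "workflow integration"],
    "article_2": ["feasibility", "validation against performance indicators"],
    "article_3": ["validation against performance indicators", "algorithm improvement"],
}

for domain in ("Plan", "Do", "Study", "Act"):
    # An article counts towards a domain if any of its themes maps to it.
    n = sum(
        any(THEME_TO_DOMAIN[theme] == domain for theme in themes)
        for themes in articles.values()
    )
    print(f"{domain}: {n}/{len(articles)} articles ({100 * n / len(articles):.0f}%)")
```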

Baseline characteristics of included articles

A total of 17,537 unique studies were returned by the search strategy, of which 47 were retained after title and abstract screening for full-text review; 25 studies were included in the systematic review following full-text review. In total, 22 studies were excluded because they focused on pre-implementation processes (n = 12), evaluated the use of a single tool (n = 4), evaluated consumer perceptions (n = 4), or did not focus on a clinical setting (n = 2). Fig 1 shows the PRISMA diagram for this process. A range of articles, from narrative and systematic reviews to opinion pieces and letters to the editor, were included in the review.

[Fig 1. PRISMA flow diagram: https://doi.org/10.1371/journal.pdig.0000514.g001]

The year of publication of the included articles ranged from 2017 to 2022, with the most articles (40%, n = 10) published in 2020 and the fewest in 2017 and 2018 (4%, n = 1 each). All corresponding authors of the 25 included articles (100%) were affiliated with high-income countries, the most common country of affiliation being the United States of America (52%, n = 13), followed by the United Kingdom, Canada, and Australia (24%, n = 2 each). Among 172 authors, only 1 (0.6%) was from a low-income country (LIC) (Uganda) and 2 (1.2%) were from lower-middle-income countries (LMICs) (India and Ghana) (Table 1). When stated, funding organizations included institutions in the US, Canada, the European Union, and South Korea [17–24].

[Table 1: https://doi.org/10.1371/journal.pdig.0000514.t001]

From the 25 included articles, a total of 17 themes were extracted and later mapped to their respective domains. Table 2 shows a summary of the distribution of themes across the PDSA domains, including sample quotes from eligible articles. Fig 2 shows a Sankey diagram highlighting the overlap between themes across all articles. The extracted themes are discussed below.

Fig 2. Sankey diagram of theme overlap across articles. https://doi.org/10.1371/journal.pdig.0000514.g002

Table 2. Distribution of themes across the PDSA domains, with sample quotes. https://doi.org/10.1371/journal.pdig.0000514.t002

Seven themes were clustered and mapped to the Plan domain. Most articles in the Plan domain focused on the theme of feasibility of operation within existing workflows (48%, n = 12), followed by transparency (32%, n = 8) and ethical issues and bias (32%, n = 8), the cost of purchasing and implementing the tool (20%, n = 5), regulatory approval (20%, n = 5), rationale for use of AI tools (16%, n = 4), and legal liability for harm (12%, n = 3). Example quotes related to each theme are captured in Table 2.

1) Rationale for use of AI tools. Frameworks highlight the need to select clinically relevant problems and to identify the need for acquiring an AI tool before initiating the procurement process [27, 34–36].

2) Ethical issues and bias. Frameworks noted that AI tools may be developed in the context of competitive venture capitalism, whose values and ethics often differ from, and may be incompatible with, those of the healthcare industry. While ethical considerations should occur at all stages, it is especially important that, before any tool is implemented, it is critically analyzed in its social, legal, and economic dimensions to ensure ethical use while fulfilling its initially intended purpose [17, 18, 23, 27, 29, 32, 33, 37].

3) Transparency. Transparency of AI tools is needed to increase trust in them and to ensure they fulfil their initially intended purpose. Black-box AI tools introduce implementation challenges, and teams implementing AI must balance priorities related to accuracy and interpretability. Even without model interpretability, frameworks highlight the importance of transparency about the training population, model functionality, architecture, risk factors, and outcome definition. Frameworks also recommend transparent reporting of model performance metrics, as well as of the test sets and methods used to derive them [24, 25, 28, 29, 37–40].

4) Legal liability for harm. There is emphasis on the legal liability that healthcare settings may face from implementing AI tools that cause harm. The degree to which an AI tool developer or a clinician user is responsible for potential adverse events needs to be clarified, and the stakeholders involved across the implementation process need to be identified so that accountability is clear in the event of harm [23, 25, 29].

5) Regulatory requirements. Regulatory frameworks differ across geographies and are in flux. Regulatory decisions about AI tool adoption should be based on proof of clinically important improvements in relevant patient outcomes [22, 23, 26, 32, 36].

6) Cost of purchasing and implementing a tool. Cost is an important factor to consider when deciding to implement an AI tool; it should be compared to the baseline standard of care without the tool. Organizations should avoid selecting AI tools that fail to create value for patients or clinicians [23, 26, 27, 36, 41].

7) Feasibility of AI tool implementation. A careful analysis of available computing and storage resources should be carried out to ensure sufficient resources are in place to implement a new AI tool. Some AI tools may need specialized infrastructure, particularly if they use large datasets such as images or high-frequency streaming data. Similar efforts should be made to assess the differences between the cohort on which the AI tool was trained and the patient cohort in the implementation context. Locally validating AI tools, developing a proper adoption plan, and providing clinician users with sufficient training are suggested to increase the likelihood of success [20, 25, 26, 28, 29, 33, 35–38, 40, 41].

The following four themes were clustered and mapped to the Do domain. Articles that were clustered in the Do domain primarily focused on integrating into clinical workflows (44%, n = 11). User training was the second most common theme (24%, n = 6), followed by appropriate technical expertise (16%, n = 4) and user acceptability (8%, n = 2). Example quotes related to each theme are captured in Table 2 .

1) Appropriate technical expertise. Frameworks emphasized that the team responsible for implementing and evaluating a new AI tool should include people with different relevant expertise. Specific perspectives that should be included are a machine learning expert and a clinical expert (i.e., a healthcare professional with extensive knowledge, experience, and expertise in the clinical area in which the AI tool is being deployed). Some frameworks suggested involving individuals with expertise across clinical and technical domains who can bridge between the different stakeholders. Inadequate representation on the team may lead to a poor-quality AI tool and patient harm due to incorrect information being presented to clinician users [27, 30, 40, 41].

2) User training. Frameworks highlighted the need to train clinician end users to get the maximum benefit from newly implemented AI tools, from understanding and interacting with the user interface to interpreting the tool's outputs. A rigorous and comprehensive training plan should be executed to equip end users with the required skillset so that they can handle high-risk patient situations [27, 29, 33, 35, 37, 41].

3) User acceptability. Frameworks highlighted that AI models can be used in inappropriate ways that can potentially harm patients. Unlike drugs, AI models do not come with clear instructions to help users avoid inappropriate use that can lead to negative effects; user acceptability therefore evaluates how well end users acclimatize to using the tool [25, 30].

4) Integrating into clinical workflows. For AI tools to have clinical impact, the healthcare delivery setting and clinician users must be equipped to use the tool effectively. Healthcare delivery settings should ensure that individual clinicians are empowered to do so [17, 20, 25, 27, 28, 30, 31, 33, 35, 37, 41].

Five themes were clustered and mapped to the Study domain. Articles in the Study domain primarily focused on validation of the tool using predefined performance indicators (40%, n = 10). Assessment of clinical outcomes was the second most common theme (24%, n = 6), followed by user experience (8%, n = 2), reporting of adverse events (4%, n = 1), and cost evaluation (4%, n = 1). Example quotes related to each theme are captured in Table 2.

1) User experience. User experience in the Study domain concerned the perception of AI system outputs from different perspectives, ranging from professionals to patients. It is important to look at barriers to effective use, including trust, instructions, documentation, and user training [21, 27].

2) Validation of the tool using predefined performance indicators. Frameworks discussed many different metrics and approaches to AI tool evaluation, including sensitivity, specificity, precision, F1 score, the area under the receiver operating characteristic (ROC) curve, and calibration plots; a toy sketch of computing these metrics follows this list. In addition to the metrics themselves, it is important to specify how the metrics are calculated. Frameworks also discussed the importance of evaluating AI tools on local, independent datasets and, if needed, fine-tuning AI tools to local settings [20–23, 27, 29, 31, 35, 37, 39].

3) Cost evaluation. Frameworks discussed the importance of accounting for costs associated with installation, use, and maintenance of AI tools. A particularly important dimension of cost is the burden placed on frontline clinicians and changes in the time required to complete clinical duties [27].

4) Assessment of clinical outcomes. Frameworks highlighted the importance of determining whether a given AI tool leads to an improvement in clinical patient outcomes. AI tools are unlikely to improve patient outcomes unless clinician users effectively use the tool to intervene on patients. Changes to clinical decision making should also be assessed to ensure that clinician users do not over-rely on the AI tool [18, 19, 22, 25, 30, 35].

5) Reporting adverse events. Frameworks discussed the importance of defining processes for reporting adverse events or system failures to relevant regulatory agencies. Healthcare settings should agree on reporting protocols with the AI tool developer. Software updates that address known problems should be categorized as low-, medium-, or high-risk to ensure stable and appropriate use at the time of updating [32].
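Returning to theme 2 above: purely as an illustration of the named validation metrics, here is a minimal scikit-learn sketch on invented toy labels and scores (none of the numbers come from any included study, and the 0.5 threshold is an arbitrary assumption).

```python
# Toy sketch of the validation metrics named in theme 2; y_true and y_score
# are invented, and the 0.5 decision threshold is an arbitrary assumption.
import numpy as np
from sklearn.calibration import calibration_curve
from sklearn.metrics import confusion_matrix, f1_score, precision_score, roc_auc_score

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.6, 0.7])
y_pred = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)              # a.k.a. recall
specificity = tn / (tn + fp)
precision = precision_score(y_true, y_pred)
f1 = f1_score(y_true, y_pred)
auroc = roc_auc_score(y_true, y_score)    # area under the ROC curve

# calibration plot coordinates (observed vs. predicted event rates)
prob_true, prob_pred = calibration_curve(y_true, y_score, n_bins=2)

print(f"sens={sensitivity:.2f} spec={specificity:.2f} prec={precision:.2f} "
      f"F1={f1:.2f} AUROC={auroc:.2f}")
```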

One theme was mapped to the Act domain.

1) Improvement of the tool/algorithm to improve performance. Frameworks discussed the need for tailored guidance on the use of AI tools that continuously learn from new data, and for allowing users and sites to adjust and fine-tune model thresholds to optimize performance for local contexts. For all AI tools, continuous monitoring should be in place, and there should be channels for clinician users to provide feedback to AI tool developers on necessary changes. This theme was mentioned by six articles (24%, n = 6), with example quotes captured in Table 2 [27, 29, 33, 35, 37, 41].

Framework coverage of PDSA domains

Among the four domains (Plan, Do, Study, Act) the most common domain was Plan (84%, n = 21), followed by Study (60%, n = 15), Do (52%, n = 13), and Act (24%, n = 6). Among the 25 included frameworks, four (16%) discussed all 4 domains, four (16%) discussed only 3 domains, ten (40%) discussed only 2 domains, and seven (28%) discussed only 1 domain.
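These coverage figures are internally consistent, which can be verified directly; a small sketch using only the counts in the preceding paragraph:

```python
# Consistency check on the reported PDSA domain coverage (numbers from text).
n_frameworks = 25
domain_mentions = {"Plan": 21, "Study": 15, "Do": 13, "Act": 6}
frameworks_by_domains_discussed = {4: 4, 3: 4, 2: 10, 1: 7}

# every framework is accounted for exactly once
assert sum(frameworks_by_domains_discussed.values()) == n_frameworks

# total domain mentions implied by the distribution equal the per-domain sum
implied = sum(k * n for k, n in frameworks_by_domains_discussed.items())
assert implied == sum(domain_mentions.values())  # 55 mentions either way
```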

Principal findings

In this systematic review, we comprehensively synthesized themes emerging from AI implementation frameworks in healthcare, with a specific focus on the different phases of implementation. To frame the AI implementation phases, we utilized the broadly recognizable PDSA approach. We found that the current literature on AI implementation focuses mainly on the Plan and Study domains, whereas the Do and Act domains are discussed less often, and that LMICs/LICs are underrepresented. Almost all framework authors originated from high-income countries (167 of 172 authors, 97.1%), with the United States of America the most represented (68 of 172 authors, 39.5%).

Assessment of the existing frameworks

That the most commonly addressed domains were Plan and Study is encouraging, as the capacity for strategic change management has been identified as a major barrier to AI implementation in healthcare [8]. Crossnohere et al. explored 14 AI frameworks in medicine and reported comparable findings, with most frameworks focusing on development and validation subthemes in each domain [42]. This focus may help mitigate potential risks from algorithm integration, such as dataset shift, accidental fitting of confounders, and differences in performance metrics owing to generalization to new populations [43]. The need for evolving, unified regulatory mechanisms, together with an improved understanding of the capabilities of AI, further drives the conversation towards the initial steps of implementation [44]. This could explain why researchers focus on the Plan and Study domains much more than on other aspects of AI tool use: these steps aim to ensure minimal adverse effects on human outcomes before an AI tool is implemented in a wider setting, especially in healthcare, where the margin for error is minimal, if not nonexistent.

The most common themes in the Plan domain were assessing the feasibility of model operation within existing workflows, transparency, and ethical issues and bias. Researchers across contexts emphasized the importance of effectively integrating AI tools into clinical workflows to enable positive impacts on clinical outcomes. Similarly, there was consensus among existing frameworks on the need for transparency around how models are developed and function, so that the internal workings of the tool and the medical decisions stemming from its use can be understood, helping to drive adoption and successful rollouts of AI tools [45]. Furthermore, there is still vast public concern surrounding the ethical issues of utilizing AI tools in clinical settings [46]. The least common themes in the Plan domain were rationale for use and legal liability for harm. Without a clear problem statement and rationale for use, adoption of AI is unlikely; unfortunately, existing frameworks do not yet emphasize the importance of deeply understanding and articulating the problem an AI tool addresses. Similarly, the lack of emphasis on legal liability for harm likely stems from variable approaches to product liability and a general lack of understanding of how to attribute responsibility and accountability for product performance.

The most common theme in the Study domain was validation against predefined performance indicators. This popularity may reflect the fact that, when these tools are studied, validation and assessment of clinical outcomes against standard-of-care strategies are easier to conduct than evaluations of final implementation procedures. Although validation of a tool is vital for institutions seeking to develop clinically trustworthy decision support systems [47], it is not the sole factor in whether an institution commits to a tool: user experience, economic burden, and regulatory compliance are equally important, if not more so, especially in LMICs [48, 49].

We found that the Do and Act phases were the least commonly discussed domains. That these domains receive the least attention in the medical literature may contribute to the reported difficulties in implementing AI tools into existing human processes and clinical settings [50]. Within the Do domain, implementation challenges are faced not only in clinical applications but also in other healthcare disciplines, such as the delivery of medical education, where lack of technical knowledge is often cited as the main source of difficulty [51]. Key implementation challenges identified previously also include logistical complications and human barriers to adoption, such as ease of use and sociocultural implications [43], which remain under-evaluated. These aspects of implementation form the backbone of the practical rollout of AI tools; however, only a small number of studies focused on user acceptability, user training, and technical expertise requirements, which are key facilitators of successful integration [52]. Furthermore, perhaps owing to the emerging nature of the field, the Act domain was by far the least prevalent in eligible articles, with only six discussing improvement of the AI tool following integration.

Gaps in the existing frameworks

Across all articles included in the current systematic review, HICs dominated the research landscape [53]. HICs have robust and diverse funding portfolios and are home to leading institutions specializing in all aspects of AI [54]. The role of HICs in AI development is corroborated by existing literature: for example, three systematic reviews of randomized controlled trials (RCTs) assessing AI tools were published in 2021 and 2022 [55–57]. In total, these reviews included 95 studies published in English, conducted across 29 countries; the most common settings were the United States, the Netherlands, Canada, Spain, and the United Kingdom (n = 3, 3%). Other than China, the Global South is barely represented, with a single study conducted in India, a single study in South America, and none in Africa. This is mirrored by qualitative research: a recent systematic review found that among 102 eligible studies, 90 (88.2%) were from countries meeting the United Nations Development Programme's definition of "very high human development" [58].

While LICs/LMICs stand to benefit greatly from AI tools given their high disease burdens, their lack of representation puts them at a significant disadvantage in AI adoption. Because existing frameworks were developed for resource- and capability-rich environments, they may not be generalizable or applicable to LICs/LMICs: they consider neither the severe limitations in local equipment, trained personnel, infrastructure, data protection frameworks, and public policies that these countries encounter [59], nor problems unique to these countries, such as societal acceptance [60] and physician readiness [61]. In addition, it has been argued that AI tools should be contextually relevant and designed to fit a specific setting [44]. LICs/LMICs often have poor governance frameworks, which are vital for the success of AI implementation; governance is a key, often region-specific and contextual theme, providing a clear structure for ethical oversight and implementation processes. If the development of AI is not inclusive of researchers in LICs/LMICs, it has the potential to make these regions slow adopters of the technology [62].

Certain themes that are important for AI use, and that we expected to extract, were notably missing from the literature. That the Act domain was the least discussed reveals that existing frameworks fail to address when and how AI tools should be decommissioned and what needs to be considered when upgrading existing tools. Furthermore, while there is great potential to implement AI in healthcare, there appears to be a disconnect between developers and end users, a missing link. Crossnohere et al. found that the frameworks examined for the use of AI in medicine were least likely to offer direction on engagement with relevant stakeholders and end users to facilitate the adoption of AI [42]. Successful implementation of AI requires active collaboration between developers and end users, and "facilitators" who promote this collaboration by connecting the two parties [42, 63]. Without such facilitators, emerging AI technology may remain confined to a minority of early adopters, with very few tools gaining widespread traction.

Strengths, limitations, and future directions

This review has appreciable strengths and some limitations. It is the first study to evaluate the implementation of AI tools in clinical settings across the entirety of the medical literature using a robust search strategy, and a pre-established, extensively researched framework (PDSA) was employed for domain and theme mapping. The PDSA framework has previously been used to map AI implementation procedures in the literature, but we believe the current paper takes a different approach by mapping distinct themes of AI implementation to a modified PDSA framework [64]. The current study focused on four key concepts in AI implementation, namely procurement, integration, monitoring, and evaluation; these were felt to be a comprehensive yet succinct, though by no means exhaustive, description of the steps of AI implementation within healthcare settings. As AI becomes more dominant in healthcare, the need to continuously appraise these tools will grow, with important implications for quality improvement. Limitations of the current review include the exclusion of studies published in languages other than English, which may have excluded some relevant studies, and the lack of a risk of bias assessment, owing to a lack of validated tools for opinion pieces. The term "decision support" was not used in the search strategy, since we aimed to capture frameworks and guidelines on AI implementation rather than articles on specific decision support tools; we recognize this may have inadvertently missed some articles, but we felt the iteratively formulated search terms adequately captured the necessary articles. A significant number of included articles had an inherently high risk of bias, being expert opinion rather than empirical evidence. Additionally, the heterogeneity of language surrounding AI implementation made the literature search difficult, and some studies may not have been captured by the search strategy. Furthermore, the study searched only four databases, namely MEDLINE, Wiley Cochrane, Scopus, and EBSCO. The current review was also unable to compare implementation processes across different countries.

To develop clinically applicable strategies for tackling barriers to the implementation of AI tools, we propose that future studies evaluating specific AI tools place additional emphasis on the themes within the later stages of implementation. Strategies to facilitate implementation may be developed by identifying subthemes within each PDSA domain. LIC and LMIC stakeholders can fill gaps in current frameworks and must be proactively and intentionally engaged in efforts to develop, integrate, monitor, and evaluate AI tools to ensure wider adoption and benefit globally.

The existing frameworks on AI implementation largely focus on the initial stage of implementation and were generated with little input from LICs/LMICs. Healthcare professionals repeatedly cite how challenging it is to implement AI in their clinical settings with little guidance on how to do so. For future adoption of AI in healthcare, it is necessary to develop a more comprehensive and inclusive framework by engaging collaborators across the globe from different socioeconomic backgrounds, and to conduct additional studies that evaluate these parameters. Implementation guided by diverse and inclusive collaborations, coupled with further research targeted at under-investigated stages of AI implementation, is needed before institutions can swiftly adopt existing tools within their clinical settings.

Supporting information

S1 Checklist. PRISMA checklist.

https://doi.org/10.1371/journal.pdig.0000514.s001

S1 Fig. The PDSA cycle.

https://doi.org/10.1371/journal.pdig.0000514.s002

S1 Table. Domains of the Modified PDSA framework for AI implementation.

https://doi.org/10.1371/journal.pdig.0000514.s003

S1 Appendix. Search Strategy.

https://doi.org/10.1371/journal.pdig.0000514.s004

Acknowledgments

The authors gratefully acknowledge the role of Dr. Khwaja Mustafa, Head Librarian at the Aga Khan University for facilitating and synthesizing the initial literature search.

Systematic Review | Open access | Published: 31 May 2024

Retrospective charts for reporting, analysing, and evaluating disaster emergency response: a systematic review

  • Pengwei Hu 1, 2,
  • Zhehao Li 2,
  • Jing Gui 2, 3,
  • Honglei Xu 4,
  • Zhongsheng Fan 2,
  • Fulei Wu 5 &
  • Xiaorong Liu 2

BMC Emergency Medicine volume 24, Article number: 93 (2024)


Given the frequency of disasters worldwide, there is growing demand for efficient and effective emergency responses. One challenge is designing suitable retrospective charts that enable knowledge to be gained from disasters. This study provides a comprehensive understanding of published retrospective chart review templates to inform the design and updating of retrospective research.

We conducted a systematic review and text analysis of peer-reviewed articles and grey literature on retrospective chart review templates for reporting, analysing, and evaluating emergency responses. The search was performed in PubMed, Cochrane, and Web of Science, and on pre-identified government, non-government, organizational, and professional association websites, for papers published before July 1, 2022. Items and categories were grouped and organised using visual text analysis. The study is registered in PROSPERO (374928).

Four index groups, 12 guidelines, and 14 report formats (or data collection templates) from 21 peer-reviewed articles and 9 grey literature papers were eligible. The retrospective tools were generally designed based on group consensus. One guideline and one report format were designed for the entire health system; 23 studies focused on emergency systems, while the others focused on hospitals. Five papers focused on specific incident types, including chemical, biological, radiological, and nuclear incidents, mass burn casualties, and mass paediatric casualties. Ten papers stated the location where the tools were used. The text analysis included 123 categories and 1210 specific items; large heterogeneity was observed.

Existing retrospective chart review templates for emergency response are heterogeneous, varying in type, hierarchy, and theoretical basis. The design of comprehensive, standard, and practicable retrospective charts requires an emergency response paradigm, a baseline for outcomes, robust information acquisition, and among-region cooperation.


Introduction

The global incidence of disasters remains high. According to the Centre for Research on the Epidemiology of Disasters (CRED), a total of 367 major natural disasters and more than 150 technological disasters occurred worldwide in 2021, causing 10,492 and more than 5000 deaths, respectively [1, 2]. In this context, a growing body of evidence supports the positive impact of an efficient and effective emergency response on casualty outcomes, in both the academic and operational fields of disaster medicine [3]. Although the modern era of organized disaster response can be traced back to the foundation of the Red Cross in 1863, disaster medicine only became a distinct scientific discipline in the past 60 years [4]. Disaster emergency management includes four stages: mitigation, preparedness, response, and recovery. Notably, the emergency response is recognised as having the greatest immediate impact on disaster management outcomes [5]. This response requires a high level of scientific evidence to support performance improvement.

In evidence-based medicine, core concepts include population, interventions, comparison of outcomes, and the hierarchy of evidence strength. However, given changing field conditions during disasters, ephemeral information, rumours, and security constraints, important questions in disaster medicine are not easily testable by evidence-based science [6], and it is difficult to conduct controlled studies of disasters. A widely used methodology is therefore the retrospective chart review (RCR), a research design applicable to emergency medicine that utilizes pre-recorded data to validate research hypotheses [7, 8, 9]. Failure to create clearly articulated research questions, operationalize variables, and develop and use standardized data abstraction forms are common mistakes in RCR, making it difficult to compare outcomes of different exercises and to make evidence-based decisions in disaster management [10].

Given the urgent requirement for standard retrospective charts for data collection during disasters and for review in the aftermath, numerous evaluation indexes, report templates, and guidelines have been defined and published, such as the pre-hospital emergency response capacity index by Bayram and Zuabi, a data collection template for a large-scale train accident emergency response by Leiba et al., and the guidelines for reports on health crises and critical health events by Kulling et al. [11, 12, 13]. These retrospective chart review templates were designed to allow researchers, educators, and managers to study different aspects of disaster management by defining core concepts for evaluating the response, standardized workflows, and timelines from event occurrence to patient admission. A systematic study of templates for pre-hospital medical management of major events, published in 2013, revealed the limitations of existing templates in terms of validity and feasibility, such as unclear design methodology and lack of testing in real-life incidents [9]. Evidence is lacking regarding the common aspects of retrospective charts that require attention and how reporting may be improved. Furthermore, numerous guidelines and templates from peer-reviewed articles and grey literature have been published since the 2013 review, such as the Health Care Coalition Surge Estimator Tool from the Administration for Strategic Preparedness and Response, after-action debriefing from the Federal Emergency Management Agency, and the Emergency Response and Assessment Team rapid assessment tool from the Association of Southeast Asian Nations [14, 15, 16].

This systematic review identifies existing retrospective chart review templates for reporting disaster emergency responses worldwide and provides a comprehensive assessment of these charts using content analysis, providing a knowledge background for designing and updating widely accepted retrospective charts. The protocol is registered in PROSPERO (374928).

Search strategy and criteria

To limit the scope of the review, this study focused only on the emergency response phase, extending from disaster occurrence to definitive patient treatment [5]. First, the Population, Intervention, Comparison, Outcomes, and Study Design (PICOS) model was used to shape the study question and build the search strategy. Searches were conducted in the Cochrane Library, PubMed, and Web of Science for peer-reviewed papers published before July 1, 2022, using keywords and MeSH terms related to disaster and emergency response (Supplemental Tables S1 and S2). In addition, references from the selected articles and prior systematic reviews were screened to identify additional relevant articles. Second, 29 pre-identified governmental, non-governmental, academic, and professional association websites and emergency-related registries, stratified by World Health Organization (WHO) region, were searched for published emergency response-related report forms, templates, guidelines, checklists, and data dictionaries available as of July 1, 2022 (Supplemental Table S3).
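The exact keywords and MeSH terms are given in Supplemental Tables S1 and S2; purely for illustration, a PICOS-driven query in that spirit might be assembled as follows (every term below is our hypothetical example, not the authors' actual strategy):

```python
# Hypothetical PubMed-style query for illustration only; the authors' real
# keywords and MeSH terms are listed in Supplemental Tables S1-S2.
query = (
    '("disasters"[MeSH Terms] OR "mass casualty incidents"[MeSH Terms]) '
    'AND ("emergency response" OR "medical management") '
    'AND (template OR checklist OR guideline OR "report form" OR "data dictionary")'
)
print(query)
```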

Peer-reviewed articles and grey literature were eligible if they met the following inclusion criteria: (i) the study object was an emergency response to a natural, technical, or social disaster (disasters of all extents, from community-level to worldwide, were included); (ii) the study designed at least one of the following types of retrospective tool: a report, a data collection template, guidelines, a checklist, a consensus, a questionnaire, or an index group with specific items for emergency response; and (iii) the study used a verified specific retrospective tool to perform research related to emergency response. Papers were excluded if: (i) the study only provided a theoretical frame without specific items under each concept category; (ii) any items were missing even after contacting the authors to obtain the omitted information; or (iii) the study focused on an epidemiological emergency. The search, screening, and data extraction were performed independently by two reviewers (PW Hu and J Gui); any disagreements were resolved through discussion with a third investigator (FL Wu).

Data analysis

To analyse the characteristics of the rich text objects from the included articles or grey literature, text analysis was conducted, including measures of semantics, indicators, and information acquisition, using the following steps. (i) Clear original taxonomy concepts and items under each of the concept dimensions related to health facilities’ emergency responses were extracted and included in the text analysis. (ii) For semantic measures, a theoretical frame was built to label and categorise the included items that described the time, area, action, and resource dimensions of the emergency response, consistent with the classic emergency response paradigm. Here, the ‘time’ dimension signifies the key intervals extending from the beginning of the incident to the period when the surviving victims are being treated in the hospital. The ‘area’ dimension includes four important casualty tactical emergency care zones; specifically, a hot zone, a warm zone, an en route zone, and an in-hospital zone [ 17 ]. The ‘action’ dimension includes incident command, safety and security, hazard assessment, triage and treatment (including patient tracking), and evacuation according to the mass casualty incident management framework generated by the National Disaster Life Support (NDLS) Program [ 18 ]. The ‘resource’ dimension represents the evaluations of surge capacity in the included studies; thus this dimension more specifically includes systems, spaces, staff, supplies, events, and consumption, as per ‘the science of surge’ [ 19 , 20 ] (this theoretical framework is detailed in Supplemental Tables S4–S7 and Supplemental Figure S1 ). Four types of indicator measures were defined to categorise the items, and three information acquisition methods were identified to measure the feasibility of the included charts (these criteria are defined in Supplemental Tables S8–S9). Next, (iii) three of the current study’s authors (PW Hu, ZH Li, and J Gui) individually sorted included items using the above pre-defined taxonomy. When the three researchers could not reach consensus, a subject-matter expert was consulted. Finally, (iv) the number of items placed in each category was calculated, and text visualisation technology was used to present among-study heterogeneity (Supplemental Method).
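As a sketch of the labelling step described above, each extracted item can be tagged with the four dimensions and then tallied. The two example items and their tags below are our own invention, while the dimension and category names follow the framework just described.

```python
# Minimal sketch of sorting chart items into the taxonomy; the example items
# and their labels are invented for illustration.
from collections import Counter

items = [
    {"text": "Time of first ambulance arrival on scene",
     "time": "on-site care", "area": "warm zone",
     "action": "triage and treatment", "resource": "events"},
    {"text": "Who assumed incident command, and when?",
     "time": "on-site command and control", "area": "hot zone",
     "action": "incident command", "resource": "system"},
]

# tally items per category within each dimension, as in Tables S11-S14
for dimension in ("time", "area", "action", "resource"):
    counts = Counter(item[dimension] for item in items)
    print(dimension, dict(counts))
```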

Assessment of risk of bias (quality appraisal) was conducted using a checklist designed by the authors prior to data collection. The checklist was based on the authors' assumptions about the data relevant to retrospective chart reports. Two of the current study's authors (HL Xu and ZS Fan) independently assessed the risk of bias using the checklist; a subject-matter expert was consulted when consensus was not reached.

Results

The analysis included 4 index groups, 12 guidelines, and 14 report formats (or data collection templates) from 21 peer-reviewed articles and 9 grey literature papers [5, 6, 11–16, 21–41], comprising more than 2000 specific items (Fig. 1). The characteristics of the included papers are shown in Table 1. A total of 26 papers stated the methodology used to design the retrospective chart, 18 of which were based on group consensus. One set of guidelines and one report format were created for an entire health system, while 23 papers focused on emergency systems and the remainder on hospitals. Eight papers mentioned a specific type of disaster, including chemical, biological, radiological, and nuclear (CBRN) incidents, mass burn casualties, and mass casualty incidents involving paediatric patients. Only 10 papers revealed the country or region in which the charts were applied: 2 were used in the United States, 2 in Germany, 1 in Sweden, 1 in the Netherlands, 1 in Australia, 1 in Israel, 1 in France, 1 in southeast Asia, and 1 worldwide. Quality appraisal of the papers showed that most peer-reviewed articles clearly stated their methodology and data collection procedure, while most grey literature was initiated by a department, professional body, or association. None of the included papers indicated that a pilot study of the retrospective chart review templates had been conducted, and only 4 templates were used in other publications (Supplementary Table S10).

Figure 1. Study selection flow chart

A total of 123 categories and 1210 specific items about emergency responses were included in the text analysis. The categories of the items varied greatly across papers; however, 13 concepts recurred across multiple papers. The most mentioned categories were ‘treatment’ and ‘communication’, each evident in 5 studies, followed by ‘triage’ and ‘coordination’ (4 studies each). The text visualisation in Fig. 2 presents the categories common to papers, including ‘triage’, ‘treatment’, ‘cooperation’, and ‘communication’. The categories of the guidelines by Lennquist et al. (2004) demonstrated the most overlap with other studies, including ‘communication’, ‘coordination’, ‘damage’, ‘outcome’, ‘psychological reactions’, and ‘severity of injuries’ [31] (Fig. 2).

Figure 2. Taxonomy of the included retrospective charts

Regarding the semantic analysis, 720 items were categorised within the time dimension, 271 within the area dimension, 1033 within the action dimension, and 899 within the resource dimension. Specifically, 2 index groups, 8 guidelines, and 5 report formats covered all four response dimensions (time, area, action, and resource). The most frequent categories under the time dimension were the on-site care and on-site command and control phases (183 and 163 items, respectively). The treatment area of most concern was the indirect threat zone (110 items), while less attention was paid to the direct threat zone (21 items). Almost all papers addressed the ‘action’ and ‘resource’ dimensions, except one report. Within the ‘action’ dimension, most items were classified under ‘incident command’ (393 items), followed by ‘treatment and triage (plus tracking)’ (281 items) and ‘support’ (141 items). Within the ‘resource’ dimension, most items were sorted into the ‘system’ category (417 items; see Supplemental Tables S11–S14). The indicator type analysis revealed 833 process indicators, 256 outcome indicators, 117 circumstance indicators, and 66 structure indicators (Supplemental Table S15). Regarding datatype, 884 items acquire data as text, symbols, or a combination of the two; 270 items collect numbers; 171 items collect times; and 17 items acquire locations (Supplemental Table S16). We also analysed the information acquisition method: 957 items involved data collection through post-event investigation, 299 through database extraction, and 86 through evidence-based deduction (Supplemental Table S17). Heterogeneity among studies was observed through visual inspection of bar charts of the papers, plotting text semantics, indicator types, and information acquisition methods (Figs. 3 and 4).
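The pooled item counts lend themselves to a simple bar chart of the kind underlying these fingerprint visualisations; a minimal sketch with matplotlib (the four totals are those reported above; per-paper breakdowns would replace them in a real Fig. 3-style plot):

```python
# Sketch of a pooled bar chart over the four semantic dimensions; the totals
# are the item counts reported in the text.
import matplotlib.pyplot as plt

dimensions = ["time", "area", "action", "resource"]
item_counts = [720, 271, 1033, 899]

fig, ax = plt.subplots()
ax.bar(dimensions, item_counts)
ax.set_ylabel("number of items")
ax.set_title("Items per semantic dimension (all papers pooled)")
plt.tight_layout()
plt.show()
```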

Figure 3. Literature fingerprint of included papers

Figure 4. Distribution of indicator type and information acquisition methodology among the included papers; (a) shows the distribution of the indicators, (b) shows the method of information acquisition

Discussion

Consistent data can be collected using standard retrospective charts for emergency response that include well-defined and clearly articulated items. Such charts facilitate communication among stakeholders and beneficiaries as to whether essential standards are being met, and can link policy to action [10]. To assess the current state of emergency response reporting, this study systematically reviewed 30 peer-reviewed articles and grey literature papers on emergency response retrospective chart review templates. Most were based on group consensus methods, which comprehensively integrate the knowledge of experts in fields highly relevant to the emergency response process. However, the high heterogeneity among these retrospective chart review templates hinders their wide application across countries and regions. The text visualisation used in the present study suggests that the heterogeneity may arise because the included templates were designed as different types, suited to different hierarchies, and based on different theoretical paradigms. Additionally, the risk of bias assessment indicated that the high heterogeneity might also be attributable to lack of research collaboration, unclear methods, and lack of extrapolation [43].

It is essential that a widely acceptable retrospective chart template be constructed based on consensus regarding the theoretical paradigm and taxonomy of items. The text visualisation of the categories of the included items revealed that each paper's taxonomy was independent of the others', and the theoretical paradigm used to design the templates was rarely stated. Although professional associations have constructed theoretical models related to emergency response in recent years, such as the ‘science of surge’ and ‘DISASTER’ paradigms, these are not widely used in the construction of retrospective chart reviews [17, 18, 44]. Existing theories were constructed from different perspectives, such as response capability [19, 20], course of action [18], or the elements of an Utstein-style template [5]. A novel and comprehensive paradigm that synthesises these ideas is required to further develop and guide chart design.

We explored the commonalities and divergences among researchers designing retrospective charts through text semantic analysis. Regarding the definition of key intervals of the emergency response, the results revealed that researchers pay most attention to the on-site care and on-site command and control phases, which immediately impact casualty care, although there is currently no widely accepted model of the chronological sequence of EMS response and care. Only 2 articles in this study defined a response timeline, and the timelines were not uniform between them. These findings reflect the fact that most EMS systems collect time data that were empirically developed based on arbitrary concepts and ease of data collection. For the treatment area, the items designed by researchers primarily focused on the indirect threat zone; less attention was paid to the direct threat zone, which greatly impacts the treatment of people injured in a disaster. A lack of retrospective data in this area will accordingly hinder quality improvement of pre-hospital care. This contradiction may be caused by the prioritisation of treatment in direct threat zones, which leaves response information management relatively neglected [42]. All papers except one report considered the ‘action’ and ‘resource’ dimensions, indicating that researchers are primarily concerned with response actions and resource use. The broad consensus that information related to ‘incident command’, ‘treatment and triage (plus tracking)’, and ‘support’ should be included in the templates suggests that these three action classifications account for most emergency response processes and have an important impact on research. Meanwhile, numerous items fell within the ‘system’ category (based on the science of surge), which comprises the sub-components of ‘plan’, ‘command’, ‘communication’, ‘coordination’, and ‘cyber security’; this concentrates a great amount of information in the ‘system’ category. Thus, standardising the items under ‘system’ is necessary to create widely accepted retrospective charts for emergency response.

Indicator type notably reflects the application scope and function of a retrospective chart review template. The predominance of process indicator items indicates that emergency response involves dynamic management. Given the lack of recognised benchmark standards for evaluating emergency response, outcome indicators have the potential to serve as gold standards, which can be verified through cohort studies [45, 46, 47].

Retrospective data collection in emergency response can require complicated detective work, for instance to overcome patients' recall bias, infer occurrence times, and calculate resource consumption. Patients are often transported to several different hospitals, making patient-specific data collection difficult [48]. Improving the feasibility of retrospective chart review templates could mitigate this by making the data acquisition method more robust. Among the included items, interviews were the most popular way to obtain data, with the advantage of easy acquisition. The feasibility of templates may be further improved through the comprehensive use of monitoring systems, pre-hospital emergency systems, and intelligent wearable devices for situational awareness, and by capturing situational awareness information through specific items [49, 50]. Further, obtaining permission from an organisation to collect data may be facilitated by referring to a specific guideline or template [51, 52].

Although a prior systematic study of templates for reporting prehospital medical management of major incidents was published in 2013, it had several limitations, and the current study adds to it in several ways. First, it expanded the scope by systematically reviewing reporting for emergency response broadly, rather than for major incidents only. Additionally, it conducted a detailed content analysis, integrated multiple classical theoretical backgrounds, and constructed a category framework for in-depth analysis of text-rich data, to identify the elements of emergency response to which researchers are generally attentive and how reporting may be improved.

However, the current study still has several limitations. Since the included papers were published only in English, papers from non-English-speaking regions, such as Africa, China, and Russia, were not considered. Additionally, owing to the difficulty of quantifying text-rich data and the lack of some key variables, such as the regions in which the templates were applied and the specific events of interest, subgroup analysis was not performed to explore the exact sources of heterogeneity.

This study confirmed that existing retrospective chart review templates for emergency response remain highly heterogeneous. Moving forward, data guidelines must be standardised to enable the comparison of events among countries. This will require different regions to cooperate in designing comprehensive, standard, comparable, and feasible tools based on their own emergency response organisations.

Data availability

Data is provided within the manuscript and supplementary information files.

Centre for Research on the Epidemiology of Disasters. Disasters Year in Review 2021. April 2022. https://www.cred.be/sites/default/files/CredCrunch66.pdf .

Centre for Research on the Epidemiology of Disasters. Technological Disasters: Trends & Transport accidents. March 2022. https://www.cred.be/sites/default/files/CredCrunch65.pdf .

Sarin RR, Hick JL, Livinski AA, et al. Disaster medicine: a comprehensive review of the literature from 2016. Disaster Med Public Health Prep. 2019;13:946–57.


Fattah S, Rehn M, Reierth E, Wisborg T. Systematic literature review of templates for reporting prehospital major incident medical management. BMJ Open. 2013;3(8).

Debacker M, Hubloue I, Dhondt E et al. Utstein-style template for uniform data reporting of acute medical response in disasters. PLoS Curr . 2012;4:e4f6cf3e8df15a.

Bradt DA, Aitken P. Disaster medicine reporting: the need for new guidelines and the CONFIDE statement. Emerg Med Australas. 2010;22:483–7.

Worster A, Haines T. Advanced statistics: understanding medical record review (MRR) studies. Acad Emerg Med. 2004;11:187–92.

Gilbert EH, Lowenstein SR, Koziol-McLain J, Barta D, Steiner J. Chart reviews in emergency medicine research: where are the methods? Ann Emerg Med. 1996;27:305–8.


Lipowski EE. Developing great research questions. Am J Health Syst Pharm. 2008;65(17):1667–70.

Vassar M, Holzmann M. The retrospective chart review: important methodological considerations. J Educ Eval Health Prof. 2013;11(10):12.


Bayram JD, Zuabi S. Disaster metrics: a proposed quantitative model for benchmarking prehospital medical response in trauma-related multiple casualty events. Prehosp Disaster Med. 2012;27:123–9.

Kulling P, Birnbaum M, Murray V, et al. Guidelines for reports on health crises and critical health events. Prehosp Disaster Med. 2010;25:377–83.

Leiba A, Schwartz D, Eran T, et al. DISAST-CIR: disastrous incidents systematic analysis through components, interactions and results: application to a large-scale train accident. J Emerg Med. 2009;37:46–50.

Federal Emergency Management Agency. FEMA US&R Response System / Incident Support Team. March 18, 2012. https://www.fema.gov/pdf/emergency/usr/usr006.pdf .

Assistant Secretary for Preparedness and Response. ASPR TRACIE Health Care Coalition Surge Estimator Tool: Hospital Data Collection Form. February 1, 2019. https://files.asprtracie.hhs.gov/documents/aspr-tracie-healthcare-coalition-surge-estimator-tool-hcc-aggregator.pdf .

Association of Southeast Asian Nations Emergency Response and Assessment Team. ASEAN-ERAT Guidelines. March 2018. https://ahacentre.org/publication/asean-erat-guidelines/ .

Pennardt A, Schwartz RB. Hot, warm, and cold zones: applying existing national incident management system terminology to enhance tactical emergency medical support interoperability. J Spec Oper Med. 2014;14:78–9.

Bureau of EMS, Trauma, and Preparedness. Michigan Special Operations: Mass Casualty Incidents. Oct 10, 2017. https://demca.org/wp-content/uploads/MASS-CASUALTY-INCIDENTS.pdf .

Kelen GD, McCarthy ML. The science of surge. Acad Emerg Med. 2006;13(11):1089–94.

Barbisch DF, Koenig KL. Understanding surge capacity: essential elements. Acad Emerg Med. 2006;13(11):1098–102.

Thomasian NM, Madad S, Hick JL, et al. Hospital surge preparedness and response index. Disaster Med Public Health Prep. 2021;15:398–401.

Franke A, Schorscher N, et al. Emergency response to terrorist attacks: results of the federal-conducted evaluation process in Germany. Eur J Trauma Emerg Surg. 2020;46(4):725–30.

Wurmb T, Schorscher N, Justice P, Dietz S, Schua R, Jarausch T, et al. Structured analysis, evaluation and report of the emergency response to a terrorist attack in Wuerzburg, Germany using a new template of standardised quality indicators. Scand J Trauma Resusc Emerg Med. 2018;26(1):87.


Hall TN, McDonald A, Peleg K. Identifying factors that may influence decision-making related to the distribution of patients during a mass casualty incident. Disaster Med Public Health Prep. 2018;12:101–8.

Olivieri C, Ingrassia PL, Della Corte F, et al. Hospital preparedness and response in CBRN emergencies: TIER assessment tool. Eur J Emerg Med. 2017;24:366–70.

Adini B, Aharonson-Daniel L, Israeli A. Load index model: an advanced tool to support decision making during mass-casualty incidents. J Trauma Acute Care Surg. 2015;78:622–7.

Fattah S, Rehn M, Lockey D, et al. A consensus based template for reporting of pre-hospital major incident medical management. Scand J Trauma Resusc Emerg Med. 2014;22:5.


Daftary RK, Cruz AT, Reaves EJ, et al. Making disaster care count: consensus formulation of measures of effectiveness for natural disaster acute phase medical response. Prehosp Disaster Med. 2014;29:461–7.

Rådestad M, Jirwe M, Castrén M, Svensson L, Gryth D, Rüter A. Essential key indicators for disaster medical response suggested to be included in a national uniform protocol for documentation of major incidents: a Delphi study. Scand J Trauma Resusc Emerg Med. 2013;21:68.

Juffermans J, Bierens JJ. Recurrent medical response problems during five recent disasters in the Netherlands. Prehosp Disaster Med. 2010;25:127–36.

Lennquist S. Protocol for reports from major accidents and disasters. Int J Disast Med. 2004;2:57–64.

Belmont E. Emergency preparedness, response & recovery checklist: beyond the emergency management plan. Washington, DC: American Health Lawyers Association; 2004. pp. 10–7.


Rüter A, Örtenwall P, Wikström T. Performance indicators for major incident medical management –a possible tool for quality control? Int J Disast Med. 2004;2:52–5.

Villarreal MS. Quality management tool for mass casualty emergency responses and disasters. Prehosp Disaster Med. 1997;12:200–9.

Ricci E, Pretto E. Assessment of prehospital and hospital response in disaster. Crit Care Clin. 1991;7:471–84.

World Health Organization. Hospital emergency response checklist. September 9, 2011. https://www.who.int/publications-detail-redirect/hospital-emergency-response-checklist .

National EMS information system. NEMSIS Data Dictionary Version 3.5.0. January 21, 2022. https://nemsis.org/v3-5-0-revision .

Assistant Secretary for Preparedness and Response. Healthcare Coalition Radiation Emergency Surge Annex Template. February 1, 2019. https://asprtracie.hhs.gov/technical-resources/resource/10041/healthcare-coalition-radiation-emncy-surge-annex-template .

Assistant Secretary for Preparedness and Response. Healthcare Coalition Pediatric Surge Annex Template. February 1, 2019. https://asprtracie.hhs.gov/technical-resources/resource/7037/healthcare-coalition-pediatric-surge-annex-template .

Assistant Secretary for Preparedness and Response. Healthcare Coalition Chemical Emergency Surge Annex Template. February 1, 2019. https://asprtracie.hhs.gov/technical-resources/resource/10356/healthcare-coalition-chemical-emncy-surge-annex-template .

Assistant Secretary for Preparedness and Response. Healthcare Coalition Burn Surge Annex Template. February 1, 2019. https://asprtracie.hhs.gov/technical-resources/resource/7653/healthcare-coalition-burn-surge-annex-template . Accessed July 1, 2022.

Khajehaminian MR, Ardalan A, Hosseini Boroujeni SM, et al. Prioritized criteria for casualty distribution following trauma-related mass incidents; a modified Delphi study. Arch Acad Emerg Med. 2020;8:e47.


Siersma V, Als-Nielsen B, Chen W, et al. Multivariable modelling for meta-epidemiological assessment of the association between trial quality and treatment effects estimated in randomized clinical trials. Stat Med. 2007;26(14):2745–58.

Fattah S, Rehn M, Reierth E, Wisborg T. Systematic literature review of templates for reporting prehospital major incident medical management. BMJ Open. 2013;3(8):e002658.

Becker TK, Hansoti B, Bartels S, et al. Global emergency medicine: a review of the literature from 2016. Acad Emerg Med. 2017;24:1150–60.

Stratton SJ. Use of structured observational methods in disaster research: ‘Recurrent medical response problems in five recent disasters in the Netherlands’. Prehosp Disaster Med. 2010;25:137–8.

Clarke M. Evidence aid-from the Asian tsunami to the Wenchuan earthquake. J Evid Based Med. 2008;1:9–11.

Svensøy JN, Nilsson H, Rimstad R. A qualitative study on researchers’ experiences after publishing scientific reports on major incidents, mass-casualty incidents, and disasters. Prehosp Disaster Med. 2021;36(5):536–42.

Langhelle A, Nolan J, Herlitz J, et al. Recommended guidelines for reviewing, reporting and conducting research on post-resuscitation care: the Utstein style. Resuscitation. 2005;66:271–83.

Ringdal KG, Coats TJ, Lefering R, et al. The Utstein template for uniform reporting of data following major trauma: a joint revision by SCANTEM, TARN, DGU-TR and RITG. Scand J Trauma Resusc Emerg Med. 2008;16:7.

Spaite DW, Valenzuela TD, Meislin HW, et al. Prospective validation of a new model for evaluating emergency medical services systems by in-field observation of specific time intervals in prehospital care. Ann Emerg Med. 1993;22:638–45.

Kaji AH, Schriger D, Green S. Looking through the retrospectoscope: reducing bias in emergency medicine chart review studies. Ann Emerg Med. 2014;64(3):292–8.


Acknowledgements

Fulei Wu and Xiaorong Liu are senior authors of this article and contributed equally to this study. Inclusion and exclusion criteria and the search strategy are detailed in the main text and Supplemental Tables S2–S3. For the text visual analysis method, see the supplemental method. The items were classified by ‘time’, ‘area’, ‘action’, and ‘resource’, defined in detail in the supplemental method, Supplemental Tables S4–S7, and Supplemental Fig. 1. In the visual representation of the results, each text block is depicted as a coloured square aligned from left to right and top to bottom.

Author information

Fulei Wu and Xiaorong Liu contributed equally to this study.

Authors and Affiliations

Department of Health Service, School of Public Health, Logistics University of People’s Armed Police Force, Tianjin, China

Department of Health Training, Second Military Medical University, Shanghai 200433, China

Pengwei Hu, Zhehao Li, Jing Gui, Zhongsheng Fan & Xiaorong Liu

Department of Research, Characteristic Medical Center of People's Armed Police, Tianjin, China

Medical Security Center, The No.983 Hospital of Joint Logistics Support Forces of Chinese PLA, Tianjin, China

School of Nursing, Fudan University, Shanghai, China


Contributions

Pengwei Hu and Xiaorong Liu conceived the original study concept and overall study design and supervised the subsequent steps of the study. Fulei Wu and Honglei Xu contributed to the study concept via their original studies. Pengwei Hu and Zhehao Li designed the search strategy. Pengwei Hu, Jing Gui, and Zhehao Li, supported by Xiaorong Liu, conducted the literature search and data extraction. Pengwei Hu and Zhongsheng Fan conducted the quality assessment of studies. Pengwei Hu and Fulei Wu conducted the statistical analysis. Pengwei Hu and Zhehao Li created the tables and figures. Xiaorong Liu and Fulei Wu reviewed the literature search and data analyses. Pengwei Hu wrote the first draft of the manuscript. All authors participated in the interpretation of data, critically reviewed the manuscript with edits and comments, and approved its final submission. Pengwei Hu, Zhehao Li, Jing Gui, and Xiaorong Liu had full access to all data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.

Corresponding author

Correspondence to Xiaorong Liu.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Supplementary Material 2

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Hu, P., Li, Z., Gui, J. et al. Retrospective charts for reporting, analysing, and evaluating disaster emergency response: a systematic review. BMC Emerg Med 24, 93 (2024). https://doi.org/10.1186/s12873-024-01012-y


Received: 28 November 2023

Accepted: 22 May 2024

Published: 31 May 2024

DOI: https://doi.org/10.1186/s12873-024-01012-y
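As an aside for readers who script their reference management: a DOI such as the one above can be resolved into a formatted reference via doi.org content negotiation, where the HTTP Accept header selects the output format. A minimal sketch, assuming network access and the requests package:

```python
# Minimal sketch: resolve the article's DOI into a BibTeX entry using
# doi.org content negotiation (available for Crossref-registered DOIs).
import requests

DOI_URL = "https://doi.org/10.1186/s12873-024-01012-y"

resp = requests.get(
    DOI_URL,
    headers={"Accept": "application/x-bibtex"},  # ask for BibTeX back
    timeout=30,
)
resp.raise_for_status()
print(resp.text)  # the BibTeX record for this article
```

Requesting text/x-bibliography with a style parameter (for example, style=apa) in the Accept header returns a human-readable reference instead.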


Keywords

  • Retrospective charts
  • Emergency response




Computer Science > Computation and Language

Title: Text Generation: A Systematic Literature Review of Tasks, Evaluation, and Challenges

Abstract: Text generation has become more accessible than ever, and the increasing interest in these systems, especially those using large language models, has spurred an increasing number of related publications. We provide a systematic literature review comprising 244 selected papers between 2017 and 2024. This review categorizes works in text generation into five main tasks: open-ended text generation, summarization, translation, paraphrasing, and question answering. For each task, we review their relevant characteristics, sub-tasks, and specific challenges (e.g., missing datasets for multi-document summarization, coherence in story generation, and complex reasoning for question answering). Additionally, we assess current approaches for evaluating text generation systems and ascertain problems with current metrics. Our investigation shows nine prominent challenges common to all tasks and sub-tasks in recent text generation publications: bias, reasoning, hallucinations, misuse, privacy, interpretability, transparency, datasets, and computing. We provide a detailed analysis of these challenges, their potential solutions, and which gaps still require further engagement from the community. This systematic literature review targets two main audiences: early career researchers in natural language processing looking for an overview of the field and promising research directions, as well as experienced researchers seeking a detailed view of tasks, evaluation methodologies, open challenges, and recent mitigation strategies.
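The abstract's point about problems with current evaluation metrics is easy to see with a toy example: surface-overlap scores can penalise a faithful paraphrase while rewarding a lexically similar but wrong output. A minimal sketch, where the unigram-precision metric and the example sentences are illustrative assumptions rather than material from the paper:

```python
# Toy illustration of why surface-overlap metrics can mislead in
# text-generation evaluation.
def unigram_precision(candidate: str, reference: str) -> float:
    """Fraction of candidate words that also appear in the reference."""
    cand = candidate.lower().split()
    ref = set(reference.lower().split())
    return sum(w in ref for w in cand) / len(cand)

reference = "the firm reported higher quarterly profits"
paraphrase = "earnings rose this quarter according to the company"
unrelated = "the firm reported the quarterly picnic schedule"

print(unigram_precision(paraphrase, reference))  # ~0.13: low despite same meaning
print(unigram_precision(unrelated, reference))   # ~0.71: high despite wrong meaning
```

Mismatches like this are one reason overlap-based scores alone are widely regarded as insufficient for open-ended generation.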


