lit review tools

LITERATURE REVIEW SOFTWARE FOR BETTER RESEARCH

“Litmaps is a game changer for finding novel literature... it has been invaluable for my productivity.... I also got my PhD student to use it and they also found it invaluable, finding several gaps they missed”

Varun Venkatesh

Austin Health, Australia

As a full-time researcher, Litmaps has become an indispensable tool in my arsenal. The Seed Maps and Discover features of Litmaps have transformed my literature review process, streamlining the identification of key citations while revealing previously overlooked relevant literature, ensuring no crucial connection goes unnoticed. A true game-changer indeed!

Ritwik Pandey

Doctoral Research Scholar – Sri Sathya Sai Institute of Higher Learning

Using Litmaps for my research papers has significantly improved my workflow. Typically, I start with a single paper related to my topic. Whenever I find an interesting work, I add it to my search. From there, I can quickly cover my entire Related Work section.

David Fischer

Research Associate – University of Applied Sciences Kempten

“It's nice to get a quick overview of related literature. Really easy to use, and it helps getting on top of the often complicated structures of referencing”

Christoph Ludwig

Technische Universität Dresden, Germany

“This has helped me so much in researching the literature. Currently, I am beginning to investigate new fields and this has helped me hugely”

Aran Warren

Canterbury University, NZ

“I can’t live without you anymore! I also recommend you to my students.”

Professor at The Chinese University of Hong Kong

“Seeing my literature list as a network enhances my thinking process!”

Katholieke Universiteit Leuven, Belgium

“Incredibly useful tool to get to know more literature, and to gain insight in existing research”

KU Leuven, Belgium

“As a student just venturing into the world of lit reviews, this is a tool that is outstanding and helping me find deeper results for my work.”

Franklin Jeffers

South Oregon University, USA

“Any researcher could use it! The paper recommendations are great for anyone and everyone”

Swansea University, Wales

“This tool really helped me to create good bibtex references for my research papers”

Ali Mohammed-Djafari

Director of Research at LSS-CNRS, France

“Litmaps is extremely helpful with my research. It helps me organize each one of my projects and see how they relate to each other, as well as to keep up to date on publications done in my field”

Daniel Fuller

Clarkson University, USA

As a person who is an early researcher and identifies as dyslexic, I can say that having research articles laid out in the date vs cite graph format is much more approachable than looking at a standard database interface. I feel that the maps Litmaps offers lower the barrier of entry for researchers by giving them the connections between articles spaced out visually. This helps me orientate where a paper is in the history of a field. Thus, new researchers can look at one of Litmap's "seed maps" and have the same information as hours of digging through a database.

Baylor Fain

Postdoctoral Associate – University of Florida

Accelerate your research with the best systematic literature review tools

The ideal literature review tool helps you make sense of the most important insights in your research field. ATLAS.ti empowers researchers to perform powerful and collaborative analysis using the leading software for literature review.

Finalize your literature review faster with comfort

ATLAS.ti makes it easy to manage, organize, and analyze articles, PDFs, excerpts, and more for your projects. Conduct a deep systematic literature review and get the insights you need with a comprehensive toolset built specifically for your research projects.

Figure out the "why" behind your participants' motivations

Understand the behaviors and emotions that are driving your focus group participants. With ATLAS.ti, you can transform your raw data and turn it into qualitative insights you can learn from. Easily determine user intent in the same spot you're deciphering your overall focus group data.

Visualize your research findings like never before

We make it simple to present your analysis results with meaningful charts, networks, and diagrams. Instead of figuring out how to communicate the insights you just unlocked, we enable you to leverage easy-to-use visualizations that support your goals.

Everything you need to elevate your literature review

Import and organize literature data

Import and analyze any type of text content – ATLAS.ti supports all standard text and transcription files such as Word and PDF.

Analyze with ease and speed

Utilize easy-to-learn workflows that save valuable time, such as auto coding, sentiment analysis, team collaboration, and more.

Leverage AI-driven tools

Make efficiency a priority and let ATLAS.ti do your work with AI-powered research tools and features for faster results.

Visualize and present findings

With just a few clicks, you can create meaningful visualizations like charts, word clouds, tables, and networks for your literature data.

The faster way to make sense of your literature review. Try it for free, today.

A literature review analyzes the most current research within a research area, drawing on published studies from many sources:

  • Peer-reviewed academic publications
  • Full-length books
  • University bulletins
  • Conference proceedings
  • Dissertations and theses

Literature reviews allow researchers to:

  • Summarize the state of the research
  • Identify unexplored research inquiries
  • Recommend practical applications
  • Critique currently published research

Literature reviews are either standalone publications or part of a paper as background for an original research project. A literature review, as a section of a more extensive research article, summarizes the current state of the research to justify the primary research described in the paper.

For example, a researcher may have reviewed the literature on a new supplement's health benefits and concluded that more research needs to be conducted on those with a particular condition. This research gap warrants a study examining how this understudied population reacted to the supplement. Researchers need to establish this research gap through a literature review to persuade journal editors and reviewers of the value of their research.

Consider a literature review as a typical research publication presenting a study, its results, and the salient points scholars can infer from the study. The only significant difference is that a literature review treats existing literature as the research data to collect and analyze. From that analysis, a literature review can suggest new inquiries to pursue.

Identify a focus

Similar to a typical study, a literature review should have a research question or questions that analysis can answer. This sort of inquiry typically targets a particular phenomenon, population, or even research method to examine how different studies have looked at the same thing differently. A literature review, then, should center the literature collection around that focus.

Collect and analyze the literature

With a focus in mind, a researcher can collect studies that provide relevant information for that focus. They can then analyze the collected studies by finding and identifying patterns or themes that occur frequently. This analysis allows the researcher to point out what the field has frequently explored or, on the other hand, overlooked.

Suggest implications

The literature review allows the researcher to argue a particular point through the evidence provided by the analysis. For example, suppose the analysis makes it apparent that the published research on people's sleep patterns has not adequately explored the connection between sleep and a particular factor (e.g., television-watching habits, indoor air quality). In that case, the researcher can argue that further study can address this research gap.

External requirements aside (e.g., many academic journals have a word limit of 6,000-8,000 words), a literature review as a standalone publication is as long as necessary to allow readers to understand the current state of the field. Even if it is just a section in a larger paper, a literature review is long enough to allow the researcher to justify the study that is the paper's focus.

Note that a literature review needs only to incorporate a representative number of studies relevant to the research inquiry. For term papers in university courses, 10 to 20 references might be appropriate for demonstrating analytical skills. Published literature reviews in peer-reviewed journals might have 40 to 50 references. One of the essential goals of a literature review is to persuade readers that you have analyzed a representative segment of the research you are reviewing.

Researchers can find published research from various online sources:

  • Journal websites
  • Research databases
  • Search engines (Google Scholar, Semantic Scholar)
  • Research repositories
  • Social networking sites (Academia, ResearchGate)

Many journals make articles freely available under the term "open access," meaning that there are no restrictions to viewing and downloading such articles. Otherwise, collecting research articles from restricted journals usually requires access from an institution such as a university or a library.

Evidence of a rigorous literature review is more important than the word count or the number of articles that undergo data analysis. Especially when writing for a peer-reviewed journal, it is essential to consider how to demonstrate research rigor in your literature review to persuade reviewers of its scholarly value.

Select field-specific journals

The most significant research relevant to your field focuses on a narrow set of journals similar in aims and scope. Consider who the most prominent scholars in your field are and determine which journals publish their research or have them as editors or reviewers. Journals tend to look favorably on systematic reviews that include articles they have published.

Incorporate recent research

Recently published studies have greater value in determining the gaps in the current state of research. Older research is likely to have encountered challenges and critiques that may render their findings outdated or refuted. What counts as recent differs by field; start by looking for research published within the last three years and gradually expand to older research when you need to collect more articles for your review.

Consider the quality of the research

Literature reviews are only as strong as the quality of the studies that the researcher collects. You can judge any particular study by many factors, including:

  • the quality of the article's journal
  • the article's research rigor
  • the timeliness of the research

The critical point here is that you should consider more than just a study's findings or research outputs when including research in your literature review.

Narrow your research focus

Ideally, the articles you collect for your literature review have something in common, such as a research method or research context. For example, if you are conducting a literature review about teaching practices in high school contexts, it is best to narrow your literature search to studies focusing on high school. You should consider expanding your search to junior high school and university contexts only when there are not enough studies that match your focus.

You can create a project in ATLAS.ti for keeping track of your collected literature. ATLAS.ti allows you to view and analyze full text articles and PDF files in a single project. Within projects, you can use document groups to separate studies into different categories for easier and faster analysis.

For example, a researcher with a literature review that examines studies across different countries can create document groups labeled "United Kingdom," "Germany," and "United States," among others. A researcher can also use ATLAS.ti's global filters to narrow analysis to a particular set of studies and gain insights about a smaller set of literature.

ATLAS.ti allows you to search, code, and analyze text documents and PDF files. You can treat a set of research articles like other forms of qualitative data. The codes you apply to your literature collection allow for analysis through many powerful tools in ATLAS.ti:

  • Code Co-Occurrence Explorer
  • Code Co-Occurrence Table
  • Code-Document Table

Other tools in ATLAS.ti employ machine learning to facilitate parts of the coding process for you. Some of our software tools that are effective for analyzing literature include:

  • Named Entity Recognition
  • Opinion Mining
  • Sentiment Analysis

As long as your documents are text documents or text-enabled PDF files, ATLAS.ti's automated tools can provide essential assistance in the data analysis process.

Writing in the Health and Social Sciences: Literature Reviews and Synthesis Tools


Systematic Literature Reviews: Steps & Resources

The steps for conducting a systematic literature review are listed below.

Also see subpages for more information about:

  • The different types of literature reviews, including systematic reviews and other evidence synthesis methods
  • Tools & Tutorials

Literature Review & Systematic Review Steps

  • Develop a Focused Question
  • Scope the Literature  (Initial Search)
  • Refine & Expand the Search
  • Limit the Results
  • Download Citations
  • Abstract & Analyze
  • Create Flow Diagram
  • Synthesize & Report Results

1. Develop a Focused Question

Consider the PICO Format: Population/Problem, Intervention, Comparison, Outcome

Focus on defining the Population or Problem and Intervention (don't narrow by Comparison or Outcome just yet!)

"What are the effects of the Pilates method for patients with low back pain?"

Tools & Additional Resources:

  • PICO Question Help
  • Stillwell, S. B., Fineout-Overholt, E., Melnyk, B. M., & Williamson, K. M. (2010). Evidence-based practice, step by step: Asking the clinical question. AJN, The American Journal of Nursing, 110(3), 58-61. https://doi.org/10.1097/01.NAJ.0000368959.11129.79

2. Scope the Literature

A "scoping search" investigates the breadth and/or depth of the initial question or may identify a gap in the literature. 

Eligible studies may be located by searching in:

  • Background sources (books, point-of-care tools)
  • Article databases
  • Trial registries
  • Grey literature
  • Cited references
  • Reference lists

When searching, translate terms to the controlled vocabulary of the database where possible. Use text-word searching when necessary.

Use Boolean operators to connect search terms:

  • Combine separate concepts with AND (resulting in a narrower search)
  • Connect synonyms with OR (resulting in an expanded search)

Search: pilates AND ("low back pain" OR backache)
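The same Boolean logic can be sketched in code. The snippet below is an illustrative Python helper, not part of any database API (the function name and term lists are made up for this example); it joins synonyms with OR, joins distinct concepts with AND, and quotes multi-word phrases:

```python
def build_query(concept_groups):
    """Join synonyms with OR inside parentheses, then join the
    concept groups with AND; quote multi-word phrases."""
    groups = []
    for synonyms in concept_groups:
        quoted = ['"%s"' % t if " " in t else t for t in synonyms]
        groups.append("(" + " OR ".join(quoted) + ")")
    return " AND ".join(groups)

# One concept group per search concept (e.g., intervention, problem):
query = build_query([["pilates"], ["low back pain", "backache"]])
print(query)  # (pilates) AND ("low back pain" OR backache)
```

Adding a synonym to a group expands that concept without touching the AND structure that keeps the overall search narrow.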

Video Tutorials - Translating PICO Questions into Search Queries

  • Translate Your PICO Into a Search in PubMed (YouTube, Carrie Price, 5:11) 
  • Translate Your PICO Into a Search in CINAHL (YouTube, Carrie Price, 4:56)

3. Refine & Expand Your Search

Expand your search strategy with synonymous search terms harvested from:

  • database thesauri
  • reference lists
  • relevant studies

Example: 

(pilates OR exercise movement techniques) AND ("low back pain" OR backache* OR sciatica OR lumbago OR spondylosis)

As you develop a final, reproducible strategy for each database, save your strategies in:

  • a personal database account (e.g., MyNCBI for PubMed)
  • a search-strategy tracker (log in with your NYU credentials, then open and "Make a Copy" to create your own tracker for your literature search strategies)

4. Limit Your Results

Use database filters to limit your results based on your defined inclusion/exclusion criteria.  In addition to relying on the databases' categorical filters, you may also need to manually screen results.  

  • Limit by article type, e.g., "randomized controlled trial" OR multicenter study
  • Limit by publication years, age groups, language, etc.

NOTE: Many databases allow you to filter to "Full Text Only". This filter is not recommended. It excludes articles whose full text is not available in that particular database (CINAHL, PubMed, etc.), but if an article is relevant, you should still be able to read its title and abstract regardless of full-text status. The full text is often accessible through another source (a different database, or Interlibrary Loan).

  • Filters in PubMed
  • CINAHL Advanced Searching Tutorial

5. Download Citations

Selected citations and/or entire sets of search results can be downloaded from the database into a citation management tool. If you are conducting a systematic review that will require reporting according to PRISMA standards, a citation manager can help you keep track of the number of articles that came from each database, as well as the number of duplicate records.

In Zotero, you can create a Collection for the combined results set, and sub-collections for the results from each database you search. You can then use Zotero's "Duplicate Items" function to find and merge duplicate records.

File structure of a Zotero library, showing a combined pooled set, and sub folders representing results from individual databases.

  • Citation Managers - General Guide
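A citation manager handles this bookkeeping for you, but the underlying logic is simple to sketch. The following illustrative Python snippet (the record fields, sample DOIs, and function name are hypothetical, not tied to any real tool) pools records from several databases, counts per-database totals, and removes duplicates by DOI, yielding the "identified" and "duplicates removed" numbers a PRISMA diagram reports:

```python
def dedupe(results_by_db):
    """Pool records from several databases; count totals per source
    and remove duplicates using a case-normalized DOI as the key."""
    per_db = {db: len(records) for db, records in results_by_db.items()}
    seen, unique = set(), []
    for records in results_by_db.values():
        for rec in records:
            key = rec["doi"].lower()
            if key not in seen:
                seen.add(key)
                unique.append(rec)
    total = sum(per_db.values())
    return unique, {"identified": total,
                    "duplicates_removed": total - len(unique)}

pubmed = [{"doi": "10.1000/a1"}, {"doi": "10.1000/a2"}]
cinahl = [{"doi": "10.1000/A2"}, {"doi": "10.1000/a3"}]
unique, counts = dedupe({"PubMed": pubmed, "CINAHL": cinahl})
print(counts)  # {'identified': 4, 'duplicates_removed': 1}
```

In practice a DOI is not always present, so real tools also match on normalized title, year, and authors before declaring two records duplicates.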

6. Abstract and Analyze

  • Migrate citations to data collection/extraction tool
  • Screen Title/Abstracts for inclusion/exclusion
  • Screen and appraise full text for relevance and methods
  • Resolve disagreements by consensus

Covidence is a web-based tool that enables you to work with a team to screen titles/abstracts and full text for inclusion in your review, as well as extract data from the included studies.

Screenshot of the Covidence interface, showing Title and abstract screening phase.

  • Covidence Support
  • Critical Appraisal Tools
  • Data Extraction Tools

7. Create Flow Diagram

The PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses) flow diagram is a visual representation of the flow of records through the different phases of a systematic review. It depicts the number of records identified, included, and excluded. It is best used in conjunction with the PRISMA checklist.

Example PRISMA diagram showing number of records identified, duplicates removed, and records excluded.

Example from: Stotz, S. A., McNealy, K., Begay, R. L., DeSanto, K., Manson, S. M., & Moore, K. R. (2021). Multi-level diabetes prevention and treatment interventions for Native people in the USA and Canada: A scoping review. Current Diabetes Reports, 21(11), 46. https://doi.org/10.1007/s11892-021-01414-3

  • PRISMA Flow Diagram Generator (shinyapps.io, Haddaway et al.)
  • PRISMA Diagram Templates  (Word and PDF)
  • Make a copy of the file to fill out the template
  • Image can be downloaded as PDF, PNG, JPG, or SVG
  • Covidence generates a PRISMA diagram that is automatically updated as records move through the review phases

8. Synthesize & Report Results

There are a number of reporting guidelines available to guide the synthesis and reporting of results in systematic literature reviews.

It is common to organize findings in a matrix, also known as a Table of Evidence (ToE).

Example of a review matrix, using Microsoft Excel, showing the results of a systematic literature review.

  • Reporting Guidelines for Systematic Reviews
  • Download a sample template of a health sciences review matrix  (GoogleSheets)

Steps modified from: 

Cook, D. A., & West, C. P. (2012). Conducting systematic reviews in medical education: A stepwise approach. Medical Education, 46(10), 943–952.

  • Last Updated: May 15, 2024 11:19 AM
  • URL: https://guides.nyu.edu/healthwriting

Literature Review Tips & Tools

  • Tips & Examples

Organizational Tools

Tools for systematic reviews.

  • Bubbl.us Free online brainstorming/mindmapping tool that also has a free iPad app.
  • Coggle Another free online mindmapping tool.
  • Organization & Structure tips from Purdue University Online Writing Lab
  • Literature Reviews from The Writing Center at University of North Carolina at Chapel Hill Gives several suggestions and descriptions of ways to organize your lit review.
  • Cochrane Handbook for Systematic Reviews of Interventions "The Cochrane Handbook for Systematic Reviews of Interventions is the official guide that describes in detail the process of preparing and maintaining Cochrane systematic reviews on the effects of healthcare interventions. "
  • Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) website "PRISMA is an evidence-based minimum set of items for reporting in systematic reviews and meta-analyses. PRISMA focuses on the reporting of reviews evaluating randomized trials, but can also be used as a basis for reporting systematic reviews of other types of research, particularly evaluations of interventions."
  • PRISMA Flow Diagram Generator Free tool that will generate a PRISMA flow diagram from a CSV file (sample CSV template provided). Please cite as: Haddaway, N. R., Page, M. J., Pritchard, C. C., & McGuinness, L. A. (2022). PRISMA2020: An R package and Shiny app for producing PRISMA 2020-compliant flow diagrams, with interactivity for optimised digital transparency and Open Synthesis. Campbell Systematic Reviews, 18, e1230. https://doi.org/10.1002/cl2.1230
  • Rayyan "Rayyan is a 100% FREE web application to help systematic review authors perform their job in a quick, easy and enjoyable fashion. Authors create systematic reviews, collaborate on them, maintain them over time and get suggestions for article inclusion."
  • Covidence Covidence is a tool to help manage systematic reviews (and create PRISMA flow diagrams). Note: UMass Amherst doesn't subscribe, but Covidence offers a free trial for one review of no more than 500 records. It is also set up for researchers to pay for each review.
  • PROSPERO - Systematic Review Protocol Registry "PROSPERO accepts registrations for systematic reviews, rapid reviews and umbrella reviews. PROSPERO does not accept scoping reviews or literature scans. Sibling PROSPERO sites registers systematic reviews of human studies and systematic reviews of animal studies."
  • Critical Appraisal Tools from JBI Joanna Briggs Institute at the University of Adelaide provides these checklists to help evaluate different types of publications that could be included in a review.
  • Systematic Review Toolbox "The Systematic Review Toolbox is a community-driven, searchable, web-based catalogue of tools that support the systematic review process across multiple domains. The resource aims to help reviewers find appropriate tools based on how they provide support for the systematic review process. Users can perform a simple keyword search (i.e. Quick Search) to locate tools, a more detailed search (i.e. Advanced Search) allowing users to select various criteria to find specific types of tools and submit new tools to the database. Although the focus of the Toolbox is on identifying software tools to support systematic reviews, other tools or support mechanisms (such as checklists, guidelines and reporting standards) can also be found."
  • Abstrackr Free, open-source tool that "helps you upload and organize the results of a literature search for a systematic review. It also makes it possible for your team to screen, organize, and manipulate all of your abstracts in one place." -From Center for Evidence Synthesis in Health
  • SRDR Plus (Systematic Review Data Repository: Plus) An open-source tool for extracting, managing, and archiving data, developed by the Center for Evidence Synthesis in Health at Brown University
  • RoB 2 Tool (Risk of Bias for Randomized Trials) A revised Cochrane risk of bias tool for randomized trials
  • Last Updated: Apr 2, 2024 4:46 PM
  • URL: https://guides.library.umass.edu/litreviews

© 2022 University of Massachusetts Amherst

Purdue Online Writing Lab (Purdue OWL), College of Liberal Arts

Writing a Literature Review

Welcome to the Purdue OWL

This page is brought to you by the OWL at Purdue University. When printing this page, you must include the entire legal notice.

Copyright ©1995-2018 by The Writing Lab & The OWL at Purdue and Purdue University. All rights reserved. This material may not be published, reproduced, broadcast, rewritten, or redistributed without permission. Use of this site constitutes acceptance of our terms and conditions of fair use.

A literature review is a document or section of a document that collects key sources on a topic and discusses those sources in conversation with each other (also called synthesis). The lit review is an important genre in many disciplines, not just literature (i.e., the study of works of literature such as novels and plays). When we say "literature review" or refer to "the literature," we are talking about the research (scholarship) in a given field. You will often see the terms "the research," "the scholarship," and "the literature" used mostly interchangeably.

Where, when, and why would I write a lit review?

There are a number of different situations where you might write a literature review, each with slightly different expectations; different disciplines, too, have field-specific expectations for what a literature review is and does. For instance, in the humanities, authors might include more overt argumentation and interpretation of source material in their literature reviews, whereas in the sciences, authors are more likely to report study designs and results in their literature reviews; these differences reflect these disciplines’ purposes and conventions in scholarship. You should always look at examples from your own discipline and talk to professors or mentors in your field to be sure you understand your discipline’s conventions, for literature reviews as well as for any other genre.

A literature review can be a part of a research paper or scholarly article, usually falling after the introduction and before the research methods sections. In these cases, the lit review just needs to cover scholarship that is important to the issue you are writing about; sometimes it will also cover key sources that informed your research methodology.

Lit reviews can also be standalone pieces, either as assignments in a class or as publications. In a class, a lit review may be assigned to help students familiarize themselves with a topic and with scholarship in their field, get an idea of the other researchers working on the topic they’re interested in, find gaps in existing research in order to propose new projects, and/or develop a theoretical framework and methodology for later research. As a publication, a lit review usually is meant to help make other scholars’ lives easier by collecting and summarizing, synthesizing, and analyzing existing research on a topic. This can be especially helpful for students or scholars getting into a new research area, or for directing an entire community of scholars toward questions that have not yet been answered.

What are the parts of a lit review?

Most lit reviews use a basic introduction-body-conclusion structure; if your lit review is part of a larger paper, the introduction and conclusion pieces may be just a few sentences while you focus most of your attention on the body. If your lit review is a standalone piece, the introduction and conclusion take up more space and give you a place to discuss your goals, research methods, and conclusions separately from where you discuss the literature itself.

Introduction:

  • An introductory paragraph that explains what your working topic and thesis is
  • A forecast of key topics or texts that will appear in the review
  • Potentially, a description of how you found sources and how you analyzed them for inclusion and discussion in the review (more often found in published, standalone literature reviews than in lit review sections in an article or research paper)

Body:

  • Summarize and synthesize: Give an overview of the main points of each source and combine them into a coherent whole
  • Analyze and interpret: Don't just paraphrase other researchers – add your own interpretations where possible, discussing the significance of findings in relation to the literature as a whole
  • Critically evaluate: Mention the strengths and weaknesses of your sources
  • Write in well-structured paragraphs: Use transition words and topic sentences to draw connections, comparisons, and contrasts.

Conclusion:

  • Summarize the key findings you have taken from the literature and emphasize their significance
  • Connect it back to your primary research question

How should I organize my lit review?

Lit reviews can take many different organizational patterns depending on what you are trying to accomplish with the review. Here are some examples:

  • Chronological : The simplest approach is to trace the development of the topic over time, which helps familiarize the audience with the topic (for instance if you are introducing something that is not commonly known in your field). If you choose this strategy, be careful to avoid simply listing and summarizing sources in order. Try to analyze the patterns, turning points, and key debates that have shaped the direction of the field. Give your interpretation of how and why certain developments occurred (as mentioned previously, this may not be appropriate in your discipline — check with a teacher or mentor if you’re unsure).
  • Thematic : If you have found some recurring central themes that you will continue working with throughout your piece, you can organize your literature review into subsections that address different aspects of the topic. For example, if you are reviewing literature about women and religion, key themes can include the role of women in churches and the religious attitude towards women.
  • Methodological : If your sources come from different disciplines or fields that use a variety of research methods, you can group studies by the approach taken, for example:
  • Qualitative versus quantitative research
  • Empirical versus theoretical scholarship
  • Divide the research by sociological, historical, or cultural sources
  • Theoretical : In many humanities articles, the literature review is the foundation for the theoretical framework. You can use it to discuss various theories, models, and definitions of key concepts. You can argue for the relevance of a specific theoretical approach or combine various theoretical concepts to create a framework for your research.

What are some strategies or tips I can use while writing my lit review?

Any lit review is only as good as the research it discusses; make sure your sources are well-chosen and your research is thorough. Don’t be afraid to do more research if you discover a new thread as you’re writing. More info on the research process is available in our "Conducting Research" resources .

As you’re doing your research, create an annotated bibliography (see our page on this type of document). Much of the information used in an annotated bibliography can also be used in a literature review, so you’ll not only be partially drafting your lit review as you research, but also developing your sense of the larger conversation going on among scholars, professionals, and any other stakeholders in your topic.

Usually you will need to synthesize research rather than just summarizing it. This means drawing connections between sources to create a picture of the scholarly conversation on a topic over time. Many student writers struggle to synthesize because they feel they don’t have anything to add to the scholars they are citing; here are some strategies to help you:

  • It often helps to remember that the point of these kinds of syntheses is to show your readers how you understand your research, to help them read the rest of your paper.
  • Writing teachers often say synthesis is like hosting a dinner party: imagine all your sources are together in a room, discussing your topic. What are they saying to each other?
  • Look at the in-text citations in each paragraph. Are you citing just one source for each paragraph? This usually indicates summary only. When you have multiple sources cited in a paragraph, you are more likely to be synthesizing them (not always, but often).
  • Read more about synthesis here.
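The one-source-per-paragraph check above can even be scripted. Here is a minimal, illustrative Python sketch (the citation pattern and the sample draft text are assumptions for illustration, not a standard tool) that counts how many distinct parenthetical citations appear in each paragraph:

```python
import re

def citations_per_paragraph(text):
    """Count distinct "(Author, year)" citations in each paragraph.

    One source per paragraph often signals summary only; several
    distinct sources in a paragraph suggest synthesis.
    """
    pattern = re.compile(r"\(([A-Z][A-Za-z]+(?: et al\.)?),\s*(\d{4})\)")
    return [len(set(pattern.findall(para))) for para in text.split("\n\n")]

# Hypothetical two-paragraph draft: the first only summarizes one source,
# the second brings three sources into conversation.
draft = (
    "Smith argues X (Smith, 2019). He also shows Y (Smith, 2019).\n\n"
    "Scholars disagree on Z (Lee, 2014), (Mallow, 2016) and (Park, 2020)."
)
print(citations_per_paragraph(draft))  # -> [1, 3]
```

A result like `[1, 3]` flags the first paragraph as a candidate for more synthesis.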

The most interesting literature reviews are often written as arguments (again, as mentioned at the beginning of the page, this is discipline-specific and doesn’t work for all situations). Often, the literature review is where you can establish your research as filling a particular gap or as relevant in a particular way. You have some chance to do this in your introduction in an article, but the literature review section gives a more extended opportunity to establish the conversation in the way you would like your readers to see it. You can choose the intellectual lineage you would like to be part of and whose definitions matter most to your thinking (mostly humanities-specific, but this goes for sciences as well). In addressing these points, you argue for your place in the conversation, which tends to make the lit review more compelling than a simple reporting of other sources.


A Literature Review Matrix and a Literature Synthesis Matrix will help you organize the articles and other sources you've found along the way and will help you prepare to write your literature review.

Writing a Literature Review

Unless you have a reason to present the literature chronologically (to show development over time, perhaps), the preferred method for organizing your literature is thematically. 

Step 1 : Identify your themes.

Step 2 : Identify the articles that address those themes.

Step 3 : Identify the similarities and differences among the articles within the themes.

A Literature Synthesis Matrix will be especially helpful in this process.


Tools to Help you Organize your Literature Review

Literature Review Matrix

This type of matrix will help you see the content of all of your articles at a glance. Each row represents an article, and each column represents an element of the articles. Typical columns can include things like

  • Research method
  • Sample size
  • Theoretical framework

However, the exact columns you choose depend on the elements of each study you want to discuss in your paper. You get to decide!

Here are a couple of examples to give you a better idea.

  • An evolutionary concept analysis of helicopter parenting. (Lee et al., 2014)
  • The use of video conferencing for persons with chronic conditions: A systematic review. (Mallow et al., 2016)
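A review matrix like the ones above is just tabular data, so it can live in a spreadsheet or even a few lines of code. Here is a minimal Python sketch (the column choices and all cell values are hypothetical, not taken from the cited articles) that builds a matrix and writes it out as a CSV you could open in any spreadsheet program:

```python
import csv
import io

# Each row is one article; each column is one element of the study
# you plan to discuss. Cell values below are invented placeholders.
columns = ["citation", "research_method", "sample_size", "theoretical_framework"]
matrix = [
    {"citation": "Lee et al., 2014", "research_method": "concept analysis",
     "sample_size": "n/a", "theoretical_framework": "evolutionary"},
    {"citation": "Mallow et al., 2016", "research_method": "systematic review",
     "sample_size": "n/a", "theoretical_framework": "chronic care"},
]

# Dump the matrix as CSV so it can be shared or opened as a spreadsheet.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=columns)
writer.writeheader()
writer.writerows(matrix)
print(buf.getvalue())
```

In practice most people keep the matrix in Excel or Word; the point is simply that one row per article and one column per study element is all the structure you need.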

Literature Synthesis Matrix

In essence, a synthesis matrix is a way to organize your literature by theme, which is generally the way writers organize their whole literature reviews. The real benefit is that it helps you identify the articles that talk about the same themes so that you can write about them together in your literature review.

North Carolina State has a very nice description and example of the process.

This YouTube video also explains the process.

Here is a template you can use (this one is in Word instead of PPT).
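The synthesis matrix described above boils down to a mapping from themes to the articles that address them. This minimal Python sketch (the themes and article labels are invented for illustration) shows the inversion step: you code each article with its themes, then flip the mapping so each theme lists the articles you can write about together:

```python
from collections import defaultdict

# Hypothetical theme coding: which themes each article addresses.
article_themes = {
    "Lee 2014": ["parenting styles", "measurement"],
    "Mallow 2016": ["telehealth", "measurement"],
    "Park 2020": ["telehealth"],
}

# Invert the mapping: for each theme, collect the articles that share it.
# These groups become the subsections of a thematic literature review.
synthesis_matrix = defaultdict(list)
for article, themes in article_themes.items():
    for theme in themes:
        synthesis_matrix[theme].append(article)

for theme, articles in sorted(synthesis_matrix.items()):
    print(f"{theme}: {', '.join(articles)}")
```

Each printed line corresponds to one row of a synthesis matrix, and to one candidate subsection of your review.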

  • Last Updated: May 22, 2024 1:17 PM
  • URL: https://libguides.massgeneral.org/citations-writing


Literature review: your definitive guide


Joanna Wilkinson

This is our ultimate guide on how to write a narrative literature review. It forms part of our Research Smarter series.

How do you write a narrative literature review?

Researchers worldwide are increasingly reliant on literature reviews. That’s because review articles provide you with a broad picture of the field, and help to synthesize published research that’s expanding at a rapid pace.

In some academic fields, researchers publish more literature reviews than original research papers. The graph below shows the substantial growth of narrative literature reviews in the Web of Science™, alongside the percentage increase of reviews when compared to all document types.

[Graph: growth of narrative literature reviews in the Web of Science, and reviews as a percentage of all document types]

It’s critical that researchers across all career levels understand how to produce an objective, critical summary of published research. This is no easy feat, but a necessary one. Professionally constructed literature reviews – whether written by a student in class or an experienced researcher for publication – should aim to add to the literature rather than detract from it.

To help you write a narrative literature review, we’ve put together some top tips in this blog post.

Best practice tips to write a narrative literature review:

  • Don’t miss a paper: tips for a thorough topic search
  • Identify key papers (and know how to use them)
  • Tips for working with co-authors
  • Find the right journal for your literature review using actual data
  • Discover literature review examples and templates

We’ll also provide an overview of all the products helpful for your next narrative review, including the Web of Science, EndNote™ and Journal Citation Reports™.

1. Don’t miss a paper: tips for a thorough topic search

Once you’ve settled on your research question, coming up with a good set of keywords to find papers on your topic can be daunting. This isn’t surprising. Put simply, if you fail to include a relevant paper when you write a narrative literature review, the omission will probably get picked up by your professor or peer reviewers. The end result will likely be a low mark or an unpublished manuscript, neither of which will do justice to your many months of hard work.

Research databases and search engines are an integral part of any literature search. It’s important you utilize as many options available through your library as possible. This will help you search an entire discipline (as well as across disciplines) for a thorough narrative review.

We provide a short summary of the various databases and search engines in an earlier Research Smarter blog. These include the Web of Science, Science.gov and the Directory of Open Access Journals (DOAJ).


Searching the Web of Science

The Web of Science is a multidisciplinary research engine that contains over 170 million papers from more than 250 academic disciplines. All of the papers in the database are interconnected via citations. That means once you get started with your keyword search, you can follow the trail of cited and citing papers to efficiently find all the relevant literature. This is a great way to ensure you’re not missing anything important when you write a narrative literature review.

We recommend starting your search in the Web of Science Core Collection™. This database covers more than 21,000 carefully selected journals. It is a trusted source to find research papers, and discover top authors and journals (read more about its coverage here).

Learn more about exploring the Core Collection in our blog, How to find research papers: five tips every researcher should know . Our blog covers various tips, including how to:

  • Perform a topic search (and select your keywords)
  • Explore the citation network
  • Refine your results (refining your search results by reviews, for example, will help you avoid duplication of work, as well as identify trends and gaps in the literature)
  • Save your search and set up email alerts

Try our tips on the Web of Science now.

2. Identify key papers (and know how to use them)

As you explore the Web of Science, you may notice that certain papers are marked as “Highly Cited.” These papers can play a significant role when you write a narrative literature review.

Highly Cited papers are recently published papers getting the most attention in your field right now. They form the top 1% of papers based on the number of citations received, compared to other papers published in the same field in the same year.

You will want to identify Highly Cited research as a group of papers. This group will help guide your analysis of the future of the field and opportunities for future research. This is an important component of your conclusion.

“Writing reviews is hard work… [it] not only organizes published papers, but also positions them in the academic process and presents the future direction.” (Prof. Susumu Kitagawa, Highly Cited Researcher, Kyoto University)

3. Tips for working with co-authors

Writing a narrative review on your own is hard, but it can be even more challenging if you’re collaborating with a team, especially if your coauthors are working across multiple locations. Luckily, reference management software can improve the coordination between you and your co-authors—both around the department and around the world.

We’ve written about how to use EndNote’s Cite While You Write feature, which will help you save hundreds of hours when writing research. Here, we discuss the features that give you greater ease and control when collaborating with your colleagues.

Use EndNote for narrative reviews

Sharing references is essential for successful collaboration. With EndNote, you can store and share as many references, documents and files as you need with up to 100 people using the software.

You can share simultaneous access to one reference library, regardless of your colleague’s location or organization. You can also choose the type of access each user has on an individual basis. For example, Read-Write access means a select colleague can add and delete references, annotate PDF articles and create custom groups. They’ll also be able to see up to 500 of the team’s most recent changes to the reference library. Read-only is also an option for individuals who don’t need that level of access.

EndNote helps you overcome research limitations by synchronizing library changes every 15 minutes. That means your team can stay up-to-date at any time of the day, supporting an easier, more successful collaboration.

Start your free EndNote trial today.

4. Find the right journal for your literature review

Finding the right journal for your literature review can be a particular pain point for those of you who want to publish. The expansion of scholarly journals has made the task extremely difficult, and can potentially delay the publication of your work by many months.

We’ve written a blog about how you can find the right journal for your manuscript using a rich array of data. You can read our blog here, or head straight to EndNote’s Manuscript Matcher or Journal Citation Reports to try out the best tools for the job.

5. Discover literature review examples and templates

There are a few tips we haven’t covered in this blog, including how to decide on an area of research, develop an interesting storyline, and highlight gaps in the literature. We’ve listed a few blogs here that might help you with this, alongside some literature review examples and outlines to get you started.

Literature Review examples:

  • Aggregation-induced emission
  • Development and applications of CRISPR-Cas9 for genome engineering
  • Object based image analysis for remote sensing

(Make sure you download the free EndNote™ Click browser plugin to access the full-text PDFs).

Templates and outlines:

  • Learn how to write a review of literature , Univ. of Wisconsin – Madison
  • Structuring a literature review , Australian National University
  • Matrix Method for Literature Review: The Review Matrix , Duquesne University

Additional resources:

  • Ten simple rules for writing a literature review , Editor, PLoS Computational Biology
  • Video: How to write a literature review , UC San Diego Psychology


The Sheridan Libraries

  • Write a Literature Review


Introduction

Literature reviews take time. Here is some general information to know before you start.

  • VIDEO -- This video is a great overview of the entire process (2020; North Carolina State University Libraries). The transcript is included. This is for everyone; ignore the mention of "graduate students". It runs 9.5 minutes, and every second is important.
  • OVERVIEW -- Read this page from Purdue's OWL. It's not long, and it gives some tips to fill in what you just learned from the video.
  • NOT A RESEARCH ARTICLE -- A literature review follows a different style, format, and structure from a research article.

Steps to Completing a Literature Review

[Infographic: steps to completing a literature review]

  • Last Updated: Sep 26, 2023 10:25 AM
  • URL: https://guides.library.jhu.edu/lit-review

7 open source tools to make literature reviews easy


Opensource.com

A good literature review is critical for academic research in any field, whether it is for a research article, a critical review for coursework, or a dissertation. In a recent article, I presented detailed steps for doing a literature review using open source software.

The following is a brief summary of seven free and open source software tools described in that article that will make your next literature review much easier.

1. GNU Linux

Most literature reviews are accomplished by graduate students working in research labs in universities. For absurd reasons, graduate students often have the worst computers on campus. They are often old, slow, and clunky Windows machines that have been discarded and recycled from the undergraduate computer labs. Installing a flavor of GNU Linux will breathe new life into these outdated PCs. There are more than 100 distributions, all of which can be downloaded and installed for free. Most popular Linux distributions come with a "try-before-you-buy" feature. For example, with Ubuntu you can make a bootable USB stick that allows you to test-run the Ubuntu desktop experience without interfering in any way with your PC configuration. If you like the experience, you can use the stick to install Ubuntu on your machine permanently.

2. Firefox

Linux distributions generally come with a free web browser, and the most popular is Firefox. Two Firefox plugins that are particularly useful for literature reviews are Unpaywall and Zotero. Keep reading to learn why.

3. Unpaywall

Often one of the hardest parts of a literature review is gaining access to the papers you want to read for your review. The unintended consequence of copyright restrictions and paywalls is that access to the peer-reviewed literature has narrowed to the point that even Harvard University is challenged to pay for it. Fortunately, there are a lot of open access articles—about a third of the literature is free (and the percentage is growing). Unpaywall is a Firefox plugin that enables researchers to click a green tab on the side of the browser and skip the paywall on millions of peer-reviewed journal articles. This makes finding accessible copies of articles much faster than searching each database individually. Unpaywall is fast, free, and legal, as it accesses many of the open access sites that I covered in my paper on using open source in lit reviews.

4. Zotero

Formatting references is the most tedious of academic tasks. Zotero can save you from ever doing it again. It operates as an Android app, desktop program, and a Firefox plugin (which I recommend). It is a free, easy-to-use tool to help you collect, organize, cite, and share research. It replaces the functionality of proprietary packages such as RefWorks, EndNote, and Papers for zero cost. Zotero can auto-add bibliographic information directly from websites. In addition, it can scrape bibliographic data from PDF files. Notes can be easily added on each reference. Finally, and most importantly, it can import and export bibliography databases in all publishers' various formats. With this feature, you can export bibliographic information to paste into a document editor for a paper or thesis—or even to a wiki for dynamic collaborative literature reviews (see tool #7 for more on the value of wikis in lit reviews).

5. LibreOffice

Your thesis or academic article can be written conventionally with the free office suite LibreOffice, which operates similarly to Microsoft's Office products but respects your freedom. Zotero has a word processor plugin to integrate directly with LibreOffice. LibreOffice is more than adequate for the vast majority of academic paper writing.

6. LaTeX

If LibreOffice is not enough for your layout needs, you can take your paper writing one step further with LaTeX, a high-quality typesetting system specifically designed for producing technical and scientific documentation. LaTeX is particularly useful if your writing has a lot of equations in it. Also, Zotero libraries can be directly exported to BibTeX files for use with LaTeX.
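As a minimal illustration of that Zotero-to-LaTeX workflow (the `.bib` entry below is a made-up example, not a real Zotero export), a document cites entries from an exported BibTeX file like this:

```latex
% references.bib (e.g. exported from Zotero) -- a hypothetical entry:
% @article{doe2020,
%   author  = {Doe, Jane},
%   title   = {An Example Article},
%   journal = {Journal of Examples},
%   year    = {2020},
% }

\documentclass{article}
\begin{document}
Prior work has examined this question \cite{doe2020}.

\bibliographystyle{plain}
\bibliography{references} % reads references.bib
\end{document}
```

Running `pdflatex`, then `bibtex`, then `pdflatex` twice resolves the citation and builds the bibliography automatically.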

7. MediaWiki

If you want to leverage the open source way to get help with your literature review, you can facilitate a dynamic collaborative literature review. A wiki is a website that allows anyone to add, delete, or revise content directly using a web browser. MediaWiki is free software that enables you to set up your own wikis.

Researchers can (in decreasing order of complexity): 1) set up their own research group wiki with MediaWiki, 2) utilize wikis already established at their universities (e.g., Aalto University), or 3) use wikis dedicated to areas that they research. For example, several university research groups that focus on sustainability (including mine) use Appropedia, which is set up for collaborative solutions on sustainability, appropriate technology, poverty reduction, and permaculture.

Using a wiki makes it easy for anyone in the group to keep track of the status of and update literature reviews (both current and older or from other researchers). It also enables multiple members of the group to easily collaborate on a literature review asynchronously. Most importantly, it enables people outside the research group to help make a literature review more complete, accurate, and up-to-date.

Wrapping up

Free and open source software can cover the entire lit review toolchain, meaning there's no need for anyone to use proprietary solutions. Do you use other libre tools for making literature reviews or other academic work easier? Please let us know your favorites in the comments.

Joshua Pearce


SCI Journal

10 Best Literature Review Tools for Researchers


Boost your research game with these Best Literature Review Tools for Researchers! Uncover hidden gems, organize your findings, and ace your next research paper!

Conducting literature reviews poses challenges for researchers due to the overwhelming volume of information available and the lack of efficient methods to manage and analyze it.

Researchers struggle to identify key sources, extract relevant information, and maintain accuracy while manually conducting literature reviews. This leads to inefficiency, errors, and difficulty in identifying gaps or trends in existing literature.

Advancements in technology have resulted in a variety of literature review tools. These tools streamline the process, offering features like automated searching, filtering, citation management, and research data extraction. They save time, improve accuracy, and provide valuable insights for researchers. 

In this article, we present a curated list of the 10 best literature review tools, empowering researchers to make informed choices and revolutionize their systematic literature review process.

Table of Contents

Top 10 Literature Review Tools for Researchers: In A Nutshell (2023)

#1. Semantic Scholar – A free, AI-powered research tool for scientific literature


Semantic Scholar is a cutting-edge literature review tool that researchers rely on for its comprehensive access to academic publications. With its advanced AI algorithms and extensive database, it simplifies the discovery of relevant research papers. 

By employing semantic analysis, users can explore scholarly articles based on context and meaning, making it a go-to resource for scholars across disciplines. 

Additionally, Semantic Scholar offers personalized recommendations and alerts, ensuring researchers stay updated with the latest developments. However, users should be cautious of potential limitations. 

Not all scholarly content may be indexed, and occasional false positives or inaccurate associations can occur. Furthermore, the tool primarily focuses on computer science and related fields, potentially limiting coverage in other disciplines. 

Researchers should be mindful of these considerations and supplement Semantic Scholar with other reputable resources for a comprehensive literature review. Despite these caveats, Semantic Scholar remains a valuable tool for streamlining research and staying informed.

#2. Elicit – Research assistant using language models like GPT-3


Elicit is a game-changing literature review tool that has gained popularity among researchers worldwide. With its user-friendly interface and extensive database of scholarly articles, it streamlines the research process, saving time and effort. 

The tool employs advanced algorithms to provide personalized recommendations, ensuring researchers discover the most relevant studies for their field. Elicit also promotes collaboration by enabling users to create shared folders and annotate articles.

However, users should be cautious when using Elicit. It is important to verify the credibility and accuracy of the sources found through the tool, as the database encompasses a wide range of publications. 

Additionally, occasional glitches in the search function have been reported, leading to incomplete or inaccurate results. While Elicit offers tremendous benefits, researchers should remain vigilant and cross-reference information to ensure a comprehensive literature review.

#3. Scite.Ai – Your personal research assistant


Scite.Ai is a popular literature review tool that revolutionizes the research process for scholars. With its innovative citation analysis feature, researchers can evaluate the credibility and impact of scientific articles, making informed decisions about their inclusion in their own work. 

By assessing the context in which citations are used, Scite.Ai ensures that the sources selected are reliable and of high quality, enabling researchers to establish a strong foundation for their research.

However, while Scite.Ai offers numerous advantages, there are a few aspects to be cautious about. As with any data-driven tool, occasional errors or inaccuracies may arise, necessitating researchers to cross-reference and verify results with other reputable sources. 

Moreover, Scite.Ai’s coverage may be limited in certain subject areas and languages, with a possibility of missing relevant studies, especially in niche fields or non-English publications. 

Therefore, researchers should supplement the use of Scite.Ai with additional resources to ensure comprehensive literature coverage and avoid any potential gaps in their research.

Scite.Ai offers the following paid plans:

  • Monthly Plan: $20
  • Yearly Plan: $12


#4. DistillerSR – Literature Review Software


DistillerSR is a powerful literature review tool trusted by researchers for its user-friendly interface and robust features. With its advanced search capabilities, researchers can quickly find relevant studies from multiple databases, saving time and effort. 

The tool offers comprehensive screening and data extraction functionalities, streamlining the review process and improving the reliability of findings. Real-time collaboration features also facilitate seamless teamwork among researchers.

While DistillerSR offers numerous advantages, there are a few considerations. Users should invest time in understanding the tool’s features and functionalities to maximize its potential. Additionally, the pricing structure may be a factor for individual researchers or small teams with limited budgets.

Despite occasional technical glitches reported by some users, the developers actively address these issues through updates and improvements, ensuring a better user experience. 

Overall, DistillerSR empowers researchers to navigate the vast sea of information, enhancing the quality and efficiency of literature reviews while fostering collaboration among research teams .

#5. Rayyan – AI Powered Tool for Systematic Literature Reviews


Rayyan is a powerful literature review tool that simplifies the research process for scholars and academics. With its user-friendly interface and efficient management features, Rayyan is highly regarded by researchers worldwide. 

It allows users to import and organize large volumes of scholarly articles, making it easier to identify relevant studies for their research projects. The tool also facilitates seamless collaboration among team members, enhancing productivity and streamlining the research workflow. 

However, it’s important to be aware of a few aspects. The free version of Rayyan has limitations, and upgrading to a premium subscription may be necessary for additional functionalities. 

Users should also be mindful of occasional technical glitches and compatibility issues, promptly reporting any problems. Despite these considerations, Rayyan remains a valuable asset for researchers, providing an effective solution for literature review tasks.

Rayyan offers both free and paid plans:

  • Professional: $8.25/month
  • Student: $4/month
  • Pro Team: $8.25/month
  • Team+: $24.99/month


#6. Consensus – Use AI to find you answers in scientific research


Consensus is a cutting-edge literature review tool that has become a go-to choice for researchers worldwide. Its intuitive interface and powerful capabilities make it a preferred tool for navigating and analyzing scholarly articles. 

With Consensus, researchers can save significant time by efficiently organizing and accessing relevant research material. People consider Consensus for several reasons.

Its advanced search algorithms and filters help researchers sift through vast amounts of information, ensuring they focus on the most relevant articles. By streamlining the literature review process, Consensus allows researchers to extract valuable insights and accelerate their research progress.

However, there are a few factors to watch out for when using Consensus. As with any automated tool, researchers should exercise caution and independently verify the accuracy and relevance of the generated results. Complex or niche topics may present challenges, resulting in limited search results. Researchers should also supplement Consensus with manual searches to ensure comprehensive coverage of the literature.

Overall, Consensus is a valuable resource for researchers seeking to optimize their literature review process. By leveraging its features alongside critical thinking and manual searches, researchers can enhance the efficiency and effectiveness of their work, advancing their research endeavors to new heights.

Consensus offers both free and paid plans:

  • Premium: $9.99/month
  • Enterprise: Custom


#7. RAx – AI-powered reading assistant


RAx is a revolutionary literature review tool that has transformed the research process for scholars worldwide. With its user-friendly interface and advanced features, it offers a vast database of academic publications across various disciplines, providing access to relevant and up-to-date literature.

Using advanced algorithms and machine learning, RAx delivers personalized recommendations, saving researchers time and effort in their literature search.

However, researchers should be cautious of potential biases in the recommendation system and supplement their search with manual verification to ensure a comprehensive review.

Additionally, occasional inaccuracies in metadata have been reported, making it essential for users to cross-reference information with reliable sources. Despite these considerations, RAx remains an invaluable tool for enhancing the efficiency and quality of literature reviews.

RAx offers both free and paid plans, with a 50% discount as of July 2023:

  • Premium: $6/month, discounted to $3/month
  • Premium with Copilot: $8/month, discounted to $4/month


#8. Lateral – Advance your research with AI


“Lateral” is a revolutionary literature review tool trusted by researchers worldwide. With its user-friendly interface and powerful search capabilities, it simplifies the process of gathering and analyzing scholarly articles. 

By leveraging advanced algorithms and machine learning, Lateral saves researchers precious time by retrieving relevant articles and uncovering new connections between them, fostering interdisciplinary exploration.

While Lateral provides numerous benefits, users should exercise caution. It is advisable to cross-reference its findings with other sources to ensure a comprehensive review. 

Additionally, researchers must be mindful of potential biases introduced by the tool’s algorithms and should critically evaluate and interpret the results. 

Despite these considerations, Lateral remains an indispensable resource, empowering researchers to delve deeper into their fields of study and make valuable contributions to the academic community.

Lateral offers both free and paid plans:

  • Premium: $10.98
  • Pro: $27.46

Credits: Lateral, Best Literature Review Tools for Researchers

#9. Iris AI – Introducing the researcher workspace

Credits: Iris AI, Best Literature Review Tools for Researchers

Iris AI is an innovative literature review tool that has transformed the research process for academics and scholars. With its advanced artificial intelligence capabilities, Iris AI offers a seamless and efficient way to navigate through a vast array of academic papers and publications. 

Researchers are drawn to this tool because it saves valuable time by automating the tedious task of literature review and provides comprehensive coverage across multiple disciplines. 

Its intelligent recommendation system suggests related articles, enabling researchers to discover hidden connections and broaden their knowledge base. However, caution should be exercised while using Iris AI. 

While the tool excels at surfacing relevant papers, researchers should independently evaluate the quality and validity of the sources to ensure the reliability of their work. 

It’s important to note that Iris AI may occasionally miss niche or lesser-known publications, necessitating a supplementary search using traditional methods. 

Additionally, being an algorithm-based tool, there is a possibility of false positives or missed relevant articles due to the inherent limitations of automated text analysis. Nevertheless, Iris AI remains an invaluable asset for researchers, enhancing the quality and efficiency of their research endeavors.

Iris AI offers different pricing plans to cater to various user needs:

  • Basic: Free
  • Premium: Monthly ($82.41), Quarterly ($222.49), and Annual ($791.07)

Credits: Iris AI, Best Literature Review Tools for Researchers

#10. Scholarcy – Summarize your literature through AI

Credits: Scholarcy, Best Literature Review Tools for Researchers

Scholarcy is a powerful literature review tool that helps researchers streamline their work. By employing advanced algorithms and natural language processing, it efficiently analyzes and summarizes academic papers, saving researchers valuable time. 

Scholarcy’s ability to extract key information and generate concise summaries makes it an attractive option for scholars looking to quickly grasp the main concepts and findings of multiple papers.

However, it is important to exercise caution when relying solely on Scholarcy. While it provides a useful starting point, engaging with the original research papers is crucial to ensure a comprehensive understanding. 

Scholarcy’s automated summarization may not capture the nuanced interpretations or contextual information presented in the full text. 

Researchers should also be aware that certain types of documents, particularly those with heavy mathematical or technical content, may pose challenges for the tool. 

Despite these considerations, Scholarcy remains a valuable resource for researchers seeking to enhance their literature review process and improve overall efficiency.

Scholarcy offers the following pricing plans:

  • Browser Extension and Flashcards: Free 
  • Personal Library: $9.99
  • Academic Institution License: $8K+

Credits: Scholarcy, Best Literature Review Tools for Researchers

Final Thoughts

In conclusion, conducting a comprehensive literature review is a crucial aspect of any research project, and the availability of reliable and efficient tools can greatly facilitate this process for researchers. This article has explored the top 10 literature review tools that have gained popularity among researchers.

Moreover, the rise of AI-powered tools like Iris.ai and Sci.ai promises to revolutionize the literature review process by automating various tasks and enhancing research efficiency. 

Ultimately, the choice of literature review tool depends on individual preferences and research needs, but the tools presented in this article serve as valuable resources to enhance the quality and productivity of research endeavors. 

Researchers are encouraged to explore and utilize these tools to stay at the forefront of knowledge in their respective fields and contribute to the advancement of science and academia.

Q1. What are literature review tools for researchers?

Literature review tools for researchers are software or online platforms designed to assist researchers in efficiently conducting literature reviews. These tools help researchers find, organize, analyze, and synthesize relevant academic papers and other sources of information.

Q2. What criteria should researchers consider when choosing literature review tools?

When choosing literature review tools, researchers should consider factors such as the tool’s search capabilities, database coverage, user interface, collaboration features, citation management, annotation and highlighting options, integration with reference management software, and data extraction capabilities. 

It’s also essential to consider the tool’s accessibility, cost, and technical support.

Q3. Are there any literature review tools specifically designed for systematic reviews or meta-analyses?

Yes, there are literature review tools that cater specifically to systematic reviews and meta-analyses, which involve a rigorous and structured approach to reviewing existing literature. These tools often provide features tailored to the specific needs of these methodologies, such as:

Screening and eligibility assessment: Systematic review tools typically offer functionalities for screening and assessing the eligibility of studies based on predefined inclusion and exclusion criteria. This streamlines the process of selecting relevant studies for analysis.

Data extraction and quality assessment: These tools often include templates and forms to facilitate data extraction from selected studies. Additionally, they may provide features for assessing the quality and risk of bias in individual studies.

Meta-analysis support: Some literature review tools include statistical analysis features that assist in conducting meta-analyses. These features can help calculate effect sizes, perform statistical tests, and generate forest plots or other visual representations of the meta-analytic results.

Reporting assistance: Many tools provide templates or frameworks for generating systematic review reports, ensuring compliance with established guidelines such as PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses).
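The meta-analysis support described above ultimately rests on a small amount of arithmetic. As a minimal, tool-agnostic illustration (a generic fixed-effect sketch with made-up effect sizes, not any specific product's implementation):

```python
import math

def fixed_effect_meta(effects, variances):
    """Combine per-study effect sizes via inverse-variance weighting.

    Returns the pooled effect and its 95% confidence interval
    (normal approximation).
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))             # standard error of pooled effect
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)  # 95% confidence interval
    return pooled, ci

# Hypothetical effect sizes (e.g. standardized mean differences) from three studies
effects = [0.30, 0.45, 0.25]
variances = [0.02, 0.03, 0.015]
pooled, (lo, hi) = fixed_effect_meta(effects, variances)
```

Real tools add random-effects models, heterogeneity statistics, and forest plots on top of this core calculation.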

Q4. Can literature review tools help with organizing and annotating collected references?

Yes, literature review tools often come equipped with features to help researchers organize and annotate collected references. Some common functionalities include:

Reference management: These tools enable researchers to import references from various sources, such as databases or PDF files, and store them in a central library. They typically allow you to create folders or tags to organize references based on themes or categories.

Annotation capabilities: Many tools provide options for adding annotations, comments, or tags to individual references or specific sections of research articles. This helps researchers keep track of important information, highlight key findings, or note potential connections between different sources.

Full-text search: Literature review tools often offer full-text search functionality, allowing you to search within the content of imported articles or documents. This can be particularly useful when you need to locate specific information or keywords across multiple references.

Integration with citation managers: Some literature review tools integrate with popular citation managers like Zotero, Mendeley, or EndNote, allowing seamless transfer of references and annotations between platforms.

By leveraging these features, researchers can streamline the organization and annotation of their collected references, making it easier to retrieve relevant information during the literature review process.


2012-2024 © scijournal.org


MAXQDA: The All-in-one Literature Review Software

MAXQDA is the best choice for a comprehensive literature review. It works with a wide range of data types and offers powerful tools for literature review, such as reference management and qualitative, vocabulary, and text analysis tools.


As your all-in-one literature review software, MAXQDA can be used to manage your entire research project. Easily import data from texts, interviews, focus groups, PDFs, web pages, spreadsheets, articles, e-books, and even social media data. Connect the reference management system of your choice with MAXQDA to easily import bibliographic data. Organize your data in groups, link relevant quotes to each other, keep track of your literature summaries, and share and compare work with your team members. Your project file stays flexible and you can expand and refine your category system as you go to suit your research.

Developed by and for researchers – since 1989


Having used several qualitative data analysis software programs, there is no doubt in my mind that MAXQDA has advantages over all the others. In addition to its remarkable analytical features for harnessing data, MAXQDA’s stellar customer service, online tutorials, and global learning community make it a user friendly and top-notch product.

Sally S. Cohen – NYU Rory Meyers College of Nursing

Literature Review is Faster and Smarter with MAXQDA


Easily import your literature review data

With a literature review software like MAXQDA, you can easily import bibliographic data from reference management programs for your literature review. MAXQDA can work with all reference management programs that can export their databases in RIS format, which is a standard format for bibliographic information. Like MAXQDA, these reference managers use project files, containing all collected bibliographic information, such as author, title, links to websites, keywords, abstracts, and other information. In addition, you can easily import the corresponding full texts. Upon import, all documents will be automatically pre-coded to facilitate your literature review at a later stage.
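RIS is a simple tag-based plain-text format, which is why so many reference managers can exchange it. A minimal sketch of a RIS reader (illustrative only, not MAXQDA's importer) shows how little structure is involved:

```python
def parse_ris(text):
    """Parse RIS-formatted text into a list of records (dicts of tag -> values)."""
    records, current = [], {}
    for line in text.splitlines():
        if line.startswith("ER"):        # "ER  -" marks the end of a record
            if current:
                records.append(current)
            current = {}
            continue
        tag, sep, value = line.partition("  - ")
        if sep:                          # skip blank or malformed lines
            current.setdefault(tag.strip(), []).append(value.strip())
    return records

# A one-record sample; tags: TY = type, AU = author, TI = title, PY = year
sample = "\n".join([
    "TY  - JOUR",
    "AU  - Parasuraman, A.",
    "TI  - SERVQUAL: A multiple-item scale for measuring consumer perceptions of service quality",
    "PY  - 1988",
    "ER  - ",
])
records = parse_ris(sample)
```

Repeatable tags (such as multiple `AU` authors) are why each tag maps to a list of values rather than a single string.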

Capture your ideas while analyzing your literature

Great ideas will often occur to you while you’re doing your literature review. Using MAXQDA as your literature review software, you can create memos to store your ideas, such as research questions and objectives, or you can use memos for paraphrasing passages into your own words. By attaching memos like post-it notes to text passages, texts, document groups, images, audio/video clips, and of course codes, you can easily retrieve them at a later stage. Particularly useful for literature reviews are free memos written during the course of work from which passages can be copied and inserted into the final text.


Find concepts important to your generated literature review

When generating a literature review you might need to analyze a large amount of text. Luckily MAXQDA as the #1 literature review software offers Text Search tools that allow you to explore your documents without reading or coding them first. Automatically search for keywords (or dictionaries of keywords), such as important concepts for your literature review, and automatically code them with just a few clicks. Document variables that were automatically created during the import of your bibliographic information can be used for searching and retrieving certain text segments. MAXQDA’s powerful Coding Query allows you to analyze the combination of activated codes in different ways.
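Dictionary-based auto-coding of this kind is conceptually simple: scan each text segment for keyword hits and attach the matching code. A minimal, tool-agnostic sketch (the code names and keywords below are invented for illustration):

```python
import re

# Hypothetical code dictionary: code name -> keywords that trigger it
CODE_DICT = {
    "service quality": ["servqual", "service quality"],
    "methodology": ["survey", "interview", "questionnaire"],
}

def auto_code(segments, code_dict):
    """Return {segment_index: [codes]} for segments containing dictionary keywords."""
    coded = {}
    for i, seg in enumerate(segments):
        text = seg.lower()
        hits = [code for code, kws in code_dict.items()
                if any(re.search(r"\b" + re.escape(k) + r"\b", text) for k in kws)]
        if hits:
            coded[i] = hits
    return coded

segments = [
    "The SERVQUAL scale measures consumer perceptions of service quality.",
    "Data were collected through a questionnaire.",
    "This section reviews prior definitions.",
]
codes = auto_code(segments, CODE_DICT)
```

Word boundaries (`\b`) keep a keyword like "survey" from matching inside longer words, which is the kind of detail production tools handle for you.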

Aggregate your literature review

When conducting a literature review you can easily get lost. But with MAXQDA as your literature review software, you will never lose track of the bigger picture. Among other tools, MAXQDA’s overview and summary tables are especially useful for aggregating your literature review results. MAXQDA offers overview tables for almost everything: codes, memos, coded segments, links, and so on. With MAXQDA literature review tools you can create compressed summaries of sources that can be effectively compared and represented, and with just one click you can easily export your overview and summary tables and integrate them into your literature review report.


Powerful and easy-to-use literature review tools

Quantitative aspects can also be relevant when conducting a literature review analysis. Using MAXQDA as your literature review software enables you to employ a vast range of procedures for the quantitative evaluation of your material. You can sort sources according to document variables, compare amounts with frequency tables and charts, and much more. Make sure you don’t miss the word frequency tools of MAXQDA’s add-on module for quantitative content analysis. Included are tools for visual text exploration, content analysis, vocabulary analysis, dictionary-based analysis, and more that facilitate the quantitative analysis of terms and their semantic contexts.
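Frequency tables of terms are the kind of output such word frequency tools produce; the core computation is small. A minimal sketch using only Python's standard library (the stopword list here is a tiny illustrative one):

```python
from collections import Counter
import re

STOPWORDS = {"the", "of", "and", "a", "to", "in", "is", "for"}

def word_frequencies(text, top_n=5):
    """Return the top_n most frequent non-stopword terms in text."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return counts.most_common(top_n)

abstract = ("Literature review software supports literature search, "
            "literature organization, and review reporting.")
freqs = word_frequencies(abstract)
```

Real vocabulary analysis adds lemmatization, larger stopword dictionaries, and per-document comparisons, but the frequency table itself is this simple.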

Visualize your literature review

As an all-in-one literature review software, MAXQDA offers a variety of visual tools that are tailor-made for qualitative research and literature reviews. Create stunning visualizations to analyze your material. Of course, you can export your visualizations in various formats to enrich your literature review analysis report. Work with word clouds to explore the central themes of a text and key terms that are used, create charts to easily compare the occurrences of concepts and important keywords, or make use of the graphical representation possibilities of MAXMaps, which in particular permit the creation of concept maps. Thanks to the interactive connection between your visualizations with your MAXQDA data, you’ll never lose sight of the big picture.


AI Assist: literature review software meets AI

AI Assist – your virtual research assistant – supports your literature review with various tools. AI Assist simplifies your work by automatically analyzing and summarizing elements of your research project and by generating suggestions for subcodes. No matter which AI tool you use – you can customize your results to suit your needs.

Free tutorials and guides on literature review

MAXQDA offers a variety of free learning resources for literature review, making it easy for both beginners and advanced users to learn how to use the software. From free video tutorials and webinars to step-by-step guides and sample projects, these resources provide a wealth of information to help you understand the features and functionality of MAXQDA for literature review. For beginners, the software’s user-friendly interface and comprehensive help center make it easy to get started with your data analysis, while advanced users will appreciate the detailed guides and tutorials that cover more complex features and techniques. Whether you’re just starting out or are an experienced researcher, MAXQDA’s free learning resources will help you get the most out of your literature review.


FAQ: Literature Review Software

Literature review software is a tool designed to help researchers efficiently manage and analyze the existing body of literature relevant to their research topic. MAXQDA, a versatile qualitative data analysis tool, can be instrumental in this process.

Literature review software, like MAXQDA, typically includes features such as data import and organization, coding and categorization, advanced search capabilities, data visualization tools, and collaboration features. These features facilitate the systematic review and analysis of relevant literature.

Literature review software, including MAXQDA, can assist in qualitative data interpretation by enabling researchers to organize, code, and categorize relevant literature. This organized data can then be analyzed to identify trends, patterns, and themes, helping researchers draw meaningful insights from the literature they’ve reviewed.

Yes, literature review software like MAXQDA is suitable for researchers of all levels of experience. It offers user-friendly interfaces and extensive support resources, making it accessible to beginners while providing advanced features that cater to the needs of experienced researchers.

Getting started with literature review software, such as MAXQDA, typically involves downloading and installing the software, importing your relevant literature, and exploring the available features. Many software providers offer tutorials and documentation to help users get started quickly.

For students, MAXQDA can be an excellent literature review software choice. Its user-friendly interface, comprehensive feature set, and educational discounts make it a valuable tool for students conducting literature reviews as part of their academic research.

MAXQDA is available for both Windows and Mac users, making it a suitable choice for Mac users looking for literature review software. It offers a consistent and feature-rich experience on Mac operating systems.

When it comes to literature review software, MAXQDA is widely regarded as one of the best choices. Its robust feature set, user-friendly interface, and versatility make it a top pick for researchers conducting literature reviews.

Yes, literature reviews can be conducted without software. However, using literature review software like MAXQDA can significantly streamline and enhance the process by providing tools for efficient data management, analysis, and visualization.

lit review tools

Write a Literature Review (VCU Libraries guide)

  • Last Updated: Oct 16, 2023 1:53 PM
  • URL: https://guides.library.vcu.edu/lit-review

Tools for Academic Writing: Literature Review (Park University)

  • URL: https://library.park.edu/writing

What is a literature review?

A literature review is a discussion of previously published information on a particular topic, providing summary and connections to help readers understand the research that has been completed on a subject and why it is important. Unlike a research paper, a literature review does not develop a new argument, instead focusing on what has been argued or proven in past papers. However, a literature review should not just be an annotated bibliography that lists the sources found; the literature review should be organized thematically as a cohesive paper.

Why write a literature review?

Literature reviews provide you with a handy guide to a particular topic. If you have limited time to conduct research, literature reviews can give you an overview or act as a stepping stone. For professionals, they are useful reports that keep them up to date with what is current in the field. For scholars, the depth and breadth of the literature review emphasizes the credibility of the writer in his or her field. Literature reviews also provide a solid background for a research paper’s investigation. Comprehensive knowledge of the literature of the field is essential to most research papers.

Who writes literature reviews?

Literature reviews are sometimes written in the humanities, but more often in the sciences and social sciences. In scientific reports and longer papers, they constitute one section of the work. Literature reviews can also be written as stand-alone papers.

How Should I Organize My Literature Review?

Here are some ways to organize a literature review from Purdue OWL: 

  • Chronological:  The simplest approach is to trace the development of the topic over time, which helps familiarize the audience with the topic (for instance if you are introducing something that is not commonly known in your field). If you choose this strategy, be careful to avoid simply listing and summarizing sources in order. 
  • Thematic:  If you have found some recurring central themes that you will continue working with throughout your piece, you can organize your literature review into subsections that address different aspects of the topic. For example, if you are reviewing literature about women and religion, key themes can include the role of women in churches and the religious attitude towards women.
  • Methodological:  If you draw your sources from different disciplines or fields that use a variety of research methods, you can compare the results and conclusions that emerge from different approaches. For example: Qualitative versus quantitative research, empirical versus theoretical scholarship, divide the research by sociological, historical, or cultural sources.
  • Theoretical:  In many humanities articles, the literature review is the foundation for the theoretical framework. You can use it to discuss various theories, models, and definitions of key concepts. You can argue for the relevance of a specific theoretical approach or combine various theoretical concepts to create a framework for your research.

Outline Your Literature Review's Structure

How to Write a Literature Review

Literature Reviews: An Overview for Graduate Students

Writing the Literature Review

Find a focus. Just like a term paper, a literature review is organized around ideas, not just sources. Use the research question you developed in planning your review and the issues or themes that connect your sources together to create a thesis statement. Yes, literature reviews have thesis statements! But your literature review thesis statement will present a perspective on the material, rather than arguing for a position or opinion. For example:

The current trend in treatment for congestive heart failure combines surgery and medicine.

More and more cultural studies scholars are accepting popular media as a subject worthy of academic consideration.

Consider organization. Once you have your thesis statement, you will need to think about the best way to effectively organize the information you have in your review. Like most academic papers, literature reviews should have an introduction, body, and conclusion.

Use evidence and be selective. When making your points in your literature review, you should refer to several sources as evidence, just like in any academic paper. Your interpretation of the available information must be backed up with evidence to show that your ideas are valid. You also need to be selective about the information you choose to include in your review. Select only the most important points in each source, making sure everything you mention relates to the review's focus.

Summarize and synthesize. Remember to summarize and synthesize your sources in each paragraph as well as throughout the review. You should not be doing in-depth analysis in your review, so keep your use of quotes to a minimum. A literature review is not just a summary of current sources; you should be keeping your own voice and saying something new about the collection of sources you have put together.

Revise, revise, revise. When you have finished writing the literature review, you still have one final step! Spending a lot of time revising is important to make sure you have presented your information in the best way possible. Check your review to see if it follows the assignment instructions and/or your outline. Rewrite or rework your language to be more concise and double check that you have documented your sources and formatted your review appropriately.

The Literature Review Model

lit review tools

Machi, Lawrence A., and Brenda T. McEvoy. The Literature Review: Six Steps to Success. 2nd ed. Thousand Oaks, Calif.: Corwin Press, 2012.

What the Literature Review IS and ISN'T:


Literature Review Sample Paper

  • Literature Review Sample 1
  • Literature Review Sample 2
  • Literature Review Sample 3

Literature Review Tips

  • Taking Notes For The Literature Review
  • The Art of Scan Reading
  • UNC-Chapel Hill Writing Guide for Literature Reviews
  • Literature Review Guidelines from Purdue OWL

Organizing Your Review

As you read and evaluate your literature, there are several different ways to organize your research. Courtesy of Dr. Gary Burkholder in the School of Psychology, these sample matrices are one option to help organize your articles. These documents allow you to compile details about your sources, such as the foundational theories, methodologies, and conclusions; begin to note similarities among the authors; and retrieve citation information for easy insertion within a document.

  • Literature Review Matrix 1
  • Literature Review Matrix 2
  • Spreadsheet Style
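The matrix idea is easy to reproduce with a spreadsheet or a few lines of code. A minimal sketch that writes a review matrix as CSV (the column names and entries are illustrative, not Dr. Burkholder's exact template):

```python
import csv
import io

# Illustrative matrix columns; real templates vary by discipline
FIELDS = ["citation", "theory", "methodology", "key_findings"]

rows = [
    {"citation": "Author (2020)", "theory": "Expectancy-disconfirmation",
     "methodology": "Survey", "key_findings": "Perceived quality drives satisfaction"},
    {"citation": "Author (2018)", "theory": "SERVQUAL", "methodology": "Interviews",
     "key_findings": "Five quality dimensions recur across settings"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(rows)
matrix_csv = buf.getvalue()  # one header row plus one row per source
```

The resulting file opens directly in Excel or Google Sheets, so a matrix started in code can be finished by hand.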

How to Create a Literature Matrix using Excel

Synthesis for Literature Reviews

  • Last Updated: Jan 23, 2024 12:57 PM

RAxter is now Enago Read! Enjoy the same licensing and pricing with enhanced capabilities. No action required for existing customers.

Your all in one AI-powered Reading Assistant

A Reading Space to Ideate, Create Knowledge, and Collaborate on Your Research

  • Smartly organize your research
  • Receive recommendations that cannot be ignored
  • Collaborate with your team to read, discuss, and share knowledge


From Surface-Level Exploration to Critical Reading - All in one Place!

Fine-tune your literature search.

Our AI-powered reading assistant saves time spent on the exploration of relevant resources and allows you to focus more on reading.

Select phrases or specific sections and explore more research papers related to the core aspects of your selections. Pin the useful ones for future references.

Our platform brings you the latest research related to your field and project work.

Speed up your literature review

Quickly generate a summary of key sections of any paper with our summarizer.

Make informed decisions about which papers are relevant, and where to invest your time in further reading.

Get key insights from the paper, quickly comprehend the paper’s unique approach, and recall the key points.

Bring order to your research projects

Organize your reading lists into different projects and maintain the context of your research.

Quickly sort items into collections and tag or filter them according to keywords and color codes.

Experience the power of sharing by finding all the shared literature at one place.

Decode papers effortlessly for faster comprehension

Highlight what is important so that you can retrieve it faster next time.

Select any text in the paper and ask Copilot to explain it to help you get a deeper understanding.

Ask questions and follow-ups from AI-powered Copilot.

Collaborate to read with your team, professors, or students

Share and discuss literature and drafts with your study group, colleagues, experts, and advisors. Recommend valuable resources and help each other for better understanding.

Work in shared projects efficiently and improve visibility within your study group or lab members.

Keep track of your team's progress by being constantly connected and engaging in active knowledge transfer by requesting full access to relevant papers and drafts.

Find papers from across the world's largest repositories


Privacy and security of your research data are integral to our mission.

Privacy

Everything you add or create on Enago Read is private by default. It is visible if and when you share it with other users.

Copyright

You can put Creative Commons license on original drafts to protect your IP. For shared files, Enago Read always maintains a copy in case of deletion by collaborators or revoked access.

Security

We use state-of-the-art security protocols and algorithms, including MD5 hashing, SSL, and HTTPS, to secure your data.

  • Research Guides
  • University Libraries

AI-Based Literature Review Tools

  • Dialogues: Insightful Facts
  • How to Craft Prompts
  • Plugins / Extensions for AI-powered Searches
  • Cite ChatGPT in APA / MLA
  • AI and Plagiarism
  • ChatGPT & Higher Education
  • Author Profile

Selected AI-Based Literature Review Tools

Updates: See news on the release of AI (Beta) features across various academic research databases, including Web of Science, Scopus, EBSCO, ProQuest, OVID, Dimensions, JSTOR, Westlaw, and LexisNexis.

Disclaimer: TAMU libraries do not have subscription access to the AI-powered tools listed below the divider line. The guide serves solely as an informational resource. It is recommended that you assess these tools and their usage methodologies independently.

-------------------------------------------------------------------------------------------------------------------------------------------------------------

SEMANTIC SCHOLAR

  • SCIENTIFIC LITERATURE SEARCH ENGINE - finding semantically similar research papers.
  • "A free, AI-powered research tool for scientific literature." <https://www.semanticscholar.org/>. Login is required to use all functions.
  • Over 200 million papers from all fields of science, the data of which has also served as a wellspring for the development of other AI-driven tools.

The 4000+ results can be sorted by Fields of Study, Date Range, Author, Journals & Conferences

Save the papers in your Library folder. The Research Feeds will recommend similar papers based on the items saved.

Example - SERVQUAL: A multiple-item scale for measuring consumer perceptions of service quality [Note: these numbers were gathered when this guide was created]

  • Total Citations: 22,438
  • Highly Influential Citations: 2,001
  • Background Citations: 6,109
  • Methods Citations: 3,273
  • Results Citations: 385

Semantic Reader

TLDRs (Too Long; Didn't Read): Try this example. Press the pen icon to reveal the highlighted key points. TLDRs "are super-short summaries of the main objective and results of a scientific paper generated using expert background knowledge and the latest GPT-3 style NLP techniques. This new feature is available in beta for nearly 60 million papers in computer science, biology, and medicine..." <https://www.semanticscholar.org/product/tldr>

OPENREAD

  • https://www.openread.academy/
  • Used institutionally at Harvard, MIT, the University of Oxford, Johns Hopkins, Stanford, and Beijing University.
  • AI-powered Academic Searching + Web Searching - over 300 million papers plus real-time web content.
  • Every keyword search or AI question yields a synthesis report with citations. To re-orient the results, click the Re-generate button and all citations will refresh accordingly. Then click Follow-Up Questions to delve deeper into a particular area or subject.
  • Use Paper Q&A to interact with a text directly, e.g. "What does this paper say about literature review?"
  • Click Translation to render a text or search results in another language.
  • Upload a PDF document and let Paper Expresso read it for you and parse the content into an academic report format for easy screening: Background and context > Research objectives and hypotheses > Methodology > Results and findings > Discussion and interpretation > Contributions to the field > Structure and flow > Achievements and significance > Limitations and future work.
ELICIT

  • AI-POWERED RESEARCH ASSISTANT - finding papers, filtering study types, automating research workflows, brainstorming, summarizing, and more.
  • "Elicit is a research assistant using language models like GPT-3 to automate parts of researchers' workflows. Currently, the main workflow in Elicit is Literature Review. If you ask a question, Elicit will show relevant papers and summaries of key information about those papers in an easy-to-use table." <https://elicit.org/faq#what-is-elicit>. Find answers from 175 million papers. FAQs
  • Example - How do mental health interventions vary by age group? / Fish oil and depression. Results [login required]: (1) Summary of the top 4 papers; (2) Papers #1-#4 with title, abstract, citations, DOI, and PDF; (3) Table view: Abstract / Interventions / Outcomes measured / Number of participants; (4) Relevant studies and citations; (5) Click Search for Paper Information to find Metadata about Sources (SJR etc.) > Population (age etc.) > Intervention (duration etc.) > Results (outcome, limitations etc.) > Methodology (detailed study design etc.); (6) Export as BIB or CSV.
  • How to Search / Extract Data / List of Concepts - Enter a research question > Workflow: Searching > Summarizing 8 papers > A summary of the 4 top papers > Final answers. Each result shows its citation counts, DOI, and a full-text link to the Semantic Scholar website for more information such as background citations, methods citations, related papers, and more. List of Concepts search - e.g. adult learning motivation; the results present a list of related concepts. Extract data from a PDF - upload a paper and let Elicit extract data for you.
  • Export Results - Various ways to export results.
  • How to Cite - Includes the elicit.org URL in the citation, for example: Ought; Elicit: The AI Research Assistant; https://elicit.org; accessed xxxx/xx/xx
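The citation format above leaves the accessed date as a placeholder to be filled in. A small sketch of generating the citation with the date supplied programmatically (the format string simply mirrors the example above):

```python
from datetime import date

def elicit_citation(accessed: date) -> str:
    """Fill the accessed date into the Elicit citation format quoted above."""
    return ("Ought; Elicit: The AI Research Assistant; "
            f"https://elicit.org; accessed {accessed:%Y/%m/%d}")

citation = elicit_citation(date(2023, 5, 1))
```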

CONSENSUS.APP

ACADEMIC SEARCH ENGINE- using AI to find insights in research papers.

"We are a search engine that is designed to accept research questions, find relevant answers within research papers, and synthesize the results using the same language model technology." <https://consensus.app/home/blog/maximize-your-consensus-experience-with-these-best-practices/>

  • Example - Does the death penalty reduce crime? / Fish oil and depression. (1) Extracted and aggregated findings from relevant papers. (2) Results may include AIMS, DESIGN, PARTICIPANTS, FINDINGS, or other methodological or reporting components. (3) Summaries and full text.
  • How to Search: Direct questions - Does the death penalty reduce crime? Relationships between two concepts - Fish oil and depression / Does X cause Y? Open-ended concepts - effects of immigration on local economies. Tips and search examples from Consensus' Best Practices.
  • Synthesize (beta) / Consensus Meter - When the AI recognizes certain types of research questions, this functionality may be activated. It will examine a selection of studies and provide a summary along with a Consensus Meter illustrating their collective agreement. Try this search: Is white rice linked to diabetes? After analyzing 10 papers, the Consensus Meter reveals: 70% indicate a positive association, 20% suggest a possible connection, and 10% indicate no link.
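Under the hood, the Consensus Meter's percentages amount to a tally of per-paper labels. A toy sketch using the white-rice example's numbers (the label names are illustrative assumptions, not Consensus' internal categories):

```python
from collections import Counter

# Toy per-paper labels matching the example: 10 papers analyzed.
labels = ["yes"] * 7 + ["possibly"] * 2 + ["no"] * 1

counts = Counter(labels)
# Convert counts to whole-number percentages of the papers analyzed.
percent = {label: 100 * n // len(labels) for label, n in counts.items()}
```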

SCITE

Prompt: "Write me a paragraph about the impact of climate change on GDP with citations"

CITATIONS IN CONTEXT

Integrated with Research Solutions.

Over 1.2 billion Citation Statements and metadata from over 181 million papers.

How does it work? - "scite uses access to full-text articles and its deep learning model to tell you, for a given publication: - how many times it was cited by others - how it was cited by others by displaying the text where the citation happened from each citing paper - whether each citation offers supporting or contrasting evidence of the cited claims in the publication of interest, or simply mention it."   <https://help.scite.ai/en-us/article/what-is-scite-1widqmr/>

EXAMPLE of seeing all citations and citation statements in one place

More information: Scite: A smart citation index that displays the context of citations and classifies their intent using deep learning  

CHATGPT

  • GPT-3.5 by OpenAI. Knowledge cutoff date: September 2021.
  • Input/Output length - ChatGPT-3.5 allows a maximum of 4,096 tokens. According to ChatGPT: "On average, a token in English is roughly equivalent to 4 bytes or characters. English words are typically around 5 characters long. This means that, very roughly, you could fit around 800 to 1000 English words within 4096 tokens."
  • According to ChatGPT, generated responses are non-deterministic by default, so if you run the same searches again and get slightly or very different results, this is likely why.
  • ChatGPT may fabricate non-existent references.
  • According to this study < https://arxiv.org/ftp/arxiv/papers/2304/2304.06794.pdf > "ChatGPT cites the most-cited articles and journals, relying solely on Google Scholar's citation counts" within the field of environmental science.  
  • Example - "INTERVIEW WITH CHATGPT" as a Research Method & Teaching Tool Some researchers began to use this approach to obtain their research data. Try this Google Scholar search link "interview with ChatGPT"   or see two articles below: (1) Chatting about ChatGPT: how may AI and GPT impact academia and libraries? BD Lund, T Wang - Library Hi Tech News, 2023 (2) An interview with ChatGPT: discussing artificial intelligence in teaching, research, and practice , G Scaringi, M Loche - 2023  
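The rough 4-characters-per-token rule of thumb quoted in the token-limit bullet above can be turned into a quick length check. This is a hedged heuristic only, not the model's real tokenizer; actual token counts vary by text and model:

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Very rough token estimate using the ~4 characters/token rule of thumb."""
    return max(1, round(len(text) / chars_per_token))

def fits_context(text: str, max_tokens: int = 4096) -> bool:
    """Check whether a prompt likely fits within a model's token limit."""
    return estimate_tokens(text) <= max_tokens
```

For example, roughly 800 five-character words (plus spaces) comes to about 1,000 estimated tokens, comfortably inside a 4,096-token window, consistent with the figure quoted above.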

Increased risk of: sudden infant death syndrome (SIDS); premature birth; low birth weight; respiratory problems in newborns; respiratory problems in infants exposed to secondhand smoke; developing asthma and other respiratory illnesses later in life for infants exposed to secondhand smoke. [Note: ChatGPT may generate non-existent references or false knowledge. To find out why large language models hallucinate, check out the Wikipedia article Hallucination (artificial intelligence) and this blog post - A Gentle Introduction to Hallucinations in Large Language Models by Adrian Tam.]

Infant death; neonatal mortality (deaths within the first 28 days of life); perinatal mortality (deaths occurring from 22 completed weeks of gestation to 7 completed days after birth); early childhood mortality (deaths within the first five years of life); child mortality (deaths before the age of 18). [Press the Regenerate button to get more.]

  • Example -  RELATED WORDS What are the related words of infant mortality? Neonatal mortality, Post-neonatal mortality, Stillbirths, Low birth weight, Malnutrition, Infectious diseases, Vaccination, Maternal health,  Access to healthcare, Poverty, Social inequality, Sanitation, Hygiene, Water quality, Childbirth complications, Congenital abnormalities, Birth defects, Maternal age, Under-five mortality, Child mortality, Perinatal mortality, Preterm birth, Low birth weight, Sudden infant death syndrome (SIDS), Maternal mortality, Postnatal care, Malnutrition, Immunization, Access to healthcare, Clean water and sanitation, Congenital anomalies, Infant health disparities, Infant mortality rate (IMR), Infant survival.  

(("Infant Death"[Mesh] OR "Sudden Infant Death Syndrome"[Mesh] OR "Infant Mortality"[Mesh] OR "Perinatal Mortality"[Mesh] OR "Neonatal Mortality"[Mesh] OR "Fetal Mortality"[Mesh] OR "Infant Death" OR "Sudden Infant Death Syndrome" OR "Infant Mortality" OR "Perinatal Mortality" OR "Neonatal Mortality" OR "Fetal Mortality") AND ("Risk Factors"[Mesh] OR "Etiology"[Mesh] OR "Prevention and Control"[Mesh] OR "Interventions"[Mesh] OR "Public Health"[Mesh] OR "Health Disparities"[Mesh] OR "Healthcare Disparities"[Mesh] OR "Poverty"[Mesh] OR "Malnutrition"[Mesh] OR "Infection"[Mesh] OR "Congenital Abnormalities"[Mesh] OR "Prematurity"[Mesh] OR "Low Birth Weight"[Mesh] OR "Maternal Education"[Mesh] OR "Maternal Health"[Mesh] OR "Risk Factors" OR "Etiology" OR "Prevention and Control" OR "Interventions" OR "Public Health" OR "Health Disparities" OR "Healthcare Disparities" OR "Poverty" OR "Malnutrition" OR "Infection" OR "Congenital Abnormalities" OR "Prematurity" OR "Low Birth Weight" OR "Maternal Education" OR "Maternal Health"))

Comments and Suggestions:

#1 - This search string may not work on all database platforms. For example, Medline (OVID) may not be able to run it; Medline & CINAHL (EBSCO) can run it without any problems; PubMed will run it but will indicate which MeSH terms are not indexed (see PubMed's Search History and Details under Advanced).

#2 - This is just one possible combination. You may need to adjust it based on your task and the databases you are using.

#3 - It might be easier to break the task into three parts: ask for synonyms > ask for thesaurus terms > ask it to combine them.
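The divide-and-combine approach in comment #3 - OR together the synonyms for each concept, then AND the concept blocks - can be sketched in a few lines. The term lists here are illustrative assumptions, not a vetted search strategy:

```python
def or_block(terms):
    """OR together quoted terms into one parenthesized concept block."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

def and_blocks(*blocks):
    """AND the concept blocks together, as in the search string above."""
    return "(" + " AND ".join(blocks) + ")"

mortality = or_block(["Infant Mortality", "Neonatal Mortality"])
factors = or_block(["Risk Factors", "Maternal Health"])
query = and_blocks(mortality, factors)
```

Building the string this way makes it easy to regenerate the query after editing one synonym list, instead of hand-editing a long boolean expression.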

Examples - DATABASE RECOMMENDATIONS

Health Science - Which databases to use to find articles on the EFFECTS OF MATERNAL SMOKING ON INFANT DEATH?

  • PubMed: This is a database of biomedical literature from the National Library of Medicine. It includes articles from medical journals and other sources on a wide range of health-related topics.
  • Embase: This is a database of biomedical literature that covers more international and pharmaceutical content than PubMed. It includes articles from medical journals and other sources on a wide range of health-related topics.
  • Cochrane Library: This is a collection of databases that provides systematic reviews and meta-analyses of healthcare interventions, including interventions related to maternal smoking and infant death.
  • Scopus: This is a multidisciplinary database that covers research from a wide range of fields, including health sciences. It includes articles from scientific journals, books, and conference proceedings.
  • Web of Science: This is a database that covers research from a wide range of fields, including health sciences. It includes articles from scientific journals, books, and conference proceedings.
  • CINAHL: This is a database of nursing and allied health literature, including articles on maternal smoking and infant death.

[Note: The response above was copied and pasted directly from ChatGPT. No editing was added.]

Education - I am doing a meta-analysis on K-12 STEM education. What databases should I use?

  • ERIC: This is a database of education research and information, including articles from journals, reports, and conference papers. It is a commonly used database for educational research.
  • Education Source: This is a comprehensive database of education research and information, including articles from journals, books, and conference papers.
  • Web of Science: This is a multidisciplinary database that covers research from a wide range of fields, including education. It includes articles from scientific journals, books, and conference proceedings.
  • Scopus: This is another multidisciplinary database that covers research from a wide range of fields, including education. It includes articles from scientific journals, books, and conference proceedings.
  • PsycINFO: This is a database of psychology research, including articles from journals, books, and dissertations. It can be useful for research on STEM education that includes psychological factors.
  • Cochrane Library: This is a collection of databases that provides systematic reviews and meta-analyses of healthcare interventions, including interventions related to education.

[Author's note: You could also ask for the GREY LITERATURE sources related to the topic above.]

Example: ACT-AS-AN-EXPERT PROMPT Here's an alternative way to consolidate all the questions into a single consultation by using the popular Act-As-An-Expert prompt, aka system message prompt: I am going to conduct a systematic review on games and mathematics related to K-12 education. I want you to act as a systematic review expert. I will ask you some questions related to my research topic and you will advise me on how to proceed.

[You may ask one question at a time.] My first question is "Which databases should I use?" My second question is "What are the relevant subject terms, descriptors, and keywords?" My third question is "How do I do data extraction?" ... Please note that the above prompts are merely illustrative; actual questions may contain more specific instructions. For example, for the first question you could ask for guidance on identifying relevant publications: "Can you provide a list of the primary databases and grey literature sources related to my topic?"

Example - GREY LITERATURE Please show me all the resources that offer grey literature for conducting research in the field of education

  • ERIC (Education Resources Information Center): https://eric.ed.gov/
  • OpenGrey: http://www.opengrey.eu/
  • British Education Index: https://www.ebscohost.com/academic/british-education-index
  • WorldCat: https://www.worldcat.org/
  • ProQuest Dissertations & Theses Global: https://www.proquest.com/products-services/pqdtglobal.html
  • EThOS (Electronic Theses Online Service): https://ethos.bl.uk/
  • Institutional repositories of universities and academic institutions (search for specific institution repositories)
  • American Educational Research Association (AERA): https://www.aera.net/
  • National Education Association (NEA): https://www.nea.org/
  • Association for Supervision and Curriculum Development (ASCD): http://www.ascd.org/
  • RAND Corporation: https://www.rand.org/
  • National Center for Education Research (NCER): https://ies.ed.gov/ncer/
  • National Center for Education Statistics (NCES): https://nces.ed.gov/
  • European Educational Research Association (EERA): https://www.eera-ecer.de/
  • UNESCO International Institute for Educational Planning (IIEP): http://www.iiep.unesco.org/
  • Organization for Economic Cooperation and Development (OECD): https://www.oecd.org/education/
  • Education Development Center (EDC): https://www.edc.org/
  • National Institute for Educational Policy Research (NIER) - Japan: https://www.nier.go.jp/
  • Australian Council for Educational Research (ACER): https://www.acer.org/
  • Canadian Education Association (CEA): https://www.cea-ace.ca/

My topic is about STEM education. Which grey literature databases should I use?

  • STEM Education Research Centre (SERC) Grey Literature Database
  • National STEM Education Research Repository (NSERR)
  • National Science Digital Library (NSDL)
  • STEM Education Resource Center (SERC) GrayLit Network
  • STEM Education Grey Literature Library

See other Comparison Examples under Bing AI.

Examples - ZERO-SHOT, ONE-SHOT & FEW-SHOT PROMPTING for literature review: see the examples on the left panel - How to Fine-Tune Prompts.

Bing Chat / Copilot

  • Bing Chat has become Copilot.
  • CONVERSATIONAL LLM 4.0 + SEARCH ENGINE. Connects to the Internet.
  • Three conversation modes: More Creative, More Balanced or More Precise.
  • Comparison - Prompt: My topic is about psychology. Which grey literature databases should I use? More Precise - PsycEXTRA, NIH Library, and links to library guides, .gov and .org websites. More Balanced - PsycEXTRA and three main research guides that mention psychology grey literature sources. More Creative - PsycEXTRA, LILACS, and WorldCat, plus a definition of grey literature and suggestions on searching professional association websites, newsletters, magazines, standards and guidelines, and other sources that may contain relevant information for your topic.
  • Some examples below were tested in Bing Chat only.  
  • Example - factors of infant mortality Infant mortality is a term that refers to the death of infants before their first birthday. The leading causes of infant mortality include birth asphyxia, pneumonia, congenital malformations, term birth complications (such as abnormal presentation of the fetus, umbilical cord prolapse, or prolonged labor), neonatal infection, diarrhea, malaria, measles, and malnutrition. Many situational factors lead to infant mortality, such as the pregnant woman's level of education, environmental conditions, political infrastructure, and level of medical support. In the United States, a primary determinant of infant mortality risk is infant birth weight, with lower birth weights increasing the risk of infant mortality. The determinants of low birth weight include socioeconomic, psychological, behavioral, and environmental factors.
  • Example: PEER REVIEWED SOURCES I need to find some peer reviewed articles about infant mortality? The answer includes 5 articles. The first one is “Causes of death and infant mortality rates among full-term births in the United States between 2010 and 2012: An observational study” by Neha Bairoliya and Günther Fink1. A click on  " Tell me more about the first article " leads to this summary: " The first article is titled “Causes of death and infant mortality rates among full-term births in the United States between 2010 and 2012: An observational study” and was published in PLOS Medicine on March 20, 2018. The authors are Neha Bairoliya and Günther Fink. The study used linked birth and death records for the period 2010–2012 to identify the state- and cause-specific burden of infant mortality among full-term infants (born at 37–42 weeks of gestation) in the United States. The researchers found that the full-term infant mortality rate (FTIMR) was 2.2 per 1,000 live births overall and ranged between 1.29 (Connecticut) and 3.77 (Mississippi) at the state level. Sudden unexpected death in infancy (SUDI) accounted for 43% of FTIM; congenital malformations and perinatal conditions accounted for 31% and 11.3% of FTIM, respectively. The largest mortality differentials between states with good and states with poor FTIMR were found for SUDI, with particularly large risk differentials for deaths due to sudden infant death syndrome (SIDS) and suffocation." More searches: Please find five empirical studies focusing on the factors influencing adult learners engaged in online education. Please locate five scholarly articles on stress management among college students site:pubmed.ncbi.nlm.nih.gov

Image illustrating data extraction

The ERIC Thesaurus contains nearly 12,000 terms, including more than 4,500 descriptors and 7,000 synonyms. You can use the term "Academic Achievement" or its synonyms such as "Educational Achievement", "Student Achievement", "Academic Performance", "Academic Success", "Learning Achievement" and more. [Note: The follow-up responses include some additional sources. See two below: https://www.powerthesaurus.org/academic_achievement and https://eric.ed.gov/?ti=Achievement+Gap including https://eric.ed.gov/?ti=Academic+Achievement]

My topic is about psychology.  Which grey literature databases should I use?

ChatGPT

  • PsycEXTRA: This database is produced by the American Psychological Association (APA) and includes grey literature such as conference papers, white papers, and government reports.
  • OpenGrey: This database provides access to grey literature produced in Europe, including reports, theses, and conference papers.
  • Grey Literature Report: This database is produced by The New York Academy of Medicine and includes reports, conference papers, and other grey literature on health and biomedical topics, including psychology.
  • ProQuest Dissertations and Theses Global: This database provides access to graduate-level research, including theses and dissertations related to psychology.
  • Google Scholar: While not exclusively a grey literature database, Google Scholar can be a useful tool for finding conference papers and other grey literature related to psychology.
  • National Technical Information Service (NTIS): This database provides access to technical reports produced by U.S. government agencies, including those related to psychology.

Bing AI [asked in the Precise mode] - For psychology, some recommended grey literature databases include PsycEXTRA and PsyArXiv. PsycEXTRA is a database of grey literature material relating to psychology, behavioral sciences, and health. PsyArXiv is a psychology archive designed to facilitate rapid dissemination of psychological research.

What other models have been developed based on SERVQUAL?

ChatGPT

  • SERVPERF: A modified version of SERVQUAL that measures service performance instead of service quality. It was developed by Cronin and Taylor (1992) and is based on the same five dimensions as SERVQUAL.
  • SERVPERF-GAP: Developed by Caruana and Ewing (2006), this model extends SERVQUAL to include three additional gaps that can occur when customers evaluate service quality.
  • E-SERVQUAL: Developed by Parasuraman et al. (2005), this model adapts the SERVQUAL scale to measure the quality of online services.
  • SERVCO: A modified version of SERVQUAL that adds a sixth dimension, "cost," to the five original dimensions. It was developed by Sureshchandar et al. (2002) to address the importance of cost in service quality evaluation.
  • SERVQ-HC: A modified version of SERVQUAL designed specifically for evaluating the quality of health care services. It was developed by Cronin et al. (2000) and includes 20 items across five dimensions.

Bing AI

  • The Nordic model of service quality
  • The hierarchical model of service quality
  • The American Customer Satisfaction Index (ACSI)

Bing AI Sidebar - CHAT, THINK, & WRITE

  • Examples of Using AI sidebar to Recap Google Scholar findings

Chat - If you highlight a study in Google Scholar's search results, the AI sidebar will ask whether you want to send the selected text to it. If you do, you can then select either EXPLAIN or EXPAND to learn more about the chosen article; EXPAND provides more detailed information. You can then click any of the follow-up links to delve further into the selected article or topic. If the chosen study is written in a foreign language, the AI sidebar can also explain the text in English. Another option for extracting the key points from a selected article is to submit the URL of its PDF.

Insights - Use the Insights button to get more ideas about the topic.

Compose - Tell the Edge sidebar your topic and let its Compose tool generate a draft for you.

Google Scholar Button - See also the Plugins and Extensions page for how to perform a combined search using Google, Google Scholar, and ChatGPT.

Example: FINDING SIMILAR ARTICLES Find similar articles like this one - [then paste in the abstract ] Note - Use this prompt with Bing chat since real-time web info is required.

Examples: SUMMARIZE + REFERENCE Use a basic prompt like "Please summarize..." or add a REFERENCE TEXT that tells the model what to focus on or how the summary should look. The reference could be a quote, an excerpt, a partial sentence, a conversation, an abstract, a code snippet, and so on. Formula: specify a role > assign a task > provide a reference text. Example #1 - Scholarly Journal Quote

STEP 1 Prompt: As a research assistant specializing in renewable energy, your task is to locate comparable research studies based on the provided abstract from an article: "This paper discusses the perspective of renewable energy (wind, solar, wave and biomass) in the making of strategies for a sustainable development. Such strategies typically involve three major technological changes: energy savings on the demand side, efficiency improvements in the energy production, and replacement of fossil fuels by various sources of renewable energy."

Bing Chat then found 3 documents: the ScienceDirect paper where the abstract came from and two other documents from un.org and frontiersin.org respectively.

STEP 2 Prompt: Please synthesize the findings of the three studies provided above.

Bing Chat: "Sure! Here is a synthesis of the findings from the three studies:

All three studies emphasize the importance of renewable energy in achieving sustainable development and mitigating climate change. They all discuss various strategies and actions that can be taken to increase the use of renewable energy sources and reduce reliance on fossil fuels.

Henrik Lund’s paper discusses the perspective of renewable energy (wind, solar, wave and biomass) in the making of strategies for a sustainable development. Such strategies typically involve three major technological changes: energy savings on the demand side, efficiency improvements in the energy production, and .....

The United Nations outlines five critical actions the world needs to prioritize now to transform our energy systems and speed up the shift to renewable energy: Make renewable energy technology a global public good, Improve global access to components and raw materials......

“Sustainable Energy Transition for Renewable and Low Carbon Grid” discusses that a sustainable global electricity transition will entail increased use of renewable energy sources particularly wind and solar, nuclear energy as a low carbon energy source, electrification of transport and.......

In summary, these studies suggest that transitioning to renewable energy sources is crucial for sustainable development and mitigating climate change. This can be achieved through various strategies such as increasing energy efficiency, improving access to renewable technology and materials, leveling the playing field for renewables, ending fossil fuel subsidies, and redirecting investments towards renewables." Example #2 - You are an expert researcher; given the following article, please discuss its key findings. Reference: How Large Language Models are Transforming Machine-Paraphrased Plagiarism at https://arxiv.org/abs/2210.03568. This is another way to supply REFERENCE TEXT. As a side note, specifying a role (a persona) is important, as it can affect the level of the responses obtained; just imagine if the designated role were "You are a high school student...". Example #3 - Please read this article and summarize it for me - "Who Should I Trust: AI or Myself? Leveraging Human and AI Correctness Likelihood to Promote Appropriate Trust in AI-Assisted Decision-Making" - and then find 5 similar studies and summarize each of them.


CONNECTED PAPERS

  • RELATED STUDIES
  • Uses visual graphs or other ways to show relevant studies. The database is connected to the Semantic Scholar Paper Corpus which has compiled hundreds of millions of published papers across many science and social science fields.
  • See more details about how it works .  
  • Example - SERVQUAL and then click on SELECT A PAPER TO BUILD THE GRAPH > The first paper was selected. Results: (1) Origin paper - SERVQUAL: A multiple-item scale for measuring consumer perceptions of service quality + Connected papers with links to Connected Papers / PDF / DOI or Publisher's site / Semantic Scholar / Google Scholar. (2) Graph showing the origin paper + connected papers with links to the major sources . See above. (3) Links to Prior Works and Derivative Works See the detailed citations by Semantic Scholar on the origin SERVQUAL paper on the top of this page within Semantic Scholars.
  • How to Search - Search by paper title, or enter some keywords about a topic.
  • Download / Save - Download your saved items in BibTeX format.

PAPER DIGEST

  • SUMMARY & SYNTHESIS
  • "Knowledge graph & natural language processing platform tailored for the technology domain." <https://www.paperdigest.org/> Areas covered: technology, biology/health, all science areas, business, humanities/social sciences, patents, grants, and more.


  • LITERATURE REVIEW - https://www.paperdigest.org/review/ Systematic Review - https://www.paperdigest.org/literature-review/
  • SEARCH CONSOLE - https://www.paperdigest.org/search/ Conference Digest - NIPS conference papers ... Tech AI Tools: Literature Review  | Literature Search | Question Answering | Text Summarization Expert AI Tools: Org AI | Expert search | Executive Search, Reviewer Search, Patent Lawyer Search...

Daily paper digest / Conference papers digest / Best paper digest / Topic tracking. Under Account, enter the subject areas you are interested in; the Daily Digest will then surface studies based on your interests.

RESEARCH RABBIT

  • CITATION-BASED MAPPING: SIMILAR / EARLY / LATER WORKS
  • "100s of millions of academic articles, covering more than 90% of the materials found in major databases used by academic institutions (such as Scopus, Web of Science, and others)." See its FAQs page. Search algorithms were borrowed from NIH and Semantic Scholar.

The default "Untitled Collection" will collect your search histories; based on these, Research Rabbit will send you recommendations for three types of related results (Similar Works / Earlier Works / Later Works), viewable in graphs such as Network, Timeline, First Authors, etc.

Zotero integration: importing and exporting between these two apps.

  • Example - SERVQUAL: A multiple-item scale for measuring consumer perceptions of service quality [Login required] Try it to see its Similar Works, Earlier Works and Later Works or other documents.
  • Export Results - Findings can be exported in BibTeX, RIS, or CSV format.
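Exports like the CSV option above can be screened locally before importing into a reference manager. A minimal sketch using Python's standard csv module; the column names here are assumptions for illustration, since export schemas vary by tool:

```python
import csv
import io

# Stand-in for a small exported CSV; real exports have more columns.
sample = "Title,Year\nSERVQUAL paper,1988\nFollow-up study,1991\n"

# Parse rows into dicts keyed by the header line.
rows = list(csv.DictReader(io.StringIO(sample)))

# Example screen: keep only titles published in or after 1990.
recent = [r["Title"] for r in rows if int(r["Year"]) >= 1990]
```

The same pattern works on a real export by replacing io.StringIO(sample) with open("export.csv").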

CITING GENERATIVE AI

  • How to cite ChatGPT [APA] - https://apastyle.apa.org/blog/how-to-cite-chatgpt
  • How to Cite Generative AI [MLA] - https://style.mla.org/citing-generative-ai/
  • Citation Guide - Citing ChatGPT and Other Generative AI (University of Queensland, Australia)
  • Last Updated: May 9, 2024 2:16 PM
  • URL: https://tamu.libguides.com/c.php?g=1289555

Conducting a Literature Review

  • Literature Review
  • Developing a Topic
  • Planning Your Literature Review
  • Developing a Search Strategy
  • Managing Citations
  • Critical Appraisal Tools
  • Writing a Literature Review

Appraise Your Research Articles

The structure of a literature review should include the following:

  • An overview of the subject, issue, or theory under consideration, along with the objectives of the literature review,
  • Division of works under review into themes or categories [e.g. works that support a particular position, those against, and those offering alternative approaches entirely],
  • An explanation of how each work is similar to and how it varies from the others,
  • Conclusions as to which pieces are best considered in their argument, are most convincing of their opinions, and make the greatest contribution to the understanding and development of their area of research.

The critical evaluation of each work should consider:

  • Provenance -- what are the author's credentials? Are the author's arguments supported by evidence [e.g. primary historical material, case studies, narratives, statistics, recent scientific findings]?
  • Methodology -- were the techniques used to identify, gather, and analyze the data appropriate to addressing the research problem? Was the sample size appropriate? Were the results effectively interpreted and reported?
  • Objectivity -- is the author's perspective even-handed or prejudicial? Is contrary data considered, or is certain pertinent information ignored to prove the author's point?
  • Persuasiveness -- which of the author's theses are most convincing or least convincing?
  • Value -- are the author's arguments and conclusions convincing? Does the work ultimately contribute in any significant way to an understanding of the subject?

Reviewing the Literature

While conducting a review of the literature, maximize the time you devote to writing this part of your paper by thinking broadly about what you should be looking for and evaluating. Review not just what the articles are saying, but how they are saying it.

Some questions to ask:

  • How are they organizing their ideas?
  • What methods have they used to study the problem?
  • What theories have been used to explain, predict, or understand their research problem?
  • What sources have they cited to support their conclusions?
  • How have they used non-textual elements [e.g., charts, graphs, figures, etc.] to illustrate key points?
When you begin to write your literature review section, you'll be glad you dug deeper into how the research was designed and conducted, because it establishes a means for developing more substantial analysis and interpretation of the research problem.

Tools for Critical Appraisal

Now that you have found articles based on your research question, you can appraise their quality. The following resources can help you appraise different study designs.

Centre for Evidence Based Medicine (Oxford)

University of Glasgow

"AFP uses the Strength-of-Recommendation Taxonomy (SORT), to label key recommendations in clinical review articles."

  • SORT: Rating the Strength of Evidence - American Family Physician and other family medicine journals use the Strength of Recommendation Taxonomy (SORT) system for rating bodies of evidence for key clinical recommendations.


  • The Interprofessional Health Sciences Library
  • 123 Metro Boulevard
  • Nutley, NJ 07110


  • J Mark Access Health Policy
  • v.11(1); 2023
  • PMC10392303


Rapid literature review: definition and methodology

Beata Smela

a Assignity, Cracow, Poland

Mondher Toumi

b Public Health Department, Aix-Marseille University, Marseille, France

Karolina Świerk

Clement Francois, Małgorzata Biernikiewicz

c Studio Slowa, Wroclaw, Poland

Emilie Clay

d Clever-Access, Paris, France

Laurent Boyer

Introduction: A rapid literature review (RLR) is an alternative to a systematic literature review (SLR) that can speed up the analysis of newly published data. The objective was to identify and summarize available information regarding different approaches to defining RLRs and the methodology applied to the conduct of such reviews.

Methods: The Medline and EMBASE databases, as well as the grey literature, were searched using the set of keywords and their combination related to the targeted and rapid review, as well as design, approach, and methodology. Of the 3,898 records retrieved, 12 articles were included.

Results: A specific definition of RLRs was only developed in 2021. In terms of methodology, an RLR should be completed within a shorter timeframe using simplified procedures in comparison to SLRs, while maintaining a similar level of transparency and minimizing bias. Inherent components of the RLR process should be a clear research question, a search protocol, a simplified process of study selection, data extraction, and quality assessment.

Conclusions: There is a lack of consensus on the formal definition of the RLR and the best approaches to perform it. The evidence-based supporting methods are evolving, and more work is needed to define the most robust approaches.

Introduction

A systematic literature review (SLR) summarizes the results of all available studies on a specific topic and provides a high level of evidence. Authors of the SLR have to follow an advanced plan that covers defining a priori information regarding the research question, sources they are going to search, inclusion criteria applied to choose studies answering the research question, and information regarding how they are going to summarize findings [ 1 ].

The rigor and transparency of SLRs make them the most reliable form of literature review [ 2 ], providing a comprehensive, objective summary of the evidence for a given topic [ 3 , 4 ]. On the other hand, the SLR process is usually very time-consuming and requires a lot of human resources. Taking into account a high increase of newly published data and a growing need to analyze information in the fastest possible way, rapid literature reviews (RLRs) often replace standard SLRs.

There are several guidelines on the methodology of RLRs [ 5–11 ]; however, only recently, one publication from 2021 attempted to construct a unified definition [ 11 ]. Generally, by RLRs, researchers understand evidence synthesis during which some of the components of the systematic approach are being used to facilitate answering a focused research question; however, scope restrictions and a narrower search strategy help to make the project manageable in a shorter time and to get the key conclusions faster [ 4 ].

The objective of this research was to collect and summarize available information on different approaches to the definition and methodology of RLRs. An RLR has been run to capture publications providing data that fit the project objective.

Methods

To find publications reporting information on the methodology of RLRs, searches were run in the Medline and EMBASE databases in November 2022. The following keywords were searched for in titles and abstracts: ‘targeted adj2 review’ OR ‘focused adj2 review’ OR ‘rapid adj2 review’, and ‘methodology’ OR ‘design’ OR ‘scheme’ OR ‘approach’. The grey literature was identified using Google Scholar with keywords including ‘targeted review methodology’ OR ‘focused review methodology’ OR ‘rapid review methodology’. Only publications in English were included, and the date of publication was restricted to 2016 onward in order to identify the most up-to-date literature. The reference lists of the included articles were searched manually to identify potentially eligible articles. Titles and abstracts of the retrieved records were first screened to exclude articles that were evidently irrelevant. The full texts of potentially relevant papers were then reviewed to examine their eligibility.
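As an illustration, the Boolean strategy described above could be assembled as follows. The Ovid-style syntax (the `adj2` adjacency operator and the `.ti,ab.` title/abstract field tag) is an assumption here, since the paper does not reproduce its search lines verbatim:

```python
# Sketch of the Medline/EMBASE search described in the text.
# Ovid-style syntax is assumed; the exact strategy is not given in the paper.
review_terms = ["targeted adj2 review", "focused adj2 review", "rapid adj2 review"]
method_terms = ["methodology", "design", "scheme", "approach"]

# Combine each term list with OR, then intersect the two concepts with AND.
line1 = " OR ".join(f"({t}).ti,ab." for t in review_terms)
line2 = " OR ".join(f"({t}).ti,ab." for t in method_terms)
query = f"({line1}) AND ({line2})"
print(query)
```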

A pre-defined Excel grid was developed to extract the following information related to the methodology of RLR from guidelines:

  • Definition,
  • Research question and searches,
  • Studies selection,
  • Data extraction and quality assessment,
  • Additional information.

There was no restriction on the study types to be analyzed; any study reporting on the methodology of RLRs could be included: reviews, practice guidelines, commentaries, and expert opinions on RLR relevant to healthcare policymakers or practitioners. The data extraction and evidence summary were conducted by one analyst and further examined by a senior analyst to ensure that relevant information was not omitted. Disagreements were resolved by discussion and consensus.

Studies selection

A total of 3,898 records (3,864 articles from a database search and 34 grey literature from Google Scholar) were retrieved. After removing duplicates, titles and abstracts of 3,813 articles were uploaded and screened. The full texts of 43 articles were analyzed resulting in 12 articles selected for this review, including 7 guidelines [ 5–11 ] on the methodology of RLRs, together with 2 papers summarizing the results of the Delphi consensus on the topic [ 12 , 13 ], and 3 publications analyzing and assessing different approaches to RLRs [ 4 , 14 , 15 ].
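The selection counts reported above can be cross-checked with simple arithmetic. This minimal sketch reconstructs the implied PRISMA-style flow; the intermediate exclusion counts are derived here, not stated in the text:

```python
# Reconstruct the study-selection flow from the counts reported in the text.
retrieved = 3864 + 34      # database records + grey-literature records
screened = 3813            # titles/abstracts screened after duplicate removal
full_text = 43             # full texts analyzed
included = 12              # articles included in the review

duplicates_removed = retrieved - screened
excluded_at_screening = screened - full_text
excluded_at_full_text = full_text - included

assert retrieved == 3898
assert included == 7 + 2 + 3  # guidelines + Delphi papers + method analyses
print(duplicates_removed, excluded_at_screening, excluded_at_full_text)
```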

Overall, seven guidelines were identified: from the World Health Organization (WHO) [ 5 ], the National Collaborating Centre for Methods and Tools (NCCMT) [ 7 ], the UK government [ 8 ], the Oxford Centre for Evidence Based Medicine [ 9 ], the Cochrane group [ 6 , 11 ], and one multi-national review [ 10 ]. Among the papers that did not describe guidelines, Gordon et al. [ 4 ] proposed 12 tips for conducting a rapid review in the right settings and discussed why these reviews may be more beneficial in some circumstances. The objective of the work conducted by Tricco et al. [ 13 ] and Pandor et al. [ 12 ] was to collect and compare perceptions of rapid reviews from stakeholders, including researchers, policymakers, industry, journal editors, and healthcare providers, and to reach a consensus outlining the domains to consider when deciding on approaches for RLRs. Haby et al. [ 14 ] ran a rapid review of systematic reviews and primary studies to find out the best way to conduct an RLR in health policy and practice. Tricco et al. (2022) [ 15 ] present the JBI position statement for RLRs.

From all seven identified guidelines, information was extracted regarding the definitions the authors used for RLRs, the approach to the PICOS criteria and search strategy development, study selection, data extraction, quality assessment, and reporting.

The Cochrane Rapid Reviews Methods Group developed methods guidance based on a scoping review of the underlying evidence, primary methods studies conducted, as well as surveys sent to Cochrane representatives and discussion among those with expertise [ 11 ]. They analyzed over 300 RLRs or RLR method papers and, based on the methodology of those studies, constructed a broad definition of the RLR, one that meets a minimum set of requirements identified in the thematic analysis: ‘ A rapid review is a form of knowledge synthesis that accelerates the process of conducting a traditional systematic review through streamlining or omitting a variety of methods to produce evidence in a resource-efficient manner .’ This interpretation aligns with more than 50% of the RLRs identified in this study. The authors additionally provided several other definitions, depending on specific situations or requirements (e.g., when the RLR is produced on a stakeholder’s request). It was additionally underlined that RLRs should be driven by the need for timely evidence for decision-making purposes [ 11 ].

Rapid reviews vary in their objective, format, and methods used for evidence synthesis. This is quite a new area, and still no agreement on optimal methods can be found [ 5 ]. All of the definitions highlight that RLRs are completed within shorter timeframes than SLRs, and lack of time is one of the main reasons they are conducted. It has been suggested that most rapid reviews are conducted within 12 weeks; however, some of the resources suggest a time between a few weeks and no more than 6 months [ 5 , 6 ]. Some of the definitions highlight that RLRs follow the SLR process, but certain phases of the process are simplified or omitted to retrieve information in a time-saving way [ 6 , 7 ]. Different mechanisms are used to enhance the timeliness of reviews. They can be used independently or concurrently: increasing the intensity of work by parallelizing the efforts of multiple analysts, using review shortcuts whereby one or more systematic review steps may be reduced, and automating review steps by using new technologies [ 5 ]. The UK government report [ 8 ] referred to two different forms of RLR: quick scoping reviews (QSRs) and rapid evidence assessments (REAs). While being less resource- and time-consuming than standard SLRs, QSRs and REAs are designed to be similarly transparent and to minimize bias. QSRs can be applied to rather open-ended questions, e.g., ‘what do we know about something’, but both QSRs and REAs provide an understanding of the volume and characteristics of evidence on a specific topic, allowing questions to be answered by maximizing the use of existing data and providing a clear picture of the adequacy of existing evidence [ 8 ].

Research questions and searches

The guidelines suggest creating a clear research question and search protocol at the beginning of the project. Additionally, to avoid duplicating RLRs, the Cochrane Rapid Reviews Methods Group encourages all people working on RLRs to consider registering their search protocol with PROSPERO, the international prospective register of reviews; so far, however, most RLRs are not formally registered [ 5 , 6 ]. They also recommend involving key stakeholders (review users) to set and refine the review question, criteria, and outcomes, as well as consulting them throughout the entire process [ 11 ].

Regarding research questions, it is better to structure them in a neutral way rather than focus on a specific direction for the outcome. By doing so, the researcher is in a better position to identify all the relevant evidence [ 7 ]. Authors can add a second, supportive research question when needed [ 8 ]. It is encouraged to limit the number of interventions, comparators, and outcomes to focus on the ones that are most important for decision-making [ 11 ]. It can also be useful to review additional materials, e.g., SLRs on the topic, and to conduct a quick literature search to better understand the topic before starting an RLR [ 7 ]. In SLRs, researchers usually do not need to worry much about the time spent creating the PICOS criteria; they need to make sure that the scope is broad enough, and they cannot use many restrictions. When working on an RLR, a reviewer may spend more or less time defining each component of the study question; the main step is making sure that the PICOS criteria address the needs of those who requested the rapid review and, at the same time, are feasible within the required time frame [ 7 ]. The search protocol should outline how the subsequent review steps are to be carried out, including the selected search keywords and a full strategy, a list of data sources, precise inclusion and exclusion criteria, a strategy for data extraction and critical appraisal, and a plan for how the information will be synthesized [ 8 ].

In terms of running searches, in most cases an exhaustive process will not be feasible. Researchers should make sure that the search is effective and efficient enough to produce results in a timely manner. The Cochrane Rapid Reviews Methods Group recommends involving an information specialist and conducting a peer review of at least one search strategy [ 11 ]. According to the rapid review guidebook by McMaster University [ 7 ], it is important that RLRs, especially those that support policy and program decisions, are fed by the results of a body of literature rather than single studies whenever possible. This yields more generalizable findings that can be applied at the population level and serves program decisions more realistically [ 7 ]. It is important to document the search strategy, together with a record of the date and any date limits of the search, so that it can easily be run again, modified, or updated. Furthermore, the individual databases included in platform services should always be reported, as this depends on organizations’ subscriptions and must be included for transparency and repeatability [ 7 , 8 ]. A good solution for RLRs is to narrow the scope or to search a limited number of databases and other sources [ 7 ]. Often, the authors use the PubMed/MEDLINE, Cochrane Library, and Embase databases. In most reviews, two or more databases are searched, and common limits are language (usually restricted to English), date, study design, and geographical area. Some RLRs include searches of grey literature; however, contact with authors is rather uncommon [ 5 , 8 ]. According to the flexible framework for restricted systematic reviews published by the University of Oxford, the search should be run in at least one major scientific database such as PubMed, and in one other source, e.g., Google Scholar [ 9 ]. Grey literature and unpublished evidence may be particularly needed and important for intervention questions, because studies that do not report the effects of interventions are less likely to be published [ 8 ]. If there is any type of evidence that will not be considered by the RLR, e.g., reviews or theoretical and conceptual studies, this should also be stated in the protocol together with a justification [ 8 ]. Additionally, the authors of a practical guide published by the WHO suggest using a staged search to identify existing SLRs at the beginning and then focusing on studies with other designs [ 5 ]. If a low number of citations has been retrieved, it is acceptable to expand searches, remove some of the limits, and add additional databases and sources [ 7 ].

Searching for RLRs is an iterative process, and revising the approach is usually needed [ 7 ]. Changes should be confirmed with stakeholders and should be tracked and reflected in the final report [ 5 ].

The next step in the rapid review is the selection of studies, consisting of two phases: screening of titles and abstracts, and analysis of full texts. Prior to screening initiation, it is recommended to conduct a pilot exercise using the same 30–50 abstracts and 5–10 full texts for the entire screening team in order to calibrate and test the review form [ 11 ]. In contrast to SLRs, selection can be done by one reviewer, with or without verification by a second one. If verification is performed, usually the second reviewer checks only a subset of records and compares the results. The Cochrane Group, in contrast, recommends a stricter approach: at least 20% of references should be double-screened at the title-and-abstract stage, and while the rest of the references may be screened by one reviewer, the excluded items need to be re-examined by a second reviewer; a similar approach is used in full-text screening [ 11 ]. This helps reduce bias and ensures that the PICOS criteria are applied consistently [ 5 , 8 , 9 , 11 ]. During the analysis of titles and abstracts, there is no need to report reasons for exclusion; however, they should be tracked for all excluded full texts [ 7 ].

Data extraction and quality assessment

According to the WHO guide, the most common method for data extraction in RLRs is extraction by a single reviewer, with or without partial verification. The authors point out that a reasonable approach is to use a second reviewer to check a random sample of at least 10% of the extractions for accuracy. Dual performance is more necessary for the extraction of quantitative results than for descriptive study information. In contrast, the Cochrane group recommends that a second reviewer check the correctness and completeness of all data [ 11 ]. When possible, extractions should be limited to the key characteristics and outcomes of the study. The same approach to data extraction is also suggested for the quality assessment process within rapid reviews [ 5 , 9 , 11 ]. The authors of the guidebook from McMaster University highlight that data extraction should ideally be done by two reviewers independently, and consensus on discrepancies should always be reached [ 7 ]. The final decision on the approach to this important step of the review should depend on the available time and should also reflect the complexity of the research question [ 9 ].

For screening, analysis of full texts, extractions, and quality assessments, researchers can use information technologies to support them by making these review steps more efficient [ 5 ].

Before data reporting, a reviewer should prepare a document with key message headings, an executive summary, background related to the topic and the status of current knowledge, the project question, a synthesis of findings, conclusions, and recommendations. According to the McMaster University guidebook, a report should be structured in a 1:2:20 format, that is, one page for key messages, two pages for an executive summary, and a full report of up to 20 pages [ 7 ]. All the limitations of the RLR should be analyzed, and conclusions should be drawn with caution [ 5 ]. The quality of the accumulated evidence and the strength of recommendations can be assessed using, e.g., the GRADE system [ 5 ]. When quoting references, researchers should remember to use primary sources, not secondary references [ 7 ]. It is worth considering software tools to automate reporting steps. Additionally, any standardization of the process and the use of templates can support report development and enhance the transparency of the review [ 5 ].

Ideally, all the review steps should be completed during an RLR; however, some steps may need to be skipped or will not be completed as thoroughly as they should be because of time constraints. It is always crucial to decide which steps may be skipped and which are the key ones, depending on the project [ 7 ]. Guidelines suggest that it may be helpful to invite researchers with experience in conducting SLRs to participate in the rapid review development [ 5 , 9 ]. As some of the steps will be completed by one reviewer only, it is important to provide them with relevant training at the beginning of the process, as well as during the review, to minimize the risk of mistakes [ 5 ].

Additional information

Depending on the policy goal, available resources, and deadlines, the methodology of an RLR may be modified. Wilson et al. [ 10 ] provided extensive guidelines for performing an RLR within days (e.g., to inform urgent internal policy discussions and/or management decisions), weeks (e.g., to inform public debates), or months (e.g., to inform policy development cycles that have a longer timeline but cannot wait for a traditional full systematic review). These approaches vary in terms of data synthesis, the types of evidence considered, and project management considerations.

In the shortest timeframes, focused questions and subquestions should be formulated, typically to conduct a policy analysis; the report should consist of tables along with a brief narrative summary. Evidence from SLRs is often considered, and key informant interviews may be conducted to identify additional literature and insights about the topic, while primary studies and other types of evidence are typically not feasible due to time restrictions. The review is best conducted with 1–2 reviewers sharing the work, enabling rapid iterations of the review. RLRs with a longer timeline (weeks) may use a mix of policy, systems, and political analysis. The structure of the review would be similar to shorter RLRs, tabular with a short narrative summary, as the timeline does not allow for a comprehensive synthesis of data. Besides SLRs, primary studies and other evidence may be feasible in this timeframe if obtained using targeted searches in the most relevant databases. The review team should be larger, and standardized procedures for reviewing the results and extracting data should be applied. In contrast to the shorter timeframe, a merit review process may be feasible. For both timeframes, brief consultations with a small transdisciplinary team should be conducted at the beginning and in the final stage of the review to discuss important matters.

For RLRs spanning several months, a more comprehensive methodology may be adopted in terms of data synthesis and types of evidence. However, the authors advise that the review may be best conducted with a small review team in order to allow for more in-depth interpretation and iteration.

Studies analyzing methodology

Two interesting publications summarizing the results of a Delphi consensus on RLR methodology were identified and included in this review [ 12 , 13 ].

Tricco et al. [ 13 ] first conducted an international survey and scoping review to collect information on possible approaches to running rapid reviews, based on which they employed a modified Delphi method that included input from 113 stakeholders to explore the most optimized approach. Among the six most frequent rapid review approaches (not all detailed here) being evaluated, the approach that combines inclusion of published literature only, a search of more than one database with limits by date and language, study selection by one analyst, and data extraction and quality assessment by one analyst and one verifier was perceived as the most feasible approach (72%; 81/113 responses) with the potentially lowest risk of bias (12%; 12/103). The approach ranked first when considering timelines assumes updating the search from a previously published review, no additional limits on the search, study selection and data extraction done by one reviewer, and no quality assessment. Finally, based on the publication, the most comprehensive RLRs can be produced by following these rules: searching more than one database and grey literature, using a date restriction, and assigning one reviewer to work on screening, data extraction, and risk-of-bias assessment ( Table 1 ). Pandor et al. [ 12 ] introduced a decision tool for SelecTing Approaches for Rapid Reviews (STARR), produced through a Delphi consensus of international experts in an iterative and rigorous process. Participants were asked to assess the importance of predefined items in four domains related to the rapid review process: interaction with commissioners, understanding the evidence base, data extraction and synthesis methods, and reporting of rapid review methods. All items assigned to the four domains achieved > 70% consensus, and in that way the first consensus-driven tool was created that supports authors of RLRs in planning and deciding on approaches.
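As a quick consistency check, the survey fractions quoted above round to the reported percentages:

```python
# Check the Delphi survey percentages reported by Tricco et al. [13].
most_feasible = round(81 / 113 * 100)   # 81 of 113 responses
lowest_bias = round(12 / 103 * 100)     # 12 of 103 responses
print(most_feasible, lowest_bias)
```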

Table 1. Six most frequent approaches to RLRs (adapted from Tricco et al. [ 13 ]).

Haby et al. [ 14 ] ran searches of 11 databases and two websites and developed a comprehensive overview of the methodology of RLRs. With five SLRs and one RCT finally included, they identified the following approaches used in RLRs to make them faster than full SLRs: limiting the number and scope of questions, searching fewer databases, limited searching of grey literature, restrictions on language and date (e.g., English only, most recent publications), updating existing SLRs, eliminating or limiting hand searches of reference lists, noniterative search strategies, eliminating consultation with experts, limiting dual study selection, data extraction, and quality assessment, and minimal data synthesis with short, concise conclusions or recommendations. All the SLRs included in this review were consistent in stating that no agreed definition of rapid reviews is available and that there is still no final agreement on the best methodological rules to follow.

Gordon et al. [ 4 ] explained the advantages of performing a focused review and provided 12 tips for conducting one. They define focused reviews as ‘a form of knowledge synthesis in which the components of the systematic process are applied to facilitate the analysis of a focused research question’. The first tip is to decide whether a focused review is the right solution for the considered project. RLRs suit emerging topics, approaches, or assessments where early synthesis can support doctors, policymakers, etc., but can also direct future research. The second, third, and fourth tips highlight the importance of running preliminary searches and considering narrowing the results by using reasonable constraints, taking into account the local context, problems, efficiency perspectives, and available time. Further tips include creating a team of experienced reviewers to work on the RLR, thinking about the target journal from the beginning of work on the rapid review, registering the search protocol in the PROSPERO registry, and contacting the authors of papers when data available in publications are missing or incongruent. The last three tips concern the choice of evidence synthesis method, the visual presentation of data, and considering and describing all the limitations of the focused review.

Finally, a new publication by Tricco et al. from 2022 describing the JBI position statement [ 15 ] underlined that, for the time being, there is no specific tool for critical appraisal of an RLR’s methodological quality. Instead, reviewers may use available tools to assess the risk of bias or quality of SLRs, such as ROBIS, the JBI critical appraisal tools, or A MeaSurement Tool to Assess systematic Reviews (AMSTAR).

Inconsistency in the definitions and methodologies of RLR

Although the RLR is broadly perceived as an approach to speed up the conduct of a conventional SLR, there is a lack of consensus on the formal definition of the RLR, as well as on the best approaches to performing it. A study proposing a unified definition was only published in 2021; however, it is important to note that even the most accurate definition matched only slightly over 50% of the papers analyzed by the authors, which underlines the lack of homogeneity in the field [ 11 ]. The evidence-based supporting methods are evolving, and more evidence is needed to define the most robust approaches [ 5 ].

Diverse terms are used to describe the RLR, including ‘rapid review’, ‘focused systematic review’, ‘quick scoping review’, and ‘rapid evidence assessment’. Although the general principle of conducting an RLR is to accelerate the whole process, considerable variation was seen in the methodologies used for RLRs, as reflected in this study. Inconsistencies related to the scope of the questions, search strategies, inclusion criteria, study screening, full-text review, quality assessment, and evidence presentation were also observed. All these factors may hamper decision-making about optimal methodologies for conducting rapid reviews and, as a result, decrease the efficiency of RLRs. Additionally, researchers may tend to report the methodology of their reviews without a sufficient level of detail, making it difficult to appraise the quality and robustness of their work.

Advantages and weaknesses of RLR

Although RLRs use simplified approaches for evidence synthesis compared with SLRs, the methodology of an RLR should be replicable, rigorous, and transparent to the greatest extent possible [ 16 ]. When time and resources are limited, an RLR can be a practical and efficient tool to provide a summary of evidence that is critical for making rapid clinical or policy-related decisions [ 5 ]. Focusing on specific questions that are controversial or of special interest can be powerful in reaffirming whether existing recommendation statements are still appropriate [ 17 ].

The weaknesses of RLR should also be borne in mind, and the trade-offs of using RLR should be carefully considered with respect to the thoroughness of the search, the breadth of the research question, and the depth of analysis [ 18 ]. Where time and resources allow, SLR is preferred over RLR, since narrowed search strategies and a simplified screening process may omit relevant studies [ 14 ]. Additionally, omitting the quality assessment of included studies increases the risk of bias and compromises the comprehensiveness of the RLR [ 13 ]. Furthermore, in situations that require high accuracy, for example, where a small relative difference in an intervention has great impact, when drafting clinical guidelines, or when making licensing decisions, a comprehensive SLR may remain the priority [ 19 ]. Therefore, clear communication with policymakers is recommended to reach agreement on whether an RLR is justified and whether its methodologies are acceptable for addressing the unanswered questions [ 18 ].

Disclosure statement

No potential conflict of interest was reported by the author(s).

A systematic literature review of empirical research on ChatGPT in education

  • Open access
  • Published: 26 May 2024
  • Volume 3, article number 60 (2024)


  • Yazid Albadarin   ORCID: orcid.org/0009-0005-8068-8902 1 ,
  • Mohammed Saqr 1 ,
  • Nicolas Pope 1 &
  • Markku Tukiainen 1  

Over the last four decades, studies have investigated the incorporation of Artificial Intelligence (AI) into education. A recent prominent AI-powered technology that has impacted the education sector is ChatGPT. This article provides a systematic review of 14 empirical studies incorporating ChatGPT into various educational settings, published in 2022 and before the 10th of April 2023—the date of conducting the search process. It carefully followed the essential steps outlined in the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA 2020) guidelines, as well as Okoli’s (Okoli in Commun Assoc Inf Syst, 2015) steps for conducting a rigorous and transparent systematic review. In this review, we aimed to explore how students and teachers have utilized ChatGPT in various educational settings, as well as the primary findings of those studies. By employing Creswell’s (Creswell in Educational research: planning, conducting, and evaluating quantitative and qualitative research [Ebook], Pearson Education, London, 2015) coding techniques for data extraction and interpretation, we sought to gain insight into their initial attempts at ChatGPT incorporation into education. This approach also enabled us to extract insights and considerations that can facilitate its effective and responsible use in future educational contexts. The results of this review show that learners have utilized ChatGPT as a virtual intelligent assistant, where it offered instant feedback, on-demand answers, and explanations of complex topics. Additionally, learners have used it to enhance their writing and language skills by generating ideas, composing essays, summarizing, translating, paraphrasing texts, or checking grammar. Moreover, learners turned to it as an aiding tool to facilitate their directed and personalized learning by assisting in understanding concepts and homework, providing structured learning plans, and clarifying assignments and tasks. 
However, the results of specific studies (n = 3, 21.4%) show that overuse of ChatGPT may negatively impact innovative capacities and collaborative learning competencies among learners. Educators, on the other hand, have utilized ChatGPT to create lesson plans, generate quizzes, and provide additional resources, which helped them enhance their productivity and efficiency and promote different teaching methodologies. Despite these benefits, the majority of the reviewed studies recommend the importance of conducting structured training, support, and clear guidelines for both learners and educators to mitigate the drawbacks. This includes developing critical evaluation skills to assess the accuracy and relevance of information provided by ChatGPT, as well as strategies for integrating human interaction and collaboration into learning activities that involve AI tools. Furthermore, they also recommend ongoing research and proactive dialogue with policymakers, stakeholders, and educational practitioners to refine and enhance the use of AI in learning environments. This review could serve as an insightful resource for practitioners who seek to integrate ChatGPT into education and stimulate further research in the field.


1 Introduction

Educational technology, a rapidly evolving field, plays a crucial role in reshaping the landscape of teaching and learning [ 82 ]. One of the most transformative technological innovations of our era that has influenced the field of education is Artificial Intelligence (AI) [ 50 ]. Over the last four decades, AI in education (AIEd) has gained remarkable attention for its potential to make significant advancements in learning, instructional methods, and administrative tasks within educational settings [ 11 ]. In particular, a large language model (LLM), a type of AI algorithm that applies artificial neural networks (ANNs) and uses massively large data sets to understand, summarize, generate, and predict new content that is often difficult to differentiate from human creations [ 79 ], has opened up novel possibilities for enhancing various aspects of education, from content creation to personalized instruction [ 35 ]. Chatbots that leverage the capabilities of LLMs to understand and generate human-like responses have also presented the capacity to enhance student learning and educational outcomes by engaging students, offering timely support, and fostering interactive learning experiences [ 46 ].

The ongoing and remarkable technological advancements in chatbots have made their use more convenient, increasingly natural and effortless, and have expanded their potential for deployment across various domains [ 70 ]. One prominent example of chatbot applications is the Chat Generative Pre-Trained Transformer, known as ChatGPT, which was introduced by OpenAI, a leading AI research lab, on November 30th, 2022. ChatGPT employs a variety of deep learning techniques to generate human-like text. Its transformer architecture, a neural network architecture based on the self-attention mechanism, allows it to grasp the context of the text being processed, retain information from earlier parts of the input, and focus on the most relevant parts of that input, thereby enabling it to produce more natural-sounding and coherent output. Additionally, unsupervised generative pre-training and subsequent fine-tuning allow ChatGPT to generate more relevant and accurate text for specific tasks [ 31 , 62 ]. Furthermore, reinforcement learning from human feedback (RLHF), a machine learning approach that combines reinforcement learning techniques with human-provided feedback, has helped improve ChatGPT's model by aligning its responses with human preferences.
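The self-attention mechanism mentioned above can be illustrated with a minimal sketch (a generic textbook formulation of scaled dot-product attention in Python/NumPy, not OpenAI's actual implementation; the token count and embedding size are arbitrary):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Generic scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                       # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)        # row-wise softmax
    return weights @ V, weights                           # weighted sum of values

# Toy example: 3 tokens with embedding dimension 4
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
out, w = scaled_dot_product_attention(X, X, X)            # self-attention: Q = K = V
```

Each output row is a context-aware mixture of all token embeddings, which is the property that lets transformer models relate distant parts of an input.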

This cutting-edge natural language processing (NLP) tool is widely recognized as one of today's most advanced LLM-based chatbots [ 70 ], allowing users to ask questions and receive detailed, coherent, systematic, personalized, convincing, and informative human-like responses [ 55 ], even within complex and ambiguous contexts [ 63 , 77 ]. ChatGPT is considered the fastest-growing technology in history: in just three months following its public launch, it amassed an estimated 120 million monthly active users [ 16 ] with an estimated 13 million daily queries [ 49 ], surpassing all other applications [ 64 ]. This remarkable growth can be attributed to the unique features and user-friendly interface that ChatGPT offers. Its intuitive design allows users to interact seamlessly with the technology, making it accessible to a diverse range of individuals, regardless of their technical expertise [ 78 ]. Additionally, its exceptional performance, resulting from a combination of advanced algorithms, continuous enhancements, and extensive training on a diverse dataset that includes various text sources such as books, articles, websites, and online forums [ 63 ], has contributed to a more engaging and satisfying user experience [ 62 ]. These factors collectively explain its remarkable global growth and set it apart from predecessors like Bard, Bing Chat, ERNIE, and others.

In this context, several studies have explored the technological advancements of chatbots. One noteworthy recent research effort, conducted by Schöbel et al. [ 70 ], stands out for its comprehensive analysis of more than 5,000 studies on communication agents. This study offered a comprehensive overview of the historical progression and future prospects of communication agents, including ChatGPT. Moreover, other studies have focused on making comparisons, particularly between ChatGPT and alternative chatbots like Bard, Bing Chat, ERNIE, LaMDA, BlenderBot, and various others. For example, O’Leary [ 53 ] compared two chatbots, LaMDA and BlenderBot, with ChatGPT and revealed that ChatGPT outperformed both. This superiority arises from ChatGPT’s capacity to handle a wider range of questions and generate slightly varied perspectives within specific contexts. Similarly, ChatGPT exhibited an impressive ability to formulate interpretable responses that were easily understood when compared with Google's feature snippet [ 34 ]. Additionally, ChatGPT was compared to other LLMs-based chatbots, including Bard and BERT, as well as ERNIE. The findings indicated that ChatGPT exhibited strong performance in the given tasks, often outperforming the other models [ 59 ].

Furthermore, in the education context, a comprehensive study systematically compared a range of the most promising chatbots, including Bard, Bing Chat, ChatGPT, and Ernie, across a multidisciplinary test that required higher-order thinking. The study revealed that ChatGPT achieved the highest score, surpassing Bing Chat and Bard [ 64 ]. Similarly, a comparative analysis was conducted to compare ChatGPT with Bard in answering a set of 30 mathematical questions and logic problems, grouped into two question sets: Set (A) was unavailable online, while Set (B) was available online. The results revealed ChatGPT's superiority over Bard on Set (A). Nevertheless, Bard's advantage emerged on Set (B) due to its capacity to access the internet directly and retrieve answers, a capability that ChatGPT did not possess [ 57 ]. Across these varied assessments, ChatGPT has consistently demonstrated exceptional performance compared with the alternatives in the ever-evolving field of chatbot technology.

The widespread adoption of chatbots, especially ChatGPT, by millions of students and educators has sparked extensive discussions regarding its incorporation into the education sector [ 64 ]. Accordingly, many scholars have contributed to the discourse, expressing both optimism and pessimism regarding the incorporation of ChatGPT into education. For example, ChatGPT has been highlighted for its capabilities in enriching the learning and teaching experience through its ability to support different learning approaches, including adaptive learning, personalized learning, and self-directed learning [ 58 , 60 , 91 ], deliver summative and formative feedback to students and provide real-time responses to questions, increase the accessibility of information [ 22 , 40 , 43 ], foster students’ performance, engagement and motivation [ 14 , 44 , 58 ], and enhance teaching practices [ 17 , 18 , 64 , 74 ].

On the other hand, concerns have been also raised regarding its potential negative effects on learning and teaching. These include the dissemination of false information and references [ 12 , 23 , 61 , 85 ], biased reinforcement [ 47 , 50 ], compromised academic integrity [ 18 , 40 , 66 , 74 ], and the potential decline in students' skills [ 43 , 61 , 64 , 74 ]. As a result, ChatGPT has been banned in multiple countries, including Russia, China, Venezuela, Belarus, and Iran, as well as in various educational institutions in India, Italy, Western Australia, France, and the United States [ 52 , 90 ].

Clearly, the advent of chatbots, especially ChatGPT, has provoked significant controversy due to their potential impact on learning and teaching. This indicates the necessity for further exploration to gain a deeper understanding of this technology and carefully evaluate its potential benefits, limitations, challenges, and threats to education [ 79 ]. Therefore, conducting a systematic literature review will provide valuable insights into the potential prospects and obstacles linked to its incorporation into education. This systematic literature review will primarily focus on ChatGPT, driven by the key factors outlined above.

However, the existing literature lacks a systematic literature review of empirical studies. Thus, this systematic literature review aims to address this gap by synthesizing the existing empirical studies conducted on chatbots, particularly ChatGPT, in the field of education, highlighting how ChatGPT has been utilized in educational settings, and identifying any existing gaps. This review may be particularly useful for researchers in the field and educators who are contemplating the integration of ChatGPT or any chatbot into education. The following research questions will guide this study:

What are students' and teachers' initial attempts at utilizing ChatGPT in education?

What are the main findings derived from empirical studies that have incorporated ChatGPT into learning and teaching?

2 Methodology

To conduct this study, the authors followed the essential steps of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA 2020) and Okoli’s [ 54 ] steps for conducting a systematic review. These included identifying the study’s purpose, drafting a protocol, applying a practical screening process, searching the literature, extracting relevant data, evaluating the quality of the included studies, synthesizing the studies, and ultimately writing the review. The subsequent section provides an extensive explanation of how these steps were carried out in this study.

2.1 Identify the purpose

Given the widespread adoption of ChatGPT by students and teachers for various educational purposes, often without a thorough understanding of responsible and effective use or a clear recognition of its potential impact on learning and teaching, the authors recognized the need for further exploration of ChatGPT's impact on education in this early stage. Therefore, they have chosen to conduct a systematic literature review of existing empirical studies that incorporate ChatGPT into educational settings. Despite the limited number of empirical studies due to the novelty of the topic, their goal is to gain a deeper understanding of this technology and proactively evaluate its potential benefits, limitations, challenges, and threats to education. This effort could help to understand initial reactions and attempts at incorporating ChatGPT into education and bring out insights and considerations that can inform the future development of education.

2.2 Draft the protocol

The next step is formulating the protocol. This protocol serves to outline the study process in a rigorous and transparent manner, mitigating researcher bias in study selection and data extraction [ 88 ]. The protocol will include the following steps: generating the research question, predefining a literature search strategy, identifying search locations, establishing selection criteria, assessing the studies, developing a data extraction strategy, and creating a timeline.

2.3 Apply practical screen

The screening step aims to accurately filter the articles resulting from the searching step and select the empirical studies that have incorporated ChatGPT into educational contexts, which will guide us in answering the research questions and achieving the objectives of this study. To ensure the rigorous execution of this step, our inclusion and exclusion criteria were determined based on the authors' experience and informed by previous successful systematic reviews [ 21 ]. Table 1 summarizes the inclusion and exclusion criteria for study selection.

2.4 Literature search

We conducted a thorough literature search to identify articles that explored, examined, and addressed the use of ChatGPT in educational contexts. We utilized two research databases: Dimensions.ai, which provides access to a large number of research publications, and lens.org, which offers access to over 300 million articles, patents, and other research outputs from diverse sources. Additionally, we included three databases, Scopus, Web of Science, and ERIC, which contain relevant research on the topic that addresses our research questions. To browse and identify relevant articles, we used the following search formula: ("ChatGPT" AND "Education"), which included the Boolean operator "AND" to get more specific results. The subject area in the Scopus and ERIC databases was narrowed to the "ChatGPT" and "Education" keywords, and in the WoS database it was limited to the "Education" category. The search was conducted between the 3rd and 10th of April 2023 and resulted in 276 articles from all selected databases (111 articles from Dimensions.ai, 65 from Scopus, 28 from Web of Science, 14 from ERIC, and 58 from Lens.org). These articles were imported into the Rayyan web-based system for analysis. Duplicates were identified automatically by the system. Subsequently, the first author manually reviewed the duplicated articles, confirmed that they had the same content, and then removed them, leaving us with 135 unique articles. Afterward, the titles, abstracts, and keywords of the first 40 manuscripts were scanned and reviewed by the first author and discussed with the second and third authors to resolve any disagreements. Subsequently, the first author proceeded with the filtering process for all articles and carefully applied the inclusion and exclusion criteria as presented in Table 1. Articles that met any one of the exclusion criteria were eliminated, resulting in 26 articles. Afterward, the authors met to carefully scan and discuss them. The authors agreed to eliminate any empirical studies solely focused on checking ChatGPT's capabilities, as these studies do not help address the research questions or achieve the study's objectives. This resulted in 14 articles eligible for analysis.
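The merging and de-duplication step described above can be sketched as follows (an illustrative toy only; in the actual study Rayyan identified duplicates automatically, and the record fields and titles below are hypothetical):

```python
def normalize(title):
    """Reduce a title to lowercase alphanumerics so trivial variants match."""
    return "".join(ch.lower() for ch in title if ch.isalnum())

def deduplicate(records):
    """Keep the first occurrence of each article, matching on DOI when
    available and on a normalized title otherwise."""
    seen, unique = set(), []
    for rec in records:
        key = rec.get("doi") or normalize(rec["title"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

# Toy records merged from two database exports (hypothetical)
records = [
    {"title": "ChatGPT in Education", "doi": "10.1/x1"},
    {"title": "ChatGPT in education", "doi": "10.1/x1"},   # duplicate by DOI
    {"title": "AI chatbots for learning", "doi": None},
    {"title": "AI Chatbots for Learning!", "doi": None},   # duplicate by title
]
print(len(deduplicate(records)))  # → 2
```

Matching on DOI first and falling back to a normalized title is a common heuristic; a manual pass, as performed by the first author, remains necessary to confirm borderline matches.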

2.5 Quality appraisal

The examination and evaluation of the quality of the extracted articles is a vital step [ 9 ]. Therefore, the extracted articles were carefully evaluated for quality using Fink’s [ 24 ] standards, which emphasize the necessity for detailed descriptions of methodology, results, conclusions, strengths, and limitations. The process began with a thorough assessment of each study's design, data collection, and analysis methods to ensure their appropriateness and comprehensive execution. The clarity, consistency, and logical progression from data to results and conclusions were also critically examined. Potential biases and recognized limitations within the studies were also scrutinized. Ultimately, two articles were excluded for failing to meet Fink’s criteria, particularly in providing sufficient detail on methodology, results, conclusions, strengths, or limitations. The review process is illustrated in Fig.  1 .

figure 1

The study selection process

2.6 Data extraction

The next step is data extraction, the process of capturing the key information and categories from the included studies. To improve efficiency, reduce variation among authors, and minimize errors in data analysis, the coding categories were constructed using Creswell's [ 15 ] coding techniques for data extraction and interpretation. The coding process involves three sequential steps. The initial stage encompasses open coding , where the researcher examines the data, generates codes to describe and categorize it, and gains a deeper understanding without preconceived ideas. Following open coding is axial coding , where the interrelationships between codes from open coding are analyzed to establish more comprehensive categories or themes. The process concludes with selective coding , refining and integrating categories or themes to identify core concepts emerging from the data. The first coder performed the coding process, then engaged in discussions with the second and third authors to finalize the coding categories for the first five articles. The first coder then proceeded to code all studies and engaged again in discussions with the other authors to ensure the finalization of the coding process. After a comprehensive analysis and capturing of the key information from the included studies, the data extraction and interpretation process yielded several themes. These themes have been categorized and are presented in Table  2 . It is important to note that open coding results were removed from Table  2 for aesthetic reasons, as it included many generic aspects, such as words, short phrases, or sentences mentioned in the studies.
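The three coding steps can be pictured as a simple data-reduction pipeline (an illustrative toy with hypothetical codes and mappings; real qualitative coding is an interpretive, iterative process rather than a mechanical lookup):

```python
from collections import defaultdict

# Open coding: excerpts from studies labelled with descriptive codes (hypothetical)
open_codes = [
    ("S2", "instant feedback"), ("S4", "instant feedback"),
    ("S5", "step-by-step explanations"),
    ("S8", "paraphrasing texts"), ("S11", "grammar checking"),
]

# Axial coding: relate open codes to broader categories (hypothetical mapping)
axial_map = {
    "instant feedback": "virtual intelligent assistant",
    "step-by-step explanations": "virtual intelligent assistant",
    "paraphrasing texts": "writing and language assistant",
    "grammar checking": "writing and language assistant",
}

# Selective coding: integrate categories into core themes, tracking which
# studies support each theme
themes = defaultdict(set)
for study, code in open_codes:
    themes[axial_map[code]].add(study)

for theme, studies in sorted(themes.items()):
    print(theme, sorted(studies))
```

The end result mirrors the structure of Table 2: each core theme is backed by the set of studies whose open codes fed into it.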

2.7 Synthesize studies

In this stage, we will gather, discuss, and analyze the key findings that emerged from the selected studies. The synthesis stage is considered a transition from an author-centric to a concept-centric focus, enabling us to map all the provided information to achieve the most effective evaluation of the data [ 87 ]. Initially, the authors extracted data that included general information about the selected studies, including the author(s)' names, study titles, years of publication, educational levels, research methodologies, sample sizes, participants, main aims or objectives, raw data sources, and analysis methods. Following that, all key information and significant results from the selected studies were compiled using Creswell’s [ 15 ] coding techniques for data extraction and interpretation to identify core concepts and themes emerging from the data, focusing on those that directly contributed to our research questions and objectives, such as the initial utilization of ChatGPT in learning and teaching, learners' and educators' familiarity with ChatGPT, and the main findings of each study. Finally, the data related to each selected study were extracted into an Excel spreadsheet for data processing. The Excel spreadsheet was reviewed by the authors, including a series of discussions to ensure the finalization of this process and prepare it for further analysis. Afterward, the final results were analyzed and presented in various types of charts and graphs. Table 4 presents the extracted data from the selected studies, with each study labeled with a capital 'S' followed by a number.

3 Results

This section consists of two main parts. The first part provides a descriptive analysis of the data compiled from the reviewed studies. The second part presents the answers to the research questions and the main findings of these studies.

3.1 Part 1: descriptive analysis

This section will provide a descriptive analysis of the reviewed studies, including educational levels and fields, participants distribution, country contribution, research methodologies, study sample size, study population, publication year, list of journals, familiarity with ChatGPT, source of data, and the main aims and objectives of the studies. Table 4 presents a comprehensive overview of the extracted data from the selected studies.

3.1.1 The number of the reviewed studies and publication years

The total number of reviewed studies was 14. All were empirical studies published in different journals focusing on Education and Technology. One study was published in 2022 [S1], while the remaining were published in 2023 [S2]-[S14]. Table 3 lists the year of publication, the names of the journals, and the number of reviewed studies published in each journal.

3.1.2 Educational levels and fields

The majority of the reviewed studies, 11 studies, were conducted in higher education institutions [S1]-[S10] and [S13]. Two studies did not specify the educational level of the population [S12] and [S14], while one study focused on elementary education [S11]. However, the reviewed studies covered various fields of education. Three studies focused on Arts and Humanities Education [S8], [S11], and [S14], specifically English Education. Two studies focused on Engineering Education, with one in Computer Engineering [S2] and the other in Construction Education [S3]. Two studies focused on Mathematics Education [S5] and [S12]. One study focused on Social Science Education [S13]. One study focused on Early Education [S4]. One study focused on Journalism Education [S9]. Finally, three studies did not specify the field of education [S1], [S6], and [S7]. Figure  2 represents the educational levels in the reviewed studies, while Fig.  3 represents the context of the reviewed studies.

figure 2

Educational levels in the reviewed studies

figure 3

Context of the reviewed studies

3.1.3 Participants distribution and countries contribution

The reviewed studies have been conducted across different geographic regions, providing a diverse representation of the studies. The majority of the studies, 10 in total, [S1]-[S3], [S5]-[S9], [S11], and [S14], primarily focused on participants from single countries such as Pakistan, the United Arab Emirates, China, Indonesia, Poland, Saudi Arabia, South Korea, Spain, Tajikistan, and the United States. In contrast, four studies, [S4], [S10], [S12], and [S13], involved participants from multiple countries, including China and the United States [S4], China, the United Kingdom, and the United States [S10], the United Arab Emirates, Oman, Saudi Arabia, and Jordan [S12], and Turkey, Sweden, Canada, and Australia [S13]. Figures  4 and 5 illustrate the distribution of participants, whether from single or multiple countries, and the contribution of each country in the reviewed studies, respectively.

figure 4

The reviewed studies conducted in single or multiple countries

figure 5

The Contribution of each country in the studies

3.1.4 Study population and sample size

Four study populations were included: university students, university teachers, university teachers and students, and elementary school teachers. Six studies involved university students [S2], [S3], [S5] and [S6]-[S8]. Three studies focused on university teachers [S1], [S4], and [S6], while one study specifically targeted elementary school teachers [S11]. Additionally, four studies included both university teachers and students [S10] and [S12]-[S14], and among them, study [S13] specifically included postgraduate students. In terms of the sample size of the reviewed studies, nine studies included a small sample size of less than 50 participants [S1], [S3], [S6], [S8], and [S10]-[S13]. Three studies had 50–100 participants [S2], [S9], and [S14]. Only one study had more than 100 participants [S7]. It is worth mentioning that study [S4] adopted a mixed methods approach, including 10 participants for qualitative analysis and 110 participants for quantitative analysis.
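The sample-size bands reported above can be reproduced with a short bucketing sketch (the per-study sizes below are illustrative placeholders, not the actual numbers reported in the studies):

```python
from collections import Counter

def size_band(n):
    """Classify a study's sample size into the bands used in the text."""
    if n < 50:
        return "<50"
    if n <= 100:
        return "50-100"
    return ">100"

# Hypothetical sample sizes for the 13 studies with a single reported size
sample_sizes = [12, 30, 45, 18, 25, 40, 22, 35, 48,   # nine studies under 50
                60, 75, 90,                            # three studies 50-100
                150]                                   # one study over 100
counts = Counter(size_band(n) for n in sample_sizes)
print(counts)  # → Counter({'<50': 9, '50-100': 3, '>100': 1})
```

A mixed-methods study such as [S4], with separate qualitative and quantitative samples, would need to be handled as a special case rather than a single size.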

3.1.5 Participants’ familiarity with using ChatGPT

The reviewed studies recruited a diverse range of participants with varying levels of familiarity with ChatGPT. Five studies [S2], [S4], [S6], [S8], and [S12] involved participants already familiar with ChatGPT, while eight studies [S1], [S3], [S5], [S7], [S9], [S10], [S13] and [S14] included individuals with differing levels of familiarity. Notably, one study [S11] had participants who were entirely unfamiliar with ChatGPT. It is important to note that four studies [S3], [S5], [S9], and [S11] provided training or guidance to their participants before conducting their studies, while ten studies [S1], [S2], [S4], [S6]-[S8], [S10], and [S12]-[S14] did not provide training due to the participants' existing familiarity with ChatGPT.

3.1.6 Research methodology approaches and source(s) of data

The reviewed studies adopted various research methodology approaches. Seven studies adopted qualitative research methodology [S1], [S4], [S6], [S8], [S10], [S11], and [S12], while three studies adopted quantitative research methodology [S3], [S7], and [S14], and four studies employed mixed-methods, which involved a combination of both the strengths of qualitative and quantitative methods [S2], [S5], [S9], and [S13].

In terms of the source(s) of data, the reviewed studies obtained their data from various sources, such as interviews, questionnaires, and pre-and post-tests. Six studies relied on interviews as their primary source of data collection [S1], [S4], [S6], [S10], [S11], and [S12], four studies relied on questionnaires [S2], [S7], [S13], and [S14], two studies combined the use of pre-and post-tests and questionnaires for data collection [S3] and [S9], while two studies combined the use of questionnaires and interviews to obtain the data [S5] and [S8]. It is important to note that six of the reviewed studies were quasi-experimental [S3], [S5], [S8], [S9], [S12], and [S14], while the remaining ones were experimental studies [S1], [S2], [S4], [S6], [S7], [S10], [S11], and [S13]. Figures  6 and 7 illustrate the research methodologies and the source(s) of data used in the reviewed studies, respectively.

figure 6

Research methodologies in the reviewed studies

figure 7

Source of data in the reviewed studies

3.1.7 The aim and objectives of the studies

The reviewed studies encompassed a diverse set of aims, with several of them incorporating multiple primary objectives. Six studies [S3], [S6], [S7], [S8], [S11], and [S12] examined the integration of ChatGPT in educational contexts, and four studies [S4], [S5], [S13], and [S14] investigated the various implications of its use in education, while three studies [S2], [S9], and [S10] aimed to explore both its integration and implications in education. Additionally, seven studies explicitly explored attitudes and perceptions of students [S2] and [S3], educators [S1] and [S6], or both [S10], [S12], and [S13] regarding the utilization of ChatGPT in educational settings.

3.2 Part 2: research questions and main findings of the reviewed studies

This part will present the answers to the research questions and the main findings of the reviewed studies, classified into two main categories (learning and teaching) according to AI Education classification by [ 36 ]. Figure  8 summarizes the main findings of the reviewed studies in a visually informative diagram. Table 4 provides a detailed list of the key information extracted from the selected studies that led to generating these themes.

figure 8

The main findings in the reviewed studies

4 Students' initial attempts at utilizing ChatGPT in learning and main findings from students' perspective

4.1 Virtual intelligent assistant

Nine studies demonstrated that ChatGPT has been utilized by students as an intelligent assistant to enhance and support their learning. Students employed it for various purposes, such as answering on-demand questions [S2]-[S5], [S8], [S10], and [S12], providing valuable information and learning resources [S2]-[S5], [S6], and [S8], as well as receiving immediate feedback [S2], [S4], [S9], [S10], and [S12]. In this regard, students were generally confident in the accuracy of ChatGPT's responses, considering them relevant, reliable, and detailed [S3], [S4], [S5], and [S8]. However, some students indicated the need for improvement, finding that answers were not always accurate [S2], that misleading information might be provided, or that responses did not always align with their expectations [S6] and [S10]. Students also observed that the accuracy of ChatGPT depends on several factors, including the quality and specificity of the user's input, the complexity of the question or topic, and the scope and relevance of its training data [S12]. Overall, many students felt that ChatGPT's answers were not always accurate, and most believed that working with it effectively requires good background knowledge.

4.2 Writing and language proficiency assistant

Six of the reviewed studies highlighted that students have utilized ChatGPT as a valuable assistant tool to improve their academic writing skills and language proficiency. Among these studies, three focused mainly on English education, demonstrating that students showed sufficient mastery in using ChatGPT for generating ideas, summarizing, paraphrasing texts, and composing essays [S8], [S11], and [S14]. Furthermore, ChatGPT supported their writing by turning students into active investigators rather than passive knowledge recipients and facilitated the development of their writing skills [S11] and [S14]. Similarly, ChatGPT allowed students to generate unique ideas and perspectives, leading to deeper analysis and reflection in their journalism writing [S9]. In terms of language proficiency, ChatGPT allowed participants to translate content into their home languages, making it more accessible and relevant to their context [S4]. It also enabled them to request changes in linguistic tone or flavor [S8]. Moreover, participants used it to check grammar or as a dictionary [S11].

4.3 Valuable resource for learning approaches

Five studies demonstrated that students used ChatGPT as a valuable complementary resource for self-directed learning. It provided learning resources and guidance on diverse educational topics and created a supportive home learning environment [S2] and [S4]. Moreover, it offered step-by-step guidance that helped students grasp concepts at their own pace and enhance their understanding [S5], streamlined the completion of tasks and projects carried out independently [S7], provided comprehensive and easy-to-understand explanations on various subjects [S10], and assisted in studying geometry operations, empowering students to explore them at their own pace [S12]. Three studies showed that students used ChatGPT as a valuable resource for personalized learning. It delivered age-appropriate conversations and tailored teaching based on a child's interests [S4], acted as a personalized learning assistant that adapted to their needs and pace, helping them understand mathematical concepts [S12], and enabled personalized learning experiences in the social sciences by adapting to students' needs and learning styles [S13]. On the other hand, it is important to note that, according to one study [S5], students suggested that using ChatGPT may negatively affect collaborative learning competencies between students.

4.4 Enhancing students' competencies

Six of the reviewed studies have shown that ChatGPT is a valuable tool for improving a wide range of skills among students. Two studies provided evidence that ChatGPT improved students' critical thinking, reasoning skills, and hazard recognition competencies by engaging them in interactive conversations or activities and providing responses related to their disciplines in journalism [S5] and construction education [S9]. Furthermore, two studies focused on mathematical education showed the positive impact of ChatGPT on students' problem-solving abilities in unraveling problem-solving questions [S12] and on enhancing students' understanding of the problem-solving process [S5]. Lastly, one study indicated that ChatGPT effectively contributed to the enhancement of conversational social skills [S4].

4.5 Supporting students' academic success

Seven of the reviewed studies highlighted that students found ChatGPT beneficial for learning, as it enhanced learning efficiency and improved the learning experience. It improved students' efficiency in computer engineering studies by providing well-structured responses and good explanations [S2]. Additionally, students found it extremely useful for hazard reporting [S3], and it also enhanced their efficiency and capabilities in solving mathematics problems [S5] and [S12]. Furthermore, by finding information, generating ideas, translating texts, and providing alternative questions, ChatGPT helped students deepen their understanding of various subjects [S6]. It contributed to an increase in students' overall productivity [S7] and improved efficiency in composing written tasks [S8]. Regarding learning experiences, ChatGPT was instrumental in helping students identify hazards that they might otherwise have overlooked [S3]. It also improved students' learning experiences in solving mathematics problems and developing related abilities [S5] and [S12]. Moreover, it increased students' successful completion of important tasks in their studies [S7], particularly those involving writing tasks of average difficulty [S8]. Additionally, ChatGPT increased the chances of educational success by providing students with baseline knowledge on various topics [S10].

5 Teachers' initial attempts at utilizing ChatGPT in teaching and main findings from teachers' perspective

5.1 Valuable resource for teaching

The reviewed studies showed that teachers have employed ChatGPT to recommend, modify, and generate diverse, creative, organized, and engaging educational content, teaching materials, and testing resources more rapidly [S4], [S6], [S10], and [S11]. Additionally, teachers experienced increased productivity, as ChatGPT facilitated quick and accurate responses to questions, fact-checking, and information searches [S1]. It also proved valuable in constructing new knowledge [S6] and providing timely answers to students' questions in classrooms [S11]. Moreover, ChatGPT enhanced teachers' efficiency by generating new ideas for activities and preplanning activities for their students [S4] and [S6], including interactive language game partners [S11].

5.2 Improving productivity and efficiency

The reviewed studies showed that participants' productivity and work efficiency were significantly enhanced by using ChatGPT, as it enabled them to allocate more time to other tasks and reduce their overall workloads [S6], [S10], [S11], [S13], and [S14]. However, three studies [S1], [S4], and [S11] indicated a negative perception and attitude among teachers toward using ChatGPT. This negativity stemmed from a lack of the necessary skills to use it effectively [S1], limited familiarity with it [S4], and occasional inaccuracies in the content it provided [S10].

5.3 Catalyzing new teaching methodologies

Five of the reviewed studies highlighted that educators recognized the need to redefine their teaching profession with the assistance of ChatGPT [S11], to develop new effective learning strategies [S4], and to adapt teaching strategies and methodologies to ensure the development of essential skills for future engineers [S5]. They also emphasized the importance of adopting new educational philosophies and approaches that can evolve with the introduction of ChatGPT into the classroom [S12]. Furthermore, updating curricula to focus on improving human-specific capacities, such as emotional intelligence, creativity, and philosophical perspectives [S13], was found to be essential.

5.4 Effective utilization of ChatGPT in teaching

According to the reviewed studies, effective utilization of ChatGPT in education requires providing teachers with well-structured training, support, and adequate background knowledge on how to use it responsibly [S1], [S3], [S11], and [S12]. Establishing clear rules and regulations regarding its usage is essential to ensure it positively impacts teaching and learning processes, including students' skills [S1], [S4], [S5], [S8], [S9], and [S11]-[S14]. Moreover, conducting further research and engaging in discussions with policymakers and stakeholders is crucial for the successful integration of ChatGPT in education and for maximizing the benefits for both educators and students [S1], [S6]-[S10], and [S12]-[S14].

6 Discussion

The purpose of this review was to systematically examine empirical studies that have explored the utilization of ChatGPT, one of today's most advanced LLM-based chatbots, in education. The reviewed studies revealed several ways ChatGPT has been utilized in different learning and teaching practices and provided insights and considerations that can facilitate its effective and responsible use in future educational contexts. The results came from diverse fields of education, which helped us avoid a review biased toward a specific field. Similarly, the reviewed studies were conducted across different geographic regions, and this variety in geographic representation enriched the findings of this review.

In response to RQ1, "What are students' and teachers' initial attempts at utilizing ChatGPT in education?", the findings from this review provide comprehensive insights. Chatbots, including ChatGPT, play a crucial role in supporting student learning, enhancing their learning experiences, and facilitating diverse learning approaches [42, 43]. This review found that ChatGPT has been instrumental in enhancing students' learning experiences by serving as a virtual intelligent assistant that provides immediate feedback and on-demand answers and engages in educational conversations. Additionally, students have benefited from ChatGPT's ability to generate ideas, compose essays, and perform tasks like summarizing, translating, paraphrasing texts, or checking grammar, thereby enhancing their writing and language competencies. Furthermore, students have turned to ChatGPT for assistance in understanding concepts and homework, obtaining structured learning plans, and clarifying assignments and tasks, which fosters a supportive home learning environment and allows them to take responsibility for their own learning [26, 27, 28]. This finding aligns with the work of Saqr et al. [68, 69], who highlighted that when students actively engage in their own learning process, it yields additional advantages, such as heightened motivation, enhanced achievement, and the cultivation of enthusiasm, turning them into advocates for their own learning.

Moreover, students have utilized ChatGPT for tailored teaching and step-by-step guidance on diverse educational topics, streamlining task and project completion, and generating and recommending educational content. This personalization enhances the learning environment, leading to increased academic success. This finding aligns with other recent studies [26, 27, 28, 60, 66] which revealed that ChatGPT has the potential to offer personalized learning experiences and support an effective learning process by providing students with customized feedback and explanations tailored to their needs and abilities, ultimately fostering students' performance, engagement, and motivation and increasing their academic success [14, 44, 58]. This outcome is in line with the findings of Saqr et al. [68, 69], which emphasized that learning strategies are important catalysts of students' learning: students who utilize effective learning strategies are more likely to achieve better academic results.

Teachers, too, have capitalized on ChatGPT's capabilities to enhance productivity and efficiency, using it to create lesson plans, generate quizzes, provide additional resources, generate and preplan ideas for activities, and aid in answering students' questions. This adoption of technology introduces new opportunities to support teaching and learning practices. This finding aligns with those of Day [17], De Castro [18], and Su and Yang [74], as well as Valtonen et al. [82], who revealed that emerging technological advancements have opened up novel opportunities and means to support teaching and learning practices and to enhance teachers' productivity.

In response to RQ2, "What are the main findings derived from empirical studies that have incorporated ChatGPT into learning and teaching?", the findings from this review provide profound insights and raise significant concerns. Starting with the insights: chatbots, including ChatGPT, have demonstrated the potential to reshape and revolutionize education, creating novel opportunities for enhancing the learning process and outcomes [83], facilitating different learning approaches, and offering a range of pedagogical benefits [19, 43, 72]. In this context, this review found that ChatGPT could open avenues for educators to adopt or develop new effective learning and teaching strategies that evolve with its introduction into the classroom. Nonetheless, there is an evident lack of research on the potential impact of generative machine learning models within diverse educational settings [83]. This necessitates that teachers attain a high level of proficiency in incorporating chatbots, such as ChatGPT, into their classrooms to create inventive, well-structured, and engaging learning strategies. In the same vein, the review also found that teachers without the requisite skills to utilize ChatGPT realized that it did not contribute positively to their work and could potentially have adverse effects [37]. This could lead to inequity of access to the benefits of chatbots, including ChatGPT, as individuals who lack the necessary expertise may not be able to harness their full potential, resulting in disparities in educational outcomes and opportunities. Therefore, immediate action is needed to address these potential issues.
A potential solution is offering training, support, and competency development for teachers to ensure that all of them can leverage chatbots, including ChatGPT, effectively and equitably in their educational practices [5, 28, 80], which could enhance accessibility and inclusivity and potentially result in innovative outcomes [82, 83].

Additionally, chatbots, including ChatGPT, have the potential to significantly impact students' thinking abilities, including retention, reasoning, and analysis skills [19, 45], and to foster innovation and creativity [83]. This review found that ChatGPT could contribute to improving a wide range of skills among students. However, it also found that frequent use of ChatGPT may decrease innovative capacities, collaborative skills, cognitive capacities, and students' motivation to attend classes, and could reduce higher-order thinking skills among students [22, 29]. Therefore, immediate action is needed to carefully examine the long-term impact of chatbots such as ChatGPT on learning outcomes, and to explore their incorporation into educational settings as supportive tools without compromising students' cognitive development and critical thinking abilities. In the same vein, the review found it challenging to draw a consistent conclusion regarding the potential of ChatGPT to aid self-directed learning. This finding aligns with the recent study of Baskara [8]; therefore, further research is needed to explore the potential of ChatGPT for self-directed learning. One potential solution involves utilizing learning analytics as a novel approach to examine various aspects of students' learning and support them in their individual endeavors [32]. This approach can bridge the gap by facilitating an in-depth analysis of how learners engage with ChatGPT, identifying trends in self-directed learning behavior, and assessing its influence on their outcomes.

Turning to the significant concerns: a fundamental challenge with LLM-based chatbots, including ChatGPT, is the accuracy and quality of the information they provide, as they can present false information as truth, a phenomenon often referred to as "hallucination" [3, 49]. In this context, this review found that the information provided was not entirely satisfactory. Consequently, the utilization of chatbots raises concerns about generating inaccurate or misleading information, especially for students who rely on them to support their learning. This finding aligns with other findings [6, 30, 35, 40] which revealed that incorporating chatbots such as ChatGPT into education presents challenges related to accuracy and reliability, both because they are trained on large corpora of data that may contain inaccuracies and because of the way users formulate their prompts. Therefore, immediate action is needed to address these potential issues. One possible solution is to equip students with the necessary skills and competencies: a background understanding of how to use these tools effectively and the ability to assess and evaluate the information they generate, as the accuracy and quality of the provided information depend on the input, its complexity, the topic, and the relevance of the training data [28, 49, 86]. It is also essential to examine how learners can be educated about how these models operate, the data used in their training, and how to recognize their limitations, challenges, and issues [79].

Furthermore, chatbots present a substantial challenge to maintaining academic integrity [20, 56] and avoiding copyright violations [83], both significant concerns in education. The review found that the potential misuse of ChatGPT might foster cheating, facilitate plagiarism, and threaten academic integrity. This issue is affirmed by the research of Basic et al. [7], who presented evidence that students who utilized ChatGPT in their writing assignments had more plagiarism cases than those who did not. These findings align with the conclusions drawn by Cotton et al. [13], Hisan and Amri [33], and Sullivan et al. [75], who revealed that the integration of chatbots such as ChatGPT into education poses a significant challenge to the preservation of academic integrity. Moreover, chatbots, including ChatGPT, have increased the difficulty of identifying plagiarism [47, 67, 76]. Findings from previous studies [1, 84] indicate that AI-generated text often went undetected by plagiarism software such as Turnitin. Turnitin and similar detection tools, such as ZeroGPT, GPTZero, and Copyleaks, have since evolved, incorporating enhanced techniques to detect AI-generated text. Nevertheless, studies have found that these tools are still not fully ready to accurately and reliably identify AI-generated text and may produce false positives [10, 51], and new detection methods may need to be created and implemented [4]. This issue leads to another concern: the difficulty of accurately evaluating student performance when students use chatbots such as ChatGPT to assist with their assignments. Consequently, most LLM-driven chatbots present a substantial challenge to traditional assessments [64].
Findings from previous studies indicate the importance of rethinking, improving, and redesigning innovative assessment methods in the era of chatbots [14, 20, 64, 75]. These methods should prioritize evaluating students' ability to apply knowledge to complex cases and demonstrate comprehension, rather than focusing solely on the final product. Therefore, immediate action is needed to address these potential issues. One possible solution is the development of clear guidelines, regulatory policies, and pedagogical guidance. These measures would help regulate the proper and ethical utilization of chatbots, such as ChatGPT, and must be established before their introduction to students [35, 38, 39, 41, 89].

In summary, our review has delved into the utilization of ChatGPT, a prominent example of chatbots, in education, addressing the question of how ChatGPT has been utilized in education. However, there remain significant gaps, which necessitate further research to shed light on this area.

7 Conclusions

This systematic review has shed light on the varied initial attempts at incorporating ChatGPT into education by both learners and educators, while also offering insights and considerations that can facilitate its effective and responsible use in future educational contexts. The analysis of the 14 selected studies revealed the dual-edged impact of ChatGPT in educational settings. On the positive side, ChatGPT significantly aided the learning process in various ways. Learners used it as a virtual intelligent assistant, benefiting from its ability to provide immediate feedback, on-demand answers, and easy access to educational resources. Additionally, learners used it to enhance their writing and language skills, engaging in practices such as generating ideas, composing essays, and performing tasks like summarizing, translating, paraphrasing texts, or checking grammar. Importantly, other learners utilized it to support and facilitate their self-directed and personalized learning on a broad range of educational topics, assisting in understanding concepts and homework, providing structured learning plans, and clarifying assignments and tasks. Educators, on the other hand, found ChatGPT beneficial for enhancing productivity and efficiency. They used it for creating lesson plans, generating quizzes, providing additional resources, and answering learners' questions, which saved time and allowed for more dynamic and engaging teaching strategies and methodologies.

However, the review also pointed out negative impacts. The results revealed that overuse of ChatGPT could decrease innovative capacities and collaborative learning among learners. Specifically, relying too much on ChatGPT for quick answers can inhibit learners' critical thinking and problem-solving skills. Learners might not engage deeply with the material or consider multiple solutions to a problem. This tendency was particularly evident in group projects, where learners preferred consulting ChatGPT individually for solutions over brainstorming and collaborating with peers, which negatively affected their teamwork abilities. On a broader level, integrating ChatGPT into education has also raised several concerns, including the potential for providing inaccurate or misleading information, issues of inequity in access, challenges related to academic integrity, and the possibility of misusing the technology.

Accordingly, this review emphasizes the urgency of developing clear rules, policies, and regulations to ensure ChatGPT's effective and responsible use in educational settings, alongside other chatbots, by both learners and educators. This requires providing well-structured training to educate them on responsible usage and understanding its limitations, along with offering sufficient background information. Moreover, it highlights the importance of rethinking, improving, and redesigning innovative teaching and assessment methods in the era of ChatGPT. Furthermore, conducting further research and engaging in discussions with policymakers and stakeholders are essential steps to maximize the benefits for both educators and learners and ensure academic integrity.

It is important to acknowledge that this review has certain limitations. Firstly, the limited number of reviewed studies can be attributed to several factors: the novelty of the technology, as new technologies often face initial skepticism and cautious adoption; the lack of clear guidelines or best practices for leveraging this technology for educational purposes; and institutional or governmental policies affecting its utilization in educational contexts. These factors, in turn, limited the number of studies available for review. Secondly, the reviewed studies used the original version of ChatGPT, based on GPT-3 or GPT-3.5, which implies that new studies utilizing the updated version, GPT-4, may yield different findings. Therefore, conducting follow-up systematic reviews is essential once more empirical studies on ChatGPT are published. Additionally, long-term studies are necessary to thoroughly examine and assess the impact of ChatGPT on various educational practices.

Despite these limitations, this systematic review has highlighted the transformative potential of ChatGPT in education, revealing its diverse utilization by learners and educators alike. It has summarized the benefits of incorporating ChatGPT into education, as well as the critical concerns and challenges that must be addressed to facilitate its effective and responsible use in future educational contexts. This review can serve as an insightful resource for practitioners who seek to integrate ChatGPT into education and stimulate further research in the field.

Data availability

The data supporting our findings are available upon request.

Abbreviations

  • AI: Artificial intelligence
  • AIEd: AI in education
  • LLM: Large language model
  • ANN: Artificial neural networks
  • ChatGPT: Chat Generative Pre-Trained Transformer
  • RNN: Recurrent neural networks
  • LSTM: Long short-term memory
  • RLHF: Reinforcement learning from human feedback
  • NLP: Natural language processing
  • PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses

AlAfnan MA, Dishari S, Jovic M, Lomidze K. ChatGPT as an educational tool: opportunities, challenges, and recommendations for communication, business writing, and composition courses. J Artif Intell Technol. 2023. https://doi.org/10.37965/jait.2023.0184 .

Ali JKM, Shamsan MAA, Hezam TA, Mohammed AAQ. Impact of ChatGPT on learning motivation. J Engl Stud Arabia Felix. 2023;2(1):41–9. https://doi.org/10.56540/jesaf.v2i1.51 .

Alkaissi H, McFarlane SI. Artificial hallucinations in ChatGPT: implications in scientific writing. Cureus. 2023. https://doi.org/10.7759/cureus.35179 .

Anderson N, Belavý DL, Perle SM, Hendricks S, Hespanhol L, Verhagen E, Memon AR. AI did not write this manuscript, or did it? Can we trick the AI text detector into generated texts? The potential future of ChatGPT and AI in sports & exercise medicine manuscript generation. BMJ Open Sport Exerc Med. 2023;9(1): e001568. https://doi.org/10.1136/bmjsem-2023-001568 .

Ausat AMA, Massang B, Efendi M, Nofirman N, Riady Y. Can chat GPT replace the role of the teacher in the classroom: a fundamental analysis. J Educ. 2023;5(4):16100–6.

Baidoo-Anu D, Ansah L. Education in the Era of generative artificial intelligence (AI): understanding the potential benefits of ChatGPT in promoting teaching and learning. Soc Sci Res Netw. 2023. https://doi.org/10.2139/ssrn.4337484 .

Basic Z, Banovac A, Kruzic I, Jerkovic I. Better by you, better than me, ChatGPT3 as writing assistance in students' essays. 2023. arXiv preprint arXiv:2302.04536.

Baskara FR. The promises and pitfalls of using chat GPT for self-determined learning in higher education: an argumentative review. Prosiding Seminar Nasional Fakultas Tarbiyah dan Ilmu Keguruan IAIM Sinjai. 2023;2:95–101. https://doi.org/10.47435/sentikjar.v2i0.1825 .

Behera RK, Bala PK, Dhir A. The emerging role of cognitive computing in healthcare: a systematic literature review. Int J Med Inform. 2019;129:154–66. https://doi.org/10.1016/j.ijmedinf.2019.04.024 .

Chaka C. Detecting AI content in responses generated by ChatGPT, YouChat, and Chatsonic: the case of five AI content detection tools. J Appl Learn Teach. 2023. https://doi.org/10.37074/jalt.2023.6.2.12 .

Chiu TKF, Xia Q, Zhou X, Chai CS, Cheng M. Systematic literature review on opportunities, challenges, and future research recommendations of artificial intelligence in education. Comput Educ Artif Intell. 2023;4:100118. https://doi.org/10.1016/j.caeai.2022.100118 .

Choi EPH, Lee JJ, Ho M, Kwok JYY, Lok KYW. Chatting or cheating? The impacts of ChatGPT and other artificial intelligence language models on nurse education. Nurse Educ Today. 2023;125:105796. https://doi.org/10.1016/j.nedt.2023.105796 .

Cotton D, Cotton PA, Shipway JR. Chatting and cheating: ensuring academic integrity in the era of ChatGPT. Innov Educ Teach Int. 2023. https://doi.org/10.1080/14703297.2023.2190148 .

Crawford J, Cowling M, Allen K. Leadership is needed for ethical ChatGPT: Character, assessment, and learning using artificial intelligence (AI). J Univ Teach Learn Pract. 2023. https://doi.org/10.53761/1.20.3.02 .

Creswell JW. Educational research: planning, conducting, and evaluating quantitative and qualitative research [Ebook]. 4th ed. London: Pearson Education; 2015.

Curry D. ChatGPT Revenue and Usage Statistics (2023)—Business of Apps. 2023. https://www.businessofapps.com/data/chatgpt-statistics/

Day T. A preliminary investigation of fake peer-reviewed citations and references generated by ChatGPT. Prof Geogr. 2023. https://doi.org/10.1080/00330124.2023.2190373 .

De Castro CA. A Discussion about the Impact of ChatGPT in education: benefits and concerns. J Bus Theor Pract. 2023;11(2):p28. https://doi.org/10.22158/jbtp.v11n2p28 .

Deng X, Yu Z. A meta-analysis and systematic review of the effect of Chatbot technology use in sustainable education. Sustainability. 2023;15(4):2940. https://doi.org/10.3390/su15042940 .

Eke DO. ChatGPT and the rise of generative AI: threat to academic integrity? J Responsib Technol. 2023;13:100060. https://doi.org/10.1016/j.jrt.2023.100060 .

Elmoazen R, Saqr M, Tedre M, Hirsto L. A systematic literature review of empirical research on epistemic network analysis in education. IEEE Access. 2022;10:17330–48. https://doi.org/10.1109/access.2022.3149812 .

Farrokhnia M, Banihashem SK, Noroozi O, Wals AEJ. A SWOT analysis of ChatGPT: implications for educational practice and research. Innov Educ Teach Int. 2023. https://doi.org/10.1080/14703297.2023.2195846 .

Fergus S, Botha M, Ostovar M. Evaluating academic answers generated using ChatGPT. J Chem Educ. 2023;100(4):1672–5. https://doi.org/10.1021/acs.jchemed.3c00087 .

Fink A. Conducting research literature reviews: from the Internet to Paper. Incorporated: SAGE Publications; 2010.

Firaina R, Sulisworo D. Exploring the usage of ChatGPT in higher education: frequency and impact on productivity. Buletin Edukasi Indonesia (BEI). 2023;2(01):39–46. https://doi.org/10.56741/bei.v2i01.310 .

Firat M. How Chat GPT can transform autodidactic experiences and open education. Department of Distance Education, Open Education Faculty, Anadolu University; 2023. https://orcid.org/0000-0001-8707-5918

Firat M. What ChatGPT means for universities: perceptions of scholars and students. J Appl Learn Teach. 2023. https://doi.org/10.37074/jalt.2023.6.1.22 .

Fuchs K. Exploring the opportunities and challenges of NLP models in higher education: is Chat GPT a blessing or a curse? Front Educ. 2023. https://doi.org/10.3389/feduc.2023.1166682 .

García-Peñalvo FJ. La percepción de la inteligencia artificial en contextos educativos tras el lanzamiento de ChatGPT: disrupción o pánico. Educ Knowl Soc. 2023;24: e31279. https://doi.org/10.14201/eks.31279 .

Gilson A, Safranek CW, Huang T, Socrates V, Chi L, Taylor A, Chartash D. How does ChatGPT perform on the United States medical Licensing examination? The implications of large language models for medical education and knowledge assessment. JMIR Med Educ. 2023;9: e45312. https://doi.org/10.2196/45312 .

Hashana AJ, Brundha P, Ayoobkhan MUA, Fazila S. Deep learning in ChatGPT: a survey. In: 2023 7th International Conference on Trends in Electronics and Informatics (ICOEI). IEEE; 2023. p. 1001–5. https://doi.org/10.1109/icoei56765.2023.10125852

Hirsto L, Saqr M, López-Pernas S, Valtonen T. (2022). A systematic narrative review of learning analytics research in K-12 and schools.  Proceedings . https://ceur-ws.org/Vol-3383/FLAIEC22_paper_9536.pdf

Hisan UK, Amri MM. ChatGPT and medical education: a double-edged sword. J Pedag Educ Sci. 2023;2(01):71–89. https://doi.org/10.13140/RG.2.2.31280.23043/1 .

Hopkins AM, Logan JM, Kichenadasse G, Sorich MJ. Artificial intelligence chatbots will revolutionize how cancer patients access information: ChatGPT represents a paradigm-shift. JNCI Cancer Spectr. 2023. https://doi.org/10.1093/jncics/pkad010 .

Househ M, AlSaad R, Alhuwail D, Ahmed A, Healy MG, Latifi S, Sheikh J. Large Language models in medical education: opportunities, challenges, and future directions. JMIR Med Educ. 2023;9: e48291. https://doi.org/10.2196/48291 .

Ilkka T. The impact of artificial intelligence on learning, teaching, and education. Minist de Educ. 2018. https://doi.org/10.2760/12297 .

Iqbal N, Ahmed H, Azhar KA. Exploring teachers’ attitudes towards using ChatGPT. Glob J Manag Adm Sci. 2022;3(4):97–111. https://doi.org/10.46568/gjmas.v3i4.163 .

Irfan M, Murray L, Ali S. Integration of artificial intelligence in academia: a case study of critical teaching and learning in higher education. Glob Soc Sci Rev. 2023;8(1):352–64. https://doi.org/10.31703/gssr.2023(viii-i).32 .

Jeon JH, Lee S. Large language models in education: a focus on the complementary relationship between human teachers and ChatGPT. Educ Inf Technol. 2023. https://doi.org/10.1007/s10639-023-11834-1 .

Khan RA, Jawaid M, Khan AR, Sajjad M. ChatGPT—Reshaping medical education and clinical management. Pak J Med Sci. 2023. https://doi.org/10.12669/pjms.39.2.7653 .

King MR. A conversation on artificial intelligence, Chatbots, and plagiarism in higher education. Cell Mol Bioeng. 2023;16(1):1–2. https://doi.org/10.1007/s12195-022-00754-8 .

Kooli C. Chatbots in education and research: a critical examination of ethical implications and solutions. Sustainability. 2023;15(7):5614. https://doi.org/10.3390/su15075614 .

Kuhail MA, Alturki N, Alramlawi S, Alhejori K. Interacting with educational chatbots: a systematic review. Educ Inf Technol. 2022;28(1):973–1018. https://doi.org/10.1007/s10639-022-11177-3 .

Lee H. The rise of ChatGPT: exploring its potential in medical education. Anat Sci Educ. 2023. https://doi.org/10.1002/ase.2270 .

Li L, Subbareddy R, Raghavendra CG. AI intelligence Chatbot to improve students learning in the higher education platform. J Interconnect Netw. 2022. https://doi.org/10.1142/s0219265921430325 .

Limna P. A review of artificial intelligence (AI) in education during the digital era. 2022. https://ssrn.com/abstract=4160798

Lo CK. What is the impact of ChatGPT on education? A rapid review of the literature. Educ Sci. 2023;13(4):410. https://doi.org/10.3390/educsci13040410 .

Luo W, He H, Liu J, Berson IR, Berson MJ, Zhou Y, Li H. Aladdin’s genie or pandora’s box For early childhood education? Experts chat on the roles, challenges, and developments of ChatGPT. Early Educ Dev. 2023. https://doi.org/10.1080/10409289.2023.2214181 .

Meyer JG, Urbanowicz RJ, Martin P, O’Connor K, Li R, Peng P, Moore JH. ChatGPT and large language models in academia: opportunities and challenges. Biodata Min. 2023. https://doi.org/10.1186/s13040-023-00339-9 .

Mhlanga D. Open AI in education, the responsible and ethical use of ChatGPT towards lifelong learning. Soc Sci Res Netw. 2023. https://doi.org/10.2139/ssrn.4354422 .

Neumann M, Rauschenberger M, Schön EM. “We need to talk about ChatGPT”: the future of AI and higher education. 2023. https://doi.org/10.1109/seeng59157.2023.00010 .

Nolan B. Here are the schools and colleges that have banned the use of ChatGPT over plagiarism and misinformation fears. Business Insider. 2023. https://www.businessinsider.com

O’Leary DE. An analysis of three chatbots: BlenderBot, ChatGPT and LaMDA. Int J Intell Syst Account, Financ Manag. 2023;30(1):41–54. https://doi.org/10.1002/isaf.1531 .

Okoli C. A guide to conducting a standalone systematic literature review. Commun Assoc Inf Syst. 2015. https://doi.org/10.17705/1cais.03743 .

OpenAI. 2023. https://openai.com/blog/chatgpt

Perkins M. Academic integrity considerations of AI large language models in the post-pandemic era: ChatGPT and beyond. J Univ Teach Learn Pract. 2023. https://doi.org/10.53761/1.20.02.07 .

Plevris V, Papazafeiropoulos G, Rios AJ. Chatbots put to the test in math and logic problems: a preliminary comparison and assessment of ChatGPT-3.5, ChatGPT-4, and Google Bard. arXiv. 2023. https://doi.org/10.48550/arxiv.2305.18618 .

Rahman MM, Watanobe Y. ChatGPT for education and research: opportunities, threats, and strategies. Appl Sci. 2023;13(9):5783. https://doi.org/10.3390/app13095783 .

Ram B, Verma P. Artificial intelligence AI-based chatbot study of ChatGPT, Google AI Bard and Baidu AI. World J Adv Eng Technol Sci. 2023;8(1):258–61. https://doi.org/10.30574/wjaets.2023.8.1.0045 .

Rasul T, Nair S, Kalendra D, Robin M, de Oliveira Santini F, Ladeira WJ, Heathcote L. The role of ChatGPT in higher education: benefits, challenges, and future research directions. J Appl Learn Teach. 2023. https://doi.org/10.37074/jalt.2023.6.1.29 .

Ratnam M, Sharm B, Tomer A. ChatGPT: educational artificial intelligence. Int J Adv Trends Comput Sci Eng. 2023;12(2):84–91. https://doi.org/10.30534/ijatcse/2023/091222023 .

Ray PP. ChatGPT: a comprehensive review on background, applications, key challenges, bias, ethics, limitations and future scope. Internet Things Cyber-Phys Syst. 2023;3:121–54. https://doi.org/10.1016/j.iotcps.2023.04.003 .

Roumeliotis KI, Tselikas ND. ChatGPT and Open-AI models: a preliminary review. Future Internet. 2023;15(6):192. https://doi.org/10.3390/fi15060192 .

Rudolph J, Tan S, Tan S. War of the chatbots: Bard, Bing Chat, ChatGPT, Ernie and beyond. The new AI gold rush and its impact on higher education. J Appl Learn Teach. 2023. https://doi.org/10.37074/jalt.2023.6.1.23 .

Ruiz LMS, Moll-López S, Nuñez-Pérez A, Moraño J, Vega-Fleitas E. ChatGPT challenges blended learning methodologies in engineering education: a case study in mathematics. Appl Sci. 2023;13(10):6039. https://doi.org/10.3390/app13106039 .

Sallam M, Salim NA, Barakat M, Al-Tammemi AB. ChatGPT applications in medical, dental, pharmacy, and public health education: a descriptive study highlighting the advantages and limitations. Narra J. 2023;3(1): e103. https://doi.org/10.52225/narra.v3i1.103 .

Salvagno M, Taccone FS, Gerli AG. Can artificial intelligence help for scientific writing? Crit Care. 2023. https://doi.org/10.1186/s13054-023-04380-2 .

Saqr M, López-Pernas S, Helske S, Hrastinski S. The longitudinal association between engagement and achievement varies by time, students’ profiles, and achievement state: a full program study. Comput Educ. 2023;199:104787. https://doi.org/10.1016/j.compedu.2023.104787 .

Saqr M, Matcha W, Uzir N, Jovanović J, Gašević D, López-Pernas S. Transferring effective learning strategies across learning contexts matters: a study in problem-based learning. Australas J Educ Technol. 2023;39(3):9.

Schöbel S, Schmitt A, Benner D, Saqr M, Janson A, Leimeister JM. Charting the evolution and future of conversational agents: a research agenda along five waves and new frontiers. Inf Syst Front. 2023. https://doi.org/10.1007/s10796-023-10375-9 .

Shoufan A. Exploring students’ perceptions of ChatGPT: thematic analysis and follow-up survey. IEEE Access. 2023. https://doi.org/10.1109/access.2023.3268224 .

Sonderegger S, Seufert S. Chatbot-mediated learning: conceptual framework for the design of chatbot use cases in education. St. Gallen: Institute for Educational Management and Technologies, University of St. Gallen; 2022. https://doi.org/10.5220/0010999200003182 .


Strzelecki A. To use or not to use ChatGPT in higher education? A study of students’ acceptance and use of technology. Interact Learn Environ. 2023. https://doi.org/10.1080/10494820.2023.2209881 .

Su J, Yang W. Unlocking the power of ChatGPT: a framework for applying generative AI in education. ECNU Rev Educ. 2023. https://doi.org/10.1177/20965311231168423 .

Sullivan M, Kelly A, McLaughlan P. ChatGPT in higher education: considerations for academic integrity and student learning. J Appl Learn Teach. 2023;6(1):1–10. https://doi.org/10.37074/jalt.2023.6.1.17 .

Szabo A. ChatGPT is a breakthrough in science and education but fails a test in sports and exercise psychology. Balt J Sport Health Sci. 2023;1(128):25–40. https://doi.org/10.33607/bjshs.v127i4.1233 .

Taecharungroj V. “What can ChatGPT do?” analyzing early reactions to the innovative AI chatbot on Twitter. Big Data Cognit Comput. 2023;7(1):35. https://doi.org/10.3390/bdcc7010035 .

Tam S, Said RB. User preferences for ChatGPT-powered conversational interfaces versus traditional methods. Biomed Eng Soc. 2023. https://doi.org/10.58496/mjcsc/2023/004 .

Tedre M, Kahila J, Vartiainen H. Exploration on how co-designing with AI facilitates critical evaluation of ethics of AI in craft education. In: Langran E, Christensen P, Sanson J, editors. Proceedings of Society for Information Technology and Teacher Education International Conference. 2023. p. 2289–96.

Tlili A, Shehata B, Adarkwah MA, Bozkurt A, Hickey DT, Huang R, Agyemang B. What if the devil is my guardian angel: ChatGPT as a case study of using chatbots in education. Smart Learn Environ. 2023. https://doi.org/10.1186/s40561-023-00237-x .

Uddin SMJ, Albert A, Ovid A, Alsharef A. Leveraging ChatGPT to aid construction hazard recognition and support safety education and training. Sustainability. 2023;15(9):7121. https://doi.org/10.3390/su15097121 .

Valtonen T, López-Pernas S, Saqr M, Vartiainen H, Sointu E, Tedre M. The nature and building blocks of educational technology research. Comput Hum Behav. 2022;128:107123. https://doi.org/10.1016/j.chb.2021.107123 .

Vartiainen H, Tedre M. Using artificial intelligence in craft education: crafting with text-to-image generative models. Digit Creat. 2023;34(1):1–21. https://doi.org/10.1080/14626268.2023.2174557 .

Ventayen RJM. OpenAI ChatGPT generated results: similarity index of artificial intelligence-based contents. Soc Sci Res Netw. 2023. https://doi.org/10.2139/ssrn.4332664 .

Wagner MW, Ertl-Wagner BB. Accuracy of information and references using ChatGPT-3 for retrieval of clinical radiological information. Can Assoc Radiol J. 2023. https://doi.org/10.1177/08465371231171125 .

Wardat Y, Tashtoush MA, AlAli R, Jarrah AM. ChatGPT: a revolutionary tool for teaching and learning mathematics. Eurasia J Math, Sci Technol Educ. 2023;19(7):em2286. https://doi.org/10.29333/ejmste/13272 .

Webster J, Watson RT. Analyzing the past to prepare for the future: writing a literature review. Manag Inf Syst Quart. 2002;26(2):3.

Xiao Y, Watson ME. Guidance on conducting a systematic literature review. J Plan Educ Res. 2017;39(1):93–112. https://doi.org/10.1177/0739456x17723971 .

Yan D. Impact of ChatGPT on learners in a L2 writing practicum: an exploratory investigation. Educ Inf Technol. 2023. https://doi.org/10.1007/s10639-023-11742-4 .

Yu H. Reflection on whether Chat GPT should be banned by academia from the perspective of education and teaching. Front Psychol. 2023;14:1181712. https://doi.org/10.3389/fpsyg.2023.1181712 .

Zhu C, Sun M, Luo J, Li T, Wang M. How to harness the potential of ChatGPT in education? Knowl Manag ELearn. 2023;15(2):133–52. https://doi.org/10.34105/j.kmel.2023.15.008 .


The paper is co-funded by the Academy of Finland (Suomen Akatemia) Research Council for Natural Sciences and Engineering for the project Towards precision education: Idiographic learning analytics (TOPEILA), Decision Number 350560.

Author information

Authors and affiliations

School of Computing, University of Eastern Finland, 80100, Joensuu, Finland

Yazid Albadarin, Mohammed Saqr, Nicolas Pope & Markku Tukiainen


Contributions

YA contributed to the literature search, data analysis, discussion, and conclusion. Additionally, YA contributed to the manuscript’s writing, editing, and finalization. MS contributed to the study’s design, conceptualization, acquisition of funding, project administration, allocation of resources, supervision, validation, literature search, and analysis of results. Furthermore, MS contributed to the manuscript's writing, revising, and approving it in its finalized state. NP contributed to the results, and discussions, and provided supervision. NP also contributed to the writing process, revisions, and the final approval of the manuscript in its finalized state. MT contributed to the study's conceptualization, resource management, supervision, writing, revising the manuscript, and approving it.

Corresponding author

Correspondence to Yazid Albadarin .

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

See Table 4

The data presented in Table 4 were synthesized by identifying relevant studies through a search of five databases (ERIC, Scopus, Web of Knowledge, Dimensions.ai, and lens.org) using the keywords “ChatGPT” and “education”. Inclusion/exclusion criteria were then applied, and data extraction was performed using Creswell’s [ 15 ] coding techniques to capture key information and identify common themes across the included studies.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Albadarin, Y., Saqr, M., Pope, N. et al. A systematic literature review of empirical research on ChatGPT in education. Discov Educ 3, 60 (2024). https://doi.org/10.1007/s44217-024-00138-2


Received: 22 October 2023

Accepted: 10 May 2024

Published: 26 May 2024

DOI: https://doi.org/10.1007/s44217-024-00138-2


  • Large language models
  • Educational technology
  • Systematic review


Computer Science > Computation and Language

Title: Text Generation: A Systematic Literature Review of Tasks, Evaluation, and Challenges

Abstract: Text generation has become more accessible than ever, and the increasing interest in these systems, especially those using large language models, has spurred an increasing number of related publications. We provide a systematic literature review comprising 244 selected papers between 2017 and 2024. This review categorizes works in text generation into five main tasks: open-ended text generation, summarization, translation, paraphrasing, and question answering. For each task, we review their relevant characteristics, sub-tasks, and specific challenges (e.g., missing datasets for multi-document summarization, coherence in story generation, and complex reasoning for question answering). Additionally, we assess current approaches for evaluating text generation systems and ascertain problems with current metrics. Our investigation shows nine prominent challenges common to all tasks and sub-tasks in recent text generation publications: bias, reasoning, hallucinations, misuse, privacy, interpretability, transparency, datasets, and computing. We provide a detailed analysis of these challenges, their potential solutions, and which gaps still require further engagement from the community. This systematic literature review targets two main audiences: early career researchers in natural language processing looking for an overview of the field and promising research directions, as well as experienced researchers seeking a detailed view of tasks, evaluation methodologies, open challenges, and recent mitigation strategies.



AI-assisted writing is quietly booming in academic journals. Here’s why that’s OK


Lecturer in Bioethics, Monash University & Honorary fellow, Melbourne Law School, Monash University

Disclosure statement

Julian Koplin does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Monash University provides funding as a founding partner of The Conversation AU.

View all partners

If you search Google Scholar for the phrase “as an AI language model”, you’ll find plenty of AI research literature and also some rather suspicious results. For example, one paper on agricultural technology says:

As an AI language model, I don’t have direct access to current research articles or studies. However, I can provide you with an overview of some recent trends and advancements …

Obvious gaffes like this aren’t the only signs that researchers are increasingly turning to generative AI tools when writing up their research. A recent study examined the frequency of certain words in academic writing (such as “commendable”, “meticulously” and “intricate”), and found they became far more common after the launch of ChatGPT – so much so that 1% of all journal articles published in 2023 may have contained AI-generated text.

(Why do AI models overuse these words? There is speculation it’s because they are more common in English as spoken in Nigeria, where key elements of model training often occur.)
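A word-frequency analysis of the kind described above can be sketched in a few lines. The marker words below are the ones the study flagged, but the two toy corpora are invented purely for illustration:

```python
from collections import Counter
import re

# Marker words reported to have become far more common after ChatGPT's launch.
MARKERS = {"commendable", "meticulously", "intricate"}

def marker_rate(texts):
    """Occurrences of marker words per 1,000 tokens across a corpus."""
    tokens = [t for text in texts for t in re.findall(r"[a-z]+", text.lower())]
    counts = Counter(tokens)
    hits = sum(counts[w] for w in MARKERS)
    return 1000 * hits / max(len(tokens), 1)

# Toy corpora standing in for pre- and post-2023 article text (not real data).
pre = ["The results were good and the method was simple."]
post = ["The commendable results were meticulously derived from an intricate design."]

print(marker_rate(pre), marker_rate(post))
```

Comparing the per-1,000-token rate across publication years, rather than raw counts, is what lets a study of this kind control for the growing volume of papers.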

The aforementioned study also looks at preliminary data from 2024, which indicates that AI writing assistance is only becoming more common. Is this a crisis for modern scholarship, or a boon for academic productivity?

Who should take credit for AI writing?

Many people are worried by the use of AI in academic papers. Indeed, the practice has been described as “contaminating” scholarly literature.

Some argue that using AI output amounts to plagiarism. If your ideas are copy-pasted from ChatGPT, it is questionable whether you really deserve credit for them.

But there are important differences between “plagiarising” text authored by humans and text authored by AI. Those who plagiarise humans’ work receive credit for ideas that ought to have gone to the original author.

By contrast, it is debatable whether AI systems like ChatGPT can have ideas, let alone deserve credit for them. An AI tool is more like your phone’s autocomplete function than a human researcher.

The question of bias

Another worry is that AI outputs might be biased in ways that could seep into the scholarly record. Infamously, older language models tended to portray people who are female, black and/or gay in distinctly unflattering ways, compared with people who are male, white and/or straight.

This kind of bias is less pronounced in the current version of ChatGPT.

However, other studies have found a different kind of bias in ChatGPT and other large language models: a tendency to reflect a left-liberal political ideology.

Any such bias could subtly distort scholarly writing produced using these tools.

The hallucination problem

The most serious worry relates to a well-known limitation of generative AI systems: that they often make serious mistakes.

For example, when I asked ChatGPT-4 to generate an ASCII image of a mushroom, it provided me with the following output.

It then confidently told me I could use this image of a “mushroom” for my own purposes.

These kinds of overconfident mistakes have been referred to as “AI hallucinations” and “AI bullshit”. While it is easy to spot that the above ASCII image looks nothing like a mushroom (and quite a bit like a snail), it may be much harder to identify any mistakes ChatGPT makes when surveying scientific literature or describing the state of a philosophical debate.

Unlike (most) humans, AI systems are fundamentally unconcerned with the truth of what they say. If used carelessly, their hallucinations could corrupt the scholarly record.

Should AI-produced text be banned?

One response to the rise of text generators has been to ban them outright. For example, Science – one of the world’s most influential academic journals – disallows any use of AI-generated text.

I see two problems with this approach.

The first problem is a practical one: current tools for detecting AI-generated text are highly unreliable. This includes the detector created by ChatGPT’s own developers, which was taken offline after it was found to have only a 26% accuracy rate (and a 9% false positive rate). Humans also make mistakes when assessing whether something was written by AI.
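Running the reported rates through a base-rate calculation shows why detection-based enforcement struggles. This is a minimal sketch: the pool sizes are hypothetical, and only the 26% and 9% figures come from the article above.

```python
def detector_outcomes(n_ai, n_human, sensitivity, false_positive_rate):
    """Expected counts of correctly flagged AI papers (true positives)
    and wrongly flagged human papers (false positives)."""
    true_positives = sensitivity * n_ai
    false_positives = false_positive_rate * n_human
    return true_positives, false_positives

# Hypothetical pool: 1,000 submissions, of which 100 contain AI-generated text.
tp, fp = detector_outcomes(n_ai=100, n_human=900,
                           sensitivity=0.26, false_positive_rate=0.09)
print(round(tp), round(fp))  # far more false accusations than genuine catches
```

With these illustrative numbers the detector would wrongly accuse roughly three times as many human authors as it catches AI users, which is one reason the tool was retired.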

It is also possible to circumvent AI text detectors. Online communities are actively exploring how to prompt ChatGPT in ways that allow the user to evade detection. Human users can also superficially rewrite AI outputs, effectively scrubbing away the traces of AI (like its overuse of the words “commendable”, “meticulously” and “intricate”).

The second problem is that banning generative AI outright prevents us from realising these technologies’ benefits. Used well, generative AI can boost academic productivity by streamlining the writing process. In this way, it could help further human knowledge. Ideally, we should try to reap these benefits while avoiding the problems.

The problem is poor quality control, not AI

The most serious problem with AI is the risk of introducing unnoticed errors, leading to sloppy scholarship. Instead of banning AI, we should try to ensure that mistaken, implausible or biased claims cannot make it onto the academic record.

After all, humans can also produce writing with serious errors, and mechanisms such as peer review often fail to prevent its publication.

We need to get better at ensuring academic papers are free from serious mistakes, regardless of whether these mistakes are caused by careless use of AI or sloppy human scholarship. Not only is this more achievable than policing AI usage, it will improve the standards of academic research as a whole.

This would be (as ChatGPT might say) a commendable and meticulously intricate solution.

  • Artificial intelligence (AI)
  • Academic journals
  • Academic publishing
  • Hallucinations
  • Scholarly publishing
  • Academic writing
  • Large language models
  • Generative AI


  • Open access
  • Published: 24 May 2024

Cost-effectiveness of differentiated care models that incorporate economic strengthening for HIV antiretroviral therapy adherence: a systematic review

  • Annie Liang 1 ,
  • Marta Wilson-Barthes   ORCID: orcid.org/0000-0002-9845-7142 2 &
  • Omar Galárraga   ORCID: orcid.org/0000-0002-9985-9266 3  

Cost Effectiveness and Resource Allocation volume 22, Article number: 46 (2024)


There is some evidence that differentiated service delivery (DSD) models, which use a client-centered approach to simplify and increase access to care, improve clinical outcomes among people living with HIV (PLHIV) in high HIV prevalence countries. Integrating economic strengthening tools (e.g., microcredit, cash transfers, food assistance) within DSD models can help address the poverty-related barriers to HIV antiretroviral therapy (ART). Yet there is minimal evidence of the cost-effectiveness of these types of multilevel care delivery models, which potentially prohibits their wider implementation.

Using a qualitative systematic review, this article synthesizes the literature surrounding the cost-effectiveness of differentiated service delivery models that employ economic strengthening initiatives to improve HIV treatment adherence in low- and middle-income countries. We searched three academic databases for randomized controlled trials and observational studies published from January 2000 through March 2024 in Sub-Saharan Africa. The quality of each study was scored using a validated appraisal system.

Eighty-nine full texts were reviewed and 3 met all eligibility criteria. Two of the three included articles were specific to adolescents living with HIV. Economic strengthening opportunities varied by care model, and included developmental savings accounts, microenterprise workshops, and cash and non-cash conditional incentives. The main drivers of programmatic and per-patient costs were ART medications, CD4 cell count testing, and economic strengthening activities.

All economic evaluations in this review found that including economic strengthening as part of comprehensive differentiated service delivery was cost-effective at a willingness to pay threshold of at least 2 times the national per capita gross domestic product. Two of the three studies in this review focused on adolescents, suggesting that these types of care models may be especially cost-effective for youth entering adulthood. All studies were from the provider perspective, indicating that additional evidence is needed to inform the potential cost-savings of DSD and economic strengthening interventions to patients and society. Randomized trials testing the effectiveness of DSD models that integrate economic strengthening should place greater emphasis on costing these types of programs to inform the potential for bringing these types of multilevel interventions to scale.

Introduction

Responding to the World Health Organization’s Treat All Policy, low- and middle-income countries (LMICs) are increasingly using differentiated service delivery (DSD) models as a way to rapidly scale up access to life-saving antiretroviral therapy for people living with HIV (PLHIV) [ 1 ]. According to the International AIDS Society, “differentiated service delivery (DSD), previously referred to as differentiated care, is a client-centred approach that simplifies and adapts HIV services across the cascade to reflect the preferences, expectations and needs of people living with and affected by HIV, while reducing unnecessary burdens on the health system” [ 2 ]. DSD models aim to make care “patient-centered” while reducing logistical and administrative burden(s) on traditional, resource-constrained care facilities [ 1 ]. These models have shown to be effective for increasing treatment adherence, but most do not address the persistent poverty-related barriers to HIV care engagement (e.g., long and costly distances to facilities, food insecurity, HIV stigma). A recent systematic review from 20 LMICs found that economic strengthening interventions such as conditional cash transfers, microcredit, and transportation assistance can improve medication adherence and care-seeking behaviors among persons living with HIV, with more moderate impacts on clinical outcomes [ 3 ]. Two other systematic reviews found that, on their own, differentiated HIV service delivery approaches in Sub-Saharan Africa (SSA) generally cost the same as or less than standard HIV care in terms of the cost per patient per year from a patient perspective [ 1 , 4 ]. For providers and health systems, the available economic evidence suggests that DSD models in SSA are not cost saving compared to more traditional facility-based care models [ 4 ]. 
A 2017 modeling study found that differentiated service delivery models aiming to increase access to ART in SSA could yield up to a 17.5% reduction in health system costs and health workforce requirements over 5 years [ 5 ]. It remains to be seen whether differentiated service delivery models that additionally aim to address poverty-related barriers to care (e.g., food insecurity, long and costly distances to facilities, restricted access to income-generating opportunities) are cost-effective for patients, providers, or society as a whole [ 6 , 7 ].

The purpose of this systematic review is to (i) summarize the current evidence surrounding the cost and cost-effectiveness of differentiated HIV service delivery models that include economic strengthening compared to differentiated service delivery without economic strengthening and to standard HIV care, and (ii) offer a conceptual framework that can help future researchers understand the key components influencing the incremental cost-effectiveness of these holistic models for patients and providers.

Eligibility criteria

Our review focused on studies of the cost-effectiveness of differentiated HIV care models that incorporated at least one economic strengthening component. Articles were excluded if they were not a randomized controlled trial or observational study, did not include both an economic strengthening and a differentiated care component for promoting ART adherence, or did not report a standard metric for assessing cost-effectiveness of an ART adherence intervention. Economic strengthening included any activity that aimed to generate individual- or household-level income or wealth, such as microfinance groups, social protection programs, savings accounts, or training in financial literacy or entrepreneurship. Articles that were not peer reviewed, published in English, or conducted in SSA were also excluded. There were no restrictions on the study population in terms of age, gender, or SSA region. During the abstract round of screening, if a study fit all other criteria (differentiated service delivery in Sub-Saharan Africa with economic strengthening) but did not mention whether a cost analysis was performed, it was included for full-text screening to account for ancillary cost-effectiveness analyses.

Information sources & search strategy

We conducted a literature search of articles in PubMed (National Center for Biotechnology Information, Bethesda, Maryland) and EconLit (American Economic Association, Nashville, Tennessee), supplemented by an Internet search of Google Scholar. Prior reviews indicate that DSD interventions have been implemented since the 2000s. Thus, we searched articles published from January 1, 2000 through March 31, 2024 using the terms “HIV or AIDS”, “antiretroviral therapy”, “economic strengthening”, “differentiated service delivery”, “Sub-Saharan Africa”, “cost analysis”, “cost-effectiveness” and “cost-savings”. Literature searched in PubMed used MeSH (Medical Subject Headings) controlled vocabulary to select key search terms. The full search strategy implemented for each database is provided in Additional File 1 .

Selection process

Initial search results were reviewed by one reviewer (AL). Abstracts and main texts of articles that met all eligibility criteria were double reviewed (AL and MWB), with a third reviewer consulted when necessary (OG).

Data collection process

A data extraction tool was developed to capture the following indicators: study context (e.g., country and region of study), design, population, DSD component(s), economic strengthening activity, costing perspective, main drivers of intervention and per-patient costs, cost-effectiveness metric (e.g., incremental cost-effectiveness ratio), willingness-to-pay (WTP) threshold, and a binary indicator of whether the intervention was shown to be cost-effective (yes/no). Due to significant heterogeneity across studies in terms of effectiveness and cost-effectiveness outcomes, a meta-analysis was not performed. Search findings were reported following the Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) guidelines [8].

Quality assessment

Full texts that were standard health economic evaluations were assessed using the validated Quality of Health Economic Studies (QHES) appraisal system developed by Chiou et al. [9, 10]. The quality of each full text article was assessed against the sixteen weighted criteria listed in Additional File 2. Weighted scores for each criterion were summed to generate an overall quality score ranging from 0 (extremely poor quality) to 100 (excellent quality). Four quality categories (0–25, 25.1–50, 50.1–75, and 75.1–100) were used, with scores > 75 indicating high-quality studies [10]. Systematic reviews, micro-costing studies, and qualitative analyses were not scored given our focus on randomized controlled trials (RCTs) and observational studies.
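The QHES mechanics described above (summing the weights of satisfied criteria, then banding the total) can be sketched as follows. This is a minimal illustration only: the weights in the example are hypothetical placeholders, not the actual sixteen QHES criterion weights, which are listed in Additional File 2.

```python
def qhes_score(weights, satisfied):
    """Sum the weights of the satisfied criteria to get a 0-100 quality score."""
    return sum(w for w, ok in zip(weights, satisfied) if ok)

def quality_band(score):
    """Map a QHES score to the four quality bands used in the review;
    scores above 75 indicate high-quality studies."""
    if score <= 25:
        return "0-25"
    if score <= 50:
        return "25.1-50"
    if score <= 75:
        return "50.1-75"
    return "75.1-100 (high quality)"

# Hypothetical four-criterion instrument: a study satisfying the first,
# third, and fourth criteria scores 10 + 30 + 40 = 80, a high-quality band.
example_score = qhes_score([10, 20, 30, 40], [True, False, True, True])
```

For instance, the mean score of 80.73 reported below for the eleven appraised articles falls in the 75.1–100 (high quality) band.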

Conceptual framework

Drawing on the papers included in the review, we adapted an existing conceptual framework to synthesize the key components that could be understood to drive the incremental cost-effectiveness of HIV differentiated service delivery models for SSA health systems.

Identified articles

Figure 1 documents the flow of articles through the review and reasons for exclusion. Most of the 89 articles were peer-reviewed journal articles (93.2%), followed by preprints (2.2%) and scientific reports (2.2%). Of the 57 articles that included a DSD intervention, the most common differentiated service delivery model was community-based ART support and adherence counseling. Of the 40 articles that included an economic strengthening (ES) component, conditional economic (cash and non-cash) incentives and microfinance engagement were the most common ES activities. The most common reasons for exclusion were no economic strengthening component and no cost-effectiveness analysis. Eleven of the 89 reviewed articles were traditional cost-effectiveness analyses and thus were appraised for quality using the Chiou grading system; those that were not appraised using the grading system included costing, budget impact, or other types of non-cost-effectiveness evaluations. The 11 articles had an average quality score of 80.73 (out of 100), and all satisfied at least 11 of the 16 grading criteria (Additional File 2). Of the 89 full text articles that were assessed, three papers met all eligibility criteria and were included in this narrative review.

Figure 1. Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) flow diagram

Background and summary of included articles

All 3 studies scored above 75 (out of 100) on the QHES appraisal system, indicating high-quality studies [10]. Tozan et al. and Ekwunife et al. each scored 85 on the QHES, satisfying the same criteria. Stevens et al. scored 100 on the QHES, satisfying all criteria. Only Stevens et al. clearly presented the economic model, study methods and analysis, and the components of the numerator and denominator, and justified the choice of economic model, main assumptions, and limitations of the study. Although all three included studies were of high quality according to the QHES, each provided minimal rationale for its choice of economic model, which may hinder replicability.

Details of the three included studies are summarized in Table 1. In brief, Tozan et al. [11] estimated the incremental costs of providing additional counseling sessions for HIV and ART adherence as well as an incentivized savings account and workshops on asset building to adolescents living with HIV in Uganda. Incremental intervention costs were compared to the cost of providing routine HIV care and social support alone. Ekwunife et al. [12] estimated the cost-effectiveness of a differentiated care model for young adults living with HIV in Nigeria that included motivational interview sessions and economic incentives based on viral load over 12 months. Stevens et al. [13] modelled the cost-effectiveness of scaling up a combination care package in Swaziland, which included SMS reminders for ART adherence, counseling and health commodities for ART adherence (e.g., pillboxes and informational materials), and non-cash financial incentives for adults who newly tested positive for HIV. All included studies utilized a facility-based DSD model. For each study, the additional cost for a given intervention compared to the status quo was $970 [95% CI: $508–$10,275] per additional patient virally suppressed [11], $1,419 per additional patient with undetected viral load [12], and $3,560 per additional quality-adjusted life year (QALY) gained [13].
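The per-patient figures above are incremental cost-effectiveness ratios (ICERs): the extra cost of the intervention relative to the status quo, divided by its extra effect. As a minimal sketch (all numbers in the example are hypothetical and are not drawn from the included studies):

```python
def icer(cost_new, cost_comparator, effect_new, effect_comparator):
    """Incremental cost-effectiveness ratio: additional cost per additional
    unit of effect (e.g., per patient virally suppressed, or per QALY)."""
    delta_cost = cost_new - cost_comparator
    delta_effect = effect_new - effect_comparator
    if delta_effect == 0:
        raise ValueError("no incremental effect; the ICER is undefined")
    return delta_cost / delta_effect

# Hypothetical: an intervention costing $150,000 versus $100,000 for
# standard care, suppressing 54 versus 40 patients, costs $50,000 / 14
# extra suppressions, i.e. roughly $3,571 per additional suppression.
example_icer = icer(150_000, 100_000, 54, 40)
```

The same ratio underlies all three studies; only the effect denominator differs (patients suppressed, patients with undetected viral load, or QALYs gained).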

Cost-effectiveness of differentiated care with economic strengthening

Table 2 presents the cost-effectiveness outcomes from each included study. All analyses used a provider perspective.

The threshold at which a given intervention was deemed cost-effective varied across studies. Tozan et al. did not report a pre-specified willingness-to-pay threshold [11]. Ekwunife et al. specified a willingness-to-pay threshold of $1,137 per additional QALY gained by the intervention [12]. Stevens et al. reported a threshold of $9,840 per additional QALY gained (3 times Swaziland’s 2018 GDP per capita); the Link4Health combination package yielded an incremental cost-effectiveness ratio (ICER) of $3,560 per additional QALY gained from the health sector perspective, which the authors deemed cost-effective at that threshold [13]. The cost-effectiveness analysis by Ekwunife et al. [12] found that combining conditional economic incentives and motivational interviewing was not cost-effective compared to standard care at the authors’ pre-defined willingness-to-pay threshold of 0.51 times Nigeria’s per capita GDP; the intervention was, however, cost-effective at 1 times Nigeria’s per capita GDP in 2021 ($2,027.80). Tozan et al. [11] did not report the cost-effectiveness of the combined adherence mentoring and incentivized financial savings account intervention in relation to a pre-defined cost-effectiveness threshold; however, the intervention cost less than 2 times Uganda’s per capita GDP ($847.30 in 2021). The respective interventions analyzed by Ekwunife et al. [12] and Tozan et al. [11] were thus cost-effective (compared to standard care) assuming the World Health Organization’s willingness-to-pay thresholds of 2 to 3 times the national per capita GDP in the trial year. Across the three studies, the main drivers of programmatic and per-patient costs were ART treatment costs, CD4 cell count testing, and economic strengthening activities, including the costs to provide non-financial incentives.
In the Nigerian cluster-randomized trial [12], the largest cost drivers for the intervention were viral load tests, CD4 count testing, and patient transportation. Financial incentives and point-of-care CD4 testing were the main drivers of the observed cost differences in the analysis of the Link4Health cluster-RCT [13]. For Tozan et al. [11], intervention activities including health education sessions, microenterprise workshops, and savings accounts accounted for the largest difference in costs between intervention and standard care. All interventions were more expensive than standard care in terms of total cost per patient.
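The decision rule applied across these studies, comparing the ICER against a willingness-to-pay threshold expressed as a multiple of per-capita GDP, can be sketched as follows. The worked example uses the Stevens et al. figures quoted above ($3,560 per QALY against a $9,840 threshold, i.e. 3 times a per-capita GDP of $3,280); the function itself is an illustrative simplification, not any study's actual model.

```python
def cost_effective(icer, gdp_per_capita, gdp_multiplier):
    """Deem an intervention cost-effective if its ICER does not exceed the
    willingness-to-pay threshold, expressed here (as in the reviewed
    studies) as a multiple of national per-capita GDP."""
    return icer <= gdp_multiplier * gdp_per_capita

# $3,560/QALY is below a 3x GDP threshold of 3 * 3280 = $9,840,
# but would exceed a 1x GDP threshold of $3,280.
at_3x = cost_effective(3560, 3280, 3)
at_1x = cost_effective(3560, 3280, 1)
```

The same structure explains the Ekwunife et al. result: the intervention fails at a 0.51x GDP threshold but passes at 1x GDP.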

Synthesizing framework

Based on the three papers in this review, we adapted an existing conceptual model originally developed by Kahn and colleagues [14] to illustrate – from a health system perspective – the key components that can be hypothesized to influence the cost-effectiveness of differentiated service delivery models that incorporate economic strengthening (Fig. 2). Increasing patient access to antiretroviral therapy immediately following diagnosis and sustaining access over time (e.g., by offering community- or home-based care visits; accelerating ART initiation following point-of-care CD4 cell count testing) can be expected to add costs to the health system via an increased demand for higher drug quantities, follow-up tests, and personnel time. Similarly, providing economic strengthening opportunities that address known poverty-related barriers to ART adherence will almost always increase the incremental costs of these care delivery approaches if the initiatives are not self-sustaining. For example, providing economic incentives conditional on achieving a viral load below an assay’s lower detection limit will incur additional costs to health ministries who wish to offer this incentive scheme as part of a government social protection program. However, economic strengthening interventions have the potential to be cost-neutral to health systems if they can generate economic growth on their own, as in the case of saving and lending microfinance groups [15, 16] or no-fee savings accounts [11]. Averting new HIV infections and decreasing HIV-related morbidity by achieving an undetectable viral load via ART leads to substantial reductions in both disability-adjusted life years and treatment costs. However, as individuals live longer due to ART, they may develop other chronic diseases that incur additional costs to themselves and the health system [17].
Thus, differentiated service delivery models that integrate economic strengthening and treatment for co-occurring conditions have the potential to further reduce disease burden without substantially increasing treatment costs.

Figure 2. Conceptual Framework. The conceptual framework was adapted from an existing conceptual model developed by Kahn et al. [14]. The framework illustrates the key components that can be hypothesized to influence the cost-effectiveness of differentiated HIV care approaches that incorporate economic strengthening activities, from a health system perspective

All elements of this synthesizing conceptual framework are drawn from the authors’ analyses of the supporting literature. Further research on the cost-effectiveness impact of these mechanisms is required to support their validity.

This systematic narrative review found one of three studies testing a differentiated service delivery model that includes economic strengthening to be cost-effective for providers at the authors’ pre-determined WTP threshold. All three included interventions were cost-effective at the WHO willingness-to-pay threshold of at least 2 times a given country’s per capita GDP. Sensitivity analyses [11, 12] and modeling projections [13] in these papers suggest that the cost-effectiveness of these types of multilevel interventions would increase as these care models are brought to scale. Ekwunife et al. [12] found that if CD4+ count tests were performed three times a year rather than four, the intervention would become cost-effective. Thus, only minimal adjustments to the differentiated service delivery and ES components could increase the interventions’ cost-effectiveness.

Two of three studies in this review were among adolescents living with HIV. This suggests that cultivating routine medication taking behaviors and establishing positive economic skills (e.g., having a savings account, managing microcredit) may be especially important for lower income adolescents living with HIV who can carry these practices into adulthood. Additionally, two recent feasibility studies did not meet inclusion criteria (i.e., being an RCT or observational study) but were initially screened in this review. Findings from these studies further support the potential of integrating DSD with economic strengthening for improving HIV treatment outcomes along the care continuum (testing, linkage to care, and ART adherence) [18, 19].

The World Health Organization’s Treat-All guidance recommends CD4 testing before initiating antiretroviral therapy (ART) and recommends routine viral load monitoring (over CD4 cell count monitoring) for patients on ART [20, 21]. Viral load monitoring remains the gold standard for monitoring ART adherence and viral suppression among persons living with diagnosed HIV, even in settings where health systems face financial and resource constraints [22, 23, 24]. Thus, given that the focus of our review is on cost-effectiveness of models for ART adherence among persons with diagnosed HIV, our findings can inform scale-up of DSD models that support the most widely used HIV treatment outcomes.

Recent protocol studies reveal that there remains space in the literature to continue to examine DSD with economic strengthening interventions as an effective and cost-effective method of enhancing ART adherence [25]. For future research and policymaking, these findings suggest there may be potential for implementing scaled-up DSD with economic strengthening interventions to enhance ART adherence among adolescents and young adults specifically.

Limitations of this systematic review stemmed from the large variability in population, context, and target outcomes across studies, which limited our ability to calculate an overall combined economic effect of these interventions. Additionally, all of the cost-effectiveness analyses in this review calculated cost according to the provider perspective, which limits our ability to quantify the potential economic impact of these combination differentiated care models on patients or society. We aimed to mitigate potential reviewer bias in the inclusion/exclusion and quality assessment of studies by using a standardized data extraction tool.

Despite calls for novel cost-effectiveness data of holistic differentiated care models in low- and middle-income countries [1, 6, 26, 27, 28], the evidence base surrounding the scale-up potential of DSD interventions and economic strengthening remains sparse. To our knowledge, this is the first review to synthesize the available evidence of poverty-addressing DSD models from a health economics perspective. This evidence is critical for policymakers and health care advocates working to address the economic determinants of HIV treatment adherence with limited resources.

This brief systematic review demonstrated that including economic strengthening tools as part of differentiated service delivery models is effective and largely cost-effective at common thresholds compared to traditional HIV care. Modelling projections suggest that scaling these types of multilevel interventions may improve their cost-effectiveness in the short and medium term. Future research should consider the cost-effectiveness and cost-savings of these comprehensive HIV care models from a patient and societal perspective.

Data availability

Data sharing is not applicable to this article as no new datasets were generated or analyzed during the current study.

Abbreviations

ALHIV: Adolescents Living with HIV

ART: Antiretroviral Therapy

DSD: Differentiated Service Delivery

ES: Economic Strengthening

GDP: Gross Domestic Product

ICER: Incremental Cost-Effectiveness Ratio

IGA: Income-Generating Activity

LMIC: Low- or Middle-Income Country

MI: Motivational Interviewing

PLHIV: People Living with HIV

PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses

QALY: Quality-Adjusted Life Year

RCT: Randomized Controlled Trial

SSA: Sub-Saharan Africa

WTP: Willingness-to-Pay

References

1. Roy M, Bolton Moore C, Sikazwe I, Holmes CB. A review of differentiated service delivery for HIV treatment: effectiveness, mechanisms, targeting, and scale. Curr HIV/AIDS Rep. 2019;16(4):324–34. https://doi.org/10.1007/s11904-019-00454-5.

2. International AIDS Society. Differentiated service delivery. 2024. https://www.iasociety.org/ias-programme/differentiated-service-delivery. Accessed April 24, 2024.

3. Swann M. Economic strengthening for retention in HIV care and adherence to antiretroviral therapy: a review of the evidence. AIDS Care. 2018;30(3):99–125. https://doi.org/10.1080/09540121.2018.1479030.

4. Rosen S, Nichols B, Guthrie T, Benade M, Kuchukhidze S, Long L. Do differentiated service delivery models for HIV treatment in sub-Saharan Africa save money? Synthesis of evidence from field studies conducted in sub-Saharan Africa in 2017–2019. Gates Open Res. 2022;5:177. https://doi.org/10.12688/gatesopenres.13458.2.

5. Barker C, Dutta A, Klein K. Can differentiated care models solve the crisis in HIV treatment financing? Analysis of prospects for 38 countries in sub-Saharan Africa. J Int AIDS Soc. 2017;20(Suppl 4):21648. https://doi.org/10.7448/IAS.20.5.21648.

6. Nachega JB, Adetokunboh O, Uthman OA, Knowlton A, Altice FL, Schechter M, et al. Community-based interventions to improve and sustain antiretroviral therapy adherence, retention in HIV care and clinical outcomes in low- and middle-income countries for achieving the UNAIDS 90-90-90 targets. Curr HIV/AIDS Rep. 2016;13:241–55. https://doi.org/10.1007/s11904-016-0325-9.

7. Munyayi FK, van Wyk B, Mayman Y. Interventions to improve treatment outcomes among adolescents on antiretroviral therapy with unsuppressed viral loads: a systematic review. IJERPH. 2022;19(7):3940. https://doi.org/10.3390/ijerph19073940.

8. Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2021;372:n71. https://doi.org/10.1136/bmj.n71.

9. Chiou CF, Hay JW, Wallace JF, Bloom BS, Neumann PJ, Sullivan SD, et al. Development and validation of a grading system for the quality of cost-effectiveness studies. Med Care. 2003;41(1):32–44. https://doi.org/10.1097/00005650-200301000-00007.

10. Spiegel BM, Targownik LE, Kanwal F, et al. The quality of published health economic analyses in digestive diseases: a systematic review and quantitative appraisal. Gastroenterology. 2004;127(2):403–11. https://doi.org/10.1053/j.gastro.2004.04.020.

11. Tozan Y, Capasso A, Sun S, Neilands TB, Damulira C, Namuwonge F, et al. The efficacy and cost-effectiveness of a family-based economic empowerment intervention (Suubi + Adherence) on suppression of HIV viral loads among adolescents living with HIV: results from a cluster randomized controlled trial in southern Uganda. JIAS. 2021;24(6):e25752. https://doi.org/10.1002/jia2.25752.

12. Ekwunife OI, Ofomata CJ, Okafor CE, Anetoh MU, Kalu SO, Ele PU, et al. Cost-effectiveness and feasibility of conditional economic incentives and motivational interviewing to improve HIV health outcomes of adolescents living with HIV in Anambra State, Nigeria. BMC Health Serv Res. 2021;21:685. https://doi.org/10.1186/s12913-021-06718-4.

13. Stevens ER, Li L, Nucifora KA, Zhou Q, McNairy ML, Gachuhi A, et al. Cost-effectiveness of a combination strategy to enhance the HIV care continuum in Swaziland: Link4Health. PLoS ONE. 2018;13(9):e0204245. https://doi.org/10.1371/journal.pone.0204245.

14. Kahn JG, Marseille EA, Bennett R, Williams BG, Granich R. Cost-effectiveness of antiretroviral therapy for prevention. Curr HIV Res. 2011;9(6):405–15. https://doi.org/10.2174/157016211798038542.

15. Genberg BL, Wachira J, Steingrimsson JA, Pastakia S, Tran DNT, Said JA, et al. Integrated community-based HIV and non-communicable disease care within microfinance groups in Kenya: study protocol for the Harambee cluster randomised trial. BMJ Open. 2021;11(5):e042662. https://doi.org/10.1136/bmjopen-2020-042662.

16. Pastakia SD, Manyara SM, Vedanthan R, Kamano JH, Menya D, Andama B, et al. Impact of Bridging Income Generation with Group Integrated Care (BIGPIC) on hypertension and diabetes in rural western Kenya. J Gen Intern Med. 2017;32(5):540–8. https://doi.org/10.1007/s11606-016-3918-5.

17. Negin J, Bärnighausen T, Lundgren JD, Mills EJ. Aging with HIV in Africa: the challenges of living longer. AIDS. 2012;26:S1–5. https://doi.org/10.1097/QAD.0b013e3283560f54.

18. Kim HY, Inghels M, Mathenjwa T, et al. The impact of a conditional financial incentive on linkage to HIV care: findings from the HITS cluster randomized clinical trial in rural South Africa. Preprint. medRxiv. 2024.03.15.24304278. Published March 18, 2024. https://doi.org/10.1101/2024.03.15.24304278.

19. Kibel M, Nyambura M, Embleton L, et al. Enabling adherence to treatment (EAT): a pilot study of a combination intervention to improve HIV treatment outcomes among street-connected individuals in western Kenya. BMC Health Serv Res. 2023;23(1):1331. https://doi.org/10.1186/s12913-023-10215-1.

20. World Health Organization. Guideline on when to start antiretroviral therapy and on pre-exposure prophylaxis for HIV. Published September 1, 2015. Accessed April 24, 2024. https://www.who.int/publications/i/item/9789241509565.

21. Brazier E, Tymejczyk O, Zaniewski E, et al. Effects of national adoption of Treat-All guidelines on pre-antiretroviral therapy (ART) CD4 testing and viral load monitoring after ART initiation: a regression discontinuity analysis. Clin Infect Dis. 2021;73(6):e1273–81. https://doi.org/10.1093/cid/ciab222.

22. Pham MD, Nguyen HV, Anderson D, Crowe S, Luchters S. Viral load monitoring for people living with HIV in the era of test and treat: progress made and challenges ahead - a systematic review. BMC Public Health. 2022;22(1):1203. https://doi.org/10.1186/s12889-022-13504-2.

23. Okoboi S, Musaazi J, King R, et al. Adherence monitoring methods to measure virological failure in people living with HIV on long-term antiretroviral therapy in Uganda. PLOS Glob Public Health. 2022;2(12):e0000569. https://doi.org/10.1371/journal.pgph.0000569.

24. World Health Organization. Updated recommendations on HIV prevention, infant diagnosis, antiretroviral initiation and monitoring. Published March 17, 2021. Accessed April 24, 2024. https://www.who.int/publications/i/item/9789240022232.

25. van Heerden A, Szpiro A, Ntinga X, Celum C, van Rooyen H, Essack Z, Barnabas R. A sequential multiple assignment randomized trial of scalable interventions for ART delivery in South Africa: the SMART ART study. Trials. 2023;24(1):32. https://doi.org/10.1186/s13063-022-07025-x.

26. Decroo T, Rasschaert F, Telfer B, Remartinez D, Laga M, Ford N. Community-based antiretroviral therapy programs can overcome barriers to retention of patients and decongest health services in sub-Saharan Africa: a systematic review. Int Health. 2013;5(3):169–79. https://doi.org/10.1093/inthealth/iht016.

27. Chaiyachati KH, Ogbuoji O, Price M, Suthar AB, Negussie EK, Bärnighausen T. Interventions to improve adherence to antiretroviral therapy: a rapid systematic review. AIDS. 2014;28:187–204. https://doi.org/10.1097/QAD.0000000000000252.

28. Creese A, Floyd K, Alban A, Guinness L. Cost-effectiveness of HIV/AIDS interventions in Africa: a systematic review of the evidence. Lancet. 2002;359(9318):1635–42. https://doi.org/10.1016/S0140-6736(02)08595-1.


Acknowledgements

We thank the authors of the original source papers, whose work we drew on considerably.

Research reported in this publication was supported by the National Institute of Mental Health of the National Institutes of Health under award number R01MH118075, and by the National Institute of Allergy and Infectious Diseases of the National Institutes of Health through the Providence/Boston Center for AIDS Research (CFAR) (award number P30AI042853). One hundred percent of this research was financed with Federal money. The design of the study and the collection, analysis, and interpretation of data and writing of the manuscript are solely the responsibility of the authors and do not necessarily represent the official views of the National Institutes of Health.

Author information

Authors and affiliations

Brown University School of Public Health, Providence, RI, USA

Annie Liang

Department of Epidemiology, Brown University School of Public Health, Providence, RI, USA

Marta Wilson-Barthes

Department of Health Services, Policy and Practice; and International Health Institute, Brown University School of Public Health, 121 South Main Street, Box G-S121-2, Providence, RI, USA

Omar Galárraga


Contributions

AL and OG conceived and designed the work. AL led the analysis and interpretation of the data, and drafted the work. MWB contributed to the analysis and interpretation of data. MWB and OG substantively revised the work. All authors approved the submitted version of the manuscript, and agree to be personally accountable for their own contributions and to ensure that questions related to the accuracy or integrity of any part of the work, even ones in which the author was not personally involved, are appropriately investigated, resolved, and the resolution documented in the literature.

Corresponding author

Correspondence to Omar Galárraga .

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Additional file 1: Search syntax

Additional file 2: Quality assessment of full text articles that were standard health economic evaluations

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article.

Liang, A., Wilson-Barthes, M. & Galárraga, O. Cost-effectiveness of differentiated care models that incorporate economic strengthening for HIV antiretroviral therapy adherence: a systematic review. Cost Eff Resour Alloc 22 , 46 (2024). https://doi.org/10.1186/s12962-024-00557-w


Received : 13 April 2023

Accepted : 19 May 2024

Published : 24 May 2024

DOI : https://doi.org/10.1186/s12962-024-00557-w


Keywords

  • Differentiated care
  • Differentiated service delivery
  • Economic strengthening
  • Microfinance
  • Conditional cash transfer
  • Cost-effectiveness
  • Antiretroviral therapy

Cost Effectiveness and Resource Allocation

ISSN: 1478-7547

lit review tools

COMMENTS

  1. Litmaps

    As a full-time researcher, Litmaps has become an indispensable tool in my arsenal. The Seed Maps and Discover features of Litmaps have transformed my literature review process, streamlining the identification of key citations while revealing previously overlooked relevant literature, ensuring no crucial connection goes unnoticed.

  2. How to Write a Literature Review

    Example literature review #4: "Learners' Listening Comprehension Difficulties in English Language Learning: A Literature Review ... Tip AI tools like ChatGPT can be effectively used to brainstorm ideas and create an outline for your literature review. However, trying to pass off AI-generated text as your own work is a serious offense. ...

  3. Ace your research with these 5 literature review tools

    3. Zotero. A big part of many literature review workflows, Zotero is a free, open-source tool for managing citations that works as a plug-in on your browser. It helps you gather the information you need, cite your sources, lets you attach PDFs, notes, and images to your citations, and create bibliographies.

  4. ATLAS.ti

    Finalize your literature review faster with comfort. ATLAS.ti makes it easy to manage, organize, and analyze articles, PDFs, excerpts, and more for your projects. Conduct a deep systematic literature review and get the insights you need with a comprehensive toolset built specifically for your research projects.

  5. Literature Reviews and Synthesis Tools

    2. Scope the Literature. A "scoping search" investigates the breadth and/or depth of the initial question or may identify a gap in the literature. Eligible studies may be located by searching in: Background sources (books, point-of-care tools) Article databases; Trial registries; Grey literature; Cited references; Reference lists

  6. Tools

    Free, open-source tool that "helps you upload and organize the results of a literature search for a systematic review. It also makes it possible for your team to screen, organize, and manipulate all of your abstracts in one place." -From Center for Evidence Synthesis in Health. SRDR Plus (Systematic Review Data Repository: Plus) An open-source ...

  7. PDF Conducting a Literature Review

    What is a Literature Review 2. Tools to help with the various stages of your review -Searching -Evaluating -Analysing and Interpreting -Writing -Publishing 3. Additional Resources ... Literature Review A literature review is a survey of scholarly sources that provides an overview of a particular topic. Literature reviews are a ...

  8. Writing a Literature Review

    A literature review is a document or section of a document that collects key sources on a topic and discusses those sources in conversation with each other (also called synthesis ). The lit review is an important genre in many disciplines, not just literature (i.e., the study of works of literature such as novels and plays).

  9. Lit Review Tools

    Writing a Literature Review. Unless you have a reason to present the literature chronologically (to show development over time, perhaps), the preferred method for organizing your literature is thematically. Step 1: Identify your themes. Step 2: Identify the articles that address those themes. Step 3: Identify the similarities and differences ...

  10. Literature review: your definitive guide

    Learn top tips to write an academic literature review, including topic searches and working with co-authors, and see review examples, templates and more. ... or head straight to Endnote's Manuscript Matcher or Journal Citation Reports to try out the best tools for the job. 5. Discover literature review examples and templates. There are a few ...

  11. START HERE

    Steps to Completing a Literature Review. Find. Conduct searches for relevant information. Evaluate. Critically review your sources. Summarize. Determine the most important and relevant information from each source, theories, findings, etc. Synthesize. Create a synthesis matrix to find connections between resources, and ensure your sources ...
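    The "Synthesize" step above recommends building a synthesis matrix: a cross-tabulation of sources against themes that makes connections between resources visible at a glance. As a minimal sketch (the source names and themes below are hypothetical placeholders, not from any of the guides listed here):

    ```python
    # A synthesis matrix cross-tabulates sources against the themes each one
    # addresses, so overlaps and gaps between resources are easy to spot.
    sources = {
        "Smith 2020": {"methodology", "limitations"},
        "Lee 2021": {"methodology", "findings"},
        "Garcia 2022": {"findings", "limitations"},
    }
    # Collect every theme mentioned by any source, in a stable order.
    themes = sorted({t for ts in sources.values() for t in ts})

    def synthesis_matrix(sources, themes):
        """Return one row per source: its name plus an X/- mark per theme."""
        rows = []
        for name, covered in sources.items():
            rows.append([name] + ["X" if t in covered else "-" for t in themes])
        return rows

    header = ["Source"] + themes
    for row in [header] + synthesis_matrix(sources, themes):
        print("  ".join(f"{cell:<12}" for cell in row))
    ```

    In practice the same grid is usually kept in a spreadsheet; the point is only that each cell answers "does this source speak to this theme?", which is what lets you group the literature thematically rather than source by source.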

  12. PDF Writing an Effective Literature Review

    literature review in academia, at this point it might be useful to state what a literature review is not, before looking at what it is. It is not: § A list or annotated bibliography of the sources you have read § A simple summary of those sources or paraphrasing of the conclusions § Confined to description of the studies and their findings

  13. 7 open source tools to make literature reviews easy

    2. Firefox. Linux distributions generally come with a free web browser, and the most popular is Firefox. Two Firefox plugins that are particularly useful for literature reviews are Unpaywall and Zotero. Keep reading to learn why. ...

  14. 10 Best Literature Review Tools for Researchers

    6. Consensus. Enables researchers to work together, annotate, and discuss research papers in real time, fostering team collaboration and knowledge sharing. 7. RAx. Enables researchers to perform efficient literature search and analysis, aiding in identifying relevant articles, saving time, and improving the quality of research. ...

  15. Literature Review Software MAXQDA

    MAXQDA, the all-in-one literature review software. MAXQDA is the best choice for a comprehensive literature review. It works with a wide range of data types and offers powerful tools for literature review, such as reference management and qualitative, vocabulary, and text analysis tools, and more.

  16. Free Literature Review Generator For College Students

    Our Literature Review Generator is an AI-powered tool that streamlines and simplifies the creation of literature reviews by automatically collecting, analyzing, summarizing, and synthesizing all the relevant academic sources on a specific topic within the parameters you define. It saves you additional time by highlighting themes, trends, and ...

  17. Literature Review Tools

    Write a Literature Review; Literature Review Tools; Developing a Research Question; Database Searching; Documenting Your Search and Findings; Discipline-Specific Literature Reviews.

  18. Tools for Academic Writing: Literature Review

    Writing the Literature Review. Find a focus. Just like a term paper, a literature review is organized around ideas, not just sources. Use the research question you developed in planning your review and the issues or themes that connect your sources together to create a thesis statement. Yes, literature reviews have thesis statements!

  19. AI-Powered Research and Literature Review Tool

    Simplify literature reviews and find answers to your questions about any research paper seamlessly. Discover, read, and understand research papers effortlessly with Enago Read, your AI-powered companion for academic research. ... Enago Read is an AI assistant that speeds up the literature review process, offering summaries and key insights to ...

  20. Research Guides: AI-Based Literature Review Tools: Home

    AI-POWERED RESEARCH ASSISTANT - finding papers, filtering study types, automating research flow, brainstorming, summarizing and more. "Elicit is a research assistant using language models like GPT-3 to automate parts of researchers' workflows. Currently, the main workflow in Elicit is Literature Review."

  21. LitLLM: A Toolkit for Scientific Literature Review

    LitLLM is an interactive tool to help scientists write the literature review or related work section of a scientific paper starting from a user-provided abstract (see Figure 1). The specific objectives of this work are to create a system to help users navigate through research papers and write a literature review for a given paper or project.

  22. Guidance to best tools and practices for systematic reviews

    Methods and guidance to produce a reliable evidence synthesis. Several international consortiums of EBM experts and national health care organizations currently provide detailed guidance (Table 1). They draw criteria from the reporting and methodological standards of currently recommended appraisal tools, and regularly review and update their methods to reflect new information and ...

  23. Critical Appraisal Tools

    The structure of a literature review should include the following: An overview of the subject, issue, or theory under consideration, along with the objectives of the literature review, Division of works under review into themes or categories [e.g. works that support a particular position, those against, and those offering alternative approaches ...

  24. Rapid literature review: definition and methodology

    A systematic literature review (SLR) summarizes the results of all available studies on a specific topic and provides a high level of evidence. ... (WHO), National Collaborating Centre for Methods and Tools (NCCMT), the UK government, the Oxford Centre for Evidence Based Medicine, the Cochrane group [6,11], and one multi-national review ...

  25. A systematic literature review of empirical research on ChatGPT in

    To conduct this study, the authors followed the essential steps of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA 2020) and Okoli's steps for conducting a systematic review. These included identifying the study's purpose, drafting a protocol, applying a practical screening process, searching the literature, extracting relevant data, evaluating the quality ...

  26. [2405.15604] Text Generation: A Systematic Literature Review of Tasks

    Text generation has become more accessible than ever, and the increasing interest in these systems, especially those using large language models, has spurred an increasing number of related publications. We provide a systematic literature review comprising 244 selected papers between 2017 and 2024. This review categorizes works in text generation into five main tasks: open-ended text ...

  27. Collaborative Skills Training Using Digital Tools: A Systematic

    The present systematic literature review highlighted a diversity of findings about the effects of digital tools on the development of collaborative skills. This diversity can be attributed not only to the variety of digital tools used by researchers, but also to the variety of measures used to assess collaborative skills.

  28. AI-assisted writing is quietly booming in academic journals. Here's why

    An AI tool is more like your phone's autocomplete function than a human researcher. The question of bias: another worry is that AI outputs might be biased in ways that could seep into the ...

  29. Cost-effectiveness of differentiated care models that incorporate

    Background There is some evidence that differentiated service delivery (DSD) models, which use a client-centered approach to simplify and increase access to care, improve clinical outcomes among people living with HIV (PLHIV) in high HIV prevalence countries. Integrating economic strengthening tools (e.g., microcredit, cash transfers, food assistance) within DSD models can help address the ...

  30. Agrivoltaics, a promising new tool for electricity and food production

    A systematic literature search was conducted in Web of Science on 17 December 2022, resulting in 54 articles that met the inclusion criteria and concentrated primarily on food production. ... Our objectives were to review and synthesise the current agronomic knowledge on agrivoltaics and its future development possibilities. A systematic ...