
AI Literature Review Generator

Generate high-quality literature reviews fast with AI.

  • Academic Research: Create a literature review for your thesis, dissertation, or research paper.
  • Professional Research: Conduct a literature review for a project, report, or proposal at work.
  • Content Creation: Write a literature review for a blog post, article, or book.
  • Personal Research: Conduct a literature review to deepen your understanding of a topic of interest.

New & Trending Tools

In-cite AI Reference Generator, Legal Text Refiner, Job Search AI Assistant.

Revolutionize Your Research with Jenni AI

Literature Review Generator

Welcome to Jenni AI, the ultimate tool for researchers and students. Our AI Literature Review Generator is designed to assist you in creating comprehensive, high-quality literature reviews, enhancing your academic and research endeavors. Say goodbye to writer's block and hello to seamless, efficient literature review creation.


Loved by over 1 million academics


Endorsed by Academics from Leading Institutions

Join the Community of Scholars Who Trust Jenni AI


Elevate Your Research Toolkit

Discover the Game-Changing Features of Jenni AI for Literature Reviews

Advanced AI Algorithms

Jenni AI utilizes cutting-edge AI technology to analyze and suggest relevant literature, helping you stay on top of current research trends.

Get started


Idea Generation

Overcome writer's block with AI-generated prompts and ideas that align with your research topic, helping to expand and deepen your review.

Citation Assistance

Get help with proper citation formats to maintain academic integrity and attribute sources correctly.


Our Pledge to Academic Integrity

At Jenni AI, we are deeply committed to the principles of academic integrity. We understand the importance of honesty, transparency, and ethical conduct in the academic community. Our tool is designed not just to assist in your research, but to do so in a way that respects and upholds these fundamental values.

How it Works

Create Your Account

Start by creating your account on Jenni AI. The sign-up process is quick and user-friendly.

Define Your Research Scope

Enter the topic of your literature review to guide Jenni AI’s focus.

Citation Guidance

Receive assistance in citing sources correctly, maintaining the academic standard.

Easy Export

Export your literature review to LaTeX, HTML, or .docx formats.

Interact with AI-Powered Suggestions

Use Jenni AI’s suggestions to structure your literature review, organizing it into coherent sections.

What Our Users Say

Discover how Jenni AI has made a difference in the lives of academics just like you


· Aug 26

I thought AI writing was useless. Then I found Jenni AI, the AI-powered assistant for academic writing. It turned out to be much more advanced than I ever could have imagined. Jenni AI = ChatGPT x 10.


Charlie Cuddy

@sonofgorkhali

· 23 Aug

Love this use of AI to assist with, not replace, writing! Keep crushing it @Davidjpark96 💪


Waqar Younas, PhD

@waqaryofficial

· 6 Apr

4/9 Jenni AI's Outline Builder is a game-changer for organizing your thoughts and structuring your content. Create detailed outlines effortlessly, ensuring your writing is clear and coherent. #OutlineBuilder #WritingTools #JenniAI


I started with Jenni-who & Jenni-what. But now I can't write without Jenni. I love Jenni AI and am amazed to see how far Jenni has come. Kudos to http://Jenni.AI team.


· 28 Jul

Jenni is perfect for writing research docs, SOPs, study projects presentations 👌🏽


Stéphane Prud'homme

http://jenni.ai is awesome and super useful! thanks to @Davidjpark96 and @whoisjenniai fyi @Phd_jeu @DoctoralStories @WriteThatPhD

Frequently asked questions

What exactly does Jenni AI do?

Is Jenni AI suitable for all academic disciplines?

Is there a trial period or a free version available?

How does Jenni AI help with writer's block?

Can Jenni AI write my literature review for me?

How often is the literature database updated in Jenni AI?

How user-friendly is Jenni AI for those not familiar with AI tools?

Jenni AI: Standing Out From the Competition

In a sea of online proofreaders, Jenni AI stands out. Here’s how we compare to other tools on the market:

FEATURE

JENNI AI

COMPETITORS

Advanced AI-Powered Assistance

Uses state-of-the-art AI technology to provide relevant literature suggestions and structural guidance.

May rely on simpler algorithms, resulting in less dynamic or comprehensive support.

User-Friendly Interface

Designed for ease of use, making it accessible for users with varying levels of tech proficiency.

Interfaces can be complex or less intuitive, posing a challenge for some users.

Transparent and Flexible Pricing

Offers a free trial and clear, flexible pricing plans suitable for different needs.

Pricing structures can be opaque or inflexible, with fewer user options.

Unparalleled Customization

Offers highly personalized suggestions and adapts to your specific research needs over time.

Often provide generic suggestions that may not align closely with individual research topics.

Comprehensive Literature Access

Provides access to a vast and up-to-date range of academic literature, ensuring comprehensive research coverage.

Some may have limited access to current or diverse research materials, restricting the scope of literature reviews.

Ready to Transform Your Research Process?

Don't wait to elevate your research. Sign up for Jenni AI today and discover a smarter, more efficient way to handle your academic literature reviews.


We generate robust evidence fast

What is Silvi.ai?

Silvi is an end-to-end screening and data extraction tool supporting Systematic Literature Review and Meta-analysis.

Silvi helps create systematic literature reviews and meta-analyses that follow Cochrane guidelines in a highly reduced time frame, giving a fast and easy overview. It supports the user through the full process, from literature search to data analysis. Silvi is directly connected with databases such as PubMed and ClinicalTrials.gov and is always updated with the latest published research. It also supports RIS files, making it possible to upload a search string from your favorite search engine (e.g., Ovid). Silvi has a tagging system that can be tailored to any project.

Silvi is transparent, meaning it documents and stores the choices the user makes (and the reasons behind them). Whether you are publishing the project's results in a journal, sending them to an authority, or collaborating on the project with several colleagues, this transparency is essential to creating robust evidence.

Silvi is developed with the user experience in mind. The design is intuitive and easy for new users to pick up; there is no need to become a super-user. If questions do arise, we have a series of short instructional videos to get you back on track.

To see Silvi in use, watch our short introduction video.



Learn more about Silvi’s specifications here.

"I like that I can highlight key inclusions and exclusions which makes the screening process really quick - I went through 2000+ titles and abstracts in just a few hours"

Eishaan Kamta Bhargava 

Consultant Paediatric ENT Surgeon, Sheffield Children's Hospital

"I really like how intuitive it is working with Silvi. I instantly felt like a superuser."

Henriette Kristensen

Senior Director, Ferring Pharmaceuticals

"The idea behind Silvi is great. Normally, I really dislike doing literature reviews, as they take up huge amounts of time. Silvi has made it so much easier! Thanks."

Claus Rehfeld

Senior Consultant, Nordic Healthcare Group

"AI has emerged as an indispensable tool for compiling evidence and conducting meta-analyses. Silvi.ai has proven to be the most comprehensive option I have explored, seamlessly integrating automated processes with the indispensable attributes of clarity and reproducibility essential for rigorous research practices."

Martin Södermark

M.Sc. Specialist in clinical adult psychology


Silvi.ai was founded in 2018 by Professor in Health Economic Evidence, Tove Holm-Larsen, and expert in Machine Learning, Rasmus Hvingelby. The idea for Silvi stemmed from their own research, and the need to conduct systematic literature reviews and meta-analyses faster.

The ideas behind Silvi were originally a component of a larger project. In 2016, Tove founded the group “Evidensbaseret Medicin 2.0” in collaboration with researchers from Ghent University, the Technical University of Denmark, the University of Copenhagen, and other experts. EBM 2.0 wanted to optimize evidence-based medicine to its highest potential using Big Data and Artificial Intelligence, but needed a highly skilled AI specialist.

Around this time, Tove met Rasmus, who shared the same visions. Tove teamed up with Rasmus, and Silvi.ai was created.

Our story  


Free Trial

No card details needed!

RAxter is now Enago Read! Enjoy the same licensing and pricing with enhanced capabilities. No action required for existing customers.

Your All-in-One AI-Powered Reading Assistant

A Reading Space to Ideate, Create Knowledge, and Collaborate on Your Research

  • Smartly organize your research
  • Receive recommendations that cannot be ignored
  • Collaborate with your team to read, discuss, and share knowledge


From Surface-Level Exploration to Critical Reading - All in one Place!

Fine-tune your literature search.

Our AI-powered reading assistant saves time spent on the exploration of relevant resources and allows you to focus more on reading.

Select phrases or specific sections and explore more research papers related to the core aspects of your selections. Pin the useful ones for future references.

Our platform brings you the latest research related to your research and project work.

Speed up your literature review

Quickly generate a summary of key sections of any paper with our summarizer.

Make informed decisions about which papers are relevant, and where to invest your time in further reading.

Get key insights from the paper, quickly comprehend the paper’s unique approach, and recall the key points.

Bring order to your research projects

Organize your reading lists into different projects and maintain the context of your research.

Quickly sort items into collections and tag or filter them according to keywords and color codes.

Experience the power of sharing by finding all the shared literature at one place.

Decode papers effortlessly for faster comprehension

Highlight what is important so that you can retrieve it faster next time.

Select any text in the paper and ask Copilot to explain it to help you get a deeper understanding.

Ask the AI-powered Copilot questions and follow-ups.

Collaborate to read with your team, professors, or students

Share and discuss literature and drafts with your study group, colleagues, experts, and advisors. Recommend valuable resources and help each other for better understanding.

Work in shared projects efficiently and improve visibility within your study group or lab members.

Keep track of your team's progress by being constantly connected and engaging in active knowledge transfer by requesting full access to relevant papers and drafts.

Find papers from across the world's largest repositories


Testimonials

Privacy and security of your research data are integral to our mission.


Everything you add or create on Enago Read is private by default. It is visible if and when you share it with other users.

Copyright

You can put Creative Commons license on original drafts to protect your IP. For shared files, Enago Read always maintains a copy in case of deletion by collaborators or revoked access.

Security

We use state-of-the-art security protocols and algorithms including MD5 Encryption, SSL, and HTTPS to secure your data.


LitLLM: A Toolkit for Scientific Literature Review

Conducting literature reviews for scientific papers is essential for understanding research, its limitations, and building on existing work. It is a tedious task which makes an automatic literature review generator appealing. Unfortunately, many existing works that generate such reviews using Large Language Models (LLMs) have significant limitations. They tend to hallucinate—generate non-factual information—and ignore the latest research they have not been trained on. To address these limitations, we propose a toolkit that operates on Retrieval Augmented Generation (RAG) principles, specialized prompting and instructing techniques with the help of LLMs. Our system first initiates a web search to retrieve relevant papers by summarizing user-provided abstracts into keywords using an off-the-shelf LLM. Authors can enhance the search by supplementing it with relevant papers or keywords, contributing to a tailored retrieval process. Second, the system re-ranks the retrieved papers based on the user-provided abstract. Finally, the related work section is generated based on the re-ranked results and the abstract. There is a substantial reduction in time and effort for literature review compared to traditional methods, establishing our toolkit as an efficient alternative. Our open-source toolkit is accessible at https://github.com/shubhamagarwal92/LitLLM and Huggingface space ( https://huggingface.co/spaces/shubhamagarwal92/LitLLM ) with the video demo at https://youtu.be/E2ggOZBAFw0

Shubham Agarwal (1,2,3), Issam H. Laradji (1,4), Laurent Charlin (2,3,5), Christopher Pal (1,2,5)

1 ServiceNow Research, 2 Mila - Quebec AI Institute, 3 HEC Montreal, Canada, 4 UBC, Vancouver, Canada, 5 Canada CIFAR AI Chair

Correspondence: [email protected]

1 Introduction

Scientists have long used NLP systems like search engines to find and retrieve relevant papers. Scholarly engines, including Google Scholar, Microsoft Academic Graph, and Semantic Scholar, provide additional tools and structure to help researchers further. Following recent advances in large language models (LLMs), a new set of systems provides even more advanced features. For example, Explainpaper (https://www.explainpaper.com/) helps explain the contents of papers, and Writefull (https://x.writefull.com/) helps with several writing tasks, including abstract and title generation. There are, of course, many other tasks where similar technologies could be helpful.

Systems that help researchers with literature reviews hold promising prospects. The literature review is a difficult task that can be decomposed into several sub-tasks, including retrieving relevant papers and generating a related works section that contextualizes the proposed work compared to the existing literature. It is also a task where factual correctness is essential. In that sense, it is a challenging task for current LLMs, which are known to hallucinate. Overall, creating tools to help researchers more rapidly identify, summarize and contextualize relevant prior work could significantly help the research community.

Recent works explore the task of literature review in parts or in full. For example, Lu et al. ( 2020 ) propose generating the related-work section of a paper using its abstract and a list of (relevant) references. Researchers also look at the whole task and build systems using LLMs like ChatGPT for literature review  Haman and Školník ( 2023 ); Huang and Tan ( 2023 ) . While these LLMs tend to generate high-quality text, they are prone to hallucinations  Athaluri et al. ( 2023 ) . For example, the Galactica system was developed to reason about scientific knowledge (Taylor et al., 2022 ) . While it outperforms contemporary models on various scientific tasks, it generates made-up content like inaccurate citations and imaginary papers (see, e.g., "What Meta Learned from Galactica").


As a step forward, we explore retrieval-augmented-generation (RAG) to improve factual correctness  Lewis et al. ( 2020 ) . The idea is to use the retrieval mechanism to obtain a relevant list of existing papers to be cited which provides relevant contextual knowledge for LLM based generation.

LitLLM is an interactive tool to help scientists write the literature review or related work section of a scientific paper starting from a user-provided abstract (see Figure  1 ). The specific objectives of this work are to create a system to help users navigate through research papers and write a literature review for a given paper or project. Our main contributions are:

We provide a system based on a modular pipeline that conducts a literature review based on a user-proposed abstract.

We use Retrieval Augmented Generation (RAG) techniques to condition the generated related work on factual content and avoid hallucinations using multiple search techniques.

We incorporate sentence-based planning to promote controllable generation.


2 Related Work

LLMs have demonstrated significant capabilities in storing factual knowledge and achieving state-of-the-art results when fine-tuned on downstream Natural Language Processing (NLP) tasks   Lewis et al. ( 2020 ) .

However, they also face challenges such as hallucination, outdated knowledge, and non-transparent, untraceable reasoning processes   Huang et al. ( 2023 ); Gao et al. ( 2023 ); Li et al. ( 2024 ) . These limitations have motivated the development of RAG (Retrieval Augmented Generation), which incorporates knowledge from external databases to enhance the accuracy and credibility of the models, particularly for knowledge-intensive tasks   Gao et al. ( 2023 ) . RAG has emerged as a promising solution to the challenges faced by LLMs. It synergistically merges LLMs’ intrinsic knowledge with the vast, dynamic repositories of external databases   Gao et al. ( 2023 ) . This approach allows for continuous knowledge updates and integration of domain-specific information in an attempt to limit the effect of outdated knowledge. The proposed work builds upon the advancements around RAG to provide a more efficient solution for academic writing.

On the other hand, there has been a notable emphasis on utilizing Large Language Models (LLMs) for tasks related to information retrieval and ranking   Zhu et al. ( 2023 ) . The work by  Sun et al. ( 2023 ) leverages generative LLMs such as ChatGPT and GPT-4 for relevance ranking in information retrieval, demonstrating that these models can deliver competitive results to state-of-the-art supervised methods. Pradeep et al. ( 2023b , a ) introduce different open-source LLM for listwise zero-shot reranking, further motivating the proposed approach of using LLMs for reranking in our work.

The exploration of large language models (LLMs) and their zero-shot abilities has been a significant focus in recent research. For instance, one study investigated using LLMs in recommender systems, demonstrating their promising zero-shot ranking abilities, although they struggled with the order of historical interactions and position bias   Hou et al. ( 2023 ) . Another study improved the zero-shot learning abilities of LLMs through instruction tuning, which led to substantial improvements in performance on unseen tasks   Wei et al. ( 2021 ) . A similar approach was taken to enhance the zero-shot reasoning abilities of LLMs, with the introduction of an autonomous agent to instruct the reasoning process, resulting in significant performance boosts   Crispino et al. ( 2023 ) . The application of LLMs has also been explored in the context of natural language generation (NLG) assessment, with comparative assessment found to be superior to prompt scoring   Liusie et al. ( 2023 ) . In the domain of Open-Domain Question Answering (ODQA), a Self-Prompting framework was proposed to utilize the massive knowledge stored in LLMs, leading to significant improvements over previous methods   Li et al. ( 2022 ) . Prompt engineering has been identified as a key technique for enhancing the abilities of LLMs, with various strategies being explored   Shi et al. ( 2023 ) . 4 4 4 This paragraph was generated using our platform with some minor modifications based on a slightly different version of our abstract.

3 Pipeline

Figure 2 provides an overview of the pipeline. The user provides a draft of the abstract or a research idea. We use an LLM to first summarize the abstract into keywords that can be used as a query for search engines. Optionally, users can provide relevant keywords to improve search results. This query is passed to the search engine, which retrieves relevant papers along with corresponding information such as abstracts and open-access PDF URLs. The retrieved abstracts, together with the original query abstract, are used as input to the LLM re-ranker, which produces a listwise ranking of the papers based on their relevance to the query abstract. The re-ranked abstracts and the original query are finally passed to the LLM generator, which generates the related-work section of the paper. Recently, Agarwal et al. ( 2024 ) showed that prompting LLMs with sentence plans reduces hallucinations in the generated outputs. These plans specify the number of sentences and the citation description for each line, giving authors control over the output. We include this sentence-based planning in the LLM generator as part of the system. In the following, we provide more details about each module.
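The pipeline just described can be sketched as a chain of small functions. The names (`summarize_to_query`, `literature_review`) and the `call_llm`/`search`/`rerank`/`generate` callables are illustrative placeholders, not the toolkit's actual API:

```python
def summarize_to_query(call_llm, abstract, extra_keywords=None):
    """Compress a user abstract into a search query via an LLM call."""
    prompt = f"Summarize this abstract into search keywords:\n{abstract}"
    keywords = call_llm(prompt)
    if extra_keywords:  # optional user-supplied keywords refine the search
        keywords += " " + " ".join(extra_keywords)
    return keywords


def literature_review(call_llm, search, rerank, generate, abstract, keywords=None):
    """End-to-end pipeline: query -> retrieve -> re-rank -> generate."""
    query = summarize_to_query(call_llm, abstract, keywords)
    papers = search(query)             # retrieval, e.g. a scholarly-API call
    ranked = rerank(abstract, papers)  # LLM-based listwise re-ranking
    return generate(abstract, ranked)  # RAG generation of the related work
```

Because each stage is just a callable, any LLM (proprietary or open-source) can be swapped into a given slot, which mirrors the modularity the paper describes.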


3.1 Paper Retrieval Module

In our toolkit, we retrieve relevant papers using the Semantic Scholar API. Other platforms could be used, but the S2 Platform is well-adapted to this use case. It is a large-scale academic corpus comprising 200M+ metadata records across multiple research areas, providing information about papers’ metadata, authors, paper embeddings, etc. The Recommendations API also provides relevant papers similar to any seed paper. Figure 3 shows our system’s different strategies. We describe the three settings we use to search for references:

  • The user provides an abstract or a research idea (roughly the length of an abstract). We prompt an LLM (see Figure 4) to summarize this abstract into keywords that can be used as a search query with most APIs.

  • Users can optionally provide keywords that improve search results. This is similar (in spirit) to how researchers search for related work with a search engine. It is particularly useful in interdisciplinary research, where authors want to include the latest research from a particular domain that may not be captured in the abstract.

  • Lastly, any seed paper the user finds relevant to their idea can be used with the Recommendations API to retrieve other closely related papers.
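As a concrete sketch of the first setting, the keyword query can be sent to the Academic Graph paper-search endpoint. The helper names below are hypothetical; the endpoint URL and field names follow the public Semantic Scholar API, but treat the exact request shape as an assumption:

```python
from urllib.parse import urlencode

S2_SEARCH = "https://api.semanticscholar.org/graph/v1/paper/search"


def build_search_url(query, fields=("title", "abstract", "openAccessPdf"), limit=20):
    """Construct an Academic Graph paper-search URL for a keyword query."""
    params = {"query": query, "fields": ",".join(fields), "limit": limit}
    return S2_SEARCH + "?" + urlencode(params)


def parse_results(payload):
    """Pull (title, abstract) pairs out of a search-response payload."""
    return [(p.get("title"), p.get("abstract")) for p in payload.get("data", [])]
```

Fetching `build_search_url("retrieval augmented generation")` with any HTTP client returns a JSON payload whose `data` list `parse_results` then flattens for the re-ranker.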


3.2 Paper Re-Ranking Module

Recent efforts have explored the application of proprietary LLMs for ranking   Sun et al. ( 2023 ); Ma et al. ( 2023 ) as well as open-source models   Pradeep et al. ( 2023a , b ) . These approaches provide a combined list of passages directly as input to the model and retrieve the re-ordered ranking list   Zhang et al. ( 2023 ) . Typically, a retriever first filters the top-k potential candidates, which are then re-ranked by an LLM to produce the final output list. In our work, we use the instructional permutation generation approach  Sun et al. ( 2023 ) , where the model is prompted to generate a permutation of the different papers in descending order of relevance to the user-provided abstract, thus producing an ordered preference list rather than intermediate relevance scores. Figure 5 shows the prompt we use for LLM-based re-ranking.
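A minimal sketch of permutation-style re-ranking, assuming the model answers with an ordered identifier list such as `[2] > [1] > [3]`. The prompt wording and parsing here are illustrative stand-ins, not the exact prompt from Figure 5:

```python
import re


def build_rerank_prompt(query_abstract, candidates):
    """Ask the model for a permutation such as '[2] > [1] > [3]'."""
    numbered = "\n".join(f"[{i + 1}] {c}" for i, c in enumerate(candidates))
    return ("Rank the following papers by relevance to the abstract, most "
            "relevant first. Answer only with identifiers, e.g. [2] > [1].\n"
            f"Abstract: {query_abstract}\n{numbered}")


def apply_permutation(candidates, llm_answer):
    """Reorder candidates according to the model's '[i] > [j] > ...' answer."""
    order = [int(m) - 1 for m in re.findall(r"\[(\d+)\]", llm_answer)]
    seen = list(dict.fromkeys(i for i in order if 0 <= i < len(candidates)))
    # append anything the model omitted, preserving the original order
    seen += [i for i in range(len(candidates)) if i not in seen]
    return [candidates[i] for i in seen]
```

Tolerant parsing matters in practice: the model may drop or repeat identifiers, so the fallback keeps the output a valid permutation either way.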


3.3 Summary Generation Module

We explore two strategies for generation: (1) zero-shot generation and (2) plan-based generation, which relies on sentence plans for controllable generation, described in the following subsections.

3.3.1 Zero-shot generation

While LLMs can potentially search and generate relevant papers from their parametric memory and training data, they are prone to hallucinating and generating non-factual content. Retrieval-augmented generation, first introduced in  Lewis et al. ( 2020 ) for knowledge-intensive tasks, addresses this by augmenting the generation model with an information retrieval module. The RAG principles have subsequently been used for dialogue generation in task-oriented settings  Thulke et al. ( 2021 ) , code generation  Liu et al. ( 2020 ); Parvez et al. ( 2021 ) and product review generation  Kim et al. ( 2020 ) . RAG drastically reduces hallucinations in the generated output  Gao et al. ( 2023 ); Tonmoy et al. ( 2024 ) .

Our work builds upon the principles of RAG, where we retrieve the relevant papers based on the query and augment them as context for generating the literature review. This also allows the system to be grounded in the retrieved information and be updated with the latest research where the training data limits the parametric knowledge of the LLM. Figure 6 shows our system’s prompt for effective Retrieval Augmented Generation (RAG).
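One way the grounding step could look in code. The wording below is a hypothetical stand-in for the actual prompt in Figure 6; the key idea is that the top re-ranked abstracts are injected as numbered context the generator must cite:

```python
def build_rag_prompt(query_abstract, ranked_papers, max_papers=5):
    """Ground generation in the top retrieved abstracts to curb hallucination."""
    context = "\n\n".join(f"Reference [{i + 1}]: {p}"
                          for i, p in enumerate(ranked_papers[:max_papers]))
    return ("Write a related-work section for the abstract below. Cite only "
            "the numbered references given; do not invent citations.\n\n"
            f"Abstract: {query_abstract}\n\n{context}")
```

Capping the context at `max_papers` keeps the prompt inside the model's context window while still grounding every citation in retrieved text.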

3.3.2 Plan based generation

To get the best results from LLM, recent research shifts focus on designing better prompts (Prompt Engineering) including 0-shot chain-of-thought prompting   Kojima et al. ( 2022 ); Zhou et al. ( 2022 ) , few-shot prompting   Brown et al. ( 2020 ) techniques, few-shot Chain-of-thought prompting   Wei et al. ( 2022 ) and in-context prompting   Li and Liang ( 2021 ); Qin and Eisner ( 2021 ) . However, the longer context of our problem statement (query paper and multiple relevant papers) hinders the application of these techniques for response generation.

We utilize sentence plan-based prompting techniques, drawing on insights from the literature on traditional modular Natural Language Generation (NLG) pipelines with the intermediary steps of sentence planning and surface realization  Reiter and Dale ( 1997 ); Stent et al. ( 2004 ) . These plans provide a sentence structure for the expected output, which efficiently guides the LLM in generating the literature review in a controllable fashion, as demonstrated in concurrent work (Agarwal et al., 2024 ) . Figure  7 (in the Appendix) shows the prompt for plan-based generation, with an example template:

Please generate {num_sentences} sentences in {num_words} words. Cite {cite_x} at line {line_x}. Cite {cite_y} at line {line_y}.
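The template above can be rendered programmatically. `build_plan` is a hypothetical helper, not part of the released toolkit; it simply fills the slots of the sentence plan:

```python
def build_plan(num_sentences, num_words, citations):
    """citations maps a line number to its citation label, e.g. {2: '[1]'}."""
    plan = f"Please generate {num_sentences} sentences in {num_words} words."
    for line, cite in sorted(citations.items()):
        plan += f" Cite {cite} at line {line}."
    return plan
```

For example, `build_plan(4, 120, {2: "[1]", 4: "[2]"})` yields a plan placing citation [1] on line 2 and citation [2] on line 4, which is how author preferences steer the output.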

4 Implementation Details

We build our system using Gradio  Abid et al. ( 2019 ) , which provides a convenient interface for quickly and efficiently building system demos. Our user interface is also available as a HuggingFace Space (https://huggingface.co/spaces/shubhamagarwal92/LitLLM). We query the Semantic Scholar API available through the Semantic Scholar Open Data Platform (Lo et al., 2020 ; Kinney et al., 2023 ) to search for relevant papers. Specifically, we use the Academic Graph (https://api.semanticscholar.org/api-docs/graph) and Recommendations (https://api.semanticscholar.org/api-docs/recommendations) API endpoints. We use the OpenAI API (https://platform.openai.com/docs/guides/gpt) to generate results with the GPT-3.5-turbo and GPT-4 models. At the same time, our modular pipeline allows using any LLM (proprietary or open-source) for the different components. We also allow the end user to sort the retrieved papers by relevance (the default S2 ordering), citation count, or year.
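A Gradio front-end along these lines can be wired up as follows. `generate_review` is a placeholder for the real pipeline, and the widget layout is an assumption rather than the demo's actual interface (launching requires `pip install gradio`):

```python
def generate_review(abstract, keywords="", sort_by="relevance"):
    """Placeholder hook; the real system runs the retrieve/re-rank/generate chain."""
    return f"(related work for: {abstract[:40]}... sorted by {sort_by})"


def launch_demo():
    """Wire the pipeline into a simple Gradio interface."""
    import gradio as gr  # assumed installed: pip install gradio
    demo = gr.Interface(
        fn=generate_review,
        inputs=[gr.Textbox(label="Abstract"),
                gr.Textbox(label="Optional keywords"),
                gr.Radio(["relevance", "citations", "year"], label="Sort by")],
        outputs=gr.Textbox(label="Generated related work"))
    demo.launch()
```

The `sort_by` radio mirrors the relevance / citation count / year sorting the system exposes to end users.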

5 User Experience

As a preliminary study, we provided access to our user interface to five researchers, who worked through the demo to write literature reviews and validate the system’s efficacy. We also provide an example abstract in the demo for a quick start. In particular, users found zero-shot generation to be more informative about the literature in general, while plan-based generation was more accessible and tailored to their research paper, as is also evident in our demo video (https://youtu.be/E2ggOZBAFw0). Table 1 (in the Appendix) shows the related work generated for a recent, randomly chosen paper  Li et al. ( 2023 ) with the number of cited papers set to 4. Our system generated an informative query, "Multimodal Research: Image-Text Model Interaction," and retrieved relevant papers; the top recommended paper was also cited in the original paper. While zero-shot generation provides valuable insights into the existing literature, plan-based generation produces a more succinct and readily usable literature review.

6 Conclusion and Future Work

In this work, we introduce and describe LitLLM, a system which can generate literature reviews in a few clicks from an abstract using off-the-shelf LLMs. This LLM-powered toolkit relies on the RAG with a re-ranking strategy to generate a literature review with attribution. Our auxiliary tool allows researchers to actively search for related work based on a preliminary research idea, research proposal or even a full abstract. We present a modular pipeline that can be easily adapted to include the next generation of LLMs and other domains, such as news, by changing the source of retrieval information.

Given the growing impact of LLM-based writing assistants, we are optimistic that our system may aid researchers in searching for relevant papers and improve the quality of automatically generated related-work sections. While our system shows promise as a helpful research assistant, we believe that its usage should be disclosed to readers, and authors should exercise caution to eliminate any possible hallucinations.

In the future, we would also like to explore academic search through multiple APIs, such as Google Scholar. This work only considered abstracts of the query paper and the retrieved papers, which creates a bottleneck in effective literature review generation. With the advent of longer context LLMs, we envision our system ingesting the whole paper (potentially leveraging an efficient LLM-based PDF parser) to provide a more relevant background of the related research. We consider our approach as an initial step for building intelligent research assistants which could help academicians through an interactive setting (Dwivedi-Yu et al., 2022 ) .

  • Abid et al. (2019) Abubakar Abid, Ali Abdalla, Ali Abid, Dawood Khan, Abdulrahman Alfozan, and James Zou. 2019. Gradio: Hassle-free sharing and testing of ml models in the wild . arXiv preprint arXiv:1906.02569 .
  • Agarwal et al. (2024) Shubham Agarwal, Issam Laradji, Laurent Charlin, and Christopher Pal. 2024. LLMs for Literature Review generation: Are we there yet? Under submission .
  • Athaluri et al. (2023) Sai Anirudh Athaluri, Sandeep Varma Manthena, V S R Krishna Manoj Kesapragada, Vineel Yarlagadda, Tirth Dave, and Rama Tulasi Siri Duddumpudi. 2023. Exploring the boundaries of reality: Investigating the phenomenon of artificial intelligence hallucination in scientific writing through chatgpt references . Cureus , 15.
  • Brown et al. (2020) Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners .
  • Crispino et al. (2023) Nicholas Crispino, Kyle Montgomery, Fankun Zeng, Dawn Song, and Chenguang Wang. 2023. Agent instructs large language models to be general zero-shot reasoners . ArXiv , abs/2310.03710.
  • Dwivedi-Yu et al. (2022) Jane Dwivedi-Yu, Timo Schick, Zhengbao Jiang, Maria Lomeli, Patrick Lewis, Gautier Izacard, Edouard Grave, Sebastian Riedel, and Fabio Petroni. 2022. Editeval: An instruction-based benchmark for text improvements. arXiv preprint arXiv:2209.13331 .
  • Gao et al. (2023) Yunfan Gao, Yun Xiong, Xinyu Gao, Kangxiang Jia, Jinliu Pan, Yuxi Bi, Yi Dai, Jiawei Sun, and Haofen Wang. 2023. Retrieval-augmented generation for large language models: A survey . arXiv preprint arXiv:2312.10997 .
  • Haman and Školník (2023) Michael Haman and Milan Školník. 2023. Using chatgpt to conduct a literature review. Accountability in Research , pages 1–3.
  • Hou et al. (2023) Yupeng Hou, Junjie Zhang, Zihan Lin, Hongyu Lu, Ruobing Xie, Julian McAuley, and Wayne Xin Zhao. 2023. Large language models are zero-shot rankers for recommender systems . ArXiv , abs/2305.08845.
  • Huang and Tan (2023) Jingshan Huang and Ming Tan. 2023. The role of chatgpt in scientific communication: writing better scientific review articles . American Journal of Cancer Research , 13(4):1148.
  • Huang et al. (2023) Lei Huang, Weijiang Yu, Weitao Ma, Weihong Zhong, Zhangyin Feng, Haotian Wang, Qianglong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, et al. 2023. A survey on hallucination in large language models: Principles, taxonomy, challenges, and open questions . arXiv preprint arXiv:2311.05232 .
  • Kim et al. (2020) Jihyeok Kim, Seungtaek Choi, Reinald Kim Amplayo, and Seung-won Hwang. 2020. Retrieval-augmented controllable review generation. In Proceedings of the 28th International Conference on Computational Linguistics , pages 2284–2295.
  • Kinney et al. (2023) Rodney Kinney, Chloe Anastasiades, Russell Authur, Iz Beltagy, Jonathan Bragg, Alexandra Buraczynski, Isabel Cachola, Stefan Candra, Yoganand Chandrasekhar, Arman Cohan, et al. 2023. The semantic scholar open data platform. arXiv preprint arXiv:2301.10140 .
  • Kojima et al. (2022) Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. Advances in neural information processing systems , 35:22199–22213.
  • Lewis et al. (2020) Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. 2020. Retrieval-augmented generation for knowledge-intensive nlp tasks. Advances in Neural Information Processing Systems , 33:9459–9474.
  • Li et al. (2023) Hang Li, Jindong Gu, Rajat Koner, Sahand Sharifzadeh, and Volker Tresp. 2023. Do dall-e and flamingo understand each other? In Proceedings of the IEEE/CVF International Conference on Computer Vision , pages 1999–2010.
  • Li et al. (2022) Junlong Li, Zhuosheng Zhang, and Hai Zhao. 2022. Self-prompting large language models for zero-shot open-domain qa .
  • Li et al. (2024) Junyi Li, Jie Chen, Ruiyang Ren, Xiaoxue Cheng, Wayne Xin Zhao, Jian-Yun Nie, and Ji-Rong Wen. 2024. The dawn after the dark: An empirical study on factuality hallucination in large language models. arXiv preprint arXiv:2401.03205 .
  • Li and Liang (2021) Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. arXiv preprint arXiv:2101.00190 .
  • Liu et al. (2020) Shangqing Liu, Yu Chen, Xiaofei Xie, Jingkai Siow, and Yang Liu. 2020. Retrieval-augmented generation for code summarization via hybrid gnn. arXiv preprint arXiv:2006.05405 .
  • Liusie et al. (2023) Adian Liusie, Potsawee Manakul, and Mark John Francis Gales. 2023. Llm comparative assessment: Zero-shot nlg evaluation through pairwise comparisons using large language models .
  • Lo et al. (2020) Kyle Lo, Lucy Lu Wang, Mark Neumann, Rodney Kinney, and Daniel Weld. 2020. S2ORC: The semantic scholar open research corpus . In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics , pages 4969–4983, Online. Association for Computational Linguistics.
  • Lu et al. (2020) Yao Lu, Yue Dong, and Laurent Charlin. 2020. Multi-XScience: A large-scale dataset for extreme multi-document summarization of scientific articles . In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) , pages 8068–8074. Association for Computational Linguistics.
  • Ma et al. (2023) Xueguang Ma, Xinyu Zhang, Ronak Pradeep, and Jimmy Lin. 2023. Zero-shot listwise document reranking with a large language model. arXiv preprint arXiv:2305.02156 .
  • Parvez et al. (2021) Md Rizwan Parvez, Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, and Kai-Wei Chang. 2021. Retrieval augmented code generation and summarization. arXiv preprint arXiv:2108.11601 .
  • Pradeep et al. (2023a) Ronak Pradeep, Sahel Sharifymoghaddam, and Jimmy Lin. 2023a. Rankvicuna: Zero-shot listwise document reranking with open-source large language models. arXiv preprint arXiv:2309.15088 .
  • Pradeep et al. (2023b) Ronak Pradeep, Sahel Sharifymoghaddam, and Jimmy Lin. 2023b. Rankzephyr: Effective and robust zero-shot listwise reranking is a breeze! arXiv preprint arXiv:2312.02724 .
  • Qin and Eisner (2021) Guanghui Qin and Jason Eisner. 2021. Learning how to ask: Querying LMs with mixtures of soft prompts . arXiv preprint arXiv:2104.06599 .
  • Qu et al. (2021) Leigang Qu, Meng Liu, Jianlong Wu, Zan Gao, and Liqiang Nie. 2021. Dynamic modality interaction modeling for image-text retrieval . Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval .
  • Reiter and Dale (1997) Ehud Reiter and Robert Dale. 1997. Building applied natural language generation systems. Natural Language Engineering , 3(1):57–87.
  • Shi et al. (2023) Fobo Shi, Peijun Qing, D. Yang, Nan Wang, Youbo Lei, H. Lu, and Xiaodong Lin. 2023. Prompt space optimizing few-shot reasoning success with large language models . ArXiv , abs/2306.03799.
  • Srinivasan et al. (2021) Krishna Srinivasan, Karthik Raman, Jiecao Chen, Michael Bendersky, and Marc Najork. 2021. Wit: Wikipedia-based image text dataset for multimodal multilingual machine learning . Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval .
  • Stent et al. (2004) Amanda Stent, Rashmi Prasad, and Marilyn Walker. 2004. Trainable sentence planning for complex information presentations in spoken dialog systems . In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04) , pages 79–86, Barcelona, Spain.
  • Sun et al. (2023) Weiwei Sun, Lingyong Yan, Xinyu Ma, Pengjie Ren, Dawei Yin, and Zhaochun Ren. 2023. Is chatgpt good at search? investigating large language models as re-ranking agent. arXiv preprint arXiv:2304.09542 .
  • Taylor et al. (2022) Ross Taylor, Marcin Kardas, Guillem Cucurull, Thomas Scialom, Anthony Hartshorn, Elvis Saravia, Andrew Poulton, Viktor Kerkez, and Robert Stojnic. 2022. Galactica: A large language model for science . arXiv preprint arXiv:2211.09085 .
  • Thulke et al. (2021) David Thulke, Nico Daheim, Christian Dugast, and Hermann Ney. 2021. Efficient retrieval augmented generation from unstructured knowledge for task-oriented dialog . arXiv preprint arXiv:2102.04643 .
  • Tonmoy et al. (2024) SM Tonmoy, SM Zaman, Vinija Jain, Anku Rani, Vipula Rawte, Aman Chadha, and Amitava Das. 2024. A comprehensive survey of hallucination mitigation techniques in large language models. arXiv preprint arXiv:2401.01313 .
  • Wei et al. (2021) Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le. 2021. Finetuned language models are zero-shot learners . ArXiv , abs/2109.01652.
  • Wei et al. (2022) Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed H Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. In Advances in Neural Information Processing Systems .
  • Yu et al. (2022) Jiahui Yu, Zirui Wang, Vijay Vasudevan, Legg Yeung, Mojtaba Seyedhosseini, and Yonghui Wu. 2022. Coca: Contrastive captioners are image-text foundation models . Trans. Mach. Learn. Res. , 2022.
  • Zhang et al. (2023) Xinyu Zhang, Sebastian Hofstätter, Patrick Lewis, Raphael Tang, and Jimmy Lin. 2023. Rank-without-gpt: Building gpt-independent listwise rerankers on open-source large language models. arXiv preprint arXiv:2312.02969 .
  • Zhao et al. (2022) Zijia Zhao, Longteng Guo, Xingjian He, Shuai Shao, Zehuan Yuan, and Jing Liu. 2022. Mamo: Fine-grained vision-language representations learning with masked multimodal modeling . Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval .
  • Zhou et al. (2022) Yongchao Zhou, Andrei Ioan Muresanu, Ziwen Han, Keiran Paster, Silviu Pitis, Harris Chan, and Jimmy Ba. 2022. Large language models are human-level prompt engineers. arXiv preprint arXiv:2211.01910 .
  • Zhu et al. (2023) Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Zhicheng Dou, and Ji-Rong Wen. 2023. Large language models for information retrieval: A survey . arXiv preprint arXiv:2308.07107 .


In the following, we provide snippets of code to retrieve results from the Semantic Scholar API for both recommendation and query-based search:
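As an illustration (a minimal sketch using only Python's standard library, not the exact code from the paper; the field list, query string, and seed paper id are placeholder examples), both the query-based search and the recommendation endpoints of the public Semantic Scholar API can be called as follows:

```python
import json
import urllib.parse
import urllib.request

GRAPH_API = "https://api.semanticscholar.org/graph/v1"
RECS_API = "https://api.semanticscholar.org/recommendations/v1"
# Fields to request for each paper; adjust as needed.
FIELDS = "title,abstract,year,externalIds"


def search_url(query: str, limit: int = 10) -> str:
    """Build a keyword-search URL against the Graph API."""
    params = urllib.parse.urlencode(
        {"query": query, "limit": limit, "fields": FIELDS}
    )
    return f"{GRAPH_API}/paper/search?{params}"


def recommendations_url(paper_id: str, limit: int = 10) -> str:
    """Build a recommendations URL for a seed paper id."""
    params = urllib.parse.urlencode({"limit": limit, "fields": FIELDS})
    return f"{RECS_API}/papers/forpaper/{paper_id}?{params}"


def fetch(url: str) -> dict:
    """Perform the GET request and decode the JSON response."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Query-based search: papers matching a free-text query.
    hits = fetch(search_url("literature review generation", limit=3))
    for paper in hits.get("data", []):
        print(paper["title"])

    # Recommendation search: papers similar to a seed paper
    # (id taken from the public API documentation examples).
    recs = fetch(recommendations_url("649def34f8be52c8b66281af98ae884c09aef38b"))
    for paper in recs.get("recommendedPapers", [])[:3]:
        print(paper["title"])
```

Unauthenticated requests are rate-limited; for heavier use the API supports an API key supplied via a request header.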


AI-Based Literature Review Tools


Selected AI-Based Literature Review Tools

Updates: See news on the release of AI (beta) features across various academic research databases, including Web of Science, Scopus, EBSCO, ProQuest, OVID, Dimensions, JSTOR, Westlaw, and LexisNexis.

Disclaimer: TAMU Libraries do not have subscription access to the AI-powered tools listed below the divider line. This guide serves solely as an informational resource; it is recommended that you assess these tools and their usage methodologies independently.

---

SEMANTIC SCHOLAR

  • SCIENTIFIC LITERATURE SEARCH ENGINE - finding semantically similar research papers.
  • " A free, AI-powered research tool for scientific literature."  <https://www.semanticscholar.org/>. But login is required in order to use all functions.
  • Over 200 million papers from all fields of science; this data has also served as a wellspring for the development of other AI-driven tools.

Search results (4,000+ in the example below) can be sorted by Fields of Study, Date Range, Author, and Journals & Conferences.

Save papers to your Library folder; Research Feeds will then recommend similar papers based on the saved items.

Example - SERVQUAL: A multiple-item scale for measuring consumer perceptions of service quality
Total Citations: 22,438 [Note: these numbers were gathered when this guide was created]
Highly Influential Citations: 2,001
Background Citations: 6,109
Methods Citations: 3,273
Results Citations: 385

Semantic Reader

TLDRs (Too Long; Didn't Read): Try this example. Press the pen icon to reveal the highlighted key points. TLDRs "are super-short summaries of the main objective and results of a scientific paper generated using expert background knowledge and the latest GPT-3 style NLP techniques. This new feature is available in beta for nearly 60 million papers in computer science, biology, and medicine..." <https://www.semanticscholar.org/product/tldr>

OPENREAD

  • https://www.openread.academy/
  • Institutional access at Harvard, MIT, the University of Oxford, Johns Hopkins, Stanford, and Beijing University.
  • AI-powered Academic Searching + Web Searching - Over 300 million papers and real-time web content.
  • Every keyword search or AI query yields a synthesis report with citations. To re-orient the search outcomes, click the Re-generate button and all citations will be refreshed accordingly; then click Follow-Up Questions to delve deeper into a particular area or subject.
  • Use Paper Q&A to interact with a text directly, e.g. " What does this paper say about literature review ?"
  • Click Translation to render a text or search results in another language.
  • Upload a PDF document and let Paper Expresso read it for you, parsing the content into an academic report format for easy screening: Background and context > Research objectives and hypotheses > Methodology > Results and findings > Discussion and interpretation > Contributions to the field > Structure and flow > Achievements and significance > Limitations and future work.
ELICIT

  • AI-POWERED RESEARCH ASSISTANT - finding papers, filtering study types, automating research flow, brainstorming, summarizing, and more.
  • " Elicit is a research assistant using language models like GPT-3 to automate parts of researchers’ workflows. Currently, the main workflow in Elicit is Literature Review. If you ask a question, Elicit will show relevant papers and summaries of key information about those papers in an easy-to-use table."   <https://elicit.org/faq#what-is-elicit.>; Find answers from 175 million papers. FAQS
  • Example - How do mental health interventions vary by age group? / Fish oil and depression. Results [login required]: (1) Summary of the top 4 papers > Paper #1-#4 with title, abstract, citations, DOI, and PDF. (2) Table view: Abstract / Interventions / Outcomes measured / Number of participants. (3) Relevant studies and citations. (4) Click Search for Paper Information to find metadata about Sources (SJR etc.), Population (age etc.), Intervention (duration etc.), Results (outcome, limitations etc.), and Methodology (detailed study design etc.). (5) Export as BIB or CSV.
  • How to Search / Extract Data / List-of-Concepts Search - Enter a research question > Workflow: Searching > Summarizing 8 papers > A summary of the top 4 papers > Final answers. Each result shows its citation count, DOI, and a full-text link to the Semantic Scholar website for more information, such as background citations, methods citations, and related papers. - List-of-Concepts search, e.g., adult learning motivation: the results will present a list of related concepts. - Extract data from a PDF file: upload a paper and let Elicit extract data for you.
  • Export Results - Various ways to export results.
  • How to Cite - Include the elicit.org URL in the citation, for example: Ought; Elicit: The AI Research Assistant; https://elicit.org; accessed xxxx/xx/xx

CONSENSUS.APP

ACADEMIC SEARCH ENGINE - using AI to find insights in research papers.

"We are a search engine that is designed to accept research questions, find relevant answers within research papers, and synthesize the results using the same language model technology." <https://consensus.app/home/blog/maximize-your-consensus-experience-with-these-best-practices/>

  • Example - Does the death penalty reduce crime? / Fish oil and depression. (1) Extracted and aggregated findings from relevant papers. (2) Results may include AIMS, DESIGN, PARTICIPANTS, FINDINGS, or other methodological or report components. (3) Summaries and full text.
  • How to Search - Direct questions: Does the death penalty reduce crime? Relationship between two concepts: Fish oil and depression / Does X cause Y? Open-ended concepts: effects of immigration on local economies. See tips and search examples in Consensus' Best Practices.
  • Synthesize (beta) / Consensus Meter - When the AI recognizes certain types of research questions, this functionality may be activated: it examines a selection of studies and provides a summary along with a Consensus Meter illustrating their collective agreement. Try this search: Is white rice linked to diabetes? After analyzing 10 papers, the Consensus Meter reveals the following outcomes: 70% indicate a positive association, 20% suggest a possible connection, and 10% indicate no link.

Prompt: "Write me a paragraph about the impact of climate change on GDP, with citations"

SCITE - CITATIONS IN CONTEXT

Integrated with Research Solutions.

Over 1.2 billion Citation Statements and metadata from over 181 million papers.

How does it work? - "scite uses access to full-text articles and its deep learning model to tell you, for a given publication: - how many times it was cited by others - how it was cited by others by displaying the text where the citation happened from each citing paper - whether each citation offers supporting or contrasting evidence of the cited claims in the publication of interest, or simply mention it."   <https://help.scite.ai/en-us/article/what-is-scite-1widqmr/>

EXAMPLE of seeing all citations and citation statements in one place

More information: Scite: A smart citation index that displays the context of citations and classifies their intent using deep learning  

CHATGPT

  • Based on GPT-3.5 by OpenAI; knowledge cutoff date is September 2021.
  • Input/output length - ChatGPT-3.5 allows a maximum of 4,096 tokens. According to ChatGPT, "On average, a token in English is roughly equivalent to 4 bytes or characters. English words are typically around 5 characters long. This means that, very roughly, you could fit around 800 to 1000 English words within 4096 tokens."
  • According to ChatGPT, the generated responses are non-deterministic by default. So if you run the searches again and get slightly or very different results, it's likely due to this factor.
  • ChatGPT may generate non-existent references.
  • According to this study <https://arxiv.org/ftp/arxiv/papers/2304/2304.06794.pdf>, "ChatGPT cites the most-cited articles and journals, relying solely on Google Scholar's citation counts" within the field of environmental science.
  • Example - "INTERVIEW WITH CHATGPT" as a Research Method & Teaching Tool Some researchers began to use this approach to obtain their research data. Try this Google Scholar search link "interview with ChatGPT"   or see two articles below: (1) Chatting about ChatGPT: how may AI and GPT impact academia and libraries? BD Lund, T Wang - Library Hi Tech News, 2023 (2) An interview with ChatGPT: discussing artificial intelligence in teaching, research, and practice , G Scaringi, M Loche - 2023  

Increased risk of sudden infant death syndrome (SIDS)
Increased risk of premature birth
Increased risk of low birth weight
Increased risk of respiratory problems in newborns
Increased risk of respiratory problems in infants exposed to secondhand smoke
Increased risk of developing asthma and other respiratory illnesses later in life for infants exposed to secondhand smoke

[Note: ChatGPT may generate non-existent references or false knowledge. To find out why large language models hallucinate, check out the Wikipedia article Hallucination (artificial intelligence) and this blog post: A Gentle Introduction to Hallucinations in Large Language Models by Adrian Tam.]

Infant death
Neonatal mortality (referring specifically to deaths within the first 28 days of life)
Perinatal mortality (referring to deaths occurring during the period from 22 completed weeks of gestation to 7 completed days after birth)
Early childhood mortality (referring to deaths occurring within the first five years of life)
Child mortality (referring to deaths occurring before the age of 18)

[Press the Regenerate button to get more.]

  • Example -  RELATED WORDS What are the related words of infant mortality? Neonatal mortality, Post-neonatal mortality, Stillbirths, Low birth weight, Malnutrition, Infectious diseases, Vaccination, Maternal health,  Access to healthcare, Poverty, Social inequality, Sanitation, Hygiene, Water quality, Childbirth complications, Congenital abnormalities, Birth defects, Maternal age, Under-five mortality, Child mortality, Perinatal mortality, Preterm birth, Low birth weight, Sudden infant death syndrome (SIDS), Maternal mortality, Postnatal care, Malnutrition, Immunization, Access to healthcare, Clean water and sanitation, Congenital anomalies, Infant health disparities, Infant mortality rate (IMR), Infant survival.  

(("Infant Death"[Mesh] OR "Sudden Infant Death Syndrome"[Mesh] OR "Infant Mortality"[Mesh] OR "Perinatal Mortality"[Mesh] OR "Neonatal Mortality"[Mesh] OR "Fetal Mortality"[Mesh] OR "Infant Death" OR "Sudden Infant Death Syndrome" OR "Infant Mortality" OR "Perinatal Mortality" OR "Neonatal Mortality" OR "Fetal Mortality") AND ("Risk Factors"[Mesh] OR "Etiology"[Mesh] OR "Prevention and Control"[Mesh] OR "Interventions"[Mesh] OR "Public Health"[Mesh] OR "Health Disparities"[Mesh] OR "Healthcare Disparities"[Mesh] OR "Poverty"[Mesh] OR "Malnutrition"[Mesh] OR "Infection"[Mesh] OR "Congenital Abnormalities"[Mesh] OR "Prematurity"[Mesh] OR "Low Birth Weight"[Mesh] OR "Maternal Education"[Mesh] OR "Maternal Health"[Mesh] OR "Risk Factors" OR "Etiology" OR "Prevention and Control" OR "Interventions" OR "Public Health" OR "Health Disparities" OR "Healthcare Disparities" OR "Poverty" OR "Malnutrition" OR "Infection" OR "Congenital Abnormalities" OR "Prematurity" OR "Low Birth Weight" OR "Maternal Education" OR "Maternal Health")) Comments and Suggestions: #1 -This search string may not work on all database platforms. For example, Medline (OVID) may not be able to run it, Medline & CINAHL (EBSCO) can run it without any problems while PubMed will run it but it will indicate which MeSH terms are not indexed. See PubMed's search History and Details under Advanced . #2 - This is just one possible combination. You may need to adjust it based on your task and the databases you are using . #3 - It might be easier if you break the task into 3 parts: ask for synonyms > ask for thesauri terms > and ask it to combine them.

Examples - DATABASE RECOMMENDATIONS

Health Science - Which databases to use to find articles on the EFFECTS OF MATERNAL SMOKING ON INFANT DEATH?

PubMed: This is a database of biomedical literature from the National Library of Medicine. It includes articles from medical journals and other sources on a wide range of health-related topics.
Embase: This is a database of biomedical literature that covers more international and pharmaceutical content than PubMed. It includes articles from medical journals and other sources on a wide range of health-related topics.
Cochrane Library: This is a collection of databases that provides systematic reviews and meta-analyses of healthcare interventions, including interventions related to maternal smoking and infant death.
Scopus: This is a multidisciplinary database that covers research from a wide range of fields, including health sciences. It includes articles from scientific journals, books, and conference proceedings.
Web of Science: This is a database that covers research from a wide range of fields, including health sciences. It includes articles from scientific journals, books, and conference proceedings.
CINAHL: This is a database of nursing and allied health literature, including articles on maternal smoking and infant death.

[Note: The response above was copied and pasted directly from ChatGPT. No editing was added.]

Education - I am doing a meta-analysis on K-12 STEM education. What databases should I use?

ERIC: This is a database of education research and information, including articles from journals, reports, and conference papers. It is a commonly used database for educational research.
Education Source: This is a comprehensive database of education research and information, including articles from journals, books, and conference papers.
Web of Science: This is a multidisciplinary database that covers research from a wide range of fields, including education. It includes articles from scientific journals, books, and conference proceedings.
Scopus: This is another multidisciplinary database that covers research from a wide range of fields, including education. It includes articles from scientific journals, books, and conference proceedings.
PsycINFO: This is a database of psychology research, including articles from journals, books, and dissertations. It can be useful for research on STEM education that includes psychological factors.
Cochrane Library: This is a collection of databases that provides systematic reviews and meta-analyses of healthcare interventions, including interventions related to education.

[Author's note: You could also ask for GREY LITERATURE sources related to the topic above.]

Example - ACT-AS-AN-EXPERT PROMPT: Here's an alternative way to consolidate all the questions into a single consultation by using the popular Act-As-An-Expert prompt (aka system message prompt): I am going to conduct a systematic review on games and mathematics related to K-12 education. I want you to act as a systematic review expert. I will ask you some questions related to my research topic and you will advise me on how to do it.

[You may ask one question at a time.] My first question is "which databases should I use?" My second question is "what are the relevant subject terms, descriptors, and keywords?" My third question is "how do I do data extraction?" ... Please note that the above prompts are merely illustrative; actual questions may contain more specific instructions. For example, for the first question you could ask for guidance on identifying relevant publications: "Can you provide a list of the primary databases and grey literature sources related to my topic?"

Example - GREY LITERATURE. Prompt: Please show me all the resources that offer grey literature for conducting research in the field of education.

ERIC (Education Resources Information Center): https://eric.ed.gov/
OpenGrey: http://www.opengrey.eu/
British Education Index: https://www.ebscohost.com/academic/british-education-index
WorldCat: https://www.worldcat.org/
ProQuest Dissertations & Theses Global: https://www.proquest.com/products-services/pqdtglobal.html
EThOS (Electronic Theses Online Service): https://ethos.bl.uk/
Institutional repositories of universities and academic institutions (search for specific institution repositories)
American Educational Research Association (AERA): https://www.aera.net/
National Education Association (NEA): https://www.nea.org/
Association for Supervision and Curriculum Development (ASCD): http://www.ascd.org/
RAND Corporation: https://www.rand.org/
National Center for Education Research (NCER): https://ies.ed.gov/ncer/
National Center for Education Statistics (NCES): https://nces.ed.gov/
European Educational Research Association (EERA): https://www.eera-ecer.de/
UNESCO International Institute for Educational Planning (IIEP): http://www.iiep.unesco.org/
Organization for Economic Cooperation and Development (OECD): https://www.oecd.org/education/
Education Development Center (EDC): https://www.edc.org/
National Institute for Educational Policy Research (NIER) - Japan: https://www.nier.go.jp/
Australian Council for Educational Research (ACER): https://www.acer.org/
Canadian Education Association (CEA): https://www.cea-ace.ca/

Follow-up: My topic is about STEM education. Which grey literature databases should I use?

STEM Education Research Centre (SERC) Grey Literature Database
National STEM Education Research Repository (NSERR)
National Science Digital Library (NSDL)
STEM Education Resource Center (SERC) GrayLit Network
STEM Education Grey Literature Library

See other Comparison Examples under Bing AI.

Examples - ZERO-SHOT, ONE-SHOT & FEW-SHOT PROMPTING for literature review: See the examples under How to Fine-Tune Prompts.

Bing Chat / Copilot

  • Bing Chat has become Copilot.
  • CONVERSATIONAL LLM 4.0 + SEARCH ENGINE. Connects to the Internet.
  • Three conversation modes: More Creative, More Balanced or More Precise.
  • Comparison - Prompt: My topic is about psychology. Which grey literature databases should I use? More Precise: PsycEXTRA, NIH Library, and links to library guides, .gov and .org websites. More Balanced: PsycEXTRA and three main research guides that mention psychology grey literature sources. More Creative: PsycEXTRA, LILACS, and WorldCat, plus a definition of grey literature and suggestions on searching professional association websites, newsletters, magazines, standards and guidelines, and other sources that may contain relevant information for your topic.
  • Some examples below were tested in Bing Chat only.  
  • Example - factors of infant mortality: Infant mortality refers to the death of infants before their first birthday. The leading causes of infant mortality include birth asphyxia, pneumonia, congenital malformations, term birth complications such as abnormal presentation of the fetus, umbilical cord prolapse, or prolonged labor, neonatal infection, diarrhea, malaria, measles, and malnutrition. Many situational factors lead to infant mortality, such as the pregnant woman's level of education, environmental conditions, political infrastructure, and level of medical support. In the United States, a primary determinant of infant mortality risk is infant birth weight, with lower birth weights increasing the risk of infant mortality. The determinants of low birth weight include socioeconomic, psychological, behavioral, and environmental factors.
  • Example - PEER-REVIEWED SOURCES: I need to find some peer-reviewed articles about infant mortality. The answer includes 5 articles; the first is "Causes of death and infant mortality rates among full-term births in the United States between 2010 and 2012: An observational study" by Neha Bairoliya and Günther Fink. Clicking "Tell me more about the first article" leads to this summary: "The first article is titled 'Causes of death and infant mortality rates among full-term births in the United States between 2010 and 2012: An observational study' and was published in PLOS Medicine on March 20, 2018. The authors are Neha Bairoliya and Günther Fink. The study used linked birth and death records for the period 2010-2012 to identify the state- and cause-specific burden of infant mortality among full-term infants (born at 37-42 weeks of gestation) in the United States. The researchers found that the full-term infant mortality rate (FTIMR) was 2.2 per 1,000 live births overall and ranged between 1.29 (Connecticut) and 3.77 (Mississippi) at the state level. Sudden unexpected death in infancy (SUDI) accounted for 43% of FTIM; congenital malformations and perinatal conditions accounted for 31% and 11.3% of FTIM, respectively. The largest mortality differentials between states with good and states with poor FTIMR were found for SUDI, with particularly large risk differentials for deaths due to sudden infant death syndrome (SIDS) and suffocation." More searches: Please find five empirical studies focusing on the factors influencing adult learners engaged in online education. Please locate five scholarly articles on stress management among college students site:pubmed.ncbi.nlm.nih.gov

Image illustrating data extraction

The ERIC Thesaurus contains nearly 12,000 terms, including more than 4,500 descriptors and 7,000 synonyms. You can use the term "Academic Achievement" or its synonyms such as "Educational Achievement", "Student Achievement", "Academic Performance", "Academic Success", "Learning Achievement", and more. [Note: The follow-up responses include some additional sources, for example: https://www.powerthesaurus.org/academic_achievement and https://eric.ed.gov/?ti=Achievement+Gap, including https://eric.ed.gov/?ti=Academic+Achievement]

My topic is about psychology.  Which grey literature databases should I use?

ChatGPT:

  • PsycEXTRA: This database is produced by the American Psychological Association (APA) and includes grey literature such as conference papers, white papers, and government reports.
  • OpenGrey: This database provides access to grey literature produced in Europe, including reports, theses, and conference papers.
  • Grey Literature Report: This database is produced by The New York Academy of Medicine and includes reports, conference papers, and other grey literature on health and biomedical topics, including psychology.
  • ProQuest Dissertations and Theses Global: This database provides access to graduate-level research, including theses and dissertations related to psychology.
  • Google Scholar: While not exclusively a grey literature database, Google Scholar can be a useful tool for finding conference papers and other grey literature related to psychology.
  • National Technical Information Service (NTIS): This database provides access to technical reports produced by U.S. government agencies, including those related to psychology.

Bing AI [asked in the Precise mode]: For psychology, some recommended grey literature databases include PsycEXTRA and PsyArXiv. PsycEXTRA is a database of grey literature material relating to psychology, behavioral sciences and health. PsyArXiv is a psychology archive designed to facilitate rapid dissemination of psychological research.

What other models have been developed based on SERVQUAL?

ChatGPT:

  • SERVPERF: This is a modified version of SERVQUAL that measures service performance instead of service quality. It was developed by Cronin and Taylor (1992) and is based on the same five dimensions as SERVQUAL.
  • SERVPERF-GAP: This model, developed by Caruana and Ewing (2006), extends the SERVQUAL model to include three additional gaps that can occur when customers evaluate service quality.
  • E-SERVQUAL: This model, developed by Parasuraman et al. (2005), adapts the SERVQUAL scale to measure the quality of online services.
  • SERVCO: This is a modified version of SERVQUAL that includes a sixth dimension, "cost," in addition to the five original dimensions. It was developed by Sureshchandar et al. (2002) to address the importance of cost in service quality evaluation.
  • SERVQ-HC: This is a modified version of SERVQUAL that is specifically designed for evaluating the quality of health care services. It was developed by Cronin et al. (2000) and includes 20 items across five dimensions.

Bing AI:

  • The Nordic model of service quality.
  • The hierarchical model of service quality.
  • The American Customer Satisfaction Index (ACSI).

Bing AI Sidebar - CHAT, THINK, & WRITE

  • Examples of Using AI sidebar to Recap Google Scholar findings

Chat - If you highlight a study in the search results of Google Scholar, the AI sidebar will ask whether you want to send the selected text to it. If you do, you can then select either EXPLAIN or EXPAND to learn more about the chosen article; EXPAND provides more detailed information. You can then click on any of the follow-up links to delve further into the selected article or topic. If the text of the chosen study is written in a foreign language, the AI sidebar can also provide an explanation of the text in English. Another option for extracting the key points from a selected article is to submit the URL of the PDF.

Insights - Use the Insights button to get more ideas about the topic.

Compose - Tell the Edge sidebar your topic and let its Compose tool generate a draft for you.

Google Scholar Button - See also the Plugins and Extensions page for how to perform a combined search using Google, Google Scholar, and ChatGPT.

Example: FINDING SIMILAR ARTICLES

Prompt: Find similar articles like this one - [then paste in the abstract]

Note - Use this prompt with Bing Chat, since real-time web info is required.

Examples: SUMMARIZE + REFERENCE

Use a basic prompt like "Please summarize ..." or add a REFERENCE TEXT that provides the necessary information about what you want the summary to focus on or how it should look. The reference could be a quote, an excerpt, a partial sentence, a conversation, an abstract, a code snippet, and so on.

Formula: Specify a role > assign a task > provide a reference text.

Example #1 - Scholarly Journal Quote

STEP 1 Prompt: As a research assistant specializing in renewable energy, your task is to locate comparable research studies based on the provided abstract from an article: "This paper discusses the perspective of renewable energy (wind, solar, wave and biomass) in the making of strategies for a sustainable development. Such strategies typically involve three major technological changes: energy savings on the demand side, efficiency improvements in the energy production, and replacement of fossil fuels by various sources of renewable energy."

Bing Chat then found 3 documents: the ScienceDirect paper where the abstract came from and two other documents from un.org and frontiersin.org respectively.

STEP 2 Prompt: Please synthesize the findings of the three studies provided above.

Bing Chat: "Sure! Here is a synthesis of the findings from the three studies:

All three studies emphasize the importance of renewable energy in achieving sustainable development and mitigating climate change. They all discuss various strategies and actions that can be taken to increase the use of renewable energy sources and reduce reliance on fossil fuels.

Henrik Lund’s paper discusses the perspective of renewable energy (wind, solar, wave and biomass) in the making of strategies for a sustainable development. Such strategies typically involve three major technological changes: energy savings on the demand side, efficiency improvements in the energy production, and .....

The United Nations outlines five critical actions the world needs to prioritize now to transform our energy systems and speed up the shift to renewable energy: Make renewable energy technology a global public good, Improve global access to components and raw materials......

“Sustainable Energy Transition for Renewable and Low Carbon Grid” discusses that a sustainable global electricity transition will entail increased use of renewable energy sources particularly wind and solar, nuclear energy as a low carbon energy source, electrification of transport and.......

In summary, these studies suggest that transitioning to renewable energy sources is crucial for sustainable development and mitigating climate change. This can be achieved through various strategies such as increasing energy efficiency, improving access to renewable technology and materials, leveling the playing field for renewables, ending fossil fuel subsidies, and redirecting investments towards renewables."

Example #2 - You are an expert researcher, given the following article, please discuss its key findings. Reference: How Large Language Models are Transforming Machine-Paraphrased Plagiarism at https://arxiv.org/abs/2210.03568

This is another way to supply REFERENCE TEXT. As a side note, specifying a role or persona is important, as it can affect the quality of the responses obtained. Just imagine if the designated role were "You are a high school student...".

Example #3 - Please read this article and summarize it for me - "Who Should I Trust: AI or Myself? Leveraging Human and AI Correctness Likelihood to Promote Appropriate Trust in AI-Assisted Decision-Making" - and then find 5 similar studies and please also summarize each for me.
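The "specify a role > assign a task > provide a reference text" formula used in these examples can be sketched as a small prompt template. The `build_prompt` helper below is illustrative only and targets no particular chatbot API:

```python
# Assemble a prompt from the three parts of the formula:
# role > task > reference text.

def build_prompt(role: str, task: str, reference: str) -> str:
    """Combine role, task, and reference text into one prompt string."""
    return (
        f"You are {role}. "
        f"Your task is to {task}.\n\n"
        f'Reference text:\n"{reference}"'
    )

prompt = build_prompt(
    role="a research assistant specializing in renewable energy",
    task="locate comparable research studies based on the provided abstract",
    reference="This paper discusses the perspective of renewable energy ...",
)
print(prompt)
```

Keeping the three parts separate makes it easy to swap the role (for example, "a high school student") and compare how the responses change.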


CONNECTED PAPERS

  • RELATED STUDIES
  • Uses visual graphs or other ways to show relevant studies. The database is connected to the Semantic Scholar Paper Corpus which has compiled hundreds of millions of published papers across many science and social science fields.
  • See more details about how it works .  
  • Example - SERVQUAL: then click on SELECT A PAPER TO BUILD THE GRAPH (the first paper was selected). Results: (1) Origin paper - SERVQUAL: A multiple-item scale for measuring consumer perceptions of service quality, with links to Connected Papers / PDF / DOI or publisher's site / Semantic Scholar / Google Scholar. (2) A graph showing the origin paper and connected papers, with links to the major sources. (3) Links to Prior Works and Derivative Works. See the detailed citations for the origin SERVQUAL paper at the top of its page within Semantic Scholar.
  • How to Search - Search by work title, or enter some keywords about a topic.
  • Download / Save - Download your saved items in BibTeX (.bib) format.

PAPER DIGEST

  • SUMMARY & SYNTHESIS
  • "Knowledge graph & natural language processing platform tailored for the technology domain." (https://www.paperdigest.org/) Areas covered: technology, biology/health, all science areas, business, humanities/social sciences, patents and grants.


  • LITERATURE REVIEW - https://www.paperdigest.org/review/
  • Systematic Review - https://www.paperdigest.org/literature-review/
  • SEARCH CONSOLE - https://www.paperdigest.org/search/
  • Conference Digest - NIPS conference papers ...
  • Tech AI Tools: Literature Review | Literature Search | Question Answering | Text Summarization
  • Expert AI Tools: Org AI | Expert Search | Executive Search, Reviewer Search, Patent Lawyer Search ...

Daily paper digest / Conference papers digest / Best paper digest / Topic tracking. In Account, enter the subject areas you are interested in; the Daily Digest will then upload studies based on your interests.

RESEARCH RABBIT

  • CITATION-BASED MAPPING: SIMILAR / EARLY / LATER WORKS
  • "100s of millions of academic articles and covers more than 90%+ of materials that can be found in major databases used by academic institutions (such as Scopus, Web of Science, and others)." See its FAQs page. Search algorithms were borrowed from NIH and Semantic Scholar.

The default “Untitled Collection” will collect your search histories, based on which Research Rabbit will send you recommendations for three types of related results: Similar Works / Earlier Works / Later Works, viewable in graph views such as Network, Timeline, and First Authors.

Zotero integration: importing and exporting between these two apps.

  • Example - SERVQUAL: A multiple-item scale for measuring consumer perceptions of service quality [Login required] Try it to see its Similar Works, Earlier Works and Later Works or other documents.
  • Export Results - Findings can be exported in BibTeX, RIS, or CSV format.
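As a sketch of what post-processing such an export can look like, here is a minimal RIS reader, assuming the standard `TAG  - value` line format. The `parse_ris` helper is illustrative; real exports vary in whitespace and tags:

```python
# Flatten RIS records ("TAG  - value" lines, "ER" terminator) into dicts.

def parse_ris(text: str) -> list:
    """Parse RIS text into a list of {tag: value} dicts, one per record."""
    records, current = [], {}
    for line in text.splitlines():
        # A valid RIS line is a 2-char tag followed by "  - ".
        if len(line) < 5 or line[2:5] != "  -":
            continue
        tag, value = line[:2], line[6:].strip()
        if tag == "ER":            # end of record
            records.append(current)
            current = {}
        elif tag in current:       # repeated tag (e.g. AU): join values
            current[tag] += "; " + value
        else:
            current[tag] = value
    return records

sample = """TY  - JOUR
TI  - SERVQUAL: A multiple-item scale for measuring consumer perceptions of service quality
AU  - Parasuraman, A.
AU  - Zeithaml, V. A.
PY  - 1988
ER  -
"""
for rec in parse_ris(sample):
    print(rec["TI"], "|", rec["AU"])
```

Each returned dict can then be written out as a CSV row or converted into a BibTeX entry for a reference manager.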

CITING GENERATIVE AI

  • How to cite ChatGPT [APA] - https://apastyle.apa.org/blog/how-to-cite-chatgpt
  • How to Cite Generative AI [MLA] - https://style.mla.org/citing-generative-ai/
  • Citation Guide - Citing ChatGPT and Other Generative AI (University of Queensland, Australia)
  • Last Updated: May 9, 2024 2:16 PM
  • URL: https://tamu.libguides.com/c.php?g=1289555

Accelerate your dissertation literature review with AI


Introduction

Dissertation writing is part of being a graduate student. There are many different ways to organise your research, and several steps to this process. Typically, the literature review is an early chapter in the dissertation, providing an overview of the field of study. It should summarise relevant research papers and other materials in your field, with specific references.

To understand how to write a good literature review, we must first understand its purpose. The goals of a literature review are to place your dissertation topic in the context of existing work (this also allows you to acknowledge prior contributions, and avoid accusations of plagiarism), and to set you up to show you are making a new contribution to the field.

Since the literature review is repetitive, many students find it tedious. While there are some traditional tools and techniques to help, covered below, they tend to be cumbersome and keyword-based. For this reason, we built a better tool for research and literature review, which I describe in the last section. You can see the Lateral tool in action, and how it makes the literature review a lot easier.

1. Different kinds of reading

We can divide the activity of reading for research into three different kinds: 

  • Exploratory reading, mostly done in the initial phase;
  • Deep reading of highly informative sources; and 
  • Broad, targeted skim reading of large collections of books and articles, in order to find specific kinds of information you already know exist.

1.1. Exploratory reading

Initially, a research student will need to read widely in a new field to gain fundamental understanding. In this early stage, the goal is to explore and digest the main ideas in existing research. Traditionally, this phase has been a manual process, but there is a new generation of digital tools to aid in getting a quick overview of your field, and more generally to organise your research. This stage can happen both before and after the research topic or question has been formulated. It is often unstructured and full of serendipitous (“happy accidental”) discovery — the student’s job is to absorb what they find, rather than to conduct a targeted search for particular information.

Put another way: You don’t know what you’re looking for ahead of time. By the end of this phase, you should be able to sketch a rough map of your field of study.

1.2. Narrow, deep reading

After the exploratory reading phase, you will be able to prioritise the information you read. Now comes the second phase: Deep, reflective reading. In this phase, your focus will narrow to a small number of highly relevant sources — perhaps one or two books, or a handful of articles — which you will read carefully, with the goal of fully understanding important concepts. This is a deliberative style of reading, often accompanied by reflective pauses and significant note taking. If the goal in the first phase was sketching a map of the globe, the goal in this second phase is to decide which cities interest you most, and map them out in colour and detail.

1.3. Broad, targeted reading

You have now sketched a map of your field of study (exploratory reading), and filled in some parts of this map in more detail (narrow, deep reading). I will assume that by this point, you have found a thesis question or research topic, either on your own, or with the help of an advisor. This is often where the literature review begins in earnest. In order to coherently summarise the state of your field, you must review the literature once again, but this time in a more targeted way: You are searching for particular pieces of information that either illustrate existing work, or demonstrate a need for the new approach you will take in your dissertation. For example, 

  • You want to find all “methodology” sections in a group of academic articles, and filter for those that have certain key concepts;
  • You want to find all paragraphs that discuss product-market fit, inside a group of academic articles.

To return to the map analogy: This is like sketching in the important roads between your favourite cities — you are showing connections between the most important concepts in your field, through targeted information search.


2. Drawbacks of broad targeted reading

The third phase — broad, targeted reading, where you know what kind of information you’re looking for and simply wish to scan a collection of articles or books to find it — is often the most mechanical and time-consuming one. Since human brains tend to lose focus in the face of dull repetition, this is also a tedious and error-prone phase for many people. What if you miss something important because you’re on autopilot? Often, students end up speed- or skim-reading through large volumes of information to complete the literature review as quickly as possible. With focus and training, this manual approach can be efficient and effective, but it can also mean reduced attention to detail and missed opportunities to discover relevant information.

Only half paying attention during this phase can also lead to accidental plagiarism, otherwise known as cryptomnesia: Your brain subconsciously stores a distinctive idea or quote from the existing literature without consciously attributing it to its source reference. Afterwards, you end up falsely, but sincerely, believing you created the idea independently, exposing yourself to plagiarism accusations.

3. Existing solutions to speed up literature reviews

Given the drawbacks of manual speed- or skim-reading in the broad reading phase, it’s natural to turn to computer-driven solutions. One popular option is to systematically create a list of search term keywords or key phrases, which can then be combined using boolean operators to broaden results. For example, in researching a study about teenage obesity, one might use the query:

  • (“BMI” OR “obesity”) AND “adolescents” NOT “geriatric”,

to filter for obesity-related articles that do mention adolescents, but don’t mention older adults.

Constructing such lists can help surface many relevant articles, but there are some disadvantages to this strategy:

  • These keyword queries are themselves fiddly and time-consuming to create.
  • Often what you want to find is whole “chunks” of text — paragraphs or sections, for example — not just keywords.
  • Even once you have finished creating your boolean keyword query list, how do you know you haven’t forgotten to include an important search query?

This last point reflects the fact that keyword searching is “fragile” and error-prone: You can miss results that would be relevant — this is known as getting “false negatives” — because your query uses words that are similar, but not identical to words appearing in one or more articles in the library database. For example, the query “sporting excellence” would not match with an article that mentioned only “high performance athletics”.
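This fragility is easy to demonstrate with a toy boolean filter (the `matches` helper below is illustrative): the obesity query finds a relevant abstract, while an exact-phrase query produces exactly the kind of false negative described above.

```python
# A toy case-insensitive boolean keyword filter:
# (any one of any_of) AND (all of all_of) AND NOT (any of none_of).

def matches(text: str, any_of=(), all_of=(), none_of=()) -> bool:
    """Return True if text satisfies the boolean keyword query."""
    t = text.lower()
    return (
        (not any_of or any(k.lower() in t for k in any_of))
        and all(k.lower() in t for k in all_of)
        and not any(k.lower() in t for k in none_of)
    )

# Hit: the obesity query from the example above matches this abstract.
abstract = "We study obesity interventions among adolescents in urban schools."
print(matches(abstract, any_of=["BMI", "obesity"],
              all_of=["adolescents"], none_of=["geriatric"]))

# False negative: a relevant article the phrase query silently misses.
article = "Training load and recovery in high performance athletics."
print(matches(article, all_of=["sporting excellence"]))
```

The second query fails even though the article is clearly about sporting excellence, because the literal phrase never appears in the text.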

4. Lateral — a new solution

To make the process of finding specific information in big collections of documents — for example, in a literature review search — quicker and easier, we created the Lateral app, a new kind of AI-driven interface to help you organise, search through and save supporting quotes and information from collections of articles. Using techniques from natural language processing, it understands, out of the box, not only that “sporting excellence” and “high-performance athletics” are very similar phrases, but also that two paragraphs discussing these topics in slightly different language are likely related. Moreover, it also learns to find specific blocks of information, given only a few examples. Want to find all “methodology” sections in a group of articles? Check. How about all paragraphs that mention pharmaceutical applications? We have you covered. If you’re interested, you can sign up today.
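The idea behind that kind of semantic matching can be sketched with cosine similarity over vector representations. To be clear, this is not Lateral's implementation: real systems use learned embedding models, and the tiny three-dimensional vectors below are invented purely for illustration.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hand-made stand-ins for real embeddings (dimensions are arbitrary).
EMBED = {
    "sporting excellence": (0.9, 0.8, 0.0),
    "high performance athletics": (0.95, 0.7, 0.1),
    "geriatric medicine": (0.0, 0.1, 0.9),
}

# Rank all phrases by similarity to the query phrase.
query = EMBED["sporting excellence"]
ranked = sorted(EMBED, key=lambda p: cosine(EMBED[p], query), reverse=True)
print(ranked)
```

Because similarity is computed on vectors rather than on literal strings, the athletics phrase still ranks far above the unrelated one even though no keyword is shared with the query.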

5. Final note — novel research alongside the literature review

Some students, to be more efficient, use the literature review process to collect data not just to summarise existing work, but also to support one or more novel theses contained in their research topic. After all, you are reading the literature anyway, so why not take the opportunity to note, for example, relevant facts, quotes and supporting evidence for your thesis? Because Lateral is designed to learn from whatever kind of information you’re seeking, this process also fits naturally into the software’s workflow.

References:

  • Is your brain asleep on the job?: https://www.psychologytoday.com/us/blog/prime-your-gray-cells/201107/is-your-brain-asleep-the-job
  • Tim Ferriss speed reading: https://www.youtube.com/watch?v=ZwEquW_Yij0
  • Five biggest reading mistakes: https://www.timeshighereducation.com/blog/five-biggest-reading-mistakes-and-how-avoid-them
  • Skim reading can be bad: https://www.inc.com/jeff-steen/why-summaries-skim-reading-might-be-hurting-your-bottom-line.html
  • Cryptomnesia: https://en.wikipedia.org/wiki/Cryptomnesia
  • Systematic literature review with boolean keywords: https://libguides.library.cqu.edu.au/c.php?g=842872&p=6024187

Lit review youtube intro: https://www.youtube.com/watch?v=bNIG4qLuhJA


Swiss School of Business Research

  • How to Ethically Write Your Literature Review Using AI: A Comprehensive Guide

by Laura Perez | Mar 17, 2024


This traditional method, while thorough, was fraught with inefficiencies and obstacles. The most poignant of these was perhaps the sheer volume of time consumed in the pursuit of academic rigour, a precious resource that could scarcely be afforded by many. Moreover, the manual nature of the process introduced a higher risk of oversight and error, potentially compromising the precision and accuracy so vital to scholarly work.

Fast forward to the present, and the landscape of academic research has been transformed irrevocably by the advent of artificial intelligence (AI). What once took months, or even years, can now be accomplished in a matter of weeks, thanks to the automation capabilities of AI technologies. The literature review process, in particular, has been revolutionised. AI-powered tools such as ChatGPT have not only streamlined the discovery and analysis of relevant literature but have also enhanced the accuracy and precision of these reviews. No longer must researchers endure the tedium of sifting through irrelevant papers or the monotony of writing reviews of others’ work. Instead, AI enables a focus on the essence of research—generating insights and contributing to the body of knowledge with efficiency and ease.

This seismic shift has not only expedited the research process but has also opened new avenues for exploration, allowing scholars to delve deeper into their subjects with the support of AI’s vast computational power and intelligent analysis. The result? A literature review that is not only quicker to compile but richer in insight and more robust in its scholarly contribution. As we stand on the threshold of this new era in academic research, it’s clear that AI has not just changed the game—it’s redefined it.

This guide is crafted to delineate how one can ethically utilise ChatGPT to automate aspects of literature review, ensuring that the process remains within the ethical boundaries set by academia.

Embarking on Your Literature Review with a Clear Question

The foundation of a solid literature review is a well-crafted research question. This query should strike a balance, being narrow enough to be manageable, yet broad enough to encompass the complexity of your topic. ChatGPT can play a pivotal role here, aiding in refining your ideas into a polished research question by offering suggestions based on the existing body of knowledge and identified research gaps.

Creating a Structured Outline with ChatGPT’s Help

With your research question in hand, the next step is to draft a detailed outline for your literature review, and here’s where ChatGPT comes into play:

  • Identifying Key Themes : ChatGPT can assist in brainstorming relevant themes and subtopics, ensuring your review covers all necessary ground.
  • Organising the Structure : The AI can recommend a logical flow for these themes, enhancing the coherence of your review.
  • Refining Sections : ChatGPT can further refine your outline, suggesting where to expand or narrow down sections to align with your research question.

Efficient Research with Google Scholar

With an outline ready, Google Scholar becomes your go-to for sourcing high-quality academic papers, articles, and books. Keywords extracted from your outline, facilitated by ChatGPT, direct your searches, while abstracts give quick insights into each document’s relevance.

Streamlining References with Zotero

Zotero, a free reference management tool, is invaluable for keeping track of your sources. It simplifies citation and reference organisation, crucial for maintaining academic integrity. Zotero integrates seamlessly into your research workflow, from browser extensions for easy source collection to facilitating in-text citations and bibliographies in your document.

Building a Compelling Argument with the PIER System

To structure your literature review effectively, consider the PIER system:

  • Point : Start sections with a statement that addresses your research question.
  • Integration of Evidence : Use evidence from your sources to back up each point.
  • Examples for Support : Illustrate your arguments with specific findings from your research.
  • Recap : Conclude sections with summaries that tie back to your main question.

Refining Your Draft with ChatGPT

ChatGPT becomes a valuable asset again in the drafting phase, offering initial feedback on argument strength, coherence, and areas needing more evidence. Critically evaluate ChatGPT’s suggestions to ensure they align with your research objectives.

Ethical AI Use: A Success Story

One student’s experience highlights the transformative potential of ethically using ChatGPT for literature reviews. By leveraging AI, the student not only expedited their review process but also enriched the analytical depth of their work, resulting in significant academic improvement.

In Conclusion

The ethical employment of AI in academic research, especially in the drafting of literature reviews, marries the efficiency of technology with the researcher’s critical thinking skills. Following this guide ensures you can benefit from AI like ChatGPT, from formulating research questions to refining your final draft, all while adhering to the highest ethical standards.


Duke University Libraries

Literature Reviews

  • Artificial intelligence (AI) tools
  • Getting started
  • Types of reviews
  • 1. Define your research question
  • 2. Plan your search
  • 3. Search the literature
  • 4. Organize your results
  • 5. Synthesize your findings
  • 6. Write the review

Introduction to AI

Tools covered: Research Rabbit, Elicit, Consensus, Perplexity, and Copilot (powered by ChatGPT-4).

  • Thompson Writing Studio
  • Need to write a systematic review?



Generative AI tools have been receiving a lot of attention lately because they can create content like text, images, and music. These tools employ machine learning algorithms that can produce unique and sometimes unexpected results. Generative AI has opened up exciting possibilities in different fields, such as language models like GPT and image generators.

However, students need to approach these tools with awareness and responsibility. Here are some key points to consider:

Novelty and Creativity : Generative AI tools can produce content that is both innovative and unexpected. They allow users to explore new ideas, generate unique artworks, and even compose original music. This novelty is one of their most exciting aspects.

Ethical Considerations : While generative AI offers creative potential, it also raises ethical questions. Students should be aware of potential biases, unintended consequences, and the impact of their generated content. Responsible use involves considering the broader implications.

Academic Integrity : When using generative AI tools for academic purposes, students should consult their instructors. Policies regarding the use of AI-generated content may vary across institutions. Always seek guidance to ensure compliance with academic integrity standards.

In summary, generative AI tools are powerful and fascinating, but students should approach them thoughtfully, seek guidance, and adhere to institutional policies. Please refer to the Duke Community Standard  for questions related to ethical AI use.



Research Rabbit is a literature mapping tool that takes one paper and performs backward- and forward citation searching in addition to recommending "similar work." It scans the Web for publicly available content to build its "database" of work.

Best suited for...

Disciplines whose literature is primarily published in academic journals.

Considerations

  • Integrates with Zotero
  • Works mostly with just journal articles
  • Potential for bias in citation searching/mapping

»   researchrabbit.ai   «


What is it?

Elicit is a tool that semi-automates time-intensive research processes, such as summarizing papers , extracting data , and synthesizing information . Elicit pulls academic literature from Semantic Scholar , an academic search engine that also uses machine learning to summarize information.

Empirical research (e.g., the sciences, especially biomedicine).

  • Both free and paid versions
  • Doesn't work well in identifying facts or in theoretical/non-empirical research (e.g., the humanities)
  • Potential biases in the natural language processing (NLP) algorithms
  • Summarized information and extracted data will still need to be critically analyzed and verified for accuracy by the user

»   elicit.com   «


Think of Consensus as ChatGPT for research! Consensus is "an AI-powered search engine designed to take in research questions, find relevant insights within research papers, and synthesize the results using the power of large language models" ( Consensus.app ).  Consensus runs its language model over its entire body of scientific literature (which is sourced from Semantic Scholar ) and extracts the “key takeaway” from every paper.

The social sciences and sciences (non-theoretical disciplines).

  • Free and paid versions
  • Similar to Elicit, Consensus should not be used to ask questions about basic facts
  • Consensus recommends that you ask questions related to research that has already been conducted by scientists
  • Potential for biases in the input data from participants

»   consensus.app   «


Dubbed the "AI-powered Swiss Army Knife for information discovery," Perplexity is used for answering questions (including basic facts, a function that many other AI tools are not adept at doing), exploring topics in depth utilizing Microsoft's Copilot, organizing your research into a library, and interacting with your data (including asking questions about your files).

Perplexity has wide-reaching applications and could be useful across disciplines.

  • Free and paid pro versions (the pro version utilizes Microsoft's Copilot AI tool)
  • Available in desktop, iOS, and Android apps
  • See  Perplexity's blog for more info
  • Your personal information and data on how you use the tool are stored for analytical purposes (however, this feature can be turned off in settings)
  • Features a browser plug-in, Perplexity Companion , that is essentially a blend of Google and ChatGPT

»   perplexity.ai   «

Did you know that as Duke faculty, staff, and students, we have free access to ChatGPT4 via Microsoft Copilot ?

Log in with your Duke credentials to start using it today.


The OG of generative AI tools, ChatGPT-4 is the latest iteration of the popular chatbot, answering questions and generating text that sounds like it was written by a human. While not a replacement for conducting research, it can be helpful when it comes to brainstorming topics or research questions and also as a writing tool (rewriting or paraphrasing content, assessing tone, etc.).

All users across all disciplines.

  • ChatGPT-3.5 is the default version for free and paid-tier chat users.
  • Since it can't verify its sources, be wary of hallucinations (or made-up citations) that can look very real.
  • It is not 100% accurate! While ChatGPT-4 is touted as being 40% more accurate than its predecessor, users are still expected to verify the information it generates.
  • There is always the potential for bias since ChatGPT was trained on a massive dataset of websites, articles, books, etc. (much of which is inherently biased since it was created by humans).

For ChatGPT-4 (access provided by Duke and requires login) »   copilot.microsoft.com   «

For ChatGPT-3.5 (free) »   chat.openai.com   «

  • Last Updated: May 17, 2024 8:42 AM
  • URL: https://guides.library.duke.edu/litreviews

Duke University Libraries


Semantic Scholar is a free, AI-powered research tool for scientific literature, based at the Allen Institute for AI.

AI for literature reviews

Let AI Assist boost your literature review and analysis

How to use AI Assist for your literature review

  • Step one: Identify and import your literature
  • Step two: Summarize your documents with AI Assist
  • Step three: Determine relevance and sort accordingly
  • Step four: Reading and rough coding
  • Step five: Confirm your initial codings
  • Step six: Refine your code system
  • Step seven: Analyze your literature

Literature about literature reviews and analysis

Tuesday, September 19, 2023

AI for Literature Reviews MAXQDA

As you may have noticed, there has been rapid growth in AI-based tools across all types of software packages. We followed this trend by releasing AI Assist – your virtual research assistant that simplifies your qualitative data analysis. In the following, we present the tools and functions of AI Assist and how they can facilitate your literature reviews.

Literature reviews are an important step in the data analysis journey of many research projects, but often it is a time-consuming and arduous affair. Whether you are reviewing literature for writing a meta-analysis or for the background section of your thesis, work with MAXQDA! Besides the classic tools of MAXQDA that can facilitate each phase of your literature review, the new tool AI Assist can boost your literature review and analysis in multiple ways.

Year by year, the number of publications grows in almost every field of research – and with it, our insights and knowledge. The drawback is that the number of publications can become too high to keep track of recent developments in your field. Consequently, conducting a proper literature review becomes more and more difficult, and the importance of quickly identifying whether a publication is relevant to your research question constantly increases.

Luckily, MAXQDA’s AI Assist tool is here to help. Among other things, it can summarize your documents, text segments, and coded segments. But there is more – based on your coded segments, AI Assist can generate subcode suggestions. In the following, we present step-by-step instructions on how to use MAXQDA for your literature review and analysis, with a special focus on how AI Assist can support you.

Step one of AI for literature reviews: Identify and import your literature

Although MAXQDA and AI Assist can facilitate your literature review and analysis in manifold ways, the best advice is to carefully plan your literature review and analysis. Think about the purpose of your literature review and the questions you want to answer. Develop a search strategy, which includes (but is not limited to) deciding on literature databases, search terms, and practical and methodological criteria for selecting high-quality scientific literature. Then start your literature review and analysis by searching the identified databases. Before downloading the PDFs and/or bibliographic information (RIS), briefly scan the search results for relevance by reading the title, keywords, and abstract. If you find the publication interesting, download the PDF and let AI Assist help you determine whether the publication falls within the narrower area of your research question.

MAXQDA’s import tab offers import options dedicated to different data types, such as bibliographic data (in RIS file format) and PDF documents. To import the selected literature, just click on the corresponding button, select the data you want to import, and click OK. Alternatively, you can import data simply by dragging and dropping the files from your Windows Explorer/Mac Finder window. If you import full texts and the corresponding bibliographic data, MAXQDA automatically connects the full text to the literature entry with an internal link.

Step two of AI for literature reviews: Summarize your documents with AI Assist

Now that you have imported all publications that might be interesting for your research question, it is time to explore whether they are indeed relevant for your literature review and analysis. Before the release of AI Assist, this step typically took a lot of time as you had to go through each paper individually. With the release of AI Assist, MAXQDA can accelerate this step with AI-generated summaries of your publications. For example, you can create AI-generated summaries either for the entire publication or for each chapter (e.g. Introduction, Methods, Results, and so on) individually and base your decision about a paper’s relevance on these summaries. Each AI-generated summary is stored in a memo that is attached to the underlying document or text segment, respectively.

Summarizing text segments with AI Assist just takes a few clicks. Simply highlight a text segment in the Document Browser and choose AI Assist from the context menu. Adjust the settings to your needs and let OpenAI do the work for you. To view and edit the summary, double-click on the yellow memo icon attached to the summarized text passage.


Adjust settings for summarizing text with AI Assist for literature reviews

Step three of AI for literature reviews: Determine relevance and sort accordingly

Instead of reading the entire paper, you can use the AI-generated summaries to determine whether a publication falls within the narrower area of your research question. To do so, it might be helpful to view all memos containing summaries of a specific publication at once. Of course, this is possible with MAXQDA. Go to the Memo tab, click on (In-)document Memos, and click on the publication’s name to view only the AI-generated summaries related to this document. It is important to note that AI-generated summaries are not perfect yet. Therefore, it is advisable to read the entire paper in cases where you have doubts or can’t decide whether the publication is relevant.

Depending on the number of publications in your MAXQDA project, you might want to sort your documents in document groups, for example, based on the relevance for your research question or the topics discussed in the paper. You can easily create a new Document group by clicking on the respective icon in the Document System window. Documents can be added simply via drag-and-drop. Alternatively, you can create Document Sets which are especially helpful when you want to sort your documents by more than one domain (e.g. by relevance and methodology used).


Sort documents in document groups according to their relevance using AI for literature reviews

Step four of AI for literature reviews: Reading and rough coding

Now that you have identified the publications important to your project, it is time to go through the documents. Although AI Assist can support you at multiple stages of your literature review, it can’t replace the researcher. You still need a deep understanding of your material, your analysis methods, and the software you use for analysis. As AI-generated summaries are not perfect yet, you might want to improve the summaries where necessary, or add information that you consider especially important, e.g. participants’ demographics.

As a next step, it is time to create and apply some codes to the data. A code can be described as a label used to name phenomena in a text or an image. Depending on your approach, you might already have codes in mind (deductive coding) or you might plan to generate codes on the basis of the data (inductive coding). No matter your approach – you can use MAXQDA’s advanced tools for coding. In many cases it is best to start your first round of coding with rather rough codes that you can refine in a later step with the help of AI Assist. You can create codes in the Code System window by clicking on the plus icon, or in the Document Browser by highlighting a text segment and using the context menu or the corresponding icons. A code can be applied to the data via drag-and-drop.


Reading and rough coding for AI for literature reviews

Step five of AI for literature reviews: Confirm your initial codings

Though AI Assist can’t validate your codings like a second researcher using intercoder agreement, AI Assist’s Code Summaries can help you identify whether you have applied a code as intended. The AI-generated Code Summary is a summary of the content of all text segments coded with the corresponding code. This summary might give you an idea of how you have applied the code and whether the coded text segments indeed contain what you had in mind when creating the code.

To create a summary of coded segments with AI Assist, simply right-click the code of interest in the Code System and choose AI Assist > Code Summary from the context menu. Adjust language and the summary length to your needs and let AI Assist do the summary for you. As for document summaries, the summary will be stored in a memo which is placed next to the code in the Code System. If the summary doesn’t match your code definition, you might want to review the coded segments and adjust your codings accordingly. By double-clicking on a code, you open the Overview of Coded Segments – a table perfectly suited to go through the coded segments and adjust or remove the codings.


Confirm your initial codings with AI Assist’s Code Summary for literature reviews

Step six of AI for literature reviews: Refine your code system

In case you have applied rather rough codes to your data, your code definitions are probably too broad for you to make sense of the data. Depending on your goals, you might wish to refine these rather broad codes into more precise sub-codes. Again, you can use AI Assist’s power to support this step of your literature review. AI Assist analyzes the text and suggests subcodes while leaving the decision on whether you want to create the suggested sub-codes up to you.

To create AI-generated subcode suggestions, open the context menu of a code and choose AI Assist > Suggest Subcodes. Besides selecting a language, you can ask AI Assist to include examples for each subcode as a bullet list. Like the AI-generated summaries, the code suggestions are stored in the code’s memo. If you are satisfied with the code suggestions, you can create and apply them to your data. Alternatively, you can use the AI-generated code suggestions to confirm the subcodes that you have created.


Use AI Assist’s Suggest Subcodes function to refine your code system for your literature reviews

Step seven of AI for literature reviews: Analyze your literature

Now that you have coded your literature, it’s time to analyze the material with MAXQDA. Although you can use plenty of MAXQDA’s tools and functions even when the material is not coded, other tools require coded segments. MAXQDA offers more tools for qualitative data analysis than we can mention here. Among others, MAXQDA’s Overview and Summary Tables are useful for aggregating your data. With MAXQDA’s Visualization Tools you can quickly and easily create stunning visualizations of your data, and with MAXQDA’s Questions-Themes-Theories tool you have a place to synthesize your results and write up a literature review or report.

You can find more information and ideas for conducting a literature review with MAXQDA here:

Learn more about literature reviews

For information about AI Assist and how to activate AI Assist, visit:

Learn more about AI Assist

We offer a variety of free learning materials to help you get started with your literature review. Check out our Getting Started Guide to get a quick overview of MAXQDA and step-by-step instructions on setting up your software and creating your first project with your brand new QDA software. In addition, the free Literature Reviews Guide explains how to conduct a literature review with MAXQDA in more detail.

Getting Started with MAXQDA

Literature Reviews with MAXQDA



AI-assisted writing is quietly booming in academic journals. Here’s why that’s OK


Lecturer in Bioethics, Monash University & Honorary fellow, Melbourne Law School, Monash University

Disclosure statement

Julian Koplin does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.


If you search Google Scholar for the phrase “as an AI language model”, you’ll find plenty of AI research literature and also some rather suspicious results. For example, one paper on agricultural technology says:

As an AI language model, I don’t have direct access to current research articles or studies. However, I can provide you with an overview of some recent trends and advancements …

Obvious gaffes like this aren’t the only signs that researchers are increasingly turning to generative AI tools when writing up their research. A recent study examined the frequency of certain words in academic writing (such as “commendable”, “meticulously” and “intricate”), and found they became far more common after the launch of ChatGPT – so much so that 1% of all journal articles published in 2023 may have contained AI-generated text.

(Why do AI models overuse these words? There is speculation it’s because they are more common in English as spoken in Nigeria, where key elements of model training often occur.)
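As a rough illustration of how such word-frequency studies work, here is a toy sketch; the abstracts below are invented for demonstration and the marker-word list is just the three examples mentioned above, not the study's actual methodology:

```python
# Toy sketch: measure how often suspected "AI marker" words appear in
# abstracts from different years. All abstracts here are made up.
import re

MARKER_WORDS = {"commendable", "meticulously", "intricate"}

abstracts = {
    2021: ["We measure soil moisture across forest plots.",
           "A simple model of seed dispersal is proposed."],
    2023: ["This commendable study meticulously examines the intricate dynamics.",
           "We meticulously document an intricate network of interactions."],
}

def marker_rate(texts):
    """Fraction of all words that are marker words."""
    words = [w for t in texts for w in re.findall(r"[a-z]+", t.lower())]
    return sum(w in MARKER_WORDS for w in words) / len(words)

rates = {year: marker_rate(texts) for year, texts in abstracts.items()}
# A jump in the rate after 2022 is the kind of signal the study reports.
```

Applied to millions of real abstracts, a sudden post-ChatGPT jump in rates like these is what led the researchers to their 1% estimate.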

The aforementioned study also looks at preliminary data from 2024, which indicates that AI writing assistance is only becoming more common. Is this a crisis for modern scholarship, or a boon for academic productivity?

Who should take credit for AI writing?

Many people are worried by the use of AI in academic papers. Indeed, the practice has been described as “contaminating” scholarly literature.

Some argue that using AI output amounts to plagiarism. If your ideas are copy-pasted from ChatGPT, it is questionable whether you really deserve credit for them.

But there are important differences between “plagiarising” text authored by humans and text authored by AI. Those who plagiarise humans’ work receive credit for ideas that ought to have gone to the original author.

By contrast, it is debatable whether AI systems like ChatGPT can have ideas, let alone deserve credit for them. An AI tool is more like your phone’s autocomplete function than a human researcher.

The question of bias

Another worry is that AI outputs might be biased in ways that could seep into the scholarly record. Infamously, older language models tended to portray people who are female, black and/or gay in distinctly unflattering ways, compared with people who are male, white and/or straight.

This kind of bias is less pronounced in the current version of ChatGPT.

However, other studies have found a different kind of bias in ChatGPT and other large language models: a tendency to reflect a left-liberal political ideology.

Any such bias could subtly distort scholarly writing produced using these tools.

The hallucination problem

The most serious worry relates to a well-known limitation of generative AI systems: that they often make serious mistakes.

For example, when I asked ChatGPT-4 to generate an ASCII image of a mushroom, it provided me with the following output.

It then confidently told me I could use this image of a “mushroom” for my own purposes.

These kinds of overconfident mistakes have been referred to as “AI hallucinations” and “AI bullshit”. While it is easy to spot that the above ASCII image looks nothing like a mushroom (and quite a bit like a snail), it may be much harder to identify any mistakes ChatGPT makes when surveying scientific literature or describing the state of a philosophical debate.

Unlike (most) humans, AI systems are fundamentally unconcerned with the truth of what they say. If used carelessly, their hallucinations could corrupt the scholarly record.

Should AI-produced text be banned?

One response to the rise of text generators has been to ban them outright. For example, Science – one of the world’s most influential academic journals – disallows any use of AI-generated text.

I see two problems with this approach.

The first problem is a practical one: current tools for detecting AI-generated text are highly unreliable. This includes the detector created by ChatGPT’s own developers, which was taken offline after it was found to have only a 26% accuracy rate (and a 9% false positive rate). Humans also make mistakes when assessing whether something was written by AI.

It is also possible to circumvent AI text detectors. Online communities are actively exploring how to prompt ChatGPT in ways that allow the user to evade detection. Human users can also superficially rewrite AI outputs, effectively scrubbing away the traces of AI (like its overuse of the words “commendable”, “meticulously” and “intricate”).

The second problem is that banning generative AI outright prevents us from realising these technologies’ benefits. Used well, generative AI can boost academic productivity by streamlining the writing process. In this way, it could help further human knowledge. Ideally, we should try to reap these benefits while avoiding the problems.

The problem is poor quality control, not AI

The most serious problem with AI is the risk of introducing unnoticed errors, leading to sloppy scholarship. Instead of banning AI, we should try to ensure that mistaken, implausible or biased claims cannot make it onto the academic record.

After all, humans can also produce writing with serious errors, and mechanisms such as peer review often fail to prevent its publication.

We need to get better at ensuring academic papers are free from serious mistakes, regardless of whether these mistakes are caused by careless use of AI or sloppy human scholarship. Not only is this more achievable than policing AI usage, it will improve the standards of academic research as a whole.

This would be (as ChatGPT might say) a commendable and meticulously intricate solution.



Review: Consensus AI – An Intelligent Literature Review Companion

  • by Ilya Shabanov
  • May 18, 2024
  • AI, literature review, tool-review

The way we extract information for literature reviews has completely changed in the age of AI. While reading and understanding are still essential, finding the correct information quickly is accelerated by tools like Consensus. At its core, Consensus is a search engine with AI built within and around it to speed up your research. It scans millions of papers to find the ones that match your query and sorts them by whether they support or reject your argument, or employ specific methods. This makes searching for the right paper faster, more enjoyable, and effortless.


In this review, we will break down the main use cases, limitations, and features of Consensus in detail. First, we will look at what Consensus is and what it does, then dive deep into every feature, and finish with an even more innovative use case: ConsensusGPT, which combines ChatGPT’s conversational capabilities with Consensus’ expansive database.

  • What problem does Consensus solve?
  • What does Consensus AI do?
  • The Consensus meter for a quick overview
  • Synthesize and CoPilot boxes to grasp a topic
  • Badges/Summaries to sort single studies
  • Filtering options in Consensus

  • ConsensusGPT: Consensus as a ChatGPT plugin
  • What is the difference between ChatGPT and Consensus?

  • Academic writing use-case: Asking Consensus for evidence
  • Academic writing use-case: Filling in Citations

How to use Consensus AI?

Like all AI tools, Consensus is as simple as typing a question into the textbox. You can start using it for free and, if you need it, get the premium membership, which allows you to use AI to summarize findings and systematically analyze single papers (more on pricing in the last section of this post, plus a 30% off discount code!). However, it is crucial to understand this technology’s limitations and to know which results you can fully trust and which you need to double-check.


To use Consensus you will require a free account. The results have four main elements: Summary, Consensus Meter, Copilot, and about ten papers. If you are not a premium member, you will always see the papers, but the other elements are limited to a certain number of monthly searches.


Compare the result in the screenshot above with the Google search for yourself (“Does climate change cause range shifts in forests?” ). Despite being a big and somewhat generalizing question, I got a concise reply. The devil is, of course, in the detail with every scientific question, but you see that finding tentative answers to complex questions becomes increasingly simple. If I am brainstorming parts of my research, I might ask dozens of questions like this, and understanding what works in seconds rather than hours makes the research process smooth.

At the end of your search you will find a number of studies sorted by relevance; clicking on each of these brings up the detailed study view:


This view brings together all the different pieces of information Consensus has on this study, including metadata, summaries, quality indicators, citation counts, and a link to the publication itself.

Can you trust what Consensus AI tells you?

This is a general debate about how much we should trust AI. In my opinion, it is no different from asking a human. If you ask a five-year-old a serious question, you will certainly not bet on the correctness of the answer. If the kid is 16 and their answer is surprising but not unrealistic, you might trust but double-check. If a professor with 40 years of experience answers a question in their domain, the answer is as firm as it gets. The problem with AI is that we don’t know which of these personas we are talking to.

My rule of thumb is to consider how much data has been published on a topic. If I am asking a question that has been debated for decades, there is likely a large amount of data available, and I tend to trust AI (like in the example question above). The more recent and specialized the topic you are interested in, the less you should trust any AI on it. Luckily, Consensus sometimes spots these cases:


Overall, AI is improving very fast, and the training datasets are growing daily. Don’t hesitate to be specific, and treat Consensus as a new colleague: you know they are knowledgeable, but you haven’t talked to them much yet. Get to know one another.

Questions Consensus AI works best with

To get the most out of Consensus, ask supported question types:

✅ Yes/No
✅ Relationship (e.g. “Is sauna beneficial for blood pressure?”)
✅ Benefits of… (e.g. “sauna”)

Open-ended questions or ones that require numbers or explanations MAY or MAY NOT work. This is generally the case with AI: questions requiring logical reasoning rarely produce excellent results (at least as of early 2024). Bad example: “How much climate change can European forests tolerate?” While you will almost always get a reply from the AI, the value diminishes the more you deviate from questions AI can answer. This is not a particular limitation of Consensus; it applies to large language models generally, since they deal with facts as language rather than as logically connected entities.


You can check out the much more detailed guide on the Consensus FAQ page. The best way to find out, however, is to just try it out for yourself.

Domains Consensus AI works best with

According to the Consensus website, the best questions are in the medical and social policy domains. I have been using it for my own research (forest ecology), and it has always come up with good studies and answers, even though ecology is not on that list. It is best to try out a domain you already know and see whether the search finds the essential papers you know are central to it.


However, the interface is designed with medical studies in mind. For example, you can filter by human/animal studies or find randomized controlled trials (concepts that don’t exist in ecology). That said, the UI will likely change in the future as Consensus gets adopted and expanded across more domains.

Limitations of Consensus AI

The two most significant limitations of all AI tools are incompleteness and irreproducibility. Incompleteness means that the results you get are filtered by an AI model with a certain level of randomness. We don’t truly understand how this process works, so it might sometimes miss something that a human would deem necessary. This is especially true since we don’t know exactly which papers Consensus can access.

This brings us to the second issue: Irreproducibility, which is of particular concern when conducting a systematic literature review. Such a review involves documenting your paper search process, theoretically allowing for replication. However, if you were to state, for instance, ‘I searched for X on Consensus’, and someone attempted to replicate this a year later, their findings might differ from yours. This variation directly results from the stochastic nature inherent in all AI models.

Remember, searching with an AI is like asking somebody for directions to a restaurant. They will generally give the same reply to different people, but they will never use the exact same words, and will add or omit specific minor details.

The Consensus Meter for yes/no Questions

The Consensus Meter consists of three bars: Yes, No, and Possibly. The way it works is that Consensus first finds several papers related to your query, scans them, and then sorts each into one of the three categories. If most papers affirm your question, the Yes bar grows. This is why Consensus invites you to use yes/no questions – otherwise, the meter would make little sense. To use this feature, enable the “Synthesize” checkbox right below the search bar. You get 20 of these summaries for free each month and must purchase a premium account if you need more.
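Conceptually, the meter is just an aggregation of per-paper labels. Here is a minimal, illustrative sketch of that aggregation step; the label list is hypothetical, and the per-paper classification itself (which Consensus performs with a language model) is assumed as input:

```python
# Minimal sketch of the aggregation behind a yes/no "consensus meter".
# The per-paper labels would come from an AI classifier; here they are
# hypothetical inputs.
from collections import Counter

def consensus_meter(labels):
    """Convert per-paper labels into percentage bars."""
    counts = Counter(labels)
    return {k: round(100 * counts[k] / len(labels))
            for k in ("yes", "no", "possibly")}

labels = ["yes", "yes", "possibly", "yes", "no"]
meter = consensus_meter(labels)  # {'yes': 60, 'no': 20, 'possibly': 20}
```

With three of five hypothetical papers affirming the question, the Yes bar would show 60%.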


This feature is handy even when researching what supplements to take or what exercise to prioritize daily. In my experience, I tend to trust this reply over clickbait Google articles written primarily for SEO (search engine optimization) and “approved” by somebody with a doctoral degree, since their primary purpose is to rank high in Google search and only secondarily to provide scientifically correct output.

Synthesize and CoPilot for Topical Overview

The Synthesize (or summary) and Copilot features are more in-depth AI features that process the search results (i.e., the top 10 papers answering your question) and summarize them. The Synthesize feature creates a very short summary (~50 words), while Copilot creates a lengthy text of about 400 words with references to the search results below.

To use these features, activate the “Synthesize” and “Copilot” checkboxes right below the search bar in Consensus. Be aware that they both cost “AI credits”: 20 AI credits are free every month, Synthesize costs one credit per query, and Copilot costs two. If you are on a free account, I suggest you save these credits for your most relevant questions and use them selectively and judiciously.


The Copilot is 100% free of hallucinations as it can only use the papers retrieved from the database.

Customizing Copilot with queries

While Synthesize is a fixed feature, Copilot can be customized to your needs by adding instructions to it. Think of it as an upgraded ChatGPT that has access to the contents of all the papers Consensus found for your query. Essentially, this allows you to write a mini literature review. Here, I ask it to generate a bullet-point list of the main drivers of forest range shifts, and Copilot solves the task for me:


When we keep the question but change the prompt, the result changes. For example, here I ask it to draft an outline of a literature review. Notice how useful this can be when starting out with a topic and seeking general guidance:


However, Consensus is primarily a search engine, not a conversational AI. If you are learning a topic and want a back-and-forth conversation, use the Consensus GPT bot (section below) instead.

Badges and Study Snapshots to Identify Relevant Studies

Badges and snapshots allow users to investigate a single study of interest. This is where Consensus’s strengths for medical-type questions shine, as the snapshots and badges lean towards medical studies and feature options like “animal trials” or “randomized controlled trials.” To generate a study snapshot, click on “Study Snapshot” right below the paper.


In the medical example above, you immediately see who the test subjects were, how many, which methods were used, and many other details that would likely take a while to find in the paper’s methods section.

Badges follow a similar idea but highlight different important properties of studies. For example, some studies might have more citations, while others employ a particular method. Badges make it easier to find papers that stand out.


There are many different badges, and when you hover over them, a small popup explains what they mean.

Filtering Options in Consensus

Since Consensus is primarily a search engine, you can filter your results as on PubMed. The big difference is that Consensus filters results using semantic search and AI, which allows filtering by content rather than by words in the title or abstract.
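The difference between keyword and semantic filtering can be sketched with a toy example. Here, “semantic” matching is approximated with a hand-written synonym map; the real system uses AI embeddings, and every name below is illustrative, not Consensus's implementation:

```python
def keyword_filter(papers, term):
    # Classic PubMed-style filtering: literal match in the abstract.
    return [p for p in papers if term in p["abstract"].lower()]

def semantic_filter(papers, concept, synonyms):
    # Stand-in for semantic search: match the *concept*, not the
    # literal phrase, approximated here with a synonym map.
    words = {concept, *synonyms.get(concept, set())}
    return [p for p in papers
            if any(w in p["abstract"].lower() for w in words)]

papers = [
    {"title": "A", "abstract": "A randomized controlled trial in humans."},
    {"title": "B", "abstract": "An RCT on 200 adult participants."},
]
synonyms = {"randomized controlled trial": {"rct"}}

print(len(keyword_filter(papers, "randomized controlled trial")))            # 1
print(len(semantic_filter(papers, "randomized controlled trial", synonyms))) # 2
```

The keyword filter misses paper B because it never spells out the phrase; the concept-level filter catches both.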


Especially for medical questions, where sample size or human studies are essential distinguishing factors, this search is unique. I hope that one day Consensus will implement similar filters for my domain (ecology), for example by study location, taxa, or statistical methods.

Not too long ago, OpenAI released so-called GPTs: customized AI assistants built on ChatGPT that add the ability to communicate with other services (like Consensus). The result is an entire ecosystem of assistants for hundreds of use cases. Here are my top 5 GPTs for academic purposes. Consensus has been featured in the GPT store from day one and remains among the top GPTs, a testament to its usefulness.


To use ConsensusGPT, you will need a premium subscription to ChatGPT. Should you get one? I wrote a thread with my thoughts on this.

While you will find the same papers using ConsensusGPT or the website, you will generally approach them differently. A GPT is just ChatGPT with a few extra instructions and the ability to call other services like Consensus to ask, for example, for papers. So while Consensus provides the data and some instructions, ChatGPT generates the text.


This allows for powerful use cases, like finding evidence for questions and having long conversations where you alternate between accessing ChatGPT's knowledge and querying papers from Consensus. If you are curious, check out the Effortless Literature Review 3 webinar, where I relied heavily on this technique.

The downside is that you do not have access to the Consensus Meter or the contents of the papers, as you do on the website. Overall, this means you can do a broader but slightly shallower literature review. However, ConsensusGPT provides a link to the website, where you can continue with the features introduced above.

If you want to start building a team of AI helpers for your academic career, check out my tutorial on building GPTs or dive as deep as possible by taking the Effortless AI course .

Is ConsensusGPT free?

Yes, if you have a ChatGPT Premium subscription. It delivers more or less the same functionality as searching on the Consensus website (which is free, too). However, to get insights from inside individual papers, you will need to use the Consensus website and its AI to analyze the results in depth.

Academic writing use-case: Asking Consensus for Evidence

Below is a use case I rely on often when writing a paper. Usually, I know certain things are true because I have read them many times, but I don't have a specific published citation in mind to back them up. This is where ConsensusGPT helps. Use the straightforward prompt “Find evidence for …”. You can then use the results of several studies as references in your writing.


It goes without saying that you should always double-check the reply by at least skimming the paper. Keep in mind that AIs often do not have access to the entire PDF but only to the publicly available abstract, so their conclusions may be incomplete.

Consensus AI vs SciSpace

Both companies have similar capabilities and the same goal. While SciSpace shines at using AI to help you read papers and discover related work, Consensus extracts essential information from a collection of documents and displays the final result as a quick, easily digestible summary. Consensus particularly shines at yes/no questions in the medical domain, as its UI is optimized for them, as evidenced by the various filter options, the Consensus Meter, and badges. If you want to learn more about SciSpace, read my deep dive on SciSpace.

Consensus AI Pricing

Consensus works on a freemium model. Some features, including search results (papers) and badges, are free forever. These let you see which papers are relevant, but extracting the relevant insights with AI requires the premium plan. However, Consensus lets you try out the advanced features, Summary and Copilot, up to 20 times a month: they consume so-called AI credits, of which you get 20 free every month. Creating a summary uses one credit (two for Copilot).


Feel free to use the discount code Effortless30 for a 30% discount (I don't even get any commission from it, by the way). Students can purchase the premium account at a 40% discount, and the regular price is $9 per month.

Overall, Consensus is a novel way to discover literature that particularly shines when you ask it medical yes/no questions, as its UI has been designed with these papers in mind. It is equally useful in an academic context and for answering your own health-related questions.

  • It starts with a plain-text query and delivers a set of papers that help answer it.
  • If you ask a yes/no question, Consensus will analyze how each paper leans and display that as a small yes/no/possibly graph called the Consensus Meter.
  • If you are a premium member, AI will analyze the resulting papers and create an outline with citations in an academic fashion.
  • Consensus is available as a GPT in ChatGPT, which allows for unique use cases, as you can leverage ChatGPT's conversational capabilities and Consensus's database of studies.
  • The regular price is $9 per month, with a 40% student discount or 30% off with the “Effortless30” code. Searches are always free; the AI analysis is what you pay for. Twenty AI uses are free per month.

Consensus is featured as part of my literature review webinar, and you can get a recording here:

Workflow for the Literature Review webinar

Leverage semantic and citation search with AI to find the most impactful literature quickly. Uncover reference gaps by combining multiple tools. Use ChatGPT assistants to avoid hallucinations and let AI aid faithfully and ethically in your literature review and writing process.


How To Write A Research Proposal With AI

To write a research proposal using A.I., define your research topic and formulate straightforward, specific research questions; use A.I. to streamline the literature review, refine your methodology, and create a structured proposal outline; then draft your proposal from that outline.

Fredrick Eghosa


Apr 21, 2024


Key Takeaways

  • A research proposal provides a detailed plan for a research project. It presents your proposed study's objectives, methodology, and potential contribution.
  • A research proposal usually includes a title, abstract, literature review, research objectives, methodology, significance and contribution, timeline, budget, and references.
  • Writing a research proposal using A.I. involves defining your research topic and formulating straightforward, specific research questions. Use CoWriter to streamline literature reviews, refine methodologies, and create a structured proposal outline. Write your proposal and use CoWriter to correct errors and enhance clarity.


  • Use interrogative language. Begin questions with "What," "How," "Why," or "To what extent" to prompt investigation.
  • Avoid yes-or-no questions. Formulate questions that require detailed responses and deeper exploration.


  • Start by determining the scope of your review (e.g., timeframe, geographical focus, key concepts) to ensure relevance. Then, identify relevant keywords, synonyms, and phrases related to your research topic.
  • Explore library catalogs and repositories for books, theses, dissertations, and other publications related to your topic.
  • When reviewing the literature, applying filters (e.g., publication date, study design, language) will help you narrow search results and focus on high-quality sources.
  • Select literature that directly addresses your research questions or contributes significantly to understanding your topic.
  • Identify gaps or limitations in existing literature that highlight the need for your research.


  • Familiarize yourself with research designs such as experimental, observational, qualitative, quantitative, or mixed-methods.
  • Select the most appropriate research design based on your research questions, objectives, and available resources.
  • Discuss the advantages and limitations of the chosen design and explain why it is suitable for your study.
  • Describe the methods you will use to collect data (e.g., surveys, interviews, observations, and experiments).
  • Explain how you will develop or adapt data collection instruments (e.g., questionnaires, interview guides, and observation protocols).
  • Specify the target population and describe the sampling strategy (e.g., random, purposive, or convenience).
  • Discuss ethical issues regarding participant recruitment, data collection, and privacy protection.
  • Mention any software tools you will use for data analysis and explain why they are appropriate for your study.
  • Ensure that your methodology aligns with the research objectives and aims to answer the research questions effectively.
  • Provide a clear rationale for each methodological choice and explain how it contributes to achieving the study's goals.



Guiding Students to Develop AI Literacy

Teachers can use these resources to model responsible AI use and show students how to experiment responsibly with the technology.


Educators must stay informed, current, and ahead of the game (if possible) when it comes to emerging technologies. This means fostering the development of AI literacy skills. Being AI literate means more than knowing the basics about the technology or understanding relevant terms such as machine learning and algorithms. Literacy involves knowing the uses of AI in the world and being aware of the ethical considerations involved with its use. AI literacy should focus on developing skills to critically evaluate information and content generated by AI as well as to identify misinformation. Teachers should also know how to use AI in safe, ethical, and responsible ways and model this use for students.

Getting Started Teaching AI Literacy Skills

Introducing AI concepts at an earlier age gives students the opportunity to build their knowledge and skills over time. With younger students, start with the basics of AI, such as where we see it in daily life. Begin with key definitions, and find resources that have been vetted and are safe for student and educator use. Several organizations provide a wealth of resources for educators, including ready-to-run lessons.

Modeling AI use in the classroom: As we teach students about AI, we can engage them in activities where they evaluate images, text, or videos to decide whether they are AI-generated or real. I have used game-based learning tools such as Quizizz AI to quickly create a quiz that checks students' understanding of the concepts covered, or Eduaide.Ai to make a gamified activity or lesson related to AI literacy.

Student-directed exploration: Beyond some initial tools that offer options for assessing students and that help with personalized learning, it is important to make time for conversations and create opportunities for students to explore different types of AI. Whether engaging with chatbots, testing some forms of generative AI, or using other AI-powered tools, we want to help students develop their skills and be better informed in the increasingly digital and AI-powered world.

We have used a variety of resources from organizations such as MIT AI Literacy Units for K–12. Their RAISE (Responsible AI for Social Empowerment and Education) initiative provides resources for all grade levels that are specific to developing AI literacy skills.

All of these options provide a variety of AI-related learning experiences and resources for students. As they begin to use various tools or perhaps build their own chatbot, they will continue to build their AI literacy skills. These opportunities further enhance their understanding of how AI works, the importance of safeguarding personal information, and how to evaluate information that they receive.

Teaching about AI: Often as a start, I will show a video to provide an initial overview of what AI is, spark conversation, and ask students for their thoughts. One of the video series we use most frequently is Crash Course AI, which provides clear and engaging videos covering the key components of AI. For younger learners, I recommend the video series “What is AI: Artificial Intelligence Facts for Kids.” These videos are great starting points for learning about AI and sparking curiosity in students.

Nearpod also provides lessons on key terms related to AI, such as algorithms and machine learning, as well as lessons that consider bias. Within these interactive Nearpod lessons, educators can add in questions focused on checking student understanding of AI literacy.

Using SchoolAI gives students the opportunity to interact with a chatbot, speak with a famous historical figure, or interact with “Sidekick,” which is an AI assistant. Whether educators select from the Spaces available or create their own chatbot specific to their students, it helps students build their content knowledge and also think about how they are interacting with AI, which helps with AI literacy.

These online resources can help educators provide robust learning experiences for students, from basic knowledge of AI to understanding AI’s design principles to recognizing AI’s potential impact on society. Beyond the tools available, educators can design activities that engage students in discussions or debates, requiring them to apply the knowledge gained and also consider different perspectives.

Example activities could involve students engaging in debates on AI ethics, creating an AI-powered product and being able to explain its purpose, participating in simulations using AI tools to solve real-world problems, or developing basic AI models to grasp the technology’s potential and limitations. Student-driven experiences like these prepare students academically but also ethically, enabling them to navigate the AI-driven future responsibly.

AI literacy is critical because it empowers students with the knowledge and skills to navigate the growing AI landscape, which will better prepare them for future careers, many of which will require AI technologies.


May 13, 2024


AI-assisted writing is quietly booming in academic journals—here's why that's OK

by Julian Koplin, The Conversation


If you search Google Scholar for the phrase "as an AI language model," you'll find plenty of AI research literature and also some rather suspicious results. For example, one paper on agricultural technology says,

"As an AI language model, I don't have direct access to current research articles or studies. However, I can provide you with an overview of some recent trends and advancements …"

Obvious gaffes like this aren't the only signs that researchers are increasingly turning to generative AI tools when writing up their research. A recent study examined the frequency of certain words in academic writing (such as "commendable," "meticulously" and "intricate"), and found they became far more common after the launch of ChatGPT—so much so that 1% of all journal articles published in 2023 may have contained AI-generated text.

(Why do AI models overuse these words? There is speculation it's because they are more common in English as spoken in Nigeria, where key elements of model training often occur.)
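The kind of signal that study relied on can be pictured as a simple corpus count. A toy version (my own illustration, not the study's actual method) measures what fraction of abstracts contain at least one marker word:

```python
import re

# Marker words highlighted by the article.
MARKERS = ("commendable", "meticulously", "intricate")

def marker_rate(abstracts):
    """Fraction of abstracts containing at least one marker word."""
    hits = sum(
        any(re.search(rf"\b{w}\b", text.lower()) for w in MARKERS)
        for text in abstracts
    )
    return hits / len(abstracts)

pre_chatgpt = ["We measured soil moisture.", "Results were significant."]
post_chatgpt = ["A meticulously designed survey.", "We measured soil moisture."]
print(marker_rate(pre_chatgpt), marker_rate(post_chatgpt))  # 0.0 0.5
```

Comparing this rate before and after a cutoff date is the basic idea behind such frequency-based estimates, though the real study controlled for far more.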

The aforementioned study also looks at preliminary data from 2024, which indicates that AI writing assistance is only becoming more common. Is this a crisis for modern scholarship, or a boon for academic productivity?

Who should take credit for AI writing?

Many people are worried by the use of AI in academic papers. Indeed, the practice has been described as "contaminating" scholarly literature.

Some argue that using AI output amounts to plagiarism. If your ideas are copy-pasted from ChatGPT, it is questionable whether you really deserve credit for them.

But there are important differences between "plagiarizing" text authored by humans and text authored by AI. Those who plagiarize humans' work receive credit for ideas that ought to have gone to the original author.

By contrast, it is debatable whether AI systems like ChatGPT can have ideas, let alone deserve credit for them. An AI tool is more like your phone's autocomplete function than a human researcher.

The question of bias

Another worry is that AI outputs might be biased in ways that could seep into the scholarly record. Infamously, older language models tended to portray people who are female, black and/or gay in distinctly unflattering ways, compared with people who are male, white and/or straight.

This kind of bias is less pronounced in the current version of ChatGPT.

However, other studies have found a different kind of bias in ChatGPT and other large language models: a tendency to reflect a left-liberal political ideology.

Any such bias could subtly distort scholarly writing produced using these tools.

The hallucination problem

The most serious worry relates to a well-known limitation of generative AI systems: that they often make serious mistakes.

For example, when I asked ChatGPT-4 to generate an ASCII image of a mushroom, it provided me with the following output.

[ChatGPT's ASCII-art output]

It then confidently told me I could use this image of a "mushroom" for my own purposes.

These kinds of overconfident mistakes have been referred to as "AI hallucinations" and "AI bullshit." While it is easy to spot that the above ASCII image looks nothing like a mushroom (and quite a bit like a snail), it may be much harder to identify any mistakes ChatGPT makes when surveying scientific literature or describing the state of a philosophical debate.

Unlike (most) humans, AI systems are fundamentally unconcerned with the truth of what they say. If used carelessly, their hallucinations could corrupt the scholarly record.

Should AI-produced text be banned?

One response to the rise of text generators has been to ban them outright. For example, Science—one of the world's most influential academic journals—disallows any use of AI-generated text.

I see two problems with this approach.

The first problem is a practical one: current tools for detecting AI-generated text are highly unreliable. This includes the detector created by ChatGPT's own developers, which was taken offline after it was found to have only a 26% accuracy rate (and a 9% false positive rate). Humans also make mistakes when assessing whether something was written by AI.

It is also possible to circumvent AI text detectors. Online communities are actively exploring how to prompt ChatGPT in ways that allow the user to evade detection. Human users can also superficially rewrite AI outputs, effectively scrubbing away the traces of AI (like its overuse of the words "commendable," "meticulously" and "intricate").

The second problem is that banning generative AI outright prevents us from realizing these technologies' benefits. Used well, generative AI can boost academic productivity by streamlining the writing process. In this way, it could help further human knowledge. Ideally, we should try to reap these benefits while avoiding the problems.

The problem is poor quality control, not AI

The most serious problem with AI is the risk of introducing unnoticed errors, leading to sloppy scholarship. Instead of banning AI, we should try to ensure that mistaken, implausible or biased claims cannot make it onto the academic record.

After all, humans can also produce writing with serious errors, and mechanisms such as peer review often fail to prevent its publication.

We need to get better at ensuring academic papers are free from serious mistakes, regardless of whether those mistakes stem from careless use of AI or from sloppy human scholarship. Not only is this more achievable than policing AI usage, but it will also improve the standards of academic research as a whole.

This would be (as ChatGPT might say) a commendable and meticulously intricate solution.

Provided by The Conversation




