User’s Guide

(Last updated: 7/12/2024)

If you’re just getting started with using generative AI (GenAI) in your research, start here.

This guide was developed by members of the MIDAS staff team and our postdoctoral fellows. It includes frequently asked questions and shows how GenAI can be used throughout the research process, based on published guidelines from journals, funding agencies, and professional societies, as well as our own assessment of GenAI’s benefits and risks.

GenAI is a rapidly evolving technology, and we will update this guide as new information becomes available. Suggestions for improvements or additions? Email midas-research@umich.edu. We look forward to developing this guide collaboratively with our research community.

Please be aware that Generative AI models, including ChatGPT, UM-GPT and others, can sometimes provide false or inaccurate answers. All results produced by GenAI models should be validated manually. 

Step-by-Step Instructions for Specific Usages

This quick-start guide helps researchers with little programming experience start coding with the help of GenAI.

Explore how to use ChatGPT 4’s “data analysis” feature effectively. This guide covers code organization, error checking, data visualization, and translation between coding languages.

This guide shows how to set up a custom GPT within ChatGPT 4, which is especially useful if you would like ChatGPT to carry out a specific task repeatedly, or if you prefer a specific style of output.

This guide demonstrates a number of strategies to craft your prompts in order to shape the content and style of GenAI outputs.

Using Generative AI for Writing

The default stance on using generative AI for writing research papers should generally be NO, particularly for creative contributions, due to issues around authorship, copyright, and plagiarism. However, generative AI can be beneficial for editorial assistance, provided you are aware of what is acceptable at your target publication venue.

Generating text and images for publications in scientific journals raises issues of authorship, copyright and plagiarism, many of which are still unresolved. Therefore, this is a very controversial area and many journals and research conferences are updating their policies. If you want to do this, please read very carefully the guidelines for authors of your target journal.

Here are a few examples of new authorship guidelines. 

  • Springer Nature journals prohibit the use of generative AI to generate images for manuscripts; text generated by LLMs should be well documented, and AI is not granted authorship.
  • Science journals require full disclosure for the use of generative AI to generate text; generative AI-generated images and multimedia can be used only with explicit permission of their editors. AI is not granted authorship.
  • JAMA and the JAMA Network journals do not allow generative AI to be listed as an author. However, generative AI-generated content, or AI assistance in writing and editing, is allowed but must be reported in the manuscript.
  • Elsevier permits the use of AI tools to enhance text readability, but not to create or alter scientific content. Authors should provide full disclosure of the use of AI. It prohibits the use of AI to generate or alter images, unless this is part of the research method. AI authorship is not allowed.
  • IEEE mandates disclosure of all AI-generated content in submissions, except for editing and grammar enhancement.
  • The International Conference on Machine Learning prohibits content generated by generative AI, unless it is part of the research study being described.

While direct generation of content by generative AI is problematic, its role in the earlier stages of writing can be advantageous. For instance, non-native English speakers may use generative AI to refine the language of their writing. Generative AI can also serve as a tool for providing feedback on writing, similar to a copy editor’s role, by improving voice, argument, and structure. This utility is distinct from using AI for direct writing. As long as the human author assumes full responsibility for the final content, such editing help from generative AI is increasingly being recognized as acceptable in most disciplines where language is not the primary scholarly contribution. However, conservative editorial policies at some venues may limit the use of such techniques in the short term.

Using generative AI to help write grant proposals should be undertaken only with an understanding of the risks involved. The bottom line is that the investigator is signing off on the proposal and is promising to do the work if funded, and so has to take responsibility for every part of the proposal content, even if generative AI assisted in some parts.

The reasoning is similar to that for writing papers, as discussed above, except that there usually will not be copyright and plagiarism issues. Also, few funding agencies have well-developed policies in this regard yet.

For example, although the National Institutes of Health (NIH) does not specifically prohibit the use of generative AI to write grants (they do prohibit use of generative AI technology in the peer review process), they state that an author assumes the risk of using an AI tool to help write an application, noting “[…] when we receive a grant application, it is our understanding that it is the original idea proposed by the institution and their affiliated research team.” If AI generated text includes plagiarism, fabricated citations or falsified information, the NIH “will take appropriate actions to address the non-compliance.” (Source.)

Similarly, the National Science Foundation (NSF), in its notice dated December 14, 2023, addresses the use of generative AI in grant proposal preparation and the merit review process. While NSF acknowledges the potential benefits of AI in enhancing productivity and creativity, it imposes strict guidelines to safeguard the integrity and confidentiality of proposals. Reviewers are prohibited from uploading proposal content to non-approved AI tools, and proposers are encouraged to disclose the extent and manner of AI usage in their proposals. NSF stresses that any breach in confidentiality or authenticity, especially through unauthorized disclosure via AI, could lead to legal liabilities and erosion of trust in the agency. (Source.)

The Department of Energy (DOE) requires authors to verify any citations suggested by generative AI, due to potential inaccuracies, and does not allow AI-based chatbots such as ChatGPT to be credited as authors or co-authors.

Generative AI can offer multiple advantages for literature review. It can help you summarize a particular paper, saving you time and enabling you to cover a much larger number of publications in the limited time you have. It can also help you summarize the literature around certain research questions by searching through many papers.

However, you should consider a number of factors that may impact how much you can trust such reviews.

  • When generative AI encounters a request that it lacks information or knowledge about, it sometimes “makes up” an answer. This “AI hallucination” is well documented, and many of us have probably experienced it. You are responsible for verifying the summaries that generative AI gives you.
  • Unlike human researchers, generative AI does not have the ability to evaluate the quality of published work. It will therefore indiscriminately include publications of varying quality, perhaps including many studies that cannot be reproduced.
  • A generative AI model has a knowledge cutoff date, so publications newer than the cutoff date will not be included in its responses.
  • Other types of inaccuracies. Generative AI’s effectiveness depends on its training data. Even though enormous amounts of training data are now used for generative AI models, there is still no guarantee that the training data are unbiased.

Also, please keep in mind all the limitations discussed above regarding the use of generative AI to assist in writing research papers. Subject to those limitations, using generative AI to assist with literature review seems reasonable.

Generative AI can be beneficial for summarizing or translating your work, especially with its ability to adjust the tone of a text, making it easier to create brief but complete summaries that suit different types of readers. Several advanced generative AI models are designed specifically to transform scientific manuscripts into presentations. 

However, while using generative AI to summarize, present, or translate your work, be sure not to input confidential information. You should also always verify that summaries, presentations and translations created by generative AI accurately represent your work. Verifying a translation can be challenging if you are not proficient in both languages involved; in that case, consult a fluent speaker. Also note that not all generative AI models are explicitly designed for translation tasks, so you should explore and identify the model that best aligns with your specific translation needs.

Using Generative AI to Improve Productivity

No, you should not use generative AI to review grant proposals. The National Institutes of Health recently announced that it prohibits the use of generative AI to analyze and formulate critiques of grant proposals. This applies not only to generative AI systems that are publicly available, but also to systems hosted locally (such as a university’s own generative AI), as long as data may be shared with multiple individuals. The main rationale is that this would constitute a breach of confidentiality, which is essential in the grant review process. To use generative AI tools to evaluate and summarize grant proposals, or even to let them edit critiques, one would need to feed the AI system “substantial, privileged, and detailed information.” When we don’t know how an AI system will save, share or use the information that it is fed, we should not feed it such information.

Furthermore, expert review relies upon subject matter expertise, which a generative AI system could not be relied upon to have. So, it is unlikely that generative AI will produce a reliable and high-quality review.

For these reasons, we don’t recommend that you use generative AI for reviewing grant proposals or papers, even if the relevant publication venue or funding agency, unlike NIH, has not issued explicit guidance.

Generative AI can, in some situations, be useful for drafting a letter, editing your draft, or helping you adopt a certain tone. We are not aware of any explicit rules against this. However, please keep in mind the following:

  • You are still fully responsible for everything in the letter because you are still the author.
  • You should consider the issue of confidentiality. Is there confidential information in the letter? If so, generative AI should not “know” it, because, again, we do not know for sure what it does with the information that users feed it.
  • Texts written by GPT tend to sound very generic. This is not good for letters of support, whose value may depend on providing very specific information, and recommendations, about the subject of the letter. You still need to ensure that the letter is one you feel comfortable sending and that it conveys to the reader the same level of support as if you had written it yourself.

Generative AI can serve as an effective brainstorming partner in research. These systems can, when used appropriately, help generate a variety of ideas, perspectives, and potential solutions, which is particularly useful during the initial stages of research planning. For instance, a researcher can input their basic research concept into the AI system and receive suggestions on experimental approaches, potential methodologies, or alternative research questions. An example prompt may be:

“Analyze recent research on memory consolidation and the influence of emotions on learning and recall. Based on this analysis, generate new hypotheses for potential studies investigating neurobiological mechanisms.”

However, AI-generated ideas must be critically evaluated. While AI can offer diverse insights, these are based on existing data and may not always be novel or contextually appropriate. Researchers should use these suggestions as a starting point for further development rather than as definitive solutions.

Using Generative AI for Data Generation and Analysis

Yes, generative AI can write code for you, provided you can read code! But, just as with text, you may get code that looks good but is erroneous. Since it is often easier to read code than to write it, you may be better off letting generative AI write code that you then review. We provide a guide on generating, editing and reviewing code using ChatGPT 4.0 here and a coding tutorial using local software such as GitHub Copilot here.

This applies not just to computer programs, but also to databases. You can have generative AI write code for you in SQL to manage and to query databases. In fact, in many cases, you could even do some minimal debugging just by running the code/query on known instances and checking to make sure you get the right answers. While basic tests like these can catch many errors, remember that there is no guarantee your program will work on complex examples just because it worked on simple ones.
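For example, here is a minimal sketch of this kind of sanity check in Python, using the built-in sqlite3 module; the table, data, and query below are hypothetical stand-ins for whatever the AI generates for you:

    import sqlite3

    # Build a tiny "known instance" in memory: a table whose correct
    # answers we can work out by hand.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE papers (id INTEGER, year INTEGER, citations INTEGER)")
    conn.executemany(
        "INSERT INTO papers VALUES (?, ?, ?)",
        [(1, 2020, 10), (2, 2021, 5), (3, 2021, 8)],
    )

    # Suppose this query was suggested by a generative AI model.
    ai_suggested_query = """
        SELECT year, AVG(citations) AS avg_citations
        FROM papers
        GROUP BY year
        ORDER BY year
    """

    # Compare the output against the answer computed by hand.
    result = conn.execute(ai_suggested_query).fetchall()
    assert result == [(2020, 10.0), (2021, 6.5)], f"Unexpected result: {result}"
    print("Query matches the hand-computed answer on this known instance.")

Passing a toy test like this does not prove the query is correct in general, but it will catch many common mistakes quickly.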

Yes, generative AI can assist with data analysis and visualization, and models have been steadily improving at these tasks. We provide some examples of data analysis and visualization using ChatGPT 4.0 here.

Using generative AI as a substitute for human participants in surveys is not advisable due to significant concerns regarding construct validity. Generative AI, while adept at processing and generating data, cannot authentically replicate the nuances of human behavior and opinion that surveying humans in research is meant to capture.

However, generative AI can be valuable in the preliminary stages of survey design. It can assist in testing the clarity and structure of survey questions, helping address ambiguity and effectively capture the intended information. This application leverages AI’s capability to process language and simulate varied responses, providing insights into how questions may be interpreted by a diverse audience. In short, while generative AI’s use as a direct replacement for human survey participants is not recommended due to validity concerns, its role in enhancing survey design and testing is a viable and beneficial application.
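As an illustration, the sketch below assembles a prompt that asks a model to role-play several respondents and point out ambiguities in a draft question; you would then paste the result into the GenAI tool of your choice. The question and personas are invented examples:

    # A minimal sketch for pre-testing a survey question with generative AI.
    draft_question = "How often do you use digital tools in your daily work?"
    personas = ["a retired teacher", "a graduate student", "a construction worker"]

    prompt = (
        f'Here is a draft survey question: "{draft_question}"\n'
        "For each of the following respondents, explain how they might "
        "interpret the question and what might be ambiguous to them:\n"
        + "\n".join(f"- {p}" for p in personas)
    )
    print(prompt)  # paste into the generative AI tool of your choice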

Generative AI can be employed for labeling, such as categorizing text and images. This application can streamline processes that are traditionally time-consuming and labor-intensive for human judges. However, the reliability of AI in these tasks requires careful consideration and validation on a case-by-case basis.

The key concern with AI-based judgment in labeling is its dependence on the quality and bias of training data. AI systems might replicate any inherent biases present in their training datasets, leading to skewed or inaccurate labeling. Researchers must validate the AI’s performance by comparing its output with human-labeled benchmarks to ensure accuracy and impartiality.
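Here is a minimal sketch of such a validation, assuming you have a human-labeled benchmark sample and scikit-learn installed (the labels below are hypothetical):

    from sklearn.metrics import accuracy_score, cohen_kappa_score

    # Hypothetical labels for the same ten items: one set from human coders
    # (the benchmark) and one set produced by a generative AI model.
    human_labels = ["pos", "neg", "pos", "neg", "pos", "pos", "neg", "neg", "pos", "neg"]
    ai_labels    = ["pos", "neg", "pos", "pos", "pos", "pos", "neg", "neg", "neg", "neg"]

    print("Agreement (accuracy):", accuracy_score(human_labels, ai_labels))
    # Cohen's kappa discounts agreement that would occur by chance.
    print("Cohen's kappa:", cohen_kappa_score(human_labels, ai_labels))

Low agreement on the benchmark sample is a signal to refine your prompts or fall back to human labeling for that task.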

Yes! Generative AI can serve as a supplementary tool in the process of data quality assurance, assisting in the identification of errors, inconsistencies, or biases in datasets. Its capability to process extensive data rapidly enables it to spot potential issues that might be missed in manual reviews. Researchers should use generative AI as one component of a broader data review strategy. It’s essential to corroborate AI-detected anomalies with manual checks and expert assessments.
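For instance, if a generative AI review flags suspect rows in a dataset, a sketch of corroborating those flags with explicit, human-written rules might look like this (the dataset and the AI-flagged row indices are hypothetical):

    import pandas as pd

    # Hypothetical dataset and hypothetical rows flagged by a GenAI review.
    df = pd.DataFrame({"age": [34, 29, 217, 41], "income": [52000, 61000, 58000, -300]})
    ai_flagged = {2, 3}

    # Corroborate the AI's flags with simple rule-based checks.
    rule_flagged = set(df.index[(df["age"] > 120) | (df["income"] < 0)])

    print("Flagged by AI only:   ", sorted(ai_flagged - rule_flagged))
    print("Flagged by rules only:", sorted(rule_flagged - ai_flagged))
    print("Flagged by both:      ", sorted(ai_flagged & rule_flagged))

Disagreements in either direction are exactly the cases that merit a manual look.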

Reporting the Use of Generative AI

You used generative AI in the course of writing a research paper. How do you give it credit? And how do you inform the reader of your paper about its use?

Generative AI should not be listed as a co-author, but its use must be noted in the paper, including appropriate detail, e.g. about specific prompts and responses. The Committee on Publication Ethics has a succinct and incisive analysis.

The use of generative AI should be disclosed in the paper, along with a description of where and how it was used. Typically, such disclosures will be in a “Methods” section of the paper, if it has one. If you rely on generative AI output, you should cite it, just as you would cite a web page or a personal communication. Keep in mind that some conversation identifiers may be local to your account, and hence not useful to your reader. Good citation style recommendations have been suggested by the American Psychological Association (APA) and the Chicago Manual of Style.

We provide recommendations on reporting the use of generative AI in research here.

Considerations for Choosing Generative AI Models

The most important factor is which generative AI system (what data, what model, what computing requirements) fits well with your research questions. In addition, there are some general considerations. 

Open source. “Open source” describes software that is published alongside its source code for use and exploration by anyone. This is a consideration because most generative AI models are not developed locally by the researchers themselves (as opposed to typical machine learning models). Open-source generative AI, as well as generative AI systems trained with publicly accessible data, can be advantageous for researchers who would like to fine-tune generative AI models, scrutinize the security and functionality of the system, and improve the explainability and interpretability of the models.

Accuracy and precision. When outputs of a generative AI can be verified (for example, if it is used in data analytics), you can gauge the efficacy of a generative AI by its precision and accuracy. 
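For instance, if you hand-verify a sample of a model’s outputs, accuracy and precision can be computed directly; here is a minimal sketch with hypothetical verification results:

    # Hypothetical example: a generative AI classified eight records as
    # "relevant" (True) or not, and each output was verified by hand.
    truth     = [True, True, False, False, True, False, True, False]
    ai_output = [True, False, False, True, True, False, True, False]

    tp = sum(t and a for t, a in zip(truth, ai_output))      # true positives
    fp = sum(not t and a for t, a in zip(truth, ai_output))  # false positives
    correct = sum(t == a for t, a in zip(truth, ai_output))

    print("Accuracy: ", correct / len(truth))  # fraction of all outputs that are right
    print("Precision:", tp / (tp + fp))        # fraction of "relevant" calls that are right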

Cost. Some models require subscriptions to APIs (application programming interfaces) for research use. Other models can be integrated locally, but come with integration costs and potentially ongoing costs for maintenance and updates. Even when selecting otherwise free models, you might need to cover the cost of an expert to set up and maintain the model.

Yes, some generative AI models can be customized. Some commercial generative AI developers now provide ways for users to easily customize the models, provide their own data and documents to fine-tune the models, and specify the styles of model outputs. See our Custom GPT guide for more details.

The nature of generative AI gives rise to a number of considerations that the entire research community is trying to grapple with. Transparency and accountability about a generative AI system’s operations and decision-making processes can be especially difficult to achieve when you operate a closed-source system.

We invite you to think about the following carefully, and be aware that many other issues might arise.

Data privacy concerns. Data privacy is more complicated with generative AI when using cloud-based services, as users don’t know for certain what happens to their input data and whether it could be retained for training future AI models. One way to circumvent these privacy concerns is to use locally-deployed generative AI models that run entirely on your own hardware and do not send data back to the AI provider. An example is Nvidia ChatRTX.
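Many local deployment tools (for example, servers built on llama.cpp or similar projects) expose an OpenAI-compatible HTTP endpoint on your own machine. Below is a minimal sketch of querying such a server, assuming one is running at localhost:8000; the URL and model name depend entirely on your local setup:

    import requests

    # Assumes a locally hosted, OpenAI-compatible server; no data leaves
    # your machine. URL and model name are placeholders for your setup.
    resp = requests.post(
        "http://localhost:8000/v1/chat/completions",
        json={
            "model": "local-model",
            "messages": [{"role": "user", "content": "Summarize this internal memo."}],
        },
        timeout=60,
    )
    print(resp.json()["choices"][0]["message"]["content"])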

Bias in data. Bias in data, and consequently bias in the AI system’s output, could be a major issue because generative AI is trained on large datasets that you usually can’t access or assess, and may inadvertently learn and reproduce biases, stereotypes, and majority views present in these data. Moreover, many generative AI models are trained with overwhelmingly English texts, Western images and other types of data. Non-Western or non-English speaking cultures, as well as work by minorities and non-English speakers are seriously underrepresented in the training data. Thus, the results created by generative AI are definitely culturally biased. This should be a major consideration when assessing whether generative AI is suitable for your research.

AI hallucination. Generative AI can produce outputs that are factually inaccurate, entirely incorrect, uncorroborated, nonsensical or fabricated. These phenomena are dubbed “hallucinations”. Therefore, it is essential for you to verify generative AI-generated output against reliable and credible sources.

Plagiarism. Generative AI can only generate new content based on, or drawn from, the data that it is trained on. Therefore, there is a likelihood that it will produce outputs similar to the training data, even to the point of being regarded as plagiarism if the similarity is too high. As such, you should confirm (e.g., by using plagiarism detection tools) that generative AI outputs are not plagiarized, but instead “learned” from various sources in the manner humans learn without plagiarizing.

Prompt Engineering. The advent of generative AI has created a new human activity – prompt engineering – because the quality of generative AI responses is heavily influenced by the user input or ‘prompt’. There are courses dedicated to this concept. However, you will need to experiment with how to craft prompts that are clear, specific and appropriately structured so that generative AI will generate the output with the desired style, quality and purpose. 
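As a small illustration, compare an underspecified prompt with a structured one that states the role, task, constraints, and output format (both prompts are hypothetical examples):

    # Two hypothetical prompts for the same task; the structured version
    # spells out the role, task, constraints, and output format.
    vague_prompt = "Tell me about my survey data."

    structured_prompt = (
        "You are a statistician assisting a social science researcher.\n"
        "Task: suggest three analyses for a survey with 500 responses on "
        "remote-work satisfaction.\n"
        "Constraints: assume ordinal Likert-scale items; make no causal claims.\n"
        "Output format: a numbered list with one sentence of rationale per item."
    )
    print(structured_prompt)

The second prompt will typically yield more focused and reusable output than the first.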

Knowledge Cutoff Date. Many generative AI models are trained on data up to a specific date, and are therefore unaware of any events or information produced after that date. For example, a generative AI model trained on data up to March 2019 would be unaware of COVID-19 and the impact it had on humanity, or of who the current monarch of Britain is. You need to know the cutoff date of the generative AI model that you use in order to assess which research questions are appropriate for it.

Model Continuity. When you use generative AI models developed by external entities or vendors, you need to consider the possibility that one day the vendor might discontinue the model. This could have a big impact on the reproducibility of your research.

Security. As with any computer or online system, a generative AI system is susceptible to security breaches and attacks. We have already mentioned the issue of confidentiality and privacy as you input information or give prompts to the system. But malicious attacks could be a bigger threat. For example, a new type of attack, prompt injection, deliberately feeds harmful or malicious content into the system to manipulate the results that it generates for users. Generative AI developers are designing processes and technical solutions against such risks (for example, see OpenAI’s GPT-4 System Card and disallowed usage policy). But as a user, you also need to be aware of what is at risk, follow the guidelines of your local IT providers, and do due diligence on the results that a generative AI creates for you.

Lack of Standardized Evaluations. The AI Index Report 2024 found that leading developers test their models against different responsible AI benchmarks, making it challenging to systematically compare the risks and limitations of AI models. Be wary when models tout confidence in certain evaluation measures, as the measures may not have been fully tested.

Additional Reading

Many recommendations, guidelines and comments are out there regarding the use of Generative AI in research and in other lines of work. Here are a few examples.

For more content, including manuscripts, the use of generative AI in research, and more, see our generative AI resource page.