Can I Use Generative AI to Write Non-Technical Summaries, Create Presentations, and Translate My Work?
Generative AI can be highly effective for summarizing or translating your research and creating presentations tailored to different audiences. Its ability to adjust tone and simplify language makes it particularly useful for creating clear, engaging content from technical material.
Capabilities
- Summarization: Generative AI can quickly distill complex research into accessible summaries for non-specialist audiences, saving time and improving communication.
- Translation: AI-powered translation tools (e.g., DeepL, Google Translate) can help translate research papers and presentations into multiple languages. Some models can preserve technical accuracy and terminology better than others.
- Presentation Creation: Generative AI can generate slide decks from research papers, suggest visual aids, and adjust the language to match the audience’s level of expertise. Tools like ChatGPT, Claude, and Copilot have pre-built capabilities for summarizing scientific content into a presentation format.
Limitations and Risks
- Oversimplification: AI-generated presentations may flatten nuanced findings or gloss over important caveats.
- Loss of Detail: AI-generated summaries may omit key methods, limitations, or qualifications, misrepresenting complex findings.
- Misleading Translations: AI-generated translations can struggle with technical terminology and context.
Can I Use Generative AI to Review Grant Proposals or Papers?
No, you should not use Generative AI to review grant proposals or papers.
- Confidentiality Issues
- The National Institutes of Health (NIH) explicitly prohibits the use of generative AI to analyze or critique grant proposals due to confidentiality concerns. This applies to both publicly available AI models and locally hosted models if data could be accessed by others. Feeding confidential or privileged information into AI systems creates a risk of improper data retention or sharing.
- Lack of Subject Matter Expertise
- Generative AI lacks the human insight and critical judgment needed for nuanced peer review. While AI can identify patterns and summarize text, it cannot reliably assess scientific rigor, novelty, or relevance.
- Policy and Ethical Implications
- Even if a particular funding agency or journal has not issued explicit guidelines, relying on AI for peer review could undermine the integrity of the evaluation process. Maintaining the confidentiality and intellectual independence of the review process is critical to upholding research standards.
Recommendation: Avoid using AI to evaluate or summarize grant proposals and manuscripts under review. Keep the review process strictly within the domain of human expertise to preserve confidentiality and integrity.
Can I Use Generative AI to Write Letters of Support?
Yes, Generative AI can help draft or edit a letter of support, but with some important caveats:
Capabilities
- Drafting Assistance: AI can generate a draft based on key points you provide, saving time and improving structure.
- Tone Adjustment: AI can modify the tone of the letter to match the formality and audience expectations.
- Language Enhancement: AI can suggest alternative phrasing to make the letter more persuasive and professionally polished.
Risks and Limitations
- Generic Language: AI-generated letters often sound formulaic or impersonal.
- Lack of Specificity: AI cannot provide meaningful or detailed insights about the individual or project without clear input.
- Professional Responsibility: You remain responsible for the content, even if AI assists with drafting.
Best Practices
- Review AI-generated drafts carefully to ensure the letter aligns with your genuine assessment and professional standards.
- Use AI for structure and tone adjustment but customize the final letter to reflect your professional judgment.
- Ensure the letter is specific and includes concrete examples rather than relying on general language.
How Can I Use Generative AI as a Brainstorming Partner in My Research?
Generative AI can be highly effective as a brainstorming tool in the early stages of research planning, helping you explore new perspectives, methodologies, and research questions.
Capabilities
- Generating Hypotheses: Input a research concept and prompt AI to suggest alternative hypotheses, experimental designs, or potential challenges.
- Exploring Methodologies: AI can suggest statistical methods, data collection strategies, and experimental approaches based on existing research.
- Identifying Gaps: AI can highlight underexplored areas in the literature, prompting new research angles.
Limitations and Risks
- Recycled Knowledge: AI-generated ideas are based on existing training data and may not be novel or contextually appropriate.
- Lack of Critical Thinking: AI cannot evaluate the feasibility or scientific merit of an idea—it lacks the nuanced judgment of human experts.
- Risk of Hallucination: AI-generated suggestions may be factually incorrect or based on inaccurate sources.
Best Practices
- Critically evaluate AI suggestions and validate them through further research.
- Combine AI-generated insights with your expertise and creative input to develop robust research plans.
Can I Use Generative AI to Write Code?
Yes – provided you can read and understand the code! Generative AI, including tools like ChatGPT, Copilot, and Claude, can generate working code in various programming languages (e.g., Python, R, JavaScript).
Capabilities
- Speed and Automation: AI can quickly generate boilerplate code or complex scripts.
- Debugging Assistance: AI can suggest solutions for coding errors and offer alternative methods.
- Flexibility: Models like ChatGPT-4 can generate code snippets, explain them, and refine them based on feedback.
Risks and Limitations
- Incorrect Code: AI-generated code can be syntactically correct but logically flawed.
- Security Concerns: AI-generated code may have hidden vulnerabilities (e.g., weak encryption, open ports).
- Over-reliance: AI is most effective when you understand the code – relying on AI without code literacy increases the risk of introducing bugs.
Best Practices
- Keep sensitive data and proprietary code out of AI prompts to protect confidentiality.
- Always test AI-generated code with data to validate accuracy.
- Use AI to generate code frameworks but refine complex logic manually.
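The "always test AI-generated code" practice above can be sketched in a few lines. In this hypothetical example, `min_max_normalize` stands in for a function an AI assistant generated; before relying on it, exercise it against known inputs and edge cases:

```python
# Suppose an AI assistant generated this helper to rescale a list of
# values into the 0-1 range (hypothetical example, not from any tool).
def min_max_normalize(values):
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]  # avoid division by zero on constant input
    return [(v - lo) / (hi - lo) for v in values]

# Exercise the function with known answers and edge cases before use.
assert min_max_normalize([0, 5, 10]) == [0.0, 0.5, 1.0]
assert min_max_normalize([3, 3, 3]) == [0.0, 0.0, 0.0]   # constant input
assert min_max_normalize([-2, 0, 2]) == [0.0, 0.5, 1.0]  # negative values
```

Even this small check would catch common AI mistakes such as a missing division-by-zero guard.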
Can I Use Generative AI for Data Analysis and Visualization?
Yes – Generative AI models are becoming increasingly capable of data analysis and visualization.
Capabilities
- Data Cleaning: AI can automate missing data handling, outlier detection, and formatting.
- Statistical Analysis: AI can generate descriptive statistics, run regressions, and conduct hypothesis testing.
- Visualization: AI can create charts, heatmaps, and interactive dashboards from raw datasets. Tools like Copilot and ChatGPT-4 can even create custom Python or R scripts for visualization.
Risks and Limitations
- AI-generated analysis is only as good as the underlying data – incomplete or biased datasets will yield flawed insights.
- Complex models or non-standard metrics may require manual adjustments and fine-tuning.
Best Practices
- Ensure the AI-generated code or analysis adheres to ethical guidelines and data privacy requirements.
- Use AI-generated analysis as a starting point – validate findings with established methods.
- Customize visualizations to ensure they accurately reflect the data and context.
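As a minimal sketch of "validate findings with established methods," the following recomputes basic descriptive statistics with Python's standard library to cross-check an AI-generated analysis; the dataset is invented for illustration:

```python
import statistics

# Hypothetical measurements, one of which (12.5) is a likely outlier.
data = [4.1, 3.9, 4.0, 4.2, 12.5, 3.8, 4.1]

# Recompute summary statistics independently of any AI-generated script.
mean = statistics.mean(data)
median = statistics.median(data)
stdev = statistics.stdev(data)

# A large gap between mean and median is a quick sanity check that an
# outlier may be skewing "average" claims in the analysis.
print(f"mean={mean:.2f} median={median:.2f} stdev={stdev:.2f}")
```

Here the mean (about 5.23) sits well above the median (4.1), a signal to inspect the data before accepting any AI-generated summary of it.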
Can I Use Generative AI as a Substitute for Human Participants in Surveys?
No – Generative AI should not replace human participants in surveys due to issues with construct validity and authenticity.
Capabilities
- Testing Survey Design: AI can simulate responses to identify unclear questions and improve structure.
- Language and Tone: AI can suggest more neutral or accessible phrasing for survey questions.
- Pre-testing: AI-generated responses can serve as a preliminary test for identifying potential biases or ambiguities.
Risks and Limitations
- AI-generated responses are based on patterns in training data, not genuine human experiences or opinions.
- AI cannot accurately replicate subjective human perspectives.
Best Practices
- Use AI for testing and refinement – not as a substitute for real participants.
- Ensure that any AI-assisted changes to surveys are validated through pilot testing with human subjects.
Can Generative AI Be Used for Labeling Data?
Yes – Generative AI is highly effective at automating data labeling for large datasets.
Capabilities
- Categorizing text and images (e.g., labeling social media content)
- Entity recognition (e.g., identifying species in ecological datasets)
- Sentiment analysis (e.g., labeling reviews as positive or negative)
- AI can reduce the time and cost of large-scale labeling tasks.
- AI models can handle multi-label classification and hierarchical labeling.
Risks and Limitations
- Training Bias: AI will replicate any biases present in the training data.
- Accuracy: AI-generated labels may lack nuance or misinterpret contextual information.
Best Practices
- Use AI to automate initial labeling – validate and refine labels with human review.
- Incorporate feedback loops to improve model accuracy over time.
- Regularly test AI-labeled data against ground truth benchmarks.
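Testing AI-assigned labels against a ground-truth benchmark, as recommended above, can be as simple as the following sketch; the record IDs and labels are hypothetical:

```python
# Hypothetical AI-assigned sentiment labels vs. a small human-reviewed
# "gold" set, used to spot-check quality before trusting the model
# on the full dataset.
ai_labels   = {"r1": "positive", "r2": "negative", "r3": "positive", "r4": "neutral"}
gold_labels = {"r1": "positive", "r2": "negative", "r3": "negative", "r4": "neutral"}

agreements = sum(ai_labels[k] == gold_labels[k] for k in gold_labels)
accuracy = agreements / len(gold_labels)

# Items where the model and human reviewers disagree go back for review.
disagreements = [k for k in gold_labels if ai_labels[k] != gold_labels[k]]
print(f"accuracy={accuracy:.2f}, needs review: {disagreements}")
```

The disagreement list feeds the human-review feedback loop described above.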
Can I Use Generative AI to Review Data for Errors and Biases?
Yes – Generative AI can help identify inconsistencies, errors, and biases in large datasets.
Capabilities
- Outlier Detection: AI can flag data points that deviate from expected patterns.
- Missing Data Identification: AI can highlight incomplete data fields.
- Bias Detection: AI can reveal imbalances in data (e.g., overrepresentation of certain groups).
Risks and Limitations
- AI cannot always explain why a pattern exists or whether a deviation is an error or a real observation – human expertise is needed to interpret results.
- Some AI models may themselves introduce bias, especially if trained on incomplete or skewed data.
Best Practices
- Combine AI-driven insights with human judgment for deeper analysis.
- Use multiple AI models and statistical tests to verify findings.
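A minimal sketch of the missing-data and outlier checks described above, using only the standard library; the records are invented for illustration:

```python
# Hypothetical records with one missing value and one likely entry error.
records = [
    {"id": 1, "age": 34},
    {"id": 2, "age": None},   # missing value
    {"id": 3, "age": 29},
    {"id": 4, "age": 31},
    {"id": 5, "age": 33},
    {"id": 6, "age": 28},
    {"id": 7, "age": 30},
    {"id": 8, "age": 120},    # likely data-entry error
]

# Flag records with missing fields.
missing = [r["id"] for r in records if r["age"] is None]

# Flag values more than two standard deviations from the mean.
ages = [r["age"] for r in records if r["age"] is not None]
mean = sum(ages) / len(ages)
std = (sum((a - mean) ** 2 for a in ages) / len(ages)) ** 0.5
outliers = [r["id"] for r in records
            if r["age"] is not None and abs(r["age"] - mean) > 2 * std]

print(f"missing: {missing}, outliers: {outliers}")
```

Whether a flagged value like 120 is an error or a real observation still requires human judgment, as noted above.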
How Do I Cite Content Created or Assisted by Generative AI?
Generative AI should not be listed as a co-author, but its use must be transparently disclosed and appropriately cited in your research paper. Any use should follow the general disclosure guidelines below.
Transparency
- State explicitly how Generative AI was used (e.g., for writing assistance, data analysis, figure generation).
- Include details about the prompts used and the AI’s responses, especially if they influenced the scientific conclusions.
- Mention the version of the model (e.g., ChatGPT 4.0, Claude 2) and the date of use since model outputs can change over time.
Placement
- Disclosures should typically be placed in the Methods section or an Acknowledgments section.
- If the AI-generated content directly influences the scientific claims or interpretation, a statement in the main text or a footnote may be necessary.
Co-Authorship
- Generative AI cannot be listed as an author because it lacks accountability, intellectual responsibility, and the ability to provide consent—all of which are requirements for authorship under most journal guidelines.
- If the AI-generated content is central to the paper’s results or interpretation, human authors should take full responsibility for the final content.