AI in Science Training Materials and Resources

This resource list was developed by the Schmidt AI in Science Curriculum Committee. If there is a new AI resource that you think should be included, please submit your suggestions via this Google form. We look forward to collaborating with our research community to develop this guide.


Organized by AI Carpentries in the U-M Schmidt Sciences Fellowship Program

Introduction and general topics in AI

Artificial Intelligence—The Revolution Hasn’t Happened Yet

This article from Michael Jordan, one of the giants in the fields of computer science and statistics, appears in the inaugural issue of HDSR. Jordan counters the mainstream argument that we are close to achieving a level of intelligence in machines that would rival that of humans. He argues that instead of pursuing human-imitative AI, we should build a new engineering discipline to bring computers and humans together in a way that improves the human condition. He points out, correctly in my opinion, that “AI” gets thrown around to refer to a lot of different things; these days it almost always refers to ML, a discipline at the intersection of computer science and statistics. Going beyond the theme of intelligence augmentation (IA), Jordan calls for the creation of a new discipline called “Intelligent Infrastructure” (II), “whereby a web of computation, data, and physical entities exists that makes human environments more supportive, interesting, and safe.” The commentaries on Jordan’s article are no less illuminating than the article itself.


Artificial Intelligence

It’s hard to cover the history of artificial intelligence in a single book, let alone a single short article. But this brief article manages to capture some of the major trends in the history of AI from the 1950s to the present.


Conjoined Twins: Artificial Intelligence and the Invention of Computer Science

Thomas Haigh is a scholar who specializes in the history of computing. This is the first in a forthcoming series of articles looking back at the early history of AI in the 1950s and 1960s. Given the hype surrounding AI these days, it is important to realize that many of the debates we hear today have been around since the birth of the discipline.


Feynman on Artificial Intelligence and Machine Learning, with Updates

A book chapter in the 2nd edition of Feynman’s Lectures on Computation. For anyone interested in AI in Science, learning about what Feynman thought about computation and artificial intelligence is a must.

Deep learning

Deep learning

This Nature review from 2015 is by the deep learning trio (Yann LeCun, Yoshua Bengio, and Geoffrey Hinton), who received the 2018 Turing Award.


Attention Is All You Need

This paper introduces the Transformer architecture, which is the basis for many LLMs.
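The core operation of the Transformer can be sketched in a few lines. Below is a minimal NumPy illustration of scaled dot-product attention, softmax(QKᵀ/√d_k)V; the shapes and random inputs are purely illustrative, not taken from the paper.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # (n_q, n_k) similarity scores
    # Numerically stable row-wise softmax over the keys
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                               # weighted sum of value vectors

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 queries of dimension 8
K = rng.normal(size=(6, 8))   # 6 keys
V = rng.normal(size=(6, 8))   # 6 values
out = attention(Q, K, V)      # shape (4, 8): one output per query
```

The full architecture adds multiple heads, learned projections of Q, K, and V, residual connections, and feed-forward layers on top of this primitive.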


BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

One of the first transformer-based models pretrained on a large amount of unlabeled text, introducing bidirectional pretraining via masked language modeling.


Training language models to follow instructions with human feedback

OpenAI paper on training large language models to be aligned with human preferences using reinforcement learning from human feedback (RLHF).


Diffusion Models Beat GANs on Image Synthesis

Introduces training techniques for diffusion models that outperform the previous state-of-the-art GANs on image synthesis.


An Image is Worth 16×16 Words: Transformers for Image Recognition at Scale

Introduces the Vision Transformer (ViT) model and its training methods.


Scaling Laws for Neural Language Models

Studies scaling properties of language models, showing empirical relations between model size, training data, and task performance.
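A minimal illustration of the kind of power law the paper fits for loss versus non-embedding parameter count, L(N) = (N_c / N)^α_N. The constants below are the paper's approximate reported values; treat them as illustrative.

```python
# Power-law scaling of language-model loss with model size N (non-embedding
# parameters), per the form reported in "Scaling Laws for Neural Language
# Models". Constants are the paper's approximate fits.
N_c = 8.8e13      # reference parameter count (approximate)
alpha_N = 0.076   # scaling exponent (approximate)

def loss(N):
    """Predicted cross-entropy loss for a model with N parameters."""
    return (N_c / N) ** alpha_N

# Doubling model size shrinks the predicted loss by a constant
# multiplicative factor of 2 ** -alpha_N, independent of N.
for N in (1e8, 1e9, 1e10):
    print(f"N = {N:.0e}  ->  L = {loss(N):.3f}")
```

The same functional form (with different constants) is fitted for dataset size and training compute, which is what makes the paper useful for budgeting model and data size jointly.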

Uncertainty Quantification & Bayesian Statistics

Conformal prediction is a way to provide prediction intervals for machine learning algorithms with guaranteed coverage in finite samples and without distributional assumptions. Its roots are in the work of Vladimir Vovk, the last PhD student of Kolmogorov.


A Tutorial on Conformal Prediction

It doesn’t cover the latest advances, but this one comes straight from the source.


Conformal prediction: A unified review of theory and new challenges

The place to go to learn about the latest advances in the field of conformal prediction.


A Gentle Introduction to Conformal Prediction and Distribution-Free Uncertainty Quantification

The easiest way to get started in using and understanding conformal prediction.

GenAI and Large Language Models

The Debate Over Understanding in AI’s Large Language Models

Does a good job of summarizing a heated current debate on whether LLMs really “understand” language.


Language Models are Few-Shot Learners

GPT-3 paper that introduces the idea of in-context learning in prompts. 


Chain-of-Thought Prompting Elicits Reasoning in Large Language Models

Google paper that introduces generating rationales (i.e., a chain of thought) before the final answer on reasoning tasks.
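A sketch of what such a prompt looks like, using the well-known tennis-ball exemplar from the paper; the string construction below is purely illustrative.

```python
# Chain-of-thought prompting: the few-shot exemplar includes intermediate
# reasoning steps before the final answer, so the model tends to emit its
# own reasoning chain before answering the new question.
cot_prompt = """\
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls.
Each can has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 balls.
5 + 6 = 11. The answer is 11.

Q: The cafeteria had 23 apples. They used 20 to make lunch and bought
6 more. How many apples do they have?
A:"""
```

Compared with a standard few-shot exemplar that shows only "The answer is 11.", the worked reasoning in the exemplar is what elicits step-by-step answers on the held-out question.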


On the Opportunities and Risks of Foundation Models

Stanford’s paper on future directions for foundation models and their potential risks.


Sparks of Artificial General Intelligence: Early experiments with GPT-4

Discusses results showing GPT-4’s superior performance on tasks across different domains, such as medicine, law, and psychology.


Organized by AI Carpentries in the U-M Schmidt Sciences Fellowship Program

Introduction and general topics in AI

Artificial Intelligence: A Modern Approach, 4th US ed.  Stuart Russell and Peter Norvig.

Nothing beats this as a solid, all-round introduction to AI, and it is a widely used textbook.  It gives a comprehensive introduction to the field, covering traditional topics such as intelligent agents, search algorithms, knowledge representation, and planning, as well as more modern topics in machine learning, natural language processing, and robotics.


Artificial Intelligence: A Guide for Thinking Humans.  2019.  Melanie Mitchell. 

One of the best guides targeted towards the educated lay person.  This is a general overview of the promises and limits of AI with a narrative that traces the history of advances in AI and concerns over the development of superintelligence.    


Pattern Recognition and Machine Learning.  2006.  Christopher M. Bishop. 

This book covers basic machine learning topics and their foundations. It is aimed at advanced undergraduates or first year PhD students, as well as researchers and practitioners.  It assumes no previous knowledge of pattern recognition or machine learning concepts.


The Quest for Artificial Intelligence:  A History of Ideas and Achievements.  2010.  Nils J. Nilsson.  A complete pre-print is available at this link:

A history of AI from someone who actively shaped much of it.  This book emphasizes historical achievements in AI much more than the present state of AI, which is only about 10% of the book and is now 13 years out of date.  It is strong on explanation of foundational ideas underlying AI efforts and achievements from the 1950s through the 1980s.  

Deep learning

Machine Learning: A Probabilistic Perspective.  Kevin P. Murphy

Absolutely wonderful and comprehensive coverage of all areas of ML, including deep learning. Kevin Murphy got help from a lot of leading researchers in writing specific chapters of these books. The result is a very authoritative reference on all aspects of ML and AI.  “Book 0” was published in 2012, and it seems that rather than publish a new edition, the updated material was split into a basic treatment (Book 1, 2022) and advanced topics (Book 2, 2023).


Deep Learning (Adaptive Computation and Machine Learning series).  2016.  Ian Goodfellow, Yoshua Bengio, and Aaron Courville

Can serve as an introductory book for deep learning. It touches on the fundamentals of optimization, convolutional networks, recurrent networks, etc.  It is a standard graduate level introduction to the field.  

Deep Learning in Science

Covers applications in physics, chemistry, biology, and medicine.


Dive into Deep Learning

Comprehensive and up-to-date coverage of deep learning. It’s nice that you can choose your favorite deep learning framework (PyTorch, TensorFlow, or JAX).

GenAI and Large Language Models

Superintelligence:  Paths, Dangers, Strategies.  2014.  Nick Bostrom.

Nick Bostrom is a philosopher and Director of the Future of Humanity Institute at the University of Oxford.  This book takes a close look at the likelihood of a superintelligent AI being developed, considers a range of dangers that could result, and considers strategies that could be used to reduce those dangers.  It is a good introduction to the value alignment problem, the concern that advanced AI may exhibit values that differ from those of humans.  

U-M resources

U-M Advanced Research Computing (ARC).  U-M ARC provides researchers with a Research Computing Package.  

U-M CSCAR:  Consulting for Statistics, Computing, and Analytics Research

U-M Library:  Data Services 

U-M MICDE:  Michigan Institute for Computational Discovery and Engineering (MICDE)

Other online resources including courses

Introduction to AI

Lex Fridman’s podcast is broad but has AI as a dominant theme.


UC Berkeley CS 188: Introduction to Artificial Intelligence 


UM CSE 598: AI for Science


The TWIML AI podcast (shortened from This Week in Machine Learning & AI)

This weekly podcast features a different guest each episode, drawn from across a wide range of AI-related pursuits and expertise.  It is highly variable from week to week: sometimes very informative, sometimes not so much.



AI for Scientific Research 

A beginner-level course that covers the use of AI in science for data analysis, the complete machine learning process, and using AI to predict sequences in datasets.

Causal Inference & Explainable AI

A blog from the UCLA causality group, which includes Judea Pearl.

Uncertainty Quantification & Bayesian Statistics

Andrew Gelman and colleagues blog about a variety of statistical topics, often from a Bayesian point of view.