AI in Science Postdoctoral Fellowship Program

What is AI in Science?

How can AI enable research breakthroughs? There are as many ways as our creativity allows. The collection of ideas on this page by no means defines the scope of our program; rather, it is meant to stimulate our imagination and push the envelope of how AI can be instrumental for science and engineering.

This collection will keep growing as we receive new entries. U-M faculty members who want to submit their ideas should email midas-contact@umich.edu.

Biological Sciences

Applying AI to Dopamine Activity in Real Time

Jill Becker, Patricia Y Gurin Collegiate Professor of Psychology, Professor of Psychology, College of Literature, Science, and the Arts and Research Professor, Michigan Neuroscience Institute, Medical School

Cynthia Chestek, Associate Professor of Biomedical Engineering, College of Engineering and Medical School and Associate Professor of Electrical Engineering and Computer Science, College of Engineering

Dopamine (DA), a neurotransmitter, is known to play a role in reward, motivation, and learning across multiple brain regions. How is DA activity in these regions controlled during behavior? How does the DA response differ from one brain structure to another, even in the same animal? How do we connect these responses to the animal’s ongoing behavior? We can measure DA concentrations in multiple brain areas every 15 msec while the animals move freely, and we can combine the neural data with data about animal location, time, cell types, behaviors, and individual traits. AI methods, such as the machine learning toolbox that we are developing, in combination with other state-of-the-art analytical methods, will allow us to make full use of this rich data.
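
As a small illustration of the kind of analysis such a toolbox enables, the sketch below decodes a simulated behavioral variable from simulated multi-site DA traces using a ridge-regularized linear readout. All numbers, the 4-site layout, and the linear model are invented for the demo and are not the project’s actual data or methods:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-in for real recordings: DA concentration sampled every
# 15 ms at 4 recording sites, plus a behavioral variable (e.g., movement
# speed) that is, by construction, linearly related to the neural signal.
n_samples, n_sites = 2000, 4
da = rng.normal(size=(n_samples, n_sites))       # z-scored DA traces
true_w = np.array([0.8, -0.5, 0.3, 0.1])         # invented coupling weights
behavior = da @ true_w + 0.1 * rng.normal(size=n_samples)

# Fit a ridge-regularized linear decoder on the first 1500 samples.
split = 1500
X_tr, X_te = da[:split], da[split:]
y_tr, y_te = behavior[:split], behavior[split:]
lam = 1.0  # ridge penalty
w = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(n_sites), X_tr.T @ y_tr)

# Decoding quality on held-out data: fraction of variance explained (R^2).
pred = X_te @ w
r2 = 1 - np.sum((y_te - pred) ** 2) / np.sum((y_te - y_te.mean()) ** 2)
```

In a real analysis the linear readout would be replaced by richer models that can exploit nonlinear structure across regions, behaviors, and individual traits.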


Using Machine Learning to Monitor Animal Movement in Real Time

Jacinta Beehner, Professor of Psychology and Professor of Anthropology, College of Literature, Science, and the Arts

The largest movement dataset from any wild primate comes from 25 Kenyan baboons over the course of two weeks. However, tagging animals is neither feasible nor ethical for many primates, despite the fact that they are arguably the most interesting taxa for asking compelling theoretical and evolutionary questions about collective action problems and the interplay of social and spatial networks. Because primate vocalizations have unique signatures, we can use them in supervised and semi-supervised deep learning to identify individual animals as they move through a landscape. This approach will provide unprecedented opportunities for us to understand social dynamics within and across animal groups in their natural habitats.
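
A minimal sketch of the identification idea, with synthetic harmonic “calls” and a nearest-centroid spectral match standing in for the supervised deep networks described above; the individuals, fundamental frequencies, and noise levels are all invented:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 8000          # sample rate (Hz)
dur = 0.25         # call duration (s)
t = np.arange(int(fs * dur)) / fs

def synth_call(f0):
    """Toy vocalization: a harmonic stack at an individual-specific f0."""
    sig = sum(np.sin(2 * np.pi * f0 * k * t) / k for k in (1, 2, 3))
    return sig + 0.3 * rng.normal(size=t.size)

# Three hypothetical individuals, each with a distinctive fundamental.
f0s = {"animal_a": 400.0, "animal_b": 550.0, "animal_c": 700.0}

def features(x):
    """Log-magnitude spectrum as a simple acoustic signature."""
    return np.log1p(np.abs(np.fft.rfft(x)))

# "Training": average the spectra of labeled calls (the supervised step).
centroids = {name: np.mean([features(synth_call(f0)) for _ in range(20)], axis=0)
             for name, f0 in f0s.items()}

def identify(x):
    """Assign a new call to the individual with the nearest centroid."""
    f = features(x)
    return min(centroids, key=lambda n: np.linalg.norm(f - centroids[n]))

# Evaluate on one fresh, held-out call per individual.
correct = sum(identify(synth_call(f0)) == name for name, f0 in f0s.items())
```

Real vocal signatures are far subtler than a clean harmonic stack, which is why deep learning, rather than centroid matching, is needed in the field.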

Earth and Environmental Sciences

Accelerating Imaging and Microscopy

Ambuj Tewari, Professor of Statistics, College of Literature, Science, and the Arts and Professor of Electrical Engineering and Computer Science, College of Engineering

Anne McNeil, Carol A Fierke Collegiate Professor of Chemistry, Arthur F Thurnau Professor, Professor of Chemistry, College of Literature, Science, and the Arts and Professor of Macromolecular Science and Engineering, College of Engineering

Andrew Ault, Dow Corning Assistant Professor of Chemistry and Associate Professor of Chemistry, College of Literature, Science, and the Arts

Paul Zimmerman, Professor of Chemistry, College of Literature, Science, and the Arts

Allison Steiner, Professor of Climate and Space Sciences and Engineering, College of Engineering

The rate at which we can collect and analyze environmental samples containing emerging pollutants, such as microplastics and nanoplastics, limits our understanding of them. Current imaging and microscopy techniques require a significant amount of manual labor, and even after the initial data acquisition, the path from raw measurement to final characterization is long and difficult. AI advancements have the potential to automate critical parts of the data-processing pipeline, saving at least an order of magnitude in time.


AI for Ionospheric Disturbance Prediction

Shasha Zou, Associate Professor of Climate and Space Sciences and Engineering, College of Engineering

Yang Chen, Assistant Professor of Statistics, College of Literature, Science, and the Arts

Critical infrastructure in the civilian, commercial, and military sectors can be harmed by space weather. Understanding the underlying physical processes of space weather, as well as improving our specification and forecasting, are required at the national level to protect vital assets on the ground and in space. One of the five major threats identified in the National Space Weather Strategy and Action Plan is ionospheric disturbance, specifically total electron content (TEC). We hope that advances in AI applied to large datasets from satellite systems will improve the specification and forecasting of local and global ionospheric TEC and its variability.
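
As a toy illustration of data-driven TEC forecasting, the sketch below fits a linear autoregressive model to a synthetic TEC-like series with a diurnal cycle. The series, lag order, and model are illustrative stand-ins, not this project’s data or methods:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for a vertical TEC time series (in TEC units) with a
# diurnal cycle (24 samples per "day") plus observational noise.
n = 24 * 60
t = np.arange(n)
tec = 20 + 8 * np.sin(2 * np.pi * t / 24) + rng.normal(scale=0.5, size=n)

# Build a lagged design matrix and fit a linear autoregressive forecaster:
# predict TEC one step ahead from the previous p samples.
p = 24
X = np.column_stack([tec[i:n - p + i] for i in range(p)])
y = tec[p:]
split = len(y) - 100  # hold out the last 100 steps for evaluation
coef, *_ = np.linalg.lstsq(X[:split], y[:split], rcond=None)

# One-step-ahead forecast error on the held-out segment.
pred = X[split:] @ coef
rmse = np.sqrt(np.mean((pred - y[split:]) ** 2))
```

Operational forecasting would instead ingest global, multi-satellite datasets and capture storm-time departures from the quiet diurnal pattern, which is where modern AI methods come in.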


How Terrestrial Ecosystems Store and Release Carbon

Aimee Classen, Professor of Ecology and Evolutionary Biology and Director, Biological Station, College of Literature, Science, and the Arts

The study of how terrestrial ecosystems store carbon and release it back into the atmosphere is important for climate change science. We aim to understand the diversity of plants and their distribution, including root growth and productivity. Roots are important for soil carbon, which is the largest pool of terrestrial carbon. Currently, we must analyze plant and root images manually to build our data sets, which limits their size and introduces errors. AI could automate this image analysis, yielding far larger data sets, stronger inferences, and, eventually, a better understanding of climate change.

Engineering

AI for Wearable Robots

Robert Gregg, Associate Professor of Robotics, Associate Professor of Electrical Engineering and Computer Science and Associate Professor of Mechanical Engineering, College of Engineering

Emerging lower-limb exoskeletons and powered prosthetic legs require sophisticated control methods to convert sensor data into motor actions in collaboration with the human user, but these control methods must be tailored to each user’s unique gait. Individual differences in gait cannot be fully parameterized by measurable anatomical quantities, so AI methods are needed to identify patterns of individuality in large datasets of human locomotion. These patterns can then be used to tune or adapt wearable robots to their human users and achieve better outcomes.


A Virtual Observatory that Combines the Best Features of Physical Telescopes

David Fouhey, Assistant Professor of Electrical Engineering and Computer Science, College of Engineering

Telescopes often need to trade spatial resolution (how much detail you can see) for the size of the field of view (how big a view you have). The Helioseismic and Magnetic Imager (HMI) on NASA’s Solar Dynamics Observatory (SDO) has a large field of view and reasonably good spatial resolution. The Solar Optical Telescope Spectro-Polarimeter (SOT-SP) on the JAXA/NASA Hinode mission emphasizes high spatial resolution but has a limited field of view and slower temporal resolution. We built SynthIA (Synthetic Inversion Approximation), a deep-learning system that benefits both missions by capturing the best of each instrument’s characteristics. We use SynthIA to generate a new data product that combines the higher resolution of Hinode data with the large field of view and high temporal resolution of the SDO data.
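
The underlying idea — learning to emulate one instrument’s derived data product from another instrument’s observables using co-aligned observations — can be sketched in miniature. Everything below is synthetic and illustrative: three made-up “channels” and a polynomial least-squares fit standing in for SynthIA’s deep network:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy co-aligned pair: instrument A records 3 channels per pixel, and
# instrument B's derived product is (here, by construction) a fixed
# nonlinear function of those channels. Emulation learns B's product from
# A's observables, so B-like maps can be synthesized wherever A observes.
n_pix = 5000
a = rng.uniform(-1, 1, size=(n_pix, 3))
b = 2.0 * a[:, 0] - 1.5 * a[:, 1] ** 2 + 0.5 * a[:, 2]

def design(x):
    """Polynomial feature expansion: intercept, linear, and squared terms."""
    return np.column_stack([np.ones(len(x)), x, x ** 2])

# Fit on 4000 co-aligned pixels, evaluate on the held-out 1000.
split = 4000
coef, *_ = np.linalg.lstsq(design(a[:split]), b[:split], rcond=None)
pred = design(a[split:]) @ coef
rmse = np.sqrt(np.mean((pred - b[split:]) ** 2))
```

Because the toy target is exactly polynomial, the fit is essentially perfect; real inversion products have no such closed form, which is why a deep network is used in practice.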

Mathematical Sciences

Coming Soon

Physical Sciences

Shooting for the Stars: A Deep Search for Encounterable Objects Beyond Neptune

David Gerdes, Arthur F Thurnau Professor, Professor of Physics, Chair, Department of Physics and Professor of Astronomy, College of Literature, Science, and the Arts

What if we could find an object with a diameter of less than 100 km at a distance of 9 billion kilometers from Earth and visit it with a spacecraft? This is what we are trying to do as members of the science team of NASA’s New Horizons Kuiper Belt Extended Mission. The New Horizons spacecraft was launched in 2006 and flew by Pluto in 2015, returning stunning images that changed our understanding of this icy world. The spacecraft is now traversing the distant realm of the outer solar system, which is populated by thousands of small bodies that have remained undisturbed since the formation of the solar system. There is enough fuel left to divert the spacecraft to another object, if one can be found. Finding moving objects at this distance, however, is a daunting task because they are too faint to be seen in individual images from even the largest telescopes. We are employing AI methods to combine images taken weeks, months, or even years apart with various telescopes, effectively turning entire observing campaigns into an ultra-deep exposure with a single, massive telescope. This is similar to the techniques used by the Event Horizon Telescope collaboration to image the black hole at our galaxy’s center. To achieve this goal, breakthroughs in AI-based image analysis, Bayesian inference, and large-scale GPU-based computation will be required, as well as some luck. The payoff would be a once-in-a-lifetime opportunity to learn something new.
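
The image-combination strategy resembles the classic “shift and stack” technique: co-add many frames along a hypothesized motion vector so the moving source’s signal accumulates while the noise averages down. The minimal simulated sketch below assumes the drift rate is known; a real search must scan over a huge grid of candidate velocities, which is what drives the computational cost:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy image stack: a source too faint to detect in any single frame
# (per-frame S/N of 1) drifts across the field at 1 pixel per frame.
n_frames, size = 50, 64
flux, noise = 1.0, 1.0
frames = rng.normal(scale=noise, size=(n_frames, size, size))
row, start, vel = 32, 10, 1  # invented source track
for k in range(n_frames):
    frames[k, row, start + vel * k] += flux

def shift_and_stack(frames, vx):
    """Co-add frames after undoing a hypothesized per-frame drift vx."""
    return sum(np.roll(f, -vx * k, axis=1) for k, f in enumerate(frames))

# Stacking at the correct velocity concentrates the flux in one pixel;
# the stacked S/N grows like sqrt(n_frames), lifting it above the noise.
stack = shift_and_stack(frames, vel)
peak = np.unravel_index(np.argmax(stack), stack.shape)
```

Here the brightest pixel of the stack recovers the source’s starting position, even though the source is invisible in every individual frame.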

Detecting Dwarf Galaxies and Understanding Dark Matter

Eric Bell, Arthur F Thurnau Professor and Professor of Astronomy, College of Literature, Science, and the Arts

The number, properties, and distribution of the least luminous dwarf galaxies are humanity’s best current probe into the nature and distribution of dark matter. Nearby faint galaxies are so dispersed across the sky that they can only be found by looking for clusters of very faint stars. The Vera C. Rubin Observatory’s Legacy Survey of Space and Time will image billions of stars in our own Milky Way, as well as hundreds of very nearby galaxies and roughly 20 billion distant galaxies. Unfortunately, faint stars are vastly outnumbered by compact galaxies with similar observed features. We are leading the effort to develop and test supervised machine learning methods for distinguishing between stars and galaxies at the faintest possible limits, potentially allowing the discovery of hundreds of faint nearby dwarf galaxies – increasing the number known by orders of magnitude and giving precious insight into the nature and distribution of dark matter.
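
As a toy illustration of supervised star–galaxy separation, the sketch below trains a logistic-regression classifier on two simulated morphological features (a measured size and a concentration index). The feature values and class distributions are invented for the demo; at the faintest survey limits the classes overlap far more, which is what makes the real problem hard:

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated catalog: stars are point sources (size near the PSF width,
# high concentration); compact galaxies are slightly extended. Columns:
# [measured size, concentration index]. Labels: 1 = star, 0 = galaxy.
n = 2000
stars = rng.normal([1.0, 3.0], 0.15, size=(n, 2))
gals = rng.normal([1.4, 2.4], 0.25, size=(n, 2))
X = np.vstack([stars, gals])
y = np.concatenate([np.ones(n), np.zeros(n)])

# Logistic regression trained by gradient descent (the supervised step).
Xb = np.column_stack([np.ones(len(X)), X])  # add intercept column
w = np.zeros(3)
for _ in range(500):
    p = 1 / (1 + np.exp(-Xb @ w))
    w -= 0.1 * Xb.T @ (p - y) / len(y)

# Classify at the 0.5 probability threshold and score on training data.
pred = (1 / (1 + np.exp(-Xb @ w)) > 0.5).astype(float)
accuracy = np.mean(pred == y)
```

Real pipelines use many more features (colors, multi-band morphology) and more flexible classifiers, evaluated carefully on held-out and spectroscopically confirmed samples.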

The Universe’s First Light and the Distribution of Matter

Oleg Gnedin, Professor of Astronomy, College of Literature, Science, and the Arts

Over the last decade, numerical simulations of galaxy formation have provided our most accurate models of how galaxies emerged in the early universe. We at the University of Michigan have created cutting-edge simulations that reveal the structure of the first galaxies, a primary focus of the recently launched James Webb Space Telescope. Such numerical models are a necessary complement to the upcoming cosmic frontier observations. We have already gained experience using deep learning methods on these simulations to reveal non-trivial connections between giant black holes and the galaxies that host them, and we have formed collaborations between domain and technique experts from across the university. Building on these pilot projects, we will use AI to investigate links between the sources of the universe’s first light and the large-scale distribution of matter, the majority of which is unseen “dark matter.”

Understanding the Nature of Dark Matter

Monica Valluri, Research Professor, Astronomy and Adjunct Lecturer in Astronomy, College of Literature, Science, and the Arts

Astronomers are assembling public datasets with billions of Milky Way stars, which we are using to understand the nature of “dark matter”: the substance that constitutes roughly 85% of the mass in the Universe yet has never been directly detected. Dark matter is distributed in a halo around our Galaxy, and the positions and velocities of the stars that travel through it can be used to determine the dark matter distribution. Halo stars came from former satellite galaxies that were shredded beyond recognition, yet multi-dimensional unsupervised learning algorithms can carry out “Galactic archeology” to determine the properties of the satellites that built our Galaxy. In addition, neural networks applied to stellar spectra can tell us how far away stars are, when and where they were born, and even the nature of dark matter in the satellites that delivered them.
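
The “Galactic archeology” step can be caricatured with an unsupervised clustering sketch: in a space of conserved quantities (here, two made-up integrals of motion, such as energy and angular momentum), debris from each disrupted satellite stays clumped long after its stars disperse across the sky, so a clustering algorithm can recover the parent groups. The data and the minimal k-means below are stand-ins for the multi-dimensional methods described:

```python
import numpy as np

rng = np.random.default_rng(6)

# Invented "integrals of motion" for stars from three disrupted
# satellites: each clump is one satellite's debris.
centers = np.array([[-1.2, 0.8], [-0.5, -0.6], [0.9, 0.2]])
stars = np.vstack([c + 0.08 * rng.normal(size=(300, 2)) for c in centers])

def kmeans(x, init, iters=20):
    """Minimal k-means with a fixed deterministic initialization."""
    cent = x[init].copy()
    for _ in range(iters):
        # Assign each star to its nearest centroid, then update centroids.
        labels = np.argmin(((x[:, None, :] - cent) ** 2).sum(-1), axis=1)
        cent = np.array([x[labels == j].mean(axis=0) for j in range(len(init))])
    return labels, cent

# Seed one centroid in each clump (deterministic, for a stable demo).
labels, cent = kmeans(stars, init=[0, 300, 600])
```

Each recovered cluster corresponds to one “parent satellite,” whose member stars can then be studied together; real analyses work in higher-dimensional chemo-dynamical spaces with density-based methods that need not fix the number of clusters in advance.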