AI in Science Postdoctoral Fellowship Program

What is AI in Science?


How can AI enable research breakthroughs? There are as many ways as our creativity allows. The collection of ideas on this page by no means defines the scope of our program; rather, it is meant to stimulate our imagination and push the envelope of how AI can be instrumental to science and engineering.

This collection will keep growing as we receive new entries. U-M faculty members who would like to submit their ideas are welcome to email us.

Example AI in Science Projects at Michigan

Applying AI to Dopamine Activity in Real Time

Jill Becker, Patricia Y Gurin Collegiate Professor of Psychology, Professor of Psychology, College of Literature, Science, and the Arts and Research Professor, Michigan Neuroscience Institute, Medical School

Cynthia Chestek, Associate Professor of Biomedical Engineering, College of Engineering and Medical School and Associate Professor of Electrical Engineering and Computer Science, College of Engineering

Dopamine (DA), a neurotransmitter, is known to play a role in reward, motivation, and learning. How is DA activity in different brain regions controlled during behavior? How does the DA response differ from one brain structure to another, even in the same animal? How do we connect these various responses to the animal’s ongoing behavior? We can measure DA concentrations in multiple brain areas every 15 msec while the animals are moving around, and we can combine the neural data with data about animal location, time, cell types, behaviors, and individual traits. AI methods, such as the machine learning toolbox that we are developing, in combination with other state-of-the-art analytical methods, will allow us to make full use of the rich data that we now have.
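As a toy illustration of what such analysis can look like (not the group's actual toolbox), the sketch below fits a linear decoder that predicts a behavioral variable from multi-region DA measurements. All numbers here are synthetic stand-ins: the region count, sampling, and weights are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the real recordings: DA concentration sampled every
# 15 ms in four brain regions, plus a behavioral readout (e.g., movement
# speed) that we try to decode from the neural signals.
n_samples, n_regions = 2000, 4
dopamine = rng.normal(size=(n_samples, n_regions))
true_weights = np.array([0.8, -0.3, 0.5, 0.0])   # the 4th region is uninformative
behavior = dopamine @ true_weights + 0.1 * rng.normal(size=n_samples)

# Fit a linear decoder by least squares: behavior ~ dopamine @ w
w, *_ = np.linalg.lstsq(dopamine, behavior, rcond=None)

# How well does the decoded behavior track the real one?
predicted = dopamine @ w
r = np.corrcoef(predicted, behavior)[0, 1]
print(r > 0.95)
```

A real analysis would replace the linear decoder with the nonlinear models in the group's toolbox, but the structure, aligning high-rate neural time series against behavior and fitting a predictive map, is the same.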

Using Machine Learning to Monitor Animal Movement in Real-Time

Jacinta Beehner, Professor of Psychology and Professor of Anthropology, College of Literature, Science, and the Arts

The largest movement dataset from any wild primate comes from 25 Kenyan baboons over the course of two weeks. However, tagging animals is neither feasible nor ethical for many primates, despite the fact that they are arguably the most interesting taxa for asking compelling theoretical and evolutionary questions about collective action problems and the interplay of social and spatial networks. Because primate vocalizations have unique signatures, we can use them in supervised and semi-supervised deep learning to identify individual animals as they move through a landscape. This approach will provide unprecedented opportunities for us to understand social dynamics within and across animal groups in their natural habitats.
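A minimal supervised sketch of identifying individuals from their calls (not the project's actual deep-learning pipeline): synthetic feature vectors stand in for real acoustic features, and a nearest-centroid rule stands in for a trained network. The individual count, feature dimension, and separability are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for acoustic features (e.g., spectrogram summaries) of
# calls from three known individuals; each individual has a characteristic
# feature profile, echoing the unique vocal signatures described above.
centers = rng.normal(scale=3.0, size=(3, 8))      # 3 individuals, 8 features
labels = np.repeat(np.arange(3), 50)              # 50 labeled calls each
calls = centers[labels] + rng.normal(size=(150, 8))

# A minimal supervised classifier: nearest centroid in feature space
centroids = np.stack([calls[labels == k].mean(axis=0) for k in range(3)])

def identify(call):
    """Assign a call to the individual whose centroid is closest."""
    return int(np.argmin(np.linalg.norm(centroids - call, axis=1)))

accuracy = np.mean([identify(c) == y for c, y in zip(calls, labels)])
print(accuracy > 0.9)
```

The semi-supervised part of the real project would then propagate these identities to unlabeled calls recorded across the landscape.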


Developing novel AI methods for understanding a wide range of time-varying biomedical and health processes

Ivo Dinov, Professor of Nursing, Director Academic Program, School of Nursing and Professor of Computational Medicine and Bioinformatics, Medical School

Traditionally, longitudinal data are simply modeled as time-series. This new AI technique utilizes an innovative complex-time (kime) mathematical representation of repeated measurement observations. The spacekime analytics approach transforms observed 1D time-course curves to higher dimensional mathematical objects called manifolds. For time-varying biomedical processes, the main challenges in understanding normal and pathological patterns and forecasting diagnostic predictions are related to stochastic variations (noise) often exceeding the actual intensity of the signal we are trying to model. This challenge has remained stubbornly difficult because of the intrinsic limitations of classical low-dimensional representations of time dynamics.

Spacekime analytics capitalizes on the richer structure, geometry, and topology of analytic and parametric manifold representations of time-varying observations. Embedding a 2D sphere in a 3D space allows us to perceive depth, width and height along the three spatial dimensions. Quantifying the shape, curvature, and geodesic distance measures of a 2D sphere requires its higher dimensional embedding in 3D space. Similarly, the higher-dimensional complex-time representation of longitudinal data facilitates deeper understanding of the underlying mechanisms governing the temporal dynamics of biomedical data tracked over time. There are ongoing mental health (psychosis and bipolar) and neurodegeneration (aging and dementia) validation studies of these spacekime AI methods using cross-sectional and longitudinal data, e.g., fMRI, genomics, medical and phenotypic information.
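The core move, indexing repeated measurements by an extra phase-like dimension rather than treating them as one noisy time series, can be caricatured in a few lines. This is only an illustrative sketch of the lifting idea, not the spacekime mathematics itself; the sine signal, repeat count, and noise level are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy illustration of lifting repeated 1D time courses into a 2D "kime" grid:
# each repeated measurement of the same process is indexed by (time, phase),
# turning a bundle of noisy curves into one higher-dimensional object.
t = np.linspace(0, 2 * np.pi, 100)
signal = np.sin(t)                        # underlying temporal dynamic
n_repeats = 16                            # repeated acquisitions (kime-phases)
kime_surface = signal + 0.5 * rng.normal(size=(n_repeats, t.size))

# One simple operation on the kime surface: aggregating across the phase
# axis recovers the signal far better than any single noisy time course.
recovered = kime_surface.mean(axis=0)
err_single = np.abs(kime_surface[0] - signal).mean()
err_kime = np.abs(recovered - signal).mean()
print(err_kime < err_single)
```

The actual spacekime analytics operates on the geometry and topology of these higher-dimensional objects rather than simple phase averaging, but the benefit of escaping the 1D representation is visible even here.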

A key advantage of complex-time representation of longitudinal processes is the disruptive potential for ubiquitous applications across multiple scientific domains, economic sectors, and human experiences. The spacekime representation exploits the deep connections between the mathematical formulation of quantum physics, computational data science fundamentals, and artificial intelligence algorithms. Any advances in understanding the basic principles of complex-time observability and its theoretical characteristics may lead to progress in exploring invariance and equivariance of statistical estimations, new quantum physics applications, and deeper understanding of bio-mechanistic dynamics.

Building analytical frameworks to study the spatial distributions and interactions of individual cells in and around cancerous tumors

Maria Masotti, Research Assistant Professor, Biostatistics, School of Public Health

These frameworks use data from multiplex imaging technologies to discover new biomarkers of tumor development, drug response, and more. Existing methods to quantify spatial cellular interactions do not scale to the rapidly evolving technical landscape, in which researchers can now map over fifty cellular markers at single-cell resolution with thousands of cells per image. Our novel way of summarizing the spatial and phenotypic information of multiplex images allows direct application of machine learning techniques to uncover associations between patient-level outcomes and cellular colocalization in the tumor.
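One of the simplest spatial summaries such frameworks build on is a nearest-neighbor colocalization score between two cell phenotypes. The sketch below computes it on synthetic cell coordinates; the phenotype names, counts, and coordinate scale are invented, and the real methods use far richer summaries.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy multiplex-imaging scene: (x, y) coordinates of two cell phenotypes.
tumor = rng.uniform(0, 100, size=(200, 2))
immune = rng.uniform(0, 100, size=(150, 2))

def mean_nearest_neighbor_distance(cells_a, cells_b):
    """Mean distance from each cell in A to its nearest neighbor in B."""
    diffs = cells_a[:, None, :] - cells_b[None, :, :]   # pairwise offsets
    dists = np.linalg.norm(diffs, axis=-1)              # (n_a, n_b) distances
    return float(dists.min(axis=1).mean())

score_random = mean_nearest_neighbor_distance(tumor, immune)

# If immune cells cluster tightly around tumor cells, the score drops:
colocated = tumor + rng.normal(scale=1.0, size=tumor.shape)
score_colocated = mean_nearest_neighbor_distance(tumor, colocated)
print(score_colocated < score_random)
```

Per-image scores like these, computed for many phenotype pairs, become features that machine learning models can relate to patient-level outcomes.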

Our team is developing new methods to use patient-level outcomes to inform the discovery of spatial biomarkers of the tumor microenvironment. These discoveries may help clinicians predict which patients will respond to cancer therapies, or inform the development of new treatments.

Reducing plastic waste by identifying new electrochemical approaches to recycle and reuse plastics

Ambuj Tewari, Professor of Statistics, College of Literature, Science, and the Arts and Professor of Electrical Engineering and Computer Science, College of Engineering

Anne McNeil, Carol A Fierke Collegiate Professor of Chemistry, Arthur F Thurnau Professor, Professor of Chemistry, College of Literature, Science, and the Arts and Professor of Macromolecular Science and Engineering, College of Engineering

Nanta Sophonrat, Schmidt AI in Science Fellow, Michigan Institute for Data Science

Paul Zimmerman, Professor of Chemistry, College of Literature, Science, and the Arts

We are using an active-transfer machine learning approach in which we leverage both machine learning from existing data and expert chemist knowledge, the so-called “chemist-in-the-loop”. Because there is little data on electrochemical cleavage of polymers, we will use AI tools to build a model based on relevant electrochemical reactions of small molecules. The model will suggest possible reaction conditions, and chemists will choose which experiments to conduct. The experimental results can then be used to update the model so that it gives better suggestions. In short, AI tools will help accelerate reaction development.
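The suggest-experiment-update loop can be sketched in miniature. This is a schematic of the chemist-in-the-loop idea, not the group's actual model: `run_experiment` is a hypothetical stand-in for a real electrochemical experiment, and the uncertainty measure is a deliberately simple proxy.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hidden ground truth standing in for a real electrochemical experiment:
# reaction yield as a function of one tunable condition (e.g., potential).
def run_experiment(x):
    return np.exp(-(x - 3.0) ** 2)

candidates = np.linspace(0, 10, 101)     # conditions the model may suggest
tried_x = [0.0, 10.0]                    # two initial experiments
tried_y = [run_experiment(x) for x in tried_x]

for _ in range(10):
    # Uncertainty proxy: distance to the nearest already-tested condition.
    # The model suggests the condition it knows least about; the chemist
    # runs that experiment, and the result updates the dataset.
    uncertainty = [min(abs(c - x) for x in tried_x) for c in candidates]
    next_x = float(candidates[int(np.argmax(uncertainty))])
    tried_x.append(next_x)
    tried_y.append(run_experiment(next_x))

best_x = tried_x[int(np.argmax(tried_y))]
print(run_experiment(best_x) > 0.5)
```

In practice the chemist can veto or reprioritize suggestions, and the model is a learned surrogate rather than a distance rule, but the loop structure is the same.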

Currently, we don’t have an efficient way to recycle most plastics, so they are landfilled. While we are initially targeting one type of plastic, the approach can be applied to other types of plastics, and help reduce our plastic waste problem.

AI for Ionospheric Disturbance Prediction

Shasha Zou, Associate Professor of Climate and Space Sciences and Engineering, College of Engineering

Yang Chen, Assistant Professor of Statistics, College of Literature, Science, and the Arts

Critical infrastructure in the civilian, commercial, and military sectors can be harmed by space weather. Understanding the underlying physical processes of space weather, as well as improving our specification and forecasting, are required at the national level to protect vital assets on the ground and in space. One of the five major threats identified in the National Space Weather Strategy and Action Plan is ionospheric disturbance, specifically total electron content (TEC). We hope that advances in AI applied to large datasets from satellite systems will improve the specification and forecasting of local and global ionospheric TEC and its variability.
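As a minimal sketch of data-driven TEC forecasting (not the project's actual models), the example below fits a linear autoregression that predicts the next value of a noisy periodic series from its recent history. The period, noise level, and lag window are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy TEC-like series: a daily periodic signal plus observational noise.
t = np.arange(500)
tec = 20 + 5 * np.sin(2 * np.pi * t / 24) + rng.normal(scale=0.5, size=t.size)

p = 24                                         # lag window (one "day" of history)
X = np.stack([tec[i:i + p] for i in range(len(tec) - p)])
y = tec[p:]

coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # linear AR(p) forecaster

pred = X @ coef
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
print(rmse < 1.0)
```

Real ionospheric forecasting replaces the single toy series with global, satellite-derived TEC maps and the linear model with modern AI architectures, but the lagged-prediction framing carries over.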


Understanding the energy cost and climate impact of AI: how much energy does an AI model consume for training and during inference?

Mosharaf Chowdhury, Morris Wellman Faculty Development Professor of Computer Science and Engineering, Associate Professor, Electrical Engineering & Computer Science

The proliferation of open-source AI models has enabled us to build Zeus, which powers tools like the ML.ENERGY Leaderboard, where one can see the energy consumption of different GenAI models in real time. Using AI and other optimization technologies, we’re building tools not only to measure energy consumption but also to reduce it.

We’re extending Zeus to understand the energy characteristics of AI models at timescales ranging from milliseconds to days, weeks, and months. Our work reduces energy consumption by up to 24% for GenAI models like GPT-3, variations of which power commercial services like ChatGPT. Reduced energy consumption lowers carbon emissions and directly benefits the climate.
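At its core, energy measurement means integrating power draw over time. The sketch below shows that accounting step on synthetic power readings; it is an illustrative computation, not the actual Zeus API, and the sampling rate and power values are invented.

```python
import numpy as np

# Energy accounting from power samples: the energy a GPU consumes over a
# window is the time-integral of its power draw. This mirrors, in spirit,
# what an energy-measurement tool must do with periodic power readings.
timestamps = np.linspace(0.0, 10.0, 101)          # seconds, 10 Hz sampling
power_watts = np.full_like(timestamps, 300.0)     # steady 300 W draw

# Trapezoidal integration of power over time gives energy in joules
dt = np.diff(timestamps)
energy_joules = float(np.sum(0.5 * (power_watts[1:] + power_watts[:-1]) * dt))
print(round(energy_joules))                       # 300 W for 10 s -> 3000 J
```

Real measurements deal with fluctuating power, multiple devices, and attribution of energy to specific training or inference windows, which is where tooling like Zeus earns its keep.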

How Terrestrial Ecosystems Store and Release Carbon

Aimee Classen, Professor of Ecology and Evolutionary Biology and Director, Biological Station, College of Literature, Science, and the Arts

The study of how terrestrial ecosystems store carbon and release it back into the atmosphere is important for climate change science. We aim to understand the diversity of plants and their distribution, including root growth and productivity. Roots are important for soil carbon, which is the largest pool of terrestrial carbon. Currently, we must manually analyze plant and root images to build our data sets, which limits data size and introduces errors. AI could automate image analysis, resulting in far larger data sets, stronger inferences, and, eventually, a better understanding of climate change.
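The kind of measurement being automated can be shown in miniature: segment root pixels from a soil background and report coverage. This is a deliberately simple thresholding sketch on a synthetic image, not the deep-learning segmentation a real pipeline would use.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy automated image analysis: separate "root" pixels from a soil image by
# intensity thresholding and report root coverage -- the kind of measurement
# currently done by hand on real root images.
image = rng.uniform(0.0, 0.4, size=(64, 64))   # dark soil background
image[30:34, :] = 0.9                          # a bright horizontal root

root_mask = image > 0.6                        # simple global threshold
coverage = float(root_mask.mean())             # fraction of root pixels
print(coverage)                                # 4 root rows / 64 -> 0.0625
```

Trained segmentation models replace the fixed threshold, but the downstream quantity, root area per image across thousands of images, is what feeds the ecological datasets.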


Understanding and designing new catalysts for use in sustainable energy, fuel, and environmental applications

Bryan Goldsmith, Assistant Professor of Chemical Engineering, College of Engineering

Catalysts, materials that accelerate rates of reactions without being consumed, have traditionally been designed using trial-and-error approaches, which are expensive and slow. AI tools and methodologies are enabling less expensive and faster solutions to understanding and designing improved catalytic materials for important energy and environmental applications.

Advances in state-of-the-art deep neural networks, reinforcement learning, and generative modeling are allowing the prediction of new materials with desirable properties at a much faster pace than ever before. Realizing the full potential of AI to predict useful new catalysts would broadly impact major challenges facing society, such as climate change driven by greenhouse gas emissions.

Studying climate change impacts on biodiversity, ecosystem processes, and other aspects of nature

Kai Zhu, Associate Professor of Environment and Sustainability, School for Environment and Sustainability

One current research project addresses how climate change alters the seasonality of forest trees. We mainly use AI tools to infer the underlying mechanisms of ecological processes and to make predictions of climate change impacts. 

The application of AI in environmental science is an exciting prospect. Climate change is causing widespread disruption to ecosystems around the world. By using innovative AI methods and expanding environmental data, we can gain valuable insights into how ecosystems are affected by climate change. These insights can help us to anticipate future risks and develop strategies for climate adaptation and mitigation.


Using mathematical models to enhance complex financial systems

Erhan Bayraktar, Susan Meredith Smith Professor of Actuarial Sciences and Professor of Mathematics, College of Literature, Science, and the Arts

My work includes exploring Mean Field Game models, optimal transport methods and understanding the dynamics of learning in the presence of experts, which taps into the realm of machine learning. The ultimate goal is to provide innovative solutions to tackle practical problems associated with the valuation and optimal control of financial assets, and to develop new ways to model interactions within large populations. The incorporation of AI tools and methodologies in my research has opened up new avenues for exploration and has significantly improved the efficiency and accuracy of our experiments. AI has been instrumental in executing complex high-dimensional simulations, optimization tasks, and data analysis, which form the heart of my work.

The future of my AI-enabled research is immensely exciting with widespread potential applications. As AI continues to advance, it has the capacity to revolutionize the way we manage and perceive risk, and its implementation can drastically change various sectors, including financial markets, insurance, retirement finance, and more. The relevance of our work spans from individual financial decision-making to large-scale societal risk management. Moreover, the improvement of risk management strategies through our research can contribute to economic stability, growth, and welfare, which should resonate with a broad audience.

Developing user-centered algorithms and systems for optimally connecting people with information that helps them learn and discover

Kevyn Collins-Thompson, Associate Professor of Information, School of Information and Associate Professor of Electrical Engineering and Computer Science, College of Engineering

I use generative AI tools and methods such as large language models to create semantically rich representations of educational content and of learner and instructor needs. I use these in AI-based systems that enable productive interactions for learning, such as offering effective questions, suggestions, and recommendations. The AI-based systems I develop learn to automatically improve the quality of their interactions from experience and from user feedback.

Recent AI advances are enabling us to finally move toward truly adaptive learning experiences for learners and instructors that will revolutionize how effectively and efficiently we can understand and support human learning for any goal or population. While recognizing the accompanying risks and challenges, and working to address them as part of my research, I believe AI capabilities are now at a point that will allow us to leverage the incredible expertise of human teachers in ways that will help make education more personalized and accessible.


Optimizing the built environment with machine learning and AI tools

Matias Del Campo, Associate Professor of Architecture, A Alfred Taubman College of Architecture and Urban Planning

The AR2IL laboratory at Taubman College of Architecture and Urban Planning is currently working on three research projects: the optimization of social housing using ML, the building of a 3D model dataset for architecture applications, and the automated sorting of demolition rubble using machine vision. Without AI tools, none of these research projects would be possible. The optimization of social housing is based on an apartment dataset built with diversity in mind. The dataset (“Common House”) is comparably small, and we are working on expanding it with plans from various world regions. Learning tools allow us to train a machine to find the balance between functionality, aesthetics, and local culture. It is an ongoing project whose benefits, in the best case, would enhance living conditions for underprivileged populations. The 3D model dataset (Model Mine) is the first dataset built by architects for architecture tasks, ensuring that it complies with high standards of architectural quality. The application of machine vision to demolition tasks (Rubble Robot) allows us to reduce the currently massive carbon footprint of building demolition by ensuring that as much of the material as possible gets recycled.

The AR2IL lab is the first interdisciplinary laboratory dedicated to the built environment, in regard to both design and building methods. This collaboration between Architecture, Computer Science, Robotics, and Data Science is unique, and the lab has already gained wide recognition as the leading lab for the use of Artificial Intelligence in architectural design, serving as a template for multiple labs founded around the globe in the last year. This powerful combination allows us to provide society with an architecture informed by data harvested from millennia of architectural history, providing novel solutions for the problems at hand. Every aspect of the built environment will be affected, whether housing, infrastructure, transportation, or cultural buildings.

AI for Wearable Robots

Robert Gregg, Associate Professor of Robotics, Associate Professor of Electrical Engineering and Computer Science and Associate Professor of Mechanical Engineering, College of Engineering

Emerging lower-limb exoskeletons and powered prosthetic legs necessitate sophisticated control methods to convert sensor data to motor actions in collaboration with the human user, but these control methods must be tailored to each user’s unique gait. Individual differences in gait cannot be parameterized by measurable anatomical quantities, so AI methods are required to identify trends in individuality from large datasets of human locomotion. These trends can then be used to effectively tune or adapt wearable robots to their human users in order to achieve positive results.


A Virtual Observatory that Combines the Best Features of Physical Telescopes

David Fouhey, Assistant Professor of Electrical Engineering and Computer Science, College of Engineering

Telescopes often need to trade spatial resolution (how much detail you can see) for field of view (how big a view you have). The Helioseismic and Magnetic Imager (HMI) on NASA’s Solar Dynamics Observatory (SDO) has a large field of view and reasonably good spatial resolution. The Solar Optical Telescope Spectro-Polarimeter (SOT-SP) of the JAXA/NASA Hinode mission emphasizes high spatial resolution but has a limited field of view and slower temporal resolution. We built SynthIA (Synthetic Inversion Approximation), a deep-learning system that can improve both missions by capturing the best of each instrument’s characteristics. We use SynthIA to generate a new data product that has the higher resolution of Hinode data and the large field of view and high temporal resolution of the SDO data.

Using AI to address gaps in nuclear power plant engineering systems

Majdi Radaideh, Assistant Professor of Nuclear Engineering and Radiological Sciences, College of Engineering

The Artificial Intelligence and Multiphysics Simulations (AIMS) group aims to address the gap between AI and complex engineering systems in the areas of accelerating multiphysics modeling and simulation tools, system optimization, autonomous control, uncertainty quantification, and explainability. Our research focuses on nuclear power plant engineering systems.

Creating a comprehensive mathematical or simulation model to encompass all the complexities of intricate engineering systems such as nuclear power plants is a formidable challenge. Nonetheless, harnessing the power of AI, with the ability to continuously gather data from sensors within these systems, allows for the development of data-centric technologies. These innovations can facilitate various applications, including digital twins, autonomous control, and the use of robotics for routine inspections. Moreover, AI can contribute to enhancing and expediting simulation tools for these systems by employing AI models to replace empirical correlations or accelerate specific aspects of the code. A faster simulation tool often leads to more robust system optimization.

AIMS is dedicated to empowering AI in engineering systems that operate in less-than-ideal virtual environments, where perfection in data or models is unattainable. In this setting, you can expect to confront issues such as noisy data, intricate geometries, resource-intensive models, and sophisticated systems that prioritize safety. Given our research’s emphasis on adaptable and versatile algorithms, the successful integration of AI technologies into nuclear systems has the potential to catalyze groundbreaking advancements in other engineering domains.


What’s the law that governs the rare events in complex physical systems?

Yang Zhang, Professor of Nuclear Engineering and Radiological Sciences, College of Engineering

AI tools such as explainable dimension reduction can help extract the minimal set of critical parameters needed to describe complex systems. We are developing interpretable AI methods that translate black-box computer models into human-understandable knowledge. This approach can not only enhance our trust in AI models but also accelerate the discovery of minimal mathematical theories of complex systems.

Shooting for the Stars: A Deep Search for Encounterable Objects Beyond Neptune

David Gerdes, Arthur F Thurnau Professor, Professor of Physics, Chair, Department of Physics and Professor of Astronomy, College of Literature, Science, and the Arts

What if we could find an object with a diameter of less than 100 km at a distance of 9 billion kilometers from Earth and visit it with a spacecraft? This is what we are trying to do as members of NASA’s New Horizons Kuiper Extended Mission science team. The New Horizons spacecraft was launched in 2006 and flew by Pluto in 2015, returning stunning images that changed our understanding of this icy world. The spacecraft is now traversing the distant realm of the outer solar system, which is populated by thousands of small bodies that have remained undisturbed since the formation of the solar system. There is enough fuel left to divert the spacecraft to another object, if one can be found. Finding moving objects at this distance, however, is a daunting task because they are too faint to be seen in individual images from even the largest telescopes. We are employing AI methods to combine images taken weeks, months, or even years apart with various telescopes, effectively turning entire observing campaigns into an ultra-deep exposure with a single, massive telescope. This is similar to the techniques used by the Event Horizon Telescope collaboration to image the black hole at our galaxy’s center. To achieve this goal, breakthroughs in AI-based image analysis, Bayesian inference, and large-scale GPU-based computation will be required, as well as some luck. The payoff would be a once-in-a-lifetime opportunity to learn something new.
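The image-combination idea at the heart of this search is shift-and-stack: shift each frame by the object's hypothesized motion so its signal adds coherently while the noise averages out. The sketch below demonstrates it on synthetic frames; the frame count, source brightness, and velocity are invented, and the real problem additionally searches over unknown motions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Shift-and-stack: a source too faint to see in any single noisy frame
# becomes detectable after shifting each frame by the object's hypothesized
# motion and summing, since the signal adds coherently and the noise doesn't.
n_frames, size = 50, 32
velocity = 1                                        # pixels per frame along x
frames = rng.normal(scale=1.0, size=(n_frames, size, size))
for i in range(n_frames):
    frames[i, 16, (5 + i * velocity) % size] += 1.0 # faint moving source

# Undo the hypothesized motion before summing
stack = np.zeros((size, size))
for i in range(n_frames):
    stack += np.roll(frames[i], -i * velocity, axis=1)

# The source lands at its frame-0 position, far above the stacked noise
print(float(stack[16, 5]) > 25.0)
```

With 50 frames, the signal grows 50-fold while the noise grows only about 7-fold, which is why combining entire observing campaigns can reveal objects invisible in any single exposure.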

Detecting Dwarf Galaxies and Understanding Dark Matter

Eric Bell, Arthur F Thurnau Professor and Professor of Astronomy, College of Literature, Science, and the Arts

The number, properties and distribution of the least luminous dwarf galaxies are humanity’s best current probe into the nature and distribution of dark matter. Nearby faint galaxies are so dispersed across the sky that they can only be found by looking for clusters of very faint stars. The Vera C. Rubin Observatory’s Legacy Survey of Space and Time will image billions of stars in our own Milky Way, as well as hundreds of very nearby galaxies and roughly 20 billion distant galaxies. Unfortunately, faint stars are vastly outnumbered by compact galaxies with similar observed features. We are leading the effort to develop and test supervised machine learning methods for distinguishing between stars and galaxies at the faintest possible limits, potentially allowing the discovery of hundreds of faint nearby dwarf galaxies – increasing the number known by orders of magnitude and giving precious insight into the nature and distribution of dark matter.

The Universe’s First Light and the Distribution of Matter

Oleg Gnedin, Professor of Astronomy, College of Literature, Science, and the Arts

Numerical simulations of galaxy formation have provided the most accurate models for the origin of the universe over the last decade. We at the University of Michigan have created cutting-edge simulations that reveal the structure of the first galaxies, which are the primary focus of the recently launched James Webb Space Telescope. Such numerical models are a necessary complement to the upcoming cosmic frontier observations. We’ve already gained experience using deep learning AI methods on these simulations to reveal non-trivial connections between giant black holes and the galaxies that host them, and we’ve formed collaborations between domain and technique experts from across the university. Building on these pilot projects, we will use AI to investigate links between the sources of the universe’s first light and the large-scale distribution of matter, the majority of which is unseen “dark matter.”

Understanding the Nature of Dark Matter

Monica Valluri, Research Professor, Astronomy and Adjunct Lecturer in Astronomy, College of Literature, Science, and the Arts

Astronomers are assembling public datasets with billions of Milky Way stars, which we are using to understand the nature of “dark matter”, the substance that constitutes 85% of the mass in the Universe but remains undetected. Dark matter is distributed in a halo around our Galaxy, and the positions and velocities of stars that travel through it can be used to determine the dark matter distribution. Halo stars came from former satellites that were shredded beyond recognition, yet multi-dimensional unsupervised learning algorithms can carry out “Galactic archeology” to determine the properties of the satellites that built our Galaxy. In addition, neural networks applied to the spectra of stars can tell us how far away they are, when and where they were born, and even the nature of dark matter in the satellites that delivered them.
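The unsupervised "archeology" step can be caricatured with a clustering sketch: stars inherit the dynamical properties of their parent satellite, so clustering recovers the parents. The example below uses synthetic 2D points and a minimal k-means; the satellite count, positions, and 2D space are all invented for illustration (real work clusters in higher-dimensional dynamical spaces).

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy "Galactic archeology": halo stars retain properties of the disrupted
# satellites they came from, so unsupervised clustering recovers the parents.
satellite_centers = np.array([[0.0, 0.0], [10.0, 10.0], [-10.0, 8.0]])
origin = rng.integers(0, 3, size=300)
stars = satellite_centers[origin] + rng.normal(scale=0.8, size=(300, 2))

# Farthest-point initialization, then a minimal k-means (Lloyd's algorithm)
centroids = [stars[0]]
for _ in range(2):
    d = np.min([np.linalg.norm(stars - c, axis=1) for c in centroids], axis=0)
    centroids.append(stars[int(np.argmax(d))])
centroids = np.array(centroids)

for _ in range(20):
    d = np.linalg.norm(stars[:, None] - centroids[None], axis=-1)
    assign = d.argmin(axis=1)                         # assign stars to clusters
    centroids = np.stack([stars[assign == k].mean(axis=0) for k in range(3)])

# Recovered cluster centers match the (hidden) satellite positions
print([round(float(x)) for x in np.sort(centroids[:, 0])])   # -> [-10, 0, 10]
```

In the real problem the clusters are heavily overlapping streams in phase space, which is why more sophisticated multi-dimensional algorithms are needed.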


Transformative acceleration of challenging computational problems in quantum physics and scientific computing through deep neural networks

Shravan Veerapaneni, Professor of Mathematics, College of Literature, Science, and the Arts

Deep neural networks are versatile function approximators. This means that whenever a computational problem can be posed as one of function approximation, there exists potential for transformative speedup. In quantum physics, for example, the ground state eigenvalue problem can be reformulated as function approximation. AI advancements, including autoregressive sampling, have been leveraged to accelerate the learning of many-body wave functions.
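Autoregressive sampling itself is simple to state: draw configuration variables one at a time, each conditioned on those already drawn. The sketch below does this for toy spin chains with a hand-written conditional rule; in a neural quantum state, a network would supply the conditionals, and everything here (the alignment rule, probabilities, chain length) is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(9)

# Autoregressive sampling sketch: draw spin configurations one site at a
# time, each conditioned on the sites already drawn. A toy rule (each spin
# prefers to copy the previous one) stands in for a learned conditional.
def sample_chain(n_sites, align_prob=0.9):
    spins = [int(rng.integers(0, 2))]        # first spin: fair coin
    for _ in range(n_sites - 1):
        # conditional p(s_i | s_{i-1}): copy previous spin w.p. align_prob
        same = rng.random() < align_prob
        spins.append(spins[-1] if same else 1 - spins[-1])
    return np.array(spins)

samples = np.stack([sample_chain(20) for _ in range(2000)])

# The samples reflect the conditional structure: ~90% of adjacent pairs agree
agreement = float((samples[:, :-1] == samples[:, 1:]).mean())
print(round(agreement, 1))                   # -> 0.9
```

The advantage of this construction for many-body wave functions is that samples are exact and independent, with no Markov-chain mixing time, which is what makes autoregressive models attractive for accelerating variational calculations.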

Developments in transformer-based autoregressive language models harbor considerable untapped potential for accelerating the search through chemical space for viable compounds and materials. Improvements in approximating ground state energies have downstream applications that can unlock mysteries of material science, including high-temperature superconductivity and efficient drug discovery.

Solving long-standing quantum mechanical problems in chemistry and materials science using advanced machine learning methods

Paul Zimmerman, Assistant Professor of Chemistry, College of Literature, Science, and the Arts

Research by the Zimmerman group, supported by the Schmidt AI in Science program, is using advanced machine learning methods to solve long-standing quantum mechanical problems in chemistry and materials science. The behavior of electrons dictates characteristics of chemical bonds and structure, so predicting how electrons move can give tremendous insight into the properties and design of molecules and materials. But how can this be done, given the huge complexity of the quantum mechanics that decide electronic motion? Using a combination of graph neural networks and symbolic regression, we are seeking low-dimensional representations of high-dimensional quantum problems. By training these representations using highly-accurate results based on the Schrodinger equation, patterns in electron behavior are being encoded in a dramatically more tractable form.

This research paradigm could provide a new, effective means to design molecules and materials with desirable characteristics, by leveraging the fundamental predictive power of quantum mechanics. While current-generation quantum mechanical methods either cannot handle strong correlation or are too costly for routine application, the low-dimension representations learned in this project may be able to surpass these limits. Since thousands of papers each year are published using conventional quantum methods, we hope the introduction of accurate, strongly-correlated, and low-cost quantum chemical methods will be transformative for the many researchers relying on these techniques.

Designing autonomous control algorithms for nuclear power plants

Kamal Abdulaheem, Schmidt AI in Science Fellow (2023 cohort)

My research focuses on the design of autonomous control algorithms for nuclear power plants that are capable of prognosis, diagnosis, and, importantly, decision-making and adaptation. AI and machine learning enhance control theories in areas where the model-based approach is challenging, such as model-predictive control (MPC) and sliding mode control (SMC), and they enable the decision-making capabilities of control algorithms.

This research has several potential benefits. It offers a partial answer to climate change by making nuclear energy, which does not generate greenhouse gases, more widely available, and it addresses energy security, especially in developing countries, by making nuclear energy more accessible there. Nuclear energy will complement renewable energy. My research, which aims to design autonomous control for micro-reactors and small modular reactors (SMRs), will help miniaturize nuclear reactors and consequently increase the penetration of nuclear technology for power generation around the globe. Moreover, SMR and micro-reactor technologies have been a major focus of governments, investors, and scientists in recent years.

Answering fundamental questions in ecology and evolutionary biology

Jake Berv, Schmidt AI in Science Fellow (2023 cohort)

Jake’s research uses AI and machine learning to explore fundamental questions in ecology and evolutionary biology. Jake is most fascinated by how microevolutionary genetic processes operating at the level of individual organisms and populations propagate through the tree of life and time to generate macroevolutionary patterns. 

Jake’s current project utilizes a neural network to analyze bird skeletal evolution using high-throughput measurements from museum specimens. He is particularly interested in how small genetic variations at the individual and population levels can lead to significant evolutionary shifts. Jake’s research program is motivated by two broad evolutionary mechanisms: contingency and convergence. Evolutionary contingency refers to the role of random, unique events shaping evolution, often leading to unexpected evolutionary paths. Convergence, on the other hand, occurs when unrelated species evolve similar traits independently due to facing similar environmental pressures, revealing consistent evolutionary strategies. Examining these phenomena across the tree of life with AI methodologies holds exciting prospects for future discoveries in evolutionary biology.

Taking this lens, Jake’s research aims to investigate several overarching themes in comparative biology: What are the roles of evolutionary contingency and convergence in generating patterns of biodiversity? When and why might one of these modes of evolution predominate over the other? What are the drivers and correlates of evolutionary change? Addressing these questions requires drawing on both population-scale phenomena and larger-scale patterns that can only be observed directly in the fossil record. Overall, Jake’s interdisciplinary approach to comparative biology recognizes that variation in the “tempo” (speed) and “mode” (processes) of evolutionary change can confound the interpretation of biodiversity data, and it seeks to discover the causal factors underlying evolutionary patterns.


Improving the reliability of complex manufacturing systems through fault diagnosis and anomaly detection

Yossi Cohen, Schmidt AI in Science Fellow (2022 cohort)

AI-based analytics leverages real-time and historical sensor measurements to diagnose incipient failures in a component or system, significantly enhancing operational decision-making and key performance indicators such as yield, quality, and machine uptime. New directions include making data-driven predictions more trustworthy for human operators by applying explainable AI and uncertainty quantification techniques.
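One common flavor of such diagnostics is residual-based anomaly detection: fit a baseline model to sensor data collected under healthy operation, then flag new readings whose standardized residual is large. The sketch below is a minimal illustration with invented numbers, not the fellow's actual models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated "healthy" sensor readings: a slow drift plus measurement noise.
t = np.arange(500)
healthy = 0.01 * t + rng.normal(0, 0.5, size=t.size)

# Fit a simple linear baseline of healthy behavior (least squares).
A = np.vstack([t, np.ones_like(t)]).T
coef, *_ = np.linalg.lstsq(A, healthy, rcond=None)
sigma = np.std(healthy - A @ coef)  # residual spread under normal operation

def anomaly_score(t_new, y_new):
    """Standardized residual; values well above ~3 suggest an incipient fault."""
    pred = coef[0] * t_new + coef[1]
    return abs(y_new - pred) / sigma

# A reading consistent with the normal drift vs. a sudden excursion.
normal_score = anomaly_score(600, 0.01 * 600)
fault_score = anomaly_score(600, 0.01 * 600 + 5.0)
```

Reporting the score itself, rather than only a binary alarm, is one simple way to communicate uncertainty to a human operator.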

Identifying, quantifying, communicating, and reducing sources of uncertainty in high-dimensional, dynamic environments has broad implications for society and industry, improving trust in AI-enabled decision-making by promoting human-centric and explainable methodologies.

The “UAXAI” framework rests on five fundamental pillars (the ovals in the accompanying figure); the connections depict how the framework contributes to each pillar and, in turn, how each pillar enhances trustworthiness.

Evolvability and the story of life, nature, and evolutionary history

Matthew Andres Moreno, Schmidt AI in Science Fellow (2023 cohort)

I am interested in evolvability: the capability of evolving organisms to generate novel variation that is viable. I plan to use machine learning approaches to investigate the roles that unsupervised-learning-like processes may play in facilitating biological evolvability. I use unsupervised learning approaches such as autoencoders and generative adversarial networks (GANs) as models of evolvable genotype-phenotype maps in evolutionary experiments.
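As a toy illustration of the genotype-phenotype-map idea (a linear stand-in for the autoencoders and GANs mentioned above, with all quantities invented), mutations applied in a low-dimensional latent "genotype" space produce phenotypic changes confined to the decoder's learned directions of variation:

```python
import numpy as np

rng = np.random.default_rng(3)

# "Genotypes" are 2-D latent codes; a (here, linear) decoder produces
# 8-D "phenotypes". Training such a decoder on viable phenotypes would
# bias mutations toward viable variation -- the hypothesized link to
# evolvability.
n, latent_dim, pheno_dim = 200, 2, 8
decoder = rng.normal(0, 1, size=(latent_dim, pheno_dim))

genotypes = rng.normal(0, 1, size=(n, latent_dim))
phenotypes = genotypes @ decoder

# A small mutation in genotype space moves the phenotype only along the
# decoder's low-dimensional directions of variation.
mutation = rng.normal(0, 0.01, size=latent_dim)
parent, child = genotypes[0], genotypes[0] + mutation
delta = (child - parent) @ decoder   # phenotypic effect of the mutation
```

A nonlinear decoder (as in a real autoencoder or GAN generator) would curve these directions but preserve the key property: phenotypic variation is channeled through what the map has learned.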

I hope to contribute to the development of entirely new chapters of evolutionary theory, which will lend new depth to our story of life, nature, and our own evolutionary history. Methodology that improves the evolvability of artificial systems also has important applications in evolution-based optimization algorithms, such as genetic algorithms and genetic programming. Improvements to these algorithms will empower the discovery of better-quality solutions to real-world problems across engineering domains.


Fundamentally changing discovery of solar system bodies

Kevin Napier, Schmidt AI in Science Fellow (2023 cohort)

In addition to the so-called major planets (Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, and Neptune), our solar system is home to scads of smaller bodies known as minor planets; more than one million are known today, most of them in the asteroid belt. Taken in aggregate, the minor planets’ diverse physical and dynamical properties provide insight into our solar system’s formation and evolution.

Historically, our ability to discover progressively smaller and more distant minor planets has been driven by innovations in telescope and camera technology. Now, as hardware capabilities are reaching their theoretical and/or practical limits, how can we continue to make progress? I am working to solve this problem by developing a software tool called heliostack, which combines novel theoretical techniques in orbital mechanics with state-of-the-art AI computer vision tools to squeeze every possible minor planet discovery out of astronomical images.
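While heliostack's internals aren't described here, the core idea it builds on is shift-and-stack (also called "digital tracking"): shift many exposures along a hypothesized orbital motion so a faint moving source adds coherently while the noise averages down as 1/sqrt(N). A toy sketch with invented numbers, where the source is too faint to pass a typical single-frame detection threshold:

```python
import numpy as np

rng = np.random.default_rng(42)
n_frames, size = 30, 64
flux, noise = 2.0, 1.0          # per-frame flux below a ~5-sigma threshold
vx, vy = 1, 0                   # hypothesized sky motion (pixels / frame)

# Synthetic exposures: background noise plus a faint moving point source.
frames = rng.normal(0, noise, size=(n_frames, size, size))
x0, y0 = 10, 32
for i in range(n_frames):
    frames[i, y0 + vy * i, x0 + vx * i] += flux

# Shift each frame back along the hypothesized orbit, then average.
stack = np.zeros((size, size))
for i in range(n_frames):
    stack += np.roll(frames[i], shift=(-vy * i, -vx * i), axis=(0, 1))
stack /= n_frames

# The source now stands out: its flux is unchanged while the noise has
# dropped to noise / sqrt(n_frames).
snr_stacked = stack[y0, x0] / (noise / np.sqrt(n_frames))
```

In practice the motion (vx, vy) is unknown, so many candidate orbits must be tried; this is where AI computer-vision tools help separate real detections from the flood of noise peaks.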

Mosaic of 2401 candidate minor planets discovered using AI in the DECam Ecliptic Exploration Project (DEEP)

Images of a sub-threshold Kuiper Belt Object (KBO) in stacks of 1, 5, 10, 30, and 100 images

Advancing next-gen propulsion with machine learning for accurate flow prediction and analysis

Andreas Rauch, Schmidt AI in Science Fellow (2022 cohort)

The development of the next generation of propulsion and power-generation devices requires an enhanced understanding of the complex processes governing them. My work focuses on developing machine learning models that provide high-fidelity computational tools for flow prediction and analysis. Recent advances in both computational and experimental methods have generated extensive high-resolution fluid dynamics data that can be harnessed by AI.

This is an exciting opportunity to use machine learning to build models that are both accurate and computationally inexpensive, accelerating the design of the next low-emissions aircraft engine or power-generation device.


Improving efficiency in US emergency call centers

Christin Salley, Schmidt AI in Science Fellow (2023 cohort)

My current research project aims to improve the efficiency of emergency call centers in the United States, which are chronically understaffed, by investigating the use of artificial intelligence (AI) to handle non-emergency calls. The project will address concerns such as bias and ethics by interviewing operators and by developing AI bots that mimic human responses while accounting for biases, ultimately proposing solutions that enhance emergency dispatch systems with fairness and equity in mind.

AI tools have significantly advanced this research by enabling the research team to create bots and construct models that emulate human-like behaviors and simulate a variety of real-life scenarios in ways that traditional simulation methods cannot. I’m enthusiastic about the potential impact of this AI-enabled research: it will address not only the infrastructure issues found within dispatch centers but also the mental and physical health of operators. The cognitive-personas package I plan to use holds great promise for numerous fields, offering assistance without replacing human input, and I believe it points to an exciting future for AI applications in any field.

Predicting lead breakthrough in point-of-use drinking water filters

Alyssa Schubert, Schmidt AI in Science Fellow (2023 cohort)

Alyssa studies the point at which filters reach capacity and can no longer remove lead, aiming to detect this before lead exposure occurs. She will use multiparameter sensing and machine learning methods to forecast lead breakthrough for a variety of drinking water conditions and filters. This work will more precisely inform filter-replacement policies and reduce lead exposure via drinking water. It is especially timely in Michigan, where the installation of filters in schools and daycares has recently been required by law.
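A minimal sketch of what breakthrough forecasting can look like, using entirely synthetic data and a simple trend extrapolation rather than Alyssa's actual models (the 15 ppb threshold echoes the U.S. EPA lead action level, but treat it here as an illustrative number):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic filter-life log: effluent lead (ppb) creeps up exponentially
# as the cumulative volume treated approaches the filter's capacity.
volume = np.linspace(0, 300, 60)      # liters treated so far
lead = 0.5 * np.exp(volume / 100) + rng.normal(0, 0.1, volume.size)

# Fit the exponential trend by linear least squares in log space.
mask = lead > 0
slope, intercept = np.polyfit(volume[mask], np.log(lead[mask]), 1)

def predicted_breakthrough(action_level=15.0):
    """Volume at which the fitted trend crosses the action level (ppb)."""
    return (np.log(action_level) - intercept) / slope

v_breakthrough = predicted_breakthrough()
```

A real forecaster would fold in the multiparameter sensing mentioned above (pH, flow rate, influent chemistry) as additional model inputs, since breakthrough volume varies strongly with water conditions.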


Building operational capacity for autonomous robots

Elena Shrestha, Schmidt AI in Science Fellow (2023 cohort)

Although AI-based autonomous systems have demonstrated effectiveness in structured indoor environments in recent years, their ability to operate in real-world, unstructured settings remains significantly underdeveloped. My research develops learning-based guidance and control algorithms for aerial, ground, and maritime robots to extend their operational capabilities and enable safe operation in dynamic, unpredictable real-world environments. I leverage physics-based simulators and model-based reinforcement learning techniques that enable an agent to autonomously construct a representative world model of its environment, which it then uses to learn a policy that guides its behavior. The overarching vision is to enable unmanned autonomous systems to perform tasks considered too “dull, dirty, and dangerous” for humans.

Fast-tracking functional material discovery

Soumi Tribedi, Schmidt AI in Science Fellow (2022 cohort)

Computational exploration of chemicals is crucial for fast-tracking functional material discovery. Density Functional Theory (DFT), a widely used quantum chemical modeling method, calculates electronic structure efficiently, but its accuracy falters in strongly correlated electron systems. I plan to enhance the accuracy of DFT using deep learning, leveraging data from more accurate yet expensive calculations. DFT’s universal functional dependence ensures that the learned density functional remains valid across diverse chemical systems. This approach aims to balance affordability and precision in electronic structure predictions.

I employ symbolic regression (SR) to acquire atom-centered basis functions for exchange-correlation potentials. Subsequently, I integrate these basis functions into a message-passing graph neural network (MPGNN). This approach combines the interpretability of SR with the powerful modeling capabilities of MPGNN for a comprehensive understanding of exchange-correlation potentials—a previously elusive factor in achieving exact solutions for DFT.
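The message-passing idea at the heart of an MPGNN can be shown in a few lines. The sketch below is a generic single layer on a toy water molecule with invented features and random weights, not the actual exchange-correlation model described above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy molecular graph: water (O bonded to two H atoms). In the real
# model, node features would carry atom-centered basis-function data.
adjacency = np.array([[0, 1, 1],      # O-H and O-H bonds
                      [1, 0, 0],
                      [1, 0, 0]], dtype=float)
features = np.array([[8.0, 6.0],      # e.g. atomic number, valence e-
                     [1.0, 1.0],
                     [1.0, 1.0]])

# One message-passing layer: each atom sums its bonded neighbors'
# features, then applies learned linear maps and a nonlinearity.
W_self = rng.normal(0, 0.1, size=(2, 4))
W_nbr = rng.normal(0, 0.1, size=(2, 4))

messages = adjacency @ features                               # aggregate
hidden = np.maximum(0, features @ W_self + messages @ W_nbr)  # ReLU
```

Because the two hydrogens have identical features and identical neighborhoods, they receive identical embeddings; this permutation symmetry is what makes graph networks natural for molecules.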

Active research in chemistry and physics has focused on the quest for an “exact” and “universal” density functional. The theoretical potential of a universal functional applicable to diverse chemical systems, including molecules and materials, has intrigued researchers for decades but has proven elusive. By incorporating deep learning, particularly in a symbolic, interpretable form obtained through symbolic regression, we aim to tackle this longstanding unresolved issue. Doing so not only addresses the decades-long challenge but also promises to unlock cost-effective, highly accurate computations of chemical properties and electronic structures, which in turn will facilitate the discovery of new chemicals.


Integrating AI into the inverse design of topological photonic crystal systems

Xin Xie, Schmidt AI in Science Fellow (2022 cohort)

Topological photonics, a forefront field in physics, presents a complex interplay between the design of devices and their properties. Traditional approaches demand extensive computational effort and typically confine designs to a limited space, heavily dependent on physical intuition. Artificial intelligence provides a more efficient alternative, enabling exploration of a vast, non-intuitive design landscape. This paves the way toward practical applications of optical devices, enhancing their robustness and functionality.