How can AI enable research breakthroughs? There are as many ways as our creativity allows. The collection of ideas on this page by no means defines the scope of our program; rather, it is meant to stimulate our imagination and push the envelope of how AI can be instrumental to science and engineering.
This collection will keep growing as we receive new entries. U-M faculty members who would like to submit their ideas should email midas-contact@umich.edu.
Biological Sciences
Jill Becker, Patricia Y Gurin Collegiate Professor of Psychology, Professor of Psychology, College of Literature, Science, and the Arts and Research Professor, Michigan Neuroscience Institute, Medical School
Cynthia Chestek, Associate Professor of Biomedical Engineering, College of Engineering and Medical School and Associate Professor of Electrical Engineering and Computer Science, College of Engineering
Dopamine (DA), a neurotransmitter, is known to play a role in reward, motivation, and learning across multiple brain regions. How is activity in these regions controlled during behavior? How does the DA response differ from one brain structure to another, even in the same animal? How do we connect these various responses to the animal’s ongoing behavior? We can measure DA concentrations in multiple brain areas every 15 msec while the animals are moving around, and we can combine the neural data with data about animal location, time, cell types, behaviors, and individual traits. AI methods, such as the machine learning toolbox that we are developing, in combination with other state-of-the-art analytical methods, will allow us to make full use of the rich data that we now have.
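A minimal sketch of what decoding behavior from multi-region DA snapshots might look like, on synthetic data; the region count, the two behavioral states, and the nearest-centroid decoder are all illustrative stand-ins, not the group's actual toolbox:

```python
import random

random.seed(0)
# Hypothetical per-region mean DA levels (three recording sites) for two
# behavioral states; real data would be 15 ms samples from moving animals.
STATES = {"rest": [0.2, 0.3, 0.25], "reward": [0.9, 0.5, 0.4]}

def simulate_sample(state):
    """One snapshot: DA concentration in each region, plus measurement noise."""
    return [m + random.gauss(0, 0.05) for m in STATES[state]]

def fit_centroids(samples, labels):
    """Average the training snapshots for each behavioral state."""
    centroids = {}
    for state in set(labels):
        rows = [x for x, l in zip(samples, labels) if l == state]
        centroids[state] = [sum(col) / len(rows) for col in zip(*rows)]
    return centroids

def decode(centroids, x):
    """Assign a snapshot to the state with the nearest centroid."""
    def sqdist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(centroids, key=lambda s: sqdist(centroids[s], x))

labels = ["rest"] * 50 + ["reward"] * 50
samples = [simulate_sample(l) for l in labels]
centroids = fit_centroids(samples, labels)
print(decode(centroids, simulate_sample("reward")))
```

The same pattern extends to richer decoders and to regressing DA signals against location, cell type, and individual traits.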
Jacinta Beehner, Professor of Psychology and Professor of Anthropology, College of Literature, Science, and the Arts
The largest movement dataset from any wild primate comes from 25 Kenyan baboons over the course of two weeks. However, tagging animals is neither feasible nor ethical for many primates, despite the fact that they are arguably the most interesting taxa for asking compelling theoretical and evolutionary questions about collective action problems and the interplay of social and spatial networks. Because primate vocalizations have unique signatures, we can use them in supervised and semi-supervised deep learning to identify individual animals as they move through a landscape. This approach will provide unprecedented opportunities for us to understand social dynamics within and across animal groups in their natural habitats.
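As a toy illustration of supervised identification from vocal signatures: real pipelines would extract spectrogram features and train deep networks, but the idea can be sketched with mocked calls (pure tones with individual-specific pitch) and a single acoustic feature; all names and parameters below are hypothetical:

```python
import math

SR = 8000  # sample rate in Hz, illustrative

def synth_call(pitch_hz, dur=0.1):
    """Mock vocalization: a pure tone at the individual's characteristic pitch."""
    n = int(SR * dur)
    return [math.sin(2 * math.pi * pitch_hz * t / SR) for t in range(n)]

def zero_crossing_rate(x):
    """Crude pitch proxy standing in for a learned acoustic embedding."""
    return sum(1 for a, b in zip(x, x[1:]) if a * b < 0) / len(x)

# "Training": one labeled call per known individual.
signatures = {name: zero_crossing_rate(synth_call(pitch))
              for name, pitch in [("baboon_A", 300), ("baboon_B", 600)]}

def identify(call):
    """Attribute a new call to the individual with the closest signature."""
    z = zero_crossing_rate(call)
    return min(signatures, key=lambda n: abs(signatures[n] - z))

print(identify(synth_call(310)))  # a call near baboon_A's pitch
```

Semi-supervised variants would propagate these labels to the large pool of unlabeled recordings collected across the landscape.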
Ivo Dinov, Professor of Nursing, Director Academic Program, School of Nursing and Professor of Computational Medicine and Bioinformatics, Medical School
Traditionally, longitudinal data are modeled simply as time series. Our new AI technique instead utilizes an innovative complex-time (kime) mathematical representation of repeated measurement observations. This spacekime analytics approach transforms observed 1D time-course curves into higher-dimensional mathematical objects called manifolds. For time-varying biomedical processes, the main challenges in understanding normal and pathological patterns and in forecasting diagnostic predictions stem from stochastic variations (noise) that often exceed the actual intensity of the signal we are trying to model. This challenge has remained stubbornly difficult because of the intrinsic limitations of classical low-dimensional representations of time dynamics.
Spacekime analytics capitalizes on the richer structure, geometry, and topology of analytic and parametric manifold representations of time-varying observations. Embedding a 2D sphere in a 3D space allows us to perceive depth, width and height along the three spatial dimensions. Quantifying the shape, curvature, and geodesic distance measures of a 2D sphere requires its higher dimensional embedding in 3D space. Similarly, the higher-dimensional complex-time representation of longitudinal data facilitates deeper understanding of the underlying mechanisms governing the temporal dynamics of biomedical data tracked over time. There are ongoing mental health (psychosis and bipolar) and neurodegeneration (aging and dementia) validation studies of these spacekime AI methods using cross-sectional and longitudinal data, e.g., fMRI, genomics, medical and phenotypic information.
A key advantage of complex-time representation of longitudinal processes is the disruptive potential for ubiquitous applications across multiple scientific domains, economic sectors, and human experiences. The spacekime representation exploits the deep connections between the mathematical formulation of quantum physics, computational data science fundamentals, and artificial intelligence algorithms. Any advances in understanding the basic principles of complex-time observability and its theoretical characteristics may lead to progress in exploring invariance and equivariance of statistical estimations, new quantum physics applications, and deeper understanding of bio-mechanistic dynamics.
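A heavily simplified toy, not the actual spacekime machinery: one intuition behind lifting repeated measurements out of the 1D time axis is that repetitions share spectral structure while noise does not, so aggregating repeats in a transformed domain can recover signal that any single noisy record obscures. Everything below (series length, frequency, noise level) is synthetic:

```python
import cmath
import math
import random

random.seed(1)
N = 32          # samples per record
TRUE_FREQ = 4   # cycles per record, illustrative

def record():
    """One noisy repeated measurement of the same underlying oscillation."""
    return [math.sin(2 * math.pi * TRUE_FREQ * t / N) + random.gauss(0, 1.0)
            for t in range(N)]

def dft_mag(x):
    """Magnitude spectrum via a naive discrete Fourier transform."""
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / N)
                    for t in range(N))) for k in range(N // 2)]

# Aggregate the magnitude spectra of many repeats: noise averages down,
# the shared oscillation persists.
repeats = [dft_mag(record()) for _ in range(50)]
avg = [sum(col) / len(repeats) for col in zip(*repeats)]
peak = max(range(1, N // 2), key=lambda k: avg[k])
print(peak)  # recovers the underlying frequency despite heavy noise
```

Spacekime analytics goes much further, treating the repetition index as a phase of complex time and working with the resulting manifold structure; this sketch only conveys why repeated measurements carry recoverable shared structure.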
Maria Masotti, Research Assistant Professor, Biostatistics, School of Public Health
Our analytic frameworks use data from multiplex imaging technologies to discover new biomarkers of tumor development, drug response, and more. Existing methods for quantifying spatial cellular interactions do not scale to a rapidly evolving technical landscape in which researchers can now map over fifty cellular markers at single-cell resolution, with thousands of cells per image. Our novel way of summarizing the spatial and phenotypic information in multiplex images allows machine learning techniques to be applied directly, unlocking associations between patient-level outcomes and cellular colocalization in the tumor.
Our team is developing new methods to use patient-level outcomes to inform the discovery of spatial biomarkers of the tumor microenvironment. These discoveries may help clinicians predict which patients will respond to cancer therapies, or inform the development of new treatments.
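A hedged sketch of one simple way a multiplex image's spatial information can be summarized into a scalar feature for downstream ML; the cell coordinates and phenotypes are synthetic, and the real methods use far richer spatial and phenotypic summaries than a nearest-neighbor distance:

```python
import math
import random

random.seed(2)

def nn_distance(source, target):
    """Mean distance from each source cell to its nearest target cell."""
    def d(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return sum(min(d(s, t) for t in target) for s in source) / len(source)

# Synthetic image 1: immune cells scattered close to tumor cells (colocalized).
tumor = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(40)]
near_immune = [(x + random.gauss(0, 0.3), y + random.gauss(0, 0.3))
               for x, y in tumor[:20]]
# Synthetic image 2: immune cells confined to a distant region (excluded).
far_immune = [(random.uniform(40, 50), random.uniform(40, 50)) for _ in range(20)]

close_score = nn_distance(tumor, near_immune)
far_score = nn_distance(tumor, far_immune)
print(close_score < far_score)  # colocalization yields a smaller summary
```

Per-image features of this kind, computed across many marker pairs, can then be associated with patient-level outcomes such as treatment response.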
Earth and Environmental Sciences
Ambuj Tewari, Professor of Statistics, College of Literature, Science, and the Arts and Professor of Electrical Engineering and Computer Science, College of Engineering
Anne McNeil, Carol A Fierke Collegiate Professor of Chemistry, Arthur F Thurnau Professor, Professor of Chemistry, College of Literature, Science, and the Arts and Professor of Macromolecular Science and Engineering, College of Engineering
Nanta Sophonrat, Schmidt AI in Science Fellow, Michigan Institute for Data Science
Paul Zimmerman, Professor of Chemistry, College of Literature, Science, and the Arts
We are using an active-transfer machine learning approach that leverages both machine learning from existing data and expert chemist knowledge, the so-called “chemist-in-the-loop”. Because there is little data on the electrochemical cleavage of polymers, we will use AI tools to build a model based on relevant electrochemical reactions of small molecules. The model will suggest possible reaction conditions, and chemists will then choose which experiments to conduct. The experimental results are then used to update the model, improving its subsequent suggestions. In short, AI tools will help accelerate reaction development.
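The suggest → experiment → update cycle can be sketched as a minimal active-learning loop. Everything here is a mock: `run_experiment` stands in for a real electrochemical measurement, the candidate list for real reaction conditions, and the farthest-from-tested rule for a proper model-based acquisition function vetted by a chemist:

```python
import random

random.seed(3)
# Hypothetical candidate conditions, e.g. applied potentials in volts.
candidates = [round(0.1 * v, 1) for v in range(1, 31)]

def run_experiment(x):
    """Mock experiment: reaction yield peaks near 1.5 V, with noise."""
    return max(0.0, 1.0 - (x - 1.5) ** 2) + random.gauss(0, 0.02)

observed = {}
for _ in range(8):  # eight rounds of suggest -> experiment -> update
    untested = [c for c in candidates if c not in observed]
    if observed:
        # Crude uncertainty proxy: suggest the condition farthest from
        # anything tested; a chemist would vet this suggestion.
        pick = max(untested, key=lambda c: min(abs(c - o) for o in observed))
    else:
        pick = untested[len(untested) // 2]
    observed[pick] = run_experiment(pick)  # "update the model" with the result

best = max(observed, key=observed.get)
print(best)  # condition with the highest measured yield so far
```

The real workflow replaces the distance heuristic with a model trained on small-molecule electrochemistry and keeps the chemist in control of which suggested experiments are actually run.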
Currently, we don’t have an efficient way to recycle most plastics, so they are landfilled. While we are initially targeting one type of plastic, the approach can be applied to other types and can help reduce our plastic waste problem.
Shasha Zou, Associate Professor of Climate and Space Sciences and Engineering, College of Engineering
Yang Chen, Assistant Professor of Statistics, College of Literature, Science, and the Arts
Critical infrastructure in the civilian, commercial, and military sectors can be harmed by space weather. Understanding the underlying physical processes of space weather, as well as improving our specification and forecasting capabilities, is required at the national level to protect vital assets on the ground and in space. One of the five major threats identified in the National Space Weather Strategy and Action Plan is ionospheric disturbance, specifically total electron content (TEC). We hope that advances in AI applied to large datasets from satellite systems will improve the specification and forecasting of local and global ionospheric TEC and its variability.
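A hedged toy of the forecasting task: a one-step-ahead prediction of a TEC-like series from a simple autoregressive fit. Operational systems ingest global satellite data and use far richer models; the series here is synthetic (a diurnal cycle plus noise):

```python
import math
import random

random.seed(4)
# Synthetic hourly TEC-like record over ten days: diurnal cycle + noise.
tec = [20 + 10 * math.sin(2 * math.pi * h / 24) + random.gauss(0, 0.5)
       for h in range(240)]

def ar1_fit(x):
    """Least-squares AR(1) around the mean: x_t - m ~ phi * (x_{t-1} - m)."""
    m = sum(x) / len(x)
    c = [v - m for v in x]
    phi = sum(a * b for a, b in zip(c, c[1:])) / sum(a * a for a in c[:-1])
    return m, phi

# Fit on all but the last observation, then forecast the held-out hour.
m, phi = ar1_fit(tec[:-1])
forecast = m + phi * (tec[-2] - m)
print(round(forecast, 1))
```

The same setup generalizes to multivariate, physics-informed, and deep-learning forecasters once the lagged-feature framing is in place.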
Mosharaf Chowdhury, Morris Wellman Faculty Development Professor of Computer Science and Engineering, Associate Professor, Electrical Engineering & Computer Science
The proliferation of open-source AI models has enabled us to build Zeus, which powers tools like the ML.ENERGY Leaderboard, where one can see the energy consumption of different GenAI models in real time. Using AI and other optimization technologies, we’re building tools not only to measure energy consumption but also to reduce it.
We’re extending Zeus to characterize the energy use of AI models at every granularity, from milliseconds to days, weeks, and months. Our work reduces energy consumption by up to 24% for GenAI models like GPT-3, variations of which power commercial services like ChatGPT. Reduced energy consumption lowers carbon emissions and directly affects climate change.
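The core of any such measurement is simple in principle: energy over a window is the integral of power over time. This sketch is not Zeus's actual API; `read_power_watts` is a mocked stand-in for a real hardware power counter:

```python
def read_power_watts(t):
    """Mock power trace: 300 W during a compute burst, 80 W at idle."""
    return 300.0 if 1.0 <= t < 5.0 else 80.0

def window_energy_joules(t_start, t_end, dt=0.01):
    """Riemann-sum approximation of the integral of P(t) dt over the window."""
    steps = int((t_end - t_start) / dt)
    return sum(read_power_watts(t_start + i * dt) for i in range(steps)) * dt

# A 4 s burst at 300 W inside a 6 s window, idle otherwise:
e = window_energy_joules(0.0, 6.0)
print(round(e))  # 4*300 + 2*80 = 1360 J
```

Measuring at millisecond granularity, then aggregating windows over days and months, is what turns point readings like these into the per-model energy profiles the Leaderboard reports.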
Aimee Classen, Professor of Ecology and Evolutionary Biology and Director, Biological Station, College of Literature, Science, and the Arts
The study of terrestrial ecosystems that store carbon and release it back into the atmosphere is important for climate change science. We aim to understand the diversity of plants and their distribution, including root growth and productivity. Roots are important for soil carbon, which is the largest pool of terrestrial carbon. Currently, we must manually analyze plant and root images to build our data sets, resulting in limited data size and errors. AI could automate this image analysis, yielding much larger data sets, stronger inferences, and, eventually, a better understanding of climate change.
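A hedged sketch of the kind of image-analysis step being automated: a synthetic grayscale root image is segmented by a simple brightness threshold to estimate root coverage. Real pipelines would use trained segmentation networks; the image, threshold, and coverage metric here are illustrative:

```python
W, H = 64, 64
# Synthetic image: dark soil background (0.2) with a brighter root strip (0.9).
image = [[0.9 if 30 <= x < 34 else 0.2 for x in range(W)] for _ in range(H)]

def root_fraction(img, threshold=0.5):
    """Fraction of pixels classified as root by a brightness threshold."""
    pixels = [p for row in img for p in row]
    return sum(1 for p in pixels if p > threshold) / len(pixels)

frac = root_fraction(image)
print(round(frac, 4))  # 4 of 64 columns are root: 0.0625
```

Automating this per-image computation across thousands of images is what converts a hand-annotation bottleneck into a large, consistent data set.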
Bryan Goldsmith, Assistant Professor of Chemical Engineering, College of Engineering
Catalysts, materials that accelerate rates of reactions without being consumed, have traditionally been designed using trial-and-error approaches, which are expensive and slow. AI tools and methodologies are enabling less expensive and faster solutions to understanding and designing improved catalytic materials for important energy and environmental applications.
Advances in state-of-the-art deep neural networks, reinforcement learning, and generative modeling are allowing the prediction of new materials with desirable properties at a much faster pace than ever before. Realizing the full potential of AI to predict new catalysts would help address major societal challenges such as climate change driven by greenhouse-gas emissions.
Kai Zhu, Associate Professor of Environment and Sustainability, School for Environment and Sustainability
One current research project addresses how climate change alters the seasonality of forest trees. We mainly use AI tools to infer the underlying mechanisms of ecological processes and to make predictions of climate change impacts.
The application of AI in environmental science is an exciting prospect. Climate change is causing widespread disruption to ecosystems around the world. By using innovative AI methods and expanding environmental data, we can gain valuable insights into how ecosystems are affected by climate change. These insights can help us to anticipate future risks and develop strategies for climate adaptation and mitigation.
Engineering
Erhan Bayraktar, Susan Meredith Smith Professor of Actuarial Sciences and Professor of Mathematics, College of Literature, Science, and the Arts
My work includes exploring Mean Field Game models, optimal transport methods and understanding the dynamics of learning in the presence of experts, which taps into the realm of machine learning. The ultimate goal is to provide innovative solutions to tackle practical problems associated with the valuation and optimal control of financial assets, and to develop new ways to model interactions within large populations. The incorporation of AI tools and methodologies in my research has opened up new avenues for exploration and has significantly improved the efficiency and accuracy of our experiments. AI has been instrumental in executing complex high-dimensional simulations, optimization tasks, and data analysis, which form the heart of my work.
The future of my AI-enabled research is immensely exciting with widespread potential applications. As AI continues to advance, it has the capacity to revolutionize the way we manage and perceive risk, and its implementation can drastically change various sectors, including financial markets, insurance, retirement finance, and more. The relevance of our work spans from individual financial decision-making to large-scale societal risk management. Moreover, the improvement of risk management strategies through our research can contribute to economic stability, growth, and welfare, which should resonate with a broad audience.
Kevyn Collins-Thompson, Associate Professor of Information, School of Information and Associate Professor of Electrical Engineering and Computer Science, College of Engineering
I use generative AI tools and methods, such as large language models, to create semantically rich representations of educational content and of learner and instructor needs. I use these representations in AI-based systems that enable productive interactions for learning, including suggesting effective questions and recommendations. The AI-based systems I develop learn to automatically improve the quality of their interactions from experience and from user feedback.
Recent AI advances are enabling us to finally move toward truly adaptive learning experiences for learners and instructors that will revolutionize how effectively and efficiently we can understand and support human learning for any goal or population. While recognizing the accompanying risks and challenges, and working to address them as part of my research, I believe AI capabilities are now at a point that will allow us to leverage the incredible expertise of human teachers in ways that will help make education more personalized and accessible.