Mohamed Abouelenien

Mohamed Abouelenien’s areas of interest broadly cover data science topics, including applied machine learning, computer vision, and natural language processing. He established the Affective Computing and Multimodal Systems Lab (ACMS), which focuses on modeling human behavior and developing multimodal approaches for different applications. He has worked on a number of projects in these areas, including multimodal deception detection, multimodal sensing of drivers’ alertness levels and thermal discomfort, distraction detection, circadian rhythm modeling, emotion and stress analysis, automated scoring of students’ progression, sentiment analysis, ensemble learning, and image processing, among others. His research is funded by Ford Motor Company (Ford), Educational Testing Service (ETS), Toyota Research Institute (TRI), and Procter & Gamble (P&G). Abouelenien has published in several top venues in IEEE, ACM, Springer, and SPIE. He has also served as a reviewer for IEEE Transactions and Elsevier journals and as a program committee member for multiple international conferences.

Carol Menassa

My group’s research focuses on understanding and modeling the interconnections between human experience and the built environment. We design autonomous systems that support the wellbeing, safety, and productivity of office and construction workers, and provide them with opportunities for lifelong learning and upskilling. In all research projects, we work hard to ensure that the results are inclusive, benefit people of different abilities in their daily activities, and empower them for nontraditional careers.

Cheng Li

My research focuses on developing advanced numerical models and computational tools to enhance our understanding and prediction capabilities for both terrestrial and extraterrestrial climate systems. By leveraging the power of data science, I aim to unravel the complexities of atmospheric dynamics and climate processes on Earth, as well as on other planets such as Mars, Venus, and Jupiter.

My approach involves the integration of large-scale datasets, including satellite observations and ground-based measurements, with statistical methods and sophisticated machine learning algorithms including vision-based large models. This enables me to extract meaningful insights and improve the accuracy of climate models, which are crucial for weather forecasting, climate change projections, and planetary exploration.

Michael Sjoding

Application of machine learning and artificial intelligence in healthcare, particularly in the field of pulmonary and critical care medicine. Deep learning applied to radiologic imaging studies. Physician and artificial intelligence interactions and collaborations. Identifying and addressing algorithmic bias.


Mark Draelos

My work focuses on image-guided medical robots with an emphasis on clinical translation. My interests include medical robotics, biomedical imaging, data visualization, medical device development, and real-time algorithms.

A major ongoing project is the development of a robotic system for automated eye examination. This system relies on machine learning models for tracking and, eventually, for interpretation of collected data. Other projects concern the live creation of virtual reality scenes from volumetric imaging modalities like optical coherence tomography, and efficient acquisition strategies for such purposes.

Rebecca Lindsey

Research in the Lindsey Lab focuses on using simulation to enable on-demand design, discovery, and synthesis of bespoke materials.

These efforts are made possible by Dr. Lindsey’s ChIMES framework, which comprises a unique physics-informed machine-learned (ML) interatomic potential (IAP) and an artificial intelligence-automated development tool that enables “quantum accurate” simulation of complex systems on scales overlapping with experiment, with atomistic resolution. Using this tool, her group elucidates fundamental materials behavior and properties that can be manipulated through advanced material synthesis and modification techniques. At the same time, her group develops new approaches to overcome grand challenges in machine learning for the physical sciences and engineering, including: training set generation, model uncertainty quantification, reproducibility and automation, robustness, and accessibility to the broader scientific community. Her group also seeks to understand what the models themselves can teach us about fundamental physics and chemistry.

Artist’s interpretation of a new laser-driven shockwave approach for nanocarbon synthesis predicted by ChIMES simulations and later validated experimentally.

Chuan Zhou

Chuan Zhou is passionate about developing decision support systems that integrate cutting-edge techniques from artificial intelligence, quantitative image analysis, computer vision, and multimodal biomedical data fusion. His research interests focus on characterizing disease abnormalities and predicting their likelihood of being clinically significant, with the goal of enabling early diagnosis and risk stratification, as well as aiding treatment decision making and monitoring.

Bing Ye

The focus of our research is to address (1) how neuronal development contributes to the assembly and function of the nervous system, and (2) how defects in this process lead to brain disorders. We take a multidisciplinary approach that includes genetics, cell biology, developmental biology, biochemistry, advanced imaging (for neuronal structures and activity), electrophysiology, computation (including machine learning and computer vision), and behavioral studies.

We are currently studying the neural basis for decision accuracy. We have established imaging and computational methods for analyzing neural activities in the entire central nervous system (CNS) of the Drosophila larva. Moreover, we are exploring the possibility of applying biological neural algorithms to robotics, both to test these algorithms and to improve robot performance.

A major goal of neuroscience is to understand the neural basis for behavior, which requires accurate and efficient quantifications of behavior. To this end, we recently developed a software tool—named LabGym—for automatic identification and quantification of user-defined behavior through artificial intelligence. This tool is not restricted to a specific species or a set of behaviors. The updated version (LabGym2) can analyze social behavior and behavior in dynamic backgrounds. We are further developing LabGym and other computational tools for behavioral analyses in wild animals and in medicine.

The behavior that this chipmunk performed was identified and quantified by LabGym, an AI-based software tool that the Ye lab developed for quantifying user-defined behaviors.

What are some of your most interesting projects?

1) Develop AI-based software tools for analyzing the behavior of wild animals and humans.
2) Use biology-inspired robotics to test biological neural algorithms.

How did you end up where you are today?

Since my teenage years, I have been curious about how brains (both human and animal) work, enjoyed playing with electronics, and been interested in the computational sciences. My curiosity and opportunities led me to become a neuroscientist. Once I had my own research team and the resources to explore my other interests, I started to build simple electronic devices for my neuroscience research and to collaborate with computational scientists who are experts in machine learning and computer vision. My lab now combines these approaches in our neuroscience research.

What makes you excited about your data science and AI research?

I am very excited about the interactions between neuroscience and data science/AI research. This is a new area with great potential to change society.

Jeong Joon Park

3D reconstruction and generative models. I use neural and physical 3D representations to generate realistic 3D objects and scenes. My current focus is large-scale, dynamic, and interactable 3D scene generation. These generative models will be highly useful for content creators, such as in games or movies, and for autonomous agent training in virtual environments. In my research, I frequently use and adapt generative modeling techniques such as auto-decoders, GANs, and diffusion models.

In my project “DeepSDF,” I suggested a new representation for 3D generative models that made a breakthrough in the field. The question I answered is: “what should a 3D model generate? Points, meshes, or voxels?” In the DeepSDF paper, I proposed that we should instead generate a “function” that takes a 3D coordinate as input and outputs a field value corresponding to that coordinate, where the “function” is represented as a neural network. This neural coordinate-based representation is memory-efficient, differentiable, and expressive, and it is at the core of the huge progress our community has made in 3D generative modeling and reconstruction.
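The coordinate-based idea can be sketched in a few lines. The toy example below is hypothetical (random, untrained weights standing in for a trained decoder), not the actual DeepSDF implementation, but it shows the core contract: the shape lives in the network weights, and the function can be queried at any coordinate and any resolution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random weights stand in for a trained DeepSDF-style decoder (hypothetical).
W1, b1 = rng.normal(size=(3, 64)), np.zeros(64)
W2, b2 = rng.normal(size=(64, 64)), np.zeros(64)
W3, b3 = rng.normal(size=(64, 1)), np.zeros(1)

def sdf(xyz: np.ndarray) -> np.ndarray:
    """Evaluate the implicit function at a batch of 3D coordinates,
    returning one scalar field value (e.g. a signed distance) per point."""
    h = np.maximum(xyz @ W1 + b1, 0.0)   # ReLU hidden layer
    h = np.maximum(h @ W2 + b2, 0.0)
    return (h @ W3 + b3).squeeze(-1)

# The same function can be queried at arbitrary resolution: here, an 8^3 grid.
axis = np.linspace(-1.0, 1.0, 8)
grid = np.stack(np.meshgrid(axis, axis, axis), axis=-1).reshape(-1, 3)
values = sdf(grid)
print(values.shape)  # one field value per queried coordinate: (512,)
```

Because the representation is a continuous function rather than a fixed grid, memory cost depends on the network size, not the sampling resolution, and gradients flow through every query.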

3D faces with appearance and geometry generated by our AI model.

There are two contributions I would like to make. First, I would like to enable AI generation of large-scale, dynamic, and interactable 3D worlds, which will benefit entertainment, autonomous agent training (robotics and self-driving), and various other scientific fields such as 3D medical imaging. Second, I would like to devise a new, more efficient neural network architecture that better mimics our brains. Current AI models are highly inefficient in how they learn from data (they require a huge number of labels) and are difficult to train continuously or with verbal/visual instructions. I would like to develop new architectures and learning methods that address these limitations.

Liyue Shen

My research interest is in Biomedical AI, which lies in the interdisciplinary areas of machine learning, computer vision, signal and image processing, medical image analysis, biomedical imaging, and data science. I am particularly interested in developing efficient and reliable AI/ML-driven computational methods for biomedical imaging and informatics to tackle real-world biomedicine and healthcare problems, including but not limited to, personalized cancer treatment, and precision medicine.

In the field of AI/ML, we focus on developing reliable, generalizable, data-efficient machine learning and deep learning algorithms by exploiting prior knowledge from the physical world, such as prior-integrated learning for data-efficient ML and uncertainty awareness for trustworthy ML. In the field of Biomedicine, we focus on developing efficient computational methods for biomedical imaging and biomedical data analysis to advance precision medicine and personalized treatment, such as multi-modal data analysis for decision making and clinical trial translation for real-world deployment.

What are some of your most interesting projects?

Our goal is to develop efficient and reliable AI/ML-driven computational methods for biomedical imaging and informatics to tackle real-world biomedicine and healthcare problems. We hope that technological advances in AI and ML can help us better understand human health at different levels. Specifically, we develop Biomedical AI in several areas, including:
– AI in Biomedical Imaging: develop novel machine learning algorithms to advance biomedical imaging techniques for obtaining computational images with improved quality. Relevant topics include, but are not limited to: implicit neural representation learning; diffusion models / score-based generative models; physics-aware / geometry-informed deep learning.
– AI in Biomedical Image Processing and Bioinformatics: develop robust and efficient machine learning algorithms to extract useful information from multimodal biomedical data for assisting decision making and precision medicine. Relevant topics include, but are not limited to: multimodal representation learning; robust learning with missing data / noisy labels; data-efficient learning such as self-, un-, and semi-supervised learning with limited data / labels.

How did you end up where you are today?

I am an assistant professor in the ECE Division of the Electrical Engineering and Computer Science department of the College of Engineering, University of Michigan – Ann Arbor. Before this, I received my Ph.D. degree from the Department of Electrical Engineering, Stanford University. I obtained my Bachelor’s degree in Electronic Engineering from Tsinghua University in 2016. I am the recipient of the Stanford Bio-X Bowes Graduate Student Fellowship (2019-2022), and was selected as a Rising Star in EECS by MIT and a Rising Star in Data Science by The University of Chicago in 2021.
