Michael Cianfrocco


Dr. Michael Cianfrocco uses cryo-electron microscopy (cryo-EM) to determine protein structures and understand how nanometer-sized molecular machines work. Although cryo-EM is a powerful technique, data collection and the subsequent image analysis remain bespoke, clunky, and heuristic. Dr. Cianfrocco is coupling his 16+ years of experience with artificial intelligence to automate data collection and processing by capturing human expertise in AI-based algorithms. Recently, his laboratory implemented reinforcement learning to guide cryo-electron microscopes during data collection [1, 2]. This work combined real-world datasets and Dr. Cianfrocco’s expertise with AI-driven optimization algorithms to find the ‘best’ areas of cryo-EM samples for data collection.
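The underlying selection problem can be framed as a sequential decision task: repeatedly choose which grid hole to image so that time on the microscope is spent on high-quality areas. As a toy illustration of that framing (not the actual CryoRL implementation), the sketch below uses an epsilon-greedy bandit over a hypothetical `hole_quality` list, where each "pull" simulates imaging a hole and receiving a noisy quality readout:

```python
import random

def collect_with_bandit(hole_quality, n_steps=500, epsilon=0.1, seed=0):
    """Epsilon-greedy hole selection: balance exploring untested holes
    against revisiting the hole with the best observed quality so far.

    hole_quality is a hypothetical ground-truth quality per grid hole;
    the agent only sees noisy readouts of it.
    """
    rng = random.Random(seed)
    estimates = [0.0] * len(hole_quality)   # running mean reward per hole
    counts = [0] * len(hole_quality)
    total_reward = 0.0
    for _ in range(n_steps):
        if rng.random() < epsilon:
            i = rng.randrange(len(hole_quality))                           # explore
        else:
            i = max(range(len(hole_quality)), key=lambda j: estimates[j])  # exploit
        reward = hole_quality[i] + rng.gauss(0, 0.05)  # noisy quality readout
        counts[i] += 1
        estimates[i] += (reward - estimates[i]) / counts[i]  # incremental mean
        total_reward += reward
    return estimates, total_reward
```

A full reinforcement-learning formulation additionally accounts for stage-movement costs between holes, which turns the problem into path planning rather than independent pulls.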

cryoRL Distributed Data Collection process diagram

After data collection, human users must curate and select areas for subsequent analysis. Subjective decisions guide how the single particles are processed and what constitutes ‘good’ data. To automate this preprocessing, Dr. Cianfrocco’s lab built the first AI-backed data preprocessing tool in cryo-EM by training convolutional neural networks (CNNs) to recognize ‘good’ and ‘bad’ cryo-EM data [3]. This work enabled fully automated cryo-EM data preprocessing, the first step in the cryo-EM processing pipeline. In the future, Dr. Cianfrocco wants to continue improving cryo-EM workflows to make them robust and automated, with the eventual goal of algorithms surpassing human experts at collecting and analyzing cryo-EM data.

1. Fan Q*, Li Y*, et al. “CryoRL: Reinforcement Learning Enables Efficient Cryo-EM Data Collection.” arXiv preprint arXiv:2204.07543 (2022).
2. Li Y*, Fan Q*, et al. “Optimized path planning surpasses human efficiency in cryo-EM imaging.” bioRxiv 2022.06.17.496614 (2022).
3. Li Y, et al. “High-Throughput Cryo-EM Enabled by User-Free Preprocessing Routines.” Structure. 2020 Jul 7;28(7):858-869.e3.
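The core of such a classifier is convolutional filtering: a learned kernel slides over the micrograph and responds to particle-like contrast. As a minimal sketch of that operation (a hand-picked Laplacian filter standing in for learned CNN weights, with a hypothetical threshold), a near-empty micrograph produces almost no filter response while a structured one does:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2D convolution, the core CNN operation.

    (Written as cross-correlation; identical to convolution here
    because the Laplacian kernel is symmetric.)
    """
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def score_micrograph(img):
    """Toy quality score: an edge-detecting Laplacian filter responds
    strongly to particle-like contrast; flat, empty images score ~0."""
    laplacian = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
    response = conv2d(img, laplacian)
    return float(np.mean(np.abs(response)))

def classify(img, threshold=0.1):
    """Threshold the score into the 'good'/'bad' labels a trained CNN
    would predict; 0.1 is an arbitrary illustrative cutoff."""
    return "good" if score_micrograph(img) > threshold else "bad"
```

In a real CNN the kernels and the decision boundary are learned from labeled ‘good’/‘bad’ micrographs rather than fixed by hand.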

Photograph of Alison Davis Rabosky

Alison Davis Rabosky

Our research group studies how and why an organism’s traits (“phenotypes”) evolve in natural populations. Explaining the mechanisms that generate and regulate patterns of phenotypic diversity is a major goal of evolutionary biology: why do we see rapid shifts to strikingly new and distinct character states, and how stable are these evolutionary transitions across space and time? To answer these questions, we generate and analyze high-throughput “big data” on both genomes and phenotypes across the 18,000 species of reptiles and amphibians worldwide. Then, we use the statistical tools of phylogenetic comparative analysis, geometric morphometrics of 3D anatomy generated from CT scans, and genome annotation and comparative transcriptomics to understand the integrated trait correlations that create complex phenotypes. Currently, we are using machine learning and neural networks to study the color patterns of animals vouchered into biodiversity collections and to test hypotheses about the ecological causes and evolutionary consequences of phenotypic innovation. We are especially passionate about the effective and accurate visualization of large-scale multidimensional datasets, and we prioritize training in both best practices and new innovations in quantitative data display.

Photograph of Nate Sanders

Nate Sanders

My research interests are broad, but they generally center on the causes and consequences of biodiversity loss at local, regional, and global scales, with an explicit focus on global change drivers. Our work has been published in Science, Nature, Science Advances, Global Change Biology, PNAS, AREES, TREE, and Ecology Letters, among other journals. We are especially interested in using AI and machine learning to explore broad-scale patterns of biodiversity and phenotypic variation, mostly in ants.

Hamid Ghanbari

My research focuses on using digital health solutions, signal processing, machine learning, and ecological momentary assessment to understand the physiological and psychological determinants of symptoms in patients with atrial fibrillation. I am building a research framework for rich data collection using smartphone apps, medical records, and wearable sensors. I believe that creating a multidimensional dataset to study atrial fibrillation will yield important insights and serve as a model for studying all chronic medical conditions.

Zheng Song

I received my second PhD in Computer Science (with a focus on distributed systems and software engineering) from Virginia Tech (USA) in 2020, and my first PhD (with a focus on wireless networking and mobile computing) from Beijing University of Posts and Telecommunications (China) in 2015. I received the Best Paper Award at the IEEE International Conference on Edge Computing in 2019. My ongoing research projects include measuring the data quality of web services and using federated learning to improve indoor localization accuracy.
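In federated learning, clients train on their private data (here, hypothetically, per-building signal measurements for indoor localization) and share only model parameters, which a server averages. The sketch below shows the federated-averaging idea on a deliberately tiny one-parameter least-squares model; all names and data are illustrative, not from the actual project:

```python
def local_sgd(w, data, lr=0.1, epochs=20):
    """One client's local training: SGD on least-squares loss (w*x - y)^2
    over that client's private (x, y) pairs. The raw data never leaves
    the client; only the updated weight w is returned."""
    for x, y in data:
        pass  # (data stays local; only w is communicated)
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x
            w -= lr * grad
    return w

def fed_avg(clients, rounds=3, w0=0.0):
    """Federated averaging: broadcast the global weight, let each client
    train locally, then average the returned weights into a new global model."""
    w = w0
    for _ in range(rounds):
        local_weights = [local_sgd(w, data) for data in clients]
        w = sum(local_weights) / len(local_weights)
    return w
```

With clients whose data all follow y = 2x, the averaged model converges to w ≈ 2 even though no client ever shares its measurements.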

Susan Hautaniemi Leonard

I am faculty at ICPSR, the largest social science data archive in the world. I manage an education research pre-registration site (sreereg.org) that is focused on transparency and replicability. I also manage a site for sharing work around record linkage, including code (linkagelibrary.org). I am involved in the LIFE-M project (life-m.org), recently classifying the mortality data. That project uses cutting-edge techniques for machine-reading handwritten forms.

Mortality rates for selected causes in the total population per 1,000, 1850–1912, Holyoke and Northampton, Massachusetts

Matias del Campo

The goal of this project is the creation of a crucial building block for research on AI and Architecture: a database of 3D models necessary to successfully run artificial neural networks in 3D. This database is one of the first stepping-stones for research at the AR2IL (Architecture and Artificial Intelligence Laboratory), an interdisciplinary laboratory between Architecture (represented by Taubman College of Architecture and Urban Planning), Michigan Robotics, and the Computer Science Department of the University of Michigan, dedicated to developing applications of artificial intelligence in architecture and urban planning. This area of inquiry has experienced explosive growth in recent years (triggered in part by research conducted at U-M), as evidenced by the growth in papers dedicated to AI applications in architecture and by industry investment in the area. The research funded by this proposal would secure the leading position of Taubman College and the University of Michigan in the field of AI and Architecture, and would address the current lack of 3D databases specifically designed for architecture applications.

The project “Generali Center” presents itself as an experiment in combining machine learning processes capable of learning the salient features of a specific architectural style (in this case, Brutalism) in order to generatively perform interpolations between the data points of the provided dataset. These images serve as the basis of a pixel-projection approach that results in a 3D model.
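Interpolating between data points typically means blending their latent representations. As a generic schematic of that step (a plain linear interpolation between two hypothetical latent vectors, not the project's actual pipeline), each intermediate vector would be decoded into one of the in-between images:

```python
def interpolate(z_a, z_b, steps):
    """Linear interpolation between two latent vectors z_a and z_b,
    returning `steps` points from z_a (t=0) to z_b (t=1) inclusive.
    Requires steps >= 2. Each point would be decoded into an image."""
    return [[(1 - t) * a + t * b for a, b in zip(z_a, z_b)]
            for t in (i / (steps - 1) for i in range(steps))]
```

Generative models are often interpolated with spherical rather than linear interpolation; the linear form above is the simplest version of the idea.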

Elizabeth F. S. Roberts

“Neighborhood Environments as Socio-Techno-bio Systems: Water Quality, Public Trust, and Health in Mexico City (NESTSMX)” is an NSF-funded multi-year collaborative interdisciplinary project that brings together experts in environmental engineering, anthropology, and environmental health from the University of Michigan and the Instituto Nacional de Salud Pública. The PI is Elizabeth Roberts (anthropology), and the co-PIs are Brisa N. Sánchez (biostatistics), Martha M. Téllez-Rojo (public health), Branko Kerkez (environmental engineering), and Krista Rule Wigginton (civil and environmental engineering). Our overarching goal for NESTSMX is to develop methods for understanding neighborhoods as “socio-techno-bio systems” and to understand how these systems relate to people’s trust in (or distrust of) their water. In the process, we will collectively contribute to our respective fields of study while we learn how to merge efforts from different disciplinary backgrounds.
NESTSMX works with families living in Mexico City who participate in an ongoing longitudinal birth-cohort chemical-exposure study, ELEMENT (Early Life Exposures in Mexico to ENvironmental Toxicants), at the U-M School of Public Health. Our research involves ethnography and environmental-engineering fieldwork, which we will combine with biomarker data previously gathered by ELEMENT. Our focus will be on the infrastructures and social structures that move water in and out of neighborhoods, households, and bodies.

Testing Real-Time Domestic Water Sensors in Mexico City


Fabian Pfeffer

My research investigates social inequality and its maintenance across time and generations. Current projects focus on wealth inequality and its consequences for the next generation, the institutional context of social mobility processes, and educational inequality in the United States and other industrialized countries. I also help expand the social science data infrastructure and quantitative methods needed to address questions on inequality and mobility. I serve as Principal Investigator of the Wealth and Mobility (WAM) study as well as Co-Investigator of the Panel Study of Income Dynamics (PSID). As such, my research draws on and helps construct nationally representative survey data as well as full-population administrative data. My methodological work has focused on causal inference, multiple imputation, and measurement error.