Explore ARC

Patrick Schloss


The Schloss lab is broadly interested in beneficial and pathogenic host-microbiome interactions, with the goal of improving our understanding of how the microbiome can be used to reach translational outcomes in the prevention, detection, and treatment of colorectal cancer, Crohn’s disease, and Clostridium difficile infection. To address these questions, the lab tests traditional ecological theory in the microbial context using a systems biology approach. Specifically, the laboratory specializes in using studies involving human subjects and animal models to understand how biological diversity affects community function, drawing on a variety of culture-independent genomics techniques including sequencing of 16S rRNA gene fragments, metagenomics, and metatranscriptomics. In addition, the lab uses metabolomics to understand the functional role of the gut microbiota in states of health and disease. To support these efforts, they develop and apply bioinformatic tools to facilitate their analyses. Most notable is the development of the mothur software package (https://www.mothur.org), which is one of the most widely used tools for analyzing microbiome data and has been cited more than 7,300 times since it was initially published in 2009. The Schloss lab deftly merges cutting-edge wet-lab techniques for collecting data with computational tools for synthesizing those data to answer important biological questions.
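As a purely illustrative sketch (not the Schloss lab’s actual pipeline), the kind of community-level summary that follows from 16S rRNA amplicon sequencing can be as simple as a diversity index computed from an OTU count table; the sample names and counts below are hypothetical.

```python
# Illustrative only: Shannon diversity from a hypothetical OTU count table, the kind
# of summary produced after processing 16S rRNA amplicon data (e.g., with mothur).
# Sample names and counts are made up for demonstration.
import math

# rows = samples, columns = OTUs (operational taxonomic units); hypothetical counts
otu_counts = {
    "sample_A": [120, 85, 40, 10, 5],
    "sample_B": [200, 10, 3, 1, 0],
}

def shannon_diversity(counts):
    """Shannon index H' = -sum(p_i * ln p_i) over OTUs with nonzero counts."""
    total = sum(counts)
    props = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in props)

for sample, counts in otu_counts.items():
    print(f"{sample}: H' = {shannon_diversity(counts):.3f}")
```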

Given the explosion in microbiome research over the past 15 years, the Schloss lab has also stood at the center of a major effort to train interdisciplinary scientists in applying computational tools to study complex biological systems. These efforts have centered around developing reproducible research skills and applying modern data visualization techniques. An outgrowth of these efforts at the University of Michigan has been the institutionalization of The Carpentries organization on campus (https://carpentries.org), which specializes in peer-to-peer instruction of programming tools and techniques to foster better reproducibility and build a community of practitioners.

The Schloss lab uses computational tools to integrate multi-omics tools in a culture-independent approach to understand how bacteria interact with each other and their host to drive processes such as colorectal cancer and susceptibility to Clostridium difficile infections.

Victoria Morckel


Dr. Morckel uses spatial and statistical methods to examine ways to improve quality of life for people living in shrinking, deindustrialized cities in the Midwestern United States. She is especially interested in the causes and consequences of population loss, including issues of vacancy, blight, and neighborhood change.

Suitability Analysis Results: Map of Potential Properties to Naturalize in the City of Flint, Michigan.

Tim Cernak


Tim Cernak, PhD, is Assistant Professor of Medicinal Chemistry with secondary appointments in Chemistry and the Chemical Biology Program at the University of Michigan, Ann Arbor.

The functional and biological properties of a small molecule are encoded within its structure, so synthetic strategies that access diverse structures are paramount to the invention of novel functional molecules such as biological probes, materials, or pharmaceuticals. The Cernak Lab studies the interface of chemical synthesis and computer science to understand the relationships among structure, properties, and reactions. We aim to use algorithms, robotics, and big data to invent new chemical reactions, synthetic routes to natural products, and small-molecule probes to answer questions in basic biology. Researchers in the group learn high-throughput chemical and biochemical experimentation, basic coding, and modern synthetic techniques. By studying the relationship of chemical synthesis to functional properties, we pursue the opportunity to positively impact human health.

Lawrence Seiford


Professor Seiford’s research interests are primarily in the areas of quality engineering, productivity analysis, process improvement, multiple-criteria decision making, and performance measurement. In addition, he is recognized as one of the world’s experts in the methodology of Data Envelopment Analysis. His current research involves the development of benchmarking models for identifying best practices in manufacturing and service systems. He has authored or co-authored four books and over one hundred articles in the areas of quality, productivity, operations management, process improvement, decision analysis, and decision support systems.
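At its core, Data Envelopment Analysis scores each decision-making unit (DMU) against a best-practice frontier by solving a small linear program per unit. The sketch below is a generic illustration of the standard input-oriented CCR formulation, not any of Prof. Seiford’s specific benchmarking models; the input and output figures are hypothetical.

```python
# Minimal sketch of an input-oriented CCR DEA model: for each DMU, minimize theta such
# that some nonnegative combination of all DMUs uses at most theta times its inputs
# while producing at least its outputs. Data below are hypothetical.
import numpy as np
from scipy.optimize import linprog

inputs = np.array([[2.0, 3.0], [4.0, 2.0], [3.0, 5.0]])   # rows = DMUs, cols = inputs
outputs = np.array([[1.0], [2.0], [1.5]])                  # rows = DMUs, cols = outputs
n_dmu, n_in = inputs.shape
n_out = outputs.shape[1]

def ccr_efficiency(o):
    # Decision variables: [theta, lambda_1, ..., lambda_n]
    c = np.r_[1.0, np.zeros(n_dmu)]                 # objective: minimize theta
    A_ub, b_ub = [], []
    for i in range(n_in):                           # sum_j lambda_j * x_ij <= theta * x_io
        A_ub.append(np.r_[-inputs[o, i], inputs[:, i]])
        b_ub.append(0.0)
    for r in range(n_out):                          # sum_j lambda_j * y_rj >= y_ro
        A_ub.append(np.r_[0.0, -outputs[:, r]])
        b_ub.append(-outputs[o, r])
    bounds = [(0, None)] * (1 + n_dmu)              # theta and all lambdas nonnegative
    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

for o in range(n_dmu):
    print(f"DMU {o}: efficiency = {ccr_efficiency(o):.3f}")
```

An efficiency of 1 marks a DMU on the best-practice frontier; scores below 1 indicate how far its inputs could be scaled down while still supporting its observed outputs.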

Kathleen M Bergen


Kathleen M Bergen, PhD, is Associate Research Scientist in the School for Environment and Sustainability at the University of Michigan, Ann Arbor. Dr. Bergen currently has interim administrative oversight of the SEAS Environmental Spatial Analysis Laboratory (ESALab) and is interim Director of the campus-wide Graduate Certificate Program in Spatial Analysis.

Prof. Bergen works in the areas of human dimensions of environmental change; remote sensing, GIS, and biodiversity informatics; and environmental health and informatics. Her focus is on combining field and geospatial data and methods to study the pattern and process of ecological systems, biodiversity, and health. She also strives to build bridges between science and social science to understand the implications of human actions for the social and natural systems of which we are a part. She teaches courses in Remote Sensing and Geographic Information Systems. Formerly, she served as a founding member of the UM Library’s MIRLYN implementation team, directed the University Map Collection, and set up the M-Link reference information network.

Brenda Gillespie


Brenda Gillespie, PhD, is Associate Director of Consulting for Statistics, Computing and Analytics Research (CSCAR), with a secondary appointment as Associate Research Professor in the Department of Biostatistics in the School of Public Health at the University of Michigan, Ann Arbor. She provides statistical collaboration and support for numerous research projects at the University of Michigan. She teaches Biostatistics courses as well as CSCAR short courses in survival analysis, regression analysis, sample size calculation, generalized linear models, meta-analysis, and statistical ethics. Her major areas of expertise are clinical trials and survival analysis.

Prof. Gillespie’s research interests are in the area of censored data and clinical trials. One research interest concerns the application of categorical regression models to the case of censored survival data. This technique is useful in modeling the hazard function (instead of treating it as a nuisance parameter, as in Cox proportional hazards regression), or in the situation where time-related interactions (i.e., non-proportional hazards) are present. An investigation comparing various categorical modeling strategies is currently in progress.
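As a generic illustration of modeling the hazard directly with categorical regression (rather than treating it as a nuisance parameter, as in Cox regression), the discrete-time person-period approach below expands each subject into one row per interval at risk and fits a logistic regression; interval indicators estimate the hazard in each interval, and interval-by-covariate interactions can capture non-proportional hazards. This is a standard textbook technique, not Prof. Gillespie’s specific methodology, and the data are fabricated.

```python
# Illustrative discrete-time (person-period) hazard model with fabricated data:
# each subject contributes one row per interval survived, and a logistic regression
# on those rows models the interval-specific hazard directly.
import pandas as pd
import statsmodels.formula.api as smf

subjects = pd.DataFrame({
    "id": [1, 2, 3, 4],
    "time": [2, 3, 1, 3],        # interval of event or censoring
    "event": [1, 0, 1, 1],       # 1 = event observed, 0 = censored
    "treated": [1, 1, 0, 0],
})

# Expand to person-period format: one row per interval each subject was at risk.
rows = []
for _, s in subjects.iterrows():
    for t in range(1, int(s["time"]) + 1):
        rows.append({
            "id": s["id"],
            "interval": t,
            "treated": s["treated"],
            # Event indicator is 1 only in the final interval, and only if the event occurred.
            "event": int(t == s["time"] and s["event"] == 1),
        })
pp = pd.DataFrame(rows)

# Interval entered as a categorical factor models the hazard in each interval;
# adding interval:treated interactions would allow non-proportional hazards.
model = smf.logit("event ~ C(interval) + treated", data=pp).fit(disp=False)
print(model.params)
```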

Another area of interest is the analysis of cross-over trials with censored data. She has developed (with M. Feingold) a set of nonparametric methods for testing and estimation in this setting; these methods outperform previous methods in most cases.

Ding Zhao


Ding Zhao, PhD, is Assistant Research Scientist in the Department of Mechanical Engineering, College of Engineering, with a secondary appointment in the Robotics Institute at the University of Michigan, Ann Arbor.

Dr. Zhao’s research interests include autonomous vehicles, intelligent/connected transportation, traffic safety, human-machine interaction, rare events analysis, dynamics and control, machine learning, and big data analysis.


Heather B. Mayes


Heather B. Mayes, PhD, is Assistant Professor of Chemical Engineering in the College of Engineering at The University of Michigan, Ann Arbor.

Team Mayes and Blue focuses on discovering fundamental structure-function relationships that govern how proteins and sugars interact in applications ranging from renewable materials to human health. We use atomistic simulation (molecular mechanics and quantum mechanics) to determine the fundamental, microscopic interactions that give rise to macroscopically observable phenomena. The resulting mechanistic understanding is harnessed to engineer more efficient proteins to meet biotechnology needs, whether to break down biomass into feedstock for renewable fuels and chemicals or to create prebiotic carbohydrates.

Molecular simulations allow us to discover fundamental mechanistic processes, such as the overall energies associated with carbohydrate procession into an enzyme (A), and the individual structural components governing the mechanism, such as electrostatic interactions as a function of position (B). These simulations create rich data sets from which we can determine these structure-function relationships and use them to make predictions of how mutations to proteins can change function, thus enabling rational enzyme design.
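As a toy illustration of the position-dependent electrostatics idea in panel (B), and not the group’s actual molecular mechanics workflow, the Coulomb interaction between a moving point charge and a fixed set of environment charges can be scanned along one coordinate; the charges, positions, and geometry below are hypothetical.

```python
# Toy illustration (not the group's actual simulation setup): Coulomb interaction
# energy between a moving point charge and fixed environment charges, evaluated as a
# function of the moving charge's position along one axis. Values are hypothetical.
import numpy as np

K = 332.06  # Coulomb constant in kcal*angstrom/(mol*e^2), as used in MM force fields

# Fixed "environment" charges (positions in angstroms, charges in units of e)
env_positions = np.array([[0.0, 0.0, 0.0], [3.0, 1.0, 0.0], [5.0, -1.0, 0.0]])
env_charges = np.array([-0.5, 0.3, -0.4])

probe_charge = 0.4  # charge of the moving atom

def coulomb_energy(probe_xyz):
    """Sum of pairwise Coulomb terms between the probe and each environment charge."""
    dists = np.linalg.norm(env_positions - probe_xyz, axis=1)
    return K * probe_charge * np.sum(env_charges / dists)

# Scan the probe along x to produce a position-dependent interaction profile
for x in np.linspace(1.0, 8.0, 8):
    e = coulomb_energy(np.array([x, 2.0, 0.0]))
    print(f"x = {x:4.1f} A   E_elec = {e:8.2f} kcal/mol")
```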


Murali Mani


Murali Mani, PhD, is Associate Professor of Computer Science at the University of Michigan, Flint.

The significant research problems Prof. Mani is investigating include the following: big data management, big data analytics and visualization, provenance, query processing of encrypted data, event stream processing, XML stream processing, data modeling using XML schemas, and effective computer science education. In addition, he has worked in industry on clickstream analytics (2015) and on web search engines (1999-2000). Prof. Mani’s significant publications are listed on DBLP at http://dblp.uni-trier.de/pers/hd/m/Mani:Murali.

Illustrating how our SMART system effectively integrates big data processing and data visualization to enable big data visualization. The left side shows a typical data visualization scenario, where the different analysts are using their different visualization systems. These visualization systems can provide interactive visualizations but cannot handle the complexities of big data. They interact with a distributed data processing system that can handle the complexities of big data. The SMART system improves the user experience by carefully sending additional data to the visualization system in response to a request from an analyst so that future visualization requests can be answered directly by the visualization system without accessing the data processing system.
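A generic sketch of that prefetching idea (not the actual SMART implementation) is shown below: a middleware layer answers a visualization request from its local cache when it can, and otherwise fetches the requested range plus an extra margin from the data processing backend so that nearby follow-up requests need no backend round trip. The function names, ranges, and margin are hypothetical.

```python
# Generic sketch of the prefetching idea described above (not the actual SMART code):
# a layer between the visualization client and the big-data backend serves requests
# from a local cache when possible, and on a miss fetches the requested range plus a
# margin so that likely follow-up requests can be answered without the backend.

def backend_query(lo, hi):
    """Stand-in for the distributed data processing system (expensive to call)."""
    print(f"  [backend] scanning range [{lo}, {hi})")
    return list(range(lo, hi))

class PrefetchingCache:
    def __init__(self, margin=100):
        self.margin = margin          # how much extra data to prefetch around a request
        self.lo = self.hi = None      # bounds of the contiguous cached range
        self.data = []

    def query(self, lo, hi):
        if self.lo is not None and lo >= self.lo and hi <= self.hi:
            print(f"  [cache] hit for [{lo}, {hi})")
            return self.data[lo - self.lo: hi - self.lo]
        # Cache miss: fetch the requested range plus a prefetch margin on both sides.
        fetch_lo, fetch_hi = max(0, lo - self.margin), hi + self.margin
        self.data = backend_query(fetch_lo, fetch_hi)
        self.lo, self.hi = fetch_lo, fetch_hi
        return self.data[lo - self.lo: hi - self.lo]

cache = PrefetchingCache(margin=50)
cache.query(200, 210)   # miss: goes to the backend, prefetches [150, 260)
cache.query(215, 225)   # hit: served from the cache, no backend access
```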
