Explore ARC

Aaron A. King

The long temporal and large spatial scales of ecological systems make controlled experimentation difficult and the amassing of informative data challenging and expensive. The resulting sparsity and noise are major impediments to scientific progress in ecology, which therefore depends on efficient use of data. In this context, it has in recent years been recognized that the onetime playthings of theoretical ecologists, mathematical models of ecological processes, are no longer exclusively the stuff of thought experiments, but have great utility in the context of causal inference. Specifically, because they embody scientific questions about ecological processes in sharpest form—making precise, quantitative, testable predictions—the rigorous confrontation of process-based models with data accelerates the development of ecological understanding. This is the central premise of my research program and the common thread of the work that goes on in my laboratory.

Harm Derksen

Current research includes a project funded by Toyota that uses Markov models and machine learning to predict cardiac arrhythmias, an NSF-funded project to detect Acute Respiratory Distress Syndrome (ARDS) from X-ray images, and projects using tensor analysis on health care data (funded by the Department of Defense and the National Science Foundation).

Yongsheng Bai

Dr. Bai’s research interests lie in the development and refinement of bioinformatics algorithms, software, and databases for next-generation sequencing (NGS) data; the development of statistical models for solving biological problems; and bioinformatics analysis of clinical data, as well as other topics including, but not limited to, uncovering disease genes and variants using informatics approaches, computational analysis of cis-regulation and comparative motif finding, large-scale genome annotation, comparative “omics”, and evolutionary genomics.

Hyun Min Kang

Hyun Min Kang is an Associate Professor in the Department of Biostatistics. He received his Ph.D. in Computer Science from the University of California, San Diego in 2009 and joined the University of Michigan faculty in the same year. Prior to his doctoral studies, he worked for a year and a half as a research fellow at the Genome Research Center for Diabetes and Endocrine Disease at Seoul National University Hospital, after completing his Bachelor’s and Master’s degrees in Electrical Engineering at Seoul National University. His research interest lies in big-data genome science. Methodologically, his primary focus is on developing statistical methods and computational tools for large-scale genetic studies. Scientifically, his research aims to understand the etiology of complex disease traits, including type 2 diabetes, bipolar disorder, cardiovascular diseases, and glomerular diseases.

Veera Baladandayuthapani

Dr. Veera Baladandayuthapani is currently a Professor in the Department of Biostatistics at the University of Michigan (UM), where he is also the Associate Director of the Center for Cancer Biostatistics. He joined UM in Fall 2018 after spending 13 years in the Department of Biostatistics at the University of Texas MD Anderson Cancer Center, Houston, Texas, where he was a Professor and Institute Faculty Scholar and held adjunct appointments at Rice University, Texas A&M University, and the UT School of Public Health. His research interests are mainly in high-dimensional data modeling and Bayesian inference. This includes functional data analyses, Bayesian graphical models, Bayesian semi-/non-parametric models, and Bayesian machine learning. These methods are motivated by large and complex datasets (a.k.a. Big Data) such as high-throughput genomics, epigenomics, transcriptomics, and proteomics, as well as high-resolution neuro- and cancer-imaging. His work has been published in top statistical, biostatistical, bioinformatics, and biomedical/oncology journals. He has also co-authored a book on Bayesian analysis of gene expression data. He currently holds multiple PI-level grants from NIH and NSF to develop innovative and advanced biostatistical and bioinformatics methods for big datasets in oncology. He has also served as the Director of the Biostatistics and Bioinformatics Cores for the Specialized Programs of Research Excellence (SPOREs) in Multiple Myeloma and Lung Cancer, and as the Biostatistics & Bioinformatics platform leader for the Myeloma and Melanoma Moonshot Programs at MD Anderson. He is a fellow of the American Statistical Association and an elected member of the International Statistical Institute. He currently serves as an Associate Editor for the Journal of the American Statistical Association, Biometrics, and Sankhya.

An example of horizontal (across cancers) and vertical (across multiple molecular platforms) data integration. Image from Ha et al. (Scientific Reports, 2018; https://www.nature.com/articles/s41598-018-32682-x).

Oleg Gnedin

I am a theoretical astrophysicist studying the origins and structure of galaxies in the universe. My research focuses on developing more realistic gasdynamics simulations, starting with the initial conditions that are well constrained by observations, and advancing them in time with high spatial resolution using adaptive mesh refinement. I use machine-learning techniques to compare simulation predictions with observational data. Such comparison leads to insights about the underlying physics that governs the formation of stars and galaxies. I have developed a Computational Astrophysics course that teaches practical application of modern techniques for big-data analysis and model fitting.

Emergence of galaxies and star clusters in cosmological gasdynamics simulations. The left panel shows large-scale cosmic structure (density of dark matter particles), which formed by gravitational instability. In the middle panel we can resolve this structure into disk galaxies with complex morphology (density of molecular/red and atomic/blue gas). These galaxies should create massive star clusters, such as the one shown in the right panel (a real image, to be reproduced by our future simulations!).

Xun Huan

Prof. Huan’s research broadly revolves around uncertainty quantification, data-driven modeling, and numerical optimization. He focuses on methods that bridge models and data: e.g., optimal experimental design, Bayesian statistical inference, uncertainty propagation in high-dimensional settings, and algorithms that are robust to model misspecification. He seeks to develop efficient numerical methods that integrate computationally intensive models with big data, and to combine uncertainty quantification with machine learning to enable robust and reliable prediction, design, and decision-making.

Optimal experimental design seeks to identify experiments that produce the most valuable data. For example, when designing a combustion experiment to learn chemical kinetic parameters, design condition A maximizes the expected information gain. When Bayesian inference is performed on data from this experiment, we indeed obtain “tighter” posteriors (with less uncertainty) compared to those obtained from suboptimal design conditions B and C.
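The sketch below (a hypothetical illustration, not code from Prof. Huan's group) shows the idea behind comparing design conditions: the expected information gain (EIG) of each candidate design is estimated with a nested Monte Carlo average over the prior, and the design with the larger estimate is preferred. The toy forward model, prior range, and candidate design values are placeholder assumptions chosen only to keep the example self-contained.

# Minimal sketch of nested Monte Carlo estimation of expected information gain.
# All model details below are illustrative assumptions, not a real kinetics model.
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, d, noise_sd=0.1):
    # Toy forward model: the observation depends on the unknown parameter theta
    # and the controllable design variable d, plus Gaussian measurement noise.
    return np.exp(-theta * d) + noise_sd * rng.standard_normal(np.shape(theta))

def log_likelihood(y, theta, d, noise_sd=0.1):
    # Gaussian log-likelihood log p(y | theta, d) for the toy model above.
    mean = np.exp(-theta * d)
    return -0.5 * ((y - mean) / noise_sd) ** 2 - np.log(noise_sd * np.sqrt(2 * np.pi))

def expected_information_gain(d, n_outer=2000, n_inner=2000):
    # Nested Monte Carlo estimate of
    #   EIG(d) = E_y [ log p(y | theta, d) - log p(y | d) ],
    # with theta drawn from a uniform prior (an assumed placeholder).
    theta_outer = rng.uniform(0.5, 2.0, size=n_outer)   # prior draws
    y = simulate(theta_outer, d)                         # simulated observations
    log_lik = log_likelihood(y, theta_outer, d)          # log p(y | theta, d)
    theta_inner = rng.uniform(0.5, 2.0, size=n_inner)    # fresh prior draws
    # Approximate the evidence log p(y | d) by averaging the likelihood over the prior.
    log_evidence = np.array([
        np.log(np.mean(np.exp(log_likelihood(yi, theta_inner, d)))) for yi in y
    ])
    return np.mean(log_lik - log_evidence)

# Compare two candidate design conditions; the "optimal" one here is simply
# the candidate with the larger estimated expected information gain.
for d in (0.5, 2.0):
    print(f"design d = {d}: EIG estimate {expected_information_gain(d):.3f}")

In realistic settings such as the combustion example, the forward model is expensive and the parameters are high-dimensional, so plain nested Monte Carlo becomes impractical; more efficient estimators and surrogate models are the kinds of methods this research area develops.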

Neda Masoud

The future of transportation lies at the intersection of two emerging trends, namely, the sharing economy and connected and automated vehicle technology. Our research group investigates the impact of these two major trends on the future of mobility, quantifying the benefits and identifying the challenges of integrating these technologies into our current systems.

Our research on shared-use mobility systems focuses on peer-to-peer (P2P) ridesharing and multi-modal transportation. We provide: (i) operational tools and decision support systems for shared-use mobility in legacy as well as connected and automated transportation systems, with a focus on system design as well as routing, scheduling, and pricing mechanisms to serve on-demand transportation requests; (ii) insights for regulators and policy makers on the mobility benefits of multi-modal transportation; and (iii) planning tools that allow for informed regulation of the sharing economy.

In another line of research, we investigate the challenges that connected and automated vehicle technology must overcome before it can be widely adopted. Our research mainly focuses on (i) the transition of control authority between the human driver and the autonomous entity in semi-autonomous (SAE Level 3) vehicles; (ii) incorporating network-level information supplied by connected vehicle technology into traditional trajectory planning; (iii) improving vehicle localization by taking advantage of opportunities provided by connected vehicles; and (iv) cybersecurity challenges in connected and automated systems. We seek to quantify the mobility and safety implications of this disruptive technology and to provide insights that can inform regulation.

Patrick Schloss

The Schloss lab is broadly interested in beneficial and pathogenic host-microbiome interactions, with the goal of improving our understanding of how the microbiome can be used to reach translational outcomes in the prevention, detection, and treatment of colorectal cancer, Crohn’s disease, and Clostridium difficile infection. To address these questions, we test traditional ecological theory in the microbial context using a systems biology approach. Specifically, the laboratory specializes in using studies involving human subjects and animal models to understand how biological diversity affects community function, drawing on a variety of culture-independent genomics techniques including sequencing of 16S rRNA gene fragments, metagenomics, and metatranscriptomics. In addition, they use metabolomics to understand the functional role of the gut microbiota in states of health and disease. To support these efforts, they develop and apply bioinformatic tools to facilitate their analyses. Most notable is the development of the mothur software package (https://www.mothur.org), which is one of the most widely used tools for analyzing microbiome data and has been cited more than 7,300 times since it was initially published in 2009. The Schloss lab merges cutting-edge wet-lab techniques for collecting data with computational tools for synthesizing those data to answer important biological questions.

Given the explosion in microbiome research over the past 15 years, the Schloss lab has also stood at the center of a major effort to train interdisciplinary scientists in applying computational tools to study complex biological systems. These efforts center on developing reproducible research skills and applying modern data visualization techniques. An outgrowth of these efforts at the University of Michigan has been the institutionalization of The Carpentries organization on campus (https://carpentries.org), which specializes in peer-to-peer instruction of programming tools and techniques to foster better reproducibility and build a community of practitioners.

The Schloss lab uses computational tools to integrate multi-omics data in a culture-independent approach to understand how bacteria interact with each other and their host to drive processes such as colorectal cancer and susceptibility to Clostridium difficile infection.

Áine Heneghan

Professor Heneghan’s research interests include music analysis, study of archival documents, the history of music theory, and the Second Viennese School. Her new research project examines the corpus of piping tunes collected by James Goodman in south-west Ireland during the mid-1800s. Funded by MIDAS, this work is part of a larger project with colleagues in music theory, statistics, and linguistics entitled “A Computational Study of Patterned Melodic Structures across Musical Cultures.”