Explore ARC

Aaron A. King

The long temporal and large spatial scales of ecological systems make controlled experimentation difficult and the amassing of informative data challenging and expensive. The resulting sparsity and noise are major impediments to scientific progress in ecology, which therefore depends on efficient use of data. Against this backdrop, it has in recent years been recognized that mathematical models of ecological processes, once the playthings of theoretical ecologists, are no longer exclusively the stuff of thought experiments but have great utility for causal inference. Specifically, because they embody scientific questions about ecological processes in sharpest form (making precise, quantitative, testable predictions), the rigorous confrontation of process-based models with data accelerates the development of ecological understanding. This is the central premise of my research program and the common thread of the work that goes on in my laboratory.

Jason Goldstick

I am a statistician, and my research focuses on applied public health work across fields related to injury prevention, including substance use, violence, motor vehicle crashes, and traumatic brain injury. Within those applications, I apply analytic methods for longitudinal data analysis, spatial and spatio-temporal data analysis, and predictive modeling (e.g., clinical prediction of future injury risk for outcomes such as stroke, benzodiazepine overdose, and firearm injury). I am also MPI of the System for Opioid Overdose Surveillance, a near-real-time system for monitoring fatal and nonfatal overdoses in Michigan; the system generates automated spatial and temporal summaries of recent overdose trends.

Harm Derksen

Current research includes a project funded by Toyota that uses Markov models and machine learning to predict heart arrhythmia, an NSF-funded project to detect Acute Respiratory Distress Syndrome (ARDS) from X-ray images, and projects using tensor analysis on health care data (funded by the Department of Defense and the National Science Foundation).

Yongsheng Bai

Dr. Bai’s research interests lie in the development and refinement of bioinformatics algorithms, software, and databases for next-generation sequencing (NGS) data; the development of statistical models for solving biological problems; and bioinformatics analysis of clinical data, as well as other topics including, but not limited to, uncovering disease genes and variants using informatics approaches, computational analysis of cis-regulation and comparative motif finding, large-scale genome annotation, comparative “omics”, and evolutionary genomics.

Hyun Min Kang

Hyun Min Kang is an Associate Professor in the Department of Biostatistics. He received his Ph.D. in Computer Science from the University of California, San Diego in 2009 and joined the University of Michigan faculty that same year. Prior to his doctoral studies, he completed his bachelor's and master's degrees in Electrical Engineering at Seoul National University and then worked for a year and a half as a research fellow at the Genome Research Center for Diabetes and Endocrine Disease at Seoul National University Hospital. His research interest lies in big-data genome science. Methodologically, his primary focus is on developing statistical methods and computational tools for large-scale genetic studies. Scientifically, his research aims to understand the etiology of complex disease traits, including type 2 diabetes, bipolar disorder, cardiovascular diseases, and glomerular diseases.

Veera Baladandayuthapani

Dr. Veera Baladandayuthapani is currently a Professor in the Department of Biostatistics at the University of Michigan (UM), where he is also the Associate Director of the Center for Cancer Biostatistics. He joined UM in Fall 2018 after spending 13 years in the Department of Biostatistics at the University of Texas MD Anderson Cancer Center in Houston, Texas, where he was a Professor and Institute Faculty Scholar and held adjunct appointments at Rice University, Texas A&M University, and the UT School of Public Health. His research interests are mainly in high-dimensional data modeling and Bayesian inference, including functional data analysis, Bayesian graphical models, Bayesian semi-/non-parametric models, and Bayesian machine learning. These methods are motivated by large and complex datasets (a.k.a. big data) such as high-throughput genomics, epigenomics, transcriptomics, and proteomics, as well as high-resolution neuro- and cancer imaging. His work has been published in top statistical, biostatistical, bioinformatics, and biomedical/oncology journals, and he has co-authored a book on Bayesian analysis of gene expression data. He currently holds multiple PI-level grants from the NIH and NSF to develop innovative and advanced biostatistical and bioinformatics methods for big datasets in oncology. He has also served as the Director of the Biostatistics and Bioinformatics Cores for the Specialized Programs of Research Excellence (SPOREs) in Multiple Myeloma and Lung Cancer, and as the Biostatistics & Bioinformatics platform leader for the Myeloma and Melanoma Moonshot Programs at MD Anderson. He is a fellow of the American Statistical Association and an elected member of the International Statistical Institute, and he currently serves as an Associate Editor for the Journal of the American Statistical Association, Biometrics, and Sankhya.


An example of horizontal (across cancers) and vertical (across multiple molecular platforms) data integration. Image from Ha et al. (Scientific Reports, 2018; https://www.nature.com/articles/s41598-018-32682-x)

Oleg Gnedin

I am a theoretical astrophysicist studying the origins and structure of galaxies in the universe. My research focuses on developing more realistic gasdynamics simulations, starting with the initial conditions that are well constrained by observations, and advancing them in time with high spatial resolution using adaptive mesh refinement. I use machine-learning techniques to compare simulation predictions with observational data. Such comparison leads to insights about the underlying physics that governs the formation of stars and galaxies. I have developed a Computational Astrophysics course that teaches practical application of modern techniques for big-data analysis and model fitting.

Emergence of galaxies and star clusters in cosmological gasdynamics simulations. Left panel shows large-scale cosmic structure (density of dark matter particles), which formed by gravitational instability. In the middle panel we can resolve this structure into disk galaxies with complex morphology (density of molecular/red and atomic/blue gas). These galaxies should create massive star clusters, such as shown in the right panel (real image — to be reproduced by our future simulations!).

Xun Huan

Prof. Huan’s research broadly revolves around uncertainty quantification, data-driven modeling, and numerical optimization. He focuses on methods that bridge models and data: e.g., optimal experimental design, Bayesian statistical inference, uncertainty propagation in high-dimensional settings, and algorithms that are robust to model misspecification. He seeks to develop efficient numerical methods that integrate computationally intensive models with big data, and to combine uncertainty quantification with machine learning to enable robust and reliable prediction, design, and decision-making.

Optimal experimental design seeks to identify experiments that produce the most valuable data. For example, when designing a combustion experiment to learn chemical kinetic parameters, design condition A maximizes the expected information gain. When Bayesian inference is performed on data from this experiment, we indeed obtain “tighter” posteriors (with less uncertainty) compared to those obtained from suboptimal design conditions B and C.
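The criterion described above can be illustrated with a small sketch. The snippet below estimates expected information gain (EIG) by nested Monte Carlo for a toy linear-Gaussian model; the model, prior, noise level, and candidate design values are all illustrative assumptions for exposition, not taken from Prof. Huan's combustion work.

```python
import numpy as np

rng = np.random.default_rng(0)
SIGMA = 0.1  # observation-noise standard deviation (assumed for this toy)

def log_likelihood(y, theta, d):
    """Gaussian log-likelihood for the toy model y = theta * d + noise."""
    return -0.5 * ((y - theta * d) / SIGMA) ** 2 - np.log(SIGMA * np.sqrt(2.0 * np.pi))

def expected_information_gain(d, n_outer=2000, n_inner=2000):
    """Nested Monte Carlo estimate of EIG(d) = E_y[log p(y|theta,d) - log p(y|d)]."""
    theta = rng.standard_normal(n_outer)                  # draws from the N(0, 1) prior
    y = theta * d + SIGMA * rng.standard_normal(n_outer)  # simulated observations
    theta_inner = rng.standard_normal(n_inner)
    # Inner average: log p(y_i | d) ~= log-mean of p(y_i | theta_m, d) over prior draws
    inner = log_likelihood(y[:, None], theta_inner[None, :], d)
    log_evidence = np.logaddexp.reduce(inner, axis=1) - np.log(n_inner)
    return float(np.mean(log_likelihood(y, theta, d) - log_evidence))

# Larger |d| amplifies the parameter's effect on the data, so the estimated
# information gain should increase with the design value.
for d in (0.1, 1.0, 3.0):
    print(f"design d = {d}: EIG estimate = {expected_information_gain(d):.2f} nats")
```

Maximizing this estimate over candidate designs is what singles out a condition like "design A" in the figure: the design with the highest EIG yields, on average, the tightest posterior.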

Fred Feng

Dr. Feng’s research involves conducting and using naturalistic observational studies to better understand the interactions between motorists and other road users, including bicyclists and pedestrians. The goal is to use an evidence-based, data-driven approach that improves bicycling and walking safety and ultimately makes them viable mobility options. A naturalistic study is a valuable and unique research method that provides continuous, high-time-resolution, rich, and objective data about how people drive, ride, and walk during their everyday trips in the real world. It also faces challenges from the sheer volume of the data, and, as with all observational studies, there are potential confounding factors compared with a randomized laboratory experiment. Data-analytic methods can be developed to interpret the behavioral data, make meaningful inferences, and extract actionable insights.

Using naturalistic driving data to examine the interactions between motorists and bicyclists

Nicholson Price

I study how law shapes innovation in the life sciences, with a substantial focus on big data and artificial intelligence in medicine. I write about the intellectual property incentives and protections for data and AI algorithms, the privacy issues with wide-scale health- and health-related data collection, the medical malpractice implications of AI in medicine, and how FDA should regulate the use of medical AI.