Explore ARC

Jeffrey Regier

Jeffrey Regier received a PhD in statistics from UC Berkeley (2016) and joined the University of Michigan as an assistant professor. His research interests include graphical models, Bayesian inference, high-performance computing, deep learning, astronomy, and genomics.

Mark P Van Oyen

Efficient, low-regret contextual multi-armed bandit approaches for real-time learning, including Thompson sampling, upper confidence bound (UCB), and knowledge gradient methods. Integration of optimization and predictive analytics to determine the time to the next measurement, which modality to use, and the optimal control of risk factors to manage chronic disease. Integration of soft-voting ensemble classifiers and multiple-model Kalman filters for disease state prediction. Real-time (online) contextual multi-armed bandits integrated with optimization of hospital bed-type dynamic control decisions to reduce 30-day hospital readmission rates. Robustness in system optimization when the system model is uncertain, with emphasis on quantile regression forests, sample average approximation, robust optimization, and distributionally robust optimization. Health care delivery system models with prediction and control for inpatient and outpatient care. Work has been done on emergency department redesign for improved patient flow; capacity management, planning, and scheduling for outpatient care, including integrated services networks; admission control with machine learning for ICUs, step-down, and regular care units; surgical planning and scheduling for access delay control; and planning and scheduling for clinical research units.
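The bandit methods named above can be illustrated with a minimal sketch. The following is a hypothetical Thompson sampling loop for a Bernoulli bandit; the number of arms, their success probabilities, and the horizon are illustrative assumptions, not taken from this research:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 3-armed Bernoulli bandit (probabilities are made up).
true_p = [0.3, 0.5, 0.6]   # unknown success probability of each arm
alpha = np.ones(3)         # Beta posterior: 1 + observed successes per arm
beta = np.ones(3)          # Beta posterior: 1 + observed failures per arm

for t in range(2000):
    theta = rng.beta(alpha, beta)        # one posterior draw per arm
    arm = int(np.argmax(theta))          # play the arm with the largest draw
    reward = rng.random() < true_p[arm]  # observe a Bernoulli reward
    alpha[arm] += reward                 # conjugate Beta-Bernoulli update
    beta[arm] += 1 - reward

# Arm the posterior currently believes is best
best_arm = int(np.argmax(alpha / (alpha + beta)))
```

Because each round samples from the posterior rather than maximizing a point estimate, the algorithm automatically balances exploration and exploitation, which is what yields its low-regret behavior.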

Machine learning, system modeling, and stochastic control can be used to slow the rate of glaucoma progression based on treatment aggressiveness options selected jointly with the patient.

Aaron A. King

The long temporal and large spatial scales of ecological systems make controlled experimentation difficult and the amassing of informative data challenging and expensive. The resulting sparsity and noise are major impediments to scientific progress in ecology, which therefore depends on efficient use of data. In this context, it has in recent years been recognized that the onetime playthings of theoretical ecologists, mathematical models of ecological processes, are no longer exclusively the stuff of thought experiments, but have great utility in the context of causal inference. Specifically, because they embody scientific questions about ecological processes in sharpest form—making precise, quantitative, testable predictions—the rigorous confrontation of process-based models with data accelerates the development of ecological understanding. This is the central premise of my research program and the common thread of the work that goes on in my laboratory.

Jason Goldstick

I am a statistician, and my research focuses on applied public health work in a variety of fields related to injury prevention, including substance use, violence, motor vehicle crashes, and traumatic brain injury. Within those applications, I apply analytic methods for longitudinal data analysis, spatial and spatio-temporal data analysis, and predictive modeling (e.g., clinical prediction of future injury risk for outcomes such as stroke, benzodiazepine overdose, and firearm injury). I am also MPI of the System for Opioid Overdose Surveillance, a near-real-time system for monitoring fatal and nonfatal overdoses in Michigan; the system generates automated spatial and temporal summaries of recent overdose trends.

Veera Baladandayuthapani

Dr. Veera Baladandayuthapani is currently a Professor in the Department of Biostatistics at the University of Michigan (UM), where he is also the Associate Director of the Center for Cancer Biostatistics. He joined UM in Fall 2018 after spending 13 years in the Department of Biostatistics at the University of Texas MD Anderson Cancer Center, Houston, Texas, where he was a Professor and Institute Faculty Scholar and held adjunct appointments at Rice University, Texas A&M University, and the UT School of Public Health. His research interests are mainly in high-dimensional data modeling and Bayesian inference. This includes functional data analysis, Bayesian graphical models, Bayesian semi-/non-parametric models, and Bayesian machine learning. These methods are motivated by large and complex datasets (a.k.a. Big Data) such as high-throughput genomics, epigenomics, transcriptomics, and proteomics, as well as high-resolution neuro- and cancer imaging. His work has been published in top statistical/biostatistical/bioinformatics and biomedical/oncology journals. He has also co-authored a book on Bayesian analysis of gene expression data. He currently holds multiple PI-level grants from NIH and NSF to develop innovative and advanced biostatistical and bioinformatics methods for big datasets in oncology. He has also served as the Director of the Biostatistics and Bioinformatics Cores for the Specialized Programs of Research Excellence (SPOREs) in Multiple Myeloma and Lung Cancer and as the Biostatistics & Bioinformatics platform leader for the Myeloma and Melanoma Moonshot Programs at MD Anderson. He is a fellow of the American Statistical Association and an elected member of the International Statistical Institute. He currently serves as an Associate Editor for the Journal of the American Statistical Association, Biometrics, and Sankhya.


An example of horizontal (across cancers) and vertical (across multiple molecular platforms) data integration. Image from Ha et al. (Scientific Reports, 2018; https://www.nature.com/articles/s41598-018-32682-x)

Xun Huan

Prof. Huan’s research broadly revolves around uncertainty quantification, data-driven modeling, and numerical optimization. He focuses on methods that bridge models and data: e.g., optimal experimental design, Bayesian statistical inference, uncertainty propagation in high-dimensional settings, and algorithms that are robust to model misspecification. He seeks to develop efficient numerical methods that integrate computationally intensive models with big data, and to combine uncertainty quantification with machine learning to enable robust and reliable prediction, design, and decision-making.

Optimal experimental design seeks to identify experiments that produce the most valuable data. For example, when designing a combustion experiment to learn chemical kinetic parameters, design condition A maximizes the expected information gain. When Bayesian inference is performed on data from this experiment, we indeed obtain “tighter” posteriors (with less uncertainty) compared to those obtained from suboptimal design conditions B and C.
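The idea behind choosing the design that maximizes expected information gain (EIG) can be sketched with a toy computation. Below is a hypothetical nested Monte Carlo estimate of the EIG for a simple linear-Gaussian model y = d·θ + noise; the model, the candidate designs, and the noise level are illustrative assumptions, not Prof. Huan's actual combustion experiment:

```python
import numpy as np

rng = np.random.default_rng(1)

def eig(d, n_outer=500, n_inner=500, sigma=1.0):
    """Nested Monte Carlo estimate of EIG for y = d*theta + N(0, sigma^2),
    with prior theta ~ N(0, 1). EIG = E[log p(y|theta) - log p(y)]."""
    theta = rng.standard_normal(n_outer)                  # prior draws
    y = d * theta + sigma * rng.standard_normal(n_outer)  # simulated data
    # log-likelihood of each y under the theta that generated it
    log_lik = -0.5 * ((y - d * theta) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))
    # log marginal p(y): average the likelihood over fresh prior draws
    theta_in = rng.standard_normal(n_inner)
    lik = np.exp(-0.5 * ((y[:, None] - d * theta_in[None, :]) / sigma) ** 2) \
          / (sigma * np.sqrt(2 * np.pi))
    log_marg = np.log(lik.mean(axis=1))
    return float(np.mean(log_lik - log_marg))

# Three hypothetical design conditions; a larger |d| makes the
# observation more informative about theta, so its EIG is higher.
eigs = {d: eig(d) for d in (0.1, 1.0, 3.0)}
```

Maximizing the EIG over candidate designs then selects the experiment expected to tighten the posterior the most, which matches the behavior described in the caption above.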

Fred Feng

Dr. Feng’s research involves conducting and using naturalistic observational studies to better understand the interactions between motorists and other road users, including bicyclists and pedestrians. The goal is to use an evidence-based, data-driven approach that improves bicycling and walking safety and ultimately makes them viable mobility options. A naturalistic study is a valuable and unique research method that provides continuous, high-time-resolution, rich, and objective data about how people drive, ride, and walk on their everyday trips in the real world. It also faces challenges from the sheer volume of the data, and, as with all observational studies, from potential confounding factors relative to a randomized laboratory experiment. Data-analytic methods can be developed to interpret the behavioral data, draw meaningful inferences, and extract actionable insights.

Using naturalistic driving data to examine the interactions between motorists and bicyclists

Neda Masoud

The future of transportation lies at the intersection of two emerging trends, namely, the sharing economy and connected and automated vehicle technology. Our research group investigates the impact of these two major trends on the future of mobility, quantifying the benefits and identifying the challenges of integrating these technologies into our current systems.

Our research on shared-use mobility systems focuses on peer-to-peer (P2P) ridesharing and multi-modal transportation. We provide: (i) operational tools and decision support systems for shared-use mobility in legacy as well as connected and automated transportation systems, with a focus on system design as well as routing, scheduling, and pricing mechanisms to serve on-demand transportation requests; (ii) insights for regulators and policy makers on the mobility benefits of multi-modal transportation; and (iii) planning tools that would allow for informed regulation of the sharing economy.

In another line of research, we investigate challenges that connected and automated vehicle technology must overcome before mass adoption can occur. Our research mainly focuses on (i) the transition of control authority between the human driver and the autonomous entity in semi-autonomous (SAE level 3) vehicles; (ii) incorporating network-level information supplied by connected vehicle technology into traditional trajectory planning; (iii) improving vehicle localization by taking advantage of opportunities provided by connected vehicles; and (iv) cybersecurity challenges in connected and automated systems. We seek to quantify the mobility and safety implications of this disruptive technology and provide insights that can allow for informed regulation.

Xiang Zhou

My research is focused on developing efficient and effective statistical and computational methods for genetic and genomic studies. These studies often involve large-scale, high-dimensional data; examples include genome-wide association studies, epigenome-wide association studies, and various functional genomic sequencing studies such as bulk and single-cell RNA-seq, bisulfite sequencing, ChIP-seq, and ATAC-seq. Our method development is often application-oriented and specifically targeted at practical applications of these large-scale genetic and genomic studies, and thus is not restricted to a particular methodological area. Our previous and current methods include, but are not limited to, Bayesian methods, mixed-effects models, factor analysis models, sparse regression models, deep learning algorithms, clustering algorithms, integrative methods, spatial statistics, and efficient computational algorithms. By developing novel analytic methods, I seek to extract important information from these data and to advance our understanding of the genetic basis of phenotypic variation for various human diseases and disease-related quantitative traits.

A statistical method recently developed in our group aims to identify tissues that are relevant to diseases or disease-related complex traits by integrating tissue-specific omics studies (e.g., the ROADMAP project) with genome-wide association studies (GWASs). The heatmap displays the rank of 105 tissues (y-axis) in terms of their relevance for each of the 43 GWAS traits (x-axis) evaluated by our method. Traits are organized by hierarchical clustering; tissues are organized into ten tissue groups.