Explore ARC

Mark P Van Oyen


Efficient, low-regret contextual multi-armed bandit approaches for real-time learning, including Thompson sampling, upper confidence bound (UCB) methods, and the knowledge gradient. Integration of optimization and predictive analytics for determining the time to the next measurement, which modality to use, and the optimal control of risk factors to manage chronic disease. Integration of soft-voting ensemble classifiers and multiple-model Kalman filters for disease-state prediction. Real-time (online) contextual multi-armed bandits integrated with optimization of dynamic hospital bed-type control decisions to reduce 30-day hospital readmission rates. Robustness in system optimization when the system model is uncertain, with emphasis on quantile regression forests, sample average approximation, robust optimization, and distributionally robust optimization. Health care delivery system models with prediction and control for inpatient and outpatient settings. Work has been done on emergency department redesign for improved patient flow; capacity management, planning, and scheduling for outpatient care, including integrated services networks; admission control with machine learning for ICUs, step-down, and regular care units; surgical planning and scheduling for access delay control; and planning and scheduling for clinical research units.
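The bandit methods named above can be illustrated with a minimal Thompson sampling sketch for a (non-contextual) Bernoulli bandit. The function name, arm probabilities, and round count below are illustrative assumptions, not drawn from this research program.

```python
import random

def thompson_sampling(true_probs, n_rounds=5000, seed=0):
    """Bernoulli Thompson sampling with Beta(1, 1) priors.

    Each round: sample a plausible success rate for every arm from
    its Beta posterior, play the arm with the highest sample, and
    update that arm's posterior with the observed 0/1 reward.
    """
    rng = random.Random(seed)
    k = len(true_probs)
    alpha = [1.0] * k  # posterior successes + 1
    beta = [1.0] * k   # posterior failures + 1
    pulls = [0] * k
    for _ in range(n_rounds):
        samples = [rng.betavariate(alpha[i], beta[i]) for i in range(k)]
        arm = max(range(k), key=lambda i: samples[i])
        reward = 1 if rng.random() < true_probs[arm] else 0
        alpha[arm] += reward
        beta[arm] += 1 - reward
        pulls[arm] += 1
    return pulls

# Exploration concentrates on the best arm (index 2) as its posterior sharpens.
pulls = thompson_sampling([0.3, 0.5, 0.7])
```

Because the sampling step balances exploration and exploitation automatically, no explicit exploration schedule is needed, which is one reason these methods achieve low regret.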

Machine learning, system modeling, and stochastic control can be used to slow the rate of glaucoma progression based on treatment aggressiveness options selected jointly with the patient.
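The disease-state estimation underlying this kind of stochastic control can be sketched with a scalar Kalman filter, the building block behind the multiple-model filters mentioned above. All model parameters here are invented for illustration.

```python
def kalman_1d(measurements, a=1.0, q=0.01, r=0.5, x0=0.0, p0=1.0):
    """Scalar Kalman filter for x_t = a * x_{t-1} + w,  y_t = x_t + v,
    with process noise variance q and measurement noise variance r."""
    x, p = x0, p0
    estimates = []
    for y in measurements:
        # Predict: propagate the state estimate and its variance.
        x = a * x
        p = a * a * p + q
        # Update: blend prediction and measurement via the Kalman gain.
        k = p / (p + r)
        x = x + k * (y - x)
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

# Repeated noisy readings of a constant state pull the estimate toward 1.0.
est = kalman_1d([1.0, 1.0, 1.0, 1.0])
```

A multiple-model variant runs several such filters in parallel (e.g., one per hypothesized progression rate) and weights them by how well each predicts incoming measurements.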

Aaron A. King


The long temporal and large spatial scales of ecological systems make controlled experimentation difficult and the amassing of informative data challenging and expensive. The resulting sparsity and noise are major impediments to scientific progress in ecology, which therefore depends on efficient use of data. In this context, it has in recent years been recognized that the one-time playthings of theoretical ecologists, mathematical models of ecological processes, are no longer exclusively the stuff of thought experiments, but have great utility in the context of causal inference. Specifically, because they embody scientific questions about ecological processes in sharpest form—making precise, quantitative, testable predictions—the rigorous confrontation of process-based models with data accelerates the development of ecological understanding. This is the central premise of my research program and the common thread of the work that goes on in my laboratory.

Harm Derksen


Current research includes a project funded by Toyota that uses Markov models and machine learning to predict heart arrhythmia, an NSF-funded project to detect Acute Respiratory Distress Syndrome (ARDS) from X-ray images, and projects using tensor analysis on health care data (funded by the Department of Defense and the National Science Foundation).

Yongsheng Bai


Dr. Bai’s research interests lie in the development and refinement of bioinformatics algorithms, software, and databases for next-generation sequencing (NGS) data; the development of statistical models for solving biological problems; and bioinformatics analysis of clinical data, as well as other topics including, but not limited to, uncovering disease genes and variants using informatics approaches, computational analysis of cis-regulation and comparative motif finding, large-scale genome annotation, comparative “omics”, and evolutionary genomics.

Hyun Min Kang


Hyun Min Kang is an Associate Professor in the Department of Biostatistics. He received his Ph.D. in Computer Science from the University of California, San Diego in 2009 and joined the University of Michigan faculty in the same year. Prior to his doctoral studies, he worked for a year and a half as a research fellow at the Genome Research Center for Diabetes and Endocrine Disease in the Seoul National University Hospital, after completing his Bachelor’s and Master’s degrees in Electrical Engineering at Seoul National University. His research interest lies in big data genome science. Methodologically, his primary focus is on developing statistical methods and computational tools for large-scale genetic studies. Scientifically, his research aims to understand the etiology of complex disease traits, including type 2 diabetes, bipolar disorder, cardiovascular diseases, and glomerular diseases.

Xun Huan


Prof. Huan’s research broadly revolves around uncertainty quantification, data-driven modeling, and numerical optimization. He focuses on methods that bridge models and data: e.g., optimal experimental design, Bayesian statistical inference, uncertainty propagation in high-dimensional settings, and algorithms that are robust to model misspecification. He seeks to develop efficient numerical methods that integrate computationally intensive models with big data, and to combine uncertainty quantification with machine learning to enable robust and reliable prediction, design, and decision-making.

Optimal experimental design seeks to identify the experiments that will produce the most valuable data. For example, when designing a combustion experiment to learn chemical kinetic parameters, one design condition (A) may maximize the expected information gain; Bayesian inference on data from that experiment then yields “tighter” posteriors (with less uncertainty) than those obtained from suboptimal design conditions (B and C).
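For a linear-Gaussian model the expected information gain has a closed form, which makes the design-comparison idea easy to sketch. The model, function name, and parameter values below are illustrative assumptions, not taken from the combustion application described above.

```python
import math

def eig_linear_gaussian(d, prior_var=1.0, noise_var=0.25):
    """Expected information gain for the model
        y = d * theta + eps,  theta ~ N(0, prior_var),  eps ~ N(0, noise_var).
    The posterior variance is (1/prior_var + d^2/noise_var)^(-1), so
        EIG = 0.5 * log(prior_var / posterior_var)
            = 0.5 * log(1 + d^2 * prior_var / noise_var).
    """
    return 0.5 * math.log(1.0 + d * d * prior_var / noise_var)

# Compare candidate design conditions and pick the most informative one.
designs = [0.5, 1.0, 2.0]
best = max(designs, key=eig_linear_gaussian)
```

In this toy model a larger design coefficient amplifies the signal relative to the noise, so the largest design wins; for nonlinear models the same criterion must typically be estimated by nested Monte Carlo instead of evaluated in closed form.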

James R. Hines Jr.


Professor Hines’ research focuses on the analysis of the donative behavior of Americans, and how it affects the intergenerational and interpersonal transmission of economic well-being. To what extent do parents leave property to their children and others, and how is this behavior affected by legal institutions, taxes, social norms, and other considerations? While there are no comprehensive sources of data on wills, trusts, lifetime gifts, and other forms of property transmission, there is ample available information from legal documents that with the help of natural language processing can hopefully be coded and analyzed in a systematic way.

Nicholson Price


I study how law shapes innovation in the life sciences, with a substantial focus on big data and artificial intelligence in medicine. I write about the intellectual property incentives and protections for data and AI algorithms, the privacy issues with wide-scale health- and health-related data collection, the medical malpractice implications of AI in medicine, and how FDA should regulate the use of medical AI.

Neda Masoud


The future of transportation lies at the intersection of two emerging trends, namely, the sharing economy and connected and automated vehicle technology. Our research group investigates the impact of these two major trends on the future of mobility, quantifying the benefits and identifying the challenges of integrating these technologies into our current systems.

Our research on shared-use mobility systems focuses on peer-to-peer (P2P) ridesharing and multi-modal transportation. We provide: (i) operational tools and decision support systems for shared-use mobility in legacy as well as connected and automated transportation systems—this line of research focuses on system design as well as routing, scheduling, and pricing mechanisms to serve on-demand transportation requests; (ii) insights for regulators and policy makers on the mobility benefits of multi-modal transportation; and (iii) planning tools that would allow for informed regulation of the sharing economy.

In another line of research we investigate the challenges that connected and automated vehicle technology must overcome before mass adoption can occur. Our research mainly focuses on (i) transition of control authority between the human driver and the autonomous entity in semi-autonomous (SAE Level 3) vehicles; (ii) incorporating network-level information supplied by connected vehicle technology into traditional trajectory planning; (iii) improving vehicle localization by taking advantage of opportunities provided by connected vehicles; and (iv) cybersecurity challenges in connected and automated systems. We seek to quantify the mobility and safety implications of this disruptive technology, and to provide insights that can allow for informed regulation.

Xiang Zhou


My research is focused on developing efficient and effective statistical and computational methods for genetic and genomic studies. These studies often involve large-scale and high-dimensional data; examples include genome-wide association studies, epigenome-wide association studies, and various functional genomic sequencing studies such as bulk and single-cell RNA-seq, bisulfite sequencing, ChIP-seq, and ATAC-seq. Our method development is application oriented and specifically targeted at practical applications of these large-scale genetic and genomic studies, and thus is not restricted to a particular methodological area. Our previous and current methods include, but are not limited to, Bayesian methods, mixed effects models, factor analysis models, sparse regression models, deep learning algorithms, clustering algorithms, integrative methods, spatial statistics, and efficient computational algorithms. By developing novel analytic methods, I seek to extract important information from these data and to advance our understanding of the genetic basis of phenotypic variation for various human diseases and disease-related quantitative traits.

A statistical method recently developed in our group aims to identify tissues that are relevant to diseases or disease-related complex traits by integrating tissue-specific omics studies (e.g., the ROADMAP project) with genome-wide association studies (GWASs). The heatmap displays the rank of 105 tissues (y-axis) in terms of their relevance for each of the 43 GWAS traits (x-axis) evaluated by our method. Traits are organized by hierarchical clustering, and tissues are organized into ten tissue groups.