I am Research Faculty with the Michigan Center for Integrative Research in Critical Care (MCIRCC). Our team builds predictive algorithms, analyzes signals, and implements statistical models to advance critical care medicine. We use electronic health record data to build predictive algorithms; one example is Predicting Intensive Care Transfers and other Unforeseen Events (PICTURE), which uses commonly collected vital signs and labs to predict patient deterioration on the general hospital floor. Our team also collects waveforms from the University Hospital, stores these data using Amazon Web Services, and uses the signals to build predictive algorithms that advance precision medicine. Our flagship algorithm, the Analytic for Hemodynamic Instability (AHI), predicts patient deterioration from a single-lead electrocardiogram signal. We use Bayesian methods to analyze metabolomic biomarker data from blood and exhaled breath to understand sepsis and acute respiratory distress syndrome. I also have an interest in statistical genetics.
Jeffrey Regier received a PhD in statistics from UC Berkeley (2016) and joined the University of Michigan as an assistant professor. His research interests include graphical models, Bayesian inference, high-performance computing, deep learning, astronomy, and genomics.
Efficient, low-regret contextual multi-armed bandit approaches for real-time learning, including Thompson sampling, upper confidence bound (UCB) policies, and the knowledge gradient. Integration of optimization and predictive analytics for determining the time to the next measurement, which modality to use, and the optimal control of risk factors to manage chronic disease. Integration of soft-voting ensemble classifiers and multiple-model Kalman filters for disease state prediction. Real-time (online) contextual multi-armed bandits integrated with optimization of hospital bed-type dynamic control decisions for reducing 30-day hospital readmission rates. Robustness in system optimization when the system model is uncertain, with emphasis on quantile regression forests, sample average approximation, robust optimization, and distributionally robust optimization. Health care delivery system models with prediction and control for inpatient and outpatient care. Work has been done on emergency department redesign for improved patient flow; capacity management, planning, and scheduling for outpatient care, including integrated services networks; admission control with machine learning for ICUs, stepdown, and regular care units; surgical planning and scheduling for access delay control; and planning and scheduling for clinical research units.
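To illustrate one of the approaches named above, here is a minimal sketch of Thompson sampling for a two-armed Bernoulli bandit. All names and the payoff rates are hypothetical; a real clinical application would condition on context features and use a richer posterior than the Beta model shown here.

```python
import random

def thompson_step(successes, failures):
    """Draw one sample from each arm's Beta posterior and pull the best-looking arm.

    With a Beta(1, 1) prior, the posterior after s successes and f failures
    is Beta(s + 1, f + 1); sampling from it balances exploration and exploitation.
    """
    draws = [random.betavariate(s + 1, f + 1) for s, f in zip(successes, failures)]
    return max(range(len(draws)), key=lambda i: draws[i])

# Hypothetical simulation: arm 1 has the higher true success rate.
random.seed(0)
true_p = [0.3, 0.7]
successes, failures = [0, 0], [0, 0]
for _ in range(2000):
    arm = thompson_step(successes, failures)
    if random.random() < true_p[arm]:
        successes[arm] += 1
    else:
        failures[arm] += 1
```

After a few thousand rounds, nearly all pulls concentrate on the better arm, which is what makes the regret of the policy low.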
Current research includes a project funded by Toyota that uses Markov models and machine learning to predict heart arrhythmia, an NSF-funded project to detect acute respiratory distress syndrome (ARDS) from X-ray images, and projects using tensor analysis on health care data (funded by the Department of Defense and the National Science Foundation).
Dr. Bai’s research interests lie in the development and refinement of bioinformatics algorithms, software, and databases for next-generation sequencing (NGS) data; the development of statistical models for solving biological problems; and bioinformatics analysis of clinical data, as well as other topics including, but not limited to, uncovering disease genes and variants using informatics approaches, computational analysis of cis-regulation and comparative motif finding, large-scale genome annotation, comparative “omics”, and evolutionary genomics.
Dr. Veera Baladandayuthapani is currently a Professor in the Department of Biostatistics at the University of Michigan (UM), where he is also the Associate Director of the Center for Cancer Biostatistics. He joined UM in Fall 2018 after spending 13 years in the Department of Biostatistics at the University of Texas MD Anderson Cancer Center, Houston, Texas, where he was a Professor and Institute Faculty Scholar and held adjunct appointments at Rice University, Texas A&M University, and the UT School of Public Health. His research interests are mainly in high-dimensional data modeling and Bayesian inference. This includes functional data analysis, Bayesian graphical models, Bayesian semi-/non-parametric models, and Bayesian machine learning. These methods are motivated by large and complex datasets (a.k.a. Big Data) such as high-throughput genomics, epigenomics, transcriptomics, and proteomics, as well as high-resolution neuro- and cancer imaging. His work has been published in top statistical, biostatistical, bioinformatics, and biomedical/oncology journals. He has also co-authored a book on Bayesian analysis of gene expression data. He currently holds multiple PI-level grants from NIH and NSF to develop innovative and advanced biostatistical and bioinformatics methods for big datasets in oncology. He has also served as the Director of the Biostatistics and Bioinformatics Cores for the Specialized Programs of Research Excellence (SPOREs) in Multiple Myeloma and Lung Cancer and as the Biostatistics & Bioinformatics platform leader for the Myeloma and Melanoma Moonshot Programs at MD Anderson. He is a fellow of the American Statistical Association and an elected member of the International Statistical Institute. He currently serves as an Associate Editor for the Journal of the American Statistical Association, Biometrics, and Sankhya.
Prof. Huan’s research broadly revolves around uncertainty quantification, data-driven modeling, and numerical optimization. He focuses on methods that bridge models and data: e.g., optimal experimental design, Bayesian statistical inference, uncertainty propagation in high-dimensional settings, and algorithms that are robust to model misspecification. He seeks to develop efficient numerical methods that integrate computationally intensive models with big data, and to combine uncertainty quantification with machine learning to enable robust and reliable prediction, design, and decision-making.
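A minimal sketch of forward uncertainty propagation, one ingredient of the methods described above. The model y = x² and the input distribution are hypothetical stand-ins for a computationally intensive simulator; Monte Carlo sampling pushes the input uncertainty through the model and recovers the output mean.

```python
import random

# Hypothetical model: y = x^2, with uncertain input x ~ Normal(mu=1.0, sigma=0.1).
# Analytically, E[y] = mu^2 + sigma^2 = 1.01; Monte Carlo should recover this.
random.seed(42)
samples = [random.gauss(1.0, 0.1) ** 2 for _ in range(100_000)]
mean_y = sum(samples) / len(samples)
```

In practice the plain Monte Carlo loop above is often replaced by surrogate models or sparse quadrature, since each model evaluation may take hours rather than microseconds.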
My research is focused on developing efficient and effective statistical and computational methods for genetic and genomic studies. These studies often involve large-scale and high-dimensional data; examples include genome-wide association studies, epigenome-wide association studies, and various functional genomic sequencing studies such as bulk and single-cell RNAseq, bisulfite sequencing, ChIPseq, ATACseq, etc. Our method development is often application oriented and specifically targeted at practical applications of these large-scale genetic and genomic studies, and thus is not restricted to a particular methodology area. Our previous and current methods include, but are not limited to, Bayesian methods, mixed effects models, factor analysis models, sparse regression models, deep learning algorithms, clustering algorithms, integrative methods, spatial statistics, and efficient computational algorithms. By developing novel analytic methods, I seek to extract important information from these data and to advance our understanding of the genetic basis of phenotypic variation for various human diseases and disease-related quantitative traits.
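As an illustrative sketch only (simulated data, no covariates or population-structure correction, and all names are my own), here is the single-variant linear regression test that a genome-wide association study runs once per variant:

```python
import random
import math

def marginal_assoc(genotypes, phenotype):
    """Slope and t-statistic for a simple linear regression of phenotype on genotype."""
    n = len(genotypes)
    mg = sum(genotypes) / n
    mp = sum(phenotype) / n
    sxx = sum((g - mg) ** 2 for g in genotypes)
    sxy = sum((g - mg) * (p - mp) for g, p in zip(genotypes, phenotype))
    beta = sxy / sxx  # estimated per-allele effect
    resid = [p - mp - beta * (g - mg) for g, p in zip(genotypes, phenotype)]
    sigma2 = sum(r * r for r in resid) / (n - 2)  # residual variance, n - 2 df
    se = math.sqrt(sigma2 / sxx)
    return beta, beta / se

# Hypothetical SNP dosages (0/1/2 copies of the minor allele) with a true effect of 0.5.
random.seed(1)
n = 500
geno = [random.choice([0, 1, 2]) for _ in range(n)]
pheno = [0.5 * g + random.gauss(0, 1) for g in geno]
beta, t_stat = marginal_assoc(geno, pheno)
```

Real analyses add covariates, relatedness adjustments (e.g., via mixed models), and multiple-testing control, but the per-variant core is this small.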
Yuki Shiraito works primarily in the field of political methodology. His research interests center on the development and applications of Bayesian statistical models and large-scale computational algorithms for data analysis. He has applied these quantitative methods to political science research including a survey experiment on public support for conflicting parties in civil war, heterogeneous effects of indiscriminate state violence, and the detection of text diffusion among a large set of legislative bills.
After completing his undergraduate education at the University of Tokyo, Yuki received his Ph.D. in Politics (2017) from Princeton University. Before joining the University of Michigan as an Assistant Professor in September 2018, he served as a Postdoctoral Fellow in the Program of Quantitative Social Science at Dartmouth College.
My research broadly focuses on developing data analytics and decision-making methodologies specifically tailored for Internet of Things (IoT) enabled smart and connected products and systems. I envision that most (if not all) engineering systems will eventually become connected systems. Therefore, my key focus is on developing next-generation data analytics, machine learning, individualized informatics, and graphical and network modeling tools to truly realize the competitive advantages promised by smart and connected products and systems.