Nicholas Douville

Dr. Douville is a critical care anesthesiologist with a research background in bioinformatics and perioperative outcomes research. He studies techniques for using health care data, including genotype, to deliver personalized medicine in the perioperative period and the intensive care unit. His research has focused on ways technology can assist health care delivery to improve patient outcomes. This work began with the design of microfluidic chips capable of recreating the fluid mechanics of atelectatic alveoli and monitoring the resulting barrier breakdown in real time. His interest in bioinformatics was sparked when he observed how methodology designed for tissue engineering could be adapted to the nanoscale to enable genomic analysis. His engineering training also provided the framework to apply data-driven modeling techniques, such as finite element analysis, to complex biological systems.

Jonathan Terhorst

I develop probabilistic and statistical models to analyze genetic and genomic data. We use these methods to study evolution, natural selection, and human history. Recently, I have been interested in applying these techniques to study viral epidemics (e.g., HIV) and cancer.

Estimates of recent effective population sizes for various human subpopulations.
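
The effective population size estimates in the figure come from far more sophisticated probabilistic models; purely as a minimal illustration of the underlying idea (and not Dr. Terhorst's actual method), the sketch below applies Watterson's classic estimator: under a neutral, constant-size coalescent model, the number of segregating sites in a sample determines the population-scaled mutation rate, from which an effective population size follows.

```python
# A toy illustration of coalescent-based inference of effective population
# size using Watterson's estimator (hypothetical inputs; not the method
# behind the figure above).

def watterson_ne(num_segregating: int, n_samples: int,
                 seq_length: int, mu: float) -> float:
    """Estimate N_e from the number of segregating sites, assuming a
    neutrally evolving, constant-size population (theta = 4*N_e*mu per site)."""
    a_n = sum(1.0 / i for i in range(1, n_samples))  # harmonic-number correction
    theta_per_site = num_segregating / (a_n * seq_length)
    return theta_per_site / (4.0 * mu)

# Example: 1,000 segregating sites among 50 haplotypes of a 1 Mb region,
# with a per-site mutation rate of 1.25e-8 per generation.
print(f"N_e = {watterson_ne(1000, 50, 1_000_000, 1.25e-8):,.0f}")
```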

Jie Liu

Dr. Liu’s research lab aims to develop machine learning approaches for real-world bioinformatics and medical informatics problems. We believe that computational methods are essential to understanding many problems in molecular biology, including the dynamics of genome conformation and nuclear organization, gene regulation, cellular networks, and the genetic basis of human diseases.

The first computational embedding method for single cells in terms of their chromatin organization.
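
The published embedding method is considerably more involved; as a hypothetical sketch of the general idea only, the code below flattens each cell's (simulated) chromatin contact map into a feature vector and projects the cells into two dimensions with PCA.

```python
# A minimal, generic sketch of embedding single cells by chromatin
# organization (simulated data; not the lab's published method).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_cells, n_bins = 200, 50

# Hypothetical single-cell contact maps: symmetric bin-by-bin counts.
maps = rng.poisson(1.0, size=(n_cells, n_bins, n_bins))
maps = (maps + maps.transpose(0, 2, 1)) // 2  # enforce symmetry

# Keep the upper triangle of each map as that cell's feature vector.
iu = np.triu_indices(n_bins)
features = maps[:, iu[0], iu[1]].astype(float)

# Two-dimensional embedding of cells by their chromatin contacts.
embedding = PCA(n_components=2).fit_transform(features)
print(embedding.shape)  # (200, 2)
```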

Christopher E. Gillies

I am a Research Faculty member with the Michigan Center for Integrative Research in Critical Care (MCIRCC). Our team builds predictive algorithms, analyzes signals, and implements statistical models to advance critical care medicine. We use electronic health record data to build predictive algorithms. One example is Predicting Intensive Care Transfers and other Unforeseen Events (PICTURE), which uses commonly collected vital signs and labs to predict patient deterioration on the general hospital floor. Additionally, our team collects waveforms from the University Hospital and stores these data using Amazon Web Services. We use these signals to build predictive algorithms that advance precision medicine. Our flagship algorithm, the Analytic for Hemodynamic Instability (AHI), predicts patient deterioration using a single-lead electrocardiogram signal. We use Bayesian methods to analyze metabolomic biomarker data from blood and exhaled breath to understand sepsis and acute respiratory distress syndrome. I also have an interest in statistical genetics.
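
To make the kind of deterioration model described above concrete, here is a hedged sketch, not the actual PICTURE algorithm: a gradient-boosted classifier trained on a small, hypothetical set of vital signs and labs, with a fully simulated outcome.

```python
# A hypothetical sketch of a ward-deterioration classifier (synthetic
# data and an illustrative feature set; not the PICTURE model itself).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 1000
# Hypothetical features (the real feature set is more extensive).
X = np.column_stack([
    rng.normal(85, 15, n),    # heart rate (bpm)
    rng.normal(18, 4, n),     # respiratory rate (breaths/min)
    rng.normal(120, 20, n),   # systolic blood pressure (mmHg)
    rng.normal(96, 3, n),     # SpO2 (%)
    rng.normal(1.5, 0.8, n),  # lactate (mmol/L)
])
# Synthetic outcome: deterioration is more likely with tachycardia,
# tachypnea, hypotension, hypoxia, and elevated lactate.
risk = (0.03 * (X[:, 0] - 85) + 0.2 * (X[:, 1] - 18)
        - 0.05 * (X[:, 2] - 120) - 0.3 * (X[:, 3] - 96)
        + 1.0 * (X[:, 4] - 1.5))
y = (risk + rng.normal(0, 1, n) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)
print("AUROC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```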

Hyun Min Kang

Hyun Min Kang is an Associate Professor in the Department of Biostatistics. He received his Ph.D. in Computer Science from the University of California, San Diego in 2009 and joined the University of Michigan faculty the same year. Prior to his doctoral studies, he completed his Bachelor's and Master's degrees in Electrical Engineering at Seoul National University and then worked for a year and a half as a research fellow at the Genome Research Center for Diabetes and Endocrine Disease at Seoul National University Hospital. His research interest lies in big-data genome science. Methodologically, his primary focus is on developing statistical methods and computational tools for large-scale genetic studies. Scientifically, his research aims to understand the etiology of complex disease traits, including type 2 diabetes, bipolar disorder, cardiovascular diseases, and glomerular diseases.

Veera Baladandayuthapani

Dr. Veera Baladandayuthapani is currently a Professor in the Department of Biostatistics at the University of Michigan (UM), where he is also the Associate Director of the Center for Cancer Biostatistics. He joined UM in Fall 2018 after spending 13 years in the Department of Biostatistics at the University of Texas MD Anderson Cancer Center, Houston, Texas, where he was a Professor and Institute Faculty Scholar and held adjunct appointments at Rice University, Texas A&M University, and the UT School of Public Health. His research interests are mainly in high-dimensional data modeling and Bayesian inference. This includes functional data analysis, Bayesian graphical models, Bayesian semi-/non-parametric models, and Bayesian machine learning. These methods are motivated by large and complex datasets (a.k.a. Big Data) such as high-throughput genomics, epigenomics, transcriptomics, and proteomics, as well as high-resolution neuro- and cancer-imaging. His work has been published in top statistical, biostatistical, bioinformatics, and biomedical/oncology journals, and he has co-authored a book on Bayesian analysis of gene expression data. He currently holds multiple PI-level grants from the NIH and NSF to develop innovative and advanced biostatistical and bioinformatics methods for big datasets in oncology. He has also served as the Director of the Biostatistics and Bioinformatics Cores for the Specialized Programs of Research Excellence (SPOREs) in Multiple Myeloma and Lung Cancer and as the Biostatistics & Bioinformatics platform leader for the Myeloma and Melanoma Moonshot Programs at MD Anderson. He is a fellow of the American Statistical Association and an elected member of the International Statistical Institute. He currently serves as an Associate Editor for the Journal of the American Statistical Association, Biometrics, and Sankhya.

An example of horizontal (across cancers) and vertical (across multiple molecular platforms) data integration. Image from Ha et al. (Scientific Reports, 2018; https://www.nature.com/articles/s41598-018-32682-x).
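
Dr. Baladandayuthapani's methods are Bayesian; purely as a minimal frequentist stand-in for the graphical modeling mentioned above, the sketch below uses the graphical lasso to recover a sparse conditional-dependence network among genes from simulated expression data.

```python
# A stand-in illustration of graph estimation for omics data: the
# graphical lasso fits a sparse inverse covariance matrix, whose nonzero
# off-diagonal entries define a conditional-dependence network among
# genes. Simulated data; not a specific published Bayesian method.
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(1)
n_samples, n_genes = 120, 10

# Hypothetical expression data with one built-in dependence:
# gene 1 is driven by gene 0 plus noise.
X = rng.normal(size=(n_samples, n_genes))
X[:, 1] = 0.8 * X[:, 0] + 0.2 * rng.normal(size=n_samples)

model = GraphicalLassoCV().fit(X)
precision = model.precision_

# Edges: gene pairs with nonzero partial correlation.
edges = [(i, j) for i in range(n_genes) for j in range(i + 1, n_genes)
         if abs(precision[i, j]) > 1e-6]
print(edges)  # should include (0, 1)
```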

Nicholson Price

I study how law shapes innovation in the life sciences, with a substantial focus on big data and artificial intelligence in medicine. I write about intellectual property incentives and protections for data and AI algorithms, the privacy issues raised by wide-scale collection of health and health-related data, the medical malpractice implications of AI in medicine, and how the FDA should regulate the use of medical AI.

Xiang Zhou

My research is focused on developing efficient and effective statistical and computational methods for genetic and genomic studies. These studies often involve large-scale and high-dimensional data; examples include genome-wide association studies, epigenome-wide association studies, and various functional genomic sequencing studies such as bulk and single-cell RNA-seq, bisulfite sequencing, ChIP-seq, and ATAC-seq. Our method development is application oriented, targeted at the practical needs of these large-scale genetic and genomic studies, and thus not restricted to a particular methodology area. Our previous and current methods include, but are not limited to, Bayesian methods, mixed effects models, factor analysis models, sparse regression models, deep learning algorithms, clustering algorithms, integrative methods, spatial statistics, and efficient computational algorithms. By developing novel analytic methods, I seek to extract important information from these data and to advance our understanding of the genetic basis of phenotypic variation for various human diseases and disease-related quantitative traits.

A statistical method recently developed in our group aims to identify tissues that are relevant to diseases or disease-related complex traits by integrating tissue-specific omics studies (e.g., the ROADMAP project) with genome-wide association studies (GWASs). The heatmap displays the rank of 105 tissues (y-axis) in terms of their relevance for each of the 43 GWAS traits (x-axis) evaluated by our method. Traits are organized by hierarchical clustering. Tissues are organized into ten tissue groups.
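
As one concrete illustration of the mixed effects models mentioned in the description above, the sketch below simulates a toy genome-wide association test with a linear mixed model. It uses the generic textbook formulation with variance components assumed known, on fully simulated data; it is not any specific published method.

```python
# A toy linear mixed model association test of the kind used in GWAS:
# y = x*beta + g + e, where g ~ N(0, sg2 * K) captures relatedness via
# the kinship matrix K and e ~ N(0, se2 * I). Simulated data throughout.
import numpy as np

rng = np.random.default_rng(7)
n, p = 500, 1000

# Hypothetical genotypes (0/1/2 minor-allele counts); kinship matrix
# computed from standardized genotypes.
G = rng.binomial(2, 0.3, size=(n, p)).astype(float)
Z = (G - G.mean(0)) / G.std(0)
K = Z @ Z.T / p

# Simulate a phenotype with a polygenic background plus one causal SNP.
x = G[:, 0]
sg2, se2 = 0.5, 0.5
y = 0.3 * x + rng.multivariate_normal(np.zeros(n), sg2 * K + se2 * np.eye(n))

# Generalized least squares for the SNP effect, with variance components
# assumed known here (real methods estimate them, e.g., by REML).
V_inv = np.linalg.inv(sg2 * K + se2 * np.eye(n))
beta_hat = (x @ V_inv @ y) / (x @ V_inv @ x)
se = np.sqrt(1.0 / (x @ V_inv @ x))
print(f"beta = {beta_hat:.3f}, z = {beta_hat / se:.2f}")
```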