Zhenke Wu

Zhenke Wu is an Assistant Professor of Biostatistics and a Research Assistant Professor in the Michigan Institute for Data Science (MIDAS). He received his Ph.D. in Biostatistics from Johns Hopkins University in 2014 and stayed at Hopkins for postdoctoral training before joining the University of Michigan. Dr. Wu’s research focuses on the design and application of statistical methods that inform health decisions made by individuals, an area often called precision medicine. The original methods and software developed by Dr. Wu are now used by investigators at research institutions such as the CDC and Johns Hopkins, as well as by site investigators in developing countries, including Kenya, South Africa, the Gambia, Mali, Zambia, Thailand, and Bangladesh.

Danny Forger

Daniel Forger is a Professor in the Department of Mathematics. He is devoted to understanding biological clocks. He uses techniques from many fields, including computer simulation, detailed mathematical modeling and mathematical analysis, to understand biological timekeeping. His research aims to generate predictions that can be experimentally verified.

Bhramar Mukherjee

Bhramar Mukherjee is a Professor in the Department of Biostatistics, which she joined in Fall 2006, and a Professor in the Department of Epidemiology. She received her Ph.D. from Purdue University in 2001. Bhramar’s principal research interests lie in Bayesian methods in epidemiology and studies of gene-environment interaction. She is also interested in modeling missingness in exposure, categorical data models, Bayesian nonparametrics, and the general area of statistical inference under outcome/exposure-dependent sampling schemes. Her methodological research is funded by the NSF and NIH. She is involved as a co-investigator in several R01s led by faculty in Internal Medicine, Epidemiology, and Environmental Health Sciences at UM. Her collaborative interests focus on genetic and environmental epidemiology, ranging from investigating the genetic architecture of colorectal cancer in relation to environmental exposures to studies of the effect of air pollution on pediatric asthma events in Detroit. She is actively engaged in global health research.

Jeffrey S. McCullough

Jeffrey S. McCullough, PhD, is Associate Professor in the Department of Health Management and Policy in the School of Public Health at the University of Michigan, Ann Arbor.

Prof. McCullough’s research focuses on technology and innovation in health care, with an emphasis on information technology (IT), pharmaceuticals, and empirical methods. Many of his studies have explored the effect of electronic health record (EHR) systems on health care quality and productivity. While the short-run gains from health IT adoption may be modest, these technologies form the foundation for a health information infrastructure. Scientists are just beginning to understand how to harness and apply medical information, a problem complicated by the sheer complexity of medical care, the heterogeneity across patients, and the importance of treatment selection. His current work draws on methods from both machine learning and econometrics to address these issues. His current pharmaceutical studies examine the roles of consumer heterogeneity and learning about the value of products, as well as the effect of direct-to-consumer advertising on health.

The marginal effects of health IT on mortality by diagnosis and decile of severity. We study the effect of hospitals' electronic health record (EHR) systems on patient outcomes. While we observe no benefits for the average patient, mortality falls significantly for high-risk patients in all EHR-sensitive conditions. These patterns, combined with findings from other analyses, suggest that EHR systems may be more effective at supporting care coordination and information management than at rules-based clinical decision support. McCullough, Parente, and Town, "Health information technology and patient outcomes: the role of information and labor coordination." RAND Journal of Economics, Vol. 47, no. 1 (Spring 2016).

Matthew Schipper

Matthew Schipper, PhD, is Assistant Professor in the Departments of Radiation Oncology and Biostatistics. He received his Ph.D. in Biostatistics from the University of Michigan in 2006. Prior to joining the Radiation Oncology department, he was a Research Investigator in the Department of Radiology at the University of Michigan and a consulting statistician at Innovative Analytics.

Prof. Schipper’s research interests include:

  • Use of Biomarkers to Individualize Treatment – Selection of dose for cancer patients treated with radiation therapy (RT) must balance the increased efficacy against the increased toxicity associated with higher dose. Historically, a single dose has been selected for a population of patients (e.g., all stage III non-small cell lung cancer). However, the availability of new biologic markers for toxicity and efficacy allows the possibility of selecting a more personalized dose. I am interested in using statistical models for toxicity and efficacy, as functions of RT dose and biomarkers, to select an optimal dose for an individual patient. We are studying quantitative methods based on utilities to make this efficacy/toxicity tradeoff explicit and quantitative when biomarkers for one or multiple outcomes are available. We have proposed a simulation-based method for studying the likely effects of any model- or marker-based dose selection on both toxicity and efficacy outcomes for a population of patients. In related projects, we are studying the role of correlation between the sensitivity of a patient’s tumor and normal tissues to radiation. We are also studying how to utilize these techniques in combination with baseline and/or mid-treatment adaptive image-guided RT.
  • Early Phase Oncology Study Design – An increasingly common feature of phase I designs is the inclusion of one or more dose expansion cohorts (DECs), in which the maximum tolerated dose (MTD) is first estimated using a 3+3 or other phase I design and then a fixed number of patients (often 10-20, in 1-10 cohorts) are treated at the dose initially estimated to be the MTD. Such an approach has not been studied statistically or compared to alternative designs. We have shown that a continual reassessment method (CRM) design, in which the dose-assignment mechanism is kept active for all patients, identifies the MTD more accurately and better protects the safety of trial patients than a similarly sized DEC trial. It also meets the objective of treating 15 or more patients at the final estimated MTD. A follow-up paper evaluating the role of DECs with a focus on efficacy estimation is in press at Annals of Oncology.
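The utility-based dose selection described in the first bullet can be illustrated with a toy sketch. Everything below is hypothetical: the logistic model forms, all parameter values, and the utility weight are illustrative assumptions, not the models used in this research. The idea is that efficacy and toxicity probabilities are modeled as functions of RT dose (and, for efficacy, a biomarker), and the personalized dose is the one maximizing an explicit utility.

```python
import numpy as np

def prob_efficacy(dose, biomarker, a=-4.0, b=0.08, c=1.0):
    """Illustrative logistic model: efficacy rises with dose and biomarker."""
    return 1.0 / (1.0 + np.exp(-(a + b * dose + c * biomarker)))

def prob_toxicity(dose, a=-6.0, b=0.07):
    """Illustrative logistic model: toxicity rises with dose."""
    return 1.0 / (1.0 + np.exp(-(a + b * dose)))

def optimal_dose(biomarker, doses=np.linspace(40, 80, 81), w=1.5):
    """Choose the dose maximizing utility = P(efficacy) - w * P(toxicity).

    The weight w (hypothetical here) encodes how much worse a toxicity
    event is judged to be than a lack of efficacy, making the
    efficacy/toxicity tradeoff explicit and quantitative."""
    utility = prob_efficacy(doses, biomarker) - w * prob_toxicity(doses)
    return doses[np.argmax(utility)]

# Example: compare recommended doses for two hypothetical patients.
print(optimal_dose(biomarker=1.0), optimal_dose(biomarker=-1.0))
```

Under these made-up parameters, a patient whose biomarker shifts the efficacy curve upward reaches an acceptable tradeoff at a lower dose than a biomarker-negative patient; changing w changes how aggressively dose is escalated.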

Gilbert S. Omenn

Gilbert Omenn, MD, PhD, is Professor of Computational Medicine & Bioinformatics, with appointments in Human Genetics and in Molecular Medicine & Genetics in the Medical School; Professor of Public Health in the School of Public Health; and the Harold T. Shapiro Distinguished University Professor at the University of Michigan, Ann Arbor.

Dr. Omenn’s current research interests are focused on cancer proteomics; splice isoforms as potential biomarkers and therapeutic targets; and isoform-level and single-cell functional networks of transcripts and proteins. He chairs the global Human Proteome Project of the Human Proteome Organization.

Omid Dehzangi

Omid Dehzangi, PhD, is Assistant Professor of Computer and Information Science, College of Engineering and Computer Science, at the University of Michigan, Dearborn.

Wearable health technology is drawing significant attention, for good reasons. The pervasive nature of such systems, which provide ubiquitous access to continuous personalized data, will transform the way people interact with each other and with their environment. The information extracted from these systems will enable emerging applications in healthcare, wellness, emergency response, fitness monitoring, elderly care support, long-term preventive chronic care, assistive care, smart environments, sports, gaming, and entertainment. These applications create many new research opportunities and draw researchers from many disciplines into data science, the methodological umbrella for data collection, data management, data analysis, and data visualization.

Despite this ground-breaking potential, designing and developing wearable medical embedded systems raises a number of interesting challenges. Because wearable processing architectures have limited resources, power efficiency is required to allow unobtrusive, long-term operation of the hardware. The data-intensive nature of continuous health monitoring also requires efficient signal processing and data analytic algorithms for real-time, scalable, reliable, accurate, and secure extraction of relevant information from an overwhelmingly large amount of data. Extensive research in the design, development, and assessment of such systems is therefore necessary.

Embedded Processing Platform Design

The majority of my work concentrates on designing wearable embedded processing platforms in order to shift the conventional paradigm from hospital-centric healthcare, with its episodic and reactive focus on diseases, to patient-centric and home-based healthcare. This alternative demands specialized design in terms of hardware, software development, signal processing and uncertainty reduction, data analysis, predictive modeling, and information extraction. The objective is to reduce the costs and improve the effectiveness of healthcare through proactive early monitoring, diagnosis, and treatment of diseases (i.e., preventive care), as shown in Figure 1.

Figure 1. Embedded processing platform in healthcare

Jeremy M G Taylor

Jeremy Taylor, PhD, is the Pharmacia Research Professor of Biostatistics in the School of Public Health and Professor in the Department of Radiation Oncology in the School of Medicine at the University of Michigan, Ann Arbor. He is the director of the University of Michigan Cancer Center Biostatistics Unit and director of the Cancer/Biostatistics training program. He received his B.A. in Mathematics from Cambridge University and his Ph.D. in Statistics from UC Berkeley. He was on the faculty at UCLA from 1983 to 1998, when he moved to the University of Michigan. He has held visiting positions at the Medical Research Council, Cambridge, England; the University of Adelaide; INSERM, Bordeaux, France; and CSIRO, Sydney, Australia. He is a previous winner of the Mortimer Spiegelman Award from the American Public Health Association and the Michael Fry Award from the Radiation Research Society. He has worked in various areas of statistics and biostatistics, including Box-Cox transformations, longitudinal and survival analysis, cure models, missing data, smoothing methods, clinical trial design, and surrogate and auxiliary variables. He has been heavily involved in collaborations in radiation oncology, cancer research, and bioinformatics.

I have broad interests and expertise in developing statistical methodology and applying it in biomedical research, particularly cancer research. I have undertaken research in power transformations, longitudinal modeling, survival analysis (particularly cure models), missing data methods, causal inference, and modeling of radiation oncology data. Recent interests, specifically related to cancer, are statistical methods for genomic data, statistical methods for evaluating cancer biomarkers, surrogate endpoints, phase I trial design, statistical methods for personalized medicine, and prognostic and predictive model validation. I strive to develop principled methods that lead to valid interpretations of the complex data collected in biomedical research.

Johann Gagnon-Bartsch

Johann Gagnon-Bartsch, PhD, is Assistant Professor of Statistics in the College of Literature, Science, and the Arts at the University of Michigan, Ann Arbor.

Prof. Gagnon-Bartsch’s research currently focuses on the analysis of high-throughput biological data as well as other types of high-dimensional data. More specifically, he is working with collaborators on developing methods that can be used when the data are corrupted by systematic measurement errors of unknown origin, or when the data suffer from the effects of unobserved confounders. For example, gene expression data suffer from both systematic measurement errors of unknown origin (due to uncontrolled variations in laboratory conditions) and the effects of unobserved confounders (such as whether a patient had just eaten before a tissue sample was taken). They are developing methodology that is able to correct for these systematic errors using “negative controls.” Negative controls are variables that (1) are known to have no true association with the biological signal of interest, and (2) are corrupted by the systematic errors, just like the variables that are of interest. The negative controls allow researchers to learn about the structure of the errors, so that the errors can then be removed from the other variables.
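A minimal sketch of the negative-control idea, in the spirit of remove-unwanted-variation (RUV) style methods; this is a simplification for illustration, not the authors' exact algorithm, and all data, dimensions, and variable names below are made up. Because the control variables carry no signal of interest, their leading principal directions estimate the unwanted factors, which can then be regressed out of every variable.

```python
import numpy as np

def remove_unwanted_variation(Y, control_idx, k):
    """Remove k unwanted factors from data matrix Y (samples x variables).

    Simplified sketch: the negative-control columns contain only
    systematic error, so their leading left singular vectors estimate
    the unwanted factors W; regressing W out of all columns removes
    (most of) the systematic error from the variables of interest."""
    controls = Y[:, control_idx]
    # Leading left singular vectors of the controls = estimated factors.
    U, s, Vt = np.linalg.svd(controls, full_matrices=False)
    W = U[:, :k]                                   # samples x k
    # Regress every column of Y on W and keep the residuals.
    alpha, *_ = np.linalg.lstsq(W, Y, rcond=None)  # k x variables
    return Y - W @ alpha

# Synthetic example: one unwanted factor (e.g., laboratory) drives
# all 100 variables; the first 20 columns are negative controls.
rng = np.random.default_rng(0)
batch = rng.normal(size=(30, 1))
Y = batch @ rng.normal(size=(1, 100)) + 0.1 * rng.normal(size=(30, 100))
Y_clean = remove_unwanted_variation(Y, np.arange(20), k=1)
```

After correction, the variation driven by the shared factor is largely gone, which is what produces the "cluster by biology instead of by laboratory" behavior described in the figure caption below.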

Microarray data from tissue samples taken from three different regions of the brain (anterior cingulate cortex, dorsolateral prefrontal cortex, and cerebellum) of ten individuals. The 30 tissue samples were separately analyzed in three different laboratories (UC Davis, UC Irvine, U of Michigan). The left plot shows the first two principal components of the data. The data cluster by laboratory, indicating that most of the variation in the data is systematic error that arises due to uncontrolled variation in laboratory conditions. The second plot shows the data after adjustment. The data now cluster by brain region (cortex vs. cerebellum). The data is from GEO (GSE2164).

Mahesh Agarwal

Mahesh Agarwal is Associate Professor of Mathematics and Statistics at the University of Michigan, Dearborn.

Prof. Agarwal is primarily interested in number theory, in particular p-adic L-functions, the Bloch–Kato conjecture, and automorphic forms. His secondary research interests are polynomials, geometry, and math education.