Explore ARC

Veera Baladandayuthapani


Dr. Veera Baladandayuthapani is currently a Professor in the Department of Biostatistics at the University of Michigan (UM), where he is also the Associate Director of the Center for Cancer Biostatistics. He joined UM in Fall 2018 after spending 13 years in the Department of Biostatistics at the University of Texas MD Anderson Cancer Center, Houston, Texas, where he was a Professor and Institute Faculty Scholar and held adjunct appointments at Rice University, Texas A&M University, and the UT School of Public Health. His research interests are mainly in high-dimensional data modeling and Bayesian inference, including functional data analysis, Bayesian graphical models, Bayesian semi-/non-parametric models, and Bayesian machine learning. These methods are motivated by large and complex datasets (a.k.a. Big Data) such as high-throughput genomics, epigenomics, transcriptomics, and proteomics, as well as high-resolution neuro- and cancer imaging. His work has been published in top statistical, biostatistical, bioinformatics, and biomedical/oncology journals, and he has co-authored a book on Bayesian analysis of gene expression data. He currently holds multiple PI-level grants from the NIH and NSF to develop innovative and advanced biostatistical and bioinformatics methods for big datasets in oncology. He has also served as the Director of the Biostatistics and Bioinformatics Cores for the Specialized Programs of Research Excellence (SPOREs) in Multiple Myeloma and Lung Cancer and as the Biostatistics & Bioinformatics platform leader for the Myeloma and Melanoma Moonshot Programs at MD Anderson. He is a fellow of the American Statistical Association and an elected member of the International Statistical Institute. He currently serves as an Associate Editor for the Journal of the American Statistical Association, Biometrics, and Sankhya.


An example of horizontal (across cancers) and vertical (across multiple molecular platforms) data integration. Image from Ha et al. (Scientific Reports, 2018; https://www.nature.com/articles/s41598-018-32682-x)

Nicholson Price


I study how law shapes innovation in the life sciences, with a substantial focus on big data and artificial intelligence in medicine. I write about the intellectual property incentives and protections for data and AI algorithms, the privacy issues raised by wide-scale collection of health and health-related data, the medical malpractice implications of AI in medicine, and how the FDA should regulate the use of medical AI.

Samuel K Handelman


Samuel K Handelman, Ph.D., is a Research Assistant Professor in the Department of Internal Medicine, Gastroenterology, at Michigan Medicine, University of Michigan, Ann Arbor. Prof. Handelman focuses on multi-omics approaches to drive precision/personalized therapy and to predict population-level differences in the effectiveness of interventions. He tends to favor regression-style and hierarchical-clustering approaches, partly because he has a background in both statistics and cladistics. His scientific monomania is for compensatory mechanisms and trade-offs in evolution, but he has a principled reason to focus on translational medicine: real understanding of these mechanisms goes all the way into the clinic. Anything less than clinical translation indicates that we don’t understand what drove the genetics of human populations.

Srijan Sen


Srijan Sen, MD, PhD, is the Frances and Kenneth Eisenberg Professor of Depression and Neurosciences. Dr. Sen’s research focuses on the interactions between genes and the environment and their effect on stress, anxiety, and depression. He also has a particular interest in medical education, and leads a large multi-institution study that uses medical internship as a model of stress.

Matthew Kay


Matthew Kay, PhD, is Assistant Professor of Information in the School of Information and Assistant Professor of Electrical Engineering and Computer Science in the College of Engineering at the University of Michigan, Ann Arbor.

Prof. Kay’s research includes work on communicating uncertainty, usable statistics, and personal informatics. People are increasingly exposed to sensing and prediction in their daily lives (“how many steps did I take today?”, “how long until my bus shows up?”, “how much do I weigh?”). Uncertainty is both inherent to these systems and usually poorly communicated. To build understandable data presentations, we must study how people interpret their data and what goals they have for it, which informs the way that we should communicate results from our models, which in turn determines what models we must use in the first place. Prof. Kay tackles these problems using a multi-faceted approach, including qualitative and quantitative analysis of behavior, building and evaluating interactive systems, and designing and testing visualization techniques. His work draws on approaches from human-computer interaction, information visualization, and statistics to build information visualizations that people can more easily understand along with the models to back those visualizations.
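As one illustration of communicating predictive uncertainty (a sketch under assumed parameters, not Prof. Kay's implementation), the code below builds a quantile dotplot: a predictive distribution for a quantity such as "minutes until my bus arrives" is discretized into 20 equally likely outcomes so a reader can estimate probabilities by counting dots. The lognormal distribution and its parameters are made up for the example.

```python
# Minimal sketch of a quantile dotplot: discretize a predictive distribution
# into a small number of equally likely outcomes so uncertainty can be read
# by counting dots. All distributional choices here are illustrative only.
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

# Assumed (made-up) predictive distribution for "minutes until my bus arrives".
predictive = stats.lognorm(s=0.4, scale=8)

n_dots = 20
# Quantiles at the midpoints of 20 equal-probability bins.
quantiles = predictive.ppf((np.arange(n_dots) + 0.5) / n_dots)

# Stack dots into one-minute-wide bins to form the dotplot.
bins = np.floor(quantiles).astype(int)
x, y, counts = [], [], {}
for b in bins:
    counts[b] = counts.get(b, 0) + 1
    x.append(b)
    y.append(counts[b])

plt.scatter(x, y, s=200)
plt.xlabel("Minutes until the bus arrives")
plt.ylabel("Dot count (each dot = 5% chance)")
plt.title("Quantile dotplot: 20 equally likely outcomes")
plt.show()
```

Because each dot carries 5% of the probability, a rider can answer questions like "how likely is the bus to arrive within 7 minutes?" by counting dots to the left of 7, without any formal statistical training.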


Joseph Ryan


Joseph Ryan, PhD, is Associate Professor of Social Work in the School of Social Work and Faculty Associate in the Center for Political Studies, ISR, at the University of Michigan, Ann Arbor.

Prof. Ryan’s research and teaching build upon his direct practice experiences with child welfare and juvenile justice populations. Dr. Ryan is the Co-Director of the Child and Adolescent Data Lab, an applied research center focused on using big data to drive policy and practice decisions in the field. Dr. Ryan is currently involved with several studies, including a randomized clinical trial of recovery coaches for substance-abusing parents in Illinois (AODA Demonstration), a foster care placement prevention study for young children in Michigan (MiFamily Demonstration), a Pay for Success (social impact bonds) study focused on high-risk adolescents involved with the Illinois child welfare and juvenile justice systems, and a study of the educational experiences of youth in foster care (Kellogg Foundation Education and Equity). Dr. Ryan is committed to building strong University and State partnerships that use big data and data visualization tools to advance knowledge and address critical questions in the fields of child welfare and juvenile justice.

Matthew Schipper


Matthew Schipper, PhD, is Assistant Professor in the Departments of Radiation Oncology and Biostatistics. He received his Ph.D. in Biostatistics from the University of Michigan in 2006. Prior to joining the Radiation Oncology department, he was a Research Investigator in the Department of Radiology at the University of Michigan and a consulting statistician at Innovative Analytics.

Prof. Schipper’s research interests include:

  • Use of Biomarkers to Individualize Treatment – Selection of dose for cancer patients treated with radiation therapy (RT) must balance the increased efficacy against the increased toxicity associated with higher dose. Historically, a single dose has been selected for a population of patients (e.g., all stage III non-small cell lung cancer patients). However, the availability of new biologic markers for toxicity and efficacy allows the possibility of selecting a more personalized dose. I am interested in using statistical models for toxicity and efficacy, as functions of RT dose and biomarkers, to select an optimal dose for an individual patient. We are studying utility-based methods that make this efficacy/toxicity tradeoff explicit and quantitative when biomarkers for one or multiple outcomes are available (a minimal illustrative sketch of this idea appears after this list). We have proposed a simulation-based method for studying the likely effects of any model- or marker-based dose selection on both toxicity and efficacy outcomes for a population of patients. In related projects, we are studying the role of correlation between the sensitivity of a patient's tumor and normal tissues to radiation. We are also studying how to utilize these techniques in combination with baseline and/or mid-treatment adaptive image-guided RT.
  • Early Phase Oncology Study Design – An increasingly common feature of phase I designs is the inclusion of one or more dose expansion cohorts (DECs), in which the maximum tolerated dose (MTD) is first estimated using a 3+3 or other phase I design and then a fixed number of patients (often 10-20, in 1-10 cohorts) are treated at the dose initially estimated to be the MTD. Such an approach has not been studied statistically or compared to alternative designs. We have shown that a continual reassessment method (CRM) design, in which the dose-assignment mechanism is kept active for all patients, identifies the MTD more accurately and better protects the safety of trial patients than a similarly sized DEC trial. It also meets the objective of treating 15 or more patients at the final estimated MTD. A follow-up paper evaluating the role of DECs, with a focus on efficacy estimation, is in press at Annals of Oncology.
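The sketch below illustrates the utility idea referenced in the first bullet: assumed logistic dose-response models for efficacy and toxicity are combined into a utility that is maximized over a dose grid for each patient's biomarker profile. The model forms, coefficients, dose range, and toxicity weight are all illustrative assumptions, not the published models.

```python
# Illustrative sketch of utility-based, biomarker-informed dose selection.
# All model forms and numbers below are made up for the example.
import numpy as np

def p_efficacy(dose, biomarker, b0=-4.0, b_dose=0.06, b_mark=1.0):
    """Assumed logistic model for probability of tumor control."""
    return 1 / (1 + np.exp(-(b0 + b_dose * dose + b_mark * biomarker)))

def p_toxicity(dose, biomarker, g0=-6.0, g_dose=0.07, g_mark=0.5):
    """Assumed logistic model for probability of severe toxicity."""
    return 1 / (1 + np.exp(-(g0 + g_dose * dose + g_mark * biomarker)))

def optimal_dose(biomarker_eff, biomarker_tox,
                 doses=np.arange(50, 81), weight=1.5):
    """Pick the dose (Gy) maximizing utility = P(efficacy) - weight * P(toxicity).
    The weight makes the efficacy/toxicity tradeoff explicit."""
    utility = (p_efficacy(doses, biomarker_eff)
               - weight * p_toxicity(doses, biomarker_tox))
    return doses[np.argmax(utility)]

# Two hypothetical patients with different biomarker profiles receive
# different personalized doses under the same utility.
print(optimal_dose(biomarker_eff=0.5, biomarker_tox=1.2))
print(optimal_dose(biomarker_eff=1.0, biomarker_tox=-0.5))
```

Repeating this dose rule over a simulated population of patients, and comparing the resulting toxicity and efficacy outcomes with those under a single fixed dose, is the kind of simulation-based evaluation described in the bullet above.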

Omid Dehzangi


Omid Dehzangi, PhD, is Assistant Professor of Computer and Information Science, College of Engineering and Computer Science, at the University of Michigan, Dearborn.

Wearable health technology is drawing significant attention, for good reasons. The pervasive nature of such systems, providing ubiquitous access to continuous personalized data, will transform the way people interact with each other and their environment. The information extracted from these systems will enable emerging applications in healthcare, wellness, emergency response, fitness monitoring, elderly care support, long-term preventive chronic care, assistive care, smart environments, sports, gaming, and entertainment. These applications create many new research opportunities and pull research from many disciplines into data science, the methodological umbrella for data collection, data management, data analysis, and data visualization. Despite this ground-breaking potential, designing and developing wearable medical embedded systems poses a number of interesting challenges. Because wearable processing architectures have limited resources, power efficiency is required to allow unobtrusive, long-term operation of the hardware. In addition, the data-intensive nature of continuous health monitoring requires efficient signal processing and data analytic algorithms for real-time, scalable, reliable, accurate, and secure extraction of relevant information from an overwhelmingly large amount of data. Extensive research in their design, development, and assessment is therefore necessary.

Embedded Processing Platform Design

The majority of my work concentrates on designing wearable embedded processing platforms in order to shift the conventional paradigm from hospital-centric healthcare, with its episodic and reactive focus on disease, to patient-centric, home-based healthcare. This shift demands specialized design in terms of hardware, software, signal processing and uncertainty reduction, data analysis, predictive modeling, and information extraction. The objective is to reduce the costs and improve the effectiveness of healthcare through proactive early monitoring, diagnosis, and treatment of diseases (i.e., preventive care), as shown in Figure 1.

Figure 1. Embedded processing platform in healthcare
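As a hedged illustration of the kind of on-device processing described above (not a description of Prof. Dehzangi's platform), the sketch below reduces a streaming physiological signal to a few per-window features, the sort of compact summary a power-constrained wearable might compute before transmitting or classifying. The sampling rate, window length, and feature set are assumptions for the example.

```python
# Illustrative sketch only: windowed feature extraction for a wearable signal.
# Reducing raw samples to per-window features keeps downstream computation
# and transmission within tight power budgets.
import numpy as np

def window_features(signal, fs=50, window_s=5):
    """Slide a non-overlapping window over a 1-D signal sampled at fs Hz
    and return simple time-domain features per window."""
    n = fs * window_s
    n_windows = len(signal) // n
    feats = []
    for i in range(n_windows):
        w = signal[i * n:(i + 1) * n]
        feats.append({
            "mean": float(np.mean(w)),
            "std": float(np.std(w)),
            "rms": float(np.sqrt(np.mean(w ** 2))),
            "zero_crossings": int(np.sum(np.diff(np.sign(w)) != 0)),
        })
    return feats

# Example: 60 seconds of a synthetic accelerometer-like signal at 50 Hz.
t = np.arange(0, 60, 1 / 50)
signal = np.sin(2 * np.pi * 1.5 * t) + 0.3 * np.random.randn(len(t))
print(window_features(signal)[0])
```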

Jeremy M G Taylor


Jeremy Taylor, PhD, is the Pharmacia Research Professor of Biostatistics in the School of Public Health and Professor in the Department of Radiation Oncology in the School of Medicine at the University of Michigan, Ann Arbor. He is the director of the University of Michigan Cancer Center Biostatistics Unit and director of the Cancer/Biostatistics training program. He received his B.A. in Mathematics from Cambridge University and his Ph.D. in Statistics from UC Berkeley. He was on the faculty at UCLA from 1983 to 1998, when he moved to the University of Michigan. He has held visiting positions at the Medical Research Council, Cambridge, England; the University of Adelaide; INSERM, Bordeaux; and CSIRO, Sydney, Australia. He is a previous winner of the Mortimer Spiegelman Award from the American Public Health Association and the Michael Fry Award from the Radiation Research Society. He has worked in various areas of statistics and biostatistics, including Box-Cox transformations, longitudinal and survival analysis, cure models, missing data, smoothing methods, clinical trial design, and surrogate and auxiliary variables. He has been heavily involved in collaborations in radiation oncology, cancer research, and bioinformatics.

I have broad interests and expertise in developing statistical methodology and applying it in biomedical research, particularly cancer research. I have undertaken research in power transformations, longitudinal modeling, survival analysis (particularly cure models), missing data methods, causal inference, and the modeling of radiation oncology data. Recent interests, specifically related to cancer, are in statistical methods for genomic data, statistical methods for evaluating cancer biomarkers, surrogate endpoints, phase I trial design, statistical methods for personalized medicine, and prognostic and predictive model validation. I strive to develop principled methods that lead to valid interpretations of the complex data collected in biomedical research.

Johann Gagnon-Bartsch


Johann Gagnon-Bartsch, PhD, is Assistant Professor of Statistics in the College of Literature, Science, and the Arts at the University of Michigan, Ann Arbor.

Prof. Gagnon-Bartsch’s research currently focuses on the analysis of high-throughput biological data as well as other types of high-dimensional data. More specifically, he is working with collaborators on developing methods that can be used when the data are corrupted by systematic measurement errors of unknown origin, or when the data suffer from the effects of unobserved confounders. For example, gene expression data suffer from both systematic measurement errors of unknown origin (due to uncontrolled variations in laboratory conditions) and the effects of unobserved confounders (such as whether a patient had just eaten before a tissue sample was taken). They are developing methodology that is able to correct for these systematic errors using “negative controls.” Negative controls are variables that (1) are known to have no true association with the biological signal of interest, and (2) are corrupted by the systematic errors, just like the variables that are of interest. The negative controls allow us to learn about the structure of the errors, so that we may then remove the errors from the other variables.
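To make the negative-control idea concrete, here is a minimal sketch in the spirit of RUV-style adjustment (an illustration under simple assumptions, not Prof. Gagnon-Bartsch's exact method): the dominant factors estimated from the negative-control genes are treated as the unwanted variation and regressed out of every gene. The toy data, the choice of k, and the helper function names are all made up for the example.

```python
# Minimal sketch of negative-control-based removal of unwanted variation.
# Negative-control genes carry no biological signal, so the dominant structure
# seen in them estimates the unwanted (e.g., laboratory) variation, which can
# then be regressed out of all genes.
import numpy as np

def ruv_adjust(Y, control_idx, k=2):
    """Y: samples x genes expression matrix.
    control_idx: indices of negative-control genes.
    k: assumed number of unwanted factors."""
    Yc = Y[:, control_idx]
    # Estimate unwanted factors from the negative controls via SVD.
    U, s, Vt = np.linalg.svd(Yc - Yc.mean(axis=0), full_matrices=False)
    W = U[:, :k] * s[:k]                     # samples x k unwanted factors
    # Regress each gene on the unwanted factors and keep the residuals.
    alpha, *_ = np.linalg.lstsq(W, Y, rcond=None)
    return Y - W @ alpha

# Toy example: 30 samples, 1000 genes, first 100 genes are negative controls,
# with a shared "laboratory" effect added to every gene.
rng = np.random.default_rng(0)
lab_effect = np.repeat(rng.normal(size=(3, 1)), 10, axis=0)   # 3 laboratories
Y = rng.normal(size=(30, 1000)) + lab_effect
Y_adj = ruv_adjust(Y, control_idx=np.arange(100), k=1)
```

A quick diagnostic, mirroring the figure below, is to plot the first two principal components of Y and of Y_adj: before adjustment the samples should cluster by laboratory, and after adjustment that clustering should largely disappear.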

Microarray data from tissue samples taken from three different regions of the brain (anterior cingulate cortex, dorsolateral prefrontal cortex, and cerebellum) of ten individuals. The 30 tissue samples were separately analyzed in three different laboratories (UC Davis, UC Irvine, U of Michigan). The left plot shows the first two principal components of the data. The data cluster by laboratory, indicating that most of the variation in the data is systematic error that arises due to uncontrolled variation in laboratory conditions. The second plot shows the data after adjustment. The data now cluster by brain region (cortex vs. cerebellum). The data is from GEO (GSE2164).