Dan Rabosky

The Rabosky lab seeks to understand how and why life on Earth became so diverse. We focus primarily on large-scale patterns of species diversification (speciation and extinction) and on the tempo and mode of phenotypic evolution, to better understand what regulates the “amount” of biodiversity through Deep Time. To this end, we develop theoretical frameworks and computational tools for studying evolutionary dynamics using DNA-sequence-based evolutionary trees (phylogenies), the fossil record, and phenotypic data from present-day species (morphology, ecology). We develop and apply a range of methods involving supervised and unsupervised learning, including Markov chain Monte Carlo, hierarchical mixture models, hidden Markov models, latent feature models, and more. We are increasingly interested in complex morphological and ecological traits, which, thanks to a rapidly expanding data universe, represent a tremendous opportunity for the field to answer long-standing questions about how organisms evolve. At the same time, we are embracing the analytical challenges of these data, because fully realizing their potential requires new analytical paradigms that go beyond the limitations of traditional parametric models for low-dimensional data.
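As a flavor of the kind of inference involved, here is a minimal sketch (not the lab's actual software) of Markov chain Monte Carlo applied to the simplest diversification model: a pure-birth (Yule) process whose single speciation rate is estimated from simulated waiting times between speciation events.

```python
# Minimal sketch: Metropolis MCMC for the speciation rate of a pure-birth (Yule)
# process, given simulated waiting times. Illustrative only; real analyses add
# extinction, rate heterogeneity across the tree, and mixture/hidden-state models.
import numpy as np

rng = np.random.default_rng(0)

def simulate_yule(true_lambda=0.5, n_tips=50):
    """Waiting times between speciation events of a pure-birth tree."""
    lineages = np.arange(1, n_tips)                 # 1, 2, ..., n_tips-1 lineages
    waits = rng.exponential(1.0 / (true_lambda * lineages))
    return lineages, waits

def log_likelihood(lam, lineages, waits):
    """Log-likelihood of speciation rate `lam`; a flat (improper) prior is assumed."""
    if lam <= 0:
        return -np.inf
    return np.sum(np.log(lineages * lam) - lineages * lam * waits)

def metropolis(lineages, waits, n_iter=5000, step=0.1):
    """Random-walk Metropolis sampler over the speciation rate."""
    lam, samples = 1.0, []
    for _ in range(n_iter):
        prop = lam + rng.normal(0.0, step)
        if np.log(rng.uniform()) < log_likelihood(prop, lineages, waits) - log_likelihood(lam, lineages, waits):
            lam = prop
        samples.append(lam)
    return np.array(samples)

lineages, waits = simulate_yule()
post = metropolis(lineages, waits)
print("posterior mean speciation rate:", post[2500:].mean())  # discard burn-in
```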

Automatic feature identification from a large-scale evolutionary tree (phylogeny) using a compound model of the generating process (speciation, extinction) developed in the Rabosky lab. Colors correspond to distinct evolutionary rate regimes as estimated using Markov chain Monte Carlo. This method revealed widespread heterogeneity in the rate of species formation during 350 million years of ray-finned fish evolution. Warm colors = fast rates; cool colors = slow rates.


Xiaoquan William Wen

Xiaoquan (William) Wen is an Associate Professor of Biostatistics. He received his PhD in Statistics from the University of Chicago in 2011 and joined the faculty at the University of Michigan in the same year. His research centers on developing Bayesian and computational statistical methods to answer interesting scientific questions arising from genetics and genomics.

On the applied side, he is particularly interested in seeking statistically sound and computationally efficient solutions to scientific problems in genetics and functional genomics.
Quantifying tissue-specific expression quantitative trait loci (eQTLs) via Bayesian model comparison
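As a toy illustration of Bayesian model comparison in this setting (assumed priors and simulated effect-size estimates, not Prof. Wen's actual software), the sketch below contrasts a "shared eQTL" model against a "tissue-specific" model for a single SNP-gene pair using closed-form Gaussian marginal likelihoods and a Bayes factor.

```python
# Toy sketch: Bayes factor comparing a shared-effect eQTL model with a
# tissue-1-only model, using Gaussian marginal likelihoods in closed form.
import numpy as np
from scipy.stats import multivariate_normal

beta_hat = np.array([0.8, 0.75])   # estimated eQTL effects in two tissues (toy data)
se = np.array([0.2, 0.2])          # their standard errors
w2 = 0.5 ** 2                      # assumed prior variance on the true effect size

# Model S (shared): both tissues share one effect beta ~ N(0, w2)
cov_shared = w2 * np.ones((2, 2)) + np.diag(se ** 2)
logml_shared = multivariate_normal.logpdf(beta_hat, mean=np.zeros(2), cov=cov_shared)

# Model T (tissue-specific): effect only in tissue 1; the tissue 2 effect is zero
cov_spec = np.diag(se ** 2) + np.diag([w2, 0.0])
logml_spec = multivariate_normal.logpdf(beta_hat, mean=np.zeros(2), cov=cov_spec)

log_bf = logml_shared - logml_spec
print(f"log Bayes factor (shared vs tissue-specific): {log_bf:.2f}")
```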

Cong Shi

Cong Shi is an associate professor in the Department of Industrial and Operations Engineering at the University of Michigan College of Engineering. His primary research interest lies in developing efficient and provably good data-driven algorithms for operations management models, including supply chain management, revenue management, service operations, and human-robot interactions. He received his Ph.D. in Operations Research from MIT in 2012 and his B.S. in Mathematics from the National University of Singapore in 2007.
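As a canonical example of a data-driven operations algorithm (illustrative only, not one of Prof. Shi's published methods), the sketch below applies sample average approximation to the newsvendor problem, choosing an order quantity directly from historical demand data.

```python
# Illustrative sketch: sample average approximation (SAA) for the data-driven
# newsvendor, where the order quantity is an empirical quantile of observed demand.
import numpy as np

rng = np.random.default_rng(1)
demand_samples = rng.gamma(shape=5.0, scale=20.0, size=200)  # stand-in historical demand

price, cost = 10.0, 4.0                   # unit selling price and unit cost
critical_ratio = (price - cost) / price   # optimal service level for the newsvendor

# SAA: the data-driven order quantity is the empirical critical-ratio quantile of demand
order_quantity = np.quantile(demand_samples, critical_ratio)

def expected_profit(q, demand):
    """Average profit over the demand samples when ordering q units."""
    sales = np.minimum(q, demand)
    return np.mean(price * sales - cost * q)

print(f"SAA order quantity: {order_quantity:.1f}")
print(f"estimated profit:   {expected_profit(order_quantity, demand_samples):.1f}")
```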

Brendan Kochunas

Dr. Kochunas’s research focuses on the next generation of numerical methods and parallel algorithms for high-fidelity computational reactor physics and on how to leverage these capabilities to develop digital twins. His group’s areas of expertise include neutron transport, nuclide transmutation, multi-physics, parallel programming, and HPC architectures. The Nuclear Reactor Analysis and Methods (NURAM) group is also developing techniques that integrate data-driven methods with conventional approaches in numerical analysis to produce “hybrid models” for accurate, real-time modeling applications. This is embodied by his recent efforts to combine high-fidelity simulation results with simulation models in virtual reality through the Virtual Ford Nuclear Reactor.
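For a sense of the physics side of such hybrid models, here is a minimal sketch (textbook parameters, not NURAM production code) of the one delayed-group point-kinetics equations, the simplest reactor-dynamics model one might pair with a data-driven correction term.

```python
# Minimal sketch: one delayed-group point kinetics solved with scipy's ODE solver.
# Parameter values are generic textbook-style assumptions, chosen only for illustration.
import numpy as np
from scipy.integrate import solve_ivp

beta, Lam, lam = 0.0065, 1e-4, 0.08    # delayed fraction, generation time (s), decay constant (1/s)
rho = 0.001                            # a small step reactivity insertion

def point_kinetics(t, y):
    n, C = y                                    # neutron density and precursor concentration
    dn = (rho - beta) / Lam * n + lam * C
    dC = beta / Lam * n - lam * C
    return [dn, dC]

y0 = [1.0, beta / (Lam * lam)]                  # steady-state initial condition at rho = 0
sol = solve_ivp(point_kinetics, (0.0, 10.0), y0, max_step=0.01)
print("relative power after 10 s:", sol.y[0, -1])
```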

Relationship of concepts for the Digital Model, Digital Shadow, Digital Twin, and the Physical Asset using images and models of the Ford Nuclear Reactor as an example. Large arrows represent automated information exchange and small arrows represent manual data exchange.

Ivy F. Tso

My lab researches how the human brain processes social and affective information and how these processes are affected in psychiatric disorders, especially schizophrenia and bipolar disorder. We use behavioral, electrophysiological (EEG), neuroimaging (functional MRI), eye tracking, brain stimulation (TMS, tACS), and computational methods in our studies. One main focus of our work is building and validating computational models based on intensive, high-dimensional subject-level behavioral and brain data to explain clinical phenomena, parse mechanisms, and predict patient outcomes. The goal is to improve diagnostic and prognostic assessment and to develop personalized treatments.
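As a schematic of the brain-to-outcome mapping described above (synthetic data, not the lab's actual models), the sketch below cross-validates a regularized classifier that predicts a binary clinical outcome from parcellated brain-activation features.

```python
# Illustrative sketch: cross-validated prediction of a binary clinical outcome
# from per-parcel brain activation. All data here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_subjects, n_parcels = 80, 200
X = rng.normal(size=(n_subjects, n_parcels))          # activation per brain parcel
y = (X[:, 0] - X[:, 1] + rng.normal(size=n_subjects) > 0).astype(int)  # toy outcome label

model = make_pipeline(StandardScaler(), LogisticRegression(C=0.1, max_iter=1000))
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print("cross-validated AUC:", scores.mean().round(2))
```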

Brain activation (in parcellated map) during social and face processing.

Lubomir Hadjiyski

Dr. Hadjiyski’s research interests include computer-aided diagnosis, artificial intelligence (AI), machine learning, predictive models, image processing and analysis, medical imaging, and control systems. His current research involves the design of decision support systems for detection and diagnosis of cancer in different organs, and quantitative analysis of integrated multimodality radiomics, histopathology, and molecular biomarkers for treatment response monitoring using AI and machine learning techniques. He also studies the effect of decision support systems on physicians’ clinical performance.
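As a toy illustration of the quantitative inputs such systems rely on (synthetic image volume, hypothetical feature names), the sketch below extracts a few simple radiomics-style features from a segmented lesion.

```python
# Toy sketch: simple radiomics-style features from a segmented region of a
# synthetic 3D volume, the kind of inputs a decision-support classifier might combine.
import numpy as np

rng = np.random.default_rng(0)
image = rng.normal(100, 20, size=(64, 64, 64))        # synthetic CT-like volume
zz, yy, xx = np.ogrid[:64, :64, :64]
mask = (zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2 < 10 ** 2   # spherical "lesion" mask

lesion = image[mask]
features = {
    "volume_voxels": int(mask.sum()),
    "mean_intensity": float(lesion.mean()),
    "intensity_std": float(lesion.std()),
    "p90_intensity": float(np.percentile(lesion, 90)),
}
print(features)
```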

Wenbo Sun

Uncertainty quantification and decision making are in growing demand as new technologies emerge in engineering and transportation systems. Among uncertainty quantification problems, Dr. Wenbo Sun is particularly interested in statistical modeling of engineering system responses that accounts for high dimensionality and complicated correlation structure, and in quantifying uncertainty from a variety of sources simultaneously, such as the inexactness of large-scale computer experiments, process variations, and measurement noise. He is also interested in data-driven decision making that is robust to uncertainty. Specifically, he develops methodologies for anomaly detection and system design optimization, which can be applied to manufacturing process monitoring, distracted driving detection, out-of-distribution object identification, vehicle safety design optimization, and more.
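A minimal sketch of one ingredient in this line of work (toy simulator, not Dr. Sun's methodology): emulating an expensive computer experiment with a Gaussian process and flagging observations that fall outside the emulator's predictive uncertainty.

```python
# Illustrative sketch: Gaussian-process emulation of a cheap stand-in simulator,
# with a simple +/- 3 predictive-SD rule for anomaly detection.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def simulator(x):
    """Stand-in for an expensive engineering simulation."""
    return np.sin(3 * x) + 0.5 * x

rng = np.random.default_rng(0)
X_train = rng.uniform(0, 3, size=(20, 1))
y_train = simulator(X_train).ravel() + rng.normal(0, 0.05, 20)

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X_train, y_train)

# Flag a new measurement as anomalous if it lies outside +/- 3 predictive SDs
x_new, y_new = np.array([[1.5]]), 2.5
mean, sd = gp.predict(x_new, return_std=True)
print("anomaly:", abs(y_new - mean[0]) > 3 * sd[0])
```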

Yixin Wang

Yixin Wang works in the fields of Bayesian statistics, machine learning, and causal inference, with applications to recommender systems, text data, and genetics. She also works on algorithmic fairness and reinforcement learning, often via connections to causality. Her research centers on developing practical and trustworthy machine learning algorithms for large datasets that can enhance scientific understanding and inform daily decision-making. Her research interests lie at the intersection of theory and applications.
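As a small illustration of the causal-inference building blocks behind such methods (simulated data, not her specific estimators), the sketch below estimates an average treatment effect with inverse propensity weighting.

```python
# Illustrative sketch: inverse propensity weighting (IPW) on simulated data
# where the true treatment effect is 2.0.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=(n, 2))                       # observed confounders
propensity = 1 / (1 + np.exp(-(x[:, 0] - 0.5 * x[:, 1])))
t = rng.binomial(1, propensity)                   # treatment assignment
y = 2.0 * t + x[:, 0] + rng.normal(size=n)        # outcome; true effect is 2.0

# Fit the propensity model and form the IPW estimate of the average treatment effect
e_hat = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]
ate_ipw = np.mean(t * y / e_hat) - np.mean((1 - t) * y / (1 - e_hat))
print(f"IPW estimate of the treatment effect: {ate_ipw:.2f}")
```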

Elle O’Brien

My research focuses on building infrastructure for public health and health science research organizations to take advantage of cloud computing, strong software engineering practices, and MLOps (machine learning operations). By equipping biomedical research groups with tools that facilitate automation, better documentation, and portable code, we can improve the reproducibility and rigor of science while scaling up the kinds of data collection and analysis that are possible.

Research topics include:
1. Open source software and cloud infrastructure for research,
2. Software development practices and conventions that work for academic units, like labs or research centers, and
3. The organizational factors that encourage best practices in reproducibility, data management, and transparency

The practice of science is a tug of war between competing incentives: the drive to do a lot fast, and the need to generate reproducible work. As data grow in size, code increases in complexity, and the number of collaborators and institutions involved goes up, it becomes harder to preserve all the “artifacts” needed to understand and recreate your own work. Both technical and cultural solutions will be needed to keep data-centric research rigorous, shareable, and transparent to the broader scientific community.
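As a concrete, if minimal, example of preserving such artifacts (hypothetical file layout and repository setup), the sketch below records input-data hashes, the code version, and the runtime environment in a manifest that can travel with a result.

```python
# Minimal sketch: write a manifest of analysis "artifacts" -- input hashes, code
# version, and environment. The data/ directory and git repo here are assumptions.
import hashlib, json, platform, subprocess, sys
from pathlib import Path

def sha256(path):
    """Content hash of a data file, so later changes are detectable."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

manifest = {
    "python": sys.version,
    "platform": platform.platform(),
    # git commit of the analysis code (assumes the script runs inside a git repo)
    "code_version": subprocess.run(["git", "rev-parse", "HEAD"],
                                   capture_output=True, text=True).stdout.strip(),
    "inputs": {p.name: sha256(p) for p in Path("data").glob("*.csv")},
}
Path("run_manifest.json").write_text(json.dumps(manifest, indent=2))
```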

View MIDAS Faculty Research Pitch, Fall 2021