Photograph of Alison Davis Rabosky

Alison Davis Rabosky


Our research group studies how and why an organism’s traits (“phenotypes”) evolve in natural populations. Explaining the mechanisms that generate and regulate patterns of phenotypic diversity is a major goal of evolutionary biology: why do we see rapid shifts to strikingly new and distinct character states, and how stable are these evolutionary transitions across space and time? To answer these questions, we generate and analyze high-throughput “big data” on both genomes and phenotypes across the 18,000 species of reptiles and amphibians worldwide. We then use the statistical tools of phylogenetic comparative analysis, geometric morphometrics of 3D anatomy generated from CT scans, and genome annotation and comparative transcriptomics to understand the integrated trait correlations that create complex phenotypes. Currently, we are using machine learning and neural networks to study the color patterns of animals vouchered into biodiversity collections and to test hypotheses about the ecological causes and evolutionary consequences of phenotypic innovation. We are especially passionate about the effective and accurate visualization of large-scale multidimensional datasets, and we prioritize training in both best practices and new innovations in quantitative data display.
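As a loose illustration of the kind of neural-network pipeline described above (the class labels, image size, and data below are hypothetical placeholders, not the lab's actual models), a minimal convolutional classifier for specimen color-pattern categories might look like this:

    # Minimal sketch: a small convolutional classifier for specimen color-pattern
    # categories (e.g., "banded" vs. "blotched" vs. "uniform"). Class names,
    # image size, and the input batch are hypothetical placeholders.
    import torch
    import torch.nn as nn

    class ColorPatternNet(nn.Module):
        def __init__(self, n_classes=3):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * 16 * 16, n_classes)

        def forward(self, x):              # x: (batch, 3, 64, 64) RGB crops of specimens
            h = self.features(x)
            return self.classifier(h.flatten(1))

    model = ColorPatternNet()
    dummy = torch.randn(8, 3, 64, 64)      # stand-in for a batch of specimen photographs
    logits = model(dummy)                  # (8, 3) class scores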

Photograph of Nate Sanders

Nate Sanders


My research interests are broad, but generally center on the causes and consequences of biodiversity loss at local, regional, and global scales, with an explicit focus on global change drivers. Our work has been published in Science, Nature, Science Advances, Global Change Biology, PNAS, AREES, TREE, and Ecology Letters, among other journals. We are especially interested in using AI and machine learning to explore broad-scale patterns of biodiversity and phenotypic variation, mostly in ants.

Xiaoquan William Wen


Xiaoquan (William) Wen is an Associate Professor of Biostatistics. He received his PhD in Statistics from the University of Chicago in 2011 and joined the faculty at the University of Michigan in the same year. His research centers on developing Bayesian and computational statistical methods to answer interesting scientific questions arising from genetics and genomics.

On the applied side, he is particularly interested in seeking statistically sound and computationally efficient solutions to scientific problems in genetics and functional genomics.
Quantifying tissue-specific expression quantitative trait loci (eQTLs) via Bayesian model comparison
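As a rough illustration of the model-comparison idea in the figure above (a simplified stand-in, not Prof. Wen's actual method), the sketch below scores evidence for an eQTL in each tissue by approximating a log Bayes factor with a BIC comparison on simulated data:

    # Minimal sketch: approximate Bayes factors for a genotype effect on expression
    # in each tissue, using BIC as a rough stand-in for Bayesian model comparison.
    # Data are simulated placeholders.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 300
    genotype = rng.integers(0, 3, size=n).astype(float)   # 0/1/2 allele counts

    def bic(y, x=None):
        """BIC of an ordinary least-squares fit of y on x (with an intercept)."""
        design = np.ones((len(y), 1)) if x is None else np.column_stack([np.ones(len(y)), x])
        beta, *_ = np.linalg.lstsq(design, y, rcond=None)
        rss = np.sum((y - design @ beta) ** 2)
        return len(y) * np.log(rss / len(y)) + design.shape[1] * np.log(len(y))

    for tissue, effect in [("liver", 0.5), ("brain", 0.0)]:
        expr = effect * genotype + rng.normal(size=n)      # simulated expression levels
        log_bf = 0.5 * (bic(expr) - bic(expr, genotype))   # BIC approximation to a log Bayes factor
        print(f"{tissue}: approximate log Bayes factor for an eQTL = {log_bf:.2f}")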

Ivy F. Tso


My lab researches how the human brain processes social and affective information and how these processes are affected in psychiatric disorders, especially schizophrenia and bipolar disorder. We use behavioral, electrophysiological (EEG), neuroimaging (functional MRI), eye tracking, brain stimulation (TMS, tACS), and computational methods in our studies. One main focus of our work is building and validating computational models based on intensive, high-dimensional subject-level behavior and brain data to explain clinical phenomena, parse mechanisms, and predict patient outcomes. The goal is to improve diagnostic and prognostic assessment, and to develop personalized treatments.

Brain activation (in parcellated map) during social and face processing.
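As a toy illustration of fitting a computational model to subject-level behavior (a hypothetical face-emotion discrimination task, not the lab's actual pipeline), the sketch below estimates a psychometric threshold and slope by maximum likelihood:

    # Minimal sketch: maximum-likelihood fit of a logistic psychometric model to
    # trial-level choices in a hypothetical face-emotion discrimination task.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    intensity = rng.uniform(-1, 1, size=400)               # morph level of the face stimulus
    true_threshold, true_slope = 0.1, 4.0
    p_true = 1.0 / (1.0 + np.exp(-true_slope * (intensity - true_threshold)))
    choice = rng.binomial(1, p_true)                        # simulated "emotional" responses

    def neg_log_lik(params):
        threshold, slope = params
        p = 1.0 / (1.0 + np.exp(-slope * (intensity - threshold)))
        p = np.clip(p, 1e-9, 1 - 1e-9)
        return -np.sum(choice * np.log(p) + (1 - choice) * np.log(1 - p))

    fit = minimize(neg_log_lik, x0=[0.0, 1.0])
    print("estimated threshold and slope:", fit.x)          # subject-level parameters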

Qing Qu


His research interests lie at the intersection of signal processing, data science, machine learning, and numerical optimization. He is particularly interested in computational methods for learning low-complexity models from high-dimensional data, leveraging tools from machine learning, numerical optimization, and high-dimensional geometry, with applications in imaging sciences, scientific discovery, and healthcare. More recently, he has become interested in understanding deep networks through the lens of low-dimensional modeling.
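As a concrete instance of learning a low-complexity model from high-dimensional data, the sketch below recovers a sparse vector from underdetermined linear measurements with the iterative soft-thresholding algorithm (ISTA); the problem sizes and data are synthetic placeholders:

    # Minimal sketch: ISTA for the lasso problem  min_x 0.5*||Ax - y||^2 + lam*||x||_1,
    # recovering a sparse (low-complexity) signal from high-dimensional measurements.
    import numpy as np

    rng = np.random.default_rng(0)
    m, n, k = 80, 200, 5                        # measurements, ambient dimension, sparsity
    A = rng.normal(size=(m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
    y = A @ x_true

    lam = 0.01
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant of the gradient
    x = np.zeros(n)
    for _ in range(500):
        grad = A.T @ (A @ x - y)
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft-thresholding

    print("relative recovery error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))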

Lubomir Hadjiyski


Dr. Hadjiyski’s research interests include computer-aided diagnosis, artificial intelligence (AI), machine learning, predictive models, image processing and analysis, medical imaging, and control systems. His current research involves the design of decision support systems for the detection and diagnosis of cancer in different organs, and quantitative analysis of integrated multimodality radiomics, histopathology, and molecular biomarkers for treatment response monitoring using AI and machine learning techniques. He also studies the effect of decision support systems on physicians’ clinical performance.
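As a schematic illustration only (synthetic features and labels, not a clinical system or Dr. Hadjiyski's models), a minimal decision-support classifier over radiomic features might be evaluated like this:

    # Minimal sketch: map stand-in radiomic features of a lesion to a malignancy
    # label and summarize performance with cross-validated AUC. Synthetic data only.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_lesions, n_features = 200, 12
    X = rng.normal(size=(n_lesions, n_features))              # stand-in radiomic features
    w = rng.normal(size=n_features)
    y = (X @ w + rng.normal(size=n_lesions) > 0).astype(int)  # stand-in reference labels

    clf = LogisticRegression(max_iter=1000)
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
    print("cross-validated AUC:", auc.mean().round(3))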

Gen Li


Dr. Gen Li is an Assistant Professor in the Department of Biostatistics. He is devoted to developing new statistical methods for analyzing complex biomedical data, including multi-way tensor array data, multi-view data, and compositional data. His methodological research interests include dimension reduction, predictive modeling, association analysis, and functional data analysis. He also has research interests in scientific domains including microbiome and genomics.

Novel tree-guided regularization methods can identify important microbial features at different taxonomic ranks that are predictive of the clinical outcome.
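As a simplified stand-in for this kind of analysis (not the tree-guided method itself; data are synthetic), the sketch below applies a centered log-ratio transform to compositional microbiome counts and fits a lasso-penalized model predicting a clinical outcome:

    # Minimal sketch: CLR transform of compositional microbiome counts, then a
    # lasso-penalized regression that selects predictive taxa. Synthetic data only.
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    counts = rng.poisson(20, size=(100, 30)) + 1                    # taxa counts (pseudocount added)
    comp = counts / counts.sum(axis=1, keepdims=True)               # compositions sum to 1 per sample
    clr = np.log(comp) - np.log(comp).mean(axis=1, keepdims=True)   # centered log-ratio transform

    outcome = clr[:, 0] - clr[:, 1] + rng.normal(scale=0.5, size=100)  # synthetic clinical outcome
    model = Lasso(alpha=0.05).fit(clr, outcome)
    print("taxa selected as predictive:", np.flatnonzero(model.coef_))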

J.J. Prescott


Broadly, I study legal decision making, including decisions related to crime and employment. I typically use large social science databases, but I also collect my own data using technology or surveys.

Lia Corrales


My PhD research focused on identifying the size and mineralogical composition of interstellar dust, using techniques ranging from X-ray imaging of dust scattering halos to X-ray spectroscopy of bright objects, which probes absorption from intervening material. Over the course of my PhD I also developed an open source, object-oriented approach in Python to computing the extinction properties of dust particles, which allows the user to easily change the scattering physics models and composition properties of the grains. In many cases, the signal I look for from interstellar dust requires evaluating the observational data at the 1-5% level. This has required me to develop a deep understanding of both the instrument and the counting statistics (because modern-day X-ray instruments are photon counting tools).

My expertise led me to a postdoc at MIT, where I developed techniques to obtain high resolution X-ray spectra from low surface brightness (high background) sources imaged with the Chandra X-ray Observatory High Energy Transmission Grating Spectrometer. I pioneered these techniques in order to extract and analyze the high resolution spectrum of Sgr A*, our Galaxy’s central supermassive black hole (SMBH), producing a legacy dataset with a precision that will not be matched for decades. This dataset will be used to understand why Sgr A* is anomalously inactive, giving us clues to the connection between SMBH activity and galactic evolution. To publish this work, I developed an open source software package, pyXsis (github.com/eblur/pyxsis), to model the low signal-to-noise spectrum of Sgr A* simultaneously with a non-physical parametric model of the background spectrum (Corrales et al., 2020). As a result of my vocal advocacy for Python-compatible software tools and a modular approach to X-ray data analysis, I became Chair of HEACIT (short for “High Energy Astrophysics Codes, Interfaces, and Tools”), a new self-appointed working group of X-ray software engineers and early career scientists interested in developing tools for future X-ray observatories. We are working to identify science cases that high energy astronomers find difficult to support with current software libraries, provide a central, publicly available online forum for tutorials and discussion of those libraries, and develop a set of best practices for X-ray data analysis.

My research focus is now turning to exoplanet atmospheres, where I hope to measure absorption from molecules and aerosols in the UV. Utilizing UM access to the Neil Gehrels Swift Observatory, I work to observe the dip in a star’s brightness caused by occultation (transit) from a foreground planet. Transit depths are typically <1%, and telescopes like Swift were not originally designed with transit measurements (i.e., this level of precision) in mind. As a result, this research strongly depends on robust methods of scientific inference from noisy datasets.
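The transit measurement described above comes down to detecting a fractional dimming of delta = (R_planet / R_star)^2, well under 1%. A minimal sketch, with purely illustrative numbers, of recovering such a dip from a noisy light curve:

    # Minimal sketch: transit depth as the fractional flux drop, estimated from a
    # simulated noisy light curve. All numbers are illustrative placeholders.
    import numpy as np

    r_planet_over_r_star = 0.09                      # roughly a Jupiter around a Sun-like star
    depth = r_planet_over_r_star ** 2                # fractional flux drop during transit
    print(f"expected transit depth: {depth:.4f} ({100 * depth:.2f}%)")

    rng = np.random.default_rng(0)
    time = np.linspace(-0.1, 0.1, 500)               # days from mid-transit
    in_transit = np.abs(time) < 0.05                 # simple box-shaped transit
    flux = 1.0 - depth * in_transit + rng.normal(scale=0.002, size=time.size)
    depth_hat = flux[~in_transit].mean() - flux[in_transit].mean()
    print(f"recovered depth from the noisy light curve: {depth_hat:.4f}")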


As a graduate student, I attended some of the early “Python in Astronomy” workshops. While there, I wrote Jupyter Notebook tutorials that helped launch the Astropy Tutorials project (github.com/astropy/astropy-tutorials), which expanded to Learn Astropy (learn.astropy.org), for which I am a lead developer. Since then, I have also become a leader within the larger Astropy collaboration. I have helped develop the Astropy Project governance structure, hired maintainers, organized workshops, and maintained an AAS presence for the Astropy Project and NumFocus (the non-profit umbrella organization that works to sustain open source software communities in scientific computing) for the last several years. As a woman of color in a STEM field, I work to clear a path by teaching the skills I have learned along the way to other underrepresented groups in STEM. This year I piloted WoCCode (Women of Color Code), an online network and webinar series for women from minoritized backgrounds to share expertise and support each other in contributing to open source software communities.

Xianglei Huang


Prof. Huang specializes in satellite remote sensing, atmospheric radiation, and climate modeling. Optimization, pattern analysis, and dimension reduction are used extensively in his research to explain observed spectrally resolved infrared spectra, estimate geophysical parameters from such hyperspectral observations, and deduce human influence on the climate in the presence of natural variability of the climate system. His group has also developed a deep-learning model for data-driven solar forecasting for use in the renewable energy sector.
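As a minimal sketch of the kind of dimension reduction applied to hyperspectral infrared observations (synthetic spectra, illustrative only), principal components can be extracted with an SVD:

    # Minimal sketch: principal component analysis of synthetic hyperspectral
    # radiances, the kind of compression used before retrieving geophysical
    # parameters or fingerprinting climate signals.
    import numpy as np

    rng = np.random.default_rng(0)
    n_scenes, n_channels = 500, 1000
    spectra = (rng.normal(size=(n_scenes, 3)) @ rng.normal(size=(3, n_channels))
               + 0.05 * rng.normal(size=(n_scenes, n_channels)))   # ~3 underlying modes + noise

    anomalies = spectra - spectra.mean(axis=0)
    U, s, Vt = np.linalg.svd(anomalies, full_matrices=False)
    explained = (s ** 2) / np.sum(s ** 2)
    print("variance explained by the first 3 PCs:", explained[:3].sum().round(3))
    scores = anomalies @ Vt[:3].T                    # compressed representation of each scene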