Lia Corrales

My PhD research focused on identifying the size and mineralogical composition of interstellar dust, from X-ray imaging of dust scattering halos to X-ray spectroscopy of bright objects, which probes absorption from intervening material. Over the course of my PhD I also developed an open source, object-oriented Python package for computing the extinction properties of particles, which lets the user easily swap the scattering physics models and composition properties of dust grains. In many cases, the signal I look for from interstellar dust requires evaluating the observational data at the 1-5% level. This has required me to develop a deep understanding of both the instruments and the counting statistics involved (modern X-ray instruments are photon-counting tools).

My expertise led to a postdoc at MIT, where I developed techniques to obtain high resolution X-ray spectra from low surface brightness (high background) sources imaged with the Chandra X-ray Observatory High Energy Transmission Grating Spectrometer. I pioneered these techniques to extract and analyze the high resolution spectrum of Sgr A*, our Galaxy's central supermassive black hole (SMBH), producing a legacy dataset whose precision will not be surpassed for decades. This dataset will be used to understand why Sgr A* is anomalously inactive, giving us clues to the connection between SMBH activity and galactic evolution. To publish the work, I developed an open source software package, pyXsis (github.com/eblur/pyxsis), to model the low signal-to-noise spectrum of Sgr A* simultaneously with a non-physical parametric model of the background spectrum (Corrales et al., 2020).

As a result of my vocal advocacy for Python-compatible software tools and a modular approach to X-ray data analysis, I became Chair of HEACIT ("High Energy Astrophysics Codes, Interfaces, and Tools"), a new, self-organized working group of X-ray software engineers and early career scientists interested in developing tools for future X-ray observatories. We are working to identify science cases that high energy astronomers find difficult to support with current software libraries, to provide a central, publicly available online forum for tutorials and discussion of those libraries, and to develop a set of best practices for X-ray data analysis.

My research focus is now turning to exoplanet atmospheres, where I hope to measure absorption from molecules and aerosols in the UV. Using UM's access to the Neil Gehrels Swift Observatory, I observe the dip in a star's brightness caused by occultation (transit) by a foreground planet. Transit depths are typically <1%, and telescopes like Swift were not originally designed with this level of precision in mind. As a result, this research depends strongly on robust methods of scientific inference from noisy datasets.
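Much of this work comes down to extracting percent-level signals from photon-counting data. As a rough illustration (with made-up numbers, not Swift data), the sketch below estimates a sub-1% transit depth from simulated photon counts and propagates the Poisson uncertainty:

```python
# Minimal sketch (illustrative values only) of why sub-1% transit depths
# demand careful counting statistics: estimate a transit depth from simulated
# photon counts and propagate the Poisson uncertainty on the mean rates.
import numpy as np

rng = np.random.default_rng(0)

# Simulated photometry: mean counts per exposure, with a 0.8% dip in transit
out_of_transit = rng.poisson(lam=50_000, size=200)
in_transit = rng.poisson(lam=50_000 * (1 - 0.008), size=50)

f_out, f_in = out_of_transit.mean(), in_transit.mean()
depth = 1 - f_in / f_out

# Poisson errors on the mean count rates, propagated through the ratio
sig_out = np.sqrt(f_out / out_of_transit.size)
sig_in = np.sqrt(f_in / in_transit.size)
sig_depth = (f_in / f_out) * np.sqrt((sig_in / f_in) ** 2 + (sig_out / f_out) ** 2)

print(f"depth = {depth:.4%} +/- {sig_depth:.4%}")
```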

As a graduate student, I attended some of the early "Python in Astronomy" workshops. While there, I wrote Jupyter Notebook tutorials that helped launch the Astropy Tutorials project (github.com/astropy/astropy-tutorials), which expanded into Learn Astropy (learn.astropy.org), for which I am a lead developer. Since then, I have also become a leader within the larger Astropy collaboration. For the last several years I have helped develop the Astropy Project governance structure, hired maintainers, organized workshops, and maintained an AAS presence for the Astropy Project and NumFOCUS (the non-profit umbrella organization that works to sustain open source software communities in scientific computing). As a woman of color in a STEM field, I work to clear a path by teaching the skills I have learned along the way to other underrepresented groups in STEM. This year I piloted WoCCode (Women of Color Code), an online network and webinar series for women from minoritized backgrounds to share expertise and support each other in contributing to open source software communities.

Sardar Ansari

I build data science tools to address challenges in medicine and clinical care. Specifically, I apply signal processing, image processing, and machine learning techniques, including deep convolutional and recurrent neural networks and natural language processing, to aid diagnosis, prognosis, and treatment of patients with acute and chronic conditions. In addition, I conduct research on novel approaches to represent clinical data and combine supervised and unsupervised methods to improve model performance and reduce the labeling burden. Another active area of my research is the design, implementation, and utilization of novel wearable devices for non-invasive patient monitoring in the hospital and at home. This includes integrating the information measured by wearables with the data available in electronic health records, including medical codes, waveforms, and images, among others. My research also involves linear, non-linear, and discrete optimization and queuing theory to build new solutions for healthcare logistics planning, including stochastic approximation methods to model complex systems such as dispatch policies for emergency systems with multi-server dispatches, variable server load, and multiple priority levels.
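As a rough illustration of the queuing-theoretic side of this work, the sketch below simulates an emergency-dispatch system with a fixed number of response units; all rates and fleet sizes are illustrative assumptions, not values from the research.

```python
# Minimal discrete-event simulation of calls arriving at random and being
# served by a limited fleet of response units (an M/M/c-style queue).
import heapq
import random

def simulate_dispatch(arrival_rate=1.0, service_rate=0.4, n_units=3,
                      n_calls=10_000, seed=42):
    rng = random.Random(seed)
    # Min-heap of times at which each response unit becomes free
    free_at = [0.0] * n_units
    heapq.heapify(free_at)
    t, waits = 0.0, []
    for _ in range(n_calls):
        t += rng.expovariate(arrival_rate)       # next call arrives
        unit_free = heapq.heappop(free_at)       # earliest-available unit
        start = max(t, unit_free)                # the call may have to wait
        waits.append(start - t)
        service = rng.expovariate(service_rate)  # time on scene and transport
        heapq.heappush(free_at, start + service)
    return sum(waits) / len(waits)

print("mean wait per call:", simulate_dispatch())
```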

Jesse Hamilton

My research focuses on the development of novel Magnetic Resonance Imaging (MRI) technology for imaging the heart. We focus in particular on quantitative imaging techniques, in which the signal intensity at each pixel in an image represents a measurement of an inherent property of a tissue. Much of our research is based on cardiac Magnetic Resonance Fingerprinting (MRF), which is a class of methods for simultaneously measuring multiple tissue properties from one rapid acquisition.

Our group is exploring novel ways to combine physics-based modeling of MRI scans with deep learning algorithms for several purposes. First, we are exploring the use of deep learning to design quantitative MRI scans with improved accuracy and precision. Second, we are developing deep learning approaches for image reconstruction that will allow us to reduce image noise, improve spatial resolution and volumetric coverage, and enable highly accelerated acquisitions to shorten scan times. Third, we are exploring ways of using artificial intelligence to derive physiological motion signals directly from MRI data to enable continuous scanning that is robust to cardiac and breathing motion. In general, we focus on algorithms that are either self-supervised or use training data generated in computer simulations, since the collection of large amounts of training data from human subjects is often impractical when designing novel imaging methods.
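For context, the core idea behind MRF-style dictionary matching can be sketched in a few lines. The signal model below is a deliberately simplified stand-in for the Bloch-equation simulations used in practice, and the parameter grids are illustrative only:

```python
# Minimal sketch of dictionary matching: simulate candidate signal evolutions
# over a grid of (T1, T2) values, then assign each measured voxel the tissue
# properties whose simulated signal correlates best with it.
import numpy as np

def simulate_signal(t1, t2, tr=0.01, n_timepoints=500):
    """Toy signal evolution; real MRF uses a Bloch simulation of the sequence."""
    t = np.arange(n_timepoints) * tr
    return (1 - np.exp(-t / t1)) * np.exp(-t / t2)

# Build a dictionary over a coarse grid of tissue properties (seconds)
t1_grid = np.linspace(0.2, 2.0, 40)
t2_grid = np.linspace(0.02, 0.3, 30)
entries = [(t1, t2) for t1 in t1_grid for t2 in t2_grid]
dictionary = np.array([simulate_signal(t1, t2) for t1, t2 in entries])
dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)

def match(voxel_signal):
    """Return the (T1, T2) whose normalized dictionary entry best matches."""
    v = voxel_signal / np.linalg.norm(voxel_signal)
    best = np.argmax(dictionary @ v)
    return entries[best]

# Example: recover the properties of a noisy simulated voxel
truth = (1.0, 0.08)
voxel = simulate_signal(*truth) + 0.01 * np.random.default_rng(0).normal(size=500)
print("true:", truth, "matched:", match(voxel))
```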

Kathryn Luker

As an expert in molecular imaging of single cell signaling in cancer, I develop integrated systems of molecular, cellular, optical, and custom image processing tools to extract rich data sets for biochemical and behavioral functions in living cells over minutes to days. Data sets composed of thousands to millions of cells enable us to develop predictive models of cellular function through a variety of computational approaches, including ordinary differential equation (ODE), agent-based model (ABM), and inverse reinforcement learning (IRL) modeling.
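As a toy illustration of the ODE-style modeling mentioned above (not a model from this research), the sketch below integrates a simple ligand-receptor binding system:

```python
# Minimal ligand-receptor binding ODE; rate constants and initial amounts
# are illustrative assumptions only.
import numpy as np
from scipy.integrate import solve_ivp

def binding_model(t, y, k_on=0.1, k_off=0.01):
    ligand, receptor, complex_ = y
    bind = k_on * ligand * receptor      # forward binding rate
    unbind = k_off * complex_            # dissociation rate
    return [-bind + unbind, -bind + unbind, bind - unbind]

sol = solve_ivp(binding_model, t_span=(0, 100), y0=[10.0, 5.0, 0.0],
                t_eval=np.linspace(0, 100, 50))
print("bound complex at t=100:", sol.y[2, -1])
```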

J. Trent Alexander

J. Trent Alexander is the Associate Director and a Research Professor at ICPSR in the Institute for Social Research at the University of Michigan. Alexander is a historical demographer and builds social science data infrastructure. He is currently leading the Decennial Census Digitization and Linkage Project (joint with Raj Chetty and Katie Genadek) and ResearchDataGov (joint with Lynette Hoelter). Prior to coming to ICPSR in 2017, Alexander initiated the Census Longitudinal Infrastructure Project at the Census Bureau and managed the Integrated Public Use Microdata Series (IPUMS) at the University of Minnesota.

Gary Luker

We use a variety of quantitative imaging methods, ranging from single cells to clinical studies, to investigate cancer signaling and response to therapy over space and time. We develop image analysis methods to extract data from thousands of single cells over time, along with voxel-wise measurements of imaging parameters. We also use bulk and single-cell RNA sequencing to investigate heterogeneity among cancer cells and changes induced by intercellular interactions. A current goal of our work is to merge RNA sequencing and imaging data to understand cell decision making in cancer. We collaborate with investigators using machine learning and computational modeling approaches to model cell signaling and the resulting behaviors in tumor growth and metastasis.
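As a rough sketch of what per-cell quantification from segmented images can look like (using scikit-image on a synthetic image, not our actual pipeline):

```python
# Minimal per-cell measurement from a labeled mask plus an intensity image.
import numpy as np
from skimage.measure import label, regionprops

rng = np.random.default_rng(0)
intensity = rng.random((128, 128))
mask = intensity > 0.95                 # toy "cells": bright pixels
labels = label(mask)

rows = []
for cell in regionprops(labels, intensity_image=intensity):
    rows.append({
        "cell_id": cell.label,
        "area": cell.area,
        "mean_intensity": cell.mean_intensity,
        "centroid": cell.centroid,
    })
print(f"{len(rows)} cells measured; first:", rows[0] if rows else None)
```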

Lana Garmire

My research interest lies in applying data science for actionable transformation of human health from bench to bedside. Current research focus areas include cutting-edge single-cell sequencing informatics and genomics; precision medicine through integration of multi-omics data types; novel modeling and computational methods for biomarker research; and public health genomics. I apply my biomedical informatics and analytical expertise to study diseases such as cancers, as well as the impact of pregnancy and early-life complications on later-life diseases.

Shaobing Xu

My work lies in the learning, control, and design of autonomous systems, with an emphasis on connected automated vehicles (CAVs). I have been committed to developing robust autonomous vehicles, augmented reality (AR) technology, and V2X systems at Mcity. Highlights include: (1) a robust self-driving algorithm/software stack enabling high-level CAVs; and (2) a data- and AI-driven, sensor-level AR system for efficient and safe CAV testing. These systems have been deployed on the Mcity CAV fleet and the Mcity test track for daily operations. I am interested in using big naturalistic human-driving data to train motion planning and control algorithms for self-driving cars, so that automated cars behave with better roadmanship and thus gain higher acceptance. I am also interested in data-driven, low-uncertainty learning algorithms for object detection, tracking, and fusion, in order to build the perception systems of safety-critical autonomous systems.
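As a toy illustration of learning driving behavior from logged data (a simple behavior-cloning fit on synthetic data, not the Mcity stack):

```python
# Minimal behavior cloning: fit a policy mapping observed state features
# (gap to lead vehicle, relative speed, own speed) to an acceleration command.
# The linear model and synthetic "human" data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 5_000
gap = rng.uniform(5, 80, n)          # distance to lead vehicle [m]
rel_speed = rng.normal(0, 3, n)      # lead speed minus own speed [m/s]
speed = rng.uniform(0, 30, n)        # own speed [m/s]

# Synthetic logged accelerations with noise, standing in for naturalistic data
accel = 0.05 * (gap - 2.0 * speed) + 0.4 * rel_speed + rng.normal(0, 0.3, n)

# Behavior cloning via least squares: accel ~ X @ w
X = np.column_stack([gap, rel_speed, speed, np.ones(n)])
w, *_ = np.linalg.lstsq(X, accel, rcond=None)

def policy(gap, rel_speed, speed):
    return np.dot([gap, rel_speed, speed, 1.0], w)

print("commanded accel for gap=20 m, rel_speed=-2 m/s, speed=15 m/s:",
      policy(20.0, -2.0, 15.0))
```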

Stephan F. Taylor

Stephan F. Taylor is a professor of psychiatry and Associate Chair for Research and Research Regulatory Affairs in the Department of Psychiatry, and an adjunct professor of psychology.

His work uses brain mapping and brain stimulation to study and treat serious mental disorders such as psychosis, refractory depression, and obsessive-compulsive disorder. Data science techniques are applied in the analysis of high-dimensional functional magnetic resonance imaging datasets and meso-scale brain networks, using supervised and unsupervised techniques to interrogate brain-behavior correlations relevant to psychopathological conditions. Clinical-translational work with brain stimulation, primarily transcranial magnetic stimulation, is informed by mapping meso-scale networks to guide treatment of conditions such as depression. Future work seeks to use machine learning to identify treatment predictors and match individual patients to specific treatments.
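As a small illustration of the unsupervised side of such analyses, the sketch below builds a functional connectivity matrix from synthetic ROI time series and clusters regions into putative networks; the data and the choice of k-means are illustrative assumptions only.

```python
# Minimal functional-connectivity clustering on synthetic ROI time series.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_regions, n_timepoints = 50, 300
# Two latent "networks" drive the correlations among regions
latent = rng.normal(size=(2, n_timepoints))
membership = rng.integers(0, 2, n_regions)
ts = latent[membership] + 0.5 * rng.normal(size=(n_regions, n_timepoints))

connectivity = np.corrcoef(ts)   # regions x regions correlation matrix
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(connectivity)
print("recovered network assignments:", labels)
```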

Evan Keller

Our laboratory focuses on (1) the biology of cancer metastasis, especially bone metastasis, including the role of the host microenvironment; and (2) mechanisms of chemoresistance. We search for genes that regulate metastasis and for interactions between the host microenvironment and cancer cells. We are performing single-cell multiomics and spatial analysis to identify rare cell populations and promote precision medicine. Our research methodology uses a combination of molecular, cellular, and animal studies. The majority of our work is highly translational, to ensure its clinical relevance. In terms of data science, we collaborate on applications of both established and novel methodologies for analyzing high-dimensional data; deconvolution of high-dimensional data into a cellular and tissue context; spatial mapping of multiomic data; and heterogeneous data integration.