Lia Corrales

My PhD research focused on identifying the size and mineralogical composition of interstellar dust through X-ray imaging of dust scattering halos and X-ray spectroscopy of bright objects, which reveals absorption from intervening material. Over the course of my PhD I also developed an open source, object-oriented approach to computing the extinction properties of particles in Python, which lets the user easily swap out the scattering physics models and the composition properties of dust grains. In many cases, the signal I look for from interstellar dust requires evaluating the observational data at the 1-5% level. This has required me to develop a deep understanding of both the instruments and the counting statistics (modern X-ray instruments are photon counting tools).

My expertise led me to a postdoc at MIT, where I developed techniques to obtain high resolution X-ray spectra from low surface brightness (high background) sources imaged with the Chandra X-ray Observatory High Energy Transmission Grating Spectrometer. I pioneered these techniques to extract and analyze the high resolution spectrum of Sgr A*, our Galaxy’s central supermassive black hole (SMBH), producing a legacy dataset with a precision that will not be replicated for decades. This dataset will be used to understand why Sgr A* is anomalously inactive, giving us clues to the connection between SMBH activity and galactic evolution. To publish this work, I developed an open source software package, pyXsis (github.com/eblur/pyxsis), to model the low signal-to-noise spectrum of Sgr A* simultaneously with a non-physical parametric model of the background spectrum (Corrales et al., 2020).

As a result of my vocal advocacy for Python-compatible software tools and a modular approach to X-ray data analysis, I became Chair of HEACIT (short for “High Energy Astrophysics Codes, Interfaces, and Tools”), a new, self-organized working group of X-ray software engineers and early career scientists interested in developing tools for future X-ray observatories. We are working to identify science cases that high energy astronomers find difficult to support with current software libraries, to provide a central and publicly available online forum for tutorials and discussion of those libraries, and to develop a set of best practices for X-ray data analysis.

My research focus is now turning to exoplanet atmospheres, where I hope to measure absorption from molecules and aerosols in the UV. Using UM’s access to the Neil Gehrels Swift Observatory, I observe the dip in a star’s brightness caused by the occultation (transit) of the star by a foreground planet. Transit depths are typically <1%, and telescopes like Swift were not originally designed with transit measurements (i.e., this level of precision) in mind, so this research depends strongly on robust methods of scientific inference from noisy datasets.
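
Pulling a sub-1% transit out of data this noisy comes down to careful statistics. The sketch below is a toy illustration using entirely simulated photometry and a box-shaped transit; the cadence, depth, and noise level are made-up numbers, not the actual Swift analysis.

```python
"""Sketch: recovering a sub-1% transit depth from noisy photometry.

All numbers are simulated placeholders for the kind of transit analysis
described above.
"""
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(-0.2, 0.2, 400)          # days from mid-transit
in_transit = np.abs(t) < 0.05            # simple box-shaped transit
flux = 1.0 - 0.008 * in_transit          # true depth: 0.8%
flux += rng.normal(0, 0.005, t.size)     # per-point noise comparable to the signal

# Depth estimate: difference of the out-of-transit and in-transit means,
# with a standard error propagated from the scatter in each subset.
out = flux[~in_transit]
inn = flux[in_transit]
depth = out.mean() - inn.mean()
err = np.sqrt(out.var(ddof=1) / out.size + inn.var(ddof=1) / inn.size)

print(f"depth = {depth*100:.2f}% +/- {err*100:.2f}%")
```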

As a graduate student, I attended some of the early “Python in Astronomy” workshops. While there, I wrote Jupyter Notebook tutorials that helped launch the Astropy Tutorials project (github.com/astropy/astropy-tutorials), which expanded into Learn Astropy (learn.astropy.org), for which I am a lead developer. Since then, I have also become a leader within the larger Astropy collaboration. I have helped develop the Astropy Project governance structure, hired maintainers, organized workshops, and maintained an AAS presence for the Astropy Project and NumFOCUS (the non-profit umbrella organization that works to sustain open source software communities in scientific computing) for the last several years. As a woman of color in a STEM field, I work to clear a path by teaching the skills I have learned along the way to members of other underrepresented groups in STEM. This year I piloted WoCCode (Women of Color Code), an online network and webinar series for women from minoritized backgrounds to share expertise and support each other in contributing to open source software communities.

Sophia Brueckner

Sophia Brueckner is a futurist artist/designer/engineer. Inseparable from computers since the age of two, she believes she is a cyborg. As an engineer at Google, she designed and built products used by millions. At RISD and the MIT Media Lab, she researched the simultaneously empowering and controlling nature of technology with a focus on haptics and social interfaces. Her work has been featured internationally by Artforum, SIGGRAPH, The Atlantic, Wired, the Peabody Essex Museum, Portugal’s National Museum of Contemporary Art, and more. Brueckner is the founder and creative director of Tomorrownaut, a creative studio focusing on speculative futures and sci-fi-inspired prototypes. She is currently an artist-in-residence at Nokia Bell Labs, was previously an artist-in-residence at Autodesk, and is an assistant professor at the University of Michigan teaching Sci-Fi Prototyping, a course combining sci-fi, prototyping, and ethics. Her ongoing objective is to combine her background in art, design, and engineering to inspire a more positive future.

Felipe da Veiga Leprevost

My research concentrates on bioinformatics, proteomics, and data integration. I am particularly interested in mass spectrometry-based proteomics, software development for proteomics, cancer proteogenomics, and transcriptomics. The computational methods and tools previously developed by my colleagues and me, such as PepExplorer, MSFragger, Philosopher, and PatternLab for Proteomics, are among the most widely cited proteome informatics tools and are used by hundreds of laboratories worldwide.

I am also a member of the University of Michigan Proteogenomics Data Analysis Center (UM-PGDAC), part of the NCI’s Clinical Proteomic Tumor Analysis Consortium (CPTAC) initiative, which processes and analyzes hundreds of cancer proteomics samples. UM-PGDAC develops advanced computational infrastructure for the comprehensive, global characterization of genomics, transcriptomics, and proteomics data collected from several human tumor cohorts using NCI-provided biospecimens. Since 2019 I have been working as a bioinformatics data analyst with the University of Michigan Proteomics Resource Facility, which provides state-of-the-art proteomics capabilities to University of Michigan investigators, including Rogel Cancer Center investigators through its role as a Proteomics Shared Resource.

Marie O’Neill

My research interests include the health effects of air pollution, temperature extremes, and climate change (mortality, asthma, hospital admissions, birth outcomes, and cardiovascular endpoints); environmental exposure assessment; and socio-economic influences on health. The data science tools and methodologies I use include geographic information systems, spatio-temporal analysis, epidemiologic study design, and data management.

Omar Jamil Ahmed

The Ahmed lab studies behavioral neural circuits and attempts to repair them when they go awry in neurological disorders. Working with patients and with transgenic rodent models, we focus on how space, time, and speed are encoded by the spatial navigation and memory circuits of the brain. We also study how these same circuits go wrong in Alzheimer’s disease, Parkinson’s disease, and epilepsy. Our research involves the collection of massive volumes of neural data. Within these terabytes of data, we work to identify and understand irregular activity patterns at the sub-millisecond level, which requires us to leverage high performance computing environments and to design custom algorithmic and analytical signal processing solutions. As part of our research, we also discover new ways that the brain encodes information (how neurons encode sequences of space and time, for example), and the algorithms used by these natural neural networks can have important implications for the design of more effective artificial neural networks.
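
A minimal sketch of this flavor of analysis is shown below: detecting a brief high-frequency event in a simulated trace with a band-pass filter and an amplitude threshold. The sampling rate, frequency band, and threshold are hypothetical choices, not the lab’s actual pipeline.

```python
"""Sketch: flagging a brief high-amplitude event in a simulated neural trace.

A toy stand-in for a custom signal-processing step; all parameters are
hypothetical.
"""
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 30_000                        # Hz; high sampling rate gives sub-ms resolution
t = np.arange(0, 2.0, 1 / fs)      # 2 s of data
rng = np.random.default_rng(0)

# Simulated trace: background noise plus a short 200 Hz burst near t = 1 s
trace = rng.normal(0, 1, t.size)
trace += (np.abs(t - 1.0) < 0.025) * 5 * np.sin(2 * np.pi * 200 * t)

# Band-pass filter around the band of interest (here 150-250 Hz)
b, a = butter(3, [150, 250], btype="bandpass", fs=fs)
filtered = filtfilt(b, a, trace)

# Envelope via the Hilbert transform, then a simple z-score threshold
envelope = np.abs(hilbert(filtered))
z = (envelope - envelope.mean()) / envelope.std()
event_samples = np.flatnonzero(z > 4)

# Report the detected event onset with sub-millisecond precision
if event_samples.size:
    print(f"event onset near {t[event_samples[0]] * 1000:.2f} ms")
```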

Kevin Bakker

Kevin’s research focuses on identifying and interpreting the mechanisms responsible for the complex dynamics we observe in ecological and epidemiological systems, using data science and modeling approaches. He is primarily interested in emerging and endemic pathogens, such as SARS-CoV-2, influenza, vampire bat rabies, and childhood infectious diseases such as chickenpox. He uses statistical and mechanistic models to fit, forecast, and occasionally back-cast expected disease dynamics under a range of conditions, such as vaccination or other control mechanisms.
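
As a minimal illustration of a mechanistic model of this kind, the sketch below integrates a simple SIR model with a constant vaccination rate; the parameter values are made up rather than fit to surveillance data.

```python
"""Sketch: a minimal SIR-type mechanistic model with vaccination.

Parameter values are illustrative only; a research model would be fit to
data rather than run with hand-picked numbers.
"""
import numpy as np
from scipy.integrate import solve_ivp


def sir_with_vaccination(t, y, beta, gamma, nu):
    """dS/dt, dI/dt, dR/dt with transmission beta, recovery gamma, vaccination nu."""
    s, i, r = y
    ds = -beta * s * i - nu * s
    di = beta * s * i - gamma * i
    dr = gamma * i + nu * s
    return [ds, di, dr]


# Illustrative parameters: R0 = beta/gamma = 2.5, 0.1% of susceptibles vaccinated per day
beta, gamma, nu = 0.5, 0.2, 0.001
y0 = [0.999, 0.001, 0.0]                        # proportions of the population
sol = solve_ivp(sir_with_vaccination, (0, 365), y0,
                args=(beta, gamma, nu), t_eval=np.arange(0, 366, 7))

peak_week = sol.t[np.argmax(sol.y[1])] / 7
print(f"epidemic peaks around week {peak_week:.0f}")
```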

Sara Lafia

I am a Research Fellow at the Inter-university Consortium for Political and Social Research (ICPSR) at the University of Michigan. My research is currently supported by an NSF project, Developing Evidence-based Data Sharing and Archiving Policies, where I am analyzing curation activities, automatically detecting data citations, and contributing to metrics for tracking the impact of data reuse. I hold a Ph.D. in Geography from UC Santa Barbara and have expertise in GIScience, spatial information science, and urban planning. My interests also include the Semantic Web, innovative GIS education, and the science of science. I have experience deploying geospatial applications, designing linked data models, and developing visualizations to support data discovery.

Xianglei Huang

Prof. Huang specializes in satellite remote sensing, atmospheric radiation, and climate modeling. Optimization, pattern analysis, and dimensionality reduction are used extensively in his research to explain observed spectrally resolved infrared spectra, estimate geophysical parameters from such hyperspectral observations, and deduce human influence on the climate in the presence of the climate system’s natural variability. His group has also developed a data-driven, deep-learning solar forecasting model for use in the renewable energy sector.
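
As a toy illustration of the dimensionality reduction step, the sketch below compresses synthetic “hyperspectral” radiances onto a few principal components; the simulated spectra and component count are placeholders, not the group’s actual analysis.

```python
"""Sketch: dimensionality reduction of spectrally resolved radiances with PCA.

The synthetic spectra are random placeholders standing in for hyperspectral
infrared observations; only the workflow is illustrative.
"""
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_scenes, n_channels = 500, 2000

# Fake hyperspectral radiances: a few smooth underlying modes plus noise
modes = np.cumsum(rng.normal(size=(3, n_channels)), axis=1)
weights = rng.normal(size=(n_scenes, 3))
spectra = weights @ modes + 0.1 * rng.normal(size=(n_scenes, n_channels))

# Compress each 2000-channel spectrum to a handful of principal components
pca = PCA(n_components=5).fit(spectra)
scores = pca.transform(spectra)

print("explained variance:", np.round(pca.explained_variance_ratio_, 3))
print("compressed shape:", scores.shape)   # (500, 5)
```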

Rahul Ladhania

Rahul Ladhania is an Assistant Professor of Health Informatics in the Department of Health Management & Policy at the University of Michigan School of Public Health, with a secondary (courtesy) appointment in the Department of Biostatistics at SPH. Rahul’s research is in the area of causal inference and machine learning in public and behavioral health. A large body of his work uses statistical machine learning methods to estimate personalized treatment rules and the heterogeneous effects of policy, digital, and behavioral interventions on human behavior and health outcomes in complex experimental and observational settings.
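
One common way to estimate such heterogeneous effects is a T-learner, which fits separate outcome models to treated and control units. The sketch below applies it to simulated data and is only illustrative of the general approach, not the specific methods used in these projects.

```python
"""Sketch: estimating heterogeneous treatment effects with a T-learner.

Simulated data and a simple T-learner stand in for the statistical machine
learning methods mentioned above; this is not the actual project pipeline.
"""
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 5_000
X = rng.normal(size=(n, 3))                     # covariates (e.g., age, baseline activity)
treat = rng.integers(0, 2, size=n)              # randomized assignment
tau = 1.0 + 0.5 * X[:, 0]                       # true effect varies with the first covariate
y = X[:, 1] + tau * treat + rng.normal(size=n)  # outcome

# T-learner: fit separate outcome models for treated and control units
m1 = GradientBoostingRegressor().fit(X[treat == 1], y[treat == 1])
m0 = GradientBoostingRegressor().fit(X[treat == 0], y[treat == 0])

# Estimated individual-level effect is the difference in predicted outcomes
cate = m1.predict(X) - m0.predict(X)
print("mean estimated effect:", cate.mean().round(2), "(true mean is about 1.0)")
```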

Rahul co-leads the Machine Learning team at the Behavior Change For Good Initiative (Penn), where he is working on two ‘mega-studies’ (very large multi-arm randomized trials): one in partnership with a national fitness chain, to estimate the effects of behavioral interventions on promoting gym-visit habit formation; and the other in partnership with two large Mid-Atlantic health systems and a national pharmacy chain, to estimate the effects of text-based interventions on increasing flu vaccination rates. His other projects involve partnerships with step-counting apps and mobile-based games to learn user behavior patterns and to design and evaluate interventions and their heterogeneous effects on user behavior.