Elle O’Brien


My research focuses on building infrastructure for public health and health science research organizations to take advantage of cloud computing, strong software engineering practices, and MLOps (machine learning operations). By equipping biomedical research groups with tools that facilitate automation, better documentation, and portable code, we can improve the reproducibility and rigor of science while scaling up the kinds of data collection and analysis that are possible.

Research topics include:
1. Open source software and cloud infrastructure for research,
2. Software development practices and conventions that work for academic units, like labs or research centers, and
3. The organizational factors that encourage best practices in reproducibility, data management, and transparency

The practice of science is a tug of war between competing incentives: the drive to do a lot fast, and the need to generate reproducible work. As data grows in size, code increases in complexity, and the number of collaborators and institutions involved goes up, it becomes harder to preserve all the “artifacts” needed to understand and recreate your own work. Both technical and cultural solutions will be needed to keep data-centric research rigorous, shareable, and transparent to the broader scientific community.
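To make “preserving artifacts” concrete, here is a minimal, hypothetical Python sketch of one piece of that idea: recording the exact inputs, outputs, code version, and environment behind a result so it can be audited or recreated later. The `record_artifacts` helper and file paths are invented for illustration; this is not a description of any particular lab’s tooling, and in practice dedicated tools (workflow managers, containers, data version control) fill this role.

```python
# Hypothetical sketch: capture the "artifacts" behind one analysis run.
import hashlib, json, platform, subprocess, sys
from pathlib import Path

def sha256(path: Path) -> str:
    """Content hash so a data file can be verified later."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def record_artifacts(inputs, outputs, manifest="run_manifest.json"):
    record = {
        "python": sys.version,
        "platform": platform.platform(),
        # Code version, assuming the analysis lives in a git repository
        "git_commit": subprocess.run(
            ["git", "rev-parse", "HEAD"], capture_output=True, text=True
        ).stdout.strip(),
        "inputs": {str(p): sha256(Path(p)) for p in inputs},
        "outputs": {str(p): sha256(Path(p)) for p in outputs},
    }
    Path(manifest).write_text(json.dumps(record, indent=2))

# Illustrative usage (paths are placeholders):
# record_artifacts(["data/raw.csv"], ["results/figure1.png"])
```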


Lia Corrales


My PhD research focused on identifying the size and mineralogical composition of interstellar dust through X-ray imaging of dust scattering halos and X-ray spectroscopy of bright objects, which reveals absorption from intervening material. Over the course of my PhD I also developed an open source, object-oriented approach to computing the extinction properties of particles in Python, which lets the user easily change the scattering physics models and composition properties of dust grains. In many cases, the signal I look for from interstellar dust requires evaluating the observational data at the 1-5% level. This has required me to develop a deep understanding of both the instruments and the counting statistics (because modern X-ray instruments are photon-counting tools).

My expertise led me to a postdoc at MIT, where I developed techniques to obtain high-resolution X-ray spectra from low surface brightness (high background) sources imaged with the Chandra X-ray Observatory High Energy Transmission Grating Spectrometer. I pioneered these techniques to extract and analyze the high-resolution spectrum of Sgr A*, our Galaxy’s central supermassive black hole (SMBH), producing a legacy dataset with a precision that will not be replicated for decades. This dataset will be used to understand why Sgr A* is anomalously inactive, giving us clues to the connection between SMBH activity and galactic evolution. To publish this work, I developed an open source software package, pyXsis (github.com/eblur/pyxsis), to model the low signal-to-noise spectrum of Sgr A* simultaneously with a non-physical parametric model of the background spectrum (Corrales et al., 2020).

As a result of my vocal advocacy for Python-compatible software tools and a modular approach to X-ray data analysis, I became Chair of HEACIT (High Energy Astrophysics Codes, Interfaces, and Tools), a new, self-organized working group of X-ray software engineers and early career scientists interested in developing tools for future X-ray observatories. We are working to identify science cases that high energy astronomers find difficult to support with current software libraries, provide a central, publicly available online forum for tutorials and discussion of current software libraries, and develop a set of best practices for X-ray data analysis.

My research focus is now turning to exoplanet atmospheres, where I hope to measure absorption from molecules and aerosols in the UV. Using UM’s access to the Neil Gehrels Swift Observatory, I work to observe the dip in a star’s brightness caused by occultation (transit) from a foreground planet. Transit depths are typically <1%, and telescopes like Swift were not originally designed with transit measurements (i.e., this level of precision) in mind. As a result, this research depends strongly on robust methods of scientific inference from noisy datasets.
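As a rough illustration of what fitting a faint source jointly with a non-physical parametric background can look like, the sketch below uses NumPy and SciPy with a Poisson (photon-counting) likelihood. It is a generic example, not the pyXsis API: the power-law source, polynomial background, energy grid, and simulated counts are all invented for the illustration.

```python
# Generic sketch: joint fit of a faint source plus a smooth parametric
# background to photon-counting data, via a Poisson likelihood.
import numpy as np
from scipy.optimize import minimize

energy = np.linspace(1.0, 9.0, 200)                 # keV grid (illustrative)
counts = np.random.poisson(5, size=energy.size)     # stand-in observed counts

def model(params, e):
    norm, index, b0, b1, b2 = params
    source = norm * e ** (-index)                   # toy power-law source
    background = b0 + b1 * e + b2 * e ** 2          # non-physical background
    return np.clip(source + background, 1e-6, None)

def neg_log_likelihood(params):
    mu = model(params, energy)
    # Poisson log-likelihood (constant term dropped), appropriate for
    # photon-counting detectors
    return np.sum(mu - counts * np.log(mu))

fit = minimize(neg_log_likelihood, x0=[10.0, 2.0, 1.0, 0.0, 0.0],
               method="Nelder-Mead")
print(fit.x)   # best-fit source and background parameters
```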


As a graduate student, I attended some of the early “Python in Astronomy” workshops. While there, I wrote Jupyter Notebook tutorials that helped launch the Astropy Tutorials project (github.com/astropy/astropy-tutorials), which expanded into Learn Astropy (learn.astropy.org), for which I am a lead developer. Since then, I have also become a leader within the larger Astropy collaboration. I have helped develop the Astropy Project governance structure, hired maintainers, organized workshops, and maintained an AAS presence for the Astropy Project and NumFOCUS (the non-profit umbrella organization that works to sustain open source software communities in scientific computing) for the last several years. As a woman of color in a STEM field, I work to clear a path for other underrepresented groups in STEM by teaching them the skills I have learned along the way. This year I piloted WoCCode (Women of Color Code), an online network and webinar series for women from minoritized backgrounds to share expertise and support each other in contributing to open source software communities.

Jodyn Platt


Our team leads research on the Ethical, Legal, and Social Implications (ELSI) of learning health systems and related enterprises. Our research uses mixed methods to understand the policies and practices that make data science methods (data collection and curation, AI, computable algorithms) trustworthy for patients, providers, and the public. Our work engages multiple stakeholders, including providers and health systems as well as the general public and minoritized communities, on issues such as AI-enabled clinical decision support, data sharing and privacy, and consent for data use in precision oncology.

Ben Green


Ben studies the social and political impacts of government algorithms. This work falls into several categories. First, evaluating how people make decisions in collaboration with algorithms: this involves developing machine learning algorithms and studying how people use them in public sector prediction and decision settings. Second, studying the ethical and political implications of government algorithms: much of this work draws on science and technology studies (STS) and legal theory to interrogate topics such as algorithmic fairness, smart cities, and criminal justice risk assessments. Third, developing algorithms for public sector applications. In addition to his academic research, Ben spent a year developing data analytics tools as a data scientist for the City of Boston.

Ayumi Fujisaki-Manome


Fujisaki-Manome’s research program aims to improve the predictability of hazardous weather, ice, and lake/ocean events in cold regions in order to support preparedness and resilience in coastal communities, and to improve the usability of forecast products by working with stakeholders. The main question her research addresses is: what are the impacts of interactions between ice and oceans, and between ice and lakes, on larger-scale phenomena such as climate, weather, storm surges, and sea/lake ice melting? She primarily uses numerical geophysical modeling and machine learning to address this question, and scientific findings from the research feed back into the models to improve their predictive skill. Her work has focused on applications to the Great Lakes, Alaska’s coasts, the Arctic Ocean, and the Sea of Okhotsk.

Areal fraction of ice cover in the Great Lakes in January 2018 modeled by the unstructured grid ice-hydrodynamic numerical model.

J. Alex Halderman


My research focuses on computer security and privacy, with an emphasis on problems that broadly impact society and public policy. Topics that interest me include software security, network security, data privacy, anonymity, election cybersecurity, censorship resistance, computer forensics, ethics, and cybercrime. I’m also interested in the interaction of technology with politics and international affairs.

Sophia Brueckner


Sophia Brueckner is a futurist artist/designer/engineer. Inseparable from computers since the age of two, she believes she is a cyborg. As an engineer at Google, she designed and built products used by millions. At RISD and the MIT Media Lab, she researched the simultaneously empowering and controlling nature of technology with a focus on haptics and social interfaces. Her work has been featured internationally by Artforum, SIGGRAPH, The Atlantic, Wired, the Peabody Essex Museum, Portugal’s National Museum of Contemporary Art, and more. Brueckner is the founder and creative director of Tomorrownaut, a creative studio focusing on speculative futures and sci-fi-inspired prototypes. She is currently an artist-in-residence at Nokia Bell Labs, was previously an artist-in-residence at Autodesk, and is an assistant professor at the University of Michigan teaching Sci-Fi Prototyping, a course combining sci-fi, prototyping, and ethics. Her ongoing objective is to combine her background in art, design, and engineering to inspire a more positive future.

Todd Hollon


A major focus of the MLiNS lab is to combine stimulated Raman histology (SRH), a rapid, label-free optical imaging method, with deep learning and computer vision techniques to discover the molecular, cellular, and microanatomic features of skull base and malignant brain tumors. We are using SRH in our operating rooms to improve the speed and accuracy of brain tumor diagnosis. Our group has focused on deep learning-based computer vision methods for automated image interpretation, intraoperative diagnosis, and tumor margin delineation. Our work culminated in a multicenter, prospective clinical trial, which demonstrated that AI interpretation of SRH images was equivalent in diagnostic accuracy to pathologist interpretation of conventional histology. We were able to show, for the first time, that a deep neural network can learn recognizable and interpretable histologic image features (e.g., tumor cellularity, nuclear morphology, infiltrative growth patterns) in order to make a diagnosis. Our future work is directed at going beyond human-level interpretation toward identifying molecular/genetic features, single-cell classification, and predicting patient prognosis.
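As a loose illustration of this kind of approach (not the MLiNS lab’s actual pipeline), the sketch below fine-tunes a standard CNN to classify image patches into a few diagnostic classes; the number of classes, the random stand-in “patches,” and the hyperparameters are all placeholders, and a recent torchvision (weights API) is assumed.

```python
# Hypothetical sketch: patch-level CNN classifier for histology-style images.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3  # e.g., tumor / normal / non-diagnostic (illustrative only)

model = models.resnet18(weights=None)                 # backbone CNN
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a random stand-in batch of patches
patches = torch.randn(8, 3, 224, 224)                 # fake image patches
labels = torch.randint(0, NUM_CLASSES, (8,))
optimizer.zero_grad()
loss = criterion(model(patches), labels)
loss.backward()
optimizer.step()

# At inference, patch-level predictions can be aggregated (here, averaged)
# into a single probability vector for intraoperative feedback.
probs = torch.softmax(model(patches), dim=1).mean(dim=0)
```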

Felipe da Veiga Leprevost


My research concentrates on bioinformatics, proteomics, and data integration. I am particularly interested in mass spectrometry-based proteomics, software development for proteomics, cancer proteogenomics, and transcriptomics. The computational methods and tools previously developed by my colleagues and me, such as PepExplorer, MSFragger, Philosopher, and PatternLab for Proteomics, are among the most widely cited proteome informatics tools and are used by hundreds of laboratories worldwide.

I am also a member of the University of Michigan Proteogenomics Data Analysis Center (UM-PGDAC), part of the NCI’s Clinical Proteomic Tumor Analysis Consortium (CPTAC) initiative, processing and analyzing hundreds of cancer proteomics samples. UM-PGDAC develops advanced computational infrastructure for comprehensive and global characterization of genomics, transcriptomics, and proteomics data collected from several human tumor cohorts using NCI-provided biospecimens. Since 2019 I have been working as a bioinformatics data analyst with the University of Michigan Proteomics Resource Facility, which provides state-of-the-art proteomics capabilities to University of Michigan investigators, including Rogel Cancer Center investigators, for whom it serves as the Proteomics Shared Resource.

Wei Lu


Dr. Lu brings expertise in machine learning, particularly in integrating human knowledge into machine learning and in explainable machine learning. He has applied machine learning to a range of domains, such as autonomous driving and the optimized design and control of energy storage systems.