Elle O’Brien

My research focuses on building infrastructure that lets public health and health science research organizations take advantage of cloud computing, strong software engineering practices, and MLOps (machine learning operations). By equipping biomedical research groups with tools that facilitate automation, better documentation, and portable code, we can improve the reproducibility and rigor of science while scaling up the kinds of data collection and analysis that are possible.

Research topics include:
1. Open source software and cloud infrastructure for research,
2. Software development practices and conventions that work for academic units, like labs or research centers, and
3. The organizational factors that encourage best practices in reproducibility, data management, and transparency

The practice of science is a tug of war between competing incentives: the drive to produce results quickly, and the need to generate reproducible work. As data grows in size, code increases in complexity, and the number of collaborators and institutions involved goes up, it becomes harder to preserve all the “artifacts” needed to understand and recreate your own work. Both technical and cultural solutions will be needed to keep data-centric research rigorous, shareable, and transparent to the broader scientific community.

Lia Corrales

My PhD research focused on identifying the size and mineralogical composition of interstellar dust, using techniques ranging from X-ray imaging of dust scattering halos to X-ray spectroscopy of bright objects to study absorption from intervening material. Over the course of my PhD I also developed an open source, object-oriented Python package for computing the extinction properties of particles, which allows the user to easily change the scattering physics models and composition properties of dust grains. In many cases, the signal I look for from interstellar dust requires evaluating the observational data at the 1-5% level. This has required me to develop a deep understanding of both the instruments and the counting statistics (because modern-day X-ray instruments are photon counting tools). My expertise led me to a postdoc at MIT, where I developed techniques to obtain high resolution X-ray spectra from low surface brightness (high background) sources imaged with the Chandra X-ray Observatory High Energy Transmission Grating Spectrometer. I pioneered these techniques in order to extract and analyze the high resolution spectrum of Sgr A*, our Galaxy’s central supermassive black hole (SMBH), producing a legacy dataset with a precision that will not be matched for decades. This dataset will be used to understand why Sgr A* is anomalously inactive, giving us clues to the connection between SMBH activity and galactic evolution. To publish the work, I developed an open source software package, pyXsis (github.com/eblur/pyxsis), to model the low signal-to-noise spectrum of Sgr A* simultaneously with a non-physical parametric model of the background spectrum (Corrales et al., 2020).
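The core idea of fitting a weak source spectrum jointly with a parametric background can be sketched in a few lines. This is not the pyXsis API; it is a minimal, illustrative example with synthetic data, assuming a Gaussian emission feature over a linear background, fit jointly with SciPy:

```python
# Hypothetical sketch (not the pyXsis API): jointly fit a weak source
# feature plus a smooth parametric background, as one does when modeling
# a low signal-to-noise X-ray spectrum on top of high background.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(42)
energy = np.linspace(1.0, 10.0, 200)  # illustrative keV grid

def model(e, amp, center, width, b0, b1):
    """Gaussian source line on top of a linear (non-physical) background."""
    source = amp * np.exp(-0.5 * ((e - center) / width) ** 2)
    background = b0 + b1 * e
    return source + background

# Simulate photon-counting data: Poisson noise around the true model.
true_params = (5.0, 6.4, 0.3, 2.0, -0.1)
counts = rng.poisson(model(energy, *true_params)).astype(float)

# Fit source and background parameters simultaneously.
popt, pcov = curve_fit(model, energy, counts, p0=(3.0, 6.0, 0.5, 1.0, 0.0))
```

Fitting both components at once, rather than subtracting an estimated background first, preserves the counting statistics that dominate at low signal-to-noise.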
As a result of my vocal advocacy for Python-compatible software tools and a modular approach to X-ray data analysis, I became Chair of HEACIT (short for “High Energy Astrophysics Codes, Interfaces, and Tools”), a new self-organized working group of X-ray software engineers and early career scientists interested in developing tools for future X-ray observatories. We are working to identify science cases that high energy astronomers find difficult to support with current software libraries, provide a central, publicly available online forum for tutorials and discussion of those libraries, and develop a set of best practices for X-ray data analysis. My research focus is now turning to exoplanet atmospheres, where I hope to measure absorption from molecules and aerosols in the UV. Utilizing University of Michigan access to the Neil Gehrels Swift Observatory, I work to observe the dip in a star’s brightness caused by occultation (transit) from a foreground planet. Transit depths are typically <1%, and telescopes like Swift were not originally designed with transit measurements (i.e., this level of precision) in mind. As a result, this research depends strongly on robust methods of scientific inference from noisy datasets.
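To see why transit depths sit below the percent level, note that the fractional dip in stellar flux is roughly the ratio of the planet's and star's disk areas, (R_planet / R_star)². A back-of-the-envelope sketch (the radii are standard physical constants; the scenarios are illustrative):

```python
# Rough transit-depth estimate: fractional flux dip ~ (R_planet / R_star)^2.
R_SUN_KM = 695_700.0    # solar radius
R_JUP_KM = 71_492.0     # Jupiter equatorial radius
R_EARTH_KM = 6_371.0    # Earth mean radius

def transit_depth(r_planet_km, r_star_km):
    """Fractional flux dip when a planet crosses its host star's disk."""
    return (r_planet_km / r_star_km) ** 2

# A Jupiter-size planet around a Sun-like star dips the flux by ~1%;
# an Earth analog dips it by less than 0.01%.
jupiter_depth = transit_depth(R_JUP_KM, R_SUN_KM)
earth_depth = transit_depth(R_EARTH_KM, R_SUN_KM)
```

Even the most favorable case, a giant planet around a Sun-like star, produces only a ~1% dip, which is why instruments not designed for this precision demand careful statistical treatment of the noise.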

As a graduate student, I attended some of the early “Python in Astronomy” workshops. While there, I wrote Jupyter Notebook tutorials that helped launch the Astropy Tutorials project (github.com/astropy/astropy-tutorials), which expanded to Learn Astropy (learn.astropy.org), for which I am a lead developer. Since then, I have also become a leader within the larger Astropy collaboration. I have helped develop the Astropy Project governance structure, hired maintainers, organized workshops, and maintained an AAS presence for the Astropy Project and NumFocus (the non-profit umbrella organization that works to sustain open source software communities in scientific computing) for the last several years. As a woman of color in a STEM field, I work to clear a path by teaching the skills I have learned along the way to other underrepresented groups in STEM. This year I piloted WoCCode (Women of Color Code), an online network and webinar series for women from minoritized backgrounds to share expertise and support each other in contributing to open source software communities.

Ben Green

Ben studies the social and political impacts of government algorithms. This work falls into three categories. First, he evaluates how people make decisions in collaboration with algorithms, developing machine learning algorithms and studying how people use them in public sector prediction and decision settings. Second, he studies the ethical and political implications of government algorithms, drawing on STS and legal theory to interrogate topics such as algorithmic fairness, smart cities, and criminal justice risk assessments. Third, he develops algorithms for public sector applications. In addition to his academic research, Ben spent a year developing data analytics tools as a data scientist for the City of Boston.

Ayumi Fujisaki-Manome

Fujisaki-Manome’s research program aims to improve the predictability of hazardous weather, ice, and lake/ocean events in cold regions in order to support preparedness and resilience in coastal communities, and to improve the usability of forecast products by working with stakeholders. The main question her research addresses is: what are the impacts of ice-ocean and ice-lake interactions on larger scale phenomena, such as climate, weather, storm surges, and sea/lake ice melting? She primarily uses numerical geophysical modeling and machine learning to address this question, and scientific findings from the research feed back into the models to improve their predictive skill. Her work has focused on applications to the Great Lakes, Alaska’s coasts, the Arctic Ocean, and the Sea of Okhotsk.

Areal fraction of ice cover in the Great Lakes in January 2018 modeled by the unstructured grid ice-hydrodynamic numerical model.

Sophia Brueckner

Sophia Brueckner is a futurist artist/designer/engineer. Inseparable from computers since the age of two, she believes she is a cyborg. As an engineer at Google, she designed and built products used by millions. At RISD and the MIT Media Lab, she researched the simultaneously empowering and controlling nature of technology with a focus on haptics and social interfaces. Her work has been featured internationally by Artforum, SIGGRAPH, The Atlantic, Wired, the Peabody Essex Museum, Portugal’s National Museum of Contemporary Art, and more. Brueckner is the founder and creative director of Tomorrownaut, a creative studio focusing on speculative futures and sci-fi-inspired prototypes. She is currently an artist-in-residence at Nokia Bell Labs, was previously an artist-in-residence at Autodesk, and is an assistant professor at the University of Michigan teaching Sci-Fi Prototyping, a course combining sci-fi, prototyping, and ethics. Her ongoing objective is to combine her background in art, design, and engineering to inspire a more positive future.

Todd Hollon

A major focus of the MLiNS lab is to combine stimulated Raman histology (SRH), a rapid, label-free optical imaging method, with deep learning and computer vision techniques to discover the molecular, cellular, and microanatomic features of skull base and malignant brain tumors. We are using SRH in our operating rooms to improve the speed and accuracy of brain tumor diagnosis. Our group has focused on deep learning-based computer vision methods for automated image interpretation, intraoperative diagnosis, and tumor margin delineation. Our work culminated in a multicenter, prospective clinical trial, which demonstrated that AI interpretation of SRH images was equivalent in diagnostic accuracy to pathologist interpretation of conventional histology. We were able to show, for the first time, that a deep neural network can learn recognizable and interpretable histologic image features (e.g., tumor cellularity, nuclear morphology, infiltrative growth patterns) in order to make a diagnosis. Our future work is directed at going beyond human-level interpretation toward identifying molecular/genetic features, single-cell classification, and predicting patient prognosis.

Sardar Ansari

I build data science tools to address challenges in medicine and clinical care. Specifically, I apply signal processing, image processing, and machine learning techniques, including deep convolutional and recurrent neural networks and natural language processing, to aid the diagnosis, prognosis, and treatment of patients with acute and chronic conditions. In addition, I conduct research on novel approaches to representing clinical data, combining supervised and unsupervised methods to improve model performance and reduce the labeling burden. Another active area of my research is the design, implementation, and use of novel wearable devices for non-invasive patient monitoring in the hospital and at home. This includes integrating the information measured by wearables with the data available in electronic health records, including medical codes, waveforms, and images, among others. A further strand of my research applies linear, non-linear, and discrete optimization and queuing theory to build new solutions for healthcare logistics planning, including stochastic approximation methods for modeling complex systems such as dispatch policies for emergency systems with multi-server dispatches, variable server load, and multiple priority levels.

Mark Steven Cohen

In his various roles, he has helped develop several educational programs in Innovation and Entrepreneurial Development (the only ones of their kind in the world) for medical students, residents, and faculty, and has co-founded four start-up companies (a consulting group, a pharmaceutical company, a device company, and a digital health startup) to improve the care of surgical patients and patients with cancer. He has given over 80 invited talks nationally and internationally and has written and published over 110 original scientific articles and 12 book chapters, as well as the textbook “Success in Academic Surgery: Innovation and Entrepreneurship,” published in 2019 by Springer Nature. His research focuses on drug development and nanoparticle drug delivery for cancer therapeutics, evaluation of circulating tumor cells, tissue engineering of thyroid organoids, and the role of mixed reality technologies, AI, and ML in surgical simulation, education, and clinical care delivery; he also directs the Center for Surgical Innovation at Michigan. He has been externally funded for 13 consecutive years by donors and by grants from the Susan G. Komen Foundation and the American Cancer Society, and he currently holds funding from three National Institutes of Health R01 grants through the National Cancer Institute. He has served on several grant study sections for the National Science Foundation, the National Institutes of Health, the Department of Defense, and the Susan G. Komen Foundation. He also serves on several scientific journal editorial boards and has served on committees and in leadership roles in the Association for Academic Surgery, the Society of University Surgeons, and the American Association of Endocrine Surgeons, where he was the National Program Chair in 2013. For his innovation efforts, he was awarded a Distinguished Faculty Recognition Award by the University of Michigan in 2019.
His clinical interests and national expertise are in the areas of Endocrine Surgery: specifically thyroid surgery for benign and malignant disease, minimally invasive thyroid and parathyroid surgery, and adrenal surgery, as well as advanced Melanoma Surgery, including developing and running the hyperthermic isolated limb perfusion program for in-transit metastatic melanoma (the only one in the state of Michigan), which is now one of the largest in the nation.

Anne Fernandez

Dr. Fernandez is a clinical psychologist with extensive training in both addiction and behavioral medicine. She is the Clinical Program Director at the University of Michigan Addiction Treatment Service. Her research focuses on the intersection of addiction and health across two main themes: 1) expanding access to substance use disorder treatment and prevention services, particularly in healthcare settings; and 2) applying precision health approaches to addiction-related healthcare questions. Her current grant-funded research includes an NIH-funded randomized controlled pilot trial of a preoperative alcohol intervention; an NIH-funded precision health study that leverages electronic health records to identify high-risk alcohol use at the time of surgery using natural language processing and other machine learning approaches; a University of Michigan-funded precision health award to understand and prevent new persistent opioid use after surgery using prediction modeling; and a federally funded evaluation of the state of Michigan’s substance use disorder treatment expansion.

Kevin Stange

Prof. Stange’s research uses administrative education and labor market data on whole populations to understand, evaluate, and improve education, employment, and economic policy. Much of this work involves analyzing millions of course-taking and transcript records for college students, whether at a single institution, a handful of institutions, or all institutions in several states. These data are used to richly characterize the experiences of college students and relate those experiences to outcomes such as educational attainment, employment, earnings, and career trajectories. Several projects also involve working with the text of the universe of job ads posted online in the US over the past decade, which is used to characterize the demand for different skills and education credentials in the US labor market. Classification is a task that arises frequently in this work: How do we classify courses into groups based on their titles and content? How do we identify students with similar educational experiences based on their course-taking patterns? How do we classify job ads as being more appropriate for one type of college major or another? This data science work is often paired with the traditional causal inference tools of economics, including quasi-experimental methods.
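The kind of text-classification step this describes can be sketched with standard tools. The example below is hypothetical, not drawn from the research itself: a handful of made-up course titles and field labels, TF-IDF features, and a simple linear classifier from scikit-learn.

```python
# Illustrative sketch: classifying course titles into fields using
# TF-IDF text features and logistic regression. The titles, labels,
# and pipeline here are made up for demonstration purposes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

titles = [
    "Introduction to Microeconomics", "Labor Economics",
    "Organic Chemistry I", "Physical Chemistry",
    "Data Structures and Algorithms", "Operating Systems",
]
fields = ["econ", "econ", "chem", "chem", "cs", "cs"]

# Vectorize the titles and fit a classifier in one pipeline.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(titles, fields)

# Predict the field of an unseen title from its vocabulary overlap.
pred = clf.predict(["Econometrics of Labor Markets"])
```

In practice, with millions of transcript records and job ads, the interesting work lies in feature design, label quality, and validating the groupings, but the basic vectorize-then-classify structure is the same.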