Susan Hautaniemi Leonard

I am faculty at ICPSR, the largest social science data archive in the world. I manage an education research pre-registration site (sreereg.org) focused on transparency and replicability, as well as a site for sharing work on record linkage, including code (linkagelibrary.org). I am involved in the LIFE-M project (life-m.org), most recently classifying its mortality data. That project uses cutting-edge techniques for machine-reading handwritten forms.
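
Record linkage of this kind can be sketched with a toy example. The snippet below scores candidate matches between a vital record and census entries using string similarity and a birth-year check; the fields, weights, and threshold are hypothetical and are not the LIFE-M pipeline.

```python
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    """Normalized string similarity between two names (0 to 1)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def link_score(rec_a: dict, rec_b: dict) -> float:
    """Crude linkage score: weighted name similarity plus a birth-year
    check. Weights and fields are hypothetical."""
    score = 0.6 * name_similarity(rec_a["name"], rec_b["name"])
    score += 0.4 * (abs(rec_a["birth_year"] - rec_b["birth_year"]) <= 1)
    return score

# Link a vital record to the best-scoring census entry above a threshold.
vital = {"name": "John A. Smith", "birth_year": 1872}
census = [
    {"name": "Jon Smith", "birth_year": 1871},
    {"name": "John Smyth", "birth_year": 1890},
]
best = max(census, key=lambda rec: link_score(vital, rec))
if link_score(vital, best) > 0.8:
    print("linked:", best)
```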

Mortality rates for selected causes in the total population per 1,000, 1850–1912, Holyoke and Northampton, Massachusetts

Negar Farzaneh

Dr. Farzaneh’s research centers on the application of computer science, in particular machine learning, signal processing, and computer vision, to developing clinical decision support systems and solving medical problems.

Meha Jain

I am an Assistant Professor in the School for Environment and Sustainability at the University of Michigan and am part of the Sustainable Food Systems Initiative. My research examines the impacts of environmental change on agricultural production and how farmers may adapt to reduce negative effects. I also examine ways to sustainably enhance agricultural production. To do this work, I combine remote sensing and geospatial analyses with household-level and census datasets to examine farmer decision-making and agricultural production across large spatial and temporal scales.

Conducting wheat crop cuts to measure yield in India, which we use to train algorithms that map yield using satellite data
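
The crop cuts above supply the ground truth for yield mapping. As a minimal sketch of that step, the snippet below fits a linear model from a vegetation index (e.g., NDVI) to crop-cut yields and applies it across a satellite scene; all numbers are synthetic, and the algorithms used in practice are considerably more sophisticated.

```python
import numpy as np

# Crop-cut ground truth: peak-season vegetation index (e.g., NDVI) and
# measured wheat yield (t/ha) for a handful of fields. Numbers are synthetic.
ndvi = np.array([0.42, 0.55, 0.61, 0.70, 0.48])
yield_t_ha = np.array([1.9, 2.8, 3.1, 3.6, 2.3])

# Fit a simple linear yield model: yield ≈ a * NDVI + b.
a, b = np.polyfit(ndvi, yield_t_ha, deg=1)

# Apply the fitted model to every pixel of a satellite NDVI scene
# to produce a yield map.
scene = np.array([[0.50, 0.65],
                  [0.58, 0.44]])
yield_map = a * scene + b
print(yield_map)
```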

Qing Qu

His research interests lie at the intersection of signal processing, data science, machine learning, and numerical optimization. He is particularly interested in computational methods for learning low-complexity models from high-dimensional data, leveraging tools from machine learning, numerical optimization, and high-dimensional geometry, with applications in imaging sciences, scientific discovery, and healthcare. More recently, he has become interested in understanding deep networks through the lens of low-dimensional modeling.
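
A canonical low-complexity model is sparsity. As a minimal sketch of this class of optimization methods (not code from his research), the snippet below runs the iterative soft-thresholding algorithm (ISTA) to recover a sparse signal from a random dictionary.

```python
import numpy as np

def ista(D, y, lam=0.05, n_iter=500):
    """Iterative soft-thresholding for min_x 0.5*||y - D x||^2 + lam*||x||_1."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = x - D.T @ (D @ x - y) / L      # gradient step on the smooth term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
D = rng.standard_normal((50, 200))         # overcomplete dictionary
x_true = np.zeros(200)
x_true[[3, 77]] = [1.5, -2.0]              # 2-sparse ground truth
y = D @ x_true
x_hat = ista(D, y)
print(np.nonzero(np.round(x_hat, 2))[0])   # approximately recovers {3, 77}
```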

Lubomir Hadjiyski

Dr. Hadjiyski’s research interests include computer-aided diagnosis, artificial intelligence (AI), machine learning, predictive models, image processing and analysis, medical imaging, and control systems. His current research involves the design of decision support systems for the detection and diagnosis of cancer in different organs, and quantitative analysis of integrated multimodality radiomics, histopathology, and molecular biomarkers for treatment response monitoring using AI and machine learning techniques. He also studies the effect of decision support systems on physicians’ clinical performance.

Thomas L. Chenevert

Multi-center clinical trials increasingly use quantitative diffusion-weighted imaging (DWI) to aid patient management and treatment response assessment in translational oncology. A major source of systematic bias in diffusion measurements was discovered to originate from platform-dependent gradient hardware. Left uncorrected, these biases confound the quantitative diffusion metrics used to characterize tissue pathology and treatment response, leading to inconclusive findings and increasing the requisite subject numbers and trial cost. We have developed technology to mitigate the systematic diffusion mapping bias that exists on MRI scanners and are in the process of deploying it for multi-center clinical trials. Another major source of variance, and a bottleneck in high-throughput analysis of quantitative diffusion maps, is segmentation of the tumor/tissue volume of interest (VOI) based on intensities and patterns in multi-contrast MR image datasets, as well as reliable assessment of longitudinal change with disease progression or response to treatment. Our goal is the development, trialing, and application of AI algorithms for robust, (semi-)automated VOI definition in the analysis of multi-dimensional MR datasets for oncology trials.

Representative apparent diffusion coefficient (ADC) histograms and map overlays are shown for oncology trials to be supported by this Academic Industrial Partnership (AIP). ADC is used to characterize tumor malignancy in breast cancer, therapeutic effect in head and neck (H&N) cancer, and cellular proliferation in the bone marrow of myelofibrosis (MF) patients. Relevant clinical outcome metrics are illustrated under the histograms: a detection sensitivity threshold (to reduce unnecessary breast biopsies (13)), Kaplan-Meier analysis of therapy response (stratified by the median SD of H&N metastatic nodes (23)), and histopathologic proliferation stage (MF cellular infiltration classification).
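
For context, ADC is derived voxelwise from the mono-exponential diffusion model S(b) = S0 * exp(-b * ADC). A minimal two-point sketch with synthetic data follows; the gradient-hardware bias correction described above is not shown.

```python
import numpy as np

def adc_map(s_low, s_high, b_low=0.0, b_high=800.0):
    """Voxelwise ADC (mm^2/s) from two b-values (s/mm^2), assuming the
    mono-exponential model S(b) = S0 * exp(-b * ADC)."""
    eps = 1e-12                            # guard against log of zero
    return np.log((s_low + eps) / (s_high + eps)) / (b_high - b_low)

# Synthetic 2x2 "images" acquired at b = 0 and b = 800 s/mm^2.
s0 = np.array([[1000.0, 900.0], [1100.0, 950.0]])
adc_true = np.array([[1.0e-3, 2.0e-3], [0.8e-3, 1.5e-3]])
s800 = s0 * np.exp(-800.0 * adc_true)
print(adc_map(s0, s800))                   # recovers adc_true
```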

Wenbo Sun

Uncertainty quantification and decision making are increasingly in demand as technology advances in engineering and transportation systems. Among uncertainty quantification problems, Dr. Wenbo Sun is particularly interested in statistical modeling of engineering system responses that accounts for high dimensionality and complicated correlation structure, and in quantifying uncertainty from a variety of sources simultaneously, such as the inexactness of large-scale computer experiments, process variations, and measurement noise. He is also interested in data-driven decision making that is robust to such uncertainty. Specifically, he develops methodologies for anomaly detection and system design optimization, with applications to manufacturing process monitoring, distracted driving detection, out-of-distribution object identification, vehicle safety design optimization, and more.
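
As a minimal illustration of uncertainty-aware anomaly detection (not Dr. Sun’s methodology), the sketch below fits a plain Gaussian process to a synthetic system response and flags observations that fall outside a 3-sigma predictive band; the kernel, noise level, and data are all assumptions for the example.

```python
import numpy as np

def rbf(a, b, length=1.0, var=1.0):
    """Squared-exponential kernel between 1-D input arrays."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def gp_predict(x_train, y_train, x_new, noise=0.05):
    """Posterior mean and variance of a vanilla Gaussian process."""
    K = rbf(x_train, x_train) + noise ** 2 * np.eye(len(x_train))
    Ks = rbf(x_new, x_train)
    mean = Ks @ np.linalg.solve(K, y_train)
    var = rbf(x_new, x_new).diagonal() - np.einsum(
        "ij,ji->i", Ks, np.linalg.solve(K, Ks.T))
    return mean, var

# Synthetic system response with one injected anomaly at index 25.
x = np.linspace(0.0, 5.0, 40)
y = np.sin(x) + 0.05 * np.random.default_rng(1).standard_normal(40)
y[25] += 1.0

# Flag observations outside a 3-sigma predictive band.
mean, var = gp_predict(x, y, x)
outliers = np.abs(y - mean) > 3 * np.sqrt(var + 0.05 ** 2)
print(np.where(outliers)[0])               # expected: [25]
```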

Lia Corrales

My PhD research focused on identifying the size and mineralogical composition of interstellar dust, from X-ray imaging of dust scattering halos to X-ray spectroscopy of bright objects to study absorption from intervening material. Over the course of my PhD I also developed an open-source, object-oriented approach to computing the extinction properties of particles in Python, which lets the user easily change the scattering physics models and composition properties of dust grains. In many cases, the signal I look for from interstellar dust requires evaluating the observational data at the 1-5% level. This has required me to develop a deep understanding of both the instruments and the counting statistics (because modern X-ray instruments are photon-counting tools).

My expertise led me to a postdoc at MIT, where I developed techniques to obtain high-resolution X-ray spectra from low surface brightness (high background) sources imaged with the Chandra X-ray Observatory High Energy Transmission Grating Spectrometer. I pioneered these techniques to extract and analyze the high-resolution spectrum of Sgr A*, our Galaxy’s central supermassive black hole (SMBH), producing a legacy dataset with a precision that will not be replicable for decades. This dataset will be used to understand why Sgr A* is anomalously inactive, giving us clues to the connection between SMBH activity and galactic evolution. To publish the work, I developed an open-source software package, pyXsis (github.com/eblur/pyxsis), to model the low signal-to-noise spectrum of Sgr A* simultaneously with a non-physical parametric model of the background spectrum (Corrales et al., 2020).

As a result of my vocal advocacy for Python-compatible software tools and a modular approach to X-ray data analysis, I became Chair of HEACIT (“High Energy Astrophysics Codes, Interfaces, and Tools”), a new self-appointed working group of X-ray software engineers and early-career scientists interested in developing tools for future X-ray observatories. We are working to identify science cases that high-energy astronomers find difficult to support with current software libraries, to provide a central, publicly available online forum for tutorials and discussion of those libraries, and to develop a set of best practices for X-ray data analysis.

My research focus is now turning to exoplanet atmospheres, where I hope to measure absorption from molecules and aerosols in the UV. Using UM access to the Neil Gehrels Swift Observatory, I work to observe the dip in a star’s brightness caused by occultation (transit) from a foreground planet. Transit depths are typically <1%, and telescopes like Swift were not originally designed with this level of precision in mind. As a result, this research depends strongly on robust methods of scientific inference from noisy datasets.
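
The transit signal itself is simple to state: a planet of radius Rp crossing a star of radius Rs dims the star by roughly (Rp/Rs)^2. The toy sketch below estimates that depth from a noisy, box-shaped light curve; the numbers are synthetic, and Swift-specific systematics are ignored.

```python
import numpy as np

# Toy box-shaped transit: a planet with Rp/Rs = 0.1 dims the star by
# (Rp/Rs)^2 = 1% while it crosses the stellar disk.
rng = np.random.default_rng(2)
t = np.linspace(-0.1, 0.1, 500)              # days from mid-transit
depth_true = 0.1 ** 2
in_transit = np.abs(t) < 0.02                # transit window
flux = 1.0 - depth_true * in_transit
flux += 0.002 * rng.standard_normal(t.size)  # photometric noise

# Estimate the depth from in- vs out-of-transit medians.
depth_hat = np.median(flux[~in_transit]) - np.median(flux[in_transit])
print(f"estimated depth: {depth_hat:.4f} (true {depth_true:.4f})")
```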

As a graduate student, I attended some of the early “Python in Astronomy” workshops. While there, I wrote Jupyter Notebook tutorials that helped launch the Astropy Tutorials project (github.com/astropy/astropy-tutorials), which expanded to Learn Astropy (learn.astropy.org), for which I am a lead developer. Since then, I have also become a leader within the larger Astropy collaboration. I have helped develop the Astropy Project governance structure, hired maintainers, organized workshops, and maintained an AAS presence for the Astropy Project and NumFocus (the non-profit umbrella organization that works to sustain open source software communities in scientific computing) for the last several years. As a woman of color in a STEM field, I work to clear a path by teaching the skills I have learned along the way to other underrepresented groups in STEM. This year I piloted WoCCode (Women of Color Code), an online network and webinar series for women from minoritized backgrounds to share expertise and support each other in contributing to open source software communities.

Sardar Ansari

I build data science tools to address challenges in medicine and clinical care. Specifically, I apply signal processing, image processing, and machine learning techniques, including deep convolutional and recurrent neural networks and natural language processing, to aid the diagnosis, prognosis, and treatment of patients with acute and chronic conditions. In addition, I conduct research on novel approaches to representing clinical data, combining supervised and unsupervised methods to improve model performance and reduce the labeling burden. Another active area of my research is the design, implementation, and use of novel wearable devices for non-invasive patient monitoring in the hospital and at home. This includes integrating the information measured by wearables with the data available in electronic health records, including medical codes, waveforms, and images, among others. A further line of my research applies linear, nonlinear, and discrete optimization and queuing theory to build new solutions for healthcare logistics planning, including stochastic approximation methods to model complex systems such as dispatch policies for emergency systems with multi-server dispatches, variable server load, multiple priority levels, and so on.
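
As a toy illustration of the queuing-theory side (illustrative numbers only; the actual dispatch models with multi-server dispatches and priority levels are far richer), the sketch below sizes a hypothetical ambulance fleet with the Erlang-B formula.

```python
def erlang_b(servers: int, load: float) -> float:
    """Blocking probability of an M/M/c/c loss system with offered load
    `load` in Erlangs, computed via the stable Erlang-B recurrence."""
    b = 1.0
    for k in range(1, servers + 1):
        b = load * b / (k + load * b)
    return b

# Illustrative sizing: 6 calls/hour with a 45-minute mean service time
# gives an offered load of 4.5 Erlangs. Find the smallest fleet for which
# the probability that every ambulance is busy stays below 1%.
load = 6 * 0.75
fleet = 1
while erlang_b(fleet, load) > 0.01:
    fleet += 1
print(fleet, round(erlang_b(fleet, load), 4))  # 11 ambulances
```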

Jesse Hamilton

My research focuses on the development of novel Magnetic Resonance Imaging (MRI) technology for imaging the heart. We focus in particular on quantitative imaging techniques, in which the signal intensity at each pixel in an image represents a measurement of an inherent property of a tissue. Much of our research is based on cardiac Magnetic Resonance Fingerprinting (MRF), which is a class of methods for simultaneously measuring multiple tissue properties from one rapid acquisition.
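
MRF parameter estimation is commonly posed as dictionary matching: each voxel’s measured signal evolution is compared against simulated evolutions over a grid of tissue properties, and the best match yields the estimates. The sketch below illustrates the idea with a toy exponential signal model; it is schematic, since real MRF dictionaries require Bloch-equation simulation of the actual sequence.

```python
import numpy as np

# Toy dictionary of signal evolutions over a (T1, T2) grid. Real MRF
# dictionaries come from Bloch-equation simulation of the pulse sequence.
t = np.linspace(0.01, 3.0, 100)                # readout times (s)
t1_grid = np.arange(0.5, 2.01, 0.1)
t2_grid = np.arange(0.05, 0.51, 0.05)
params = np.array([(t1, t2) for t1 in t1_grid for t2 in t2_grid])
atoms = np.array([(1 - np.exp(-t / t1)) * np.exp(-t / t2)
                  for t1, t2 in params])
atoms /= np.linalg.norm(atoms, axis=1, keepdims=True)  # unit-norm entries

# Template matching: the dictionary entry with the largest inner product
# against the measured voxel signal gives the (T1, T2) estimate.
signal = (1 - np.exp(-t / 1.2)) * np.exp(-t / 0.25)
signal += 0.001 * np.random.default_rng(0).standard_normal(t.size)
best = np.argmax(atoms @ signal)
print("estimated (T1, T2):", params[best])     # close to (1.2, 0.25)
```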

Our group is exploring novel ways to combine physics-based modeling of MRI scans with deep learning algorithms for several purposes. First, we are exploring the use of deep learning to design quantitative MRI scans with improved accuracy and precision. Second, we are developing deep learning approaches for image reconstruction that will allow us to reduce image noise, improve spatial resolution and volumetric coverage, and enable highly accelerated acquisitions to shorten scan times. Third, we are exploring ways of using artificial intelligence to derive physiological motion signals directly from MRI data to enable continuous scanning that is robust to cardiac and breathing motion. In general, we focus on algorithms that are either self-supervised or use training data generated in computer simulations, since the collection of large amounts of training data from human subjects is often impractical when designing novel imaging methods.