Wenbo Sun

Uncertainty quantification and decision making are increasingly in demand as technology advances in engineering and transportation systems. Within uncertainty quantification, Dr. Wenbo Sun is particularly interested in statistical modeling of engineering system responses that accounts for high dimensionality and complicated correlation structure, and in simultaneously quantifying uncertainty from a variety of sources, such as the inexactness of large-scale computer experiments, process variation, and measurement noise. He is also interested in data-driven decision making that is robust to such uncertainty. Specifically, he develops methodologies for anomaly detection and system design optimization, with applications to manufacturing process monitoring, distracted driving detection, out-of-distribution object identification, and vehicle safety design optimization.
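For a concrete picture, the sketch below is a minimal, hypothetical illustration of this flavor of uncertainty quantification, not Dr. Sun's actual methodology: a Gaussian process surrogate emulates an expensive computer experiment, and its predictive uncertainty is used to flag anomalous observations. The simulator, design size, and threshold are invented for the example.

```python
# Illustrative sketch only: a Gaussian process surrogate for an expensive
# computer experiment, with predictive uncertainty used to flag anomalies.
# The simulator, design size, and 3-sigma threshold are all hypothetical.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

def simulator(x):
    # Stand-in for a large-scale computer experiment with measurement noise
    return np.sin(3 * x) + 0.1 * rng.normal(size=x.shape)

X_design = rng.uniform(0, 2, size=(20, 1))       # small experimental design
y_design = simulator(X_design).ravel()

# RBF kernel captures smooth correlation across inputs; WhiteKernel absorbs noise
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X_design, y_design)

X_new = np.linspace(0, 2, 100).reshape(-1, 1)
mean, std = gp.predict(X_new, return_std=True)   # predictive mean and uncertainty

# Simple anomaly rule: observations far outside the predictive band
y_obs = simulator(X_new).ravel()
flagged = np.abs(y_obs - mean) > 3 * std
print(f"{flagged.sum()} of {len(y_obs)} observations flagged as anomalous")
```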

Yasser Aboelkassem

In this project, we use multi-scale models coupled with machine learning algorithms to study cardiac electromechanical coupling. The approach spans the molecular, Brownian, and Langevin dynamics of the contractile (sarcomeric protein) machinery of cardiac cells up to finite element analysis at the tissue and organ levels. In this work, we develop a novel machine learning surrogate for sarcomere contraction. The surrogate is trained on in-silico data generated by dynamic sampling procedures built on our previously derived myofilament mathematical models.
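The sketch below gives a minimal, hypothetical sense of the surrogate idea: a small neural network is fit to contractile force values produced by a toy stand-in for the myofilament model and evaluated on held-out samples. The force law, input ranges, and network architecture are illustrative assumptions, not the authors' model.

```python
# Illustrative sketch only: a neural-network surrogate for sarcomere contraction
# trained on simulated data. The toy force law, input ranges, and network size
# below are hypothetical stand-ins for the authors' myofilament models.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

def myofilament_force(calcium, length):
    # Stand-in for the in-silico myofilament model (purely illustrative)
    activation = calcium**2 / (calcium**2 + 0.5**2)   # Hill-type calcium activation
    overlap = np.exp(-((length - 2.1) ** 2) / 0.1)    # length-dependent filament overlap
    return activation * overlap

# Sample inputs over a physiological-looking range (the "training design")
calcium = rng.uniform(0.1, 2.0, 5000)
length = rng.uniform(1.6, 2.4, 5000)
X = np.column_stack([calcium, length])
y = myofilament_force(calcium, length)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
surrogate.fit(X_tr, y_tr)
print("surrogate R^2 on held-out samples:", surrogate.score(X_te, y_te))
```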

Multi-scale Machine Learning Modeling of Cardiac Electromechanics Coupling

Lu Wang

Lu’s research focuses on natural language processing, computational social science, and machine learning. More specifically, Lu works on algorithms for text summarization, language generation, argument mining, information extraction, and discourse analysis, as well as novel applications of such techniques to the study of media bias, polarization, and other interdisciplinary subjects.

Benjamin Fish

My research tackles how human values can be incorporated into machine learning and other computational systems. This includes work on the translation process from human values to computational definitions and work on how to understand and encourage fairness while preventing discrimination in machine learning and data science. My research combines tools from the theory of machine learning with insights from economics, science and technology studies, and philosophy, among others, to improve our theories of the translation process and the algorithms we create. In settings like classification, social networks, and data markets, I explore the ways in which human values play a primary role in the quality of machine learning and data science.

The likelihood of receiving desirable information, such as public health information or job advertisements, depends both on your position in a social network and on who directly receives the information to start with (the seeds). This image shows how a new method for deciding whom to select as the seeds, called maximin, outperforms the most popular approach in the prior literature by decreasing the correlation between where you are in the social network and your likelihood of receiving the information. These figures are taken from: Benjamin Fish, Ashkan Bashardoust, danah boyd, Sorelle A. Friedler, Carlos Scheidegger, and Suresh Venkatasubramanian. Gaps in information access in social networks. In The World Wide Web Conference (WWW 2019), San Francisco, CA, USA, May 13–17, 2019, pages 480–490.
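To make the maximin idea concrete, the sketch below is a toy reconstruction, not the paper's algorithm or data: it estimates each node's probability of receiving information under an independent-cascade style spread and compares seed pairs chosen to maximize the minimum access probability with pairs chosen to maximize total spread. The graph, spread probability, and candidate set are illustrative choices.

```python
# Illustrative sketch only: Monte Carlo estimates of each node's probability of
# receiving information under an independent-cascade style spread, comparing a
# maximin objective (maximize the minimum access probability) with a max-total
# objective. The graph, spread probability, and candidate set are toy choices.
import itertools
import networkx as nx
import numpy as np

rng = np.random.default_rng(2)
G = nx.karate_club_graph()     # small stand-in social network
P_SPREAD = 0.2                 # per-edge transmission probability

def access_probabilities(G, seeds, n_sim=100):
    """Estimate the probability that each node eventually hears the information."""
    counts = np.zeros(G.number_of_nodes())
    for _ in range(n_sim):
        live_edges = [(u, v) for u, v in G.edges if rng.random() < P_SPREAD]
        H = nx.Graph(live_edges)
        H.add_nodes_from(G.nodes)
        reached = set()
        for s in seeds:
            reached |= nx.node_connected_component(H, s)
        counts[list(reached)] += 1
    return counts / n_sim

# Small candidate set of seed pairs to keep the demo fast
candidates = list(itertools.combinations(list(G.nodes)[:12], 2))
best_maximin = max(candidates, key=lambda s: access_probabilities(G, s).min())
best_total = max(candidates, key=lambda s: access_probabilities(G, s).sum())
print("maximin seeds:", best_maximin)
print("max-total-spread seeds:", best_total)
```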

Yixin Wang

Yixin Wang works in the fields of Bayesian statistics, machine learning, and causal inference, with applications to recommender systems, text data, and genetics. She also works on algorithmic fairness and reinforcement learning, often via connections to causality. Her research centers around developing practical and trustworthy machine learning algorithms for large datasets that can enhance scientific understandings and inform daily decision-making. Her research interests lie in the intersection of theory and applications.

Elle O’Brien

My research focuses on building infrastructure for public health and health science research organizations to take advantage of cloud computing, strong software engineering practices, and MLOps (machine learning operations). By equipping biomedical research groups with tools that facilitate automation, better documentation, and portable code, we can improve the reproducibility and rigor of science while scaling up the kinds of data collection and analysis that are possible.

Research topics include:
1. Open source software and cloud infrastructure for research,
2. Software development practices and conventions that work for academic units, like labs or research centers, and
3. The organizational factors that encourage best practices in reproducibility, data management, and transparency.

The practice of science is a tug of war between competing incentives: the drive to do a lot fast, and the need to generate reproducible work. As data grows in size, code increases in complexity, and the number of collaborators and institutions involved goes up, it becomes harder to preserve all the “artifacts” needed to understand and recreate your own work. Technical AND cultural solutions will be needed to keep data-centric research rigorous, shareable, and transparent to the broader scientific community.

View MIDAS Faculty Research Pitch, Fall 2021

Jodyn Platt

Our team leads research on the Ethical, Legal, and Social Implications (ELSI) of learning health systems and related enterprises. Our research uses mixed methods to understand policies and practices that make data science methods (data collection and curation, AI, computable algorithms) trustworthy for patients, providers, and the public. Our work engages multiple stakeholders including providers and health systems, as well as the general public and minoritized communities on issues such as AI-enabled clinical decision support, data sharing and privacy, and consent for data use in precision oncology.

Sophia Brueckner

Sophia Brueckner is a futurist artist/designer/engineer. Inseparable from computers since the age of two, she believes she is a cyborg. As an engineer at Google, she designed and built products used by millions. At RISD and the MIT Media Lab, she researched the simultaneously empowering and controlling nature of technology with a focus on haptics and social interfaces. Her work has been featured internationally by Artforum, SIGGRAPH, The Atlantic, Wired, the Peabody Essex Museum, Portugal’s National Museum of Contemporary Art, and more. Brueckner is the founder and creative director of Tomorrownaut, a creative studio focusing on speculative futures and sci-fi-inspired prototypes. She is currently an artist-in-residence at Nokia Bell Labs, was previously an artist-in-residence at Autodesk, and is an assistant professor at the University of Michigan teaching Sci-Fi Prototyping, a course combining sci-fi, prototyping, and ethics. Her ongoing objective is to combine her background in art, design, and engineering to inspire a more positive future.

Todd Hollon

A major focus of the MLiNS lab is to combine stimulated Raman histology (SRH), a rapid, label-free optical imaging method, with deep learning and computer vision techniques to discover the molecular, cellular, and microanatomic features of skull base and malignant brain tumors. We are using SRH in our operating rooms to improve the speed and accuracy of brain tumor diagnosis. Our group has focused on deep learning-based computer vision methods for automated image interpretation, intraoperative diagnosis, and tumor margin delineation. Our work culminated in a multicenter, prospective clinical trial, which demonstrated that AI interpretation of SRH images was equivalent in diagnostic accuracy to pathologist interpretation of conventional histology. We were able to show, for the first time, that a deep neural network can learn recognizable and interpretable histologic image features (e.g., tumor cellularity, nuclear morphology, infiltrative growth pattern) in order to make a diagnosis. Our future work is directed at going beyond human-level interpretation toward identifying molecular and genetic features, classifying single cells, and predicting patient prognosis.
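As a generic illustration of the computer-vision component (an assumption about the overall setup, not the lab's actual network or data), the sketch below defines a small convolutional classifier that maps image patches to diagnostic classes and runs it on random tensors standing in for SRH patches.

```python
# Illustrative sketch only (not the MLiNS lab's network): a small convolutional
# classifier of the kind used to map histology image patches to diagnostic
# classes, run here on random tensors standing in for SRH patches.
import torch
import torch.nn as nn

N_CLASSES = 3   # hypothetical classes, e.g. tumor / normal brain / nondiagnostic

class PatchClassifier(nn.Module):
    def __init__(self, n_classes=N_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),            # global pooling over the patch
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = PatchClassifier()
fake_patches = torch.randn(8, 3, 64, 64)        # a batch of fake 64x64 RGB patches
logits = model(fake_patches)                    # shape: (8, N_CLASSES)
print(logits.shape)
```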