MIDAS Seminar Series Presents: Arya Farahi, University of Michigan
February 1 @ 4:00 pm - 5:00 pm
Data Science Fellow, MIDAS, University of Michigan
KiTE: A framework for algorithmic trustworthiness
Abstract: AI decision-support systems are increasingly shaping the fabric of our society and being used in scientific discovery. These systems can exhibit and exacerbate undesirable biases that harm under-represented populations or lead to false scientific discoveries. It is therefore critical to evaluate these systems not only through the lens of predictive power and error rate but also through the lens of trustworthiness. In this talk, I will focus on probabilistic classifiers and argue that a trustworthy classifier must be group-wise calibrated, i.e., the probability predictions of the classifier must match the frequency of future observations for every subset of the population. Performing group-wise hypothesis testing and/or group-wise bias quantification can be challenging. I will present our solution, KiTE, a hypothesis-testing framework with provable guarantees that enables practitioners to (i) test whether a model is group-wise calibrated, (ii) quantify untrustworthiness, and (iii) estimate prediction bias, both at the individual and group levels.
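The group-wise calibration criterion above can be illustrated with a minimal sketch (this is not the KiTE API; the function name, the synthetic data, and the binned expected-calibration-error metric are illustrative assumptions): within each subgroup, predictions are binned by confidence and the gap between mean predicted probability and observed outcome frequency is averaged.

```python
import numpy as np

def groupwise_calibration_error(y_true, y_prob, groups, n_bins=10):
    """Expected calibration error (ECE) computed separately per subgroup.

    Illustrative only (not KiTE's hypothesis test): within each group,
    predictions are binned by confidence, and the gap between the mean
    predicted probability and the observed frequency is averaged with
    weights proportional to bin occupancy.
    """
    errors = {}
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    for g in np.unique(groups):
        mask = groups == g
        p, y = y_prob[mask], y_true[mask]
        ece = 0.0
        for lo, hi in zip(bins[:-1], bins[1:]):
            # Include the right edge in the last bin.
            in_bin = (p >= lo) & ((p < hi) if hi < 1.0 else (p <= hi))
            if in_bin.sum() == 0:
                continue
            gap = abs(p[in_bin].mean() - y[in_bin].mean())
            ece += in_bin.mean() * gap
        errors[g] = ece
    return errors

# Synthetic example: group "A" is calibrated (outcomes drawn at the
# predicted rate), group "B" is miscalibrated (true rate is p**3 while
# the model reports p), so "B" should show a much larger error.
rng = np.random.default_rng(0)
probs = rng.uniform(size=2000)
groups = np.repeat(["A", "B"], 1000)
labels = np.where(groups == "A",
                  rng.uniform(size=2000) < probs,
                  rng.uniform(size=2000) < probs ** 3).astype(int)
print(groupwise_calibration_error(labels, probs, groups))
```

A model can be well calibrated on the population as a whole while failing this check on a subgroup, which is exactly the failure mode group-wise testing is designed to expose.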
In the second part of the talk, I will demonstrate how KiTE can be used to answer some pressing questions in cosmology. A key goal of modern cosmology is to develop a more accurate and precise understanding of the properties of "dark matter" and "dark energy." The most recent results from high-precision experiments have revealed a set of discrepancies between the measured properties of dark matter and dark energy. This could be a smoking gun for new physics, or it could reflect unaccounted-for modeling biases. I employ the methods developed for quantifying algorithmic trustworthiness to shed light on, and correct for, modeling biases in dark energy experiments.