Proving Robustness to Data Bias
23/11/2021, 9am
Speaker
Aws Albarghouthi (University of Wisconsin-Madison)
Abstract
Datasets can be biased due to societal inequities, human biases, under-representation of minorities, and other factors. Our goal is to prove that the models produced by a learning algorithm make predictions that are robust to potential dataset bias. This is a challenging problem: it entails learning models over a large, or even infinite, number of datasets and ensuring that they all produce the same prediction.
In this talk, I will show how we can adapt ideas from program analysis to prove the robustness of a decision-tree learner across a large, or even infinite, number of datasets, certifying that every such dataset produces the same prediction for a specific test point. We evaluate our approach on datasets commonly used in the fairness literature and demonstrate its viability across a range of bias models.
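To make the certification question concrete, the following is a minimal brute-force sketch in Python, not the speaker's abstraction-based method: it assumes a hypothetical bias model in which up to k binary training labels may have been flipped, retrains a decision tree on every perturbed dataset, and checks that the prediction for one test point never changes. The function name certify_by_enumeration, the label-flip bias model, and all parameters are illustrative assumptions.

# Brute-force check of robustness to a label-flip bias model (illustrative
# sketch only; the talk's approach avoids this enumeration entirely).
from itertools import combinations

import numpy as np
from sklearn.tree import DecisionTreeClassifier


def certify_by_enumeration(X, y, x_test, k=1):
    """Return True iff every dataset obtained by flipping at most k
    binary labels yields a tree with the same prediction for x_test."""
    base = (
        DecisionTreeClassifier(max_depth=3, random_state=0)
        .fit(X, y)
        .predict([x_test])[0]
    )
    n = len(y)
    for r in range(1, k + 1):
        for idxs in combinations(range(n), r):
            y_flipped = y.copy()
            y_flipped[list(idxs)] = 1 - y_flipped[list(idxs)]  # flip 0/1 labels
            pred = (
                DecisionTreeClassifier(max_depth=3, random_state=0)
                .fit(X, y_flipped)
                .predict([x_test])[0]
            )
            if pred != base:
                return False  # some plausibly "debiased" dataset disagrees
    return True  # the prediction is robust under this bias model


# Tiny synthetic example; the talk's evaluation uses real fairness datasets.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
print(certify_by_enumeration(X, y, x_test=np.array([2.0, 2.0]), k=1))

The number of perturbed datasets grows combinatorially with k, which is precisely why a symbolic, program-analysis-style approach that reasons about all of them at once is needed.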
This is joint work with Anna Meyer and Loris D'Antoni.
Speaker Bio
Aws Albarghouthi is an associate professor at the University of Wisconsin-Madison. He studies the problems of automated synthesis and verification of programs. He received his PhD from the University of Toronto in 2015. He has received a number of best-paper awards for his work (at FSE, UIST, and FAST), a CACM Research Highlight for his PLDI 2020 paper, an NSF CAREER award, a Google Faculty Research Award, and multiple Facebook Research Awards.