Abstract: To mitigate the bias exhibited by machine learning models, fairness criteria can be integrated into the training process to ensure fair treatment across all demographics, but this often comes at the expense of model performance. Understanding such tradeoffs, therefore, underlies the design of optimal and fair algorithms. In this talk, Zhao will first discuss his recent work on characterizing the inherent tradeoff between fairness and accuracy in classification problems, where he shows that the cost of fairness can be characterized by the optimal value of a Wasserstein-barycenter problem. He will then show that the complexity of learning the optimal fair predictor is the same as that of learning the Bayes predictor, and present a post-processing algorithm, based on the solution to the Wasserstein-barycenter problem, that derives optimal fair predictors from Bayes score functions. He will also present empirical results for the fair algorithm and conclude the talk with a discussion of the close interplay between algorithmic fairness and domain generalization.
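As background for the post-processing idea mentioned above, here is a minimal sketch of how a one-dimensional Wasserstein barycenter can be used to post-process group-wise score distributions toward statistical parity. In one dimension the barycenter's quantile function is the (weighted) average of the groups' quantile functions, so each score can be remapped through its group's empirical CDF onto the barycenter. The function name and interface below are illustrative and not taken from the talk; the actual algorithm presented by Zhao may differ in details such as thresholding and the fairness criterion used.

```python
import numpy as np

def barycenter_postprocess(scores, groups, grid=101):
    """Sketch: remap each group's scores onto the 1-D Wasserstein
    barycenter of the group-wise score distributions, so that the
    post-processed scores share (approximately) one common
    distribution across groups (statistical parity)."""
    qs = np.linspace(0.0, 1.0, grid)
    uniq, counts = np.unique(groups, return_counts=True)
    weights = counts / counts.sum()

    # Group-wise empirical quantile functions on a common grid.
    quants = {g: np.quantile(scores[groups == g], qs) for g in uniq}

    # In 1-D, the barycenter's quantile function is the weighted
    # average of the groups' quantile functions.
    bary = sum(w * quants[g] for g, w in zip(uniq, weights))

    out = np.empty_like(scores, dtype=float)
    for g in uniq:
        mask = groups == g
        # Rank of each score within its own group (empirical CDF value),
        # then push that rank through the barycenter quantile function.
        ranks = np.searchsorted(
            np.sort(scores[mask]), scores[mask], side="right"
        ) / mask.sum()
        out[mask] = np.interp(ranks, qs, bary)
    return out
```

Applied to scores from two groups with different distributions, both groups are mapped through the same barycenter quantile curve, so their post-processed score distributions coincide while each example's within-group ranking is preserved.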
Speaker Bio: Dr. Han Zhao is an Assistant Professor of Computer Science at the University of Illinois Urbana-Champaign (UIUC). He is also an Amazon Visiting Academic at Amazon AI and Search Science. Dr. Zhao earned his Ph.D. from Carnegie Mellon University under the guidance of Prof. Geoff Gordon. His research interests center on trustworthy machine learning, with a focus on transfer learning, domain adaptation/generalization, and algorithmic fairness. His long-term goal is to build trustworthy ML systems that are efficient, robust, fair, private, and interpretable. He received his bachelor's degree in Computer Science from Tsinghua University and his master's degree in Mathematics from the University of Waterloo.