Subscribe to our channel to get notified when we release a new video.
Like the video to tell YouTube that you want more content like this on your feed.
See our website for future seminars: sites.google.c...
Tuesday, October 22, 2024: Alexis Bellot (Google DeepMind, London)
Title: Partial Transportability for Domain Generalization
Discussant: Adam Li (Columbia University)
Abstract: A fundamental task in AI is providing performance guarantees for predictions made in unseen domains. In practice, there can be substantial uncertainty about the distribution of new data, and corresponding variability in the performance of existing predictors. For example, a risk prediction tool fine-tuned on a patient population (e.g. a particular hospital or geographic location) may not be equally optimal if deployed on a different patient population that may differ in several aspects. This talk studies this problem through the lens of partial transportability, which combines data from source domains with assumptions about the data-generating mechanisms, encoded in causal diagrams, to provide a guarantee on the out-of-distribution performance of classification models. We will show that one may consistently predict the worst-case performance of existing classification models and, further, that one may train classification models to explicitly optimize for worst-case performance in a target domain, under our assumptions. Both methods may be parameterized with expressive neural networks and implemented with gradient-based optimization schemes. With these results, we hope to provide a fresh perspective on the problem of transfer learning and domain generalization in machine learning.
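
The gradient-based worst-case training mentioned in the abstract can be pictured as a minimax loop: an adversary picks the hardest plausible target distribution while the classifier minimizes its loss under that choice. The sketch below is not the speaker's method; it is a minimal stand-in using adversarial reweighting, with hypothetical names (classifier, adversary, domain covariate z) and synthetic data, only to illustrate how such an alternating scheme can be implemented.

```python
# Minimal sketch (assumed, not the speaker's implementation): adversarial
# reweighting as a toy stand-in for worst-case (minimax) domain training.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic source data: features x, labels y, and a domain covariate z
# (e.g. a hospital or location indicator) that induces covariate shift.
n = 512
z = torch.randint(0, 2, (n, 1)).float()
x = torch.randn(n, 4) + z
y = ((x[:, :1] + 0.5 * z + 0.1 * torch.randn(n, 1)) > 0.5).float()

classifier = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
adversary = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))

opt_c = torch.optim.Adam(classifier.parameters(), lr=1e-2)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss(reduction="none")

for step in range(200):
    per_example_loss = bce(classifier(x), y).squeeze(-1)

    # Adversary step: reweight examples by their domain covariate to emulate
    # the hardest plausible target distribution (ascend the weighted loss).
    w = torch.softmax(adversary(z).squeeze(-1), dim=0) * n
    adv_loss = -(w * per_example_loss.detach()).mean()
    opt_a.zero_grad(); adv_loss.backward(); opt_a.step()

    # Classifier step: descend the loss under the adversarially chosen weights.
    w = torch.softmax(adversary(z).squeeze(-1), dim=0).detach() * n
    cls_loss = (w * per_example_loss).mean()
    opt_c.zero_grad(); cls_loss.backward(); opt_c.step()

print(f"final reweighted (worst-case surrogate) loss: {cls_loss.item():.3f}")
```

In the actual talk, the plausible target distributions are constrained by causal-diagram assumptions rather than by a free reweighting of a single covariate; the loop above only shows the alternating ascent/descent structure of a gradient-based minimax scheme.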