Abstract: Sharpness-aware minimization (SAM) and related adversarial deep-learning methods can drastically improve generalization, but their underlying mechanisms are not yet fully understood. In this talk, I will show how SAM can be interpreted as optimizing a relaxation of the Bayes objective in which the expected negative loss is replaced by its optimal convex lower bound, obtained via the so-called Fenchel biconjugate. This connection enables a new Adam-like extension of SAM that automatically obtains reasonable uncertainty estimates, while sometimes also improving its accuracy.
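For context, the SAM update discussed in the abstract can be sketched as a two-step procedure: ascend to an adversarial perturbation of the weights within a small ball, then descend using the gradient at the perturbed point. The sketch below is a minimal NumPy illustration of this idea, not the speaker's Bayesian extension; the function names, the learning rate `lr`, and the radius `rho` are illustrative assumptions.

```python
import numpy as np

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    """One sharpness-aware minimization (SAM) step.

    grad_fn(w) returns the gradient of the loss at weights w.
    lr and rho are illustrative hyperparameters.
    """
    g = grad_fn(w)
    # Step 1: ascend to the (linearized) worst-case point
    # within an L2 ball of radius rho around w.
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    # Step 2: descend using the gradient at the perturbed weights.
    g_sharp = grad_fn(w + eps)
    return w - lr * g_sharp

# Usage: minimize a simple quadratic loss f(w) = 0.5 * ||w||^2,
# whose gradient is just w.
grad = lambda w: w
w = np.array([1.0, -2.0])
for _ in range(100):
    w = sam_step(w, grad)
```

On this toy quadratic, the iterates shrink toward the minimizer at the origin, settling in a small neighborhood whose size is governed by `rho`.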
Bio: Thomas Möllenhoff received his PhD in Informatics from the Technical University of Munich in 2020. Since then, he has been a post-doctoral researcher in the Approximate Bayesian Inference Team at RIKEN-AIP. During his PhD, Thomas worked on nonconvex optimization methods for image processing and computer vision. His recent research focuses on improving deep learning via Bayesian principles. His awards include the “Best Paper Honorable Mention” at CVPR 2016 and first place at the NeurIPS 2021 “Challenge on Approximate Inference in Bayesian Deep Learning”.
More on our website: www.epfl.ch/research/domains/...