A Connection Between GANs, Inverse Reinforcement Learning, and Energy Based Models, NIPS 2016

6,459 views

Preserve Knowledge

6 years ago

Chelsea Finn, Paul Christiano, Pieter Abbeel, Sergey Levine
UC Berkeley AI Research Lab
NIPS 2016 Workshop on Adversarial Training Spotlight
arxiv.org/abs/1611.03852
Generative adversarial networks (GANs) are a recently proposed class of generative models in which a generator is trained to optimize a cost function that is being simultaneously learned by a discriminator. While the idea of learning cost functions is relatively new to the field of generative modeling, learning costs has long been studied in control and reinforcement learning (RL) domains, typically for imitation learning from demonstrations. In these fields, learning the cost function underlying observed behavior is known as inverse reinforcement learning (IRL) or inverse optimal control. While at first the connection between cost learning in RL and cost learning in generative modeling may appear to be a superficial one, we show in this paper that certain IRL methods are in fact mathematically equivalent to GANs. In particular, we demonstrate an equivalence between a sample-based algorithm for maximum entropy IRL and a GAN in which the generator's density can be evaluated and is provided as an additional input to the discriminator. Interestingly, maximum entropy IRL is a special case of an energy-based model (EBM). We discuss the interpretation of GANs as an algorithm for training energy-based models, and relate this interpretation to other recent work that seeks to connect GANs and EBMs. By formally highlighting the connection between GANs, IRL, and EBMs, we hope that researchers in all three communities can better identify and apply transferable ideas from one domain to another, particularly for developing more stable and scalable algorithms: a major challenge in all three domains.
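To make the construction in the abstract concrete, the sketch below illustrates (with hypothetical names such as `cost`, `log_q`, and `log_Z`; this is not the authors' code) a discriminator that takes the generator's log-density as an additional input and compares it against an energy-based density proportional to exp(-cost(x)). This is the special discriminator form under which GAN training and sample-based maximum entropy IRL can be related.

```python
import numpy as np

def discriminator(x, cost, log_q, log_Z=0.0):
    """Discriminator that uses the generator's density as an extra input.

    The model side is the energy-based density p(x) = exp(-cost(x)) / Z,
    and the discriminator outputs D(x) = p(x) / (p(x) + q(x)), where q is
    the generator's (evaluable) density, supplied here through log_q.
    """
    log_p = -cost(x) - log_Z                        # log of the energy-based model density
    return 1.0 / (1.0 + np.exp(log_q(x) - log_p))   # equals p(x) / (p(x) + q(x))
```

Roughly, optimizing the standard GAN objective over `cost` with expert and generator samples then plays the role of the sample-based maximum entropy IRL cost update, which is the equivalence the abstract describes.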

Comments: 1
@sanketgujar7665 · 6 years ago
Hey, would someone explain what is meant by the energy function being more expressive than the generator function? Thanks.
Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI
31:25
Preserve Knowledge
150K views
Concept Learning with Energy-Based Models (Paper Explained)
39:29
Yannic Kilcher
29K views
CS885 Lecture17c: Inverse Reinforcement Learning
32:59
Pascal Poupart
9K views
Deep RL Bootcamp Lecture 10B: Inverse Reinforcement Learning
41:08
Vectoring Words (Word Embeddings) - Computerphile
16:56
Computerphile
277K views
How to train a GAN, NIPS 2016 | Soumith Chintala, Facebook AI Research
31:59
Deep RL Bootcamp Lecture 4A: Policy Gradients
53:56
AI Prism
59K views