Learning in Implicit Generative Models, NIPS 2016 | Shakir Mohamed, Google DeepMind

1,660 views

Preserve Knowledge

6 years ago

Shakir Mohamed, Balaji Lakshminarayanan
arxiv.org/abs/1610.03483
NIPS 2016 Workshop on Adversarial Training Spotlight
Generative adversarial networks (GANs) provide an algorithmic framework for constructing generative models with several appealing properties: they do not require a likelihood function to be specified, only a generating procedure; they provide samples that are sharp and compelling; and they allow us to harness our knowledge of building highly accurate neural network classifiers. Here, we develop our understanding of GANs with the aim of forming a rich view of this growing area of machine learning---to build connections to the diverse set of statistical thinking on this topic, of which much can be gained by a mutual exchange of ideas. We frame GANs within the wider landscape of algorithms for learning in implicit generative models--models that only specify a stochastic procedure with which to generate data--and relate these ideas to modelling problems in related fields, such as econometrics and approximate Bayesian computation. We develop likelihood-free inference methods and highlight hypothesis testing as a principle for learning in implicit generative models, using which we are able to derive the objective function used by GANs, and many other related objectives. The testing viewpoint directs our focus to the general problem of density ratio estimation. There are four approaches for density ratio estimation, one of which is a solution using classifiers to distinguish real from generated data. Other approaches such as divergence minimisation and moment matching have also been explored in the GAN literature, and we synthesise these views to form an understanding in terms of the relationships between them and the wider literature, highlighting avenues for future exploration and cross-pollination.
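
The classifier route to density ratio estimation mentioned in the abstract can be illustrated with a small sketch. The code below is not from the talk or paper; it assumes two one-dimensional Gaussians standing in for the data distribution p* and the model q_theta, and uses scikit-learn's LogisticRegression as the discriminator. It relies on the standard identity that the Bayes-optimal classifier satisfies D(x) = p*(x) / (p*(x) + q_theta(x)), so the ratio p*(x)/q_theta(x) can be recovered as D(x) / (1 - D(x)).

```python
# Minimal sketch of classifier-based density ratio estimation.
# Assumptions (not from the source): p* and q_theta are 1-D Gaussians,
# and a logistic-regression classifier plays the role of the discriminator.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
real = rng.normal(loc=0.0, scale=1.0, size=(n, 1))       # samples from the data distribution p*
generated = rng.normal(loc=1.5, scale=1.0, size=(n, 1))  # samples from the model q_theta

X = np.vstack([real, generated])
y = np.concatenate([np.ones(n), np.zeros(n)])             # label 1 = real, 0 = generated

clf = LogisticRegression().fit(X, y)                       # the "discriminator"

def density_ratio(x):
    # Bayes-optimal classifier: D(x) = p*(x) / (p*(x) + q_theta(x)),
    # so p*(x) / q_theta(x) is approximately D(x) / (1 - D(x)).
    d = clf.predict_proba(np.atleast_2d(x))[:, 1]          # P(real | x)
    return d / (1.0 - d)

print(density_ratio([[-1.0], [0.0], [2.0]]))               # larger where p* dominates q_theta
```

In a GAN, a discriminator trained in this way (or its associated objective) is what provides the learning signal for the generator; here it is only used to score individual points.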

Comments: 1

@riteshajoodha4401, a year ago:
Wonderful, always a pleasure to hear you speak!