Sebastian Nowozin, Microsoft Research
Generative neural samplers are probabilistic models that implement sampling using
feedforward neural networks: they take a random input vector and produce a sample
from a probability distribution defined by the network weights. These models
are expressive and allow efficient computation of samples and derivatives, but
cannot be used for computing likelihoods or for marginalization.
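As a rough illustration (not code from the talk), such a sampler is just a feedforward pass applied to a random input vector; the two-layer architecture, layer sizes, and nonlinearity below are arbitrary assumptions:

```python
# Minimal sketch of a generative neural sampler (illustrative architecture,
# not the one from the talk): random input vector -> feedforward net -> sample.
import numpy as np

rng = np.random.default_rng(0)

# The network weights define the sampled distribution Q_theta.
W1 = rng.normal(scale=0.1, size=(64, 100))  # hidden layer on a 100-d noise input
b1 = np.zeros(64)
W2 = rng.normal(scale=0.1, size=(2, 64))    # 2-d output samples
b2 = np.zeros(2)

def sample(n):
    """Draw n samples: each is a deterministic transform of random noise."""
    z = rng.normal(size=(n, 100))           # random input vectors
    h = np.tanh(z @ W1.T + b1)              # differentiable w.r.t. the weights
    return h @ W2.T + b2                    # samples from Q_theta

x = sample(5)
```

Samples and their derivatives with respect to the weights are cheap to compute, but the density of such a sampler is intractable, which is why likelihood-based training does not apply.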
The generative-adversarial training method makes it possible to train such models through the use of an
auxiliary discriminative neural network. We show that the generative-adversarial
approach is a special case of an existing, more general variational divergence
estimation approach. We show that any f-divergence can be used for training
generative neural samplers.
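The underlying bound is the standard variational characterization of an f-divergence (the notation here is ours, not the talk's slides): for convex f with f(1) = 0,

\[
D_f(P \,\|\, Q) \;=\; \int_{\mathcal{X}} q(x)\, f\!\left(\frac{p(x)}{q(x)}\right) \mathrm{d}x
\;\;\ge\;\; \sup_{T} \Big( \mathbb{E}_{x \sim P}\big[T(x)\big] \;-\; \mathbb{E}_{x \sim Q}\big[f^{*}(T(x))\big] \Big),
\]

where \(f^{*}(t) = \sup_{u} \{\, u t - f(u) \,\}\) is the Fenchel conjugate of f. Parametrizing T as a neural network yields the auxiliary discriminator: it is trained to tighten the bound while the sampler is trained to minimize it, and the original generative-adversarial objective corresponds to one particular choice of f.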
We discuss how various choices of divergence function affect training complexity and the quality of the obtained generative models.
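For concreteness, two standard members of the f-divergence family (textbook facts, included here only for illustration):

\[
f(u) = u \log u \;\;\Rightarrow\;\; D_f(P \,\|\, Q) = \mathrm{KL}(P \,\|\, Q),
\qquad
f(u) = -\log u \;\;\Rightarrow\;\; D_f(P \,\|\, Q) = \mathrm{KL}(Q \,\|\, P).
\]

Each choice of f yields a different conjugate f* and hence a different training objective, which is where the trade-offs in training complexity and sample quality arise.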