Variational Inference: Foundations and Modern Methods (NIPS 2016 tutorial)

30,020 views

Steven Van Vaerenbergh


David Blei, Rajesh Ranganath, Shakir Mohamed.
One of the core problems of modern statistics and machine learning is to approximate difficult-to-compute probability distributions. This problem is especially important in probabilistic modeling, which frames all inference about unknown quantities as a calculation about a conditional distribution. In this tutorial we review and discuss variational inference (VI), a method that approximates probability distributions through optimization. VI has been used in myriad applications in machine learning and tends to be faster than more traditional methods, such as Markov chain Monte Carlo sampling. Brought into machine learning in the 1990s, recent advances and easier implementation have renewed interest in and application of this class of methods. This tutorial aims to provide both an introduction to VI with a modern view of the field, and an overview of the role that probabilistic inference plays in many of the central areas of machine learning.
The tutorial has three parts. First, we provide a broad review of variational inference from several perspectives. This part serves as an introduction to (or review of) its central concepts. Second, we develop and connect some of the pivotal tools for VI that have been developed in the last few years, tools like Monte Carlo gradient estimation, black box variational inference, stochastic approximation, and variational auto-encoders. These methods have led to a resurgence of research and applications of VI. Finally, we discuss some of the unsolved problems in VI and point to promising research directions.
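
Below is a minimal sketch, in Python with NumPy, of the black box VI idea mentioned above: the ELBO gradient is estimated with the score-function (REINFORCE) estimator and followed by stochastic gradient ascent. The toy model, the Gaussian variational family, and all hyperparameters (step size, sample count, iteration budget) are illustrative assumptions, not taken from the tutorial slides.

import numpy as np

rng = np.random.default_rng(0)

# Toy model: z ~ N(0, 1), x | z ~ N(z, 0.5^2); a single observation x.
x_obs = 1.5

def log_joint(z):
    # log p(x_obs, z) up to an additive constant
    return -0.5 * z**2 - 0.5 * ((x_obs - z) / 0.5) ** 2

# Variational family q(z; nu) = N(mu, sigma^2) with nu = (mu, log_sigma).
mu, log_sigma = 0.0, 0.0

def log_q(z, mu, log_sigma):
    sigma = np.exp(log_sigma)
    return -0.5 * ((z - mu) / sigma) ** 2 - log_sigma - 0.5 * np.log(2 * np.pi)

def score(z, mu, log_sigma):
    # gradient of log q(z; nu) with respect to (mu, log_sigma)
    sigma = np.exp(log_sigma)
    return np.array([(z - mu) / sigma**2, ((z - mu) / sigma) ** 2 - 1.0])

step, n_samples = 0.02, 500
for _ in range(3000):
    z = rng.normal(mu, np.exp(log_sigma), size=n_samples)
    # Score-function (REINFORCE) estimate of the ELBO gradient:
    # grad_nu ELBO ~= average of grad_nu log q(z; nu) * (log p(x, z) - log q(z; nu)).
    f = log_joint(z) - log_q(z, mu, log_sigma)
    g_mu, g_log_sigma = (score(z, mu, log_sigma) * f).mean(axis=1)
    mu += step * g_mu
    log_sigma += step * g_log_sigma

# Exact posterior for this conjugate toy model is N(1.2, 0.2).
print("fitted mean:", mu, " exact: 1.2")
print("fitted std :", np.exp(log_sigma), " exact:", np.sqrt(0.2))

Because the toy model is conjugate, the fitted mean and standard deviation can be checked against the exact posterior. In practice this plain estimator is noisy; the tutorial's variance-reduction tools (Rao-Blackwellization, control variates) or reparameterization gradients are used instead.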

Comments: 7
@yididiyayilma900
@yididiyayilma900 2 years ago
30:00 The most intuitive explanation of stochastic optimization I have heard so far.
@Filaaaix
@Filaaaix 4 years ago
At 1:27:30: I didn't really get how you derive the auxiliary variational bound. Is there a good source where it's explained more thoroughly?
@nicodetullio
@nicodetullio 2 years ago
lo
@alexeygritsenko9955
@alexeygritsenko9955 6 years ago
At 44:43 - why does the score function have an expectation of zero?
@mataneyal
@mataneyal 6 years ago
\begin{equation}
\begin{aligned}
\mathbb{E}_q[\nabla_\nu g(z; \nu)]
  &= \mathbb{E}_q[\nabla_\nu \log p(x, z) - \nabla_\nu \log q(z; \nu)] \\
  &= -\mathbb{E}_q[\nabla_\nu \log q(z; \nu)]
     && \text{($\log p(x, z)$ is not a function of $\nu$)} \\
  &= -\int q(z; \nu)\, \nabla_\nu \log q(z; \nu)\, dz \\
  &= -\int \nabla_\nu q(z; \nu)\, dz
     && \text{(log-derivative trick)} \\
  &= -\nabla_\nu \int q(z; \nu)\, dz = 0
     && \text{($q(z; \nu)$ integrates to one for every $\nu$)}
\end{aligned}
\end{equation}
@maxturgeon89
@maxturgeon89 4 years ago
Chain rule and dominated convergence theorem
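
The identity above is easy to verify numerically. The following is a minimal sketch (the Gaussian family and the parameter values are illustrative assumptions): draw samples from q and check that the Monte Carlo average of the score is close to zero.

import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 0.7, 1.3
z = rng.normal(mu, sigma, size=1_000_000)

# Score of a Gaussian q(z; mu, sigma) with respect to its two parameters.
score_mu = (z - mu) / sigma**2
score_sigma = ((z - mu) ** 2 - sigma**2) / sigma**3

# Both averages are ~0 up to Monte Carlo error (order 1/sqrt(n)).
print(score_mu.mean(), score_sigma.mean())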