Variational Inference: Foundations and Innovations

47,009 views

Simons Institute


Comments: 19
@whaleshark8700 4 years ago
Wonderful talk! Timestamps:
7:45 Start
10:49 GMM model example
13:37 LDA example
22:42 Conditionally conjugate models
28:22 ELBO
30:52 Mean-field VI
37:27 Stochastic VI
48:07 Black box VI
1:00:47 Reparameterization and amortization
@pauloabelha 2 years ago
“Great question! I wish this talk was over so I could go and think about it”
@jiachenlei1489 3 years ago
Amazing! A brief presentation, but it gives deep insights.
@bnglr 4 years ago
Check Blei's latest talk on this topic: www.cs.columbia.edu/~blei/talks/Blei_VI_tutorial.pdf kzbin.info/www/bejne/epLUf4GCnsmmraM kzbin.info/www/bejne/jZWag5KPjZmDmbM And the 2016 NIPS tutorial talk: www.cs.columbia.edu/~blei/talks/Blei_VI_tutorial.pdf kzbin.info/www/bejne/pZjHp5JsmcepjLM
@ewfq2 5 years ago
28:08 On the "bad properties of KL divergence" and alternative divergence measures: does anyone have any pointers? Very interesting.
@citiblocsMaster 7 years ago
7:45 This has to be true
@조성민-y9n 3 years ago
so damn true
@monart4210 4 years ago
I understand we measure the distance between two distributions using KL divergence, but I am still very confused: how do we know we are getting closer to the actual posterior distribution if we do not know the posterior distribution?
@prafful1723 4 years ago
Please someone answer this!!
@superhanfeng 4 years ago
Because c = a + b, where c (the log evidence) is a constant, a is the KL divergence between the variational distribution and the posterior, and b is the ELBO. By maximizing b, you minimize a.
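(A minimal sketch of the identity this reply describes, in standard notation; it is not taken from the talk slides.)

\log p(x) \;=\; \underbrace{\mathbb{E}_{q(z)}\big[\log p(x, z) - \log q(z)\big]}_{\text{ELBO, the reply's } b} \;+\; \underbrace{\mathrm{KL}\big(q(z) \,\|\, p(z \mid x)\big)}_{\text{the reply's } a}

The left-hand side (the reply's c, the log evidence) does not depend on q, so increasing the ELBO by any amount decreases the KL term by exactly the same amount, even though the KL itself is never computed.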
@rhettscronfinkle3106 3 years ago
It's a mathematical identity. Check out Sergey Levine's lectures, RAIL CS182, the Latent Variable Models lecture. It's part of a larger lecture series on deep learning, and it's there on YouTube.
@j2schmit 2 years ago
You're exactly right, we don't know the actual posterior distribution, so at first it seems intractable to minimize a KL divergence involving it. This is where the ELBO (evidence lower bound) comes into play: the ELBO lower-bounds the log evidence, and the gap between the two is exactly the KL divergence of interest, so maximizing the ELBO, which requires no knowledge of the actual posterior, minimizes the KL. For a nice summary of this, see Section 2.2 of this paper: arxiv.org/pdf/1601.00670.pdf
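To make that concrete, here is a minimal, self-contained sketch (not from the talk or the linked paper) using a toy conjugate model whose exact posterior is known in closed form, so you can check that maximizing the ELBO, which never evaluates the posterior, recovers it. The model, the observed value x = 1.3, and the grid search are all illustrative choices:

# Toy model: z ~ N(0, 1), x | z ~ N(z, 1), one observation x.
# Exact posterior: N(x / 2, 1 / 2). Variational family: q(z) = N(m, s^2).
import numpy as np

x = 1.3  # arbitrary observed data point for the demo

def elbo(m, s):
    """Closed-form ELBO = E_q[log p(x|z)] + E_q[log p(z)] - E_q[log q(z)]."""
    e_loglik   = -0.5 * np.log(2 * np.pi) - 0.5 * ((x - m) ** 2 + s ** 2)
    e_logprior = -0.5 * np.log(2 * np.pi) - 0.5 * (m ** 2 + s ** 2)
    entropy    = 0.5 * np.log(2 * np.pi * np.e * s ** 2)
    return e_loglik + e_logprior + entropy

# Crude but transparent: grid-search the variational parameters and
# report where the ELBO is largest.
ms = np.linspace(-2.0, 2.0, 401)
ss = np.linspace(0.05, 2.0, 400)
grid = np.array([[elbo(m, s) for s in ss] for m in ms])
i, j = np.unravel_index(grid.argmax(), grid.shape)

print(f"ELBO-optimal q:  mean = {ms[i]:.3f}, std = {ss[j]:.3f}")
print(f"Exact posterior: mean = {x / 2:.3f}, std = {np.sqrt(0.5):.3f}")

Running this prints a variational mean near x/2 and a standard deviation near sqrt(1/2), matching the exact posterior, even though the code only ever touches the joint p(x, z) and q.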
@carlossouza5151 4 years ago
Amazing talk!!!
@sandeepreddy6295 4 years ago
Great lecture!
@martindelgado4834 7 years ago
Can we have access to the slides please?
@rdflrlz 7 years ago
Martin Delgado: You can get the slides on Dr. Blei's website.
@harshnigam3385 6 years ago
Damn!
@rickferreira3146 2 years ago
Groovy