Wonderful Talk!
7:45 start
10:49 GMM model example
13:37 LDA example
22:42 Conditionally conjugate models
28:22 ELBO
30:52 Mean-field VI
37:27 Stochastic VI
48:07 Black box VI
1:00:47 Reparameterization and amortization
@pauloabelha · 2 years ago
“Great question! I wish this talk was over so I could go and think about it”
@jiachenlei1489 · 3 years ago
Amazing! A brief presentation, but it gives deep insights.
@bnglr · 4 years ago
Check Blei's latest talk on this topic:
www.cs.columbia.edu/~blei/talks/Blei_VI_tutorial.pdf
kzbin.info/www/bejne/epLUf4GCnsmmraM
kzbin.info/www/bejne/jZWag5KPjZmDmbM
And the 2016 NIPS tutorial talk:
www.cs.columbia.edu/~blei/talks/Blei_VI_tutorial.pdf
kzbin.info/www/bejne/pZjHp5JsmcepjLM
@ewfq2 · 5 years ago
28:08 On the "bad properties of KL divergence" and alternative divergence measures: does anyone have any references to point to? Very interesting.
@citiblocsMaster · 7 years ago
7:45 This has to be true
@조성민-y9n · 3 years ago
so damn true
@monart4210 · 4 years ago
I understand we measure the distance between two distributions using KL divergence, but I'm still very confused. How do we know whether we are getting closer to the actual posterior distribution if we do not know the posterior distribution?
@prafful1723 · 4 years ago
Please someone answer this!!
@superhanfeng · 4 years ago
Because c = a + b, where c (the log evidence) is a constant and a is the KL divergence between the posterior and the variational distribution. By maximizing b (the ELBO), you minimize a. See the identity spelled out below.
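Spelling that identity out (with x the observations and z the latent variables, and a, b, c labeled as above):

```latex
\underbrace{\log p(x)}_{c\ (\text{constant in } q)}
= \underbrace{\mathbb{E}_{q(z)}\!\big[\log p(x,z) - \log q(z)\big]}_{b\ (\text{ELBO})}
+ \underbrace{\mathrm{KL}\!\big(q(z)\,\|\,p(z \mid x)\big)}_{a\ (\ge 0)}
```

Since c is fixed and a is nonnegative, pushing b up necessarily pushes a down, and b is computable because it only involves the joint p(x, z), never the posterior p(z | x).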
@rhettscronfinkle3106 · 3 years ago
It is a mathematical thing. You could check out Sergey Levine's lectures: RAIL CS182, the lecture on latent variable models. It's part of a larger lecture series on DL; it's there on KZbin.
@j2schmit · 2 years ago
You're exactly right, we don't know the actual posterior distribution, so at first it seems intractable to try to minimize a KL divergence involving it. This is where the ELBO (the evidence lower bound) comes into play. The ELBO lower-bounds the log evidence, and the gap between the two is exactly the KL divergence of interest; since the log evidence does not depend on the variational distribution, we can maximize the ELBO without knowing the actual posterior, thereby minimizing the KL. For a nice summary of this see Section 2.2 of this paper: arxiv.org/pdf/1601.00670.pdf
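To make that concrete, here's a minimal numerical sketch with a hypothetical toy model (not from the talk): prior z ~ N(0, 1), likelihood x | z ~ N(z, 1), Gaussian variational family q(z) = N(mu, sigma^2), and the ELBO estimated by Monte Carlo using reparameterized samples z = mu + sigma * eps. The exact posterior here is N(x_obs/2, 1/2), so we can check that the ELBO peaks there without ever evaluating the posterior density:

```python
import numpy as np

# Hypothetical toy model (illustration only): z ~ N(0, 1), x | z ~ N(z, 1).
# Variational family: q(z) = N(mu, sigma^2).

def log_normal(x, mean, var):
    """Log density of N(mean, var) evaluated at x."""
    return -0.5 * np.log(2.0 * np.pi * var) - 0.5 * (x - mean) ** 2 / var

def elbo_estimate(mu, sigma, x_obs, n_samples=100_000, seed=0):
    """Monte Carlo estimate of the ELBO, E_q[log p(x_obs, z) - log q(z)]."""
    rng = np.random.default_rng(seed)
    z = mu + sigma * rng.standard_normal(n_samples)  # reparameterized draws from q
    log_joint = log_normal(z, 0.0, 1.0) + log_normal(x_obs, z, 1.0)
    log_q = log_normal(z, mu, sigma ** 2)
    return np.mean(log_joint - log_q)

x_obs = 2.0
# At the exact posterior q = N(1, 1/2), the ELBO equals the log evidence
# log N(2; 0, 2) ~= -2.27; any other q gives a strictly smaller value.
print(elbo_estimate(mu=1.0, sigma=np.sqrt(0.5), x_obs=x_obs))
print(elbo_estimate(mu=0.0, sigma=1.0, x_obs=x_obs))
```

Neither line ever touches p(z | x); everything is computed from the joint, which is exactly why maximizing the ELBO is tractable.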
@carlossouza5151 · 4 years ago
Amazing talk!!!
@sandeepreddy6295 · 4 years ago
Great lecture!
@martindelgado4834 · 7 years ago
Can we have access to the slides please?
@rdflrlz · 7 years ago
@Martin Delgado You can get the slides if you check Dr. Blei's website.