L4 Latent Variable Models and Variational AutoEncoders -- CS294-158 SP24 Deep Unsupervised Learning

6,758 views

Pieter Abbeel

1 day ago

Comments: 12
@kamilkisielewicz2910 · 9 months ago
Great lecture, was nice to have the derivation of the objective spelled out in such an intuitive way
@sd-ti4yv · 17 days ago
Very useful lectures. Thank you.
@KostyaKanishev · 11 months ago
To the question at 21:05: the estimate is unbiased — it is correctly stated that it is asymptotically exact — but it has high variance. The variational inference approach offers a different tradeoff: lower variance at the cost of the bias introduced by restricting the family of proposal latent densities q(z).
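The bias/variance point above can be made concrete with a toy latent-variable model (my own example, not from the lecture): z ~ N(0, 1), x | z ~ N(z, 0.1²), with p(x) known in closed form. The naive Monte Carlo estimate of p(x) from prior samples is unbiased but very noisy, while importance sampling with a proposal near the posterior (parameters hand-tuned here, marked as assumptions) stays unbiased with far lower variance.

```python
import numpy as np

rng = np.random.default_rng(0)
x, sig = 3.0, 0.1            # observed x and likelihood std (toy model)

def norm_pdf(v, mu, s):
    return np.exp(-0.5 * ((v - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))

N = 10_000

# Naive Monte Carlo: z ~ p(z) = N(0, 1). Unbiased, but almost every
# sample lands where p(x|z) is tiny, so the estimator has huge variance.
z = rng.standard_normal(N)
naive = norm_pdf(x, z, sig)

# Importance sampling with a proposal q(z) placed near the posterior
# (mu_q, s_q are assumed, hand-tuned values). Still unbiased via the
# weights p(z) p(x|z) / q(z), but with far lower variance.
mu_q, s_q = 2.97, 0.1
zq = rng.normal(mu_q, s_q, N)
w = norm_pdf(zq, 0.0, 1.0) * norm_pdf(x, zq, sig) / norm_pdf(zq, mu_q, s_q)

exact = norm_pdf(x, 0.0, np.sqrt(1 + sig**2))   # closed-form p(x)
print(f"exact p(x)  = {exact:.5f}")
print(f"naive  mean = {naive.mean():.5f}  std = {naive.std():.4f}")
print(f"IS     mean = {w.mean():.5f}  std = {w.std():.4f}")
```

A variational q(z|x) plays the same role as this proposal, except that restricting q to a parametric family introduces the bias the comment mentions.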
@sahhaf1234 · 11 months ago
@40:13 yes, but how do you backpropagate through such a thing? That's not clear at all. It is all the more difficult to understand because the neural nets are hidden inside the pdfs...
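The standard answer to the question above is the reparameterization trick: write the sample as a deterministic, differentiable function of the parameters plus noise, z = μ + σ·ε with ε ~ N(0, 1), so the gradient can pass through the sampling step. A minimal numpy sketch (toy objective f(z) = z², chosen because the true gradient d/dμ E[f(z)] = 2μ is known):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.5, 0.8
eps = rng.standard_normal(100_000)

# Reparameterization: z is now a deterministic, differentiable function
# of (mu, sigma); all the randomness lives in eps.
z = mu + sigma * eps

# For f(z) = z^2:  d/dmu E[f(z)] = E[f'(z) * dz/dmu] = E[2z * 1].
grad_mu = (2 * z).mean()     # Monte Carlo estimate of the gradient
print(grad_mu)               # analytic answer: 2 * mu = 3.0
```

In an actual VAE the same pattern lets the encoder's μ(x), σ(x) networks receive gradients through the sampled z, which is why backprop works despite the sampling step.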
@GobalKrishnanV · 4 months ago
Which book should we read? Please recommend a reference book at the start, and share a document for it.
@spartacusche · 5 months ago
Which probability course should I revise to understand all of this quickly — sampling, etc.? Something from YouTube, or elsewhere?
@toyeoloko · 6 months ago
Is the code for the examples available, or is it possible to get it?
@spartacusche · 5 months ago
On GitHub you can find the 2020 version of the course with demos and solutions.
@mingzhou2213 · 6 months ago
At 1:15:22 (slide 55), for the likelihood-ratio gradient on the VAE, I did my own calculation. For the first term (the one that does not eventually become 0), I have a sum over all z. Starting from the product rule,
\[
\frac{1}{q}\,\nabla_\phi \big(q \,[\dots]\big) = \frac{1}{q}\,(\nabla_\phi q)\,[\dots] + \frac{1}{q}\, q \,\nabla_\phi [\dots],
\]
I see that the expected value of the second term is 0. For the first term, when I take the expectation, it becomes
\[
\sum_z q \cdot \frac{1}{q}\,(\nabla_\phi q)\,[\dots] = \sum_z (\nabla_\phi q)\,[\dots],
\]
and I am not sure how to proceed. It is not the same case as slide 56, because there we have $q_\phi(z|x)\, f(z)$, while here we have $q_\phi(z|x)\,[\log p(z) + \log p_\theta(x|z) - \log q_\phi(z|x)]$.
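The two identities the comment above is manipulating — E_q[∇_φ log q] = 0, and the likelihood-ratio (score-function) form ∇_φ E_q[f(z)] = E_q[f(z) ∇_φ log q(z)] — can be checked numerically. A small sketch with q(z) = N(μ, 1) and the toy choice f(z) = z² (my example; the true gradient is 2μ):

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 1.5
z = rng.normal(mu, 1.0, 500_000)

# Score function of q(z) = N(mu, 1):  d/dmu log q(z) = z - mu.
score = z - mu

# Identity used on the slide: E_q[ d/dmu log q(z) ] = 0.
print(score.mean())          # should be close to 0

# Likelihood-ratio gradient of E_q[f(z)] for f(z) = z^2:
# E_q[ f(z) * (z - mu) ] estimates d/dmu E_q[z^2] = 2 * mu.
grad = (z**2 * score).mean()
print(grad)                  # analytic answer: 2 * mu = 3.0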
@MSalman1 · 6 months ago
Just goes to show how much stats and probability is required here... I mean, I did not know what "importance sampling" was. Great lecture though!
@spartacusche · 5 months ago
What does he mean by "I sample z" at minute 23?
@rabbithole9853 · 7 months ago
I guess because I am below average, I find the lecture very difficult to follow. Thanks for sharing though.