Stanford CS236: Deep Generative Models I 2023 I Lecture 7 - Normalizing Flows

1,846 views

Stanford Online

1 month ago

For more information about Stanford's Artificial Intelligence programs visit: stanford.io/ai
To follow along with the course, visit the course website:
deepgenerativemodels.github.io/
Stefano Ermon
Associate Professor of Computer Science, Stanford University
cs.stanford.edu/~ermon/
Learn more about the online course and how to enroll: online.stanford.edu/courses/c...
To view all online courses and programs offered by Stanford, visit: online.stanford.edu/

Comments: 2
@dohyun0047 · 22 days ago
I hope someone answers my question: at inference time (not training), can we simply sample z from a standard Gaussian because of the KL divergence term used during training? That is, the KL term forces the latent variables z produced by the encoder to be distributed like the prior p(z), which we assume to be a simple Gaussian.
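A minimal sketch of the inference-time generation this comment describes, written in PyTorch (my own illustration, not code from the course): because the KL term pushes q(z|x) toward the prior p(z) = N(0, I) during training, generation samples z directly from that prior and decodes it, with no encoder involved. The decoder below is an untrained stand-in with made-up dimensions.

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 2, 4
decoder = nn.Sequential(            # placeholder for a trained decoder network
    nn.Linear(latent_dim, 16),
    nn.ReLU(),
    nn.Linear(16, data_dim),
)

z = torch.randn(8, latent_dim)      # z ~ N(0, I): sample the prior directly;
x_mean = decoder(z)                 # decoder outputs parameters of p(x|z)
print(x_mean.shape)                 # torch.Size([8, 4])
```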
@CPTSMONSTER · 1 month ago
8:00 Without the KL term, this is similar to a stochastic autoencoder that takes an input and maps it to a distribution over latent variables
8:30 Reconstruction term; the KL term encourages the latent variables produced by the encoder to be distributed like the prior (a Gaussian in this case)
10:00? Trick the decoder
12:50? q is also stochastic
14:10 Both p and q are generative models; we are only regularizing the latent space of an autoencoder (q)
15:10 Matching the marginal distribution of z under p and under q seems like a possible training objective, but the integrals are intractable
24:10? If p is a powerful autoregressive model, then z is not needed
32:05? Sampling from p(z|x) means inverting the generative process to find the z's likely under that posterior, which is intractable to compute
34:25? Sample from the conditional rather than selecting the most likely z
53:50 Change of variables formula (see the sketch after these notes)
56:40 Mapping the unit hypercube to a parallelotope (linear invertible transformation)
59:10 The area of the parallelogram is the determinant of the matrix
59:50 Parallelotope pdf
1:08 Non-linear invertible transformation formula, generalized to the determinant of the Jacobian of f. The dimensions of x and z are equal, unlike in VAEs. The determinant of the Jacobian of the inverse of f equals the inverse of the determinant of the Jacobian of f.
1:15:00 Worked example of the pdf formula for a non-linear transformation
1:17:45 Two interpretations of diffusion models: stacked VAEs and infinitely deep flow models
1:21:20 Flow model intuition: the latent variables z don't compress dimensionality; they view the data from another angle to make things easier to model
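A small numerical check of the change of variables formula noted at 53:50 (my own sketch, not code from the lecture): for an invertible linear map x = Az with z ~ N(0, I), the formula gives p_X(x) = p_Z(A⁻¹x) / |det A|, where |det A| is the volume of the parallelotope that A maps the unit hypercube to. The matrix A and test point x below are arbitrary choices for illustration.

```python
import numpy as np
from scipy.stats import multivariate_normal

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])          # invertible: det A = 6 (parallelotope volume)
p_Z = multivariate_normal(mean=np.zeros(2), cov=np.eye(2))  # prior p(z) = N(0, I)

x = np.array([1.0, -2.0])
z = np.linalg.solve(A, x)           # z = f^{-1}(x) = A^{-1} x

# density of x via the change of variables formula
p_x_flow = p_Z.pdf(z) / abs(np.linalg.det(A))

# ground truth: if z ~ N(0, I), then x = Az ~ N(0, A A^T)
p_x_true = multivariate_normal(mean=np.zeros(2), cov=A @ A.T).pdf(x)
print(np.isclose(p_x_flow, p_x_true))   # True
```

For a non-linear invertible f (the 1:08 generalization in the notes above), the same check works with |det A| replaced by the absolute determinant of the Jacobian of f evaluated at z.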