Justin is the best lecturer I've ever seen on YouTube! He can always present the most complicated things in a clear and simple way. Thanks!
@matato2932 a year ago
26:00, "we could ask a model: give me an image of a cat with a purple tail, but i dont think itll work". amazing how within just a few years we have already reached this point where we can synthesize images from arbitrary input.
@syedhasany1809 4 years ago
I am extremely glad that Generative Models were spread over two lectures; excellent lecture as always!
@40yearoldman a year ago
This professor is an excellent communicator.
@ZinzinsIA a year ago
Extremely thankful for this lecture, finally getting the intuition behind generative models. Very valuable, thanks again, awesome lecture!
@frommarkham424 5 months ago
I gained a better understanding of generative models as soon as I saw the thumbnail, without even watching the video. Thanks!
@AkshayRoyal 3 years ago
My left ear enjoyed this lecture a lot :P
@kolokolo2365 2 years ago
Very nice lecture as always.
@heejuneAhn a year ago
For PixelRNN, why not mention the sampling methods (greedy, stochastic, temperature control, and maybe even beam search)? They are quite related to the current GPT generation methods, right?
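For illustration, here is a minimal sketch (my own, not from the lecture) of what greedy vs. temperature-controlled stochastic sampling could look like for the 256-way per-pixel softmax such a model outputs; the function name and arguments are hypothetical:

```python
import torch

def sample_pixel(logits, temperature=1.0, greedy=False):
    # logits: tensor of shape (256,) with unnormalized scores for the next pixel value
    if greedy:
        return logits.argmax().item()  # greedy decoding: always pick the most likely value
    # temperature < 1 sharpens the distribution, > 1 flattens it
    probs = torch.softmax(logits / temperature, dim=-1)
    return torch.multinomial(probs, num_samples=1).item()  # stochastic sampling
```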
@kyungryunlee9626 3 years ago
Can somebody help me with the concept of probability? At 31:44 he talks about how to train a model on a given dataset. It says the goal of finding the weights W for an unsupervised model is to maximize the probability of the training data. I am confused by this "probability of the training data". Does it mean the probability that, when a training example x(i) is given, the output is the same x(i)? Like the cost function of an autoencoder (the square of x_hat - x)? My background knowledge is not good enough to look this up in papers or math textbooks, so please help me!
@DED_Search 3 years ago
For autoregressive models, the way to find the optimal weights W that maximize the probability of the training data is maximum likelihood estimation. The probability of the training data is essentially the product of the probabilities of the individual training points, p(x_i), which by definition is f(x_i, W). Autoencoders do not explicitly model or estimate a probability density function; rather, they find the optimal weights by minimizing the reconstruction error ||x - x_hat||^2. Different algorithms adopt different methods to find optimal weights. Hopefully this helps shed some light for you.
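To make this concrete, a minimal sketch (a toy 1-D example of my own, not from the lecture) of what "maximizing the probability of the training data" means in practice: pick the parameters W that minimize the summed negative log-likelihood of the data under the model's density p(x; W).

```python
import math
import torch

# Toy 1-D dataset; the "model" is a Gaussian whose mean and log-std are the weights W.
x = torch.randn(1000) * 2.0 + 3.0
mu = torch.zeros(1, requires_grad=True)
log_sigma = torch.zeros(1, requires_grad=True)
opt = torch.optim.Adam([mu, log_sigma], lr=0.05)

for _ in range(500):
    # log p(x_i; W) for each point; the dataset likelihood is the product over i,
    # so its log is the sum (here: mean) of these per-example terms.
    log_p = -0.5 * ((x - mu) / log_sigma.exp()) ** 2 - log_sigma - 0.5 * math.log(2 * math.pi)
    loss = -log_p.mean()  # maximizing likelihood == minimizing negative log-likelihood
    opt.zero_grad()
    loss.backward()
    opt.step()

# After training, mu and log_sigma.exp() approach the data's mean (3.0) and std (2.0).
```

An autoencoder, by contrast, never writes down p(x; W) at all; it only minimizes ||x - x_hat||^2.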
@kyungryunlee9626 3 years ago
@@DED_Search thanks for your help!!
@DED_Search 3 years ago
I am quite confused at 1:08, where q(z|x) is described as the posterior of the decoder. But actually, we are using the encoder to estimate q(z|x). So what is the implication of the terminology here? I'd really appreciate it if anyone could shed some light.
@heejuneAhn a year ago
Can you explain why PixelRNN is an explicit density model? Can you write down the pdf of the model? What do you mean by "explicit"? To be explicit, the probability should have the form p(x1, x2, x3, ..., xn), where xi is the value of each pixel. Can you express it like that? And can you explain how we train PixelRNN? E.g., the output is a distribution over the 256 pixel values; is an L1 or L2 loss applied against the training images?
@mrfli24 a year ago
1. Your understanding of "explicit" is correct. The pdf is written as a product of conditional pdfs, p(x1, ..., xn) = p(x1) p(x2 | x1) ... p(xn | x1, ..., x(n-1)). At test time, we can compute the conditional pdfs sequentially and multiply them all together. 2. The standard way to handle probability-distribution outputs (with a softmax over the 256 values) is the cross-entropy loss rather than L1 or L2. The training paradigm is essentially the same as training a language model.
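A minimal sketch of the softmax + cross-entropy training described here (assuming a hypothetical `model` that returns per-pixel logits of shape (batch, 256, H, W); not the lecture's code), where each pixel is treated as a 256-way classification target:

```python
import torch
import torch.nn.functional as F

def pixel_nll(model, images):
    # images: uint8 tensor of shape (batch, H, W) with values in [0, 255]
    logits = model(images.float() / 255.0)  # assumed output shape: (batch, 256, H, W)
    targets = images.long()                 # the true pixel value is the class index
    # Cross-entropy is the negative log of the predicted probability of the true value;
    # summed over pixels this equals -log p(x1, ..., xn) under the chain-rule factorization.
    return F.cross_entropy(logits, targets, reduction='mean')
```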
@heejuneAhn a year ago
If you just assume z is Gaussian, does it really become Gaussian? In principle, the latent vector can have any distribution, so don't we have to add one more constraint (that the latent should follow a multivariate Gaussian with diagonal covariance) to the autoencoder when we train it?
@mrfli24 a year ago
The Gaussian and diagonal-covariance choices are design choices of the model used to approximate p(x). Gaussian distributions are always nice to work with, and the diagonal covariance is used for efficient computation, as the lecture mentioned.
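A minimal sketch (my notation, not the lecture's code) of why the diagonal-covariance Gaussian is convenient: sampling from q(z|x) reduces to the reparameterization trick, and the KL term against the N(0, I) prior has a cheap closed form that factorizes over dimensions.

```python
import torch

def reparameterize(mu, log_var):
    # Sample z ~ N(mu, diag(sigma^2)) as z = mu + sigma * eps with eps ~ N(0, I),
    # keeping the sample differentiable with respect to mu and log_var.
    eps = torch.randn_like(mu)
    return mu + (0.5 * log_var).exp() * eps

def kl_to_standard_normal(mu, log_var):
    # Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) ); it splits into a sum over
    # dimensions precisely because the covariance is diagonal.
    return 0.5 * (log_var.exp() + mu ** 2 - 1.0 - log_var).sum(dim=-1).mean()
```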
@francescos7361 a year ago
Thanks, interesting.
@erniechu3254 2 years ago
26:03 already realized in 2022 haha
@frankiethou7366 7 months ago
How time flies! Sora even makes video generation possible.