Lecture 19: Generative Models I

37,485 views

Michigan Online

21 comments
@antonywill835 28 days ago
Justin is the best lecturer I've ever seen on YouTube! He can always present the most complicated things in a clear and simple way. Thanks!
@matato2932 1 year ago
26:00, "we could ask a model: give me an image of a cat with a purple tail, but i dont think itll work". amazing how within just a few years we have already reached this point where we can synthesize images from arbitrary input.
@syedhasany1809 4 years ago
I am extremely glad that Generative Models were spread over two lectures; excellent lecture as always!
@40yearoldman 1 year ago
This professor is an excellent communicator.
@ZinzinsIA 1 year ago
Extremely thankful for this lecture, finally getting the intuition behind generative models. Very valuable, thanks again, awesome lecture!
@frommarkham424 5 months ago
I gained a better understanding of generative models as soon as I saw the thumbnail, without even watching the video. Thanks!
@AkshayRoyal 3 years ago
My left ear enjoyed this lecture a lot :P
@kolokolo2365 2 years ago
Very nice lecture, as always.
@heejuneAhn 1 year ago
For PixelRNN, why not mention the sampling methods (greedy, stochastic, temperature control, and maybe even beam search)? They are quite related to the current GPT generation methods, right?
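A minimal sketch of what greedy versus temperature-controlled stochastic sampling from an autoregressive model's per-pixel distribution could look like; the 256-way output and the `logits` argument are assumptions for illustration, not something shown in the lecture.

```python
import numpy as np

def sample_next(logits, temperature=1.0, greedy=False):
    """Pick the next pixel value from an autoregressive model's output scores.

    logits: unnormalized scores over the 256 possible pixel values.
    temperature < 1 sharpens the distribution, > 1 flattens it.
    """
    if greedy:
        return int(np.argmax(logits))            # deterministic: most likely value
    scaled = logits / temperature
    scaled = scaled - scaled.max()               # subtract max for numerical stability
    probs = np.exp(scaled)
    probs = probs / probs.sum()                  # softmax with temperature
    return int(np.random.choice(len(logits), p=probs))
```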
@kyungryunlee9626 3 years ago
Can somebody help me with the concept of probability? At 31:44 he talks about how to train a model on a given dataset. It says the goal, for an unsupervised model, is to find the value of W that maximizes the probability of the training data. I am confused by this "probability of the training data". Does it mean the probability that, when a training example x(i) is given, the output is the same x(i), like the cost function of an autoencoder (the square of x_hat - x)? My background knowledge is not good enough to look this up in papers or math textbooks, so please help me!
@DED_Search 3 years ago
For autoregressive models, the way to find the optimal weights W that maximize the probability of the training data is exactly maximum likelihood estimation. The probability of the training data is essentially the product of the probabilities of each training datapoint, p(x_i), which by definition is f(x_i, W). Autoencoders do not explicitly model or estimate the probability density function. Rather, they find optimal weights by minimizing the reconstruction error, that is ||x - x_hat||^2. Different algorithms adopt different methods to find optimal weights. Hopefully this helps shed some light for you.
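A toy sketch contrasting the two objectives described above, assuming a PyTorch-style model with a 256-way softmax per pixel; the function names and tensor shapes are illustrative, not from the lecture.

```python
import torch
import torch.nn.functional as F

# Maximum likelihood for an autoregressive model:
# maximize prod_i p(x_i; W)  <=>  minimize sum_i -log p(x_i; W).
# With a softmax over 256 pixel values per position, this is a cross-entropy loss.
def autoregressive_nll(pixel_logits, pixels):
    # pixel_logits: (batch, num_pixels, 256) scores; pixels: (batch, num_pixels) integer values
    return F.cross_entropy(pixel_logits.reshape(-1, 256), pixels.reshape(-1))

# Autoencoder: no explicit density at all; just penalize reconstruction error.
def reconstruction_loss(x, x_hat):
    return ((x - x_hat) ** 2).mean()             # ||x - x_hat||^2, averaged over the batch
```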
@kyungryunlee9626 3 years ago
@@DED_Search thanks for your help!!
@DED_Search 3 years ago
I am quite confused at 1:08, where q(z|x) is called the posterior of the decoder. But actually, we are using the encoder to estimate q(z|x). So what is the implication of the terminology here? I'd really appreciate it if anyone could shed some light on this.
@heejuneAhn 1 year ago
Can you explain why the PixelRNN model is an explicit pdf model? Can you write down the pdf of the model? What do you mean by "explicit"? To be explicit, the probability should be of the form prob(x1, x2, x3, ..., xn), where xi is the value of each pixel. Can you express it like that? And can you explain how we train the PixelRNN? E.g., the output is a probability over the 0 to 255 values; is an L1 or L2 loss applied with the training images?
@mrfli24 1 year ago
1. Your understanding of "explicit" is correct. The pdf is written as a product of conditional pdfs. At test time, we can sequentially compute the conditional pdfs and multiply them all together. 2. I think the standard way to deal with probability-distribution outputs (with softmax) is the cross-entropy loss. The training paradigm is essentially the same as training a language model.
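A sketch of how that chain-rule factorization could be evaluated numerically, assuming a hypothetical `model(prefix)` that returns a length-256 probability vector for the next pixel given the pixels seen so far; the interface is an assumption for illustration.

```python
import numpy as np

def log_likelihood(model, image):
    """Explicit density: log p(x) = sum_i log p(x_i | x_1, ..., x_{i-1}).

    `model(prefix)` is assumed to return a length-256 probability vector
    (a softmax output) for the next pixel, given the pixels before it.
    """
    pixels = image.flatten()
    total = 0.0
    for i, value in enumerate(pixels):
        probs = model(pixels[:i])                # conditional distribution for pixel i
        total += np.log(probs[int(value)])       # accumulate the chain-rule factor
    return total
```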
@heejuneAhn 1 year ago
If you just assume z is Gaussian, does it really become Gaussian? In principle, and in general, the latent vector can have any distribution. So we have to add one more constraint (that the latent should be a diagonal-covariance multivariate Gaussian) to the autoencoder when we train it.
@mrfli24 1 year ago
These Gaussian and diagonal-covariance choices are design choices of the model used to approximate p(x). Gaussian distributions are always nice to work with, and a diagonal covariance is used for efficient computation, as the lecture mentioned.
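A sketch of how that design choice typically appears in code: the encoder outputs a mean and a per-dimension log-variance (i.e., a diagonal covariance), the KL term pushes q(z|x) toward a standard Gaussian prior, and sampling uses the reparameterization trick. The function and tensor names here are illustrative assumptions, not taken from the lecture.

```python
import torch

def gaussian_kl(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dimensions.

    The diagonal covariance makes this a simple per-dimension sum; a full
    covariance matrix would require determinants and inverses instead.
    """
    return 0.5 * torch.sum(mu ** 2 + logvar.exp() - logvar - 1, dim=-1)

def reparameterize(mu, logvar):
    """Sample z ~ q(z|x) = N(mu, diag(exp(logvar))) while keeping gradients."""
    eps = torch.randn_like(mu)
    return mu + eps * torch.exp(0.5 * logvar)
```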
@francescos7361 1 year ago
Thanks, interesting.
@erniechu3254 2 years ago
26:03 already realized in 2022 haha
@frankiethou7366 7 months ago
How time flies! Sora even makes video generation possible.