This was an amazing mathematical intuition; eagerly waiting for further videos.
@ahteshamabbasi9503 4 years ago
One of the best mathematical explanations out there.
@ankitsingh-xl7bo 8 hours ago
JSD in GANs only comes in with the optimal discriminator, and not in all cases... isn't it?
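For reference, a sketch of the standard result from Goodfellow et al. (2014) behind that point: the JSD interpretation holds only once the discriminator is optimal. With

D^*(x) = \frac{p_{\mathrm{data}}(x)}{p_{\mathrm{data}}(x) + p_g(x)},

the generator's objective reduces to

\max_D V(G, D) = -\log 4 + 2\,\mathrm{JSD}\left(p_{\mathrm{data}} \,\|\, p_g\right),

so minimising over G minimises the JSD only under that assumption.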
@MLDawn 5 years ago
I am a huge fan! First time EVER that I have seen a lecture like this!
@AhladKumar 5 years ago
thanks
@rayll8579 4 years ago
The video deserves more visibility
@sangameshkodge1664 5 years ago
While discussing the KL divergence (5:17), P was unknown and Q was a known distribution. But in the discussion of forward KL (12:16), I see that the Q distribution is said to vary (how can Q stretch if Q is already known; shouldn't it be fixed?). Is Q assumed to be unknown and P assumed to be known in the case of forward KL?
@pramethgaddale8242 4 years ago
P is the distribution of the data provided, which is unknown. Q is an approximation to the posterior, chosen for our convenience. Q is mostly chosen to be Gaussian, but if we know the data-generating process, we can use different priors too (such as beta or Dirichlet; many use Gaussian because it's easy to work with). Coming to Q being allowed to vary: this posterior approximation, forming the ELBO, turns the inference problem into an optimisation problem. Using SGD, you start with some random mean and variance and work your way to the best possible ones.
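A minimal toy sketch of that optimisation view (illustrative only: the target p is taken as a known Gaussian N(2.0, 1.5^2) so the KL has a closed form, whereas in real variational inference the true posterior is intractable and one optimises the ELBO instead):

import torch

mu_p, sigma_p = torch.tensor(2.0), torch.tensor(1.5)  # stand-in target p

# Variational q = N(mu_q, sigma_q^2): random initial mean and (log) std dev
mu_q = torch.randn(1, requires_grad=True)
log_sigma_q = torch.randn(1, requires_grad=True)

opt = torch.optim.SGD([mu_q, log_sigma_q], lr=0.05)
for _ in range(2000):
    sigma_q = log_sigma_q.exp()
    # Closed-form KL(q || p) between two univariate Gaussians
    kl = (torch.log(sigma_p / sigma_q)
          + (sigma_q ** 2 + (mu_q - mu_p) ** 2) / (2 * sigma_p ** 2)
          - 0.5)
    opt.zero_grad()
    kl.sum().backward()
    opt.step()

print(mu_q.item(), log_sigma_q.exp().item())  # approaches (2.0, 1.5)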
@darkmythos4457 5 years ago
Sir, thank you very much for taking the time to make such great lectures.
@marinamaher8211 2 months ago
Magnificent!
@vaishanavshukla5199 3 years ago
Wonderful explanation as always.
@AbhishekSen 5 years ago
You nailed it, brother, everything made sense!!
@sudarshanregmi14 4 years ago
Thank you so much. Please, keep making such great videos.
@iliasaarab7922 4 years ago
Amazing explanation! Thank you sir! 🙌🏽
@delseyjohnson3960 2 years ago
How do I get the complete course content?
@asheerali2376 9 months ago
Perfect explanation.
@newtonleibniz8792 5 days ago
Can a notes PDF be provided?
@kristianmamforte4129 10 months ago
amazing lecture!
@manikantabandla3923 1 year ago
It would have been more informative if you could point out the papers relevant to the lectures.
@chocclolita 4 years ago
Thank you so much!
@appletree6741 3 years ago
excellent!
@ayushthada9544 5 years ago
Sir, this is a great video. Not only was the explanation clear, but I really like your teaching style too. Could you make a video on StackGANs and a full Bayesian implementation of GANs too? That would be a big help, sir.
@AhladKumar 5 years ago
sure
@ayushthada9544 5 years ago
@AhladKumar Thank you so much, sir.
@spencert94 4 years ago
If JS divergence is the only/main reason GANs perform better than VAEs, why wouldn't you just use JS divergence in VAEs? It seems like JS divergence is just a more stable, symmetrical KL divergence, which doesn't seem like it would actually lead to better results in most cases, just be more stable.
@krishanudasbaksi9530 4 years ago
Maybe because JS divergence is not differentiable.
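For reference, the definition behind the "symmetrical KL" intuition, with mixture M = (P + Q)/2:

\mathrm{JSD}(P \,\|\, Q) = \tfrac{1}{2}\,\mathrm{KL}(P \,\|\, M) + \tfrac{1}{2}\,\mathrm{KL}(Q \,\|\, M)

One practical obstacle to using it in a VAE: unlike the Gaussian-to-Gaussian KL term in the ELBO, the mixture M makes this JSD generally intractable in closed form.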
@mostafakattan6971 5 years ago
This is a great job, but can you put subtitles on all your videos?
@vijetakhare8331 5 years ago
Superb👌🏼👌🏼
@satyamdubey4110 8 months ago
💖💖
@angquoctien4640 4 years ago
It would be clearer with subtitles below.
@mimo-wx9mc 4 years ago
Hello sir, I don't understand your English. Can you add subtitles? That would help me a lot. Thank you.