Holy sh*t, this guy is diabolically, criminally, offensively underrated. THE best explanation of GANs I have ever seen, somehow rooting it deeply in the mathematics while keeping it surface level enough to fit in a 12 min video. Wow
@doyoonkim4187 · 1 month ago
It's hard to find this kind of precise, sophisticated material on the internet. I really like this video.
@elliotha6827 · 10 months ago
The hallmark of a good teacher is when they can explain complex topics simply and intuitively. And your presentation on GANs in this video truly marks you as a phenomenal one. Thanks!
@fidaeharchli4590 · 5 months ago
I agreeeeeeeee, you are the best, thank you sooo much
@luisr1421 · 4 years ago
Didn't think in a million years I'd get the math behind GANs. Thank you man
@welcomeaioverlords · 4 years ago
That's great to hear!
@shivammehta007 · 4 years ago
This is Gold!!! Pure Gold!!
@shaoxuanchen2052 · 4 years ago
OMG, that is the best explanation of GANs I have found these days!!!!! Thank you so much, and I'm so lucky to find this video!!!!!!
@spandanpadhi8275 · 2 months ago
This was the best 12 minutes of my month. Great explanation of GANs.
@alaayoussef315 · 4 years ago
Brilliant! Never thought I could understand the math behind GANs.
@Daniel-ed7lt · 5 years ago
I have no idea how I found this video, but it has been very helpful. Thanks a lot and please continue making videos.
@welcomeaioverlords · 5 years ago
That's awesome, glad it helped. I'll definitely be making more videos. If there are any particular ML topics you'd like to see, please let me know!
@Daniel-ed7lt · 5 years ago
@@welcomeaioverlords I'm currently interested in CNNs and I think it would be really useful if you would describe its base architecture, same as you did for GAN, while simultaneously explaining the underlying math from a relevant paper.
@bikrammajhi3020 · 4 months ago
Best mathematical explanation of GANs on the internet so far.
@TheTakenKing999 · 2 years ago
Awesome explanation. The original GAN paper isn't too hard to read, but the "maximize the discriminator" step always irked me. Like... my understanding was correct, but I would always have trouble explaining it to someone else. This is a really well put together video: clean, concise, and a good explanation. I think that because Goodfellow et al. phrased it as "ascending the gradient", many people get stuck here, since beginners like us have gradient "descent" stuck in our heads lol.
@janaosea6020 · 11 months ago
Wow. This video is so well explained and well presented!! The perfect amount of detail and explanation. Thank you so much for demystifying GANs. I wish I could like this video multiple times.
@shashanktomar9940 · 3 years ago
I have lost count of how many times I have paused the video to take notes. You're a lifesaver man!!
@deblu118 · 9 months ago
This video is amazing! You make things intuitive and really dig down to the core idea. Thank you! And I also subscribed to your blog!
@tusharkantirouth5605 · 1 year ago
Simply the best .. short and crisp... thanks and keep uploading such beautiful videos..
@wenhuiwang4439 · 9 months ago
Great learning resource for GAN. Thank you.
@siddhantbashisth5486 · 6 months ago
Awesome explanation man.. I loved it!!
@dipayanbhadra8332 · 8 months ago
Great Explanation! Nice and clean! All the best
@tarunreddy7 · 11 months ago
Lovely explanation.
@gianfrancodemarco8065 · 2 years ago
Short, concise, clear. Perfect!
@EB3103 · 3 years ago
Best explainer of deep learning!
@dingusagar · 4 years ago
Best video explaining the math of GANs. Thanks!!
@やみくも-q6d · 1 year ago
Nice explanation! The argument at 7:13 once felt like a jump to me, but I found it similar to the 'calculus of variations' I learned in classical physics class.
@caiomelo756 · 2 years ago
Four years ago I read the original GAN paper for more than a month and could not understand what I was reading; now it makes sense.
@superaluis · 4 years ago
Thanks for the detailed video.
@williamrich3909 · 4 years ago
Thank you. This was very clear and easy to follow.
@walidb4551 · 4 years ago
THANK GOD I FOUND THIS ONE THANK YOU
@jovanasavic4357 · 3 years ago
This is awesome. Thank you so much!
@architsrivastava8196 · 3 years ago
You're a blessing.
@DavesTechChannel · 4 years ago
Great explanation man, I've read your article on Medium!
@dman8776 · 4 years ago
Best explanation I've seen. Thanks a lot!
@toheebadura · 2 years ago
Many thanks, dude! This is awesome.
@adeebmdislam4593 · 1 year ago
Man, I immediately knew you listen to prog and play guitar when I heard the intro hahaha! Great explanation.
@paichethan · 3 years ago
Fantastic explanation
@muneebhashmi1037 · 3 years ago
tbh, couldn't have asked for a better explanation!
@ishanweerakoon9838 · 2 years ago
Thanks, very clear.
@StickDoesCS · 3 years ago
Really great video! I have a little question, though, since I'm new to this field and a little confused. Why did you mention ascending the gradient to maximize the cost function at 5:02? I'd like to know exactly why this is the case, because I initially thought the cost function generally has to be minimized, so the smaller the cost, the better the model. Maybe it's because of how I'm looking at cost functions in general? Is the cost already understood as something we want to be small, so here we'd simply treat it as the negative of the quantity you're saying we want to maximize? Subscribed by the way, keep up the good work! :>
@welcomeaioverlords · 3 years ago
In most ML, you optimize such that the cost is minimized. In this case, we have two *adversaries* that are working in opposition to one another. One is trying to decrease the cost (discriminator) and one is working to increase the cost (generator).
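The opposing objectives described in this reply can be sketched numerically. Below is a toy illustration (not code from the video): the per-sample GAN value log D(x) + log(1 − D(G(z))), which the discriminator tries to increase and the generator tries to decrease.

```python
import math

def value(d_real, d_fake):
    # Per-sample GAN value: log D(x) + log(1 - D(G(z))).
    # The discriminator ascends this; the generator descends it.
    return math.log(d_real) + math.log(1 - d_fake)

# A better discriminator (high D on real, low D on fake) raises the value...
print(value(0.9, 0.1) > value(0.5, 0.5))  # True
# ...while a better generator (fooling D into a high D(G(z))) lowers it.
print(value(0.9, 0.8) < value(0.9, 0.1))  # True
```

The sign structure is the whole game: the two networks share one scalar objective and pull it in opposite directions.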
@psychotropicalfunk · 2 years ago
Very well explained!
@anilsarode6164 · 4 years ago
God bless you, man !! Great Job !! Excellent !!!
@symnshah · 4 years ago
Such a great explanation.
@kathanvakharia · 3 months ago
nailed it!
@bernardoolisan1010 · 2 years ago
I have a question. At 4:49, where do we take the real samples from? For example, say we want to generate faces: in the generator, the m samples are just random vectors with the dimensions of a face image, so the output can be a super ugly, blurry picture, right? But what about the real samples? Are they just face images taken off the internet?
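For what it's worth, in the standard GAN setup the two minibatches at this step come from different places: the generator's inputs are noise vectors z drawn from a prior, while the real samples are drawn from the training dataset (e.g. a collection of face photos). A minimal sketch with made-up placeholder data (the dataset values here are purely illustrative):

```python
import random

random.seed(0)

# Placeholder "dataset" of real samples (stand-ins for real face images).
real_dataset = [[0.8, 0.9], [0.7, 0.85], [0.9, 0.95], [0.6, 0.8]]

m = 2  # minibatch size
# Generator inputs: m noise vectors z from a prior (here a standard normal).
z_batch = [[random.gauss(0, 1) for _ in range(4)] for _ in range(m)]
# Real samples: m examples drawn from the training dataset itself.
x_batch = random.sample(real_dataset, m)

print(len(z_batch), len(x_batch))  # 2 2
```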
@friedrichwilhelmhufnagel3577 · 1 year ago
CANNOT UPVOTE ENOUGH. EVERY STATISTICS OR ML MATH VIDEO SHOULD BE AS CLEAR AS THIS. YOU DEMONSTRATE THAT EXPLAINING MATH AND THEORY IS ONLY A MATTER OF HAVING AN ABLE TEACHER
@manikantansrinivasan5261 · 1 year ago
thanks a ton for this!
@bernardoolisan1010 · 2 years ago
When the training process is done, do we only use the generator model, or what? How do we use it in production?
@maedehzarvandi3773 · 3 years ago
You helped a lot 👏🏻🙌🏻👍🏻
@jrt6722 · 1 year ago
Would the loss function work the same if I switched the labels of the real and fake samples (0 for the real sample and 1 for the fake sample)?
@ramiismael7502 · 3 years ago
great video
@shourabhpayal1198 · 3 years ago
Good one
@bernardoolisan1010 · 2 years ago
Also, where it says "theory alert", does that mean the section is only for proving that the model is in some sense good? Like, that the min value is a good value?
@Darkev77 · 3 years ago
This was really good! Though could someone explain to me what he means by maximizing the loss function for the discriminator? Shouldn't you also train your discriminator via gradient descent to improve classification accuracy?
@welcomeaioverlords · 3 years ago
To minimize the loss, you use gradient descent. You walk down the hill. To maximize the loss, you use gradient ASCENT. You calculate the same gradient, but walk up the hill. The discriminator walks up, the generator walks down. That’s why it’s adversarial. You could multiply everything by -1 and get the same result.
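The "multiply everything by -1" remark can be checked directly: gradient ascent on a function f and gradient descent on -f take identical steps. A toy one-dimensional example (not from the video):

```python
def num_grad(fn, x, h=1e-6):
    # Central-difference approximation of fn'(x).
    return (fn(x + h) - fn(x - h)) / (2 * h)

f = lambda x: -(x - 3.0) ** 2   # objective to MAXIMIZE (peak at x = 3)
g = lambda x: (x - 3.0) ** 2    # the same objective times -1

x_up = x_down = 0.0
for _ in range(200):
    x_up += 0.1 * num_grad(f, x_up)      # gradient ASCENT on f ("walk up the hill")
    x_down -= 0.1 * num_grad(g, x_down)  # gradient descent on -f
print(x_up == x_down, round(x_up, 3))  # identical iterates, both near 3.0
```

Same gradient computation, opposite step direction; flipping the sign of the objective flips it back.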
@sunnydial1509 · 2 years ago
I am not sure, but in this case I think we maximize the discriminator's objective because it is expressed as log(1 - D(G(z))), and maximizing that is equivalent to minimizing log(D(G(z))), as happens in normal neural networks... so the discriminator is learning by maximizing the loss in this case.
@koen199 · 4 years ago
@7:20 Why are p_data(x) and p_g(x) assumed constant over x in the integral (a and b)? In my mind the probability changes for each sample...
@welcomeaioverlords · 4 years ago
Hi Koen. When I say "at any particular point" I mean "at any particular value of x". So p_data(x) and p_g(x) change with x. Those are, for example, the probabilities of seeing any particular image either in the real or generated data. The analysis that follows is for any particular x, for which p_data and p_g have a single value, here called "a" and "b" respectively. The logical argument is that if you can find the D that maximizes the quantity under the integral for every choice of x, then you have found the D that maximizes the integral itself. For example: imagine you're integrating over two different curves and the first curve is always larger in value than the second. You can safely claim the integral of the first curve is larger than the integral of the second curve. I hope this helps.
@koen199 · 4 years ago
@@welcomeaioverlords Oh wow it makes sense now! Thanks man.. keep up the good work
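The pointwise argument in this thread can also be verified numerically: for fixed a = p_data(x) and b = p_g(x), the quantity a·log(D) + b·log(1 − D) is maximized at D* = a/(a + b), the optimal-discriminator formula. A quick brute-force check with arbitrary example values:

```python
import math

a, b = 0.7, 0.2  # example values of p_data(x) and p_g(x) at one particular x

def pointwise_value(d):
    # The integrand for one fixed x: a*log(D(x)) + b*log(1 - D(x)).
    return a * math.log(d) + b * math.log(1 - d)

# Brute-force search over D in (0, 1) on a fine grid.
best_d = max((i / 1000 for i in range(1, 1000)), key=pointwise_value)
print(round(best_d, 3), round(a / (a + b), 3))  # grid optimum matches a/(a+b)
```

Since this holds for every x separately, the D that is optimal pointwise is optimal for the whole integral, which is the argument in the reply above.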
@123epsilon · 2 years ago
Does anyone know any good resources to learn more ML theory like how it's explained in this video? Specifically, content covering proofs and convergence guarantees.
@goodn1051 · 5 years ago
Thaaaaaaank youuuuuu
@welcomeaioverlords · 5 years ago
I'm glad you got value from this!
@goodn1051 · 5 years ago
@@welcomeaioverlords Yup... when you're self-taught, it's videos like this that really help so much.
@abdulaziztarhuni · 2 years ago
This was hard for me to follow. Where should I get more resources?
@adityarajora7219 · 3 years ago
The cost function isn't the difference between the true and predicted values, right? It's the actual predicted value in the range [0, 1], right??
@welcomeaioverlords · 3 years ago
It's structured as a classification problem where the discriminator estimates the probability of the sample being real or fake, which is then compared against the ground truth of whether the sample is real, or was faked by the generator.
@adityarajora7219 · 3 years ago
@@welcomeaioverlords Thank you sir for your reply. Got it.
@adityarajora7219 · 3 years ago
what do you do for a living?
@jorgecelis8459 · 4 years ago
Very good explanation. One question: if we know the form of the optimal discriminator, don't we only need to get Pg(x), as we have all the statistics of P(x) in advance? And wouldn't that be 'just' sampling from z?
@welcomeaioverlords · 4 years ago
Thanks for the question, Jorge. I would point out that knowing the statistics of P(x) is very different than knowing P(x) itself. For instance, I could tell you the mean (and higher-order moments) of a sample from an arbitrary distribution and that wouldn't be sufficient for you to recreate it. The whole point is to model P(x) (the probability that a particular pixel configuration is of a face), because then we could just sample from it to get new faces. Our real-life sample, which is the training dataset, is obviously a small portion of all possible faces. The generator effectively becomes our sampler of P(x) and the discriminator provides the training signal. I hope this helps.
@jorgecelis8459 · 4 years ago
@@welcomeaioverlords Right... the statistics of P(x) ≠ the distribution P(x). If we knew P(x), we could just generate images and there would be no problem for a GAN to solve. Thanks.
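The "statistics of P(x) ≠ P(x) itself" point in this thread can be made concrete: two distributions can share the same mean and variance yet be very different, so matching a few moments is not enough to recreate a distribution. A small toy illustration (not from the video):

```python
import random
import statistics

random.seed(0)
n = 100_000
root3 = 3 ** 0.5  # Uniform(-sqrt(3), sqrt(3)) has mean 0 and variance 1, like N(0, 1)

normal_sample = [random.gauss(0, 1) for _ in range(n)]
uniform_sample = [random.uniform(-root3, root3) for _ in range(n)]

# Approximately the same first two moments...
print(round(statistics.mean(normal_sample), 1), round(statistics.mean(uniform_sample), 1))
print(round(statistics.pvariance(normal_sample), 1), round(statistics.pvariance(uniform_sample), 1))
# ...but very different tails: Uniform(-sqrt(3), sqrt(3)) puts NO mass beyond 1.8.
print(sum(abs(x) > 1.8 for x in normal_sample), sum(abs(x) > 1.8 for x in uniform_sample))
```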
@saigeeta1993 · 4 years ago
PLEASE EXPLAIN TEXT TO SPEECH SYNTHESIS EXAMPLE USING GAN
@sarrae100 · 5 years ago
What the fuck, u explained it like it's a toy story, u beauty 😍
@theepicguy6575 · 2 years ago
Found a gold mine
@samowarow · 2 years ago
kzbin.info/www/bejne/gGLEeGRombGiaqs How exactly did you do this variable substitution? It doesn't seem legit to me.
@JoesMarineRush · 2 years ago
I also stopped at this step. I think it is valid. Remember that the transformation g is fixed. In the second term, the distributions of z and g(z) are the same, so we can set x = g(z) and replace z with x. Then we can merge the first and second integrals, with the main difference being that the first and second terms have different probabilities for x, since they are sampled from different distributions.
@samowarow · 2 years ago
@@JoesMarineRush It's not in general legit to say that the distributions of Z and g(Z) are the same. Z is a random variable. A non-linear function of Z changes its distribution.
@JoesMarineRush · 2 years ago
@@samowarow I looked at it again the other day. Yes, you are right: g can change the distribution of z. There is a clarification step missing when setting x = g(z) and swapping out z for x; the distribution of x is the one induced by g. There is a link between the distributions of z and g(z) that needs clarification. I'll try to think on it.
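For anyone following this thread: the substitution being debated is the change of variables E_{z~p_z}[f(g(z))] = E_{x~p_g}[f(x)], which holds precisely because p_g is *defined* as the distribution induced by pushing p_z through g (the "law of the unconscious statistician"); g does change the distribution, and p_g is that changed distribution. A numerical check with a toy g whose induced density is known in closed form (an assumption for illustration, not the video's setup):

```python
import math

# Toy setup: z ~ Uniform(0, 1) and g(z) = z**2, so the induced density on (0, 1)
# is p_g(x) = 1 / (2 * sqrt(x)) by the change-of-variables formula.
def f(x):
    return math.log(1 + x)  # any test function (a stand-in for log(1 - D(x)))

n = 200_000
# E_z[f(g(z))]: integrate f(z**2) over z with a midpoint rule.
lhs = sum(f(((i + 0.5) / n) ** 2) for i in range(n)) / n
# E_{x ~ p_g}[f(x)]: integrate f(x) * p_g(x) over x.
rhs = sum(f((i + 0.5) / n) / (2 * math.sqrt((i + 0.5) / n)) for i in range(n)) / n
print(round(lhs, 3), round(rhs, 3))  # the two expectations agree
```

So the step in the paper is valid once the second expectation is read as being over p_g, the generator's output distribution, rather than over p_z.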