Understand the Math and Theory of GANs in ~ 10 minutes

63,445 views

WelcomeAIOverlords

A day ago

Comments: 84
@jlee-mp4 7 months ago
Holy sh*t, this guy is diabolically, criminally, offensively underrated. THE best explanation of GANs I have ever seen, somehow rooting it deeply in the mathematics while keeping it surface level enough to fit in a 12 min video. Wow
@doyoonkim4187 1 month ago
It's hard to find this kind of precise, sophisticated material on the internet. I really like this video.
@elliotha6827 10 months ago
The hallmark of a good teacher is when they can explain complex topics simply and intuitively. And your presentation on GANs in this video truly marks you as a phenomenal one. Thanks!
@fidaeharchli4590 5 months ago
I agreeeeeeeee, you are the best, thank you sooo much
@luisr1421 4 years ago
Didn't think in a million years I'd get the math behind GANs. Thank you man
@welcomeaioverlords 4 years ago
That's great to hear!
@shivammehta007 4 years ago
This is Gold!!! Pure Gold!!
@shaoxuanchen2052 4 years ago
OMG this is the best explanation of GANs I've found these days!!!!! Thank you so much and I'm so lucky to have found this video!!!!!!
@spandanpadhi8275 2 months ago
This was the best 12 minutes of my month. Great explanation of GANs.
@alaayoussef315 4 years ago
Brilliant! Never thought I could understand the math behind GANs
@Daniel-ed7lt 5 years ago
I have no idea how I found this video, but it has been very helpful. Thanks a lot and please continue making videos.
@welcomeaioverlords 5 years ago
That's awesome, glad it helped. I'll definitely be making more videos. If there's any particular ML topics you'd like to see, please let me know!
@Daniel-ed7lt 5 years ago
@@welcomeaioverlords I'm currently interested in CNNs, and I think it would be really useful if you described their base architecture, as you did for GANs, while simultaneously explaining the underlying math from a relevant paper.
@bikrammajhi3020 4 months ago
Best mathematical explanation of GANs on the internet so far
@TheTakenKing999 2 years ago
Awesome explanation. The original GAN paper isn't too hard to read, but the "maximize the discriminator" step always irked me. My understanding was correct, but I would always have trouble explaining it to someone else; this is a really well put together video. Clean, concise, and a good explanation. I think because of the way Goodfellow et al. phrased it, as "ascending the gradient", many people get stuck here, since beginners like us have gradient "descent" stuck in our heads lol.
@janaosea6020 11 months ago
Wow. This video is so well explained and well presented!! The perfect amount of detail and explanation. Thank you so much for demystifying GANs. I wish I could like this video multiple times.
@shashanktomar9940 3 years ago
I have lost count of how many times I have paused the video to take notes. You're a lifesaver man!!
@deblu118 9 months ago
This video is amazing! You make things intuitive and really dig down to the core idea. Thank you! And I also subscribed to your blog!
@tusharkantirouth5605 1 year ago
Simply the best... short and crisp... thanks, and keep uploading such beautiful videos.
@wenhuiwang4439 9 months ago
Great learning resource for GANs. Thank you.
@siddhantbashisth5486 6 months ago
Awesome explanation man.. I loved it!!
@dipayanbhadra8332 8 months ago
Great Explanation! Nice and clean! All the best
@tarunreddy7 11 months ago
Lovely explanation.
@gianfrancodemarco8065 2 years ago
Short, concise, clear. Perfect!
@EB3103 3 years ago
Best explainer of deep learning!
@dingusagar 4 years ago
Best video explaining the math of GANs. Thanks!!
@やみくも-q6d 1 year ago
Nice explanation! The argument at 7:13 felt like a jump to me at first, but I found it similar to the 'calculus of variations' I learned in a classical physics class.
@caiomelo756 2 years ago
Four years ago I spent more than a month reading the original GAN paper and could not understand what I was reading; now it makes sense.
@superaluis 4 years ago
Thanks for the detailed video.
@williamrich3909 4 years ago
Thank you. This was very clear and easy to follow.
@walidb4551 4 years ago
THANK GOD I FOUND THIS ONE THANK YOU
@jovanasavic4357 3 years ago
This is awesome. Thank you so much!
@architsrivastava8196 3 years ago
You're a blessing.
@DavesTechChannel 4 years ago
Great explanation man, I've read your article on Medium!
@dman8776 4 years ago
Best explanation I've seen. Thanks a lot!
@toheebadura 2 years ago
Many thanks, dude! This is awesome.
@adeebmdislam4593 1 year ago
Man, I immediately knew you listen to prog and play guitar when I heard the intro hahaha! Great explanation
@paichethan 3 years ago
Fantastic explanation
@muneebhashmi1037 3 years ago
tbvvh couldn't have asked for a better explanation!
@ishanweerakoon9838 2 years ago
Thanks, very clear
@StickDoesCS 3 years ago
Really great video! I have a little question, however, since I'm new to this field and a little confused. Why is it that at 5:02 you mention ascending the gradient to maximize the cost function? I'd like to know exactly why this is the case, because I initially thought a cost function generally has to be minimized, so the smaller the cost, the better the model. Maybe it's because of how I'm looking at cost functions in general: is the cost by convention something we want to be small, so that the quantity you're maximizing is just its negative? Subscribed by the way, keep up the good work! :>
@welcomeaioverlords 3 years ago
In most ML, you optimize such that the cost is minimized. In this case, we have two *adversaries* working in opposition to one another: the discriminator tries to increase the value function, and the generator tries to decrease it.
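For reference, this is the minimax value function from the original Goodfellow et al. paper that the two players fight over (the discriminator ascends it, the generator descends it):

```latex
\min_G \max_D V(D, G) =
\mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big]
+ \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```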
@psychotropicalfunk 2 years ago
Very well explained!
@anilsarode6164 4 years ago
God bless you, man!! Great job!! Excellent!!!
@symnshah 4 years ago
Such a great explanation.
@kathanvakharia 3 months ago
nailed it!
@bernardoolisan1010 2 years ago
I have a question about 4:49: where do we take the real samples from? For example, say we want to generate faces. The generator starts from m random vectors, so its outputs can be super ugly, blurry pictures, right? But what about the real samples? Are they just face images taken from the internet?
@friedrichwilhelmhufnagel3577 1 year ago
CANNOT UPVOTE ENOUGH. EVERY STATISTICS OR ML MATH VIDEO SHOULD BE AS CLEAR AS THIS. YOU DEMONSTRATE THAT EXPLAINING MATH AND THEORY IS ONLY A MATTER OF AN ABLE TEACHER
@manikantansrinivasan5261 1 year ago
thanks a ton for this!
@bernardoolisan1010 2 years ago
When the training process is done, do we only use the generator model, or what? How do we use it in production?
@maedehzarvandi3773 3 years ago
You helped a lot 👏🏻🙌🏻👍🏻
@jrt6722 1 year ago
Would the loss function work the same if I switched the labels of the real and fake samples (0 for real samples and 1 for fake samples)?
@ramiismael7502 3 years ago
great video
@shourabhpayal1198 3 years ago
Good one
@bernardoolisan1010 2 years ago
Also, where it says "theory alert", does that mean the section is only for proving that the model is good? Like, that the min value is a good value?
@Darkev77 3 years ago
This was really good! Though could someone explain to me what he means by maximizing the loss function for the discriminator? Shouldn't you also train your discriminator via gradient descent to improve classification accuracy?
@welcomeaioverlords 3 years ago
To minimize the loss, you use gradient descent. You walk down the hill. To maximize the loss, you use gradient ASCENT. You calculate the same gradient, but walk up the hill. The discriminator walks up, the generator walks down. That’s why it’s adversarial. You could multiply everything by -1 and get the same result.
@sunnydial1509 2 years ago
I am not sure, but in this case I think we maximize the discriminator's loss function, as it is expressed as log(1-D(G(z))), which is equivalent to minimizing log(D(G(z))) as happens in normal neural networks... so the discriminator is learning by maximizing the loss in this case
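For concreteness, a minimal PyTorch-style sketch of the two walks described in the reply above. `G`, `D`, `latent_dim`, and `data_loader` are hypothetical placeholders, not code from the video:

```python
import torch

# Assumes: G maps latent vectors to samples; D maps samples to a
# probability in (0, 1) that the input is real.
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)

for real in data_loader:
    z = torch.randn(real.size(0), latent_dim)
    fake = G(z)

    # Discriminator: walk UP the value function. Optimizers only
    # minimize, so we descend its negative -- the same uphill walk.
    d_loss = -(torch.log(D(real)) + torch.log(1 - D(fake.detach()))).mean()
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: walk DOWN the same value function (only the second
    # term depends on G).
    g_loss = torch.log(1 - D(G(z))).mean()
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```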
@koen199 4 years ago
@7:20 Why are p_data(x) and p_g(x) assumed constant over x in the integral (a and b)? In my mind the probability changes for each sample...
@welcomeaioverlords 4 years ago
Hi Koen. When I say "at any particular point" I mean "at any particular value of x". So p_data(x) and p_g(x) change with x. Those are, for example, the probabilities of seeing any particular image either in the real or generated data. The analysis that follows is for any particular x, for which p_data and p_g have a single value, here called "a" and "b" respectively. The logical argument is that if you can find the D that maximizes the quantity under the integral for every choice of x, then you have found the D that maximizes the integral itself. For example: imagine you're integrating over two different curves and the first curve is always larger in value than the second. You can safely claim the integral of the first curve is larger than the integral of the second curve. I hope this helps.
@koen199 4 years ago
@@welcomeaioverlords Oh wow it makes sense now! Thanks man.. keep up the good work
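In symbols, the pointwise argument from the reply above: fix a particular x, write a = p_data(x) and b = p_g(x), and maximize over the scalar value y = D(x):

```latex
f(y) = a \log y + b \log(1 - y), \qquad
f'(y) = \frac{a}{y} - \frac{b}{1 - y} = 0
\;\Rightarrow\;
D^*(x) = \frac{a}{a + b} = \frac{p_{\text{data}}(x)}{p_{\text{data}}(x) + p_g(x)}
```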
@123epsilon 2 years ago
Does anyone know any good resources for learning more ML theory like how it's explained in this video? Specifically, content covering proofs and convergence guarantees.
@goodn1051 5 years ago
Thaaaaaaank youuuuuu
@welcomeaioverlords 5 years ago
I'm glad you got value from this!
@goodn1051 5 years ago
@@welcomeaioverlords Yup... when you're self-taught, it's videos like this that really help so much
@abdulaziztarhuni 2 years ago
This was hard for me to follow. Where should I get more resources?
@adityarajora7219 3 years ago
The cost function isn't the difference between the true and predicted values, right? It's based on the actual predicted value in the range [0,1], right??
@welcomeaioverlords 3 years ago
It's structured as a classification problem where the discriminator estimates the probability of the sample being real or fake, which is then compared against the ground truth of whether the sample is real, or was faked by the generator.
@adityarajora7219 3 years ago
@@welcomeaioverlords Thank you sir for your reply, got it.
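A tiny sketch of the classification setup from the reply above, as binary cross-entropy in PyTorch; the discriminator scores below are made-up numbers for illustration:

```python
import torch
import torch.nn as nn

bce = nn.BCELoss()

# Hypothetical discriminator outputs: probability each sample is real.
d_real = torch.tensor([0.9, 0.8])  # scores on two real samples
d_fake = torch.tensor([0.3, 0.1])  # scores on two generated samples

# Ground truth: real samples are labeled 1, generated (faked) samples 0.
d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
print(d_loss)  # lower when D scores real samples high and fakes low
```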
@adityarajora7219 3 years ago
what do you do for a living?
@jorgecelis8459 4 years ago
Very good explanation. One question: if we know the form of the optimal discriminator, don't we only need to get p_g(x), since we have all the statistics of P(x) in advance? And that would be 'just' sampling from z?
@welcomeaioverlords 4 years ago
Thanks for the question, Jorge. I would point out that knowing the statistics of P(x) is very different than knowing P(x) itself. For instance, I could tell you the mean (and higher-order moments) of a sample from an arbitrary distribution and that wouldn't be sufficient for you to recreate it. The whole point is to model P(x) (the probability that a particular pixel configuration is of a face) , because then we could just sample from it to get new faces. Our real-life sample, which is the training dataset, is obviously a small portion of all possible faces. The generator effectively becomes our sampler of P(x) and the discriminator provides the training signal. I hope this helps.
@jorgecelis8459 4 years ago
@@welcomeaioverlords Right... the statistics of P(x) ≠ the distribution P(x); if we knew P(x) we could just generate images, and there would be no problem for a GAN to solve. Thanks.
@saigeeta1993 4 years ago
PLEASE EXPLAIN A TEXT-TO-SPEECH SYNTHESIS EXAMPLE USING A GAN
@sarrae100 5 years ago
What the fuck, u explained it like it's a toy story, u beauty 😍
@theepicguy6575 2 years ago
Found a gold mine
@samowarow 2 years ago
kzbin.info/www/bejne/gGLEeGRombGiaqs How exactly did you do this variable substitution? It seems not legit to me.
@JoesMarineRush 2 years ago
I also stopped at this step. I think it is valid. Remember that the transformation g is fixed. In the second term, the distributions of z and g(z) are the same, so we can set x = g(z) and replace z with x. Then we can merge the first and second integrals, with the main difference being that the two terms use different probabilities for x, since they are sampled from different distributions.
@samowarow 2 years ago
@@JoesMarineRush It's not in general legit to say that the distributions of Z and g(Z) are the same. Z is a random variable. A non-linear function of Z changes its distribution.
@JoesMarineRush 2 years ago
@@samowarow I looked at it again the other day. Yes, you are right: g can change the distribution of z. There is a clarification step missing when setting x = g(z) and swapping z out for x. The distribution of x is the one induced by g. The link between the distributions of z and g(z) needs clarification. I'll try to think on it.
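For what it's worth, the standard way to make that step precise is the law of the unconscious statistician. Let p_g be the distribution of x = G(z) when z ~ p_z (the pushforward of p_z through G); then, without ever claiming z and G(z) are identically distributed:

```latex
\mathbb{E}_{z \sim p_z}\big[\log(1 - D(G(z)))\big]
= \int p_z(z)\,\log\big(1 - D(G(z))\big)\,dz
= \int p_g(x)\,\log\big(1 - D(x)\big)\,dx
= \mathbb{E}_{x \sim p_g}\big[\log(1 - D(x))\big]
```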
@kelixoderamirez 3 years ago
permission to learn sir
@welcomeaioverlords 3 years ago
Permission granted.
Simple Explanation of AutoEncoders
10:31
WelcomeAIOverlords
105K views
The Math Behind Generative Adversarial Networks Clearly Explained!
17:04
Diffusion Models | Paper Explanation | Math Explained
33:27
Outlier
248K views
247 - Conditional GANs and their applications
39:51
DigitalSreeni
44K views
Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI
31:25
Preserve Knowledge
152K views
Building our first simple GAN
24:24
Aladdin Persson
113K views
Gentle Intro to Generative Adversarial Networks - Part 1 (GANs)
10:19
WelcomeAIOverlords
14K views
Terence Tao at IMO 2024: AI and Mathematics
57:24
AIMO Prize
387K views
126 - Generative Adversarial Networks (GAN) using keras in python
33:34
The Boundary of Computation
12:59
Mutual Information
1M views
Generative Adversarial Networks (GANs) - Computerphile
21:21
Computerphile
647K views
[Classic] Generative Adversarial Networks (Paper Explained)
37:04
Yannic Kilcher
63K views