Deep Learning 28: (2) Generative Adversarial Network (GAN): Loss Derivation from Scratch

Views: 46,274

Ahlad Kumar

1 day ago

This lecture derives the loss function of the Generative Adversarial Network (GAN) from scratch.
#adversarial #generative #deeplearning

Comments: 102
@MLDawn 4 years ago
Please indulge me: the two people who disliked this video, what were they expecting to see that they could not find here? I genuinely cannot understand what they disliked about this lecture. This is just gold!
@abdelghanikadouri1626 4 years ago
I can't believe I only just found this wonderful tutorial. I never thought I'd understand GANs this easily. Thank you so much!
@MLDawn 5 years ago
You are killing me here! This is too amazing a lecture to be true! Well done a million times!
@AhladKumar 5 years ago
Thanks for the feedback... watch out for more!
@shahriarshayesteh8602 3 years ago
Fantastic explanation! We need such detailed and in-depth tutorials. I hope you can go over transformers and BERT in the same detail. Thanks!
@iliasaarab7922 4 years ago
This channel deserves a million+ subs! Amazing explanation!
@hamzaameer2213 4 years ago
Thanks, sir. I wasted four days reading research papers; you have done it in just four lectures. You are pure gold.
@soumambanerjee1816 3 years ago
Ian Goodfellow will be shocked to see such a great lecture. Thank you for choosing to be a professor. ❤️
@NR_Tutorials 4 years ago
Sir, you could take my life for this lecture... what a way to teach! Great, sir, salute!
@learner3539 2 years ago
Thanks for your lecture. Your lectures make all these complicated topics much easier. You are doing great work.
@jyotideshwal337 4 years ago
There are not enough words to thank you for all that you do! I am lucky enough to call you my teacher.
@DoctorPataMedicast 8 months ago
Hmmm, this is one of the best I have seen. You are a born teacher.
@ziauldba 2 years ago
Seriously, it feels like I'm in a classroom. Thanks, professor. God bless you.
@shineshine9599 5 years ago
Your videos literally saved me while working on GANs. Can't thank you enough.
@gioxc88 4 years ago
Words are not enough to thank you for this!!!
@BJ-gj2mv 4 years ago
These are the best lectures I've seen on deep learning. Great work, thank you. Keep it up.
@Afshaanjum-ge7dt 8 months ago
Bravo... your efforts are appreciated.
@ZERU326 2 years ago
At 31:18, when we are training the generator, the fake data label is set to 1, so we directly get L = -log(D(G(z))) as the generator's loss function.
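A minimal PyTorch-style sketch of that non-saturating generator loss (G, D, and the dimensions here are assumptions for illustration, not code from the lecture):

    import torch
    import torch.nn.functional as F

    # Sketch: assumes G and D are nn.Modules and D outputs a probability in (0, 1).
    def generator_loss(G, D, batch_size, z_dim):
        z = torch.randn(batch_size, z_dim)   # z drawn from the prior
        p_fake = D(G(z))                     # D(G(z))
        # Labeling the fakes as 1 ("real") collapses BCE to -log(D(G(z))).
        return F.binary_cross_entropy(p_fake, torch.ones_like(p_fake))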
@surayuthpintawong8332 3 years ago
Thanks for sparing your time to teach us.
@submagr 4 years ago
Thank you, Sir. I got a much better understanding of the loss function now!
@arjunmajumdar6874 1 year ago
The binary cross-entropy loss is -y·log(yhat) - (1-y)·log(1-yhat). In your derivation this isn't written explicitly. Is it because you have modified the BCE loss?
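For reference, the standard BCE loss with its minus signs, and the two label cases the GAN derivation uses:

    \mathcal{L}(y, \hat{y}) = -\,y \log \hat{y} - (1-y)\log(1-\hat{y})

    y = 1,\ \hat{y} = D(x):\qquad \mathcal{L} = -\log D(x)
    y = 0,\ \hat{y} = D(G(z)):\quad \mathcal{L} = -\log\bigl(1 - D(G(z))\bigr)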
@syeddanish7055 1 year ago
Very nicely explained... was able to learn all in one go. Thanks a lot, sir!!!
@danielmathew5008 4 years ago
These videos are probably the best explanation of these machine learning concepts, but I have one problem. I don't want to complain about ads, since I know you work hard to put these out and have to earn, but please try to put all the ads at the end, not two or three in the middle of the video, since that disrupts the flow of thought. Just a suggestion. Thanks for your work.
@mozhganrahmatinia1656 3 years ago
Your description is perfect. Thank you.
@minhajansari8272 5 years ago
Wow! Thank you very much. I slept through all my lectures and tomorrow is my exam (which is going to be tough)
@nikhilkumawat1797 1 year ago
Very well explained. Amazing.
@praslisa 4 years ago
I am watching all the ads on these videos... this person deserves $$$$... come up with a Udemy course, you will do great :)
@haztec. 4 years ago
I don't think that gets them any more money, though.
@girishmishra 4 years ago
Awesome pick... no other lecture is needed. Thanks a lot.
@danielenrique7184 5 years ago
Thank you so much for the exceptional explanation! :)
@zeinabawad9317 1 year ago
Thanks a lot, the best explanation ever for GANs. Thanks again!
@madhuvarun2790 3 years ago
Your videos are a gold mine. Thank you so much sir.
@gauravsahani2499 4 years ago
Really interesting video, sir! Thank you for this playlist!
@anubhavgupta4917 3 years ago
Sir, I am from India. Your lecture is a diamond, i.e. the Koh-i-Noor. It deserves millions of likes and I am shocked there are only 445??
@zeppelinpage861 1 year ago
Because Justin Bieber is more important to us than GANs.
@tarmiziizzuddin337 5 years ago
Your videos are gems sir, thank you for the effort!
@MustafaCoban-hm2ov 5 years ago
Around 2:40 you say that "z is sampled from a random distribution and we cannot say it is Gaussian or some other distribution". I don't think this is correct. First of all, what is a "random distribution"? In GAN implementations, researchers choose a prior distribution, such as the normal distribution, and then sample the z vector from it. This way the generator learns the mapping from the normal distribution to the training data distribution. So what is random is the z vector, not the distribution itself (see the sketch after this thread).
@shineshine9599 5 years ago
I agree.
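A tiny sketch of the point made above, assuming the common choice of a standard normal prior (the dimensions are illustrative):

    import torch

    # The prior is fixed by the practitioner; only the draw z is random.
    batch_size, z_dim = 64, 100
    z = torch.randn(batch_size, z_dim)  # z ~ N(0, I)
    # fake = G(z)  # G learns to map this fixed prior onto the data distribution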
@amitkumarshrivastava7437 5 years ago
What is z? Is it some kind of image or something else? I understand it's something sampled from a distribution, but is it images? Also, what kind of noise are we actually adding? It may be some kind of vector, but is it images? I understand that the discriminator will see real images like shoes, clothes, etc.; in the same context I am asking about z.
@go64bit 5 years ago
This explanation is pure art. Brilliant!
@AhladKumar 5 years ago
Thanks.
@soumyasarkar4100 3 years ago
Awesome lectures... these lectures are so good that I want them to keep going for hours. I was wondering if the minimization objective is ill-defined, because it can go arbitrarily low towards negative infinity.
@Vishal-mf2db 3 years ago
Thank you so much for this video. Helped me a lot.
@youtubecommenter5122 3 years ago
Bloody good explanation!
@SW-ud1wt 1 year ago
Dear sir, very good elaboration. I need to ask a question: at 4:44 you said the weights need to be adjusted after calculating the error, and then we take new samples from the distribution of z. Why should we not work with the previous random points from z? Why do we take new points, if we only want the weights to be updated? Please guide. Thanks.
@digvijaymahamuni7722 1 year ago
Very impressive lecture!
@shaurovdas5842 3 years ago
Isn't there supposed to be a negative sign in the binary cross entropy formula?
@sangeethbalakrishnan9177 4 years ago
Nice lectures, but the equation for binary cross-entropy has a negative sign, I think.
@ambujmittal6824 4 years ago
Yes, it should be negative, and the loss is always to be minimized (not maximized). The log graphs would also be drawn reflected. The instructor has taught the concept completely wrong there. (That's my reason for disliking the video, since this concept is very trivial in ML.)
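Note, though, that the two conventions differ only by a sign, so they give the same optimum: maximizing the log terms (as in the lecture) is equivalent to minimizing the BCE written with its leading minus:

    \max_{D}\; \mathbb{E}_x[\log D(x)] + \mathbb{E}_z[\log(1 - D(G(z)))]
    \;\Longleftrightarrow\;
    \min_{D}\; -\mathbb{E}_x[\log D(x)] - \mathbb{E}_z[\log(1 - D(G(z)))]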
@zainabkhan5859 2 years ago
This is such a classy explanation. Thank you so much!
@RaviKumar-yu8xf 4 years ago
Simply awesome! Thank you very much, sir!
@dkmoni17 4 years ago
Wow. Just one question: at slide 08:38, it looks like you are training the generator to correct its errors during training. In Part 1 you said that to train a generator model we should set the label to 1 to fool the discriminator. How come you show y = 0 in this video for training the generator? If the discriminator is already trained on fake and real samples, then to fool it the generator should use label = 1, otherwise it will not be fooled. Please share your comments. Can someone reply to this, please?
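A sketch of the convention the question is about (PyTorch assumed; the helper names are illustrative, not from the lecture): the discriminator update labels fakes with y = 0, while the generator update labels the same fakes with y = 1.

    import torch
    import torch.nn.functional as F

    def discriminator_step(D, real, fake):
        p_real = D(real)
        p_fake = D(fake.detach())  # detach: no gradient flows into G here
        # Real samples get label 1, fake samples label 0 (the y = 0 case).
        return (F.binary_cross_entropy(p_real, torch.ones_like(p_real)) +
                F.binary_cross_entropy(p_fake, torch.zeros_like(p_fake)))

    def generator_step(D, fake):
        p_fake = D(fake)
        # The same fakes get label 1 here: G wants D to call them real.
        return F.binary_cross_entropy(p_fake, torch.ones_like(p_fake))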
@nisaratutube 5 years ago
Thanks, sir, for this great stuff. One request: if you could put some reference links in the video descriptions, that would be very helpful.
@armankamal3694 4 years ago
Very, very good explanation. Thanks for the lecture, it helped me a lot.
@rajeshraman1980 5 years ago
Awesome, the best on GANs. Thanks a lot!
@aeigreen 1 year ago
Great explanation.
@StrawHatLuffy750 5 years ago
Thank you so much :) Please keep explaining things :)
@ankurgupta2806 4 years ago
Sir, at 0:50 I think it should be "to fool the discriminator".
@adminai9450 5 years ago
Awesome explanation, sir. Thank you!
@vasukapoor6423 4 years ago
Sir, on the first slide it should say "fool the discriminator" in place of "fool the generator".
@priyansukushwaha5195 5 months ago
Hey everyone, please note that sir made a mistake at 28:00, which he clarified in the next video: the min over G and max over D also apply to the expectation terms on the RHS of the equation.
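For reference, the corrected form is the minimax objective from the original GAN paper:

    \min_G \max_D V(D, G) =
        \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)] +
        \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]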
@sharmaannapurna 4 years ago
Hi, very nice explanation. Thank you for sharing this. I would like you to check the first slide and the definition of the generator; I think the last word is a typo.
@zahrakhalid4731 1 year ago
Please make a video on StyleGAN.
@microcosmos9654 4 years ago
Thank you so much for the lectures, they help me a lot!
@raghavsharma2658 4 years ago
Thank you again, you made this tough subject very easy...
@ankurbhatia24 4 years ago
The cross-entropy loss function used here does not have a negative sign. The actual cross-entropy loss is −(y log(p) + (1−y) log(1−p)). So everything we maximized or minimized would be reversed, right? Please correct me if I am wrong.
@karimmache4018 3 years ago
Thank you for this wonderful tutorial. Is there any tutorial about autoencoders and variational autoencoders?
@PavanKumar-yk5mq 1 year ago
Excellent.
@BiranchiNarayanNayak 4 years ago
Thanks. Very well explained.
@prashant_canada 3 years ago
I actually started following you when I saw your first deep learning video series, where you discussed Kaggle, Google Colab, setting up an environment, and getting ready to dive into data science. I found it very informative. But TensorFlow 1.0 is completely out and we have TensorFlow 2.0 instead, so many syntax elements and concepts such as sessions, placeholders, and summaries no longer apply. How can I fill this gap? I had to stop following the rest of the series. Is there any other way, or do you have another video series for TensorFlow 2.0? Please update, sir.
@ammarkhan2611 3 years ago
At 0:57, isn't the idea of the generator to fool the discriminator?
@홍성-w4g 3 years ago
What software are you using for this teaching?
@madhavpr 4 years ago
Hi Ahlad, awesome tutorial. I am wondering if we can interchange max and min in the expression for the loss function of a GAN. More precisely, if L is the loss function, does min max L = max min L, where the minimizer and the maximizer are with respect to G and D respectively?
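In general the order cannot be swapped freely; the max-min inequality only guarantees one direction, with equality only under saddle-point (e.g. convex-concave) conditions:

    \max_{D}\,\min_{G}\, L(G, D) \;\le\; \min_{G}\,\max_{D}\, L(G, D)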
@adnanhashem98 5 months ago
I think that at 0:48 you meant to write and say that the role of the generator is to "...fool the *discriminator*", not the generator.
@hamidrezanabati 5 years ago
Thanks for the great explanation!
@aakanksha7877 3 years ago
Thanks a lot for this.
@pasinduekanayaka8023 2 years ago
Hi, first of all, thanks for the explanation, but I have some questions regarding the training cycle you explained. In the explanation it seems like the discriminator is already trained to a maximum level before the generator starts to train. Isn't that a problem? In GAN research many people have said that if the discriminator is stronger than the generator, the model will not perform well. That is somewhat logical, because the main objective of a GAN is a min-max game between the generator and the discriminator; if the discriminator is stronger than the generator, it's not a fair game at all. Can you please give an explanation of this? Thank you.
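In practice the two networks are usually trained in alternation rather than the discriminator being trained to optimality first. Here is a sketch in the spirit of Algorithm 1 of the original GAN paper, assuming the discriminator_step/generator_step helpers sketched in an earlier comment, plus illustrative names G, D, opt_D, opt_G, data_loader, and z_dim:

    import torch

    k = 1  # discriminator updates per generator update; k = 1 is common
    for real in data_loader:
        for _ in range(k):
            fake = G(torch.randn(real.size(0), z_dim))
            opt_D.zero_grad()
            discriminator_step(D, real, fake).backward()
            opt_D.step()
        opt_G.zero_grad()
        generator_step(D, G(torch.randn(real.size(0), z_dim))).backward()
        opt_G.step()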
@douglasamoo-sargon5049 5 years ago
Awesome explanation.
@venkatbabu186 3 years ago
A feeder loop string instruments are the modern hardware artificial intelligence for both machine learning and clustered deep learning reduction. Modulators demodulators. Weighed proportional clustering of deep learning. Advisory is error rectification and retuning. The higher the sensory perception the more new kind and a bit slower. Sometimes much faster because of new methods. That's why AI is able to remember almost the entire world and extras. Even gone with the wind. Color sensory says it is red. Smell sensory says it is attractive. Shape sensory says it is conical. So red rose.
@unchaineddreameralpa 4 years ago
Excellent tutorial
@sgrimm7346 11 months ago
I believe he meant to say the generator's role is to create data so that it can fool the "discriminator". Just pointing it out. Good video, however.
@bosepukur 4 years ago
If you minimize over G(z), doesn't that make the cost function ill-defined, because it can go to an arbitrarily low value?
@prelimsiscoming 4 years ago
Can we have the lecture slides?
@vikramnimma 4 years ago
At 0:50 you said the objective of the generator is to create data so that it can fool the generator, but in your previous lecture you said the objective is to fool the discriminator. Correct me if I am wrong.
@priyabratdash8964 4 years ago
This might have been a mistake; the discriminator is what needs to be fooled.
@ogsconnect1312 4 years ago
Thanks, Excellent!
@arabnaouel289 3 years ago
Thank you very much for this amazing series. Can you please enable subtitles in other languages?
@nirajpudasaini4450 9 months ago
Legend.
@miranbaban9554 11 months ago
Dear Ahlad, it should be "discriminator" on your first slide; the generator fools the discriminator, not the generator.
@abhishekprasad7030 5 years ago
I wanted to know how many videos are left to complete this DL series.
@AhladKumar 5 years ago
Four more left.
@makting009 4 years ago
Why does the fake dataset become circular?
@alibahari4217 4 years ago
Totally unique, but there are a lot of disturbing ads.
@AhladKumar 4 years ago
You can get YouTube Premium to avoid them.
@rachittsharmaaa 4 years ago
@@AhladKumar 😂😂😂
@NitishRaj 2 years ago
At 1:06 it should be "fool the discriminator". Kindly correct it.
@aadityasingh4911 4 years ago
Very confusing... you say one thing and then contradict it five seconds later.
@dddd-rf1xy 1 year ago
Please enable translated subtitles.
@VR-fh4im 1 year ago
Generator fools the Discriminator.
@MrSushantsingh 5 years ago
Even Ian Goodfellow would not be able to explain it that well. Anyway, I heard he's pretty arrogant.
@NR_Tutorials 4 years ago
I watched Ian Goodfellow's lecture too... it's not better than this.
@NandakishanRajagiri 6 months ago
Very interesting but boring...
A Friendly Introduction to Generative Adversarial Networks (GANs)
21:01
Serrano.Academy
Views: 253K
Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI
31:25
Preserve Knowledge
Views: 151K
Generative Adversarial Networks (GANs) - Computerphile
21:21
Computerphile
Views: 647K
[Classic] Generative Adversarial Networks (Paper Explained)
37:04
Yannic Kilcher
Views: 63K
Variational Autoencoders
15:05
Arxiv Insights
Views: 499K
247 - Conditional GANs and their applications
39:51
DigitalSreeni
Views: 44K