Deep Learning 28: (2) Generative Adversarial Network (GAN): Loss Derivation from Scratch

  47,302 views

Ahlad Kumar

Days ago

Comments: 104
@MLDawn 4 years ago
Please indulge me: the two people who disliked this video, what were they expecting to see that they could not find here? I genuinely cannot understand what they disliked about this lecture. This is just gold!
@abdelghanikadouri1626 4 years ago
I can't believe I only just found this wonderful tutorial. I never thought I'd understand GANs this easily. Thank you so much!
@MLDawn 5 years ago
You are killing me here! This lecture is too amazing to be true! Well done, a million times over.
@AhladKumar 5 years ago
Thanks for the feedback... watch out for more.
@muhammadyaqoob9129 7 days ago
I have watched your complete playlist on brain MRI tumor detection. Those videos were also very good. Love you from the depth of my heart.
@NR_Tutorials 5 years ago
Sir, if you want to take my life for this lecture, take it... what a lesson you have taught! Great, sir, salute!
@shahriarshayesteh8602 3 years ago
Fantastic explanation! We need such detailed and in-depth tutorials. I hope you can go over transformers and BERT in such detail. Thanks!
@ziauldba 2 years ago
Seriously, it feels like I'm in a classroom. Thanks, professor. God bless you.
@syeddanish7055 1 year ago
Very nicely explained... I was able to learn it all in one go. Thanks a lot, sir!
@hamzaameer2213 5 years ago
Thanks, sir. I wasted 4 days reading research papers; you have done it in just 4 lectures. You are pure gold.
@iliasaarab7922 4 years ago
This channel deserves a million+ subs! Amazing explanation!
@learner3539 2 years ago
Thanks for your lecture. Your lectures make all the complicated topics much easier. You are doing great work.
@soumambanerjee1816 3 years ago
Ian Goodfellow will be shocked to see such a great lecture. Thank you for choosing to be a professor. ❤️
@jyotideshwal337 4 years ago
There are not enough words to thank you for all that you do! I am lucky enough to call you my teacher.
@surayuthpintawong8332 3 years ago
Thanks for sparing your time to teach us.
@DoctorPataMedicast 10 months ago
Hmmm, this is one of the best I have seen. You are a born teacher.
@gioxc88 4 years ago
Words are not enough to thank you for this!!!
@mozhganrahmatinia1656 3 years ago
Your description is perfect. Thank you.
@nikhilkumawat1797 1 year ago
Very well explained. Amazing!
@zeinabawad9317 1 year ago
Thanks a lot, the best explanation of GANs ever. Thanks again!
@youtubecommenter5122 4 years ago
Bloody good explanation!
@Afshaanjum-ge7dt 9 months ago
Bravo... your efforts are appreciated.
@shaurovdas5842 3 years ago
Isn't there supposed to be a negative sign in the binary cross-entropy formula?
@BJ-gj2mv 4 years ago
These are the best lectures I have seen on deep learning. Great work. Thank you. Keep it up.
@madhuvarun2790 3 years ago
Your videos are a gold mine. Thank you so much sir.
@zainabkhan5859 3 years ago
This is such a classy explanation. Thank you so much!
@shineshine9599 5 years ago
Your videos literally saved me while working on GANs. Can't thank you enough.
@submagr 5 years ago
Thank you, Sir. I got a much better understanding of the loss function now!
@Vishal-mf2db 3 years ago
Thank you so much for this video. Helped me a lot.
@girishmishra 4 years ago
Awesome pick... no other lecture is needed. Thanks a lot!
@gauravsahani2499 4 years ago
Really interesting video, sir! Thank you for this playlist!
@ZERU326 2 years ago
At 31:18, when we are training the generator, the fake data's label is 1, so we directly get L = -log(D(G(z))) as the generator's loss function.
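For anyone following this step in code, here is a minimal sketch of that point, assuming PyTorch and treating `d_fake` as a hypothetical batch of discriminator outputs D(G(z)): with the fake labels set to 1, binary cross-entropy reduces to exactly L = -log(D(G(z))).

```python
import torch
import torch.nn.functional as F

# Hypothetical discriminator outputs D(G(z)) for a batch of generated
# samples, assumed to come from a final sigmoid, so each lies in (0, 1).
d_fake = torch.tensor([0.3, 0.7, 0.9])

# Generator update: label the fake batch as real (y = 1).
# With y = 1, BCE(y, p) = -[1*log(p) + 0*log(1-p)] = -log(p).
ones = torch.ones_like(d_fake)
g_loss_bce = F.binary_cross_entropy(d_fake, ones)

# The same quantity computed directly as -log(D(G(z))), averaged.
g_loss_direct = -torch.log(d_fake).mean()

print(g_loss_bce.item(), g_loss_direct.item())  # equal up to float error
```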
@go64bit 5 years ago
This explanation is pure art. Brilliant!
@AhladKumar 5 years ago
Thanks!
@minhajansari8272 5 years ago
Wow! Thank you very much. I slept through all my lectures, and tomorrow is my exam (which is going to be tough).
@anubhavgupta4917 3 years ago
Sir, I am from India. Your lecture is a diamond, i.e. the Kohinoor. It deserves millions of likes, and I am shocked there are only 445??
@zeppelinpage861 1 year ago
Because Justin Bieber is more important to us than GANs.
@RaviKumar-yu8xf 4 years ago
Simply awesome! Thank you very much, sir!
@aeigreen 1 year ago
Great explanation.
@danielenrique7184 5 years ago
Thank you so much for the exceptional explanation! :)
@armankamal3694 4 years ago
Very, very good explanation. Thanks for the lecture; it helped me a lot.
@digvijaymahamuni7722 1 year ago
Very impressive lecture!
@tarmiziizzuddin337 5 years ago
Your videos are gems sir, thank you for the effort!
@rajeshraman1980 5 years ago
Awesome. The best on GANs. Thanks a lot!
@adminai9450 5 years ago
Awesome explanation, sir. Thank you!
@SW-ud1wt 1 year ago
Dear sir, very good elaboration. I need to ask a question: at 4:44 you said the weights need to be adjusted after calculating the error, and then we take new samples from the distribution of z. Why should we not keep working with the previous random points from z? Why do we take new points, if all we want is for the weights to be updated? Please guide. Thanks.
@karimmache4018 3 years ago
Thank you for this wonderful tutorial. Is there any tutorial about autoencoders and variational autoencoders?
@praslisa 4 years ago
I am watching all the ads on these videos... this person deserves $$$$... come up with a Udemy course, you will do great :)
@haztec. 4 years ago
I don't think that gets them any more money, though.
@soumyasarkar4100 3 years ago
Awesome lectures... these lectures are so good that I want them to keep going for hours. I was wondering if the minimization objective is ill-defined, because it can go arbitrarily low towards negative infinity.
@arjunmajumdar6874 2 years ago
The binary cross-entropy loss is -y·log(ŷ) - (1-y)·log(1-ŷ). In your derivation, this isn't written explicitly. Is it because you have modified the BCE loss?
@BiranchiNarayanNayak 4 years ago
Thanks. Very well explained.
@ankurbhatia24 4 years ago
The cross-entropy loss function used here does not have a negative sign. The actual cross-entropy loss function is −(y·log(p) + (1−y)·log(1−p)). So everywhere we maximized or minimized, the direction would be reversed, right? Please correct me if I am wrong.
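For reference on the sign convention the last few comments are asking about: the textbook binary cross-entropy carries a leading minus and is minimized; dropping the minus turns the problem into an equivalent maximization, so the optimum is the same and only the stated direction flips.

```latex
\text{BCE:}\quad
L(y,\hat{y}) = -\bigl[\, y\log\hat{y} + (1-y)\log(1-\hat{y}) \,\bigr],
\qquad
\min_{\theta}\; L(y,\hat{y})
\;\equiv\;
\max_{\theta}\; \bigl[\, y\log\hat{y} + (1-y)\log(1-\hat{y}) \,\bigr].
```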
@prashant_canada 3 years ago
I actually started following you when I saw your first deep learning video series, where you discussed Kaggle, Google Colab, and setting up an environment to dive into data science. I found it very informative. But since TensorFlow 1.0 is completely out and we have TensorFlow 2.0 instead, many syntax elements and concepts such as sessions, placeholders, and summaries no longer apply, so I had to stop following the rest of the series. How can I fill this gap? Is there any other way, or do you have another video series for TensorFlow 2.0? Please update, sir.
@unchaineddreameralpa 4 years ago
Excellent tutorial
@raghavsharma2658 5 years ago
Thank you again; you made this tough subject very easy...
@bosepukur 4 years ago
If you minimize G(z), doesn't it make the cost function ill-defined, because it can go to an arbitrarily low value?
@홍성-w4g 3 years ago
What software are you using for this teaching?
@douglasamoo-sargon5049 5 years ago
Awesome explanation.
@microcosmos9654 4 years ago
Thank you so much for the lectures, they help me a lot!
@PavanKumar-yk5mq 1 year ago
Excellent!
@aakanksha7877 3 years ago
Thanks a lot for this.
@StrawHatLuffy750 5 years ago
Thank you so much :) Please keep explaining things :)
@madhavpr 4 years ago
Hi Ahlad, awesome tutorial. I am wondering if we can interchange max and min in the expression for the loss function of a GAN. More precisely, if L is the loss function, does min max L = max min L, where the minimizer and the maximizer are with respect to G and D respectively?
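One general fact relevant to this question, offered as background rather than as the lecture's claim: swapping the order of min and max always gives an inequality (weak duality), and equality requires extra conditions, such as the convexity/concavity assumptions of the minimax theorem.

```latex
\max_{D}\,\min_{G}\, L(D,G) \;\le\; \min_{G}\,\max_{D}\, L(D,G)
```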
@zahrakhalid4731 2 years ago
Please make a video on StyleGAN.
@sharmaannapurna 4 years ago
Hi, very nice explanation. Thank you for sharing this. I would like you to check the first slide and the definition of the generator; I think the last word is a typo.
@ammarkhan2611 3 years ago
At 0:57, isn't the idea of the generator to fool the discriminator?
@hamidrezanabati 5 years ago
Thanks for the great explanation!
@sangeethbalakrishnan9177 4 years ago
Nice lectures, but the equation for binary cross-entropy has a negative sign, I think.
@ambujmittal6824 4 years ago
Yes, it should be negative, and the loss is always to be minimized (not maximized). The log graphs will also be drawn reflected. The instructor has taught the concept completely wrong there. (That is my reason for disliking the video, since this concept is very basic in ML.)
@nisaratutube 5 years ago
Thanks, sir, for this great stuff. One request: if you could put some reference links in the descriptions of the videos, that would be very helpful.
@danielmathew5008 4 years ago
These videos are probably the best explanation of these machine learning concepts, but I have one problem. I don't want to complain about ads, since I know you work hard to put these videos out and have to earn, but please try to put all the ads at the end and not 2 or 3 in the middle of the video, since that disrupts the flow of thought. Just a suggestion. Thanks for your work.
@prelimsiscoming 5 years ago
Can we have the lecture slides?
@ankurgupta2806 5 years ago
Sir, at 0:50 I think it should be "to fool the discriminator".
@ogsconnect1312 4 years ago
Thanks, excellent!
@vasukapoor6423 4 years ago
Sir, on the first slide it should say that it can fool the discriminator, in place of "fool the generator".
@priyansukushwaha5195 6 months ago
Hey everyone, please note that sir made a mistake at 28:00, which he clarified in the next video: the min over G and the max over D also apply to the expectation terms on the RHS of the equation.
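For reference, the corrected form is the full value function from the original GAN paper (Goodfellow et al., 2014), with min and max governing the entire right-hand side:

```latex
\min_{G}\max_{D} V(D,G) =
\min_{G}\max_{D}\;
\mathbb{E}_{x\sim p_{\text{data}}(x)}\bigl[\log D(x)\bigr]
+ \mathbb{E}_{z\sim p_{z}(z)}\bigl[\log\bigl(1-D(G(z))\bigr)\bigr]
```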
@nirajpudasaini4450 11 months ago
Legend.
@venkatbabu186 4 years ago
A feeder loop of string instruments is the modern hardware artificial intelligence for both machine learning and clustered deep learning reduction. Modulators, demodulators. Weighted proportional clustering of deep learning. Advisory is error rectification and retuning. The higher the sensory perception, the more new the kind, and a bit slower; sometimes much faster because of new methods. That's why AI is able to remember almost the entire world and extras, even Gone with the Wind. The color sense says it is red. The smell sense says it is attractive. The shape sense says it is conical. So: red rose.
@arabnaouel289 3 years ago
Thank you very much for this amazing series. Can you please activate subtitles in other languages?
@pasinduekanayaka8023 3 years ago
Hi, first of all, thanks for the explanation, but I have some questions about the training cycle you explained. In the explanation, it seems like the discriminator is already trained to the maximum level before the generator starts to train. Isn't that a problem? In GAN research, many people have said that if the discriminator is stronger than the generator, the model is not going to perform well. That is somewhat logical as well, because the main objective of GANs is to play a min-max game between the generator and the discriminator, so if the discriminator is stronger than the generator, it's not a fair game at all. Can you please give an explanation of this? Thank you.
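On the balance point raised above: in the original paper's Algorithm 1 the discriminator is not trained to optimality first; the two networks are updated in alternation, with k discriminator steps per generator step (k = 1 being common). A minimal runnable sketch of that loop, assuming PyTorch, with toy 2-D data and tiny placeholder networks standing in for real models:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
z_dim, data_dim, batch, k = 8, 2, 64, 1  # k = discriminator steps per G step

# Tiny placeholder networks, purely illustrative.
G = nn.Sequential(nn.Linear(z_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(),
                  nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()
ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

for step in range(200):
    for _ in range(k):
        # Discriminator update: real -> 1, fake -> 0; G is frozen here.
        real = torch.randn(batch, data_dim) + 3.0    # stand-in real data
        fake = G(torch.randn(batch, z_dim)).detach()
        d_loss = bce(D(real), ones) + bce(D(fake), zeros)
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: label fakes as 1, i.e. minimize -log(D(G(z))).
    fake = G(torch.randn(batch, z_dim))
    g_loss = bce(D(fake), ones)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Note that a fresh minibatch of z is drawn at every step, which also relates to the earlier question about why new points are sampled after each weight update.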
@dkmoni17 5 years ago
Wow. Just one question: on the slide at 08:38, it looks like you are training the generator to correct its errors during training. In part 1, you said that to train a generator model, we should set the label y = 1 to fool the discriminator. How come you have shown y = 0 in this video for the training of the generator? If the discriminator is already trained on fake and real samples, then to fool it the generator should use label y = 1, or else it will not be fooled. Please share your comments. Can someone reply to this, please?
@MustafaCoban-hm2ov 5 years ago
Around 2:40, you say that "z is sampled from a random distribution and we cannot say it is Gaussian or some other distribution". I don't think this is correct. First of all, what is a "random distribution"? In GAN implementations, researchers choose a prior distribution, such as a normal distribution, and then sample the z vector from it. This way, the generator learns the mapping from the normal distribution to the training data distribution. So what is random is the z vector, not the distribution itself.
@shineshine9599 5 years ago
I agree.
@amitkumarshrivastava7437 5 years ago
What is z? Is it some kind of image or something else? I understand it's something from a distribution, but is it images? Also, what kind of noise are we basically adding? It may be some kind of vector, but is it images? I understand that the discriminator will have real images like shoes, clothes, etc.; in the same context, I am asking about z.
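On what z actually is: in standard GAN setups it is not an image but a low-dimensional noise vector drawn from a chosen prior (commonly a standard normal), which the generator then maps to an image-shaped output. A minimal sketch, assuming PyTorch, with all sizes hypothetical:

```python
import torch

batch, z_dim = 16, 100         # 100-dimensional latent vectors, a common choice
z = torch.randn(batch, z_dim)  # z: noise vectors sampled from N(0, I), not images
print(z.shape)                 # torch.Size([16, 100])

# An (untrained, placeholder) generator maps each noise vector to an
# image-shaped tensor, e.g. a 28x28 grayscale image.
G = torch.nn.Sequential(torch.nn.Linear(z_dim, 28 * 28), torch.nn.Tanh())
fake_images = G(z).view(batch, 1, 28, 28)
print(fake_images.shape)       # torch.Size([16, 1, 28, 28])
```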
@abhishekprasad7030 5 years ago
I wanted to know how many videos are left in this series to complete DL.
@AhladKumar 5 years ago
Four more left.
@vikramnimma 4 years ago
At 0:50, you said the objective of the generator is to create data so that it can fool the generator, but in your previous lecture you said the objective is to fool the discriminator. Correct me if I am wrong.
@priyabratdash8964 4 years ago
This might have been a mistake; the discriminator is what needs to be fooled.
@adnanhashem98 7 months ago
I think that at 0:48 you meant to write and say that the role of the generator is that it "...can fool the *discriminator*", instead of the generator.
@sgrimm7346 1 year ago
I believe he meant to say the generator's role is to create data so that it can fool the "discriminator". Just pointing it out. Good video, however.
@makting009 4 years ago
Why does the fake dataset become circular?
@alibahari4217 4 years ago
Totally unique, but there are a lot of disturbing ads.
@AhladKumar 4 years ago
You can get YouTube Premium to avoid them.
@rachittsharmaaa 4 years ago
@AhladKumar 😂😂😂
@miranbaban9554 1 year ago
Dear Ahlad, it should be "discriminator" on your first slide; it fools the discriminator, not the generator.
@NitishRaj 2 years ago
1:06 Fool the discriminator. Kindly correct it.
@ttreza5922 12 days ago
Anyone from 2024 watching this?
@dddd-rf1xy 1 year ago
Please enable translated subtitles.
@aadityasingh4911 4 years ago
Very confusing... you say one thing and then contradict it 5 seconds later.
@VR-fh4im 1 year ago
The generator fools the discriminator.
@NandakishanRajagiri 8 months ago
Very interesting, but boring...
@MrSushantsingh 5 years ago
Even Ian Goodfellow would not be able to explain it this well. Anyway, I heard he's pretty arrogant.
@NR_Tutorials 5 years ago
I watched Ian Goodfellow's lecture too... it's not better than this.