This lecture derives the loss function of the Generative Adversarial Network (GAN) from scratch. #adversarial #generative #deeplearning
Comments: 102
@MLDawn · 4 years ago
Please indulge me: to the two people who disliked this video, what were you expecting to see that you could not find here? I genuinely cannot understand what they disliked about this lecture. This is just gold!
@abdelghanikadouri1626 · 4 years ago
I can't believe I only just found this wonderful tutorial. I never thought I'd understand GANs this easily. Thank you so much!
@MLDawn · 5 years ago
You are killing me here! This lecture is too amazing to be true. Well done, a million times over!
@AhladKumar · 5 years ago
Thanks for the feedback... watch out for more.
@shahriarshayesteh8602 · 3 years ago
Fantastic explanation! We need such detailed and in-depth tutorials. I hope you can cover Transformers and BERT in similar detail. Thanks!
@iliasaarab7922 · 4 years ago
This channel deserves a million+ subs! Amazing explanation!
@hamzaameer2213 · 4 years ago
Thanks, sir. I wasted four days reading research papers; you have done it in just four lectures. You are pure gold.
@soumambanerjee1816 · 3 years ago
Even Ian Goodfellow would be amazed to see such a great lecture. Thank you for choosing to be a professor. ❤️
@NR_Tutorials · 4 years ago
Sir, you can take my life for this lecture... what a way to teach! Great, sir. Salute!
@learner3539 · 2 years ago
Thanks for your lecture. Your lectures make every complicated topic much easier. You are doing great work.
@jyotideshwal337 · 4 years ago
There are not enough words to thank you for all that you do! I am lucky to be able to call you my teacher.
@DoctorPataMedicast · 8 months ago
Hmm, this is one of the best I have seen. You are a born teacher.
@ziauldba · 2 years ago
Seriously, it feels like I'm in a classroom. Thanks, professor. God bless you.
@shineshine9599 · 5 years ago
Your videos literally saved me while working on GANs. Can't thank you enough.
@gioxc88 · 4 years ago
Words are not enough to thank you for this!!!
@BJ-gj2mv · 4 years ago
These are the best lectures I have seen on deep learning. Great work. Thank you. Keep it up.
@Afshaanjum-ge7dt · 8 months ago
Bravo... your efforts are appreciated.
@ZERU326 · 2 years ago
At 31:18, when we are training the generator, the label for the fake data is 1, so we directly get L = -log(D(G(z))) as the generator's loss function.
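That reduction can be sketched numerically in a few lines of plain Python (the helper name is illustrative, not from the lecture):

```python
import math

def generator_loss(d_of_g_z):
    """Non-saturating generator loss. With target label y = 1,
    binary cross-entropy reduces to L = -log(D(G(z)))."""
    return -math.log(d_of_g_z)

# The loss falls as the generator gets better at fooling the
# discriminator, i.e. as D(G(z)) approaches 1.
print(generator_loss(0.5))   # discriminator not fooled: larger loss
print(generator_loss(0.99))  # discriminator nearly fooled: small loss
```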
@surayuthpintawong8332 · 3 years ago
Thanks for sparing your time to teach us.
@submagr · 4 years ago
Thank you, Sir. I got a much better understanding of the loss function now!
@arjunmajumdar6874 · 1 year ago
The binary cross-entropy loss is -y·log(yhat) - (1-y)·log(1-yhat). In your derivation this isn't written explicitly. Is it because you have modified the BCE loss?
@syeddanish7055 · 1 year ago
Very nicely explained... I was able to learn it all in one go. Thanks a lot, sir!
@danielmathew5008 · 4 years ago
These videos are probably the best explanation of these machine learning concepts, but I have one problem. I don't want to complain about ads, since I know you work hard on these and have to earn, but please try to put all the ads at the end rather than two or three in the middle of the video, since that disrupts the flow of thought. Just a suggestion. Thanks for your work.
@mozhganrahmatinia1656 · 3 years ago
Your explanation is perfect. Thank you.
@minhajansari8272 · 5 years ago
Wow! Thank you very much. I slept through all my lectures and tomorrow is my exam (which is going to be tough)
@nikhilkumawat1797 · 1 year ago
Very well explained. Amazing.
@praslisa · 4 years ago
I am watching all the ads on these videos... this person deserves $$$$. Come up with a Udemy course; you will do great :)
@haztec. · 4 years ago
I don't think that gets them any more money, though.
@girishmishra · 4 years ago
Awesome pick... no other lecture is needed. Thanks a lot.
@danielenrique7184 · 5 years ago
Thank you so much for the exceptional explanation! :)
@zeinabawad9317 · 1 year ago
Thanks a lot, the best explanation of GANs ever. Thanks again!
@madhuvarun2790 · 3 years ago
Your videos are a gold mine. Thank you so much, sir.
@gauravsahani2499 · 4 years ago
Really interesting video, sir! Thank you for this playlist!
@anubhavgupta4917 · 3 years ago
Sir, I am from India. Your lecture is a diamond, the Kohinoor itself. It deserves millions of likes, and I am shocked there are only 445!
@zeppelinpage861 · 1 year ago
Because Justin Bieber is more important to us than GANs.
@tarmiziizzuddin337 · 5 years ago
Your videos are gems, sir. Thank you for the effort!
@MustafaCoban-hm2ov · 5 years ago
Around 2:40, you say that "z is sampled from a random distribution and we cannot say it is Gaussian or some other distribution". I don't think this is correct. First of all, what is a "random distribution"? In GAN implementations, researchers choose a prior distribution, such as the normal distribution, and then sample the z vector from it. That way, the generator learns the mapping from the normal distribution to the training-data distribution. Therefore, what is random is the z vector, not the distribution itself.
@shineshine9599 · 5 years ago
I agree.
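The point above can be made concrete with a tiny sketch (plain Python, standard library only; the function name is illustrative): the prior is fixed once, and only the drawn vectors vary.

```python
import random

random.seed(0)  # for reproducibility of the sketch

def sample_z(batch_size, latent_dim):
    """Draw a batch of latent vectors z from a fixed prior,
    here the standard normal N(0, 1). The distribution is chosen
    once; what is random is each sampled z vector."""
    return [[random.gauss(0.0, 1.0) for _ in range(latent_dim)]
            for _ in range(batch_size)]

z = sample_z(64, 100)  # 64 latent vectors, each of dimension 100
```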
@amitkumarshrivastava7437 · 5 years ago
What is z? Is it some kind of image, or something else? I understand it is something sampled from a distribution, but is it images? Also, what kind of noise are we actually adding? It may be some kind of vector, but is it images? I understand that the discriminator will see real images like shoes, clothes, etc.; I am asking about z in the same context.
@go64bit · 5 years ago
This explanation is pure art. Brilliant!
@AhladKumar · 5 years ago
Thanks.
@soumyasarkar4100 · 3 years ago
Awesome lectures... these lectures are so good that I want them to keep going for hours. I was wondering if the minimization objective is ill-defined, because it can go arbitrarily low towards negative infinity.
@Vishal-mf2db · 3 years ago
Thank you so much for this video. Helped me a lot.
@youtubecommenter5122 · 3 years ago
Bloody good explanation!
@SW-ud1wt · 1 year ago
Dear sir, very good elaboration. I need to ask a question: at 4:44 you said the weights need to be adjusted after calculating the error, and then we take new samples from the distribution of z. Why should we not keep working with the previous random points from z? Why do we take new points, if all we want is for the weights to be updated? Please guide. Thanks.
@digvijaymahamuni7722 · 1 year ago
Very impressive lecture!
@shaurovdas5842 · 3 years ago
Isn't there supposed to be a negative sign in the binary cross-entropy formula?
@sangeethbalakrishnan9177 · 4 years ago
Nice lectures, but I think the equation for binary cross-entropy has a negative sign.
@ambujmittal6824 · 4 years ago
Yes, it should be negative, and the loss is always minimized (not maximized). The log graphs should also be drawn reflected. The instructor has taught that concept completely wrong there. (That is my reason for disliking the video, since this concept is very basic in ML.)
@zainabkhan5859 · 2 years ago
This is such a classy explanation. Thank you so much!
@RaviKumar-yu8xf · 4 years ago
Simply awesome! Thank you very much, sir!
@dkmoni17 · 4 years ago
Wow. Just one question: on the slide at 08:38, it looks like you are training the GENERATOR to correct its errors during training. In Part 1, you said that to train a GENERATOR model we should set the label to 1 to fool the DISCRIMINATOR. How come you show y = 0 in this video for training the GENERATOR? If the DISCRIMINATOR is already trained on fake and real samples, then to fool it the generator should use label 1, otherwise it will not be fooled. Please pass your comments. Can someone reply to this, please?
@nisaratutube · 5 years ago
Thanks, sir, for this great stuff. One request: if you could put some reference links in the descriptions of the videos, it would be very helpful.
@armankamal3694 · 4 years ago
Very, very good explanation. Thanks for the lecture; it helped me a lot.
@rajeshraman1980 · 5 years ago
Awesome, the best on GANs. Thanks a lot.
@aeigreen · 1 year ago
Great explanation.
@StrawHatLuffy750 · 5 years ago
Thank you so much :) Please keep explaining things :)
@ankurgupta2806 · 4 years ago
Sir, at 0:50 I think it should be "to fool the discriminator".
@adminai9450 · 5 years ago
Awesome explanation, sir. Thank you.
@vasukapoor6423 · 4 years ago
Sir, on the first slide it should say that it can fool the discriminator, in place of "fool the generator".
@priyansukushwaha5195 · 5 months ago
Hey everyone, please note that sir made a mistake at 28:00, which he clarified in the next video: the min over G and max over D should also apply to the expectation terms on the RHS of the equation.
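For reference, the standard form of the objective (as written in the original GAN formulation) places the min/max operators on the left of the whole value function:

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```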
@sharmaannapurna · 4 years ago
Hi, very nice explanation. Thank you for sharing this. I would like you to check the first slide and the definition of the generator; I think the last word is a typo.
@zahrakhalid4731 · 1 year ago
Please make a video on StyleGAN.
@microcosmos9654 · 4 years ago
Thank you so much for the lectures; they help me a lot!
@raghavsharma2658 · 4 years ago
Thank you again; you made this tough subject very easy...
@ankurbhatia24 · 4 years ago
The cross-entropy loss function used here does not have a negative sign. The cross-entropy loss function is -(y·log(p) + (1-y)·log(1-p)), so everywhere, what is maximized or minimized would be reversed. Right? Please correct me if I am wrong.
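A small sketch of the loss with the minus sign included (plain Python; the clamping epsilon is only there to avoid log(0)):

```python
import math

def bce(y, p, eps=1e-12):
    """Binary cross-entropy with its leading minus sign:
    L = -(y*log(p) + (1-y)*log(1-p)).
    It is non-negative and is minimized, approaching 0 when the
    prediction p matches the label y."""
    p = min(max(p, eps), 1.0 - eps)  # clamp to avoid log(0)
    return -(y * math.log(p) + (1.0 - y) * math.log(1.0 - p))

print(bce(1.0, 0.9))  # small loss: confident and correct
print(bce(1.0, 0.1))  # large loss: confident and wrong
```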
@karimmache4018 · 3 years ago
Thank you for this wonderful tutorial. Is there any tutorial on autoencoders and variational autoencoders?
@PavanKumar-yk5mq · 1 year ago
Excellent.
@BiranchiNarayanNayak · 4 years ago
Thanks. Very well explained.
@prashant_canada · 3 years ago
I actually started following you when I saw your first deep learning video series, where you discussed Kaggle, Google Colab, setting up an environment, and getting ready to dive into data science. I found it very informative. But as TensorFlow 1.0 is completely phased out and we have TensorFlow 2.0 instead, many syntax elements and concepts no longer apply, such as sessions, placeholders, summaries, and many more. So how can I fill this gap? I had to stop following the rest of the series. Is there any other way, or do you have another video series for TensorFlow 2.0? Please update, sir.
@ammarkhan2611 · 3 years ago
At 0:57, isn't the idea of the generator to fool the discriminator?
@홍성-w4g · 3 years ago
What software are you using for this teaching?
@madhavpr · 4 years ago
Hi Ahlad, awesome tutorial. I am wondering if we can interchange max and min in the expression for the loss function of a GAN. More precisely, if L is the loss function, does min max L = max min L, where the minimizer and the maximizer are with respect to G and D respectively?
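In general they need not be equal; only one direction always holds (the minimax inequality):

```latex
\max_D \min_G L(G, D) \;\le\; \min_G \max_D L(G, D)
```

Equality requires additional conditions such as those of a minimax theorem (e.g. convexity in G and concavity in D), so the order of the operators matters in the GAN objective.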
@adnanhashem98 · 5 months ago
I think at 0:48 you meant to write and say that the role of the generator is to "...fool the *discriminator*", not the generator.
@hamidrezanabati · 5 years ago
Thanks for the great explanation!
@aakanksha7877 · 3 years ago
Thanks a lot for this.
@pasinduekanayaka8023 · 2 years ago
Hi, first of all, thanks for the explanation, but I have some questions regarding the training cycle you explained. In the explanation, it seems like the discriminator is already trained to the maximum level before the generator starts training. Isn't that a problem? In research related to GANs, many people have said that if the discriminator is stronger than the generator, the model is not going to perform well. That is somewhat logical as well, because the main objective of GANs is to play a min-max game between the generator and the discriminator; if the discriminator is stronger than the generator, it's not a fair game at all. Can you please give an explanation for this? Thank you.
@douglasamoo-sargon5049 · 5 years ago
Awesome explanation.
@venkatbabu186 · 3 years ago
Feeder-loop string instruments are the modern hardware artificial intelligence for both machine learning and clustered deep-learning reduction. Modulators, demodulators. Weighted proportional clustering for deep learning. Advisory is error rectification and retuning. The higher the sensory perception, the newer the kind, and a bit slower; sometimes much faster because of new methods. That's why AI is able to remember almost the entire world and more, even Gone with the Wind. The color sensor says it is red. The smell sensor says it is attractive. The shape sensor says it is conical. So: a red rose.
@unchaineddreameralpa · 4 years ago
Excellent tutorial.
@sgrimm7346 · 11 months ago
I believe he meant to say that the generator's role is to create data so that it can fool the "discriminator". Just pointing it out. Good video, however.
@bosepukur · 4 years ago
If you minimize over G(z), doesn't that make the cost function ill-defined, because it can go to an arbitrarily low value?
@prelimsiscoming · 4 years ago
Can we have the lecture slides?
@vikramnimma · 4 years ago
At 0:50, you said the objective of the generator is to create data so that it can fool the generator, but in your previous lecture you said the objective is to fool the discriminator. Correct me if I am wrong.
@priyabratdash8964 · 4 years ago
That was probably a mistake; it is the discriminator that needs to be fooled.
@ogsconnect1312 · 4 years ago
Thanks, excellent!
@arabnaouel289 · 3 years ago
Thank you very much for this amazing series. Can you please enable subtitles in other languages?
@nirajpudasaini4450 · 9 months ago
Legend.
@miranbaban9554 · 11 months ago
Dear Ahlad, it should be "discriminator" on your first slide; it fools the discriminator, not the generator.
@abhishekprasad7030 · 5 years ago
I wanted to know how many videos are left in this series to complete DL.
@AhladKumar · 5 years ago
Four more left.
@makting009 · 4 years ago
Why does the fake dataset become circular?
@alibahari4217 · 4 years ago
Totally unique, but there are a lot of disturbing ads.
@AhladKumar · 4 years ago
You can get YouTube Premium to avoid them.
@rachittsharmaaa · 4 years ago
@@AhladKumar 😂😂😂
@NitishRaj · 2 years ago
1:06 Fool the discriminator. Kindly correct it.
@aadityasingh4911 · 4 years ago
Very confusing... you say one thing and then contradict it 5 seconds later.
@dddd-rf1xy · 1 year ago
Please enable translated subtitles.
@VR-fh4im · 1 year ago
The generator fools the discriminator.
@MrSushantsingh · 5 years ago
Even Ian Goodfellow would not be able to explain it that well. Anyway, I heard he's pretty arrogant.
@NR_Tutorials · 4 years ago
I saw Ian Goodfellow's lecture as well... not better than this.