Please indulge me: what were the two people who disliked this video expecting to see that they could not find here? I genuinely cannot understand what they disliked about this lecture. This is just gold!
@abdelghanikadouri1626 · 4 years ago
I can't believe I only just found out about this wonderful tutorial. I never thought I'd understand GANs this easily. Thank you so much!
@MLDawn · 5 years ago
You are killing me here! This lecture is too amazing to be true! Well done, a million times over!
@AhladKumar · 5 years ago
Thanks for the feedback... watch out for more.
@muhammadyaqoob9129 · 7 days ago
I have watched your complete playlist on brain MRI tumor detection. Those videos were also excellent. Love you from the depth of my heart.
@NR_Tutorials · 5 years ago
Sir, take my life if you must for this lecture... what a way you have taught! Great, sir. Salute!
@shahriarshayesteh8602 · 3 years ago
Fantastic explanation! We need such detailed and in-depth tutorials. I hope you can cover transformers and BERT in the same detail. Thanks!
@ziauldba · 2 years ago
Seriously, it feels like I'm in a classroom. Thanks, professor. God bless you.
@syeddanish7055 · 1 year ago
Very nicely explained... was able to learn all in one go. Thanks a lot, sir!!!
@hamzaameer2213 · 5 years ago
Thanks, sir. I wasted four days reading research papers; you have done it in just four lectures. You are pure gold.
@iliasaarab7922 · 4 years ago
This channel deserves a million+ subs! Amazing explanation!
@learner3539 · 2 years ago
Thanks for your lecture. Your lectures make complicated topics much easier. You are doing great work.
@soumambanerjee1816 · 3 years ago
Ian Goodfellow will be shocked to see such a great lecture. Thank you for choosing to be a professor. ❤️
@jyotideshwal337 · 4 years ago
There are not enough words to thank you for all that you do! I am lucky enough to call you my teacher.
@surayuthpintawong8332 · 3 years ago
Thanks for sparing your time to teach us.
@DoctorPataMedicast · 10 months ago
Hmmm, this is one of the best I have seen. You are a born teacher.
@gioxc88 · 4 years ago
Words are not enough to thank you for this!!!
@mozhganrahmatinia1656 · 3 years ago
Your explanation is perfect. Thank you.
@nikhilkumawat1797 · 1 year ago
Very well explained. Amazing.
@zeinabawad9317 · 1 year ago
Thanks a lot, the best explanation of GANs ever. Thanks again!
@youtubecommenter5122 · 4 years ago
Bloody good explanation!
@Afshaanjum-ge7dt · 9 months ago
Bravo... your efforts are appreciated.
@shaurovdas5842 · 3 years ago
Isn't there supposed to be a negative sign in the binary cross-entropy formula?
@BJ-gj2mv · 4 years ago
These are the best lectures I have seen on deep learning. Great work. Thank you. Keep it up.
@madhuvarun2790 · 3 years ago
Your videos are a gold mine. Thank you so much sir.
@zainabkhan5859 · 3 years ago
This is such a classy explanation. Thank you so much!
@shineshine9599 · 5 years ago
Your videos literally saved me while working on GANs. Can't thank you enough.
@submagr · 5 years ago
Thank you, Sir. I got a much better understanding of the loss function now!
@Vishal-mf2db · 3 years ago
Thank you so much for this video. Helped me a lot.
@girishmishra · 4 years ago
Awesome pick... no other lecture is needed. Thanks a lot!
@gauravsahani2499 · 4 years ago
Really interesting video, sir! Thank you for this playlist!
@ZERU326 · 2 years ago
At 31:18, when we are training the generator, the fake-data label is set to 1, so we directly get L = -log(D(G(z))) as the generator's loss function.
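For anyone who wants to see this concretely: a minimal runnable sketch of that non-saturating generator update, with toy stand-in networks (the models and sizes below are hypothetical, not the lecture's exact setup).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

batch_size, latent_dim, data_dim = 64, 100, 2

# Toy stand-in networks just so the sketch runs; real GANs use deeper models.
G = nn.Sequential(nn.Linear(latent_dim, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 1), nn.Sigmoid())

z = torch.randn(batch_size, latent_dim)          # sample fresh noise
d_fake = D(G(z))                                 # discriminator's belief the fakes are real
labels = torch.ones_like(d_fake)                 # label the fakes as 1 ("real") for this step
loss_g = F.binary_cross_entropy(d_fake, labels)  # averages -log(D(G(z))) over the batch
loss_g.backward()                                # gradients flow back through D into G's weights
```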
@go64bit · 5 years ago
This explanation is pure art. Brilliant!
@AhladKumar · 5 years ago
Thanks!
@minhajansari8272 · 5 years ago
Wow! Thank you very much. I slept through all my lectures and tomorrow is my exam (which is going to be tough).
@anubhavgupta4917 · 3 years ago
Sir, I am from India. Your lecture is a diamond, i.e., the Koh-i-Noor. It deserves millions of likes, and I am shocked it has only 445.
@zeppelinpage861 · 1 year ago
Because Justin Bieber is more important to us than GANs.
@RaviKumar-yu8xf · 4 years ago
Simply awesome! Thank you very much, sir!
@aeigreen · 1 year ago
Great explanation.
@danielenrique7184 · 5 years ago
Thank you so much for the exceptional explanation! :)
@armankamal3694 · 4 years ago
Very, very good explanation. Thanks for the lecture; it helped me a lot.
@digvijaymahamuni7722 · 1 year ago
Very Impressive Lecture!
@tarmiziizzuddin337 · 5 years ago
Your videos are gems sir, thank you for the effort!
@rajeshraman1980 · 5 years ago
Awesome. The best on GANs. Thanks a lot.
@adminai9450 · 5 years ago
Awesome explanation, sir. Thank you.
@SW-ud1wt · 1 year ago
Dear sir, very good elaboration. I need to ask a question: at 4:44 you said the weights need to be adjusted after calculating the error, and then we take new samples from the distribution of z. Why should we not keep working with the previous random points from z? Why do we take new points, if all we want is for the weights to be updated? Please guide. Thanks.
@karimmache4018 · 3 years ago
Thank you for this wonderful tutorial. Is there any tutorial about autoencoders and variational autoencoders?
@praslisa · 4 years ago
I am watching all the ads on these videos... this person deserves $$$$... come up with a Udemy course... you will do great :)
@haztec. · 4 years ago
I don't think that gets them any more money though.
@soumyasarkar4100 · 3 years ago
Awesome lectures... these lectures are so good that I want them to keep going for hours. I was wondering if the minimization objective is ill-defined, because it can go arbitrarily low, towards negative infinity.
@arjunmajumdar6874 · 2 years ago
The binary cross-entropy loss is -y·log(ŷ) - (1-y)·log(1-ŷ). In your derivation, this isn't written explicitly. Is it because you have modified the BCE loss?
@BiranchiNarayanNayak · 4 years ago
Thanks. Very well explained.
@ankurbhatia24 · 4 years ago
The cross-entropy loss function used here does not have a negative sign. The cross-entropy loss function is −(y·log(p) + (1−y)·log(1−p)), so everything we maximized or minimized would be reversed. Right? Please correct me if I am wrong.
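For readers following this thread: a small NumPy check of the sign convention. With the minus sign, BCE is minimized; dropping it flips every min into a max, which is how the lecture's "maximize log D(x)" and the usual "minimize BCE" describe the same optimization.

```python
import numpy as np

def bce(y, p, eps=1e-12):
    """Binary cross-entropy with the conventional negative sign."""
    p = np.clip(p, eps, 1 - eps)  # avoid log(0)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

print(bce(1.0, 0.99))  # ~0.01: confident correct prediction, small loss
print(bce(1.0, 0.01))  # ~4.61: confident wrong prediction, large loss
# max log D(x) is the same as min -log D(x), so "maximizing" without the
# minus sign and "minimizing" with it are the same optimization.
```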
@prashant_canada · 3 years ago
I actually started following you when I saw your first deep learning video series, where you discussed Kaggle, Google Colab, setting up an environment, and getting ready to dive into data science. I found it very informative. But since TensorFlow 1.0 is completely phased out and we have TensorFlow 2.0 instead, many syntax elements and concepts such as sessions, placeholders, and summaries no longer apply, so I had to stop following the rest of the series. How can I fill this gap? Is there another way, or do you have another video series for TensorFlow 2.0? Please update, sir.
@unchaineddreameralpa · 4 years ago
Excellent tutorial
@raghavsharma2658 · 5 years ago
Thank you again; you made this tough subject very easy...
@bosepukur · 4 years ago
If you minimize the G(z) term, doesn't that make the cost function ill-defined, because it can go to an arbitrarily low value?
@홍성-w4g · 3 years ago
What software are you using for this teaching?
@douglasamoo-sargon5049 · 5 years ago
Awesome explanation.
@microcosmos9654 · 4 years ago
Thank you so much for the lectures, they help me a lot!
@PavanKumar-yk5mq · 1 year ago
Excellent.
@aakanksha7877 · 3 years ago
Thanks a lot for this.
@StrawHatLuffy750 · 5 years ago
Thank you so much :) please keep explaining things :)
@madhavpr · 4 years ago
Hi Ahlad, awesome tutorial. I am wondering if we can interchange max and min in the expression for the loss function of a GAN. More precisely, if L is the loss function, does min max L = max min L, where the minimizer and the maximizer are with respect to G and D respectively?
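For reference, this is the minimax objective from the original GAN paper (Goodfellow et al., 2014), with G minimizing on the outside; in general the two orderings need not coincide, since only max-min ≤ min-max (weak duality) is guaranteed:

```latex
\min_G \max_D V(D,G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}[\log D(x)]
  + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))],
\qquad
\max_D \min_G V(D,G) \le \min_G \max_D V(D,G).
```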
@zahrakhalid4731 · 2 years ago
Please make a video on StyleGAN.
@sharmaannapurna · 4 years ago
Hi, very nice explanation. Thank you for sharing this. I would like you to check the first slide and the definition of the generator; I think the last word is a typo.
@ammarkhan2611 · 3 years ago
At 0:57, isn't the idea of the generator to fool the discriminator?
@hamidrezanabati · 5 years ago
Thanks for the great explanation!
@sangeethbalakrishnan9177 · 4 years ago
Nice lectures, but the equation for binary cross-entropy has a negative sign, I think.
@ambujmittal6824 · 4 years ago
Yes, it should be negative, and the loss is always to be minimized (not maximized). The log graphs will also be drawn reflected. The instructor has taught the concept completely wrong there. (That's my reason for disliking the video, since this concept is very basic in ML.)
@nisaratutube · 5 years ago
Thanks, sir, for this great stuff. One request: if you could put some reference links in the descriptions of the videos, it would be very helpful.
@danielmathew5008 · 4 years ago
These videos are probably the best explanation of these machine learning concepts, but I have one problem. I don't want to complain about ads, since I know you work hard to put these out and have to earn, but please try to put all the ads at the end, not two or three in the middle of the video, since that disrupts the flow of thought. Just a suggestion. Thanks for your work.
@prelimsiscoming · 5 years ago
Can we have the lecture slides?
@ankurgupta2806 · 5 years ago
Sir, at 0:50 I think it should be "to fool the discriminator".
@ogsconnect1312 · 4 years ago
Thanks, excellent!
@vasukapoor6423 · 4 years ago
Sir, on the first slide it should say that it can fool the discriminator, in place of "fool the generator".
@priyansukushwaha5195 · 6 months ago
Hey everyone, please note that sir made a mistake at 28:00, which he clarified in the next video: the min over G and max over D should also appear in front of the expectation terms on the RHS of the equation.
@nirajpudasaini4450 · 11 months ago
legend
@venkatbabu186 · 4 years ago
A feeder loop of string instruments is the modern hardware artificial intelligence for both machine learning and clustered deep-learning reduction. Modulators, demodulators. Weighted proportional clustering of deep learning. Advisory is error rectification and retuning. The higher the sensory perception, the newer the kind, and a bit slower; sometimes much faster because of new methods. That's why AI is able to remember almost the entire world and extras, even Gone with the Wind. Colour sensory says it is red. Smell sensory says it is attractive. Shape sensory says it is conical. So: red rose.
@arabnaouel289 · 3 years ago
Thank you very much for this amazing series. Can you please enable subtitles in other languages?
@pasinduekanayaka8023 · 3 years ago
Hi, first of all, thanks for the explanation, but I have some questions regarding the training cycle you explained. In the explanation, it seems the discriminator is already trained to a maximum level before the generator starts to train. Isn't that a problem? In research on GANs, many people have said that if the discriminator is stronger than the generator, the model is not going to perform well. That is somewhat logical as well, because the main objective of GANs is to play a min-max game between the generator and the discriminator; if the discriminator is stronger than the generator, it's not a fair game at all. Can you please explain this? Thank you.
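For context, the usual training schedule looks like the sketch below: the discriminator gets only k inner updates (often k = 1) per generator step, so it is never actually trained to optimality. The networks and data here are toy stand-ins, not the lecture's setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

latent_dim, data_dim, k = 8, 2, 1
G = nn.Sequential(nn.Linear(latent_dim, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 1), nn.Sigmoid())
opt_d = torch.optim.SGD(D.parameters(), lr=0.01)
opt_g = torch.optim.SGD(G.parameters(), lr=0.01)

for step in range(100):
    for _ in range(k):  # k discriminator updates per generator update
        x_real = torch.randn(64, data_dim) + 3.0          # toy "real" data
        x_fake = G(torch.randn(64, latent_dim)).detach()  # don't touch G here
        d_loss = (F.binary_cross_entropy(D(x_real), torch.ones(64, 1))
                  + F.binary_cross_entropy(D(x_fake), torch.zeros(64, 1)))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator step: label the fakes as real (the non-saturating trick).
    g_loss = F.binary_cross_entropy(D(G(torch.randn(64, latent_dim))),
                                    torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```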
@dkmoni17 · 5 years ago
Wow. Just one question: at 08:38, it looks like you are training the GENERATOR to correct its errors during training. In Part 1, you said that to train a generator model we should keep the label = 1 to fool the discriminator. How come you have shown y = 0 in this video for training the generator? If the discriminator is already trained on fake and real samples, then to fool it the generator should use label = 1, or else it will not be fooled. Please share your comments. Can someone reply to this, please?
@MustafaCoban-hm2ov · 5 years ago
Around 2:40, you say that "z is sampled from a random distribution and we cannot say it is Gaussian or some other distribution." I don't think this is correct. First of all, what is a "random distribution"? In GAN implementations, researchers choose a prior distribution, such as the normal distribution, and then sample the z vector from it. This way, the generator learns the mapping from the normal distribution to the training-data distribution. Therefore, what is random is the z vector, not the distribution itself.
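A minimal sketch of what that sampling looks like in practice (the sizes are arbitrary): the prior is fixed and known in advance; only the draws from it are random.

```python
import numpy as np

batch_size, latent_dim = 64, 100

# Fixed, known prior (a standard normal); only the samples are random.
z = np.random.normal(loc=0.0, scale=1.0, size=(batch_size, latent_dim))
# A uniform prior is another common choice:
# z = np.random.uniform(-1.0, 1.0, size=(batch_size, latent_dim))
print(z.shape)  # (64, 100): each row is one latent vector fed to the generator
```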
@shineshine9599 · 5 years ago
I agree.
@amitkumarshrivastava7437 · 5 years ago
What is z? Is it some kind of image or something else? I understand it's something drawn from a distribution, but is it an image? Also, what kind of noise are we actually adding? It may be some kind of vector, but is it an image? I understand that the discriminator will see real images such as shoes, clothes, etc.; in the same context, I am asking about z.
@abhishekprasad7030 · 5 years ago
I wanted to know how many videos are left to complete this DL series.
@AhladKumar · 5 years ago
Four more left.
@vikramnimma · 4 years ago
At 0:50 you said the objective of the generator is to create data so that it can fool the generator, but in your previous lecture you said the objective is to fool the discriminator. Correct me if I am wrong.
@priyabratdash8964 · 4 years ago
This might have been a mistake; the discriminator needs to be fooled.
@adnanhashem98 · 7 months ago
I think that at 0:48 you meant to write and say that the role of the generator is to "...fool the *discriminator*", not the generator.
@sgrimm7346 · 1 year ago
I believe he meant to say the generator's role is to create data so that it can fool the "discriminator". Just pointing it out. Good video, however.
@makting009 · 4 years ago
Why does the fake dataset become circular?
@alibahari4217 · 4 years ago
Totally unique, but there are a lot of disruptive ads.
@AhladKumar · 4 years ago
You can get YouTube Premium to avoid them.
@rachittsharmaaa · 4 years ago
@@AhladKumar 😂😂😂
@miranbaban9554 · 1 year ago
Dear Ahlad, it should be "discriminator" on your first slide; the generator fools the discriminator, not the generator.
@NitishRaj · 2 years ago
1:06 — "fool the discriminator". Kindly correct it.
@ttreza5922 · 12 days ago
Anyone from 2024 watching this?
@dddd-rf1xy · 1 year ago
Please enable translated subtitles.
@aadityasingh4911 · 4 years ago
Very confusing... you say one thing and then contradict it five seconds later.
@VR-fh4im · 1 year ago
Generator fools the Discriminator.
@NandakishanRajagiri · 8 months ago
Very interesting, but boring...
@MrSushantsingh · 5 years ago
Even Ian Goodfellow would not be able to explain it that well. Anyway, I heard he's pretty arrogant.
@NR_Tutorials · 5 years ago
I saw Ian Goodfellow's lecture too... not better than this.