[Classic] Generative Adversarial Networks (Paper Explained)

60,503 views

Yannic Kilcher

#ai #deeplearning #gan
GANs are one of the main model families in modern deep learning. This is the paper that started it all! While the task of image classification was making steady progress, the task of image generation was still cumbersome and prone to artifacts. The main idea behind GANs is to pit two competing networks against each other, thereby creating a generative model that only ever has implicit access to the data through a second, discriminative, model. The paper combines architecture, experiments, and theoretical analysis beautifully.
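To make the two-network idea concrete, here is a minimal PyTorch-style sketch of the alternating updates (an illustrative sketch, not code from the paper or the video; the MLP sizes, the 784-dimensional flattened-image input, the 100-dimensional Gaussian noise prior, and the plain SGD learning rates are placeholder assumptions):

import torch
import torch.nn as nn

# Placeholder networks: the paper uses multilayer perceptrons for both G and D.
G = nn.Sequential(nn.Linear(100, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 1), nn.Sigmoid())
opt_g = torch.optim.SGD(G.parameters(), lr=0.01)
opt_d = torch.optim.SGD(D.parameters(), lr=0.01)
bce = nn.BCELoss()

def train_step(real_batch):  # real_batch: (batch, 784) tensor of flattened images
    b = real_batch.size(0)
    # Discriminator step: push D(x) toward 1 on real data and D(G(z)) toward 0 on samples.
    fake = G(torch.randn(b, 100)).detach()
    d_loss = bce(D(real_batch), torch.ones(b, 1)) + bce(D(fake), torch.zeros(b, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator step: the non-saturating variant the paper suggests, i.e. maximize log D(G(z)).
    g_loss = bce(D(G(torch.randn(b, 100))), torch.ones(b, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

The generator only ever "sees" the data through the gradients it receives from the discriminator, which is exactly the implicit access described above.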
OUTLINE:
0:00 - Intro & Overview
3:50 - Motivation
8:40 - Minimax Loss Function
13:20 - Intuition Behind the Loss
19:30 - GAN Algorithm
22:05 - Theoretical Analysis
27:00 - Experiments
33:10 - Advantages & Disadvantages
35:00 - Conclusion
Paper: arxiv.org/abs/1406.2661
Abstract:
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to 1/2 everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.
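For reference, the minimax objective the abstract refers to can be written out as (in the paper's notation):

min_G max_D V(D, G) = E_{x ~ p_data(x)} [ log D(x) ] + E_{z ~ p_z(z)} [ log(1 - D(G(z))) ]

where p_z(z) is a prior on the generator's input noise. The paper shows that the global optimum of this game is reached when the generator's distribution equals the data distribution, at which point D(x) = 1/2 everywhere.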
Authors: Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio
Links:
YouTube: / yannickilcher
Twitter: / ykilcher
Discord: / discord
BitChute: www.bitchute.com/channel/yann...
Minds: www.minds.com/ykilcher
Parler: parler.com/profile/YannicKilcher
LinkedIn: / yannic-kilcher-488534136
If you want to support me, the best thing to do is to share out the content :)
If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: www.subscribestar.com/yannick...
Patreon: / yannickilcher
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n

Comments: 68
@Youtoober6947 2 years ago
I don't know if you realize it, but I believe you have NO idea how helpful (and especially how helpful for time management) the Paper Explained series you're doing is for me. These are SERIOUSLY invaluable, thank you so much.
@TheInfinix 3 years ago
I think that such an initiative will be useful for fresh researchers and beginners.
@Aniket7Tomar 3 years ago
I am loving these classic paper videos. More of these, please.
@neuron8186 3 years ago
ok indian
@KensonLeung0 3 years ago
+1
@kateyurkova6384 3 years ago
These reviews are priceless, you add so much more value than just reading the paper would bring, thank you for your work.
@MinecraftLetstime 3 years ago
These are absolutely amazing, please keep them coming.
@datamlistic 3 years ago
The classic papers are amazing! Please continue making them!
@sulavojha8322 3 years ago
The classic papers are really good. Hope you upload more such videos. Thank you!
@SallyZhang-vt2oi 3 years ago
Thank you very much. I really appreciate your understanding of these papers. Please keep on releasing such kind of videos. They helped me a lot. Thanks again!
@bjornhansen9659 3 years ago
I like these videos on the papers. It is very helpful to hear how another person views the ideas discussed in these papers. Thanks!
@benjaminbenjamin8834 3 years ago
@Yannic, this is such a great initiative and you are doing a great, great job. Please keep it going.
@fulin3397 3 years ago
Classic paper and a very awesome explanation. Thank you!
@maltejensen7392 3 years ago
It's extremely helpful to hear your thoughts on what the authors were thinking, and things like researchers trying to put MCMC somewhere it was not intended to be. This gives a better idea of how machine learning in academia works. Please continue this, and thanks!
@andresfernandoaranda5498 3 years ago
Thank you for making these resources free to the community :)
@falachl 2 years ago
Yannic, thank you. In this information-overloaded ML world you are providing a critical service. Please keep it up.
@agbeliemmanuel6023 3 years ago
It's great to have the origins of most of today's ML models covered. Good work.
@aa-xn5hc 3 years ago
I love these historical videos of yours!!
@narinpratap8790 2 years ago
This was awesome! I am currently a graduate student, and I have to write a paper review for my Deep Learning course. Loved your explainer on GANs. This has helped me understand so much of the intuition behind GANs, and also the developments in Generative Models since the paper's release. Thank you for making this.
@ambujmittal6824 3 years ago
You're truly a godsend for people who are comparatively new to the field (maybe even for experienced ones). Thanks a lot and keep up the good work!
@YtongT 3 years ago
very useful, thank you for such quality content!
@avishvj 2 years ago
Brilliant, would love more of these!
@frankd1156 3 years ago
Wow... this is gold. Keep it up, man. Be blessed.
@kristiantorres1080 3 years ago
Beautiful paper and superb review!
@AnassHARMAL 1 year ago
This is amazing, thank you! As a materials scientist trying to utilize machine learning, this just hits the spot!
@alexandravalavanis2282 2 years ago
Damn. I’m enjoying this video very much. Very helpful. Thank you!
@aman6089 2 years ago
Thank you for the explanation. It is a great resource for a beginner like myself!
@goldfishjy95 3 years ago
Hi this is incredibly useful, thank you so much!
@herp_derpingson 3 years ago
12:00 I never quite liked the min-max analogy. I think a better analogy would be a teacher-student analogy. The discriminator says, "The image you generated does not look like a real image, and here are the gradients that tell you why. Use the gradients to improve yourself."
32:30 I am pretty sure these interpolations already existed in the auto-encoder literature.
Mode collapse is pretty common for human teachers and students. Teachers often say that you need to solve the problems the way I taught in class. "My way or the highway" XD
@YannicKilcher 3 years ago
Yes, the teacher-student phrasing would make more sense. I think the min-max is just the formal way of expressing the optimization problem to be solved, and from there people go into game theory etc. The mode collapse could also be the student who knows exactly what to write in any essay to make that one particular teacher happy :D
@Throwingness 2 years ago
I'd appreciate more explanation of the math in the future. This kind of math is rarely encountered by most programmers.
@sergiomanuel2206 3 years ago
Very good paper!! Could you please cover the paper that was the next big step toward the state of the art in GANs? Thank you!
@bosepukur 3 years ago
Great initiative... would love to see some classic NLP papers.
@AltafHussain-gk2xe 2 years ago
Sir, I'm a big fan of yours. I've been following you for the last year, and I find every one of your videos full of information and really useful. Sir, I request you to please make a few videos on segmentation as well; I shall be thankful to you.
@flyagaric23 3 years ago
Thank you, Excellent.
@utku_yucel 3 years ago
YES! THANKS!
@kvawnmartin1562 3 years ago
Best GAN explanation ever
@fluent_styles6720 3 years ago
agreed
@DasGrosseFressen 3 years ago
"Historical" in ML: 6 years :D The series is nice, thanks! One question though: you said that the objective is to minimize the expectations in (1), but the min-max is already performed to get to the equality, right? How does V look? Edit: oh, never mind. In (3) you see that (1) is in the typical CS-sloppy notation...
@lcslima45 2 years ago
This channel is awesome
@TheKoreanfavorites 2 years ago
Great!!!
@jintaoren6755 3 years ago
Why hasn't YouTube recommended this channel to me earlier?
@westcott2204 9 months ago
Thank you for providing your insights and current point of view on the paper. It was very helpful.
@Notshife 3 years ago
Hey @Yannic, I followed up on the BYOL paper you covered. While I'm not super familiar with machine learning, I do feel I implemented something that is mechanically the same as what was presented, and I thought it might interest you that for me the result converged to a constant, every time. The exponential-moving-average-weighted network and the separate augmentations did not prevent it. I will be going back through to see if I have made a mistake, but I have been trying a bit of everything and so far nothing has been able to prevent the trivial solution. Maybe I'm missing something, which I hope, because I liked the idea. My experimentation with parameters and network architecture has not been exhaustive... but yeah, so far: no magic.
@YannicKilcher 3 years ago
Yes, I was expecting most people to have your experience, and then apparently someone else can somehow make it work sometimes.
@dl569 1 year ago
Thanks a lot!
@hahawadda 3 years ago
Funny how we can now call the original GAN paper a classic.
@robo2.069 3 years ago
Nicely explained, thank you... Can you make a video on Dual Motion GAN (DMGAN)?
@ehza 2 years ago
Thanks
@dandy-lions5788 3 years ago
Thank you so much!! Can you do a paper on UNet?
@rameshravula8340 3 years ago
Yannic, could you give application examples at the end of each paper you review?
@jeromeblanchet3827 3 years ago
Most people tell stories with data insights and model predictions. Yannic tells stories with papers. An image is worth a thousand words, and a good story is worth a thousand images.
@sweatobertrinderknecht3480 3 years ago
I'd like to see a mix of papers and actual (Python) code.
@shivombhargava2166 3 years ago
Please make a video on pix2pix GANs
@paulijzermans7637 8 months ago
I'm writing my thesis on GANs at the moment. Would enjoy an interesting conversation with an expert :)
@vigneshbalaji21 2 years ago
Can you please post a video on GAIL?
@XOPOIIIO 3 years ago
In the future there'll be an algorithm to transform scientific papers into your videos.
@adamantidus 3 years ago
No matter how efficient this algorithm might be, Yannic will still be faster
@DANstudiosable 3 years ago
What do you mean by a prior on the input distribution?
@YannicKilcher 3 years ago
It's the way the inputs are distributed (in the paper, the prior p_z(z) on the noise that the generator samples from).
@aishwaryadhumale1278 3 years ago
Can I please have more content on GANs?
@jithendrayenugula7137 3 years ago
Very awesome explanation! Thanks man! Is it too late, or a waste of time, to play with and explore GANs in 2020, when BERT/GPT are hot and trending in the AI community?
@ssshukla26 3 years ago
Is it too late to learn something? No... Is it too late to research GANs? Absolutely not... Nothing is perfect, GANs are not, and there will be decades of research on these same topics. Whether you can make money out of knowing GANs... ummm, debatable...
@chinbold 3 years ago
I'm only inspired by watching your videos 😢😢😢
@timothyschollux 3 years ago
The famous Schmidhuber-Goodfellow moment: kzbin.info/www/bejne/fni8iniLiNJgZrM
@sadface7457 3 years ago
Revisit "Attention Is All You Need", because that is now a classic paper.
@audrius0810 3 years ago
He's done the actual paper already