L18.3: Modifying the GAN Loss Function for Practical Use

6,039 views

Sebastian Raschka

1 day ago

Sebastian's books: sebastianrasch...
Slides: sebastianrasch...
-------
This video is part of my Introduction to Deep Learning course.
Next video: • L18.4: A GAN for Gener...
The complete playlist: • Intro to Deep Learning...
A handy overview page with links to the materials: sebastianrasch...
-------
If you want to be notified about future videos, please consider subscribing to my channel: / sebastianraschka

Comments: 15
@prateekpatel6082
@prateekpatel6082 6 months ago
Quite vague, poor explanation of the vanishing gradient under a strong discriminator. The gradient of -1 is not weak; the reason it vanishes is that the -1 gets multiplied (chain rule) by the gradient of D w.r.t. G, which is too small for confident/saturated discriminators. Once you change the formulation so the gradient is 1/y_hat, that small gradient is now multiplied by a very large number.
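A minimal PyTorch sketch of the point above (y_hat is an illustrative stand-in for D(G(z)), not code from the course): near y_hat = 0, the saturating loss log(1 - y_hat) yields a gradient of about -1, while the non-saturating loss -log(y_hat) yields -1/y_hat, which is large exactly where the saturated discriminator's own gradient is tiny.

```python
import torch

# A confident discriminator assigns D(G(z)) close to 0 for fake images.
y_hat = torch.tensor(1e-4, requires_grad=True)

# Saturating (original minimax) generator loss: log(1 - D(G(z)))
loss_sat = torch.log(1 - y_hat)
loss_sat.backward()
print(y_hat.grad)  # ~ -1.0001: bounded, then multiplied by a tiny dD/dG

y_hat.grad = None  # reset the gradient before the second backward pass

# Non-saturating reformulation: -log(D(G(z)))
loss_nonsat = -torch.log(y_hat)
loss_nonsat.backward()
print(y_hat.grad)  # ~ -10000: the 1/y_hat factor offsets the tiny dD/dG
```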
@jonathansum9084
@jonathansum9084 3 years ago
Hi, I have a question. For the second equation at 7:30, how did you get the range [0, inf]? log(0) = -inf. Or was the equation changed to the negative log-likelihood, so it became +inf? Thank you.
@SebastianRaschka
@SebastianRaschka 3 years ago
Good catch, looks like I forgot the minus sign at the bottom.
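With the minus sign restored, the range in the question works out; a one-line check, assuming the second equation at 7:30 is the negative log-likelihood term:

```latex
% For \hat{y} = D(G(z)) \in (0, 1]:
\log \hat{y} \in (-\infty,\, 0]
\quad\Longrightarrow\quad
-\log \hat{y} \in [0,\, \infty)
```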
@adityarajora7219
@adityarajora7219 2 years ago
4:08 How does D(G(z)) --> 0 maximize the gradient descent equation? Please help, I'm completely confused!
@adityarajora7219
@adityarajora7219 2 years ago
Does "minimize" refer only to the magnitude? I am assuming you are minimizing the magnitude. Is that the case?
@SebastianRaschka
@SebastianRaschka 2 years ago
"How does D(G(z)) --> 0 maximize the gradient descent equation?" Actually, we are not trying to maximize it but minimize it. In the equation at the top, it's gradient descent on log(1 - D(G(z))). Since we feed fake images, D(G(z)) will be close to 0 in the beginning, so the loss is log(1 - 0) = log(1) = 0. With gradient descent, the loss goes towards -inf as D(G(z)) -> 1.
@adityarajora7219
@adityarajora7219 2 years ago
In the end everything is clear. I have gone through many videos but didn't get a proper understanding, but this video explains what's going on inside those equations!! Great.
@SebastianRaschka
@SebastianRaschka 2 years ago
Awesome! Glad to hear it!
@techgurlpk
@techgurlpk 2 years ago
Thank you so much for this beautiful and detailed discussion. After reading about this in so many places and getting confused, I landed on your channel, and it cleared up all the ambiguities. 🙏😇
@SebastianRaschka
@SebastianRaschka 2 years ago
Awesome, I am happy to hear this!
@tiana717
@tiana717 1 year ago
Thank you for the great video! At 14:19, I understand that the first equation is the gradient-ascent rule from the original paper and that the second equation is the negative log-likelihood, which, from what I understand, just adds negative signs to the normal log-likelihood. But how did you transform equation 1 into the normal log-likelihood to begin with? Integration? Thank you!
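The step the question asks about is a sign flip rather than an integration; a sketch, assuming the two equations at 14:19 are the gradient-ascent objective and its negated, minimization form:

```latex
% Maximizing an objective is equivalent to minimizing its negation:
\max_{\theta}\; \log D\big(G(z)\big)
\quad\Longleftrightarrow\quad
\min_{\theta}\; -\log D\big(G(z)\big)
```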
@BarryAllen-lh6jg
@BarryAllen-lh6jg 1 year ago
Well explained, thanks!
@Belishop
@Belishop 2 years ago
This video is a bit complicated.
@SebastianRaschka
@SebastianRaschka 2 years ago
Thanks for the feedback!
@Belishop
@Belishop 2 years ago
@@SebastianRaschka Thank you for the video!