Lesson 13: Deep Learning Foundations to Stable Diffusion

17,158 views

Jeremy Howard

a day ago

Comments: 13
@mattst.hilaire9101 · a year ago
That e^a trick shows that, even though algebra is such a pain, it comes in handy so often to make things run smoothly. It reminds me of the trick to avoid overflow in binary search: mid = low + ((high - low) / 2). My favorite thing about these lectures is the small hints for math and Python along the way. Thanks for being so detail oriented!
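For reference, here is a minimal sketch of the log-sum-exp identity behind that e^a trick (plain PyTorch; the `logsumexp` helper below is illustrative rather than the lesson's notebook code, and PyTorch also ships `torch.logsumexp`):

```python
import torch

def logsumexp(x):
    # log(sum(exp(x))) == a + log(sum(exp(x - a))) for any constant a,
    # because exp(x) == exp(a) * exp(x - a). Choosing a = max(x) keeps
    # exp() from overflowing when the logits are large.
    a = x.max()
    return a + (x - a).exp().sum().log()

x = torch.tensor([1000.0, 1001.0, 1002.0])
print(torch.exp(x).sum().log())  # inf: exp(1000) overflows float32
print(logsumexp(x))              # tensor(1002.4076): numerically stable
```

The binary-search midpoint low + (high - low) // 2 is the same idea in integer form: rewrite the expression so no intermediate value exceeds the representable range.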
@michaelmuller136 · 5 months ago
Great, very enlightening; I liked the small details too, thank you!
@make_education · 9 days ago
Thanks a lot!
@MichaelChenAdventures · 7 months ago
Thank you Jeremy!
@markozege · a year ago
When we compare the result of the softmax with the one-hot vector (at 1:21:00), we take only the value of the softmax where the one-hot vector is one. Isn't this a missed opportunity to incorporate the other "wrong" predictions into the loss function? E.g. if the model is highly confident in its prediction for some other, wrong class (e.g. numbers that look similar), then being penalised more for this could further speed up training?
@SKULDROPR · a year ago
I think I understand what you are getting at. Focal loss lets you control the kind of penalty you are describing for the other wrong predictions.
@amitaswal7359 · a year ago
If our prediction is wrong, then the log of the probability we predicted for the correct class will be a large negative number, so the loss is already large and the other classes won't matter much.
@SKULDROPR · a year ago
@amitaswal7359 Now that I think about it, you are correct; it is no big deal either way.
@maxim_ml · a year ago
Softmax makes it so that the larger the probability for a wrong class is, the smaller the probability for the right class is, so there is already a penalty for putting high probability on a wrong class. Maybe a loss that penalizes an uneven distribution of probability among the wrong classes would be useful; I guess soft labels already end up doing that.
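To make the thread above concrete, here is a small sketch (plain PyTorch; the numbers and names are mine, not from the lesson) showing that cross entropy reads out only the target entry of the log-softmax, yet the wrong classes still influence the loss through the softmax denominator:

```python
import torch
import torch.nn.functional as F

logits = torch.tensor([[2.0, 0.5, -1.0]])  # one sample, three classes
target = torch.tensor([0])                 # index where the one-hot vector is 1

log_probs = F.log_softmax(logits, dim=1)
loss = -log_probs[0, target[0]]            # only the target column is read out
print(loss, F.cross_entropy(logits, target))  # same value

# The other classes still matter: the softmax denominator sums exp()
# over every logit, so making a wrong class more confident lowers the
# probability of the right class and raises the loss.
confident_wrong = torch.tensor([[2.0, 0.5, 3.0]])
print(F.cross_entropy(confident_wrong, target))  # larger loss than before

# A focal-loss-style variant (as mentioned above) scales the per-sample
# loss by (1 - p_target) ** gamma to emphasise hard, misclassified examples.
```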
@myfolder4561 · 4 months ago
I found the walkthrough of backpropagation in this lesson a bit lacking and jumpy. I highly recommend Andrej Karpathy's Zero to Hero series for those who want to dig a bit deeper into the math and step through applying the chain rule to derive the gradients.
@bomb3r422 · 3 months ago
I second that; it was a bit rushed and unclear. Andrej does a fantastic job of explaining backprop.
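For anyone wanting the chain-rule step spelled out, here is a minimal hand-derived backward pass for a single linear layer with MSE loss, checked against autograd (a sketch with made-up shapes, not the lesson's notebook code):

```python
import torch

x = torch.randn(16, 5)                      # batch of 16, 5 features
y = torch.randn(16, 1)                      # regression targets
w = torch.randn(5, 1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)

out = x @ w + b                             # forward pass
loss = ((out - y) ** 2).mean()
loss.backward()                             # autograd's answer

# Chain rule by hand:
#   dloss/dout = 2 * (out - y) / N          where N = out.numel()
#   dloss/dw   = x.T @ dloss/dout           (matches w's 5x1 shape)
#   dloss/db   = dloss/dout summed over the batch
dout = 2.0 * (out.detach() - y) / out.numel()
dw = x.t() @ dout
db = dout.sum(0)

print(torch.allclose(dw, w.grad), torch.allclose(db, b.grad))  # True True
```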
@DaddyCool-o4f · a year ago
1:06:03
@carnap355 · a year ago
good one 👉🥺👈