Lesson 13: Deep Learning Foundations to Stable Diffusion

15,803 views

Jeremy Howard

1 day ago

Comments: 11
@mattst.hilaire9101 1 year ago
That e^a trick shows that, even though algebra is such a pain, it comes in handy so often to make things run smoothly. It reminds me of the trick for avoiding overflow in binary search: mid = low + ((high - low) / 2). My favorite thing about these lectures is the small hints for math and Python along the way. Thanks for being so detail oriented!
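For anyone who wants to see the trick in isolation, here is a minimal NumPy sketch (not the lecture's exact code) of the max-subtraction form of the e^a trick, which keeps the softmax from overflowing:

    import numpy as np

    def stable_softmax(logits):
        # e^(x - a) = e^x / e^a, so subtracting a constant from every logit
        # leaves the softmax unchanged; choosing a = max(logits) keeps exp()
        # from overflowing on large inputs.
        shifted = logits - logits.max()
        exps = np.exp(shifted)
        return exps / exps.sum()

    print(stable_softmax(np.array([1000.0, 1001.0, 1002.0])))  # no overflow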
@michaelmuller136 1 month ago
Great, very enlightening. I liked the small details too, thank you!
@MichaelChenAdventures 3 months ago
Thank you Jeremy!
@myfolder4561 18 days ago
I found the walkthrough of backpropagation in this lesson a bit lacking and jumpy. I highly recommend Andrej Karpathy's Zero to Hero series for those who are interested in digging a bit deeper into the math and stepping through how the chain rule is applied to derive the gradients.
@markozege 1 year ago
When we compare the result of the softmax with the one-hot vector (at 1:21:00), we take only the value of the softmax where the one-hot vector is one. Isn't this a missed opportunity to incorporate the other "wrong" predictions into the loss function? E.g. if the model is highly confident in its prediction for some other, wrong class (e.g. numbers that look similar), then being penalised more for this could further speed up training?
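For context, a small PyTorch sketch (with made-up logits) showing that cross-entropy reads only the target entry of the log-softmax, yet the other logits still affect the loss through the softmax normaliser:

    import torch
    import torch.nn.functional as F

    logits = torch.tensor([[2.0, 0.5, -1.0]])  # hypothetical scores for 3 classes
    target = torch.tensor([0])                 # index where the one-hot vector is 1

    log_probs = F.log_softmax(logits, dim=1)
    manual = -log_probs[0, target[0]]          # only the target entry is read...
    builtin = F.cross_entropy(logits, target)
    print(manual.item(), builtin.item())       # ...and the two losses match

    # The softmax denominator sums exp() over *all* classes, so a confident
    # score on a wrong class still pushes this loss up.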
@SKULDROPR 1 year ago
I think I understand what you are getting at. Focal loss lets you control the amount of penalty you are talking about for the other wrong predictions.
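For reference, a minimal sketch of the standard focal loss (Lin et al., 2017) built on cross-entropy; gamma is the tunable focusing parameter, and the function name here is just illustrative:

    import torch
    import torch.nn.functional as F

    def focal_loss(logits, target, gamma=2.0):
        # Scales each example's cross-entropy by (1 - p_t)^gamma, where p_t is
        # the predicted probability of the true class, so well-classified
        # examples are down-weighted and hard ones dominate the gradient.
        log_p = F.log_softmax(logits, dim=1)
        log_pt = log_p.gather(1, target.unsqueeze(1)).squeeze(1)
        pt = log_pt.exp()
        return (-(1 - pt) ** gamma * log_pt).mean()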
@amitaswal7359 1 year ago
If our prediction is wrong, then the log value of our wrong prediction will be a large negative number, so it won't matter.
@SKULDROPR 1 year ago
@amitaswal7359 Now that I think about it, you are correct; it is no big deal either way.
@maxim_ml 1 year ago
Softmax makes it so that the larger the probability for a wrong class is, the smaller the probability for the right class is. So there already is a penalty for having a high probability for the wrong class. Maybe having a loss that penalizes an uneven distribution of probabilities among the wrong classes would be useful. I guess Soft Labels already end up doing that.
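A tiny worked check of that point, using hypothetical logits where class 0 is correct: raising only a wrong class's logit is enough to increase the loss.

    import torch
    import torch.nn.functional as F

    target = torch.tensor([0])
    for wrong_logit in [0.0, 2.0, 4.0]:
        logits = torch.tensor([[1.0, wrong_logit, 0.0]])
        print(wrong_logit, F.cross_entropy(logits, target).item())
    # The loss grows with the wrong-class logit because the softmax denominator
    # takes probability mass away from the correct class.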
@DaddyCool-o4f 1 year ago
1:06:03
@carnap355 10 months ago
good one 👉🥺👈
Lesson 14: Deep Learning Foundations to Stable Diffusion
1:49:37
Jeremy Howard
13K views
Lesson 11 2022: Deep Learning Foundations to Stable Diffusion
1:48:17
Jeremy Howard
22K views
How to Create a Neural Network (and Train it to Identify Doodles)
54:51
Sebastian Lague
1.9M views
How are memories stored in neural networks? | The Hopfield Network #SoME2
15:14
This is why Deep Learning is really weird.
2:06:38
Machine Learning Street Talk
388K views
What is backpropagation really doing? | Chapter 3, Deep learning
12:47
3Blue1Brown
4.6M views
Deep Learning: A Crash Course (2018) | SIGGRAPH Courses
3:33:03
ACMSIGGRAPH
3M views
Lesson 15: Deep Learning Foundations to Stable Diffusion
1:37:18
Jeremy Howard
12K views
Lesson 12: Deep Learning Foundations to Stable Diffusion
1:50:24
Jeremy Howard
17K views
Lesson 17: Deep Learning Foundations to Stable Diffusion
1:56:33
Jeremy Howard
9K views
CppCon 2014: Mike Acton "Data-Oriented Design and C++"
1:27:46