Lesson 3: Practical Deep Learning for Coders 2022

103,145 views

Jeremy Howard

00:00 Introduction and survey
01:36 "Lesson 0" How to fast.ai
02:25 How to do a fastai lesson
04:28 How to not self-study
05:28 Highest voted student work
07:56 Pets breeds detector
08:52 Paperspace
10:16 JupyterLab
12:11 Make a better pet detector
13:47 Comparison of all (image) models
15:49 Try out new models
19:22 Get the categories of a model
20:40 What’s in the model
21:23 What does the model architecture look like?
22:15 Parameters of a model
23:36 Create a general quadratic function
27:20 Fit a function by hand and eye
30:58 Loss functions
33:39 Automate the search of parameters for better loss
42:45 The mathematical functions
43:18 ReLU: Rectified linear unit
45:17 Infinitely complex function
49:21 A chart of all image models compared
52:11 Do I have enough data?
54:56 Interpreting gradients in units
56:23 Learning rate
1:00:14 Matrix multiplication
1:04:22 Build a regression model in a spreadsheet
1:16:18 Build a neural net by adding two regression models
1:18:31 Matrix multiplication makes training faster
1:21:01 Watch out! It's chapter 4
1:22:31 Create dummy variables of 3 classes
1:23:34 A taste of NLP
1:27:29 fastai NLP library vs Hugging Face library
1:28:54 Homework to prepare you for the next lesson
Many thanks to bencoman, wyquek, Raymond Wu, and fmussari on forums.fast.ai for writing the transcript.
Timestamps thanks to "Daniel 深度碎片" on forums.fast.ai.

Comments: 81
@kentcartridge8709 5 months ago
Wow, this guy is a deep learning/ML genius! I've been studying deep learning for 2 months now, and I consider myself quite good at math and coding. I've been looking for an explanation of what is happening under the hood when the model is training - an "explain like I'm 5" type of explanation. But the only things I could find were academic explanations of how a deep neural network trains with matrix multiplication of weight, bias, backpropagation, etc. I've probably watched 30 videos of those that are all copycats of each other, and I think those people don't know what they are talking about, just spitting out what they saw or read in academic papers/courses. This video was an eye-opener; the guy really knows what is happening behind the scenes, and his 30 years of expertise in the field really shows in those simple yet very easy-to-understand explanations. Thank you! 🙏
@TheCJD89 1 year ago
The quadratic example was a really good illustration of how gradient descent works and is great for building intuition. Then the Excel example cements the understanding with a solid dataset. This is my favourite of the 3 lectures so far.
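For anyone who wants to recreate that intuition in code, here is a minimal sketch of fitting a quadratic by gradient descent, assuming PyTorch is available; the data, learning rate, and step count are made up for illustration:

```python
import torch

def f(x, params):
    a, b, c = params
    return a * x**2 + b * x + c  # a general quadratic

# synthetic data from a "true" quadratic 3x^2 + 2x + 1, plus a little noise
x = torch.linspace(-2, 2, 20)
y = 3 * x**2 + 2 * x + 1 + torch.randn(20) * 0.1

params = torch.tensor([1.0, 1.0, 1.0], requires_grad=True)
lr = 0.01
for step in range(2000):
    loss = ((f(x, params) - y) ** 2).mean()  # mean squared error
    loss.backward()                          # compute gradients
    with torch.no_grad():
        params -= lr * params.grad           # step downhill
        params.grad.zero_()                  # clear for the next iteration
print(params)  # should end up close to (3, 2, 1)
```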
@manug4604 1 year ago
I "knew" that deep learning models used the sum of wi·xi + b function, and I "knew" that it supposedly was used because it was an "all purpose" function, but now, thanks to you Jeremy, I know WHY it's an "all purpose" function. 10/10 explanation. Math should always be explained like this; it's actually beautiful to see it all unfold.
@orchestra4841 8 months ago
I've watched so many videos... read so many blogs... books... trying to understand what a neural network is and how it learns. You explained it perfectly, making all the words just fit. The meanings become obvious when presented like this, and you did it in... 15 minutes 🔥
@chronicfantastic 1 year ago
The quadratic section is a beautifully crafted example. Thanks
@d14drums 1 year ago
yeah that made it fully click for me
@TomHutchinson5 3 months ago
I greatly appreciate this effort to uplift the community worldwide
@ed23333 1 year ago
Great lesson!! Jeremy deciding to approach chapter 4 differently after seeing many students quit at that point really shows that he cares about students' learning. The effort is greatly appreciated!🙏
@andrespineda7620 1 year ago
Great foundational lecture. Jeremy has a relaxed, non-intimidating approach that works for me. A brilliant step-by-step walk into the deep end of the pool without getting us lost or scared :) Thank you for taking the time to put this together.
@howardjeremyp 1 year ago
Glad you enjoyed it!
1 year ago
Unbelievable content! Thanks to all who have made it possible!
@santiago4198 1 year ago
Amazing talk! Thanks thanks thanks! You're making the machine learning field so much easier to understand, and that's something invaluable.
@mindurownbussines 1 year ago
I had watched hundreds of deep learning tutorials and read too many DL books, yet I couldn't form a clear intuition of what exactly was happening under the hood. Then I watched this video, and 29:00 was my aha moment. Suddenly everything fell into place. Thanks Jeremy
@maraoz 1 year ago
This is god-tier educational content, sir. Thanks for sharing it!
@ucmaster2210 9 months ago
I couldn't understand why ReLU was needed, and now I understand. I'm a programmer and I think this is the DL course for me. The explanation is very easy to understand. Thank you!
@TheBhumbak 8 months ago
I had already heard many of these terms, like loss function, fitting a model, activation function, ReLU. JH is an amazing, amazing teacher; these things are now crystal clear in my mind. Thank you so much JH
@Al-yo7vz 8 months ago
Probably the easiest-to-digest material I've seen on the subject, thank you.
@DevashishJose 1 year ago
Thank you so much Jeremy for making this course. I am going slow but learning a lot every day; you are a very patient teacher. Thank you.
@LeoMedinaDev 1 year ago
This is mind-blowing! Great job explaining all these concepts.
@tumadrep00 1 year ago
As always, an excellent video Jeremy.
@allthatyouare 1 year ago
The Excel example blew my mind. Loved this lesson. Thank you.
@pranavdeshpande4942 11 months ago
Simply amazing! Excellent lecture.
@mrjohn4711 1 year ago
New didactic and methodological ideas - I like them very much. Still a bit rough in execution, but it opens up amazing new territory for approaching neural networks and deep learning... well done!
@abdulkadirguner1282 6 months ago
The explanation of the foundations of deep learning here is just too good! As Jeremy says, one has to remind oneself: that is it, there is no more.
@dingus4138 1 year ago
I've gone through many great courses in all sorts of subjects, but I think this course might be the best. Kudos for putting this fantastic content out there for free for everyone to learn.
@howardjeremyp 1 year ago
Great to hear!
@anonanonous 1 year ago
What a great lesson. Mind blown! Thank you so much! You are a great teacher!
@_ptoni_ 1 year ago
I was lucky to have good math teachers in high school. Jeremy explaining the concepts reminded me of them. Thanks.
@mukhtarbimurat5106 1 year ago
Wow, great explanation! Thanks!
@ekbastu 1 year ago
The quadratic example was just superb. 🎉
@analyticsroot1898 1 year ago
Thanks Jeremy, great tutorial.
@user-ie3fl5ni8y 5 months ago
1:05:02 - "There's a competition I've actually helped create many years ago called Titanic" Biggest flex ever.
@wadeedahmad5212 8 months ago
I am in love with this course.
@acceptapply1491 1 year ago
For those following along, there was a mistake in the spreadsheet range when calculating total loss, both at 1:14:27 and 1:17:40: it selects from row 662 instead of row 4. The correct solved losses are 0.144 and 0.143.
@Levy957 1 year ago
loved the excelTorch!!
@3duybuidoi312 1 year ago
I am a newbie in machine learning, but the approach you took in this lesson to explain difficult concepts makes it so easy to understand. Great work.
@howardjeremyp 1 year ago
Great to hear!
@hovh03 8 months ago
Skip the first 10 minutes to get to the start of the lesson.
@egorasirotiv270 1 year ago
Excellent!
@goutamgarai9624 1 year ago
Great content.
@OscarRangelMX 2 years ago
Thanks! Jeremy, great lecture. I never got into NLP, but now I am understanding it.
@howardjeremyp 1 year ago
Excellent!
@jordankuzmanovik5297 1 year ago
@@howardjeremyp Hi Jeremy, you mentioned that there will be a part 2 of this course. When can we expect those videos? Thanks
@mohdsadik1784 1 year ago
@@jordankuzmanovik5297 You can see the videos now.
@bbalban 1 month ago
Great course! So weird that the videos have fewer than 100k views.
@hovh03 8 months ago
I think one way to improve the slow/fast issue is to recognize that it is actually sometimes both. The parts that need to go faster should go faster, or have the unnecessary bits trimmed out; the parts that are complicated could slow down a bit. Then add a very short, fast "teaching" for each topic and go into the details after it (the short teaching is not a summary), so people who get it can move ahead to the next topic.
@ShravanKumar147 1 year ago
👏👏👏 applause from online
@moustaphaebnou3817 8 months ago
Thank you for providing this insightful course, which has been instrumental in cementing intuition. I have a question regarding the updating loop at the 41:30 mark. It appears there may be a minor oversight: shouldn't we reset all gradients to zero prior to each subsequent call of backward()? By default, PyTorch accumulates gradients across iterations, which would eventually lead to inaccuracies in the gradient computation.
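The usual PyTorch pattern does indeed reset gradients on every iteration, because backward() adds into whatever is already stored in .grad. A minimal sketch of that pattern (quad_mae and the data below are stand-ins, not the notebook's exact code):

```python
import torch

x = torch.linspace(-2, 2, 20)
y = 3 * x**2 + 2 * x + 1          # made-up target data

def quad_mae(params):
    a, b, c = params
    return (a * x**2 + b * x + c - y).abs().mean()  # mean absolute error

abc = torch.tensor([1.0, 1.0, 1.0], requires_grad=True)
for i in range(10):
    loss = quad_mae(abc)
    loss.backward()                # accumulates into abc.grad
    with torch.no_grad():
        abc -= abc.grad * 0.01     # gradient descent step
        abc.grad.zero_()           # reset, so the next backward() starts fresh
    print(f"step={i}; loss={loss:.2f}")
```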
@prameshgautam5239 1 year ago
17:27 minor correction: it's the error rate going down, not the accuracy.
@bolshak34 4 months ago
At 28:40 I believe you run the cell again and it changes the tensors slightly, which drove me a bit mad trying to figure out why my results were different.
@sunr8152 1 month ago
Building a neural net in a spreadsheet. Heck yea!
@cantabr0 1 year ago
Excellent tutorial! I have one question: in the Excel example, why are Parch and SibSp not normalized? Because they are not "big enough" to negatively interfere?
@danielhemmati 1 year ago
Basically we have data; now let's create a general function (from those data) that can kind of reproduce those data and also predict what the next data would be.
@diettoms 1 year ago
I just made an NN in Excel. Wow. If you want to predict two different things, do you just have a separate set of weights and Lins for the second item?
@Moiez101 1 month ago
Just a quick question: by "reproduce the code", does it mean that one should be able to write out the code from memory/understanding, i.e. know all of the parameters within the arguments as well as the defined functions? Of course that would be the best-case scenario, but I feel it would get in the way of moving through the course. One does not need to be able to reproduce the code perfectly, just understand what the parameters are doing, right?
@harumambaru 1 year ago
48:55 the computer draws the owl :)
@toromanow 1 year ago
So Paperspace appears to not be free. When I try starting a notebook it forces me to upgrade to $8/month. Is this still the recommended platform? Is it worth it?
@toromanow 1 year ago
Looks like it's not worth it at all. I purchased the subscription only to get an error message that 'The VM I selected is currently not available, please select another'. They did show me a list of available VMs; the available ones were at an additional cost of 0.7-3.50 USD per hour. Yes, that's on top of the 8 USD/month subscription.
@tha_ba2s 8 months ago
1:00:50 How did we go from trying to fit a function to computer vision's pixels? The jump from ReLUs applied to linear functions to talking about pixels in an image is not clear. Can you please elaborate? Why did you say each pixel will have a variable of its own? What is the mapping from computer vision to function fitting in this context? Why is every single pixel in an image a single variable? What is the rationale?
@matthewrice7590 1 year ago
I'm slightly confused about the intuition behind how multiple ReLUs can lead to a squiggly line. Wouldn't they more specifically lead to a line that is always either flat or gradually increasing, because the output must be >= 0?
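The decreasing segments come from negative coefficients: each ReLU's output is >= 0, but it can be added into the sum with a negative weight. A tiny sketch (plain Python, values chosen by hand):

```python
def relu(z):
    return max(z, 0.0)

def model(x):
    # two ReLUs added together; the second has a NEGATIVE weight,
    # so the combined function rises until x=1, then falls
    return 1.0 * relu(x - 0.0) + (-2.0) * relu(x - 1.0)

for x in [0.0, 0.5, 1.0, 1.5, 2.0]:
    print(x, model(x))   # 0.0, 0.5, 1.0, 0.5, 0.0 -> up, then down
```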
@greatfate 1 year ago
At 43:00, isn't there supposed to be abc.zero_grad() to zero out the gradients?
@solaxun 1 year ago
I was wondering the same... otherwise wouldn't each backward call accumulate progressively larger gradients, from keeping around the prior gradient before the updates occurred?
@greatfate 1 year ago
@@solaxun Yup, exactly. It's one of the worst bugs (it's bitten me in the neck several times).
@brentmarquez9057 7 months ago
At 1:14:13 Jeremy describes calculating a loss. Can anyone explain this more, i.e. why squaring the difference between the output of the linear equation and whether the passenger survived (0 or 1) for each row equates to a loss or error? It seems arbitrary, and I'm not understanding why this is how we judge an error rate.
@curiousboy7015 5 months ago
We want to make the prediction equal to the actual value, so we don't want a large gap between the actual and predicted values. Thus we define the loss as the square of the distance between the actual and predicted values (squaring makes the loss grow faster when the distance is large). Now we just have to minimize the loss, which happens by changing the weights and biases.
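As a toy illustration of that squared-error idea (the numbers below are made up, not the actual spreadsheet values):

```python
predictions = [0.8, 0.2, 0.6]   # output of the linear model per passenger
survived    = [1,   0,   0]     # actual outcome, 0 or 1

losses = [(p - s) ** 2 for p, s in zip(predictions, survived)]
total_loss = sum(losses) / len(losses)
print(losses)      # [0.04, 0.04, 0.36] -- the big miss is on the last row
print(total_loss)  # ~0.147; lower means better predictions overall
```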
@adityabhatt04 1 year ago
Where can I find the walkthrough of Gradio?
@tha_ba2s 8 months ago
1:11:41 was a nice contradiction :D
@abdelhaksaouli8802 10 months ago
How much difference between train_loss and validation_loss should be accepted?
@GtorT-ec6cc 6 months ago
Lesson 1: "needing math is a myth" - awesome, let's continue. Lesson 3: here are all these math terms/equations, and you have no idea what they are or what you are looking at. Now I'm overwhelmed and feel defeated.
@curiousboy7015 5 months ago
Try doing at least high school math.
@garfieldnate 1 year ago
I don't quite see how the Excel example qualifies as a "deep" neural network, since the layers were not stacked on top of each other but added together. The example is still great, though, and I could see how to stack the layers.
@elnur0047 1 year ago
Hi, can you elaborate a bit more on this? How does stacking differ from the approach in the video?
@yaptor0 1 year ago
@@elnur0047 Rather than both layers multiplying the same inputs, the 2nd one would multiply the outputs from the previous layer. I was also a little confused when he just added them up at the end instead of feeding one into the other.
@tungo96 1 year ago
Yeah, I had exactly the same doubt when I saw that; these are still 2 independent layers.
@lifthrasir1609 1 year ago
Jeremy actually confirms that at 1:16:15.
@JayPinho 1 year ago
@@yaptor0 How would that calculation work? Doesn't he have to first sum up all the products from a given layer and ReLU them (i.e. take the max of the sumproduct and 0)? If the 2nd layer simply accepted the individual products as inputs, wouldn't this 2-layer network just be a linear function?
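A small sketch of the distinction discussed in this thread, assuming PyTorch; the sizes and weights are arbitrary:

```python
import torch

x = torch.randn(4)                       # one input row with 4 features
W1, b1 = torch.randn(3, 4), torch.randn(3)
W2, b2 = torch.randn(1, 3), torch.randn(1)

# "Added together" (the spreadsheet version): two independent linear models
# of the SAME inputs, each ReLU'd, then summed.
parallel = torch.relu(W1[0] @ x + b1[0]) + torch.relu(W1[1] @ x + b1[1])

# "Stacked" (a deep net): layer 2 consumes layer 1's OUTPUTS, not the inputs.
h = torch.relu(W1 @ x + b1)              # hidden layer: 3 activations
stacked = W2 @ h + b2                    # second layer reads those activations
print(parallel, stacked)
```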
@sasukeslime 9 months ago
=IF([@Embarked]="S", 1, 0) and other IF statements like this seem not to work for me. Has anyone experienced the same thing?
@MattMcConaha 1 year ago
I tried to make a Paperspace account and accidentally mistyped the phone verification, so they decided that I'm no longer allowed to verify with my phone number. Disappointing.
@romainrouiller4889 1 year ago
VPN.
@vokoaxecer 1 year ago
I don't even know how to use Excel.