Lecture 6: Backpropagation

108,382 views

Michigan Online

Comments: 60
@sachinpaul2111 3 years ago
Prof...stop ...stop...it's already dead! Oh BP you thought you were this tough complex thing and then you met Prof. Justin Johnson who ended you once and for all! The internet is 99.99% garbage but content like this makes me so glad that it exists. What a masterclass! What a man!
@quanduong8917 3 years ago
this lecture is an example of a perfect technical lecture
@odysy5179 3 months ago
I work in ML and am reviewing for interviews; this lecture is extremely thorough!
@ritvikkhandelwal1462 3 years ago
Amazing! One of the best Backprop explanation out there!
@piotrkoodziej4336 3 years ago
Sir, you are amazing! I've wasted hours reading and watching internet gurus on this topic, and they could not explain it at all, but your lecture worked!
@ShuaiGe-n3g 13 days ago
I've just watched 30 minutes, but I'm so excited to comment here that it's definitely the best course for backpropagation!!!!
@vardeep277 4 years ago
Dr. JJ, you sly son of a gun. This is one of the best things ever. 47:39, the way he asks if it is clear. It is damn clear man. Well done!
@rookie2641 2 years ago
Best lecture ever explaining the math behind backpropagation
@achronicstudent 2 months ago
Finally!! I understood how to apply backpropagation. Thank you sir! Thank you!
@dbzrz1048 2 years ago
finally some coverage on backprop with tensors
@ryliur 3 years ago
Future reference for anybody, but I think there's a typo @ 50:24. It should be dz/dx * dL/dz when using the chain rule to find dL/dx.
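For readers hitting this timestamp, a minimal restatement of the relation being corrected (generic notation; the slide's symbols may differ slightly):

```latex
% Chain rule at a node z = f(x), with the loss L downstream of z:
% downstream gradient = local gradient times upstream gradient
\frac{dL}{dx} \;=\; \frac{dz}{dx}\cdot\frac{dL}{dz}
% The second factor is dL/dz (the upstream gradient), not dL/dx,
% which is the typo the comment points out.
```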
@liviumircea6905 4 months ago
At 58:56 Prof. Johnson says something huge, IMHO: the final equation is not formed by Jacobians. I finally got it. Simply the best explanation of backprop. Thank you, Prof. Johnson.
@tomashaddad 3 years ago
I don't get how backpropagation tutorials by 3B1B, StatQuest, etc. get so much praise, when none of them are as succinct as you were in those first two examples. Fuck, that was simple.
@shoumikchow 4 years ago
10:02 Dr. Johnson means "right to left", not "left to right"
@KeringKirwa 10 months ago
You earned a like, a comment and a subscriber... what an explanation.
@kentu3892 7 months ago
Such an amazing lecture with easy-to-understand examples!
@VikasKM 3 years ago
wooooowww.. what a superb lecture on backpropagation. simply amazing.
@mihailshutov105 6 months ago
Thank you very much! I really enjoy this lecture! Hello from Russia with love :)
@minhlong1920 2 years ago
Such an awesome and intuitive explanation!
@artcellCTRL 2 years ago
22:22 the local gradient should be "[1-sigma(1.00)]*sigma(1.00)", where 1.00 is the input to the sigmoid function block
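As context for this correction, the sigmoid derivative identity being used (with the input value 1.00 taken from the comment):

```latex
\sigma(x) = \frac{1}{1+e^{-x}}, \qquad
\sigma'(x) = \sigma(x)\,\bigl(1 - \sigma(x)\bigr)
% At the input x = 1.00: \sigma(1) \approx 0.73,
% so the local gradient is approximately 0.73 \times 0.27 \approx 0.20.
```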
@mohamedgamal-gi5ws 4 years ago
The good thing about these lectures is that Dr. Johnson finally has more time to speak compared to CS231n!
@debasishdas9610 7 months ago
19:38 Shouldn't 0.39 be 0.4 and 0.59 be 0.6? Not sure where the rounding errors have crept in. 49:45 Would it not be much easier to use Einstein index notation?
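On the rounding question, a small Python sketch of the scalar sigmoid example; the values w0=2, x0=-1, w1=-3, x1=-2, w2=-3 are assumed from the slide. If the sigmoid's local gradient is kept at full precision (about 0.1966) rather than rounded to 0.20 first, the input gradients come out as 0.39 and -0.59.

```python
import math

# Sketch of the scalar sigmoid example, assuming the slide's values
# w0=2, x0=-1, w1=-3, x1=-2, w2=-3 (hypothetical reconstruction).
w0, x0, w1, x1, w2 = 2.0, -1.0, -3.0, -2.0, -3.0

z = w0 * x0 + w1 * x1 + w2        # forward: z = 1.0
sig = 1.0 / (1.0 + math.exp(-z))  # sigmoid output, ~0.7311

dsig_dz = (1.0 - sig) * sig       # local gradient ~0.1966 (not exactly 0.2)

dx0 = dsig_dz * w0                # ~ 0.393 -> prints as 0.39, not 0.40
dw0 = dsig_dz * x0                # ~-0.197 -> prints as -0.20
dx1 = dsig_dz * w1                # ~-0.590 -> prints as -0.59, not -0.60
dw1 = dsig_dz * x1                # ~-0.393

print(round(dsig_dz, 4), round(dx0, 2), round(dx1, 2))  # 0.1966 0.39 -0.59
```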
@arisioz 1 year ago
At around 18:20 shouldn't the original equation have a w_2 term that gets added to w_0*x_0+w_1*x_1?
@sainikihil9785 1 year ago
w2 is the bias term
@apivovarov2 1 year ago
@49:44 - Mistake in dL/dx formula - 2nd operand should be dL/dz (not dL/dx)
@훼에워어-u1n 1 year ago
This is extremely hard, but this is a great lecture for sure. You are awesome, Mr. Johnson
@tornjak096 1 year ago
1:03:00 should the dimension of grad x3 / x2 be D2 x D3?
@anupriyochakrabarty4822 2 years ago
How come you are getting the value of e^x as -0.20? Could you explain?
@shauryasingh9553 5 months ago
I finally understand backprop!
@smitdumore1064 1 year ago
Top notch content
@jungjason4473 3 years ago
Can anyone explain 1:08:05? dL/dx1 should be next to dL/dL, not L, when it is subject to function f2'. Thereby backpropagation cannot connect the fs and f's.
@nityunjgoel1438 3 months ago
Masterpiece!!!!
@AndyLee-xq8wq 2 years ago
Amazing courses!
@נירבןזכרי 3 years ago
THANK YOU SO MUCH! Finally an excellent explanation that isn't shallow.
@matthewsocoollike 11 months ago
19:00 where did w2 come from?
@dmitrii-petukhov 4 years ago
Awesome explanation of Backpropagation! Amazing slides! Much better than CS231n.
@MiD-k7u 1 year ago
Great lecture, thank you. I have a question; it would be great if anyone could clarify. When you first introduce vector-valued backpropagation, you have the example showing 2 inputs to the node, each input being a vector of DIFFERENT dimension - when would this be the case in a real scenario? I thought the vector formulation was so that we could compute the gradient for a batch of data (e.g. 100 training points) rather than running backprop 100x. In that case the input vectors and output vectors would always be of the same dimension (100). Thanks!
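On the question above, one hedged illustration (not necessarily the slide's example): a single matrix-vector multiply node already has two inputs of different shapes, a weight matrix and an activation vector, independent of any batching.

```python
import numpy as np

# A matmul node y = W @ x takes two inputs of different dimension:
# W has shape (H, D) and x has shape (D,). Vector/tensor-valued backprop
# has to handle this even for a single training point (no batching).
rng = np.random.default_rng(0)
D, H = 4, 3
W = rng.standard_normal((H, D))
x = rng.standard_normal(D)

y = W @ x                        # forward pass, shape (H,)
dL_dy = rng.standard_normal(H)   # pretend upstream gradient from the loss

dL_dW = np.outer(dL_dy, x)       # shape (H, D), matches W
dL_dx = W.T @ dL_dy              # shape (D,),  matches x
```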
@akramsystems 2 years ago
Beautifully done!
@zainbaloch5541 2 years ago
19:14 Can someone explain computing the local gradient of the exponential function? I mean, how does the result -0.2 come about? I'm lost there!!!
@beaverknight5011 2 years ago
Our upstream gradient was -0.53 right? And now we need the local gradient of e^-x which is -e^-x and -e^-(-1)= -0.36. So upstreamgrad(-0.53) multiplied with local grad (-0.36) is 0.1949 which is approximately 0.2. So 0.2 is not local grad it is local multiplied with upstream
@zainbaloch5541 2 years ago
@beaverknight5011 Got it, thank you so much!
@beaverknight5011 2 years ago
@zainbaloch5541 You are welcome, good luck with your work
@Valdrinooo 1 year ago
I don't think beaver's answer is quite right. The upstream gradient is -0.53. But the local gradient comes from the function e^x not e^-x. The derivative of e^x is e^x. Now we plug in the input which is -1 and we get e^-1 as the local gradient. This is approximately 0.37. Now that we have the local gradient we just multiply it with the upstream gradient -0.53 which results in approximately -0.20.
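A compact version of the arithmetic in the reply above, using the numbers quoted in this thread:

```latex
% exp node: output = e^{x}, input x = -1, upstream gradient -0.53
% local gradient:      \left.\frac{d}{dx}\,e^{x}\right|_{x=-1} = e^{-1} \approx 0.37
% downstream gradient: (-0.53) \times 0.37 \approx -0.20
```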
@genericperson8238 2 years ago
46:16, shouldn't dL/dx be 4, 0, 5, 9 instead of 4, 0, 5, 0?
@kevalpipalia5280 1 year ago
No, the operation is not ReLU; it's the calculation of the downstream gradient. Since the last row of the Jacobian is 0, changes in that value do not affect the output, so the entry is 0.
@kevalpipalia5280 1 year ago
As for passing or killing values of the upstream matrix: you decide pass or kill by looking at the input matrix, here [1, -2, 3, -1]. Looking at the -1, we kill that value from the upstream matrix, so it becomes 0.
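A short sketch of the step being discussed, using the input [1, -2, 3, -1] quoted above and an assumed upstream gradient [4, -1, 5, 9] from the slide: entries whose ReLU input was non-positive get zeroed, which is why the last value becomes 0 rather than passing through.

```python
import numpy as np

# Backprop through an elementwise ReLU: keep upstream entries where the
# *input* was positive, zero the rest (sketch with the thread's numbers).
x = np.array([1.0, -2.0, 3.0, -1.0])        # input to the ReLU (from the thread)
upstream = np.array([4.0, -1.0, 5.0, 9.0])  # assumed upstream gradient dL/dy

downstream = upstream * (x > 0)             # -> [4., 0., 5., 0.]
print(downstream)
```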
@maxbardelang6097 3 years ago
54:51 when my CD player gets stuck on an old Eminem track
@YoshuaAIL 7 months ago
Amazing!
@Nihit-n5n 4 years ago
Great video. Thanks for posting it
@DED_Search 3 years ago
45:00 The Jacobian matrix does not have to be diagonal, right?
@blakerichey2425 3 years ago
Correct. That was unique to the ReLU function. The "local gradient slices" in his discussion at 53:00 are slices of a more complex Jacobian.
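For reference, the reason an elementwise op gives a diagonal Jacobian while a general vector-valued node does not:

```latex
% Elementwise function y_i = f(x_i):
\frac{\partial y_i}{\partial x_j} =
  \begin{cases} f'(x_i), & i = j \\ 0, & i \neq j \end{cases}
% so the Jacobian is diagonal. A general node (e.g. a matrix multiply)
% mixes inputs across coordinates, so its Jacobian is generally dense.
```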
@qingqiqiu 2 years ago
Can anyone clarify the computation of the Hessian matrix in detail?
@aoliveira_ 2 years ago
Why is he calculating derivatives relative to the inputs?
@haowang5274 2 years ago
Thanks, good god, best wishes to you.
@Nur_Md._Mohiuddin_Chy._Toha 19 days ago
👍👍👍👍
@jorgeanicama8625 1 year ago
It is actually much simpler than the way he explained it. I believe he was redundant, and too many symbols hide the beauty of the underlying reasoning of the algorithm and the math behind it. It all could have been explained in less time.
@kushaagra098 3 months ago
Do you have any resources that explain this better?
@benmansourmahdi9097 1 year ago
Terrible sound quality!
@Hedonioresilano 3 years ago
it seems the coughing guy got the china virus at that time
@arisioz 1 year ago
I'm pretty sure you'd be called out as racist back in the days of your comment. Now that it's almost proven to be a china virus...
Lecture 7: Convolutional Networks
1:08:53
Michigan Online
57K views
The spelled-out intro to neural networks and backpropagation: building micrograd
2:25:52
27. Backpropagation: Find Partial Derivatives
52:38
MIT OpenCourseWare
60K views
The Most Important Algorithm in Machine Learning
40:08
Artem Kirsanov
548K views
Neural Networks 6 Computation Graphs and Backward Differentiation
10:31
From Languages to Information
31K views
CS231n Winter 2016: Lecture 4: Backpropagation, Neural Networks 1
1:19:39
Andrej Karpathy
302K views
Beren Millidge: Learning in the brain beyond backprop
47:35
Center for Cognitive Neuroscience Berlin
6K views
The Dome Paradox: A Loophole in Newton's Laws
22:59
Up and Atom
740K views
Watching Neural Networks Learn
25:28
Emergent Garden
1.4M views
Understanding Backpropagation In Neural Networks with Basic Calculus
24:28