Building makemore Part 4: Becoming a Backprop Ninja

182,330 views

Andrej Karpathy

1 day ago

We take the 2-layer MLP (with BatchNorm) from the previous video and backpropagate through it manually, without using PyTorch autograd's loss.backward(): through the cross-entropy loss, 2nd linear layer, tanh, batchnorm, 1st linear layer, and the embedding table. Along the way, we get a strong intuitive understanding of how gradients flow backward through the compute graph, at the level of efficient Tensors rather than individual scalars as in micrograd. This builds competence and intuition around how neural nets are optimized and sets you up to more confidently innovate on and debug modern neural networks.
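For a taste of the exercise, here is a minimal self-contained sketch (my own illustrative names and shapes, not the notebook's) of checking one manual backward step, tanh, against PyTorch autograd:

import torch

# tiny illustrative check: manual tanh backward vs. autograd
x = torch.randn(32, 64, requires_grad=True)
h = torch.tanh(x)
h.sum().backward()                 # populates x.grad

dh = torch.ones_like(h)            # dloss/dh for loss = h.sum()
dx = (1.0 - h**2) * dh             # tanh backward: d tanh(x)/dx = 1 - tanh(x)**2
print(torch.allclose(dx, x.grad))  # True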
!!!!!!!!!!!!
I recommend you work through the exercise yourself, but do it in tandem with the video: whenever you are stuck, unpause and see me give away the answer. This video is not really intended to be simply watched. The exercise is here:
colab.research.google.com/dri...
!!!!!!!!!!!!
Links:
- makemore on github: github.com/karpathy/makemore
- jupyter notebook I built in this video: github.com/karpathy/nn-zero-t...
- colab notebook: colab.research.google.com/dri...
- my website: karpathy.ai
- my twitter: / karpathy
- our Discord channel: / discord
Supplementary links:
- Yes you should understand backprop: / yes-you-should-underst...
- BatchNorm paper: arxiv.org/abs/1502.03167
- Bessel’s Correction: math.oxford.emory.edu/site/mat...
- Bengio et al. 2003 MLP LM www.jmlr.org/papers/volume3/b...
Chapters:
00:00:00 intro: why you should care & fun history
00:07:26 starter code
00:13:01 exercise 1: backpropping the atomic compute graph
01:05:17 brief digression: bessel’s correction in batchnorm
01:26:31 exercise 2: cross entropy loss backward pass
01:36:37 exercise 3: batch norm layer backward pass
01:50:02 exercise 4: putting it all together
01:54:24 outro

Comments: 275
@Davourflave 1 year ago
I can say without a doubt that there are not many highly qualified, passionate researchers who are also able to teach their subject this well. Sharing knowledge in this way is the greatest gift a researcher can give to the world! I, and everyone else, thank you for that! :)
@vaguebrownfox 11 months ago
I saw his previous micrograd lecture and it literally moved me to tears. I had endured the struggle of drowning in PyTorch source code, trying to understand what it is that they are really doing! For someone who simply can't move on without cutting open abstractions, this is a pure blessing.
@uniquescience7047 6 months ago
Exactly the same for me, @vaguebrownfox
@BradCordovaAI 1 year ago
Andrej, you are a gifted teacher. I love this teaching style: 1. Start from scratch with a simple, specific model to set the structure and ideology of the problem. 2. Add necessary, well-motivated complexity to get to where we are today. 3. Seamlessly transfer to modern tooling (e.g. PyTorch) to solve modern problems. 4. Make it all simple, compressed to the essentials without unnecessary lingo. It reinvigorates my passion for the field. Thank you very much for taking so much time to make this available for free for everyone.
@nohcho_9548 1 year ago
Ky .
@cojocarucosmin202 1 year ago
Bro, I just want to say that for the past 3 years I've been looking everywhere on the Internet for an explanation of backpropagation like this. I found all kinds of things (e.g. Jacobian differentiation) but none actually made sense until today. You are the best; you bring so much value and let others light their candles at your light.
@shivanandvishwakarma6442 22 days ago
The line "let others light their candles at your light" 👏👏👏
@kshitijbanerjee6927 1 year ago
These lectures are literally GOLD. I'd pay for these, but Andrej is kind enough to give everything for free. I hope others find these gold lectures. Thank you so much for doing this. Please don't lose steam and I hope you continue to create them.
@kemalware4912 1 year ago
I will put your poster on my wall to look at you every day and remember what a great person you are. Your smile is contagious.
@weystrom 1 year ago
Man, what a time to be alive. Imagine how hard it would be to get this kind of information just a couple of decades ago. And now it's free and easily accessible at any convenient time. Thank you, Andrej, truly.
@andonisudupe3446 1 year ago
yes, I always wanted to be a backprop ninja; now my dream will come true, thanks Andrej!
@kishantripathi4521 2 months ago
No words to explain my feelings. Karpathy is just Supercalifragilisticexpialidocious.
@dohyun0047 1 year ago
I am still on part 2 but I had to write this comment: your part 4 thumbnail is awesome and funny. I am very grateful for these lectures. I could feel the artificial intelligence knowledge that was tangled up inside me become well aligned because of you.
@nova2577 10 months ago
I spent almost a whole day digesting this video. It's definitely worth it!
@Sickkkkiddddd 10 months ago
Bruh, I'd be paying a shit ton of money in education for this otherwise free knowledge if it wasn't for your videos. Thank you so much, man. I cannot believe the ease with which you explain what seemed complex to me from a distance years ago. I cannot even believe I understand this stuff, man.
@aaronwill1983 1 year ago
Binge-worthy! Ran through all the lectures back-to-back after discovering them. On the edge of my seat for more. Thanks Andrej!
@DanteNoguez 1 year ago
I was "taught" calculus in high school but didn't really understand anything at all. Now, after seven years of no formal math education at all, I was able to immediately understand this exercise thanks to your lecture on micrograd. You're a brilliant teacher and I'm really grateful for that!
@RebeccaBrunner 1 year ago
Thank you for providing a series that's so approachable but doesn't shy away from explaining the details. Also love the progression through all the impactful papers
@efogleman 1 year ago
This lecture series is excellent. Seriously, some of the best learning resources for Neural Networks available anywhere: up-to-date, and goes deep into the details. These lectures with detailed examples and notebooks are an amazing resource. Thanks so much for this, Andrej.
@Themojii 1 year ago
Hello Andrej, I truly love that you included exercises in your video. Your suggestion to first attempt the exercises and then watch as you provide the solutions is the most effective way for me personally to grasp the concepts. Thank you for your outstanding work!
@kimiochang 1 year ago
Finally completed this one. I have to say this lecture is the most valuable one throughout all my studying of deep learning. As always, thank you Andrej for your generosity. Moving on to the next one!
@martakosiv6483 3 months ago
Thanks for the great content! That's the best explanation I've ever seen! Also, regarding the last backpropagation in exercise 1, I found the following method in PyTorch:
dC = torch.zeros_like(C)
dC.index_add_(0, Xb.view(-1), demb.view(-1, demb.shape[2]))
cmp('C', dC, C)
@borismeinardus 1 year ago
Andrej is providing the world with so much value, be it through his professional work in the industry (e.g. Tesla AI) or through education. He is literally one of the greatest of all time but is so down to earth and such a sweetheart. Thank you very much for your hard work to make it easier for all the rest of us and for inspiring us! 💚
@kapitan104 1 year ago
Andrej, you are the best teacher. I am 100% sure these lectures will become CORE watching for any student who starts their ML journey. I hope we will have such lectures in CV and RL.
@peterszilvasi752 1 year ago
I really appreciate the lectures that you share with us. It is not about definitions, rote memorization, or even exercises per se. Instead, first-principles thinking: take a big "mess" and then break it down into small, manageable pieces. You do not solely demonstrate the problem-solving approaches brilliantly but also ignite curiosity to dig deeper (to go down to the level of atoms) into a specific topic. Thank you for the preparation, the passion, and the memes! :D
@cangozpinar 1 year ago
Thank you, thank you, thank you ... What you are doing with these videos is amazing !
@ThemeParkTeslaCamping360 1 year ago
Excellent Andrej!! Can't wait for your next lecture. I'm so excited and motivated 🥰
@parasmaliklive 1 year ago
Thank you Andrej. I really appreciate your work.
@joneskiller8 3 months ago
This dude is based! I can actually cognitively map and visualize his explanations, and I am so grateful to have found him. Keep the videos coming please, and thank you so much.
@mohammadhomsee8640 7 months ago
That's incredible!!! It's impossible to convey such knowledge without a very deep understanding of neural nets. I really appreciate your work. I hope we can get more videos. This is definitely a golden video!!! Thank you so much!
@user-oi3be8dm8x 1 year ago
Thanks for the top-level video. Can't wait to see more. Thanks 🙏
@hermestrismegistus9142 1 year ago
This lecture really makes me appreciate autograd. I commend the ancient ML practitioners for surviving this brutality.
@kaushaljani814 10 months ago
Pure gem...💎💎💎 Thanks Andrej for this amazing lecture.
@DiogoSanti 7 months ago
What a wonderful effort Andrej. Thanks for this!
@seanwalsh358 1 month ago
I suspect this is a video I'll be coming back to for years to come. Thanks!
@jayhyunjo141 1 year ago
As a bioinformatician and a part-time data scientist, I should say this series is the best educational YouTube material on deep neural networks. Thank you for the video and for offering the opportunity to learn.
@greatfate 1 year ago
These videos are unironically pretty fun! You're not just a genius researcher but an amazing teacher, Andrej
@uncoded0 1 year ago
Thanks for the videos! Please make a lot more! Please continue to share your knowledge with the world! Thanks
@lagousis 1 year ago
Thanks for all the time you put into that lecture!
@kaspiimal3340 1 year ago
Andrej, thank you for the work you put into this (and previous) lectures ❤. Thanks to you, I and a lot of other people can enjoy learning NNs 😍 from the best.
@fbf3628 1 year ago
Wow! This lecture is truly incredible and I have certainly learned a ton. Thank you very much, Andrej :)
@sevarbg83 11 months ago
Have mercy Andrej, my brain hurts! :D Feels like I'll need years to digest just these few lectures.
@vivekakaviv 5 months ago
This was very insightful. Andrej, you are the best!
@owendorsey5866 1 year ago
This is the first time I truly understood it. Thank you!
@Nimrad780 1 year ago
Thank you for "making everything fully explicit"!
@sauloviedo2677 1 year ago
Andrej is on fire! Thanks for this awesome material!
@srikika 1 year ago
Love your channel and content, Andrej. Please keep more videos coming!
@JTMoustache 1 year ago
Love that he explains MATLAB as if it is not still used in 80% of labs in the world. Living in a world of tech giants will heal the MATLAB PTSD. This is a masterclass: I've never seen it explained so thoroughly and clearly, and I've been around. PEAK EXPERTISE
@AlecksSubtil 6 months ago
Simply the best! Very good lessons delivered with such mastery and passion; thanks a lot for sharing
@badreddinefarah1127 1 year ago
Thanks a lot Andrej, can't wait to see more 🙏🙏
@BlockDesignz 1 year ago
I come to each of these videos to like them. I can't keep up with his pace of release but I will watch all of them in due time. Thanks Andrej.
@Raix03 4 months ago
I almost completed Exercise 1 all on my own, but I had to step back for a day to refresh the basics because my college algebra was a bit rusty after 10 years of not using it. Exercises 2 and 3 totally overwhelmed me. However, when I follow your explanations, I understand everything. This is huge, because I remember that professors at my college couldn't explain complex concepts so easily. Andrej, you are a gift to this world!
@danielkusuma6473 1 year ago
Just grateful to have the chance to learn from Andrej Karpathy. Thanks heaps, it means a lot!
@FrozenArtStudio 1 year ago
my favorite prof with a new lecture
@vulkanosaure 1 year ago
I just finished part 2 yesterday night, and I was feeling blue that there was only 1 video left! Then this came up in my notifications; I just had to share my excitement :)))
@arjunsinghyadav4273 1 year ago
Sprinkling Andrej magic throughout the video - had me cracking up at 43:40
@muhannadobeidat 1 year ago
Excellent series and delivery as usual. Thanks for all the hard work you put into this. Parts of it are challenging to get through, but it is a joy to decipher all the moving parts. I think a good understanding of the math behind backprop helps in understanding this. A good resource that covers this from a math perspective is Andrew Ng's original Neural Net course.
@TheOrowa 1 year ago
I believe the loop implementing the final derivative at 1:24:21 can be vectorized if you just rewrite the selection operation as a matrix operation, then do a matmul derivative like elsewhere in the video:
X_e = F.one_hot(Xb, num_classes=27).float()  # convert the selection into a selection matrix: emb = C[Xb] is equivalent to X_e @ C
dC = (X_e.permute(0, 2, 1) @ demb).sum(0)    # differentiate like any other matrix operation (dC = X_e.T @ demb; permute and sum track the batch dimension)
@barni_7762 1 year ago
Imo it's cleaner if you do this instead:
Xe = F.one_hot(Xb.flatten(), num_classes=27).float().permute(1, 0)
dC = Xe @ demb.view((-1, demb.shape[2]))
I think this method is more understandable because it uses a 2D matmul...
@arashrouhani5388 11 months ago
@barni_7762 Thanks, it seems to have worked for me.
@user-gk8ri6ww7e 11 months ago
Very good point about the fact that C[Xb] is equivalent to X_e @ C. It makes things much clearer. I came to the same solution, but from the bottom up, experimenting with single records and imagining what I want to get. The final solution is:
dC = (torch.nn.functional.one_hot(Xb, num_classes=C.shape[0]).float().swapaxes(-1, -2) @ demb).sum(0)
and one can investigate what is going on for a single batch element:
torch.nn.functional.one_hot(Xb[0], num_classes=C.shape[0]).T.float() @ demb[0]
@inar.timiryasov 8 months ago
dC = torch.einsum('abc,abg->cg', F.one_hot(Xb, vocab_size).float(), demb)
@amogha7332 4 months ago
@barni_7762 very clean solution, this is what I did too!
@yagvtt 8 months ago
That is so useful, thank you very much for this series.
@TonyStark-cp3tj 6 months ago
Hey Andrej, I don't know if you'll see this, but I just wanted to thank you wholeheartedly for your awesome neural network playlist. It's by far the best and most in-depth content on NNs I've ever come across. I really appreciate you sharing your knowledge with the community. You're the best! Excited and waiting for more such treasures!
@michadaniluk9604 1 year ago
Thanks Andrej for your amazing videos. Here is my implementation of finding dC without for loops:
dC = F.one_hot(Xb).float().view(-1, C.shape[0]).T @ demb.view(-1, C.shape[1])
@nikita67493 1 year ago
Unfortunately it produces inexact results:
C | exact: False | approximate: True | maxdiff: 9.313225746154785e-10
The for-loop creates an exact match. Another way to do the same is to use Einstein notation (which also gives an inexact result):
dC = torch.einsum("ijk, ijm -> km", F.one_hot(Xb, num_classes=vocab_size).float(), demb)
@gembancud 1 year ago
This is another implementation, though I don't know if it produces exact results:
dC = torch.zeros_like(C).scatter_add_(0, Xb.view(-1, 1).repeat(1, demb.shape[-1]), demb.view(-1, demb.shape[-1]))
@rohitsathya8099 2 months ago
@nikita67493 why do you want an exact match?
@ColinKiegel 2 months ago
On my system all these implementations of dC are equivalent and only match approximately (with the same maxdiff: 5.587935447692871e-09), including the for-loop. I also came up with the same einsum solution:
Xb_onehot = F.one_hot(Xb, num_classes=vocab_size).float()
dC = torch.einsum('ija, ijb->ab', Xb_onehot, demb)  # shapes: [32, 3, 27] and [32, 3, 10] -> [27, 10]
@art4eigen93 11 months ago
It took me days to backprop through this lecture. Phew! Got it now.
@ayogheswaran9270 1 year ago
Thanks a lot for making this Andrej !!!
@rmajdodin 1 year ago
Thank you Andrej for sharing your experience with us! John Carmack used exactly this learning method, as he told Lex Fridman in their interview. In his "larval stage", he implemented the whole NN machinery, including backpropagation, in C (so really low-level :)), to make sure that he understood how everything works!
@kl_moon 8 months ago
Thank you so much for this lecture!!!! It actually made my day.
@santoshk.c.1896 1 year ago
Thanks a lot Andrej for all these awesome lectures. Please enable auto-generated subtitles for this lecture.
@DavidIvan1991 2 months ago
Very useful educational videos; thanks for making and sharing them! It's interesting that Andrej also considers the shapes when backpropagating through a matrix multiply, which is just how I came to "memorize" it :)
@mehulajax21 1 year ago
This is exactly how I work through my coding problems as well. I also have similar thought process while developing algorithms.
@user-kp2uk3cg4g 5 days ago
Teaching taken to a different level.
@ronaldlegere 11 months ago
This is one of the most valuable videos I have come across for building strong intuition about what is going on in backpropagation. BTW, my solution for dC:
dC = torch.einsum('bij,bik -> jk', F.one_hot(Xb, vocab_size).float(), demb)
Gotta love einsum :)
@anrilombard1121 1 year ago
Can't wait to come watch this when school holiday starts!
@anrilombard1121 1 year ago
13 days later: here I am!
@yoonhero3701 1 year ago
That's awesome! Thank you for your passion. I'd like to be like you someday :)
@anrilombard1121 1 year ago
Patiently waiting for part 5 :)
@muhammadbaqir3736 1 year ago
01:25:00 Here is a better implementation of the code:
dC = torch.zeros_like(C)
dC.index_add_(0, Xb.view(-1), demb.view(-1, 10))
Thanks to ChatGPT :)
@juanolano2818 1 year ago
"...assuming that pytorch is correct..." hahahaha not only a great lecture but also with very funny nuggets. Thank you!
@thasinatabashum6853 1 year ago
I'm a 3rd-year Ph.D. student, and I started my Ph.D. right after my undergrad with very little idea of how all the calculations happen in neural networks. Over the last three years, to learn about neural nets, I watched lots of videos, attended lectures, completed a summer camp and courses, and read books, papers, and blogs. But undoubtedly this is the best lecture on backprop! Thank you!
@CoolWorm13 2 months ago
What uni are you studying at?
@frippRulez 3 months ago
This one kicked my ass! The way of the ninja is not an easy path, but I really enjoyed it; it was amazing to solve it myself as the lecture progressed. Maybe this is the future of education.
@jonathanr4242 1 year ago
Very nice. Thank you, Andrej.
@KadeemSometimes 1 year ago
You are a hero!
@nirajs 1 year ago
Such a great video for really understanding the details under the hood! And lol at the momentary disappointment at 1:16:20, just before realizing the calculation wasn't complete yet 😂
@markr9640 1 year ago
Just Brilliant!
@stephennfernandes 1 year ago
Excellent content Andrej
@itsm0saan 1 year ago
Thank you so much for the lecture ;)
@cthzierp5830 9 months ago
Thank you very much for an amazing series! The logit backprop derivation can be simplified a bit by realizing that log(f/g) is log f - log g. The second term is log Sum; its derivative is 1/Sum times dSum/dxi, which immediately yields the activation output. The first term is the log of an exponential; these cancel, and the result has a trivial derivative: 0 when the index isn't the correct answer, -1 when it is. This neatly shows that the derivative is "softmax output minus correct answer".
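For concreteness, a small self-contained sketch (my own illustrative shapes, not the notebook's) verifying "softmax output minus correct answer", averaged over the batch, against autograd:

import torch
import torch.nn.functional as F

logits = torch.randn(32, 27, requires_grad=True)
y = torch.randint(0, 27, (32,))
F.cross_entropy(logits, y).backward()        # mean cross-entropy over the batch

dlogits = F.softmax(logits, dim=1).detach()  # softmax output...
dlogits[torch.arange(32), y] -= 1            # ...minus 1 at the correct class
dlogits /= 32                                # the 'mean' reduction spreads 1/N over the batch
print(torch.allclose(dlogits, logits.grad))  # True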
@MrEmbrance 1 year ago
Can't wait for the next video
@uncoded0 1 year ago
Thanks Andrej! I feel like a buff doge! I just understood and backpropped ~80% of the video and colab code (downloaded it and did the exercises)! Colab kept occasionally throwing errors; it worked fine on local Jupyter.
@sam.rodriguez 9 months ago
You can love people you don't know. I love you Andrej.
@veeramahendranathreddygang1086 1 year ago
Awesome. Thank you.
@mdrayedbinwahed7126 1 year ago
What a lecture! My god, was it awesome.
@seanconnollymv 1 year ago
Huge fan of your videos, Andrej! I'll admit I've had to pause and watch them all twice or more, but they are so useful! Thank you! I was really excited when you started down the path of RNNs and LSTMs in your video, only to find you had other plans for us! Is there an ETA on RNN and LSTM videos? Possibly even a GAN tutorial? Again, thank you so much for these videos; they are so helpful, and your ability to teach is phenomenal.
@yunhuaji3038 1 year ago
Hi Andrej, congrats on your "new" journey at OpenAI. Thank you very much for this series. It's extremely helpful and arguably the best learning material to go through for deep learning. I've always been looking for something like this series. It solidly deepens my understanding of neural networks even though I have been playing with them for a while. Will you continue this series after you're back at OpenAI? I look forward to seeing your future work & contributions to this community, to the following generations, and to the world.
@luficerg2007 26 days ago
Such a great man; he just made all these lectures free, while mean universities will charge you even for irrelevant content. I wish I could make the world a better place by using AI in the future. For now, I can do so by commenting on this video, so the algorithm recommends it to more people trying to learn neural networks. This comment and all the other comments are making the world a better place... And Andrej sir, I will pay you back with some cool stuff built by me for this world.
@b0nce 1 year ago
Thank you so much :) It was a bit tough, but a very interesting task. P.S.: 1:25:47 dC can be done with dC.index_add_(0, Xb.view(-1), demb.view(-1, 10)) ;)
@AndrejKarpathy 1 year ago
very cool, nice find, didn't know about index_add_, ty :)
@ArvidLunnemark 1 year ago
I arrived at a very similar solution, but I didn't know about index_add_. Instead you can do:
Xb_onehot = F.one_hot(Xb.view(-1), num_classes=C.shape[0]).float()
dC = Xb_onehot.T @ demb.view(-1, C.shape[1])
ty for the video :)
@oferyehuda6131 1 year ago
It can also be done with torch.einsum without the reshaping (but it is a little more confusing)
@danieljaszczyszczykoeczews2616 1 year ago
I've done it with a basic approach:
dC = torch.zeros_like(C)  # shape (27, 10)
for i, iemb in zip(Xb.view(-1).tolist(), demb.view(-1, n_embd)):  # zipping shapes (96,) and (96, 10)
    dC[i] += iemb
@KibberShuriq 1 year ago
@ArvidLunnemark Instead of Xb.view(-1), one could also use Xb.flatten(), which is a bit more straightforward to interpret (and I believe it is just a wrapper for view() internally anyway).
@afsarequebal 1 year ago
really grateful, thanks a lot
@user-vn3vd6wq7n 11 months ago
this is a masterpiece
@amortalbeing 1 year ago
great job.
@lwtwl 1 year ago
Btw, the "low-budget" gray block mask at the end is very creative :D
@sammyblues1979 1 year ago
Excellent tutorial for understanding the mathematical process behind neural net operations. It just shows how intuitively comfortable Andrej is with the fundamentals of the subject. Hats off!
@sigmamk 2 months ago
Ditto to what others have said: thank you for your YouTube lectures and GitHub code, Andrej; this is all fantastic and we sincerely appreciate your efforts 🙏 Also, I just wanted to mention that I didn't appreciate the specially chosen value of 6.8813735870195432 for the bias of an example neuron in the micrograd lecture until I was working through the exercises for this lecture 😂
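For anyone curious why that value is special: assuming the micrograd example's inputs were x1=2, x2=0, w1=-3, w2=1 (my recollection of that lecture, so treat it as an assumption), the pre-activation lands exactly where tanh and its local gradient come out as clean numbers:

import math

# assumed inputs from the micrograd lecture's example neuron
n = 2 * (-3) + 0 * 1 + 6.8813735870195432  # x1*w1 + x2*w2 + b
print(math.tanh(n))                         # ~0.7071, i.e. sqrt(2)/2
print(1 - math.tanh(n) ** 2)                # ~0.5, a clean local gradient for backprop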
@steampunkcircus 1 year ago
A deluge of knowledge from you, so often it's ridiculous. I'm absolutely certain you're a robot. Anyhow, ninjas are awesome. Wax on, Sensei!
@user-co6pu8zv3v 1 year ago
Thank you!
@reubenthomas1033 1 year ago
awesome!
@beathoven70 1 year ago
I'm so glad even Andrej forgets how the logits = h @ W2 + b2 backprop works by heart. I've really struggled to remember that as well, and used the same "hack": just look at the sizes of the matrices and, knowing what dimensions I need to get out, simply transpose the matrices accordingly, hahaha.
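For reference, a minimal sketch of that shape-matching hack for logits = h @ W2 + b2 (illustrative shapes of my choosing: batch 32, hidden 64, vocab 27; dlogits stands in for the upstream gradient):

import torch

h = torch.randn(32, 64)
W2 = torch.randn(64, 27)
dlogits = torch.randn(32, 27)  # pretend gradient of the loss w.r.t. logits

dh = dlogits @ W2.T            # must come out (32, 64); only (32, 27) @ (27, 64) fits
dW2 = h.T @ dlogits            # must come out (64, 27); only (64, 32) @ (32, 27) fits
db2 = dlogits.sum(0)           # b2 was broadcast over the batch, so its grad sums over it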
@Pragalbhgarg 2 months ago
I've become a fan of yours