Neural Networks Explained from Scratch using Python

357,551 views

Bot Academy

1 day ago

Comments: 224
@BotAcademyYT 4 years ago
Please share this video if you know somebody whom it might help. Thanks :) edit: Some people correctly identified the 3Blue1Brown style of the video. That is because I am using the python library manim (created by 3Blue1Brown) for the animations. Link and more information in the description. Huge thanks for all the likes and comments so far. You guys are awesome!
@walidbezoui 2 years ago
Wow, first time getting to know how 3Blue1Brown works. Awesome!
@jonathanrigby1186 2 years ago
Can you please help me with this? I want a chess AI to teach me what it learnt kzbin.info/www/bejne/hZCxmJ-PprWoasU
@spendyala 1 year ago
Can you share your video manim code?
@twanwolthaus 1 year ago
Incredible video. Not because of your insight, but because of how you use visuals to make the information as digestible as possible.
@danielcurrin3882 2 months ago
If anyone can tell me what neural networks on my phone is please let me know. By the way, I've been hearing voices that say they are going to kill me, they started at Granville medical centre, and pretty much seems like everyone working for Granville county is in the know and I'm not knowing sht
@blzahz7633 2 years ago
I can't say anything that hasn't been said already: This video is golden. The visualization, explaining, everything is just so well done. Phenomenal work. I'm basically commenting just for the algo bump this video rightfully deserves.
@hepengye4239 4 years ago
As an ML beginner, I know how much effort and time is needed for such visualization of a program. I would like to give you a huge thumb! Thank you for the video.
@xcessiveO_o 4 years ago
a thumbs up you mean?
@EagleMasterNews 11 months ago
His thumb is now massive
@Ibrahim-o3m7m 7 months ago
I dont think he want no thumbs
@abdulrafaynawaz1335 4 months ago
He will be in pain if you will give him such a huge thumb... just give him a thumbs up
@magitobaetanto5534 4 years ago
You've just explained very clearly in a single video what others try to vaguely explain in series of dozens of videos. Thank you. Fantastic job! Looking forward to more great videos from you.
@ejkitchen 4 years ago
FANTASTIC video. Doing Stanford's Coursera Deep Learning Specialization and they should be using your video to teach week 4. Much clearer and far better visualized. Clearly, you put great effort into this. And kudos to using 3Blue1Brown's manim lib. Excellent idea. I am going to put your video link in the course chat room.
@craftydoeseverything 1 year ago
I know I'm watching this 2 years after it was released but I really can't stress enough how helpful this is. I've seen heaps of videos explaining the math and heaps of videos explaining the code but this video really helped me to link the two together and demystify what is actually happening in both.
@Transc3nder 4 years ago
This is so interesting. I always wondered how a neural net works... but it's also good to remind ourselves that we're not as clever as we thought. I feel humbled knowing that there's some fierce minds out there working on these complicated problems.
@GaithTalahmeh 4 years ago
Welcome back, dude! I have been waiting for your comeback for so long. Please don't go away this long next time :) Great editing and audio quality, btw. Reminds me of 3b1b.
@BotAcademyYT 4 years ago
Thanks! I'll try uploading more consistently now that I've finished my Thesis :)
@pythonbibye 4 years ago
I can tell you put a lot of work into this. You deserve more views! (also commenting for algorithm)
@ElNachoMacho 2 years ago
This is the kind of video that I was looking for to get beyond the basics of ML and start gaining a better and deeper understanding. Thank you for putting the effort into making this great video.
@pisoiorfan 1 year ago
That's it! A complete training loop for a one-hidden-layer NN in just 20 lines. Thank you, sir!
@mici432 4 years ago
Saw your post on Reddit. Thank you very much for the work you put in your videos. New subscriber.
@cactus9277 4 years ago
For those actually implementing this, note that at 12:08 the values in the hidden layer change back to how they were before the sigmoid was applied.
@BotAcademyYT 3 years ago
good point! Must have missed it when creating the video.
@robertplavka6194 1 year ago
Yes, but wasn't the value before the sigmoid in the last cell 9? To be precise, I got something like 8.998. If I missed something, please explain; I want to know why that is.
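For readers following along at that timestamp: the hidden layer should keep both the raw weighted sums and the sigmoid-squashed values, and only the squashed values feed the next layer. A minimal sketch, assuming numpy and the 784-20-10 layer sizes from the video (the variable names here are illustrative, not necessarily the repo's):

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

rng = np.random.default_rng(0)
w_i_h = rng.uniform(-0.5, 0.5, (20, 784))  # input -> hidden weights (sizes assumed)
b_i_h = np.zeros((20, 1))                  # hidden biases
img = rng.random((784, 1))                 # one flattened image as a column vector

h_pre = b_i_h + w_i_h @ img  # pre-sigmoid values (what briefly reappears in the animation)
h = sigmoid(h_pre)           # activated values that are actually passed to the output layer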
@hridumdhital 7 months ago
As someone beginning machine learning, this video was so useful for really getting a deep understanding of how neural networks work!
@OrigamiCreeper 4 years ago
Nice job with the explanation!!! I felt like I was watching a 3Blue1Brown video! A few notes: 1) You should run through examples more often, because that is one of the best ways to understand a concept. For example, you could have run through the algorithm for the cost function so people understand it intuitively. 2) It would be nice if you went more in depth into backpropagation and why it works. Things you did well: 1) Nice job with the animations and how you simplified them for learning purposes; the diagrams would be much harder to understand if all 784 input neurons were actually drawn. 2) I love the way you dissect the code line by line! I can't wait to see more videos by you, I think this channel could get really big!
@BotAcademyYT 4 years ago
Thank you very much for the great feedback!
@eldattackkrossa9886 4 years ago
oh hell yeah :) just got yourself a new subscriber, support your small channels folks
@angelo9915 4 years ago
Amazing video! The explanation was very clear and I understood everything. Really hope you're gonna be posting more videos on neural networks.
@gonecoastaltoo 4 years ago
Such a great video -- high quality and easy to follow. Thanks. One typo in the Additional Notes: (X,) + (1,) == (X, 1) -- this is shown correctly in the video, but in the notes you show the result as (1, X).
@BotAcademyYT 4 years ago
Thank you very much for pointing out the inconsistency. You're right, it is wrong in the description. I just corrected it.
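For reference, the expression in that note is plain Python tuple concatenation; a quick check with numpy (illustrative only):

import numpy as np

x = np.zeros(784)                # a flattened image, shape (784,)
print(x.shape + (1,))            # (784, 1) -- so (X,) + (1,) == (X, 1), not (1, X)

col = x.reshape(x.shape + (1,))  # turns the flat vector into a column vector
print(col.shape)                 # (784, 1)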
@mrmotion7942 4 years ago
Love this so much. So organised and was really helpful. So glad you put the effort into the animation. Keep up the great work!
@photorealm 10 months ago
Excellent video and accompanying code. I just keep staring at the code, it's art. And the naming convention with the legend is insightful; the comments tell the story like a first-class narrator. Thank you for sharing this.
@michaelbarry755 1 year ago
Amazing video. Especially the matrix effect on the code in the first second. Love it.
@vxqr2788 4 years ago
Subscribed. We need more channels like this!
@BlackSheeeper 4 years ago
Glad to have you back :D
@AVOWIRENEWS 1 year ago
It's great to see content that helps demystify complex topics like neural networks, especially using a versatile language like Python! Understanding neural networks is so vital in today's tech-driven world, and Python is a fantastic tool for hands-on learning. It's amazing how such concepts, once considered highly specialized, are now accessible to a wider audience. This kind of knowledge-sharing really empowers more people to dive into the fascinating world of AI and machine learning! 🌟🐍💻
@andrewfetterolf7042 2 years ago
Well done, I couldn't ask for a better video. Germans make the best and most detailed educational videos here on YouTube. The pupils of the world say thank you.
@dexterroy 11 months ago
Listen to the man, listen well. He is giving accurate and incredibly valuable knowledge and information that took me years to learn.
@Lambertusjan 2 years ago
Thanks for a very clear explanation. I was doing the same from scratch in Python, but got stuck on dimensioning the weight matrices correctly, especially in this case with the 784-neuron input. Now I can check whether this helps me complete my own three-layer implementation. 😅
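On dimensioning the weight matrices: the usual rule is that each weight matrix has shape (neurons in the next layer, neurons in the previous layer) and each bias is a column vector of the next layer's size. A hedged sketch for a deeper network; the hidden sizes below are made up for illustration, not taken from the video:

import numpy as np

sizes = [784, 128, 64, 10]  # input, two hidden layers, output (illustrative sizes)

weights = [np.random.uniform(-0.5, 0.5, (n_next, n_prev))
           for n_prev, n_next in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros((n_next, 1)) for n_next in sizes[1:]]

x = np.random.random((784, 1))          # one flattened 28x28 image
for w, b in zip(weights, biases):
    x = 1 / (1 + np.exp(-(b + w @ x)))  # sigmoid at each layer
print(x.shape)                          # (10, 1)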
@bdhaliwal24 2 years ago
Fantastic job with your explanation and especially the animations. All of this really helped to connect the dots.
@doomcrest8941 4 years ago
Awesome video :) I did not know that you could use that trick for the MSE 👍
@brijeshlakhani4155 4 years ago
This is really helpful for beginners!! Great work always appreciated bro!!
@Darth_Zuko 4 years ago
This is one of the best-explained videos I've seen on this topic. Great job! Hope this comment helps :)
@jimbauer9508 4 years ago
Great explanation - Thank you for making this!
@eirikd1682 2 years ago
Great video! However, you say that mean squared error is used as the loss function, and you also calculate it, yet "o - l" (seemingly the derivative of the loss function) isn't the derivative of MSE. It's the derivative of categorical cross-entropy ( -np.sum(Y * np.log(output)) ) with a softmax before it. Anyway, keep up the great work :)
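To make the distinction concrete, a small sketch using the video's naming convention (o for the output activations, l for the one-hot label; treat the exact names as assumptions): the full MSE-with-sigmoid gradient keeps the o * (1 - o) factor, while the simplified delta used in the video is exactly what softmax plus cross-entropy would give.

import numpy as np

o = np.array([[0.67], [0.12], [0.05], [0.02], [0.01],
              [0.03], [0.04], [0.02], [0.03], [0.01]])  # example sigmoid outputs
l = np.zeros((10, 1)); l[0] = 1                         # one-hot label for digit 0

delta_o_full = (o - l) * (o * (1 - o))  # d(MSE)/d(pre-activation) with a sigmoid output
delta_o_simple = o - l                  # simplification from the video; exact for softmax + cross-entropy

Both point in the same direction for every component (o * (1 - o) is always positive), which helps explain why training still works in practice with the simplified form.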
@gustavgotthelf7117 9 months ago
Best video on this topic on the whole market. Very well done! 😀
@mateborkesz7278 1 year ago
Such an awesome video! Helped me a lot to understand neural networks. Thanks a bunch!
@chrisogonas 2 years ago
Superbly illustrated! Thanks for sharing.
@kousalyamara8746 7 months ago
The BEST video ever! Hats off to your efforts and a big, big thanks for imparting the knowledge to us. I will never forget the concept, ever. 😊
@Lukas-qy2on 1 year ago
This video is pretty great. I had to pause, sketch along, and keep referring to the code you showed, but it definitely helped me understand better how to do it.
@v4dl45 1 year ago
Thank you for this amazing video. I understand the huge effort in the animations and I am so grateful. I believe this is THE video for anyone trying to get into machine learning.
@devadethan9234 1 year ago
Yes, finally I have found the golden channel. Thanks, bud!
@malamals 4 years ago
Very well explained, I really liked it. Making noise for you. Please make a similar video explaining NLP in the same intuitive way. Thank you :)
@napomokoetle 1 year ago
Wow! Thank you so much. You rock. Now looking forward to "Transformers Explained from Scratch using Python" ;)
@susakshamjain1926 8 months ago
Best ML video I have seen so far.
@kenilbhikadiya8073 8 months ago
Great explanation, and hats off to your efforts on these visualisations!!! 🎉❤
@DV-IT 4 months ago
This video is perfect for beginners, thank you so much.
@ThootenTootinTabootin 1 year ago
"does some magic." Great explanation. Thanks.
@jonnythrive 2 years ago
This was actually very good! Subscribed.
@jordyvandertang2411 3 years ago
Hey, this was a great intro! It gave a good playground to experiment with: increasing the number of nodes in the hidden layer, changing the activation function, and even adding an additional hidden layer to evaluate the effects/effectiveness! With more epochs I could get it above 99% accuracy (on the training set, so it might be overfitted, but hey).
@asfandiyar5829 2 years ago
You create some amazing content. Really well explained.
@neuralworknet 1 year ago
12:40 Why don't we use the derivative of the activation function for delta_o, when we did use it for delta_h? Any answers???
@hidoxy1 1 year ago
I was confused about the same thing, did you figure it out?
@itzblinkzy1728 4 years ago
Amazing video I hope this gets more views.
@Scronk03 3 years ago
Thank you for this. Fantastic video.
@EnglishRain 4 years ago
Great content, subscribed!
@kallattil 1 year ago
Excellent content and illustration 🎉
@dormetulo 4 years ago
Amazing video really helpful!
@saidhougga2023 2 years ago
Amazing visualized explanation
@maxstengl6344 3 years ago
At 14:32 you use the already-updated weights (to the output layer) to calculate the hidden-layer deltas. I have never seen anyone do it this way. Usually the old weights are used, and all weights are updated after backprop. I don't think it makes a large difference, but I wonder if this is intentional or if I am missing something.
@FlyingUnosaur 2 years ago
I also think this is a mistake. Andrew Ng emphasized that the weights must be updated after calculating the derivatives.
@neuralworknet 1 year ago
​@@FlyingUnosauryou are talking about the derivative of activation function right?
@appliiblive 1 year ago
Thank you so much for posting this comment, I was wondering why my model was losing accuracy with every epoch. With that little change my accuracy jumped from 20'000 / 60'000 to 56'000 / 60'000.
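A minimal per-sample sketch of the ordering being discussed, with both deltas computed from the old weights before any update is applied (names and sizes follow the video's 784-20-10 convention but are assumptions, not the repo's exact code):

import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1 / (1 + np.exp(-x))
learn_rate = 0.01

img = rng.random((784, 1))                 # input column vector
l = np.zeros((10, 1)); l[3] = 1            # one-hot label
w_i_h = rng.uniform(-0.5, 0.5, (20, 784))  # input -> hidden weights
w_h_o = rng.uniform(-0.5, 0.5, (10, 20))   # hidden -> output weights
b_i_h = np.zeros((20, 1)); b_h_o = np.zeros((10, 1))

# Forward pass
h = sigmoid(b_i_h + w_i_h @ img)
o = sigmoid(b_h_o + w_h_o @ h)

# Backward pass: compute BOTH deltas from the current (old) weights...
delta_o = o - l
delta_h = w_h_o.T @ delta_o * (h * (1 - h))

# ...and only then apply the updates.
w_h_o += -learn_rate * delta_o @ h.T
b_h_o += -learn_rate * delta_o
w_i_h += -learn_rate * delta_h @ img.T
b_i_h += -learn_rate * delta_h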
@Maxou 1 year ago
Really nice video, keep doing those!!
@Hide310122 2 years ago
Such an amazing video with lots of visualization. But I don't think you can simplify delta_o to "o - l" with whatever mathematical tricks. It needs to be "(o - l) * (o * (1 - o))".
@Kuratius 2 years ago
I think you're right, but for some reason it seems to work anyway
@neuralworknet 1 year ago
yess i have been trying to understand this for weeks 🤯
@quant-prep2843 3 years ago
Most intuitive video on the whole planet. Likewise, can you come up with a brief explanation of the NEAT algorithm as well?
@BotAcademyYT 3 years ago
Thanks! I'll add it to my list. If more people request it or if I'm out of video ideas, I'll do it :-)
@quant-prep2843 3 years ago
@BotAcademyYT Nooo, we can't wait... I shared this video across all the Discord servers, and most of them asked, wishing this guy could make a video like this on NEAT or HyperNEAT, because there aren't many resources out there. Hope you will make it!
@Michael-ty2uo 1 year ago
The first minute of this video got me asking who this dude is and whether he makes more videos explaining complicated topics in a simple way. Please do more.
@neliodiassantos 3 years ago
Great work! Thanks for the explanation.
@VereskM 3 years ago
Excellent video. Best of the best :) I want to see more, and more slowly, about the backpropagation algorithm. Those are the most interesting moments... maybe it would be better to make step-by-step slides?
@johannesvartdal624 1 year ago
This video feels like a 3Blue1Brown video, and I like it.
@morty_squared 4 years ago
Great video, really interesting!
@HanzoHasashi-bv7rm 1 year ago
Video Level: Overpowered!
@LetsGoSomewhere87 4 years ago
Making noise for you, good luck!
@curtezyt1984 1 year ago
you got a subscriber ❤
@miguelhernandez3730 4 years ago
Excellent video
@Ibrahim-o3m7m 7 months ago
How would you do the 50000 samples for training? Great video by the way!
@onlineinformation5320 11 months ago
As a neural network, I can confirm that we work like this
@ziphy_6471 9 months ago
Well, your brain is basically a complex neural network. Plus, our body isn't us; our brain is us. We are just a complex meat neural network controlling a big fleshy, meaty, and bony body.
@nomnom8127 4 years ago
Great video
@ThePaintingpeter 2 years ago
Fantastic video. I really appreciate the effort YouTubers put into great videos like this one.
@yoctometric 4 years ago
Algy comment right here, thanks for the wonderful video!
@_Slach_ 4 years ago
11:31 What if the first output neuron wasn't the one with the highest value? Does that mean that the neural network classified the image incorrectly?
@BotAcademyYT 4 years ago
Exactly :)
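In code, this check is just an argmax over the output vector; a tiny sketch (numpy assumed, values illustrative):

import numpy as np

o = np.array([[0.67], [0.10], [0.05], [0.02], [0.01],
              [0.03], [0.04], [0.02], [0.05], [0.01]])  # network outputs
label = np.zeros((10, 1)); label[0] = 1                 # one-hot label for digit 0

predicted_digit = int(np.argmax(o))            # index of the highest output neuron
is_correct = np.argmax(o) == np.argmax(label)  # counted as correct only if they match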
@0xxi1 1 year ago
you are the man! My respect goes out to you
@georgeseese 1 year ago
What do neurons represent? You say "just numbers" @1:39. That may be true of the input layer (pixel values) and bias. But don't the neurons in other layers represent functions?
@alizaka1467 4 months ago
Sad to see a valid question get zero likes and replies for over 8 months
@2wen98 2 years ago
How could I split the data into training and testing data?
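One simple way, assuming the data comes back as an image array of shape (60000, 784) and a one-hot label array of shape (60000, 10) (the arrays below are fabricated stand-ins for the sketch): shuffle the indices once and slice.

import numpy as np

images = np.random.random((60000, 784))               # stand-in for the real MNIST images
labels = np.eye(10)[np.random.randint(0, 10, 60000)]  # stand-in one-hot labels

rng = np.random.default_rng(42)
idx = rng.permutation(len(images))  # shuffle once so the split is random
split = int(0.8 * len(images))      # e.g. 80% train / 20% test
x_train, y_train = images[idx[:split]], labels[idx[:split]]
x_test, y_test = images[idx[split:]], labels[idx[split:]]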
@NikoKun 2 years ago
What are you referring to when you talk about "defining the matrix from the right layer to the left layer" @ 2:35? I'm sure I'm just missing something obvious, but I can't seem to figure out what that's referring to in the code...
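A small sketch of the likely meaning (the initialisation range is an assumption): the weight matrix is created with the size of the layer to its right first and the layer to its left second, so that w @ x maps a left-layer column vector onto the right layer.

import numpy as np

n_input, n_hidden, n_output = 784, 20, 10  # sizes assumed from the video's MNIST setup

w_i_h = np.random.uniform(-0.5, 0.5, (n_hidden, n_input))   # (right, left): input -> hidden
w_h_o = np.random.uniform(-0.5, 0.5, (n_output, n_hidden))  # (right, left): hidden -> output

x = np.random.random((n_input, 1))  # one flattened image as a column vector
print((w_i_h @ x).shape)            # (20, 1) -- lands in the hidden layer
print((w_h_o @ (w_i_h @ x)).shape)  # (10, 1) -- lands in the output layer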
@wawan_ikhwan 1 month ago
Where does the batch size parameter sit?
@rverm1000 11 months ago
Thanks. I wonder if I could train it for other pictures?
@oliverb.2083 4 years ago
For running the code on Ubuntu 20.04 you need to do this:
git clone github.com/Bot-Academy/NeuralNetworkFromScratch.git
cd NeuralNetworkFromScratch
sudo apt-get install python3 python-is-python3 python3-tk -y
pip install --user poetry
~/.local/bin/poetry install
~/.local/bin/poetry run python nn.py
@khalil_stuff 7 months ago
But why can't we write delta_o = (o - l) * (h * (1 - h))? 14:30
@viktorvegh7842 11 months ago
11:32 Why are you checking for the highest value? I don't understand: when the highest is 0.67, it's classified as 0. Can you please explain? Like, what would this number have to be, for example, for the input to be classified as 1?
@pu3zle 4 years ago
Great content! I can't wait for more of this stuff
@hynesie11 1 year ago
For the first node in the hidden layer you added the bias node of 1, but for the rest of the nodes in the hidden layer you multiplied by the bias node of 1??
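The two views are equivalent; a minimal sketch (sizes and names assumed): the bias can be kept as a separate vector that is simply added for every hidden node, or folded in as an extra input neuron fixed at 1 whose outgoing weights are the bias values.

import numpy as np

w = np.random.uniform(-0.5, 0.5, (20, 784))  # hidden-layer weights (sizes assumed)
b = np.random.uniform(-0.5, 0.5, (20, 1))    # one bias per hidden neuron
x = np.random.random((784, 1))               # input column vector

h_pre_added = b + w @ x                      # variant 1: bias vector simply added

w_aug = np.hstack([b, w])                    # variant 2: bias as weights of a constant-1 neuron
x_aug = np.vstack([[1.0], x])                # prepend the "bias node" with value 1
h_pre_node = w_aug @ x_aug

assert np.allclose(h_pre_added, h_pre_node)  # both formulations give the same pre-activations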
@rejeanto6508 1 year ago
I have a data set of the same size; how do I change the data set? I have tried to change it but failed. BTW, thank you, this video really helped me.
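Anything that yields images of shape (N, 784) scaled to [0, 1] and one-hot labels of shape (N, 10) can be dropped in where the MNIST arrays are used; a hedged sketch with hypothetical file names (my_images.npy / my_labels.npy are placeholders, not files from the repo):

import numpy as np

def load_my_dataset():
    images = np.load("my_images.npy")  # expected raw shape: (N, 28, 28), values 0..255
    digits = np.load("my_labels.npy")  # expected shape: (N,), integer class ids 0..9
    images = images.reshape(len(images), 784).astype("float32") / 255
    labels = np.eye(10)[digits]        # one-hot encode, shape (N, 10)
    return images, labels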
@cryptoknightatheaume6462 2 years ago
Awesome, man. Could you please tell me how you made this neural network animation? It's really nice.
@noone-du5qu 8 months ago
Bro, how did you make the first layer know what range of color values should be used for the image?
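The short answer for most from-scratch MNIST examples (a hedged sketch, not necessarily the repo's exact line) is that the 0-255 grayscale values are divided by 255, so every input neuron just receives a number between 0 and 1:

import numpy as np

img_raw = np.random.randint(0, 256, (28, 28), dtype=np.uint8)  # stand-in grayscale image

img = img_raw.astype("float32") / 255  # scale 0..255 -> 0.0..1.0
x = img.reshape(784, 1)                # flatten: one input value per pixel / input neuron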
@enriquefernandezaraujo3943 2 years ago
Thank you for this excellent video 👌
@jnaneswar1 2 years ago
extremely thankful
@tanvir-tonoy-programmer 1 year ago
Hey, do you use manim? I was curious whether I should use manim or After Effects to visualise math concepts like those???
@cocoarecords 4 years ago
Wow amazing
@waterspray5743 3 years ago
14:32 Where did the equation "delta_h = np.transpose(w_h_o) @ delta_o * h * (1 - h)" come from? I thought we were using gradient descent of the cost function?
@BotAcademyYT 3 years ago
Hey. When calculating the delta values for the hidden layers, we are using the derivative of the activation function. And the derivative of the sigmoid function (h) is h * (1 - h). We just use the derivative of the cost function for delta_o, which is already simplified to "o - l" here due to some mathematical tricks that can be used for the mean squared error cost function. I have to admit that this video is optimized to explain the concept and not the mathematics. To get a deep understanding of the mathematics, I'd recommend the free Stanford Machine Learning course by Andrew Ng on Coursera.
@waterspray5743 3 years ago
@BotAcademyYT Hello, I have a problem. Isn't the "derivative of sigmoid(h) wrt h" = "sigmoid(h) * (1 - sigmoid(h))" instead of "h * (1 - h)"?
@BotAcademyYT 3 years ago
@waterspray5743 If you do sigmoid(h), you've applied the sigmoid twice. Your statement becomes correct if you swap 'h' for 'h_pre'. You can also see this in the code; pay special attention to lines 24 and 25. Does it make sense now?
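A short numeric check of that point, using the naming from this thread (h_pre for the raw sums, h = sigmoid(h_pre) for the activations; the rest is an assumption): since h is already the sigmoid output, sigmoid'(h_pre) can be written as h * (1 - h), whereas sigmoid(h) * (1 - sigmoid(h)) would apply the sigmoid twice.

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

h_pre = np.array([[-1.0], [0.5], [2.0]])  # pre-activation values
h = sigmoid(h_pre)                        # activated values

d_from_pre = sigmoid(h_pre) * (1 - sigmoid(h_pre))  # derivative written in terms of h_pre
d_from_h = h * (1 - h)                              # same derivative written in terms of h
assert np.allclose(d_from_pre, d_from_h)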
@hoot999 1 year ago
great video, thanks!
@parthpatwari3174 2 months ago
a neural network with only 2 for loops is underrated
@ipsdon 1 year ago
The forward propagation has two sigmoid functions, for the hidden and output layers; however, the gradient calculation for the output layer does not include the sigmoid derivative. The calculation only computes the gradient of the sigmoid for the hidden layer. Am I missing something? Even so, it seems the network is able to compensate.
@ramazandurmaz3012 10 months ago
Yeah, that got my attention too. Instead of MSE, using categorical cross-entropy here would be preferable, but then it could get complicated. Let A2 be the activation of the last layer for a single neuron and A1 the previous activation. With L = 1/2 (Y_hot - A2)^2: dL/dA2 = (A2 - Y_hot), dA2/dZ = A2 * (1 - A2), and dZ/dW = A1. Putting it all together: dL/dZ = (A2 - Y_hot) * A2 * (1 - A2) and dL/dW = dL/dZ @ A1^T. So the derivative of the sigmoid definitely needs to be included. The mathematical trick he mentioned works for sigmoid or softmax combined with the cross-entropy loss function; there the delta simplifies to A - Y, but with MSE it does not simplify like that.
@payola5000 4 years ago
I really loved your video, it's so clearly explained. I have a kind of big question. What if you had a data frame where all the columns are related to each other, but there are different functions for certain parts of it? I'm trying to make a neural network that is meant to understand the functional parts of proteins, in order to create new proteins.
@BotAcademyYT 4 years ago
Thanks! That's a really hard one :D If there is some temporal structure in the data, you'd need a recurrent NN like an LSTM, but I think that's not the case for proteins. So if they are related to each other, I guess you'd flatten the data frame and use it as input. If the input dimension is too large, I think you need some other feature extraction technique before applying a NN. But I am just guessing here, tbh. There might be better approaches directly for proteins (there are surely some good papers out there, because it's a topic with quite some research behind it).
@alangrant5278 11 months ago
Gets even more tricky at 50 metres one handed - weak hand!
@xXxxSharkLoverxXx 6 months ago
I can't find tutorials for JavaScript, so I am using this. How do I avoid using any external downloads, or use my own data that I gather later?