Really nice explanation, I wonder if we could have access to the written notes also. Thanks!
@eulu_ 3 days ago
Thanks for the video! Can someone give me any advice on how to prepare and process an RGB image with the NN from the video, please?
@KarmBiga 3 days ago
I would probably look into normalizing the values of each pixel using min-max normalization or some other method that would bring the values down to the range 0 to 1. It would make the network more accurate with smaller changes.
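A minimal sketch of what that might look like in NumPy (the array here is a made-up example, and dividing by 255 is the usual shortcut when the pixel range is known to be 0-255):

```python
import numpy as np

# Hypothetical 8-bit image batch: values in 0-255
X = np.array([[0, 128, 255], [64, 32, 16]], dtype=np.uint8)
X = X.astype(np.float64)

# Min-max normalization: rescale to [0, 1]
X_norm = (X - X.min()) / (X.max() - X.min())

# For images with a known 0-255 range, dividing by 255 does the same job
X_simple = X / 255.0
```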
@eulu_ 2 days ago
@@KarmBiga thank you for the advice. But what is the next step? Before I flatten the matrix, do I need to convert the RGB values somehow to get a single value for one pixel, like [(R + G + B) / 3]? Or is it OK to flatten the image matrix [ [[r, g, b], ..., [r, g, b]], [[r, g, b], ..., [r, g, b]] ] and then feed it to the NN?
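For what it's worth, the common approach is not to average the channels but to flatten everything, so each pixel contributes three input features. A rough sketch (the shapes here are assumptions, not from the video):

```python
import numpy as np

# Hypothetical batch of 10 RGB images, 28x28 pixels, channels last
images = np.random.rand(10, 28, 28, 3)

# Flatten each image into one row of 28*28*3 = 2352 features
X = images.reshape(10, -1)          # shape (10, 2352)

# If the network expects one example per column (as in the video),
# transpose so the shape becomes (features, examples)
X_cols = X.T                        # shape (2352, 10)
```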
@ethanhobson7161 6 days ago
How would you change the number of neurons in the network? I've tried some stuff but get dimension errors.
@KarmBiga 3 days ago
Changing the number of neurons should be fine; you just need to make sure you are adjusting the values across the program.
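Concretely, the dimension errors usually come from weight shapes that no longer line up. Parameterizing every shape by one hidden-size variable avoids that; a sketch in the spirit of the video's 784-input, 10-class setup (variable names are my own, not the video's exact code):

```python
import numpy as np

hidden = 32                      # change this one number to resize the layer
n_inputs, n_classes = 784, 10

W1 = np.random.rand(hidden, n_inputs) - 0.5
b1 = np.random.rand(hidden, 1) - 0.5
W2 = np.random.rand(n_classes, hidden) - 0.5
b2 = np.random.rand(n_classes, 1) - 0.5

X = np.random.rand(n_inputs, 5)          # 5 examples as columns
Z1 = W1.dot(X) + b1                      # (hidden, 5)
Z2 = W2.dot(np.maximum(0, Z1)) + b2      # (n_classes, 5)
```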
@lbirkert 6 days ago
I don't quite understand where the softmax derivative went.
@hocm2 7 days ago
Maaan, I am so happy you made this video. I was looking for somebody to train the Neural Network from scratch. I will go through it several times to get into the subject. Your English is excellent! Many, many thanks!
@gilbertponce6031 9 days ago
Maybe push a vid of your debugging; I learn so much from my mistakes and digging for my errors. Love the video, totally cool.
@KarmBiga 10 days ago
Why do we have each column as an example in the one-hot encoded 2D array of labels? Can't we have the classifications as columns and the examples (each image) as rows? Won't that mean we don't have to transpose the input matrix? @you_just @khoa4k266 @jumpierwolf @tecknowledger @hcmcnae
@KarmBiga 10 days ago
I am implementing this in Java with a couple of twists, so it would be good if someone responds quickly.
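If it helps, both layouts work as long as every matrix in the pipeline agrees on the convention. A sketch of one-hot encoding with examples as columns versus rows (10 classes assumed, as for MNIST digits; the labels are made up):

```python
import numpy as np

Y = np.array([3, 0, 7])                      # hypothetical labels for 3 examples
num_classes = 10

# Classes as rows, examples as columns (the video's convention)
one_hot_cols = np.zeros((num_classes, Y.size))
one_hot_cols[Y, np.arange(Y.size)] = 1       # shape (10, 3)

# Examples as rows instead -- just the transpose of the above
one_hot_rows = one_hot_cols.T                # shape (3, 10)
```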
@Smurdy1 11 days ago
You think no PyTorch is bad? Imagine how I feel having to write my own C++ library from scratch because there aren't any tutorials for that...
@andremarcorin 13 days ago
Amazing! Bravo!
@AetherTunes 14 days ago
You have to get back to making videos!
@JanRzepkowski-ce6sn 14 days ago
Is there such a guide for the Iris dataset classification problem?
@pedrosoares3906 14 days ago
Calm down brother, be patient, everything will be fine with you. Study everything thoroughly so you don't get screwed.
@debjitdas1470 14 days ago
Bro, your blog link has expired; when I click it, it just says error 404 not found. I just wanna optimise your math a bit.
@saramparkdal8982 14 days ago
Are you Korean?
@stark1862 16 days ago
Why do we need to take the transpose of a matrix?
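A rough illustration of why the transposes show up: matrix multiplication only works when the inner dimensions match, so data stored as (examples, features) has to be flipped to (features, examples) before computing W.dot(X). The shapes below are assumptions, not the video's exact numbers:

```python
import numpy as np

X_rows = np.random.rand(1000, 784)   # hypothetical: 1000 examples, 784 pixels each
W1 = np.random.rand(10, 784)         # 10 neurons, one weight per pixel

# W1.dot(X_rows) would fail: (10, 784) x (1000, 784) has mismatched inner dims.
# Transposing X makes the inner dimensions line up:
Z1 = W1.dot(X_rows.T)                # (10, 784) x (784, 1000) -> (10, 1000)
```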
@faza210 17 days ago
Whoa
@venompool8687 18 days ago
Good video
@tranquil_cove4884 18 days ago
The reason for activation functions is not to make the network solve for nonlinear combinations; it essentially does that even with the activation function. The reason for the activation function is to prevent the gradient from exploding.
@momol.9892 18 days ago
Just learned the basics of neural networks and saw this video. So satisfying to see all the math formulas laid out clearly in NumPy, with real-world coding and training a neural network with backpropagation. It really helps beginners like me. Thank you so much!
@quickclipsbysnoopy 22 days ago
Thank you so much, that helped a lot!
@DadicekCz 22 days ago
Now do it in C with proper memory allocation
@bodaciouschad 23 days ago
Not a knock, just a layperson's opinion, but numpy and pandas are not "scratch", per se, as they do not come with a Python installation. Matplotlib is for the audience's sake, so it's not breaking the spirit of the challenge, but importing things that make the underlying math at hand easier does toe the line.
@assasin- 23 days ago
Hey, can anyone tell me why he did not take the derivative of the softmax function while calculating dW2, just like he used the derivative of ReLU while finding dz2? If my understanding is correct, he is not doing any activation at the second layer.
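For anyone else stuck here: with softmax followed by cross-entropy loss, the softmax derivative doesn't disappear, it cancels algebraically so that the output-layer gradient is simply A2 - Y. A rough numerical sketch of that identity (names and shapes are my assumptions, not the video's exact code):

```python
import numpy as np

def softmax(Z):
    e = np.exp(Z - Z.max(axis=0, keepdims=True))  # shift for numerical stability
    return e / e.sum(axis=0, keepdims=True)

def cross_entropy(Z, Y):
    return -np.sum(Y * np.log(softmax(Z)))

Z2 = np.array([[1.0], [2.0], [0.5]])   # pre-activation: 3 classes, 1 example
Y = np.array([[0.0], [1.0], [0.0]])    # one-hot label

A2 = softmax(Z2)
dZ2 = A2 - Y                           # combined softmax + cross-entropy gradient

# Check against a finite-difference gradient of the loss w.r.t. Z2
eps = 1e-6
numeric = np.zeros_like(Z2)
for i in range(Z2.shape[0]):
    Zp, Zm = Z2.copy(), Z2.copy()
    Zp[i] += eps
    Zm[i] -= eps
    numeric[i] = (cross_entropy(Zp, Y) - cross_entropy(Zm, Y)) / (2 * eps)
# numeric should closely match dZ2
```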
@joymaurya3658 24 days ago
He also divided the pixel values by 255, so please take care.
@LazyX20 24 days ago
bruh why did u stop making videos man?? my bad to ask
@user-ve2sd7gh4y 24 days ago
Why is it always an Asian guy?
@chandakaashok4511 25 days ago
Great
@mohamedirshaathm32123 25 days ago
Can someone explain why we take a few transposes in the backprop section's matrices?
@parnavikulkarni3314 27 days ago
Why did you separate the training data into data_dev & data_train? What is the use of data_dev?
@okgoogle765 28 days ago
thanks bro
@kamaldani1686 29 days ago
Understood nothing but wow
@blendingdude3429 a month ago
How can you be so good at so many different things that require quite a lot of time to get good at?
@doctorshadow2482 a month ago
Good start. Some points: 1. The article linked in the summary is no longer available. 2. It would be nice to pay more attention to backpropagation, since it is tricky; just a simple question: how do you choose between adjusting bias and weight? 3. It would be nice to compare this NN with a simple check against averaged "blurred" weighted maps for all the numbers in the pictures; just blur all the 1's, blur all the 2's separately, and then compare cosine distance. If the results were the same as with the NN, the question would be about Occam's Razor.
@Agorith_ 27 days ago
You can just use the Wayback Machine, since it's a 3-year-old video.
@doctorshadow2482 27 days ago
@@Agorith_ how is that related to my post?
@Agorith_ 26 days ago
@@doctorshadow2482 Umm, I said it for your first point: "The article linked in the summary is no longer available."
@doctorshadow2482 26 days ago
@@Agorith_ , so what? I was just giving the author an update so that he could update the description or bring back the materials. It is not a complaint, just feedback.
@Agorith_ 26 days ago
@@doctorshadow2482 I am just raising awareness for people who see your comment, about where to look and which year to look into.
@williamjxj a month ago
Genius!
@themaridv2000 a month ago
What does the "m" mean in the math calculations?
@izurzuhri a month ago
Thanks dude, I need a deep dive into this topic; 30 minutes is not enough for me.
@ItsNaberius a month ago
Really excellent breakdown of a Neural Network, especially the math explanation in the beginning. I also want to say how much I appreciate you leaving in your first attempt at coding it and the mistakes you made. Coding is hard, and spending an hour debugging your code just because of one little number is so real. Great video
@catten8406 a month ago
5:53, goofy way to calculate ReLU purely mathematically (without any if statements): ReLU(X) = (X + sqrt(X^2)) * 0.5, or use abs(): ReLU(X) = (X + abs(X)) * 0.5. sqrt() is square root, and abs() is absolute value.
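A quick NumPy check of the abs() identity, with the parentheses placed so the whole sum is halved, i.e. ReLU(X) = (X + abs(X)) * 0.5 (the test values are made up):

```python
import numpy as np

x = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])

relu_branchless = (x + np.abs(x)) * 0.5   # (x + |x|) / 2
relu_standard = np.maximum(0, x)          # the usual formulation
```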
@jairam2788 a month ago
Any complete YouTube channel or course for LSTM from scratch???
@dp0813 a month ago
Great example! Need more of these kinds of videos to really advance the area. I'm a huge proponent of requiring videos & code for every research paper submitted on a novel topic or algorithm
@wasabiii_ a month ago
this man is the definition of my imposter syndrome 😭
@anubhavjain7267 a month ago
Are you the guy behind the @3lue1brown channel? He demonstrated the same explanation as you.
@user-kg5xl3dq9p a month ago
Hi, where can I find more material for creating a neural network in this style?
@aleekazmi a month ago
He did all that but couldn't multiply the accuracy by 100 to output a percentage.
@shdnas6695 a month ago
It's always the Chinese.
@ettoremiglioranza2959 a month ago
Sorry, am I the only one who gets a 404 error on the math article's link?
@DetectiveConan990v3 a month ago
me too
@my14081947 a month ago
I heard your name as Samsung- got instantly hooked.
@theJellyjoker a month ago
I didn't get the math, but the explanation of the Neural Network pipeline was very informative. Thanks!