Comments
@SUCRAM7627 2 days ago
Really nice explanation, I wonder if we could have access to the written notes also. Thanks!
@eulu_ 3 days ago
Thanks for the video! Can someone give me any advice on how to prepare and process an RGB image for the NN from the video, please?
@KarmBiga 3 days ago
I would probably look into normalizing the values of each pixel using min-max normalization or some other method that would bring the values down to numbers from 0 to 1. It would make it more accurate with smaller changes.
@eulu_ 2 days ago
@@KarmBiga Thank you for the advice. But what is the next step? Before I flatten the matrix, do I need to convert the RGB values somehow to get a single value per pixel, like (R + G + B) / 3? Or is it OK to flatten the image matrix [ [[r, g, b], ..., [r, g, b]], [[r, g, b], ..., [r, g, b]] ] and then feed it to the NN?
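For what it's worth, here is a minimal preprocessing sketch along the lines suggested above, assuming the images arrive as a (num_images, height, width, 3) uint8 array; preprocess_rgb and the sample batch are purely illustrative, not code from the video. You can keep all three channels rather than averaging them to (R + G + B) / 3, as long as the input layer is sized to height*width*3 instead of 784.

```python
import numpy as np

def preprocess_rgb(images):
    # Scale 0-255 pixel values into [0, 1] (simple min-max scaling for uint8 images).
    x = images.astype(np.float32) / 255.0
    # Flatten each image into one long vector of length height*width*3.
    x = x.reshape(images.shape[0], -1)
    # Return one column per example, matching the column-per-example layout used in the video.
    return x.T

# Hypothetical batch of four 32x32 RGB images.
batch = np.random.randint(0, 256, size=(4, 32, 32, 3), dtype=np.uint8)
print(preprocess_rgb(batch).shape)  # (3072, 4)
```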
@ethanhobson7161 6 days ago
How would you change the number of neurons in the network? I've tried some stuff but get dimension errors.
@KarmBiga 3 days ago
Changing the number of neurons should be fine; you just need to make sure you adjust the dimensions consistently across the program.
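In case it helps, a minimal sketch of what adjusting the dimensions consistently can look like, assuming the video's two-layer setup (784 inputs, 10 output classes); hidden_size is a hypothetical parameter name, and the rand() - 0.5 initialisation simply mirrors the style used in the video.

```python
import numpy as np

def init_params(hidden_size=10, n_inputs=784, n_classes=10):
    # Every shape that involves the hidden layer is derived from hidden_size,
    # so changing it cannot produce mismatched dimensions elsewhere.
    W1 = np.random.rand(hidden_size, n_inputs) - 0.5
    b1 = np.random.rand(hidden_size, 1) - 0.5
    W2 = np.random.rand(n_classes, hidden_size) - 0.5
    b2 = np.random.rand(n_classes, 1) - 0.5
    return W1, b1, W2, b2

# A wider hidden layer; the forward and backward passes work unchanged
# as long as they only rely on these shapes.
W1, b1, W2, b2 = init_params(hidden_size=64)
```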
@lbirkert 6 days ago
don't quite understand where the softmax derivative went
@hocm2 7 days ago
Maaan, I am so happy you made this video. I was looking for somebody to train the Neural Network from scratch. I will go through it several times to get into the subject. Your English is excellent! Many, many thanks!
@gilbertponce6031 9 days ago
Maybe push a vid of your debugging; I learn so much from my mistakes and digging for my errors. Love the video, totally cool.
@KarmBiga 10 days ago
Why do we have each column as an example in the one-hot encoded 2D array of labels? Can't we have the classifications as columns and the examples (each image) as rows? Wouldn't that mean we don't have to transpose the input matrix? @you_just @khoa4k266 @jumpierwolf @tecknowledger @hcmcnae
@KarmBiga 10 days ago
I am implementing this in Java with a couple of twists, so it would be good if someone responds quickly.
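On the one-hot layout question above: either orientation works; keeping one column per example just means X has shape (784, m) and the one-hot labels have shape (10, m), and the choice only determines which side of each matrix product gets transposed. A minimal sketch of both layouts, with a hypothetical 3-example batch:

```python
import numpy as np

labels = np.array([3, 0, 4])    # hypothetical labels for a 3-example batch
num_classes = 10

# Examples as columns, shape (num_classes, m): the layout the video uses.
one_hot_cols = np.zeros((num_classes, labels.size))
one_hot_cols[labels, np.arange(labels.size)] = 1

# Examples as rows, shape (m, num_classes): equally valid; the inputs would
# then stay as (m, 784) and the matrix products flip their transposes instead.
one_hot_rows = one_hot_cols.T

print(one_hot_cols.shape, one_hot_rows.shape)  # (10, 3) (3, 10)
```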
@Smurdy1 11 days ago
You think no PyTorch is bad? Imagine how I feel, having to write my own C++ library from scratch because there aren't any tutorials for that...
@andremarcorin 13 days ago
Amazing! Bravo!
@AetherTunes 14 days ago
You have to get back to making videos!
@JanRzepkowski-ce6sn 14 days ago
Is there a similar guide for the Iris dataset classification problem?
@pedrosoares3906 14 days ago
Calm down brother, be patient, everything will be fine with you. Study everything thoroughly so you don't get screwed.
@debjitdas1470 14 days ago
Bro, your blog link expired; when I click it, it just says error 404 not found. I just wanna optimise your math a bit.
@saramparkdal8982 14 days ago
Are you Korean?
@stark1862 16 days ago
Why do we need to take the transpose of a matrix?
@faza210 17 days ago
Whoa
@venompool8687 18 days ago
Good video
@tranquil_cove4884 18 days ago
The reason for activation functions is not to make the network solve for nonlinear combinations... it essentially does that even with the activation function.... the reason for the activation function is to prevent the gradient from exploding.
@momol.9892 18 days ago
Just learned the basics of neural networks and saw this video. So satisfying to see all the math formulas laid out clearly in numpy, with real-world coding and training of a neural network with backpropagation. It really helps beginners like me. Thank you so much!
@quickclipsbysnoopy 22 days ago
Thank you so much, that helped a lot!
@DadicekCz 22 days ago
Now do it in C with proper memory allocation
@bodaciouschad 23 days ago
Not a knock, just a layperson's opinion, but numpy and pandas are not "scratch", per se, as they do not come with a Python installation. Matplotlib is for the audience's sake, so it's not breaking the spirit of the challenge, but importing things that make the underlying math at hand easier does toe the line.
@assasin- 23 days ago
Hey, can anyone tell me why he did not take the derivative of the softmax function while calculating dW2, just like he used the derivative of ReLU while finding dZ2? If my understanding is correct, he is not doing any activation at the second layer.
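For anyone else stuck on this: the softmax derivative does not disappear, and the second layer does have an activation (softmax); its derivative is just absorbed into the loss gradient. Assuming the standard softmax output with cross-entropy loss, which is consistent with the dZ2 = A2 - Y step the video uses, the combined derivative simplifies to:

```latex
\frac{\partial L}{\partial Z^{[2]}} = A^{[2]} - Y,
\qquad
dW^{[2]} = \frac{1}{m}\,\frac{\partial L}{\partial Z^{[2]}}\,A^{[1]\top}
```

So no separate softmax derivative shows up when computing dW2 because it has already been folded into the A2 - Y term; ReLU gets an explicit derivative in the hidden layer only because that layer's gradient does not simplify the same way.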
@joymaurya3658 24 days ago
He also divided the pixel values by 255, so please take care.
@LazyX20 24 days ago
Bruh, why did you stop making videos, man?? My bad for asking.
@user-ve2sd7gh4y 24 days ago
Why is it always an Asian guy?
@chandakaashok4511 25 days ago
Great
@mohamedirshaathm32123 25 days ago
Can someone explain why we take a few transposes of the matrices in the backprop section?
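A minimal shape check may make this clearer, assuming the column-per-example layout from the video (inputs of shape (784, m), ten hidden units, ten classes); all the array values below are dummy data. The transposes are only there so the inner dimensions of each product line up and every gradient comes out with the same shape as the parameter it updates.

```python
import numpy as np

m = 5                              # hypothetical batch size
A1  = np.random.rand(10, m)        # hidden activations, one column per example
W2  = np.random.rand(10, 10)
dZ2 = np.random.rand(10, m)        # stands in for A2 - Y

dW2 = (1 / m) * dZ2 @ A1.T         # (10, m) @ (m, 10) -> (10, 10), same shape as W2
dZ1_pre = W2.T @ dZ2               # (10, 10) @ (10, m) -> (10, m), same shape as Z1

print(dW2.shape, dZ1_pre.shape)    # (10, 10) (10, 5)
```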
@parnavikulkarni3314 27 days ago
Why did you separate the training data into data_dev & data_train? What is the use of data_dev??
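The usual reason for such a split, in case it helps: data_dev is a small held-out set the network never trains on, so accuracy on it shows how well the model generalises to unseen digits rather than just memorising the training set. A minimal sketch of the idea, with hypothetical array sizes:

```python
import numpy as np

data = np.random.rand(1000, 785)   # hypothetical: one row per example (label + 784 pixels)
np.random.shuffle(data)            # shuffle first so both sets are representative

data_dev   = data[:100]            # held out, used only to check accuracy
data_train = data[100:]            # used for gradient descent
```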
@okgoogle765 28 days ago
thanks bro
@kamaldani1686 29 days ago
Understood nothing but wow
@blendingdude3429 1 month ago
How can you be so good at so many different things that require quite a lot of time to get good at?
@doctorshadow2482 1 month ago
Good start. Some points:
1. The article linked in the summary is no longer available.
2. It would be nice to pay more attention to backpropagation, since it is tricky; just a simple question: how do you choose between adjusting the bias and the weights?
3. It would be nice to compare this NN with a simple check against average "blurred" weighted maps for all the digits in the pictures: just blur all the 1s, separately blur all the 2s, and then compare cosine distance. If the results were the same as with the NN, the question would be about Occam's Razor.
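On point 2: there is no choice to make between adjusting the bias and the weights; backpropagation gives every parameter its own gradient, and gradient descent updates all of them on each step. In the usual layer-l notation, with learning rate alpha and m examples:

```latex
dW^{[l]} = \frac{1}{m}\, dZ^{[l]} A^{[l-1]\top},
\qquad
db^{[l]} = \frac{1}{m} \sum_{i=1}^{m} dZ^{[l](i)},
\qquad
W^{[l]} \leftarrow W^{[l]} - \alpha\, dW^{[l]},
\qquad
b^{[l]} \leftarrow b^{[l]} - \alpha\, db^{[l]}
```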
@Agorith_ 27 days ago
You can just use the Wayback Machine, since it's a 3-year-old vid.
@doctorshadow2482 27 days ago
@@Agorith_ How is it related to my post?
@Agorith_ 26 days ago
@@doctorshadow2482 Umm, I said it for your first point: "The article linked in the summary is no longer available."
@doctorshadow2482 26 days ago
@@Agorith_ So what? I was just giving the update to the author so that he could update the description or bring back the materials. It is not a complaint, just feedback.
@Agorith_ 26 days ago
@@doctorshadow2482 I am just making people who see your comment aware of where to look and which year to look into.
@williamjxj 1 month ago
Genius!
@themaridv2000 1 month ago
What does the "m" mean in the math calculations?
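In the convention the video follows, m is the number of training examples in the batch; dividing by m averages the per-example gradients. For instance:

```latex
X \in \mathbb{R}^{784 \times m} \ \text{(one column per example)},
\qquad
db^{[2]} = \frac{1}{m} \sum_{i=1}^{m} dZ^{[2](i)}
```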
@izurzuhri 1 month ago
Thanks dude, I need a deep dive into this topic; 30 minutes is not enough for me.
@ItsNaberius 1 month ago
Really excellent breakdown of a Neural Network, especially the math explanation in the beginning. I also want to say how much I appreciate you leaving in your first attempt at coding it and the mistakes you made. Coding is hard, and spending an hour debugging your code just because of one little number is so real. Great video
@catten8406 1 month ago
5:53, goofy way to calculate ReLU purely mathematically (without any if statements): RELU(X) = (X + sqrt(X^2)) * 0.5, or use abs(): RELU(X) = (X + abs(X)) * 0.5. sqrt() is square root, and abs() is absolute value.
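A quick NumPy check of that identity; relu_branchless and the sample values are just for illustration:

```python
import numpy as np

def relu_branchless(x):
    # (x + |x|) / 2 equals max(x, 0) elementwise: the two terms cancel for negative x.
    return (x + np.abs(x)) * 0.5

x = np.array([-2.0, -0.5, 0.0, 1.5])
print(relu_branchless(x))    # [0.  0.  0.  1.5]
print(np.maximum(x, 0))      # same result
```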
@jairam2788 1 month ago
Any complete YouTube channel or course for LSTMs from scratch???
@dp0813 1 month ago
Great example! Need more of these kinds of videos to really advance the area. I'm a huge proponent of requiring videos & code for every research paper submitted on a novel topic or algorithm
@wasabiii_ 1 month ago
this man is the definition of my imposter syndrome 😭
@anubhavjain7267 1 month ago
Are you the guy behind the @3lue1brown channel? He demonstrated the same explanation as you.
@user-kg5xl3dq9p 1 month ago
Hi, where can I find more material for building a neural network in this style?
@aleekazmi 1 month ago
He did all that but couldn't multiply the accuracy by 100 to output a percentage.
@shdnas6695 1 month ago
It's always the Chinese.
@ettoremiglioranza2959 1 month ago
Sorry, am I the only one who gets a 404 error on the math article's link?
@DetectiveConan990v3 1 month ago
Me too.
@my14081947 1 month ago
I heard your name as Samsung and got instantly hooked.
@theJellyjoker 1 month ago
I didn't get the math, but the explanation of the Neural Network pipeline was very informative. Thanks!
@s8x. 1 month ago
Why am I getting 10% accuracy?
@s8x. 1 month ago
Why is W2 of dimension 10 x 10?
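A short note on the shapes, assuming the architecture from the video (10 hidden units, 10 digit classes): W2 maps the hidden layer's 10 activations to the 10 class scores, so it has one row per class and one column per hidden unit.

```latex
A^{[1]} \in \mathbb{R}^{10 \times m},
\qquad
W^{[2]} \in \mathbb{R}^{10 \times 10},
\qquad
Z^{[2]} = W^{[2]} A^{[1]} + b^{[2]} \in \mathbb{R}^{10 \times m}
```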