#1 Solved Example Back Propagation Algorithm Multi-Layer Perceptron Network by Dr. Mahesh Huddar

  624,517 views

Mahesh Huddar

1 day ago

Comments: 209
@akinyaman · 1 year ago
Man, I looked at many backprop explanations, and most of them talk about rocket-science stuff. This is the easiest and clearest one ever; thanks for sharing.
@safayetsuzan1951 · 2 years ago
Sir, you deserve a big thanks. My teacher gave me an assignment and I searched YouTube for two days for the weight calculation. Finally your video did the job. It was really satisfying. Thank you, sir.
@MaheshHuddar · 2 years ago
Welcome. Do like, share, and subscribe.
@jarveyjaguar4395 · 1 year ago
I have an exam in two days and your videos just saved me from failing this module. Thank you so much, and much love from 🇩🇿🇩🇿.
@MaheshHuddar · 1 year ago
Welcome. Do like, share, and subscribe.
@joeystenbeck6697 · 2 years ago
This makes the math very clear. I now know the math and have some intuition, so I hope to fully connect the two soon. Thanks for the great video!
@MaheshHuddar · 2 years ago
Thank you. Do like, share, and subscribe.
@shivamanand8019 · 1 year ago
It looks like there is an error in the Δw_ji notation; you followed exactly the opposite convention. @MaheshHuddar
@R41Ryan · 1 year ago
Thank you. I've been trying to implement a reinforcement learning algorithm from scratch. I understood everything except backpropagation, and every video I watched on it was vague until I saw this one. Good stuff!
@MaheshHuddar · 1 year ago
Welcome. Do like, share, and subscribe.
@shreyaankrao968 · 1 year ago
I got a crystal-clear understanding of this concept only because of you, sir. The flow of the video is excellent; I appreciate your efforts!! Thank you and keep up the good work!!
@MaheshHuddar · 1 year ago
Welcome. Do like, share, and subscribe.
@tejasvinnarayan2887 · 1 year ago
Very clear! How about the bias b? What is the formula if we add a bias?
@ushanandhini1942 · 2 years ago
Easy to understand. Please make the next one on CNNs.
@pritam-kunduu · 1 year ago
You taught it very well. Today is my exam, and your videos were really helpful. I hope I pass without getting a backlog in this subject. 👍👍
@MaheshHuddar · 1 year ago
Welcome. Do like, share, and subscribe.
@BloodX03 · 1 year ago
Damn, sir, you are the best YouTube teacher in AI. Love you, sir.
@horiashahzadi302 · 1 month ago
A really excellent lecture... thank you very much... keep it up.
@vnbalaji7225 · 2 years ago
A simple, lucid example, well illustrated. Please continue.
@MaheshHuddar · 2 years ago
Thank you.
@alperari9496 · 7 months ago
I believe that for the hidden units, the w_kj in the delta(j) formula should have been w_jk, i.e., the other way around.
@alperari9496 · 7 months ago
And delta(w_ji) should have been delta(w_ij), again the other way around.
@steveg906 · 5 months ago
Yeah, I was about to comment the same thing.
@paul-pp1op · 1 year ago
The best video explanation of ANN backpropagation. Many thanks, sir.
@MaheshHuddar · 1 year ago
Thank you. Do like, share, and subscribe.
@25fpslagger81 · 5 months ago
Thank you, sir; all of your machine learning videos have helped us students a lot.
@MaheshHuddar · 5 months ago
Welcome. Do like, share, and subscribe.
@hemanthd623 · 5 months ago
@MaheshHuddar Sir, can you solve problems on HMMs and CNNs please?
@khizarkhan2250 · 1 year ago
Guys! The threshold (bias) is a tuning parameter: you can select something low like 0.01 or 0.02, or high like 0.2, and check whether the error gets lower. I hope this helps.
@ToriliaShine · 6 months ago
Thank you! I understood the concept smoothly with your video!
@MaheshHuddar · 6 months ago
Thank you. Do like, share, and subscribe.
@ToriliaShine · 6 months ago
@MaheshHuddar Did that. A question, though: for the forward pass, what about biases? They are values with their own weights, right? Were they just not included in this example?
@ToriliaShine · 6 months ago
Ah, never mind, got it from your next video in this series lol.
@abu-yousuf · 1 year ago
Great work, Dr. Mahesh. Thanks from Pakistan.
@MaheshHuddar · 1 year ago
Welcome. Do like, share, and subscribe.
@sahmad120967 · 1 year ago
Great, sir; it is a very clear example of how to calculate an ANN. Thanks, and keep being productive.
@MaheshHuddar · 1 year ago
Thank you. Do like, share, and subscribe.
@utkarshmangal6559 · 1 year ago
You are a king, sir. Thank you for saving me from my exam tomorrow.
@MaheshHuddar · 1 year ago
Welcome. Do like, share, and subscribe. All the very best for your exams.
@srisangeeth4131 · 5 months ago
The concept is clear, and I have gained confidence in it, sir. Thank you 👍👍👍👍
@MaheshHuddar · 5 months ago
Welcome. Do like, share, and subscribe.
@srisangeeth4131 · 5 months ago
@MaheshHuddar Sir, can you provide videos on Gaussian processes in machine learning?
@bharatreddy972 · 1 year ago
Thank you so much, sir. These videos gave us a clear understanding of all the machine learning concepts.
@MaheshHuddar · 1 year ago
Welcome. Do like, share, and subscribe.
@jinkagamura2820 · 2 years ago
I have a doubt. In many places I have seen the error calculated using the formula E = 1/2 (y - y*)^2, but you calculated it using plain subtraction. Which is correct?
@keertichauhan6221 · 2 years ago
Same doubt... what is the correct method?
@priyanshumohanty5261 · 2 years ago
@keertichauhan6221 I think there are different methods to compute the error. The one mentioned above is the (mean) squared error. The one shown in the video is also correct, but squared-error measures like MSE or RMSE are generally regarded as better.
@chukwuemekaomerenna4396 · 2 years ago
For multiple data points you use the error function, and for a single output you use the loss function. Loss function: error = actual - target. Error function: 1/2 (actual - target)^2.
@chukwuemekaomerenna4396 · 2 years ago
For multiple data points, you use the error function: E = 1/2 Σ (y_actual - y_target)^2. For a single data point, you use the loss function: loss = y_actual - y_target.
@sharanyas1565 · 2 years ago
The errors are made positive by squaring.
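The two formulas debated in this thread can be compared directly. A minimal sketch with made-up values (y_target and y_actual are illustrative, not taken from the video):

```python
# Comparing the two error measures discussed above (values are illustrative).
y_target = 0.5
y_actual = 0.6067

# Signed difference, as used in the video for a single output:
simple_error = y_target - y_actual

# Half squared error, the E = 1/2 (y - y*)^2 form:
squared_error = 0.5 * (y_target - y_actual) ** 2

# The squared form is always non-negative, and its derivative with respect
# to the output is -(y_target - y_actual), so both lead to the same (t - o)
# factor in the weight-update rule.
print(simple_error, squared_error)
```

So the choice mostly affects how the error is reported, not the resulting updates.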
@msatyabhaskarasrinivasacha5874 · 5 months ago
It's an awesome explanation, sir... no words to thank you, sir.
@MaheshHuddar · 5 months ago
You are most welcome. Do like, share, and subscribe.
@LongZzz · 1 year ago
Thank you so much. This video came right when I was feeling bad at machine learning and wanted to give up, but now I think it is not as hard as I thought.
@MaheshHuddar · 1 year ago
Welcome. Do like, share, and subscribe.
@sharanyas1565 · 2 years ago
Very clear. Thanks for uploading this video.
@MaheshHuddar · 2 years ago
Welcome. Do like, share, and subscribe.
@madhusaggi · 9 months ago
Can you please do videos on CNNs with the mathematical concepts? Your videos are very useful and understandable. Thank you.
@ajayofficial3706 · 1 year ago
Thank you, sir, for explaining each and every point.
@MaheshHuddar · 1 year ago
Welcome. Do like, share, and subscribe.
@web3sorcerer · 1 month ago
This is a great lecture!
@mehmetakifvardar · 9 months ago
Mr. Huddar, thanks a lot for the perfect explanation. One thing, though: how do I calculate the change in the bias term for each neuron in my neural network?
@femiwilliam1830 · 1 year ago
Good job, but why is the bias term not accounted for before applying the sigmoid function?
@iwantpeace6535 · 6 months ago
THANK YOU SIR, BRILLIANT INDIAN MIND
@MaheshHuddar · 6 months ago
Welcome. Do like, share, and subscribe.
@lalladiva6097 · 1 year ago
You are a lifesaver; thank you soooo much.
@MaheshHuddar · 1 year ago
Thank you. Do like, share, and subscribe.
@arminmow · 1 year ago
You saved me; you're a hero. Thank you.
@MaheshHuddar · 1 year ago
Welcome. Do like, share, and subscribe.
@bagusk_awan · 11 months ago
Sir, thank you for this great video! It was really helpful. I appreciate the clear explanation.
@MaheshHuddar · 11 months ago
Glad it was helpful! Do like, share, and subscribe.
@mr.commonsense6645 · 11 months ago
GOATED explanation; awesome, awesome.
@tanvirahmed552 · 2 years ago
Nice video; I easily understood the topic. Thank you.
@MaheshHuddar · 2 years ago
Welcome. Do like, share, and subscribe.
@sahanazbegam6913 · 3 months ago
Very clear; thank you for the content.
@MaheshHuddar · 3 months ago
Welcome. Do like, share, and subscribe.
@NotLonely_Traveler · 5 months ago
Finally, one that makes sense.
@MaheshHuddar · 5 months ago
Thank you. Do like, share, and subscribe.
@romankyrkalo9633 · 2 years ago
Great video, easy to understand.
@MaheshHuddar · 2 years ago
Thank you. Do like, share, and subscribe.
@mdnahidulislam13 · 5 months ago
Clear explanation. Recommended...
@MaheshHuddar · 5 months ago
Thank you. Do like, share, and subscribe.
@shubhampamecha9650 · 2 years ago
And where is the bias b? Shouldn't there also be some constant?
@usmanyousaaf · 5 months ago
Today is my exam again; well explained, sir.
@AbhishekSingh-up4rv · 2 years ago
Awesome explanation. Thanks.
@MaheshHuddar · 2 years ago
Welcome. Do like, share, and subscribe.
@Ayesha_01257 · 7 months ago
Very well explained... keep up the good work.
@MaheshHuddar · 7 months ago
Thank you. Do like, share, and subscribe.
@alizainsaleem5533 · 2 months ago
Best explanation.
@a5a5aa37 · 11 months ago
Thanks a lot for your explanation!
@MaheshHuddar · 11 months ago
Welcome. Do like, share, and subscribe.
@df_iulia_estera · 1 year ago
Awesome explanation! Thanks a lot!
@MaheshHuddar · 1 year ago
Thank you. Do like, share, and subscribe.
@adilmughal2251 · 1 year ago
Amazing stuff; just to the point and clear.
@MaheshHuddar · 1 year ago
Thank you. Do like, share, and subscribe.
@flakysob · 10 months ago
Thank you so much! You saved me. I subscribed. Thanks.
@MaheshHuddar · 10 months ago
Welcome. Please do like and share.
@usmanyousaaf · 1 year ago
Last night!! Today is the exam; well explained, boss.
@MissPiggyM976 · 1 year ago
Very useful, thanks!
@MaheshHuddar · 1 year ago
Welcome. Do like, share, and subscribe.
@user-fl7bm8jc8o · 1 year ago
Thanks a lot, sir 🙏🙏🙏🙏🙏🙏🙏
@MaheshHuddar · 1 year ago
Most welcome. Do like, share, and subscribe.
@ram-pc4wk · 10 months ago
How are you deriving the δ_j formula? You could include the derivation of the sigmoid function.
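The δ_j formula in the video comes from differentiating the sigmoid: for o = σ(x), the derivative is σ'(x) = σ(x)(1 - σ(x)) = o(1 - o), which is exactly the o(1 - o) factor in every δ term. A quick numerical sanity check of that identity (the test point 0.377 is arbitrary):

```python
import math

def sigmoid(x):
    # Logistic activation: maps any real input into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

x = 0.377
o = sigmoid(x)

# Analytic derivative: sigma'(x) = sigma(x) * (1 - sigma(x))
analytic = o * (1 - o)

# Central finite-difference approximation of the same derivative
h = 1e-5
numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)

assert abs(analytic - numeric) < 1e-8
```

This identity is why the δ terms can be written with only the neuron outputs, without re-evaluating any exponentials.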
@tatendamachikiche5535 · 2 years ago
Good stuff, thank you.
@MaheshHuddar · 2 years ago
Welcome. Do like, share, and subscribe.
@sowmiya_rocker · 1 year ago
Thanks for the video, sir. I have a doubt: how did you update the weights without gradient descent (GD) or any other optimization technique? I read in blogs that networks aren't trained by backpropagation alone, without GD. In other words, how would the calculation change if we also implemented GD here? I'm a rookie; kindly guide me, sir.
@adityachalla7677 · 1 year ago
Gradient descent has been used in this video while updating the weights; the change in weights is done through gradient descent. He just has not written out the derivative math.
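The reply above can be made concrete: the delta rule Δw = η·δ·x is itself one gradient-descent step on the squared error. A minimal sketch, with η, δ, and x as hypothetical numbers (not the video's values):

```python
# One weight update written two equivalent ways (all values hypothetical).
eta = 0.9           # learning rate
delta_j = -0.0217   # error term of the downstream neuron j
x_i = 0.6067        # output of the upstream neuron i feeding into j
w_ij = 0.3          # current weight

# For E = 1/2 (t - o)^2, the gradient dE/dw_ij works out to -delta_j * x_i:
grad = -delta_j * x_i
w_gd = w_ij - eta * grad                    # explicit gradient-descent step

w_delta_rule = w_ij + eta * delta_j * x_i   # the delta rule from the video

# Both expressions are the same update, just with the sign folded in.
assert abs(w_gd - w_delta_rule) < 1e-12
```

So "backpropagation" computes the gradients (the δ terms), and the weight update itself is gradient descent.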
@rohanwarghade7111 · 9 months ago
@adityachalla7677 How did he get y_target = 0.5?
@dydufjfbfnddhbxnddj83 · 1 year ago
How did you update the weights of the connections between the input layer and the hidden layer?
@iqramunir1468 · 1 year ago
Thank you so much, sir.
@MaheshHuddar · 1 year ago
Welcome. Do like, share, and subscribe.
@Vishnu_Datta_1698 · 11 months ago
Thank you.
@MaheshHuddar · 11 months ago
Welcome. Do like, share, and subscribe.
@AnubhavApurva · 1 year ago
Thank you!
@MaheshHuddar · 1 year ago
Welcome. Do like, share, and subscribe.
@pranavgangapurkar195 · 1 year ago
In one epoch, how many times does backpropagation take place?
@oposicionine4074 · 1 year ago
How do you update the weights if you have more input data? In this case he only has one input; how do you do it with two inputs? Do you do the same thing twice?
@animeclub8475 · 8 months ago
Everyone says "thank you", but only a few understand that this video is useless if there are more neurons in one layer. Those who say "thank you" do not even plan to build a neural network.
@sathviksrikanth7362 · 1 year ago
Thanks a lot, sir!!!
@MaheshHuddar · 1 year ago
Welcome. Do like, share, and subscribe.
@umakrishnamarineni3520 · 1 year ago
Thank you, sir.
@MaheshHuddar · 1 year ago
Most welcome. Do like, share, and subscribe.
@jacki8726 · 1 year ago
Very helpful.
@MaheshHuddar · 1 year ago
Do like, share, and subscribe.
@timebokka3579 · 1 year ago
Can you explain this? The input of a neuron is 0.377, and the output of the same neuron is 0.5932. How does this happen? How does 2.71^(-0.377) give the answer 0.6867?
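The numbers in this question are just the sigmoid (logistic) activation evaluated at the net input: out = 1 / (1 + e^(-0.377)) ≈ 0.5932. A quick check, which also shows where the 0.6867 comes from:

```python
import math

net = 0.377                    # net input to the neuron (from the question)
exp_term = math.exp(-net)      # e^-0.377, about 0.686
out = 1.0 / (1.0 + exp_term)   # sigmoid: 1 / (1 + e^-x), about 0.5932

# The 0.6867 in the question comes from using the rounded base 2.71
# instead of e ≈ 2.71828, which shifts the intermediate value slightly:
approx = 2.71 ** -net
```

With the exact base e the intermediate value is about 0.6859 rather than 0.6867, but the final output still rounds to 0.5932 either way.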
@priyaprabhu7101 · 2 years ago
Nice video, sir... where is the bias here?
@AmnaCode · 4 months ago
Thanks for the solution.
@MaheshHuddar · 4 months ago
Welcome. Do like, share, and subscribe.
@AmnaCode · 4 months ago
@MaheshHuddar Sure. Thanks 😊
@JudyXu-d6j · 9 months ago
I have a question: what if we have a bias term and some bias weights? Do we need to account for those, or would they be 0?
@MaheshHuddar · 9 months ago
Yes, you have to consider them. Follow this video: kzbin.info/www/bejne/pGOvYn1rf76ai80
@ervincsengeri1840 · 1 year ago
Thank you!
@MaheshHuddar · 1 year ago
Welcome. Do like, share, and subscribe.
@TXS-xt6vj · 10 months ago
You are a legend.
@MaheshHuddar · 10 months ago
Welcome. Do like, share, and subscribe.
@SaiNath-cw7yn · 3 months ago
Thank you, sir.
@MaheshHuddar · 3 months ago
Welcome. Do like, share, and subscribe.
@Professor_el · 5 months ago
The formulas for ΔW only work because of the nature of the activation function, right? If it is a hyperbolic tangent or ReLU, the formulas change, right?
@steveg906 · 5 months ago
Yes.
@pubuduchanna1736 · 1 year ago
Thank you! This helped me a lot!
@MaheshHuddar · 1 year ago
Welcome. Do like, share, and subscribe.
@SHUBHAMKUMAR-cd7fs · 4 months ago
Awesome.
@MaheshHuddar · 4 months ago
Thanks! Do like, share, and subscribe.
@arj1045 · 1 year ago
Well done, sir.
@MaheshHuddar · 1 year ago
Thank you. Do like, share, and subscribe.
@muhtasirimran · 1 year ago
I have a confusion. We use ReLU on the hidden layer, not sigmoid. Shouldn't we calculate the hidden layer's activation using ReLU instead of sigmoid?
@MaheshHuddar · 1 year ago
Yes, you have to do the calculation based on the activation function.
@muhtasirimran · 1 year ago
@MaheshHuddar I know. You have used the sigmoid function on the hidden layer. This will result in an error.
@justenjoy3744 · 1 year ago
Which book did you take the problem from?
@amulyadevireddy5669 · 2 years ago
What about the bias factor??
@halihammer · 2 years ago
He did not add it, to keep it simple, I guess. But you can add the bias by making it a third input. The method doesn't change; you just have a new (constant) input for each layer, normally set to 1, and the weight associated with it acts as your bias, since 1 * biasWeight = biasWeight. I just appended a 1 to my input vector and generated an additional weight for the weight matrix. But I'm also just learning and not sure if I'm 100% correct.
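The trick described above, folding the bias into the weights via a constant input of 1, can be sketched like this (all numbers are made up for illustration):

```python
# Bias folded into the weights via a constant input of 1 (illustrative values).
x = [0.35, 0.9]    # original inputs
w = [0.1, 0.8]     # original weights
b = 0.3            # bias

# Standard form: net = w . x + b
net_with_bias = sum(wi * xi for wi, xi in zip(w, x)) + b

# Augmented form: append a constant input 1 whose weight is the bias.
x_aug = x + [1.0]
w_aug = w + [b]
net_augmented = sum(wi * xi for wi, xi in zip(w_aug, x_aug))

# Both give the same net input, so backprop can treat the bias as just
# another weight and update it with the same delta rule.
assert abs(net_with_bias - net_augmented) < 1e-12
```

This is why many textbook derivations never mention the bias separately: it is simply one more weight whose input never changes.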
@allahthemostmerciful2706 · 1 year ago
Soooooo good ❤
@MaheshHuddar · 1 year ago
Thank you. Do like, share, and subscribe.
@tlbtlb3950 · 1 year ago
Not bad!
@dhanushvcse4056 · 1 year ago
Sir, please upload the gradient descent video a bit faster.
@MaheshHuddar · 1 year ago
Videos on gradient descent: kzbin.info/www/bejne/oaWqnmONeNSEhck kzbin.info/www/bejne/a5mlZZJupJhnfbc kzbin.info/www/bejne/n5OugWOkfrlqj7c
@makkingeverything6610 · 2 years ago
Thank you, man.
@MaheshHuddar · 2 years ago
Welcome. Do like, share, and subscribe.
@lakshsinghania · 10 months ago
Sir, each perceptron also has a bias with it, right?
@MaheshHuddar · 10 months ago
Yes.
@MaheshHuddar · 10 months ago
Follow this video for the bias: kzbin.info/www/bejne/pGOvYn1rf76ai80
@lakshsinghania · 10 months ago
@MaheshHuddar Thank you a lot, sir!!
@지엔서 · 8 months ago
Good video.
@MaheshHuddar · 8 months ago
Thank you. Do like, share, and subscribe.
@vinayvictory8010 · 1 year ago
At what stage do we stop the forward and backward passes? When the error becomes zero?
@muhtasirimran · 1 year ago
That is specified as the number of epochs and must be given in a question. There is no defined stopping condition; it really depends on your needs.
@pritampatil4669 · 2 years ago
What about the bias terms?
@ayushhmalikk · 2 years ago
You're a legend.
@MaheshHuddar · 2 years ago
Thank you. Do like, share, and subscribe.
@kaavyashree6209 · 1 year ago
Sir, how do we update the bias in backpropagation?
@MaheshHuddar · 1 year ago
Refer to this video: kzbin.info/www/bejne/pGOvYn1rf76ai80
@tuccecintuglu404 · 8 months ago
YOU ARE THE BEST
@nhatpham5797 · 1 year ago
Hello, thanks for your lecture. Please let me ask: after we calculate delta 5 in backpropagation, why don't we calculate the new w35 and w45 first and then substitute them into the formulas for delta 3 and delta 4, instead of using the old w35 and w45? Please reply; have a nice day.
@MaheshHuddar · 1 year ago
The error was produced by the old weights; hence the deltas, and the weight updates, are computed using the old weights.
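The order of operations being described can be sketched as follows: all deltas are computed with the old weights first, and only then are the weights overwritten. The numbers here are illustrative, not the video's:

```python
# Backward pass: deltas first (using old weights), weight updates last.
eta = 0.9
o3, o4, o5 = 0.60, 0.62, 0.59   # forward-pass outputs (hypothetical)
t5 = 0.5                         # target for the output neuron
w35, w45 = 0.3, 0.9              # old output-layer weights

# Output-layer delta
delta5 = o5 * (1 - o5) * (t5 - o5)

# Hidden deltas use the OLD w35 and w45: the error signal must flow back
# through the same weights that produced the error.
delta3 = o3 * (1 - o3) * (w35 * delta5)
delta4 = o4 * (1 - o4) * (w45 * delta5)

# Only now are the weights updated.
w35 += eta * delta5 * o3
w45 += eta * delta5 * o4
```

Updating w35 and w45 before computing delta3 and delta4 would mix new weights into a gradient that was measured under the old ones.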
@nhatpham5797 · 1 year ago
@MaheshHuddar Thank you so much!!
@መለኛው-ተ4ዀ · 1 year ago
Thanks.
@MaheshHuddar · 1 year ago
Welcome. Do like, share, and subscribe.
@redwanhossain7563 · 1 year ago
What about the bias?
@slimeminem7402 · 4 months ago
It is assumed to be zero here.
@satwik4823 · 1 year ago
GODDD!!!!!!!!!!!!!!!!!!!
@MaheshHuddar · 1 year ago
Thank you. Do like, share, and subscribe.
@1981Praveer · 1 year ago
@MaheshHuddar Don't we need a bias? Just curious.
@MaheshHuddar · 1 year ago
Follow this video: kzbin.info/www/bejne/pGOvYn1rf76ai80
@1981Praveer · 1 year ago
@MaheshHuddar At 6:31, just curious: can you share the web page for these formulae?
@BujjiNaveen995 · 2 years ago
Great.
@MaheshHuddar · 2 years ago
Thank you. Do like, share, and subscribe.
@BujjiNaveen995 · 2 years ago
@MaheshHuddar Done 👍
@amitasharma5481 · 1 year ago
Hi, can I contact you? I need to solve my assignment with 2D vector data and a mixture model, and I need to perform error backpropagation (EBP). Can you please tell me how I can contact you?
@seabiscuitthechallenger6899 · 3 months ago
👍👍👍👍👍👍👍👍
@MaheshHuddar · 3 months ago
Thank you. Do like, share, and subscribe.
@vaiebhavpatil2340 · 2 years ago
based
@MaheshHuddar · 2 years ago
Based...?
@SourabhKumar-nr1yq · 10 months ago
🙏🙏🙏🙏🙏🙏
@MaheshHuddar · 10 months ago
Do like, share, and subscribe.
@sajalali8155 · 1 year ago
Please use the Urdu language.