Tutorial 9- Drop Out Layers in Multi Neural Network

183,413 views
Krish Naik
Comments: 167
@forever-fz1hk • 5 years ago
Krish sir, just one thing to say... I too sometimes teach school children, and the effort you are putting into making these videos free of charge is commendable... May God bless you, sir. I am gaining confidence after watching your videos, and am on my way to becoming a data scientist.
@snehalbm • 3 years ago
You are the mentor every aspiring data scientist needs. Thanks!!
@ebisaabebe615 • 9 months ago
I am an M.Sc. student from Ethiopia. To tell you the truth, I have learnt a lot from your videos. May God bless your mind!!
@shivangirastogi9723 • 2 years ago
Thanks for putting your effort into making these in-depth videos, which clarify concepts in detail. Your videos are helping students like me who are very new to the ML and AI field.
@laxminarasimhaduggaraju2671 • 5 years ago
I can just see that your face is full of happiness when you explain a concept 🙏🙏
@tejateju6303 • 7 months ago
The video explains the concept of dropout layers in deep neural networks, which help prevent overfitting by randomly deactivating a subset of neurons during training. Key moments:

00:00 Artificial neural networks with many weight and bias parameters can overfit; dropout regularization helps prevent this by randomly dropping units during training.
- Explanation of overfitting in deep neural networks due to excessive parameters, and the need for regularization techniques like dropout.
- Comparison between underfitting in single-layer neural networks and the role of multiple layers in preventing underfitting in deep neural networks.
- Introduction to dropout regularization as a technique to prevent overfitting by randomly dropping units during training, with a reference to the 2014 paper by Srivastava and Hinton.

03:54 Dropout layers randomly deactivate a subset of features or neurons during training to prevent overfitting and improve model generalization.
- Explanation of how dropout layers work: a subset of features or neurons is randomly deactivated during training to improve generalization.
- Comparison of dropout layers to random forests, which select subsets of features to create diverse decision trees for better performance.

07:25 The dropout layer randomly deactivates some neurons and activates others during training to prevent overfitting, similar to random forest's feature selection and majority voting. On test data all neurons are connected, with no deactivation, and the weights are multiplied by the dropout probability for prediction.
- Comparison of the dropout layer with random forest's feature selection and majority voting to prevent overfitting in neural networks.
- Explanation of how test data is handled: all neurons are connected, with weights multiplied by the dropout probability for prediction.
- Selecting the dropout ratio (p value) through hyperparameter optimization, with a recommendation of a p value above 0.5.
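To make the summary above concrete, here is a minimal sketch of how such a dropout setup is typically written, assuming TensorFlow/Keras (the video itself covers only the theory; the layer sizes and rates below are illustrative, not from the video):

```python
# A minimal sketch of dropout layers in a feed-forward network.
# Assumptions: TensorFlow/Keras, 20 input features, binary classification.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Dropout(0.2, input_shape=(20,)),  # drop 20% of the input features each training step
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),                     # drop 50% of this hidden layer's activations
    layers.Dense(32, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Keras' `rate` argument is the fraction DROPPED, not kept, and Keras uses
# "inverted dropout": surviving activations are scaled by 1/(1 - rate) during
# training, so no weight rescaling is needed at test time.
```

Note that the rate can differ per layer, and that dropout is only active during training; calling the model at inference time uses all neurons, which bears on the p vs. 1-p discussion further down this thread.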
@shaz-z506 • 5 years ago
That's a good video, Krishna. I never thought about random forest using a similar mechanism when I first studied dropout. You've cleared up my concept with this video. Thanks!
@elmoreglidingclub3030 • 3 years ago
Great stuff. But I have to listen several times to understand given our different dialects. Much appreciation for your work and explanations!! Excellent!
@shosad100 • 4 years ago
Krish sir, you are my favorite teacher... your lessons and explanations are simple and easy to understand; even a B-grade student like me can understand the concepts. Thank you, sir.
@nandinisarker6123 • 4 years ago
I found it extremely useful, and easier to understand than many well-known experts.
@RanjitSingh-rq1qx • 2 years ago
I have watched 10 videos but haven't coded anything yet; still, I am sure that whenever I do code, I will perform more clearly, because these videos focus on the basics and explain ANNs in depth. Thank you so much, sir. 🥰🥰😘🇮🇳🇮🇳
@anujeetswain7368 • 4 years ago
This deep learning series is extremely good.
@grace30 • 4 years ago
Really like the way you explain! I have just completed a Udemy bootcamp and you are definitely reinforcing what I have learned. Keep up the good work!
@paullan-learning-read-dev7040 • 4 years ago
Thank you. Much easier to understand than the one by Andrew Ng.
@manishsharma2211 • 4 years ago
But we can't ignore the fact that he is a god in AI.
@nabiltech1366 • 4 years ago
Did you take and finish Andrew Ng's course?
@MrBemnet1 • 4 years ago
@nabiltech1366 Halfway. Did you finish?
@nabiltech1366 • 4 years ago
@MrBemnet1 No, bro. The way he teaches is too complicated for me, so I decided to learn a different way. Once I have a bit more knowledge, I will retake the course so it will be easier than before. What about you?
@MrBemnet1 • 4 years ago
@nabiltech1366 I don't get some of the concepts right away. I check other resources, then come back and view it again. I will finish everything within 2 weeks.
@smarthbakshi7041 • 3 years ago
This man makes ML a cakewalk!
@pankajverma-sw9oz • 2 years ago
I was always confused about deep learning; because of you I got clarity.
@AmitYadav-ig8yt • 5 years ago
Thank you very much. You have been an angel for me. Please upload a video on the theory behind SVM, K-Means, or other unsupervised ML. Thanks a lot once again. Hari Om
@maddybharathi • 4 years ago
You have a knack for making things short, simple, and easy to grasp :)
@smitirashmiguru7649 • 4 years ago
Love the deep learning series. Great learning!!
@chidubem31 • 2 years ago
The effort in these Videos !!! Thanks Krish !!!
@sukumarroychowdhury4122 • 4 years ago
Krish: You are the very best trainer
@adityashewale7983 • 1 year ago
Hats off to you, sir. Your explanation is top level. Thank you so much for guiding us...
@manjularathore1076 • 4 years ago
Hi Krish, thanks for making such nice videos and for the excellent explanation. Finally I have found something I was looking for: a better understanding of deep learning.
@sandipansarkar9211 • 4 years ago
Hello Krish. I came to know how the random forest idea is used in deep learning. Thanks.
@arohawrami8132 • 1 year ago
Thanks a lot Krish for your best explanation.
@gooopin • 4 years ago
Thanks for the sessions... These are precise and organized...
@Fatima-kj9ws • 3 years ago
Great explanations, thank you very much sir
@gopalakrishna9510 • 5 years ago
Sir, I think you're enjoying this teaching? Your expressions indicate you are enjoying it...
@Amanullah-lt6fq • 2 years ago
I have been watching your videos for a few months and I have learned a lot. Your channel deserves a subscription; I subscribed.
@pranjalijoshi6114 • 2 years ago
All your videos are very useful... thanks a lot for this good work.
@urwahmunir9636 • 4 years ago
Extraordinary step-by-step teaching style. You made all my concepts clear. Can you please add a practical implementation of a neural network model in which all these techniques are used: dropout, loss function, learning rate, regularization, and optimizer in one model? Thanks in advance...
@dnakhawa • 4 years ago
You teach very well... great stuff about data science on your channel. Thanks Harish!
@krishnaik06 • 4 years ago
It's Krish, buddy.
@theforgottenhealth3244 • 4 years ago
Great service. Amazing Explanation!!
@debopamsengupta4409 • 4 years ago
Hi Krish, great work; a really smooth and informative explanation.
@vishalvaibhav9697 • 5 years ago
Hello Krishna, first of all thank you so much for the videos, as a lot of my queries are getting cleared up by watching them. I now have a better understanding of neural networks, with all the maths behind them. I have one query for this particular video: what is batch normalization in neural networks, and how does it help prevent over-fitting?
@vikshukla44 • 4 years ago
Sir, you are amazing! You have cleared everything up.
@ameygirdhari8703 • 3 years ago
Simple and clear explanation.
@sauravkr.mahato • 8 months ago
How simply he explained it.
@hokapokas • 5 years ago
Good work as usual, Krish... awaiting its implementation 🙏🙏
@jeevanaddepalli5494 • 3 years ago
I think at test time we should multiply the weights by the keep probability = (1 - dropout rate). Intuitively, the keep probability is the fraction of the time we used that weight/edge/connection while training our NN. Please correct me if I am wrong, Krish sir.
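A few lines of numpy make the convention debate in this thread concrete. This sketch assumes p denotes the probability of keeping a neuron, as in the original dropout paper; if p instead denotes the fraction dropped (the Keras convention), the test-time factor becomes 1 - p:

```python
# Classic (non-inverted) dropout: mask at train time, scale at test time.
import numpy as np

rng = np.random.default_rng(0)
p_keep = 0.5                         # probability of KEEPING a neuron
a = rng.random((1000, 64))           # hypothetical hidden-layer activations

# Training: each activation survives with probability p_keep, else is zeroed.
mask = rng.random(a.shape) < p_keep
a_train = a * mask

# Test time: keep every neuron but scale by p_keep, so the expected
# activation matches what the next layer saw during training.
a_test = a * p_keep

print(a_train.mean(), a_test.mean())  # both ~0.25, i.e. p_keep * E[a]
```

Either convention gives the same model; the only thing that changes is whether the scaling factor is called p or 1 - p.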
@firstkaransingh • 2 years ago
Great explanation 👍
@vantuannguyen4337 • 3 years ago
I really love your energy.
@wajidiqbal5633 • 3 years ago
Very well explained. Thank you.
@adityagamingchanneltv9041 • 3 years ago
Your lectures are superb
@AKHILESHYADAV-ig7uv • 4 years ago
It's a really good lecture series.
@pedramdabaghian1329 • 2 years ago
Thank you. It was so helpful.
@davidhakobyan6377 • 4 years ago
You explain very well! Thank you!
@Aliabbashassan3402 • 5 years ago
Thank you from Iraq... good job, brother.
@babbarutkarsh7770 • 3 years ago
Can there be a better explanation? Simply perfect!!
@fthialbkosh1632 • 4 years ago
Thanks a lot, sir, very good explanation.
@marijatosic217 • 4 years ago
Great as always! Thank you :)
@anindyabanerjee743 • 4 years ago
Krish... you make my life easier.
@priyasingh-zd1wm • 5 years ago
Such awesome content and explanations!!!
@joseguilherme5008 • 2 years ago
Great video 👏
@9971916866 • 4 years ago
Thank you Krish for the video, this is excellent!! One question: dropout is applied at each epoch, so how are the results from all the epochs combined?
@pawansharma-ij7kg • 3 years ago
Nice Explanation
@aakashnishad7048 • 5 years ago
Thanks Krish
@adityachandra2462 • 4 years ago
The p value in the dropout-rate section of the middle layer would be 0.6 (blocking 60%), not 0.5. (A value of 1.0 means no dropout, and a value of 0.0 means full dropout, i.e. no output from that layer.) You keep repeating that; please rectify it.
@koushikkonar4186 • 1 year ago
Hi, in this video, when we apply the model to test data, what will be the weights of the deactivated neurons?
@prasantimohanty6750 • 5 years ago
I have a doubt. On test data we do p*w for the neurons that were deactivated, but what do we do for the neurons that were activated?
@AbdulRehman-hg9es • 3 years ago
Great effort, Krish! I like your passion. I have one confusion about the dropout ratio: why are you using a dropout ratio of 0.5 for the input layer? To my knowledge it should be higher (i.e. 1.0 or 0.9).
@shashwatdev2371 • 4 years ago
I have a doubt: does the dropout ratio of a particular layer remain the same on every iteration or not? If not, do we take an average to multiply with the weights for test data?
@lol-ki5pd • 1 year ago
Just a question: during backpropagation, we get updated weights for each neuron. Now when we propagate back to the start and random features are chosen again, what happens to the backpropagated weights?
@zx3215 • 5 years ago
In your sketch, did you really drop a couple of inputs out? Is that allowed in the dropout approach?
@bhushanbowlekar4539 • 2 years ago
Guys, please note: if you're dropping neurons or activations at a rate of p, then the weights are multiplied by 1-p in the test phase.
@pankajkumarbarman765 • 4 years ago
Sir you are great 💖
@abdulqadar9580 • 2 years ago
Amazing Sir
@Sovereignl55 • 1 year ago
Sir, if we're dropping some inputs and also hidden-layer neurons, won't it affect our output, I mean the correct predictions?
@anujsinha12 • 5 years ago
Hello @Krish Naik, you mentioned in the video that for test data w should be multiplied by p. Do we need to write code for that in the model, or does it happen automatically?
@shiffin_chippe • 5 years ago
So when the neurons are reactivated, what are their weights?
@akashkewar • 4 years ago
Their weights are the same as before, because you didn't update them using backpropagation. You only update the weights corresponding to neurons that are active in a given iteration. So in the next iteration, if we happen to activate a neuron that was not active in the last iteration, its weight stays the same until backpropagation updates it (because that neuron is active now, it will get updated).
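This is easy to verify in a toy numpy example: with a single linear layer and squared loss, the weight rows attached to dropped inputs receive exactly zero gradient, so an update step leaves them unchanged (all sizes and names here are illustrative, not from the video):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=(8, 4))            # batch of 8 samples, 4 input features
y = rng.normal(size=(8, 1))
W = rng.normal(size=(4, 1))

keep = np.array([1.0, 0.0, 1.0, 0.0])  # this iteration: drop features 2 and 4
x_drop = x * keep                      # dropped inputs contribute nothing

pred = x_drop @ W                      # forward pass
grad = x_drop.T @ (pred - y) / len(x)  # gradient of squared error (up to a constant)
W_new = W - 0.1 * grad                 # gradient step

# Rows of W tied to dropped inputs are untouched by the update:
print(np.allclose(W_new[1], W[1]), np.allclose(W_new[3], W[3]))  # True True
```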
@shiffin_chippe • 4 years ago
@akashkewar Thanks for the reply after 8 months 😃😃😃♥️
@akashkewar • 4 years ago
@shiffin_chippe :D "Better late than never". I hope you are doing fine in life; don't give up.
@VamsiKrishna-vg6vd • 5 years ago
For training data, suppose we ignore a few features and neurons as per the dropout ratio, calculate the weights, and update them with backpropagation. In the second step another set of features and neurons is selected randomly. If we then calculate new weights again, that doesn't seem to make sense, right, as this will keep repeating with different random combinations... Please correct me if I am wrong... Thanks in advance.
@vaishnavkrishnan7996 • 4 years ago
So after all of this is done, the best set of features is selected for that particular output value, I guess.
@pranjalgupta9427 • 4 years ago
Amazing explanation, but what happens if p=0 or p=1?
@vaibhavhariramani • 3 years ago
I just have a little query: if we keep activating and deactivating neurons while training, doesn't that cause problems at test time, when all the neurons are activated at once even though they were trained in different combinations?
@midhileshmomidi2434 • 5 years ago
Hi sir, I have a doubt. If we take p=0.5, will half of the features that were deactivated in the 1st epoch be reactivated in the 2nd epoch, and so on for the other features in upcoming epochs as well? Please explain.
@vishalaaa1 • 4 years ago
Hi, you did not explain how the exploding gradient problem can be corrected. Is it also through ReLU?
@ankitbisht1517 • 3 months ago
Can you please share the URL of any paper related to this regularisation technique?
@rahul-wz7rn • 3 years ago
If we apply a dropout ratio, is there any chance that the features selected the first time get selected again the second time, or are new features selected?
@sameerherkal9205 • 9 months ago
Hi @Krish, I was asked in an interview: what if we remove one hidden layer instead of using dropout? Wouldn't it be better to remove a hidden layer instead of using dropout? Can you please help me with the answer?
@gouravdidwania1070 • 3 years ago
If p=0.7, will 70% be kept or 70% be dropped?
@namangoyal8477 • 3 years ago
I have the same question.
@mohd.faizan3003 • 4 years ago
Sir, I have a doubt: when neurons are randomly selected based on the p value, then in the next epoch, from which neurons is the random selection performed: the previously activated ones, or all of them?
@sharadkolse6871 • 4 years ago
Best explained :)
@kuskargupt2887 • 4 years ago
Sir, since we randomly select some features or neurons, the weights are updated according to that set of neurons in that particular forward and backward pass. So how is the model going to predict the right answer when all the neurons are activated together for test data, given that we trained the weights while fewer neurons were active? How will the model sum up all the weights to give the right prediction (with least error)?
@RajeshRajesh-sh7zj • 9 months ago
In the next iteration, will the deactivated neurons get activated randomly???
@michaelloturco5584 • 3 years ago
Thank you for this excellent explanation! Could you link the original research paper you mentioned? (Or maybe I'm just not finding it in the description.)
@dmlane_sougata • 3 years ago
Sir, when testing, will all the weights be used as (p*W), or is it something else that is updated as (p*W)? Please clear this up.
@shubhamchauda425 • 4 years ago
I have a question: do we have to add a separate dropout layer for each layer, or add one once for all layers?
@chaoxi8966 • 2 years ago
Hi, sir. I would like to know: within each epoch of training, does dropout have any relation to batch_size?
@manikosuru5712 • 5 years ago
Hi sir, amazing explanation... small doubt: while multiplying the 'p' value with the weight 'w' for test data, do we include (add) the bias value with the input?
@krishnaik06 • 5 years ago
We have to include the bias.
@VidyaranyaSaiNuthalapatiNSV • 2 years ago
I think there is a mistake in the explanation of test time. If p is the probability of dropping a neuron, then the weights should be multiplied by 1-p at test time.
@shahariarsarkar3433 • 9 months ago
Please suggest a good reference book for deep learning.
@absolutelynobody3837 • 3 years ago
Wouldn't the weights in testing be w(1-p) rather than wp?
@palashchanda9308 • 4 years ago
Can you please provide a link to the Machine Learning playlist?
@krishnakanthbandaru9308 • 5 years ago
Sir, if during training we drop the x2 and x4 features, we won't get weights for them; then at test time, how do those (unknown) weights get multiplied by the dropout ratio? I did not get that, please explain...
@amankapri • 5 years ago
@Krishna: The features x2 and x4 will be dropped only in the 1st epoch. Once that epoch is completed, it will again select two other features as per the dropout ratio. Once this loop completes, every neuron in each layer will have some weight associated with it.
@spadiyar6725 • 3 years ago
You have not explained why everything is connected for the test data; you explained the calculation after they are connected. I would like to know why everything is connected, and what happens if we use dropout on the test data.
@debasispatra8368 • 4 years ago
Krish, I have a doubt. Suppose I have 5 inputs and 5 neurons in my 1st hidden layer. At training time I set the dropout ratio to 0.5, and suppose because of this 2 inputs and 2 neurons get deactivated. Now we have 3 inputs and 3 neurons left, so we have 9 weights to train. But at test time we have to multiply the 'p' value with all 25 weights, since at test time all inputs and neurons exist. So how is this done?
@vaishnavkrishnan7996 • 4 years ago
I think the dropout ratio for the other deactivated neurons in the test set would be 0, I guess; it doesn't make sense though.
@amitghodke838 • 5 years ago
Can you explain how it helps avoid the overfitting problem?
@ketanchaudhari5642 • 4 years ago
Hello sir, just want to ask: during dropout we drop a few neurons, and at test time we connect them all and scale the weights of the whole network as (W*p). But what are the weights of the dropped neurons? Should we take W=1 for them?
@benvelloor • 4 years ago
All neurons will get their weights updated, since there are multiple forward and backward propagations. Dropout just prevents a random subset of neurons from updating in any given forward and backward iteration.
@ParthivShah • 1 year ago
Thank you, sir.
@karndeepsingh • 5 years ago
Sir, a small doubt... will the dropout ratio be different for different layers, or the same for all layers?
@krishnaik06 • 5 years ago
It can be different.
@karndeepsingh • 5 years ago
@krishnaik06 Thank you, sir. Keep uploading these conceptual videos; we are learning a lot.
@swastikpathak4669 • 4 years ago
Krish Naik: that's like the coolest name.