First of all, thanks for the video. Bagging: we take different models and train them in parallel, each getting a subset of the total data, and each model should have high variance and low bias. Boosting: similar, but instead of training in parallel, the models are trained sequentially, each one learning from the errors of the previous one, and each model should have high bias and low variance.
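The pairing described above can be sketched with scikit-learn (assuming it is available): by default, BaggingClassifier uses fully grown decision trees (high variance, low bias) and AdaBoostClassifier uses depth-1 stumps (high bias, low variance), so the defaults already match the comment.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import BaggingClassifier, AdaBoostClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

# Bagging: 50 fully grown trees (high variance, low bias), each fit on a
# bootstrap sample; averaging their votes reduces variance.
bagging = BaggingClassifier(n_estimators=50, random_state=42)

# Boosting: 50 depth-1 stumps (high bias, low variance) fit sequentially,
# each upweighting the examples the previous stump got wrong.
boosting = AdaBoostClassifier(n_estimators=50, random_state=42)

print("bagging :", cross_val_score(bagging, X, y, cv=5).mean().round(3))
print("boosting:", cross_val_score(boosting, X, y, cv=5).mean().round(3))
```

Both should score well on Iris; the point is the shape of the base learners, not the final number.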
@UnfoldDataScience3 жыл бұрын
Well said!
@crazycurlzXD4 жыл бұрын
I've been struggling to understand this for quite a few hours now. Finally, got it. Thank you so much!
@UnfoldDataScience4 жыл бұрын
Glad it helped!
@hetal19263 жыл бұрын
I am new to this field and was trying to understand this concept; I referred to many webpages and watched many videos. You explain very nicely. I got the concept.
@UnfoldDataScience3 жыл бұрын
Thanks for watching.
@TheMuktesh89 Жыл бұрын
That is very nicely explained. Thank you, Sir.
@tosinlitics949 Жыл бұрын
I love that you thoroughly explained the theory before you dove into the code. Great job!
@UnfoldDataScience Жыл бұрын
Thank you.
@sridattu44673 жыл бұрын
I beg your pardon... I was struggling with this technique. Now it's very clearly understood, and the code got executed!! Thanks a lot
@UnfoldDataScience3 жыл бұрын
Thanks for watching Sri.
@davidgao535110 ай бұрын
Great explanation of the concept. Thank you for also showing the python samples to really bring it home.
@sharatainapur2 жыл бұрын
Hello Aman Sir, thank you for the great video and the simple explanation. Could you please elaborate on how the meta-model is built and used for the testing / real test set? Here, the meta-model uses logistic regression, right? How does a logistic regression work to stack the results from the base models?
@ranajaydas89063 жыл бұрын
Thanks a lot... Was struggling with this Stacking approach.... Now it's clear!
@UnfoldDataScience3 жыл бұрын
Cheers Ranajay :)
@bharatbajoria4 жыл бұрын
Bagging is Bootstrap Aggregation, which is used primarily to reduce variance; it uses the CLT to do so. Boosting improves the base learners by learning from the mistakes of the previous model using homogeneous weak learners; it helps in reducing bias.
@UnfoldDataScience3 жыл бұрын
Thanks Bharat.
@saurabhdeokar37913 жыл бұрын
In bagging, we make different subsets of the dataset using row sampling with replacement; each subset is passed to a different model to make predictions, and at the end we combine or aggregate all of the models' predictions.
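The row-sampling-with-replacement step can be seen in a few lines with scikit-learn's resample utility (a toy sketch: ten row indices stand in for a dataset):

```python
import numpy as np
from sklearn.utils import resample

data = np.arange(10)  # ten row indices of a toy dataset

# One bootstrap sample: same size as the data, drawn WITH replacement,
# so some rows repeat and some are left out ("out-of-bag" rows).
boot = resample(data, replace=True, n_samples=len(data), random_state=0)
oob = np.setdiff1d(data, boot)

print("bootstrap sample:", boot)
print("out-of-bag rows :", oob)
```

Each base model in bagging gets its own bootstrap sample like `boot`; the left-out rows can even be used to estimate that model's error.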
@FarhanAhmed-xq3zx3 жыл бұрын
Greatly explained💥👌
@UnfoldDataScience3 жыл бұрын
Eid Mubarak Farhan. Tc
@arrafihriday13332 жыл бұрын
Laudable teaching. Learnt a lot.
@sonalkapuriya4944 Жыл бұрын
crystal clear explanation
@reflex26273 жыл бұрын
Absolutely very good explanation, better than my professor's.
@UnfoldDataScience3 жыл бұрын
Thanks a lot. Your comments motivate me.
@vaddadisairahul29564 жыл бұрын
Bagging helps in reducing the variance caused by overfitting in decision trees, and to further reduce bias, boosting is used. Hence, ultimately we achieve a model with low bias and low variance.
@UnfoldDataScience4 жыл бұрын
Correct Rahul.
@akshayshenoy74173 жыл бұрын
Beautiful, man. Got my concepts cleared; you deserve more reach.
@UnfoldDataScience3 жыл бұрын
Thanks a lot Akshay. Kindly share in the data science groups you are part of :)
@nan89223 жыл бұрын
Wow, fast and clear, thanks.
@UnfoldDataScience3 жыл бұрын
You're welcome!
@sandipansarkar92113 жыл бұрын
finished watching
@preranatiwary76904 жыл бұрын
Good content
@UnfoldDataScience4 жыл бұрын
Thanks a lot.
@phanikumar31364 жыл бұрын
Bagging is a parallel process: we can choose rows and columns with replacement, and an example is random forest. Boosting is a sequential process, and an example is XGBoost.
@UnfoldDataScience4 жыл бұрын
Correct. Thank you.
@pankajnegi92784 жыл бұрын
In bagging we take base learner models with high variance and low bias. E.g., in random forest we typically take decision trees (fully grown to their max depth, with max_depth=None), as such decision trees are high-variance models. The main aim of bagging is to reduce the high variance of the overall/final model. In bagging we have bootstrap (row sampling and column sampling) and aggregation steps, which help achieve a low-variance final model. Also, every base learner is trained on a sample of the dataset, not on the whole dataset, so every base learner learns something unique or different from the other base learners.
@mikohotori42764 жыл бұрын
Thanks for your sharing.
@UnfoldDataScience4 жыл бұрын
My pleasure
@magesh10mano3 жыл бұрын
Good Explanation... Thank you
@UnfoldDataScience3 жыл бұрын
You are welcome
@Mars78222 жыл бұрын
Nice class
@UnfoldDataScience2 жыл бұрын
Thanks
@tusharsalunkhe79164 жыл бұрын
Thank you, sir, for this lecture. I want to know one thing... the data that goes to the meta-model consists of independent variables and the actual output value (target variable Y), along with predictions from weak learners like LR, SVM, NN... so how does the meta-model use the predictions from the weak learners to produce the final prediction? Does the meta-model treat the predictions from the weak learners as additional independent variables (along with the existing independent variables) and the target variable as the dependent variable, and give the final prediction? Please help.
@UnfoldDataScience3 жыл бұрын
Good question, Tushar. The meta-model will take the predictions from the weak learners as features (not the original features).
@tusharsalunkhe79163 жыл бұрын
@@UnfoldDataScience Thanks for the reply. So the predictions from the weak learners are taken as independent variables and the original target variable as the dependent variable... right?
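That arrangement, base-model predictions as the meta-model's only features, can be hand-rolled in a few lines (a sketch with illustrative base models, not necessarily the ones from the video):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=1)

# Train the base (weak) learners on the training split.
base_models = [KNeighborsClassifier(), DecisionTreeClassifier(random_state=1)]
for m in base_models:
    m.fit(X_tr, y_tr)

# Their predictions on the validation split become the meta-features
# (independent variables); the original target on that split stays the
# dependent variable.
meta_X = np.column_stack([m.predict(X_val) for m in base_models])
meta_model = LogisticRegression(max_iter=1000).fit(meta_X, y_val)
print(meta_model.score(meta_X, y_val))
```

Note that `meta_X` has one column per base model and none of the original four Iris features.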
@samuelpradhan18994 ай бұрын
How do we use stacking regressor models from sklearn and Keras?
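For the scikit-learn side, here is a minimal sketch on synthetic data (the choice of base models is illustrative). A Keras model would first need a scikit-learn-compatible wrapper, e.g. the scikeras package, which is not shown here.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import StackingRegressor, RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=300, n_features=5, noise=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# StackingRegressor takes (name, estimator) tuples for the base models and a
# final_estimator that learns from their cross-validated predictions.
stack = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(random_state=0)), ("svr", SVR())],
    final_estimator=Ridge())
stack.fit(X_tr, y_tr)
print(stack.score(X_te, y_te))  # R^2 on the held-out split
```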
@harshakrishna82593 жыл бұрын
Thanks a lot bro! ... Helped a lot for one of my projects!!
@UnfoldDataScience3 жыл бұрын
Welcome Harsha.
@hansmeiser60783 жыл бұрын
Simply wonderful!
@UnfoldDataScience3 жыл бұрын
Many thanks!
@SandeepSSMishra10 ай бұрын
Can you make a separate video for Blending with detailed example and implementation without the libraries?
@Kumarsashi-qy8xh4 жыл бұрын
Nice subject
@UnfoldDataScience4 жыл бұрын
Thanks a lot :)
@manjunambiar4954 Жыл бұрын
While executing the for loop, there is an error message: "TypeError: KNN not iterable". How do I solve this?
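Without seeing the exact code, a common cause of a "not iterable" TypeError in this kind of loop is iterating over a single estimator object instead of a list of estimators. A hedged sketch of the fix:

```python
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

knn = KNeighborsClassifier()

# This raises "TypeError: 'KNeighborsClassifier' object is not iterable":
# for clf in knn:
#     ...

# Wrap the estimators in a list before looping:
for clf in [knn, DecisionTreeClassifier()]:
    print(type(clf).__name__)
```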
@tolifeandlearning39194 жыл бұрын
Good Explanation.
@UnfoldDataScience4 жыл бұрын
Glad you liked it
@christian.belmar Жыл бұрын
excellent
@UnfoldDataScience Жыл бұрын
Thanks Chris
@pranitflora9482 Жыл бұрын
Very well explained. Can you also explain K-fold cross-validation and go into depth on the meta-model?
@ujjwala72862 жыл бұрын
Thank you for explaining. Can you suggest which ensemble technique is suitable for a deep learning model on a video classification task?
@UnfoldDataScience2 жыл бұрын
Yes
@sangrammishra43963 жыл бұрын
Love your teaching, sir.
@UnfoldDataScience3 жыл бұрын
Thanks Sangram.
@sandipansarkar92113 жыл бұрын
I am unable to locate this ipynb file in your respective Google Drive. Please guide.
@UnfoldDataScience3 жыл бұрын
I think this file is missing. I will try to find it and upload it; however, I suspect it may be on my old laptop and difficult to recover.
@sonambarsainya37472 жыл бұрын
Hi Aman, thanks for your explanation. I have a question regarding deep learning models: can we stack yolov4 (where I converted the .weight file into a .h5 file) and other CNN models like InceptionResnetV2+LSTM into one ensemble model for different classification tasks with different data?
@diniebalqisay26582 жыл бұрын
I also have the same question...
@seema55792 жыл бұрын
Thanks for the video, sir. Can I perform stacking between different CNN models, with feature fusion between these models?
@SandeepSSMishra Жыл бұрын
Sir, Namaskar. Is the code you wrote in Python for stacking or blending? Kindly say.
@MegaBoss19803 жыл бұрын
Can we do a level-2 meta-model? Also, can we insert new training features into the meta-model?
@Neerajkumar-xl9kx3 жыл бұрын
Thanks a lot, I am a beginner.
@UnfoldDataScience3 жыл бұрын
Thanks Neeraj.
@eduardocasanova73012 жыл бұрын
Hi Aman, thanks for your explanation! I have a question though: are regularization and ensembling the same? In the decision tree case we use the same techniques of bagging and boosting, so if I'm regularizing, am I implicitly ensembling, and vice versa? Thank you!
@talaasoudalial-bimany66054 жыл бұрын
Thank you very much. I just want to understand: how many approaches are there when implementing stacked ensemble learning, i.e., when we combine base learners with a meta-learner?
@UnfoldDataScience3 жыл бұрын
There can be many ways of implementing it, depending on how you write the code; however, the internal logic remains the same.
@sadhnarai87574 жыл бұрын
Very good Aman
@UnfoldDataScience4 жыл бұрын
Thank you.
@shivanshsingh55554 жыл бұрын
@5:48 At this point you said, "and this training data goes to another model, called the meta-model." The way you point and what you say there isn't clear to me. These details are very important to me, so I go into the depth of every word and action. Could you kindly sort out my query? Also, what is the training data here after dividing the 75 records into 80-20%? If it is the 80% (as I understand), then why didn't you mention it? I'm confused.
@mohammadmoslemuddin72744 жыл бұрын
I can share my understanding. First, we divide the 100 examples into 75 training and 25 test examples. Then we divide the 75 training examples 80/20 into 60 training and 15 validation examples. After that, we train the different base models on the 60 training examples and make predictions on the 15 validation examples. The predictions on those 15 examples become the input to our meta-model. Now we train the meta-model and test our accuracy on the initial 25 test examples. In short, this is blending. When we instead follow a K-fold approach to split the 75 training examples into training and validation folds, it is called stacking. Hope it helps. Happy learning.
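The split sizes in that explanation can be reproduced with scikit-learn's train_test_split (a toy 100-row dataset stands in for the real data):

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(100).reshape(-1, 1)
y = np.arange(100) % 2

# 100 rows -> 75 training / 25 held-out test rows.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=25, random_state=0)

# 75 training rows -> 60 for the base models / 15 validation rows (80/20);
# base-model predictions on the 15 validation rows feed the meta-model.
X_base, X_val, y_base, y_val = train_test_split(
    X_train, y_train, test_size=0.2, random_state=0)

print(len(X_base), len(X_val), len(X_test))  # 60 15 25
```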
@UnfoldDataScience4 жыл бұрын
Thanks Mohammad and Shivansh for discussion.
@chitramdasgupta31224 жыл бұрын
Thank you! Keep making these videos.
@UnfoldDataScience4 жыл бұрын
Will do Chitram. Your comments are my motivation.
@vinushan242 ай бұрын
Thanks!
@orchidchetiaphukan46584 жыл бұрын
Clearly Explained.
@UnfoldDataScience4 жыл бұрын
Thanks a lot for motivating me.
@orchidchetiaphukan46584 жыл бұрын
@@UnfoldDataScience Sir, Can you share the notebook of this tutorial?
@sagaradoshi2 жыл бұрын
Hi Aman, thanks for the video. I have one question: when we have finished training the stacked model and get a test sample, will the test sample go through all the base learners plus the meta-model (i.e., SVM, random forest, Gaussian, and the logistic regression meta-model), or will we feed the test sample only to the meta-model (i.e., logistic regression in our case)?
@stemicalengineer Жыл бұрын
Also have this question in mind. Why is no one answering this?
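In scikit-learn's StackingClassifier, the test sample does pass through all the base learners first: their outputs become the meta-features that the final estimator consumes. The `transform()` method exposes what `predict()` builds internally (a sketch with illustrative models, not necessarily those from the video):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import StackingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
stack = StackingClassifier(
    estimators=[("knn", KNeighborsClassifier()),
                ("tree", DecisionTreeClassifier(random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000)).fit(X, y)

# transform() returns the base learners' outputs for each sample; predict()
# runs these same meta-features through the final estimator.
meta_features = stack.transform(X[:3])
print(meta_features.shape)   # (3, n_meta_features)
print(stack.predict(X[:3]))
```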
@anuragchandnani80374 жыл бұрын
thank you
@UnfoldDataScience4 жыл бұрын
You're welcome Anurag.
@ling67012 жыл бұрын
Thank you.
@jyotireddy3413 жыл бұрын
Hi. Thank you so much for the video. Can you please guide on how to merge 2 BERT models together. Thanks for the help!
@UnfoldDataScience3 жыл бұрын
Thanks Jyoti, will do
@MAK3354 жыл бұрын
Sir, which ensemble technique should we choose, and when? How do we decide that?
@UnfoldDataScience4 жыл бұрын
Bagging and boosting are both good. The decision will depend on available resources, data size, etc.
@MAK3354 жыл бұрын
@@UnfoldDataScience Nobody has made a single video on this on YouTube. You should definitely make a video on this topic!!!
@svltechnologiespvtltd91813 жыл бұрын
Nice explanation. How can we do testing with the test dataset?
@UnfoldDataScience3 жыл бұрын
The same way as in normal ML.
@nasali51164 жыл бұрын
Is it possible that the blending model sometimes has lower accuracy than the initial model?
@UnfoldDataScience3 жыл бұрын
Possible.
@RinP31034 жыл бұрын
Hi. I am very much interested in ML concepts and trying to build a career in this, but I can see there are lots of mathematical derivations whenever I try to learn a new concept, and also so many libraries; it's quite difficult to get acquainted with all of them. Can you please guide me on how to actually learn all this so it can be understood well?
@UnfoldDataScience4 жыл бұрын
Hi Rinky, Please watch my machine learning playlist once. Tell me if it boosts your confidence: kzbin.info/www/bejne/boGppWeAntNqeJI
@MrXRes4 жыл бұрын
Thank you for the video. Can this approach be useful for semantic segmentation? For example, we have a meta-model consisting of UNet, DeepLab, and FCN, with FCN as the meta-classifier. Is it going to get a better result?
@UnfoldDataScience4 жыл бұрын
Yes, we can try that, I am not 100% sure it will work though.
@ASNPersonal3 жыл бұрын
Getting an error with this code:
# creating stacking classifier with above models
stackingclf = StackingClassifier(classifiers=[myclf1, myclf2, myclf3], meta_classifier=mylr)
The code below runs without error:
# creating stacking classifier with above models
stackingclf = StackingClassifier(estimators=[myclf1, myclf2, myclf3], final_estimator=mylr)
@UnfoldDataScience3 жыл бұрын
Maybe this method is not accepting the "meta_classifier" argument due to a version issue.
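For reference, the first signature above matches mlxtend's StackingClassifier (classifiers=/meta_classifier=), while scikit-learn's StackingClassifier expects a list of (name, estimator) tuples plus a final_estimator. A sketch of the scikit-learn form with illustrative models:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import StackingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

# scikit-learn's API: named (str, estimator) tuples and final_estimator.
stackingclf = StackingClassifier(
    estimators=[("knn", KNeighborsClassifier()),
                ("tree", DecisionTreeClassifier(random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000))
stackingclf.fit(X, y)
print(stackingclf.score(X, y))
```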
@gauravkanu44863 жыл бұрын
Thanks a lot @anugati, you saved my time!
@sabeenak71593 жыл бұрын
Thank you sir
@UnfoldDataScience3 жыл бұрын
So nice of you Sabeena.
@studywithme42752 жыл бұрын
thanks
@UnfoldDataScience2 жыл бұрын
You're most welcome
@raphaeldayan4 жыл бұрын
thank you a lot!
@UnfoldDataScience4 жыл бұрын
You're welcome raphael.
@FlorieAnnRibucan3 жыл бұрын
Good day! May I request a link for a copy of your code, sir? Thank you
@sandipansarkar92113 жыл бұрын
finished coding
@jagannathdas39913 жыл бұрын
Sir, you could have explained stacking a bit more clearly... The blending part was good. 🙏🙏
@UnfoldDataScience3 жыл бұрын
Thanks for the feedback. :) And for watching, too. :)
@MAK3354 жыл бұрын
Iris is not binary classification; it has more than 2 classes in the target variable.
@UnfoldDataScience4 жыл бұрын
Yes, correct, it has three categories. Did I say 2? Thanks for pointing it out.
@MAK3354 жыл бұрын
@@UnfoldDataScience You didn't say 2, but you said Iris is a binary classification dataset.
@zainabfatima99323 жыл бұрын
Code for this one?
@UnfoldDataScience3 жыл бұрын
I'm searching for it; however, there is a possibility it is on my old laptop. I will try to find it and upload it.