Stacking Ensemble Learning | Stacking and Blending in ensemble machine learning

52,760 views

Unfold Data Science

Days ago

Comments: 127
@atomicbreath4360 · 3 years ago
First of all, thanks for the video. Bagging: we take different models and train them in parallel, each getting a subset of the total data, and each base model has high variance and low bias. Boosting: similar, but instead of training in parallel, the output of one model feeds into the next, and each base model should have high bias and low variance.
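The contrast in the comment above can be sketched with scikit-learn; the dataset and hyperparameters here are illustrative choices, not from the video:

```python
# A minimal sketch contrasting bagging (parallel, deep low-bias trees)
# with boosting (sequential, shallow high-bias trees).
from sklearn.datasets import load_iris
from sklearn.ensemble import BaggingClassifier, AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

# Bagging: fully grown trees (low bias, high variance), each fit independently
# on a bootstrap sample; averaging reduces variance.
bagging = BaggingClassifier(DecisionTreeClassifier(max_depth=None),
                            n_estimators=50, random_state=0)

# Boosting: decision stumps (high bias, low variance), fit sequentially,
# each round re-weighting the previous round's mistakes; this reduces bias.
boosting = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1),
                              n_estimators=50, random_state=0)

print("bagging CV accuracy:  %.3f" % cross_val_score(bagging, X, y, cv=5).mean())
print("boosting CV accuracy: %.3f" % cross_val_score(boosting, X, y, cv=5).mean())
```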
@UnfoldDataScience · 3 years ago
Well said!
@crazycurlzXD · 4 years ago
I've been struggling to understand this for quite a few hours now. Finally, got it. Thank you so much!
@UnfoldDataScience · 4 years ago
Glad it helped!
@hetal1926 · 3 years ago
I am new in this field and was trying to understand this concept; I referred to many webpages and watched many videos. You explain very nicely. I got the concept.
@UnfoldDataScience · 3 years ago
Thanks for watching.
@TheMuktesh89 · 1 year ago
That is very nicely explained. Thank you, Sir.
@tosinlitics949 · 1 year ago
I love that you thoroughly explained the theory before you dove into the code. Great job!
@UnfoldDataScience · 1 year ago
Thank you.
@sridattu4467 · 3 years ago
I beg your pardon... I was struggling with this technique. Now it's very clearly understood, and the code executed!! Thanks a lot.
@UnfoldDataScience · 3 years ago
Thanks for watching, Sri.
@davidgao5351 · 10 months ago
Great explanation of the concept. Thank you for also showing the Python samples to really bring it home.
@sharatainapur · 2 years ago
Hello Aman Sir, thank you for the great video and simple explanation. Could you please elaborate on how the meta-model is built and used on the test / real-world set? Here the meta-model uses logistic regression, right? How does logistic regression work to stack the results from the base models?
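One way to see what the logistic-regression meta-model actually receives: each base model's predictions become one column of a new feature matrix. A minimal sketch (base models and split sizes are illustrative assumptions):

```python
# The meta-model is an ordinary classifier whose input features are the
# base models' predictions, not the original features.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

base_models = [SVC(), RandomForestClassifier(random_state=0), GaussianNB()]
for m in base_models:
    m.fit(X_train, y_train)

# Each column of the meta-feature matrix is one base model's predictions.
meta_features = np.column_stack([m.predict(X_val) for m in base_models])

# Logistic regression then learns how much weight to give each base model.
meta_model = LogisticRegression().fit(meta_features, y_val)
print("meta-model accuracy:", meta_model.score(meta_features, y_val))
```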
@ranajaydas8906 · 3 years ago
Thanks a lot... I was struggling with this stacking approach... Now it's clear!
@UnfoldDataScience · 3 years ago
Cheers, Ranajay :)
@bharatbajoria · 4 years ago
Bagging is bootstrap aggregation, which is used primarily to reduce variance; it uses the CLT to do so. Boosting improves the base learners by learning from the mistakes of the previous model using homogeneous weak learners; it helps in reducing bias.
@UnfoldDataScience · 3 years ago
Thanks, Bharat.
@saurabhdeokar3791 · 3 years ago
In bagging, we make different subsets of the dataset using row sampling with replacement; each subset is passed to a different model for prediction, and at the end we combine or aggregate all of the models' predictions.
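The row-sampling-with-replacement and aggregation steps described above can be written out by hand; this is a sketch, not the video's code, and the tree count is an arbitrary choice:

```python
# Bootstrap row sampling with replacement (the "B" in bagging),
# followed by majority-vote aggregation.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
rng = np.random.default_rng(0)

models = []
for _ in range(10):
    # Sample n rows *with replacement*: some rows repeat, others are left out.
    idx = rng.integers(0, len(X), size=len(X))
    models.append(DecisionTreeClassifier(random_state=0).fit(X[idx], y[idx]))

# Aggregation: majority vote across the 10 trees.
all_preds = np.stack([m.predict(X) for m in models])
majority = np.array([np.bincount(col).argmax() for col in all_preds.T])
print("ensemble accuracy on the full data:", (majority == y).mean())
```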
@FarhanAhmed-xq3zx · 3 years ago
Greatly explained 💥👌
@UnfoldDataScience · 3 years ago
Eid Mubarak, Farhan. Take care.
@arrafihriday1333 · 2 years ago
Laudable teaching. Learnt a lot.
@sonalkapuriya4944 · 1 year ago
Crystal clear explanation.
@reflex2627 · 3 years ago
Absolutely very good explanation, better than my professor's.
@UnfoldDataScience · 3 years ago
Thanks a lot. Your comments motivate me.
@vaddadisairahul2956 · 4 years ago
Bagging helps reduce the variance that comes from overfitting in decision trees, and to further reduce bias, boosting is used. Hence, ultimately we achieve a model with low bias and low variance.
@UnfoldDataScience · 4 years ago
Correct, Rahul.
@akshayshenoy7417 · 3 years ago
Beautiful, man. Got my concepts cleared; you deserve more reach.
@UnfoldDataScience · 3 years ago
Thanks a lot, Akshay. Kindly share it in the data science groups you are part of :)
@nan8922 · 3 years ago
Wow, fast and clear, thanks.
@UnfoldDataScience · 3 years ago
You're welcome!
@sandipansarkar9211 · 3 years ago
finished watching
@preranatiwary7690 · 4 years ago
Good content.
@UnfoldDataScience · 4 years ago
Thanks a lot.
@phanikumar3136 · 4 years ago
Bagging is a parallel process in which we choose rows and columns with replacement; an example is random forest. Boosting is sequential; an example is XGBoost.
@UnfoldDataScience · 4 years ago
Correct. Thank you.
@pankajnegi9278 · 4 years ago
In bagging we take base learner models with high variance and low bias. E.g., in random forest we typically take decision trees grown to their max depth (max_depth=None), since such trees are high-variance models. The main aim of bagging is to reduce the high variance of the final model. The bootstrap step (row and column sampling) and the aggregation step help achieve a low-variance final model. Also, every base learner is trained on a sample of the dataset, not the whole dataset, so every base learner learns something unique, different from the other base learners.
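The variance-reduction claim above can be checked empirically: compare one fully grown tree against a bagged forest of such trees on held-out data. The dataset and sizes here are illustrative assumptions:

```python
# A single fully grown tree (high variance) vs. a forest of bagged deep trees;
# the forest averages many decorrelated trees, so it usually generalizes better.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(max_depth=None, random_state=0).fit(X_tr, y_tr)
forest = RandomForestClassifier(n_estimators=200, max_depth=None,
                                random_state=0).fit(X_tr, y_tr)

print("single tree test accuracy:", tree.score(X_te, y_te))
print("forest test accuracy:     ", forest.score(X_te, y_te))
```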
@mikohotori4276 · 4 years ago
Thanks for your sharing.
@UnfoldDataScience · 4 years ago
My pleasure.
@magesh10mano · 3 years ago
Good explanation... Thank you.
@UnfoldDataScience · 3 years ago
You are welcome.
@Mars7822 · 2 years ago
Nice class.
@UnfoldDataScience · 2 years ago
Thanks.
@tusharsalunkhe7916 · 4 years ago
Thank you sir for this lecture. I want to know one thing: the data that goes to the meta-model consists of the independent variables and the actual output (target variable Y), along with predictions from the weak learners like LR, SVM, NN... So how does the meta-model use the predictions from the weak learners to produce the final prediction? Does the meta-model treat the weak learners' predictions as additional independent variables (along with the existing independent variables) and the target variable as the dependent variable? Please help.
@UnfoldDataScience · 3 years ago
Good question, Tushar. The meta-model takes the predictions from the weak learners as features (not the original features).
@tusharsalunkhe7916 · 3 years ago
@@UnfoldDataScience Thanks for the reply. So the predictions from the weak learners are taken as independent variables, and the original target variable as the dependent variable... right?
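That exchange can be made concrete with out-of-fold predictions, so the meta-model is not trained on predictions made over data the base models already saw. A sketch under illustrative choices of base models:

```python
# Out-of-fold predictions from each weak learner become the independent
# variables; the original target y stays the dependent variable.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

base_models = [SVC(), RandomForestClassifier(random_state=0)]
# One column of out-of-fold predictions per base model.
meta_X = np.column_stack([cross_val_predict(m, X, y, cv=5) for m in base_models])

meta_model = LogisticRegression().fit(meta_X, y)  # y is unchanged
print("meta-model accuracy on its features:", meta_model.score(meta_X, y))
```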
@samuelpradhan1899 · 4 months ago
How to use stacking regressor models from sklearn and Keras?
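For the sklearn half of that question, `StackingRegressor` works as below; mixing in a Keras network would additionally need a sklearn-compatible wrapper (e.g. the scikeras package), which is not shown here. Models and data are illustrative:

```python
# A minimal sketch of sklearn's StackingRegressor (sklearn-only).
from sklearn.datasets import make_regression
from sklearn.ensemble import StackingRegressor, RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=10, noise=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(random_state=0)), ("svr", SVR())],
    final_estimator=Ridge(),  # the meta-model
    cv=5,
)
stack.fit(X_tr, y_tr)
print("R^2 on test set:", stack.score(X_te, y_te))
```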
@harshakrishna8259 · 3 years ago
Thanks a lot, bro! ... Helped a lot with one of my projects!!
@UnfoldDataScience · 3 years ago
Welcome, Harsha.
@hansmeiser6078 · 3 years ago
Simply wonderful!
@UnfoldDataScience · 3 years ago
Many thanks!
@SandeepSSMishra · 10 months ago
Can you make a separate video on blending, with a detailed example and an implementation without the libraries?
@Kumarsashi-qy8xh · 4 years ago
Nice subject.
@UnfoldDataScience · 4 years ago
Thanks a lot :)
@manjunambiar4954 · 1 year ago
While executing the for loop, there is an error message: "TypeError: KNN not iterable". How do I solve this?
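That error usually means a single estimator was passed where a list of estimators was expected, so the for loop tries to iterate over the model object itself. A hypothetical reproduction and fix (variable names are illustrative, not from the video's notebook):

```python
# Likely cause: looping over a single model instead of a list of models.
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression

knn = KNeighborsClassifier()

# Buggy version (raises "TypeError: ... object is not iterable"):
# for model in knn:
#     model.fit(...)

# Fix: put the estimators in a list before looping.
models = [knn, LogisticRegression()]
for model in models:
    print(type(model).__name__)
```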
@tolifeandlearning3919 · 4 years ago
Good explanation.
@UnfoldDataScience · 4 years ago
Glad you liked it.
@christian.belmar · 1 year ago
Excellent.
@UnfoldDataScience · 1 year ago
Thanks, Chris.
@pranitflora9482 · 1 year ago
Very well explained. Can you also explain K-fold cross-validation and go into depth on the meta-model?
@ujjwala7286 · 2 years ago
Thank you for explaining. Can you suggest which ensemble technique is suitable for a deep learning model on a video classification task?
@UnfoldDataScience · 2 years ago
Yes.
@sangrammishra4396 · 3 years ago
Love your teaching, sir.
@UnfoldDataScience · 3 years ago
Thanks, Sangram.
@sandipansarkar9211 · 3 years ago
I am unable to locate this ipynb file in your Google Drive. Please guide.
@UnfoldDataScience · 3 years ago
I think this file is missing. I will try to find it and upload it; however, I suspect it may be on my old laptop and difficult to recover.
@sonambarsainya3747 · 2 years ago
Hi Aman, thanks for your explanation. I have a question regarding deep learning models: can we stack YOLOv4 (whose .weights file I converted to .h5) and other CNN models like InceptionResNetV2+LSTM into one ensemble model for classification with different data?
@diniebalqisay2658 · 2 years ago
I also have the same question...
@seema5579 · 2 years ago
Thanks for the video, sir. Can I perform stacking between different CNN models, with feature fusion between these models?
@SandeepSSMishra · 1 year ago
Sir, Namaskar. Is the code you did in Python for stacking or for blending? Kindly say.
@MegaBoss1980 · 3 years ago
Can we do a level-2 meta-model? Also, can we add new training features to the meta-model?
@Neerajkumar-xl9kx · 3 years ago
Thanks a lot; I am a beginner.
@UnfoldDataScience · 3 years ago
Thanks, Neeraj.
@eduardocasanova7301 · 2 years ago
Hi Aman, thanks for your explanation! I have a question though: are regularization and ensembling the same? In the decision tree case we use the same techniques of bagging and boosting, so if I'm regularizing, am I implicitly ensembling, and vice versa? Thank you!
@talaasoudalial-bimany6605 · 4 years ago
Thank you very much. I just want to understand how many approaches there are for implementing stacked ensemble learning, i.e., when we combine base learners with a meta-learner.
@UnfoldDataScience · 3 years ago
The implementation can vary depending on how you write the code; however, the internal logic remains the same.
@sadhnarai8757 · 4 years ago
Very good, Aman.
@UnfoldDataScience · 4 years ago
Thank you.
@shivanshsingh5555 · 4 years ago
@5:48 you said, "and this training data goes to another model, called the meta-model." I didn't follow what you were pointing at and what you said there; these details are very important to me, since I go into the depth of every word and action. Could you kindly sort out my query? Also, what is the training data here after dividing the 75 records 80-20%? If it is the 80% (as I believe), why didn't you mention it? I'm confused.
@mohammadmoslemuddin7274 · 4 years ago
I can share my understanding. First, we divide the 100 examples into 75 training and 25 test examples. Then we divide the 75 training examples 80/20, i.e., 60 training and 15 validation examples. After that, we train the different base models on the 60 training examples and make predictions on the 15 validation examples. The predictions on those 15 examples become the input to our meta-model. Now we train the meta-model and test our accuracy on the initial 25 test examples. In short, this is blending. When we instead follow a K-fold approach on the 75 training examples to rotate the 60/15 training and validation split, it is called stacking. Hope it helps. Happy learning.
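The blending recipe in the comment above can be sketched end to end with the same 75/25 and 60/15 splits; the base models and dataset here are illustrative assumptions:

```python
# Blending: base models train on 60 examples, their predictions on the 15
# validation examples train the meta-model, and the 25 test examples are
# scored by passing through base models, then the meta-model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=100, random_state=0)

# 100 -> 75 train / 25 test, then 75 -> 60 train / 15 validation.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=25, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X_train, y_train, test_size=15, random_state=0)

base_models = [SVC(), RandomForestClassifier(random_state=0)]
for m in base_models:
    m.fit(X_tr, y_tr)  # train base models on the 60 examples

# Base-model predictions on the 15 validation examples feed the meta-model.
meta_X = np.column_stack([m.predict(X_val) for m in base_models])
meta_model = LogisticRegression().fit(meta_X, y_val)

# Test time: 25 held-out examples go through base models, then the meta-model.
test_meta_X = np.column_stack([m.predict(X_test) for m in base_models])
print("blended test accuracy:", meta_model.score(test_meta_X, y_test))
```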
@UnfoldDataScience · 4 years ago
Thanks, Mohammad and Shivansh, for the discussion.
@chitramdasgupta3122 · 4 years ago
Thank you! Keep making these videos.
@UnfoldDataScience · 4 years ago
Will do, Chitram. Your comments are my motivation.
@vinushan24 · 2 months ago
Thanks!
@orchidchetiaphukan4658 · 4 years ago
Clearly explained.
@UnfoldDataScience · 4 years ago
Thanks a lot for motivating me.
@orchidchetiaphukan4658 · 4 years ago
@@UnfoldDataScience Sir, can you share the notebook for this tutorial?
@sagaradoshi · 2 years ago
Hi Aman, thanks for the video. I have one question: once we have finished training the stacked model and a test sample arrives, does the test sample go through all the base learners plus the meta-model (i.e., SVM, random forest, Gaussian NB, and then the logistic regression meta-model), or do we feed the test sample only to the meta-model (logistic regression in our case)?
@stemicalengineer · 1 year ago
I also have this question in mind. Why is no one answering this?
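In scikit-learn's implementation, at least, the answer is the former: at predict time the sample passes through every base learner, and their outputs are what the meta-model consumes. A minimal sketch (model choices are illustrative):

```python
# At predict time a StackingClassifier runs the sample through all base
# learners first; transform() exposes those intermediate outputs, and
# predict() applies the meta-model to them.
from sklearn.datasets import load_iris
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
stack = StackingClassifier(
    estimators=[("svm", SVC()), ("rf", RandomForestClassifier(random_state=0))],
    final_estimator=LogisticRegression(),
).fit(X, y)

# Base-learner outputs for one test sample (the meta-model's input features):
print(stack.transform(X[:1]))
# Final prediction (meta-model applied to those outputs):
print(stack.predict(X[:1]))
```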
@anuragchandnani8037 · 4 years ago
Thank you.
@UnfoldDataScience · 4 years ago
You're welcome, Anurag.
@ling6701 · 2 years ago
Thank you.
@jyotireddy341 · 3 years ago
Hi, thank you so much for the video. Can you please guide me on how to merge two BERT models together? Thanks for the help!
@UnfoldDataScience · 3 years ago
Thanks, Jyoti. Will do.
@MAK335 · 4 years ago
Sir, which ensemble technique should we choose, and when? How do we decide?
@UnfoldDataScience · 4 years ago
Bagging and boosting are good. The decision depends on available resources, data size, etc.
@MAK335 · 4 years ago
@@UnfoldDataScience Nobody has made a single video on this on YouTube. You should definitely make a video on this topic!!!
@svltechnologiespvtltd9181 · 3 years ago
Nice explanation. How can we do testing with the test dataset?
@UnfoldDataScience · 3 years ago
The same way as in normal ML.
@nasali5116 · 4 years ago
Is it possible that a blending model sometimes has lower accuracy than the initial models?
@UnfoldDataScience · 3 years ago
Possible.
@RinP3103 · 4 years ago
Hi... I am very interested in ML concepts and am trying to build a career in this, but there are lots of mathematical derivations in every new concept, and so many libraries; it's quite difficult to get acquainted with all of this. Can you please guide me on how to learn it so that it can be understood well?
@UnfoldDataScience · 4 years ago
Hi Rinky, please watch my machine learning playlist once. Tell me if it boosts your confidence: kzbin.info/www/bejne/boGppWeAntNqeJI
@MrXRes · 4 years ago
Thank you for the video. Can this approach be useful for semantic segmentation? For example, a stacked model consisting of UNet, DeepLab, and FCN base learners with an FCN meta-classifier. Is it going to get a better result?
@UnfoldDataScience · 4 years ago
Yes, we can try that; I am not 100% sure it will work, though.
@ASNPersonal · 3 years ago
Getting an error with this code:
# creating stacking classifier with above models
stackingclf = StackingClassifier(classifiers=[myclf1, myclf2, myclf3], meta_classifier=mylr)
No error with the code below:
# creating stacking classifier with above models
stackingclf = StackingClassifier(estimators=[myclf1, myclf2, myclf3], final_estimator=mylr)
@UnfoldDataScience · 3 years ago
Maybe that argument is not accepted due to a version difference: classifiers=/meta_classifier= is the mlxtend StackingClassifier API, while estimators=/final_estimator= is the scikit-learn one.
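One more wrinkle worth noting: scikit-learn's StackingClassifier expects `estimators` as a list of `(name, estimator)` tuples, not bare estimators. A minimal sketch with illustrative model choices:

```python
# scikit-learn StackingClassifier API: named (name, estimator) tuples plus
# a final_estimator acting as the meta-model.
from sklearn.datasets import load_iris
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

stackingclf = StackingClassifier(
    estimators=[  # note: (name, estimator) tuples
        ("knn", KNeighborsClassifier()),
        ("rf", RandomForestClassifier(random_state=0)),
        ("nb", GaussianNB()),
    ],
    final_estimator=LogisticRegression(),
)
stackingclf.fit(X, y)
print("training accuracy:", stackingclf.score(X, y))
```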
@gauravkanu4486 · 3 years ago
Thanks a lot @anugati, saved my time!
@sabeenak7159 · 3 years ago
Thank you, sir.
@UnfoldDataScience · 3 years ago
So nice of you, Sabeena.
@studywithme4275 · 2 years ago
Thanks.
@UnfoldDataScience · 2 years ago
You're most welcome.
@raphaeldayan · 4 years ago
Thank you a lot!
@UnfoldDataScience · 4 years ago
You're welcome, Raphael.
@FlorieAnnRibucan · 3 years ago
Good day! May I request a link to a copy of your code, sir? Thank you.
@sandipansarkar9211 · 3 years ago
finished coding
@jagannathdas3991 · 3 years ago
Sir, you could have explained stacking a bit more clearly... The blending part was good. 🙏🙏
@UnfoldDataScience · 3 years ago
Thanks for the feedback :) And for watching too :)
@MAK335 · 4 years ago
Iris is not binary classification; it has more than 2 classes in the target variable.
@UnfoldDataScience · 4 years ago
Yes, correct, it has three categories. Did I say 2? Thanks for pointing it out.
@MAK335 · 4 years ago
@@UnfoldDataScience You didn't say 2, but you said Iris is a binary classification dataset...
@zainabfatima9932 · 3 years ago
Code for this one?
@UnfoldDataScience · 3 years ago
I'm searching for it; however, it may be on my old laptop. I will try to find it and upload it.
@Monty-hl1rb · 1 year ago
Sir, please explain it briefly again... I'm not understanding 😟😟
@riyosantoyosep1749 · 3 years ago
Please add subtitles.
@SESHUNITR · 4 years ago
Good job.
@UnfoldDataScience · 4 years ago
Thanks, Sesha.