@skylerfranklin7089 · 3 years ago
@Kingston Dangelo thank you, I signed up and it seems to work :D I really appreciate it!
@kingstondangelo4541 · 3 years ago
@Skyler Franklin You are welcome xD
@Dr_Insha_Altaf · 2 years ago
Sir, can you please provide its pseudocode?
@TinaHuang1 · 3 years ago
Wow, this is great - I was just thinking about this a couple of days back!
@DataProfessor · 3 years ago
Awesome, thanks for tuning in Tina!
@gguchristine · 3 years ago
I was literally thinking about how to do stacking, and then I saw your video in my subscription box, haha. Thanks for the video!
@DataProfessor · 3 years ago
Awesome, glad to hear!
@remymakota1813 · 2 years ago
@@DataProfessor Hello there, is it possible to use Particle Swarm Optimization as part of the stacking models? If so, could you kindly show me how?
@paulntalo1425 · 3 years ago
Thank you, Professor; your contribution towards enabling data scientists is unmatched this year. You're my best channel for full-stack data science.
@DataProfessor · 3 years ago
Thanks Paul for the encouragement and support of the channel :)
@taseersuleman7343 · 3 years ago
Much awaited video 👍
@DataProfessor · 3 years ago
Thanks Taseer!
@mogamoga4474 · 3 years ago
Sir, why is logistic regression always used for stacking?
@shahadsha8692 · 10 months ago
Logistic regression is used for classification tasks.
@andykim7654 · 8 months ago
I’ve noticed that the outcomes from the random forest model and the stacking model are identical. Any thoughts?
@stephentete1211 · 7 months ago
Yes, I was wondering too; it's the same for the SVM model. @DataProfessor, could you please clarify this? Thanks!!
@shubhamdandekar20 · 3 years ago
Thanks for this video, now I understand what stacking is.
@DataProfessor · 3 years ago
Glad to hear that :)
@taki7394 · 7 months ago
You are great, prof!
@SandraBabirye-t7d · 5 months ago
How are you able to obtain feature importances from the stacked model?
@SadTeddyBeer · 1 year ago
At the end of the video, when we print the final df with the metric scores in it, we can see that the stacked model mostly follows the random forest classifier. Why not go with the decision tree, which has a score of 1 on all its metrics? Is it because a perfect score of 1 is likely to be biased (overfitting), so the stacking classifier doesn't take it into account?
@Julio_Zambrano · 3 years ago
Amazing! This video is super helpful! Thank you, Professor! :D
@DataProfessor · 3 years ago
Thanks for watching and glad to hear that it's helpful 😊
@Dr_Insha_Altaf · 2 years ago
@@DataProfessor kindly provide me its pseudocode...
@mamacita5636 · 2 years ago
Thank you! Quick question: why do you perform a train-test split if you're going to use cross-validation? Wouldn't the cross-validation do the split?
@futureceltic00 · 9 months ago
So if I were not to use StackingClassifier, this is basically me consolidating all the predicted classes of the base models and using them as features for the meta-model? If that is the case, does stacking also reveal overfitting from too many features used in the base models if the performance decreases on the final meta-model?
@muditarora9860 · 3 years ago
It shows an error at this line: stack_model_train_accuracy = accuracy_score(Y_tr, Y_tr_pr) - it says the target cannot be continuous.
@DataProfessor · 3 years ago
Can you try the r2_score function instead (also import the r2_score function first)? I think your Y is a quantitative value, and that's why the error says "continuous". For the accuracy_score function to work, your Y has to be categorical (i.e. discrete values).
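A rough sketch of the distinction (my own illustration, assuming scikit-learn; the array values are just placeholders):

    import numpy as np
    from sklearn.metrics import accuracy_score, r2_score

    y_true_class = np.array([0, 1, 1, 0])   # discrete class labels -> classification
    y_pred_class = np.array([0, 1, 0, 0])
    print(accuracy_score(y_true_class, y_pred_class))  # fine: labels are categorical

    y_true_cont = np.array([0.2, 1.4, 3.1, 0.7])   # continuous target -> regression
    y_pred_cont = np.array([0.3, 1.1, 2.9, 0.9])
    print(r2_score(y_true_cont, y_pred_cont))      # use r2_score (or MAE/MSE) here;
                                                   # accuracy_score would raise the
                                                   # "continuous" ValueError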
@gammetube3976 · 2 years ago
Thanks, dear. How do we check the CV of the model, save the developed model, and check the model's evaluation?
@ama016 · 3 years ago
This is SO awesome! Laser eyes ML
@DataProfessor · 3 years ago
Glad it's helpful :)
@tsunamio7750 · 2 years ago
Could you make an example of this with Keras neural networks? There is a very specific issue when you wrap your DNN with KerasClassifier, where you must provide the model as a build function or something... instead of training it directly.
@hareshk.mangtaniretrita1376 · 3 years ago
Great video! Had a quick question for you. After training and testing the KNN algorithm, how are the performance metrics on the test set higher than those on the training set? Haven't we trained the model using the training set? I would expect the model to be more accurate when making predictions on the training set (which is seen data) as opposed to the test set (unseen data). Regards!
@chadgregory9037 · 3 years ago
LOL you are right.... something SUS going on here!
@bedoe9684 · 2 years ago
I don't have a theoretical answer, but it does happen that validation scores are greater than training ones. It might be because the test split itself contains more favorable data (i.e. data that the model has learned very well). Regards
@gammetube3976 · 2 years ago
Great question! How do we apply feature extraction to this model?
@nurmukhammad_30k · 3 years ago
Very nice explanation! Keep going!
@connectrRomania · 3 years ago
Man your videos are awesome, keep up the good work. Thank you
@DataProfessor · 3 years ago
Glad to hear! Thanks!
@Kmysiak1 · 2 years ago
What's your logic for using a logistic regression classifier as the final_estimator? How come you didn't tune your hyperparameters? Good, clean code and well explained, but it could be better.
@username42 · 3 years ago
What about the cross-validation of the model? How do we do that in such stacking models?
@DataProfessor · 3 years ago
There's a built-in CV option in the StackingClassifier and StackingRegressor functions.
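A minimal sketch of what that can look like (my own illustration, assuming scikit-learn's StackingClassifier; the dataset and estimators are just examples):

    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier, StackingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_iris(return_X_y=True)

    estimators = [('rf', RandomForestClassifier(n_estimators=100)),
                  ('knn', KNeighborsClassifier(3))]

    # cv=5: each base model's out-of-fold predictions (5-fold CV) are used to
    # train the logistic regression meta-learner.
    stack_model = StackingClassifier(estimators=estimators,
                                     final_estimator=LogisticRegression(max_iter=1000),
                                     cv=5)
    stack_model.fit(X, y)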
@username42 · 3 years ago
@@DataProfessor Cool, so is it doing the CV before stacking the model or afterwards? For instance, in your case, is it going to cross-validate the logistic regression model or the previous ones?
@kholoodsh4616 · 2 years ago
It shows an error at stack_model.fit(x_train, y_train): ValueError: The estimator Sequential should be a classifier. How can I fix it?
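(In case it helps anyone hitting the same error: one possible fix, assuming the scikeras package, is to wrap the Keras Sequential model so it exposes the scikit-learn classifier interface that StackingClassifier expects. This is only a rough sketch under those assumptions, with a toy dataset and example layer sizes:)

    from scikeras.wrappers import KerasClassifier
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier, StackingClassifier
    from sklearn.linear_model import LogisticRegression
    from tensorflow import keras

    X, y = load_iris(return_X_y=True)
    n_features, n_classes = X.shape[1], len(set(y))

    def build_dnn():
        # Returns a compiled Keras model; the wrapper calls this for you.
        model = keras.Sequential([
            keras.layers.Input(shape=(n_features,)),
            keras.layers.Dense(32, activation='relu'),
            keras.layers.Dense(n_classes, activation='softmax'),
        ])
        model.compile(loss='sparse_categorical_crossentropy', optimizer='adam')
        return model

    # The wrapper provides fit/predict/predict_proba, so StackingClassifier
    # no longer sees a bare Sequential object.
    dnn = KerasClassifier(model=build_dnn, epochs=10, verbose=0)

    stack_model = StackingClassifier(
        estimators=[('dnn', dnn), ('rf', RandomForestClassifier())],
        final_estimator=LogisticRegression(max_iter=1000))
    stack_model.fit(X, y)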
@t.t.cooperphd5389 · 3 years ago
Beautiful!
@DataProfessor · 3 years ago
Thanks for watching!
@sangnp · 10 months ago
Sir, can you please answer me: is this a two-layer stacking?
@aimanjatt4 · 2 years ago
I got an error that Y_train is not defined; how can I fix this error? Please tell me.
@Karenshow · 2 years ago
Can we use stacking on time series models?
@yaminadjoudi4357 · 3 years ago
Thank you for this. Is stacking the same concept as modular neural networks (MNNs), please?
@aditiarora2128 · 1 year ago
Sir, great explanation... but I am still confused about the formation of the dataset for the meta-learner; different blogs describe different ways of creating the meta-learner's training dataset. For example, you first trained every model individually and then fit the stacked model again on the same training/testing set! But in many blogs I have seen that the input training dataset for the meta-learner should be formed by combining the predicted probabilities of every base model with the actual labels. Please clarify!
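(For reference, a rough sketch of the out-of-fold approach those blogs describe; this is my own illustration with a toy dataset, not code from the video, and it is essentially what scikit-learn's StackingClassifier automates internally when cv is set:)

    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_predict
    from sklearn.neighbors import KNeighborsClassifier

    X_train, y_train = load_iris(return_X_y=True)   # stand-in for your training split
    base_models = [RandomForestClassifier(n_estimators=100), KNeighborsClassifier(3)]

    # Each base model's out-of-fold predicted probabilities become the
    # meta-learner's input features; the original labels stay as the target.
    meta_features = np.hstack([
        cross_val_predict(m, X_train, y_train, cv=5, method='predict_proba')
        for m in base_models
    ])
    meta_model = LogisticRegression(max_iter=1000).fit(meta_features, y_train)

    # The base models are then refit on the full training data so they can
    # produce features for new samples at prediction time.
    for m in base_models:
        m.fit(X_train, y_train)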
@jaredking9291 · 1 month ago
Wouldn’t it be great if you could explain the strengths of each algo and show how they improved the models
@mogamoga4474 · 3 years ago
Sir, when I'm using my own data set, this line "X = data.drop('Activity', axis=1)s" is not working...showing invalid syntax
@DataProfessor · 3 years ago
Hi, you have an extra "s" at the end of your syntax, please delete it.
@mogamoga4474 · 3 years ago
@@DataProfessor thanks sir, it worked.
@mohak9102 · 3 years ago
Please always keep up the great work
@DataProfessor · 3 years ago
Thanks for the support!
@chandu-mu2cg · 2 years ago
But how do we know which meta-learner to choose?
@priyadoesdatascience5141 · 3 years ago
Excellent video! I am going to use it in my model. I have just one question: how can it predict better than a single model? Is it because of the inputs from the different models?
@DataProfessor · 3 years ago
It’s an ensemble of several classifiers, think of it like a team of judges helping to decide together. And yes it uses the predictions from individual classifiers to make a final single prediction
@priyadoesdatascience5141 · 3 years ago
@@DataProfessor Excellent, sir, thank you! If I want to tune the hyperparameters, should I do it individually for each model and then supply them to the meta-algorithm?
@DataProfessor · 3 years ago
@@priyadoesdatascience5141 Yes, exactly. The video shows the use of default parameters for the individual classifiers.
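A rough sketch of that workflow (my own illustration, assuming scikit-learn; the dataset and parameter grids are just examples): tune each base learner first, then hand the tuned estimators to the stack.

    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier, StackingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GridSearchCV
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_iris(return_X_y=True)

    # Tune each base learner separately...
    knn_search = GridSearchCV(KNeighborsClassifier(),
                              {'n_neighbors': [3, 5, 7]}, cv=5).fit(X, y)
    rf_search = GridSearchCV(RandomForestClassifier(),
                             {'n_estimators': [100, 300]}, cv=5).fit(X, y)

    # ...then pass the tuned estimators into the stacking classifier.
    stack_model = StackingClassifier(
        estimators=[('knn', knn_search.best_estimator_),
                    ('rf', rf_search.best_estimator_)],
        final_estimator=LogisticRegression(max_iter=1000))
    stack_model.fit(X, y)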
@joaomaia2898 · 3 years ago
Thanks for this... The best way to do data science is by doing data science... Can you recommend a method for selecting which models to feed into the stack? Only the models that perform better?
@Мага123-о2о · 3 years ago
One of the most useful videos of my life! I surely won't be able to find a more convenient explanation of model stacking. Thank you, Professor! But I have a question: is it possible to visualize feature importances after stacking?
@chadgregory9037 · 3 years ago
I don't think it makes sense to "look after stacking", because each model is like an independent unit that relies on its own features... but the whole is just the sum of the parts. So technically speaking, if you want a convoluted way of evaluating feature importance, you could probably do some kind of statistics based on each prediction and which submodel was most accurate for it, then take, say, the top 3 submodels and compare features across them. I could be entirely wrong here, lol, but in my experience machine learning stuff is quite intuitive. Just by that, I tend to feel it doesn't make sense to look at feature importance after stacking, since the stack relies on each individual model, and for any particular prediction one submodel might be superior to the others. It's a very interesting thing to think about though!
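(One model-agnostic option, offered as my own suggestion rather than anything from the video, is permutation importance computed on the fitted stack as a whole; a rough sketch assuming scikit-learn and a toy dataset:)

    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier, StackingClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

    stack_model = StackingClassifier(
        estimators=[('rf', RandomForestClassifier()), ('knn', KNeighborsClassifier(3))],
        final_estimator=LogisticRegression(max_iter=1000)).fit(X_train, y_train)

    # Shuffle each feature in turn and measure how much the stack's score drops;
    # a bigger drop means the stacked model as a whole leans more on that feature.
    result = permutation_importance(stack_model, X_test, y_test,
                                    n_repeats=10, random_state=42)
    print(result.importances_mean)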
@transferlearning6983 · 3 years ago
Thanks a lot for this helpful video. I was wondering how we can use loaded models (already pre-trained) as estimators?
@MegaBoss1980 · 3 years ago
Hi. Can we do a level-2 meta-model? Any references? Also, can we feed new training data into the meta-model? Any references, if yes?
@praveen2112 · 2 years ago
Sir, here in stacking, how do we know to use a logistic regressor as our final estimator?
@ebenezeragbozo · 2 years ago
because we are dealing with a set of continuous values (i.e. the results from all the models combined)
@praveen2112 · 2 years ago
@@ebenezeragbozo It can be any regressor then, sir, such as a random forest, linear regressor, or decision tree regressor... How can we pick the best one among them as the final estimator?
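(One simple, if brute-force, way to decide, sketched here as my own illustration rather than the video's method: cross-validate the whole stack once per candidate final estimator and keep the best; the same idea applies with StackingRegressor for a regression problem. The dataset and candidates below are just examples.)

    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier, StackingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)
    base = [('rf', RandomForestClassifier()), ('knn', KNeighborsClassifier(3))]

    candidates = {'logistic': LogisticRegression(max_iter=1000),
                  'tree': DecisionTreeClassifier(),
                  'forest': RandomForestClassifier()}

    # Score the full stack with each candidate meta-learner and compare.
    for name, meta in candidates.items():
        stack = StackingClassifier(estimators=base, final_estimator=meta)
        print(name, cross_val_score(stack, X, y, cv=5).mean())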
@ahsanm.5040 · 2 years ago
Dear Professor, could you please help me write Python code to stack PROPHET and SARIMA univariate regressor models to predict better?
@shreyanhce315 · 3 years ago
Hello sir, thanks a lot. I have a doubt, however: if we choose to use our own CSV, which column should be our Y?
@DataProfessor · 3 years ago
Hi, there are many solutions to this, but the easiest is to set Y by using df.Y or df['Y'] (given that your Y variable is called "Y").
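Roughly like this, assuming your label column is named 'Activity' as in the video's dataset ('your_data.csv' is just a placeholder file name):

    import pandas as pd

    df = pd.read_csv('your_data.csv')    # placeholder file name

    X = df.drop('Activity', axis=1)      # every column except the label
    y = df['Activity']                   # the label column becomes Y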
@shreyanhce315 · 3 years ago
@@DataProfessor thank you so much and please make more videos they are very valuable 🤩🤩🤩🤩🤩
@khedirzakaria1916 · 2 years ago
Great video! Why didn't we use hyperparameter tuning?
@vision3309 · 3 years ago
Sir, I am getting an error on every fit call, e.g. knn.fit(X_train, y_train), when I am using my own dataset. Can you please provide a solution? The traceback ends with:

    ValueError Traceback (most recent call last)
    ----> 6 knn.fit(X_train, y_train)  # Train model
    /usr/local/lib/python3.7/dist-packages/sklearn/utils/multiclass.py in check_classification_targets(y)
    --> 198 raise ValueError("Unknown label type: %r" % y_type)
    ValueError: Unknown label type: 'continuous'
@ashwinig8273 · 3 years ago
Hello sir, it was a wonderful video, very informative. Sir, can you please suggest the best denoising network? Can we stack different denoising algorithms in the same manner?
@bambangSiswo · 1 year ago
Good post
@boulaabimeher5891 · 1 year ago
But why doesn't the stack have the best result? I think it was the same as the RF results. So if it takes the best results, why isn't it 1 for all the metrics? Thanks a lot