7.2 Majority Voting (L07: Ensemble Methods)

13,262 views

Sebastian Raschka

Sebastian's books: sebastianrasch...
This video discusses one of the most basic cases of model ensembles: majority voting. Using a toy example (making certain assumptions), we see why majority voting can be better than using a single classifier alone.
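To make the toy example concrete, here is a minimal sketch in Python (my own illustration, not code from the video), assuming n independent base classifiers that all share the same error rate eps, as in the lecture:

    # Ensemble error of a majority vote over n independent base classifiers,
    # each with error rate eps: the ensemble errs when more than half err.
    from scipy.stats import binom

    def ensemble_error(n_classifiers, eps):
        k_min = n_classifiers // 2 + 1  # smallest number of wrong votes that flips the majority
        return sum(binom.pmf(k, n_classifiers, eps) for k in range(k_min, n_classifiers + 1))

    # 11 base classifiers, each with a 25% error rate -> ensemble error of roughly 3.4%
    print(ensemble_error(n_classifiers=11, eps=0.25))

The independence assumption is what makes the ensemble error so much lower than the individual 25%; with correlated classifiers the benefit shrinks.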
-------
This video is part of my Introduction to Machine Learning course.
Next video: • 7.3 Bagging (L07: Ense...
The complete playlist: • Intro to Machine Learn...
A handy overview page with links to the materials: sebastianrasch...
-------
If you want to be notified about future videos, please consider subscribing to my channel: / sebastianraschka

Comments: 25
@sassk73 (1 year ago)
So grateful that I came across this channel through this video! I love how you explain the concepts so well and make them easy to understand by staying so calm and composed and showcasing easy-to-understand examples! Please continue the great work :)
@SebastianRaschka (1 year ago)
Wow, thanks so much for saying this. It feels really great to hear this!
@cheukchitse8692 (9 months ago)
As a beginner in this field, I think your videos make the whole process quite clear and easy to understand. Thank you so much, Professor Raschka. Keep creating excellent content, please :)
@nak6608 (2 years ago)
omg you have a YouTube channel! I've been working through your Python ML book for months now. I've been struggling on page 228, so I went to YouTube. Five minutes into this video I was like, "this guy is using all the same examples from the book" lol, and to my surprise it was you! This is amazing! I'll have to go back and watch some of your other lectures!
@SebastianRaschka (2 years ago)
haha small world! The other lectures are a bit different, but yeah, Lecture 7 is closely based on my book :)
@talsveta (2 years ago)
Thank you so much for sharing the lectures. They are really helpful!
@SebastianRaschka (2 years ago)
Thanks a lot for saying this! It is very motivating to hear and encourages me to produce more videos in the future. (Spoiler: I am planning to make some on Bayesian methods in ML, i.e., Bayes optimal classifiers and naive Bayes, later this year :) )
@talsveta (2 years ago)
@SebastianRaschka sounds great :)
@mehdimaboudi2703 (3 years ago)
Many thanks for sharing the lecture video. Page 16: it seems that instead of the ceiling function, k > ceil(n/2), the floor function should be used: k > floor(n/2). If we have 15 base estimators, then k > 7.
@SebastianRaschka (3 years ago)
oh yes, good catch
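As a quick sanity check of that threshold, a tiny sketch (my own addition, assuming n = 15 base estimators as in the comment above):

    import math

    n = 15
    # A strict majority needs more than floor(n / 2) votes, i.e. at least floor(n / 2) + 1 of them.
    k_min = math.floor(n / 2) + 1
    print(k_min)  # 8, consistent with k > 7 for 15 base estimators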
@FarizDarari (1 year ago)
Wonderful illustrations, thanks a lot!
@wellkamp (3 years ago)
Many thanks for the great explanation!
@rohitgarg776 (2 years ago)
Very nice explanation
@Jonathanwu_tech (2 years ago)
very helpful video!
@SebastianRaschka (2 years ago)
Glad to hear!
@abdireza1298 (2 years ago)
Professor Raschka, please allow me to ask: is there any theoretical background for which algorithms we can and cannot combine into a voting classifier? Say we built models from the same X and y using
- logistic regression with a lasso penalty,
- logistic regression with an elastic net penalty,
- decision trees,
- random forest,
- AdaBoost,
- XGBoost,
each showing different accuracy results from stratified k-fold cross-validation. Is it acceptable to create a soft voting classifier (or weighted voting classifier) from the decision tree model and the random forest (considering that a random forest is itself already a combination and soft vote of several decision trees)? Is it acceptable to create a voting classifier consisting of XGBoost and AdaBoost? Is it acceptable to create a voting classifier consisting of logistic regression with a lasso penalty and another logistic regression with an elastic net penalty (considering that the elastic net is already a combination of lasso and ridge)? I understand that we are free to do anything with our data, and I believe combining similar models will at least help narrow the standard deviation of the cumulative average across the cross-validation splits. But is it theoretically acceptable? Thank you for your patience. I am sorry for the beginner question. Good luck to everyone.
@SebastianRaschka (2 years ago)
That's a good question. You can put literally anything into the voting classifier. The caveat is that if you have too many not-so-good models, they will drag down the performance of the ensemble. But this is essentially a hyperparameter tuning problem: you try combining different models and see what works. I recently added Example 8 here (rasbt.github.io/mlxtend/user_guide/classifier/EnsembleVoteClassifier/#example-8-optimizing-ensemble-weights-with-nelder-mead) to optimize the weighting of the individual models in the voting classifier. (Btw., later in this course I also talk about stacking, which is essentially the same concept, except that we use a logistic regression classifier to weight the classifiers' predictions.)
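For illustration, a minimal sketch of such a heterogeneous soft-voting ensemble with mlxtend's EnsembleVoteClassifier (the dataset, base models, and weights below are placeholder choices, not taken from the video):

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score
    from mlxtend.classifier import EnsembleVoteClassifier

    X, y = load_iris(return_X_y=True)

    clf1 = LogisticRegression(max_iter=1000)
    clf2 = DecisionTreeClassifier(max_depth=3, random_state=1)
    clf3 = RandomForestClassifier(n_estimators=100, random_state=1)

    # Soft voting averages the predicted class probabilities; the optional
    # weights give some models more influence than others.
    eclf = EnsembleVoteClassifier(clfs=[clf1, clf2, clf3], voting='soft', weights=[2, 1, 1])
    print(cross_val_score(eclf, X, y, cv=5).mean())

Whether such a combination helps is an empirical question; cross-validation, as above, is the usual way to check.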
@charukshiwijesinghe5049 (2 years ago)
Thank You
@blogger.powerpoint_expert (2 months ago)
How can I refer to this material if I want to cite it in my work?
@diniebalqisay2658 (1 year ago)
Hello sir, may I know how a voting classifier can be implemented with a CNN-based approach?
@SebastianRaschka (1 year ago)
It depends on how you implement the CNN. The simplest way would be to collect the class label predictions and compute the mode.
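A minimal, framework-agnostic sketch of that idea (my own addition; it assumes each trained CNN exposes a predict method that returns integer class labels, and the names cnn1, cnn2, cnn3, X_test are hypothetical):

    import numpy as np

    def majority_vote(cnn_models, X):
        # Stack the label predictions of all models: shape (n_models, n_samples)
        all_preds = np.stack([model.predict(X) for model in cnn_models])
        # For each sample, return the most frequent class label (the mode)
        return np.apply_along_axis(lambda votes: np.bincount(votes).argmax(), 0, all_preds)

    # e.g. ensemble_labels = majority_vote([cnn1, cnn2, cnn3], X_test)

If the CNNs output probabilities instead of labels, averaging those probabilities (soft voting) is a common alternative.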
@saumyashah6622 (3 years ago)
Do all the classifiers have the same weight (in voting)? If yes, then a classifier with genuinely high accuracy would have an equally weighted vote compared to a worse classifier. Please answer me. I am stuck, sir. Help.
@SebastianRaschka (3 years ago)
No, they don't have to have the same weight. When you look at the code around min 20:00, there is the following line: eclf = EnsembleVoteClassifier(..., weights=[1, 1, 1]). You can change the weights to give certain classifiers a higher weight than others. How to find good weights? That would be another hyperparameter to tune ...
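A minimal sketch of treating those weights as a tunable hyperparameter via a brute-force grid search (the base classifiers and the weight grid below are illustrative choices, not taken from the lecture):

    from itertools import product
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.naive_bayes import GaussianNB
    from sklearn.model_selection import cross_val_score
    from mlxtend.classifier import EnsembleVoteClassifier

    X, y = load_iris(return_X_y=True)
    clfs = [LogisticRegression(max_iter=1000), KNeighborsClassifier(), GaussianNB()]

    best_score, best_weights = 0.0, None
    for w in product([1, 2, 3], repeat=3):  # all 27 weight combinations
        eclf = EnsembleVoteClassifier(clfs=clfs, voting='soft', weights=list(w))
        score = cross_val_score(eclf, X, y, cv=5).mean()
        if score > best_score:
            best_score, best_weights = score, w

    print(best_weights, best_score)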
@vikramxD (3 years ago)
There seems to be an issue with the zip function argument not being iterable in the ensemble voting classifier method; does anyone know how to solve it?
@SebastianRaschka (3 years ago)
Hm, are you using the one from MLxtend? Can you share a code snippet of what's not working, e.g., via a GitHub Gist (gist.github.com)?