At 21:37, is it 'error' or 'min_error' in the calculation of alpha? If it is 'error', then why are we calculating 'min_error'?
@patloeber · 4 years ago
Thanks for this catch! It is indeed min_error that we should use: clf.alpha = 0.5 * np.log((1.0 - min_error) / (min_error + EPS)). By the way, if you find errors you can double-check with the code in my repository: github.com/python-engineer/MLfromscratch. Sometimes the code is a little more polished there.
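Wrapped as a tiny standalone function for context (EPS is the same small constant from the formula above, guarding against division by zero):

```python
import numpy as np

EPS = 1e-10  # avoids division by zero when min_error is 0

def stump_alpha(min_error):
    # amount of say for the stump with the lowest weighted error
    return 0.5 * np.log((1.0 - min_error) / (min_error + EPS))

print(stump_alpha(0.3))  # a stump wrong on 30% of the weight -> ~0.4236
```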
@alitaangel8650 · 4 years ago
This video is awesome. I've watched lots of tutorial videos and materials, and found that walking through the basic implementation of an algorithm with viewers is absolutely the best way to help them grasp the gist. Hope you can make more videos like this. Thank you.
@patloeber · 4 years ago
Thanks! I'll try to do a multilayer perceptron in a few weeks...
@amrdel2730 · 4 years ago
Yeah, it's a great idea to let the learner actually see the steps of how the algorithm functions. It's a better way to understand it and to try to reproduce it on your own.
@nftk5413 · 4 years ago
You have been a great help. The way you explain things simply is really great, keep it up. Thanks a lot.
@patloeber · 4 years ago
I'm glad you like it :)
@SunilPatil-is6dn · 4 years ago
Make a video on bagging
@syedsabeeth7996 · 3 years ago
What about multi-class classification instead of binary? What changes should be made in that case?
@Soninmike · 3 years ago
Why, at 23:32, do we divide the weights (w) by the sum of the updated w? As we saw in the formula, it should be the sum of the non-updated w.
@fedorlaputin9119 · 3 years ago
Because the sum of the weights must equal one, and the sum of the freshly updated weights is no longer 1, so we rescale them back into the range from 0 to 1.
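A minimal sketch of that step with made-up numbers, following the video's logic: the exponential update first, then division by the sum of the freshly updated weights:

```python
import numpy as np

w = np.array([0.25, 0.25, 0.25, 0.25])    # current sample weights (sum to 1)
y = np.array([1, -1, 1, -1])              # true labels
predictions = np.array([1, 1, 1, -1])     # stump predictions; sample 2 is wrong
alpha = 0.4

w = w * np.exp(-alpha * y * predictions)  # misclassified samples get heavier
w = w / np.sum(w)                         # divide by the sum of the *updated* w
print(w, w.sum())                         # weights sum to 1 again
```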
@mohammadrahimpoor513 · 2 years ago
Thank you for your informative video!
@mosesmbabaali9381 · 4 years ago
Is this a typo at 12:47? When [X_column < threshold] = -1 and also when [X_column > threshold] = -1? Isn't the else part supposed to be 1?
@patloeber · 4 years ago
No, this is not a typo. We start with an array full of 1s, and this sets some values to -1 depending on the polarity. If you compare with the 2D plots I showed at the beginning, this basically tells us whether the left or the right side of our decision boundary should be negative...
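A minimal sketch of that predict step, following the logic described above (all 1s first, then one side flipped to -1 by the polarity):

```python
import numpy as np

def stump_predict(X_column, threshold, polarity):
    predictions = np.ones(X_column.shape[0])    # start with an array full of 1s
    if polarity == 1:
        predictions[X_column < threshold] = -1  # left of the boundary is negative
    else:
        predictions[X_column > threshold] = -1  # right of the boundary is negative
    return predictions

print(stump_predict(np.array([0.5, 1.5, 2.5]), threshold=1.0, polarity=1))
# [-1.  1.  1.]
```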
@mosesmbabaali9381 · 4 years ago
@@patloeber Would it be okay to have your notes for this tutorial?
@minzhang7409 · 4 years ago
Hi Python Engineer, I have watched almost all of your tutorials. Thanks for your nice work! It helps me a lot. Are you interested in doing a series on algorithms?
@patloeber · 4 years ago
Thanks for watching! For which algorithm?
@minzhang7409 · 4 years ago
@@patloeber I mean computer science algorithms, such as BFS, DFS... (By the way, will you continue to do ML tutorials, such as gradient boost, XGBoost, and so on?)
@karmasince94 · 2 years ago
Is this illustration similar to the calculation of Gini impurity for identifying weak learners?
@arezu7382 · 4 years ago
Hi, thanks for sharing it. How can I use AdaBoost for feature selection? My dataset is images plus a file that includes the features. I need your help.
@tulgaa1114 · 1 year ago
Bro, I've got a question: what is the connection between your code and DDoS attack detection?
@manishgaurav84 · 3 years ago
Hi, amazing explanation. I understand it takes a lot of effort to make these tutorials. I would really appreciate it if you could help us with another tutorial on gradient boosting.
@patloeber · 3 years ago
I'll have a look at this.
@Swan584 · 3 years ago
I get "predictions[X_column > self.threshold] = -1, IndexError: too many indices for array" when I try this code. Can't seem to figure out why.
@redhwanalgabri7281 · 3 years ago
Me too.
@TheZombiebrainz · 2 years ago
Did you ever figure this out?
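A guess at the cause, since the failing data isn't shown: that assignment raises "too many indices for array" when the boolean mask X_column > threshold has more dimensions than the 1D predictions array, which typically means X was not passed as a 2D (n_samples, n_features) array. Coercing the input usually fixes it:

```python
import numpy as np

X = [[1.0], [3.0], [5.0]]         # make sure X is 2D: (n_samples, n_features)
X = np.asarray(X, dtype=float)
X_column = X[:, 0]                # a proper 1D column, shape (3,)
predictions = np.ones(X.shape[0])
predictions[X_column > 2.0] = -1  # 1D mask on a 1D array: no IndexError
print(predictions)                # [ 1. -1. -1.]
```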
@amrdel2730 · 4 years ago
Can we use AdaBoost with a weak learner other than decision stumps, for example a weakened SVM or a neural net? If so, can you show us an example?
@grimonce · 4 years ago
Going to do an XGBoost from scratch video or article? :) That's quite enjoyable to watch. Awesome work!
@patloeber · 4 years ago
Thanks! Yes it is on my list for the future
@finderlandrs7965 · 4 years ago
I see that many AdaBoost approaches are very "hard-coded" to the usage of decision stumps, and unfortunately cannot be applied to other cases. It'd be great if you could show us a more generic way to code it (for other types of weak learners). Cheers!
@patloeber · 4 years ago
Thanks for the suggestion! I'll look into that.
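Not something shown in the video, but one generic pattern is a sketch like the following, assuming the weak learner's fit() accepts a sample_weight argument (as sklearn's DecisionTreeClassifier and SVC do) and that labels are in {-1, +1}:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

EPS = 1e-10  # guards against division by zero when a learner is perfect

def adaboost_fit(X, y, make_learner=lambda: DecisionTreeClassifier(max_depth=1), n_clf=5):
    # any estimator whose fit() accepts sample_weight can be plugged in
    X, y = np.asarray(X), np.asarray(y)
    w = np.full(len(y), 1.0 / len(y))
    clfs, alphas = [], []
    for _ in range(n_clf):
        clf = make_learner()
        clf.fit(X, y, sample_weight=w)
        pred = clf.predict(X)
        error = np.sum(w[pred != y])                    # weighted misclassification
        alpha = 0.5 * np.log((1.0 - error) / (error + EPS))
        w = w * np.exp(-alpha * y * pred)               # up-weight the mistakes
        w = w / np.sum(w)
        clfs.append(clf)
        alphas.append(alpha)
    return clfs, alphas

def adaboost_predict(X, clfs, alphas):
    # weighted vote of all weak learners
    return np.sign(sum(a * clf.predict(np.asarray(X)) for a, clf in zip(alphas, clfs)))
```

Swapping make_learner for, say, lambda: SVC() would give a boosted SVM without touching the rest of the loop.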
@richkhid1298 · 4 years ago
Great video. Can I use this tutorial to predict students' performance?
@patloeber · 4 years ago
I don't know what your dataset is, but I guess you can
@richkhid1298 · 4 years ago
@@patloeber I'm using a custom dataset and I can't figure out how to get the X and y variables from the dataset.
@richkhid1298 · 4 years ago
@@patloeber Help will be appreciated... Thank you.
@revolutionarydefeatism · 3 years ago
@@richkhid1298 X is your features, like students' marks, their performance, etc., and y is the thing you want to predict, like the final GPA.
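A concrete (purely hypothetical) example with pandas; students.csv and final_gpa are made-up names to adapt to your own dataset:

```python
import pandas as pd

df = pd.read_csv("students.csv")           # hypothetical file name
X = df.drop(columns=["final_gpa"]).values  # features: every column except the target
y = df["final_gpa"].values                 # target: the value you want to predict
```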
@ivanong2857 · 3 years ago
Can you use AdaBoost on an Arduino Nano?
@Ivanskiful · 2 years ago
It is possible to improve the runtime a lot by simply removing those thresholds of the decision stumps that have a neighboring threshold with the same sign, before running the greedy search. Why? Example where we have 4 thresholds: + - - +. Splitting as follows: + | - - + is obviously better than + - | - +.
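A sketch of that pruning idea (my own illustration, not from the video), assuming samples sorted along one feature: only midpoints where the label flips sign are kept as candidate thresholds:

```python
import numpy as np

def candidate_thresholds(X_column, y):
    # sort the samples along this feature, then keep only the midpoints
    # where the label changes sign; a boundary between two same-sign
    # neighbors is never better than the adjacent sign change
    order = np.argsort(X_column)
    x_sorted, y_sorted = X_column[order], y[order]
    flip = y_sorted[:-1] != y_sorted[1:]
    return (x_sorted[:-1][flip] + x_sorted[1:][flip]) / 2

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([1, -1, -1, 1])        # the + - - + example from above
print(candidate_thresholds(x, y))   # [1.5 3.5]: the two sign boundaries
```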
@ray811030 · 3 years ago
Could you add gradient boosting?
@HuyNguyen-fp7oz · 4 years ago
Great video!
@patloeber · 4 years ago
Thanks :)
@hamzahal-qadasi1771 · 2 years ago
If you made your algorithm with class 0 and class 1, it would be easier to understand. But anyway, thank you for this informative video.
@souviktewary8913 · 4 years ago
Hi, I have a question. I am using AdaBoost, but beforehand, in a separate program, I already built several decision stumps which I want to use in the AdaBoost algorithm; I have stored them all together in a CSV file so that I can use them as a list. My question: here you use prediction values of -1 or 1 for the AdaBoost training part. What if the predict_proba values obtained from the decision trees were used in AdaBoost instead? For example, my decision tree list consists of predict_proba values for class 0 and class 1 (such as patient 1 having a probability of 0.3 of being in class 0 and 0.7 of being in class 1).
@patloeber · 4 years ago
Why not just convert your 0s to -1s? np.where(x == 0, -1, x)
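And if what you have are predict_proba outputs rather than hard labels, one option (assuming a column of P(class 1) values, which is my guess at your data layout) is to threshold at 0.5 first:

```python
import numpy as np

proba_class1 = np.array([0.7, 0.2, 0.9])       # hypothetical P(class 1) per patient
labels = np.where(proba_class1 >= 0.5, 1, -1)  # map to the {-1, +1} labels AdaBoost expects
print(labels)                                  # [ 1 -1  1]
```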