We are near 250k. Please do subscribe to my channel and share it with all your friends. :)
@_curiosity...8731 (4 years ago)
Krish Naik, please make a video on decision tree pruning with the mathematical details.
@ArunKumar-sg6jf (4 years ago)
LightGBM is missing.
@yashkhandelwal3877 (4 years ago)
@@tamildramaclips8548 Depends on your college. Which college with these branches are you talking about?
@yashkhandelwal3877 (4 years ago)
@@tamildramaclips8548 You should definitely go with ECE. Since AI & DS is a very new branch, there is no surety about how your college would groom students in it. Also, your college is not a national-level college, so you shouldn't take the risk. That's just my suggestion.
@hirdhaymodi (4 years ago)
Sir, could you make a video on the roadmap to becoming a machine learning engineer?
@animeshsharma7332 (4 years ago)
Man, this guy is now appearing in my dreams. Who else has been binge-watching his channel for months?
@gauravpatil2926 (3 years ago)
😂😂
@thepresistence5935 (3 years ago)
I am learning data science from him.
@geekyprogrammer4831 (3 years ago)
Same here 😂😂😂 But this man should be given a Nobel Prize for inspiring the present and future generations!
@gandhalijoshi9242 (3 years ago)
I have started following his machine learning series, and it's very nice. I am also doing a data science course simultaneously. His videos are helping a lot.
@shaelanderchauhan1963 (2 years ago)
HAHAHAHA! You are being haunted by Ghost Naik.
@bhavikdudhrejiya852 (3 years ago)
Great video, understood it in depth. I have jotted down the processing steps from this video:
1. We have the data.
2. Construct the base learner.
3. The base learner outputs probability 0.5; compute the residuals.
4. Construct the decision tree:
Similarity weight = (∑ Residuals)² / (∑ P(1-P) + lambda)
- Compute the similarity weight of the root node
- Compute the similarity weight of the left decision node and its leaf nodes
- Compute the similarity weight of the right decision node and its leaf nodes
Gain = Leaf1 similarity weight + Leaf2 similarity weight - Root node similarity weight
- Compute the gain of the root node with the left decision node and its leaf nodes
- Compute the gain of the root node with the right decision node and its leaf nodes
- Compute the gain for the other combinations of features as decision and leaf nodes
- Select the root node, decision nodes and leaf nodes with the highest gain
5. Predicted probability = Sigmoid(log(odds) of the base learner's prediction + learning rate × (output of the decision tree))
6. New residual = actual value - predicted probability
7. Repeat steps 2 to 6; at the end of the iterations the residuals will be minimal.
8. Run test predictions on the boosted model from the iteration with minimal residuals.
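To make steps 3–6 concrete, here is a rough NumPy sketch of one boosting round on a toy binary target, assuming λ = 1, a learning rate of 0.3, and a single-leaf tree (all values are illustrative, not from the video):

```python
import numpy as np

def similarity_weight(residuals, prev_p, lam=1.0):
    # (sum of residuals)^2 / (sum of p*(1 - p) + lambda)
    return np.sum(residuals) ** 2 / (np.sum(prev_p * (1 - prev_p)) + lam)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

y = np.array([1.0, 0.0, 1.0, 1.0])       # toy labels
prev_p = np.full(4, 0.5)                 # step 3: base learner predicts 0.5
residuals = y - prev_p                   # step 3: residuals

# Step 4: candidate splits are scored by gain = left + right - root similarity
root_sim = similarity_weight(residuals, prev_p)

# Step 5: a leaf's output drops the square from the similarity numerator
leaf_output = np.sum(residuals) / (np.sum(prev_p * (1 - prev_p)) + 1.0)
log_odds = np.log(0.5 / 0.5)             # base prediction in log-odds space = 0
new_p = sigmoid(log_odds + 0.3 * leaf_output)

# Step 6: new residuals feed the next round
new_residuals = y - new_p
print(root_sim, new_p, new_residuals)
```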
@manojsamal7248 (3 years ago)
What if there are multiple classes in the output (0, 1, 2, 3)? The average will be 1.5, but that is more than 1, so it can't be the probability we feed to the base learner like 0.5. What should we do in that case?
@pawanthakur-df2yk (3 years ago)
Thank you🙏
@manojrangera (2 years ago)
@@manojsamal7248 Yes bro, same question... did you get the answer to this? Please let me know.
@manojsamal7248 (2 years ago)
@@manojrangera not yet bro
@manojrangera (2 years ago)
@@manojsamal7248 I was thinking that if there are 4 classes the initial probability will be 1/4 = 0.25, and if there are 5 then 1/5 = 0.20, because we are calculating a probability. I will confirm this, but I think it's right...
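For what it's worth, this matches how XGBoost itself handles multiclass targets (objective multi:softprob): scores start uniform, so each class's initial probability is 1/K, and one tree is built per class per round against one-hot residuals. A rough sketch under those assumptions:

```python
import numpy as np

def softmax(scores):
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

K = 4
y = np.array([0, 2, 1, 3])           # toy multiclass labels
scores = np.zeros((len(y), K))       # uniform start -> every probability is 1/K
probs = softmax(scores)              # all entries are 0.25, as guessed above
residuals = np.eye(K)[y] - probs     # one-hot labels minus probabilities,
                                     # one residual column per class tree
```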
@johnnyfry2 (4 years ago)
Great work Krish. Don't ever lose your passion for teaching, you're a natural. I appreciate how you simplify the details.
@yashkhandelwal3877 (4 years ago)
Hats off to you Krish for doing so much hard work so that we can learn each and every concept of ML and data science!
@nareshjadhav4962 (4 years ago)
I was desperately waiting for this for the last 7 months... now I will complete the machine learning playlist 💥 Thank you Krish, God bless you 😀
@joeljoseph26 (3 years ago)
Guys, please watch out for the mistake at 16:10: for credit >50, (G,B) = {-0.5, 0.5}; it's not three residuals, there are only two. The gain for the right side is 0.67. However, you still chose the right node. Btw, your teaching is very simple and understandable. Keep doing more videos. Love your content.
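For reference, plugging just those two residuals into the similarity formula from the video (taking λ = 0 for illustration) gives

$$\text{Similarity} = \frac{\left(\sum_i r_i\right)^2}{\sum_i p_i(1-p_i) + \lambda} = \frac{(-0.5 + 0.5)^2}{0.25 + 0.25 + 0} = 0.$$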
@moindalvs (2 years ago)
Thanks a lot for everything you do. You turned off the fan so that it wouldn't interrupt the audio, and you were sweating and breathing heavily. For all this trouble and hardship you deserve more. I wish you success and a healthy, prosperous life.
@amitsahoo1989 (4 years ago)
Hi Krish, I have been watching your videos for the last few months and they have helped me a lot in my interviews. A special thanks from my end. In this video, at 10:54, 0.33 - 0.14 should be 0.19.
@gshan994 (4 years ago)
Yes indeed. Btw, were you a fresher when you went for the interview?
@sandipansarkar9211 (4 years ago)
Very, very important for cracking product-based companies. Great explanation too. Thanks.
@mohitjoshi4209 (3 years ago)
So much to learn from a single video, hats off to you sir
@felixzhao9070 (2 years ago)
This is pure gold! Thanks for the tutorial!
@yashkhant5874 (4 years ago)
Great explanation sir... keep contributing to the community. We love your videos, and most importantly, the way you share your experience is the best thing.
@sajidchoudhary1165 (4 years ago)
I am the happiest person seeing these videos, thank you.
@marijatosic217 (4 years ago)
This was amazing, I literally feel like I'm sitting in your class at a Uni.
@mrzaidivlogs (4 years ago)
How do you stay so focused and strong, and learn everything so efficiently?
@yasharya1066 (1 year ago)
The nation wants to know 🙃
@HistoryUnlocked-fi3er (1 month ago)
Willpower
@dhruvenkalpeshkumarparvati4874 (4 years ago)
Just what I was waiting for 🔥
@abhishek_maity (4 years ago)
Great... clear explanation!! Thanks a lot 😄
@narendradamodardasmodi3286 (4 years ago)
Thanks, Krish, for building the nation towards its AI journey.
@ajayrana4296 (4 years ago)
Idiot, at least give us jobs too.
@shashwattiwari4346 (3 years ago)
"Day 1 or one day, your choice." Thanks a lot Krish!
@islamicinterestofficial (3 years ago)
what does this mean?
@nitinahlawat2479 (4 years ago)
Truly the Bhishma Pitamah of data science 🙏 Respect you a lot 👍
@frozen1860 (4 years ago)
Sir, the way you teach us is better than any varsity classes. Please do a practical implementation of XGBoost, sir; it will be very helpful for us...
@MrPetarap (5 months ago)
Lovely explanation!
@antonym9744 (4 years ago)
Amazing!!!
@ShahnawazKhan-xl6ij (4 years ago)
Great
@vishnukv6537 (3 years ago)
Sir, you are so pleasant and amazing at teaching.
@mohamedgaal5340 (1 year ago)
Thank You, Krish. Well explained!
@raneshmitra8156 (4 years ago)
Super explanation
@Mazree152 (3 months ago)
16:33 In my opinion there is a mistake in the calculations. It should be computed for (>50K), but the G & B from <=50K are also included.
@adityagupta8901 (2 days ago)
I also noticed that; I guess it is a mistake.
@amitupadhyay6511 (3 years ago)
It's tough to understand on the first attempt, but thanks for giving the outline so clearly. I will watch it until I understand it and can implement it from scratch.
@nukulkhadse5253 (4 years ago)
Hey Krish, you should also make a video about Similarity Based Modelling (SBM) and the Multivariate State Estimation Technique (MSET). They have actually been widely used in industry since the 90s; there are many research papers to validate that. They also calculate similarity weights and residuals.
@ishitachakraborty1362 (4 years ago)
Please do an in-depth maths intuition video on CatBoost.
@BatBallBites (4 years ago)
agree
@thisismuchbetter2194 (4 years ago)
I don't know why people don't talk about CatBoost and LightGBM much...
@stabgan (4 years ago)
Congratulations on your new job at E&Y. I checked your profile on LinkedIn; very impressive.
@muhammadsaqib2961 (3 years ago)
Quite amazing and clear explanation
@datakube3053 (4 years ago)
thank you so much
@sheikhshah2593 (1 year ago)
Great sir🔥🔥
@mihirjha1486 (1 year ago)
Loved It. Thank You!
@Kiddzzvideos (1 year ago)
Hi, I have one doubt: for the p(1-p) + lambda in the denominator of the similarity weight, if the residual is -0.5, should it be 0.5(1-(-0.5)) = 0.75? Or does the negative sign not matter?
@navyamokmod1317 (8 months ago)
In the denominator we don't use the residuals; p is the probability, which is 0.5 here.
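In other words, the denominator is built from the previous round's predicted probabilities, not from the residuals. A tiny sketch for a leaf holding residuals {-0.5, 0.5}, with previous probability 0.5 and an assumed λ = 1:

```python
residuals = [-0.5, 0.5]
prev_p = [0.5, 0.5]        # previous predicted probabilities, not residuals
lam = 1.0

numerator = sum(residuals) ** 2                        # (-0.5 + 0.5)^2 = 0.0
denominator = sum(p * (1 - p) for p in prev_p) + lam   # 0.25 + 0.25 + 1 = 1.5
similarity = numerator / denominator                   # 0.0
```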
@nothing8919 (4 years ago)
Thank you a lot sir, you are my best teacher.
@ppersia18 (4 years ago)
1st view 1st like krish sir op
@Amansingh-tr1cf (3 years ago)
the most awaited video
@ayanmullick9202 (2 years ago)
You are a legend, sir.
@bayazjafarli3867 (2 years ago)
Hi, thank you very much for this explanation! Great video! But I have one question: at 19:39 you first wrote 0, which is the probability for the first row, and then added learning rate * similarity weight. My question is, instead of 0 shouldn't we write 0.5, the average probability of the first (base) model, i.e. 0.5 + learning rate * similarity? Please correct me if I am wrong.
@rutvikvatsa767 (1 year ago)
The base model's contribution comes after we put the first probability (0.5) through log(odds), as shown at the bottom right corner. Hence it is 0.
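That is, the boosting update happens in log-odds space, so a base prediction of p = 0.5 contributes a score of exactly 0:

$$\log\!\left(\frac{0.5}{1 - 0.5}\right) = \log(1) = 0, \qquad p_{\text{new}} = \sigma\big(0 + \eta \cdot \text{leaf output}\big)$$

where σ is the sigmoid and η is the learning rate.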
@modhua4497 (3 years ago)
Good! Could you make a video explaining the difference between XGBoost and gradient boosting? Thanks.
@govind1706 (4 years ago)
Finally!!!!
@jainitafulwadwa8181 (3 years ago)
The similarity score is not the output value; there is a different formula for calculating the output from the residuals. You just have to remove the square in the numerator of the similarity score function.
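For reference, the two formulas differ only in whether the residual sum is squared (λ is the regularization term):

$$\text{Similarity} = \frac{\left(\sum_i r_i\right)^2}{\sum_i p_i(1-p_i) + \lambda}, \qquad \text{Output value} = \frac{\sum_i r_i}{\sum_i p_i(1-p_i) + \lambda}$$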
@arshaachu6351 (10 months ago)
Are there any detailed videos about the AdaBoost regressor and the gradient boosting classifier? Please help me.
@user-rw6iw8jg2t (16 days ago)
XGBoost is not just fire, it's a wildfire of an ML algorithm 🔥 The only thing is that it needs a bit of careful hyperparameter tuning to prevent overfitting. Random Forest is a top-notch algo as well.
@saptarshisanyal4869 (2 years ago)
StatQuest Lite!!!! Fantastic effort though.
@IamGaneshSingh (3 years ago)
This video is "pretty much important!"
@mohittahilramani9956 (2 years ago)
Seriously, thank you so much.
@RahulKumar-hb8cl (3 years ago)
Sir, how will the probability value (0.5 for the base tree) be updated in each tree?
@jamalnuman (11 months ago)
great
@alokranjanthakur5746 (4 years ago)
Sir, can you suggest some NLP projects using Python? I mean with live implementation.
@ajiths1689 (4 years ago)
What should the new probability value be when we move on to the second decision tree?
@brunojosebertora7935 (3 years ago)
Krish, I have a question: when you compute the output value you are taking the similarity weight. I think that is incorrect for classification, isn't it? To compute the output you shouldn't square the residuals. THANKS for the video!!
@satwikram2479 (4 years ago)
Finally❤
@davidd2702 (3 years ago)
Thank you for your fabulous video! I enjoyed it and understood it well! Could you tell me whether the output from the XGB classifier gives 'confidence' in a specific output (allowing you to assign a class), or is it functionally equivalent to the statistical probability of an event occurring?
@vishaldas6346 (4 years ago)
Hi Krish, I have a doubt: can you please confirm whether XGBoost is an ensemble technique or not, since we import it from its own library rather than from the sklearn library?
@krishnaik06 (4 years ago)
It is a separate library.
@vishaldas6346 (4 years ago)
@@krishnaik06 but is it an ensemble technique?
@gshan994 (4 years ago)
@@vishaldas6346 What is XGBoost and where does it fit in the world of ML? Gradient boosting machines fit into a category of ML called ensemble learning, a branch of methods that train and predict with many models at once to produce a single superior output.
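So it is an ensemble (boosting) technique; it just ships as its own package with a sklearn-compatible API. A minimal usage sketch, assuming xgboost is installed and using made-up toy data:

```python
import numpy as np
from xgboost import XGBClassifier   # separate library, sklearn-style estimator

X = np.random.rand(100, 4)             # hypothetical features
y = np.random.randint(0, 2, size=100)  # hypothetical binary labels

model = XGBClassifier(n_estimators=100, learning_rate=0.3, reg_lambda=1.0)
model.fit(X, y)                        # fits an ensemble of boosted trees
probs = model.predict_proba(X)[:, 1]   # predicted probabilities
```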
@saimanohar3363 (3 years ago)
Great teacher. Just a doubt: can't we take credit as the first node?
@REHAN-ANSARI- (2 years ago)
XGBoost is the secret of my energy.
@nandangupta727 (3 years ago)
Thank you so much for such a step-by-step explanation. But I have a quick question: what would we do if we had a continuous variable rather than a categorical one? Would we proceed as we do in a decision tree for continuous features, or is XGBoost not recommended for continuous features?
@thepresistence5935 (3 years ago)
I think we use all the models and take the result by comparing them; I think that would be better.
@subratakar4392 (2 years ago)
For continuous data, like salary, it will first sort that column in ascending order, then create an average for each pair of consecutive values. Each average is taken as a candidate splitting condition, and the one with the highest gain is chosen for the split. For example, suppose you have 5 salaries: 10, 20, 30, 40, 50; the first candidate split would be on salary < 15, as in the sketch below.
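A small sketch of that candidate-split scan, reusing the similarity-weight idea from the video (λ = 1 and a previous probability of 0.5 everywhere are assumed for simplicity; the residuals are made up):

```python
import numpy as np

def similarity(residuals, p=0.5, lam=1.0):
    # (sum of residuals)^2 / (sum of p*(1 - p) + lambda)
    return np.sum(residuals) ** 2 / (len(residuals) * p * (1 - p) + lam)

salary = np.array([10, 20, 30, 40, 50])
residuals = np.array([-0.5, 0.5, 0.5, -0.5, 0.5])   # toy residuals

order = np.argsort(salary)                  # 1) sort by the feature
s, r = salary[order], residuals[order]
thresholds = (s[:-1] + s[1:]) / 2           # 2) midpoints: 15, 25, 35, 45

best = max(                                 # 3) keep the split with max gain
    thresholds,
    key=lambda t: similarity(r[s < t]) + similarity(r[s >= t]) - similarity(r),
)
print(best)
```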
@gardeninglessons3949 (3 years ago)
Sir, please make a video on the differences between all the boosting techniques; they are elaborate and I couldn't find out the exact differences.
@accentureprep1092 (2 years ago)
Hi @krish, first of all kudos to you, great video. Can you tell me how XGBoost is different from the Apriori algorithm? Does it cover every combination while creating the tree, as Apriori would for the same problem statement? Thanks, love your work. Keep rocking.
@deepsarkar2003 (3 years ago)
Can anyone explain the video at 21:38? (0 - 0.6) = -0.6, right, not 0.4? Or did I get it wrong? Please advise.
@sudiptodas6272 (3 years ago)
I have the same question.
@Jaydonj (21 days ago)
yeaaa me toooooooooooo....helpppwwwww meeee!! arghhh
@pulakdas3216 (4 months ago)
It started well but I got lost by the end of the video. Can you please prepare something simpler and show that, as you did for AdaBoost and gradient boosting?
@ashwinkrishnan4285 (4 years ago)
Hi Krish, I have a doubt here. All the input features (salary, credit) are categorical, so we build the decision tree easily based on the categories. Suppose we get the salary feature as continuous values like 30k, 50k, and not as categories like <=50k; how will the decision tree split be done?
@shubhambavishi5982 (4 years ago)
Check out the decision tree algorithm video in the ML playlist. In it, he has explained how to handle numerical features.
@vishaldas6346 (4 years ago)
Hi Ashwin, for numerical features you have to set a threshold between each pair of values by taking the average of adjacent values. For example, for 30k and 40k you take (30+40)/2, i.e. 35k, and create a decision tree split on values less than 35k versus greater than or equal to 35k.
@sohinimitra7559 (4 years ago)
Can you please do a video on feature selection approaches? Especially the use of Mutual Information. Thanks. Great videos!!
@tarabalam9962 (11 months ago)
Please upload a video on LightGBM.
@VinodRS01 (4 years ago)
Sir, how does the model choose which similarity weight should be multiplied by the learning rate? Thank you sir, you are doing great by helping us 🙂
@vishaldas6346 (4 years ago)
It's not the similarity weight that is multiplied, it's the output of the leaf node. The similarity weight is used to calculate the gain for splitting the nodes of the decision tree.
@edwinokwaro9944 (1 year ago)
Is the formula for the similarity score of the root node correct, given that this is a classification problem?
@amitshende5161 (4 years ago)
It's lambda as the hyperparameter, which you referred to as alpha...
@ManoharKumar-cw3ed (3 years ago)
Thank you sir! I have a question: how do we pick the initial probability value between 0 and 1?
@ArunKumar-sg6jf (4 years ago)
How do you determine the value of the probability in the base model?
@SRAVANAM_KEERTHANAM_SMARANAM (4 years ago)
Dear Krish, we have a course on machine learning. Around 40,000 people subscribe to this course, but since they don't understand it, many of them drop out in the middle. Why don't you start creating videos parallel to what is taught in the class and make a playlist for it? That way you could easily get many views in one shot. Are you interested in this?
@datakube3053 (4 years ago)
250k coming soon
@durjoybhattacharya250 (1 year ago)
How do you decide on the Learning Rate parameter?
@dulangikanchana8237 (3 years ago)
Can you do a video on the difference between statistical models and machine learning models?
@adityarajora7219 (2 years ago)
How is the probability going to change? Please explain!!!!
@hemantsharma7986 (3 years ago)
Aren't gradient boosting and XGBoost the same, with minor differences?
@KOTESWARARAOMAKKENAPHD (1 year ago)
Is there any value other than 0 for the hyperparameter in the XGBoost algorithm?
@Acumentutorial (2 years ago)
What is the role of lambda in the similarity weight here?
@ajayrana4296 (4 years ago)
What is the similarity weight, why do we use it, what is its advantage, and what is the intuition behind it?
@mainakray6452 (3 years ago)
Is the max_depth in XGBoost 2 for each tree? Please answer.
@titangamezone4379 (4 years ago)
Sir, please make a video on gradient boosting for classification problems.
@seniorprog9144 (4 years ago)
Sir Krish, do you have code that deals with more than one target (y1, y2, ...), i.e. Y with two or three columns (two targets, three targets)?
@SwethaSubramanian-l7v (2 months ago)
Can someone please clarify the log of odds part? Similarity weight = 1 means that's the output, but to compute it we calculate the base model's output with respect to a probability of 0.5. Why?
@adireddy694 (4 years ago)
How have you calculated the probability? How did you get 0.5?
@RishikeshGangaDarshan (3 years ago)
When training, we first calculate the residuals and create a decision tree, but here we can't see how it classified the points; it only says what happens when a new data point comes in. I am confused by this.
@KOTESWARARAOMAKKENAPHD (1 year ago)
What is the need for the log(odds) function?
@stabgan (4 years ago)
23:00 That's lambda, not alpha; please correct that.
@belxismarquez4447 (2 years ago)
Please subtitle the videos in Spanish. There is a community that speaks Spanish and listens to your videos
@swethanandyala (1 year ago)
Hi sir @Krish Naik, what will the initial probability be when there are multiple classes? If anyone knows the answer, please share...
@sachinjaisar5776 (2 years ago)
Shouldn't your similarity weight be 1? Shouldn't the residuals be squared first before adding them up?
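For what it's worth, the similarity score squares the sum of the residuals rather than summing the squared residuals, which is why a leaf holding {-0.5, 0.5} can score 0 instead of 1:

$$\frac{\left(\sum_i r_i\right)^2}{\sum_i p_i(1-p_i) + \lambda} \;\neq\; \frac{\sum_i r_i^2}{\sum_i p_i(1-p_i) + \lambda}$$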
@pratikbhansali4086 (4 years ago)
You didn't upload the gradient boosting classification videos, i.e. parts 3 and 4 of gradient boosting.
@mohana4179 (2 years ago)
Please post a LightGBM mathematical explanation, sir.
@biplabroy1406 (3 years ago)
Someone please explain 21:34: 0 - 0.6 = -0.6, so how is it converging to the actual value?
@snegas2849 (3 years ago)
Yes, -0.6; maybe in a hurry he wrote it wrong.
@biplabroy1406 (3 years ago)
@@snegas2849 In that case the residual went from -0.5 to -0.6... but it should converge to 0, shouldn't it?
@snegas2849 (3 years ago)
@@biplabroy1406 Actually, when we compute the sigmoid the answer is 0.5, not 0.6, so the residual will again be -0.5; I think he wrote 0.6 by mistake. And it will not become 0 in one go... it constructs many decision trees, and finally it becomes zero.
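To see that convergence concretely, here is a toy iteration for a single record with actual label 0, assuming λ = 0 and a learning rate of 0.3 (illustrative values only):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

y, log_odds, lr = 0, 0.0, 0.3         # base score: log(0.5 / 0.5) = 0
for i in range(5):
    p = sigmoid(log_odds)
    residual = y - p                  # first round: 0 - 0.5 = -0.5
    output = residual / (p * (1 - p)) # single-record leaf output, lambda = 0
    log_odds += lr * output           # push the log-odds toward the label
    print(i, round(p, 3), round(residual, 3))
# p shrinks toward 0 over the rounds, so the residual decays toward 0
```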
@ramnareshraghuwanshi4737 (4 years ago)
Dude!! 3:29 residual = actual - probability? How come?
@subhodipgiri2924 (4 years ago)
How can we subtract the probability of a value from that value? Suppose I take approvals in terms of Y and N; their probability remains 0.5, but we cannot subtract 0.5 from Y or N. I did not get the concept of subtracting the probability from the value.
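The step that resolves this is label encoding: the approvals are mapped to numbers first (N → 0, Y → 1), and only then is the residual y − p computed. A tiny sketch:

```python
labels = ["Y", "N", "Y"]
y = [1 if lab == "Y" else 0 for lab in labels]  # encode before boosting
p = 0.5                                          # base learner's probability
residuals = [yi - p for yi in y]                 # [0.5, -0.5, 0.5]
```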
@naveenvinayak1088 (4 years ago)
Krish, how do you stay so focused?
@dheerendrasinghbhadauria9798 (4 years ago)
How is he taking probability = 0.5 throughout the whole process? What is the calculation behind that probability?
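For context, there is no calculation behind it: 0.5 is just the shared starting guess before any tree is built. In the xgboost library it is exposed as the base_score hyperparameter, which conventionally defaults to 0.5 for binary targets (newer versions can also estimate it from the data):

```python
from xgboost import XGBClassifier

# Every record starts from the same initial prediction; 0.5 is the
# conventional neutral choice for a binary target (log-odds of 0).
model = XGBClassifier(base_score=0.5)
```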