Decision Tree Regression Clearly Explained!

161,790 views

Normalized Nerd

1 day ago

Here, I've explained how to solve a regression problem using Decision Trees in great detail. You'll also learn the math behind splitting the nodes. The next video will show you how to code a decision tree regressor from scratch.
#machinelearning #datascience
For more videos please subscribe -
bit.ly/normaliz...
Decision Tree previous video -
• Decision Tree Classifi...
Love my work? Support me -
www.buymeacoff...
Join our discord -
/ discord
Facebook -
/ nerdywits
Instagram -
/ normalizednerd
Twitter -
/ normalized_nerd

Comments: 130
@jayo3074 · 3 years ago
It looks hard at first, but with a good teacher explaining it, it really is simple.
@jamiyana4969 · 8 months ago
Honestly this is the most high-end professional video that's so simply explained! Amazing job!
@saigopal5086 · 1 year ago
broooooo this is brilliant, I can't resist pressing the like button, it's such a blessing to have people like you
@hannav7125 · 3 years ago
Shout out to this dude for the awesome visualization and clear explanation.
@drewplayz3765 · 6 months ago
u kinda bad tho
@sillem4337 · 2 years ago
This video is next-level teaching. Concept presented so clearly and so well. Thank you!
@Ivan-cp2hn · 3 years ago
Your explanation is by far the best on YouTube up till now. Don't know why the view and like counts aren't higher. But keep doing the great work!
@SoubhikBhattacharya · 1 year ago
Can you explain why X0
@onetapmanbbr · 10 months ago
8:24 He explains that the algorithm compares every possible split and finds the one with the best variance reduction. So the y in X0
@pratyushrout7904 · 2 years ago
The nicest explanation video for DT on YouTube...
@prashantmandare2875 · 2 years ago
Best explanation of decision tree for regression that I have come across.
@persevere1052 · 10 months ago
Extremely helpful and easy to understand!
@illan731 · 10 months ago
Great explanation. Feels too simple to have looked up on video, which means it is explained very well.
@giacomozuccolotto4503 · 2 months ago
Great video! I still have a question though: how did you apply the variance formula to get those starting variance values before applying the variance reduction formula? I don't understand how the number 9744 came up.
@rishabhgarodia410 · 2 years ago
Thanks for these visualizations! Helps a lot
@gustavoalcarde · 2 years ago
Thank you so much! Very simple and visual, that's all I needed!
@akshaypr9164 · 10 months ago
amazing man!! love your explanation and style
@yashsaxena7754 · 2 years ago
Great explanation! One question though: if the prediction is based on the average value of the target variable in the leaf node, it would mean that all the observations terminating at a node will have the same prediction. Is that right? E.g., if 10 observations terminate at a leaf node, all will have the same prediction.
@fabianaltendorfer11 · 1 year ago
One hell of an explanation video. Great!
@bhavinmoriya9216 · 3 years ago
Thanks for the video! How do you decide x0
@NormalizedNerd · 3 years ago
Actually, we try every possible value of the threshold and find which one produces the best split. You can go through the code for a better understanding. github.com/Suji04/ML_from_Scratch/blob/master/decision%20tree%20regression.ipynb
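A minimal sketch of that search, assuming a single feature and hypothetical helper names (not the notebook's exact code): every unique feature value is tried as a threshold, and the one producing the largest variance reduction wins.

```python
import numpy as np

def variance_reduction(y, y_left, y_right):
    """Parent variance minus the size-weighted variance of the two children."""
    w_l, w_r = len(y_left) / len(y), len(y_right) / len(y)
    return np.var(y) - (w_l * np.var(y_left) + w_r * np.var(y_right))

def best_split(x, y):
    """Try every unique value of feature x as a threshold; keep the best."""
    best_thr, best_gain = None, 0.0
    for thr in np.unique(x):
        left, right = y[x <= thr], y[x > thr]
        if len(left) == 0 or len(right) == 0:  # skip degenerate splits
            continue
        gain = variance_reduction(y, left, right)
        if gain > best_gain:
            best_thr, best_gain = thr, gain
    return best_thr, best_gain

# Made-up data with two clear target clusters
x = np.array([1.0, 2.0, 3.0, 10.0, 11.0, 12.0])
y = np.array([5.0, 5.0, 5.0, 50.0, 50.0, 50.0])
thr, gain = best_split(x, y)  # threshold 3.0 separates the clusters cleanly
```

With these made-up numbers the search lands on threshold 3.0, where both children become pure and the reduction equals the full parent variance.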
@bhavinmoriya9216 · 3 years ago
@@NormalizedNerd Thanks buddy :)
@dilse_hindustani · 3 years ago
@@NormalizedNerd You have done well, but please first explain with an example, showing every step from first to last, with all the math used and the computation of how we get the results, and why this value and not another. Please teach like this so that every learner, even one without the basics, can understand. It is a request.
@sai_sh · 2 years ago
Hi, at 4:40 why did it go towards the left node rather than the right, since x=16 and x
@yuyang5575 · 3 years ago
This is awesome! Clearly we use a binary tree to do the classification first (or build a decision tree first), and then we follow the tree to reach the target leaf node. Btw, which software do you use to make the animation? Very impressive
@NormalizedNerd · 3 years ago
Thanks! Well, I use Manim (a python library)
@himtyagi9740 · 3 years ago
What a beautiful lecture... kudos to your efforts
@falicitations · 3 months ago
Top quality video, which software do u use to create visuals?
@HuyLe-nn5ft · 2 years ago
You just got one more subscription. Superb explanation and visualization!
@Rahul.alpha.Mishra · 10 months ago
Thanks a lot bro. And your viz helped me explain my model in the presentation. Carry on forward.
@jullienb2055 · 1 year ago
Excellent visualization, kudos!
@yurpipipchz75 · 2 months ago
yeah, wow! Really well done!
@himanshuverma3984 · 2 months ago
Could not understand the variance reduction part. If we're talking about variance reduction, then as per your explanation the 2nd set should have been chosen, but you selected the first set. Am I assuming something wrong here?
@yuvikayadav8203 · 1 year ago
Great explanation! Just one doubt: in a decision tree classifier we split the nodes until we get pure leaf nodes if hyperparameters are not explicitly set, but in the case of regression problems how does it decide when to stop growing the tree if no hyperparameter is defined?
@vasilisandreou8792 · 4 months ago
Could you explain how the split would occur if we had 3 or more criteria please?
@nithishar2781 · 2 months ago
It's so good, but I have a doubt: when do we end the splitting?
@sadjiajfiarei3498 · 2 years ago
bro you made it so easy
@itsme1674 · 2 years ago
Wonderful explanation
@ismailwangde580 · 1 year ago
Bro, you are better than Krish Naik lol. Thank you for the efforts, really appreciate it
@jamalnuman · 10 months ago
great presentation
@dgaphysics4026 · 11 months ago
Great explanation!
@rishiksarkar62 · 2 years ago
Fabulous explanation sir! Thank you very much!!
@ckeong9012 · 9 months ago
Awesome. May I know what kind of software you use for the visualization?
@AthiBalaji-q1r · 3 months ago
Great, but how does the model learn to arrive at the optimal splitting point?
@diabl2master · 11 months ago
What a brilliant video!!
@suryahr307 · 8 months ago
Does it take care of outlier data points?
@estefvasqu · 4 years ago
Thank you for sharing your knowledge. We appreciate it. Greetings from Argentina
@NormalizedNerd · 4 years ago
You're most welcome... Keep supporting!
@shanks9758 · 5 months ago
good job, thank you
@leolei9352 · 3 years ago
Really good teaching.
@gatecseaspirant-dk9ze · 1 year ago
hello people from the future! you nailed it here
@ahmedelsabagh6990 · 2 years ago
Very good visualizations
@kajalmishra6895 · 3 years ago
I love all your videos.
@razieqilham8327 · 3 years ago
So in love with your explanation sir, but I'm confused by the dataset. Could you build the dataset as a table, not a graph?
@smartshoppingapp3585 · 3 years ago
How do you find the best root node? Because the video is about finding the best split, which really helped. But how do you find the best root node?
@the_senthil · 3 years ago
Quality content... I've never seen anything like it 💯
@osama11osama · 10 months ago
I really like it, thanks
@siddhantpathak6289 · 3 years ago
Awesome visualization and explanation. I went through the GitHub implementation and it seems you are using unique feature values as possible thresholds. How would this approach work for a continuous feature with millions of records, as there will be many unique values to test? Possible thresholds in the video were 1 and 2, right? Just checking my understanding.
@samarthtandale9121 · 1 year ago
Incredible 🔥
@madhumatinarule4489 · 2 years ago
Kindly make a full course on the fundamentals of machine learning, as we are not from computer science.
@diegobarrientos6271 · 2 years ago
Thanks for the explanation! I have a question... I watched some videos that use MSE instead of variance, so should I use the sum of squared errors or the variance? It'd be great if someone could clarify this please.
@DaaniaKhalith · 1 year ago
MSE is used if both input and output are continuous; variance is for discrete input and continuous output.
@takeshi7441 · 7 months ago
Thank you sir
@mihirsheth9918 · 3 years ago
great explanation. thanks
@SachinModi9 · 2 years ago
Man... Loved it..
@nickgeo8250 · 1 year ago
Really good explanation, well done! Only one question: how do you calculate the wi weights?
@AidarMHTV · 1 year ago
The weights, I think, are just the fraction of items on each side. For example, 2 items on the left and 6 items on the right gives weights of 0.25 and 0.75 respectively.
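That reading matches the usual definition; a tiny sketch (with made-up numbers) of how those fractions feed into the weighted child variance:

```python
import numpy as np

# Hypothetical split: 2 samples go left, 6 go right
y_left = np.array([10.0, 12.0])
y_right = np.array([50.0, 52.0, 48.0, 51.0, 49.0, 50.0])
n = len(y_left) + len(y_right)

# The w_i weights are just the child sizes as fractions of the parent
w_left, w_right = len(y_left) / n, len(y_right) / n  # 0.25 and 0.75
weighted_child_var = w_left * np.var(y_left) + w_right * np.var(y_right)
```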
@XuanTran-ri1hn · 2 years ago
Hi, may I ask one thing at minute 4:41? Because x=16, I think that condition is not true for x1
@explovictinischool2234 · 9 months ago
This condition is, indeed, true. You may have confused x0 and x1: x0 = 16 but x1 = -2. Here, we are talking about x1
@Mudiaga11 · 2 years ago
this is excellent
@subarnasubedi7938 · 1 year ago
I am confused why you took x2
@inderkaur3012 · 2 years ago
Hi, though Var R1 > Var R2, how do we conclude that R2 is best suited for the split? Graphically I understand your logic: as the colors are best segregated by R1, we must choose R1, but I was unable to conclude the same from the variance perspective. Could you please explain?
@konstantinosmaravegias4198 · 1 year ago
Do not confuse the variance reduction with the variance of each split. VarR1 < VarR2, hence the VarR1 split removes more variance from the parent node (Var(parent) - VarR1 > Var(parent) - VarR2).
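A small numeric sketch of that distinction (made-up targets): the split whose children have the smaller weighted variance yields the larger reduction, and that is the split the tree keeps.

```python
import numpy as np

y = np.array([4.0, 6.0, 40.0, 44.0])  # hypothetical parent-node targets

def weighted_child_var(left, right):
    n = len(left) + len(right)
    return len(left) / n * np.var(left) + len(right) / n * np.var(right)

# Split A separates the two clusters; split B just peels off one point
var_a = weighted_child_var(y[:2], y[2:])
var_b = weighted_child_var(y[:1], y[1:])

reduction_a = np.var(y) - var_a
reduction_b = np.var(y) - var_b
# Lower child variance (var_a < var_b) means higher reduction (reduction_a > reduction_b)
```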
@candicerusser9095 · 2 years ago
Hi, great video! A small doubt: what does "desired depth" mean in a decision tree regressor? Does it mean we reach a point where we can't split anymore, i.e. the variance becomes 0?
@jakeezetci · 2 years ago
I think that's the depth of the tree you want, which you need to find by trying it out yourself. You want to stop before the variance becomes 0, as then the prediction really goes wild.
@kellymarchisio377 · 1 year ago
Excellent!
@ckeong9012 · 3 years ago
Awesome. Great video. Much appreciated if you could put the values or labels on the Cartesian plane. TQ~
@NormalizedNerd · 3 years ago
Suggestion noted...
@kalpanakadirvel9220 · 3 years ago
Excellent video.. Thank you
@soniasnia · 2 years ago
ur video is just awesome!
@RohitAlexKoshy · 3 years ago
Great video. Keep up the good work!
@NormalizedNerd · 3 years ago
Thanks!
@sebastiaanvanhassel4664 · 3 years ago
One extra like for the classical music!!! 👏😀
@NormalizedNerd · 3 years ago
😁❤
@Randomstiontastic · 3 years ago
How do you find the value of the inequalities for the filters?
@NormalizedNerd · 3 years ago
By checking every possible value of a feature as the threshold and splitting the dataset based on that, then taking the particular feature and the corresponding threshold that gives the maximum information gain. Please see the code provided in the next video for more clarity.
@Nirjhar85 · 1 year ago
Awesome!
@polishettysairam6466 · 3 years ago
It's good; can you provide the dataset used here?
@asrjy · 3 years ago
Sick vid! Did you use manim to make this video?
@NormalizedNerd · 3 years ago
Yeah!
@asrjy · 3 years ago
@@NormalizedNerd nice
@saqlainshaikh5483 · 3 years ago
Great Expectations ✌️
@NormalizedNerd · 3 years ago
Thanks!! More to come :)
@Asmallpanda1 · 2 years ago
Very Nice ty
@akashkundu4520 · 2 years ago
Can you add post-pruning of the tree and a visual representation of the tree please? I have an assignment 😭
@xINeXuSlx · 3 years ago
Great video! Helping me a lot in preparing for my Data Science exam soon. One thing I did not quite understand yet is when I should use Decision Tree Classification or Regression. I understand that one uses Information Gain and the other Variance Reduction, but how do I know in the first place which to apply?
@NormalizedNerd · 3 years ago
It depends on the problem you are trying to solve. As a rule of thumb: if the target variable takes continuous values then go for regression, and if it takes discrete (and few) values then go for classification.
@xINeXuSlx · 3 years ago
@@NormalizedNerd I understand, thanks a lot :)
@tinyeinmoe5147 · 8 months ago
you're the best, thank you Soooo much, India is the best
@robm838 · 2 years ago
The values -7 and -12 cannot be found on the grid.
@mindlessambient1791 · 2 years ago
In the ending example, the weighted average of variance used weights with denominators of 20 (i.e. 11/20, 9/20, etc.). Has anyone ever thought to adjust these weights using Bessel's correction? Not sure how much of a difference that would make, but just curious. I am guessing that the weights would be something like 10/19 and 8/19 with this adjustment.
@piero8284 · 2 years ago
Nice explanation. I was struggling a little bit to find detailed material about this topic. As I thought, decision trees in general always check for the best split over every possible feature; that means if there are k features and m samples, at each split the tree will perform O(m*k) variance computations, right?
@p337maB · 2 years ago
It's not O(m*k) but exactly m*k computations at every split.
@piero8284 · 2 years ago
@@p337maB sure
@61_shivangbhardwaj46 · 3 years ago
Thnx sir😊
@NormalizedNerd · 3 years ago
Most welcome
@anuragshrivastava7855 · 2 years ago
How did we get the average?
@alexandrfedorov7297 · 1 year ago
You are awesome
@MrHardgabi · 3 years ago
incredible
@JPDEV092 · 8 months ago
How do you make videos like this?
@paultvshow · 9 months ago
At 1:28, what you mentioned was misleading and could be misinterpreted. A line is still a line in 2D; a line will never be a plane in 2D. You should have said "or a plane in 3D", or simply called it a hyperplane instead of a line or plane.
@lucarohrer8665 · 1 month ago
Is this a decision tree or a regression tree?
@Rehul-gw3yj · 1 year ago
I actually came here to understand how we get the root node. Anyone?
@tomgt428 · 3 years ago
cool
@ronitganguly3318 · 2 years ago
Dada tumi bangali? (Brother, are you Bengali?) 😁
@NormalizedNerd · 2 years ago
Ha bhai :)) (Yes, bro!)
@ronitganguly3318 · 2 years ago
@@NormalizedNerd Bengali accent ftw!
@lkny631 · 2 months ago
Loved everything! You're awesome, but bro, your lines all look the same. It's so hard to follow. Jesus, couldn't you have used bright and contrasting colors? But thank you so much for it!!
@iloveblender8999 · 2 months ago
Lol, this looks like a k-d tree.
@DikshuStuffyVlogs · 4 months ago
I can't understand
@Gulshankumar-fg9ls · 2 years ago
Bro… I would suggest you get proper knowledge before you start teaching any topic in machine learning; sometimes your statements are vague.
@ccuuttww · 3 years ago
You spent a lot of time making the animation
@ccuuttww · 3 years ago
I am not sure if you should use MSE for every split
@DustinGunnells · 4 years ago
How are you determining the filter splits further down from the root of the tree? I don't see the reasoning that you're using to make this useful. I see the filtering, I see data points, but what is determining the other filters from the initial filter? Why is the partitioning valuable? How would the partitioning be applied? Why would you have two of the same filter between the two x variables, x sub 0 and x sub 1? Why is x sub 0 represented in the root but not x sub 1? What is the relationship/difference between the two x variables? This looks initially useful, then it looks like a bunch of snow on a cathode.
@NormalizedNerd · 4 years ago
"but what is determining the other filters from the initial filter?" The initial filter (at the root) divides the data into two sets. The left one is then divided again, and so is the right one. We do this process recursively. While splitting a set we choose the condition that maximizes variance reduction. Please see the implementation to get more clarity: kzbin.info/www/bejne/hmO9c2uZaq2UZ7M
@DustinGunnells · 4 years ago
@@NormalizedNerd So are you saying that x sub 0 and x sub 1 are two sets of decision sets? Rather, two collections of boundaries? Something still looks off. If it's an array of decision boundaries, how do you jump from 1 (of x sub 0) to -7 and -12 (of x sub 1)? I've even tried to figure out the symmetry in the tree to find logic. 4 elements of x sub 1, 3 elements of x sub 0, 20 partitions in the grid for 20 elements in the set. I've watched several of your videos trying to understand your message. Explain this one where it makes sense and I'll definitely continue to watch your other content. I try to give everybody that says they're providing "knowledge" a chance. This is outstandingly bonkers to me. I'm also a programmer and an MBA.
@dnyaneshjalamkar4257 · 2 years ago
Your so-called autopilot ruined the video!
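The recursive splitting described in the author's replies above can be sketched end to end. This is a minimal illustration (not the channel's actual notebook code): every unique value of every feature is tried as a threshold, variance reduction picks the split, and leaves predict the mean of their targets.

```python
import numpy as np

def build_tree(X, y, depth=0, max_depth=3):
    """Recursively split on the (feature, threshold) pair with the
    highest variance reduction; recurse on the two halves."""
    # Stop when the node is pure or the depth budget is spent
    if depth >= max_depth or np.var(y) == 0.0:
        return {"leaf": True, "value": float(np.mean(y))}

    best_gain, best = 0.0, None
    for j in range(X.shape[1]):            # every feature...
        for thr in np.unique(X[:, j]):     # ...every observed value as threshold
            mask = X[:, j] <= thr
            if mask.all() or not mask.any():
                continue                   # degenerate split, skip
            n = len(y)
            child_var = (mask.sum() / n) * np.var(y[mask]) \
                      + ((~mask).sum() / n) * np.var(y[~mask])
            gain = np.var(y) - child_var   # variance reduction
            if gain > best_gain:
                best_gain, best = gain, (j, thr, mask)

    if best is None:                       # no useful split found
        return {"leaf": True, "value": float(np.mean(y))}

    j, thr, mask = best
    return {"leaf": False, "feature": j, "threshold": float(thr),
            "left": build_tree(X[mask], y[mask], depth + 1, max_depth),
            "right": build_tree(X[~mask], y[~mask], depth + 1, max_depth)}

def predict(node, x):
    """Walk the tree until a leaf, then return its stored mean."""
    while not node["leaf"]:
        node = node["left"] if x[node["feature"]] <= node["threshold"] else node["right"]
    return node["value"]

# Made-up 1-D data with two target clusters
X = np.array([[1.0], [2.0], [3.0], [10.0], [11.0], [12.0]])
y = np.array([5.0, 5.0, 5.0, 50.0, 50.0, 50.0])
tree = build_tree(X, y)
```

With this toy data the root split lands at 3.0 and both children become pure leaves, so `predict` returns 5.0 on the left cluster and 50.0 on the right.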
Decision Tree Regression in Python (from scratch!)
14:03
Normalized Nerd
40K views
Decision Tree Classification Clearly Explained!
10:33
Normalized Nerd
760K views
Regression Trees, Clearly Explained!!!
22:33
StatQuest with Josh Starmer
684K views
Standardization vs Normalization Clearly Explained!
5:48
Normalized Nerd
165K views
Why Does Diffusion Work Better than Auto-Regression?
20:18
Algorithmic Simplicity
449K views
Understanding B-Trees: The Data Structure Behind Modern Databases
12:39
Random Forest Algorithm Clearly Explained!
8:01
Normalized Nerd
684K views
How might LLMs store facts | DL7
22:43
3Blue1Brown
1M views
Decision Tree Classification in Python (from scratch!)
17:43
Normalized Nerd
211K views
Covariance Clearly Explained!
7:47
Normalized Nerd
107K views
ROC and AUC, Clearly Explained!
16:17
StatQuest with Josh Starmer
1.6M views