We are happy to announce that iNeuron is launching a 6-month live Full Stack Data Analytics batch with job assistance and an internship, starting 18th June 2022. The instructors will be me and Sudhanshu. The course price is really affordable: ₹4000 INR including GST. The course content will be available for lifetime, along with pre-recorded videos. You can check the course syllabus below. Course link: courses.ineuron.ai/Full-Stack-Data-Analytics From my side you can avail an additional 10% off by using the Krish10 coupon code. Don't miss this opportunity and grab it before it's too late. Happy Learning!!
@AmeliaMelia-tj3kc • 5 months ago
best teacher
@harshdeepjaggi9715 • 1 year ago
Wonderful explanation, sir... I'm already enrolled in Data Science with one of India's edtech companies... no doubt the teachers there also teach well, but the English content doesn't settle in my mind properly in one go... this Hindi content has lodged itself in my mind in such a way that it will always stay with me now... Thank you for your efforts.
@netviz8673 • 5 months ago
Decision trees work for both classification and regression; here it's classification. There are two algorithms: ID3 and CART. In CART the decision tree splits into binary trees. Two ideas are involved: (a) Entropy and Gini impurity, used to check whether a split is pure, and (b) Information Gain, used to decide which feature to split on. When H(S) is 0 the split is pure, and when H(S) is 1 the split is fully impure, i.e. an equal distribution (e.g. 3 yes and 3 no), so entropy ranges from 0 to 1 for two classes. Gini impurity comes out to 0.5 for a fully impure binary split and 0 for a pure split, so it ranges from 0 to 0.5. Gini impurity is often preferred over entropy because the log in the entropy formula can slow computation down. When you have multiple features, information gain tells you how to build the tree from the given features: which feature to start with and which ones follow. The feature whose split yields the highest information gain is the one the decision tree should start with.
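The quantities this summary describes can be sketched in a few lines of Python (a minimal illustration assuming class labels in plain lists; this is not code from the video):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy H(S) = -sum(p * log2(p)); 0 for a pure split, 1 for a 50/50 binary split."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gini(labels):
    """Gini impurity 1 - sum(p^2); 0 for a pure split, at most 0.5 for two classes."""
    n = len(labels)
    return 1 - sum((c / n) ** 2 for c in Counter(labels).values())

def information_gain(parent, children):
    """Entropy of the parent node minus the weighted entropy of the child splits."""
    n = len(parent)
    return entropy(parent) - sum(len(ch) / n * entropy(ch) for ch in children)

# 3 yes / 3 no: the fully impure case described above
print(entropy(["y", "y", "y", "n", "n", "n"]))  # 1.0
print(gini(["y", "y", "y", "n", "n", "n"]))     # 0.5

# a feature that separates the classes perfectly gives the maximum gain
print(information_gain(["y", "y", "y", "n", "n", "n"],
                       [["y", "y", "y"], ["n", "n", "n"]]))  # 1.0
```

When several candidate features are available, the tree splits on the feature whose children give the highest information gain.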
@techinicalocean8733 • 3 months ago
Thank you so much buddy God bless you
@aravindkumar6631 • 1 month ago
God bless you frnd
@prathameshekal7308 • 2 years ago
We are very grateful!
@adityapradhan8474 • 8 days ago
Thank you so much, sir. All these math concepts are helping a lot! I can now clearly visualize what's happening in the background of all these ML algorithms.
@jasonbourn29 • 2 years ago
Thanks sir, in your Hindi explanations you tend to cover topics better (your English videos are also of far better quality than anyone else's).
@pritamrajbhar9504 • 1 year ago
It is one of the best and simplest explanations so far.
@GopalKrishna-p4m • 1 year ago
No one will beat you, sir, even an era from now!! Incredible explanation, thank you so much sir.
@patelanjali2909 • 1 year ago
Your teaching skill is awesome.
@RiteshBhalerao-b2v • 1 year ago
Great explanation... hard to find anywhere else 👌👌
@akashlinganwar4810 • 2 years ago
hello krish sir... your explanation is easy to understand and anyone can learn easily..thank you sir...😊
@utkarsh5165 • 9 months ago
Thank you Krish for Crystal Clear Explanation.❤
@adityatiwari7287 • 10 months ago
Awesome Explanation....Thanks A Lot....Keep It Up !!!
@maukaladka4100 • 2 years ago
Worth watching😍😍😍
@sandeepbehera3605 • 2 years ago
Wonderful explanations sir, no one can explain like you... 🙏🙏🙏 Thank you, sir 😇
@palakgrover9925 • 2 months ago
Amazing Explanation 😃
@Keep_Laughfing • 2 years ago
Sir, please keep making videos like this. You probably don't even realize how big a help this is for DATA SCIENCE lovers. Heartfelt thanks 🙏🙏🙏
@Er_IT_prashantjha • 8 months ago
You explain very well, sir 👌🏻💯
@aamiransarii • 1 year ago
Many, Many Thanks .....so lovely of you
@mainakseal5027 • 1 year ago
what an amazing tutorial...hats off sirji!!!...
@DrZubairulIslam • 4 months ago
Thanks Krish, best ever video. Wow.
@sauravsahay8803 • 9 months ago
You make everything look so easy
@IrfanSaleem541 • 6 months ago
Thanks a lot. Love and Respect from Oman
@AbhishekSharma-cj9to • 3 months ago
this is really amazing sir 🙏
@SumitJindal-e3k • 1 year ago
That was awesome
@mohammadshad4168 • 1 month ago
Thanks sir for the great explanation
@ruhiraj9551 • 2 years ago
Hello Krish sir, thank you so much 🙏 for an excellent explanation.
@maths_impact • 2 years ago
Wonderful explanation given by you sir in Hindi.
@ketanitaliya9493 • 1 year ago
Thanks, it is really helpful and easy to understand.
@vidvaanwithromi5213 • 1 year ago
wonderfully explained sir!
@arunchougale5927 • 2 years ago
Very well explained by you. It helps me a lot. Thank you very much.
@dhavalsukhadiya8654 • 9 months ago
Great explanation, sir.
@nisho404 • 1 year ago
in one word bosssss
@jaiprakashsingh756 • 2 years ago
sir, I really find your videos very helpful. thanks a lot.
@osamaosama-vh6vu • 2 years ago
You're a legend, dear sir, thank you, be happy 😍
@gurpreetkaur-pf1bf • 8 months ago
Amazing ❤
@bryan4592 • 5 months ago
Amazing video sir
@SonuK7895 • 2 years ago
Very well explained sirjiiii
@girishgogate2733 • 2 years ago
Thankyou krish sir ........
@AjmalShah-s7n • 2 months ago
Thank you sir G
@tuhinbarai7592 • 1 year ago
Thank you sir.. it's easy to understand...
@BikashKonwar-w7q • 1 year ago
In the entropy formula, it's the negative summation: H(S) = -Σ p(x) * log2(p(x)).
@katw434 • 2 years ago
Thanks sir please continue this series
@abhiWorldIN • 5 months ago
Awesome video
@mrityunjayupadhyay7332 • 2 years ago
Great explanation
@meenalpande • 1 year ago
Nice explanation
@jasanimihir4994 • 2 years ago
As always, very well explained. I have one query, sir. You said that if the dataset is very big then use the Gini index, otherwise entropy is fine. But finding the entropy is a must for information gain, as there is no mention of the Gini index in the information gain formula. So is it possible to use the Gini index to find information gain? Kindly throw light on that. 😊
@shaiqmahmood • 1 year ago
There is a way to calculate the Information Gain using Gini index as well.
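To make the reply above concrete: the same "parent impurity minus weighted child impurity" recipe works with Gini impurity in place of entropy (CART uses this, sometimes called Gini gain). A minimal sketch with hypothetical helper names, not code from the video:

```python
from collections import Counter

def gini(labels):
    """Gini impurity: 1 minus the sum of squared class probabilities."""
    n = len(labels)
    return 1 - sum((c / n) ** 2 for c in Counter(labels).values())

def gini_gain(parent, children):
    """Impurity drop: parent Gini minus the weighted Gini of the child splits."""
    n = len(parent)
    return gini(parent) - sum(len(ch) / n * gini(ch) for ch in children)

parent = ["y", "y", "y", "n", "n", "n"]
# a perfect split removes all impurity, so the gain equals the parent's Gini
print(gini_gain(parent, [["y", "y", "y"], ["n", "n", "n"]]))  # 0.5
```

The feature ranking this produces usually agrees closely with entropy-based information gain, which is why either criterion is acceptable in practice.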
@krishj8011 • 8 months ago
nice tutorial
@razashaikh8698 • 2 years ago
Sir, can we find information gain using Gini impurity?
@abhinayjangde • 1 month ago
completed 😘
@anusuyaghosal6961 • 9 months ago
Sir, you said H(S) is the entropy of the root node, but I think it is the entropy of the target attribute.
@Pankaj_Khanal_Joshi • 1 year ago
Sir, please make a video on sklearn and seaborn. Thank you.
@neelkalyani7849 • 8 months ago
In calculating information gain, can we use gini impurity instead of entropy?
@maths_impact • 2 years ago
Wonderful
@asifmahmoud1262 • 9 days ago
Start at 21:17
@pkumar0212 • 6 months ago
👌
@sadiqpurki1042 • 3 days ago
I understand
@sugandhaarora8174 • 5 months ago
Is this required for a data analyst role?
@abhishekbhad4029 • 11 months ago
Very nice explanation, sir. I have one question: how does one get an internship, as no one is hiring freshers?
@shreyashnarvekar6391 • 1 month ago
Where is the practical implementation? Can anyone guide me on where I can find it?
@priyanshupokhariya8866 • 1 year ago
0 * log(0) is undefined, so how does it come out as 0??
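To answer the question above: the p = 0 term in the entropy sum is taken as 0 by convention, because p * log2(p) tends to 0 as p tends to 0, even though log(0) alone is undefined. A quick numerical check:

```python
import math

# as p shrinks toward 0, the product p * log2(p) shrinks toward 0 as well
for p in (0.1, 0.01, 0.0001, 1e-8):
    print(p, p * math.log2(p))
```

So a pure node (one class with probability 1, the rest 0) cleanly gets entropy 0.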
@VikasSingh-nq5yx • 2 years ago
Sirrrr.... ❤ I have a question 🙋! If an interviewer asks why we use a minus (-) sign in entropy, what should we answer? Please reply........ ❤
@saranshsehrawat8544 • 2 years ago
It's just the formula.
@neeraj.kumar.1 • 2 years ago
Don't worry, they don't ask these kinds of mathematical formulas. They may ask what Gini impurity is.
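For anyone who does want an interview answer to the minus-sign question: probabilities lie in (0, 1], so log2(p) is zero or negative and the bare summation is non-positive; the minus sign flips it so entropy comes out non-negative. A quick check:

```python
import math

probs = [0.5, 0.5]  # a 50/50 binary split
raw = sum(p * math.log2(p) for p in probs)
print(raw)   # -1.0: the bare sum is negative
print(-raw)  # 1.0: the minus sign yields a non-negative entropy
```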
@rushikeshwaghmare3446 • 1 year ago
Sir, according to external sites, Gini impurity ranges from 0 to 1.
@rushikeshwaghmare3446 • 1 year ago
Please confirm this…
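Both statements can be reconciled: for two classes the maximum Gini impurity is 0.5, but for k equally likely classes it is 1 - 1/k, which approaches 1 as k grows — likely what those external sites mean. A quick check:

```python
def gini_max(k):
    """Gini impurity of k equally likely classes: 1 - k * (1/k)^2 = 1 - 1/k."""
    return 1 - k * (1 / k) ** 2

print(gini_max(2))    # 0.5: binary classification tops out at 0.5
print(gini_max(4))    # 0.75
print(gini_max(100))  # ~0.99: approaches 1 with many classes
```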
@prerakchoksi2379 • 2 years ago
wow
@vatsalshingala3225 • 1 year ago
❤❤❤❤❤❤❤❤❤❤
@SpectreWarfare • 1 year ago
The video volume is very low. It is difficult to listen to.
@ShivamSharma-if1oh • 2 years ago
My entropy answer comes out to 0.6, not 1.
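A guess at what happened in the comment above (assuming a balanced 3-yes/3-no split was intended): using the natural log instead of log base 2 gives roughly 0.69 instead of 1, a common source of answers below 1:

```python
import math

p = [0.5, 0.5]  # balanced yes/no split
print(-sum(x * math.log2(x) for x in p))  # 1.0 with log base 2
print(-sum(x * math.log(x) for x in p))   # ~0.693 with the natural log
```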
@aaqibafraaz6215 • 2 years ago
Hello sir
@aaqibafraaz6215 • 2 years ago
You are doing a great job.
@importantforlifeyasharora9042 • 2 months ago
What a definition: entropy ranges from 0 to 1, and Gini impurity ranges from 0 to 0.5. 😂
@roshanYadav-y8f • 2 months ago
❤❤❤
@roshanYadav-y8f • 2 months ago
Hey bro, are you also preparing for data science?