Your explanations are so clear; they remind me of all the knowledge I had forgotten after my second-year statistics class. I'm doing my capstone project, and your videos help me a lot. I appreciate your contribution.
@MarwaneElMoufaoued · 3 days ago
Thank you bro, I really needed this video because I was struggling a little bit with probabilities ✨🙏
@must_be_good · 4 days ago
Man, using Manhattan distance really speeds up reaching the desired result.
@martusha1 · 5 days ago
great video man
@arcsaber1127 · 6 days ago
Please make a video on isolation forest
@savage1851 · 7 days ago
Loved how you said "people from the future" in the intro. That's a new kind of intro for me.
@giovanniberardi4134 · 8 days ago
You're an excellent teacher👍
@ismail3721 · 10 days ago
Awesome video! You explained the concept so much better than many university lectures. I would just like to make a quick point about the random feature selection step when bootstrapping. (imho) According to most recent/popular papers on random forest algorithms, the most common and efficient approach appears to be to randomly select feature subsets at each node while traversing each tree, rather than selecting them only once at the tree level. One explanation I found online is that "while sampling features at every node still allows the trees to see most variables (in different orders) and learn complex interactions, using a subsample for every tree greatly limits the amount of information that a single tree can learn. This means that trees grown in this fashion are going to be less deep, and with much higher bias, in particular for complex datasets. On the other hand, it is true that trees built this way will tend to be less correlated with each other, as they are often built on completely different subsets of features, but in most scenarios this will not outweigh the increase in bias, therefore giving worse performance in most use cases." I really hope this was clear. Any comments are very welcome!
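The per-node sampling this comment describes can be sketched in a few lines. This is a toy illustration with made-up data, not the video's example: the key detail is the fresh random feature subset drawn inside `grow` at every split, which (as far as I know) is also what scikit-learn's `RandomForestClassifier` does via its `max_features` parameter.

```python
import random
from collections import Counter

random.seed(0)

def gini(labels):
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_split(rows, labels, feat_idx):
    # Best (feature, threshold) pair, considering ONLY the sampled features.
    best = None
    for j in feat_idx:
        for t in {r[j] for r in rows}:
            left = [l for r, l in zip(rows, labels) if r[j] <= t]
            right = [l for r, l in zip(rows, labels) if r[j] > t]
            if not left or not right:
                continue
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
            if best is None or score < best[0]:
                best = (score, j, t)
    return best

def grow(rows, labels, n_features, depth=0, max_depth=3, m=2):
    # Per-NODE feature subsampling: a fresh random subset of m features
    # is drawn at every split, not just once per tree.
    if depth == max_depth or len(set(labels)) == 1:
        return Counter(labels).most_common(1)[0][0]
    feat_idx = random.sample(range(n_features), m)
    split = best_split(rows, labels, feat_idx)
    if split is None:
        return Counter(labels).most_common(1)[0][0]
    _, j, t = split
    go = lambda keep: grow([r for r in rows if keep(r)],
                           [l for r, l in zip(rows, labels) if keep(r)],
                           n_features, depth + 1, max_depth, m)
    return (j, t, go(lambda r: r[j] <= t), go(lambda r: r[j] > t))

def predict(node, row):
    while isinstance(node, tuple):
        j, t, left, right = node
        node = left if row[j] <= t else right
    return node

# Toy data: the label depends only on feature 0 out of 5.
rows = [[random.gauss(0, 1) for _ in range(5)] for _ in range(200)]
labels = [1 if r[0] > 0 else 0 for r in rows]
tree = grow(rows, labels, n_features=5)
acc = sum(predict(tree, r) == l for r, l in zip(rows, labels)) / len(rows)
```

Sampling only once per tree would correspond to fixing `feat_idx` before the first call to `grow` and reusing it at every node.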
@Symon_Musician · 10 days ago
Thanks!
@abhikpanda1581 · 11 days ago
Asking out of context! Are you Bengali?
@dimasaldisallam5720 · 11 days ago
Why does x0 use a vertical line in the visualization while x1 uses a horizontal line? How would I describe the axes of the visualization if I have x1, x2, x3, x4?
@jenamartin6157 · 12 days ago
In a certain way, this video was less about Markov chains themselves and more about the underlying directed graphs. Using different language to describe the same things, the communicating classes are called “strongly connected components”, and you can form a “condensation graph” (which is a directed acyclic graph) by collapsing these communicating states.
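That correspondence is easy to demonstrate in code. A small sketch (the 4-state chain below is made up, not the video's): mutual reachability recovers the communicating classes, and collapsing them yields the condensation DAG.

```python
# A hypothetical 4-state chain: 0 and 1 communicate, 2 and 3 communicate,
# and probability mass can move from {0, 1} into {2, 3} but never back.
edges = {0: [1], 1: [0, 2], 2: [3], 3: [2]}

def reachable(start):
    # Depth-first search over the directed transition graph.
    seen, stack = {start}, [start]
    while stack:
        for v in edges[stack.pop()]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

reach = {s: reachable(s) for s in edges}

# Communicating classes = strongly connected components:
# i and j share a class iff each can reach the other.
classes = []
for s in edges:
    for c in classes:
        r = next(iter(c))
        if r in reach[s] and s in reach[r]:
            c.add(s)
            break
    else:
        classes.append({s})

# Condensation graph: collapse each class to a single node.
# The result has no directed cycles, i.e. it is a DAG.
cls = {s: k for k, c in enumerate(classes) for s in c}
dag = {(cls[u], cls[v]) for u in edges for v in edges[u] if cls[u] != cls[v]}

print(classes)  # [{0, 1}, {2, 3}]
print(dag)      # {(0, 1)}: mass flows from class {0, 1} into class {2, 3}
```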
@rajarshimaity6838 · 13 days ago
Good old maths... Man, I miss it..😍
@treya111 · 14 days ago
The demonstration of n getting bigger only shows cases where n is odd? What about even numbers like L4, L6?
@kevinwangglobal · 15 days ago
video is awesome!
@homakashefiamiri3749 · 15 days ago
It was wonderful.
@SergioLust · 17 days ago
sooo good
@Frog-c5y · 18 days ago
Is there a video on No U-Turn Sampler (NUTS)? Thanks
@AsafJerbi · 21 days ago
The best explanations and visualizations I've ever seen. Thank you for that!
@LouisKahnIII · 21 days ago
This is excellent info, well presented. Thank you!
@iantassin7611 · 22 days ago
Good video, but there seems to be a small error. In particular, you say we are assuming X1 and X2 are independent, but we do not actually make that assumption for Naive Bayes; we assume only conditional independence (conditioned on the class label), which does not imply general independence.
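The distinction is easy to verify numerically. A small sketch with invented numbers (not from the video): given the class, each binary feature independently matches the class label 90% of the time, so conditional independence holds by construction, yet the two features are strongly dependent marginally.

```python
from itertools import product

# Hypothetical model: P(Y=1) = 0.5, and given Y each binary feature
# independently equals Y with probability 0.9.
def p_x_given_y(x, y):
    return 0.9 if x == y else 0.1

def joint(a, b, y):
    # P(X1=a, X2=b, Y=y) under the Naive Bayes factorization.
    return 0.5 * p_x_given_y(a, y) * p_x_given_y(b, y)

# Conditional independence holds: P(X1, X2 | Y) = P(X1 | Y) * P(X2 | Y).
for a, b, y in product((0, 1), repeat=3):
    p_ab_given_y = joint(a, b, y) / 0.5
    assert abs(p_ab_given_y - p_x_given_y(a, y) * p_x_given_y(b, y)) < 1e-12

# ...but X1 and X2 are NOT marginally independent:
p_both = sum(joint(1, 1, y) for y in (0, 1))                 # P(X1=1, X2=1)
p_one = sum(joint(1, b, y) for b in (0, 1) for y in (0, 1))  # P(X1=1)
print(p_both, p_one ** 2)  # ≈ 0.41 vs 0.25, so the features are dependent
```

Intuitively, observing X1 = 1 makes Y = 1 more likely, which in turn makes X2 = 1 more likely, so the features are correlated even though the model never links them directly.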
@lucutes2936 · 24 days ago
Can you make a more complicated chain?
@orrin-manning · 24 days ago
Raise your camera so that you’re not looking down on us
@hoganwarlock1430 · 25 days ago
When bootstrapping, how do you decide how many trees to make?
@tomaszbaczkun8572 · 25 days ago
Thank you - that was a really clear explanation! 8 minutes and you made me understand the basics of Random Forest. Crazy.
@basarselvi4731 · 26 days ago
terrible English
@F3lp1s · 25 days ago
Stop being fussy.
@andrew.sandler · 26 days ago
Got to about 7 minutes 30 seconds in before I needed to learn some of your fancy symbols
@happyslug · 27 days ago
So clear. Thanks for explaining. Also the background music was super calming
@lucutes2936 · 28 days ago
thx
@さくら-z4y3k · 28 days ago
Thank you so much
@saqlainsajid1274 · 28 days ago
Man, you're really good at explaining things simply and visually. Love your work!
@kritikabhateja110 · 28 days ago
How did we create the 4 random trees? Like, how do we choose the root node and the leaf nodes? What were the criteria?
@sarynasser993 · 29 days ago
thank you
@eduarddez4416 · 29 days ago
Very good and clear explanation, thank you :D
@Niksonk · a month ago
Great!
@Echoooo-ex7zf · a month ago
It's such a great explanation! Thank you so much!
@zerosumgame9071 · a month ago
Excellent explanation. Thumbs up and subscribed! Thank you!
@giacomozuccolotto4503 · a month ago
Great video! I still have a question though: how did you apply the variance formula to get those starting variance values before applying the variance reduction formula? I do not understand how the number 9744 came up.
@maurosobreira8695 · a month ago
Wonderful!
@victormiene520 · a month ago
Very good explanation, thank you. On a side note, I wish we could use more descriptive notation, like P(R) for the probability of rain. It would make things much clearer.
@yurpipipchz75 · a month ago
yeah, wow! Really well done!
@LazzyLazzz · a month ago
Why red and green? 10% of the population can't see the difference between the small dots... next time maybe circles and squares.
@ts.nathan7786 · a month ago
The colours green and blue appear similar in the image. You could use some other method or colours (like red, green, yellow).
@alihadi-vv4yb · a month ago
Thank you. It was great.
@himanshuverma3984 · a month ago
Could not understand the variance reduction part. If we're choosing the split by variance reduction, then as per your explanation the 2nd set should have been chosen, but you selected the first set. Am I assuming something wrong here?
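For anyone puzzled by this step, here is how the split criterion is usually computed. The numbers below are invented for illustration (they are not the video's, and 9744 does not appear here): variance reduction is the parent's variance minus the size-weighted average of the child variances, and the split with the LARGER reduction is the one chosen.

```python
def variance(ys):
    # Population variance: mean squared deviation from the mean.
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys) / len(ys)

def variance_reduction(parent, left, right):
    # Parent variance minus the size-weighted child variances.
    n = len(parent)
    return (variance(parent)
            - (len(left) / n) * variance(left)
            - (len(right) / n) * variance(right))

parent = [10, 12, 35, 40, 11, 38]
split_a = ([10, 12, 11], [35, 40, 38])  # separates small from large values
split_b = ([10, 35, 11], [12, 40, 38])  # mixes them

ra = variance_reduction(parent, *split_a)
rb = variance_reduction(parent, *split_b)
print(ra > rb)  # True: the cleaner split reduces variance more
```

So a set with lower child variance is preferred; if the video chose the first set, it should be because that split's weighted child variances are smaller, i.e. its reduction is larger.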