Comments
@neoblackcyptron · 1 day ago
Excellent
@LẠCTRẦNHOÀNGGIA · 2 days ago
Your explanations are so clear; they bring back all the knowledge I had forgotten after my second-year statistics class. I'm doing my capstone project, and your videos help me a lot. I appreciate your contribution.
@MarwaneElMoufaoued · 3 days ago
Thank you bro, I really needed this video because I was struggling a little bit with probabilities ✨🙏
@must_be_good · 4 days ago
Man, using Manhattan distance really speeds up reaching the desired result.
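For anyone curious why that can be: the Manhattan distance (L1 norm of the difference) needs only absolute differences, while the Euclidean distance (L2 norm) needs squares and a square root. A minimal sketch on a toy pair of points, not taken from the video:

```python
def manhattan(a, b):
    # L1 norm of the difference: absolute coordinate differences, summed.
    return sum(abs(x - y) for x, y in zip(a, b))

def euclidean(a, b):
    # L2 norm of the difference: squared differences, summed, then a square root.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

p, q = (1, 2, 3), (4, 6, 3)
print(manhattan(p, q))  # 7
print(euclidean(p, q))  # 5.0
```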
@martusha1 · 5 days ago
great video man
@arcsaber1127 · 6 days ago
Please make a video on isolation forest
@savage1851 · 7 days ago
Loved how you said "people from the future" in the intro. That's a new kind of intro for me.
@giovanniberardi4134 · 8 days ago
You're an excellent teacher👍
@ismail3721 · 10 days ago
Awesome video! You explained the concept so much better than many university lectures. I would just like to make a quick point about the random feature selection step when bootstrapping. (IMHO) According to the most recent/popular papers on random forest algorithms, the most common and efficient approach appears to be randomly selecting feature subsets at each node while traversing each tree, rather than selecting them only once at the tree level. One explanation I found online is that "while sampling features at every node still allows the trees to see most variables (in different orders) and learn complex interactions, using a subsample for every tree greatly limits the amount of information that a single tree can learn. This means that trees grown in this fashion are going to be less deep, and with much higher bias, in particular for complex datasets. On the other hand, it is true that trees built this way will tend to be less correlated to each other, as they are often built on completely different subsets of features, but in most scenarios, this will not outweigh the increase in bias, therefore, giving a worse performance on most use cases." I really hope this was clear. Any comments are very welcome!
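To make the distinction in this comment concrete, here is a minimal sketch (a hypothetical helper, not the video's code) of where the candidate-feature subset is drawn in the two variants:

```python
import random

def node_feature_subsets(n_nodes, features, max_features, per_node=True, seed=0):
    """Return the candidate-feature subset each node of one tree gets to look at.

    per_node=True  -> a fresh random subset is drawn at EVERY node (the standard
                      random forest behaviour this comment describes)
    per_node=False -> one subset is drawn once and reused by every node in the tree
    """
    rng = random.Random(seed)
    if per_node:
        return [sorted(rng.sample(features, max_features)) for _ in range(n_nodes)]
    fixed = sorted(rng.sample(features, max_features))
    return [fixed for _ in range(n_nodes)]

features = list(range(10))  # 10 hypothetical feature indices
per_node = node_feature_subsets(7, features, max_features=3, per_node=True)
per_tree = node_feature_subsets(7, features, max_features=3, per_node=False)

# Per-node sampling lets a single tree eventually "see" most of the features...
print(len({f for s in per_node for f in s}))
# ...while tree-level sampling caps a single tree at max_features of them.
print(len({f for s in per_tree for f in s}))  # 3
```

This is why per-node sampling keeps individual trees expressive (lower bias) while still decorrelating them across the forest.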
@Symon_Musician · 10 days ago
Thanks!
@abhikpanda1581 · 11 days ago
Asking out of context: are you Bengali?
@dimasaldisallam5720 · 11 days ago
Why does x0 use a vertical line in the visualization while x1 uses a horizontal one? How would you visualize the axes if I have x1, x2, x3, x4?
@jenamartin6157 · 12 days ago
In a certain way, this video was less about Markov chains themselves and more about the underlying directed graphs. Using different language to describe the same things, the communicating classes are called “strongly connected components”, and you can form a “condensation graph” (which is a directed acyclic graph) by collapsing these communicating states.
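For readers who want to play with that correspondence: a minimal Kosaraju sketch (toy graph, not from the video) that recovers the communicating classes as strongly connected components; collapsing each class to a single node then gives the condensation DAG.

```python
from collections import defaultdict

def sccs(edges, nodes):
    """Kosaraju's algorithm: strongly connected components of a directed graph."""
    g, rg = defaultdict(list), defaultdict(list)
    for u, v in edges:
        g[u].append(v)   # forward graph
        rg[v].append(u)  # reversed graph

    seen, order = set(), []
    def dfs1(u):
        # First pass: record finish order on the forward graph.
        seen.add(u)
        for v in g[u]:
            if v not in seen:
                dfs1(v)
        order.append(u)
    for u in nodes:
        if u not in seen:
            dfs1(u)

    comp = {}
    def dfs2(u, root):
        # Second pass: everything reachable in the reversed graph is one SCC.
        comp[u] = root
        for v in rg[u]:
            if v not in comp:
                dfs2(v, root)
    for u in reversed(order):
        if u not in comp:
            dfs2(u, u)

    groups = defaultdict(set)
    for u, root in comp.items():
        groups[root].add(u)
    return sorted(map(frozenset, groups.values()), key=min)

# Toy Markov-chain transition graph: states 0,1 communicate; 2,3 communicate;
# probability mass can leak from {0,1} into {2,3} but never flows back.
edges = [(0, 1), (1, 0), (1, 2), (2, 3), (3, 2)]
print(sccs(edges, [0, 1, 2, 3]))  # [frozenset({0, 1}), frozenset({2, 3})]
```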
@rajarshimaity6838 · 13 days ago
Good old maths... Man, I miss it. 😍
@treya111 · 14 days ago
The demonstration of n getting bigger only shows cases where n is an odd number? What about even numbers, like L4 and L6?
@kevinwangglobal · 15 days ago
video is awesome!
@homakashefiamiri3749 · 15 days ago
It was wonderful.
@SergioLust · 17 days ago
sooo good
@Frog-c5y · 18 days ago
Is there a video on No U-Turn Sampler (NUTS)? Thanks
@AsafJerbi · 21 days ago
The best explanations and visualizations I've ever seen. Thank you for that!
@LouisKahnIII · 21 days ago
This is excellent info, well presented. Thank you.
@iantassin7611 · 22 days ago
Good video, but there seems to be a small error. In particular, you say we are assuming X1 and X2 are independent, but we do not actually make that assumption for Naive Bayes; we assume only conditional independence (conditioned on the class label), which does not imply general independence.
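A toy numeric check of that point (a hypothetical distribution, not from the video): below, X1 and X2 are independent given the class C, yet clearly dependent once C is marginalized out.

```python
from itertools import product

# Hypothetical joint P(c, x1, x2): class C is a fair coin; given C, features
# X1 and X2 are independent noisy copies of C (each equals C with probability 0.9).
def p(c, x1, x2):
    p_x1 = 0.9 if x1 == c else 0.1
    p_x2 = 0.9 if x2 == c else 0.1
    return 0.5 * p_x1 * p_x2  # conditional independence given C is built in

def marg(fix):
    # Marginal probability of the event described by `fix`, e.g. {"x1": 1}.
    total = 0.0
    for c, x1, x2 in product([0, 1], repeat=3):
        vals = {"c": c, "x1": x1, "x2": x2}
        if all(vals[k] == v for k, v in fix.items()):
            total += p(c, x1, x2)
    return total

p_both = marg({"x1": 1, "x2": 1})
p_prod = marg({"x1": 1}) * marg({"x2": 1})
print(round(p_both, 4), round(p_prod, 4))  # 0.41 0.25 -> marginally dependent
```

Since P(X1=1, X2=1) ≠ P(X1=1)·P(X2=1), the features are not independent in general, exactly as the comment says; Naive Bayes only needs the weaker conditional version.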
@lucutes2936 · 24 days ago
Can you make a more complicated chain?
@orrin-manning · 24 days ago
Raise your camera so that you’re not looking down on us
@hoganwarlock1430 · 25 days ago
When bootstrapping, how do you decide how many trees to make?
@tomaszbaczkun8572 · 25 days ago
Thank you - that was a really clear explanation! 8 minutes and you made me understand the basics of Random Forest. Crazy.
@basarselvi4731 · 26 days ago
terrible English
@F3lp1s · 25 days ago
Stop being so fussy.
@andrew.sandler · 26 days ago
Got to about 7 minutes 30 seconds in before I needed to learn some of your fancy symbols
@happyslug · 27 days ago
So clear. Thanks for explaining. Also the background music was super calming
@lucutes2936 · 28 days ago
thx
@さくら-z4y3k · 28 days ago
Thank you so much
@saqlainsajid1274 · 28 days ago
Man, you're really good at explaining things simply and visually. Love your work!
@kritikabhateja110 · 28 days ago
How did we create the 4 random trees? Like, how do we choose the root node and the leaf nodes? What were the criteria?
@sarynasser993 · 29 days ago
thank you
@eduarddez4416 · 29 days ago
Very good and clear explanation, thank you :D
@Niksonk · a month ago
Great!
@Echoooo-ex7zf · a month ago
It's such a great explanation! Thank you so much!
@zerosumgame9071 · a month ago
Excellent explanation. Thumbs up and subscribed! Thank you!
@giacomozuccolotto4503 · a month ago
Great video! I still have a question though: how did you apply the variance formula to get those starting variance values before applying the variance reduction formula? I do not understand how the number 9744 came up.
@maurosobreira8695 · a month ago
Wonderful!
@victormiene520 · a month ago
Very good explanation, thank you. On a side note, I wish we could use more descriptive notation, like P(R) for the probability of rain. It would make things much clearer.
@yurpipipchz75 · a month ago
yeah, wow! Really well done!
@LazzyLazzz · a month ago
Why red and green? 10% of the population does not see the difference between the small dots... next time maybe circles and squares.
@ts.nathan7786 · a month ago
The colours green and blue appear similar in the image. You may want to use some other method or colours (like red, green, yellow).
@alihadi-vv4yb · a month ago
Thank you. It was great.
@himanshuverma3984 · a month ago
Could not understand the variance reduction part. If we're talking about variance reduction, then as per your explanation the 2nd set should have been chosen, but you selected the first set. Am I assuming something wrong here?
@NeotronHorxen · a month ago
I didn't understand the statement at 7:52.
@PedroHenrique-cy8up · a month ago
Me neither
@KellofHeadaches · a month ago
looks like you're the blueberry
@rxzin7201 · a month ago
Thanks, got saved for Deep Learning exam
@madsvindknudsen8428 · a month ago
I NEVER eat pizza the day after I ate a hot dog!