Thanks Andrew and Stanford for putting this out.
K-means clustering: 00:46
Density estimation (prelude to mixture of Gaussians): 16:20
Mixture of Gaussians: 21:16
EM Algorithm: 23:41
@TheGarnt 1 year ago
I've watched a few videos explaining EM and this is by far the best one. Most of the others forget to explain some of the concepts or terms that are part of the algorithm, but this gives a clear explanation for everything.
@stanfordonline 1 year ago
Great feedback, thanks for watching!
@kisome2423 7 months ago
Excellent teacher! I've studied this chapter three times from different channels, and this teacher helped me understand it more deeply. Thank you!
@Avichinky2725 1 year ago
To choose the right number of clusters, we can use the elbow method. 13:50
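The elbow method mentioned above can be sketched in a few lines of NumPy: run K-means for increasing k, record the within-cluster sum of squared errors (inertia), and look for the k where the drop flattens out. This is a minimal illustrative sketch, not from the lecture; the toy three-blob data and the `kmeans` helper are my own assumptions.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's algorithm; returns centroids and within-cluster SSE."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # assignment step: nearest centroid for every point
        d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        # update step: recompute centroids (skip empty clusters)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(0)
    sse = ((X - centroids[labels]) ** 2).sum()
    return centroids, sse

# toy data: three well-separated 2-D blobs
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.3, size=(50, 2))
               for c in [(0, 0), (5, 5), (10, 0)]])

# inertia for each k; the "elbow" is where the curve flattens (here k = 3)
sses = [kmeans(X, k)[1] for k in range(1, 7)]
for k, s in zip(range(1, 7), sses):
    print(k, round(s, 1))
```

Plotting `sses` against k makes the bend visually obvious; past the true number of clusters, each extra centroid only splits an existing blob and buys little SSE reduction.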
@krishnendu-jana 10 months ago
What's not in this lecture 😮: ML, Stats, Information and Coding Theory 🔥🔥
@user-wr4yl7tx3w 1 year ago
can someone please buy Andrew some decent pens, with darker ink.
@Neiltxu 1 year ago
Thank you so much for helping me learn and really understand. You're helping me so much with AI.
@rajkane989 2 years ago
uploaded 2 years ago, 80k views, and no comments?
@yuxiang3147 2 years ago
The views are from bots LOL
@blaubeerbrot 1 year ago
@@yuxiang3147 😂😂😂😂😂😂😂😂 under a Stanford lecture 👍😂🤡
@mohammadnafisalam4292 11 months ago
@@yuxiang3147 The bots are collecting data to train themselves. Should we compete with them?
@otheanh5306 1 year ago
Awesome
@AHMEDRAZA-nh4xm 1 year ago
Awesome 🤩
@abhay_cs 2 years ago
You should have used different indices for x and z. x^(i) and z^(k) perhaps...
@EjazAhmed-pf5tz 1 year ago
Thank you so much, prof.
@xiaowei8546 11 months ago
Why is Q written with a subscript i? Isn't the data i.i.d.?
@Kokso. 11 months ago
32:30 How is p(x^(i)|z^(i)=j) calculated from the Gaussian? Isn't the probability of a single "point" equal to 0? What am I missing here?
@ahamuffin4747 11 months ago
It's the likelihood. You can take a look at Bayes' theorem.
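To expand on that answer: p(x^(i)|z^(i)=j) is the Gaussian *density* evaluated at the point, a finite nonzero number, not the (measure-zero) probability of observing exactly that point. In the E-step it only ever appears as a likelihood weight inside Bayes' rule, where the densities are normalized against each other. A minimal 1-D NumPy sketch with made-up toy parameters (two hypothetical components, means 0 and 4):

```python
import numpy as np

def gaussian_pdf(x, mu, var):
    """1-D Gaussian density N(x; mu, var) -- a density value, not a probability."""
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

# hypothetical mixture parameters (means, variances, mixing weights phi)
mus = np.array([0.0, 4.0])
vars_ = np.array([1.0, 1.0])
phis = np.array([0.5, 0.5])

x = 1.0
# E-step responsibility via Bayes' rule:
#   p(z=j | x) = phi_j * N(x; mu_j, var_j) / sum_k phi_k * N(x; mu_k, var_k)
# The density values act as likelihoods; the "probability of a single point"
# never needs to be evaluated because the normalization cancels the dx.
lik = phis * gaussian_pdf(x, mus, vars_)
resp = lik / lik.sum()
print(resp)
```

Since x = 1 lies much closer to the component at mean 0, its responsibility dominates; the responsibilities always sum to 1 by construction.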