
Expectation Maximization: how it works

  278,621 views

Victor Lavrenko

A day ago

Full lecture: bit.ly/EM-alg
We run through a couple of iterations of the EM algorithm for a mixture model with two univariate Gaussians. We initialise the Gaussian means and variances with random values, then compute the posterior probabilities for each data point, and use the posteriors to re-estimate the means and variances.
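The iterations described above can be sketched in a few lines of NumPy. This is a minimal editorial illustration, not the lecturer's code; the function name, the `mu_init` option, and the default iteration count are my own choices:

```python
import numpy as np

def em_two_gaussians(x, n_iter=50, mu_init=None, seed=0):
    """EM for a mixture of two univariate Gaussians."""
    rng = np.random.default_rng(seed)
    # Initialise means with two random data points (or caller-supplied values),
    # variances with the overall variance, and mixing weights equally.
    mu = (np.asarray(mu_init, dtype=float) if mu_init is not None
          else rng.choice(x, size=2, replace=False))
    var = np.array([x.var(), x.var()])
    w = np.array([0.5, 0.5])  # mixing weights P(a), P(b)
    for _ in range(n_iter):
        # E-step: posterior probability of each component for every point.
        lik = (np.exp(-(x - mu[:, None]) ** 2 / (2 * var[:, None]))
               / np.sqrt(2 * np.pi * var[:, None]))   # shape (2, n)
        post = w[:, None] * lik
        post /= post.sum(axis=0)                      # normalise per point
        # M-step: posterior-weighted re-estimates of all parameters.
        nk = post.sum(axis=1)                         # effective counts
        mu = (post * x).sum(axis=1) / nk
        var = (post * (x - mu[:, None]) ** 2).sum(axis=1) / nk
        w = nk / x.size
    return mu, var, w
```

On data drawn from two well-separated Gaussians this recovers means close to the true ones; a poor initialisation can still converge to a bad local optimum.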

Comments: 165
@Ewerlopes 7 years ago
Man, this guy is unbelievable! I wish I had professors like him! Great explanation, thanks!
@salmankhalifa2867 9 years ago
God, this is so interesting and now makes sense. I wish you were my data-mining lecturer! You should look into teaming up with Khan Academy!
@MrReierz 7 years ago
You have no idea how good this presentation was. I've searched the web for hours. Nobody could explain this, except your video! Thank you!
@Murphyalex 10 years ago
What an excellent explanation! As soon as I pulled a face trying to figure out what the new mean estimation was doing, you stopped and explained it and realised how unusual it might look at first. So many teachers lack this ability to go beyond what they know and imagine how certain formulas/concepts might need an extra minute or two of explanation for people who haven't seen it before. Subscribed!
@vlavrenko 9 years ago
Thank you! Very happy you find my videos helpful.
@NickLilovich 4 months ago
This video has (by far) the highest knowledge/time ratio of any video on this topic on YouTube. Clear explanation of the math and the iterative method, along with an analogy to the simpler algorithm (k-means). Thanks Victor!
@xinpang9611 7 years ago
This explanation is fantastic! I have been studying machine learning courses in my master's but always found it difficult to understand. Now I finally understand EM. Thank you Prof. Lavrenko.
@archismanghosh7283 2 months ago
You just cleared every doubt on this topic. It's 10 days before my exam, and watching your video cleared everything up.
@xisnuutube 7 years ago
Very nice explanation. But we need to see where you're pointing.
@sanjaykrish8719 7 years ago
One lecture like this can uncomplicate things for so many people around the world. After understanding the 1D case, it is so much easier to grasp higher dimensions.
@roshinishake9400 A year ago
I want professors like you in my college. You're so great sir, thank you so much ❤️ for your great explanation ☺️. Your videos make machine learning easy.
@drianhoward 8 years ago
Dear Victor, your machine learning tutorial videos are really great!
@joelharsten2408 9 years ago
That was explained very well! It was nice that you often referred and compared to K-means. That made it easy to understand this algorithm! Thank you!
@dhineshkumarr3182 9 years ago
You really mastered the technique of explaining big things to a beginner. It was very helpful. I am definitely going to follow your lectures in the future. Thank you so much for the knowledge.
@erfandejband8945 2 years ago
Such a simple explanation and at the same time so comprehensive. This video really helped me understand this algorithm.
@azuriste8856 8 months ago
Great Explanation Sir. I don't know why it motivated me to appreciate and comment on the video.
@InvictusForever A month ago
So helpful. Really lucky to have found this goldmine!!
@ashokharnal 7 years ago
So easily explained with superb clarity.
@JD-rx8vq 10 months ago
Wow, you explain very well, thank you! I was having a hard time understanding my professor's explanation in our class.
@rayanaay5905 A year ago
Incredible. I read a lot of articles and papers on EM, but this video gave me everything I need to know!
@deepakjoshi7730 5 months ago
Splendid. The example portrays the algorithm stepwise very well!
@orresearch007 9 years ago
Lucid explanation of EM. Mr Lavrenko is a superb teacher, explaining concepts in understandable language and helping learners make sense of the equations. Keep up the good work.
@theOceanMoon 7 years ago
The annotations really helped me understand better. Thanks man.
@tchen8124 7 years ago
You are the best teacher I ever had. Thank you so much.
@westsidde 9 years ago
I have tried to find a good explanation of GMMs for beginners (me) new to this topic, and yours was the best I found. Very clear and good explanation, I'm very thankful sir! Keep up the good work!
@Luckymator 7 years ago
I can't thank you enough! You explain it so well that I can now understand what the formulas I have to learn in the script mean.
@vitid1 8 years ago
Best explanation I've ever seen.
@ghilesdjebara8066 A year ago
OMG, this is such a simple and intuitive explanation. THANKS!
@rishabhchopra6418 6 years ago
Amazing! I was stuck with the Udacity course! Now it's all clear! :)
@lima073 2 years ago
Best explanation I've seen on this subject. Thank you very much, your videos are really awesome!
@mohamadkoohi-moghadam7657 8 years ago
Awesome!! Obviously you are a teacher who knows the art of teaching! Thank you!
@phuongdinh5836 7 years ago
I wish you were my professor! I keep having to go to your channel after the expensive in-class lectures.
@MrYoyo2345 A year ago
God bless you sir, and your excellent explanations. You are a life saver
@rohanshah9593 A year ago
Amazing explanation! I was struggling to understand this concept. Thank you so much!
@SyedArefinulHaque 7 years ago
This is a perfect practical example; it helped clear up my understanding of the EM algorithm!!
@SandraMenesesBarroso 9 years ago
You are very good at teaching; you make it look easy.
@muhammadfaizanasghar77 2 years ago
Brilliantly done, hats off.
@playerseazay 7 years ago
You explain really clearly. Thanks for saving me from struggling with the EM algorithm.
@luisfernandocamarillo9071 7 years ago
Congratulations Sir, a masterful EM lecture.
@sairaamvenkatraman5998 7 years ago
You explain this so well! Awesome!!
@ShahzadHassanBangash 2 years ago
Beautifully explained. Keep it up professor
@user-sl4sj2cs6w 8 years ago
May I ask what the value of P(b) is in the E-step? If I have two clusters, does that mean I can use 0.5 as the initial P(b)?
@fengjeremy7878 A year ago
Students that have such a great teacher make me jealous.
@pudinda 7 years ago
Great explanation! The animations and the equations on the side, coming at the right time, really helped :)
@uniqguy111 A year ago
This is gold standard
@Artaxerxes. 2 years ago
What a brilliant explanation.
@gregoire33 7 years ago
Hello, thanks for this video. At the first iteration, what value of p(b) do you use?
@hasnainvohra7754 10 years ago
Excellent explanation. Thank you.
@vlavrenko 10 years ago
Thanks!
@m07hcn62 10 months ago
This is an awesome explanation. Thanks!
@sagarmeisheri2361 10 years ago
Thank you very much. Excellent Example !!
@vlavrenko 10 years ago
Thanks!
@andriananicolaou4087 9 years ago
Thank you very much for the presentation. I have one stupid question though: during the EM algorithm, when we apply Bayes' rule to calculate the posterior p(b|x), how do we know the prior probabilities p(b) and p(a) (which are supposed to be equal)?
@raymendoza8157 6 years ago
I had the same question. Fishing through the comments, it seems some have used 0.5 and 0.5 in this two-class example. So I guess our null assumption would be that all classes are equally likely. Not sure what happens after we go past the initial step. Did you figure this out? Hope this helps!
@jamesschoi87 6 years ago
So what are P(a) and P(b) initially?
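For the recurring question in this thread about the priors P(a) and P(b): a common convention (an assumption here; the video does not spell it out) is to start with equal priors and then re-estimate each prior in every M-step as the average posterior responsibility of that component. A minimal sketch with made-up posterior values:

```python
import numpy as np

# Start with equal priors P(a) = P(b) = 0.5 for two clusters, then update
# each prior as the average posterior responsibility of its component.
posteriors_b = np.array([0.9, 0.8, 0.1, 0.2])  # hypothetical P(b | x_i) values
p_b = posteriors_b.mean()                      # updated prior P(b)
p_a = 1.0 - p_b                                # the two priors sum to one
```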
@schummelhase6055 9 years ago
Thanks from Germany! Very, very helpful.
@MrMopuri 8 years ago
Great work, Mr. Lavrenko!
@muhmazabd 9 years ago
Thanks a lot, very well explained.
@vlavrenko 9 years ago
Thank you!
@TheDionator 8 years ago
Best explanation I've seen on the topic! Please make more!!
@wadewang574 2 years ago
I have a question: at 4:16, the formula for calculating μ_b divides by (b_1 + b_2 + ... + b_n), but my first idea was to divide by the number of data points, i.e. n. Why not divide by n?
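For the question above about dividing by (b_1 + ... + b_n): the posteriors b_i act as fractional counts, so their sum is the effective number of points in cluster b, not n. A small sketch with made-up numbers:

```python
import numpy as np

# Why μ_b divides by (b_1 + ... + b_n) rather than n: the sum of the
# posteriors is the effective number of points in cluster b.
x = np.array([1.0, 2.0, 10.0])
b = np.array([1.0, 1.0, 0.0])      # the third point belongs to the other cluster
mu_b = (b * x).sum() / b.sum()     # weighted mean over cluster b only
mu_wrong = (b * x).sum() / len(x)  # dividing by n drags the mean toward zero
```

Here mu_b comes out as 1.5 (the mean of the two points that are effectively in b), while dividing by n would give 1.0 even though the third point contributes nothing to the cluster.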
@JiangXiang 10 years ago
Excellent! Thanks for the effort to make it easy to understand!
@vlavrenko 10 years ago
Thanks! Glad to know these videos are useful.
@DM-py7pj 2 years ago
Probability and likelihood are used interchangeably here. When looking at P(xi | b), given the notation I assume the calculation is indeed a probability. I enjoyed this. I would like to have seen an explanation of stopping at a threshold for convergence.
@mikelmenaba A year ago
Great explanation mate, thanks!
@raymendoza8157 6 years ago
Got a question for anyone who can answer! I follow most of the calculations but I'm not sure how we get p(b) in the second equation (application of Bayes' Theorem). Or, for that matter how we calculate p(a) for the denominator. We can't estimate them from prior class membership samples since we haven't labeled anything. Any idea? Am I misunderstanding it?
@ismailatadinc816 2 years ago
Amazing explanation!
@babyroo555 A year ago
Incredible explanation!
@UC46Vf8SyesF0MGwcZDowszA 8 years ago
Excellent! Clean notation, very clear explanation. One question, are the priors the mean of the Gaussians?
@TheChavakula 6 years ago
Wow! the best explanation ever!
@harshtrivedi6605 8 years ago
+1. Thanks a lot, I finally understood EM, and now it makes complete sense :) Would you mind citing a resource which proves that the initial choice of Gaussians doesn't make any difference? And a proof that these Gaussians, as they keep shifting iteration by iteration, always converge? I guess it would be the same as k-means, wouldn't it? Thanks.
@pperez1224 A year ago
Excellent, the only one that I understood :-) Question: why does it converge? Does it have something to do with the central limit theorem?
@mahoneyj2 7 years ago
Great explanation and example! Many thanks!
@stochasticneuron 7 years ago
At [7:34], instead of mu_1 to mu_n, it should be mu_a and mu_b in the variance calculation, if I am not wrong?
@arnabsen8633 7 years ago
Yes, and he mentioned the mistake in the video.
@sreichli 7 years ago
Thank you for creating this video -- very helpful!
@_Junkers 10 years ago
Thank you for this, very valuable.
@vlavrenko 10 years ago
Thank you!
@Luca-yy4zh 2 years ago
Very good explanation. Thanks a lot!
@HongjoKim 9 years ago
Thank you very much. This video helped me a lot.
@ahmedibrahim-lw1ut A year ago
Great explanation. I have a question though, about the values of x_i: why do we need to multiply those values by the posterior probability? What do the values of x really represent? If they are just points on the line, why do we need to multiply them in the equation for re-estimating the means?
@brynkimura6738 7 years ago
Thank you so much for posting this explanation!
@DesmondCaulley 7 years ago
Thanks man. Really really great explanations.
@virgenalosveinte5915 9 months ago
Amazing, thank you!
@ventezcamilo 6 years ago
Thanks! It is great to finally understand.
@ajayram198 9 years ago
At 2:04, why is the posterior calculated? And what is the difference between p(x|a), p(x|b) and p(a|x), p(b|x)? Which of these corresponds to the probability that the data point belongs to cluster a/b?
@cumaliturkmenoglu2240 7 years ago
Wonderful lecture. But how do we determine the number of Gaussians (clusters) behind the data?
@NuclearSpinach 2 years ago
I'm trying to fully connect the theoretical pieces that justify the weighted average of mu_a, mu_b, sigma_a, and sigma_b estimates. I.e., where is the E step and the M step and how do they connect to what's written? *Great* explanation. Thank you for uploading.
@RyeCA 27 days ago
Excellent, thank you!
@charlibravo2578 2 years ago
Thanks a lot for this
@welovemusic9056 8 years ago
Thank you very much. Well explained!
@Dominoanty 7 years ago
Great explanation! Thank you!
@G1aVoL 7 years ago
Thank you for the amazing video.
@shaimaamohamed8797 8 years ago
Wonderful explanation! Thanks a lot.
@oTwOrDsNe 7 years ago
Thank you for making this so clear and intuitive! Love your lectures
@sameerjain6086 10 years ago
When you refer to a point saying 'that point there', I have no visual indication on the screen.
@vlavrenko 10 years ago
Added the annotations, hope this helps.
@thestuarts3815 6 years ago
Hi Victor, thank you a lot for these interesting videos. Do you post your slides anywhere? I'd love to keep a copy of everything I watch here.
@tonyngo9400 8 years ago
Great explanation, thank you!
@maryamomrani155 9 years ago
Thanks for such a good explanation!
@vlavrenko 9 years ago
Thank you!
@CHANTI8947 7 years ago
The video states that we are aware of the data points coming from 2 Gaussians. GMM being an unsupervised model, would we have that kind of information (the number of Gaussians) beforehand?
@aakankshachoudhary8532 7 years ago
Thank you so much...This helped me a lot!!! Thanks a lot, again!
@Chr0nalis 8 years ago
It sounded as if you said: "we will discover the Gaussians automagically" :)
@hoovie3000 6 years ago
Great explanation!!
@dionwang 8 years ago
Hi, when you calculate the variance of a, is the numerator a1(x1-μa)² + ... + an(xn-μa)²?
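For the question above: the variance update follows the same weighted-average pattern as the mean, with the posterior-weighted sum of squared deviations in the numerator and the sum of the weights in the denominator. A small sketch with hypothetical posteriors:

```python
import numpy as np

# Posterior-weighted variance for cluster a, mirroring the weighted mean:
# numerator a_1(x_1-mu_a)^2 + ... + a_n(x_n-mu_a)^2, denominator a_1 + ... + a_n.
x = np.array([1.0, 2.0, 3.0])
a = np.array([0.5, 0.5, 0.0])                  # hypothetical P(a | x_i) values
mu_a = (a * x).sum() / a.sum()                 # posterior-weighted mean
var_a = (a * (x - mu_a) ** 2).sum() / a.sum()  # posterior-weighted variance
```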
@conradsnowman A year ago
I can't help but notice the middle dotted line looks like a logistic regression curve. I should know this... but is there any relation?
@TankNSSpank 9 years ago
You are the best!
@vlavrenko 9 years ago
Thank you!
@lorrainewang9929 10 years ago
Such a nice video! Thanks a lot!
@vlavrenko 9 years ago
Thank you for the kind words. Good to know you find it useful.
@8147333930 2 years ago
Thank you so much!
@damianoazzalini8573 10 years ago
Very clear video. Thank you.
@vlavrenko 10 years ago
Thanks!