Nando's lectures are certainly some of the best on the web. Mastery is step one. Communicating that mastery is a real gift!
@htetnaing007 2 years ago
Don't stop sharing this knowledge; it is vital to the progress of humankind!
@blaznsmasher 6 years ago
Amazing! I've watched a couple of videos on HMMs before this, and this is by far the clearest and easiest to understand.
@qorod123 7 years ago
Having teachers like Prof. Nando makes me fall in love with science. Thank you so much, Professor.
@jmrjmr8254 10 years ago
Great! Now this paper I'm reading finally starts to make sense! The most helpful video on this topic!
@azkasalsabila5328 7 years ago
The best lectures I have ever watched on YouTube!!! Great professor. The explanation is easy to follow. Thank you.
@mehr1methanol 9 years ago
Very, very helpful!! Unfortunately, by the time I joined UBC you had already left for Oxford. But I'm so glad you have the lectures here.
@OmarCostillaReyes 11 years ago
Great presentation, Nando. You achieved teaching excellence in this lecture: you made your presentation interesting, funny, and knowledgeable. Congratulations!
@wackyfeet 7 years ago
I am so glad someone told him to fix the colour on the screen. I was losing it hoping he would fix the issue.
@23karthikb 7 years ago
Fantastic explanation, Nando. Great lecture! Thank you!
@tzu-minghuang7100 8 years ago
Great video for understanding HMMs, worth every minute.
@ThuleTheKing 11 years ago
It is by far the best presentation of HMMs out there. However, I miss an example going from x0 to x2, so that both cases (initial and t-1) are illustrated. It would also be nice if you uploaded the assignments and answers.
@nargeschinichian6286 7 years ago
He is an amazing teacher! I suggest you watch his class!
@ilnurgazizov2959 4 years ago
Excellent! A great and clear explanation of HMM!
@fisherh9111 5 years ago
This guy has a great sense of humour.
@noorsyathirahmohdisa2720 7 years ago
Best explanation of all. Thank you, you helped my research on speech recognition.
@user-eh5wo8re3d 8 years ago
Really nice lecture. Very engaging and informative as well.
@MrA0989741818 9 years ago
Very good lecture!!! Thanks so much for saving me a large amount of time!
@hsinyang1796 4 years ago
I'm a student at UBC; we need you back teaching this course :
@asharigamage7486 6 years ago
Very clear explanation and the best examples, from GoT... Thank you so much :-D
@_dhruvawasthi 3 years ago
At 43:22, why is it outside the Markov blanket?
@DrINTJ 9 years ago
Most lectures seem to go on at length, repeating the obvious parts ad infinitum, and then jump over the important bits.
@pandalover555 7 years ago
"You were so happy to see those dragons" LOL this guy is hilarious
@riyasamanta3236 6 years ago
Simple, easy to understand, real-world problems. Thank you for this video. Can you upload more about the applications and extensions of HMMs?
@PoyanNabati 10 years ago
This is fantastic, thank you Nando!
@ar_rahman_90 7 years ago
Thank you so much for these lectures!
@qureshlokhandwala3866 10 years ago
Amazing, intuitive explanation. Thanks!
@doggartwuttus1082 11 years ago
Thank you! This was really engaging and helpful.
@acltm 10 years ago
Hi, could you also explain how to estimate the transition matrix?
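(In case it helps, a minimal sketch of one standard approach, assuming the state sequences are fully observed: count the transitions and normalize each row, which is the maximum-likelihood estimate. With hidden states you would run EM/Baum-Welch instead. The sequence and numbers below are invented for illustration.)

```python
import numpy as np

# Maximum-likelihood estimate of the transition matrix from an observed
# state sequence: count transitions, then normalize each row.
states = [0, 0, 1, 1, 0, 1, 1, 1, 0]   # example labelled state sequence (invented)
n_states = 2

counts = np.zeros((n_states, n_states))
for prev, nxt in zip(states[:-1], states[1:]):
    counts[prev, nxt] += 1

transition = counts / counts.sum(axis=1, keepdims=True)
print(transition)   # row i holds the estimated p(x_t = j | x_{t-1} = i)
```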
@arnabpaul6996 5 years ago
Me before this video: X(0) = Sad, Y(0) = Crying.
Me after this video: X(1) = Happy, Y(1) = Watching GoT.
Me after watching GoT: X(2) = Sad, Y(2) = Crying, because the last season sucks.
@ivansorokin8054 10 years ago
If node x0 does not have a y0 in the graph, then we assume P(x0|y0) = P(x0) when predicting P(x1|y0), or we don't count the transition from x0 to x1. I think the graph should have a node y0, so that we can use Bayes' rule to compute P(x0|y0) as at the beginning of the lecture. Anyway, thanks for sharing the lectures.
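(For anyone following this reasoning, here is a minimal NumPy sketch of the prediction-then-update step being described, for a two-state happy/sad chain. The matrices and all numbers are invented placeholders, not the lecture's actual tables.)

```python
import numpy as np

# Two-state happy/sad chain; all numbers are invented placeholders.
prior_x0 = np.array([0.5, 0.5])          # p(x0), used directly since no y0 is observed
A = np.array([[0.8, 0.2],                # p(x1 | x0): row = x0, column = x1
              [0.3, 0.7]])
B = np.array([[0.1, 0.9],                # p(y | x): row = x, columns = [crying, not crying]
              [0.6, 0.4]])

# Prediction: p(x1) = sum_x0 p(x1 | x0) p(x0), exactly the step discussed above.
pred_x1 = A.T @ prior_x0

# Update (Bayes' rule) once y1 is observed, e.g. y1 = crying (column 0).
unnorm = B[:, 0] * pred_x1
posterior_x1 = unnorm / unnorm.sum()     # p(x1 | y1)
print(pred_x1, posterior_x1)
```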
@amizan8653 11 years ago
Thank you very much for posting this!
@phillipblunt 6 years ago
Really fantastic lecture, thanks a lot!
@saijaswanth5085 2 years ago
Can I know the reference book for the above lecture?
@yuvrajsingh-wn3up 10 years ago
If the events W, S, C, and F are not mutually exclusive, what changes do we need to make to represent the HMM?
@IndrianAmalia 9 years ago
Thanks for the amazing lecture! It helps me a lot :)
@ghufranghuzlan4404 7 years ago
OMG, the best explanation ever. Very helpful, thank you so much.
@Michael-kt3tf 4 years ago
Just wondering, since we only care about the posterior: why does the forward algorithm compute the joint distribution? What is the point of that?
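(A small sketch that may clarify this: the forward recursion carries the joint alpha_t(x) = p(x_t, y_1..y_t); normalizing it gives the filtering posterior, and the normalizer itself is the likelihood of the observations, which the joint provides for free. The two-state matrices below are invented placeholders, not the lecture's numbers.)

```python
import numpy as np

# alpha_t(x) = p(x_t, y_1..y_t) is the joint that the forward algorithm carries.
# Normalizing alpha gives the posterior p(x_t | y_1..y_t); the normalizer
# sum_x alpha_t(x) is the evidence p(y_1..y_t).
pi = np.array([0.5, 0.5])                # p(x1); invented two-state placeholders
A = np.array([[0.8, 0.2], [0.3, 0.7]])   # p(x_t | x_{t-1})
B = np.array([[0.1, 0.9], [0.6, 0.4]])   # p(y_t | x_t), one column per observation symbol

def forward(obs):
    alpha = pi * B[:, obs[0]]            # alpha_1(x) = p(x_1) p(y_1 | x_1)
    for y in obs[1:]:
        alpha = (A.T @ alpha) * B[:, y]  # predict, then weight by the likelihood of y
    return alpha, alpha / alpha.sum()    # joint and filtering posterior

print(forward([0, 1, 0]))
```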
@AnekwongYoddumnern 8 years ago
Dear sir, if I use a PIR sensor with a Markov chain, how many states should I set?
@noorsyathirahmohdisa2720 7 years ago
Where can I find his next video on HMMs?
@ryuzakace 4 years ago
Were you able to find it? It seems the HMM part is missing from this lecture; there is a slide, though, which is not covered in the lecture.
@allisonzhang6527 9 years ago
Awesome! Thanks, Nando!
@youssefdirani 4 years ago
I didn't know about the smoothing assignment... What was it?
@aashishraina2831 8 years ago
I loved the material. Thanks a lot.
@shashanksagarjha2807 5 years ago
Someone please let me know: can HMMs be used for anomaly detection? If yes, do they work better than techniques such as SMOTEENN and weighted classes?
@yuezhao8657 5 years ago
I don't feel that either HMMs or SMOTE are major anomaly detection techniques. The more common approaches are LOF, Isolation Forest, OCSVM, ABOD, LOCI, and so on.
@shashanksagarjha2807 5 years ago
@yuezhao8657 As far as I know, LOF and Isolation Forest work better for unsupervised learning, but techniques such as weighted classes or SMOTEENN work better when we have labels. How much accuracy can we get with an HMM?
@ddarhe 7 years ago
At the beginning of the lecture, shouldn't the columns of the table add up to 1 instead of the rows? P(y|x) + P(y|~x) = 1, right?
@gggrow 7 years ago
No... P(y|x) + P(~y|x) = 1, whereas P(y|x) + P(y|~x) means "the probability of y given x plus the probability of y given not-x". That could equal more than 1 if y is likely in both cases, or less than 1 if y is unlikely in both cases.
@gggrow 7 years ago
So... P(sad|crying) + P(sad|not crying) doesn't have to equal one, because maybe I'm not likely to be sad either way, but P(sad|crying) + P(not sad|crying) = 1, because that exhausts the list of possible states; I have to be either sad or not!
@charliean9237 7 years ago
That's what I thought too. Summing rows to 1 means this puppy always does one of the 4 things, and the puppy never eats. However, summing cols to 1 means the puppy is either happy or sad, which makes more sense.
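(A tiny check of the point made in this thread. It assumes the table stores p(y|x) with one row per hidden state and one column per observed behaviour; the slide's orientation may be flipped, and the numbers are invented. The math fact itself is just that, for a fixed state x, the p(y|x) values over all behaviours y sum to 1.)

```python
import numpy as np

# Toy table p(y | x): one row per hidden state (happy, sad), one column per
# observed behaviour. All numbers are invented for illustration.
emission = np.array([
    [0.5, 0.3, 0.1, 0.1],   # p(y | happy) over the four behaviours
    [0.1, 0.2, 0.4, 0.3],   # p(y | sad)
])

# For a fixed state, the probabilities over all behaviours exhaust the options,
# so they sum to 1 (the P(y|x) + P(~y|x) + ... = 1 direction).
print(emission.sum(axis=1))   # -> [1. 1.]

# Summing over states for a fixed behaviour is not constrained to 1.
print(emission.sum(axis=0))
```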
@upinsanity 8 years ago
absolute masterpiece!
@gopalnarayanan4217 8 years ago
Very good explanation.
@shineminem 11 years ago
OMG this is so helpful!
@ralphsaymoremakuyana7126 9 years ago
Great, well explained!!
@kunwaravikalnathmathur2003 6 years ago
This video has Bayes' theorem applied in full form.
@RelatedGiraffe 10 years ago
6:37 We are all gonna be there one day? Speak for yourself! :P