Thank you so much. Following the previous lectures (...08, 09, 10) up to 11 cleared up all my doubts and confusion. The best series I have been waiting for!
@djp3 · 4 years ago
You are most welcome
@mrlh69 · 4 years ago
I'm from Peru. I'm writing a thesis about speech recognition using HMM. This was really helpful, thank you!!
@taggebagge · 4 years ago
Best of luck with your thesis and hope that you stay safe from the corona man.
@bengonoobiang6633 · 2 years ago
It's just impressive. I'm a PhD student working on a speech recognition system, and understanding the algorithmic foundations and the modeling of the underlying problem is not very easy with neural methods. This HMM course has helped me understand sequence-processing problems and their solutions in detail.
@djp3 · 2 years ago
Thanks! It's a cool algorithm as long as your state space isn't really huge.
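For a sense of scale (the standard complexity result for these recursions, not a number quoted in the video): the forward, backward, and Viterbi passes each consider every pair of states at every time step, so with N states and T observations the cost grows roughly as O(N^2 T), which is why a very large state space becomes painful.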
@codediporpal · 3 years ago
This is great stuff. A bit repetitive, but that's a good thing for a complicated subject like this. The "motivations" videos were gold, since too often subjects are taught without teaching why they're important.
@goonerrn · 3 years ago
He looks like the child of Steve Carell and Ryan Reynolds... and great video!
@djp3 · 3 years ago
It could be worse
@wandering_star365 · 5 months ago
Maybe it's because I'm watching this on my phone lol but I totally see it too! I was like woah, who's Reynolds trying to fool with those glasses🤭
@RandomTrash-ey5di · 3 days ago
@@djp3 I never thought I'd be learning about HMMs from Deadpool
@theuser969 · 1 year ago
Thanks for providing this well-explained video on this topic.
@djp3 · 1 year ago
Glad I could help!
@amitotc · 3 years ago
Very well explained, really enjoyed watching the series of videos on HMMs. Thank you so much :)
@djibrildassebe5055 · 4 years ago
Very nice course series, thank you Prof. Patterson
@okh6201 · 3 years ago
Thank you for the great explanation, Sir, but I have one question. If some states' emission probabilities include a probability of staying silent, will the Viterbi algorithm still work?
@djp3 · 3 years ago
It can, but you need to add a placeholder emission symbol of "silent" so that the emission probabilities still add up to 1.
@okh6201 · 3 years ago
@@djp3 Thanks for the reply, Sir. So when we calculate alpha or beta, do we always have to assume there is a silent emission, and if so, how do we factor it into the observation sequence?
@djp3 · 3 years ago
@@okh6201 That's a modeling question. It depends on the phenomenon you are trying to represent. If your observation sequence has "silent" observations, then explicitly treating silence as a fake observation symbol is one way to manage it.
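To make that concrete, here is a minimal sketch of the bookkeeping (my own made-up numbers and Python, not material from the lecture): take the original emission matrix, scale each row by the probability of actually emitting, and append a "silent" column so every row still sums to 1.

import numpy as np

# Emission probabilities for symbols ["a", "b"] from two states (hypothetical values).
B = np.array([[0.6, 0.4],
              [0.3, 0.7]])

# Per-state probability of staying silent instead of emitting (also hypothetical).
p_silent = np.array([0.2, 0.5])

# Append "silent" as a third symbol and rescale so each row still sums to 1.
B_aug = np.hstack([B * (1 - p_silent)[:, None], p_silent[:, None]])
print(B_aug.sum(axis=1))  # -> [1. 1.]

Once "silent" is an ordinary symbol, alpha, beta, and Viterbi run unchanged, provided the silences actually appear in your observation sequence.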
@okh6201 · 3 years ago
@@djp3 I see, I will try that then, thank you 😁
@Martin-iy8gd · 2 years ago
Another great video!
@ryanjones1704 · 2 years ago
Why are we seeking to maximize P(Q, O | lambda) and not P(Q | O, lambda)? It's stated at ~10 minutes that the two are equivalent, so I'm sure there's some Bayesian principle I'm missing. But even at ~12 minutes, it's stated "that accounts for the first t observations," which sounds like "given the first t observations." Can somebody point me in the right direction? In any case: awesome series! Much appreciated!
@haipingwang7075 · 2 years ago
To my understanding, Q is the hidden part, and many different Qs can be paired with the same O. The joint probability is good enough for solving the forward and backward problems; there is no need to figure out P(Q | O, lambda) directly. It is a trick.
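To spell out the equivalence being asked about (a standard Bayes manipulation, not a step quoted from the video): conditioning on O only divides the joint probability by a term that does not involve Q, so the maximizing Q is the same either way.

\[
P(Q \mid O, \lambda) = \frac{P(Q, O \mid \lambda)}{P(O \mid \lambda)}
\quad\Longrightarrow\quad
\arg\max_{Q} P(Q \mid O, \lambda) = \arg\max_{Q} P(Q, O \mid \lambda),
\]

since \(P(O \mid \lambda)\) is a constant with respect to \(Q\). The joint form is the one the forward and backward variables give you directly, which makes it the more convenient quantity to maximize.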
@akashgopal6896 · 3 years ago
This was a fantastic explanation and really cleared things up for me, but I have a question. In the scenario where every state can either emit a symbol or remain silent, how do you factor that into the calculations? Since the observation sequence does not indicate how many times the states have been silent, would you still be able to use the Viterbi algorithm to find the most probable sequence of states?
@djp3 · 3 years ago
You can add a fake emission which is “silent”.
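Here is a minimal, self-contained sketch of that idea (my own illustrative numbers; the "silent" symbol and the two-state model are hypothetical, not taken from the lecture):

import numpy as np

states = ["S0", "S1"]
sym = {"a": 0, "b": 1, "silent": 2}

pi = np.array([0.6, 0.4])             # initial state distribution
A = np.array([[0.7, 0.3],             # state transition probabilities
              [0.4, 0.6]])
B = np.array([[0.48, 0.32, 0.20],     # emissions including "silent"; rows sum to 1
              [0.15, 0.35, 0.50]])

def viterbi(obs):
    # Most likely state path for a sequence of observed symbols.
    o = [sym[x] for x in obs]
    T, N = len(o), len(states)
    delta = np.zeros((T, N))           # best path probability ending in each state
    psi = np.zeros((T, N), dtype=int)  # back-pointers
    delta[0] = pi * B[:, o[0]]
    for t in range(1, T):
        for j in range(N):
            scores = delta[t - 1] * A[:, j]
            psi[t, j] = np.argmax(scores)
            delta[t, j] = scores[psi[t, j]] * B[j, o[t]]
    path = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return [states[i] for i in reversed(path)]

print(viterbi(["a", "silent", "b", "silent"]))

The point is just that once silence is an explicit observation symbol, the Viterbi recursion itself does not change.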
@pierrestemmettmusic · 3 years ago
Great video! Very helpful, thank you
@Oleg_Litvinov · 4 years ago
Awesome lecturer!
@fgfanta · 6 months ago
10:42 It makes me think that {O_1, ..., O_T} should be on the right side of the |, with lambda, because the observations are given.
@ycdantywong · 4 years ago
Since both alpha_t(i) and beta_t(i) are calculated using the transition matrix, which already encodes which transitions are possible or impossible, I don't understand how argmax[gamma_t(i)] could end up with a state i that came from an impossible transition?
@youssefdirani · 4 years ago
2:45 it is regardless
@djp3 · 3 years ago
Gamma aggregates over all paths, so it is just saying that it is possible, by some path, to get to and from state i.
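For reference, the quantity in question (in the usual notation, e.g. Rabiner's HMM tutorial) is

\[
\gamma_t(i) = \frac{\alpha_t(i)\,\beta_t(i)}{\sum_{j=1}^{N} \alpha_t(j)\,\beta_t(j)} = P(q_t = S_i \mid O, \lambda).
\]

Each \(\gamma_t(i)\) is a per-time-step marginal, so choosing \(\arg\max_i \gamma_t(i)\) independently at each \(t\) can place two individually likely states next to each other even when their direct transition probability \(a_{ij}\) is zero; Viterbi avoids this by maximizing over whole paths instead.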
@wafaguendouz8430 · 1 year ago
Ryan Reynolds, is that you?
@djp3 · 1 year ago
No but I appreciate the vote of confidence!
@xntumrfo9ivrnwf · 2 years ago
Hello, my head exploded. Please advise.
@djp3 · 2 years ago
band-aids and aspirin
@jasmeetgujral5665 · 4 years ago
Amazing
@djp3 · 3 years ago
Thanks
@karannchew2534 · 2 years ago
3:40 Normalise: all possible observations given all possible states.