Hidden Markov Models 11: the Viterbi algorithm

38,527 views

djp3 · a day ago

Comments: 42
@kevinigwe3143 · 4 years ago
Thank you so much. Following the previous lectures (08, 09, 10) up through 11 cleared up all my doubts and confusion. This is exactly what I have been waiting for!
@djp3 · 4 years ago
You are most welcome
@mrlh69 · 4 years ago
I'm from Peru. I'm writing a thesis about speech recognition using HMM. This was really helpful, thank you!!
@taggebagge · 4 years ago
Best of luck with your thesis and hope that you stay safe from the corona man.
@bengonoobiang6633 · 2 years ago
It's just impressive. I'm a PhD student working on a speech recognition system, and understanding the algorithmic foundations and the modeling of the underlying problem is not easy with neural-network methods. This HMM course has helped me understand sequence-processing problems and their solutions in detail.
@djp3 · 2 years ago
Thanks! It's a cool algorithm as long as your state space isn't really huge.
@codediporpal · 3 years ago
This is great stuff. A bit repetitive, but that's a good thing for a complicated subject like this. The "motivations" videos were gold, since too often subjects are taught without explaining why they're important.
@goonerrn · 3 years ago
He looks like the child of Steve Carell and Ryan Reynolds... and great video!
@djp3 · 3 years ago
It could be worse
@wandering_star365 · 5 months ago
Maybe it's because I'm watching this on my phone lol, but I totally see it too! I was like, whoa, who's Reynolds trying to fool with those glasses? 🤭
@RandomTrash-ey5di · 3 days ago
@djp3 I never thought I'd be learning about HMMs from Deadpool
@theuser969 · 1 year ago
Thanks for providing this well-explained video on this topic.
@djp3 · 1 year ago
Glad I could help!
@amitotc · 3 years ago
Very well explained; I really enjoyed watching the series of videos on HMMs. Thank you so much :)
@djibrildassebe5055 · 4 years ago
Very nice course series, thank you Prof. Patterson
@okh6201 · 3 years ago
Thank you for the great explanation, Sir, but I have one question: if some states' emission probabilities include a probability of staying silent, will the Viterbi algorithm still work?
@djp3 · 3 years ago
It can, but you need to add a placeholder emission symbol of “silent” so that the probability of emission still adds up to 1.
@okh6201 · 3 years ago
@djp3 Thanks for the reply, Sir. So when we calculate alpha or beta, do we always have to assume there is a silent emission, and if so, how do we factor it into the observation sequence?
@djp3 · 3 years ago
@okh6201 That's a modeling question. It depends on the phenomenon you are trying to represent. If your observation sequence has “silent” observations, then explicitly treating silence as a fake observation is one way to manage it.
@okh6201 · 3 years ago
@djp3 I see, I will try that then. Thank you 😁
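A minimal sketch of the placeholder fix discussed in this thread (not from the video; the two-state model, symbol names, and probabilities are illustrative assumptions): give every state an explicit "silent" symbol so its emission row still sums to 1, then encode silent gaps in the data as that symbol before running any alpha/beta or Viterbi computation.

```python
import numpy as np

# Hypothetical 2-state HMM. Column 2 is an explicit "silent" placeholder
# symbol, so each state's emission distribution still sums to 1.
symbols = {"a": 0, "b": 1, "silent": 2}
B = np.array([
    [0.6, 0.3, 0.1],  # state 0: emits "a" 60%, "b" 30%, silent 10%
    [0.2, 0.5, 0.3],  # state 1: emits "a" 20%, "b" 50%, silent 30%
])
assert np.allclose(B.sum(axis=1), 1.0)  # rows are proper distributions

# A sequence where nothing was observed at t=1 is encoded by mapping the
# gap to the placeholder before the forward, backward, or Viterbi passes.
obs = ["a", "silent", "b"]
obs_idx = [symbols[o] for o in obs]
print(obs_idx)  # [0, 2, 1]
```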
@Martin-iy8gd · 2 years ago
Another great video!
@ryanjones1704 · 2 years ago
Why are we seeking to maximize P(Q, O | lambda) and not P(Q | O, lambda)? It's stated at ~10 minutes that the two are equivalent, so I'm sure there's some Bayesian principle I'm missing. But even at ~12 minutes, it's stated "that accounts for the first t observations," which sounds like "given the first t observations." Can somebody point me in the right direction? In any case: awesome series! Much appreciated!
@haipingwang7075 · 2 years ago
To my understanding, Q is the hidden part; many different Qs can be jointly consistent with one O. The joint probability is good enough for us to solve the forward and backward problems; there is no need to compute P(Q | O, lambda) directly. It is a trick.
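A one-line derivation may settle this thread (an editorial note in the standard Rabiner notation the lectures follow, not from the video): by the definition of conditional probability,

```latex
P(Q \mid O, \lambda) \;=\; \frac{P(Q, O \mid \lambda)}{P(O \mid \lambda)},
```

and since the denominator P(O | lambda) does not depend on Q, both criteria have the same maximizer:

```latex
\operatorname*{arg\,max}_{Q} P(Q \mid O, \lambda)
\;=\; \operatorname*{arg\,max}_{Q} P(Q, O \mid \lambda).
```

So maximizing the joint probability finds the same best path as maximizing the conditional one, and it is cheaper to compute.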
@akashgopal6896 · 3 years ago
This was a fantastic explanation and really cleared things up for me, but I have a question. In the scenario where all states have both a probability to emit and a probability to remain silent, how do you factor that into the calculations? Since the observation sequence does not indicate how many times the states have been silent, would you still be able to use the Viterbi algorithm to find the most probable sequence of states?
@djp3 · 3 years ago
You can add a fake emission which is “silent”.
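Since both this thread and the one above ask whether Viterbi still applies once silence is a symbol, here is a minimal generic Viterbi sketch (an editorial illustration, not the professor's code; the function name and toy parameters are assumptions). With silence encoded as an ordinary emission column, nothing in the algorithm changes.

```python
import numpy as np

def viterbi(pi, A, B, obs):
    """Most likely state path for a sequence of observation indices.

    pi[i]   = P(q_1 = i)                  initial distribution
    A[i, j] = P(q_{t+1} = j | q_t = i)    transition matrix
    B[i, k] = P(o_t = k | q_t = i)        emission matrix
    """
    T, N = len(obs), len(pi)
    delta = np.zeros((T, N))           # best score of any path ending in state j at time t
    psi = np.zeros((T, N), dtype=int)  # backpointers to the best predecessor state

    delta[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] * A  # scores[i, j] = delta[t-1, i] * A[i, j]
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) * B[:, obs[t]]

    path = [int(delta[-1].argmax())]        # best final state
    for t in range(T - 1, 0, -1):           # backtrace through the pointers
        path.append(int(psi[t, path[-1]]))
    return path[::-1]

# Toy parameters (assumptions); emission column 2 is the "silent" placeholder.
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
B = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3]])
print(viterbi(pi, A, B, obs=[0, 2, 1]))  # [0, 1, 1] with these toy numbers
```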
@pierrestemmettmusic · 3 years ago
Great video! Very helpful, thank you
@Oleg_Litvinov · 4 years ago
Awesome lecturer!
@fgfanta · 6 months ago
10:42 It makes me think that {O_1, ..., O_T} should be on the right side of the "|", with lambda, because the observations are given.
@ycdantywong · 4 years ago
Since both alpha_t(i) and beta_t(i) are calculated using the transition matrix, which already encodes which transitions are possible or impossible, I don't understand how argmax[gamma_t(i)] could end up with a state i reached via an impossible transition?
@youssefdirani · 4 years ago
2:45 it is "regardless"
@djp3 · 3 years ago
Gamma aggregates over all paths, so it is just saying that it is possible, by some path, to get to and from state i.
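To expand slightly on that answer (an editorial note in the standard notation): gamma scores each state individually, while Viterbi's delta scores whole paths,

```latex
\gamma_t(i) \;=\; \frac{\alpha_t(i)\,\beta_t(i)}{P(O \mid \lambda)},
\qquad
\delta_t(j) \;=\; \max_{i}\;\delta_{t-1}(i)\,a_{ij}\,b_j(o_t).
```

Picking q_t* = argmax_i gamma_t(i) maximizes the expected number of individually correct states, but nothing couples q_t* to q_{t+1}*: each state is reachable by some path, yet the specific pair may have transition probability a_{q_t*, q_{t+1}*} = 0. The delta recursion keeps the transition term inside the max, so the backtraced Viterbi path is always a valid sequence.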
@wafaguendouz8430 · 1 year ago
Ryan Reynolds is that you?
@djp3 · 1 year ago
No, but I appreciate the vote of confidence!
@jasmeetgujral5665 · 4 years ago
Amazing
@djp3 · 3 years ago
Thanks
@xntumrfo9ivrnwf · 2 years ago
Hello, my head exploded. Please advise.
@djp3 · 2 years ago
band-aids and aspirin
@karannchew2534 · 2 years ago
3:40 Normalise: all possible observations given all possible states.
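For readers skimming the comments, the normalization being referenced is presumably (my reconstruction in the standard notation, not from the video) the division that makes the gammas sum to 1 across states at each time step:

```latex
\gamma_t(i) \;=\; \frac{\alpha_t(i)\,\beta_t(i)}{\sum_{j=1}^{N} \alpha_t(j)\,\beta_t(j)}.
```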
@ganeshkshet5631 · 4 years ago
Thank you
@djp3 · 4 years ago
You're welcome!
@beko_61 · 2 years ago
Thank you, Ryan Reynolds
Hidden Markov Models 12: the Baum-Welch algorithm
27:02
The Viterbi Algorithm : Natural Language Processing
21:13
ritvikmath
112K views
Hidden Markov Models 09: the forward-backward algorithm
16:16
Viterbi Algorithm
11:18
Keith Chugg
94K views
I Day Traded $1000 with the Hidden Markov Model
12:33
ritvikmath
23K views
(ML 14.6) Forward-Backward algorithm for HMMs
14:56
mathematicalmonk
175K views
4 Forward and Viterbi algorithm HMM
9:06
OU Education
39K views
Hidden Markov Models
30:18
Bert Huang
88K views
Forward Algorithm Clearly Explained | Hidden Markov Model | Part - 6
11:01
NLP Lecture5(a) - Hidden Markov Models
8:48
Prof. Ghassemi Lectures and Tutorials
12K views