Words cannot express how good this lecture is. Thank you.
@mohdkashif7295 2 years ago
I didn't know Ryan Reynolds was so good at HMMs
@vaidehij8498 2 years ago
Had been trying to get the intuitive understanding for so long. This video was a blessing!!!
@benbadman123 10 months ago
This is a great video. Just thought I'd point out two things. First, the backward algorithm doesn't describe the probability of being in a state given the future observations; it describes the probability of the future observations given the current state and the model. Second, the base case for the backward algorithm can be thought of as a probability as well. For the last time step T, Beta_i(T) = P( {} | q_T = S_i, \lambda) = 1, where {} denotes the empty set, since there are no observations after time step T. So Beta_i(T) asks: what's the probability of observing nothing (the empty set) given that we are at the last time step of our model? Since the model's constraints mean nothing more can be observed, this is deterministic, and the probability is 1.
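The base case and recursion described in this comment can be sketched in Python. This is a minimal illustration, not code from the video: `A` is assumed to be the transition matrix, `B[i, k]` the probability of emitting observation `k` from state `i`, and `pi` the initial state distribution. The last line mirrors the standard backward-algorithm termination, which gives the same P(O | lambda) as the forward algorithm's final step.

```python
import numpy as np

def backward(A, B, pi, obs):
    """Backward algorithm: beta[t, i] = P(o_{t+1} ... o_T | q_t = S_i, lambda)."""
    N = A.shape[0]           # number of hidden states
    T = len(obs)
    beta = np.zeros((T, N))
    beta[T - 1, :] = 1.0     # base case: P({} | q_T = S_i, lambda) = 1
    for t in range(T - 2, -1, -1):
        for i in range(N):
            # sum over successor states j: a_ij * b_j(o_{t+1}) * beta_{t+1}(j)
            beta[t, i] = np.sum(A[i, :] * B[:, obs[t + 1]] * beta[t + 1, :])
    # termination: P(O | lambda) = sum_i pi_i * b_i(o_1) * beta_1(i)
    likelihood = np.sum(pi * B[:, obs[0]] * beta[0, :])
    return beta, likelihood
```

The termination line also answers the follow-up question below: it is the backward-algorithm counterpart of the forward algorithm's final summation.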
@khanhtruong3254 9 months ago
Thank you. Your comment is very helpful. In addition, do you know how to calculate P(O | lambda) using the backward algorithm? Prof. Patterson calculated it with a "final step" in the forward algorithm at 7:10, but didn't give a similar final step for the backward algorithm.
@alessandrofazio8584 a year ago
Top explanation. Only someone who really gets the subject can explain so clearly what is going on. Thank you so much!
@garbour456 3 years ago
Great video! Really fills in the gaps and clarifies some of the confusing aspects of the Rabiner tutorial
@djp3 3 years ago
Glad it was helpful!
@benjaminbenjamin8834 2 years ago
Best explanations so far on HMM.
@djp3 2 years ago
Wow, thanks!
@glrk0329 2 years ago
Thank you so much! I tried understanding this from the Rabiner tutorial, but this makes it so much clearer.
@djp3 2 years ago
You're very welcome!
@bikinibottom2100 a year ago
And this is for free!? What a time to be alive
@arnoschmidhauser5223 3 months ago
Outstanding, congratulations!
@jonashetterich5375 3 years ago
Insanely good explanation. Chapeau.
@adamtran5747 2 years ago
LOVE THE CONTENT.
@srijanshovit844 9 months ago
Superb!!
@djp3 26 days ago
I'm glad it helped!
@NickSpeer 4 years ago
Dope intro
@djp3 3 years ago
The intro wars
@danielm572 4 years ago
Great vid, I get it now
@ryantrusler386 3 years ago
Would it be possible to access these slides to be able to reference them? Very helpful presentation!
@djp3 3 years ago
Email me and I'll see what I can do...djp3.net
@liufangguo244 2 years ago
Thank you. It helps me.
@djp3 2 years ago
Glad to hear that!
@kurtishaut910 3 years ago
Nice vid homie!
@grumpyoldman1257 a year ago
If this is to help the robot locate itself, presumably at the moment it needs to know where it is, where does it get the "once you know what the future looks like" data from? (12:40)
@djp3 a year ago
The reasoning happens after the data is collected, but the model collects statistics for each moment in time looking forward and looking backward.
@thecurious926 3 years ago
What if I wanted to calculate the posterior probability of a given sequence of states given the observations? Can I get that from the forward-backward algorithm? It seems forward-backward gives me the posterior of every individual state within the sequence, but what about the sequence of states as a whole?
@djp3 3 years ago
It gives a posterior over the sequence
@thecurious926 3 years ago
@@djp3 OK, but what about the equation for the joint probability of the observations and a state sequence: the product of the emission probabilities times the product of the transition probabilities?
@djp3 3 years ago
@@thecurious926 Maybe the best way to answer your questions would be to refer to Rabiner? "A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition"
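For concreteness, the joint probability asked about in this thread, P(O, Q | lambda) = pi_{q_1} b_{q_1}(o_1) * prod_{t=2..T} a_{q_{t-1} q_t} b_{q_t}(o_t), can be sketched for one fixed state sequence. Variable names here are illustrative assumptions, not from the video: `A` is the transition matrix, `B` the emission matrix, `pi` the initial distribution.

```python
import numpy as np

def joint_prob(A, B, pi, obs, states):
    """P(O, Q | lambda) for ONE fixed state sequence Q = states:
    the product of emission probabilities times transition probabilities."""
    p = pi[states[0]] * B[states[0], obs[0]]
    for t in range(1, len(obs)):
        p *= A[states[t - 1], states[t]] * B[states[t], obs[t]]
    return p
```

Summing `joint_prob` over every possible state sequence Q recovers P(O | lambda), which is exactly what the forward algorithm computes without enumerating the exponentially many sequences.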
@shrinivastalnikar4236 2 years ago
Thank you Ryan Reynolds :)
@djp3 2 years ago
Really it could be much worse...
@DCentFN 4 years ago
So my understanding is that the forward-backward algorithm is the same as running the forward algorithm and the backward algorithm and then multiplying the results? I've watched a few videos and I feel way out of my element on this.
@willorchard 4 years ago
That's correct, but I think it's worth noting what you're trying to achieve by doing this. In these videos, Prof Patterson set out three key HMM problems he wanted to solve (~8 min mark in the previous video in this series). Note that in this video (~12 min mark) Prof Patterson said that the forward algorithm was all that was needed to solve the first of these three problems (finding P(O | lambda)), but that the backward algorithm was needed for solving the other two (watch the later videos in this series). I think he explains these three problems very well: why one might like to solve them, and how to do so.

With that said, the forward-backward algorithm is designed to solve a different problem to the one that Prof Patterson presents here. What the forward-backward algorithm gets you is the posterior marginals of all hidden state variables given a sequence of observations, that is, using his notation: P(q_t | O, lambda). Put into English: what is the probability of each possible hidden state at a given time t, given all your observations (o_1 ... o_T) and your model (lambda)? To use the analogy of the jars of M&Ms from an earlier video, these are the probabilities of each jar being the one from which the M&M was taken at time t, given all of the observations (the sequence of colours of M&Ms) and the model (lambda). This is what you get (after normalisation) by multiplying the results of the forward and backward algorithms.

This is different to what Prof Patterson demonstrates you can calculate in this video. Both are very useful things, but it is important not to confuse what the algorithm is built to achieve with the other useful things you can calculate using it. It's worth having a scan through the wiki page for the FB algorithm or watching the mathematicalmonk video for some clarification of this point; otherwise feel free to ask me any questions.
Absolutely love the video btw Prof Patterson, really very well presented and explained! I had a pretty dry, mathematical understanding of the alpha function, but your video really clarified what it means :)
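The posterior-marginal computation described in the comment above, gamma_t(i) = P(q_t = S_i | O, lambda), obtained by normalising the product of the forward and backward variables, might be sketched as follows. This is a rough illustration with assumed variable names (`A`, `B`, `pi` as transition matrix, emission matrix, and initial distribution), not code from the video:

```python
import numpy as np

def posteriors(A, B, pi, obs):
    """gamma[t, i] = P(q_t = S_i | O, lambda) via forward-backward."""
    N, T = A.shape[0], len(obs)
    alpha = np.zeros((T, N))
    beta = np.ones((T, N))            # backward base case: beta_T(i) = 1
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):             # forward pass
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    for t in range(T - 2, -1, -1):    # backward pass
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)  # normalise by P(O | lambda)
    return gamma
```

Each row of `gamma` sums to 1: it is a distribution over which hidden state the model was in at that time step, given the whole observation sequence.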
@willorchard 4 years ago
I have only just seen that Professor Patterson actually explains this in the first 5 minutes of video 11 in the series. There he calls the posterior marginals of all hidden state variables given a sequence of observations gamma_t(i). I appreciate that the motivation for doing both the Forward and the Backward parts of the algorithm and collating them into a single 'Forward-Backward' algorithm isn't made explicitly clear in this video. What this video has that I haven't seen anywhere else is a very nice description of alpha and how that alone is handy for computing the likelihood function for lambda.
@xntumrfo9ivrnwf 2 years ago
@@willorchard Thank you Will for taking the time to explain this in so much detail! Very rare to find a civilized comments section on YT nowadays :)