Completed a project thanks to this video. You're the best man!!!
@yi-chenlu9137 4 years ago
Thank you for the great video! I would like to point out that it is not obvious at 9:30 how \alpha(x_t) \beta(x_t) = p(x_t, Y). My thought is: \alpha(x_t) \beta(x_t) = p(x_t, y_1~y_t) * p(y_{t+1}~y_T | x_t) = p(y_1~y_t | x_t) * p(x_t) * p(y_{t+1}~y_T | x_t) *=* p(y_1~y_T | x_t) * p(x_t) = p(x_t, y_1~y_T). The '*=*' step follows from the Markov assumption: given the current state x_t, the future states x_{t+1}~x_T and hence the future outcomes y_{t+1}~y_T are independent of the past outcomes y_1~y_t, so the two conditional probabilities merge into p(y_1~y_T | x_t). (Wondering if my thought is correct...)
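The identity discussed above can be checked numerically by brute force. The sketch below (all parameters are invented for illustration) computes the forward variables \alpha, the backward variables \beta, and p(x_t, Y) by enumerating every state path on a toy 2-state HMM, then verifies they agree:

```python
# Numeric check of alpha(x_t) * beta(x_t) = p(x_t, Y) on a toy HMM.
# All numbers below are made up for the sake of the example.
import itertools

pi = [0.6, 0.4]                    # initial state distribution
A  = [[0.7, 0.3], [0.4, 0.6]]      # A[i][j] = p(x_{t+1}=j | x_t=i)
B  = [[0.9, 0.1], [0.2, 0.8]]      # B[i][k] = p(y=k | x=i)
Y  = [0, 1, 0]                     # observed sequence
T, S = len(Y), len(pi)

# Forward pass: alpha[t][i] = p(y_1..y_t, x_t = i)
alpha = [[0.0] * S for _ in range(T)]
for i in range(S):
    alpha[0][i] = pi[i] * B[i][Y[0]]
for t in range(1, T):
    for j in range(S):
        alpha[t][j] = sum(alpha[t-1][i] * A[i][j] for i in range(S)) * B[j][Y[t]]

# Backward pass: beta[t][i] = p(y_{t+1}..y_T | x_t = i)
beta = [[1.0] * S for _ in range(T)]
for t in range(T - 2, -1, -1):
    for i in range(S):
        beta[t][i] = sum(A[i][j] * B[j][Y[t+1]] * beta[t+1][j] for j in range(S))

# Brute force: p(x_t = i, Y) by summing over every full state path
def joint(t, i):
    total = 0.0
    for path in itertools.product(range(S), repeat=T):
        if path[t] != i:
            continue
        p = pi[path[0]] * B[path[0]][Y[0]]
        for s in range(1, T):
            p *= A[path[s-1]][path[s]] * B[path[s]][Y[s]]
        total += p
    return total

for t in range(T):
    for i in range(S):
        assert abs(alpha[t][i] * beta[t][i] - joint(t, i)) < 1e-12
print("alpha * beta matches p(x_t, Y) for every t and state")
```

If the identity failed, one of the asserts would fire; since it holds for every t and every state, the derivation above is consistent with the recursions.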
@Jacob-jc6hj 4 years ago
Your math checks out to me, but I am new to this as well.
@pardisranjbarnoiey6356 5 years ago
thanks
@himautub7345 3 years ago
At 9:48 he says p(y1,y2,y3,x3) * p(y4,y5,y6 | x3) = p(x3, Y), where Y = {y1,y2,...,y6}. Has anyone figured out how?
@himautub7345 3 years ago
Figured it out: y1,y2,y3 are independent of y4,y5,y6 given x3; that is, p(a,b,c) = p(b,c|a) * p(a) = p(b|a) p(c|a) p(a) = p(a,b) p(c|a).
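The 9:48 factorization can also be confirmed by enumeration. The sketch below (toy 2-state HMM, parameters invented for illustration) computes p(y1..y3, x3), p(y4..y6 | x3), and p(x3, Y) separately by summing over state paths, and checks that the product of the first two equals the third:

```python
# Numeric check of p(y1..y3, x3) * p(y4..y6 | x3) = p(x3, Y)
# on a toy 2-state HMM (all parameters invented for illustration).
import itertools

pi = [0.5, 0.5]
A  = [[0.8, 0.2], [0.3, 0.7]]      # A[i][j] = p(next=j | current=i)
B  = [[0.6, 0.4], [0.1, 0.9]]      # B[i][k] = p(y=k | x=i)
Y  = [0, 1, 1, 0, 1, 0]            # y1..y6
S = 2
t = 2                              # 0-based index of x3

def path_prob(path, obs):
    # Joint probability of a state path and its observations.
    p = pi[path[0]] * B[path[0]][obs[0]]
    for s in range(1, len(obs)):
        p *= A[path[s-1]][path[s]] * B[path[s]][obs[s]]
    return p

for i in range(S):
    # p(y1..y3, x3 = i): sum over length-3 paths ending in state i
    front = sum(path_prob(p, Y[:3])
                for p in itertools.product(range(S), repeat=3) if p[2] == i)
    # p(y4..y6 | x3 = i): sum over continuations x4..x6 from x3 = i
    back = 0.0
    for cont in itertools.product(range(S), repeat=3):
        q = A[i][cont[0]] * B[cont[0]][Y[3]]
        for s in range(1, 3):
            q *= A[cont[s-1]][cont[s]] * B[cont[s]][Y[3+s]]
        back += q
    # p(x3 = i, Y): sum over all full-length paths passing through x3 = i
    full = sum(path_prob(p, Y)
               for p in itertools.product(range(S), repeat=6) if p[t] == i)
    assert abs(front * back - full) < 1e-12
print("p(y1..y3, x3) * p(y4..y6 | x3) = p(x3, Y) holds for both states")
```

The factorization works precisely because, once x3 is fixed, the continuation x4..x6 (and hence y4..y6) does not depend on how the chain reached x3.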
@storiesbyvivek 5 years ago
thanks
@deeplearn6584 10 months ago
Thanks for the great explanation! Finally understood the implementation of HMMs.
@dusaovox 5 years ago
thanks
@siomokof3425 11 months ago
6:52
@yutongban9016 5 years ago
thanks
@fuzzyip 5 years ago
thanks
@samidelhi6150 4 years ago
Would you kindly do another video series on the hierarchical version of HMMs? And when should we prefer the hierarchical version? It would be great if you could provide an implementation as well, in Python, R, or MATLAB.