I got my PhD back in the 2000s. I wish material like this had existed back then. I struggled so much with a lot of basic concepts. I had to learn from material written in the most extreme mathematical notation without gaining any intuition. You have done an amazing job explaining something I had a really hard time grasping. Well done.
@amrdel2730 1 year ago
Yeah, PhD-level work is almost always using, or linked to, an understanding of machine learning theory. I'm doing it now.
@kurghen988 1 year ago
I thought exactly the same!
@HeartDiamond_variedad 11 months ago
There is no content like this in Spanish; we have to turn to videos in English.
@stryker2322 6 months ago
Because of people like you, we all are able to go deep into artificial intelligence and simplify what we are watching right now.
@RkR2001 5 months ago
Great teaching, Nerd. Where are you based now? I am a doctor from India and need your academic help.
@pradipkumardas2414 2 years ago
I am 60 years old, and I am learning how to simplify a complex subject. You are a great teacher! God bless you.
@NormalizedNerd 2 years ago
Thanks a lot sir 🙏 I am very flattered.
@BhargavSripada 2 years ago
didn't ask
@mr.nobody125 1 year ago
@@BhargavSripada and we certainly didn't ask for your opinion
@gracemerino3267 3 years ago
Thank you for making these videos! I am starting my master's thesis on Markov Chains and these videos help me get a solid introduction to the material. Much easier to understand the books now!
@abdullahalsulieman2096 2 years ago
That is great. How is it so far? I want to apply it to predict system reliability. Not sure how complicated it is. Could you advise, please?
@xherreraoc 2 months ago
This series of videos has been incredibly useful for me in writing my Master's thesis. Thank you so much.
@adaelasm6467 2 years ago
This might be the fastest I've gone from never having used a concept to totally grokking it. Very well explained.
@dheerajmakhija9752 11 months ago
A very rare topic, explained in the most precise manner. Very helpful 👍
@mertenesyurtseven4823 1 year ago
Dude, your videos are severely underappreciated: the animations, the basic-but-complete style you use when talking about abstract and complex topics. I just discovered this channel, and I will recommend it to everyone, including beginners, because even they will be able to understand it, as well as advanced people.
@NormalizedNerd 1 year ago
Thanks a lot for appreciating!!
@mienisko 1 year ago
I'm just going through the Markov chain playlist, and because of its quality I'm going to subscribe to this channel. Great material!
@HKwak 3 years ago
Thanks for the videos. I am majoring in data science, and obviously videos like this sometimes help enormously compared to reading texts. Very intuitive and visual. I don't think I will forget the weather signs you showed us today.
@NormalizedNerd 3 years ago
That's the goal 😉
@Portfolio-qb7et 1 year ago
@@NormalizedNerd Can you briefly explain the steps involved in finding the probability of a sunny day? I really don't understand.
@k.i.a7240 3 months ago
Better explained than the 3+ AI university classes I have gone through. Simple, efficient. Thank you.
@josippardon8933 1 year ago
Great! Just great! I really don't understand why most professors at colleges hate this type of explanation. They always choose the standard "let's be super formal and use super formal mathematical notation" approach. Yes, it is important to learn the formal mathematical side, but why not combine the formal and informal approaches and put them together in a textbook?
@mayurdeo627 3 years ago
Amazing job. This really helps if you are preparing for interviews, and want a quick revision. Thank you for doing this.
@NormalizedNerd 3 years ago
You're very welcome!
@IndianDefenseAnalysis 1 year ago
What a channel. I have never come across any data science channel like yours. You are doing fantastic work. Love your videos and going through them ❤
@muneshchauhan 2 years ago
Many thanks for the beautiful visualization and summarization of the Markov model. Understanding it was effortless. It may require a little revision, but it is comprehensible with ease. 🙂
@songweimai6411 2 years ago
Thank you, I really appreciate your work. Watching this video made me realize that my professor at school is not a good teacher.
@aramisfarias5316 3 years ago
After going through your Markov chains series, you, my friend, have got yourself a new subscriber! Great work. Your channel deserves to grow!
@ivanportnoy4941 3 years ago
Loved it! I am looking forward to (maybe) seeing a video on the Markov Chain Monte Carlo (MCMC) algorithms. Best regards!
@tatvamkrishnam6691 2 years ago
Excellent! Skipped the 4th part of this magnificent Markov series. Took roughly 3 hrs to verify things at moments and convince myself. HIT MOVIE!!
@Hangglide 6 months ago
Hello author from the past! Your video is really helpful! Thank you!
@Mosil0 2 years ago
Hey, thanks a lot for making these! One suggestion if you don't mind: you could avoid using red and green (especially those particular shades you used) as contrasting colors, given that they're close to indistinguishable to about 8% of males. Basically any other combination is easier to tell apart, e.g. either of those colors with blue. Just a minor quibble, the videos are otherwise very good!
@NormalizedNerd 2 years ago
Thanks a lot for pointing this out. Definitely will keep this in mind.
@sye9522 7 months ago
Amazing!! It really helps me understand the logic behind those scary HMM python codes. Thank you.
@brainnuke5450 3 years ago
Don't stop what you are doing! It's amazing.
@NormalizedNerd 3 years ago
Thanks!! :D
@asaha9479 3 years ago
From the accent you're a Bengali, but are you from ISI? Great video, keep going.
@NormalizedNerd 3 years ago
@@asaha9479 Yup, I'm a Bong... but I don't study at ISI.
@philipbutler 3 years ago
This was the perfect video for where I'm currently at. I learned about Markov chains last year, and just finally got a good grasp on Bayes' theorem (I struggled through Prob & Stats years ago). Thanks so much! Keep it up!
@NormalizedNerd 3 years ago
That's amazing! :D
@adrenochromeaddict4232 1 year ago
You are one of the rare Indian YouTubers who doesn't have a god-awful, incomprehensible accent, and you're also good at teaching. Congrats.
@sreelatharajesh2365 1 year ago
Wonderful video. Amazing explanation. Please explain why P(Y) is neglected. Or is it taken to be 1?
@rodrigodiana1774 2 months ago
The arg max is computed by varying X, so we can neglect P(Y): it does not vary from candidate to candidate, so it will not change the final result.
@gregs7809 3 years ago
A really well-laid-out video. Looking forward to watching more.
@NormalizedNerd 3 years ago
Thanks! Keep supporting :D
@girishthatte 3 years ago
Awesome explanation! Loved the way you explained the math used for the calculations!
@NormalizedNerd 3 years ago
Thanks a lot! :)
@debasiskar4662 2 years ago
You explain things nicely. I would request that you make videos on advanced stochastic processes like the semi-Markov process, martingales, etc.
@ramchillarege1658 1 year ago
Very nice and clear. Thank you.
@noame 2 years ago
Thank you for the clarity of the explanation. Why did you neglect the denominator P(Y)? How can we calculate it? I assume that the correct arg max should take the denominator P(Y) into consideration.
@Ujjayanroy 10 months ago
We are taking the argmax of a function with X as the variables, so P(Y) doesn't matter because of the argmax... you can refer to Bayes' theorem for maximum likelihood; they always do the same thing.
@ViralPanchal97 7 months ago
Thanks a ton, I wish my professors from Monash Uni taught this way.
@zoegorman8233 3 years ago
Super entertaining videos helping me with my Oxford master's thesis. Study night or movie night? Plus he has an awesome accent :-)
@NormalizedNerd 3 years ago
Thanks a lot mate! :D :D
@ITTOMC 1 year ago
Simply wonderful. Keep up your excellent work. Really really well done!
@aveenperera3128 3 years ago
Please do more examples of hidden Markov chains.
@VersatileAnthem 2 years ago
Superb explanations! That shows how in-depth your knowledge is!
@Louisssy 3 years ago
Really cool explanation! Can you also explain why P(Y) is ignored?
@NormalizedNerd 3 years ago
Because of two reasons... 1. It's often hard to compute P(Y). 2. To maximize that expression we only need to maximize the numerator (which depends on X). Note that P(Y) doesn't depend on X.
@ts-ny6mx 3 years ago
@@NormalizedNerd Thank you! I had the same concern and your explanation makes sense!
@mariamichela 2 years ago
@@NormalizedNerd 1. Can't we just compute P(Y) as P(Y|X1) x P(X1) + P(Y|X2) x P(X2) + P(Y|X3) x P(X3)? 2. True, I agree. Since you didn't say it in the video I was just wondering where P(Y) disappeared to, and didn't bother to think that the max was actually over X.
@tyrannicalguy7262 1 year ago
"Hello People From The Future!" that was very thoughtful
@vocabularybytesbypriyankgo1558 1 year ago
Great explanation, super helpful
@artemiosantiagopadillarobl4280 3 years ago
Great videos, keep it up! :) It would be nice to have a video about MCMC (Markov chain Monte Carlo) and the Metropolis-Hastings algorithm.
@NormalizedNerd 3 years ago
Great suggestion.
@nonconsensualopinion 3 years ago
@@NormalizedNerd I'm on the edge of subscribing. A video on MCMC would convince me to subscribe and never leave!
@NormalizedNerd 3 years ago
@@nonconsensualopinion 😂😂...Let's see what comes next
@tetamusha 1 year ago
I got a little confused with the two HMM videos. I thought the second video would solve the argmax expression presented at the end of the first one, but the algorithm that solves this expression is the Viterbi algorithm, not the Forward algorithm from the second video. Just a heads-up to those who got a little lost like me.
@abdsmmeat7492 2 years ago
We need more videos on Markov chains
@asmaaziz2436 3 years ago
Explained so simply. Well done. Helped me a lot.
@NormalizedNerd 3 years ago
Great to hear!
@xiaoruidu8603 3 years ago
Please keep updating~ you are doing an amazing job~~
@NormalizedNerd 3 years ago
Sure I will
@varshachinnammal 1 year ago
Dear sir, your explanation was very well presented and understandable. It is full of mathematics: matrices, probability, and so on. I'm from a science background without maths. I needed this for bioinformatics, but it is difficult to map nitrogenous bases onto these matrices and formulae. Will you explain it in a simpler way? It would be very helpful, sir 🥺
@freenrg888 2 months ago
Beautiful! Thank you. Question: in the final formula, "arg max (over X) of the product of P(Yi | Xi) P(Xi | Xi-1)", we have a product term P(X1 | X0) that assumes there is an X0 value. However, there is no X0. Don't we need to replace this term with a different expression that does not rely on X0?
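A common way to resolve this, assuming the standard HMM setup (the video does not spell this out): the i = 1 factor is taken to be the initial state distribution P(X1), often the stationary distribution of the chain, rather than a transition from a nonexistent X0. Under that convention the objective reads:

```latex
\hat{X} = \underset{X_1,\dots,X_n}{\arg\max}\;
P(X_1)\, P(Y_1 \mid X_1) \prod_{i=2}^{n} P(Y_i \mid X_i)\, P(X_i \mid X_{i-1})
```

The product then starts at i = 2, and no X0 is needed.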
@ishhanayy 1 year ago
Very well explained ! 👏👏
@truongsonmai6915 3 years ago
Very helpful video as well as the rest of the Markov series. Wish you luck from Vietnam!
@NormalizedNerd 3 years ago
Thanks a lot mate :)
@Gamer2002 1 year ago
You should make more videos, you are awesome
@muqaddaszahranaqvi5585 2 years ago
Finally, a video that helped.
@najeebjebreel2885 1 year ago
Nice explanation. Thanks.
@surajpandey86 3 years ago
I really like the visuals and your presentation with the content. Good work!
@NormalizedNerd 3 years ago
Glad you like them!
@mika-ellajarapa5646 3 years ago
Hello!!!! Please explain conditional random fields :( Thank you!
@Garrick645 3 months ago
As soon as the emojis left, the video went over my head.
@streamindigoflags8035 2 years ago
Thanks a lot! These types of videos are amazing and help me understand the concepts from the books in a good way. It boosts my interest in this area, and it helps me a lot in doing my project! Please make these kinds of videos for almost all concepts!😍
@Sam-rz5hw 7 months ago
Excellent explanation!
@acesovernines 1 year ago
Superb explanation
@keithmartinkinyua2067 2 years ago
This video is good, but it left me hanging. I was expecting you to calculate the probability at the end.
@MB-rc8ie 3 years ago
This is gold. Thanks for uploading. Very helpful
@NormalizedNerd 3 years ago
You're welcome :D
@nad4153 3 years ago
Nice video! But what happens to P(Y) at 8:30 in the final formula? Why does it disappear?
@papa01-h2z 2 years ago
We want to find the X_1, ..., X_n that gives us the maximum value. Note that P(Y) does not depend on the Xs and is therefore a constant. A constant does not change the Xs that give us the maximum.
@islemsahli737 5 months ago
Thank you, well explained.
@ameyapawar7097 2 years ago
Very well explained!
@barhum5765 2 years ago
Bro you are a king
@aadi.p4159 11 months ago
Super, super good. Awesome work, bro.
@thejswaroop5230 3 years ago
Hey you from the past... good video!
@nchsrimannarayanaiyengar8003 3 years ago
Very nice explanation, thank you.
@丰岛龙健 3 years ago
Best videos! Hope to see more videos about Markov chains! Thank you!
@NormalizedNerd 3 years ago
Sure thing!
@dannymoore1530 6 months ago
This was good! Thank you.
@shivkrishnajaiswal8394 2 years ago
good one
@stefankermer7782 8 months ago
This is great. I had to laugh a lot when you introduced indices to the matrices, haha.
@Cookies4125 2 years ago
Thanks for the explanation! You went through the math of how to simplify \argmax P(X=X_1,\cdots,X_n \mid Y=Y_1,\cdots,Y_n), but how do you actually compute the argmax once you've done the simplification? There must be a better way than a brute-force search through all combinations of values for X_1,\cdots,X_n, right?
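There is indeed a better way than brute force: the standard choice is the Viterbi algorithm, which uses dynamic programming to compute this argmax in O(nK²) time for K hidden states instead of O(Kⁿ). A minimal sketch; the example numbers at the bottom are placeholders, not the probabilities from the video:

```python
import numpy as np

def viterbi(pi, A, B, obs):
    """Most likely hidden-state path via dynamic programming.

    pi:  (K,) initial state distribution
    A:   (K, K) transition matrix, A[i, j] = P(X_t = j | X_{t-1} = i)
    B:   (K, M) emission matrix,  B[i, y] = P(Y_t = y | X_t = i)
    obs: sequence of observed symbol indices
    """
    n, K = len(obs), len(pi)
    # delta[t, j] = best log-prob of any path ending in state j at time t
    delta = np.full((n, K), -np.inf)
    back = np.zeros((n, K), dtype=int)
    with np.errstate(divide="ignore"):  # log(0) -> -inf is fine here
        log_pi, log_A, log_B = np.log(pi), np.log(A), np.log(B)
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, n):
        cand = delta[t - 1][:, None] + log_A  # (K, K): prev state x next state
        back[t] = np.argmax(cand, axis=0)
        delta[t] = np.max(cand, axis=0) + log_B[:, obs[t]]
    # Trace the best path backwards from the best final state.
    path = [int(np.argmax(delta[-1]))]
    for t in range(n - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy 2-state example (placeholder numbers): states 0/1, observations 0/1/2.
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1],
              [0.1, 0.3, 0.6]])
print(viterbi(pi, A, B, [0, 1, 2]))  # prints: [0, 0, 1]
```

The trick is the Markov factorization from the video: the best path ending in state j at time t only depends on the best paths at time t-1, so candidates can be pruned at every step instead of enumerating all Kⁿ sequences.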
@ThePablo505 1 year ago
Something that looks so hard, you make understandable even for a 4-year-old kid.
@elixvx 3 years ago
Thank you for your videos! Please continue the great work!
@NormalizedNerd 3 years ago
Thanks, will do!
@TariqulHasan-p2y 1 year ago
Hi, thanks for the wonderful videos on Markov chains. I just want to know: how do you define the transition and emission probabilities? And what do you do when the state probabilities are unknown? Regards
@annanyatyagip 9 months ago
Excellent video!
@nakjoonim 3 years ago
You deserve my subscription. Thanks a lot!
@NormalizedNerd 3 years ago
:D :D
@hyeonjusong1159 1 year ago
You have a knack for teaching! The explanation was clear and the example along with the emojis was so cute! Thank you!!!
@lucasqwert1 1 year ago
How do we calculate the stationary distribution of the first state? I watched your previous videos but still can't calculate it! Thanks for answering!
@rohanthakrar7599 1 year ago
Can you explain the calculation of P(X|Y), the last step of the video, when you put in the products of P(Y|X) and P(X|X)? Where does the P(Y) in the denominator go? Thanks!
@Periareion 1 year ago
Howdy! I think of it like this: P(Y) is a constant, and doesn't affect which sequence X has the highest probability of occurring. In other words: since every term gets divided by P(Y), we can just ignore it. Perhaps he could've made that a little clearer in the video. Cheers!
@michaeljfigueroa 1 year ago
Excellent
@mediwise2474 1 year ago
Sir, please make a video on the finite element method.
@jasonhuang2270 1 year ago
Thank you so much!
@deepayaganti7653 3 years ago
Clearly explained. Superb.
@NormalizedNerd 3 years ago
Glad you liked it
@jonathanisai9286 3 years ago
Amazing video, bro! You rock!
@NormalizedNerd 3 years ago
Thanks bro!
@vrl9037 3 years ago
Excellent explanation
@NormalizedNerd 3 years ago
Glad you liked it!
@MERIEMELBATOULEDDAIF 7 months ago
Hiii, please, I just want to know the tool you create those examples with. It's urgent, save me!
@stephenchurchill3929 2 years ago
Great Video, Thanks!
@NormalizedNerd 2 years ago
You're welcome!
@minjun87 2 years ago
Thanks for the great video. What would the hidden-state sequence data and observed sequence data be in a speech-to-text use case?
@malikabdulsalam6628 3 years ago
Highly appreciated, sir. Can you please share your PPT slides?
@marcovieira8356 2 months ago
Thank you for the vid; it is probably the most understandable arg max explanation available around. But something remains unclear for me. You said at 09:30 that one Markov property is that X_i "depends only on X_i-1", but the Markov property I know is the opposite: X_i is independent of the past (the future does not depend on the past, just on the current state). Where am I missing the point?
@user-se9uk2py5k 3 years ago
Really good explanation! Thank you!
@NormalizedNerd 3 years ago
Glad it was helpful!
@xxanton8xx 2 years ago
Thanks for the awesome video!
@sudhasenthilkumar335 3 years ago
Kindly provide a video on the Viterbi and forward-backward algorithms.
@24CARLOKB 2 years ago
Great video! Just a little lost on where you get prob(sad or happy | weather), which I think are the emission probabilities? Thanks!
@jtanium 2 years ago
I've been going through these videos and doing the calculations by hand to make sure I understand the math correctly. When I tried to calculate the left eigenvector ([0.218, 0.273, 0.509]) using the method described in the first video (Markov Chains Clearly Explained Part 1), I got a different result ([0.168, 0.273, 0.559]), and I'm wondering if I missed a step. Here's what I did: starting with [0 0 1] meaning it is sunny, pi0A = [0.0 0.3 0.7], pi1A = [0.12 0.27 0.61], pi2A = [0.168, 0.273, 0.559]. It's interesting that the second element matches. If anyone can help me understand where I went wrong, I'd greatly appreciate it!
@sushobhitrathore2555 2 years ago
You must start with the state probabilities as [0 1 0]... you will get the same values.
@angadbagga9166 1 year ago
@@sushobhitrathore2555 lol, why [0 1 0]? Sunny is at the last position. And even if we go your way we don't get the right result: the result of pi2A per your funda is [0.252, 0.272, 0.476].
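For anyone comparing hand calculations in this thread: the starting vector is not the problem; stopping after three iterations is. From any starting distribution, repeatedly applying the transition matrix converges to the same stationary vector. A short sketch, using the transition matrix implied by the numbers computed above (an assumption reconstructed from this thread's arithmetic, so double-check it against part 1 of the series):

```python
import numpy as np

# Transition matrix reconstructed from the hand calculations in this
# thread (an assumption -- verify against the video before relying on it).
A = np.array([[0.5, 0.3, 0.2],
              [0.4, 0.2, 0.4],
              [0.0, 0.3, 0.7]])

def stationary(pi, A, steps=200):
    """Power iteration: repeatedly apply pi <- pi @ A until it settles."""
    for _ in range(steps):
        pi = pi @ A
    return pi

# Three iterations are not enough, but any starting distribution
# converges to the same limit, the left eigenvector with eigenvalue 1.
print(np.round(stationary(np.array([0.0, 0.0, 1.0]), A), 3))  # [0.218 0.273 0.509]
print(np.round(stationary(np.array([0.0, 1.0, 0.0]), A), 3))  # same limit
```

Both commenters' partial results ([0.168, 0.273, 0.559] after three steps from [0 0 1], and [0.252, 0.272, 0.476] from [0 1 0]) are correct intermediate values; they just haven't converged yet.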
@abhivaryakumar3107 1 year ago
Amazing video, I really needed that, and thank you so, so much for explaining everything so clearly. If I may have one request, could you please share the Python code? Thank you so, so much.
@bassami74 3 years ago
Thank you for sharing. Could you please explain how to apply an HMM to measuring earnings quality? Need your help 🙏🙏
@festusboakye3698 2 years ago
Thank you so much for making this video. Could you please extend the HMM to a tree-HMM using the forward algorithm to make inferences?
@Extra-jv5xr 8 months ago
Is this related to the Viterbi algorithm? Could you make a video on that?
@DungPham-ai 3 years ago
Thanks so much. Can you make a video about conditional random fields?
@NormalizedNerd 3 years ago
TBH I haven't explored that yet... If I do, I'll make a video 😌