Why does this video only have views in the thousands? It needs to be in the millions!
@nujranujranujra4 жыл бұрын
Great to see high-quality educational channels like 3Blue1Brown coming from India. Btw, what software do you use to create the animations?
@NormalizedNerd4 жыл бұрын
It's a Python library named manim, created by Grant Sanderson!
@abhirajarora76314 ай бұрын
Are you sure about that comparison?
@NikethNathАй бұрын
@@abhirajarora7631 I mean, Grant Sanderson is 3b1b, so it's bound to be similar
@suryanshvarshney111Ай бұрын
@@abhirajarora7631 Normalized Nerd will reach that level in the future, don't worry
@yiyiyan72733 жыл бұрын
This is really nice for beginners to understand the basic properties of Markov chains. It would be great if your videos could go further into hidden Markov chains and factorial Markov chains :)
@nicolasrodrigo92 жыл бұрын
You are a very good math professor, thanks a lot!
@NormalizedNerd2 жыл бұрын
Thanks a lot!!
@jayeshpatil511211 ай бұрын
Can't believe Indian content is at its prime. Number-one explanation 🔥🔥🔥
@tristanlouthrobins6 ай бұрын
Absolutely brilliant, clear explanation!
@real.biswajit2 жыл бұрын
Your videos are really helpful, brother ❤
@iglesiaszorro2974 жыл бұрын
Very catchy! I request you to make more such videos on Markov chains with these kinds of awesome representations!! Markov chains used to be a dread for me; your videos are too cool!
@NormalizedNerd4 жыл бұрын
Definitely will do!
@olesiaaltynbaeva41323 жыл бұрын
Your channel is a great resource! Thanks!
@NormalizedNerd3 жыл бұрын
Glad you think so!
@ianbowen63444 жыл бұрын
5:46 - "Between any of these classes, we can always go from one state to the other." But how can we do that if two of the classes are self-contained? Do you mean that we can always move between states within each class?
@NormalizedNerd4 жыл бұрын
"we can always move between states within each class" This is what I meant.
@zenchiassassin2834 жыл бұрын
@@NormalizedNerd thanks
@zzzt6nАй бұрын
I also think it is a bit hard to understand why it can be called a communicating class when 1 cannot reach 0.
@Mithu140623 жыл бұрын
A very good, precise explanation with nice animation. Thank you for your video. Please make more on solving numerical problems and implementing practical scenarios.
@LouisKahnIIIАй бұрын
This is excellent info, well presented. Thank you
@georgemavran97012 жыл бұрын
Amazing explanation! Can you also please explain the periodicity of a state in a Markov chain?
@harishsuthar46043 жыл бұрын
Looks like the StatQuest channel. BAM!!! Clearly Explained!!!
@NormalizedNerd3 жыл бұрын
Haha...He's a legend!
@jenamartin6157Ай бұрын
In a certain way, this video was less about Markov chains themselves and more about the underlying directed graphs. In different language for the same ideas, the communicating classes are called "strongly connected components", and you can form a "condensation graph" (which is a directed acyclic graph) by collapsing these communicating classes.
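A minimal networkx sketch of that correspondence, using a small made-up chain (not the one in the video): `strongly_connected_components` recovers the communicating classes, and `condensation` builds the directed acyclic graph of classes.

```python
import networkx as nx

# Hypothetical 4-state chain drawn as a directed graph
# (an edge means a nonzero transition probability).
G = nx.DiGraph([(0, 1), (1, 2), (2, 1), (2, 3), (3, 3)])

classes = list(nx.strongly_connected_components(G))
print(classes)           # e.g. [{0}, {1, 2}, {3}] -- the communicating classes

C = nx.condensation(G)   # directed acyclic graph with one node per class
print(list(C.edges()))   # edges between the collapsed classes
```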
@wonseoklee803 жыл бұрын
Thanks for the video. Now I can understand whenever I hear Markov chain!
@Garrick6455 ай бұрын
Bro we need more videos. Don't wait for comments just do it 🙏🙏❤❤
@amarparajuli6923 жыл бұрын
Amazing content for ML and data science people. Keep it up, bro. Will share it with my ML comrades.
@NormalizedNerd3 жыл бұрын
Much appreciated! Please do :D
@AnonymousAnonymous-ug8tp Жыл бұрын
2:48 Sir, how come state 2 is a recurrent state? It is possible that after reaching state 1, the chain keeps looping back to state 1 forever; it is not "bound" to come back to state 2 from 1.
@alewis7041 Жыл бұрын
A recurrent state just means that if you keep moving from state to state forever, you will visit that state infinitely often. Over a long enough run, state 2 will be reached again and again. State 0, on the other hand, if we ran the transitions forever, would only be visited finitely many times: a specific number of visits before the chain leaves state 0 and is unable to return.
@davethesid8960 Жыл бұрын
No, because the self-loop at state 1 doesn't have probability 1, so looping there forever has probability 0. Provided you wait long enough, you will eventually leave state 1.
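A minimal numeric sketch of that point, using a made-up self-loop probability p (not the value from the video): the chance of taking state 1's self-loop n times in a row is p^n, which vanishes as n grows, so the walk leaves state 1 and returns to state 2 with probability 1.

```python
# Made-up self-loop probability for state 1; not the value used in the video.
p = 0.6
for n in (10, 50, 100):
    # probability of taking the self-loop n times in a row
    print(n, p ** n)
# p**n -> 0 as n grows, so staying at state 1 forever has probability 0.
```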
@sushmitagoswami20339 ай бұрын
Love the explaination!
@amritayushman3443 Жыл бұрын
Thanks for the videos. They helped me a lot. I would appreciate it if you uploaded a video with a complete, in-depth mathematical analysis of the Markov chain and its stationary probability.
@kirananumalla4 жыл бұрын
Very clearly explained! Yes, it would be useful if there were more videos.
@NormalizedNerd4 жыл бұрын
Sure!
@nid84902 жыл бұрын
At 2:36: I beg to differ. There is a non-zero probability that once I go from state 2 to state 1, I will continue to be in state 1 forever. In that case, we are not *bound* to come back to state 2 ever again. So I wouldn't say the probability of ever coming back to state 2 from state 2 is *1*. (Or am I missing something here?)
@mohamedaminekhadhraoui64178 ай бұрын
The probability that we stay at state 1 forever is zero. We can go from state 1 back to state 1 once, twice, or a billion times, but we will come back to state 2 eventually.
@willbutplural2 жыл бұрын
Amazing video again 👍
@jingyingsophie8822 Жыл бұрын
I don't quite understand the part where 2 is also a recurrent state in the first example. If the definition of a recurrent state is that the probability of returning to that state is 1 (i.e. guaranteed), wouldn't 2 be a transient state, since there is the possible case where 1 goes back to itself ad infinitum?
@dariovaccaro9401 Жыл бұрын
Yes, that's true. I think he doesn't define the two different cases well enough.
@martusha122 күн бұрын
great video man
@stivenap1563 жыл бұрын
I am now a fan! New subscriber !
@さくら-z4y3kАй бұрын
Thank you so much
@cassidygonzalez3744 жыл бұрын
Love your videos! Very clearly explained
@NormalizedNerd4 жыл бұрын
Thanks mate!
@Realstranger69 Жыл бұрын
Hello, dumb question: shouldn't state 2 be transient as well? I mean, there is an extremely small chance (but not zero) that in a random walk we go from state 2 to state 1 and then keep looping through state 1 forever, hence never coming back to state 2? No? Thanks, love your vids.
@kindykomal2 жыл бұрын
Why don't our teachers teach like this? I was hating maths a few minutes ago, until I turned on this video. Thank you so much for this much-needed video 🥺. Now I kinda want to do a PhD in this instead 😂🙏🏻
@melissachen15813 жыл бұрын
I think there is a mistake at 2:56? 2 is not a recurrent state, because after we leave 2 the chance of going back to 2 is less than 1 when 1 loops back to itself. Only 1 is a recurrent state, because after we leave 1 it's 100% that we will come back to 1. Can someone confirm that?
@Mosil03 жыл бұрын
I was thinking the same thing, but I suppose if you consider an infinite number of steps, eventually the probability of going back to 2 approaches 100%
@arafathossain18033 жыл бұрын
Great one
@niccolosimonato14784 жыл бұрын
Damn, that's a smooth explanation
@NormalizedNerd4 жыл бұрын
Thanks!!
@williammoody19113 жыл бұрын
Love the videos. Can't wait to get you to 100k subs!
@NormalizedNerd3 жыл бұрын
Keep supporting 😁
@مصطفىعبدالجبارجداح2 жыл бұрын
Thanks
@lebzgold74753 жыл бұрын
Amazing animation! Thank you.
@NormalizedNerd3 жыл бұрын
My pleasure!
@muhammadrivandra50654 жыл бұрын
Subscribed, awesome stuff dude
@NormalizedNerd4 жыл бұрын
Awesome, thank you!
@OmerMan9923 жыл бұрын
Great videos! Would you consider making a video (or videos) on queueing theory for stochastic models, please?
@SuiLamSin8 ай бұрын
very good video
@preritgoyal92939 ай бұрын
Great, brother 👌👌 So, if the stationary distribution has all non-zero values, will the chain be irreducible (since all states can communicate with each other)? And reducible if any of the states has a 0 value in the stationary distribution?
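A minimal numpy sketch of how one might poke at this question, using a hypothetical 3-state transition matrix (not the one from the video): take a stationary distribution as a left eigenvector of P for eigenvalue 1, normalize it, and see which entries are zero.

```python
import numpy as np

# Hypothetical 3-state transition matrix (rows sum to 1); not the one from the video.
P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.6, 0.2],
              [0.0, 0.3, 0.7]])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
pi = pi / pi.sum()
print(pi)  # all entries > 0 here, consistent with this particular chain being irreducible
```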
@ahlemchouial46213 жыл бұрын
Thank you so much, amazing videos!!!
@NormalizedNerd3 жыл бұрын
You're very welcome!
@zahraheydari1722 жыл бұрын
Thank you for your channel and all your videos. I had a question while watching this video: how does this relate to the definition of a Markov chain that you gave in part one, which said the probability of the future state only depends on the current state?
@sumitlahiri2094 жыл бұрын
Fantastic !!
@NormalizedNerd4 жыл бұрын
Many thanks!
@johnmandrake88294 жыл бұрын
yes more please.
@NormalizedNerd4 жыл бұрын
Working on it!
@ayushshekhar1901 Жыл бұрын
Good presentation, but I have a doubt about the end. How can we go from any state to any other state after the transformation into similar states?
@asthaagha9505 Жыл бұрын
🥺🥺🥺 Thank you
@yijingwang7308 Жыл бұрын
Thank you for your video. But I am confused: you said the sum of outgoing probabilities equals 1, yet in the first example the sum of outgoing probabilities of state 0 seems to be less than 1?
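A small sketch of the property being asked about, with a hypothetical transition matrix (not the one from the video): a self-loop also counts as an outgoing edge, so every row of a valid transition matrix sums to 1.

```python
import numpy as np

# Hypothetical transition matrix; the diagonal entries are self-loops
# and they count toward each row's total.
P = np.array([[0.7, 0.3, 0.0],   # state 0: 0.7 self-loop + 0.3 to state 1
              [0.0, 0.4, 0.6],
              [0.5, 0.0, 0.5]])

assert np.allclose(P.sum(axis=1), 1.0), "each row (outgoing probabilities) must sum to 1"
```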
@丁珊珊-t4o4 жыл бұрын
wow this kind of random walk demo is very helpful
@NormalizedNerd4 жыл бұрын
Glad you found this helpful!
@karannchew25342 жыл бұрын
Notes for my future revision. *New terminology:* Transient state. Recurrent state. Reducible Markov chain. Irreducible Markov chain. Communicating classes.
@Frog-c5yАй бұрын
Is there a video on No U-Turn Sampler (NUTS)? Thanks
@SARKARSAIMAISLAM Жыл бұрын
Great video... Class 1 (state 0) and class 3 (state 3) can't communicate with the others, so how are they communicating classes???
@anushaganesanpmp76024 жыл бұрын
Please upload more detailed videos on the properties and applications.
@NormalizedNerd4 жыл бұрын
Video coming soon :)
@daniekpo Жыл бұрын
Great video. Just one observation: state 1 is NOT recurrent. A state cannot be recurrent and transient at the same time. The probability of never visiting state 0 again is greater than 0, so by definition it can't be recurrent. To be recurrent, all paths leading out of the state have to eventually lead back to that state, but that's not the case for state 0. Am I missing something?
@llss793 жыл бұрын
You could have explained why what is the utility of simplifying markov chains into irreducible and what is the math difference when considering them separated.
@aerodynamico64272 жыл бұрын
"...why what is the utility"?
@mohammedbelgoumri2 жыл бұрын
Great video, is the source code available somewhere?
@arvinpradhan4 жыл бұрын
Discrete-time Markov chains and continuous-time Markov chains, please.
@NormalizedNerd4 жыл бұрын
Suggestion noted!
@kaushalgagan67234 жыл бұрын
More 🤩....
@NormalizedNerd4 жыл бұрын
Sure!
@webdeveloper-vy7hb3 жыл бұрын
How did you use manim to create the blinking effect for the random walk? Could you share that portion of the code? I started learning manim recently but couldn't manage to do it.
@NormalizedNerd3 жыл бұрын
I created a custom manim object to create the graphs (markov chains). Then I'm just walking through the vertices and edges. The blinking effect is just creating a circle and fading it immediately.
@webdeveloper-vy7hb3 жыл бұрын
@@NormalizedNerd I see. It would be great if you could share the custom object's code.
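A rough sketch of the blinking effect as described above, in manim (community edition); this is a guess at the idea, not the author's actual custom object. Rendering would be something like `manim -pql blink.py BlinkDemo`.

```python
from manim import *  # manim community edition

class BlinkDemo(Scene):
    # A guess at the "blinking" trick described above: put a circle on a node
    # and fade it in and out immediately.
    def construct(self):
        node = Dot(radius=0.3, color=BLUE)
        self.add(node)
        flash = Circle(radius=0.5, color=YELLOW).move_to(node.get_center())
        self.play(FadeIn(flash), run_time=0.2)
        self.play(FadeOut(flash), run_time=0.2)
```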
@7369394 жыл бұрын
Basically these are the strongly connected components.
@NormalizedNerd4 жыл бұрын
Right you are...strongly connected components
@dareenoudeh44853 жыл бұрын
You are awesome
@NormalizedNerd3 жыл бұрын
Thanks a lot! :D
@geethanarvadi Жыл бұрын
If we have the state space {0, 1, 2, 3} and a given transition matrix, then how do we find p_ij(n)? Please explain this 😢
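A minimal numpy sketch of the standard approach, with a made-up 4-state transition matrix (not the one from the question): the n-step transition probability p_ij(n) is the (i, j) entry of P raised to the n-th power.

```python
import numpy as np

# Hypothetical 4-state transition matrix for states {0, 1, 2, 3}; rows sum to 1.
P = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.1, 0.4, 0.5, 0.0],
              [0.0, 0.3, 0.3, 0.4],
              [0.0, 0.0, 0.6, 0.4]])

n, i, j = 5, 0, 3
Pn = np.linalg.matrix_power(P, n)   # n-step transition matrix P^n
print(Pn[i, j])                     # p_ij(n): probability of being in j after n steps from i
```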
@c0d232 жыл бұрын
What books should I use to learn statistics, probability, and Markov chains?
@NormalizedNerd2 жыл бұрын
The Elements of Statistical Learning (Springer); Markov Chains by J. R. Norris
@MrFelco10 ай бұрын
Hang on, if you define a transient state as "the probability of a state returning to itself is less than 1", then in the first example, would state 2 not also be a transient state? The reason being, there could be a random walk in which you go from state 2 to state 1, and then state 1 keeps looping back on itself infinitely, never going back to state 2. Then the probability of state 2 returning to itself is less than 1, given there is a random walk in which it does not return to itself.
@mohamedaminekhadhraoui64178 ай бұрын
The probability of state 1 returning to itself infinitely is 0. It is bound to return to 2 at some point.
@mohamedaminekhadhraoui64178 ай бұрын
Almost all random walks that go on forever (i.e. with probability 1) will go back to 2 if we start there.
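A small Monte Carlo sketch of this point, using a made-up self-loop probability for state 1 (not the value from the video): simulate many walks that have just moved from state 2 to state 1 and check how often they return to state 2 within a generous step budget.

```python
import random

# Hypothetical fragment of the chain: from state 1 we self-loop with
# probability p and go back to state 2 otherwise.
p = 0.9          # made-up self-loop probability, not the value from the video
trials = 100_000
max_steps = 1_000
returned = 0

for _ in range(trials):
    for _ in range(max_steps):      # we just left state 2 for state 1
        if random.random() < p:
            continue                # keep looping at state 1
        returned += 1               # we came back to state 2
        break

print(returned / trials)  # very close to 1: returning to state 2 is (almost) certain
```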
@hrithiksingla57097 күн бұрын
Saw video 1
@migratingperson1165 Жыл бұрын
Found this math concept from Numb3rs and got curious
@PsynideNeel4 жыл бұрын
When will the facecam come?
@NormalizedNerd4 жыл бұрын
That's still a while away 😅
@flyguggenheim5 ай бұрын
I think it healed my mild depression, thank you
@arounderror3747 Жыл бұрын
osu?
@SJ239823983 жыл бұрын
I will be honest, I was ready to find another video when I heard the Indian accent. But then I saw the high upvote/downvote ratio and stayed, and I don't regret it!
@NormalizedNerd3 жыл бұрын
Haha
@DejiAdegbite6 ай бұрын
No wonder it's called the Gambler's Ruin. 🤣
@tsunningwah34713 жыл бұрын
i love you
@NormalizedNerd3 жыл бұрын
❤❤❤
@laodrofotic77132 жыл бұрын
I paused the video at the 1:00 mark to tell you it is NOT GOOD to refer to states A, B, and C while the picture says states 1, 2, and 3. FFS, OK, now I will watch the rest of it, but I think this will be a waste of time just from this start; I can tell you can't explain crap.
@lorinx72553 ай бұрын
A and B are variables used in the definition, like the generalized variables you find in books, so the definition can be applied to any example.