As an economist specializing in finance and a lover of mathematics, I adore this institution, even though I've never set foot in Massachusetts, or even in the USA. Thank the Lord for these geniuses.
@johnvonhorn2942 5 years ago
What a great professor. It's a real pleasure being able to watch these lectures.
@NazriB 2 years ago
Lies again? Drink Beer + Red Wine
@ZelenoJabko 1 year ago
Nah, it's boring as f, even at 2x speed.
@Robdahelpa 6 years ago
5:00 I just love this guy's personality, what an amazing lecturer to have! So glad MIT uploads these breakthrough lectures.
@Lulue_90 5 years ago
Breathtaking? 🤔🕯
@Tadesan 6 years ago
I love the way when he comes to a stopping point he stares down the class like a gangster. You got questions huh!?
@harrypotter1155 6 years ago
Really nice refresher for python. What a funny professor! I enjoy this a lot. Thanks MIT!
@riibbert 5 years ago
20:05 Wait, did he just give a candy to the student just for trying to ask a question? Damn, that's positive reinforcement that I would like to have.
@MilesBellas 4 years ago
"...like to have for future obesity."
@adiflorense1477 4 years ago
Yep, it's candy.
@محبةالرحمان-د4ب 3 years ago
@@adiflorense1477 Could we talk privately?
@gulmiraleman4487 1 year ago
Dear Sir, huge thanks for making this course such easy peasy stuff! Thanks MIT! "Share your knowledge. It's a way to achieve immortality" - Dalai Lama
@smartdatalearning3312 3 years ago
Another well presented lecture illustrated with Python examples
@akbarrauf2741 7 years ago
thanks, MIT
@kwokhoilam2451 5 years ago
Good professor, makes things simple and fun
@actias_official 4 years ago
I think the first video he shows is not a Brownian motion simulation but rather a coarse-grained collision algorithm such as DPD or MPC.
@phillipworts5092 9 months ago
If we consider what was covered in the previous lecture about stochastic thinking, and we were trying to create a realistic model of the wandering drunk, wouldn't we need to add a factor affecting the probabilities, namely fatigue? The more the drunk walks, the more tired they get, and they start to either walk less or take smaller steps.
@abu8123 1 year ago
In the simWalks function, I think there is an error in the loop: numTrials has been passed to the walk function instead of numSteps, which is why the simulations don't depend on the number of steps.
@AlDumbrava 8 months ago
Yeah I spotted the same bug.... Edit: Continued watching... it was an intended bug xD Good prof!
@alacocacola 5 months ago
I also caught it, but then you can see that around minute 22 he gets to it. The idea was to detect the odd results and find out what was failing.
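For anyone following along without the course files, here is a minimal, self-contained sketch of the loop this thread is discussing. Function names and the walk itself are simplified stand-ins for the lecture's Location/Field/Drunk classes, so treat this as an illustration rather than the actual course code; the intentional bug is shown in a comment.

```python
import random

def walk(numSteps):
    """One random walk of numSteps steps on a grid; return the
    final Euclidean distance from the origin."""
    x = y = 0
    for _ in range(numSteps):
        dx, dy = random.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
        x += dx
        y += dy
    return (x * x + y * y) ** 0.5

def simWalks(numSteps, numTrials):
    """Run numTrials walks of numSteps steps each; collect distances."""
    distances = []
    for _ in range(numTrials):
        # The intended bug discussed above would be:
        #   distances.append(walk(numTrials))
        # which makes the results independent of numSteps.
        distances.append(walk(numSteps))  # corrected call
    return distances

random.seed(0)
d = simWalks(1000, 100)
print(sum(d) / len(d))  # mean distance grows roughly like sqrt(numSteps)
```

With the buggy call, every trial takes numTrials steps, so plotting against numSteps produces the flat, suspicious results the professor uses to motivate debugging simulations.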
@rasraster 6 years ago
PLEASE - next time you film a class, show the screen whenever the teacher is discussing what's on it. Countless times I could not see what he was talking about.
@Robdahelpa 6 years ago
Late for you, but for anybody else seeing this: in the first minute he alludes to the project files, which you should ideally download, and then you can look at them in your own time :)
@hanwengu4408 4 years ago
Maybe it is just a license thing.
@aazz7997 4 years ago
@Anifco67 You are a fool. Use the lecture notes
@studywithjosh5109 4 years ago
Anifco67 if you are not watching this, you are a fool😂
@dodgingdurangos924 3 years ago
@@aazz7997 If the NEW STUDENT is limited to downloading the slides, will those include the laser-pointing he's doing? Or should the NEW STUDENT just randomly point a finger at a note on the slide and assume that "this here" or "that over there" is where he's laser-pointing?
@siniquichi 3 years ago
Thanks Mr. Guttag and MIT
@waltwhitman7545 3 years ago
his jokes are so good and fall flat way too often loll
@shobhamourya8396 5 years ago
Simulations are used in reinforcement learning
@batatambor 4 years ago
This is a very misleading class in my humble opinion, because he is computing the average of distances, not the expected value of the random walk. The random walk should be a bell curve with its peak at distance zero, so the expected value of the walk is always zero for the Usual Drunk. What happens is that as the number of steps increases the bell curve becomes wider and you have small probabilities of finding bigger distances, hence the 'mean' distance increases a little. However, the expected distance to the origin is still zero.
@stefankk2674 3 years ago
I didn't see that when I wrote my comment below. Yeah, you figured it out right, I guess.
@stefankk2674 3 years ago
What he is talking about as distances is basically the variance of the expected value you are talking about... I think.
@stefankk2674 3 years ago
Or rather the standard deviation.
@EOh-ew2qf 3 years ago
But why is the expected distance to the origin zero? For a point that is 1 step away from the origin, there is a 3/4 chance for the second step to end up even further away from the origin. So the distance will eventually get bigger and bigger.
@stefankk2674 3 years ago
This is what I wrote earlier: When talking about distances, the sign is not relevant. Say you have two trials with one step each, and let's only allow movements in x. Assume the first trial ends at -1 and the second one at +1. The mean covered distance is then one, while the average distance from the origin is zero. It's just two ways to look at the problem: the expected value of the distance from the origin is 0, while the average distance covered is not.
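The distinction this thread is drawing can be checked with a short simulation (a simplified stand-in for the lecture's classes, not the actual course code): the mean end position is near the origin, while the mean distance from the origin grows with the number of steps.

```python
import random

def final_position(numSteps):
    """One 2-D random walk; return the final (x, y) coordinates."""
    x = y = 0
    for _ in range(numSteps):
        dx, dy = random.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
        x += dx
        y += dy
    return x, y

random.seed(0)
ends = [final_position(100) for _ in range(2000)]
mean_x = sum(x for x, y in ends) / len(ends)
mean_y = sum(y for x, y in ends) / len(ends)
mean_dist = sum((x * x + y * y) ** 0.5 for x, y in ends) / len(ends)
# The expected end position is the origin, but the expected
# distance from the origin is not zero:
print(mean_x, mean_y)  # both near 0
print(mean_dist)       # near sqrt(pi * 100) / 2, i.e. about 8.9
```

So both sides of the thread are right: the expected displacement vector is zero, while the expected distance (a non-negative quantity) grows roughly like the square root of the number of steps.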
@narnbrez 4 years ago
Why not give the abstract class the "usual" method for walking and then override it in the inherited class? Method overriding and inheritance in one example.
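A minimal sketch of that design: the base class carries the "usual" takeStep and a subclass overrides it. The class names mirror the lecture's Drunk hierarchy, but this compressed version is an assumption, not the actual course code.

```python
import random

class Drunk:
    """Base class with the 'usual' unbiased walk built in."""
    def __init__(self, name):
        self.name = name

    def takeStep(self):
        # Default behaviour: four equally likely unit steps.
        return random.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])

class MasochistDrunk(Drunk):
    """Overrides takeStep with a northward-biased distribution."""
    def takeStep(self):
        return random.choice([(0, 1.1), (0, -0.9), (1, 0), (-1, 0)])

usual = Drunk('u')
biased = MasochistDrunk('m')
print(usual.takeStep())   # one of the four unbiased steps
print(biased.takeStep())  # one of the four biased steps
```

One reason the lecture may keep separate subclasses instead is pedagogical: it makes the "usual" behaviour just one of several interchangeable drunks rather than the privileged default.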
@brendawilliams8062 3 years ago
He is standing next to his shadow and attached
@adiflorense1477 4 years ago
the course at MIT is all meat. thanks MIT
@leixun 4 years ago
*My takeaways:* 1. Why random walks 1:05
@adipurnomo5683 1 year ago
Nicely explained
@minhsp3 2 years ago
Show the damn screen!
@alute5532 1 year ago
Drunkard's walk: simulate one walk of k steps, and n such walks. Three abstractions: 1. Location (immutable), 2. Field, 3. The Drunk.
@brendawilliams8062 4 years ago
Tens float towards you.
@leejosephcommon3246 4 years ago
I wasn't sure if I would watch this drunk walk, however if a TARDIS is in play... I can make some time
@JohnbelMahautiere 1 month ago
Merci
@tomaschmelevskij623 6 years ago
I love how lazy this guy is when it comes to math 😂 Need to calculate a probability? Blah, let's just have code do that. Pythagoras for a triangle with sides 1x1? Nahh, can't be bothered... 😂 True programmer's approach IMO
@Momonga-s7o 5 years ago
Just like me when I fire up matlab to add 2 numbers
@mikevincent6332 5 years ago
The maths comes in later; these are intros.
@ases4320 4 years ago
Watching a professor point at the wall was never so interesting...
@batatambor 4 years ago
If someone could help me: in the textbook there's another example of a drunk, the EW Drunk, moving only along the horizontal axis, with steps (-1, 0) and (1, 0). However, this drunk also gets farther away from the origin. But why? If after n steps he has an equal chance of stepping either W or E, wasn't he supposed to be back at the origin according to the law of large numbers? Isn't it the same as flipping n coins and counting the number of heads and tails?
@narnbrez 4 years ago
Have you plotted it on a graph as the professor explains near the end of the video? I would expect an hourglass shape of end points. I would like to know what you found if you end up running this sim.
@batatambor 4 years ago
@@narnbrez I haven't run the simulation because the result is presented in the textbook. The professor is computing the average of distances, not the expected value of the random walk. The random walk should be a bell curve with its peak at distance zero, so the expected value of the walk is always zero for the EW drunk. What happens is that as the number of steps increases the bell curve becomes wider and you have small probabilities of finding bigger distances, hence the 'mean' distance increases a little. However, the expected distance to the origin is still zero. Kind of misleading IMO, but it is correct.
@stefankk2674 3 years ago
When talking about distances, the sign is not relevant. Say you have two trials with one step each, and let's only allow movements in x. Assume the first trial ends at -1 and the second one at +1. The mean covered distance is then one, while the average distance from the origin is zero. It's just two ways to look at the problem: the expected value of the distance from the origin is 0, while the average distance covered is not.
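For the 1-D EW drunk specifically, a quick sketch (my own toy version, not the textbook's code) makes the two quantities concrete: the average end position tends to 0, while the average absolute distance grows on the order of sqrt(n); for a fair ±1 walk it approaches sqrt(2n/π).

```python
import random

def ew_walk(numSteps):
    """EW drunk: each step is -1 (west) or +1 (east), equally likely."""
    return sum(random.choice((-1, 1)) for _ in range(numSteps))

random.seed(0)
n, trials = 400, 2000
ends = [ew_walk(n) for _ in range(trials)]
mean_end = sum(ends) / trials                   # expected value is 0
mean_dist = sum(abs(e) for e in ends) / trials  # grows like sqrt(n)
print(mean_end)   # near 0
print(mean_dist)  # near sqrt(2 * n / pi), about 16 for n = 400
```

This is exactly the coin-flip analogy from the question above: heads and tails balance on average, but the imbalance in any single run typically scales with the square root of the number of flips.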
@weilinglynn 6 years ago
Hi... has anyone here watched lecture 6.00.1x? I can't seem to find it. I need some help here. Thank you, much appreciated.
@vitor613 5 years ago
it is on edx
@JohnbelMahautiere 1 month ago
iranium inheritance
@hamitdes7865 3 years ago
Hey guys, I have one question for whoever reads this: is there any sorting algorithm that directly predicts the position of every element in the array, thereby reducing the time complexity? I'm working on such an algorithm, so if one already exists, please tell me.
@sharan9993 3 years ago
Can you explain what you mean by "predicts"? Look at Timsort once.
@hamitdes7865 3 years ago
@@sharan9993 Consider this data: [3, 1, 5, 7, 2, 9, 10, 4, 6, 5, 2, 14]. Here min = 1, max = 14, total numbers = 12. Now consider the first element, 3. By "predict" I mean predicting where 3's position in the sorted array should be: position = ((3 - 1) / (14 - 1)) * 12 ≈ 1.85, so 3 should be around the second position, which is good because in the sorted list 3 stands at the third position. And if we have more numbers, we don't have to compare every number with all the others, because we only have to compare each number with the other numbers that land at its predicted location.
@sharan9993 3 years ago
@@hamitdes7865 What about list = [0.1, 10.6, 10.4, 10.5, 10.3, 10.1]?
@hamitdes7865 3 years ago
@@sharan9993 Actually, I have thought about this and I'm still solving this problem, but if you know that the mean of the list is around (min + max) / 2, then this algorithm is good. And if you have any thoughts about solving that problem, inform me, I will be glad 😇😇
@sharan9993 3 years ago
@@hamitdes7865 Think about why it would work logically before testing it computationally. Why can we apply it to the general case?
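For reference, the idea in this thread is close to bucket sort with an interpolated index (sometimes called flashsort). A sketch using the comment's own prediction formula; the one-bucket-per-element layout and fallback sorting of each bucket are my assumptions, not part of the original proposal.

```python
def predictive_sort(data):
    """Bucket sort with an interpolated index: each element's bucket
    is predicted from its value, then each small bucket is sorted."""
    n = len(data)
    if n < 2:
        return list(data)
    lo, hi = min(data), max(data)
    if lo == hi:
        return list(data)  # all elements equal
    buckets = [[] for _ in range(n)]
    for x in data:
        # The prediction formula from the comment, scaled to a valid index.
        idx = int((x - lo) / (hi - lo) * (n - 1))
        buckets[idx].append(x)
    result = []
    for b in buckets:
        result.extend(sorted(b))  # cheap when buckets stay small
    return result

print(predictive_sort([3, 1, 5, 7, 2, 9, 10, 4, 6, 5, 2, 14]))
# [1, 2, 2, 3, 4, 5, 5, 6, 7, 9, 10, 14]
```

On clustered data like [0.1, 10.6, 10.4, 10.5, 10.3, 10.1] most elements land in a single bucket, which is exactly the weakness raised above: the result is still correct, but the running time degrades toward that of the fallback sort.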
@abduogalal53 4 years ago
I didn't understand how it became 0.05?? If anyone can explain: what did he divide?
@lindgren.bjorn1 3 years ago
When the masochistic drunk moves on the y-axis he on average gets 0.1 to the north ((1.1-0.9)/2). And since he moves in the y-axis 50% of the time, he gets .05 (0.1/2) to the north on average for every step he takes. See stepChoices in the class definition.
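The arithmetic in the reply above can be written out directly, assuming the four equally likely stepChoices quoted from the lecture's class definition:

```python
# The masochistic drunk's four equally likely steps (as quoted above):
stepChoices = [(0.0, 1.1), (0.0, -0.9), (1.0, 0.0), (-1.0, 0.0)]

# Each choice has probability 1/4, so the expected northward drift
# per step is E[dy] = (1.1 - 0.9 + 0 + 0) / 4 = 0.05.
expected_dy = sum(dy for dx, dy in stepChoices) / len(stepChoices)
print(expected_dy)  # approximately 0.05
```

Equivalently, as the reply puts it: a net 0.1 north per y-move, times the 1/2 probability of moving in y at all.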
@aviral550 2 years ago
What was the point of this whole lecture? Is it that a random walk is not so random?
@JohnbelMahautiere 1 month ago
prestige
@JohnbelMahautiere 1 month ago
union
@ccindy951357 7 years ago
Excuse me, where can I find the materials and slides for this lecture?
@mitocw 7 years ago
The course materials can be found on MIT OpenCourseWare at ocw.mit.edu/6-0002F16. Best wishes on your studies!
@anonviewerciv 3 years ago
Not-so-random. (21:21)
@augustinusntjamba4914 3 years ago
what software is being used here?
@mitocw 3 years ago
Python, see the course for more info at: ocw.mit.edu/6-0002F16. Best wishes on your studies!
@syedadeelhussain2691 7 years ago
python is tough
@quocvu9847 1 year ago
38:58
@minhsp3 2 years ago
When you attend a class, the professor faces the board and writes something on it. Do your eyes follow what he writes, or his back, or his butt? The video guys are pretty dumb; they think they have to show the speaker as much as possible. When the professor discusses some point in the results, the video guy should show the viewers the screen. Does this make sense to all of you? I am sure what I say does not make sense to you, since I am the only one pointing this out. In all my lectures, in the US or elsewhere, my face appears for only one minute, and the rest of the video shows what I write or the results of my equations.
@minhsp3 2 years ago
Show the damn screen Who cares what the professor looks like