Patrick Winston, the professor giving this lecture, passed away this July... Thank you, Patrick.
@joefagan93354 жыл бұрын
Oh sorry to hear that. RIP
@ThePaypay883 жыл бұрын
is it because of Corona?
@dipmodi8443 жыл бұрын
So sad to hear that, a true gem of a teacher. RIP
@BabbyCat30083 жыл бұрын
@@ThePaypay88 His McDonald's belly
@piyushsingh64623 жыл бұрын
🙏🏻🙏🏻🙏🏻 Respect from India. Rest in peace 🕊️🕊️🕊️ A great professor...
@yassinehani6 жыл бұрын
Minimax: 16:17, alpha-beta simple example: 21:51, alpha-beta big example: 24:54
@ahmedmamdouhkhaled8750 Жыл бұрын
thx
@yassinehani Жыл бұрын
@@ahmedmamdouhkhaled8750 welcome, good luck ^^
@BrunoAlmeidaSilveira9 жыл бұрын
One of the best lectures in the series: a fantastic professor and amazing teaching. Many thanks to MIT for this contribution.
@Atknss6 жыл бұрын
Are u serious? The class and professor disappointed me. The small dwarf guy explains very well. But this professor...Not even close.
@MrEvilFreakout6 жыл бұрын
@@Atknss can you tell me where I can find this mysterious dwarf that can help me understand AI, would be much appreciated, thank you in advance :)
@nesco38364 жыл бұрын
I don't know where you study or what you study, but this is an amazing lecture about AI. If you can't follow it, that says more about you, though.
@MichielvanderBlonk2 жыл бұрын
@@MrEvilFreakout I think he plays in Game of Thrones. Ask George R.R. Martin. LOL no seriously I would like to know too. Although I think this lecture was pretty good.
@kardsh8 жыл бұрын
Didn't pay attention in my classes; now here I am at 4 am watching a lecture from 7 years ago...
@kardsh8 жыл бұрын
thank you for saving my ass. also, Christopher impressed me at the end lol
@darkweiss12348 жыл бұрын
same boat here mate
@JNSStudios7 жыл бұрын
This was 3 years ago... this isn't 2021!
@kardsh7 жыл бұрын
Viral Villager I came from the future
@JNSStudios7 жыл бұрын
Nafolchan O.o
@RyanCarmellini8 жыл бұрын
Great lecture. Very clearly explained alpha-beta pruning. I liked the greater-than and less-than comparisons on each level. This was much clearer than just defining alpha and beta at each level.
@sixpooltube7 жыл бұрын
This is the Breaking Bad of AI lectures. Epic beyond comparison. I've watched it more than once and I've learned something new every time.
@johnnybegood86695 жыл бұрын
R.I.P. Patrick Winston, your work will last forever
@Apollys7 жыл бұрын
Wowwww I've never seen anyone evaluate the time cost of brute forcing chess the way he did! Amazing! This guy is just amazing.
@tcveatch11 ай бұрын
He’s only off by 10^10. While still being right. See my other comment.
@cameronmoore76757 жыл бұрын
Came here for a good explanation of alpha-beta pruning, and got what I came for. Fantastic lecture! ...but what really blew me away was how *absurdly clean* that blackboard is. Just look at it!
@maresfillies60419 жыл бұрын
For those who want to know where he talks about Min Max go to 25:00. It saved my ass.
@KulbhushanChand9 жыл бұрын
+Mares Fillies Thanks, the world needs more people like you.
@mekaseymour17847 жыл бұрын
bless you
@chg_m7 жыл бұрын
fuck you. it starts around min 16.
@flavillus7 жыл бұрын
that's the alpha-beta part, not the original minimax.
@zes72156 жыл бұрын
no such thing as savex about it, doesnt matter, schoolx, scox, these gamex etc. meaningless, cepit, do, be can do,be any nmw and any be perfx. also buyer not seller, always test profx not for test
@moosesnWoop4 жыл бұрын
Love these lectures - I think about them throughout my day. Well-seasoned lecture. Sad to hear about his passing.
@avinkon10 ай бұрын
Patrick Winston has a great teaching style, with subtle humor, childlike playfulness, and enthusiasm. An energetic and engaging lecture; enjoyed it thoroughly :)
@adityavardhanjain Жыл бұрын
This lecture is so good. It clears up the concept from both theoretical and practical angles.
@ishratrhidita93936 жыл бұрын
He is an amazing professor. I would have considered myself lucky to be in his class.
@yyk9008207 жыл бұрын
Love this professor. Calm clear explanation. Smooth voice. And humour.
@-_Nuke_- Жыл бұрын
I wanted to say a huge thank you, this was an amazing lecture! I can only imagine the elegance of modern chess engines like Stockfish and LC0... Stockfish being a brute-force and neural-network hybrid and LC0 being a pure neural-network powerhouse... The amount of knowledge someone could get from studying them would be extraordinary! If only I could have the pleasure...
@sagarpanwar27245 жыл бұрын
The clearest explanation of alpha-beta pruning and minimax
@codykala70147 жыл бұрын
This was an excellent lecture. The explanation of alpha-beta pruning was so clear and easy to follow, and Prof. Winston is excellent at presenting the material in an engaging fashion. And I loved how Prof. Winston goes the extra mile to tie in these concepts to real life situations such as Deep Blue. Thank you so much!
@hslyu3 жыл бұрын
You gave me a great inspiration. Rest in peace my teacher.
@GoogleUser-ee8ro5 жыл бұрын
Prof Winston is quite a genius at giving funny, memorable names to algorithms - British Museum, dead horse, martial arts, etc. Also, the way he explained how Deep Blue applied minimax + alpha-beta pruning + progressive deepening etc. immediately relates the material to real-life applications. Good job! But I wish he had explained more about how parallel computing helped alpha-beta pruning in Deep Blue.
@MichielvanderBlonk2 жыл бұрын
Perhaps it can be organized by branch: one process takes a branch, then when it splits it also splits the process in two. Of course when b=15 that can become cumbersome I guess.
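The "progressive deepening" mentioned above is easier to see in code than in prose. A minimal, hypothetical sketch (not the lecture's or Deep Blue's actual implementation; `search_to_depth` stands in for whatever depth-limited minimax/alpha-beta routine you actually have): run the search at depth 1, then 2, then 3, and always keep the deepest completed answer so a move is ready whenever the clock runs out.

```python
import time

def progressive_deepening(state, search_to_depth, time_budget_s=1.0):
    """Run the search at depth 1, 2, 3, ... and keep the deepest completed
    answer, so an 'insurance policy' move is always available when time is up.
    (A real engine would also abort a search that overruns the deadline.)"""
    deadline = time.monotonic() + time_budget_s
    best_so_far = None
    depth = 1
    while time.monotonic() < deadline:
        best_so_far = search_to_depth(state, depth)   # completed answer at this depth
        depth += 1
    return best_so_far

# Toy stand-in for a real search: deeper searches just take longer and report
# how deep they got.
def fake_search(state, depth):
    time.sleep(0.01 * depth)
    return f"best move from {state!r} at depth {depth}"

print(progressive_deepening("start position", fake_search, time_budget_s=0.2))
```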
@mass139829 жыл бұрын
Amazing professor. My hat is off to you, sir.
@keyboard_toucher2 жыл бұрын
Human chess players do use the alpha-beta approach (even if we don't recognize it by name); we just have a lot of additional tricks, like heuristics about which moves to explore and the order in which to do so.
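A rough, self-contained sketch of that point (the tree and its leaf values are invented for illustration, not taken from the lecture): an alpha-beta search over an explicit game tree that counts how many leaves it has to score. Looking at the more promising branch first lets a cutoff fire sooner, which is exactly what good move-ordering heuristics buy.

```python
import math

def alpha_beta(node, maximizing=True, alpha=-math.inf, beta=math.inf, counter=None):
    """Leaves are numbers (static scores); internal nodes are lists of children."""
    if not isinstance(node, list):            # leaf: static evaluation
        if counter is not None:
            counter[0] += 1
        return node
    value = -math.inf if maximizing else math.inf
    for child in node:
        score = alpha_beta(child, not maximizing, alpha, beta, counter)
        if maximizing:
            value = max(value, score)
            alpha = max(alpha, value)
        else:
            value = min(value, score)
            beta = min(beta, value)
        if alpha >= beta:                     # cutoff: the player above will never allow this line
            break
    return value

# Same two-ply game, two different orderings of the maximizer's options.
bad_order  = [[2, 1000], [3, 4]]   # weaker branch searched first: no cutoff fires
good_order = [[3, 4], [2, 1000]]   # stronger branch first: the 1000 leaf is never examined

for tree in (bad_order, good_order):
    n = [0]
    best = alpha_beta(tree, counter=n)
    print(f"best={best}, leaves evaluated={n[0]}")
```

On this tiny tree the saving is a single leaf; on a deep tree, good ordering is the difference between examining roughly b^d and roughly b^(d/2) positions, which is why engines spend real effort guessing the best move first.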
@insidioso43047 жыл бұрын
Greetings from the Politecnico di Milano; thank you for these beautiful lectures!
@DusanResin5 жыл бұрын
Great explanation! It's basically everything you need to build any game with an AI opponent, in one lecture. And you can easily set the level of difficulty by limiting the search depth.
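A minimal sketch of that "difficulty knob" (the toy game and its scores are made up for illustration, not from the lecture): the same minimax routine plays better or worse depending only on how deep it is allowed to look.

```python
def minimax(sticks, depth, my_turn):
    """Toy Nim-like game: players alternately take 1-3 sticks, and whoever
    takes the last stick wins. Score is from the searching player's viewpoint."""
    if sticks == 0:
        return -1 if my_turn else +1   # if it's now "my" turn, the opponent took the last stick
    if depth == 0:
        return 0                       # lookahead exhausted: call it even
    scores = [minimax(sticks - take, depth - 1, not my_turn)
              for take in (1, 2, 3) if take <= sticks]
    return max(scores) if my_turn else min(scores)

def choose(sticks, depth):
    """Pick the move the depth-limited search likes best."""
    return max((take for take in (1, 2, 3) if take <= sticks),
               key=lambda take: minimax(sticks - take, depth - 1, False))

for difficulty in (1, 3, 7):           # deeper lookahead = stronger opponent
    print(f"depth {difficulty}: from 6 sticks, take {choose(6, difficulty)}")
```

With 6 sticks the winning move is to take 2 (leaving a multiple of 4); the depth-1 search can't tell the moves apart (all its scores are 0) and falls back on the first legal move, while depth 3 and beyond find the right one.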
@FreshEkie7 жыл бұрын
Excellent, very helpful for my Artificial Intelligence exam. Greetings from Germany.
@OlivierNayraguet4 жыл бұрын
I am into AI and game theory now with Columbia Engineering, and I really enjoyed this presentation. So long, professor.
@alakamyok12614 жыл бұрын
Amazing teacher. Thanks to the engineers of yesterday, and to MIT, we have access to these gems.
@dagual44738 жыл бұрын
Thanks to the guy who wrote the subtitles. It clearly made me understand better.
@behind-the-scenes420 Жыл бұрын
Best instructor ever. Love from Comsats Islamabad
@JenniferLaura929 жыл бұрын
What a great explanation, elaborated very well! Thank you
@nallarasukrishnan4364Ай бұрын
Great lecture! Please upload a similar lecture video for the Monte Carlo Tree Search algorithm. Thank you!
@celialiang14854 жыл бұрын
Thank you for this great speech. RIP professor.
@shiningyrlife10 жыл бұрын
Best minimax and alpha-beta pruning explanation I've ever seen!
@HanhTangE5 жыл бұрын
Seems like a really nice professor. My AI professor is also a nice, good teacher, but leaves out some details which I learned here. Thanks for the great courses!
@AlexandrSudakov3 жыл бұрын
I wish I had had such a lecturer at my university :) I especially liked the moment about cloud computing at 11:07
@bhargavaramudu32424 жыл бұрын
This lecture is awesome...such a great professor he is...I absolutely love him
@chemicalfiend1015 жыл бұрын
I didn't think anyone would call a bulldozer sophisticated, but they are! This course is quite eye-opening.
@GustavoFreitas-fe4rq4 ай бұрын
Amazing lecture!!! I'm so glad to have been recommended this.
@EliusMaximus4 жыл бұрын
Amazing lecture, I am very grateful that this has been recorded, thank you for spreading knowledge for free
@EtzEchad11 ай бұрын
The game tree depth is just one factor. A bigger problem is the evaluation of the board at each level. That is what makes current chess engines winners.
@perseusgeorgiadis78212 жыл бұрын
Finally. Someone explained this stuff in a way I could understand
@jaceks63387 жыл бұрын
This prof explains stuff so well. Respect.
@mccleod62352 жыл бұрын
Great lecture, but I hope that AI has advanced far enough now to automatically filter the coughing noises out of the audio. It sounds like a TB clinic in there.
@dawidlokiec15262 ай бұрын
"[...] Hubert Dreyfus wrote a paper in about 1963 in which he had a heading titled, Computers Can't Play Chess. [...] Whereupon Seymour Pavitt wrote a rebuttal to Dreyfus' famous paper, which had a subject heading, Dreyfus Can't Play Chess Either." I can't find this papers anywhere. Can anyone help me here?
@stevenstefano87784 жыл бұрын
Great video and lecture! Required viewing from my AI professor at Pace University. Worth every second!
@junweima6 жыл бұрын
I pray every day for more lectures
@narobot7 жыл бұрын
This is such a great video. I am pretty amazed at how anyone could have come up with this. Great lecture.
@debangan4 жыл бұрын
The dude with his leg up just reinvented a whole damn idea in class. No wonder he is at MIT and I am not.
@charlesrodriguez62764 жыл бұрын
Since school is online anyway and the whole course is project-based for me, I'm going to MIT online for my fall semester.
@mofakah59064 жыл бұрын
Came for just the minimax but I stayed for the whole lecture. Thanks MIT
@shawntsai32777 жыл бұрын
This professor is perfect. It is a waste of time to attend the same classes at another school.
@ohserra10 жыл бұрын
Thank you Professor Patrick! I wish I had had some professors like you!
@garimamandal982 жыл бұрын
Very well explained. All my doubts got cleared
@themathaces83703 жыл бұрын
Beautiful lecture. Thanks very much.
@davidiswhat6 жыл бұрын
Damn, this was good. I ended up skipping the proof-like stuff and could only really understand the actual algorithm. Might watch more of these.
@tcveatch11 ай бұрын
On full depth search (13:44 ish) he says 10^(80+10+9+7) = 10^106, but it's actually 10^116. Sure, his point holds, but he's just off by a factor of 10^10 = 10 billion.
@rnmisrahi2 жыл бұрын
Small error: to convert seconds to nanoseconds you need to add 6 to the exponential factor, not 3. Still, this would not impact the point made by this brilliant professor
@__-to3hq5 жыл бұрын
I'm glad I never went to a university; someone like me needs to hear or see something done a few times, so video lectures from MIT are better for me xD
@shubhamsawant56092 жыл бұрын
Cleared up my understanding of game search
@EchoVids2u5 жыл бұрын
30:29 Shouldn't the root then be = 8 ?
@JITCompilation2 жыл бұрын
Lectures like this make me wish I didn't screw around so much in high school :C should've gone to MIT instead of my crappy uni
@majusumanto90165 жыл бұрын
I noticed something here about alpha and beta: what if we swap the positions of 3 and 9 in the left subtree... then the first cutoff wouldn't happen... so the interesting thing is that the alpha-beta cutoffs depend on the order of evaluation... For example, if you evaluate starting from the right-hand position, the cutoffs will be different :D ... anyway, thank you for the explanation... it's really clear
@alphabeta55046 жыл бұрын
A SIMPLER explanation:

1.- HEURISTIC
We have 3 variables (Graphics=8, Gameplay=7, Sound=10) which represent how good a VIDEOGAME is, but we want to find the SIMPLEST WAY to get a UNIQUE NUMBER to represent the 3 values (this is called a heuristic):
Option 1: multiply them = Graphics * Gameplay * Sound = 560. This has 2 problems: 1.1.- If one of the variables is zero, the heuristic is zero. 1.2.- All the variables have the same importance (weight) in the equation.
Option 2: sum them = Graphics + Gameplay + Sound = 25. This solves the cancellation problem when one of the variables is zero, but the problem of all the variables having the same importance (weight) persists.
Option 3: (Graphics * GraphicsWeight) + (Gameplay * GameplayWeight) + (Sound * SoundWeight) = 8*0.5 + 7*0.3 + 10*0.2 = 8.1. This SOLVES BOTH PROBLEMS: cancellation and equal weight for all variables.
Generalization: Sum(Vi * Wi). NOTE: same equation as a NEURON in NEURAL NETWORKS. In fact, MINIMAX could be seen as a run-time-generated FEEDFORWARD neural network.

2.- STATES
A game like chess, Tic-Tac-Toe, Checkers... is made of a board and tokens. Every combination of board + tokens is called a 'state' or 'node'.

3.- TREE of STATES
If two players play chess, they alternate moves and each has several possible moves to choose between. This generates a tree:
Level 1 (1st turn): all the possible moves of the WHITE player
Level 2 (2nd turn): all the possible moves of the BLACK player
Level 3 (3rd turn): all the possible moves of the WHITE player
:
The IDEA is to evaluate each node (STATE) and assign a number (heuristic) representing how good or bad the result of the move is (one level for WHITE (MAX), and the next for BLACK (min)). You only need to find a PATH to a STATE (node) where you have won (eliminated the other player's king). The difficulty lies in that you can only pick either the ODD levels (playing WHITE) or the EVEN levels (if you're playing with the BLACK tokens). MAX (WHITE) has to pick the branches that leave min (BLACK) with the highest values for WHITE to choose from on the next turn (the WORST for min is the BEST for MAX).

4.- PRUNING BRANCHES with BAD results (alpha-beta pruning) to avoid evaluating the HUGE TREE
You CANNOT evaluate the whole tree because it's generated at run time and it's HUGE, so you use "alpha-beta pruning" to avoid analyzing branches with bad results for your side. Alpha starts at -infinity and beta starts at +infinity; when they cross, that branch is discarded.

5.- IMPLEMENTATION (of chess):
0.- Define a heuristic based on things like occupying the center of the board, different weights for each piece, ...
1.- An openings library (en.wikipedia.org/wiki/List_of_chess_openings) with a heuristic value for each one.
2.- Minimax with alpha-beta pruning for the middle game.
3.- A mate library (en.wikipedia.org/wiki/Checkmate_pattern) to deal with game endings.
You only need to look a few levels down to choose the branch. How many levels? As many as you can within a TIME LIMIT (the higher the difficulty, the more time to search).
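Point 1 above is a one-liner in code. A tiny sketch using the comment's own made-up numbers and weights (the chess features named at the end are just examples, not a real engine's evaluation):

```python
def heuristic(features, weights):
    """Linear static evaluator: sum over feature_value * feature_weight."""
    return sum(features[name] * weights[name] for name in weights)

# The comment's made-up videogame example: 8*0.5 + 7*0.3 + 10*0.2
game = {"graphics": 8, "gameplay": 7, "sound": 10}
importance = {"graphics": 0.5, "gameplay": 0.3, "sound": 0.2}
print(heuristic(game, importance))   # 8.1, up to floating-point rounding

# For chess the features would instead be material, mobility, king safety,
# center control, ... with weights tuned by hand or learned.
```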
@rohitdas4753 жыл бұрын
Pruning explained in the perfect way !!
@zerziszain8 жыл бұрын
I have a question: At 36:22 he says, what if we don't have enough time and we only went down to the (d-1)th level? And then he also suggests we can have a temporary answer at every level as we go down, so that we have some approximate answer at any point in time. But!! How can we have any answer without going to the leaf nodes, because it's only at the leaf nodes that we can conclude who wins the game? Think of this for the tic-tac-toe game. At the (d-1)th level we don't have enough information to decide whether this series of moves up to this node at (d-1) will win me or lose me the game. At higher levels, say at (d-3), it's even blurrier! Everything is possible as we go down, isn't it? So, if an algorithm decides to compute only down to the (d-1)th level, then all those path options are equal!! Nothing guarantees a win and nothing guarantees a loss at the (d-1)th level, because if I understand correctly, wins and losses are calculated only at the leaf nodes. This is especially true in the pure minimax algorithm. So how exactly are we going to have an 'approximate answer' at the (d-1)th level, or say the (d-5)th level?
@jaybhavsar57388 жыл бұрын
You are correct in the sense that any level less than d does not 'guarantee' a winner. Going down d levels guarantees a winner iff both players play 'optimally'. This is feasible in games with small depth (e.g. tic-tac-toe), but in a game like chess it is impossible to build a tree that huge (it is estimated that it would take the world's fastest computer around a billion years to make the first move!!). So here we don't care about the most optimal move, we just care about a somewhat good move. And yes, technically (but not practically) you can beat it. Hope this helps.
@zerziszain8 жыл бұрын
Thanks for your response. You didn't get my question. How are you going to decide whether a certain move is a "somewhat good move"? Only leaf nodes can tell you what is a good move and what is a bad move. Forget about finding the winning move at the (d-5) level; think about not choosing a losing move. At the (d-5) level, how can you decide that a certain move isn't going to lose you the game down the road?
@VladimirMilojevicheroj8 жыл бұрын
A heuristic function is used. That function, given the current state of the game, returns a number which measures how LIKELY it is for me to win. How that function is constructed depends on the nature of the game and on analysis of how easy it is for the other player to respond to my move (this comes from the power of searching the tree). We may tune that function dynamically during the game, because its quality greatly influences our success. In chess it may look something like this: c1 * material + c2 * mobility + c3 * king safety + c4 * center control, where c1, c2, c3, c4 are constants which tell us which factor is more important than the others. As you can conclude, these functions certainly aren't perfect. Humans tend to have a better understanding of the position, using experience and common patterns.
@yellowsheep10007 жыл бұрын
Agree! It's basically determining the likelihood of winning the game based on the features f's of the current state. g(weights * features) is shown in the lecture.
@nownomad7 жыл бұрын
I was confused about it for a little bit too. From what I can understand, in our case each node of the tree represents a board configuration. As someone else mentioned, we must have a heuristic to determine how good a particular board configuration (node) is from the perspective of min and max. The lecturer mentioned that one possibility for the heuristic is the piece count for each player (just as an example). But obviously the value of each piece is not equal, so that wouldn't be the best heuristic; still, you get the idea of what could be used as one. If we didn't have any such heuristic, then the only values would be -1, 0, 1 at the leaf nodes. They would represent final configurations: loss, draw, win. In that case, yes, you wouldn't be able to determine values at intermediate nodes, because there's nothing to tell you whether min or max is closer to winning or losing.
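Pulling the thread together, here is a hedged sketch of the kind of evaluator being described (the feature names and weights are invented for illustration, not from the lecture): exact values when the game is actually over, and a weighted-feature guess at the frontier of a depth-limited search.

```python
import math

def evaluate(state, game_over, winner, features, weights):
    """Score a position for the maximizing player.

    * If the game has truly ended, return an exact value (win/loss/draw).
    * Otherwise (we only stopped because the depth budget ran out), return a
      heuristic guess such as a weighted sum of hand-picked features.
    """
    if game_over:
        if winner == "max":
            return math.inf      # proven win
        if winner == "min":
            return -math.inf     # proven loss
        return 0.0               # draw
    # Frontier of a depth-limited search: estimate, don't prove.
    return sum(features[name](state) * weight for name, weight in weights.items())

# Hypothetical chess-flavoured features (crude, for illustration only):
features = {
    "material": lambda s: s["my_material"] - s["their_material"],
    "mobility": lambda s: s["my_moves"] - s["their_moves"],
}
weights = {"material": 1.0, "mobility": 0.1}

midgame = {"my_material": 39, "their_material": 36, "my_moves": 30, "their_moves": 25}
print(evaluate(midgame, game_over=False, winner=None,
               features=features, weights=weights))   # 3.0 + 0.5 = 3.5
```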
@nicoscarpa5 жыл бұрын
In the alpha-beta example, the branch that doesn't actually get created is just the right-most one that leads to the terminal node (not computed, because of the "cut"). Is that right? If it's right, then the statement "it's as if that branch doesn't exist" (24:00) must be interpreted such that the algorithm will never choose the action that leads to the right-hand node (the one
@spectator51442 жыл бұрын
yes
@XTrumpet63X7 жыл бұрын
I feel proud that I've been watching MIT lectures enough to have gotten the "celebration of learning" reference. xD
@axelkennedal56566 жыл бұрын
What does it mean?
@jamesbalajan38504 жыл бұрын
@@axelkennedal5656 Euphemism for an exam?
@haoyangy70266 жыл бұрын
Jeez, this prof is so cool in the way he talks about things. Wish I had a teacher like that, so I didn't have to watch this for a class with a super bad teacher lol
@alpotato653110 ай бұрын
RIP. Great explanations!
@khairolhazeeq54264 жыл бұрын
I have a few questions. The first one is: is minimax considered a state-space search? If it is, is there a goal state/node?
@amrmoneer58814 жыл бұрын
This is beautiful. He explained it in simple terms very well.
@BertVerhelst9 жыл бұрын
I wonder how much you save by using the tree of the last move as a basis for the next one, since the min player can be a human, and he might not take the branch you predicted. So the alpha beta algorithm assumes the min player will always take the option that is most in the min player's own interest, which is not always the case in computationally "flawed" humans.
@richardwalker37609 жыл бұрын
If the computer is the superior player, then it doesn't matter when the human makes a poor move. The computer, when doing the initial search, decided that the branch in question was "too good to be true." Thus when the human makes that move, the computer can re-discover the path that was originally "too good to be true" with less effort than it took to find it the first time (because we are one level deeper in the tree). Bottom line: Computers (when properly programmed) spend the bulk of their time analyzing the game under the assumption that the opponent is just as good as the computer. Whenever the opponent makes a poor move, the computer can recognize and capitalize on that gain relatively quickly, making the time wasted earlier irrelevant.
@MichielvanderBlonk2 жыл бұрын
@@richardwalker3760 Probably caching values or entire trees can be of value? Otherwise you are recalculating things you've seen before.
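That caching idea is standard practice in real engines, where it is usually called a transposition table. A rough sketch under the assumption that you already have some `search(position, depth)` function and a hashable position encoding (both placeholders here, not a real engine API):

```python
# Sketch of a transposition table: remember positions we have already searched,
# so that re-searching after the opponent's move (or reaching the same position
# via a different move order) can reuse earlier work.
table = {}

def cached_search(position, depth, search):
    key = (position, depth)          # position must be hashable (e.g. a FEN string)
    if key not in table:
        table[key] = search(position, depth)
    return table[key]

# Toy demo: the "search" just reports how often it was really called.
calls = [0]
def fake_search(position, depth):
    calls[0] += 1
    return f"score for {position} at depth {depth}"

cached_search("some position", 3, fake_search)
cached_search("some position", 3, fake_search)   # served from the table
print(calls[0])                                   # -> 1
```

Real tables also store the search depth and bound type for each entry so a shallow cached result isn't trusted when a deeper answer is needed, but the memoization idea is the same.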
@alikirigopoulou704 жыл бұрын
Does anyone know what app he is using to visualize the different algorithms?
@mitocw4 жыл бұрын
He is using Java for the Demonstrations, see ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-034-artificial-intelligence-fall-2010/demonstrations/ for more info. Best wishes on your studies!
@berke-ozgen3 жыл бұрын
Great and impressive lecture.
@ShivangiSingh-wc3gk6 жыл бұрын
I got thrown off a little on the alpha-beta part. So at each level, when we make comparisons, do we look at the values from both the min and max perspectives?
@blackmage-893 жыл бұрын
That guy with the cough REALLY should have stayed at home that day wth
@livb41392 жыл бұрын
39:02 "unfortunate choice of variable names" lmfao
@rj-nj3uk6 жыл бұрын
When the intro didn't say "This content is provided under MIT OpenCourseWare..." I thought my earphones had broken.
@maliknaveedphotography8 жыл бұрын
Sir, you are great! This really is an excellent lecture :) Thanks
@Apollys7 жыл бұрын
"Marshall" Arts >_< I was hoping it was a pun, but it looks like it's not...
@vasugupta16 жыл бұрын
Very nicely explained the concepts my AI lecturer couldn't teach
@anastazjaanna2634 жыл бұрын
Why is bS an unfortunate choice of variables?
@thomascook85709 жыл бұрын
I paused the video just after he explained the 2^2 British Museum method, and I thought, "Hang on, shouldn't the player who's making the move be trying to figure out what the implications of planning 5 moves ahead are, given that each second successive state has to make assumptions about what the opponent would do in the intermediate state? So I started thinking about how you might predict an opponent's move, and even got onto thinking about whether you could use a neural network to predict it based on the opponent's previous moves!" Then he explained minimax and I felt stupid... Although it does raise a second question: "If you matched two of these algorithms against each other, is the result deterministic?" Any ideas?
@Apollys7 жыл бұрын
Yes, of course. Then you simply have a single deterministic program.
@marmathic98747 жыл бұрын
Not necessarily. The algorithms can use random numbers if there are moves with the same value.
@MichielvanderBlonk2 жыл бұрын
@@marmathic9874 But they don't. And combining two deterministic algorithms always results in a new deterministic algorithm, and thus a deterministic result.
@Ceilvia5 жыл бұрын
This Prof draws beautiful trees
@wolftribe665 жыл бұрын
24:59 man had a tree prepared like a G
@ahgiynq4 жыл бұрын
Thank you for these great lectures
@xinxingyang44774 жыл бұрын
Something I can't understand about the minimax algorithm is the other player. Do we assume the other player can make the same calculation over the tree to a similar depth? What if they can't, and make decisions that lead down different branches... Will that be a problem or not?
@Conor.Mcnuggets7 жыл бұрын
@ 29:34, for the deep cut, did he compare two max nodes, or did he compare the bottom min node with the root max node?
@RealMcDudu8 жыл бұрын
00:30 - funny, but Dreyfus's account of this whole "battle against AI" thing is that he won (check out the really cool philosophy movie "Being in the World")
@WepixGames5 жыл бұрын
R.I.P Patrick Winston
@IsaacBhasme10 жыл бұрын
Good lecture. Elaborated very well.
@timgoppelsroeder1212 жыл бұрын
Funny that the minimax algorithm is isomorphic to extensive form games in game theory
@myrsinivak69936 жыл бұрын
When exactly does A-B prune the most nodes?
@zfranklyn7 жыл бұрын
Such a great, clear lecturer!
@rchuso7 жыл бұрын
The AI controlling the camera is faulty. It should be showing some of the writing on the board.
@maresfillies60419 жыл бұрын
Awesome lecture, I have a test on these topics today. :D
@larry_the3 жыл бұрын
How'd you do on it?
@rubiskelter8 жыл бұрын
21:36 It's not "branching down"; he says "branch and bound", from the previous class.
@richardwalker376010 жыл бұрын
Phenomenal lecture. Thank you.
@bangerproductions15849 жыл бұрын
What does he mean by the martial arts example? Using your opponent's force against him?