6. Search: Games, Minimax, and Alpha-Beta

455,374 views

MIT OpenCourseWare


Comments: 267
@66javi66 · 5 years ago
Patrick Winston, the professor of this lecture, passed away this July... Thank you, Patrick.
@joefagan9335 · 3 years ago
Oh sorry to hear that. RIP
@ThePaypay88 · 3 years ago
is it because of Corona?
@dipmodi844 · 3 years ago
So sad to hear that, a true gem of a teacher. RIP
@BabbyCat3008 · 3 years ago
@@ThePaypay88 His McDonald's belly
@piyushsingh6462 · 3 years ago
🙏🏻🙏🏻🙏🏻 Respect from India. Rest in peace 🕊️🕊️🕊️ A great professor...
@yassinehani · 5 years ago
Minimax: 16:17 · alpha-beta simple example: 21:51 · alpha-beta big example: 24:54
@ahmedmamdouhkhaled8750 · 1 year ago
thx
@yassinehani · 1 year ago
@@ahmedmamdouhkhaled8750 welcome, good luck ^^
@BrunoAlmeidaSilveira · 9 years ago
One of the best lectures in the series: fantastic professor and amazing teaching. Many thanks to MIT for this contribution.
@Atknss · 6 years ago
Are you serious? The class and the professor disappointed me. The small dwarf guy explains it very well, but this professor... not even close.
@MrEvilFreakout · 5 years ago
@@Atknss Can you tell me where I can find this mysterious dwarf who can help me understand AI? Would be much appreciated, thank you in advance :)
@nesco3836 · 3 years ago
I don't know where or what you study, but this is an amazing lecture about AI. If you can't follow it, that seriously says something about you, though.
@MichielvanderBlonk · 2 years ago
@@MrEvilFreakout I think he plays in Game of Thrones. Ask George R.R. Martin. LOL no seriously I would like to know too. Although I think this lecture was pretty good.
@kardsh · 7 years ago
didn't pay attention in my classes, now here i am at 4 am watching a lecture from 7 years ago......
@kardsh · 7 years ago
thank you for saving my ass. also, Christopher impressed me at the end lol
@darkweiss1234 · 7 years ago
same boat here mate
@JNSStudios · 7 years ago
This was 3 years ago... this isn't 2021!
@kardsh · 7 years ago
Viral Villager I came from the future
@JNSStudios · 7 years ago
Nafolchan O.o
@RyanCarmellini · 8 years ago
Great lecture. Very clearly explained alpha-beta pruning. I liked the greater-than and less-than comparisons on each level; this was much clearer than just defining alpha and beta at each level.
@johnnybegood8669 · 5 years ago
R.I.P. Patrick Winston, your work will last forever
@sixpooltube · 7 years ago
This is the Breaking Bad of AI lectures. Epic beyond comparison. I've watched it more than once and I've learned something new every time.
@Apollys · 7 years ago
Wowwww I've never seen anyone evaluate the time cost of brute forcing chess the way he did! Amazing! This guy is just amazing.
@tcveatch · 10 months ago
He's only off by 10^10, while still being right. See my other comment.
@moosesnWoop · 4 years ago
Love these lectures - think about them throughout my day. Well seasoned Lecture. Sad to hear about his passing.
@maresfillies6041 · 9 years ago
For those who want to know where he talks about minimax, go to 25:00. It saved my ass.
@KulbhushanChand · 9 years ago
+Mares Fillies Thanks, the world needs more people like you.
@mekaseymour1784 · 7 years ago
bless you
@chg_m · 7 years ago
fuck you. it starts around min 16.
@flavillus · 7 years ago
That's the alpha-beta part, not the original minimax.
@adityavardhanjain · 1 year ago
This lecture is so good. It clarifies the concept from both theoretical and practical angles.
@cameronmoore7675 · 7 years ago
Came here for a good explanation of alpha-beta pruning, and got what I came for. Fantastic lecture! ...but what really blew me away was how *absurdly clean* that blackboard is. Just look at it!
@avinkon · 9 months ago
Patrick Winston has a great teaching style: subtle humor, childlike playfulness, enthusiasm, an energetic and engaging lecture. Enjoyed it thoroughly :)
@ishratrhidita9393 · 6 years ago
He is an amazing professor. I would have considered myself lucky to be in his class.
@codykala7014 · 7 years ago
This was an excellent lecture. The explanation of alpha-beta pruning was so clear and easy to follow, and Prof. Winston is excellent at presenting the material in an engaging fashion. And I loved how Prof. Winston goes the extra mile to tie in these concepts to real life situations such as Deep Blue. Thank you so much!
@yyk900820 · 7 years ago
Love this professor. Calm clear explanation. Smooth voice. And humour.
@insidioso4304 · 7 years ago
Greetings from the Politecnico di Milano; thank you for these beautiful lectures!
@keyboard_toucher · 1 year ago
Human chess players do use the alpha-beta approach (even if we don't recognize it by name); we just have a lot of additional tricks, like heuristics about which moves to explore and the order in which to do so.
@GoogleUser-ee8ro · 5 years ago
Prof. Winston is quite a genius at giving funny, memorable names to algorithms: British Museum, dead horse, "Marshall Arts", etc. Also, the way he explained how Deep Blue applied minimax + alpha-beta pruning + progressive deepening immediately relates the material to real-life applications. Good job! But I wish he had explained more about how parallel computing helped alpha-beta pruning in Deep Blue.
@MichielvanderBlonk · 2 years ago
Perhaps it can be organized by branch: one process takes a branch, then when it splits, it also splits the process in two. Of course, when b = 15 that can become cumbersome, I guess.
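The progressive deepening ("insurance policy") idea mentioned above can be sketched on a toy game. Below is a hedged Python sketch on a simple Nim-style game (remove 1-3 counters; whoever takes the last one wins). The game, the depth cap, and the time budget are illustrative assumptions for the example, not anything from the lecture itself.

```python
import time

def minimax(n, depth, maximizing):
    """Depth-limited minimax for toy Nim: remove 1-3 counters, taking the last wins.
    Scores are from the maximizer's point of view: +1 win, -1 loss, 0 unknown."""
    if n == 0:
        # The player who just moved took the last counter and won.
        return -1 if maximizing else 1
    if depth == 0:
        return 0  # ran out of depth: fall back to a neutral estimate
    scores = [minimax(n - k, depth - 1, not maximizing)
              for k in (1, 2, 3) if k <= n]
    return max(scores) if maximizing else min(scores)

def best_move(n, time_budget_s=0.5):
    """Progressive deepening: always keep the answer from the deepest
    completed search, so *some* move is ready when time runs out."""
    deadline = time.monotonic() + time_budget_s
    best = 1
    for depth in range(1, n + 1):
        best = max((k for k in (1, 2, 3) if k <= n),
                   key=lambda k: minimax(n - k, depth, False))
        if time.monotonic() > deadline:
            break  # out of time: return the deepest completed answer
    return best
```

With 6 counters the winning move is to take 2 (leaving the opponent a multiple of 4), which the deepened search finds once the depth is sufficient.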
@-_Nuke_- · 11 months ago
I wanted to say a huge thank you; this was an amazing lecture! I can only imagine the elegance of modern chess engines like Stockfish and Lc0... Stockfish being a brute-force and neural-network hybrid, and Lc0 a pure neural-network powerhouse... The amount of knowledge someone could get from studying them would be extraordinary! If only I could have the pleasure...
@mass13982 · 9 years ago
Amazing professor. My hat is off to you, sir.
@JenniferLaura92 · 9 years ago
What a great explanation, elaborated very well! Thank you
@FreshEkie · 7 years ago
Excellent, very helpful for my Artificial Intelligence exam. Greetings from Germany.
@hslyu · 3 years ago
You gave me a great inspiration. Rest in peace my teacher.
@OlivierNayraguet · 4 years ago
I am into AI and Game Theory now with Columbia Engineering, I really enjoyed this presentation. So long professor.
@DusanResin · 5 years ago
Great explanation! It's basically everything you need to build any game with an AI opponent, in one lecture. And you can easily determine the difficulty level by limiting the search depth.
@alakamyok1261 · 4 years ago
Amazing teacher, thanks to engineers of yesterday, and MIT, we have access to these gems.
@nallarasukrishnan4364 · 23 days ago
Great lecture! Please upload such a lecture video for the Monte Carlo Tree Search algorithm. Thank you!
@sagarpanwar2724 · 5 years ago
The clearest explanation of alpha-beta pruning and minimax.
@dagual4473 · 8 years ago
Thanks to the guy who wrote the subtitles. It clearly made me understand better.
@themathaces8370 · 3 years ago
Beautiful lecture. Thanks very much.
@HanhTangE · 5 years ago
Seems like a really nice professor. My AI professor is also a nice and good teacher, but he leaves out some details, which I learned here. Thanks for the great courses!
@shiningyrlife · 10 years ago
The best minimax and alpha-beta pruning explanation I've ever seen!
@behind-the-scenes420 · 1 year ago
The most excellent instructor ever. Love from COMSATS Islamabad
@GustavoFreitas-fe4rq · 3 months ago
Amazing lecture!!! I'm so glad to have been recommended this.
@EliusMaximus · 4 years ago
Amazing lecture, I am very grateful that this has been recorded, thank you for spreading knowledge for free
@stevenstefano8778 · 4 years ago
Great video and lecture! Required viewing from my AI professor at Pace University. Worth every second!
@bhargavaramudu3242 · 4 years ago
This lecture is awesome...such a great professor he is...I absolutely love him
@junweima · 6 years ago
I pray every day for more lectures
@AlexandrSudakov · 3 years ago
I wish I had had such a lecturer at my university :) I especially liked the moment about cloud computing at 11:07.
@shubhamsawant5609 · 2 years ago
Cleared up my understanding of game search.
@narobot · 7 years ago
This is such a great video; I am pretty amazed that anyone could have come up with this. Great lecture.
@celialiang1485 · 3 years ago
Thank you for this great speech. RIP professor.
@mccleod6235 · 2 years ago
Great lecture, but I hope that AI has advanced far enough now to automatically filter the coughing noises out of the audio. It sounds like a TB clinic in there.
@jaceks6338 · 6 years ago
This prof explains stuff so well. Respect.
@charlesrodriguez6276 · 4 years ago
Since school is online anyway and the whole course is project-based for me, I'm going to MIT online for my fall semester.
@chemicalfiend101 · 5 years ago
I didn't think anyone would call a bulldozer sophisticated, but they are! This course is quite eye-opening.
@garimamandal98 · 2 years ago
Very well explained. All my doubts got cleared.
@mofakah5906 · 4 years ago
Came for just the minimax but I stayed for the whole lecture. Thanks MIT
@ohserra · 10 years ago
Thank you, Professor Patrick! I wish I had had some professors like you!
@debangan · 4 years ago
The dude with his leg up just reinvented a whole damn idea in class. No wonder he is at MIT and I am not.
@perseusgeorgiadis7821 · 2 years ago
Finally. Someone explained this stuff in a way I could understand
@EtzEchad · 10 months ago
The game-tree depth is just one factor. A bigger problem is the evaluation of the board at each level; that is what makes current chess engines winners.
@berke-ozgen · 3 years ago
Great and impressive lecture.
@dawidlokiec1526 · 1 month ago
"[...] Hubert Dreyfus wrote a paper in about 1963 in which he had a heading titled, Computers Can't Play Chess. [...] Whereupon Seymour Papert wrote a rebuttal to Dreyfus' famous paper, which had a subject heading, Dreyfus Can't Play Chess Either." I can't find these papers anywhere. Can anyone help me here?
@zerziszain · 8 years ago
I have a question: at 36:22 he says, what if we don't have enough time and we only went to the (d-1)th level? He also suggests we can keep a temporary answer at every level as we go down, so that we always have some approximate answer.

But how can we have any answer without going to the leaf nodes? It's only at the leaf nodes that we can conclude who will win the game. Think of tic-tac-toe: at the (d-1)th level we don't have enough information to decide whether this series of moves will win or lose us the game. At higher levels, say (d-3), it's even blurrier; everything is still possible as we go down. Isn't it?

So if an algorithm decides to compute only to the (d-1)th level, all those path options look equal: nothing guarantees a win or a loss at the (d-1)th level, because, if I understand correctly, wins and losses are only determined at the leaf nodes. This is especially true of the pure minimax algorithm. So how exactly are we going to have an 'approximate answer' at the (d-1)th, or say the (d-5)th, level?
@jaybhavsar5738 · 8 years ago
You are correct in the sense that any level less than d does not 'guarantee' a winner. Going down d levels guarantees a winner iff both players play optimally. This is feasible in games with small depth (i.e. tic-tac-toe), but in a game like chess it is impossible to build a tree that huge (it is estimated that it would take the world's fastest computer around a billion years to make the first move!). So here we don't care about the most optimal move, we just care about a somewhat good move. And yes, technically (but not practically) you can beat it. Hope this helps.
@zerziszain · 8 years ago
Thanks for your response, but you didn't get my question. How are you going to decide whether a certain move is a "somewhat good move"? Only leaf nodes can tell you what is a good move and what is a bad move. Forget about the winning move at the (d-5)th level; think about avoiding a losing move. At the (d-5)th level, how can you decide that a certain move isn't going to lose you the game down the road?
@VladimirMilojevicheroj · 8 years ago
A heuristic function is used. That function, given the current state of the game, returns a number which measures how LIKELY it is for me to win. How that function is constructed depends on the nature of the game and on analysis of how easy it is for the other player to respond to my move (this comes from the power of searching the tree). We may even change the quality of that function dynamically during the game, because it greatly influences our success. In chess it may look something like this: c1 * material + c2 * mobility + c3 * king safety + c4 * center control, where c1, c2, c3, c4 are constants that tell us which factor matters more than the others. As you can conclude, these functions certainly aren't perfect. Humans tend to have a better understanding of the position, using experience and common patterns.
@yellowsheep1000 · 7 years ago
Agreed! It's basically estimating the likelihood of winning the game based on the features f of the current state; g(weights * features) is shown in the lecture.
@nownomad · 7 years ago
I was confused about this for a little bit too. From what I understand, each node of the tree represents a board configuration. As someone else said, we must have a heuristic to determine how good a particular board configuration (node) is from the perspective of min and max. The lecturer mentioned that one possible heuristic is the piece count for each player (just as an example). Obviously the value of each piece is not equal, so that wouldn't be the best heuristic, but you get the idea of what could be used as one. If we didn't have any such heuristic, then the only values would be -1, 0, 1 at the leaf nodes, representing final configurations: loss, draw, win. In that case, yes, you wouldn't be able to determine values at intermediate nodes, because there's nothing to tell you whether min or max is closer to winning or losing.
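The linear static-evaluation form discussed in this thread (c1 * material + c2 * mobility + ...) can be sketched as a few lines of Python. The feature names and weight values below are made up for illustration, not tuned values from any engine.

```python
# Illustrative weights for a chess-like linear evaluation; the feature
# names and the constants (the c1..c4 of the discussion above) are
# assumptions made up for this example.
WEIGHTS = {
    "material": 1.0,        # c1: piece-value difference
    "mobility": 0.1,        # c2: difference in number of legal moves
    "king_safety": 0.5,     # c3
    "center_control": 0.3,  # c4
}

def static_eval(features):
    """Linear static evaluation: sum of c_i * f_i.

    `features` maps each feature name to (my value - opponent's value),
    so a positive score means the position looks good for us, even
    though no terminal (win/loss) node has been reached.
    """
    return sum(WEIGHTS[name] * value for name, value in features.items())
```

For example, a position where we are up 3 points of material, have 5 more legal moves, and control 2 more central squares scores about 4.1. This is exactly the kind of "approximate answer" used at a depth cutoff instead of a true leaf value.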
@shawntsai3277 · 7 years ago
This professor is perfect. It's a waste of time to attend the same classes at another school.
@tcveatch · 10 months ago
On the full-depth search (around 13:44) he says 10^(80+10+9+7) = 10^106, but it's actually 10^116. Sure, his point holds, but he's off by a factor of 10^10 = 10 billion.
@majusumanto9016 · 5 years ago
I noticed something about alpha and beta: if we swap the positions of 3 and 9 in the left subtree, then the first cutoff wouldn't happen. So the interesting thing is that the alpha-beta savings depend on the order of evaluation; for example, if you evaluate from the right, the cutoffs will be different :D Anyway, thank you for the explanation, it's really clear.
@alpotato6531 · 9 months ago
rip. great explanations!
@davidiswhat · 5 years ago
Damn, this was good. I ended up skipping the proof-like stuff and only really understood the actual algorithm. Might watch more of these.
@__-to3hq · 5 years ago
I'm glad I never went to a university; someone like me needs to hear or see something done a few times, so video lectures from MIT are better for me xD
@ahgiynq · 4 years ago
Thank you for these great lectures
@JITCompilation · 2 years ago
Lectures like this make me wish I didn't screw around so much in high school :C should've gone to MIT instead of my crappy uni
@rnmisrahi · 2 years ago
Small error: to convert seconds to nanoseconds you need to add 6 to the exponent, not 3. Still, this does not affect the point made by this brilliant professor.
@rohitdas475 · 3 years ago
Pruning explained in the perfect way!!
@nicoscarpa · 5 years ago
In the alpha-beta example, the branch that doesn't actually get created is just the right-most one leading to the terminal node (not computed, because of the "cut"). Is that right? If so, then the statement "it's as if that branch doesn't exist" (24:00) must be interpreted to mean that the algorithm will never choose the action that leads to the right-hand node (the one
@spectator5144 · 1 year ago
yes
@IsaacBhasme · 10 years ago
Good lecture. Elaborated very well.
@mbousse · 2 years ago
Great lecture!
@livb4139 · 2 years ago
39:02 "unfortunate choice of variable names" lmfao
@zfranklyn · 7 years ago
Such a great, clear lecturer!
@brianbaker1124 · 6 years ago
wonderful lecture
@XTrumpet63X · 7 years ago
I feel proud that I've been watching MIT lectures enough to have gotten the "celebration of learning" reference. xD
@axelkennedal5656 · 6 years ago
What does it mean?
@jamesbalajan3850 · 4 years ago
@@axelkennedal5656 Euphemism for an exam?
@EchoVids2u · 5 years ago
30:29 Shouldn't the root then be 8?
@BertVerhelst · 9 years ago
I wonder how much you save by using the tree from the last move as a basis for the next one, since the min player can be a human and might not take the branch you predicted. The alpha-beta algorithm assumes the min player will always take the option that is most in the min player's own interest, which is not always the case with computationally "flawed" humans.
@richardwalker3760 · 9 years ago
If the computer is the superior player, then it doesn't matter when the human makes a poor move. The computer, when doing the initial search, decided that the branch in question was "too good to be true." Thus when the human makes that move, the computer can re-discover the path that was originally "too good to be true" with less effort than it took to find it the first time (because we are one level deeper in the tree). Bottom line: Computers (when properly programmed) spend the bulk of their time analyzing the game under the assumption that the opponent is just as good as the computer. Whenever the opponent makes a poor move, the computer can recognize and capitalize on that gain relatively quickly, making the time wasted earlier irrelevant.
@MichielvanderBlonk · 2 years ago
@@richardwalker3760 Probably caching values or entire trees can be of value? Otherwise you are recalculating things you've seen before.
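The caching idea raised above can be sketched as a transposition table on a toy game (Nim: remove 1-3 counters, taking the last one wins). The cache key and the game are illustrative assumptions; real engines use hashed board positions (e.g. Zobrist hashing) rather than a dict of tuples.

```python
table = {}   # transposition table: (counters, depth, maximizing) -> score
calls = 0    # positions actually evaluated, to show the cache at work

def minimax(n, depth, maximizing):
    """Depth-limited minimax with a transposition table for toy Nim."""
    global calls
    key = (n, depth, maximizing)
    if key in table:
        return table[key]          # position already searched: reuse it
    calls += 1
    if n == 0:
        score = -1 if maximizing else 1   # the previous mover won
    elif depth == 0:
        score = 0                         # cutoff: neutral estimate
    else:
        children = [minimax(n - k, depth - 1, not maximizing)
                    for k in (1, 2, 3) if k <= n]
        score = max(children) if maximizing else min(children)
    table[key] = score
    return score
```

After a first search such as `minimax(10, 10, True)`, a later search of a successor position like `minimax(7, 8, True)` is answered entirely from the table: that sub-position was already evaluated one move deeper in the earlier tree, which is the sense in which a re-search after the opponent's (even unpredicted) move costs less than the first search.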
@xinxingyang4477 · 4 years ago
One thing I can't understand about the minimax algorithm is the other player. Do we assume the other player can make the same calculation of the tree to a similar depth? What if they can't, and they choose different branches? Will that be a problem or not?
@amrmoneer5881 · 4 years ago
This is beautiful. He explained it in simple terms very well.
@xiaoliw · 8 years ago
What did Christopher ever do to get picked on? lmao, jk. This lecture was really clear, and I'm so glad there are subtitles.
@marklucernas6964 · 3 years ago
probably keeping his knee raised while talking to his prof
@risingredstone5949 · 3 years ago
@@marklucernas6964 my man Christopher's the real alpha here
@berke-ozgen · 3 years ago
@@risingredstone5949 hahaha :D nice one
@richardwalker3760 · 10 years ago
Phenomenal lecture. Thank you.
@dalest.hillaire5542 · 7 years ago
Very clear and concise.
@maliknaveedphotography · 8 years ago
Sir, you are great! This really is an excellent lecture :) Thanks
@khairolhazeeq5426 · 4 years ago
I have a few questions. The first one is: is minimax considered a state-space search? If it is, is there a goal state/node?
@AChannelFrom2006 · 8 years ago
Thank you very much for these.
@EngenhariadeSoftwareLuciana · 8 years ago
perfect lecture!
@wolftribe66 · 5 years ago
24:59 man had a tree prepared like a G
@Apollys · 7 years ago
"Marshall" Arts >_< I was hoping it was a pun, but it looks like it's not...
@thomascook8570 · 9 years ago
I paused the video just after he explained the 2^2 British Museum method, and I thought: "Hang on, shouldn't the player making the move be trying to figure out the implications of planning 5 moves ahead, given that each successive state has to make assumptions about what the opponent would do in the intermediate states?" So I started thinking about how you might predict an opponent's move, and even got onto whether you could use a neural network to predict it based on the opponent's previous moves. Then he explained minimax and I felt stupid...

Although it does raise a second question: if you matched two of these algorithms against each other, would the result be deterministic? Any ideas?
@Apollys · 7 years ago
Yes, of course. Then you simply have a single deterministic program.
@marmathic9874 · 7 years ago
Not necessarily. The algorithms can use random numbers if there are moves with the same value.
@MichielvanderBlonk · 2 years ago
@@marmathic9874 But they don't. And combining two deterministic algorithms always results in a new deterministic algorithm, and thus a deterministic result.
@vasugupta1 · 6 years ago
Very nicely explained; these are the concepts my AI lecturer couldn't teach.
@RealMcDudu · 8 years ago
00:30 - funny, but Dreyfus's account of this whole "battle against AI" is that he won (check out the really cool philosophy movie "Being in the World").
@ShivangiSingh-wc3gk · 6 years ago
I got thrown off a little in the alpha-beta part. At each level, when we make comparisons, do we look at the values from both the min and the max perspectives?
@alikirigopoulou70 · 4 years ago
Does anyone know what app he is using to visualize the different algorithms?
@mitocw · 4 years ago
He is using Java for the Demonstrations, see ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-034-artificial-intelligence-fall-2010/demonstrations/ for more info. Best wishes on your studies!
@maresfillies6041 · 9 years ago
Awesome lecture, I have a test on these topics today. :D
@larry_the · 2 years ago
How'd you do on it?
@rubiskelter · 8 years ago
21:36 It's not "branching down"; he says "branch and bound", from the previous class.
@alphabeta5504 · 6 years ago
A SIMPLER explanation:

1. HEURISTIC
We have 3 variables (Graphics=8, Gameplay=7, Sound=10) which represent how good a VIDEOGAME is, but we want the SIMPLEST WAY to get a SINGLE NUMBER representing the 3 values (this is called a heuristic):
Option 1: multiply them: Graphics * Gameplay * Sound = 560. This has 2 problems:
1.1. If one of the variables is zero, the heuristic is zero.
1.2. All the variables have the same importance (weight) in the equation.
Option 2: sum them: Graphics + Gameplay + Sound = 25. This solves the cancellation problem when one variable is zero, but the problem of all variables having the same weight persists.
Option 3: (Graphics * GraphicsWeight) + (Gameplay * GameplayWeight) + (Sound * SoundWeight) = 8*0.5 + 7*0.3 + 10*0.2 = 8.1. This SOLVES BOTH PROBLEMS: cancellation and equal weights.
Generalization: Sum(Vi * Wi). NOTE: this is the same equation as a NEURON in NEURAL NETWORKS; in fact, MINIMAX could be seen as a run-time-generated FEEDFORWARD neural network.

2. STATES
A game like chess, Tic-Tac-Toe or Checkers is made of a board and tokens. Every combination of board + tokens is called a "state" or "node".

3. TREE of STATES
If two players play chess, they alternate moves and have several possible moves to choose between. This generates a tree:
Level 1 (1st turn): all the possible moves of the WHITE player
Level 2 (2nd turn): all the possible moves of the BLACK player
Level 3 (3rd turn): all the possible moves of the WHITE player
...
The IDEA is to evaluate each node (STATE) and assign a number (heuristic) representing how good or bad the result of the move is (one level for WHITE (MAX), the next for BLACK (min)). You only need to find a PATH to a STATE (node) where you have won (eliminated the other player's king). The difficulty lies in that you can only pick either ODD levels (playing WHITE) or EVEN levels (playing BLACK). MAX (WHITE) has to pick the branches that leave min (BLACK) the highest values for WHITE to choose from on the next turn (the WORST for min is the BEST for MAX).

4. PRUNING BRANCHES with BAD results (alpha-beta pruning) to avoid evaluating the HUGE TREE
You CANNOT evaluate the whole tree, because it's generated at run time and it's HUGE, so you use alpha-beta pruning to avoid analyzing branches with bad results for your interest. Alpha starts at -infinity and beta starts at +infinity; when they cross, that branch is discarded.

5. IMPLEMENTATION (of chess):
0. Define a heuristic based on things like occupying the center of the board, different weights for each piece, ...
1. An openings library (en.wikipedia.org/wiki/List_of_chess_openings) with a heuristic value for each opening.
2. Minimax with alpha-beta pruning for the middle game.
3. A mate library (en.wikipedia.org/wiki/Checkmate_pattern) to deal with endgames.
You only need to look a few levels down to choose a branch. How many levels? As many as you can within a TIME LIMIT (the higher the difficulty, the more search time).
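The pruning step described above can be made concrete with a minimal sketch. Here the game tree is given explicitly as nested lists (numeric leaves are static evaluations); a real engine would generate moves instead, but the cutoff logic is the same.

```python
import math

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    """Alpha-beta over a tree given as nested lists; numeric leaves are
    static evaluations. Returns the minimax value of `node`."""
    if not isinstance(node, list):
        return node                      # leaf: static score
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:
                break   # cutoff: the minimizer above will never let us get here
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:
                break   # cutoff: the maximizer above already has something better
    return value
```

For the two-level tree `[[2, 7], [1, 8]]` (max at the root, min below), the value is 2, and the leaf 8 is never examined: after seeing the 1, the second min node can do no better than 1 for the maximizer, who already has 2 in hand.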
@rj-nj3uk · 6 years ago
When the intro didn't say "This content is provided under MIT OpenCourseWare...", I thought my earphones had broken.
@QueenGraceFace · 10 years ago
This is really helpful! Great lecture :)
@Conor.Mcnuggets · 7 years ago
At 29:34, for the deep cut, did he compare two max nodes, or did he compare the bottom min node with the root max node?
@WepixGames · 5 years ago
R.I.P Patrick Winston
@haoyangy7026 · 6 years ago
Jeez, this prof is so cool in the way he talks about things. I wish I had a teacher like that, so I didn't have to watch this instead of a class with a super bad teacher lol
@nXqd · 9 years ago
Thanks, Mr. Winston. This is so good :)