As a CS student, I find your videos really helpful. I hope you will keep making more content.
@MortalCowIsHere Жыл бұрын
What currency is that?
@themoldeater Жыл бұрын
Kromer
@Delibro Жыл бұрын
I too was studying counter strike for years.
@sshay90 Жыл бұрын
@@MortalCowIsHere israeli shekel
@opayke980 Жыл бұрын
@@MortalCowIsHere its israeli shekel
@SquidScopes Жыл бұрын
I love how at the beginning of the video the chess game was a Berlin draw! It was such a nice detail to add that chess computers playing at the same strength will most likely draw.
@rensblom8855 Жыл бұрын
Saw that too, loved the extra bit of attention
@AraliciaMoran Жыл бұрын
A potential issue with basic minimax is that, since it expects the opponent to also play optimally, it has no way to choose the best move among several moves of equal value. This doesn't matter when there are multiple winning moves, but it does when the algorithm only has access to moves resulting in a tie or a loss. For example, with no winning moves and several moves that would optimally result in a tie, an ideal algorithm should pick the move with the highest potential to win should the opponent play suboptimally. Minimax doesn't account for that by default. Of course, that can be resolved by treating the value of a board as broader than simply {-1, 0, +1}, using fractional values as a secondary weight that represents a move's potential in case the opponent plays a non-optimal move.
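A minimal sketch of that idea (my own naming; it assumes the player(s), actions(s), result(s, a), terminal(s) and utility(s) helpers described in the video exist, with X as the maximizing player): the primary score is the usual minimax value in {-1, 0, +1}, and the secondary score is an average over the subtree, which rewards positions where a careless opponent can still lose.

```python
# Sketch only: player(), actions(), result(), terminal() and utility() are assumed
# to be defined as in the video, with utility() given from X's point of view.

def value_with_potential(state):
    """Return (minimax_value, potential) for the given state."""
    if terminal(state):
        v = utility(state)                     # +1, 0 or -1
        return v, v
    children = [value_with_potential(result(state, a)) for a in actions(state)]
    values = [v for v, _ in children]
    best = max(values) if player(state) == "X" else min(values)
    # Secondary weight: average over the subtree, i.e. how the position tends
    # to go if the opponent strays from optimal play.
    potential = sum(p for _, p in children) / len(children)
    return best, potential

def best_move(state):
    sign = 1 if player(state) == "X" else -1   # flip the ordering for the minimizing player
    return max(actions(state),
               key=lambda a: tuple(sign * x
                                   for x in value_with_potential(result(state, a))))
```

Among moves with the same minimax value, the tuple comparison then prefers the one whose subtree average leans toward a win.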
@darkdudironaji Жыл бұрын
You brought up chess at the beginning of the video. Then you explained the algorithm with Tic-Tac-Toe. I was thinking, "There's so many moves in a chess game that you'll be sitting there forever in between moves." So I'm glad you clarified some things.
@puturavindrawiguna3025 Жыл бұрын
Please do a monte carlo tree search next! It is quite similar to minimax, and used for computer to play games too
@zivmbs Жыл бұрын
Just recently created an MCTS AI for a project of mine and scoured the web for any resources I could find, and I can say there are not so many simple explanations for it.
@brod515 Жыл бұрын
I've heard this mentioned years ago when I started learning about graphics... but never thought I could fully understand it. Spanning Tree should definitely do a video on this.
@NW11189 Жыл бұрын
When somebody at Harvard is doing this, this channel can do it... The voice is familiar: CS50...
@devanshgarg31 Жыл бұрын
Yes
@nayankumarbarwa4317 Жыл бұрын
@@NW11189 Yes, he taught cs50w and cs50ai
@PistachioAnimates8 ай бұрын
I have been trying to tackle minimax with python for a while now, this helped so much!
@Mushrooms683 Жыл бұрын
What about games like competitive Pokémon and rock paper scissors where you have to make decisions without knowing what your opponent is going to do?
@patrickwienhoft7987 Жыл бұрын
First, there is another level to Pokémon, which is randomness. You can handle this with expected values or by setting a threshold (e.g., choose the action that maximizes the chance the outcome is valued 0 or more, i.e., minimize the chance of losing without caring whether you win or draw). These are called stochastic games, and they can be solved with a bit more work.

What you are referring to are concurrent games. You can still decide whether a strategy is winning or not, but interestingly there are cases where you MUST pick a random strategy to guarantee a win. Consider for example that we play rock-paper-scissors and your goal is to win any round (no matter how often we draw or you lose before). The only rule is that you have to specify your strategy beforehand (not necessarily to me). If you always picked rock, or decided on some intricate pattern (e.g., in the n-th round take rock if n is prime, otherwise paper if n is even and scissors otherwise), there is always a chance that I picked the exact counter-strategy and always counter you. Even if you include my past choices, it doesn't matter. Surprisingly, the only way you can 100% guarantee a win is if you say "I'm picking at random", because then the chance that you win eventually approaches 1 as we play more and more rounds.

Technically, there are even worse concurrent games: ones that you cannot win with 100% probability, even if you play infinitely long, but can win with any probability ever so slightly smaller than 100%. Consider this: I have a single snowball I want to throw at you. You hide behind a hill. We play in rounds, and in each round you can decide to run to your house, which would win you the game. However, if I throw my snowball in the same turn, you lose. If I decide to throw my snowball, I will never have it again and you can just run to your house. Again, if you decide to determine your strategy beforehand as "I run on the 27th turn", there is a small chance that I would throw exactly on that turn, making you lose 100% of the time. So the only way to guarantee that you have a chance to win is again to decide randomly, e.g., by flipping a coin. If I had chosen to immediately throw my snowball, you would have a 50% chance of winning. However, you could also not flip a coin but roll a die and only run if you roll a 6. In that case I would at best have a 1/6 chance to hit you, so you win 5/6 of the time. And you can continue this: bring a 1000-sided die and you increase your chance of winning to 999/1000. In fact, any probability smaller than 1 can be achieved, but not 1 itself. While these seem like very constructed games (surprise: they are!), remember that any algorithm that would solve all concurrent games (like minimax solves all normal games) would also have to solve these games correctly, so there is no realistic way to do that.

What you would do in practice is to *assume* a probability distribution over your opponent's strategy, i.e., assume that they pick 40% rock, 30% paper and 30% scissors, or similarly assign a probability to each of the opposing Pokémon's moves/switches, and then pick the optimal strategy from there.
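To illustrate just the last paragraph, here is a tiny self-contained example of picking the best response to an assumed opponent distribution in rock-paper-scissors (the 40/30/30 split is only the example above):

```python
# Payoff from my point of view: win = +1, draw = 0, loss = -1.
PAYOFF = {
    "rock":     {"rock": 0, "paper": -1, "scissors": +1},
    "paper":    {"rock": +1, "paper": 0, "scissors": -1},
    "scissors": {"rock": -1, "paper": +1, "scissors": 0},
}

def best_response(opponent_dist):
    """Return (move, expected_payoff) maximizing expected value against opponent_dist."""
    def expected(move):
        return sum(p * PAYOFF[move][their] for their, p in opponent_dist.items())
    best = max(PAYOFF, key=expected)
    return best, expected(best)

# Paper is the best reply here (expected payoff about 0.1): it beats the 40% rock
# and only loses to the 30% scissors.
print(best_response({"rock": 0.4, "paper": 0.3, "scissors": 0.3}))
```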
@warmpianist Жыл бұрын
I'm not 100% sure of the mechanics of competitive Pokémon, but for other complete-information games (where everyone knows all players' payoffs) like rock paper scissors or the prisoner's dilemma, we can use Nash equilibrium analysis to find the best strategy *regardless* of what the opponent will do. In rock paper scissors there is no pure-strategy Nash equilibrium; the only equilibrium is the mixed strategy of playing each option a third of the time.
@omnipresentcatgod245 Жыл бұрын
There is a bot on showdown called eaubot and it's pretty good at predicting stuff
@phoe87 Жыл бұрын
Competitive Pokémon also has some "backwards guessing": you know the species of the opposing mon, but you don't know its set. So before even attempting to predict that Lando, you really want to know if it's AV, Scarf, or something else (Specs with Sludge Bomb, Kappa). That's why best-of-3 is a little more balanced: both players have enough time to make information gathering an essential part of the set, especially in the first game. I feel that Pokémon has so many variables that making a program play at tournament level is basically impossible, and if you doubt it, there was this game at Worlds that I remember: one player would have wanted something like Protect and U-turn at the end of the turn, which obviously isn't possible, so the madlad used Roar (it was VGC, I think gen 7; one of his two mons was a Zapdos) on his own protecting Pokémon and got his protected Pokémon switched out in the same turn. That day I learned that Roar bypasses Protect and that humans are way too crazy for machines.
@Mushrooms683 Жыл бұрын
@@phoe87 Wow that is awesome.
@justingolden21 Жыл бұрын
As someone who already knew all of this, I've never seen a video or person anywhere explain it so well. Both concise and simple, but it also covers everything in the detail it needs. You go over each function required, pruning, and the fact that it assumes your opponent plays perfectly. The only simple thing you missed is the reason for the name: the layers alternate between min and max, since you play your best move, then your opponent plays theirs, and so on.
@justingolden21 Жыл бұрын
More info for others: it gets a lot harder in something like chess, where not only are you limiting depth, but you have to "score" the game by something other than the terminal state, since at a depth of 5, 10, or 15 the search will hit plenty of states where the game isn't over. So you need to find the best one. You implement a function that takes in the state and returns a score, as he mentioned. In chess this means valuing each piece in a certain way, and even building a heatmap of what each piece is worth on each square. These values are determined by humans, sadly, and there are plenty of other ways to evaluate these things. Chess bots also often carry lots of info on openings, since otherwise it's harder to compare game states that early.
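A hedged sketch of what that can look like (hypothetical helper names such as board_of(); real engines are far more elaborate): when the depth budget runs out before a terminal state, fall back to a handcrafted evaluation built from material values plus a toy piece-square bonus.

```python
PIECE_VALUE = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

# Toy piece-square idea: pawns are worth a bit more as they advance (rank 0..7 for White).
PAWN_TABLE = [0.0, 0.0, 0.05, 0.1, 0.2, 0.3, 0.5, 0.0]

def evaluate(board):
    """Static score; > 0 means White is better. board: dict (rank, file) -> (color, piece)."""
    score = 0.0
    for (rank, _file), (color, piece) in board.items():
        value = PIECE_VALUE[piece]
        if piece == "P":
            value += PAWN_TABLE[rank if color == "white" else 7 - rank]
        score += value if color == "white" else -value
    return score

def depth_limited_minimax(state, depth, maximizing):
    # terminal(), utility(), actions(), result() are assumed as in the video;
    # board_of() is a hypothetical accessor for the piece layout of a state.
    if terminal(state):
        return utility(state)
    if depth == 0:
        return evaluate(board_of(state))       # heuristic guess instead of the exact value
    children = (depth_limited_minimax(result(state, a), depth - 1, not maximizing)
                for a in actions(state))
    return max(children) if maximizing else min(children)
```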
@vsyovklyucheno Жыл бұрын
I love that the position at 13:50 is from the recent World Chess Championship match (Ian Nepomniachtchi vs. Ding Liren). It didn't actually happen in the game, but the commentators were analyzing that possibility. The Berlin draw at the start too! Just crazy how many details are in there.
@Musicrafter12 Жыл бұрын
And the positions starting from 12:19 are all from Kasparov-Karpov 1987, Game 24!
@braiandeivid Жыл бұрын
I recognized the position at 13:49. The 2023 WCC with a great winning pattern! Such a wonderful easter egg, glad to see your new videos
@ferb1131 Жыл бұрын
I once wrote a good AI for a game I made using minimax and alpha-beta pruning, but I relied a lot on pseudo-code examples, and even after writing it there were things about it, especially the alpha-beta pruning, that I didn't understand. This makes it a lot clearer!
@shashankhegde4007 Жыл бұрын
Thanks for uploading videos, great to see you back
@jaideepshekhar4621 Жыл бұрын
OMG 13:50 it's the Ding Liren game 6! :D Although the game ended before this position, I love how you played 1. Qf7 Qxc3 2. Qxg8 Kxg8 to throw us off, but as soon as I saw the d5 pawn, some gears started spinning lol. XD Edit: Went back to look at the chess games more closely, and 0:10 is also a famous Berlin draw. XD I love the references. :)
@samuelbartik526510 ай бұрын
Bruh, you just summarized my 90-minute lecture and fit it into 15 minutes with beautiful animations. Thanks!
@stanleydodds9 Жыл бұрын
There are some things that continue on from alpha-beta pruning and are no more difficult to understand. For example, principal variation search, where you "guess" a good move by some cheaper (perhaps iterative) method, and then use alpha-beta pruning with a null window (i.e., alpha and beta essentially the same) to more efficiently search the rest of the moves. The downside of a null-window search is that the only results you can get are that a move is simply better, worse, or the same. But if you have a sufficiently good guess of the best move, you might expect that this is good enough: if every other move is worse, then you have proved that your guess is still optimal, despite not knowing the exact score of any other move (and allowing extra pruning). If you find a move that is better, you have to pay the cost of performing a full alpha-beta search to get its exact score, but then you can continue with PVS on the remaining moves.

Another change is negamax, which really is just a way to write the minimax algorithm in a more compact form. The idea is that the MIN player and the MAX player are really doing the same algorithm, just with a reversed ordering of evaluations. And an easy way to reverse the ordering of the values is to simply negate them. So negamax can skip the branching into two almost identical algorithms by simply swapping alpha and beta and reversing their signs at each recursive step. You do need to be careful with the evaluation function here, though, because negamax expects the result to be negated for one of the players.

Finally, another simple optimisation is the killer move heuristic. This is very simple to understand and doesn't really have much to do with alpha-beta pruning. The idea is that a lot of similar game states share a lot of the same moves, and it's quite possible that if a move is good in one state, it'll be good in a lot of similar states where it exists. So we keep track of good or bad moves in some way, and base our ordering of move evaluations on this. The whole point of alpha-beta pruning is that you are able to prune more branches if your move ordering is better; if you evaluate a sufficiently good move first, you don't need to check the rest (because it's good enough that the rest would never need to be considered). So by evaluating potentially good (or bad) moves first, we can prune more branches.
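For anyone who wants to see these ideas side by side, here is a rough negamax-style PVS sketch in Python. The helpers are assumptions of this sketch: terminal(), evaluate() from the maximizing player's point of view, result(), and ordered_actions(), which would put guessed-best and killer moves first.

```python
import math

def pvs(state, depth, alpha, beta, color):
    """Negamax form: color is +1 for the maximizing player, -1 for the opponent."""
    if depth == 0 or terminal(state):
        return color * evaluate(state)
    first = True
    best = -math.inf
    for move in ordered_actions(state):        # better ordering => more cutoffs
        child = result(state, move)
        if first:
            score = -pvs(child, depth - 1, -beta, -alpha, -color)   # full window
            first = False
        else:
            # Null-window probe around alpha: only tells us "better" vs "not better".
            score = -pvs(child, depth - 1, -alpha - 1, -alpha, -color)
            if alpha < score < beta:           # probe failed high: re-search fully
                score = -pvs(child, depth - 1, -beta, -score, -color)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:                      # cutoff; `move` is a candidate killer move
            break
    return best
```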
@RexPerfection Жыл бұрын
what a wonderful glimpse into the world of decision making in games, minimax, and alpha-beta pruning!
@jademonass2954 Жыл бұрын
I did this exact problem in college about a month ago. Really cool to see this in my recommended.
@andrewhancock2451 Жыл бұрын
This video really got me pondering. Why are the shapes of the players' heads different? Why does the player with an antenna sometimes have two antennas? Is it a function of how dispirited he/she is about the state of the game? More seriously, this is an awesome explanation of minimax and of what seems, to an ignoramus like myself, to be branch-and-bound. I thought I knew what minimax was from mathematical programming, but it's more general than I thought. I appreciated that the robot players moved around, adjusted their head positions, and blinked.
@arshiasaleem5005 Жыл бұрын
The best tutorial, exactly what I was looking for. Thank you so muchhhh
@HanLinnKyaw9 ай бұрын
I watched many videos on YouTube, but I didn't completely understand. However, this video made me understand completely. Thanks a lot! 💞
@Btmodz Жыл бұрын
This is wild. I have a program for my CS class where I need to implement minimax to solve tic-tac-toe... scary timing this video was released
@YourMom-rg5jk Жыл бұрын
yea weird timing indeed i made a tic tac toe engine already and have been working on a chess engine for the past 3 months
@yooogort Жыл бұрын
I think this is a great introduction to the concept, and I know it intentionally doesn't cover every topic you'd need to build a real engine, to limit the complexity of the video so the average person could follow along. But if you're truly interested in this, I recommend exploring concepts such as transposition tables, another optimization engines make when playing complex games like chess.

Another concept not covered in this video is move generation. For tic-tac-toe this is simple: you can place your tile anywhere your opponent hasn't. But for games like chess it is somewhat difficult to create an efficient method of generating every possible move in a given position. For example, one method is to build an array of every piece belonging to the side to move, by looping over all squares of the chessboard and collecting the qualifying squares along with their pieces, and then write individual functions to generate the moves of each piece. For sliding pieces this is easy: you can use a list of offsets for the directions each piece can move in (see the sketch below). For pieces like the knight and pawn it isn't so easy; I recommend looking at an example chess engine that implements its own move generation to learn more about this.

When he was talking about the evaluation function, he only talked about material values. I think he should have at least mentioned other terms such as piece-square tables, which encourage the computer to move its pieces to certain squares; pawn structure, which encourages the computer to keep its pawns undoubled; and, one more important one, king safety, which makes the bot keep its king away from enemy pieces; and there are many more. If you want to learn more about this, I recommend looking at open-source chess bots such as Stockfish, or Sunfish if you want to explore a simpler engine or don't know C++.
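As a concrete illustration of the offsets idea above (names are my own, not from any particular engine), here is minimal sliding-piece move generation on a dictionary board:

```python
ROOK_DIRS   = [(1, 0), (-1, 0), (0, 1), (0, -1)]
BISHOP_DIRS = [(1, 1), (1, -1), (-1, 1), (-1, -1)]
QUEEN_DIRS  = ROOK_DIRS + BISHOP_DIRS

def sliding_moves(board, square, color, directions):
    """Yield (from, to) moves for a sliding piece, stopping at blockers.
    board: dict (rank, file) -> (color, piece)."""
    rank, file = square
    for dr, df in directions:
        r, f = rank + dr, file + df
        while 0 <= r < 8 and 0 <= f < 8:
            occupant = board.get((r, f))
            if occupant is None:
                yield square, (r, f)          # empty square: keep sliding
            else:
                if occupant[0] != color:      # enemy piece: capture, then stop
                    yield square, (r, f)
                break
            r, f = r + dr, f + df

# Example: a white rook on a1 with a black pawn on a4 gives moves up the a-file
# (including the capture on a4) plus moves along the first rank.
board = {(0, 0): ("white", "R"), (3, 0): ("black", "P")}
print(list(sliding_moves(board, (0, 0), "white", ROOK_DIRS)))
```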
@ParthPaTeL-wm3kt10 ай бұрын
I'm very grateful to your content, Thanks for clear and concise explanation of very tough topic initially.
@Wyatt516 Жыл бұрын
omg yes i was just wondering about minimax and then you uploaded after almost half a year about minimax!
@pizzaguy_ Жыл бұрын
mood
@prakash_77 Жыл бұрын
The stars have finally aligned for us to be able to have another video from Brian!!
@gregorymorse8423 Жыл бұрын
With tic tac toe there are many symmetrical positions. Thus dynamic programming can significantly reduce the search tree.
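A small sketch of how that caching could work: key the memo (or transposition table) on a canonical form of the board, here the lexicographically smallest of its eight rotations and reflections, so positions that differ only by symmetry share one entry.

```python
def rotate(b):    # b is a 9-character string, row-major; 90-degree clockwise rotation
    return "".join(b[i] for i in (6, 3, 0, 7, 4, 1, 8, 5, 2))

def reflect(b):   # mirror left-right
    return "".join(b[i] for i in (2, 1, 0, 5, 4, 3, 8, 7, 6))

def canonical(b):
    forms = []
    for base in (b, reflect(b)):
        for _ in range(4):
            forms.append(base)
            base = rotate(base)
    return min(forms)

# A memoized minimax would then store cache[canonical(board)] = value.
print(canonical("X" + " " * 8) == canonical("  X" + " " * 6))   # True: corner openings collapse
```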
@thewisearchitect Жыл бұрын
Wonderful explanation and illustration.
@debadityanath4398 Жыл бұрын
No way, this channel is not dead, this is so good to see
@RadishAcceptable Жыл бұрын
It's worth understanding that LLMs still work with all of these principles, just scaled up a whole whole lot, with oppositional logic being replaced by manually tuned reward functions. Similar to a modern chess engine, language models create their own algorithm for estimating the result of any action it takes. This results in a system that is effectively a "predict the next word in a sentence" generator that understands very well how language works, but that also understands very little about the meaning of individual words other than how they are related to each other through language. To an LLM, words and word combinations are no different than pieces moving on a chess board to a learning model based chess engine.
@narrowhead Жыл бұрын
Babe wake up new spanning tree video🔥🔥🔥
@Rise-Higher Жыл бұрын
well explained.🎉 Helps a lot thank you for sharing your knowledge so perfectly. ❤
@cee2615 Жыл бұрын
Absolutely adore the little characters, they are so cute. I watch the videos just because they are in it ❤❤❤
@ahmedhamdy25146 ай бұрын
Just perfect explanation 👍👍
@reda29100 Жыл бұрын
12:27 The confusion in the yellow bot's eyes ("what the hell is this dawg doin'?") as it steps away from the chess table, hypnotized by the narrator's voice. 31:21 For those who easily lose their train of thought, I'd prefer the caption to say "optimal result-wise", so it isn't confused with processing-wise: despite being described as more "optimal", that word implies a better algorithm that is still thorough, when here it means compromising accuracy for time.
@Bill010211 ай бұрын
This material is absolutely sterling. A similar book I read was nothing short of life-changing. "Game Theory and the Pursuit of Algorithmic Fairness" by Jack Frostwell
@barakap2372 Жыл бұрын
This is my favorite channel
@nathanbaylon Жыл бұрын
I was just going through week 0 of CS50 AI and was wondering why you sounded so similar to Brian Yu, until I realised this channel IS by Brian Yu 😅
@progamer36 Жыл бұрын
love your animation style and explanation 😍
@ZaidKhan-kv8ok Жыл бұрын
Just a note for anyone who may have missed it: as the minimax function calls itself recursively, the turn switches at each level. Maybe it is obvious, but I think the guy didn't mention that explicitly.
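A complete toy version makes this visible (my own minimal implementation, not the video's exact code): no explicit turn flag is passed around, because player() recomputes whose move it is at every level of the recursion.

```python
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def player(board):                       # whose turn is it? X moves first
    return "X" if board.count("X") == board.count("O") else "O"

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def terminal(board):
    return winner(board) is not None or " " not in board

def utility(board):
    return {"X": 1, "O": -1, None: 0}[winner(board)]

def actions(board):
    return [i for i, cell in enumerate(board) if cell == " "]

def result(board, move):
    # The returned state already belongs to the other player.
    return board[:move] + player(board) + board[move + 1:]

def minimax(board):
    if terminal(board):
        return utility(board)
    values = [minimax(result(board, a)) for a in actions(board)]
    # Each recursive call sees a board where player() has flipped,
    # so the levels automatically alternate between maximizing and minimizing.
    return max(values) if player(board) == "X" else min(values)

print(minimax(" " * 9))   # 0: perfect play from the empty board is a draw
```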
@flameofthephoenix83958 ай бұрын
Hm, one interesting method you could try is an evaluative, selective kind of tree search: start with the current state, look at all possible next states, evaluate them, then randomly select a pool of those states based partly on the evaluated result; compute the next states for each state in the pool, evaluate, and proceed. You'd have perhaps a limit of 500 states being considered at any time. This means it would go all the way to the ends of the tree but clip tons of branches based on a combination of RNG and the estimation algorithm. Likely you'd want a selection rule that prioritizes including states on both the heavy-loss and heavy-win ends of things, so that it only considers the states that will affect the overall decision greatly.
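Roughly, that idea looks like a randomized beam search. A sketch under those assumptions (next_states and evaluate are whatever successor and estimation functions the game provides), keeping the most extreme evaluations plus some jitter:

```python
import random

def randomized_beam(root, next_states, evaluate, beam_width=500, max_depth=20):
    """Keep a bounded pool of states; favor extreme (heavy-win / heavy-loss) evaluations."""
    pool = [root]
    for _ in range(max_depth):
        candidates = [s for state in pool for s in next_states(state)]
        if not candidates:
            break
        # abs(evaluate(s)) is large for decisive positions on either end; the random
        # jitter lets some ordinary-looking lines survive the cut as well.
        candidates.sort(key=lambda s: abs(evaluate(s)) + random.random(), reverse=True)
        pool = candidates[:beam_width]
    return pool
```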
@shawnruby7011 Жыл бұрын
I think having it derive game-state values purely from end-game states throws a lot out the window, and I don't think an evaluator function can fully bridge that. One thing is that at the start there are simply many positions one can take. For tic-tac-toe that's not really an issue, as all choices are roughly equal (generally, except the middle), but for other games it's important to value positions themselves, such as holding the center (in chess or tic-tac-toe). Deriving a move from an end state can cover that by chance, but without valuing it directly, the program can just as well set up somewhere else. That especially matters when branches get cut off, which leads me to my second point: there's more to a game than playing optimally. Sometimes we sacrifice a piece to gain an upper hand, or make other tricky moves like opening up a diagonal to get a bishop or queen out, or something, idk. Because it can't value moves themselves, it has to try to calculate everything and then cut off at 5 moves out or whatever. That's inefficient and not even the best way to play. There are Pokémon mods where the computer knows all your moves, your Pokémon, what they're holding, etc., and picks the best move based on that. Knowing that, we can simply play into it and switch into what would be a terrible play had they not played "optimally". Anyways, ty for the video, thought it was interesting.
@TheGrandCOL Жыл бұрын
I do really love this channel and I'm glad you are back! I hope you keep uploading videos soon
@fiddlefox Жыл бұрын
Incredible. Excellently explained.
@khrsgr Жыл бұрын
please make more videos your videos are great
@omniaahmedeldsoky9755 Жыл бұрын
This was a great video. Thank you! You're one of the best.
@JOAOPROGAMER00 Жыл бұрын
Your videos are very interesting! But how do you make these animations? Do you use blender?
@taekookworld9271 Жыл бұрын
Hi, loved your video. Could you please make a video on the A* Algorithm too?
@matheuscosta5330 Жыл бұрын
Amazing content!
@kasufert Жыл бұрын
THE KING IS BACK
@qvgaming3795 Жыл бұрын
Awesome video man!! Great structure!!
@heyyayesh Жыл бұрын
Such a good explanation of the topic!
@jake-rg3fd Жыл бұрын
Really happy to see another vid, your content is always very interesting and well made 🙏
@workonly-bf4vz Жыл бұрын
love the style of your video
@nanto6865 Жыл бұрын
YOU ARE BACK! Yes, this is great! Happy to see that :D I'll watch the video now xD
@weckar Жыл бұрын
Two questions: 1. Can this in any way be applied to games with more than 2 players? 2. Is there a way to use this for a game with hidden information? Cards, for example?
@lukeb8349 Жыл бұрын
You can do this with cards, as long as you don't assume the cards they're holding. Essentially, the actions(s) function for the opponent would return them playing every card that both haven't been played yet and is not in your hand.
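A tiny sketch of that actions function for a standard 52-card deck (hypothetical representation: cards as rank+suit strings, with the played cards and our own hand given as sets):

```python
FULL_DECK = {rank + suit for rank in "23456789TJQKA" for suit in "SHDC"}

def opponent_actions(played, my_hand):
    """Any card not yet played and not visible in our own hand could be the opponent's move."""
    return sorted(FULL_DECK - played - my_hand)

print(len(opponent_actions({"AS", "KD"}, {"2H", "3H", "4H"})))   # 47 unseen cards remain
```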
@barakeel Жыл бұрын
1. no 2. no.
@Somebodyherefornow Жыл бұрын
@@barakeel akchually, with 4 players, you might be able to use imaginary numbers; for all even numbers of players; idk if there are 3-way numbers
@ancientaccount7982 Жыл бұрын
Luke already answered the cards half of the question, but the answers for the question regarding player count are... lacking. The short answer is, yes, you can, as long as certain conditions are met and you shift your paradigm -- no imaginary numbers needed.

First up, the condition: the game must be one where there is exactly one win state and one loss state. While this sounds restrictive, think about it: in chess, you have exactly one win state -- the enemy king is currently under threat and there is no action which can rescue him -- and one lose state -- ditto, but your king instead of the enemy's. There are an absurdly large number of scenarios where this occurs, but only one such core state. (EDIT: an example where MinMax doesn't apply well would be the Civ series, where multiple end goals exist. You /can/ apply MinMax to it if you try hard enough, but it becomes a matter of unrealistic complexity far too rapidly to be of use.)

So how do we implement MinMax for, say, a 4-player game that satisfies this condition? Simple: we consider the three other players in sequence. It may be easier to visualize by considering them to be a single player with 3 successive turns where they can only perform specific actions. We also consider any terminal state where our MinMax player isn't the winner to be a variant of our single lose state. Then we write our algorithm, creating our move tree like so: MinMax->Enemy[A]->Enemy[B]->Enemy[C]->MinMax->...

This generalizes, too, to team-based games, where each player on each team has a specific allowed set of actions. (TeamMinMax[A]->Enemy[A]->TeamMinMax[B]->Enemy[B]->...)
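A rough sketch of that sequential reduction (my own naming; the game object is assumed to expose the usual minimax interface plus a player index, and utility() scores only whether player 0 wins): player 0 maximizes, and every other player is treated as a minimizer, giving the MinMax -> Enemy[A] -> Enemy[B] -> Enemy[C] cycle described above.

```python
def multiplayer_minimax(state, game):
    """Player 0 is 'us'; players 1..n-1 are all assumed to be playing against us."""
    if game.terminal(state):
        return game.utility(state)       # e.g. +1 if player 0 won, -1 otherwise (0 for a draw)
    values = (multiplayer_minimax(game.result(state, a), game)
              for a in game.actions(state))
    return max(values) if game.player(state) == 0 else min(values)
```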
@sitharama2196 Жыл бұрын
Just now I happened to watch your pigeonhole video. Namaste, sir. In India I know mystic persons who perceive or touch the physical or material sense of maths, and how they crossed this barrier. Anyhow, sir, very nice pleasure to meet you from some other part of the globe. Minds crossed the barriers. Respected sir, I remain. Good night.
@nickbrooks5684 Жыл бұрын
Great video! Visually amazing!
@khrsgr Жыл бұрын
After a very long time, you are back.
@darqed Жыл бұрын
Very underrated
@huh_wtf Жыл бұрын
good to see you back
@kierenhuynh8 Жыл бұрын
What would be the approach if you wanted to set difficulty for a bot? In this case it's able to calculate the optimal solution but what if you wanted it to purposely play at a lower level while still trying to win?
@ericxue3244 Жыл бұрын
Maybe let the bot randomly choose the less optimal move, such as choosing a 0 on purpose when it instead could choose the -1 and continue making you lose. In chess this becomes easier, because a bot can just choose a slightly worse evaluation score, so it seems like a genuine subpar move, and not the bot purposefully throwing the game because you were playing too poorly.
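A hedged sketch of that idea (generic callables, nothing specific to the video's code): score each legal move, then pick randomly among the moves within some tolerance of the best, where a larger tolerance gives a weaker, more human-feeling bot.

```python
import random

def handicapped_move(state, actions, result, score, margin=0.5):
    """score() is assumed to return a value from this bot's point of view (higher = better)."""
    scored = [(score(result(state, a)), a) for a in actions(state)]
    best_value = max(value for value, _ in scored)
    near_best = [move for value, move in scored if value >= best_value - margin]
    return random.choice(near_best)   # margin=0 plays perfectly; larger margins blunder more
```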
@g10royalle Жыл бұрын
I sooo love these animations!! Like even if I know these already, I still watch it bec I enjoy ur animations and also helps me better understand or think about it in another way
@honestarsenalfan7038 Жыл бұрын
Providing the best platform for AI to cook me again and once again in Scrabble. When I get to 54 it's already at 189; we simply cannot compete!
@shingi_ Жыл бұрын
i love how the intro showed an en passant
@MirralisDias Жыл бұрын
At 5:42, would it be possible to determine the player of the first instance, without knowing who played previously?
@brucea9871 Жыл бұрын
At 9:09 you claimed a correct implementation of the minimax algorithm will never lose a game of tic tac toe but that implicitly assumes tic tac toe is fair; that is each player has an equal chance of winning (or the game is a draw with best play by both sides). If tic tac toe is NOT fair (i.e. a win can always be forced by either the first or the second player) then a computer employing the minimax algorithm will lose against best play by the opponent. This is of course true for other games too; if the game is not fair then a player can lose despite always making the best move.
@TopRankX Жыл бұрын
Hey Good to see you again. Keep up the good work!
@brod515 Жыл бұрын
I was watching a few videos about AI (as we all have) during the past few days. As soon as you got to the part about evaluation functions, in my head I was like, "hmm, I guess you could probably train a network to find the best possible move based on the current positions of the pieces, using the actual algorithm as training data", and then you ended the video by mentioning exactly that.
@Diego01201 Жыл бұрын
Yeah that is the technique that the top chess engines use
@nicholas3354 Жыл бұрын
When I was younger, I had that exact same table.
@lel7531 Жыл бұрын
Amazing video ! As always keep it up please 🙏
@parkloqi Жыл бұрын
I’m not bragging or anything, but I just have to inform y’all that I am a full grandmaster of tick-tack-toe.
@nayankumarbarwa4317 Жыл бұрын
I totally thought you were bragging about being good at chess! I still feel computer would win against you in tic-tac-toe
@parkloqi Жыл бұрын
@@nayankumarbarwa4317 I only play in the all-biological division. That’s where I earned my ranking against humans and trained birds.
@nayankumarbarwa4317 Жыл бұрын
@@parkloqi Pardon me sir! You have all the respects from a person who has defeated every single kindergartener of his area single handedly.
@parkloqi Жыл бұрын
@@nayankumarbarwa4317 You gotta watch out for the quiet ones. Some of the older kindergartners are sharks!
@paulpach Жыл бұрын
In tic-tac-toe, no game can last more than 9 moves. But in chess there are no such bounds: games can go on forever if players just move the same piece back and forth. So no matter how powerful your computer is, you can't apply minimax without limiting the depth, or it will recurse until it gets a stack overflow.
@ribbonsofnight Жыл бұрын
There is a theoretical game-length limit, a bit over 5000 moves, because of the 50-move rule, which says a draw can be claimed if no capture is made and no pawn is moved for 50 consecutive moves. Since each of the 16 pawns can move at most 6 times and only 30 pieces can be captured, there's a strict upper bound of 50 * (6*16 + 30) = 6300 moves. If the only goal were to maximise the number of moves, then maybe close to 6300 could be reached. The problem isn't that the number of chess games is infinite (it's not), but that it's a finite number that might be bigger than the number of atoms in the known universe. Chess is actually fully solved for all legal board states with 7 or fewer pieces.
@cosmicsvids Жыл бұрын
The opponent is just going to lose if they keep moving pieces back and forth with someone who's playing optimally.
@ribbonsofnight Жыл бұрын
@@cosmicsvids 99.999999999% of the possible chess games that could be played are absolutely, completely absurd and involve both players playing in ways that show no evidence of understanding the objective, in the same way that most possible impulses my nervous system could send to my body would not result in me walking. Because the number of possible chess games is so vast, the tiny subset of games where both players are (mostly) playing with the objective of checkmate is still vast beyond human comprehension.
@paulpach Жыл бұрын
@@cosmicsvids Not necessarily. Suppose only the 2 kings remain. Neither player would be able to checkmate the other, so the game would go on forever. In that case the only thing stopping the game would be the 50 move draw rule
@cosmicsvids Жыл бұрын
@@paulpach At the start of the game, I mean. If one player keeps moving a piece back and forth, the other player will just start gobbling up pieces or go for a quick checkmate. You can only get away with that when the game is down to a few pieces.
@CuentaSims3 Жыл бұрын
Missed you so muuuch!!
@GabrielObolari Жыл бұрын
Your videos are great!
@nl_morrison Жыл бұрын
you sound like nathan fielder, good video:)
@janlavcharivmakhgalsuren6127 Жыл бұрын
which software do you use to create an animation
@stefanguiton Жыл бұрын
Excellent
@stillprophet7529 Жыл бұрын
"value = -infinity" hurts my soul :D other than that great video!
@nityarajan93238 ай бұрын
This felt like a eureka moment
@VideoNOLA8 ай бұрын
What's with-uh the extra syllable-uh at the end-uh of some words-uh?
@lucasaraujo6. Жыл бұрын
What software do you use to create the image from your videos?
@Cen_t1369 Жыл бұрын
Looks cool! And... It's cool!
@whtiequillBj Жыл бұрын
Do you know of any videos that could break down what kind of minimax algorithm DeepMind's AlphaStar (StarCraft 2) would use?
@edward3190 Жыл бұрын
It uses a neural network, which is very different from minimax.
@mohammadsalman1187 Жыл бұрын
Why is the second example at 5:27 a terminal state?
@Iankangaroo Жыл бұрын
because X wins
@mohammadsalman1187 Жыл бұрын
@@Iankangaroo oh yeah. Brain fart. I saw two empty spaces and was like, wait, the game isn't over 😅
@Iankangaroo Жыл бұрын
@@mohammadsalman1187 😂
@victorhugo-lp5rd Жыл бұрын
Just great
@gorillaau Жыл бұрын
Ooo. Tic-Tac-Toe. Can we play a game of Global Thermonuclear War?
@VerseUtopia Жыл бұрын
There are red, green, and blue trains in a maze. You can control their 3 moves toward 2 exits: exit A loses 1 point, exit B gains 1 point. Only one of the two exits opens each round; collect 100 points to win the game. (Whenever a train touches another train, their colors mix.) Can a computer win this game?
@Zorak9595 Жыл бұрын
I assume all tic-tac-toe games played by AI result in a tie, right? Also, the optimal first move is probably always to pick the middle? If this is the case, how does player 2 determine where to make his move, when all corner and all sides are equal to each other? How does an AI randomly choose a spot? Some sort of default positional choice?
@ribbonsofnight Жыл бұрын
AI can't play any better than the average human could if they tried. It's a game where looking 7 or 8 moves ahead is really easy because after about 4 moves all moves are usually forced.
@pittiedemann7275 Жыл бұрын
About the definition of the games: I would call chess a complicated game, not a complex one. There are more variables than in tic-tac-toe, but they are still fixed. A chess game with a changing pattern would be complex, I guess.
@nyxiestarz Жыл бұрын
What happens when there are more than three terminal states? I.e. chess, where both stalemates and draws are possible?
@ericxue3244 Жыл бұрын
The only important part is that each terminal state is assigned a value based on how good it is for the player. For example, winning is good, so it is assigned a number higher than the other states. Stalemates and draws are desirable over losing, but not over winning, so you can assign them any number between the losing state and the winning state. I'm not sure whether stalemates or draws are better because I don't play much chess, so I don't know if a stalemate should have a higher score than a draw, or vice versa.
@nanto6865 Жыл бұрын
Wow your mic really improved!
@superspartanman4480 Жыл бұрын
No way! So glad you’re back!
@ReligiousReasoning Жыл бұрын
I would say it should be the average of all possible terminal states
@clementrimagne40536 ай бұрын
username checks out
@Nova0 Жыл бұрын
nice video
@psp.youtube Жыл бұрын
amazing!
@tarksurmani6335 Жыл бұрын
The answer is actually really simple. "How do computers play games?" If the game is simple, they calculate; if it's hard, they use predefined shortcuts; and if there is any random element at all, then they cheat.