(AGT3E10) [Game Theory] Subgame Perfect Nash Equilibrium of Finite Horizon Repeated Game: An Example

12,564 views

selcuk ozyurt

1 day ago

In this episode I study a finite horizon repeated game and describe how to find/verify its SPNE.
It's crucial to watch lecture videos in the proper order to ensure effective learning. This is because the concepts in each video build upon those introduced in previous videos. To help you with this, I recommend visiting my website, www.selcukozyurt.com, for a recommended course outline.

Comments: 14
@saeedehparkam6048 · 3 years ago
Hello. Thanks for this video. I have a question: if, in this example, strategy A for player 2 is dominated by B, do we first eliminate A for player 2 and then find the NE of the game? Thank you.
@caio868 · 3 years ago
I have a question: in previous videos, you defined a strategy as a function from H to A_i (which I understand, since at each history h^t we need to know what player i's action will be). You also defined the payoff as taking action profiles 'a' as arguments (which makes sense, because a player's payoff is the result of all his actions — and his opponents' — up until the final stage game). However, in this video you calculated the payoff taking 's' as an argument, and now I am confused, because s is a mapping. Would you please clarify?
@selcukozyurt · 3 years ago
Hi there! Sorry that I can't reply to all questions, as I have so many other commitments. However, some of the questions, including yours, are very foundational, well-posed, and relatively short to answer here, so I can't leave them unanswered.

Yes, a pure strategy profile s is a mapping. A strategy profile s describes a unique *path* all the way from the initial node to a terminal node (the finish line), where we can tell what payoff each player will get. This path is the "puzzle," and we need to solve the puzzle to find the payoffs. Each player's strategy (s_i) is just one piece of the entire puzzle, and unless we bring all the players' pieces together, we can't say where the path ends up.

Having said that, a single player's strategy (s_i) is a more complicated "thing" than just being a sole piece of a path: it is a complete contingency plan. It tells what player i will do even if the game somehow moves "off road" (off the path described by the strategy profile s). One interpretation is that players form their strategies *before* the game starts, and so they feel they must have backup plans: plan A, B, C, and so on. This may sound unintuitive, because in reality we are not so extremely cautious. We usually make a move and then say, "I will decide what to do as the game unfolds," and then we call our sequence of actions our "strategy." That isn't a strategy from a game-theoretic point of view: what you did was your reaction conditional upon one realization of events (caused by your choices and your opponents' choices). In game theory, a strategy specifies what your reaction will be under *all* possible realizations of events. Every history is a possible realization of events, so your strategy must specify your reaction after each of them. In this sense, the strategy profile s is nothing but a single realization of events, collectively created by all the players' reactions.
Hope that was clear.
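The distinction above — a strategy maps every history to an action, while only the full profile pins down the path and the payoffs — can be sketched in a few lines of code. The payoff numbers below are purely hypothetical (this is not the video's game); only the mechanics of "history in, action out" are the point.

```python
# Minimal sketch: a pure strategy s_i is a function from histories to actions;
# the *profile* (s1, s2) traces out one unique path, which determines payoffs.

# Hypothetical 2-action stage game used only for illustration (NOT the video's matrix).
STAGE_PAYOFF = {("A", "A"): (4, 4), ("A", "C"): (0, 5),
                ("C", "A"): (5, 0), ("C", "C"): (1, 1)}

def play(s1, s2, periods=2):
    """Follow the profile (s1, s2) along its unique path and sum stage payoffs."""
    history, total = (), [0, 0]
    for _ in range(periods):
        profile = (s1(history), s2(history))   # each s_i reads the whole history
        u = STAGE_PAYOFF[profile]
        total = [total[0] + u[0], total[1] + u[1]]
        history = history + (profile,)         # the path extends itself
    return tuple(total)

# A strategy answers even for histories the profile never reaches ("off road"):
grim = lambda h: "A" if all(p == ("A", "A") for p in h) else "C"
always_A = lambda h: "A"

print(play(grim, always_A))  # (8, 8): the path stays on (A, A) in both periods
```

Note that `grim` specifies an action after *every* history, including ones that `play(grim, always_A)` never visits — that is exactly the "complete contingency plan" idea.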
@caio868 · 3 years ago
@@selcukozyurt Amazing reply. Thank you very much!
@zakstephenson4545 · 11 months ago
Why do they play (A,A) at the start if (B,B) and (C,C) are Nash equilibria?
@dylanthornsberry8778 · 3 years ago
@4:18 Why do they play (C,C) if period 1 was (A,A)? And if period 1 was NOT (A,A), why play (B,B) and not (C,C), which has higher returns?
@selcukozyurt · 3 years ago
We want players to play (A,A). If they do, they should be "rewarded." If somebody deviates, she should be "punished." Here (C,C) is the reward and (B,B) is the punishment. Because this is a two-period game, the second-period play — i.e., the reward and the punishment — must be a Nash equilibrium of the stage game (to sustain subgame perfection). In repeated games we (almost) always use this kind of strategy: to reinforce a specific behavior, we offer a reward, and a punishment in case the behavior isn't observed. If players play B in the second period regardless of their first-period actions, then playing A won't be part of an equilibrium in the first period, because both players have an incentive to deviate to C and get payoff 5 in the first period. Hope that helps.
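The incentive arithmetic behind this reward/punishment construction can be checked in one comparison. Only two numbers below come from the thread — the punishment payoff 1 from (B,B) and the deviation payoff 5; the on-path payoff from (A,A) and the reward payoff from (C,C) are hypothetical fill-ins chosen so the logic goes through.

```python
# Hedged sketch: does the reward/punishment gap cover the one-shot temptation?
# Only u_BB = 1 and u_dev = 5 are quoted in the thread; u_AA and u_CC are
# illustrative placeholders, NOT the video's actual payoffs.
u_AA = 4   # per-player payoff from playing (A, A) in period 1 (hypothetical)
u_CC = 3   # the "reward" Nash equilibrium (C, C) in period 2 (hypothetical)
u_BB = 1   # the "punishment" Nash equilibrium (B, B) in period 2
u_dev = 5  # best one-shot payoff from deviating to C in period 1

follow = u_AA + u_CC    # play A, then enjoy the reward (C, C)
deviate = u_dev + u_BB  # grab 5 now, then face the punishment (B, B)

# (A, A) survives in period 1 iff following is at least as good as deviating:
print(follow >= deviate)
```

With these placeholder numbers, following yields 4 + 3 = 7 against 5 + 1 = 6 for deviating, so no player wants to deviate — exactly the comparison the answer above describes.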
@dylanthornsberry8778 · 3 years ago
@@selcukozyurt That does help. Thank you.
@ameerah2026 · 2 years ago
Does player 1 correspond to the rows?
@saeedehparkam6048 · 3 years ago
I mean that A is strictly dominated by B for player 2.
@blackmane1999 · 2 years ago
The mixed-strategy Nash equilibrium should be between A and C, since B is the one being strictly dominated all the time.
@t_mini5371 · 2 years ago
No, an action played in a Nash equilibrium cannot be strictly dominated. Row B/Column B, as well as Row C/Column C, cannot be eliminated, by the definition and construction of NE.
@t_mini5371 · 2 years ago
A quick check shows that (B,B) = (1,1), and checking whether B is dominated is easy: look at (C,B) = (0,0), so domination by C clearly fails. Now look at (B,C) = (0,0) as well, which is clearly not greater than (1,1).
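This spot-check generalizes to a tiny brute-force Nash verifier. Only (B,B) = (1,1), (B,C) = (0,0), and (C,B) = (0,0) are taken from the thread; the remaining entries of this partial payoff table are hypothetical placeholders, not the video's matrix.

```python
# Sketch of the check above: (B, B) is a Nash equilibrium iff no player gains
# from a unilateral deviation. Entries (A,B) and (B,A) are hypothetical.
payoff = {("A", "B"): (0, 0), ("B", "A"): (0, 0),
          ("B", "B"): (1, 1), ("B", "C"): (0, 0), ("C", "B"): (0, 0)}

def is_nash(profile, actions=("A", "B", "C")):
    """Return True if no player can profit by deviating alone from `profile`."""
    for i in (0, 1):                      # check each player in turn
        for a in actions:
            dev = list(profile)
            dev[i] = a                    # player i deviates unilaterally to a
            if payoff[tuple(dev)][i] > payoff[profile][i]:
                return False
    return True

print(is_nash(("B", "B")))  # True: every unilateral deviation drops 1 down to 0
```

The same function would confirm (C,C) as well, given the full payoff table from the video.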
@blackmane1999 · 2 years ago
Got it thanks!