Algorithmic Strategy Complexity


Abraham Neyman
Hebrew University of Jerusalem, Jerusalem, Israel
Northwestern, 2003

General Introduction

The classical paradigm of game theory assumes full rationality of the interacting agents. In particular, it often assumes unlimited computational power. However, there are many decision problems and games for which it is impossible to assume that the agents (players) can either compute or implement an optimal (or best-response, or approximately optimal) strategy.

Design and Implementation

It is often argued that evolutionary self-selection leaves us with agents that act optimally. Therefore, the complexity of finding an optimal (or approximately optimal) strategy is conceptually less disturbing. However, the computational feasibility and the computational cost of implementing various strategies should be taken into account.

Design and Implementation

One can imagine scenarios where the design and choice of strategies is carried out by rational agents with (essentially) unlimited computational power, while the selected strategies need to be implemented by players with restricted computational resources. Examples: a corporation, the U.S. Navy, a soccer team, a chess player, a computer network.

Pure, Mixed, and Behavioral

In theory, mixed and behavioral strategies are equivalent (in games of perfect recall). In practice, mixed and behavioral strategies are not equivalent. Recall that a mixed strategy reflects uncertainty regarding the chosen pure strategy, whereas a behavioral strategy randomizes actions at the decision nodes.

Strategies in the Repeated Game

The number of pure strategies of the repeated game grows at a doubly exponential rate in the number of repetitions. Many of these strategies are not implementable by reasonably sized computing agents.
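
A short illustration in Python (the specific numbers assume the repeated matching pennies used in the examples below, with four action profiles per stage and two own actions; this choice is mine, not the slides'):

```python
# Rough count for an n-stage repeated game: a pure strategy assigns one of the
# player's own actions to every history of action profiles of length < n.

def num_pure_strategies(n, num_profiles=4, num_own_actions=2):
    num_histories = sum(num_profiles ** t for t in range(n))  # histories of length 0..n-1
    return num_own_actions ** num_histories

for n in range(1, 5):
    print(n, num_pure_strategies(n))
# n = 1, 2, 3, 4 gives 2, 2**5, 2**21, 2**85: doubly exponential growth in n.
```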

General Motivation: Soccer

What makes a good soccer player? What makes a good soccer coach? Why did Argentina fail in the recent World Cup? Answer: failure to understand the MINMAX THEOREM.

Further Motivation: Chess

Why is chess of any interest? Answer: it is not interesting! It is a trivial game (Zermelo's theorem). A life-threatening experience in my meeting with Kasparov.

General Objective

The impact on strategic interactions (the value and the equilibrium payoffs) of variations of the game in which players are restricted to employ simple strategies.

Simple Strategies: Computable Strategies

Simple Strategies: Finite Automata

Sample of References: Finite Automata

Ben-Porath (1993), J. of Econ. Theory, Repeated Games with Finite Automata.
Neyman (1985), Economics Letters, Bounded Complexity Justifies Cooperation in the Finitely Repeated Prisoner's Dilemma.
Neyman (1997), in Cooperation: Game-Theoretic Approaches, Hart and Mas-Colell (eds.), Cooperation, Repetition, and Automata.
Neyman (1998), Math. of Oper. Res., Finitely Repeated Games with Finite Automata.

References: Finite Automata

Neyman and Okada (2000), Int. J. of Game Theory, Two-Person Repeated Games with Finite Automata.
Amitai (1989), M.Sc. Thesis, Hebrew Univ., Stochastic Games with Automata (Hebrew).
Aumann (1981), in Essays in Game Theory and Mathematical Economics in Honor of O. Morgenstern, Survey of Repeated Games.

References: Finite Automata

Ben-Porath and Peleg (1987), Hebrew Univ. (DP), On the Folk Theorem and Finite Automata.
Papadimitriou and Yannakakis (1994), On Complexity as Bounded Rationality (extended abstract), STOC '94.
Papadimitriou and Yannakakis (1995, 1996), On Bounded Rationality and Complexity, manuscript (1995, revised 1996, revised 1998).
Stearns (1997), Memory-Bounded Game-Playing Computing Devices, mimeo.

References: Finite Automata

Kalai (1990), in Game Theory and Applications, Ichiishi, Neyman and Tauman (eds.), Bounded Rationality and Strategic Complexity in Repeated Games.
Piccione (1989), Journal of Economic Theory, Finite Automata Equilibria with Discounting and Unessential Modifications of the Stage Game.
Rubinstein (1986), Journal of Economic Theory, Finite Automata Play the Repeated Prisoner's Dilemma.
Zemel (1989), Journal of Economic Theory, Small Talk and Cooperation: A Note on Bounded Rationality.

Simple Strategies: Bounded Recall

References: Bounded Recall

Lehrer (1988), Journal of Economic Theory, Repeated Games with Bounded Recall Strategies.
Lehrer (1994), Games and Economic Behavior, Many Players with Bounded Recall in Infinitely Repeated Games.

References: Bounded Recall

Neyman (1997), in Cooperation: Game-Theoretic Approaches, Hart and Mas-Colell (eds.), Cooperation, Repetition, and Automata.
Aumann and Sorin (1990), Games and Economic Behavior, Cooperation and Bounded Recall.
Bavly and Neyman (forthcoming), Concealed Correlation by Boundedly Rational Players.

Simple Strategies: Bounded Strategic Entropy

References: Bounded Entropy

Neyman and Okada (1999), Games and Economic Behavior, Strategic Entropy and Complexity in Repeated Games.
Neyman and Okada (2000), Games and Economic Behavior, Repeated Games with Bounded Entropy.

Simple Strategies: Algorithmic Complexity

References/Origin

Solomonoff (1964), A Formal Theory of Inductive Inference, Information and Control.
Kolmogorov (1965), Three Approaches to the Quantitative Definition of Information, Problems in Information Transmission.
Chaitin.
Stearns (1997), Memory-Bounded Game-Playing Computing Devices, mimeo.

Simple Strategies

Computable Strategies
Finite Automata
Bounded Recall
Bounded Strategic Entropy
Algorithmic Complexity

Definition: Algorithmic Strategy Complexity

The algorithmic complexity of a strategy is the length of the shortest description of the strategy, i.e., the length of the shortest program that outputs the strategy.

Examples of simple strategies in the repeated matching pennies, where each player has two actions, H and T:

1. Tit for Tat: start by playing H; at every other stage play the action of the other player in the previous stage.
2. At stage i play H whenever i is not a multiple of n, and play T otherwise.
3. At stages i ≤ n play H. At stages i > n, play the action of the other player at stage i − n.
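
A minimal sketch of strategies 1-3 in Python (the encoding of a history as the list of the opponent's past actions is an assumption made for illustration):

```python
# Each strategy maps the history of the opponent's past actions (a list of 'H'/'T',
# stage indices starting at 1) to the player's next action.

def tit_for_tat(opp_history):
    """Example 1: start with H, then mirror the opponent's previous action."""
    return 'H' if not opp_history else opp_history[-1]

def periodic(opp_history, n):
    """Example 2: play T at stages that are multiples of n, H otherwise."""
    stage = len(opp_history) + 1
    return 'T' if stage % n == 0 else 'H'

def delayed_mirror(opp_history, n):
    """Example 3: play H for the first n stages, then the opponent's action n stages ago."""
    stage = len(opp_history) + 1
    return 'H' if stage <= n else opp_history[stage - n - 1]
```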

Algorithmic Strategy Complexity of Examples 1, 2, and 3

What are the shortest binary computer programs describing each of these strategies?

1. The first strategy is definitely simple; it requires a short program.
2. The second strategy essentially needs to count up to n, and its algorithmic complexity is roughly log n.
3. The algorithmic complexity of strategy 3 is also roughly log n.

More Examples

4. Fix a binary sequence y. The strategy $\sigma_y$ plays $y_i$ at stage i.
5. Fix a positive integer n and a distribution $p = (\frac{m}{n}, \frac{n-m}{n})$ on {H, T}. Fix an n-periodic p-typical sequence y, i.e., $\sum_{i=1}^{n} I(y_i = H) = n p(H) = m$. Consider the strategy $\sigma_y$.
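
A sketch of example 5 in Python (the shuffled arrangement is just one illustrative choice of a p-typical cycle):

```python
import random

def p_typical_cycle(n, m):
    """One cycle of an n-periodic p-typical sequence: exactly m H's among n entries."""
    cycle = ['H'] * m + ['T'] * (n - m)
    random.shuffle(cycle)        # any arrangement with exactly m H's is p-typical
    return cycle

def sigma_y(cycle, stage):
    """The strategy sigma_y: play y_i at stage i (stages indexed from 1), ignoring the history."""
    return cycle[(stage - 1) % len(cycle)]

cycle = p_typical_cycle(n=10, m=3)                 # p = (3/10, 7/10) on {H, T}
print([sigma_y(cycle, i) for i in range(1, 21)])   # two periods of play
```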

An Important Example

6. Fix $0 \le x_1, x_2, t_1 \le 1$, $t_2 = 1 - t_1$, and n. Fix an n-periodic $(x_1, t_1; x_2, t_2)$-typical sequence y:

$$\sum_{1 \le i \le t_1 n} I(y_i = H) = [t_1 x_1 n],$$

i.e., in the first $t_1 n$ stages the frequency of H's is $x_1$; and

$$\sum_{t_1 n < i \le n} I(y_i = H) = [t_2 x_2 n],$$

i.e., in the last $t_2 n$ stages of the cycle the frequency of H's is $x_2$. Consider $\sigma_y$.

Algorithmic Complexity of $\sigma_y$

4. The algorithmic complexity of a strategy $\sigma_y$ is the Kolmogorov-Chaitin (K-C) complexity of y.
5. The K-complexity of strategy 5 is roughly $nH(x)$, where H denotes the entropy function $H(x) = -x\log x - (1-x)\log(1-x)$.
6. The K-complexity of strategy 6 is roughly $t_1 n H(x_1) + t_2 n H(x_2)$.
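
A short counting derivation behind the roughly-$nH(x)$ estimate (a standard argument, not spelled out on the slides):

```latex
% Specifying an n-cycle with exactly m = np(H) occurrences of H amounts to picking
% one of binom(n, m) arrangements, so, up to O(log n) bits for describing n and m,
% Stirling's formula gives the estimate for item 5; item 6 follows by applying the
% same count separately to the two blocks of lengths t_1 n and t_2 n.
\[
  K(y) \;\approx\; \log_2 \binom{n}{m} \;\approx\; n\,H\!\Bigl(\tfrac{m}{n}\Bigr),
  \qquad
  K(y) \;\approx\; \log_2\binom{t_1 n}{[t_1 x_1 n]} + \log_2\binom{t_2 n}{[t_2 x_2 n]}
        \;\approx\; t_1 n H(x_1) + t_2 n H(x_2).
\]
```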

A Machine U

A machine U is a map from a subset $P_U$ of the set $\{0,1\}^*$ of finite-length binary strings (the program input) to the set of strategies in the repeated game. Thus $U : P_U \to \Sigma$. The set of strategies describable by the machine U is $U(P_U)$.

Computable Strategies

$A = \times_i A^i$ is the set of action profiles of the stage game, $A^*$ is the set of all finite strings of elements of A, and $\Sigma$ is the set of strategies of player i in the repeated game. A strategy $\sigma$ is a map from $A^*$ to $A^i$, the stage actions of player i. $A^*$ has a natural identification with the natural numbers $\mathbb{N}$.

Computable Strategies in the Repeated Game

A computable strategy in the infinitely repeated game is an $A^i$-valued recursive strategy, i.e., a partial recursive function defined over all histories. A computable strategy in the n-stage repeated game is an $A^i$-valued partial recursive function that is defined over all histories of length < n.

Programmable Strategies

Fix a partial recursive function of two variables, called a programmable machine, $U : \{0,1\}^* \times A^* \to A^i$. For every $p \in \{0,1\}^*$ the function $h \mapsto U(p, h)$ is a partial strategy. The strategies programmable on U are those strategies $\sigma$ such that there exists $p \in \{0,1\}^*$ with $U(p, h) = \sigma(h)$ for all h.
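
A toy sketch of a programmable machine in Python (the particular program encoding below is invented for illustration and is not the machine of the talk):

```python
# Toy programmable machine U(p, h): the program p is a binary string whose first bit
# selects a strategy family and whose remaining bits encode a parameter n; the history
# h is the list of the opponent's past actions. U is a partial function of (p, h).

def U(p, h):
    family, param = p[:1], p[1:]
    n = int(param, 2) if param else 1
    stage = len(h) + 1
    if family == '0':                       # Tit for Tat (example 1)
        return 'H' if not h else h[-1]
    if family == '1':                       # periodic strategy with parameter n (example 2)
        return 'T' if stage % n == 0 else 'H'
    return None                             # undefined input: U stays partial

print(U('0', []), U('1101', ['H'] * 4))     # 'H', 'T'  (program '1101' encodes n = 5)
```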

Universal Machines

A universal machine is a programmable machine U such that for every partial recursive strategy $\sigma$ there is $p \in \{0,1\}^*$ such that $U(p, h) = \sigma(h)$, where the equality holds whenever either side is defined.

Independence of the Universal Machine

The set of strategies computable by a universal machine is independent of the specific universal machine, i.e., for every two universal machines U and A we have $U(P_U) = A(P_A)$. Indeed, if U and A are two universal machines, then there is a string $x_A \in \{0,1\}^*$ such that for every binary string $y \in \{0,1\}^*$ we have $U(x_A y) = A(y)$.

A Machine: A Function of the Program and the History

We can think of a (strategic) universal computer U as a function of two variables, x and y. The first binary string $x \in \{0,1\}^*$ can be thought of as the program. The second finite string y, a finite history of action profiles, is viewed as the history input. The output U(x, y) is either an action of the player or undefined. Thus, $y \mapsto U(x, y)$ is a partial function.

Independence of U

Thus, for every two universal machines U and A there is $x_A \in \{0,1\}^*$ such that $U(x_A x, y) = A(x, y)$ for all x and y.

Definition of $K_U(\sigma)$

Definition. The algorithmic strategic complexity $K_U(\sigma)$ of a strategy $\sigma$ with respect to the universal computer U is
$$K_U(\sigma) = \min_{p : U(p) = \sigma} |p|.$$
The K-complexity of a strategy obviously depends on the universal computer U. Moreover, for every (partial) recursive strategy $\sigma$ there is a universal interactive computer U such that $K_U(\sigma) = 1$.
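
For intuition only, a brute-force computation of $K_U$ relative to a small toy machine in Python (the machine is repeated from the earlier sketch so the snippet is self-contained; genuine algorithmic strategic complexity is of course not computable):

```python
from itertools import product

def U(p, h):
    """Toy machine: first bit selects a family (0 = Tit for Tat, 1 = periodic),
    the remaining bits encode the parameter n."""
    family, param = p[:1], p[1:]
    n = int(param, 2) if param else 1
    stage = len(h) + 1
    if family == '0':
        return 'H' if not h else h[-1]
    if family == '1':
        return 'T' if stage % n == 0 else 'H'
    return None

def table(play, depth=3):
    """A strategy's complete behavior on all opponent histories of length < depth."""
    return tuple(play(list(h)) for t in range(depth) for h in product('HT', repeat=t))

def K_U(play, max_len=6, depth=3):
    """Length of the shortest program p with U(p, .) equal to `play` on all histories
    of length < depth, or None if no such program exists."""
    target = table(play, depth)
    for length in range(1, max_len + 1):
        for bits in product('01', repeat=length):
            p = ''.join(bits)
            if table(lambda h: U(p, h), depth) == target:
                return length
    return None

print(K_U(lambda h: 'H' if not h else h[-1]))   # 1: Tit for Tat has a one-bit program
```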

Asymptotic Independence of the Universal Machine

However, the K strategic complexity is asymptotically computer-independent.

Proposition. Let U and A be two universal computers. There is a constant $c = c(U, A)$ such that for every strategy $\sigma$,
$$|K_U(\sigma) - K_A(\sigma)| \le c.$$

A Simple Counting Argument

For every U there are fewer than $2^m$ strategies $\sigma$ with $K_U(\sigma) < m$. Therefore, if $\Sigma$ is a set of strategies with $|\Sigma| \ge 2^m$, then for every U there is $\sigma \in \Sigma$ such that $K_U(\sigma) \ge m$.
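
The count behind the first claim, written out:

```latex
% Each strategy of complexity < m has at least one program of length < m, and
% distinct strategies need distinct programs, so:
\[
  \#\{\sigma : K_U(\sigma) < m\}
  \;\le\; \#\{p \in \{0,1\}^* : |p| < m\}
  \;=\; \sum_{\ell = 0}^{m-1} 2^{\ell}
  \;=\; 2^{m} - 1 \;<\; 2^{m}.
\]
```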

Bounded Recall and K-complexity

Most strategies of k-recall have K-complexity of order $|A|^k$.

Let $\sigma$ be a strategy of k-recall. Then
$$K_U(\sigma) \le |A|^k \log|A^i| + 2\log k + c(U) + 2\log|A| \le C\,|A|^k \log|A^i|.$$
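
A sketch of the accounting behind the displayed bound (my reading of the terms; not spelled out on the slide):

```latex
% A k-recall strategy is determined by a table assigning a stage action in A^i to each
% of the |A|^k possible recalled histories; encoding the table takes |A|^k log|A^i| bits,
% and encoding k and the action alphabet accounts for the lower-order terms.
\[
  K_U(\sigma) \;\le\;
  \underbrace{|A|^{k}\log|A^{i}|}_{\text{the table}}
  \;+\; \underbrace{2\log k}_{\text{encoding }k}
  \;+\; \underbrace{c(U) + 2\log|A|}_{\text{machine and alphabet overhead}}
  \;\le\; C\,|A|^{k}\log|A^{i}| .
\]
```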

Finite Automata and K-complexity

Corollary. Most strategies implementable by an automaton of size m have K-complexity of order $m \log m$.

Proposition. Let $\sigma$ be a strategy implementable by an automaton of size m. Then
$$K_U(\sigma) \le |A^i|\, m\log m + m\log|A^i| + 4\log m + c(U) + 2\log|A^i| \le C\, m\log m.$$
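
For concreteness, a sketch in Python of a strategy implemented by a two-state automaton (the encoding below is illustrative only):

```python
# A strategy run by a finite automaton: an output map from states to own actions and
# a transition map driven by the opponent's action.

def automaton_strategy(output, transition, q0):
    """Turn an automaton of size m = len(output) into a strategy: history -> action."""
    def play(opp_history):
        q = q0
        for a in opp_history:          # replay the observed history through the transitions
            q = transition[(q, a)]
        return output[q]
    return play

# A 2-state automaton: play H until the opponent ever plays T, then play T forever.
output = {0: 'H', 1: 'T'}
transition = {(0, 'H'): 0, (0, 'T'): 1, (1, 'H'): 1, (1, 'T'): 1}
sigma = automaton_strategy(output, transition, q0=0)
print(sigma([]), sigma(['H', 'H']), sigma(['H', 'T', 'H']))   # H H T
```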

K-Strategy-Complexity and Bounded Strategy Entropy

Every mixed strategy $\sigma$ is a probability distribution over the set of pure strategies.

Proposition. Let $\sigma$ be a mixture of pure strategies of Kolmogorov complexity at most k. Then the strategic entropy of $\sigma$ is at most k.

Lemma

Let $0 \le t \le 1$ and let P, Q be distributions on A. Let $X = (X_i)$ be an n-sequence of independent A-valued random variables such that $X_i \sim P$ if $i \le tn$ and $X_i \sim Q$ if $i > tn$. The probability that
$$K(X) \le tnH(P) + (1-t)nH(Q) - \varepsilon n$$
goes to zero as $n \to \infty$.
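
A sketch of the standard argument behind the Lemma (not given on the slides), combining the counting bound above with the asymptotic equipartition property:

```latex
% Write \bar H = t H(P) + (1-t) H(Q). With probability 1 - o(1) the realization X is
% typical, and every typical sequence has probability at most 2^{-n(\bar H - \delta)}.
% There are fewer than 2^{n(\bar H - \varepsilon) + 1} sequences of complexity at most
% n(\bar H - \varepsilon), so for any \delta < \varepsilon:
\[
  \Pr\bigl(K(X) \le n\bar H - \varepsilon n\bigr)
  \;\le\; o(1) \;+\; 2^{\,n(\bar H - \varepsilon) + 1}\cdot 2^{-n(\bar H - \delta)}
  \;\xrightarrow[n \to \infty]{}\; 0 .
\]
```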

Lemma II

Assume that $(X_i)$ is a sequence of independent A-valued random variables such that the distribution of $X_i$ is a recursive function of i. Let $\sigma_n$ be the mixed strategy equivalent to the behavioral strategy in the n-stage game where the distribution of $X_i$ is $\sigma_{n,i}(\cdot)$. Then
$$\Pr_{\sigma_n}\Bigl(K(X) \le \sum_{i=1}^{n} H(X_i) - \varepsilon n\Bigr) \xrightarrow[n \to \infty]{} 0.$$

The Function u and cav u

Fix a two-person zero-sum game $G = \langle I, J, g \rangle$. Define the function $u : [0, \infty) \to \mathbb{R}$ as follows: map each mixed action x of player 1 to $\min_j g(x, j)$, and maximize over all mixed actions x with entropy $H(x) \le \alpha$. That is,
$$u(\alpha) = \max_{x : H(x) \le \alpha}\ \min_j g(x, j).$$
The function cav u is the least concave function that is $\ge u$.
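
An illustrative computation of u and cav u for matching pennies (the payoff normalization to ±1 and the grid search are my choices, not the slides'):

```python
import numpy as np

G = np.array([[1, -1],
              [-1, 1]])            # player 1's stage payoffs g(row, column)

def H(p):
    """Entropy (in bits) of the mixed action (p, 1 - p)."""
    return -sum(t * np.log2(t) for t in (p, 1 - p) if t > 0)

def u(alpha, grid=2001):
    """u(alpha) = max over mixed actions x with H(x) <= alpha of min_j g(x, j)."""
    feasible = [p for p in np.linspace(0, 1, grid) if H(p) <= alpha]
    return max(min(p * G[0, j] + (1 - p) * G[1, j] for j in range(2)) for p in feasible)

for a in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(a, round(u(a), 3))
# u(0) = -1 (a deterministic action is fully exploited) and u(1) = 0, the value of
# matching pennies. Here u is convex on [0, 1], so its concavification is the chord:
# (cav u)(alpha) = alpha - 1 for 0 <= alpha <= 1, and 0 for alpha >= 1.
```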

The Game $G_n(K(k), \Sigma)$

$G_n(K(k), \Sigma)$ is the n-stage repeated game where Player 1 is restricted to strategies of algorithmic complexity $\le k$, and Player 2 is unrestricted.

The Game $G_n(K(\alpha n), \Sigma)$

$G_n(K(\alpha n), \Sigma)$ is the n-stage repeated game where Player 1 is restricted to strategies of algorithmic complexity $\le \alpha n$, and Player 2 is unrestricted.

The Result

$$(\operatorname{cav} u)(\alpha) \;\ge\; \operatorname{Val}\, G_n(K_U(\alpha n), \Sigma) \;\xrightarrow[n \to \infty]{}\; (\operatorname{cav} u)(\alpha).$$

Proof: Part 1

The number of strategies in $K_U(\alpha n)$ is at most $2^{\alpha n}$. Therefore the strategic entropy of a mixture of pure strategies in $K_U(\alpha n)$ is at most $\alpha n$. Therefore, by a result of Neyman and Okada,
$$(\operatorname{cav} u)(\alpha) \ge \operatorname{Val}\, G_n(K(\alpha n), \Sigma).$$

Proof: Part 2

Let $\alpha_1 \le \alpha \le \alpha_2$ and $0 \le t_1 = 1 - t_2 \le 1$ be such that $\alpha = t_1\alpha_1 + t_2\alpha_2$ and $(\operatorname{cav} u)(\alpha) = t_1 u(\alpha_1) + t_2 u(\alpha_2)$. Let $x_i$, $i = 1, 2$, be mixed strategies of the stage game with $H(x_i) \le \alpha_i$ and $\min_j g(x_i, j) = u(\alpha_i)$. The algorithmic complexity of each n-periodic $(x_1, t_1; x_2, t_2)$-typical sequence is at most $t_1 n H(x_1) + t_2 n H(x_2) + o(n) \le \alpha n + o(n)$.

Corollaries and Implications

For every U there is a sequence $(\tau_n)$ such that for every $\sigma \in K_U(\alpha n)$, $g_n(\sigma, \tau_n) \le (\operatorname{cav} u)(\alpha)$.

There is a sequence $(\tau_n)$ such that for every U and every $\sigma \in K_U(\alpha n)$, $g_n(\sigma, \tau_n) \le (\operatorname{cav} u)\bigl(\alpha + \tfrac{c(U)}{n}\bigr)$.

For every U, every $(\tau_n)$, and every n, there is $\sigma \in K_U(\alpha n)$ such that $g_n(\sigma, \tau_n) \ge (\operatorname{cav} u)\bigl(\alpha - \tfrac{c(U)}{n}\bigr)$.


More information

Game Theory: Lecture 2

Game Theory: Lecture 2 Game Theory: Lecture 2 Tai-Wei Hu June 29, 2011 Outline Two-person zero-sum games normal-form games Minimax theorem Simplex method 1 2-person 0-sum games 1.1 2-Person Normal Form Games A 2-person normal

More information

Events Concerning Knowledge

Events Concerning Knowledge Events Concerning Knowledge Edward J. Green Department of Economics The Pennsylvania State University University Park, PA 16802, USA eug2@psu.edu DRAFT: 2010.04.05 Abstract Aumann s knowledge operator

More information

Rationalizable Partition-Confirmed Equilibrium

Rationalizable Partition-Confirmed Equilibrium Rationalizable Partition-Confirmed Equilibrium Drew Fudenberg and Yuichiro Kamada First Version: January 29, 2011; This Version: May 3, 2013 Abstract Rationalizable partition-confirmed equilibrium (RPCE)

More information

Economics 201B Economic Theory (Spring 2017) Bargaining. Topics: the axiomatic approach (OR 15) and the strategic approach (OR 7).

Economics 201B Economic Theory (Spring 2017) Bargaining. Topics: the axiomatic approach (OR 15) and the strategic approach (OR 7). Economics 201B Economic Theory (Spring 2017) Bargaining Topics: the axiomatic approach (OR 15) and the strategic approach (OR 7). The axiomatic approach (OR 15) Nash s (1950) work is the starting point

More information

On Renegotiation-Proof Collusion under Imperfect Public Information*

On Renegotiation-Proof Collusion under Imperfect Public Information* Journal of Economic Theory 85, 38336 (999) Article ID jeth.998.500, available online at http:www.idealibrary.com on NOTES, COMMENTS, AND LETTERS TO THE EDITOR On Renegotiation-Proof Collusion under Imperfect

More information

A statistical mechanical interpretation of algorithmic information theory

A statistical mechanical interpretation of algorithmic information theory A statistical mechanical interpretation of algorithmic information theory Kohtaro Tadaki Research and Development Initiative, Chuo University 1 13 27 Kasuga, Bunkyo-ku, Tokyo 112-8551, Japan. E-mail: tadaki@kc.chuo-u.ac.jp

More information

princeton univ. F 13 cos 521: Advanced Algorithm Design Lecture 17: Duality and MinMax Theorem Lecturer: Sanjeev Arora

princeton univ. F 13 cos 521: Advanced Algorithm Design Lecture 17: Duality and MinMax Theorem Lecturer: Sanjeev Arora princeton univ F 13 cos 521: Advanced Algorithm Design Lecture 17: Duality and MinMax Theorem Lecturer: Sanjeev Arora Scribe: Today we first see LP duality, which will then be explored a bit more in the

More information

Synthesis weakness of standard approach. Rational Synthesis

Synthesis weakness of standard approach. Rational Synthesis 1 Synthesis weakness of standard approach Rational Synthesis 3 Overview Introduction to formal verification Reactive systems Verification Synthesis Introduction to Formal Verification of Reactive Systems

More information

Bayesian Learning in Social Networks

Bayesian Learning in Social Networks Bayesian Learning in Social Networks Asu Ozdaglar Joint work with Daron Acemoglu, Munther Dahleh, Ilan Lobel Department of Electrical Engineering and Computer Science, Department of Economics, Operations

More information

Repeated Games with Perfect Monitoring

Repeated Games with Perfect Monitoring Repeated Games with Perfect Monitoring Ichiro Obara April 17, 2006 We study repeated games with perfect monitoring (and complete information). In this class of repeated games, players can observe the other

More information

The obvious way to play repeated Battle-of-the-Sexes games: Theory and evidence.

The obvious way to play repeated Battle-of-the-Sexes games: Theory and evidence. The obvious way to play repeated Battle-of-the-Sexes games: Theory and evidence. Christoph Kuzmics, Thomas Palfrey, and Brian Rogers October 15, 009 Managerial Economics and Decision Sciences, Kellogg

More information

CS 798: Multiagent Systems

CS 798: Multiagent Systems CS 798: Multiagent Systems and Utility Kate Larson Cheriton School of Computer Science University of Waterloo January 6, 2010 Outline 1 Self-Interested Agents 2 3 4 5 Self-Interested Agents We are interested

More information

I M ONLY HUMAN: GAME THEORY WITH COMPUTATIONALLY BOUNDED AGENTS

I M ONLY HUMAN: GAME THEORY WITH COMPUTATIONALLY BOUNDED AGENTS I M ONLY HUMAN: GAME THEORY WITH COMPUTATIONALLY BOUNDED AGENTS A Dissertation Presented to the Faculty of the Graduate School of Cornell University in Partial Fulfillment of the Requirements for the Degree

More information

Computing Uniformly Optimal Strategies in Two-Player Stochastic Games

Computing Uniformly Optimal Strategies in Two-Player Stochastic Games Noname manuscript No. will be inserted by the editor) Computing Uniformly Optimal Strategies in Two-Player Stochastic Games Eilon Solan and Nicolas Vieille Received: date / Accepted: date Abstract We provide

More information

Normal-form games. Vincent Conitzer

Normal-form games. Vincent Conitzer Normal-form games Vincent Conitzer conitzer@cs.duke.edu 2/3 of the average game Everyone writes down a number between 0 and 100 Person closest to 2/3 of the average wins Example: A says 50 B says 10 C

More information

Strongly Consistent Self-Confirming Equilibrium

Strongly Consistent Self-Confirming Equilibrium Strongly Consistent Self-Confirming Equilibrium YUICHIRO KAMADA 1 Department of Economics, Harvard University, Cambridge, MA 02138 Abstract Fudenberg and Levine (1993a) introduce the notion of self-confirming

More information

Unique Nash Implementation for a Class of Bargaining Solutions

Unique Nash Implementation for a Class of Bargaining Solutions Unique Nash Implementation for a Class of Bargaining Solutions Walter Trockel University of California, Los Angeles and Bielefeld University Mai 1999 Abstract The paper presents a method of supporting

More information