: Cryptography and Game Theory Ran Canetti and Alon Rosen. Lecture 8


0368.4170: Cryptography and Game Theory
Ran Canetti and Alon Rosen
Lecture 8, December 9, 2009
Scribe: Naama Ben-Aroya

Last week:
- 2-player zero-sum games (min-max)
- Mixed NE (existence, complexity)
- ɛ-NE
- Correlated equilibrium

Today:
- Using crypto to implement a correlated equilibrium
- Extensive form games
- Computational games
- Computational NE
- An actual implementation

1 Background

We want to use cryptography to solve a game-theoretic problem that arises naturally in the area of two-party strategic games. The standard game-theoretic solution concept for such games is that of an equilibrium: a pair of self-enforcing strategies, each of which is an optimal response to the other player's strategy. It is known that for many games the expected equilibrium payoffs can be much higher when a trusted third party (a mediator) assists the players in choosing their moves (correlated equilibria) than when each player has to choose its move on its own (Nash equilibria). It is natural to ask whether there exists a mechanism that eliminates the need for the mediator, yet allows the players to maintain the high payoffs offered by mediator-assisted strategies.

2 Extensive Form Games

An extensive game is an explicit description of the sequential structure of the decision problems encountered by the players in a strategic situation. The model allows us to study solutions in which each player can consider his plan of action not only at the beginning of the game but also at any point in time at which he has to make a decision. Today we will discuss extensive games with perfect information, in which each player is perfectly informed of the players' previous actions at each point in the game.

Definition 1 (Extensive Form Game) An extensive form game is a tuple G = (N, H, P, (A_i), (U_i)) where:

- N is the set of players.
- H is a (possibly infinite) set of (finite) history sequences such that the empty history ɛ ∈ H. A history h ∈ H is terminal if {a : (h, a) ∈ H} = ∅. (The set of terminal histories is denoted by Z.)
- P : H \ Z → N is a function that assigns a next player to every h ∈ H \ Z.

- For every i ∈ N, A_i is a function that assigns to each h ∈ H \ Z with P(h) = i a finite set A_i(h) of actions available to player i.
- U_i : Z → R is the utility of player i ∈ N.

We interpret such a game as follows. After any nonterminal history h, player P(h) = i chooses an action from the set A_i(h) = {a : (h, a) ∈ H}. The empty history is the starting point of the game; we sometimes refer to it as the initial history. At this point player P(ɛ) chooses a member of A(ɛ). For each possible choice a_0 from this set, player P(a_0) subsequently chooses a member of the set A(a_0); this choice determines the next player to move, and so on. A history after which no more choices have to be made is terminal. Note that a history may be an infinite sequence of actions. As in the case of a strategic game, we often specify the players' preferences over terminal histories by giving payoff functions that represent those preferences.

Figure 1: A schematic representation of an extensive form game

Figure 1 suggests an alternative definition of an extensive game in which the basic component is a tree. In this formulation each node corresponds to a history, and each edge between a pair of connected nodes corresponds to an action. The leaves are histories after which no more choices have to be made, i.e., terminal histories. This is a convenient representation of an extensive form game:

- The circle at the top of the diagram represents the initial history ɛ (the starting point of the game).
- P(ɛ) = i_1 (player i_1 makes the first move).
- The line segments that emanate from the circle correspond to the members of A(ɛ) (the possible actions of player i_1 at the initial history); each segment leads to a circle indicating the resulting history. The highlighted segment indicates the selected action a_1 of player i_1.
- P(a_1) = i_2 (player i_2 makes the second move). The line segments that emanate from that circle correspond to the members of A(a_1) (the possible actions of player i_2 at the current history).
And so on, until reaching a terminal history, which is the outcome of the game.
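Definition 1 translates directly into code. The following is a minimal Python sketch; the class interface and the small two-player demo game (with hypothetical payoffs) are illustrative, not part of the lecture:

```python
# Sketch of Definition 1: an extensive form game G = (N, H, P, A_i, U_i).
# Histories are tuples of actions; the empty tuple () is the initial history.

class ExtensiveFormGame:
    def __init__(self, players, histories, next_player, utilities):
        self.N = players       # set of players
        self.H = histories     # set of finite histories (tuples), () in H
        self.P = next_player   # maps nonterminal history -> player to move
        self.U = utilities     # maps terminal history -> payoff vector

    def actions(self, h):
        # A(h) = {a : (h, a) in H} -- actions available after history h
        return {hh[-1] for hh in self.H
                if len(hh) == len(h) + 1 and hh[:-1] == h}

    def is_terminal(self, h):
        # h is terminal iff no action extends it
        return len(self.actions(h)) == 0

# A hypothetical 2-player game: player 1 moves first, then player 2.
H = {(), ('L',), ('R',), ('L', 'l'), ('L', 'r'), ('R', 'l'), ('R', 'r')}
G = ExtensiveFormGame(
    players={1, 2},
    histories=H,
    next_player=lambda h: 1 if h == () else 2,
    utilities={('L', 'l'): (2, 1), ('L', 'r'): (1, 3),
               ('R', 'l'): (3, 0), ('R', 'r'): (0, 2)},
)

print(sorted(G.actions(())))      # ['L', 'R']
print(G.is_terminal(('L', 'l')))  # True
```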

Simultaneous Moves

To model situations in which players move simultaneously after certain histories, we can modify the definition of an extensive game as follows. An extensive game with simultaneous moves is a tuple G = (N, H, P, (U_i)) where N, H, and U_i for each i ∈ N are the same as in Definition 1, P is a function that assigns to each nonterminal history a set of players (P : H \ Z → 2^N), and H and P jointly satisfy the condition that for every nonterminal history h there is a collection {A_i(h)}_{i ∈ P(h)} of sets for which

A(h) = {a : (h, a) ∈ H} = ×_{i ∈ P(h)} A_i(h).

A history in such a game is a sequence of vectors; the components of each vector a^k are the actions taken by the players whose turn it is to move after the history (a^l)_{l=1}^{k-1}. The set of actions among which each player i ∈ P(h) can choose after the history h is A_i(h); the interpretation is that the choices of the players in P(h) are made simultaneously.

Strategies

A strategy of a player in an extensive game is a plan that specifies the action chosen by the player for every history after which it is his turn to move.

Definition 2 (Strategy) A strategy for player i ∈ N is a function s_i that takes a history h ∈ H \ Z with P(h) = i and outputs an action a_i ∈ A_i(h).

Notes:
1. A strategy completely defines the actions taken by a player at every h ∈ H \ Z where it is his turn to move, even for histories that, if the strategy is followed, are never reached.
2. As in a strategic game, we can define a mixed strategy to be a probability distribution over the set of (pure) strategies.
3. In the computational setting, s_i is usually PPT.

Figure 2: Example 1

Examples

1. A two-player game. The formal definition of this extensive form game:

N = {1, 2}
H = {ɛ, (L), (R), (L, L'), (L, R'), (R, L'), (R, R')}
Z = {(L, L'), (L, R'), (R, L'), (R, R')}
P(ɛ) = 1; P(L) = P(R) = 2

A_1(ɛ) = {L, R}; A_2(L) = A_2(R) = {L', R'}
U_1(L, L') = 2; U_2(L, L') = 1; U_1(L, R') = 1; U_2(L, R') = 3; ...

A strategy s_1 for the first player defines which action to choose at the initial history ɛ, i.e., s_1(ɛ) ∈ A_1(ɛ) = {L, R}. A strategy s_2 for the second player defines which action to choose at every history such that player 2 is the next player, i.e., s_2(L) ∈ A_2(L) and s_2(R) ∈ A_2(R). In this game there are 2 NE: (L, R') and (R, L').

Figure 3: Example 2: OWP game (interactive version)

2. The one-way permutation game: the first player chooses an arbitrary x ∈ {0, 1}^k and sends f(x), where f : {0, 1}^k → {0, 1}^k is a permutation. The second player is challenged to come up with a z such that z = x. The formal definition of this extensive form game:

N = {1, 2}
H = {ɛ} ∪ {0, 1}^k ∪ ({0, 1}^k × {0, 1}^k)
Z = {0, 1}^k × {0, 1}^k
P(ɛ) = 1; for every h ∈ {0, 1}^k, P(h) = 2
A_1(ɛ) = {0, 1}^k; for every h ∈ {0, 1}^k, A_2(h) = {0, 1}^k
U_2(x, z) = 1 if z = x, and -1 otherwise

3 Implementing the Mediator (2-player game)

In the language of cryptography, we ask whether we can design a two-party game that eliminates the trusted third party from the original game. We consider an extended game, in which the players first exchange messages, and then choose their moves and execute them simultaneously as in the original game. The payoffs are still computed as a function of the moves, according to the same payoff function as in the original game. It is very easy to see that every Nash equilibrium payoff of the extended game is also a correlated equilibrium payoff of the original game. We would like to show that, for players that are computationally bounded to probabilistic polynomial time, any correlated equilibrium payoff of the original game can always be achieved by some Nash equilibrium of the extended game. In the cryptographic setting, parties are assumed to be indifferent to negligible changes in their utilities.
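The one-way permutation game above can be made executable with a stand-in for f. In the toy sketch below, the rotate-and-XOR permutation is hypothetical and trivially invertible; the actual game assumes a genuinely one-way f, so this only illustrates the game's structure, not its hardness:

```python
import random

k = 16
MASK = (1 << k) - 1

def f(x):
    # Toy permutation of k-bit strings: rotate left by 1, then XOR a
    # constant. NOT one-way -- it merely stands in for a real OWP.
    return (((x << 1) | (x >> (k - 1))) & MASK) ^ 0xA5A5

def U2(x, z):
    # Player 2 wins (+1) iff it recovers x; otherwise it loses (-1).
    return 1 if z == x else -1

# Move 1: player 1 picks x and sends y = f(x).
x = random.getrandbits(k)
y = f(x)

# Move 2: a player 2 that cannot invert f can do no better than guess,
# so its expected utility is close to -1 (it wins with probability 2^-k).
z = random.getrandbits(k)
payoff = U2(x, z)
```

The point of this game is that an unbounded player 2 always wins (it can compute z = f^{-1}(y)), while a PPT player 2 essentially never does when f is one-way; the game is therefore only interesting under computational restrictions on the players.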
We will define a computational Nash equilibrium as a pair of efficient strategies where no polynomially bounded player can gain a non-negligible advantage by not following its strategy:

Definition 3 (Computational Game) A computational game is a sequence G^(k) = (N, (A_i^(k)), (u_i)) where:

- N is the set of players.
- For every k ∈ N and i ∈ N, A_i^(k) is the (finite) set of actions available to player i, where A_i^(k) = A_i ∪ {⊥} and ⊥ means abort.
- u_i : A_1^(k) × ... × A_n^(k) → R is the utility of player i.

Definition 4 (ɛ-computational NE) A strategy profile s = (s_1, ..., s_n) is said to be an ɛ-computational NE (ɛ-CNE) in a computational game G^(k) if for every i ∈ N, every PPT strategy s'_i, and every k ∈ N:

u_i(s'_i, s_{-i}) - u_i(s_i, s_{-i}) ≤ ɛ(k)     (1)

where u_i(s_i, s_{-i}) denotes the expected utility of player i when it follows strategy s_i.

The strategies of both players are restricted to probabilistic polynomial time. Also, since we are talking about a computational model, the definition must account for the fact that the players may break the underlying cryptographic scheme with negligible probability, thus gaining some advantage in the game. In the definition, ɛ : N → R denotes some function that is negligible in k. We notice that the philosophy for both players is still to maximize their expected payoff, except that a player will not change its strategy if its gain is negligible.

We will now define the extended game. Let G be any two-player game and let M be a correlated equilibrium for G. Let π be a 2-party protocol for securely sampling M.

Stage 1: The players are given the security parameter and freely exchange messages (i.e., they execute the protocol π). Histories are possible transcripts of π; actions are possible messages (all strings) of π. Question: What if a player aborts? (We add ⊥ to the available actions.)

Stage 2: Play G. Let (a_1, a_2) be the (local) recommendations of π (implementing M). The players decide whether to play according to (a_1, a_2).

The final payoffs u_i of the extended game are just the corresponding payoffs of the original game applied to the players' simultaneous moves at the second stage. The idea of getting rid of the mediator is now very simple.
Consider a correlated equilibrium s of the original game. We can view the mediator as a trusted party who securely computes a probabilistic function s. Thus, to remove it, we can have the two players execute a cryptographic protocol π that securely computes the function s. The strategy of each player is to follow the protocol π, and then play the action a that it got from π.

Definition 5 (Secure Protocol) A protocol π is said to be secure if for every probabilistic polynomial time algorithm π'_i (representing a real-model adversary strategy) there exists a probabilistic polynomial time algorithm π''_i (representing an ideal-model adversary strategy) such that

{real_{π'_i, π_{-i}}(1^k)}_{k ∈ N} ≈_c {ideal_{π''_i, π_{-i}}(1^k)}_{k ∈ N}     (2)

where ≈_c denotes computational indistinguishability, and:

- {real_{π'_i, π_{-i}}(1^k)}_{k ∈ N} is the joint output of the players in a real-world execution.
- {ideal_{π''_i, π_{-i}}(1^k)}_{k ∈ N} is the joint output of the players in an ideal-world execution.

According to standard definitions of secure protocols, π is secure if the above output pair can be simulated in an ideal world. This ideal world is almost exactly the model of the trusted mediator. The security of π implies that the output distribution of an execution of the protocol in the real world is indistinguishable from that of the ideal world.

Theorem 6 Let G be a 2-player game, let M be a correlated equilibrium for G, and suppose π = (π_1, π_2) is a secure protocol for computing M. Then the following strategy profile is a CNE in the extended game:

1. Play according to (π_1, π_2).
2. Let (a_1, a_2) be π's output; play (a_1, a_2).

Moreover, the payoffs of both players are exactly the same as the ones they would receive by playing according to M in G. That is, any correlated equilibrium payoff of G can be achieved by a computational Nash equilibrium of the extended game. Thus, the mediator can be eliminated if the players are computationally bounded and can communicate prior to the game.

Proof:

Observation 7 By the definition of π, for every k ∈ N:

u_i(s_i, s_{-i}) = u_i(a_i, a_{-i} | a_i)     (3)

where:
- u_i(a_i, a_{-i} | a_i) is the expected utility of player i ∈ N given that it plays a_i after having received the recommendation a_i, and the others play a_{-i};
- (a_i, a_{-i}) = real_{π_1, π_2}(1^k);
- s is the strategy: 1. Play protocol π. 2. Act according to the output of π.

Now let s'_i = π'_i be some PPT strategy (s'_i decides what action to take based on the transcript of π'_i), and consider the π''_i that is guaranteed by the definition of security of π (π''_i works in the ideal world).

Observation 8 Working in the ideal world is equivalent to playing G with M. By the definition of correlated equilibrium:

u_i(a''_i, a_{-i} | a_i) ≤ u_i(a_i, a_{-i} | a_i)     (4)

where:
- a''_i = ideal_{π''_i, π_{-i}}(1^k);
- a_{-i} = output of M.
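As a concrete sanity check, the correlated-equilibrium inequality can be verified numerically. The 2x2 game below (Aumann's standard example) and the uniform mediator distribution M over three action pairs are chosen for illustration:

```python
from fractions import Fraction as F

# Check u_i(a'_i, a_{-i} | a_i) <= u_i(a_i, a_{-i} | a_i) for every
# recommendation a_i and deviation a'_i, in a standard 2x2 example game.
U = {('U', 'L'): (6, 6), ('U', 'R'): (2, 7),
     ('D', 'L'): (7, 2), ('D', 'R'): (0, 0)}
# Mediator's distribution M: uniform over three action pairs.
M = {('U', 'L'): F(1, 3), ('U', 'R'): F(1, 3), ('D', 'L'): F(1, 3)}

def cond_payoff(i, rec, play):
    # Expected utility of player i, conditioned on being recommended
    # `rec` by the mediator, if it actually plays `play`.
    pairs = [(a, p) for a, p in M.items() if a[i] == rec]
    total = sum(p for _, p in pairs)
    ev = F(0)
    for a, p in pairs:
        b = list(a)
        b[i] = play
        ev += p * U[tuple(b)][i]
    return ev / total

# Obeying the recommendation is a best response for both players:
for i, acts in [(0, ['U', 'D']), (1, ['L', 'R'])]:
    for rec in acts:
        assert all(cond_payoff(i, rec, rec) >= cond_payoff(i, rec, d)
                   for d in acts)

print(sum(p * U[a][0] for a, p in M.items()))  # expected payoff: 5
```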

Claim 9 Suppose that u_i can be computed in poly(k) time. Then there exists a negligible function ɛ(k) such that for every k ∈ N:

u_i(a'_i, s_{-i}) - ɛ(k) < u_i(a''_i, a_{-i} | a_i)     (5)

where:
- a'_i = real_{π'_i, π_{-i}}(1^k);
- a''_i = ideal_{π''_i, π_{-i}}(1^k);
- a_{-i} = output of the correlating device.

Putting it all together:

u_i(s'_i, s_{-i}) - ɛ(k) < u_i(a''_i, a_{-i} | a_i)     (6)
                         ≤ u_i(a_i, a_{-i} | a_i)       (7)
                         = u_i(s_i, s_{-i})             (8)

where equations (6), (7) and (8) follow from equations (5), (4) and (3), respectively.

The above description does not completely specify the strategies of the players. The models of game theory and cryptography are significantly different. Two-party cryptographic protocols always assume that at least one player is honest, while the other player could be malicious. In the game-theoretic setting, both players are selfish and rational: they deviate from the protocol if they benefit from doing so. A full specification of a strategy must also indicate what a player should do if the other player deviates from its strategy (does not follow the protocol π). While cryptography does not address this question, it is crucial to resolve it in our setting, since in game theory you always have to play: no matter what happens inside π, both players eventually have to take simultaneous actions, and receive the corresponding payoffs.

Question: What if a player aborts? What if player 1 gets its output and aborts? (The protocol is not "fair".)

Possible solution: implement a "punishment for deviation": punish the cheating player down to his min-max level (the smallest payoff that one player can force the other player to have). Let s̄_2 be a min-max (possibly mixed) strategy against player 1, that is, s̄_2 minimizes max_{s_1} U_1(s_1, s_2). Then s̄_2 punishes player 1 by giving it the lowest possible utility, assuming player 1 plays a best response to s̄_2. We claim that it is never to any player's advantage to deviate from the protocol / abort:

Claim 10 Let s = (s_1, s_2) be a correlated equilibrium.
Let v_i be the min-max payoff of player i. Then for every a_1, a_2 it holds that U_i(a_i, s_{-i} | a_i) ≥ v_i, where U_i(a_i, s_{-i} | a_i) is the expected utility of player i when its recommended action is a_i.

Proof: Let s'_2 = s_2(a_1) be player 2's marginal distribution conditioned on player 1's recommendation being a_1. Since M is a correlated equilibrium, a_1 is a best response to s'_2. If player 1 aborts, then player 2 will play the min-max strategy s̄_2. By the definition of s̄_2, player 1's best response to s̄_2 cannot give better utility than its best response to s'_2. So player 1 always does worse by aborting.

Example:

        L      R      P
  U    6,6    2,7     .
  D    7,2    0,0     .
  P     .      .      .

(The payoff entries for the punishment action P are left unspecified.)

In this game, the best strategy (achieving the correlated equilibrium) is to play (U,L), (U,R) or (D,L), each with probability 1/3, for an expected payoff of 5. If player 1 deviates from that strategy, the punishment strategy of player 2 will make sure that player 1 gets the smallest possible utility (his min-max payoff), even if enforcing it also costs player 2 his own worst payoff.
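The min-max punishment level v_1 for this example can be computed by brute force. The sketch below restricts player 2 to the fully specified columns L and R (the payoffs of the punishment action P were not specified above), and searches over player 2's mixed strategies on a grid:

```python
# Player 1's payoffs in the example game, restricted to columns L and R.
U1 = {('U', 'L'): 6, ('U', 'R'): 2, ('D', 'L'): 7, ('D', 'R'): 0}

def best_response_payoff(q):
    # q = probability that player 2 plays L; player 1 best-responds
    # with a pure action (a pure best response always exists).
    return max(q * U1[('U', 'L')] + (1 - q) * U1[('U', 'R')],
               q * U1[('D', 'L')] + (1 - q) * U1[('D', 'R')])

# v_1 = min over player 2's mixed strategies of player 1's best response.
qs = [i / 1000 for i in range(1001)]
v1 = min(best_response_payoff(q) for q in qs)
q_star = min(qs, key=best_response_payoff)
print(v1, q_star)  # 2.0 0.0 -- always playing R holds player 1 to 2
```

So within the L/R columns, the harshest punishment player 2 can impose is to always play R, holding player 1 to a payoff of 2; the extra action P could only lower this further.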