A Robust von Neumann Minimax Theorem for Zero-Sum Games under Bounded Payoff Uncertainty

V. Jeyakumar, G.Y. Li and G. M. Lee

Revised Version: January 20, 2011

Abstract. The celebrated von Neumann minimax theorem is a fundamental theorem in two-person zero-sum games. In this paper, we present a generalization of the von Neumann minimax theorem, called the robust von Neumann minimax theorem, in the face of data uncertainty in the payoff matrix, via a robust optimization approach. We establish that the robust von Neumann minimax theorem is guaranteed for various classes of bounded uncertainties, including the matrix 1-norm uncertainty, the rank-1 uncertainty and the column-wise affine parameter uncertainty.

Key words. Robust von Neumann minimax theorem, minimax theorems under payoff uncertainty, robust optimization, conjugate functions.

Acknowledgments. The authors are grateful to the referee and the editors for their valuable comments and constructive suggestions, which have contributed to the final preparation of the paper. The first and second authors were partially supported by a grant from the Australian Research Council. The third author was supported by the Korea Science and Engineering Foundation (KOSEF) NRL program grant funded by the Korea government (MEST) (No. ROA-2008-000-20010-0).

Affiliations. V. Jeyakumar and G.Y. Li: Department of Applied Mathematics, University of New South Wales, Sydney 2052, Australia. G. M. Lee: Department of Applied Mathematics, Pukyong National University, Busan 608-737, Korea.

1 Introduction

The celebrated von Neumann minimax theorem [21] asserts that, for an $(n \times m)$ matrix $M$,
$$\min_{x \in S_n} \max_{y \in S_m} x^T M y = \max_{y \in S_m} \min_{x \in S_n} x^T M y,$$
where $S_n$ is the $n$-dimensional simplex. It is a fundamental equality in two-person zero-sum games [19]. Due to its importance in mathematics, decision theory, economics and game theory, numerous generalizations have been given in the literature (see [9, 10, 11, 18] and
other references therein). However, these generalizations and their applications have so far been limited mainly to problems without data uncertainty, despite the reality of data uncertainty in many real-world problems due to modeling or prediction errors [2, 3, 5, 4, 6, 13, 14, 15]. For related recent work on incomplete-information games, see [1] and other references therein.

The purpose of this paper is to present a new form of the von Neumann minimax theorem, called the robust von Neumann minimax theorem, for two-person zero-sum games under data uncertainty via robust optimization, and to establish that the robust von Neumann minimax theorem always holds under various classes of uncertainties, including the matrix 1-norm uncertainty, the rank-1 uncertainty and the column-wise affine parameter uncertainty.

The minimax value $\gamma_1 := \min_{x \in S_n} \max_{y \in S_m} x^T M y$ and the maximin value $\gamma_2 := \max_{y \in S_m} \min_{x \in S_n} x^T M y$ can be calculated by the following two optimization problems:
$$\gamma_1 = \min_{(x,t) \in S_n \times \mathbb{R}} \{\, t : \max_{y \in S_m} x^T M y \le t \,\} \quad \text{and} \quad \gamma_2 = \max_{(y,t) \in S_m \times \mathbb{R}} \{\, t : \min_{x \in S_n} x^T M y \ge t \,\}.$$

Whenever the cost function $x^T M y$ is affected by data uncertainty, the effect of the uncertain data on the cost matrix $M$ can be captured by a new matrix $M(u)$, where $u$ is an uncertain parameter belonging to a compact uncertainty set $U \subseteq \mathbb{R}^q$. For instance, the effect of the uncertain data $(a_1, a_2, a_3)$ on the cost matrix $M = \begin{pmatrix} a_1 & a_2 \\ a_2 & a_3 \end{pmatrix}$ can be captured by the new matrix $M(u) = \begin{pmatrix} a_1(u) & a_2(u) \\ a_2(u) & a_3(u) \end{pmatrix}$, where $u \in U \subseteq \mathbb{R}$. So, the minimax value and the maximin value in the face of cost matrix data uncertainty can be obtained from the following two uncertain optimization problems:
$$(UP_I) \quad \min_{(x,t) \in S_n \times \mathbb{R}} \{\, t : \max_{y \in S_m} x^T M(u) y \le t \,\}$$
and
$$(UP_{II}) \quad \max_{(y,t) \in S_m \times \mathbb{R}} \{\, t : \min_{x \in S_n} x^T M(u) y \ge t \,\}.$$
The robust counterpart [3, 15, 16] of the uncertain optimization problem $(UP_I)$ is a deterministic optimization problem defined by
$$(RP_I) \quad \min_{(x,t) \in S_n \times \mathbb{R}} \{\, t : \max_{y \in S_m} x^T M(u) y \le t, \text{ for all } u \in U \,\}, \quad (1.1)$$
and the optimistic counterpart [2, 15, 16] of the uncertain optimization problem $(UP_{II})$ is another deterministic optimization problem defined by
$$(OP_{II}) \quad \max_{(y,t) \in S_m \times \mathbb{R}} \{\, t : \min_{x \in S_n} x^T M(u) y \ge t, \text{ for some } u \in U \,\}. \quad (1.2)$$
The robust minimax theorem states that the optimal values of the robust counterpart problem $(RP_I)$ (the worst possible loss of Player I) and the optimistic counterpart $(OP_{II})$ (the best possible gain of Player II) are equal. Equivalently, it asserts that
$$\min_{x \in S_n} \max_{y \in S_m} \max_{u \in U} x^T M(u) y = \max_{u \in U} \max_{y \in S_m} \min_{x \in S_n} x^T M(u) y. \quad (1.3)$$
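As a quick numerical sanity check (not part of the original paper), the two sides of (1.3) can be approximated by grid search on a small instance. The data below, a $2 \times 2$ family $M(u) = M_0 + u M_1$ with $U = [0,1]$, are arbitrary illustrative choices; for this particular instance the two values coincide.

```python
# Grid-search check of the robust minimax equality (1.3) on one small
# instance: M(u) = M0 + u*M1 with U = [0, 1].  All data are arbitrary
# illustrative choices; x and y range over the 1-simplex via x1, y1.
M0 = [[1.0, 0.0], [0.0, 1.0]]
M1 = [[1.0, 2.0], [0.0, 1.0]]

def payoff(x1, y1, u):
    # x^T (M0 + u*M1) y with x = (x1, 1-x1), y = (y1, 1-y1)
    x, y = (x1, 1.0 - x1), (y1, 1.0 - y1)
    return sum(x[i] * (M0[i][j] + u * M1[i][j]) * y[j]
               for i in range(2) for j in range(2))

grid = [k / 20 for k in range(21)]
# left side of (1.3): min_x max_{y,u};  right side: max_{u,y} min_x
lhs = min(max(payoff(x1, y1, u) for y1 in grid for u in grid) for x1 in grid)
rhs = max(min(payoff(x1, y1, u) for x1 in grid) for y1 in grid for u in grid)
print(lhs, rhs)
```

For this instance both sides evaluate to 2, attained at $u = 1$; equality is of course only observed, not proved, by such a search.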
Employing conjugate analysis [20] and Ky Fan's minimax theorem [8], we derive the robust minimax equality (1.3) under a concave-like condition. We also show that the concave-like condition is necessary for the robust minimax theorem, in the sense that the condition holds if and only if, for every $a \in \mathbb{R}^n$,
$$\inf_{x \in A} \max_{y \in B} \max_{u \in U} \{x^T M(u) y + x^T a\} = \max_{u \in U} \max_{y \in B} \inf_{x \in A} \{x^T M(u) y + x^T a\}.$$
Importantly, we establish that the robust minimax theorem always holds for various classes of bounded uncertainty sets, including the matrix 1-norm uncertainty set, the rank-1 matrix uncertainty set, the column-wise affine parameter uncertainty set and the isotone matrix-data uncertainty set. Consequently, we also derive a robust theorem of the alternative for uncertain linear inequality systems from the robust minimax theorem.

2 A Robust Minimax Theorem under Uncertainty

In this section, we present a concave-like condition ensuring (1.3). We also show that the condition is necessary for (1.3) to hold under every linear perturbation. We begin by fixing notation and recalling preliminaries of convex analysis. Throughout this paper, $\mathbb{R}^n$ denotes the Euclidean space of dimension $n$. The inner product in $\mathbb{R}^n$ is defined by $\langle x, y \rangle := x^T y$ for all $x, y \in \mathbb{R}^n$. The nonnegative orthant of $\mathbb{R}^n$ is denoted by $\mathbb{R}^n_+$ and is defined by $\mathbb{R}^n_+ := \{(x_1, \ldots, x_n) \in \mathbb{R}^n : x_i \ge 0\}$. For a set $A$ in $\mathbb{R}^n$, the convex hull of $A$ is denoted by $\mathrm{co}\,A$. We say $A$ is convex whenever $\mu a_1 + (1-\mu) a_2 \in A$ for all $\mu \in [0,1]$ and $a_1, a_2 \in A$. A function $f : \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ is said to be convex if $f((1-\mu)x + \mu y) \le (1-\mu) f(x) + \mu f(y)$ for all $\mu \in [0,1]$ and all $x, y \in \mathbb{R}^n$. The function $f$ is said to be concave whenever $-f$ is convex. As usual, for any proper (i.e., $\mathrm{dom}\, f \neq \emptyset$) convex function $f$ on $\mathbb{R}^n$, its conjugate function $f^* : \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ is defined by $f^*(x^*) = \sup_{x \in \mathbb{R}^n} \{\langle x^*, x \rangle - f(x)\}$ for all $x^* \in \mathbb{R}^n$. Clearly, $f^*$ is a proper lower semicontinuous convex function, and for any proper lower semicontinuous convex functions $f_1, f_2$ (cf. [12, 17]),
$$f_1 \ge f_2 \;\Longleftrightarrow\; \mathrm{epi}\, f_1^* \supseteq \mathrm{epi}\, f_2^*.$$
(2.1)

The following special case of the Ky Fan minimax theorem [8] plays a key role in deriving our robust von Neumann minimax theorem. Recall from Ky Fan [8] that the function $f(\cdot, y)$ is said to be concave-like whenever
$$(\forall x_1, x_2 \in C)\ (\forall \lambda \in (0,1))\ (\exists x_3 \in C)\ (\forall y \in D) \quad f(x_3, y) \ge \lambda f(x_1, y) + (1-\lambda) f(x_2, y),$$
and the function $f(x, \cdot)$ is said to be convex-like whenever
$$(\forall y_1, y_2 \in D)\ (\forall \lambda \in (0,1))\ (\exists y_3 \in D)\ (\forall x \in C) \quad f(x, y_3) \le \lambda f(x, y_1) + (1-\lambda) f(x, y_2),$$
where $f : C \times D \to \mathbb{R}$ and $C$ and $D$ are sets.

Lemma 2.1. [8, 11] Let $C$ be a compact subset of $\mathbb{R}^n$ and let $D \subseteq \mathbb{R}^m$. Let $f : C \times D \to \mathbb{R}$. Suppose that $f(\cdot, y)$ is concave-like and upper semicontinuous and that $f(x, \cdot)$ is convex-like. Then,
$$\max_{x \in C} \inf_{y \in D} f(x, y) = \inf_{y \in D} \max_{x \in C} f(x, y).$$
Theorem 2.1. (Robust von Neumann Minimax Theorem) Let $A$ be a closed convex subset of $\mathbb{R}^n$, let $B$ be a convex compact subset of $\mathbb{R}^m$ and let $U$ be a convex compact subset of $\mathbb{R}^q$. Assume that
$$(\forall \lambda \in [0,1])\ (\forall (y_1,u_1), (y_2,u_2) \in B \times U)\ (\exists (y,u) \in B \times U)\ (\forall x \in A)$$
$$x^T M(u) y \ge \lambda x^T M(u_1) y_1 + (1-\lambda) x^T M(u_2) y_2. \quad (2.2)$$
Then
$$\inf_{x \in A} \max_{y \in B} \max_{u \in U} x^T M(u) y = \max_{u \in U} \max_{y \in B} \inf_{x \in A} x^T M(u) y.$$

Proof. Let $z = (y,u) \in \mathbb{R}^m \times \mathbb{R}^q$ and define $F : \mathbb{R}^n \times (\mathbb{R}^m \times \mathbb{R}^q) \to \mathbb{R}$ by $F(x,z) = x^T M(u) y$. Then we see that $x \mapsto F(x,z)$ is linear for any $z \in \mathbb{R}^m \times \mathbb{R}^q$ and that $z \mapsto F(x,z)$ is concave-like by (2.2). So, the Ky Fan minimax theorem gives us that
$$\inf_{x \in A} \max_{z \in B \times U} F(x, z) = \max_{z \in B \times U} \inf_{x \in A} F(x, z).$$
Thus, the conclusion follows.

Corollary 2.1. Let $A = \{(x_1,\ldots,x_n) \in \mathbb{R}^n : x_i \ge 0,\ i = 1,\ldots,n,\ \sum_{i=1}^n x_i = 1\}$, let $B$ be a convex compact subset of $\mathbb{R}^m$ and let $U$ be a convex compact subset of $\mathbb{R}^q$. If $\bigcup_{u \in U,\, y \in B} \{M(u) y\} - \mathbb{R}^n_+$ is a convex set, then
$$\inf_{x \in A} \max_{y \in B} \max_{u \in U} x^T M(u) y = \max_{u \in U} \max_{y \in B} \inf_{x \in A} x^T M(u) y.$$

Proof. The conclusion will follow from Theorem 2.1 if we show that (2.2) holds. To see this, let $\lambda \in [0,1]$ and let $(y_1,u_1), (y_2,u_2) \in B \times U$. Then $M(u_i) y_i \in \bigcup_{u \in U,\, y \in B} \{M(u) y\} - \mathbb{R}^n_+$ for $i = 1, 2$. By the convexity hypothesis, we can find $(y,u) \in B \times U$ such that
$$M(u) y - \lambda M(u_1) y_1 - (1-\lambda) M(u_2) y_2 \in \mathbb{R}^n_+.$$
This, together with the fact that $x \in A \subseteq \mathbb{R}^n_+$, gives us the required inequality (2.2).

It is worth noting that, whenever $A$ is the simplex, i.e. $A = \{(x_1,\ldots,x_n) \in \mathbb{R}^n : x_i \ge 0,\ i = 1,\ldots,n,\ \sum_{i=1}^n x_i = 1\}$, condition (2.2) is equivalent to the convexity of the set $\bigcup_{u \in U,\, y \in B} \{M(u) y\} - \mathbb{R}^n_+$. As an illustration, we provide a simple numerical example verifying Corollary 2.1.

Example 2.1. Let $A = B = \{(x_1, x_2) : x_1, x_2 \ge 0 \text{ and } x_1 + x_2 = 1\}$. Let $U = [0, 1]$ and let $M(u) = M_0 + u M_1$, where
$$M_0 = \begin{pmatrix} 0 & 5/6 \\ 1 & 1/2 \end{pmatrix} \quad \text{and} \quad M_1 = \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix}.$$
Then
$$\bigcup_{u \in U,\, y \in B} \{M(u) y\} = \bigcup_{u \in [0,1]} \bigcup_{(y_1,y_2) \in \mathrm{co}\{(0,1),(1,0)\}} \left\{ (M_0 + u M_1) \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} \right\} = \bigcup_{u \in [0,1]} \mathrm{co}\left\{ \begin{pmatrix} -u \\ 1 \end{pmatrix}, \begin{pmatrix} 5/6 \\ 1/2 + u \end{pmatrix} \right\}.$$
[Figure 1: the set $\bigcup_{u \in [0,1],\, y \in B} \{(M_0 + u M_1) y\}$ (shaded region).]

Clearly, the set $\bigcup_{u \in [0,1],\, y \in B} \{(M_0 + u M_1) y\}$, shown as the shaded region of Figure 1, is not convex; whereas the set
$$\bigcup_{u \in [0,1],\, y \in B} \{(M_0 + u M_1) y\} - \mathbb{R}^2_+ = \{(a_1, a_2) : a_1 \le 5/6,\ a_2 \le 3/2\}$$
is convex. To verify the robust minimax equality (1.3), let $x_2 = 1 - x_1$ and $y_2 = 1 - y_1$. Then
$$x^T (M_0 + u M_1) y = \left(\tfrac{1}{3} - u - \tfrac{4}{3} y_1\right) x_1 + \tfrac{1}{2} + u + \left(\tfrac{1}{2} - u\right) y_1.$$
Calculating extreme values with respect to each variable gives us
$$\max_{u \in U} \max_{y \in B} \min_{x \in A} x^T (M_0 + u M_1) y = \max_{u \in [0,1]} \max_{y_1 \in [0,1]} \min_{x_1 \in [0,1]} f_u(x_1, y_1) = \tfrac{5}{6},$$
where $f_u(x_1, y_1) = (\tfrac{1}{3} - u - \tfrac{4}{3} y_1) x_1 + \tfrac{1}{2} + u + (\tfrac{1}{2} - u) y_1$. Also, $\min_{x \in A} \max_{u \in U} \max_{y \in B} x^T (M_0 + u M_1) y = \tfrac{5}{6}$.

We now show that the jointly concave-like condition of Theorem 2.1 is indeed a characterization of the robust von Neumann minimax theorem, in the sense that the condition holds if and only if the robust von Neumann minimax theorem is valid under every linear perturbation, i.e.,
$$\inf_{x \in A} \max_{y \in B} \max_{u \in U} \{x^T M(u) y + x^T a\} = \max_{u \in U} \max_{y \in B} \inf_{x \in A} \{x^T M(u) y + x^T a\}, \quad \forall a \in \mathbb{R}^n.$$

Theorem 2.2. (Characterization) Let $A$ be a closed convex subset of $\mathbb{R}^n$, let $B$ be a convex compact subset of $\mathbb{R}^m$ and let $U$ be a convex compact subset of $\mathbb{R}^q$. Then, the following statements are equivalent:
(1) $(\forall \lambda \in [0,1])\ (\forall (y_1,u_1), (y_2,u_2) \in B \times U)\ (\exists (y,u) \in B \times U)\ (\forall x \in A)$
$$x^T M(u) y \ge \lambda x^T M(u_1) y_1 + (1-\lambda) x^T M(u_2) y_2. \quad (2.3)$$

(2) $\inf_{x \in A} \max_{y \in B} \max_{u \in U} \{x^T M(u) y + x^T a\} = \max_{u \in U} \max_{y \in B} \inf_{x \in A} \{x^T M(u) y + x^T a\}$ for all $a \in \mathbb{R}^n$.

Proof. [(1) $\Rightarrow$ (2)] Let $z = (y,u) \in \mathbb{R}^m \times \mathbb{R}^q$ and define $F : \mathbb{R}^n \times (\mathbb{R}^m \times \mathbb{R}^q) \to \mathbb{R}$ by $F(x,z) = x^T M(u) y + a^T x$. Then we see that $x \mapsto F(x,z)$ is linear for any $z \in \mathbb{R}^m \times \mathbb{R}^q$ and that $z \mapsto F(x,z)$ is concave-like. So, the Ky Fan minimax theorem gives us statement (2).

[(2) $\Rightarrow$ (1)] We establish this implication by contradiction: suppose that (1) fails. Then there exist $\lambda \in [0,1]$, $y_1, y_2 \in B$ and $u_1, u_2 \in U$ such that for all $(y,u) \in B \times U$ there exists $x \in A$ with
$$x^T M(u) y < \lambda x^T M(u_1) y_1 + (1-\lambda) x^T M(u_2) y_2. \quad (2.4)$$
Let $a_0 = \lambda M(u_1) y_1 + (1-\lambda) M(u_2) y_2$ and let $a = -a_0$. Then, by (2.4) and statement (2),
$$\inf_{x \in A} \max_{y \in B} \max_{u \in U} x^T \big( M(u) y - a_0 \big) = \max_{u \in U} \max_{y \in B} \inf_{x \in A} x^T \big( M(u) y - a_0 \big) < 0.$$
Let $h(x) := \max_{y \in B} \max_{u \in U} x^T M(u) y + \delta_A(x)$, where $\delta_A$ is the indicator function of $A$. Then $h$ is convex and
$$h^*(a_0) = -\inf_{x \in A} \max_{y \in B} \max_{u \in U} x^T \big( M(u) y - a_0 \big) > 0.$$
Thus, $(a_0, 0) \notin \mathrm{epi}\, h^*$. Let $h_{y,u}(x) = x^T M(u) y$, for $(y,u) \in B \times U$. As $h \ge h_{y,u}$ for each $(y,u) \in B \times U$, we see from (2.1) that $\mathrm{epi}\, h^* \supseteq \mathrm{epi}\, h_{y,u}^*$ for each $(y,u) \in B \times U$. This, together with the convexity of $\mathrm{epi}\, h^*$, gives us that
$$\mathrm{epi}\, h^* \supseteq \mathrm{co} \bigcup_{y \in B,\, u \in U} \mathrm{epi}\, h_{y,u}^* = \mathrm{co} \bigcup_{y \in B,\, u \in U} \{M(u) y\} \times [0, +\infty).$$
Since $(a_0, 0) \notin \mathrm{epi}\, h^*$, it follows that
$$\lambda M(u_1) y_1 + (1-\lambda) M(u_2) y_2 = a_0 \notin \mathrm{co} \bigcup_{y \in B,\, u \in U} \{M(u) y\},$$
which is impossible.

It is easy to see that the jointly concave-like condition (2.3) holds if the classical condition "$(u, y) \mapsto x^T M(u) y$ is concave" is satisfied, and so the robust von Neumann minimax theorem holds under the classical condition. However, as the following simple example shows, this classical condition is hard to satisfy, even in the case of a linear perturbation:
Example 2.2. Let $M_0 = \begin{pmatrix} m_1 & m_2 \\ m_2 & m_3 \end{pmatrix}$, and consider $M(\Delta) = M_0 + \Delta$, where $\Delta$ is a $(2 \times 2)$ symmetric matrix (which can be equivalently regarded as a vector in $\mathbb{R}^q$ with $q = 3$). Let $n = m = 2$. We now show that $(\Delta, y) \mapsto x^T (M_0 + \Delta) y$ is not concave for any fixed $x \in \mathbb{R}^2_+ \setminus \{0\}$. To see this, fix $x = (x_1, x_2)^T \in \mathbb{R}^2_+ \setminus \{0\}$ and let
$$\Delta = \begin{pmatrix} a_1 & a_2 \\ a_2 & a_3 \end{pmatrix}.$$
Then, for each fixed $x = (x_1, x_2)$, the mapping $(\Delta, y) \mapsto x^T (M_0 + \Delta) y$ can be equivalently rewritten (up to an invertible linear transformation) as
$$f(a_1, a_2, a_3, y_1, y_2) = (m_1 + a_1) x_1 y_1 + (m_2 + a_2) x_1 y_2 + (m_2 + a_2) x_2 y_1 + (m_3 + a_3) x_2 y_2.$$
Since an invertible linear transformation preserves concavity, we only need to show that $f$ is not concave. To see this, note that, for each $(a_1, a_2, a_3, y_1, y_2) \in \mathbb{R}^5$, $\nabla^2 f(a_1, a_2, a_3, y_1, y_2)$ is the constant $(5 \times 5)$ matrix
$$C = \begin{pmatrix} 0 & 0 & 0 & x_1 & 0 \\ 0 & 0 & 0 & x_2 & x_1 \\ 0 & 0 & 0 & 0 & x_2 \\ x_1 & x_2 & 0 & 0 & 0 \\ 0 & x_1 & x_2 & 0 & 0 \end{pmatrix}.$$
As $x = (x_1, x_2)^T \in \mathbb{R}^2_+ \setminus \{0\}$, we have $e_5^T C e_5 = 4x_1 + 4x_2 > 0$, where $e_5 = (1,1,1,1,1)^T$. So, $f$ is not concave.

From the preceding example, we see that the classical sufficient condition "$(\Delta, y) \mapsto x^T (M_0 + \Delta) y$ is concave" is somewhat limited from the application viewpoint. However, we shall see in the next section that our condition (2.2) can be satisfied under various types of simple and commonly used data uncertainty sets, and hence produces various classes of robust von Neumann minimax theorems in the face of payoff matrix data uncertainty.

3 Classes of Robust Minimax Theorems

In this section, we establish that the robust von Neumann minimax theorem always holds under various classes of uncertainty sets, by verifying the joint concave-like condition of Theorem 2.1.
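Before turning to these classes, the non-concavity claimed in Example 2.2 can also be confirmed numerically (this check is not in the original text): along the direction $e_5$, the quadratic $f$ violates midpoint concavity, and the second difference with unit step reproduces $e_5^T C e_5 = 4x_1 + 4x_2$. The sketch below uses the illustrative choices $x = (1,1)$ and $m_1 = m_2 = m_3 = 0$.

```python
# Midpoint-concavity check for f from Example 2.2 along e5 = (1,1,1,1,1);
# illustrative choices: x = (1, 1) and m1 = m2 = m3 = 0.
x1, x2 = 1.0, 1.0
m1 = m2 = m3 = 0.0

def f(a1, a2, a3, y1, y2):
    return ((m1 + a1) * x1 * y1 + (m2 + a2) * x1 * y2
            + (m2 + a2) * x2 * y1 + (m3 + a3) * x2 * y2)

g = lambda t: f(t, t, t, t, t)   # restriction of f to the line t * e5

# Concavity would require g(1) >= (g(0) + g(2)) / 2; here 4 < 8.
second_difference = g(0.0) + g(2.0) - 2.0 * g(1.0)
print(second_difference)  # equals e5^T C e5 = 4*x1 + 4*x2 = 8.0
```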
3.1 Matrix 1-Norm Uncertainty

In the first case, we assume that the matrix data in the bilinear function of the von Neumann minimax theorem is uncertain and that the uncertain data matrix belongs to the matrix 1-norm uncertainty set
$$\mathcal{U}_1 = \{\, M_0 + \Delta : \Delta \in \mathbb{R}^{n \times m},\ \|\Delta\|_1 \le \rho \,\},$$
where $M_0 \in \mathbb{R}^{n \times m}$, $\|\cdot\|_1$ is the matrix 1-norm defined by $\|\Delta\|_1 = \sup_{x \in \mathbb{R}^m,\, \|x\|_1 = 1} \|\Delta x\|_1$, and $\|x\|_1$ is the $l_1$-norm of the vector $x$.
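As a side remark (not in the original text), this induced norm admits the well-known closed form $\|\Delta\|_1 = \max_j \sum_i |\Delta_{ij}|$ (maximum absolute column sum), since $x \mapsto \|\Delta x\|_1$ is convex and therefore attains its supremum over the $l_1$ unit ball at a vertex $\pm e_j$. A minimal Python sketch checking this on one arbitrarily chosen matrix:

```python
# The matrix 1-norm ||Delta||_1 = sup_{||x||_1 = 1} ||Delta x||_1 equals the
# maximum absolute column sum; checked here on one arbitrary 2x2 matrix.
Delta = [[1.0, -2.0], [3.0, 0.5]]

col_sum_norm = max(sum(abs(Delta[i][j]) for i in range(2)) for j in range(2))

# The sup over the l1 unit sphere is attained at a vertex +/- e_j,
# because x -> ||Delta x||_1 is convex.
candidates = [(1.0, 0.0), (-1.0, 0.0), (0.0, 1.0), (0.0, -1.0)]
sup_norm = max(sum(abs(Delta[i][0] * x0 + Delta[i][1] * x1) for i in range(2))
               for (x0, x1) in candidates)

print(col_sum_norm, sup_norm)  # 4.0 4.0
```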
Theorem 3.1. (Robust Minimax Theorem I) Let $M_0 \in \mathbb{R}^{n \times m}$ and let $\mathcal{U}_1 = \{M_0 + \Delta : \Delta \in \mathbb{R}^{n \times m},\ \|\Delta\|_1 \le \rho\}$, where $\rho > 0$. Let $S_n = \{(x_1,\ldots,x_n) \in \mathbb{R}^n : x_i \ge 0,\ i = 1,\ldots,n,\ \sum_{i=1}^n x_i = 1\}$ and let $S_m = \{(x_1,\ldots,x_m) \in \mathbb{R}^m : x_i \ge 0,\ i = 1,\ldots,m,\ \sum_{i=1}^m x_i = 1\}$. Then we have
$$\min_{x \in S_n} \max_{y \in S_m} \max_{M \in \mathcal{U}_1} x^T M y = \max_{M \in \mathcal{U}_1} \max_{y \in S_m} \min_{x \in S_n} x^T M y. \quad (3.5)$$

Proof. Let $A = S_n$ and $B = S_m$. Consider $\widetilde{\mathcal{U}}_1 = \{\Delta : \|\Delta\|_1 \le \rho\} \subseteq \mathbb{R}^{n \times m}$ as a subset of $\mathbb{R}^q$ with $q = mn$, and let $M(\Delta) = M_0 + \Delta$, $\Delta \in \widetilde{\mathcal{U}}_1$. Note that (3.5) is equivalent to
$$\min_{x \in S_n} \max_{y \in S_m} \max_{\Delta \in \widetilde{\mathcal{U}}_1} x^T M(\Delta) y = \max_{\Delta \in \widetilde{\mathcal{U}}_1} \max_{y \in S_m} \min_{x \in S_n} x^T M(\Delta) y.$$
Thus, to obtain the conclusion from Theorem 2.1, it suffices to show that for any $\lambda \in [0,1]$, $y_1, y_2 \in B$ and $\Delta_1, \Delta_2 \in \widetilde{\mathcal{U}}_1$, there exists $(y, \Delta) \in B \times \widetilde{\mathcal{U}}_1$ such that
$$x^T M(\Delta) y \ge \lambda x^T M(\Delta_1) y_1 + (1-\lambda) x^T M(\Delta_2) y_2 \quad \forall x \in A. \quad (3.6)$$
To see this, fix $\lambda \in [0,1]$, $y_1, y_2 \in B$, $\Delta_1 \in \widetilde{\mathcal{U}}_1$ and $\Delta_2 \in \widetilde{\mathcal{U}}_1$. Let $y = \lambda y_1 + (1-\lambda) y_2 \in S_m$ and $a = \lambda \Delta_1 y_1 + (1-\lambda) \Delta_2 y_2$. Now, consider the matrix defined by $\Delta = a e^T$, where $e \in \mathbb{R}^m$ is the vector with each coordinate equal to 1. As $y \in S_m$, we have $e^T y = 1$ and hence $\Delta y = a$. Moreover, as $|e^T x| \le \|x\|_1$, we have
$$\|\Delta\|_1 = \sup_{\|x\|_1 = 1} \|a e^T x\|_1 = \|a\|_1 \sup_{\|x\|_1 = 1} |e^T x| \le \|a\|_1 \sup_{\|x\|_1 = 1} \|x\|_1 = \|a\|_1.$$
Note that, since $\|y_1\|_1 = \|y_2\|_1 = 1$,
$$\|a\|_1 = \|\lambda \Delta_1 y_1 + (1-\lambda) \Delta_2 y_2\|_1 \le \lambda \|\Delta_1\|_1 \|y_1\|_1 + (1-\lambda) \|\Delta_2\|_1 \|y_2\|_1 \le \rho.$$
So, $\Delta \in \widetilde{\mathcal{U}}_1$ and it satisfies $\Delta y = a = \lambda \Delta_1 y_1 + (1-\lambda) \Delta_2 y_2$. Now, for any $x \in A$, from $\Delta y = a$, we have
$$\lambda x^T M(\Delta_1) y_1 + (1-\lambda) x^T M(\Delta_2) y_2 = \lambda x^T (M_0 + \Delta_1) y_1 + (1-\lambda) x^T (M_0 + \Delta_2) y_2 = x^T M_0 y + \lambda x^T \Delta_1 y_1 + (1-\lambda) x^T \Delta_2 y_2 = x^T M_0 y + x^T a = x^T (M_0 + \Delta) y = x^T M(\Delta) y.$$
So, the conclusion follows from Theorem 2.1.

3.2 Rank-1 Matrix Uncertainty

Secondly, we derive the robust minimax theorem in terms of the rank-1 uncertainty set
$$\mathcal{U}_2 = \{\, M_0 + \rho u v^T : u \in \mathbb{R}^n,\ v \in \mathbb{R}^m,\ \|u\|_\infty \le 1 \text{ and } \|v\|_\infty \le 1 \,\},$$
where $\|u\|_\infty$ (resp. $\|v\|_\infty$) is the $l_\infty$-norm of $u = (u_1,\ldots,u_n) \in \mathbb{R}^n$ (resp. $v = (v_1,\ldots,v_m) \in \mathbb{R}^m$), defined by $\|u\|_\infty = \max_{1 \le i \le n} |u_i|$ (resp. $\|v\|_\infty = \max_{1 \le i \le m} |v_i|$).
Theorem 3.2. (Robust Minimax Theorem II) Let $M_0 \in \mathbb{R}^{n \times m}$. Let $S_n = \{(x_1,\ldots,x_n) \in \mathbb{R}^n : x_i \ge 0,\ i = 1,\ldots,n,\ \sum_{i=1}^n x_i = 1\}$ and let $S_m = \{(x_1,\ldots,x_m) \in \mathbb{R}^m : x_i \ge 0,\ i = 1,\ldots,m,\ \sum_{i=1}^m x_i = 1\}$. Let $\mathcal{U}_2 = \{M_0 + \rho u v^T : u \in \mathbb{R}^n,\ v \in \mathbb{R}^m,\ \|u\|_\infty \le 1 \text{ and } \|v\|_\infty \le 1\}$, where $\rho > 0$. Then,
$$\min_{x \in S_n} \max_{y \in S_m} \max_{M \in \mathcal{U}_2} x^T M y = \max_{M \in \mathcal{U}_2} \max_{y \in S_m} \min_{x \in S_n} x^T M y. \quad (3.7)$$

Proof. Let $A = S_n$ and $B = S_m$. Consider $\widetilde{\mathcal{U}}_2 = \{\rho u v^T : u \in \mathbb{R}^n,\ v \in \mathbb{R}^m,\ \|u\|_\infty \le 1,\ \|v\|_\infty \le 1\} \subseteq \mathbb{R}^{n \times m}$ as a subset of $\mathbb{R}^q$ with $q = mn$, and let $M(\Delta) = M_0 + \Delta$, $\Delta \in \widetilde{\mathcal{U}}_2$. Note that (3.7) is equivalent to
$$\min_{x \in S_n} \max_{y \in S_m} \max_{\Delta \in \widetilde{\mathcal{U}}_2} x^T M(\Delta) y = \max_{\Delta \in \widetilde{\mathcal{U}}_2} \max_{y \in S_m} \min_{x \in S_n} x^T M(\Delta) y.$$
The conclusion will follow from Theorem 2.1 if we show that for any $\lambda \in [0,1]$, $y_1, y_2 \in B$ and $\Delta_1, \Delta_2 \in \widetilde{\mathcal{U}}_2$, there exists $(y, \Delta) \in B \times \widetilde{\mathcal{U}}_2$ such that
$$x^T M(\Delta) y \ge \lambda x^T M(\Delta_1) y_1 + (1-\lambda) x^T M(\Delta_2) y_2 \quad \forall x \in A. \quad (3.8)$$
To see this, fix $\lambda \in [0,1]$, $y_1, y_2 \in B$ and $\Delta_1, \Delta_2 \in \widetilde{\mathcal{U}}_2$. Then we can find $u_1, u_2 \in \mathbb{R}^n$ and $v_1, v_2 \in \mathbb{R}^m$ such that $\|u_1\|_\infty \le 1$, $\|u_2\|_\infty \le 1$, $\|v_1\|_\infty \le 1$, $\|v_2\|_\infty \le 1$, $\Delta_1 = \rho u_1 v_1^T$ and $\Delta_2 = \rho u_2 v_2^T$. Now, consider the matrix defined by $\Delta = \rho a e^T$, where $e \in \mathbb{R}^m$ is the vector with each coordinate equal to 1 and $a = \lambda u_1 v_1^T y_1 + (1-\lambda) u_2 v_2^T y_2$. Letting $y = \lambda y_1 + (1-\lambda) y_2$, we see that $y \in S_m$ and $\Delta y = \rho a$. Moreover, as $\|y_1\|_1 = 1$ and $\|y_2\|_1 = 1$, it follows that
$$\|a\|_\infty = \|\lambda u_1 v_1^T y_1 + (1-\lambda) u_2 v_2^T y_2\|_\infty \le \lambda \|u_1\|_\infty |v_1^T y_1| + (1-\lambda) \|u_2\|_\infty |v_2^T y_2| \le \lambda \|v_1\|_\infty \|y_1\|_1 + (1-\lambda) \|v_2\|_\infty \|y_2\|_1 \le 1.$$
So, $\Delta = \rho a e^T \in \{\rho u v^T : u \in \mathbb{R}^n,\ v \in \mathbb{R}^m,\ \|u\|_\infty \le 1,\ \|v\|_\infty \le 1\}$. Hence $\Delta \in \widetilde{\mathcal{U}}_2$ and it satisfies $\Delta y = \rho a = \rho (\lambda u_1 v_1^T y_1 + (1-\lambda) u_2 v_2^T y_2)$. Now, for each $x \in A$, we have
$$\lambda x^T M(\Delta_1) y_1 + (1-\lambda) x^T M(\Delta_2) y_2 = \lambda x^T (M_0 + \rho u_1 v_1^T) y_1 + (1-\lambda) x^T (M_0 + \rho u_2 v_2^T) y_2 = x^T M_0 y + \rho x^T (\lambda u_1 v_1^T y_1 + (1-\lambda) u_2 v_2^T y_2) = x^T M_0 y + \rho x^T a = x^T (M_0 + \Delta) y = x^T M(\Delta) y.$$
So, the conclusion follows from Theorem 2.1.
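The key step in the proof above is that the rank-1 matrix $\Delta = \rho a e^T$ stays in the uncertainty set ($\|a\|_\infty \le 1$) and reproduces the convex combination $\lambda \Delta_1 y_1 + (1-\lambda)\Delta_2 y_2$ when applied to $y$. The following minimal numerical sketch checks this construction; all the numbers are arbitrary illustrative choices.

```python
# Numerical check of the construction in the proof of Theorem 3.2:
# with Delta_i = rho * u_i v_i^T, the matrix Delta = rho * a e^T, where
# a = lam*(v1.y1)*u1 + (1-lam)*(v2.y2)*u2, satisfies ||a||_inf <= 1 and
# Delta y = lam*Delta1 y1 + (1-lam)*Delta2 y2 for y = lam*y1 + (1-lam)*y2.
# All numbers are arbitrary illustrative choices.
rho, lam = 1.5, 0.4
u1, v1 = [1.0, -0.5], [0.8, -1.0]
u2, v2 = [-0.3, 1.0], [1.0, 0.2]
y1, y2 = [0.5, 0.5], [0.1, 0.9]

dot = lambda p, q: sum(pi * qi for pi, qi in zip(p, q))
s1, s2 = dot(v1, y1), dot(v2, y2)   # |v_i^T y_i| <= ||v_i||_inf * ||y_i||_1 <= 1
a = [lam * s1 * p + (1 - lam) * s2 * q for p, q in zip(u1, u2)]
y = [lam * p + (1 - lam) * q for p, q in zip(y1, y2)]

# Delta y = rho * a * (e^T y) = rho * a, since y lies in the simplex
Delta_y = [rho * ai * sum(y) for ai in a]
# lam * Delta1 y1 + (1-lam) * Delta2 y2 = rho * (lam*s1*u1 + (1-lam)*s2*u2)
target = [lam * rho * s1 * p + (1 - lam) * rho * s2 * q for p, q in zip(u1, u2)]

norm_inf_a = max(abs(ai) for ai in a)
print(norm_inf_a, max(abs(p - q) for p, q in zip(Delta_y, target)))
```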
3.3 Column-wise Affine Parameter Uncertainty

Thirdly, we obtain our robust minimax theorem in the case where the matrix data is uncertain and the uncertain data matrix is column-wise affinely parameterized, i.e., the matrix data $M$ belongs to the uncertainty set
$$\mathcal{U}_3 = \left\{ \left( a_0^1 + \sum_{i=1}^{q_1} u_i^1 a_i^1, \ \ldots, \ a_0^m + \sum_{i=1}^{q_m} u_i^m a_i^m \right) : (u_1^j, \ldots, u_{q_j}^j) \in Z_j,\ j = 1, \ldots, m \right\},$$
where $Z_j$, $j = 1,\ldots,m$, is a compact convex set in $\mathbb{R}^{q_j}$ and $a_i^j \in \mathbb{R}^n$, $i = 0, 1, \ldots, q_j$, $j = 1, \ldots, m$. To begin with, we first derive the following proposition as a preparation.

Proposition 3.1. Let $A$ be a closed convex set in $\mathbb{R}^n$ and let $S_m = \{(x_1,\ldots,x_m) \in \mathbb{R}^m : x_j \ge 0,\ j = 1,\ldots,m,\ \sum_{j=1}^m x_j = 1\}$. Let $a_j : \mathbb{R}^{q_j} \to \mathbb{R}^n$, $j = 1,\ldots,m$, be affine functions and let $\mathcal{U} = \{(a_1(u_1), \ldots, a_m(u_m)) : u_j \in U_j\}$, where $U_j \subseteq \mathbb{R}^{q_j}$ is a convex compact set, $j = 1,\ldots,m$. Then,
$$\inf_{x \in A} \max_{y \in S_m} \max_{M \in \mathcal{U}} x^T M y = \max_{M \in \mathcal{U}} \max_{y \in S_m} \inf_{x \in A} x^T M y. \quad (3.9)$$

Proof. Let $B = S_m$ and consider $U = \prod_{j=1}^m U_j$ as a subset of $\mathbb{R}^q$ with $q = \sum_{j=1}^m q_j$, and define $M(u) = (a_1(u_1), \ldots, a_m(u_m))$, $u = (u_1,\ldots,u_m) \in U$. Note that (3.9) is equivalent to
$$\inf_{x \in A} \max_{y \in S_m} \max_{u \in U} x^T M(u) y = \max_{u \in U} \max_{y \in S_m} \inf_{x \in A} x^T M(u) y.$$
As was seen earlier, the conclusion will follow from Theorem 2.1 if we show that for any $\lambda \in [0,1]$, $y^1, y^2 \in B$ and $u^1, u^2 \in U$, there exists $(y, u) \in B \times U$ such that
$$x^T M(u) y \ge \lambda x^T M(u^1) y^1 + (1-\lambda) x^T M(u^2) y^2 \quad \forall x \in A. \quad (3.10)$$
To see this, fix $\lambda \in [0,1]$, $y^1, y^2 \in B$, $u^1 = (u_1^1, \ldots, u_m^1) \in U$ and $u^2 = (u_1^2, \ldots, u_m^2) \in U$. For any $x \in A$, we have
$$\lambda x^T M(u^1) y^1 + (1-\lambda) x^T M(u^2) y^2 = \lambda x^T \big( a_1(u_1^1), \ldots, a_m(u_m^1) \big) y^1 + (1-\lambda) x^T \big( a_1(u_1^2), \ldots, a_m(u_m^2) \big) y^2 = \lambda \sum_{j=1}^m y_j^1 a_j(u_j^1)^T x + (1-\lambda) \sum_{j=1}^m y_j^2 a_j(u_j^2)^T x = \sum_{j=1}^m \left( \lambda y_j^1 a_j(u_j^1) + (1-\lambda) y_j^2 a_j(u_j^2) \right)^T x.$$
Let $y = \lambda y^1 + (1-\lambda) y^2$. Then $y = (y_1, \ldots, y_m)$ with $y_j = \lambda y_j^1 + (1-\lambda) y_j^2$. Let $u = (u_1, \ldots, u_m)$, where each $u_j$, $j = 1,\ldots,m$, is given by
$$u_j = \begin{cases} \dfrac{\lambda y_j^1 u_j^1 + (1-\lambda) y_j^2 u_j^2}{y_j} & \text{if } y_j \neq 0, \\[1ex] u_j^1 & \text{otherwise.} \end{cases}$$
So, $u_j \in U_j$ and $y_j u_j = \lambda y_j^1 u_j^1 + (1-\lambda) y_j^2 u_j^2$ (this equality is straightforward from the construction of $u_j$ when $y_j \neq 0$; on the other hand, if $y_j = 0$, then $y_j^1 = y_j^2 = 0$, so the equality again follows). This implies that $u \in U$ and, by the affineness of $a_j$, $y_j a_j(u_j) = \lambda y_j^1 a_j(u_j^1) + (1-\lambda) y_j^2 a_j(u_j^2)$. Thus,
$$\lambda x^T M(u^1) y^1 + (1-\lambda) x^T M(u^2) y^2 = \sum_{j=1}^m y_j a_j(u_j)^T x = x^T M(u) y.$$
Thus, the conclusion follows.

Remark 3.1. Using a similar method of proof, if we further assume that $A \subseteq \mathbb{R}^n_+$, then the assumption that each $a_j$ is an affine function can be relaxed to each $a_j$ being a concave function.

Now, we establish the robust minimax theorem for the column-wise affine parameterization case.

Theorem 3.3. (Robust Minimax Theorem III) Let $S_n = \{(x_1,\ldots,x_n) \in \mathbb{R}^n : x_i \ge 0,\ i = 1,\ldots,n,\ \sum_{i=1}^n x_i = 1\}$ and let $S_m = \{(x_1,\ldots,x_m) \in \mathbb{R}^m : x_i \ge 0,\ i = 1,\ldots,m,\ \sum_{i=1}^m x_i = 1\}$. Let
$$\mathcal{U}_3 = \left\{ \left( a_0^1 + \sum_{i=1}^k u_i^1 a_i^1, \ \ldots, \ a_0^m + \sum_{i=1}^k u_i^m a_i^m \right) : (u_1^j, \ldots, u_k^j) \in Z_j,\ j = 1, \ldots, m \right\},$$
where $Z_j$, $j = 1,\ldots,m$, is a compact convex set in $\mathbb{R}^k$ and $a_i^j \in \mathbb{R}^n$, $i = 0, 1, \ldots, k$, $j = 1,\ldots,m$. Then,
$$\min_{x \in S_n} \max_{y \in S_m} \max_{M \in \mathcal{U}_3} x^T M y = \max_{M \in \mathcal{U}_3} \max_{y \in S_m} \min_{x \in S_n} x^T M y. \quad (3.11)$$

Proof. The conclusion follows from the preceding proposition by letting $A = S_n$ (which is convex and compact, so the infimum is attained on $A$), $U_j = Z_j$, and letting $a_j$, $j = 1,\ldots,m$, be the affine mapping defined by
$$a_j(u_j) = a_0^j + \sum_{i=1}^k u_i^j a_i^j, \quad u_j = (u_1^j, \ldots, u_k^j) \in \mathbb{R}^k.$$

As a simple application of Proposition 3.1, we derive a robust theorem of the alternative for a parameterized linear inequality system.

Corollary 3.1. (Robust Gordan Alternative Theorem) For each $j = 1,\ldots,m$, let $a_j : \mathbb{R}^{q_j} \to \mathbb{R}^n$ be an affine function, and let $U_j$ be a convex compact subset of $\mathbb{R}^{q_j}$, $j = 1,\ldots,m$. Then exactly one of the following two statements holds:

(i) $(\exists x \in \mathbb{R}^n)\ (\forall u_j \in U_j)\quad a_j(u_j)^T x < 0,\ j = 1,\ldots,m$;

(ii) $(\exists\, 0 \neq \lambda \in \mathbb{R}^m_+)\ (\exists\, \bar{u}_j \in U_j,\ j = 1,\ldots,m)\quad \sum_{j=1}^m \lambda_j a_j(\bar{u}_j) = 0$.
Proof. As (i) and (ii) cannot both hold simultaneously, we only need to show that [Not (i) $\Rightarrow$ (ii)]. To see this, let $M(u) = (a_1(u_1), \ldots, a_m(u_m)) \in \mathbb{R}^{n \times m}$, where $u_j \in \mathbb{R}^{q_j}$, $j = 1,\ldots,m$. Then
$$x^T M(u) y = \sum_{j=1}^m y_j a_j(u_j)^T x.$$
Let $A = \mathbb{R}^n$, $B = \{(y_1, \ldots, y_m) : \sum_{j=1}^m y_j = 1,\ y_j \ge 0\}$ and let $U = \prod_{j=1}^m U_j$. Then Not (i) implies that
$$\inf_{x \in A} \max_{y \in B} \max_{u \in U} x^T M(u) y = \inf_{x \in \mathbb{R}^n} \max_{\sum_{j=1}^m y_j = 1,\, y_j \ge 0} \max_{u_j \in U_j} \sum_{j=1}^m y_j a_j(u_j)^T x \ge 0.$$
(Otherwise, $\inf_{x \in A} \max_{\sum_{j=1}^m y_j = 1,\, y_j \ge 0} \max_{u_j \in U_j} \sum_{j=1}^m y_j a_j(u_j)^T x < 0$, and so there exists $x_0 \in A$ such that $\sum_{j=1}^m y_j a_j(u_j)^T x_0 < 0$ for all $y_j \ge 0$ with $\sum_{j=1}^m y_j = 1$ and for all $u_j \in U_j$; in particular, taking $y$ to be each vertex of $B$, statement (i) is true, which contradicts our assumption.) Hence, by Proposition 3.1, we have
$$\max_{\sum_{j=1}^m y_j = 1,\, y_j \ge 0} \max_{u_j \in U_j} \inf_{x \in \mathbb{R}^n} \sum_{j=1}^m y_j a_j(u_j)^T x = \max_{u \in U} \max_{y \in B} \inf_{x \in A} x^T M(u) y = \inf_{x \in A} \max_{y \in B} \max_{u \in U} x^T M(u) y \ge 0.$$
Thus, there exist $\lambda_j \ge 0$, $j = 1,\ldots,m$, not all zero, and $\bar{u}_j \in U_j$, $j = 1,\ldots,m$, such that, for each $x \in \mathbb{R}^n$, $\sum_{j=1}^m \lambda_j a_j(\bar{u}_j)^T x \ge 0$; since this map is linear in $x$, it forces $\sum_{j=1}^m \lambda_j a_j(\bar{u}_j) = 0$. So, the conclusion follows.

Remark 3.2. If $U_j$, $j = 1,\ldots,m$, are singletons, then Corollary 3.1 collapses to the classical Gordan alternative theorem [7].

3.4 Isotone Matrix Data Uncertainty

Now, we obtain a form of the robust minimax theorem in the case where the matrix data is uncertain and the uncertain matrix is isotone on $U$, in the sense that the mapping $u \mapsto M(u)$ satisfies the condition that, for any $u_1, u_2 \in U$, $\max\{u_1, u_2\} \in U$ and
$$u_1 \le u_2 \;\Longrightarrow\; M(u_1) \le M(u_2), \quad u_1, u_2 \in U.$$
Note that $\max\{u_1, u_2\}$ is the vector whose $i$th coordinate is the maximum of the $i$th coordinates of $u_1$ and $u_2$, and that $C_1 \le C_2$ means that each entry of the matrix $C_2 - C_1$ is nonnegative. For a simple example of an isotone matrix data uncertainty, let $U_0 = \{(u_1,\ldots,u_q) \in \mathbb{R}^q : 0 \le u_i \le 1,\ i = 1,\ldots,q\}$ and
$$\widehat{U}_0 = \left\{ M_0 + \sum_{i=1}^q u_i M_i : M_i \in \mathbb{R}^{n \times m},\ M_i \ge 0,\ i = 1,\ldots,q,\ u = (u_1,\ldots,u_q) \in U_0 \right\}.$$

Theorem 3.4.
(Robust Minimax Theorem IV) Let $S_n = \{(x_1,\ldots,x_n) \in \mathbb{R}^n : x_i \ge 0,\ i = 1,\ldots,n,\ \sum_{i=1}^n x_i = 1\}$ and let $S_m = \{(x_1,\ldots,x_m) \in \mathbb{R}^m : x_i \ge 0,\ i = 1,\ldots,m,\ \sum_{i=1}^m x_i = 1\}$. Suppose that $U$ is a convex compact set in $\mathbb{R}^q$ and that $u \mapsto M(u)$ is an isotone mapping on $U$. Then,
$$\min_{x \in S_n} \max_{y \in S_m} \max_{u \in U} x^T M(u) y = \max_{u \in U} \max_{y \in S_m} \min_{x \in S_n} x^T M(u) y. \quad (3.12)$$
Proof. Let $A = S_n$ and $B = S_m$. Then, the conclusion will follow from Theorem 2.1 if we show that for any $\lambda \in [0,1]$, $y^1, y^2 \in B$ and $u^1, u^2 \in U$, there exists $(y_0, u_0) \in B \times U$ such that
$$x^T M(u_0) y_0 \ge \lambda x^T M(u^1) y^1 + (1-\lambda) x^T M(u^2) y^2 \quad \forall x \in A. \quad (3.13)$$
To see this, fix $\lambda \in [0,1]$, $y^1, y^2 \in B$, $u^1 = (u_1^1, \ldots, u_q^1) \in U$ and $u^2 = (u_1^2, \ldots, u_q^2) \in U$. Let $u_0 = \max\{u^1, u^2\}$ and $y_0 = \lambda y^1 + (1-\lambda) y^2$. As $u \mapsto M(u)$ is isotone on $U$, it follows that $u_0 \in U$, $M(u_0) \ge M(u^1)$ and $M(u_0) \ge M(u^2)$. Now, for each $x \in A$, noting that $x \in \mathbb{R}^n_+$ and $y^1, y^2 \in \mathbb{R}^m_+$, we obtain that
$$x^T M(u^1) y^1 - x^T M(u_0) y^1 = x^T \big( M(u^1) - M(u_0) \big) y^1 \le 0$$
and
$$x^T M(u^2) y^2 - x^T M(u_0) y^2 = x^T \big( M(u^2) - M(u_0) \big) y^2 \le 0.$$
This gives us that
$$\lambda x^T M(u^1) y^1 + (1-\lambda) x^T M(u^2) y^2 \le \lambda x^T M(u_0) y^1 + (1-\lambda) x^T M(u_0) y^2 = x^T M(u_0) y_0.$$

References

[1] M. Aghassi and D. Bertsimas, Robust game theory, Mathematical Programming, 107(1-2) (2006), 231-273.

[2] A. Beck and A. Ben-Tal, Duality in robust optimization: primal worst equals dual best, Operations Research Letters, 37 (2009), 1-6.

[3] A. Ben-Tal, L. El Ghaoui and A. Nemirovski, Robust Optimization, Princeton Series in Applied Mathematics, Princeton University Press, 2009.

[4] A. Ben-Tal and A. Nemirovski, Robust optimization - methodology and applications, Mathematical Programming, Ser. B, 92 (2002), 453-480.

[5] D. Bertsimas and D. Brown, Constructing uncertainty sets for robust linear optimization, Operations Research, 57 (2009), 1483-1495.

[6] D. Bertsimas, D. Pachamanova and M. Sim, Robust linear optimization under general norms, Operations Research Letters, 32 (2004), 510-516.

[7] B. D. Craven and V. Jeyakumar, Equivalence of a Ky Fan type minimax theorem and a Gordan type alternative theorem, Operations Research Letters, 5(2) (1986), 99-102.
[8] K. Fan, Minimax theorems, Proceedings of the National Academy of Sciences USA, 39 (1953), 42-47.

[9] J.B.G. Frenk, P. Kas and G. Kassay, On linear programming duality and necessary and sufficient conditions in minimax theory, Journal of Optimization Theory and Applications, 132(3) (2007), 423-439.

[10] J.B.G. Frenk and G. Kassay, On noncooperative games, minimax theorems, and equilibrium problems, in: Pareto Optimality, Game Theory and Equilibria, 53-94, Springer Optimization and Its Applications, 17, Springer, New York, 2008.

[11] V. Jeyakumar, A generalization of a minimax theorem of Ky Fan via a theorem of the alternative, Journal of Optimization Theory and Applications, 48 (1986), 525-533.

[12] V. Jeyakumar, G. M. Lee and N. Dinh, New sequential Lagrange multiplier conditions characterizing optimality without constraint qualification for convex programs, SIAM Journal on Optimization, 14 (2003), 534-547.

[13] V. Jeyakumar and G. Li, Characterizing robust set containments and solutions of uncertain linear programs without qualifications, Operations Research Letters, 38 (2010), 188-194.

[14] V. Jeyakumar and G. Li, Robust Farkas lemma for uncertain linear systems with applications, Positivity, DOI 10.1007/s11117-010-0078-4.

[15] V. Jeyakumar and G. Li, Strong duality in robust convex programming: complete characterizations, SIAM Journal on Optimization, 20 (2010), 3384-3407.

[16] G. Li, V. Jeyakumar and G. M. Lee, Robust conjugate duality for convex optimization under uncertainty with application to data classification, Nonlinear Analysis Series A: Theory, Methods and Applications, DOI 10.1016/j.na.2010.11.036 (2011).

[17] G. Li and K.F. Ng, On extension of Fenchel duality and its application, SIAM Journal on Optimization, 19 (2008), 1489-1509.

[18] S.J. Li, G.Y. Chen and G.M. Lee, Minimax theorems for set-valued mappings, Journal of Optimization Theory and Applications, 106(1) (2000), 183-199.

[19] T. Parthasarathy and T. E.
Raghavan, Some Topics in Two-Person Games, Elsevier, New York, 1971.

[20] R. T. Rockafellar, Convex Analysis, Princeton University Press, Princeton, N.J., 1970.

[21] J. von Neumann, Zur Theorie der Gesellschaftsspiele, Mathematische Annalen, 100 (1928), 295-320.