A Robust von Neumann Minimax Theorem for Zero-Sum Games under Bounded Payoff Uncertainty


V. Jeyakumar, G. Y. Li and G. M. Lee

Revised Version: January 20, 2011

Abstract. The celebrated von Neumann minimax theorem is a fundamental theorem in two-person zero-sum games. In this paper, we present a generalization of the von Neumann minimax theorem, called the robust von Neumann minimax theorem, in the face of data uncertainty in the payoff matrix, via a robust optimization approach. We establish that the robust von Neumann minimax theorem is guaranteed for various classes of bounded uncertainty, including matrix 1-norm uncertainty, rank-1 uncertainty and column-wise affine parameter uncertainty.

Key words. Robust von Neumann minimax theorem, minimax theorems under payoff uncertainty, robust optimization, conjugate functions.

Acknowledgements. The authors are grateful to the referee and the editors for their valuable comments and constructive suggestions, which have contributed to the final preparation of the paper. The first and second authors were partially supported by a grant from the Australian Research Council; they are with the Department of Applied Mathematics, University of New South Wales, Sydney 2052, Australia. The third author, with the Department of Applied Mathematics, Pukyong National University, Busan, Korea, was supported by the Korea Science and Engineering Foundation (KOSEF) NRL program grant funded by the Korea government (MEST) (No. ROA ).

1 Introduction

The celebrated von Neumann minimax theorem [21] asserts that, for an $(n \times m)$ matrix $M$,
$$\min_{x \in S_n} \max_{y \in S_m} x^T M y = \max_{y \in S_m} \min_{x \in S_n} x^T M y,$$
where $S_n$ is the $n$-dimensional simplex. It is a fundamental equality in two-person zero-sum games [19]. Due to its importance in mathematics, decision theory, economics and game theory, numerous generalizations have been given in the literature (see [9, 10, 11, 18] and

other references therein). However, these generalizations and their applications have so far been limited mainly to problems without data uncertainty, despite the reality of data uncertainty in many real-world problems due to modeling or prediction errors [2, 3, 4, 5, 6, 13, 14, 15]. For related recent work on incomplete-information games, see [1] and other references therein.

The purpose of this paper is to present a new form of the von Neumann minimax theorem, called the robust von Neumann minimax theorem, for two-person zero-sum games under data uncertainty via robust optimization, and to establish that the robust von Neumann minimax theorem always holds under various classes of uncertainty, including matrix 1-norm uncertainty, rank-1 uncertainty and column-wise affine parameter uncertainty.

The minimax value $\gamma_1 := \min_{x \in S_n} \max_{y \in S_m} x^T M y$ and the maximin value $\gamma_2 := \max_{y \in S_m} \min_{x \in S_n} x^T M y$ can be calculated by the following two optimization problems:
$$\gamma_1 = \min_{(x,t) \in S_n \times \mathbb{R}} \{\, t : \max_{y \in S_m} x^T M y \le t \,\} \quad \text{and} \quad \gamma_2 = \max_{(y,t) \in S_m \times \mathbb{R}} \{\, t : \min_{x \in S_n} x^T M y \ge t \,\}.$$

Whenever the cost function is affected by data uncertainty, the effect of the uncertain data on the cost matrix $M$ can be captured by a new matrix $M(u)$, where $u$ is an uncertain parameter belonging to a compact uncertainty set $U \subseteq \mathbb{R}^q$. For instance, the effect of uncertain data $(a_1, a_2, a_3)$ on the cost matrix $M = \begin{pmatrix} a_1 & a_2 \\ a_2 & a_3 \end{pmatrix}$ can be captured by the new matrix $M(u) = \begin{pmatrix} a_1(u) & a_2(u) \\ a_2(u) & a_3(u) \end{pmatrix}$, where $u \in U \subseteq \mathbb{R}$. So, the minimax value and the maximin value in the face of cost matrix data uncertainty can be obtained from the following two uncertain optimization problems:
$$(\mathrm{UP}_I) \quad \min_{(x,t) \in S_n \times \mathbb{R}} \{\, t : \max_{y \in S_m} x^T M(u) y \le t \,\}$$
and
$$(\mathrm{UP}_{II}) \quad \max_{(y,t) \in S_m \times \mathbb{R}} \{\, t : \min_{x \in S_n} x^T M(u) y \ge t \,\}.$$
The robust counterpart [3, 15, 16] of the uncertain optimization problem $(\mathrm{UP}_I)$ is the deterministic optimization problem
$$(\mathrm{RP}_I) \quad \min_{(x,t) \in S_n \times \mathbb{R}} \{\, t : \max_{y \in S_m} x^T M(u) y \le t \ \text{ for all } u \in U \,\}, \qquad (1.1)$$
and the optimistic counterpart [2, 15, 16] of the uncertain optimization problem $(\mathrm{UP}_{II})$ is another deterministic optimization problem
$$(\mathrm{OP}_{II}) \quad \max_{(y,t) \in S_m \times \mathbb{R}} \{\, t : \min_{x \in S_n} x^T M(u) y \ge t \ \text{ for some } u \in U \,\}. \qquad (1.2)$$
The robust minimax theorem states that the optimal values of the robust counterpart $(\mathrm{RP}_I)$ (the worst possible loss of Player I) and the optimistic counterpart $(\mathrm{OP}_{II})$ (the best possible gain of Player II) are equal. Equivalently, it asserts that
$$\min_{x \in S_n} \max_{y \in S_m} \max_{u \in U} x^T M(u) y = \max_{u \in U} \max_{y \in S_m} \min_{x \in S_n} x^T M(u) y. \qquad (1.3)$$
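As a quick numerical sanity check of the classical identity $\gamma_1 = \gamma_2$ that (1.3) generalizes, one can brute-force a small game over grids of mixed strategies; for a $2 \times 2$ game the inner maximum (resp. minimum) is attained at a pure strategy, so it suffices to scan the columns (resp. rows). A minimal sketch (the payoff matrix and function name below are ours, not from the paper):

```python
# Brute-force the value of a 2x2 zero-sum game from both sides.
# (Illustrative only; the payoff matrix below is not from the paper.)

def game_values(M, steps=2000):
    """Return (gamma_1, gamma_2) for a 2x2 payoff matrix M.

    For fixed x in S_2, the inner max over y in S_2 of x^T M y is
    attained at a vertex (a pure strategy), so it scans the columns;
    symmetrically, the inner min scans the rows.
    """
    grid = [i / steps for i in range(steps + 1)]
    gamma_1 = min(
        max(x1 * M[0][j] + (1 - x1) * M[1][j] for j in range(2))
        for x1 in grid
    )
    gamma_2 = max(
        min(M[i][0] * y1 + M[i][1] * (1 - y1) for i in range(2))
        for y1 in grid
    )
    return gamma_1, gamma_2

M = [[2.0, 0.0], [1.0, 3.0]]
g1, g2 = game_values(M)
print(g1, g2)   # 1.5 1.5 -- min-max equals max-min, as the theorem asserts
```

Any LP solver applied to the two programs defining $\gamma_1$ and $\gamma_2$ would of course do the same job exactly; the grid search is only meant to make the equality visible.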

Employing conjugate analysis [20] and Ky Fan's minimax theorem [8], we derive the robust minimax equality (1.3) under a concave-like condition. We also show that the concave-like condition is necessary for the robust minimax theorem, in the sense that it holds if and only if
$$\inf_{x \in A} \max_{y \in B} \max_{u \in U} \{x^T M(u) y + x^T a\} = \max_{u \in U} \max_{y \in B} \inf_{x \in A} \{x^T M(u) y + x^T a\} \quad \forall a \in \mathbb{R}^n.$$
Importantly, we establish that the robust minimax theorem always holds for various classes of bounded uncertainty sets, including the matrix 1-norm uncertainty set, the rank-1 matrix uncertainty set, the column-wise affine parameter uncertainty set and the isotone matrix-data uncertainty set. Consequently, we also derive a robust theorem of the alternative for uncertain linear inequality systems from the robust minimax theorem.

2 A Robust Minimax Theorem under Uncertainty

In this section, we present a concave-like condition ensuring (1.3). We also show that the condition is necessary for (1.3) to hold under every linear perturbation. We begin by fixing notation and preliminaries of convex analysis. Throughout this paper, $\mathbb{R}^n$ denotes the Euclidean space of dimension $n$. The inner product in $\mathbb{R}^n$ is defined by $\langle x, y \rangle := x^T y$ for all $x, y \in \mathbb{R}^n$. The nonnegative orthant of $\mathbb{R}^n$ is denoted by $\mathbb{R}^n_+$ and is defined by $\mathbb{R}^n_+ := \{(x_1, \ldots, x_n) \in \mathbb{R}^n : x_i \ge 0\}$. For a set $A$ in $\mathbb{R}^n$, the convex hull of $A$ is denoted by $\mathrm{co}\,A$. We say $A$ is convex whenever $\mu a_1 + (1 - \mu) a_2 \in A$ for all $\mu \in [0, 1]$ and $a_1, a_2 \in A$. A function $f : \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ is said to be convex if $f((1 - \mu) x + \mu y) \le (1 - \mu) f(x) + \mu f(y)$ for all $\mu \in [0, 1]$ and all $x, y \in \mathbb{R}^n$. The function $f$ is said to be concave whenever $-f$ is convex. As usual, for any proper (i.e., $\mathrm{dom} f \ne \emptyset$) convex function $f$ on $\mathbb{R}^n$, its conjugate function $f^* : \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ is defined by $f^*(x^*) = \sup_{x \in \mathbb{R}^n} \{\langle x^*, x \rangle - f(x)\}$ for all $x^* \in \mathbb{R}^n$. Clearly, $f^*$ is a proper lower semicontinuous convex function, and for any proper lower semicontinuous convex functions $f_1, f_2$ (cf. [12, 17]),
$$f_1 \ge f_2 \iff f_1^* \le f_2^* \iff \mathrm{epi}\, f_1^* \supseteq \mathrm{epi}\, f_2^*.$$
(2.1)

The following special case of the Ky Fan minimax theorem [8] plays a key role in deriving our robust von Neumann minimax theorem. Recall from Ky Fan [8] that, for a function $f : C \times D \to \mathbb{R}$ on sets $C$ and $D$, the function $f(\cdot, y)$ is said to be concave-like whenever
$$(\forall x_1, x_2 \in C)(\forall \lambda \in (0,1))(\exists x_3 \in C)(\forall y \in D) \quad f(x_3, y) \ge \lambda f(x_1, y) + (1 - \lambda) f(x_2, y),$$
and the function $f(x, \cdot)$ is said to be convex-like whenever
$$(\forall y_1, y_2 \in D)(\forall \lambda \in (0,1))(\exists y_3 \in D)(\forall x \in C) \quad f(x, y_3) \le \lambda f(x, y_1) + (1 - \lambda) f(x, y_2).$$

Theorem 2.1. [8, 11] Let $C$ be a compact subset of $\mathbb{R}^n$ and let $D \subseteq \mathbb{R}^m$. Let $f : C \times D \to \mathbb{R}$. Suppose that $f(\cdot, y)$ is concave-like, $f(x, \cdot)$ is convex-like, and $f(\cdot, y)$ is upper semicontinuous for each $y \in D$. Then
$$\max_{x \in C} \inf_{y \in D} f(x, y) = \inf_{y \in D} \max_{x \in C} f(x, y).$$
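The convex-like hypothesis in Theorem 2.1 cannot simply be dropped. As a small illustration (ours, not from the paper), take $C = [-1, 1]$, $D = \{-1, 1\}$ and $f(x, y) = xy$: here $f(\cdot, y)$ is linear, hence concave-like and upper semicontinuous, but $f(x, \cdot)$ fails to be convex-like on the two-point set $D$ (for $y_1 = -1$, $y_2 = 1$ and $\lambda = 1/2$, one would need $y_3 \in D$ with $x y_3 \le 0$ for all $x \in C$, and neither $y_3 = -1$ nor $y_3 = 1$ works), and a genuine minimax gap appears:

```python
# C = [-1, 1] (compact, convex), D = {-1, 1}, f(x, y) = x * y.
# f(., y) is linear in x, but f(x, .) is not convex-like on D,
# and the two sides of the Ky Fan equality differ.

C = [i / 1000 - 1 for i in range(2001)]   # grid for [-1, 1]
D = [-1.0, 1.0]

max_inf = max(min(x * y for y in D) for x in C)   # max_x inf_y f(x, y)
inf_max = min(max(x * y for x in C) for y in D)   # inf_y max_x f(x, y)

print(max_inf, inf_max)   # 0.0 1.0 -- a strict minimax gap
```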

Theorem 2.2. (Robust von Neumann Minimax Theorem) Let $A$ be a closed convex subset of $\mathbb{R}^n$, let $B$ be a convex compact subset of $\mathbb{R}^m$ and let $U$ be a convex compact subset of $\mathbb{R}^q$. Assume that
$$(\forall \lambda \in [0,1])(\forall (y_1,u_1),(y_2,u_2) \in B \times U)(\exists (y,u) \in B \times U)(\forall x \in A)$$
$$x^T M(u) y \ge \lambda x^T M(u_1) y_1 + (1-\lambda) x^T M(u_2) y_2. \qquad (2.2)$$
Then
$$\inf_{x \in A} \max_{y \in B} \max_{u \in U} x^T M(u) y = \max_{u \in U} \max_{y \in B} \inf_{x \in A} x^T M(u) y.$$

Proof. Let $z = (y, u) \in \mathbb{R}^m \times \mathbb{R}^q$ and define $F : \mathbb{R}^n \times (\mathbb{R}^m \times \mathbb{R}^q) \to \mathbb{R}$ by $F(x, z) = x^T M(u) y$. Then $x \mapsto F(x, z)$ is linear for any $z \in \mathbb{R}^m \times \mathbb{R}^q$, and condition (2.2) states precisely that $z \mapsto F(x, z)$ is concave-like on $B \times U$. So, applying Theorem 2.1 with $C = B \times U$ and $D = A$ (the roles of the two arguments interchanged) gives us that
$$\inf_{x \in A} \max_{z \in B \times U} F(x, z) = \max_{z \in B \times U} \inf_{x \in A} F(x, z).$$
Thus, the conclusion follows.

Corollary 2.1. Let $A = S_n = \{(x_1, \ldots, x_n) \in \mathbb{R}^n : x_i \ge 0,\ i = 1, \ldots, n,\ \sum_{i=1}^n x_i = 1\}$, let $B$ be a convex compact subset of $\mathbb{R}^m$ and let $U$ be a convex compact subset of $\mathbb{R}^q$. If $\bigcup_{u \in U,\, y \in B} (\{M(u) y\} - \mathbb{R}^n_+)$ is a convex set, then
$$\inf_{x \in A} \max_{y \in B} \max_{u \in U} x^T M(u) y = \max_{u \in U} \max_{y \in B} \inf_{x \in A} x^T M(u) y.$$

Proof. The conclusion will follow from Theorem 2.2 if we show that (2.2) holds. To see this, let $\lambda \in [0,1]$ and let $(y_1, u_1), (y_2, u_2) \in B \times U$. Then $M(u_i) y_i \in \bigcup_{u \in U,\, y \in B} (\{M(u) y\} - \mathbb{R}^n_+)$ for $i = 1, 2$. By the convexity hypothesis, we can find $(y, u) \in B \times U$ such that
$$M(u) y - \lambda M(u_1) y_1 - (1-\lambda) M(u_2) y_2 \in \mathbb{R}^n_+.$$
This, together with the fact that $x \in A \subseteq \mathbb{R}^n_+$, gives us the required inequality (2.2).

It is worth noting that, whenever $A$ is the simplex $S_n$, condition (2.2) is equivalent to the convexity of the set $\bigcup_{u \in U,\, y \in B} (\{M(u) y\} - \mathbb{R}^n_+)$. As an illustration, we provide a simple numerical example verifying Corollary 2.1.

Example 2.1. Let $A = B = \{(x_1, x_2) : x_1, x_2 \ge 0 \text{ and } x_1 + x_2 = 1\}$. Let $U = [0, 1]$ and let $M(u) = M_0 + u M_1$, where
$$M_0 = \begin{pmatrix} 0 & 5/6 \\ 1/2 & 1 \end{pmatrix} \quad \text{and} \quad M_1 = \begin{pmatrix} -1 & 0 \\ 0 & 1/2 \end{pmatrix}.$$
Then
$$\bigcup_{u \in U,\, y \in B} \{M(u) y\} = \bigcup_{u \in [0,1]} \{ (M_0 + u M_1) y : y \in \mathrm{co}\{(0,1),(1,0)\} \} = \bigcup_{u \in [0,1]} \mathrm{co}\left\{ \begin{pmatrix} -u \\ 1/2 \end{pmatrix}, \begin{pmatrix} 5/6 \\ 1 + u/2 \end{pmatrix} \right\}.$$

Figure 1

Clearly, the set $\bigcup_{u \in [0,1],\, y \in B} \{(M_0 + u M_1) y\}$, shown by the shaded region of Figure 1, is not convex; whereas the set
$$\bigcup_{u \in [0,1],\, y \in B} \left( \{(M_0 + u M_1) y\} - \mathbb{R}^2_+ \right) = \{(a_1, a_2) : a_1 \le 5/6,\ a_2 \le 1.5\}$$
is convex. To verify the robust minimax equality (1.3), let $x_2 = 1 - x_1$ and $y_2 = 1 - y_1$. Then
$$f_u(x_1, y_1) := x^T (M_0 + u M_1) y = -\left( \tfrac{1}{6} + \tfrac{u}{2} + \left( \tfrac{1}{3} + \tfrac{u}{2} \right) y_1 \right) x_1 + 1 + \tfrac{u}{2} - \tfrac{u+1}{2}\, y_1.$$
Calculating extreme values with respect to each variable gives us
$$\max_{u \in U} \max_{y \in B} \min_{x \in A} x^T (M_0 + u M_1) y = \max_{u \in [0,1]} \max_{y_1 \in [0,1]} \min_{x_1 \in [0,1]} f_u(x_1, y_1) = \frac{5}{6};$$
indeed, for each $u$ and $y_1$ the minimum over $x_1 \in [0,1]$ is attained at $x_1 = 1$ and equals $5/6 - (5/6 + u) y_1$, which is maximized at $y_1 = 0$. Also,
$$\min_{x \in A} \max_{u \in U} \max_{y \in B} x^T (M_0 + u M_1) y = \frac{5}{6}.$$

We now show that the jointly concave-like condition of Theorem 2.2 is indeed a characterization of the robust von Neumann minimax theorem, in the sense that the condition holds if and only if the robust von Neumann minimax theorem is valid under every linear perturbation, i.e.,
$$\inf_{x \in A} \max_{y \in B} \max_{u \in U} \{x^T M(u) y + x^T a\} = \max_{u \in U} \max_{y \in B} \inf_{x \in A} \{x^T M(u) y + x^T a\} \quad \forall a \in \mathbb{R}^n.$$

Theorem 2.3. (Characterization) Let $A$ be a closed convex subset of $\mathbb{R}^n$, let $B$ be a convex compact subset of $\mathbb{R}^m$ and let $U$ be a convex compact subset of $\mathbb{R}^q$. Then, the following statements are equivalent:

(1) $(\forall \lambda \in [0,1])(\forall (y_1,u_1),(y_2,u_2) \in B \times U)(\exists (y,u) \in B \times U)(\forall x \in A)$
$$x^T M(u) y \ge \lambda x^T M(u_1) y_1 + (1-\lambda) x^T M(u_2) y_2. \qquad (2.3)$$

(2) $\inf_{x \in A} \max_{y \in B} \max_{u \in U} \{x^T M(u) y + x^T a\} = \max_{u \in U} \max_{y \in B} \inf_{x \in A} \{x^T M(u) y + x^T a\}$ for all $a \in \mathbb{R}^n$.

Proof. [(1) $\Rightarrow$ (2)] Let $z = (y, u) \in \mathbb{R}^m \times \mathbb{R}^q$ and define $F : \mathbb{R}^n \times (\mathbb{R}^m \times \mathbb{R}^q) \to \mathbb{R}$ by $F(x, z) = x^T M(u) y + a^T x$. Then $x \mapsto F(x, z)$ is linear for any $z \in \mathbb{R}^m \times \mathbb{R}^q$ and $z \mapsto F(x, z)$ is concave-like. So, the Ky Fan minimax theorem (Theorem 2.1) gives us statement (2).

[(2) $\Rightarrow$ (1)] We establish this implication by contradiction: suppose that (1) fails. Then there exist $\lambda \in [0,1]$, $y_1, y_2 \in B$ and $u_1, u_2 \in U$ such that for all $(y, u) \in B \times U$ there exists $x \in A$ with
$$x^T M(u) y < \lambda x^T M(u_1) y_1 + (1-\lambda) x^T M(u_2) y_2. \qquad (2.4)$$
Let $a_0 = \lambda M(u_1) y_1 + (1-\lambda) M(u_2) y_2$ and let $a = -a_0$. Then, by (2.4) and statement (2),
$$\inf_{x \in A} \max_{y \in B} \max_{u \in U} x^T (M(u) y - a_0) = \max_{u \in U} \max_{y \in B} \inf_{x \in A} x^T (M(u) y - a_0) < 0.$$
Let $h(x) := \max_{y \in B} \max_{u \in U} x^T M(u) y + \delta_A(x)$, where $\delta_A$ is the indicator function of $A$. Then $h$ is convex and
$$-h^*(a_0) = \inf_{x \in A} \max_{y \in B} \max_{u \in U} x^T (M(u) y - a_0) < 0.$$
Thus, $(a_0, 0) \notin \mathrm{epi}\, h^*$. Let $h_{y,u}(x) = x^T M(u) y$ for $(y, u) \in B \times U$. As $h \ge h_{y,u}$ for each $(y, u) \in B \times U$, we see from (2.1) that $\mathrm{epi}\, h^* \supseteq \mathrm{epi}\, h_{y,u}^*$ for each $(y, u) \in B \times U$. This, together with the convexity of $\mathrm{epi}\, h^*$, gives us that
$$\mathrm{epi}\, h^* \supseteq \mathrm{co} \bigcup_{y \in B,\, u \in U} \mathrm{epi}\, h_{y,u}^* = \left( \mathrm{co} \bigcup_{y \in B,\, u \in U} \{M(u) y\} \right) \times [0, +\infty).$$
Since $(a_0, 0) \notin \mathrm{epi}\, h^*$, it follows that
$$\lambda M(u_1) y_1 + (1-\lambda) M(u_2) y_2 = a_0 \notin \mathrm{co} \bigcup_{y \in B,\, u \in U} \{M(u) y\},$$
which is impossible.

It is easy to see that the jointly concave-like condition (2.3) holds if the classical condition "$(u, y) \mapsto x^T M(u) y$ is concave" is satisfied, and so the robust von Neumann minimax theorem holds under the classical condition. However, we shall see in the following simple example that this classical condition is hard to satisfy, even in the case of a linear perturbation:

Example 2.2. Let $M_0 = \begin{pmatrix} m_1 & m_2 \\ m_2 & m_3 \end{pmatrix}$ and consider $M(\Delta) = M_0 + \Delta$, where $\Delta$ is a $(2 \times 2)$ symmetric matrix (which can be equivalently regarded as a vector in $\mathbb{R}^q$ with $q = 3$). Let $n = m = 2$. We now show that $(\Delta, y) \mapsto x^T (M_0 + \Delta) y$ is not concave for any fixed $x \in \mathbb{R}^2_+ \setminus \{0\}$. To see this, fix $x = (x_1, x_2)^T \in \mathbb{R}^2_+ \setminus \{0\}$ and let
$$\Delta = \begin{pmatrix} a_1 & a_2 \\ a_2 & a_3 \end{pmatrix}.$$
Then, for each fixed $x = (x_1, x_2)$, the mapping $(\Delta, y) \mapsto x^T (M_0 + \Delta) y$ can be equivalently rewritten (up to an invertible linear transformation) as
$$f(a_1, a_2, a_3, y_1, y_2) = (m_1 + a_1) x_1 y_1 + (m_2 + a_2) x_1 y_2 + (m_2 + a_2) x_2 y_1 + (m_3 + a_3) x_2 y_2.$$
Since an invertible linear transformation preserves concavity, we only need to show that $f$ is not concave. To see this, note that, for each $(a_1, a_2, a_3, y_1, y_2) \in \mathbb{R}^5$, $\nabla^2 f(a_1, a_2, a_3, y_1, y_2)$ is the constant $(5 \times 5)$ matrix
$$C = \begin{pmatrix} 0 & 0 & 0 & x_1 & 0 \\ 0 & 0 & 0 & x_2 & x_1 \\ 0 & 0 & 0 & 0 & x_2 \\ x_1 & x_2 & 0 & 0 & 0 \\ 0 & x_1 & x_2 & 0 & 0 \end{pmatrix}.$$
As $x = (x_1, x_2)^T \in \mathbb{R}^2_+ \setminus \{0\}$, $e_5^T C e_5 = 4 x_1 + 4 x_2 > 0$, where $e_5 = (1,1,1,1,1)^T$. So, $f$ is not concave.

From the preceding example, we see that the classical sufficient condition "$(\Delta, y) \mapsto x^T (M_0 + \Delta) y$ is concave" is somewhat limited from the application viewpoint. However, we shall see in the next section that our condition (2.2) can be satisfied under various types of simple and commonly used data uncertainty sets, and hence produces various classes of robust von Neumann minimax theorems in the face of payoff matrix data uncertainty.

3 Classes of Robust Minimax Theorems

In this section, we establish that the robust von Neumann minimax theorem always holds under various classes of uncertainty sets, by verifying the joint concave-like condition of Theorem 2.2.

3.1 Matrix 1-Norm Uncertainty

In the first case, we assume that the matrix data in the bilinear function of the von Neumann minimax theorem is uncertain and the uncertain data matrix belongs to the matrix 1-norm uncertainty set
$$\mathcal{U}_1 = \{ M_0 + \Delta \in \mathbb{R}^{n \times m} : \|\Delta\|_1 \le \rho \},$$
where $M_0 \in \mathbb{R}^{n \times m}$, $\|\cdot\|_1$ is the matrix 1-norm defined by $\|\Delta\|_1 = \sup_{x \in \mathbb{R}^m,\, \|x\|_1 = 1} \|\Delta x\|_1$, and $\|x\|_1$ is the $\ell_1$-norm of the vector $x \in \mathbb{R}^m$.
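For computations, the induced matrix 1-norm above admits a standard closed form: it equals the maximum absolute column sum, since the supremum over the $\ell_1$-ball is attained at a standard basis vector. A minimal sketch (the helper name is ours):

```python
# The induced matrix 1-norm is the maximum absolute column sum.
# (Function name is ours; matrices are lists of rows.)

def mat_norm1(D):
    """Induced 1-norm of D: sup over ||x||_1 = 1 of ||D x||_1,
    which equals the maximum absolute column sum."""
    rows, cols = len(D), len(D[0])
    return max(sum(abs(D[i][j]) for i in range(rows)) for j in range(cols))

D = [[1.0, -2.0],
     [3.0,  4.0]]
print(mat_norm1(D))   # 6.0: column sums are |1|+|3| = 4 and |-2|+|4| = 6

# Spot-check the definition on a few unit l1-norm vectors.
for x in ([1.0, 0.0], [0.0, -1.0], [0.5, -0.5], [0.25, 0.75]):
    Dx = [sum(D[i][j] * x[j] for j in range(2)) for i in range(2)]
    assert sum(abs(t) for t in Dx) <= mat_norm1(D) + 1e-12
```

This makes membership in $\mathcal{U}_1$ trivial to test for a given perturbation $\Delta$.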

Theorem 3.1. (Robust Minimax Theorem I) Let $M_0 \in \mathbb{R}^{n \times m}$ and let $\mathcal{U}_1 = \{ M_0 + \Delta \in \mathbb{R}^{n \times m} : \|\Delta\|_1 \le \rho \}$ with $\rho > 0$. Let $S_n = \{(x_1, \ldots, x_n) \in \mathbb{R}^n : x_i \ge 0,\ i = 1, \ldots, n,\ \sum_{i=1}^n x_i = 1\}$ and let $S_m = \{(x_1, \ldots, x_m) \in \mathbb{R}^m : x_i \ge 0,\ i = 1, \ldots, m,\ \sum_{i=1}^m x_i = 1\}$. Then, we have
$$\min_{x \in S_n} \max_{y \in S_m} \max_{M \in \mathcal{U}_1} x^T M y = \max_{M \in \mathcal{U}_1} \max_{y \in S_m} \min_{x \in S_n} x^T M y. \qquad (3.5)$$

Proof. Let $A = S_n$ and $B = S_m$. Consider $U_1 = \{\Delta \in \mathbb{R}^{n \times m} : \|\Delta\|_1 \le \rho\}$ as a subset of $\mathbb{R}^q$ with $q = mn$, and let $M(\Delta) = M_0 + \Delta$, $\Delta \in U_1$. Note that (3.5) is equivalent to
$$\min_{x \in S_n} \max_{y \in S_m} \max_{\Delta \in U_1} x^T M(\Delta) y = \max_{\Delta \in U_1} \max_{y \in S_m} \min_{x \in S_n} x^T M(\Delta) y.$$
Thus, to obtain the conclusion from the robust von Neumann minimax theorem (Theorem 2.2), it suffices to show that, for any $\lambda \in [0,1]$, $y_1, y_2 \in B$ and $\Delta_1, \Delta_2 \in U_1$, there exists $(y, \Delta) \in B \times U_1$ such that
$$x^T M(\Delta) y \ge \lambda x^T M(\Delta_1) y_1 + (1-\lambda) x^T M(\Delta_2) y_2 \quad \forall x \in A. \qquad (3.6)$$
To see this, fix $\lambda \in [0,1]$, $y_1, y_2 \in B$, $\Delta_1, \Delta_2 \in U_1$. Let $y = \lambda y_1 + (1-\lambda) y_2 \in S_m$ and $a = \lambda \Delta_1 y_1 + (1-\lambda) \Delta_2 y_2$. Now, consider the matrix $\Delta = a e^T$, where $e \in \mathbb{R}^m$ is the vector with each coordinate equal to $1$. As $y \in S_m$, we have $\Delta y = a (e^T y) = a$. Moreover,
$$\|\Delta\|_1 = \sup_{\|x\|_1 = 1} \|a e^T x\|_1 = \|a\|_1 \sup_{\|x\|_1 = 1} |e^T x| \le \|a\|_1 \sup_{\|x\|_1 = 1} \|x\|_1 = \|a\|_1.$$
Note that, as $\|y_1\|_1 = \|y_2\|_1 = 1$,
$$\|a\|_1 = \|\lambda \Delta_1 y_1 + (1-\lambda) \Delta_2 y_2\|_1 \le \lambda \|\Delta_1\|_1 \|y_1\|_1 + (1-\lambda) \|\Delta_2\|_1 \|y_2\|_1 \le \rho.$$
So, $\Delta \in U_1$, and it satisfies $\Delta y = a = \lambda \Delta_1 y_1 + (1-\lambda) \Delta_2 y_2$. Now, for any $x \in A$, from $\Delta y = a$ we have
$$\lambda x^T M(\Delta_1) y_1 + (1-\lambda) x^T M(\Delta_2) y_2 = \lambda x^T (M_0 + \Delta_1) y_1 + (1-\lambda) x^T (M_0 + \Delta_2) y_2 = x^T M_0 y + x^T a = x^T (M_0 + \Delta) y = x^T M(\Delta) y.$$
So, the conclusion follows from Theorem 2.2.

3.2 Rank-1 Matrix Uncertainty

Secondly, we derive the robust minimax theorem in terms of the rank-1 uncertainty set
$$\mathcal{U}_2 = \{ M_0 + \rho u v^T : u \in \mathbb{R}^n,\ v \in \mathbb{R}^m,\ \|u\|_\infty \le 1 \text{ and } \|v\|_\infty \le 1 \},$$
where $\|u\|_\infty$ (resp. $\|v\|_\infty$) is the $\ell_\infty$-norm of $u = (u_1, \ldots, u_n) \in \mathbb{R}^n$ (resp. $v = (v_1, \ldots, v_m) \in \mathbb{R}^m$) defined by $\|u\|_\infty = \max_{1 \le i \le n} |u_i|$ (resp. $\|v\|_\infty = \max_{1 \le i \le m} |v_i|$).
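Before stating the corresponding robust minimax theorem, note what this uncertainty set does to the payoff: for $x \in S_n$ and $y \in S_m$, the worst-case rank-1 perturbation contributes exactly $\rho$, since $x^T (\rho u v^T) y = \rho\,(x^T u)(v^T y) \le \rho \|x\|_1 \|y\|_1 = \rho$, with equality at $u = e$, $v = e$ (this observation is ours, added for illustration). A small numerical probe:

```python
import itertools

rho = 0.5
x = [0.25, 0.75]   # a mixed strategy in S_2
y = [0.5, 0.5]     # a mixed strategy in S_2

# x^T (rho u v^T) y = rho * (x^T u) * (v^T y) is bilinear in (u, v), so
# its maximum over the box ||u||_inf <= 1, ||v||_inf <= 1 is attained at
# a vertex; enumerate the 16 sign patterns.
worst = max(
    rho
    * sum(xi * ui for xi, ui in zip(x, u))
    * sum(vj * yj for vj, yj in zip(v, y))
    for u in itertools.product([-1.0, 1.0], repeat=2)
    for v in itertools.product([-1.0, 1.0], repeat=2)
)
print(worst)   # 0.5, i.e. exactly rho, attained at u = v = (1, 1)
```

Because the perturbation term is bilinear in $(u, v)$, enumerating the vertices of the sup-norm box is enough.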

Theorem 3.2. (Robust Minimax Theorem II) Let $M_0 \in \mathbb{R}^{n \times m}$. Let $S_n = \{(x_1, \ldots, x_n) \in \mathbb{R}^n : x_i \ge 0,\ i = 1, \ldots, n,\ \sum_{i=1}^n x_i = 1\}$ and let $S_m = \{(x_1, \ldots, x_m) \in \mathbb{R}^m : x_i \ge 0,\ i = 1, \ldots, m,\ \sum_{i=1}^m x_i = 1\}$. Let $\mathcal{U}_2 = \{ M_0 + \rho u v^T : u \in \mathbb{R}^n,\ v \in \mathbb{R}^m,\ \|u\|_\infty \le 1 \text{ and } \|v\|_\infty \le 1 \}$, where $\rho > 0$. Then,
$$\min_{x \in S_n} \max_{y \in S_m} \max_{M \in \mathcal{U}_2} x^T M y = \max_{M \in \mathcal{U}_2} \max_{y \in S_m} \min_{x \in S_n} x^T M y. \qquad (3.7)$$

Proof. Let $A = S_n$, $B = S_m$. Consider $U_2 = \{\rho u v^T : u \in \mathbb{R}^n,\ v \in \mathbb{R}^m,\ \|u\|_\infty \le 1,\ \|v\|_\infty \le 1\} \subseteq \mathbb{R}^{n \times m}$ as a subset of $\mathbb{R}^q$ with $q = mn$, and let $M(\Delta) = M_0 + \Delta$, $\Delta \in U_2$. Note that (3.7) is equivalent to
$$\min_{x \in S_n} \max_{y \in S_m} \max_{\Delta \in U_2} x^T M(\Delta) y = \max_{\Delta \in U_2} \max_{y \in S_m} \min_{x \in S_n} x^T M(\Delta) y.$$
The conclusion will follow from Theorem 2.2 if we show that, for any $\lambda \in [0,1]$, $y_1, y_2 \in B$ and $\Delta_1, \Delta_2 \in U_2$, there exists $(y, \Delta) \in B \times U_2$ such that
$$x^T M(\Delta) y \ge \lambda x^T M(\Delta_1) y_1 + (1-\lambda) x^T M(\Delta_2) y_2 \quad \forall x \in A. \qquad (3.8)$$
To see this, fix $\lambda \in [0,1]$, $y_1, y_2 \in B$ and $\Delta_1, \Delta_2 \in U_2$. Then, we can find $u_1, u_2 \in \mathbb{R}^n$ and $v_1, v_2 \in \mathbb{R}^m$ such that $\|u_1\|_\infty \le 1$, $\|u_2\|_\infty \le 1$, $\|v_1\|_\infty \le 1$, $\|v_2\|_\infty \le 1$, $\Delta_1 = \rho u_1 v_1^T$ and $\Delta_2 = \rho u_2 v_2^T$. Now, consider the matrix $\Delta = \rho a e^T$, where $e \in \mathbb{R}^m$ is the vector with each coordinate equal to $1$ and $a = \lambda u_1 v_1^T y_1 + (1-\lambda) u_2 v_2^T y_2$. Letting $y = \lambda y_1 + (1-\lambda) y_2$, we see that $y \in S_m$ and $\Delta y = \rho a$. Moreover, as $\|y_1\|_1 = \|y_2\|_1 = 1$, it follows that
$$\|a\|_\infty \le \lambda \|u_1\|_\infty |v_1^T y_1| + (1-\lambda) \|u_2\|_\infty |v_2^T y_2| \le \lambda \|v_1\|_\infty \|y_1\|_1 + (1-\lambda) \|v_2\|_\infty \|y_2\|_1 \le 1.$$
So, $\Delta = \rho a e^T \in \{\rho u v^T : u \in \mathbb{R}^n,\ v \in \mathbb{R}^m,\ \|u\|_\infty \le 1,\ \|v\|_\infty \le 1\}$ (take $u = a$ and $v = e$). Then, we see that $\Delta \in U_2$ and it satisfies $\Delta y = \rho a = \rho (\lambda u_1 v_1^T y_1 + (1-\lambda) u_2 v_2^T y_2)$. Now, for each $x \in A$, we have
$$\lambda x^T M(\Delta_1) y_1 + (1-\lambda) x^T M(\Delta_2) y_2 = \lambda x^T (M_0 + \rho u_1 v_1^T) y_1 + (1-\lambda) x^T (M_0 + \rho u_2 v_2^T) y_2 = x^T M_0 y + \rho x^T a = x^T (M_0 + \Delta) y = x^T M(\Delta) y.$$
So, the conclusion follows from Theorem 2.2.
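The key norm estimate in the proof of Theorem 3.2 can be probed numerically. The sketch below (ours, not from the paper) samples random admissible data and checks that the constructed vector $a = \lambda u_1 (v_1^T y_1) + (1-\lambda) u_2 (v_2^T y_2)$ always has sup-norm at most $1$, so that $\rho a e^T$ stays in the rank-1 uncertainty set:

```python
import random

random.seed(0)

def simplex_point(m):
    """A random point of the simplex S_m."""
    w = [random.random() for _ in range(m)]
    s = sum(w)
    return [wi / s for wi in w]

n, m = 3, 4
for _ in range(1000):
    lam = random.random()
    u1 = [random.uniform(-1, 1) for _ in range(n)]
    u2 = [random.uniform(-1, 1) for _ in range(n)]
    v1 = [random.uniform(-1, 1) for _ in range(m)]
    v2 = [random.uniform(-1, 1) for _ in range(m)]
    y1, y2 = simplex_point(m), simplex_point(m)
    t1 = sum(v * y for v, y in zip(v1, y1))   # v1^T y1, so |t1| <= 1
    t2 = sum(v * y for v, y in zip(v2, y2))
    a = [lam * u1[i] * t1 + (1 - lam) * u2[i] * t2 for i in range(n)]
    assert max(abs(ai) for ai in a) <= 1 + 1e-12
print("ok: ||a||_inf <= 1 in all 1000 trials")
```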

3.3 Column-wise Affine Parameter Uncertainty

Thirdly, we obtain our robust minimax theorem in the case where the matrix data is uncertain and the uncertain data matrix is column-wise affinely parameterized, i.e., the matrix $M$ belongs to the uncertainty set
$$\mathcal{U}_3 = \left\{ \left( a_0^1 + \sum_{i=1}^{q_1} u_i^1 a_i^1,\ \ldots,\ a_0^m + \sum_{i=1}^{q_m} u_i^m a_i^m \right) : (u_1^j, \ldots, u_{q_j}^j) \in Z_j,\ j = 1, \ldots, m \right\},$$
where each $Z_j$, $j = 1, \ldots, m$, is a compact convex set in $\mathbb{R}^{q_j}$ and $a_i^j \in \mathbb{R}^n$, $i = 0, 1, \ldots, q_j$, $j = 1, \ldots, m$. To begin with, we first derive the following proposition as a preparation.

Proposition 3.1. Let $A$ be a closed convex set in $\mathbb{R}^n$ and let $S_m = \{(y_1, \ldots, y_m) \in \mathbb{R}^m : y_j \ge 0,\ j = 1, \ldots, m,\ \sum_{j=1}^m y_j = 1\}$. Let $a^j : \mathbb{R}^{q_j} \to \mathbb{R}^n$, $j = 1, \ldots, m$, be affine functions and let $\mathcal{U} = \{(a^1(u_1), \ldots, a^m(u_m)) : u_j \in U_j\}$, where each $U_j \subseteq \mathbb{R}^{q_j}$ is a convex compact set, $j = 1, \ldots, m$. Then,
$$\inf_{x \in A} \max_{y \in S_m} \max_{M \in \mathcal{U}} x^T M y = \max_{M \in \mathcal{U}} \max_{y \in S_m} \inf_{x \in A} x^T M y. \qquad (3.9)$$

Proof. Let $B = S_m$ and consider $U = \prod_{j=1}^m U_j$ as a subset of $\mathbb{R}^q$ with $q = \sum_{j=1}^m q_j$, and define $M(u) = (a^1(u_1), \ldots, a^m(u_m))$ for $u = (u_1, \ldots, u_m) \in U$. Note that (3.9) is equivalent to
$$\inf_{x \in A} \max_{y \in S_m} \max_{u \in U} x^T M(u) y = \max_{u \in U} \max_{y \in S_m} \inf_{x \in A} x^T M(u) y.$$
As before, the conclusion will follow from Theorem 2.2 if we show that, for any $\lambda \in [0,1]$, $y^1, y^2 \in B$ and $u^1, u^2 \in U$, there exists $(y, u) \in B \times U$ such that
$$x^T M(u) y \ge \lambda x^T M(u^1) y^1 + (1-\lambda) x^T M(u^2) y^2 \quad \forall x \in A. \qquad (3.10)$$
To see this, fix $\lambda \in [0,1]$, $y^1, y^2 \in B$, $u^1 = (u_1^1, \ldots, u_m^1) \in U$ and $u^2 = (u_1^2, \ldots, u_m^2) \in U$. For any $x \in A$, we have
$$\lambda x^T M(u^1) y^1 + (1-\lambda) x^T M(u^2) y^2 = \lambda x^T \left( a^1(u_1^1), \ldots, a^m(u_m^1) \right) y^1 + (1-\lambda) x^T \left( a^1(u_1^2), \ldots, a^m(u_m^2) \right) y^2$$
$$= \lambda \sum_{j=1}^m y_j^1\, a^j(u_j^1)^T x + (1-\lambda) \sum_{j=1}^m y_j^2\, a^j(u_j^2)^T x = \sum_{j=1}^m \left( \lambda y_j^1 a^j(u_j^1) + (1-\lambda) y_j^2 a^j(u_j^2) \right)^T x.$$
Let $y = \lambda y^1 + (1-\lambda) y^2$. Then $y = (y_1, \ldots, y_m)$ with $y_j = \lambda y_j^1 + (1-\lambda) y_j^2$. Let $u = (u_1, \ldots, u_m)$, where each $u_j$, $j = 1, \ldots, m$, is given by
$$u_j = \begin{cases} \dfrac{\lambda y_j^1 u_j^1 + (1-\lambda) y_j^2 u_j^2}{y_j} & \text{if } y_j \ne 0, \\[1ex] u_j^1 & \text{otherwise.} \end{cases}$$

So, $u_j \in U_j$ (by the convexity of $U_j$) and $y_j a^j(u_j) = \lambda y_j^1 a^j(u_j^1) + (1-\lambda) y_j^2 a^j(u_j^2)$; this equality is straightforward from the construction of $u_j$ and the affineness of $a^j$ when $y_j \ne 0$, while if $y_j = 0$, then $y_j^1 = y_j^2 = 0$, so the equality again follows. This implies that $u \in U$ and
$$\lambda x^T M(u^1) y^1 + (1-\lambda) x^T M(u^2) y^2 = \sum_{j=1}^m y_j\, a^j(u_j)^T x = x^T M(u) y.$$
Thus, the conclusion follows.

Remark 3.1. Using a similar method of proof, if we further assume that $A \subseteq \mathbb{R}^n_+$, then the assumption that each $a^j$ is an affine function can be relaxed to each $a^j$ being a concave function.

Now, we establish the robust minimax theorem for the column-wise affine parameterization case.

Theorem 3.3. (Robust Minimax Theorem III) Let $S_n = \{(x_1, \ldots, x_n) \in \mathbb{R}^n : x_i \ge 0,\ i = 1, \ldots, n,\ \sum_{i=1}^n x_i = 1\}$ and let $S_m = \{(x_1, \ldots, x_m) \in \mathbb{R}^m : x_i \ge 0,\ i = 1, \ldots, m,\ \sum_{i=1}^m x_i = 1\}$. Let
$$\mathcal{U}_3 = \left\{ \left( a_0^1 + \sum_{i=1}^{k} u_i^1 a_i^1,\ \ldots,\ a_0^m + \sum_{i=1}^{k} u_i^m a_i^m \right) : (u_1^j, \ldots, u_k^j) \in Z_j,\ j = 1, \ldots, m \right\},$$
where each $Z_j$, $j = 1, \ldots, m$, is a compact convex set in $\mathbb{R}^k$ and $a_i^j \in \mathbb{R}^n$, $i = 0, 1, \ldots, k$, $j = 1, \ldots, m$. Then,
$$\min_{x \in S_n} \max_{y \in S_m} \max_{M \in \mathcal{U}_3} x^T M y = \max_{M \in \mathcal{U}_3} \max_{y \in S_m} \min_{x \in S_n} x^T M y. \qquad (3.11)$$

Proof. The conclusion follows from the preceding proposition by letting $A = S_n$ (which is convex and compact, so the infimum is attained on $A$), $U_j = Z_j$, and letting $a^j$, $j = 1, \ldots, m$, be the affine mapping defined by
$$a^j(u_j) = a_0^j + \sum_{i=1}^{k} u_i^j a_i^j, \quad u_j = (u_1^j, \ldots, u_k^j) \in \mathbb{R}^k.$$

As a simple application of Proposition 3.1, we derive a robust theorem of the alternative for a parameterized linear inequality system.

Corollary 3.1. (Robust Gordan Alternative Theorem) For each $j = 1, \ldots, m$, let $a^j : \mathbb{R}^{q_j} \to \mathbb{R}^n$ be an affine function and let $U_j$ be a convex compact subset of $\mathbb{R}^{q_j}$. Then exactly one of the following two statements holds:

(i) $(\exists x \in \mathbb{R}^n)(\forall u_j \in U_j)\ a^j(u_j)^T x < 0,\ j = 1, \ldots, m$;

(ii) $(\exists\, 0 \ne \lambda \in \mathbb{R}^m_+)(\exists\, \bar{u}_j \in U_j,\ j = 1, \ldots, m)\ \sum_{j=1}^m \lambda_j a^j(\bar{u}_j) = 0$.

Proof. As (i) and (ii) cannot both hold simultaneously, we only need to show that [not (i)] implies (ii). To see this, let $M(u) = (a^1(u_1), \ldots, a^m(u_m)) \in \mathbb{R}^{n \times m}$ for $u_j \in \mathbb{R}^{q_j}$, $j = 1, \ldots, m$. Then,
$$x^T M(u) y = \sum_{j=1}^m y_j\, a^j(u_j)^T x.$$
Let $A = \mathbb{R}^n$, $B = \{(y_1, \ldots, y_m) : \sum_{j=1}^m y_j = 1,\ y_j \ge 0\}$ and let $U = \prod_{j=1}^m U_j$. Then [not (i)] implies that
$$\inf_{x \in A} \max_{y \in B} \max_{u \in U} x^T M(u) y = \inf_{x \in \mathbb{R}^n} \max_{\sum_{j=1}^m y_j = 1,\, y_j \ge 0} \max_{u_j \in U_j} \sum_{j=1}^m y_j\, a^j(u_j)^T x \ge 0.$$
(Otherwise, there exists $x_0 \in A$ such that $\sum_{j=1}^m y_j a^j(u_j)^T x_0 < 0$ for all $y_j \ge 0$ with $\sum_{j=1}^m y_j = 1$ and for all $u_j \in U_j$; in particular, taking $y$ to be each vertex of $B$, statement (i) would be true, which contradicts our assumption.) Hence, by Proposition 3.1, we have
$$\max_{u \in U} \max_{y \in B} \inf_{x \in A} x^T M(u) y = \inf_{x \in A} \max_{y \in B} \max_{u \in U} x^T M(u) y \ge 0.$$
Thus, there exist $\lambda_j \ge 0$, $j = 1, \ldots, m$, not all zero, and $\bar{u}_j \in U_j$, $j = 1, \ldots, m$, such that, for each $x \in \mathbb{R}^n$, $\sum_{j=1}^m \lambda_j a^j(\bar{u}_j)^T x \ge 0$. So, the conclusion follows.

Remark 3.2. If $U_j$, $j = 1, \ldots, m$, are singletons, then Corollary 3.1 collapses to the classical Gordan alternative theorem [7].

3.4 Isotone Matrix Data Uncertainty

Now, we obtain a form of the robust minimax theorem in the case where the matrix data is uncertain and the uncertain matrix mapping is isotone on $U$, in the sense that $u \mapsto M(u)$ satisfies the condition that, for any $u^1, u^2 \in U$, $\max\{u^1, u^2\} \in U$, and
$$u^1, u^2 \in U,\ u^1 \le u^2 \ \Rightarrow\ M(u^1) \le M(u^2).$$
Note that $\max\{u^1, u^2\}$ is the vector whose $i$th coordinate is the maximum of the $i$th coordinates of $u^1$ and $u^2$, and that $C_1 \le C_2$ means that each entry of the matrix $C_2 - C_1$ is nonnegative. For a simple example of isotone matrix data uncertainty, let
$$U_0 = \{(u_1, \ldots, u_q) \in \mathbb{R}^q : 0 \le u_i \le 1,\ i = 1, \ldots, q\},$$
$$\hat{U}_0 = \left\{ M_0 + \sum_{i=1}^q u_i M_i : u = (u_1, \ldots, u_q) \in U_0 \right\},$$
where $M_i \in \mathbb{R}^{n \times m}$ with $M_i \ge 0$, $i = 1, \ldots, q$.

Theorem 3.4.
(Robust Minimax Theorem IV) Let $S_n = \{(x_1, \ldots, x_n) \in \mathbb{R}^n : x_i \ge 0,\ i = 1, \ldots, n,\ \sum_{i=1}^n x_i = 1\}$ and let $S_m = \{(x_1, \ldots, x_m) \in \mathbb{R}^m : x_i \ge 0,\ i = 1, \ldots, m,\ \sum_{i=1}^m x_i = 1\}$. Suppose that $U$ is a convex compact set in $\mathbb{R}^q$ and $u \mapsto M(u)$ is an isotone mapping on $U$. Then,
$$\min_{x \in S_n} \max_{y \in S_m} \max_{u \in U} x^T M(u) y = \max_{u \in U} \max_{y \in S_m} \min_{x \in S_n} x^T M(u) y. \qquad (3.12)$$

Proof. Let $A = S_n$, $B = S_m$. The conclusion will follow from Theorem 2.2 if we show that, for any $\lambda \in [0,1]$, $y^1, y^2 \in B$ and $u^1, u^2 \in U$, there exists $(y_0, u_0) \in B \times U$ such that
$$x^T M(u_0) y_0 \ge \lambda x^T M(u^1) y^1 + (1-\lambda) x^T M(u^2) y^2 \quad \forall x \in A. \qquad (3.13)$$
To see this, fix $\lambda \in [0,1]$, $y^1, y^2 \in B$ and $u^1, u^2 \in U$. Let $u_0 = \max\{u^1, u^2\}$ and $y_0 = \lambda y^1 + (1-\lambda) y^2$. As $u \mapsto M(u)$ is isotone on $U$, it follows that $u_0 \in U$, $M(u_0) \ge M(u^1)$ and $M(u_0) \ge M(u^2)$. Now, for each $x \in A$, noting that $x \in \mathbb{R}^n_+$ and $y^1, y^2 \in \mathbb{R}^m_+$, we obtain that
$$x^T M(u^1) y^1 - x^T M(u_0) y^1 = x^T \left( M(u^1) - M(u_0) \right) y^1 \le 0$$
and
$$x^T M(u^2) y^2 - x^T M(u_0) y^2 = x^T \left( M(u^2) - M(u_0) \right) y^2 \le 0.$$
This gives us that
$$\lambda x^T M(u^1) y^1 + (1-\lambda) x^T M(u^2) y^2 \le \lambda x^T M(u_0) y^1 + (1-\lambda) x^T M(u_0) y^2 = x^T M(u_0) y_0.$$

References

[1] M. Aghassi and D. Bertsimas, Robust game theory, Mathematical Programming, 107(1-2) (2006), 231-273.

[2] A. Beck and A. Ben-Tal, Duality in robust optimization: primal worst equals dual best, Operations Research Letters, 37 (2009), 1-6.

[3] A. Ben-Tal, L. El Ghaoui and A. Nemirovski, Robust Optimization, Princeton Series in Applied Mathematics, Princeton University Press, Princeton, 2009.

[4] A. Ben-Tal and A. Nemirovski, Robust optimization - methodology and applications, Mathematical Programming, Ser. B, 92 (2002), 453-480.

[5] D. Bertsimas and D. Brown, Constructing uncertainty sets for robust linear optimization, Operations Research, 57 (2009).

[6] D. Bertsimas, D. Pachamanova and M. Sim, Robust linear optimization under general norms, Operations Research Letters, 32 (2004), 510-516.

[7] B. D. Craven and V. Jeyakumar, Equivalence of a Ky Fan type minimax theorem and a Gordan type alternative theorem, Operations Research Letters, 5(2) (1986).

[8] K. Fan, Minimax theorems, Proceedings of the National Academy of Sciences of the USA, 39 (1953), 42-47.

[9] J. B. G. Frenk, P. Kas and G. Kassay, On linear programming duality and necessary and sufficient conditions in minimax theory, Journal of Optimization Theory and Applications, 132(3) (2007).

[10] J. B. G. Frenk and G. Kassay, On noncooperative games, minimax theorems, and equilibrium problems, in: Pareto Optimality, Game Theory and Equilibria, pp. 53-94, Springer Optimization and Its Applications 17, Springer, New York, 2008.

[11] V. Jeyakumar, A generalization of a minimax theorem of Ky Fan via a theorem of the alternative, Journal of Optimization Theory and Applications, 48 (1986).

[12] V. Jeyakumar, G. M. Lee and N. Dinh, New sequential Lagrange multiplier conditions characterizing optimality without constraint qualification for convex programs, SIAM Journal on Optimization, 14 (2003).

[13] V. Jeyakumar and G. Li, Characterizing robust set containments and solutions of uncertain linear programs without qualifications, Operations Research Letters, 38 (2010).

[14] V. Jeyakumar and G. Li, Robust Farkas lemma for uncertain linear systems with applications, Positivity, DOI /s .

[15] V. Jeyakumar and G. Li, Strong duality in robust convex programming: complete characterizations, SIAM Journal on Optimization, 20 (2010).

[16] G. Li, V. Jeyakumar and G. M. Lee, Robust conjugate duality for convex optimization under uncertainty with application to data classification, Nonlinear Analysis Series A: Theory, Methods and Applications, DOI /j.na (2011).

[17] G. Li and K. F. Ng, On extension of Fenchel duality and its application, SIAM Journal on Optimization, 19 (2008).

[18] S. J. Li, G. Y. Chen and G. M. Lee, Minimax theorems for set-valued mappings, Journal of Optimization Theory and Applications, 106(1) (2000).

[19] T. Parthasarathy and T. E. Raghavan, Some Topics in Two-Person Games, Elsevier, New York, 1971.

[20] R. T. Rockafellar, Convex Analysis, Princeton University Press, Princeton, N.J., 1970.

[21] J. von Neumann, Zur Theorie der Gesellschaftsspiele, Mathematische Annalen, 100 (1928), 295-320.


More information

CONVEX OPTIMIZATION VIA LINEARIZATION. Miguel A. Goberna. Universidad de Alicante. Iberian Conference on Optimization Coimbra, November, 2006

CONVEX OPTIMIZATION VIA LINEARIZATION. Miguel A. Goberna. Universidad de Alicante. Iberian Conference on Optimization Coimbra, November, 2006 CONVEX OPTIMIZATION VIA LINEARIZATION Miguel A. Goberna Universidad de Alicante Iberian Conference on Optimization Coimbra, 16-18 November, 2006 Notation X denotes a l.c. Hausdorff t.v.s and X its topological

More information

ON GAP FUNCTIONS OF VARIATIONAL INEQUALITY IN A BANACH SPACE. Sangho Kum and Gue Myung Lee. 1. Introduction

ON GAP FUNCTIONS OF VARIATIONAL INEQUALITY IN A BANACH SPACE. Sangho Kum and Gue Myung Lee. 1. Introduction J. Korean Math. Soc. 38 (2001), No. 3, pp. 683 695 ON GAP FUNCTIONS OF VARIATIONAL INEQUALITY IN A BANACH SPACE Sangho Kum and Gue Myung Lee Abstract. In this paper we are concerned with theoretical properties

More information

1. Introduction. Consider the deterministic multi-objective linear semi-infinite program of the form (P ) V-min c (1.1)

1. Introduction. Consider the deterministic multi-objective linear semi-infinite program of the form (P ) V-min c (1.1) ROBUST SOLUTIONS OF MULTI-OBJECTIVE LINEAR SEMI-INFINITE PROGRAMS UNDER CONSTRAINT DATA UNCERTAINTY M.A. GOBERNA, V. JEYAKUMAR, G. LI, AND J. VICENTE-PÉREZ Abstract. The multi-objective optimization model

More information

A Unified Analysis of Nonconvex Optimization Duality and Penalty Methods with General Augmenting Functions

A Unified Analysis of Nonconvex Optimization Duality and Penalty Methods with General Augmenting Functions A Unified Analysis of Nonconvex Optimization Duality and Penalty Methods with General Augmenting Functions Angelia Nedić and Asuman Ozdaglar April 16, 2006 Abstract In this paper, we study a unifying framework

More information

Chapter 9. Mixed Extensions. 9.1 Mixed strategies

Chapter 9. Mixed Extensions. 9.1 Mixed strategies Chapter 9 Mixed Extensions We now study a special case of infinite strategic games that are obtained in a canonic way from the finite games, by allowing mixed strategies. Below [0, 1] stands for the real

More information

Global Quadratic Minimization over Bivalent Constraints: Necessary and Sufficient Global Optimality Condition

Global Quadratic Minimization over Bivalent Constraints: Necessary and Sufficient Global Optimality Condition Global Quadratic Minimization over Bivalent Constraints: Necessary and Sufficient Global Optimality Condition Guoyin Li Communicated by X.Q. Yang Abstract In this paper, we establish global optimality

More information

Some Properties of the Augmented Lagrangian in Cone Constrained Optimization

Some Properties of the Augmented Lagrangian in Cone Constrained Optimization MATHEMATICS OF OPERATIONS RESEARCH Vol. 29, No. 3, August 2004, pp. 479 491 issn 0364-765X eissn 1526-5471 04 2903 0479 informs doi 10.1287/moor.1040.0103 2004 INFORMS Some Properties of the Augmented

More information

A New Fenchel Dual Problem in Vector Optimization

A New Fenchel Dual Problem in Vector Optimization A New Fenchel Dual Problem in Vector Optimization Radu Ioan Boţ Anca Dumitru Gert Wanka Abstract We introduce a new Fenchel dual for vector optimization problems inspired by the form of the Fenchel dual

More information

A Dual Condition for the Convex Subdifferential Sum Formula with Applications

A Dual Condition for the Convex Subdifferential Sum Formula with Applications Journal of Convex Analysis Volume 12 (2005), No. 2, 279 290 A Dual Condition for the Convex Subdifferential Sum Formula with Applications R. S. Burachik Engenharia de Sistemas e Computacao, COPPE-UFRJ

More information

Constraint qualifications for convex inequality systems with applications in constrained optimization

Constraint qualifications for convex inequality systems with applications in constrained optimization Constraint qualifications for convex inequality systems with applications in constrained optimization Chong Li, K. F. Ng and T. K. Pong Abstract. For an inequality system defined by an infinite family

More information

6.891 Games, Decision, and Computation February 5, Lecture 2

6.891 Games, Decision, and Computation February 5, Lecture 2 6.891 Games, Decision, and Computation February 5, 2015 Lecture 2 Lecturer: Constantinos Daskalakis Scribe: Constantinos Daskalakis We formally define games and the solution concepts overviewed in Lecture

More information

On the relations between different duals assigned to composed optimization problems

On the relations between different duals assigned to composed optimization problems manuscript No. will be inserted by the editor) On the relations between different duals assigned to composed optimization problems Gert Wanka 1, Radu Ioan Boţ 2, Emese Vargyas 3 1 Faculty of Mathematics,

More information

Characterizations of the solution set for non-essentially quasiconvex programming

Characterizations of the solution set for non-essentially quasiconvex programming Optimization Letters manuscript No. (will be inserted by the editor) Characterizations of the solution set for non-essentially quasiconvex programming Satoshi Suzuki Daishi Kuroiwa Received: date / Accepted:

More information

STABLE AND TOTAL FENCHEL DUALITY FOR CONVEX OPTIMIZATION PROBLEMS IN LOCALLY CONVEX SPACES

STABLE AND TOTAL FENCHEL DUALITY FOR CONVEX OPTIMIZATION PROBLEMS IN LOCALLY CONVEX SPACES STABLE AND TOTAL FENCHEL DUALITY FOR CONVEX OPTIMIZATION PROBLEMS IN LOCALLY CONVEX SPACES CHONG LI, DONGHUI FANG, GENARO LÓPEZ, AND MARCO A. LÓPEZ Abstract. We consider the optimization problem (P A )

More information

6.254 : Game Theory with Engineering Applications Lecture 7: Supermodular Games

6.254 : Game Theory with Engineering Applications Lecture 7: Supermodular Games 6.254 : Game Theory with Engineering Applications Lecture 7: Asu Ozdaglar MIT February 25, 2010 1 Introduction Outline Uniqueness of a Pure Nash Equilibrium for Continuous Games Reading: Rosen J.B., Existence

More information

On ɛ-solutions for robust fractional optimization problems

On ɛ-solutions for robust fractional optimization problems Lee and Lee Journal of Inequalities and Applications 2014, 2014:501 R E S E A R C H Open Access On ɛ-solutions for robust fractional optimization problems Jae Hyoung Lee and Gue Myung Lee * * Correspondence:

More information

Semidefinite and Second Order Cone Programming Seminar Fall 2012 Project: Robust Optimization and its Application of Robust Portfolio Optimization

Semidefinite and Second Order Cone Programming Seminar Fall 2012 Project: Robust Optimization and its Application of Robust Portfolio Optimization Semidefinite and Second Order Cone Programming Seminar Fall 2012 Project: Robust Optimization and its Application of Robust Portfolio Optimization Instructor: Farid Alizadeh Author: Ai Kagawa 12/12/2012

More information

Near-Potential Games: Geometry and Dynamics

Near-Potential Games: Geometry and Dynamics Near-Potential Games: Geometry and Dynamics Ozan Candogan, Asuman Ozdaglar and Pablo A. Parrilo September 6, 2011 Abstract Potential games are a special class of games for which many adaptive user dynamics

More information

Near-Potential Games: Geometry and Dynamics

Near-Potential Games: Geometry and Dynamics Near-Potential Games: Geometry and Dynamics Ozan Candogan, Asuman Ozdaglar and Pablo A. Parrilo January 29, 2012 Abstract Potential games are a special class of games for which many adaptive user dynamics

More information

Centre d Economie de la Sorbonne UMR 8174

Centre d Economie de la Sorbonne UMR 8174 Centre d Economie de la Sorbonne UMR 8174 On alternative theorems and necessary conditions for efficiency Do Van LUU Manh Hung NGUYEN 2006.19 Maison des Sciences Économiques, 106-112 boulevard de L'Hôpital,

More information

Additional Homework Problems

Additional Homework Problems Additional Homework Problems Robert M. Freund April, 2004 2004 Massachusetts Institute of Technology. 1 2 1 Exercises 1. Let IR n + denote the nonnegative orthant, namely IR + n = {x IR n x j ( ) 0,j =1,...,n}.

More information

Robust Solutions to Multi-Objective Linear Programs with Uncertain Data

Robust Solutions to Multi-Objective Linear Programs with Uncertain Data Robust Solutions to Multi-Objective Linear Programs with Uncertain Data arxiv:1402.3095v1 [math.oc] 13 Feb 2014 M.A. Goberna, V. Jeyakumar, G. Li, and J. Vicente-Pérez December 9, 2013 Abstract In this

More information

Extended Monotropic Programming and Duality 1

Extended Monotropic Programming and Duality 1 March 2006 (Revised February 2010) Report LIDS - 2692 Extended Monotropic Programming and Duality 1 by Dimitri P. Bertsekas 2 Abstract We consider the problem minimize f i (x i ) subject to x S, where

More information

Convex Optimization Notes

Convex Optimization Notes Convex Optimization Notes Jonathan Siegel January 2017 1 Convex Analysis This section is devoted to the study of convex functions f : B R {+ } and convex sets U B, for B a Banach space. The case of B =

More information

Lecture Note 5: Semidefinite Programming for Stability Analysis

Lecture Note 5: Semidefinite Programming for Stability Analysis ECE7850: Hybrid Systems:Theory and Applications Lecture Note 5: Semidefinite Programming for Stability Analysis Wei Zhang Assistant Professor Department of Electrical and Computer Engineering Ohio State

More information

On Semicontinuity of Convex-valued Multifunctions and Cesari s Property (Q)

On Semicontinuity of Convex-valued Multifunctions and Cesari s Property (Q) On Semicontinuity of Convex-valued Multifunctions and Cesari s Property (Q) Andreas Löhne May 2, 2005 (last update: November 22, 2005) Abstract We investigate two types of semicontinuity for set-valued

More information

Division of the Humanities and Social Sciences. Supergradients. KC Border Fall 2001 v ::15.45

Division of the Humanities and Social Sciences. Supergradients. KC Border Fall 2001 v ::15.45 Division of the Humanities and Social Sciences Supergradients KC Border Fall 2001 1 The supergradient of a concave function There is a useful way to characterize the concavity of differentiable functions.

More information

Chapter 2 Convex Analysis

Chapter 2 Convex Analysis Chapter 2 Convex Analysis The theory of nonsmooth analysis is based on convex analysis. Thus, we start this chapter by giving basic concepts and results of convexity (for further readings see also [202,

More information

Could Nash equilibria exist if the payoff functions are not quasi-concave?

Could Nash equilibria exist if the payoff functions are not quasi-concave? Could Nash equilibria exist if the payoff functions are not quasi-concave? (Very preliminary version) Bich philippe Abstract In a recent but well known paper (see [11]), Reny has proved the existence of

More information

Duality and dynamics in Hamilton-Jacobi theory for fully convex problems of control

Duality and dynamics in Hamilton-Jacobi theory for fully convex problems of control Duality and dynamics in Hamilton-Jacobi theory for fully convex problems of control RTyrrell Rockafellar and Peter R Wolenski Abstract This paper describes some recent results in Hamilton- Jacobi theory

More information

Part IB Optimisation

Part IB Optimisation Part IB Optimisation Theorems Based on lectures by F. A. Fischer Notes taken by Dexter Chua Easter 2015 These notes are not endorsed by the lecturers, and I have modified them (often significantly) after

More information

Research Article A Note on Optimality Conditions for DC Programs Involving Composite Functions

Research Article A Note on Optimality Conditions for DC Programs Involving Composite Functions Abstract and Applied Analysis, Article ID 203467, 6 pages http://dx.doi.org/10.1155/2014/203467 Research Article A Note on Optimality Conditions for DC Programs Involving Composite Functions Xiang-Kai

More information

Selected Examples of CONIC DUALITY AT WORK Robust Linear Optimization Synthesis of Linear Controllers Matrix Cube Theorem A.

Selected Examples of CONIC DUALITY AT WORK Robust Linear Optimization Synthesis of Linear Controllers Matrix Cube Theorem A. . Selected Examples of CONIC DUALITY AT WORK Robust Linear Optimization Synthesis of Linear Controllers Matrix Cube Theorem A. Nemirovski Arkadi.Nemirovski@isye.gatech.edu Linear Optimization Problem,

More information

GEOMETRIC APPROACH TO CONVEX SUBDIFFERENTIAL CALCULUS October 10, Dedicated to Franco Giannessi and Diethard Pallaschke with great respect

GEOMETRIC APPROACH TO CONVEX SUBDIFFERENTIAL CALCULUS October 10, Dedicated to Franco Giannessi and Diethard Pallaschke with great respect GEOMETRIC APPROACH TO CONVEX SUBDIFFERENTIAL CALCULUS October 10, 2018 BORIS S. MORDUKHOVICH 1 and NGUYEN MAU NAM 2 Dedicated to Franco Giannessi and Diethard Pallaschke with great respect Abstract. In

More information

Contents: 1. Minimization. 2. The theorem of Lions-Stampacchia for variational inequalities. 3. Γ -Convergence. 4. Duality mapping.

Contents: 1. Minimization. 2. The theorem of Lions-Stampacchia for variational inequalities. 3. Γ -Convergence. 4. Duality mapping. Minimization Contents: 1. Minimization. 2. The theorem of Lions-Stampacchia for variational inequalities. 3. Γ -Convergence. 4. Duality mapping. 1 Minimization A Topological Result. Let S be a topological

More information

THE UNIQUE MINIMAL DUAL REPRESENTATION OF A CONVEX FUNCTION

THE UNIQUE MINIMAL DUAL REPRESENTATION OF A CONVEX FUNCTION THE UNIQUE MINIMAL DUAL REPRESENTATION OF A CONVEX FUNCTION HALUK ERGIN AND TODD SARVER Abstract. Suppose (i) X is a separable Banach space, (ii) C is a convex subset of X that is a Baire space (when endowed

More information

Cowles Foundation for Research in Economics at Yale University

Cowles Foundation for Research in Economics at Yale University Cowles Foundation for Research in Economics at Yale University Cowles Foundation Discussion Paper No. 1904 Afriat from MaxMin John D. Geanakoplos August 2013 An author index to the working papers in the

More information

Stability of efficient solutions for semi-infinite vector optimization problems

Stability of efficient solutions for semi-infinite vector optimization problems Stability of efficient solutions for semi-infinite vector optimization problems Z. Y. Peng, J. T. Zhou February 6, 2016 Abstract This paper is devoted to the study of the stability of efficient solutions

More information

FROM WEIERSTRASS TO KY FAN THEOREMS AND EXISTENCE RESULTS ON OPTIMIZATION AND EQUILIBRIUM PROBLEMS. Wilfredo Sosa

FROM WEIERSTRASS TO KY FAN THEOREMS AND EXISTENCE RESULTS ON OPTIMIZATION AND EQUILIBRIUM PROBLEMS. Wilfredo Sosa Pesquisa Operacional (2013) 33(2): 199-215 2013 Brazilian Operations Research Society Printed version ISSN 0101-7438 / Online version ISSN 1678-5142 www.scielo.br/pope FROM WEIERSTRASS TO KY FAN THEOREMS

More information

6.207/14.15: Networks Lecture 10: Introduction to Game Theory 2

6.207/14.15: Networks Lecture 10: Introduction to Game Theory 2 6.207/14.15: Networks Lecture 10: Introduction to Game Theory 2 Daron Acemoglu and Asu Ozdaglar MIT October 14, 2009 1 Introduction Outline Mixed Strategies Existence of Mixed Strategy Nash Equilibrium

More information

On the simplest expression of the perturbed Moore Penrose metric generalized inverse

On the simplest expression of the perturbed Moore Penrose metric generalized inverse Annals of the University of Bucharest (mathematical series) 4 (LXII) (2013), 433 446 On the simplest expression of the perturbed Moore Penrose metric generalized inverse Jianbing Cao and Yifeng Xue Communicated

More information

Convex Analysis and Economic Theory AY Elementary properties of convex functions

Convex Analysis and Economic Theory AY Elementary properties of convex functions Division of the Humanities and Social Sciences Ec 181 KC Border Convex Analysis and Economic Theory AY 2018 2019 Topic 6: Convex functions I 6.1 Elementary properties of convex functions We may occasionally

More information

Constructive Proof of the Fan-Glicksberg Fixed Point Theorem for Sequentially Locally Non-constant Multi-functions in a Locally Convex Space

Constructive Proof of the Fan-Glicksberg Fixed Point Theorem for Sequentially Locally Non-constant Multi-functions in a Locally Convex Space Constructive Proof of the Fan-Glicksberg Fixed Point Theorem for Sequentially Locally Non-constant Multi-functions in a Locally Convex Space Yasuhito Tanaka, Member, IAENG, Abstract In this paper we constructively

More information

Topics in Mathematical Economics. Atsushi Kajii Kyoto University

Topics in Mathematical Economics. Atsushi Kajii Kyoto University Topics in Mathematical Economics Atsushi Kajii Kyoto University 25 November 2018 2 Contents 1 Preliminary Mathematics 5 1.1 Topology.................................. 5 1.2 Linear Algebra..............................

More information

Some Contributions to Convex Infinite-Dimensional Optimization Duality

Some Contributions to Convex Infinite-Dimensional Optimization Duality Some Contributions to Convex Infinite-Dimensional Optimization Duality Marco A. López Alicante University King s College London Strand Campus June 2014 Introduction Consider the convex infinite programming

More information

Convex Optimization and Modeling

Convex Optimization and Modeling Convex Optimization and Modeling Duality Theory and Optimality Conditions 5th lecture, 12.05.2010 Jun.-Prof. Matthias Hein Program of today/next lecture Lagrangian and duality: the Lagrangian the dual

More information

Zero sum games Proving the vn theorem. Zero sum games. Roberto Lucchetti. Politecnico di Milano

Zero sum games Proving the vn theorem. Zero sum games. Roberto Lucchetti. Politecnico di Milano Politecnico di Milano General form Definition A two player zero sum game in strategic form is the triplet (X, Y, f : X Y R) f (x, y) is what Pl1 gets from Pl2, when they play x, y respectively. Thus g

More information

Minimax Inequalities and Related Theorems for Arbitrary Topological Spaces: A Full Characterization

Minimax Inequalities and Related Theorems for Arbitrary Topological Spaces: A Full Characterization Minimax Inequalities and Related Theorems for Arbitrary Topological Spaces: A Full Characterization Guoqiang Tian Department of Economics Texas A&M University College Station, Texas 77843 Abstract This

More information

On Gap Functions for Equilibrium Problems via Fenchel Duality

On Gap Functions for Equilibrium Problems via Fenchel Duality On Gap Functions for Equilibrium Problems via Fenchel Duality Lkhamsuren Altangerel 1 Radu Ioan Boţ 2 Gert Wanka 3 Abstract: In this paper we deal with the construction of gap functions for equilibrium

More information

Subdifferential representation of convex functions: refinements and applications

Subdifferential representation of convex functions: refinements and applications Subdifferential representation of convex functions: refinements and applications Joël Benoist & Aris Daniilidis Abstract Every lower semicontinuous convex function can be represented through its subdifferential

More information

MAXIMALITY OF SUMS OF TWO MAXIMAL MONOTONE OPERATORS

MAXIMALITY OF SUMS OF TWO MAXIMAL MONOTONE OPERATORS MAXIMALITY OF SUMS OF TWO MAXIMAL MONOTONE OPERATORS JONATHAN M. BORWEIN, FRSC Abstract. We use methods from convex analysis convex, relying on an ingenious function of Simon Fitzpatrick, to prove maximality

More information

Nonlinear Programming 3rd Edition. Theoretical Solutions Manual Chapter 6

Nonlinear Programming 3rd Edition. Theoretical Solutions Manual Chapter 6 Nonlinear Programming 3rd Edition Theoretical Solutions Manual Chapter 6 Dimitri P. Bertsekas Massachusetts Institute of Technology Athena Scientific, Belmont, Massachusetts 1 NOTE This manual contains

More information

SOME STABILITY RESULTS FOR THE SEMI-AFFINE VARIATIONAL INEQUALITY PROBLEM. 1. Introduction

SOME STABILITY RESULTS FOR THE SEMI-AFFINE VARIATIONAL INEQUALITY PROBLEM. 1. Introduction ACTA MATHEMATICA VIETNAMICA 271 Volume 29, Number 3, 2004, pp. 271-280 SOME STABILITY RESULTS FOR THE SEMI-AFFINE VARIATIONAL INEQUALITY PROBLEM NGUYEN NANG TAM Abstract. This paper establishes two theorems

More information

2 Statement of the problem and assumptions

2 Statement of the problem and assumptions Mathematical Notes, 25, vol. 78, no. 4, pp. 466 48. Existence Theorem for Optimal Control Problems on an Infinite Time Interval A.V. Dmitruk and N.V. Kuz kina We consider an optimal control problem on

More information

AW -Convergence and Well-Posedness of Non Convex Functions

AW -Convergence and Well-Posedness of Non Convex Functions Journal of Convex Analysis Volume 10 (2003), No. 2, 351 364 AW -Convergence Well-Posedness of Non Convex Functions Silvia Villa DIMA, Università di Genova, Via Dodecaneso 35, 16146 Genova, Italy villa@dima.unige.it

More information

Topics in Mathematical Economics. Atsushi Kajii Kyoto University

Topics in Mathematical Economics. Atsushi Kajii Kyoto University Topics in Mathematical Economics Atsushi Kajii Kyoto University 26 June 2018 2 Contents 1 Preliminary Mathematics 5 1.1 Topology.................................. 5 1.2 Linear Algebra..............................

More information

CHARACTERIZATION OF (QUASI)CONVEX SET-VALUED MAPS

CHARACTERIZATION OF (QUASI)CONVEX SET-VALUED MAPS CHARACTERIZATION OF (QUASI)CONVEX SET-VALUED MAPS Abstract. The aim of this paper is to characterize in terms of classical (quasi)convexity of extended real-valued functions the set-valued maps which are

More information

MA651 Topology. Lecture 10. Metric Spaces.

MA651 Topology. Lecture 10. Metric Spaces. MA65 Topology. Lecture 0. Metric Spaces. This text is based on the following books: Topology by James Dugundgji Fundamental concepts of topology by Peter O Neil Linear Algebra and Analysis by Marc Zamansky

More information

CONSTRAINT QUALIFICATIONS, LAGRANGIAN DUALITY & SADDLE POINT OPTIMALITY CONDITIONS

CONSTRAINT QUALIFICATIONS, LAGRANGIAN DUALITY & SADDLE POINT OPTIMALITY CONDITIONS CONSTRAINT QUALIFICATIONS, LAGRANGIAN DUALITY & SADDLE POINT OPTIMALITY CONDITIONS A Dissertation Submitted For The Award of the Degree of Master of Philosophy in Mathematics Neelam Patel School of Mathematics

More information

ZERO DUALITY GAP FOR CONVEX PROGRAMS: A GENERAL RESULT

ZERO DUALITY GAP FOR CONVEX PROGRAMS: A GENERAL RESULT ZERO DUALITY GAP FOR CONVEX PROGRAMS: A GENERAL RESULT EMIL ERNST AND MICHEL VOLLE Abstract. This article addresses a general criterion providing a zero duality gap for convex programs in the setting of

More information

Optimization Theory. A Concise Introduction. Jiongmin Yong

Optimization Theory. A Concise Introduction. Jiongmin Yong October 11, 017 16:5 ws-book9x6 Book Title Optimization Theory 017-08-Lecture Notes page 1 1 Optimization Theory A Concise Introduction Jiongmin Yong Optimization Theory 017-08-Lecture Notes page Optimization

More information

Optimality, Duality, Complementarity for Constrained Optimization

Optimality, Duality, Complementarity for Constrained Optimization Optimality, Duality, Complementarity for Constrained Optimization Stephen Wright University of Wisconsin-Madison May 2014 Wright (UW-Madison) Optimality, Duality, Complementarity May 2014 1 / 41 Linear

More information

Lectures 6, 7 and part of 8

Lectures 6, 7 and part of 8 Lectures 6, 7 and part of 8 Uriel Feige April 26, May 3, May 10, 2015 1 Linear programming duality 1.1 The diet problem revisited Recall the diet problem from Lecture 1. There are n foods, m nutrients,

More information

that a broad class of conic convex polynomial optimization problems, called

that a broad class of conic convex polynomial optimization problems, called JOTA manuscript No. (will be inserted by the editor) Exact Conic Programming Relaxations for a Class of Convex Polynomial Cone-Programs Vaithilingam Jeyakumar Guoyin Li Communicated by Levent Tunçel Abstract

More information

Lecture 5. The Dual Cone and Dual Problem

Lecture 5. The Dual Cone and Dual Problem IE 8534 1 Lecture 5. The Dual Cone and Dual Problem IE 8534 2 For a convex cone K, its dual cone is defined as K = {y x, y 0, x K}. The inner-product can be replaced by x T y if the coordinates of the

More information

Convex Functions and Optimization

Convex Functions and Optimization Chapter 5 Convex Functions and Optimization 5.1 Convex Functions Our next topic is that of convex functions. Again, we will concentrate on the context of a map f : R n R although the situation can be generalized

More information

Dedicated to Michel Théra in honor of his 70th birthday

Dedicated to Michel Théra in honor of his 70th birthday VARIATIONAL GEOMETRIC APPROACH TO GENERALIZED DIFFERENTIAL AND CONJUGATE CALCULI IN CONVEX ANALYSIS B. S. MORDUKHOVICH 1, N. M. NAM 2, R. B. RECTOR 3 and T. TRAN 4. Dedicated to Michel Théra in honor of

More information

Research Article Optimality Conditions of Vector Set-Valued Optimization Problem Involving Relative Interior

Research Article Optimality Conditions of Vector Set-Valued Optimization Problem Involving Relative Interior Hindawi Publishing Corporation Journal of Inequalities and Applications Volume 2011, Article ID 183297, 15 pages doi:10.1155/2011/183297 Research Article Optimality Conditions of Vector Set-Valued Optimization

More information

Interval solutions for interval algebraic equations

Interval solutions for interval algebraic equations Mathematics and Computers in Simulation 66 (2004) 207 217 Interval solutions for interval algebraic equations B.T. Polyak, S.A. Nazin Institute of Control Sciences, Russian Academy of Sciences, 65 Profsoyuznaya

More information

Helly's Theorem and its Equivalences via Convex Analysis

Helly's Theorem and its Equivalences via Convex Analysis Portland State University PDXScholar University Honors Theses University Honors College 2014 Helly's Theorem and its Equivalences via Convex Analysis Adam Robinson Portland State University Let us know

More information

On deterministic reformulations of distributionally robust joint chance constrained optimization problems

On deterministic reformulations of distributionally robust joint chance constrained optimization problems On deterministic reformulations of distributionally robust joint chance constrained optimization problems Weijun Xie and Shabbir Ahmed School of Industrial & Systems Engineering Georgia Institute of Technology,

More information

A CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE

A CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE Journal of Applied Analysis Vol. 6, No. 1 (2000), pp. 139 148 A CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE A. W. A. TAHA Received

More information

Optimization for Communications and Networks. Poompat Saengudomlert. Session 4 Duality and Lagrange Multipliers

Optimization for Communications and Networks. Poompat Saengudomlert. Session 4 Duality and Lagrange Multipliers Optimization for Communications and Networks Poompat Saengudomlert Session 4 Duality and Lagrange Multipliers P Saengudomlert (2015) Optimization Session 4 1 / 14 24 Dual Problems Consider a primal convex

More information

Lecture 1: Entropy, convexity, and matrix scaling CSE 599S: Entropy optimality, Winter 2016 Instructor: James R. Lee Last updated: January 24, 2016

Lecture 1: Entropy, convexity, and matrix scaling CSE 599S: Entropy optimality, Winter 2016 Instructor: James R. Lee Last updated: January 24, 2016 Lecture 1: Entropy, convexity, and matrix scaling CSE 599S: Entropy optimality, Winter 2016 Instructor: James R. Lee Last updated: January 24, 2016 1 Entropy Since this course is about entropy maximization,

More information

SHORT COMMUNICATION. Communicated by Igor Konnov

SHORT COMMUNICATION. Communicated by Igor Konnov On Some Erroneous Statements in the Paper Optimality Conditions for Extended Ky Fan Inequality with Cone and Affine Constraints and Their Applications by A. Capătă SHORT COMMUNICATION R.I. Boţ 1 and E.R.

More information

An inexact subgradient algorithm for Equilibrium Problems

An inexact subgradient algorithm for Equilibrium Problems Volume 30, N. 1, pp. 91 107, 2011 Copyright 2011 SBMAC ISSN 0101-8205 www.scielo.br/cam An inexact subgradient algorithm for Equilibrium Problems PAULO SANTOS 1 and SUSANA SCHEIMBERG 2 1 DM, UFPI, Teresina,

More information

Duality in Regret Measures and Risk Measures arxiv: v1 [q-fin.mf] 30 Apr 2017 Qiang Yao, Xinmin Yang and Jie Sun

Duality in Regret Measures and Risk Measures arxiv: v1 [q-fin.mf] 30 Apr 2017 Qiang Yao, Xinmin Yang and Jie Sun Duality in Regret Measures and Risk Measures arxiv:1705.00340v1 [q-fin.mf] 30 Apr 2017 Qiang Yao, Xinmin Yang and Jie Sun May 13, 2018 Abstract Optimization models based on coherent regret measures and

More information

On John type ellipsoids

On John type ellipsoids On John type ellipsoids B. Klartag Tel Aviv University Abstract Given an arbitrary convex symmetric body K R n, we construct a natural and non-trivial continuous map u K which associates ellipsoids to

More information

CONVERGENCE OF APPROXIMATING FIXED POINTS FOR MULTIVALUED NONSELF-MAPPINGS IN BANACH SPACES. Jong Soo Jung. 1. Introduction

CONVERGENCE OF APPROXIMATING FIXED POINTS FOR MULTIVALUED NONSELF-MAPPINGS IN BANACH SPACES. Jong Soo Jung. 1. Introduction Korean J. Math. 16 (2008), No. 2, pp. 215 231 CONVERGENCE OF APPROXIMATING FIXED POINTS FOR MULTIVALUED NONSELF-MAPPINGS IN BANACH SPACES Jong Soo Jung Abstract. Let E be a uniformly convex Banach space

More information

Nonlinear Programming Algorithms Handout

Nonlinear Programming Algorithms Handout Nonlinear Programming Algorithms Handout Michael C. Ferris Computer Sciences Department University of Wisconsin Madison, Wisconsin 5376 September 9 1 Eigenvalues The eigenvalues of a matrix A C n n are

More information

DUALITY AND FARKAS-TYPE RESULTS FOR DC INFINITE PROGRAMMING WITH INEQUALITY CONSTRAINTS. Xiang-Kai Sun*, Sheng-Jie Li and Dan Zhao 1.

DUALITY AND FARKAS-TYPE RESULTS FOR DC INFINITE PROGRAMMING WITH INEQUALITY CONSTRAINTS. Xiang-Kai Sun*, Sheng-Jie Li and Dan Zhao 1. TAIWANESE JOURNAL OF MATHEMATICS Vol. 17, No. 4, pp. 1227-1244, August 2013 DOI: 10.11650/tjm.17.2013.2675 This paper is available online at http://journal.taiwanmathsoc.org.tw DUALITY AND FARKAS-TYPE

More information

Monotone Linear Relations: Maximality and Fitzpatrick Functions

Monotone Linear Relations: Maximality and Fitzpatrick Functions Monotone Linear Relations: Maximality and Fitzpatrick Functions Heinz H. Bauschke, Xianfu Wang, and Liangjin Yao November 4, 2008 Dedicated to Stephen Simons on the occasion of his 70 th birthday Abstract

More information