Characterizing Robust Solution Sets of Convex Programs under Data Uncertainty
V. Jeyakumar, G. M. Lee and G. Li

Communicated by Sándor Zoltán Németh

Abstract. This paper deals with convex optimization problems in the face of data uncertainty within the framework of robust optimization. It provides various properties and characterizations of the set of all robust optimal solutions of the problems. In particular, it provides generalizations of the constant subdifferential property as well as the constant Lagrangian property for solution sets of convex programming to robust solution sets of uncertain convex programs. The paper also shows that the robust solution sets of uncertain convex quadratic programs and sum-of-squares convex polynomial programs under some commonly used uncertainty sets of robust optimization can be expressed as conic representable sets. As applications, it derives robust optimal solution set characterizations for uncertain fractional programs. The paper presents several numerical examples illustrating the results.

Keywords: Convex optimization problems with data uncertainty; robust optimization; optimal solution set; uncertain convex quadratic programs; uncertain sum-of-squares convex polynomial programs.

AMS Classification: 90C25, 90C20, 90C46.

Corresponding Author: Prof. Gue Myung Lee, Department of Applied Mathematics, Pukyong National University, Busan, Korea.

Affiliations: Department of Applied Mathematics, University of New South Wales, Sydney 2052, Australia; Department of Applied Mathematics, Pukyong National University, Busan, Korea.
1 Introduction

The characterizations of the optimal solution sets of mathematical programming problems are important to our understanding of the behaviour of solution methods for mathematical programs that have multiple optimal solutions. These characterizations are well known for various classes of mathematical programs (see [1-11]) and they assume perfect information (that is, precise values for the input quantities or data of the programs). However, in reality, it is common that the input data associated with the objective function and the constraints of programs are uncertain or incomplete due to prediction or measurement errors [12, 13]. In this paper, we study the problem of characterizing the set of robust optimal solutions of uncertain convex programs. This is done by examining the set of optimal solutions of their robust counterparts (see (3) in Section 2). In recent years, issues related to characterizations of optimal solutions, duality properties and computational tractability of the robust counterparts have been extensively studied in the literature (see [12-19] and other references therein). The purpose of this work is two-fold: its first goal is to derive some properties and characterizations of the robust solution sets of uncertain convex programs under suitable conditions. In particular, we provide generalizations of the constant subdifferential property as well as the constant Lagrangian property for solution sets of convex programming to robust solution sets of uncertain convex programs. Its second aim is to examine special classes of uncertain convex programs for which the robust solution sets can be described as conic representable sets. The significance of conic representable robust solution sets is that they can be further studied using conic programming techniques such as semidefinite programming. For properties and applications of conic representable sets, see [20].
We show that the robust solution sets of convex quadratic programs and sum-of-squares convex (in short, SOS-convex) polynomial programs [21-23] under some commonly used uncertainty sets of robust optimization, such as the ellipsoidal, scenario and spectral norm uncertainties, can be expressed as conic representable sets. The outline of the paper is as follows. Section 2 gives preliminary results involving the existence of Lagrange multipliers for the robust counterparts of the given uncertain convex programs. Section 3 presents various characterizations of the robust solution sets of uncertain convex programs. Section 4 provides characterizations of the robust solution set of an
uncertain convex quadratic program where the objective function has spectral norm uncertainty whereas the constraints have either ellipsoidal or scenario data uncertainty. Section 5 examines the robust solution set for the class of uncertain SOS-convex polynomial programs for which the solution set is described in terms of sums-of-squares polynomial representations. Section 6 develops characterizations of robust solution sets of uncertain fractional programming problems. Section 7 provides a conclusion of the work presented and outlines further research on the topic area of the work.

2 Preliminaries

We begin this section by fixing notation and definitions. Throughout this paper, R^n denotes the Euclidean space with dimension n, equipped with the inner product ⟨·,·⟩. The norm of x ∈ R^n is defined by ‖x‖ = √⟨x, x⟩. The non-negative orthant of R^n is denoted by R^n_+ and is defined by R^n_+ := {(x_1,...,x_n) ∈ R^n : x_i ≥ 0}. The closed (resp. open) interval between α, β ∈ R with α < β is denoted by [α, β] (resp. ]α, β[). For a set A in R^n, the interior (resp. relative interior, closure, convex hull) of A is denoted by int A (resp. ri A, cl A, conv A). We say A is convex whenever µa_1 + (1 − µ)a_2 ∈ A for all µ ∈ [0, 1] and a_1, a_2 ∈ A. A function f : R^n → R is said to be convex iff f((1 − µ)x + µy) ≤ (1 − µ)f(x) + µf(y) for all µ ∈ [0, 1] and for all x, y ∈ R^n. The function f is said to be concave on R^n whenever −f is convex on R^n. Let A be a closed and convex set in R^n. The indicator function δ_A with respect to the set A is defined by δ_A(x) := 0 if x ∈ A, and δ_A(x) := +∞ otherwise. The (convex) normal cone of A at a point x ∈ R^n is defined by N_A(x) := {y ∈ R^n : ⟨y, a − x⟩ ≤ 0 for all a ∈ A} if x ∈ A, and N_A(x) := ∅ otherwise. We use S^n to denote the space of (n × n) symmetric matrices. For A ∈ S^n, A ⪰ 0 (resp. A ≻ 0) means that A is positive semi-definite (resp. positive definite). The (n × n) identity matrix is denoted by I_n. For a continuously differentiable function f : R^n → R, we use ∇f to denote the gradient of f. Let C ⊆ R^q.
If f : R^n × C → R is continuously differentiable, we use ∇_x f to denote the gradient of f with respect to the first variable. Let f be a continuous and
convex function on R^n. The (convex) subdifferential of f at x ∈ R^n is defined by ∂f(x) := {z ∈ R^n : ⟨z, y − x⟩ ≤ f(y) − f(x) for all y ∈ R^n}. Moreover, for a function f : R^n × C → R such that f(·, u) is convex for each fixed u ∈ C, we use ∂_x f(·, u) to denote the subdifferential of f with respect to the first variable.

Lemma 2.1. Let U be a convex compact set in R^{q_0}. Let f : R^n × R^{q_0} → R be a function such that for each fixed u ∈ U, f(·, u) is a convex function on R^n and for each fixed x ∈ R^n, f(x, ·) is a concave function on R^{q_0}. Let f̄(x) = max_{u∈U} f(x, u). Then, the set ∪{∂_x f(x̄, u) : f(x̄, u) = f̄(x̄)} is closed and convex, and ∂f̄(x̄) = ∪{∂_x f(x̄, u) : f(x̄, u) = f̄(x̄)}.

Proof. To show convexity, let a_1, a_2 ∈ ∪{∂_x f(x̄, u) : f(x̄, u) = f̄(x̄)} and let µ ∈ [0, 1]. Then, for i = 1, 2, there exist u_i ∈ U such that f(x̄, u_i) = f̄(x̄) and a_i ∈ ∂_x f(x̄, u_i). It then follows from the concavity of f(x̄, ·) that f(x̄, µu_1 + (1 − µ)u_2) ≥ µf(x̄, u_1) + (1 − µ)f(x̄, u_2) = f̄(x̄). Note that f̄(x̄) = max_{u∈U} f(x̄, u). This implies that

f(x̄, µu_1 + (1 − µ)u_2) = µf(x̄, u_1) + (1 − µ)f(x̄, u_2) = f̄(x̄). (1)

As a_i ∈ ∂_x f(x̄, u_i), for each z ∈ R^n, ⟨a_i, z − x̄⟩ ≤ f(z, u_i) − f(x̄, u_i). So, we have, for each z ∈ R^n,

⟨µa_1 + (1 − µ)a_2, z − x̄⟩ ≤ (µf(z, u_1) + (1 − µ)f(z, u_2)) − (µf(x̄, u_1) + (1 − µ)f(x̄, u_2)) ≤ f(z, µu_1 + (1 − µ)u_2) − f(x̄, µu_1 + (1 − µ)u_2),

where the last inequality follows from the concavity of f(z, ·) together with (1). Thus, ∪{∂_x f(x̄, u) : f(x̄, u) = f̄(x̄)} is convex. To see that this set is closed, let a_n ∈ ∪{∂_x f(x̄, u) : f(x̄, u) = f̄(x̄)} with a_n → a. Then, there exist u_n ∈ U such that a_n ∈ ∂_x f(x̄, u_n) with f(x̄, u_n) = f̄(x̄), and so, ⟨a_n, y − x̄⟩ ≤ f(y, u_n) − f(x̄, u_n) for all y ∈ R^n.
As U is compact, we may assume that u_n → ū ∈ U. By passing to the limit, we have f(x̄, ū) = f̄(x̄) and ⟨a, y − x̄⟩ ≤ f(y, ū) − f(x̄, ū) for all y ∈ R^n. So, a ∈ ∪{∂_x f(x̄, u) : f(x̄, u) = f̄(x̄)}, and hence this set is closed. The subdifferential equation now follows from the subdifferential rule for maximum functions [24, 25], which gives ∂f̄(x̄) = cl conv ∪{∂_x f(x̄, u) : f(x̄, u) = f̄(x̄)}, where cl conv(A) denotes the closure of the convex hull of a set A. □

Consider the convex optimization problem

min_{x∈R^n} {f(x) : g_i(x) ≤ 0, i = 1,...,m}, (2)

where f and g_i, i = 1,...,m, are convex functions on R^n. This problem, which assumes perfect information (that is, precise values for the input quantities or data), has been extensively studied in the literature. In particular, many characterizations of the optimal solution sets of mathematical programming problems, including models of the form (2), have been given (see [1-11]) due to their role in our understanding of the behaviour of solution methods for mathematical programs that have multiple optimal solutions. However, in reality, it is common that the input data associated with the objective function and the constraints of (2) are uncertain or incomplete due to prediction or measurement errors [12, 13]. The model (2) in the face of data uncertainty in the objective and constraint functions can be captured by the following parameterized model

(P)  min_{x∈R^n} {f(x, u) : g_i(x, v_i) ≤ 0, i = 1,...,m},

where u and v_i are uncertain parameters and they belong to the specified convex and compact uncertainty sets U ⊆ R^{q_0} and V_i ⊆ R^q, respectively.

Assumption 2.1. Throughout this section, we assume that f : R^n × R^{q_0} → R is a continuous function on R^n × R^{q_0} such that for each fixed u ∈ U ⊆ R^{q_0}, f(·, u) is a convex function on R^n, and that g_i : R^n × R^q → R is a continuous function such that for each fixed v_i ∈ V_i ⊆ R^q, g_i(·, v_i) is a convex function.
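As a quick numerical illustration of the Danskin-type formula in Lemma 2.1, consider the hypothetical one-dimensional data f(x, u) = ux − u² on U = [0, 1] (this toy example is ours, not from the paper): the derivative of f̄(x) = max_{u∈U} f(x, u) should equal the gradient of f(·, u) at the maximizing u.

```python
# Hypothetical one-dimensional data illustrating Lemma 2.1:
# f(x, u) = u*x - u**2 is linear (convex) in x and concave in u on U = [0, 1].
# Lemma 2.1 then gives d/dx fbar(x) = u*(x), the x-gradient at the maximizer.

def fbar(x):
    u_star = min(max(x / 2.0, 0.0), 1.0)  # argmax of u*x - u**2 over [0, 1]
    return u_star * x - u_star**2, u_star

x0, h = 0.8, 1e-6
(fp, _), (fm, _) = fbar(x0 + h), fbar(x0 - h)
fd = (fp - fm) / (2 * h)                  # central finite difference of fbar
_, u_star = fbar(x0)
print(abs(fd - u_star) < 1e-4)            # True: derivative equals u*(x0)
```

Here the active set U(x̄) is a singleton, so the subdifferential in Lemma 2.1 reduces to a single gradient.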
We now study the problem of characterizing the set of robust optimal solutions of (P) in terms of a given solution point. This is done by examining the set of optimal solutions of the robust counterpart of (P), which can be formulated as the robust convex optimization problem:

(RP)  min_{x∈R^n} max_{u∈U} f(x, u)  s.t.  g_i(x, v_i) ≤ 0, for all v_i ∈ V_i, i = 1,...,m. (3)

Definition 2.1. (Robust feasible set) We define the robust feasible set of (P) by F := {x ∈ R^n : g_i(x, v_i) ≤ 0, for all v_i ∈ V_i, i = 1,...,m}.

Definition 2.2. (Robust solution set) The vector x ∈ F is a robust solution of (P) whenever it is a solution of the robust counterpart (RP). The robust solution set S of (P) consists of all the robust solutions of (P) and is given by S = {x ∈ F : max_{u∈U} f(x, u) ≤ max_{u∈U} f(y, u) for all y ∈ F}.

In recent years, issues related to characterizing solution points of (RP), duality properties of (RP) and computational tractability of (RP) have been extensively studied in the literature (see [12-18] and other references therein). As a consequence of the preceding Lemma, we obtain the following multiplier characterization of a robust solution, which plays a key role in deriving characterizations of robust solution sets.

Proposition 2.1. (Necessary and sufficient condition for a robust solution) For problem (P), let F be the robust feasible set and let S be the robust solution set. Let x̄ ∈ F. Suppose that for each fixed x ∈ R^n, f(x, ·) and g_i(x, ·) are concave functions and that there exists x_0 ∈ R^n such that g_i(x_0, v_i) < 0 for all v_i ∈ V_i, i = 1,...,m. Then, x̄ is a robust solution (that is, x̄ ∈ S) if and only if there exist λ̄_i ≥ 0, ū ∈ U and v̄_i ∈ V_i such that f(x̄, ū) = max_{u∈U} f(x̄, u) and

0 ∈ ∂_x f(x̄, ū) + Σ_{i=1}^m λ̄_i ∂_x g_i(x̄, v̄_i),  λ̄_i g_i(x̄, v̄_i) = 0, i = 1,...,m. (4)

Proof. [⇒] Let x̄ ∈ S. Define f̄ : R^n → R by f̄(x) = max_{u∈U} f(x, u) for all x ∈ R^n. As U is compact and f(·, u) is continuous and convex for each fixed u, f̄ is a real-valued convex function. So, f̄ is continuous and convex. As x̄ ∈ S, 0 ∈ ∂(f̄ + δ_F)(x̄) = ∂f̄(x̄) + N_F(x̄),
where δ_F is the indicator function with respect to the set F, N_F(x̄) is the normal cone of F at x̄, and the last equality follows as f̄ is continuous. From Lemma 2.1, we have 0 ∈ ∪{∂_x f(x̄, u) : f(x̄, u) = f̄(x̄)} + N_F(x̄). To finish the proof, it suffices to show that

N_F(x̄) ⊆ ∪_{v_i∈V_i, λ_i≥0} {Σ_{i=1}^m λ_i a_i : a_i ∈ ∂_x g_i(x̄, v_i), λ_i g_i(x̄, v_i) = 0}.

To see this, let a ∈ N_F(x̄). Then, h(x) := ⟨−a, x⟩ attains its minimum over F at x̄. Note that F = {x : g_i(x, v_i) ≤ 0, for all v_i ∈ V_i, i = 1,...,m} = {x : max_{v_i∈V_i} g_i(x, v_i) ≤ 0, i = 1,...,m}. Then, our assumption, together with Lagrangian duality [25], implies that

⟨−a, x̄⟩ = inf_{x∈F} ⟨−a, x⟩ = max_{λ_i≥0} inf_{x∈R^n} {⟨−a, x⟩ + Σ_{i=1}^m λ_i max_{v_i∈V_i} g_i(x, v_i)}.

As each V_i is compact, for each fixed v_i ∈ V_i ⊆ R^q, g_i(·, v_i) is a continuous convex function and for each fixed x ∈ R^n, g_i(x, ·) is a continuous concave function, the convex-concave minimax theorem [26] implies that

max_{λ_i≥0} inf_{x∈R^n} max_{v_i∈V_i} {⟨−a, x⟩ + Σ_{i=1}^m λ_i g_i(x, v_i)} = max_{λ_i≥0} max_{v_i∈V_i} inf_{x∈R^n} {⟨−a, x⟩ + Σ_{i=1}^m λ_i g_i(x, v_i)}.

So, there exist λ̄_i ≥ 0 and v̄_i ∈ V_i such that ⟨−a, x̄⟩ ≤ ⟨−a, x⟩ + Σ_{i=1}^m λ̄_i g_i(x, v̄_i) for all x ∈ R^n. Letting x = x̄, we have Σ_{i=1}^m λ̄_i g_i(x̄, v̄_i) ≥ 0. Note that λ̄_i ≥ 0 and x̄ ∈ F. So, Σ_{i=1}^m λ̄_i g_i(x̄, v̄_i) = 0. Then, q(x) := ⟨−a, x⟩ + Σ_{i=1}^m λ̄_i g_i(x, v̄_i) attains its minimum at x̄, and so,

a ∈ ∪_{v_i∈V_i, λ_i≥0} {Σ_{i=1}^m λ_i a_i : a_i ∈ ∂_x g_i(x̄, v_i), λ_i g_i(x̄, v_i) = 0}.

Hence, (4) holds.

[⇐] This implication follows from the standard sufficient optimality arguments of convex programming using convexity. □
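Condition (4) can be checked by hand on a hypothetical one-dimensional instance (ours, not from the paper): minimize x² subject to x + v ≤ 0 for all v ∈ [0, 1], whose robust feasible set is x ≤ −1 and whose robust solution is x̄ = −1.

```python
# Hypothetical one-dimensional instance of Proposition 2.1:
# minimize x**2 subject to x + v <= 0 for all v in V = [0, 1].
# Robust feasible set: x <= -1; robust solution xbar = -1.
xbar, vbar, lam = -1.0, 1.0, 2.0   # solution, active scenario, multiplier

grad_f = 2 * xbar                  # gradient of f(x) = x**2 at xbar
grad_g = 1.0                       # gradient of g(x, v) = x + v in x
stationarity = grad_f + lam * grad_g   # first part of condition (4)
slackness = lam * (xbar + vbar)        # complementary slackness in (4)
print(stationarity == 0.0 and slackness == 0.0)   # True
```

The worst-case parameter v̄ = 1 activates the constraint at x̄, exactly as the proposition requires.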
The following example illustrates that, if the concavity assumption with respect to the uncertainty parameter is dropped, the above existence result for multipliers and uncertainty parameters may fail.

Example 2.1. (Failure of the multiplier characterization without concavity) Consider the robust optimization problem

min_{x∈R} max_{u∈[0,1]} (x − u)².

Since any convex function attains its maximum over a polytope at some extreme point, max_{u∈[0,1]} (x − u)² = max{x², (x − 1)²}. So, the robust solution set is S = {1/2}. On the other hand, let x̄ = 1/2 and f(x, u) = (x − u)². Let g_1(x) ≡ −1. Then, the robust feasible set is F = R = {x : g_1(x) ≤ 0}. We note that the strict feasibility condition is always satisfied. Take λ_1 = 0 and let ū ∈ U = [0, 1] with f(x̄, ū) = max_{u∈U} f(x̄, u). Then, ū ∈ {0, 1} and so, ∂_x f(x̄, ū) = {∇_x f(x̄, ū)} ⊆ {−1, 1}. So, 0 ∉ ∂_x f(x̄, ū). Thus, the above multiplier characterization fails. Finally, we observe that u ↦ f(x, u) is not concave.

3 Characterizations of Solution Sets

In this section, we present various characterizations of robust solution sets in terms of a given robust solution point of the given problem. We begin by deriving basic properties of the subdifferential of the objective function on the solution set. Note that, in the uncertainty-free case, the subdifferential of the objective function is constant on the relative interior of the solution set. A generalization of this result for convex optimization problems in the face of data uncertainty is presented below. For a given point x ∈ R^n, let A(x) := ∪_{ū∈U(x)} ∂_x f(x, ū), where U(x) = {ū ∈ U : f(x, ū) = max_{u∈U} f(x, u)}. We first start with a simple fact which states that A(x) is constant over the relative interior of the robust solution set S.

Lemma 3.1. (Generalized constant subdifferential property) For problem (P), let S be the robust solution set, and suppose that for each fixed x ∈ R^n, f(x, ·) and g_i(x, ·) are concave functions. Then A(x_1) = A(x_2) for any x_1, x_2 ∈ ri S.
Moreover, we have A(x) ⊆ A(x*) for any x ∈ ri S and x* ∈ S.
Proof. Fix any x_1, x_2 ∈ ri S. Let f̄(x) = max_{u∈U} f(x, u) and let F = {x : g_i(x, v_i) ≤ 0, for all v_i ∈ V_i, i = 1,...,m}. Then, the robust solution set S is the solution set of the nonsmooth convex optimization problem min f̄(x) s.t. x ∈ F. From Mangasarian [7], we see that ∂f̄(x) is constant over ri S, and so, ∂f̄(x_1) = ∂f̄(x_2). Thus, the first conclusion follows from Lemma 2.1. To see the second assertion, let x ∈ ri S and x* ∈ S. Then, for any λ ∈ ]0, 1[, λx + (1 − λ)x* ∈ ri S. So, the first assertion implies that A(x) = A(λx + (1 − λ)x*) for any λ ∈ ]0, 1[. Let w ∈ A(x) and let λ_n → 0. Define x_n = λ_n x + (1 − λ_n)x*. Then, x_n → x* and w ∈ A(x) = A(x_n). So, there exist u_n ∈ U such that f(x_n, u_n) = max_{u∈U} f(x_n, u) and w ∈ ∂_x f(x_n, u_n). As U is compact, by passing to a subsequence, we may assume u_n → ū ∈ U. Then, ⟨w, z − x_n⟩ ≤ f(z, u_n) − f(x_n, u_n) for all z ∈ R^n. Letting n → ∞, we have f(x*, ū) = max_{u∈U} f(x*, u) and ⟨w, z − x*⟩ ≤ f(z, ū) − f(x*, ū) for all z ∈ R^n. So, w ∈ A(x*). Thus, the conclusion follows. □

The following simple example illustrates that the generalized constant subdifferential property cannot be extended to the whole robust solution set in general.

Example 3.1. Consider the robust optimization problem

min_{x∈R} max_{u_1+u_2+u_3=1, u_i≥0} u_1(x − 1) + u_2(−x + 1) + u_3.

Note that any linear function attains its maximum over a polytope at some extreme point of the polytope. So, we have max_{u_1+u_2+u_3=1, u_i≥0} (u_1(x − 1) + u_2(−x + 1) + u_3) = max{x − 1, −(x − 1), 1} = max{|x − 1|, 1}. Then, the robust solution set is S = {x : |x − 1| ≤ 1}. Take x_1 = 0 and x_2 = 2. Then, x_1, x_2 ∈ S. Direct verification shows that A(x_1) = [−1, 0] and A(x_2) = [0, 1]. Thus, A(x_1) ≠ A(x_2).

Note that the generalized constant subdifferential property yields the classical result for the uncertainty-free case that the gradient of the objective function is constant over the solution set of a smooth convex optimization problem.
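The robust solution set in Example 3.1 can be confirmed by brute force on a grid; the sketch below only assumes the closed-form worst case max{|x − 1|, 1} derived above.

```python
import numpy as np

# Numerical check of Example 3.1: over the simplex {u >= 0, u1+u2+u3 = 1},
# max of u1*(x-1) + u2*(-(x-1)) + u3 equals max(|x - 1|, 1), whose
# minimizers over R form the interval [0, 2].
xs = np.linspace(-3.0, 5.0, 8001)
fbar = np.maximum(np.abs(xs - 1.0), 1.0)
S = xs[np.isclose(fbar, fbar.min())]   # grid points attaining the minimum
print(fbar.min(), round(S.min(), 6), round(S.max(), 6))
```

The printed minimum value is 1, attained on the grid points of [0, 2], matching S = {x : |x − 1| ≤ 1}.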
Corollary 3.1. [7] For problem (P) with U and V_i singleton sets, let the objective function f be continuously differentiable and let S_0 be the solution set. Then, ∇f is constant over S_0.

Proof. Let U and V_i be singleton sets. Then, the above lemma implies that ∇f is constant over ri S_0. Now, take any point x ∈ S_0 and a ∈ ri S_0. Then, for any λ ∈ ]0, 1[, λa + (1 − λ)x ∈ S_0 and so, ∇f(λa + (1 − λ)x) = ∇f(a). Letting λ → 0, as f is continuously differentiable, we have ∇f(x) = ∇f(a). Thus, the conclusion follows. □

In the following proposition, we obtain a basic robust solution set characterization. In the uncertainty-free case, this result collapses to [7, Theorem 1a].

Proposition 3.1. (Basic robust solution set characterization) For problem (P), let F be the robust feasible set and let S be the robust solution set. Suppose that for each fixed x ∈ R^n, f(x, ·) and g_i(x, ·) are concave functions. Let a ∈ S. Then,

S = {x ∈ F : ⟨w_a, x − a⟩ = 0 for some w_a ∈ A(x) ∩ A(a)},

where A(x) := ∪_{ū∈U(x)} ∂_x f(x, ū) and U(x) = {ū ∈ U : f(x, ū) = max_{u∈U} f(x, u)}.

Proof. [⊆] Let x ∈ S. Clearly, x ∈ F. Fix x̃ ∈ ri S. We first see that A(x̃) ≠ ∅. Indeed, as f(·, ·) is continuous and f(·, u) is convex for any fixed u ∈ U, we see that ∂_x f(x̃, u) ≠ ∅ for all u ∈ U. So, A(x̃) := ∪_{ū∈U(x̃)} ∂_x f(x̃, ū) ≠ ∅. Take w ∈ A(x̃). We note that w ∈ A(x̃) ⊆ A(x) ∩ A(a) by Lemma 3.1. As x̃ ∈ ri S, x̃ = λx + (1 − λ)y for some y ∈ S and λ ∈ ]0, 1[. Since w ∈ A(x̃), there exists ũ ∈ U such that f(x̃, ũ) = max_{u∈U} f(x̃, u) and w ∈ ∂_x f(x̃, ũ). So,

(1 − λ)⟨w, x − y⟩ = ⟨w, x − x̃⟩ ≤ max_{u∈U} f(x, u) − f(x̃, ũ) = 0

and

λ⟨w, y − x⟩ = ⟨w, y − x̃⟩ ≤ max_{u∈U} f(y, u) − f(x̃, ũ) = 0.

As λ ∈ ]0, 1[, this implies that ⟨w, x − y⟩ ≤ 0 and ⟨w, y − x⟩ ≤ 0. So, we have ⟨w, y − x⟩ = 0, and hence ⟨w, x − x̃⟩ = (1 − λ)⟨w, x − y⟩ = 0. Similarly, as a ∈ S, we can show that ⟨w, a − x̃⟩ = 0. So, ⟨w, x − a⟩ = ⟨w, x − x̃⟩ − ⟨w, a − x̃⟩ = 0.

[⊇] Take x ∈ F with ⟨w_a, x − a⟩ = 0 for some w_a ∈ A(x) ∩ A(a). As w_a ∈ A(x), there exists ū ∈ U with f(x, ū) = max_{u∈U} f(x, u) and w_a ∈ ∂_x f(x, ū). Then, we have

0 = ⟨w_a, a − x⟩ ≤ f(a, ū) − f(x, ū) ≤ max_{u∈U} f(a, u) − max_{u∈U} f(x, u).
This implies that max_{u∈U} f(x, u) ≤ max_{u∈U} f(a, u), and so, x ∈ S. □

It is worth noting that the concavity of f(x, ·) is often automatically satisfied for robust optimization problems where the data uncertainty is affinely parameterized.

For problem (P), let F be the robust feasible set and let S be the robust solution set. Let a ∈ S. Let λ_i^a ≥ 0 and (u^a, v_i^a) ∈ U × V_i, i = 1,...,m, satisfy

0 ∈ ∂_x f(a, u^a) + Σ_{i=1}^m λ_i^a ∂_x g_i(a, v_i^a),  λ_i^a g_i(a, v_i^a) = 0 and f(a, u^a) = max_{u∈U} f(a, u). (5)

We define the Lagrangian function L_a(·, λ^a, u^a, v^a) by

L_a(x, λ^a, u^a, v^a) = f(x, u^a) + Σ_{i=1}^m λ_i^a g_i(x, v_i^a) for all x ∈ R^n.

Theorem 3.1. (Constant Lagrangian over the robust solution set) For problem (P), let F be the robust feasible set and let S be the robust solution set. Suppose that for each fixed x ∈ R^n, f(x, ·) and g_i(x, ·) are concave functions and there exists x_0 ∈ R^n such that g_i(x_0, v_i) < 0 for all v_i ∈ V_i, i = 1,...,m. Let a ∈ S, and let λ_i^a ≥ 0, u^a ∈ U and v_i^a ∈ V_i satisfy (5). Then, for each x ∈ S, λ_i^a g_i(x, v_i^a) = 0, f(x, u^a) = max_{u∈U} f(x, u), and L_a(·, λ^a, u^a, v^a) is constant on S.

Proof. As a ∈ S, Proposition 2.1 shows that 0 ∈ ∂_x L_a(a, λ^a, u^a, v^a). So, the definition of the convex subdifferential shows that for all x ∈ R^n, L_a(x, λ^a, u^a, v^a) ≥ L_a(a, λ^a, u^a, v^a). So, for all x ∈ R^n,

f(x, u^a) + Σ_{i=1}^m λ_i^a g_i(x, v_i^a) ≥ f(a, u^a) + Σ_{i=1}^m λ_i^a g_i(a, v_i^a) = f(a, u^a) = max_{u∈U} f(a, u), (6)

where the last equality follows from the multiplier characterization of a robust solution. Note that for each x ∈ S, max_{u∈U} f(x, u) = max_{u∈U} f(a, u). So, for each x ∈ S, Σ_{i=1}^m λ_i^a g_i(x, v_i^a) ≥ 0. For each x ∈ S, we also have g_i(x, v_i^a) ≤ 0. It then follows that λ_i^a g_i(x, v_i^a) = 0, i = 1,...,m. To see the second assertion, from (6) and λ_i^a g_i(x, v_i^a) = 0, i = 1,...,m, we have max_{u∈U} f(x, u) ≥ f(x, u^a) ≥ max_{u∈U} f(a, u).
Note that max_{u∈U} f(x, u) = max_{u∈U} f(a, u) (as x ∈ S). It follows that f(x, u^a) = max_{u∈U} f(x, u). To see the last assertion, we only need to notice that, for each x ∈ S,

L_a(x, λ^a, u^a, v^a) = f(x, u^a) + Σ_{i=1}^m λ_i^a g_i(x, v_i^a) = max_{u∈U} f(x, u) = max_{u∈U} f(a, u) = f(a, u^a),

which is a constant. □

Theorem 3.2. (Multiplier characterization of the robust solution set) For problem (P), let F be the robust feasible set and let S be the robust solution set. Suppose that for each fixed x ∈ R^n, f(x, ·) and g_i(x, ·) are concave functions, and there exists x_0 ∈ R^n such that g_i(x_0, v_i) < 0 for all v_i ∈ V_i, i = 1,...,m. Let a ∈ S, and let λ_i^a ≥ 0, u^a ∈ U and v_i^a ∈ V_i be the multipliers and the uncertainty parameters associated with a. Then,

S = {x ∈ F : λ_i^a g_i(x, v_i^a) = 0, i = 1,...,m, f(x, u^a) = max_{u∈U} f(x, u), ⟨w_a, x − a⟩ = 0 for some w_a ∈ ∂_x f(x, u^a) ∩ ∂_x f(a, u^a)}.

Proof. [⊆] Let x ∈ S. Clearly x ∈ F. From the multiplier characterization, there exist λ_i^a ≥ 0, u^a ∈ U and v_i^a ∈ V_i such that 0 ∈ ∂_x f(a, u^a) + Σ_{i=1}^m λ_i^a ∂_x g_i(a, v_i^a), λ_i^a g_i(a, v_i^a) = 0 and f(a, u^a) = max_{u∈U} f(a, u). Then, there exist w_a ∈ ∂_x f(a, u^a) and z_a ∈ Σ_{i=1}^m λ_i^a ∂_x g_i(a, v_i^a) such that w_a + z_a = 0. As z_a ∈ Σ_{i=1}^m λ_i^a ∂_x g_i(a, v_i^a), we have

⟨z_a, x − a⟩ ≤ Σ_{i=1}^m λ_i^a g_i(x, v_i^a) − Σ_{i=1}^m λ_i^a g_i(a, v_i^a).

From the preceding theorem, noting that x ∈ S and a ∈ S, we have f(x, u^a) = max_{u∈U} f(x, u) and λ_i^a g_i(x, v_i^a) = λ_i^a g_i(a, v_i^a) = 0, i = 1,...,m. It then follows that ⟨z_a, x − a⟩ ≤ 0. So, w_a + z_a = 0 implies that

⟨w_a, x − a⟩ ≥ 0. (7)

On the other hand, as w_a ∈ ∂_x f(a, u^a),

⟨w_a, x − a⟩ ≤ f(x, u^a) − f(a, u^a) ≤ max_{u∈U} f(x, u) − max_{u∈U} f(a, u) = 0, (8)

where the last equality follows from the fact that x, a ∈ S. Combining (7) and (8), we have

⟨w_a, x − a⟩ = 0. (9)
We now show that w_a ∈ ∂_x f(x, u^a). To see this, for any y ∈ R^n, we have

⟨w_a, y − x⟩ = ⟨w_a, y − a⟩ + ⟨w_a, a − x⟩ = ⟨w_a, y − a⟩ ≤ f(y, u^a) − f(a, u^a) = f(y, u^a) − f(x, u^a),

where the second equality follows from (9), the inequality follows from w_a ∈ ∂_x f(a, u^a), and the last equality follows from the fact that x ∈ S and a ∈ S (and so, f(a, u^a) = max_{u∈U} f(a, u) = max_{u∈U} f(x, u) = f(x, u^a)). Thus, w_a ∈ ∂_x f(x, u^a). Therefore,

S ⊆ {x ∈ F : λ_i^a g_i(x, v_i^a) = 0, i = 1,...,m, f(x, u^a) = max_{u∈U} f(x, u), ⟨w_a, x − a⟩ = 0 for some w_a ∈ ∂_x f(x, u^a) ∩ ∂_x f(a, u^a)}.

[⊇] Let x ∈ F be such that λ_i^a g_i(x, v_i^a) = 0, i = 1,...,m, f(x, u^a) = max_{u∈U} f(x, u), and there exists w_a ∈ ∂_x f(x, u^a) ∩ ∂_x f(a, u^a) such that ⟨w_a, x − a⟩ = 0. Then, we see that 0 = ⟨w_a, a − x⟩ ≤ f(a, u^a) − f(x, u^a), where the inequality follows from the fact that w_a ∈ ∂_x f(x, u^a). So, max_{u∈U} f(x, u) = f(x, u^a) ≤ f(a, u^a) ≤ max_{u∈U} f(a, u). Note that a ∈ S and x ∈ F. This implies that x ∈ S. □

Corollary 3.2. (Robust solution set for uncertain smooth convex optimization) For problem (P), let F be the robust feasible set and let S be the robust solution set. Suppose that f(·, u) is differentiable for each fixed u ∈ R^{q_0} and that, for each fixed x ∈ R^n, f(x, ·) and g_i(x, ·) are concave functions, and there exists x_0 ∈ R^n such that g_i(x_0, v_i) < 0 for all v_i ∈ V_i, i = 1,...,m. Let a ∈ S, and let λ_i^a ≥ 0, u^a ∈ U and v_i^a ∈ V_i be the multipliers associated with a. Then, we have

S = {x ∈ F : λ_i^a g_i(x, v_i^a) = 0, i = 1,...,m, f(x, u^a) = max_{u∈U} f(x, u), ∇_x f(x, u^a) = ∇_x f(a, u^a), ⟨∇_x f(a, u^a), x − a⟩ = 0}.

Proof. As f(·, u) is differentiable for each fixed u ∈ R^{q_0}, any w_a ∈ ∂_x f(x, u^a) ∩ ∂_x f(a, u^a) with ⟨w_a, x − a⟩ = 0 satisfies w_a = ∇_x f(x, u^a) = ∇_x f(a, u^a). Thus the conclusion follows from the preceding theorem. □
Before we end this section, let us illustrate our robust solution set characterization via an example.

Example 3.2. Consider the following uncertain linear programming problem

min_{(x_1,x_2)∈R^2} u_1 x_1 + u_2 x_2  s.t.  −x_1 + α_1 x_2 ≤ 0, −x_1 + α_2 ≤ 0, (10)

where the uncertain coefficients (u_1, u_2) ∈ {(1, 1) + (α, β) : ‖(α, β)‖ ≤ 1}, α_1 ∈ [0, 1] and α_2 ∈ [−1, 0]. Its robust counterpart is the following robust linear programming problem

min_{(x_1,x_2)∈R^2} max_{(u_1,u_2)∈U} u_1 x_1 + u_2 x_2  s.t.  −x_1 + α_1 x_2 ≤ 0, for all α_1 ∈ [0, 1], −x_1 + α_2 ≤ 0, for all α_2 ∈ [−1, 0],

where U = {(1, 1) + (α, β) : ‖(α, β)‖ ≤ 1}, V_1 = {−1} × [0, 1] × {0} and V_2 = {−1} × {0} × [−1, 0]. This problem can be equivalently rewritten as

min_{x=(x_1,x_2)∈R^2} max_{u=(u_1,u_2)∈U} f(x, u) = ⟨u, x⟩  s.t.  g_1(x, (b_1, γ_1)) = b_1^T x + γ_1 ≤ 0, for all (b_1, γ_1) ∈ V_1, g_2(x, (b_2, γ_2)) = b_2^T x + γ_2 ≤ 0, for all (b_2, γ_2) ∈ V_2.

Let F be the robust feasible set of (10). It can be verified that

F = {(x_1, x_2) ∈ R^2 : −x_1 ≤ 0, −x_1 + x_2 ≤ 0}.

Note that, for all (x_1, x_2) ∈ F,

max_{(u_1,u_2)∈U} {u_1 x_1 + u_2 x_2} = x_1 + x_2 + ‖(x_1, x_2)‖ ≥ x_2 + (−x_2) = 0.

It follows that the robust solution set is S = {(x_1, x_2) : x_1 = 0 and x_2 ≤ 0}. Let x̂ = (1, −2). We observe that the strict feasibility condition is satisfied at x̂ as g_i(x̂, (b_i, γ_i)) < 0 for all (b_i, γ_i) ∈ V_i, i = 1, 2. Let a = (0, 0) ∈ S. Let u^a = (u_1^a, u_2^a) = (1, 0), λ_1^a = 0, λ_2^a = 1, v_1^a = (b_1^a, γ_1^a) with γ_1^a = 0, b_1^a = (−1, 1), and v_2^a = (b_2^a, γ_2^a) with b_2^a = (−1, 0) and γ_2^a = 0. Then, we have

(0, 0) = (u_1^a, u_2^a) + λ_1^a (−1, 1) + λ_2^a (−1, 0) = ∇_x f(a, u^a) + Σ_{i=1}^2 λ_i^a ∇_x g_i(a, v_i^a),
λ_i^a g_i(a, v_i^a) = 0, i = 1, 2, and f(a, u^a) = max_{u∈U} f(a, u). It can be verified that ∇_x f(x, u^a) = ∇_x f(a, u^a) = u^a = (1, 0), and so,

{x ∈ F : λ_i^a (⟨b_i^a, x⟩ + γ_i^a) = 0, i = 1, 2, ⟨u^a, x⟩ = max_{u∈U} ⟨u, x⟩, ⟨u^a, x − a⟩ = 0}
= {x ∈ F : −x_1 = 0, x_1 = x_1 + x_2 + ‖(x_1, x_2)‖}
= {(x_1, x_2) ∈ R^2 : x_1 = 0, x_2 ≤ 0} = S.

This verifies our robust solution set characterization.

4 Solution Sets of Uncertain Convex Quadratic Programs

In this section, we examine robust solution sets of uncertain convex quadratic programs under various classes of commonly used uncertainty sets and describe the structure of the solution sets. Consider the following robust convex quadratic optimization problem:

min_{x∈R^n} max_{A∈U} {(1/2)⟨x, Ax⟩ + ⟨h, x⟩}  s.t.  ⟨b_i, x⟩ + γ_i ≤ 0, for all (b_i, γ_i) ∈ V_i, i = 1,...,m,

where U ⊆ S^n and V_i ⊆ R^n × R are convex compact uncertainty sets. Note that this robust quadratic optimization problem is indeed the robust counterpart of the following uncertain quadratic optimization problem

(QP)  min_{x∈R^n} f(x, A)  s.t.  g_i(x, v_i) ≤ 0, i = 1,...,m,

where A ∈ U, v_i := (b_i, γ_i) ∈ V_i, i = 1,...,m, f(x, A) = (1/2)⟨x, Ax⟩ + ⟨h, x⟩, and g_i(x, v_i) = ⟨b_i, x⟩ + γ_i, v_i = (b_i, γ_i), i = 1,...,m. We now obtain simplified robust solution set characterizations for various commonly used data uncertainties.
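A key computation below is that, for a spectral-norm ball of matrices, the worst-case quadratic value has a closed form. The following sketch with randomly generated data (all matrices hypothetical) checks numerically that no perturbation in the ball beats the closed form.

```python
import numpy as np

# Sketch: for the spectral norm ball {A0 + M : ||M||_spec <= rho}, the
# worst case max 0.5*<x, A x> + <h, x> is 0.5*<x, (A0 + rho*I) x> + <h, x>,
# since <x, M x> <= rho*||x||**2 with equality at M = rho*I.
rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3))
A0 = B @ B.T                       # a positive semi-definite nominal matrix
h = rng.standard_normal(3)
x = rng.standard_normal(3)
rho = 0.5

closed_form = 0.5 * x @ (A0 + rho * np.eye(3)) @ x + h @ x

worst = -np.inf
for _ in range(2000):              # random symmetric M on the spectral sphere
    C = rng.standard_normal((3, 3))
    M = (C + C.T) / 2
    M *= rho / np.linalg.norm(M, 2)
    worst = max(worst, 0.5 * x @ (A0 + M) @ x + h @ x)
print(worst <= closed_form + 1e-9)   # True: the closed form dominates
```

This is exactly why, in Theorems 4.1 and 4.2 below, the worst-case objective gradient involves A_0 + ρI_n.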
Ellipsoidal constraint data uncertainty. Consider the quadratic convex program under ellipsoidal constraint data uncertainty, where U is the spectral norm uncertainty set U_spec given by

U_spec := {A_0 + M : M ∈ S^n, A_0 + M ⪰ 0, ‖M‖_spec ≤ ρ} (11)

with ρ ≥ 0 and A_0 ∈ S^n with A_0 ⪰ 0, and V_i is the ellipsoidal uncertainty set V_i^e given by

V_i^e := {b_i^0 + Σ_{l=1}^q β_i^l b_i^l : ‖(β_i^1,...,β_i^q)‖ ≤ 1} × [γ_i, γ̄_i], (12)

with b_i^l ∈ R^n, l = 0, 1,...,q and γ_i, γ̄_i ∈ R. In this case, the robust quadratic convex program under ellipsoidal constraint data uncertainty [12] is given by

min_{x∈R^n} max_{A∈U_spec} {(1/2)⟨x, Ax⟩ + ⟨h, x⟩}  s.t.  ⟨b_i, x⟩ + γ_i ≤ 0, for all (b_i, γ_i) ∈ V_i^e, i = 1,...,m.

Now, we see that the robust solution set of the quadratic convex program under ellipsoidal constraint data uncertainty can be described in terms of the feasible set of a second-order cone programming problem.

Theorem 4.1. (Convex QP under ellipsoidal constraint data uncertainty) For problem (QP) with U = U_spec and V_i = V_i^e, i = 1,...,m, let F be the robust feasible set and let S be the robust solution set. Suppose that there exists x_0 such that ⟨b_i, x_0⟩ + γ_i < 0 for all (b_i, γ_i) ∈ V_i^e, i = 1,...,m. Let a ∈ S. Then, we have

S = {x ∈ R^n : ‖(⟨b_i^1, x⟩,...,⟨b_i^q, x⟩)‖ ≤ −⟨b_i^0, x⟩ − γ̄_i, i = 1,...,m, (A_0 + ρI_n)(x − a) = 0, ⟨(A_0 + ρI_n)a + h, x − a⟩ = 0}.

Proof. Identify S^n with R^{n(n+1)/2} and consider

f(x, A) = (1/2)⟨x, Ax⟩ + ⟨h, x⟩ and g_i(x, (b_i, γ_i)) = ⟨b_i, x⟩ + γ_i.
Clearly, f(·, A) is differentiable and convex for all A ∈ U_spec and f(x, ·) is affine for each x ∈ R^n; g_i(·, (b_i, γ_i)) is affine for all (b_i, γ_i) ∈ V_i^e and g_i(x, ·) is affine for each x ∈ R^n. Then applying Proposition 2.1 gives us that there exist λ_i^a ≥ 0, A^a ∈ U_spec and (b_i^a, γ_i^a) ∈ V_i^e such that

0 = (A_0 + ρI_n)a + h + Σ_{i=1}^m λ_i^a b_i^a and λ_i^a (⟨b_i^a, a⟩ + γ_i^a) = 0, i = 1,...,m.

So, it follows from Corollary 3.2 that

S = {x ∈ F : λ_i^a g_i(x, (b_i^a, γ_i^a)) = 0, i = 1,...,m, f(x, A^a) = max_{A∈U_spec} f(x, A), ∇_x f(x, A^a) = ∇_x f(a, A^a), ⟨∇_x f(a, A^a), x − a⟩ = 0}.

Note that max_{A∈U_spec} f(x, A) is attained at A^a = A_0 + ρI_n, so that ∇_x f(x, A^a) = (A_0 + ρI_n)x + h. Then,

S = {x ∈ F : λ_i^a g_i(x, (b_i^a, γ_i^a)) = 0, i = 1,...,m, (A_0 + ρI_n)(x − a) = 0, ⟨(A_0 + ρI_n)a + h, x − a⟩ = 0}.

On the other hand,

F = {x : ⟨b_i, x⟩ + γ_i ≤ 0, for all (b_i, γ_i) ∈ V_i^e, i = 1,...,m}
= {x : ⟨b_i^0 + Σ_{l=1}^q β_i^l b_i^l, x⟩ + γ_i ≤ 0, for all ‖(β_i^1,...,β_i^q)‖ ≤ 1 and γ_i ∈ [γ_i, γ̄_i], i = 1,...,m}
= {x : ⟨b_i^0, x⟩ + ‖(⟨b_i^1, x⟩,...,⟨b_i^q, x⟩)‖ + γ̄_i ≤ 0, i = 1,...,m}.

Now, S can be expressed as

S = {x ∈ R^n : ⟨b_i^0, x⟩ + ‖(⟨b_i^1, x⟩,...,⟨b_i^q, x⟩)‖ + γ̄_i ≤ 0, λ_i^a (⟨b_i^a, x⟩ + γ_i^a) = 0, i = 1,...,m, (A_0 + ρI_n)(x − a) = 0, ⟨(A_0 + ρI_n)a + h, x − a⟩ = 0}.

Finally, to see the conclusion, we only need to show that, for any x ∈ F with ⟨(A_0 + ρI_n)a + h, x − a⟩ = 0, we have λ_i^a (⟨b_i^a, x⟩ + γ_i^a) = 0, i = 1,...,m. To see this, take any x ∈ F. Then, we have ⟨b_i^a, x⟩ + γ_i^a ≤ 0. This together with λ_i^a ≥ 0 implies that

λ_i^a (⟨b_i^a, x⟩ + γ_i^a) ≤ 0, i = 1,...,m. (13)
Now, 0 = (A_0 + ρI_n)a + h + Σ_{i=1}^m λ_i^a b_i^a, and so, for any x ∈ F with ⟨(A_0 + ρI_n)a + h, x − a⟩ = 0,

Σ_{i=1}^m λ_i^a ⟨b_i^a, x − a⟩ = −⟨(A_0 + ρI_n)a + h, x − a⟩ = 0.

So, Σ_{i=1}^m λ_i^a (⟨b_i^a, x⟩ + γ_i^a) = Σ_{i=1}^m λ_i^a (⟨b_i^a, a⟩ + γ_i^a) = 0. This together with (13) implies that λ_i^a (⟨b_i^a, x⟩ + γ_i^a) = 0, i = 1,...,m. □

In the uncertainty-free case, that is, ρ = 0 and b_i^l = 0, l = 1,...,q, our result collapses to the solution set characterization of uncertainty-free convex quadratic optimization problems given in [7].

Remark 4.1. Note that Theorem 4.1 shows that the robust solution set of an uncertain convex quadratic optimization problem under ellipsoidal data uncertainty is either an empty set or a conic representable set, in the sense that it can be described as the feasible set of a suitable second-order cone programming problem.

Scenario constraint data uncertainty. Consider the quadratic convex program under scenario constraint data uncertainty [13, 12], where U is the spectral norm uncertainty set U_spec given by

U_spec := {A_0 + M : M ∈ S^n, A_0 + M ⪰ 0, ‖M‖_spec ≤ ρ}

with ρ ≥ 0 and A_0 ∈ S^n with A_0 ⪰ 0, and V_i is the scenario uncertainty set V_i^s given by

V_i^s := co{(b_i^1, γ_i^1),...,(b_i^{p_i}, γ_i^{p_i})},

with (b_i^l, γ_i^l) ∈ R^n × R, l = 1,...,p_i. In this case, the robust quadratic convex program under scenario constraint data uncertainty is given by

min_{x∈R^n} max_{A∈U_spec} {(1/2)⟨x, Ax⟩ + ⟨h, x⟩}  s.t.  ⟨b_i, x⟩ + γ_i ≤ 0, for all (b_i, γ_i) ∈ V_i^s, i = 1,...,m.

Now, we see that the robust solution set of the quadratic convex program under scenario constraint data uncertainty can be expressed as a polyhedral set.
Theorem 4.2. (Convex QP under scenario data uncertainty) For problem (QP) with U = U_spec and V_i = V_i^s, i = 1,...,m, let F be the robust feasible set and let S be the robust solution set. Suppose that there exists x_0 such that ⟨b_i, x_0⟩ + γ_i < 0 for all (b_i, γ_i) ∈ V_i^s, i = 1,...,m. Let a ∈ S. Then, we have

S = {x ∈ R^n : ⟨b_i^l, x⟩ + γ_i^l ≤ 0, l = 1,...,p_i, i = 1,...,m, (A_0 + ρI_n)(x − a) = 0, ⟨(A_0 + ρI_n)a + h, x − a⟩ = 0}.

Proof. Identify S^n with R^{n(n+1)/2}, and consider

f(x, A) = (1/2)⟨x, Ax⟩ + ⟨h, x⟩ and g_i(x, (b_i, γ_i)) = ⟨b_i, x⟩ + γ_i, i = 1,...,m.

Using a similar method of proof as in Theorem 4.1, we see that there exist λ_i^a ≥ 0, A^a ∈ U_spec and (b_i^a, γ_i^a) ∈ V_i^s such that

0 = (A_0 + ρI_n)a + h + Σ_{i=1}^m λ_i^a b_i^a,  λ_i^a (⟨b_i^a, a⟩ + γ_i^a) = 0, i = 1,...,m,

and

S = {x ∈ F : λ_i^a g_i(x, (b_i^a, γ_i^a)) = 0, i = 1,...,m, (A_0 + ρI_n)(x − a) = 0, ⟨(A_0 + ρI_n)a + h, x − a⟩ = 0}.

Note that

F = {x : ⟨b_i, x⟩ + γ_i ≤ 0, for all (b_i, γ_i) ∈ co{(b_i^1, γ_i^1),...,(b_i^{p_i}, γ_i^{p_i})}, i = 1,...,m}
= {x : ⟨b_i^l, x⟩ + γ_i^l ≤ 0, l = 1,...,p_i, i = 1,...,m},

where the second equality follows from the fact that (b_i, γ_i) ↦ g_i(x, (b_i, γ_i)) is affine and the maximum max_{(b_i,γ_i)∈V_i^s} g_i(x, (b_i, γ_i)) is attained at some extreme point of V_i^s. Similarly to the proof of Theorem 4.1, we can show that for any x ∈ F with ⟨(A_0 + ρI_n)a + h, x − a⟩ = 0, we have λ_i^a (⟨b_i^a, x⟩ + γ_i^a) = 0, i = 1,...,m. □
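The reduction of F to finitely many linear inequalities rests on the fact that an affine function attains its maximum over a convex hull at a vertex. A quick randomized sketch (hypothetical scenario data, not from the paper) confirms this:

```python
import numpy as np

# Sketch: for V = co{(b^1, g^1), ..., (b^p, g^p)}, the sup over V of
# <b, x> + g is the max over the p scenarios, so the robust constraint
# reduces to p linear inequalities and the feasible set is polyhedral.
rng = np.random.default_rng(1)
scenarios = [(rng.standard_normal(2), rng.standard_normal()) for _ in range(4)]
x = rng.standard_normal(2)

vertex_max = max(b @ x + g for b, g in scenarios)

ok = True
for _ in range(1000):                 # random points of the convex hull
    w = rng.random(4)
    w /= w.sum()
    b = sum(wi * bi for wi, (bi, _) in zip(w, scenarios))
    g = sum(wi * gi for wi, (_, gi) in zip(w, scenarios))
    ok = ok and (b @ x + g <= vertex_max + 1e-9)
print(ok)                             # True: the hull never beats a vertex
```

The same argument underlies the polyhedral description of S in Theorem 4.2.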
Remark 4.2. Theorem 4.2 shows that the robust solution set of an uncertain convex quadratic optimization problem under scenario data uncertainty is either an empty set or a polyhedral set.

Now, we obtain a characterization of the boundedness of the robust solution set of the quadratic convex program under scenario constraint data uncertainty. Recall that, for a closed and convex set A, its recession cone A_∞ is defined by
A_∞ := {d : x + γd ∈ A for all γ ≥ 0, x ∈ A}.
Recall also that a closed convex set A is bounded if and only if its recession cone A_∞ = {0}.

Corollary 4.1 (Boundedness of the robust solution set). For problem (QP) with U = U_spec and V_i = V_i^s, i = 1, ..., m, let F be the robust feasible set and let S be the robust solution set. Suppose that there exists x_0 such that ⟨b_i, x_0⟩ + γ_i < 0, ∀(b_i, γ_i) ∈ V_i^s, i = 1, ..., m, and S ≠ ∅. Then, the robust solution set S is bounded if and only if
{d ∈ R^n : ⟨b_i^l, d⟩ ≤ 0, l = 1, ..., p_i, i = 1, ..., m, (A_0 + ρI_n)d = 0, ⟨h, d⟩ = 0} = {0}.

Proof. Let a ∈ S. From the preceding theorem, we have
S = {x ∈ R^n : ⟨b_i^l, x⟩ + γ_i^l ≤ 0, l = 1, ..., p_i, i = 1, ..., m, (A_0 + ρI_n)(x − a) = 0, ⟨(A_0 + ρI_n)a + h, x − a⟩ = 0}.
Note that S is bounded if and only if its recession cone S_∞ = {0}, and
S_∞ = {d ∈ R^n : ⟨b_i^l, d⟩ ≤ 0, l = 1, ..., p_i, i = 1, ..., m, (A_0 + ρI_n)d = 0, ⟨(A_0 + ρI_n)a + h, d⟩ = 0}
= {d ∈ R^n : ⟨b_i^l, d⟩ ≤ 0, l = 1, ..., p_i, i = 1, ..., m, (A_0 + ρI_n)d = 0, ⟨h, d⟩ = 0}.
Thus, the conclusion follows.

5 Uncertain SOS-Convex Polynomial Optimization Problems

In this section, we establish robust solution set characterizations for an uncertain sum-of-squares convex (in short, SOS-convex) polynomial optimization problem. As a special case,
we show that the robust solution set of a quadratically constrained quadratic optimization problem under scenario uncertainty can be described as a semidefinite representable set.

We recall that a real polynomial f on R^n is a sum of squares if there exist real polynomials f_j, j = 1, ..., r, such that f(x) = Σ_{j=1}^r f_j^2(x) for all x ∈ R^n. The set consisting of all sum-of-squares real polynomials is denoted by Σ^2. Moreover, the set consisting of all sum-of-squares real polynomials with degree at most d is denoted by Σ^2_d. One interesting and important feature of sum-of-squares polynomials is that checking whether or not a polynomial is a sum of squares is equivalent to solving a linear matrix inequality problem.

Definition 5.1 (SOS-convexity [27, 28]). A real polynomial f on R^n is called SOS-convex if
σ(x, y) := f(x) − f(y) − ⟨∇f(y), x − y⟩
is a sum-of-squares polynomial.

Clearly, an SOS-convex polynomial is convex. However, the converse is not true: there exists a convex polynomial which is not SOS-convex [27]. It is known that any convex quadratic function and any convex separable polynomial is an SOS-convex polynomial. Moreover, an SOS-convex polynomial can be non-quadratic and non-separable. For instance, f(x) = x_1^4 + x_1^2 + x_1 x_2 + x_2^2 is an SOS-convex polynomial which is non-quadratic and non-separable.

Consider the following uncertain SOS-convex polynomial programming problem
(PP) min_{x ∈ R^n} f(x, u) s.t. g_i(x, v_i) ≤ 0, i = 1, ..., m,
where u ∈ U^s, v_i ∈ V_i^s, and U^s and V_i^s are scenario uncertainty sets, that is,
U^s = co{u^1, ..., u^{p_0}} and V_i^s = co{v_i^1, ..., v_i^{p_i}}.
Here f : R^n × R^{q_0} → R is a function such that, for each fixed u ∈ U^s ⊆ R^{q_0}, f(·, u) is an SOS-convex polynomial with degree at most d on R^n, and, for each fixed x ∈ R^n, f(x, ·) is an affine function on R^{q_0}; g_i : R^n × R^q → R is a function such that, for each fixed v_i ∈ V_i^s ⊆ R^q, g_i(·, v_i) is an SOS-convex polynomial with degree at most d, and, for each fixed x ∈ R^n, g_i(x, ·) is an affine function.
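The statement above that every convex quadratic is SOS-convex can be made concrete: for f(x) = (1/2)⟨x, Ax⟩ + ⟨h, x⟩ with A = LᵀL ⪰ 0, the defining polynomial σ(x, y) = f(x) − f(y) − ⟨∇f(y), x − y⟩ collapses to (1/2)‖L(x − y)‖², an explicit sum of squares in (x, y). A numeric check of that identity (the particular A, h are illustrative choices of mine, not from the paper):

```python
import random

# A = L^T L is positive semidefinite by construction.
L = [[1.0, 1.0], [0.0, 1.0]]
A = [[L[0][0]**2 + L[1][0]**2, L[0][0]*L[0][1] + L[1][0]*L[1][1]],
     [L[0][0]*L[0][1] + L[1][0]*L[1][1], L[0][1]**2 + L[1][1]**2]]
h = [0.7, -0.2]

def f(x):
    # f(x) = 1/2 <x, Ax> + <h, x>
    q = sum(x[i] * A[i][j] * x[j] for i in range(2) for j in range(2))
    return 0.5 * q + h[0]*x[0] + h[1]*x[1]

def grad_f(y):
    # ∇f(y) = A y + h
    return [A[i][0]*y[0] + A[i][1]*y[1] + h[i] for i in range(2)]

def sigma(x, y):
    # σ(x, y) = f(x) - f(y) - <∇f(y), x - y>
    g = grad_f(y)
    return f(x) - f(y) - sum(g[i] * (x[i] - y[i]) for i in range(2))

def sos_form(x, y):
    # 1/2 ||L (x - y)||^2: an explicit sum of squares in (x, y).
    d = [x[0] - y[0], x[1] - y[1]]
    r0 = L[0][0]*d[0] + L[0][1]*d[1]
    r1 = L[1][0]*d[0] + L[1][1]*d[1]
    return 0.5 * (r0*r0 + r1*r1)

random.seed(2)
max_gap = 0.0
for _ in range(1000):
    x = [random.uniform(-3, 3) for _ in range(2)]
    y = [random.uniform(-3, 3) for _ in range(2)]
    max_gap = max(max_gap, abs(sigma(x, y) - sos_form(x, y)))
print(max_gap < 1e-9)
```

The linear term h cancels out of σ, which is why only the factorization A = LᵀL matters for SOS-convexity of a quadratic.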
The robust counterpart of the above uncertain SOS-convex polynomial programming problem can be given by
min_{x ∈ R^n} max_{u ∈ U^s} f(x, u) s.t. g_i(x, v_i) ≤ 0, ∀v_i ∈ V_i^s, i = 1, ..., m.
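The membership test "… ∈ Σ^2_d" used in Theorem 5.1 below is, for a fixed polynomial, a semidefinite feasibility problem: p ∈ Σ^2 iff p(x) = m(x)ᵀ G m(x) for some positive semidefinite Gram matrix G in a monomial basis m(x). A minimal univariate illustration in pure Python (the quartic and its Gram matrix are chosen by hand for this sketch):

```python
import random

# p(x) = x^4 - 4x + 3 = (x^2 - 1)^2 + 2 (x - 1)^2, so p is a sum of squares.
# In the basis m(x) = (1, x, x^2), p(x) = m(x)^T G m(x) with
# G = v1 v1^T + v2 v2^T for v1 = (-1, 0, 1) and v2 = sqrt(2) * (-1, 1, 0):
G = [[3.0, -2.0, -1.0],
     [-2.0, 2.0, 0.0],
     [-1.0, 0.0, 1.0]]

def p(x):
    return x**4 - 4.0*x + 3.0

def gram_value(x):
    m = [1.0, x, x*x]
    return sum(m[i]*G[i][j]*m[j] for i in range(3) for j in range(3))

random.seed(3)
identity_ok = all(abs(p(x) - gram_value(x)) < 1e-6
                  for x in (random.uniform(-2, 2) for _ in range(200)))

# G is PSD by construction (a sum of two rank-one terms); spot-check
# its quadratic form on random vectors.
psd_ok = True
for _ in range(2000):
    v = [random.uniform(-1, 1) for _ in range(3)]
    q = sum(v[i]*G[i][j]*v[j] for i in range(3) for j in range(3))
    psd_ok = psd_ok and q >= -1e-9
print(identity_ok, psd_ok)
```

In general, searching over all symmetric G that match the coefficients of p while requiring G ⪰ 0 is the linear matrix inequality problem mentioned after the definition of Σ^2; an SDP solver would perform that search automatically.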
Theorem 5.1. For problem (PP), let F be the robust feasible set and let S be the robust solution set. Suppose that there exists x_0 such that g_i(x_0, v_i) < 0, ∀v_i ∈ V_i^s, i = 1, ..., m. Let a ∈ S, and let λ_i^a ≥ 0, u^a ∈ U^s and v_i^a ∈ V_i^s be the multipliers associated with a. Then,
S = {z ∈ R^n : g_i(z, v_i^l) ≤ 0, l = 1, ..., p_i, i = 1, ..., m, f(·, u^a) + Σ_{i=1}^m λ_i^a g_i(·, v_i^a) − max_{1 ≤ i ≤ p_0} f(z, u^i) ∈ Σ^2_d}.

Proof. Let z ∈ S. Clearly, z ∈ F. Note that any affine function attains its maximum on a compact polytope at some extreme point of the polytope. As, for each fixed x ∈ R^n, g_i(x, ·) is an affine function, we obtain that max_{v_i ∈ V_i^s} g_i(z, v_i) = max_{1 ≤ l ≤ p_i} g_i(z, v_i^l), i = 1, ..., m. This implies that
F = {z : g_i(z, v_i) ≤ 0, ∀v_i ∈ V_i^s, i = 1, ..., m} = {z : g_i(z, v_i^l) ≤ 0, l = 1, ..., p_i, i = 1, ..., m}.
We now show that f(·, u^a) + Σ_{i=1}^m λ_i^a g_i(·, v_i^a) − max_{1 ≤ i ≤ p_0} f(z, u^i) ∈ Σ^2_d. To see this, note that a ∈ S and λ_i^a ≥ 0, u^a ∈ U^s and v_i^a ∈ V_i^s are the multipliers associated with a. This shows that
0 = ∇_x f(a, u^a) + Σ_{i=1}^m λ_i^a ∇_x g_i(a, v_i^a) and λ_i^a g_i(a, v_i^a) = 0, i = 1, ..., m.
As f(·, u^a) and g_i(·, v_i^a) are SOS-convex polynomials and λ_i^a ≥ 0,
h(x) := f(x, u^a) + Σ_{i=1}^m λ_i^a g_i(x, v_i^a) − f(a, u^a)
is also an SOS-convex polynomial with degree at most d. Note that h attains its global minimum at a with h(a) = 0 and ∇h(a) = 0. From the definition of SOS-convexity,
σ(x, y) := h(x) − h(y) − ⟨∇h(y), x − y⟩
is a sum-of-squares polynomial. In particular, letting y = a, we see that h(x) = σ(x, a) is also a sum-of-squares polynomial. Note that, for each x ∈ R^n, max_{u ∈ U^s} f(x, u) = max_{1 ≤ i ≤ p_0} f(x, u^i) (as f(x, ·) is affine and U^s = co{u^1, ..., u^{p_0}} is a polytope). So, we have
max_{1 ≤ i ≤ p_0} f(a, u^i) = max_{u ∈ U^s} f(a, u) = max_{u ∈ U^s} f(z, u) = max_{1 ≤ i ≤ p_0} f(z, u^i),
where the second equality follows from the fact that z ∈ S and a ∈ S. This implies that
f(x, u^a) + Σ_{i=1}^m λ_i^a g_i(x, v_i^a) − max_{1 ≤ i ≤ p_0} f(z, u^i) = h(x) + (f(a, u^a) − max_{1 ≤ i ≤ p_0} f(z, u^i)) = h(x) + (max_{u ∈ U^s} f(a, u) − max_{1 ≤ i ≤ p_0} f(z, u^i)) = h(x)
is a sum-of-squares polynomial. Moreover, note that f(·, u^a) is an SOS-convex polynomial with degree at most d on R^n, g_i(·, v_i^a) is an SOS-convex polynomial with degree at most d, and max_{1 ≤ i ≤ p_0} f(z, u^i) is a constant. So, h is a sum-of-squares polynomial with degree at most d.

Conversely, let z ∈ F with f(·, u^a) + Σ_{i=1}^m λ_i^a g_i(·, v_i^a) − max_{1 ≤ i ≤ p_0} f(z, u^i) ∈ Σ^2_d. Note that any sum-of-squares polynomial must take nonnegative values. So,
0 ≤ f(a, u^a) + Σ_{i=1}^m λ_i^a g_i(a, v_i^a) − max_{1 ≤ i ≤ p_0} f(z, u^i) ≤ max_{1 ≤ i ≤ p_0} f(a, u^i) − max_{1 ≤ i ≤ p_0} f(z, u^i) = max_{u ∈ U^s} f(a, u) − max_{u ∈ U^s} f(z, u).
This together with z ∈ F shows that z ∈ S.

We present an example to illustrate the robust solution set characterization of an uncertain SOS-convex polynomial program.

Example 5.1. Consider the following uncertain SOS-convex polynomial optimization problem
min_{(x_1, x_2) ∈ R^2} {x_1^4 + α_1 x_1 + α_2 x_2} s.t. βx_1 + 1 ≤ 0,
where the uncertain parameters α_1 ∈ [−1, 1], α_2 ∈ [0, 1] and β ∈ [−2, −1]. Its robust counterpart is given by
min_{(x_1, x_2) ∈ R^2} max_{α_1 ∈ [−1,1], α_2 ∈ [0,1]} {x_1^4 + α_1 x_1 + α_2 x_2} s.t. βx_1 + 1 ≤ 0, ∀β ∈ [−2, −1].
Let f(x, u) = x_1^4 + u_1 x_1 + u_2 x_2. Then, f(·, u) is a separable convex polynomial (and so, SOS-convex) and f(x, ·) is affine. Let g_1(x, v_1) = v_1^1 x_1 + v_1^2 x_2 + 1. Then, g_1(·, v_1) is affine and g_1(x, ·) is affine. This robust optimization problem can be written as
min_{x ∈ R^2} max_{u ∈ U} f(x, u) s.t. g_1(x, v_1) ≤ 0, ∀v_1 ∈ V_1,
where U = co{(−1, 0), (1, 0), (−1, 1), (1, 1)} and V_1 = co{(−2, 0), (−1, 0)}. This problem is equivalent to
min_{(x_1, x_2) ∈ R^2} {x_1^4 + x_1 + max{x_2, 0}} s.t. −x_1 + 1 ≤ 0,
and so, F = {(x_1, x_2) : x_1 ≥ 1} and S = {(x_1, x_2) : x_1 = 1, x_2 ≤ 0}.
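The feasible set and solution set claimed in Example 5.1 can also be confirmed by brute force: over the robust feasible region x_1 ≥ 1 the worst-case objective is x_1^4 + x_1 + max{x_2, 0}, and a grid search should return the value 2 exactly on {1} × (−∞, 0]. A quick sketch (grid resolution is an arbitrary choice of mine):

```python
def robust_obj(x1, x2):
    # max over α1 ∈ [-1, 1], α2 ∈ [0, 1] of x1^4 + α1 x1 + α2 x2,
    # which equals x1^4 + |x1| + max{x2, 0}; here x1 >= 1 so |x1| = x1.
    return x1**4 + abs(x1) + max(x2, 0.0)

# Robust feasible set: β x1 + 1 <= 0 for all β in [-2, -1]  <=>  x1 >= 1.
grid = [(1.0 + 0.1 * i, -1.0 + 0.1 * j) for i in range(21) for j in range(21)]
best = min(robust_obj(x1, x2) for (x1, x2) in grid)
argmins = [(x1, x2) for (x1, x2) in grid
           if abs(robust_obj(x1, x2) - best) < 1e-9]

all_on_S = all(abs(x1 - 1.0) < 1e-9 and x2 <= 1e-9 for (x1, x2) in argmins)
print(best, all_on_S)
```

All grid minimizers sit on x_1 = 1, x_2 ≤ 0 with value 2, matching the set S derived in the example.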
To verify our robust solution characterization, let a = (1, 0) ∈ S, u^a = (1, 0), v^a = (−1, 0) and λ^a = 5. Then, we have
(0, 0)^T = ∇_x f(a, u^a) + λ^a ∇_x g_1(a, v^a) = (5, 0)^T + 5(−1, 0)^T and λ^a g_1(a, v^a) = 0.
Note that U = co{u^1, ..., u^4} with u^1 = (−1, 0), u^2 = (1, 0), u^3 = (−1, 1) and u^4 = (1, 1). So,
{z ∈ R^2 : g_1(z, v_1^l) ≤ 0, l = 1, 2, f(·, u^a) + λ^a g_1(·, v^a) − max_{1 ≤ i ≤ 4} f(z, u^i) ∈ Σ^2_d}
= {z ∈ R^2 : z_1 ≥ 1, h − (z_1^4 + z_1 + max{z_2, 0}) ∈ Σ^2_4},  (14)
where h(x_1, x_2) = x_1^4 − 4x_1 + 5. Note that for any z_1 = 1 and z_2 ≤ 0, z_1^4 + z_1 + max{z_2, 0} = 2, and so,
h − (z_1^4 + z_1 + max{z_2, 0}) = x_1^4 − 4x_1 + 3 = (x_1^2 − 1)^2 + 2(x_1 − 1)^2 ∈ Σ^2_4.
Moreover, for any (z_1, z_2) ∈ ([1, +∞) × R) \ ({1} × (−∞, 0]), z_1^4 + z_1 + max{z_2, 0} > 2, and so,
h(1, 0) − (z_1^4 + z_1 + max{z_2, 0}) = 2 − (z_1^4 + z_1 + max{z_2, 0}) < 0.
This shows that for any (z_1, z_2) ∈ ([1, +∞) × R) \ ({1} × (−∞, 0]), h − (z_1^4 + z_1 + max{z_2, 0}) ∉ Σ^2_4. Therefore, (14) implies that
{z ∈ R^2 : g_1(z, v_1^l) ≤ 0, l = 1, 2, f(·, u^a) + λ^a g_1(·, v^a) − max_{1 ≤ i ≤ 4} f(z, u^i) ∈ Σ^2_d}
= {z ∈ R^2 : z_1 ≥ 1, h − (z_1^4 + z_1 + max{z_2, 0}) ∈ Σ^2_4} = {z ∈ R^2 : z_1 = 1, z_2 ≤ 0} = S.
This verifies our robust solution characterization.

Consider the following quadratic convex program with quadratic constraints under scenario data uncertainty
min_{x ∈ R^n} (1/2)⟨x, Ax⟩ + ⟨h, x⟩ s.t. (1/2)⟨x, B_i x⟩ + ⟨b_i, x⟩ + γ_i ≤ 0, i = 1, ..., m,
where the data (A, h) ∈ S^n × R^n and (B_i, b_i, γ_i) ∈ S^n × R^n × R are uncertain, (A, h) ∈ U^s, (B_i, b_i, γ_i) ∈ V_i^s, and U^s, V_i^s are the scenario data uncertainty sets given by
U^s := co{(A_1, h_1), ..., (A_{p_0}, h_{p_0})} and V_i^s := co{(B_i^1, b_i^1, γ_i^1), ..., (B_i^{p_i}, b_i^{p_i}, γ_i^{p_i})},
with (A_i, h_i) ∈ S^n × R^n and (B_i^l, b_i^l, γ_i^l) ∈ S^n × R^n × R, l = 1, ..., p_i, where the A_i and B_i^l are positive semidefinite matrices. Define
f(x, u) = (1/2)⟨x, Ax⟩ + ⟨h, x⟩, u = (A, h), and g_i(x, v_i) = (1/2)⟨x, B_i x⟩ + ⟨b_i, x⟩ + γ_i, v_i = (B_i, b_i, γ_i).
Then the above quadratic convex program with quadratic constraints under scenario data uncertainty can be written in the form of (PP):
(QQP^s) min_{x ∈ R^n} f(x, u) s.t. g_i(x, v_i) ≤ 0, i = 1, ..., m,
where u = (A, h) ∈ U^s and v_i = (B_i, b_i, γ_i) ∈ V_i^s. The robust counterpart of the quadratic convex program with quadratic constraints under scenario data uncertainty can be given by
min_{x ∈ R^n} max_{(A, h) ∈ U^s} {(1/2)⟨x, Ax⟩ + ⟨h, x⟩}
s.t. (1/2)⟨x, B_i x⟩ + ⟨b_i, x⟩ + γ_i ≤ 0, ∀(B_i, b_i, γ_i) ∈ V_i^s, i = 1, ..., m.
In this case, we see that the robust solution set of the quadratic convex program with quadratic constraints under scenario data uncertainty can be described by a semidefinite representable set. To do this, we first introduce some definitions and present a simple fact which will be used later on.

For any q ∈ N and (u, r) ∈ R^q × R, we define ‖(u, r)‖^2 := ⟨u, u⟩ + r^2. Then, it is known [29] that, for any (u, r) ∈ R^q × R with q ∈ N, we have
‖u‖^2 + 2r ≤ 0 ⟺ ⟨u, u⟩ + (1 + r/2)^2 ≤ (1 − r/2)^2 ⟺ ‖(u, 1 + r/2)‖ ≤ 1 − r/2.  (15)

Corollary 5.1. For (QQP^s), let F be the robust feasible set and let S be the robust solution set. Suppose that there exists x_0 such that
(1/2)⟨x_0, B_i^l x_0⟩ + ⟨b_i^l, x_0⟩ + γ_i^l < 0, l = 1, ..., p_i, i = 1, ..., m.
Let a ∈ S, and let λ_i^a ≥ 0, (A^a, h^a) ∈ U^s and (B_i^a, b_i^a, γ_i^a) ∈ V_i^s be the multipliers associated with a. Then, we have
S = {z ∈ R^n : ∃ t ∈ R s.t. ‖(L_i^l z, 1 + (γ_i^l + ⟨b_i^l, z⟩)/2)‖ ≤ 1 − (γ_i^l + ⟨b_i^l, z⟩)/2, l = 1, ..., p_i, i = 1, ..., m,
[A^a  h^a; (h^a)^T  −2t] + Σ_{i=1}^m λ_i^a [B_i^a  b_i^a; (b_i^a)^T  2γ_i^a] ⪰ 0,
‖(M_i z, 1 + (⟨h_i, z⟩ − t)/2)‖ ≤ 1 − (⟨h_i, z⟩ − t)/2, i = 1, ..., p_0},
where L_i^l ∈ R^{s_i^l × n} is a matrix satisfying B_i^l = (L_i^l)^T L_i^l, s_i^l ∈ N, l = 1, ..., p_i, i = 1, ..., m, and M_i ∈ R^{r_i × n} is a matrix satisfying A_i = M_i^T M_i, r_i ∈ N, i = 1, ..., p_0.

Proof. Consider f(x, (A, h)) = (1/2)⟨x, Ax⟩ + ⟨h, x⟩ and g_i(x, (B_i, b_i, γ_i)) = (1/2)⟨x, B_i x⟩ + ⟨b_i, x⟩ + γ_i. Then, the preceding theorem implies that
S = {z ∈ R^n : (1/2)⟨z, B_i^l z⟩ + ⟨b_i^l, z⟩ + γ_i^l ≤ 0, l = 1, ..., p_i, i = 1, ..., m,
(1/2)⟨·, A^a ·⟩ + ⟨h^a, ·⟩ + Σ_{i=1}^m λ_i^a ((1/2)⟨·, B_i^a ·⟩ + ⟨b_i^a, ·⟩ + γ_i^a) − max_{1 ≤ i ≤ p_0} {(1/2)⟨z, A_i z⟩ + ⟨h_i, z⟩} ∈ Σ^2_d}.
The robust solution set S can be equivalently rewritten as
S = {z ∈ R^n : ∃ t ∈ R s.t. (1/2)⟨z, B_i^l z⟩ + ⟨b_i^l, z⟩ + γ_i^l ≤ 0, l = 1, ..., p_i, i = 1, ..., m,
(1/2)⟨·, A^a ·⟩ + ⟨h^a, ·⟩ + Σ_{i=1}^m λ_i^a ((1/2)⟨·, B_i^a ·⟩ + ⟨b_i^a, ·⟩ + γ_i^a) − t ∈ Σ^2_d,
(1/2)⟨z, A_i z⟩ + ⟨h_i, z⟩ ≤ t, i = 1, ..., p_0}.
Letting B_i^l = (L_i^l)^T L_i^l, where L_i^l ∈ R^{s_i^l × n} for some s_i^l ∈ N, and A_i = M_i^T M_i, where M_i ∈ R^{r_i × n} for some r_i ∈ N, we see that (1/2)⟨z, B_i^l z⟩ + ⟨b_i^l, z⟩ + γ_i^l ≤ 0 is equivalent to ‖L_i^l z‖^2 + 2(γ_i^l + ⟨b_i^l, z⟩) ≤ 0. Applying (15) with u = L_i^l z and r = γ_i^l + ⟨b_i^l, z⟩, this can be further equivalently rewritten as
‖(L_i^l z, 1 + (γ_i^l + ⟨b_i^l, z⟩)/2)‖ ≤ 1 − (γ_i^l + ⟨b_i^l, z⟩)/2, l = 1, ..., p_i, i = 1, ..., m.  (16)
Similarly, max_{(A, h) ∈ U^s} {(1/2)⟨z, Az⟩ + ⟨h, z⟩} ≤ t is equivalent to (1/2)⟨z, A_i z⟩ + ⟨h_i, z⟩ − t ≤ 0 for all i = 1, ..., p_0, which can be equivalently rewritten as
‖(M_i z, 1 + (⟨h_i, z⟩ − t)/2)‖ ≤ 1 − (⟨h_i, z⟩ − t)/2, i = 1, ..., p_0.
Thus, the conclusion follows by noting that
(1/2)⟨x, A^a x⟩ + ⟨h^a, x⟩ + Σ_{i=1}^m λ_i^a ((1/2)⟨x, B_i^a x⟩ + ⟨b_i^a, x⟩ + γ_i^a) − t ∈ Σ^2_d
⟺ [A^a  h^a; (h^a)^T  −2t] + Σ_{i=1}^m λ_i^a [B_i^a  b_i^a; (b_i^a)^T  2γ_i^a] ⪰ 0.

Remark 5.1. As ‖x‖ ≤ t is equivalent to [tI_n  x; x^T  t] ⪰ 0, the above corollary shows that the robust solution set of the quadratic programming problem with quadratic constraints under scenario data uncertainty can be written as the projection of a set described by linear matrix inequalities (which is often referred to as a semidefinite representable set). More generally, noting that any lower level set of an SOS-convex inequality is semidefinite representable [28], Theorem 5.1 shows that the robust solution set of the SOS-convex polynomial programming problem under a scenario data uncertainty set is also semidefinite representable.

6 Solution Sets of Uncertain Fractional Programs

The uncertain fractional programming problem can be captured by the following parameterized problem:
(FP) min_{x ∈ R^n} f(x, u)/h(x, w) s.t. g_i(x, v_i) ≤ 0, i = 1, ..., m,
where u, w, v_i are uncertain parameters and they belong to the corresponding convex and compact uncertainty sets U ⊆ R^{q_0}, W ⊆ R^{q_1} and V_i ⊆ R^q. Here, f : R^n × R^{q_0} → R is a continuous function such that, for each fixed u ∈ U, f(·, u) is a convex function on R^n, and, for each fixed x ∈ R^n, f(x, ·) is a concave function on R^{q_0}. Moreover, g_i : R^n × R^q → R is a continuous function such that, for each fixed v_i ∈ V_i, g_i(·, v_i) is a convex function, and, for each fixed x ∈ R^n, g_i(x, ·) is a concave function. Finally, h : R^n × R^{q_1} → R is a continuous function such that, for each fixed w ∈ W ⊆ R^{q_1}, h(·, w) is a concave function on R^n, and, for each fixed x ∈ R^n, h(x, ·) is a convex function on R^{q_1}. Its robust counterpart can be formulated as
min_{x ∈ R^n} (max_{u ∈ U} f(x, u)) / (min_{w ∈ W} h(x, w)) s.t. g_i(x, v_i) ≤ 0, ∀v_i ∈ V_i, i = 1, ..., m.
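Proposition 6.1 below rests on a Dinkelbach-type reduction: a robust solution a of the fractional problem also minimizes the parametrized convex objective q(a)·max_u f(x, u) − p(a)·min_w h(x, w), where p(a) and q(a) are the worst-case numerator and denominator at a. A tiny one-dimensional sketch of this (the instance, scenario sets and grid are illustrative choices of mine):

```python
# Robust ratio: max_u f(x,u) / min_w h(x,w), with scenario sets for u and w.
U = [0.5, 1.0]          # f(x, u) = x^2 + u  (convex in x, affine in u, f >= 0)
W = [1.0, 2.0]          # h(x, w) = w + x    (affine in x, h > 0 on [0, 2])

def num(x):  return max(x * x + u for u in U)    # p(x): worst-case numerator
def den(x):  return min(w + x for w in W)        # q(x): worst-case denominator
def ratio(x):  return num(x) / den(x)

xs = [i / 100.0 for i in range(0, 201)]          # robust feasible set F = [0, 2]
a = min(xs, key=ratio)                           # robust solution (grid approximation)
p_a, q_a = num(a), den(a)

# a should also minimize the Dinkelbach-parametrized objective on the grid,
# and the parametrized objective vanishes at a by construction.
def param_obj(x):  return q_a * num(x) - p_a * den(x)
b = min(xs, key=param_obj)

print(abs(a - b) < 1e-9, abs(param_obj(a)) < 1e-9)
```

The grid minimizer of the ratio and of the parametrized objective coincide, and the parametrized objective is zero at a, which is exactly the optimality transfer used in the proof of Proposition 6.1.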
The robust feasible set of (FP) is denoted by F, and is given by
F = {x ∈ R^n : g_i(x, v_i) ≤ 0, ∀v_i ∈ V_i, i = 1, ..., m}.
Moreover, the robust solution set of (FP) is denoted by S, and is defined by
S = {x ∈ F : (max_{u ∈ U} f(x, u)) / (min_{w ∈ W} h(x, w)) ≤ (max_{u ∈ U} f(y, u)) / (min_{w ∈ W} h(y, w)), ∀y ∈ F}.
We assume that f(x, u) ≥ 0 and h(x, w) > 0 for all x ∈ F, u ∈ U and w ∈ W. In the case where h(·, w) is an affine function for every w ∈ W, the condition f(x, u) ≥ 0 for all x ∈ F and u ∈ U can be dropped.

Proposition 6.1. For problem (FP), let F be the robust feasible set and let S be the robust solution set. Suppose that f(x, u) ≥ 0 and h(x, w) > 0 for all x ∈ F, u ∈ U and w ∈ W. Let a be a robust solution of (FP), that is, a ∈ S. Then, there exist λ_i^a ≥ 0, (u^a, w^a) ∈ U × W and v_i^a ∈ V_i such that
0 ∈ q(a) ∂_x f(a, u^a) + p(a) ∂_x(−h)(a, w^a) + Σ_{i=1}^m λ_i^a ∂_x g_i(a, v_i^a), λ_i^a g_i(a, v_i^a) = 0, i = 1, ..., m,
and
q(a) f(a, u^a) − p(a) h(a, w^a) = max_{(u, w) ∈ U × W} {q(a) f(a, u) − p(a) h(a, w)}.

Proof. As a ∈ S, we see that a is a solution of the following robust convex optimization problem:
min_{x ∈ R^n} max_{(u, w) ∈ U × W} {q(a) f(x, u) − p(a) h(x, w)} s.t. g_i(x, v_i) ≤ 0, ∀v_i ∈ V_i, i = 1, ..., m,
where q(a) = min_{w ∈ W} h(a, w) and p(a) = max_{u ∈ U} f(a, u). Indeed, for any robust feasible x, the robust optimality of a gives max_{u ∈ U} f(x, u) / min_{w ∈ W} h(x, w) ≥ p(a)/q(a), and so q(a) max_{u ∈ U} f(x, u) − p(a) min_{w ∈ W} h(x, w) ≥ 0, which is the value attained at x = a. Define f̃(x, u, w) := q(a) f(x, u) − p(a) h(x, w). From our assumption, it is clear that f̃(·, u, w) is continuous and convex for any (u, w) ∈ U × W and f̃(x, ·, ·) is continuous and concave for any x ∈ R^n. So, the conclusion follows from Proposition 2.1 with f replaced by f̃.

Theorem 6.1 (Robust solution set of uncertain fractional program). For problem (FP), let F be the robust feasible set and let S be the robust solution set. Suppose that there exists x_0 ∈ R^n such that g_i(x_0, v_i) < 0, ∀v_i ∈ V_i, i = 1, ..., m. Let a be a robust solution of
(FP), that is, a ∈ S. Let λ_i^a ≥ 0, (u^a, w^a) ∈ U × W and v_i^a ∈ V_i be the multipliers associated with a. Then,
S = {x ∈ F : λ_i^a g_i(x, v_i^a) = 0, i = 1, ..., m,
q(a) f(x, u^a) − p(a) h(x, w^a) = max_{(u, w) ∈ U × W} {q(a) f(x, u) − p(a) h(x, w)},
and there exists ξ ∈ (q(a) ∂_x f(x, u^a) + p(a) ∂_x(−h)(x, w^a)) ∩ (q(a) ∂_x f(a, u^a) + p(a) ∂_x(−h)(a, w^a)) with ⟨ξ, x − a⟩ = 0}.

Proof. Define f̃(x, u, w) := q(a) f(x, u) − p(a) h(x, w). From our assumption, it is clear that f̃(·, u, w) is continuous and convex for any (u, w) ∈ U × W and f̃(x, ·, ·) is continuous and concave for any x ∈ R^n. So, the conclusion follows from Theorem 3.2 with f replaced by f̃.

7 Conclusions

Robust optimization has emerged as a powerful approach for dealing with data uncertainty; it treats uncertainty as deterministic, but does not limit data values to point estimates. In this framework, one associates with the uncertain optimization problem its robust counterpart, where the uncertain constraints are enforced for every possible value of the data within their prescribed uncertainty sets. Recent research in robust convex optimization theory has focused on characterizing robust solution points of convex optimization problems in the face of data uncertainty. In this paper, we established simple properties and characterizations of robust solution sets of uncertain convex optimization problems by way of characterizing solution sets of the robust counterparts of the uncertain optimization problems. In particular, we presented generalizations of the constant subdifferential property as well as the constant Lagrangian property for solution sets of convex programming to robust solution sets of uncertain convex programs. We provided various characterizations of robust solution sets of uncertain convex quadratic programs and SOS-convex polynomial programs, under commonly used uncertainty sets of robust optimization, such as the ellipsoidal, scenario and spectral norm uncertainties.
We also gave classes of uncertain convex programs where the solution sets can be expressed as conic representable sets. An interesting open problem is to find robust solutions of hard uncertain bi-level optimization problems by way of studying the conic representability, in particular semidefinite representability, of their solution sets.
More informationConvex Optimization. (EE227A: UC Berkeley) Lecture 6. Suvrit Sra. (Conic optimization) 07 Feb, 2013
Convex Optimization (EE227A: UC Berkeley) Lecture 6 (Conic optimization) 07 Feb, 2013 Suvrit Sra Organizational Info Quiz coming up on 19th Feb. Project teams by 19th Feb Good if you can mix your research
More informationConvex Functions and Optimization
Chapter 5 Convex Functions and Optimization 5.1 Convex Functions Our next topic is that of convex functions. Again, we will concentrate on the context of a map f : R n R although the situation can be generalized
More informationNonlinear Programming Models
Nonlinear Programming Models Fabio Schoen 2008 http://gol.dsi.unifi.it/users/schoen Nonlinear Programming Models p. Introduction Nonlinear Programming Models p. NLP problems minf(x) x S R n Standard form:
More informationMathematics 530. Practice Problems. n + 1 }
Department of Mathematical Sciences University of Delaware Prof. T. Angell October 19, 2015 Mathematics 530 Practice Problems 1. Recall that an indifference relation on a partially ordered set is defined
More informationMinimizing Cubic and Homogeneous Polynomials over Integers in the Plane
Minimizing Cubic and Homogeneous Polynomials over Integers in the Plane Alberto Del Pia Department of Industrial and Systems Engineering & Wisconsin Institutes for Discovery, University of Wisconsin-Madison
More informationConvex Optimization and Modeling
Convex Optimization and Modeling Duality Theory and Optimality Conditions 5th lecture, 12.05.2010 Jun.-Prof. Matthias Hein Program of today/next lecture Lagrangian and duality: the Lagrangian the dual
More informationPerturbation Analysis of Optimization Problems
Perturbation Analysis of Optimization Problems J. Frédéric Bonnans 1 and Alexander Shapiro 2 1 INRIA-Rocquencourt, Domaine de Voluceau, B.P. 105, 78153 Rocquencourt, France, and Ecole Polytechnique, France
More informationON GAP FUNCTIONS OF VARIATIONAL INEQUALITY IN A BANACH SPACE. Sangho Kum and Gue Myung Lee. 1. Introduction
J. Korean Math. Soc. 38 (2001), No. 3, pp. 683 695 ON GAP FUNCTIONS OF VARIATIONAL INEQUALITY IN A BANACH SPACE Sangho Kum and Gue Myung Lee Abstract. In this paper we are concerned with theoretical properties
More information8. Geometric problems
8. Geometric problems Convex Optimization Boyd & Vandenberghe extremal volume ellipsoids centering classification placement and facility location 8 Minimum volume ellipsoid around a set Löwner-John ellipsoid
More informationOn duality theory of conic linear problems
On duality theory of conic linear problems Alexander Shapiro School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, Georgia 3332-25, USA e-mail: ashapiro@isye.gatech.edu
More informationConvex Optimization Boyd & Vandenberghe. 5. Duality
5. Duality Convex Optimization Boyd & Vandenberghe Lagrange dual problem weak and strong duality geometric interpretation optimality conditions perturbation and sensitivity analysis examples generalized
More informationGEORGIA INSTITUTE OF TECHNOLOGY H. MILTON STEWART SCHOOL OF INDUSTRIAL AND SYSTEMS ENGINEERING LECTURE NOTES OPTIMIZATION III
GEORGIA INSTITUTE OF TECHNOLOGY H. MILTON STEWART SCHOOL OF INDUSTRIAL AND SYSTEMS ENGINEERING LECTURE NOTES OPTIMIZATION III CONVEX ANALYSIS NONLINEAR PROGRAMMING THEORY NONLINEAR PROGRAMMING ALGORITHMS
More informationConvex hull of two quadratic or a conic quadratic and a quadratic inequality
Noname manuscript No. (will be inserted by the editor) Convex hull of two quadratic or a conic quadratic and a quadratic inequality Sina Modaresi Juan Pablo Vielma the date of receipt and acceptance should
More informationIdentifying Active Constraints via Partial Smoothness and Prox-Regularity
Journal of Convex Analysis Volume 11 (2004), No. 2, 251 266 Identifying Active Constraints via Partial Smoothness and Prox-Regularity W. L. Hare Department of Mathematics, Simon Fraser University, Burnaby,
More information5. Duality. Lagrangian
5. Duality Convex Optimization Boyd & Vandenberghe Lagrange dual problem weak and strong duality geometric interpretation optimality conditions perturbation and sensitivity analysis examples generalized
More informationEE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 17
EE/ACM 150 - Applications of Convex Optimization in Signal Processing and Communications Lecture 17 Andre Tkacenko Signal Processing Research Group Jet Propulsion Laboratory May 29, 2012 Andre Tkacenko
More informationAssignment 1: From the Definition of Convexity to Helley Theorem
Assignment 1: From the Definition of Convexity to Helley Theorem Exercise 1 Mark in the following list the sets which are convex: 1. {x R 2 : x 1 + i 2 x 2 1, i = 1,..., 10} 2. {x R 2 : x 2 1 + 2ix 1x
More informationTHE UNIQUE MINIMAL DUAL REPRESENTATION OF A CONVEX FUNCTION
THE UNIQUE MINIMAL DUAL REPRESENTATION OF A CONVEX FUNCTION HALUK ERGIN AND TODD SARVER Abstract. Suppose (i) X is a separable Banach space, (ii) C is a convex subset of X that is a Baire space (when endowed
More informationMin-max-min robustness: a new approach to combinatorial optimization under uncertainty based on multiple solutions 1
Min-max- robustness: a new approach to combinatorial optimization under uncertainty based on multiple solutions 1 Christoph Buchheim, Jannis Kurtz 2 Faultät Mathemati, Technische Universität Dortmund Vogelpothsweg
More informationLecture 3: Lagrangian duality and algorithms for the Lagrangian dual problem
Lecture 3: Lagrangian duality and algorithms for the Lagrangian dual problem Michael Patriksson 0-0 The Relaxation Theorem 1 Problem: find f := infimum f(x), x subject to x S, (1a) (1b) where f : R n R
More informationThe general programming problem is the nonlinear programming problem where a given function is maximized subject to a set of inequality constraints.
1 Optimization Mathematical programming refers to the basic mathematical problem of finding a maximum to a function, f, subject to some constraints. 1 In other words, the objective is to find a point,
More informationNonlinear Programming 3rd Edition. Theoretical Solutions Manual Chapter 6
Nonlinear Programming 3rd Edition Theoretical Solutions Manual Chapter 6 Dimitri P. Bertsekas Massachusetts Institute of Technology Athena Scientific, Belmont, Massachusetts 1 NOTE This manual contains
More informationSummary Notes on Maximization
Division of the Humanities and Social Sciences Summary Notes on Maximization KC Border Fall 2005 1 Classical Lagrange Multiplier Theorem 1 Definition A point x is a constrained local maximizer of f subject
More information14. Duality. ˆ Upper and lower bounds. ˆ General duality. ˆ Constraint qualifications. ˆ Counterexample. ˆ Complementary slackness.
CS/ECE/ISyE 524 Introduction to Optimization Spring 2016 17 14. Duality ˆ Upper and lower bounds ˆ General duality ˆ Constraint qualifications ˆ Counterexample ˆ Complementary slackness ˆ Examples ˆ Sensitivity
More informationModule 04 Optimization Problems KKT Conditions & Solvers
Module 04 Optimization Problems KKT Conditions & Solvers Ahmad F. Taha EE 5243: Introduction to Cyber-Physical Systems Email: ahmad.taha@utsa.edu Webpage: http://engineering.utsa.edu/ taha/index.html September
More informationConvex Optimization. Newton s method. ENSAE: Optimisation 1/44
Convex Optimization Newton s method ENSAE: Optimisation 1/44 Unconstrained minimization minimize f(x) f convex, twice continuously differentiable (hence dom f open) we assume optimal value p = inf x f(x)
More informationSubgradients. subgradients and quasigradients. subgradient calculus. optimality conditions via subgradients. directional derivatives
Subgradients subgradients and quasigradients subgradient calculus optimality conditions via subgradients directional derivatives Prof. S. Boyd, EE392o, Stanford University Basic inequality recall basic
More informationConvex Optimization and an Introduction to Congestion Control. Lecture Notes. Fabian Wirth
Convex Optimization and an Introduction to Congestion Control Lecture Notes Fabian Wirth August 29, 2012 ii Contents 1 Convex Sets and Convex Functions 3 1.1 Convex Sets....................................
More informationUNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems
UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems Robert M. Freund February 2016 c 2016 Massachusetts Institute of Technology. All rights reserved. 1 1 Introduction
More informationarzelier
COURSE ON LMI OPTIMIZATION WITH APPLICATIONS IN CONTROL PART II.1 LMIs IN SYSTEMS CONTROL STATE-SPACE METHODS STABILITY ANALYSIS Didier HENRION www.laas.fr/ henrion henrion@laas.fr Denis ARZELIER www.laas.fr/
More informationLecture 18: Optimization Programming
Fall, 2016 Outline Unconstrained Optimization 1 Unconstrained Optimization 2 Equality-constrained Optimization Inequality-constrained Optimization Mixture-constrained Optimization 3 Quadratic Programming
More informationSolving Global Optimization Problems with Sparse Polynomials and Unbounded Semialgebraic Feasible Sets
Solving Global Optimization Problems with Sparse Polynomials and Unbounded Semialgebraic Feasible Sets V. Jeyakumar, S. Kim, G. M. Lee and G. Li June 6, 2014 Abstract We propose a hierarchy of semidefinite
More informationPrimal/Dual Decomposition Methods
Primal/Dual Decomposition Methods Daniel P. Palomar Hong Kong University of Science and Technology (HKUST) ELEC5470 - Convex Optimization Fall 2018-19, HKUST, Hong Kong Outline of Lecture Subgradients
More informationE5295/5B5749 Convex optimization with engineering applications. Lecture 5. Convex programming and semidefinite programming
E5295/5B5749 Convex optimization with engineering applications Lecture 5 Convex programming and semidefinite programming A. Forsgren, KTH 1 Lecture 5 Convex optimization 2006/2007 Convex quadratic program
More informationRobust Solutions to Multi-Objective Linear Programs with Uncertain Data
Robust Solutions to Multi-Objective Linear Programs with Uncertain Data arxiv:1402.3095v1 [math.oc] 13 Feb 2014 M.A. Goberna, V. Jeyakumar, G. Li, and J. Vicente-Pérez December 9, 2013 Abstract In this
More informationIntroduction to Nonlinear Stochastic Programming
School of Mathematics T H E U N I V E R S I T Y O H F R G E D I N B U Introduction to Nonlinear Stochastic Programming Jacek Gondzio Email: J.Gondzio@ed.ac.uk URL: http://www.maths.ed.ac.uk/~gondzio SPS
More informationON A CLASS OF NONSMOOTH COMPOSITE FUNCTIONS
MATHEMATICS OF OPERATIONS RESEARCH Vol. 28, No. 4, November 2003, pp. 677 692 Printed in U.S.A. ON A CLASS OF NONSMOOTH COMPOSITE FUNCTIONS ALEXANDER SHAPIRO We discuss in this paper a class of nonsmooth
More informationLinear Programming. Larry Blume Cornell University, IHS Vienna and SFI. Summer 2016
Linear Programming Larry Blume Cornell University, IHS Vienna and SFI Summer 2016 These notes derive basic results in finite-dimensional linear programming using tools of convex analysis. Most sources
More informationWhat can be expressed via Conic Quadratic and Semidefinite Programming?
What can be expressed via Conic Quadratic and Semidefinite Programming? A. Nemirovski Faculty of Industrial Engineering and Management Technion Israel Institute of Technology Abstract Tremendous recent
More informationContinuity of convex functions in normed spaces
Continuity of convex functions in normed spaces In this chapter, we consider continuity properties of real-valued convex functions defined on open convex sets in normed spaces. Recall that every infinitedimensional
More informationOptimality, identifiability, and sensitivity
Noname manuscript No. (will be inserted by the editor) Optimality, identifiability, and sensitivity D. Drusvyatskiy A. S. Lewis Received: date / Accepted: date Abstract Around a solution of an optimization
More informationSemidefinite Programming Basics and Applications
Semidefinite Programming Basics and Applications Ray Pörn, principal lecturer Åbo Akademi University Novia University of Applied Sciences Content What is semidefinite programming (SDP)? How to represent
More informationDivision of the Humanities and Social Sciences. Supergradients. KC Border Fall 2001 v ::15.45
Division of the Humanities and Social Sciences Supergradients KC Border Fall 2001 1 The supergradient of a concave function There is a useful way to characterize the concavity of differentiable functions.
More informationConstrained Optimization and Lagrangian Duality
CIS 520: Machine Learning Oct 02, 2017 Constrained Optimization and Lagrangian Duality Lecturer: Shivani Agarwal Disclaimer: These notes are designed to be a supplement to the lecture. They may or may
More informationLecture 3: Semidefinite Programming
Lecture 3: Semidefinite Programming Lecture Outline Part I: Semidefinite programming, examples, canonical form, and duality Part II: Strong Duality Failure Examples Part III: Conditions for strong duality
More information