ENGG 5501: Foundations of Optimization                              2018-19 First Term
Handout 6: Some Applications of Conic Linear Programming
Instructor: Anthony Man-Cho So                                          November 2018

1 Introduction

Conic linear programming (CLP), and in particular semidefinite programming (SDP), has received a lot of attention in recent years. Such popularity can partly be attributed to the wide applicability of CLP, as well as to recent advances in the design of provably efficient interior-point algorithms. In this lecture we will consider some applications that arise in engineering and show how CLP techniques can be used to model them.

2 Robust Linear Programming

To motivate the problem considered in this section, consider the LP

    minimize    c^T x
    subject to  Âx ≥ b̂,                                                           (1)

where Â ∈ R^{m×n}, b̂ ∈ R^m, and c ∈ R^n are given. As often occurs in practice, the data of the above LP, namely Â and b̂, are uncertain. In an attempt to tackle such uncertainties, Ben-Tal and Nemirovski [4] introduced the idea of robust optimization. In the robust optimization setting, we assume that the uncertain data lie in some given uncertainty set U. Our goal is then to find a solution that is feasible for all possible realizations of the uncertain data and optimizes some given objective. To derive the robust counterpart of (1), let us first rewrite it as

    minimize    c̄^T z
    subject to  Az ≥ 0,                                                           (2)
                z_{n+1} = 1,

where A = [Â  −b̂] ∈ R^{m×(n+1)}, c̄ = (c, 0) ∈ R^{n+1}, and z ∈ R^{n+1}. Now, suppose that each row a_i ∈ R^{n+1} of the matrix A lies in an ellipsoidal region U_i whose center u_i ∈ R^{n+1} is given (here i = 1, ..., m). Specifically, we impose the constraint that

    a_i ∈ U_i = { x ∈ R^{n+1} : x = u_i + B_i v, ‖v‖₂ ≤ 1 }    for i = 1, ..., m,

where a_i is the i-th row of A, u_i ∈ R^{n+1} is the center of the ellipsoid U_i, and B_i is some (n+1)×(n+1) positive semidefinite matrix. Then, the robust counterpart of (2) is the following optimization problem:

    minimize    c̄^T z
    subject to  Az ≥ 0    for all a_i ∈ U_i, i = 1, ..., m,                       (3)
                z_{n+1} = 1.

Note that in general there are uncountably many constraints in (3). However, we claim that (3) is equivalent to an SOCP problem. To see this, observe that a_i^T z ≥ 0 for all a_i ∈ U_i iff

    0 ≤ min_{v ∈ R^{n+1}: ‖v‖₂ ≤ 1} (u_i + B_i v)^T z = u_i^T z − ‖B_i z‖₂,

where i = 1, ..., m. Hence, we conclude that (3) is equivalent to

    minimize    c̄^T z
    subject to  ‖B_i z‖₂ ≤ u_i^T z    for i = 1, ..., m,
                z_{n+1} = 1,

which is an SOCP problem. For further readings on robust optimization, we refer the readers to [4, 1, 5, 3].
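As a quick illustration (not part of the original handout), the SOCP reformulation above can be prototyped with the CVXPY modeling package. All data below are randomly generated placeholders, chosen so that the instance is feasible and bounded; only the second-order cone constraints ‖B_i z‖₂ ≤ u_i^T z come from the derivation above.

```python
# A minimal sketch (not from the handout) of the robust LP as an SOCP in CVXPY.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
m, n = 5, 3
A_hat = rng.standard_normal((m, n))
b_hat = -np.ones(m)                 # chosen so that x = 0 is robustly feasible
c = A_hat.sum(axis=0)               # chosen so that the problem is bounded below

# Ellipsoid centers u_i: the rows of the nominal A = [A_hat, -b_hat];
# each B_i is a small random PSD matrix shaping the uncertainty.
centers = np.hstack([A_hat, -b_hat[:, None]])
Bs = []
for _ in range(m):
    M = 0.1 * rng.standard_normal((n + 1, n + 1))
    Bs.append(M @ M.T)

z = cp.Variable(n + 1)
c_bar = np.append(c, 0.0)
constraints = [z[n] == 1]           # z_{n+1} = 1 (0-based index n)
for i in range(m):
    # Robust constraint as a second-order cone: ||B_i z||_2 <= u_i^T z.
    constraints.append(cp.norm(Bs[i] @ z, 2) <= centers[i] @ z)

prob = cp.Problem(cp.Minimize(c_bar @ z), constraints)
prob.solve()
print("robust optimal value:", prob.value)
```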

3 Chance-Constrained Linear Programming

In the previous section, we showed how one can handle data uncertainties in a linear optimization problem using the robust optimization approach. Such an approach is particularly suitable if there is little information about the nature of the data uncertainty, and/or if the constraints are not to be violated under any circumstance. However, it can produce very conservative solutions. In situations where the distribution of the data uncertainty is partially known, one can often obtain a better solution by allowing the constraints to be violated with small probability. To illustrate this possibility, let us consider the following simple LP:

    minimize    c^T x
    subject to  a^T x ≤ b,                                                        (4)
                x ∈ P,

where a, c ∈ R^n are given vectors, b ∈ R is a given scalar, and P ⊆ R^n is a given polyhedron. For the sake of simplicity, suppose that P is deterministic, but the data a, b are randomly affinely perturbed; i.e.,

    a = a^0 + Σ_{i=1}^{l} ε_i a^i,    b = b^0 + Σ_{i=1}^{l} ε_i b^i,

where a^0, a^1, ..., a^l ∈ R^n and b^0, b^1, ..., b^l ∈ R are given, and ε_1, ..., ε_l are i.i.d. mean-zero random variables supported on [−1, 1]. Then, for any given tolerance parameter δ ∈ (0, 1), we can formulate the following chance-constrained counterpart of (4):

    minimize    c^T x
    subject to  Pr(a^T x > b) ≤ δ,                                                (5)
                x ∈ P.

In other words, a solution x ∈ P is feasible for (5) if it only violates the constraint a^T x ≤ b with probability at most δ. The constraint Pr(a^T x > b) ≤ δ is known as a chance constraint. Note that when δ = 0, (5) reduces to a robust linear optimization problem. Moreover, if x ∈ R^n is feasible for (5) at some tolerance level δ₁ ∈ (0, 1), then it is also feasible for (5) at any δ₂ ≥ δ₁. Thus, by varying δ, we will be able to adjust the conservatism of the solution.

Despite the attractiveness of the chance constraint formulation, it has one major drawback; namely, it is generally computationally intractable. Indeed, even when the distributions of ε_1, ..., ε_l are very simple, the feasible set defined by the chance constraint can be non-convex. One way to tackle this problem is to replace the chance constraint by a safe tractable approximation; i.e., a system H of deterministic constraints such that (i) x ∈ R^n is feasible for the chance constraint whenever it is feasible for H (safe approximation), and (ii) the constraints in H are efficiently computable (tractability). To develop a safe tractable approximation of the chance constraint, we first observe that it is equivalent to the following system of constraints:

    Pr( y_0 + Σ_{i=1}^{l} ε_i y_i > 0 ) ≤ δ,                                      (6)

    y_i = (a^i)^T x − b^i    for i = 0, 1, ..., l.                                (7)

Since ε_1, ..., ε_l are i.i.d. mean-zero random variables supported on [−1, 1], by Hoeffding's inequality [12], we have

    Pr( Σ_{i=1}^{l} ε_i y_i > t ) ≤ exp( −t² / (2 Σ_{i=1}^{l} y_i²) )    for any t > 0.

It follows that when

    y_0 + √(2 ln(1/δ)) · ( Σ_{i=1}^{l} y_i² )^{1/2} ≤ 0,                          (8)

we have

    Pr( y_0 + Σ_{i=1}^{l} ε_i y_i > 0 ) ≤ exp( −y_0² / (2 Σ_{i=1}^{l} y_i²) ) ≤ δ.

In other words, (8) is a sufficient condition for (6) to hold. The upshot of (8) is that it is a second-order cone constraint. Hence, we conclude that constraints (7) and (8) together serve as a safe tractable approximation of the chance constraint.
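As an illustration (again not part of the handout), the safe tractable approximation given by (7) and (8) is a single second-order cone constraint in x and can be written directly in CVXPY. The data a^i, b^i, c, and the box standing in for the polyhedron P, are placeholders chosen so that x = 0 is feasible.

```python
# A minimal sketch (not from the handout) of the safe approximation (7)-(8).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n, ell, delta = 4, 6, 0.05
a = rng.standard_normal((ell + 1, n))                    # rows a^0, ..., a^ell
b = np.concatenate([[10.0], 0.1 * rng.standard_normal(ell)])  # b^0, ..., b^ell
c = rng.standard_normal(n)

x = cp.Variable(n)
y = a @ x - b                       # y_i = (a^i)^T x - b^i, i = 0, ..., ell
kappa = np.sqrt(2 * np.log(1 / delta))
soc = y[0] + kappa * cp.norm(y[1:], 2) <= 0              # constraint (8)
box = [x >= -10, x <= 10]           # a simple stand-in for the polyhedron P
prob = cp.Problem(cp.Minimize(c @ x), [soc] + box)
prob.solve()
print("safe-approximation optimal value:", prob.value)
```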

So far we have only discussed how to handle a single scalar chance constraint. Of course, joint scalar chance constraints, i.e., chance constraints of the form

    Pr( (a^i)^T x > b^i for some i = 1, 2, ..., m ) ≤ δ,

are also of great interest. However, they are not as easy to handle as a single scalar chance constraint. For some recent results in this direction, see [3, 6, 20, 8].

It is instructive to examine the relationship between the robust optimization approach and the safe tractable approximation approach. Upon following the derivations in the previous section, we see that constraint (8) is equivalent to the following robust constraint:

    d^T y ≤ 0    for all d ∈ U,

where y = (y_0, y_1, ..., y_l) and U is the ellipsoidal uncertainty set given by

    U = { x ∈ R^{l+1} : x = e_1 + Bv, ‖v‖₂ ≤ 1 },

with e_1 = (1, 0, ..., 0) ∈ R^{l+1}, and B ∈ R^{(l+1)×(l+1)} is given by

    B = √(2 ln(1/δ)) · [ 0   0^T ]
                       [ 0    I  ].

In other words, we are using the robust optimization problem

    minimize    c^T x
    subject to  Σ_{i=0}^{l} d_i ( (a^i)^T x − b^i ) ≤ 0    for all d ∈ U,
                x ∈ P

as a safe tractable approximation of the chance-constrained optimization problem (5). For further elaboration on the connection between robust optimization and chance-constrained optimization, see [7, 6]. We remark that chance-constrained optimization has been employed in many recent applications; see, e.g., [9, 19, 13, 23, 22]. For a more comprehensive treatment and some recent advances of chance-constrained optimization, we refer the reader to [18, Chapter 4] and [21].

4 Quadratically Constrained Quadratic Optimization

A class of optimization problems that frequently arises in applications is that of quadratically constrained quadratic optimization problems (QCQPs); i.e., problems of the form

    minimize    x^T Cx
    subject to  x^T A_i x ≥ b_i    for i = 1, ..., m,                             (9)

where C, A_1, ..., A_m ∈ S^n are given. In general, due to the non-convexity of the objective function and the constraints, problem (9) is intractable. Nevertheless, it can be tackled by the so-called semidefinite relaxation technique. To introduce this technique, we first observe that for any C ∈ S^n,

    x^T Cx = tr(x^T Cx) = tr(Cxx^T) = C • xx^T.

Hence, problem (9) is equivalent to

    minimize    C • xx^T
    subject to  A_i • xx^T ≥ b_i    for i = 1, ..., m.

Now, using the spectral theorem for symmetric matrices, one can verify that

    X = xx^T    ⟺    X ⪰ 0,  rank(X) ≤ 1.

It follows that problem (9) is equivalent to the following rank-constrained SDP problem:

    minimize    C • X
    subject to  A_i • X ≥ b_i    for i = 1, ..., m,                               (10)
                X ⪰ 0,  rank(X) ≤ 1.

The advantage of the formulation in (10) over that in (9) is that it reveals where the difficulty of the problem lies; namely, in the non-convex constraint rank(X) ≤ 1. By dropping this constraint, we obtain the following semidefinite relaxation of problem (9):

    minimize    C • X
    subject to  A_i • X ≥ b_i    for i = 1, ..., m,                               (11)
                X ⪰ 0.

Problem (11) is an SDP and can be efficiently solved. However, an optimal solution X* to problem (11) may not be feasible for problem (9), since we need not have rank(X*) ≤ 1. This motivates us to consider two fundamental questions:

1. Under what conditions would the relaxation (11) be tight? In particular, when would it be possible to convert an optimal solution X* to (11) into an optimal solution x* to (9)?

2. In the case where the relaxation (11) is not tight, how can we extract a feasible solution to (9) from an optimal solution to (11)? More ambitiously, can we establish any theoretical guarantees on the approximation quality of the extracted solution?

In the sequel, we shall present some possible approaches to tackling the above questions.
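For concreteness (not in the original handout), here is a minimal CVXPY sketch of the relaxation (11) on random placeholder data. Note that Corollary 2 below only guarantees that some optimal solution has rank at most one when m ≤ 2; the X* returned by a solver need not, in which case the rank-reduction procedure of Section 4.1 can be applied.

```python
# A minimal sketch (not from the handout) of the SDP relaxation (11) in CVXPY.
# All data are random placeholders: C is made PSD so that the instance is
# bounded below, and b is chosen so that X = I is strictly feasible.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
n, m = 5, 2
G = rng.standard_normal((n, n))
C = G @ G.T / n
As = []
for _ in range(m):
    M = rng.standard_normal((n, n))
    As.append((M + M.T) / 2)
b = np.array([np.trace(A) - 1.0 for A in As])

X = cp.Variable((n, n), PSD=True)             # encodes X >= 0
constraints = [cp.trace(As[i] @ X) >= b[i] for i in range(m)]
prob = cp.Problem(cp.Minimize(cp.trace(C @ X)), constraints)
prob.solve()

eigvals = np.linalg.eigvalsh(X.value)
print("optimal value:", prob.value)
print("numerical rank of X*:", int(np.sum(eigvals > 1e-6 * max(eigvals.max(), 1.0))))
```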

4.1 Rank Bound via Constraint Counting

Consider the following standard form SDP:

    v*_SDP = inf   C • X
             s.t.  A_i • X = b_i    for i = 1, ..., m,                          (SDP)
                   X ⪰ 0,

where C, A_1, ..., A_m ∈ S^n and b_1, ..., b_m ∈ R are given. The following theorem, which was discovered independently by Shapiro [17], Barvinok [2], and Pataki [16], shows that if (SDP) has an optimal solution, then it has an optimal solution whose rank is bounded by a function of m, the number of constraints.

Theorem 1  Suppose that (SDP) has an optimal solution. Then, there exists an optimal solution X* to (SDP) satisfying r(r+1)/2 ≤ m, where r = rank(X*). Moreover, such an optimal solution can be computed efficiently.

Proof  Let X̄ be an optimal solution to (SDP) and r̄ = rank(X̄). Suppose that r̄(r̄+1)/2 > m. Let X̄ = LL^T be the Cholesky factorization of X̄, where L ∈ R^{n×r̄}. Set C̄ = L^T CL ∈ S^{r̄} and Ā_i = L^T A_i L ∈ S^{r̄} for i = 1, ..., m, and consider the following auxiliary SDP:

    v*_ASDP = inf   C̄ • W
              s.t.  Ā_i • W = b_i    for i = 1, ..., m,                        (ASDP)
                    W ⪰ 0.

We claim that v*_ASDP = v*_SDP, and that W̄ = I ∈ S^{r̄} is an optimal solution to (ASDP). Indeed, observe that the solution W̄ = I is feasible for (ASDP) and has objective value

    C̄ • I = C • LL^T = C • X̄ = v*_SDP.

This implies that v*_ASDP ≤ v*_SDP. On the other hand, every feasible solution W to (ASDP) corresponds to a feasible solution Z(W) = LWL^T to (SDP) with the same objective value, which implies that v*_ASDP ≥ v*_SDP. Thus, the claim is established.

Next, we show that every feasible solution to (ASDP) is in fact optimal for (ASDP). Towards that end, consider the dual of (ASDP):

    sup   b^T y
    s.t.  C̄ − Σ_{i=1}^{m} y_i Ā_i ⪰ 0,                                        (ASDD)
          y ∈ R^m.

Since (ASDP) is bounded below and strictly feasible (the solution W̄ = I is strictly feasible), by the CLP Strong Duality Theorem, (ASDD) has an optimal solution y*. Moreover, since W̄ = I is optimal for (ASDP), the CLP Strong Duality Theorem yields

    I • ( C̄ − Σ_{i=1}^{m} y_i* Ā_i ) = 0.

This, together with the fact that C̄ − Σ_{i=1}^{m} y_i* Ā_i ⪰ 0, implies C̄ − Σ_{i=1}^{m} y_i* Ā_i = 0. It follows that every feasible solution W to (ASDP) satisfies the complementarity condition

    W • ( C̄ − Σ_{i=1}^{m} y_i* Ā_i ) = 0.

Hence, by the CLP Strong Duality Theorem, we conclude that W is optimal for (ASDP).

To complete the proof of Theorem 1, consider the following system of homogeneous linear equations:

    Ā_i • W = 0    for i = 1, ..., m,    W ∈ S^{r̄}.                             (12)

Since W ∈ S^{r̄} is symmetric, it is completely determined by the entries on and above the diagonal. Thus, we see that (12) is a system of m equations in r̄(r̄+1)/2 variables. Now, since we assume that r̄(r̄+1)/2 > m, there exists a non-zero W̃ ∈ S^{r̄} satisfying (12). We may assume without loss of generality that W̃ has at least one negative eigenvalue, for otherwise we can simply consider −W̃, which also satisfies (12). Consider the matrix

    W̃₊ = I − (1/λ_min(W̃)) W̃ ∈ S^{r̄},

where λ_min(W̃) < 0 is the smallest eigenvalue of W̃. Note that W̃₊ ⪰ 0, since for any u ∈ R^{r̄} with ‖u‖₂ = 1, we have

    u^T W̃₊ u = 1 − u^T W̃ u / λ_min(W̃) ≥ 1 − ( min_{‖u‖₂=1} u^T W̃ u ) / λ_min(W̃) = 0

by the Courant-Fischer theorem. In addition, it can be verified that rank(W̃₊) < r̄. Lastly, a direct calculation yields

    Ā_i • W̃₊ = Ā_i • I − (1/λ_min(W̃)) Ā_i • W̃ = Ā_i • I = b_i    for i = 1, ..., m.

It follows that W̃₊ is feasible and hence optimal for (ASDP). This implies that X(W̃₊) = L W̃₊ L^T is optimal for (SDP) and satisfies rank(X(W̃₊)) < r̄.

By repeating the above procedure until W = 0 is the only solution to (12), we obtain an optimal solution X* with r(r+1)/2 ≤ m, where r = rank(X*). Moreover, note that an optimal solution to (SDP) and a non-zero solution to (12), if one exists, can be found efficiently. This completes the proof of Theorem 1.

The following is an easy corollary of Theorem 1:

Corollary 1  Consider the following SDP:

    inf   C • X
    s.t.  A_i • X = b_i    for i = 1, ..., m₁,                                   (13)
          A_i • X ≥ b_i    for i = m₁ + 1, ..., m,
          X ⪰ 0,

where C, A_1, ..., A_m ∈ S^n and b_1, ..., b_m ∈ R are given. If problem (13) has an optimal solution, then it has an optimal solution X* satisfying r(r+1)/2 ≤ m, where r = rank(X*). Moreover, such an optimal solution can be found efficiently.

Proof  Let X̄ be an optimal solution to (13). Then, X̄ is an optimal solution to the following SDP:

    inf   C • X
    s.t.  A_i • X = A_i • X̄    for i = 1, ..., m,                               (14)
          X ⪰ 0.

By Theorem 1, an optimal solution X* to (14) satisfying r(r+1)/2 ≤ m, where r = rank(X*), can be found efficiently. To complete the proof, it suffices to note that X* is also optimal for (13) (indeed, every feasible solution to (14) is feasible for (13), and C • X* = C • X̄).

Using Corollary 1, we obtain the following tightness result concerning the relaxation (11).

Corollary 2  The semidefinite relaxation (11) is tight for (9) when m ≤ 2. Indeed, when m ≤ 2, Corollary 1 yields an optimal solution X* to (11) with r(r+1)/2 ≤ 2, i.e., rank(X*) ≤ 1, and any such X* = x*(x*)^T yields an optimal solution x* to (9).
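The purification step in the proof of Theorem 1 is constructive, and the following numpy/scipy sketch (ours, not from the handout) implements one step of it: given a feasible X whose rank r satisfies r(r+1)/2 > m, it returns a feasible solution of strictly smaller rank. As in the proof, optimality is preserved when the input X is optimal.

```python
# A sketch (not from the handout) of one rank-reduction step from the proof of
# Theorem 1. Feasibility is preserved since Abar_i . W_plus = Abar_i . I = b_i.
import numpy as np
from scipy.linalg import null_space

def rank_reduction_step(X, A_list, tol=1e-9):
    # Factor X = L L^T with L in R^{n x r} via an eigendecomposition.
    w, V = np.linalg.eigh(X)
    keep = w > tol
    L = V[:, keep] * np.sqrt(w[keep])
    r = L.shape[1]
    if r * (r + 1) // 2 <= len(A_list):
        return X                          # rank bound of Theorem 1 already holds
    # Solve the homogeneous system (12), Abar_i . W = 0, over symmetric W,
    # parametrized by the r(r+1)/2 entries on and above the diagonal.
    iu = np.triu_indices(r)
    mult = np.where(iu[0] == iu[1], 1.0, 2.0)   # off-diagonal entries count twice
    rows = np.array([(L.T @ A @ L)[iu] * mult for A in A_list])
    s = null_space(rows)[:, 0]                  # a non-zero solution of (12)
    W = np.zeros((r, r))
    W[iu] = s
    W = W + W.T - np.diag(np.diag(W))
    if np.linalg.eigvalsh(W)[0] >= 0:
        W = -W                                  # ensure a negative eigenvalue
    lam_min = np.linalg.eigvalsh(W)[0]
    W_plus = np.eye(r) - W / lam_min            # PSD with rank < r
    return L @ W_plus @ L.T                     # feasible, rank strictly reduced
```

Repeating the step until it returns its input yields an optimal solution satisfying the bound r(r+1)/2 ≤ m.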

4.2 An Approximation Algorithm for Maximum Cut in Graphs

Suppose that we are given a simple undirected graph G = (V, E) and a function w : E → R₊ that assigns to each edge e ∈ E a non-negative weight w_e. The Maximum Cut Problem (Max-Cut) is that of finding a set S ⊆ V of vertices such that the total weight of the edges in the cut (S, V∖S), i.e., the sum of the weights of the edges with one endpoint in S and the other in V∖S, is maximized. By setting w_ij = 0 if (i, j) ∉ E, we may denote the weight of a cut (S, V∖S) by

    w(S, V∖S) = Σ_{i ∈ S, j ∈ V∖S} w_ij,                                         (15)

and our goal is to choose a set S ⊆ V such that the quantity in (15) is maximized. The Max-Cut problem is one of the fundamental computational problems on graphs and has been extensively studied by many researchers. It has been shown that the Max-Cut problem is unlikely to have a polynomial-time algorithm (see, e.g., [10]). On the other hand, in a seminal work, Goemans and Williamson [11] showed how SDP can be used to design a 0.878-approximation algorithm for the Max-Cut problem; i.e., given an instance (G, w) of the Max-Cut problem, the algorithm will find a cut (S, V∖S) whose value w(S, V∖S) is at least 0.878 times the optimal value. In this section, we will describe the algorithm of Goemans and Williamson and prove its approximation guarantee.

To begin, let (G, w) be a given instance of the Max-Cut problem, with n = |V|. Then, we can formulate the Max-Cut problem as an integer quadratic program, viz.

    v* = maximize   (1/2) Σ_{i<j} w_ij (1 − x_i x_j)                             (16)
         subject to x_i² = 1    for i = 1, ..., n.

Here, the variable x_i indicates which side of the cut vertex i belongs to. Specifically, the cut (S, V∖S) is given by S = { i ∈ {1, ..., n} : x_i = 1 }. Note that if vertices i and j belong to the same side of a cut, then x_i = x_j, and hence the edge (i, j) contributes zero to the objective function in (16). Otherwise, we have x_i x_j = −1, and its contribution to the objective function is 2w_ij/2 = w_ij.

In general, problem (16) is hard to solve. Thus, we consider relaxations of (16). One approach is to observe that both the objective function and the constraints in (16) are linear in x_i x_j, where 1 ≤ i, j ≤ n. In particular, if we let X = xx^T ∈ R^{n×n}, then problem (16) can be written as

    v* = maximize   (1/2) Σ_{i<j} w_ij (1 − X_ij)                                (17)
         subject to diag(X) = e,
                    X = xx^T,

where e is the vector of all ones. Since X = xx^T iff X ⪰ 0 and rank(X) ≤ 1, from our earlier discussion, we can drop the non-convex rank constraint and arrive at the following relaxation of (17):

    v*_sdp = maximize   (1/2) Σ_{i<j} w_ij (1 − X_ij)                            (18)
             subject to diag(X) = e,
                        X ⪰ 0.

Note that (18) is an SDP. Moreover, since (18) is a relaxation of (17), we have v*_sdp ≥ v*. Now, suppose that we have an optimal solution X* to (18). In general, the matrix X* need not be of the form xx^T, and hence it does not immediately yield a feasible solution to (17). However, we can extract from X* a solution x̄ ∈ {−1, 1}^n to (17) via the following randomized rounding procedure:

1. Compute the Cholesky factorization X* = U^T U of X*, where U ∈ R^{n×n}. Let u_i ∈ R^n be the i-th column of U. Note that ‖u_i‖₂ = 1 for i = 1, ..., n.

2. Let r ∈ R^n be a vector uniformly distributed on the unit sphere S^{n−1} = { x ∈ R^n : ‖x‖₂ = 1 }.

3. Set x̄_i = sgn(u_i^T r) for i = 1, ..., n, where

       sgn(z) = 1 if z ≥ 0, and −1 otherwise.

In other words, we choose a random hyperplane through the origin with r as its normal and partition the vertices according to whether their corresponding vectors u_i lie above or below the hyperplane.
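As an illustration (not part of the original handout), the relaxation (18) and the randomized rounding procedure can be put together in a few lines of CVXPY/numpy; the random weighted graph below is a placeholder.

```python
# A minimal sketch (not from the handout) of the relaxation (18) plus the
# Goemans-Williamson randomized rounding, on a random weighted graph.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(3)
n = 8
W = np.triu(rng.random((n, n)), 1)
W = W + W.T                                   # symmetric weights, zero diagonal
iu = np.triu_indices(n, 1)

# SDP (18): maximize (1/2) sum_{i<j} w_ij (1 - X_ij), diag(X) = e, X >= 0.
X = cp.Variable((n, n), PSD=True)
obj = 0.25 * cp.sum(cp.multiply(W, 1 - X))    # (1/4) over all i,j = (1/2) over i<j
prob = cp.Problem(cp.Maximize(obj), [cp.diag(X) == 1])
prob.solve()

# Rounding: factor X* = U^T U, then cut with a random hyperplane through 0.
w_eig, V = np.linalg.eigh(X.value)
U = (V * np.sqrt(np.clip(w_eig, 0, None))).T  # columns u_i satisfy X*_ij = u_i^T u_j
r = rng.standard_normal(n)
x = np.where(U.T @ r >= 0, 1, -1)             # x_i = sgn(u_i^T r)
cut = 0.5 * np.sum(W[iu] * (1 - x[iu[0]] * x[iu[1]]))
print("SDP bound:", prob.value, " rounded cut value:", cut)
```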

Since the solution x̄ ∈ {−1, 1}^n is produced via a randomized procedure, we are interested in its expected objective value; i.e.,

    E[ (1/2) Σ_{i<j} w_ij (1 − x̄_i x̄_j) ] = Σ_{i<j} (w_ij/2) (1 − E[x̄_i x̄_j])
                                           = Σ_{i<j} w_ij Pr[ sgn(u_i^T r) ≠ sgn(u_j^T r) ].    (19)

The following theorem provides a lower bound on the expected objective value (19) and allows us to compare it with the optimal value v*_sdp of (18).

Theorem 2  Let u, v ∈ S^{n−1}, and let r be a vector uniformly distributed on S^{n−1}. Then, we have

    Pr[ sgn(u^T r) ≠ sgn(v^T r) ] = (1/π) arccos(u^T v).                         (20)

Furthermore, for any z ∈ [−1, 1], we have

    (1/π) arccos(z) ≥ α · (1 − z)/2 > 0.878 · (1 − z)/2,                         (21)

where

    α = min_{0 ≤ θ ≤ π} (2/π) · θ / (1 − cos θ).

Proof  To establish (20), observe that by symmetry, we have

    Pr[ sgn(u^T r) ≠ sgn(v^T r) ] = 2 Pr[ u^T r ≥ 0, v^T r < 0 ].

Now, by projecting r onto the plane containing u and v, we see that u^T r ≥ 0 and v^T r < 0 iff the projection lies in the wedge formed by u and v. Since r is chosen from a spherically symmetric distribution, its projection will be a random direction on the plane containing u and v. Hence, we have

    Pr[ u^T r ≥ 0, v^T r < 0 ] = arccos(u^T v) / (2π),

as desired. To establish the first inequality in (21), consider the change of variable z = cos θ. Since z ∈ [−1, 1], we have θ ∈ [0, π]. Thus, it follows that

    (1/π) arccos(z) = θ/π = ( (2/π) · θ/(1 − cos θ) ) · (1 − cos θ)/2 ≥ α · (1 − z)/2,

as desired. The second inequality in (21) can be established using calculus, and we leave the proof to the readers.
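The constant α ≈ 0.87856 can be checked numerically; the following one-off computation (not in the handout) evaluates the defining minimum on a fine grid.

```python
# A one-off numerical check (not in the handout) of the constant alpha.
import numpy as np

theta = np.linspace(1e-6, np.pi, 1_000_000)
alpha = np.min((2 / np.pi) * theta / (1 - np.cos(theta)))
print(alpha)        # about 0.87856, which is indeed > 0.878
```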

Corollary 3  Given an instance (G, w) of the Max-Cut problem and an optimal solution to (18), the randomized rounding procedure above will produce a cut (S, V∖S) whose expected objective value satisfies E[w(S, V∖S)] ≥ 0.878 v*.

Proof  Let x̄ be the solution obtained from the randomized rounding procedure, and let S be the corresponding cut. By (19) and Theorem 2, we have

    E[ w(S, V∖S) ] = Σ_{i<j} (w_ij/π) arccos(u_i^T u_j)
                   ≥ 0.878 · Σ_{i<j} (w_ij/2) (1 − u_i^T u_j)
                   = 0.878 v*_sdp
                   ≥ 0.878 v*,

as desired.

For a survey of the theory and the many applications of the semidefinite relaxation technique, we refer the reader to [15].

References

[1] F. Alizadeh and D. Goldfarb. Second-Order Cone Programming. Mathematical Programming, Series B, 95(1):3-51, 2003.

[2] A. I. Barvinok. Problems of Distance Geometry and Convex Properties of Quadratic Maps. Discrete and Computational Geometry, 13(2):189-202, 1995.

[3] A. Ben-Tal, L. El Ghaoui, and A. Nemirovski. Robust Optimization. Princeton Series in Applied Mathematics. Princeton University Press, Princeton, NJ, 2009.

[4] A. Ben-Tal and A. Nemirovski. Robust Convex Optimization. Mathematics of Operations Research, 23(4):769-805, 1998.

[5] A. Ben-Tal and A. Nemirovski. Selected Topics in Robust Convex Optimization. Mathematical Programming, Series B, 112(1):125-158, 2008.

[6] W. Chen, M. Sim, J. Sun, and C.-P. Teo. From CVaR to Uncertainty Set: Implications in Joint Chance-Constrained Optimization. Operations Research, 58(2):470-485, 2010.

[7] X. Chen, M. Sim, and P. Sun. A Robust Optimization Perspective on Stochastic Programming. Operations Research, 55(6):1058-1071, 2007.

[8] S.-S. Cheung, A. M.-C. So, and K. Wang. Linear Matrix Inequalities with Stochastically Dependent Perturbations and Applications to Chance-Constrained Semidefinite Optimization. SIAM Journal on Optimization, 22(4):1394-1430, 2012.

[9] L. El Ghaoui, M. Oks, and F. Oustry. Worst-Case Value-at-Risk and Robust Portfolio Optimization: A Conic Programming Approach. Operations Research, 51(4):543-556, 2003.

[10] M. R. Garey and D. S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman and Company, New York, 1979.

[11] M. X. Goemans and D. P. Williamson. Improved Approximation Algorithms for Maximum Cut and Satisfiability Problems Using Semidefinite Programming. Journal of the ACM, 42(6):1115-1145, 1995.

[12] W. Hoeffding. Probability Inequalities for Sums of Bounded Random Variables. Journal of the American Statistical Association, 58(301):13-30, 1963.

[13] W. W.-L. Li, Y. J. Zhang, A. M.-C. So, and M. Z. Win. Slow Adaptive OFDMA Systems through Chance Constrained Programming. IEEE Transactions on Signal Processing, 58(7):3858-3869, 2010.

[14] M. S. Lobo, L. Vandenberghe, S. Boyd, and H. Lebret. Applications of Second-Order Cone Programming. Linear Algebra and Its Applications, 284:193-228, 1998.

[15] Z.-Q. Luo, W.-K. Ma, A. M.-C. So, Y. Ye, and S. Zhang. Semidefinite Relaxation of Quadratic Optimization Problems. IEEE Signal Processing Magazine, 27(3):20-34, 2010.

[16] G. Pataki. On the Rank of Extreme Matrices in Semidefinite Programs and the Multiplicity of Optimal Eigenvalues. Mathematics of Operations Research, 23(2):339-358, 1998.

[17] A. Shapiro. Rank Reducibility of a Symmetric Matrix and Sampling Theory of Minimum Trace Factor Analysis. Psychometrika, 47(2):187-199, 1982.

[18] A. Shapiro, D. Dentcheva, and A. Ruszczyński. Lectures on Stochastic Programming: Modeling and Theory, volume 9 of MPS-SIAM Series on Optimization. Society for Industrial and Applied Mathematics, Philadelphia, PA, 2009.

[19] M. B. Shenouda and T. N. Davidson. Outage-Based Designs for Multi-User Transceivers. In Proceedings of the 2009 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2009), pages 2389-2392, 2009.

[20] A. M.-C. So. Moment Inequalities for Sums of Random Matrices and Their Applications in Optimization. Mathematical Programming, Series A, 130(1):125-151, 2011.

[21] A. M.-C. So. Lecture Notes on Chance-Constrained Optimization. Available at http://www.se.cuhk.edu.hk/~manchoso/papers/ccopt.pdf, 2013.

[22] K.-Y. Wang, A. M.-C. So, T.-H. Chang, W.-K. Ma, and C.-Y. Chi. Outage Constrained Robust Transmit Optimization for Multiuser MISO Downlinks: Tractable Approximations by Conic Optimization. IEEE Transactions on Signal Processing, 62(21):5690-5705, 2014.

[23] Y. J. Zhang and A. M.-C. So. Optimal Spectrum Sharing in MIMO Cognitive Radio Networks via Semidefinite Programming. IEEE Journal on Selected Areas in Communications, 29(2):362-373, 2011.