Handout 6: Some Applications of Conic Linear Programming
G00E04: Convex Optimization and Its Applications in Signal Processing

Handout 6: Some Applications of Conic Linear Programming

Instructor: Anthony Man-Cho So. June 3, 2015

1 Introduction

Conic linear programming (CLP), and in particular semidefinite programming (SDP), has received a lot of attention in recent years. Such popularity can partly be attributed to the wide applicability of CLP, as well as to recent advances in the design of provably efficient interior-point algorithms. In this lecture we will consider some applications that arise in engineering and statistics and show how CLP techniques can be used to model them.

2 Logarithmic Chebyshev Approximation

Recall that in an earlier handout we considered a data-fitting problem in which we are given m data pairs (a_i, b_i), where a_i ∈ R^n and b_i ∈ R for i = 1, ..., m, and our goal is to find an x ∈ R^n that best fits the data in the ℓ∞ sense:

    minimize  max_{1≤i≤m} |b_i - a_i^T x|.    (1)

As we showed there, problem (1) can be formulated as an LP. Now, suppose that we are interested in the following optimization problem:

    minimize  max_{1≤i≤m} |log(b_i) - log(a_i^T x)|.    (2)

Such a problem arises when, for instance, the data b_1, ..., b_m have the dimension of a power or intensity. Here, we assume that b_1, ..., b_m > 0, and that log(a_i^T x) = -∞ if a_i^T x ≤ 0. We now show that (2) can be formulated as an SDP problem. To see this, we first observe that whenever a_i^T x > 0, we have

    |log(b_i) - log(a_i^T x)| = log max{ a_i^T x / b_i , b_i / (a_i^T x) },  i = 1, ..., m.

It follows that (2) is equivalent to

    minimize  t
    subject to  1/t ≤ a_i^T x / b_i ≤ t  for i = 1, ..., m,

which in turn is equivalent to the following SDP:

    minimize  t
    subject to  [ t - a_i^T x / b_i     0              0 ]
                [ 0                     a_i^T x / b_i  1 ]  ⪰ 0  for i = 1, ..., m.
                [ 0                     1              t ]

(The lower-right 2×2 block enforces (a_i^T x / b_i) t ≥ 1, i.e., 1/t ≤ a_i^T x / b_i, while the top-left entry enforces a_i^T x / b_i ≤ t.)
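The reduction above hinges on the identity |log b - log s| = log max{s/b, b/s} for s, b > 0. A quick numerical check of this identity (plain Python, not part of the handout; the sample pairs are made up):

```python
import math

# Check |log(b) - log(s)| = log(max(s/b, b/s)) for s = a_i^T x > 0 and b = b_i > 0,
# the identity that turns the log-Chebyshev objective into the two-sided
# constraint 1/t <= (a_i^T x)/b_i <= t with t = exp(max deviation).
def log_dev(s, b):
    return abs(math.log(b) - math.log(s))

def log_ratio(s, b):
    return math.log(max(s / b, b / s))

pairs = [(2.0, 1.0), (0.5, 3.0), (7.0, 7.0), (1e-3, 10.0)]
max_gap = max(abs(log_dev(s, b) - log_ratio(s, b)) for s, b in pairs)
```

The two expressions agree to machine precision on every pair, which is exactly why minimizing the log deviation is the same as minimizing t subject to the ratio constraints.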
3 Robust Linear Programming

To motivate the problem considered in this section, consider the following LP:

    minimize  c^T x
    subject to  Âx ≥ b̂,

where Â ∈ R^{m×n}, b̂ ∈ R^m, and c ∈ R^n are given. As often occurs in practice, the data of the above LP, namely Â and b̂, are uncertain. In an attempt to tackle such uncertainties, Ben-Tal and Nemirovski [3] introduced the idea of robust optimization. In the robust optimization setting, we assume that the uncertain data lie in some given uncertainty set U. Our goal is then to find a solution that is feasible for all possible realizations of the uncertain data and optimizes some given objective. To derive the robust counterpart, let us first rewrite the LP as

    minimize  c̄^T z
    subject to  Az ≥ 0,  z_{n+1} = 1,    (3)

where A = [Â  -b̂] ∈ R^{m×(n+1)}, c̄ = (c, 0) ∈ R^{n+1}, and z ∈ R^{n+1}. Now, suppose that each row a_i ∈ R^{n+1} of the matrix A lies in an ellipsoidal region U_i whose center u_i ∈ R^{n+1} is given. Specifically, we impose the constraint that

    a_i ∈ U_i = { x ∈ R^{n+1} : x = u_i + B_i v, ||v||_2 ≤ 1 }  for i = 1, ..., m,

where a_i is the i-th row of A, u_i ∈ R^{n+1} is the center of the ellipsoid U_i, and B_i is some (n+1)×(n+1) positive semidefinite matrix. Then, the robust counterpart of (3) is the following optimization problem:

    minimize  c̄^T z
    subject to  a_i^T z ≥ 0 for all a_i ∈ U_i, i = 1, ..., m,    (4)
                z_{n+1} = 1.

Note that in general there are uncountably many constraints in (4). However, we claim that (4) is equivalent to an SOCP problem. To see this, observe that a_i^T z ≥ 0 for all a_i ∈ U_i iff

    0 ≤ min_{v ∈ R^{n+1}: ||v||_2 ≤ 1} (u_i + B_i v)^T z = u_i^T z - ||B_i z||_2,  i = 1, ..., m.

Hence, we conclude that (4) is equivalent to

    minimize  c̄^T z
    subject to  ||B_i z||_2 ≤ u_i^T z  for i = 1, ..., m,
                z_{n+1} = 1,

which is an SOCP problem. For further readings on robust optimization, we refer the readers to [17, 2, 4, 11].
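The support-function step above is easy to verify numerically: the worst case of a_i^T z over the ellipsoid is attained at v = -B_i z / ||B_i z||_2. A small numpy check with randomly generated data (illustrative only, not from the handout):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
z = rng.standard_normal(n)
u = rng.standard_normal(n)
A = rng.standard_normal((n, n))
B = A @ A.T                     # a symmetric PSD matrix, like B_i in U_i

# Worst case of a^T z over the ellipsoid {u + B v : ||v||_2 <= 1}:
# min_{||v|| <= 1} (u + B v)^T z = u^T z - ||B z||_2, at v* = -B z / ||B z||_2.
closed_form = u @ z - np.linalg.norm(B @ z)
v_star = -B @ z / np.linalg.norm(B @ z)
gap = abs((u + B @ v_star) @ z - closed_form)

# Every other unit-norm v gives a larger (less adversarial) value of a^T z.
vs = rng.standard_normal((1000, n))
vs /= np.linalg.norm(vs, axis=1, keepdims=True)
vals = u @ z + vs @ (B @ z)
ok = bool(np.all(vals >= closed_form - 1e-9))
```

This is the computation that collapses the uncountably many constraints of (4) into one second-order cone constraint per row.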
4 Chance Constrained Linear Programming

In the previous section, we showed how one can handle data uncertainties in a linear optimization problem using the robust optimization approach. Such an approach is particularly suitable if there is little information about the nature of the data uncertainty, and/or if the constraints are not to be violated under any circumstance. However, it can produce very conservative solutions. In situations where the distribution of the data uncertainty is partially known, one can often obtain a better solution by allowing the constraints to be violated with small probability. To illustrate this possibility, let us consider the following simple LP:

    minimize  c^T x
    subject to  a^T x ≤ b,    (5)
                x ∈ P,

where a, c ∈ R^n are given vectors, b ∈ R is a given scalar, and P ⊆ R^n is a given polyhedron. For the sake of simplicity, suppose that P is deterministic, but the data a, b are randomly affinely perturbed; i.e.,

    a = a^0 + Σ_{i=1}^l ε_i a^i,    b = b^0 + Σ_{i=1}^l ε_i b^i,

where a^0, a^1, ..., a^l ∈ R^n and b^0, b^1, ..., b^l ∈ R are given, and ε_1, ..., ε_l are i.i.d. mean-zero random variables supported on [-1, 1]. Then, for any given tolerance parameter δ ∈ (0, 1), we can formulate the following chance-constrained counterpart of (5):

    minimize  c^T x
    subject to  Pr( a^T x > b ) ≤ δ,    (6)
                x ∈ P.

In other words, a solution x ∈ P is feasible for (6) if it violates the constraint a^T x ≤ b only with probability at most δ. The probabilistic constraint in (6) is known as a chance constraint. Note that when δ = 0, (6) reduces to a robust linear optimization problem. Moreover, if x ∈ R^n is feasible for (6) at some tolerance level δ ≥ 0, then it is also feasible for (6) at any δ' ≥ δ. Thus, by varying δ, we are able to adjust the conservatism of the solution. Despite the attractiveness of the chance-constraint formulation, it has one major drawback; namely, it is generally computationally intractable. Indeed, even when the distributions of ε_1, ..., ε_l are very simple, the feasible set defined by the chance constraint can be non-convex.
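To make the chance constraint concrete, here is a toy Monte Carlo feasibility check for an instance of (6); all data below (a^0, a^1, a^2, b, x, δ) are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy instance of (6): a = a0 + eps1*a1 + eps2*a2 with eps_i ~ Uniform[-1, 1]
# i.i.d.; the numbers are hypothetical, chosen only to illustrate the check.
a0 = np.array([1.0, 1.0])
a1 = np.array([0.3, 0.0])
a2 = np.array([0.0, 0.3])
b = 2.0
delta = 0.05
x = np.array([0.8, 0.8])                     # candidate solution to be checked

eps = rng.uniform(-1.0, 1.0, size=(100000, 2))
a = a0 + eps[:, :1] * a1 + eps[:, 1:] * a2   # one realization of a per row
viol = float(np.mean(a @ x > b))
# This x violates a^T x <= b with small but nonzero probability: it is feasible
# for the chance constraint at delta = 0.05 but not for the robust (delta = 0)
# counterpart, since the worst-case value of a^T x here is 2.08 > b.
```

Of course, such sampling only *checks* feasibility of a given x; optimizing over the chance-constrained feasible set is the hard part, which motivates the safe tractable approximations developed next.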
One way to tackle this problem is to replace the chance constraint by a safe tractable approximation; i.e., a system of deterministic constraints H such that (i) x ∈ R^n is feasible for the chance constraint whenever it is feasible for H (safe approximation), and (ii) the constraints in H are efficiently computable (tractability). To develop a safe tractable approximation of the chance constraint in (6), we first observe that it is equivalent to the following system of constraints:

    Pr( y_0 + Σ_{i=1}^l ε_i y_i > 0 ) ≤ δ,    (7)
    y_i = (a^i)^T x - b^i  for i = 0, 1, ..., l.    (8)
Since ε_1, ..., ε_l are i.i.d. mean-zero random variables supported on [-1, 1], by Hoeffding's inequality [14], we have

    Pr( Σ_{i=1}^l ε_i y_i > t ) ≤ exp( -t² / (2 Σ_{i=1}^l y_i²) )  for any t > 0.

It follows that when

    y_0 + sqrt( 2 ln(1/δ) Σ_{i=1}^l y_i² ) ≤ 0,    (9)

we have

    Pr( y_0 + Σ_{i=1}^l ε_i y_i > 0 ) ≤ exp( -y_0² / (2 Σ_{i=1}^l y_i²) ) ≤ δ.

In other words, (9) is a sufficient condition for (7) to hold. The upshot of (9) is that it is a second-order cone constraint. Hence, we conclude that constraints (8) and (9) together serve as a safe tractable approximation of the chance constraint in (6).

So far we have only discussed how to handle a single scalar chance constraint. Of course, joint scalar chance constraints; i.e., chance constraints of the form

    Pr( a_i^T x > b_i for some i ∈ {1, 2, ..., m} ) ≤ δ

are also of great interest. However, they are not as easy to handle as a single scalar chance constraint. For some recent results in this direction, see [2, 6, 21, 8].

It is instructive to examine the relationship between the robust optimization approach and the safe tractable approximation approach. Upon following the derivations in the previous section, we see that constraint (9) is equivalent to the following robust constraint:

    d^T y ≤ 0  for all d ∈ U,

where y = (y_0, y_1, ..., y_l), U is the ellipsoidal uncertainty set given by

    U = { x ∈ R^{l+1} : x = e_1 + Bv, ||v||_2 ≤ 1 },

and B ∈ R^{(l+1)×(l+1)} is given by

    B = sqrt( 2 ln(1/δ) ) [ 0  0 ]
                          [ 0  I ].

In other words, we are using the following robust optimization problem

    minimize  c^T x
    subject to  Σ_{i=0}^l d_i ( (a^i)^T x - b^i ) ≤ 0  for all d ∈ U,
                x ∈ P

as a safe tractable approximation of the chance-constrained optimization problem (6). For further elaboration on the connection between robust optimization and chance-constrained optimization, see [7, 6]. We remark that chance-constrained optimization has been employed in many recent applications; see, e.g., [11, 20, 16, 25, 24]. For a more comprehensive treatment and some recent advances of chance-constrained optimization, we refer the reader to [19, Chapter 4] and [22].
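One can check numerically that condition (9) is indeed safe: pick y_1, ..., y_l arbitrarily, set y_0 so that (9) holds with equality, and estimate the violation probability by simulation. A sketch with made-up data (the true probability is typically far below δ, reflecting the conservatism of the Hoeffding bound):

```python
import numpy as np

rng = np.random.default_rng(2)
delta = 0.05
l = 8
y = rng.standard_normal(l)                 # y_1, ..., y_l (arbitrary values)
# Choose y_0 so that the safe condition (9) holds with equality:
y0 = -np.sqrt(2.0 * np.log(1.0 / delta)) * np.linalg.norm(y)

# Monte Carlo estimate of Pr(y_0 + sum_i eps_i y_i > 0), eps_i ~ Uniform[-1, 1].
eps = rng.uniform(-1.0, 1.0, size=(200000, l))
viol = float(np.mean(y0 + eps @ y > 0.0))
# Hoeffding guarantees viol <= exp(-y0^2 / (2 ||y||^2)) = delta; the empirical
# estimate is in fact much smaller, showing how conservative (9) can be.
```

The gap between δ and the empirical violation frequency is exactly the price paid for replacing the intractable constraint (7) by the tractable sufficient condition (9).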
5 Quadratically Constrained Quadratic Optimization

A class of optimization problems that frequently arises in applications is that of quadratically constrained quadratic optimization problems (QCQPs); i.e., problems of the form

    minimize  x^T Q x
    subject to  x^T A_i x ≥ b_i  for i = 1, ..., m,    (10)

where Q, A_1, ..., A_m ∈ S^n are given. In general, due to the non-convexity of the objective function and constraints, problem (10) is intractable. Nevertheless, it can be tackled by the so-called semidefinite relaxation technique. To introduce this technique, we first observe that for any Q ∈ S^n,

    x^T Q x = tr( x^T Q x ) = tr( Q x x^T ).

Hence, problem (10) is equivalent to

    minimize  tr( Q x x^T )
    subject to  tr( A_i x x^T ) ≥ b_i  for i = 1, ..., m.

Now, using the spectral theorem for symmetric matrices (see Handout B), one can verify that

    X = x x^T  ⟺  X ⪰ 0, rank(X) ≤ 1.

It follows that problem (10) is equivalent to the following rank-constrained SDP problem:

    minimize  tr( QX )
    subject to  tr( A_i X ) ≥ b_i  for i = 1, ..., m,    (11)
                X ⪰ 0,  rank(X) ≤ 1.

The advantage of the formulation in (11) over that in (10) is that it reveals where the difficulty of the problem lies; namely, in the non-convex constraint rank(X) ≤ 1. By dropping this constraint, we obtain the following semidefinite relaxation of problem (10):

    minimize  tr( QX )
    subject to  tr( A_i X ) ≥ b_i  for i = 1, ..., m,    (12)
                X ⪰ 0.

Note that problem (12) is an SDP, and hence can be efficiently solved. Of course, an optimal solution X* to problem (12) may not even be feasible for problem (10), since we need not have rank(X*) ≤ 1. Moreover, it is generally impossible to convert X* into an optimal solution x* to problem (10). However, as it turns out, it is often possible to extract a near-optimal solution to problem (10) from an optimal solution to problem (12). In the next section, we will illustrate how this can be done for a well-known combinatorial problem, the Maximum Cut Problem (Max-Cut). For a survey of the theory and the many applications of the semidefinite relaxation technique, see [18].
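The lifting X = x x^T behind the relaxation rests on the identity x^T Q x = tr(Q X) and on X being PSD of rank at most one; a quick numpy sanity check with random data (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
x = rng.standard_normal(n)
Q = rng.standard_normal((n, n))
Q = (Q + Q.T) / 2.0                      # a symmetric matrix, as in S^n
X = np.outer(x, x)                       # the lifted variable X = x x^T

gap = abs(x @ Q @ x - np.trace(Q @ X))   # x^T Q x = tr(Q X)
rank = int(np.linalg.matrix_rank(X))     # rank(X) <= 1
min_eig = float(np.linalg.eigvalsh(X).min())  # X is PSD (up to roundoff)
```

Dropping only the rank condition is what makes the relaxation exact in the lifted variable X whenever the SDP happens to return a rank-one solution.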
5.1 An Approximation Algorithm for Maximum Cut in Graphs

Suppose that we are given a simple undirected graph G = (V, E) and a function w : E → R that assigns to each edge e ∈ E a non-negative weight w_e. The Maximum Cut Problem (Max-Cut) is that of finding a set S ⊆ V of vertices such that the total weight of the edges in the cut (S, V\S); i.e., the sum of the weights of the edges with one endpoint in S and the other in V\S, is maximized. By setting w_ij = 0 if (i, j) ∉ E, we may denote the weight of a cut (S, V\S) by

    w(S, V\S) = Σ_{i∈S, j∈V\S} w_ij,    (13)

and our goal is to choose a set S ⊆ V such that the quantity in (13) is maximized. The Max-Cut problem is one of the fundamental computational problems on graphs and has been extensively studied by many researchers. It has been shown that the Max-Cut problem is unlikely to have a polynomial-time algorithm (see, e.g., [12]). On the other hand, in a seminal work, Goemans and Williamson [13] showed how SDP can be used to design a 0.878-approximation algorithm for the Max-Cut problem; i.e., given an instance (G, w) of the Max-Cut problem, the algorithm will find a cut (S, V\S) whose value w(S, V\S) is, in expectation, at least 0.878 times the optimal value. In this section, we will describe the algorithm of Goemans and Williamson and prove its approximation guarantee.

To begin, let (G, w) be a given instance of the Max-Cut problem, with n = |V|. Then, we can formulate the Max-Cut problem as an integer quadratic program, viz.

    v* = maximize  Σ_{(i,j)∈E} w_ij (1 - x_i x_j)/2    (14)
         subject to  x_i² = 1  for i = 1, ..., n.

Here, the variable x_i indicates which side of the cut vertex i belongs to. Specifically, the cut (S, V\S) is given by S = { i ∈ {1, ..., n} : x_i = 1 }. Note that if vertices i and j belong to the same side of a cut, then x_i = x_j, and hence the edge's contribution to the objective function in (14) is zero. Otherwise, we have x_i x_j = -1, and its contribution to the objective function is w_ij (1 - (-1))/2 = w_ij. In general, problem (14) is hard to solve. Thus, we consider relaxations of (14).
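The objective in (14) really does count cut weight edge by edge; a small check on a made-up weighted graph:

```python
# Cut-weight identity on a small weighted graph: the objective of (14),
# sum_{(i,j) in E} w_ij (1 - x_i x_j)/2, equals the weight of the cut
# (S, V\S) with S = {i : x_i = 1}. Edge weights below are made up.
edges = {(0, 1): 2.0, (1, 2): 1.0, (0, 2): 3.0, (2, 3): 4.0}
x = [1, -1, 1, 1]                        # S = {0, 2, 3}, V\S = {1}

obj = sum(w * (1 - x[i] * x[j]) / 2.0 for (i, j), w in edges.items())
S = {i for i in range(4) if x[i] == 1}
cut = sum(w for (i, j), w in edges.items() if (i in S) != (j in S))
# Both equal 3.0: only edges (0,1) and (1,2) cross the cut.
```

Each term (1 - x_i x_j)/2 is an indicator: it is 1 exactly when the edge crosses the cut, which is why (14) is an exact integer reformulation of Max-Cut.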
One approach is to observe that both the objective function and the constraints in (14) are linear in x_i x_j, where 1 ≤ i, j ≤ n. In particular, if we let X = x x^T ∈ R^{n×n}, then problem (14) can be written as

    v* = maximize  Σ_{(i,j)∈E} w_ij (1 - X_ij)/2
         subject to  diag(X) = e,    (15)
                     X = x x^T.

Since X = x x^T iff X ⪰ 0 and rank(X) ≤ 1, from our earlier discussion we can drop the non-convex rank constraint and arrive at the following relaxation of (15):

    v_sdp = maximize  Σ_{(i,j)∈E} w_ij (1 - X_ij)/2
            subject to  diag(X) = e,    (16)
                        X ⪰ 0.
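For the unit-weight triangle graph, the optimal solution of the SDP (16) is known in closed form, which lets one test the random-hyperplane rounding procedure analyzed below end to end without invoking an SDP solver. A numpy sketch (illustrative only, not from the handout):

```python
import numpy as np

rng = np.random.default_rng(4)

# Unit-weight triangle: the SDP (16) optimum has X_ij = cos(120 degrees) = -1/2
# off the diagonal, giving v_sdp = 3 * (1 - (-1/2))/2 = 2.25, while the true
# maximum cut is 2 (any single vertex against the other two).
X = np.array([[1.0, -0.5, -0.5],
              [-0.5, 1.0, -0.5],
              [-0.5, -0.5, 1.0]])
edges = [(0, 1), (1, 2), (0, 2)]

# Factor X = U^T U via an eigendecomposition (X is singular, so plain Cholesky
# would fail); column i of U is the unit vector u_i assigned to vertex i.
lam, V = np.linalg.eigh(X)
U = np.sqrt(np.clip(lam, 0.0, None))[:, None] * V.T

cuts = []
for _ in range(2000):
    r = rng.standard_normal(3)               # random hyperplane normal
    x = np.where(U.T @ r >= 0.0, 1, -1)      # x_i = sgn(u_i^T r)
    cuts.append(sum((1 - x[i] * x[j]) / 2.0 for i, j in edges))
avg = float(np.mean(cuts))

v_sdp = sum((1 - X[i, j]) / 2.0 for i, j in edges)
# Since u_1 + u_2 + u_3 = 0, no hyperplane puts all three vectors on one side,
# so every rounded cut here has weight 2, comfortably above 0.878 * v_sdp.
```

The triangle also shows that v_sdp can strictly exceed v* (2.25 versus 2), so the 0.878 guarantee compares the rounded cut against an upper bound on the optimum.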
Note that (16) is an SDP. Moreover, since (16) is a relaxation of (15), we have v_sdp ≥ v*. Now, suppose that we have an optimal solution X* to (16). In general, the matrix X* need not be of the form x x^T, and hence it does not immediately yield a feasible solution to (15). However, we can extract from X* a solution x ∈ {-1, 1}^n to (15) via the following randomized rounding procedure:

1. Compute the Cholesky factorization X* = U^T U of X*, where U ∈ R^{n×n}. Let u_i ∈ R^n be the i-th column of U. Note that ||u_i||_2 = 1 for i = 1, ..., n.

2. Let r ∈ R^n be a vector uniformly distributed on the unit sphere S^{n-1} = { x ∈ R^n : ||x||_2 = 1 }.

3. Set x_i = sgn(u_i^T r) for i = 1, ..., n, where sgn(z) = 1 if z ≥ 0 and sgn(z) = -1 otherwise.

In other words, we choose a random hyperplane through the origin with r as its normal and partition the vertices according to whether their corresponding vectors lie above or below the hyperplane. Since the solution x ∈ {-1, 1}^n is produced via a randomized procedure, we are interested in its expected objective value; i.e.,

    E[ Σ_{(i,j)∈E} w_ij (1 - x_i x_j)/2 ] = Σ_{(i,j)∈E} w_ij E[ (1 - x_i x_j)/2 ]
                                          = Σ_{(i,j)∈E} w_ij Pr[ sgn(u_i^T r) ≠ sgn(u_j^T r) ].    (17)

The following theorem provides a lower bound on the expected objective value (17) and allows us to compare it with the optimal value v_sdp of (16).

Theorem 1  Let u, v ∈ S^{n-1}, and let r be a vector uniformly distributed on S^{n-1}. Then, we have

    Pr[ sgn(u^T r) ≠ sgn(v^T r) ] = (1/π) arccos(u^T v).    (18)

Furthermore, for any z ∈ [-1, 1], we have

    (1/π) arccos(z) ≥ α (1 - z)/2,    (19)

where

    α = min_{0≤θ≤π} (2/π) θ/(1 - cos θ) > 0.878.

Proof  To establish (18), observe that by symmetry, we have

    Pr[ sgn(u^T r) ≠ sgn(v^T r) ] = 2 Pr( u^T r ≥ 0, v^T r < 0 ).

Now, by projecting r onto the plane containing u and v, we see that u^T r ≥ 0 and v^T r < 0 iff the projection lies in the wedge formed by u and v. Since r is chosen from a spherically symmetric distribution, its projection is a random direction on the plane containing u and v. Hence, we have

    Pr( u^T r ≥ 0, v^T r < 0 ) = arccos(u^T v) / (2π),
as desired. To establish the first inequality in (19), consider the change of variable z = cos θ. Since z ∈ [-1, 1], we have θ ∈ [0, π]. Thus, it follows that

    (1/π) arccos(z) = θ/π = [ (2/π) θ/(1 - cos θ) ] (1 - cos θ)/2 ≥ α (1 - z)/2,

as desired. The second inequality, α > 0.878, can be established using calculus, and we leave the proof to the readers.

Corollary 1  Given an instance (G, w) of the Max-Cut problem and an optimal solution to (16), the randomized rounding procedure above will produce a cut (S, V\S) whose expected objective value satisfies E[ w(S, V\S) ] ≥ 0.878 v*.

Proof  Let x be the solution obtained from the randomized rounding procedure, and let (S, V\S) be the corresponding cut. By (17) and Theorem 1, we have

    E[ w(S, V\S) ] = Σ_{(i,j)∈E} w_ij (1/π) arccos(u_i^T u_j)
                   ≥ 0.878 Σ_{(i,j)∈E} w_ij (1 - u_i^T u_j)/2
                   = 0.878 v_sdp ≥ 0.878 v*,

as desired.

6 Sparse Principal Component Analysis

Principal Component Analysis (PCA) (see, e.g., [23]) is a very important tool in data analysis. It provides a way to reduce the dimension of a given data set, thus revealing the sometimes hidden underlying structure of the data set and facilitating further analysis on it. To motivate the problem of finding principal components, consider the following scenario. Suppose that we are interested in some attributes X_1, ..., X_n of a population. In order to estimate the values of these attributes, one may sample from the population. Specifically, let X_ij be the value of the j-th attribute of the i-th individual, where i = 1, ..., m and j = 1, ..., n. For j, k = 1, ..., n, define

    X̄_j = (1/m) Σ_{i=1}^m X_ij,    σ_jk = (1/m) Σ_{i=1}^m (X_ij - X̄_j)(X_ik - X̄_k)

to be the sample mean of X_j and the sample covariance between X_j and X_k, respectively. Define Σ = [σ_jk]_{j,k} to be the sample covariance matrix. The goal is then to find the principal components u_1, ..., u_n such that the linear combination Σ_{j=1}^n u_j X_j has maximum sample variance. In other words, we would like to solve the following problem:

    max_{||u||_2 ≤ 1}  u^T Σ u,    (20)
where u = (u_1, ..., u_n). The model given by (20) is based on the belief that principal components with a large associated variance indicate interesting features in the data. However, one of the shortcomings of (20) is that the optimal linear combination may use a large number of the original variables; i.e., most of the factors u_1, ..., u_n are non-zero, thus posing a difficulty in interpreting which variables are the most influential. This motivates us to consider the problem of finding sparse principal components; i.e., a set {u_1, ..., u_n} of principal components that maximizes the variance while at the same time having only a few non-zero u_i's. There are many ways to formulate such a problem. For instance, one can take the following penalty-function approach (see [9]):

    v(ρ) = max_{||u||_2 ≤ 1} { u^T Σ u - ρ ||u||_0 },    (21)

where ||u||_0 = |{ i ∈ {1, ..., n} : u_i ≠ 0 }| is the number of non-zero elements in the vector u ∈ R^n, and ρ ≥ 0 is a parameter controlling the level of sparsity. Note that the objective function in (21) is non-convex, and the associated optimization problem is hard to solve in general. In this section, we will show how to formulate a tractable relaxation of (21) using SDP.

To begin, let us observe that Σ ⪰ 0, and that we may assume without loss of generality that Σ_11 ≥ Σ_22 ≥ ... ≥ Σ_nn ≥ 0. Let Σ^{1/2} ⪰ 0 be such that Σ = Σ^{1/2} Σ^{1/2}. The following proposition shows that we only need to consider the case where ρ < Σ_11.

Proposition 1  If ρ ≥ Σ_11, then u = 0 is optimal for (21).

Proof  Note that for any u ∈ R^n, we have

    u^T Σ u = Σ_{i,j=1}^n Σ_ij u_i u_j ≤ Σ_11 Σ_{i,j=1}^n |u_i| |u_j| = Σ_11 ||u||_1²,

where the inequality follows from the fact that for any n×n positive semidefinite matrix A, we have |A_ij| ≤ max_{1≤i≤n} A_ii for 1 ≤ i, j ≤ n. Furthermore, by the Cauchy-Schwarz inequality, for any u ∈ R^n, we have ||u||_1 ≤ ||u||_2 sqrt(||u||_0). It follows that

    v(ρ) = max_{||u||_2 ≤ 1} { u^T Σ u - ρ ||u||_0 } ≤ max_{||u||_2 ≤ 1} ( Σ_11 - ρ ) ||u||_0 ≤ 0.

In particular, we see that u = 0 is an optimal solution to (21). This completes the proof.

By Proposition 1, we may assume that ρ < Σ_11, in which case (21) is equivalent to the problem

    v(ρ) = max_{||u||_2 = 1} { u^T Σ u - ρ ||u||_0 }.    (23)
Now, for a given vector u ∈ R^n, let w(u) ∈ {0, 1}^n be the indicator vector of the sparsity pattern of u; i.e., w(u)_i = 1 iff u_i ≠ 0, for i = 1, ..., n. Observe that

    u^T Σ u = ū^T diag(w(u)) Σ diag(w(u)) ū
for any ū ∈ R^n such that ū_i = u_i whenever u_i ≠ 0, for i = 1, ..., n. On the other hand, by the Courant-Fischer characterization (see, e.g., [15, Theorem 4.2.11]), for a given sparsity-pattern vector w ∈ {0, 1}^n, we have

    max_{||u||_2 = 1} u^T diag(w) Σ diag(w) u = λ_max( diag(w) Σ diag(w) ),

where λ_max(M) is the largest eigenvalue of the matrix M. Thus, upon letting σ_i ∈ R^n be the i-th column of Σ^{1/2}, where i = 1, ..., n, we may write (23) as follows:

    v(ρ) = max_{w∈{0,1}^n} { λ_max( diag(w) Σ diag(w) ) - ρ e^T w }
         = max_{w∈{0,1}^n} { λ_max( diag(w) Σ^{1/2} Σ^{1/2} diag(w) ) - ρ e^T w }
         = max_{w∈{0,1}^n} { λ_max( Σ^{1/2} diag(w) Σ^{1/2} ) - ρ e^T w }    (24)
         = max_{w∈{0,1}^n} max_{||u||_2=1} { u^T Σ^{1/2} diag(w) Σ^{1/2} u - ρ e^T w }
         = max_{||u||_2=1} max_{w∈{0,1}^n} Σ_{i=1}^n w_i [ (σ_i^T u)² - ρ ]    (25)
         = max_{||u||_2=1} Σ_{i=1}^n [ (σ_i^T u)² - ρ ]_+ ,    (26)

where (24) follows from the fact that λ_max(M^T M) = λ_max(M M^T) for any matrix M and diag(w)² = diag(w) for any w ∈ {0, 1}^n, and (25) follows from the fact that

    u^T Σ^{1/2} diag(w) Σ^{1/2} u = || diag(w) Σ^{1/2} u ||_2² = Σ_{i=1}^n w_i² (σ_i^T u)² = Σ_{i=1}^n w_i (σ_i^T u)².

Note that the objective function in (26) is still non-convex. However, the upshot of (26) is that the objective function is linear in u_i u_j, where 1 ≤ i, j ≤ n. In particular, let U = u u^T ∈ R^{n×n}. Then, it is easy to verify that (σ_i^T u)² = σ_i^T U σ_i for i = 1, ..., n. Moreover, for a symmetric matrix U, we have U = u u^T with ||u||_2 = 1 iff tr(U) = 1, rank(U) = 1, and U ⪰ 0. Thus, we see that (26) is equivalent to the following problem:

    v(ρ) = maximize  Σ_{i=1}^n [ σ_i^T U σ_i - ρ ]_+
           subject to  tr(U) = 1, U ⪰ 0, rank(U) = 1.    (27)

Note that the function U ↦ Σ_{i=1}^n [ σ_i^T U σ_i - ρ ]_+ is convex. Unfortunately, problem (27) is still hard to solve in general, since we are maximizing a convex function, and the constraint rank(U) = 1
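The two eigenvalue identities used in the reformulation chain above can be verified numerically on random data (a sketch with made-up Σ and sparsity pattern w):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5
A = rng.standard_normal((n, n))
Sigma = A @ A.T                                  # stand-in covariance, PSD
lam, V = np.linalg.eigh(Sigma)
half = V @ np.diag(np.sqrt(np.clip(lam, 0.0, None))) @ V.T   # Sigma^{1/2}
w = np.array([1.0, 0.0, 1.0, 1.0, 0.0])          # an arbitrary sparsity pattern
W = np.diag(w)

# lambda_max(M M^T) = lambda_max(M^T M) with M = diag(w) Sigma^{1/2}, together
# with diag(w)^2 = diag(w): replacing diag(w) Sigma diag(w) by
# Sigma^{1/2} diag(w) Sigma^{1/2} preserves the largest eigenvalue.
l1 = float(np.linalg.eigvalsh(W @ Sigma @ W).max())
l2 = float(np.linalg.eigvalsh(half @ W @ half).max())

# u^T Sigma^{1/2} diag(w) Sigma^{1/2} u = sum_i w_i (sigma_i^T u)^2, where
# sigma_i is the i-th column of Sigma^{1/2}.
u = rng.standard_normal(n)
u /= np.linalg.norm(u)
lhs = float(u @ half @ W @ half @ u)
rhs = float(sum(w[i] * (half[:, i] @ u) ** 2 for i in range(n)))
```

Both pairs agree to numerical precision, which is the mechanism by which the combinatorial maximization over w is pushed inside the sum, one coordinate at a time.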
is not convex. However, not all is lost, as we may proceed as follows. First, let us introduce a notation. Given an n×n symmetric matrix M with eigenvalues λ_1, ..., λ_n, define

    tr_+(M) = Σ_{i=1}^n max{λ_i, 0}.

Now, observe that if U = u u^T and ||u||_2 = 1, then U^{1/2} = U = u u^T. Moreover, for any α ∈ R, the matrix α u u^T has one eigenvalue equal to α and n-1 eigenvalues equal to 0, which implies that tr_+(α u u^T) = max{α, 0}. It follows that

    [ σ_i^T U σ_i - ρ ]_+ = tr_+( (u^T σ_i σ_i^T u - ρ) u u^T )
                          = tr_+( u u^T σ_i σ_i^T u u^T - ρ u u^T )
                          = tr_+( U^{1/2} σ_i σ_i^T U^{1/2} - ρ U ).

The point of the above manipulation is that the function U ↦ tr_+( U^{1/2} σ_i σ_i^T U^{1/2} - ρ U ) is concave, as the following theorem shows:

Theorem 2  Let M be an n×n symmetric matrix. Then, the function X ↦ tr_+( X^{1/2} M X^{1/2} ) is concave on S_+^n.

Proof  The key step of the proof is to show that

    tr_+( X^{1/2} M X^{1/2} ) = max_{X ⪰ P ⪰ 0} tr(PM) = min_{Y ⪰ M, Y ⪰ 0} tr(YX).    (28)

This would imply the desired result, since the last expression is a pointwise minimum of affine functions of X and hence is concave. To prove the first equality in (28), observe that since X^{1/2} M X^{1/2} is symmetric, by the spectral theorem for symmetric matrices (see, e.g., [15, Theorem 4.1.5]), we may write X^{1/2} M X^{1/2} = U Γ U^T for some orthogonal matrix U and diagonal matrix Γ. Then, we have

    tr_+( X^{1/2} M X^{1/2} ) = Σ_{i=1}^n max{Γ_ii, 0}
                              = max_{w∈{0,1}^n} tr( diag(w) Γ diag(w) )
                              = max_{w∈{0,1}^n} tr[ ( X^{1/2} U diag(w) U^T X^{1/2} ) M ].

Since X ⪰ X^{1/2} U diag(w) U^T X^{1/2} ⪰ 0 for all w ∈ {0, 1}^n, we conclude that

    tr_+( X^{1/2} M X^{1/2} ) ≤ max_{X ⪰ P ⪰ 0} tr(PM).

Conversely, let P be such that X ⪰ P ⪰ 0. For any ε > 0, let X_ε = X + εI. Note that X_ε ≻ 0 for all ε > 0. Let X_ε^{1/2} ≻ 0 be such that X_ε = X_ε^{1/2} X_ε^{1/2}. In particular, X_ε^{1/2} is invertible. Thus, there exists a symmetric matrix D_ε such that P = X_ε^{1/2} U D_ε U^T X_ε^{1/2}.
Since the region { Y ∈ S^n : X ⪰ Y ⪰ 0 } is compact, upon taking ε ↓ 0 and considering a subsequence if necessary, we conclude that P = X^{1/2} U D U^T X^{1/2} for some symmetric matrix D. Since P ⪰ 0, we see that D_ε ⪰ 0 for all ε > 0. Furthermore, since X_ε ⪰ X ⪰ P, we have

    X_ε - P = X_ε^{1/2} U ( I - D_ε ) U^T X_ε^{1/2} ⪰ 0

for all ε > 0. Hence, we conclude that I - D_ε ⪰ 0 for all ε > 0, which in turn implies that I - D ⪰ 0. In particular, all the eigenvalues of D lie in [0, 1]. Note, however, that D need not be diagonal. Now, in order to complete the proof of the first equality in (28), it suffices to prove the following:

    tr(PM) = tr( D U^T X^{1/2} M X^{1/2} U ) = tr(DΓ) ≤ tr_+( X^{1/2} M X^{1/2} ).

Towards that end, we need the following result (see, e.g., [5, Theorem 1.2.1]):

Theorem 3 (Fan's Inequality)  Let M_1, M_2 ∈ S^n. Consider the function λ : S^n → R^n defined by λ(M) = (λ_1(M), ..., λ_n(M)), where λ_1(M) ≥ λ_2(M) ≥ ... ≥ λ_n(M) are the eigenvalues of M. Then, we have

    tr( M_1 M_2 ) ≤ λ(M_1)^T λ(M_2).

Armed with Theorem 3, we proceed as follows. Let Γ̄ be the diagonal matrix whose i-th diagonal entry is max{Γ_ii, 0}, where i = 1, ..., n. Then, we have

    tr(PM) = tr(DΓ) ≤ λ(D)^T λ(Γ) ≤ tr(Γ̄) = tr_+( X^{1/2} M X^{1/2} ),

as desired. Now, since the second equality in (28) follows from the CLP Strong Duality Theorem, the proof of Theorem 2 is completed.

Upon applying Theorem 2 and dropping the problematic rank constraint rank(U) = 1 in (27), we obtain the following convex relaxation:

    v̄(ρ) = maximize  Σ_{i=1}^n tr_+( U^{1/2} σ_i σ_i^T U^{1/2} - ρ U )
            subject to  tr(U) = 1, U ⪰ 0.    (29)

Upon letting M_i(ρ) = σ_i σ_i^T - ρI for i = 1, ..., n and using (28), we see that (29) is equivalent to

    v̄(ρ) = maximize  Σ_{i=1}^n tr( P_i M_i(ρ) )
            subject to  tr(U) = 1,
                        U ⪰ P_i ⪰ 0  for i = 1, ..., n,
                        U ⪰ 0,

which is an SDP. We remark that since (29) is a relaxation of (27), we have v̄(ρ) ≥ v(ρ). Recently, there has been some interest in analyzing the quality of various convex relaxations of the sparse principal component analysis problem. For an analysis of the above SDP relaxation, we refer the reader to [10].
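The concavity asserted by Theorem 2 can be probed numerically: for random PSD matrices, the midpoint value of X ↦ tr_+(X^{1/2} M X^{1/2}) never falls below the average of the endpoint values. A sketch (illustrative check, not a proof):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 4

def tr_plus(M):
    # tr_+(M): sum of the positive eigenvalues of a symmetric matrix M
    return float(np.clip(np.linalg.eigvalsh(M), 0.0, None).sum())

def sqrt_psd(X):
    # symmetric PSD square root of X via its eigendecomposition
    lam, V = np.linalg.eigh(X)
    return V @ np.diag(np.sqrt(np.clip(lam, 0.0, None))) @ V.T

def f(X, M):
    # X -> tr_+(X^{1/2} M X^{1/2}), the function shown concave in Theorem 2
    h = sqrt_psd(X)
    return tr_plus(h @ M @ h)

M = rng.standard_normal((n, n))
M = (M + M.T) / 2.0
margins = []
for _ in range(50):
    A1 = rng.standard_normal((n, n)); X1 = A1 @ A1.T
    A2 = rng.standard_normal((n, n)); X2 = A2 @ A2.T
    margins.append(f((X1 + X2) / 2.0, M) - 0.5 * f(X1, M) - 0.5 * f(X2, M))
worst = min(margins)   # midpoint concavity: every margin is >= 0
```

Note that concavity is far from obvious from the formula itself; it is the variational representation (28), a pointwise minimum of linear functions of X, that makes it transparent.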
References

[1] F. Alizadeh and D. Goldfarb. Second-Order Cone Programming. Mathematical Programming, Series B, 95(1):3-51, 2003.

[2] A. Ben-Tal, L. El Ghaoui, and A. Nemirovski. Robust Optimization. Princeton Series in Applied Mathematics. Princeton University Press, Princeton, NJ, 2009.

[3] A. Ben-Tal and A. Nemirovski. Robust Convex Optimization. Mathematics of Operations Research, 23(4):769-805, 1998.

[4] A. Ben-Tal and A. Nemirovski. Selected Topics in Robust Convex Optimization. Mathematical Programming, Series B, 112(1):125-158, 2008.

[5] J. M. Borwein and A. S. Lewis. Convex Analysis and Nonlinear Optimization: Theory and Examples, volume 3 of CMS Books in Mathematics. Springer Science+Business Media, Inc., New York, second edition, 2006.

[6] W. Chen, M. Sim, J. Sun, and C.-P. Teo. From CVaR to Uncertainty Set: Implications in Joint Chance-Constrained Optimization. Operations Research, 58(2):470-485, 2010.

[7] X. Chen, M. Sim, and P. Sun. A Robust Optimization Perspective on Stochastic Programming. Operations Research, 55(6):1058-1071, 2007.

[8] S.-S. Cheung, A. M.-C. So, and K. Wang. Linear Matrix Inequalities with Stochastically Dependent Perturbations and Applications to Chance-Constrained Semidefinite Optimization. SIAM Journal on Optimization, 22(4), 2012.

[9] A. d'Aspremont, F. Bach, and L. El Ghaoui. Optimal Solutions for Sparse Principal Component Analysis. Journal of Machine Learning Research, 9:1269-1294, 2008.

[10] A. d'Aspremont, F. Bach, and L. El Ghaoui. Approximation Bounds for Sparse Principal Component Analysis. Preprint, 2012.

[11] L. El Ghaoui, M. Oks, and F. Oustry. Worst-Case Value-at-Risk and Robust Portfolio Optimization: A Conic Programming Approach. Operations Research, 51(4):543-556, 2003.

[12] M. R. Garey and D. S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman and Company, New York, 1979.

[13] M. X. Goemans and D. P. Williamson. Improved Approximation Algorithms for Maximum Cut and Satisfiability Problems Using Semidefinite Programming.
Journal of the ACM, 42(6):1115-1145, 1995.

[14] W. Hoeffding. Probability Inequalities for Sums of Bounded Random Variables. Journal of the American Statistical Association, 58(301):13-30, 1963.

[15] R. A. Horn and C. R. Johnson. Matrix Analysis. Cambridge University Press, Cambridge, 1985.
[16] W. W.-L. Li, Y. J. Zhang, A. M.-C. So, and M. Z. Win. Slow Adaptive OFDMA Systems through Chance Constrained Programming. IEEE Transactions on Signal Processing, 58(7), 2010.

[17] M. S. Lobo, L. Vandenberghe, S. Boyd, and H. Lebret. Applications of Second-Order Cone Programming. Linear Algebra and Its Applications, 284:193-228, 1998.

[18] Z.-Q. Luo, W.-K. Ma, A. M.-C. So, Y. Ye, and S. Zhang. Semidefinite Relaxation of Quadratic Optimization Problems. IEEE Signal Processing Magazine, 27(3):20-34, 2010.

[19] A. Shapiro, D. Dentcheva, and A. Ruszczyński. Lectures on Stochastic Programming: Modeling and Theory, volume 9 of MPS-SIAM Series on Optimization. Society for Industrial and Applied Mathematics, Philadelphia, Pennsylvania, 2009.

[20] M. B. Shenouda and T. N. Davidson. Outage-Based Designs for Multi-User Transceivers. In Proceedings of the 2009 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2009), 2009.

[21] A. M.-C. So. Moment Inequalities for Sums of Random Matrices and Their Applications in Optimization. Mathematical Programming, Series A, 130(1):125-151, 2011.

[22] A. M.-C. So. Lecture Notes on Chance-Constrained Optimization. Available at se.cuhk.edu.hk/~manchoso/papers/ccopt.pdf, 2013.

[23] M. Stuart. A Geometric Approach to Principal Component Analysis. The American Statistician, 36(4), 1982.

[24] K.-Y. Wang, A. M.-C. So, T.-H. Chang, W.-K. Ma, and C.-Y. Chi. Outage Constrained Robust Transmit Optimization for Multiuser MISO Downlinks: Tractable Approximations by Conic Optimization. IEEE Transactions on Signal Processing, 62(21):5690-5705, 2014.

[25] Y. J. Zhang and A. M.-C. So. Optimal Spectrum Sharing in MIMO Cognitive Radio Networks via Semidefinite Programming. IEEE Journal on Selected Areas in Communications, 29(2):362-373, 2011.
More informationLecture 8: The Goemans-Williamson MAXCUT algorithm
IU Summer School Lecture 8: The Goemans-Williamson MAXCUT algorithm Lecturer: Igor Gorodezky The Goemans-Williamson algorithm is an approximation algorithm for MAX-CUT based on semidefinite programming.
More informationStrong Duality in Robust Semi-Definite Linear Programming under Data Uncertainty
Strong Duality in Robust Semi-Definite Linear Programming under Data Uncertainty V. Jeyakumar and G. Y. Li March 1, 2012 Abstract This paper develops the deterministic approach to duality for semi-definite
More informationORF 523 Lecture 9 Spring 2016, Princeton University Instructor: A.A. Ahmadi Scribe: G. Hall Thursday, March 10, 2016
ORF 523 Lecture 9 Spring 2016, Princeton University Instructor: A.A. Ahmadi Scribe: G. Hall Thursday, March 10, 2016 When in doubt on the accuracy of these notes, please cross check with the instructor
More informationSDP Relaxations for MAXCUT
SDP Relaxations for MAXCUT from Random Hyperplanes to Sum-of-Squares Certificates CATS @ UMD March 3, 2017 Ahmed Abdelkader MAXCUT SDP SOS March 3, 2017 1 / 27 Overview 1 MAXCUT, Hardness and UGC 2 LP
More informationapproximation algorithms I
SUM-OF-SQUARES method and approximation algorithms I David Steurer Cornell Cargese Workshop, 201 meta-task encoded as low-degree polynomial in R x example: f(x) = i,j n w ij x i x j 2 given: functions
More informationRobust portfolio selection under norm uncertainty
Wang and Cheng Journal of Inequalities and Applications (2016) 2016:164 DOI 10.1186/s13660-016-1102-4 R E S E A R C H Open Access Robust portfolio selection under norm uncertainty Lei Wang 1 and Xi Cheng
More informationCS295: Convex Optimization. Xiaohui Xie Department of Computer Science University of California, Irvine
CS295: Convex Optimization Xiaohui Xie Department of Computer Science University of California, Irvine Course information Prerequisites: multivariate calculus and linear algebra Textbook: Convex Optimization
More information1 Introduction Semidenite programming (SDP) has been an active research area following the seminal work of Nesterov and Nemirovski [9] see also Alizad
Quadratic Maximization and Semidenite Relaxation Shuzhong Zhang Econometric Institute Erasmus University P.O. Box 1738 3000 DR Rotterdam The Netherlands email: zhang@few.eur.nl fax: +31-10-408916 August,
More informationTrust Region Problems with Linear Inequality Constraints: Exact SDP Relaxation, Global Optimality and Robust Optimization
Trust Region Problems with Linear Inequality Constraints: Exact SDP Relaxation, Global Optimality and Robust Optimization V. Jeyakumar and G. Y. Li Revised Version: September 11, 2013 Abstract The trust-region
More informationConvex relaxations of chance constrained optimization problems
Convex relaxations of chance constrained optimization problems Shabbir Ahmed School of Industrial & Systems Engineering, Georgia Institute of Technology, 765 Ferst Drive, Atlanta, GA 30332. May 12, 2011
More informationLecture Note 5: Semidefinite Programming for Stability Analysis
ECE7850: Hybrid Systems:Theory and Applications Lecture Note 5: Semidefinite Programming for Stability Analysis Wei Zhang Assistant Professor Department of Electrical and Computer Engineering Ohio State
More informationOn the Power of Robust Solutions in Two-Stage Stochastic and Adaptive Optimization Problems
MATHEMATICS OF OPERATIONS RESEARCH Vol. 35, No., May 010, pp. 84 305 issn 0364-765X eissn 156-5471 10 350 084 informs doi 10.187/moor.1090.0440 010 INFORMS On the Power of Robust Solutions in Two-Stage
More informationCOM Optimization for Communications 8. Semidefinite Programming
COM524500 Optimization for Communications 8. Semidefinite Programming Institute Comm. Eng. & Dept. Elect. Eng., National Tsing Hua University 1 Semidefinite Programming () Inequality form: min c T x s.t.
More informationSemidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 5
Semidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 5 Instructor: Farid Alizadeh Scribe: Anton Riabov 10/08/2001 1 Overview We continue studying the maximum eigenvalue SDP, and generalize
More informationConvex Optimization of Graph Laplacian Eigenvalues
Convex Optimization of Graph Laplacian Eigenvalues Stephen Boyd Abstract. We consider the problem of choosing the edge weights of an undirected graph so as to maximize or minimize some function of the
More informationIntroduction to Semidefinite Programming I: Basic properties a
Introduction to Semidefinite Programming I: Basic properties and variations on the Goemans-Williamson approximation algorithm for max-cut MFO seminar on Semidefinite Programming May 30, 2010 Semidefinite
More informationBCOL RESEARCH REPORT 07.04
BCOL RESEARCH REPORT 07.04 Industrial Engineering & Operations Research University of California, Berkeley, CA 94720-1777 LIFTING FOR CONIC MIXED-INTEGER PROGRAMMING ALPER ATAMTÜRK AND VISHNU NARAYANAN
More informationLecture 20: Goemans-Williamson MAXCUT Approximation Algorithm. 2 Goemans-Williamson Approximation Algorithm for MAXCUT
CS 80: Introduction to Complexity Theory 0/03/03 Lecture 20: Goemans-Williamson MAXCUT Approximation Algorithm Instructor: Jin-Yi Cai Scribe: Christopher Hudzik, Sarah Knoop Overview First, we outline
More informationLecture 20: November 1st
10-725: Optimization Fall 2012 Lecture 20: November 1st Lecturer: Geoff Gordon Scribes: Xiaolong Shen, Alex Beutel Note: LaTeX template courtesy of UC Berkeley EECS dept. Disclaimer: These notes have not
More informationConvex Optimization M2
Convex Optimization M2 Lecture 8 A. d Aspremont. Convex Optimization M2. 1/57 Applications A. d Aspremont. Convex Optimization M2. 2/57 Outline Geometrical problems Approximation problems Combinatorial
More informationMathematical Optimization Models and Applications
Mathematical Optimization Models and Applications Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/ yyye Chapters 1, 2.1-2,
More informationLecture Semidefinite Programming and Graph Partitioning
Approximation Algorithms and Hardness of Approximation April 16, 013 Lecture 14 Lecturer: Alantha Newman Scribes: Marwa El Halabi 1 Semidefinite Programming and Graph Partitioning In previous lectures,
More informationAcyclic Semidefinite Approximations of Quadratically Constrained Quadratic Programs
Acyclic Semidefinite Approximations of Quadratically Constrained Quadratic Programs Raphael Louca & Eilyan Bitar School of Electrical and Computer Engineering American Control Conference (ACC) Chicago,
More informationLecture: Convex Optimization Problems
1/36 Lecture: Convex Optimization Problems http://bicmr.pku.edu.cn/~wenzw/opt-2015-fall.html Acknowledgement: this slides is based on Prof. Lieven Vandenberghe s lecture notes Introduction 2/36 optimization
More informationOn the Power of Robust Solutions in Two-Stage Stochastic and Adaptive Optimization Problems
MATHEMATICS OF OPERATIONS RESEARCH Vol. xx, No. x, Xxxxxxx 00x, pp. xxx xxx ISSN 0364-765X EISSN 156-5471 0x xx0x 0xxx informs DOI 10.187/moor.xxxx.xxxx c 00x INFORMS On the Power of Robust Solutions in
More information4. Convex optimization problems
Convex Optimization Boyd & Vandenberghe 4. Convex optimization problems optimization problem in standard form convex optimization problems quasiconvex optimization linear optimization quadratic optimization
More informationSelected Examples of CONIC DUALITY AT WORK Robust Linear Optimization Synthesis of Linear Controllers Matrix Cube Theorem A.
. Selected Examples of CONIC DUALITY AT WORK Robust Linear Optimization Synthesis of Linear Controllers Matrix Cube Theorem A. Nemirovski Arkadi.Nemirovski@isye.gatech.edu Linear Optimization Problem,
More informationHandout 4: Some Applications of Linear Programming
ENGG 5501: Foundations of Optimization 2018 19 First Term Handout 4: Some Applications of Linear Programming Instructor: Anthony Man Cho So October 15, 2018 1 Introduction The theory of LP has found many
More informationA Unified Theorem on SDP Rank Reduction. yyye
SDP Rank Reduction Yinyu Ye, EURO XXII 1 A Unified Theorem on SDP Rank Reduction Yinyu Ye Department of Management Science and Engineering and Institute of Computational and Mathematical Engineering Stanford
More informationThe Trust Region Subproblem with Non-Intersecting Linear Constraints
The Trust Region Subproblem with Non-Intersecting Linear Constraints Samuel Burer Boshi Yang February 21, 2013 Abstract This paper studies an extended trust region subproblem (etrs in which the trust region
More informationLecture 10. Semidefinite Programs and the Max-Cut Problem Max Cut
Lecture 10 Semidefinite Programs and the Max-Cut Problem In this class we will finally introduce the content from the second half of the course title, Semidefinite Programs We will first motivate the discussion
More informationLecture 9: Low Rank Approximation
CSE 521: Design and Analysis of Algorithms I Fall 2018 Lecture 9: Low Rank Approximation Lecturer: Shayan Oveis Gharan February 8th Scribe: Jun Qi Disclaimer: These notes have not been subjected to the
More informationSignal Recovery from Permuted Observations
EE381V Course Project Signal Recovery from Permuted Observations 1 Problem Shanshan Wu (sw33323) May 8th, 2015 We start with the following problem: let s R n be an unknown n-dimensional real-valued signal,
More informationEE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 17
EE/ACM 150 - Applications of Convex Optimization in Signal Processing and Communications Lecture 17 Andre Tkacenko Signal Processing Research Group Jet Propulsion Laboratory May 29, 2012 Andre Tkacenko
More informationUsing Schur Complement Theorem to prove convexity of some SOC-functions
Journal of Nonlinear and Convex Analysis, vol. 13, no. 3, pp. 41-431, 01 Using Schur Complement Theorem to prove convexity of some SOC-functions Jein-Shan Chen 1 Department of Mathematics National Taiwan
More informationLargest dual ellipsoids inscribed in dual cones
Largest dual ellipsoids inscribed in dual cones M. J. Todd June 23, 2005 Abstract Suppose x and s lie in the interiors of a cone K and its dual K respectively. We seek dual ellipsoidal norms such that
More information1 Matrix notation and preliminaries from spectral graph theory
Graph clustering (or community detection or graph partitioning) is one of the most studied problems in network analysis. One reason for this is that there are a variety of ways to define a cluster or community.
More informationEE 227A: Convex Optimization and Applications April 24, 2008
EE 227A: Convex Optimization and Applications April 24, 2008 Lecture 24: Robust Optimization: Chance Constraints Lecturer: Laurent El Ghaoui Reading assignment: Chapter 2 of the book on Robust Optimization
More informationA direct formulation for sparse PCA using semidefinite programming
A direct formulation for sparse PCA using semidefinite programming A. d Aspremont, L. El Ghaoui, M. Jordan, G. Lanckriet ORFE, Princeton University & EECS, U.C. Berkeley Available online at www.princeton.edu/~aspremon
More informationSparse Covariance Selection using Semidefinite Programming
Sparse Covariance Selection using Semidefinite Programming A. d Aspremont ORFE, Princeton University Joint work with O. Banerjee, L. El Ghaoui & G. Natsoulis, U.C. Berkeley & Iconix Pharmaceuticals Support
More informationLecture: Introduction to LP, SDP and SOCP
Lecture: Introduction to LP, SDP and SOCP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2015.html wenzw@pku.edu.cn Acknowledgement:
More informationNetwork Localization via Schatten Quasi-Norm Minimization
Network Localization via Schatten Quasi-Norm Minimization Anthony Man-Cho So Department of Systems Engineering & Engineering Management The Chinese University of Hong Kong (Joint Work with Senshan Ji Kam-Fung
More information4. Convex optimization problems
Convex Optimization Boyd & Vandenberghe 4. Convex optimization problems optimization problem in standard form convex optimization problems quasiconvex optimization linear optimization quadratic optimization
More informationAPPROXIMATE SOLUTION OF A SYSTEM OF LINEAR EQUATIONS WITH RANDOM PERTURBATIONS
APPROXIMATE SOLUTION OF A SYSTEM OF LINEAR EQUATIONS WITH RANDOM PERTURBATIONS P. Date paresh.date@brunel.ac.uk Center for Analysis of Risk and Optimisation Modelling Applications, Department of Mathematical
More informationInterior Point Methods: Second-Order Cone Programming and Semidefinite Programming
School of Mathematics T H E U N I V E R S I T Y O H F E D I N B U R G Interior Point Methods: Second-Order Cone Programming and Semidefinite Programming Jacek Gondzio Email: J.Gondzio@ed.ac.uk URL: http://www.maths.ed.ac.uk/~gondzio
More informationA Continuation Approach Using NCP Function for Solving Max-Cut Problem
A Continuation Approach Using NCP Function for Solving Max-Cut Problem Xu Fengmin Xu Chengxian Ren Jiuquan Abstract A continuous approach using NCP function for approximating the solution of the max-cut
More informationBounds on the radius of nonsingularity
Towards bounds on the radius of nonsingularity, Milan Hladík Department of Applied Mathematics, Charles University, Prague and Institute of Computer Science of the Academy of Czech Republic SCAN 2014,
More informationComputing the Norm A,1 is NP-Hard
Computing the Norm A,1 is NP-Hard Dedicated to Professor Svatopluk Poljak, in memoriam Jiří Rohn Abstract It is proved that computing the subordinate matrix norm A,1 is NP-hard. Even more, existence of
More informationWe describe the generalization of Hazan s algorithm for symmetric programming
ON HAZAN S ALGORITHM FOR SYMMETRIC PROGRAMMING PROBLEMS L. FAYBUSOVICH Abstract. problems We describe the generalization of Hazan s algorithm for symmetric programming Key words. Symmetric programming,
More informationConvex Optimization M2
Convex Optimization M2 Lecture 3 A. d Aspremont. Convex Optimization M2. 1/49 Duality A. d Aspremont. Convex Optimization M2. 2/49 DMs DM par email: dm.daspremont@gmail.com A. d Aspremont. Convex Optimization
More informationOn the Adaptivity Gap in Two-Stage Robust Linear Optimization under Uncertain Constraints
On the Adaptivity Gap in Two-Stage Robust Linear Optimization under Uncertain Constraints Pranjal Awasthi Vineet Goyal Brian Y. Lu July 15, 2015 Abstract In this paper, we study the performance of static
More informationSemidefinite Programming
Semidefinite Programming Notes by Bernd Sturmfels for the lecture on June 26, 208, in the IMPRS Ringvorlesung Introduction to Nonlinear Algebra The transition from linear algebra to nonlinear algebra has
More informationConvex optimization problems. Optimization problem in standard form
Convex optimization problems optimization problem in standard form convex optimization problems linear optimization quadratic optimization geometric programming quasiconvex optimization generalized inequality
More informationThe Simplest Semidefinite Programs are Trivial
The Simplest Semidefinite Programs are Trivial Robert J. Vanderbei Bing Yang Program in Statistics & Operations Research Princeton University Princeton, NJ 08544 January 10, 1994 Technical Report SOR-93-12
More informationIntroduction to Convex Optimization
Introduction to Convex Optimization Daniel P. Palomar Hong Kong University of Science and Technology (HKUST) ELEC5470 - Convex Optimization Fall 2018-19, HKUST, Hong Kong Outline of Lecture Optimization
More informationLIGHT ROBUSTNESS. Matteo Fischetti, Michele Monaci. DEI, University of Padova. 1st ARRIVAL Annual Workshop and Review Meeting, Utrecht, April 19, 2007
LIGHT ROBUSTNESS Matteo Fischetti, Michele Monaci DEI, University of Padova 1st ARRIVAL Annual Workshop and Review Meeting, Utrecht, April 19, 2007 Work supported by the Future and Emerging Technologies
More informationDistributionally Robust Optimization with ROME (part 1)
Distributionally Robust Optimization with ROME (part 1) Joel Goh Melvyn Sim Department of Decision Sciences NUS Business School, Singapore 18 Jun 2009 NUS Business School Guest Lecture J. Goh, M. Sim (NUS)
More informationAssignment 1: From the Definition of Convexity to Helley Theorem
Assignment 1: From the Definition of Convexity to Helley Theorem Exercise 1 Mark in the following list the sets which are convex: 1. {x R 2 : x 1 + i 2 x 2 1, i = 1,..., 10} 2. {x R 2 : x 2 1 + 2ix 1x
More informationCanonical Problem Forms. Ryan Tibshirani Convex Optimization
Canonical Problem Forms Ryan Tibshirani Convex Optimization 10-725 Last time: optimization basics Optimization terology (e.g., criterion, constraints, feasible points, solutions) Properties and first-order
More informationOn Optimal Frame Conditioners
On Optimal Frame Conditioners Chae A. Clark Department of Mathematics University of Maryland, College Park Email: cclark18@math.umd.edu Kasso A. Okoudjou Department of Mathematics University of Maryland,
More informationRank-one Generated Spectral Cones Defined by Two Homogeneous Linear Matrix Inequalities
Rank-one Generated Spectral Cones Defined by Two Homogeneous Linear Matrix Inequalities C.J. Argue Joint work with Fatma Kılınç-Karzan October 22, 2017 INFORMS Annual Meeting Houston, Texas C. Argue, F.
More informationOperations Research Letters
Operations Research Letters 37 (2009) 1 6 Contents lists available at ScienceDirect Operations Research Letters journal homepage: www.elsevier.com/locate/orl Duality in robust optimization: Primal worst
More informationCSC Linear Programming and Combinatorial Optimization Lecture 12: The Lift and Project Method
CSC2411 - Linear Programming and Combinatorial Optimization Lecture 12: The Lift and Project Method Notes taken by Stefan Mathe April 28, 2007 Summary: Throughout the course, we have seen the importance
More informationMultiuser Downlink Beamforming: Rank-Constrained SDP
Multiuser Downlink Beamforming: Rank-Constrained SDP Daniel P. Palomar Hong Kong University of Science and Technology (HKUST) ELEC5470 - Convex Optimization Fall 2018-19, HKUST, Hong Kong Outline of Lecture
More informationEE Applications of Convex Optimization in Signal Processing and Communications Dr. Andre Tkacenko, JPL Third Term
EE 150 - Applications of Convex Optimization in Signal Processing and Communications Dr. Andre Tkacenko JPL Third Term 2011-2012 Due on Thursday May 3 in class. Homework Set #4 1. (10 points) (Adapted
More informationA Geometric Characterization of the Power of Finite Adaptability in Multi-stage Stochastic and Adaptive Optimization
A Geometric Characterization of the Power of Finite Adaptability in Multi-stage Stochastic and Adaptive Optimization Dimitris Bertsimas Sloan School of Management and Operations Research Center, Massachusetts
More informationA Hierarchy of Polyhedral Approximations of Robust Semidefinite Programs
A Hierarchy of Polyhedral Approximations of Robust Semidefinite Programs Raphael Louca Eilyan Bitar Abstract Robust semidefinite programs are NP-hard in general In contrast, robust linear programs admit
More informationDistributionally Robust Convex Optimization
Submitted to Operations Research manuscript OPRE-2013-02-060 Authors are encouraged to submit new papers to INFORMS journals by means of a style file template, which includes the journal title. However,
More informationOptimization Problems with Probabilistic Constraints
Optimization Problems with Probabilistic Constraints R. Henrion Weierstrass Institute Berlin 10 th International Conference on Stochastic Programming University of Arizona, Tucson Recommended Reading A.
More informationSolving Corrupted Quadratic Equations, Provably
Solving Corrupted Quadratic Equations, Provably Yuejie Chi London Workshop on Sparse Signal Processing September 206 Acknowledgement Joint work with Yuanxin Li (OSU), Huishuai Zhuang (Syracuse) and Yingbin
More informationRobust Optimization for Risk Control in Enterprise-wide Optimization
Robust Optimization for Risk Control in Enterprise-wide Optimization Juan Pablo Vielma Department of Industrial Engineering University of Pittsburgh EWO Seminar, 011 Pittsburgh, PA Uncertainty in Optimization
More informationOn deterministic reformulations of distributionally robust joint chance constrained optimization problems
On deterministic reformulations of distributionally robust joint chance constrained optimization problems Weijun Xie and Shabbir Ahmed School of Industrial & Systems Engineering Georgia Institute of Technology,
More informationRobust Combinatorial Optimization under Budgeted-Ellipsoidal Uncertainty
EURO Journal on Computational Optimization manuscript No. (will be inserted by the editor) Robust Combinatorial Optimization under Budgeted-Ellipsoidal Uncertainty Jannis Kurtz Received: date / Accepted:
More informationLecture 1: Introduction
EE 227A: Convex Optimization and Applications January 17 Lecture 1: Introduction Lecturer: Anh Pham Reading assignment: Chapter 1 of BV 1. Course outline and organization Course web page: http://www.eecs.berkeley.edu/~elghaoui/teaching/ee227a/
More informationTractable Approximations to Robust Conic Optimization Problems
Math. Program., Ser. B 107, 5 36 2006) Digital Object Identifier DOI) 10.1007/s10107-005-0677-1 Dimitris Bertsimas Melvyn Sim Tractable Approximations to Robust Conic Optimization Problems Received: April
More information