EE5138R: Problem Set 5 Assigned: 16/02/15 Due: 06/03/15

1. BV Problem 4.1. The feasible set is the convex hull of $(0, \infty)$, $(0, 1)$, $(2/5, 1/5)$, $(1, 0)$ and $(\infty, 0)$.

(a) $x^\star = (2/5, 1/5)$.

(b) Unbounded below.

(c) $X_{\mathrm{opt}} = \{(0, x_2) : x_2 \ge 1\}$.

(d) $x^\star = (1/3, 1/3)$.

(e) $x^\star = (1/2, 1/6)$. This point is optimal because it satisfies $2x_1 + x_2 = 7/6 > 1$ and $x_1 + 3x_2 = 1$, and the gradient $\nabla f_0(x^\star) = (1, 3)$ is perpendicular to the line $x_1 + 3x_2 = 1$.

2. (Optional) Hello World in CVX. Use CVX to verify the optimal values you obtained (analytically) for Exercise 4.1 in Convex Optimization. I strongly encourage you to install CVX on Matlab and try this out so that you are familiar with using CVX in your own research.
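For instance, part (a) can be checked with a few lines of CVX. The following is a minimal sketch (the data are just the constraints of Exercise 4.1); cvx_optval should come out as $3/5$, attained at $x^\star = (2/5, 1/5)$:

    % BV Exercise 4.1(a): minimize x1 + x2 over the feasible set
    cvx_begin
        variables x1 x2
        minimize( x1 + x2 )
        subject to
            2*x1 + x2 >= 1;
            x1 + 3*x2 >= 1;
            x1 >= 0;
            x2 >= 0;
    cvx_end

The remaining parts only require changing the objective line, e.g. minimize( max(x1, x2) ) for part (d); for part (b), CVX should report the status Unbounded.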

3. BV Problem 4.7.

(a) The domain of the objective is convex because $f_0$ is convex. The sublevel sets are convex because $f_0(x)/(c^Tx + d) \le \alpha$ if and only if $c^Tx + d > 0$ and $f_0(x) \le \alpha (c^Tx + d)$.

(b) Suppose $x$ is feasible in the original problem. Define $t = 1/(c^Tx + d)$ (a positive number) and $y = x/(c^Tx + d)$. Then $t > 0$, and it is easily verified that $(y, t)$ is feasible in the transformed problem, with objective value $g_0(y, t) = f_0(x)/(c^Tx + d)$.

Conversely, suppose $(y, t)$ is feasible for the transformed problem. We must have $t > 0$, by definition of the domain of the perspective function. Define $x = y/t$. We have $x \in \mathbf{dom}\, f_i$ for $i = 0, 1, \ldots, m$ (again, by definition of the perspective). $x$ is feasible in the original problem, because

    $f_i(x) = g_i(y, t)/t \le 0$, $i = 1, \ldots, m$,    $Ax = A(y/t) = b$.

From the last equality, $c^Tx + d = (c^Ty + dt)/t = 1/t$, and hence $t = 1/(c^Tx + d)$ and

    $\dfrac{f_0(x)}{c^Tx + d} = t f_0(x) = g_0(y, t)$.

Therefore $x$ is feasible in the original problem, with objective value $g_0(y, t)$. In conclusion, from any feasible point of one problem we can derive a feasible point of the other problem, with the same objective value.

(c) The problem is clearly quasiconvex. The convex formulation, as above, is

    minimize    $g_0(y, t)$
    subject to  $g_i(y, t) \le 0$, $i = 1, \ldots, m$
                $Ay = bt$
                $\tilde{h}(y, t) \ge 1$,

where $g_i$ is the perspective of $f_i$ and $\tilde{h}$ is the perspective of $h$. For the example, we have the equivalent problem

    minimize    $\mathbf{tr}(tF_0 + y_1 F_1 + \cdots + y_n F_n)$
    subject to  $\det(tF_0 + y_1 F_1 + \cdots + y_n F_n)^{1/m} \ge 1$

with domain $\{(y, t) : t > 0,\; tF_0 + y_1 F_1 + \cdots + y_n F_n \succ 0\}$.
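As a concrete instance of part (b), the sketch below solves a small linear-fractional program, minimize $(e^Tx + f)/(c^Tx + d)$ over a box, via the transformed LP in $(y, t)$. All data here are hypothetical, with $d$ chosen so that $c^Tx + d > 0$ on the feasible set; since every $f_i$ is affine, each perspective $g_i$ is again affine and the transformed problem is an LP:

    % Linear-fractional program via the transformation of BV 4.7(b).
    % Hypothetical data: minimize (e'*x + f)/(c'*x + d) s.t. |x_i| <= 1.
    n = 4;
    G = [eye(n); -eye(n)]; h = ones(2*n,1);   % box constraints G*x <= h
    e = randn(n,1); f = 2; c = randn(n,1);
    d = norm(c,1) + 1;                        % guarantees c'*x + d > 0 on the box
    cvx_begin
        variables y(n) t
        minimize( e'*y + f*t )
        subject to
            G*y <= h*t;
            c'*y + d*t == 1;
            t >= 0;
    cvx_end
    x = y/t;   % recover a solution of the original fractional problem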

4. BV Problem 4.8(a)-(c). To help you, the optimal value for (a) is

    $p^\star = \begin{cases} +\infty & b \notin \mathcal{R}(A) \\ \lambda^T b & c = A^T\lambda \text{ for some } \lambda \\ -\infty & \text{otherwise.} \end{cases}$

For part (b),

    $p^\star = \begin{cases} \lambda b & c = a\lambda \text{ for some } \lambda \le 0 \\ -\infty & \text{otherwise.} \end{cases}$

For part (c), $p^\star = l^T c^+ - u^T c^-$, where $c_i^+ = \max\{c_i, 0\}$ and similarly for $c_i^-$.

(a) There are three possibilities.

- The problem is infeasible ($b \notin \mathcal{R}(A)$). The optimal value is $+\infty$.

- The problem is feasible, and $c$ is orthogonal to the nullspace of $A$. We can decompose $c$ as

    $c = A^T\lambda + \hat{c}$, $A\hat{c} = 0$.

Here $\hat{c}$ is the component in the nullspace of $A$, and $A^T\lambda$ is the component orthogonal to the nullspace. If $\hat{c} = 0$, then on the feasible set the objective function reduces to a constant: $c^Tx = \lambda^T Ax + \hat{c}^T x = \lambda^T b$. The optimal value is $\lambda^T b$, and all feasible solutions are optimal.

- The problem is feasible, and $c$ is not in the range of $A^T$ ($\hat{c} \ne 0$). The problem is unbounded below ($p^\star = -\infty$). To verify this, note that $x = x_0 - t\hat{c}$ is feasible for all $t$; as $t$ goes to infinity, the objective value decreases unboundedly.

So we obtain the first part.

(b) This problem is always feasible. The vector $c$ can be decomposed into a component parallel to $a$ and a component orthogonal to $a$:

    $c = a\lambda + \hat{c}$, with $a^T\hat{c} = 0$.

If $\lambda > 0$, the problem is unbounded below. Choose $x = -ta$ and let $t$ go to infinity: $c^Tx = -t c^Ta = -t\lambda \|a\|_2^2 \to -\infty$ and $a^Tx - b = -t\|a\|_2^2 - b \le 0$ for large $t$, so $x$ is feasible for large $t$. Intuitively, by going very far in the direction $-a$, we find feasible points with arbitrarily negative objective values.

If $\hat{c} \ne 0$, the problem is unbounded below. Choose $x = (b/\|a\|_2^2)a - t\hat{c}$ and let $t \to \infty$.

If $c = a\lambda$ for some $\lambda \le 0$, the optimal value is $c^Ta\, b/\|a\|_2^2 = \lambda b$.

So we obtain the second part.

(c) The objective and the constraints are separable: the objective is a sum of terms $c_i x_i$, each depending on one variable only, and each constraint depends on only one variable. We can therefore solve the problem by minimizing over each component of $x$ independently. The optimal $x_i^\star$ minimizes $c_i x_i$ subject to the constraint $l_i \le x_i \le u_i$. If $c_i > 0$, then $x_i^\star = l_i$; if $c_i < 0$, then $x_i^\star = u_i$; if $c_i = 0$, then any $x_i$ in the interval $[l_i, u_i]$ is optimal. Therefore, the optimal value of the problem is

    $p^\star = l^T c^+ - u^T c^-$,

where $c_i^+ = \max\{c_i, 0\}$ and $c_i^- = \max\{-c_i, 0\}$.
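A quick numerical check of the part (c) formula against CVX, with arbitrary made-up data:

    % Check p* = l'*c+ - u'*c- for the box-constrained LP of BV 4.8(c)
    n = 5;
    c = randn(n,1); l = -rand(n,1); u = rand(n,1);   % l <= 0 <= u
    p_formula = l'*max(c,0) - u'*max(-c,0);
    cvx_begin
        variable x(n)
        minimize( c'*x )
        subject to
            l <= x;
            x <= u;
    cvx_end
    % cvx_optval and p_formula should agree to solver tolerance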

5. BV Problem 4.9. Solution: make the change of variables $y = Ax$. The problem is equivalent to

    minimize_y  $c^T A^{-1} y$
    subject to  $y \preceq b$.

If $A^{-T}c \preceq 0$, the optimal solution is $y^\star = b$, with $p^\star = c^T A^{-1} b$. Otherwise, the LP is unbounded below.

6. BV Problem 4.11.

(a) Equivalent to the LP

    minimize_{x,t}  $t$
    subject to      $Ax - b \preceq t\mathbf{1}$, $Ax - b \succeq -t\mathbf{1}$.

(b) Equivalent to the LP

    minimize_{x,s}  $\mathbf{1}^T s$
    subject to      $Ax - b \preceq s$, $Ax - b \succeq -s$.

Assume $x$ is fixed in this problem and we optimize only over $s$. The constraints say that $-s_k \le a_k^T x - b_k \le s_k$. The objective function of the LP is separable, so we achieve the optimum over $s$ by choosing $s_k = |a_k^T x - b_k|$, and obtain the optimal value $p^\star(x) = \|Ax - b\|_1$. Therefore optimizing over $x$ and $s$ simultaneously is equivalent to the original problem.

(c) Equivalent to the LP

    minimize_{x,y}  $\mathbf{1}^T y$
    subject to      $-y \preceq Ax - b \preceq y$, $-\mathbf{1} \preceq x \preceq \mathbf{1}$.

(d) Equivalent to the LP

    minimize_{x,y}  $\mathbf{1}^T y$
    subject to      $-y \preceq x \preceq y$, $-\mathbf{1} \preceq Ax - b \preceq \mathbf{1}$.

Another good solution is to write $x = x^+ - x^-$ and to express the problem as

    minimize_{x^+,x^-}  $\mathbf{1}^T x^+ + \mathbf{1}^T x^-$
    subject to          $-\mathbf{1} \preceq A(x^+ - x^-) - b \preceq \mathbf{1}$, $x^+ \succeq 0$, $x^- \succeq 0$.

(e) Equivalent to the LP

    minimize_{x,y,t}  $\mathbf{1}^T y + t$
    subject to        $-y \preceq Ax - b \preceq y$, $-t\mathbf{1} \preceq x \preceq t\mathbf{1}$.
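To see the part (b) reformulation at work, the sketch below (random data, purely illustrative) solves the $\ell_1$-norm problem both directly and as the LP; the two optimal values should coincide:

    % BV 4.11(b): ||Ax - b||_1 directly and via the LP reformulation
    m = 20; n = 6; A = randn(m,n); b = randn(m,1);
    cvx_begin
        variable x(n)
        minimize( norm(A*x - b, 1) )
    cvx_end
    p_direct = cvx_optval;
    cvx_begin
        variables x(n) s(m)
        minimize( sum(s) )
        subject to
            A*x - b <= s;
            -s <= A*x - b;
    cvx_end
    p_lp = cvx_optval;   % should equal p_direct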

7. (Optional) BV Problem 4.12. Solution: this can be formulated as the LP

    minimize    $C = \sum_{i,j=1}^n c_{ij} x_{ij}$
    subject to  $b_i + \sum_{j=1}^n x_{ij} - \sum_{j=1}^n x_{ji} = 0$, $i = 1, \ldots, n$
                $l_{ij} \le x_{ij} \le u_{ij}$.

8. BV Problem 4.21(a). If $A \succ 0$, the solution is

    $x^\star = -\dfrac{1}{\sqrt{c^T A^{-1} c}} A^{-1} c$,    $p^\star = -\|A^{-1/2} c\|_2 = -\sqrt{c^T A^{-1} c}$.

This can be shown as follows. We make the change of variables $y = A^{1/2} x$ and write $\tilde{c} = A^{-1/2} c$. With this new variable the optimization problem becomes

    minimize_y  $\tilde{c}^T y$
    subject to  $y^T y \le 1$.

The answer is $y^\star = -\tilde{c}/\|\tilde{c}\|_2$.

In the general case $A \succeq 0$, we can make a change of variables based on the eigenvalue decomposition

    $A = Q\,\mathbf{diag}(\lambda)\,Q^T = \sum_{i=1}^n \lambda_i q_i q_i^T$.

We define $y = Q^T x$, $b = Q^T c$, and express the problem as

    minimize    $\sum_{i=1}^n b_i y_i$
    subject to  $\sum_{i=1}^n \lambda_i y_i^2 \le 1$.

If $\lambda_i > 0$ for all $i$, the problem reduces to the case we already discussed. Otherwise (with the eigenvalues ordered so that $\lambda_n$ is the smallest), we can distinguish several cases.

- $\lambda_n < 0$: the problem is unbounded below. By letting $y_n \to \pm\infty$, we can make any point feasible.

- $\lambda_n = 0$: if for some $i$, $b_i \ne 0$ and $\lambda_i = 0$, the problem is unbounded below.

- $\lambda_n = 0$, and $b_i = 0$ for all $i$ with $\lambda_i = 0$: in this case we can reduce the problem to a smaller one with all $\lambda_i > 0$.
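The closed form in Problem 8 is easy to verify numerically; here is a small sketch with made-up data ($A \succ 0$ by construction):

    % BV 4.21(a): compare the analytical solution with CVX
    n = 4;
    B = randn(n); A = B*B' + eye(n);   % A is positive definite
    c = randn(n,1);
    w = A\c;
    x_analytic = -w/sqrt(c'*w);        % x* = -A^{-1}c / sqrt(c'*A^{-1}*c)
    p_analytic = -sqrt(c'*w);          % p* = -||A^{-1/2}c||_2
    cvx_begin
        variable x(n)
        minimize( c'*x )
        subject to
            quad_form(x, A) <= 1;
    cvx_end
    % cvx_optval should match p_analytic, and x should match x_analytic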

9. (Optional) BV Problem 4.23. Solution: we can rewrite the $\ell_4$-norm approximation problem as

    minimize_{x,y,z}  $\sum_{i=1}^m z_i^2$
    subject to        $a_i^T x - b_i = y_i$, $y_i^2 \le z_i$, $i = 1, \ldots, m$.

This is exactly a QCQP. (At the optimum $z_i = y_i^2$, so the objective equals $\sum_{i=1}^m (a_i^T x - b_i)^4 = \|Ax - b\|_4^4$, and minimizing $\|Ax - b\|_4^4$ is equivalent to minimizing $\|Ax - b\|_4$.)

10. BV Problem 4.28.

(a) The objective function is a maximum of convex functions, hence convex. We can write the problem as

    minimize_{x,t}  $t$
    subject to      $\frac{1}{2}x^T P_i x + q^T x + r \le t$, $i = 1, \ldots, K$
                    $Ax \preceq b$,

which is a QCQP in the variables $x$ and $t$.

(b) For given $x$, the supremum of $x^T P x$ over $-\gamma I \preceq P \preceq \gamma I$ is given by

    $\sup_{-\gamma I \preceq P \preceq \gamma I} x^T P x = \gamma x^T x$.

Therefore we can express the robust QP as

    minimize    $\frac{1}{2}x^T(P_0 + \gamma I)x + q^T x + r$
    subject to  $Ax \preceq b$,

which is a QP.

(c) For given $x$, the quadratic objective function is

    $\frac{1}{2}x^T P_0 x + \sup_{\|u\|_2 \le 1} \frac{1}{2}\sum_{i=1}^K u_i (x^T P_i x) + q^T x + r = \frac{1}{2}x^T P_0 x + \frac{1}{2}\Big(\sum_{i=1}^K (x^T P_i x)^2\Big)^{1/2} + q^T x + r$.

This is a convex function of $x$: each of the functions $x^T P_i x$ is convex since $P_i \succeq 0$. The second term is a composition $h(g_1(x), \ldots, g_K(x))$ of $h(y) = \|y\|_2$ with $g_i(x) = x^T P_i x$. The functions $g_i$ are convex and nonnegative. The function $h$ is convex and, for $y \in \mathbf{R}^K_+$, nondecreasing in each of its arguments. Therefore the composition is convex.

The resulting problem can be expressed as

    minimize    $\frac{1}{2}x^T P_0 x + \frac{1}{2}\|y\|_2 + q^T x + r$
    subject to  $x^T P_i x \le y_i$, $i = 1, \ldots, K$
                $Ax \preceq b$,

which can be further reduced to the SOCP

    minimize    $\frac{1}{2}(u + t) + q^T x + r$
    subject to  $\left\| (P_0^{1/2} x,\; u - 1/4) \right\|_2 \le u + 1/4$
                $\left\| (P_i^{1/2} x,\; y_i - 1/4) \right\|_2 \le y_i + 1/4$, $i = 1, \ldots, K$
                $\|y\|_2 \le t$, $Ax \preceq b$.

The variables are $x$, $u$, $t$, and $y \in \mathbf{R}^K$. Note that if we square both sides of the first inequality, we obtain

    $x^T P_0 x + (u - 1/4)^2 \le (u + 1/4)^2$,

i.e., $x^T P_0 x \le u$. Similarly, the other constraints are equivalent to $x^T P_i x \le y_i$.
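Part (b) in particular is immediate to try in CVX: the worst-case problem is just a QP with the shifted Hessian $P_0 + \gamma I$. The data below are hypothetical ($P_0 \succeq 0$ is built from a random factor, and $b \succ 0$ so that $x = 0$ is strictly feasible):

    % BV 4.28(b): robust QP, P = P0 + Delta with -gamma*I <= Delta <= gamma*I
    n = 5; m = 8; gamma = 0.1;
    B = randn(n); P0 = B*B';             % P0 is positive semidefinite
    q = randn(n,1); r = 0;
    A = randn(m,n); b = rand(m,1) + 1;   % x = 0 strictly feasible
    cvx_begin
        variable x(n)
        minimize( 0.5*quad_form(x, P0 + gamma*eye(n)) + q'*x + r )
        subject to
            A*x <= b;
    cvx_end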

11. BV Problem 4.40(a)-(b).

Solution:

(a) The LP can be expressed as

    minimize_x  $c^T x + d$
    subject to  $\mathbf{diag}(Gx - h) \preceq 0$, $Ax = b$.

(b) With $P = WW^T$ and $W \in \mathbf{R}^{n \times r}$, the QP can be expressed as

    minimize_{x \in \mathbf{R}^n,\, t \in \mathbf{R}}  $\frac{1}{2}t + q^T x + r$
    subject to  $\begin{bmatrix} I & W^T x \\ x^T W & t \end{bmatrix} \succeq 0$, $\mathbf{diag}(Gx - h) \preceq 0$, $Ax = b$.

(By the Schur complement, the LMI is equivalent to $x^T W W^T x = x^T P x \le t$.)

(c) With $P_i = W_i W_i^T$ and $W_i \in \mathbf{R}^{n \times r_i}$, the QCQP can be expressed as

    minimize_{x \in \mathbf{R}^n,\, t_i \in \mathbf{R},\, i \in \{0\} \cup [m]}  $\frac{1}{2}t_0 + q_0^T x + r_0$
    subject to  $\frac{1}{2}t_i + q_i^T x + r_i \le 0$, $i \in [m]$
                $\begin{bmatrix} I & W_i^T x \\ x^T W_i & t_i \end{bmatrix} \succeq 0$, $i \in \{0\} \cup [m]$
                $Ax = b$.

(d) The SOCP can be expressed as

    minimize_x  $c^T x$
    subject to  $\begin{bmatrix} (c_i^T x + d_i)I & A_i x + b_i \\ (A_i x + b_i)^T & c_i^T x + d_i \end{bmatrix} \succeq 0$, $i \in [N]$, $Fx = g$.

By the result in the hint, the LMI constraint is equivalent to $\|A_i x + b_i\|_2 \le c_i^T x + d_i$ when $c_i^T x + d_i > 0$. We have to check the case $c_i^T x + d_i = 0$ separately. In this case, the LMI constraint means $A_i x + b_i = 0$, so we can conclude that the LMI constraint and the SOC constraint are equivalent.

12. (Optional) BV Problem 4.43(a)-(c). Solution:

(a) We use the property that $\lambda_1(A(x)) \le t$ if and only if $A(x) \preceq tI$. We minimize the maximum eigenvalue by solving the SDP

    minimize_{x,t}  $t$
    subject to      $A(x) \preceq tI$.

(b) $\lambda_1(A(x)) \le t_1$ if and only if $A(x) \preceq t_1 I$, and $\lambda_m(A(x)) \ge t_2$ if and only if $A(x) \succeq t_2 I$, so we can minimize $\lambda_1 - \lambda_m$ by solving

    minimize_{x,t_1,t_2}  $t_1 - t_2$
    subject to            $t_2 I \preceq A(x) \preceq t_1 I$.

(c) We first note that the problem is equivalent to

    minimize    $\lambda/\gamma$
    subject to  $\gamma I \preceq A(x) \preceq \lambda I$        (1)

if we take as the domain of the objective $\{(\lambda, \gamma) : \gamma > 0\}$. This problem is quasiconvex and can be solved by bisection: the optimal value is less than or equal to $\alpha$ if and only if the inequalities

    $\lambda \le \gamma\alpha$, $\gamma I \preceq A(x) \preceq \lambda I$, $\gamma > 0$

(with variables $\gamma$, $\lambda$, $x$) are feasible.

Following the hint, we can also pose the problem as the SDP

    minimize    $t$
    subject to  $I \preceq sA_0 + y_1 A_1 + \cdots + y_n A_n \preceq tI$, $s \ge 0$.        (2)

We now verify more carefully that the two problems are equivalent. Let $p^\star$ be the optimal value of (1), and let $p^\star_{\mathrm{sdp}}$ be the optimal value of the SDP (2).

Let $\lambda/\gamma$ be the objective value of (1), evaluated at a feasible point $(\gamma, \lambda, x)$. Define $s = 1/\gamma$, $y = x/\gamma$, $t = \lambda/\gamma$. This yields a feasible point of (2), with objective value $t = \lambda/\gamma$. This proves that $p^\star \ge p^\star_{\mathrm{sdp}}$.

Now suppose that $(s, y, t)$ is feasible in (2). If $s > 0$, then $\gamma = 1/s$, $x = y/s$, $\lambda = t/s$ are feasible in (1) with objective value $t$. If $s = 0$, we have

    $I \preceq y_1 A_1 + \cdots + y_n A_n \preceq tI$.

Choose $x = \tau y$ with $\tau$ sufficiently large so that $A(\tau y) \succeq A_0 + \tau I \succ 0$. We have

    $\lambda_1(A(\tau y)) \le \lambda_1(A_0) + \tau t$,    $\lambda_m(A(\tau y)) \ge \lambda_m(A_0) + \tau$.

The first inequality is justified as follows:

    $A(\tau y) = A_0 + \tau(y_1 A_1 + \cdots + y_n A_n) \preceq A_0 + \tau t I \preceq \lambda_1(A_0) I + \tau t I = (\lambda_1(A_0) + \tau t) I$.

By using the fact that $\lambda_1(A(x)) \le t$ if and only if $A(x) \preceq tI$, we recover the first inequality. Hence, for $\tau$ sufficiently large,

    $\kappa(A(\tau y)) \le \dfrac{\lambda_1(A_0) + \tau t}{\lambda_m(A_0) + \tau}$.

Letting $\tau$ go to infinity, we can construct feasible points of (1) with objective value arbitrarily close to $t$. We conclude that $t \ge p^\star$ whenever $(s, y, t)$ is feasible in (2). Minimizing over $t$ yields $p^\star_{\mathrm{sdp}} \ge p^\star$.
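Part (a) of Problem 12 is also easy to try in CVX's SDP mode; the sketch below uses random symmetric matrices as hypothetical data and cross-checks the SDP against CVX's lambda_max atom:

    % BV 4.43(a): minimize the largest eigenvalue of A(x) = A0 + x1*A1 + x2*A2
    n = 4;
    A0 = randn(n); A0 = (A0 + A0')/2;
    A1 = randn(n); A1 = (A1 + A1')/2;
    A2 = randn(n); A2 = (A2 + A2')/2;
    cvx_begin sdp
        variables x(2) t
        minimize( t )
        subject to
            A0 + x(1)*A1 + x(2)*A2 <= t*eye(n);
    cvx_end
    % cross-check with the built-in atom:
    cvx_begin
        variable z(2)
        minimize( lambda_max(A0 + z(1)*A1 + z(2)*A2) )
    cvx_end
    % the two optimal values should agree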