Solution to EE 617 Mid-Term Exam, Fall 2017 (November 2, 2017)


1. (4 points) Convex sets

(a) (2 points) Consider the set

    $\{\, a \in \mathbf{R}^k \mid p(0) = 1,\ |p(t)| \le 1 \text{ for } 1 \le t \le \beta \,\}$,

where $p(t) = a_1 + a_2 t + \cdots + a_k t^{k-1}$. Is this set convex?

The set can be rewritten as

    $\{\, a \in \mathbf{R}^k \mid p(0) = 1,\ |p(t)| \le 1 \text{ for } 1 \le t \le \beta \,\}$
    $= \{\, a \in \mathbf{R}^k \mid p(0) = 1 \,\} \;\cap\; \bigcap_{t \in [1,\beta]} \{\, a \in \mathbf{R}^k \mid |p(t)| \le 1 \,\}$
    $= \{\, a \in \mathbf{R}^k \mid a_1 = 1 \,\} \;\cap\; \bigcap_{t \in [1,\beta]} \{\, a \in \mathbf{R}^k \mid a_1 + a_2 t + \cdots + a_k t^{k-1} \le 1 \,\} \;\cap\; \bigcap_{t \in [1,\beta]} \{\, a \in \mathbf{R}^k \mid -(a_1 + a_2 t + \cdots + a_k t^{k-1}) \le 1 \,\}.$

The set is the intersection of a hyperplane and infinitely many halfspaces. Therefore, it is convex.

(b) (2 points) The polar of a set $C \subseteq \mathbf{R}^n$ is defined as

    $C^\circ = \{\, y \in \mathbf{R}^n \mid y^T x \le 1 \text{ for all } x \in C \,\}.$

Is the polar $C^\circ$ of a (possibly nonconvex) set $C$ convex?

The set can be rewritten as

    $C^\circ = \bigcap_{x \in C} \{\, y \in \mathbf{R}^n \mid y^T x \le 1 \,\}.$

The set is the intersection of halfspaces. Therefore, it is convex, regardless of whether $C$ is convex or not.
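As a quick numerical illustration of part (b), here is a short NumPy sketch (made-up finite, nonconvex set $C$; names are mine) that tests membership in the polar and checks that convex combinations of points of $C^\circ$ remain in $C^\circ$:

    import numpy as np

    # Hypothetical finite, nonconvex set C in R^2
    C = np.array([[2.0, 0.0], [0.0, 2.0], [-1.0, -1.0]])

    def in_polar(y, C, tol=1e-9):
        # y is in the polar C° iff y^T x <= 1 for every x in C
        return np.all(C @ y <= 1 + tol)

    rng = np.random.default_rng(0)
    for _ in range(1000):
        y1, y2 = rng.uniform(-0.5, 0.5, size=(2, 2))
        if in_polar(y1, C) and in_polar(y2, C):
            theta = rng.uniform()
            # convex combinations of points in C° stay in C°
            assert in_polar(theta * y1 + (1 - theta) * y2, C)

For a finite $C$ the polar is just $\{y \mid Cy \preceq \mathbf{1}\}$, a finite intersection of halfspaces, which is why the midpoint test never fails.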

2. (4 points) Convex functions

(a) (2 points) The logarithmic barrier for the second-order cone constraint $\|x\|_2 \le t$ is

    $f(x, t) = -\log\left(t^2 - x^T x\right),$

with $\operatorname{dom} f = \{(x, t) \in \mathbf{R}^n \times \mathbf{R} \mid \|x\|_2 < t\}$. Show that the function $f(x, t)$ is convex in $(x, t)$. (Hint: use the convexity of $x^T x / t$.)

The domain of $f$ is the interior of a second-order cone, and hence a convex set. We can rewrite the function as

    $f(x, t) = -\log\left[\, t \left(t - x^T x / t\right) \right] = -\log t - \log\left(t - x^T x / t\right).$

Clearly, the first term $-\log t$ is convex. For the second term, note that $x^T x / t$ is convex, and therefore $t - x^T x / t$ is concave. Hence, $\log\left(t - x^T x / t\right)$ is concave (the log is concave and nondecreasing), and $-\log\left(t - x^T x / t\right)$ is convex. In summary, the function $f(x, t)$ is the sum of two convex functions. This concludes the proof.

(b) (2 points) The directional derivative of a function $g : \mathbf{R}^n \to \mathbf{R}$ at a point $y$ in the direction $x$ is

    $f(x) = \inf_{t > 0} \frac{g(y + t x) - g(y)}{t}.$

Show that $f : \mathbf{R}^n \to \mathbf{R}$ is convex if $g$ is convex.

The domain of $f$ is $\mathbf{R}^n$, and hence a convex set. We show the convexity of $f$ by definition. For any $x, z \in \mathbf{R}^n$ and $\theta \in [0, 1]$, we have

    $f(\theta x + (1-\theta) z) = \inf_{t>0} \frac{g\left[y + t(\theta x + (1-\theta) z)\right] - g(y)}{t} = \inf_{t>0} \frac{g\left[\theta (y + t x) + (1-\theta)(y + t z)\right] - g(y)}{t}.$

Using the convexity of $g$, we have

    $g\left[\theta (y + t x) + (1-\theta)(y + t z)\right] \le \theta g(y + t x) + (1-\theta) g(y + t z).$

Therefore, we have

    $f(\theta x + (1-\theta) z) \le \inf_{t>0} \frac{\theta g(y + t x) + (1-\theta) g(y + t z) - g(y)}{t}$
    $= \inf_{t>0} \frac{\theta \left[g(y + t x) - g(y)\right] + (1-\theta) \left[g(y + t z) - g(y)\right]}{t}$
    $= \inf_{t>0} \left[\theta\, \frac{g(y + t x) - g(y)}{t} + (1-\theta)\, \frac{g(y + t z) - g(y)}{t}\right].$    (1)
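As a side check on part (a), the following NumPy sketch (my own, with random points of the domain and a small numerical tolerance) tests the convexity inequality for $f(x,t) = -\log(t^2 - x^T x)$ directly:

    import numpy as np

    def f(x, t):
        # logarithmic barrier; defined only for ||x||_2 < t
        return -np.log(t**2 - x @ x)

    rng = np.random.default_rng(1)
    n = 5
    for _ in range(1000):
        x1 = rng.normal(size=n); t1 = np.linalg.norm(x1) + rng.uniform(0.1, 2.0)
        x2 = rng.normal(size=n); t2 = np.linalg.norm(x2) + rng.uniform(0.1, 2.0)
        th = rng.uniform()
        xm, tm = th * x1 + (1 - th) * x2, th * t1 + (1 - th) * t2
        # convexity: f at a convex combination is at most the combination of values
        assert f(xm, tm) <= th * f(x1, t1) + (1 - th) * f(x2, t2) + 1e-9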

Now we need to show that

    $\inf_{t>0} \frac{g(y + t x) - g(y)}{t} = \lim_{t \to 0^+} \frac{g(y + t x) - g(y)}{t},$

which is true if $\frac{g(y + t x) - g(y)}{t}$ is nondecreasing in $t$. For any $0 < t_1 < t_2$, we have

    $g(y + t_1 x) = g\!\left[\left(1 - \frac{t_1}{t_2}\right) y + \frac{t_1}{t_2}\,(y + t_2 x)\right] \le \left(1 - \frac{t_1}{t_2}\right) g(y) + \frac{t_1}{t_2}\, g(y + t_2 x),$

where the last inequality comes from the convexity of $g$. The above inequality implies

    $\frac{g(y + t_1 x) - g(y)}{t_1} \le \frac{g(y + t_2 x) - g(y)}{t_2}.$

As a result, we have

    $\inf_{t>0} \frac{g(y + t x) - g(y)}{t} = \lim_{t \to 0^+} \frac{g(y + t x) - g(y)}{t}.$

Finally, from (1), we have

    $f(\theta x + (1-\theta) z) \le \inf_{t>0} \left[\theta\, \frac{g(y + t x) - g(y)}{t} + (1-\theta)\, \frac{g(y + t z) - g(y)}{t}\right]$
    $= \lim_{t \to 0^+} \left[\theta\, \frac{g(y + t x) - g(y)}{t} + (1-\theta)\, \frac{g(y + t z) - g(y)}{t}\right]$
    $= \theta \lim_{t \to 0^+} \frac{g(y + t x) - g(y)}{t} + (1-\theta) \lim_{t \to 0^+} \frac{g(y + t z) - g(y)}{t}$
    $= \theta \inf_{t>0} \frac{g(y + t x) - g(y)}{t} + (1-\theta) \inf_{t>0} \frac{g(y + t z) - g(y)}{t}$
    $= \theta f(x) + (1-\theta) f(z).$

This concludes the proof that $f$ is convex.
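The monotonicity fact used above, that the difference quotient $\frac{g(y+tx)-g(y)}{t}$ is nondecreasing in $t$ for convex $g$, can also be checked numerically. A small NumPy sketch (my own choices: log-sum-exp as the convex test function, random $y$, $x$):

    import numpy as np

    def g(u):
        # log-sum-exp, a standard convex function
        return np.logaddexp.reduce(u)

    rng = np.random.default_rng(2)
    y = rng.normal(size=4)
    x = rng.normal(size=4)

    ts = np.sort(rng.uniform(1e-3, 10.0, size=50))
    quotients = np.array([(g(y + t * x) - g(y)) / t for t in ts])
    # the difference quotient is nondecreasing in t, so its inf is the limit as t -> 0+
    assert np.all(np.diff(quotients) >= -1e-9)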

3. (4 points) Convex optimization problems

(a) (2 points) Consider the following optimization problem:

    minimize    $\dfrac{\max_{i=1,\ldots,m} \left(a_i^T x + b_i\right)}{\min_{i=1,\ldots,p} \left(c_i^T x + d_i\right)}$
    subject to  $F x \preceq g$,

with variable $x \in \mathbf{R}^n$. Assume that $c_i^T x + d_i > 0$ for all $x$ that satisfy $F x \preceq g$. Give the best convex reformulation of the problem. By best, we mean LP is better than SOCP, SOCP is better than SDP, and SDP is better than a general convex program.

The objective function is the ratio of a convex function to a concave function. Therefore, the problem is a convex-concave fractional problem as in Exercise 4.7(c) of the book. We will use the same approach here. The convex formulation is

    minimize    $g_0(y, t)$
    subject to  $g_i(y, t) \le 0, \quad i = 1, \ldots, m$
                $\tilde h(y, t) \ge 1$,

where $g_0$ is the perspective of $f_0(x) = \max_{i=1,\ldots,m} \left(a_i^T x + b_i\right)$, $\tilde h$ is the perspective of $h(x) = \min_{i=1,\ldots,p} \left(c_i^T x + d_i\right)$, and $g_i$ is the perspective of $f_i(x) = f_i^T x - g_i$, where $f_i^T$ is the $i$th row of the matrix $F$ and $g_i$ is the $i$th element of the vector $g$.

Next, we compute the above functions explicitly. The objective function is $g_0 : \mathbf{R}^n \times \mathbf{R}_{++} \to \mathbf{R}$, which can be calculated as

    $g_0(y, t) = t f_0(y/t) = t \max_{i=1,\ldots,m} \left(a_i^T (y/t) + b_i\right) = \max_{i=1,\ldots,m} \left(a_i^T y + b_i t\right).$

Similarly, we can compute $\tilde h : \mathbf{R}^n \times \mathbf{R}_{++} \to \mathbf{R}$ as

    $\tilde h(y, t) = t h(y/t) = t \min_{i=1,\ldots,p} \left(c_i^T (y/t) + d_i\right) = \min_{i=1,\ldots,p} \left(c_i^T y + d_i t\right).$

Finally, we have $g_i : \mathbf{R}^n \times \mathbf{R}_{++} \to \mathbf{R}$ as

    $g_i(y, t) = t f_i(y/t) = t \left(f_i^T (y/t) - g_i\right) = f_i^T y - g_i t.$

Therefore, the original problem can be reformulated as

    minimize    $\max_{i=1,\ldots,m} \left(a_i^T y + b_i t\right)$
    subject to  $f_i^T y - g_i t \le 0, \quad i = 1, \ldots, m$
                $\min_{i=1,\ldots,p} \left(c_i^T y + d_i t\right) \ge 1$,

which is equivalent to the LP

    minimize    $z$
    subject to  $a_i^T y + b_i t \le z, \quad i = 1, \ldots, m$
                $F y - t g \preceq 0$
                $c_i^T y + d_i t \ge 1, \quad i = 1, \ldots, p$.
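As an illustration, the final LP can be assembled in a few lines of CVXPY. This is only a sketch on synthetic data of my own choosing, scaled so that the numerator stays nonnegative and the denominator positive on the feasible box, as the fractional transformation assumes:

    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(3)
    n, m, p = 3, 4, 2
    A = 0.1 * rng.normal(size=(m, n)); b = rng.uniform(1.0, 2.0, size=m)
    C = 0.1 * rng.normal(size=(p, n)); d = 2.0 * np.ones(p)
    F = np.vstack([np.eye(n), -np.eye(n)]); g = np.ones(2 * n)   # feasible box |x_i| <= 1

    y = cp.Variable(n); t = cp.Variable(); z = cp.Variable()
    constraints = [A @ y + b * t <= z,     # z >= a_i^T y + b_i t (epigraph of the max)
                   F @ y - t * g <= 0,     # perspective of F x <= g
                   C @ y + d * t >= 1]     # perspective of the denominator, kept >= 1
    lp = cp.Problem(cp.Minimize(z), constraints)
    lp.solve()
    x_star = y.value / t.value             # recover the original variable
    print(lp.value, x_star)

For this box constraint the rows $|y_i| \le t$ force $t \ge 0$ automatically, and the denominator constraint rules out $t = 0$, so the recovery $x = y/t$ is well defined.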

(b) (2 points) Consider the following complex least $\ell_\infty$-norm problem:

    minimize    $\|x\|_\infty$
    subject to  $A x = b$,

with complex variable $x \in \mathbf{C}^n$ and complex parameters $A \in \mathbf{C}^{m \times n}$ and $b \in \mathbf{C}^m$. Note that $\|x\|_\infty = \max_{i=1,\ldots,n} |x_i|$. Formulate the complex least $\ell_\infty$-norm problem as a problem with real parameters and real variables. Moreover, give the best convex reformulation of the new problem. By best, we mean LP is better than SOCP, SOCP is better than SDP, and SDP is better than a general convex program. (Hint: use $z = (\operatorname{Re}(x), \operatorname{Im}(x)) \in \mathbf{R}^{2n}$ as the variable.)

We first write the objective function and the constraints in the real variable $z = (\operatorname{Re}(x), \operatorname{Im}(x)) \in \mathbf{R}^{2n}$. The objective function is

    $\|x\|_\infty = \max_{i=1,\ldots,n} |x_i| = \max_{i=1,\ldots,n} \left(\operatorname{Re}(x_i)^2 + \operatorname{Im}(x_i)^2\right)^{1/2}.$

Each term $\left(\operatorname{Re}(x_i)^2 + \operatorname{Im}(x_i)^2\right)^{1/2}$ can be written compactly as

    $\left(\operatorname{Re}(x_i)^2 + \operatorname{Im}(x_i)^2\right)^{1/2} = \left\| \begin{bmatrix} \operatorname{Re}(x_i) \\ \operatorname{Im}(x_i) \end{bmatrix} \right\|_2 = \left\| C_i \begin{bmatrix} \operatorname{Re}(x) \\ \operatorname{Im}(x) \end{bmatrix} \right\|_2 = \left\| C_i z \right\|_2,$

where $C_i \in \mathbf{R}^{2 \times 2n}$ is the matrix such that

    $\begin{bmatrix} \operatorname{Re}(x_i) \\ \operatorname{Im}(x_i) \end{bmatrix} = C_i \begin{bmatrix} \operatorname{Re}(x) \\ \operatorname{Im}(x) \end{bmatrix}.$

In particular, the matrix $C_i$ has zero elements except $(C_i)_{1,i} = 1$ and $(C_i)_{2,\,n+i} = 1$.

Using $j = \sqrt{-1}$, the linear equality constraint can be rewritten as

    $A x = \left[\operatorname{Re}(A) + j \operatorname{Im}(A)\right] \left[\operatorname{Re}(x) + j \operatorname{Im}(x)\right]$
    $= \left[\operatorname{Re}(A)\operatorname{Re}(x) - \operatorname{Im}(A)\operatorname{Im}(x)\right] + j \left[\operatorname{Re}(A)\operatorname{Im}(x) + \operatorname{Im}(A)\operatorname{Re}(x)\right]$
    $= \begin{bmatrix} \operatorname{Re}(A) & -\operatorname{Im}(A) \end{bmatrix} \begin{bmatrix} \operatorname{Re}(x) \\ \operatorname{Im}(x) \end{bmatrix} + j \begin{bmatrix} \operatorname{Im}(A) & \operatorname{Re}(A) \end{bmatrix} \begin{bmatrix} \operatorname{Re}(x) \\ \operatorname{Im}(x) \end{bmatrix}.$

Noting that $b = \operatorname{Re}(b) + j \operatorname{Im}(b)$, the linear equality constraint can be written compactly as

    $\underbrace{\begin{bmatrix} \operatorname{Re}(A) & -\operatorname{Im}(A) \\ \operatorname{Im}(A) & \operatorname{Re}(A) \end{bmatrix}}_{\tilde A} \begin{bmatrix} \operatorname{Re}(x) \\ \operatorname{Im}(x) \end{bmatrix} = \underbrace{\begin{bmatrix} \operatorname{Re}(b) \\ \operatorname{Im}(b) \end{bmatrix}}_{\tilde b}, \qquad \text{i.e.,} \quad \tilde A z = \tilde b.$

Therefore, the original problem in real variables can be written as

    minimize    $\max_{i=1,\ldots,n} \|C_i z\|_2$
    subject to  $\tilde A z = \tilde b$,

which is equivalent to the following SOCP:

    minimize    $t$
    subject to  $\|C_i z\|_2 \le t, \quad i = 1, \ldots, n$
                $\tilde A z = \tilde b$.
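A CVXPY sketch of this SOCP, on random complex data of my own, with the selector matrices $C_i$ implemented implicitly by indexing $z$:

    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(4)
    m, n = 3, 6
    A = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))
    b = rng.normal(size=m) + 1j * rng.normal(size=m)

    # Real reformulation: z = (Re x, Im x), Atil z = btil
    Atil = np.block([[A.real, -A.imag], [A.imag, A.real]])
    btil = np.concatenate([b.real, b.imag])

    z = cp.Variable(2 * n); t = cp.Variable()
    constraints = [Atil @ z == btil]
    for i in range(n):
        # ||C_i z||_2 = |x_i|; C_i picks out (Re x_i, Im x_i)
        constraints.append(cp.norm(cp.hstack([z[i], z[n + i]]), 2) <= t)
    socp = cp.Problem(cp.Minimize(t), constraints)
    socp.solve()

    x = z.value[:n] + 1j * z.value[n:]
    print(t.value, np.max(np.abs(x)), np.linalg.norm(A @ x - b))

At the optimum, $t$ equals $\max_i |x_i|$ and the recovered complex $x$ satisfies $Ax = b$ up to solver tolerance.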

4. (4 points) Dual problems. Consider the problem of projecting a point $a \in \mathbf{R}^n$ onto the unit ball in $\ell_1$-norm:

    minimize    $\frac{1}{2}\|x - a\|_2^2$
    subject to  $\|x\|_1 \le 1$,

with variable $x \in \mathbf{R}^n$. Derive the dual problem. (Hint: use the additional variable $y = x - a$.)

With the additional variable $y = x - a$, the problem is equivalent to

    minimize    $\frac{1}{2}\|y\|_2^2$
    subject to  $\|x\|_1 \le 1$
                $y = x - a$,

with optimization variables $x, y \in \mathbf{R}^n$. The Lagrangian is

    $L(x, y, \lambda, \nu) = \frac{1}{2}\|y\|_2^2 + \lambda\left(\|x\|_1 - 1\right) + \nu^T (y - x + a).$

The dual function is

    $g(\lambda, \nu) = \inf_{x,\,y} \left[\frac{1}{2}\|y\|_2^2 + \lambda\left(\|x\|_1 - 1\right) + \nu^T (y - x + a)\right]$
    $= \inf_x \left(\lambda\|x\|_1 - \nu^T x\right) + \inf_y \left(\frac{1}{2} y^T y + \nu^T y\right) - \lambda + a^T \nu.$

The optimization over $x$ is

    $\inf_x \left(\lambda\|x\|_1 - \nu^T x\right) = \inf_x \sum_{i=1}^n \left(\lambda |x_i| - \nu_i x_i\right).$

If $\lambda < |\nu_i|$, we can let $|x_i| \to \infty$ (with the sign of $\nu_i$), which makes the objective value unbounded below. If $\lambda \ge |\nu_i|$, the objective value satisfies

    $\lambda |x_i| - \nu_i x_i \ge \lambda |x_i| - |\nu_i|\, |x_i| = \left(\lambda - |\nu_i|\right) |x_i| \ge 0,$

where the inequality holds with equality when $x_i = 0$. Therefore, when $\lambda \ge |\nu_i|$ for all $i$, the optimal $x_i = 0$, and the objective value is $0$.

The optimization over $y$ is the minimization of a convex quadratic function. Therefore, the optimal $y = -\nu$, and the objective value is $-\frac{1}{2}\nu^T\nu$.

Combining the above two parts, the dual function is

    $g(\lambda, \nu) = \begin{cases} -\frac{1}{2}\nu^T\nu + a^T\nu - \lambda & \text{if } \|\nu\|_\infty \le \lambda \\ -\infty & \text{otherwise.} \end{cases}$

After making the implicit constraints explicit, the dual problem is

    maximize    $-\frac{1}{2}\nu^T\nu + a^T\nu - \lambda$
    subject to  $\|\nu\|_\infty \le \lambda$
                $\lambda \ge 0$,

with optimization variables $\lambda \in \mathbf{R}$ and $\nu \in \mathbf{R}^n$.
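As a numerical check of the derived dual, the CVXPY sketch below (made-up data, names are mine) solves the primal projection and the dual above and verifies that the optimal values agree; strong duality holds since the primal is convex and strictly feasible:

    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(5)
    n = 6
    a = rng.normal(size=n)

    # Primal: project a onto the unit l1-ball
    x = cp.Variable(n)
    primal = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(x - a)), [cp.norm1(x) <= 1])
    primal.solve()

    # Dual: maximize -(1/2) nu^T nu + a^T nu - lambda  s.t.  ||nu||_inf <= lambda, lambda >= 0
    nu = cp.Variable(n); lam = cp.Variable()
    dual = cp.Problem(cp.Maximize(-0.5 * cp.sum_squares(nu) + a @ nu - lam),
                      [cp.norm_inf(nu) <= lam, lam >= 0])
    dual.solve()

    print(primal.value, dual.value)   # equal up to solver tolerance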

5. (4 points) Semidefinite relaxation of a nonconvex problem. A signal $\hat s \in \{-1, 1\}^n$ is sent through a noisy channel, and received as $y = H \hat s + w$, where $H \in \mathbf{R}^{m \times n}$, $y \in \mathbf{R}^m$, and $w$ is Gaussian noise. The maximum likelihood detection of the signal can be formulated as the following nonconvex problem:

    minimize    $\|y - H s\|_2^2$                                        (2)
    subject to  $s_i^2 = 1, \quad i = 1, \ldots, n$.

In the next series of questions, we explore the semidefinite relaxation (SDR) of the original nonconvex problem (2), and the relationship between their dual problems.

(a) (1 point) Show that the original problem (2) can be rewritten in homogeneous form:

    minimize    $x^T L x$                                                 (3)
    subject to  $x_i^2 = 1, \quad i = 1, \ldots, n$
                $x_{n+1} = 1$.

Specify $L$ as a function of the original problem data $H$ and $y$. (Hint: define $x = \begin{bmatrix} s \\ 1 \end{bmatrix}$.)

The objective function of the original problem (2) can be written as

    $\|y - H s\|_2^2 = (y - H s)^T (y - H s) = s^T H^T H s - 2 y^T H s + y^T y = \begin{bmatrix} s \\ 1 \end{bmatrix}^T \begin{bmatrix} H^T H & -H^T y \\ -y^T H & y^T y \end{bmatrix} \begin{bmatrix} s \\ 1 \end{bmatrix}.$

Therefore, by defining $x = \begin{bmatrix} s \\ 1 \end{bmatrix}$ and $L = \begin{bmatrix} H^T H & -H^T y \\ -y^T H & y^T y \end{bmatrix}$, the original problem can be rewritten in homogeneous form:

    minimize    $x^T L x$                                                 (4)
    subject to  $x_i^2 = 1, \quad i = 1, \ldots, n$
                $x_{n+1} = 1$.
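A small NumPy sketch (my own, with synthetic $H$, $y$) that builds $L$ as above and confirms the identity $x^T L x = \|y - H s\|_2^2$ for $x = (s, 1)$:

    import numpy as np

    rng = np.random.default_rng(6)
    m, n = 8, 4
    H = rng.normal(size=(m, n))
    s_hat = rng.choice([-1.0, 1.0], size=n)
    y = H @ s_hat + 0.1 * rng.normal(size=m)

    # L = [[H^T H, -H^T y], [-y^T H, y^T y]]
    L = np.block([[H.T @ H, -(H.T @ y)[:, None]],
                  [-(H.T @ y)[None, :], np.array([[y @ y]])]])

    s = rng.choice([-1.0, 1.0], size=n)          # an arbitrary candidate signal
    x = np.concatenate([s, [1.0]])
    assert np.isclose(x @ L @ x, np.linalg.norm(y - H @ s) ** 2)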

(b) (2 points) Derive the dual problem of the problem in homogeneous form (3).

The problem in (3) is equivalent to

    minimize    $x^T L x$                                                 (5)
    subject to  $x_i^2 = 1, \quad i = 1, \ldots, n+1$.

Note that the equality constraints are all quadratic in the $x_i$ in this formulation, which makes it easier to derive the dual problem. The Lagrangian is

    $L(x, \nu) = x^T L x + \sum_{i=1}^{n+1} \nu_i \left(x_i^2 - 1\right).$

The dual function is

    $g(\nu) = \inf_x \left[x^T L x + \sum_{i=1}^{n+1} \nu_i \left(x_i^2 - 1\right)\right] = \inf_x \left[x^T \left(L + \operatorname{diag}(\nu)\right) x\right] - \sum_{i=1}^{n+1} \nu_i.$

The dual function is unbounded below unless

    $L + \operatorname{diag}(\nu) \succeq 0.$

Therefore, we have

    $g(\nu) = \begin{cases} -\sum_{i=1}^{n+1} \nu_i & \text{if } L + \operatorname{diag}(\nu) \succeq 0 \\ -\infty & \text{otherwise.} \end{cases}$

The dual problem is then

    maximize    $-\sum_{i=1}^{n+1} \nu_i$
    subject to  $L + \operatorname{diag}(\nu) \succeq 0$.
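A CVXPY sketch of this dual SDP on synthetic data (my own choices throughout). By weak duality its optimal value lower-bounds the nonconvex ML objective, which the sketch verifies by brute force over all $s \in \{-1,1\}^n$ for a small $n$:

    import itertools
    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(7)
    m, n = 6, 4
    H = rng.normal(size=(m, n))
    y = H @ rng.choice([-1.0, 1.0], size=n) + 0.5 * rng.normal(size=m)
    L = np.block([[H.T @ H, -(H.T @ y)[:, None]],
                  [-(H.T @ y)[None, :], np.array([[y @ y]])]])

    nu = cp.Variable(n + 1)
    dual = cp.Problem(cp.Maximize(-cp.sum(nu)), [L + cp.diag(nu) >> 0])
    dual.solve()

    # Weak duality: the dual optimum lower-bounds the nonconvex ML objective
    p_star = min(np.linalg.norm(y - H @ np.array(s)) ** 2
                 for s in itertools.product([-1.0, 1.0], repeat=n))
    print(dual.value, p_star)   # dual.value <= p_star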

(c) (1 point) Now we consider the SDR of the nonconvex problem (3). We define a new variable $X = x x^T \in \mathbf{S}_+^{n+1}$. The problem in (3) is equivalent to

    minimize    $\operatorname{tr}(L X)$
    subject to  $\operatorname{diag}(X) = \mathbf{1}_{n+1}$
                $X \succeq 0$
                $\operatorname{rank}(X) = 1$,

where $\mathbf{1}_{n+1}$ is the $(n+1)$-dimensional vector of ones. By removing the nonconvex rank constraint, we obtain the following SDR:

    minimize    $\operatorname{tr}(L X)$                                   (6)
    subject to  $\operatorname{diag}(X) = \mathbf{1}_{n+1}$
                $X \succeq 0$,

which is a semidefinite program (SDP). Derive the dual problem of the SDR (6) and show that it is the same as the dual of the original problem derived in part (b). (In other words, the SDR is the dual of the dual of the original nonconvex problem.) (Hint: the Lagrange multiplier associated with the constraint $X \succeq 0$ is $Z \succeq 0$, and we need to use the term $-\operatorname{tr}(Z X)$ in the Lagrangian.)

Write $\nu \in \mathbf{R}^{n+1}$ for the Lagrange multiplier associated with the linear constraint $\operatorname{diag}(X) = \mathbf{1}_{n+1}$, and $Z \in \mathbf{S}_+^{n+1}$ for the Lagrange multiplier associated with the constraint $X \succeq 0$. Then the Lagrangian is

    $L(X, \nu, Z) = \operatorname{tr}(L X) + \nu^T \left(\operatorname{diag}(X) - \mathbf{1}_{n+1}\right) - \operatorname{tr}(Z X).$

Observing that

    $\nu^T \operatorname{diag}(X) = \operatorname{tr}\left(\operatorname{diag}(\nu) X\right),$

the Lagrangian can be rewritten as

    $L(X, \nu, Z) = \operatorname{tr}\left(\left(L + \operatorname{diag}(\nu) - Z\right) X\right) - \mathbf{1}_{n+1}^T \nu.$

The dual function is

    $g(\nu, Z) = \inf_X \left[\operatorname{tr}\left(\left(L + \operatorname{diag}(\nu) - Z\right) X\right) - \mathbf{1}_{n+1}^T \nu\right].$

If $L + \operatorname{diag}(\nu) - Z \ne 0$, we can make $\operatorname{tr}\left(\left(L + \operatorname{diag}(\nu) - Z\right) X\right)$ unbounded below. If $L + \operatorname{diag}(\nu) - Z = 0$, we have $\operatorname{tr}\left(\left(L + \operatorname{diag}(\nu) - Z\right) X\right) = 0$. Therefore, the dual function is

    $g(\nu, Z) = \begin{cases} -\mathbf{1}_{n+1}^T \nu & \text{if } L + \operatorname{diag}(\nu) = Z \\ -\infty & \text{otherwise.} \end{cases}$

The dual problem is then

    maximize    $-\mathbf{1}_{n+1}^T \nu$
    subject to  $L + \operatorname{diag}(\nu) = Z$
                $Z \succeq 0$,

which is equivalent to

    maximize    $-\mathbf{1}_{n+1}^T \nu$
    subject to  $L + \operatorname{diag}(\nu) \succeq 0$,

which is the same as the dual problem derived in part (b).
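As a numerical illustration of this equivalence, the following CVXPY sketch (synthetic data of my own) solves both the SDR (6) and the dual SDP from part (b) and checks that their optimal values coincide:

    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(8)
    m, n = 6, 4
    H = rng.normal(size=(m, n))
    y = H @ rng.choice([-1.0, 1.0], size=n) + 0.5 * rng.normal(size=m)
    L = np.block([[H.T @ H, -(H.T @ y)[:, None]],
                  [-(H.T @ y)[None, :], np.array([[y @ y]])]])

    # SDR (6): relax X = x x^T to X >= 0 with unit diagonal
    X = cp.Variable((n + 1, n + 1), PSD=True)
    sdr = cp.Problem(cp.Minimize(cp.trace(L @ X)), [cp.diag(X) == 1])
    sdr.solve()

    # Dual of the homogeneous problem from part (b)
    nu = cp.Variable(n + 1)
    dual = cp.Problem(cp.Maximize(-cp.sum(nu)), [L + cp.diag(nu) >> 0])
    dual.solve()

    print(sdr.value, dual.value)   # equal up to solver tolerance

The agreement reflects strong duality between the two SDPs, which is the "dual of the dual" relationship stated in part (c).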
