Exercises

Basic definitions

5.1 A simple example. Consider the optimization problem

    minimize   x² + 1
    subject to (x − 2)(x − 4) ≤ 0,

with variable x ∈ R.

(a) Analysis of primal problem. Give the feasible set, the optimal value, and the optimal solution.

(b) Lagrangian and dual function. Plot the objective x² + 1 versus x. On the same plot, show the feasible set, optimal point and value, and plot the Lagrangian L(x, λ) versus x for a few positive values of λ. Verify the lower bound property (p* ≥ inf_x L(x, λ) for λ ≥ 0). Derive and sketch the Lagrange dual function g.

(c) Lagrange dual problem. State the dual problem, and verify that it is a concave maximization problem. Find the dual optimal value and dual optimal solution λ*. Does strong duality hold?

(d) Sensitivity analysis. Let p*(u) denote the optimal value of the problem

    minimize   x² + 1
    subject to (x − 2)(x − 4) ≤ u,

as a function of the parameter u. Plot p*(u). Verify that dp*(0)/du = −λ*.

Solution.

(a) The feasible set is the interval [2, 4]. The (unique) optimal point is x* = 2, and the optimal value is p* = 5.

[Figure: f₀(x) = x² + 1 and f₁(x) = (x − 2)(x − 4) versus x, with the feasible set [2, 4], the optimal point x* = 2, and the optimal value p* = 5 marked.]

(b) The Lagrangian is

    L(x, λ) = (1 + λ)x² − 6λx + (1 + 8λ).

[Figure: the Lagrangian L(x, λ) = f₀(x) + λf₁(x) as a function of x for several values of λ ≥ 0.] Note that the minimum value of L(x, λ) over x (i.e., g(λ)) never exceeds p*. It increases as λ varies from 0 toward 2, reaches its maximum at λ = 2, and then decreases again as λ increases above 2. We have equality p* = g(λ) for λ = 2.
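As a quick numerical sanity check of part (b) (not part of the original solution; a minimal numpy sketch, using the closed-form minimizer x = 3λ/(1 + λ) derived below):

```python
import numpy as np

# Exercise 5.1: f0(x) = x^2 + 1, f1(x) = (x - 2)(x - 4).
f0 = lambda x: x**2 + 1
f1 = lambda x: (x - 2) * (x - 4)
L = lambda x, lam: f0(x) + lam * f1(x)

p_star = 5.0  # optimal value, attained at x* = 2

# g(lam) = inf_x L(x, lam); for lam > -1 the minimizer is x = 3*lam/(1 + lam)
def g(lam):
    return L(3 * lam / (1 + lam), lam)

for lam in [0.0, 0.5, 1.0, 2.0, 3.0, 10.0]:
    assert g(lam) <= p_star + 1e-12                                # lower bound property
    assert abs(g(lam) - (-9 * lam**2 / (1 + lam) + 1 + 8 * lam)) < 1e-9  # closed form below

print("g(2) =", g(2.0))  # equals p* = 5, so strong duality holds
```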

For λ > −1, the Lagrangian reaches its minimum at x = 3λ/(1 + λ). For λ ≤ −1 it is unbounded below. Thus

    g(λ) = −9λ²/(1 + λ) + 1 + 8λ   if λ > −1,
           −∞                      if λ ≤ −1,

which is plotted below.

[Figure: the dual function g(λ) versus λ.]

We can verify that the dual function is concave, that its value is equal to p* = 5 for λ = 2, and less than p* for other values of λ.

(c) The Lagrange dual problem is

    maximize   −9λ²/(1 + λ) + 1 + 8λ
    subject to λ ≥ 0.

The dual optimum occurs at λ* = 2, with d* = 5. So for this example we can directly observe that strong duality holds (as it must, since Slater's constraint qualification is satisfied).

(d) The perturbed problem is infeasible for u < −1, since inf_x (x² − 6x + 8) = −1. For u ≥ −1, the feasible set is the interval [3 − √(1 + u), 3 + √(1 + u)], given by the two roots of x² − 6x + 8 = u. For −1 ≤ u ≤ 8 the optimum is x*(u) = 3 − √(1 + u). For u ≥ 8, the optimum is the unconstrained minimum of f₀,

i.e., x*(u) = 0. In summary,

    p*(u) = ∞                    if u < −1,
            11 + u − 6√(1 + u)   if −1 ≤ u ≤ 8,
            1                    if u ≥ 8.

[Figure: the optimal value function p*(u) and its epigraph epi p*, with the supporting line p*(0) − λ*u at u = 0.]

Finally, we note that p*(u) is a differentiable function of u, and that

    dp*(0)/du = −2 = −λ*.

5.2 Weak duality for unbounded and infeasible problems. The weak duality inequality, d* ≤ p*, clearly holds when d* = −∞ or p* = ∞. Show that it holds in the other two cases as well: if p* = −∞, then we must have d* = −∞, and also, if d* = ∞, then we must have p* = ∞.

Solution.

(a) p* = −∞. The primal problem is unbounded, i.e., there exist feasible x with arbitrarily small values of f₀(x). Since

    L(x, λ) = f₀(x) + Σᵢ₌₁ᵐ λᵢ fᵢ(x) ≤ f₀(x)

for feasible x and λ ⪰ 0, the Lagrangian is unbounded below for all λ ⪰ 0, i.e., g(λ) = −∞ for all λ ⪰ 0. Therefore the dual problem is infeasible (d* = −∞).

(b) d* = ∞. The dual problem is unbounded above. This is only possible if the primal problem is infeasible. If it were feasible, with fᵢ(x̃) ≤ 0 for i = 1, ..., m, then for all λ ⪰ 0,

    g(λ) = inf_x (f₀(x) + Σᵢ λᵢ fᵢ(x)) ≤ f₀(x̃) + Σᵢ λᵢ fᵢ(x̃) ≤ f₀(x̃),

so the dual problem is bounded above.

5.3 Problems with one inequality constraint. Express the dual problem of

    minimize   cᵀx
    subject to f(x) ≤ 0,
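Returning to exercise 5.1(d), the formula for p*(u) and the sensitivity result dp*(0)/du = −λ* can be checked numerically; a brute-force sketch (the grid and test values are arbitrary choices, not from the original solution):

```python
import numpy as np

# Perturbed problem of 5.1(d): minimize x^2 + 1  s.t.  (x - 2)(x - 4) <= u.
def p_star(u):
    if u < -1:
        return np.inf                        # infeasible
    if u <= 8:
        return 11 + u - 6 * np.sqrt(1 + u)   # optimum at x*(u) = 3 - sqrt(1 + u)
    return 1.0                               # unconstrained minimum x*(u) = 0

# check against a brute-force minimization over a fine grid
xs = np.linspace(-5, 10, 2_000_001)
for u in [-0.5, 0.0, 3.0, 8.0, 20.0]:
    feasible = xs[(xs - 2) * (xs - 4) <= u]
    assert abs(p_star(u) - (feasible**2 + 1).min()) < 1e-3

# sensitivity: dp*(0)/du = 1 - 3/sqrt(1 + u) at u = 0, i.e., -2 = -lambda*
h = 1e-6
print((p_star(h) - p_star(-h)) / (2 * h))    # prints approximately -2
```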

(c) Defining z = λw, we obtain the equivalent problem

    maximize   −bᵀz
    subject to Aᵀz + c = 0, z ⪰ 0.

This is the dual of the original LP.

Dual of general LP. Find the dual function of the LP

    minimize   cᵀx
    subject to Gx ⪯ h, Ax = b.

Give the dual problem, and make the implicit equality constraints explicit.

Solution.

(a) The Lagrangian is

    L(x, λ, ν) = cᵀx + λᵀ(Gx − h) + νᵀ(Ax − b) = (c + Gᵀλ + Aᵀν)ᵀx − hᵀλ − bᵀν,

which is an affine function of x. It follows that the dual function is given by

    g(λ, ν) = inf_x L(x, λ, ν) = −hᵀλ − bᵀν   if c + Gᵀλ + Aᵀν = 0,
                                 −∞           otherwise.

(b) The dual problem is

    maximize   g(λ, ν)
    subject to λ ⪰ 0.

After making the implicit constraints explicit, we obtain

    maximize   −hᵀλ − bᵀν
    subject to c + Gᵀλ + Aᵀν = 0, λ ⪰ 0.

Lower bounds in Chebyshev approximation from least-squares. Consider the Chebyshev or ℓ∞-norm approximation problem

    minimize   ‖Ax − b‖∞,                                   (5.103)

where A ∈ R^{m×n} and rank A = n. Let x_ch denote an optimal solution (there may be multiple optimal solutions; x_ch denotes one of them).

The Chebyshev problem has no closed-form solution, but the corresponding least-squares problem does. Define

    x_ls = argmin ‖Ax − b‖₂ = (AᵀA)⁻¹Aᵀb.

We address the following question. Suppose that for a particular A and b we have computed the least-squares solution x_ls (but not x_ch). How suboptimal is x_ls for the Chebyshev problem? In other words, how much larger is ‖Ax_ls − b‖∞ than ‖Ax_ch − b‖∞?

(a) Prove the lower bound

    ‖Ax_ls − b‖∞ ≤ √m ‖Ax_ch − b‖∞,

using the fact that for all z ∈ Rᵐ,

    (1/√m) ‖z‖₂ ≤ ‖z‖∞ ≤ ‖z‖₂.
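The general LP dual derived above can be verified numerically on a random instance; a scipy sketch (the instance is constructed so that both primal and dual are feasible, which for LPs guarantees strong duality; the dimensions and seed are arbitrary):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m_ineq, m_eq, n = 6, 2, 4
G = rng.standard_normal((m_ineq, n))
A = rng.standard_normal((m_eq, n))
x0 = rng.standard_normal(n)
h = G @ x0 + rng.random(m_ineq)          # x0 is strictly feasible for G x <= h
b = A @ x0                               # and satisfies A x = b
lam0, nu0 = rng.random(m_ineq), rng.standard_normal(m_eq)
c = -(G.T @ lam0 + A.T @ nu0)            # makes the dual feasible (primal bounded)

# primal LP: minimize c^T x  s.t.  G x <= h,  A x = b,  x free
primal = linprog(c, A_ub=G, b_ub=h, A_eq=A, b_eq=b, bounds=(None, None))

# dual LP: maximize -h^T lam - b^T nu  s.t.  G^T lam + A^T nu + c = 0,  lam >= 0
dual = linprog(np.concatenate([h, b]),   # minimize h^T lam + b^T nu
               A_eq=np.hstack([G.T, A.T]), b_eq=-c,
               bounds=[(0, None)] * m_ineq + [(None, None)] * m_eq)

print(primal.fun, -dual.fun)             # equal: strong duality for LP
```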

(b) In Example 5.6 (page 254) we derived a dual for the general norm approximation problem. Applying the results to the ℓ∞-norm (and its dual norm, the ℓ₁-norm), we can state the following dual for the Chebyshev approximation problem:

    maximize   bᵀν
    subject to ‖ν‖₁ ≤ 1, Aᵀν = 0.                           (5.104)

Any feasible ν corresponds to a lower bound bᵀν on ‖Ax_ch − b‖∞.

Denote the least-squares residual as r_ls = b − Ax_ls. Assuming r_ls ≠ 0, show that

    ν̂ = −r_ls/‖r_ls‖₁,   ν̃ = r_ls/‖r_ls‖₁

are both feasible in (5.104). By duality bᵀν̂ and bᵀν̃ are lower bounds on ‖Ax_ch − b‖∞. Which is the better bound? How do these bounds compare with the bound derived in part (a)?

Solution.

(a) Simple manipulation yields

    ‖Ax_ch − b‖∞ ≥ (1/√m) ‖Ax_ch − b‖₂ ≥ (1/√m) ‖Ax_ls − b‖₂ ≥ (1/√m) ‖Ax_ls − b‖∞.

(b) From the expression x_ls = (AᵀA)⁻¹Aᵀb we note that

    Aᵀr_ls = Aᵀ(b − A(AᵀA)⁻¹Aᵀb) = Aᵀb − Aᵀb = 0.

Therefore Aᵀν̂ = 0 and Aᵀν̃ = 0. Obviously we also have ‖ν̂‖₁ = 1 and ‖ν̃‖₁ = 1, so ν̂ and ν̃ are dual feasible. Since b = Ax_ls + r_ls and Aᵀr_ls = 0, we can write the dual objective value at ν̃ as

    bᵀν̃ = bᵀr_ls/‖r_ls‖₁ = (Ax_ls + r_ls)ᵀr_ls/‖r_ls‖₁ = ‖r_ls‖₂²/‖r_ls‖₁,

and, similarly, bᵀν̂ = −‖r_ls‖₂²/‖r_ls‖₁. Therefore ν̃ gives a better bound than ν̂.

Finally, to show that the resulting lower bound is better than the bound in part (a), we have to verify that

    ‖r_ls‖₂²/‖r_ls‖₁ ≥ ‖r_ls‖∞/√m.

This follows from the inequalities

    ‖x‖₁ ≤ √m ‖x‖₂,   ‖x‖∞ ≤ ‖x‖₂,

which hold for general x ∈ Rᵐ.

Piecewise-linear minimization. We consider the convex piecewise-linear minimization problem

    minimize   max_{i=1,...,m} (aᵢᵀx + bᵢ)                  (5.105)

with variable x ∈ Rⁿ.
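The two Chebyshev lower bounds can be compared on a random instance; a numpy sketch (the dimensions and data are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 100, 10
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
r_ls = b - A @ x_ls

# dual-feasible point for (5.104): nu_tilde = r_ls / ||r_ls||_1, since A^T r_ls = 0
nu = r_ls / np.abs(r_ls).sum()
assert np.allclose(A.T @ nu, 0, atol=1e-8)

bound_dual = b @ nu                                   # = ||r_ls||_2^2 / ||r_ls||_1
bound_a    = np.abs(A @ x_ls - b).max() / np.sqrt(m)  # bound from part (a)
cheb_ls    = np.abs(A @ x_ls - b).max()               # Chebyshev objective at x_ls

print(bound_a, "<=", bound_dual, "<= ||A x_ch - b||_inf <=", cheb_ls)
```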

5.10 Optimal experiment design. The following problems arise in experiment design (see §7.5).

(a) D-optimal design.

    minimize   log det (Σᵢ₌₁ᵖ xᵢvᵢvᵢᵀ)⁻¹
    subject to x ⪰ 0, 1ᵀx = 1.

(b) A-optimal design.

    minimize   tr (Σᵢ₌₁ᵖ xᵢvᵢvᵢᵀ)⁻¹
    subject to x ⪰ 0, 1ᵀx = 1.

The domain of both problems is {x | Σᵢ₌₁ᵖ xᵢvᵢvᵢᵀ ≻ 0}. The variable is x ∈ Rᵖ; the vectors v₁, ..., vₚ ∈ Rⁿ are given. Derive dual problems by first introducing a new variable X ∈ Sⁿ and an equality constraint X = Σᵢ₌₁ᵖ xᵢvᵢvᵢᵀ, and then applying Lagrange duality. Simplify the dual problems as much as you can.

Solution.

(a) D-optimal design. The reformulated problem is

    minimize   log det(X⁻¹)
    subject to X = Σᵢ₌₁ᵖ xᵢvᵢvᵢᵀ, x ⪰ 0, 1ᵀx = 1.

The Lagrangian is

    L(X, x, Z, z, ν) = log det(X⁻¹) + tr(Z(X − Σᵢ xᵢvᵢvᵢᵀ)) − zᵀx + ν(1ᵀx − 1)
                     = log det(X⁻¹) + tr(ZX) + Σᵢ₌₁ᵖ xᵢ(−vᵢᵀZvᵢ − zᵢ + ν) − ν.

The minimum over x is bounded below only if ν − vᵢᵀZvᵢ = zᵢ. Setting the gradient with respect to X equal to zero gives X⁻¹ = Z. We obtain the dual function

    g(Z, z, ν) = log det Z + n − ν   if ν − vᵢᵀZvᵢ = zᵢ, i = 1, ..., p,
                 −∞                  otherwise.

The dual problem is

    maximize   log det Z + n − ν
    subject to vᵢᵀZvᵢ ≤ ν, i = 1, ..., p,

with domain Sⁿ₊₊ × R. We can eliminate ν by first making a change of variables W = (1/ν)Z, which gives

    maximize   log det W + n + n log ν − ν
    subject to vᵢᵀWvᵢ ≤ 1, i = 1, ..., p.

Finally, we note that we can easily optimize n log ν − ν over ν. The optimum is ν = n, and substituting gives

    maximize   log det W + n log n
    subject to vᵢᵀWvᵢ ≤ 1, i = 1, ..., p.
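The key step above is the inner minimization inf_X (log det(X⁻¹) + tr(ZX)) = log det Z + n, attained at X = Z⁻¹. A small numpy check on a random positive definite Z (the perturbation test relies on convexity of the objective in X; the instance is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
M = rng.standard_normal((n, n))
Z = M @ M.T + n * np.eye(n)              # a positive definite Z

def obj(X):  # log det X^{-1} + tr(Z X)
    return -np.linalg.slogdet(X)[1] + np.trace(Z @ X)

X_star = np.linalg.inv(Z)                # stationary point: X^{-1} = Z
val_star = np.linalg.slogdet(Z)[1] + n   # claimed minimum: log det Z + n
assert abs(obj(X_star) - val_star) < 1e-9

# random symmetric perturbations never go below the claimed minimum
for _ in range(100):
    E = 0.1 * rng.standard_normal((n, n))
    X = X_star + 0.5 * (E + E.T)
    if np.all(np.linalg.eigvalsh(X) > 0):
        assert obj(X) >= val_star - 1e-9
```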

(b) A-optimal design. The reformulated problem is

    minimize   tr(X⁻¹)
    subject to X = Σᵢ₌₁ᵖ xᵢvᵢvᵢᵀ, x ⪰ 0, 1ᵀx = 1.

The Lagrangian is

    L(X, x, Z, z, ν) = tr(X⁻¹) + tr(Z(X − Σᵢ xᵢvᵢvᵢᵀ)) − zᵀx + ν(1ᵀx − 1)
                     = tr(X⁻¹) + tr(ZX) + Σᵢ₌₁ᵖ xᵢ(−vᵢᵀZvᵢ − zᵢ + ν) − ν.

The minimum over x is unbounded below unless vᵢᵀZvᵢ + zᵢ = ν. The minimum over X can be found by setting the gradient equal to zero: X⁻² = Z, or X = Z^{−1/2} if Z ≻ 0, which gives

    inf_{X≻0} (tr(X⁻¹) + tr(ZX)) = 2 tr(Z^{1/2})   if Z ⪰ 0,
                                   −∞               otherwise.

The dual function is

    g(Z, z, ν) = −ν + 2 tr(Z^{1/2})   if Z ⪰ 0, vᵢᵀZvᵢ + zᵢ = ν,
                 −∞                   otherwise.

The dual problem is

    maximize   −ν + 2 tr(Z^{1/2})
    subject to vᵢᵀZvᵢ ≤ ν, i = 1, ..., p,
               Z ⪰ 0.

As a first simplification, we define W = (1/ν)Z, and write the problem as

    maximize   −ν + 2√ν tr(W^{1/2})
    subject to vᵢᵀWvᵢ ≤ 1, i = 1, ..., p,
               W ⪰ 0.

By optimizing over ν > 0, we obtain

    maximize   (tr(W^{1/2}))²
    subject to vᵢᵀWvᵢ ≤ 1, i = 1, ..., p,
               W ⪰ 0.

5.11 Derive a dual problem for

    minimize   Σᵢ₌₁ᴺ ‖Aᵢx + bᵢ‖₂ + (1/2)‖x − x₀‖₂².

The problem data are Aᵢ ∈ R^{mᵢ×n}, bᵢ ∈ R^{mᵢ}, and x₀ ∈ Rⁿ. First introduce new variables yᵢ ∈ R^{mᵢ} and equality constraints yᵢ = Aᵢx + bᵢ.

Solution. The Lagrangian is

    L(x, y, z₁, ..., z_N) = Σᵢ₌₁ᴺ ‖yᵢ‖₂ + (1/2)‖x − x₀‖₂² − Σᵢ₌₁ᴺ zᵢᵀ(yᵢ − Aᵢx − bᵢ).
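Similarly, the inner minimization in the A-optimal derivation above, inf_X (tr(X⁻¹) + tr(ZX)) = 2 tr(Z^{1/2}) at X = Z^{−1/2}, can be checked numerically; a numpy sketch on an arbitrary positive definite Z:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
M = rng.standard_normal((n, n))
Z = M @ M.T + np.eye(n)                  # positive definite Z

# matrix square root via the eigendecomposition of Z
w, V = np.linalg.eigh(Z)
Z_half = V @ np.diag(np.sqrt(w)) @ V.T

def obj(X):  # tr X^{-1} + tr(Z X)
    return np.trace(np.linalg.inv(X)) + np.trace(Z @ X)

X_star = np.linalg.inv(Z_half)           # stationary point: X = Z^{-1/2}
val_star = 2 * np.trace(Z_half)          # claimed minimum: 2 tr Z^{1/2}
assert abs(obj(X_star) - val_star) < 1e-8

for _ in range(100):
    E = 0.1 * rng.standard_normal((n, n))
    X = X_star + 0.5 * (E + E.T)
    if np.all(np.linalg.eigvalsh(X) > 0):
        assert obj(X) >= val_star - 1e-9
```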

5.13 Lagrangian relaxation of Boolean LP. A Boolean linear program is an optimization problem of the form

    minimize   cᵀx
    subject to Ax ⪯ b,
               xᵢ ∈ {0, 1}, i = 1, ..., n,

and is, in general, very difficult to solve. In exercise 4.15 we studied the LP relaxation of this problem,

    minimize   cᵀx
    subject to Ax ⪯ b,                                      (5.107)
               0 ≤ xᵢ ≤ 1, i = 1, ..., n,

which is far easier to solve, and gives a lower bound on the optimal value of the Boolean LP. In this problem we derive another lower bound for the Boolean LP, and work out the relation between the two lower bounds.

(a) Lagrangian relaxation. The Boolean LP can be reformulated as the problem

    minimize   cᵀx
    subject to Ax ⪯ b,
               xᵢ(1 − xᵢ) = 0, i = 1, ..., n,

which has quadratic equality constraints. Find the Lagrange dual of this problem. The optimal value of the dual problem (which is convex) gives a lower bound on the optimal value of the Boolean LP. This method of finding a lower bound on the optimal value is called Lagrangian relaxation.

(b) Show that the lower bound obtained via Lagrangian relaxation, and via the LP relaxation (5.107), are the same. Hint. Derive the dual of the LP relaxation (5.107).

Solution.

(a) The Lagrangian is

    L(x, μ, ν) = cᵀx + μᵀ(Ax − b) − νᵀx + xᵀ diag(ν) x
               = xᵀ diag(ν) x + (c + Aᵀμ − ν)ᵀx − bᵀμ.

Minimizing over x gives the dual function

    g(μ, ν) = −bᵀμ − (1/4) Σᵢ₌₁ⁿ (cᵢ + aᵢᵀμ − νᵢ)²/νᵢ   if ν ⪰ 0,
              −∞                                         otherwise,

where aᵢ is the ith column of A, and we adopt the convention that a²/0 = ∞ if a ≠ 0, and a²/0 = 0 if a = 0. The resulting dual problem is

    maximize   −bᵀμ − (1/4) Σᵢ₌₁ⁿ (cᵢ + aᵢᵀμ − νᵢ)²/νᵢ
    subject to μ ⪰ 0, ν ⪰ 0.

In order to simplify this dual, we optimize analytically over ν, by noting that

    sup_{νᵢ≥0} ( −(cᵢ + aᵢᵀμ − νᵢ)²/(4νᵢ) ) = cᵢ + aᵢᵀμ   if cᵢ + aᵢᵀμ ≤ 0,
                                               0           if cᵢ + aᵢᵀμ ≥ 0,
                                             = min{0, cᵢ + aᵢᵀμ}.

This allows us to eliminate ν from the dual problem, and simplify it as

    maximize   −bᵀμ + Σᵢ₌₁ⁿ min{0, cᵢ + aᵢᵀμ}
    subject to μ ⪰ 0.
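The simplified dual can be compared numerically with the LP relaxation (5.107), anticipating the result of part (b) that the two bounds coincide. A scipy sketch, in which the simplified dual is rewritten as an LP via epigraph variables tᵢ = min{0, cᵢ + aᵢᵀμ} (the random instance is an arbitrary choice):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
m, n = 8, 5
A = rng.standard_normal((m, n))
b = A @ (0.5 * np.ones(n)) + rng.random(m)   # x = 0.5*1 is strictly feasible
c = rng.standard_normal(n)

# LP relaxation (5.107): minimize c^T x  s.t.  A x <= b,  0 <= x <= 1
lp = linprog(c, A_ub=A, b_ub=b, bounds=[(0, 1)] * n)

# simplified Lagrangian-relaxation dual as an LP:
#   maximize -b^T mu + 1^T t   s.t.  t <= 0,  t_i - a_i^T mu <= c_i,  mu >= 0
obj = np.concatenate([b, -np.ones(n)])       # minimize b^T mu - 1^T t
A_ub = np.hstack([-A.T, np.eye(n)])          # rows: -a_i^T mu + t_i <= c_i
bounds = [(0, None)] * m + [(None, 0)] * n
dual = linprog(obj, A_ub=A_ub, b_ub=c, bounds=bounds)

print(lp.fun, -dual.fun)                     # the two lower bounds coincide
```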

(b) We follow the hint. The Lagrangian and dual function of the LP relaxation are

    L(x, u, v, w) = cᵀx + uᵀ(Ax − b) − vᵀx + wᵀ(x − 1)
                  = (c + Aᵀu − v + w)ᵀx − bᵀu − 1ᵀw,

    g(u, v, w) = −bᵀu − 1ᵀw   if Aᵀu − v + w + c = 0,
                 −∞           otherwise.

The dual problem is

    maximize   −bᵀu − 1ᵀw
    subject to Aᵀu − v + w + c = 0,
               u ⪰ 0, v ⪰ 0, w ⪰ 0,

which is equivalent to the Lagrangian relaxation problem derived above: eliminating v (i.e., v = c + Aᵀu + w ⪰ 0) and then optimizing over w ⪰ 0 gives wᵢ = max{0, −(cᵢ + aᵢᵀu)}, so −1ᵀw = Σᵢ min{0, cᵢ + aᵢᵀu}. We conclude that the two relaxations give the same value.

A penalty method for equality constraints. We consider the problem

    minimize   f₀(x)
    subject to Ax = b,                                      (5.108)

where f₀ : Rⁿ → R is convex and differentiable, and A ∈ R^{m×n} with rank A = m. In a quadratic penalty method, we form an auxiliary function

    φ(x) = f₀(x) + α‖Ax − b‖₂²,

where α > 0 is a parameter. This auxiliary function consists of the objective plus the penalty term α‖Ax − b‖₂². The idea is that a minimizer of the auxiliary function, x̃, should be an approximate solution of the original problem. Intuition suggests that the larger the penalty weight α, the better the approximation x̃ to a solution of the original problem. Suppose x̃ is a minimizer of φ. Show how to find, from x̃, a dual feasible point for (5.108). Find the corresponding lower bound on the optimal value of (5.108).

Solution. If x̃ minimizes φ, then

    ∇f₀(x̃) + 2αAᵀ(Ax̃ − b) = 0.

Therefore x̃ is also a minimizer of

    f₀(x) + νᵀ(Ax − b),

where ν = 2α(Ax̃ − b). Therefore ν is dual feasible with

    g(ν) = inf_x (f₀(x) + νᵀ(Ax − b)) = f₀(x̃) + 2α‖Ax̃ − b‖₂².

Therefore, for all x that satisfy Ax = b,

    f₀(x) ≥ f₀(x̃) + 2α‖Ax̃ − b‖₂².

Consider the problem

    minimize   f₀(x)
    subject to fᵢ(x) ≤ 0, i = 1, ..., m,                    (5.109)
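For a concrete instance of the penalty method, take a convex quadratic f₀, for which both the penalty minimizer x̃ and the exact constrained solution are available in closed form; a numpy sketch (the problem data are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 6, 3
M = rng.standard_normal((n, n))
P = M @ M.T + np.eye(n)                 # f0(x) = 0.5 x^T P x + q^T x, convex
q = rng.standard_normal(n)
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

f0 = lambda x: 0.5 * x @ P @ x + q @ x

# exact solution of  minimize f0(x) s.t. A x = b  from its KKT system
K = np.block([[P, A.T], [A, np.zeros((m, m))]])
sol = np.linalg.solve(K, np.concatenate([-q, b]))
p_star = f0(sol[:n])

# quadratic penalty: x_t minimizes f0(x) + alpha ||A x - b||_2^2
for alpha in [1, 10, 100, 1000]:
    x_t = np.linalg.solve(P + 2 * alpha * A.T @ A, -q + 2 * alpha * A.T @ b)
    # lower bound g(nu) with nu = 2 alpha (A x_t - b)
    bound = f0(x_t) + 2 * alpha * np.sum((A @ x_t - b) ** 2)
    assert bound <= p_star + 1e-7
    print(alpha, p_star - bound)        # gap shrinks as alpha grows
```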

(c) The problem can be expressed as

    minimize   xᵀΣx
    subject to pᵀx ≥ r_min, 1ᵀx = 1, x ⪰ 0,
               (n/20)t + 1ᵀu ≤ 0.9, λ1 + u ⪰ 0, u ⪰ 0,

with variables x, u, t, v.

5.20 Dual of channel capacity problem. Derive a dual for the problem

    minimize   cᵀx + Σᵢ₌₁ᵐ yᵢ log yᵢ
    subject to Px = y,
               x ⪰ 0, 1ᵀx = 1,

where P ∈ R^{m×n} has nonnegative elements, and its columns add up to one (i.e., Pᵀ1 = 1). The variables are x ∈ Rⁿ, y ∈ Rᵐ. (For cⱼ = −Σᵢ₌₁ᵐ pᵢⱼ log pᵢⱼ, the optimal value is, up to a factor log 2, the negative of the capacity of a discrete memoryless channel with channel transition probability matrix P; see exercise 4.57.) Simplify the dual problem as much as possible.

Solution. The Lagrangian is

    L(x, y, λ, ν, z) = cᵀx + Σᵢ yᵢ log yᵢ − λᵀx + ν(1ᵀx − 1) + zᵀ(Px − y)
                     = (c − λ + ν1 + Pᵀz)ᵀx + Σᵢ₌₁ᵐ yᵢ log yᵢ − zᵀy − ν.

The minimum over x is bounded below if and only if

    c − λ + ν1 + Pᵀz = 0.

To minimize over y, we set the derivative with respect to yᵢ equal to zero, which gives log yᵢ + 1 − zᵢ = 0, and conclude that

    inf_{yᵢ≥0} (yᵢ log yᵢ − zᵢyᵢ) = −e^{zᵢ−1}.

The dual function is

    g(λ, ν, z) = −Σᵢ₌₁ᵐ e^{zᵢ−1} − ν   if c − λ + ν1 + Pᵀz = 0,
                 −∞                    otherwise.

The dual problem is

    maximize   −Σᵢ₌₁ᵐ exp(zᵢ − 1) − ν
    subject to Pᵀz + ν1 + c ⪰ 0.

This can be simplified by introducing a variable w = z + ν1 (and using the fact that 1 = Pᵀ1), which gives

    maximize   −Σᵢ₌₁ᵐ exp(wᵢ − ν − 1) − ν
    subject to Pᵀw + c ⪰ 0.

Finally we can easily maximize the objective function over ν by setting the derivative equal to zero (the optimal value is ν = log(Σᵢ e^{wᵢ−1})), which leads to

    maximize   −log (Σᵢ₌₁ᵐ exp wᵢ)
    subject to Pᵀw ⪰ −c.

This is a geometric program, in convex form, with linear inequality constraints (i.e., monomial inequality constraints in the associated geometric program).
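The simplified dual can be sanity-checked on a binary symmetric channel, whose capacity is known in closed form; a scipy sketch (the SLSQP solver choice, the starting point, and the crossover probability are our choices, not from the original solution):

```python
import numpy as np
from scipy.optimize import minimize

# binary symmetric channel with crossover probability 0.1
p = 0.1
P = np.array([[1 - p, p], [p, 1 - p]])              # column-stochastic
c = -np.array([(P[:, j] * np.log(P[:, j])).sum() for j in range(2)])

# simplified dual: maximize -log(sum_i exp w_i)  s.t.  P^T w + c >= 0
res = minimize(lambda w: np.log(np.exp(w).sum()), x0=np.zeros(2),
               constraints=[{"type": "ineq", "fun": lambda w: P.T @ w + c}],
               method="SLSQP")
d_star = -res.fun

# known optimal value: -(capacity in nats) = -(log 2 - binary entropy of p)
p_star = -(np.log(2) + p * np.log(p) + (1 - p) * np.log(1 - p))
print(d_star, p_star)   # agree: strong duality holds (Slater is satisfied)
```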

There are four solutions:

    ν = −3.15,   x = (0.16, 0.47, 0.87),
    ν = 0.22,    x = (0.36, 0.82, 0.45),
    ν = 1.89,    x = (0.90, 0.35, 0.26),
    ν = 4.04,    x = (−0.97, 0.20, 0.17).

(c) ν* is the largest of the four values: ν* = 4.04. This can be seen several ways. The simplest way is to compare the objective values of the four solutions x, which are f₀(x) = 1.17, f₀(x) = 0.67, and f₀(x) = 0.56 for the first three, with the last solution attaining the smallest value. We can also evaluate the dual objective at the four candidate values for ν. Finally we can note that we must have

    ∇²f₀(x*) + ν* ∇²f₁(x*) ⪰ 0,

because x* is a minimizer of L(x, ν*); among the four candidate values, this condition holds only for the largest ν.

5.30 Derive the KKT conditions for the problem

    minimize   tr X − log det X
    subject to Xs = y,

with variable X ∈ Sⁿ and domain Sⁿ₊₊. y ∈ Rⁿ and s ∈ Rⁿ are given, with sᵀy = 1. Verify that the optimal solution is given by

    X* = I + yyᵀ − (1/(sᵀs)) ssᵀ.

Solution. We introduce a Lagrange multiplier z ∈ Rⁿ for the equality constraint. The KKT optimality conditions are

    X ≻ 0,   Xs = y,   X⁻¹ = I + (1/2)(zsᵀ + szᵀ).          (5.30.A)

We first determine z from the condition Xs = y. Multiplying the gradient equation on the right with y gives

    s = X⁻¹y = y + (1/2)(z + (zᵀy)s).                       (5.30.B)

By taking the inner product with y on both sides and simplifying, we get zᵀy = 1 − yᵀy. Substituting in (5.30.B) we get

    z = −2y + (1 + yᵀy)s,

and substituting this expression for z in (5.30.A) gives

    X⁻¹ = I + (1/2)(−2ysᵀ − 2syᵀ + 2(1 + yᵀy)ssᵀ) = I + (1 + yᵀy)ssᵀ − ysᵀ − syᵀ.
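This expression, together with the claimed X* and the derived multiplier z, can be checked numerically before the algebraic verification that follows; a numpy sketch (s and y are random, with y constructed so that sᵀy = 1 holds exactly):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 5
s = rng.standard_normal(n)
v = rng.standard_normal(n)
v -= (s @ v) / (s @ s) * s          # make v orthogonal to s
y = s / (s @ s) + v                 # then s^T y = 1 exactly

X = np.eye(n) + np.outer(y, y) - np.outer(s, s) / (s @ s)
z = -2 * y + (1 + y @ y) * s        # multiplier from the solution above

assert np.allclose(X @ s, y)                           # primal feasibility
assert np.allclose(np.linalg.inv(X),                   # gradient condition (5.30.A)
                   np.eye(n) + 0.5 * (np.outer(z, s) + np.outer(s, z)))
assert np.all(np.linalg.eigvalsh(X) > 0)               # X in S^n_++
print("KKT conditions hold at X = I + yy^T - ss^T/(s^T s)")
```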

Finally we verify that this is the inverse of the matrix X given above:

    (I + (1 + yᵀy)ssᵀ − ysᵀ − syᵀ) X
      = (I + yyᵀ − (1/sᵀs)ssᵀ) + (1 + yᵀy)(ssᵀ + syᵀ − ssᵀ)
        − (ysᵀ + yyᵀ − ysᵀ) − (syᵀ + (yᵀy)syᵀ − (1/sᵀs)ssᵀ)
      = I.

To complete the solution, we prove that X ⪰ 0. An easy way to see this is to note that

    X = I + yyᵀ − ssᵀ/(sᵀs) = (I + ysᵀ/‖s‖₂ − ssᵀ/(sᵀs)) (I + ysᵀ/‖s‖₂ − ssᵀ/(sᵀs))ᵀ.

5.31 Supporting hyperplane interpretation of KKT conditions. Consider a convex problem with no equality constraints,

    minimize   f₀(x)
    subject to fᵢ(x) ≤ 0, i = 1, ..., m.

Assume that x* ∈ Rⁿ and λ* ∈ Rᵐ satisfy the KKT conditions

    fᵢ(x*) ≤ 0,      i = 1, ..., m,
    λᵢ* ≥ 0,         i = 1, ..., m,
    λᵢ* fᵢ(x*) = 0,  i = 1, ..., m,
    ∇f₀(x*) + Σᵢ₌₁ᵐ λᵢ* ∇fᵢ(x*) = 0.

Show that

    ∇f₀(x*)ᵀ(x − x*) ≥ 0

for all feasible x. In other words the KKT conditions imply the simple optimality criterion of §4.2.3.

Solution. Suppose x is feasible. Since the functions fᵢ are convex and fᵢ(x) ≤ 0, we have

    0 ≥ fᵢ(x) ≥ fᵢ(x*) + ∇fᵢ(x*)ᵀ(x − x*),   i = 1, ..., m.

Using λᵢ* ≥ 0, we conclude that

    0 ≥ Σᵢ₌₁ᵐ λᵢ* (fᵢ(x*) + ∇fᵢ(x*)ᵀ(x − x*))
      = Σᵢ₌₁ᵐ λᵢ* fᵢ(x*) + Σᵢ₌₁ᵐ λᵢ* ∇fᵢ(x*)ᵀ(x − x*)
      = −∇f₀(x*)ᵀ(x − x*).

In the last line, we use the complementary slackness condition λᵢ* fᵢ(x*) = 0, and the last KKT condition. This shows that ∇f₀(x*)ᵀ(x − x*) ≥ 0, i.e., ∇f₀(x*) defines a supporting hyperplane to the feasible set at x*.

Perturbation and sensitivity analysis

5.32 Optimal value of perturbed problem. Let f₀, f₁, ..., fₘ : Rⁿ → R be convex. Show that the function

    p*(u, v) = inf{f₀(x) | x ∈ D, fᵢ(x) ≤ uᵢ, i = 1, ..., m, Ax − b = v}

is convex in (u, v).
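The supporting-hyperplane property of exercise 5.31 can be illustrated on a problem whose KKT point is known in closed form: projecting a point a with ‖a‖₂ > 1 onto the unit ball, i.e., minimize ‖x − a‖₂² subject to ‖x‖₂² − 1 ≤ 0, which has x* = a/‖a‖₂ and λ* = ‖a‖₂ − 1. A numpy sketch (the data a and the sampling scheme are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(7)

a = np.array([2.0, 1.0])                 # point outside the unit ball
x_star = a / np.linalg.norm(a)
lam = np.linalg.norm(a) - 1.0            # KKT multiplier

grad_f0 = 2 * (x_star - a)
grad_f1 = 2 * x_star
assert np.allclose(grad_f0 + lam * grad_f1, 0)   # stationarity

# supporting hyperplane: grad f0(x*)^T (x - x*) >= 0 for all feasible x
for _ in range(1000):
    x = rng.standard_normal(2)
    x *= rng.random() / np.linalg.norm(x)        # random point with ||x|| <= 1
    assert grad_f0 @ (x - x_star) >= -1e-12
print("supporting hyperplane property verified")
```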
