
CDS270, Maryam Fazel
Lecture 2: Topics from Optimization and Duality

Outline:
- motivation: Network Utility Maximization (NUM) problem; application: Internet congestion control
- basic unconstrained minimization
- duality theory
- Lagrange dual of NUM, decentralization

Motivation

network utility maximization (NUM) problem: consider a network with S sources (users), each sending one flow at rate x_s through its path L(s), with utility U_s(x_s). there are L links, each with capacity c_l, shared by the set S(l) of sources.

[figure: example network with four flows x_1, ..., x_4 sharing links of capacities c_1, ..., c_4]

    maximize    \sum_{s=1}^S U_s(x_s)
    subject to  \sum_{s \in S(l)} x_s \le c_l,  l = 1, ..., L

optimization variable: x \in R^S (domain of U_s is x_s \ge 0)
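for concreteness, a minimal numerical sketch of this problem. the instance below (routing matrix R, capacities c) is made up, log utilities are the standard proportional-fairness choice rather than anything prescribed here, and scipy's general-purpose SLSQP solver stands in for a convex solver:

    import numpy as np
    from scipy.optimize import minimize

    # hypothetical instance: 3 sources, 2 links; R[l, s] = 1 if source s uses link l
    R = np.array([[1, 1, 0],
                  [1, 0, 1]])
    c = np.array([1.0, 2.0])          # link capacities c_l

    # NUM with U_s(x_s) = log(x_s): maximize sum_s log(x_s)  s.t.  R x <= c
    res = minimize(lambda x: -np.sum(np.log(x)),   # minimize negative total utility
                   x0=0.1 * np.ones(3),
                   method="SLSQP",
                   bounds=[(1e-6, None)] * 3,      # x_s >= 0, kept away from log(0)
                   constraints=[{"type": "ineq", "fun": lambda x: c - R @ x}])
    print(res.x)   # optimal rates x*; roughly (0.42, 0.58, 1.58) for this instance

this centralized solve is the benchmark; the point of the lecture is that the same x* can be found in a decentralized way via the dual (see the simulation at the end).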

when the U_s(x_s) are concave functions, NUM is a convex optimization problem

TCP/AQM congestion control

- seminal paper [Kelly, Maulloo, Tan '98]: a large class of congestion control mechanisms can be interpreted as distributed algorithms that solve NUM and its dual
- x_s: source rate, updated by TCP (Transmission Control Protocol), e.g. Reno, Vegas
- \lambda_l: link congestion measure, or price, updated by AQM (Active Queue Management), e.g. RED, DropTail
- e.g., TCP Reno uses packet loss as its congestion measure, TCP Vegas uses queueing delay

NUM is useful as a framework to understand network equilibrium and dynamics; it is also a central problem in economics

an important foundation of data networks: the layered structure (APP/TCP/IP/MAC/PHY); different parts of NUM relate to different layers, e.g., varying R, c would involve the IP and PHY layers... more later

Topics we cover

- basic unconstrained minimization, gradient descent
- duality theory: the Lagrange dual problem and its properties, strong duality and Slater's condition, interpretations of duality, KKT optimality conditions, theorems of alternatives
- Lagrange dual of NUM, decentralization

refs: Boyd & Vandenberghe, Convex Optimization (boyd/cvxbook), Chapter 5; Bertsekas, Nonlinear Programming, Chapter 1

Unconstrained minimization

    minimize  f(x)

f : R^n \to R, convex, differentiable

- minimizing sequence: x^{(k)}, with f(x^{(k)}) \to f^* as k \to \infty
- optimality condition: \nabla f(x^*) = 0, a set of nonlinear equations; usually no analytical solution
- more generally, if \nabla^2 f(x) \succeq mI with m > 0, then

    f(x) - f^* \le (1/2m) \|\nabla f(x)\|^2

... yields a stopping criterion (if you know m)
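a quick numerical check of this stopping-criterion bound, on a made-up strongly convex quadratic (for f(x) = (1/2) x^T Q x with Q \succ 0, the constant m is the smallest eigenvalue of Q and f^* = 0):

    import numpy as np

    # assumed test function: f(x) = (1/2) x^T Q x, Q > 0, so f* = 0 at x* = 0
    Q = np.array([[3.0, 1.0], [1.0, 2.0]])
    m = np.linalg.eigvalsh(Q).min()        # strong convexity constant: grad^2 f = Q >= m I

    x = np.array([1.5, -2.0])              # arbitrary test point
    f_gap = 0.5 * x @ Q @ x                # f(x) - f*
    bound = np.linalg.norm(Q @ x) ** 2 / (2 * m)   # (1/2m) ||grad f(x)||^2
    print(f_gap <= bound + 1e-12)          # True: the bound holds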

Descent methods

given starting point x \in dom f
repeat
  1. compute a search direction v
  2. line search: choose step size t > 0
  3. update: x := x + t v
until stopping criterion is satisfied

descent method: f(x^{(k+1)}) < f(x^{(k)}); since f is convex, v must be a descent direction: \nabla f(x^{(k)})^T v^{(k)} < 0

choose v (descent direction):
- v^{(k)} = -\nabla f(x^{(k)}) (gradient)
- v^{(k)} = -H^{(k)} \nabla f(x^{(k)}), with H^{(k)} = H^{(k)T} \succ 0
- v^{(k)} = -\nabla^2 f(x^{(k)})^{-1} \nabla f(x^{(k)}) (Newton's)

choose t (step size):
- exact line search: t = argmin_{s>0} f(x + sv)
- backtracking line search (picks a step size that reduces f "enough")
- constant step size

[figure: level sets f(x) = c_1, c_2 with the descent direction v^{(k)} and -\nabla f(x^{(k)}) at x^{(k)}]

(main idea extends to nondifferentiable f using subgradients)

Gradient descent method

given starting point x \in dom f
repeat
  1. compute search direction v = -\nabla f(x)
  2. line search: choose step size t
  3. update: x := x + t v
until stopping criterion is satisfied

converges with exact or backtracking line search

a simple (but conservative) convergence condition for t: suppose f has bounded curvature, \nabla^2 f(x) \preceq MI; then there exists z on the line segment [x, x + tv] such that

    f(x + tv) - f(x) = \nabla f(x)^T (tv) + (1/2)(tv)^T \nabla^2 f(z) (tv)
                     \le -t \|\nabla f(x)\|^2 + (t^2 M / 2) \|\nabla f(x)\|^2,

so -t + t^2 M / 2 < 0, i.e. t < 2/M, guarantees descent.

Example

    minimize  (1/2)(x_1^2 + M x_2^2)

where M > 0; the optimal point is x^* = 0. use exact line search; start at x^{(0)} = (M, 1). (the condition number \kappa = max{M, 1/M} determines the convergence rate)
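a minimal sketch of gradient descent with backtracking line search on this example (numpy only; the backtracking parameters alpha, beta are conventional choices, not from the slides). running it for a few values of M shows the iteration count growing with the condition number:

    import numpy as np

    def grad_descent(f, grad, x, tol=1e-8, alpha=0.3, beta=0.8, max_iter=10000):
        """Gradient descent with backtracking line search (Armijo condition)."""
        for k in range(max_iter):
            g = grad(x)
            if np.linalg.norm(g) < tol:      # stopping criterion on ||grad f||
                return x, k
            v = -g                            # search direction
            t = 1.0
            while f(x + t * v) > f(x) + alpha * t * g @ v:
                t *= beta                     # shrink until sufficient decrease
            x = x + t * v
        return x, max_iter

    M = 100.0
    f = lambda x: 0.5 * (x[0] ** 2 + M * x[1] ** 2)
    grad = lambda x: np.array([x[0], M * x[1]])
    x_star, iters = grad_descent(f, grad, np.array([M, 1.0]))
    print(iters)   # grows with the condition number M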

notes:
- convergence can be very slow, and depends critically on the condition number
- Newton's method (and variations) have much better convergence properties
- however, in networking problems gradient descent proves very useful, since the main concern here is decentralization... (details later)

how about constrained optimization problems? next topic: Lagrangian duality

Lagrangian

standard form optimization problem:

    minimize    f_0(x)
    subject to  f_i(x) \le 0,  i = 1, ..., m

optimal value p^*, domain D; called the primal problem (in the context of duality); (for now) we don't assume convexity

Lagrangian L : R^{n+m} \to R:

    L(x, \lambda) = f_0(x) + \lambda_1 f_1(x) + ... + \lambda_m f_m(x)

the \lambda_i are called Lagrange multipliers or dual variables; the objective is augmented with a weighted sum of the constraint functions

Lagrange dual function

(Lagrange) dual function g : R^m \to R \cup \{-\infty\}:

    g(\lambda) = inf_x L(x, \lambda) = inf_x ( f_0(x) + \lambda_1 f_1(x) + ... + \lambda_m f_m(x) )

the minimum of the augmented cost as a function of the weights; can be -\infty for some \lambda; g is concave (even if the f_i are not convex!)

example: LP

    minimize    c^T x
    subject to  a_i^T x - b_i \le 0,  i = 1, ..., m

    L(x, \lambda) = c^T x + \sum_{i=1}^m \lambda_i (a_i^T x - b_i) = -b^T \lambda + (A^T \lambda + c)^T x

hence g(\lambda) = -b^T \lambda if A^T \lambda + c = 0, and -\infty otherwise

Lower bound property

if \lambda \succeq 0 and x is primal feasible, then g(\lambda) \le f_0(x)

proof: if f_i(x) \le 0 and \lambda_i \ge 0,

    f_0(x) \ge f_0(x) + \sum_i \lambda_i f_i(x) \ge inf_z ( f_0(z) + \sum_i \lambda_i f_i(z) ) = g(\lambda)

f_0(x) - g(\lambda) is called the duality gap of (primal feasible) x and \lambda \succeq 0; minimize over primal feasible x to get, for any \lambda \succeq 0, g(\lambda) \le p^*

\lambda \in R^m is dual feasible if \lambda \succeq 0 and g(\lambda) > -\infty; dual feasible points yield lower bounds on the optimal value!
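to make the lower bound concrete, here is a standard one-variable example (it appears in the Boyd & Vandenberghe exercises, not in these slides): minimize x^2 + 1 subject to (x - 2)(x - 4) \le 0, with p^* = 5 at x = 2. for \lambda \ge 0 the Lagrangian is convex in x, so g(\lambda) has a closed form, and sampling it shows g(\lambda) \le 5 everywhere:

    import numpy as np

    # minimize x^2 + 1  s.t.  (x - 2)(x - 4) <= 0; optimum p* = 5 at x = 2
    p_star = 5.0

    def g(lam):
        # L(x, lam) = (1 + lam) x^2 - 6 lam x + (1 + 8 lam), convex in x for lam >= 0;
        # minimizing over x gives the dual function in closed form
        return (1 + 8 * lam) - 9 * lam ** 2 / (1 + lam)

    lams = np.linspace(0, 10, 1001)
    assert np.all(g(lams) <= p_star + 1e-12)   # lower bound property
    print(g(2.0))                              # = 5: the bound is tight at lam* = 2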

Lagrange dual problem

let's find the best lower bound on p^*:

    maximize    g(\lambda)
    subject to  \lambda \succeq 0

called the (Lagrange) dual problem (associated with the primal problem)

- always a convex problem, even if the primal isn't!
- optimal value denoted d^*
- we always have d^* \le p^* (called weak duality)
- p^* - d^* is the optimal duality gap

Strong duality & constraint qualifications

for convex problems, we (usually) have strong duality: d^* = p^* (strong duality does not hold, in general, for nonconvex problems)

when strong duality holds, a dual optimal \lambda^* serves as a certificate of optimality for the primal optimal point x^*

many conditions or constraint qualifications guarantee strong duality for convex problems

Slater's condition: if the primal problem is strictly feasible (and convex), i.e., there exists x \in relint D with

    f_i(x) < 0,  i = 1, ..., m

then we have p^* = d^*

Dual of linear program

(primal) LP:

    minimize    c^T x
    subject to  Ax \preceq b

n variables, m inequality constraints

the dual of the LP is (after making implicit equality constraints explicit):

    maximize    -b^T \lambda
    subject to  A^T \lambda + c = 0
                \lambda \succeq 0

the dual of an LP is also an LP (in std LP format): m variables, n equality constraints, m nonnegativity constraints

for LP we have strong duality except in one (pathological) case: primal and dual both infeasible (p^* = +\infty, d^* = -\infty)

Dual of quadratic program

primal QP:

    minimize    x^T P x
    subject to  Ax \preceq b

we assume P \succ 0 for simplicity

the Lagrangian is L(x, \lambda) = x^T P x + \lambda^T (Ax - b); setting \nabla_x L(x, \lambda) = 0 yields x = -(1/2) P^{-1} A^T \lambda, hence the dual function is

    g(\lambda) = -(1/4) \lambda^T A P^{-1} A^T \lambda - b^T \lambda

a concave quadratic function; all \lambda \succeq 0 are dual feasible

dual of QP:

    maximize    -(1/4) \lambda^T A P^{-1} A^T \lambda - b^T \lambda
    subject to  \lambda \succeq 0

... another QP
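a quick numerical sanity check of this primal/dual QP pair, with a made-up instance (P, A, b) and scipy's SLSQP as the solver; by strong duality the two optimal values should agree:

    import numpy as np
    from scipy.optimize import minimize

    # hypothetical instance with P > 0; b chosen so that x = 0 is infeasible
    P = np.array([[2.0, 0.5], [0.5, 1.0]])
    A = np.array([[1.0, 1.0], [-1.0, 2.0]])
    b = np.array([-1.0, -2.0])

    # primal QP: minimize x^T P x  s.t.  A x <= b
    primal = minimize(lambda x: x @ P @ x, np.array([0.0, -3.0]),
                      method="SLSQP",
                      constraints=[{"type": "ineq", "fun": lambda x: b - A @ x}])

    # dual QP: maximize -(1/4) lam^T A P^{-1} A^T lam - b^T lam  s.t.  lam >= 0
    M = A @ np.linalg.inv(P) @ A.T
    dual = minimize(lambda lam: 0.25 * lam @ M @ lam + b @ lam,  # minimize -g(lam)
                    np.ones(2), method="SLSQP", bounds=[(0.0, None)] * 2)

    print(primal.fun, -dual.fun)   # p* and d* agree (strong duality)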

Min-max & saddle-point interpretation

can express primal and dual problems in a more symmetric form:

    sup_{\lambda \succeq 0} L(x, \lambda) = sup_{\lambda \succeq 0} ( f_0(x) + \sum_{i=1}^m \lambda_i f_i(x) )
                                          = f_0(x) if f_i(x) \le 0 for all i, and +\infty otherwise

so p^* = inf_x sup_{\lambda \succeq 0} L(x, \lambda); also, by definition, d^* = sup_{\lambda \succeq 0} inf_x L(x, \lambda)

weak duality can be expressed as

    sup_{\lambda \succeq 0} inf_x L(x, \lambda) \le inf_x sup_{\lambda \succeq 0} L(x, \lambda)

strong duality holds when equality holds: it means we can switch the order of minimization over x and maximization over \lambda \succeq 0

if x^* and \lambda^* are primal and dual optimal and strong duality holds, they form a saddle-point for the Lagrangian (the converse is also true)

Economic (price) interpretation

    minimize    f_0(x)
    subject to  f_i(x) \le 0,  i = 1, ..., m

f_0(x) is the cost of operating a firm at operating condition x; the constraints give resource limits

suppose:
- the firm can violate f_i(x) \le 0 by paying an additional cost of \lambda_i (in dollars per unit violation), i.e., it incurs cost \lambda_i f_i(x) if f_i(x) > 0
- it can sell any unused portion of the ith constraint at the same price, i.e., gain profit -\lambda_i f_i(x) if f_i(x) < 0

total cost to the firm to operate at x, at constraint prices \lambda_i:

    L(x, \lambda) = f_0(x) + \sum_{i=1}^m \lambda_i f_i(x)

interpretations:
- dual function: g(\lambda) = inf_x L(x, \lambda) is the optimal cost to the firm at constraint prices \lambda_i
- weak duality: the cost can be lowered if the firm is allowed to pay for violated constraints (and get paid for non-tight ones)
- duality gap: the advantage to the firm under this scenario
- strong duality: \lambda^* gives prices for which the firm has no advantage in being allowed to violate constraints... the dual optimal \lambda^* are called shadow prices for the original problem

Geometric interpretation of duality

consider the set

    A = { (u, t) \in R^{m+1} | \exists x : f_i(x) \le u_i, f_0(x) \le t }

A is convex if the f_i are

for \lambda \succeq 0,

    g(\lambda) = inf { [\lambda; 1]^T [u; t] | (u, t) \in A }

[figure: the set A with supporting hyperplane t + \lambda^T u = g(\lambda)]

(Idea of) proof of Slater's theorem

claim: problem convex and strictly feasible => strong duality

supporting hyperplane at (0, p^*): there exist \mu_0 \ge 0, \mu \succeq 0, (\mu, \mu_0) \ne 0 such that

    (u, t) \in A  =>  \mu_0 (t - p^*) + \mu^T u \ge 0

Slater's condition: there exists (u, t) \in A with u \prec 0; this implies that all supporting hyperplanes at (0, p^*) are non-vertical (\mu_0 > 0)

strong duality: given a supporting hyperplane with \mu_0 > 0, set \lambda = \mu / \mu_0; then

    p^* \le t + \lambda^T u  for all (u, t) \in A,

so p^* \le g(\lambda); combined with weak duality (g(\lambda) \le p^*), this gives p^* = d^*

[figure: the set A with a non-vertical supporting hyperplane [\lambda; 1] at (0, p^*)]

Sensitivity analysis via duality

define p^*(u) as the optimal value of

    minimize    f_0(x)
    subject to  f_i(x) \le u_i,  i = 1, ..., m

\lambda^* gives a lower bound on p^*(u):

    p^*(u) \ge p^*(0) - \sum_{i=1}^m \lambda_i^* u_i

- if \lambda_i^* is large: u_i < 0 greatly increases p^*
- if \lambda_i^* is small: u_i > 0 does not decrease p^* too much

if p^*(u) is differentiable,

    \lambda_i^* = -\partial p^*(0) / \partial u_i

so \lambda_i^* is the sensitivity of p^* w.r.t. the ith constraint

[figure: epi p^*, with the hyperplane p^*(0) - \lambda^{*T} u supporting it at u = 0]

Complementary slackness

suppose x^*, \lambda^* are primal, dual feasible with zero duality gap (hence, they are primal, dual optimal):

    f_0(x^*) = g(\lambda^*)
             = inf_x ( f_0(x) + \sum_{i=1}^m \lambda_i^* f_i(x) )
             \le f_0(x^*) + \sum_{i=1}^m \lambda_i^* f_i(x^*)
             \le f_0(x^*)

hence \sum_{i=1}^m \lambda_i^* f_i(x^*) = 0; since every term is \le 0,

    \lambda_i^* f_i(x^*) = 0,  i = 1, ..., m

called the complementary slackness condition:
- ith constraint inactive at optimum => \lambda_i^* = 0
- \lambda_i^* > 0 at optimum => ith constraint active at optimum

KKT optimality conditions

suppose the f_i are differentiable and x^*, \lambda^* are (primal, dual) optimal with zero duality gap

by complementary slackness we have

    f_0(x^*) + \sum_i \lambda_i^* f_i(x^*) = inf_x ( f_0(x) + \sum_i \lambda_i^* f_i(x) )

i.e., x^* minimizes L(x, \lambda^*); therefore

    \nabla f_0(x^*) + \sum_i \lambda_i^* \nabla f_i(x^*) = 0

so if x^*, \lambda^* are (primal, dual) optimal with zero duality gap, they satisfy

    f_i(x^*) \le 0
    \lambda_i^* \ge 0
    \lambda_i^* f_i(x^*) = 0
    \nabla f_0(x^*) + \sum_i \lambda_i^* \nabla f_i(x^*) = 0

the Karush-Kuhn-Tucker (KKT) optimality conditions

conversely, if the problem is convex and x, \lambda satisfy KKT, then they are (primal, dual) optimal
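as an illustration, a tiny script checking the four KKT conditions at a claimed primal-dual pair; the problem (minimize x^2 subject to 1 - x \le 0, with x^* = 1, \lambda^* = 2) is a made-up example, not from the slides:

    import numpy as np

    # made-up problem: minimize f0(x) = x^2  s.t.  f1(x) = 1 - x <= 0
    f1 = lambda x: 1.0 - x
    grad_f0 = lambda x: 2.0 * x
    grad_f1 = lambda x: -1.0

    x_star, lam_star = 1.0, 2.0      # claimed primal-dual optimal pair

    checks = [
        f1(x_star) <= 0,                                   # primal feasibility
        lam_star >= 0,                                     # dual feasibility
        np.isclose(lam_star * f1(x_star), 0),              # complementary slackness
        np.isclose(grad_f0(x_star) + lam_star * grad_f1(x_star), 0),  # stationarity
    ]
    print(all(checks))   # True: (x*, lam*) satisfies KKT, hence optimal (convex problem)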

Generalized inequalities

    minimize    f_0(x)
    subject to  f_i(x) \preceq_{K_i} 0,  i = 1, ..., L

the \preceq_{K_i} are generalized inequalities on R^{m_i}; the f_i : R^n \to R^{m_i} are K_i-convex

Lagrangian L : R^n \times R^{m_1} \times ... \times R^{m_L} \to R:

    L(x, \lambda_1, ..., \lambda_L) = f_0(x) + \lambda_1^T f_1(x) + ... + \lambda_L^T f_L(x)

dual function:

    g(\lambda_1, ..., \lambda_L) = inf_x ( f_0(x) + \lambda_1^T f_1(x) + ... + \lambda_L^T f_L(x) )

\lambda_i is dual feasible if \lambda_i \succeq_{K_i^*} 0 and g(\lambda_1, ..., \lambda_L) > -\infty

lower bound property: if x is primal feasible and (\lambda_1, ..., \lambda_L) is dual feasible, then g(\lambda_1, ..., \lambda_L) \le f_0(x) (hence, g(\lambda_1, ..., \lambda_L) \le p^*)

dual problem:

    maximize    g(\lambda_1, ..., \lambda_L)
    subject to  \lambda_i \succeq_{K_i^*} 0,  i = 1, ..., L

- weak duality: d^* \le p^* always
- strong duality: d^* = p^* usually
- Slater condition: if the primal is strictly feasible, i.e., there exists x \in relint D with f_i(x) \prec_{K_i} 0, i = 1, ..., L, then d^* = p^*

Example: semidefinite programming

    minimize    c^T x
    subject to  F_0 + x_1 F_1 + ... + x_n F_n \preceq 0

Lagrangian (multiplier Z = Z^T \in R^{m \times m}):

    L(x, Z) = c^T x + Tr( Z (F_0 + x_1 F_1 + ... + x_n F_n) )

dual function:

    g(Z) = inf_x ( c^T x + Tr( Z (F_0 + x_1 F_1 + ... + x_n F_n) ) )
         = Tr(F_0 Z) if Tr(F_i Z) + c_i = 0, i = 1, ..., n, and -\infty otherwise

dual problem:

    maximize    Tr(F_0 Z)
    subject to  Tr(F_i Z) + c_i = 0,  i = 1, ..., n
                Z = Z^T \succeq 0

strong duality holds if there exists x with F_0 + x_1 F_1 + ... + x_n F_n \prec 0

Theorem of alternatives

apply Lagrange duality theory to the feasibility problem: f_1, ..., f_m convex with dom f_i = R^n; exactly one of the following is true:

1. there exists x with f_i(x) < 0, i = 1, ..., m
2. there exists \lambda \succeq 0, \lambda \ne 0, with g(\lambda) = inf_x ( \lambda_1 f_1(x) + ... + \lambda_m f_m(x) ) \ge 0

called alternatives; use in practice: a \lambda satisfying the 2nd condition proves that f_i(x) < 0 is infeasible

example: f_i(x) = a_i^T x - b_i:
1. there exists x with Ax \prec b
2. there exists \lambda \succeq 0, \lambda \ne 0, with b^T \lambda \le 0, A^T \lambda = 0
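a sketch of how one might search for such a certificate with scipy's linprog; the instance is made up (the inconsistent system x < 0, x > 1), and the normalization 1^T \lambda = 1 is added to rule out \lambda = 0:

    import numpy as np
    from scipy.optimize import linprog

    # made-up inconsistent system: x < 0 and x > 1, i.e. A x < b with
    A = np.array([[1.0], [-1.0]])
    b = np.array([0.0, -1.0])

    # search for a certificate of the 2nd alternative:
    # lam >= 0, 1^T lam = 1 (rules out lam = 0), A^T lam = 0, minimizing b^T lam
    m, n = A.shape
    res = linprog(c=b,
                  A_eq=np.vstack([A.T, np.ones((1, m))]),
                  b_eq=np.concatenate([np.zeros(n), [1.0]]),
                  bounds=[(0, None)] * m)

    if res.status == 0 and res.fun <= 0:
        print("infeasibility certificate lam =", res.x)  # proves no x with Ax < b
    else:
        print("no certificate found; Ax < b is strictly feasible")

here the LP returns lam = (0.5, 0.5), which satisfies A^T lam = 0 and b^T lam = -0.5 <= 0, certifying infeasibility.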

proof. from convex duality: primal problem (variables x, t):

    minimize    t
    subject to  f_i(x) \le t,  i = 1, ..., m

dual problem:

    maximize    g(\lambda)
    subject to  \lambda \succeq 0
                1^T \lambda = 1

Slater's condition is satisfied, hence p^* = d^*; the 1st alternative corresponds to p^* < 0, the 2nd to p^* \ge 0

Dual of NUM

back to the Network Utility Maximization problem we saw before; primal:

    maximize    \sum_s U_s(x_s)
    subject to  Rx \preceq c

where R is the routing matrix (R_{ls} = 1 if source s uses link l, and 0 otherwise)

Lagrangian:

    L(x, \lambda) = \sum_s U_s(x_s) + \sum_l \lambda_l ( c_l - \sum_s R_{ls} x_s )
                  = \sum_s [ U_s(x_s) - ( \sum_l R_{ls} \lambda_l ) x_s ] + \sum_l c_l \lambda_l

    D(\lambda) = max_x L(x, \lambda) = \sum_s max_{x_s} L_s(x_s, \lambda^s) + \sum_l c_l \lambda_l

where \lambda^s = \sum_l R_{ls} \lambda_l is the end-to-end path price for source s

dual:

    minimize    \sum_s max_{x_s \ge 0} ( U_s(x_s) - \lambda^s x_s ) + \sum_l c_l \lambda_l
    subject to  \lambda \succeq 0

dual-based distributed algorithm

one way to solve iteratively: use (a variation on) the gradient method for the dual

- additivity of the total utility and of the flow constraints leads to a decomposition into individual source terms: a form of dual decomposition
- this decomposition is called "horizontal": across the users in the network; each source can keep its utility private
- each source only needs to know \lambda^s to solve its subproblem max_{x_s \ge 0} ( U_s(x_s) - \lambda^s x_s )
- the optimal prices \lambda^* serve as a coordinating signal that aligns individual (selfish) optimality with social optimality

source algorithm: a single-variable optimization,

    x_s^*(\lambda^s) = argmax_{x_s \ge 0} [ U_s(x_s) - ( \sum_l R_{ls} \lambda_l(t) ) x_s ],  for all s

if the U_s are strictly concave and twice continuously differentiable,

    x_s^*(\lambda^s) = (U_s')^{-1}(\lambda^s)

called the demand function in microeconomics... more later

link algorithm: if the U_s are strictly concave and twice continuously differentiable,

    \partial D / \partial \lambda_l (\lambda(k)) = c_l - \sum_s R_{ls} x_s^*(\lambda(k))

use the gradient projection method:

    \lambda_l(k+1) = [ \lambda_l(k) - \alpha(k) ( c_l - \sum_s R_{ls} x_s^*(\lambda(k)) ) ]^+,  for all l

\alpha(k) is the step size; certain choices guarantee \lambda(k) \to \lambda^* and x^*(\lambda(k)) \to x^*

- interpretation: balancing supply and demand through pricing
- the link update is decentralized, each link using only local information
- a model for source-link dynamics
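putting the source and link updates together, a minimal simulation sketch of this dual algorithm on the made-up log-utility instance used earlier (U_s = log x_s, so the demand function is x_s^*(\lambda^s) = 1/\lambda^s; the constant step size \alpha is an assumption, not a choice prescribed by the slides):

    import numpy as np

    # same hypothetical instance as before: 3 sources, 2 links
    R = np.array([[1, 1, 0],
                  [1, 0, 1]])
    c = np.array([1.0, 2.0])
    alpha = 0.05                  # assumed constant step size, small enough here

    lam = np.ones(2)              # initial link prices
    for k in range(5000):
        path_price = np.maximum(R.T @ lam, 1e-8)  # lam^s = sum_l R_ls lam_l (guarded)
        x = 1.0 / path_price      # source update: demand function for U_s = log
        # link update: projected gradient step on the dual
        lam = np.maximum(lam - alpha * (c - R @ x), 0.0)

    print(x, R @ x)   # rates match the centralized solve from earlier; R x* is near c

note that each source uses only its own path price and each link uses only its own traffic, yet the iterates converge to the same x* as the centralized solver: the decentralization promised at the start of the lecture.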
